<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article xml:lang="EN" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" article-type="review-article">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Psychol.</journal-id>
<journal-title>Frontiers in Psychology</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Psychol.</abbrev-journal-title>
<issn pub-type="epub">1664-1078</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.3389/fpsyg.2023.1298235</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Psychology</subject>
<subj-group>
<subject>Review</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Mathematical modeling of human memory</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author" corresp="yes">
<name><surname>Finotelli</surname> <given-names>Paolo</given-names></name>
<xref ref-type="corresp" rid="c001"><sup>&#x0002A;</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/142679/overview"/>
<role content-type="https://credit.niso.org/contributor-roles/conceptualization/"/>
<role content-type="https://credit.niso.org/contributor-roles/investigation/"/>
<role content-type="https://credit.niso.org/contributor-roles/methodology/"/>
<role content-type="https://credit.niso.org/contributor-roles/resources/"/>
<role content-type="https://credit.niso.org/contributor-roles/writing-original-draft/"/>
<role content-type="https://credit.niso.org/contributor-roles/writing-review-editing/"/>
</contrib>
<contrib contrib-type="author" corresp="yes">
<name><surname>Eustache</surname> <given-names>Francis</given-names></name>
<xref ref-type="corresp" rid="c002"><sup>&#x0002A;</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/213350/overview"/>
<role content-type="https://credit.niso.org/contributor-roles/conceptualization/"/>
<role content-type="https://credit.niso.org/contributor-roles/investigation/"/>
<role content-type="https://credit.niso.org/contributor-roles/methodology/"/>
<role content-type="https://credit.niso.org/contributor-roles/supervision/"/>
<role content-type="https://credit.niso.org/contributor-roles/validation/"/>
<role content-type="https://credit.niso.org/contributor-roles/writing-original-draft/"/>
</contrib>
</contrib-group>
<aff><institution>Normandie Univ, UNICAEN, PSL Universit&#x000E9; Paris, EPHE, INSERM, U1077, CHU de Caen, Centre Cyceron, Neuropsychologie et Imagerie de la M&#x000E9;moire Humaine</institution>, <addr-line>Caen</addr-line>, <country>France</country></aff>
<author-notes>
<fn fn-type="edited-by"><p>Edited by: Markus Boeckle, Karl Landsteiner University of Health Sciences, Austria</p></fn>
<fn fn-type="edited-by"><p>Reviewed by: Giorgio Gronchi, University of Florence, Italy</p>
<p>Denis Brouillet, Universit&#x000E9; Paul Val&#x000E9;ry, Montpellier III, France</p></fn>
<corresp id="c001">&#x0002A;Correspondence: Paolo Finotelli <email>finotelli&#x00040;cyceron.fr</email></corresp>
<corresp id="c002">Francis Eustache <email>francis.eustache&#x00040;unicaen.fr</email></corresp>
</author-notes>
<pub-date pub-type="epub">
<day>22</day>
<month>12</month>
<year>2023</year>
</pub-date>
<pub-date pub-type="collection">
<year>2023</year>
</pub-date>
<volume>14</volume>
<elocation-id>1298235</elocation-id>
<history>
<date date-type="received">
<day>21</day>
<month>09</month>
<year>2023</year>
</date>
<date date-type="accepted">
<day>05</day>
<month>12</month>
<year>2023</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#x000A9; 2023 Finotelli and Eustache.</copyright-statement>
<copyright-year>2023</copyright-year>
<copyright-holder>Finotelli and Eustache</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/"><p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</p></license>
</permissions>
<abstract>
<p>The mathematical study of human memory is still an open challenge. Cognitive psychology and neuroscience have contributed greatly to our understanding of how human memory is structured and how it works. Cognitive psychologists have developed experimental paradigms and conceived quantitative measures of performance in memory tasks for both healthy people and patients with memory disorders, but much remains to be done in the mathematical modeling of human memory. There are many ways to model human memory mathematically, for example, by using mathematical analysis, linear algebra, statistics, and artificial neural networks. The aim of this study is to provide the reader with a description of some prominent models, involving mathematical analysis and linear algebra, designed to describe how memory works by predicting the results of psychological experiments. We have ordered the models chronologically and, for each model, we have emphasized what are, in our opinion, its strong and weak points. We are aware that this study covers only part of human memory modeling and that we have made a personal, arguable selection. Nevertheless, our hope is to help scientists model human memory and its diseases.</p></abstract>
<kwd-group>
<kwd>memory</kwd>
<kwd>mathematics</kwd>
<kwd>amnesia</kwd>
<kwd>models</kwd>
<kwd>neuropsychology</kwd>
<kwd>dementia</kwd>
</kwd-group>
<counts>
<fig-count count="2"/>
<table-count count="0"/>
<equation-count count="31"/>
<ref-count count="60"/>
<page-count count="17"/>
<word-count count="14327"/>
</counts>
<custom-meta-wrap>
<custom-meta>
<meta-name>section-at-acceptance</meta-name>
<meta-value>Cognitive Science</meta-value>
</custom-meta>
</custom-meta-wrap>
</article-meta>
</front>
<body>
<sec sec-type="intro" id="s1">
<title>1 Introduction</title>
<p>In neuropsychology, memory is conceived as a complex function made up of several interacting systems. Five major systems are most often differentiated: working memory (or short-term memory), episodic memory, semantic memory, perceptual memory, and procedural memory. These different systems, which make up individual memory, interact with collective memory. Memory makes it possible to record, store, and restore information, but this definition is incomplete in view of its complexity since it forges our identity, constitutes the source of our thoughts, operates back and forth with representations of our personal and collective past, projects them toward an imagined future, builds our life trajectory, and participates in the regulation of our social relations and our decision-making. Amnesic syndromes as well as dementia syndromes have been the main sources of inference to differentiate several forms of memory, by highlighting dissociations between disturbed and preserved memory capacities in these pathologies. Regarding the interactive construction of memory systems and processes, we refer to the Memory NEo-Structural Inter-Systemic model (MNESIS), which is a macromodel based on neuropsychological data. The reader can find all the details in Eustache et al. (<xref ref-type="bibr" rid="B16">2016</xref>).</p>
<sec>
<title>1.1 Working memory</title>
<p>Working memory is the memory system responsible for temporarily maintaining and manipulating information needed to perform activities as diverse as understanding, learning, and reasoning. It consists of two satellite storage systems (the phonological loop and the visuo-spatial notebook), supervised by an attentional component, the central administrator.</p>
<p>The phonological loop is responsible for storing verbal information, manipulating it, and refreshing it. The visuospatial notebook is involved in the storage of spatial and visual information as well as in the formation and manipulation of mental images. The central administrator manages the transfer of information to long-term memory. It relies on an episodic buffer, responsible for the temporary storage of integrated information from different sources, which plays a role in encoding and retrieval in episodic memory. It is thus at the interface between several systems and uses a multidimensional code common to these different systems.</p>
</sec>
<sec>
<title>1.2 Long-term memory</title>
<p>Within long-term memory, episodic memory is the memory of personally experienced events, located in their temporal-spatial context of acquisition. Its fundamental characteristic is to allow the conscious memory of a previous experience: The event itself (what), but also the place (where) and the moment (when) it occurred. The retrieval of a memory in episodic memory gives the impression of reliving the event due to a &#x0201C;mental journey in time&#x0201D; through one&#x00027;s own past, associated with &#x0201C;autonoetic awareness&#x0201D; (or self-awareness).</p>
<p>Semantic memory is the memory of concepts, knowledge about the world, regardless of their context of acquisition. It is associated with &#x0201C;noetic consciousness&#x0201D; or awareness of the existence of objects and various regularities. Semantic memory allows introspective behavior about the world but also includes general knowledge about oneself: personal semantics.</p>
<p>Representations can thus be based on general (semantic type) or specific (episodic type) knowledge. On the contrary, procedural memory makes it possible to acquire skills, with training (over many trials), and to restore them without referring to previous experiences. It is expressed in action and its contents are difficult to verbalize. Procedural memory allows us to perform activities without explicitly remembering the procedures and without awareness of when we learned them.</p>
<p>Another distinction opposes explicit memory and implicit memory. Explicit memory refers to situations in which a subject voluntarily recalls information. On the contrary, implicit memory is brought into play without the subject&#x00027;s knowledge, when a previous experience modifies his performance in a task that does not require his conscious recall. Thus, the fact of seeing an image for the first time facilitates its subsequent identification, including if it is presented in a degraded form. Implicit memory depends on the system of perceptual representations, which corresponds to a perceptual memory and makes it possible to maintain information in memory, even if it is meaningless, and can manifest itself without the knowledge of the subject.</p>
<p>The MNESIS model (Eustache et al., <xref ref-type="bibr" rid="B16">2016</xref>) specifies the interactive functioning of memory systems, which take their place within collective memory, see <xref ref-type="fig" rid="F1">Figure 1</xref>.</p>
<fig id="F1" position="float">
<label>Figure 1</label>
<caption><p>MNESIS, an overall representation of individual memory, and its interface with collective memory. MNESIS represents the five systems of individual memory. The three long-term representation systems (perceptual memory, semantic memory, and episodic memory) are organized hierarchically. Many episodic memories undergo a process of semantization over time. In addition, the phenomena of reviviscence, both conscious and unconscious, are essential for mnesic consolidation, thus underlining the importance of the dynamic and reconstructive nature of memory. This characteristic of memory has as its corollary the modification of the memory trace and the possible formation of false memories. At the center of the MNESIS model, there is the working memory, with the classic components (the central administrator, the phonological loop, and the visuo-spatial notebook) and the episodic buffer, a temporary interface structure that solicits different neurocognitive systems. Depending on the activity in progress, it can regulate the expression of self-awareness in the present or participate in the establishment of a new skill. Procedural memory is presented, with a hierarchy ranging from the support of motor and perceptual-motor skills to that of cognitive skills. The links with perceptual memory are favored for perceptual-motor procedural memory and with declarative systems for cognitive procedural memory. In any case, interactions with representation systems (including working memory) are particularly important during the procedural learning phase. The bonds loosen during the progressive automation of learning (adapted from Eustache et al., <xref ref-type="bibr" rid="B16">2016</xref>).</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fpsyg-14-1298235-g0001.tif"/>
</fig>
</sec>
</sec>
<sec id="s2">
<title>2 Mathematical models of human memory</title>
<p>This section is dedicated to illustrating the most important theoretical mathematical models of human memory present in the literature, which are based on concepts proper to mathematical analysis and linear algebra, such as differential equations and vector and matrix algebra. The literature on mathematical and computational models of memory is vast (see for example, Sun, <xref ref-type="bibr" rid="B55">2008</xref>). Hence, we focus our review only on models whose rationale is underpinned by mathematical analysis and linear algebra. By &#x0201C;analysis&#x0201D; we mean the branch of mathematics dealing with continuous functions, limits, and related theories, such as differentiation, integration, measure, infinite sequences, series, and analytic functions. Differential equations are an important (sub)area of mathematical analysis with many applications in the study of memory, and more broadly of the brain. Linear algebra, by contrast, deals with vectors and matrices and, more generally, with vector spaces and linear transformations. From this perspective, the history of attempts to model memory dates back to the late 1800s and continues to our days. Interestingly, after the approaches of pioneers in the study of memory such as Ribot (<xref ref-type="bibr" rid="B49">1906</xref>) and Ebbinghaus (<xref ref-type="bibr" rid="B14">1913</xref>), there was a period of stalemate, a sort of &#x0201C;memory modeling winter&#x0201D;; the field regained momentum starting from the 1960s, owing to increasing interest and to new computational tools, and it is becoming more and more popular.</p>
<sec>
<title>2.1 Ebbinghaus forgetting curve</title>
<p>The study of higher mental processes through experimentation started in the second half of the 19<sup><italic>th</italic></sup> century with Ebbinghaus; such an approach was in opposition to the popularly held thought of the time. In 1885, Ebbinghaus, in his groundbreaking <italic>Memory. A Contribution to Experimental Psychology</italic> (original title: &#x000DC;ber das Ged&#x000E4;chtnis), described the experiments he conducted to characterize the processes of forgetting (and learning). His experiments represent one of the first attempts to study the mechanisms of forgetting, even though he used himself as the sole subject. Indeed, in his experiment, he memorized lists of three-letter nonsense syllables&#x02013;two consonants with one vowel in the middle. Then, he measured his own capacity to relearn a given list of words after a variety of time periods. He found that forgetting occurs in a systematic manner, beginning rapidly and then leveling off. He plotted out his results, giving rise to the famous <italic>Ebbinghaus forgetting curve</italic>. Ebbinghaus remarked that, first, much of what is forgotten is lost soon after it is originally learned; second, the amount of forgetting eventually levels off.</p>
<p>Many equations have since been proposed to approximate forgetting. For example, in 1985, Loftus (<xref ref-type="bibr" rid="B31">1985</xref>) described a new method for determining the effect of original learning (or any other variable) on forgetting. Loftus tried to answer a major question, i.e., how much forgetting time is required for memory performance to fall from any given level to some lower level? If this time is the same for different degrees of original learning, then forgetting is not affected by the degree of original learning. Conversely, if this time is greater for higher degrees of original learning, then forgetting is slower with higher original learning. Loftus applied his method to a variety of forgetting data, and the outcomes indicated that forgetting is slower for higher degrees of original learning. Loftus supposed that forgetting is characterized by the following assumptions: First, original learning produces some amount of information in memory; the higher the original learning, the greater the amount of information. Second, following learning, the amount of retrievable information decays exponentially over time. Third, performance, i.e., the number of items recalled or recognized, is a linear function of information. If <italic>P</italic> is the performance (e.g., the number of items recalled), which Loftus assumed to be equal to the amount of information at time <italic>t</italic> following learning, then it is possible to summarize the model by means of the following equation:</p>
<disp-formula id="E1"><label>(1)</label><mml:math id="M1"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:mi>P</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mi>&#x003F1;</mml:mi><mml:msup><mml:mrow><mml:mi>e</mml:mi></mml:mrow><mml:mrow><mml:mo>-</mml:mo><mml:mi>&#x003C2;</mml:mi><mml:mi>t</mml:mi></mml:mrow></mml:msup></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>where &#x003F1; represents the number of units of information originally stored in memory, while &#x003C2; is the rate of decay. In conclusion, Loftus remarked that the application of the proposed method to a variety of forgetting data indicated that forgetting is slower for higher degrees of original learning.</p>
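<p>As a minimal numerical sketch (ours, not code from Loftus's paper; parameter names and values are invented for illustration), the assumptions above can be written out directly: exponential decay of the stored information, plus the time needed for performance to fall from its initial level to a fixed criterion, which grows with the degree of original learning.</p>

```python
import math

def performance(t, rho, sigma):
    """Loftus-style exponential forgetting: P(t) = rho * exp(-sigma * t).

    rho   -- units of information originally stored (degree of original learning)
    sigma -- decay rate
    (Names are illustrative stand-ins for the Greek letters in the text.)
    """
    return rho * math.exp(-sigma * t)

def time_to_fall(rho, sigma, criterion):
    """Forgetting time needed for performance to fall from its initial
    level rho to a fixed lower criterion: solve rho*exp(-sigma*t) = criterion."""
    return math.log(rho / criterion) / sigma

# Higher original learning (larger rho) -> longer time to reach the same
# criterion, i.e., slower forgetting in Loftus's sense.
t_low = time_to_fall(rho=10, sigma=0.5, criterion=2)
t_high = time_to_fall(rho=20, sigma=0.5, criterion=2)
assert t_high > t_low
```

<p>The criterion-crossing time grows only logarithmically with the amount originally learned, which is consistent with Loftus's finding that forgetting is slower, but not dramatically so, for higher degrees of original learning.</p>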
<p>In a similar way, 10 years later, in 1995, Wozniak et al. (<xref ref-type="bibr" rid="B60">1995</xref>) proposed perhaps the simplest forgetting curve, an exponential curve described by Equation (2). The main characteristic of this proposal is the existence of two components of long-term memory: retrievability and stability.</p>
<disp-formula id="E2"><label>(2)</label><mml:math id="M2"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:mi>R</mml:mi><mml:mo>=</mml:mo><mml:msup><mml:mrow><mml:mi>e</mml:mi></mml:mrow><mml:mrow><mml:mo>-</mml:mo><mml:mfrac><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:mi>S</mml:mi></mml:mrow></mml:mfrac></mml:mrow></mml:msup></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>where <italic>R</italic> is retrievability (a measure of how easy it is to retrieve a piece of information from memory) and <italic>S</italic> is stability of memory (determines how fast <italic>R</italic> falls over time in the absence of training, testing, or other recall), and <italic>t</italic> is time.</p>
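<p>A short numerical sketch (our own, with an illustrative stability value) makes the role of <italic>S</italic> concrete: it is the time after which retrievability has dropped to 1/e, and the half-life of the memory trace is <italic>S</italic> ln 2.</p>

```python
import math

def retrievability(t, S):
    """Wozniak-style forgetting curve: R = exp(-t / S).
    t -- time since learning; S -- memory stability (same time units)."""
    return math.exp(-t / S)

# Stability S sets the time scale of forgetting: with no rehearsal,
# retrievability falls to 1/e after exactly S time units, and the
# half-life (R = 0.5) is S * ln(2).
S = 10.0  # illustrative value
assert abs(retrievability(S, S) - 1 / math.e) < 1e-12
half_life = S * math.log(2)
assert abs(retrievability(half_life, S) - 0.5) < 1e-12
```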
<p>As a final observation, around the same time that Ebbinghaus developed the forgetting curve, the psychologist Sigmund Freud theorized that people intentionally forget things in order to push bad thoughts and feelings deep into their unconscious, a process he called &#x0201C;repression.&#x0201D; There is debate as to whether (or how often) memory repression really occurs (McNally, <xref ref-type="bibr" rid="B32">2004</xref>).</p>
<sec>
<title>2.1.1 Strong and weak points of Ebbinghaus&#x00027; work on memory</title>
<sec>
<title>2.1.1.1 Strong points</title>
<list list-type="bullet">
<list-item><p>It was a pioneering study.</p></list-item>
<list-item><p>It served as a template for further studies on cognitive abilities and psychological evaluations.</p></list-item>
</list>
</sec>
<sec>
<title>2.1.1.2 Weak points</title>
<list list-type="bullet">
<list-item><p>Ebbinghaus was the only subject in the study, and therefore, its results are not generalizable to the population. In addition, a large bias is to be expected when the researcher is also the sole participant in the experiment.</p></list-item>
<list-item><p>There are other analytical forms of the forgetting curve that could fit the obtained results, for example, the power law (see Wixted and Ebbesen, <xref ref-type="bibr" rid="B59">1991</xref>). Nevertheless, the exponential form has several applications in other brain-related fields, such as complex brain network analysis, where the probability of link formation follows such an analytical form.</p></list-item>
</list>
</sec>
</sec>
<sec>
<title>2.1.2 Mathematical developments</title>
<p>A remarkable development (and implementation, too) of Ebbinghaus&#x00027; theory is the study by Georgiou et al. (<xref ref-type="bibr" rid="B17">2021</xref>). Basically, Georgiou, Katkov, and Tsodyks proposed a model based on strength-dependent retroactive interference between memories: a weaker memory is erased only if a stronger memory is acquired after it. The model results in power-law retention curves with exponents that very slowly decline toward -1, the asymptotic value, for all realistic time lags that can be measured experimentally.</p>
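<p>The interference mechanism can be illustrated with a toy Monte Carlo simulation (our own sketch, not the authors&#x00027; implementation): memory strengths are drawn uniformly at random, and a target memory survives a lag of <italic>n</italic> subsequent memories only if none of them is stronger. Averaging over the target strength gives a retention curve of 1/(<italic>n</italic>&#x0002B;1), a power law whose exponent tends toward &#x02212;1, in line with the behavior described above.</p>

```python
import random

def retention(lag, trials=200_000, rng=random.Random(0)):
    """Fraction of target memories that survive `lag` subsequent memories,
    where a memory is erased only by a later, stronger memory
    (strengths drawn i.i.d. uniform on [0, 1))."""
    survived = 0
    for _ in range(trials):
        target = rng.random()
        # The target survives only if every later memory is weaker.
        if all(rng.random() < target for _ in range(lag)):
            survived += 1
    return survived / trials

# Analytically, P(survive) = integral of s^lag ds over [0, 1] = 1/(lag + 1),
# so the simulated retention curve should track a power law:
for lag in (1, 3, 9):
    assert abs(retention(lag) - 1 / (lag + 1)) < 0.01
```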
</sec>
</sec>
<sec>
<title>2.2 Ribot&#x00027;s law</title>
<p>In 1906, Ribot in his book <italic>Les maladies de la m&#x000E9;moire</italic> described the so-called Ribot&#x00027;s law of retrograde amnesia (actually hypothesized in 1881 by Th&#x000E9;odule Ribot himself). This law states that there is a time gradient in retrograde amnesia, such that recent memories are more likely to be lost than more remote memories. We remark that not all patients with retrograde amnesia show the pattern described by Ribot&#x00027;s law.</p>
<p>In other words, the Ribot gradient is a pattern where memory loss in retrograde amnesia is larger for recent periods than for remote periods. A possible explanation for this gradient lies in the consolidation of memories, which is more prominent in long-term memories. Consolidation is a key concept to explain the gradient in retrograde amnesia. For example, if the hippocampal memory system is damaged, a subject will tend to lose more of their recent than of their remote memories (Kopelman, <xref ref-type="bibr" rid="B30">1989</xref>; Squire, <xref ref-type="bibr" rid="B52">1992</xref>). That is exactly the Ribot gradient! Ribot, basically, suggested that recent memories might be more vulnerable to brain damage than remote memories.</p>
<p>If we assume that the retrieval of memories depends on the hippocampal memory system, then the Ribot gradient can be intuitively interpreted. In this sense, consolidation is a fundamental process. Indeed, through consolidation, memories gradually become stored in the neocortex, giving rise to the corticohippocampal system, making them independent of the hippocampal system (Squire et al., <xref ref-type="bibr" rid="B54">1984</xref>; Squire and Alvarez, <xref ref-type="bibr" rid="B53">1995</xref>). If the hippocampal system is damaged, recent memories are lost because they still depend on that system. By contrast, since old memories have already been stored in the neocortex through consolidation, they are spared. It is possible to provide the analytical form of the Ribot gradient, as shown in Murre et al. (<xref ref-type="bibr" rid="B39">2013</xref>). If we denote by <italic>r</italic><sub>1</sub>(<italic>t</italic>) the intensity of the hippocampal process (as a function of time) and by <italic>r</italic><sub>2</sub>(<italic>t</italic>) that of the neocortical process, then the sum of the intensities of the individual processes <italic>r</italic>(<italic>t</italic>) &#x0003D; <italic>r</italic><sub>1</sub>(<italic>t</italic>)&#x0002B;<italic>r</italic><sub>2</sub>(<italic>t</italic>) represents the total memory intensity (see, for example, the Memory Chain Model; Murre and Chessa, <xref ref-type="bibr" rid="B40">2011</xref>). This superimposition of intensities makes it possible to treat specific pathological cases. For example, a full lesion of the hippocampus at time <italic>t</italic><sub><italic>l</italic></sub> causes the removal of the term <italic>r</italic><sub>1</sub>(<italic>t</italic><sub><italic>l</italic></sub>) from the total intensity <italic>r</italic>(<italic>t</italic><sub><italic>l</italic></sub>). As a consequence, the only remaining term is <italic>r</italic><sub>2</sub>(<italic>t</italic><sub><italic>l</italic></sub>), the neocortical intensity at the time of the lesion, <italic>t</italic><sub><italic>l</italic></sub>, which reflects the result of the consolidation process up to the lesioning time <italic>t</italic><sub><italic>l</italic></sub>. Hence, it follows that the shape of the Ribot gradient with a full hippocampal lesion at time <italic>t</italic><sub><italic>l</italic></sub> is identical to the expression for <italic>r</italic><sub>2</sub>(<italic>t</italic><sub><italic>l</italic></sub>). The predicted shape of these test gradients is, therefore, given by</p>
<disp-formula id="E3"><label>(3)</label><mml:math id="M3"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mi>p</mml:mi></mml:mrow><mml:mrow><mml:mi>R</mml:mi><mml:mi>i</mml:mi><mml:mi>b</mml:mi><mml:mi>o</mml:mi><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mn>1</mml:mn><mml:mo>-</mml:mo><mml:msup><mml:mrow><mml:mi>e</mml:mi></mml:mrow><mml:mrow><mml:mo>-</mml:mo><mml:msub><mml:mrow><mml:mi>r</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:mi>l</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow></mml:msup></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>We remark that tests of retrograde amnesia do not measure intensity directly; rather, they measure recall probability, which is the reason for the symbol <italic>p</italic><sub><italic>Ribot</italic></sub>(<italic>t</italic>): <italic>p</italic> stands for &#x0201C;probability&#x0201D;.</p>
<sec>
<title>2.2.1 Strong and weak points of Ribot&#x00027;s law</title>
<sec>
<title>2.2.1.1 Strong points</title>
<list list-type="bullet">
<list-item><p>Similarly to Ebbinghaus&#x00027; work, Ribot&#x00027;s law was a pioneering and leading study.</p></list-item>
<list-item><p>It served as a template for further studies on cognitive abilities and psychological evaluations, as well as for investigating memory diseases.</p></list-item>
<list-item><p>Many neurodegenerative diseases, including Alzheimer&#x00027;s disease, are also linked to retrograde amnesia and consequently can be explained, at least as a first approximation, by Ribot&#x00027;s law.</p></list-item>
</list>
</sec>
<sec>
<title>2.2.1.2 Weak points</title>
<list list-type="bullet">
<list-item><p>Currently, Ribot&#x00027;s law is not universally accepted as a supporting example for memory consolidation and storage. As a component of the standard model of systems consolidation, it is challenged by the multiple trace theory, which states that the hippocampus is always activated in the storage and retrieval of episodic memory, regardless of memory age.</p></list-item>
<list-item><p>Similarly to what is observed for the Ebbinghaus curve, there are other analytical forms that could explain Ribot&#x00027;s law well (see Wixted and Ebbesen, <xref ref-type="bibr" rid="B59">1991</xref>), even though the exponential form has some properties that are very useful for modeling purposes.</p></list-item>
</list>
</sec>
</sec>
<sec>
<title>2.2.2 Mathematical developments</title>
<p>In our opinion, Murre et al. (<xref ref-type="bibr" rid="B39">2013</xref>) showed a stunning example of an application of Ribot&#x00027;s law to modeling amnesias. Their model assumes that memory can be decomposed into a number of processes that contain memory representations. These processes span a wide range of time scales, from milliseconds (extremely short-term processes) to decades (very long-term processes). A memory representation can be thought of as consisting of one or more traces, which can be viewed as neural pathways, any of which suffices to retrieve the memory. Trace generation is governed randomly: each trace in a process generates traces of its representation in the next higher process, for example, through long-term potentiation (LTP) in the hippocampus (Abraham, <xref ref-type="bibr" rid="B1">2003</xref>) or neocortex (Racine et al., <xref ref-type="bibr" rid="B48">1995</xref>). LTP, a stable facilitation of synaptic potentials after high-frequency synaptic activity, is very prominent in the hippocampus and is a leading candidate memory storage mechanism. We remark that a trace can be overwritten by different traces or by neural noise; in these cases, the trace is lost and, as a consequence, can no longer generate new traces in higher processes. The authors hypothesize that, first, all traces in a process share the same loss probability and, second, higher processes in the chain have lower decline rates. If the hippocampus undergoes a lesion at time <italic>t</italic><sub><italic>l</italic></sub>, then no more memories are formed after that point and no more hippocampus-to-cortex consolidation takes place.</p>
<p>If <italic>r</italic>(<italic>t</italic><sub><italic>l</italic></sub>) denotes the intensity of a particular memory at the time of the lesion, then after <italic>t</italic><sub><italic>l</italic></sub> the memory intensity declines at the neocortical decline rate <italic>a</italic><sub>2</sub>; the equation representing this case is given by</p>
<disp-formula id="E4"><label>(4)</label><mml:math id="M4"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:mi>r</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:mi>l</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:msup><mml:mrow><mml:mi>e</mml:mi></mml:mrow><mml:mrow><mml:mo>-</mml:mo><mml:msub><mml:mrow><mml:mi>a</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:mi>&#x003C4;</mml:mi></mml:mrow></mml:msup></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>where &#x003C4; is the time elapsed since the lesion. Interestingly, the authors introduce the case of a partial lesion of the hippocampus; this means that they leave the size of the lesion as a free parameter. The lesion parameter is denoted as &#x003BB; and ranges from 0 to 1, extremes included. If &#x003BB; &#x0003D; 0, no lesion is present; conversely, if &#x003BB; &#x0003D; 1, there is a complete lesion. In case of a partial lesion, the Ribot gradient is equal to</p>
<disp-formula id="E5"><label>(5)</label><mml:math id="M5"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mi>p</mml:mi></mml:mrow><mml:mrow><mml:mi>R</mml:mi><mml:mi>i</mml:mi><mml:mi>b</mml:mi><mml:mi>o</mml:mi><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mn>1</mml:mn><mml:mo>-</mml:mo><mml:msup><mml:mrow><mml:mi>e</mml:mi></mml:mrow><mml:mrow><mml:mo>-</mml:mo><mml:mrow><mml:mo>[</mml:mo><mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mn>1</mml:mn><mml:mo>-</mml:mo><mml:mi>&#x003BB;</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:msub><mml:mrow><mml:mi>r</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>&#x0002B;</mml:mo><mml:msub><mml:mrow><mml:mi>r</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow><mml:mo>]</mml:mo></mml:mrow></mml:mrow></mml:msup></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>This is the most general form of the model, based on the Ribot gradient, proposed by the authors. Generally, tests of retrograde amnesia provide recall probabilities as a function of elapsed time; such a probability is denoted as <italic>p</italic>(<italic>t</italic>). Mathematically speaking, an observed recall probability <italic>p</italic>(<italic>t</italic>) can be transformed into an intensity <italic>r</italic>(<italic>t</italic>) by taking &#x02212; ln(1 &#x02212; <italic>p</italic>(<italic>t</italic>)), where ln is the natural logarithm.</p>
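The probability-to-intensity transformation and the partial-lesion gradient of Equation (5) can be sketched numerically as follows (a minimal illustration; the function names are our own, and the intensities r1 and r2 are supplied as plain numbers rather than fitted curves):

```python
import math

def intensity_from_recall(p):
    """Transform an observed recall probability p(t) into an
    intensity r(t) = -ln(1 - p(t)), as described in the text."""
    return -math.log(1.0 - p)

def ribot_gradient(r1, r2, lam):
    """Recall probability under a partial hippocampal lesion
    (Equation 5): p = 1 - exp(-[(1 - lam) * r1 + r2]).
    lam = 0 means no lesion; lam = 1 means a complete lesion.
    r1, r2 are the intensities of the two memory components."""
    assert 0.0 <= lam <= 1.0
    return 1.0 - math.exp(-((1.0 - lam) * r1 + r2))

# Round trip: an intensity maps back to the same recall probability.
p = 0.8
r = intensity_from_recall(p)          # -ln(0.2) ≈ 1.609
print(round(1.0 - math.exp(-r), 6))   # 0.8
```

Note that with &#x003BB; = 1 the contribution of r1 vanishes, recovering the complete-lesion case.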
</sec>
</sec>
<sec>
<title>2.3 Atkinson-Shiffrin memory model</title>
<p>The Atkinson-Shiffrin model (also known as the multi-store model or modal model) is a highly influential model of memory proposed in 1968 by Atkinson and Shiffrin (<xref ref-type="bibr" rid="B7">1968</xref>). This model asserts that human memory has three separate components: First, a sensory register, where sensory information enters memory. Second, a short-term store, also called short-term memory (STM), which receives and holds input from both the sensory register and the long-term store. Third, a long-term store, where information which has been rehearsed (explained below) in the short-term store is held indefinitely (see <xref ref-type="fig" rid="F2">Figure 2</xref>).</p>
<fig id="F2" position="float">
<label>Figure 2</label>
<caption><p>The Atkinson-Shiffrin memory model: the flow chart characterizing inputs to the memory system (adapted from Atkinson et al., <xref ref-type="bibr" rid="B6">1967</xref>).</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fpsyg-14-1298235-g0002.tif"/>
</fig>
<sec>
<title>2.3.1 Sensory memory</title>
<p>The sensory memory store has a large capacity but a very brief duration; it encodes information from any of the senses (principally from the visual and auditory systems in humans), and most of the information is lost through decay. The above-mentioned threshold is closely linked to attention. Indeed, attention is the first step in remembering something; if a person&#x00027;s attention is focused on one of the sensory stores, then the data are likely to be transferred to STM (for more details, see for example Goldstein, <xref ref-type="bibr" rid="B19">2019</xref>).</p>
</sec>
<sec>
<title>2.3.2 Short-term memory</title>
<p>If the information passes the selection in the first stage (sensory memory), then it is transferred to the short-term store (also called short-term memory). As with sensory memory, information that enters short-term memory decays and is lost, but information in the short-term store has a longer duration, approximately up to 30 s when it is not being actively rehearsed (Posner, <xref ref-type="bibr" rid="B45">1966</xref>). A key concept in this model is memory rehearsal, a term for the role of repetition in the retention of memories. It involves repeating information over and over in order to get the information processed and stored as a memory.</p>
<p>It should be noted that (continuous) rehearsal acts as a sort of regeneration of the information in the memory trace, thus making it a stronger memory when transferred to the long-term store (see Section 2.3.3). In contrast, if maintenance rehearsal (i.e., the repetition of the information) does not occur, then information is forgotten and lost from short-term memory through the processes of displacement or decay. Once again, a thresholding procedure occurs.</p>
<p>In terms of capacity, the short-term store has a limit to the amount of information that it can hold, quantitatively from 5 to 9 chunks (7 &#x000B1; 2).<xref ref-type="fn" rid="fn0001"><sup>1</sup></xref></p>
</sec>
<sec>
<title>2.3.3 Long-term memory</title>
<p>The long-term memory is, in theory, a sort of unlimited store, where information could have a permanent duration. In the authors&#x00027; model, the information that is stored can be transferred to the short-term store, where it can be manipulated.</p>
<p>Information is postulated to enter the long-term store from the short-term store after the thresholding process. As Atkinson and Shiffrin modeled it, transfer from the short-term store to the long-term store occurs for as long as the information is being attended to in the short-term store. The longer an item is held in short-term memory, the stronger its memory trace will be in long-term memory. Atkinson and Shiffrin based their observations on the studies by Hebb (<xref ref-type="bibr" rid="B20">1961</xref>) and Melton (<xref ref-type="bibr" rid="B33">1963</xref>), which show that repeated rote repetition enhances long-term memory. There is also a connection with Ebbinghaus&#x00027; studies on memory, which show that forgetting increases for items which are studied/repeated fewer times (Ebbinghaus, <xref ref-type="bibr" rid="B14">1913</xref>).</p>
<p>Remarkably, simple rote rehearsal is not the strongest encoding process; indeed, in the authors&#x00027; opinion, linking new information to information which has already made its way into the long-term store is a more efficient process.</p>
<p>The authors used a mathematical description of their proposal. Such a mathematical formalization is well detailed in Atkinson et al. (<xref ref-type="bibr" rid="B6">1967</xref>). In short, the memory buffer may be viewed as a state containing those items which have been selected from the sensory buffer for repeated rehearsal. Once the memory buffer is filled, each new item which enters causes one of the items currently in the buffer to be lost. It is assumed that the series of study items at the start of each experimental session fills the buffer and that the buffer stays filled thereafter. The size of the memory buffer, denoted by <italic>r</italic> and defined as the number of items which can be held simultaneously, depends upon the nature of the items and thus must be estimated for each experiment. It is also assumed that a correct response is given with probability one if an item is in the buffer at the time it is tested. Every item selected by the sensory buffer (namely, every item that passes the thresholding process) is a candidate to enter the memory buffer. The authors assume that the items are examined at the time they enter the sensory buffer. An item&#x00027;s stimulus member can already be in the buffer, or it can not currently be in the buffer. The former case is denoted by the authors as an <italic>O-item</italic> (or &#x0201C;old&#x0201D; item), while the latter as an <italic>N-item</italic> (&#x0201C;new&#x0201D; item). When an O-item is presented for study, it enters the memory buffer with probability one; the corresponding item, which was previously in the buffer, is discarded. When an N-item is presented for study, it enters the buffer with probability &#x003B1;; such a probability is a function of the particular scheme that a subject is using to rehearse the items currently in the buffer. If an N-item enters (an event that occurs with probability &#x003B1;), then some item currently in the buffer is lost. Of course, the probability that an N-item fails to enter the buffer is 1 &#x02212; &#x003B1;; in this case, the buffer does not undergo any change and the item in question decays and is permanently lost from memory.</p>
<p>The memory buffer is arranged as a push-down list. The newest item that enters the buffer is placed in slot <italic>r</italic>, and the item that has remained in the buffer the longest is in slot 1, i.e., the slot where the oldest item is. If an O-item enters slot <italic>r</italic>, the corresponding old copy is lost; the other items then move down one slot if necessary, retaining their former order. When an N-item is presented for study and enters the buffer (with probability &#x003B1;), it is placed in the <italic>r</italic><sup><italic>th</italic></sup> slot. The item currently in slot <italic>j</italic> has a probability &#x003BA;<sub><italic>j</italic></sub> of being discarded (or knocked out, the term used by the authors), where the following normalization condition must hold: &#x003BA;<sub>1</sub> &#x0002B; &#x003BA;<sub>2</sub> &#x0002B; &#x003BA;<sub>3</sub> &#x0002B; ... &#x0002B; &#x003BA;<sub><italic>j</italic></sub> &#x0002B; ... &#x0002B; &#x003BA;<sub><italic>r</italic></sub> &#x0003D; 1. When the <italic>j</italic><sup><italic>th</italic></sup> item is discarded, each item above the <italic>j</italic><sup><italic>th</italic></sup> moves down one slot and the new item enters the <italic>r</italic><sup><italic>th</italic></sup> slot. The simplest form of &#x003BA;<sub><italic>j</italic></sub> is <inline-formula><mml:math id="M6"><mml:msub><mml:mrow><mml:mi>&#x003BA;</mml:mi></mml:mrow><mml:mrow><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>r</mml:mi></mml:mrow></mml:mfrac></mml:math></inline-formula>; in this case, the item to be knocked out is chosen independently of the buffer position.</p>
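The push-down buffer dynamics described above can be sketched in Python (a minimal illustration under the stated assumptions; the function name and data layout are our own, not from the original paper):

```python
import random

def present_item(buffer, item, alpha, r, kappa=None, rng=random):
    """One study trial of the Atkinson-Shiffrin rehearsal buffer,
    modeled as a push-down list (index 0 = oldest slot, last
    index = slot r, the newest).

    An O-item (already in the buffer) re-enters with probability 1,
    replacing its old copy.  An N-item enters with probability alpha
    and, if the buffer is full, knocks out the occupant of slot j
    with probability kappa[j] (default: uniform, kappa_j = 1/r).
    Returns True if the item entered the buffer."""
    if item in buffer:                     # O-item
        buffer.remove(item)
        buffer.append(item)
        return True
    if rng.random() >= alpha:              # N-item fails to enter:
        return False                       # it is permanently lost
    if len(buffer) >= r:                   # knock out an occupant
        weights = kappa if kappa is not None else [1.0 / r] * r
        j = rng.choices(range(r), weights=weights)[0]
        del buffer[j]
    buffer.append(item)                    # new item enters slot r
    return True
```

With alpha = 1 and r = 2, presenting A, B, then A again leaves the buffer as ['B', 'A']: the O-item A is moved to the newest slot while B retains its relative order.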
<p>At this point, let us focus on the long-term store (LTS).</p>
<p>LTS can be viewed as a memory state where the information accumulates for each item. The authors made a few assumptions:</p>
<list list-type="order">
<list-item><p>Information about an item may enter LTS only during the period that an item resides in the buffer.</p></list-item>
<list-item><p>The status of an item in the buffer is in no way affected by transfer of information to LTS.</p></list-item>
<list-item><p>Recall from the buffer is assumed to be perfect, and recall from LTS is not necessarily perfect and usually will not be.</p></list-item>
<list-item><p>The information is transferred to LTS at a constant rate &#x003B8; during the entire period in which an item resides in the buffer; &#x003B8; is the transfer rate per trial. Hence, if an item remains in the buffer for exactly <italic>j</italic> trials, then that item accumulated an amount of information equal to <italic>j&#x003B8;</italic>.</p></list-item>
<list-item><p>On each trial following the trial on which an item is discarded from the buffer, the information stored in LTS decreases by a constant proportion &#x003C4;. So, if an item were discarded from the buffer at trial <italic>j</italic>, and <italic>i</italic> is the number of trials intervening between the original study and the test on that item, the amount of information stored in LTS at the time of test would be <italic>j&#x003B8;&#x003C4;</italic><sup><italic>i</italic>&#x02212;<italic>j</italic></sup>.</p></list-item>
</list>
<p>When a subject undergoes a test on an item, the subject gives the correct response if the item is in the sensory or memory buffer; if the item is not in either of these buffers, the subject searches LTS. This LTS search is called the <italic>retrieval process</italic>. In this regard, two important observations should be made: First, it is assumed that the likelihood of retrieving the correct response for a given item improves as the amount of information stored concerning that item increases. Second, the retrieval of an item gets worse the longer the item has been stored in LTS. In other words, there is some sort of decay in information as a function of the length of time information has been stored in LTS.</p>
<p>After these assumptions and observations, it is possible to specify the probability of a correct retrieval of an item from LTS. If the amount of information stored at the moment of test for an item is zero, then the probability of a correct retrieval should be at the guessing level. As the amount of information increases, the probability of a correct retrieval should increase toward unity. The authors define <italic>p</italic><sub><italic>ij</italic></sub> as the probability of a correct response from LTS of an item that had a lag of <italic>i</italic> trials between its study and test and that resided in the buffer for exactly <italic>j</italic> trials. Hence, such a probability can be mathematically written as</p>
<disp-formula id="E6"><label>(6)</label><mml:math id="M7"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mi>p</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mn>1</mml:mn><mml:mo>-</mml:mo><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mn>1</mml:mn><mml:mo>-</mml:mo><mml:mi>g</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:msup><mml:mrow><mml:mi>e</mml:mi></mml:mrow><mml:mrow><mml:mo>-</mml:mo><mml:mi>j</mml:mi><mml:mi>&#x003B8;</mml:mi><mml:msup><mml:mrow><mml:mi>&#x003C4;</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>-</mml:mo><mml:mi>j</mml:mi></mml:mrow></mml:msup></mml:mrow></mml:msup></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>where <italic>g</italic> is the guessing probability; for example, if an experiment is made up of 26 response alternatives, then the guess probability is <inline-formula><mml:math id="M8"><mml:mfrac><mml:mrow><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mn>26</mml:mn></mml:mrow></mml:mfrac></mml:math></inline-formula>.</p>
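Equation (6) combines the stored information <italic>j&#x003B8;&#x003C4;</italic><sup><italic>i</italic>&#x02212;<italic>j</italic></sup> with the guessing floor <italic>g</italic>; a minimal sketch (function name and parameter values are illustrative, not from the original paper):

```python
import math

def p_correct(i, j, theta, tau, g):
    """Probability of a correct response from LTS (Equation 6).
    An item that resided in the buffer for j trials and is tested
    after a lag of i trials has accumulated j * theta * tau**(i - j)
    units of information.
    theta : transfer rate per trial while in the buffer
    tau   : retention proportion per trial after leaving the buffer
    g     : guessing probability"""
    info = j * theta * tau ** (i - j)
    return 1.0 - (1.0 - g) * math.exp(-info)

# With no stored information (j = 0) the probability is at chance:
print(p_correct(5, 0, theta=0.4, tau=0.9, g=1 / 26))  # 1/26 ≈ 0.0385
```

As expected from the model, for a fixed buffer residence j the probability decreases as the lag i grows, since the stored information decays by the factor &#x003C4; on each intervening trial.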
</sec>
<sec>
<title>2.3.4 Strong and weak points of the Atkinson and Shiffrin model</title>
<sec>
<title>2.3.4.1 Strong points</title>
<p>Some of the strengths of the model can be summarized as follows:</p>
<list list-type="bullet">
<list-item><p>It provides a good understanding of the structure and processes of human memory.</p></list-item>
<list-item><p>It is distinguished by having generated a great deal of research into memory.</p></list-item>
<list-item><p>Many memory studies provide evidence to support the distinction between STM and LTM (in terms of encoding, duration, and capacity).</p></list-item>
<list-item><p>Due to its multi-store structure, it is able to explain specific well-known cases in neuropsychology, such as the case of Henry Gustav Molaison (Annese et al., <xref ref-type="bibr" rid="B5">2014</xref>).</p></list-item>
</list>
</sec>
<sec>
<title>2.3.4.2 Weak points</title>
<p>Despite being influential, the model has some weak points, including the following:</p>
<list list-type="bullet">
<list-item><p>The model is oversimplified; for example, it suggests that each of the stores works as an independent unit, which is not the case.</p></list-item>
<list-item><p>The model does not explain memory distortions (memories can be distorted when they are retrieved because there is a necessity to fill in the gaps to create a meaningful memory).</p></list-item>
<list-item><p>There are some memories that can be stored in long-term memory even if the amount of rehearsal is minimal, for example, a severe bicycle crash.</p></list-item>
<list-item><p>Sometimes, despite prolonged rehearsal aimed at remembering information, it is not transferred to long-term memory.</p></list-item>
</list>
</sec>
</sec>
<sec>
<title>2.3.5 Mathematical developments</title>
<p>As already mentioned, the Atkinson-Shiffrin memory model is an influential model, so it is no surprise that several models have been developed on its basis. In the following, we provide a chronological history of such developments.</p>
<p>The Search of Associative Memory (SAM) model was proposed in 1981 by Raaijmakers and Shiffrin (<xref ref-type="bibr" rid="B47">1981</xref>). When free recall of a list of words is prompted by a random subset of those words, the likelihood of remembering one of the remaining words is lower than if no cues are given at all. To predict this effect in all of its forms, SAM makes extensive use of interword connections in retrieval, a mechanism that had been overlooked by prior thinking.</p>
<p>The SAM model for recall (Raaijmakers and Shiffrin, <xref ref-type="bibr" rid="B47">1981</xref>) is extended by assuming that a familiarity process is used for recognition. The recall model, proposed in 1984 by Gillund and Shiffrin (<xref ref-type="bibr" rid="B18">1984</xref>), postulates cue-dependent probabilistic sampling and recovery from an associative network. The recognition model, proposed by Gillund and Shiffrin, is strictly linked to the recall model because the total episodic activation due to the context and item cues is used in recall as a basis for sampling and in recognition to make a decision. The model predicts the results from a new experiment on the word-frequency effect.</p>
<p>In 1997, Shiffrin and Steyvers (<xref ref-type="bibr" rid="B51">1997</xref>) proposed the REM model (standing for retrieving effectively from memory), developed to place explicit and implicit memory, as well as episodic and general memory, into the framework of a more complex theory that is being created to explain these phenomena. The model assumes storage of separate episodic images for different words, each image consisting of a vector of feature values.</p>
<p>Mueller and Shiffrin (<xref ref-type="bibr" rid="B35">2006</xref>) presented the REM-II model, which is based on Bayesian statistics. REM-II models the development of episodic and semantic memory. Semantic information is represented by the model as a collection of feature co-occurrences, while episodic traces are represented as sets of features with varying values. Feature co-occurrence approaches the complexity of human knowledge by enabling polysemy and meaning connotation to be recorded inside a single structure. The authors present how knowledge is formed in REM-II, how experience gives rise to semantic spaces, and how REM-II leads to polysemy and encoding bias.</p>
<p>The SARKAE (Storing and Retrieving Knowledge and Events) model proposed by Nelson and Shiffrin (<xref ref-type="bibr" rid="B41">2013</xref>), which represents a further development of the SAM model, describes the development of knowledge and event memories as an interactive process: Knowledge is formed through the accrual of individual events, and the storage of an individual episode is dependent on prior knowledge. The authors support their theory with two experiments involving the acquisition of new knowledge followed by testing in transfer tasks related to episodic memory, knowledge retrieval, and perception.</p>
<p>Lastly, we would like to point out that there are also models that contrast with Atkinson and Shiffrin&#x00027;s original model; among these is a dynamic model by Cox and Shiffrin (<xref ref-type="bibr" rid="B12">2017</xref>), which considers memory to be cue-dependent; such a model is in line with MINERVA (see Section 2.5).</p>
</sec>
</sec>
<sec>
<title>2.4 A neuromathematical model of human information</title>
<p>In 1983, Anderson (<xref ref-type="bibr" rid="B3">1983</xref>) proposed a neuromathematical model of human information processing. The acquisition of new contents is a fundamental part of cognition. Two fundamental aspects of such acquisition are the rate of information processing during the learning phase and the efficiency of the subject (the learner) in mobilizing relevant information in long-term memory. They play a fundamental role in transmitting newly acquired information to stable storage in long-term memory and are therefore extremely important in the acquisition of new contents. Moreover, these cognitive processes may be substantially related, in tempo and quality of organization, to the efficiency of higher thought processes, such as divergent thinking and problem-solving ability, that characterize scientific thought. Since this is a critical topic in the study of memory, Anderson proposed and empirically evaluated a mathematical model of information acquisition.</p>
<p>According to Anderson, sufficient neuroscientific information is available to suggest that the processes of information acquisition in short-term memory (STM) can be modeled as a set of time-dependent equations representing rates of general processes in the central nervous system (CNS) activity.</p>
<sec>
<title>2.4.1 Stability function</title>
<p>Anderson assumed that the holding capacity of short-term memory is limited. Therefore, the stability of information in STM partially depends on the amount of information stored in STM and in general will decline as the information load increases. Some characteristics of the information could influence its efficient storage in STM and the capacity of the learner to effectively organize and transmit the information to long-term memory (LTM). Two properties of stimulus information considered by Anderson in this first approximation are (1) the information quality (&#x003B2;) and (2) the information quantity (&#x003B4;). Information quality is defined as the abstractness of the information. Anderson then introduces <italic>S</italic>, which represents the activity of the central nervous system associated with the storage of information and its stability in short-term memory. The magnitude of this activity will decline as the load of information increases; in other words, the stability of information in STM decreases as STM holding capacity begins to reach saturating levels. The rate of decrease in stability with time will be proportional to the amount of activity accumulated. Mathematically, this is equivalent to writing</p>
<disp-formula id="E7"><label>(7)</label><mml:math id="M9"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:mo>-</mml:mo><mml:mfrac><mml:mrow><mml:mi>d</mml:mi><mml:mi>S</mml:mi></mml:mrow><mml:mrow><mml:mi>d</mml:mi><mml:mi>t</mml:mi></mml:mrow></mml:mfrac><mml:mo>=</mml:mo><mml:mi>f</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>S</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>.</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>Equation (7) is the most general form for describing the rate of decrease in stability. Indeed, it should be considered that the rate of decrease in stability should be smaller for learners with higher intellectual ability than for those of lower ability. Moreover, the rate of decrease in stability should increase the more abstract the information is and the greater the rate of presentation (the larger the progression density) is. Both of these factors contribute to the cognitive demand placed on the learner. Hence, Anderson proposed the following refined statement representing the instantaneous rate of change in stability:</p>
<disp-formula id="E8"><label>(8)</label><mml:math id="M10"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:mo>-</mml:mo><mml:mfrac><mml:mrow><mml:mi>d</mml:mi><mml:mi>S</mml:mi></mml:mrow><mml:mrow><mml:mi>d</mml:mi><mml:mi>t</mml:mi></mml:mrow></mml:mfrac><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mi>&#x003B1;</mml:mi><mml:mi>&#x003B2;</mml:mi><mml:mi>&#x003B4;</mml:mi></mml:mrow><mml:mrow><mml:mi>&#x003BA;</mml:mi></mml:mrow></mml:mfrac><mml:mi>S</mml:mi></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>where &#x003B1; is a constant of proportionality, &#x003B2; is the content quality (i.e., the abstractness), &#x003B4; is the content quantity (progression density), and &#x003BA; is the learner&#x00027;s intelligence quotient, properly scaled; all are constants. By integrating Equation (8), the analytical form of <italic>S</italic> is obtained:</p>
<disp-formula id="E9"><label>(9)</label><mml:math id="M11"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:mi>S</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:msub><mml:mrow><mml:mi>S</mml:mi></mml:mrow><mml:mrow><mml:mn>0</mml:mn></mml:mrow></mml:msub><mml:msup><mml:mrow><mml:mi>e</mml:mi></mml:mrow><mml:mrow><mml:mo>-</mml:mo><mml:mfrac><mml:mrow><mml:mi>&#x003B1;</mml:mi><mml:mi>&#x003B2;</mml:mi><mml:mi>&#x003B4;</mml:mi></mml:mrow><mml:mrow><mml:mi>&#x003BA;</mml:mi></mml:mrow></mml:mfrac><mml:mi>t</mml:mi></mml:mrow></mml:msup></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>where <italic>S</italic><sub>0</sub> is the initial value of <italic>S</italic> at <italic>t</italic><sub>0</sub> and <italic>t</italic> is the time since the start of the learning experience. This is a decreasing exponential function representing the rate of decay in stability of information in STM as the information load increases with time. Equation (9) is, therefore, a time-dependent function representing CNS stability. In psychological terms, it is a prediction of the amount of residual short-term memory holding capacity at a point in time after the onset of the learning experience. The amount of STM information storage capacity depends on the amount of information already stored in STM and the complexity of the incoming information, as represented in part by the variables &#x003B2; and &#x003B4; in the rate coefficients of the equation. In addition to the stability of information in STM, the amount of instability in the CNS associated with uncertainty in encoding novel stimulus material must be considered.</p>
</sec>
<sec>
<title>2.4.2 Instability function</title>
<p>As learning progresses and behavior becomes more differentiated, the initial instability associated with the new learning task will decrease. Let <italic>I</italic> represent activity in the CNS associated with instability of the system and &#x003BB; the coefficient of decay of <italic>I</italic> with time. Hence, the instantaneous rate of decay in instability of the CNS for information encoding, which is related to the amount of activity <italic>I</italic> through the instability coefficient &#x003BB;, can be mathematically written as</p>
<disp-formula id="E10"><label>(10)</label><mml:math id="M12"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:mfrac><mml:mrow><mml:mi>d</mml:mi><mml:mi>I</mml:mi></mml:mrow><mml:mrow><mml:mi>d</mml:mi><mml:mi>t</mml:mi></mml:mrow></mml:mfrac><mml:mo>=</mml:mo><mml:mo>-</mml:mo><mml:mi>&#x003BB;</mml:mi><mml:mi>I</mml:mi></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>The integration of Equation (10), with the initial condition <italic>I</italic>(<italic>t</italic> &#x0003D; 0) &#x0003D; <italic>I</italic><sub>0</sub>, provides</p>
<disp-formula id="E11"><label>(11)</label><mml:math id="M13"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:mi>I</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:msub><mml:mrow><mml:mi>I</mml:mi></mml:mrow><mml:mrow><mml:mn>0</mml:mn></mml:mrow></mml:msub><mml:msup><mml:mrow><mml:mi>e</mml:mi></mml:mrow><mml:mrow><mml:mo>-</mml:mo><mml:mi>&#x003BB;</mml:mi><mml:mi>t</mml:mi></mml:mrow></mml:msup><mml:mo>.</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>At any point in time, the capacity of the CNS to encode information will be equivalent to the difference between the stability function and the instability function, or</p>
<disp-formula id="E12"><label>(12)</label><mml:math id="M14"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:mi>S</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>-</mml:mo><mml:mi>I</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:msub><mml:mrow><mml:mi>S</mml:mi></mml:mrow><mml:mrow><mml:mn>0</mml:mn></mml:mrow></mml:msub><mml:msup><mml:mrow><mml:mi>e</mml:mi></mml:mrow><mml:mrow><mml:mo>-</mml:mo><mml:mfrac><mml:mrow><mml:mi>&#x003B1;</mml:mi><mml:mi>&#x003B2;</mml:mi><mml:mi>&#x003B4;</mml:mi></mml:mrow><mml:mrow><mml:mi>&#x003BA;</mml:mi></mml:mrow></mml:mfrac><mml:mi>t</mml:mi></mml:mrow></mml:msup><mml:mo>-</mml:mo><mml:msub><mml:mrow><mml:mi>I</mml:mi></mml:mrow><mml:mrow><mml:mn>0</mml:mn></mml:mrow></mml:msub><mml:msup><mml:mrow><mml:mi>e</mml:mi></mml:mrow><mml:mrow><mml:mo>-</mml:mo><mml:mi>&#x003BB;</mml:mi><mml:mi>t</mml:mi></mml:mrow></mml:msup></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>Equation (12) represents the net encoding capacity of STM at an arbitrary point in time <italic>t</italic>.</p>
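The net encoding capacity of Equation (12) is simply the difference of the two decaying exponentials of Equations (9) and (11); a minimal numerical sketch (function name and parameter values illustrative, not from Anderson's paper):

```python
import math

def net_encoding_capacity(t, S0, I0, alpha, beta, delta, kappa, lam):
    """Net STM encoding capacity at time t (Equation 12):
    S(t) - I(t) = S0*exp(-(alpha*beta*delta/kappa)*t) - I0*exp(-lam*t)
    alpha : proportionality constant
    beta  : content quality (abstractness)
    delta : content quantity (progression density)
    kappa : learner's intelligence quotient, properly scaled
    lam   : instability decay coefficient"""
    S = S0 * math.exp(-(alpha * beta * delta / kappa) * t)
    I = I0 * math.exp(-lam * t)
    return S - I
```

At t = 0 this reduces to S<sub>0</sub> &#x02212; I<sub>0</sub>, the initial stability minus the initial encoding instability; as t grows, both terms decay toward zero at their respective rates.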
</sec>
<sec>
<title>2.4.3 The gain function</title>
<p>Anderson then introduces the CNS activity correlated with information gain, called <italic>N</italic>, and writes</p>
<disp-formula id="E13"><label>(13)</label><mml:math id="M15"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:mfrac><mml:mrow><mml:mi>d</mml:mi><mml:mi>N</mml:mi></mml:mrow><mml:mrow><mml:mi>d</mml:mi><mml:mi>t</mml:mi></mml:mrow></mml:mfrac><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mi>&#x003BA;</mml:mi></mml:mrow><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi>&#x003B1;</mml:mi></mml:mrow><mml:mo>&#x00304;</mml:mo></mml:mover><mml:mi>&#x003B2;</mml:mi><mml:mi>&#x003B4;</mml:mi></mml:mrow></mml:mfrac><mml:mi>N</mml:mi><mml:mo>.</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>In Equation (13), it is clear how the instantaneous rate of increase in information is directly proportional to <italic>N</italic> and to &#x003BA;, the intelligence of the subject, and inversely related to &#x003B2;, the abstractness of the stimulus information, and to &#x003B4;, the progression density. <inline-formula><mml:math id="M16"><mml:mover accent="true"><mml:mrow><mml:mi>&#x003B1;</mml:mi></mml:mrow><mml:mo>&#x00304;</mml:mo></mml:mover></mml:math></inline-formula> is a constant of proportionality. Theoretically speaking, the gain function represents the amplification of CNS activity associated with the elaboration of information in memory through active memory processes of reorganization of information in LTM.</p>
<p>By solving the following Cauchy problem,</p>
<disp-formula id="E14"><label>(14)</label><mml:math id="M17"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:mrow><mml:mo>{</mml:mo><mml:mrow><mml:mtable style="text-align:axis;" equalrows="false" equalcolumns="false" class="array"><mml:mtr><mml:mtd><mml:mfrac><mml:mrow><mml:mi>d</mml:mi><mml:mi>N</mml:mi></mml:mrow><mml:mrow><mml:mi>d</mml:mi><mml:mi>t</mml:mi></mml:mrow></mml:mfrac><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mi>&#x003BA;</mml:mi></mml:mrow><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi>&#x003B1;</mml:mi></mml:mrow><mml:mo>&#x00304;</mml:mo></mml:mover><mml:mi>&#x003B2;</mml:mi><mml:mi>&#x003B4;</mml:mi></mml:mrow></mml:mfrac><mml:mi>N</mml:mi></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mi>N</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mn>0</mml:mn></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:msub><mml:mrow><mml:mi>N</mml:mi></mml:mrow><mml:mrow><mml:mn>0</mml:mn></mml:mrow></mml:msub></mml:mtd></mml:mtr></mml:mtable></mml:mrow></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>one obtains the solution</p>
<disp-formula id="E15"><label>(15)</label><mml:math id="M18"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:mi>N</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:msub><mml:mrow><mml:mi>N</mml:mi></mml:mrow><mml:mrow><mml:mn>0</mml:mn></mml:mrow></mml:msub><mml:msup><mml:mrow><mml:mi>e</mml:mi></mml:mrow><mml:mrow><mml:mfrac><mml:mrow><mml:mi>&#x003BA;</mml:mi></mml:mrow><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi>&#x003B1;</mml:mi></mml:mrow><mml:mo>&#x00304;</mml:mo></mml:mover><mml:mi>&#x003B2;</mml:mi><mml:mi>&#x003B4;</mml:mi></mml:mrow></mml:mfrac><mml:mi>t</mml:mi></mml:mrow></mml:msup><mml:mo>.</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
</sec>
<sec>
<title>2.4.4 Composite equation</title>
<p>The product of the gain function, Equation (15), and the modulation factor, Equation (12), yields the composite equation:</p>
<disp-formula id="E16"><label>(16)</label><mml:math id="M19"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mi>N</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:msub><mml:mrow><mml:mi>N</mml:mi></mml:mrow><mml:mrow><mml:mn>0</mml:mn></mml:mrow></mml:msub><mml:msup><mml:mrow><mml:mi>e</mml:mi></mml:mrow><mml:mrow><mml:mfrac><mml:mrow><mml:mi>&#x003BA;</mml:mi></mml:mrow><mml:mrow><mml:mover accent="true"><mml:mrow><mml:mi>&#x003B1;</mml:mi></mml:mrow><mml:mo>&#x00304;</mml:mo></mml:mover><mml:mi>&#x003B2;</mml:mi><mml:mi>&#x003B4;</mml:mi></mml:mrow></mml:mfrac><mml:mi>t</mml:mi></mml:mrow></mml:msup><mml:mrow><mml:mo stretchy="true">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>S</mml:mi></mml:mrow><mml:mrow><mml:mn>0</mml:mn></mml:mrow></mml:msub><mml:msup><mml:mrow><mml:mi>e</mml:mi></mml:mrow><mml:mrow><mml:mo>-</mml:mo><mml:mfrac><mml:mrow><mml:mi>&#x003B1;</mml:mi><mml:mi>&#x003B2;</mml:mi><mml:mi>&#x003B4;</mml:mi></mml:mrow><mml:mrow><mml:mi>&#x003BA;</mml:mi></mml:mrow></mml:mfrac><mml:mi>t</mml:mi></mml:mrow></mml:msup><mml:mo>-</mml:mo><mml:msub><mml:mrow><mml:mi>I</mml:mi></mml:mrow><mml:mrow><mml:mn>0</mml:mn></mml:mrow></mml:msub><mml:msup><mml:mrow><mml:mi>e</mml:mi></mml:mrow><mml:mrow><mml:mo>-</mml:mo><mml:mi>&#x003BB;</mml:mi><mml:mi>t</mml:mi></mml:mrow></mml:msup></mml:mrow><mml:mo stretchy="true">)</mml:mo></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>where <italic>N</italic><sub><italic>t</italic></sub> is the net information gain at time <italic>t</italic>. With an appropriate choice of constants (&#x003B1; and <inline-formula><mml:math id="M20"><mml:mover accent="true"><mml:mrow><mml:mi>&#x003B1;</mml:mi></mml:mrow><mml:mo>&#x00304;</mml:mo></mml:mover></mml:math></inline-formula>) and properly scaled variables (&#x003B4;, &#x003B2;, &#x003BA;, and &#x003BB;), the equation yields learning curves that can be empirically tested against data obtained in human learning experiments. The composite equation, therefore, represents the total information gain (<italic>N</italic><sub><italic>t</italic></sub>) at a point in time (<italic>t</italic>) and is the product of the subjects&#x00027; capacity to generate interrelationships among units of information in LTM (the G factor) and the amount of immediate net STM encoding capacity (the M factor).</p>
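To make the composite equation concrete, the sketch below evaluates Equation (16) as the product of the gain (G) and modulation (M) factors; the constants are illustrative assumptions, not values fitted to experimental data.

```python
import math

# Assumed illustrative constants; in practice these are fitted to learning data.
kappa, alpha, alpha_bar, beta, delta = 0.5, 0.4, 1.0, 1.0, 1.0
lam = 0.8            # lambda: decay rate of the inhibitory component
N0, S0, I0 = 1.0, 1.0, 0.6

def gain(t):
    """G factor, Equation (15)."""
    return N0 * math.exp(kappa / (alpha_bar * beta * delta) * t)

def modulation(t):
    """M factor, Equation (12): excitatory minus inhibitory decay."""
    return S0 * math.exp(-alpha * beta * delta / kappa * t) - I0 * math.exp(-lam * t)

def composite(t):
    """Equation (16): total information gain N_t at time t."""
    return gain(t) * modulation(t)

# A learning curve sampled at t = 0.0, 0.1, ..., 1.0
curve = [composite(t / 10) for t in range(11)]
```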
</sec>
<sec>
<title>2.4.5 Strong and weak points of Anderson&#x00027;s model</title>
<sec>
<title>2.4.5.1 Strong points</title>
<p>The model uses differential equations to describe human memory information processing in a simple form that is immediately accessible to anyone. The model has yielded good predictions of student recall in short-term learning experiences.</p>
</sec>
<sec>
<title>2.4.5.2 Weak points</title>
<p>The model is limited to cognitive phenomena in short-term learning experiences lasting on the order of minutes to one-half hour. It is based on the assumption that the subject (the learner) is not aided by external prompts such as notes or other memory aids. Important factors such as the learner&#x00027;s motivational state, fatigue, and stress are not taken into account. A point of caution (which holds for any model) concerns the duration of the learning experiences and the characteristics of the learners used in experimental studies: these parameters need to be carefully controlled to avoid biases that may be introduced if the sample deviates appreciably from a moderately motivated population.</p>
</sec>
</sec>
<sec>
<title>2.4.6 Mathematical developments</title>
<p>Anderson proposed several extensions of the original model. For example, in Anderson (<xref ref-type="bibr" rid="B4">1986</xref>), he included coefficients representing the motivational state of the learner. In particular, two coefficients were included: the first is an exponential coefficient in the gain function, largely representing a change in the rate of learning associated with varying motivation, while the second is an initial factor in the gain equation representing a change in motivation at the outset of a learning task. This permits modeling of the effects of variations in motivation on the rate and amount of information gained in a learning task.</p>
<p>We remark that some criticisms of Anderson&#x00027;s model were raised by Preece (see Preece and Anderson, <xref ref-type="bibr" rid="B46">1984</xref>). Preece suggested that Anderson&#x00027;s data could be better, or &#x0201C;more parsimoniously,&#x0201D; represented by a learning model proposed by Hicklin (<xref ref-type="bibr" rid="B22">1976</xref>). In response to this critique, Anderson stated that several mathematical models have been created to forecast human learning curves, with a significant portion of these models being dependent on learner-specific characteristics. These models, however, do not take into account variations in the information input or the complexity of the information, such as the interaction between short- and long-term memory. Therefore, more complex models are required to explore more natural learning scenarios in which information receipt occurs, and the Anderson model is designed to do just that.</p>
</sec>
</sec>
<sec>
<title>2.5 MINERVA 2: a simulation model of human memory</title>
<p>In 1984, Hintzman (<xref ref-type="bibr" rid="B23">1984</xref>) proposed MINERVA 2, a simulation model of human memory. The model makes some assumptions: First, only episodic traces are stored in memory; second, repetition produces multiple traces of an item; third, a retrieval cue contacts all memory traces simultaneously; fourth, each trace is activated according to its similarity to the retrieval cue; fifth, all traces respond in parallel, the retrieved information reflecting their summed output. MINERVA 2 represents an attempt to account for data from both episodic and generic memory tasks within a single system. The theory underpinning the model is primarily concerned with long-term or secondary memory (SM), although it also assumes that there is a temporary working store or primary memory (PM) that communicates with SM. The interactions between the two stores are restricted to two elementary operations: PM can send a retrieval cue, or &#x0201C;probe&#x0201D;, into SM, and it can receive a reply, called the &#x0201C;echo.&#x0201D; When a probe is sent to SM, a single echo is returned. Information in the echo, and its relation to information in the eliciting probe, are the only clues available to PM regarding what information SM contains. The author remarks that SM is a vast collection of episodic memory traces, each of which is a record of an event or experience. An experience is assumed to occur when a configuration of primitive properties or features is activated in PM, and a memory trace is a record of such a configuration. The experience is strictly connected to a memory trace. Indeed, each experience leaves behind its own memory trace even if it is virtually the same as an earlier one. This means that the effects of repetition are mediated by multiple copies&#x02014;or redundancy&#x02014;rather than by strengthening. Hintzman speculates that there is no separate conceptual, generic, or semantic store. 
Hence, all information, whether specific or general, is retrieved from the pool of episodic traces that constitutes SM. When a probe is communicated from PM to SM, it is simultaneously matched with every memory trace, and each trace is activated according to its degree of similarity to the probe. The echo that comes back to PM represents the summed reactions of all traces in SM. In other words, there is no process by which individual memory traces can be located and examined in isolation. All SM traces are activated in parallel by the probe, and they all respond in parallel, and the echo contains their combined messages. A trace&#x00027;s contribution to the echo is determined by its degree of activation, so only traces that are relatively similar to the probe make a significant contribution to the echo.</p>
<sec>
<title>2.5.1 The model description</title>
<p>MINERVA 2 bears some similarity to MINERVA 1 (see Hintzman and Ludlam, <xref ref-type="bibr" rid="B24">1980</xref>) but is applicable to a much wider variety of tasks. An experience (or event) is represented as a vector whose entries (which represent the features, i.e., the configuration of primitive properties whose activation constitutes an experience) belong to the set {&#x0002B;1, 0, &#x02212;1}. The values &#x0002B;1 and &#x02212;1 occur about equally often, so that over a large number of traces, the expected value of a feature is 0. In a stimulus or event description, a feature value of 0 indicates that the particular feature is irrelevant. In an SM trace description, a value of 0 may mean either that the feature is irrelevant or that it was forgotten or never stored. In learning, the active features representing the present event are copied into an SM trace. Each such feature has probability <italic>L</italic> of being encoded properly, and with probability 1 &#x02212; <italic>L</italic> the trace feature value is set to 0. If an item is repeated, a new trace is entered into SM each time it occurs. The author defines <italic>P</italic>(<italic>j</italic>), which represents feature <italic>j</italic> of a probe or retrieval cue, and <italic>T</italic>(<italic>i, j</italic>), a mathematical object (see Hintzman and Ludlam, <xref ref-type="bibr" rid="B24">1980</xref>) that is the corresponding feature of memory trace <italic>i</italic>. <italic>T</italic>(<italic>i, j</italic>) must be statistically compared to <italic>P</italic>(<italic>j</italic>), which is why <italic>T</italic>(<italic>i, j</italic>) is a function of both the trace <italic>i</italic> and the probe <italic>j</italic>. The similarity of trace <italic>i</italic> to the probe, <italic>S</italic>(<italic>i</italic>), is computed as</p>
<disp-formula id="E17"><label>(17)</label><mml:math id="M21"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:mi>S</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mstyle displaystyle="true"><mml:munderover accentunder="false" accent="false"><mml:mrow><mml:mo>&#x02211;</mml:mo></mml:mrow><mml:mrow><mml:mi>j</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>N</mml:mi></mml:mrow></mml:munderover></mml:mstyle><mml:mfrac><mml:mrow><mml:mi>P</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>j</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mi>T</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mi>N</mml:mi></mml:mrow></mml:mfrac><mml:mo>,</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>where <italic>N</italic> is the total number of features that are nonzero in either the probe or the trace.</p>
<p><italic>S</italic>(<italic>i</italic>) can be viewed as a sort of correlation index, taking on both positive and negative values: if <italic>S</italic>(<italic>i</italic>) &#x0003D; 0, the probe and trace are orthogonal; if <italic>S</italic>(<italic>i</italic>) &#x0003D; 1, they match perfectly. The activation level of a trace, <italic>A</italic>(<italic>i</italic>), is a positively accelerated function of its similarity to the probe. In the study&#x00027;s simulations,</p>
<disp-formula id="E18"><label>(18)</label><mml:math id="M22"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:mi>A</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mi>S</mml:mi><mml:msup><mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mn>3</mml:mn></mml:mrow></mml:msup><mml:mo>.</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>Raising the similarity measure to the third power increases the <italic>signal-to-noise</italic> ratio, in that it increases the number of poorly matching traces required to overshadow a trace that closely matches the probe. It should be noted that if trace <italic>i</italic> was generated randomly (by a process orthogonal to that generating the probe), then the expected value of <italic>A</italic>(<italic>i</italic>) is 0 and the variance of <italic>A</italic>(<italic>i</italic>) is quite small. Thus, <italic>A</italic>(<italic>i</italic>) should be very near to 0 unless trace <italic>i</italic> fairly closely matches the probe.</p>
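The similarity and activation rules of Equations (17) and (18) can be sketched directly; this is a minimal illustration with hand-built feature vectors, not Hintzman's original simulation code.

```python
def similarity(probe, trace):
    """S(i), Equation (17): feature dot product divided by N, the number
    of features that are nonzero in either the probe or the trace."""
    n = sum(1 for p, t in zip(probe, trace) if p != 0 or t != 0)
    return sum(p * t for p, t in zip(probe, trace)) / n

def activation(probe, trace):
    """A(i) = S(i)**3, Equation (18): cubing sharpens the signal-to-noise ratio."""
    return similarity(probe, trace) ** 3

probe = [1, -1, 1, 0, -1]                            # features from {+1, 0, -1}
assert similarity(probe, probe) == 1.0               # a perfectly matching trace
assert activation(probe, [-1, 1, -1, 0, 1]) == -1.0  # a fully reversed trace
```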
<sec>
<title>2.5.1.1 Intensity</title>
<p>When a probe activates the traces in SM, information is returned in the echo. The echo is assumed to have two properties: intensity and content. The intensity of the echo is given by</p>
<disp-formula id="E19"><label>(19)</label><mml:math id="M23"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mi>I</mml:mi></mml:mrow><mml:mrow><mml:mi>E</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mstyle displaystyle="true"><mml:munderover accentunder="false" accent="false"><mml:mrow><mml:mo>&#x02211;</mml:mo></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>M</mml:mi></mml:mrow></mml:munderover></mml:mstyle><mml:mi>A</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>where <italic>M</italic> is the total number of traces in memory. The variance of <italic>I</italic><sub><italic>E</italic></sub>, <italic>Var</italic>(<italic>I</italic><sub><italic>E</italic></sub>), is a function of the number of target traces. If <italic>L</italic> &#x0003D; 1, then this function is flat, reflecting only the baseline &#x0201C;noise&#x0201D; in <italic>I</italic><sub><italic>E</italic></sub> produced by non-target traces. If <italic>L</italic> &#x0003C; 1 and is constant, then <italic>Var</italic>(<italic>I</italic><sub><italic>E</italic></sub>) increases linearly with frequency because the <italic>A</italic>(<italic>i</italic>) values of the individual target traces vary and contribute independently to <italic>I</italic><sub><italic>E</italic></sub>. Frequency judgments and recognition judgments are assumed to be based on the intensity of the echo, and therefore, characteristics of the <italic>I</italic><sub><italic>E</italic></sub> distribution are crucial in simulating performance in these tasks.</p>
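Equation (19) can be sketched with a toy memory that, by assumption, holds one exact copy of the probed item plus one trace orthogonal to it:

```python
def similarity(probe, trace):
    """S(i), Equation (17)."""
    n = sum(1 for p, t in zip(probe, trace) if p != 0 or t != 0)
    return sum(p * t for p, t in zip(probe, trace)) / n

probe  = [1, -1] * 10            # a 20-feature retrieval cue
target = list(probe)             # a stored copy of the probed item
other  = [1, 1, -1, -1] * 5      # a trace orthogonal to the probe
memory = [target, other]

# Echo intensity, Equation (19): the summed activations A(i) = S(i)**3
I_E = sum(similarity(probe, tr) ** 3 for tr in memory)
assert I_E == 1.0  # the target contributes 1, the orthogonal trace 0
```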
</sec>
<sec>
<title>2.5.1.2 Content</title>
<p>The content of the echo is the activation pattern across features that is returned from memory following the probe. It is assumed that the activation of each SM trace, <italic>i</italic>, is passed to each of its constituent features, <italic>j</italic>, as the product of <italic>A</italic>(<italic>i</italic>) and <italic>T</italic>(<italic>i, j</italic>). Note that the product will be positive if the signs of <italic>A</italic>(<italic>i</italic>) and <italic>T</italic>(<italic>i, j</italic>) are the same and negative if they are different. The contributions of all <italic>M</italic> traces in memory are summed for each feature; thus, activation of feature <italic>j</italic> in the echo is given by</p>
<disp-formula id="E20"><label>(20)</label><mml:math id="M24"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:mi>C</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>j</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mstyle displaystyle="true"><mml:munderover accentunder="false" accent="false"><mml:mrow><mml:mo>&#x02211;</mml:mo></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>M</mml:mi></mml:mrow></mml:munderover></mml:mstyle><mml:mi>A</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mi>T</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>.</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>The values taken by <italic>C</italic>(<italic>j</italic>) can range from negative to neutral to positive, and their profile (i.e., the associated histogram) across features is assumed to be immediately available in PM. Only traces that are similar to the probe become strongly activated. The author remarks that those traces can contain information not present in the probe itself, and thus, the model is capable of associative recall.</p>
<p>In order to simulate the retrieval of associative information, the set of features can be divided into two segments. For example, to represent face-name pairs, features <italic>j</italic> &#x0003D; 1, ..., 10 might be reserved for the faces and the remaining features, <italic>j</italic> &#x0003D; 11, ..., 20, for the names. Then, a trace of 20 features would represent a single occurrence of a particular pair. Recall of a name upon presentation of a face can be accomplished with a probe having <italic>j</italic> &#x0003D; 1, ..., 10 filled in and <italic>j</italic> &#x0003D; 11, ..., 20 set to 0, focusing on <italic>C</italic>(11), ..., <italic>C</italic>(20) in the echo. Retrieval of a face given a name would be done in the opposite fashion.</p>
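The face-name retrieval scheme described above can be sketched as follows; the two pairs are hand-built toy vectors (an assumption for illustration), and the echo content follows Equation (20):

```python
def similarity(probe, trace):
    """S(i), Equation (17)."""
    n = sum(1 for p, t in zip(probe, trace) if p != 0 or t != 0)
    return sum(p * t for p, t in zip(probe, trace)) / n

def echo_content(probe, memory):
    """C(j), Equation (20): activation-weighted sum of trace features."""
    return [sum(similarity(probe, tr) ** 3 * tr[j] for tr in memory)
            for j in range(len(probe))]

# Two toy face-name pairs: features 0-9 hold the face, 10-19 the name
face_a, name_a = [1] * 5 + [-1] * 5, [1, -1] * 5
face_b, name_b = [-1, 1] * 5, [-1] * 10
memory = [face_a + name_a, face_b + name_b]

# Probe with face A only; the name features are set to 0
echo = echo_content(face_a + [0] * 10, memory)
recalled = [1 if c > 0 else -1 for c in echo[10:]]
assert recalled == name_a  # the name half of the echo recovers the paired name
```

Retrieving a face from a name would use the mirror-image probe, with the face features zeroed instead.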
</sec>
</sec>
<sec>
<title>2.5.2 Strong and weak points of MINERVA 2</title>
<sec>
<title>2.5.2.1 Strong points</title>
<p>MINERVA 2 can deal with the problem of &#x0201C;ambiguous recall.&#x0201D; The ambiguous recall problem is that information retrieved from memory is sometimes only vaguely similar to what was originally stored or to any acceptable response.</p>
</sec>
<sec>
<title>2.5.2.2 Weak points</title>
<p>The model is very simple and therefore limited in its applications.</p>
</sec>
</sec>
<sec>
<title>2.5.3 Mathematical developments</title>
<p>There is a rich literature regarding developments as well as implementations of the Hintzman model. For example, the ATHENA model (see Briglia et al., <xref ref-type="bibr" rid="B10">2018</xref>) is an enactivist<xref ref-type="fn" rid="fn0002"><sup>2</sup></xref> mathematical formalization of the Act-In model by Versace et al. (<xref ref-type="bibr" rid="B58">2014</xref>) built on MINERVA 2&#x00027;s non-specific traces: ATHENA is a fractal model that keeps track of the former processes that led to the emergence of knowledge; in this way, it can handle contextual processes (abstraction manipulation). An interesting characteristic of ATHENA is that it is a memory model based on an inference process that is able to extrapolate a memory from very little information (Tenenbaum et al., <xref ref-type="bibr" rid="B56">2011</xref>). As a consequence, ATHENA accounts for the subjective feeling of recognition, unlike MINERVA 2 (for details see Benjamin and Hirshman, <xref ref-type="bibr" rid="B8">1998</xref>). As a final remark, it should be noted that Nelson and Shiffrin (<xref ref-type="bibr" rid="B41">2013</xref>) considered that this process should be implemented in SARKAE, as suggested and described by Cox and Shiffrin (<xref ref-type="bibr" rid="B12">2017</xref>).</p>
</sec>
</sec>
<sec>
<title>2.6 Computational models of memory search</title>
<p>Kahana (<xref ref-type="bibr" rid="B27">2020</xref>) reviewed the fundamental concepts in the mathematical modeling of human memory. We think they are worth analyzing.</p>
<sec>
<title>2.6.1 Representational assumptions</title>
<p>The act of remembering involves accessing stored information from experiences that are no longer in the conscious present. In order to model remembering, it is therefore necessary to define the representation that is being remembered. Mathematically, a static image can be represented as a two-dimensional matrix, whose columns can be stacked to form a vector. Memories can also unfold over time, as in remembering speech, music, or actions. Although one can model such memories as a vector function of time, theorists usually eschew this added complexity, adopting a unitization assumption that underlies nearly all modern memory models. The unitization assumption states that the continuous stream of sensory input is interpreted and analyzed in terms of meaningful units of information. These units, represented as vectors, form the building blocks of memory and serve as both the inputs and the outputs of memory models. Scientists interested in memory study the encoding, storage, and retrieval of these units of memory.</p>
<p>Let <inline-formula><mml:math id="M25"><mml:mover class="overrightarrow"><mml:mrow><mml:msub><mml:mrow><mml:mi>f</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>&#x020D7;</mml:mo></mml:mover><mml:mo>&#x02208;</mml:mo><mml:msup><mml:mrow><mml:mi>&#x0211D;</mml:mi></mml:mrow><mml:mrow><mml:mi>N</mml:mi></mml:mrow></mml:msup></mml:math></inline-formula> represent the memorial representation (vector) of item <italic>i</italic> in the vector space &#x0211D;<sup><italic>N</italic></sup>. The <italic>N</italic> elements of the vector <inline-formula><mml:math id="M26"><mml:mover class="overrightarrow"><mml:mrow><mml:msub><mml:mrow><mml:mi>f</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>&#x020D7;</mml:mo></mml:mover></mml:math></inline-formula> are denoted by <inline-formula><mml:math id="M27"><mml:msub><mml:mrow><mml:mi>f</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mn>1</mml:mn></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mi>f</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mn>2</mml:mn></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>,</mml:mo><mml:mo>.</mml:mo><mml:mo>.</mml:mo><mml:mo>.</mml:mo><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mi>f</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>N</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:math></inline-formula>, which represent information in either a localist or a distributed manner. 
According to localist models, each item vector has a single, unique, non-zero element, with each element thus corresponding to a unique item in memory. Hence, the localist representation of item <italic>i</italic> can be viewed as a vector <inline-formula><mml:math id="M28"><mml:mover class="overrightarrow"><mml:mrow><mml:msub><mml:mrow><mml:mi>f</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>&#x020D7;</mml:mo></mml:mover><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>j</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:math></inline-formula>, whose elements <italic>f</italic><sub><italic>i</italic></sub>(<italic>j</italic>) are defined such that</p>
<disp-formula id="E21"><label>(21)</label><mml:math id="M29"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mi>f</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>j</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mrow><mml:mo>{</mml:mo><mml:mrow><mml:mtable style="text-align:axis;" equalrows="false" columnlines="none" equalcolumns="false" class="array"><mml:mtr><mml:mtd><mml:mn>0</mml:mn></mml:mtd><mml:mtd><mml:mtext class="textrm" mathvariant="normal">&#x000A0;if&#x000A0;</mml:mtext><mml:mi>i</mml:mi><mml:mo>&#x02260;</mml:mo><mml:mi>j</mml:mi></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mn>1</mml:mn></mml:mtd><mml:mtd><mml:mtext class="textrm" mathvariant="normal">&#x000A0;if&#x000A0;</mml:mtext><mml:mi>i</mml:mi><mml:mo>=</mml:mo><mml:mi>j</mml:mi></mml:mtd></mml:mtr></mml:mtable></mml:mrow></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>Such localist item vectors are simply the standard unit vectors.</p>
<p>By contrast, according to distributed models, the features representing an item are distributed across many or all of the elements. In this case, a probability <italic>p</italic> that a feature takes the value 1 must be introduced. In detail, consider the case where <italic>f</italic><sub><italic>i</italic></sub>(<italic>j</italic>) &#x0003D; 1 with probability <italic>p</italic> and <italic>f</italic><sub><italic>i</italic></sub>(<italic>j</italic>) &#x0003D; 0 with probability 1 &#x02212; <italic>p</italic>. The expected correlation between any two such random vectors will be zero, but the actual correlation will vary around zero. The same is true for random vectors composed of Gaussian features, as is commonly assumed in distributed memory models (see for example Kahana et al., <xref ref-type="bibr" rid="B29">2005</xref>).</p>
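The near-zero (but not exactly zero) correlation between random distributed vectors is easy to verify; a minimal sketch, assuming binary features drawn with p = 0.5:

```python
import random

random.seed(0)
N, p = 1000, 0.5  # vector length and feature probability (assumed values)

def rand_item():
    """A distributed item vector: each feature is 1 with probability p, else 0."""
    return [1 if random.random() < p else 0 for _ in range(N)]

def corr(u, v):
    """Pearson correlation between two feature vectors."""
    mu_u, mu_v = sum(u) / N, sum(v) / N
    cov = sum((a - mu_u) * (b - mu_v) for a, b in zip(u, v)) / N
    var_u = sum((a - mu_u) ** 2 for a in u) / N
    var_v = sum((b - mu_v) ** 2 for b in v) / N
    return cov / (var_u * var_v) ** 0.5

r = corr(rand_item(), rand_item())
# r hovers near zero; its standard deviation is roughly 1 / sqrt(N)
assert abs(r) < 0.2
```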
</sec>
<sec>
<title>2.6.2 Multitrace theory</title>
<p>Encoding is the set of processes by which a subject (the learner) records information into memory. The subject does not simply record sensory images but, rather, creates the multidimensional (i.e., vectorial) representation of items and produces a lasting record of the vector representation of experience. A single vector, however, is not enough to describe how the brain records a lasting impression of an encoded item or experience; another mathematical tool is needed: the matrix. Mathematically, the set of items in memory forms a matrix, that is, an array in which each row represents a feature or dimension and each column represents a distinct item occurrence. The matrix encoding item vectors <inline-formula><mml:math id="M30"><mml:mover class="overrightarrow"><mml:mrow><mml:msub><mml:mrow><mml:mi>f</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub></mml:mrow><mml:mo>&#x020D7;</mml:mo></mml:mover><mml:mo>,</mml:mo><mml:mover class="overrightarrow"><mml:mrow><mml:msub><mml:mrow><mml:mi>f</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub></mml:mrow><mml:mo>&#x020D7;</mml:mo></mml:mover><mml:mo>,</mml:mo><mml:mover class="overrightarrow"><mml:mrow><mml:msub><mml:mrow><mml:mi>f</mml:mi></mml:mrow><mml:mrow><mml:mn>3</mml:mn></mml:mrow></mml:msub></mml:mrow><mml:mo>&#x020D7;</mml:mo></mml:mover><mml:mo>,</mml:mo><mml:mo>.</mml:mo><mml:mo>.</mml:mo><mml:mo>.</mml:mo><mml:mo>,</mml:mo><mml:mover class="overrightarrow"><mml:mrow><mml:msub><mml:mrow><mml:mi>f</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>&#x020D7;</mml:mo></mml:mover></mml:math></inline-formula> can be represented as follows:</p>
<disp-formula id="E22"><label>(22)</label><mml:math id="M31"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:mover accent="false" class="mml-overline"><mml:mrow><mml:mover accent="false" class="mml-overline"><mml:mrow><mml:mi>M</mml:mi></mml:mrow><mml:mo accent="true">&#x000AF;</mml:mo></mml:mover></mml:mrow><mml:mo accent="true">&#x000AF;</mml:mo></mml:mover><mml:mo>=</mml:mo><mml:mrow><mml:mo>[</mml:mo><mml:mrow><mml:mtable style="text-align:axis;" equalrows="false" columnlines="none none none none none none none none none" equalcolumns="false" class="array"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mi>f</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mn>1</mml:mn></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mtd><mml:mtd><mml:msub><mml:mrow><mml:mi>f</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mn>1</mml:mn></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mtd><mml:mtd><mml:mo>&#x022EF;</mml:mo></mml:mtd><mml:mtd><mml:msub><mml:mrow><mml:mi>f</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mn>1</mml:mn></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mi>f</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mn>2</mml:mn></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mtd><mml:mtd><mml:msub><mml:mrow><mml:mi>f</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mn>2</mml:mn></mml:mrow><mml:mo 
stretchy="false">)</mml:mo></mml:mrow></mml:mtd><mml:mtd><mml:mo>&#x022EF;</mml:mo></mml:mtd><mml:mtd><mml:msub><mml:mrow><mml:mi>f</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mn>2</mml:mn></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mo>&#x022EE;</mml:mo></mml:mtd><mml:mtd><mml:mo>&#x022EE;</mml:mo></mml:mtd><mml:mtd><mml:mo>&#x022EE;</mml:mo></mml:mtd><mml:mtd><mml:mo>&#x022EE;</mml:mo></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mi>f</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>N</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mtd><mml:mtd><mml:msub><mml:mrow><mml:mi>f</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>N</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mtd><mml:mtd><mml:mo>&#x022EF;</mml:mo></mml:mtd><mml:mtd><mml:msub><mml:mrow><mml:mi>f</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>N</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:mrow><mml:mo>]</mml:mo></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>where the first column of the matrix represents the entries (i.e., the elements) of vector <inline-formula><mml:math id="M32"><mml:mover class="overrightarrow"><mml:mrow><mml:msub><mml:mrow><mml:mi>f</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub></mml:mrow><mml:mo>&#x020D7;</mml:mo></mml:mover></mml:math></inline-formula>, the second column the entries of vector <inline-formula><mml:math id="M33"><mml:mover class="overrightarrow"><mml:mrow><mml:msub><mml:mrow><mml:mi>f</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub></mml:mrow><mml:mo>&#x020D7;</mml:mo></mml:mover></mml:math></inline-formula>, and so on. The multitrace hypothesis implies that the number of traces can increase without bound. In summary, the multitrace theory posits that new experiences, including repeated ones, add more columns to the growing memory matrix <inline-formula><mml:math id="M34"><mml:mover accent="false" class="mml-overline"><mml:mrow><mml:mover accent="false" class="mml-overline"><mml:mrow><mml:mi>M</mml:mi></mml:mrow><mml:mo accent="true">&#x000AF;</mml:mo></mml:mover></mml:mrow><mml:mo accent="true">&#x000AF;</mml:mo></mml:mover></mml:math></inline-formula> described in Equation (22). Nevertheless, without positing some form of data compression, the multitrace hypothesis creates a formidable problem for theories of memory search.</p>
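The column-wise growth of the memory matrix in Equation (22) can be sketched as follows; this is one possible data layout for illustration, not a canonical implementation:

```python
# Memory matrix: rows are features, columns are item occurrences (Equation 22)
def store(memory_matrix, item):
    """Add one trace per occurrence: every presentation, even a repeat,
    appends a new column rather than strengthening an old one."""
    for row, feature in zip(memory_matrix, item):
        row.append(feature)

N = 4
M = [[] for _ in range(N)]   # N feature rows, no traces yet
item = [1, 0, -1, 1]
store(M, item)
store(M, item)               # a repetition adds a second, redundant column

assert len(M[0]) == 2                 # two stored traces (columns)
assert [row[0] for row in M] == item  # the first column is the item vector
```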
</sec>
<sec>
<title>2.6.3 Composite memories</title>
<p>This theory, in contrast with the view that each memory occupies its own separate storage location, states that memories blend together in the same manner that pictures may be combined (as happens in morphing). From a mathematical point of view, this translates into simply summing the vectors representing each image in memory. There are then at least two techniques for dealing with such a sum: first, averaging the sum of features, though in this way information about the individual exemplars is discarded; second, defining a composite storage model to account for data on recognition memory, as proposed by Murdock (<xref ref-type="bibr" rid="B36">1982</xref>). This model specifies the storage equation in the following way:</p>
<disp-formula id="E23"><label>(23)</label><mml:math id="M35"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:mover class="overrightarrow"><mml:mrow><mml:msub><mml:mrow><mml:mi>m</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>&#x020D7;</mml:mo></mml:mover><mml:mo>=</mml:mo><mml:mi>&#x003B1;</mml:mi><mml:mover class="overrightarrow"><mml:mrow><mml:msub><mml:mrow><mml:mi>m</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi><mml:mo>-</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub></mml:mrow><mml:mo>&#x020D7;</mml:mo></mml:mover><mml:mo>&#x0002B;</mml:mo><mml:mover accent="true"><mml:mrow><mml:mover accent="true"><mml:mrow><mml:msub><mml:mrow><mml:mi>B</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>&#x00304;</mml:mo></mml:mover></mml:mrow><mml:mo>&#x00304;</mml:mo></mml:mover><mml:mover class="overrightarrow"><mml:mrow><mml:msub><mml:mrow><mml:mi>f</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>&#x020D7;</mml:mo></mml:mover><mml:mo>,</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>where <inline-formula><mml:math id="M36"><mml:mover class="overrightarrow"><mml:mrow><mml:msub><mml:mrow><mml:mi>m</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>&#x020D7;</mml:mo></mml:mover></mml:math></inline-formula> is the memory vector and <inline-formula><mml:math id="M37"><mml:mover class="overrightarrow"><mml:mrow><mml:msub><mml:mrow><mml:mi>f</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>&#x020D7;</mml:mo></mml:mover></mml:math></inline-formula> represents the item studied at time <italic>t</italic>. The parameter &#x003B1;, with 0 &#x0003C; &#x003B1; &#x0003C; 1, is a forgetting parameter, and <inline-formula><mml:math id="M38"><mml:msub><mml:mrow><mml:mover accent="false" class="mml-overline"><mml:mrow><mml:mover accent="false" class="mml-overline"><mml:mrow><mml:mi>B</mml:mi></mml:mrow><mml:mo accent="true">&#x000AF;</mml:mo></mml:mover></mml:mrow><mml:mo accent="true">&#x000AF;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> is a diagonal matrix whose entries <inline-formula><mml:math id="M39"><mml:msub><mml:mrow><mml:mover accent="false" class="mml-overline"><mml:mrow><mml:mover accent="false" class="mml-overline"><mml:mrow><mml:mi>B</mml:mi></mml:mrow><mml:mo accent="true">&#x000AF;</mml:mo></mml:mover></mml:mrow><mml:mo accent="true">&#x000AF;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>i</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:math></inline-formula> are independent Bernoulli random variables (i.e., variables that take the value 1 with probability <italic>p</italic> and 0 with probability 1 &#x02212; <italic>p</italic>). The model parameter, <italic>p</italic>, determines the average proportion of features stored in memory when an item is studied.</p>
<p>If the same item is repeated, it is encoded again. Some of the features sampled on the repetition may not have been sampled previously; hence, repeated presentations fill in the missing features, thereby differentiating memories and facilitating learning. The features of the studied items can be treated as independent and identically distributed normal random variables, as done by Murdock (<xref ref-type="bibr" rid="B36">1982</xref>).</p>
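<p>A minimal simulation of the storage rule in Equation (23), with illustrative (hypothetical) parameter values, can be written as follows; the diagonal of the Bernoulli matrix is drawn as a vector of independent 0/1 variables:</p>

```python
import numpy as np

rng = np.random.default_rng(42)
N = 16       # feature dimension (illustrative)
alpha = 0.9  # forgetting parameter, 0 < alpha < 1
p = 0.6      # probability that any given feature is stored

m = np.zeros(N)  # composite memory vector, initially empty
for t in range(10):                 # study 10 items in sequence
    f = rng.standard_normal(N)      # item vector studied at time t
    b = rng.binomial(1, p, size=N)  # diagonal entries of B_t (Bernoulli)
    m = alpha * m + b * f           # Equation (23)
```

<p>Because &#x003B1; &#x0003C; 1, the contribution of older items decays geometrically, while each Bernoulli mask stores, on average, a proportion <italic>p</italic> of the current item&#x00027;s features.</p>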
<p>Rather than summing item vectors directly, it is better to first expand an item&#x00027;s representation into matrix form and then sum the resultant matrices; otherwise, there would be a substantial loss of information. Although this is beyond the scope of this study, we note that this operation forms the basis of many neural network models of human memory (Hertz et al., <xref ref-type="bibr" rid="B21">1991</xref>). In this case, the entries of vector <inline-formula><mml:math id="M40"><mml:mover class="overrightarrow"><mml:mrow><mml:mi>f</mml:mi></mml:mrow><mml:mo>&#x020D7;</mml:mo></mml:mover></mml:math></inline-formula> represent the firing rates of neurons, and the vector outer product <inline-formula><mml:math id="M41"><mml:mover class="overrightarrow"><mml:mrow><mml:mi>f</mml:mi></mml:mrow><mml:mo>&#x020D7;</mml:mo></mml:mover><mml:mo>&#x000B7;</mml:mo><mml:mover class="overrightarrow"><mml:mrow><mml:msup><mml:mrow><mml:mi>f</mml:mi></mml:mrow><mml:mrow><mml:mi>T</mml:mi></mml:mrow></mml:msup></mml:mrow><mml:mo>&#x020D7;</mml:mo></mml:mover></mml:math></inline-formula> forms a matrix <inline-formula><mml:math id="M42"><mml:mover accent="false" class="mml-overline"><mml:mrow><mml:mover accent="false" class="mml-overline"><mml:mrow><mml:mi>M</mml:mi></mml:mrow><mml:mo accent="true">&#x000AF;</mml:mo></mml:mover></mml:mrow><mml:mo accent="true">&#x000AF;</mml:mo></mml:mover></mml:math></inline-formula> whose entries are <inline-formula><mml:math id="M43"><mml:msub><mml:mrow><mml:mover accent="false" class="mml-overline"><mml:mrow><mml:mover accent="false" class="mml-overline"><mml:mrow><mml:mi>M</mml:mi></mml:mrow><mml:mo accent="true">&#x000AF;</mml:mo></mml:mover></mml:mrow><mml:mo accent="true">&#x000AF;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mi>f</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mi>f</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>j</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:math></inline-formula>. Incidentally, this matrix exemplifies Hebbian learning. However, this treatment could be considered oversimplified, since the Hopfield network is not considered. The matrix <inline-formula><mml:math id="M44"><mml:mover accent="false" class="mml-overline"><mml:mrow><mml:mover accent="false" class="mml-overline"><mml:mrow><mml:mi>M</mml:mi></mml:mrow><mml:mo accent="true">&#x000AF;</mml:mo></mml:mover></mml:mrow><mml:mo accent="true">&#x000AF;</mml:mo></mml:mover></mml:math></inline-formula> represents connections between neurons in the network, which in turn define the transitions of the network state; the fixed points of the dynamics are the desired memories. We refer interested readers to Hopfield (<xref ref-type="bibr" rid="B25">2007</xref>) and related references.</p>
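<p>The outer-product storage scheme can be sketched as follows (a single-pattern, Hopfield-style example with hypothetical dimensions; one synchronous update suffices here because only one pattern is stored):</p>

```python
import numpy as np

rng = np.random.default_rng(1)
N = 32
f = rng.choice([-1.0, 1.0], size=N)  # firing-rate pattern, +/-1 for simplicity

# Hebbian storage: M[i, j] = f(i) * f(j)
M = np.outer(f, f)

# Retrieval: start from a corrupted copy of f and apply one update step.
probe = f.copy()
probe[:4] *= -1                # flip a few features
recalled = np.sign(M @ probe)  # the stored pattern is a fixed point
```

<p>With only one stored pattern, the corrupted probe is mapped back onto <italic>f</italic> in a single step; with several patterns, crosstalk terms appear and the full Hopfield dynamics become relevant.</p>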
</sec>
<sec>
<title>2.6.4 Summed similarity</title>
<p>If an item has already been encoded and is encountered again, we often quickly recognize it as familiar. To create this sense of familiarity, the brain must somehow compare the representation of the new experience with the contents of memory. Such a search could be conducted serially or in parallel. In the former case, the target item is compared to each stored item in memory until a match is found; this process is generally slow. In the latter case, the target item is compared simultaneously with each of the items in memory; this process is faster. Nevertheless, one caveat must be considered: when an item is encoded in different situations, its representations will be very similar but not identical. Summed-similarity models offer a potential solution to this problem. Rather than requiring a perfect match, we compute the similarity for each comparison and sum these similarity values to determine the global match between the test probe and the contents of memory. Among such models, one of the simplest is the recognition theory first proposed by Anderson (<xref ref-type="bibr" rid="B2">1970</xref>) and later elaborated by Murdock (<xref ref-type="bibr" rid="B37">1989</xref>). The model elaborated by Murdock is called TODAM (Theory of Distributed Associative Memory). In this model, subjects store a weighted sum of item vectors in memory, as detailed in Equation (23). To establish whether a (test) item was already encoded, the dot product between the vector characterizing the item and the memory vector must exceed a threshold. 
Specifically, the model states that the probability of finding a perfect match (we denote this case with &#x0201C;OK&#x0201D;) between the test item (called <inline-formula><mml:math id="M45"><mml:mover class="overrightarrow"><mml:mrow><mml:mi>g</mml:mi></mml:mrow><mml:mo>&#x020D7;</mml:mo></mml:mover></mml:math></inline-formula>) and one of the stored memory vectors is</p>
<disp-formula id="E24"><label>(24)</label><mml:math id="M46"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:mi>P</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>O</mml:mi><mml:mi>K</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mi>P</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mover class="overrightarrow"><mml:mrow><mml:mi>g</mml:mi></mml:mrow><mml:mo>&#x020D7;</mml:mo></mml:mover><mml:mo>&#x000B7;</mml:mo><mml:mover class="overrightarrow"><mml:mrow><mml:msub><mml:mrow><mml:mi>m</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>&#x020D7;</mml:mo></mml:mover><mml:mo>&#x0003E;</mml:mo><mml:mi>k</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mi>P</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mover class="overrightarrow"><mml:mrow><mml:mi>g</mml:mi></mml:mrow><mml:mo>&#x020D7;</mml:mo></mml:mover><mml:mo>&#x000B7;</mml:mo><mml:mrow><mml:mo>[</mml:mo><mml:mrow><mml:mstyle displaystyle="true"><mml:munderover accentunder="false" accent="false"><mml:mrow><mml:mo>&#x02211;</mml:mo></mml:mrow><mml:mrow><mml:mi>t</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>L</mml:mi></mml:mrow></mml:munderover></mml:mstyle><mml:msup><mml:mrow><mml:mi>&#x003B1;</mml:mi></mml:mrow><mml:mrow><mml:mi>L</mml:mi><mml:mo>-</mml:mo><mml:mi>t</mml:mi></mml:mrow></mml:msup><mml:msub><mml:mrow><mml:mover accent="false" class="mml-overline"><mml:mrow><mml:mover accent="false" class="mml-overline"><mml:mrow><mml:mi>B</mml:mi></mml:mrow><mml:mo accent="true">&#x000AF;</mml:mo></mml:mover></mml:mrow><mml:mo accent="true">&#x000AF;</mml:mo></mml:mover></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mover 
class="overrightarrow"><mml:mrow><mml:msub><mml:mrow><mml:mi>f</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>&#x020D7;</mml:mo></mml:mover></mml:mrow><mml:mo>]</mml:mo></mml:mrow><mml:mo>&#x0003E;</mml:mo><mml:mi>k</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
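<p>A sketch of the recognition decision in Equation (24), using the storage rule of Equation (23); the parameter values and the threshold <italic>k</italic> are hypothetical choices for illustration, and no particular hit or false-alarm rate is implied:</p>

```python
import numpy as np

rng = np.random.default_rng(7)
N, L = 64, 8
alpha, p, k = 0.95, 0.8, 0.5  # forgetting, storage probability, threshold

# Study L items and build the composite memory vector of Equation (23).
items = rng.standard_normal((L, N)) / np.sqrt(N)  # roughly unit-length items
m = np.zeros(N)
for f in items:
    b = rng.binomial(1, p, size=N)
    m = alpha * m + b * f

def says_ok(g, memory, threshold=k):
    """Equation (24): respond 'OK' if the dot product exceeds the threshold."""
    return float(g @ memory) > threshold

studied_probe = items[-1]                         # an item that was studied
lure_probe = rng.standard_normal(N) / np.sqrt(N)  # an unstudied lure
```

<p>On average, a studied probe yields a dot product near <italic>p</italic> (larger for recently studied items), while a lure yields a value near zero plus noise; the threshold trades hits against false alarms.</p>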
<p>TODAM embodies the direct summation model of memory storage. Such a summation model implies that memories form a prototype representation: each individual memory contributes to a weighted average vector whose similarity to a test item determines the recognition decision. However, this approach has drawn some criticism. Studies of category learning indicate that models based on the summed similarity between the test cue and each individual stored memory provide a much better fit to the empirical data than do prototype models (Kahana and Bennett, <xref ref-type="bibr" rid="B28">1994</xref>). Some alternative approaches (see, for example, Nosofsky, <xref ref-type="bibr" rid="B42">1992</xref>) represent psychological similarity as an exponentially decaying function of a generalized distance measure. That is, they define the similarity between a test item, <inline-formula><mml:math id="M47"><mml:mover class="overrightarrow"><mml:mrow><mml:mi>g</mml:mi></mml:mrow><mml:mo>&#x020D7;</mml:mo></mml:mover></mml:math></inline-formula>, and a (fixed) studied item vector, <inline-formula><mml:math id="M48"><mml:mover class="overrightarrow"><mml:mrow><mml:msub><mml:mrow><mml:mi>f</mml:mi></mml:mrow><mml:mrow><mml:msup><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:mo>*</mml:mo></mml:mrow></mml:msup></mml:mrow></mml:msub></mml:mrow><mml:mo>&#x020D7;</mml:mo></mml:mover></mml:math></inline-formula>, where <italic>i</italic><sup>&#x0002A;</sup> is any fixed value between 1 and <italic>L</italic>, as</p>
<disp-formula id="E25"><label>(25)</label><mml:math id="M49"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:mi>S</mml:mi><mml:mi>i</mml:mi><mml:mi>m</mml:mi><mml:mi>i</mml:mi><mml:mi>l</mml:mi><mml:mi>a</mml:mi><mml:mi>r</mml:mi><mml:mi>i</mml:mi><mml:mi>t</mml:mi><mml:mi>y</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mover class="overrightarrow"><mml:mrow><mml:mi>g</mml:mi></mml:mrow><mml:mo>&#x020D7;</mml:mo></mml:mover><mml:mo>,</mml:mo><mml:mover class="overrightarrow"><mml:mrow><mml:mi>f</mml:mi></mml:mrow><mml:mo>&#x020D7;</mml:mo></mml:mover></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:msup><mml:mrow><mml:mi>e</mml:mi></mml:mrow><mml:mrow><mml:mo>-</mml:mo><mml:mi>&#x003C4;</mml:mi><mml:mo>&#x02016;</mml:mo><mml:mover class="overrightarrow"><mml:mrow><mml:mi>g</mml:mi></mml:mrow><mml:mo>&#x020D7;</mml:mo></mml:mover><mml:mo>-</mml:mo><mml:mover class="overrightarrow"><mml:mrow><mml:msub><mml:mrow><mml:mi>f</mml:mi></mml:mrow><mml:mrow><mml:msup><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:mo>*</mml:mo></mml:mrow></mml:msup></mml:mrow></mml:msub></mml:mrow><mml:mo>&#x020D7;</mml:mo></mml:mover><mml:msub><mml:mrow><mml:mo>&#x02016;</mml:mo></mml:mrow><mml:mrow><mml:mi>&#x003B3;</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:msup><mml:mo>=</mml:mo><mml:msup><mml:mrow><mml:mi>e</mml:mi></mml:mrow><mml:mrow><mml:mo>-</mml:mo><mml:mi>&#x003C4;</mml:mi><mml:msup><mml:mrow><mml:mrow><mml:mo>[</mml:mo><mml:mrow><mml:mstyle displaystyle="true"><mml:msubsup><mml:mrow><mml:mo>&#x02211;</mml:mo></mml:mrow><mml:mrow><mml:mi>j</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>N</mml:mi></mml:mrow></mml:msubsup></mml:mstyle><mml:msup><mml:mrow><mml:mrow><mml:mo>[</mml:mo><mml:mrow><mml:mi>g</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>j</mml:mi></mml:mrow><mml:mo 
stretchy="false">)</mml:mo></mml:mrow><mml:mo>-</mml:mo><mml:msub><mml:mrow><mml:mi>f</mml:mi></mml:mrow><mml:mrow><mml:msup><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:mo>*</mml:mo></mml:mrow></mml:msup></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>j</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow><mml:mo>]</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mi>&#x003B3;</mml:mi></mml:mrow></mml:msup></mml:mrow><mml:mo>]</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mfrac><mml:mrow><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>&#x003B3;</mml:mi></mml:mrow></mml:mfrac></mml:mrow></mml:msup></mml:mrow></mml:msup><mml:mo>,</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>where <italic>N</italic> is the number of features, &#x003B3; indicates the distance metric (&#x003B3; &#x0003D; 2 corresponds to the Euclidean norm), and &#x003C4; determines how quickly similarity decays with distance. Equation (25) can be generalized to <italic>L</italic> items by considering the encoded item vectors <inline-formula><mml:math id="M50"><mml:mover class="overrightarrow"><mml:mrow><mml:msub><mml:mrow><mml:mi>f</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>&#x020D7;</mml:mo></mml:mover></mml:math></inline-formula>, <italic>i</italic> &#x0003D; 1, ..., <italic>L</italic>, and the corresponding memory matrix <inline-formula><mml:math id="M51"><mml:mover accent="false" class="mml-overline"><mml:mrow><mml:mover accent="false" class="mml-overline"><mml:mrow><mml:mi>M</mml:mi></mml:mrow><mml:mo accent="true">&#x000AF;</mml:mo></mml:mover></mml:mrow><mml:mo accent="true">&#x000AF;</mml:mo></mml:mover><mml:mo>=</mml:mo><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mover class="overrightarrow"><mml:mrow><mml:msub><mml:mrow><mml:mi>f</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub></mml:mrow><mml:mo>&#x020D7;</mml:mo></mml:mover><mml:mo>,</mml:mo><mml:mover class="overrightarrow"><mml:mrow><mml:msub><mml:mrow><mml:mi>f</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub></mml:mrow><mml:mo>&#x020D7;</mml:mo></mml:mover><mml:mo>,</mml:mo><mml:mover class="overrightarrow"><mml:mrow><mml:msub><mml:mrow><mml:mi>f</mml:mi></mml:mrow><mml:mrow><mml:mn>3</mml:mn></mml:mrow></mml:msub></mml:mrow><mml:mo>&#x020D7;</mml:mo></mml:mover><mml:mo>,</mml:mo><mml:mo>.</mml:mo><mml:mo>.</mml:mo><mml:mo>.</mml:mo><mml:mo>,</mml:mo><mml:mover class="overrightarrow"><mml:mrow><mml:msub><mml:mrow><mml:mi>f</mml:mi></mml:mrow><mml:mrow><mml:mi>L</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>&#x020D7;</mml:mo></mml:mover></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:math></inline-formula>. Then, the generalized equation is obtained by summing the similarities between <inline-formula><mml:math id="M52"><mml:mover class="overrightarrow"><mml:mrow><mml:mi>g</mml:mi></mml:mrow><mml:mo>&#x020D7;</mml:mo></mml:mover></mml:math></inline-formula> and each of the stored vectors in memory,</p>
<disp-formula id="E26"><label>(26)</label><mml:math id="M53"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:mi>S</mml:mi><mml:mo>=</mml:mo><mml:mstyle displaystyle="true"><mml:munderover accentunder="false" accent="false"><mml:mrow><mml:mo>&#x02211;</mml:mo></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>L</mml:mi></mml:mrow></mml:munderover></mml:mstyle><mml:mi>S</mml:mi><mml:mi>i</mml:mi><mml:mi>m</mml:mi><mml:mi>i</mml:mi><mml:mi>l</mml:mi><mml:mi>a</mml:mi><mml:mi>r</mml:mi><mml:mi>i</mml:mi><mml:mi>t</mml:mi><mml:mi>y</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mover class="overrightarrow"><mml:mrow><mml:mi>g</mml:mi></mml:mrow><mml:mo>&#x020D7;</mml:mo></mml:mover><mml:mo>,</mml:mo><mml:mover class="overrightarrow"><mml:mrow><mml:msub><mml:mrow><mml:mi>f</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>&#x020D7;</mml:mo></mml:mover></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>.</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>The summed-similarity model generates an &#x0201C;OK&#x0201D; match if <italic>S</italic> exceeds a threshold.</p>
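<p>Equations (25) and (26) translate directly into code; the following sketch uses hypothetical items and the Euclidean case &#x003B3; &#x0003D; 2:</p>

```python
import numpy as np

def similarity(g, f, tau=1.0, gamma=2.0):
    """Equation (25): similarity decays exponentially with gamma-metric distance."""
    dist = np.sum(np.abs(g - f) ** gamma) ** (1.0 / gamma)
    return np.exp(-tau * dist)

def summed_similarity(g, items, tau=1.0, gamma=2.0):
    """Equation (26): sum the similarity of probe g to every stored item."""
    return sum(similarity(g, f, tau, gamma) for f in items)

rng = np.random.default_rng(3)
items = rng.standard_normal((5, 10))  # L = 5 studied items, N = 10 features

S_old = summed_similarity(items[0], items)                 # probe matches a studied item
S_new = summed_similarity(rng.standard_normal(10), items)  # unstudied probe
```

<p>A probe identical to a studied item contributes a similarity of exactly 1 for its own trace, so <italic>S</italic> tends to be larger for old probes than for lures; recognition then reduces to comparing <italic>S</italic> with a threshold.</p>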
<p>We remark that <inline-formula><mml:math id="M54"><mml:mover class="overrightarrow"><mml:mrow><mml:mi>g</mml:mi></mml:mrow><mml:mo>&#x020D7;</mml:mo></mml:mover></mml:math></inline-formula> can play the role either of target (i.e., <inline-formula><mml:math id="M55"><mml:mover class="overrightarrow"><mml:mrow><mml:mi>g</mml:mi></mml:mrow><mml:mo>&#x020D7;</mml:mo></mml:mover><mml:mo>=</mml:mo><mml:mover class="overrightarrow"><mml:mrow><mml:msub><mml:mrow><mml:mi>f</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>&#x020D7;</mml:mo></mml:mover></mml:math></inline-formula> for some value of <italic>i</italic>) or probe, in this last case <inline-formula><mml:math id="M56"><mml:mover class="overrightarrow"><mml:mrow><mml:mi>g</mml:mi></mml:mrow><mml:mo>&#x020D7;</mml:mo></mml:mover><mml:mo>&#x02209;</mml:mo><mml:mover accent="false" class="mml-overline"><mml:mrow><mml:mover accent="false" class="mml-overline"><mml:mrow><mml:mi>M</mml:mi></mml:mrow><mml:mo accent="true">&#x000AF;</mml:mo></mml:mover></mml:mrow><mml:mo accent="true">&#x000AF;</mml:mo></mml:mover></mml:math></inline-formula>.</p>
</sec>
<sec>
<title>2.6.5 Contextual coding</title>
<p>Another relevant point in the study of memory encoding is temporal coding: associations are learned not only among items but also between items and their situational, temporal, and/or spatial context (see, for example, some fundamental studies such as Carr, <xref ref-type="bibr" rid="B11">1931</xref>). The idea of temporal coding was developed further by Tulving and Madigan (<xref ref-type="bibr" rid="B57">1970</xref>), who distinguished temporal coding from contemporary interpretations of context. Subsequent research, however, brought these two views of context together, as in Bower&#x00027;s temporal context model (Bower, <xref ref-type="bibr" rid="B9">1972</xref>). According to Bower&#x00027;s model, contextual representations constitute a multitude of fluctuating features, defining a vector that slowly drifts through a multidimensional context space. These contextual features form part of each memory, combining with other aspects of externally and internally generated experience. Because a unique context vector marks each remembered experience, and because context gradually drifts, the context vector conveys information about the time at which an event was experienced. By allowing for a dynamic representation of temporal context, items within a given list will have more overlap in their contextual attributes than items studied on different lists or, indeed, items that were not part of an experiment (see Bower, <xref ref-type="bibr" rid="B9">1972</xref>). 
It is possible to implement a simple model of contextual drift by defining a multidimensional context vector, <inline-formula><mml:math id="M57"><mml:mover class="overrightarrow"><mml:mrow><mml:mi>c</mml:mi></mml:mrow><mml:mo>&#x020D7;</mml:mo></mml:mover><mml:mo>=</mml:mo><mml:mrow><mml:mo>[</mml:mo><mml:mrow><mml:mi>c</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mn>1</mml:mn></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>,</mml:mo><mml:mi>c</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mn>2</mml:mn></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>,</mml:mo><mml:mo>.</mml:mo><mml:mo>.</mml:mo><mml:mo>.</mml:mo><mml:mo>,</mml:mo><mml:mi>c</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>N</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow><mml:mo>]</mml:mo></mml:mrow></mml:math></inline-formula>, and specifying a process for its temporal evolution. To this end, one must specify a unique random set of context features for each list in a memory experiment, or for each experience encountered in a particular situational context. However, contextual attributes fluctuate as a result of many internal and external variables that vary at many different timescales. An alternative approach, proposed by Murdock (<xref ref-type="bibr" rid="B38">1997</xref>), is to write down an autoregressive model for contextual drift, such as</p>
<disp-formula id="E27"><label>(27)</label><mml:math id="M58"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:mover class="overrightarrow"><mml:mrow><mml:msub><mml:mrow><mml:mi>c</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>&#x020D7;</mml:mo></mml:mover><mml:mo>=</mml:mo><mml:mi>&#x003C1;</mml:mi><mml:mover class="overrightarrow"><mml:mrow><mml:msub><mml:mrow><mml:mi>c</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>-</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub></mml:mrow><mml:mo>&#x020D7;</mml:mo></mml:mover><mml:mo>&#x0002B;</mml:mo><mml:msqrt><mml:mrow><mml:mn>1</mml:mn><mml:mo>-</mml:mo><mml:msup><mml:mrow><mml:mi>&#x003C1;</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup></mml:mrow></mml:msqrt><mml:mover class="overrightarrow"><mml:mrow><mml:msub><mml:mrow><mml:mi>&#x003F5;</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>&#x020D7;</mml:mo></mml:mover></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>where <inline-formula><mml:math id="M59"><mml:mover class="overrightarrow"><mml:mrow><mml:mi>&#x003F5;</mml:mi></mml:mrow><mml:mo>&#x020D7;</mml:mo></mml:mover></mml:math></inline-formula> is a random vector whose elements are each drawn from a Gaussian distribution, and <italic>i</italic> indexes each item presentation. The variance of the Gaussian is defined such that the inner product <inline-formula><mml:math id="M60"><mml:mover class="overrightarrow"><mml:mrow><mml:msub><mml:mrow><mml:mi>&#x003F5;</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>&#x020D7;</mml:mo></mml:mover><mml:mo>&#x000B7;</mml:mo><mml:mover class="overrightarrow"><mml:mrow><mml:msub><mml:mrow><mml:mi>&#x003F5;</mml:mi></mml:mrow><mml:mrow><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>&#x020D7;</mml:mo></mml:mover></mml:math></inline-formula> equals one for <italic>i</italic> &#x0003D; <italic>j</italic> and zero for <italic>i</italic> &#x02260; <italic>j</italic>. Accordingly, the similarity between the context vectors at time steps <italic>i</italic> and <italic>j</italic> falls off exponentially with their separation: <inline-formula><mml:math id="M61"><mml:mover class="overrightarrow"><mml:mrow><mml:msub><mml:mrow><mml:mi>c</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>&#x020D7;</mml:mo></mml:mover><mml:mo>&#x000B7;</mml:mo><mml:mover class="overrightarrow"><mml:mrow><mml:msub><mml:mrow><mml:mi>c</mml:mi></mml:mrow><mml:mrow><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo>&#x020D7;</mml:mo></mml:mover><mml:mo>=</mml:mo><mml:msup><mml:mrow><mml:mi>&#x003C1;</mml:mi></mml:mrow><mml:mrow><mml:mo>|</mml:mo><mml:mi>i</mml:mi><mml:mo>-</mml:mo><mml:mi>j</mml:mi><mml:mo>|</mml:mo></mml:mrow></mml:msup></mml:math></inline-formula>. 
This means that the change in context between the study of an item and its later test will increase with the number of items intervening between study and test, producing the classic forgetting curve. In terms of the study of memory, and in continuity with the sections above, it is possible to concatenate each item vector with the vector representation of context at the time of encoding (or retrieval) and store the associative matrices used to simulate recognition and recall in our earlier examples. Alternatively, one can directly associate context and item vectors in the same way that item vectors are associated with one another.</p>
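<p>The drift process can be simulated as below, assuming the first-order autoregressive update with coefficient &#x003C1; and noise scaled so that context vectors keep unit expected length; the dimension here is deliberately large (a hypothetical choice) so that sample dot products approximate their expectations:</p>

```python
import numpy as np

rng = np.random.default_rng(5)
N = 10_000  # large so sample dot products approximate expectations
rho = 0.9
steps = 10

def noise():
    # entries of variance 1/N, so each noise vector has unit expected length
    return rng.standard_normal(N) / np.sqrt(N)

contexts = [noise()]  # c_0
for _ in range(steps):
    # c_i = rho * c_{i-1} + sqrt(1 - rho**2) * epsilon_i
    c = rho * contexts[-1] + np.sqrt(1.0 - rho ** 2) * noise()
    contexts.append(c)

sim_1 = contexts[0] @ contexts[1]  # close to rho
sim_5 = contexts[0] @ contexts[5]  # close to rho ** 5
```

<p>The dot product between contexts separated by <italic>n</italic> steps concentrates around &#x003C1;<sup><italic>n</italic></sup>, reproducing the exponential fall-off of contextual similarity and, with it, the classic forgetting curve.</p>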
</sec>
<sec>
<title>2.6.6 Strong and weak points of the models</title>
<sec>
<title>2.6.6.1 Strong points</title>
<p>The models described above are grounded in mathematics, particularly linear algebra. In this sense, they are definitely innovative. One immediate consequence is that a computational approach, that is, the implementation of these models in code, can be pursued naturally.</p>
</sec>
<sec>
<title>2.6.6.2 Weak points</title>
<p>The models share a main limitation: they cannot explain diseases affecting episodic memory. To overcome this criticism, their analytical form would need to be modified.</p>
</sec>
</sec>
<sec>
<title>2.6.7 Mathematical developments</title>
<p>These models are quite recent; therefore, as far as we know, no further developments have been published in the literature yet.</p>
</sec>
</sec>
<sec>
<title>2.7 Conclusion and future challenges</title>
<p>Modeling and computation are set to play an increasingly important role in (neuro)psychology, neuroscience, and psychiatry. One of the most important consequences of the mathematical modeling of human memory is a better understanding of the diseases affecting it. Modeling such diseases and finding computational biomarkers could also be of great help to (neuro)psychologists and physicians. As a final step, we briefly describe the most relevant memory diseases whose distinctive traits, such as amnesias, could be mathematically modeled.</p>
<sec>
<title>2.7.1 Alzheimer&#x00027;s disease (AD)</title>
<p>Alzheimer&#x00027;s disease (AD) is perhaps the best-known neurological disease affecting memory (Eustache et al., <xref ref-type="bibr" rid="B15">1990</xref>) and the most common form of dementia (Jack, <xref ref-type="bibr" rid="B26">2012</xref>). It is a progressive, degenerative, and fatal brain disease in which synaptic connections in the brain are lost. The evidence suggests that women with AD display more severe cognitive impairment than age-matched males with AD, as well as a more rapid rate of cognitive decline (Dunkin, <xref ref-type="bibr" rid="B13">2009</xref>).</p>
</sec>
<sec>
<title>2.7.2 Semantic dementia (SD)</title>
<p>Semantic dementia (SD) designates a progressive cognitive and language deficit, primarily involving comprehension of words and related semantic processing, as described in the pioneering work of Pick (<xref ref-type="bibr" rid="B43">1904</xref>). These patients lose the meaning of words, usually nouns, but retain fluency, phonology, and syntax. Semantic dementia is distinguishable from other presentations of frontotemporal dementia (see Section 2.7.3) and Alzheimer&#x00027;s disease (see Section 2.7.1) not only by fluent speech and impaired comprehension without the loss of episodic memory, syntax, and phonology but also by empty, garrulous speech with thematic perseverations, semantic paraphasias, and poor category fluency.</p>
</sec>
<sec>
<title>2.7.3 Fronto-temporal dementia (FTD)</title>
<p>Frontotemporal dementia is an uncommon type of dementia that causes problems with behavior and language. It results from damage to neurons in the frontal and temporal lobes of the brain. Many symptoms can result, including unusual behaviors, emotional problems, difficulty communicating, difficulty at work, and difficulty walking.</p>
</sec>
<sec>
<title>2.7.4 A case study: autobiographical amnesia</title>
<p>Among neurodegenerative diseases, one relevant case of interest is autobiographical amnesia (Piolino et al., <xref ref-type="bibr" rid="B44">2003</xref>). There are different theories of long-term memory consolidation that can be applied to investigate pathologies involving memory. For example, according to the standard model of systems consolidation (SMSC) (Squire and Alvarez, <xref ref-type="bibr" rid="B53">1995</xref>), the medial temporal lobe (MTL) is involved in the storage and retrieval of episodic and semantic memories during a limited period of years. An alternative model of memory consolidation, called the multiple trace theory (MTT), posits that each time some information is presented to a person, it is neurally encoded in a unique memory trace composed of a combination of its attributes (Semon, <xref ref-type="bibr" rid="B50">1923</xref>). In other words, it suggests that the capacity of the MTL to recollect episodic memories is of a more permanent nature. To test these models, Piolino et al. (<xref ref-type="bibr" rid="B44">2003</xref>) studied three groups of patients with a neurodegenerative disease predominantly affecting different cerebral structures, namely, the MTL (patients in the early stages of Alzheimer&#x00027;s disease) and the neocortex involving either the anterior temporal lobe (patients with semantic dementia) or the frontal lobe (patients with the frontal variant of frontotemporal dementia, fv-FTD). They then compared these groups of patients (the three groups were of nearly the same size) with control subjects, using an autobiographical memory task designed specifically to assess strictly episodic memory over the entire lifespan.</p>
<p>This task assesses the ability to mentally travel back in time and re-experience the source of acquisition by means of the remember/know paradigm. The outcome was interesting: the three groups of patients produced strongly contrasting profiles of autobiographical amnesia in comparison with the control group, regardless of the nature of the memories. In detail, Alzheimer&#x00027;s disease showed temporally graded memory loss, with remote memories better preserved than recent ones; semantic dementia was characterized by a reversed gradient,<xref ref-type="fn" rid="fn0003"><sup>3</sup></xref> while memory loss without any clear gradient was found in fv-FTD. By focusing on episodic memories (see Section 1), the authors found that these were impaired in all three groups, whatever the time interval considered, though the memory loss was ungraded (i.e., no temporal gradient was detected) in Alzheimer&#x00027;s disease and fv-FTD and temporally graded in semantic dementia, sparing the most recent period.<xref ref-type="fn" rid="fn0004"><sup>4</sup></xref> A deficit of autonoetic consciousness<xref ref-type="fn" rid="fn0005"><sup>5</sup></xref> emerged in Alzheimer&#x00027;s disease and fv-FTD, but in semantic dementia only beyond the most recent 12-month period. The authors remarked that the semantic dementia group could not justify their subjective sense of remembering to the same extent as the controls, since they failed to provide contextual information, spatial or temporal details, etc. The results demonstrated that autobiographical amnesia varies according to the nature of the memories under consideration and the locus of cerebral dysfunction. Considering both competing models of long-term memory consolidation described above (i.e., SMSC and MTT), the authors observed that insights into episodic memory gained in the early 2000s challenge the standard model and tend to support the MTT instead.</p>
<sec>
<title>2.7.4.1 How mathematical models could address (autobiographical) amnesia</title>
<p>Having introduced autobiographical amnesia, we now provide the reader with examples of how amnesia can be modeled in different ways using some of the models, and their implementations, described above. A first approach is based on Ribot&#x00027;s law and its implementation (Murre et al., <xref ref-type="bibr" rid="B39">2013</xref>). Murre et al. modeled the decline function as an exponential with a constant decay rate, although the exponential-decline assumption is not critical to the working of the model. The relation between memory intensity and recall probability can be described by a simple function:</p>
<disp-formula id="E28"><label>(28)</label><mml:math id="M62"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:mi>p</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mn>1</mml:mn><mml:mo>-</mml:mo><mml:msup><mml:mrow><mml:mi>e</mml:mi></mml:mrow><mml:mrow><mml:mo>-</mml:mo><mml:mi>i</mml:mi><mml:mi>n</mml:mi><mml:mi>t</mml:mi><mml:mi>e</mml:mi><mml:mi>n</mml:mi><mml:mi>s</mml:mi><mml:mi>i</mml:mi><mml:mi>t</mml:mi><mml:mi>y</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow></mml:msup></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>Typically, a forgetting function is characterized by the fact that the &#x0201C;hippocampus&#x0201D; process declines rapidly, while the &#x0201C;neocortex&#x0201D; process builds up intensity. The neocortical process builds up slowly and eventually comes to a halt when the hippocampal process is depleted. Two kinds of parameters define the model. The first relates to how quickly newly created traces fill up a process: &#x003BC;<sub>1</sub> denotes the intensity gained during learning (a process in which the hippocampus plays a role) and &#x003BC;<sub>2</sub> the rate at which consolidation fills the neocortex. The second is the decline rate, which the authors designate as <italic>a</italic><sub>1</sub> and <italic>a</italic><sub>2</sub> for the hippocampal and neocortical processes, respectively.</p>
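To make the two-process description concrete, the sketch below implements one possible instance of such a model in Python: an exponentially declining hippocampal intensity, a neocortical intensity built up by consolidation, and recall probability obtained through Equation (28). The specific closed forms and all parameter values are illustrative assumptions for demonstration, not the authors' fitted model.

```python
import math

# Illustrative (hypothetical) parameter values, not fitted estimates
MU1, A1 = 1.0, 0.5   # hippocampal: intensity gained at learning, decline rate
MU2, A2 = 0.2, 0.05  # neocortical: consolidation rate, decline rate

def r1(t):
    """Hippocampal intensity: declines rapidly after learning."""
    return MU1 * math.exp(-A1 * t)

def r2(t):
    """Neocortical intensity: fed by consolidation from the hippocampal
    process, it builds up slowly and then declines slowly."""
    return (MU1 * MU2 / (A1 - A2)) * (math.exp(-A2 * t) - math.exp(-A1 * t))

def recall_probability(t):
    """Equation (28): p(t) = 1 - exp(-intensity(t))."""
    return 1.0 - math.exp(-(r1(t) + r2(t)))
```

With these values the total intensity, and hence recall probability, decreases with the retention interval, while the neocortical share of that intensity grows, which is the qualitative behavior the text describes.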
<p>The Ribot gradient (see Section 2.2), i.e., the temporal gradient in retrograde amnesia, is characterized by disproportionate memory loss for recent time periods. Murre et al. hypothesized that the hippocampal process, as well as that of the adjacent medial temporal lobe (MTL), is damaged in amnesia. In this case, the contributions of the hippocampal and MTL processes are removed. In the memory chain model proposed by the authors, the total memory intensity <italic>r</italic>(<italic>t</italic>) is the sum of the intensities of two processes: <italic>r</italic><sub>1</sub>(<italic>t</italic>), the intensity of the hippocampal process, and <italic>r</italic><sub>2</sub>(<italic>t</italic>), the intensity of the neocortical process. Hence,</p>
<disp-formula id="E29"><label>(29)</label><mml:math id="M63"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:mi>r</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:msub><mml:mrow><mml:mi>r</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>&#x0002B;</mml:mo><mml:msub><mml:mrow><mml:mi>r</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>Note the time dependence in Equation (29). Indeed, a full lesion of the hippocampus at time <italic>t</italic><sub><italic>l</italic></sub> translates to removing the contribution of <italic>r</italic><sub>1</sub>(<italic>t</italic><sub><italic>l</italic></sub>) from the total intensity <italic>r</italic>(<italic>t</italic><sub><italic>l</italic></sub>). In such a case, the neocortical intensity <italic>r</italic><sub>2</sub>(<italic>t</italic><sub><italic>l</italic></sub>), which reflects the result of the consolidation process up to the lesioning time <italic>t</italic><sub><italic>l</italic></sub>, is the only surviving term. The authors remarked that tests of retrograde amnesia do not measure intensity directly but rather recall probability. The predicted shape of these test gradients is, therefore, given by the following equation:</p>
<disp-formula id="E30"><label>(30)</label><mml:math id="M64"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mi>p</mml:mi></mml:mrow><mml:mrow><mml:mi>R</mml:mi><mml:mi>i</mml:mi><mml:mi>b</mml:mi><mml:mi>o</mml:mi><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mn>1</mml:mn><mml:mo>-</mml:mo><mml:msup><mml:mrow><mml:mi>e</mml:mi></mml:mrow><mml:mrow><mml:mo>-</mml:mo><mml:msub><mml:mrow><mml:mi>r</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>t</mml:mi></mml:mrow><mml:mrow><mml:mi>l</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow></mml:msup></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>If the hippocampus is lesioned at time <italic>t</italic><sub><italic>l</italic></sub>, then no new memories will be formed after that, and there will be no further hippocampus-to-neocortex consolidation. We have already explained in Section 2.2.2 the consequences and how Equation (30) changes.</p>
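The predicted Ribot gradient of Equation (30) can be sketched numerically as follows. The exponential form of the neocortical intensity and all parameter values are hypothetical choices for illustration, not the authors' fitted model; the argument is the age of a memory at the moment of a complete hippocampal lesion.

```python
import math

# Hypothetical parameter values for illustration only
MU1, A1 = 1.0, 0.5   # hippocampal learning intensity and decline rate
MU2, A2 = 0.2, 0.05  # neocortical consolidation rate and decline rate

def r2(age):
    """Neocortical intensity of a memory of the given age (time since
    encoding) at the moment of the lesion."""
    return (MU1 * MU2 / (A1 - A2)) * (math.exp(-A2 * age) - math.exp(-A1 * age))

def p_ribot(age_at_lesion):
    """Equation (30): after a complete hippocampal lesion, only the
    consolidated neocortical intensity contributes to recall."""
    return 1.0 - math.exp(-r2(age_at_lesion))
```

Under these assumptions, recall probability first rises with memory age at lesioning (older memories are more consolidated) and only then slowly declines, reproducing the disproportionate loss of recent memories that characterizes the Ribot gradient.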
<p>Another approach draws on the Atkinson and Shiffrin model (Atkinson and Shiffrin, <xref ref-type="bibr" rid="B7">1968</xref>). In Section 2.3.3, we described the mathematical formalization of the model. In the case of amnesia, we expect that the transfer of information to LTS, which occurs at a constant rate &#x003B8;, changes, since &#x003B8; does. In our opinion, &#x003B8; decreases, though it does not necessarily vanish, except in severe cases where memory circuits are permanently broken. The most relevant impact concerns the <italic>retrieval process</italic>. This process degrades because the model assumes that the likelihood of retrieving the correct response for a given item improves as the amount of information stored concerning that item increases. As already introduced in Section 2.3.3, consider the probability of a correct response from LTS for an item that had a lag of <italic>i</italic> trials between its study and test and that resided in the buffer for exactly <italic>j</italic> trials. This probability can be written mathematically as</p>
<disp-formula id="E31"><label>(31)</label><mml:math id="M65"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mi>p</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mn>1</mml:mn><mml:mo>-</mml:mo><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mn>1</mml:mn><mml:mo>-</mml:mo><mml:mi>g</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:msup><mml:mrow><mml:mi>e</mml:mi></mml:mrow><mml:mrow><mml:mo>-</mml:mo><mml:mi>j</mml:mi><mml:mi>&#x003B8;</mml:mi><mml:msup><mml:mrow><mml:mi>&#x003C4;</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>-</mml:mo><mml:mi>j</mml:mi></mml:mrow></mml:msup></mml:mrow></mml:msup><mml:mo>,</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>where <italic>g</italic> is the guessing probability. In the case of amnesia, we expect that <italic>g</italic> approaches 0 and that &#x003B8; becomes smaller and smaller depending on the severity of the amnesia. In the most extreme case, with &#x003B8; tending toward zero, <italic>p</italic><sub><italic>ij</italic></sub> tends to <italic>g</italic> and thus vanishes as <italic>g</italic> approaches 0.</p>
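As an illustration, Equation (31) and the amnesia reading just described can be sketched as follows. The parameter values (and the defaults chosen for <italic>g</italic>, &#x003B8;, and &#x003C4;) are hypothetical, picked only to show the qualitative effect of shrinking &#x003B8;.

```python
import math

def p_correct(i, j, g=0.1, theta=0.5, tau=0.9):
    """Equation (31): probability of a correct response from LTS for an
    item with a study-test lag of i trials that spent j trials in the
    buffer. g is the guessing probability, theta the LTS transfer rate,
    tau the decay factor. Default values are illustrative only."""
    return 1.0 - (1.0 - g) * math.exp(-j * theta * tau ** (i - j))

# On this reading, amnesia shrinks theta (and g), degrading retrieval
healthy = p_correct(5, 3, g=0.1, theta=0.5)
amnesic = p_correct(5, 3, g=0.0, theta=0.05)
```

With &#x003B8; tending toward zero the exponential tends to 1, so <italic>p</italic><sub><italic>ij</italic></sub> collapses to the guessing probability <italic>g</italic>, and to 0 when <italic>g</italic> = 0, exactly the limiting behavior discussed above.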
<p>These approaches are quite different and, in our opinion, each has pros and cons. For example, the approach by Murre et al. is very interesting from a mathematical point of view. The idea of considering the hippocampus and neocortex as the &#x0201C;big players&#x0201D; in amnesia is appealing. However, they are not the only cerebral areas of interest in this kind of disease; consider, for example, the thalamus. Furthermore, the same conclusions could be drawn by considering analytical functions other than the exponential. Regarding the Atkinson and Shiffrin approach, its strong point is its statistical formulation. Similarly to the previous case, this approach can describe well the case of partial or total hippocampal removal (see, for example, the case of Henry Gustav Molaison, also known as &#x0201C;Patient H.M.&#x0201D;<xref ref-type="fn" rid="fn0006"><sup>6</sup></xref>). However, with this model we cannot take into account factors such as motivation, affect, and strategy (e.g., mnemonic techniques).</p>
</sec>
</sec>
</sec>
<sec>
<title>2.8 Final remark</title>
<p>The case study described above is just one example; other conditions, such as chronic stress, also have a tremendous impact on human memory. Mathematical modeling could be an efficient tool to shed more light on these, as well as on other mnemonic pathologies.</p>
</sec>
</sec>
<sec sec-type="author-contributions" id="s3">
<title>Author contributions</title>
<p>PF: Conceptualization, Investigation, Methodology, Resources, Writing&#x02014;original draft, Writing&#x02014;review &#x00026; editing. FE: Conceptualization, Investigation, Methodology, Supervision, Validation, Writing&#x02014;original draft.</p>
</sec>
</body>
<back>
<sec sec-type="funding-information" id="s4">
<title>Funding</title>
<p>The author(s) declare that no financial support was received for the research, authorship, and/or publication of this article.</p>
</sec>
<ack><p>FE and PF thank all the members belonging to Unity 1077 (&#x0201C;Neuropsychologie et Imagerie de la M&#x000E9;moire Humaine&#x0201D;) for their support and advice. In particular, PF would like to thank Pierre Gagnepain for this opportunity.</p>
</ack>
<sec sec-type="COI-statement" id="conf1">
<title>Conflict of interest</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
<sec sec-type="disclaimer" id="s5">
<title>Publisher&#x00027;s note</title>
<p>All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.</p>
</sec>
<fn-group>
<fn id="fn0001"><p><sup>1</sup>We recall that in cognitive psychology chunking is a process by which small individual pieces of a set of information, the chunks, are bound together to create a meaningful whole later in memory (Miller, <xref ref-type="bibr" rid="B34">1956</xref>). Nevertheless, short-term memory is limited in capacity, which severely limits the amount of information that can be attended to at any one time.</p></fn>
<fn id="fn0002"><p><sup>2</sup>Enactivism is a theory describing cognition as a mental function that arises from the dynamic interaction of the organism with its environment.</p></fn>
<fn id="fn0003"><p><sup>3</sup>In cognitive psychology a reverse temporal gradient denotes a pattern of retrograde amnesia characterized by greater loss of memory for events from the recent past (i.e., close to the onset of the amnesia) than for events from the remote past.</p></fn>
<fn id="fn0004"><p><sup>4</sup>Retrograde amnesia is usually temporally graded, which means that the most recent memories are affected first, and your oldest memories are usually spared. This is known as Ribot&#x00027;s law, see Section 2.2.</p></fn>
<fn id="fn0005"><p><sup>5</sup>Autonoetic consciousness is the human ability to mentally place oneself in the past and future (i.e., mental time travel) or in counterfactual situations (i.e., alternative outcomes), and thus to be able to examine one&#x00027;s own thoughts.</p></fn>
<fn id="fn0006"><p><sup>6</sup>Patient H.M. is an important case study in (neuro)psychology. A large portion of his hippocampus was removed during surgery to alleviate severe epilepsy. He was left with anterograde amnesia, completely unable to form new explicit memories. This case was crucial to understanding the role of the hippocampus in memory formation.</p></fn>
</fn-group>
<ref-list>
<title>References</title>
<ref id="B1">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Abraham</surname> <given-names>W. C.</given-names></name></person-group> (<year>2003</year>). <article-title>How long will long-term potentiation last?</article-title> <source>Philos. Trans. R. Soc. Lond. B Biol. Sci</source>. <volume>358</volume>, <fpage>735</fpage>&#x02013;<lpage>744</lpage>. <pub-id pub-id-type="doi">10.1098/rstb.2002.1222</pub-id></citation>
</ref>
<ref id="B2">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Anderson</surname> <given-names>J.</given-names></name></person-group> (<year>1970</year>). <article-title>Two models for memory organization using interacting traces</article-title>. <source>Math. Biosci</source>. <volume>8</volume>, <fpage>137</fpage>&#x02013;<lpage>160</lpage>. <pub-id pub-id-type="doi">10.1016/0025-5564(70)90147-1</pub-id></citation>
</ref>
<ref id="B3">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Anderson</surname> <given-names>O.</given-names></name></person-group> (<year>1983</year>). <article-title>A neuromathematical model of human information processing and its application to science content acquisition</article-title>. <source>J. Res. Sci. Teach</source>. <volume>20</volume>, <fpage>603</fpage>&#x02013;<lpage>620</lpage>. <pub-id pub-id-type="doi">10.1002/tea.3660200702</pub-id></citation>
</ref>
<ref id="B4">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Anderson</surname> <given-names>O. R.</given-names></name></person-group> (<year>1986</year>). <article-title>Studies on information processing rates in science learning and related cognitive variables. I: Some theoretical issues related to motivation</article-title>. <source>J. Res. Sci. Teach</source>. <volume>23</volume>, <fpage>61</fpage>&#x02013;<lpage>66</lpage>. <pub-id pub-id-type="doi">10.1002/tea.3660230107</pub-id></citation>
</ref>
<ref id="B5">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Annese</surname> <given-names>J.</given-names></name> <name><surname>Schenker-Ahmed</surname> <given-names>N.</given-names></name> <name><surname>Bartsch</surname> <given-names>H.</given-names></name> <name><surname>Maechler</surname> <given-names>P.</given-names></name> <name><surname>Sheh</surname> <given-names>C.</given-names></name> <name><surname>Thomas</surname> <given-names>N.</given-names></name> <etal/></person-group>. (<year>2014</year>). <article-title>Postmortem examination of patient H.M.&#x00027;s brain based on histological sectioning and digital 3D reconstruction</article-title>. <source>Nat. Commun</source>. <volume>5</volume>, <fpage>4122</fpage>. <pub-id pub-id-type="doi">10.1038/ncomms4122</pub-id><pub-id pub-id-type="pmid">24473151</pub-id></citation></ref>
<ref id="B6">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Atkinson</surname> <given-names>R. C.</given-names></name> <name><surname>Brelsford</surname> <given-names>J. W.</given-names></name> <name><surname>Shiffrin</surname> <given-names>R. M.</given-names></name></person-group> (<year>1967</year>). <article-title>Multiprocess models for memory with applications to a continuous presentation task</article-title>. <source>J. Math. Psychol</source>. <volume>4</volume>, <fpage>277</fpage>&#x02013;<lpage>300</lpage>. <pub-id pub-id-type="doi">10.1016/0022-2496(67)90053-3</pub-id></citation>
</ref>
<ref id="B7">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Atkinson</surname> <given-names>R. C.</given-names></name> <name><surname>Shiffrin</surname> <given-names>R. M.</given-names></name></person-group> (<year>1968</year>). <article-title>Human memory: A proposed system and its control processes</article-title>, in <source>The Psychology of Learning and Motivation: Advances in Research and Theory</source>, ed. <person-group person-group-type="editor"><name><surname>Spence</surname> <given-names>K. W.</given-names></name></person-group> (<publisher-name>Academic Press</publisher-name>), <fpage>89</fpage>&#x02013;<lpage>195</lpage>.</citation>
</ref>
<ref id="B8">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Benjamin</surname> <given-names>A. S.</given-names></name> <name><surname>Bjork</surname> <given-names>R. A.</given-names></name> <name><surname>Hirshman</surname> <given-names>E.</given-names></name></person-group> (<year>1998</year>). <article-title>Predicting the future and reconstructing the past: A bayesian characterization of the utility of subjective fluency</article-title>. <source>Acta Psychologica</source> <volume>98</volume>, <fpage>267</fpage>&#x02013;<lpage>290</lpage>. <pub-id pub-id-type="doi">10.1016/S0001-6918(97)00046-2</pub-id><pub-id pub-id-type="pmid">9621834</pub-id></citation></ref>
<ref id="B9">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Bower</surname> <given-names>G.</given-names></name></person-group> (<year>1972</year>). <article-title>Stimulus-sampling theory of encoding variability</article-title>. in <source>Coding Processes in Human Memory</source>, eds. <person-group person-group-type="editor"><name><surname>Melton</surname> <given-names>A. W.</given-names></name> <name><surname>Martin</surname> <given-names>E.</given-names></name></person-group> <publisher-loc>New York</publisher-loc>: <publisher-name>Wiley</publisher-name>, <fpage>85</fpage>&#x02013;<lpage>121</lpage>.</citation>
</ref>
<ref id="B10">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Briglia</surname> <given-names>J.</given-names></name> <name><surname>Servajean</surname> <given-names>P.</given-names></name> <name><surname>Michalland</surname> <given-names>A.-H.</given-names></name> <name><surname>Brunel</surname> <given-names>L.</given-names></name> <name><surname>Brouillet</surname> <given-names>D.</given-names></name></person-group> (<year>2018</year>). <article-title>Modeling an enactivist multiple-trace memory. athena: a fractal model of human memory</article-title>. <source>J. Math. Psychol</source>. <volume>82</volume>, <fpage>97</fpage>&#x02013;<lpage>110</lpage>. <pub-id pub-id-type="doi">10.1016/j.jmp.2017.12.002</pub-id></citation>
</ref>
<ref id="B11">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Carr</surname> <given-names>H.</given-names></name></person-group> (<year>1931</year>). <article-title>The laws of association</article-title>. <source>Psychol. Rev</source>. <volume>38</volume>, <fpage>212</fpage>&#x02013;<lpage>228</lpage>. <pub-id pub-id-type="doi">10.1037/h0075109</pub-id></citation>
</ref>
<ref id="B12">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Cox</surname> <given-names>G.</given-names></name> <name><surname>Shiffrin</surname> <given-names>R.</given-names></name></person-group> (<year>2017</year>). <article-title>A dynamic approach to recognition memory</article-title>. <source>Psychol. Rev</source>. <volume>124</volume>, <fpage>795</fpage>&#x02013;<lpage>860</lpage>. <pub-id pub-id-type="doi">10.1037/rev0000076</pub-id><pub-id pub-id-type="pmid">29106269</pub-id></citation></ref>
<ref id="B13">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Dunkin</surname> <given-names>J. J.</given-names></name></person-group> (<year>2009</year>). <source>The Neuropsychology of Women</source>. <publisher-loc>New York</publisher-loc>: <publisher-name>Springer</publisher-name>, <fpage>209</fpage>&#x02013;<lpage>223</lpage>.</citation>
</ref>
<ref id="B14">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Ebbinghaus</surname> <given-names>H.</given-names></name></person-group> (<year>1913</year>). <article-title>&#x000DC;ber das Gedchtnis</article-title>, in <source>Memory: A Contribution to Experimental Psychology</source> eds. <person-group person-group-type="editor"><name><surname>Ruger</surname> <given-names>H. A.</given-names></name> <name><surname>Bussenius</surname> <given-names>C. E.</given-names></name></person-group> <publisher-loc>New York</publisher-loc>: <publisher-name>Teachers College, Columbia University</publisher-name>.</citation>
</ref>
<ref id="B15">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Eustache</surname> <given-names>F.</given-names></name> <name><surname>Cox</surname> <given-names>C.</given-names></name> <name><surname>Brandt</surname> <given-names>J.</given-names></name> <name><surname>Lechevalier</surname> <given-names>B.</given-names></name> <name><surname>Pons</surname> <given-names>L.</given-names></name></person-group> (<year>1990</year>). <article-title>Word-association responses and severity of dementia in Alzheimer disease</article-title>. <source>Psychol. Rep</source>. <volume>66</volume>, <fpage>1315</fpage>&#x02013;<lpage>1322</lpage>. <pub-id pub-id-type="doi">10.2466/pr0.1990.66.3c.1315</pub-id><pub-id pub-id-type="pmid">2385720</pub-id></citation></ref>
<ref id="B16">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Eustache</surname> <given-names>F.</given-names></name> <name><surname>Viard</surname> <given-names>A.</given-names></name> <name><surname>Desgranges</surname> <given-names>B.</given-names></name></person-group> (<year>2016</year>). <article-title>The mnesis model: memory systems and processes, identity and future thinking</article-title>. <source>Neuropsychologia</source> <volume>87</volume>, <fpage>96</fpage>&#x02013;<lpage>109</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuropsychologia.2016.05.006</pub-id><pub-id pub-id-type="pmid">27178309</pub-id></citation></ref>
<ref id="B17">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Georgiou</surname> <given-names>A.</given-names></name> <name><surname>Katkov</surname> <given-names>M.</given-names></name> <name><surname>Tsodyks</surname> <given-names>M.</given-names></name></person-group> (<year>2021</year>). <article-title>Retroactive interference model of forgetting</article-title>. <source>J. Mathemat. Neurosci</source>. <volume>11</volume>, <fpage>1</fpage>&#x02013;<lpage>15</lpage>. <pub-id pub-id-type="doi">10.1186/s13408-021-00102-6</pub-id><pub-id pub-id-type="pmid">33484358</pub-id></citation></ref>
<ref id="B18">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gillund</surname> <given-names>G.</given-names></name> <name><surname>Shiffrin</surname> <given-names>R.</given-names></name></person-group> (<year>1984</year>). <article-title>A retrieval model for both recognition and recall</article-title>. <source>Psychol. Rev</source>. <volume>91</volume>, <fpage>1</fpage>&#x02013;<lpage>67</lpage>. <pub-id pub-id-type="doi">10.1037/0033-295X.91.1.1</pub-id><pub-id pub-id-type="pmid">6571421</pub-id></citation></ref>
<ref id="B19">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Goldstein</surname> <given-names>E. B.</given-names></name></person-group> (<year>2019</year>). <source>Cognitive Psychology: Connecting Mind, Research, and Everyday Experience (5E ed.)</source>, <publisher-loc>Boston, MA</publisher-loc>: <publisher-name>Cengage</publisher-name>.</citation>
</ref>
<ref id="B20">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Hebb</surname> <given-names>D. O.</given-names></name></person-group> (<year>1961</year>). <article-title>Distinctive features of learning in the higher animal</article-title>, in <source>Brain Mechanisms and Learning</source>, ed. <person-group person-group-type="editor"><name><surname>Delafresnaye</surname></name></person-group> <publisher-loc>Oxford</publisher-loc>: <publisher-name>Blackwell</publisher-name>, <fpage>37</fpage>&#x02013;<lpage>46</lpage>.</citation>
</ref>
<ref id="B21">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hertz</surname> <given-names>J.</given-names></name> <name><surname>Krogh</surname> <given-names>A.</given-names></name> <name><surname>Palmer</surname> <given-names>R. G.</given-names></name> <name><surname>Horner</surname> <given-names>H.</given-names></name></person-group> (<year>1991</year>). <article-title>Introduction to the theory of neural computation</article-title>. <source>Phys. Today</source> <volume>44</volume>, <fpage>70</fpage>. <pub-id pub-id-type="doi">10.1063/1.2810360</pub-id></citation>
</ref>
<ref id="B22">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hicklin</surname> <given-names>W. J.</given-names></name></person-group> (<year>1976</year>). <article-title>A model for mastery learning based on dynamic equilibrium theory</article-title>. <source>J. Math. Psychol</source>. <volume>13</volume>, <fpage>79</fpage>&#x02013;<lpage>88</lpage>. <pub-id pub-id-type="doi">10.1016/0022-2496(76)90035-3</pub-id></citation>
</ref>
<ref id="B23">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hintzman</surname> <given-names>D. L.</given-names></name></person-group> (<year>1984</year>). <article-title>Minerva 2: A simulation model of human memory</article-title>. <source>Behavior Research Methods, Instruments, &#x00026; Computers</source> <volume>16</volume>, <fpage>96</fpage>&#x02013;<lpage>101</lpage>. <pub-id pub-id-type="doi">10.3758/BF03202365</pub-id></citation>
</ref>
<ref id="B24">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hintzman</surname> <given-names>D. L.</given-names></name> <name><surname>Ludlam</surname> <given-names>G.</given-names></name></person-group> (<year>1980</year>). <article-title>Differential forgetting of prototypes and old instances: Simulation by an exemplarbased classification model</article-title>. <source>Memory Cognit</source>. <volume>8</volume>, <fpage>378</fpage>&#x02013;<lpage>382</lpage>. <pub-id pub-id-type="doi">10.3758/BF03198278</pub-id><pub-id pub-id-type="pmid">7421579</pub-id></citation></ref>
<ref id="B25">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hopfield</surname> <given-names>J.</given-names></name></person-group> (<year>2007</year>). <article-title>Hopfield network</article-title>. <source>Scholarpedia</source> <volume>2</volume>, <fpage>1977</fpage>. <pub-id pub-id-type="doi">10.4249/scholarpedia.1977</pub-id></citation>
</ref>
<ref id="B26">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Jack</surname> <given-names>C. R.</given-names></name></person-group> (<year>2012</year>). <article-title>Alzheimer disease: new concepts on its neurobiology and the clinical role imaging will play</article-title>. <source>Radiology</source> <volume>263</volume>, <fpage>344</fpage>&#x02013;<lpage>361</lpage>. <pub-id pub-id-type="doi">10.1148/radiol.12110433</pub-id><pub-id pub-id-type="pmid">22517954</pub-id></citation></ref>
<ref id="B27">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kahana</surname> <given-names>M.</given-names></name></person-group> (<year>2020</year>). <article-title>Computational models of memory search</article-title>. <source>Annu. Rev. Psychol.</source> <volume>71</volume>, <fpage>107</fpage>&#x02013;<lpage>138</lpage>. <pub-id pub-id-type="doi">10.1146/annurev-psych-010418-103358</pub-id><pub-id pub-id-type="pmid">31567043</pub-id></citation></ref>
<ref id="B28">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kahana</surname> <given-names>M. J.</given-names></name> <name><surname>Bennett</surname> <given-names>P.</given-names></name></person-group> (<year>1994</year>). <article-title>Classification and perceived similarity of compound gratings that differ in relative spatial phase</article-title>. <source>Percept. Psychophys</source>. <volume>55</volume>, <fpage>642</fpage>&#x02013;<lpage>656</lpage>. <pub-id pub-id-type="doi">10.3758/BF03211679</pub-id><pub-id pub-id-type="pmid">8058452</pub-id></citation></ref>
<ref id="B29">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kahana</surname> <given-names>M. J.</given-names></name> <name><surname>Rizzuto</surname> <given-names>D. S.</given-names></name> <name><surname>Schneider</surname> <given-names>A.</given-names></name></person-group> (<year>2005</year>). <article-title>Theoretical correlations and measured correlations: relating recognition and recall in four distributed memory models</article-title>. <source>J. Exp. Psychol. Learn. Mem. Cogn</source>. <volume>31</volume>, <fpage>933</fpage>&#x02013;<lpage>953</lpage>. <pub-id pub-id-type="doi">10.1037/0278-7393.31.5.933</pub-id><pub-id pub-id-type="pmid">16248743</pub-id></citation></ref>
<ref id="B30">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kopelman</surname> <given-names>M. D.</given-names></name></person-group> (<year>1989</year>). <article-title>Remote and autobiographical memory, temporal context memory, and frontal atrophy in Korsakoff and Alzheimer patients</article-title>. <source>Neuropsychologia</source> <volume>27</volume>, <fpage>437</fpage>&#x02013;<lpage>460</lpage>. <pub-id pub-id-type="doi">10.1016/0028-3932(89)90050-X</pub-id><pub-id pub-id-type="pmid">2733818</pub-id></citation></ref>
<ref id="B31">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Loftus</surname> <given-names>G. R.</given-names></name></person-group> (<year>1985</year>). <article-title>Evaluating forgetting curves</article-title>. <source>J. Exp. Psychol. Learn. Mem. Cogn</source>. <volume>11</volume>, <fpage>397</fpage>&#x02013;<lpage>406</lpage>. <pub-id pub-id-type="doi">10.1037/0278-7393.11.2.397</pub-id></citation>
</ref>
<ref id="B32">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>McNally</surname> <given-names>R. J.</given-names></name></person-group> (<year>2004</year>). <article-title>The science and folklore of traumatic amnesia</article-title>. <source>Clin. Psychol. Sci. Pract</source>. <volume>11</volume>, <fpage>29</fpage>&#x02013;<lpage>33</lpage>. <pub-id pub-id-type="doi">10.1093/clipsy.bph056</pub-id></citation>
</ref>
<ref id="B33">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Melton</surname> <given-names>A. W.</given-names></name></person-group> (<year>1963</year>). <article-title>Implications of short-term memory for a general theory of memory</article-title>. <source>J. Verbal Learn. Verbal Behav</source>. <volume>2</volume>, <fpage>1</fpage>&#x02013;<lpage>21</lpage>. <pub-id pub-id-type="doi">10.21236/AD0422425</pub-id></citation>
</ref>
<ref id="B34">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Miller</surname> <given-names>G. A.</given-names></name></person-group> (<year>1956</year>). <article-title>The magical number seven, plus or minus two: some limits on our capacity for processing information</article-title>. <source>Psychol. Rev</source>. <volume>63</volume>, <fpage>81</fpage>&#x02013;<lpage>97</lpage>. <pub-id pub-id-type="doi">10.1037/h0043158</pub-id><pub-id pub-id-type="pmid">13310704</pub-id></citation></ref>
<ref id="B35">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Mueller</surname> <given-names>S.</given-names></name> <name><surname>Shiffrin</surname> <given-names>R.</given-names></name></person-group> (<year>2006</year>). <article-title>REM-II: a model of the developmental co-evolution of episodic memory and semantic knowledge</article-title>, in <source>Paper presented at the International Conference on Learning and Development (ICDL), Bloomington, IN</source>.</citation>
</ref>
<ref id="B36">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Murdock</surname> <given-names>B.</given-names></name></person-group> (<year>1982</year>). <article-title>A theory for the storage and retrieval of item and associative information</article-title>. <source>Psychol. Rev</source>. <volume>89</volume>, <fpage>609</fpage>&#x02013;<lpage>626</lpage>. <pub-id pub-id-type="doi">10.1037/0033-295X.89.6.609</pub-id></citation>
</ref>
<ref id="B37">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Murdock</surname> <given-names>B.</given-names></name></person-group> (<year>1989</year>). <article-title>Learning in a distributed memory model</article-title>, in <source>Current Issues in Cognitive Processes: The Floweree Symposium on Cognition</source>, ed. <person-group person-group-type="editor"><name><surname>Izawa</surname> <given-names>C.</given-names></name></person-group> <publisher-loc>Hillsdale, NJ</publisher-loc>: <publisher-name>Erlbaum</publisher-name>, <fpage>69</fpage>&#x02013;<lpage>106</lpage>.</citation>
</ref>
<ref id="B38">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Murdock</surname> <given-names>B.</given-names></name></person-group> (<year>1997</year>). <article-title>Context and mediators in a theory of distributed associative memory (todam2)</article-title>. <source>Psychol. Rev</source>. <volume>104</volume>, <fpage>839</fpage>&#x02013;<lpage>862</lpage>. <pub-id pub-id-type="doi">10.1037/0033-295X.104.4.839</pub-id></citation>
</ref>
<ref id="B39">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Murre</surname> <given-names>J. M.</given-names></name> <name><surname>Chessa</surname> <given-names>A. G.</given-names></name> <name><surname>Meeter</surname> <given-names>M.</given-names></name></person-group> (<year>2013</year>). <article-title>Mathematical model of forgetting and amnesia</article-title>. <source>Front. Psychol</source>. <volume>4</volume>, <fpage>76</fpage>. <pub-id pub-id-type="doi">10.3389/fpsyg.2013.00076</pub-id><pub-id pub-id-type="pmid">23450438</pub-id></citation></ref>
<ref id="B40">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Murre</surname> <given-names>J. M. J.</given-names></name> <name><surname>Chessa</surname> <given-names>A. G.</given-names></name></person-group> (<year>2011</year>). <article-title>Power laws from individual differences in learning and forgetting: mathematical analyses</article-title>. <source>Psychon. Bull. Rev</source>. <volume>18</volume>, <fpage>592</fpage>&#x02013;<lpage>597</lpage>. <pub-id pub-id-type="doi">10.3758/s13423-011-0076-y</pub-id><pub-id pub-id-type="pmid">21468774</pub-id></citation></ref>
<ref id="B41">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Nelson</surname> <given-names>A.</given-names></name> <name><surname>Shiffrin</surname> <given-names>R.</given-names></name></person-group> (<year>2013</year>). <article-title>The co-evolution of knowledge and event memory</article-title>. <source>Psychol. Rev</source>. <volume>120</volume>, <fpage>356</fpage>&#x02013;<lpage>394</lpage>. <pub-id pub-id-type="doi">10.1037/a0032020</pub-id></citation>
</ref>
<ref id="B42">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Nosofsky</surname> <given-names>R.</given-names></name></person-group> (<year>1992</year>). <article-title>Exemplar-based approach to relating categorization, identification, and recognition</article-title>, in <source>Multidimensional Models of Perception and Cognition</source>, ed. <person-group person-group-type="editor"><name><surname>Ashby</surname> <given-names>F. G.</given-names></name></person-group> <publisher-loc>Hillsdale, NJ</publisher-loc>: <publisher-name>Erlbaum</publisher-name>.</citation>
</ref>
<ref id="B43">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Pick</surname> <given-names>A.</given-names></name></person-group> (<year>1904</year>). <article-title>&#x000DC;ber prim&#x000E4;re progressive Demenz bei Erwachsenen (translation: on primary progressive dementia in adults)</article-title>. <source>Prag. Med. Wochenschr</source>. <volume>29</volume>, <fpage>417</fpage>.</citation>
</ref>
<ref id="B44">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Piolino</surname> <given-names>P.</given-names></name> <name><surname>Desgranges</surname> <given-names>B.</given-names></name> <name><surname>Belliard</surname> <given-names>S.</given-names></name> <name><surname>Matuszewski</surname> <given-names>V.</given-names></name> <name><surname>Lalev&#x000E9;e</surname> <given-names>C.</given-names></name> <name><surname>De la Sayette</surname> <given-names>V.</given-names></name> <etal/></person-group>. (<year>2003</year>). <article-title>Autobiographical memory and autonoetic consciousness: triple dissociation in neurodegenerative diseases</article-title>. <source>Brain</source> <volume>126</volume>, <fpage>2203</fpage>&#x02013;<lpage>2219</lpage>. <pub-id pub-id-type="doi">10.1093/brain/awg222</pub-id><pub-id pub-id-type="pmid">12821510</pub-id></citation></ref>
<ref id="B45">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Posner</surname> <given-names>M. I.</given-names></name></person-group> (<year>1966</year>). <article-title>Components of skilled performance</article-title>. <source>Science</source> <volume>152</volume>, <fpage>1712</fpage>&#x02013;<lpage>1718</lpage>. <pub-id pub-id-type="doi">10.1126/science.152.3730.1712</pub-id></citation>
</ref>
<ref id="B46">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Preece</surname> <given-names>P. F.</given-names></name> <name><surname>Anderson</surname> <given-names>O. R.</given-names></name></person-group> (<year>1984</year>). <article-title>Comments and criticism. Mathematical modeling of learning</article-title>. <source>J. Res. Sci. Teach</source>. <volume>21</volume>, <fpage>953</fpage>&#x02013;<lpage>955</lpage>. <pub-id pub-id-type="doi">10.1002/tea.3660210910</pub-id></citation>
</ref>
<ref id="B47">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Raaijmakers</surname> <given-names>J.</given-names></name> <name><surname>Shiffrin</surname> <given-names>R.</given-names></name></person-group> (<year>1981</year>). <article-title>Search of associative memory</article-title>. <source>Psychol. Rev</source>. <volume>88</volume>, <fpage>93</fpage>&#x02013;<lpage>134</lpage>. <pub-id pub-id-type="doi">10.1037/0033-295X.88.2.93</pub-id></citation>
</ref>
<ref id="B48">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Racine</surname> <given-names>R. J.</given-names></name> <name><surname>Andrew Chapman</surname> <given-names>C.</given-names></name> <name><surname>Trepel</surname> <given-names>C.</given-names></name> <name><surname>Campbell Teskey</surname> <given-names>G.</given-names></name> <name><surname>Milgram</surname> <given-names>N. W.</given-names></name></person-group> (<year>1995</year>). <article-title>Post-activation potentiation in the neocortex. IV. Multiple sessions required for induction of long-term potentiation in the chronic preparation</article-title>. <source>Brain Res</source>. <volume>702</volume>, <fpage>87</fpage>&#x02013;<lpage>93</lpage>. <pub-id pub-id-type="doi">10.1016/0006-8993(95)01025-0</pub-id><pub-id pub-id-type="pmid">8846100</pub-id></citation></ref>
<ref id="B49">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Ribot</surname> <given-names>T.</given-names></name></person-group> (<year>1906</year>). <source>Les maladies de la m&#x000E9;moire</source>. <publisher-loc>Paris</publisher-loc>: <publisher-name>L&#x00027;Harmattan</publisher-name>.</citation>
</ref>
<ref id="B50">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Semon</surname> <given-names>R. W.</given-names></name></person-group> (<year>1923</year>). <source>Mnemic Psychology</source>. <publisher-loc>London</publisher-loc>: <publisher-name>George Allen and Unwin</publisher-name>.</citation>
</ref>
<ref id="B51">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Shiffrin</surname> <given-names>R. M.</given-names></name> <name><surname>Steyvers</surname> <given-names>M.</given-names></name></person-group> (<year>1997</year>). <article-title>A model for recognition memory: REM, retrieving effectively from memory</article-title>. <source>Psychon. Bull. Rev</source>. <volume>4</volume>, <fpage>145</fpage>&#x02013;<lpage>166</lpage>. <pub-id pub-id-type="doi">10.3758/BF03209391</pub-id><pub-id pub-id-type="pmid">21331823</pub-id></citation></ref>
<ref id="B52">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Squire</surname> <given-names>L. R.</given-names></name></person-group> (<year>1992</year>). <article-title>Memory and the hippocampus: a synthesis from findings with rats, monkeys, and humans</article-title>. <source>Psychol. Rev</source>. <volume>99</volume>, <fpage>195</fpage>&#x02013;<lpage>231</lpage>. <pub-id pub-id-type="doi">10.1037/0033-295X.99.2.195</pub-id><pub-id pub-id-type="pmid">1594723</pub-id></citation></ref>
<ref id="B53">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Squire</surname> <given-names>L. R.</given-names></name> <name><surname>Alvarez</surname> <given-names>P.</given-names></name></person-group> (<year>1995</year>). <article-title>Retrograde amnesia and memory consolidation: a neurobiological perspective</article-title>. <source>Curr. Opin. Neurobiol</source>. <volume>5</volume>, <fpage>169</fpage>&#x02013;<lpage>175</lpage>. <pub-id pub-id-type="doi">10.1016/0959-4388(95)80023-9</pub-id><pub-id pub-id-type="pmid">7620304</pub-id></citation></ref>
<ref id="B54">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Squire</surname> <given-names>L. R.</given-names></name> <name><surname>Cohen</surname> <given-names>N. J.</given-names></name> <name><surname>Nadel</surname> <given-names>L.</given-names></name></person-group> (<year>1984</year>). <article-title>The medial temporal region and memory consolidation: a new hypothesis</article-title>, in <source>Memory Consolidation</source>, eds. <person-group person-group-type="editor"><name><surname>Weingartner</surname> <given-names>H.</given-names></name> <name><surname>Parker</surname> <given-names>E.</given-names></name></person-group> <publisher-loc>Hillsdale, NJ</publisher-loc>: <publisher-name>Erlbaum</publisher-name>, <fpage>185</fpage>&#x02013;<lpage>210</lpage>.</citation>
</ref>
<ref id="B55">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Sun</surname> <given-names>R.</given-names></name></person-group> (<year>2008</year>). <source>The Cambridge Handbook of Computational Psychology</source>. <publisher-loc>Cambridge</publisher-loc>: <publisher-name>Cambridge University Press</publisher-name>.</citation>
</ref>
<ref id="B56">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Tenenbaum</surname> <given-names>J. B.</given-names></name> <name><surname>Kemp</surname> <given-names>C.</given-names></name> <name><surname>Griffiths</surname> <given-names>T. L.</given-names></name> <name><surname>Goodman</surname> <given-names>N.</given-names></name></person-group> (<year>2011</year>). <article-title>How to grow a mind: statistics, structure, and abstraction</article-title>. <source>Science</source>. <volume>331</volume>, <fpage>1279</fpage>&#x02013;<lpage>1285</lpage>. <pub-id pub-id-type="doi">10.1126/science.1192788</pub-id><pub-id pub-id-type="pmid">21393536</pub-id></citation></ref>
<ref id="B57">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Tulving</surname> <given-names>E.</given-names></name> <name><surname>Madigan</surname> <given-names>S.</given-names></name></person-group> (<year>1970</year>). <article-title>Memory and verbal learning</article-title>. <source>Annu. Rev. Psychol</source>. <volume>21</volume>, <fpage>437</fpage>&#x02013;<lpage>484</lpage>. <pub-id pub-id-type="doi">10.1146/annurev.ps.21.020170.002253</pub-id></citation>
</ref>
<ref id="B58">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Versace</surname> <given-names>R.</given-names></name> <name><surname>Vallet</surname> <given-names>G. T.</given-names></name> <name><surname>Riou</surname> <given-names>B.</given-names></name> <name><surname>Lesourd</surname> <given-names>M.</given-names></name> <name><surname>Labeye</surname> <given-names>E.</given-names></name> <name><surname>Brunel</surname> <given-names>L.</given-names></name></person-group> (<year>2014</year>). <article-title>Act-in: an integrated view of memory mechanisms</article-title>. <source>J. Cogn. Psychol</source>. <volume>26</volume>, <fpage>280</fpage>&#x02013;<lpage>306</lpage>. <pub-id pub-id-type="doi">10.1080/20445911.2014.892113</pub-id></citation>
</ref>
<ref id="B59">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wixted</surname> <given-names>J. T.</given-names></name> <name><surname>Ebbesen</surname> <given-names>E. B.</given-names></name></person-group> (<year>1991</year>). <article-title>On the form of forgetting</article-title>. <source>Psychol. Sci</source>. <volume>2</volume>, <fpage>409</fpage>&#x02013;<lpage>415</lpage>. <pub-id pub-id-type="doi">10.1111/j.1467-9280.1991.tb00175.x</pub-id></citation>
</ref>
<ref id="B60">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wozniak</surname> <given-names>P. A.</given-names></name> <name><surname>Gorzelanczyk</surname> <given-names>E. J.</given-names></name> <name><surname>Murakowski</surname> <given-names>J. A.</given-names></name></person-group> (<year>1995</year>). <article-title>Two components of long-term memory</article-title>. <source>Acta Neurobiol. Exp. (Wars)</source> <volume>55</volume>, <fpage>301</fpage>&#x02013;<lpage>305</lpage>.</citation>
</ref>
</ref-list>
</back>
</article>