Research Topic

Human and Artificial Models of Memory

  • Submission closed.

About this Research Topic

Recent advances in Artificial Intelligence (AI) have been driven by Deep Learning (DL), a sub-field of machine learning (ML). DL uses hierarchical structure, a property shared by the brain’s cortex, which also organizes cognitive functions hierarchically. The consensus view is that simpler, more perceptual features are learned (i.e., stored) and processed (i.e., retrieved, used in inference) at lower levels of the hierarchy, whereas progressively more complex features, which are considered more conceptual or symbolic, are learned and processed at progressively higher levels. Moreover, probabilistic inference at higher levels becomes increasingly temporally extended and involves some form of working memory, taking on the character of sequential reasoning over symbols.

Historically, ML has focused on tasks involving more perceptual features, such as vision and speech recognition. More recently, impressive successes have been achieved on problems of an increasingly symbolic and sequential nature, including game playing and machine translation. This was accomplished by combining the hierarchy commonly used in DL with other key enabling methods, such as reinforcement learning (RL) and long short-term memory (LSTM) units, and by augmenting the core neural network (DL) model with an external memory for specific items, as in “Memory Networks”, the “Neural Turing Machine”, and “Neural Episodic Control”. From the standpoint of cognitive psychology, the inclusion of an external memory is of particular interest given its analogy with the core dichotomy of human memory: episodic and semantic memory. Episodic memories are detailed, multimodal memories of specific experienced events, formed in a single trial and capable of lasting for decades. Semantic memory refers to knowledge of the statistical structure of the world, e.g., spatial and temporal classes and causal relations between objects and events. While the architectures of the aforementioned models broadly comport with the functional specialization of the hippocampus and the cortex for episodic and semantic memory, respectively, many of their detailed mechanisms owe more to computer science principles than to human memory. Numerous questions remain about the precise, mechanistic relation between episodic, semantic, and working memory. In this regard, cognitive psychology can offer fresh inspiration and guidance to AI; in turn, it can benefit from the advances shown by these new AI/ML models and from their computational power to test new hypotheses. One of this Research Topic’s main objectives is to encourage and enhance collaboration between these communities.
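To make the analogy concrete, the sketch below is a minimal, illustrative episodic memory module: items are written in a single shot and retrieved by content-based (similarity-weighted) lookup, loosely in the spirit of memory-augmented models such as Memory Networks and Neural Episodic Control. It is not a reimplementation of any cited model; all names and parameters (EpisodicMemory, key_dim, capacity, the top-k softmax read) are our own illustrative choices.

```python
import numpy as np


class EpisodicMemory:
    """Toy external episodic store: one-shot writes, content-based reads.

    Illustrative sketch only; loosely inspired by memory-augmented neural
    networks, not a reimplementation of any published model.
    """

    def __init__(self, key_dim, capacity=1000):
        self.keys = np.zeros((capacity, key_dim))  # content-addressable keys (e.g., state embeddings)
        self.values = np.zeros(capacity)           # one scalar value per stored episode
        self.writes = 0
        self.capacity = capacity

    def write(self, key, value):
        """Single-trial storage: one experienced event fills one slot (FIFO overwrite when full)."""
        idx = self.writes % self.capacity
        self.keys[idx] = key
        self.values[idx] = value
        self.writes += 1

    def read(self, query, k=5):
        """Content-based retrieval: softmax-weighted average of the k most similar stored values."""
        n = min(self.writes, self.capacity)
        if n == 0:
            return 0.0
        keys = self.keys[:n]
        sims = keys @ query / (np.linalg.norm(keys, axis=1) * np.linalg.norm(query) + 1e-8)
        top = np.argsort(sims)[-k:]                 # indices of the k nearest neighbours
        w = np.exp(sims[top] - sims[top].max())     # softmax over similarities
        w /= w.sum()
        return float(w @ self.values[top])


# Minimal usage: store one event, then retrieve from a noisy version of the same cue.
rng = np.random.default_rng(0)
mem = EpisodicMemory(key_dim=8)
event = rng.normal(size=8)
mem.write(event, value=1.0)                         # one-shot write of a specific experience
print(mem.read(event + 0.05 * rng.normal(size=8)))  # similarity-based recall, close to 1.0
```

The choice to read by similarity rather than by a fixed address mirrors the content-addressable character often attributed to episodic recall, and is one of the design points where cognitive psychology and memory-augmented AI models most directly meet.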

Therefore, this Research Topic aims to collect contributions on memory models from both cognitive psychology and AI/ML perspectives. We particularly welcome contributions on the following subjects:
- Cognitive psychology models of working and episodic memory and psychometric tests to measure individual differences
- New computational implementations and simulations of working or episodic memory
- AI/ML models that include explicit memory, for example memory-augmented neural networks
- New methods and benchmarks for testing AI systems on memory tasks (e.g., new virtual environments and datasets).
We encourage submissions that distinguish between different memory types and investigate their characteristics. Studies should demonstrate interpretable behavior in how working and/or long-term memory is actually used, rather than report only metrics that beat state-of-the-art benchmarks.

Topic Editor Dr. Ahmet S. Ozcan is employed by the IBM Almaden Lab. All other Topic Editors declare no competing interests with regard to the Research Topic subject.


Keywords: Memory, Artificial Intelligence, Neural Networks, Working Memory, Episodic Memory


Important Note: All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.

About Frontiers Research Topics

With their unique mix of contributions, from Original Research to Review Articles, Research Topics unite the most influential researchers, the latest key findings, and historical advances in a hot research area. Find out more about how to host your own Frontiers Research Topic or contribute to one as an author.
