ORIGINAL RESEARCH article
Front. Syst. Neurosci.
Volume 19 - 2025 | doi: 10.3389/fnsys.2025.1630151
This article is part of the Research Topic: Neurobiological foundations of cognition and progress towards Artificial General Intelligence
Neural Network Models of Autonomous Adaptive Intelligence and Artificial General Intelligence: How Our Brains Learn Large Language Models and Their Meanings
Provisionally accepted
- 1 Center for Adaptive Systems; Departments of Mathematics and Statistics, Psychological and Brain Sciences, and Biomedical Engineering, Boston University, Boston, Massachusetts, United States
This article describes a biological neural network model that explains how humans learn to understand large language models and their meanings. This kind of learning typically occurs when a student learns from a teacher about events that they experience together. Multiple types of self-organizing brain processes are involved, including content-addressable memory; conscious visual perception; joint attention; object learning, categorization, and cognition; conscious recognition; cognitive working memory; cognitive planning; neural-symbolic computing; emotion; cognitive-emotional interactions and reinforcement learning; volition; and goal-oriented actions. The article builds on results in Grossberg (2023) that show how small language models with perceptual and affective meanings are learned. The current article explains how humans, and neural network models thereof, learn to consciously see and recognize an unlimited number of visual scenes. Bi-directional associative links can then be learned, and stably remembered, between these scenes, the emotions that they evoke, and language utterances that describe them. Adaptive Resonance Theory circuits control the model's learning and self-stabilize its memory. These human capabilities are not found in AI models such as ChatGPT. The current model is called ChatSOME, where SOME abbreviates Self-Organizing MEaning. The article also summarizes highlights of neural network research since the 1950s and of leading models, including Adaptive Resonance, Deep Learning, LLMs, and Transformers.
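For readers unfamiliar with how Adaptive Resonance Theory self-stabilizes learned memories, the sketch below illustrates a minimal ART-1-style match/reset search cycle for binary patterns in Python. It is a toy illustration under our own simplifications, not the full model developed in the article; the class name and parameter values are ours, while the choice function and the conjunctive fast-learning rule follow standard ART-1 conventions.

```python
import numpy as np

class ART1:
    """Minimal ART-1-style sketch (illustrative only): binary inputs are
    categorized through a vigilance-gated match/reset search, and a
    category's template changes only when resonance occurs, which is
    what keeps previously learned memories stable."""

    def __init__(self, vigilance=0.6, beta=1.0):
        self.rho = vigilance    # match criterion: higher -> finer categories
        self.beta = beta        # choice parameter (tie-breaking)
        self.templates = []     # one binary template per committed category

    def train(self, pattern):
        I = np.asarray(pattern, dtype=bool)
        # Rank committed categories by the ART-1 choice function
        # T_j = |I AND w_j| / (beta + |w_j|).
        order = sorted(range(len(self.templates)),
                       key=lambda j: -np.sum(I & self.templates[j])
                                      / (self.beta + np.sum(self.templates[j])))
        for j in order:
            match = np.sum(I & self.templates[j]) / max(np.sum(I), 1)
            if match >= self.rho:
                # Resonance: fast learning refines the template to I AND w_j,
                # so dissimilar inputs can never overwrite an old category.
                self.templates[j] = I & self.templates[j]
                return j
            # Mismatch reset: this category is suppressed; search continues.
        # No committed category passed the vigilance test: recruit a new one.
        self.templates.append(I.copy())
        return len(self.templates) - 1

net = ART1(vigilance=0.6)
print(net.train([1, 1, 0, 0]))  # 0: first input commits category 0
print(net.train([1, 1, 0, 1]))  # 0: match 2/3 >= 0.6; template stays [1,1,0,0]
print(net.train([0, 0, 1, 1]))  # 1: match 0 < 0.6; a new category is recruited
```

Raising the vigilance parameter forces finer categories; because a template changes only when an input passes the vigilance match, earlier memories are not catastrophically overwritten, which is the self-stabilizing property the abstract refers to.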
Keywords: neural network, ChatSOME, learning, recognition, cognition, language, emotion, consciousness
Received: 16 May 2025; Accepted: 23 Jun 2025.
Copyright: © 2025 Grossberg. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
* Correspondence: Stephen Grossberg, Center for Adaptive Systems; Departments of Mathematics and Statistics, Psychological and Brain Sciences, and Biomedical Engineering, Boston University, Boston, Massachusetts 02215, United States
Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.