ORIGINAL RESEARCH article
Front. Comput. Neurosci.
Volume 19 - 2025 | doi: 10.3389/fncom.2025.1646810
This article is part of the Research Topic: Towards Sustainable AI: Energy and Data Efficiency in Biological and Artificial Intelligence
Maximizing Theoretical and Practical Storage Capacity in Single-Layer Feedforward Neural Networks
Provisionally accepted
University of Southern California, Los Angeles, United States
Artificial neural networks are limited in the number of patterns that they can store and accurately recall, with capacity constraints arising from factors such as network size, architectural structure, pattern sparsity, and pattern dissimilarity. Exceeding these limits produces recall errors and ultimately catastrophic forgetting, a major challenge in continual learning. In this study, we characterize the theoretical maximum memory capacity of single-layer feedforward networks as a function of these parameters. We derive analytical expressions for the maximum theoretical memory capacity and introduce a grid-based construction and subsampling method for pattern generation that exploits the network's full storage potential. Our findings indicate that maximum capacity scales as (N/S)^S, where N is the number of input/output units and S is the pattern sparsity, under threshold constraints related to minimum pattern differentiability. Simulation results validate these theoretical predictions and show that the optimal pattern set can be constructed deterministically for any given network size and pattern sparsity, systematically outperforming random pattern generation in terms of storage capacity. This work offers a foundational framework for maximizing storage efficiency in neural network systems and supports the development of data-efficient, sustainable AI.
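To make the (N/S)^S scaling concrete, below is a minimal illustrative sketch of a grid-based pattern constructor. It assumes one natural reading of the abstract (not stated there explicitly): the N units are partitioned into S equal blocks of N/S units, and each pattern activates exactly one unit per block, yielding exactly (N/S)^S distinct patterns of sparsity S in which any two patterns share at most S-1 active units. The function name grid_patterns and all implementation details are hypothetical, not the authors' code.

```python
import itertools
import numpy as np

def grid_patterns(N, S):
    """Enumerate binary patterns over N units with exactly S active units,
    by splitting the units into S blocks of N/S and activating one unit
    per block. This produces (N/S)**S distinct patterns.
    Illustrative sketch only; the paper's exact construction may differ."""
    assert N % S == 0, "N must be divisible by S for an even grid"
    block = N // S
    # Choose one active index independently within each of the S blocks.
    for choice in itertools.product(range(block), repeat=S):
        p = np.zeros(N, dtype=np.uint8)
        for b, idx in enumerate(choice):
            p[b * block + idx] = 1
        yield p

# Example: N = 9 units, S = 3 active units -> (9/3)**3 = 27 patterns.
patterns = list(grid_patterns(9, 3))
assert len(patterns) == 27
```

Under this reading, the subsampling step described in the abstract would then select a subset of these grid patterns when the differentiability threshold requires fewer shared active units between stored patterns.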
Keywords: neural network, memory capacity, data-efficient AI, sustainable AI, constructive algorithms
Received: 14 Jun 2025; Accepted: 30 Jul 2025.
Copyright: © 2025 Chou and Bouteiller. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
* Correspondence: Zane Zeenhee Chou, University of Southern California, Los Angeles, United States
Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.