Mathematical understanding of information storage, compression and prediction in Neural Networks

About this Research Topic

Submission deadlines

  • Manuscript Summary Submission Deadline: 14 March 2026
  • Manuscript Submission Deadline: 15 March 2026
  • Manuscript Extension Submission Deadline: 15 April 2026

This Research Topic is currently accepting articles.

Background

A neural network (NN) is an ensemble of basic signal-processing units that are highly interconnected. Deep Neural Networks (DNNs) are NNs formed by several successive layers of units, whose topology can be either acyclic and feedforward or cyclic and recurrent. DNNs have been remarkably successful in machine learning applications such as computer vision, natural language processing, speech and audio processing, finance, and robotics. Furthermore, recent studies have found that biological neural networks form architectures analogous to DNNs, even though their basic processing units are fundamentally different. Despite this success, DNNs are largely used as a "black box": the inner workings that lead them to a given output are not revealed, and we still lack the mathematical tools to fully understand the formal properties and limitations of these networks. Improving our theoretical understanding of DNNs is particularly important today, as they are being deployed in an increasing number of safety-critical scenarios that demand constant scrutiny of their behavior.
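For readers less familiar with the distinction between the two topologies mentioned above, a minimal sketch is given below; the layer sizes, step count, and random initialization are arbitrary choices for illustration and are not part of the call itself.

```python
# Minimal sketch: the same affine-plus-nonlinearity unit arranged as an
# acyclic feedforward stack versus a cyclic recurrent update.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=8)                      # an arbitrary input vector

# Feedforward DNN: successive layers, information flows in one direction only.
W1 = rng.normal(size=(16, 8))
W2 = rng.normal(size=(4, 16))
hidden = np.tanh(W1 @ x)
output = np.tanh(W2 @ hidden)               # shape (4,)

# Recurrent NN: the hidden state is fed back into the same unit at each step.
Wx = rng.normal(size=(16, 8))
Wh = rng.normal(size=(16, 16))
state = np.zeros(16)
for _ in range(5):                          # unroll five time steps
    state = np.tanh(Wx @ x + Wh @ state)    # shape (16,)
```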

The goal of this Research Topic is to investigate mathematical frameworks that enable a better understanding of how information is learned and represented within a neural network, including the study of existing approaches in this direction. Notable examples include the storage capacity limit of Hopfield Neural Networks and the Information Bottleneck (IB) interpretation of deep learning. The storage capacity limit of Hopfield recurrent NNs was characterized by Amit, Gutfreund, and Sompolinsky in the specific case of Hebbian learning: the limit is a fraction of the network dimension, and as it is approached or exceeded, the probability of errors in pattern retrieval increases. Tishby’s IB theory, in turn, interprets deep learning as a progressive compression of the input data combined with maximization of the relevant information it carries about the target. In this regard, Achille and Soatto suggest that DNNs acquire invariances in a way similar to the IB, emphasizing how such mechanisms drive invariant feature extraction. Moreover, the variational approximation of the IB was linked to Variational Autoencoders by Alemi et al., shedding light on the latter’s ability to learn compressed and meaningful representations.
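For concreteness, the two results sketched above can be written down explicitly; these are standard textbook formulations, not quotations from the cited works.

```latex
% Hebbian storage of P binary patterns \xi^\mu \in \{-1,+1\}^N in a Hopfield network:
J_{ij} \;=\; \frac{1}{N} \sum_{\mu=1}^{P} \xi_i^{\mu} \xi_j^{\mu}, \qquad J_{ii} = 0,
% with reliable retrieval only below the critical load of Amit, Gutfreund and Sompolinsky:
\alpha \;=\; \frac{P}{N} \;<\; \alpha_c \;\approx\; 0.138.

% Information Bottleneck: learn a representation T of the input X that remains
% predictive of the target Y, by minimizing over the encoder p(t \mid x):
\mathcal{L}_{\mathrm{IB}} \;=\; I(X;T) \;-\; \beta\, I(T;Y).
```

A minimal numerical illustration of the Hebbian capacity limit is sketched below; it assumes nothing beyond NumPy, and the network size, pattern counts, and noise level are arbitrary choices made for this example.

```python
# Minimal sketch: a Hopfield network trained with the Hebbian rule, showing how
# retrieval of a corrupted pattern degrades as the load alpha = P/N approaches
# the ~0.138 capacity limit. All sizes and parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def hebbian_weights(patterns):
    """W = (1/N) * sum_mu xi^mu (xi^mu)^T, with zero self-connections."""
    _, N = patterns.shape
    W = patterns.T @ patterns / N
    np.fill_diagonal(W, 0.0)
    return W

def retrieve(W, state, max_steps=50):
    """Iterate synchronous sign updates until a fixed point (or step limit)."""
    for _ in range(max_steps):
        new_state = np.where(W @ state >= 0, 1, -1)
        if np.array_equal(new_state, state):
            break
        state = new_state
    return state

N = 200
for P in (5, 20, 40):                        # loads alpha = 0.025, 0.10, 0.20
    patterns = rng.choice([-1, 1], size=(P, N))
    W = hebbian_weights(patterns)
    probe = patterns[0].copy()
    probe[: N // 10] *= -1                   # flip 10% of the bits
    recovered = retrieve(W, probe)
    overlap = recovered @ patterns[0] / N    # 1.0 means perfect retrieval
    print(f"alpha = {P / N:.3f}  overlap = {overlap:.2f}")
```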

This Research Topic on the Mathematical Understanding of Information Storage, Compression, and Prediction in Neural Networks invites researchers to present state-of-the-art approaches, focusing on the mathematical analysis of deep neural networks, including their information-theoretic interpretation and their statistical limits.

Relevant topics include, but are not limited to, the following:

• Theory of Deep Feed-forward and Recurrent NNs
• Information-theoretic principles and interpretation of NNs
• The Information Bottleneck and deep learning
• Compression in Deep Neural Networks
• The analysis of pattern and memory storage in NNs
• Deep NNs for brain-inspired machine learning and biological modeling
• Statistical Physics of deep neural networks
• Dynamical Systems Modeling of NNs
• Neural Network Dynamics and Stability
• Generalization and Regularization in NNs
• Learning Theory and Neural Networks
• Mathematical Models of Learning and Plasticity
• Neural Network Interpretability and Explainability
• Energy-Based Models in Deep Learning
• Neural Network Compression, Pruning, Sparsity and Efficient Computing
• Mathematics of Self-Supervised Deep Learning
• Optimization Landscapes and Loss Surface Analysis
• Neural Network Generalization and Overparameterization
• Mathematical Theories of Transformers and Attention Mechanisms
• Theoretical Foundations of Transfer Learning and Domain Adaptation

Please note: manuscripts dealing with biologically inspired neuronal network studies (based on the brain) are in scope for Frontiers in Computational Neuroscience. Further, contributions touching on the practical implications of theoretical models for neural networks, in particular manuscripts highlighting innovative applications and optimizations of deep neural networks and optimal information storage, should be submitted to Frontiers in Neuroinformatics.

Article types and fees

This Research Topic accepts the following article types, unless otherwise specified in the Research Topic description:

  • Brief Research Report
  • Conceptual Analysis
  • Data Report
  • Editorial
  • FAIR² Data
  • General Commentary
  • Hypothesis and Theory
  • Methods
  • Mini Review

Articles accepted for publication by our external editors following rigorous peer review incur a publishing fee, charged to authors, institutions, or funders.

Keywords: Information Storage, Neural Networks, Deep Learning, Statistical Physics, Information Theory, Information Bottleneck, Explainable NNs, Learning Theory, Machine Learning

Important note: All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.

Topic editors

Participating journals

Manuscripts can be submitted to this Research Topic via the main journal or any other participating journal.

Impact

  • 1,494 Topic views