
Editorial article

Front. Appl. Math. Stat., 12 January 2026

Sec. Optimization

Volume 11 - 2025 | https://doi.org/10.3389/fams.2025.1764289

This article is part of the Research Topic "Optimization for Low-rank Data Analysis: Theory, Algorithms and Applications."

Editorial: Optimization for low-rank data analysis: theory, algorithms and applications

  • 1School of Data, Mathematical, and Statistical Sciences, University of Central Florida, Orlando, FL, United States
  • 2Department of Mathematics, Hong Kong University of Science and Technology, Clear Water Bay, Hong Kong SAR, China
  • 3Faculty of Applied Science and Technology, Hilla Limann Technical University, Department of Information and Communication Technology, Wa, Ghana
  • 4Department of Mathematics, Tufts University, Medford, MA, United States

Low-rank data analysis has emerged as a powerful paradigm across applied mathematics, statistics, and data science. With the rapid growth of modern datasets in size, dimensionality, and complexity, low-rank structures offer a principled way to extract latent patterns, reduce redundancy, and enable scalable computation. Optimization methods tailored to these structures are central to this progress, providing both theoretical guarantees and practical algorithms for high-impact applications.

This Research Topic brings together six contributions spanning the spectrum from foundational theory to real-world implementation. The collection reflects the interdisciplinary nature of low-rank modeling and optimization, highlighting the advances in algorithm design, convergence analysis, and domain-specific applications. Collectively, these works demonstrate how low-rank optimization continues to shape the landscape of applied mathematics, statistics, and data science.

Needell et al. study the challenging problem of optimizing neural networks, where training can be slow and inefficient when all the parameters need to be tuned. Randomization-based models, such as random vector functional link (RVFL) networks, mitigate this bottleneck by fixing the input-to-hidden weights at random, thereby reducing learning to a simple linear problem. Despite their strong empirical performance in recent years, these models have lacked rigorous theory. This work strengthens the foundational theory by proving that RVFL networks are universal approximators with error decaying as O(1/n), and by establishing non-asymptotic, high-probability guarantees via concentration inequalities. The authors further extend the framework to functions defined on smooth, low-dimensional manifolds, an important setting for modern data with intrinsic low-rank structure, and validate their results with numerical experiments. The study bridges a critical gap between theory and practice, clarifying why randomized, low-complexity architectures can serve as powerful tools for efficient data-driven approximation.
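The core mechanism described above, fixing the input-to-hidden weights at random so that training reduces to a linear least-squares problem, can be sketched in a few lines. This is a toy illustration of the RVFL idea only; the network sizes, the tanh activation, the ridge regularizer, and all function names are illustrative choices, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy RVFL regressor: input-to-hidden weights and biases are drawn once at
# random and frozen; only the linear output layer is trained, by (ridge-
# regularized) least squares. All sizes here are illustrative.
def rvfl_fit(X, y, n_hidden=200, scale=2.0, ridge=1e-6, rng=rng):
    d = X.shape[1]
    W = rng.normal(scale=scale, size=(d, n_hidden))  # fixed random weights
    b = rng.uniform(-1.0, 1.0, size=n_hidden)        # fixed random biases
    H = np.tanh(X @ W + b)                           # random features
    # Direct link: concatenate the raw inputs with the random features.
    F = np.hstack([X, H])
    # Solving a linear system is the only "training" step.
    beta = np.linalg.solve(F.T @ F + ridge * np.eye(F.shape[1]), F.T @ y)
    return W, b, beta

def rvfl_predict(X, W, b, beta):
    F = np.hstack([X, np.tanh(X @ W + b)])
    return F @ beta

# Approximate a smooth 1-D target function on a grid.
X = np.linspace(-1, 1, 400).reshape(-1, 1)
y = np.sin(3 * X[:, 0]) + 0.5 * X[:, 0] ** 2
W, b, beta = rvfl_fit(X, y)
err = np.max(np.abs(rvfl_predict(X, W, b, beta) - y))
```

Because the random layer is fixed, the whole fit is a single linear solve, which is exactly why these models sidestep the slow iterative training the paragraph describes.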

Kassab et al. study the challenging problem of topic detection at different time scales. Temporal text data, such as news or social media streams, combine long-term themes with short-lived events, making it essential for topic models to detect both and locate them accurately in time. The authors show that non-negative CP tensor decomposition (NCPD) can automatically uncover topics with diverse temporal persistence, and they introduce a sparsity-constrained variant (S-NCPD), together with an online version, to directly regulate topic duration. They provide theoretical guarantees, including convergence of the proposed algorithms to stationary points, and conduct extensive experiments on semi-synthetic and real-world datasets. The results demonstrate that S-NCPD and its online counterpart can reliably identify both transient and persistent topics in a quantifiable, interpretable manner, outperforming the traditional matrix-based models. The online method further improves computational efficiency. Overall, this work advances tensor-based dynamic topic modeling by delivering principled tools for detecting and controlling topic persistence.
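To make the nonnegative CP idea concrete, here is a minimal rank-1 nonnegative CP fit by alternating least squares with a nonnegativity clip. It is a toy stand-in for the NCPD/S-NCPD models discussed above: real topic models use higher ranks, sparsity constraints, and multiplicative or proximal updates, none of which are shown here.

```python
import numpy as np

rng = np.random.default_rng(1)

# Rank-1 nonnegative CP: X ≈ a ∘ b ∘ c with a, b, c ≥ 0, fitted by
# alternating exact least-squares updates followed by clipping at zero.
def rank1_ncp(X, n_iter=100):
    I, J, K = X.shape
    a, b, c = rng.random(I), rng.random(J), rng.random(K)
    for _ in range(n_iter):
        # For rank 1, each factor's least-squares update has a closed form.
        a = np.maximum(np.einsum('ijk,j,k->i', X, b, c) / ((b @ b) * (c @ c)), 0)
        b = np.maximum(np.einsum('ijk,i,k->j', X, a, c) / ((a @ a) * (c @ c)), 0)
        c = np.maximum(np.einsum('ijk,i,j->k', X, a, b) / ((a @ a) * (b @ b)), 0)
    return a, b, c

# A nonnegative rank-1 tensor should be recovered almost exactly.
a0 = rng.random(4) + 0.1
b0 = rng.random(5) + 0.1
c0 = rng.random(6) + 0.1
X = np.einsum('i,j,k->ijk', a0, b0, c0)
a, b, c = rank1_ncp(X)
rel_err = np.linalg.norm(X - np.einsum('i,j,k->ijk', a, b, c)) / np.linalg.norm(X)
```

In the topic-modeling setting, one mode of the tensor indexes time, so a factor like `c` traces a topic's temporal profile; the sparsity constraints in S-NCPD act on such factors to control how long a topic persists.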

Terui et al. consider the problem of collaborative filtering on highly sparse rating data, where accurate prediction requires low-rank methods that can handle a large fraction of missing entries. They propose a modified non-negative/binary matrix factorization (NBMF) algorithm that masks unrated items and leverages a low-latency Ising machine to efficiently solve the binary subproblems inherent in NBMF. The authors provide a direct comparison between NMF and NBMF for recommendation tasks, showing that the masked NBMF method achieves higher prediction accuracy and significantly shorter computation time, especially on large and sparse matrices. This work demonstrates that specialized combinatorial-optimization hardware can enhance low-rank factorization and improve recommendation performance.
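A tiny sketch of the masked NBMF structure may help: approximate X ≈ WH with W nonnegative and H binary, fitting only the observed entries. Here the binary subproblem is solved by brute force over all 2^r candidate columns, which is the part the paper offloads to an Ising machine; the sizes, the alternating scheme, and all names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(2)

# Masked NBMF toy: X ≈ W @ H with W ≥ 0 real and H binary, using only
# entries where mask == 1. The H-step enumerates all 2^r binary columns
# (an Ising machine would solve this subproblem in the real method).
def masked_nbmf(X, mask, r=2, n_iter=20):
    m, n = X.shape
    W = rng.random((m, r))
    H = rng.integers(0, 2, (r, n)).astype(float)
    binaries = np.array(list(product([0.0, 1.0], repeat=r)))  # 2^r candidates
    for _ in range(n_iter):
        # W-step: masked least squares per row, clipped to be nonnegative.
        for i in range(m):
            obs = mask[i] == 1
            if obs.any():
                w, *_ = np.linalg.lstsq(H[:, obs].T, X[i, obs], rcond=None)
                W[i] = np.maximum(w, 0)
        # H-step: exhaustive search over binary columns per item.
        for j in range(n):
            obs = mask[:, j] == 1
            res = ((X[obs, j][:, None] - W[obs] @ binaries.T) ** 2).sum(axis=0)
            H[:, j] = binaries[np.argmin(res)]
    return W, H

# Fit a planted nonnegative/binary factorization from ~70% observed entries.
W0 = rng.random((8, 2))
H0 = rng.integers(0, 2, (2, 10)).astype(float)
X = W0 @ H0
mask = (rng.random(X.shape) < 0.7).astype(int)
W, H = masked_nbmf(X, mask)
obs_err = np.abs((W @ H - X) * mask).max()
```

The brute-force H-step scales as 2^r per column, which is exactly why a dedicated combinatorial solver becomes attractive once the rank grows.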

Mukai et al. introduce a unified optimization framework for tensor completion and reconstruction, formulated as a general low-rank inverse problem under linear observation models. Their algorithm accommodates multiple loss functions (ℓ1, ℓ2, and generalized KL) and a wide array of tensor decomposition models by combining majorization–minimization with alternating direction method of multipliers (ADMM) in a hierarchical structure. A key contribution is treating least-squares tensor decompositions as plug-and-play modules, enabling users to swap in any existing or future TD method without redesigning the solver. This flexibility allows the same algorithmic backbone to handle tasks ranging from denoising and completion to deconvolution, compressed sensing, and medical imaging. This work provides a general-purpose, extensible optimization engine for diverse reconstruction problems.
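The plug-and-play idea, a generic outer solver that treats the decomposition routine as an interchangeable module, can be illustrated with a simple completion loop. This sketch uses a matrix and truncated SVD as the plug-in approximation and a plain impute-then-project iteration; it is only an analogy for the paper's MM/ADMM framework, which handles tensors, multiple losses, and general linear observation operators.

```python
import numpy as np

rng = np.random.default_rng(3)

# Plug-in module: any routine with signature (array, rank) -> low-rank array
# could be swapped in here. Truncated SVD stands in for a tensor decomposition.
def svd_approx(X, rank):
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank]

# Generic completion loop: impute the missing entries with the current
# estimate, then project back onto the low-rank model via the plug-in.
def complete(Y, mask, approx, rank, n_iter=500):
    X = np.where(mask, Y, 0.0)
    for _ in range(n_iter):
        X = approx(np.where(mask, Y, X), rank)
    return X

# Recover a rank-2 matrix from ~60% of its entries.
A = rng.random((30, 2)) @ rng.random((2, 30))
mask = rng.random(A.shape) < 0.6
X = complete(A, mask, svd_approx, rank=2)
rel_err = np.linalg.norm(X - A) / np.linalg.norm(A)
```

The design point is that `complete` never inspects how `approx` works, so an existing or future decomposition method can be dropped in without redesigning the solver, which mirrors the modularity the paragraph describes.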

Wekesa and Korir develop a hybrid decision-support framework that integrates supervised machine learning with two-stage stochastic programming (TSSP) to optimize human intelligence (HUMINT) source performance management under uncertainty. The methodology leverages XGBoost and SVM models trained on synthetic operational data to predict source reliability and deception risk, which are then incorporated as scenario probabilities in the TSSP model for adaptive task allocation. The proposed approach yields substantial operational gains, including reduced tasking costs and improved mission success rates. This study demonstrates the value of probabilistic planning in intelligence operations.
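The two-stage structure can be shown with a deliberately small scenario-enumeration example: a first-stage tasking level is chosen before uncertainty resolves, and each scenario then incurs a recourse cost weighted by its probability, the role played by the ML-predicted reliabilities in the paper. All numbers and names below are made up for illustration.

```python
# Toy two-stage stochastic program solved by scenario enumeration.
# Scenario probabilities stand in for ML-predicted source reliability;
# the demands and costs are invented for this sketch.
scenarios = [(0.5, 3), (0.3, 6), (0.2, 9)]  # (probability, intel demand)
task_cost, shortfall_penalty = 2.0, 5.0

def expected_cost(x):
    # First-stage tasking cost plus expected second-stage shortfall cost.
    return task_cost * x + sum(p * shortfall_penalty * max(d - x, 0)
                               for p, d in scenarios)

# With a small discrete decision space, exhaustive search suffices.
best_x = min(range(11), key=expected_cost)
```

The optimum balances the certain first-stage cost against the expected recourse penalty, which is precisely the adaptive trade-off the TSSP model formalizes at operational scale.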

Faraj et al. propose two enhanced variants of the fitness-dependent optimizer (FDO), a population-based metaheuristic for solving complex gradient-free optimization problems. To address FDO's weak exploitation and slow convergence, EESB-FDO introduces stochastic boundary repositioning for infeasible solutions, while EEBC-FDO uses a boundary-carving mechanism to guide such solutions back into the feasible search space. A complementary ELFS strategy constrains the step size of the Lévy-flight random walk, often used in metaheuristics to broaden exploration, thereby improving stability. Benchmarking on classical test suites shows substantial gains over the original FDO and several state-of-the-art algorithms, with further validation on four real-world engineering and system problems. This study advances the use of structured boundary-handling and controlled stochastic search as practical tools for building more reliable and effective optimization algorithms.
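The two boundary-handling strategies and the step-size cap can be illustrated with a toy random-walk search; this is emphatically not the FDO/EESB-FDO/EEBC-FDO algorithms, only a minimal sketch of the ideas: reposition infeasible proposals at a random feasible point, or pull them back to the boundary, and cap a heavy-tailed (Lévy-like) step.

```python
import random

random.seed(4)

LO, HI = -5.0, 5.0  # feasible box (illustrative)

def repair(v, mode):
    """Boundary handling for an out-of-bounds coordinate."""
    if LO <= v <= HI:
        return v
    if mode == "stochastic":          # EESB-style: random feasible point
        return random.uniform(LO, HI)
    return min(max(v, LO), HI)        # EEBC-style: pull back to the boundary

def search(f, dim=2, steps=2000, mode="stochastic", max_step=1.0):
    x = [random.uniform(LO, HI) for _ in range(dim)]
    best, fbest = x[:], f(x)
    for _ in range(steps):
        # Heavy-tailed proposal (a crude Cauchy-like draw via a ratio of
        # normals), capped at max_step in the spirit of limiting the
        # Levy-flight step size.
        cand = [repair(xi + max(-max_step,
                                min(max_step,
                                    random.gauss(0, 0.3) /
                                    max(abs(random.gauss(0, 1)), 1e-6))),
                       mode) for xi in best]
        fc = f(cand)
        if fc < fbest:
            best, fbest = cand, fc
    return best, fbest

sphere = lambda x: sum(v * v for v in x)  # classical test function
_, fbest = search(sphere)
```

Even in this toy, uncapped heavy-tailed steps would frequently throw the search out of the box, so the cap and the repair rule together keep exploration productive, the same stability argument the paper makes.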

The six articles in this Research Topic collectively advance our understanding of low-rank optimization from multiple perspectives. They deepen insights into non-convex landscapes and low-rank constraints, propose scalable algorithms for high-dimensional data, and demonstrate the practical utility of low-rank models in domains such as imaging, signal processing, and intelligence operations. The strong synergy between theory, algorithms, and applications is a defining strength of this Research Topic.

Looking ahead, low-rank methods are poised to play an even greater role in emerging areas such as foundation models, federated learning, and trustworthy AI. As data-driven systems become more complex and pervasive, the demand for interpretable, efficient, and robust modeling frameworks will only grow. We hope this Research Topic inspires further research at the intersection of optimization and low-rank structure, and we thank the authors, reviewers, and editorial team for their valuable contributions to this vibrant field.

Author contributions

HC: Supervision, Validation, Writing – review & editing, Writing – original draft. DX: Writing – original draft, Writing – review & editing, Validation, Supervision. EG: Validation, Supervision, Writing – review & editing, Writing – original draft. AT: Writing – original draft, Validation, Writing – review & editing, Supervision.

Funding

The author(s) declared that financial support was not received for this work and/or its publication.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Generative AI statement

The author(s) declared that generative AI was not used in the creation of this manuscript.


Publisher's note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Keywords: data analytics, dimensionality reduction, matrix completion, matrix factorization, optimization, tensor factorization

Citation: Cai H, Xia D, Ganaa ED and Tasissa A (2026) Editorial: Optimization for low-rank data analysis: theory, algorithms and applications. Front. Appl. Math. Stat. 11:1764289. doi: 10.3389/fams.2025.1764289

Received: 09 December 2025; Accepted: 16 December 2025;
Published: 12 January 2026.

Edited and reviewed by: Hong-Kun Xu, Hangzhou Dianzi University, China

Copyright © 2026 Cai, Xia, Ganaa and Tasissa. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Abiy Tasissa, abiy.tasissa@tufts.edu
