
ORIGINAL RESEARCH article

Front. Artif. Intell.

Sec. Machine Learning and Artificial Intelligence

Volume 8 - 2025 | doi: 10.3389/frai.2025.1643088

This article is part of the Research Topic: Convergence of Artificial Intelligence and Cognitive Systems.

Neural Architecture Search Applying Optimal Stopping Theory

Provisionally accepted
Matthew Sheehan, Oleg Yakimenko*
  • Department of Systems Engineering, Naval Postgraduate School, Monterey, United States

The final, formatted version of the article will be published soon.

Neural architecture search (NAS) requires tremendous computational power to explore properly. This makes exploring modern NAS search spaces impractical for many researchers, given the infrastructure investment required and the time needed to design, train, validate, and evaluate each architecture within the search space. Motivated by the observation that early-stopping random search algorithms are competitive with leading NAS methods, this paper investigates how much of the search space should be explored by applying several forms of the famous decision-making riddle in optimal stopping theory: the Secretary Problem (SP). Experiments on 672 unique architectures, each trained and evaluated against the MNIST and CIFAR-10 datasets over 20,000 runs and producing 6,720 trained models, confirm theoretically and empirically that roughly 37% of the NAS search space must be explored at random before halting on an acceptable discovered architecture. Two extensions of the SP are also investigated, a “good enough” feature and a “call back” feature, which further reduce the explored fraction of the search space to roughly 15% and 4%, respectively. Each of these findings was further confirmed statistically on NAS search-space populations of 100 to 3,500 neural architectures, in steps of 50, with each population size analyzed over 20,000 runs. The paper details how researchers should implement each of these variants, with caveats, to balance computational resource costs against the need to conduct sufficient NAS exploration in a reasonable timeframe.
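For orientation, a minimal Python sketch of the classical 1/e stopping rule the abstract refers to is given below. This is an illustration under stated assumptions, not the authors' implementation: the candidates list and the evaluate() function (which in a real NAS setting would train and validate an architecture) are hypothetical placeholders.

import math
import random

def secretary_search(candidates, evaluate, explore_frac=1 / math.e):
    # Classical Secretary Problem rule: observe the first ~37% of a
    # randomly ordered candidate list without committing, then select
    # the first candidate that beats everything seen so far.
    order = random.sample(candidates, len(candidates))  # random search order
    cutoff = max(1, int(len(order) * explore_frac))

    # Exploration phase: establish a benchmark score, never select.
    best_seen = max(evaluate(arch) for arch in order[:cutoff])

    # Selection phase: halt at the first candidate beating the benchmark.
    for arch in order[cutoff:]:
        if evaluate(arch) > best_seen:
            return arch
    return order[-1]  # forced to accept the last candidate if none qualifies

The “good enough” and “call back” variants reported in the abstract would modify this rule (for instance, accepting a candidate within a tolerance of the benchmark, or returning to a previously seen candidate), reducing the explored fraction further; they are not shown in this sketch.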

Keywords: Neural architecture search, Markov decision processes, Automated machine learning, Optimal stopping theory, Secretary problem, Markov time

Received: 07 Jun 2025; Accepted: 01 Aug 2025.

Copyright: © 2025 Sheehan and Yakimenko. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

* Correspondence: Oleg Yakimenko, Department of Systems Engineering, Naval Postgraduate School, Monterey, United States

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.