ORIGINAL RESEARCH article
Front. Artif. Intell.
Sec. Machine Learning and Artificial Intelligence
This article is part of the Research Topic: Convergence of Artificial Intelligence and Cognitive Systems
Generalization Bounds for a Generator-Regularized InfoGAN-Inspired Adversarial Objective
Provisionally accepted
- 1 Virginia Commonwealth University, Richmond, United States
- 2 University of South Alabama, Mobile, United States
- 3 The University of Alabama at Birmingham, Birmingham, United States
The Information Maximizing Generative Adversarial Network (InfoGAN) can be formulated as a minimax problem involving two neural networks, a discriminator and a generator, together with an additional mutual information regularization term. In this paper, we study an InfoGAN-inspired adversarial framework obtained by removing the latent code component and introducing an explicit regularization term on the generator. This yields a generator-regularized adversarial objective that is analytically tractable for learning-theoretic analysis. We establish generalization error bounds by analyzing the difference between the empirical and population objective functions, with bounds expressed in terms of the Rademacher complexity of the discriminator, the generator, and their composition. The resulting bounds reveal explicit n^{-1/2} and m^{-1/2} decay rates in the numbers of real and generated samples, respectively, and quantify the role of the generator regularization parameter. The theory is specialized to two-layer networks with Lipschitz continuous, non-decreasing activation functions, for which explicit entropy-based complexity bounds are derived. Experiments on the CIFAR-10 dataset validate the predicted scaling behavior and demonstrate that the generalization gap decreases systematically with sample size, further highlighting the stabilizing effect of generator regularization. Overall, this work provides a first rigorous generalization analysis for an InfoGAN-inspired adversarial objective with explicit generator regularization.
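To make the abstract's setting concrete, the following is a minimal numerical sketch, not the authors' implementation: fixed two-layer discriminator and generator networks with a tanh activation (1-Lipschitz and non-decreasing, matching the activation class in the analysis), a hypothetical generator-regularized objective of the form E[log D(x)] + E[log(1 - D(G(z)))] + λ·||G||², and a Monte Carlo estimate showing the empirical-vs-population gap shrinking as the sample size n grows. The specific dimensions, regularization form, and λ value are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, h = 4, 3, 8  # data dim, latent dim, hidden width (illustrative choices)

# Fixed two-layer networks; tanh is 1-Lipschitz and non-decreasing.
W1d = rng.normal(size=(h, d)) / np.sqrt(d)   # discriminator, first layer
w2d = rng.normal(size=h) / np.sqrt(h)        # discriminator, second layer
W1g = rng.normal(size=(h, k)) / np.sqrt(k)   # generator, first layer
W2g = rng.normal(size=(d, h)) / np.sqrt(h)   # generator, second layer

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def D(x):
    """Two-layer discriminator mapping data to (0, 1)."""
    return sigmoid(np.tanh(x @ W1d.T) @ w2d)

def G(z):
    """Two-layer generator mapping latent noise to data space."""
    return np.tanh(z @ W1g.T) @ W2g.T

def objective(X, Z, lam=0.1):
    """Empirical adversarial objective plus a hypothetical squared-norm
    generator regularizer, weighted by lam."""
    reg = lam * (np.sum(W1g ** 2) + np.sum(W2g ** 2))
    return np.mean(np.log(D(X))) + np.mean(np.log1p(-D(G(Z)))) + reg

# Approximate the population objective with a very large sample.
Xbig = rng.normal(size=(200_000, d))
Zbig = rng.normal(size=(200_000, k))
L_pop = objective(Xbig, Zbig)

def avg_gap(n, trials=50):
    """Average |empirical - population| objective gap at sample size n."""
    gaps = []
    for _ in range(trials):
        X = rng.normal(size=(n, d))
        Z = rng.normal(size=(n, k))
        gaps.append(abs(objective(X, Z) - L_pop))
    return float(np.mean(gaps))

gap_small, gap_large = avg_gap(100), avg_gap(2500)
print(gap_small, gap_large)  # the gap shrinks with n, roughly like n**-0.5
```

With the networks held fixed, the gap here is driven purely by sampling error, which is the mechanism the paper's n^{-1/2} rate formalizes uniformly over the function classes via Rademacher complexity.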
Keywords: Generalization error, Generative Adversarial Networks, neural networks, Rademacher complexity, regularization
Received: 23 Oct 2025; Accepted: 30 Jan 2026.
Copyright: © 2026 Hasan, Muia and Islam. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
* Correspondence: Mahmud Hasan
Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.