ORIGINAL RESEARCH article

Front. Comput. Sci.
Sec. Computer Vision
Volume 6 - 2024 | doi: 10.3389/fcomp.2024.1274779

Orthogonality and graph divergence losses promote disentanglement in generative models Provisionally Accepted

  • 1Arizona State University, United States

The final, formatted version of the article will be published soon.


Over the last decade, deep generative models have evolved to generate realistic and sharp images. The success of these models is often attributed to an extremely large number of trainable parameters and an abundance of training data, with limited or no understanding of the underlying data manifold. In this paper, we explore the possibility of learning a deep generative model that is structured to better capture the geometry of the underlying manifold, improving image generation while providing implicit controlled generation by design. Our approach structures the latent space into multiple disjoint representations that capture different attribute manifolds. The global representations are guided by a disentangling loss for effective attribute representation learning, and by a differential manifold divergence loss for learning an effective implicit generative model. Our results show that our approach learns to disentangle physically meaningful attributes without direct supervision from ground-truth attributes, and also leads to controllable generative capabilities. Results are shown on the challenging 3D Shapes dataset.
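The abstract names an orthogonality-based disentangling loss between disjoint latent representations but does not give its formula. As a rough illustration only (not the authors' exact formulation), one common way to realize such a penalty is the squared Frobenius norm of the cross-correlation between two latent blocks, which is zero when the blocks are decorrelated on a batch:

```python
import numpy as np

def orthogonality_loss(z_a, z_b):
    """Hypothetical disentangling penalty between two latent blocks.

    z_a, z_b: arrays of shape (batch, dim_a) and (batch, dim_b).
    Returns the squared Frobenius norm of their batch cross-correlation;
    it is zero when the two blocks are uncorrelated on this batch.
    """
    # Cross-correlation matrix between the two representations,
    # averaged over the batch dimension.
    cross = z_a.T @ z_b / z_a.shape[0]
    return float(np.sum(cross ** 2))

# Uncorrelated blocks incur no penalty; identical blocks are penalized.
z1 = np.array([[1.0], [-1.0]])
z2 = np.array([[1.0], [1.0]])
print(orthogonality_loss(z1, z2))  # uncorrelated -> 0.0
print(orthogonality_loss(z1, z1))  # identical -> positive
```

In training, a term like this would be added to the reconstruction and divergence objectives so that each latent block is pushed toward representing a distinct attribute.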

Keywords: generative models, auto-encoders, graph divergence, manifolds, geometry

Received: 08 Aug 2023; Accepted: 29 Feb 2024.

Copyright: © 2024 Shukla, Dadich, Singh, Rayas, Saidi, Dasarathy, Berisha and Turaga. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

* Correspondence: Mx. Ankita Shukla, Arizona State University, Tempe, United States