AUTHOR=Shukla Ankita, Dadhich Rishi, Singh Rajhans, Rayas Anirudh, Saidi Pouria, Dasarathy Gautam, Berisha Visar, Turaga Pavan
TITLE=Orthogonality and graph divergence losses promote disentanglement in generative models
JOURNAL=Frontiers in Computer Science
VOLUME=6
YEAR=2024
URL=https://www.frontiersin.org/journals/computer-science/articles/10.3389/fcomp.2024.1274779
DOI=10.3389/fcomp.2024.1274779
ISSN=2624-9898
ABSTRACT=Over the last decade, deep generative models have evolved to generate realistic and sharp images. The success of these models is often attributed to an extremely large number of trainable parameters and an abundance of training data, with limited or no understanding of the underlying data manifold. In this paper, we explore the possibility of learning a deep generative model that is structured to better capture the underlying manifold's geometry, improving image generation while providing implicitly controlled generation by design. Our approach structures the latent space into multiple disjoint representations, each capturing a different attribute manifold. The global representations are guided by a disentangling loss for effective attribute representation learning, and by a differential manifold divergence loss for learning an effective implicit generative model. Our results show that our approach can disentangle physically meaningful attributes without direct supervision from ground-truth attributes, and that it leads to controllable generative capabilities. Results are shown on the challenging 3D Shapes dataset.
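
To make the disentangling idea concrete, below is a minimal PyTorch sketch of one common way to impose orthogonality between disjoint latent partitions. The function name orthogonality_loss, the partition sizes, and the batch cross-correlation formulation are illustrative assumptions; this is not the paper's exact orthogonality or graph divergence loss.

# A minimal, hypothetical sketch of an orthogonality penalty between two
# disjoint latent partitions. All names and dimensions are illustrative
# assumptions, not the formulation from the cited paper.
import torch

def orthogonality_loss(z_a: torch.Tensor, z_b: torch.Tensor) -> torch.Tensor:
    """Penalize correlation between two latent partitions.

    z_a: (batch, d_a) codes for attribute group A
    z_b: (batch, d_b) codes for attribute group B
    Returns the squared Frobenius norm of their batch cross-correlation
    matrix, which is zero when the two partitions are decorrelated.
    """
    # Center each partition over the batch.
    z_a = z_a - z_a.mean(dim=0, keepdim=True)
    z_b = z_b - z_b.mean(dim=0, keepdim=True)
    # Normalize columns so the penalty is scale-invariant.
    z_a = z_a / (z_a.norm(dim=0, keepdim=True) + 1e-8)
    z_b = z_b / (z_b.norm(dim=0, keepdim=True) + 1e-8)
    # Cross-correlation matrix between the two partitions: (d_a, d_b).
    corr = z_a.T @ z_b
    return (corr ** 2).sum()

# Usage: split an encoder's output into two disjoint attribute codes and
# add the penalty to the generative model's training objective.
z = torch.randn(32, 16)            # encoder output for a batch of 32
z_a, z_b = z[:, :8], z[:, 8:]      # disjoint 8-dim partitions
loss = orthogonality_loss(z_a, z_b)

A penalty of this form pushes the two partitions toward carrying statistically independent information, which is one way to encourage each partition to capture a separate attribute manifold as the abstract describes.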