ORIGINAL RESEARCH article
Front. Comput. Sci.
Sec. Computer Vision
DeepGeoFusion: Personalized Facial Beauty Prediction Through Geometric-Visual Fusion
Provisionally accepted
1 Northwestern Polytechnical University, Xi'an, China
2 Air Force Medical University, Xi'an, China
3 Xidian University Guangzhou Institute of Technology, Guangzhou, China
Personalized facial beauty prediction has emerged as a critical advancement beyond population-level models, with transformative applications in aesthetic surgery planning and user-centric recommendation systems. However, contemporary methods face three fundamental limitations: (1) inadequate modeling of aesthetic-sensitive facial regions, (2) ineffective fusion of heterogeneous geometric and visual features, and (3) heavy dependence on extensive annotation for personalization. To address these challenges, we introduce DeepGeoFusion, a novel framework that fuses global deep visual features (extracted via Vision Mamba) with anatomically constrained facial graphs constructed from 86 landmarks through Delaunay triangulation. Our core innovation, the Graph Node Attention Projection Fusion (GNAPF) block, dynamically aligns cross-modal representations using topology-aware attention mechanisms. Crucially, DeepGeoFusion incorporates a lightweight adaptation mechanism that generates personalized preference vectors from only 10 seed images via confidence-gated optimization. Extensive experiments on SCUT-FBP5500 demonstrate that our geometric-visual fusion paradigm achieves statistically significant improvements in personalized prediction accuracy over state-of-the-art methods while maintaining robustness across genders and ethnicities.
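
To make the geometric branch concrete, the following is a minimal sketch of building a facial graph from 86 landmarks via Delaunay triangulation, as the abstract describes. The landmark source, node features, and normalization below are illustrative assumptions, not the authors' released implementation.

# Minimal sketch: facial graph from 86 landmarks via Delaunay triangulation.
# The node features and normalization are assumptions for illustration only.
import numpy as np
from scipy.spatial import Delaunay

def build_facial_graph(landmarks: np.ndarray):
    """landmarks: (86, 2) array of (x, y) image coordinates."""
    tri = Delaunay(landmarks)                 # triangulate the landmark set
    # Collect undirected edges from the triangle simplices.
    edges = set()
    for a, b, c in tri.simplices:
        edges.update({tuple(sorted(p)) for p in ((a, b), (b, c), (a, c))})
    edge_index = np.array(sorted(edges)).T    # (2, E) adjacency list
    # Simple node features: coordinates normalized to the face bounding box.
    mins, maxs = landmarks.min(0), landmarks.max(0)
    node_feats = (landmarks - mins) / (maxs - mins + 1e-8)
    return node_feats, edge_index

# Random landmarks stand in for a detector's output in this example.
rng = np.random.default_rng(0)
feats, edge_index = build_facial_graph(rng.uniform(0, 224, size=(86, 2)))
print(feats.shape, edge_index.shape)          # (86, 2) (2, E)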
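
The abstract does not specify the internals of the GNAPF block, so the sketch below shows one plausible reading: graph-node (geometric) embeddings and a global visual embedding are projected into a shared space and fused with cross-attention. The single-head design, dimensions, and class name GNAPFSketch are assumptions; the paper's block may differ.

# Hedged sketch of a topology-aware cross-modal fusion in the spirit of GNAPF.
import torch
import torch.nn as nn

class GNAPFSketch(nn.Module):
    def __init__(self, d_vis: int, d_geo: int, d_model: int = 256):
        super().__init__()
        self.q = nn.Linear(d_vis, d_model)    # query from the visual token
        self.k = nn.Linear(d_geo, d_model)    # keys from graph nodes
        self.v = nn.Linear(d_geo, d_model)    # values from graph nodes
        self.out = nn.Linear(d_model + d_vis, d_model)

    def forward(self, vis: torch.Tensor, nodes: torch.Tensor) -> torch.Tensor:
        # vis: (B, d_vis) global visual feature; nodes: (B, N, d_geo).
        q = self.q(vis).unsqueeze(1)                        # (B, 1, d)
        k, v = self.k(nodes), self.v(nodes)                 # (B, N, d)
        attn = torch.softmax(q @ k.transpose(1, 2) / k.shape[-1] ** 0.5, -1)
        geo_summary = (attn @ v).squeeze(1)                 # (B, d)
        return self.out(torch.cat([geo_summary, vis], dim=-1))

fused = GNAPFSketch(d_vis=768, d_geo=64)(torch.randn(2, 768),
                                         torch.randn(2, 86, 64))
print(fused.shape)                                          # (2, 256)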
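
The lightweight personalization step could likewise be realized in several ways; one reading consistent with the abstract is to fit a small preference vector on roughly 10 seed ratings while the fused backbone stays frozen, gating each update by a confidence weight. The gating rule and squared-error loss here are illustrative assumptions, not the paper's exact procedure.

# Hedged sketch of confidence-gated preference adaptation from seed images.
import torch

def adapt_preference(fused_feats, seed_scores, steps=100, lr=1e-2):
    # fused_feats: (10, d) frozen features for the seed images;
    # seed_scores: (10,) user-provided beauty ratings.
    pref = torch.zeros(fused_feats.shape[1], requires_grad=True)
    opt = torch.optim.Adam([pref], lr=lr)
    for _ in range(steps):
        pred = fused_feats @ pref                  # per-image score
        err = pred - seed_scores
        conf = torch.exp(-err.detach().abs())      # assumed confidence gate
        loss = (conf * err ** 2).mean()            # down-weight unreliable fits
        opt.zero_grad()
        loss.backward()
        opt.step()
    return pref.detach()

pref = adapt_preference(torch.randn(10, 256), torch.rand(10) * 5)
print(pref.shape)                                  # torch.Size([256])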
Keywords: face beauty prediction, personalized beauty prediction, geometric feature, feature fusion, graph attention
Received: 26 Aug 2025; Accepted: 08 Dec 2025.
Copyright: © 2025 Wang, Huang, Feng and Feng. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
* Correspondence:
Kunwei Wang
Dong Huang
Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.
