
SYSTEMATIC REVIEW article

Front. Artif. Intell.

Sec. AI in Finance

Volume 8 - 2025 | doi: 10.3389/frai.2025.1660548

Explainable Person–Job Recommendations: Challenges, Approaches, and Comparative Analysis

Provisionally accepted
Fang Tang, Renqi Zhu, Feng Yao, Junzhi Wang, Lailong Luo, Bo Li*
  • National University of Defense Technology, Changsha, China

The final, formatted version of the article will be published soon.

As person–job recommendation systems (PJRS) increasingly mediate hiring decisions, concerns over their "black box" opacity have sparked demand for explainable AI (XAI) solutions. This systematic review examines 85 studies on explainable PJRS methods published between 2019 and August 2025, selected from 150 screened articles retrieved from Google Scholar, Web of Science, and CNKI, following PRISMA 2020 guidelines. Guided by a PICOS-formulated review question, we categorize explainability techniques into three layers—data (e.g., feature attribution, causal diagrams), model (e.g., attention mechanisms, knowledge graphs), and output (e.g., SHAP, counterfactuals)—and summarize their objectives, trade-offs, and practical applications. We further synthesize these into an integrated end-to-end framework that addresses opacity across layers and supports traceable recommendations. Quantitative benchmarking of six representative methods (e.g., LIME, attention-based, KG-GNN) reveals performance–explainability trade-offs, with counterfactual approaches achieving the highest Explainability–Performance (E–P) score (0.95). This review provides a taxonomy, a cross-layer framework, and comparative evidence to inform the design of transparent and trustworthy PJRS. Future directions include multimodal causal inference, feedback-driven adaptation, and efficient explainability tools.

Keywords: Explainable, person–job recommendations, black box, deep learning, comparative analysis

Received: 06 Jul 2025; Accepted: 29 Aug 2025.

Copyright: © 2025 Tang, Zhu, Yao, Wang, Luo and Li. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

* Correspondence: Bo Li, National University of Defense Technology, Changsha, China

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.