ORIGINAL RESEARCH article

Front. Artif. Intell.

Sec. AI for Human Learning and Behavior Change

Volume 8 - 2025 | doi: 10.3389/frai.2025.1690616

This article is part of the Research Topic: New Trends in AI-Generated Media and Security.

Explainable Multilingual and Multimodal Fake News Detection: Towards Robust and Trustworthy AI for Combating Misinformation

Provisionally accepted
Rohini Jadhav1, Vishal Meshram2, Amol Bhosle3, Kailas Patil4*, Sital Dash4, Shrikant Jadhav5*
  • 1College of Engineering, Bharati Vidyapeeth (Deemed to be University), Pune, India
  • 2Vishwakarma Institute of Technology, Pune, India
  • 3MIT Art Design and Technology University, Pune, India
  • 4Vishwakarma University, Pune, India
  • 5San Jose State University, San Jose, United States

The final, formatted version of the article will be published soon.

Abstract. Fake news detection requires systems that are multilingual, multimodal, and explainable—yet most existing models are English-centric, text-only, and opaque. This study introduces two key innovations: (i) a new multilingual–multimodal dataset of 74,000 news articles in Hindi, Gujarati, Marathi, Telugu, and English with paired images, and (ii) HEMT-Fake, a Hybrid Explainable Multimodal Transformer that integrates text, image, and relational signals with hierarchical explainability. The architecture combines transformer embeddings, CNN–BiLSTM text encoders, ResNet image features, and GraphSAGE metadata fused via multi-head attention. Its explainability module unites attention, SHAP, and LIME to provide token-, sentence-, and modality-level transparency. Across four languages, HEMT-Fake delivers a ~5% Macro-F1 improvement over XLM-R and mBERT, with gains of 7–8% in low-resource languages. The model sustains 85% accuracy under adversarial paraphrasing and 80% on AI-generated fake news, halving robustness losses compared to baselines. Human evaluation shows 82% of explanations judged meaningful, confirming transparency and trust for fact-checkers.

Impact Statement. HEMT-Fake advances fake news detection by combining multilingual coverage, multimodal reasoning, and explainable outputs in a single framework. By achieving higher accuracy in low-resource languages and maintaining robustness against AI-generated misinformation, it directly supports fact-checkers, journalists, and policymakers in combating misinformation across diverse linguistic and cultural contexts.
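The fusion step described in the abstract — text, image, and graph (metadata) features combined via multi-head attention — can be sketched in minimal form as follows. This is an illustrative, single-head scaled dot-product variant with random projections standing in for learned weights; the feature dimensions and function names are assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def fuse_modalities(text_feat, image_feat, graph_feat, d_k=64, seed=0):
    """Fuse per-modality feature vectors by attending over the three
    'modality tokens' with scaled dot-product attention (sketch only)."""
    rng = np.random.default_rng(seed)
    X = np.stack([text_feat, image_feat, graph_feat])      # (3, d)
    d = X.shape[1]
    # Random projections stand in for learned Q/K/V weight matrices.
    Wq, Wk, Wv = (rng.standard_normal((d, d_k)) / np.sqrt(d) for _ in range(3))
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    attn = softmax(Q @ K.T / np.sqrt(d_k))                 # (3, 3) modality weights
    fused = (attn @ V).mean(axis=0)                        # pooled fused vector
    return fused, attn

# Toy per-modality embeddings (e.g. CNN–BiLSTM text, ResNet image, GraphSAGE metadata)
text, image, graph = np.ones(128), np.zeros(128), 0.5 * np.ones(128)
fused, attn = fuse_modalities(text, image, graph)
print(fused.shape, attn.shape)
```

The (3, 3) attention matrix is also what a modality-level explanation could surface: each row shows how strongly one modality's representation draws on the others when forming the fused vector.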

Keywords: Fake news detection, Misinformation and disinformation, Multilingual dataset, Explainable artificial intelligence, Hybrid deep learning architecture, Adversarial robustness, Social media analysis

Received: 22 Aug 2025; Accepted: 22 Oct 2025.

Copyright: © 2025 Jadhav, Meshram, Bhosle, Patil, Dash and Jadhav. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

* Correspondence:
Kailas Patil, kailas.patil@vupune.ac.in
Shrikant Jadhav, shrikant.jadhav@sjsu.edu

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.