
SYSTEMATIC REVIEW article

Front. Artif. Intell.

Sec. Natural Language Processing

This article is part of the Research Topic: New Trends in AI-Generated Media and Security

An AI-Driven Conceptual Framework for Detecting Fake News and Deepfake Content: A Systematic Review

Provisionally accepted
  • 1Sol Plaatje University, Kimberley, South Africa
  • 2Walter Sisulu University, Mthatha, South Africa

The final, formatted version of the article will be published soon.

The rapid advancement of generative artificial intelligence (AI) has enabled the creation of highly realistic synthetic media, commonly referred to as deepfakes, which are increasingly multimodal and difficult to detect. While these technologies offer creative and commercial potential, they also pose critical challenges related to misinformation, media trust, and societal harm. Despite the growing body of research, existing reviews remain fragmented, often separating technical detection advances from social and governance considerations. This study addresses this gap through a systematic review conducted in accordance with PRISMA guidelines across IEEE Xplore, Scopus, ACM Digital Library, and Web of Science. From an initial set of 120 database records, complemented by citation chaining, 34 studies published between 2014 and 2025 were included for analysis. Eighteen studies focused on deepfake generation and detection models, eight examined social and behavioural implications, and eight addressed ethical and regulatory frameworks. Thematic synthesis reveals a clear methodological shift from convolutional neural networks toward transformer- and CLIP-based architectures, alongside the emergence of large-scale benchmark datasets. However, persistent challenges remain in multimodal detection, cross-dataset generalization, explainability–robustness trade-offs, and the translation of governance principles into deployable systems. This review contributes an integrated conceptual framework that operationally connects detection technologies, explainable AI (XAI), and governance mechanisms through explicit feedback loops. Future research directions emphasize robust multimodal benchmarks, retrieval-augmented detection systems, and interdisciplinary approaches that align technical innovation with ethical and policy safeguards.

Keywords: deepfakes, explainable AI (XAI), generative artificial intelligence, media trust, misinformation, multimodal detection

Received: 02 Nov 2025; Accepted: 19 Jan 2026.

Copyright: © 2026 Moyo, Tuyikeze, Matsebula and Obagbuwa. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

* Correspondence: Bravlyn Victoria Chido Moyo

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.