ORIGINAL RESEARCH article
Front. Artif. Intell.
Sec. Machine Learning and Artificial Intelligence
This article is part of the Research Topic: New Trends in AI-Generated Media and Security.
Hybrid Deep Feature Integration Model for Robust Deepfake Detection Using Transfer-Learned Neural Networks
Provisionally accepted
- 1 Koneru Lakshmaiah Education Foundation, Guntur, India
- 2 Department of Computer Science and Engineering, Koneru Lakshmaiah Education Foundation, Bowrampet, Hyderabad, Telangana, India
- 3 University of Massachusetts Amherst, Amherst, United States
- 4 School of Computer Science & Engineering (SCOPE), VIT-AP University, Amaravati, Andhra Pradesh, India
- 5 Capital Engineering College, Mahatpalla, Bajpur Khordha, Bhubaneswar, Odisha, India
- 6 Maryam Abacha American University of Nigeria, Hotoro GRA, Kano State, Nigeria
- 7 Department of Computer Science and Engineering, SRKR Engineering College (A), Bhimavaram, Andhra Pradesh, India
With the rapid evolution of artificial intelligence and machine learning, creating realistic deepfake multimedia content has become widely accessible, raising substantial concerns for digital security and media authenticity. Prevailing methods rely heavily on deep learning and transformer-driven techniques, but their computational cost, resource usage, and sensitivity to dataset bias hinder real-world deployment. This work studies techniques for detecting deepfake content in images and videos, analyzes state-of-the-art models (Convolutional Neural Networks, Xception, and ResNet50), and proposes a lightweight, bi-stream, artifact-resistant hybrid approach (DAAL-NET) that simultaneously learns spatial artifact cues and temporal motion inconsistencies. The framework combines three main novelties: 1) a Local Forensics Encoder with a Learnable Frequency Attention mechanism to analyze high-frequency manipulation traces; 2) a Motion Irregularity Encoder with depthwise temporal convolutions and gated recurrent units to capture frame-level motion discontinuities; and 3) a Multi-Stream Interaction Module for bidirectional spatial-temporal fusion via cross-attention. An Artifact Confidence Calibration Layer is further proposed to improve the reliability of the predicted probabilities. Experiments on the Celeb-DF (v2) and Kaggle deepfake datasets show that the proposed hybrid approach improves macro-F1, calibration error, and temporal robustness over baseline models. The model achieves competitive results under constrained computational resources, making it suitable for forensic applications, real-world media authentication systems, low-power deployments, and scalable deepfake screening pipelines.
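The abstract does not specify how the Artifact Confidence Calibration Layer is trained, and the article provides no code at this stage. As a minimal illustrative sketch only, the general idea of post-hoc confidence calibration can be shown with temperature scaling of a detector's raw logit, a common calibration technique; the function name `calibrate` and the concrete temperature values are assumptions, not the authors' method:

```python
import math

def calibrate(logit: float, temperature: float) -> float:
    """Temperature-scaled sigmoid.

    temperature > 1 softens over-confident scores toward 0.5;
    temperature < 1 sharpens them. temperature == 1 recovers the
    plain sigmoid. The temperature is normally fit on a held-out
    validation split by minimizing negative log-likelihood.
    """
    return 1.0 / (1.0 + math.exp(-logit / temperature))

# An over-confident raw deepfake score...
raw = calibrate(4.0, 1.0)
# ...is tempered toward 0.5 once temperature > 1.
cooled = calibrate(4.0, 2.0)
print(round(raw, 3), round(cooled, 3))  # → 0.982 0.881
```

Calibration like this does not change which class is predicted (the decision boundary at logit 0 is unchanged), only how trustworthy the reported probability is, which is what metrics such as expected calibration error measure.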
Keywords: Bi-Stream Neural Networks, DAAL-NET, deepfake detection, Learnable Frequency Attention, Motion Irregularity Encoder, Temporal Attention Gated Recurrent Unit
Received: 04 Nov 2025; Accepted: 06 Feb 2026.
Copyright: © 2026 Potluri, Kandagatla, Mohanty, Rout, (Dr.) Mohammad Israr and Gupta. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
* Correspondence: Sirisha Potluri
Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.
