
ORIGINAL RESEARCH article

Front. Big Data

Sec. Cybersecurity and Privacy

This article is part of the Research Topic: New Trends in AI-Generated Media and Security

Spatiotemporal Deep Learning Framework for Predictive Behavioral Threat Detection in Surveillance Footage

Provisionally accepted
Asha Matta1*, Chandra Sekhara Rao M. V. P.2
  • 1Acharya Nagarjuna University, Guntur, India
  • 2RVR and JC College of Engineering, Guntur, India

The final, formatted version of the article will be published soon.

Anomaly detection in video surveillance remains a challenging problem due to complex human behaviors, temporal variability, and limited annotated data. This study proposes an optimized spatiotemporal deep learning framework that integrates a Convolutional Neural Network (CNN) for spatial feature extraction with a Long Short-Term Memory (LSTM) network for temporal dependency modeling. The CNN processes frame-level appearance information, while the LSTM captures sequential motion patterns across video frames, enabling effective representation of anomalous activities. Hyperparameter optimization and regularization strategies are employed to improve convergence stability and generalization performance. The proposed model is evaluated on the DCSASS surveillance dataset, and the experimental results demonstrate that the optimized CNN-LSTM framework achieves an accuracy of 98.1%, with consistently high precision, recall, and F1-score across 3-fold, 5-fold, and 10-fold cross-validation settings. Comparative analysis shows that the proposed method outperforms conventional machine learning models and recent deep learning baselines, highlighting its effectiveness and robustness for practical video-based anomaly detection in surveillance environments.
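To make the described architecture concrete, the following is a minimal sketch of a CNN-LSTM video classifier in PyTorch. The framework choice, layer sizes, frame resolution, and dropout rate are assumptions for illustration only; the abstract does not specify the authors' exact configuration. The sketch shows the general pattern: a CNN extracts per-frame spatial features, an LSTM models the temporal sequence of those features, and a classification head scores each clip as normal or anomalous.

# Illustrative CNN-LSTM sketch (assumed hyperparameters, not the authors' configuration)
import torch
import torch.nn as nn

class CNNLSTMAnomalyDetector(nn.Module):
    def __init__(self, num_classes=2, hidden_size=128):
        super().__init__()
        # CNN: frame-level spatial feature extractor
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        # LSTM: temporal dependency modeling over the sequence of frame features
        self.lstm = nn.LSTM(input_size=32 * 4 * 4, hidden_size=hidden_size, batch_first=True)
        self.dropout = nn.Dropout(0.5)  # regularization, as mentioned in the abstract
        self.fc = nn.Linear(hidden_size, num_classes)

    def forward(self, clips):
        # clips: (batch, time, channels, height, width)
        b, t, c, h, w = clips.shape
        feats = self.cnn(clips.view(b * t, c, h, w)).view(b, t, -1)  # per-frame features
        _, (h_n, _) = self.lstm(feats)                               # last hidden state summarizes the clip
        return self.fc(self.dropout(h_n[-1]))                        # normal vs. anomalous logits

# Usage: score a batch of 2 clips, each with 16 RGB frames of 64x64 pixels
model = CNNLSTMAnomalyDetector()
logits = model(torch.randn(2, 16, 3, 64, 64))
print(logits.shape)  # torch.Size([2, 2])

Training such a model with k-fold cross-validation (3-, 5-, and 10-fold, as reported) would repeat fitting and evaluation over different train/validation splits of the labeled clips; the hyperparameter search procedure itself is not detailed in this summary.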

Keywords: anomaly detection, CNN-LSTM hybrid model, human activity recognition, optimized deep learning, video surveillance

Received: 18 Dec 2025; Accepted: 19 Jan 2026.

Copyright: © 2026 Matta and M. V. P. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

* Correspondence: Asha Matta

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.