
ORIGINAL RESEARCH article

Front. Bioeng. Biotechnol.

Sec. Biosensors and Biomolecular Electronics

This article is part of the Research Topic: Integration of Next-Generation Technologies with Biosensors for Advanced Diagnostics and Personalized Medicine

A Cross-Domain Framework for Emotion and Stress Detection Using WESAD, ScientISST-MOVE, and DREAMER Datasets

Provisionally accepted
  • 1Jouf University, Sakakah, Saudi Arabia
  • 2Anderson University, Anderson, United States
  • 3University of South Carolina, Columbia, United States
  • 4Federal University Oye-Ekiti, Oye, Nigeria
  • 5Prince Sattam bin Abdulaziz University, Al Kharj, Saudi Arabia
  • 6University of Tabuk, Tabuk, Saudi Arabia

The final, formatted version of the article will be published soon.

Emotional and stress-related disorders pose a growing threat to global mental health, emphasizing the critical need for accurate, robust, and interpretable emotion recognition systems. Despite advances in affective computing, existing models often lack generalizability across diverse physiological and behavioral datasets, limiting their practical deployment. This research addresses this gap by proposing a cross-dataset deep learning framework that leverages transferable feature representations and temporal modeling to enhance the recognition of emotion, stress, and activity states. The study presents a dual deep learning framework for mental health and activity monitoring. The first approach introduces a stress classification framework based on a 1D-CNN model trained on the WESAD dataset; via transfer learning, this model is then fine-tuned on the ScientISST-MOVE dataset for the downstream task of detecting daily-life activities from motion signals. An explainable AI technique is used to interpret the model's predictions, while class imbalance is addressed with focal loss and class weighting. The second approach employs a temporal conformer architecture that combines CNN and transformer components to model temporal dependencies in ECG signals for continuous affective ratings of valence, arousal, and dominance (VAD) on the DREAMER dataset, supported by feature engineering techniques. Both approaches are complemented by additional modules, such as clustering and rule-based emotion interpretation, for richer analysis. The deep learning classifier trained on WESAD biosignal data achieved 98% accuracy across three classes, demonstrating highly reliable stress classification. The transfer learning model, evaluated on the ScientISST-MOVE dataset, achieved an overall accuracy of 82% across four activity states, with good precision and recall for high-support classes. However, the explanations produced by Grad-CAM appear uninformative and do not clearly indicate which parts of the signals influence the predictions. The conformer model achieved an R² score of 0.78 and a rounded accuracy of 87.59% across all three dimensions, highlighting its robustness in multi-dimensional emotion prediction. The framework demonstrates strong performance, interpretability, and real-time applicability in personalized affective computing.
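
To make the first approach concrete, the following is a minimal PyTorch sketch of a 1D-CNN stress classifier with class-weighted focal loss and a head swap for transfer learning. It is illustrative only: the channel count, window length, layer sizes, and the focal_loss helper are assumptions, not the authors' exact architecture.

    # Minimal sketch of the first approach, assuming hypothetical shapes:
    # windows of 8 biosignal channels and 3 stress classes (WESAD-style).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class CNN1D(nn.Module):
        def __init__(self, in_channels=8, num_classes=3):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv1d(in_channels, 32, kernel_size=7, padding=3), nn.ReLU(),
                nn.MaxPool1d(2),
                nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
                nn.MaxPool1d(2),
                nn.Conv1d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool1d(1),
            )
            self.classifier = nn.Linear(128, num_classes)

        def forward(self, x):                    # x: (batch, channels, time)
            z = self.features(x).squeeze(-1)     # pooled feature vector
            return self.classifier(z)

    def focal_loss(logits, targets, alpha, gamma=2.0):
        # alpha: per-class weight tensor, e.g. inverse class frequencies
        ce = F.cross_entropy(logits, targets, reduction="none")
        pt = torch.exp(-ce)                      # probability of the true class
        return (alpha[targets] * (1.0 - pt) ** gamma * ce).mean()

    # Transfer learning: keep the convolutional features, replace the head
    # for the 4 ScientISST-MOVE activity classes, then fine-tune.
    model = CNN1D()
    model.classifier = nn.Linear(128, 4)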
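The Grad-CAM interpretation step could be sketched as follows for the 1D model above: hook the last convolutional layer, pool the gradients of the target class score over time, and weight the activations to obtain a per-timestep relevance curve. The layer index and normalization are hypothetical and tied to the sketch above, not to the authors' implementation.

    # Hedged Grad-CAM sketch for a 1D-CNN (uses the CNN1D sketch above).
    import torch

    def grad_cam_1d(model, x, target_class):
        acts, grads = {}, {}
        layer = model.features[6]                # last Conv1d in the sketch
        h1 = layer.register_forward_hook(lambda m, i, o: acts.update(a=o))
        h2 = layer.register_full_backward_hook(lambda m, gi, go: grads.update(g=go[0]))
        logits = model(x)
        logits[:, target_class].sum().backward()
        h1.remove(); h2.remove()
        w = grads["g"].mean(dim=2, keepdim=True)       # pooled gradients per filter
        cam = torch.relu((w * acts["a"]).sum(dim=1))   # (batch, time') relevance
        return cam / (cam.max() + 1e-8)                # normalize to [0, 1]

A flat or noisy relevance curve from such a procedure would be consistent with the uninformative Grad-CAM explanations reported in the abstract.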
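The second approach's temporal conformer could look roughly like this: a convolutional front end tokenizes the ECG into frames, a transformer encoder models their temporal dependencies, and a linear head regresses the three VAD dimensions. Channel count, model width, and depth are assumptions chosen for illustration.

    # Illustrative sketch of a CNN + transformer ("conformer-style") VAD regressor.
    import torch
    import torch.nn as nn

    class TemporalConformer(nn.Module):
        def __init__(self, in_channels=2, d_model=128, n_heads=4, n_layers=4):
            super().__init__()
            self.frontend = nn.Sequential(       # convolutional subsampling
                nn.Conv1d(in_channels, d_model, kernel_size=7, stride=2, padding=3),
                nn.GELU(),
                nn.Conv1d(d_model, d_model, kernel_size=5, stride=2, padding=2),
                nn.GELU(),
            )
            layer = nn.TransformerEncoderLayer(
                d_model=d_model, nhead=n_heads, dim_feedforward=4 * d_model,
                batch_first=True, norm_first=True)
            self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
            self.head = nn.Linear(d_model, 3)    # valence, arousal, dominance

        def forward(self, x):                    # x: (batch, channels, time)
            z = self.frontend(x).transpose(1, 2) # (batch, frames, d_model)
            z = self.encoder(z).mean(dim=1)      # average over time frames
            return self.head(z)

    # Example: a two-lead DREAMER-style ECG window of 2,048 samples.
    vad = TemporalConformer()(torch.randn(1, 2, 2048))   # -> shape (1, 3)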

Keywords: biosignal classification, mental health monitoring, stress detection, deep learning, transfer learning, emotion recognition, physiological signals, explainable artificial intelligence (XAI)

Received: 03 Jul 2025; Accepted: 10 Nov 2025.

Copyright: © 2025 Almadhor, Ojo, Nathaniel, Ukpong, Alsubai and Al Hejaili. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

* Correspondence: Ahmad Almadhor, aaalmadhor@ju.edu.sa

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.