ORIGINAL RESEARCH article

Front. Phys.

Sec. Social Physics

Volume 13 - 2025 | doi: 10.3389/fphy.2025.1599428

This article is part of the Research Topic: Security, Governance, and Challenges of the New Generation of Cyber-Physical-Social Systems, Volume II

Intelligent Emotion Recognition for Drivers Using Model-Level Multimodal Fusion

Provisionally accepted
Xing Luan1, Quan Wen1*, Bo Hang2
  • 1Jilin University, Changchun, China
  • 2Hubei University of Arts and Science, Xiangyang, Hubei, China

The final, formatted version of the article will be published soon.

Unstable emotions are considered an important factor contributing to traffic accidents. The probability of accidents can be reduced if drivers' emotional anomalies are quickly identified and addressed. In this paper, we present a multimodal emotion recognition model, MHLT, which performs model-level fusion through an attention mechanism. By integrating the video and audio modalities, it significantly improves the accuracy of emotion recognition. The model also outperforms traditional approaches, which focus mainly on emotion classification, in predicting emotion intensity, an important dimension of driver emotion recognition.
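To illustrate the idea of model-level fusion through an attention mechanism, the sketch below shows one common form: scaled dot-product cross-modal attention, where video features act as queries over audio features before the aligned representations are concatenated. This is a minimal, hypothetical example in NumPy; the function name, dimensions, and fusion scheme are illustrative assumptions, not the paper's actual MHLT architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_modal_attention_fusion(video_feats, audio_feats):
    """Hypothetical model-level fusion: video frames (queries) attend
    over audio windows (keys/values) via scaled dot-product attention,
    then the attended audio is concatenated with the video features."""
    d = video_feats.shape[-1]
    scores = video_feats @ audio_feats.T / np.sqrt(d)  # (Tv, Ta) similarity
    weights = softmax(scores, axis=-1)                 # attention over audio steps
    attended_audio = weights @ audio_feats             # (Tv, d) aligned audio
    # Model-level fusion: join the two modality representations per frame.
    return np.concatenate([video_feats, attended_audio], axis=-1)

rng = np.random.default_rng(0)
video = rng.standard_normal((8, 16))   # 8 video frames, 16-dim features
audio = rng.standard_normal((12, 16))  # 12 audio windows, 16-dim features
fused = cross_modal_attention_fusion(video, audio)
print(fused.shape)  # (8, 32): per-frame fused representation
```

The fused tensor would then feed a downstream classifier or regressor, e.g. one head for emotion class and one for emotion intensity.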

Keywords: road rage detection, driver emotion recognition, multimodal emotion recognition, attention mechanism, deep learning

Received: 25 Mar 2025; Accepted: 12 Jun 2025.

Copyright: © 2025 Luan, Wen and Hang. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

* Correspondence: Quan Wen, Jilin University, Changchun, China

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.