ORIGINAL RESEARCH article

Front. Artif. Intell.

Sec. Machine Learning and Artificial Intelligence

Volume 8 - 2025 | doi: 10.3389/frai.2025.1669937

This article is part of the Research Topic: Advanced Integration of Large Language Models for Autonomous Systems and Critical Decision Support.

Situational Perception in Distracted Driving: An Agentic Multi-Modal LLM Framework

Provisionally accepted
Ahmad Nazar1, Mohamed Y. Selim2*, Ashraf Gaffar1, Daji Qiao1
  • 1Iowa State University of Science and Technology, Ames, United States
  • 2Iowa State University, Ames, United States

The final, formatted version of the article will be published soon.

ABSTRACT Distracted driving remains a critical public safety issue, leading to thousands of accidents annually. Although existing driver assistance systems focus primarily on distraction detection, a significant gap remains in real-time environmental perception and context-aware intervention. This work presents a novel large language model (LLM)-driven framework for distracted driving intervention, which assumes a pre-detected distraction signal and dynamically integrates multi-modal camera and GPS data to generate timely, relevant verbal driver alerts. The proposed system adopts an agentic approach with specialized tools for object detection, speed-limit and live traffic lookup, and weather-based detection. Unlike conventional machine learning-based driver assistance systems, our approach dynamically synthesizes real-time environmental data. The framework orchestrates multiple awareness agents, optimizing information retrieval to enhance situational awareness without overwhelming the driver with verbose alerts. Our evaluations demonstrate the framework's effectiveness, achieving a semantic intervention correctness of 85.7% and an average response latency of 1.74 s. These findings highlight the potential of LLMs to improve driving safety through structured multi-modal reasoning for intelligent, AI-driven driver support.
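The orchestration described in the abstract can be illustrated with a minimal sketch. All names below (the tool functions, the `Orchestrator` class, and the templated alert) are hypothetical illustrations of the general pattern; the paper's actual agents call camera, GPS, traffic, and weather backends and use an LLM to synthesize the final alert, which this sketch replaces with stubbed outputs and a string template.

```python
from dataclasses import dataclass
from typing import Callable, Dict, Optional

# Stubbed awareness tools; in the framework these would query real
# camera, map, traffic, and weather services.
def detect_objects(frame: str) -> str:
    return "pedestrian crossing ahead"

def road_conditions(gps: tuple) -> str:
    return "speed limit 35 mph, heavy traffic"

def weather_report(gps: tuple) -> str:
    return "light rain, reduced visibility"

@dataclass
class Orchestrator:
    """Coordinates awareness agents and emits one concise verbal alert."""
    tools: Dict[str, Callable]

    def intervene(self, distracted: bool, frame: str, gps: tuple) -> Optional[str]:
        # The framework assumes distraction detection happens upstream.
        if not distracted:
            return None
        # Gather situational context from each specialized tool.
        context = {
            "objects": self.tools["objects"](frame),
            "road": self.tools["road"](gps),
            "weather": self.tools["weather"](gps),
        }
        # An LLM would synthesize this context into a short alert;
        # here a template stands in for that step.
        return (f"Eyes on the road: {context['objects']}; "
                f"{context['road']}; {context['weather']}.")

orch = Orchestrator(tools={"objects": detect_objects,
                           "road": road_conditions,
                           "weather": weather_report})
alert = orch.intervene(True, "frame_001", (42.03, -93.62))
```

A usage note: keeping the alert to a single short sentence mirrors the paper's goal of enhancing situational awareness without overwhelming the driver with verbose alerts.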

Keywords: LLM, Distracted driving, Multi-Modal, LLM agents, Data-driven, Situational Awareness, Perception

Received: 20 Jul 2025; Accepted: 23 Sep 2025.

Copyright: © 2025 Nazar, Selim, Gaffar and Qiao. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

* Correspondence: Mohamed Y. Selim, myoussef@iastate.edu

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.