ORIGINAL RESEARCH article
Front. Big Data
Sec. Cybersecurity and Privacy
Volume 8 - 2025 | doi: 10.3389/fdata.2025.1617978
This article is part of the Research Topic: Cybersecurity of Artificial Intelligence Integration in Smart Systems: Opportunities and Threats.
Robust Detection Framework for Adversarial Threats in Autonomous Vehicle Platooning
Provisionally accepted
- 1 University of Vienna, Vienna, Austria
- 2 Diplomatic Academy of Vienna, Vienna, Austria
Autonomous vehicle platooning (AVP) is an innovative approach to enhancing vehicle dynamics, in which several self-driving cars are grouped to travel in close proximity, improving fuel consumption, safety, and traffic flow. However, the system's reliance on vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communication exposes it to adversarial threats. Such attacks may tamper with sensor information, control signals, or decision-making mechanisms and cause hazardous outcomes such as accidents or traffic jams. This paper presents a new method for identifying adversarial threats in AVP systems that combines active learning with machine learning classifiers. The proposed method employs Random Forest (RF), Gradient Boosting (GB), XGBoost (XGB), K-Nearest Neighbors (KNN), Logistic Regression (LR), and AdaBoost classifiers to improve detection accuracy while reducing the need for labelled data. The experimental results show that RF with active learning achieves the best accuracy of 83.91% in identifying adversarial threats. The proposed framework offers a promising way to protect AVP systems so that they can operate stably and securely in conditions not seen in laboratory environments.
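The abstract does not detail the active learning strategy; a common choice when pairing active learning with a Random Forest classifier is pool-based uncertainty sampling, in which the model repeatedly queries labels for the pool points it is least confident about. The following is a minimal sketch of that idea using scikit-learn on synthetic data standing in for platooning telemetry; the dataset, seed size, query budget, and hyperparameters are all illustrative assumptions, not the authors' configuration.

```python
# Hedged sketch: pool-based active learning with uncertainty sampling
# and a Random Forest classifier. All numbers below are assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for AVP telemetry features,
# labelled benign (0) vs adversarial (1).
X, y = make_classification(n_samples=2000, n_features=12, n_informative=8,
                           weights=[0.8, 0.2], random_state=0)
X_pool, X_test, y_pool, y_test = train_test_split(X, y, test_size=0.3,
                                                  random_state=0)

# Small labelled seed set; the rest of the pool is treated as unlabelled.
labelled = list(range(50))
unlabelled = list(range(50, len(X_pool)))

clf = RandomForestClassifier(n_estimators=100, random_state=0)
for _ in range(10):                                # 10 query rounds
    clf.fit(X_pool[labelled], y_pool[labelled])
    proba = clf.predict_proba(X_pool[unlabelled])
    # Uncertainty sampling: query the points with the lowest
    # maximum class probability (least-confident predictions).
    uncertainty = 1.0 - proba.max(axis=1)
    query = np.argsort(uncertainty)[-20:]          # 20 queries per round
    for i in sorted(query, reverse=True):          # pop high indices first
        labelled.append(unlabelled.pop(i))

clf.fit(X_pool[labelled], y_pool[labelled])
acc = accuracy_score(y_test, clf.predict(X_test))
print(f"labelled examples used: {len(labelled)}, test accuracy: {acc:.3f}")
```

The point of the loop is label efficiency: only 250 of the 1400 pool points ever get labelled, which mirrors the paper's stated goal of decreasing the need for labelled data.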
Keywords: Autonomous Vehicle Platooning (AVP), Active Learning, Machine learning classifiers, Adversarial Threat Detection, anomaly detection
Received: 25 Apr 2025; Accepted: 24 Jun 2025.
Copyright: © 2025 Ness. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
* Correspondence: Stephanie Ness, University of Vienna, Vienna, Austria
Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.