ORIGINAL RESEARCH article
Front. Big Data
Sec. Cybersecurity and Privacy
Beyond Performance Metrics: Demonstrating Generative AI's Unique Value Propositions in Cybersecurity Threat Detection through Hybrid Pipeline Integration
University of Salamanca, Salamanca, Spain
Abstract
This study evaluates generative artificial intelligence (GenAI) for cybersecurity threat detection, examining its benefits in a human-in-the-loop workflow. The experiments used the BODMAS dataset (134,435 samples) and a smaller exploratory subset of UNSW-NB15. State-of-the-art machine learning (ML) classifiers were compared with a zero-shot large language model (LLM). On standard classification metrics, ML consistently outperformed the LLM-based systems. This comparison is not the main contribution; it establishes a performance baseline. The relevant question is how the LLM-based system behaves when a case is ambiguous. Although the LLM is not reliable as a primary detector in this setting, it can support the analyst-side work that follows detection: generating short plain-language explanations, organizing context around an alert, and providing an initial interpretation when an instance does not match the learned classes. This matters because it improves triage speed and the quality of escalation. A hybrid pipeline is therefore proposed: ML handles high-confidence and time-sensitive decisions by default, and the LLM is invoked only when confidence is low or an explanation is needed for review. Latency and cost are treated as deployment constraints, and hallucination risk is handled as a reliability limitation that requires human oversight. Overall, GenAI is unlikely to replace ML-based detection methods; its contribution is better framed as interpretive support for ambiguous or unfamiliar alerts.
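The confidence-based routing described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the threshold value, the toy classifier, and the LLM stub are all assumptions introduced here for clarity.

```python
from dataclasses import dataclass

# Hypothetical operating point; in practice this would be tuned on
# validation data against latency and cost constraints.
CONFIDENCE_THRESHOLD = 0.90

@dataclass
class TriageResult:
    label: str        # "malicious", "benign", or "uncertain"
    source: str       # which component produced the decision: "ml" or "llm"
    explanation: str  # plain-language note for the analyst

def ml_classify(features):
    """Stand-in for a trained ML classifier: returns (label, confidence)."""
    score = sum(features) / len(features)   # toy score in [0, 1]
    label = "malicious" if score >= 0.5 else "benign"
    confidence = abs(score - 0.5) * 2       # distance from the decision boundary
    return label, confidence

def llm_explain(features):
    """Stand-in for a zero-shot LLM call; a real deployment would send
    the alert context to a model and return its interpretation."""
    return ("Instance does not match the learned classes well; "
            "review the indicators manually.")

def route_alert(features):
    label, confidence = ml_classify(features)
    if confidence >= CONFIDENCE_THRESHOLD:
        # High-confidence, time-sensitive path: ML decides alone.
        return TriageResult(label, "ml",
                            f"ML classifier decided with confidence {confidence:.2f}.")
    # Low-confidence path: invoke the LLM for interpretive support,
    # keeping the human analyst in the loop for the final decision.
    return TriageResult("uncertain", "llm", llm_explain(features))
```

The key design point is that the expensive, higher-latency LLM call sits behind the threshold, so the common high-confidence case never pays its cost.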
Keywords
cybersecurity, explainable AI, generative artificial intelligence, hybrid systems, large language models, machine learning, threat detection, zero-shot learning
Received
15 December 2025
Accepted
27 February 2026
Copyright
© 2026 González-Ramos and Chamoso. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Juan Antonio González-Ramos; Pablo Chamoso
Disclaimer
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.