<?xml version="1.0" encoding="utf-8"?>
    <rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/">
      <channel>
        <title>Frontiers in Computer Science | Computer Security section | New and Recent Articles</title>
        <link>https://www.frontiersin.org/journals/computer-science/sections/computer-security</link>
        <description>RSS Feed for Computer Security section in the Frontiers in Computer Science journal | New and Recent Articles</description>
        <language>en-us</language>
        <generator>Frontiers Feed Generator, version 1</generator>
        <pubDate>Sun, 05 Apr 2026 16:55:09 GMT</pubDate>
        <ttl>60</ttl>
        <item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcomp.2026.1779065</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcomp.2026.1779065</link>
        <title><![CDATA[From risk to resilience: addressing cybersecurity threats in Brazil’s government digital transformation]]></title>
        <pubDate>Thu, 26 Mar 2026 00:00:00 GMT</pubDate>
        <category>Perspective</category>
        <author>Bruno Baranda Cardoso</author>
        <description><![CDATA[This article analyzes Brazil’s journey in consolidating a resilient digital ecosystem in the public sector, covering both the advances and challenges faced in implementing the National Digital Government Strategy (ENGD). It first addresses the varying degrees of digital maturity among government bodies and agencies, highlighting the inequalities and factors contributing to the fragmentation of the technological environment within the government. Next, it discusses the urgent need to shift from a predominantly reactive posture to a proactive approach in managing cyber risks, emphasizing the importance of a culture of prevention and continuous training of public agents. Finally, it underscores the strategic role of public technology companies, which act not only as facilitators of digital transformation but also as key players in threat identification, comprehensive risk assessments, and the consolidation of cyber protection mechanisms within the Brazilian State.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcomp.2026.1762332</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcomp.2026.1762332</link>
        <title><![CDATA[Explainable AI: enhancing decision-making in the detection of cyber threats]]></title>
        <pubDate>Fri, 20 Mar 2026 00:00:00 GMT</pubDate>
        <category>Review</category>
        <author>P. W. C. Prasad</author><author>Md Shohel Sayeed</author><author>Duc-Man Nguyen</author><author>Daniel Patricko Hutabarat</author><author>Golam Md Mohiuddin</author>
        <description><![CDATA[The rapid growth of the Internet and the increasing reliance on digital systems have significantly expanded the global digital footprint, creating new challenges for cybersecurity. Artificial Intelligence (AI) technologies, particularly Machine Learning (ML) and Deep Learning (DL), have become central to addressing these challenges by enabling the automation of complex and data-intensive tasks across antivirus solutions, intrusion prevention systems, threat intelligence platforms, and email security tools. While these technologies provide high levels of accuracy in detecting anomalies, malware, and other forms of malicious activity, they are often criticized for operating as “black-box” systems. The lack of interpretability in their decision-making processes limits the ability of cybersecurity professionals to fully understand, validate, and trust the outcomes of AI-driven models, thereby restricting their practical adoption in high-stakes environments. To mitigate these limitations, Explainable Artificial Intelligence (XAI) has emerged as a promising paradigm that aims to make AI outputs transparent, interpretable, and actionable. By providing human-understandable explanations of automated decisions, XAI can bridge the gap between technical performance and practitioner usability, enabling analysts to make informed decisions, improve incident response, and strengthen organizational resilience against both known and emerging threats. This paper reviews recent state-of-the-art developments in XAI for cybersecurity, with a particular emphasis on anomaly detection, a critical area for identifying insider threats, zero-day exploits, and atypical system behavior. The review follows a structured literature analysis of peer-reviewed studies published between 2018 and 2025, identified through systematic searches in major academic databases including IEEE Xplore, Scopus, Web of Science, and ACM Digital Library.
After applying predefined inclusion and exclusion criteria focused on XAI applications in cybersecurity, 53 relevant studies were analysed to synthesize methodological trends, application domains, and evaluation practices. Drawing on these findings, the paper consolidates fragmented research contributions, identifies current gaps, and provides recommendations for advancing the design and adoption of explainable, trustworthy AI systems in cybersecurity. The analysis further highlights a critical deployment challenge: the integration of explainability mechanisms often introduces trade-offs between predictive accuracy, computational efficiency, and real-time scalability, factors that are essential in operational cybersecurity environments.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcomp.2026.1735253</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcomp.2026.1735253</link>
        <title><![CDATA[RoLLMRec: a robust LLM-based recommender system for defending against shilling and prompt injection attacks]]></title>
        <pubDate>Thu, 12 Mar 2026 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <author>Sarama Shehmir</author><author>Rasha Kashef</author>
        <description><![CDATA[Large Language Models (LLMs) are increasingly being integrated into recommender systems, offering contextual reasoning, cross-domain adaptability, and natural language interaction. However, their adoption also introduces vulnerabilities such as prompt injection, semantic poisoning, and shilling attacks, which can distort recommendations and erode user trust. Addressing these risks is essential for the safe deployment of LLM-based recommenders. We propose RoLLMRec, a defense-oriented architectural framework and evaluation methodology for LLM-based recommender systems that integrates prompt filtering, retrieval-augmented grounding, trust-aware scoring, and an auditing feedback loop. RoLLMRec improves robustness under the evaluated prompt-level and semantic adversarial settings, while multimodal support is included at the architectural level only and is not empirically evaluated in the current experimental setup. RoLLMRec unifies five core components: (1) prompt shielding and input filtering to detect and block adversarial instructions; (2) retrieval-augmented generation to enrich factual grounding and reduce hallucination; (3) multimodal LLM encoding for text, metadata, and image inputs; (4) trust-aware scoring and Top-K ranking; and (5) adaptive feedback loops for continual learning. Evaluations on benchmark datasets such as Yelp, MovieLens, and Amazon Books show that RoLLMRec surpasses BERT4Rec, RecVAE, and LightGCN, improving NDCG@10 and HR@10 by up to 6% and 5%, respectively. Under a 10% prompt-injection attack, it maintains a Robust Hit Rate (RHR@10) above 0.63 and a Perturbation Sensitivity Index (PSI) below 0.135, achieving 15%–25% higher resilience. It also sustains a Semantic Stability Score (SSS) above 0.60 in zero-shot cross-domain transfer, confirming stable semantic intent.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcomp.2026.1751284</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcomp.2026.1751284</link>
        <title><![CDATA[Machine learning-based early incident detection system in a bakery plant’s industrial network: a cognitive model for counteracting hybrid threats]]></title>
        <pubDate>Mon, 02 Mar 2026 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <author>Gulshat Amanzholovna Amirkhanova</author><author>Dmytro Ihorovych Prokopovych-Tkachenko</author><author>Saltanat Almykhametovna Adilzhanova</author><author>Nazar Zubchenko</author><author>Liya Erbolkyzy Bektemir</author>
        <description><![CDATA[Introduction: In the context of growing cyber risks to critical industries, including bakery complexes, this paper proposes a cognitive architecture for early incident detection in the operational technology (OT) network. Methods: The architecture integrates User and Entity Behavior Analytics (UEBA), a Security Information and Event Management (SIEM) system, and Zero Trust principles, focusing on hybrid threats: from external attacks on industrial controllers, such as programmable logic controllers (PLCs), to internal operator errors. At the analytics layer, two complementary deep learning pipelines are used: a convolutional neural network (CNN) + long short-term memory (LSTM) (CNN + LSTM) model for detecting low-level network patterns (Byte2Image) and an autoencoder (AE) combined with LSTM (AE + LSTM model) for predicting time-series data and identifying anomalies in equipment telemetry. An adaptive threshold decision procedure is introduced for the first time, optimizing both accuracy and computational resources on edge nodes. The architecture complies with the IEC 62443 and ISO/IEC 27019 standards. Results and discussion: High performance metrics, specifically Precision, were demonstrated in the bakery plant’s digital twin scenarios.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcomp.2026.1669659</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcomp.2026.1669659</link>
        <title><![CDATA[A secure authentication scheme for smart home environments: a biometric-driven approach]]></title>
        <pubDate>Fri, 27 Feb 2026 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <author>Zahra M. Rajeh</author><author>Sharaf A. Alhomdy</author><author>Fursan Thabit</author><author>Khawla A. Maodah</author>
        <description><![CDATA[A smart home represents an emerging technological revolution. Devices such as smart TVs, smart refrigerators, and smart locks are connected to the Internet to enhance convenience in daily life. However, users communicate with these smart home devices over public channels, which makes the data being transferred vulnerable to attacks. Ensuring the privacy and data security of home users therefore becomes a significant challenge. As smart home systems become increasingly integrated into our daily routines, securing them is crucial. This study presents a lightweight authentication scheme for smart homes that combines biometric data (OTIC) with cryptographic techniques. The goal is to achieve robust security while maintaining minimal computational overhead. The scheme allows mutual authentication among users, gateways, and devices. A formal security analysis is conducted using the Real-or-Random (RoR) model. The results demonstrate the scheme’s resilience against polynomial-time adversaries. The scheme is efficient, robust, and resistant to common attacks, making it a practical solution for securing smart home networks. In the informal analysis, the proposed scheme was compared to other smart home authentication schemes. The comparison addressed various security features, including eavesdropping attacks, fault analysis attacks, and other security aspects. Finally, the performance analysis shows that the scheme performs well in terms of computation cost (memory = 332.2916 bits, CPU = 6.8299%, and Time = 1.5341 ms), as well as a communication cost of 2,400 bits. These results demonstrate that the scheme offers lightweight performance with enhanced security.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcomp.2026.1764808</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcomp.2026.1764808</link>
        <title><![CDATA[Annoyed by cybersecurity? Human-centric perspectives on cybersecurity]]></title>
        <pubDate>Thu, 26 Feb 2026 00:00:00 GMT</pubDate>
        <category>Review</category>
        <author>Ravdeep Kour</author><author>Ramin Karim</author><author>Annika Wägenbauer</author>
        <description><![CDATA[Humans play a vital role in designing, developing, implementing, and using technical systems. For this reason, it is crucial to keep humans in the loop at each phase of these systems to make them more secure and user-friendly. There needs to be a balance between using these systems securely and making them easy to use. Today, under pressure to secure our systems from cyberattacks, we primarily focus on making them secure but often overlook making them easy to use. Thus, the objective of this paper is to provide a human-centric perspective on cybersecurity and to introduce a human-centric framework that enables Industry 5.0, where humans have direct interaction with systems and solutions that are more customer-oriented. To carry out this research, the authors have applied the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines to investigate human-centric research over a 10-year period, from 2015 to 2025. The literature shows that most human-centric research contributions are well-balanced, with conceptual, experimental, and survey approaches each accounting for approximately 64% of the total, indicating a mature blend of theoretical and applied research. These studies are focused on developing structured, strategic approaches that integrate human factors into cybersecurity practices across sectors such as education, government, health, software, smart home networks, and others. In addition, the authors prepared an anonymous questionnaire with fundamental questions about the design of secure systems that are easy to use. The evaluation results show that frequent password resets (33.3%) and frequent authentication (26.7%) are the most “annoying” cybersecurity measures. Additionally, most respondents consider biometric login the most user-friendly security feature, followed by single sign-on and automatic security patch updates. What is missing in the existing literature is a holistic perspective on human-centrism, beyond mere ease of use. We aim to cover that blind spot by introducing our independently developed framework in this paper.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcomp.2026.1719783</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcomp.2026.1719783</link>
        <title><![CDATA[From cybersecurity to digital health: an AI-based eGuide framework for Oman's healthcare centers]]></title>
        <pubDate>Thu, 19 Feb 2026 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <author>Akbar Khanan</author><author>Yasir Abdelgadir Mohamed</author><author>Mohamed Bashir</author><author>Dil Nawaz Hakro</author><author>Danish Garg</author>
        <description><![CDATA[The AI-based eGuide platform for healthcare centers in Oman represents a cornerstone of the Sultanate's critical national health infrastructure, underpinning both patient care and national resilience. This paper develops a comprehensive cybersecurity and governance framework to secure the eGuide system against an increasingly complex threat landscape characterized by phishing campaigns, ransomware incidents, and data leakage risks. Building upon global best practices, the study advances a transition from legacy perimeter security models toward a Zero Trust Architecture, ensuring continuous authentication, dynamic authorization, and micro-segmentation of services. The framework is reinforced by the adoption of ISO/IEC 27000-aligned governance and demonstrable compliance with Oman's Personal Data Protection Law (PDPL), the General Data Protection Regulation (GDPR), and the Health Insurance Portability and Accountability Act (HIPAA). A further contribution is the integration of mathematically verified security primitives, including multi-factor authentication, hybrid RBAC/ABAC access models, and blockchain-enabled audit trails, providing rigorous assurances of privacy, integrity, and accountability. The methodology also incorporates continuous evaluation cycles and penetration testing strategies, enabling proactive detection and mitigation of vulnerabilities. By embedding resilience through architectural scalability, high-availability patterns, and disaster recovery mechanisms, this research positions the eGuide platform as a secure, reliable, and future-ready foundation for Oman's digital health ecosystem.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcomp.2026.1752739</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcomp.2026.1752739</link>
        <title><![CDATA[Privacy-preserving process data generation based on dual-discriminator conditional generative adversarial networks]]></title>
        <pubDate>Wed, 18 Feb 2026 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <author>Yi Guo</author><author>Zhong Li</author>
        <description><![CDATA[Introduction: The growing adoption of data-centric business analytics demands effective safeguarding techniques for process data that contains procedural details. Although Petri net-driven process mining successfully extracts operational knowledge from activity sequences, current protection approaches often diminish analytical value. Therefore, preserving process-related information while ensuring privacy remains a critical challenge. Methods: This study presents a Privacy-Preserving Process Data Generation method based on Dual-Discriminator Conditional Generative Adversarial Networks (P3DGAN) to generate privacy-preserving process data. To avoid mode collapse during model training, P3DGAN employs two discriminators that separately model the dataflow and workflow characteristics of process data. Furthermore, we propose a game-optimization strategy based on Petri net theory to capture the global distribution characteristics of process data. In addition, we introduce a workflow-level privacy metric based on the Euclidean distance between trace variants (ED-TV) to support comprehensive risk assessment. Results: Experimental results on four real-world process datasets demonstrate that our method generates high-quality process data with strong privacy protection compared with competitive peers. Discussion: The proposed framework achieves an effective multi-dimensional privacy-utility trade-off, demonstrating its potential for practical applications in privacy-sensitive domains such as healthcare, banking, and manufacturing.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcomp.2026.1723711</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcomp.2026.1723711</link>
        <title><![CDATA[Security of drivers in intelligent transportation systems: privacy-preserving federated transfer learning for driver drowsiness detection]]></title>
        <pubDate>Fri, 30 Jan 2026 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <author>Khubab Ahmad</author><author>Poh Ping Em</author><author>Nor Azlina Ab Aziz</author>
        <description><![CDATA[Driver drowsiness is a serious concern for road safety within intelligent transportation systems, and it can undermine the safety and dependability of critical transport infrastructure. As modern vehicles become more connected and data-focused, centralized learning systems that share driver and vehicle information can expose private details and raise privacy and security concerns. This study presents a privacy-preserving framework that enables secure learning among multiple vehicles without sharing raw data. It uses On-Board Diagnostic-II sensor data, combined with transfer learning, to detect driver drowsiness in real time within a federated learning framework. Signals such as speed, engine revolutions, throttle position, and steering torque are extracted from cars and then converted into image representations using Mel-Frequency Cepstral Coefficients so the model can identify changes in driving behavior. These image features are used to train a pretrained ResNet50 network; the trained model can classify driver states as drowsy or normal. Each vehicle trains on its own data while the central server updates the shared model weights through a client-weighted averaging strategy that keeps learning balanced for all clients. This process keeps data private while the model is trained on different driving patterns. Using client weighting, DrowsyXnet achieved 98.29% accuracy, which nearly matches the centralized baseline of 98.67%. The latent feature graph showed a clear separation between drowsy and normal states, indicating that the model learns the underlying signals rather than merely incidental correlations. The proposed framework improves intelligent transportation systems while preventing leakage of private data. Integrating the driver drowsiness detection system into vehicles can prevent drowsiness-related accidents and enhance overall road safety.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcomp.2026.1735919</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcomp.2026.1735919</link>
        <title><![CDATA[Operationalising artificial intelligence bills of materials for verifiable AI provenance and lifecycle assurance]]></title>
        <pubDate>Wed, 21 Jan 2026 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <author>Petar Radanliev</author><author>Omar Santos</author><author>Carsten Maple</author><author>Kayvan Atefi</author>
        <description><![CDATA[Introduction: Artificial intelligence (AI) systems increasingly rely on complex, multi-layered software supply chains, creating substantial challenges for reproducibility, transparency, and security assurance. Existing software bills of materials inadequately capture AI-specific artefacts such as model lineage, training provenance, and disclosure metadata, limiting verifiable lifecycle governance. Methods: This study proposes an Artificial Intelligence Bill of Materials (AIBOM) schema that extends the CycloneDX standard through structured schema engineering. The framework integrates cryptographic validation and agent-driven automation to enable machine-verifiable provenance. An autonomous AI pipeline was implemented to conduct continuous environment inspection, vulnerability enrichment, and reproducibility auditing across containerised analytic workflows. Results: Empirical evaluation demonstrates 98.7% reproducibility fidelity across replicated executions, 96.2% precision in vulnerability matching against reference datasets, and a 63% reduction in manual oversight compared with conventional documentation-based approaches. Discussion: The results demonstrate the feasibility of automated provenance assurance and reproducible AI lifecycle validation at scale. The proposed AIBOM framework strengthens software supply chain transparency, enhances provenance integrity, and provides a generalisable methodology for securing AI systems. It further supports alignment with international information security and compliance standards, advancing the scientific foundations of reproducibility engineering in AI-enabled systems.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcomp.2025.1687867</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcomp.2025.1687867</link>
        <title><![CDATA[Identifying key features for phishing website detection through feature selection techniques]]></title>
        <pubDate>Wed, 21 Jan 2026 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <author>Raed Alazaidah</author><author>Mohammad BaniSalman</author><author>Khaled E. Alqawasmi</author><author>Ali Abu Zaid</author><author>Yousuf Hazaimeh</author><author>Fuad Sameh Alshraiedeh</author><author>Emma Qumsiyeh</author>
        <description><![CDATA[Over the past few years, phishing has evolved into an increasingly prevalent form of cybercrime, as more people use the Internet and its applications. Phishing is a type of social engineering that targets users' sensitive or personal information. This paper seeks to achieve two main objectives: first, to identify the most effective classifier for detecting phishing among 40 classifiers representing six learning strategies; second, to determine which feature selection method performs best on phishing website datasets. By analyzing three unique phishing datasets and evaluating eight metrics, this study found that Random Forest and Random Tree were superior at identifying phishing websites compared with other approaches. Similarly, GainRatioAttributeEval, along with InfoGainAttributeEval, performed better than the five alternative feature selection methods considered in this study.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcomp.2025.1728980</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcomp.2025.1728980</link>
        <title><![CDATA[Advanced DNS tunneling detection: a hybrid reinforcement learning and metaheuristic approach]]></title>
        <pubDate>Thu, 15 Jan 2026 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <author>Mahmoud Sammour</author><author>Mohd Fairuz Iskandar Othman</author><author>Aslinda Hassan</author><author>Omar Bhais</author><author>Mohammed Saad Talib</author>
        <description><![CDATA[Introduction: DNS tunneling remains a critical network threat, exploiting the inherent trust in the DNS protocol for unauthorized communication, data exfiltration, and firewall evasion. Methods: Addressing this challenge, this paper introduces a novel, hybrid feature selection framework that integrates the Random Forest classifier with an Enhanced Reinforcement Learning-Guided Grey Wolf Optimizer (EnhancedRLGWO). The EnhancedRLGWO employs a Dueling Deep Q-Network and strategic Opposition-Based Learning to intelligently navigate the feature space and identify an optimal, minimal subset. Results: Evaluated against the benchmark CIRA-CIC-DoHBrw-2020 dataset, the proposed approach achieved a state-of-the-art accuracy of 99.82% and a weighted F1-score of 99.79% using a highly compact subset of only 12 features. This performance significantly outperforms existing machine learning-based DNS tunneling detection systems, such as a hybrid feature selection model achieving 98.3% accuracy and a full 28-feature Random Forest baseline (98.50% accuracy). The experimental results showed the robustness of this method in identifying various types of DNS tunneling attacks, including Iodine, DNS2TCP, and DNScat2, while maintaining performance and accuracy.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcomp.2025.1709565</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcomp.2025.1709565</link>
        <title><![CDATA[RASID: a secure UAV-based platform for intelligent traffic accident assessment with cryptographic verification and AI-driven analysis]]></title>
        <pubDate>Fri, 19 Dec 2025 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <author>Albandari Alsumayt</author><author>Arwa Almalki</author><author>Fatimah Almushraf</author><author>Hams Almansori</author><author>Lara Alfaraj</author><author>Sara Almulla</author><author>Zahrah Aljanabi</author><author>Sammar Algothami</author>
        <description><![CDATA[Traffic accident management typically suffers delays between the reporting of an accident and the submission of the final report, which ultimately causes traffic congestion and an ineffective response by authorities. Utilizing unmanned aerial vehicles (UAVs), especially drones, in accident management can shorten this process significantly compared with the traditional approach. This project provides a simulation of a secure drone platform to assess vehicle traffic accidents. This approach eliminates the need for an investigator's presence at the scene, which speeds up the submission of accident reports and cuts down on response time. Furthermore, the research proposes security measures to ensure the integrity and confidentiality of all data gathered by a drone, both in transmission and in storage. The common risks of gathering data by drone include unauthorized interception, access, and possible alteration of data in transmission between the drone and the ground station. This study introduces RASID, a secure drone-based system that aims to automate the incident assessment process, assure the integrity and confidentiality of data, and speed up reporting. The project simulates realistic drones using the AirSim tool, the authentication and encryption methods were formally verified using ProVerif, and YOLOv8-based AI models were utilized for incident investigation and automated liability assessments. High-resolution photographs of the incident scene are automatically taken by the drones, and TLS encryption is implemented to transfer the data to a secure cloud. After that, the data is encrypted with AES-256 and verified using OpenID Connect. The ProVerif results showed that messages could not be accessed or altered without authorization, proving that the exchanges among the nodes were private and authentic. The AI module achieved a precision of 0.6919, a recall of 0.6244, an F1 score of 0.6564, and an mAP@50 of 0.6717. It was most precise in two scenarios: rear-end and front-end collisions. The findings demonstrate that the RASID system is capable of securely collecting, transmitting, and analyzing accident data, enabling nearly real-time crash assessments. Compared with traditional crash assessment methods, this study improves the efficiency, accuracy, and cybersecurity of traffic accident management by integrating secure drone operations, proven encryption mechanisms, and AI-powered analytics.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcomp.2025.1703586</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcomp.2025.1703586</link>
        <title><![CDATA[Can LLMs effectively provide game-theoretic-based scenarios for cybersecurity?]]></title>
        <pubDate>Thu, 11 Dec 2025 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <author>Daniele Proverbio</author><author>Alessio Buscemi</author><author>Alessandro Di Stefano</author><author>The Anh Han</author><author>German Castignani</author><author>Pietro Liò</author>
        <description><![CDATA[Introduction: Game theory has long served as a foundational tool in cybersecurity to test, predict, and design strategic interactions between attackers and defenders. The recent advent of Large Language Models (LLMs) offers new tools and challenges for the security of computer systems. In this work, we investigate whether classical game-theoretic frameworks can effectively capture the behaviors of LLM-driven actors and bots. Methods: Using a reproducible framework for game-theoretic LLM agents, we investigate two canonical scenarios—the one-shot zero-sum game and the dynamic Prisoner's Dilemma—and we test whether LLMs converge to expected outcomes or exhibit deviations due to embedded biases. We experiment with four state-of-the-art LLMs and five natural languages (English, French, Arabic, Vietnamese, and Mandarin Chinese) to assess linguistic sensitivity. Results: For both games, we observe that the final payoffs are influenced by agents' characteristics such as personality traits or knowledge of repeated rounds. We also uncover an unexpected sensitivity of the final payoffs to the choice of language, which should warn against the indiscriminate application of LLMs in cybersecurity and calls for in-depth studies, as LLMs may behave differently when deployed in different countries. We also employ quantitative metrics to evaluate the internal consistency and cross-language stability of LLM agents. Discussion: In addition to uncovering unexpected behaviors requiring attention from scholars and practitioners, our work can help guide the selection of the most stable LLMs and the optimization of models for secure applications.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcomp.2025.1673393</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcomp.2025.1673393</link>
        <title><![CDATA[Correction: Investigating methods for forensic analysis of social media data to support criminal investigations]]></title>
        <pubDate>2025-11-17T00:00:00Z</pubDate>
        <category>Correction</category>
        <author>Muhammad Arshad</author><author>Ashfaq Ahmad</author><author>Choo Wou Onn</author><author>Emmanuel Arko Sam</author>
        <description></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcomp.2025.1683495</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcomp.2025.1683495</link>
        <title><![CDATA[Targeted injection attack toward the semantic layer of large language models]]></title>
        <pubDate>2025-11-11T00:00:00Z</pubDate>
        <category>Original Research</category>
        <author>Yi Zhang</author><author>Jantan Aman</author>
        <description><![CDATA[In the AI era, high-value targeted injection attacks and defences based on the semantic layer of Large Language Models will become the main battlefield for security confrontations. Ultimately, any form of artificial information warfare boils down to a battle at the semantic level. This involves using information technology to attack the semantic layer and, consequently, the human brain. Specifically, the goal is to launch targeted attacks on the brains of specific decision-making groups within society, thereby undermining human social decision-making mechanisms. The ultimate goal is to maximize value output in the fields of political economy, religion, and ideology, including wealth and power, with minimal investment in information technology. This paper uses the pyramid model perspective to unify the information security confrontation protocol stack, including biological intelligence, human intelligence, and artificial intelligence. It begins by analysing the characteristics and explainability of AI models, and feasible means for their multi-dimensional offensive and defensive mechanisms, proposing an open engineering practice strategy that leverages semantic-layer gaming between LLMs. This strategy involves targeted training-set contamination at the semantic layer and penetration induction through social networks. At the end, the article expands the contamination of training-set data sources to the swarm oscillating environment in human-machine sociology and ethical confrontation, then discusses attacks targeting the information cocoon of individuals or communities and extends the interaction mechanism between humans and LLMs and GPTs above the semantic layer to the evolution dynamics of a Fractal Pyramid Model.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcomp.2025.1642566</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcomp.2025.1642566</link>
        <title><![CDATA[Entropy measurement and online quality control of bit streams by a true random bit generator]]></title>
        <pubDate>2025-10-07T00:00:00Z</pubDate>
        <category>Original Research</category>
        <author>Cesare Gerolimetto Fabrello</author><author>Valeria Rossi</author><author>Kamil Witek</author><author>Alberto Trombetta</author><author>Mateusz Baszczyk</author><author>Piotr Dorosz</author><author>Wojciech Kucewicz</author><author>Massimo Caccia</author>
        <description><![CDATA[Generating random bit streams is required in various applications, most notably in cyber-security, where it is essential for Internet of Everything applications to enable secure communication between interconnected devices. Ensuring high-quality and robust randomness is crucial to mitigate risks associated with predictability and system compromise. True random numbers provide the highest levels of unpredictability. However, known systematic biases that can emerge from physical imperfections, environmental variations, and device aging in the processes exploited for random number generation must be carefully monitored. This article reports the implementation and characterization of an online procedure for the detection of anomalies in a true random bit stream. It is based on the NIST adaptive proportion and repetition count tests, complemented by statistical analysis relying on the Monobit and RUNS tests. The procedure is implemented in firmware through dedicated hardware accelerators processing configurable-length sequences, with automated anomaly detection triggering alerts after three consecutive threshold violations. The implementation runs simultaneously with bit stream generation and also provides an estimate of the entropy of the source. A statistical analysis of the results from the NIST procedure to evaluate whether the symbols of the bit stream are independently and identically distributed is also performed, leading to a computation of the minimum entropy of the source that cross-checks the previously mentioned estimate. The experimental validation of the approach is performed on the bit streams generated by a quantum, silicon-based entropy source.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcomp.2025.1570085</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcomp.2025.1570085</link>
        <title><![CDATA[Modeling the dynamics of misinformation spread: a multi-scenario analysis incorporating user awareness and generative AI impact]]></title>
        <pubDate>2025-10-03T00:00:00Z</pubDate>
        <category>Original Research</category>
        <author>Kurunandan Jain</author><author>Krishnashree Achuthan</author>
        <description><![CDATA[The proliferation of misinformation on social media threatens public trust, public health, and democratic processes. We propose three models that analyze fake news propagation and evaluate intervention strategies. Grounded in epidemiological dynamics, the models include: (1) a baseline Awareness Spread Model (ASM), (2) an Extended Model with fact-checking (EM), and (3) a Generative AI-Influenced Spread model (GIFS). Each incorporates user behavior, platform-specific dynamics, and cognitive biases such as confirmation bias and emotional contagion. We simulate six distinct scenarios: (1) Accurate Content Environment, (2) Peer Network Dynamics, (3) Emotional Engagement, (4) Belief Alignment, (5) Source Trust, and (6) Platform Intervention. All models converge to a single, stable equilibrium. Sensitivity analysis across key parameters confirms model robustness and generalizability. In the ASM, forwarding rates were lowest in scenarios 1, 4, and 6 (1.47%, 3.41%, 2.95%) and significantly higher in 2, 3, and 5 (19.67%, 56.52%, 29.47%). The EM showed that fact-checking reduced spread to as low as 0.73%, with scenario-based variation from 1.16 to 17.47%. The GIFS model revealed that generative AI amplified spread by 5.7%–37.8%, depending on context. ASM highlights the importance of awareness; EM demonstrates the effectiveness of fact-checking mechanisms; GIFS underscores the amplifying impact of generative AI tools. Early intervention, coupled with targeted platform moderation (scenarios 1, 4, 6), consistently yields the lowest misinformation spread, while emotionally resonant content (scenario 3) consistently drives the highest propagation.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcomp.2025.1646679</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcomp.2025.1646679</link>
        <title><![CDATA[A deep one-class classifier for network anomaly detection using autoencoders and one-class support vector machines]]></title>
        <pubDate>2025-10-03T00:00:00Z</pubDate>
        <category>Original Research</category>
        <author>Polyzois Bountzis</author><author>Dimitris Kavallieros</author><author>Theodora Tsikrika</author><author>Stefanos Vrochidis</author><author>Ioannis Kompatsiaris</author>
        <description><![CDATA[IntroductionThe integration of deep learning models into Network Intrusion Detection Systems (NIDS) has shown promising advancements in distinguishing normal network traffic from cyber-attacks due to their capability to learn complex non-linear patterns. These approaches typically rely on both benign and malicious network traffic during training. However, in many organizations, collecting malicious traffic is challenging due to privacy restrictions, the high cost of manual labeling, and the requirement for advanced security expertise.MethodsIn this study, we introduce a deep one-class classification model that is trained exclusively on flow-based benign network traffic data, with the goal of identifying attacks during inference. The proposed anomaly detection model consists of two steps: a One-Class Support Vector Machine (OC-SVM) and a deep AutoEncoder (AE). While autoencoders have shown great potential in anomaly detection, their effectiveness can be undermined by spurious network activity located on the boundaries of their discriminating capabilities, thus failing to identify malicious behavior. Our model leverages the topological structure of the OC-SVM to generate decision scores for each traffic flow, which are subsequently incorporated into an autoencoder as part of the input feature space.ResultsThis approach enhances the ability of the autoencoder to detect incidents that deviate from normal patterns. Furthermore, we propose a heuristic method for tuning the trade-off parameter of the OC-SVM, based only on one-class data, achieving performance comparable to grid-based methods that require both benign and malicious labeled data. Experimental results on a benchmark network intrusion data set, the UNSW-NB15, suggest that OCSVM-AE performs well on unseen attacks and is more effective than traditional and deep-learning-based one-class classifiers.DiscussionThe method makes no specific assumptions about the data distribution, making it broadly applicable and suitable as a complementary tool to signature-based intrusion detection systems.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcomp.2025.1582206</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcomp.2025.1582206</link>
        <title><![CDATA[Component features based enhanced phishing website detection system using EfficientNet, FH-BERT, and SELU-CRNN methods]]></title>
        <pubDate>2025-10-01T00:00:00Z</pubDate>
        <category>Original Research</category>
        <author>Mahmoud Murhej</author><author>G. Nallasivan</author>
        <description><![CDATA[IntroductionPhishing is a type of cybercrime used by hackers to steal sensitive user information, making it essential to detect phishing attacks on websites. Many existing works have utilized Uniform Resource Locator (URL) links and Document Object Model (DOM) tree structures for Phishing Website Detection (PWD). However, since phishing websites imitate legitimate websites, these approaches often produce inaccurate detection results.MethodsTo enhance detection efficiency, we propose a PWD system that focuses on important website features and components. The process begins with collecting URL links from phishing website datasets, followed by the generation of Hypertext Markup Language (HTML) formats. A DOM tree structure is then constructed from the HTML, and components are extracted along with Natural Language Processing (NLP) features, credentials, URL, DOM tree similarity, and component features. The DOM-tree components are converted into score values using Feature Hasher-Bidirectional Encoder Representations from Transformers (FH-BERT). These score values are fused with component features, and significant features are selected using an Entropy-based Chameleon Swarm Algorithm (ECSA).ResultsThe final classification is performed by a Scaled Exponential Linear Unit Convolutional Recurrent Neural Network (SELU-CRNN). Simulation results demonstrate that the proposed technique improves PWD performance, achieving higher accuracy (98.42%) and reduced training time (63,003 ms) compared to prevailing methods.DiscussionBy integrating component, semantic, and structural features, the proposed model enhances both robustness and efficiency, making it an effective solution for phishing website detection.]]></description>
      </item>
      </channel>
    </rss>