Enhancing AI Robustness in Cybersecurity: Challenges and Strategies


About this Research Topic

This Research Topic is still accepting articles.

Background

Artificial intelligence systems are becoming increasingly pivotal in cybersecurity, yet their adoption is fraught with significant challenges. Current deployments often suffer from inadequate robustness evaluations, largely due to the limited availability and quality of datasets. This lack of thorough validation introduces various risks, including ethical dilemmas, legal issues, and violations of digital rights. Furthermore, as AI technologies orchestrate and manage network operations, they open up new avenues for attackers, exacerbating vulnerabilities inherent in AI and machine learning algorithms.

This Research Topic aims to comprehensively explore the challenges AI systems encounter within the cybersecurity domain. Specific emphasis will be placed on understanding adversarial AI attacks, such as injection tactics, and on defense mechanisms, such as adversarial training, that can fortify systems against such exploits. A further goal is to delve into deception techniques, including the use of honeypots, digital twins, and virtual personas, and into the role of explainable AI in enhancing transparency and trust in AI functionalities.
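As a concrete illustration of the attack-and-defense pairing mentioned above, the sketch below applies the Fast Gradient Sign Method (FGSM), a standard gradient-based evasion attack, to a toy one-dimensional logistic-regression classifier. The model weights, input, and epsilon here are purely illustrative assumptions, not drawn from any system discussed in this Research Topic.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    """Fast Gradient Sign Method for a 1-D logistic-regression model.

    Moves input x by eps in the direction that increases the model's
    cross-entropy loss: x' = x + eps * sign(dL/dx).
    """
    p = sigmoid(w * x + b)        # model's predicted probability of class 1
    grad_x = (p - y) * w          # gradient of the loss with respect to x
    return x + eps * math.copysign(1.0, grad_x)

# Toy model and a correctly classified input (illustrative values).
w, b = 2.0, 0.0
x, y = 0.6, 1.0

x_adv = fgsm_perturb(x, y, w, b, eps=1.0)
print(sigmoid(w * x + b) > 0.5)      # True: the clean input is classified as class 1
print(sigmoid(w * x_adv + b) > 0.5)  # False: the perturbed input flips the decision
```

Adversarial training, the defense named above, then amounts to augmenting the training set with such perturbed pairs (x_adv, y) so the retrained model classifies them correctly.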

To adequately encapsulate the complexities at the intersection of AI and cybersecurity, this Research Topic will focus on both "AI for Cybersecurity" and "Cybersecurity for AI" aspects. Within this framework, the scope is clearly delineated as follows:

We primarily focus on the intersection of emerging AI technologies with the dynamic realm of cybersecurity threats, and we welcome contributions on a variety of pertinent themes:

- Economic implications of adversarial AI
- Ethical considerations in adversarial AI
- AI and ML techniques in cyber threat intelligence
- Machine learning in automated software testing
- Human factors affecting adversarial AI
- Defensive strategies against adversarial ML attacks
- Machine learning applications in analyzing cryptographic protocols
- Privacy-enhancing techniques in machine learning

This Research Topic is supported by the following EU-funded projects:

- AI-ASsisted cybersecurity platform empowering SMEs to defend against adversarial AI attacks (AIAS)
- Cloud-based Platform-agnostic Adversarial AI Defence framework (CPAID)
- AI Attack and Defense for the Smart Healthcare (ANTIDOTE)
- Revolutionised Enhanced Supply Chain Automation with Limited Threats Exposure (RESCALE)


Article types and fees

This Research Topic accepts the following article types, unless otherwise specified in the Research Topic description:

  • Brief Research Report
  • Curriculum, Instruction, and Pedagogy
  • Data Report
  • Editorial
  • FAIR² Data
  • General Commentary
  • Hypothesis and Theory
  • Methods
  • Mini Review

Articles that are accepted for publication by our external editors following rigorous peer review incur a publishing fee charged to Authors, institutions, or funders.

Keywords: Adversarial AI attacks, Adversarial training, Cyber attack detection and mitigation, Deception mechanisms, AI in cybersecurity, Defensive strategies, Explainable AI, Robustness evaluation

Important note: All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.

Manuscripts can be submitted to this Research Topic via the main journal or any other participating journal.

Impact

  • 3,670 Topic views
  • 1,905 Article views
  • 156 Article downloads