Generative Artificial Intelligence (GenAI) for Cybersecurity Applications

About this Research Topic

This Research Topic is still accepting articles.

Background

Historically, Artificial Intelligence (AI) was deployed to process, learn from, and predict information by imitating the abilities of intelligent beings, enabling a digital device (e.g., a smartphone or smartwatch) or a computer-controlled robot to execute smart tasks. Today, Generative AI (GenAI) deploys Large Language Models (LLMs) or Multimodal Large Language Models (MLLMs) to analyze existing data, learn from it, and then craft new content (e.g., text, images, video, and audio) with similar characteristics. Technically, GenAI tools (e.g., ChatGPT-4o) still carry negative cyberpsychological impacts (e.g., distortion of users' digital trust and critical thinking) and technological constraints, such as hallucinations and unresolved questions of privacy, safety, content integrity, copyright, and ownership. At the same time, GenAI is revolutionizing cybersecurity: it enriches defence mechanisms and empowers new security strategies, while also handing malicious actors novel tools for crafting unprecedented cyberattacks. Therefore, to address the emerging threats that GenAI tools enable in our society, cyber researchers and experts should investigate innovative technological, legal, and ethical solutions.

In general, GenAI models can process vast amounts of data to surface anomalies and patterns indicative of potential cyber threats, supporting cybersecurity applications such as deepfake detection or tracking misinformation propagation. By recognizing and mitigating potential threats before they pose a significant danger to society, GenAI models enable a more proactive approach to cybersecurity. They can also simulate complicated cyberattack scenarios and automate routine processes, allowing rapid reaction to cyber incidents.
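As an illustrative example of the pattern-recognition idea above, the sketch below scores a text snippet with a small language model's perplexity; unusually low perplexity is one (imperfect) signal that a message, such as a phishing email, may be machine-generated. This is a minimal sketch, assuming the Hugging Face transformers library and the open gpt2 model; the threshold value and the flag_suspicious helper are hypothetical placeholders, not validated detector settings.

    # Minimal perplexity-based screening sketch (assumes: pip install torch transformers).
    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    model_name = "gpt2"  # small open model, used purely for illustration
    tokenizer = GPT2TokenizerFast.from_pretrained(model_name)
    model = GPT2LMHeadModel.from_pretrained(model_name)
    model.eval()

    def perplexity(text: str) -> float:
        """Compute the language-model perplexity of a text snippet."""
        enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
        with torch.no_grad():
            # When labels == input_ids, the model returns the mean cross-entropy loss.
            out = model(**enc, labels=enc["input_ids"])
        return float(torch.exp(out.loss))

    def flag_suspicious(text: str, threshold: float = 25.0) -> bool:
        """Flag text whose perplexity falls below a (hypothetical) threshold,
        a crude signal that it may be machine-generated."""
        return perplexity(text) < threshold

    if __name__ == "__main__":
        sample = "Dear customer, your account has been suspended. Click the link below."
        print(f"perplexity={perplexity(sample):.1f}, suspicious={flag_suspicious(sample)}")

In a deployed pipeline, a perplexity score of this kind would be only one feature among many, combined with sender reputation, URL analysis, and other signals.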

Conversely, malicious attackers can leverage GenAI technologies to devise more deceptive and advanced cyberattacks. For example, GenAI tools can be employed to construct highly convincing deepfake-phishing content (e.g., the fraudulent Elon Musk Quantum AI trading bot) that is difficult for classical algorithms or human eyes to detect. In addition, GenAI-crafted malware can adapt and evolve, making it harder for defence mechanisms to recognize and neutralize. Moreover, public usage of GenAI tools erodes users' critical thinking abilities and fosters an illusion of trust in AI-based systems that is unwarranted, given problems such as hallucinations. The dual-use nature of GenAI technologies means that while they provide powerful tools for enhancing cybersecurity, they also increase the complexity of potential cyber threats, requiring continuous research and development in defensive strategies.

This Research Topic focuses on scientific innovations and policy enforcement strategies that safeguard society from uncontrollable and unexplainable GenAI services and tackle real-world cybersecurity challenges. We therefore invite educators, policy-makers, researchers, and practitioners to disseminate their breakthroughs in shaping safer GenAI technologies and limiting their possible harm to society. We are looking for research or review papers that propose new preventive approaches and/or frameworks to mitigate the cyber risks of GenAI-based public services, such as deepfake-phishing, cyber-scamming, and cyberbullying.

Topics of interest:

- Devising explainable and trustworthy GenAI tools: Designing approaches for deploying GenAI models to assess the reliability and authenticity of multimedia content (e.g., deepfake or misinformation detection); a minimal supervised-detection sketch follows this list.

- Evaluating cyberpsychological impacts of GenAI tools on users: Assessing the potential cyberpsychological impacts of acquiring knowledge through GenAI tools (e.g., distortion of users' creativity and critical thinking abilities).

- Proposing effective cyber-wellness education (or digital media literacy) approaches: Addressing the practical necessities of devising educational concepts for safer user-to-GenAI interactions.

- Developing GenAI safety assurance models: Developing enforcement models that embed bounded morality in GenAI-based systems, ensuring that they are ethically aligned with international laws such as the EU AI Act and the General Data Protection Regulation (GDPR).

- Distinguishing GenAI safety risk levels: Investigating three aspects of GenAI safety (e.g., capability, generality, and control) as quantitative measures and developing verifiable metrics for these dimensions, so that GenAI risks can be characterized and investigated more accurately.

- Exploring correlations of accountability, explainability, and transparency with GenAI safety: Investigating characteristics of GenAI-based tools, such as explainability, transparency, accountability, and ethics, to discover how they relate to the safety of their services.

- Assessing GenAI ethical integrity implementation: Analyzing the significance of GenAI ethical integrity and developing novel solutions that predict the issues driving unethical practices (e.g., hallucinations) in GenAI services.

- Investigating managerial models to reduce gaps between enforcement policies and actions: Exploring managerial strategies for integrating and implementing GenAI-related policies so as to reduce the possible risks and negative impacts of such services on society.

- Developing cyber-wellness education content for everyone: Exploring effective educational practices to train users of all backgrounds to protect their data and financial assets while interacting with GenAI-crafted content.

- Investigating deepfake-empowered social engineering attacks: Novel deepfake-enabled cyberattacks, such as deepfake-phishing, cyber-scamming, and cyberbullying, are on the rise and require detection tools and defence mechanisms to reduce their risks to society.
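As a concrete starting point for the deepfake-detection topic above, the sketch below fine-tunes a pretrained ResNet-18 as a binary real-vs-fake image classifier. This is a minimal sketch, assuming PyTorch and torchvision; the data/train/{real,fake} directory layout and all hyperparameters are hypothetical placeholders rather than recommended settings.

    # Minimal supervised deepfake-image detector sketch (assumes: pip install torch torchvision).
    import torch
    import torch.nn as nn
    from torch.utils.data import DataLoader
    from torchvision import datasets, models, transforms

    transform = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],  # ImageNet statistics
                             std=[0.229, 0.224, 0.225]),
    ])

    # Expects data/train/real/*.jpg and data/train/fake/*.jpg (assumed layout).
    train_set = datasets.ImageFolder("data/train", transform=transform)
    train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.fc = nn.Linear(model.fc.in_features, 2)  # two classes: real, fake

    device = "cuda" if torch.cuda.is_available() else "cpu"
    model.to(device)
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
    criterion = nn.CrossEntropyLoss()

    model.train()
    for epoch in range(3):  # a short illustrative run, not a tuned schedule
        for images, labels in train_loader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
        print(f"epoch {epoch}: last-batch loss {loss.item():.4f}")

Detectors of this kind must be retrained continuously, since deepfake generators evolve; the sketch only illustrates the supervised-classification baseline that much of the literature builds upon.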

Article types and fees

This Research Topic accepts the following article types, unless otherwise specified in the Research Topic description:

  • Brief Research Report
  • Clinical Trial
  • Community Case Study
  • Conceptual Analysis
  • Data Report
  • Editorial
  • FAIR² Data
  • FAIR² DATA Direct Submission
  • General Commentary

Articles that are accepted for publication by our external editors following rigorous peer review incur a publishing fee charged to Authors, institutions, or funders.

Keywords: GenAI Security, GenAI Privacy, GenAI Safety Assurance, GenAI Enabled Cyberattacks, GenAI-based Defense Mechanisms

Important note: All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.

Manuscripts can be submitted to this Research Topic via the main journal or any other participating journal.
