New Trends in AI-Generated Media and Security


About this Research Topic

This Research Topic is still accepting articles.

Background

AI-generated media encompasses a broad spectrum of content, including images, videos, audio, and text, produced with AI technologies. Recent advances in AI models offer numerous benefits across many aspects of life and work: AI-generated media can enhance education and healthcare, boost creativity, optimize business processes, and improve accessibility. However, multimedia generated by AI models also carries significant risks of misuse and societal upheaval. In particular, these models can be employed to craft convincing fake news and deepfakes, undermining information accuracy and public trust.

Despite substantial progress in generative modeling (e.g., GANs, diffusion models, and large language models), current research often lacks a coordinated focus on responsible design, fair deployment, and societal alignment of such technologies. Moreover, the tools for detection, provenance tracing, watermarking, and forensic analysis often lag behind the pace of generation techniques, creating asymmetries that adversaries can exploit. This Research Topic aims to address the following core problem: How can we ensure the responsible and secure development, detection, and deployment of AI-generated media in alignment with societal values and public trust?

To address this, we seek contributions that:
• Analyze emerging trends and societal impacts of generative AI across domains such as elections, education, journalism, and healthcare.
• Propose novel detection and authentication techniques for AI-generated content across modalities.
• Investigate robustness, fairness, and generalization of detection models under distribution shifts.
• Develop frameworks for watermarking, provenance tracing, and secure media attribution.
• Explore regulatory, ethical, and human-centered considerations in deploying generative technologies.

We welcome original research articles, reviews, and case studies that advance this field, including but not limited to:

AI-Generated Media:
• Foundation Models, Diffusion Models, Large Multimodal Models (Agents) in Multimedia
• Decentralized Foundation Models from Small Data
• Media Generation with Multimodal Large Language Models
• Causal and Mechanistic Explanations of Large Language Models
• Visual and Vision-Language Pre-training
• AI Reasoning
• AI for Social Media
• Generative AI for Medicine and Well-being

Media Security:
• Fake News/Media Detection
• Media Forensics and Anti-Forensics
• Adversarial Attack and Defense in AI-generated Media
• Deepfakes/Misinformation/Disinformation
• Media/DNN Watermarking
• Reliability for Multimedia Applications and Systems
• The Security of Large AI Models
• Authenticity in Human-AI Generated Content
• Responsible AI for Multimedia


Article types and fees

This Research Topic accepts the following article types, unless otherwise specified in the Research Topic description:

  • Brief Research Report
  • Clinical Trial
  • Community Case Study
  • Conceptual Analysis
  • Data Report
  • Editorial
  • FAIR² Data
  • FAIR² Data Direct Submission
  • General Commentary

Articles accepted for publication by our external editors following rigorous peer review incur a publishing fee, charged to authors, institutions, or funders.

Keywords: Generative AI, Security, Multimedia, Media forensics, Large AI Models, Generation, Detection, Responsible AI

Important note: All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.

Topic editors

Manuscripts can be submitted to this Research Topic via the main journal or any other participating journal.

Impact

  • 36k Topic views
  • 29k Article views
  • 3,619 Article downloads