To unravel increasingly elusive adversarial image forgeries, various approaches have been proposed in the recent literature, including:
(1) camera sensor identification approaches, which rely on the preservation of sensor characteristics;
(2) image encoding-based approaches, which exploit low-level image compression features; and
(3) the most widely investigated category, editing-based approaches, which comprises both active and passive techniques.
Active techniques embed a protective element (e.g., a signature or watermark) into the image to secure it against tampering attempts, whereas passive methods rely on measurable traces of possible manipulations, such as global and local noise signals, artifacts, foreground/background inconsistencies, and image illumination.
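The active-protection idea above can be illustrated with a minimal fragile-watermark sketch: a keyed hash of the image content is spread across the pixel least significant bits, so any later edit breaks the embedded mark. This is a toy scheme for illustration only; the function names and the synthetic pixel data are hypothetical, not a production watermarking design.

```python
import hashlib

def embed_fragile_watermark(pixels, key=b"secret"):
    """Embed a keyed hash of the image content into the pixel LSBs.

    `pixels` is a flat list of 8-bit grayscale values. The hash is computed
    over the pixels with their LSBs zeroed, so the mark depends only on the
    content being protected.
    """
    base = bytes(p & 0xFE for p in pixels)
    digest = hashlib.sha256(key + base).digest()
    # Spread the digest bits across the pixel LSBs (repeating if needed).
    bits = [(digest[(i // 8) % len(digest)] >> (i % 8)) & 1
            for i in range(len(pixels))]
    return [(p & 0xFE) | b for p, b in zip(pixels, bits)]

def verify_fragile_watermark(pixels, key=b"secret"):
    """Return True if every pixel LSB matches the expected keyed hash bit."""
    base = bytes(p & 0xFE for p in pixels)
    digest = hashlib.sha256(key + base).digest()
    bits = [(digest[(i // 8) % len(digest)] >> (i % 8)) & 1
            for i in range(len(pixels))]
    return all((p & 1) == b for p, b in zip(pixels, bits))

# Synthetic 64-pixel "image" for demonstration.
image = [(i * 37) % 256 for i in range(64)]
marked = embed_fragile_watermark(image)
assert verify_fragile_watermark(marked)       # untouched image passes
tampered = list(marked)
tampered[3] = 10                              # a local edit breaks the mark
assert not verify_fragile_watermark(tampered)
```

Because the hash covers the full content, even a single-pixel edit invalidates the mark with overwhelming probability; real schemes additionally localize the tampered region and survive benign processing such as compression.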
The passive methods are direct and more general; however, they are vulnerable to image compression, can fail against color alteration, and depend to some extent on image resolution. Recent sophisticated generative algorithms (e.g., GANs, VAEs, and diffusion models) make uncovering image forgeries considerably harder.
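The noise-trace idea behind passive methods can be sketched as follows: a spliced region often lacks the sensor noise of its surroundings, so blocks whose high-frequency residual energy deviates from the rest of the image become suspect. This is a minimal illustration on a synthetic 16x16 image; the function names and thresholding-by-minimum strategy are assumptions for the sketch, not an established detector.

```python
import random
import statistics

def noise_residual(img):
    """Residual = pixel minus its 3x3 local mean; edits often disturb it."""
    h, w = len(img), len(img[0])
    res = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            local = [img[y + dy][x + dx]
                     for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            res[y][x] = img[y][x] - sum(local) / 9.0
    return res

def block_residual_std(img, block=8):
    """Standard deviation of the noise residual per block."""
    res = noise_residual(img)
    stds = {}
    for by in range(0, len(img) - block + 1, block):
        for bx in range(0, len(img[0]) - block + 1, block):
            vals = [res[y][x]
                    for y in range(by, by + block)
                    for x in range(bx, bx + block)]
            stds[(by, bx)] = statistics.pstdev(vals)
    return stds

# Synthetic image: noisy camera-like background, plus a smooth "pasted"
# 8x8 patch whose sensor noise was destroyed by the edit.
random.seed(0)
img = [[128 + random.gauss(0, 8) for _ in range(16)] for _ in range(16)]
for y in range(8, 16):
    for x in range(8, 16):
        img[y][x] = 128.0  # spliced region: no noise

stds = block_residual_std(img)
suspect = min(stds, key=stds.get)  # suspiciously smooth block: (8, 8)
```

Real passive detectors use far richer statistics (PRNU correlation, learned noise fingerprints), but the core reasoning is the same: local noise that is inconsistent with the rest of the image is evidence of manipulation, which is also why compression and resizing, by flattening that noise, degrade these methods.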
In this Research Topic collection, we solicit high-quality research on exposing forgeries and manipulations in images and videos using AI. We welcome original contributions addressing the following items:
• Active image protection methods (e.g., signature, watermarking, etc.)
• Camera sensor-based deepfake countermeasures
• Image encoding-based deepfake detection
• Passive editing-based approaches, e.g., exploiting noise, intensity, or other valuable image attributes
Submissions may also draw on related fields, such as face anti-spoofing, tampering detection in remote sensing/medical images/documents, and image/scene generation.
Keywords:
Deepfake detection, image tampering, image integrity, generative models, image protection
Important Note:
All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.