In real-world environments, visual perception systems often face adverse visibility conditions, including fog, rain, snow, low light, underwater distortions, haze, glare, smoke, and occlusions. These conditions significantly degrade image quality and reliability, reducing the performance of critical applications in autonomous driving, remote sensing, surveillance, medical imaging, robotics, and marine and aerial navigation. Addressing these challenges requires robust image processing techniques capable of restoring or interpreting degraded visual information and recovering meaningful features for downstream tasks. Although numerous approaches have been proposed in recent years, many challenges remain open or underexplored, such as the scarcity of data for training self-supervised approaches and the application of such methods in realistic scenarios.
While traditional image enhancement techniques often prove ineffective under severe degradation, Machine Learning (ML) approaches, and Deep Learning (DL) in particular, have made significant progress in recent years, with promising results both in restoring image and video quality and in robustly extracting features for classification, segmentation, and other tasks.
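As a point of reference for prospective authors, the sketch below illustrates one common form such learning-based restoration takes: a small residual CNN trained with an L1 loss on paired degraded/clean images. It is a minimal, illustrative example only, not a method endorsed by this Topic; the architecture, hyperparameters, and the assumption of paired training data are all choices made for illustration.

```python
# Illustrative sketch: a tiny residual restoration CNN in PyTorch,
# trained with an L1 loss on paired (degraded, clean) images in [0, 1].
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyRestorer(nn.Module):
    """Predicts a residual correction that is added back to the degraded input."""
    def __init__(self, channels=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 3, 3, padding=1),
        )

    def forward(self, x):
        # Residual learning: the network only has to model the degradation.
        return torch.clamp(x + self.body(x), 0.0, 1.0)

model = TinyRestorer()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(degraded, clean):
    """One optimization step on a batch of paired images (both N x 3 x H x W)."""
    optimizer.zero_grad()
    loss = F.l1_loss(model(degraded), clean)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Real systems in this area typically add perceptual or adversarial losses and far deeper backbones; the residual-plus-L1 pattern shown here is simply a widely used starting point.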
The objective of this Research Topic is to advance the scientific understanding and practical implementation of ML methods for image processing under real-world visibility constraints, with a particular focus on DL techniques. This goal is pursued by collecting innovative algorithms, learning-based approaches, and their practical applications, including surveys and reviews.
Topics of interest include, but are not limited to, the following:
- Image enhancement and restoration under fog, haze, or low illumination
- Learning-based dehazing, deraining, and desnowing techniques
- Adverse-weather-aware object detection and tracking
- Underwater and aerial image enhancement using ML/DL
- Domain adaptation and generalization across weather conditions
- Synthetic data generation and simulation for training under adverse conditions (see the sketch after this list)
- Multimodal and sensor fusion approaches
- Explainable and interpretable ML/DL models in degraded visibility scenarios
- Benchmarking datasets and evaluation protocols specific to adverse environments
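To illustrate the synthetic-data topic above: haze is commonly simulated with the standard atmospheric scattering model, I(x) = J(x) t(x) + A (1 - t(x)), where t(x) = exp(-beta d(x)) is the transmission, J the haze-free scene, A the global airlight, and d the scene depth. The sketch below assumes a clean image and a depth map normalized to [0, 1]; the parameter values are illustrative, not prescribed.

```python
import numpy as np

def synthesize_haze(clean, depth, beta=1.5, airlight=0.9):
    """Render synthetic haze with the atmospheric scattering model.

    clean:    H x W x 3 float array in [0, 1] (haze-free image J)
    depth:    H x W float array in [0, 1] (normalized scene depth d)
    beta:     scattering coefficient; larger values give denser haze
    airlight: global atmospheric light A
    """
    t = np.exp(-beta * depth)  # transmission t(x) = exp(-beta * d(x))
    hazy = clean * t[..., None] + airlight * (1.0 - t[..., None])
    return np.clip(hazy, 0.0, 1.0)
```

Pipelines of this kind are often used to produce paired training data when real degraded/clean image pairs are unavailable.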
We welcome interdisciplinary contributions from the computer vision, AI, remote sensing, robotics, and environmental imaging communities, whether they present novel methodologies, empirical validations, deployment strategies, or theoretical insights.
Article types and fees
Articles that are accepted for publication by our external editors following rigorous peer review incur a publishing fee charged to Authors, institutions, or funders.
Article types
This Research Topic accepts the following article types, unless otherwise specified in the Research Topic description:
Brief Research Report
Community Case Study
Conceptual Analysis
Data Report
Editorial
FAIR² Data
FAIR² DATA Direct Submission
General Commentary
Hypothesis and Theory
Methods
Mini Review
Opinion
Original Research
Perspective
Policy and Practice Reviews
Review
Study Protocol
Systematic Review
Technology and Code
Keywords: Deep Learning, Image Processing, Image Restoration, Machine Learning
Important note: All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.