Multimodal remote sensing (MRS) integrates heterogeneous data streams from optical, synthetic aperture radar (SAR), hyperspectral, and thermal sensors, enabling pivotal applications in environmental monitoring, disaster response, and urban planning. While deep learning (DL) has long been a cornerstone of remote sensing (RS) image processing, recent years have witnessed transformative advances, including foundation models, transferable multimodal fusion, and self-supervised learning, that directly address long-standing limitations. Unlike conventional DL-RS integrations, which have been extensively explored, these emerging approaches tackle MRS’s core challenges (e.g., high dimensionality, sensor heterogeneity, and scarce labelled data) with unprecedented efficiency and scalability. As interdisciplinary innovation accelerates, there is an urgent need to synthesize cutting-edge DL-driven methodologies tailored specifically to MRS. This Research Topic focuses on these frontier developments, bridging state-of-the-art DL research with practical MRS requirements to propel the field beyond established paradigms.
This Research Topic aims to close the gap between cutting-edge DL developments and real-world MRS processing demands. Key challenges include inefficient multimodal data fusion, limited adaptability of DL models to scarce MRS samples, poor interpretability of black-box models, and computational constraints on large-scale data. To tackle these challenges, this Research Topic seeks to aggregate innovative research that advances DL-enabled methods for MRS image processing. By curating high-quality studies on novel fusion strategies, lightweight architectures, few/zero-shot learning, interpretable models, RS foundation models, and impactful real-world applications, we aim to establish a definitive resource for researchers and practitioners. Ultimately, this Research Topic will foster cross-disciplinary collaboration, accelerate the translation of DL innovations into MRS applications, and shape the future direction of intelligent multimodal remote sensing, contributing to solutions for global challenges such as climate change and sustainable development.
We focus on frontier DL-driven innovations for multimodal remote sensing image processing, with specific themes including (but not limited to):
1. Advanced DL-driven unimodal/multimodal learning and fusion frameworks.
2. Lightweight/edge-friendly neural architectures for large-scale unimodal/multimodal RS data processing.
3. Small-sample, few-shot, and zero-shot learning approaches for scarce labelled RS samples.
4. Interpretability and uncertainty quantification of DL models in RS scenarios.
5. RS foundation models and their fine-tuning/adaptation for unimodal/multimodal tasks.
We welcome the submission of unpublished manuscripts that align with the scope outlined above, including original research papers, comprehensive review articles, and case studies.
Article types and fees
This Research Topic accepts the following article types, unless otherwise specified in the Research Topic description:
Brief Research Report
Data Report
Editorial
FAIR² Data
FAIR² Data Direct Submission
General Commentary
Hypothesis and Theory
Methods
Mini Review
Opinion
Original Research
Perspective
Policy and Practice Reviews
Policy Brief
Review
Systematic Review
Technology and Code
Articles that are accepted for publication by our external editors following rigorous peer review incur a publishing fee charged to authors, institutions, or funders.
Keywords: remote sensing, image processing, deep learning, multimodal learning, multi-source fusion, hyperspectral, data fusion
Important note: All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.