Research Topic

Multi-modal Sensor Fusion

  • Submission closed.

About this Research Topic

For autonomous agents, a clear view and understanding of their surroundings is essential for safe navigation. This view can be generated from many different types of sensors, including vision, lidar, radar, ultrasound, hyperspectral, and infrared sensors.

Each sensor has its advantages and disadvantages for detecting and extracting scene information. Some sensors are passive while others are active, and each approach has its benefits and downsides. To ensure safe operation in all weather and lighting conditions, to extend the field of view, and to detect all types of materials, more than one sensor is often employed.

To exploit the advantages of multiple sensors, the data they deliver must be fused. The fused data should provide higher certainty about the existence of objects and free space, which mobile robots use for obstacle detection and path planning. Sensor fusion is therefore a mandatory and crucial aspect of the perception process in almost all automated, assistive, and autonomous systems.

The fusion can be based on a time series of data from the same sensor type or from several different sensing modalities; the latter is called multi-modal sensor fusion. Multi-modal sensor fusion can capture various complementary aspects of an object or scene with the different modalities, increasing information gain. Due to the uncertain nature of sensor measurements, fusing (time-series) data is non-trivial, and as soon as the sensors are mounted on a moving agent, issues such as synchronization, drift, and blur make the fusion yet more difficult. Fusing sensors with different modalities is more challenging still, because the sensors perceive different aspects of the environment: one sensor might detect an object while another misses it entirely because of the object's material, for instance a visual camera seeing through glass while an ultrasound sensor gets a return. Often the sizes of objects and the distances to them are also reported differently. These aspects make multi-modal sensor fusion non-trivial.
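The role of measurement uncertainty in fusion can be illustrated with the simplest possible case: two independent Gaussian measurements of the same quantity, combined by inverse-variance weighting. This is a minimal sketch, not any specific method from the literature; the sensor names and values below are purely illustrative.

```python
import numpy as np

def fuse(mu_a, var_a, mu_b, var_b):
    """Fuse two independent Gaussian measurements of the same quantity.

    The fused estimate is the inverse-variance weighted mean; its
    variance is smaller than either input variance, which is the
    "information gain" obtained by fusing the sensors.
    """
    w_a = 1.0 / var_a
    w_b = 1.0 / var_b
    var_fused = 1.0 / (w_a + w_b)
    mu_fused = var_fused * (w_a * mu_a + w_b * mu_b)
    return mu_fused, var_fused

# Hypothetical example: a camera and a radar estimate the range (in
# meters) to the same obstacle, with different noise levels.
mu, var = fuse(10.4, 1.0**2, 9.8, 0.5**2)
```

Note how the fused estimate is pulled toward the lower-variance (radar) measurement, and how the fused variance drops below both inputs; real multi-modal fusion has to deliver this behavior while also handling asynchrony, drift, and non-Gaussian noise.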

The goal of this Research Topic is to bring together scientists working on different topics within the multi-modal sensor fusion framework, including but not limited to:
- Occupancy grid based methods
- Kalman/Particle Filter based methods
- Neural Network approaches
- Bayesian Networks approaches
- Time asynchrony between the modalities
- Fusion level and strategy: early/late
- Extrinsic calibration of multi-sensor systems
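As a sketch of the first listed approach, an occupancy grid fuses per-cell occupancy evidence from multiple modalities; under the standard cell-independence assumption, the Bayesian update reduces to adding log-odds. The modalities and probability values below are illustrative, not taken from any particular system.

```python
import numpy as np

def log_odds(p):
    """Convert an occupancy probability to log-odds."""
    return np.log(p / (1.0 - p))

def to_prob(l):
    """Convert log-odds back to an occupancy probability."""
    return 1.0 - 1.0 / (1.0 + np.exp(l))

def update_grid(grid_logodds, meas_prob):
    """Bayesian occupancy update: per-cell log-odds simply add."""
    return grid_logodds + log_odds(meas_prob)

# Two modalities report occupancy probabilities for the same three-cell
# strip of a grid; p = 0.5 (log-odds 0) means "no information".
lidar = np.array([0.9, 0.5, 0.2])
radar = np.array([0.7, 0.5, 0.5])

grid = np.zeros(3)              # prior: unknown everywhere
grid = update_grid(grid, lidar)
grid = update_grid(grid, radar)
prob = to_prob(grid)
```

Where both modalities agree that a cell is occupied, the fused probability exceeds either individual reading; where one modality is silent (p = 0.5), the other's evidence passes through unchanged.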


Keywords: Sensor, fusion, multi-modal, perception, neural network


Important Note: All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.


About Frontiers Research Topics

With their unique mixes of varied contributions from Original Research to Review Articles, Research Topics unify the most influential researchers, the latest key findings and historical advances in a hot research area! Find out more on how to host your own Frontiers Research Topic or contribute to one as an author.
