Research Topic

Robotics Perception in Adversarial Environments

About this Research Topic

Robotics perception research has advanced tremendously in recent years, thanks to the development of affordable, cutting-edge sensor technologies (e.g., LiDAR, sonar) and data-driven techniques. While progress continues, most of these methods are trained, applied, and evaluated on abundant, high-quality data. However, many field or in-the-wild robotics applications suffer substantial performance drops relative to applications in constrained or structured environments, because the visual data common in these scenarios is of low quality, affected by various types of degradation and environmental disturbance (e.g., fog, ash, or inclement weather). Although some of these artifacts can be overcome by sophisticated algorithms and models, their impact becomes more pronounced once the level of degradation or change passes some empirical threshold.

Based on this, and as an extension of the ICRA 2019 workshop on “Underwater Robotics Perception”, the goal of this Research Topic is to review recent progress in robust visual perception technologies and methods for challenging, adversarial environments.

We invite computer vision and robotics experts from various fields to share their experience and perspectives on applications in dynamic environments with unreliable data, e.g., autonomous driving, agricultural robotics, underwater exploration, mining, search-and-rescue robotics, highly agile UAVs, environmental conservation, and many others.

We welcome articles of theoretical or practical significance. Authors may report theoretical innovations or robust perception frameworks that cope with data volatility and degradation, as well as systems papers that describe applications under challenging conditions, offer insight into why a particular approach performs well, and discuss the challenges surmounted. Topics include, but are not limited to:

● Robust recognition from low-quality and/or scarce data in different sensor domains (optical cameras, LiDARs, sonars, multibeam, event cameras, multi- and hyper-spectral sensing, etc.).
● Robust recognition in highly dynamic environments or for robotic systems in long-term deployment.
● Image/video restoration and enhancement from degradations due to low illumination, color distortion, inclement weather, poor visibility, etc.
● Novel sensor developments, or sensor fusion and calibration techniques, for robust visual perception.
● Simulated environments and continuous system integration, e.g., synthetic data generation, simulation-to-real-world transfer, hardware-in-the-loop testing.
● Mining, augmentation, and processing methods for low-quality and scarce data in visual systems.
● Deep learning practices and machine learning pipelines in any of the mentioned topics.
● Systems extensively tested in field trials, and best practices for deployment and data management.
● Surveys of computer vision algorithms and applications under adversarial and challenging environments.
● Applications of any of the above to vision-based localization, registration, mapping, modeling, pose estimation, and other areas.


Keywords: Perception, Field Robotics, Adversarial Environments, Sensor Fusion, Machine Learning


Important Note: All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.

Submission Deadlines

31 January 2020: Manuscript submission deadline

Participating Journals

Manuscripts can be submitted to this Research Topic via the participating Frontiers journals.

