About this Research Topic
Building autonomous systems with advanced perceptual capabilities is one of the fundamental challenges in robotics. Indeed, perceptual intelligence is a key factor in enabling robots to perform real-world tasks such as object manipulation, navigation, long-term planning, and human-robot interaction. In recent decades, advances in computer vision, deep learning, and robot perception have paved the way for novel semantic approaches to Simultaneous Localisation And Mapping (SLAM) that go beyond plain geometric representations of rigid scenes. Although geometric SLAM can now be regarded as a mature field, challenges remain that hinder the deployment of a new generation of robots with human-like perceptual intelligence for map estimation, localization, and navigation. In this context, topics such as enhancing the accuracy and robustness of current approaches, developing novel scene representations and high-level features, encoding semantic content, targeting a holistic understanding of the environment, and modeling scene dynamics, appearance changes, and non-rigid objects, among others, could greatly expand the capabilities of current robotic platforms and hence their potential applications.
During the last few years, SLAM has made its way into real-world applications, ranging from autonomous driving to small-scale household robots and tiny drones. However, to widen the range of scenarios in which robots can operate autonomously, technologies for localization and mapping require more compact, informative, and high-level scene representations, as well as novel intelligent algorithms to interact more effectively with both humans and objects in the scene. This Research Topic aims to highlight the challenges of these trending topics, offering different perspectives on the concept of “Spatial Perception”. We aim to provide a platform for presenting original research contributions and results, for introducing open-source datasets and software for semantic localization, and for presenting high-impact applications of “Spatial Perception” and related topics. We believe that such a timely collection of quality research and associated resources will set an important milestone for future work in the field.
The topics of interest for this Research Topic include, but are not limited to:
• 3D scene understanding
• Vision, lidar, and radar perception
• Multisensor approaches to spatial perception
• Odometry, SLAM and their applications
• Dynamic SLAM
• Localization, mapping, and dynamic object tracking
• Efficient SLAM
• Lifelong mapping/SLAM
• Scene representation
• Place recognition
• Feature representation, indexing, storage, and analysis
• Feature learning, extraction, and matching
• Semantic segmentation
• Deep learning for semantic scene representation
• Efficient deep architectures for robot localization and mapping
• Graph-based optimization
• Benchmark datasets for lifelong SLAM, semantic segmentation, ...
• Open-source systems and implementations
• Field reports
Dr. Reza Sabzevari is a researcher at Robert Bosch GmbH. The research he conducts at Bosch Research is relevant to this Research Topic. All other Topic Editors declare no competing interests with regard to the theme of this collection.
Keywords: Semantic Scene Representation, Lifelong SLAM, Spatial Perception, Spatial AI, SLAM in Dynamic Environments
Important Note: All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.