About this Research Topic
A key component of long-term autonomy is the ability to localize robustly across multiple mapping sessions. To this end, place recognition algorithms exploit the similarity between images or point clouds to identify previously visited locations in a way that remains repeatable and accurate under viewpoint and environmental changes.
Many solutions to the place recognition problem exist in the literature, specifically targeting urban or man-made environments, most often in the autonomous driving scenario, where the environment is feature-rich and viewpoints are usually aligned with the driving direction. Furthermore, approaches based on point clouds, e.g. those exploiting LiDAR data, benefit from the possibility of repeatably extracting, through segmentation, unique shapes such as buildings, parked cars, or road signs. The wide availability of autonomous driving datasets also enables robust segmentation through deep learning, thanks to the object categories shared across the relevant environments.
In contrast, place recognition in unstructured outdoor environments is a challenging task for autonomous mobile robots. Repetitive structures and features, e.g. rocks or trees, often cause perceptual aliasing or ambiguity. Furthermore, variable and harsh lighting conditions cause the appearance of the environment to change significantly over time. Finally, the motion of mobile robots during autonomous exploration of open unstructured terrain is usually aimed at avoiding obstacles or maximizing the coverage of the explored area. Thus, the viewpoints of actuated camera mounts (e.g. pan-tilt units) tend to cover areas in close proximity to the robot, and revisited places are observed from different viewpoints.
In this setting, visual place recognition algorithms are challenged by the variability of viewpoints and environment appearance. Algorithms relying on point clouds are instead challenged by the ambiguity between traversable ground and obstacles, the latter of which provide important information for place recognition as unique landmarks. In the context of exploring unknown environments, few assumptions can be made about the types of visual features or 3D structures that will be observed and that might aid their description and recall from a database.
These issues call for new methods to tackle the place recognition problem in unstructured outdoor environments. This Research Topic aims to gather ideas, ranging from multi-modal perception to deep learning strategies, that address this issue in a robust and general manner, possibly leveraging active or attention-based perception strategies.
Research papers are welcome that address, but are not limited to, the following topics:
• Visual place recognition in outdoor environments
• LiDAR-based place recognition in outdoor environments
• Multi-modal or multi-sensor perception strategies
• Topological or mapless outdoor SLAM
• Learning of robust visual or structural feature descriptors
• Active loop closure techniques for autonomous mobile robots
• Place recognition based on visual saliency techniques
• Distributed place recognition in heterogeneous robotics teams
Keywords: Place Recognition, Multi-modal perception, Active SLAM, Long-term localization, Collaborative mapping
Important Note: All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.