As robotics technology continues to advance, enabling precise and reliable collaboration between humans and robots has become increasingly important across industrial, healthcare, and service domains. A cornerstone of such collaboration is accurate localization, particularly through multimodal sensor systems that can perceive and track both human and robot positions in real time. Achieving robust and accurate localization in dynamically changing environments remains challenging, however, owing to sensor limitations and environmental uncertainties. By combining data from heterogeneous sources such as LiDAR, UWB, IMU, and vision-based systems, multimodal localization helps overcome these limitations and provides the spatial awareness necessary for intelligent and safe interaction in shared environments.
Intelligent human–robot interaction further leverages this accurate spatial information, allowing robots to proactively adapt to human intentions, respond effectively to dynamic environmental changes, and facilitate seamless and intuitive cooperation. Enhanced localization thus enables robots not only to safely coexist with humans but also to actively engage and assist in complex collaborative tasks, improving efficiency, user experience, and overall system performance.
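To make the scope concrete, the sketch below illustrates one common pattern behind such multimodal localization: a loosely coupled filter that propagates a 2D position and velocity estimate with IMU accelerations and corrects it with intermittent UWB position fixes. It is a minimal illustration only; the constant-velocity model, noise parameters, and sensor rates are assumptions chosen for brevity rather than a prescribed method, and contributions to this Topic may adopt entirely different fusion architectures.

```python
import numpy as np

# Minimal loosely coupled IMU + UWB fusion (illustrative assumptions only).
# State: [x, y, vx, vy]; IMU acceleration drives the prediction step,
# intermittent UWB position fixes drive the correction step.

def predict(x, P, accel, dt, q=0.05):
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], dtype=float)
    B = np.array([[0.5 * dt**2, 0],
                  [0, 0.5 * dt**2],
                  [dt, 0],
                  [0, dt]], dtype=float)
    x = F @ x + B @ accel                 # propagate state with IMU acceleration
    P = F @ P @ F.T + q * np.eye(4)       # grow uncertainty with process noise
    return x, P

def update(x, P, z_uwb, r=0.1):
    H = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0]], dtype=float)   # UWB observes position only
    S = H @ P @ H.T + r * np.eye(2)
    K = P @ H.T @ np.linalg.inv(S)               # Kalman gain
    x = x + K @ (z_uwb - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P

# Toy run: IMU at 100 Hz, a (simulated) UWB fix every 10th step.
x, P = np.zeros(4), np.eye(4)
rng = np.random.default_rng(0)
for k in range(100):
    x, P = predict(x, P, accel=np.array([0.1, 0.0]), dt=0.01)
    if k % 10 == 0:
        noisy_fix = x[:2] + rng.normal(0.0, 0.1, 2)  # stand-in for a real UWB measurement
        x, P = update(x, P, noisy_fix)
print("fused position estimate:", x[:2])
```

Tightly coupled estimators, factor-graph smoothers, and learning-based fusion are equally in scope; the sketch only indicates the level of system integration the Topic addresses.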
This Research Topic aims to explore the latest advancements in multimodal localization techniques and their critical role in enabling intelligent human–robot collaboration. Precise and robust localization underpins robots' ability to safely and effectively cooperate with humans in dynamic and unpredictable environments. We welcome original research that addresses key challenges, such as designing novel multimodal sensor frameworks, developing advanced sensor fusion algorithms, enhancing real-time and robust localization capabilities, and implementing intelligent control strategies informed by multimodal localization data.
Contributions should highlight how accurate multimodal localization can significantly improve robots' awareness of human intentions, facilitate adaptive interaction behaviors, and increase the safety, reliability, and efficiency of collaborative robotic systems. Additionally, we encourage submissions that demonstrate innovative practical applications of multimodal localization in various fields, such as industrial manufacturing, healthcare assistance, and service robotics. Comprehensive studies, experimental evaluations, and real-world implementations that clearly illustrate the impact of multimodal localization on collaborative tasks and system performance are particularly welcome.
We invite submissions from researchers and practitioners specializing in robotics, control systems, sensing technologies, localization algorithms, artificial intelligence, and human–robot interaction. Both theoretical advancements and practical application studies are highly encouraged. Potential topics include, but are not limited to, multimodal sensor fusion algorithms, visual-inertial localization, ultra-wideband (UWB) localization, LiDAR-based spatial perception, robust real-time localization methods, perception-informed motion planning, and intelligent adaptive control architectures guided by localization data.
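As one concrete instance of the listed topics, the sketch below reduces UWB localization to its simplest form: a linearized least-squares multilateration from ranges to anchors at known positions. The anchor layout, noise-free ranges, and helper function name are illustrative assumptions, not a reference implementation.

```python
import numpy as np

def uwb_multilaterate(anchors, ranges):
    """Least-squares tag position from UWB ranges to known anchors (linearized)."""
    a = np.asarray(anchors, dtype=float)   # (N, 2) anchor coordinates
    r = np.asarray(ranges, dtype=float)    # (N,) measured ranges
    # Subtracting the first anchor's range equation removes the quadratic term,
    # leaving a linear system A p = b in the unknown tag position p.
    A = 2.0 * (a[1:] - a[0])
    b = (np.sum(a[1:] ** 2, axis=1) - np.sum(a[0] ** 2)) - (r[1:] ** 2 - r[0] ** 2)
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return p

# Toy example: four anchors at the corners of a 5 m x 5 m room, tag at (2.0, 1.5).
anchors = [(0, 0), (5, 0), (5, 5), (0, 5)]
tag = np.array([2.0, 1.5])
ranges = [np.linalg.norm(tag - np.array(a)) for a in anchors]
print(uwb_multilaterate(anchors, ranges))   # approximately [2.0, 1.5]
```

In realistic deployments, NLOS bias, outlier ranges, and fusion with the other modalities listed above are where the research challenges lie, and submissions addressing them are especially welcome.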
We particularly welcome manuscripts demonstrating successful integration of multimodal localization into real-world collaborative robotic systems, as well as comprehensive performance evaluations and field experiments. Review articles summarizing current advancements, providing benchmark datasets, and outlining future research trends in multimodal localization and intelligent human–robot collaboration are also encouraged. Authors should emphasize practical implications, integration challenges, and innovative solutions that effectively bridge theory and practice.
Article types and fees
This Research Topic accepts the following article types, unless otherwise specified in the Research Topic description:
Brief Research Report
Data Report
Editorial
FAIR² Data
FAIR² DATA Direct Submission
General Commentary
Hypothesis and Theory
Methods
Mini Review
Opinion
Original Research
Perspective
Policy and Practice Reviews
Review
Systematic Review
Technology and Code
Articles that are accepted for publication by our external editors following rigorous peer review incur a publishing fee charged to Authors, institutions, or funders.
Keywords: IMU, vision, multimodal localization algorithms, UWB, sensors, sensor fusion techniques for robust and real-time positioning, human–robot interaction enhanced by accurate spatial tracking, intelligent control strategies driven by localization data, applications of collaborative robotics in industrial or assistive scenarios, experimental systems, field trials, benchmark datasets
Important note: All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.