ORIGINAL RESEARCH article
Front. Robot. AI
Sec. Robot Vision and Artificial Perception
This article is part of the Research Topic: Computational Multimodal Sensing and Perception for Robotic Systems
Targetless LiDAR–Camera Extrinsic Calibration via Semantic Distribution Alignment
Provisionally accepted
1 Hefei Institutes of Physical Science, Chinese Academy of Sciences (CAS), Hefei, China
2 University of Science and Technology of China, Hefei, China
LiDAR–camera fusion systems are widely used in robotic localization and perception, where accurate extrinsic calibration is crucial for multi-sensor fusion in field and autonomous robotics. During long-term operation, however, the extrinsic parameters can drift because of vibration and other disturbances, degrading localization accuracy and perception stability. Traditional target-based calibration requires dedicated boards and controlled environments, making on-site recalibration inconvenient, while targetless approaches often suffer from highly non-convex objectives and limited robustness in challenging outdoor scenes. To address these issues, we propose a targetless LiDAR–camera extrinsic calibration method that casts calibration as minimizing a semantic distribution consistency risk on SE(3). We align semantic probability distributions from the two sensing modalities in the image domain and freeze the pixel sampling measure at an anchor pose, so that pixel weighting no longer depends on the current extrinsic estimate and the objective landscape remains stable during optimization. On top of this anchor-fixed measure, we introduce a direction-aware weighting strategy that emphasizes pixels sensitive to yaw perturbations, improving the conditioning of rotation estimation. We further use a globally balanced Jensen–Shannon divergence to mitigate semantic class imbalance and enhance robustness. Experiments on the KITTI Odometry dataset show that the proposed method reliably converges from substantial initial perturbations and yields stable extrinsic estimates, indicating its promise for maintaining long-term LiDAR–camera calibration in real-world robotic systems.
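To make the described objective concrete, the following is a minimal sketch of a risk of the kind outlined above; the notation (\(\Omega\), \(w\), \(\pi\), \(T_0\)) is ours for illustration, and the paper's exact formulation may differ:

\[
\hat{T} \;=\; \arg\min_{T \in SE(3)} \; \sum_{p \in \Omega(T_0)} w(p)\, \mathrm{JS}_{\pi}\!\big( P_{\mathrm{cam}}(\cdot \mid p) \,\big\|\, P_{\mathrm{lidar}}(\cdot \mid p;\, T) \big),
\]

where \(\Omega(T_0)\) is the pixel set induced by the anchor pose \(T_0\) and held fixed during optimization (the anchor-fixed sampling measure), \(P_{\mathrm{cam}}(\cdot \mid p)\) and \(P_{\mathrm{lidar}}(\cdot \mid p;\, T)\) are per-pixel semantic class distributions from the image and from LiDAR points projected under the current extrinsic estimate \(T\), \(w(p)\) is a direction-aware weight emphasizing yaw-sensitive pixels, and \(\mathrm{JS}_{\pi}\) denotes a Jensen–Shannon divergence with global class-balancing weights \(\pi\). Because \(\Omega\) and \(w\) depend only on \(T_0\), the weighting does not shift with \(T\), which is the mechanism by which the objective landscape stays stable during optimization.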
Keywords: Directional Observability Weighting, Jensen–Shannon divergence, Robotic perception, Semantic distribution alignment, Targetless LiDAR–camera calibration
Received: 04 Dec 2025; Accepted: 05 Feb 2026.
Copyright: © 2026 Chen and Sun. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
* Correspondence: Bingyu Sun
Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.
