Research on the targetless automatic calibration method for mining LiDAR and camera
-
Abstract: Autonomous driving of mining vehicles relies on accurate environmental perception, and combining LiDAR with cameras provides richer and more accurate perception information. Effective fusion of the two sensors requires extrinsic calibration. At present, most intrinsically safe vehicle-mounted LiDARs used in mines are 16-line units, which produce relatively sparse point clouds. To address this problem, a targetless automatic calibration method for mining LiDAR and camera was proposed. Multi-frame point cloud fusion was used to obtain fused-frame point clouds, increasing point density and enriching the point cloud information. Vehicles and traffic signs in the scene were then extracted as effective targets by panoptic segmentation, and coarse calibration was completed by establishing 2D-3D correspondences between the centroids of these targets. In the fine-calibration stage, the effective-target point clouds were projected, using the coarse extrinsic parameters, onto segmentation masks processed by an inverse distance transform, and an objective function measuring the panoptic-information matching degree of the effective targets was constructed; the optimal extrinsic parameters were obtained by maximizing this objective with a particle swarm optimization algorithm. The effectiveness of the method was verified by quantitative, qualitative, and ablation experiments. ① In the quantitative experiments, the translation error was 0.055 m and the rotation error was 0.394°; compared with a semantic-segmentation-based method, the translation error was reduced by 43.88% and the rotation error by 48.63%. ② The qualitative results showed that the projections in the garage and mining-area scenes agreed closely with those produced by the ground-truth extrinsic parameters, demonstrating the stability of the method. ③ The ablation experiments indicated that multi-frame point cloud fusion and the weight coefficients in the objective function significantly improve calibration accuracy: with fused-frame point clouds instead of single-frame point clouds as input, the translation error was reduced by 50.89% and the rotation error by 53.76%; with the weight coefficients included, the translation error was reduced by 36.05% and the rotation error by 37.87%.
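The multi-frame fusion step can be illustrated with a minimal Python sketch. It assumes an approximate pose is available for every sparse scan (for example from a LiDAR odometry front end); the pose source, the number of fused frames, and the helper name fuse_frames are illustrative assumptions rather than details stated in the abstract.

import numpy as np

def fuse_frames(clouds, poses, ref_idx=0):
    """Accumulate several sparse scans into the coordinate frame of scan ref_idx.

    clouds : list of (N_i, 3) arrays, one per LiDAR frame
    poses  : list of (4, 4) homogeneous LiDAR poses in a common world frame (assumed given)
    """
    T_ref_inv = np.linalg.inv(poses[ref_idx])
    fused = []
    for pts, T in zip(clouds, poses):
        # lidar_i -> world, then world -> lidar_ref
        T_rel = T_ref_inv @ T
        homog = np.hstack([pts, np.ones((pts.shape[0], 1))])
        fused.append((homog @ T_rel.T)[:, :3])
    return np.vstack(fused)

# Toy usage with synthetic data: 5 sparse frames of 1000 points each
rng = np.random.default_rng(0)
clouds = [rng.normal(size=(1000, 3)) for _ in range(5)]
poses = [np.eye(4) for _ in range(5)]
dense = fuse_frames(clouds, poses)   # (5000, 3) fused-frame point cloud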
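Coarse calibration builds 2D-3D correspondences between the centroids of the panoptic masks in the image and the centroids of the segmented targets in the fused point cloud. A minimal sketch follows; it assumes the centroids are already matched one to one and uses OpenCV's solvePnP as a stand-in solver, which the abstract does not specify.

import numpy as np
import cv2

def coarse_calibrate(centroids_3d, centroids_2d, K):
    """Estimate initial LiDAR-to-camera extrinsics from matched target centroids.

    centroids_3d : (N, 3) target centroids in the fused LiDAR frame
    centroids_2d : (N, 2) centroids of the matching panoptic masks in the image
    K            : (3, 3) camera intrinsic matrix
    """
    ok, rvec, tvec = cv2.solvePnP(
        centroids_3d.astype(np.float64),
        centroids_2d.astype(np.float64),
        K, None, flags=cv2.SOLVEPNP_EPNP)
    if not ok:
        raise RuntimeError("PnP failed; need >= 4 non-degenerate correspondences")
    R, _ = cv2.Rodrigues(rvec)
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, tvec.ravel()
    return T   # 4x4 coarse extrinsic matrix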
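The fine-calibration objective and the particle swarm search can be sketched as follows. The score construction (an exponential decay of the distance transform outside each mask), the per-class weights, and all function names are assumptions; the abstract only states that target points projected with candidate extrinsics are scored against inverse-distance-transformed panoptic masks and that the objective is maximized with a particle swarm algorithm.

import numpy as np
import cv2
from scipy.spatial.transform import Rotation

def idt_score_map(mask, alpha=0.1):
    """Turn a binary class mask into a smooth score map: 1 inside the mask,
    decaying with pixel distance outside it (assumed construction)."""
    dist = cv2.distanceTransform((1 - mask).astype(np.uint8), cv2.DIST_L2, 3)
    return np.exp(-alpha * dist)

def objective(extrinsic, targets, score_maps, weights, K):
    """Weighted sum of score-map values at the projected effective-target points.

    targets    : dict class -> (N, 3) target points in the LiDAR frame
    score_maps : dict class -> (H, W) inverse-distance score maps
    weights    : dict class -> scalar weight coefficient (assumed per-class weighting)
    """
    total = 0.0
    for cls, pts in targets.items():
        cam = extrinsic[:3, :3] @ pts.T + extrinsic[:3, 3:4]   # LiDAR -> camera
        in_front = cam[2] > 0.1
        uv = K @ cam[:, in_front]
        uv = (uv[:2] / uv[2]).T.astype(int)
        H, W = score_maps[cls].shape
        valid = (uv[:, 0] >= 0) & (uv[:, 0] < W) & (uv[:, 1] >= 0) & (uv[:, 1] < H)
        uv = uv[valid]
        total += weights[cls] * score_maps[cls][uv[:, 1], uv[:, 0]].sum()
    return total

def pso_refine(T0, targets, score_maps, weights, K,
               n_particles=30, iters=50, trans_range=0.3, rot_range_deg=3.0):
    """Toy PSO over a 6-DoF perturbation [dx, dy, dz, droll, dpitch, dyaw] of the coarse T0."""
    rng = np.random.default_rng(0)
    lim = np.array([trans_range] * 3 + [np.deg2rad(rot_range_deg)] * 3)
    x = rng.uniform(-lim, lim, size=(n_particles, 6))
    v = np.zeros_like(x)
    def apply(d):
        T = np.eye(4)
        T[:3, :3] = Rotation.from_euler("xyz", d[3:]).as_matrix() @ T0[:3, :3]
        T[:3, 3] = T0[:3, 3] + d[:3]
        return T
    scores = np.array([objective(apply(p), targets, score_maps, weights, K) for p in x])
    pbest, pbest_s = x.copy(), scores.copy()
    gbest = x[scores.argmax()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, 1))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = np.clip(x + v, -lim, lim)
        scores = np.array([objective(apply(p), targets, score_maps, weights, K) for p in x])
        better = scores > pbest_s
        pbest[better], pbest_s[better] = x[better], scores[better]
        gbest = pbest[pbest_s.argmax()].copy()
    return apply(gbest)   # refined 4x4 extrinsic matrix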
-
Table 1. Comparison of coarse and fine calibration results
Method  $\Delta t$/m  $\Delta X$/m  $\Delta Y$/m  $\Delta Z$/m  $\Delta \theta$/(°)  $\Delta R$/(°)  $\Delta H$/(°)  $\Delta A$/(°)
Coarse calibration  0.195  0.096  0.124  0.114  0.991  0.929  0.778  0.991
Fine calibration  0.055  0.031  0.028  0.036  0.394  0.257  0.212  0.205
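The aggregate errors in Table 1 appear to be Euclidean norms of the per-axis components; for the fine calibration, $\sqrt{0.031^2+0.028^2+0.036^2}\approx 0.055$ m and $\sqrt{0.257^2+0.212^2+0.205^2}\approx 0.391°$, consistent with the tabulated 0.055 m and 0.394° up to rounding. This reading is an inference from the tabulated values rather than a definition given here.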
Table 2. Overall results of the perturbation experiments on calibration parameters
Parameter  Mean  Standard deviation  Maximum  Minimum
$\Delta X$/m  0.0223  0.0062  0.0339  0.0097
$\Delta Y$/m  0.0238  0.0094  0.0409  0.0073
$\Delta Z$/m  0.0210  0.0062  0.0326  0.0099
$\Delta R$/(°)  0.1689  0.0477  0.2490  0.0691
$\Delta H$/(°)  0.1434  0.0372  0.2132  0.0578
$\Delta A$/(°)  0.1371  0.0389  0.2035  0.0512
Table 3. Comparison of correction results across different methods
Method  $\Delta t$/m  $\Delta X$/m  $\Delta Y$/m  $\Delta Z$/m  $\Delta \theta$/(°)  $\Delta R$/(°)  $\Delta H$/(°)  $\Delta A$/(°)
Method of Ref. [17]  0.098  0.059  0.064  0.047  0.767  0.445  0.497  0.374
Fine calibration  0.055  0.031  0.028  0.036  0.394  0.257  0.212  0.205
Table 4. Ablation study on different point cloud inputs
Experimental setting  Translation error/m  Rotation error/(°)
Single-frame point cloud  0.112  0.852
Fused-frame point cloud  0.055  0.394
Table 5. Ablation study of weight coefficients
Experimental setting  Translation error/m  Rotation error/(°)
Without weight coefficients  0.086  0.634
With weight coefficients  0.055  0.394
-
[1] WANG Guofa. New technological progress of coal mine intelligence and its problems[J]. Coal Science and Technology, 2022, 50(1): 1-27. DOI: 10.3969/j.issn.0253-2336.2022.1.mtkxjs202201001.
[2] CHEN Xiaojing. Current status and development trend of intelligent technology of underground coal mine transportation system[J]. Journal of Mine Automation, 2022, 48(6): 6-14, 35.
[3] SONG Qinzhong, HU Hualiang. Obstacle avoidance method for underground unmanned trackless rubber-tyred vehicle based on CNN algorithm[J]. Metal Mine, 2023(10): 168-174.
[4] ZHANG Hongwei, GAO Yanan, WANG Yu, et al. Study on route optimization strategy of unmanned truck in mining area under fuel constraint condition[J]. Metal Mine, 2024(8): 140-145.
[5] HU Qingsong, MENG Chunlei, LI Shiyin, et al. Research status and prospects of perception technology for unmanned mining vehicle driving environment[J]. Journal of Mine Automation, 2023, 49(6): 128-140.
[6] ZHANG Qilong, PLESS R. Extrinsic calibration of a camera and laser range finder (improves camera calibration)[C]. IEEE/RSJ International Conference on Intelligent Robots and Systems, Sendai, 2004. DOI: 10.1109/IROS.2004.1389752.
[7] ZHOU Lipu, DENG Zhidong. Extrinsic calibration of a camera and a lidar based on decoupling the rotation from the translation[C]. IEEE Intelligent Vehicles Symposium, Madrid, 2012: 642-648.
[8] WANG Weimin, SAKURADA K, KAWAGUCHI N. Reflectance intensity assisted automatic and accurate extrinsic calibration of 3D LiDAR and panoramic camera using a printed chessboard[J]. Remote Sensing, 2017, 9(8). DOI: 10.3390/rs9080851.
[9] SIM S, SOCK J, KWAK K. Indirect correspondence-based robust extrinsic calibration of LiDAR and camera[J]. Sensors, 2016, 16(6). DOI: 10.3390/s16060933.
[10] LIAO Qinghai, CHEN Zhenyong, LIU Yang, et al. Extrinsic calibration of lidar and camera with polygon[C]. IEEE International Conference on Robotics and Biomimetics, Kuala Lumpur, 2018. DOI: 10.1109/ROBIO.2018.8665256.
[11] XU Xiaobin, CAO Chenfei, ZHANG Lei, et al. Planar array lidar and camera calibration method based on tetrahedral features[J]. Acta Photonica Sinica, 2024, 53(7): 176-190.
[12] XIE Jingting, LIN Xiaohu, WANG Fuhong, et al. Extrinsic calibration method for LiDAR and camera with joint point-line-plane constraints[J]. Geomatics and Information Science of Wuhan University, 2021, 46(12): 1916-1923.
[13] PANDEY G, MCBRIDE J R, SAVARESE S, et al. Automatic extrinsic calibration of vision and lidar by maximizing mutual information[J]. Journal of Field Robotics, 2015, 32(5): 696-722. DOI: 10.1002/rob.21542.
[14] ZHAO Yipu, WANG Yuanfang, TSAI Y. 2D-image to 3D-range registration in urban environments via scene categorization and combination of similarity measurements[C]. IEEE International Conference on Robotics and Automation, Stockholm, 2016: 1866-1872.
[15] JIANG Peng, OSTEEN P, SARIPALLI S. SemCal: semantic LiDAR-camera calibration using neural mutual information estimator[C]. IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems, Karlsruhe, 2021. DOI: 10.48550/arXiv.2109.10270.
[16] MA Tao, LIU Zhizheng, YAN Guohang, et al. CRLF: automatic calibration and refinement based on line feature for LiDAR and camera in road scenes[EB/OL]. (2021-03-08)[2024-06-22]. https://arxiv.org/abs/2103.04558v1.
[17] ZHU Yufeng, LI Chenghui, ZHANG Yubo. Online camera-LiDAR calibration with sensor semantic information[C]. IEEE International Conference on Robotics and Automation, Paris, 2020. DOI: 10.1109/ICRA40945.2020.9196627.
[18] ISHIKAWA R, OISHI T, IKEUCHI K. LiDAR and camera calibration using motions estimated by sensor fusion odometry[C]. IEEE/RSJ International Conference on Intelligent Robots and Systems, Madrid, 2018: 7342-7349.
[19] WANG Li, XIAO Zhipeng, ZHAO Dawei, et al. Automatic extrinsic calibration of monocular camera and LIDAR in natural scenes[C]. IEEE International Conference on Information and Automation, Wuyishan, 2018: 997-1002.
[20] SCHNEIDER N, PIEWAK F, STILLER C, et al. RegNet: multimodal sensor registration using deep neural networks[C]. IEEE Intelligent Vehicles Symposium, Los Angeles, 2017: 1803-1810.
[21] IYER G, RAM R K, MURTHY J K, et al. CalibNet: geometrically supervised extrinsic calibration using 3D spatial transformer networks[C]. IEEE/RSJ International Conference on Intelligent Robots and Systems, Madrid, 2018: 1110-1117.
[22] LYU Xudong, WANG Shuo, YE Dong. CFNet: lidar-camera registration using calibration flow network[J]. Sensors, 2021. DOI: 10.48550/arXiv.2104.11907.
[23] WANG Weimin, NOBUHARA S, NAKAMURA R, et al. SOIC: semantic online initialization and calibration for LiDAR and camera[EB/OL]. (2020-03-09)[2024-06-22]. https://arxiv.org/abs/2003.04260v1.
[24] JAIN J, LI Jiachen, CHIU M, et al. OneFormer: one transformer to rule universal image segmentation[C]. IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, 2023: 2989-2998.
[25] XIAO Zeqi, ZHANG Wenwei, WANG Tai, et al. Position-guided point cloud panoptic segmentation transformer[EB/OL]. (2023-03-23)[2024-06-22]. https://arxiv.org/abs/2303.13509v1.