矿用激光雷达与相机的无目标自动标定方法研究

杨佳佳 张传伟 周李兵 秦沛霖 赵瑞祺

引用本文: 杨佳佳,张传伟,周李兵,等. 矿用激光雷达与相机的无目标自动标定方法研究[J]. 工矿自动化,2024,50(10):53-61,89. doi: 10.13272/j.issn.1671-251x.2024070056
Citation: YANG Jiajia, ZHANG Chuanwei, ZHOU Libing, et al. Research on the targetless automatic calibration method for mining LiDAR and camera[J]. Journal of Mine Automation, 2024, 50(10): 53-61, 89. doi: 10.13272/j.issn.1671-251x.2024070056


doi: 10.13272/j.issn.1671-251x.2024070056
Funding: Shaanxi Province Innovative Talent Promotion Plan - Science and Technology Innovation Team (2021TD-27).
    About the author:

    YANG Jiajia (born 2000), male, from Xianyang, Shaanxi, is a master's degree candidate; his research interests include environment perception for unmanned vehicles and coal mine intellectualization. E-mail: 22205224117@stu.xust.edu.cn

    Corresponding author:

    ZHANG Chuanwei (born 1974), male, from Huainan, Anhui, is a professor and holds a Ph.D.; his research interests include intelligent control of electromechanical systems and intelligent mining vehicles. E-mail: 2099409913@qq.com

  • CLC number: TD67

Research on the targetless automatic calibration method for mining LiDAR and camera

  • Abstract: Unmanned driving of mining vehicles relies on accurate environment perception, and combining LiDAR with a camera provides richer and more accurate environmental information. To ensure effective fusion of the two sensors, their extrinsic parameters must be calibrated. At present, mine intrinsically safe vehicle-mounted LiDARs are mostly 16-beam devices, which produce relatively sparse point clouds. To address this problem, a targetless automatic calibration method for mining LiDAR and camera was proposed. Multi-frame point cloud fusion was used to obtain fused-frame point clouds, increasing point cloud density and enriching point cloud information. Panoptic segmentation was used to extract vehicles and traffic signs in the scene as valid targets, and coarse calibration was completed by establishing 2D-3D correspondences between the centroids of the valid targets. In the fine calibration stage, the valid-target point clouds were projected, using the coarse extrinsic parameters, onto segmentation masks processed by an inverse distance transform; an objective function measuring the panoptic-information matching degree of the valid targets was constructed, and the optimal extrinsic parameters were obtained by maximizing this objective function with a particle swarm optimization algorithm. The effectiveness of the method was verified by quantitative, qualitative and ablation experiments. ① In the quantitative experiments, the translation error was 0.055 m and the rotation error was 0.394°; compared with a method based on semantic segmentation, the translation error was reduced by 43.88% and the rotation error by 48.63%. ② The qualitative results showed that the projection results in garage and mining-area scenes agreed closely with those obtained from the ground-truth extrinsic parameters, demonstrating the stability of the method. ③ The ablation experiments showed that multi-frame point cloud fusion and the weight coefficients of the objective function significantly improved calibration accuracy: compared with single-frame point clouds, using fused-frame point clouds as input reduced the translation error by 50.89% and the rotation error by 53.76%; after introducing the weight coefficients, the translation error was reduced by 36.05% and the rotation error by 37.87%.
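
    To make the fine-calibration step above more concrete, the sketch below shows one way it could be realized: valid-target LiDAR points are projected through a candidate extrinsic parameter set onto an inverse-distance-transformed segmentation mask, the mask values at the projected pixels are summed as a matching score, and a particle swarm searches a small neighbourhood around the coarse estimate for the score-maximizing extrinsics. This is a minimal sketch under simplifying assumptions, not the paper's implementation: the pinhole projection, the exponential form of the inverse distance transform, and all names (inverse_distance_transform, matching_score, pso_refine, gamma, span) are illustrative, and the per-target weight coefficients examined in the ablation study are omitted for brevity.

    ```python
    import numpy as np
    from scipy.ndimage import distance_transform_edt
    from scipy.spatial.transform import Rotation


    def inverse_distance_transform(mask, gamma=0.9):
        """Turn a binary target mask (H x W) into a score map that is 1 on the
        mask and decays exponentially with pixel distance away from it."""
        dist_to_mask = distance_transform_edt(mask == 0)
        return gamma ** dist_to_mask


    def matching_score(params, target_points, idt_map, K):
        """Sum the score-map values at the image projections of the target points.
        params = (tx, ty, tz, roll, pitch, yaw), LiDAR-to-camera, angles in degrees."""
        t, angles = np.asarray(params[:3]), params[3:]
        R = Rotation.from_euler("xyz", angles, degrees=True).as_matrix()
        pts_cam = target_points @ R.T + t          # LiDAR frame -> camera frame
        pts_cam = pts_cam[pts_cam[:, 2] > 0.1]     # keep points in front of the camera
        uvw = pts_cam @ K.T                        # pinhole projection
        uv = (uvw[:, :2] / uvw[:, 2:3]).astype(int)
        h, w = idt_map.shape
        ok = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
        uv = uv[ok]
        return float(idt_map[uv[:, 1], uv[:, 0]].sum())


    def pso_refine(score_fn, coarse, span, n_particles=40, iters=100,
                   inertia=0.7, c1=1.5, c2=1.5, seed=0):
        """Tiny particle swarm maximizing score_fn in a +/- span box around the
        coarse extrinsic estimate."""
        rng = np.random.default_rng(seed)
        coarse = np.asarray(coarse, dtype=float)
        dim = len(coarse)
        x = coarse + rng.uniform(-span, span, size=(n_particles, dim))
        v = np.zeros_like(x)
        pbest = x.copy()
        pbest_val = np.array([score_fn(p) for p in x])
        gbest = pbest[pbest_val.argmax()].copy()
        for _ in range(iters):
            r1, r2 = rng.random((2, n_particles, dim))
            v = inertia * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
            x = x + v
            vals = np.array([score_fn(p) for p in x])
            better = vals > pbest_val
            pbest[better], pbest_val[better] = x[better], vals[better]
            gbest = pbest[pbest_val.argmax()].copy()
        return gbest


    # Illustrative usage: vehicle_sign_mask, target_pts, K and coarse_params are
    # placeholders for the panoptic mask, fused valid-target points, camera
    # intrinsics and the coarse calibration result.
    # idt = inverse_distance_transform(vehicle_sign_mask)
    # fine_params = pso_refine(lambda p: matching_score(p, target_pts, idt, K),
    #                          coarse_params,
    #                          span=np.array([0.5, 0.5, 0.5, 5.0, 5.0, 5.0]))
    ```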

     

  • 图  1  激光雷达和相机标定原理

    Figure  1.  LiDAR and camera calibration principles

    图  2  矿用激光雷达与相机的无目标自动标定方法框架

    Figure  2.  Framework of targetless automatic calibration method for mining LiDAR and camera

    图  3  多帧点云融合算法流程

    Figure  3.  Multi-frame point cloud fusion algorithm flow

    图  4  点云融合前后对比

    Figure  4.  Comparison before and after point cloud fusion

    图  5  2D−3D目标质心匹配对

    Figure  5.  Matching pairs of 2D-3D target centroids

    图  6  粒子群算法流程

    Figure  6.  Particle swarm algorithm flow

    图  7  实验采集仪器

    Figure  7.  Experimental data acquisition device

    图  8  粗校准结果

    Figure  8.  Results of coarse calibration

    图  9  增加扰动的精校准的MAE

    Figure  9.  Mean absolute error(MAE) of fine calibration with added perturbation

    图  10  不同方法在数据集上的平移和旋转误差分布

    Figure  10.  Distribution of translation and rotation errors of different methods on dataset

    图  11  本文方法的投影结果与外参真值的投影结果

    Figure  11.  Comparison of projection results between the proposed method and the true values of the external parameters

    表  1  粗校准和精校准的结果对比

    Table  1.   Comparison of coarse and fine calibration results

    Method  $ \Delta t $/m  $ \Delta X $/m  $ \Delta Y $/m  $ \Delta Z $/m  $ \Delta \theta $/(°)  $ \Delta R $/(°)  $ \Delta H $/(°)  $ \Delta A $/(°)
    Coarse calibration  0.195  0.096  0.124  0.114  0.991  0.929  0.778  0.991
    Fine calibration  0.055  0.031  0.028  0.036  0.394  0.257  0.212  0.205

    表  2  标定参数扰动实验整体结果

    Table  2.   Overall results of the perturbation experiments on calibration parameters

    Parameter  Mean  Standard deviation  Maximum  Minimum
    $ \Delta X $/m  0.0223  0.0062  0.0339  0.0097
    $ \Delta Y $/m  0.0238  0.0094  0.0409  0.0073
    $ \Delta Z $/m  0.0210  0.0062  0.0326  0.0099
    $ \Delta R $/(°)  0.1689  0.0477  0.2490  0.0691
    $ \Delta H $/(°)  0.1434  0.0372  0.2132  0.0578
    $ \Delta A $/(°)  0.1371  0.0389  0.2035  0.0512

    表  3  不同方法的校正结果对比

    Table  3.   Comparison of correction results across different methods

    Method  $ \Delta t $/m  $ \Delta X $/m  $ \Delta Y $/m  $ \Delta Z $/m  $ \Delta \theta $/(°)  $ \Delta R $/(°)  $ \Delta H $/(°)  $ \Delta A $/(°)
    Reference [17]  0.098  0.059  0.064  0.047  0.767  0.445  0.497  0.374
    Fine calibration  0.055  0.031  0.028  0.036  0.394  0.257  0.212  0.205

    表  4  不同点云输入的消融研究

    Table  4.   Ablation study on different point cloud inputs

    Experimental setting  Translation error/m  Rotation error/(°)
    Single-frame point cloud  0.112  0.852
    Fused-frame point cloud  0.055  0.394

    表  5  权重系数的消融研究

    Table  5.   Ablation study of weight coefficients

    Experimental setting  Translation error/m  Rotation error/(°)
    Without weight coefficients  0.086  0.634
    With weight coefficients  0.055  0.394
  • [1] 王国法. 煤矿智能化最新技术进展与问题探讨[J]. 煤炭科学技术,2022,50(1):1-27. doi: 10.3969/j.issn.0253-2336.2022.1.mtkxjs202201001

    WANG Guofa. New technological progress of coal mine intelligence and its problems[J]. Coal Science and Technology,2022,50(1):1-27. doi: 10.3969/j.issn.0253-2336.2022.1.mtkxjs202201001
    [2] 陈晓晶. 井工煤矿运输系统智能化技术现状及发展趋势[J]. 工矿自动化,2022,48(6):6-14,35.

    CHEN Xiaojing. Current status and development trend of intelligent technology of underground coal mine transportation system[J]. Journal of Mine Automation,2022,48(6):6-14,35.
    [3] 宋秦中,胡华亮. 基于CNN算法的井下无人驾驶无轨胶轮车避障方法[J]. 金属矿山,2023(10):168-174.

    SONG Qinzhong,HU Hualiang. Obstacle avoidance method for underground unmanned trackless rubber-tyred vehicle based on CNN algorithm[J]. Metal Mine,2023(10):168-174.
    [4] 张宏伟,高亚男,王宇,等. 燃料受限条件下矿区无人驾驶卡车路径最优化策略研究[J]. 金属矿山,2024(8):140-145.

    ZHANG Hongwei,GAO Yanan,WANG Yu,et al. Study on route optimization strategy of unmanned truck in mining area under fuel constraint condition[J]. Metal Mine,2024(8):140-145.
    [5] 胡青松,孟春蕾,李世银,等. 矿井无人驾驶环境感知技术研究现状及展望[J]. 工矿自动化,2023,49(6):128-140.

    HU Qingsong,MENG Chunlei,LI Shiyin,et al. Research status and prospects of perception technology for unmanned mining vehicle driving environment[J]. Journal of Mine Automation,2023,49(6):128-140.
    [6] ZHANG Qilong,PLESS R. Extrinsic calibration of a camera and laser range finder (improves camera calibration)[C]. IEEE/RSJ International Conference on Intelligent Robots and Systems,Sendai,2004. DOI: 10.1109/IROS.2004.1389752.
    [7] ZHOU Lipu,DENG Zhidong. Extrinsic calibration of a camera and a lidar based on decoupling the rotation from the translation[C]. IEEE Intelligent Vehicles Symposium,Madrid,2012:642-648.
    [8] WANG Weimin,SAKURADA K,KAWAGUCHI N. Reflectance intensity assisted automatic and accurate extrinsic calibration of 3D LiDAR and panoramic camera using a printed chessboard[J]. Remote Sensing,2017,9(8). DOI: 10.3390/rs9080851.
    [9] SIM S,SOCK J,KWAK K. Indirect correspondence-based robust extrinsic calibration of LiDAR and camera[J]. Sensors,2016,16(6). DOI: 10.3390/s16060933.
    [10] LIAO Qinghai,CHEN Zhenyong,LIU Yang,et al. Extrinsic calibration of lidar and camera with polygon[C]. IEEE International Conference on Robotics and Biomimetics,Kuala Lumpur,2018. DOI: 10.1109/ROBIO.2018.8665256.
    [11] 徐孝彬,曹晨飞,张磊,等. 基于四面体特征的面阵激光雷达与相机标定方法[J]. 光子学报,2024,53(7):176-190.

    XU Xiaobin,CAO Chenfei,ZHANG Lei,et al. Planar array lidar and camera calibration method based on tetrahedral features[J]. Acta Photonica Sinica,2024,53(7):176-190.
    [12] 谢婧婷,蔺小虎,王甫红,等. 一种点线面约束的激光雷达和相机标定方法[J]. 武汉大学学报(信息科学版),2021,46(12):1916-1923.

    XIE Jingting,LIN Xiaohu,WANG Fuhong,et al. Extrinsic calibration method for LiDAR and camera with joint point-line-plane constraints[J]. Geomatics and Information Science of Wuhan University,2021,46(12):1916-1923.
    [13] PANDEY G,MCBRIDE J R,SAVARESE S,et al. Automatic extrinsic calibration of vision and lidar by maximizing mutual information[J]. Journal of Field Robotics,2015,32(5):696-722. doi: 10.1002/rob.21542
    [14] ZHAO Yipu,WANG Yuanfang,TSAI Y. 2D-image to 3D-range registration in urban environments via scene categorization and combination of similarity measurements[C]. IEEE International Conference on Robotics and Automation,Stockholm,2016:1866-1872.
    [15] JIANG Peng,OSTEEN P,SARIPALLI S. SemCal:semantic LiDAR-camera calibration using neural mutual information estimator[C]. IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems,Karlsruhe,2021. DOI: 10.48550/arXiv.2109.10270.
    [16] MA Tao,LIU Zhizheng,YAN Guohang,et al. CRLF:automatic calibration and refinement based on line feature for LiDAR and camera in road scenes[EB/OL]. (2021-03-08)[2024-06-22]. https://arxiv.org/abs/2103.04558v1.
    [17] ZHU Yufeng,LI Chenghui,ZHANG Yubo. Online camera-LiDAR calibration with sensor semantic information[C]. IEEE International Conference on Robotics and Automation,Paris,2020. DOI: 10.1109/ICRA40945.2020.9196627.
    [18] ISHIKAWA R,OISHI T,IKEUCHI K. LiDAR and camera calibration using motions estimated by sensor fusion odometry[C]. IEEE/RSJ International Conference on Intelligent Robots and Systems,Madrid,2018:7342-7349.
    [19] WANG Li,XIAO Zhipeng,ZHAO Dawei,et al. Automatic extrinsic calibration of monocular camera and LIDAR in natural scenes[C]. IEEE International Conference on Information and Automation,Wuyishan,2018:997-1002.
    [20] SCHNEIDER N,PIEWAK F,STILLER C,et al. RegNet:multimodal sensor registration using deep neural networks[C]. IEEE Intelligent Vehicles Symposium,Los Angeles,2017:1803-1810.
    [21] IYER G,RAM R K,MURTHY J K,et al. CalibNet:geometrically supervised extrinsic calibration using 3D spatial transformer networks[C]. IEEE/RSJ International Conference on Intelligent Robots and Systems,Madrid,2018:1110-1117.
    [22] LYU Xudong,WANG Shuo,YE Dong. CFNet:lidar-camera registration using calibration flow network[J]. Sensors,2021. DOI: 10.48550/arXiv.2104.11907.
    [23] WANG Weimin,NOBUHARA S,NAKAMURA R,et al. SOIC:semantic online initialization and calibration for LiDAR and camera[EB/OL]. (2023-03-09)[2024-06-22]. https://arxiv.org/abs/2003.04260v1.
    [24] JAIN J,LI Jiachen,CHIU M,et al. OneFormer:one transformer to rule universal image segmentation[C]. IEEE/CVF Conference on Computer Vision and Pattern Recognition,Vancouver,2023:2989-2998.
    [25] XIAO Zeqi,ZHANG Wenwei,WANG Tai,et al. Position-guided point cloud panoptic segmentation transformer[EB/OL]. (2023-03-23)[2024-06-22]. https://arxiv.org/abs/2303.13509v1.
Article history
  • Received:  2024-07-16
  • Revised:  2024-10-08
  • Published online:  2024-08-16
