Research on a Targetless Automatic Calibration Method for Mining LiDAR and Cameras
Abstract: In recent years, the pace of intelligent coal mine construction in China has accelerated, and unmanned auxiliary transport vehicles have become a key component and acceptance criterion of intelligent coal mines. Unmanned driving of mining vehicles depends on accurate environmental perception, and the combination of LiDAR and camera provides richer and more accurate environmental information; to ensure effective fusion of the two sensors, extrinsic calibration is required. This paper proposes a targetless automatic calibration method for mining LiDAR and cameras. To address the low beam count of mine-used vehicle-mounted LiDAR, multi-frame point cloud fusion is used to increase point cloud density, and panoptic segmentation is applied to extract valid targets. Coarse calibration based on 2D-3D target centroid correspondences yields initial extrinsic parameters; the valid target point clouds are then projected onto the segmentation masks using these initial extrinsics, a panoptic-information matching function between target images and point clouds is constructed, and the extrinsic parameters are optimized with particle swarm optimization (PSO). Experimental results show a translation error of 0.055 m and a rotation error of 0.394°. Compared with single-frame point cloud input, fused multi-frame input reduces the translation error by 50.89% and the rotation error by 53.76%; compared with a semantic-feature-based algorithm, the translation error is reduced by 43.88% and the rotation error by 48.63%. These results indicate that the proposed method effectively improves the accuracy of automatic extrinsic calibration between mining LiDAR and cameras, providing a foundation for subsequent fusion of mining LiDAR data and images.
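The refinement step described above, maximizing a matching score over the 6-DOF extrinsics with particle swarm optimization, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `matching_score` function below is a hypothetical quadratic stand-in for the panoptic-information matching function, and all parameter names, bounds, and PSO hyperparameters are assumptions.

```python
import random

def matching_score(params):
    # Hypothetical placeholder for the panoptic-information matching
    # function: peaks at an assumed "true" extrinsic vector
    # (tx, ty, tz, roll, pitch, yaw). Higher is better.
    true = [0.1, -0.2, 0.05, 0.01, -0.02, 0.03]
    return -sum((p - t) ** 2 for p, t in zip(params, true))

def pso(score, dim=6, n_particles=30, iters=200,
        bounds=(-0.5, 0.5), w=0.7, c1=1.5, c2=1.5, seed=0):
    """Maximize `score` over a dim-dimensional box with standard PSO."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                # each particle's best position
    pbest_s = [score(p) for p in pos]          # and its score
    g = max(range(n_particles), key=lambda i: pbest_s[i])
    gbest, gbest_s = pbest[g][:], pbest_s[g]   # swarm-wide best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                # Inertia plus attraction toward personal and global bests.
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            s = score(pos[i])
            if s > pbest_s[i]:
                pbest[i], pbest_s[i] = pos[i][:], s
                if s > gbest_s:
                    gbest, gbest_s = pos[i][:], s
    return gbest, gbest_s

best, best_score = pso(matching_score)
```

In the actual method, `matching_score` would render the valid target point cloud through the candidate extrinsics and the camera intrinsics onto the panoptic segmentation masks and measure the overlap, with the coarse centroid-based calibration supplying the initial search region.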