LiDAR-inertial-vision tightly-coupled SLAM algorithm for mining environments

    Abstract: In underground mining environments, unstructured terrain, poor lighting, and repetitive features lead to insufficient accuracy in single-sensor simultaneous localization and mapping (SLAM). Tightly coupled fusion of multi-sensor data can improve accuracy to some extent, but challenges remain, including a high computational load and poor adaptability to sudden illumination changes. To address these issues, a LiDAR-inertial-visual tightly coupled SLAM algorithm for mining environments was proposed by improving the fast and tightly coupled sparse-direct LiDAR-inertial-visual odometry (FAST-LIVO) algorithm. In the multi-sensor fusion stage, the Lucas-Kanade (LK) optical flow method replaced the original sparse direct method: optical flow tracked stable feature points to construct visual reprojection errors, while the random sample consensus (RANSAC) algorithm removed outliers to retain high-quality visual constraints. By combining the inertial measurement unit (IMU) prior estimate with LiDAR point-to-plane residuals, an iterated error-state Kalman filter fused the multi-sensor data in a tightly coupled manner and output high-precision pose estimates. In the mapping stage, an incremental k-d tree (ikd-Tree) dynamically managed point clouds to build the LiDAR local map. Visual feature points were extracted through grid filtering and Shi-Tomasi score calculation, and an array-based structure removed features leaving the field of view in real time to maintain the visual local map. LiDAR point clouds were projected onto the corresponding images to sample RGB color information, generating colored point cloud frames, which were then stitched according to the optimized poses to construct a colored point cloud map.
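The two residual types fused by the filter can be illustrated with a minimal sketch in plain Python. This is not the paper's implementation: the function names, the row-major matrix convention, and the pinhole-projection details are illustrative assumptions.

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def point_to_plane_residual(R, t, p_lidar, q_plane, n_plane):
    # Transform the raw LiDAR point into the world frame: p_w = R @ p + t
    p_world = [dot(row, p_lidar) + ti for row, ti in zip(R, t)]
    # Signed distance to the fitted local plane through q_plane with unit normal n_plane
    return dot(n_plane, [pw - qw for pw, qw in zip(p_world, q_plane)])

def reprojection_residual(R, t, p_world, uv_obs, K):
    # World -> camera: p_c = R^T @ (p_w - t), where (R, t) is the camera pose
    d = [pw - ti for pw, ti in zip(p_world, t)]
    p_cam = [dot([R[0][i], R[1][i], R[2][i]], d) for i in range(3)]
    # Pinhole projection with intrinsics K, compared against the LK-tracked pixel
    u = (K[0][0] * p_cam[0] + K[0][2] * p_cam[2]) / p_cam[2]
    v = (K[1][1] * p_cam[1] + K[1][2] * p_cam[2]) / p_cam[2]
    return [uv_obs[0] - u, uv_obs[1] - v]
```

In an iterated error-state Kalman filter, residuals of both kinds are stacked into a single measurement vector and re-linearized at each iteration around the IMU-propagated prior.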
Experimental results on the Gazebo simulation platform showed that, compared with FAST-LIVO, the proposed algorithm reduced both the absolute trajectory error (ATE) and the relative pose error (RPE) by more than 20%, and rendered features such as tunnel sidewalls, internal pile contours, and ground surfaces more clearly. Tests on the public M2DGR dataset showed that the proposed algorithm achieved higher localization accuracy than LeGO-LOAM, FAST-LIO, and FAST-LIVO, exhibited no significant drift at turns, maintained superior trajectory stability, and reduced the average data processing time. Tests in a long-corridor simulation environment showed that the proposed algorithm reconstructed spatial structures more clearly, captured details such as lines and contours more accurately, suppressed noise better, and reflected the actual environment layout more faithfully.
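The ATE and RPE metrics reported above can be sketched as follows. This is a deliberately simplified version (translation part only, trajectories assumed pre-associated and pre-aligned; a full evaluation would first align the estimate to the ground truth, e.g. with the Umeyama method, and could also score rotation error):

```python
import math

def ate_rmse(est, gt):
    # Absolute trajectory error: RMSE of per-pose position error
    sq = [sum((e - g) ** 2 for e, g in zip(pe, pg)) for pe, pg in zip(est, gt)]
    return math.sqrt(sum(sq) / len(sq))

def rpe_rmse(est, gt, delta=1):
    # Relative pose error: RMSE of the error in motion over a fixed frame interval
    sq = []
    for i in range(len(est) - delta):
        de = [a - b for a, b in zip(est[i + delta], est[i])]
        dg = [a - b for a, b in zip(gt[i + delta], gt[i])]
        sq.append(sum((a - b) ** 2 for a, b in zip(de, dg)))
    return math.sqrt(sum(sq) / len(sq))
```

A trajectory with a constant lateral offset from the ground truth illustrates the difference: its ATE equals the offset, while its RPE is zero because the frame-to-frame motion matches.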
