
SLAM技术及其在矿山无人驾驶领域的研究现状与发展趋势

崔邵云 鲍久圣 胡德平 袁晓明 张可琨 阴妍 王茂森 朱晨钟

崔邵云,鲍久圣,胡德平,等. SLAM技术及其在矿山无人驾驶领域的研究现状与发展趋势[J]. 工矿自动化,2024,50(10):38-52. doi: 10.13272/j.issn.1671-251x.2024070010
CUI Shaoyun, BAO Jiusheng, HU Deping, et al. Research status and development trends of SLAM technology in autonomous mining field[J]. Journal of Mine Automation,2024,50(10):38-52. doi: 10.13272/j.issn.1671-251x.2024070010


doi: 10.13272/j.issn.1671-251x.2024070010
Funds: Special Fund for Transformation of Scientific and Technological Achievements of Jiangsu Province (BA2023035); Open Fund of the National Engineering Laboratory for Coal Mining Machinery and Equipment (GCZX-2023-01); Priority Academic Program Development of Jiangsu Higher Education Institutions (PAPD).
Article information
    About the author:

    CUI Shaoyun (1999—), male, from Jincheng, Shanxi, is a master's candidate whose main research interest is underground laser SLAM in coal mines. E-mail: 1055462124@qq.com

    Corresponding author:

    BAO Jiusheng (1979—), male, from Tongcheng, Anhui, is a professor, Ph.D. and doctoral supervisor whose main research interest is mine transportation and its intelligentization. E-mail: cumtbjs@cumt.edu.cn

  • CLC number: TD67

Research status and development trends of SLAM technology in autonomous mining field

  • Abstract: Autonomous driving is one of the key technologies for intelligent mining, and simultaneous localization and mapping (SLAM) is a key link in realizing it. To advance SLAM technology in the field of mining autonomous driving, this paper discusses the principles of SLAM, mature above-ground SLAM solutions, the current state of mine SLAM research, and future development trends of mine SLAM. According to the sensors used, the technical principles and corresponding frameworks are analyzed from three perspectives: visual, laser, and multi-sensor fusion SLAM. Visual and laser SLAM, implemented with a single camera or LiDAR, are susceptible to environmental interference and cannot adapt to complex environments; multi-sensor fusion SLAM is currently the best solution. The current state of mine SLAM research is then examined, and the applicability and research value of visual, laser, and multi-sensor fusion SLAM in underground coal mines and open-pit mines are analyzed: multi-sensor fusion SLAM is the best option for underground coal mines, while SLAM offers limited research value for open-pit mines. Given the present difficulties of underground SLAM (error accumulation over time and operating range, adverse effects caused by various scene types, and sensors that cannot meet the hardware requirements of high-precision SLAM algorithms), SLAM for mining autonomous driving should develop toward multi-sensor fusion, solid-state sensing, and greater intelligence.
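The abstract contrasts visual and laser SLAM front ends; the core operation both share is aligning successive observations to estimate the sensor's motion. As an illustrative sketch not taken from the paper (the helper name `align_scans` and the 2D setup are hypothetical), the following shows the single least-squares alignment step, via the SVD-based Kabsch solution, that ICP-style laser odometry repeats each iteration once correspondences are fixed:

```python
import numpy as np

def align_scans(source, target):
    """One least-squares rigid alignment step (Kabsch/SVD) between two 2D
    point sets with known correspondences -- the building block repeated by
    ICP-style scan matching in laser SLAM front ends.
    Returns R (2x2) and t (2,) such that target ~= source @ R.T + t."""
    mu_s, mu_t = source.mean(axis=0), target.mean(axis=0)
    H = (source - mu_s).T @ (target - mu_t)                  # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, np.sign(np.linalg.det(Vt.T @ U.T))])   # reflection guard
    R = Vt.T @ D @ U.T
    t = mu_t - R @ mu_s
    return R, t

# Recover a known rigid motion from noiseless correspondences.
rng = np.random.default_rng(0)
scan = rng.random((100, 2))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
t_true = np.array([1.0, -2.0])
moved = scan @ R_true.T + t_true
R_est, t_est = align_scans(scan, moved)
```

In a real front end this step runs inside a loop that re-estimates correspondences (e.g., nearest neighbors) between iterations; accumulated errors from such incremental matching are exactly the drift problem the abstract attributes to single-sensor SLAM.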

     

  • 图  1  SLAM系统框架

    Figure  1.  Simultaneous localization and mapping (SLAM) system framework

    图  2  视觉SLAM建图类型

    Figure  2.  Visual SLAM mapping types

    图  3  ORB−SLAM框架

    Figure  3.  Oriented features from accelerated segment test and rotated binary robust independent elementary features simultaneous localization and mapping (ORB-SLAM) framework

    图  4  LSD−SLAM框架

    Figure  4.  Large-scale direct monocular simultaneous localization and mapping (LSD-SLAM) framework

    图  5  激光SLAM建图类型

    Figure  5.  Laser SLAM mapping types

    图  6  LOAM框架

    Figure  6.  LiDAR odometry and mapping (LOAM) framework

    图  7  LOAM−Livox框架

    Figure  7.  LiDAR odometry and mapping for Livox (LOAM-Livox) framework

    图  8  VINS−Mono框架

    Figure  8.  Visual inertial navigation system-monocular (VINS-Mono) framework

    图  9  LIO−SAM框架

    Figure  9.  Lidar-inertial odometry via smoothing and mapping (LIO-SAM) framework

    图  10  LVI−SAM框架

    Figure  10.  Lidar-visual-inertial odometry via smoothing and mapping (LVI-SAM) framework

    图  11  矿山应用场景

    Figure  11.  Mining application scenarios

    图  12  文献[47]提出的关键帧选取流程

    Figure  12.  Keyframe selection flow proposed in literature [47]

    图  13  文献[48]提出的改进双边滤波Retinex算法流程

    Figure  13.  Flow of the improved bilateral filtering Retinex algorithm proposed in literature [48]

    图  14  文献[52]提出的基于NDT的激光SLAM框架

    Figure  14.  Normal distributions transform (NDT)-based laser SLAM framework proposed in literature [52]

    图  15  文献[53]提出的基于GICP的激光SLAM框架

    Figure  15.  Generalized iterative closest point (GICP)-based laser SLAM framework proposed in literature [53]

    图  16  改进LeGO−LOAM框架

    Figure  16.  Improved LeGO-LOAM framework

    图  17  井下相机−IMU融合SLAM框架

    Figure  17.  Underground camera-IMU fusion SLAM framework

    图  18  井下激光雷达−IMU融合SLAM框架

    Figure  18.  Underground LiDAR-IMU fusion SLAM framework

    图  19  井下相机−激光雷达−IMU融合SLAM框架

    Figure  19.  Underground camera-LiDAR-IMU fusion SLAM framework

    表  1  上文未提及的常见多传感器融合SLAM方案

    Table  1.   Common multi-sensor fusion SLAM solutions

    Multi-sensor fusion SLAM solution | Type | Advantages | Disadvantages
    Laser-inertial odometry and mapping [31] | LiDAR-IMU | First open-source LiDAR-IMU fusion scheme | Low computational efficiency
    Laser-inertial state estimator [32] | LiDAR-IMU | Runs nearly an order of magnitude faster than LIO-mapping | High complexity
    Fast laser-inertial odometry series [33-35] | LiDAR-IMU | Lightweight localization and mapping with high efficiency; Faster-LIO extends to solid-state LiDAR | Sacrifices some accuracy; better suited to small-scale scenes
    Solid-state LiDAR-inertial odometry and mapping [36] | LiDAR-IMU | Fusion scheme designed for solid-state LiDAR | Severe feature-matching degeneration in narrow corridors
    Laser-inertial odometry and mapping [37] | LiDAR-IMU | Removes the influence of dynamic objects; low drift; strong robustness | Poor real-time performance
    Keyframe-based visual-inertial SLAM system [38] | Camera-IMU | Accurate trajectory estimation | No loop closure; cannot build an environment map
    Visual-inertial SLAM system based on oriented FAST and rotated BRIEF features [39] | Camera-IMU | Fusing IMU data solves feature loss under fast motion | Cannot run long-term in scenes with marked illumination change or sparse features
    SLAM based on oriented FAST and rotated BRIEF features [40] | Camera-IMU | Good localization accuracy and real-time performance | Feature loss under fast motion; heavy computation in large-scale scenes
    Visual-laser odometry and mapping [41] | LiDAR-Camera-IMU | High accuracy; good robustness | No loop closure
    Laser-monocular visual odometry [42] | LiDAR-Camera-IMU | Rich environment information, convenient for later semantic segmentation | Lower accuracy than V-LOAM
    Robust real-time laser-inertial-visual joint estimation [43-45] | LiDAR-Camera-IMU | Strong real-time performance; excellent extensibility | Large demand on computing resources
    Fast laser-inertial-visual odometry [46] | LiDAR-Camera-IMU | High computational efficiency; good robustness | High hardware requirements
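All of the schemes tabulated above share one structure: an IMU propagates the state at high rate, and camera or LiDAR observations correct it. As a deliberately simplified, hypothetical illustration (the function `fuse_step` and the scalar state are inventions for this sketch; real systems estimate full 6-DoF states, many of them tightly coupled), a one-dimensional Kalman predict/correct cycle conveys that structure:

```python
def fuse_step(x, P, u, z, q=0.05, r=0.1):
    """One predict/correct cycle of a scalar Kalman filter, a toy model of
    loosely coupled fusion: the IMU increment u predicts the pose, and a
    lidar (or visual) odometry fix z corrects it.
    x, P: state estimate and its variance
    q, r: process / measurement noise variances."""
    x_pred, P_pred = x + u, P + q        # IMU propagation grows uncertainty
    K = P_pred / (P_pred + r)            # Kalman gain weighs the lidar fix
    x_new = x_pred + K * (z - x_pred)    # correct toward the measurement
    P_new = (1.0 - K) * P_pred           # fused estimate is more certain
    return x_new, P_new

# When the IMU prediction and the lidar fix agree, the estimate lands on
# both and the fused variance drops well below the prior variance.
x, P = fuse_step(0.0, 1.0, u=1.0, z=1.0)
```

The trade-offs listed in the table (efficiency versus accuracy, robustness versus hardware demand) largely come from how much richer than this scalar model each system makes the state, the coupling, and the optimization back end.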
  • [1] 鲍久圣,刘琴,葛世荣,等. 矿山运输装备智能化技术研究现状及发展趋势[J]. 智能矿山,2020,1(1):78-88.

    BAO Jiusheng,LIU Qin,GE Shirong,et al. Research status and development trend of intelligent technologies for mine transportation equipment[J]. Journal of Intelligent Mine,2020,1(1):78-88.
    [2] 鲍久圣,张牧野,葛世荣,等. 基于改进A*和人工势场算法的无轨胶轮车井下无人驾驶路径规划[J]. 煤炭学报,2022,47(3):1347-1360.

    BAO Jiusheng,ZHANG Muye,GE Shirong,et al. Underground driverless path planning of trackless rubber tyred vehicle based on improved A* and artificial potential field algorithm[J]. Journal of China Coal Society,2022,47(3):1347-1360.
    [3] SMITH R C,CHEESEMAN P. On the representation and estimation of spatial uncertainty[J]. The International Journal of Robotics Research,1986,5(4):56-68. doi: 10.1177/027836498600500404
    [4] 刘铭哲,徐光辉,唐堂,等. 激光雷达SLAM算法综述[J]. 计算机工程与应用,2024,60(1):1-14.

    LIU Mingzhe,XU Guanghui,TANG Tang,et al. Review of SLAM based on lidar[J]. Computer Engineering and Applications,2024,60(1):1-14.
    [5] HUANG Leyao. Review on LiDAR-based SLAM techniques[C]. International Conference on Signal Processing and Machine Learning,Stanford,2021:163-168.
    [6] 李云天,穆荣军,单永志. 无人系统视觉SLAM技术发展现状简析[J]. 控制与决策,2021,36(3):513-522.

    LI Yuntian,MU Rongjun,SHAN Yongzhi. A survey of visual SLAM in unmanned systems[J]. Control and Decision,2021,36(3):513-522.
    [7] DAVISON A J,REID I D,MOLTON N D,et al. MonoSLAM:real-time single camera SLAM[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence,2007,29(6):1052-1067. doi: 10.1109/TPAMI.2007.1049
    [8] KLEIN G,MURRAY D. Parallel tracking and mapping for small AR workspaces[C]. 6th IEEE and ACM International Symposium on Mixed and Augmented Reality,Nara,2007:225-234.
    [9] MUR-ARTAL R,MONTIEL J M M,TARDOS J D. ORB-SLAM:a versatile and accurate monocular SLAM system[J]. IEEE Transactions on Robotics,2015,31(5):1147-1163. doi: 10.1109/TRO.2015.2463671
    [10] MUR-ARTAL R,TARDOS J D. ORB-SLAM2:an open-source SLAM system for monocular,stereo,and RGB-D cameras[J]. IEEE Transactions on Robotics,2017,33(5):1255-1262. doi: 10.1109/TRO.2017.2705103
    [11] NEWCOMBE R A,LOVEGROVE S J,DAVISON A J. DTAM:dense tracking and mapping in real-time[C]. International Conference on Computer Vision,Barcelona,2011:2320-2327.
    [12] 张继贤,刘飞. 视觉SLAM环境感知技术现状与智能化测绘应用展望[J]. 测绘学报,2023,52(10):1617-1630.

    ZHANG Jixian,LIU Fei. Review of visual SLAM environment perception technology and intelligent surveying and mapping application[J]. Acta Geodaetica et Cartographica Sinica,2023,52(10):1617-1630.
    [13] ENGEL J,STUCKLER J,CREMERS D. Large-scale direct SLAM with stereo cameras[C]. IEEE/RSJ International Conference on Intelligent Robots and Systems,Hamburg,2015:1935-1942.
    [14] TATENO K,TOMBARI F,LAINA I,et al. CNN-SLAM:real-time dense monocular SLAM with learned depth prediction[C]. IEEE Conference on Computer Vision and Pattern Recognition,Honolulu,2017:6243-6252.
    [15] 尹鋆泰. 动态场景下基于深度学习的视觉SLAM技术研究[D]. 北京:北京邮电大学,2023.

    YIN Juntai. Research on visual SLAM technology based on deep learning in dynamic scene[D]. Beijing:Beijing University of Posts and Telecommunications,2023.
    [16] MONTEMERLO M,THRUN S,KOLLER D,et al. FastSLAM:a factored solution to the simultaneous localization and mapping problem[C]. AAAI National Conference on Artificial Intelligence,Edmonton,2002.
    [17] HESS W,KOHLER D,RAPP H,et al. Real-time loop closure in 2D LIDAR SLAM[C]. IEEE International Conference on Robotics and Automation,Stockholm,2016:1271-1278.
    [18] ZHANG Ji,SINGH S. LOAM:lidar odometry and mapping in real-time[J]. Robotics:Science and Systems,2014. DOI: 10.15607/RSS.2014.X.007.
    [19] SHAN Tixiao,ENGLOT B. LeGO-LOAM:lightweight and ground-optimized lidar odometry and mapping on variable terrain[C]. IEEE/RSJ International Conference on Intelligent Robots and Systems,Madrid,2018:4758-4765.
    [20] LIN Jiarong,ZHANG Fu. Loam livox:a fast,robust,high-precision LiDAR odometry and mapping package for LiDARs of small FoV[C]. IEEE International Conference on Robotics and Automation,Paris,2020:3126-3131.
    [21] LI Lin,KONG Xin,ZHAO Xiangrui,et al. SA-LOAM:semantic-aided LiDAR SLAM with loop closure[C]. IEEE International Conference on Robotics and Automation,Xi'an,2021:7627-7634.
    [22] CHEN X,MILIOTO A,PALAZZOLO E,et al. SuMa++:efficient LiDAR-based semantic SLAM[C]. IEEE/RSJ International Conference on Intelligent Robots and Systems,Macau,2019:4530-4537.
    [23] WANG Guangming,WU Xinrui,JIANG Shuyang,et al. Efficient 3D deep LiDAR odometry[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence,2023,45(5):5749-5765.
    [24] QIN Tong,LI Peiliang,SHEN Shaojie. VINS-mono:a robust and versatile monocular visual-inertial state estimator[J]. IEEE Transactions on Robotics,2018,34(4):1004-1020. doi: 10.1109/TRO.2018.2853729
    [25] LI Peiliang,QIN Tong,HU Botao,et al. Monocular visual-inertial state estimation for mobile augmented reality[C]. International Symposium on Mixed and Augmented Reality,Nantes,2017:11-21.
    [26] QIN Tong,PAN Jie,GAO Shaozu,et al. A general optimization-based framework for local odometry estimation with multiple sensors[EB/OL]. (2019-01-11)[2024-06-22]. https://arxiv.org/abs/1901.03638.
    [27] SHAN Tixiao,ENGLOT B,MEYERS D,et al. LIO-SAM:tightly-coupled lidar inertial odometry via smoothing and mapping[C]. IEEE/RSJ International Conference on Intelligent Robots and Systems,Las Vegas,2020:5135-5142.
    [28] SHAN Tixiao,ENGLOT B,RATTI C,et al. LVI-SAM:tightly-coupled lidar-visual-inertial odometry via smoothing and mapping[C]. IEEE International Conference on Robotics and Automation,Xi'an,2021:5692-5698.
    [29] 祝晓轩. 基于单目相机与IMU融合的SLAM系统研究[D]. 青岛:青岛大学,2023.

    ZHU Xiaoxuan. Research on SLAM system based on monocular camera and IMU fusion[D]. Qingdao:Qingdao University,2023.
    [30] 秦晓辉,周洪,廖毅霏,等. 动态环境下基于时序滑动窗口的鲁棒激光SLAM系统[J]. 湖南大学学报(自然科学版),2023,50(12):49-58.

    QIN Xiaohui,ZHOU Hong,LIAO Yifei,et al. Robust laser SLAM system based on temporal sliding window in dynamic scenes[J]. Journal of Hunan University(Natural Sciences),2023,50(12):49-58.
    [31] YE Haoyang,CHEN Yuying,LIU Ming. Tightly coupled 3D lidar inertial odometry and mapping[C]. International Conference on Robotics and Automation,Montreal,2019:3144-3150.
    [32] QIN Chao,YE Haoyang,PRANATA C E,et al. LINS:a lidar-inertial state estimator for robust and efficient navigation[C]. IEEE International Conference on Robotics and Automation,Paris,2020:8899-8906.
    [33] XU Wei,ZHANG Fu. FAST-LIO:a fast,robust LiDAR-inertial odometry package by tightly-coupled iterated Kalman filter[J]. IEEE Robotics and Automation Letters,2021,6(2):3317-3324. doi: 10.1109/LRA.2021.3064227
    [34] XU Wei,CAI Yixi,HE Dongjiao,et al. FAST-LIO2:fast direct LiDAR-inertial odometry[J]. IEEE Transactions on Robotics,2022,38(4):2053-2073. doi: 10.1109/TRO.2022.3141876
    [35] BAI Chunge,XIAO Tao,CHEN Yajie,et al. Faster-LIO:lightweight tightly coupled lidar-inertial odometry using parallel sparse incremental voxels[J]. IEEE Robotics and Automation Letters,2022,7(2):4861-4868. doi: 10.1109/LRA.2022.3152830
    [36] LI Kailai,LI Meng,HANEBECK U D. Towards high-performance solid-state-LiDAR-inertial odometry and mapping[J]. IEEE Robotics and Automation Letters,2021,6(3):5167-5174. doi: 10.1109/LRA.2021.3070251
    [37] ZHAO Shibo,FANG Zheng,LI Haolai,et al. A robust laser-inertial odometry and mapping method for large-scale highway environments[C]. IEEE/RSJ International Conference on Intelligent Robots and Systems,Macau,2019:1285-1292.
    [38] LEUTENEGGER S,LYNEN S,BOSSE M,et al. Keyframe-based visual–inertial odometry using nonlinear optimization[J]. The International Journal of Robotics Research,2015,34(3):314-334. doi: 10.1177/0278364914554813
    [39] MUR-ARTAL R,TARDOS J D. Visual-inertial monocular SLAM with map reuse[J]. IEEE Robotics and Automation Letters,2017,2(2):796-803. doi: 10.1109/LRA.2017.2653359
    [40] CAMPOS C,ELVIRA R,RODRIGUEZ J J G,et al. ORB-SLAM3:an accurate open-source library for visual,visual-inertial,and multimap SLAM[J]. IEEE Transactions on Robotics,2021,37(6):1874-1890. doi: 10.1109/TRO.2021.3075644
    [41] ZHANG Ji,SINGH S. Visual-lidar odometry and mapping:low-drift,robust,and fast[C]. IEEE International Conference on Robotics and Automation,Seattle,2015:2174-2181.
    [42] GRAETER J,WILCZYNSKI A,LAUER M. LIMO:lidar-monocular visual odometry[C]. IEEE/RSJ International Conference on Intelligent Robots and Systems,Madrid,2018:7872-7879.
    [43] LIN Jiarong,ZHENG Chunran,XU Wei,et al. R2LIVE:a robust,real-time,LiDAR-inertial-visual tightly-coupled state estimator and mapping[J]. IEEE Robotics and Automation Letters,2021,6(4):7469-7476. doi: 10.1109/LRA.2021.3095515
    [44] LIN Jiarong,ZHANG Fu. R3LIVE:a robust,real-time,RGB-colored,LiDAR-inertial-visual tightly-coupled state estimation and mapping package[C]. International Conference on Robotics and Automation,Philadelphia,2022:10672-10678.
    [45] LIN Jiarong,ZHANG Fu. R3LIVE++:a robust,real-time,radiance reconstruction package with a tightly-coupled LiDAR-inertial-visual state estimator[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence,2024. DOI: 10.1109/TPAMI.2024.3456473.
    [46] ZHENG Chunran,ZHU Qingyan,XU Wei,et al. FAST-LIVO:fast and tightly-coupled sparse-direct LiDAR-inertial-visual odometry[C]. IEEE/RSJ International Conference on Intelligent Robots and Systems,Kyoto,2022:4003-4009.
    [47] 高毅楠,姚顽强,蔺小虎,等. 煤矿井下多重约束的视觉SLAM关键帧选取方法[J]. 煤炭学报,2024,49(增刊1):472-482.

    GAO Yinan,YAO Wanqiang,LIN Xiaohu,et al. Visual SLAM keyframe selection method with multiple constraints in underground coal mines[J]. Journal of China Coal Society,2024,49(S1):472-482.
    [48] 冯玮,姚顽强,蔺小虎,等. 顾及图像增强的煤矿井下视觉同时定位与建图算法[J]. 工矿自动化,2023,49(5):74-81.

    FENG Wei,YAO Wanqiang,LIN Xiaohu,et al. Visual simultaneous localization and mapping algorithm of coal mine underground considering image enhancement[J]. Journal of Mine Automation,2023,49(5):74-81.
    [49] 马宏伟,王岩,杨林. 煤矿井下移动机器人深度视觉自主导航研究[J]. 煤炭学报,2020,45(6):2193-2206.

    MA Hongwei,WANG Yan,YANG Lin. Research on depth vision based mobile robot autonomous navigation in underground coal mine[J]. Journal of China Coal Society,2020,45(6):2193-2206.
    [50] HUBER D F,VANDAPEL N. Automatic three-dimensional underground mine mapping[J]. The International Journal of Robotics Research,2006,25(1):7-17. doi: 10.1177/0278364906061157
    [51] 安震. 自主导航搜救机器人关键技术研究[D]. 沈阳:东北大学,2015.

    AN Zhen. Research on key technologies of autonomous navigation search and rescue robot[D]. Shenyang:Northeastern University,2015.
    [52] LI Menggang,ZHU Hua,YOU Shaoze,et al. Efficient laser-based 3D SLAM for coal mine rescue robots[J]. IEEE Access,2019,7:14124-14138. doi: 10.1109/ACCESS.2018.2889304
    [53] REN Zhuli,WANG Liguan,BI Lin. Robust GICP-based 3D LiDAR SLAM for underground mining environment[J]. Sensors,2019,19(13). DOI: 10.3390/s19132915.
    [54] 邹筱瑜,黄鑫淼,王忠宾,等. 基于集成式因子图优化的煤矿巷道移动机器人三维地图构建[J]. 工矿自动化,2022,48(12):57-67,92.

    ZOU Xiaoyu,HUANG Xinmiao,WANG Zhongbin,et al. 3D map construction of coal mine roadway mobile robot based on integrated factor graph optimization[J]. Journal of Mine Automation,2022,48(12):57-67,92.
    [55] 许鹏程. 基于粒子群优化的煤矿井下机器人FASTSLAM算法研究[D]. 北京:煤炭科学研究总院,2017.

    XU Pengcheng. Research on FASTSLAM algorithm of coal mine underground robot based on particle swarm optimization[D]. Beijing:China Coal Research Institute,2017.
    [56] 杨林,马宏伟,王岩,等. 煤矿巡检机器人同步定位与地图构建方法研究[J]. 工矿自动化,2019,45(9):18-24.

    YANG Lin,MA Hongwei,WANG Yan,et al. Research on method of simultaneous localization and mapping of coal mine inspection robot[J]. Industry and Mine Automation,2019,45(9):18-24.
    [57] 代嘉惠. 大功率本安驱动煤矿救援机器人定位与建图算法研究[D]. 重庆:重庆大学,2019.

    DAI Jiahui. Study on localization and mapping algorithm of high-power intrinsically safe coal mine rescue robot[D]. Chongqing:Chongqing University,2019.
    [58] 李仲强. 煤矿救援机器人自主建图和导航技术研究[D]. 淮南:安徽理工大学,2019.

    LI Zhongqiang. Research on self-construction and navigation technology of coal mine rescue robot[D]. Huainan:Anhui University of Science and Technology,2019.
    [59] 李芳威,鲍久圣,王陈,等. 基于LD改进Cartographer建图算法的无人驾驶无轨胶轮车井下SLAM自主导航方法及试验[J/OL]. 煤炭学报:1-12[2024-06-22]. https://doi.org/10.13225/j.cnki.jccs.2023.0731.

    LI Fangwei,BAO Jiusheng,WANG Chen,et al. Underground SLAM autonomous navigation method and test of an unmanned trackless rubber-tyred vehicle based on an LD-improved Cartographer mapping algorithm[J/OL]. Journal of China Coal Society:1-12[2024-06-22]. https://doi.org/10.13225/j.cnki.jccs.2023.0731.
    [60] 顾清华,白昌鑫,陈露,等. 基于多线激光雷达的井下斜坡道无人矿卡定位与建图方法[J]. 煤炭学报,2024,49(3):1680-1688.

    GU Qinghua,BAI Changxin,CHEN Lu,et al. Localization and mapping method for unmanned mining trucks in underground slope roads based on multi-line lidar[J]. Journal of China Coal Society,2024,49(3):1680-1688.
    [61] 薛光辉,李瑞雪,张钲昊,等. 基于激光雷达的煤矿井底车场地图融合构建方法研究[J]. 煤炭科学技术,2023,51(8):219-227.

    XUE Guanghui,LI Ruixue,ZHANG Zhenghao,et al. Lidar based map construction fusion method for underground coal mine shaft bottom[J]. Coal Science and Technology,2023,51(8):219-227.
    [62] ZHU Daixian,JI Kangkang,WU Dong,et al. A coupled visual and inertial measurement units method for locating and mapping in coal mine tunnel[J]. Sensors,2022,22(19):7437. doi: 10.3390/s22197437
    [63] 汪雷. 煤矿探测机器人图像处理及动态物体去除算法研究[D]. 徐州:中国矿业大学,2020.

    WANG Lei. Research on image processing and dynamic object removal algorithm of coal mine detection robot[D]. Xuzhou:China University of Mining and Technology,2020.
    [64] YANG Xin,LIN Xiaohu,YAO Wanqiang,et al. A robust LiDAR SLAM method for underground coal mine robot with degenerated scene compensation[J]. Remote Sensing,2022,15(1). DOI: 10.3390/RS15010186.
    [65] YANG Lin,MA Hongwei,NIE Zhen,et al. 3D LiDAR point cloud registration based on IMU preintegration in coal mine roadways[J]. Sensors,2023,23(7). DOI: 10.3390/S23073473.
    [66] 司垒,王忠宾,魏东,等. 基于IMU−LiDAR紧耦合的煤矿防冲钻孔机器人定位导航方法[J]. 煤炭学报,2024,49(4):2179-2194.

    SI Lei,WANG Zhongbin,WEI Dong,et al. Positioning and navigation method of underground drilling robot for rock-burst prevention based on IMU-LiDAR tight coupling[J]. Journal of China Coal Society,2024,49(4):2179-2194.
    [67] 李猛钢,胡而已,朱华. 煤矿移动机器人LiDAR/IMU紧耦合SLAM方法[J]. 工矿自动化,2022,48(12):68-78.

    LI Menggang,HU Eryi,ZHU Hua. LiDAR/IMU tightly-coupled SLAM method for coal mine mobile robot[J]. Journal of Mine Automation,2022,48(12):68-78.
    [68] 董志华,姚顽强,蔺小虎,等. 煤矿井下顾及特征点动态提取的激光SLAM算法研究[J]. 煤矿安全,2023,54(8):241-246.

    DONG Zhihua,YAO Wanqiang,LIN Xiaohu,et al. LiDAR SLAM algorithm considering dynamic extraction of feature points in underground coal mine[J]. Safety in Coal Mines,2023,54(8):241-246.
    [69] 薛光辉,张钲昊,张桂艺,等. 煤矿井下点云特征提取和配准算法改进与激光SLAM研究[J/OL]. 煤炭科学技术:1-12[2024-06-22]. http://kns.cnki.net/kcms/detail/11.2402.TD.20240722.1557.003.html.

    XUE Guanghui,ZHANG Zhenghao,ZHANG Guiyi,et al. Improvement of point cloud feature extraction and alignment algorithms and LiDAR SLAM in coal mine underground[J/OL]. Coal Science and Technology:1-12[2024-06-22]. http://kns.cnki.net/kcms/detail/11.2402.TD.20240722.1557.003.html.
    [70] 李栋. 基于多源信息融合的巷道语义地图构建与复用方法研究[D]. 苏州:苏州大学,2022.

    LI Dong. A Method of construction and reuse of roadway semantic map based on multi-source information fusion[D]. Suzhou:Soochow University,2022.
    [71] 陈步平. 矿用搜救机器人多源信息融合SLAM方法研究[D]. 徐州:中国矿业大学,2023.

    CHEN Buping. Research on SLAM method of multi-source information fusion for mine search and rescue robot[D]. Xuzhou:China University of Mining and Technology,2023.
    [72] 马艾强,姚顽强. 煤矿井下移动机器人多传感器自适应融合SLAM方法[J]. 工矿自动化,2024,50(5):107-117.

    MA Aiqiang,YAO Wanqiang. Multi sensor adaptive fusion SLAM method for underground mobile robots in coal mines[J]. Journal of Mine Automation,2024,50(5):107-117.
    [73] 滕睿. 露天矿运输车辆无人驾驶关键技术研究[D]. 阜新:辽宁工程技术大学,2023.

    TENG Rui. Research on key technologies of unmanned driving of transport vehicles in open-pit mine[D]. Fuxin:Liaoning Technical University,2023.
    [74] 张清宇,崔丽珍,李敏超,等. 倾斜地面3D点云快速分割算法[J]. 无线电工程,2024,54(2):447-456.

    ZHANG Qingyu,CUI Lizhen,LI Minchao,et al. A fast segmentation algorithm for 3D point cloud on inclined ground[J]. Radio Engineering,2024,54(2):447-456.
    [75] 张清宇. 煤矿环境下LiDAR/IMU融合定位算法研究与实现[D]. 包头:内蒙古科技大学,2023.

    ZHANG Qingyu. Research and implementation of LiDAR/IMU fusion positioning algorithm in coal mine environment[D]. Baotou:Inner Mongolia University of Science & Technology,2023.
    [76] 马宝良,崔丽珍,李敏超,等. 露天煤矿环境下基于LiDAR/IMU的紧耦合SLAM算法研究[J]. 煤炭科学技术,2024,52(3):236-244.

    MA Baoliang,CUI Lizhen,LI Minchao,et al. Study on tightly coupled LiDAR-Inertial SLAM for open pit coal mine environment[J]. Coal Science and Technology,2024,52(3):236-244.
    [77] 李慧,李敏超,崔丽珍,等. 露天煤矿三维激光雷达运动畸变算法研究[J/OL]. 煤炭科学技术:1-12[2024-06-22]. http://kns.cnki.net/kcms/detail/11.2402.td.20240325.1558.006.html.

    LI Hui,LI Minchao,CUI Lizhen,et al. Research on 3D LiDAR motion distortion algorithm for open-pit coal mine[J/OL]. Coal Science and Technology:1-12[2024-06-22]. http://kns.cnki.net/kcms/detail/11.2402.td.20240325.1558.006.html.
Figures (19) / Tables (1)
Publication history
  • Received: 2024-07-03
  • Revised: 2024-10-28
  • Available online: 2024-09-29
