Autonomous pose estimation of underground disaster rescue drones based on visual and laser fusion

HE Yijing, YANG Wei

Citation: HE Yijing, YANG Wei. Autonomous pose estimation of underground disaster rescue drones based on visual and laser fusion[J]. Journal of Mine Automation, 2024, 50(4): 94-102. doi: 10.13272/j.issn.1671-251x.2023080124


doi: 10.13272/j.issn.1671-251x.2023080124
Funding: National Natural Science Foundation of China (51874299).
Article information
    About the authors:

    HE Yijing (2000—), female, born in Zaozhuang, Shandong, is a master's degree candidate whose main research interests are broadband mobile communication and underground drone positioning. E-mail: 21120060@bjtu.edu.cn

    Corresponding author:

    YANG Wei (1964—), male, born in Beijing, is a professor whose main research interests are broadband mobile communication systems and dedicated mobile communication. E-mail: wyang@bjtu.edu.cn

  • CLC number: TD67


  • Abstract: The ability of a drone to navigate autonomously in a post-disaster mine is a prerequisite for it to carry out rescue missions, and autonomous pose estimation in unknown three-dimensional space is one of the key technologies for autonomous navigation. Existing vision-based pose estimation algorithms suffer from scale ambiguity and poor localization performance because a monocular camera cannot directly obtain depth information of the three-dimensional space and is easily affected by dim underground lighting, while laser-based pose estimation algorithms produce erroneous estimates because the LiDAR has a small field of view and a non-uniform scanning pattern and is constrained by the structural features of the mine scene. To address these problems, an autonomous pose estimation algorithm for underground post-disaster rescue drones based on visual and laser fusion was proposed. First, underground image data and laser point cloud data are acquired by the monocular camera and LiDAR carried by the underground drone; ORB feature points are uniformly extracted from each frame of mine image data, their depth is recovered using the depth information of the laser point cloud, and vision-based drone pose estimation is achieved through inter-frame matching of the feature points. Second, feature corner points and feature plane points are extracted from each frame of underground laser point cloud data, and laser-based drone pose estimation is achieved through inter-frame matching of these feature points. Then, the visual matching error function and the laser matching error function are placed under the same pose optimization function, and the pose of the underground drone is estimated based on visual and laser fusion. Finally, historical frame data are introduced through a visual sliding window and a laser local map, an error function between the historical frame data and the latest estimated pose is constructed, and the drone pose is optimized and corrected under local constraints through nonlinear optimization of this error function, which prevents accumulated estimation errors from causing the trajectory to drift. Simulation experiments in an environment emulating a complex post-disaster mine show that the average relative translation error and relative rotation error of the visual-laser fusion pose estimation algorithm are 0.0011 m and 0.0008° respectively, the average processing time of one frame of data is below 100 ms, and the algorithm exhibits no trajectory drift during long-term underground operation; compared with pose estimation algorithms based only on vision or on laser, the fusion algorithm achieves higher accuracy and stability while meeting real-time requirements.
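The fusion step summarized above, in which the visual matching error and the laser matching error are placed under one pose optimization function, can be illustrated with a short sketch. The Python snippet below is not the authors' implementation: the intrinsics, weights, plane correspondences and synthetic data are purely illustrative assumptions. It only shows how reprojection residuals of depth-recovered ORB feature points and point-to-plane residuals of LiDAR feature points can be stacked into one cost and minimized jointly over a single 6-DoF pose with a robust nonlinear least-squares solver.

```python
# A minimal sketch (assumed names and synthetic data, not the paper's code) of joint
# visual-laser pose optimization: visual reprojection residuals and LiDAR
# point-to-plane residuals share one cost function and one 6-DoF pose.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def transform(pose, pts):
    """Apply pose = [axis-angle rotation (3), translation (3)] to an Nx3 point array."""
    return pts @ Rotation.from_rotvec(pose[:3]).as_matrix().T + pose[3:]

def residuals(pose, pts3d, obs2d, K, lidar_pts, plane_pts, plane_n, w_vis=1.0, w_lidar=10.0):
    # Visual term: pixel reprojection error of ORB features whose depth was recovered from LiDAR.
    cam = transform(pose, pts3d)
    uv = (cam @ K.T)[:, :2] / cam[:, 2:3]
    r_vis = w_vis * (uv - obs2d).ravel()
    # Laser term: signed point-to-plane distance between feature points and matched local planes.
    r_lidar = w_lidar * np.sum((transform(pose, lidar_pts) - plane_pts) * plane_n, axis=1)
    return np.concatenate([r_vis, r_lidar])

# Synthetic example: recover a known inter-frame motion from exact correspondences.
rng = np.random.default_rng(0)
K = np.array([[400.0, 0.0, 320.0], [0.0, 400.0, 240.0], [0.0, 0.0, 1.0]])  # assumed intrinsics
gt = np.array([0.01, -0.02, 0.005, 0.10, 0.00, 0.05])              # "true" frame-to-frame pose
pts3d = rng.uniform([-2, -1, 3], [2, 1, 8], (40, 3))                # depth-recovered ORB points
cam_pts = transform(gt, pts3d)
obs2d = (cam_pts @ K.T)[:, :2] / cam_pts[:, 2:3]                    # their pixels in the next frame
lidar_pts = rng.uniform([-3, -2, 2], [3, 2, 10], (60, 3))           # LiDAR feature points
plane_n = np.tile([0.0, 1.0, 0.0], (60, 1))                         # matched plane normals (toy)
plane_pts = transform(gt, lidar_pts)                                # points on the matched planes
sol = least_squares(residuals, np.zeros(6), loss="huber",
                    args=(pts3d, obs2d, K, lidar_pts, plane_pts, plane_n))
print(np.round(sol.x, 4))                                           # should be close to gt
```

In the paper's pipeline the same joint-cost pattern is further constrained by historical frames through the visual sliding window and the laser local map; the fixed weights used here are only placeholders.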

     

  • Figure 1. Underground roadway drone coordinate system

    Figure 2. Process of drone autonomous pose estimation

    Figure 3. Depth recovery of ORB feature points

    Figure 4. Keyframe selection strategy

    Figure 5. Comparison of estimated trajectories with real trajectories of different algorithms

    Figure 6. Comparison of absolute pose error and relative pose error of different algorithms

    Figure 7. Average translation and rotation errors of different algorithms

    Table 1. Average running time of the main algorithm modules (unit: ms)

    Stage               Module                                        Average running time
    Pose estimation     ORB feature point extraction and matching     25.81
    Pose estimation     Laser feature point extraction and matching   21.57
    Pose estimation     Visual-laser pose fusion                      12.69
    Pose optimization   Sliding window and local map optimization     94.46

    Table 2. Comparison of average resource usage of different algorithms (unit: %)

    Algorithm                                        CPU usage    Memory usage
    Vision-based pose estimation algorithm           19.7         30.8
    Laser-based pose estimation algorithm            18.9         26.8
    Visual-laser fusion pose estimation algorithm    21.2         31.6
Publication history
  • Received: 2023-08-31
  • Revised: 2024-04-24
  • Published online: 2024-05-10
