Volume 50, Issue 4, April 2024
HE Yijing, YANG Wei. Autonomous pose estimation of underground disaster rescue drones based on visual and laser fusion[J]. Journal of Mine Automation, 2024, 50(4): 94-102. doi: 10.13272/j.issn.1671-251x.2023080124

Autonomous pose estimation of underground disaster rescue drones based on visual and laser fusion

doi: 10.13272/j.issn.1671-251x.2023080124
  • Received Date: 2023-08-31
  • Revised Date: 2024-04-24
  • Available Online: 2024-05-10
  • Abstract: The autonomous navigation capability of drones in post-disaster mines is a prerequisite for performing rescue and relief tasks, and autonomous pose estimation in unknown three-dimensional space is one of the key technologies for autonomous drone navigation. At present, vision-based pose estimation algorithms suffer from scale ambiguity and poor positioning performance, because a monocular camera cannot directly obtain depth information in three-dimensional space and is susceptible to dim underground lighting; laser-based pose estimation algorithms are prone to errors caused by the LiDAR's small field of view and uneven scanning pattern, and are constrained by the structural characteristics of mine scenes. To solve these problems, an autonomous pose estimation algorithm for underground disaster-rescue drones based on visual and laser fusion is proposed. Firstly, the monocular camera and LiDAR carried by the underground drone acquire mine image data and laser point cloud data; ORB feature points are uniformly extracted from each frame of the image data, their depth is recovered from the laser point cloud, and vision-based drone pose estimation is achieved through inter-frame matching of the feature points. Secondly, feature corner points and feature plane points are extracted from each frame of the underground laser point cloud, and laser-based drone pose estimation is achieved through inter-frame matching of these features. Thirdly, the visual matching error function and the laser matching error function are placed under a single pose optimization function, and the pose of the underground drone is estimated by fusing vision and laser. Finally, historical frame data are introduced through a visual sliding window and a laser local map to construct an error function between the historical frames and the latest estimated pose; nonlinear optimization of this error function corrects the drone pose under local constraints and prevents accumulated estimation errors from causing trajectory drift. Simulation experiments reproducing the complex environment of a mine after a disaster show that the average relative translation error and average relative rotation error of the fusion algorithm are 0.0011 m and 0.0008°, respectively, and the average processing time per frame is less than 100 ms; the algorithm exhibits no trajectory drift during long-term underground operation. Compared with pose estimation algorithms based solely on vision or laser, the fusion algorithm improves accuracy and stability while meeting real-time requirements.
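The depth-recovery step in the abstract pairs each ORB keypoint with nearby laser returns. Below is a minimal sketch of one common way to do this, assuming a known camera-LiDAR extrinsic calibration (R_cl, t_cl) and pinhole intrinsics K; the function name, the nearest-neighbor association, and the pixel-distance threshold are illustrative assumptions, not the authors' implementation (work in this line often interpolates depth among several neighboring laser points instead of taking a single nearest one).

```python
# Sketch (assumptions, not the authors' code): recover depth for ORB
# keypoints by projecting the LiDAR point cloud into the image and
# borrowing the depth of the nearest projected laser point.
import numpy as np


def recover_orb_depth(keypoints_2d, lidar_pts, R_cl, t_cl, K,
                      max_pixel_dist=3.0):
    """Return a 3D point in the camera frame for each ORB keypoint,
    or NaN where no laser point projects close enough."""
    cam_pts = (lidar_pts @ R_cl.T) + t_cl        # LiDAR -> camera frame
    cam_pts = cam_pts[cam_pts[:, 2] > 0]         # keep points in front of camera
    proj = cam_pts @ K.T
    uv = proj[:, :2] / proj[:, 2:3]              # pixel coordinates

    out = np.full((len(keypoints_2d), 3), np.nan)
    for i, kp in enumerate(keypoints_2d):
        d2 = np.sum((uv - kp) ** 2, axis=1)
        j = np.argmin(d2)
        if d2[j] <= max_pixel_dist ** 2:
            z = cam_pts[j, 2]                    # borrow the laser depth
            # back-project the keypoint at that depth: z * K^-1 [u, v, 1]
            out[i] = z * np.linalg.solve(K, np.array([kp[0], kp[1], 1.0]))
    return out
```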
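The fusion step described third in the abstract amounts to minimizing both matching errors over the same six-degree-of-freedom inter-frame pose. The sketch below illustrates that idea, assuming an axis-angle pose parameterization, a reprojection residual for the depth-recovered visual features, a point-to-plane residual for the laser features, and SciPy's Levenberg-Marquardt solver; the scalar weights w_vis and w_laser stand in for whatever weighting the paper actually uses and, like all names here, are illustrative assumptions.

```python
# Sketch (assumptions, not the authors' code): stack the visual matching
# error and the laser matching error into one nonlinear least-squares
# problem over a single pose (axis-angle rotation + translation).
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation


def visual_residuals(pose, pts3d, pts2d, K):
    """Reprojection error of depth-recovered ORB features (3D in the
    previous frame) against their 2D matches in the current frame."""
    R = Rotation.from_rotvec(pose[:3]).as_matrix()
    cam = (pts3d @ R.T) + pose[3:]               # transform into current frame
    proj = cam @ K.T
    proj2d = proj[:, :2] / proj[:, 2:3]          # perspective division
    return (proj2d - pts2d).ravel()


def laser_residuals(pose, src_pts, plane_pts, plane_normals):
    """Point-to-plane error of laser feature points against the planes
    (edge lines are handled analogously) matched in the previous frame."""
    R = Rotation.from_rotvec(pose[:3]).as_matrix()
    moved = (src_pts @ R.T) + pose[3:]
    return np.einsum('ij,ij->i', moved - plane_pts, plane_normals)


def fused_pose(pts3d, pts2d, K, src_pts, plane_pts, plane_normals,
               w_vis=1.0, w_laser=1.0):
    """Estimate the inter-frame pose by minimizing both residual blocks
    under one optimization function."""
    def residuals(pose):
        return np.concatenate([
            w_vis * visual_residuals(pose, pts3d, pts2d, K),
            w_laser * laser_residuals(pose, src_pts, plane_pts, plane_normals),
        ])
    sol = least_squares(residuals, x0=np.zeros(6), method='lm')
    return sol.x                                 # axis-angle (3) + translation (3)
```

The same pattern extends to the local-constraint correction the abstract describes last: residuals against the visual sliding window and the laser local map are appended to the stack before re-solving.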

     

