ZHANG Yufei, MA Hongwei, MAO Qinghua, HUA Hongtao, SHI Jinlong. Coal mine mobile robot positioning method based on fusion of vision and inertial navigation[J]. Journal of Mine Automation, 2021, 47(3): 46-52. DOI: 10.13272/j.issn.1671-251x.2020110049

Coal mine mobile robot positioning method based on fusion of vision and inertial navigation

  • Existing monocular vision positioning algorithms for mobile robots perform poorly in areas with changing or weak illumination and therefore cannot be applied to dark coal mine scenes. To solve these problems, the oriented FAST and rotated BRIEF (ORB) algorithm is improved with non-maximum suppression and adaptive threshold adjustment, and the random sample consensus (RANSAC) algorithm is used for feature point matching, which improves the efficiency of feature point extraction and matching in weakly illuminated areas of coal mines. Because monocular vision alone cannot determine the distance between the robot and an object or the object's size, the epipolar geometry method is used to compute pose from the matched feature points, and inertial navigation data provides the scale information missing from monocular visual positioning. Based on the tight coupling principle, a graph optimization method fuses and solves the inertial navigation data and monocular visual data to obtain the robot's pose. The experimental results show that: ① Although the ORB algorithm extracts fewer feature points, it takes less time, and the extracted feature points are evenly distributed and accurately describe object features. ② Compared with the original ORB algorithm, the improved ORB algorithm takes somewhat longer to extract features, but the number of usable feature points it extracts is greatly increased. ③ The RANSAC algorithm eliminates mismatched points and improves the accuracy of feature point matching, thereby improving the accuracy of monocular vision positioning. ④ The accuracy of the improved fusion positioning method is greatly improved: the relative error is reduced from 0.6 m to less than 0.4 m, the average error from 0.20 m to 0.15 m, and the root mean square error from 0.24 m to 0.18 m.
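The RANSAC outlier-rejection step described above can be illustrated with a minimal generic sketch. This is not the paper's implementation (which matches ORB descriptors between camera frames); it is a self-contained numpy example, with a hypothetical 2D affine motion model and synthetic correspondences standing in for matched feature points, showing how RANSAC keeps the model supported by the most inliers and discards mismatches.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares fit of a 2x3 affine transform A so that dst ~= [src | 1] @ A.T."""
    X = np.hstack([src, np.ones((len(src), 1))])
    A, *_ = np.linalg.lstsq(X, dst, rcond=None)
    return A.T  # shape (2, 3)

def ransac_affine(src, dst, n_iters=200, thresh=2.0, seed=0):
    """RANSAC: repeatedly fit an affine model from a minimal sample of 3
    correspondences, score it by inlier count, and keep the best model.
    Returns the model refit on all inliers and the inlier mask."""
    rng = np.random.default_rng(seed)
    n = len(src)
    best_inliers = np.zeros(n, dtype=bool)
    for _ in range(n_iters):
        idx = rng.choice(n, size=3, replace=False)  # minimal sample
        A = fit_affine(src[idx], dst[idx])
        pred = src @ A[:, :2].T + A[:, 2]           # apply candidate model
        err = np.linalg.norm(pred - dst, axis=1)    # reprojection error
        inliers = err < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return fit_affine(src[best_inliers], dst[best_inliers]), best_inliers

# Synthetic demo: 30 correct matches under a pure translation (+5, -3),
# plus 10 gross mismatches far from the true model.
rng = np.random.default_rng(42)
src = rng.uniform(0, 50, size=(40, 2))
dst = src + np.array([5.0, -3.0])
dst[30:] = rng.uniform(100, 200, size=(10, 2))  # outliers (mismatched points)

A, inliers = ransac_affine(src, dst)
print("inliers found:", int(inliers.sum()))
```

With clean inliers and gross outliers, RANSAC reliably recovers all 30 correct correspondences; the same scoring principle applies when the model is an essential matrix estimated from ORB matches, as in the paper's pipeline.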
