Coal mine mobile robot positioning method based on fusion of vision and inertial navigation
Abstract: Existing monocular visual positioning algorithms for mobile robots perform poorly under changing illumination and in weakly lit areas, and therefore cannot be applied to the dark environments of underground coal mines. To address this problem, the oriented FAST and rotated BRIEF (ORB) algorithm is improved through non-maximum suppression and adaptive threshold adjustment, and the random sample consensus (RANSAC) algorithm is used for feature point matching, which improves the efficiency of feature point extraction and matching in weakly lit areas of coal mines. Because monocular vision alone cannot determine the distance between the robot and an object or the object's size, the epipolar geometry method is used to solve the visual pose from the matched feature points, and inertial navigation data provides scale information for the monocular visual positioning. Based on the tight-coupling principle, a graph optimization method fuses, optimizes, and solves the inertial navigation data and monocular visual data to obtain the robot's pose. The experimental results show that: ① Although the ORB algorithm extracts fewer feature points, it takes less time, and the evenly distributed feature points accurately describe object features. ② Compared with the original ORB algorithm, the improved ORB algorithm takes somewhat longer to extract features, but the number of usable feature points extracted is greatly increased. ③ The RANSAC algorithm eliminates mismatched points and improves the accuracy of feature point matching, thereby improving monocular visual positioning accuracy. ④ The accuracy of the improved fusion positioning method is greatly improved: the relative error is reduced from 0.6 m to less than 0.4 m, the average error from 0.20 m to 0.15 m, and the root mean square error from 0.24 m to 0.18 m.
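The adaptive threshold adjustment mentioned in the abstract can be illustrated with a minimal sketch: lower the FAST corner threshold when too few keypoints are found (as in dim mine imagery) and raise it when too many are found. The function name, step size, and target range below are illustrative assumptions, not the paper's published parameters; `detect` stands in for a real FAST/ORB detector.

```python
def adapt_fast_threshold(detect, lo=300, hi=800, threshold=20,
                         t_min=5, t_max=60, max_iters=10):
    """Adjust the FAST threshold until the keypoint count falls in [lo, hi].

    `detect(threshold)` returns the number of keypoints found at that
    threshold; lowering the threshold admits weaker corners, which is what
    keeps extraction working in weakly lit scenes.
    """
    for _ in range(max_iters):
        n = detect(threshold)
        if n < lo and threshold > t_min:
            threshold = max(t_min, threshold - 5)   # too few points: relax
        elif n > hi and threshold < t_max:
            threshold = min(t_max, threshold + 5)   # too many: tighten
        else:
            break
    return threshold
```

With OpenCV, the returned value would be passed to `cv2.ORB_create(fastThreshold=...)` before re-detecting.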
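The RANSAC mismatch rejection described above follows the standard hypothesize-and-verify loop. The sketch below uses a single-point translation hypothesis as a deliberately simplified stand-in for the essential-matrix model implied by the paper's epipolar geometry step; the function name and tolerance are assumptions for illustration.

```python
import random

def ransac_translation(matches, tol=2.0, iters=100, seed=0):
    """Keep the largest consensus set among putative matches.

    `matches` is a list of ((x1, y1), (x2, y2)) correspondences. Each
    iteration hypothesizes the translation of one sampled match; matches
    whose displacement agrees within `tol` are inliers, the rest are
    rejected as mismatches.
    """
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(iters):
        (px, py), (qx, qy) = rng.choice(matches)
        dx, dy = qx - px, qy - py          # hypothesized translation
        inliers = [m for m in matches
                   if abs((m[1][0] - m[0][0]) - dx) <= tol
                   and abs((m[1][1] - m[0][1]) - dy) <= tol]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    return best_inliers
```

In a full pipeline the same idea is applied to the epipolar constraint, e.g. via OpenCV's `cv2.findEssentialMat(..., method=cv2.RANSAC)`, whose inlier mask feeds the subsequent pose recovery.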