Research on 3D target detection of unmanned trackless rubber-tyred vehicle in coal mine
Abstract: Environment perception based on 3D detection is the foundation of unmanned driving technology for trackless rubber-tyred vehicles in coal mines. Insufficient illumination underground causes missing information in RGB images, and the narrow roadway space introduces considerable noise into the point cloud data collected by LiDAR, so existing 3D object detection methods based on images or LiDAR point clouds alone struggle to achieve good detection results underground. To address this problem, a 3D object detection method for unmanned trackless rubber-tyred vehicles that fuses images and LiDAR point clouds is proposed. The acquired driving-environment data are first preprocessed: global histogram equalization is applied to raise the brightness of the RGB images and reduce the effect of uneven underground lighting, while the LiDAR point cloud data are denoised with bilateral filtering and reduced in dimensionality with principal component analysis (PCA) to improve point cloud quality and shorten computation time. A detection model that fuses images and LiDAR point clouds is then designed: a region proposal network generates 2D image candidate regions, which are fused with the point cloud data at an early feature level to produce 3D candidate regions; these are then fused at a late region level with the image and point cloud data after region-of-interest pooling, and the model outputs 3D detection anchor boxes to complete object detection. Experimental results show that, compared with detection methods based on the YOLO3D and MV3D models, the proposed method achieves higher detection accuracy for the targets of interest and strikes a better balance between accuracy and detection speed. Underground tests show that the method accurately detects the positions of pedestrians and vehicles in the driving environment of the trackless rubber-tyred vehicle without missed detections, demonstrating good adaptability to underground conditions.
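The abstract only names the preprocessing techniques, so the sketch below illustrates, under assumptions of my own, how the three steps might be implemented in Python: global histogram equalization of the RGB image, a bilateral filter over the LiDAR point cloud, and PCA dimensionality reduction. The function names, parameter values (k, sigma_s, sigma_r, n_components) and the choice of OpenCV, SciPy and scikit-learn are illustrative and not taken from the paper.

# Minimal preprocessing sketch (assumptions, not the authors' code):
# histogram equalization for dim RGB frames, bilateral denoising of the
# LiDAR point cloud, and PCA to shrink per-point feature vectors.
import cv2
import numpy as np
from scipy.spatial import cKDTree
from sklearn.decomposition import PCA


def equalize_rgb(img_bgr):
    """Global histogram equalization on the luminance channel,
    brightening an under-lit underground image without shifting hue."""
    ycrcb = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2YCrCb)
    ycrcb[:, :, 0] = cv2.equalizeHist(ycrcb[:, :, 0])
    return cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2BGR)


def bilateral_filter_cloud(xyz, intensity, k=16, sigma_s=0.2, sigma_r=0.1):
    """Bilateral-style denoising of an N x 3 point cloud: each point is
    replaced by a weighted mean of its k nearest neighbours, the weight
    combining spatial distance and intensity difference so that sharp
    structures (roadway walls, vehicles) are preserved."""
    tree = cKDTree(xyz)
    dists, idx = tree.query(xyz, k=k)
    out = np.empty_like(xyz)
    for i in range(xyz.shape[0]):
        w_s = np.exp(-dists[i] ** 2 / (2.0 * sigma_s ** 2))          # spatial term
        w_r = np.exp(-(intensity[idx[i]] - intensity[i]) ** 2
                     / (2.0 * sigma_r ** 2))                          # range term
        w = w_s * w_r
        out[i] = (xyz[idx[i]] * w[:, None]).sum(axis=0) / w.sum()
    return out


def reduce_features(features, n_components=3):
    """PCA dimensionality reduction of per-point feature vectors to cut
    the computation time of the downstream fusion network."""
    return PCA(n_components=n_components).fit_transform(features)

In practice the equalized image and the filtered, reduced point cloud would then be passed to the image/point-cloud fusion detection model described in the abstract.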
References
[1] WANG Chen, BAO Jiusheng, YUAN Xiaoming, et al. Design and control strategy of underground driverless system for trackless rubber tire vehicle[J]. Journal of China Coal Society, 2021, 46(S1): 520-528.
[2] ALI W, ABDELKARIM S, ZAHRAN M, et al. YOLO3D: end-to-end real-time 3D oriented object bounding box detection from LiDAR point cloud[C]//Proceedings of the European Conference on Computer Vision, Munich, 2018: 10-30.
[3] MOUSAVIAN A, ANGUELOV D, FLYNN J, et al. 3D bounding box estimation using deep learning and geometry[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, 2017: 7074-7082.
[4] CHABOT F, CHAOUCH M, RABARISOA J, et al. Deep MANTA: a coarse-to-fine many-task network for joint 2D and 3D vehicle analysis from monocular image[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Hawaii, 2017: 2040-2049.
[5] CHEN Xiaozhi, KUNDU K, ZHU Yukun, et al. 3D object proposals using stereo imagery for accurate object class detection[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 40(5): 1259-1272.
[6] LI Bo, ZHANG Tianlei, XIA Tian. Vehicle detection from 3D lidar using fully convolutional network[Z/OL]. arXiv Preprint, arXiv:1608.07916. https://arxiv.org/abs/1608.07916v1.
[7] KU J, MOZIFIAN M, LEE J, et al. Joint 3D proposal generation and object detection from view aggregation[C]//IEEE/RSJ International Conference on Intelligent Robots and Systems, Madrid, 2018: 1-8.
[8] CHEN Xiaozhi, MA Huimin, WAN Ji, et al. Multi-view 3D object detection network for autonomous driving[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Hawaii, 2017: 1907-1915.
[9] QI C R, LIU Wei, WU Chenxia, et al. Frustum PointNets for 3D object detection from RGB-D data[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, 2018: 918-927.
[10] YAN Ling, HUANG Jiade. Research on unmanned driving system of mine-used truck[J]. Industry and Mine Automation, 2021, 47(4): 19-29.
[11] JIA Zhuguang, SUN Xiaoyu, WANG Bin, et al. Research and prospect of unmanned driving technology[J]. Mining Equipment, 2014(5): 44-47.
[12] LU Haoxiang, LIU Zhenbing, GUO Pengyue, et al. Multi-scale convolution combined with adaptive bi-interval equalization for image enhancement[J]. Acta Photonica Sinica, 2020, 49(10): 158-172.
[13] HU Yaping, KONG Weiwei, HUANG Cuiling, et al. An image denoising method based on convolution operation and full variational model[J]. Telecommunication Engineering, 2020, 60(10): 1194-1199.
[14] REN Shaoqing, HE Kaiming, GIRSHICK R, et al. Faster R-CNN: towards real-time object detection with region proposal networks[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 39(6): 1137-1149.
[15] LIU Wei, ANGUELOV D, ERHAN D, et al. SSD: single shot multibox detector[C]//European Conference on Computer Vision, Amsterdam, 2016: 21-40.