A maintenance guidance system for coal mine electromechanical equipment based on improved YOLOv5s
Abstract: To address the heavy workload and poor generality of QR-code labelling, and the complex implementation and difficult deployment of existing registration-free recognition methods, in the assisted maintenance of coal mine electromechanical equipment, a maintenance guidance system for coal mine electromechanical equipment based on an improved YOLOv5s is proposed. The system consists of a registration-free equipment recognition module, a fault maintenance guidance module, and a remote expert access guidance module. The registration-free equipment recognition module captures images of the faulty equipment with the camera on HoloLens glasses and analyses them with the improved YOLOv5s image recognition algorithm to identify the model of the faulty equipment. The fault maintenance guidance module automatically matches and loads the preset mixed-reality disassembly and assembly model according to the identified equipment model, forming a maintenance guidance solution. The remote expert access guidance module enables interaction between remote experts and on-site maintenance personnel through audio/video sessions, virtual annotation, and other means. To preserve an immersive experience despite the limited on-board computing power of mixed-reality devices, ShuffleNetV2 replaces the Backbone of YOLOv5s, yielding the YOLOv5s-SN2 network and reducing the number of model parameters and the computational overhead. Experimental results show that YOLOv5s-SN2 loses a small amount of precision compared with YOLOv5s, while its floating-point operations (FLOPs) drop from 16.5×10^9 to 7.6×10^9 and its parameter count drops from 15.6×10^6 to 8.2×10^6; among the YOLO-series models compared, YOLOv5s-SN2 performs best overall. Taking a three-lobe Roots blower as an example to verify the system as a whole, the results show that YOLOv5s-SN2 quickly identifies the motor model and calls up the matching virtual model and maintenance procedure, and that remote experts can assist on-site personnel in electromechanical equipment maintenance through audio/video access, annotation, and other methods.
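The abstract names the backbone swap but not the layer-level wiring of YOLOv5s-SN2, so the following is only a minimal sketch of the idea under our own assumptions: torchvision's ShuffleNetV2 (x1.0) stages are exposed as the three multi-scale feature maps (strides 8, 16 and 32) that a YOLOv5-style PANet neck and detection head would consume. The class name ShuffleNetV2Backbone and the use of torchvision are illustrative choices, not part of the original system.

```python
# Minimal sketch (assumed wiring, not the paper's exact implementation):
# expose ShuffleNetV2 stages as the multi-scale features a YOLOv5 neck expects.
import torch
import torch.nn as nn
from torchvision.models import shufflenet_v2_x1_0


class ShuffleNetV2Backbone(nn.Module):
    """ShuffleNetV2 feature extractor producing P3/P4/P5 feature maps
    (strides 8, 16 and 32) in place of the YOLOv5s CSPDarknet backbone."""

    def __init__(self):
        super().__init__()
        m = shufflenet_v2_x1_0()                       # x1.0 variant, randomly initialised
        self.stem = nn.Sequential(m.conv1, m.maxpool)  # stride 4
        self.stage2 = m.stage2                         # stride 8,  116 channels
        self.stage3 = m.stage3                         # stride 16, 232 channels
        self.stage4 = m.stage4                         # stride 32, 464 channels

    def forward(self, x):
        x = self.stem(x)
        p3 = self.stage2(x)    # 1/8  resolution, fed to the PANet neck
        p4 = self.stage3(p3)   # 1/16 resolution
        p5 = self.stage4(p4)   # 1/32 resolution
        return p3, p4, p5


if __name__ == "__main__":
    feats = ShuffleNetV2Backbone()(torch.randn(1, 3, 640, 640))
    print([tuple(f.shape) for f in feats])
    # [(1, 116, 80, 80), (1, 232, 40, 40), (1, 464, 20, 20)]
```

In a full YOLOv5s-SN2 model these three outputs would replace the corresponding CSPDarknet feature maps in the YOLOv5 model definition, with the neck's input channel counts adjusted to 116/232/464.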
Table 1. Comparison of lightweight models

Model          mAP     FLOPs/10^9   Parameters/10^6
YOLOv5s        0.910   16.5         15.6
YOLOv5s-MN3    0.879   6.5          7.4
YOLOv5s-SN2    0.904   7.6          8.2

Table 2. Comparison of different YOLO models

Model          P       R       mAP     FLOPs/10^9   Parameters/10^6
YOLOv5s        0.910   0.856   0.893   16.5         15.6
YOLOv5m        0.954   0.912   0.934   49.6         40.6
YOLOv6         0.908   0.750   0.905   38.5         37.5
YOLOv7         0.893   0.832   0.906   76.2         74.6
YOLOv5s-SN2    0.904   0.873   0.884   7.6          8.2
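The paper does not state how the FLOPs and parameter figures in Tables 1 and 2 were measured. A common way to obtain such numbers for a PyTorch detector is sketched below; the third-party thop profiler and the helper name count_params_and_flops are assumptions made for illustration, and `model` stands in for YOLOv5s or YOLOv5s-SN2.

```python
# Illustrative measurement only; the profiler (thop) and the helper name are
# assumptions, not tools named in the paper.
import torch
from thop import profile   # pip install thop


def count_params_and_flops(model: torch.nn.Module, img_size: int = 640):
    """Return (parameters in 10^6, FLOPs in 10^9) for one img_size x img_size image."""
    model.eval()
    dummy = torch.randn(1, 3, img_size, img_size)
    macs, params = profile(model, inputs=(dummy,), verbose=False)
    return params / 1e6, 2 * macs / 1e9   # 1 multiply-accumulate counted as ~2 FLOPs
```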