Tear detection method of conveyor belt based on fully convolutional neural network
-
Abstract: Existing conveyor belt tear detection methods suffer from poor visible-light imaging quality underground, the lack of a means of measuring the physical size of tears, and poor generalization. To address these problems, a conveyor belt tear detection method based on a fully convolutional neural network is proposed. Images are acquired according to the line-structured light imaging principle, which effectively overcomes the poor lighting conditions in underground coal mines. An improved maximum method is used to detect the line laser stripe; it effectively handles stripe breakpoints, extracts the stripe accurately, and fits the missing points. The U-net network, a fully convolutional neural network, is selected to segment tears in the line laser stripe, converting tear detection into a semantic segmentation problem, and the U-net network is optimized by dimension reduction to cut the number of parameters and the computational cost. The segmentation result is back-projected onto the original image, and the physical size of the tear is measured using the line-structured light calibration data. Experimental results show that the improved maximum method handles the breakpoint regions of the line laser stripe effectively, with no false or missed detections, and outperforms the Steger method and the gray-weighted centroid method. The U-net network converges faster than the SegNet and FCNs networks, trains more stably across iterations, and achieves the best evaluation indicators; U-net4 outperforms U-net3 and U-net5. On the validation set, tear detection achieves a recall of 96.09% and a precision of 96.85%. On the experimental platform, the maximum relative error of the tear physical size measurement is -13.04%.
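The paper itself gives no source code, but the improved maximum method described above can be sketched as a column-wise search for the brightest pixel, rejection of low-intensity (breakpoint) columns, and polynomial fitting of the missing centre points. The snippet below is a minimal sketch assuming a grayscale frame in which the laser stripe runs roughly horizontally; the `intensity_thresh` and `fit_order` values are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

def extract_stripe_center(gray, intensity_thresh=60, fit_order=3):
    """Column-wise maximum stripe-centre extraction with breakpoint filling.

    gray: 2-D grayscale array in which the laser stripe is roughly horizontal.
    Returns one stripe row position per image column; columns where the stripe
    is interrupted are filled from a polynomial fitted to the valid columns.
    """
    h, w = gray.shape
    cols = np.arange(w)
    rows = gray.argmax(axis=0).astype(float)   # brightest row in each column
    peak = gray.max(axis=0)                    # its intensity
    valid = peak >= intensity_thresh           # reject dark (breakpoint) columns

    # Fit a low-order polynomial to the valid centre points and use it
    # to fill the columns where the stripe is broken.
    coeffs = np.polyfit(cols[valid], rows[valid], fit_order)
    fitted = np.polyval(coeffs, cols)
    rows[~valid] = fitted[~valid]
    return rows
```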
-
Table 1. Comparison of training results of different networks

Network model | Dice (validation) | Dice (training) | mIoU (validation) | mIoU (training)
U-net         | 0.9471            | 0.9816          | 0.9470            | 0.9831
SegNet        | 0.9388            | 0.9680          | 0.9389            | 0.9665
FCNs          | 0.9327            | 0.9573          | 0.9328            | 0.9576
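For reference, the Dice coefficient and mIoU reported in Tables 1 and 2 can be computed for a binary tear mask as sketched below; averaging the IoU over the tear and background classes is an assumption about how mIoU is defined here, not a detail stated in the paper.

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice coefficient for binary masks (1 = tear, 0 = background)."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def mean_iou(pred, target, eps=1e-7):
    """Mean IoU over the two classes (tear and background)."""
    pred, target = pred.astype(bool), target.astype(bool)
    ious = []
    for cls_pred, cls_target in ((pred, target), (~pred, ~target)):
        inter = np.logical_and(cls_pred, cls_target).sum()
        union = np.logical_or(cls_pred, cls_target).sum()
        ious.append((inter + eps) / (union + eps))
    return float(np.mean(ious))
```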
Table 2. Comparison of training results of different U-net networks
Network model | Dice (validation) | Dice (training) | mIoU (validation) | mIoU (training)
U-net3        | 0.9408            | 0.9663          | 0.9412            | 0.9677
U-net4        | 0.9456            | 0.9819          | 0.9467            | 0.9828
U-net5        | 0.9471            | 0.9816          | 0.9470            | 0.9831
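A minimal PyTorch sketch of a depth-configurable, narrow-channel U-net is given below to illustrate how encoder depth and channel width control the parameter count. Reading U-net3/4/5 as encoder depths of 3/4/5 down-sampling stages, and the `base_ch=16` width, are assumptions made for illustration rather than the paper's exact architecture.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    """Two 3x3 conv + BN + ReLU layers, the basic U-net building block."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )

class UNet(nn.Module):
    """U-net with a configurable number of down-sampling stages (depth)."""
    def __init__(self, in_ch=1, num_classes=2, base_ch=16, depth=4):
        super().__init__()
        chs = [base_ch * 2 ** i for i in range(depth + 1)]  # e.g. 16..256 for depth=4
        self.encoders = nn.ModuleList()
        prev = in_ch
        for c in chs:
            self.encoders.append(conv_block(prev, c))
            prev = c
        self.pool = nn.MaxPool2d(2)
        self.ups = nn.ModuleList()
        self.decoders = nn.ModuleList()
        for c in reversed(chs[:-1]):
            self.ups.append(nn.ConvTranspose2d(prev, c, 2, stride=2))
            self.decoders.append(conv_block(2 * c, c))  # up-sampled features + skip
            prev = c
        self.head = nn.Conv2d(prev, num_classes, 1)

    def forward(self, x):
        skips = []
        for i, enc in enumerate(self.encoders):
            x = enc(x)
            if i < len(self.encoders) - 1:
                skips.append(x)
                x = self.pool(x)
        for up, dec, skip in zip(self.ups, self.decoders, reversed(skips)):
            x = up(x)
            x = dec(torch.cat([x, skip], dim=1))
        return self.head(x)

# Usage sketch: logits = UNet(depth=4)(torch.randn(1, 1, 256, 256))  # -> (1, 2, 256, 256)
```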
Table 3. Confusion matrix of tearing detection
Ground truth \ Predicted | Tear | Normal
Tear                     | 123  | 5
Normal                   | 4    | N/A
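The recall and precision quoted in the abstract follow directly from the counts in Table 3, with tears as the positive class (TP = 123, FN = 5, FP = 4):

```python
tp, fn, fp = 123, 5, 4           # counts from Table 3 (tear = positive class)

recall = tp / (tp + fn)          # 123 / 128 -> 96.09 %
precision = tp / (tp + fp)       # 123 / 127 -> 96.85 %

print(f"recall = {recall:.2%}, precision = {precision:.2%}")
```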
Table 4. Measurement results of tear physical dimensions
No. | Measured value/mm | Reference value/mm | Relative error/%
1   | 11.30             | 10.18              | 10.96
2   | 15.08             | 16.14              | -6.58
3   | 12.21             | 13.90              | -12.16
4   | 6.80              | 7.72               | -11.95
5   | 13.45             | 12.18              | 10.45
6   | 18.23             | 16.90              | 7.86
7   | 10.04             | 9.08               | 10.61
8   | 16.65             | 15.84              | 5.10
9   | 11.59             | 13.06              | -11.24
10  | 14.13             | 13.02              | 8.52
11  | 20.75             | 19.36              | 7.16
12  | 17.52             | 18.10              | -3.19
13  | 17.96             | 16.48              | 9.00
14  | 11.41             | 10.72              | 6.40
15  | 18.61             | 17.06              | 9.11
16  | 4.87              | 5.60               | -13.04
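The relative errors in Table 4 are consistent with the usual definition (measured - reference) / reference x 100%; small last-digit differences in a few rows presumably come from rounding of the reported measurements. A quick check on row 16, which gives the maximum error cited in the abstract:

```python
measured, reference = 4.87, 5.60                    # row 16 of Table 4
relative_error = (measured - reference) / reference * 100
print(f"{relative_error:.2f} %")                    # -13.04 %
```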