Method for recognizing coal flow status of scraper conveyor in working face
Abstract: In the scraper conveyor scenes of underground coal mine working faces, unfavorable factors such as the variable pose of the scraper conveyor, irregular coal material shapes, restricted equipment installation positions, high dust, and occlusion by foreign objects prevent existing coal flow status recognition methods designed for belt conveyor scenes from being effectively applied in engineering practice. To address these problems, a method for recognizing the coal flow status of a scraper conveyor in a working face based on temporal visual features is proposed. The method first uses the DeepLabV3+ semantic segmentation model to obtain a coarse coal flow region from the working face coal flow video image, and on this basis locates and segments the fine coal flow region by linear fitting, achieving coal flow image extraction. The extracted coal flow images are then arranged in video order to form a coal flow image sequence. Finally, a convolutional 3D (C3D) action recognition model is used to model the features of the coal flow image sequence and automatically recognize the coal flow status. Experimental results show that the method accurately extracts coal flow images and recognizes the coal flow status automatically and in real time, with an average recognition accuracy of 92.73%. For engineering deployment, TensorRT is used to accelerate the models; for coal flow video images with a resolution of 1 280×720, the overall processing speed reaches 42.7 frames/s, meeting the practical requirements of intelligent coal flow status monitoring at the working face.
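The fine coal flow region localization step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes the coarse region arrives as a binary mask (1 = coal flow) from the segmentation model, and the function names (`fit_line`, `refine_region`) are illustrative.

```python
# Sketch of fine coal-flow region localization by linear fitting.
# Assumption (not from the paper's code): the coarse region is a binary
# mask, and the region's left/right edges are approximately straight,
# so least-squares line fits clean up a noisy segmentation boundary.

def fit_line(points):
    """Least-squares fit of x = a*y + b over (y, x) points."""
    n = len(points)
    sy = sum(y for y, _ in points)
    sx = sum(x for _, x in points)
    syy = sum(y * y for y, _ in points)
    syx = sum(y * x for y, x in points)
    denom = n * syy - sy * sy
    if denom == 0:  # degenerate case: all points on a single row
        return 0.0, sx / n
    a = (n * syx - sy * sx) / denom
    b = (sx - a * sy) / n
    return a, b

def refine_region(mask):
    """Fit straight lines to the left/right edges of the coarse mask;
    return per-row (left, right) column bounds of the fine region."""
    lefts, rights = [], []
    for y, row in enumerate(mask):
        cols = [x for x, v in enumerate(row) if v]
        if cols:
            lefts.append((y, min(cols)))
            rights.append((y, max(cols)))
    (al, bl) = fit_line(lefts)
    (ar, br) = fit_line(rights)
    return {y: (round(al * y + bl), round(ar * y + br))
            for y, _ in lefts}
```

In this sketch, a coarse mask whose edges wobble around the straight conveyor rails is replaced by clean fitted bounds, from which the coal flow image can be cropped before being stacked into the temporal sequence fed to C3D.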
Table 1. Comparison of inference time before and after model acceleration (ms)

Framework    DeepLabV3+    C3D
PyTorch      39.5          15.1
TensorRT     14.1          5.7
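As a back-of-the-envelope check on Table 1, the serial per-frame latency of the two inference stages gives an upper bound on frame rate. The snippet below only restates the table's numbers; the breakdown into stages as strictly serial is an assumption.

```python
# Per-frame inference latency (ms) from Table 1 and the resulting
# inference-only upper bound on throughput for the two-stage pipeline
# (DeepLabV3+ segmentation followed by C3D classification).
latency_ms = {
    "PyTorch":  {"DeepLabV3+": 39.5, "C3D": 15.1},
    "TensorRT": {"DeepLabV3+": 14.1, "C3D": 5.7},
}

for framework, stages in latency_ms.items():
    total = sum(stages.values())    # serial per-frame latency (assumption)
    fps_bound = 1000.0 / total      # inference-only throughput bound
    print(f"{framework}: {total:.1f} ms/frame, <= {fps_bound:.1f} frames/s")
```

The TensorRT inference-only bound (about 50.5 frames/s) sits above the reported end-to-end 42.7 frames/s, the gap presumably being video decoding and pre-/post-processing overhead.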