Recognition of unsafe behaviors of key position personnel in coal mines based on improved YOLOv7 and ByteTrack
Abstract: Applying artificial intelligence to recognize, in real time, the behavior of key position personnel in coal mines, such as mine hoist drivers, helps prevent dangerous situations such as equipment misoperation and is of great significance for coal mine production safety. Behavior recognition methods based on image features suffer from poor resistance to background interference and insufficient real-time performance. To address these problems, an unsafe behavior recognition method for key position personnel in coal mines based on improved YOLOv7 and ByteTrack is proposed. First, the backbone and head of the YOLOv7 object detection model are made lightweight with MobileOne and C3 modules to increase inference speed. Second, the ByteTrack tracking algorithm is integrated to track and lock onto the target worker, improving resistance to background interference. Third, MobileNetV2 is used to optimize the network structure of OpenPose, improving the efficiency of skeleton feature extraction. Finally, a spatial temporal graph convolutional network (ST−GCN) analyzes the spatial structure and temporal dynamics of human skeleton keypoints to recognize unsafe behaviors. Experimental results show that the MobileOneC3−YOLO model achieves a precision of 93.7% and infers 52% faster than the YOLOv7 model; the ByteTrack-based personnel locking model achieves a locking success rate of 97.1%; the improved OpenPose model reduces memory requirements by 170.3 MiB and speeds up inference by 74.7% on CPU and 54.9% on GPU; and the unsafe behavior recognition model achieves 93.5% precision on four unsafe behaviors (sleeping on duty due to fatigue, leaving the post, turning sideways to talk, and playing with a mobile phone) at 18.6 frames per second.
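The tracking stage builds on ByteTrack's key idea: associate high-score detection boxes with existing tracks first, then try to recover occluded or blurred targets by matching the remaining tracks against the low-score boxes that other trackers would simply discard. The following is a minimal illustrative sketch of that two-stage association using greedy IoU matching; the published ByteTrack additionally uses Kalman-filter motion prediction and Hungarian assignment, which are omitted here for brevity.

```python
def iou(a, b):
    """IoU of two boxes in (x1, y1, x2, y2) format."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def byte_associate(tracks, dets, scores, high_thr=0.6, iou_thr=0.3):
    """Two-stage association over every detection box.

    tracks: dict track_id -> last known box; dets: candidate boxes;
    scores: detection confidences. Stage 1 matches high-score boxes,
    stage 2 matches leftover tracks to low-score boxes.
    Returns a dict mapping matched track_id -> detection index.
    """
    high = [i for i, s in enumerate(scores) if s >= high_thr]
    low = [i for i, s in enumerate(scores) if s < high_thr]
    matches, unmatched = {}, set(tracks)
    for pool in (high, low):  # stage 1: high-score boxes; stage 2: low-score
        for i in sorted(pool, key=lambda k: -scores[k]):
            best, best_iou = None, iou_thr
            for tid in unmatched:
                v = iou(tracks[tid], dets[i])
                if v > best_iou:
                    best, best_iou = tid, v
            if best is not None:
                matches[best] = i
                unmatched.discard(best)
    return matches
```

In this sketch a partially occluded worker whose detection score drops below `high_thr` is still re-associated with its track in stage 2, which is what lets the method keep the key position worker locked against background interference.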
Table 1. Evaluation metrics

| Metric | Definition | Formula |
| --- | --- | --- |
| Precision (↑) | Precision | $\dfrac{{\mathrm{TP}}}{{\mathrm{TP}} + {\mathrm{FP}}}$ |
| Params (↓) | Number of model parameters | — |
| AP (↑) | Mean precision over the 10 keypoint-similarity (OKS) thresholds [0.50, 0.55, ···, 0.95] | — |
| Frame rate (↑) | Frames processed per second | $\dfrac{\text{total frames}}{\text{time}}$ |
| MOTA (↑) | Multi-object tracking accuracy | $1 - \dfrac{{\mathrm{FN}} + {\mathrm{FP}} + {\mathrm{IDSW}}}{{\mathrm{GT}}}$ |
| IDF1 (↑) | Harmonic mean of ID precision and ID recall | $\dfrac{2{\mathrm{TP}}}{2{\mathrm{TP}} + {\mathrm{FP}} + {\mathrm{FN}}}$ |
| FP (↓) | Number of falsely tracked targets | — |
| FN (↓) | Number of missed targets | — |
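The formulas in Table 1 are standard counting metrics and can be computed directly from the raw counts. A small sketch, following the table's definitions:

```python
def precision(tp, fp):
    """Precision = TP / (TP + FP)."""
    return tp / (tp + fp)

def mota(fn, fp, idsw, gt):
    """Multi-object tracking accuracy: 1 - (FN + FP + IDSW) / GT,
    where GT is the total number of ground-truth objects over all frames."""
    return 1 - (fn + fp + idsw) / gt

def idf1(tp, fp, fn):
    """IDF1 = 2*TP / (2*TP + FP + FN), with identity-consistent counts."""
    return 2 * tp / (2 * tp + fp + fn)

def frame_rate(total_frames, seconds):
    """Frames processed per second."""
    return total_frames / seconds
```

For example, a tracker with 90 true positives and 10 false positives has a precision of 0.9, and a model that processes 558 frames in 30 s runs at 18.6 frames/s.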
Table 2. Personnel detection model performance
| Model | Precision/% | Params/10⁷ | Per-frame inference time/s |
| --- | --- | --- | --- |
| YOLOv5 | 89.6 | 3.12 | 0.0395 |
| YOLOv7 | 91.1 | 3.72 | 0.0437 |
| YOLOv8 | 91.9 | 4.36 | 0.0475 |
| MobileOneC3−YOLO | 93.7 | 2.51 | 0.0208 |
Table 3. Ablation experiment results of personnel detection model
| MobileOne | C3 | Precision/% | Params/10⁷ | Per-frame inference time/s |
| --- | --- | --- | --- | --- |
| × | × | 91.1 | 3.72 | 0.0437 |
| √ | × | 90.8 | 2.93 | 0.0343 |
| × | √ | 92.1 | 2.71 | 0.0359 |
| √ | √ | 93.7 | 2.51 | 0.0208 |
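The abstract's "52% faster" figure can be checked against the per-frame inference times reported above: the time drops from 0.0437 s (YOLOv7) to 0.0208 s (MobileOneC3−YOLO), a reduction of about 52.4%, which roughly doubles throughput. A quick arithmetic check:

```python
# Per-frame inference times from Table 2/3, in seconds.
yolov7_t, ours_t = 0.0437, 0.0208

time_reduction = (yolov7_t - ours_t) / yolov7_t
print(f"per-frame time reduced by {time_reduction:.1%}")        # prints 52.4%
print(f"throughput: {1 / yolov7_t:.1f} -> {1 / ours_t:.1f} FPS")  # 22.9 -> 48.1
```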
Table 4. Comparison experiment results of tracking algorithms
| Algorithm | IDF1/% | MOTA/% | FP | FN | Frame rate/(frame·s⁻¹) |
| --- | --- | --- | --- | --- | --- |
| SORT | 75.5 | 73.6 | 4856 | 21376 | 21.3 |
| DeepSort | 85.7 | 81.3 | 6512 | 19837 | 14.7 |
| ByteTrack | 88.1 | 85.5 | 5539 | 13557 | 29.6 |
Table 5. Statistics on the key position personnel locking results
| Key position personnel | Successful locks (round 1) | Successful locks (round 2) | Successful locks (round 3) | Average successful locks |
| --- | --- | --- | --- | --- |
| Mine hoist driver | 60 | 59 | 60 | 59 |
| Winch driver | 59 | 60 | 57 | 58 |
| Substation attendant | 58 | 60 | 59 | 58 |
| Wellhead signalman | 59 | 58 | 60 | 58 |

Overall locking success rate across all four positions: 97.10%.
Table 6. Performance test results of the unsafe behavior recognition model
| Model | AP/% | Model memory/MiB | Per-frame inference time on CPU (12900K)/s | Per-frame inference time on GPU (3090)/s |
| --- | --- | --- | --- | --- |
| OpenPose | 74.6 | 203.8 | 0.8326 | 0.0573 |
| Improved OpenPose | 72.8 | 33.5 | 0.2103 | 0.0258 |
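The abstract's figures for the improved OpenPose follow directly from Table 6: memory drops by 203.8 − 33.5 = 170.3 MiB, and per-frame time falls from 0.8326 s to 0.2103 s on CPU (≈74.7% faster) and from 0.0573 s to 0.0258 s on GPU (≈54.9% faster). A quick check:

```python
# Values from Table 6.
mem_base, mem_new = 203.8, 33.5      # model memory, MiB
cpu_base, cpu_new = 0.8326, 0.2103   # per-frame CPU time, s
gpu_base, gpu_new = 0.0573, 0.0258   # per-frame GPU time, s

print(f"memory saved: {mem_base - mem_new:.1f} MiB")           # prints 170.3
print(f"CPU speed-up: {(cpu_base - cpu_new) / cpu_base:.1%}")  # prints 74.7%
print(f"GPU speed-up: {(gpu_base - gpu_new) / gpu_base:.2%}")
```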
Table 7. Ablation experiment results of the unsafe behavior recognition model
| Model combination | Precision/% | Frame rate/(frame·s⁻¹) |
| --- | --- | --- |
| YOLOv7 + OpenPose | 91.8 | 7.1 |
| YOLOv7 + improved OpenPose | 92.1 | 12.5 |
| MobileOneC3−YOLO + OpenPose | 91.6 | 13.7 |
| MobileOneC3−YOLO + improved OpenPose | 93.5 | 18.6 |