
YOLOv5s pruning method for edge computing of coal mine safety monitoring

CHEN Zhiwen, CHEN Ailiangfei, TANG Xiaodan, KE Haobin, JIANG Zhaohui, XIAO Fei

Citation: CHEN Zhiwen, CHEN Ailiangfei, TANG Xiaodan, et al. YOLOv5s pruning method for edge computing of coal mine safety monitoring[J]. Journal of Mine Automation, 2024, 50(7): 89-97. doi: 10.13272/j.issn.1671-251x.2024010095


doi: 10.13272/j.issn.1671-251x.2024010095
Funding: National Natural Science Foundation of China (62173349); Natural Science Foundation of Hunan Province (2022JJ20076); Science and Technology Innovation Program of Hunan Province (2022RC1090).
About the author: CHEN Zhiwen (1986—), male, from Yongzhou, Hunan, is an associate professor and doctoral candidate; his research interests include intelligent analysis of industrial data and machine vision. E-mail: zhiwen.chen@csu.edu.cn

  • CLC number: TD76

YOLOv5s pruning method for edge computing of coal mine safety monitoring

  • Abstract: Combining edge computing with machine vision is a promising approach to coal mine safety monitoring, but the limited storage and computing resources of edge devices make it difficult to deploy complex, high-accuracy vision models. To address this problem, a YOLOv5s pruning method based on indirect and direct evaluation space fusion (IDESF) was proposed for the edge side of coal mine safety monitoring, yielding a lightweight YOLOv5s network. First, the convolutional layers of each YOLOv5s module were structurally analyzed to identify freely prunable and conditionally prunable layers, laying the groundwork for assigning pruning rates and computing the number of filters to prune in each layer. Second, pruning rates were allocated to the prunable layers according to filter-importance scores based on filter weight magnitude and each layer's relative computational complexity, effectively reducing the computational complexity of the pruned network. Third, starting from a direct filter-importance criterion, the indirect output importance of each convolutional layer was introduced into the direct importance space as a scaling factor, updating the positional distribution of the filters and constructing a fused evaluation space that combines filter output information with magnitude information, making the importance evaluation more comprehensive. Finally, the median-filtering procedure for screening redundant filters was refined with a top-k voting scheme, and each filter's redundancy was quantified by its node's in-degree in the adjacency matrix of a directed graph, improving the interpretability and generality of the screening process. Experimental results show the following. ① Balancing accuracy against model size, YOLOv5s_IDESF at a 50% pruning rate is the best lightweight YOLOv5s. On the VOC dataset it achieves the highest mAP@0.5 and mAP@0.5:0.95 among the compared lightweight models (0.72 and 0.44), the lowest parameter count (2.65×10⁶), a computational cost reduced to 1.16×10⁹ FLOPs, the lowest overall complexity, and an image-processing rate of 31.15 frames/s. ② On the coal mine dataset it achieves the highest mAP@0.5 and mAP@0.5:0.95 (0.94 and 0.52), the lowest parameter count (3.12×10⁶), a computational cost reduced to 1.24×10⁹ FLOPs, the lowest overall complexity, and an image-processing rate of 31.55 frames/s.
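The pruning-rate allocation step described in the abstract can be illustrated with a short sketch: score each filter by its weight magnitude scaled by the layer's share of the network's computation, then drop the lowest-scoring filters. The specific formula, function names, and toy numbers below are assumptions for illustration, not the paper's exact expressions.

```python
import numpy as np

def filter_importance(weights, layer_flops, total_flops):
    """Score each filter by the L1 magnitude of its weights, scaled by the
    layer's relative computational complexity. (Illustrative assumption:
    the paper's exact scoring expression may differ.)"""
    magnitudes = np.abs(weights).sum(axis=(1, 2, 3))  # one L1 norm per filter
    return magnitudes * (layer_flops / total_flops)

# Toy convolutional layer: 4 filters, each 2 input channels x 3 x 3.
rng = np.random.default_rng(0)
w = rng.normal(size=(4, 2, 3, 3))
scores = filter_importance(w, layer_flops=1.0e8, total_flops=4.0e8)

# At a 50% pruning rate, the two lowest-scoring filters would be removed.
pruned = np.argsort(scores)[:2]
```

Scaling by `layer_flops / total_flops` means that, for equal weight magnitudes, filters in computationally heavy layers score no higher than their share of the workload warrants, which is one way a pruning-rate allocation can target overall FLOPs rather than raw parameter count.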

     
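The redundancy-screening step (top-k voting plus in-degree counting on a directed graph) can be sketched as follows. The Euclidean distance metric and the choice of k are assumptions for illustration; the paper combines this voting with median filtering, which is omitted here.

```python
import numpy as np

def redundancy_in_degree(filters, k=1):
    """Top-k voting: each filter casts a directed edge to its k nearest
    filters (Euclidean distance between flattened weights). A filter's
    in-degree in the resulting adjacency matrix counts how many other
    filters treat it as a near-duplicate, so a high in-degree marks a
    redundant filter. (Sketch only; metric and k are assumptions.)"""
    flat = filters.reshape(len(filters), -1)
    dist = np.linalg.norm(flat[:, None, :] - flat[None, :, :], axis=-1)
    np.fill_diagonal(dist, np.inf)           # a filter never votes for itself
    adj = np.zeros_like(dist, dtype=int)     # adjacency matrix of the digraph
    for i in range(len(flat)):
        adj[i, np.argsort(dist[i])[:k]] = 1  # edge i -> each of i's k nearest
    return adj.sum(axis=0)                   # column sums = node in-degrees

# Three near-identical filters and one distinct filter: the clones collect
# all the votes, while the distinct filter's in-degree stays at zero.
f = np.array([[1.00, 1, 1, 1],
              [1.01, 1, 1, 1],
              [0.99, 1, 1, 1],
              [10.0, 10, 10, 10]])
in_deg = redundancy_in_degree(f, k=1)
```

Quantifying redundancy through the adjacency matrix keeps the screening interpretable: the matrix records exactly which filters voted for which, so each pruning decision can be traced back to concrete filter-to-filter similarities.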

  • Figure 1. Residual unit module

    Figure 2. Framework of indirect and direct evaluation space fusion (IDESF)

    Table 1. Performance comparison of each model on the VOC2007 test set at each pruning rate

    | Pruning rate/% | Model | mAP@0.5 | mAP@0.5:0.95 | FLOPs/10⁹ | Params/10⁶ | Frame rate/(frames·s⁻¹) |
    | --- | --- | --- | --- | --- | --- | --- |
    | 0 | YOLOv5s | 0.82 | 0.57 | 2.07 | 7.11 | 29.67 |
    | 20 | YOLOv5s_FPGM | 0.81 | 0.56 | 2.00 | 7.06 | 37.31 |
    | 20 | YOLOv5s_SFP | 0.81 | 0.56 | 2.00 | 7.06 | 37.18 |
    | 20 | YOLOv5s_IDESF | 0.73 | 0.40 | 1.67 | 5.34 | 28.09 |
    | 30 | YOLOv5s_FPGM | 0.80 | 0.54 | 2.00 | 7.06 | 37.18 |
    | 30 | YOLOv5s_SFP | 0.80 | 0.54 | 2.00 | 7.06 | 37.04 |
    | 30 | YOLOv5s_IDESF | 0.72 | 0.40 | 1.47 | 4.51 | 28.01 |
    | 40 | YOLOv5s_FPGM | 0.70 | 0.44 | 2.00 | 7.06 | 36.90 |
    | 40 | YOLOv5s_SFP | 0.78 | 0.50 | 2.00 | 7.06 | 37.04 |
    | 40 | YOLOv5s_IDESF | 0.72 | 0.40 | 1.28 | 3.71 | 32.26 |
    | 50 | YOLOv5s_FPGM | 0.61 | 0.36 | 2.00 | 7.06 | 37.74 |
    | 50 | YOLOv5s_SFP | 0.70 | 0.43 | 2.00 | 7.06 | 37.88 |
    | 50 | YOLOv5s_IDESF | 0.72 | 0.44 | 1.16 | 2.65 | 31.15 |
    | 60 | YOLOv5s_FPGM | 0.58 | 0.31 | 2.00 | 7.06 | 37.45 |
    | 60 | YOLOv5s_SFP | 0.64 | 0.37 | 2.00 | 7.06 | 37.59 |
    | 60 | YOLOv5s_IDESF | 0.64 | 0.38 | 0.90 | 2.26 | 32.90 |
    | 70 | YOLOv5s_FPGM | 0.48 | 0.25 | 2.00 | 7.06 | 37.88 |
    | 70 | YOLOv5s_SFP | 0.57 | 0.31 | 2.00 | 7.06 | 37.88 |
    | 70 | YOLOv5s_IDESF | 0.64 | 0.34 | 0.72 | 1.61 | 36.36 |
    | 80 | YOLOv5s_FPGM | 0.14 | 0.06 | 2.00 | 7.06 | 38.02 |
    | 80 | YOLOv5s_SFP | 0.11 | 0.05 | 2.00 | 7.06 | 37.74 |
    | 80 | YOLOv5s_IDESF | 0.18 | 0.08 | 0.72 | 1.04 | 35.21 |

    Table 2. Performance comparison of each model on the VOC2007 test set (pruning rate = 50%)

    | Model | mAP@0.5 | mAP@0.5:0.95 | FLOPs/10⁹ | Params/10⁶ | Co | Frame rate/(frames·s⁻¹) |
    | --- | --- | --- | --- | --- | --- | --- |
    | YOLOv5s | 0.82 | 0.57 | 2.07 | 7.11 | 9.18 | 29.67 |
    | YOLOv5s-ghostnet | 0.71 | 0.43 | 1.00 | 5.53 | 6.53 | 36.36 |
    | YOLOv5s_eagleEye | 0.71 | 0.42 | 1.08 | 3.86 | 4.94 | 53.19 |
    | YOLOv5s_FPGM | 0.61 | 0.36 | 2.00 | 7.07 | 9.07 | 37.74 |
    | YOLOv5s_SFP | 0.70 | 0.43 | 2.00 | 7.07 | 9.07 | 37.88 |
    | YOLOv5s_IDESF | 0.72 | 0.44 | 1.16 | 2.65 | 3.81 | 31.15 |

    Table 3. Performance comparison of each model on the MH-dataset test set at different pruning rates

    | Pruning rate/% | Model | mAP@0.5 | mAP@0.5:0.95 | FLOPs/10⁹ | Params/10⁶ | Frame rate/(frames·s⁻¹) |
    | --- | --- | --- | --- | --- | --- | --- |
    | 0 | YOLOv5s | 0.87 | 0.48 | 2.05 | 7.07 | 30.58 |
    | 20 | YOLOv5s_FPGM | 0.89 | 0.49 | 1.98 | 7.02 | 32.15 |
    | 20 | YOLOv5s_SFP | 0.88 | 0.47 | 1.98 | 7.02 | 31.95 |
    | 20 | YOLOv5s_IDESF | 0.91 | 0.52 | 1.72 | 5.40 | 28.90 |
    | 30 | YOLOv5s_FPGM | 0.81 | 0.46 | 1.98 | 7.02 | 31.65 |
    | 30 | YOLOv5s_SFP | 0.84 | 0.45 | 1.98 | 7.02 | 34.13 |
    | 30 | YOLOv5s_IDESF | 0.91 | 0.50 | 1.57 | 4.61 | 29.07 |
    | 40 | YOLOv5s_FPGM | 0.86 | 0.46 | 1.98 | 7.02 | 33.56 |
    | 40 | YOLOv5s_SFP | 0.88 | 0.48 | 1.98 | 7.02 | 32.26 |
    | 40 | YOLOv5s_IDESF | 0.93 | 0.52 | 1.41 | 3.85 | 30.12 |
    | 50 | YOLOv5s_FPGM | 0.86 | 0.46 | 1.98 | 7.02 | 34.25 |
    | 50 | YOLOv5s_SFP | 0.83 | 0.47 | 1.98 | 7.02 | 33.33 |
    | 50 | YOLOv5s_IDESF | 0.94 | 0.52 | 1.24 | 3.12 | 31.55 |
    | 60 | YOLOv5s_FPGM | 0.89 | 0.46 | 1.98 | 7.02 | 34.01 |
    | 60 | YOLOv5s_SFP | 0.89 | 0.50 | 1.98 | 7.02 | 33.11 |
    | 60 | YOLOv5s_IDESF | 0.90 | 0.42 | 1.06 | 2.40 | 31.15 |
    | 70 | YOLOv5s_FPGM | 0.86 | 0.45 | 1.98 | 7.02 | 35.71 |
    | 70 | YOLOv5s_SFP | 0.77 | 0.41 | 1.98 | 7.02 | 34.25 |
    | 70 | YOLOv5s_IDESF | 0.77 | 0.31 | 0.87 | 1.71 | 31.45 |
    | 80 | YOLOv5s_FPGM | 0.50 | 0.41 | 1.98 | 7.02 | 34.60 |
    | 80 | YOLOv5s_SFP | 0.49 | 0.34 | 1.98 | 7.02 | 33.00 |
    | 80 | YOLOv5s_IDESF | 0.47 | 0.19 | 0.77 | 1.39 | 31.15 |

    Table 4. Performance comparison of each model on the MH-dataset test set (pruning rate = 50%)

    | Model | mAP@0.5 | mAP@0.5:0.95 | FLOPs/10⁹ | Params/10⁶ | Co | Frame rate/(frames·s⁻¹) |
    | --- | --- | --- | --- | --- | --- | --- |
    | Baseline (YOLOv5s) | 0.87 | 0.48 | 2.05 | 7.07 | 9.12 | 30.58 |
    | YOLOv5s-ghostnet | 0.71 | 0.33 | 0.96 | 5.46 | 6.42 | 30.49 |
    | YOLOv5s_eagleEye | 0.91 | 0.48 | 1.07 | 3.82 | 4.89 | 39.37 |
    | YOLOv5s_FPGM | 0.86 | 0.46 | 1.98 | 7.03 | 9.01 | 34.25 |
    | YOLOv5s_SFP | 0.83 | 0.47 | 1.98 | 7.03 | 9.01 | 33.33 |
    | YOLOv5s_IDESF | 0.94 | 0.52 | 1.24 | 3.12 | 4.36 | 31.55 |
Publication history
  • Received: 2024-01-29
  • Revised: 2024-06-30
  • Published online: 2024-07-30
