Research on the identification method of non-coal foreign objects of belt conveyors based on deep learning

  • Abstract: To address the problems of existing image-based foreign-object identification methods, namely a single identification target and a lack of localization ability, an identification method for non-coal foreign objects on belt conveyors based on deep learning is proposed. The method takes the object detection algorithm YOLOv3 as its basic framework and improves the model by replacing the cross-entropy loss function in YOLOv3 with the Focal Loss function. By tuning the optimal hyperparameters (the weight parameter α and the focusing parameter γ) to balance the ratio between samples, the method alleviates the class imbalance of non-coal foreign-object samples, so that during training the model focuses on learning the features of hard target samples, improving its prediction performance. A foreign-object dataset was built, on which classification performance and speed were evaluated. The results show that the Focal Loss function outperforms the cross-entropy loss function on the foreign-object dataset, with accuracy improved by 5% at γ=2 and α=0.75; the optimal hyperparameters are therefore γ=2 and α=0.75. The improved YOLOv3 model raises identification precision for the three non-coal foreign objects (bolts, angle irons and nuts) by about 4.7%, 3.5% and 6.8% respectively, and recall by about 6.6%, 3.5% and 6.0% respectively. On a 2080Ti platform, the predicted class of every image matches its actual class, with confidence above 94%.
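The loss substitution described above can be sketched in a few lines. The binary form below is a minimal illustration of the Focal Loss with the abstract's reported hyperparameters (γ=2, α=0.75), not the authors' multi-class YOLOv3 implementation; the function name and scalar interface are assumptions for clarity:

```python
import math

def focal_loss(p, y, alpha=0.75, gamma=2.0):
    """Binary focal loss.

    p     : predicted probability of the positive class
    y     : ground-truth label (1 = foreign object, 0 = background)
    alpha : class-balance weight for the positive class
    gamma : focusing parameter that down-weights easy examples

    alpha=0.75 and gamma=2 are the optimal hyperparameters reported
    in the abstract. With gamma=0 and alpha=0.5 this reduces to a
    (scaled) cross-entropy loss.
    """
    p_t = p if y == 1 else 1.0 - p          # probability of the true class
    alpha_t = alpha if y == 1 else 1.0 - alpha
    return -alpha_t * (1.0 - p_t) ** gamma * math.log(p_t)

# The (1 - p_t)^gamma factor shrinks the loss of well-classified
# samples, so gradient updates are dominated by hard samples:
easy = focal_loss(0.9, 1)   # confident correct prediction -> tiny loss
hard = focal_loss(0.1, 1)   # confident wrong prediction -> large loss
```

This is how the sample-imbalance problem is addressed: easy, abundant background samples contribute almost nothing to the total loss, while rare, hard foreign-object samples dominate training.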

     
