Detection of underground personnel safety helmet wearing based on improved YOLOv8n
Graphical Abstract
Abstract
Existing methods for detecting safety helmet wearing among underground personnel fail to account for factors such as occlusion, small target size, and background interference, leading to poor detection accuracy and insufficiently lightweight models. This paper proposed an improved YOLOv8n model for safety helmet wearing detection in underground mine scenarios. A P2 small target detection layer was added to the neck network to enhance the model's ability to detect small targets and better capture details of safety helmets. A convolutional block attention module (CBAM) was integrated into the backbone network to extract key image features and reduce background interference. The CIoU loss function was replaced with the WIoU loss function to improve the model's localization of detection targets. A lightweight shared convolution detection head (LSCD) was used to reduce model complexity through parameter sharing, and the normalization layers in its convolutions were replaced with group normalization (GN) to reduce model weight while preserving accuracy as much as possible. The experimental results showed that, compared to the baseline YOLOv8n model, the improved YOLOv8n model increased the mean average precision at an intersection over union threshold of 0.5 (mAP@50) by 1.8%, reduced the parameter count by 23.8%, lowered the computational load by 10.4%, and decreased the model size by 17.2%. The improved YOLOv8n model outperformed SSD, YOLOv3-tiny, YOLOv5n, YOLOv7, and YOLOv8n in detection accuracy, with a complexity only slightly higher than that of YOLOv5n, effectively balancing detection accuracy and complexity. In complex underground scenarios, the improved YOLOv8n model achieved accurate detection of safety helmet wearing among underground personnel, reducing missed detections.
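The abstract states that the CIoU loss was replaced with WIoU to improve localization. As a minimal sketch of the idea, the snippet below implements the WIoU v1 formulation, in which a distance-based attention factor amplifies the IoU loss for predicted boxes whose centers lie far from the ground truth. The exact WIoU variant and hyperparameters used in the paper are not given in the abstract, so this is an illustrative assumption, not the paper's implementation.

```python
import math

def iou(box_a, box_b):
    # Boxes are (x1, y1, x2, y2) in pixel coordinates.
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def wiou_v1_loss(pred, gt):
    # WIoU v1: L_WIoU = R_WIoU * L_IoU, where R_WIoU = exp(d^2 / (Wg^2 + Hg^2)),
    # d is the distance between box centers, and Wg, Hg are the width and
    # height of the smallest enclosing box (gradient-detached during training;
    # plain values in this sketch).
    l_iou = 1.0 - iou(pred, gt)
    pcx, pcy = (pred[0] + pred[2]) / 2, (pred[1] + pred[3]) / 2
    gcx, gcy = (gt[0] + gt[2]) / 2, (gt[1] + gt[3]) / 2
    wg = max(pred[2], gt[2]) - min(pred[0], gt[0])
    hg = max(pred[3], gt[3]) - min(pred[1], gt[1])
    r_wiou = math.exp(((pcx - gcx) ** 2 + (pcy - gcy) ** 2) / (wg ** 2 + hg ** 2))
    return r_wiou * l_iou
```

For a perfectly matching box the loss is zero; for a horizontally offset box the attention factor makes the loss strictly larger than the plain IoU loss, which is the mechanism WIoU uses to focus training on poorly localized anchors.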