LI Shanhua, XIAO Tao, LI Xiaoli, et al. Miner action recognition model based on DRCA-GCN[J]. Journal of Mine Automation,2023,49(4):99-105, 112. DOI: 10.13272/j.issn.1671-251x.2022120023

Miner action recognition model based on DRCA-GCN

Underground "three violations" behaviors pose serious safety hazards to coal mine production, so perceiving and preventing unsafe actions of underground personnel in advance is of great significance. However, the poor quality of coal mine surveillance video limits the accuracy of image-based action recognition methods. To solve this problem, a dense residual and combined attention graph convolutional network (DRCA-GCN) is constructed, and a miner action recognition model based on DRCA-GCN is proposed. Firstly, the human pose estimation model OpenPose is used to extract human key points, and missing key points are compensated to reduce the impact of key points lost due to poor video quality. Secondly, DRCA-GCN is used to recognize miner actions. DRCA-GCN introduces a combined attention mechanism and a dense residual network on the basis of the spatio-temporal inception graph convolutional network (STIGCN): the combined attention mechanism enhances each network layer's ability to extract important temporal frames, spatial key points and channel features, while the dense residual network compensates the extracted action features and strengthens feature transmission between network layers, further enhancing the model's ability to recognize miner action features. The experimental results indicate the following. ① On the public dataset NTU-RGB+D 120, under the Cross-Subject (X-Sub) and Cross-Setup (X-Set) evaluation protocols, DRCA-GCN achieves recognition accuracies of 83.0% and 85.1%, respectively, which is 1.1% higher than STIGCN and higher than other mainstream action recognition models; ablation experiments verify the effectiveness of the combined attention mechanism and the dense residual network. ② On the self-built mine personnel action (MPA) dataset, after missing key points are compensated, the average recognition accuracy of DRCA-GCN for squatting, standing, crossing, lying down and sitting increases from 94.2% to 96.7%, with the accuracy for each action class above 94.2%; compared with STIGCN, the average recognition accuracy is improved by 6.5%, and the model is less likely to misrecognize similar actions. A sketch of the kind of combined attention block described here is given below.
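The abstract describes the combined attention mechanism only at a high level (weighting important temporal frames, spatial key points and channel features). The following is a minimal, illustrative sketch of such a block acting on a skeleton feature tensor, not the authors' implementation: the sub-module designs, kernel sizes, reduction ratio and the residual gating form are all assumptions made for illustration.

```python
# Hypothetical "combined attention" block for skeleton features of shape
# (N, C, T, V): N = batch, C = channels, T = frames, V = skeleton key points.
# Three sub-modules weight temporal frames, spatial joints and channels,
# mirroring the roles described in the abstract. Layer choices are assumptions.
import torch
import torch.nn as nn


class CombinedAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        # Temporal attention: weight each frame using joint-averaged features.
        self.temporal = nn.Sequential(
            nn.Conv1d(channels, 1, kernel_size=9, padding=4),
            nn.Sigmoid(),
        )
        # Spatial attention: weight each key point using frame-averaged features.
        self.spatial = nn.Sequential(
            nn.Conv1d(channels, 1, kernel_size=1),
            nn.Sigmoid(),
        )
        # Channel attention: squeeze-and-excitation style global gating.
        self.channel = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n, c, t, v = x.shape
        a_t = self.temporal(x.mean(dim=3)).unsqueeze(-1)          # (N, 1, T, 1)
        a_v = self.spatial(x.mean(dim=2)).unsqueeze(2)            # (N, 1, 1, V)
        a_c = self.channel(x.mean(dim=(2, 3))).view(n, c, 1, 1)   # (N, C, 1, 1)
        # Residual gating keeps the original features when attention is weak.
        return x * (1 + a_t) * (1 + a_v) * (1 + a_c)


if __name__ == "__main__":
    feats = torch.randn(2, 64, 50, 18)        # e.g. 18 OpenPose body key points
    print(CombinedAttention(64)(feats).shape)  # torch.Size([2, 64, 50, 18])
```

In this sketch the three attention maps are applied multiplicatively with a `1 +` residual term, so an uninformative attention map leaves the STIGCN features unchanged rather than suppressing them; whether the paper uses this exact gating is not stated in the abstract.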