Abstract:
Underground personnel behavior recognition is an important measure for ensuring safe production in coal mines. Existing research on underground personnel behavior recognition lacks analysis of the perception mechanism, and its feature extraction methods are relatively simple. To address these problems, a behavior recognition method for underground personnel based on a fusion network is proposed. The method consists of three parts: data preprocessing, feature construction, and recognition network construction. Data preprocessing: the collected channel state information (CSI) data are processed with a CSI quotient model, subcarrier denoising, and discrete wavelet denoising to reduce the impact of environmental and equipment noise. Feature construction: the processed data are transformed into images using Gramian angular summation/difference fields (GASF/GADF) to preserve their spatial and temporal features. Recognition network construction: according to the characteristics of personnel actions, a fusion network composed of a gated recurrent unit (GRU)-based encoder-decoder network and a multiscale convolutional neural network (CNN) is proposed. The GRU preserves the correlation between preceding and subsequent data, and the weight allocation strategy of the attention mechanism effectively extracts key features to improve the accuracy of behavior recognition. Experimental results show that the average recognition accuracy of this method for eight actions, namely walking, taking off a hat, throwing things, sitting, smoking, waving, running, and sleeping, is 97.37%. Recognition accuracy is highest for sleeping and sitting, while walking and running are the most prone to misclassification. Using accuracy, precision, recall, and
F1 score as evaluation metrics, the fusion network is shown to outperform CNN and GRU, and its personnel behavior recognition accuracy is higher than that of the HAR, WiWave, and Wi-Sense systems. The average recognition accuracy for walking and taking off a hat at normal speed is 95.6%, higher than 93.6% for fast motion and 92.7% for slow motion. Recognition accuracy is also higher when the distance between the transceiver devices is 2 m or 2.5 m.
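For readers who want a concrete picture of the feature-construction step, the following minimal sketch maps a single denoised CSI subcarrier sequence to GASF and GADF images using the standard Gramian angular field formulation; the segment length, rescaling range, and function name are illustrative assumptions rather than the paper's exact implementation.

```python
import numpy as np

def gramian_angular_fields(series):
    """Map a 1-D CSI time series to GASF and GADF images.

    Standard Gramian angular field construction: rescale the series to
    [-1, 1], convert each sample to a polar angle, then build the
    summation (GASF) and difference (GADF) matrices.
    """
    x = np.asarray(series, dtype=float)
    # Min-max rescale to [-1, 1] so arccos is defined for every sample.
    x = 2.0 * (x - x.min()) / (x.max() - x.min() + 1e-12) - 1.0
    phi = np.arccos(np.clip(x, -1.0, 1.0))       # polar angles
    gasf = np.cos(phi[:, None] + phi[None, :])   # cos(phi_i + phi_j)
    gadf = np.sin(phi[:, None] - phi[None, :])   # sin(phi_i - phi_j)
    return gasf, gadf

# Example: a denoised CSI amplitude segment (synthetic placeholder data).
csi_segment = np.sin(np.linspace(0, 4 * np.pi, 128))
gasf_img, gadf_img = gramian_angular_fields(csi_segment)
print(gasf_img.shape, gadf_img.shape)  # (128, 128) (128, 128)
```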
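Similarly, the skeleton below sketches one plausible way to combine a GRU-based encoder-decoder branch, a multiscale CNN branch over the GASF/GADF images, and an attention-based weighting step into a single fusion classifier. All layer sizes, kernel sizes, and the fusion rule are assumptions made for illustration; the paper's actual architecture is defined in the body of the text.

```python
import torch
import torch.nn as nn

class FusionNet(nn.Module):
    """Structural sketch of a fusion network: GRU encoder-decoder branch
    plus multiscale CNN branch, weighted by a simple attention layer."""

    def __init__(self, n_subcarriers=30, n_classes=8, hidden=64):
        super().__init__()
        # Sequence branch: GRU encoder and decoder over CSI sequences.
        self.encoder = nn.GRU(n_subcarriers, hidden, batch_first=True)
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        # Image branch: parallel convolutions with different kernel sizes
        # applied to stacked GASF/GADF images (2 input channels).
        self.branches = nn.ModuleList([
            nn.Sequential(nn.Conv2d(2, 16, k, padding=k // 2),
                          nn.ReLU(), nn.AdaptiveAvgPool2d(1))
            for k in (3, 5, 7)
        ])
        # Attention-style weighting over the concatenated features.
        feat_dim = hidden + 16 * 3
        self.attn = nn.Sequential(nn.Linear(feat_dim, feat_dim), nn.Sigmoid())
        self.classifier = nn.Linear(feat_dim, n_classes)

    def forward(self, csi_seq, gaf_img):
        # csi_seq: (batch, time, subcarriers); gaf_img: (batch, 2, H, W)
        enc_out, h = self.encoder(csi_seq)
        dec_out, _ = self.decoder(enc_out, h)
        seq_feat = dec_out[:, -1]  # last decoder time step
        img_feat = torch.cat([b(gaf_img).flatten(1) for b in self.branches], dim=1)
        fused = torch.cat([seq_feat, img_feat], dim=1)
        return self.classifier(self.attn(fused) * fused)

# Example forward pass with synthetic shapes.
model = FusionNet()
logits = model(torch.randn(4, 128, 30), torch.randn(4, 2, 128, 128))
print(logits.shape)  # torch.Size([4, 8])
```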