Fault diagnosis of main shaft bearings in mining drills based on transfer imputation of missing data

  • Abstract: To address the heavy noise, large drift, and extensive missing values in monitoring data collected from mining drills under complex underground working conditions, a model-transfer-based method was proposed for imputing missing values in the operating data of drill main shaft bearings. The method is built on a bidirectional temporal convolutional generative adversarial imputation network with embedded spatio-temporal attention (BiTCGAIN-STA): a bidirectional temporal convolutional network (BiTCN) captures forward and backward temporal dependencies, a spatio-temporal attention (STA) mechanism adaptively assigns time and channel weights, and generative adversarial training improves the distributional consistency and diversity of the imputed samples; the model is then fine-tuned on real data in the target domain to strengthen transfer robustness. In addition, a bearing fault diagnosis model based on adaptive weighted fusion and the Informer network was proposed, in which the Informer long-sequence feature extraction network deeply represents the fused signals, improving the recognition of weak fault features. Experimental results show that, at various missing rates, the root mean square error (RMSE) of BiTCGAIN-STA is significantly lower than that of mainstream models such as Mean, MICE, and GAIN, achieving high-quality data reconstruction; the fault diagnosis model reaches 99.87% accuracy in identifying weak faults, significantly outperforming models such as Transformer and graph neural networks (GNN).
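The evaluation protocol described in the abstract, where imputation quality is measured by RMSE on the masked-out entries at different missing rates, can be sketched with a minimal, illustrative NumPy example. This uses synthetic data and a simple Mean-imputation baseline only; the BiTCGAIN-STA model itself is not reproduced here, and all function names are hypothetical.

```python
import numpy as np

def apply_missing(x, rate, rng):
    """Randomly hide entries of x at the given missing rate (mask=True means observed)."""
    mask = rng.random(x.shape) >= rate
    return np.where(mask, x, np.nan), mask

def mean_impute(x_obs):
    """Baseline: fill each channel's missing entries with that channel's observed mean."""
    col_mean = np.nanmean(x_obs, axis=0)
    return np.where(np.isnan(x_obs), col_mean, x_obs)

def imputation_rmse(x_true, x_imp, mask):
    """RMSE evaluated only on the entries that were masked out."""
    missing = ~mask
    return float(np.sqrt(np.mean((x_true[missing] - x_imp[missing]) ** 2)))

# Synthetic multichannel "bearing" signal: a sinusoid plus noise on 4 channels.
rng = np.random.default_rng(0)
x = np.sin(np.linspace(0, 8 * np.pi, 500))[:, None] + 0.1 * rng.standard_normal((500, 4))

for rate in (0.1, 0.3, 0.5):
    x_obs, mask = apply_missing(x, rate, rng)
    x_imp = mean_impute(x_obs)
    print(f"missing rate {rate:.0%}: RMSE = {imputation_rmse(x, x_imp, mask):.4f}")
```

A learned imputer such as BiTCGAIN-STA would replace `mean_impute` while the masking and RMSE evaluation stay the same, which is what makes the cross-model RMSE comparison in the abstract meaningful.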

     
