Tunnel fire source depth estimation technology based on infrared and visible light image fusion

Abstract: Tunnel scenarios such as mine roadways and traffic tunnels are frequently threatened by fire, so image-based intelligent fire detection methods that can rapidly locate a fire source in its early stage are of great significance. However, existing methods face the problem of time-series consistency and are highly sensitive to changes in camera pose, which degrades detection performance in complex, dynamic environments. To address this issue, a tunnel fire source depth estimation method based on infrared (IR) and visible light (RGB) image fusion was proposed. A pose network within a self-supervised learning framework was introduced to predict pose changes between adjacent frames. A depth estimation network trained in two stages was constructed; based on the UNet architecture, IR and RGB features were extracted separately and fused at multiple scales, keeping the depth estimation process balanced. A camera height loss was introduced to further improve the accuracy and reliability of fire source detection in complex, dynamic environments. Experimental results on a self-built tunnel flame dataset showed that, with ResNet50 as the backbone network, the constructed self-supervised monocular depth estimation model for tunnel fire sources achieved an absolute relative error of 0.102, a squared relative error of 0.835, and a mean squared error of 4.491, outperforming the mainstream Lite-Mono, MonoDepth, MonoDepth2, and VAD models, and its overall accuracy was the best under the accuracy thresholds of 1.25, 1.25², and 1.25³. The model also predicted objects in both near and distant regions better than the DepthAnything, MonoDepth2, and Lite-Mono models.
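
The dual-branch fusion step described in the abstract (IR and RGB features extracted separately under a UNet-style architecture and fused at several scales) can be pictured with the minimal sketch below. All module names, channel widths, and the concatenation-plus-1×1-convolution fusion rule are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of multi-scale IR/RGB feature fusion in a UNet-style encoder.
# Layer sizes and module names are illustrative, not the paper's implementation.
import torch
import torch.nn as nn

class FusionBlock(nn.Module):
    """Fuses same-scale IR and RGB feature maps with a 1x1 convolution."""
    def __init__(self, channels):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, feat_ir, feat_rgb):
        return self.fuse(torch.cat([feat_ir, feat_rgb], dim=1))

class DualBranchEncoder(nn.Module):
    """Two encoder branches (IR and RGB) whose features are fused at each scale."""
    def __init__(self, channels=(32, 64, 128, 256)):
        super().__init__()
        self.ir_stages, self.rgb_stages, self.fusers = nn.ModuleList(), nn.ModuleList(), nn.ModuleList()
        in_ir, in_rgb = 1, 3  # IR is single-channel, RGB is three-channel
        for out_ch in channels:
            self.ir_stages.append(self._stage(in_ir, out_ch))
            self.rgb_stages.append(self._stage(in_rgb, out_ch))
            self.fusers.append(FusionBlock(out_ch))
            in_ir = in_rgb = out_ch

    @staticmethod
    def _stage(in_ch, out_ch):
        return nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=2, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, ir, rgb):
        fused_pyramid = []
        for ir_stage, rgb_stage, fuser in zip(self.ir_stages, self.rgb_stages, self.fusers):
            ir, rgb = ir_stage(ir), rgb_stage(rgb)
            fused_pyramid.append(fuser(ir, rgb))  # kept as skip connections for the decoder
        return fused_pyramid

# Example: a 640x480 IR/RGB pair yields four fused feature maps at 1/2 ... 1/16 scale.
encoder = DualBranchEncoder()
feats = encoder(torch.randn(1, 1, 480, 640), torch.randn(1, 3, 480, 640))
print([f.shape for f in feats])
```

In such a setup, the fused maps at each scale would feed the UNet decoder that regresses depth, while a separate pose network (not sketched) predicts the relative camera pose between adjacent frames for the self-supervised training signal, as described in the abstract.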
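The reported figures (absolute relative error, squared relative error, mean squared error, and the 1.25/1.25²/1.25³ accuracy thresholds) are the standard monocular depth evaluation metrics. The short sketch below shows how these quantities are typically computed; the function and array names are illustrative.

```python
# Sketch of the evaluation metrics named in the abstract, computed in the
# usual way for monocular depth estimation; names are illustrative.
import numpy as np

def depth_metrics(pred, gt):
    """pred, gt: depth maps in metres; only pixels with gt > 0 are evaluated."""
    valid = gt > 0
    pred, gt = pred[valid], gt[valid]

    abs_rel = np.mean(np.abs(pred - gt) / gt)   # absolute relative error
    sq_rel = np.mean((pred - gt) ** 2 / gt)     # squared relative error
    mse = np.mean((pred - gt) ** 2)             # mean squared error

    ratio = np.maximum(pred / gt, gt / pred)    # per-pixel accuracy ratio
    deltas = [np.mean(ratio < 1.25 ** k) for k in (1, 2, 3)]  # accuracy thresholds
    return abs_rel, sq_rel, mse, deltas
```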

     
