Abstract:
The realization of autonomous driving for mining vehicles relies on accurate environmental perception, and the combination of LiDAR and cameras can provide richer and more accurate environmental information. Effective fusion of LiDAR and camera data requires extrinsic calibration between the two sensors. At present, most intrinsically safe vehicle-mounted LiDARs used in mines are 16-line devices, which generate relatively sparse point clouds. To address this issue, this paper proposes a targetless automatic extrinsic calibration method for mine-used LiDAR and camera. First, multi-frame point cloud fusion is applied to obtain a fused-frame point cloud, increasing point density and enriching the point cloud information. Effective targets in the scene, such as vehicles and traffic signs, are then extracted by panoptic segmentation. Coarse calibration is completed by establishing correspondences between the centroids of the 2D and 3D effective targets. In the fine calibration stage, the effective-target point clouds are projected with the coarse extrinsics onto the segmentation mask after an inverse distance transform, and an objective function is constructed from the matching degree of the effective targets' panoptic information; the optimal extrinsic parameters are obtained by maximizing this objective with a particle swarm optimization (PSO) algorithm. The effectiveness of the method was validated through quantitative, qualitative, and ablation experiments. ① In the quantitative experiments, the translation error was 0.055 m and the rotation error was 0.394°; compared with a method based on semantic segmentation, the translation error was reduced by 43.88% and the rotation error by 48.63%. ② Qualitative results showed that the projections in both the garage and mining-area scenes were highly consistent with the ground-truth extrinsics, demonstrating the stability of the method. ③ Ablation experiments indicated that multi-frame point cloud fusion and the weighting coefficients of the objective function both significantly improve calibration accuracy: using fused-frame instead of single-frame point clouds as input reduced the translation error by 50.89% and the rotation error by 53.76%, and including the weighting coefficients reduced them by 36.05% and 37.87%, respectively.
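As a rough illustration of the multi-frame fusion step, the following Python sketch accumulates several sparse LiDAR scans into a single denser "fused frame". It assumes per-frame ego poses are available as 4x4 homogeneous matrices (e.g., from odometry), which is an assumption not stated in the abstract; all names are illustrative.

```python
# Minimal sketch of multi-frame point cloud fusion, assuming known ego poses.
# Each frame's points are mapped into frame 0's coordinates and concatenated,
# densifying the sparse 16-line scans.
import numpy as np

def fuse_frames(point_clouds, poses):
    """point_clouds: list of (N_k, 3) arrays; poses: list of 4x4 world-from-sensor matrices."""
    ref_from_world = np.linalg.inv(poses[0])          # express everything in frame 0
    fused = []
    for pts, world_from_sensor in zip(point_clouds, poses):
        ref_from_sensor = ref_from_world @ world_from_sensor
        pts_h = np.hstack([pts, np.ones((pts.shape[0], 1))])  # homogeneous coordinates
        fused.append((pts_h @ ref_from_sensor.T)[:, :3])
    return np.vstack(fused)                           # denser fused-frame cloud
```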
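The coarse step pairs the 3D centroids of segmented LiDAR targets with the 2D centroids of the corresponding image masks. One natural way to turn such centroid correspondences into an initial extrinsic estimate is a PnP solve; the abstract does not specify the solver, so the OpenCV-based sketch below is an assumption.

```python
# Hedged sketch of coarse calibration: 3D target centroids (LiDAR frame) are
# matched to 2D mask centroids (pixels), and EPnP recovers an initial
# camera-from-LiDAR extrinsic guess. Pairing and solver are assumptions.
import numpy as np
import cv2

def coarse_calibrate(centroids_3d, centroids_2d, K):
    """centroids_3d: (N, 3) points; centroids_2d: (N, 2) pixels; K: 3x3 camera intrinsics."""
    ok, rvec, tvec = cv2.solvePnP(
        centroids_3d.astype(np.float64),
        centroids_2d.astype(np.float64),
        K, distCoeffs=None, flags=cv2.SOLVEPNP_EPNP)  # EPnP needs >= 4 pairs
    R, _ = cv2.Rodrigues(rvec)                        # rotation vector -> 3x3 matrix
    return R, tvec.ravel()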
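For the fine step, the sketch below shows one plausible form of the objective: the segmentation mask is converted into an inverse-distance score map, the target points projected with candidate extrinsics sample that map, and PSO would maximize the weighted sum over all targets. The exponential decay, the per-target `weight`, and the helper names are illustrative assumptions, not the paper's exact formulation; the PSO loop itself is omitted.

```python
# Hedged sketch of the fine-calibration objective: score is high when projected
# target points land on (or near) their segmentation masks.
import numpy as np
from scipy.ndimage import distance_transform_edt

def idt_map(mask, gamma=0.9):
    """Score map: 1.0 on mask pixels, decaying with Euclidean distance off-mask."""
    return gamma ** distance_transform_edt(mask == 0)

def matching_score(points, R, t, K, idt, weight=1.0):
    """Project one target's LiDAR points with candidate extrinsics (R, t) and
    sum the IDT values at the resulting pixels (higher = better alignment)."""
    cam = points @ R.T + t                    # LiDAR frame -> camera frame
    cam = cam[cam[:, 2] > 0]                  # keep points in front of the camera
    uv = cam @ K.T
    uv = (uv[:, :2] / uv[:, 2:3]).astype(int) # perspective division -> pixels
    h, w = idt.shape
    valid = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    return weight * idt[uv[valid, 1], uv[valid, 0]].sum()
```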