Method of support posture perception in mining face based on visual-inertial information fusion
Abstract: To address the large drift error and low calculation accuracy of inertial navigation methods, as well as the large pose recognition errors of visual methods whose cameras are easily affected by dust and equipment occlusion, a method of support posture perception in the mining face based on visual-inertial information fusion was proposed. First, an infrared target with four feature points was fixed to the boss on the support base, and binocular cameras were fixed to the support top beam and shield beam. A target recognition method based on Canny edge detection and least-squares fitting, together with a four-feature-point BA-PnP algorithm, was used to solve the pitch and roll angles of the top beam and shield beam relative to the base. Then, inertial measurement units (IMUs) were fixed to the hydraulic support top beam, shield beam, and base, and a complementary filter applied to the MEMS gyroscope and accelerometer in each IMU was used to solve the pitch and roll angles of the top beam, shield beam, and base in the world coordinate system. Finally, the posture angles calculated by the visual system and the inertial navigation system were fused by an extended Kalman filter for multi-source information fusion; the low-frequency stability of the visual information was used to suppress the accumulated error of the IMU, yielding an accurate posture of the mining-face support. Comparative experiments were conducted on the three support posture perception methods based on vision, inertial navigation, and visual-inertial information fusion. The results showed that: ① In the initial stationary state, all three methods had high accuracy, but as the number of support operating cycles increased, the vision-based and inertial-navigation-based results gradually deviated from the true values. ② The root mean square errors (RMSE) of pitch angle perception for the top beam relative to the base were 0.201°, 0.190°, and 0.081° for the vision-based, inertial-navigation-based, and visual-inertial fusion methods, respectively; for the shield beam relative to the base, the corresponding RMSEs were 0.340°, 0.297°, and 0.162°. ③ The RMSE of the hydraulic support column extension length calculated by the visual-inertial fusion method was 13.682 mm, meeting on-site requirements. The proposed method can provide more accurate posture parameters for the intelligent control of hydraulic supports.
0. Introduction
As the main roof-support equipment of the working face, the hydraulic support's automatic deviation correction, straightening, fault prediction and adaptive posture adjustment are key links in building an intelligent working face. At present, hydraulic support pose monitoring mainly relies on inertial navigation systems, optical fiber, vision and contact methods. Reid et al.[1] monitored posture by installing inertial navigation units at key parts of the hydraulic support such as the top beam, base and shield beam. Zhang Kun et al.[2] measured the spatial posture of a hydraulic support group by mounting nine-axis attitude sensors at key parts of advanced supports. Liu Xiangtong et al.[3] and Li Lei et al.[4] measured the posture of the support top beam, shield beam and base by installing inertial navigation units at key parts of the hydraulic support. Liang Minfu et al.[5] built a hydraulic support posture monitoring model based on fiber Bragg grating tilt sensors and proposed an error compensation algorithm for tilt monitoring. Ren Huaiwei et al.[6] installed a depth camera on the hydraulic support top beam and used depth vision to determine the camera pose change, from which the support height and the top-beam posture angle were solved. Chen Hongyue et al.[7] proposed a method for calculating the yaw and roll angles of a hydraulic support from ultrasonic ranging data. Zhang Hongwei et al.[8] developed a column-retraction gauge for monitoring the displacement of support columns. Gao Kuidong et al.[9] developed a pull-wire displacement sensor for monitoring hydraulic support pose. Field experience shows that the installation quality of contact sensors depends heavily on the operator. Visual measurement is free of drift error but is easily affected by dust and vibration at the working face, so its accuracy is low. Optical fiber offers high measurement accuracy but is costly and fragile, and its reliability still needs improvement. Inertial navigation is widely applied in the field, but it suffers from large drift error and low calculation accuracy, and the monitoring process requires manual calibration.
Multi-sensor fusion of vision and inertial navigation is a feasible way to improve the accuracy of support posture perception[10-12]. Li Guangqiang et al.[13] and Mourikis et al.[14] optimized binocular visual-inertial odometry with the multi-state constraint Kalman filter (MSCKF). Carlsson et al.[15] and Yu Yongjun et al.[16] solved the visual pose separately from the inertial solution and then added the visual pose estimate as a state vector to the filtering framework to fuse it with the inertial navigation. Wan Jicheng et al.[17] and Mao Qinghua et al.[18] proposed combined roadheader positioning methods based on the fusion of inertial and visual information and tested them on an EBZ260 boom-type roadheader. Building on these studies, this paper proposes a mining-face support posture perception method based on visual-inertial information fusion. Using an infrared target and binocular cameras, a Bundle Adjustment-Perspective-n-Point (BA-PnP) pose estimation algorithm solves the pitch and roll angles of the hydraulic support top beam and shield beam relative to the base. Inertial measurement units (IMUs) are fixed to the top beam, shield beam and base of the hydraulic support, and a complementary filter applied to the MEMS gyroscope and accelerometer in each IMU solves the pitch and roll angles of the top beam, shield beam and base in the world coordinate system. The visual and inertial information is then fused with an extended Kalman filter (EKF) to solve an accurate pose of the mining-face support, and comparative posture perception experiments are carried out with the vision-based, inertial-navigation-based and visual-inertial fusion methods.
1. Visual-inertial posture perception method for hydraulic supports
1.1 Coordinate systems and support posture angles
A geographic coordinate system (n-frame), support coordinate system (a-frame), target coordinate system (v-frame), top-beam coordinate system (d-frame) and shield-beam coordinate system (y-frame) are established, as shown in Fig. 1(a). The n-frame takes the Earth's center as its origin, with the N axis pointing to true north, the E axis pointing horizontally east, and the D axis completing a right-handed system with the other two axes. The a-frame takes the center of the front end of the support base as its origin, with the X(a) axis pointing forward along the support, the Y(a) axis pointing to the left of the support, and the Z(a) axis pointing straight up; the a-frame is a body-fixed frame, and the attitude angle differences ($ \gamma_{1} $, $ \beta_{1} $, $ \alpha_{1} $) between it and the n-frame, measured in real time by the IMU, represent the yaw, pitch and roll angles of the support base. The v-frame takes the center of the infrared target as its origin, with the X(v) axis pointing forward from the target board, the Y(v) axis pointing to its left, and the Z(v) axis pointing straight up; the v-frame is a body-fixed frame, and the attitude angle differences ($ \gamma_{0} $, $ \beta_{0} $, $ \alpha_{0} $) between it and the a-frame are determined by the mounting position of the target. The d-frame is a body-fixed frame with the mounting center of the top-beam sensor as its origin, the X(d) axis pointing forward along the top beam, the Y(d) axis pointing to its left, and the Z(d) axis pointing straight up; the y-frame is defined analogously to the d-frame. The absolute attitude angles (${\gamma _n}$, ${\beta _n}$, ${\alpha _{{n}}}$) measured by the top-beam IMU, the attitude angles (${\gamma _{{a}}}$, ${\beta _{{a}}}$, ${\alpha _{{a}}}$) of the top beam relative to the base, and the angular relations among the geographic, support and target coordinate systems are shown in Fig. 1(b).
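For reference, the relative attitude used below follows from composing frame rotations. A minimal formulation, assuming the ZYX (yaw-pitch-roll) rotation order implied by the angle triplets above, is
$$ \boldsymbol{R}_a^n=\boldsymbol{R}_{{z}}(\gamma_1)\,\boldsymbol{R}_{{y}}(\beta_1)\,\boldsymbol{R}_{{x}}(\alpha_1),\qquad \boldsymbol{R}_d^a=\left(\boldsymbol{R}_a^n\right)^{\mathrm{T}}\boldsymbol{R}_d^n $$
where $\boldsymbol{R}_d^n$ is built in the same way from the top-beam IMU angles (${\gamma _n}$, ${\beta _n}$, ${\alpha _{{n}}}$), and the relative angles (${\gamma _{{a}}}$, ${\beta _{{a}}}$, ${\alpha _{{a}}}$) are extracted from $\boldsymbol{R}_d^a$.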
1.2 Support posture perception system based on visual-inertial information fusion
The support posture perception system based on visual-inertial information fusion consists of a visual perception system, an IMU perception system and an EKF-based information fusion algorithm, as shown in Fig. 2. The infrared target is fixed to the boss on the base; the top-beam camera and shield-beam camera acquire target images and target coordinates in real time, and the BA-PnP algorithm estimates the camera pose to obtain the pitch and roll angles of the top beam and shield beam relative to the base. IMUs are installed on the top beam, shield beam and base of the hydraulic support, and a complementary filter applied to the MEMS gyroscope and accelerometer in each IMU yields the pitch and roll angles of the top beam, shield beam and base in the world coordinate system. The EKF fuses the visual and inertial information to obtain the support posture angles, from which the column/balance-jack lengths are calculated.
1.3 Visual perception system
The visual perception system mainly consists of the hydraulic support, cameras, the infrared target and a computer, as shown in Fig. 3. The infrared target is fixed to the boss on the support base at a fixed angle to the base, chosen so that the target remains within the camera's field of view throughout the support's normal working stroke; the cameras are installed on the top beam and shield beam, facing the infrared target; and the computer performs image processing and posture calculation. After an initial calibration of the target position, the shield-beam and top-beam cameras acquire infrared target images in real time; a circular-target recognition method based on Canny edge detection and least-squares fitting locates the target centers, and the BA-PnP algorithm[19] performs feature extraction and posture calculation to obtain the pitch and roll angles of the top beam and shield beam relative to the base.
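As an illustration of this pipeline, the sketch below combines Canny edge detection, a least-squares (Kåsa) circle fit for the target centers, and a four-point planar PnP solve. It is a minimal Python/OpenCV approximation: the target layout, camera intrinsics and thresholds are assumed placeholders, and plain solvePnP stands in for the BA-refined PnP of [19].

import cv2
import numpy as np

# Hypothetical 4-point target layout (m, target frame) and camera intrinsics.
OBJECT_PTS = np.array([[-0.1, -0.1, 0.0], [0.1, -0.1, 0.0],
                       [0.1, 0.1, 0.0], [-0.1, 0.1, 0.0]])
K = np.array([[900.0, 0.0, 960.0], [0.0, 900.0, 540.0], [0.0, 0.0, 1.0]])

def circle_centers(gray):
    # Canny edges, then a least-squares (Kasa) circle fit per contour:
    # solve [2x 2y 1][a b c]^T = x^2 + y^2 for the center (a, b).
    edges = cv2.Canny(gray, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    centers = []
    for c in contours:
        pts = c.reshape(-1, 2).astype(np.float64)
        if len(pts) < 20:  # reject short edge fragments
            continue
        A = np.column_stack([2 * pts[:, 0], 2 * pts[:, 1], np.ones(len(pts))])
        b = (pts ** 2).sum(axis=1)
        sol, *_ = np.linalg.lstsq(A, b, rcond=None)
        centers.append(sol[:2])
    return np.array(centers)

def pitch_roll_from_target(image_pts):
    # image_pts must be ordered to match OBJECT_PTS (correspondence step
    # omitted here). 4-point planar PnP (IPPE variant), then pitch/roll
    # are read off the rotation matrix (ZYX convention).
    ok, rvec, _ = cv2.solvePnP(OBJECT_PTS, image_pts, K, None,
                               flags=cv2.SOLVEPNP_IPPE)
    R, _ = cv2.Rodrigues(rvec)
    pitch = np.degrees(np.arcsin(-R[2, 0]))
    roll = np.degrees(np.arctan2(R[2, 1], R[2, 2]))
    return pitch, roll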
1.4 IMU perception system
The IMU perception system consists of IMUs installed on the top beam, shield beam and base. The top-beam and shield-beam IMUs are mounted at the same locations as the cameras and kept parallel to the top beam and shield beam respectively, while the base IMU is mounted at the center of the base axis, as shown in Fig. 4. The IMUs and cameras are connected to the computer via serial ports, and the data streams are aligned by time-stamp interpolation. A complementary filter[20] solves the pitch and roll angles of the top beam, shield beam and base in the world coordinate system.
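The cited filter [20] is quaternion-based; the single-axis-pair sketch below (Python, with an assumed blending coefficient) shows the same complementary principle: high-pass the integrated gyro rates and low-pass the drift-free gravity direction from the accelerometer.

import numpy as np

ALPHA = 0.98  # assumed blending coefficient: trust in the gyro branch

def complementary_update(pitch, roll, gyro, accel, dt):
    """gyro: (p, q, r) body rates in rad/s; accel: (ax, ay, az) in m/s^2."""
    # Gyro branch: propagate the previous angles with the angular rates
    # (small-angle approximation: p ~ roll rate, q ~ pitch rate).
    pitch_g = pitch + gyro[1] * dt
    roll_g = roll + gyro[0] * dt
    # Accelerometer branch: the gravity direction gives drift-free angles.
    ax, ay, az = accel
    pitch_a = np.arctan2(-ax, np.hypot(ay, az))
    roll_a = np.arctan2(ay, az)
    # Complementary blend of the two branches.
    pitch = ALPHA * pitch_g + (1 - ALPHA) * pitch_a
    roll = ALPHA * roll_g + (1 - ALPHA) * roll_a
    return pitch, roll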
1.5 EKF-based visual-inertial data fusion algorithm
The EKF is used to fuse the inertial and visual data; the core of the system model is the nonlinear state transition equation and the nonlinear observation equation.
The system state vector can be expressed as
$$ \boldsymbol{x}=\left[\beta_a\ \ \alpha_a\ \ b_{\beta}\ \ b_{\alpha}\right]^{\mathrm{T}} $$ (1) where $ {\beta _a} $ and $ {\alpha _a} $ are the pitch and roll angles of the support top beam relative to the base solved by the visual system in the a-frame, and $ b_{\beta} $ and $ b_{\alpha} $ are the zero-bias errors of the IMU-measured pitch and roll angles of the top beam relative to the base.
The relative attitude angles output by the IMU, together with the changes in the same relative attitude angles measured by the vision system, are taken as observations, and the measurement vector is defined as
$$ {{\boldsymbol{z}}}=\left[\beta\ \ \alpha\ \ \Delta\beta\ \ \Delta\alpha\right]^{\mathrm{T}} $$ (2) where $ \beta $ and $ \alpha $ are the relative pitch and roll angles output by the IMU, and $ \Delta\beta $ and $ \Delta\alpha $ are the changes in the relative pitch and roll angles between adjacent instants measured by the vision system.
The state and observation equations of the EKF are
$$ \left\{ \begin{gathered} {{\boldsymbol{x}}_k} = {{\boldsymbol{F}}_k}{{\boldsymbol{x}}_{k - 1}} + {{\boldsymbol{w}}_k} \\ {{\boldsymbol{z}}_k} = {{\boldsymbol{H}}_k}{{\boldsymbol{x}}_k} + {{\boldsymbol{\eta}}_k} \\ \end{gathered} \right. $$ (3) where $ {{\boldsymbol{x}}_k} $ is the state vector at time k; $ {{\boldsymbol{F}}_k} $ is the state-transition Jacobian matrix at time k; $ {{\boldsymbol{w}}_k} $ is the process noise vector at time k, distributed as $N(0,{\boldsymbol{Q}})$, with Q the process noise covariance matrix; $ {{\boldsymbol{z}}_k} $ is the measurement vector at time k; $ {{\boldsymbol{H}}_k} $ is the observation Jacobian matrix at time k; and $ {{\boldsymbol{\eta}}_k} $ is the measurement noise vector at time k, distributed as $N(0,{\boldsymbol{R}})$, with R the measurement noise covariance matrix.
The expression for $ {{\boldsymbol{F}}_k} $ is
$$ {{\boldsymbol{F}}_k} = \left[ {\begin{array}{*{20}{c}} 1&0&{ - {{\Delta }}t}&0 \\ 0&1&0&{ - {{\Delta }}t} \\ 0&0&1&0 \\ 0&0&0&1 \end{array}} \right] $$ (4) where $\Delta t$ is the sampling period.
The expression for $ {{\boldsymbol{H}}_k} $ is
$$ {{\boldsymbol{H}}_k} = \left[ {\begin{array}{*{20}{c}} 1&0&0&0 \\ 0&1&0&0 \\ 1&0&{ - {{\Delta }}t}&0 \\ 0&1&0&{ - {{\Delta }}t} \end{array}} \right] $$ (5) The expression for Q is
$$ {\boldsymbol{Q}} = {\mathrm{diag}}(\sigma _{{\beta }}^2\Delta {t^2},\sigma _{{\alpha}}^2\Delta {t^2},\sigma _{{b_{\beta}}}^2\Delta {t^2},\sigma _{{b_{\alpha}}}^2\Delta {t^2}) $$ (6) where $\sigma _{{\beta}}^2$ and $\sigma _{{\alpha}}^2$ are the noise variances of the gyroscope pitch and roll angular rates, and $ \sigma_{b_{\beta}}^2 $ and $ \sigma_{b_{\alpha}}^2 $ are the random-walk noise variances of the zero-bias errors of the relative pitch and roll angles.
The expression for R is
$$ {\boldsymbol{R}} = {\mathrm{diag}}(\sigma _{{{\mathrm{IMU}},\;\beta}}^2,\;\sigma _{{{\mathrm{IMU}},\;\alpha}}^2,\;\sigma _{{{\mathrm{VIS}},\;\Delta\beta}}^2,\;\sigma _{{{\mathrm{VIS}},\;\Delta\alpha}}^2) $$ (7) where $\sigma _{{{\mathrm{IMU}},\;\beta}}^2$ and $\sigma _{{{\mathrm{IMU}},\;\alpha}}^2$ are the measurement noise variances of the IMU relative pitch and roll angles, and $ \sigma_{\mathrm{VIS},\;\Delta\beta}^2 $ and $ \sigma_{\mathrm{VIS},\;\Delta\alpha}^2 $ are the noise variances of the vision-measured changes in relative pitch and roll.
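A minimal sketch of one filter cycle implied by Eqs. (1)-(7) is given below (Python/NumPy). The noise variances and sampling period are assumed placeholders rather than the tuned values used in the experiments; note that with the constant F and H above, each step reduces to the standard Kalman predict/update form.

import numpy as np

dt = 0.04  # sampling period (assumed; 25 Hz camera)

F = np.array([[1, 0, -dt, 0],
              [0, 1, 0, -dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)        # Eq. (4)
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0],
              [1, 0, -dt, 0],
              [0, 1, 0, -dt]], dtype=float)      # Eq. (5)
Q = np.diag([0.01, 0.01, 1e-4, 1e-4]) * dt**2    # Eq. (6), placeholder variances
R = np.diag([0.05, 0.05, 0.02, 0.02])            # Eq. (7), placeholder variances

def ekf_step(x, P, z):
    """One predict/update cycle for the state x = [beta_a, alpha_a, b_beta,
    b_alpha] and measurement z = [beta, alpha, d_beta, d_alpha]."""
    # Predict (Eq. (3), state equation)
    x = F @ x
    P = F @ P @ F.T + Q
    # Update (Eq. (3), observation equation)
    y = z - H @ x                                # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)               # Kalman gain
    x = x + K @ y
    P = (np.eye(4) - K @ H) @ P
    return x, P

Starting from x = np.zeros(4) and P = np.eye(4), one call per time-aligned camera/IMU sample tracks the relative angles while estimating the IMU zero-bias.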
Because the low-frequency stability of the visual information suppresses the accumulated error of the inertial navigation, the EKF-based visual-inertial data fusion yields an accurate pose of the mining-face support.
2. Experiments on mining-face support posture perception based on visual-inertial information fusion
2.1 Construction of the support posture perception test bench
A mining-face support posture perception test bench was built, as shown in Fig. 5. The support is modeled on a ZY12000/25/50D hydraulic support at a 5:1 similarity scale, with a maximum support height of 1 m. The cameras and sensors were installed as in Fig. 3 and Fig. 4, using an Intel RealSense D435i camera (resolution 1 920 × 1 080 pixels, frame rate 25 frames/s) and an N100 IMU (parameters in Table 1). The computing platform was a portable computer with an Intel i7-13700 processor and an NVIDIA RTX 4060 GPU running Ubuntu 20.04; the low-level code was implemented in C++.
Table 1 Inertial measurement unit sensor parameters

Parameter | Gyroscope | Accelerometer
Range | ±2 000 (°)/s | ±16g
Zero-bias stability | <10 (°)/h | <0.04 mg
Linearity | <0.1% FS | <0.1% FS
Noise density | $ 0.002\;8\;(\text{°})/(\mathrm{s}\cdot\sqrt{\mathrm{Hz}}) $ | $ 75\ \text{μ}g/\sqrt{\mathrm{Hz}} $
Bandwidth | 256 Hz | 260 Hz
Orthogonality error | ±0.05° | ±0.05°
Resolution | <0.02 (°)/s | <0.5 mg
Note: g denotes the gravitational acceleration.

2.2 Support posture perception experiments
One operating cycle of the support consists of five stages: initial rest in a near-horizontal state − lowering − rest − raising − rest; four cycles were repeated in the experiment. In the rest stages, a high-precision inclinometer measured the posture angles of the top beam and shield beam as ground truth. Top-beam posture perception experiments were conducted with the three methods based on vision, inertial navigation and visual-inertial information fusion; the results are shown in Fig. 6. In the initial rest state, all three methods are accurate, but as the number of operating cycles increases, the vision-based and inertial-navigation-based results gradually deviate from the true values. The root mean square errors (RMSE) of the vision-based, inertial-navigation-based and visual-inertial fusion methods are 0.201°, 0.190° and 0.081° respectively, so the fusion method perceives the top-beam posture most accurately.
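The RMSE values reported here and below follow the usual definition, evaluated against the inclinometer ground truth at the rest stages:
$$ \mathrm{RMSE}=\sqrt{\frac{1}{N}\sum\limits_{k=1}^{N}{\left( \hat{\beta}_k-\beta_k^{\mathrm{ref}} \right)}^{2}} $$
where $\hat{\beta}_k$ is the k-th perceived pitch angle, $\beta_k^{\mathrm{ref}}$ is the corresponding reference value, and N is the number of evaluation points.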
Shield-beam posture perception experiments were conducted with the same three methods; the results are shown in Fig. 7. Again, all three methods are accurate in the initial rest state, but as the number of operating cycles increases, the vision-based and inertial-navigation-based results gradually deviate from the true values. The RMSEs of the vision-based, inertial-navigation-based and visual-inertial fusion methods are 0.340°, 0.297° and 0.162° respectively, so the fusion method perceives the shield-beam posture most accurately.
An HY150-1500 pull-wire sensor measured the real-time length of the front column, while the EKF-fused pitch angles of the top beam and shield beam were used to derive the front-column length from the support kinematic model[21]. The measured and derived front-column lengths are shown in Fig. 8. The RMSE of the EKF-based derived front-column length is 13.682 mm, which meets on-site requirements.
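The kinematic model of [21] is not reproduced here, but the underlying principle can be illustrated with a simplified pin-jointed geometry (all symbols hypothetical): if the column connects a hinge on the base to a hinge on the top beam, and $l_1$ and $l_2$ are the fixed distances from those hinges to the top-beam pivot, the column length follows from the fused pitch angle $\beta_a$ by the law of cosines:
$$ L=\sqrt{l_1^2+l_2^2-2\,l_1 l_2\cos\left(\theta_0+\beta_a\right)} $$
where $\theta_0$ is the hinge angle at the pivot in the reference (horizontal) posture.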
3. Conclusions
1) A support posture perception system based on visual-inertial information fusion was designed. A target recognition method based on Canny edge detection and least-squares fitting, together with a four-feature-point BA-PnP algorithm, solves, from visual information, the pitch and roll angles of the top beam and shield beam relative to the base. IMUs fixed to the top beam, shield beam and base of the hydraulic support, with a complementary filter applied to the MEMS gyroscope and accelerometer in each IMU, solve the pitch and roll angles of the top beam, shield beam and base in the world coordinate system. The relative pitch and roll angles solved by the visual system and the pitch and roll angles solved by the inertial navigation are fused by the EKF, and the low-frequency stability of the visual information suppresses the accumulated error of the IMU, yielding an accurate support posture.
2) Mining-face support posture perception experiments show that the RMSEs of the pitch angle of the top beam relative to the base are 0.201°, 0.190° and 0.081° for the vision-based, inertial-navigation-based and visual-inertial fusion methods respectively, and the corresponding RMSEs for the shield beam relative to the base are 0.340°, 0.297° and 0.162°; the RMSE of the hydraulic support column extension length calculated by visual-inertial fusion is 13.682 mm. The proposed method can provide more accurate posture parameters for the intelligent control of hydraulic supports.
References
[1] REID P B,DUNN M T,REID D C,et al. Real-world automation:new capabilities for underground longwall mining[C]. Australasian Conference on Robotics and Automation,Brisbane,2010:1-8.
[2] ZHANG Kun,SUN Zhengxian,LIU Ya,et al. Research and experimental verification of attitude perception method of advanced hydraulic support based on information fusion technology[J]. Journal of China Coal Society,2023,48(S1):345-356.
[3] LIU Xiangtong,LI Man,SHEN Siyi,et al. Measurement system for key attitude parameters of hydraulic support[J]. Journal of Mine Automation,2024,50(4):41-49.
[4] LI Lei,XU Chunyu,SONG Jiancheng,et al. Attitude monitoring method for hydraulic support in fully mechanized working face based on PSO-ELM[J]. Journal of Mine Automation,2024,50(8):14-19.
[5] LIANG Minfu,FANG Xinqiu,LI Shuang,et al. A fiber Bragg grating tilt sensor for posture monitoring of hydraulic supports in coal mine working face[J]. Measurement,2019,138:305-313.
[6] REN Huaiwei,LI Shuaishuai,ZHAO Guorui,et al. Measurement method of support height and roof beam posture angles for working face hydraulic support based on depth vision[J]. Journal of Mining & Safety Engineering,2022,39(1):72-81,93.
[7] CHEN Hongyue,CHEN Hongyan,XU Yajun,et al. Research on attitude monitoring method of advanced hydraulic support based on multi-sensor fusion[J]. Measurement,2022,187. DOI: 10.1016/j.measurement.2021.110341.
[8] ZHANG Hongwei,WAN Zhijun,CHENG Jingyi,et al. Development of a new displacement monitor of hydraulic support pillar[J]. China Coal,2015,41(9):69-73. DOI: 10.3969/j.issn.1006-530X.2015.09.017
[9] GAO Kuidong,XU Wenbo,ZHANG Hongyang,et al. Relative position and posture detection of hydraulic support based on particle swarm optimization[J]. IEEE Access,2020,8:200789-200811.
[10] ZHAO Tianyi,AHAMED M J. Pseudo-zero velocity re-detection double threshold zero-velocity update (ZUPT) for inertial sensor-based pedestrian navigation[J]. IEEE Sensors Journal,2021,21(12):13772-13785.
[11] HUANG Guoquan. Visual-inertial navigation:a concise review[C]. International Conference on Robotics and Automation,Montreal,2019. DOI: 10.1109/ICRA.2019.8793604.
[12] QIN Tong,LI Peiliang,SHEN Shaojie. VINS-mono:a robust and versatile monocular visual-inertial state estimator[J]. IEEE Transactions on Robotics,2018,34(4):1004-1020.
[13] LI Guangqiang,YU Lei,FEI Shumin. A binocular MSCKF-based visual inertial odometry system using LK optical flow[J]. Journal of Intelligent & Robotic Systems,2020,100(3):1179-1194.
[14] MOURIKIS A I,ROUMELIOTIS S I. A multi-state constraint Kalman filter for vision-aided inertial navigation[C]. IEEE International Conference on Robotics and Automation,Rome,2007:3565-3572.
[15] CARLSSON H,SKOG I,JALDÉN J. Self-calibration of inertial sensor arrays[J]. IEEE Sensors Journal,2021,21(6):8451-8463. DOI: 10.1109/JSEN.2021.3050010
[16] YU Yongjun,XU Jinfa,ZHANG Liang,et al. Research on SINS/binocular vision integrated position and attitude estimation algorithm[J]. Chinese Journal of Scientific Instrument,2014,35(10):2170-2176.
[17] WAN Jicheng,ZHANG Xuhui,YANG Wenjuan,et al. Combined positioning method of roadheader based on vision and inertial navigation[J/OL]. Coal Science and Technology:1-12[2025-01-27]. http://kns.cnki.net/kcms/detail/11.2402.td.20240524.1014.003.html.
[18] MAO Qinghua,ZHOU Qing,AN Yanji,et al. Precise positioning method of tunneling machine for inertial navigation and visual information fusion[J]. Coal Science and Technology,2024,52(5):236-248. DOI: 10.12438/cst.2023-1003
[19] WANG Ping,ZHOU Xuefeng,AN Aimin,et al. Robust and linear solving method for Perspective-n-Point problem[J]. Chinese Journal of Scientific Instrument,2020,41(9):271-280.
[20] VALENTI R G,DRYANOVSKI I,XIAO Jizhong. Keeping a good attitude:a quaternion-based orientation filter for IMUs and MARGs[J]. Sensors,2015,15(8):19302-19330.
[21] PANG Yihui,LIU Xinhua,WANG Hongbo,et al. Support attitude and height analysis method of hydraulic support based on jack stroke drive[J]. Journal of Mining and Safety Engineering,2023,40(6):1231-1242.