
Dynamic feature extraction for flotation froth based on centroid-convex hull-adaptive clustering

WEI Kai, WANG Ranfeng, WANG Jun, HAN Jie, ZHANG Qian

Citation: WEI Kai, WANG Ranfeng, WANG Jun, et al. Dynamic feature extraction for flotation froth based on centroid-convex hull-adaptive clustering[J]. Journal of Mine Automation, 2024, 50(8): 151-160. doi: 10.13272/j.issn.1671-251x.18182

doi: 10.13272/j.issn.1671-251x.18182
Funds: National Natural Science Foundation of China (52274157); Key Special Project of Inner Mongolia Autonomous Region (2022EEDSKJXM010); Key R&D Program of Shanxi Province (202102100401015).
Article information
    About the author:

    WEI Kai (1997—), male, born in Fuzhou, Fujian, is a master's student. His main research interests are machine vision and artificial intelligence. E-mail: wk19971212@163.com

    Corresponding author:

    WANG Ranfeng (1970—), male, born in Changzhi, Shanxi, is an associate professor with a PhD. His main research interest is intelligent separation. E-mail: wrf197010@126.com

  • CLC number: TD94

Dynamic feature extraction for flotation froth based on centroid-convex hull-adaptive clustering

  • Abstract: In complex flotation-site environments, froth bubbles adhere to one another and their boundaries are unclear; as a result, existing methods for extracting dynamic froth features (flow velocity and collapse rate) often fail to accurately delineate the dynamic-feature sampling region belonging to each bubble, cannot comprehensively match feature-point pairs between adjacent frames, and struggle to identify collapse regions. To address these problems, a dynamic feature extraction method for flotation froth based on centroid-convex hull-adaptive clustering was proposed. The method uses an improved Mask2Former that integrates the multi-scale feature extraction capability of Swin Transformer to accurately locate bubble centroids and identify collapse regions. An optimal convex-hull evaluation function searches for the convex hull formed by the centroids of the ring of bubbles surrounding the target bubble, fitting a dynamic-feature sampling region close to the actual bubble contour. The Transformer-based local feature matching algorithm (LoFTR) matches feature-point pairs between adjacent frames. Finally, for all feature-point pairs inside each sampling region, a principal-feature adaptive clustering method based on the OPTICS algorithm extracts the main flow velocity of each bubble. Experimental results showed that the method achieved accuracies of 88.83% and 97.92%, and intersection-over-union values of 77.90% and 96.52%, in locating ordinary bubble centroids and identifying collapse regions, respectively; it reached a feature-point-pair matching accuracy of 99.93% with an average rejection rate of only 2.69%; and under various working conditions it effectively delineated feature sampling regions close to actual bubble boundaries, enabling quantitative extraction of each bubble's dynamic features.
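The sampling-region step described in the abstract — fitting a region from the convex hull of the centroids of the ring of bubbles around a target bubble — can be sketched in a few lines. This is a minimal illustration, not the paper's code: it uses Andrew's monotone-chain algorithm in place of the paper's optimal convex-hull evaluation function, and the centroid coordinates are hypothetical.

```python
# Sketch: build the convex hull of neighbouring-bubble centroids, then test
# which points fall inside it (i.e. belong to the bubble's sampling region).

def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in CCW order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def inside_hull(hull, p):
    """True if p lies inside (or on) the CCW hull polygon."""
    n = len(hull)
    for i in range(n):
        a, b = hull[i], hull[(i + 1) % n]
        if (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0]) < 0:
            return False
    return True

# Hypothetical centroids of the ring of bubbles around one target bubble:
ring = [(0, 0), (4, 0), (5, 3), (2, 6), (-1, 3), (2, 1)]
hull = convex_hull(ring)
print(len(hull))                  # 5 — the interior centroid (2, 1) is dropped
print(inside_hull(hull, (2, 2)))  # True — this point is in the sampling region
```

In the paper the hull vertices come from segmented centroids and the region is scored by an evaluation function; here the hull is taken as-is to show the geometric mechanics only.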


  • Figure 1. Principle of the dynamic feature extraction method for flotation froth based on centroid-convex hull-adaptive clustering

    Figure 2. Experimental system setup

    Figure 3. Centroid classification and localization and collapse-region identification under different imaging conditions

    Figure 4. Matching results of feature point pairs under different algorithms

    Figure 5. Delineation of dynamic-feature sampling regions for flotation froth

    Figure 6. Quantification of dynamic features of flotation froth

    Table 1. Performance comparison of different algorithms in centroid classification/localization and collapse-region identification (%)

    Algorithm             Ordinary centroid   Smaller centroid    Non-froth region    Collapse region
                          IoU     Accuracy    IoU     Accuracy    IoU     Accuracy    IoU     Accuracy
    OCRNet                61.02   67.36       45.92   54.07       87.22   89.81       84.58   87.59
    PSPNet                72.46   82.16       54.19   64.32       91.41   95.42       87.27   91.69
    DeepLabV3+            71.22   81.07       55.33   69.89       90.20   94.09       88.05   93.19
    CCNet                 60.71   73.56       34.44   44.39       73.80   74.89       72.14   76.68
    Segmenter             66.27   77.66       59.24   73.15       84.89   93.43       79.93   95.19
    SegFormer             62.03   72.58       42.66   51.70       88.94   95.16       87.87   92.63
    Improved Mask2Former  77.90   88.83       69.65   87.78       96.68   97.11       96.52   97.92

    Table 2. Comparison of feature-point-pair extraction results under different algorithms (mean ± standard deviation; detection counts are shared within each detector group)

    Algorithm      Detected points      Rejected mispoints   Rejection rate/%   Final matched pairs    Matching accuracy/%
    SIFT+FLANN     1669.37 (±303.41)    803.12 (±214.20)     48.11              866.25 (±355.23)       98.35 (±0.47)
    SIFT+BF        1669.37 (±303.41)    461.63 (±82.65)      27.65              1207.75 (±278.69)      78.80 (±10.11)
    SIFT+RANSAC    1669.37 (±303.41)    830.75 (±250.49)     49.76              838.63 (±378.71)       98.93 (±0.37)
    SURF+FLANN     2131.88 (±166.28)    990.38 (±278.63)     46.46              1141.50 (±373.15)      99.40 (±0.44)
    SURF+BF        2131.88 (±166.28)    601.50 (±125.63)     28.21              1530.38 (±227.45)      85.08 (±9.32)
    SURF+RANSAC    2131.88 (±166.28)    1049.75 (±273.11)    49.24              1082.12 (±376.68)      99.02 (±0.36)
    AKAZE+BF       1279.75 (±260.92)    236.63 (±101.56)     18.49              1043.13 (±247.66)      92.90 (±6.58)
    AKAZE+ORB      1279.75 (±260.92)    221.88 (±103.37)     17.34              1057.88 (±252.56)      94.81 (±4.88)
    AKAZE+GMS      1279.75 (±260.92)    354.75 (±173.71)     27.72              925.00 (±282.74)       99.85 (±0.21)
    LoFTR          3510.88 (±118.99)    94.37 (±62.04)       2.69               3416.50 (±277.95)      99.93 (±0.33)
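Downstream of the matching stage compared in Table 2, the final step clusters the displacement vectors of the matched point pairs inside each sampling region and takes the dominant cluster as the bubble's main flow velocity. The sketch below is a stand-in under stated assumptions: a simple DBSCAN-style density clustering replaces the paper's OPTICS-based principal-feature adaptive clustering, and the displacement vectors are made up.

```python
# Sketch: group per-bubble displacement vectors by density, then report the
# mean of the largest cluster as the main flow velocity. (DBSCAN-style
# flood-fill clustering, used here as a simplified stand-in for OPTICS.)

def density_clusters(vectors, eps=1.0, min_pts=3):
    """Label vectors by flood-filling over eps-neighbourhoods of core points.
    Returns one label per vector; -1 marks noise (unclustered outliers)."""
    n = len(vectors)

    def near(i, j):
        dx = vectors[i][0] - vectors[j][0]
        dy = vectors[i][1] - vectors[j][1]
        return dx * dx + dy * dy <= eps * eps

    neigh = [[j for j in range(n) if j != i and near(i, j)] for i in range(n)]
    labels = [-1] * n
    cid = 0
    for i in range(n):
        if labels[i] != -1 or len(neigh[i]) + 1 < min_pts:
            continue  # already assigned, or not a core point
        stack, labels[i] = [i], cid
        while stack:
            cur = stack.pop()
            for j in neigh[cur]:
                if labels[j] == -1:
                    labels[j] = cid
                    if len(neigh[j]) + 1 >= min_pts:
                        stack.append(j)  # expand only through core points
        cid += 1
    return labels

def main_velocity(vectors, eps=1.0, min_pts=3):
    """Mean displacement of the largest density cluster."""
    labels = density_clusters(vectors, eps, min_pts)
    best = max(set(l for l in labels if l >= 0), key=labels.count)
    members = [v for v, l in zip(vectors, labels) if l == best]
    return (sum(v[0] for v in members) / len(members),
            sum(v[1] for v in members) / len(members))

# Hypothetical displacement vectors (pixels/frame) for one bubble's region:
# a coherent upward drift plus two outlier matches.
vecs = [(0.1, 2.0), (0.2, 2.1), (0.0, 1.9), (0.15, 2.05),
        (5.0, -3.0), (-4.0, 0.5)]
print(main_velocity(vecs, eps=0.5, min_pts=3))  # mean of the drift cluster
```

OPTICS differs in that it orders points by reachability distance and adapts to varying density, which is why the paper can avoid fixing `eps` per bubble; the stand-in above fixes `eps` for brevity.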
  • [1] WANG Ranfeng,GAO Jianchuan,FU Xiang. Framework and key technologies of intelligent coal preparation plant[J]. Industry and Mine Automation,2019,45(7):28-32.
    [2] POPLI K,SEKHAVAT M,AFACAN A,et al. Dynamic modeling and real-time monitoring of froth flotation[J]. Minerals,2015,5(3):570-591. doi: 10.3390/min5030510
    [3] ZHAO Hongwei,XIE Yongfang,JIANG Zhaohui,et al. An intelligent optimal setting approach based on froth features for level of flotation cells[J]. Acta Automatica Sinica,2014,40(6):1086-1097.
    [4] MOOLMAN D W,EKSTEEN J J,ALDRICH C,et al. The significance of flotation froth appearance for machine vision control[J]. International Journal of Mineral Processing,1996,48(3/4):135-158.
    [5] NEETHLING S J,BRITO-PARADA P R. Predicting flotation behaviour-the interaction between froth stability and performance[J]. Minerals Engineering,2018,120:60-65. doi: 10.1016/j.mineng.2018.02.002
    [6] ALDRICH C,MARAIS C,SHEAN B J,et al. Online monitoring and control of froth flotation systems with machine vision:a review[J]. International Journal of Mineral Processing,2010,96(1/2/3/4):1-13.
    [7] YANG Chunhua,YANG Jinying,MU Xuemin,et al. A segmentation method based on clustering pre-segmentation and high-low scale distance reconstruction for colour froth image[J]. Journal of Electronics & Information Technology,2008,30(6):1286-1290.
    [8] LI Jianqi. Froth image enhancement and segmentation method and its application for mineral flotation[D]. Changsha:Central South University,2013.
    [9] TANG Zhaohui,GUO Juncen,ZHANG Hu,et al. Froth image segmentation algorithm based on improved I-Attention U-Net for zinc flotation[J]. Journal of Hunan University (Natural Sciences),2023,50(2):12-22.
    [10] MU Xuemin,LIU Jinping,GUI Weihua,et al. Flotation froth motion velocity extraction and analysis based on SIFT features registration[J]. Information and Control,2011,40(4):525-531.
    [11] GUO Zhongtian,WANG Ranfeng,FU Xiang,et al. Method for extracting froth velocity of coal slime flotation based on image feature matching[J]. Journal of Mine Automation,2022,48(10):34-39,54.
    [12] TANG Zhaohui,LIU Jinping,GUI Weihua,et al. Froth bubbles speed characteristic extraction and analysis based on digital image processing[J]. Journal of Central South University (Science and Technology),2009,40(6):1616-1622.
    [13] JAHEDSARAVANI A,MARHABAN M H,MASSINAEI M. Prediction of the metallurgical performances of a batch flotation system by image analysis and neural networks[J]. Minerals Engineering,2014,69:137-145. doi: 10.1016/j.mineng.2014.08.003
    [14] MORAR S H,BRADSHAW D J,HARRIS M C. The use of the froth surface lamellae burst rate as a flotation froth stability measurement[J]. Minerals Engineering,2012,36:152-159.
    [15] LIU Ze,LIN Yutong,CAO Yue,et al. Swin transformer:hierarchical vision transformer using shifted windows[C]. IEEE/CVF International Conference on Computer Vision,Montreal,2021:10012-10022.
    [16] CHENG Bowen,MISRA I,SCHWING A G,et al. Masked-attention mask transformer for universal image segmentation[C]. IEEE/CVF Conference on Computer Vision and Pattern Recognition,New Orleans,2022:1290-1299.
    [17] KIRILLOV A,WU Yuxin,HE Kaiming,et al. PointRend:image segmentation as rendering[C]. IEEE/CVF Conference on Computer Vision and Pattern Recognition,Seattle,2020:9796-9805.
    [18] SUN Jiaming,SHEN Zehong,WANG Yu'ang,et al. LoFTR:detector-free local feature matching with transformers[C]. IEEE/CVF Conference on Computer Vision and Pattern Recognition,Nashville,2021:8918-8927.
    [19] ANKERST M,BREUNIG M M,KRIEGEL H P,et al. OPTICS:ordering points to identify the clustering structure[J]. ACM Sigmod Record,1999,28(2):49-60. doi: 10.1145/304181.304187
    [20] YUAN Yuhui,CHEN Xilin,WANG Jingdong. Object-contextual representations for semantic segmentation[C]. European Conference on Computer Vision,Glasgow,2020:173-190.
    [21] ZHAO Hengshuang,SHI Jianping,QI Xiaojuan,et al. Pyramid scene parsing network[C]. IEEE Conference on Computer Vision and Pattern Recognition,Honolulu,2017:6230-6239.
    [22] CHEN L C,ZHU Yukun,PAPANDREOU G,et al. Encoder-decoder with atrous separable convolution for semantic image segmentation[C]. European Conference on Computer Vision,Munich,2018:801-818.
    [23] HUANG Zilong,WANG Xinggang,HUANG Lichao,et al. CCNet:criss-cross attention for semantic segmentation[C]. IEEE/CVF International Conference on Computer Vision,Seoul,2019:603-612.
    [24] STRUDEL R,GARCIA R,LAPTEV I,et al. Segmenter:transformer for semantic segmentation[C]. IEEE/CVF International Conference on Computer Vision,Montreal,2021:7242-7252.
    [25] XIE Enze,WANG Wenhai,YU Zhiding,et al. SegFormer:simple and efficient design for semantic segmentation with transformers[C]. Conference and Workshop on Neural Information Processing Systems,2021:12077-12090.
    [26] LOWE D G. Distinctive image features from scale-invariant keypoints[J]. International Journal of Computer Vision,2004,60(2):91-110. doi: 10.1023/B:VISI.0000029664.99615.94
    [27] BAY H,ESS A,TUYTELAARS T,et al. Speeded-up robust features (SURF)[J]. Computer Vision and Image Understanding,2008,110(3):346-359. doi: 10.1016/j.cviu.2007.09.014
    [28] MUJA M,LOWE D G. Fast approximate nearest neighbors with automatic algorithm configuration[C]. The 4th International Conference on Computer Vision Theory and Applications,Lisboa,2009:331-340.
    [29] OI L,LIU Wei,LIU Ding. ORB-based fast anti-viewing image feature matching algorithm[C]. Chinese Automation Congress,Xi'an,2018:2402-2406.
    [30] FISCHLER M A,BOLLES R C. Random sample consensus:a paradigm for model fitting with applications to image analysis and automated cartography[J]. Communications of the ACM,1981,24(6):381-395. doi: 10.1145/358669.358692
    [31] RUBLEE E,RABAUD V,KONOLIGE K,et al. ORB:an efficient alternative to SIFT or SURF[C]. International Conference on Computer Vision,Barcelona,2011:2564-2571.
    [32] BIAN Jiawang,LIN Wenyan,LIU Yun,et al. GMS:grid-based motion statistics for fast,ultra-robust feature correspondence[J]. International Journal of Computer Vision,2020,128(6):1580-1593. doi: 10.1007/s11263-019-01280-3
    [33] MICHAELIS C,MITZKUS B,GEIRHOS R,et al. Benchmarking robustness in object detection:autonomous driving when winter is coming[EB/OL]. [2024-02-25]. https://arxiv.org/abs/1907.07484v2.
Publication history
  • Received:  2024-02-27
  • Revised:  2024-08-19
  • Published online:  2024-09-06
