Research on Vision Measurement Technology for a Weld Seam Grinding and Polishing Robot for Large Structural Components
Abstract
With the rapid development of industry and manufacturing, demand for the large structural components used in high-speed train bodies, ship hulls, and aircraft fuselages is growing steadily. Welding, a key process technology in manufacturing, is widely used to form and join large flat and curved structural components. The excess weld metal left after welding must be removed: grinding and polishing the weld seam is not only required to obtain a smooth, uniform surface on the joined parts, but also reduces welding stress. Removing the excess weld metal from large structural components is therefore of real practical significance.
At present, the excess weld metal on large structural components is still removed by hand: skilled workers grind and polish the seams with flap wheels and similar abrasive tools. The work is labor-intensive and inefficient, machining accuracy is hard to guarantee, and the base metal is frequently damaged during grinding. More importantly, the dust generated endangers the operators' health, and the work must sometimes be performed at height or in confined spaces under harsh conditions. Automating the grinding and polishing of weld seams on large structural components is therefore urgently needed. Using a machine tool for this task would require the machine to be larger than the workpiece itself; such equipment is difficult to manufacture and assemble, lacks machining flexibility, and is expensive. To address these problems, this dissertation proposes a new technical approach: grinding and polishing the weld seams of large structural components with a small autonomous mobile robot. Autonomous grinding requires the robot to accurately identify, spatially measure, and locate the weld seam; only with real-time three-dimensional geometric and position information of the seam can the grinding parameters be planned effectively, the seam ground automatically, and the machining allowance monitored. Focusing on the key problem of spatial weld-seam measurement, this dissertation presents a series of original studies on the design of the grinding robot's vision system, weld grinding modeling, mathematical modeling of the vision system, weld image processing, weld feature extraction, feature-point localization, and grinding allowance detection.
Based on the working conditions of weld grinding on large structural components and the measurement requirements of the grinding process, this dissertation combines binocular stereo vision, structured laser light as a measurement aid, and P4P robot localization in a novel vision system for the weld grinding robot. Changes in the shape of the structured laser line projected onto different cross-sections of the seam reflect changes in the seam's three-dimensional geometry. The sub-pixel coordinates of the structured-light centerline are extracted accurately by image processing, and the sub-pixel coordinates of the weld feature points are then computed quickly and accurately from a feature analysis of the centerline coordinates. Binocular stereo vision is used to compute the disparity of corresponding feature points within the same image pair, yielding the three-dimensional geometry of the seam while avoiding the heavy computation of full image matching and thus saving considerable computing resources and time. Four coplanar light-emitting diodes serve as localization feature points, from which the coordinates of the weld seam in the robot coordinate frame and the seam's orientation are accurately obtained.
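For a rectified image pair, the disparity computation described above reduces to simple triangulation. A minimal sketch follows; the focal length, baseline, and pixel coordinates are illustrative assumptions, not the dissertation's hardware parameters.

```python
import numpy as np

def triangulate(f_px, baseline_mm, cx, cy, u_left, v, u_right):
    """3D position of one corresponding feature point in a rectified
    stereo pair, expressed in the left-camera frame.
    f_px: focal length in pixels; baseline_mm: camera separation;
    (cx, cy): principal point; (u_left, v) / (u_right, v): matched pixels."""
    d = u_left - u_right            # disparity in pixels
    Z = f_px * baseline_mm / d      # depth from similar triangles
    X = (u_left - cx) * Z / f_px
    Y = (v - cy) * Z / f_px
    return np.array([X, Y, Z])

# Illustrative values (not taken from the dissertation):
p = triangulate(f_px=2400.0, baseline_mm=120.0, cx=640.0, cy=512.0,
                u_left=700.0, v=520.0, u_right=670.0)
```

The returned point is in the left-camera frame; mapping it into the robot frame is what the four-LED P4P step provides.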
Using the spatial geometric and position information computed by the vision system, the weld grinding method is analyzed and discussed in depth. A grinding model based on the seam's spatial geometry is established, the quantitative relation between grinding force and removed weld volume is computed, a grinding control strategy is proposed, and the overall workflow of the grinding robot's vision system is determined. On this basis the main vision-system hardware is selected and dimensioned, and the theoretical resolution of the vision system in the Z direction is calculated and verified. A mathematical model of the weld grinding robot's vision system is established and solved; after analyzing and comparing camera calibration methods, the camera's intrinsic parameters are calibrated, and a correction method for radial and decentering distortion is proposed.
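The Z-direction theoretical resolution mentioned above follows from differentiating the triangulation relation Z = f·B/d with respect to disparity, giving ΔZ ≈ Z²·Δd/(f·B). A sketch under illustrative parameters (the dissertation's actual focal length, baseline, and working distance are not reproduced here):

```python
def z_resolution(Z_mm, f_px, baseline_mm, disparity_step_px=1.0):
    """Theoretical depth resolution of a rectified stereo pair:
    from Z = f*B/d, a disparity change of d_step gives
    dZ ~= Z**2 * d_step / (f * B)."""
    return Z_mm ** 2 * disparity_step_px / (f_px * baseline_mm)

# Illustrative numbers, assuming sub-pixel (0.1 px) disparity matching:
dz = z_resolution(Z_mm=500.0, f_px=2400.0, baseline_mm=120.0,
                  disparity_step_px=0.1)
```

The quadratic growth of ΔZ with depth is why working distance and baseline must be chosen together when the hardware is selected.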
The sources and characteristics of noise in the structured-light weld images are analyzed in detail. On this basis, the weld images are denoised with an adaptive median filter, the gray levels of the rectified stereo images are determined by gray-level interpolation, and edge sharpening, contrast stretching, and binarization are applied; the edge pixels of the structured-light stripe are then searched to obtain the pixel coordinates of the stripe center. To speed up weld-image processing, a dynamic ROI (region of interest) localization algorithm is proposed for straight seams, reducing the image area to be processed to 2% of the original and cutting the computational load accordingly. Within the dynamic ROI, a column difference-of-Gaussians algorithm is proposed that extracts the sub-pixel coordinates of the stripe center without any image pre-processing. For seams of general shape, a dynamic ROI detection algorithm based on the position distribution of the light stripe and the structure of the weld image is proposed, quickly and accurately extracting a dynamic ROI containing the weld feature points. Exploiting the approximately Gaussian cross-sectional intensity profile of the structured light, a Hessian-matrix-based partial-differential detection algorithm combined with a length-threshold pruning algorithm extracts the sub-pixel coordinates of the stripe center and removes spurious branch lines. With the sub-pixel stripe coordinates extracted, a slope and distance threshold analysis algorithm is proposed that accurately extracts the turning points defining the weld width and the highest point of the seam, and the cross-sectional area of the seam at the laser projection line is obtained by numerical integration.
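The Hessian-based detector exploits the Gaussian cross-section of the stripe: along the stripe normal, the sub-pixel center is the zero of the first derivative of the smoothed intensity. A simplified one-dimensional, per-column sketch of that idea (not the full 2-D Steger algorithm with branch pruning used in the dissertation) is:

```python
import numpy as np

def gaussian_kernel(sigma):
    x = np.arange(-int(3 * sigma), int(3 * sigma) + 1)
    k = np.exp(-x ** 2 / (2.0 * sigma ** 2))
    return k / k.sum()

def stripe_centers(img, sigma=2.0):
    """Per-column sub-pixel center of a bright, roughly horizontal light
    stripe. 1-D analogue of the Hessian-based detector: after Gaussian
    smoothing, the center is the zero of I' near the intensity maximum,
    found by one second-order Taylor step t = -I'(r) / I''(r)."""
    k = gaussian_kernel(sigma)
    centers = np.empty(img.shape[1])
    for c in range(img.shape[1]):
        p = np.convolve(img[:, c].astype(float), k, mode="same")
        r = int(np.argmax(p))
        r = min(max(r, 1), len(p) - 2)          # keep the 3-point stencil valid
        d1 = (p[r + 1] - p[r - 1]) / 2.0        # central-difference I'(r)
        d2 = p[r + 1] - 2.0 * p[r] + p[r - 1]   # central-difference I''(r)
        t = -d1 / d2 if d2 < 0 else 0.0         # Newton step, valid at a maximum
        centers[c] = r + float(np.clip(t, -0.5, 0.5))
    return centers

# Synthetic stripe with a Gaussian cross-section, drifting slowly downward:
rows = np.arange(100)[:, None]
true_y = 40.0 + 0.1 * np.arange(50)
img = 255.0 * np.exp(-(rows - true_y[None, :]) ** 2 / (2.0 * 3.0 ** 2))
centers = stripe_centers(img)
```

On this synthetic stripe the per-column centers agree with the true line to within a few hundredths of a pixel.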
In the proposed vision-system model, the mathematical relation between the robot coordinate frame and the camera coordinate frame is computed from four coplanar light-emitting diodes mounted on the robot body. These feature points appear approximately elliptical in the image. Classical algorithms applicable to extracting the centers of such elliptical targets are selected, derived, and analyzed, including the Gaussian-weighted gray centroid method, the thresholded gray centroid method, Gaussian surface fitting, and paraboloid fitting. On this basis, the dissertation proposes for the first time the SZCM optical feature-point center localization algorithm, which overcomes the heavy computation and low efficiency of large-template Zernike moments and uses a discriminant-constrained ellipse fit to mitigate the effect of spurious edge points on the accuracy of conventional least-squares ellipse fitting. Experiments show that the SZCM algorithm extracts ideal spot centers with high accuracy: the error is below 0.002 pixel under Gaussian noise, and the variance of the center coordinates in experiments on real images still reaches 0.004 pixel, demonstrating good robustness to noise and to spurious edges.
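Of the classical center-localization methods listed, the thresholded gray-centroid method is the simplest to state. A minimal sketch with an illustrative synthetic spot (the SZCM algorithm itself, with its Zernike-moment edge step and constrained ellipse fit, is not reproduced here):

```python
import numpy as np

def gray_centroid(patch, threshold):
    """Thresholded gray-centroid of a bright spot: pixels brighter than
    `threshold` contribute with weight (I - threshold), which suppresses
    background noise around an approximately Gaussian LED image."""
    w = np.clip(patch.astype(float) - threshold, 0.0, None)
    ys, xs = np.mgrid[0:patch.shape[0], 0:patch.shape[1]]
    return (xs * w).sum() / w.sum(), (ys * w).sum() / w.sum()

# Synthetic LED spot centered at (x, y) = (12.3, 8.7), illustrative values:
ys, xs = np.mgrid[0:25, 0:25]
spot = 200.0 * np.exp(-((xs - 12.3) ** 2 + (ys - 8.7) ** 2) / (2.0 * 2.0 ** 2))
cx, cy = gray_centroid(spot, threshold=10.0)
```

For a circularly symmetric spot the thresholded centroid is essentially unbiased; its sensitivity to asymmetric noise and partial edges is what the SZCM algorithm is designed to improve on.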
Building on the theoretical analysis, the measurement and localization methods of the vision system are studied experimentally. Measurements of weld height, width, and cross-sectional area are used to compare the image-processing accuracy and stability of the search-and-fit algorithm, the column difference-of-Gaussians algorithm, and the Hessian-based sub-pixel algorithm. The spatial geometry of the same weld cross-section is measured with each algorithm; the results show that the partial-differential structured-light centerline extraction algorithm combined with slope-distance threshold analysis achieves the highest measurement accuracy, within 0.09 mm. Taking the partial-differential detection algorithm as an example, the repeatability of the vision system is studied experimentally and found to be within 0.04 mm. In a robot seam-tracking experiment, the maximum position error of the visual servo system in tracking the seam is 0.64 mm, fully meeting the design requirements for robotic grinding. Grinding experiments on large-component weld seams compare the vision-based grinding model proposed here with a constant-force grinding model. The results show that under constant-force grinding the residual seam height largely follows the original height profile, so the residual height after grinding varies too much to meet the machining requirements, whereas with the grinding model and control method proposed in this study the residual height varies within ±0.15 mm, fully satisfying the technical requirements for automated grinding of weld seams on large components.
The theoretical and experimental work of this dissertation shows that the proposed approach of grinding weld seams on large structural components with a small autonomous mobile robot is sound and markedly improves grinding accuracy and machining quality. The weld grinding robot vision system built here is stable and reliable; the proposed structured-light center detection algorithm, sub-pixel ellipse feature-point center localization algorithm, weld feature-point extraction algorithm, and related image-processing and feature-recognition methods are robust, accurate, and fast, and meet the requirements for real-time measurement and localization of weld seams on large structural components, providing a new technical solution for their automated grinding and polishing.
