单幅图像三维测量系统的标定与解码技术研究 (Research on Calibration and Decoding Techniques for One-Shot 3D Measurement Systems)
Abstract
In recent years, dynamic 3D measurement, especially structured-light-based one-shot 3D measurement, has become a research hotspot. Although many researchers have proposed various methods and built different measurement systems, projector calibration still falls short of the required precision, the lack of a quantitative evaluation mechanism hinders rapid technical progress, and general component techniques lack in-depth study. This thesis therefore takes one-shot 3D measurement as its topic, focusing on high-precision system calibration and a correspondence evaluation mechanism. On the basis of the quantitative evaluation mechanism, key decoding techniques such as code identification and code matching are studied, further improving algorithm performance.
     System calibration is the prerequisite of 3D measurement. This thesis models system calibration as a parameter-estimation problem and surveys and classifies current structured light calibration methods from three aspects: model, data acquisition, and estimation method. On this basis, the drawback of mainstream projector calibration methods is analyzed: they assume the reprojection error is normally distributed. Theoretical analysis and simulation show that the reprojection error on the projector image plane is neither independent and identically distributed nor normally distributed. Since the camera is the observation device of a structured light system and the source of observation noise, a projector calibration method in the camera image space is proposed. First, the data required for projector calibration are computed by calibrating the camera; then, according to the statistical distribution of the observation noise, the projector is calibrated with bundle adjustment on the camera image plane. The essential difference from the traditional method is the space in which the cost function is defined, which reflects a different interpretation of the noise statistics. The new method avoids the coupling between noise and projector parameters caused by the traditional method, exploits the noise statistics better, and achieves higher calibration accuracy. Because the new projector cost function lies in the same space as the camera cost function, system calibration can be unified into a single objective; the increased observation data yield a smaller standard deviation for the maximum-likelihood estimate. Experiments show that the standard deviation of the new method is 66.5% of that of the traditional method.
     Correspondence is the core of the one-shot 3D measurement problem. This thesis proposes an evaluation framework at the correspondence level, unlike previous evaluations at the point-cloud or surface level. Ground-truth correspondences are obtained by space-time analysis, and two indices, accuracy and recall, are defined to reflect the quality of the correspondence results. The framework has three features. 1) It analyzes correspondences quantitatively; traditional methods can only analyze precision information, an indicator that lacks repeatability and stability under noise and requires heavy manual work. 2) It is general and applicable to any structured light device; because correspondence is universal, the framework can compare any coded structured light method, and it avoids the interpolation required by traditional methods, needing only a change of coding method to obtain ground-truth data. 3) It reflects the results accurately and reasonably: accuracy gives the proportion of correct results among the obtained correspondences, so a higher accuracy means fewer noise points in the point cloud and less subsequent noise processing; recall gives the ratio of correct correspondences to the ground-truth correspondences, so a higher recall means the method captures more data, which also indicates higher acquisition efficiency. In the experiments, four well-known coding methods were implemented and compared under the proposed framework. The results show that multi-shot techniques are far superior to one-shot techniques, with multi-frequency phase shifting the most robust and efficient; current one-shot techniques perform poorly and need substantial improvement.
     Decoding correspondences from an image requires three steps: code detection, code identification, and code matching. This thesis models code identification as an unsupervised classification problem and, within the evaluation framework, focuses on color code identification. Various color features are analyzed, and a new color invariant, the regularized color, is proposed; it is insensitive to illumination direction, surface normal direction, and intensity. Comparative experiments show that the new feature has high class separability, ranking near the top in identification accuracy across several clustering methods. For K-means it is insensitive to the initial values and avoids convergence to local optima. Extensive experiments reveal a convergence-center proximity property of color codes, based on which a decision-directed clustering method is designed to solve the initialization sensitivity of K-means. Experiments show that, for several common color invariants, the method converges to the global optimum rather than terminating at a local one.
     Code matching is studied from two sides: the matching algorithm and the code representation. Analysis of the main code sequence matching algorithms shows that traditional local matching methods are error-prone at discontinuities in the sequence, so a discontinuity-preserving matching method is proposed. Based on a window voting mechanism, it makes full use of the window information, identifying not only unanimously voted cores but also potential edges, and achieves higher matching accuracy with fewer mismatches than traditional methods. For decoding striped patterns, traditional methods match edge codes directly; this thesis proposes indirect decoding using color codes and mixed codes. Theoretical analysis shows that indirect decoding offers a higher Hamming distance, more code information, and stronger local constraints, and quantitative experiments confirm its higher accuracy and recall. The experiments also show that local matching methods outperform dynamic programming methods based on global assumptions.
     Based on the above research, a one-shot 3D measurement system was built. It can measure dynamic objects; in practice it measured human facial expressions and reconstructed surfaces of good quality.
Recent years have seen numerous advances in the area of one-shot shape acquisition, which opens up the possibility of reconstructing dynamic scenes by repeating the process at video rate. Although much advancement has been made in various measurement systems, several important problems remain unresolved: higher-precision projector calibration is in demand; the lack of a quantitative evaluation mechanism hinders the direct comparison of different methods; and decoding results are unsatisfactory in practical situations. One-shot shape acquisition is therefore the topic of this thesis, and several aspects are discussed and analyzed, including calibration, evaluation, code identification, and code matching.
     Structured light system calibration is a necessary step for extracting accurate metric information from 2D images. It often includes two separate stages: camera calibration and projector calibration. Calibration is a process of finding the true parameters of the model from the available observations, and it can be seen as a problem of parameter estimation, for which three aspects are indispensable: models, observations, and estimation methods. Different methods are reviewed according to these three aspects. The traditional projector calibration adopts a cost function on the projector image, based on the assumption that the reprojection error follows a Gaussian distribution. Actually, even if the measured points on the camera image are independent and identically distributed Gaussian variables, the statistical characteristics of the reprojection errors on the projector image cannot be obtained, because the relation between the original measurement error and the reprojection error is nonlinear. In this thesis, a method is proposed to estimate the projector parameters according to the camera image reprojection error. It does not need the statistical characteristics of the projector image points and makes full use of the background knowledge of the camera image noise, because its cost function is not defined on the projector image reprojection error as in the traditional method. Simulations and experiments affirm that this method has higher precision, even if the real noise is not normal.
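The principle of fitting parameters in the space where the noise actually lives can be sketched with a toy example (not the thesis' actual bundle adjustment): the camera observes a projector coordinate through a hypothetical linear mapping with Gaussian noise added in the camera image, so least squares on camera-image residuals matches the noise model.

```python
import random

# Toy model (an assumption for illustration): camera reading
# u_c = a * u_p + b + noise, where u_p is a projector coordinate.
# The noise lives in the CAMERA image, so we minimize squared
# residuals in camera-image space, consistent with the noise model.

def fit_camera_space(u_p, u_c):
    """Closed-form linear least squares for u_c ~ a*u_p + b."""
    n = len(u_p)
    mp = sum(u_p) / n
    mc = sum(u_c) / n
    cov = sum((p - mp) * (c - mc) for p, c in zip(u_p, u_c))
    var = sum((p - mp) ** 2 for p in u_p)
    a = cov / var
    b = mc - a * mp
    return a, b

random.seed(0)
u_p = [float(i) for i in range(100)]
u_c = [2.0 * p + 5.0 + random.gauss(0.0, 0.5) for p in u_p]
a, b = fit_camera_space(u_p, u_c)   # recovers a close to 2.0, b close to 5.0
```

The thesis' method generalizes this idea to the full nonlinear projector model via bundle adjustment, but the key design choice is the same: the cost function is defined where the observation noise is well characterized.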
     Until now, the lack of ground truth has prevented direct comparisons of coded structured light algorithms. In this thesis, an evaluation framework is proposed to compare different kinds of methods. First, the ground truth is obtained by robust space-time analysis. Then the evaluation methodology is introduced to describe an algorithm's performance: two indices, accuracy and recall, are defined for each method. Finally, quantitative and qualitative comparisons are conducted in the experiments. Four well-known coding methods are evaluated using three benchmark datasets. The results show that multi-shot shape acquisition methods perform better than one-shot methods, and that when different decoding algorithms are adopted for the same coding method, the accuracies and recalls change.
     This evaluation framework helps make quantitative comparisons of algorithms and prompts further advances in algorithm performance. The advantages of evaluating at the correspondence level are obvious. First, it is general across all coded structured light methods: sometimes the surface is not necessary and the point cloud is enough, but the correspondence is always in demand to obtain the 3D points or surfaces. Second, it is independent of the calibration. Generally, wrong correspondences cause distant isolated points, while the precision of calibration only makes the 3D points vary within a confidence interval, so the outliers in the point cloud come mainly from wrong correspondences; the ratio of outliers to the whole point set reflects the performance of the results. Third, it can help evaluate an individual point by judging its correspondence, whereas a surface only reflects an entire or local-area property.
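The two indices can be sketched as follows (the map-based representation of correspondences is my assumption for illustration): given ground-truth and estimated correspondences as feature-to-position maps, accuracy is the fraction of estimated correspondences that are correct, and recall is the fraction of ground-truth correspondences that were recovered.

```python
# Sketch of the correspondence-level indices. Correspondences are
# modeled (an assumption) as {feature_id: code_position} dictionaries.

def evaluate_correspondences(ground_truth, estimated):
    # A correspondence is correct if it agrees with the ground truth.
    correct = sum(1 for k, v in estimated.items() if ground_truth.get(k) == v)
    accuracy = correct / len(estimated) if estimated else 0.0
    recall = correct / len(ground_truth) if ground_truth else 0.0
    return accuracy, recall

gt  = {1: 10, 2: 11, 3: 12, 4: 13}   # benchmark from space-time analysis
est = {1: 10, 2: 99, 3: 12}          # one wrong match, one feature missed
acc, rec = evaluate_correspondences(gt, est)   # acc = 2/3, rec = 2/4
```

High accuracy means few outlier points in the resulting cloud; high recall means the method recovered a large share of the measurable points.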
     In coded structured light, encoding and decoding are the two essential steps to reconstruct a shape. Encoding transfers position information into a representation that is convenient to transmit through the structured light system; it is the pattern that controls the projected light of the projector. Decoding lies in analyzing the signal acquired by the structured light system to recover the transmitted information. Decoding in coded structured light consists of three steps: code detection, code identification, and code matching. Code detection finds the useful features in the acquired image.
     Code identification partitions data into a certain number of groups: data from the same color code should be similar to each other, while data from different color codes should not. Because the data are unlabeled, this is a problem of unsupervised classification. Analysis methods were adopted to evaluate the performance of different color features, and the ranking of these color features by discriminating power was concluded after a large number of experiments. The proposed color feature, named regularized color, proved to be the best; it is insensitive to surface orientation, illumination direction, and illumination intensity for matte, dull surfaces. Second, in order to overcome the drawback of K-means, a decision-directed method was introduced to find the initial centroids. Quantitative comparisons affirm that the global peak can be found and that the method is robust with high accuracy.
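Why initialization matters can be illustrated with a minimal 1-D K-means that accepts caller-supplied seed centroids; in the thesis, the decision-directed step would supply good seeds instead of random ones (the data and seeds below are made up for illustration).

```python
# Minimal 1-D K-means with caller-supplied initial centroids.
# A decision-directed initializer would replace the hand-picked seeds.

def kmeans_1d(data, centroids, iters=20):
    centroids = list(centroids)
    for _ in range(iters):
        # Assign each sample to the nearest centroid.
        clusters = [[] for _ in centroids]
        for x in data:
            i = min(range(len(centroids)), key=lambda j: abs(x - centroids[j]))
            clusters[i].append(x)
        # Recompute centroids; keep the old one if a cluster is empty.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids

# Two well-separated "color feature" groups (synthetic values).
data = [0.1, 0.2, 0.15, 0.9, 0.95, 1.0]
centers = kmeans_1d(data, centroids=[0.0, 1.0])   # converges near 0.15 and 0.95
```

With well-placed seeds the iteration settles at the true cluster means; poorly placed seeds can trap standard K-means in a local optimum, which is exactly the failure mode the decision-directed initialization targets.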
     Code matching can be seen as a problem of sequence matching; windowed uniqueness is a property of the projected sequence. Two aspects are studied: code representation and the matching algorithm.
     First, the relationship between the representations of stripe patterns and the decoding results is discussed. The problem of decoding stripe patterns can be modeled as matching two code sequences. Based on the property of the stripe pattern, which can be represented as an edge code sequence, a color code sequence, or a mixed code sequence, decoding the edges indirectly is proposed. While traditional methods match two edge code sequences, indirect decoding matches two color sequences or mixed code sequences. The advantages of the proposed method, including higher Hamming distance, enforced local coherence, and more code information, make indirect decoding excellent in performance. Previously, the lack of ground truth prevented direct comparisons of different decoding algorithms; here, six benchmark datasets are obtained using robust space-time analysis, and five decoding methods are quantitatively evaluated against the ground truth. The comparison results show that the proposed method is robust to complex code situations and outperforms the state-of-the-art technique in the area.
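The role of windowed uniqueness and Hamming distance in sequence matching can be sketched as follows (the color sequence and window length are illustrative, not the thesis' actual patterns): each position in the projected sequence is identified by its local window, and an observed window is matched to the position with the smallest Hamming distance, so a larger minimum inter-window distance tolerates more symbol corruption.

```python
# Sketch of windowed code-sequence matching by Hamming distance.

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def match_window(projected, observed, w=3):
    """Return (position, distance) of the best-matching window."""
    best_pos, best_d = None, w + 1
    for i in range(len(projected) - w + 1):
        d = hamming(projected[i:i + w], observed)
        if d < best_d:
            best_pos, best_d = i, d
    return best_pos, best_d

projected = "RGBRBGRBB"        # every length-3 window is unique (illustrative)
pos, d = match_window(projected, "RBG")    # exact window -> index 3, distance 0
pos2, d2 = match_window(projected, "RBR")  # corrupted window: nearest match at distance 1
```

With edge codes the alphabet is small and windows sit close together in Hamming distance; color or mixed codes enlarge the alphabet, which is why indirect decoding gains robustness.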
     Second, a discontinuity-preserving method is proposed for sequence matching, based on window voting to judge correct correspondences and potential borders. The matching method is also compared with traditional local matching methods; the results affirm that it achieves higher accuracies on four different objects.
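The window voting idea can be sketched as follows (a deliberate simplification of the thesis' method): each raw per-element match votes within a sliding window; unanimous windows are accepted as reliable cores, while mixed windows are flagged as potential borders between code segments instead of being smoothed over.

```python
# Rough sketch of discontinuity-preserving window voting.

def window_vote(raw_matches, w=3):
    """Label each element 'core' (unanimous window) or 'border'."""
    labels = []
    half = w // 2
    for i in range(len(raw_matches)):
        window = raw_matches[max(0, i - half):i + half + 1]
        labels.append("core" if len(set(window)) == 1 else "border")
    return labels

# Raw segment ids with a real discontinuity between index 3 and 4.
raw = [7, 7, 7, 7, 2, 2, 2, 2]
labels = window_vote(raw)
# -> cores on both sides, 'border' flags around the discontinuity
```

A plain local smoother would vote the minority away and blur the discontinuity; flagging mixed windows preserves it, which is the property the thesis' method is named for.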
     Based on the above research, a complete 3D scanning system was implemented using a DLP projector and one synchronized digital camera. The proposed one-shot shape acquisition is employed to measure the surface of the human face; the shape can be acquired at video rate, the results are fine, and the method is effective.
