Research on Extraction and Matching of Local Invariant Image Features and Their Applications
Abstract
Image feature extraction is an important research topic in image analysis, pattern recognition, and computer vision, and it underlies a wide range of problems in these fields. Because the images containing a target usually differ by rotation, viewpoint, scale, illumination, blur, and other transformations, extracting stable image features has become a key research focus. In recent years, local invariant features, which are invariant to translation, rotation, scale, illumination, and viewpoint changes, have been widely applied to image registration, image stitching, object recognition, target tracking, digital watermarking, and image retrieval. Methods based on local invariant features consist mainly of feature extraction (feature detection and feature description) and feature matching. This thesis analyzes the theoretical foundations of invariant features, studies existing local invariant feature extraction and matching methods, proposes improvements that address their shortcomings, and applies the improved methods to image registration, object recognition, and related tasks with good results.
     Multi-scale feature point detection methods based on scale-space theory are studied, their shortcomings are analyzed, and a new multi-scale detector built on Harris corners is proposed. The new method first detects Harris feature points at every scale of the scale space, then traverses all scales to track these points and group them so that each group represents a single local structure. Within each group, the point at which both the corner response and the scale-normalized Laplacian reach their extrema is selected to represent that structure. Finally, the selected points are described and matched with the SIFT descriptor. Experiments show that, for images with scale, viewpoint, JPEG-compression, and blur changes, the new detector achieves higher repeatability than the original Harris-Laplace detector, and it also yields higher registration accuracy when registering remote sensing images from two spectral bands.
     The Scale Invariant Feature Transform (SIFT) is a widely used feature point descriptor. Because it encodes only the gradient information in the local neighborhood of a feature point, points scattered over similar local structures in an image are easily mismatched. To address this problem, a SIFT mismatch correction method based on a spatial distribution descriptor is proposed. The method first extracts and matches feature points with SIFT; for every matched point, it then builds a more distinctive descriptor from the spatial distribution of the image edge pixels relative to that point, and finally uses this descriptor to correct two kinds of mismatches in the matching result. Experiments show that, compared with random sample consensus (RANSAC), the spatial distribution descriptor removes more mismatches while retaining more of the originally correct matches, which gives it practical value.
     An affine invariant feature extraction method, multi-scale auto-convolution (MSA), is also analyzed, and on its basis a new affine invariant feature, the multi-scale auto-convolution entropy (MSAE), is proposed. The MSAE feature is constructed from the MSA feature and proved to be affine invariant. Generalized canonical correlation analysis (GCCA) is then used to fuse MSA and MSAE into a combined affine invariant feature that carries more image information. Both the combined feature and the MSA feature are used as descriptors of the whole image and of the maximally stable extremal regions (MSERs) detected in the image, and classification experiments on the described images and MSER regions show that the new combined affine invariant descriptor is more distinctive than the MSA descriptor.
     Estimating the epipolar geometry constraint is a mainstream approach to discarding mismatches. Among such methods, M-estimators offer relatively fast computation and robustness to Gaussian noise and therefore have good application prospects, but they depend entirely on an initial fundamental matrix estimated by linear least squares, which limits their accuracy and stability. An improved M-estimator algorithm is therefore proposed. It first computes an initial fundamental matrix with the seven-point algorithm, then uses the sum of squared distances between the matched points and their corresponding epipolar lines as the metric to obtain an initial matrix more accurate than that of the original M-estimator. This initial matrix is used to remove mismatches and outliers from the original point set, and the remaining matches are refined by nonlinear optimization with the Torr M-estimator to obtain the final matched point pairs. Experiments show that, compared with the M-estimator and the Torr M-estimator, the improved method both increases the estimation accuracy of the fundamental matrix and remains robust in the presence of mismatches and Gaussian noise.
     Finally, after studying the Mean Shift algorithm and the related Camshift (Continuously Adaptive Mean Shift) algorithm and analyzing Camshift's weaknesses, a target tracking method that combines SIFT and Camshift is proposed. The target region is first converted to the HSV color space and SIFT feature points are extracted from its H, S, and V channels; when the background texture inside the target region is not complex, most SIFT points fall on the target, and these points are used to build the target's hue histogram. The hue histogram is then used to compute the target color probability distribution of the next frame. SIFT points are subsequently extracted from the H, S, and V channels of the search region and matched against those of the target region, and the pixels inside the image patch enclosed by the matched search-region points are used to compute the center of the next search window and its scale factor. Tracking experiments on three image sequences captured indoors and outdoors demonstrate the effectiveness of the new method.
Image feature extraction, which serves as the basis for many problems in image processing, plays an important role in image analysis, pattern recognition, and computer vision. Since images always undergo transformations such as rotation, viewpoint change, scaling, illumination change, and blur, how to detect stable features has become a focus of the related research fields. In recent years, local features that are invariant to such transformations have proved successful in a wide range of applications, including image registration, image stitching, object recognition, target tracking, watermarking, and image retrieval. Methods based on local invariant features mainly consist of feature extraction (including feature detection and description) and feature matching. In this thesis, the theory of invariant features is analyzed thoroughly and several existing methods for detecting and matching local invariant features are studied. Several improved algorithms with better performance in image registration, object recognition, and target tracking are then proposed.
     Multi-scale feature detection methods based on scale-space theory are studied and their shortcomings analyzed. On this basis, an improved Harris-Laplace detector is proposed to obtain higher repeatability than the original Harris-Laplace. In the new method, Harris feature points are first extracted at each scale; all detected points are then tracked across the scales, starting from the largest scale, and grouped so that each group represents one local structure. In each group, the point that simultaneously maximizes the corner response and the scale-normalized Laplacian is selected. Finally, the selected points are described and matched with the SIFT descriptor. Experiments on images with scale, viewpoint, JPEG-compression, and blur changes show that the proposed detector has higher repeatability than the original Harris-Laplace, and it also achieves higher registration precision on multi-sensor remote sensing images.
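     The scale-selection step above can be illustrated with a short sketch, assuming OpenCV and NumPy are available; the scale set, thresholds, and window sizes are illustrative choices rather than the thesis's parameters, and per-scale non-maximum suppression is omitted for brevity.

import cv2
import numpy as np

def harris_laplace_points(gray, sigmas=(1.2, 1.7, 2.4, 3.4, 4.8), k=0.04, thresh=0.01):
    gray = np.float32(gray)
    responses, laplacians = [], []
    for s in sigmas:
        smoothed = cv2.GaussianBlur(gray, (0, 0), s)
        # Harris corner response at this scale.
        responses.append(cv2.cornerHarris(smoothed, blockSize=3, ksize=3, k=k))
        # Scale-normalized Laplacian: sigma^2 * |Laplacian of the smoothed image|.
        laplacians.append((s ** 2) * np.abs(cv2.Laplacian(smoothed, cv2.CV_32F, ksize=3)))
    keypoints = []
    for i, s in enumerate(sigmas):
        strong = responses[i] > thresh * responses[i].max()
        for y, x in zip(*np.nonzero(strong)):
            here = laplacians[i][y, x]
            below = laplacians[i - 1][y, x] if i > 0 else -np.inf
            above = laplacians[i + 1][y, x] if i < len(sigmas) - 1 else -np.inf
            # Keep the point only at the scale where the scale-normalized Laplacian
            # attains a maximum over the neighbouring scales.
            if here > below and here > above:
                keypoints.append(cv2.KeyPoint(float(x), float(y), 6 * s))
    return keypoints

     The returned keypoints can then be passed to any SIFT implementation for description and matching, as in the method above.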
     The Scale Invariant Feature Transform (SIFT) is a widely used descriptor for local invariant features. However, because the descriptor uses only the gradient information in the neighborhood of a feature point, mismatches may occur when the extracted points lie on similar structures within an image. A correction method based on a spatial distribution descriptor is therefore proposed. Feature points are first detected and matched with SIFT; each matched point is then described again, using the spatial distribution of the image edge pixels relative to that point, to form a more distinctive descriptor. Finally, two kinds of mismatches are corrected with the new descriptor. Experiments show that, compared with random sample consensus (RANSAC), the proposed algorithm excludes more false matches while retaining more of the originally correct matches.
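     A minimal sketch of this match-then-verify flow, assuming OpenCV 4.4 or later (for cv2.SIFT_create) and NumPy; the log-polar histogram of Canny edge pixels used here is an illustrative stand-in for the spatial distribution descriptor described above, and the ratio-test and agreement thresholds are arbitrary.

import cv2
import numpy as np

def spatial_descriptor(edges, pt, n_r=5, n_theta=12, r_max=100.0):
    # Histogram of edge-pixel positions relative to pt, in log-polar bins.
    ys, xs = np.nonzero(edges)
    dx, dy = xs - pt[0], ys - pt[1]
    r = np.hypot(dx, dy)
    keep = (r > 1e-6) & (r < r_max)
    r, theta = r[keep], np.arctan2(dy[keep], dx[keep])
    r_bin = np.minimum((np.log1p(r) / np.log1p(r_max) * n_r).astype(int), n_r - 1)
    t_bin = ((theta + np.pi) / (2 * np.pi) * n_theta).astype(int) % n_theta
    hist = np.zeros(n_r * n_theta, np.float32)
    np.add.at(hist, r_bin * n_theta + t_bin, 1.0)
    return hist / (hist.sum() + 1e-9)

def sift_match_and_verify(img1, img2, agree_thresh=0.5):
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(img1, None)
    k2, d2 = sift.detectAndCompute(img2, None)
    pairs = cv2.BFMatcher(cv2.NORM_L2).knnMatch(d1, d2, k=2)
    good = [p[0] for p in pairs if len(p) == 2 and p[0].distance < 0.8 * p[1].distance]
    e1, e2 = cv2.Canny(img1, 50, 150), cv2.Canny(img2, 50, 150)
    kept = []
    for m in good:
        h1 = spatial_descriptor(e1, k1[m.queryIdx].pt)
        h2 = spatial_descriptor(e2, k2[m.trainIdx].pt)
        if np.linalg.norm(h1 - h2) < agree_thresh:  # edge distributions must agree
            kept.append(m)
    return kept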
     Meanwhile, a new affine invariant feature descriptor is proposed. First, a feature named multi-scale auto-convolution entropy (MSAE) is constructed from the multi-scale auto-convolution (MSA) and proved to be affine invariant. MSA and MSAE are then fused with generalized canonical correlation analysis (GCCA) to obtain a combined feature that carries more image information and can serve as a new affine invariant descriptor. Finally, both the whole image and the maximally stable extremal regions (MSERs) extracted from it are described with the new descriptor, and two recognition experiments verify that the combined affine invariant feature is more distinctive than MSA.
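     A minimal sketch of canonical-correlation-based feature fusion, using scikit-learn's ordinary CCA as a stand-in for the GCCA formulation referred to above; X and Y are hypothetical MSA-style and MSAE-style feature matrices with one row per image or MSER region.

import numpy as np
from sklearn.cross_decomposition import CCA

def fuse_features(X, Y, n_components=10):
    cca = CCA(n_components=n_components)
    Xc, Yc = cca.fit_transform(X, Y)   # maximally correlated projections of X and Y
    serial = np.hstack([Xc, Yc])       # serial fusion: concatenate the variates
    parallel = Xc + Yc                 # parallel fusion: sum the variates
    return serial, parallel

# Usage with random placeholders for the two feature sets:
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 32))     # e.g. 200 regions with 32-dim MSA features
Y = rng.standard_normal((200, 16))     # e.g. the corresponding 16-dim MSAE features
serial, parallel = fuse_features(X, Y)

     Either fused vector can then be fed to a classifier, as in the recognition experiments mentioned above.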
     Furthermore, algorithms based on the epipolar geometry constraint are now the mainstream approach to discarding mismatches. Among them, M-estimators, with fast computation and robustness to Gaussian noise, have good application prospects. However, because they depend entirely on an initial matrix obtained by linear least squares, their precision and stability are limited. An improved M-estimator algorithm for estimating the fundamental matrix is therefore proposed. The improved method first computes an initial matrix with the seven-point algorithm, then uses the sum of squared distances between the matched points and their corresponding epipolar lines as the metric to obtain an initial fundamental matrix more precise than that of the original M-estimator. This initial matrix is used to eliminate the mismatches in the original point set, and a nonlinear optimization of the remaining matches is carried out with the Torr M-estimator to obtain the final matched point pairs. Experiments performed in the presence of mismatches and Gaussian noise show that, compared with the M-estimator and the Torr M-estimator, the proposed algorithm not only improves the estimation precision but also remains robust.
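     The initialization-and-cleanup idea can be sketched as follows, assuming OpenCV and NumPy and that pts1 and pts2 are float32 arrays of matched coordinates with shape (N, 2): a seven-point initial fundamental matrix is selected by the sum of squared epipolar distances, gross mismatches are removed, and the surviving matches are re-fitted. The final FM_8POINT refit is only a stand-in for the Torr M-estimator nonlinear step described above, and the trial count and inlier threshold are illustrative.

import cv2
import numpy as np

def epipolar_dists(F, p1, p2):
    # Symmetric squared point-to-epipolar-line distances for homogeneous points (N x 3).
    l2 = p1 @ F.T                              # epipolar lines in image 2
    l1 = p2 @ F                                # epipolar lines in image 1
    num = np.abs(np.sum(p2 * l2, axis=1))      # |x2^T F x1|
    d2 = num / np.hypot(l2[:, 0], l2[:, 1])
    d1 = num / np.hypot(l1[:, 0], l1[:, 1])
    return d1 ** 2 + d2 ** 2

def estimate_fundamental(pts1, pts2, n_trials=200, inlier_thresh=1.0):
    rng = np.random.default_rng(0)
    p1 = np.hstack([pts1, np.ones((len(pts1), 1))])
    p2 = np.hstack([pts2, np.ones((len(pts2), 1))])
    best_F, best_score = None, np.inf
    for _ in range(n_trials):
        idx = rng.choice(len(pts1), 7, replace=False)
        Fs, _ = cv2.findFundamentalMat(pts1[idx], pts2[idx], cv2.FM_7POINT)
        if Fs is None:
            continue
        for F in Fs.reshape(-1, 3, 3):         # the 7-point solver may return up to 3 F's
            score = epipolar_dists(F, p1, p2).sum()
            if score < best_score:
                best_F, best_score = F, score
    if best_F is None:
        return None, None
    inliers = epipolar_dists(best_F, p1, p2) < inlier_thresh ** 2
    # Re-fit on the cleaned match set (requires at least 8 surviving matches).
    F_final, _ = cv2.findFundamentalMat(pts1[inliers], pts2[inliers], cv2.FM_8POINT)
    return F_final, inliers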
     In the last part, the Mean Shift algorithm and its derivative Camshift are studied, and a new target tracking method combining SIFT and Camshift is proposed to overcome Camshift's shortcomings. The target region is first transformed to the HSV color space and SIFT feature points are extracted from the H, S, and V channels; since most of these points lie on the target when the background texture is weak, they are used to build the hue histogram of the target. The color probability distribution of the next frame is then computed from this histogram. Next, SIFT points are detected in the H, S, and V channels of the search region and matched against those of the target region. Finally, the pixels within the region enclosed by the matched points in the search region are used to compute the new center and size of the search window. Three image sequences captured indoors and outdoors are used to evaluate the proposed method, and the experimental results demonstrate its effectiveness.
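     A minimal sketch of the combination described above, assuming OpenCV 4.4 or later; the hue histogram is built only from pixels around SIFT keypoints inside the target box, and the keypoint radius, bin count, and termination criteria are illustrative choices rather than the thesis's settings.

import cv2
import numpy as np

def init_hue_histogram(frame, box):            # box = (x, y, w, h) around the target
    x, y, w, h = box
    hsv = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
    sift = cv2.SIFT_create()
    mask = np.zeros(hsv.shape[:2], np.uint8)
    for ch in cv2.split(hsv):                  # SIFT points from the H, S and V channels
        for kp in sift.detect(ch, None):
            cv2.circle(mask, tuple(map(int, kp.pt)), 4, 255, -1)
    hist = cv2.calcHist([hsv], [0], mask, [32], [0, 180])   # hue histogram at keypoints
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
    return hist

def camshift_step(frame, box, hue_hist):
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    back_proj = cv2.calcBackProject([hsv], [0], hue_hist, [0, 180], 1)
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
    _, new_box = cv2.CamShift(back_proj, box, criteria)
    return new_box                             # updated (x, y, w, h) search window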
