Research on Infrared-Image Terminal-Guidance Tracking Algorithms for Moving Targets
Abstract
Driven by the urgent demand for precision-strike weapons, research on infrared-image terminal-guidance tracking algorithms for moving targets is of great significance to national defense. During the terminal-guidance tracking stage, the camera closes on the target rapidly, so the target's projected area in the image, the target's appearance, and the background all change drastically. Commonly used target-tracking algorithms can rarely meet these requirements, so terminal-guidance tracking remains a challenging research topic.
     This thesis divides terminal-guidance tracking into three phases: dim small-target acquisition, surface-target tracking, and precision-strike part identification. A composite tracking scheme is adopted, covering the following three aspects:
     (1) Dim small-target acquisition phase. The target is far from the camera, so its projected area in the image is small and appears as a blurred spot, generally without shape or texture features. For such targets, a target-acquisition algorithm based on layered background compensation over regions of interest is proposed. First, regions of interest are extracted from the image, and optical-flow parameters are computed for each region layer by layer; then a perspective model of the background motion is established to compensate for the relative movement of the background; finally, an adaptive differencing method detects the moving target, and a trajectory-association step that exploits the temporal characteristics of the target's motion confirms the detection, greatly reducing the false-alarm probability.
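The compensate-then-difference step above can be sketched in a few lines; this is a minimal illustration in which a single global translation `bg_shift` stands in for the region-wise perspective motion model, and the function name and the mean-plus-k-sigma threshold rule are assumptions for the sketch, not the thesis's implementation.

```python
import numpy as np

def detect_moving_target(prev, curr, bg_shift, k=3.0):
    """Background-compensated frame differencing (illustrative sketch).

    `bg_shift` is an assumed global (dy, dx) translation; the thesis
    instead fits a perspective motion model per region of interest.
    """
    dy, dx = bg_shift
    # Compensate the background motion by shifting the previous frame.
    comp = np.roll(np.roll(prev, dy, axis=0), dx, axis=1)
    # Residual difference after compensation: background cancels,
    # independently moving targets remain.
    diff = np.abs(curr.astype(float) - comp.astype(float))
    # Adaptive threshold: mean plus k standard deviations of the residual.
    thresh = diff.mean() + k * diff.std()
    return diff > thresh
```

In the full pipeline the resulting binary mask would then feed the trajectory-association stage, which keeps only detections that persist along a plausible motion track.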
     (2) Surface-target tracking phase. As the camera approaches the target, the target gradually exhibits detail and texture features. Because of the drastic relative motion between target and camera, the target's appearance in the image sequence changes continually, so tracking with a fixed template is bound to fail. To address this, a subspace tracking algorithm based on particle filtering is proposed; its sparse image representation and subspace feature extraction allow the tracker to adapt to changes in the target's appearance. First, a particle filter performs importance sampling, and each sample is given a sparse image representation; then the sample set is projected onto the target subspace, and the maximum-likelihood estimate updates the target state; finally, the target subspace itself is updated, so that it learns online and adapts to the target's changing appearance.
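The predict-weight-estimate cycle of the particle filter described above can be sketched as follows. This is a generic single-step sketch, not the thesis's tracker: the state is reduced to a position vector, and `score_fn` is a placeholder for the subspace likelihood (in the thesis, a candidate patch with low reconstruction error against the learned target subspace would receive a high weight).

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(state, n_particles, propose_std, score_fn):
    """One predict/weight/estimate cycle of a particle filter (sketch).

    `score_fn` stands in for the subspace likelihood used in the thesis;
    here it is an arbitrary non-negative callable on a candidate state.
    """
    # Importance sampling: propose particles around the previous state.
    particles = state + rng.normal(
        0.0, propose_std, size=(n_particles, state.shape[0]))
    # Weight each particle by its likelihood and normalise.
    weights = np.array([score_fn(p) for p in particles])
    weights = weights / weights.sum()
    # Point estimate: weighted mean of the particles.
    return particles.T @ weights
```

Repeating this step frame by frame, with the subspace (and hence the likelihood) updated online, gives the adaptive tracking loop sketched in the paragraph above.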
     (3) Precision-strike part identification phase. The target becomes very large in the field of view and may even fill it. To keep the tracking process continuous, a target-recognition algorithm based on local robust features is proposed; it identifies a crucial local part of the target and uses it as the new tracking point. To address the false matches produced by traditional feature-point matching, two improvement strategies are introduced; experiments show that the improved algorithm achieves good results without sacrificing correctness.
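The abstract does not spell out the two improvement strategies, so the sketch below substitutes two standard mismatch filters purely for illustration: Lowe's ratio test on descriptor distances, followed by a median-displacement geometric-consistency check. The function name and parameters are assumptions, not the thesis's method.

```python
import numpy as np

def match_with_rejection(desc1, pts1, desc2, pts2, ratio=0.8, tol=3.0):
    """Feature matching with two illustrative mismatch filters (sketch)."""
    matches = []
    for i, d in enumerate(desc1):
        dist = np.linalg.norm(desc2 - d, axis=1)
        j, j2 = np.argsort(dist)[:2]
        # Filter 1: Lowe's ratio test rejects ambiguous matches whose
        # best distance is not clearly better than the second best.
        if dist[j] < ratio * dist[j2]:
            matches.append((i, int(j)))
    if not matches:
        return []
    # Filter 2: geometric consistency -- keep only matches whose
    # displacement agrees with the median displacement of all candidates.
    disp = np.array([pts2[j] - pts1[i] for i, j in matches])
    med = np.median(disp, axis=0)
    keep = np.linalg.norm(disp - med, axis=1) < tol
    return [m for m, k in zip(matches, keep) if k]
```

The surviving matches would then localize the chosen local part of the target, which becomes the new tracking point for the final approach.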
