Research on a Dual-PTZ-Camera Vision System
Abstract
With the growth of safety awareness worldwide, vision-based security monitoring technology has attracted increasing attention, so research on visual surveillance systems has broad application prospects. Taking the visual characteristics of the chameleon as its starting point, this thesis studies the dual-PTZ (pan-tilt-zoom) camera vision system. In motivation, physical composition, and achievable functionality, this work differs greatly from existing related research.
     Unlike the visual systems of humans and most animals, a chameleon's two eyes can move independently to survey the full panorama or cooperate to achieve high-precision stereo vision, abilities that are vital to its hunting and defense. A dual-PTZ vision system can realize similar functions. Its main advantages include: panoramic information and high-resolution local information of the scene can be acquired at the same time; scene depth can be obtained through stereo vision; and the two cameras can work either independently or cooperatively. These advantages strongly benefit visual surveillance, scene understanding, and related applications; such research remains rare in the published literature.
     This thesis addresses fundamental problems of the dual-PTZ vision system. The main research contents and results include the following:
     1. A spherical stereo rectification model and an automatic calibration method for it, which can establish the relationship between the two PTZ cameras and perform online stereo rectification of any pair of images;
     2. A study of stereo vision under the dual-PTZ vision system, with a complete algorithmic framework for depth-map estimation under arbitrary PTZ parameters;
     3. Building on dual-PTZ stereo vision and exploiting the variability and controllability of the camera parameters, methods for acquiring high-resolution depth maps of local scenes and depth maps of wide-area scenes; this depth information can support scene understanding and scene modeling in surveillance;
     4. A high-resolution video stabilization method for the dual-PTZ vision system, which improves the stability of long-range surveillance video and facilitates subsequent evidence collection and behavior analysis;
     5. Two concrete visual surveillance applications that verify the system's functions.
     All of the above has been verified by experiments; some of the results have already been applied to vessel surveillance on the Yangtze River (a project in cooperation with the Ministry of Communications) and achieved good results.
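To make the geometry behind the rectification model above concrete, the following sketch composes a camera rotation from its pan/tilt readings and back-projects a pixel onto the unit viewing sphere. The axis conventions and the intrinsic matrix `K` are illustrative assumptions, not the calibration or the spherical model developed in the thesis:

```python
import numpy as np

def pan_tilt_rotation(pan_deg, tilt_deg):
    """Rotation of a PTZ camera relative to its home pose.

    Assumes pan rotates about the vertical (y) axis and tilt about
    the horizontal (x) axis -- a common convention, not necessarily
    the one used in the thesis.
    """
    p, t = np.radians(pan_deg), np.radians(tilt_deg)
    Ry = np.array([[np.cos(p), 0.0, np.sin(p)],
                   [0.0, 1.0, 0.0],
                   [-np.sin(p), 0.0, np.cos(p)]])
    Rx = np.array([[1.0, 0.0, 0.0],
                   [0.0, np.cos(t), -np.sin(t)],
                   [0.0, np.sin(t), np.cos(t)]])
    return Ry @ Rx

def pixel_to_sphere(u, v, K, R):
    """Back-project pixel (u, v) to a unit ray on the viewing sphere."""
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
    ray = R @ ray
    return ray / np.linalg.norm(ray)

# Hypothetical intrinsics for a 640x480 camera at one zoom setting.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
R = pan_tilt_rotation(30.0, -10.0)
ray = pixel_to_sphere(320, 240, K, R)  # principal point maps to the optical axis
```

Rays from both cameras expressed on a common sphere in this way are what a spherical rectification can then align so that correspondences fall on matching curves.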
With the increase of safety awareness around the world, vision-based security monitoring technologies have received more and more attention. Research on visual surveillance systems therefore has a wide application scope. Inspired by the special vision system of chameleons, this thesis studies the dual-PTZ-camera system, in which two pan-tilt-zoom cameras serve as one vision unit. This work is different from most existing research in motivation, system composition, and the capabilities the system can achieve.
     Different from the vision system of human beings and most animals, a chameleon's two eyes are special in that they can either move independently to watch the global view or move cooperatively to achieve high-precision stereo vision. This ability helps chameleons catch prey efficiently. In the dual-PTZ-camera system, the two cameras can achieve similar functions. The main advantages of this system include: (1) both large-view information and high-resolution local-view information can be obtained at the same time; (2) stereo vision can be employed to obtain depth information of the scene; (3) the two cameras can work in either independent or cooperative mode. These advantages benefit many visual surveillance applications. As far as we know, very little related research has appeared in the literature.
     This thesis covers some basic problems of the dual-PTZ-camera system. The main research contents and results of our study include the following aspects:
     (1) This thesis proposes a spherical stereo rectification model that can be used for stereo vision and that represents the relationship between two PTZ cameras. A self-calibration approach is also presented to build this model;
     (2) A stereo vision framework for the dual-PTZ-camera system is proposed. It can be used to obtain the depth map of the scene given the cameras' pan-tilt-zoom parameters;
     (3) Owing to the variability and controllability of the camera parameters, high-resolution and wide-scope depth information can be obtained. This depth information is very useful for scene understanding and modeling;
     (4) This thesis proposes a novel framework to stabilize and complete high-spatial-resolution video using the dual-PTZ-camera system. This work can be used not only to improve the visual effect, but also for gesture recognition, behavior and gait analysis, object identification, security evidence collection, etc.;
     (5) This thesis presents two preliminary visual surveillance applications to verify the advantages and abilities of the dual-PTZ-camera system.
     Experimental results verify the feasibility of our study. Some techniques have already been applied in the 'Vessel Surveillance System on the Yangtze River' (a program in cooperation with the Ministry of Communications of the PRC).
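As a minimal illustration of the dense-matching stage behind the depth maps in items (2) and (3), here is a toy sum-of-absolute-differences (SAD) block matcher run on an already-rectified image pair. It is a deliberately simple stand-in, not the thesis's framework, which additionally handles arbitrary pan-tilt-zoom parameters and spherical rectification:

```python
import numpy as np

def sad_disparity(left, right, max_disp=16, win=3):
    """Naive SAD block matching on rectified images (rows are epipolar lines).

    For each left pixel, searches right-image windows shifted by
    0..max_disp pixels and keeps the shift with the lowest SAD cost.
    """
    h, w = left.shape
    pad = win // 2
    L = np.pad(left.astype(float), pad, mode='edge')
    R = np.pad(right.astype(float), pad, mode='edge')
    disp = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            best, best_d = np.inf, 0
            for d in range(min(max_disp, x) + 1):
                cost = np.abs(L[y:y+win, x:x+win] -
                              R[y:y+win, x-d:x-d+win]).sum()
                if cost < best:
                    best, best_d = cost, d
            disp[y, x] = best_d
    return disp

# Synthetic rectified pair: a textured image shifted 4 px between views.
rng = np.random.default_rng(0)
left = rng.random((20, 40))
right = rng.random((20, 40))
right[:, :36] = left[:, 4:]          # true disparity is 4 px
d = sad_disparity(left, right, max_disp=8)
# Depth then follows from Z = f * B / d (f = focal length, B = baseline).
```

Real systems replace this brute-force search with robust cost aggregation and global optimization, but the disparity-to-depth relation `Z = f * B / d` is the same.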
