Vision-Based Pose Estimation and Target Tracking for a Miniature Unmanned Helicopter
Abstract
Unmanned aerial vehicles (UAVs), which are widely used in both military and civil applications, have become one of the hottest research topics worldwide in recent years. Autonomous navigation is a prerequisite for UAVs to be applied in practice. Compared with traditional navigation that relies on inertial sensors and the Global Positioning System (GPS), vision-based navigation acquires environmental information in real time and can manage environment-related tasks such as relative pose estimation, obstacle-avoidance maneuvering, and surveillance and tracking, so it is receiving more and more attention. In this dissertation, a miniature unmanned helicopter (MUH) is selected as the main experimental platform, and research is carried out on two problems in vision-based navigation: relative pose estimation and ground target tracking. The main work of this dissertation is listed below:
     In Chapter 1, the research background and significance of this dissertation are introduced. Domestic and foreign research on vision-based UAV navigation is then reviewed, UAV pose estimation methods are classified according to the principles they use, and the hardware platforms and software systems for ground target tracking are analyzed. Finally, the research contents and organization of this dissertation are clarified.
     In Chapter 2, the related basic theories are briefly introduced. First, the commonly used coordinate frames and attitude representations are described. Then, the relevant knowledge of projective geometry, camera models, and color images is introduced. Finally, the simultaneous localization and mapping algorithm based on the extended Kalman filter (EKF-SLAM) is discussed.
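The EKF machinery reviewed in Chapter 2 reduces to one predict/update cycle. The sketch below is a generic textbook EKF step, not the dissertation's SLAM formulation; the function name and the linear models used in the usage note are illustrative assumptions.

```python
import numpy as np

def ekf_step(x, P, u, z, f, F, h, H, Q, R):
    """One extended Kalman filter predict/update cycle.

    f, h are the motion and observation models; F, H are their
    Jacobians evaluated at the current estimate; Q, R are the
    process and measurement noise covariances.
    """
    # Predict: propagate the state and its covariance through the motion model.
    x_pred = f(x, u)
    P_pred = F @ P @ F.T + Q
    # Update: weigh the measurement against the prediction.
    innov = z - h(x_pred)                 # innovation (residual)
    S = H @ P_pred @ H.T + R              # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x_pred + K @ innov
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```

With a scalar random-walk model (f(x, u) = x, h(x) = x), a single step pulls the estimate roughly halfway toward the measurement while shrinking the covariance, which is the behavior EKF-SLAM exploits at every landmark observation.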
     In Chapter 3, to estimate the attitude of an MUH flying in urban environments, a method is proposed to extract, from aerial images captured by a downward-looking camera, the vanishing point of the vertical edge lines of buildings. The method first extracts edge lines using the Hough transform, and then extracts the vanishing point and the vertical edge lines simultaneously using an inverse Hough transform. The attitude of the MUH is calculated directly from the vanishing point coordinates according to projective geometry. Experimental results show that the method extracts the vanishing point reliably and that the estimated attitude is accurate.
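The geometric core of this approach is that the vanishing point of world-vertical lines is the image of the gravity direction, so two attitude angles follow from a single pixel coordinate. A minimal sketch, assuming a calibrated pinhole camera with intrinsic matrix K; the function name and angle conventions are illustrative, not the dissertation's:

```python
import numpy as np

def attitude_from_vanishing_point(K, vp):
    """Recover roll and pitch from the vanishing point of vertical lines.

    Lines with 3-D direction d converge at the image point K @ d, so the
    back-projected vanishing point gives the world-vertical direction
    expressed in the camera frame.
    """
    d = np.linalg.inv(K) @ np.array([vp[0], vp[1], 1.0])
    d /= np.linalg.norm(d)
    # Tilt of the optical axis away from the vertical direction
    # (one common convention; signs depend on the frame definitions).
    pitch = np.arcsin(np.clip(d[0], -1.0, 1.0))  # rotation about camera y-axis
    roll = np.arctan2(d[1], d[2])                # rotation about camera x-axis
    return roll, pitch
```

When the vanishing point sits at the principal point, the camera looks straight down and both angles are zero; shifting it horizontally by f·tan(θ) pixels corresponds to a pitch of θ.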
     In Chapter 4, to estimate the attitude of an MUH flying at high altitude, a method is proposed to extract the horizon from aerial images captured by a forward-looking camera. The method first extracts relatively long lines as horizon candidates using the Hough transform, and then determines the horizon by taking into account region information from the dark channel prior image. The attitude of the MUH is estimated with an extended Kalman filter that combines a horizon observation model with a rigid-body rotation model. The predicted horizon parameters and the innovation covariance are used to judge whether the extracted horizon is correct, and a correction is carried out when it is not. Experimental results show that the method extracts the horizon robustly.
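The dark channel prior (He et al.) assigns high values to bright, hazy sky and low values to most ground regions, which is what makes it useful for vetting Hough-line horizon candidates. A rough sketch of that idea; the patch size, the (rho, theta) line parameterization, and the scoring rule are illustrative assumptions, not the dissertation's algorithm:

```python
import numpy as np

def dark_channel(img, patch=3):
    """Per-pixel minimum over RGB, then a local minimum filter (dark channel prior)."""
    dc = img.min(axis=2)
    pad = patch // 2
    p = np.pad(dc, pad, mode='edge')
    out = np.full_like(dc, np.inf)
    for dy in range(patch):          # brute-force patch x patch minimum filter
        for dx in range(patch):
            out = np.minimum(out, p[dy:dy + dc.shape[0], dx:dx + dc.shape[1]])
    return out

def horizon_score(dc, rho, theta):
    """Score a candidate Hough line x*cos(theta) + y*sin(theta) = rho.

    A plausible horizon separates a bright sky region (high dark channel)
    from a darker ground region, so we score the mean difference.
    """
    h, w = dc.shape
    ys, xs = np.mgrid[0:h, 0:w]
    above = xs * np.cos(theta) + ys * np.sin(theta) < rho
    if above.sum() == 0 or (~above).sum() == 0:
        return -np.inf
    return dc[above].mean() - dc[~above].mean()
```

On a synthetic image with a bright upper half, the line actually separating sky from ground scores higher than any line placed inside one of the regions.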
     In Chapter 5, to estimate the attitude of an MUH in near-hover rotation, modifications are made to the "visual compass", a MonoSLAM-based attitude estimation method. Because an MUH undergoes large angular accelerations, the visual compass works only if the a priori system noise covariance in the motion model is set to a large value, but this results in a high computational cost for matching and a high rate of false matches. A multi-resolution landmark selection strategy is therefore adopted for initializing new landmarks, and a layer-by-layer active-search algorithm is proposed for fast landmark matching. Experimental results show that the modified visual compass reduces the matching cost and lowers the rate of false matches.
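The layer-by-layer matching idea — locate a landmark coarsely first, then refine only inside a small window — can be illustrated with plain template matching on a two-level image pyramid. This is an illustrative sketch of the coarse-to-fine principle, not the dissertation's active-search implementation:

```python
import numpy as np

def downsample(img):
    """Halve resolution by 2x2 block averaging (one pyramid level)."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    return img[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def ssd_match(img, tmpl):
    """Exhaustive sum-of-squared-differences template search."""
    th, tw = tmpl.shape
    best, pos = np.inf, (0, 0)
    for y in range(img.shape[0] - th + 1):
        for x in range(img.shape[1] - tw + 1):
            d = ((img[y:y + th, x:x + tw] - tmpl) ** 2).sum()
            if d < best:
                best, pos = d, (y, x)
    return pos

def coarse_to_fine_match(img, tmpl, radius=2):
    """Match at half resolution, then refine in a small full-resolution window."""
    cy, cx = ssd_match(downsample(img), downsample(tmpl))
    y0, x0 = max(0, 2 * cy - radius), max(0, 2 * cx - radius)
    win = img[y0:y0 + tmpl.shape[0] + 2 * radius,
              x0:x0 + tmpl.shape[1] + 2 * radius]
    dy, dx = ssd_match(win, tmpl)
    return y0 + dy, x0 + dx
```

The coarse pass shrinks the full-image search to a handful of candidate positions, which is why the same structure cuts the matching cost when the motion-model uncertainty (and hence the search region) is large.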
     In Chapter 6, to estimate the pose of an MUH flying close to the ground, modifications are made to a MonoSLAM-based pose estimation method. Instead of parameterising several simultaneously initialized landmarks individually, a landmark cluster is used to reduce camera-parameter redundancy and reinforce the camera pose constraints. The initial landmark depth is set to a larger value to reduce the probability that a depth estimate turns negative after the EKF update. A 2-point random sample consensus (RANSAC) data association algorithm is proposed to cope with situations in which the angular velocities around multiple camera axes change rapidly. Field experiments show that the accuracy of the modified MonoSLAM-based pose estimation fulfills the requirements of autonomous flight.
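A 2-point RANSAC loop can be illustrated on the simplest model that two correspondences pin down: a 2-D rigid transform. The dissertation applies the idea inside EKF-SLAM data association; the sketch below shows only the generic hypothesize-and-verify structure, with all names and thresholds being illustrative assumptions:

```python
import numpy as np

def rigid_from_two(p, q):
    """Minimal 2-point estimate of a 2-D rotation + translation mapping p -> q."""
    dp, dq = p[1] - p[0], q[1] - q[0]
    a = np.arctan2(dq[1], dq[0]) - np.arctan2(dp[1], dp[0])
    R = np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
    t = q[0] - R @ p[0]
    return R, t

def ransac_2pt(p, q, iters=100, tol=0.5, rng=None):
    """Separate inlier matches from outliers with 2-point hypotheses."""
    if rng is None:
        rng = np.random.default_rng(0)
    best_inliers = np.zeros(len(p), dtype=bool)
    for _ in range(iters):
        i, j = rng.choice(len(p), size=2, replace=False)
        R, t = rigid_from_two(p[[i, j]], q[[i, j]])           # hypothesize
        err = np.linalg.norm(q - (p @ R.T + t), axis=1)        # verify
        inliers = err < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers
```

Because each hypothesis needs only two matches, far fewer iterations are required than with larger minimal sets, which matters when mismatch rates rise under fast multi-axis rotation.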
     In Chapter 7, to track ground targets from an MUH, a hardware platform for surveillance and tracking is constructed. Two gimbals were designed and manufactured: the first uses a traditional two-axis two-frame structure, while the second uses a two-axis three-frame structure that avoids the singularity encountered in two-axis two-frame gimbals. In addition, a visual tracking algorithm based on mean shift and elliptical blob detection in a likelihood map is proposed. The algorithm uses the area-weighted ratio of the object and mixed-region histograms as the likelihood, which suppresses interference from the background. In the likelihood map the object appears as an elliptical blob: its position is estimated using mean shift, while its orientation and semi-major and semi-minor axes are determined with an elliptical difference-of-Gaussians operator. Experimental results show that the algorithm can track objects undergoing both rotation and scale changes.
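Mean shift on a likelihood map reduces to iterating the likelihood-weighted centroid of a search window until it stops moving. A minimal sketch with a flat kernel; the window size and convergence threshold are illustrative choices:

```python
import numpy as np

def mean_shift(lik, center, half=8, iters=20):
    """Climb to the blob centre by iterating the weighted centroid of a window.

    lik    : 2-D likelihood map (higher = more object-like)
    center : (row, col) starting position
    half   : half-width of the flat search window
    """
    h, w = lik.shape
    cy, cx = center
    for _ in range(iters):
        y0, y1 = max(0, int(cy) - half), min(h, int(cy) + half + 1)
        x0, x1 = max(0, int(cx) - half), min(w, int(cx) + half + 1)
        win = lik[y0:y1, x0:x1]
        if win.sum() == 0:
            break
        ys, xs = np.mgrid[y0:y1, x0:x1]
        ny = (ys * win).sum() / win.sum()   # likelihood-weighted centroid
        nx = (xs * win).sum() / win.sum()
        if abs(ny - cy) < 1e-3 and abs(nx - cx) < 1e-3:
            break                            # converged
        cy, cx = ny, nx
    return cy, cx
```

Started anywhere inside the blob's basin of attraction, the window slides uphill and settles on the bright elliptical blob's centre; estimating the blob's orientation and axes is a separate step.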
     In Chapter 8, the research work of this dissertation is summarized and some recommendations for further research are presented.
