RGB-D Visual Odometry in Dynamic Environments Using Line Features
  • Title (Chinese): 动态环境下基于线特征的RGB-D视觉里程计
  • Authors: ZHANG Huijuan (张慧娟); FANG Zaojun (方灶军); YANG Guilin (杨桂林)
  • Affiliations: University of Chinese Academy of Sciences; Ningbo Institute of Material Technology and Engineering, Chinese Academy of Sciences; Zhejiang Key Laboratory of Robotics and Intelligent Manufacturing Equipment Technology
  • Keywords: simultaneous localization and mapping; visual odometry; line feature; dynamic environment; RGB-depth
  • Journal: Robot (机器人)
  • CNKI journal code: JQRR
  • Online publication date: 2018-04-13
  • Year: 2019
  • Volume: 41
  • Issue: 01
  • Pages: 77-84 (8 pages)
  • CN: 21-1137/TP
  • CNKI record ID: JQRR201901009
  • Funding: National Natural Science Foundation of China – Zhejiang Joint Fund (U1509202); National Key R&D Program of China (2017YFB1300400); Zhejiang Provincial Key R&D Program (2018C01086)
  • Language: Chinese
Abstract
Most RGB-D SLAM (simultaneous localization and mapping) methods assume that the environment is static. However, dynamic objects often appear in real-world environments and degrade SLAM performance. To address this problem, a line-feature-based RGB-D (RGB-depth) visual odometry method is proposed. It computes a static weight for each line feature in order to filter out dynamic line features, and estimates the camera pose from the remaining line features. The proposed method not only reduces the influence of dynamic objects, but also avoids the tracking failures caused by a scarcity of point features. Experiments on a public dataset show that, compared with a state-of-the-art method based on ORB (oriented FAST and rotated BRIEF) point features, the proposed method reduces the tracking error in dynamic environments by about 30%, improving the accuracy and robustness of visual odometry in dynamic environments.
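The weight-then-filter step described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the Cauchy-style kernel, the `scale` parameter, and the `w_min` threshold are all assumptions chosen for clarity; the residual here stands in for whatever per-line motion-inconsistency measure the method computes.

```python
import numpy as np

def static_weights(residuals, scale=0.05):
    """Map per-line motion residuals to static weights in (0, 1].

    Lines consistent with the dominant camera motion (small residual)
    get weights near 1; lines on moving objects get small weights.
    The Cauchy-style kernel and scale are illustrative assumptions.
    """
    r = np.asarray(residuals, dtype=float)
    return 1.0 / (1.0 + (r / scale) ** 2)

def filter_dynamic_lines(lines, residuals, w_min=0.5):
    """Keep only lines whose static weight reaches w_min."""
    w = static_weights(residuals)
    keep = w >= w_min
    return [ln for ln, k in zip(lines, keep) if k], w[keep]

# Toy example: four matched line features; the last one lies on a
# moving object, so its residual against the camera motion is large.
lines = ["L0", "L1", "L2", "L3"]
residuals = [0.01, 0.02, 0.00, 0.40]
static, weights = filter_dynamic_lines(lines, residuals)
print(static)  # the dynamic line "L3" is filtered out
```

The surviving lines (with their weights) would then feed a weighted pose estimation, so that even lines kept near the threshold contribute less than clearly static ones.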
