Research on Machine Vision Measurement Technology and Its Applications in the Intelligent Space for Service Robots
Abstract
With the development of technology and the accelerating aging of human society, the demand for service robots that can perform domestic tasks in the home, such as chatting, companionship, and fetching tea or water, has become increasingly urgent. Intelligent space technology, which takes knowledge distribution and intelligence distribution as its guiding ideas and distributes cameras, microphones, and temperature, humidity, and gas sensors throughout the environment via wireless networking, provides strong support for a service robot to fully perceive its environment, correctly understand people's intentions, and provide active services more effectively. The combination of intelligent space technology and service robot technology extends the robot's perception and decision-making abilities; at the same time, as a mobile sensing and actuation device, the service robot enriches the intelligent space's capabilities for information perception and service execution.
     Just as humans obtain about 80% of their useful information about the external environment through their eyes, the intelligent space uses cameras as its core sensors to perceive environmental information, and applies machine vision measurement techniques to accomplish most service tasks, such as recognizing and understanding human behavior, localizing and navigating the service robot, and grasping and delivering household objects such as medicines and cups. Machine vision measurement refers to image processing techniques that recover the camera's Euclidean motion (position and orientation) and the Euclidean structure of the scene from the image information of different views. This dissertation studies machine vision measurement techniques and their applications in the home-environment intelligent space; the main contents and results are summarized as follows:
     (1) The computation and decomposition of two kinds of homography, the homography from a world plane and the homography induced by a scene plane, are studied; decomposing these homographies is an effective way to recover the camera's relative pose and the scene structure from 2D image information. First, starting from the properties of the Schur complement, two sparse Levenberg-Marquardt (LM) iterative optimization methods are given, and the two kinds of homography are computed with the DLT method and the LM iterative method respectively. Then, for the homography from a world plane, the camera intrinsic parameters and the pose of the camera frame relative to the frame attached to the scene plane are recovered from a single image or from multiple (at least three) images, for the cases where the intrinsic parameters are partly known or completely unknown. For the homography induced by a scene plane, a numerical decomposition algorithm based on singular value decomposition (SVD) is proposed that yields the two physically realizable solutions for camera pose and scene structure; a decomposition algorithm for the case of known scene structure is also given, which yields the unique relative camera pose directly. Finally, the conversion between image moments in homogeneous image coordinates and in homogeneous projective coordinates is derived, it is proved that under a 2D affine transformation the normalized central geometric moments of two views form an absolutely symmetric contravariant tensor, and this result is used to derive a new method for computing the 2D affine homography.
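To make the homography-computation step concrete, the standard normalized DLT estimate of a homography from point correspondences can be sketched as follows. This is a generic NumPy sketch of the textbook algorithm, not the dissertation's exact implementation; the decomposition step described above would then be applied to the recovered matrix.

```python
import numpy as np

def normalize(pts):
    """Similarity transform: centroid at origin, mean distance sqrt(2)."""
    c = pts.mean(axis=0)
    s = np.sqrt(2) / np.mean(np.linalg.norm(pts - c, axis=1))
    T = np.array([[s, 0, -s * c[0]],
                  [0, s, -s * c[1]],
                  [0, 0, 1.0]])
    ph = np.hstack([pts, np.ones((len(pts), 1))]) @ T.T
    return ph, T

def dlt_homography(src, dst):
    """Estimate H (dst ~ H @ src) from >= 4 correspondences via SVD."""
    sh, Ts = normalize(src)
    dh, Td = normalize(dst)
    A = []
    for (x, y, w), (u, v, t) in zip(sh, dh):
        A.append([0, 0, 0, -t * x, -t * y, -t * w, v * x, v * y, v * w])
        A.append([t * x, t * y, t * w, 0, 0, 0, -u * x, -u * y, -u * w])
    # The homography is the right singular vector of the smallest singular value.
    _, _, Vt = np.linalg.svd(np.asarray(A))
    Hn = Vt[-1].reshape(3, 3)
    H = np.linalg.inv(Td) @ Hn @ Ts   # undo the normalizing transforms
    return H / H[2, 2]
```

An LM refinement of the reprojection error, as in the dissertation, would typically be run on top of this linear estimate.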
     (2) The problem of locating moving targets with the distributed camera system in the service robot intelligent space is studied. First, the robot is made to perform several known motions and the corresponding image points are extracted; the intrinsic and extrinsic parameters of the distributed cameras are then calibrated using the decomposition of the homography from the world plane of the ground. Next, based on the calibration results, a monocular localization algorithm for moving targets is designed; it is simple and fast, but its accuracy degrades when the target is far from the camera. Finally, a binocular localization algorithm is introduced. Since the simple linear triangulation method yields target points that do not lie on the ground, a constrained linear triangulation method is further given; to improve accuracy, the LM algorithm is used to refine the detected image points, and binocular localization is completed using the refined points and the constrained linear triangulation method.
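The binocular step rests on triangulation from two calibrated views. Below is a minimal sketch of the standard linear (DLT) triangulation from two projection matrices; the ground-plane constraint and the LM refinement of the image points described above are omitted, and the camera matrices in the usage example are synthetic assumptions.

```python
import numpy as np

def triangulate_linear(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point seen in two views.

    P1, P2 : 3x4 camera projection matrices.
    x1, x2 : (u, v) image points in the two views.
    Returns the 3D point in world coordinates.
    """
    # Each view contributes two homogeneous linear equations in X.
    A = np.stack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)   # null vector = homogeneous 3D point
    X = Vt[-1]
    return X[:3] / X[3]

# Usage with made-up cameras: identity intrinsics, unit baseline along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
```

The constrained variant in the dissertation would additionally force the recovered point onto the ground plane.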
     (3) An indoor service robot navigation method based on tracking a ceiling laser projection is studied. First, the kinematic model of the ceiling laser projector is established; from this model, the nonlinear mapping between the projector's joint angular displacements and velocities and the laser spot's displacement and velocity on the ground can be obtained. To transform the planned path from the world frame into the projector's base frame, a new extrinsic calibration method for the projector is proposed, so that the ceiling projector can project the planned path onto the ground. Then the mobile robot is abstracted as a 3-DOF manipulator and its kinematic model is established; the intrinsic and extrinsic parameters of the onboard camera are calibrated with a simple and effective method based on world-plane homography decomposition, and the ground plane π is computed with the onboard camera frame {c} as the reference frame. Finally, an adaptive compensation tracking control law and a nonlinear state feedback control law are designed so that the mobile robot visually servo-tracks the moving laser spot.
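The laser-spot tracking step can be visualized with a generic unicycle (nonholonomic) model of the mobile robot and a simple proportional pursuit law that drives it toward a moving spot. The gains and the control law here are illustrative assumptions, not the adaptive compensation or nonlinear state feedback laws of the dissertation.

```python
import numpy as np

def simulate_tracking(spot, steps=400, dt=0.02, k_v=1.5, k_w=4.0):
    """Unicycle robot (x, y, theta) chasing a moving point.

    spot(t) -> (x, y): position of the laser spot at time t.
    Control: linear speed proportional to distance, turn rate
    proportional to bearing error (made-up gains k_v, k_w).
    """
    x, y, th = 0.0, 0.0, 0.0
    for i in range(steps):
        sx, sy = spot(i * dt)
        dx, dy = sx - x, sy - y
        rho = np.hypot(dx, dy)                            # distance to spot
        alpha = np.arctan2(dy, dx) - th                   # bearing error
        alpha = np.arctan2(np.sin(alpha), np.cos(alpha))  # wrap to [-pi, pi]
        v = k_v * rho
        w = k_w * alpha
        x += v * np.cos(th) * dt                          # unicycle kinematics
        y += v * np.sin(th) * dt
        th += w * dt
    return x, y

# Spot drifting slowly along a line on the ground.
final = simulate_tracking(lambda t: (1.0 + 0.1 * t, 0.5))
```

With a purely proportional law the robot trails a slowly moving spot with a small steady-state lag; the dissertation's adaptive compensation law is aimed precisely at removing such tracking error.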
     (4) The localization and reading of QR Code artificial landmarks, an important carrier of knowledge distribution and intelligence distribution in the intelligent space, are studied. First, by function, the landmarks are divided into QR Code artificial object marks used for global semantic map building and QR Code artificial signposts used for local navigation map building, and the outer pattern and internal information encoding of each are designed. Then two landmark localization methods are given, using the blue rectangular frame and the red concentric-circle part of the QR Code outer pattern respectively. Finally, a task function is constructed from the solved landmark position, and a position-based visual servo control law is designed so that the service robot can read the QR Code artificial landmarks.
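Position-based visual servoing of this kind reduces to regulating a task function, for example the error between the landmark's current and desired positions in the camera frame. The proportional law below is a textbook sketch with made-up gain and poses, not the dissertation's specific control law, and rotation is ignored for brevity.

```python
import numpy as np

def pbvs_translation(p0, p_des, lam=2.0, dt=0.01, steps=500):
    """Regulate the landmark position p (camera frame) to p_des.

    Task function e = p - p_des.  Commanding the camera velocity
    v = lam * e makes p evolve as p_dot = -v (pure translation),
    so the error decays exponentially.
    """
    p = np.asarray(p0, float)
    p_des = np.asarray(p_des, float)
    for _ in range(steps):
        e = p - p_des
        v = lam * e        # camera velocity command
        p = p - v * dt     # landmark motion as seen from the camera
    return p

# Drive a landmark seen at (0.4, -0.3, 1.5) m to the desired reading pose.
p_final = pbvs_translation([0.4, -0.3, 1.5], [0.0, 0.0, 0.6])
```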
     (5) The searching, grasping, and delivery of objects in the home-environment intelligent space by a service robot equipped with a manipulator are studied. First, a QR Code artificial object mark that assists the robot's grasping operations is designed and its internal information is encoded; by converting the identification of household objects into the identification of the corresponding QR Codes, the object mark effectively reduces the difficulty of manipulation. Then the mobile robot with its manipulator is abstracted as a generalized manipulator; to control it in a unified way, its kinematic model is established and its inverse kinematics is solved both analytically and numerically. A new hand-eye calibration method is also proposed, in which the camera fixed on the end effector observes a simple 2D calibration target while the manipulator performs two groups of arbitrary motions. Finally, taking the nonholonomic constraint of the mobile base into account, a switching control law for object grasping is designed: the robot first approaches the object under a gaze constraint, and once it is sufficiently close, the pose of the current camera frame relative to the desired camera frame is obtained from homography decomposition together with the known camera motion, and the controller switches to a position-based "look-then-do" visual servo mode to complete the grasp.
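Hand-eye calibration from motion pairs is classically posed as AX = XB, where A is the camera motion, B the end-effector motion, and X the unknown hand-eye transform. As an illustration of why two motions suffice for the rotation part, the sketch below solves R_X from the rotation axes of two motion pairs; this is the classic axis-alignment construction, not the dissertation's own calibration method, and the translation part is omitted.

```python
import numpy as np

def rodrigues(axis, angle):
    """Rotation matrix from axis-angle (Rodrigues' formula)."""
    k = np.asarray(axis, float)
    k = k / np.linalg.norm(k)
    K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

def rot_axis(R):
    """Unit rotation axis of a rotation matrix (angle in (0, pi))."""
    k = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return k / np.linalg.norm(k)

def hand_eye_rotation(RA1, RB1, RA2, RB2):
    """Rotation part of AX = XB from two motion pairs.

    The axes satisfy axis(A_i) = R_X axis(B_i); with two independent
    motions, R_X is the map taking the frame built from the B axes
    onto the frame built from the A axes.
    """
    a1, a2 = rot_axis(RA1), rot_axis(RA2)
    b1, b2 = rot_axis(RB1), rot_axis(RB2)
    MA = np.column_stack([a1, a2, np.cross(a1, a2)])
    MB = np.column_stack([b1, b2, np.cross(b1, b2)])
    return MA @ np.linalg.inv(MB)
```

With noisy real data one would average over many motion pairs and re-project onto the rotation group, but the two-motion case above shows the minimal geometry involved.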
