Research on Facial Expression Recognition Algorithms Based on Tensor Representation
Abstract
As a basic way of expressing inner emotions, facial expression is an important part of non-verbal communication between people. In recent years, with the development of natural human-machine interaction and intelligent robots, automatic facial expression recognition has attracted increasing attention. This thesis studies feature extraction from appearance-based facial expression images for recognition, using tensor representation and decomposition techniques combined with graph-preserving manifold learning methods. The main contributions of this thesis are as follows:
     1. Formulate a tensor subspace model, on the basis of which we orthogonalize the tensor projections of traditional tensor algorithms and propose orthogonal tensor manifold learning algorithms; these achieve better recognition performance, and a theoretical explanation is given.
     1) By analyzing several dimensionality reduction algorithms based on "tensor-to-tensor" projections, we formulate a unified tensor subspace model that explicitly defines the basis of a tensor subspace (basis tensors), tensor subspace projection, and reconstruction. The model is a natural extension of the vector subspace model and offers a new perspective on tensor dimensionality reduction. Under this model, concepts and properties from vector dimensionality reduction algorithms (e.g., orthogonality, non-negativity, sparseness) carry over naturally to tensor algorithms, which can improve the performance of the corresponding tensor algorithms.
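The "tensor-to-tensor" projection and reconstruction that the model formalizes can be sketched for the second-order (matrix) case. This is a minimal numpy illustration, not the thesis's algorithm; the orthonormal factor matrices U1 and U2 below are illustrative stand-ins for learned per-mode basis matrices.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, r1, r2 = 32, 32, 5, 5

X = rng.standard_normal((m, n))                      # one image-like 2nd-order tensor sample
U1, _ = np.linalg.qr(rng.standard_normal((m, r1)))   # mode-1 basis (orthonormal columns)
U2, _ = np.linalg.qr(rng.standard_normal((n, r2)))   # mode-2 basis (orthonormal columns)

Y = U1.T @ X @ U2        # "tensor-to-tensor" projection: the feature is itself a 2nd-order tensor
X_hat = U1 @ Y @ U2.T    # reconstruction back in the original image space

print(Y.shape)           # (5, 5): the low-dimensional tensor feature
```

Because the sample is projected mode by mode rather than vectorized, the spatial arrangement of pixels survives in the feature Y, which is exactly what vector-subspace methods discard.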
     2) Under the introduced tensor subspace model, we investigate the orthogonality of tensor projections and extend existing tensor-based manifold learning algorithms to orthogonal versions, proposing Orthogonal Tensor Neighborhood Preserving Embedding (OTNPE) and Orthogonal Tensor Marginal Fisher Analysis (OTMFA). Both theoretical analysis and experimental results show that orthogonalization enables traditional tensor-based manifold learning algorithms to preserve the facial expression manifold much better, thereby improving facial expression representation and recognition.
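A sketch of what per-mode orthogonalization buys, under the assumption that it is applied to each mode's projection matrix (here via QR, which is one common choice, not necessarily the thesis's exact procedure): once every mode matrix has orthonormal columns, the induced rank-one basis tensors are mutually orthonormal under the Frobenius inner product.

```python
import numpy as np

rng = np.random.default_rng(1)
W1 = rng.standard_normal((32, 5))   # non-orthogonal mode-1 projection directions
W2 = rng.standard_normal((32, 5))   # non-orthogonal mode-2 projection directions

# Orthogonalize each mode matrix: same column span, orthonormal columns.
Q1, _ = np.linalg.qr(W1)
Q2, _ = np.linalg.qr(W2)

# The induced basis tensors q1_i q2_j^T are then orthonormal under the
# Frobenius inner product <A, B> = sum(A * B).
B00 = np.outer(Q1[:, 0], Q2[:, 0])
B01 = np.outer(Q1[:, 0], Q2[:, 1])
print(np.sum(B00 * B01))   # ~0: distinct basis tensors are orthogonal
```

Orthonormal basis tensors mean that distances between projected features are not distorted by correlated projection directions, which is the mechanism behind the better manifold preservation claimed above.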
     2. Propose the Tensor Rank-One Differential Graph Preserving Analysis (TR1DGPA) algorithm, which combines the tensor rank-one decomposition technique with the graph-preserving manifold learning criterion.
     First, a penalty graph representing pairwise inter-class discrimination is constructed; meanwhile, an intra-class affinity graph is built using Locally Linear Embedding (LLE), and the differential graph preserving objective is formed as the difference of the two. Under this objective, TR1DGPA decomposes each tensor sample into a weighted linear combination of a common set of rank-one tensors, where the weight coefficients form the low-dimensional feature of the sample. TR1DGPA has the following properties: (1) it preserves the internal spatial arrangement of the tensor samples; (2) it preserves the intra-class local manifold; (3) it enhances pairwise inter-class separability. We show that TR1DGPA converges well and has lower computational complexity than both vector-based algorithms and "tensor-to-tensor" projection-based algorithms. Experiments show that, compared with several earlier related algorithms, TR1DGPA is more effective for appearance-based facial expression recognition.
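The differential graph construction can be sketched on toy data. This is a simplified stand-in: binary k-nearest-neighbour affinities replace the LLE reconstruction weights of the intra-class graph, and the trade-off weight beta is an illustrative parameter. Minimizing tr(Y L_diff Y^T) with L_diff = L_intra - beta * L_pen pulls same-class neighbours together while pushing inter-class neighbours apart.

```python
import numpy as np

def knn_affinity(X, labels, k, same_class):
    """Binary k-NN affinity restricted to same-class or different-class pairs."""
    n = len(X)
    W = np.zeros((n, n))
    D = np.linalg.norm(X[:, None] - X[None, :], axis=2)  # pairwise distances
    for i in range(n):
        mask = (labels == labels[i]) if same_class else (labels != labels[i])
        mask[i] = False
        idx = np.argsort(np.where(mask, D[i], np.inf))[:k]  # k nearest allowed samples
        W[i, idx] = W[idx, i] = 1.0
    return W

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 1, (20, 10)), rng.normal(3, 1, (20, 10))])
labels = np.array([0] * 20 + [1] * 20)

W_intra = knn_affinity(X, labels, k=5, same_class=True)   # stand-in for LLE weights
W_pen = knn_affinity(X, labels, k=5, same_class=False)    # pairwise inter-class penalty graph

L_intra = np.diag(W_intra.sum(1)) - W_intra               # graph Laplacians
L_pen = np.diag(W_pen.sum(1)) - W_pen
beta = 0.5
L_diff = L_intra - beta * L_pen   # differential objective: minimize tr(Y L_diff Y^T)
```

In TR1DGPA this objective is minimized not over free embeddings Y but over the weight coefficients of a shared rank-one tensor decomposition, which is what keeps the spatial structure of the samples in play.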
     3. Propose the Orthogonal Tensor Rank-One Differential Graph Preserving Projections (OTR1DGPP) algorithm.
     OTR1DGPP aims to obtain a set of orthogonal rank-one basis tensors for projection, following the tensor rank-one decomposition principle under the differential graph preserving objective. The algorithm introduces a new orthogonalization scheme that is more flexible than earlier comparable schemes and converges well. Experiments show that OTR1DGPP achieves better facial expression recognition results than several earlier orthogonal algorithms.
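One generic way to enforce orthogonality across successively computed basis tensors can be sketched as Gram-Schmidt under the Frobenius inner product. This is an illustrative baseline, not the thesis's new scheme (and note the caveat in the comment: a deflated tensor is generally no longer exactly rank one, which is precisely why a dedicated orthogonalization scheme for rank-one bases is non-trivial).

```python
import numpy as np

def orthogonalize_bases(bases):
    """Gram-Schmidt on flattened basis tensors under the Frobenius inner product.

    Caveat: after deflation against earlier bases, a rank-one tensor is
    generally no longer rank one; this sketch only shows the orthogonality
    constraint itself.
    """
    ortho = []
    for B in bases:
        v = B.ravel().astype(float)
        for u in ortho:
            v -= (v @ u) * u      # remove components along accepted bases
        v /= np.linalg.norm(v)    # renormalize
        ortho.append(v)
    return [u.reshape(bases[0].shape) for u in ortho]

rng = np.random.default_rng(3)
bases = [np.outer(rng.standard_normal(8), rng.standard_normal(8)) for _ in range(4)]
Q = orthogonalize_bases(bases)

gram = np.array([[np.sum(a * b) for b in Q] for a in Q])
print(np.allclose(gram, np.eye(4)))   # True: basis tensors are now orthonormal
```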
     4. Propose the Discriminant Neighborhood Preserving Non-negative Tensor Factorization (DNPNTF) algorithm, which combines Non-negative Tensor Factorization (NTF) with the graph-preserving manifold learning criterion.
     NTF decomposes an ensemble of non-negative tensors into linear combinations of a group of non-negative basis tensors with corresponding weight coefficients. However, NTF is driven purely by reconstruction and does not consider the manifold structure or discriminative information of the original samples. DNPNTF adds a graph-preserving constraint to NTF so that the resulting non-negative basis tensors preserve the intra-class local manifold while maintaining inter-class separability. The solution is obtained by gradient descent with a multiplicative update rule that guarantees the non-negativity of the solutions, and a detailed convergence proof is given. Experiments on facial expression recognition verify that DNPNTF is more effective than related non-negative algorithms, and the non-negative basis images obtained by DNPNTF are sparser.
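The flavour of a graph-regularized multiplicative update can be sketched in the second-order (matrix) case, i.e. graph-regularized NMF rather than the full tensor factorization of DNPNTF. The objective here is ||V - WH||_F^2 + lam * tr(H L H^T) with L = D - A the Laplacian of a non-negative sample affinity graph A; all quantities in the update are non-negative, so W and H stay non-negative by construction.

```python
import numpy as np

def graph_regularized_nmf(V, A, r, lam=0.1, iters=200, seed=0):
    """Multiplicative updates for min ||V - WH||_F^2 + lam*tr(H L H^T), W,H >= 0.

    A second-order sketch in the spirit of DNPNTF's graph-preserving
    constraint; A is a non-negative affinity graph over the samples (columns
    of V), and L = D - A is its Laplacian.
    """
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r)) + 1e-3
    H = rng.random((r, n)) + 1e-3
    D = np.diag(A.sum(1))
    eps = 1e-9
    for _ in range(iters):
        # Standard NMF update for the basis, graph-regularized update for H:
        W *= (V @ H.T) / (W @ H @ H.T + eps)
        H *= (W.T @ V + lam * H @ A) / (W.T @ W @ H + lam * H @ D + eps)
    return W, H

rng = np.random.default_rng(4)
V = rng.random((30, 40))                       # non-negative data, one sample per column
A = (rng.random((40, 40)) < 0.1).astype(float)  # illustrative sparse affinity graph
A = np.maximum(A, A.T)
np.fill_diagonal(A, 0)

W, H = graph_regularized_nmf(V, A, r=5)
print(W.min() >= 0 and H.min() >= 0)   # True: factors stay non-negative
```

Splitting the Laplacian term into its positive parts (H@A in the numerator, H@D in the denominator) is what keeps every factor of the update non-negative, mirroring the construction used to prove non-negativity and convergence in the thesis.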
