Research on Several Key Issues in Large-Sample Classification Techniques for Pattern Recognition
Abstract
Pattern recognition (PR) is an important research task in machine learning, and classification is a fundamental topic within pattern recognition. Although classification techniques for large-scale datasets have been studied in depth, and many of the resulting methods are widely applied in practice, a number of problems still call for further exploration. This work therefore focuses on large-scale datasets and studies three aspects in depth: learning efficiency, decision efficiency, and privacy preservation. The main contributions are:
     (1) Classification is examined from the perspective of the classification margin, and related techniques are combined to speed up classifier learning. Based on a new notion of margin, the vector-angular margin, the Maximum Vector-Angular Margin Classifier (MAMC) is proposed; its kernelized form is equivalent to the Center-Constrained Minimum Enclosing Ball (CC-MEB), so the Core Vector Machine (CVM) extends it to MAM-CVM, which trains quickly on large samples. In addition, a classification margin built from the difference of densities (DoD) between samples yields the Maximum Margin Logistic Vector Machine (MMLVM), whose generalization error bound guarantees better performance on large datasets. Experiments on real-world datasets confirm the effectiveness of both methods; a core-set sketch of the underlying MEB computation is given below.
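To make the CVM connection concrete, here is a minimal sketch of the classic Bădoiu–Clarkson core-set iteration for approximating a Minimum Enclosing Ball. The function name, fixed iteration count, and toy data are illustrative assumptions, not the thesis implementation.

```python
import numpy as np

def meb_core_set(X, n_iter=100):
    """(1+eps)-style MEB approximation in the Badoiu-Clarkson spirit:
    repeatedly pull the center toward the farthest point; the points
    touched along the way form the core set that CVM-style training
    keeps. A toy sketch, not the thesis algorithm."""
    c = X[0].astype(float).copy()
    core = {0}
    for i in range(1, n_iter + 1):
        d = np.linalg.norm(X - c, axis=1)    # distances to current center
        far = int(np.argmax(d))              # farthest sample
        core.add(far)
        c += (X[far] - c) / (i + 1)          # shrinking step toward it
    r = np.linalg.norm(X - c, axis=1).max()  # enclosing radius
    return c, r, sorted(core)

# toy usage on synthetic 2-D data
X = np.random.default_rng(0).normal(size=(1000, 2))
center, radius, core = meb_core_set(X)
print(radius, len(core))
```

The point of the core set is that its size depends on the approximation tolerance rather than on n, which is what makes CVM-style training feasible on large samples.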
     (2) Classification is examined from the privacy-preservation perspective, and methods that both protect privacy and decide quickly are proposed. It is proved that Gaussian kernel density estimation under the Integrated Squared Error (ISE) criterion is equivalent to the Minimum Enclosing Ball (MEB); on this basis, a privacy-cloud-calibrated MEB learning method (PCC-MEB) is proposed, and a fuzzy membership function is introduced (FPCC-MEB) to resolve the unclassifiable regions that arise in two-class and multi-class problems. Furthermore, because one-class Support Vector Data Description (SVDD) decides slowly and risks leaking model privacy, a fast decision approach, FDA-SVDD, is proposed: starting from the kernel feature space, it uses the preimage, in the original sample space, of the kernel hypersphere's center, reducing the decision complexity of SVDD from O(n) to O(1) while protecting the model's privacy. A sketch of such a preimage-based decision appears below.
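The O(1) decision idea can be illustrated as follows. Using the alpha-weighted input-space mean as the preimage and calibrating the threshold on the support vectors are loud simplifying assumptions standing in for the thesis's actual preimage construction; the dual weights below are placeholder values.

```python
import numpy as np

def gauss(X, y, gamma=0.5):
    """Gaussian kernel between the rows of X and a single point y."""
    return np.exp(-gamma * ((X - y) ** 2).sum(axis=1))

# assumed outputs of an already-trained SVDD (hypothetical values)
SV = np.random.default_rng(1).normal(size=(50, 2))   # support vectors
alpha = np.full(50, 1 / 50)                          # dual weights, sum to 1

# preimage of the center a = sum_i alpha_i * phi(x_i): here simply the
# alpha-weighted input-space mean, a common first approximation
c_pre = alpha @ SV

# calibrate the acceptance threshold once, on the support vectors
rho = gauss(SV, c_pre).min()

def fast_decision(x):
    """O(1): a single kernel evaluation against c_pre replaces the
    O(n) sum over all support vectors, and the support vectors
    themselves never need to be shipped with the model."""
    return gauss(x[None, :], c_pre)[0] >= rho
```

Because only c_pre and rho are published, the training samples behind the model stay hidden, which is the model-privacy benefit the paragraph refers to.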
     (3) A fast ensemble model of Linear Support Vector Machines (LSVMs), FMELSVM, is proposed for nonlinear problems. LSVM is algorithmically simple and fast in both training and testing, but it cannot solve linearly inseparable problems. Building on LSVM, FMELSVM fits a nonlinear decision function with a nonlinear combination of Radial Basis Functions (RBFs), and maximizing the cross-entropy log-likelihood of the training samples by gradient descent yields an optimized solution quickly and effectively. Experiments on real-world datasets show that the model improves the nonlinear capability of LSVM and raises both training and decision efficiency; a sketch of the RBF-gated combination follows.
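The following sketch shows one way such a combination can be realized: pretrained linear classifiers are mixed through RBF gates, and only the mixing weights are learned by gradient descent on the cross-entropy. The gating form, learning rate, and the assumption that the linear experts (W, b) and centers mu are given in advance are all illustrative choices, not the thesis's FMELSVM code.

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def gates(X, mu, gamma):
    """RBF gate values for every sample/center pair, shape (n, M)."""
    return np.exp(-gamma * ((X[:, None, :] - mu[None]) ** 2).sum(-1))

def train_mix(X, y, W, b, mu, gamma=1.0, lr=0.1, epochs=500):
    """Learn mixing weights v for f(x) = sigmoid(sum_m v_m g_m(x) z_m(x)),
    where z_m are the scores of M pretrained linear SVMs (rows of W,
    offsets b) and g_m are RBF gates at centers mu. Plain gradient
    descent on the cross-entropy of labels y in {0, 1}."""
    H = gates(X, mu, gamma) * (X @ W.T + b)  # gated expert scores, (n, M)
    v = np.zeros(W.shape[0])
    for _ in range(epochs):
        p = sigmoid(H @ v)                   # current predictions
        v -= lr * H.T @ (p - y) / len(y)     # cross-entropy gradient step
    return v
```

Since the objective is convex in v for fixed experts and gates, gradient descent reaches the optimum efficiently, matching the paragraph's claim of fast, effective optimization.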
     (4) Fast learning of generalized hyperspheres is realized. Taking generalized soft-margin MEB models as the starting point, a fast learning approach, FL-GMEB, is proposed. Because the inequality constraints in the dual of the generalized MEB differ from those of a standard MEB, the problem cannot be treated as an MEB, and CVM cannot be used directly. FL-GMEB therefore relaxes the inequality constraints so that the problem becomes equivalent to a CC-MEB, obtains its core set (CS) with CVM, enlarges the CS into an extended core set (ECS) using the inverse idea of Locally Linear Embedding (LLE), and finally takes the optimized weights on the ECS as an approximate solution of the generalized MEB. The result is a soft hypersphere that preserves the local structure of the sample boundary, improving robustness to outliers near the boundary; the LLE reconstruction step is sketched below.
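The LLE building block behind the ECS expansion is the computation of local reconstruction weights; a standard sketch follows, with the regularization constant as an assumed detail and the expansion itself left by analogy.

```python
import numpy as np

def lle_weights(x, N, reg=1e-3):
    """Reconstruction weights of LLE (Roweis & Saul): solve
    min ||x - sum_j w_j n_j||^2 subject to sum_j w_j = 1 via the
    regularized local Gram system. This neighbor-weighting primitive
    is what lets core-set points pull their boundary neighbors into
    the extended core set."""
    D = N - x                                 # neighbors shifted to x, (k, d)
    G = D @ D.T                               # local Gram matrix, (k, k)
    G += reg * np.trace(G) * np.eye(len(N))   # regularize for stability
    w = np.linalg.solve(G, np.ones(len(N)))
    return w / w.sum()                        # enforce the sum-to-one constraint
```

Reusing these weights preserves each boundary point's local geometry in the final soft hypersphere, which is the source of the robustness to boundary outliers claimed above.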
