Research on Content-Based Product Image Classification Methods
Abstract
Content-based image classification aims to assign semantic categories to images automatically from their visual features. It must overcome adverse effects such as within-class variation, occlusion, pose variation, and background clutter, and it remains one of the most challenging problems in computer vision. In E-commerce, automatic content-based product image classification supports fast product querying, product placement strategies, and intelligent recommendation of products of interest to users; it can thereby substantially improve the overall effectiveness of the E-commerce market and is a pressing requirement of E-commerce intelligence. This dissertation studies content-based product image classification with discriminative classification models. The main work is as follows:
     (1) To classify online products quickly and automatically by some attribute of interest (for example, whether women's shoes are round-toe or pointed-toe, or whether a T-shirt has a round or V-shaped neckline) or by product category, classification schemes are developed based on class-specific descriptors and image-to-class nearest-neighbor classification. Each product image category is modeled statistically; the distance between a test image and each class model is computed in the feature space, and the category nearest to the query image is taken as the final classification result. Two kinds of approaches are proposed for class descriptor construction and image-to-class nearest-neighbor classification:
     ① Global-feature schemes. Using two complementary global features, PHOG (Pyramid Histogram Of Gradients) and PHOW (Pyramid Histogram Of visual Words), two class-specific descriptors are constructed: CDDP (Class-specific Descriptor with Distribution Parameters) and CDHFM (Class-specific Descriptor with Hierarchical Feature Matching). Products are then classified automatically by computing the image-to-class distances between the descriptor of a test image and each class-specific descriptor. The computation is simple, and the classification performance improves on that reported in the related literature.
     ② Local-feature scheme. To avoid the quantization error introduced when building global features, product images and image classes are treated as orderless sets of independent, identically distributed local descriptors, and an image-to-class nearest-neighbor classifier is employed. The local feature descriptors of each category are clustered hierarchically to speed up the image-to-class distance computation, and the trade-off between classification accuracy and speed can be tuned flexibly by setting the number of clustering levels and the class filter ratio.
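The image-to-class nearest-neighbor rule above can be sketched as follows. This is a minimal illustration, not the dissertation's exact class statistical model: the class names, descriptor pools, and toy 2-D "descriptors" are all hypothetical, and the image-to-class distance is taken, NBNN-style, as the sum of each query descriptor's squared distance to its nearest descriptor pooled from the whole class.

```python
import numpy as np

def image_to_class_distance(query_desc, class_desc):
    """Sum, over each query descriptor, of its squared Euclidean
    distance to the nearest descriptor pooled from the class."""
    # pairwise squared distances, shape (n_query, n_class)
    d2 = ((query_desc[:, None, :] - class_desc[None, :, :]) ** 2).sum(axis=2)
    return d2.min(axis=1).sum()

def classify(query_desc, class_pools):
    """Assign the query image to the class with the smallest
    image-to-class distance."""
    return min(class_pools,
               key=lambda c: image_to_class_distance(query_desc, class_pools[c]))

# Hypothetical toy data: 2-D "descriptors" for two product classes.
rng = np.random.default_rng(0)
pools = {
    "round_toe": rng.normal(0.0, 0.3, size=(50, 2)),
    "pointed_toe": rng.normal(2.0, 0.3, size=(50, 2)),
}
query = rng.normal(2.0, 0.3, size=(10, 2))  # drawn near the second class
print(classify(query, pools))  # → pointed_toe
```

Filtering classes by a coarse first clustering level before computing exact distances, as the scheme above describes, would prune most calls to `image_to_class_distance`.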
     (2) Constructing class-specific descriptors requires a fairly large number of labeled samples. For product classification with few labeled (training) samples, data-driven kernel construction methods are explored: on top of the BOW (Bag of Words) model, a histogram kernel function based on a Weighted Quadratic Chi-squared (WQC) distance is designed and used with kernel support vector machines. With small training sets, the proposed WQC histogram kernel offers a clear advantage for product image classification.
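A chi-squared-style histogram kernel of the kind described can be sketched as below, assuming L1-normalised bag-of-words histograms. The per-bin weight vector `w` is a stand-in: the abstract does not specify the exact WQC weighting, so uniform weights are used here purely for illustration. The resulting Gram matrix could be passed to an SVM as a precomputed kernel.

```python
import numpy as np

def weighted_chi2(h1, h2, w, eps=1e-12):
    """Weighted symmetric chi-squared distance between two
    L1-normalised histograms; w holds per-bin weights
    (uniform here, standing in for the WQC weighting)."""
    return 0.5 * np.sum(w * (h1 - h2) ** 2 / (h1 + h2 + eps))

def chi2_kernel(H1, H2, w, gamma=1.0):
    """Gram matrix K[i, j] = exp(-gamma * d_chi2(H1[i], H2[j])),
    usable as a precomputed kernel in a support vector machine."""
    K = np.empty((len(H1), len(H2)))
    for i, a in enumerate(H1):
        for j, b in enumerate(H2):
            K[i, j] = np.exp(-gamma * weighted_chi2(a, b, w))
    return K

# Toy bag-of-words histograms (rows sum to 1); the first two are similar.
H = np.array([[0.7, 0.2, 0.1],
              [0.6, 0.3, 0.1],
              [0.1, 0.2, 0.7]])
w = np.ones(3)            # hypothetical uniform bin weights
K = chi2_kernel(H, H, w)
print(np.round(K, 3))     # similar histograms get kernel values near 1
```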
     (3) Considering the complexity of product image classification, such as the large number of categories, large within-class variation, and diverse classification criteria, multi-feature combination methods are designed to boost classification performance. ① Multiple-kernel combination. To avoid the tedious and difficult joint optimization of traditional multiple kernel learning, a product image classification scheme based on (centered) kernel empirical alignment is proposed. ② Multiple-classifier combination. A framework for decision-level fusion of heterogeneous strong classifiers is built, and a two-level cascaded SVM classification algorithm is proposed for product image classification. Both proposed combination methods exploit the complementarity of the features and improve product image classification performance more effectively than traditional combination methods.
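The kernel-alignment idea can be illustrated with a simple heuristic: weight each centered base kernel by its empirical alignment with the ideal label kernel y yᵀ, then take the weighted sum. This is a sketch of the general alignment technique under that assumption, not the dissertation's exact combination rule, and all data below are toy values.

```python
import numpy as np

def center(K):
    """Double-center a Gram matrix (feature-space mean removal)."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return H @ K @ H

def alignment(K, T):
    """Empirical alignment <K, T>_F / (||K||_F * ||T||_F)."""
    return np.sum(K * T) / (np.linalg.norm(K) * np.linalg.norm(T))

def combine_kernels(kernels, y):
    """Heuristic sketch: weight each centered base kernel by its
    (non-negative) alignment with the ideal target kernel y y^T."""
    T = center(np.outer(y, y).astype(float))
    w = np.array([max(alignment(center(K), T), 0.0) for K in kernels])
    w = w / w.sum()
    return sum(wi * Ki for wi, Ki in zip(w, kernels)), w

# Toy data: one label-aligned base kernel and one label-agnostic one.
y = np.array([1, 1, -1, -1])
K_good = np.outer(y, y).astype(float)   # perfectly aligned with the labels
rng = np.random.default_rng(1)
A = rng.normal(size=(4, 4))
K_noise = A @ A.T                       # PSD but unrelated to the labels
K, w = combine_kernels([K_good, K_noise], y)
print(np.round(w, 3))  # the informative kernel receives the larger weight
```

Because the weights come from a closed-form alignment score rather than a joint optimization, this kind of combination sidesteps the expensive training loop of classical multiple kernel learning.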
