Semi-Supervised Low-Rank Kernel Learning Algorithm Based on Sparse Coding (基于稀疏编码的半监督低秩核学习算法)
  • Authors: YANG Shuo; LIU Bing; ZHOU Yong (杨烁; 刘兵; 周勇)
  • Affiliation: School of Computer Science and Technology, China University of Mining and Technology
  • Keywords: semi-supervised learning; pairwise constraints; autoencoder; sparse coding
  • Journal: Computer Engineering and Applications (计算机工程与应用)
  • Published online: 2018-09-14
  • Year/Volume/Issue: 2019; v.55, No.926; Issue 07
  • Pages: 180-186 (7 pages)
  • Funding: National Natural Science Foundation of China (No.61572505); Jiangsu Province Six Talent Peaks Project (No.2015-DZXX-010)
  • Language: Chinese
  • CNKI article ID: JSGG201907028
Abstract
Traditional semi-supervised non-parametric kernel learning methods build their models on the manifold assumption and pairwise constraint information. For complex, high-dimensional, sparse data, however, the resulting algorithms are computationally expensive. To address this, a non-parametric kernel learning algorithm based on a sparse autoencoder is proposed. By introducing a sparsity constraint through the sparse autoencoder, the method not only improves the robustness of non-parametric kernel learning and avoids overfitting, but also raises its learning efficiency. Kernel clustering experiments with the learned kernels validate the proposed algorithm: the results show that incorporating a sparse autoencoder into the non-parametric kernel learning model improves kernel clustering quality and the efficiency of the semi-supervised non-parametric kernel learning algorithm.
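The sparsity constraint the abstract attributes to the autoencoder is commonly realized as a KL-divergence penalty that pushes the average hidden activation toward a small target value. The sketch below illustrates that idea in NumPy; the layer sizes, sparsity target `rho`, penalty weight `beta`, and weight decay `lam` are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class SparseAutoencoder:
    """One-hidden-layer autoencoder with a KL-divergence sparsity penalty.

    Illustrative sketch only: hyperparameters are arbitrary choices,
    not the settings used in the paper.
    """
    def __init__(self, n_in, n_hidden, rho=0.05, beta=3.0, lam=1e-4):
        self.rho, self.beta, self.lam = rho, beta, lam
        self.W1 = rng.normal(0, 0.1, (n_hidden, n_in))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0, 0.1, (n_in, n_hidden))
        self.b2 = np.zeros(n_in)

    def encode(self, X):
        return sigmoid(X @ self.W1.T + self.b1)

    def loss_and_grads(self, X):
        m = X.shape[0]
        A = self.encode(X)                       # hidden activations
        Xhat = A @ self.W2.T + self.b2           # linear reconstruction
        R = Xhat - X
        rho_hat = A.mean(axis=0)                 # mean activation per unit
        # loss = reconstruction + KL sparsity penalty + weight decay
        kl = np.sum(self.rho * np.log(self.rho / rho_hat)
                    + (1 - self.rho) * np.log((1 - self.rho) / (1 - rho_hat)))
        loss = (0.5 * np.sum(R ** 2) / m + self.beta * kl
                + 0.5 * self.lam * (np.sum(self.W1 ** 2) + np.sum(self.W2 ** 2)))
        # backpropagation
        dXhat = R / m
        dW2 = dXhat.T @ A + self.lam * self.W2
        db2 = dXhat.sum(axis=0)
        dA = dXhat @ self.W2
        dA += self.beta * (-self.rho / rho_hat
                           + (1 - self.rho) / (1 - rho_hat)) / m
        dZ1 = dA * A * (1 - A)                   # sigmoid derivative
        dW1 = dZ1.T @ X + self.lam * self.W1
        db1 = dZ1.sum(axis=0)
        return loss, (dW1, db1, dW2, db2)

    def fit(self, X, lr=0.1, epochs=300):
        for _ in range(epochs):
            loss, (dW1, db1, dW2, db2) = self.loss_and_grads(X)
            self.W1 -= lr * dW1; self.b1 -= lr * db1
            self.W2 -= lr * dW2; self.b2 -= lr * db2
        return loss

# Synthetic data, stand-in for the high-dimensional sparse inputs in the paper.
X = rng.normal(size=(100, 20))
ae = SparseAutoencoder(20, 8)
loss0, _ = ae.loss_and_grads(X)
loss_final = ae.fit(X)
```

The hidden codes `ae.encode(X)` would then serve as the compact representation fed into the kernel learning step.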
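The low-rank side of the method can be illustrated in the same spirit: encoding n points into r-dimensional codes Z induces a positive semidefinite kernel K = ZZᵀ of rank at most r, against which pairwise constraints can be checked (must-link pairs should receive larger kernel values than cannot-link pairs). The clustered codes below are synthetic, not from the paper's experiments.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical low-rank codes: two clusters in a 3-D latent space.
Z = np.vstack([rng.normal(0, 0.1, (5, 3)) + [1, 0, 0],
               rng.normal(0, 0.1, (5, 3)) + [0, 1, 0]])
K = Z @ Z.T                      # low-rank kernel: rank(K) <= 3, PSD

# Must-link pairs (same cluster) vs. cannot-link pairs (different clusters).
must_link = [(0, 1), (5, 6)]
cannot_link = [(0, 5), (1, 6)]
ml = np.mean([K[i, j] for i, j in must_link])
cl = np.mean([K[i, j] for i, j in cannot_link])
```

Because K factors through the r-dimensional codes, downstream kernel clustering scales with r rather than n, which is the efficiency gain the abstract claims.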
