Situation, Trends and Prospects of Deep Learning Applied to Cyberspace Security
  • Authors: Zhang Yuqing; Dong Ying; Liu Caiyun; Lei Kenan; Sun Hongyu
  • Affiliations: National Computer Network Intrusion Protection Center, University of Chinese Academy of Sciences; School of Cyber Engineering, Xidian University
  • Keywords: deep learning; cyberspace security; attacks and defenses; application security; network security
  • Journal: Journal of Computer Research and Development (计算机研究与发展)
  • Journal code: JFYZ
  • Publication date: 2018-01-12
  • Year: 2018
  • Volume: 55, Issue: 6
  • Pages: 3-28 (26 pages)
  • Article ID: JFYZ201806001
  • CN: 11-1777/TP
  • Funding: National Key Research and Development Program of China (2016YFB0800703); National Natural Science Foundation of China (61572460, 61272481); Open Project of the State Key Laboratory of Information Security (2017-ZD-01); NDRC Information Security Special Project ((2012)1424)
  • Language: Chinese
Abstract
In recent years, research on deep learning applied to cyberspace security has drawn increasing attention from scholars at home and abroad. This survey analyzes the current research situation and trends of deep learning applied to cyberspace security in terms of classification algorithms, feature extraction, and learning performance. At present, deep learning is mainly applied to malware detection and intrusion detection, and this survey identifies the open problems of these applications: feature selection, which requires extracting more comprehensive features from raw data; self-adaptability, which can be achieved by an early-exit strategy that updates the model in real time; and interpretability, which can be achieved by influence functions that relate features to classification labels. Next, the top ten obstacles and opportunities in deep learning research are summarized. On this basis, the top ten obstacles and opportunities of deep learning applied to cyberspace security are proposed for the first time and grouped into three categories: 1) algorithm vulnerability, including the susceptibility of deep learning models to adversarial attacks and privacy-theft attacks; 2) sequence-model problems, including program syntax analysis, program code generation, and long-term dependencies in sequence modeling; 3) learning performance problems, including poor interpretability and traceability, poor self-adaptability and self-learning ability, false positives, and imbalanced datasets. The main obstacles among the ten, together with their candidate solutions, are analyzed: classification applications are vulnerable to adversarial attacks, for which the most effective defense is adversarial training; security applications built on collaborative deep learning are vulnerable to privacy-theft attacks, for which the teacher-student model is a promising defense direction. Finally, future research trends of deep learning applied to cyberspace security are discussed.
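The abstract singles out adversarial training as the most effective defense for classification models under adversarial attack. A minimal sketch of that idea on a toy numpy logistic-regression "detector" (the data, model, and epsilon value here are illustrative assumptions, not from the paper): an FGSM-style attack perturbs each input along the sign of the input gradient of the loss, and adversarial training simply trains on those perturbed inputs instead of the clean ones.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy binary data: class 0 clustered near -1, class 1 near +1
# (a stand-in for benign/malicious feature vectors).
X = np.vstack([rng.normal(-1, 0.5, (200, 2)), rng.normal(1, 0.5, (200, 2))])
y = np.concatenate([np.zeros(200), np.ones(200)])

def fgsm(X, y, w, b, eps=0.5):
    # Gradient of the cross-entropy loss w.r.t. the INPUT is (p - y) * w;
    # FGSM steps each input by eps along the sign of that gradient.
    grad_x = (sigmoid(X @ w + b) - y)[:, None] * w[None, :]
    return X + eps * np.sign(grad_x)

def train(X, y, adversarial=False, eps=0.5, epochs=200, lr=0.1):
    w, b = np.zeros(2), 0.0
    for _ in range(epochs):
        # Adversarial training: fit the model on perturbed inputs.
        Xb = fgsm(X, y, w, b, eps) if adversarial else X
        p = sigmoid(Xb @ w + b)
        w -= lr * Xb.T @ (p - y) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

def accuracy(X, y, w, b):
    return np.mean((sigmoid(X @ w + b) > 0.5) == y)

w0, b0 = train(X, y)                    # standard training
w1, b1 = train(X, y, adversarial=True)  # adversarial training

print("clean accuracy (standard model):   ", accuracy(X, y, w0, b0))
print("attacked accuracy (standard model):", accuracy(fgsm(X, y, w0, b0), y, w0, b0))
print("attacked accuracy (adv-trained):   ", accuracy(fgsm(X, y, w1, b1), y, w1, b1))
```

Running the sketch shows accuracy of the standard model dropping sharply under attack; the adversarially trained model is typically less affected at the same epsilon, because training on worst-case perturbations pushes the decision boundary away from the data.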
引文
[1]LeCun Y,Bengio Y,Hinton G.Deep learning[J].Nature,2015,521(7553):436-444
    [2]Goodfellow I,Bengio Y,Courville A.Deep Learning[M].Cambridge,MA:MIT Press,2016:528-566
    [3]Deng Li,Yu Dong.Deep learning:Methods and applications[J].Foundations and Trends in Signal Processing,2014,7(3/4):197-387
    [4]Krizhevsky A,Sutskever I,Hinton G E.ImageNet classification with deep convolutional neural networks[J].Communications of the ACM,2012,60(2):2012-2025
    [5]Taigman Y,Yang Ming,Ranzato M A,et al.Deepface:Closing the gap to human-level performance in face verification[C]//Proc of the 29th IEEE Conf on Computer Vision and Pattern Recognition.Piscataway,NJ:IEEE,2014:1701-1708
    [6]Dahl G E,Deng Li,Yu Dong,et al.Context-dependent pretrained deep neural networks for large-vocabulary speech recognition[J].IEEE Trans on Audio,Speech,and Language Processing,2012,20(1):30-42
    [7]Sainath T N,Vinyals O,Senior A,et al.Convolutional,long short-term memory,fully connected deep neural networks[C]//Proc of the 39th IEEE Int Conf on Acoustics,Speech and Signal.Piscataway,NJ:IEEE,2015:4580-4584
    [8]Sak H,Senior A,Beaufays F.Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition[J].Computer Science,2014,9(3):338-342
    [9]Collobert R,Weston J.A unified architecture for natural language processing:Deep neural networks with multitask learning[C]//Proc of the 25th Int Conf on Machine Learning.New York:ACM,2008:160-167
    [10]Cruz-Roa A,Ovalle J E A,Madabhushi A,et al.A deep learning architecture for image representation,visual interpretability and automated basal-cell carcinoma cancer detection[C]//Proc of the 9th Int Conf on Medical Image Computing and Computer-Assisted Intervention.Berlin:Springer,2013:403-410
    [11]Huang Wenyi,Stokes J W.MtNet:A multi-task neural network for dynamic malware classification[C]//Proc of the5th Int Conf on Detection of Intrusions and Malware,and Vulnerability Assessment.Berlin:Springer,2016:399-418
    [12]LeCun Y,Jackel L D,Boser B,et al.Handwritten digit recognition:Applications of neural network chips and automatic learning[J].IEEE Communications Magazine,1989,27(11):41-46
    [13]Rumelhart D E,Hinton G E,Williams R J.Learning representations by back-propagating errors[J].Cognitive Modeling,1988,5(3):533-536
    [14]Hinton G E,Osindero S,Teh Y W.A fast learning algorithm for deep belief nets[J].Neural Computation,2006,18(7):1527-1554
    [15]Hinton G E,McClelland J L.Learning representations by recirculation[C]//Proc of the 1st Int Conf on Neural Information Systems.Cambridge,MA:MIT Press,1987:358-366
    [16]Papernot N,McDaniel P,Sinha A,et al.Towards the science of security and privacy in machine learning[OL].2016[2017-07-12].https://arxiv.org/pdf/1611.03814.pdf
    [17]Nguyen A,Yosinski J,Clune J.Deep neural networks are easily fooled:High confidence predictions for unrecognizable images[C]//Proc of the 30th IEEE Conf on Computer Vision and Pattern Recognition.Piscataway,NJ:IEEE,2015:427-436
    [18]Bolukbasi T,Wang J,Dekel O,et al.Adaptive neural networks for efficient inference[C]//Proc of the 34th Int Conf on Machine Learning.New York:ACM,2017:527-536
    [19]Koh P W,Liang P.Understanding black-box predictions via influence functions[OL].2017[2017-08-11].https://arxiv.org/pdf/1703.04730.pdf
    [20]Srivastava N,Hinton G E,Krizhevsky A,et al.Dropout:A simple way to prevent neural networks from overfitting[J].Journal of Machine Learning Research,2014,15(1):1929-1958
    [21]RSIP.Deep learning and convolutional neural networks:RSIP vision blogs[EB/OL].2016[2017-08-18].http://www.rsipvision.com/exploring-deep-learning/
    [22]Deng Li.Three classes of deep learning architectures and their applications:A tutorial survey[J].APSIPA Trans on Signal and Information Processing,2012,11(2):1132-1160
    [23]LeCun Y,Bottou L,Bengio Y,et al.Gradient-based learning applied to document recognition[J].Proceedings of the IEEE,1998,86(11):2278-2324
    [24]Szegedy C,Liu Wei,Jia Yangqing,et al.Going deeper with convolutions[C]//Proc of the 30th IEEE Conf on Computer Vision and Pattern Recognition.Piscataway,NJ:IEEE,2015:37-46
    [25]Simonyan K,Zisserman A.Very deep convolutional networks for large-scale image recognition[OL].2014[2017-08-02].https://arxiv.org/pdf/1409.1556.pdf
    [26]He Kaiming,Zhang Xiangyu,Ren Shaoqing,et al.Deep residual learning for image recognition[C]//Proc of the 31st IEEE Conf on Computer Vision and Pattern Recognition.Piscataway,NJ:IEEE,2016:770-778
    [27]Eigen D,Rolfe J,Fergus R,et al.Understanding deep architectures using a recursive convolutional network[J].Computer Science,2014,10(2):38-55
    [28]Masci J,Meier U,Cires,an D,et al.Stacked convolutional autoencoders for hierarchical feature extraction[C]//Proc of the 20th Int Conf on Artificial Neural Networks.Berlin:Springer,2011:52-59
    [29]Krizhevsky A,Hinton G.Convolutional deep belief networks on cifar-10[J].Unpublished Manuscript,2010,7(6):1007-1020
    [30]Cho K,Van Merriёnboer B,Gulcehre C,et al.Learning phrase representations using RNN encoder-decoder for statistical machine translation[J].Computer Science,2014,10(2):187-205
    [31]Sutskever I,Vinyals O,Le Q V.Sequence to sequence learning with neural networks[J].IEEE Trans on Signal Processing,2014,4(2):3104-3112
    [32]Bahdanau D,Cho K,Bengio Y.Neural machine translation by jointly learning to align and translate[J].Computer Science,2014,7(2):109-136
    [33]Schuster M,Paliwal K K.Bidirectional recurrent neural networks[J].IEEE Trans on Signal Processing,1997,45(11):2673-2681
    [34]Graves A.Supervised Sequence Labelling with Recurrent Neural Networks[M].Berlin:Springer,2012:4-38
    [35]Graves A,Liwicki M,Bunke H,et al.Unconstrained online handwriting recognition with recurrent neural networks[C]//Proc of the 29th Conf on Neural Information Processing Systems.Piscataway,NJ:IEEE,2007:458-64
    [36]Graves A,Schmidhuber J.Offline handwriting recognition with multidimensional recurrent neural networks[C]//Proc of the 30th Int Conf on Neural Information Processing Systems.Piscataway,NJ:IEEE,2008:545-552
    [37]Graves A,Schmidhuber J.Framewise phoneme classification with bidirectional LSTM and other neural network architectures[J].Neural Networks,2005,18(5):602-610
    [38]Graves A.Generating sequences with recurrent neural networks[J].Computer Science,2013,10(3):30-45
    [39]Baldi P,Brunak S,Frasconi P,et al.Exploiting the past and the future in protein secondary structure prediction[J].Bioinformatics,1999,15(11):937-946
    [40]Gers F A,Schmidhuber J,Cummins F.Learning to forget:Continual prediction with LSTM[J].Neural Computation,2000,12(10):2451-247
    [41]Cho K,Van Merriёnboer B,Bahdanau D,et al.On the properties of neural machine translation:Encoder-decoder approaches[J].Computer Science,2014,7(2):103-111
    [42]Chung J,Gulcehre C,Cho K H,et al.Empirical evaluation of gated recurrent neural networks on sequence modeling[OL].2014[2017-08-11].https://arxiv.org/pdf/1412.3555
    [43]Chung J,Gulcehre C,Cho K,et al.Gated feedback recurrent neural networks[C]//Proc of the 32nd Int Conf on Machine Learning.New York:ACM,2015:2067-2075
    [44]Jozefowicz R,Zaremba W,Sutskever I.An empirical exploration of recurrent network architectures[C]//Proc of the 32nd Int Conf on Machine Learning.New York:ACM,2015:2342-2350
    [45]Chrupaa G,Kádár A,Alishahi A.Learning language through pictures[J].Computer Science,2015,8(2):76-90
    [46]Hochreiter S,Schmidhuber J.Long short-term memory[J].Neural Computation,1997,9(8):1735-1780
    [47]Graves A,Jaitly N.Towards end-to-end speech recognition with recurrent neural networks[C]//Proc of the 31st Int Conf on Machine Learning.New York:ACM,2014:1764-1772
    [48]Kiros R,Salakhutdinov R,Zemel R S.Unifying visualsemantic embeddings with multimodal neural language models[J].Computer Science,2014,10(3):137-152
    [49]Vinyals O,Toshev A,Bengio S,et al.Show and tell:A neural image caption generator[C]//Proc of the 30th IEEE Conf on Computer Vision and Pattern Recognition.Piscataway,NJ:IEEE,2015:3156-3164
    [50]Xu K,Ba J,Kiros R,et al.Show,attend and tell:Neural image caption generation with visual attention[C]//Prof of the 32nd Int Conf on Machine Learning.New York:ACM,2015:2048-2057
    [51]Vinyals O,Kaiser,Koo T,et al.Grammar as a foreign language[C]//Proc of the 28th Conf on Advances in Neural Information Processing Systems.Cambridge,MA:MIT Press,2015:2773-2781
    [52]Pascanu R,Gulcehre C,Cho K,et al.How to construct deep recurrent neural networks[J].Computer Science,2013,6(5):90-109
    [53]Salakhutdinov R,Mnih A,Hinton G.Restricted Boltzmann machines for collaborative filtering[C]//Proc of the 24th Int Conf on Machine Learning.New York:ACM,2007:791-798
    [54]Salakhutdinov R,Hinton G.Deep Boltzmann machines[J].Journal of Machine Learning Research,2009,5(2):196-2006
    [55]Rozanov Y A.Markov Random Fields[M].Berlin:Springer,1982:55-102
    [56]Elfwing S,Uchibe E,Doya K.Expected energy-based restricted Boltzmann machine for classification[J].Neural Networks,2015,64(2):29-38
    [57]Mnih V,Larochelle H,Hinton G E.Conditional restricted Boltzmann machines for structured output prediction[C]//Proc of the 27th Conf on Uncertainty in Artificial Intelligence.Berlin:Springer,2011:514-522
    [58]Taylor G W,Hinton G E,Roweis S T.Two distributedstate models for generating high-dimensional time series[J].Journal of Machine Learning Research,2011,12(3):1025-1068
    [59]Hinton G.A practical guide to training restricted Boltzmann machines[J].Momentum,2010,9(1):926-937
    [60]Tang Yichuan,Salakhutdinov R,Hinton G.Robust Boltzmann machines for recognition and denoising[C]//Proc of the 27th Computer Vision and Pattern Recognition.Piscataway,NJ:IEEE,2012:2264-2271
    [61]Li Guoqi,Deng Lei,Xu Yi,et al.Temperature based restricted Bzoltzmann machines[J].Scientific Reports,2016,6(2):191-210
    [62]Nair V,Hinton G E.3Dobject recognition with deep belief nets[C]//Proc of the 22nd Conf on Advances in Neural Information Processing Systems.Cambridge,MA:MIT Press,2009:1339-1347
    [63]Indiveri G,Liu S.Memory and information processing in neuromorphic systems[J].Proceedings of the IEEE,2015,103(8):1379-1397
    [64]Liao Bin,Xu Jungang,LüJintao,et al.An image retrieval method for binary images based on DBN and softmax classifier[J].IETE Technical Review,2015,32(4):294-303
    [65]Abdel-Zaher A M,Eldeib A M.Breast cancer classification using deep belief networks[J].Expert Systems with Applications,2016,46(2):139-144
    [66]Deng Li,Yu Dong.Deep convex net:A scalable architecture for speech pattern classification[C]//Proc of the 12th Conf of the Int Speech Communication Association.Florence,Italy:ISCA,2011:2285-2288
    [67]Hinton G E,Salakhutdinov R R.Using deep belief nets to learn covariance kernels for Gaussian processes[C]//Proc of the 1st Conf on Advances in Neural Information Processing Systems.Cambridge,MA:MIT Press,2008:1249-1256
    [68]Arel I,Rose D C,Karnowski T P.Deep machine learning-a new frontier in artificial intelligence research[J].IEEE Computational Intelligence Magazine,2010,5(4):13-18
    [69]Bengio Y.Learning deep architectures for AI[J].Foundations and Trends in Machine Learning,2009,2(1):1-127
    [70]Alain G,Bengio Y.What regularized autoencoders learn from the data-generating distribution[J].The Journal of Machine Learning Research,2014,15(1):3563-3593
    [71]Hinton G E,Salakhutdinov R R.Reducing the dimensionality of data with neural networks[J].Science,2006,313(5786):504-507
    [72]Schmidhuber J.Deep learning in neural networks:An overview[J].Neural Networks,2015,61(9):85-117
    [73]Makhzani A,Frey B.K-sparse autoencoders[OL].2013[2017-06-14].https://arxiv.org/pdf/1312.5663
    [74]Vincent P.A connection between score matching and denoising autoencoders[J].Neural Computation,2011,23(7):1661-1674
    [75]Vincent P,Larochelle H,Bengio Y,et al.Extracting and composing robust features with denoising autoencoders[C]//Proc of the 25th Int Conf on Machine Learning.New York:ACM,2008:1096-1103
    [76]Ling Zhenhua,Kang Shiyin,Zen H,et al.Deep learning for acoustic modeling in parametric speech generation:A systematic review of existing techniques and future trends[J].IEEE Signal Processing Magazine,2015,32(3):35-52
    [77]Kamyshanska H,Memisevic R.The potential energy of an autoencoder[J].IEEE Trans on Pattern Analysis and Machine Intelligence,2015,37(6):1261-1273
    [78]Bengio Y,Courville A,Vincent P.Representation learning:A review and new perspectives[J].IEEE Trans on Pattern Analysis and Machine Intelligence,2013,35(8):1798-1828
    [79]Vincent P,Larochelle H,Lajoie I,et al.Stacked denoising autoencoders:Learning useful representations in a deep network with a local denoising criterion[J].Journal of Machine Learning Research,2010,11(12):3371-3408
    [80]Rifai S,Vincent P,Muller X,et al.Contractive autoencoders:Explicit invariance during feature extraction[C]//Proc of the 28th Int Conf on Machine Learning.New York:ACM,2011:833-840
    [81]Rifai S,Mesnil G,Vincent P,et al.Higher order contractive autoencoder[C]//Proc of the 24th European Conf on Machine Learning and Knowledge Discovery in Databases.Berlin:Springer,2011:645-660
    [82]Sun Meng,Zhang Xiongwei,Zheng T F.Unseen noise estimation using separable deep auto encoder for speech enhancement[J].IEEE/ACM Trans on Audio,Speech,and Language Processing,2016,24(1):93-104
    [83]Staudemeyer R C.Applying long short-term memory recurrent neural networks to intrusion detection[J].South African Computer Journal,2015,56(1):136-154
    [84]Dahl G E,Stokes J W,Deng Li,et al.Large-scale malware classification using random projections and neural networks[C]//Proc of the 38th Int Conf on Acoustics,Speech and Signal Processing.Piscataway,NJ:IEEE,2013:3422-3426
    [85]Kolosnjaji B,Zarras A,Webster G,et al.Deep learning for classification of malware system call sequences[C]//Proc of the 30th Australasian Joint Conf on Artificial Intelligence.Berlin:Springer,2016:137-149
    [86]Tobiyama S,Yamaguchi Y,Shimada H,et al.Malware detection with deep neural network using process behavior[C]//Proc of the 8th Int Conf on Computer Software and Applications.Piscataway,NJ:IEEE,2016:577-582
    [87]Pascanu R,Stokes J W,Sanossian H,et al.Malware classification with recurrent networks[C]//Proc of the 40th Int Conf on Acoustics,Speech and Signal Processing.Piscataway,NJ:IEEE,2015:1916-1920
    [88]Athiwaratkun B,Stokes J W.Malware classification with LSTM and GRU language models and a character-level CNN[C]//Proc of the 42nd Int Conf on Acoustics,Speech and Signal Processing.Piscataway,NJ:IEEE,2017:2482-2486
    [89]Wang Xin,Yiu S M.A multi-task learning model for malware classification with useful file access pattern from API call sequence[OL].2016[2017-08-17].https://arxiv.org/pdf/1610.05945
    [90]Hardy W,Chen Lingwei,Hou Shifu,et al.DL4MD:A deep learning framework for intelligent malware detection[C]//Proc of the 16th Int Conf on Data Mining.Piscataway,NJ:IEEE,2016:61-68
    [91]Rhode M,Burnap P,Jones K.Early stage malware prediction using recurrent neural networks[OL].2017[2017-06-14].https://arxiv.org/pdf/1708.03513
    [92]Nix R,Zhang Jian.Classification of Android apps and malware using deep neural networks[C]//Proc of the 17th Int Joint Conf on Neural Networks.Piscataway,NJ:IEEE,2017:1871-1878
    [93]Saxe J,Berlin K.Deep neural network-based malware detection using two-dimensional binary program features[C]//Proc of the 10th Int Conf on Malicious and Unwanted Software.Piscataway,NJ:IEEE,2015:11-20
    [94]Shin E C R,Song D,Moazzezi R.Recognizing functions in binaries with neural networks[C]//Proc of the 24th USENIX Security Symp.Berkely,CA:USENIX Association,2015:611-626
    [95]Yuan Zhenlong,Lu Yongqiang,Wang Zhaoguo,et al.Droid-Sec:Deep learning in Android malware detection[J].ACM SIGCOMM Computer Communication Review,2014,44(4):371-372
    [96]Yuan Zhenlong,Lu Yongqiang,Xue Yibo.DroidDetector:Android malware characterization and detection using deep learning[J].Tsinghua Science and Technology,2016,21(1):114-123
    [97]Xu Lifan,Zhang Dongping,Jayasena N,et al.HADM:Hybrid analysis for detection of malware[C]//Proc of the3rd SAI Intelligent Systems Conf.Berlin:Springer,2016:702-724
    [98]Jung W,Kim S,Choi S.Poster:Deep learning for zero-day flash malware detection[C]//Proc of the 36th IEEE Symp on Security and Privacy.Piscataway,NJ:IEEE,2015:32-34
    [99]Li Ping,Hastie T J,Church K W.Very sparse random projections[C]//Proc of the 12th ACM SIGKDD Int Conf on Knowledge Discovery and Data Mining.New York:ACM,2006:287-296
    [100]David O E,Netanyahu N S.Deepsign:Deep learning for automatic malware signature generation and classification[C]//Proc of the 12th Int Joint Conf on Neural Networks.Piscataway,NJ:IEEE,2015:76-84
    [101]Debar H,Becker M,Siboni D.A neural network component for an intrusion detection system[C]//Proc of the 23rd Computer Society Symp on Research in Security and Privacy.Piscataway,NJ:IEEE,1992:240-250
    [102]Creech G,Hu Jiankun.A semantic approach to host-based intrusion detection systems using contiguous and discontiguous system call patterns[J].IEEE Trans on Computers,2014,63(4):807-819
    [103]Fiore U,Palmieri F,Castiglione A,et al.Network anomaly detection with the restricted Boltzmann machine[J].Neurocomputing,2013,122(3):13-23
    [104]University of California.KDD Cup 99[EB/OL].1999[2017-08-18].http://kdd.ics.uci.edu/databases/kddcup99/kddcup99.html
    [105]Tavallaee M,Bagheri E,Lu Wei,et al.A detailed analysis of the KDD CUP 99data set[C]//Proc of the 2nd IEEE Symp on Computational Intelligence for Security and Defense Applications.Piscataway,NJ:IEEE,2009:1-6
    [106]Canadian Institute for Cybersecurity.NSL-KDD dataset[EB/OL].2017[2017-08-18].http://www.unb.ca/cic/research/datasets/nsl.html
    [107]Kim J,Kim J,Thu H L T,et al.Long short-term memory recurrent neural network classifier for intrusion detection[C]//Proc of the 22nd Int Conf on Platform Technology and Service.Piscataway,NJ:IEEE,2016:49-54
    [108]Putchala M K,Deep learning approach for intrusion detection system(IDS)in the Internet of things(IoT)network using gated recurrent neural networks(GRU)[D].Dayton,Ohio,USA:Wright State University,2017
    [109]Gao Ni,Gao Ling,Gao Quanli,et al.An intrusion detection model based on deep belief networks[C]//Proc of the 12th Int Conf on Advanced Cloud and Big Data.Piscataway,NJ:IEEE,2014:247-252
    [110]Li Yuancheng,Ma Rong,Jiao Runhai.A hybrid malicious code detection method based on deep learning[J].International Journal of Software Engineering &Its Applications,2015,9(5):205-216
    [111]Salama M A,Eid H F,Ramadan R A,et al.Hybrid intelligent intrusion detection scheme[G]//Soft Computing in Industrial Applications.Berlin:Springer,2011:293-303
    [112]Niyaz Q,Javaid A,Sun W,et al.A deep learning approach for network intrusion detection system[C]//Proc of the 9th EAI Int Conf on Bio-inspired Information and Communications Technologies.New York:ACM,2016:21-26
    [113]Abolhasanzadeh B.Nonlinear dimensionality reduction for intrusion detection using autoencoder bottleneck features[C]//Proc of the 7th Conf on Information and Knowledge Technology.Piscataway,NJ:IEEE,2015:26-31
    [114]Alom M Z,Bontupalli V R,Taha T M.Intrusion detection using deep belief networks[C]//Proc of the 9th Conf on Aerospace and Electronics.Piscataway,NJ:IEEE,2015:339-344
    [115]Aygun R C,Yavuz A G.Network anomaly detection with stochastically improved autoencoder based models[C]//Proc of the 4th Int Conf on Cyber Security and Cloud Computing.Piscataway,NJ:IEEE,2017:193-198
    [116]Wang Zhanyi.The applications of deep learning on traffic identification[EB/OL].2015[2017-07-15].https://www.blackhat.com/docs/us-15/materials/us-15-Wang-The-ApplicationsOf-Deep-Learning-On-Traffic-Identification-wp.pdf
    [117]Yu Yang,Long Jun,Cai Zhiping.Network intrusion detection through stacking dilated convolutional autoencoders[J].Security and Communication Networks,2017,2(3):212-225
    [118]Wang Wei,Zhu Ming,Zeng Xuewen,et al.Malware traffic classification using convolutional neural network for representation learning[C]//Proc of the 1st Int Conf on Information Networking.Piscataway,NJ:IEEE,2017:712-717
    [119]Kim G,Yi H,Lee J,et al.LSTM-based system-call language modeling and robust ensemble method for designing host-based intrusion detection systems[OL].2016[2017-08-02].https://arxiv.org/pdf/1611.01726
    [120]Yu Yang,Long Jum,Cai Zhiping.Session-based network intrusion detection using a deep learning architecture[C]//Proc of the 14th Conf on Modeling Decisions for Artificial Intelligence.Berlin:Springer,2017:144-155
    [121]Zaheer M,Tristan J B,Wick M L,et al.Learning a static analyzer:A case study on a toy language[EB/OL].2016[2017-08-17].https://openreview.net/references/pdf/id=ry54RWtxx
    [122]Godefroid P,Peleg H,Singh R.Learn&fuzz:Machine learning for input fuzzing[OL].2017[2017-08-17].https://arxiv.org/pdf/1701.07232
    [123]Melicher W,Ur B,Segreti S M,et al.Fast,lean,and accurate:Modeling password guessability using neural networks[C]//Proc of the 25th USENIX Security Symp.Berkely,CA:USENIX Association,2016:175-191
    [124]Hu Wei,Tan Ying.Generating adversarial malware examples for black-box attacks based on GAN[OL].2017[2017-06-14].https://arxiv.org/pdf/1702.05983
    [125]Hu Weiwei,Tan Ying.Black-box attacks against RNN based malware detection algorithms[OL].2017[2017-08-02].https://arxiv.org/pdf/1705.08131
    [126]Rosenberg I,Shabtai A,Rokach L,et al.Generic blackbox end-to-end attack against RNNs and other API calls based malware classifiers[OL].2017[2017-06-14].https://arxiv.org/pdf/1707.05970
    [127]Grosse K,Papernot N,Manoharan P,et al.Adversarial perturbations against deep neural networks for malware classification[OL].2016[2017-08-19].https://arxiv.org/pdf/1606.04435
    [128]Wang Qinglong,Guo Wenbo,Zhang Kaixuan,et al.Adversary resistant deep neural networks with an application to malware detection[C]//Proc of the 23rd ACM SIGKDD Int Conf on Knowledge Discovery and Data Mining.New York:ACM,2017:1145-1153
    [129]DARPA.Explainable artificial intelligence(XAI)[EB/OL].2016[2017-08-18].https://www.darpa.mil/program/explainable-artificial-intelligence
    [130]Ribeiro M T,Singh S,Guestrin C.Why should I trust you/Explaining the predictions of any classifier[C]//Proc of the22nd ACM SIGKDD Int Conf on Knowledge Discovery and Data Mining.New York:ACM,2016:1135-1144
    [131]Goodfellow I J,Vinyals O,Saxe A M.Qualitatively characterizing neural network optimization problems[OL].2014[2017-08-19].https://arxiv.org/pdf/1412.6544
    [132]Weston J,Chopra S,Bordes A.Memory networks[OL].2014[2017-07-15].https://arxiv.org/pdf/1410.3916
    [133]Kumar A,Irsoy O,Ondruska P,et al.Ask me anything:Dynamic memory networks for natural language processing[C]//Proc of the 33rd Int Conf on Machine Learning.New York:ACM,2016:1378-1387
    [134]Goodfellow I,Pouget-Abadie J,Mirza M,et al.Generative adversarial nets[C]//Proc of the 27th Conf on Advances in Neural Information Processing Systems.Cambridge,MA:MIT Press,2014:2672-2680
    [135]Wang Kunfeng,Gou Chao,Duan Yanjie,et al.Generative adversarial networks:The state of the art and beyond[J].Acta Automatica Sinca,2017,43(3):321-332(in Chinese)(王坤峰,苟超,段艳杰,等.生成式对抗网络GAN的研究进展与展望[J].自动化学报,2017,43(3):321-332)
    [136]Mirza M,Osindero S.Conditional generative adversarial nets[OL].2014[2017-08-02].https://arxiv.org/pdf/1411.1784
    [137]Odena A.Semi-supervised learning with generative adversarial networks[OL].2016[2017-07-15].https://arxiv.org/pdf/1606.01583
    [138]Donahue J,Krhenbühl P,Darrell T.Adversarial feature learning[OL].2016[2017-08-19].https://arxiv.org/pdf/1605.09782
    [139]Chen Xi,Duan Yan,Houthooft R,et al.InfoGAN:Interpretable representation learning by information maximizing generative adversarial nets[C]//Proc of the29th Conf on Advances in Neural Information Processing Systems.Cambridge,MA:MIT Press,2016:2172-2180
    [140]Odena A,Olah C,Shlens J.Conditional image synthesis with auxiliary classifier GANs[OL].2016[2017-08-13].https://arxiv.org/pdf/1610.09585
    [141]Yu Lantao,Zhang Weinan,Wang Jun,et al.SeqGAN:Sequence generative adversarial nets with policy gradient[C]//Proc of the 31st Conf on Artificial Intelligence.Menlo Park,CA:AAAI,2017:2852-2858
    [142]Radford A,Metz L,Chintala S.Unsupervised representation learning with deep convolutional generative adversarial networks[OL].2015[2017-07-16].https://arxiv.org/pdf/1511.06434
    [143]Denton E L,Chintala S,Fergus R.Deep generative image models using a Laplacian pyramid of adversarial networks[C]//Proc of the 28th Conf on Advances in Neural Information Processing Systems.Cambridge,MA:MIT Press,2015:1486-1494
    [144]Larsen A B L,Snderby S K,Larochelle H,et al.Autoencoding beyond pixels using a learned similarity metric[OL].2015[2017-07-16].https://arxiv.org/pdf/1512.09300
    [145]Im D J,Kim C D,Jiang Hui,et al.Generating images with recurrent adversarial networks[OL].2016[2017-08-13].https://arxiv.org/pdf/1602.05110
    [146]Arjovsky M,Chintala S,Bottou L.Wasserstein GAN[OL].2017[2017-07-25].https://arxiv.org/pdf/1701.07875
    [147]Salimans T,Goodfellow I,Zaremba W,et al.Improved techniques for training GANs[C]//Proc of the 29th Conf on Advances in Neural Information Processing Systems.Cambridge,MA:MIT Press,2016:2234-2242
    [148]Goodfellow I.NIPS 2016tutorial:Generative adversarial networks[OL].2016[2017-07-24].https://arxiv.org/pdf/1701.00160
    [149]NIPS.Non-targeted adversarial attack[EB/OL].2017[2017-07-24].https://www.kaggle.com/nips-2017-adversarial-learning-competition
    [150]Pascanu R,Mikolov T,Bengio Y.On the difficulty of training recurrent neural networks[C]//Proc of the 30th Int Conf on Machine Learning.New York:ACM,2013:1310-1318
    [151]Andreas J,Rohrbach M,Darrell T,et al.Neural module networks[C]//Proc of the 31st IEEE Conf on Computer Vision and Pattern Recognition.Piscataway,NJ:IEEE,2016:39-48
    [152]McDaniel P,Papernot N,Celik Z B.Machine learning in adversarial settings[J].IEEE Security &Privacy,2016,14(3):68-72
    [153]Huang Ling,Joseph A D,Nelson B,et al.Adversarial machine learning[C]//Proc of the 4th ACM Workshop on Security and Artificial Intelligence.New York:ACM,2011:43-58
    [154]Biggio B,Corona I,Maiorca D,et al.Evasion attacks against machine learning at test time[C]//Proc of the 23rd Joint European Conf on Machine Learning and Knowledge Discovery in Databases.Berlin:Springer,2013:387-402
    [155]Biggio B,Fumera G,Roli F.Pattern recognition systems under attack:Design issues and research challenges[J].International Journal of Pattern Recognition and Artificial Intelligence,2014,28(7):146-158
    [156]Szegedy C,Zaremba W,Sutskever I,et al.Intriguing properties of neural networks[OL].2013[2017-08-02].https://arxiv.org/pdf/1312.6199
    [157]Goodfellow I J,Shlens J,Szegedy C.Explaining and harnessing adversarial examples[OL].2014[2017-07-27].https://arxiv.org/pdf/1412.6572
    [158]Warde-Farley D,Goodfellow I,Hazan T,et al.Perturbations,Optimization,and Statistics[M].Cambridge,MA:MIT Press,2016:1-32
    [159]Cires,an D,Meier U,Masci J,et al.Multi-column deep neural network for traffic sign classification[J].Neural Networks,2012,32(Special Issue):333-338
    [160]Papernot N,McDaniel P,Jha S,et al.The limitations of deep learning in adversarial settings[C]//Proc of the 1st European Symp on Security and Privacy.Piscataway,NJ:IEEE,2016:372-387
    [161]Moosavi-Dezfooli S M,Fawzi A,Frossard P.Deepfool:A simple and accurate method to fool deep neural networks[C]//Proc of the 31st IEEE Conf on Computer Vision and Pattern Recognition.Piscataway,NJ:IEEE,2016:2574-2582
    [162]Kurakin A,Goodfellow I,Bengio S.Adversarial examples in the physical world[OL].2016[2017-07-27].https://arxiv.org/pdf/1607.02533
    [163]Papernot N,McDaniel P,Goodfellow I,et al.Practical black-box attacks against machine learning[C]//Proc of the12th ACM Asia Conf on Computer and Communications Security.New York:ACM,2017:506-519
    [164]Papernot N,McDaniel P,Goodfellow I.Transferability in machine learning:From phenomena to black-box attacks using adversarial samples[OL].2016[2017-07-19].https://arxiv.org/pdf/1605.07277
    [165]Tramèr F,Papernot N,Goodfellow I,et al.The space of transferable adversarial examples[OL].2017[2017-07-19].https://arxiv.org/pdf/1704.03453
    [166]Moosavi-Dezfooli S M,Fawzi A,Fawzi O,et al.Universal adversarial perturbations[C]//Proc of the 32nd IEEE Conf on Computer Vision and Pattern Recognition.Piscataway,NJ:IEEE,2017:893-901
    [167]Russakovsky O,Deng Jia,Su Hao,et al.ImageNet large scale visual recognition challenge[J].International Journal of Computer Vision,2015,115(3):211-252
    [168]Liu Yanpei,Chen Xinyun,Liu Chang,et al.Delving into transferable adversarial examples and black-box attacks[OL].2016[2017-08-02].https://arxiv.org/pdf/1611.02770
    [169]Papernot N,McDaniel P,Swami A,et al.Crafting adversarial input sequences for recurrent neural networks[C]//Proc of the 35th Military Communications Conf.Piscataway,NJ:IEEE,2016:49-54
    [170]Papernot N,McDaniel P,Wu Xi,et al.Distillation as a defense to adversarial perturbations against deep neural networks[C]//Proc of the 37th IEEE Symp on Security and Privacy.Piscataway,NJ:IEEE,2016:582-597
    [171]Hinton G,Vinyals O,Dean J.Distilling the knowledge in a neural network[OL].2015[2017-08-14].https://arxiv.org/pdf/1503.02531
    [172]Carlini N,Wagner D.Towards evaluating the robustness of neural networks[C]//Proc of the 38th IEEE Symp on Security and Privacy.Piscataway,NJ:IEEE,2017:39-57
    [173]Carlini N,Wagner D.Defensive distillation is not robust to adversarial examples[OL].2016[2017-08-09].https://arxiv.org/pdf/1607.04311
    [174]Hosseini H,Chen Yize,Kannan S,et al.Blocking transferability of adversarial examples in black-box learning systems[OL].2017[2017-08-09].https://arxiv.org/pdf/1703.04318
    [175]Papernot N,McDaniel P.Extending defensive distillation[OL].2017[2017-08-09].https://arxiv.org/pdf/1705.05264
    [176]Brendel W,Bethge M.Comment on "Biologically inspired protection of deep networks from adversarial attacks"[OL].2017[2017-08-09].https://arxiv.org/pdf/1704.01547
    [177]Wang Qinglong,Guo Wenbo,Zhang Kaixuan,et al.Learning adversary-resistant deep neural networks[OL].2016[2017-08-16].https://arxiv.org/pdf/1612.01401
    [178]Wang Qinglong,Guo Wenbo,Ororbia II A G,et al.Using noninvertible data transformations to build adversary-resistant deep neural networks[OL].2016[2017-08-16].https://arxiv.org/pdf/1610.01934
    [179]Gu Shixiang,Rigazio L.Towards deep neural network architectures robust to adversarial examples[OL].2014[2017-08-16].https://arxiv.org/pdf/1412.5068
    [180]Ororbia II A G,Giles C L,et al.Unifying adversarial training algorithms with flexible deep data gradient regularization[OL].2016[2017-06-06].https://arxiv.org/pdf/1601.07213
    [181]Lyu C,Huang Kaizhu,Liang Haining.A unified gradient regularization family for adversarial examples[C]//Proc of the 15th Int Conf on Data Mining.Piscataway,NJ:IEEE,2015:301-309
    [182]Zhao Qiyang,Griffin L D.Suppressing the unusual:Towards robust CNNs using symmetric activation functions[OL].2016[2017-08-14].https://arxiv.org/pdf/1603.05145
    [183]Rozsa A,Gunther M,Boult T E.Towards robust deep neural networks with BANG[OL].2016[2017-06-06].https://arxiv.org/pdf/1612.00138
    [184]Miyato T,Maeda S,Koyama M,et al.Distributional smoothing with virtual adversarial training[OL].2015[2017-06-06].https://arxiv.org/pdf/1507.00677
    [185]Tramèr F,Kurakin A,Papernot N,et al.Ensemble adversarial training:Attacks and defenses[OL].2017[2017-06-04].https://arxiv.org/pdf/1705.07204
    [186]Na T,Ko J H,Mukhopadhyay S.Cascade adversarial machine learning regularized with a unified embedding[OL].2017[2017-06-04].https://arxiv.org/pdf/1708.02582
    [187]Shaham U,Yamada Y,Negahban S.Understanding adversarial training:Increasing local stability of neural nets through robust optimization[OL].2015[2017-09-14].https://arxiv.org/pdf/1511.05432
    [188]Huang Ruitong,Xu Bing,Schuurmans D,et al.Learning with a strong adversary[OL].2015[2017-09-17].https://arxiv.org/pdf/1511.03034
    [189]Nkland A.Improving back-propagation by adding an adversarial gradient[OL].2015[2017-09-12].https://arxiv.org/pdf/1510.04189
    [190]Demyanov S,Bailey J,Kotagiri R,et al.Invariant backpropagation:How to train a transformation-invariant neural network[OL].2015[2017-08-02].https://arxiv.org/pdf/1502.04434
    [191]Grosse K,Manoharan P,Papernot N,et al.On the(statistical)detection of adversarial examples[OL].2017[2017-09-12].https://arxiv.org/pdf/1702.06280
    [192]Metzen J H,Genewein T,Fischer V,et al.On detecting adversarial perturbations[OL].2017[2017-09-12].https://arxiv.org/pdf/1702.04267
    [193]Lu Jiajun,Issaranon T,Forsyth D.SafetyNet:Detecting and rejecting adversarial examples robustly[OL].2017[2017-08-02].https://arxiv.org/pdf/1704.00103
    [194]Cloudflare.The WireX botnet:How industry collaboration disrupted a DDoS attack[EB/OL].2017[2017-09-02].https://blog.cloudflare.com/the-wirex-botnet/
    [195]Fredrikson M,Jha S,Ristenpart T.Model inversion attacks that exploit confidence information and basic countermeasures[C]//Proc of the 22nd ACM SIGSAC Conf on Computer and Communications Security.New York:ACM,2015:1322-1333
    [196]Shen Shiqi,Tople S,Saxena P.Auror:Defending against poisoning attacks in collaborative deep learning systems[C]//Proc of the 32nd Annual Conf on Computer Security Applications.New York:ACM,2016:508-519
    [197]Hitaj B,Ateniese G,Perez-Cruz F.Deep models under the GAN:Information leakage from collaborative deep learning[OL].2017[2017-09-01].https://arxiv.org/pdf/1702.07464
    [198]Dwork C,Roth A.The algorithmic foundations of differential privacy[J].Foundations and Trends in Theoretical Computer Science,2014,9(3/4):211-407
    [199]Dwork C.Differential privacy:A survey of results[C]//Proc of the 5th Int Conf on Theory and Applications of Models of Computation.Berlin:Springer,2008:42-61
    [200]Dwork C,McSherry F,Nissim K,et al.Calibrating noise to sensitivity in private data analysis[J].Journal of Privacy and Confidentiality,2016,7(3):265-284
    [201]Zhu Tianqing,Li Gang,Zhou Wanlei,et al.Differentially private data publishing and analysis:A survey[J].IEEE Trans on Knowledge and Data Engineering,2017,29(8):1619-1638
    [202]Xie Pengtao,Bilenko M,Finley T,et al.Crypto-nets:Neural networks over encrypted data[OL].2014[2017-09-01].https://arxiv.org/pdf/1412.6181
    [203]Rivest R L,Adleman L,Dertouzos M L.On data banks and privacy homomorphisms[J].Foundations of Secure Computation,1978,4(11):169-180
    [204]Ohrimenko O,Schuster F,Fournet C,et al.Oblivious multi-party machine learning on trusted processors[C]//Proc of the 25th USENIX Security Symp.Berkeley,CA:USENIX Association,2016:619-636
    [205]Shokri R,Shmatikov V.Privacy-preserving deep learning[C]//Proc of the 22nd ACM SIGSAC Conf on Computer and Communications Security.New York:ACM,2015:1310-1321
    [206]Abadi M,Chu A,Goodfellow I,et al.Deep learning with differential privacy[C]//Proc of the 23rd ACM SIGSAC Conf on Computer and Communications Security.New York:ACM,2016:308-318
    [207]Papernot N,Abadi M,Erlingsson Ú,et al.Semi-supervised knowledge transfer for deep learning from private training data[OL].2016[2017-08-02].https://arxiv.org/pdf/1610.05755
    [208]Hamm J,Cao P,Belkin M.Learning privately from multiparty data[C]//Proc of the 33rd Int Conf on Machine Learning.New York:ACM,2016:555-563
