Research on Several Key Technologies of Eye-Driven Speech Synthesis
Abstract
With the continuing development of information technology and artificial intelligence, speech synthesis has received growing attention in human-computer interaction. The main problems of current speech synthesis, however, are insufficient naturalness and expressiveness, still far from the standard of natural speech; at the same time, the interaction mode of speech synthesis is rather monotonous and lacks a mechanism driven by the user's subjective intent.
     This dissertation first reviews the historical development of speech synthesis and summarizes its general pipeline, and points out that the prosody generation module is the key stage that determines synthesis quality. On this basis, two directions are pursued: on the one hand, a new human-computer interaction channel is introduced to enrich the form of speech synthesis, using the regularities of a reader's eye movements to drive prosody generation under the reader's subjective control; on the other hand, machine learning methods are fully exploited to mine prosodic rules and to build prosody models of higher modeling accuracy.
     Duration modeling and stress modeling are the key problems that prosody generation must solve. For the duration model, the idea of using eye-gaze durations recorded during reading to synchronously control the pronunciation durations of synthesized speech is proposed. Reading with the eyes is a complex process in which multiple factors interact, such as fixations, saccades and regressions, and speech encoding and oculomotor control are two parallel, independent systems. Studying an eye-driven "eye-movement duration" therefore requires weighing these factors and extracting the regularities of fixation duration as the basis for the duration model. For the stress model, the Extreme Learning Machine (ELM) and a semi-supervised ELM (SELM) are proposed for stress prediction and validated by comparative experiments. An exploratory study of semantic stress prediction is also carried out: since semantic stress depends on the speaker's subjective intent, the statistical relationship between eye-movement signals and stress is analyzed, and the results show that features such as fixation duration and fixation count are correlated with the stress level in contextual semantics.
     Around these aspects, the main work and contributions of this dissertation are as follows:
     1. A method of driving speech synthesis with the eye-movement signals of a person reading is proposed, introducing eye-movement control into the human-computer interaction of speech synthesis. The method has broad practical significance and application prospects for enriching interaction modalities and for assistive speech interaction for people with disabilities.
     Based on an analysis of three existing models of eye-movement control and the characteristics of implicit-prosody reading, the relative independence of the phonological processing system for text and the oculomotor control system during reading is examined. It is shown that, under the same level of text familiarity, the eye-fixation duration window during reading and the internal articulation duration window are synchronized; on this basis, an eye-movement duration model based on the hierarchical prosodic structure of Chinese is proposed. Rather than the machine-learning, probabilistic prediction of speech duration used in earlier work, the model aims to capture the reader's actual internal reading prosody and to synthesize speech with a personalized rhythm.
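     As a rough illustration only, and assuming a simple proportional mapping that the abstract itself does not spell out, the Python sketch below shows how a reader's relative gaze duration on a prosodic word might be used to rescale the baseline syllable durations of the synthesized speech; the function and parameter names are hypothetical.

        def scale_durations(base_durations_ms, gaze_ms, baseline_gaze_ms):
            # Rescale the baseline syllable durations of one prosodic word by the
            # reader's relative gaze duration on that word (illustrative mapping only).
            ratio = gaze_ms / baseline_gaze_ms   # >1 means the reader dwelt longer than the corpus average
            return [round(d * ratio, 1) for d in base_durations_ms]

        # Example: a three-syllable prosodic word, 180 ms per syllable by default,
        # fixated for 540 ms by this reader against a 450 ms corpus average.
        print(scale_durations([180, 180, 180], gaze_ms=540, baseline_gaze_ms=450))   # [216.0, 216.0, 216.0]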
     2. The Extreme Learning Machine (ELM), a single-hidden-layer feedforward neural network, is proposed for Chinese stress prediction.
     ELM inherits the good generalization of traditional neural networks and uses a single hidden layer connecting the input and output weight matrices. The algorithm works with arbitrarily assigned input weights and bias vectors, giving strong generalization and low computational complexity. Chinese stress prediction experiments were carried out with ELM and with an SVM using an RBF kernel, comparing prediction accuracy and running time. The results show that the ELM model greatly speeds up stress classification training and prediction while preserving accuracy, demonstrating the effectiveness of the algorithm.
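     To make the ELM recipe concrete, the following minimal Python/NumPy sketch follows the standard ELM formulation (random input weights and biases, output weights solved in closed form by a Moore-Penrose pseudo-inverse). The sigmoid activation, the number of hidden nodes and the variable names are illustrative assumptions, not details taken from the dissertation.

        import numpy as np

        def elm_train(X, T, n_hidden=100, seed=0):
            # X: (n_samples, n_features) feature vectors; T: (n_samples, n_classes) one-hot stress labels.
            rng = np.random.default_rng(seed)
            W = rng.uniform(-1.0, 1.0, size=(X.shape[1], n_hidden))   # random input weights, never retrained
            b = rng.uniform(-1.0, 1.0, size=n_hidden)                 # random hidden biases
            H = 1.0 / (1.0 + np.exp(-(X @ W + b)))                    # hidden-layer output matrix (sigmoid)
            beta = np.linalg.pinv(H) @ T                              # output weights via pseudo-inverse
            return W, b, beta

        def elm_predict(X, W, b, beta):
            H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
            return np.argmax(H @ beta, axis=1)                        # predicted stress class per sample

     Because only the solve for the output weights involves the training targets, training reduces to a single linear-algebra step, which is consistent with the speed advantage over the iteratively trained SVM baseline noted above.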
     3. An improved semi-supervised Extreme Learning Machine (SELM) model is proposed and applied to Chinese stress prediction.
     SELM targets the case where the training set contains only a small number of labeled samples. After learning from the labeled samples, the algorithm applies a confidence-threshold test to the unlabeled samples; the test swaps the roles of training set and prediction set and retains the high-confidence samples as an expansion of the training set. Experiments in which SELM predicts stress while the number of unlabeled samples is doubled show that, starting from a small labeled set, the algorithm still classifies the unlabeled samples with high accuracy and efficiency. This semi-supervised strategy offers an effective way to obtain efficient predictions over large sample sets while reducing the labeling workload.
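     The abstract does not fully specify the confidence test that swaps the training and prediction sets, so the sketch below shows only a generic self-training loop built around the elm_train/elm_predict sketch above: predict the unlabeled samples, keep those whose pseudo-label confidence passes a threshold, add them to the training pool, and retrain. The threshold, the number of rounds and the softmax confidence measure are assumptions.

        import numpy as np

        def softmax(z):
            e = np.exp(z - z.max(axis=1, keepdims=True))
            return e / e.sum(axis=1, keepdims=True)

        def selm_train(X_lab, T_lab, X_unlab, n_hidden=100, conf_thresh=0.9, rounds=3):
            # Semi-supervised self-training around the ELM above (illustrative only).
            X, T = X_lab.copy(), T_lab.copy()
            for _ in range(rounds):
                W, b, beta = elm_train(X, T, n_hidden)                    # learn from the current labeled pool
                H = 1.0 / (1.0 + np.exp(-(X_unlab @ W + b)))
                prob = softmax(H @ beta)                                  # pseudo-label confidence
                keep = prob.max(axis=1) >= conf_thresh                    # confidence-threshold test
                if not keep.any():
                    break
                pseudo = np.eye(T.shape[1])[prob[keep].argmax(axis=1)]    # one-hot pseudo-labels
                X = np.vstack([X, X_unlab[keep]])                         # expand the training set
                T = np.vstack([T, pseudo])
                X_unlab = X_unlab[~keep]                                  # remaining unlabeled samples
            return elm_train(X, T, n_hidden)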
     4. An exploratory study of semantic stress prediction from eye-fixation features is presented.
     Through a group of eye-movement stress prediction experiments, the use of eye-movement data to predict semantic stress in contextual material is investigated, and three neural network models are used to classify the eye-movement samples. The results show that features such as fixation duration and fixation count are correlated with the stress level in contextual semantics.
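     As a small illustration of the kind of feature vector such a classifier might consume (the exact feature set and the three network architectures are not listed in the abstract), assuming per-word total gaze duration and fixation count as the two features:

        import numpy as np

        def fixation_features(fixations_by_word):
            # One row per word: [total gaze duration in ms, number of fixations].
            return np.array([[sum(f), len(f)] for f in fixations_by_word], dtype=float)

        # Example: fixation durations (ms) recorded on three consecutive words.
        X = fixation_features([[210, 180], [540, 230, 190], [160]])
        # Stress-level prediction could then reuse the ELM sketch above, e.g.:
        # W, b, beta = elm_train(X, stress_onehot); labels = elm_predict(X, W, b, beta)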
     5. A fundamental-frequency modeling method based on the intonation-superposition Fujisaki model is introduced, and F0 contour generation and prosody modification are discussed.
     The workflow of this modeling method is outlined: taking the Fujisaki model, a parametric F0 model based on intonation superposition, as the prototype, and starting from duration-normalized original speech, the prosody generated by the eye-movement duration model and the stress prediction results are combined into an improved speech synthesis model, the ED_Fujisaki model, which can synthesize personalized prosody carrying the reader's subjective prosodic expression.
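     For reference, the standard Fujisaki formulation represents the log-F0 contour as a base frequency plus superposed phrase and accent components; the Python sketch below evaluates that textbook form. The command times, amplitudes and time constants are made-up illustration values, and the eye-movement-driven modifications of the ED_Fujisaki model are not reproduced here.

        import numpy as np

        def phrase_component(t, T0, Ap, alpha=3.0):
            # Gp: phrase control response to an impulse command of amplitude Ap at time T0.
            x = t - T0
            return Ap * np.where(x >= 0, alpha**2 * x * np.exp(-alpha * x), 0.0)

        def accent_component(t, T1, T2, Aa, beta=20.0, gamma=0.9):
            # Accent control response to a step command of amplitude Aa on the interval [T1, T2].
            def Ga(x):
                return np.where(x >= 0, np.minimum(1.0 - (1.0 + beta * x) * np.exp(-beta * x), gamma), 0.0)
            return Aa * (Ga(t - T1) - Ga(t - T2))

        def fujisaki_f0(t, Fb, phrase_cmds, accent_cmds):
            # ln F0(t) = ln Fb + sum of phrase components + sum of accent components.
            lnf0 = np.log(Fb) * np.ones_like(t)
            for T0, Ap in phrase_cmds:
                lnf0 += phrase_component(t, T0, Ap)
            for T1, T2, Aa in accent_cmds:
                lnf0 += accent_component(t, T1, T2, Aa)
            return np.exp(lnf0)

        t = np.linspace(0.0, 2.0, 400)
        f0 = fujisaki_f0(t, Fb=120.0, phrase_cmds=[(0.0, 0.5)],
                         accent_cmds=[(0.3, 0.7, 0.4), (1.1, 1.5, 0.3)])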
With the development of information technology and artificial intelligence, speech synthesis plays a significant role in the field of human-computer interaction. However, the main problems of current speech synthesis techniques are a lack of naturalness and an interaction mode too monotonous to realize a mechanism of user subjective drive.
     This paper summarizes the general process of speech synthesis and points out that the prosody generation module is an important part of this process. The duration model and the stress model are two key issues for prosody generation. For the duration model, synchronous control between the reader's gaze duration and the pronunciation duration of synthesized speech is presented. For the stress model, the Extreme Learning Machine (ELM) and the Semi-supervised Extreme Learning Machine (SELM) are presented to predict stress, and they are compared through experiments. Semantic stress estimation is also investigated: because semantic stress depends on the expression of subjective awareness, the relationship between eye movements and stress is statistically analyzed.
     Around the aspects illustrated above, the main research work and innovation points are listed as follows:
     The method of using eye-movement signals to control speech synthesis is proposed. Introducing eye-movement characteristics into speech synthesis enriches the forms of human-computer interaction and has practical significance and application prospects for assistive speech interaction for people with disabilities. Based on the characteristics of implicit-prosody reading, the relative independence of the phonological processing system for text and the eye-movement control system is discussed. It is shown that, under the same text familiarity, the gaze duration during reading and the internal articulation duration are synchronous.
     A single-hidden-layer feedforward neural network, the ELM, is proposed for Chinese stress prediction. In the experiments, ELM and an SVM with an RBF kernel are used for Chinese stress prediction, and the results show that ELM maintains high accuracy while greatly improving the speed of classification learning and prediction.
     A modified semi-supervised SELM model is proposed for stress prediction. SELM is intended for training sets with only a small number of labeled samples. Based on learning from the labeled samples, the algorithm tests the confidence threshold of the unlabeled samples; the test swaps the training set and the prediction set to determine the high-confidence samples used for expansion. The experiments show that SELM classifies unlabeled samples efficiently, providing an effective solution for reducing the labeling workload.
     An exploratory study of semantic stress prediction based on gaze characteristics is presented. A group of eye-movement stress prediction experiments is carried out to examine how eye-movement data can predict semantic stress in a specific context, and three kinds of neural network models are used to classify the eye-movement samples. The results show that features such as gaze duration and fixation count are correlated with the semantic stress level.
     The Fujisaki modeling method based on intonation superposition is introduced to discuss fundamental-frequency contour generation and prosody modification. A modified speech synthesis model, the ED_Fujisaki model, is presented; it can synthesize personalized prosody with the reader's subjective expression.
