Research on Natural Color Night Vision Imaging Methods Based on Image Analysis
Abstract
Night vision technology uses electro-optical imaging devices to extend the spectral response range of the human eye, in particular the ability to observe scenes in darkness, rendering the night "transparent". Traditional night vision images are monochrome, which hampers scene understanding and target recognition. As the role of color in the human cognitive system has become better understood, achieving color night vision, especially natural color night vision consistent with the characteristics of human vision, has become a research focus in the night vision field. Natural color night vision technology yields the best observation performance from night vision imagery and has important applications in military and civilian areas such as battlefield surveillance, intelligence transmission, criminal investigation, security inspection, traffic control, night navigation, and cultural relic preservation.
     Starting from the imaging and application characteristics of night vision images, this thesis first investigates natural color night vision with scene depth and stereoscopic perception, natural color night vision for single-band thermal images, and real-time natural color night vision for infrared/low-light-level dual-band images. It then turns to the mathematical models underlying natural color night vision, studying a dimensionality reduction algorithm for image recognition and a sparse learning algorithm for model training. Specifically, the main research contents and innovations of the thesis are as follows:
     (1) Natural color night vision with scene depth and stereoscopic perception.
     We raise the new problem of giving night vision images a stereoscopic sense of space. Based on the characteristics of night vision imagery, we propose estimating scene depth from the infrared/low-light-level intensities of monocular dual-band night vision images and then enhancing the spatial sense of the image by varying color saturation. The method first builds a pattern database of night vision images and uses radiation/reflection and texture features to recognize the scene categories in the image, assigning each object its characteristic color; the saturation of each object's color is then modulated according to the estimated depth, so as to obtain a color night vision image with a stereoscopic sense of space consistent with human visual experience. Experiments show that such color night vision images possess not only natural colors consistent with human vision but also a stereoscopic sense of space consistent with scene depth, significantly improving the observer's understanding of the image and the target recognition rate.
     (2) Natural color night vision for single-band thermal images.
     We raise the new problem of achieving natural color night vision from a single-band thermal image. To overcome the lack of detail and local information in thermal images, we analyze each pixel with a feature vector carrying "multi-scale" and "spatial context" information. Because the luminance distribution of thermal images differs greatly from that of visible-light images, we are the first to build the color estimation model by "supervised learning", giving a linear model based on linear regression and a nonlinear model based on support vector regression (SVR). Unlike previous color night vision methods based on "multi-band image fusion", this method works directly on a single-band thermal image, which is of great value for improving the portability and reducing the cost of color night vision systems. Experiments show that the method effectively achieves natural color night vision for single-band thermal images.
     (3) Real-time natural color night vision for infrared/low-light-level dual-band images.
     To overcome the fact that previous real-time color night vision methods are unsuitable for infrared/low-light-level dual-band images, a new "natural-highlight color lookup table" is designed according to the characteristics of these images. Unlike earlier methods, the new lookup table is generated not from a natural color image but from a specially constructed infrared/low-light-level color fusion image in which the environment has natural colors and targets have highlight colors. The new lookup table therefore correlates well with the spectral variation of infrared/low-light-level dual-band images and enables accurate mapping. When synthesizing this special fusion image, texture information from the low-light-level band is used to obtain natural colors, compensating for the dual-band imagery's lack of visible-like spectral variation; and a feature-level fusion image of the infrared and low-light-level bands is used as the luminance channel to maintain high contrast between target regions and the background, compensating for the absence of highlight colors for hot targets among the natural colors. Experiments show that the resulting images render the environment in natural colors and targets in highlight colors, giving observers good situational awareness together with high sensitivity to hot targets.
     (4) Semi-supervised dimensionality reduction based on global inference preserving projection.
     We propose GIPP, a linear semi-supervised dimensionality reduction method that fully exploits the discriminative information implicit in unlabeled samples. Unlike previous semi-supervised methods based on local geometric features, it defines and uses a global structure of the unlabeled samples, the "global discriminative structure", to derive the projection matrix, so that the class information implicit in unlabeled samples is fully exploited and the classification performance of the reduced data improves. To infer this implicit class information, a path-based dissimilarity measure is used to construct the connection matrix between data points. Experiments on data visualization and on object recognition, face recognition, and spoken letter recognition databases demonstrate the method's effectiveness. Finally, experiments illustrate the application of GIPP to color night vision for low-light-level images: based on an automatic relevance feedback structure, the color reference image used for color transfer is selected automatically.
     (5) Variational Bayesian inference for sparse linear models with Double Lomax priors.
     We propose a new "sparsity-promoting" prior distribution, the Double Lomax prior. We first show that it is a tighter relaxation of the L0 norm than the ARD prior, so in theory it can recover sparse variables from fewer measurements; we then show that, like the ARD prior, it can be expressed as a Gaussian scale mixture (GSM), so closed-form updates are available, combining theoretical superiority with computational tractability. We further derive a full variational Bayesian procedure for the sparse linear model (SLM) with this prior, and show that it largely avoids the "multiple local extrema" and "over-fitting" problems that arise with non-convex relaxation priors. Experiments on autoregressive (AR) model identification and compressed sensing (CS) demonstrate its superiority over ARD-based methods. Finally, experiments illustrate its application to color night vision for single-band thermal images: an accurate color estimation model can be trained from fewer samples.
With electro-optical imaging devices, night vision technology extends the range of human spectral response and enhances the ability to observe in the dark, rendering the night "transparent". To date, the standard representation of night vision imagery has been monochrome, which is disadvantageous to scene interpretation and target detection. However, with a deeper understanding of the role of color in human visual perception, there is growing interest in displaying night vision imagery in color, especially in natural colors consistent with human visual properties. Natural color night vision technology yields the best observation performance from night vision images and has therefore demonstrated important applications in both military and civilian areas, such as battlefield surveillance, intelligence transmission, criminal investigation, safety inspection, traffic control, night navigation, and historic preservation.
     In this thesis, natural color night vision technology is discussed in two respects. The first focuses on the properties of night vision imaging and its applications: the problem of depth perception enhancement in night vision imagery, the problem of colorization for single-band thermal images, and the problem of real-time colorization for visible/thermal images. The second focuses on the mathematical models underlying natural color night vision: a dimensionality reduction method for image recognition and a sparse learning method for model training. Specifically, the main contributions and innovations of this thesis are as follows:
     (1) Study on Coloring Night Vision Imagery for Depth Perception.
     Depth perception in night vision imagery is important for scene comprehension. A novel scheme is proposed to give multiband night vision imagery a natural color appearance with a sense of depth. The scheme simulates the depth cue by varying the saturation of each colored object in correspondence with its relative depth, which is estimated from the ratio between the infrared and low-light-level sensor outputs. In the proposed scheme, a night vision pattern database is built in advance and employed to recognize objects based on emitting-reflecting and texture features. Each object is then assigned its corresponding natural color, and the saturation of that color is varied according to the estimated depth information. Experimental results show that the proposed scheme achieves satisfying results, providing night vision imagery with a smooth natural color appearance as well as a sense of depth, thereby improving situational awareness and target detection.
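The depth-cue step described above can be sketched numerically. This is a minimal illustration, not the thesis's calibrated pipeline: the IR/LLL ratio as a relative-depth estimate, the direction of the near/far mapping, and the 0.7 desaturation factor are all assumptions made here for the example.

```python
import numpy as np

def depth_saturation_hsv(ir, lll, hue, eps=1e-6):
    """Sketch of the depth cue: treat the IR/LLL intensity ratio as a
    relative-depth estimate, then desaturate farther objects so the
    colorized image carries a sense of depth."""
    ratio = ir / (lll + eps)
    depth = (ratio - ratio.min()) / (ratio.max() - ratio.min() + eps)
    saturation = 1.0 - 0.7 * depth                 # farther -> less saturated
    value = np.clip(0.5 * ir + 0.5 * lll, 0.0, 1.0)
    return np.stack([hue, saturation, value], axis=-1)  # HSV image
```

In a full implementation the hue plane would come from the pattern-database recognition step (each object class mapped to its characteristic color); here it is simply passed in.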
     (2) Study on Colorizing Single-Band Thermal Night Vision Images.
     We consider the problem of giving single-band thermal night vision images a natural daytime color appearance. Modeling the color distribution of thermal imagery is challenging, since there are insufficient local features for estimating the chromatic value at a point. The proposed color estimation model therefore incorporates multi-scale and spatially arranged image features; both a linear model and a nonlinear model based on support vector regression (SVR) are discussed. Because the luminance of a thermal image is largely uncorrelated with that of a natural image, a supervised learning procedure is employed to estimate the colors of monochrome images. Unlike current color night vision methods based on "multi-band night vision image fusion", the proposed approach can be applied directly to a single-band thermal image, enhancing the portability of the night vision system. Experimental results show that the proposed approach yields a relatively accurate description of the desired color distribution.
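The supervised linear variant can be sketched as follows. This is an assumption-laden stand-in: the box-blur pyramid is a proxy for the thesis's multi-scale, spatially contextual features, and the feature set and scales are illustrative only.

```python
import numpy as np

def box_blur(img, k):
    """Mean filter with edge padding; stands in for the multi-scale
    smoothing used to build spatially contextual per-pixel features."""
    pad = np.pad(img, k, mode='edge')
    out = np.zeros_like(img, dtype=float)
    for dy in range(2 * k + 1):
        for dx in range(2 * k + 1):
            out += pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (2 * k + 1) ** 2

def fit_linear_color_model(thermal, chroma, scales=(1, 2, 4)):
    """Least-squares fit of one chromatic channel from multi-scale
    per-pixel features (the linear-regression half of the method;
    the SVR variant would swap in a kernel regressor here)."""
    feats = [thermal] + [box_blur(thermal, k) for k in scales]
    X = np.stack([f.ravel() for f in feats] + [np.ones(thermal.size)], axis=1)
    w, *_ = np.linalg.lstsq(X, chroma.ravel(), rcond=None)
    return w, X @ w          # weights and the fitted chroma values
```

Training pairs (thermal image, chromatic channel of a registered daytime color image) would be supplied by the supervised learning procedure; at test time only the thermal image and the learned weights are needed.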
     (3) Study on Real-Time Color Night Vision for Visible and Thermal Images.
     A real-time scheme is proposed to display visible and thermal images in a fused representation with a natural daylight-colored background and highlighted thermal targets. The scheme is based on a specially designed "natural-highlight color" lookup table (LUT). The LUT is derived from the combination of a visible-thermal image pair and its daylight-background, highlighted-targets fused representation. To form this representation, the grayscale visible image is first given daylight colors by a natural color transfer technique based on local texture information; the luminance component of this colorized image is then replaced with a feature-level fusion image that preserves the target regions of the thermal image and the texture of the visible image, obtained with the K-means clustering algorithm and the discrete wavelet transform (DWT). Once the LUT has been derived, the color mapping can be applied to different images and deployed in real time. Experimental results show that the proposed scheme achieves satisfying results and improves overall scene recognition and situational awareness.
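The LUT mechanism that makes the mapping real-time capable can be sketched as below. The indexing scheme (quantized visible/thermal intensity pairs) and the bin count are assumptions for illustration; the thesis's actual table construction may differ.

```python
import numpy as np

def build_lut(vis, thm, fused_rgb, bins=32):
    """Derive a (visible, thermal) -> RGB table by averaging the fused
    colors that fall into each quantized intensity pair.  Done once,
    offline, from one registered image pair and its fused rendering."""
    vi = np.minimum((vis * bins).astype(int), bins - 1)
    ti = np.minimum((thm * bins).astype(int), bins - 1)
    lut = np.zeros((bins, bins, 3))
    cnt = np.zeros((bins, bins, 1))
    np.add.at(lut, (vi, ti), fused_rgb)   # unbuffered scatter-add
    np.add.at(cnt, (vi, ti), 1.0)
    return lut / np.maximum(cnt, 1.0)

def apply_lut(vis, thm, lut):
    """Per-pixel table lookup: cheap enough for real-time deployment."""
    bins = lut.shape[0]
    vi = np.minimum((vis * bins).astype(int), bins - 1)
    ti = np.minimum((thm * bins).astype(int), bins - 1)
    return lut[vi, ti]
```

Because all the expensive work (color transfer, feature-level fusion) happens once when the table is built, the per-frame cost reduces to an array lookup, which is the point of the LUT design.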
     (4) Study on Global Inference Preserving Projection for Semi-Supervised Discriminant Analysis.
     A new linear dimensionality reduction approach, Global Inference Preserving Projection (GIPP), is proposed for classification in the semi-supervised case. It is based on a new global structure that reveals the underlying discriminative knowledge of unlabeled samples, and a path-based dissimilarity measurement is used to infer their underlying class information. Experimental results on data visualization and on object recognition, face recognition, and spoken letter recognition databases demonstrate the effectiveness of the proposed approach. Moreover, the approach can be successfully applied to colorizing low-light-level night vision images, in that the natural color reference image for color transfer can be selected automatically and efficiently.
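The path-based dissimilarity at the heart of the inference step can be sketched compactly. Assuming it is the minimax path distance in the style of Fischer and Buhmann (cited direction, not the thesis's exact formulation), two samples are close whenever some chain of small hops connects them, which is what lets unlabeled points in an elongated cluster share inferred class information.

```python
import numpy as np

def path_based_dissimilarity(D):
    """Minimax path distance: the dissimilarity between two samples is
    the smallest achievable 'largest hop' over all connecting paths, so
    points linked by a chain of close neighbors become mutually close
    even when far apart in Euclidean terms."""
    P = D.astype(float).copy()
    for k in range(len(P)):  # Floyd-Warshall update on the (min, max) semiring
        P = np.minimum(P, np.maximum(P[:, k:k + 1], P[k:k + 1, :]))
    return P
```

The resulting matrix could then feed the connection matrix from which the projection is derived; that downstream construction is specific to GIPP and is not reproduced here.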
     (5) Study on a Variational Bayesian Approach to Sparse Linear Models Based on Double Lomax Priors.
     A new family of sparsity-promoting priors, termed the Double Lomax prior, is proposed. It is shown that, on one hand, it provides a tighter approximation to the L0 norm than the ARD prior and is therefore theoretically superior for recovering sparse vectors from fewer measurements; on the other hand, it admits a Gaussian scale mixture (GSM) representation and is therefore computationally tractable for efficient Bayesian processing. A full variational Bayesian inference procedure is developed to solve the sparse linear model (SLM) with Double Lomax priors. Being a strictly log-convex prior, the Double Lomax prior brings challenges to the inference procedure, such as multimodal and asymmetric posteriors; analysis shows that the variational Bayesian inference developed here is needed to avoid local minima and over-fitting. Experiments on both correlated and uncorrelated SLM simulations, with applications to AR model identification and compressive sensing, demonstrate the effectiveness of the proposed approach. Moreover, the approach can be successfully applied to colorizing single-band thermal night vision images, in that fewer training samples are needed.
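The "tighter than L1" claim can be made concrete with a small numerical check. The density form below is an assumption consistent with the abstract's description (a Lomax tail mirrored about zero), not taken from the thesis; the shape and scale values are arbitrary.

```python
import numpy as np

ALPHA, BETA = 1.0, 0.1   # hypothetical shape/scale parameters

def double_lomax_penalty(w):
    """Negative log of an assumed double-Lomax density
    p(w) ~ (1 + |w|/BETA) ** -(ALPHA + 1).  The penalty is concave in
    |w|, i.e. it grows ever more slowly for large coefficients, which is
    what makes it a tighter relaxation of the L0 norm than the convex
    L1 penalty of a Laplace prior."""
    return (ALPHA + 1) * np.log1p(np.abs(w) / BETA)
```

Concavity of the penalty in |w| is exactly the non-convexity that produces the multiple local extrema mentioned above, which is why a carefully designed variational treatment is needed rather than plain gradient descent.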
引文
[1]G.L. Walls, "The Vertebrate Eye and its Adaptive Radiation," Cranbrook Institute of Science, Bloomfield Hills, Michigan,2006.
    [2]U. Ansorge, G. Horstmann, and E. Carbone, "Top-down contingent capture by color: evidence from RT distribution analyses in a manual choice reaction task," Acta Psychologica, vol.120, no.3, pp.243-266,2005.
    [3]B.F. Green, and L.K. Anderson, "Colour coding in a visual search task," Journal of Experimental Psychology, vol.51, pp.19-24,1956.
    [4]C.L. Folk, and R. Remington, "Selectivity in distraction by irrelevant featural singletons:evidence for two forms of attentional capture," Journal of Experimental Psychology:Human Perception and Performance, vol.24, no.3, pp.847-858,1998.
    [5]V. Goffaux, C. Jacques, A. Mouraux, A. Oliva, P. Schyns, and B. Rossion, "Diagnostic colours contribute to the early stages of scene categorization:behavioural and neurophysiological evidence," Visual Cognition, vol.12, no.6, pp.878-892, 2005.
    [6]G.A. Rousselet, O.R. Joubert, and M. Fabre-Thorpe, "How long to get the "gist" of real-world natural scenes?," Visual Cognition, vol.12, no.6, pp.852-877,2005.
    [7].A. Cavanillas, "The Role of Color and False Color in Object Recognition with Degraded and Non-Degraded Images," Naval Postgraduate School, Monterey, CA, 1999.
    [8]A. Oliva, and P.G. Schyns, "Diagnostic colors mediate scene recognition," Cognitive Psychology, vol.41, pp.176-210,2000.
    [9]I. Spence, P. Wong, M. Rusan, and N. Rastegar, "How color enhances visual memory for natural scenes," Psychological Science, vol.17, no.1, pp.1-6,2006.
    [10]K.R. Gegenfurtner, and J. Rieger, "Sensory and cognitive contributions of color to the recognition of natural scenes," Current Biology, vol.10, no.13, pp.805-808, 2000.
    [11]F.A. Wichmann, L.T. Sharpe, and K.R. Gegenfurtner, "The contributions of color to recognition memory for natural scenes," Journal of Experimental Psychology: Learning, Memory, and Cognition, vol.28, no.3, pp.509-520,2002.
    [12]M.T. Sampson, "An assessment of the impact of fused monochrome and fused color night vision displays on reaction time and accuracy in target detection (Report AD-A321226)," Naval Postgraduate School, Monterey, CA,1996.
    [13]R.G. Driggers, K.A. Krapels, R.H. Vollmerhausen, P.R. Warren, D.A. Scribner, J.G. Howard, B.H. Tsou, and W.K. Krebs, "Target detection threshold in noisy color imagery," In Proc. Infrared Imaging Systems:Design, Analysis, Modeling, and Testing XII, G.C. Holst (Ed.), The International Society for Optical Engineering, 2001,pp.162-169.
    [14]M.J. Sinai, J.S. McCarley, and W.K. Krebs, "Scene recognition with infra-red, lowlight, and sensor fused imagery," In Proc. the IRIS Specialty Groups on Passive Sensors,1999, pp.1-9.
    [15]W.K. Krebs, D.A. Scribner, G.M. Miller, J.S. Ogawa, and J. Schuler, "Beyond third generation:a sensor-fusion targeting FLIR pod for the F/A-18," In Proc. Sensor Fusion:Architectures Algorithms and Applications,vol. Ⅱ, B.V. Dasarathy (Ed.), International Society for Optical Engineering,1998, pp.129-140.
    [16]J.E. Joseph, and D.R. Proffitt, "Semantic versus perceptual influences of color in object recognition," Journal of Experimental Psychology:Learning, Memory, and Cognition, vol.22, no.2, pp.407-429,1996.
    [17]A. Oliva, "Gist of a scene," In Proc. Neurobiology of Attention, L. Itti, G. Rees, J.K. Tsotsos (Eds.), Academic Press,2005, pp.251-256.
    [18]E.A. Essock, M.J. Sinai, J.S. McCarley, W.K. Krebs, and J.K. DeFord, "Perceptual ability with real-world night-time scenes:image-intensified, infrared, and fusedcolor imagery," Human Factors, vol.41, no.3, pp.438-452,1999.
    [19]A. Toet, and J.K. IJspeert, "Perceptual evaluation of different image fusion schemes," In Proc. Signal Processing, Sensor Fusion, and Target Recognition X, I. Kadar (Ed.), The International Society for Optical Engineering,2001, pp.436-441.
    [20]J.T. Vargo, "Evaluation of operator performance using true color and artificial color in natural scene perception (Report AD-A363036)," Monterey, Naval Postgraduate School, CA,1999.
    [21]A. Toet, J.K. IJspeert, A.M. Waxman, and M. Aguilar, "Fusion of visible and thermal imagery improves situational awareness," In Proc. Enhanced and Synthetic Vision, J.G. Verly (Ed.), International Society for Optical Engineering, Bellingham, 1997, pp.177-188.
    [22]M.J. Sinai, J.S. McCarley, W.K. Krebs, and E.A. Essock, "Psychophysical comparisons of single-and dual-band fused imagery," In Proc. Enhanced and Synthetic Vision, J.G. Verly (Ed.), The International Society for Optical Engineering, 1999,pp.176-183.
    [23]B.L. White, "Evaluation of the Impact of Multispectral Image Fusion on Human Performance in Global Scene Processing," Naval Postgraduate School, Monterey, CA, 1998.
    [24]N.P. Jacobson, and M.R. Gupta, "Design goals and solutions for display of hyperspectral images," IEEE Trans. Geoscience and Remote Sensing, vol.43, no.11, pp.2684-2692,2005.
    [25]N.P. Jacobson, M.R. Gupta, and J.B. Cole, "Linear fusion of image sets for display," IEEE Trans. Geoscience and Remote Sensing, vol.45, no.10, pp.3277-3288,2007.
    [26]S. Sun, and H. Zhao, "Perceptual evaluation of color night vision image quality," In Proc. International Conference on Information Fusion,2007.
    [27]金伟其,王玲雪,赵源萌,史世明,王霞,彩色夜视成像处理算法的新进展,红外与激光工程,vol.37,no.1,pp.147-150,2008.
    [28]N. Cohen, G. Mizrahni, G. Sarusi, and A. Sa'ar, "Integrated HBT/QWIP structure for dual color imaging," Infrared Physics and Technology, vol.47, no.1-2, pp.43-52, 2005.
    [29]R. Breiter, W.A. Cabanski, K.-H. Mauk, W. Rode, J. Ziegler, H. Schneider, and M. Walther, "Multicolor and dual-band IR camera for missile warning and automatic target recognition," In Proc. Targets and Backgrounds:Characterization and Representation Ⅷ, W.R. Watkins, D. Clement, W.R. Reynolds (Eds.), The International Society for Optical Engineering,2002, pp.280-288.
    [30]A.C. Goldberg, P. Uppal, and M. Winn, "Detection of buried land mines using a dualband LWIR/LWIR QWIP focal plane array," Infrared Physics and Technology, vol.44, no.5-6, pp.427-437,2003.
    [31]E. Cho, B.K. McQuiston, W. Lim, S.B. Rafol, C. Hanson, R. Nguyen, and A. Hutchinson, "Development of a visible-NIR/LWIR QWIP sensor," In Proc. Infrared Technology and Applications XXIX, B.F. Andresen, G.F. Fulop (Eds.), The International Society for Optical Engineering,2003, pp.735-744.
    [32]S.V. Bandara, S. D. Gunapala, J. K. Liu and S. B. Rafol et al., "Four-band quantum well infrared photodetector array," Infrared Physics and Technology, vol.44, no.5-6, pp.369-375,2003.
    [33]M. Aguilar, D.A. Fay, D.B. Ireland, J.P. Racamoto, W.D. Ross, and A.M. Waxman, "Field evaluations of dual-band fusion for color night vision," In Proc. Enhanced and Synthetic Vision, J.G. Verly (Ed.), The International Society for Optical Engineering,1999, pp.168-175.
    [34]M. Aguilar, D.A. Fay, W.D. Ross, A.M. Waxman, D.B. Ireland, and J.P. Racamoto, "Realtime fusion of low-light CCD and uncooled IR imagery for color night vision," In Proc. Enhanced and Synthetic Vision, J.G. Verly (Ed.), The International Society for Optical Engineering,1998, pp.124-135.
    [35]D.A. Fay, A.M. Waxman, M. Aguilar, D.B. Ireland, J.P. Racamato, W.D. Ross, W. Streilein, and M.I. Braun, "Fusion of multi-sensor imagery for night vision:color visualization, target learning and search," In Proc. the Third International Conference on Information Fusion,2000, pp. TuD3-3-TuD3-10.
    [36]A.M. Waxman, D.A. Fay, A.N. Gove, M.C. Seibert, J.P. Racamato, J.E. Carrick, and E.D. Savoye, "Color night vision:fusion of intensified visible and thermal IR imagery," In Proc. Synthetic Vision for Vehicle Guidance and Control, J.G. Verly (Ed.), The International Society for Optical Engineering,1995, pp.58-68.
    [37]A.M. Waxman, A.N. Gove, D.A. Fay, J.P. Racamoto, J.E. Carrick, M.C. Seibert, and E.D. Savoye, "Color night vision:opponent processing in the fusion of visible and IR imagery," Neural Networks, vol.10, no.1, pp.1-6,1997.
    [38]A.M. Waxman, M. Aguilar, R.A. Baxter, D.A. Fay, D.B. Ireland, J.P. Racamoto, and W.D. Ross, "Opponent-color fusion of multi-sensor imagery:visible, IR and SAR," In Proc. the 1998 Conference of the IRIS Specialty Group on Passive Sensors, 1998,pp.43-61.
    [39]A. Toet, and J. Walraven, "New false colour mapping for image fusion," Optical Engineering, vol.35, no.3, pp.650-658,1996.
    [40]A. Toet, J.K.I Jspeert, A.M.Waxman and M.Aguilar, "Fusion of visible and thermal imager improves situational awareness," Displays, No.18, pp.85-95,1997.
    [41]J. Li, Q. Pan, T. Yang, and Y. Cheng, "Color based grayscale-fused image enhancement algorithm for video surveillance," In Proc. the Third International Conference on Image and Graphics (ICIG'04),2004, pp.47-50.
    [42]D. Scribner, J.M. Schuler, P. Warren, R. Klein, and J.G. Howard, "Sensor and image fusion," In Proc. Encyclopedia of Optical Engineering, R.G. Driggers (Ed.), 2003, pp.2577-2582.
    [43]J.G. Howard, P. Warren, R. Klien, J. Schuler, M. Satyshur, D. Scribner, and M.R. Kruer, "Real-time color fusion of E/O sensors with PC-based COTS hardware," In Proc. Targets and Backgrounds VI:Characterization, Visualization, and the Detection Process, W.R.Watkins, D. Clement, W.R. Reynolds (Eds.), The International Society for Optical Engineering,2000, pp.41-48.
    [44]J. Schuler, J.G. Howard, P. Warren, D.A. Scribner, R. Klien, M. Satyshur, and M.R. Kruer, "Multiband E/O color fusion with consideration of noise and registration," In Proc. Targets and Backgrounds VI:Characterization, Visualization, and the Detection Process, W.R. Watkins, D. Clement, W.R. Reynolds (Eds.), The International Society for Optical Engineering,2000, pp.32-40.
    [45]G. Huang, G. Ni, and B. Zhang, "Visual and infrared dual-band false color image fusion method motivated by Land's experiment," Optical Engineering, vol.46, no.2, pp.027001-1-027001-10,2007.
    [46]P. Warren, J.G. Howard, J. Waterman, D.A. Scribner, and J. Schuler, "Real-time, PC-based color fusion displays (Report A073093)," Naval Research Lab., Washington, DC,1999.
    [47]D.A. Fay, A.M. Waxman, M. Aguilar, D.B. Ireland, J.P. Racamato, W.D. Ross, W.Streilein, and M.I. Braun, "Fusion of 2-/3-/4-sensor imagery for visualization, target learning, and search," In Proc. Enhanced and Synthetic Vision, J.G. Verly (Ed.), SPIE The International Society for Optical Engineering,2000, pp.106-115.
    [48]L. Bai, W. Qian, Y. Zhang, and B. Zhang, "Theory analysis and experiment study on the amount of information in a color night vision system," In Proc. Third International Symposium on Multispectral Image Processing and Pattern Recognition, H. Lu, T. Zhang (Ed.), The International Society for Optical Engineering,2003, pp. 5-9.
    [49]S. Das, Y.-L. Zhang, and W.K. Krebs, "Color night vision for navigation and surveillance," In Proc. the Fifth Joint Conference on Information Sciences, J. Sutton, S.C. Kak (Eds.),2000.
    [50]A.M. Waxman, A.N. Gove, M.C. Seibert, D.A. Fay, J.E. Carrick, J.P. Racamato, E.D. Savoye, B.E. Burke, R.K. Reich, et al., "Progress on color night vision: visible/IR fusion, perception and search, and low-light CCD imaging," In Proc. Enhanced and Synthetic Vision, J.G. Verly (Ed.), The International Society for Optical Engineering,1996, pp.96-107.
    [51]A.M. Waxman et al., "Solid-state color night vision:fusion of low-light visible and thermal infrared imagery," MIT Lincoln Laboratory Journal, vol.11, pp.41-60, 1999.
    [52]D. Scribner, P. Warren, J. Schuler, M. Satyshur, and M. Kruer, "Infrared color vision:an approach to sensor fusion," Optics and Photonics News, vol.9, no.8, pp. 27-32,1998.
    [53]G.L. Walls, The Vertebrate Eye and its Adaptive Radiation, Cranbrook Institute of Science, Bloomfield Hills, Michigan,2006.
    [54]金伟其,王岭雪,王生祥,等.夜视图像的彩色融合技术及其进展.红外技术,vol.25,no.1,pp.6-12,2003,
    [55]王岭雪,金伟其,刘广荣,高稚允.基于侧抑制特性的夜视图像彩色融合方法研究.北京理工大学学报,vo1.23,no.4,pp.513-516,2003.
    [56]王岭雪,金伟其,石俊生,等.基于拮抗视觉特性的多波段彩色夜视融合方法研究.红外与毫米波学报,vo1.25,no.6,pp.455-459,2006.
    [57]L. Bai, G. Gu, Q. Chen, and B. Zhang. "Study on information obtaining and fusion of color night vision system," In Proc. SPIE, vol.4556,2001.
    [58]L. Bai, Q. Chen, G. Gu, and B. Zhang. "Infrared and low light level image processing in color night vision technology," In Proc. SPIE, vol.4548,2001.
    [59]柏连发,张毅,顾国华,等,微光图像和紫外图像分析与融合方法研究,红外与激光工程,vo1.36,no.1,pp.113-117,2007.
    [60]A. Toet, "Natural colour mapping for multiband nightvision imagery," Information Fusion, vol.4, no.3, pp.155-166,2003.
    [61]Y. Zheng, B.C. Hansen, A.M. Haun, and E.A. Essock, "Coloring night-vision imagery with statistical properties of natural colors by using image segmentation and histogram matching," In Proc. Color Imaging X:Processing, Hardcopy and Applications, R. Eschbach, G.G. Marcu (Eds.), The International Society for Optical Engineering,2005, pp.107-117.
    [62]S. Sun, Z. Jing, Z. Li, and G. Liu, "Color fusion of SAR and FLIR images using a natural color transfer technique," Chinese Optics Letters, vol.3, no.4, pp.202-204, 2005.
    [63]S. Sun, Z. Jing, G. Liu and Z. Li, "Transfer color to night vision images," Chinese Optics Letters, vol.3, no.8, pp.448-450,2005.
    [64]V. Tsagiris, and V. Anastassopoulos, "Fusion of visible and infrared imagery for night color vision," Displays, vol.26, no.4-5, pp.191-196,2005.
    [65]L. Wang, W. Jin, Z. Gao, and G. Liu, "Color fusion schemes for low-light CCD and infrared images of different properties," In Proc. Electronic Imaging and Multimedia Technology, L. Zhou, C.-S. Li, Y. Suzuki (Eds.), vol.Ⅲ, The International Society for Optical Engineering,2002, pp.459-466.
    [66]Y. Zheng, and E.A. Essock, "A local-coloring method for night-vision colorization utilizing image analysis and fusion," Information Fusion, vol.9, no.2, pp. 186-199,2008.
    [67]S. Shi, L. Wang, W. Jin and Y. Zhao, "Color night vision based on color transfer in YUV color space," In Proc. SPIE The International Society for Optical Engineering 6623,66230,2008.
    [68]M.A. Hogervorst and A. Toet, "Fast and true-to-life application of daytime colors to night-time imagery," In Proc. the International Conference on Information Fusion, 2007.
    [69]M.A. Hogervorst, and A. Toet, "Method for applying daytime colors to nighttime imagery in realtime", In Proc. the International Conference on Information Fusion, 2008.
    [70]A. Toet and M.A. Hogervorst, "Portable real-time color night vision". In Proc. the International Conference on Information Fusion,2008.
    [71]M.A. Hogervorst, A. Toet, and F.L. Kooi, TNO Defense Security and Safety, Method and system for converting at least one first-spectrum image into a second-spectrum image 060765328-2202,2006.
    [72]M.A. Hogervorst, and A. Toet, "Fast natural color mapping for night-time imagery," Information Fusion, vol.11, no.2, pp.69,2010.
    [73]A. Toet, and M.A. Hogervorst, "Towards an Optimal Color Representation for Multiband Nightvision Systems," In Proc. the International Conference on Information Fusion,2009.
    [74]A. Toet, "Colorizing single band intensified nightvision images," Displays, vol. 26, no. 1,pp.15-21,2005.
    [75]L. Wang, Y. Zhao, W. Jin, S. Shi and S. Wang (2007). "Real-time color transfer system for low-light level visible and infrared images in YUV color space". Ⅰ. Kadar (Ed.), Signal Processing, Sensor Fusion, and Target Recognition XVI.2007, pp.1-8.
    [76]E. Reinhard, M. Ashikhmin, B. Gooch, et al, "Color transfer between images," IEEE Computer Graphics and Application, vol.21, no.5, pp.34-41,2001.
    [77]T. Welsh, M. Ashikhmin, and K. Mueller, "Transferring color to grayscale images," In Proc. SIGGRAPH,2002.
    [78]D.L. Ruderman, T.W. Cronin, and C.-C. Chiao, "Statistics of cone responses to natural images:implications for visual coding," Journal of the Optical Society of America A, vol.15, no.8, pp.2036-2045,1998.
    [79]A.E. Welchman, A. Deubelius, V. Conrad, H.H. Bulthoff and Z. Kourtzi, "3D shape perception from combined depth cues in human visual cortex," Nature Neuroscience, vol.8, no.6, pp.820-827,2005.
    [80]J. Porrill, J.P. Frisby, W.J. Adams and D. Buckley, "Robust and optimal use of information in stereo vision," Nature, vol.397, pp.63-66,1999.
    [81]B. Wu, T. L. Ooi, and Z. J. He, "Perceiving distance accurately by a directional process of integrating ground information," Letters to Nature, vol.428, pp.73-77, 2004.
    [82]J.M. Loomis, "Looking down is looking up," Nature News and Views, vol.414, pp.155-156,2001.
    [83]S. H. Schwartz, Visual Perception (2nd ed.). Connecticut:Appleton and Lange, 1999.
    [84]E. B. Goldstein, Sensation and Perception (6th ed.), Wadsworth, MI,2001.
    [85]I. Bulthoff, H. Bulthoff, and P. Sinha, "Top-down influences on stereoscopic depth-perception," Nature Neuroscience, vol.1, pp.254-257,1998.
    [86]J. Malik and P. Perona, "Preattentive texture discrimination with early vision mechanisms," Journal of the Optical Society of America A, vol.7, no.5, pp.923-932, 1990.
    [87]S. G. Narasimhan, and S. K. Nayar, "Shedding light on the weather," In Proc. Computer Vision and Pattern Recognition (CVPR),2003.
    [88]R. P. O'Shea, S. G. Blackburn and H. Ono, "Contrast as a depth cue", Vision Research, vol.34, no.12, pp.1595-1604,1994.
    [89]B. A. Wandell, Foundations of Vision. Sunderland:Sinauer Associates,1995.
    [90]M. Wexler, F. Panerai, I. Lamouret, and J. Droulez, "Self-motion and the perception of stationary objects," Nature, vol.409, pp.85-88,2001.
    [91]L. Harkness, "Chameleons use accommodation cues to judge distance," Nature, vol.267, pp.346-349,1977.
    [92]A. Saxena, S. H. Chung and A. Y. Ng, "3-D depth reconstruction from a single still image," International Journal of Computer Vision, vol.76, no.1, pp.53-69,2008.
    [93]G.Wyszecki and W.S. Stiles, Color Science:Concepts and Methods, Quantitative Data and Formulae,2nd ed., Wiley, MI,1982.
    [94]R. M. Haralick, K. Shanmugam and I. Dinstein. "Texture features for image classification," IEEE Trans. Systems, Man and Cybernetics, vol.3, no.6, pp.610-621, 1973.
    [95]D. Comaniciu and P. Meer, "Mean shift:A robust approach toward feature space analysis, " IEEE Trans. Pattern Analysis and Machine Intelligence, vol.24, no.5, pp. 603-619,2002.
    [96]E. Davies, Machine vision:Theory, algorithms, practicalities,2nd ed., Academic Press,1997.
    [97]D.L. Ruderman, T.W. Cronin, and C.-C. Chiao, "Statistics of cone responses to natural images:implications for visual coding," Journal of the Optical Society of America A, vol.15, no.8, pp.2036-2045,1998.
    [98]A. J. Smola, and B. Scholkopf, "A tutorial on support vector regression," Statistics and Computing, vol.14, no.3, pp.199,2004.
    [99]C. Cortes, and V. Vapnik, "Support vector networks," Machine Learning, vol.20, pp.273-297,1995.
    [100]V. Vapnik (1995), The Nature of Statistical Learning Theory. New York: Springer-Verlag.
    [101]J. Davis, and V. Sharma, "Background-Subtraction using Contour-based Fusion of Thermal and Visible Imagery," Computer Vision and Image Understanding, vol. 106, no.2-3, pp.162,2007.
    [102]P.J. Burt, and E.H. Adelson, "Merging images through pattern decomposition," In Proc. the SPIE Applications of Digital Image Processing, A.G. Tescher, ed., vol.575, 1985, pp.173-181.
    [103]G. Pajares, and J.M. de la Cruz, "A wavelet-based image fusion tutorial," Pattern Recognition, vol.37, no.9, pp.1855-1872,2004.
    [104]S.R. Jang, C.T. Sun and E. Mizutani, Neuro-Fuzzy and Soft Computing:A Computational Approach to Learning and Machine Intelligence, Prentice Hall Inc, 1997.
    [105]Y. Zheng, E.A. Essock and B. C. Hansen, "An advanced image fusion algorithm based on wavelet transform-incorporation with PCA and morphological processing," In Proc. the SPIE, vol 5298,2004, pp.177-187.
    [106]G. Liu, Z. Jing, S. Sun, "Multi resolution image fusion scheme based on fuzzy region feature," Journal of Zhejiang University Science A, vol.7, no.2, pp.117-122, 2007.
    [107]X. He, S. Yan, Y. Hu, P. Niyogi, and H.-J. Zhang, "Face Recognition Using Laplacianfaces," IEEE Trans. Pattern Analysis and Machine Intelligence, vol.27, pp. 328-340,2005.
    [108]X. Fu and L. Wang, "Data dimensionality reduction with application to simplifying RBF network structure and improving classification performance," IEEE Trans. Systems, Man, and Cybernetics, Part B, vol.33, pp.399-409,2003.
    [109]M.A. Turk and A.P. Pentland, "Face recognition using eigenfaces," In Proc. IEEE Computer Society Conference on Computer Vision and Pattern Recognition,1991, pp. 586-591.
    [110]P.N. Belhumeur, J.P. Hespanha, and D.J. Kriegman, "Eigenfaces vs. Fisherfaces: Recognition Using Class Specific Linear Projection," IEEE Trans. Pattern Analysis and Machine Intelligence, vol.19, pp.711-720,1997.
    [111]D. Foley, "Considerations of sample and feature size," IEEE Trans. Information Theory, vol.18, pp.618-626,1972.
    [112]M. Belkin, P. Niyogi, and V. Sindhwani, "Manifold Regularization:A Geometric Framework for Learning from Labeled and Unlabeled Examples," Journal of Machine Learning Research, vol.7, pp.2399-2434,2006.
    [113]V. Sindhwani, P. Niyogi, and M. Belkin, "Beyond the point cloud:from transductive to semi-supervised learning," In Proc. International Conference on Meachine Learning (ICML),2005, pp.824-831.
    [114]X. Zhu, Z. Ghahramani, and J.D. Lafferty, "Semi-Supervised Learning Using Gaussian Fields and Harmonic Functions," In Proc. International Conference on Meachine Learning (ICML),2003, pp.912-919.
    [115]D. Cai, X. He, and J. Han, "Semi-supervised Discriminant Analysis," IEEE International Conference on Computer Vision (ICCV),2007.
    [116]Y. Song, F. Nie, C. Zhang, and S. Xiang, "A unified framework for semi-supervised dimensionality reduction," Pattern Recognition, vol.41pp.2789-2799, 2008,.
    [117]J. Chen, J. Ye, and Q. Li, "Integrating Global and Local Structures:A Least Squares Framework for Dimensionality Reduction," IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR),2007.
    [118]B. Liu, M. Wang, R. Hong, Z. Zha, and X.-S. Hua, "Joint learning of labels and distance metric," IEEE Trans. Systems, Man, and Cybernetics, Part B, vol.40, pp. 973-978,2010.
    [119]H. Zhao, "Combining labeled and unlabeled data with graph embedding,' Neurocomputing, vol.69, pp.2385-2389,2006.
    [120]H. Zhao, S. Sun, and Z. Jing, "Local-information-based uncorrelated feature extraction," Optical Engineering, vol.45, pp.020505,2006.
    [121]C. Chen, L. Zhang, J. Bu, C. Wang and W. Chen. "Constrained Laplacian Eigenmap for Dimensionality Reduction," Neurocomputing, vol.73, no.4-6, pp.951-958,2010.
    [122]C. Hou, C. Zhang, Y. Wu and F. Nie. "Multiple View Semi-supervised Dimensionality Reduction," Pattern Recognition, vol.43, no.3, pp.720-730,2010.
    [123]D. Xu, S. Yan. "Semi-Supervised Bilinear Subspace Learning," IEEE Trans. Image Processing, vol.18, no.7, pp.1671-1676,2009.
    [124]F.R.K. Chung, Spectral Graph Theory (CBMS Regional Conference Series in Mathematics, No.92), American Mathematical Society,1997.
    [125]M. Belkin and P. Niyogi, "Laplacian eigenmaps and spectral techniques for embedding and clustering," Advances in Neural Information Processing Systems 14, pp.585-591,2002.
    [126]M. Belkin, P. Niyogi, and V. Sindhwani, "On Manifold Regularization," AISTATS,2005.
    [127]H. Wang, S. Chen, Z. Hu, and W. Zheng, "Locality-Preserved Maximum Information Projection," IEEE Trans. Neural Networks, vol.19, pp.571-585,2008.
    [128]D. Meng, Y. Leung, T. Fung, and Z. Xu, "Nonlinear Dimensionality Reduction of Data Lying on the Multicluster Manifold," IEEE Trans. Systems, Man, and Cybernetics, Part B, vol.38, pp.1111-1122,2008.
    [129]K. Fukunaga, Introduction to Statistical Pattern Recognition, Academic Press, 1990.
    [130]A.Y. Ng, M.I. Jordan, and Y. Weiss, "On spectral clustering:Analysis and an algorithm," Advances in Neural Information Processing Systems 14, MIT Press, pp. 849-856,2001.
    [131]T. Hastie, A. Buja, and R. Tibshirani, "Penalized Discriminant Analysis," Technical Report, Dept. of Statistics, University of Toronto,1992.
    [132]H. Chang and D.-Y. Yeung, "Robust path-based spectral clustering," Pattern Recognition, vol.41, pp.191-203,2008.
    [133]B. Fischer and J.M. Buhmann, "Path-based clustering for grouping of smooth curves and texture segmentation," IEEE Trans. Pattern Analysis and Machine Intelligence, vol.25, pp.513-518,2003.
    [134]B. Fischer, V. Roth, and J.M. Buhmann, "Clustering with the Connectivity Kernel," Advances in Neural Information Processing Systems 16, S. Thrun, L. Saul, and B. Schölkopf, eds., Cambridge, MA:MIT Press,2004.
    [135]B. Fischer, T. Zöller, and J.M. Buhmann, "Path Based Pairwise Data Clustering with Application to Texture Segmentation," Energy Minimization Methods in Computer Vision and Pattern Recognition, vol.2134, pp.235-250,2001.
    [136]J. Helmsen, E.G. Puckett, P. Colella, and M. Dorr, "Two new methods for simulating photolithography development in 3D," In Proc. SPIE, the International Society for Optical Engineering, vol.2726, pp.253-261.
    [137]J.A. Sethian, "A fast marching level set method for monotonically advancing fronts," In Proc. the National Academy of Sciences of the United States of America, vol.93,1996, pp.1591-1595.
    [138]J.N. Tsitsiklis, "Efficient algorithms for globally optimal trajectories," IEEE Trans. Automatic Control, vol.40, pp.1528-1538,1995.
    [139]L. Yatziv, A. Bartesaghi, and G. Sapiro, "O(N) implementation of the fast marching algorithm," Journal of Computational Physics, vol.212, pp.393-399,2006.
    [140]H. Zhao, "A fast sweeping method for eikonal equations," Mathematics of Computation, vol.74, pp.603-627,2005.
    [141]T.H. Cormen, C.E. Leiserson, R.L. Rivest, and C. Stein, Introduction to Algorithms, The MIT Press,2001.
    [142]L.-F. Chen, H.-Y.M. Liao, M.-T. Ko, J.-C. Lin, and G.-J. Yu, "A new LDA-based face recognition system which can solve the small sample size problem," Pattern Recognition, vol.33, pp.1713-1726,2000.
    [143]D. Beymer and T. Poggio, "Face recognition from one example view," In Proc. IEEE International Conference on Computer Vision (ICCV), Los Alamitos, CA, USA: IEEE Computer Society,1995, pp.500-507.
    [144]S. Nene, S. Nayar, and H. Murase, "Columbia Object Image Library (COIL-100)," 1996.
    [145]D.B. Graham and N.M. Allinson, "Characterizing virtual eigensignatures for general purpose face recognition," Face Recognition:From Theory to Applications, NATO ASI Series F, Computer and Systems Sciences, vol.163, pp.446-456,1998.
    [146]T. Sim, S. Baker, and M. Bsat, "The CMU Pose, Illumination, and Expression Database," IEEE Trans. Pattern Analysis and Machine Intelligence, vol.25, pp.1615-1618,2003.
    [147]T. Sim, S. Baker, and M. Bsat, "The CMU Pose, Illumination, and Expression (PIE) Database of Human Faces," 2001.
    [148]F.S. Samaria and A.C. Harter, "Parameterisation of a stochastic model for human face identification," In Proc. the Second IEEE Workshop on Applications of Computer Vision,1994, pp.138-142.
    [149]A. Frank and A. Asuncion, "UCI Machine Learning Repository," 2010.
    [150]Y. Rui, T. S. Huang, M. Ortega, and S. Mehrotra, "Relevance feedback:A power tool for interactive content-based image retrieval," IEEE Trans. Circuits and Systems for Video Technology, vol.8, no.5,1998.
    [151]X. He. "Incremental semi-supervised subspace learning for image retrieval," In Proc. the ACM Conference on Multimedia, New York,2004.
    [152]Y.-Y. Lin, T.-L. Liu, and H.-T. Chen. "Semantic manifold learning for image retrieval," In Proc. the ACM Conference on Multimedia, Singapore,2005.
    [153]J. Yu and Q. Tian. "Learning image manifolds by semantic subspace projection," In Proc. the ACM Conference on Multimedia, Santa Barbara,2006.
    [154]A. Alexandrov, W. Ma, A.E. Abbadi, and B.S. Manjunath, "Adaptive Filtering and Indexing for Image Databases," In Proc. Storage and Retrieval for Image and Video Databases (SPIE),1995, pp.12-23.
    [155]J.-J. Fuchs, "On sparse representations in arbitrary redundant bases," IEEE Trans. Information Theory, vol.50, no.6, pp.1341-1344, Jun.2004.
    [156]D. L. Donoho and M. Elad, "Optimally sparse representation in general (nonorthogonal) dictionaries via l1 minimization," in Proc. National Academy of Sciences of the USA, vol.100, no.5,2003, pp.2197-2202.
    [157]R. Tibshirani, "Regression shrinkage and selection via the lasso," Journal of the Royal Statistical Society:Series B, vol.58, no.1, pp.267-288,1996.
    [158]B. D. Rao and K. Kreutz-Delgado, "An affine scaling methodology for best basis selection," IEEE Trans. Signal Processing, vol.47, no.1, pp.187-200, Jan.1999.
    [159]D. P. Wipf and B. D. Rao, "Sparse Bayesian learning for basis selection," IEEE Trans. Signal Processing, vol.52, no.8, pp.2153-2164, Aug.2004.
    [160]J. Christmas and R. Everson, "Robust autoregression:Student-t innovations using variational Bayes," IEEE Trans. Signal Processing, vol.59, no.1, pp.48-57, Jan. 2011.
    [161]D. L. Donoho, "Compressed sensing," IEEE Trans. Information Theory, vol.52, no.4, pp.1289-1306, Apr.2006.
    [162]S. Ji, Y. Xue, and L. Carin, "Bayesian compressive sensing," IEEE Trans. Signal Processing, vol.56, no.6, pp.2346-2356, Jun.2008.
    [163]M. Elad, M. A. T. Figueiredo, and Y. Ma, "On the role of sparse and redundant representations in image processing," in Proc. IEEE, vol.98, no.6,2010, pp.972-982.
    [164]D. P. Wipf and S. Nagarajan, "A unified Bayesian framework for MEG/EEG source imaging," NeuroImage, vol.44, no.3, pp.947-966, Feb.2009.
    [165]I. F. Gorodnitsky, J. S. George, and B. D. Rao, "Neuromagnetic source imaging with FOCUSS-a recursive weighted minimum norm algorithm," Electroencephalography and Clinical Neurophysiology, vol.95, no.4, pp.231-251, Oct.1995.
    [166]A. Bruckstein, D. L. Donoho, and M. Elad, "From sparse solutions of systems of equations to sparse modeling of signals and images," SIAM Review, vol.51, no.1, pp.34-81,2009.
    [167]S. S. Chen, D. L. Donoho, and M. A. Saunders, "Atomic decomposition by basis pursuit," SIAM Journal on Scientific Computing, vol.20, no.1, pp.33-61,1999.
    [168]J. A. Tropp and A. C. Gilbert, "Signal recovery from random measurements via orthogonal matching pursuit," IEEE Trans. Information Theory, vol.53, no.12, pp. 4655-4666, Dec.2007.
    [169]B. D. Rao, K. Engan, S. F. Cotter, J. Palmer, K. Kreutz-delgado, and S. Member, "Subset selection in noise based on diversity measure minimization," IEEE Trans. Signal Processing, vol.51, no.3, pp.760-770, Mar.2003.
    [170]R. Chartrand and V. Staneva, "Restricted isometry properties and nonconvex compressive sensing," Inverse Problems, vol.24, no.035020, pp.1-14,2008.
    [171]E. J. Candes, M. B. Wakin, and S. Boyd, "Enhancing sparsity by reweighted l1 minimization," Journal of Fourier Analysis and Applications, vol.14, no.5, pp.877-905, Dec.2008.
    [172]E. J. Candes and T. Tao, "Decoding by linear programming," IEEE Trans. Information Theory, vol.51, no.12, pp.4203-4215, Dec.2005.
    [173]R. Chartrand, "Exact reconstruction of sparse signals via nonconvex minimization," IEEE Signal Processing Letters, vol.14, no.10, pp.707-710, Oct. 2007.
    [174]D. J. C. Mackay, Information Theory, Inference & Learning Algorithms,1st ed. Cambridge University Press,2002.
    [175]M. J. Wainwright and M. I. Jordan, Graphical Models, Exponential Families, and Variational Inference. Now Publishers Inc,2008.
    [176]D. J. C. MacKay, "Bayesian non-linear modeling for the prediction competition," in ASHRAE Transactions, vol.100, pt.2, pp.1053-1062,1994.
    [177]C. M. Bishop and M. E. Tipping, "Variational relevance vector machines," in Proc.16th Conf. Uncertainty in Artificial Intelligence,2000, pp.46-53.
    [178]M. E. Tipping, "Sparse Bayesian learning and the relevance vector machine," Journal of Machine Learning Research, vol.1, pp.211-244, Jun.2001.
    [179]A. P. Dempster, N. M. Laird, and D. B. Rubin, "Iteratively reweighted least squares for linear regression when errors are Normal/Independent distributed," in Multivariate Analysis V, P. R. Krishnaiah, Ed.,1980, pp.35-57.
    [180]D. P. Wipf and S. Nagarajan, "A new view of automatic relevance determination," in Advances in Neural Information Processing Systems,2008.
    [181]C. Kleiber and S. Kotz, Statistical size distributions in economics and actuarial sciences. Hoboken, NJ:Wiley,2003.
    [182]D. V. Widder, The Laplace Transform. Princeton University Press,1946.
    [183]J. A. Palmer, K. Kreutz-Delgado, D. P. Wipf, and B. D. Rao, "Variational EM algorithms for non-Gaussian latent variable models," in Advances in Neural Information Processing Systems,2006.
    [184]M. A. T. Figueiredo, "Adaptive sparseness for supervised learning," IEEE Trans. Pattern Analysis and Machine Intelligence, vol.25, no.9, pp.1150-1159, Sep.2003.
    [185]C. M. Bishop, Pattern Recognition and Machine Learning,1st ed. New York: Springer,2007.
    [186]M. W. Seeger and D. P. Wipf, "Variational Bayesian inference techniques," IEEE Signal Processing Magazine, vol.27, no.6, pp.81-91, Nov.2010.
    [187]M. I. Jordan, Z. Ghahramani, T. S. Jaakkola, and L. K. Saul, "An introduction to variational methods for graphical models," Machine Learning, vol.37, no.2, pp. 183-233, Nov.1999.
    [188]D. J. C. MacKay, "Ensemble learning and evidence maximization," in Advances in Neural Information Processing Systems,1995.
    [189]B. Jørgensen, Statistical Properties of the Generalized Inverse Gaussian Distribution, Lecture Notes in Statistics, vol.9, New York-Berlin:Springer,1982.
    [190]I. S. Gradshteyn, I. M. Ryzhik, A. Jeffrey, and D. Zwillinger, Table of Integrals, Series and Products,7th ed. Boston:Elsevier,2007.
    [191]E. Cojocaru, "Parabolic cylinder functions implemented in Matlab," arXiv/0901.2220,2009.