Research on Image Quality Assessment Methods Based on Visual Perception
Abstract
Image quality assessment refers to objective methods that analyze an image intelligently with a designed mathematical model and score it automatically on a designed quality scale. According to how much of the original image they require, such methods fall into three categories: full-reference, reduced-reference, and no-reference. Image quality assessment is a key technology for analyzing the effects of image compression and processing and for feeding back the quality of image transmission, and it is an indispensable component of multimedia systems. Moreover, since an image is the basic unit of video and a video is a temporally continuous sequence of images, research on image quality assessment also forms an important foundation for research on video quality assessment. The traditional subjective approach, in which human observers rate image quality, is time-consuming and laborious, and its results are affected by the viewing environment, the observers' backgrounds, and other factors, so it cannot reflect image quality objectively. Developing image quality assessment methods that accurately reflect human perception of images and can be widely deployed in multimedia systems has therefore become an international research focus. The human visual system (HVS) is the only means by which humans acquire image information. Psychophysical studies show that the HVS perceives images selectively: different regions carry different degrees of visual saliency. Existing image quality assessment methods, however, do not fully account for this diversity of HVS perception. Studying visual-perception-based image quality assessment to further improve prediction accuracy is therefore of significant theoretical and practical value. Against this background, this thesis investigates visual-perception-based full-reference, reduced-reference, and no-reference image quality assessment methods, and, building on these results, discusses video quality assessment.
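     The prediction accuracy referred to above is conventionally quantified by correlating objective scores with subjective mean opinion scores (MOS). The following minimal Python sketch is not taken from the thesis; the data are purely illustrative and only show the standard Pearson (PLCC) and Spearman (SROCC) check.

        # Illustrative data only: correlate objective metric outputs with subjective MOS.
        import numpy as np
        from scipy.stats import pearsonr, spearmanr

        mos = np.array([4.2, 3.1, 2.5, 4.8, 1.9, 3.6])              # subjective scores
        objective = np.array([0.91, 0.78, 0.64, 0.95, 0.51, 0.82])  # metric outputs

        plcc, _ = pearsonr(objective, mos)    # linear correlation (prediction accuracy)
        srocc, _ = spearmanr(objective, mos)  # rank correlation (prediction monotonicity)
        print(f"PLCC = {plcc:.3f}, SROCC = {srocc:.3f}")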
     Chapter 1 explains the significance of the topic, reviews and summarizes the state of research at home and abroad, and introduces the main research content and the structure of the thesis.
     Chapter 2 studies full-reference image quality assessment and proposes a structural-similarity metric based on visual perception. The method first uses spatial-domain visual features to generate a visual perceptual map and computes block-wise structural similarity to generate a distortion perceptual map. It then locates the regions of high visual saliency and of severe distortion from the two maps, models the influence on perception of the shift of visual attention caused by salient features or severe distortion, and generates a post-shift visual perceptual map. Finally, the block structural similarity is weighted by the visual perceptual map and the post-shift visual perceptual map to obtain the objective quality of the image.
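     The weighting idea can be illustrated with a simplified Python sketch. It is not the thesis's model: the block size, the SSIM constants, and the use of local contrast as a stand-in perceptual weight are assumptions made here for illustration only.

        # Block-wise SSIM pooled with a saliency-style weight map built from local contrast,
        # so that visually prominent blocks contribute more to the final score.
        import numpy as np

        def weighted_block_ssim(ref, dist, bs=8, C1=(0.01 * 255) ** 2, C2=(0.03 * 255) ** 2):
            h, w = ref.shape
            h, w = h - h % bs, w - w % bs
            ssim_vals, weights = [], []
            for i in range(0, h, bs):
                for j in range(0, w, bs):
                    x = ref[i:i + bs, j:j + bs].astype(np.float64)
                    y = dist[i:i + bs, j:j + bs].astype(np.float64)
                    mx, my = x.mean(), y.mean()
                    vx, vy = x.var(), y.var()
                    cxy = ((x - mx) * (y - my)).mean()
                    s = ((2 * mx * my + C1) * (2 * cxy + C2)) / ((mx ** 2 + my ** 2 + C1) * (vx + vy + C2))
                    ssim_vals.append(s)
                    weights.append(vx + 1.0)   # crude perceptual weight: local contrast
            ssim_vals, weights = np.array(ssim_vals), np.array(weights)
            return float((ssim_vals * weights).sum() / weights.sum())

        ref = np.random.randint(0, 256, (64, 64)).astype(np.uint8)
        dist = np.clip(ref + np.random.normal(0, 10, ref.shape), 0, 255).astype(np.uint8)
        print("perceptually weighted SSIM:", round(weighted_block_ssim(ref, dist), 4))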
     Chapter 3 studies reduced-reference image quality assessment and proposes a method based on watermarking in the wavelet domain. The method exploits the masking effect, a property of visual perception, to generate a watermark-embedding indication map that determines suitable embedding positions, and designs an automatic quantization-parameter adjustment scheme to determine a suitable embedding strength. Taking advantage of the different distortion sensitivities of the wavelet sub-bands, the watermark is embedded in appropriate sub-bands, and the objective quality of the image is obtained from the watermark recovery rate.
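     A minimal sketch of this watermark-recovery quality proxy follows, assuming a plain parity (QIM-style) quantization of one detail sub-band; the masking-based position selection and the adaptive quantization-parameter adjustment described above are omitted, and the step size delta is an arbitrary choice. It relies on NumPy and PyWavelets.

        import numpy as np
        import pywt

        def embed(img, bits, delta=16.0, wavelet="haar"):
            # Force the parity of the quantizer index of the first len(bits) cH coefficients.
            cA, (cH, cV, cD) = pywt.dwt2(img.astype(np.float64), wavelet)
            flat = cH.flatten()
            for k, b in enumerate(bits):
                q = np.round(flat[k] / delta)
                if int(q) % 2 != b:
                    q += 1
                flat[k] = q * delta
            return pywt.idwt2((cA, (flat.reshape(cH.shape), cV, cD)), wavelet)

        def recovery_rate(img, bits, delta=16.0, wavelet="haar"):
            # Re-extract the parity bits and report the fraction recovered correctly.
            _, (cH, _, _) = pywt.dwt2(img.astype(np.float64), wavelet)
            flat = cH.flatten()
            extracted = [int(np.round(flat[k] / delta)) % 2 for k in range(len(bits))]
            return float(np.mean(np.array(extracted) == np.array(bits)))

        rng = np.random.default_rng(0)
        ref = rng.integers(0, 256, (64, 64)).astype(np.float64)
        bits = rng.integers(0, 2, 128).tolist()
        marked = embed(ref, bits)
        noisy = marked + rng.normal(0, 8, marked.shape)   # simulated channel/compression distortion
        print("watermark recovery rate (quality proxy):", recovery_rate(noisy, bits))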
     Chapter 4 studies no-reference image quality assessment and proposes a perception-based no-reference blockiness metric and a no-reference blurring and ringing metric based on an orthogonal least squares radial basis function (OLS-RBF) network. The blockiness metric uses spatial-domain visual features to generate a visual perceptual map, which weights the blockiness distortion and the block flatness to yield the objective quality of the image. The blurring and ringing metric uses edge features in structured texture regions together with subjective image quality scores to train an OLS-RBF network, producing a no-reference objective quality model.
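     The blockiness part can be illustrated with a simplified sketch (a deliberate simplification, without the perceptual weighting, flatness term, or the OLS-RBF stage): luminance steps across 8x8 block borders are compared with steps inside the blocks.

        import numpy as np

        def blockiness(img, bs=8):
            img = img.astype(np.float64)
            dh = np.abs(np.diff(img, axis=1))        # horizontal neighbour differences
            boundary = dh[:, bs - 1::bs].mean()      # steps across 8x8 block borders
            mask = np.ones(dh.shape[1], dtype=bool)
            mask[bs - 1::bs] = False
            internal = dh[:, mask].mean() + 1.0      # steps inside blocks (+1 avoids /0)
            return boundary / internal               # values well above 1 signal visible blocking

        ramp = np.tile(np.linspace(0, 255, 64), (64, 1))   # smooth reference image
        blocky = ramp.copy()
        for i in range(0, 64, 8):                          # crude stand-in for coarse
            for j in range(0, 64, 8):                      # per-block DCT quantization
                blocky[i:i + 8, j:j + 8] = blocky[i:i + 8, j:j + 8].mean()
        print("smooth:", round(blockiness(ramp), 2), "blocky:", round(blockiness(blocky), 2))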
     Chapter 5 extends the work on image quality assessment to video and proposes a full-reference video quality assessment method. The method first designs a perception-based visual attention model to generate a visual perceptual map and a distortion perceptual map. It then locates severely distorted regions from the distortion perceptual map, models the shift of visual attention caused by severe distortion, and generates a post-shift visual perceptual map. This map is used for intra-frame weighting of the structural similarity, yielding the objective quality of a single video frame. Finally, a frame-quality contribution curve is designed to simulate the impact of bursts of severe distortion on sequence quality and to weight the single-frame qualities across frames, yielding the overall objective quality of the video sequence.
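     The inter-frame pooling idea can be sketched as follows. The contribution curve used here is an assumed example, not the curve designed in the thesis; it merely shows how weighting severely damaged frames more heavily keeps a burst of errors from being averaged away.

        import numpy as np

        def pool_sequence(frame_q, alpha=4.0):
            # Assumed contribution curve: the lower a frame's quality, the heavier its weight.
            frame_q = np.asarray(frame_q, dtype=np.float64)
            w = 1.0 + alpha * (1.0 - frame_q)
            return float((w * frame_q).sum() / w.sum())

        good = [0.92] * 30
        burst = [0.92] * 25 + [0.35] * 5        # five severely damaged frames
        print(f"mean pooling    : {np.mean(good):.3f} vs {np.mean(burst):.3f}")
        print(f"weighted pooling: {pool_sequence(good):.3f} vs {pool_sequence(burst):.3f}")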
     Chapter 6 summarizes the results and contributions of the thesis and points out directions and tasks for further research.
Image quality assessment analyzes an image intelligently and predicts the perceived quality automatically by means of a designed mathematical model. It can be classified into full-reference, reduced-reference and no-reference methods according to the availability of the original image. It is one of the key technologies for benchmarking image compression and processing algorithms and plays an important role in multimedia application systems. Meanwhile, the image is the basic unit of video, and a video is a sequence of images arranged in time; research on image quality assessment is therefore an important basis for research on video quality assessment. However, traditional subjective image quality assessment, which depends on human observers, is impractical, slow and expensive for most applications. Developing a new generation of image quality assessment methods that precisely reflect the subjective perception of human beings and can also be applied in practical multimedia systems has thus become a necessity. The human visual system (HVS) is the only way to access image information. The HVS perceives an image selectively, and different regions or objects in the image have diverse levels of visual importance; current image quality assessment methods, however, ignore this diversity of the perception mechanism. Therefore, it is of theoretical significance and practical value to conduct in-depth research on improving the prediction accuracy of image quality assessment by applying the principles of visual perception.
     In chapter 1, the significance of the research work is presented, together with a brief review of the current research status and an outline of the main work and structure of the thesis.
     In chapter 2, a full-reference structural similarity quality metric based on visual perception is proposed. Spatial features are combined to produce a visual perceptual map. The focus of attention shifts toward the most visually salient or most seriously distorted regions, which yields a second, post-shift visual perceptual map. The structural similarity is weighted by these two visual perceptual maps to obtain the objective quality of the image.
     In chapter 3, a reduced-reference image quality assessment method based on watermarking in the wavelet domain is proposed. The watermark is embedded into selected frequency sub-bands according to their different distortion sensitivities. Meanwhile, a watermark-embedding map and adaptive adjustment of the quantization parameter decide the position and strength of the watermark, which effectively ensures the watermark's invisibility. The objective quality of the image is obtained from the watermark recovery rate.
     In chapter 4, two no-reference metrics are proposed: a perceptual blockiness metric and a blurring and ringing metric based on an Orthogonal Least Squares Radial Basis Function (OLS-RBF) network. The blockiness metric computes spatial visual features to produce a visual perceptual map, which then weights the blockiness to obtain the objective image quality. The blurring and ringing metric extracts generalized features of the edge points in structure-texture regions; the objective quality of a JPEG2000 image is evaluated by an OLS-RBF network model trained on these features and the subjective scores.
     In chapter 5, a full-reference video structural similarity quality assessment method based on visual perception is proposed. A distortion-weighted spatiotemporal visual attention model is designed; the visual perceptual map produced by this model and the structural similarity map are combined to give the objective quality of a single video frame. The objective quality of the whole video sequence is then calculated with a frame-quality contribution function that weights the quality of each frame and assigns much heavier weights to extremely damaged frames, so that bursts of errors are taken into account.
     The final chapter summarizes the new achievements of the research and discusses prospects for future work.
