Research on Illumination Analysis of Outdoor Scenes
Abstract
Augmented reality is a technology that has risen rapidly in the information field over the past two decades. It superimposes computer-generated objects onto images of a real scene in real time and, by fusing real and virtual information, creates a harmonious and unified environment. Because it augments the display of the real environment, augmented reality offers unique advantages in cultural heritage preservation, surgical planning, military training, and other fields. However, many problems in augmented reality remain unsolved. Among them, reconstructing the illumination of an outdoor scene in real time and using it to shade virtual objects, so that the illumination of the virtual objects is consistent with that of the real scene, plays a vital role in the realism of blending virtual objects into real outdoor scenes.
     Outdoor illumination estimation is also a research topic in computer vision. Varying outdoor illumination often destabilizes algorithms for shadow detection, object recognition, video tracking, and so on. If the illumination of every frame of an outdoor video can be estimated in real time, the illumination can then be normalized to remove the influence of the varying lighting, which greatly improves the performance of many algorithms. Estimating the illumination of an online video in real time when the 3D model of the scene is unknown is therefore of great importance to both computer vision and computer graphics.
     This dissertation studies real-time illumination estimation for outdoor video sequences captured from a fixed viewpoint and proposes a series of new methods that address the shortcomings of existing approaches. The main contributions and innovations of this work are as follows:
     ·For the first time, analytical expressions are derived that relate the statistics of an outdoor image to the illumination parameters of the scene. Under the assumption that the sunlight is a directional light and the skylight is an ambient light, and starting from a basic illumination model, the dissertation derives analytical expressions connecting the mean and variance of a scene image with the incident sunlight and skylight intensities. These expressions reveal the relationship between image statistics and scene illumination and provide a new way to study the illumination of a scene.
     ·Based on the analytical expressions between image statistics and illumination parameters, a framework for estimating outdoor illumination is proposed. At the off-line stage the framework builds the analytical expressions by learning; at the online stage the incident sunlight and skylight intensities of the scene are computed in real time from the image statistics through the established expressions. To obtain stable estimates for dynamic scenes, a smoothing method based on the spatial and temporal coherence of illumination is further proposed, which yields stable results.
     ·The skylight model is generalized to a more general area light source, and an outdoor illumination estimation framework based on image decomposition is proposed. An outdoor image is first represented as a linear combination of a sun basis image and a sky basis image, where the combination coefficients are the incident sunlight and skylight intensities to be solved for, and the basis images encode the product of the scene geometry and materials. At the off-line stage the sun and sky basis images at sampled sun positions are learned. At the online stage the sun basis images at the sampled sun positions are updated, and the sunlight and skylight intensities of every video frame are solved in real time.
     ·An outdoor illumination estimation method that requires no off-line learning is proposed. Using the image decomposition model above, an image is first represented as a linear combination of the sun and sky basis images. Then, by analyzing the characteristics of the sun basis image, the estimation of the sunlight and skylight intensities is reduced to an energy minimization problem that can be solved in real time. The method imposes no requirement on scene materials and allows objects with complex reflectance in the scene. Besides augmented reality, it can be widely applied to general online video processing such as shadow detection, image-based relighting, color constancy, and illumination normalization.
     None of the three proposed methods requires the 3D geometric information of the scene, thereby avoiding the technical challenges of reconstructing large-scale outdoor scenes and making the illumination estimation algorithms more practical.
Augmented reality has grown rapidly over the past two decades. By overlaying computer-generated objects onto views of a real scene in real time, augmented reality creates a harmonious environment in which real and virtual information are merged. Because it augments the display of the real world, it has been widely used in digital cultural heritage preservation, medical surgery planning, military training, and other fields. However, many open problems in augmented reality remain. Among them, estimating the illumination of an outdoor scene in real time and then shading the virtual objects accordingly to maintain illumination consistency plays an important role in achieving high realism in an augmented reality system.
     Outdoor illumination estimation is also one of the research topics in computer vision. Varying illumination usually degrades the performance of algorithms for object recognition, video segmentation, video tracking, and so on. If the illumination is estimated in real time, it can be normalized, and hence the performance of those algorithms can be greatly improved. Real-time illumination estimation for outdoor scenes is therefore of great importance to both computer graphics and computer vision.
     The dissertation focuses on real-time illumination estimation for outdoor videos captured from a fixed viewpoint. The main contributions of the dissertation are as follows:
     ·An analytical expression relating image statistics to scene illumination is derived for the first time. Under the assumption that the sunlight is directional and the skylight is ambient, the thesis derives an analytical expression connecting the mean and variance of an image with the sunlight and skylight intensities of the scene (a simplified form of this relation is sketched after this list). The expression provides a new way to understand outdoor illumination.
     ·Based on the analytical model, a framework for real-time estimation of outdoor illumination is developed. At the off-line stage, the correlation between the illumination parameters and the image statistics is constructed from a set of images. At the online stage, given a new input image captured from the same view, the illumination parameters are derived from the image statistics via the pre-computed correlation (a code sketch of this two-stage pipeline follows the list). To handle occasional motion occurring in outdoor scenes, an algorithm exploiting the spatial and temporal coherence of illumination is proposed to smooth the estimation results.
     ·The skylight model is further generalized to a uniformly distributed area light source, and a linear model is proposed that represents an outdoor image as a linear combination of a sun basis image and a sky basis image. The sun and sky basis images encode the combined effect of the scene geometry and surface reflectance, while the combination coefficients are the sunlight and skylight intensities to be recovered (a least-squares sketch of this decomposition follows the list). Based on the linear model, a framework for estimating outdoor illumination is developed: the sun and sky basis images at a set of key sun positions are obtained at the off-line stage, and at the online stage, by updating the sun basis images of the key sun positions, the illumination parameters of every frame are obtained in real time.
     ·A novel approach for estimating outdoor illumination without off-line learning is proposed. The model that represents an outdoor image as a linear combination of the sun and sky basis images is first adopted. Exploiting the characteristics of the sun basis image, the computation of the sunlight and skylight of every frame is reduced to a minimization problem that can be solved in real time (a sketch of one such energy follows the list). The method involves no reflectance assumption and permits objects with complex reflectance, such as specular or anisotropic surfaces. Compared with previous work, it requires no preprocessing stage and is suitable for both video post-processing and online processing, such as augmented reality, shadow detection, relighting, color constancy, and illumination normalization.
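     The following is a minimal sketch of how such a statistics-to-illumination relation can arise, assuming a simplified Lambertian image formation model; the symbols (per-pixel albedo rho_p, sunlight shading/visibility term s_p, intensities E_sun and E_sky) are illustrative notation and not the dissertation's exact derivation:

```latex
% Simplified model: I_p = \rho_p\,(E_{sun}\, s_p + E_{sky})
\mu_I      = E_{sun}\,\mathbb{E}_p[\rho_p s_p] + E_{sky}\,\mathbb{E}_p[\rho_p]
\sigma_I^2 = E_{sun}^2\,\operatorname{Var}_p[\rho_p s_p]
           + E_{sky}^2\,\operatorname{Var}_p[\rho_p]
           + 2\,E_{sun}E_{sky}\,\operatorname{Cov}_p[\rho_p s_p,\,\rho_p]
```

     For a static scene and a fixed sun direction, the bracketed scene statistics are constants, so the pair (mean, variance) of a frame can in principle be inverted to recover (E_sun, E_sky).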
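     A minimal sketch of such an off-line/online pipeline is given below, assuming grayscale frames and a simple linear regression from (mean, std) to (E_sun, E_sky); the function names, the regression form, and the exponential smoothing with parameter alpha are illustrative assumptions, standing in for the dissertation's actual learning step and coherence-based smoothing.

```python
import numpy as np

def fit_statistics_to_lighting(stats, lighting):
    """Off-line stage (sketch): learn a linear map from image statistics
    (mean, std, 1) to illumination parameters (E_sun, E_sky) by least squares.
    `stats` is (N, 2) per-image mean and std; `lighting` is (N, 2)."""
    X = np.column_stack([stats, np.ones(len(stats))])   # (N, 3) design matrix
    W, *_ = np.linalg.lstsq(X, lighting, rcond=None)    # (3, 2) regression weights
    return W

def estimate_lighting(frame_gray, W, prev=None, alpha=0.8):
    """Online stage (sketch): statistics of the new frame -> illumination,
    followed by simple exponential smoothing as a stand-in for the
    coherence-based smoothing described in the dissertation."""
    x = np.array([frame_gray.mean(), frame_gray.std(), 1.0])
    e = x @ W                                            # (E_sun, E_sky)
    if prev is not None:
        e = alpha * np.asarray(prev) + (1.0 - alpha) * e
    return e
```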
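     Given a pair of basis images, recovering the two coefficients for one frame is a two-unknown linear least-squares problem. The sketch below assumes the basis images are already available (the learning of basis images at sampled sun positions and their online update are omitted); the function solve_sun_sky and its non-negativity clamp are illustrative, not the dissertation's implementation.

```python
import numpy as np

def solve_sun_sky(frame, basis_sun, basis_sky):
    """Per-frame estimation (sketch): treat the frame as a linear combination
    of a sun basis image and a sky basis image and recover the two
    coefficients (E_sun, E_sky) by linear least squares.
    All inputs are float arrays of the same shape."""
    A = np.column_stack([basis_sun.ravel(), basis_sky.ravel()])  # (P, 2)
    b = frame.ravel()
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    e_sun, e_sky = coeffs
    return max(e_sun, 0.0), max(e_sky, 0.0)   # light intensities are non-negative
```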
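     One way to cast the per-frame estimation as a small real-time energy minimization is sketched below: a quadratic data term plus a temporal term reduces to a 2x2 linear system per frame, and an alternation step re-estimates the basis images from recent frames so that no off-line learning is needed. The energy, the weight lam, and both functions are assumptions for illustration; the dissertation's energy additionally exploits properties of the sun basis image that are not modeled here.

```python
import numpy as np

def update_coefficients(frame, b_sun, b_sky, e_prev, lam=0.1):
    """Minimize, for one frame (sketch):
        E(e) = ||I - e_sun*B_sun - e_sky*B_sky||^2 + lam*||e - e_prev||^2
    The minimizer solves a 2x2 linear system, so it runs in real time."""
    A = np.column_stack([b_sun.ravel(), b_sky.ravel()])  # (P, 2)
    b = frame.ravel()
    H = A.T @ A + lam * np.eye(2)                        # normal equations
    g = A.T @ b + lam * np.asarray(e_prev)
    e = np.linalg.solve(H, g)
    return np.clip(e, 0.0, None)                         # non-negative light

def update_basis(frames, coeffs):
    """Alternation step (sketch): re-estimate the two basis images from a
    sliding window of frames and their current coefficients by per-pixel
    least squares."""
    C = np.asarray(coeffs)                               # (T, 2)
    F = np.stack([f.ravel() for f in frames])            # (T, P)
    B, *_ = np.linalg.lstsq(C, F, rcond=None)            # (2, P)
    shape = frames[0].shape
    return B[0].reshape(shape), B[1].reshape(shape)
```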
     None of the above approaches requires knowledge of the scene geometry, thereby avoiding the difficulties of reconstructing the 3D geometry of large-scale outdoor scenes and making outdoor illumination estimation more practical.