
PHOTOREALISTIC ANIMATION RENDERING WITH POPULATION MONTE CARLO ENERGY REDISTRIBUTION

by Yu-Chi Lai

A dissertation submitted in partial fulfillment of the requirements for the degree of

Doctor of Philosophy (Computer Sciences)

at the

UNIVERSITY OF WISCONSIN-MADISON

2009

© Copyright by Yu-Chi Lai 2009
All Rights Reserved


To my dad


ACKNOWLEDGMENTS

I thank my adviser Charles Dyer for his support of my PhD work. Without his support and guidance, I probably would have given up this work when Stephen Chenney left UW-Madison. I started working with Chuck about six years ago when I took his computer vision class. From that moment, Chuck has been my best mentor and has always been there when I needed advice. Chuck has always given me courage and proper direction when facing the difficulties of pursuing PhD degrees in ECE and CS. He has taught me so much over the years, from doing research to writing papers. I really appreciate everything he has done for me. I thank my original adviser Stephen Chenney for his inspiration and valuable guidance when I started my research in computer graphics. I feel fortunate to have had the opportunity to work with Stephen because he is always very supportive, patient, and encouraging. Stephen taught me a lot, not only about computer graphics but also about many aspects of life. He is very sharp as a researcher and extremely nice as a person. His stimulating insights and discussions helped the research tremendously. His kindness and encouragement made the experience of working with him very enjoyable. I would like to thank my other committee members, Mike Gleicher, Yu-Hen Hu, Li Zhang, and Jerry Zhu. Mike is a strong believer in the lab environment and makes the lab a pleasant place to work, from which I have benefited greatly. I thank Yu-Hen, Li, and Jerry for serving on my committee and taking the time to review my thesis. I cannot express sufficiently how much I have enjoyed our computer graphics/vision group environment and how much I have learned from all of our group members. I would like to mention Michael Wallick, Rachel Heck, Lucas Kovar, Mankyu Sung, ShaoHua Fan, and Feng Liu for the discussions and for sharing their experience with the prelim and oral exams. I also appreciate ShaoHua Fan and Feng Liu for the "lunch meetings" and all the fun, relaxing chats we had. Special thanks go to ShaoHua for providing the inspiration and basic foundation for pursuing this topic. I also thank Daniel Wong, Theophilus Benson, and Brandon Smith for their patience in reviewing my papers and dissertation. This helped me remove the frustrating grammatical mistakes I commonly make in my writing. Finally, my deepest gratitude goes to my family, for everything.



TABLE OF CONTENTS

LIST OF TABLES
LIST OF FIGURES
NOMENCLATURE
ABSTRACT

1  Introduction
   1.1  The Global Illumination Problem
   1.2  Summary of Contributions
   1.3  Thesis Outline

2  Monte Carlo Integration
   2.1  Monte Carlo Methods: A Brief History
   2.2  Monte Carlo Integration
        2.2.1  Estimators and Their Properties
   2.3  MCMC and Metropolis-Hastings Sampling
   2.4  D-Kernel Population Monte Carlo

3  Basics for Global Illumination
   3.1  Radiometry
   3.2  Bidirectional Scattering Distribution Function
   3.3  The Rendering Equation
   3.4  Monte Carlo Integration for Global Illumination
        3.4.1  Path Integral Formulation of the Rendering Equation
        3.4.2  Monte Carlo Integrations
        3.4.3  Raytracing-Based Monte Carlo Rendering Algorithms
   3.5  Markov Chain Monte Carlo Algorithms for Global Illumination
        3.5.1  Formulation for Markov Chain Monte Carlo Algorithms
        3.5.2  Markov Chain Monte Carlo Global Illumination Methods
   3.6  Population Monte Carlo Rendering
   3.7  Global Illumination Animation Rendering
        3.7.1  Hybrid Methods of Image-Based Rendering and Global Illumination
        3.7.2  Radiosity Methods
        3.7.3  Virtual Light Sources
        3.7.4  Photon Shooting
        3.7.5  Irradiance Cache
        3.7.6  Parallel Computation and Accelerated Ray Tracing
        3.7.7  Reuse of Path Information
        3.7.8  Decoupling the Rendering and Display Processes
        3.7.9  Hardware Rendering

4  Photorealistic Image Rendering with Population Monte Carlo Energy Redistribution
   4.1  Population Monte Carlo Energy Redistribution
        4.1.1  PMC-ER Equal Deposition Algorithm
        4.1.2  Energy Estimation
        4.1.3  The Kernel Function for Each Path
        4.1.4  Resampling
   4.2  Results
   4.3  Discussion

5  Efficient Schemes for Markov Chain Monte Carlo Algorithms in Global Illumination
   5.1  New Schemes for MCMCs
        5.1.1  Population Monte Carlo Energy Redistribution with Equal Deposition
        5.1.2  New Lens Perturbation
        5.1.3  Resampling
   5.2  Results
        5.2.1  Comparison between Old Mutations and New Mutations
        5.2.2  Comparison between Stratified Regeneration and All Three Regenerations
        5.2.3  Combining Everything into PMC-ER-E
   5.3  Discussion

6  Physically-Based Animation Rendering with Markov Chain Monte Carlo
   6.1  Animation Formulation for Markov Chain Monte Carlo
   6.2  Population Monte Carlo Energy Redistribution
        6.2.1  Population Monte Carlo Energy Redistribution with Equal Deposition
        6.2.2  Time Perturbation
        6.2.3  Iteratively Distributing Samples in a Frame-by-Frame Manner
   6.3  Animation Rendering System
        6.3.1  Overview
        6.3.2  Host Process
        6.3.3  Client Process
        6.3.4  Temporal and Spatial Acceleration Structures
        6.3.5  Sample Recorder
   6.4  Results
   6.5  Discussion

7  Discussion and Conclusion
   7.1  Contributions
   7.2  Future Research

LIST OF REFERENCES



LIST OF TABLES

Table 4.1  Statistical information when rendering images with PMC-ER-E and ERPT
Table 5.1  Statistical information when rendering images with old mutations and new mutations
Table 5.2  Statistical information when rendering images with stratified regeneration and with three regenerations
Table 5.3  Measurements comparing PMC-ER with PMC-ER-E using all the new schemes
Table 6.1  Scene-specific parameters
Table 6.2  Statistical information when rendering an animation with and without time mutation



LIST OF FIGURES

Figure 2.1  The Metropolis sampling algorithm
Figure 2.2  The generic D-kernel Population Monte Carlo algorithm
Figure 3.1  The input and output geometric relation used to describe the bidirectional scattering distribution function
Figure 3.2  Image sweep diagram
Figure 3.3  The geometric relation for the three-point light transport equation
Figure 4.1  The Population Monte Carlo Energy Redistribution Equal Deposition iteration loop
Figure 4.2  An example of using lens perturbation to reconstruct a path
Figure 4.3  An example of using caustics perturbation to reconstruct a caustics path
Figure 4.4  Cornell Box images rendered with PMC-ER-E and ERPT
Figure 4.5  Dragon images rendered with PMC-ER-E and ERPT
Figure 4.6  Room images rendered with PMC-ER-E and ERPT
Figure 5.1  The modified Population Monte Carlo Energy Redistribution Equal Deposition iteration loop
Figure 5.2  An example of using the new lens perturbation to reconstruct a caustics path
Figure 5.3  An example of constructing a caustics path from a light source
Figure 5.4  Cornell Box images rendered with new-scheme PMC-ER-E and original PMC-ER-E
Figure 5.5  Dragon images rendered with new-scheme PMC-ER-E and original PMC-ER-E
Figure 5.6  Room images rendered with new-scheme PMC-ER-E and original PMC-ER-E
Figure 6.1  The PMC-ER Equal Deposition iteration loop for a client process when rendering an animation
Figure 6.2  An example of using time perturbation to reconstruct a path
Figure 6.3  Condor animation rendering flow chart
Figure 6.4  The keyframed rigid transformation used to represent the animation of an object
Figure 6.5  A diagram of the movie-like sample recorder
Figure 6.6  A sequence of Cornell Box images rendered with and without time perturbation
Figure 6.7  A Cornell Box image rendered with motion blur
Figure 6.8  The 60th frame from the animations rendered for the CB2, room, chess board, and basement scenes



NOMENCLATURE

Q          Total energy of photon events
Φ          Radiant power or flux
E(·), E(X̃)  Irradiance, or the energy of a light transport path when it is constructed
I          Intensity on a pixel, or the integral of a function
L, L(X̃)    Radiance, or the contribution of a light transport path
fr         Bidirectional Reflectance Distribution Function (BRDF)
fs         Bidirectional Scattering Distribution Function (BSDF)
ft         Bidirectional Transmission Distribution Function (BTDF)
x, y       A 3D surface point
X, Y       A 3D surface sample point
x̃ỹ         A light transport path
X̃Ỹ         A sample light transport path
S          The set of finite 3D surfaces
M          The set of valid 3D object surfaces in a scene
Rⁿ         n-dimensional real space
Ω          The domain of integration
N(xt)      Surface normal at the surface point xt
dA(xt)     The differential imaginary area around the surface point xt
dA⊥(xt)    The differential imaginary area projected onto the normal direction around the surface point xt
ω          3D direction
σ(ω)       The solid angle in the direction ω
Tr(xt, ω)  Transport function giving the first intersected surface point from the current surface point xt in the direction ω
G(x, x′)   The geometry term between two surface points
V(x, x′)   The visibility term between two surface points
p(·)       The density distribution function of a sample
E[·]       Expectation of a function
V[·]       Variance of a function
σ[·]       Standard deviation of a function
T[·]       The time taken to do a job
Eff[·]     The efficiency of doing a job
P-Eff[·]   Perceptually-based mean squared efficiency of doing a job
MSE[·]     The mean square error
π(f)       ∫ f(x)π(x)dx, the integral of a function f with respect to the distribution π(x)
ÎN         Estimator for I with N samples
T(·|·)     Tentative transition function
Tr(·|·)    Transition function
K(·|·)     Kernel function
A(·|·)     Acceptance probability function
Ẽ          Average path energy in a rendered result
ed         Deposited energy for each mutation
EIP        Average path energy deposited on the image plane
pIP        Probability of choosing a pixel position from the image plane
           The mixture weight for choosing a perturbation method
dh         The perturbation radius in the spatial domain
th         The perturbation radius in the temporal domain
           The mixture weight for the pixel-position distribution
err        Error rate
W          The sensor response function
µ          Area product measure
PT         Path tracing
BPT        Bidirectional path tracing
PM         Photon mapping
IC         Irradiance caching
PMC        Population Monte Carlo
PMC-IP     The Population Monte Carlo Image Plane sampling algorithm
PMC-HI     The Population Monte Carlo Hemispherical Integral sampling algorithm
PMC-PT     Population Monte Carlo Path Tracing
MCMC       Markov Chain Monte Carlo
MLT        Metropolis Light Transport
ERPT       Energy Redistribution Path Tracing
PMC-ER     Population Monte Carlo Energy Redistribution
PMC-ER-E   Population Monte Carlo Energy Redistribution Equal Deposition
PMC-ER-B   Population Monte Carlo Energy Redistribution Balance Transfer
PDF        Probability Distribution Function
SPP        Samples Per Pixel

PHOTOREALISTIC ANIMATION RENDERING WITH POPULATION MONTE CARLO ENERGY REDISTRIBUTION

Yu-Chi Lai

Under the supervision of Assistant Professor Li Zhang and Professor Charles R. Dyer

At the University of Wisconsin-Madison

The requirement for realism becomes higher and higher with advances in computer hardware and algorithms. Generating a photorealistic animation requires the estimation of a large number of temporally and spatially correlated path integrals. In this dissertation we present a space-time algorithm that exploits the spatial and temporal coherence among all path integrals, based on the Markov Chain Monte Carlo (MCMC) framework, when rendering a physically-correct animation. Although sample reuse can improve rendering efficiency, the choice of good mutation strategies for MCMC algorithms is non-trivial and has a major impact on image quality. Thus, we adapt the Population Monte Carlo framework into the energy redistribution algorithm to reuse information from important samples with a mutation process whose mutation strategy is adapted on the fly. The proposed algorithm is called Population Monte Carlo Energy Redistribution (PMC-ER) and is self-tuning to a large extent. In the course of the dissertation, we also propose a new lens perturbation, which simplifies the computation and control of caustics perturbation and potentially increases the perturbation success rate, and two path-regeneration methods, which aim to concentrate more computation on "high perceptual variance" regions and "hard-to-find-but-important" paths. Additionally, a new perturbation technique called time perturbation is developed to explore the temporal coherence among paths. Furthermore, to make animation rendering practical, we develop a parallel rendering system that distributes the iterative computational tasks to a pool of computers. Each task is rendered with a set of task-based parameters without introducing bias. The resulting animations are perceptually better than those rendered in a frame-by-frame manner. In this dissertation we demonstrate that Population Monte Carlo Energy Redistribution can enhance rendering efficiency by automatically adjusting the rendering parameters when exploiting the temporal and spatial coherence among light transport paths, and by concentrating more computation on visually important regions on the image plane and in the path space.

Li Zhang

Charles R. Dyer


ABSTRACT

A physically-correct animation can be generated by estimating a large number of highly correlated path integrals falling on the image plane at a sequence of different periods of time. Therefore, exploiting temporal and spatial coherence among path integrals is very important for enhancing rendering efficiency. Removing temporal artifacts is even more important than removing spatial artifacts, because our visual perception is very sensitive to temporal artifacts. In this dissertation we propose a novel space-time global illumination algorithm, Population Monte Carlo Energy Redistribution, which concentrates computation on important light transport paths and automatically adjusts the extent over which energy is distributed for each light transport path. We adapt the statistical framework of Population Monte Carlo into the energy redistribution algorithm to improve rendering efficiency. In the Population Monte Carlo framework, information collected in previous iterations is used to guide subsequent iterations by adapting the kernel function to approximate the target distribution without introducing bias into the final result. Based on this framework, our algorithm automatically adapts the amount of energy redistributed at different pixels of different frames and the extent over which energy is redistributed. After applying MCMC algorithms to rendering static images, we observed that MCMC algorithms are limited in achieving higher rendering efficiency by the potentially high failure rate of the caustics perturbation method and by the stratified exploration of the image plane. We improve MCMCs significantly by introducing a new lens perturbation and new path-regeneration methods. The new lens perturbation method simplifies the computation and control of caustics perturbation and potentially increases the perturbation success rate. The new path-regeneration methods

aim to concentrate more computation on "high perceptual variance" regions and "hard-to-find-but-important" paths. We implement these schemes in the PMC-ER framework to demonstrate the effectiveness of these improvements. Our results show that rendering efficiency is improved with these new schemes. Since our perception is very sensitive to temporal artifacts, exploiting temporal coherence among light transport paths is very important for removing temporally perception-sensitive artifacts in animation rendering. Using the contribution of a light path to all frames in an animation as the sampling distribution function allows us to adapt Markov Chain Monte Carlo (MCMC) algorithms to exploit the temporal and spatial coherence to generate a perceptually pleasant animation. Accordingly, a new perturbation technique called time perturbation is developed to explore the temporal coherence among paths. Furthermore, to make animation rendering practical, we distribute the iterative computational tasks to a pool of computers for parallel computation. Each task is rendered with a set of parameters adapted according to its local properties. We demonstrate that this local adaptation does not introduce statistical bias. The resulting animations are perceptually better than those rendered in a frame-by-frame manner.


Chapter 1 Introduction

Currently, a large number of movies employ visual effects created with computer graphics, and photorealistic rendering is critical for such effects; poor rendering makes the audience lose the sense of realism. In addition, product designers, architects, and illumination designers need photorealistic rendering to predict the appearance of their products and designs. For example, an architect needs to know the appearance of the interior of a room and the appearance of a building under different weather conditions; a stage designer needs to understand the effects of different lights on a concert stage; a car designer would like to see the appearance of a car under different illuminations. The main advantage of physically-based rendering is that the generated images can accurately predict appearance in the real world without actually setting up the real scenes. Furthermore, due to advances in computer hardware, the demand for realism in real-time applications such as computer games and virtual reality walk-throughs keeps growing. Physically-based rendering gives developers a base for understanding the fundamental principles of image rendering in order to enhance realism and visual appeal, and some real-time techniques, such as light maps and environment maps, need photorealistic rendering to generate the proper media in a preprocessing phase. Efficiently rendering high-quality images is a long-standing goal in global illumination. Our goal, therefore, is to create an unbiased rendering algorithm that enhances rendering efficiency in generating a physically correct animation. The remainder of this chapter presents the statement of the problem, a brief summary of contributions, and an overview of the subsequent chapters.


1.1 The Global Illumination Problem

The goal of our algorithm is to efficiently render a photorealistic animation through off-line computation. Thus, the input to our problem is a complete scene with animated objects, and the output is a rendered photorealistic animation. Since the computation is off-line, it is reasonable to assume that all information related to the animated scene is known a priori. The animated information includes changes in the intensity of a light, changes in the material properties of an object, movement of lights and objects, and movement of the viewpoint. With this assumption, we can fully utilize all information in the scene, especially the information contained in frames preceding and following the current frame. In order to generate physically accurate animations, we need to simulate all inter-reflections between the lights and objects in the scene model at different periods of time; this is called the global illumination problem [8, 142]. The rendering equation [77] is the foundation for physically-based rendering in global illumination. There are two different types of methods: finite element and Monte Carlo methods. Finite element methods, or radiosity algorithms [53], are most effective in scenes containing purely diffuse surfaces. However, it is difficult to generalize finite element algorithms to non-diffuse surfaces and complex geometries, both practically and theoretically. On the other hand, Monte Carlo methods can handle general light-surface interactions and different geometric representations, which makes them ideal for solving the global illumination problem for complex scenes. Kajiya [77] proposed path tracing, the first unbiased Monte Carlo algorithm. Over the years, new Monte Carlo algorithms with various trade-offs in speed, accuracy, and generality have been developed. Past results demonstrate that Monte Carlo is the most general and robust method.

However, the results also demonstrate that if too few samples are taken when using Monte Carlo algorithms, visible noise appears in the resulting images, and the noise level is reduced only slowly as the number of samples increases. The results shown in [143, 24] demonstrate that rendering a single complex scene takes many hours. Furthermore, professionals render an animation in a frame-by-frame manner [7, 140]. This means that all samples are generated independently and the construction of samples has to be repeated from scratch for each frame. As a result, Monte Carlo is

rarely exploited for the visualization of dynamic scenes in which photorealism is important. Since the samples are highly correlated in both the spatial and temporal domains, not using spatial or temporal information among samples is clearly a waste: the information in high-contribution samples, which can help generate good samples for the neighborhood of a pixel in the current frame or in consecutive frames, is not being exploited. More seriously, not exploiting the temporal coherence among samples and frames causes temporal artifacts (flickering and shimmering), and our visual perception has evolved to be sensitive to these kinds of artifacts. Our results show that exploiting the temporal and spatial coherence among samples speeds up the computation and reduces noise in the rendered results. The global illumination community has started to pay more attention to temporal coherence among samples to accelerate the rendering process. Some researchers have studied simple animation cases where only the light sources are moving or only the viewpoint is changing [12, 125, 123, 124, 62, 95]. Samples are reused by re-evaluating the contribution of each sample or re-weighting each sample based on the multiple-importance framework; the restriction on which objects are allowed to move limits the usefulness of these algorithms. Some attempts have been made to reuse samples by copying their contribution across several frames [105, 144]. Although those algorithms speed up the rendering process by reducing the number of rays traced and reducing flickering artifacts, they introduce bias (errors) into the rendered results, since the copied samples may not be valid or properly re-weighted across frames. To the best of our knowledge, no unbiased global illumination algorithm has been proposed that takes advantage of temporal coherence for rendering a complete and complex animated scene by considering all possible light transport paths through the entire animation.

In this dissertation we present a novel and unbiased rendering algorithm, Population Monte Carlo Energy Redistribution (PMC-ER), based on statistical Population Monte Carlo (PMC) methods, which takes advantage of temporal and spatial coherence for rendering a complete and complex animated scene by considering all possible light transport paths through the entire animation. PMC-ER techniques offer three major advantages over existing methods: 1) they reduce spatial

and temporal noise by choosing samples in a correlated way across frames and pixels without introducing bias; 2) they provide a natural way to continue exploring important light transport paths and to remove paths with low contribution; and 3) they can automatically adjust the rendering parameters used to explore the spatial and temporal coherence. All of these enhance the rendering efficiency of physically-based animation rendering. We demonstrate that our rendered animations are perceptually pleasant.
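The second advantage, keeping high-contribution paths alive while culling low-contribution ones, rests on importance resampling. The sketch below is a toy illustration of that idea, not the PMC-ER implementation: the path names, the multinomial scheme, and the equal-energy reassignment are illustrative assumptions.

```python
import random

def resample(population, rng):
    """Multinomial resampling of a path population.

    `population` is a list of (path_id, energy) pairs.  Paths are drawn
    with probability proportional to their energy, and every survivor is
    assigned the same average energy, so the total energy carried by the
    population is preserved in expectation.
    """
    total = sum(e for _, e in population)
    avg = total / len(population)
    ids = [p for p, _ in population]
    weights = [e for _, e in population]
    survivors = rng.choices(ids, weights=weights, k=len(population))
    return [(p, avg) for p in survivors]

rng = random.Random(1)
paths = [("caustic", 8.0), ("diffuse", 1.0), ("dark", 0.1)]
new_paths = resample(paths, rng)
# The high-energy "caustic" path tends to dominate the new population,
# while the total energy (9.1) is unchanged.
```

In a real renderer the surviving paths would then be mutated independently, which is what lets the population concentrate on important regions of path space over successive iterations.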

1.2 Summary of Contributions

We adapt the Population Monte Carlo framework into energy redistribution to render a physically correct animation. Our main contributions are the following: · Population Monte Carlo Energy Redistribution: a novel global illumination algorithm that concentrates computation on important light transport paths and automatically adjusts the area over which energy is distributed for each light transport path. · Exploration of temporal coherence among paths: we extend the formulation of Markov Chain Monte Carlo (MCMC) algorithms from rendering a static image to rendering an animation. In addition to the formulation of animation rendering, a new time perturbation method is designed to mutate a path in the time domain to explore the temporal coherence among path integrals. The time perturbation method reuses path samples with similar properties at different moments in time in order to reduce the temporal artifacts of an animation. · A new lens mutation method: when implementing the MCMC algorithms, we found a potentially high failure rate in the caustics perturbation, likely caused by poor prediction of the perturbation angle range. Thus, we propose a new lens perturbation to increase the success rate of perturbations.

· Two new regeneration methods: stratified exploration of the image plane limits the rendering efficiency achievable by MCMC algorithms because regions of the image are not perceptually equal in importance. The ERPT and PMC-ER algorithms carefully generate new paths so as to explore the image plane evenly, which is required for unbiasedness. This choice is sub-optimal because some areas on the image plane contain higher perceptual variance than others, and devoting more computational effort to reducing the variance in these high-perceptual-variance regions would increase the perceptual quality. In addition, some types of paths, such as caustics paths, are visually important but hard to find with a general path tracing algorithm; concentrating more computational effort on these paths can further improve rendering efficiency. However, evenly exploring the image plane prevents MCMCs from spending more computational effort on those "hard-to-find-but-important" paths and limits the improvement in rendering efficiency. Thus, we propose two new path-regeneration methods that concentrate more computation on "high perceptual variance" regions and "hard-to-find-but-important" paths to enhance rendering efficiency.
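The on-the-fly adaptation of mutation strategies mentioned in the contributions can be pictured as maintaining a mixture of perturbation kernels and re-weighting them from observed statistics. The following is a minimal sketch of that D-kernel-style re-weighting under simplifying assumptions; the success-rate criterion, the uniform floor, and its value are illustrative choices, not the thesis's actual adaptation rule.

```python
def adapt_weights(successes, trials, floor=0.05):
    """Re-weight a mixture of mutation kernels by their observed success
    rates.  A small uniform floor keeps every kernel selectable so that
    no mutation type is ever switched off entirely.
    """
    rates = [s / t if t > 0 else 0.0 for s, t in zip(successes, trials)]
    total = sum(rates)
    k = len(successes)
    if total == 0.0:
        return [1.0 / k] * k  # no information yet: stay uniform
    return [floor + (1.0 - k * floor) * (r / total) for r in rates]

# kernels: [lens, caustics, time] perturbations (names from the thesis)
w = adapt_weights(successes=[60, 5, 35], trials=[100, 100, 100])
# w sums to 1, favors the lens kernel, and keeps caustics selectable
```

Keeping every component's weight strictly positive is what allows such adaptation without losing the ability to reach all path types.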

1.3 Thesis Outline

Chapter 2 gives an overview of Monte Carlo methods. After a brief history of Monte Carlo methods, the principle of Monte Carlo integration, which uses Monte Carlo simulation to estimate an integral, is described. We further introduce two concepts, Markov Chain Monte Carlo methods and Population Monte Carlo methods, both of which are at the core of this dissertation. Chapter 3 introduces the basic concepts related to global illumination and physically based rendering. After providing definitions of the most commonly used terms in radiometry, we present the surface BRDF, the rendering equation, and the formulation for energy redistribution with time as an extra dimension. Then, a summary of existing Monte Carlo rendering algorithms for rendering static images is given. Finally, we survey existing animation rendering algorithms in the global illumination community.

Chapter 4 presents the Population Monte Carlo Energy Redistribution algorithm for rendering static images. The algorithm concentrates computation on important light transport paths and automatically adjusts the energy-distribution area for each light transport path based on information collected in previous iterations. This automatic adaptation eases the burden of setting parameters when rendering a scene and also enhances rendering efficiency. Chapter 5 presents a new lens perturbation and new path-regeneration methods to improve rendering efficiency when using MCMC algorithms to render static images. The new lens perturbation simplifies the computation and control of the caustics perturbation method and can increase the perturbation success rate. The new path-regeneration methods concentrate more computation on "high perceptual variance" regions and "hard-to-find-but-important" paths. Chapter 6 presents a space-time algorithm that exploits the spatial and temporal coherence among all paths in the Markov chain Monte Carlo framework when rendering a physically-correct animation. A new perturbation technique, called time perturbation, is developed to explore the temporal coherence among paths. Furthermore, to make animation rendering practical, we develop a parallel rendering system that distributes the iterative computational tasks to a pool of computers; each task is rendered with a set of task-based parameters without introducing bias. Chapter 7 concludes with a summary of the original contributions of the thesis and identifies some future research directions.


Chapter 2 Monte Carlo Integration

Monte Carlo integration is the main tool used in this dissertation. In this chapter we review the basic statistical concepts related to Monte Carlo integration. We first present a brief history of Monte Carlo methods, followed by a discussion of Monte Carlo integration. We then describe Metropolis-Hastings sampling, the foundation of the Markov chain Monte Carlo (MCMC) algorithms used in global illumination, which generates correlated samples in the integration space. Finally, we present Population Monte Carlo, which uses information collected in previous iterations to adjust the kernel function toward a better approximation of the ideal sampling density, in order to reduce the variance. Good references for Monte Carlo methods include Kalos et al. [78], Hammersley et al. [58], Spanier et al. [137], Robert et al. [122], Gilks et al. [49], Doucet et al. [35], and Liu [91].

2.1 Monte Carlo Methods: A Brief History

The first documented research on Monte Carlo happened in 1777. It was an experiment conducted by Comte de Buffon in which a needle of length L is randomly dropped onto a board ruled with equidistant parallel lines. The distance between consecutive lines is d, with d ≥ L. The experiment showed that the probability of the needle intersecting one of the lines is

p = \frac{2L}{πd}    [2.1]

Later, Laplace showed that this result can be used to estimate the value of π.
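Buffon's experiment is straightforward to simulate; the sketch below (function and parameter names are my own, not from the thesis) estimates π by inverting the crossing probability of Equation 2.1:

```python
import math
import random

def buffon_pi_estimate(n_drops, needle_len=1.0, line_gap=2.0, seed=1):
    """Estimate pi via Buffon's needle; requires needle_len <= line_gap."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_drops):
        # Distance from the needle's center to the nearest line, and its angle.
        center = rng.uniform(0.0, line_gap / 2.0)
        theta = rng.uniform(0.0, math.pi / 2.0)
        # The needle crosses a line when its half-projection spans that distance.
        if center <= (needle_len / 2.0) * math.sin(theta):
            hits += 1
    # Invert p = 2L / (pi * d):  pi ~= 2 * L * n_drops / (d * hits)
    return 2.0 * needle_len * n_drops / (line_gap * hits)
```

With a few hundred thousand drops the estimate typically lands within a fraction of a percent of π, illustrating the slow but steady convergence discussed later in this chapter.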

Modern Monte Carlo originated in the early 1940s at the Los Alamos National Laboratory out of the need to develop nuclear weapons during World War II. In late 1946 Stanislaw Ulam suggested the potential of using random sampling to simulate the motion of neutrons. With the development of the first electronic computer, John von Neumann developed a detailed proposal for using Monte Carlo as a research tool for developing nuclear weapons. His proposal led to small-scale simulations whose results were important to the success of the nuclear-weapon project. Metropolis and Ulam [96] published the first paper describing their ideas on Monte Carlo. Their paper attracted a great deal of research in the 1950s. Metropolis also suggested the name "Monte Carlo", after the city in Monaco famous for its casino.

2.2 Monte Carlo Integration

Standard numerical integration techniques have difficulty solving problems in high-dimensional domains, especially when the integrand is not smooth. One of the most important applications of Monte Carlo is to evaluate the integral of a function, which is equivalent to evaluating an expectation. In this section we demonstrate how to transform an integration problem into an expectation problem. Assume that we would like to evaluate an integral of the form

I = \int_Ω f(x) dµ(x)    [2.2]

where Ω is the domain of integration, a region in a multiple-dimensional space; f : Ω → R is the integrand, a real-valued function; and µ is a measure on Ω. In its basic form, Monte Carlo integration estimates the integral in Equation 2.2 by independently sampling N points, X_1, ..., X_N, according to some convenient density function p(x), and then computing the estimate

\hat{I}_N = \frac{1}{N} \sum_{i=1}^{N} \frac{f(X_i)}{p(X_i)}    [2.3]

In the simplest form we can choose Ω = [0, 1]^s, where s is the dimension of the integrand, and draw the samples X_i independently and uniformly at random. The estimator is then


\hat{I}_N = \frac{1}{N} \sum_{i=1}^{N} f(X_i)    [2.4]

It is easy to show that the expectation of the estimator \hat{I}_N gives the correct result:

E[\hat{I}_N] = E\left[ \frac{1}{N} \sum_{i=1}^{N} \frac{f(X_i)}{p(X_i)} \right]    [2.5]
            = \frac{1}{N} \sum_{i=1}^{N} \int_Ω \frac{f(x)}{p(x)} p(x) dµ(x)    [2.6]
            = \int_Ω f(x) dµ(x)    [2.7]
            = I    [2.8]

assuming that f(x)/p(x) is finite whenever f(x) ≠ 0. To estimate the variance of Monte Carlo integration, we denote C_i = f(X_i)/p(X_i) and consider the variance of C:

V[C] = E[C^2] - E[C]^2 = \int_Ω \frac{f(x)^2}{p(x)} dµ(x) - I^2    [2.9]

Provided that V[C] is finite, the variance of \hat{I}_N decreases linearly in 1/N:

V[\hat{I}_N] = V\left[ \frac{1}{N} \sum_{i=1}^{N} C_i \right] = \frac{1}{N^2} V\left[ \sum_{i=1}^{N} C_i \right] = \frac{1}{N^2} \sum_{i=1}^{N} V[C_i] = \frac{1}{N} V[C]    [2.10]

where we have used the facts that V[aC] = a^2 V[C] and that the C_i are sampled independently, so V[\sum_{i=1}^{N} C_i] = N V[C]. Therefore, the standard deviation can be estimated as σ[\hat{I}_N] = σ[C]/\sqrt{N}. Two important points can be drawn from the standard deviation derived above.

1. The standard deviation of Monte Carlo integration decreases at a rate of N^{-1/2}.

2. The statistical error is independent of the dimensionality of the integral. This is a very important property because numerical integration methods are well known to suffer from the curse of dimensionality, i.e., the computational cost grows exponentially with the dimension

of the integrals. Thus, this property makes Monte Carlo integration a proper tool for solving high-dimensional integration problems. There are several advantages to applying Monte Carlo integration. First, it converges at a rate of O(N^{-1/2}) in any dimension, regardless of the smoothness of the integrand. This makes it suitable for global illumination problems because the integrands are often high-dimensional and sometimes not smooth. Second, it is easy to understand and simple to use. The only requirements for using Monte Carlo integration are a density function, p(x), for generating samples and the ability to evaluate the weight of a sample, f(X_i)/p(X_i). Finally, it is general and flexible: it can be applied to a wide range of problems. For example, in global illumination we can formulate rendering as path integrals over the space of all light transport paths. The domain of the path space is naturally an infinite-dimensional space, yet it is straightforward to handle with Monte Carlo integration.
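The uniform estimator of Equation 2.4 takes only a few lines to implement; the sketch below (names are illustrative, not from the thesis) integrates f(x) = 3x² over [0, 1], whose exact value is 1:

```python
import random

def mc_estimate(f, n, seed=0):
    """Uniform Monte Carlo estimate of the integral of f over [0, 1]
    (Equation 2.4): average f at n independent uniform samples."""
    rng = random.Random(seed)
    return sum(f(rng.random()) for _ in range(n)) / n

def f(x):
    return 3.0 * x * x   # exact integral over [0, 1] is 1

est = mc_estimate(f, 100_000)
```

Quadrupling the sample count roughly halves the error, matching the N^{-1/2} rate, and the same code works unchanged whatever the dimension of the underlying domain.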

2.2.1 Estimators and Their Properties

A function F of random variables is called an estimator for an unknown quantity θ if its mean E[F] is a usable approximation of θ. A particular numerical value of F, after instantiating the random variables with the known sample data, is called an estimate. There are different possible estimators for a given quantity, and we would like to evaluate the performance of an estimator for that quantity. In order to compare different estimators, we need some criteria. In this section we present the basic metrics that form these criteria: mean squared error, bias, and efficiency.

· Mean Squared Error: the quality of an estimator is expressed by its mean squared error (MSE). The MSE of an estimator F of a quantity θ is calculated as

MSE[F] = E[(F - θ)^2]    [2.11]

which is the expectation of the square of the difference between F and θ.

· Bias: F is called an unbiased estimator of θ if its expected value is exactly the true value of θ. If not, the difference between them is the bias:

β[F] = E[F] - θ    [2.12]

One advantage of using an unbiased estimator is that it is guaranteed to converge to the correct value of θ if enough samples are used. Also, the expected value of an unbiased estimator is the correct value after any number of samples, which makes it much easier to analyze the error of the estimator.

· Efficiency: notice that we can make the variance of an unbiased estimator as small as we like by taking many samples. However, increasing the number of samples increases the running time of the estimator. Thus, we would like an estimator with both small variance and small running time. The trade-off between running time and variance is captured by the efficiency of a Monte Carlo estimator [58]:

Eff[F] = \frac{1}{V[F] T[F]}    [2.13]

where T[F] is the running time of the estimator and V[F] is its variance. We must point out, however, that while in some cases one estimator is clearly better than another, in other cases there may be no obvious choice of a better estimator.
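These definitions are easy to probe empirically. The sketch below (illustrative names, not code from the thesis) measures the MSE of two unbiased estimators of the mean of U(0, 1); since both have zero bias, MSE equals variance, and averaging 16 samples cuts the MSE by roughly a factor of 16 at 16 times the running time:

```python
import random

def empirical_mse(estimator, truth, trials=2000, seed=0):
    """Empirical mean squared error E[(F - theta)^2] over repeated trials."""
    rng = random.Random(seed)
    return sum((estimator(rng) - truth) ** 2 for _ in range(trials)) / trials

# Two unbiased estimators of the mean (0.5) of U(0, 1):
def one_sample(rng):
    return rng.random()

def avg_16(rng):
    return sum(rng.random() for _ in range(16)) / 16.0
```

Because the variance reduction and the cost increase are both a factor of 16, the two estimators have comparable efficiency in the sense of Equation 2.13; efficiency only distinguishes estimators whose variance shrinks faster (or slower) than their cost grows.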

2.3 MCMC and Metropolis-Hastings Sampling

This dissertation is based on Markov chain Monte Carlo methods. In this section we provide the background for the Markov chain Monte Carlo (MCMC) method and for Metropolis-Hastings sampling, a well-known application of MCMC methods. MCMC methods apply Markov chain simulation to generate samples from a specific target distribution, π(x) [49]. In other words, MCMC methods start with an initial sample X_0 and generate a random walk, X_0, X_1, X_2, ..., through a distribution Tr(X_{z+1}|X_z), called the transition function of the chain, to approximate the target distribution π(x) in a given state space Ω. The transition function Tr(X_{z+1}|X_z) is a conditional probability density function that depends on the current state X_z. From Monte Carlo's point of view, the samples generated by MCMC methods can be used to compute any expectation (or integral) to an acceptable degree of accuracy. Metropolis-Hastings sampling [97, 59] provides a way to apply this simple concept to generate such a Markov chain. To generate a sample, the algorithm first proposes a candidate state, X′_{z+1}, using the information of X_z and a tentative transition function T(X′_{z+1}|X_z), which is the probability of going from state X_z to state X′_{z+1}. The algorithm then either accepts the candidate, setting X_{z+1} = X′_{z+1}, or retains the original state, X_{z+1} = X_z. Figure 2.1 shows the Metropolis algorithm:

1  Initialize X_0; set z = 0.
2  for z = 1 to N_sample
3      X′_{z+1} ~ T(x|X_z)
4      generate a random number r ∈ [0, 1]
5      if r < A(X′_{z+1}|X_z)
6          then X_{z+1} = X′_{z+1}
7      else
8          X_{z+1} = X_z

Figure 2.1: The Metropolis sampling algorithm.

The function A(X′_{z+1}|X_z) is the acceptance probability. It is computed as

A(X′_{z+1}|X_z) = \min\left(1, \frac{π(X′_{z+1}) T(X_z|X′_{z+1})}{π(X_z) T(X′_{z+1}|X_z)}\right)    [2.14]

Remarkably, the tentative transition function can take almost any form and the Markov chain generated by the algorithm will still converge to the stationary distribution π(x). Metropolis-Hastings sampling is very general: the beauty of this algorithm is that it can sample from any arbitrarily complex probability distribution function, π(x), as long as π(x) can be evaluated. Although the proposal distribution, T(X′_{z+1}|X_z), can take almost any form and the chain will eventually converge to π(x), the design of the proposal distribution has a large impact on the convergence rate. Hence, the main step in designing a Metropolis sampler is to find a good proposal distribution function.
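The loop of Figure 2.1 is only a few lines of code. The sketch below is a hypothetical illustration (not code from the thesis) that uses a symmetric Gaussian proposal, for which the acceptance probability of Equation 2.14 reduces to min(1, π(X′)/π(X)), and draws samples from a standard normal target:

```python
import math
import random

def metropolis_hastings(log_target, x0, n_samples, step=1.0, seed=0):
    """Metropolis-Hastings with a symmetric Gaussian proposal T(x'|x).
    For a symmetric proposal the T terms in [2.14] cancel, leaving
    min(1, pi(x') / pi(x)); working in log space avoids overflow."""
    rng = random.Random(seed)
    x = x0
    chain = []
    for _ in range(n_samples):
        cand = x + rng.gauss(0.0, step)
        # Accept with probability min(1, pi(cand) / pi(x)).
        if rng.random() < math.exp(min(0.0, log_target(cand) - log_target(x))):
            x = cand
        chain.append(x)
    return chain

# Sample a standard normal: log pi(x) = -x^2 / 2 up to a constant,
# demonstrating that pi only needs to be evaluable up to normalization.
chain = metropolis_hastings(lambda x: -0.5 * x * x, 0.0, 50_000)
```

The `step` parameter plays the role of the proposal design discussed above: too small a step explores slowly, too large a step rejects most candidates, and either extreme slows convergence.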

2.4 D-Kernel Population Monte Carlo

The Population Monte Carlo (PMC) algorithm [20] provides an iterative importance sampling framework. The distinguishing feature of PMC is that the kernel functions are modified after each step based on information gathered from prior iterations. The kernels adapt to approximate the ideal importance function based on the samples seen so far. While this dependent sampling may appear to introduce bias, it can be proven that the result is either unbiased or consistent, depending on whether certain normalizing constants are known (in our case they are known). The generic D-Kernel PMC sampling algorithm [33, 34], an evolution of PMC, is stated in Figure 2.2.

1  generate the initial population, s = 0
2  for s = 1, ..., N_iteration
3      for i = 1, ..., N_population
4          adapt K_i^{(s)}(x^{(s)}|x^{(s-1)})
5          generate X_i^{(s)} ~ K_i^{(s)}(x|X_i^{(s-1)})
6          w_i^{(s)} = π(X_i^{(s)}) / K_i^{(s)}(X_i^{(s)}|X_i^{(s-1)})
7      resampling process: elimination and regeneration

Figure 2.2: The generic D-Kernel Population Monte Carlo algorithm.

The algorithm works on a population of samples, denoted X_1^{(s)}, ..., X_N^{(s)}, where s is the iteration number and N is the population size, to evaluate \int f(x) dx, where f(x) = π(x)h(x), by sampling according to the target distribution π(x). The algorithm first creates an initial population using any unbiased sampling method. A kernel function, K_i^{(s)}(x_i^{(s)}|x_i^{(s-1)}), for each member of the population is adapted at the end of the loop. The responsibility of the member kernel function is to take the existing member sample, X_i^{(s-1)}, as input and produce a candidate new sample, X_i^{(s)}, as output (line 5). The resampling step in line 7 is designed to cull candidate samples with low weights and promote high-weight samples. The resampling process consists of two steps, elimination and regeneration; it is designed to eliminate the samples with low contribution to the final result and to explore new, unexplored regions. The weight computed for each sample, w_i^{(s)}, is essentially its importance weight. At any given iteration, an estimator of the integral of interest can be computed,

\hat{π}(h) = \frac{1}{N} \sum_{i=1}^{N} w_i^{(s)} h(X_i^{(s)})

and it is unbiased for π(h):

E\left[ \frac{1}{N} \sum_{i=1}^{N} w_i^{(s)} h(X_i^{(s)}) \right] = \frac{1}{N} \sum_{i=1}^{N} E[w_i^{(s)} h(X_i^{(s)})]
    = \frac{1}{N} \sum_{i=1}^{N} \int \frac{π(x) h(x)}{K_i^{(s)}(x|x_i^{(s-1)})} K_i^{(s)}(x|x_i^{(s-1)}) dx
    = \frac{1}{N} \sum_{i=1}^{N} \int π(x) h(x) dx
    = \int f(x) dx

We conclude that \hat{π}(h) is an unbiased estimator of π(h). Before applying PMC to rendering problems, we must first answer the following questions:

· What is the sampling domain and how big is the population size?

· What is the member kernel function and what are the adaptation criteria?

· What techniques are used for sampling from the kernel functions and for the resampling step?

In the following chapters we answer these questions for our rendering algorithms.
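The loop of Figure 2.2 can be sketched in a few lines. The code below is my own minimal 1-D illustration, not code from the thesis: each member proposes from a Gaussian kernel centered at its current sample, the weights w = π(X)/K(X|X_prev) drive both the estimate and the elimination/regeneration resampling, and the kernel width adapts to the spread of the surviving population (one simple choice of adaptation criterion among many):

```python
import math
import random

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def pmc_estimate(target, h, n_pop=2000, n_iter=5, seed=0):
    """Sketch of Figure 2.2 for a normalized 1-D target: Gaussian member
    kernels, importance weights w = pi(X) / K(X | X_prev), weighted
    resampling, and a kernel width adapted between iterations."""
    rng = random.Random(seed)
    pop = [rng.uniform(-5.0, 5.0) for _ in range(n_pop)]  # initial population
    width = 1.0
    estimate = 0.0
    for _ in range(n_iter):
        cands, weights = [], []
        for x_prev in pop:
            x = rng.gauss(x_prev, width)                  # X_i ~ K_i(x | X_prev)
            cands.append(x)
            weights.append(target(x) / normal_pdf(x, x_prev, width))
        # Unbiased per-iteration estimate of pi(h).
        estimate = sum(w * h(x) for w, x in zip(weights, cands)) / n_pop
        # Resampling: eliminate low-weight members, regenerate from high-weight ones.
        pop = rng.choices(cands, weights=weights, k=n_pop)
        # Adaptation: match the kernel width to the population spread.
        mean = sum(pop) / n_pop
        width = max(0.3, math.sqrt(sum((p - mean) ** 2 for p in pop) / n_pop))
    return estimate

# E[x^2] under a standard normal target is 1.
est = pmc_estimate(lambda x: normal_pdf(x, 0.0, 1.0), lambda x: x * x)
```

Even with a deliberately poor initial population, resampling and adaptation concentrate the members where the target has mass, which is exactly the behavior the rendering algorithms of later chapters exploit.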


Chapter 3 Basics for Global Illumination

The goal of global illumination algorithms is to simulate the light transport from the light sources to the camera. There are complex interactions between the light and the scene surfaces during the transport process. In this chapter we present the basic knowledge of light transport needed for the later discussion in this dissertation. First, important physical quantities used to describe light transport are presented. Then, the light-scattering properties of scene surfaces are described. An analytical form of the light transport problem, the rendering equation, is derived from the light-scattering events on the scene surfaces. Because it is very difficult to find an analytical solution for the rendering equation, it is transformed into path integrals over the space of light transport paths in order to use Monte Carlo integration. Several important ray-tracing Monte Carlo algorithms are reviewed and their strengths and weaknesses are analyzed. Furthermore, because light transport paths are highly correlated, Markov chain Monte Carlo algorithms can evaluate the path integrals with correlated samples through sample reuse; we therefore reformulate the path integrals to solve the rendering problem with Markov chain Monte Carlo algorithms. Finally, animation rendering algorithms in global illumination are reviewed. Throughout this dissertation we assume that the speed of light is infinite. As a result, the light transport events from a light source to the camera happen transiently; in other words, we describe transient events happening at a specific moment.


3.1 Radiometry

We discuss several important physical quantities for light transport in this section. All quantities are described as distributions of energy with respect to one or more parameters.

· Flux Φ: energy per unit time, i.e., the time rate of energy passing through a surface or a region, measured in Watts [W = J · s^{-1}]:

Φ = \frac{dQ(t)}{dt}    [3.1]

where Q is the total energy of the photon events. This quantity is normally used to describe the rate at which energy is emitted, absorbed, or scattered by a finite surface S. For example, saying that a light bulb emits energy at a rate of 60 Watts means that the total energy passing through an imaginary sphere surrounding the bulb does so at a rate of 60 Watts.

· Irradiance E: flux per unit surface area, measured in [W · m^{-2}]:

E(x^t) = \frac{dΦ(x^t)}{dA(x^t)}    [3.2]

where x^t is a point on a surface S (either real or imaginary), the superscript t denotes the moment when the measurement is taken, and dA is the differential area around the point with normal N(x^t). This quantity is normally used to describe the measurement of incident radiation on one side of a surface (i.e., light incident from the upward hemisphere). For example, the irradiance from an un-occluded point light source of flux Φ at a point x^t at distance r from the light is Φ · (4πr^2)^{-1}.

· Radiance L: flux per unit projected area per unit solid angle, measured in [W · m^{-2} · sr^{-1}]:

L(x^t, ω) = \frac{d^2 Φ(x^t, ω)}{dA^⊥(x^t) dσ(ω)}    [3.3]

where dA^⊥ is the projected area of dA on a hypothetical surface perpendicular to the direction ω. Because radiance is defined per unit solid angle, its value does not attenuate with distance: for example, the radiance from a point light source measured at x_1^t and at x_2^t is the same.
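The point-light irradiance example above amounts to spreading the flux over a sphere; a minimal sketch (illustrative names, not code from the thesis):

```python
import math

def point_light_irradiance(flux, r):
    """Irradiance at distance r from an un-occluded isotropic point source:
    the flux spreads over a sphere of area 4*pi*r^2, so E = flux / (4*pi*r^2)."""
    return flux / (4.0 * math.pi * r * r)

# A 60 W bulb measured at 1 m; doubling the distance quarters the
# irradiance (the inverse-square law), while radiance stays constant.
e1 = point_light_irradiance(60.0, 1.0)
e2 = point_light_irradiance(60.0, 2.0)
```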

3.2 Bidirectional Scattering Distribution Function

The bidirectional scattering distribution function (BSDF) describes the light-scattering properties of a surface. Consider the geometric relation shown in Figure 3.1: x^t is a point on a scene surface M, the superscript t denotes the moment at which the event happens, and we are interested in the radiance leaving in a particular direction ω_o. The radiance L_o(x^t, ω_o) depends on the radiance arriving at x^t from all directions and on the properties of the surface. We first consider the outgoing radiance due to incident light from an infinitesimal cone around a fixed direction ω_i, where the cone occupies a solid angle dσ(ω_i). The irradiance generated by this light striking the point x^t is

dE(x^t, ω_i) = L_i(x^t, ω_i) \cos θ_i dσ(ω_i).

The light is then scattered by the surface in all directions. We denote the radiance leaving in the direction ω_o due to the incident light from the direction ω_i by dL_o(x^t, ω_o). Experimentally, we can observe a proportional relationship between dE(x^t, ω_i) and dL_o(x^t, ω_o). As a result, we define the BSDF as the ratio of the outgoing radiance in the direction ω_o to the incoming differential irradiance from the direction ω_i. It is a function of the incoming direction, the outgoing direction, and the surface point x^t:

f_s(x^t, ω_o, ω_i) = \frac{dL_o(x^t, ω_o)}{dE(x^t, ω_i)} = \frac{dL_o(x^t, ω_o)}{L_i(x^t, ω_i) \cos θ_i dσ(ω_i)}    [3.4]

18

Figure 3.1: The input and output geometric relation used to describe the bidirectional scattering distribution function.

There are two important properties of the BSDF. Models that have these properties are considered physically plausible.

· Helmholtz Reciprocity Rule: for any incoming and outgoing direction pair, ω_i and ω_o, the BSDF is symmetric in the directions:

f_s(x^t, ω_o, ω_i) = f_s(x^t, ω_i, ω_o)    [3.5]

· Energy Conservation Law: the energy conservation law says that the quantity of light scattered must be less than or equal to the quantity of incident light. For any direction ω_o,

\int_S f_s(x^t, ω_o, ω′) \cos θ′ dσ(ω′) ≤ 1    [3.6]

where S is the imaginary hemispherical surface around the point x^t. The BSDF can describe all possible relationships between incoming and outgoing directions. Physically, we separate scattering into two different events, reflection and transmission, and accordingly split the BSDF into two functions: the bidirectional reflectance distribution function (BRDF), which describes the light-reflecting properties of a surface, and the bidirectional transmittance distribution function (BTDF), which describes the light-transmitting properties of a surface. By integrating the scattered radiance of the incident light,

dL_o(x^t, ω_o) = L_i(x^t, ω_i) f_s(x^t, ω_o, ω_i) \cos θ_i dσ(ω_i),

over all directions, we can predict L_o(x^t, ω_o):

L_o(x^t, ω_o) = \int_{S^2} L_i(x^t, ω_i) f_s(x^t, ω_o, ω_i) \cos θ_i dσ(ω_i)    [3.7]

This equation, called the surface scattering equation, can be used to predict the appearance of the surface given a description of the incident illumination.
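Energy conservation [3.6] can be checked numerically for the simplest diffuse (Lambertian) BRDF, f_s = ρ/π, by Monte Carlo integration over the hemisphere; this is my own sketch, not code from the thesis:

```python
import math
import random

def hemisphere_scatter_integral(brdf, n=200_000, seed=0):
    """Monte Carlo estimate of the energy-conservation integral [3.6]:
    integral over the hemisphere of f_s * cos(theta), sampling directions
    uniformly over the hemisphere (pdf = 1 / (2*pi) per unit solid angle)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        # Uniform direction on the upper hemisphere (z axis = surface normal):
        # for a uniform distribution, cos(theta) = z is itself uniform in [0, 1).
        z = rng.random()
        phi = 2.0 * math.pi * rng.random()
        r = math.sqrt(max(0.0, 1.0 - z * z))
        w_i = (r * math.cos(phi), r * math.sin(phi), z)
        total += brdf(w_i) * z * (2.0 * math.pi)   # divide by pdf = 1/(2*pi)
    return total / n

# Lambertian BRDF with albedo rho: f_s = rho / pi, independent of direction.
rho = 0.75
scattered = hemisphere_scatter_integral(lambda w_i: rho / math.pi)
# The integral equals rho exactly, so [3.6] holds whenever rho <= 1.
```

The 1/π factor in the Lambertian BRDF is precisely what makes the cosine-weighted hemisphere integral come out to the albedo rather than exceed one.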

3.3 The Rendering Equation

In the past, most photorealistic rendering research in global illumination was limited to rendering static images. In this dissertation, however, we would like to render an animation sequence. Thus, we must derive, from the scattering function, a rendering equation that can handle animation rendering. Through this process, we also define a data structure, similar to a scroll of film used in a movie camera, that stores the spatio-temporal samples used to generate the animation. Fig. 3.2 shows the conceptual diagram of the image sweep. Each frame in the animation is stacked along the time-line, aligned with the camera movement. Rendering a photorealistic image of a scene must consider all the light interactions in the scene during the period the shutter is open. In other words, the valid period of light transport events for frame k is between T_k^{shutter} and T_k^{shutter} + ΔT^{shutter}. The difference in notation between t and T is analogous to that between a random variable x and a random sample X: T is the specific moment of a sample, and t indicates a random time at a plausible moment in the entire animation. Here we simplify the discussion by assuming that the shutter-open period is transient, and thus t = T in the following discussion. Global illumination algorithms achieve photorealistic rendering by simulating light transport behaviors based on physical principles. We begin by changing the notation of L_o to L_s in Equation 3.7 to represent the outgoing radiance due to the scattering behavior of the surface. The law of energy conservation predicts that the exitant radiance at


Figure 3.2: The image sweep is a set of three-dimensional hyper-cubes that correspond to the frames, Frame_1, ..., Frame_N, formed by all image planes aligned with the movement of the camera during the different frame periods. Note that k is used to index the frame and t is used to indicate the moment when a path is sampled. Valid path samples for frame k are generated between T_k^{shutter} and T_k^{shutter} + ΔT^{shutter}, where T_k^{shutter} is the moment when the shutter opens. In addition, all pixel samples in the valid period must also spatially pass through the image plane I^t centered at the center of the camera. In other words, the image sweep is the sweep of the image plane along the camera's locomotion in space during the time between T_k^{shutter} and T_k^{shutter} + ΔT^{shutter}.

a surface point along an outgoing direction must equal the sum of the emitted and reflected radiances. We describe this relationship as the energy balance equation:

L_o(x^t, ω_o) = L_e(x^t, ω_o) + L_s(x^t, ω_o)    [3.8]

When we replace the L_s term with the scattering Equation 3.7, we get

L_o(x^t, ω_o) = L_e(x^t, ω_o) + \int_{Ω_i} f_s(x^t, ω_o, ω_i) L_i(x^t, ω_i) \cos θ_i dσ(ω_i)    [3.9]

where Ω_i is the set of all possible incident ray directions. Radiance along a ray is constant in free space, i.e., radiance does not change until the ray hits some surface or medium. We define the point x′^t through a ray-casting function, x′^t = Tr(x^t, ω), where x′^t is the first surface point visible from x^t along the direction ω. The relationship between the incoming and outgoing radiance can then be written as

L_i(x^t, ω_i) = L_o(Tr(x^t, ω_i), -ω_i)    [3.10]

and Equation 3.9 can be rewritten as

L_o(x^t, ω_o) = L_e(x^t, ω_o) + \int_{Ω_i} f_s(x^t, ω_o, ω_i) L_o(Tr(x^t, ω_i), -ω_i) \cos θ_i dσ(ω_i)    [3.11]

We have obtained the rendering equation, which generally describes the complex light transport behaviors inside a 3D animated or static virtual scene.

However, the goal of light transport is to compute a set of real-valued measurements, I_1^1, ..., I_{N_pixel}^1, ..., I_1^k, ..., I_{N_pixel}^k for an animation, or I_1, ..., I_{N_pixel} for a static image. Each measurement corresponds to the output of a hypothetical sensor that responds to the radiance L_i(x^t, ω) incident upon it. The response of the sensor is characterized by the sensor response function, W_j^k(x^t, ω). The overall response for pixel j of frame k is determined by

I_j^k = \int_{T_k^{shutter}}^{T_k^{shutter}+ΔT^{shutter}} \int_{M×S} W_j^k(x^t, ω) L_i(x^t, ω) dA(x^t) dσ_{x^t}(ω) dt    [3.12]

where M is the set of object surfaces in the scene, and S is the 3D imaginary hemispherical surface above the sensor response surface. When rendering a static scene, the values of k and t can be neglected or set to zero. When rendering a frame with a transient shutter-open period in an animation, i.e., ΔT^{shutter} = 0, the values of k and T are constant, and T can be computed from k.

3.4 Monte Carlo Integrations for Global Illumination

The radiance arriving at a point x^t from the direction ω_i can be calculated from the emitted and scattered radiance of all surface points in the scene. The emitted radiance depends only on the surface properties, but the scattered radiance depends on the incident and emitted radiance of the surface points. In other words, the rendering equation is a Fredholm equation of the second kind, in which the unknown quantity L appears on both sides of the equation. The Fredholm equation is hard to solve analytically. However, if we manipulate the rendering equation mathematically into the form of path integrals, the complex light transport problem can be solved using general-purpose integration methods.

3.4.1 Path Integral Formulation of the Rendering Equation

The goal of this section is to transform the rendering equation into a form of simple integrals. In other words, we would like to express the measurement I_j^k of each pixel of each frame in the form

I_j^k = \int_Ω f_j^k(\bar{x}^t) dµ(\bar{x}^t)    [3.13]

where \bar{x}^t is a valid light transport path at the moment t; Ω is the complete set of light transport paths from a light source to the camera over the entire period when the scene exists; µ(\bar{x}^t) is the surface-area measure for the path \bar{x}^t; and f_j^k(\bar{x}^t) is the measured contribution to pixel j of frame k from the path \bar{x}^t.


Figure 3.3: Geometrical relation in three-point representation of the light transport equation.


3.4.1.1 The Three-Point Formulation of the Transport Equation

A path is constructed from vertices and edges; thus it is easier to transform the light transport equation into a three-point form, i.e., to replace the directional variables ω_i, ω_o in the transport equation with vertices x^t. Fig. 3.3 shows the geometric relation used in the three-point representation. First, the equilibrium radiance can be expressed in the form L(x^t → x′^t), where x^t, x′^t ∈ M are points on the surfaces:

L(x^t → x′^t) = L(x^t, ω)    [3.14]

where ω = (x′^t - x^t) · ||x′^t - x^t||^{-1} is the unit directional vector from x^t to x′^t. The BSDF can be expressed as

f_s(x″^t → x′^t → x^t) = f_s(x′^t, ω_i, ω_o)    [3.15]

where ω_i = (x″^t - x′^t) · ||x″^t - x′^t||^{-1} and ω_o = (x^t - x′^t) · ||x^t - x′^t||^{-1}. Finally, the light transport equation can be expressed in the three-point form

L(x′^t → x^t) = L_e(x′^t → x^t) + \int_M L(x″^t → x′^t) f_s(x″^t → x′^t → x^t) G(x″^t ↔ x′^t) dA(x″^t)    [3.16]

where G is the geometry term, which can be represented as

G(x^t ↔ x′^t) = V(x^t ↔ x′^t) \frac{\cos θ_o \cos θ_i}{||x^t - x′^t||^2}    [3.17]

where V is the visibility test, whose value is 1 if x^t and x′^t are mutually visible. We can also express the measurement Equation 3.12 as

I_j^k = \int_{T_k^{shutter}}^{T_k^{shutter}+ΔT^{shutter}} \int_{M×M} W_j^k(x^t → x′^t) L(x^t → x′^t) G(x^t ↔ x′^t) dA(x^t) dA(x′^t) dt.    [3.18]
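The geometry term of Equation 3.17 is simple to evaluate once the two points, their normals, and the visibility result are known; a minimal sketch (names are illustrative, not from the thesis):

```python
import math

def geometry_term(x, n_x, y, n_y, visible=True):
    """Geometry term G(x <-> y) of Equation 3.17 for 3-D points x, y with
    unit normals n_x, n_y; `visible` stands in for the visibility test V."""
    d = tuple(yi - xi for xi, yi in zip(x, y))
    dist2 = sum(di * di for di in d)
    dist = math.sqrt(dist2)
    w = tuple(di / dist for di in d)                       # unit vector x -> y
    cos_at_x = max(0.0, sum(wi * ni for wi, ni in zip(w, n_x)))
    cos_at_y = max(0.0, -sum(wi * ni for wi, ni in zip(w, n_y)))
    return cos_at_x * cos_at_y / dist2 if visible else 0.0

# Two patches facing each other across a unit gap: both cosines are 1 and
# the squared distance is 1, so G = 1.
g = geometry_term((0.0, 0.0, 0.0), (0.0, 0.0, 1.0),
                  (0.0, 0.0, 1.0), (0.0, 0.0, -1.0))
```

The cosine and inverse-square factors here are exactly the change-of-variables terms that let the three-point form integrate over surface area dA instead of solid angle dσ.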


3.4.1.2 The Path Integral Formulation

In order to use general-purpose integration methods, we would like to express each measurement in the path-integral form

I_j^k = \int_Ω f_j^k(\bar{x}^t) dµ(\bar{x}^t)

We use Ω_l to represent the set of paths of length l, i.e., the set of paths of the form

\bar{x}^t = x_0^t x_1^t ... x_l^t

where 1 ≤ l < ∞ and x_i^t ∈ M for each i, and each vertex x_i^t is an intersection of the light transport path with a surface at the moment t. We define the area-product measure µ_l on this set of paths as

µ_l(D) = \int_D dA(x_0^t) ... dA(x_l^t)    [3.19]

where D ⊂ Ω_l is a set of paths. Formally, µ_l is an area-product measure; it can also be written as

dµ_l(x_0^t ... x_l^t) = dA(x_0^t) ... dA(x_l^t)

Next, the path space can be defined as

Ω = \bigcup_{l=1}^{∞} Ω_l    [3.20]

i.e., Ω represents the set of paths of finite length. The area-product measure µ on this space is

µ(D) = \sum_{l=1}^{∞} µ_l(D ∩ Ω_l)    [3.21]

We finish the formulation of the path integrals by recursively expanding the transport Equation 3.16 to obtain

I_j^k = \sum_{l=1}^{∞} \int_{M^{l+1}} L_e(x_0^t → x_1^t) G(x_0^t ↔ x_1^t) \left[ \prod_{i=1}^{l-1} f_s(x_{i-1}^t → x_i^t → x_{i+1}^t) G(x_i^t ↔ x_{i+1}^t) \right] W_j^k(x_{l-1}^t → x_l^t) dA(x_0^t) ··· dA(x_l^t)
     = \int_{M^2} L_e(x_0^t → x_1^t) G(x_0^t ↔ x_1^t) W_j^k(x_0^t → x_1^t) dA(x_0^t) dA(x_1^t)
     + \int_{M^3} L_e(x_0^t → x_1^t) G(x_0^t ↔ x_1^t) f_s(x_0^t → x_1^t → x_2^t) G(x_1^t ↔ x_2^t) W_j^k(x_1^t → x_2^t) dA(x_0^t) dA(x_1^t) dA(x_2^t)
     + ···    [3.22]

The integrand f_j^k for different path lengths can be defined by expanding the appropriate term in Equation 3.22. For example, for a path \bar{X}^T = X_0^T X_1^T X_2^T, we have

f_j^k(\bar{X}^T) = L_e(X_0^T → X_1^T) G(X_0^T ↔ X_1^T) f_s(X_0^T → X_1^T → X_2^T) G(X_1^T ↔ X_2^T) W_j^k(X_1^T → X_2^T)

This function f_j^k is called the measurement contribution function. We have now defined all the terms needed for the path integrals. There are several benefits to the path integral model. First, by reducing light transport to an integration problem, it allows general-purpose integration methods to be applied. Second, the path integrals lead to new techniques for sampling paths locally, because the model describes scattering from one surface at a time; this allows light transport algorithms to construct a path incrementally by recursive sampling of the integral equation. Finally, the path integral model is useful for understanding the limitations of unbiased Monte Carlo algorithms.

3.4.2 Monte Carlo Integrations

To render frame $k$ of an animation is to compute the intensity, $I_j^k$, of each pixel $j$, by estimating the integral

$$I_j^k = \int_{\Omega} W_j^k(\bar{x}_t)\, L(\bar{x}_t)\, d\mu(\bar{x}_t) \qquad [3.23]$$

where $\Omega$ is the path space, $W_j^k(\bar{x}_t)$ is the measurement function for pixel $j$ of the $k$-th frame (nonzero if $\bar{x}_t$ passes through the image plane at a pixel position within the support of the reconstruction filter at pixel $j$ of the $k$-th frame), and $L(\bar{x}_t)$ is the radiance carried by the light transport path $\bar{x}_t$. Note that $W_j^k$ depends on the pixel location and the moment of the path, and thus depends only on the last edge $x_{l-1} x_l$ of the path. For discussion purposes we ignore depth-of-field effects, which would necessitate integration over directions out of the pixel. Taking an importance sampling view, given a set of samples $\bar{X}^1_{T_1}, \ldots, \bar{X}^N_{T_N}$ drawn from an importance function $p(\bar{x}_t)$ for an animation, the intensity in each pixel is estimated using

$$\hat{I}_j^k = \frac{1}{N} \sum_{m=1}^{N} \frac{W_j^k(\bar{X}^m_{T_m})\, L(\bar{X}^m_{T_m})}{p(\bar{X}^m_{T_m})} \qquad [3.24]$$

According to Monte Carlo integration, $\hat{I}_j^k \rightarrow I_j^k$ as the number of samples approaches infinity. We note that the only difference among pixels is the term $W_j^k(\cdot)$. Equation 3.24 can be broken into many integrals, one for the support of each pixel. Provided $p(\bar{X}_T)$ is known for each path that passes through the valid image plane, the global nature of $p(\bar{X}_T)$ is not important. The rendering equation can be transformed to evaluate the expected radiance, $E[L(\bar{X}_T)]$, carried by each sampled path and then accumulate this radiance in the pixels whose reconstruction support includes the pixel position the path passes through. At the final step of rendering, the accumulation in each pixel is divided by the total number of samples falling in the support of that pixel. The MC algorithms discussed in Section 3.4.3 are used to render a static image, so we can set $t = 0$. In addition, since they also neglect the motion blur effect, which is an integration of all path contributions over the time the shutter is open, i.e. $T_{shutter} = 0$, we can set the time of the transport event to a single moment $T_k$.


3.4.3 Raytracing-Based Monte Carlo Rendering Algorithms

Many Monte Carlo algorithms have been developed to solve the rendering equation. Path tracing, introduced by Kajiya [77], was the first Monte Carlo algorithm to solve the rendering equation in an unbiased manner. In this section we summarize several important Monte Carlo ray tracing algorithms: path tracing, bidirectional path tracing, irradiance caching, and photon mapping. The first two algorithms are unbiased and the last two are biased.

3.4.3.1

Path Tracing

James Kajiya first presented the rendering equation [77] and provided path tracing as a solution to it. Path tracing was the first algorithm to compute a complete global illumination solution. Path tracing generates a random ray tree starting from the camera. Each branch of the tree is a valid path formed by starting a ray from the camera, tracing it through the scene, and finally reaching a light source. At each bounce, a new direction for the ray is sampled according to a distribution function corresponding to the surface properties at the interaction point. The contribution of each valid path to the image plane is evaluated as the radiance the path carries weighted by the probability of the path being generated. Each valid path is then treated as a valid sample for a pixel if the sensor function of the pixel covers the pixel position through which the path passes. In the end, the average of the contributed radiance from all sample paths passing through the sensor located at a pixel is the measured intensity of that pixel. A variation of the path tracing algorithm starts tracing rays from a light source instead of the camera; this algorithm is known as light tracing, particle tracing, or backward ray tracing. Light tracing is the dual of path tracing because the physics of light transport does not change when a path is reversed, according to the Helmholtz reciprocity rule. Path tracing and light tracing each have their own advantages and disadvantages, and there are algorithms that combine them to overcome their individual limitations and obtain a more robust algorithm. This combination is called bidirectional path tracing, which is discussed next.
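The random-walk structure of path tracing, with Russian-roulette termination, can be illustrated with a deliberately simplified toy scene: a hypothetical gray world in which every bounce sees the same emission $E$ and albedo $a$, so the expected radiance satisfies the fixed point $L = E + aL = E/(1-a)$ and the estimator can be checked analytically:

```python
import random

EMISSION = 1.0   # radiance emitted at every surface (toy, hypothetical scene)
ALBEDO   = 0.6   # reflectivity of every surface

def trace_path():
    """One random-walk sample, terminated by Russian roulette.

    At each bounce we collect the local emission; with probability
    ALBEDO the walk continues.  Dividing the BRDF weight by the
    survival probability would keep the estimator unbiased in
    general; here the two are equal, so the throughput stays 1."""
    radiance = 0.0
    throughput = 1.0
    while True:
        radiance += throughput * EMISSION
        if random.random() >= ALBEDO:   # absorbed: the path ends here
            return radiance
        # survival probability = ALBEDO, BRDF weight = ALBEDO,
        # so throughput *= ALBEDO / ALBEDO = 1 in this toy setting

def render(n):
    """Average n independent path samples, as in path tracing."""
    return sum(trace_path() for _ in range(n)) / n

random.seed(7)
est = render(100_000)
# Fixed point of L = E + a*L gives L = E / (1 - a) = 2.5
```

The unbounded path length with a probabilistic cutoff is the same mechanism a real path tracer uses to terminate walks without biasing the estimate.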


3.4.3.2

Bidirectional Path Tracing

Lafortune [84] and Veach [150] developed bidirectional path tracing independently, basing their algorithms on different statistical frameworks. Veach proposed a bidirectional sampling algorithm built on the framework of multiple importance sampling, while Lafortune formulated bidirectional path tracing as a recursive evaluation of the global reflectance distribution function (GRDF). Bidirectional path tracing generalizes the standard path tracing algorithm. Observing the paths generated by path tracing and light tracing, some types of sub-paths are sampled more efficiently when tracing from the camera, and others when tracing from light sources; thus, combining both types of tracing results in a better algorithm. Bidirectional path tracing first generates a pair of sub-paths: an "eye" sub-path traced from the camera and a "light" sub-path traced from the light sources. Multiple complete paths can then be formed by connecting different vertices of the "eye" and "light" sub-paths. Multiple importance sampling [151] weights the contributions of these full paths to form the estimator of pixel intensities, providing a set of near-optimal weights that reduce the variance of the estimator. Bidirectional path tracing combines the advantages of path tracing and light tracing. It is unbiased and can handle scenes with arbitrary geometry and lighting. Because it links "eye" and "light" sub-paths, it combines the visual importance from the "eye" sub-paths with the lighting importance from the "light" sub-paths, and it significantly reduces the variance of a rendered image compared to path tracing. However, it still requires a huge number of samples to generate a smooth image.
Because "eye" and "light" sub-paths have to be connected to form valid full paths, bidirectional path tracing is not suitable for scenes in which most "eye" and "light" sub-paths are not mutually visible. For example, when the eye is in a different room from the light source and the only connections between "light" and "eye" sub-paths pass through a slightly open door, most of the connections will fail. In addition, each path is generated independently, so important path information is thrown away whenever a new path is generated.

As a result, even when a difficult and important path is located, the information related to this path is thrown away when the next sample path is generated.
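The multiple importance sampling machinery that weights the combined full paths can be sketched in isolation. The integrand and the two sampling strategies below are hypothetical stand-ins for the "eye" and "light" strategies; the balance heuristic $w_i(x) = n_i p_i(x) / \sum_k n_k p_k(x)$ is the weighting scheme of [151]:

```python
import random

def f(x):
    """Toy integrand; its true integral over [0,1] is 1/3."""
    return x * x

# Two hypothetical sampling strategies for the same integral:
def p1_pdf(x): return 1.0            # uniform sampling
def p1_draw():  return random.random()
def p2_pdf(x): return 3.0 * x * x    # density proportional to the integrand
def p2_draw():  return max(random.random() ** (1.0 / 3.0), 1e-9)

def mis_estimate(n1, n2):
    """Balance heuristic: w_i(x) = n_i p_i(x) / sum_k n_k p_k(x)."""
    total = 0.0
    for _ in range(n1):
        x = p1_draw()
        w = n1 * p1_pdf(x) / (n1 * p1_pdf(x) + n2 * p2_pdf(x))
        total += w * f(x) / p1_pdf(x) / n1
    for _ in range(n2):
        x = p2_draw()
        w = n2 * p2_pdf(x) / (n1 * p1_pdf(x) + n2 * p2_pdf(x))
        total += w * f(x) / p2_pdf(x) / n2
    return total

random.seed(3)
est = mis_estimate(50_000, 50_000)
# True value of the integral is 1/3
```

The weights sum to one over the strategies at every point, so the combined estimator stays unbiased while down-weighting whichever strategy samples a region poorly.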

3.4.3.3

Irradiance Caching

Indirect lighting usually changes slowly across diffuse surfaces. Thus, we can first compute and store accurate values of the indirect lighting at a sparse set of scene locations, and use the stored computations to estimate the indirect lighting at new locations. This is the basic idea of irradiance caching [169]. The algorithm works as follows. When estimating the exitant radiance at a diffuse point, the algorithm searches the irradiance cache for one or more acceptable samples. The gradients of the irradiance [168] estimated from cached samples are used to judge the acceptability of the cached samples; this criterion considers the distance to the samples and the irradiance difference caused by the change in position and orientation. If the algorithm finds one or more acceptable samples, an interpolated irradiance value is calculated from them; otherwise, the algorithm computes an accurate irradiance estimate at that point and stores the value in the irradiance cache. Since the algorithm caches only irradiance values, the information about the directional distribution of the incoming radiance is lost, so the technique can only be used for diffuse surfaces. In addition, the interpolation of cached irradiance introduces bias into the result, and therefore advanced convergence tests cannot be applied to this algorithm.
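A rough sketch of the cache lookup logic, assuming a simplified acceptability weight in the spirit of Ward's criterion. The class, the weight formula, and the threshold are illustrative only, not the implementation of [169]:

```python
import math

class IrradianceCache:
    """Minimal sketch of the lookup step in irradiance caching.

    A cached record is acceptable when the query point is close
    (relative to the record's valid radius R) and the normals agree;
    acceptable records are blended by their weights."""

    def __init__(self, alpha=0.3):
        self.records = []   # (position, normal, irradiance E, valid radius R)
        self.alpha = alpha  # accuracy threshold: accept when w > 1/alpha

    def weight(self, p, n, rec):
        rp, rn, _, R = rec
        d = math.dist(p, rp)
        cos_dn = max(0.0, n[0]*rn[0] + n[1]*rn[1] + n[2]*rn[2])
        denom = d / R + math.sqrt(max(0.0, 1.0 - cos_dn))
        return 1.0 / denom if denom > 0 else float('inf')

    def lookup(self, p, n):
        acc_w, acc_e = 0.0, 0.0
        for rec in self.records:
            w = self.weight(p, n, rec)
            if w > 1.0 / self.alpha:        # acceptable cached sample
                acc_w += w
                acc_e += w * rec[2]
        return acc_e / acc_w if acc_w > 0 else None  # None -> recompute & store

    def store(self, p, n, E, R):
        self.records.append((p, n, E, R))

cache = IrradianceCache()
cache.store((0, 0, 0), (0, 0, 1), E=1.0, R=1.0)
near = cache.lookup((0.05, 0, 0), (0, 0, 1))   # close query: interpolated value
far  = cache.lookup((5, 0, 0), (0, 0, 1))      # distant query: cache miss
```

A `None` return is where a real implementation would fall back to an accurate hemispherical estimate and insert it into the cache.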

3.4.3.4

Photon Mapping

Photon mapping [73] combines the advantages of tracing paths from both the camera and the light sources with the idea of caching indirect lighting from irradiance caching. It uses a two-pass rendering process. The first pass shoots photons from the light sources using standard light tracing. Whenever a photon intersects a non-specular surface (diffuse or glossy), caches called photon maps store a record of the position, incoming direction, and flux of the photon. The second pass traces view rays from the camera to render the image using the photon maps built in the first pass. The photon mapping algorithm divides the integrand into four components: direct lighting, specular reflection, caustics, and indirect lighting. Direct lighting and specular reflection are accurately evaluated using standard Monte Carlo ray tracing. Caustics are evaluated via a caustics photon map, and indirect lighting via an indirect photon map. Photon mapping handles all lighting effects, such as caustics and color bleeding, reasonably efficiently. Another advantage is that the photon map does not depend on the underlying scene geometry, which means that it scales well with scene complexity. However, the radiance estimate introduces bias into the algorithm, which prevents the use of advanced convergence tests and numerical evaluation of the algorithm's performance.
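The density-estimation step at the heart of the second pass can be sketched as a k-nearest-neighbour lookup: sum the flux of the k nearest photons and divide by the area of the disc that contains them. The uniform toy photon map below is hypothetical, chosen so that the true flux density is known:

```python
import math
import random

# A toy photon map: photons scattered uniformly on the unit square,
# each carrying equal flux (positions and flux values are hypothetical).
random.seed(11)
N_PHOTONS = 20_000
TOTAL_FLUX = 1.0
photons = [(random.random(), random.random(), TOTAL_FLUX / N_PHOTONS)
           for _ in range(N_PHOTONS)]

def density_estimate(x, y, k=200):
    """k-nearest-neighbour density estimate, as used in the radiance
    lookup of photon mapping: the flux of the k nearest photons is
    summed and divided by the disc area pi*r^2 that contains them.
    (A real photon map would use a kd-tree rather than a full sort.)"""
    by_dist = sorted(((px - x) ** 2 + (py - y) ** 2, flux)
                     for px, py, flux in photons)
    nearest = by_dist[:k]
    r2 = nearest[-1][0]                 # squared radius of the enclosing disc
    return sum(flux for _, flux in nearest) / (math.pi * r2)

est = density_estimate(0.5, 0.5)
# Uniform photons on a unit square give a flux density of about TOTAL_FLUX / 1.0
```

The estimate is consistent but biased for finite k, which is exactly the bias the text refers to: averaging over a disc of nonzero radius blurs the true radiance.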

3.5 Markov Chain Monte Carlo Algorithms for Global Illumination

All algorithms discussed in the previous section generate each sample independently; thus, important path information is lost during the generation process. In addition, path integrals are highly correlated, so sample reuse is an important technique for reducing rendering variance. In this section we use Markov Chain Monte Carlo algorithms for sample reuse. In the following discussion we use the superscript $t$ to denote the moment when the path is created. In static image rendering, $t$ is fixed to a constant, $T$. In animation rendering, $t$ can be any moment of time in the entire animation; this notation allows us to mutate in time to exploit temporal coherence among light transport paths. We explain the time mutation in Ch. 6.

3.5.1 Formulation for Markov Chain Monte Carlo Algorithms

Eq. 3.23 splits the contribution function $f_j^k$ into two components: the measurement function of the sensor located at pixel $j$ of frame $k$, $W_j^k$, and the radiance carried by the path, $L$. The radiance is independent of $W_j^k$ and thus can be reused for all pixels. Therefore,

$$b = \int_{\Omega} L(\bar{x}_t)\, d\mu(\bar{x}_t)$$

expresses the total radiant energy delivered through the image sweep, as shown in Figure 3.2. The idea of Markov Chain Monte Carlo algorithms is to generate a sequence of paths, $\bar{X}^0_{T_0}, \cdots, \bar{X}^N_{T_N}$, according to the contribution of the paths to the image sweep, where each $\bar{X}^z_{T_z}$ is obtained by a random mutation of the path $\bar{X}^{z-1}_{T_{z-1}}$. Through this process paths mutate, and newly mutated paths are accepted or rejected with a carefully chosen probability to ensure that paths are sampled according to their contribution to the ideal image sweep. The mutations can have almost any desired form, and typically involve adding, deleting, or replacing a small number of vertices on the current path. In other words, $p = (1/b)L$ is implicitly used as the density function to generate a sequence of samples. The integral can now be evaluated as

$$I_j^k = E\left[\frac{1}{N} \sum_{i=1}^{N} b\, W_j^k(\bar{X}^i_{T_i})\right]$$

3.5.1.1

Acceptance Probability

The Metropolis algorithm constructs a transition function $Tr$ whose resulting stationary distribution is proportional to the given $L$, and which converges to $L$ as quickly as possible. Time mutation will be discussed in Ch. 6. Given $\bar{X}^{z-1}_{T_{z-1}}$, we obtain $\bar{X}^z_{T_z}$ as follows. First, we choose a tentative sample $\bar{X}'^{z}_{T_z}$, which can be done in almost any way desired. Generally the process can be represented by the tentative transition function $T(\cdot|\cdot)$, where $T(\bar{Y}_{T_Y}|\bar{X}_{T_X})$ gives the probability density that $\bar{X}'^{z}_{T_z} = \bar{Y}_{T_Y}$ given that $\bar{X}^{z-1}_{T_{z-1}} = \bar{X}_{T_X}$. The tentative sample is then either accepted or rejected according to an acceptance probability $A(\bar{Y}_{T_Y}|\bar{X}_{T_X})$. To see how to set $A(\bar{Y}_{T_Y}|\bar{X}_{T_X})$, suppose that we have already reached equilibrium, i.e. $p_{i-1}$ is proportional to $L$. We must define the transition function $Tr(\bar{Y}_{T_Y}|\bar{X}_{T_X})$ such that the equilibrium is maintained. To do this, consider that the transition density is proportional to $L(\bar{X}_{T_X})\, T(\bar{Y}_{T_Y}|\bar{X}_{T_X})\, A(\bar{Y}_{T_Y}|\bar{X}_{T_X})$, and a similar statement holds for the transition density from $\bar{Y}_{T_Y}$ to $\bar{X}_{T_X}$. To maintain equilibrium, it is sufficient that these densities be equal:

$$L(\bar{X}_{T_X})\, T(\bar{Y}_{T_Y}|\bar{X}_{T_X})\, A(\bar{Y}_{T_Y}|\bar{X}_{T_X}) = L(\bar{Y}_{T_Y})\, T(\bar{X}_{T_X}|\bar{Y}_{T_Y})\, A(\bar{X}_{T_X}|\bar{Y}_{T_Y}) \qquad [3.25]$$

This condition is known as detailed balance. Thus the unique equilibrium distribution must be proportional to $L$. In order to reach equilibrium as quickly as possible, the best strategy is to make $A(\bar{Y}_{T_Y}|\bar{X}_{T_X})$ and $A(\bar{X}_{T_X}|\bar{Y}_{T_Y})$ as large as possible, which is achieved by letting

$$A(\bar{Y}_{T_Y}|\bar{X}_{T_X}) = \min\left(1,\ \frac{L(\bar{Y}_{T_Y})\, T(\bar{X}_{T_X}|\bar{Y}_{T_Y})}{L(\bar{X}_{T_X})\, T(\bar{Y}_{T_Y}|\bar{X}_{T_X})}\right) \qquad [3.26]$$
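A small sketch of a Metropolis sampler using the acceptance probability of Equation 3.26, with a symmetric tentative transition so the $T$ terms cancel. The 1-D unnormalized target $L$ is a hypothetical stand-in for the path "brightness":

```python
import random

def L(x):
    """Unnormalized target (stand-in for a path's contribution);
    only ratios of L are ever needed, exactly as in Eq. 3.26."""
    return x

def metropolis(n_steps):
    """Metropolis sampling of the density proportional to L on (0, 1)."""
    random.seed(5)
    x = 0.5
    samples = []
    for _ in range(n_steps):
        y = random.random()           # tentative sample; T is symmetric here
        a = min(1.0, L(y) / L(x))     # acceptance probability (Eq. 3.26)
        if random.random() < a:
            x = y                     # accept the mutation
        samples.append(x)             # a rejected mutation repeats x
    return samples

samples = metropolis(200_000)
frac = sum(1 for s in samples if s > 0.5) / len(samples)
# Stationary density is proportional to x, so P(x > 0.5) = 0.375 / 0.5 = 0.75
```

Note that a rejected mutation still contributes a sample (the current state is repeated); dropping rejections would bias the chain away from the stationary distribution.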

3.5.1.2

Hybrid Algorithms Combining Independent Monte Carlo Sampling and Correlated Markov Chain Monte Carlo Sampling

Veach [149] proposed that using the importance energy, $f/p$, of the seed path to weight each mutation of a Markov Chain can remove start-up bias if detailed balance is maintained at each mutation. Therefore, a hybrid algorithm, energy redistribution with balanced energy transfer, is proposed to combine the independent-sampling path tracing algorithm with the correlated-sampling Metropolis algorithm. The idea is to generate a set of independent samples by path tracing as seed paths. Each seed path is then used to correlatedly explore a local path space, $\Omega_{\bar{x}_t}$, around the seed path with Metropolis sampling. $\Omega_{\bar{x}_t}$ is a subspace of $\Omega$ defined as the domain of all paths that can be reached through a sequence of perturbations from the seed path. This definition guarantees that the local exploration can reach and stay at the local stationary probability, $f(\bar{x}_t)/b_{\bar{x}_t}$, where $b_{\bar{x}_t} = \int_{\Omega_{\bar{x}_t}} f(\bar{x}_t)\, d\mu(\bar{x}_t) \le b$. The estimate of the pixel intensity can be calculated as

$$\hat{I}_j^k = \frac{1}{N_{MC}} \sum_{z=1}^{N_{MC}} \frac{f(\bar{X}^z_{T_z})}{p(\bar{X}^z_{T_z})} \frac{1}{N_{MCMC}} \sum_{l=1}^{N_{MCMC}} W_j^k(\bar{X}^{zl}_{T_{zl}}) \qquad [3.27]$$

The expectation of the estimate can be calculated as

$$E(\hat{I}_j^k) = \int_{\Omega} \frac{f(\bar{x}_t)}{p(\bar{x}_t)}\, E\left[\frac{1}{N_{MCMC}} \sum_{l=1}^{N_{MCMC}} W_j^k(\bar{X}^l_{T_l}) \,\Big|\, \bar{x}_t\right] p(\bar{x}_t)\, d\mu(\bar{x}_t) = \int_{\Omega} f(\bar{x}_t) \int_{\Omega_{\bar{x}_t}} W_j^k(\bar{y}_t)\, \frac{f(\bar{y}_t)}{b_{\bar{x}_t}}\, d\mu(\bar{y}_t)\, d\mu(\bar{x}_t) \qquad [3.28]$$

where $E(\cdot|\bar{x}_t)$ is the conditional expectation given an independent seed path $\bar{x}_t$.

Before solving Equation 3.28, we observe that if two paths $\bar{X}^1_{T_1}$ and $\bar{X}^2_{T_2}$ are mutually reachable by a sequence of perturbations, the local path domain based on $\bar{X}^1_{T_1}$ is equal to the local path domain based on $\bar{X}^2_{T_2}$, i.e. $\Omega_{\bar{X}^1_{T_1}} = \Omega_{\bar{X}^2_{T_2}}$. This is true because any path $\bar{x}_t \in \Omega_{\bar{X}^1_{T_1}}$ can be reached from $\bar{X}^2_{T_2}$ by mutating $\bar{X}^2_{T_2}$ to $\bar{X}^1_{T_1}$ through a sequence of perturbations and then continuing to $\bar{x}_t$; in other words, $\bar{x}_t$ is also in $\Omega_{\bar{X}^2_{T_2}}$. Similarly, any path $\bar{x}_t \in \Omega_{\bar{X}^2_{T_2}}$ is also in $\Omega_{\bar{X}^1_{T_1}}$. Therefore, we can define a maximal perturbation path domain, $\Omega_p$, as the set of all paths that are reachable by a sequence of perturbations from any member of the domain, and by the discussion above we get $\Omega_p = \Omega_{\bar{X}^1_{T_1}} = \Omega_{\bar{X}^2_{T_2}}$. The entire path domain, $\Omega$, can then be partitioned as $\Omega_{p,1} \cup \Omega_{p,2} \cup \cdots$, where each $\Omega_{p,m}$ is a maximal perturbation path domain and $\Omega_{p,m} \cap \Omega_{p,n} = \emptyset$ for $m \neq n$. The expectation in Equation 3.28 can now be written as

$$E(\hat{I}_j^k) = \sum_i \int_{\Omega_{p,i}} \frac{f(\bar{x}_t)}{b_{p,i}}\, d\mu(\bar{x}_t) \int_{\Omega_{p,i}} W_j^k(\bar{y}_t)\, f(\bar{y}_t)\, d\mu(\bar{y}_t) = \sum_i \int_{\Omega_{p,i}} W_j^k(\bar{y}_t)\, f(\bar{y}_t)\, d\mu(\bar{y}_t) = \int_{\Omega} W_j^k(\bar{y}_t)\, f(\bar{y}_t)\, d\mu(\bar{y}_t)$$

where $b_{p,i} = \int_{\Omega_{p,i}} f(\bar{x}_t)\, d\mu(\bar{x}_t)$. Therefore, the estimator is unbiased. The variance analysis in [9] allows us to bound the variance of the Markov Chain of a seed path as

$$V_{MCMC}(\bar{X}_i) \approx \frac{1}{N_{MCMC}}\, b_{\bar{X}_i}\, N_{pixel}\, \tilde{E}_j^k \left(1 + \frac{2R(1)}{q(1-q) - R(1)}\right)$$

where $N_{pixel}$ is the number of pixels, $\tilde{E}_j^k$ is the average importance energy of the paths associated with the pixel, $q$ is the probability that a path contributes to the pixel under consideration, and $R(1)$ is the correlation between the random variables indicating that two subsequent paths go through the same pixel. Each seed path is generated independently, and thus the variance of the entire hybrid estimator equals the sum of the variances of the individual Markov Chains:

$$V(\hat{I}_j^k) = \frac{1}{N_{MC}^2} \times N_{MC} \times V_{MCMC}(\bar{X}^i_{T_i}) \approx \frac{1}{N_{MC}} \times \frac{1}{N_{MCMC}}\, b\, N_{pixel}\, \tilde{E}_j^k \left(1 + \frac{2R(1)}{q(1-q) - R(1)}\right)$$

Thus the variance of the entire algorithm is $O(1/N_{MC})$, and the algorithm converges to the correct answer as the number of independent samples goes to infinity. Veach [149] also proposed that we can use an equal weighting scheme if we resample the paths according to their importance energy. Thus, we can use a similar deterministic sampling strategy

in [40] to generate a total of $\sum_i E_i(\bar{X}^i_{T_i})/\bar{E}$ paths, where $E_i(\bar{X}^i_{T_i}) = f(\bar{X}^i_{T_i})/p(\bar{X}^i_{T_i})$ and $\bar{E}$ is the average path energy, equal to $\sum_i E_i(\bar{X}^i_{T_i})/N$ with $N$ the number of paths. In other words, each seed path spawns a number of Markov Chains proportional to its energy. The equal weighting scheme now becomes energy redistribution with equal deposition, and the estimator can be calculated as

$$\hat{I}_j^k = \frac{\bar{E}}{N_{MC}} \sum_{z=1}^{N_{MC}} \sum_{m=1}^{N_{path}} \frac{1}{N_{MCMC}} \sum_{l=1}^{N_{MCMC}} W_j^k(\bar{X}^{zml}_{T_{zml}}) \qquad [3.29]$$

We can observe that $E(\bar{E}\, N_{path}) = E(\bar{X}^i_{T_i}) = f(\bar{X}^i_{T_i})/p(\bar{X}^i_{T_i})$, where $N_{path} = N_{floor} + \mathbb{1}\left[E(\bar{X}^i_{T_i}) - (N_{floor} + U(0,1)) \times \bar{E} > 0\right]$ and $N_{floor} = \lfloor E_i/\bar{E} \rfloor$; this is the importance energy of the independent sampling algorithm for the path. The expectation and convergence analysis is similar to that discussed in the previous paragraph. Two criteria are needed to maintain the unbiasedness of energy redistribution with equal deposition: first, detailed balance must be maintained throughout the mutation process so that the chain reaches and stays in the stationary probability; second, the number of Markov Chains must be proportional to the energy carried by the path to keep the independent sampling algorithm unbiased. Later, we will show that although our kernel function for each path adapts from iteration to iteration, it is fixed inside a single Markov Chain. In addition, the acceptance probability is chosen to maintain detailed balance at each perturbation, and thus the stationary probability requirement is achieved, which implies convergence and unbiasedness. Furthermore, our algorithm deposits the remaining energy of an eliminated path back into the image sweep by the equal deposition method before the path is removed from the population in the resampling process or terminated at the end of a task. This achieves the second criterion of unbiasedness.
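The stochastic rounding that assigns each seed path a number of Markov Chains proportional to its energy can be sketched as follows (function and variable names are illustrative):

```python
import math
import random

def num_chains(energy, avg_energy):
    """Stochastically round energy/avg_energy so that
    E[num_chains] * avg_energy == energy, which is the second
    criterion for keeping equal deposition unbiased (cf. Eq. 3.29)."""
    ratio = energy / avg_energy
    n_floor = math.floor(ratio)
    # spawn one extra chain with probability equal to the fractional part
    return n_floor + (1 if random.random() < ratio - n_floor else 0)

random.seed(2)
trials = 200_000
mean = sum(num_chains(2.7, 1.0) for _ in range(trials)) / trials
# E[num_chains] should equal the energy ratio, 2.7
```

Deterministic rounding would systematically over- or under-count chains for paths whose energy is not an integer multiple of the average, breaking the proportionality requirement; the Bernoulli term removes that bias.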

3.5.2 Markov Chain Monte Carlo Global Illumination Methods

The previous section discussed the fundamental theory of Markov Chain Monte Carlo algorithms. In this section we review two important algorithms related to this dissertation: Metropolis Light Transport (MLT) and Energy Redistribution Path Tracing (ERPT).


3.5.2.1

Metropolis Light Transport

Metropolis Light Transport (MLT) is a rendering algorithm based on the Metropolis sampling framework [149]. Metropolis sampling samples paths according to their contribution to the image plane by means of Markov Chain Monte Carlo random walks. A sequence of dependent paths is generated through a well-defined transition function so that the stationary distribution of the chain is proportional to $L$. The method only requires that $L$ be known up to a constant scale and be evaluable at each point in the domain; in other words, no analytical form for $L$ is necessary. The mutation strategies proposed in the algorithm achieve ergodicity and local exploration of the path space and generate good results. The key advantage of MLT is that it explores the path space coherently: once an important path is found, the path space around it is explored. This is especially valuable for difficult but important paths, and as a result MLT is very efficient on traditionally difficult scenes, such as light passing through a slightly open door. Another advantage of MLT is that the Metropolis sampling framework ensures its unbiasedness. MLT is also competitive with previous unbiased algorithms on relatively simple scenes.

3.5.2.2

Energy Redistribution Path Tracing

Energy Redistribution Path Tracing (ERPT), introduced by Cline et al. [25], is a hybrid global illumination algorithm that combines the ideas of Metropolis light transport and path tracing. ERPT can be understood as a multiple-seed-path MLT algorithm: it starts with a set of initial paths passing through stratified pixel positions on the image plane. The initial paths are then used as seed paths to start the Markov Chains. As in MLT, a tentative path is generated by mutating the original seed path and accepted with a probability that maintains detailed balance for the chain. The path mutation strategies locally explore correlated paths in the path space to reduce rendering variance, and the set of initial paths generated by the standard path tracing algorithm guarantees ergodicity. However, the algorithm uses post-processing noise filters to reduce image noise, which introduces bias.


3.6 Population Monte Carlo Rendering

Fan in his dissertation [40] proposed three physically based rendering algorithms built on the Population Monte Carlo framework: Population Monte Carlo Image Plane (PMC-IP), Population Monte Carlo Hemispherical Integrals (PMC-HI), and Population Monte Carlo Path Tracing (PMC-PT). PMC-IP uses the variance of sample path radiances in each pixel to adapt the pixel-sample distribution function on the image plane. The algorithm distributes more samples to regions with high perceptual variance in order to reduce their variance, at the cost of increased noise in smooth regions. However, a small increase in noise in the smooth regions can improve the perception of the entire image by reducing the variance in the regions of high perceptual variance. PMC-HI uses the variance of direct lighting samples at the estimated surface point to adjust the sampling methods. PMC-IP and PMC-HI cooperate easily with ray tracing algorithms such as path tracing and photon mapping. However, PMC-HI cannot be used in Markov Chain Monte Carlo algorithms because MCMC algorithms directly use a light vertex instead of an estimate of the direct lighting. Although PMC-IP inspired us to concentrate more computational effort on the visually important areas of the image plane, directly applying this uneven exploration of the image plane to energy redistribution introduces bias into the final result. Therefore, we design a new variance-regeneration method with a proper weighting scheme based on the PMC-IP framework in order to prevent bias. In addition, the adaptation in both algorithms is still performed frame by frame. In its initial phase, PMC-PT traces a set of paths, e.g. 3 samples per pixel (SPP), with a general path tracing algorithm and then randomly chooses part of the paths as population paths. In each iteration, PMC-PT perturbs the population paths with lens perturbation to explore the spatial coherence among paths.
The radiance of a newly perturbed path is computed by $f(\bar{y}_s)/K(\bar{y}_s|\bar{y}_{s-1})$ and labeled with the chosen perturbation radius. In the resampling phase, the accumulation of the labeled radiances is used to adapt the kernel function and eliminate part of the population. From our observations, this design has several limitations. First, PMC-PT uses only part of the path-tracing samples. This wastes a large amount of computational effort because the cost to trace

a path is very high. In our algorithm we distribute the energy of each generated path to the surrounding pixels, avoiding the cost of extra path tracing. Second, the resampling criterion used in PMC-PT normally concentrates too much computational effort on exploring high-contribution paths. Our PMC-ER algorithms adapt the PMC framework to the energy redistribution algorithm and use the energy of a path to determine the lifetime of the path in the population. In addition, the acceptance probability is used as the criterion for adapting the mixture probability, because the acceptance probability is a better indication of how well the chosen perturbation performed. Third, PMC-PT uses a single mixture function for the entire population. Although adaptation over the entire population can adjust the perturbation parameters to the properties of the scene, the differences among paths are large even within the same scene. Thus, each population path in our algorithm uses an individual kernel function to account for the differences in path properties. Finally, PMC-PT only uses lens perturbation to explore correlated paths. However, as we will discuss in Ch. 5, the failure rate of lens perturbation is one hundred percent when applied to a caustics path. Thus, PMC-PT behaves like a path tracing algorithm when tracing a caustics path, which introduces larger bright spots on the image plane and hurts the performance of the algorithm. We improve the algorithm by developing a new lens perturbation method that enhances rendering efficiency.
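The resampling step that eliminates part of the population according to accumulated weight can be sketched generically. This is a textbook importance-resampling routine, not the exact criterion used by PMC-PT or our PMC-ER algorithms:

```python
import random

def resample(population, weights, n_out):
    """Importance resampling as used in Population Monte Carlo:
    members are drawn with probability proportional to their weight
    (here standing in for energy or accumulated radiance), so strong
    members are duplicated and weak ones tend to be eliminated."""
    total = sum(weights)
    # build the cumulative distribution for inversion sampling
    cdf, acc = [], 0.0
    for w in weights:
        acc += w
        cdf.append(acc / total)
    out = []
    for _ in range(n_out):
        u = random.random()
        i = next(i for i, c in enumerate(cdf) if u <= c)
        out.append(population[i])
    return out

random.seed(4)
pop = ['a', 'b', 'c']                      # hypothetical population members
new_pop = resample(pop, [8.0, 1.0, 1.0], 10_000)
frac_a = new_pop.count('a') / len(new_pop)
# 'a' carries 80% of the total weight, so it should fill about 80% of the output
```

The criticism in the text, that resampling on contribution concentrates effort on high-contribution paths, is visible here: member `'a'` crowds out the others in proportion to its weight.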

3.7 Global Illumination Animation Rendering

The goal of this dissertation is to enhance the efficiency of animation rendering. Thus, in this section, we review animation rendering algorithms from the global illumination community. Damez et al. [27] presented a complete survey of research on animation rendering in the global illumination community; interested readers can refer to it for further information.


3.7.1 Hybrid Methods of Image-Based Rendering and Global Illumination

An approach proposed by Nimeroff et al. [107] combines image-based approaches with radiosity methods to render a global illumination walk-through. A set of precomputed images is used to represent the 3D space of the environment, constructed in such a way that any viewpoint inside the environment can be generated by interpolating these images. Each precomputed image represents the direct and indirect illumination computed using standard wavelet radiosity. The algorithm is efficient, but finding an optimal set of images to represent the animated scene is an open problem; moreover, everything inside the scene except the viewpoint must be static and the materials must be diffuse. Myszkowski et al. [104] used a hybrid algorithm combining ray tracing and image-based rendering (IBR). The algorithm derives as many pixels as possible using inexpensive IBR techniques without affecting the quality of the walkthrough animation. They also proposed a perception-based spatiotemporal animation quality metric designed specifically for synthetic animation sequences. Yee et al. [172] demonstrated that greater errors can be tolerated in less salient image regions, in which the density of indirect lighting samples can be substantially reduced. They proposed computing a saliency map that controls the irradiance caching for secondary lighting. The global illumination solution is computed for a set of key frames, and interpolation is then used to compute the in-between frames. Such an approach can result in visually noticeable errors in the lighting distribution, which is affected by changes in the environment over the course of the animation. [172, 104] face the same problem: the accuracy of the lighting reconstruction fluctuates between frames, achieving its highest level for the key frames and gradually decreasing for the in-between frames.


3.7.2 Radiosity Methods

The radiosity community has conducted a significant amount of research on animation rendering. Early research on efficiently computing high-quality animations with radiosity focused on adapting the full-matrix and progressive radiosity algorithms to take advantage of temporal coherence. Baum et al. [11] identified, in a preprocess, the form factors between patches that do not interact with moving objects. Modified form factors are incrementally computed using a reprojection technique that projects the bounding boxes of the moving objects onto hemicubes to determine which form factors need to be updated. Later, [108, 37, 38] proposed mechanisms for predicting changes in form factors based on geometric data structures called the visibility complex and the visibility skeleton. [115] proposed to follow the principle of cel animation and compute every frame as the sum of a fixed image and a frame-dependent one. However, the huge memory requirements and the lack of scalability prevented further development of these methods. Two major groups of methods exist: progressive radiosity uses an incremental computation of the radiosity solution based on the previous frame and relies on shooting corrective energy into the scene regions affected by the environment changes [23, 47, 100, 99], while hierarchical radiosity updates the hierarchy of links between the static objects affected by moving occluders using different clustering methods [44, 132, 36, 126, 92, 28]. These algorithms introduce mechanisms for controlling the frame rate and efficiently identifying which parts of the scene are modified. However, the hierarchical radiosity framework poorly supports light transfer between glossy and specular surfaces. This drawback can be eliminated by adding photon tracing atop the line-space hierarchy used for diffuse surfaces; the photons traversing changed scene regions are identified using a dynamic spatial data structure and are selectively updated.
Still, the memory requirements of all these hierarchical approaches are extremely high due to the storage of the link structure. In Global Monte Carlo Radiosity [14, 15], the temporal coherence of costly visibility computations is efficiently and conservatively exploited by using the same global lines for the whole animation, but the radiosity solution is then computed independently for each frame. Global lines are tested for intersection once against all static objects, and against every frame position of the

dynamic objects. Visibility lists are built for each global line. Static and dynamic lists are then merged and sorted for every frame and used to perform the light transfer. Only the visibility is reused, not the radiosity. In addition, the limitation to nearly diffuse materials restricts the usefulness of these methods. Recently, a two-pass radiosity method was proposed by Martin et al. [93]. Their algorithm takes advantage of a priori knowledge of the dynamic properties of the scene to exploit temporal coherence. It first runs a global pass to identify the "bad" and "good" links; the second pass then updates the "bad" links according to the movement of objects. However, as mentioned previously, radiosity techniques are limited in their ability to handle general scattering models and general geometric representations.

3.7.3 Virtual Light Sources

[82] implemented a hybrid method consisting of two steps: first, the CPU traces photons shot from the light sources into the scene to construct a set of virtual light sources; second, a multiple-pass hardware rendering technique renders each virtual light source. In a dynamic environment, virtual light sources are eliminated according to aging criteria and new ones are added to the pool to exploit temporal coherence. [156, 160] added the ability to handle more general lighting effects to the instant radiosity algorithm. In addition, they implemented the algorithm on a computer cluster to achieve interactive rates, but each frame is re-rendered independently and temporal coherence is not exploited. [158] extended the architecture of [156, 160] to handle the many-light-source problem by creating a cumulative distribution function in a preprocessing phase and using the CDF to sample important light sources. A temporally averaged CDF is used to keep the virtual light sources generated from the same light sources as temporally consistent as possible. However, the number of VPLs from each light source may change from frame to frame, so temporal artifacts still exist. [128, 129] used bidirectional path tracing and Metropolis light transport to distribute the virtual light sources in the scene so that they appear in visually important areas. However, these two algorithms focus on single-frame rendering and do not consider temporal coherence. [87] employed a

two-pass instant radiosity method to reduce temporal artifacts. In the first pass, a set of one-bounce virtual light sources and their corresponding shadow maps are created. In the second pass, hardware local lighting is calculated at each surface point visible from the camera, using the depth information in the shadow maps. For the following frame, some of the light sources are removed, and new light sources and shadow maps are added to the structure. The retained light sources enhance the temporal coherence of the rendering results. However, the invalidity of the retained light sources introduces bias, and the algorithm is limited to scenes with diffuse or slightly glossy surfaces. [166, 163, 65] addressed the many-light-sources problem by clustering the contributions of light sources into a set of representative point light sources and computing the surface appearance from the representatives in a static scene. In a preprocess, a set of virtual light sources is created accordingly. These algorithms are designed for static rendering, and thus serious temporal flickering occurs. [66] reduced the temporal flickering by using a tensor clustering method that clusters the point light sources in the spatial and temporal domains at the same time. However, all these light-clustering methods are approximations: they approximate the true solution and can provide fast feedback to the user in an interactive mode, but physically accurate results must still be computed by global illumination algorithms when needed.
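The methods above share a common gathering step: the indirect illumination at a shading point is approximated as a sum over VPL contributions weighted by the geometric term and visibility. The following is a minimal sketch of that step for a diffuse receiver; the function and tuple layout are illustrative assumptions, not code from any cited paper.

```python
import math

def vpl_contribution(p, n, vpls, albedo_over_pi, visible):
    """Diffuse radiance at surface point p (normal n) gathered from
    virtual point lights, instant-radiosity style. Each VPL is a
    (position, normal, flux) triple; `visible` is a shadow query."""
    L = 0.0
    for (q, nq, flux) in vpls:
        d = [q[i] - p[i] for i in range(3)]
        dist2 = sum(x * x for x in d)
        if dist2 == 0.0 or not visible(p, q):
            continue
        inv = 1.0 / math.sqrt(dist2)
        w = [x * inv for x in d]                       # direction p -> q
        cos_p = max(0.0, sum(n[i] * w[i] for i in range(3)))
        cos_q = max(0.0, -sum(nq[i] * w[i] for i in range(3)))
        # geometric term G(p, q) = cos_p * cos_q / dist^2
        L += albedo_over_pi * flux * cos_p * cos_q / dist2
    return L
```

Clamping the cosines to zero discards back-facing transfers; real implementations also clamp `1/dist2` to bound the well-known VPL singularity near the light.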

3.7.4 Photon Shooting

Two-pass methods, photon shooting and image rendering, provide two different places to take advantage of the temporal coherence among samples. Myszkowski et al. [105] implemented an algorithm based on caching photons from sampled paths that are subsequently invalidated after a certain period of time. The lifetime of a photon is derived from perceptual considerations using a video and animation quality metric (AQM). The density estimation particle tracing algorithm is used to balance a trade-off: maximizing the number of photons collected in the temporal domain to reduce stochastic noise, while minimizing the time interval over which the photons were traced to avoid collecting invalid photons. Effectively, the algorithm blurs each photon in the temporal domain in order to reduce stochastic noise, but it also introduces halo artifacts as a light moves, and shadows linger after an object's passing. The fundamental problems

are that the lifetime of a photon does not depend on the changes in the scene and that the contribution of each photon is not weighted properly across frames. Dmitriev et al. [32] addressed the invalidity issue with a selective photon tracing algorithm that identifies and updates invalid photons. The invalid photons are updated by using the periodicity of the Halton random number sequence to regenerate the sample path for each invalid photon in the current frame. The algorithm only reduces the use of invalid photons; it does not actually remove all of them. In addition, the reweighting issue is not addressed. Weber et al. [170] used space-time bilateral photon filtering to reduce the use of invalid photons through an adaptive choice of filter parameters. However, it does not guarantee conservative results either. Cammarano et al. [19] addressed motion blur in global illumination with a spatio-temporal photon mapping algorithm. In the first pass, a 4D photon map is created by tracing photons in space and time. The quantity propagated is radiant energy instead of radiant flux. Then, the relationship between radiance and radiant energy is used to estimate the radiance at each scene point; the estimated radiance depends on the collection area and the collection time interval. Over a short span of rendering time, the validity of photons is not in question, so the method works well for motion blur rendering. However, when generalized to animation rendering, the validity of photons becomes an issue, and they do not provide a proper argument for finding the correct temporal photons. Lim [54] separated the global diffuse and specular lighting and used different methods to compute each effect: hierarchical radiosity for the global diffuse effect, and photon tracing to construct caustics textures accounting for the specular effect.
They also provided an algorithm that limits photon updates to locally modified regions, using a dynamically modified octree to compactly store the information of particles transiting the scene. The largest issue in photon shooting is that the invalidity of copied values introduces bias into the final result, and it is hard to identify which copied photons are invalid.


3.7.5 Irradiance Cache

In order to generate high-quality images, a final gathering process needs to be applied in the image-rendering pass. Tawara et al. [146] separated the irradiance caches into static caches, which store the irradiance computed for all static objects with the dynamic objects removed, and dynamic caches, which include negative energy and are computed independently for each frame. The static irradiance caches provide temporally consistent illumination. However, the independent computation of the dynamically cached values may introduce serious temporal artifacts, so a quasi-Monte Carlo method is used to trace the dynamic photons; the validity of cached values remains an issue in the algorithm. Recently, Tawara et al. [145] extended irradiance caching to the temporal domain by reusing the cached irradiance samples across frames. The photon maps used for rendering each frame are regenerated, because the cost of generation is relatively low compared to the final gathering process and regeneration removes the invalid photon problem of the photon shooting process. In every subsequent frame, the value of an irradiance cache is updated by choosing a number of directions in which to re-evaluate the irradiance at that cache. The number of directions is randomly chosen according to a distribution constructed from the age of the cache record. The major issue is still that the updating of irradiance caches is not conservative; in other words, quite a few invalid irradiance caches may still be used to compute the irradiance in the image rendering pass. In addition, the criterion for choosing which cache to update is heuristic. Smyk et al. [135] introduced "anchors" that tie cache locations to strata from which the irradiance is collected. In each frame, the visibility change of each stratum at each anchor point is checked in order to update the irradiance at that anchor point as necessary.
This updating of the irradiance cache is conservative, so it can overcome the invalid-cache problem. New anchor points are added if a region becomes too sparse, and old anchor points may be removed if not enough valid strata exist. Temporal exploitation of irradiance caches is only helpful for diffuse and moderately glossy surfaces; the algorithm does not handle temporal coherence among caustics paths, even though the temporal artifacts caused by caustics paths in different frames are normally more obvious. It also only reuses cached values on static objects, not on dynamic objects, which are perceptually more important. Temporal artifacts may be

more serious. Tawara et al. [144] provided a summary of the temporal-coherence algorithms proposed by their group, including [105, 170, 32, 146, 145, 62]. They analyzed the pros and cons of each algorithm and gave suggestions about the proper applications for each. In addition, these photon reuse algorithms separate direct and indirect lighting; temporal coherence in direct lighting is not considered.
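The age-driven refresh used by the temporal irradiance caching methods above can be caricatured as follows. The linear ramp is an illustrative stand-in for the actual age distribution those papers construct; the function name and constants are hypothetical.

```python
import random

def directions_to_update(age, n_total=64, rng=random.random):
    """Decide how many sampling directions to re-evaluate for one
    irradiance cache record, favoring older records, in the spirit of
    Tawara et al.'s temporal irradiance caching. Each of the n_total
    hemisphere directions is refreshed with a probability that grows
    with the record's age (a linear ramp, capped at 1)."""
    p = min(1.0, 0.1 * age)     # refresh probability per direction
    return sum(1 for _ in range(n_total) if rng() < p)
```

A fresh record (age 0) is never refreshed; a record ten or more frames old is fully re-evaluated, which is exactly the non-conservative behavior criticized in the text: between those extremes, stale directions may survive.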

3.7.6 Parallel Computation and Accelerate Ray Tracing

A 3D animation scene can be treated as a static one in 4D (space + time), as proposed by Glassner [51] and Gröller et al. [118]. Chapman [22] proposed an object-space intersection algorithm that transforms a ray through the entire animation into object space and performs the intersections at once. These algorithms can enhance the speed of ray tracing. However, the accelerated intersections are independent of each other, and thus these algorithms do not attempt to solve the problem of temporal artifacts. With faster hardware and algorithmic improvements, real-time ray tracing is finally within reach. At a broad level, most of the work on real-time ray tracing can be classified into three main categories: improved techniques for computing spatial data structures, exploiting ray coherence, and parallel algorithms on shared-memory or distributed-memory systems. The first two categories are closely related, so we review the improvements in those areas in the following paragraphs and then describe advances in parallel ray tracing. Since ray tracing is the critical component of global illumination, we can clearly speed up animation rendering by improving the efficiency of ray tracing. There has been a great deal of work on acceleration data structures for ray tracing, such as bounding volume hierarchies (BVHs), grids, octrees [50], and binary space partitioning (see e.g. [52, 60]) for rendering static scenes. Gaede and Günther [45] provided an overview of many spatial subdivisions, along with the requirements of various applications. Recent work in interactive ray tracing, however, has focused on kd-trees [157, 43, 121, 141] and grids [116, 162] or multilevel grids [110, 120]. When rendering an animation, we need to consider at least two costs: the cost of updating changing scene

objects in the acceleration structure, and the cost of traversing rays through the acceleration structure. Instead of tracing one ray at a time, packet tracing creates groups of spatially coherent rays that are traced together through the acceleration structure, with all rays performing each traversal iteration in lock-step. [21, 121, 141] proposed coherent ray tracing in kd-trees, [162, 55, 5] proposed coherent packet ray tracing in grid-based acceleration structures, and [89, 159] proposed coherent packet ray tracing in BVH-based acceleration structures. However, traversal is not the only cost of interactive ray tracing. Spatial data structures, which are crucial for fast ray tracing, must be rebuilt or updated as the scene changes, and this can become a bottleneck. Thus, [154, 56, 155, 4, 174, 114] focused on rebuilding or updating kd-tree-based acceleration structures for dynamic scenes, [72, 71, 86] focused on rebuilding or updating grid-based acceleration structures for animated scenes, and [90, 159, 139, 56, 89, 173] focused on rebuilding or updating BVH-based acceleration structures for animated scenes. [85] explored the idea of accelerating ray-scene intersection using constrained tetrahedralizations. The papers listed here are representative, not exhaustive; [161] provided an excellent survey of improved techniques for computing acceleration structures and exploiting ray coherence. Observing the functionality of these interactive ray tracing algorithms, we see that they reduce the cost of tracing a ray through coherent tracing or better acceleration structures. They exploit only the spatial coherence among rays, not the temporal coherence, so the temporal artifact problem persists. Another research direction parallelizes ray tracing across multiple processors to achieve interactivity in global illumination.
[102, 103, 110, 111, 134] are examples of parallel ray tracing. They carefully designed the distribution of ray-tracing tasks to the available processors to balance two factors: load balance and synchronization. The rendering speed can be improved massively. [119] provided a survey of parallel rendering with ray tracing, radiosity, and particle tracing. Most of this research does not attempt to exploit the temporal coherence among paths, except that [94] proposed to reuse the lens edge by tracking the primary surface points from the previous frame to start the ray tracing, in order to reduce temporal artifacts; even that algorithm does not exploit the coherence of entire light transport paths. Since the PMC-ER algorithm is highly parallel, it

can easily be adapted to these parallel architectures. In Ch. 6 we demonstrate a Condor-based parallel rendering system.

3.7.7 Reuse Path Information

Many researchers have explored ways to exploit spatio-temporal coherence in the image plane to reduce the computational cost of animation rendering. With a fixed camera, changing materials or moving objects can be accelerated by storing partial or complete ray trees [17, 101, 75, 130, 10, 41]. However, this form of reuse is limited to specular surfaces and has problems generating diffuse surface effects. When the entities allowed to move inside an animated scene are limited, it is easy to reuse light paths to exploit temporal coherence and reduce temporally-correlated noise. This is relatively easy to achieve when the viewpoint is the only entity changing inside an animated scene: since the viewpoint does not change the illumination inside the scene, the global illumination result at each surface can be cached in a structure such as a photon map [74], and the cached values can then be used to render the view-dependent frames. Reprojection has been used in ray tracing to accelerate the generation of animations [6, 76]. These methods save considerable computation by reprojecting primary intersection points from the previous frame to the current frame and only recomputing the rays that are potentially incorrect. Sbert et al. [125] proposed that if the entity allowed to move is limited to light sources,

light paths can be reused from the second point of the path in different frames. The weight for each reconstructed path can be computed using the multiple importance sampling scheme to increase rendering efficiency and reduce temporal artifacts. [124, 123] extend the same concept as [125] to random-shooting radiosity computation. Besides the restriction on moving entities, these algorithms also require the first vertex from the light to have a diffuse material, because a specular surface point rejects the reconnection to a new light position: the relation between incoming and outgoing directions at a specular surface is a delta function. Several methods allow motion in only two entities: the point light sources and the viewpoint. The visibility of the entire scene does not change when the allowable moving entities are point light

sources and the viewpoint. Examining the light transport paths, we notice that only the first link and the last link of a path can possibly change through the entire animation. Thus, we can cache the light paths generated in previous frames and then reweight their contributions to the current frame based on the multiple importance framework. Those algorithms achieved significant improvements in rendering efficiency and reductions in temporal noise. However, the restriction to point light sources and to movement of only the viewpoint and point lights limits the applicability of these methods. Havran et al. [62] reused the sampled paths from bidirectional path tracing to reduce temporal flickering. The algorithm tries to reuse static paths from the previous frame by checking the visibility of each edge and computing the contribution of the path according to the camera movement. For each reused path, the BRDF value at the first hit point of the camera path is recomputed. The reweighting scheme for valid paths introduces bias, and their algorithm only reuses static paths whose first vertex from the camera has a diffuse material. [95] limited the camera to be the only moving entity in the animation. The algorithm relinks the lens edge and then uses an unbiased multiple importance technique to reweight the paths and deposit their contributions on the image. This avoids reconstructing paths from scratch and inherits the temporal coherence among paths. Ghosh et al. [48] applied the framework of Sequential Monte Carlo to the problem of sampling environment map lighting in an animated sequence. Their work first constructs a set of sample paths from the environment map light source to the camera in the first frame. For subsequent frames, they generate a new set of samples by filtering the samples according to the dynamic product distribution.
Their algorithm provides two advantages: new paths are not constructed from scratch, and the temporal coherence between the current and previous frames is considered. The former speeds up the rendering and the latter reduces temporal flickering. However, their work is limited to environment map lighting and is not easy to extend to complete light transport paths.
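The multiple importance weights used by these path-reuse schemes typically take the balance-heuristic form: a path generated by strategy (or frame) k is weighted by its density under k relative to the sum of densities under all strategies that could have produced it. A generic sketch, not any cited paper's exact estimator:

```python
def balance_heuristic_weight(k, pdfs):
    """Balance-heuristic MIS weight for a sample drawn by strategy k.
    `pdfs[j]` is the probability density with which strategy j (e.g.
    frame j's sampler) could have generated this same path."""
    return pdfs[k] / sum(pdfs)
```

Because the weights over all strategies sum to one for every path, the combined estimator remains unbiased no matter how the per-frame densities differ; this is what lets a reused path contribute to several frames without double counting.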


3.7.8 Decouple the Rendering and Displaying Processes

[16] proposed to update newly available pixel information directly on the currently displayed buffer instead of a background buffer, increasing interactivity by saving the buffer-switch time. With a proper update scheme, this frameless rendering display exhibits fluid motion. [165, 164] proposed a mechanism based on point reprojection and progressive, adaptive sampling of the illumination to provide fast feedback for interactive use of a pixel- or ray-based renderer. The point samples are stored in a data structure called the Render Cache. [165] used the age of the samples and other heuristics to identify samples that are likely to be invalid, and used temporal weighting and temporal gradients to evaluate the contribution of irradiance or radiance from different periods of time. To prevent the holes that appear with point-based approaches, [88, 167] proposed a holodeck structure to keep track of the radiance of rays that have already been traced in the scene. During rendering, rays inside the visible area are identified with the holodeck structure and reprojected back to the image plane, and the server schedules the necessary ray updates for each beam. [138] maintained a set of textures representing the difference between two computations. These corrective textures are updated through adaptive and lazy sample tracing; the display process draws the approximate interactive solution augmented by the corrective textures. [133] used a tapestry representation that takes the unit sphere centered at the viewpoint as the logical projection surface instead of a planar projection plane. An icosahedral mesh is created to cover the surface of the sphere. Given a new sample, a point-location algorithm identifies the existing spherical triangle that encloses its projection; during rendering, the spherical triangles are used to reproject the results onto the image plane.
This relieves the hole problem of planar reprojection. [148] proposed reconstruction in object space with a data structure called the Shading Cache. High-quality illumination samples are computed and used to progressively update the shading of patch vertices in a 3D mesh, overcoming the gaps that can occur in point-based methods; the polygonal meshes are rendered through the graphics pipeline. [29] further proposed an adaptive update pattern on the frame buffer to reduce the visual artifacts caused by temporal aliasing.

All these algorithms decouple the global illumination computation from geometric projection and image rendering. As such, they are not actually global illumination algorithms: they do not aim at computing the illumination, but at providing an interactive display and update of global illumination solutions when the underlying algorithm cannot provide the required frame rate. Therefore, they face a similar problem: when the image content changes rapidly, the system needs time to update correctly, i.e., the intermediate images are not correct.

3.7.9 Hardware Rendering

There has been a huge amount of research on hardware rendering in recent years. A few projects, such as [116, 117], focused on implementing global illumination algorithms on graphics hardware for its parallel computation ability. [109] provided a good survey of this field. However, each frame is still rendered independently and the problem of temporal artifacts remains. Other research in this category emphasizes interactivity instead of correctness; approximation and precomputation are necessary to achieve interactivity. In this section a few interesting projects are reviewed, but the list is not exhaustive. First, reflection is an important global illumination effect. Diefenbach [31, 30] and Wojdala et al. [171] proposed multi-pass pipeline rendering to render mirror reflections on planar surfaces, and [69, 79, 18, 80] proposed methods to handle the reflection of curved surfaces using environment mapping. [1, 2, 106, 112, 3, 64, 13] proposed precomputing and storing all possible radiance at all sampled surface points for all possible view and light directions; the precomputed data is then used to render the object under dynamic, local, low-frequency lighting on the fly in the rendering phase. Since the data is precomputed, the rendering result is deterministic and temporal artifacts are reduced significantly. However, the precomputation phase still involves general global illumination algorithms such as photon mapping. Light map and shadow map methods were proposed by [57, 68, 70, 127, 131] to illuminate and shadow textured scenes. The use of rasterization hardware allows interactive display of a diffuse

solution of the scene. However, glossy and specular effects can only be approximated or must be added by a separate ray tracing pass.


Chapter 4 Photorealistic Image Rendering with Population Monte Carlo Energy Redistribution

Generating a physically correct image involves estimating a large number of integrals of path contributions falling on the image plane. It is well known that these integrals have highly correlated integrands, yet a standard Monte Carlo rendering algorithm evaluates them independently. As a result, even when a small but important region of the domain is located during the process, that information is lost to other samples because of the independent sampling. Sample reuse is an important technique for reducing variance by exploiting the correlation between integrals. Markov Chain Monte Carlo algorithms for global illumination, such as Metropolis Light Transport [152] and Energy Redistribution Path Tracing [25], enable sample reuse by mutating existing samples into new ones, but the choice of good mutation strategies is non-trivial and has a major impact on image quality. Population Monte Carlo (PMC) algorithms provide a tool for reusing the information collected in previous iterations. PMC Energy Redistribution, which adapts the PMC framework to the energy redistribution algorithm, exploits information from important samples through reuse with a mutation process whose mutation strategy is adapted on the fly; it is self-tuning to a large extent. The PMC Energy Redistribution algorithm iterates on a population of light transport paths. The population paths are created by tracing view rays through stratified pixel positions on the image plane with a general Monte Carlo ray tracing algorithm such as path tracing or

bidirectional path tracing. In our implementation, we use a general path tracing algorithm. The resampling process eliminates part of the population paths, regenerates new paths to achieve ergodicity, and adapts the kernel functions. The goal of elimination and regeneration is to replace well-explored or low-contribution paths with new paths generated so as to explore the image plane evenly, which is necessary for unbiasedness. Furthermore, any information available from previous iterations can be used to adapt the kernel function of each population path that produces a new population based on the current one. The procedure is then iterated: sample, redistribute, resample, redistribute, resample, and so on. The result is a self-tuning unbiased algorithm that can explore the important paths locally. Since we use the PMC-ER algorithm to generate static images, the notation t or T is omitted for simplicity in this chapter.

4.1 Population Monte Carlo Energy Redistribution

In this section we first present an overview of Population Monte Carlo Energy Redistribution, which concentrates computation on important light transport paths by exploiting the correlated information among paths and automatically adjusts the energy distribution area for each light transport path based on information collected in previous iterations. We then present the implementation details, including the estimation of the average path energy, the kernel function used to perturb the current population paths to generate new paths, and the resampling process, which eliminates and regenerates part of the population paths and adapts the kernel functions according to the success rate of previous perturbations.

4.1.1 PMC-ER Equal Deposition Algorithm

Figure 4.1 shows the PMC-ER Equal Deposition algorithm. In the preprocess phase, the algorithm first generates a pool of stratified pixel positions. This pool is used to generate the initial population paths, and new replacement paths during the resampling process in each iteration, in order to guarantee even exploration of the image plane. Then, the algorithm estimates the average energy contained in the image, Ẽ, and the deposition energy, e_d, for each perturbation.

The details of this estimation are presented in Section 4.1.2. Initial population paths are created by tracing rays through unused pixel positions (x, y) from the stratified pool with the path tracing algorithm. In this work, a path, X̃, refers to a light transport path starting from a light, L, scattering diffusely, D, or specularly, S, inside the scene several times, and ending at the camera, E. The path is denoted as L(S|D)*E. Interested readers can refer to [67, 149] for details. Figures 4.2 and 4.3 show two examples of such paths.

In each inner loop z, we perform N_equal perturbations on each path in the population according to the path's kernel function, K_i^(s)(ỹ^(z) | Ỹ_i^(z-1)), discussed in Section 4.1.3. After each perturbation, the acceptance probability A(Ỹ_i^(z) | Ỹ_i^(z-1)) is used to determine whether the path in the population switches to the newly generated path Ỹ_i^(z) or stays as the original path Ỹ_i^(z-1). Then, e_d energy is deposited on the image plane at the pixel position of the new population path, Ỹ_i^(z).

In the outer loop s, the resampling process, discussed in Section 4.1.4, eliminates well-distributed and low-contribution paths and deposits their remaining energy, regenerates new paths to explore the image plane evenly, and adapts the kernel function of each population path. After finishing the s loop, the algorithm deposits the remaining energy of the population paths onto the image plane before exiting. Note that the remaining energy of eliminated paths and of the population paths at the end of the run is deposited onto the image to guarantee that the number of Markov Chains started from each path is proportional to its initial energy, which maintains unbiasedness.

4.1.2 Energy Estimation

When applying energy redistribution with equal deposition, discussed in Section 3.5.1.2, we need a criterion to decide whether to start a Markov Chain. The total radiant energy passing through the image plane,

b = ∫ L(ỹ) dµ(ỹ),

is chosen as the criterion in our implementation. Since b is an integral of the radiance carried by all paths, we can use Monte Carlo integration to estimate it with the following two equations:


1   generate a pool of stratified pixel positions
2   estimate Ẽ and e_d
3   generate the initial population of paths (s = 0)
4   for s = 1, · · · , N_iteration
5       determine α_i^(s) for each perturbation
6       for i = 1, · · · , N_population
7           if E_i,remain + U(0, 1)Ẽ > Ẽ
8               for z = 1, · · · , N_equal
9                   generate Ỹ_i^(z) ~ K_i^(s)(ỹ^(z) | Ỹ_i^(z-1))
10                  Ỹ_i^(z) = (U(0, 1) < A(Ỹ_i^(z) | Ỹ_i^(z-1))) ? Ỹ_i^(z) : Ỹ_i^(z-1)
11                  deposit e_d energy on Ỹ_i^(z)
12                  E_i,remain -= e_d
13          w_i^(s) = E_i,remain
14      resample the population: elimination and regeneration
15  deposit the remaining energy of all population paths

Figure 4.1: The PMC-ER Equal Deposition iteration loop. U(0, 1) generates a random number uniformly distributed between 0 and 1, and E_i,remain is the energy remaining in population path i after the inner energy redistribution loops.
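The control flow of Figure 4.1 can be sketched in code. This is a skeleton only: the `Chain` class and all callback names are hypothetical, and the bookkeeping assumes each deposition subtracts e_d from the path's remaining energy; the kernel, acceptance probability, image deposition, and resampling step are supplied from outside.

```python
from dataclasses import dataclass
import random

@dataclass
class Chain:
    state: object          # current light transport path
    e_remain: float        # energy left to redistribute

def pmc_er_equal_deposition(pop, E_avg, e_d, n_iter, n_equal,
                            perturb, accept_prob, deposit, resample,
                            rng=random.random):
    """Skeleton of the PMC-ER equal-deposition loop. `perturb`,
    `accept_prob`, `deposit`, and `resample` stand in for the kernel
    K_i, the acceptance probability A, the image-plane energy
    deposition, and the elimination/regeneration step."""
    for _ in range(n_iter):
        for c in pop:
            # start a Markov chain with probability ~ e_remain / E_avg
            if c.e_remain + rng() * E_avg <= E_avg:
                continue
            for _ in range(n_equal):
                cand = perturb(c.state)
                if rng() < accept_prob(cand, c.state):
                    c.state = cand           # accept the mutation
                deposit(c.state, e_d)        # equal deposition
                c.e_remain -= e_d
        pop = resample(pop)                  # elimination + regeneration
    for c in pop:                            # flush remaining energy
        deposit(c.state, c.e_remain)
    return pop
```

The final flush mirrors line 15 of the figure: summing every deposited amount recovers each path's initial energy exactly, which is the conservation property the text invokes for unbiasedness.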

E(Ỹ) = L(Ỹ) / (p_IP(Ỹ) p_path(Ỹ)) = L(Ỹ) A_IP / p_path(Ỹ)        [4.1]

Ẽ = (1/N) Σ_{i=1}^{N} E(Ỹ_i)        [4.2]

where p_IP is the probability of choosing the specific pixel position, p_path is the probability of generating the path by tracing a view ray through that pixel position with path tracing, and E(Ỹ) is the estimated path energy. Once we choose the starting criterion for equal deposition, we can also estimate the deposition energy, e_d, which is:

e_d = Ẽ / N_mutations        [4.3]

where N_mutations = N_MC × N_MCMC is the expected total number of mutations, N_MC is the total number of independent path samples, and N_MCMC = N_equal is the number of perturbations in each Markov Chain. With this value, the PMC-ER Equal Deposition algorithm can directly render the final image from the accumulated energy, without needing to calibrate the total energy of the accumulation image.
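Equations 4.1–4.3 amount to a short Monte Carlo computation. The sketch below assumes each sampled path is summarized as a (radiance, pixel pdf, path pdf) triple; that tuple layout and the function name are illustrative choices.

```python
def estimate_deposition_energy(paths, n_mc, n_mcmc):
    """Estimate the average image energy E~ and the per-perturbation
    deposition energy e_d (Eqs. 4.1-4.3). Each entry of `paths` is a
    (L, p_ip, p_path) triple for one independent path sample."""
    # Eq. 4.1: E(Y) = L(Y) / (p_IP(Y) * p_path(Y))
    energies = [L / (p_ip * p_path) for (L, p_ip, p_path) in paths]
    E_avg = sum(energies) / len(energies)          # Eq. 4.2
    e_d = E_avg / (n_mc * n_mcmc)                  # Eq. 4.3
    return E_avg, e_d
```

With e_d fixed this way, depositing e_d at every mutation of every chain accumulates the correct total image energy in expectation, which is why no final calibration pass is needed.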

4.1.3 The Kernel Function for Each Path

The kernel function for each population path is a conditional kernel, K_i^(s)(ỹ^(z) | Ỹ_i^(z-1)), that generates a sample path Ỹ_i^(z) in iteration z given Ỹ_i^(z-1) in iteration z − 1 (see Figure 4.1). We use a mixture distribution:

K_i^(s)(ỹ_i^(z) | Ỹ_i^(z-1)) = Σ_h α_i,h^(s) T(ỹ^(z) | Ỹ^(z-1) : d_h)        [4.4]
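Sampling from a mixture kernel of this form is a two-stage draw: pick component h with probability α_h, then perturb with that component. A minimal sketch, where the (α_h, perturb_fn) pair layout is an illustrative choice:

```python
import random

def sample_mixture_kernel(path, components, rng=random.random):
    """Draw a tentative path from a mixture kernel of the form of
    Eq. 4.4: select a component (e.g. a lens or caustics perturbation
    with radius d_h) with probability alpha_h, then apply it.
    `components` is a list of (alpha_h, perturb_fn) pairs whose
    alphas sum to 1."""
    u = rng()
    acc = 0.0
    for alpha, perturb in components:
        acc += alpha
        if u < acc:
            return perturb(path)
    return components[-1][1](path)   # guard against round-off
```

Adapting the kernel during resampling then reduces to updating the α_h (and possibly the radii d_h) per population path, without touching the perturbation routines themselves.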

Each component, T(ỹ | Ỹ : d), perturbs an existing path into a new one for local exploration of the path space, according to the perturbation radius, d. Lens and caustics perturbations are two good candidates for this job. The following is a brief description of these two mechanisms: · Lens perturbation: Figure 4.2 shows an example of lens perturbation. Lens perturbation replaces a sub-path y_{n−1} · · · y_k of the form ES*D(L|D). The perturbation takes the existing path and moves the image point through which it passes. In our case, the new pixel location is uniformly sampled within a disk of radius d, a parameter of the kernel component. The remainder of the path is reconstructed to pass through the new image point, extending the sub-path through additional specular bounces to the same length as the original path. The tentative transition probability for lens perturbation can be computed as

T_{d,lens}(Ỹ′ | Ỹ) = (G(y′_{n−1}, y′_{n−2}) / A_d) Π_{j=n−2}^{n−k−2} (G(y′_j, y′_{j+1}) / |cos θ_{j′,in}|)


Figure 4.2: This is a path with the form LDDSSE, used to demonstrate lens perturbation. We would like to replace the lens sub-path y5 y4 y3 y2 y1, of the form ESSDD. We first perturb the pixel position of the original path at y5 by uniformly choosing a point from the perturbation disk, and then cast a view ray through the new pixel position, as shown at the bottom, to get y′4. We extend the sub-path through the same specular bounces at y′4 and y′3 as at the corresponding y4 and y3 to get y′2. Then, y′2 and y1 are linked to form a new lens-perturbed path with the same form, LDDSSE, as the original.


Figure 4.3: This is a path with the form LDSSDE, used to demonstrate caustics perturbation. We would like to replace the caustics sub-path y1 y2 y3 y4 y5 of the form DSSDE. At the head vertex of the caustics sub-path, y1, we perturb the outgoing light ray direction by (θ, φ), where θ is uniformly sampled from [0, θ_max] and φ is uniformly sampled from [0, 2π], to get y′2, as shown at the bottom. We extend the sub-path through the same specular bounces at y′2 and y′3 as at the corresponding y2 and y3 to get y′4. Then, y′4 and y5 are linked to form a new complete caustics-perturbed path with the same form LDSSDE as the original one.


where G(y_j, y_{j+1}) is the geometric term between y_j and y_{j+1}, A_d is the area of the perturbation, and θ_{j,in} is the angle between the normal of the surface and the direction of the incoming light ray at y_j.

· Caustics perturbation: Figure 4.3 shows an example of caustics perturbation. Caustics perturbation replaces a caustics sub-path with a suffix y_m · · · y_k of the form (D|L)S⁺DE. To do this, we generate a new sub-path starting from the vertex y_m, the head vertex of the caustics sub-path. The direction of the segment y_m y_{m+1} is perturbed by a random amount (θ, φ), and the sub-path is then extended through additional specular bounces to the same length as the original one. The angles (θ, φ) are uniformly sampled from [0, θ_max] and [0, 2π], where the central axis, θ = 0, corresponds to the direction of the original ray and θ_max is the range of the sampling angle, computed from the corresponding perturbation radius, d, by the following equation from [149]:

\[ \theta_{\max} = \theta(d)\, \frac{|y_{n-1} - y_{n-2}|}{\sum_{k'=m}^{n-1} |y_{k'} - y_{k'-1}|} \qquad [4.5] \]

where θ(d) is the angle through which the ray y_n y_{n−1} needs to be perturbed to change the image location by a distance of d pixels. The tentative transition probability for caustics perturbation can be computed as

\[ T_{d,\mathrm{caustics}}(\tilde{Y}' \mid \tilde{Y}) = \frac{G(y'_m, y'_{m-1})}{2\pi\,\theta_{\max}\,\cos\theta_{m,\mathrm{out}}} \prod_{j=k+1}^{m-1} \frac{G(y'_j, y'_{j+1})}{|\cos\theta_{j,\mathrm{out}}|} \]

where θ_{j,out} is the angle between the normal of the surface and the direction of the leaving light ray at y_j.
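The angle-range computation in Equation 4.5 amounts to scaling θ(d) by a ratio of segment lengths. A minimal Python sketch, with an assumed vertex-list representation and helper names of our own:

```python
import math

def caustics_theta_max(theta_d, vertices, m):
    """Perturbation-angle range for caustics perturbation (Eq. 4.5).

    theta_d  -- angle theta(d) that moves the eye ray by d pixels
    vertices -- path vertices y_0 ... y_{n-1} as (x, y, z) tuples,
                with y_{n-1} at the camera end
    m        -- index of the head vertex of the caustics sub-path

    theta_max scales theta(d) by the ratio of the last segment length
    to the total length of the perturbed sub-path, so the expected
    image-plane motion stays near d pixels.
    """
    def dist(a, b):
        return math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)))

    n = len(vertices)
    last_seg = dist(vertices[n - 1], vertices[n - 2])
    total = sum(dist(vertices[k], vertices[k - 1]) for k in range(m, n))
    return theta_d * last_seg / total
```

Longer sub-paths produce a smaller θ_max, which matches the intuition that a perturbation far from the camera is amplified by every subsequent bounce.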

In the original ERPT work, the size of the perturbation was a parameter set at start-up. In PMC-ER, we can choose a reasonable set of differently sized perturbations in the mixture, three in our case. A large perturbation is effective at redistributing information over a wide area, while a small one benefits image regions where illumination changes quickly.
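The pixel move underlying lens perturbation samples the new image point uniformly from a disk of radius d, with disk area A_d = πd² appearing as the normalization in the tentative transition probability. A small sketch (the function name and return convention are ours, not the thesis implementation):

```python
import math
import random

def perturb_pixel_in_disk(px, py, d):
    """Sample a new pixel position uniformly from a disk of radius d
    centered at the current pixel position (px, py).

    Taking the square root of a uniform variate for the radius makes
    the density uniform over the disk's area; A_d = pi * d**2 is the
    normalization used in the tentative transition probability.
    """
    r = d * math.sqrt(random.random())      # uniform in area, not in radius
    phi = 2.0 * math.pi * random.random()   # uniform angle
    return px + r * math.cos(phi), py + r * math.sin(phi), math.pi * d * d
```

Sampling the radius as d·√u rather than d·u is what keeps the per-area density constant, so every pixel inside the disk is equally likely to receive the perturbed path.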

When using the kernel function to perturb a path, we first choose d according to the weights α_{i,d}^{(s)}, where d is the radius of lens perturbation and Σ_{d_h} α_{i,d_h}^{(s)} = 1. Then either lens or caustics perturbation is chosen, according to β_lens = 0.1 and β_caustics = 0.9 in our case, which is set to prefer caustics perturbation when it is possible. We then perturb the current path to generate a new path. The acceptance probability is calculated as follows:

\[ A(\tilde{Y}' \mid \tilde{Y}) = \min\!\left(1.0,\ \frac{L(\tilde{Y}')\,K_i^{(s)}(\tilde{Y} \mid \tilde{Y}')}{L(\tilde{Y})\,K_i^{(s)}(\tilde{Y}' \mid \tilde{Y})}\right) \qquad [4.6] \]

where L(Ỹ) is the path contribution. When evaluating the acceptance probability, all possible proposals that might generate Ỹ′ from Ỹ must be considered, and thus the kernel is computed as:

\[ K_i^{(s)}(\tilde{Y}' \mid \tilde{Y}) = \sum_{d_h} \alpha_{i,d_h}^{(s)} \left(\beta_{\mathrm{lens}}\, T_{d_h,\mathrm{lens}}(\tilde{Y}' \mid \tilde{Y}) + \beta_{\mathrm{caustics}}\, T_{d_h,\mathrm{caustics}}(\tilde{Y}' \mid \tilde{Y})\right) \]

However, Tierney et al. [147] observed that the acceptance probability can also be computed per operation type as

\[ A(\tilde{Y} \mid \tilde{X}) = \min\!\left(1.0,\ \frac{f(\tilde{Y})\, T_{\text{op-type}}(\tilde{X} \mid \tilde{Y})}{f(\tilde{X})\, T_{\text{op-type}}(\tilde{Y} \mid \tilde{X})}\right) \]

where T_op-type is the transition function of the chosen perturbation type. Detailed balance is still maintained under this choice because f(X̃) T_op-type(Ỹ|X̃) A(Ỹ|X̃) = f(Ỹ) T_op-type(X̃|Ỹ) A(X̃|Ỹ) holds for each operation type separately, whether f(Ỹ) T_op-type(X̃|Ỹ) ≥ f(X̃) T_op-type(Ỹ|X̃) or not; thus the criterion for unbiasedness and convergence of a Markov Chain is still achieved. We choose this per-operation acceptance probability, rather than Equation 4.6, to avoid the computation of the other possible transition functions and so improve the efficiency of perturbation.
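The per-operation acceptance test can be sketched as a generic Metropolis step; here f stands in for the path contribution L, and the operation list, names, and state representation are illustrative assumptions rather than the thesis code:

```python
import random

def metropolis_step(Y, f, ops, weights):
    """One mutation step with a per-operation acceptance probability.

    Y       -- current state (an abstract path)
    f       -- unnormalized target density (the path contribution L)
    ops     -- list of (propose, transition_pdf) pairs; propose(Y) returns
               a candidate Y', and transition_pdf(Yp, Y) evaluates T_op(Y'|Y)
    weights -- probability of choosing each operation (sums to 1)

    Only the transition pdf of the chosen operation enters the ratio,
    which avoids evaluating every other proposal's pdf; detailed balance
    holds per operation type.
    """
    propose, T = random.choices(ops, weights=weights, k=1)[0]
    Yp = propose(Y)
    a = min(1.0, (f(Yp) * T(Y, Yp)) / (f(Y) * T(Yp, Y)))
    return Yp if random.random() < a else Y
```

With a constant target and a symmetric proposal, the ratio is exactly 1 and every proposal is accepted, which is a convenient sanity check.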

4.1.4 Resampling

The resampling step in this algorithm achieves three purposes: it carries forward to the next round samples that still have high remaining energy to flow out, it provides an opportunity to add completely new paths into the population to explore the image plane evenly, and the

information about which perturbations were chosen inside the inner loop guides the adaptation of the kernel functions. · Elimination: This step eliminates well-explored and low-contribution samples from the population. When we generate a new population path, the energy of the path, E(Ỹ), is computed using Equation 4.2 and assigned to E_remain. At the end of each s loop, we reduce the energy remaining in the path by ΔE. We first remove the paths with negative energy. Then, the probability of a path surviving the elimination process is proportional to the energy remaining in the path, E_remain. If there is energy remaining in an eliminated path, that energy is deposited before we remove the path from the population, to guarantee unbiasedness. · Regeneration: Regeneration maintains a constant number of paths in the population. It also gives us the chance to decide where we would like to explore in the next iterations. To achieve unbiasedness, we need to explore the image plane evenly. Thus, the regeneration of new paths follows the criterion of stratification. In the preprocess phase, we compute the total number of stratified pixel positions needed for the entire process. A pool of stratified pixel positions is then generated according to that number. During the regeneration process, we keep asking the pool for the next unused stratified pixel position. A new path is generated by tracing through the new pixel position with the path tracing algorithm, and the energy of the path, E(Ỹ), is computed using Equation 4.2 and assigned to E_remain. · Adaptation of α values: The purpose of the α values is to choose a proper perturbation radius, which decides the area of local exploration, according to the success rate of each perturbation. Thus, when a new path is generated, α_{i,h}^{(s)} is set to a constant probability for each component, which allows us to choose among all perturbations uniformly.
After initialization, the acceptance probability of each perturbation is tagged with the index of the kernel mixture component and the index of the path. At the adaptation step, we compute the accumulated acceptance probabilities tagged with the h-th component for each member path and use them to adjust the mixture probabilities.

Figure 4.4: The first image on the left is a Cornell Box image computed using the PMC-ER Equal Deposition algorithm; the second image is computed using ERPT with 9 SPPs; the third image is computed using ERPT with 16 SPPs; and the fourth image shows the mutation strategy used during the process. The strategy image shows that the kernel function automatically adjusts to perturbations with small radii near physical borders and lighting borders to increase the success rate of transferring energy.

We can compute the value of α_{i,h}^{(s)} as:

\[ \alpha'^{(s)}_{i,h} = \sum_{m=1}^{N_{\mathrm{perb}}} A_m(\tilde{Y}_i^{(z)} \mid \tilde{Y}_i^{(z-1)})\,\delta_{m,h} \]

\[ \alpha^{(s)}_{i,h} = \epsilon + (1 - \epsilon)\, \frac{\alpha'^{(s)}_{i,h}}{\sum_{h'} \alpha'^{(s)}_{i,h'}} \]

where N_perb is the total number of perturbations applied to this population path, ε is a small constant that keeps every component selectable, and δ_{m,h} = 1 if d_h was chosen as the radius of perturbation in the m-th perturbation (i.e., d_m = d_h) and 0 otherwise.
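The adaptation rule can be sketched as follows; the normalization here folds the ε floor into weights that sum to 1, which is one possible reading of the update rather than the thesis code:

```python
def adapt_alphas(alphas, accepted, eps=0.05):
    """Adapt mixture weights from accumulated acceptance probabilities.

    alphas   -- current weights, one per perturbation radius d_h
                (only their count matters here)
    accepted -- accumulated acceptance probabilities A_m for the
                perturbations that used radius d_h (the delta_{m,h}
                bookkeeping, already summed per component)
    eps      -- small floor so no component's weight collapses to zero,
                keeping every radius selectable

    Returns new weights that sum to 1.
    """
    total = sum(accepted)
    if total == 0.0:                      # no information yet: stay uniform
        return [1.0 / len(alphas)] * len(alphas)
    return [eps / len(alphas) + (1.0 - eps) * a / total for a in accepted]
```

Components whose perturbations were accepted more often receive larger weight, so subsequent perturbations of this path favor the radii that actually succeed in its neighborhood.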

4.2 Results

The perturbation radius is another important factor. It affects the area over which energy can be distributed and the success rate of the distribution operation. In smooth lighting areas, we want this radius to be large in order to obtain a smooth image as soon as possible. However, in complex lighting areas such as shadow and caustics regions, we want it to be small, or the success rate of perturbation declines sharply. Our algorithm automatically adjusts this trade-off through the adaptation of the α values.


Figure 4.5: A dragon scene computed using our PMC-ER Equal Deposition at the top. The bottom left is the zoom-in of the caustics part computed by PMC-ER equal deposition and the bottom right is the same part computed by ERPT. PMC-ER has fewer artifacts overall. By sharing more information among paths and by better reusing the high contribution paths, PMC-ER is an improvement over ERPT.


Figure 4.6: A room scene computed using our PMC-ER Equal Deposition at the top and ERPT at the bottom. PMC-ER has fewer artifacts overall. By sharing more information among paths and by better reusing the high contribution paths, PMC-ER is an improvement over ERPT.

We compared our PMC-ER Equal Deposition algorithm with the energy redistribution path tracing (ERPT) algorithm on the Cornell Box scene, a dragon scene, and a complex room scene, using the criterion of starting with a similar number of initial PT paths. In all three cases we used a population size of 5000. There are three perturbation radii: 5, 10, and 50 pixels. The range of the perturbed angle for the corresponding caustics perturbation is computed according to Equation 4.5. In each step inside the inner loop, each member generates 16 mutations, and 40% of the population is eliminated based on its remaining energy and regenerated using the stratification mechanism. We also use 4 SPPs for estimating the energy contained in an image for both the PMC-ER and ERPT algorithms. The Cornell Box image (Figure 4.4) was rendered using our PMC-ER Equal Deposition algorithm with 1000 iterations, which has roughly the same total number of initial PT paths as the image rendered using ERPT with 8 SPPs. Observing the strategy image, the kernel function automatically adjusts to small perturbation radii to increase the success rate of energy flowing out when the exploration is near physical borders and lighting borders such as the shadow and caustics areas and the light edge. We can see that our algorithm removes the bright-spot artifacts of the ERPT algorithm. When we compare our result with an image rendered with ERPT with 16 SPPs, our image has fewer artifacts. In other words, PMC-ER renders a more converged image than the corresponding image generated by the ERPT algorithm with the same number of initial PT paths. The dragon scene (Figure 4.5) was rendered at 900×900 with 12800 iterations and 20 mutations for each member of the population inside the loop, in comparison with the image rendered using ERPT with 32 SPPs and 20 mutations for each initial PT path. We can see that the image rendered with PMC-ER has fewer artifacts than the image rendered with ERPT. The room scene (Figure 4.6) was rendered at 720×405 with 19200 iterations and 20 mutations for each member of the population inside the loop, in comparison with the image rendered using ERPT with 128 SPPs and 20 mutations for each initial PT path. We can see that the image rendered using PMC-ER has fewer artifacts than the image rendered using ERPT. Note that for all PMC-ER

Equal Deposition and ERPT implementations, we did not use the filter proposed in [25] to smooth the final image.

Image   Method     Time (s)  Err    Eff
Box1    ERPT(8)    4401.8    0.85   2.7e-4
        ERPT(16)   8935.7    0.526  2.1e-4
        PMC-ER     5281.2    0.37   5.4e-4
Dragon  ERPT(32)   88596.1   1.13   1.0e-5
        PMC-ER     97455.7   0.46   2.3e-5
Room    ERPT(128)  82656.5   0.052  2.3e-4
        PMC-ER     96575.1   0.010  1e-3

Table 4.1: Measurements comparing energy redistribution path tracing (ERPT) with PMC-ER-E, for a roughly equal number of sample rays.

The statistics for the three rendered images are presented in Table 4.1. We use the mean squared efficiency (Eff) metric for comparing algorithms, computed as:

\[ \mathit{Err}_I = \frac{\sum_{\mathrm{pixels}} \mathit{err}^2}{N_{\mathrm{pixels}}}, \qquad \mathit{Eff}_I = \frac{1}{T_I \times \mathit{Err}_I} \]

where err is the difference in intensity of a pixel between the rendered value and the ground-truth value, T_I is the running time of the algorithm to render that image, and N_pixels is the overall pixel count. Eff_I is a measure of how much longer (or less) one algorithm must run to reach the quality of another [113]. We can see that our algorithm achieves better efficiency than the ERPT algorithm.
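The Err and Eff metrics are straightforward to compute; a minimal sketch over flat intensity lists (the names are ours):

```python
def err_eff(rendered, reference, render_time):
    """Mean squared error (Err) and efficiency (Eff) of a rendered image.

    rendered, reference -- flat lists of pixel intensities
    render_time         -- wall-clock seconds to render the image

    Eff = 1 / (T * Err): halving the error at equal running time
    doubles the efficiency, so higher is better.
    """
    n = len(rendered)
    err = sum((r - g) ** 2 for r, g in zip(rendered, reference)) / n
    return err, 1.0 / (render_time * err)
```

The efficiency numbers in Table 4.1 are consistent with this definition, e.g. the Room/ERPT row: 1 / (82656.5 × 0.052) ≈ 2.3e-4.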

4.3 Discussion

Many PMC kernels in the literature are mixture models. Mixtures are typically formed by combining several components that are each expected to be useful in some cases but not others. The adaptation step then determines which components are useful for a given input. Mixtures allow otherwise unrelated functions to be combined, such as perturbations with different radii. We would prefer the kernel function to have many components. However, when the kernel function contains many adaptable parameters, each iteration requires a high adaptive sample count to gather enough information to adapt the kernel function. This prevents us from using a larger number of different perturbing radii. Such a strategy would be appealing for efficiently rendering a scene in which geometries of very different sizes appear on the image plane, but the adaptive sample count required to adequately determine the mixture component weights would be too large. Instead, we use three perturbation radii for all rendered images. We also observe that the deposition energy, e_d, is important for generating good results with ERPT algorithms. If e_d is too small, the algorithm becomes slow and inefficient, although it converges to smooth results. However, if e_d is too large, the algorithm generates bright spots because a path requires higher energy to pass the distribution criterion in order to start a Markov Chain. Thus, more paths fail to reach this criterion, and the number of Markov Chains decreases. Both effects increase the chance of accumulating large energy in a small spot. The choice of e_d is a trade-off between efficiency and quality.


Chapter 5 Efficient Schemes for Monte Carlo Markov Chain Algorithms in Global Illumination

Markov Chain Monte Carlo algorithms such as Metropolis Light Transport (MLT) [152], Energy Redistribution Path Tracing (ERPT) [25], and Population Monte Carlo Energy Redistribution (PMC-ER) exploit the correlated information among light transport paths to improve rendering efficiency. However, MCMC algorithms are limited in achieving higher rendering efficiency by the possibly high failure rate of caustics perturbation and by the sub-optimal stratified exploration of the image plane. The possibly high failure rate of caustics perturbation comes from poor prediction of the perturbation angle range; the predicted range depends on path and scene properties. If the predicted range is too large, the failure rate of the caustics perturbation will be high, causing extra energy to accumulate at specific spots on the image plane. As a result, an overly large predicted range decreases rendering efficiency. In addition to the high failure rate, MCMC algorithms need to implement the lens and caustics perturbation methods separately because it is impossible for the original lens perturbation method to generate a new path from a caustics path with the form EDSD⁺(D|L). At the same time, the calculation of the perturbation angle induces extra cost in caustics perturbation and makes it a burden to predict the change in the pixel position of a perturbed path on the image plane for MCMC algorithms.

The ERPT and PMC-ER algorithms carefully generate new paths according to the need to explore the image plane evenly for achieving unbiasedness. But this choice is sub-optimal because some areas on the image plane contain higher perceptual variance than others. Devoting more computational effort to reducing the variance in these high-perceptual-variance regions would increase perceptual quality. In addition, some types of paths, such as caustics paths, are visually important but hard to find with a general path tracing algorithm. Concentrating more computational effort on these paths can further improve rendering efficiency. Thus, evenly exploring the image plane prevents MCMCs from spending more computational effort on exploring those "high-perceptual-variance" regions of the image and "hard-to-find-but-important" paths. This limits further improvement in rendering efficiency. To relieve these two limitations, we first introduce a new lens perturbation which can be used for both lens perturbation and caustics perturbation. This new perturbation method allows us to control the perturbation by a single, simple lens perturbation radius. The control parameter, the perturbation radius, is intuitive and predictive of the movement of the pixel position of a perturbed path on the image plane for both lens and caustics perturbation. It increases the success rate of perturbing a caustics path, which in turn improves rendering efficiency. We propose two methods to generate paths in order to spend more computational effort on the exploration of visually noisy regions of the image and "hard-to-find-but-important" paths without introducing bias. The variance-regeneration method generates paths passing through high-perceptual-variance regions of the image plane to reduce the variance in these regions, enhancing the perceptual quality of the rendered image. The caustics-regeneration method generates a set of caustics paths to further explore the caustics path space. We weight the energy deposited by each perturbation according to the type of the population path to prevent the new regeneration methods from introducing bias into the final result. Since we use the PMC-ER algorithm to generate static images, the notation t or T is omitted for simplicity in the discussion of this chapter.


5.1 New Schemes to MCMCs

PMC-ER adaptively selects pixels for redistribution and also adapts algorithm parameters when tracing paths into the scene. A traced path is then used as the initial state for starting Markov Chains to redistribute the path's energy to nearby pixels and find additional light transport paths. At the same time, PMC-ER adjusts the redistribution area of each path according to the success of previous redistributions. The intuition is that different pixels will find different initial paths, that this information can then be conveyed to neighboring pixels through Markov Chains, and that the size of the conveying area will differ from path to path. In this section we first briefly discuss Population Monte Carlo Energy Redistribution with Equal Deposition. We then present the details of implementing the new lens mutation and path regeneration methods.

5.1.1 Population Monte Carlo Energy Redistribution with Equal Deposition

Figure 5.1 shows the PMC-ER Equal Deposition (PMC-ER-E) algorithm. The algorithm first estimates the average energy contained in the image, Ē, and the deposition energy, e_d, for each perturbation, which are discussed in Chapter 4. The algorithm generates a pool of pixel positions and a set of caustics paths, which are used to generate initial population paths and replacement paths during the resampling process in each inner iteration. In each inner loop, z, we perform N_equal perturbations on each path in the population according to the path's kernel function:

\[ K_i^{(s)}(\tilde{y}_i^{(z)} \mid \tilde{Y}_i^{(z-1)}) = \sum_{d_h} \alpha_{i,d_h}^{(s)}\, T(\tilde{y}^{(z)} \mid \tilde{Y}^{(z-1)} : d_h) \qquad [5.1] \]

Then, the acceptance probability can be computed as

\[ A(\tilde{Y} \mid \tilde{X}) = \min\!\left(1.0,\ \frac{f(\tilde{Y})\, T_{\text{op-type}}(\tilde{X} \mid \tilde{Y})}{f(\tilde{X})\, T_{\text{op-type}}(\tilde{Y} \mid \tilde{X})}\right) \]

where T_op-type is the transition function of the chosen perturbation type. As shown in Ch. 3, detailed balance can be maintained under this choice of acceptance probability, achieving the requirements of convergence and unbiasedness. After each perturbation, E_d = R · e_d energy is deposited at the pixel position of the new path, Ỹ_i^{(z)}. The constant R is an energy-deposited constant, which is 1 for the original PMC-ER-E algorithm; it is adapted according to the properties of the population path, as discussed in Section 5.1.3, in order to guarantee the unbiasedness of the algorithm. In the s loop, ΔE amount of energy is removed from E_i,remain of all paths. Then, the resampling process eliminates well-distributed and low-contribution paths and redeposits the remaining energy of the eliminated paths, regenerates paths to maintain a constant number of population paths, and adapts the kernel function of each population path. After finishing the s loop, the algorithm deposits the remaining energy of the population paths onto the image before exiting. Note that the remaining energy in eliminated paths, and in population paths terminated at the end of the run, is deposited onto the image so that the number of Markov Chains starting from each path remains proportional to its initial energy, maintaining unbiasedness.

5.1.2 New Lens Perturbation

Details about the mixture kernel function are given in Ch. 4. In this section, we focus only on how to use the new perturbation method to replace the original caustics perturbation method. Figure 5.2 shows an example of a caustics path with the form EDSD⁺(L|D). The original lens perturbation fails to replace this kind of path because it is impossible to find exactly the same outgoing direction at the first specular vertex from the camera when we perturb the pixel position at the camera vertex. Thus, we need to use caustics perturbation. However, our new lens perturbation method is designed to perturb a sub-path with the form E(D|S)⁺(SD|S) in order to directly replace both the lens and caustics perturbation methods. Figure 5.2 shows an example of the new lens perturbation for a caustics path. We would like to perturb the sub-path x_{n−1} · · · x_k of the form E(D|S)⁺(SD|S) by the following steps: First, the perturbation method moves the pixel position of the existing path on the image plane which


1  estimate Ē, e_d, α, R_G2C
2  generate a pool of pixel positions and a set of caustics paths
3  for s = 1, · · · , N_iteration
4      generate initial population of paths if s = 1
5      determine α_i for each perturbation
6      for i = 1, · · · , N_population
7          if E_i,remain + U(0, 1)·Ē > Ē
8              for z = 1, · · · , N_mutations
9                  generate Ỹ_i^(z) ∼ K_i(ỹ_i^(z) | Ỹ_i^(z−1))
10                 Ỹ_i^(z) = (U(0, 1) < A(Ỹ_i^(z) | Ỹ_i^(z−1))) ? Ỹ_i^(z) : Ỹ_i^(z−1)
11                 deposit E_d = R · e_d on Ỹ_i^(z)
12                 E_i,remain −= E_d
13         E_i,remain −= ΔE
14     resample the population: elimination and regeneration
15 deposit remaining energy in all population paths

Figure 5.1: The PMC-ER Equal Deposition iteration loop. U(0, 1) generates a random number uniformly distributed between 0 and 1. E_i,remain is the energy remaining in population path i after the inner energy redistribution loops. R is the energy-deposited constant discussed in Section 5.1.3.4.

it passes. In our case, the new pixel location is uniformly sampled within a disk of radius d, a parameter of the kernel component. The disk is centered at the pixel position of the existing path. The path is reconstructed to pass through the new image point to get the first vertex from

the camera, y′_{n−2}. Then, according to the bounces at the vertices in the original path, we use two different methods to construct the sub-path to have the same length as the original one. If y_m, where n−2 ≥ m ≥ k+1, is a specular vertex, we choose a specular bounce to find the next vertex y′_{m−1}. If y_m is a diffuse vertex, we copy y_{m−1} to y′_{m−1} and link y′_m to y′_{m−1} to form the link. After we reconstruct the sub-path, the tentative transition probability can be computed as


Figure 5.2: This is a path with the form LDDSDE, used to demonstrate the replacement of caustics perturbation with the new lens perturbation method. We would like to replace the caustics sub-path y5 y4 y3 y2 y1 of the form EDSDD. We first perturb the pixel position of the original path at y5 by uniformly choosing a point from the perturbing disk, and then cast a view ray through the new pixel position, as shown at the bottom, to get y′4. We link y′4 to y′3, a copy of y3, to form the connection. Then, we extend the sub-path through the same specular bounce at y′3 as at the corresponding y3 to get y′2. Then, y′2 and y1 are linked to form a new lens-perturbed path with the same form LDDSDE as the original one.


\[ T_{d,\mathrm{lens}}(\tilde{Y}' \mid \tilde{Y}) = \frac{G(y'_{n-1}, y'_{n-2})}{A_d} \prod_{j=k+1}^{n-3} \left( \frac{G(y'_j, y'_{j+1})}{|\cos\theta_{j,\mathrm{in}}|} \right)^{\mathbf{1}[y_j \in S]} \]

where G(y′_j, y′_{j+1}) is the geometric term between y′_j and y′_{j+1}, A_d is the area of the perturbation, 1[y_j ∈ S] indicates whether y_j is a specular vertex, and θ_{j,in} is the angle between the normal of the surface and the direction of the incoming light ray at y_j. This relieves us of the need, present in the original caustics perturbation method, to estimate the perturbation angle θ_max for each path. That computation is strenuous, and it is hard to predict the movement caused by caustics perturbation on the image plane. With our new lens perturbation method, a single pixel perturbation radius controls the movement on the image plane. The results show that the control is easier and the movement on the image plane is more predictable.
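The two reconstruction rules (re-trace the bounce at specular vertices, copy-and-link at diffuse ones) can be sketched as follows; the vertex dictionaries and the trace_specular helper are illustrative assumptions, not the thesis implementation:

```python
def rebuild_subpath(orig, trace_specular, new_eye_vertex):
    """Rebuild a perturbed sub-path under the new lens perturbation.

    orig            -- original sub-path vertices [y_{n-2}, ..., y_k],
                       each a dict with at least a 'specular' flag
    trace_specular  -- hypothetical tracer: given the new sub-path so far,
                       follows the specular bounce and returns the next vertex
    new_eye_vertex  -- y'_{n-2}, hit point of the ray cast through the
                       perturbed pixel position

    Walk the original vertices: at a specular vertex, re-trace the bounce;
    at a diffuse vertex, copy the original next vertex and link to it.
    The new sub-path keeps the same length as the original one.
    """
    new_path = [new_eye_vertex]
    for m in range(len(orig) - 1):
        if orig[m]['specular']:
            # specular bounce must be re-traced from the new geometry
            new_path.append(trace_specular(new_path))
        else:
            # diffuse vertex: copy the next original vertex and link to it
            new_path.append(dict(orig[m + 1]))
    return new_path
```

Because diffuse vertices simply re-link to copied originals, the perturbation stays controllable through the single pixel radius even for caustics-type sub-paths.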

5.1.3 Resampling

The resampling process consists of three steps: elimination, which removes well-explored and low-contribution samples and deposits the remaining energy of each eliminated path into the image; regeneration, which maintains a constant number of paths in the population and designs an exploration pattern in the path space; and adaptation of the α values, which adjusts the energy distribution area. In this section, we focus only on the process of regeneration. For the details of elimination and adaptation, please refer to Chapter 4. To generate a new replacement path, we use three types of regeneration paths: paths passing through a set of stratified pixel positions, paths passing through a set of pixel positions generated according to the perceptual variance, and a set of caustics paths traced from the light sources. To achieve this, we need two modifications to the original algorithm. First, we modify step 3 in Figure 4.1 from generating a pool of stratified pixel positions to generating a pool of pixel positions and a set of caustics paths. Second, we need to adjust R. The following sections give the implementation details.


5.1.3.1 Pixel Positions from Stratification Criterion

Pharr [113] demonstrates how important it is to sample evenly on the image plane to reduce variance. In energy redistribution, the even distribution of starting pixel positions guarantees unbiasedness because every image area has the same probability of generating paths, and the contribution of each path to the image plane depends on its energy. Thus, we assign N_uniform samples to each pixel.

5.1.3.2 Pixel Positions from Perceptual Variance Criterion

In order to distribute more initial path samples to regions with high perceptual variance, the algorithm must be able to compute the perceptually-weighted variance of each pixel. Thus, an extra image variable, I_variance, is added to keep track of the radiance of energy-estimated paths falling in each pixel during the preprocess of estimating the average path energy. The variance of sample radiances in each pixel can then be computed from I_variance. To account for perception, the variance in each pixel is divided by the threshold-versus-intensity function tvi(I) introduced by Ferwerda et al. [42]. A new variable, β_j, is introduced to indicate the degree to which pixel j requires more samples; pixels that require further exploration should have higher β_j. Thus, β_j should be proportional to the estimate of the perceptually-weighted variance at each pixel and can be computed as

\[ \beta'_j = \frac{\sigma_j^2}{tvi(\bar{I}_j)}, \qquad \beta_j = \frac{\beta'_j}{\sum_{j=1}^{N_{\mathrm{pixel}}} \beta'_j} \]

where Ī_j is the average radiance that falls into pixel j and σ_j² is the variance among all sample radiances falling in pixel j. To use variance-regeneration, we first compute the β values during energy estimation and choose a total of N_variance samples for the entire process. Each pixel j is then assigned N_variance(j) samples according to β_j. In total, N_uniform + N_variance(j) samples for pixel j are evenly distributed inside the pixel. The pixel positions from all pixels in the image form the pool of pixel positions. During the regeneration process, we can ask for a pixel sample from this pool or ask for a new path from the pool of caustics paths discussed in Section 5.1.3.3. If we get a pixel sample, we use the path tracing algorithm to generate a path passing through the new pixel position. The unweighted energy of the path is calculated as described in Chapter 4. In Section 5.1.3.4, we describe how to weight the deposited energy properly without introducing bias. In this work, we set N_variance large enough to use the deterministic sampling method described in [40] to limit the cost of generating a sample from the variance image, the β's. In addition to efficiency, the sample distribution is relatively stratified according to the sampling probability with deterministic sampling. This implementation can further improve the rendering efficiency.
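The allocation of variance-driven samples can be sketched as follows; the per-pixel inputs, the rounding policy, and the function names are assumptions of this sketch:

```python
def variance_sample_counts(variances, mean_lum, tvi, n_variance):
    """Allocate extra regeneration samples by perceptual variance.

    variances  -- per-pixel variance of sample radiances (sigma_j^2)
    mean_lum   -- per-pixel average radiance (I_j)
    tvi        -- threshold-versus-intensity function (perception model)
    n_variance -- total extra samples to distribute over the image

    beta_j is the perceptually weighted variance, normalized over the
    image; pixel j receives a share of n_variance proportional to beta_j.
    """
    raw = [v / tvi(i) for v, i in zip(variances, mean_lum)]
    total = sum(raw)
    return [int(round(n_variance * b / total)) for b in raw]
```

Because the weights are normalized, doubling every pixel's variance changes nothing; only the relative distribution of perceptual variance steers the extra samples.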

5.1.3.3 Caustics Paths

A path tracing algorithm traces paths starting from the camera. However, some types of paths, such as caustics paths, are easier to trace starting from a light source. The photon-mapping algorithm uses caustics photons to improve rendering efficiency. The rendering results in Figure 5.5 show that caustics paths are hard to find with the path tracing algorithm but are very important for generating the smooth caustics regions on the floor near the dragon. These two reasons motivate us to use specific types of light paths for the exploration of the caustics path space. In the preprocess we generate a pool of N_caustics caustics paths in the following way. First, we choose a light source and then choose a position on that light source as the starting light vertex. From the light vertex, we trace a path into the scene as described in [149, 83]. Then, we connect each vertex in the light path to the camera vertex. If the complete path formed is a valid caustics path, we keep the path in the candidate pool. Finally, we construct one valid caustics path by randomly choosing from the candidate pool. Figure 5.3 shows an example. The criteria for a caustics path are: first, the length of the path must be over 4 vertices; second, the path must contain at least one specular vertex; third, the first connection vertex from the camera vertex must be a


Figure 5.3: A caustics path is generated by tracing a ray from a light source. Each vertex in the path is linked to the camera vertex. The algorithm then checks whether the newly linked path is a caustics path; if it is, the algorithm keeps the valid path in the candidate pool. After finishing the whole process, we randomly choose a path from the candidate pool and put it into the caustics path pool.

diffuse surface. Without weighting the path energy, these extra "hard-to-find" paths would introduce bias. In Section 5.1.3.4, we describe how to weight the deposited energy without introducing bias.
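The three criteria can be checked mechanically on a sequence of vertex type codes; the encoding used here is our own shorthand, not a structure from the thesis:

```python
def is_caustics_candidate(path):
    """Check the three criteria for a valid caustics path.

    path -- vertex type codes from the light end to the camera end,
            e.g. ['L', 'D', 'S', 'S', 'D', 'E'] (L light, D diffuse,
            S specular, E eye)

    1. the path must have more than four vertices,
    2. it must contain at least one specular vertex,
    3. the first connection vertex from the camera must be diffuse.
    """
    return (len(path) > 4
            and 'S' in path
            and path[-1] == 'E'
            and path[-2] == 'D')
```

Only light sub-path prefixes whose camera connection satisfies all three checks enter the candidate pool.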

5.1.3.4 Weighting the Energy of Newly Regenerated Paths

In the original energy redistribution algorithm, we distribute the pixel positions evenly, and the energy-deposited ratio R should be one. However, if we unevenly add extra samples on the image plane and in the path space without weighting the energy of each path, the extra sample paths will introduce biased energy into the image. In this section, we describe how to weight the energy to ensure that the result is still unbiased. For variance-regeneration, each pixel originally has N_uniform samples dropped in its effective area, which guarantees that the expected energy deposited by paths initialized from each pixel is statistically the same. To keep the energy deposited from paths initialized in each pixel statistically equal, we should weight the deposited energy of the path by

78

N_{uniform} \frac{\tilde{E}(j)}{N_{MC}} = (N_{uniform} + N_{variance}(j))\, R_{variance} \frac{\tilde{E}(j)}{N_{MC}}

R_{variance} = \frac{N_{uniform}}{N_{variance}(j) + N_{uniform}}

where N_{uniform} is the number of uniform samples assigned per pixel, N_{variance}(j) is the number of samples assigned to pixel j by variance-regeneration, \tilde{E}(j) is the expected path energy of a path generated by tracing through pixel j, \tilde{E}(j)/N_{MC} is the expected energy deposited by a path through pixel j, and N_{MC} = N_{uniform} N_{pixel}. By weighting the deposited energy by the ratio R_{variance}, we ensure that the total energy expected to be distributed by all paths starting from a pixel is statistically the same as that of the original stratified paths from that pixel.

The situation for caustics-regeneration differs from that of variance-regeneration. Caustics paths can be viewed as global paths because they are generated by light tracing and can pass through any pixel position on the image plane, so we handle them slightly differently. Statistically, the ratio of caustics paths to general paths generated by the path tracing algorithm should be fixed. We can use this ratio to weight the energy of all caustics paths in order to avoid bias. The energy-deposit ratio is calculated by the following equation:

N_{expect} \frac{\tilde{E}_{caustics}}{N_{MC}} = (N_{expect} + N_{caustics})\, R_{caustics} \frac{\tilde{E}_{caustics}}{N_{MC}}

R_{caustics} = \frac{N_{expect}}{N_{caustics} + N_{expect}}

where N_{expect} is the expected total number of caustics paths among the paths generated by stratified-regeneration, N_{caustics} is the total number of caustics paths in the pool of caustics paths, \tilde{E}_{caustics} is the expected path energy of a caustics path, and \tilde{E}_{caustics}/N_{MC} is the expected deposited energy of a caustics path. In the preprocess, we estimate R_{G2C}, the ratio of the total number of caustics paths to the total number of general paths. Then we can compute N_{expect} =

R_{G2C} \times N_{pixels} \times N_{uniform} for the evaluation of R_{caustics}. By weighting the energy of each caustics path by the ratio R_{caustics}, we guarantee the unbiasedness of the final result.
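As a sketch, the two weighting ratios can be computed directly from the sample counts. The function names are hypothetical, and the inputs are assumed to be the quantities defined above:

```python
def r_variance(n_uniform, n_variance_j):
    """R_variance for pixel j: N_uniform / (N_variance(j) + N_uniform)."""
    return n_uniform / (n_variance_j + n_uniform)

def r_caustics(n_pixels, n_uniform, n_caustics, r_g2c):
    """R_caustics: N_expect / (N_caustics + N_expect), where
    N_expect = R_G2C * N_pixels * N_uniform is the expected number of
    caustics paths among the stratified-regeneration paths."""
    n_expect = r_g2c * n_pixels * n_uniform
    return n_expect / (n_caustics + n_expect)
```

For example, when a pixel receives as many variance-regeneration samples as uniform samples, each path's deposit is halved, so the total expected energy from that pixel is unchanged.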

5.1.3.5 Ratio for Combining These Two Regeneration Methods

In the previous section, the ratios R_caustics and R_variance were calculated when applying each regeneration method separately. However, we would like to combine them to make the system more efficient. In this section we present how to compute the real ratio R for each population path generated by the different regeneration methods with different path properties. The total expected energy \tilde{E} delivered to the image plane from the light sources is equal to the sum of the energy delivered by all paths, including the set of caustics paths and the paths generated from the pool of pixel positions. The paths generated from the pool of pixel positions can be further split into caustics and non-caustics paths. The total expected energy \tilde{E} can be computed as

\tilde{E} = \sum_{i=1}^{N_{path}} R_i \frac{\tilde{E}}{N_{MC}}

= \sum_{i=1}^{N_{pixel\text{-}pool}} R_{pixel\text{-}pool} \frac{\tilde{E}}{N_{MC}} + \sum_{i=1}^{N_{caustics}} R_{caustics\text{-}pool} \frac{\tilde{E}}{N_{MC}}

= \sum_{i=1}^{N_{noncaustics\text{-}in\text{-}pixel\text{-}pool}} R_{noncaustics\text{-}in\text{-}pixel\text{-}pool} \frac{\tilde{E}}{N_{MC}} + \sum_{i=1}^{N_{expect}} R_{expected\text{-}caustics\text{-}in\text{-}pixel\text{-}pool} \frac{\tilde{E}}{N_{MC}} + \sum_{i=1}^{N_{caustics}} R_{caustics} \frac{\tilde{E}}{N_{MC}}

where R_i is the energy-deposit ratio for each path, N_{pixel-pool} is the total number of pixel positions, and N_{caustics} is the total number of caustics paths added by the algorithm. Observing the equation above, the extra caustics paths and the estimated N_{expect} caustics paths in the pixel-position pool must be weighted by R_{caustics} according to the caustics path weighting scheme. Since the N_{expect} caustics paths come from the pixel pool, they must additionally be weighted by R_{variance}. Thus, the real energy-deposit ratio R of a path falls into three cases: first, if a path is from the pool of pixel positions and is a caustics path, the ratio is R_{variance}(j) \times R_{caustics}; second, if a path is from the pool of pixel positions but is not a caustics path, the ratio is R_{variance}(j); third, if a path is from the pool of caustics paths, the ratio is R_{caustics}. By using the appropriate ratio, we guarantee unbiasedness.
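The three cases can be folded into one helper. This is a hypothetical sketch, assuming the per-path flags and the two ratios defined above are available:

```python
def deposit_ratio(from_pixel_pool, is_caustics, r_var_j, r_caus):
    """Combined energy-deposit ratio R for a regenerated path, following
    the three cases described above."""
    if from_pixel_pool and is_caustics:
        return r_var_j * r_caus   # case 1: pixel pool, caustics path
    if from_pixel_pool:
        return r_var_j            # case 2: pixel pool, non-caustics path
    return r_caus                 # case 3: caustics-path pool
```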

5.2 Results

In evaluating performance, we compared our implementations with the new schemes against the original PMC-ER Equal Deposition algorithm on a Cornell Box (CB) scene, a dragon scene, and a complex room scene, starting each algorithm with the same number of initial PT paths. In all three cases, we used a population size of 5000 and three perturbation radii: 5, 10, and 50 pixels. In each step inside the inner loop, each member generates 20 perturbations, and 40% of the population is eliminated based on its remaining energy and regenerated. We use 16 samples per pixel (SPP) for estimating \tilde{E}, e_d, the variance distribution, and R_{C2G}. When applying the new schemes to the PMC-ER algorithms, we use N_{SPP} samples per pixel to compute a total of N_{total} initial paths and run N_{iteration} iterations. If we plug the new regenerations into PMC-ER, we choose a total of N_{variance} variance-regeneration pixel positions and N_{caustics} caustics-regeneration paths for the entire rendering process. Thus, (N_{iteration}, N_{variance}, N_{caustics}) are the main parameters used to describe the algorithms that render the results in the following sections. Statistics for the three test scenes when using each new scheme separately are presented in Tables 5.1 and 5.2. Table 5.3 presents the statistics when applying all new schemes with PMC-ER-E. We used the perceptually-based mean squared efficiency (P-Eff) metric for comparison. P-Eff is computed as:

Image    Method    Total Iter (N_iteration)    Time (s)    Err       Eff
Box1     E         1225                        4769.1      0.0311    6.74e-3
Box1     E+Lens*   1225                        4683.1      0.0257    8.31e-3
Dragon   E         2430                        13081.3     3.09      2.47e-5
Dragon   E+Lens    2430                        12640.4     1         7.91e-5
Room     E         18800                       96575.1     0.0274    3.78e-4
Room     E+Lens    18800                       95812.1     0.0208    6.91e-4

Table 5.1: Measurements comparing PMC-ER with the original lens and caustics mutations and stratified regeneration against PMC-ER with the new lens perturbation method and stratified regeneration. *+Lens indicates that the new lens perturbation method is implemented in PMC-ER.

Err^I = \sum_{pixels} \frac{err^2}{tvi(I)}, \qquad P\text{-}Eff^I = \frac{1}{T^I \times Err^I}

where err is the difference in intensity of a pixel between the rendered value and the ground-truth value, tvi(I) is the threshold-versus-intensity term, and T^I is the running time of the algorithm for rendering that image. P-Eff measures how much longer (or shorter) one algorithm would need to run to reach the perceptual quality of another [113].
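A minimal sketch of the metric, assuming images are flat lists of pixel intensities and tvi is a caller-supplied threshold-versus-intensity function; the names are illustrative, not the thesis code:

```python
def p_eff(rendered, reference, tvi, runtime):
    """Perceptually-based mean squared efficiency:
    Err = sum over pixels of err^2 / tvi(I); P-Eff = 1 / (T * Err).
    tvi is assumed to map a pixel's intensity to its perceptual threshold."""
    err = sum((r - g) ** 2 / tvi(g) for r, g in zip(rendered, reference))
    return 1.0 / (runtime * err)
```

With a constant tvi, the metric reduces to the reciprocal of running time multiplied by the plain sum of squared errors, so halving either the error or the time doubles the reported efficiency.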

5.2.1 Comparison between Old Mutations and New Mutations

Table 5.1 shows the comparison between PMC-ER-E with the original perturbations and PMC-ER-E with the new lens perturbation method in rendering the three scenes. Both use stratified regeneration. We would like to isolate the improvement introduced by the new lens perturbation. We use the same total number of iterations for each scene as described in the previous section. The efficiency is improved by a factor of 1.23 for the CB scene, 3.2 for the dragon scene, and 1.82 for the room scene.


Image    Method    Total Iter (N_iteration)    Time (s)    Err       Eff
Box1     E         1225                        4769.1      0.0311    6.74e-3
Box1     E+Reg*    1225                        5366.3      0.0176    1.05e-2
Dragon   E         2430                        13081.3     3.09      2.47e-5
Dragon   E+Reg     2430                        14296.7     0.985     7.10e-5
Room     E         18800                       96575.1     0.0274    3.78e-4
Room     E+Reg     18800                       98158.9     0.0105    9.70e-4

Table 5.2: Measurements comparing PMC-ER with the original lens and caustics mutations and stratified regeneration against PMC-ER with the original mutations and all regeneration methods. *+Reg indicates that the new regeneration methods are implemented in PMC-ER.

Image    Method        Total Iter (N_iteration)    Time (s)     Err        Eff
Box1     E             1225                        4769.1       0.0311     6.74e-3
Box1     E+Lens+Reg    1225                        5718.4       0.0149     1.17e-2
Dragon   E             2430                        13081.3      3.09       2.47e-5
Dragon   E+Lens+Reg    2430                        16897.7      0.164      3.61e-4
Room     E             18800                       96575.1      0.0274     3.78e-4
Room     E+Lens+Reg    18800                       107832.5     0.00569    1.62e-3

Table 5.3: Measurements comparing the original PMC-ER-E against PMC-ER-E using all the new schemes.

The calculation of the perturbation angle proposed in Ch. 4 depends on the path and scene properties. It may predict too large a perturbation-angle range, causing a high failure rate for the caustics perturbation, which in turn introduces bright spots into the rendered results. We observed this effect when rendering the dragon and room scenes. In an experiment, we made the predicted range 10 times smaller than the predicted value and rendered the dragon and room scenes; the results were better. Thus, the prediction of the perturbation range is an important factor in rendering efficiency.

5.2.2 Comparison between Stratified Regeneration and All Three Regenerations

Table 5.2 shows the comparison between the original PMC-ER-E algorithm with stratified regeneration and PMC-ER-E with the new regeneration methods in rendering the three scenes. Both use the original perturbations. The CB scene is rendered with (N_iteration = 1225, N_variance = 199200, N_caustics = 108000); the efficiency is improved by a factor of 1.55. The dragon scene is rendered with (N_iteration = 2430, N_variance = 779700, N_caustics = 30300); the efficiency is improved by a factor of 2.87. The room scene is rendered with (N_iteration = 18800, N_variance = 1363200, N_caustics = 969600); the efficiency is improved by a factor of 2.56.

5.2.3 Combining Everything into PMC-ER-E

We render the Cornell Box scene (Figure 5.4) at 640×480 with (N_iteration = 1225, N_variance = 199200, N_caustics = 108000), which is equivalent to 8 SPP. In comparison, the original PMC-ER equal deposition algorithm renders the same scene with (N_iteration = 1225). Table 5.3 shows that our algorithm improves the efficiency by a factor of approximately 2.14. We render the dragon scene (Figure 5.5) at 900×900 with (N_iteration = 2430, N_variance = 779700, N_caustics = 30300), which is equivalent to 6 SPP. In comparison, the original PMC-ER-E renders the same scene with (N_iteration = 2430). Our algorithm outperforms PMC-ER-E by a factor of approximately 17.53 in Table 5.3.


Figure 5.4: The top image is a Cornell Box image computed using the PMC-ER-E algorithm with all new schemes; the bottom left is the cropped image of the caustics region for the Cornell Box scene computed using PMC-ER-E with (N_iteration = 1225, N_variance = 199200, N_caustics = 108000) and all new schemes; the bottom middle is the cropped image computed by the PMC-ER-E algorithm with (N_iteration = 1225); the bottom right is the cropped image computed by the PMC-ER-E algorithm with (N_iteration = 2450).


Figure 5.5: The top is the rendering result of a dragon scene computed using PMC-ER-E with all new schemes; the middle-row left is the cropped image of the caustics region below the dragon head computed using PMC-ER-E with (N_iteration = 2430, N_variance = 779700, N_caustics = 30300) and all new schemes; the middle-row right is the cropped image computed by the original PMC-ER-E algorithm with (N_iteration = 2430); the bottom-row left is the cropped image computed by the original PMC-ER-E algorithm with (N_iteration = 7290); the bottom-row right is the cropped image computed by the original PMC-ER-E algorithm with (N_iteration = 21870).


Figure 5.6: A room scene computed using PMC-ER-E with (N_iteration = 18800, N_variance = 1363200, N_caustics = 969600) and all new schemes is at the top, and the result of the original PMC-ER-E algorithm with (N_iteration = 18800) is at the bottom. The bottom result contains more artifacts overall.

Our algorithm renders the room scene (Figure 5.6) at 720×405 with (N_iteration = 18800, N_variance = 1363200, N_caustics = 969600), which is equivalent to 128 SPP. In comparison, the original PMC-ER-E renders the same scene with (N_iteration = 18800). Our algorithm shows a better result than PMC-ER-E by a factor of approximately 4.28 in Table 5.3. When we examine the cropped images of the caustics region under the glass ball for the CB scene, the result from our PMC-ER-E algorithm with all new schemes is smoother than the results from the PMC-ER-E algorithm with an equivalent number of samples per pixel and with twice the equivalent number of samples per pixel. The improvement is due to several factors. First, variance-regeneration puts more emphasis on these regions because they contain high perceptual variance. Second, caustics-regeneration concentrates more computation in the caustics path space, i.e., more exploration is applied to the caustics region. When viewing an image, the attention of the viewer is drawn toward the caustic regions, so improving the quality of the rendered caustic regions has a large impact on the perception of a rendered image. Finally, the new lens mutation method increases the perturbation success rate, which increases the exploration of the caustics paths for each population path. As a result, our algorithm generates smoother caustics regions for the dragon and CB scenes. The top of Figure 5.6, rendered using PMC-ER-E with all new schemes, has fewer artifacts than the bottom, rendered using the original PMC-ER-E. We observe that variance-regeneration puts more samples around the regions of the lamp light on the right of the image. There is no obvious caustics region in the scene, but the bright spots generated during the rendering process mostly come from caustics paths; thus, concentrating more computation on the exploration of the caustics path space reduces the variance of the resulting image. In addition, the failure rate of the caustics perturbation is high for this scene. With the new lens perturbation method, the success rate increases substantially. As a result, the rendered image is much smoother.


5.3 Discussion

In this chapter we presented the new lens perturbation method and two path regeneration methods and demonstrated their effectiveness in the PMC-ER framework. Here we briefly discuss how to apply these schemes in the MLT and ERPT frameworks. The new lens perturbation method can directly replace the lens and caustics perturbation methods. To apply the path regeneration methods to MLT, we can follow a strategy similar to the lens sub-path mutation presented in [152], which first generates a set of lens sub-paths and then replaces the lens sub-path in the current seed path to generate a new path. To apply the new regeneration methods to MLT, the variance of the image is first computed along with the estimate of average image energy in the preprocessing phase of MLT, and we can use these regeneration methods to generate a pool of paths passing through high-variance regions and a pool of caustics paths. During the mutation process, we can replace the current seed path with one of the paths from the pool, compute the acceptance probability accordingly, and decide whether the seed path transitions to the newly generated path. This should achieve a result similar to the one presented in our demonstration. When applying the schemes to the ERPT algorithms, we can estimate the perceptual variance information and R_{G2C} of the scene along with the estimate of the average path energy, similar to Section 5.1.3.2. Then we decide how many caustics paths and how many variance-regeneration samples to use for the entire process. Once that is decided, we generate perceptual paths according to the variance image computed in the preprocess phase, along with caustics paths. The energy-deposit ratio R in Section 5.1.3.4 is computed accordingly to weight the deposited energy of a population path according to its properties. This should increase the efficiency of the ERPT algorithms.
Several factors are important, including the number of different perturbation radii and the initial values for each perturbation radius listed for the original PMC-ER algorithm in Chapter 4. In our algorithm there is another important factor: the ratio between the total number of stratified-regeneration paths and special-regeneration paths. If the ratio is high, the image-space exploration rate will be too low; as a result, the variance of the highly explored regions is reduced significantly, but the variance in the other regions is higher. If the ratio is too

low, our algorithm reverts to the original PMC-ER algorithm. In the current implementation, we try several values for the ratio and then choose the best value by visual inspection. In the future, we would like an algorithm that automatically discovers the ratio that renders an image with the lowest mean-squared error. In this chapter we used the perceptual variance of the path radiances as the criterion for variance-regeneration. This distributes more samples to high perceptual-variance regions according to the radiance samples, not the real variance in the rendered result; thus, a new variance evaluation algorithm based on the rendered results is needed. We also find that bright diffuse surfaces near light sources are normally smoother than dark diffuse surfaces in the rendered image. This is because the criterion to start an MCMC chain is whether the path energy plus a random amount of energy between 0 and \tilde{E} exceeds the average path energy \tilde{E}. We make a trade-off between speed and variance: paths passing through higher-energy regions start Markov Chains more easily and generate a smoother result there. This is useful for quickly rendering acceptable images, but it also causes higher variance in dark regions, which may in turn require more iterations to obtain a smooth result in those regions. In the future, we should separate the rendering process into two phases: exploring all regions evenly, and then exploring the high-energy regions more. The even exploration of all regions should use the balance energy transfer algorithm for paths with energy lower than the average path energy, so that low-contribution paths have a chance to distribute their energy and smooth the dark regions. After a fixed number of iterations, we should switch to exploration focused on the high-energy regions; the algorithm used in the second phase should be the same as the current PMC-ER. This may result in a better rendering algorithm.


Chapter 6 Physically-based Animation Rendering with Markov Chain Monte Carlo

A large number of movies employ visual effects created with computer graphics, and photorealistic rendering is a critical element in creating realistic visual effects. An animation is generally rendered as a sequence of static images in a frame-by-frame manner using the Monte Carlo algorithms described in [113, 39]. As a result, the rendered animations normally contain annoying temporal artifacts such as flickering and shimmering if the computation time is not long enough. Therefore, some research on global illumination has begun to pay attention to enhancing the temporal coherence among frames in order to remove temporal artifacts. Caching photons and irradiances of sample paths [19, 153, 105, 32, 170, 145, 136, 46] from preceding and following frames can greatly improve computational efficiency as well as reduce temporal aliasing. The method of [98] caches computed irradiances from different moments of time on the surfaces; the cached values are used to create a basis for temporal interpolation of irradiance, greatly reducing temporal artifacts. However, the fundamental problem of these techniques is that the invalidity of cached samples across frames introduces bias and error into the final result. When only the camera and point lights are allowed to move inside an animated scene, it is easy to reuse all light paths by re-evaluating the path contributions [12, 125, 123, 124, 61] or reweighting samples based on the multiple-importance framework [95]. However, these movement limitations reduce the utility. Havran et al. [63] reused the static-object paths from bi-directional path tracing to reduce temporal flickering. For

each static candidate path, they first check the validity of the path, i.e., that no edge is blocked by moving objects, and then the contribution of valid static paths is evaluated by recomputing the BRDF value at the first hit point from the camera. The reweighting scheme introduces bias and causes unwanted artifacts; in addition, they only intended to reuse static paths, not all paths. Ghosh et al. [48] applied the framework of Sequential Monte Carlo to exploit the temporal coherence among paths by reweighting the samples in previous frames to generate good samples for the current frame; however, their work is limited to environment-map lighting. In this work we intend to exploit the temporal coherence among all path integrals in an entire animation whose animated entities are described by key-framed rigid transformations. Markov Chain Monte Carlo (MCMC) algorithms such as Metropolis Light Transport (MLT) [152, 81], Energy Redistribution Path Tracing (ERPT) [25], and Population Monte Carlo Energy Redistribution (PMC-ER) in Ch. 4 and 5 have demonstrated the strength of exploiting the spatial coherence among path integrals when rendering a static image. However, since all these algorithms were originally formulated to render a static image without considering the temporal coherence among frames in an animation, their results suffer from temporal flickering artifacts. In this chapter we incorporate MCMC algorithms into exploring the temporal coherence among path integrals. We formulate the contribution of a light transport path to all frames in an animation as the sampling distribution for MCMC algorithms. This formulation allows us to extend from rendering a static image to rendering an animation. As a result, a new time perturbation method is designed to reuse path samples with similar properties at different moments of time, exploring the temporal coherence among path integrals in order to reduce the temporal artifacts of an animation.

In addition to having an algorithm to explore the temporal coherence, we also need to face the computational limitations of rendering an entire animation sequence. Rendering a high-quality image of a complex scene normally takes hours to days on a moderate-performance computer, and the task of rendering a physically correct and perceptually pleasant image sequence may take months to finish on a single computer. Our rendering system is therefore designed to render an animation in parallel. We separate the rendering process into two types of tasks: host and client. The host task is the main control, which creates an input file for each client, collects data from each

92 client, and updates the internal data for next iteration. The client tasks are designed to run on a set of available computers such as a Condor [26] pool or a computer cluster. Each client task is assigned a subset of iterative computational tasks. Each computational task contains initial paths traced by a general path tracing algorithm from a designated frame for energy redistribution. For efficient usage of parallel computation, we present a formulation to allow us to locally adjust the rendering parameters in each task without introducing bias. In this chapter we demonstrate the strength of exploiting the temporal coherence among paths by building the temporal perturbations on the PMC-ER algorithms for parallel rendering. The animations rendered with temporal perturbations are perceptually more pleasant.

6.1 Animation Formulation for Markov Chain Monte Carlo

In Ch. 3 we present a complete derivation of the formulation for animation rendering with Markov Chain Monte Carlo algorithms. Here we present a short version of the formulation; please refer to Ch. 3 for more detail. When rendering an animation, the intensity of the j-th pixel of the k-th frame, I_j^k, can be expressed as a path integral:

I_j^k = \int_\Omega f_j^k(\bar{x}^t)\, d\mu(\bar{x}^t)

and

f_j^k(\bar{x}^t) = L_e(x_0^t, x_1^t) G(x_0^t, x_1^t) \left[ \prod_{n=1}^{N_v-2} f_s(x_{n-1}^t, x_n^t, x_{n+1}^t) G(x_n^t, x_{n+1}^t) \right] f_s(x_{N_v-2}^t, x_{N_v-1}^t, x_{N_v}^t) G(x_{N_v-1}^t, x_{N_v}^t) W_j^k(x_{N_v-1}^t, x_{N_v}^t)

= W_j^k(\bar{x}^t) f(\bar{x}^t)

where t is the moment when the light transport event occurs; \Omega represents the set of all valid light transport paths for the entire animation that begin on a point of a light surface, interact with the scene surfaces, and end at the camera, with such a path denoted \bar{x}^t; \mu(\bar{x}^t) is the surface-area product measure for the path \bar{x}^t; f_j^k is the contribution brought by a light transport path to pixel j of frame k; f_s is the bidirectional scattering distribution function; L_e is the emitted radiance of a light source; W_j^k is the

sensor response; G is the geometry term; and f is the radiance carried by the path. The contribution function can be decomposed into two components: the sensor response, W_j^k, and the radiance carried by the path, f. Because the radiance is independent of the response of the given pixel, the radiance can be reused for all pixels. In other words, b = \int_\Omega f(\bar{x}^t)\, d\mu(\bar{x}^t) can be used to express the total radiant energy delivered through the image sweep, as shown in Figure 3.2. The intensity of each pixel can be estimated by Monte Carlo integration:

I_j^k = \int_\Omega W_j^k(\bar{x}^t) f(\bar{x}^t)\, d\mu(\bar{x}^t) \approx \frac{1}{N} \sum_{i=1}^{N} W_j^k(\bar{X}_i^{T_i}) \frac{f(\bar{X}_i^{T_i})}{p(\bar{X}_i^{T_i})}

where p(\bar{X}_i^{T_i}) is the importance sampling function used by Monte Carlo integration. The idea of Markov Chain Monte Carlo algorithms is to generate a set of correlated paths according to the radiance they bring to the image sweep. Through the process, paths mutate, and newly mutated paths are accepted or rejected with a carefully chosen probability to ensure that paths are sampled according to the radiance they bring to the image sweep. In other words, p(\bar{x}^t) = f(\bar{x}^t)/b is implicitly used as the target distribution density. Under this formulation, we can design temporal and spatial mutations by considering the coherence among all valid paths in the entire animation. Since stratified exploration of the image plane is important for reducing variance when rendering images, the random exploration of the image plane by the Metropolis algorithm is a big limitation in improving rendering efficiency. Thus, a hybrid algorithm, energy redistribution with balance energy transfer, is proposed to combine the independently sampling path tracing algorithm with the correlatedly sampling Metropolis algorithm. The idea is to generate a set of independent samples by path tracing as seed paths. The estimate of the pixel intensity can be calculated as

I_j^k = \frac{1}{N_{MC}} \sum_{i=1}^{N_{MC}} \frac{f(\bar{X}_i^{T_i})}{p(\bar{X}_i^{T_i})} \frac{1}{N_{MCMC}} \sum_{l=1}^{N_{MCMC}} W_j^k(\bar{X}_{il}^{T_{il}}) \qquad [6.1]

Furthermore, if we resample the paths according to their importance energy, the weight of each perturbation is equal. We can use a deterministic sampling strategy similar to [40] to generate a total of E_i(\bar{X}_i^{T_i})/\tilde{E} paths, where E_i(\bar{X}_i^{T_i}) = f(\bar{X}_i^{T_i})/p(\bar{X}_i^{T_i}). In other words, each seed path starts a number of Markov Chains proportional to its energy. The equal weighting scheme now becomes energy redistribution with equal deposition, and the estimator can be calculated as

I_j^k = \frac{\tilde{E}}{N_{MC}} \sum_{i=1}^{N_{MC}} \sum_{m=1}^{N_{path}} \frac{1}{N_{MCMC}} \sum_{l=1}^{N_{MCMC}} W_j^k(\bar{X}_{il}^{T_{il}}) \qquad [6.2]

where N_{path} = N_{floor} + \mathbf{1}[E(\bar{X}_i^{T_i}) - (N_{floor} + U(0,1))\,\tilde{E} > 0] and N_{floor} = \lfloor E(\bar{X}_i^{T_i}) / \tilde{E} \rfloor. The variance and expectation analysis are shown in Ch. 3.
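Per the definition of N_path in Eq. 6.2, the number of chains started from a seed path is its energy over the average, rounded down, plus one extra chain with probability equal to the fractional remainder, so the expected chain count equals E/\tilde{E}. A sketch, with the random draw exposed as a parameter for testing:

```python
import math
import random

def num_chains(path_energy, avg_energy, u=None):
    """Number of Markov Chains started from a seed path with energy E:
    N_floor = floor(E / E_avg), plus one extra chain when
    E - (N_floor + U(0,1)) * E_avg > 0, i.e., with probability equal
    to the fractional remainder of E / E_avg."""
    if u is None:
        u = random.random()
    n_floor = math.floor(path_energy / avg_energy)
    extra = 1 if path_energy - (n_floor + u) * avg_energy > 0 else 0
    return n_floor + extra
```

This probabilistic rounding keeps the estimator unbiased while letting every chain deposit the same amount of energy.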

6.2 Population Monte Carlo Energy Redistribution

When rendering an animation, we distribute the computation into tasks according to the starting time of the initial paths belonging to the same frame. Thus, in the following we present PMC-ER Equal Deposition (PMC-ER-E) for a single computational task. We then describe the details of time perturbation. Finally, we present a formulation that allows us to locally adjust the sampling parameters for each task without introducing bias.

6.2.1 Population Monte Carlo Energy Redistribution with Equal Deposition

Figure 6.1 shows the PMC-ER-E algorithm executed in a single computational task on a single computer. At the beginning of each computational task, a pool of pixel positions and a pool of caustics paths are generated; they are used for the initial population paths and for the replacement paths in the resampling process. In each inner loop z, we spatially or temporally perturb each member path N_equal times to redistribute its energy to the neighborhood of the path according to a mixture distribution:

K_i^{(o)}(\tilde{y}^{(z)} \mid \tilde{Y}_i^{(z-1)}) = \sum_{d_h} \alpha_{i,d_h}^{(o)} T_{spatial}(\tilde{Y}^{(z-1)} \rightarrow \tilde{y}^{(z)}; d_h) + \sum_{t_h} \alpha_{i,t_h}^{(o)} T_{temporal}(\tilde{Y}^{(z-1)} \rightarrow \tilde{y}^{(z)}; t_h)

When perturbing a path, we first choose one perturbation from the set of spatial and temporal perturbations according to the weights \alpha_i^{(o)}, where \sum_{perb} \alpha_i^{(o)} = 1. The current path is then perturbed to generate a new path according to the perturbation parameter, d_h for a spatial perturbation or t_h

for a temporal perturbation. The acceptance probability is chosen as

A(\tilde{Y}^{t_1} \mid \tilde{X}^{t_0}) = \min\left(1.0,\; \frac{f(\tilde{Y}^{t_1})\, T_{op\text{-}type}(\tilde{X}^{t_0} \mid \tilde{Y}^{t_1})}{f(\tilde{X}^{t_0})\, T_{op\text{-}type}(\tilde{Y}^{t_1} \mid \tilde{X}^{t_0})}\right)

where T_{op-type} is the tentative transition probability of the chosen perturbation type. Detailed balance is maintained because the probability flow in each direction equals \min(f(\tilde{X}^{T_0}) T_{op\text{-}type}(\tilde{Y}^{T_1} \mid \tilde{X}^{T_0}),\; f(\tilde{Y}^{T_1}) T_{op\text{-}type}(\tilde{X}^{T_0} \mid \tilde{Y}^{T_1})), which is symmetric in \tilde{X}^{T_0} and \tilde{Y}^{T_1}. After each perturbation, E_d = R_i^k e_d energy is deposited at the pixel position of the newly mutated path, \tilde{Y}_i^{(z)}. The energy-deposit constant, R_i^k, is computed according to the properties of the frame, such as the number of samples in each pixel and the number of caustics paths, in order to keep the energy delivered from each pixel or each type of path statistically the same. At the end of each o loop, \tilde{E} energy is removed from the path's E_{i,remain}, and the resampling process is applied. Paths in the population are eliminated according to their E_{i,remain}; any energy remaining in an eliminated path is deposited by equal deposition onto the image sweep. To maintain a constant population size, we replace each eliminated path with a newly generated path from the pool of stratified and variance-regeneration pixel positions or with an unused path from the pool of caustics paths of Ch. 5. We also adapt the \alpha values according to the performance of the perturbations, using the acceptance probability of each perturbation to determine the probability of each perturbation choice, as in Ch. 4. After finishing the o loop, the algorithm deposits the remaining energy of the population paths onto the image sweep before exiting. The remaining energy in eliminated paths and in terminated population paths at the end of the task is deposited onto the image sweep to guarantee that the number of Markov Chains starting from each path is proportional to its initial energy, maintaining unbiasedness.
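The accept/reject step of the perturbation is the standard Metropolis-Hastings rule. A sketch with hypothetical names, where f_new/f_old are the path radiances and t_fwd/t_rev the tentative transition densities in each direction:

```python
import random

def accept(f_new, t_rev, f_old, t_fwd, u=None):
    """Metropolis-Hastings acceptance: the new path Y replaces X with
    probability min(1, f(Y) T(X|Y) / (f(X) T(Y|X)))."""
    if u is None:
        u = random.random()
    a = min(1.0, (f_new * t_rev) / (f_old * t_fwd))
    return u < a
```

A proposal with higher radiance (and symmetric transition densities) is always accepted; dimmer proposals are accepted in proportion to their radiance ratio.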

6.2.2 Time Perturbation

In order to exploit the temporal coherence, we design a new path perturbation strategy called time perturbation. In this section we first describe how to use our temporal perturbation method to reconstruct the path temporally and how to calculate the tentative transition probability accordingly.


1   get \tilde{E}, e_d, T^k, k, R^k_{C2G}, N^k_{sample}, N^k_{caustics}, N_{equal}, N_{population}, N^k_{uniform} from the host
2   generate a pool of pixel positions and a set of caustics paths
3   generate the initial population of paths in s = 1 at the moment T^k
4   for o = 1, ..., N_{iteration}
5       determine \alpha_i^{(o)} for each perturbation
6       for i = 1, ..., N_{population}
7           if E_{i,remain} + U(0,1)\tilde{E} > \tilde{E}
8               for z = 1, ..., N_{equal}
9                   generate \tilde{Y}_i^{(z)} \sim K_i^{(o)}(\tilde{y}^{(z)} \mid \tilde{Y}_i^{(z-1)})
10                  deposit E_d = R^k e_d on \tilde{Y}_i^{(z)}
11                  \tilde{Y}_i^{(z)} = (U(0,1) < a(\tilde{Y}_i^{(z)} \mid \tilde{Y}_i^{(z-1)})) ? \tilde{Y}_i^{(z)} : \tilde{Y}_i^{(z-1)}
12              E_{i,remain} -= \tilde{E}
13          w_i = E_{i,remain}
14      resample the population: elimination and regeneration
15  deposit the remaining energy in all population paths

Figure 6.1: The PMC-ER equal deposition iteration loop. \tilde{E} is the average path energy, e_d is the deposition energy of each perturbation, R^k_{C2G} is the ratio between caustics paths and all paths in this frame, N_{population} is the size of the population, N^k_{sample} is the total number of initial seed paths, N^k_{uniform} is the number of stratified samples per pixel, N^k_{caustics} is the number of caustics-regeneration paths, T^k is the time when the shutter opens, k is the index of the frame, N_{iteration} is the total number of task iterations, N^k_{variance} is the number of variance-regeneration paths, and the per-pixel variance-regeneration distribution is proportional to the perceptual variance of the sample radiances in the given pixel, frame by frame. The system uses path tracing with a fixed number of samples per pixel to estimate \tilde{E}, e_d, R^k_{C2G}, and this distribution. U(0,1) generates a random number uniformly distributed between 0 and 1. E_{i,remain} is the energy remaining in population path i after the inner energy redistribution loops. R^k is the energy-adapted constant discussed in Section 6.2.3.

The main idea behind time perturbation is that when a path \bar{X}^{T_0} exists at time T_0, there may be a correlated path \bar{Y}^{T_1} whose vertices are object-based rigid transformations of the vertices in the

path X̃^T0 [15]. The object-based rigid transformation of a surface point from T0 to T1 is computed as x^T1 = M^T1 (M^T0)^-1 x^T0, where M^t denotes the rigid transformation for the surface from object space to world space at time t, and x^t is the world coordinate of the surface point x_obj at t. A valid perturbed path Ỹ^T1 can be generated if the visibility check passes for each edge. However, this simple time perturbation may fail if a path contains a specular vertex, because the relation between the input and output directions at a specular vertex is a delta function. Thus, a more complex scheme is developed for paths containing specular vertices. Figure 6.2 shows an example of a valid time perturbation of a path with specular vertices. The steps we take to perturb the path are as follows. First, we identify the specular sub-paths of the form {L|[(D|L)D]}(S+D|S+)+{[D(D|E)]|E}. Second, the specular sub-paths are reconstructed by either backward sampling or forward sampling; the criterion for choosing a sampling direction and the details of reconstruction are described shortly. Finally, the diffuse vertices not yet updated are object-based rigid transformed to their new locations, and the remaining edges are linked and updated accordingly. The first and last steps are trivial, so we describe the details of step 2. The criterion used to choose between backward sampling and forward sampling is: · Case 1: if the path is an eye sub-path of the form {L|[(D|L)D]}(S+D|S+)+E, backward sampling from the eye vertex is chosen to reconstruct the sub-path, ensuring that the eye sub-path passes through the camera. · Case 2: if the path is a light sub-path of the form L(S+D|S+)+D(D|E), forward sampling is chosen to reconstruct the sub-path, ensuring that the light sub-path starts from a valid light source. · Case 3: if the path has the form (L|D)D(S+D|S+)+D(D|E), forward sampling or backward sampling is chosen at random to reconstruct the sub-path.
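The object-based rigid retiming x^T1 = M^T1 (M^T0)^-1 x^T0 can be sketched with plain 4×4 matrices. The helper names (`retime_vertex`, `rigid_inverse`) are ours, not the thesis's; the rigid-inverse shortcut assumes M really is a rotation-plus-translation (no scale).

```python
def mat_vec(M, v):
    """Apply a 4x4 row-major matrix to the homogeneous point (x, y, z, 1)."""
    p = (v[0], v[1], v[2], 1.0)
    return tuple(sum(M[r][c] * p[c] for c in range(4)) for r in range(3))

def mat_mul(A, B):
    return [[sum(A[r][k] * B[k][c] for k in range(4)) for c in range(4)]
            for r in range(4)]

def rigid_inverse(M):
    """Invert a rigid transform [R|t]: the inverse is [R^T | -R^T t]."""
    R = [[M[c][r] for c in range(3)] for r in range(3)]  # transpose of rotation
    t = [-sum(R[r][c] * M[c][3] for c in range(3)) for r in range(3)]
    inv = [[R[r][0], R[r][1], R[r][2], t[r]] for r in range(3)]
    inv.append([0.0, 0.0, 0.0, 1.0])
    return inv

def retime_vertex(x_t0, M_t0, M_t1):
    """x^T1 = M^T1 (M^T0)^-1 x^T0: map a world-space vertex from the
    object's pose at T0 to its pose at T1."""
    return mat_vec(mat_mul(M_t1, rigid_inverse(M_t0)), x_t0)
```

For a purely translating object, a vertex simply follows the object's displacement, which is the behavior a diffuse vertex should exhibit under time perturbation.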
After choosing the reconstruction direction, we reconstruct the sub-path in that direction. In the following we describe how to reconstruct a sub-path backward; a sub-path is constructed forward in a similar manner, except that the direction of traversal is reversed.

We would like to reconstruct a sub-path x^t0_l · · · x^t0_m of the form D(S+D|S+)+D by using backward sampling. First, the positions of vertices m and m−1 are object-based rigid transformed from t0 to t1 to create y^t1_m and y^t1_m-1. Next, the edge y^t1_m y^t1_m-1 is linked to form the starting ray for constructing the sub-path. Since x^t0_m-1 is a specular vertex, we choose a specular bounce at y^t1_m-1 to find the next vertex, y^t1_m-2. Then, according to the bounce at each vertex of the original path, we use two different methods to construct a sub-path with the same length as the original one: if x^t0_n, where l+1 ≤ n ≤ m−2, is a specular vertex, we choose a specular bounce to find the next vertex; otherwise we first transform the vertex x^t0_n-1 to get y^t1_n-1 by the object-based rigid transform and link y^t1_n to y^t1_n-1 to form the new edge.

After the path is reconstructed temporally, the tentative transition probability for the time perturbation method is computed by considering all possible ways to reconstruct the path. Since we divide the path into specular sub-paths and diffuse sub-paths, the tentative transition probability can be computed by multiplying the tentative transition probabilities of the sub-paths. For a diffuse sub-path, the tentative transition probability is one, because the transformation and the edge linking are deterministic. For a specular sub-path, the tentative transition probability is computed according to the three cases discussed in the previous paragraph. The overall tentative transition probability is

    T_th,time(X̃^t0 → Ỹ^t1) = (1 / t_h) · ∏_{i=1}^{N_sub} T^i_specular

where

    T_specular = T_backward                            (Case 1)
                 T_forward                             (Case 2)
                 0.5 · T_forward + 0.5 · T_backward    (Case 3)

and

    T_forward  = ∏_{j=l+1}^{m-1} { 1 / (G(y^t1_j, y^t1_j+1) |cos θ_j,out|)  if y^t1_j ∈ S;  1 otherwise }
    T_backward = ∏_{j=m-1}^{l+1} { 1 / (G(y^t1_j, y^t1_j-1) |cos θ_j,in|)   if y^t1_j ∈ S;  1 otherwise }

Here T_forward is the tentative transition probability when reconstructing the sub-path forward, T_backward is the tentative transition probability when reconstructing it backward, G is the geometry term, and θ_j,in and θ_j,out are the angles between the surface normal and the incoming/outgoing light ray at y^t1_j.
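The per-sub-path factors above reduce to a product over the specular vertices only (diffuse vertices contribute 1). The sketch below assumes a hypothetical vertex record carrying a specular flag and the two cosines, and a caller-supplied geometry-term function `G`; none of these names come from the thesis.

```python
def t_forward(verts, G, l, m):
    """Forward reconstruction of the sub-path y_l .. y_m: each specular
    vertex contributes 1 / (G(y_j, y_{j+1}) * |cos theta_{j,out}|)."""
    p = 1.0
    for j in range(l + 1, m):
        if verts[j]["specular"]:
            p *= 1.0 / (G(verts[j], verts[j + 1]) * abs(verts[j]["cos_out"]))
    return p

def t_backward(verts, G, l, m):
    """Backward reconstruction: same product traversed in reverse,
    using the incoming cosine and the previous-vertex geometry term."""
    p = 1.0
    for j in range(m - 1, l, -1):
        if verts[j]["specular"]:
            p *= 1.0 / (G(verts[j], verts[j - 1]) * abs(verts[j]["cos_in"]))
    return p

def t_specular(case, verts, G, l, m):
    """Per-sub-path factor for the three reconstruction cases."""
    if case == 1:
        return t_backward(verts, G, l, m)
    if case == 2:
        return t_forward(verts, G, l, m)
    return 0.5 * t_forward(verts, G, l, m) + 0.5 * t_backward(verts, G, l, m)
```

Case 3 averages the two directions because either may be chosen with probability one half.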

6.2.3 Iteratively Distributing Samples in a Frame-by-Frame Manner

Since the variance-regeneration and caustics-regeneration proposed in Ch. 5 enhance rendering efficiency, we would like to include them in our rendering system. The weighting scheme is important to guarantee unbiasedness of the final result, at the expense of computing the distribution of pixel positions and caustics paths for the entire animation. In other words, the system would need to generate the variance-regeneration pixel positions and caustics-regeneration paths before creating the iterative computation tasks in order to weight the deposited energy properly. However, we would like to distribute this regeneration to each computation task in order to take full advantage of parallel rendering. By carefully inspecting the energy distribution algorithm, we find that the energy delivered to the entire image sweep can be computed in an iteration-frame-based manner as

    Ẽ = Σ_{m=1}^{N_iteration} Σ_{k=1}^{N_frame} Σ_{n=1}^{N^{m,k}_sample} E(X̃_mkn) × N_MCMC × e_d
      = Σ_{m=1}^{N_iteration} Σ_{k=1}^{N_frame} Ẽ(m, k)

where E(·) is the energy of a seed path, N_iteration is the total number of iterations used, N_frame is the total number of frames in the animation, N^{m,k}_sample is the total number of samples distributed to the k-th frame in the m-th iteration, and Ẽ(m, k) = Σ_{n=1}^{N^{m,k}_sample} E(X̃_mkn) × N_MCMC × e_d denotes the energy delivered at frame k in iteration m. If we keep Ẽ(m, k) statistically unchanged in each frame of each iteration, the energy delivered to the entire image sweep stays unchanged. This derivation allows us to calculate the energy-deposit ratio in each frame, according to the number of stratified pixel positions assigned to each pixel, the number of variance-regeneration pixel positions in each pixel, and the total number of generated caustics-regeneration paths in that frame, without introducing the bias discussed in Ch. 5.


6.3 Animation Rendering System

The MCMC formulation allows us to exploit the temporal and spatial coherence among paths when rendering an animation sequence. However, rendering a physically correct animation on a single computer is impractical. Thus, to make animation rendering practical, we built a parallel animation rendering system. This section provides an overview of that system.

6.3.1 Overview

Figure 6.3 shows a flow diagram of our system, which divides the process into two sets of tasks: host and client. The host task maintains the execution of the rendering process and is its main controller. Client tasks are iterative computation tasks distributed to run on a computer pool; each computational task renders a specific set of initial paths. We discuss the details of each process in the following paragraphs.

6.3.2 Host Process

The left part of Figure 6.3 shows the processes run by the host. The host, the main control unit of our rendering system, keeps the rendering process in one of four states: preprocess, script-generation, data-collection, and information-update.

1. Preprocess: this state computes the average path energy Ẽ, the deposit energy of each perturbation e_d, the caustics path ratio R^k_C2G for each frame, and the degree of sample requirement for each frame k.

2. Script-generation: this state creates the input file of each client task according to the information collected in the preprocess state or gathered in the information-update state. The input files are sent to a pool of computers for parallel rendering.

3. Data-collection: when a client process finishes its computation, its intermediate data is sent to a predefined directory. The host periodically checks for the arrival of data and reads the data from finished client processes to update the host's intermediate rendered results. After collecting the data, the host turns control over to the information-update state.

4. Information-update: after collecting the data from all client processes, the host computes the parameters for the next iteration.

6.3.3 Client Process

A client task runs energy redistribution for a subset of initial paths on a computer from the pool. In our system we use the time of the initial paths as the distribution criterion, i.e., the set of initial paths valid for a specific frame k is distributed to one task. In Section 6.2.3 we showed that local adjustment of the rendering parameters for each task does not introduce bias into the final result when a proper weighting scheme is applied. Three new functions are needed for animation rendering: a description of the object animation, the acceleration structure, and a data structure to record samples. The following sections give details about these three functions.

6.3.3.1 Key-framed Rigid Transformation

Currently, we only implement animated objects with a set of key-framed rigid transformations. Each key represents the rigid transformation of the object at a moment of time and contains (p, s, q, t), describing the position, scale, orientation (as a quaternion), and the moment of time. We assume that the key frames are dense enough that interpolation gives a proper object rigid transformation at any moment without obvious artifacts. We use linear interpolation (LERP) for vector values and spherical linear interpolation (SLERP) for quaternion values; the idea is shown in Figure 6.4. With this representation it is easy to compute the rigid transformation of an object at any moment of time, and thus to perform the vertex rigid transformations needed in Section 6.2.2.
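A minimal sketch of the key interpolation follows. The key layout mirrors the (p, s, q, t) description above, but the function names and the shorter-arc/near-parallel handling in `slerp` are standard practice we have assumed, not details taken from the thesis.

```python
import math

def lerp(a, b, u):
    """Linear interpolation of vector keys (position, scale)."""
    return tuple((1.0 - u) * x + u * y for x, y in zip(a, b))

def slerp(q0, q1, u):
    """Spherical linear interpolation of unit quaternions (w, x, y, z)."""
    dot = sum(a * b for a, b in zip(q0, q1))
    if dot < 0.0:                       # take the shorter arc
        q1, dot = tuple(-c for c in q1), -dot
    if dot > 0.9995:                    # nearly parallel: fall back to lerp
        q = lerp(q0, q1, u)
        n = math.sqrt(sum(c * c for c in q))
        return tuple(c / n for c in q)
    theta = math.acos(dot)
    s = math.sin(theta)
    w0 = math.sin((1.0 - u) * theta) / s
    w1 = math.sin(u * theta) / s
    return tuple(w0 * a + w1 * b for a, b in zip(q0, q1))

def key_at(keys, t):
    """Interpolate the bracketing keys at time t.
    keys: time-sorted list of dicts with 'p', 's', 'q', 't'."""
    if t <= keys[0]["t"]:
        k = keys[0]; return k["p"], k["s"], k["q"]
    if t >= keys[-1]["t"]:
        k = keys[-1]; return k["p"], k["s"], k["q"]
    for k0, k1 in zip(keys, keys[1:]):
        if k0["t"] <= t <= k1["t"]:
            u = (t - k0["t"]) / (k1["t"] - k0["t"])
            return (lerp(k0["p"], k1["p"], u),
                    lerp(k0["s"], k1["s"], u),
                    slerp(k0["q"], k1["q"], u))
```

The returned (p, s, q) triple is then assembled into the object-to-world matrix M^t used by the rigid vertex transformation.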


6.3.4 Temporal and Spatial Acceleration Structures

A good aggregate acceleration structure is critical for efficient global illumination. If an intersection test had to query every object in the scene, the intersection computation would be too slow for any application. For static rendering, kd-trees greatly improve the efficiency of ray tracing, so we build our animation acceleration structure on the kd-tree concept. We could use the same kd-tree with bounding boxes that bound each animated object over the entire animated period of time. However, this is inefficient: an animated object may move over a large extent of the world, so almost no intersection queries against the object can be culled by the acceleration structure. Instead, the keyframed-animation concept inspires us to chop the motion of each object into N_chop sub-motions. Each sub-motion is bounded by a bounding box tagged with the start and end time of that motion, and a kd-tree is constructed by sorting all bounding boxes from all objects. N_chop is a user-specified per-object parameter: if the object moves rapidly, N_chop is made large to avoid creating a huge bounding box for a single sub-motion. The intersection query is similar to that of the original kd-tree, except that the time stamp of the ray must be checked against the tagged start and end times of a bounding box. This is not optimal, but it gives acceptable performance.
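The motion-chopping idea can be sketched as follows. This is an illustration under our own assumptions: the object is summarized by a center trajectory plus a bounding radius, and each slice's box conservatively bounds the slice endpoints (which is exact only for linear motion within a slice). The class and function names are ours.

```python
class TimedBox:
    """Axis-aligned box bounding one sub-motion of an object,
    tagged with the [t0, t1] interval during which it is valid."""
    def __init__(self, lo, hi, t0, t1, obj_id):
        self.lo, self.hi, self.t0, self.t1, self.obj_id = lo, hi, t0, t1, obj_id

def chop_motion(center_at, radius, t_start, t_end, n_chop, obj_id):
    """Split an object's motion into n_chop time slices; each slice's box
    bounds the object's swept positions over that slice."""
    boxes = []
    for i in range(n_chop):
        t0 = t_start + (t_end - t_start) * i / n_chop
        t1 = t_start + (t_end - t_start) * (i + 1) / n_chop
        p0, p1 = center_at(t0), center_at(t1)
        lo = tuple(min(a, b) - radius for a, b in zip(p0, p1))
        hi = tuple(max(a, b) + radius for a, b in zip(p0, p1))
        boxes.append(TimedBox(lo, hi, t0, t1, obj_id))
    return boxes

def candidates(boxes, ray_time):
    """The extra test a kd-tree traversal performs per box: a box can only
    be hit if the ray's time stamp falls inside its tagged interval."""
    return [b for b in boxes if b.t0 <= ray_time <= b.t1]
```

A fast-moving object thus contributes many small, short-lived boxes rather than one huge one, so most boxes are rejected by the time test alone.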

6.3.5 Sample Recorder

For a static image, we only need to implement a film that records the samples dropped on the image plane using a spatial filter. However, when we render a sequence of images to form an animation, we also need a temporal filter and a data structure to record the samples. In our implementation, we use a box filter corresponding to the camera shutter. The easiest implementation would be a data structure similar to a scroll of film, containing a set of images, one for each frame of the animation. However, the memory needed is huge for rendering a long animation. For example, the memory required for a 4-second animation at a frame rate of 30 frames per second and a resolution of 640 × 480 is about 1.2 GB, which is beyond the limit of some machines; in addition, we need to record several pieces of film-sized auxiliary information. Thus, this type of implementation is not practical, and we modify it as follows. Because the initial paths are distributed to each iterative computational task based on their beginning time, the center of energy redistribution is known, and our time perturbation should distribute most samples to frames centered around the beginning frame. Thus, we keep a proper number of images in memory to record the centered perturbed samples, and use a bucket data structure on disk to record off-center perturbed samples. Figure 6.5 illustrates this concept. When a task finishes, the images recording center samples and the buckets recording off-center samples are sent back to a predetermined directory. The data-collection daemon checks and reads all data in that directory to update the intermediate results of the host process.

6.4 Results

In our implementation, we obtained available computers from a Condor pool [26] to run client processes in parallel. A daemon communicates with the manager of the Condor pool to submit the iterative computational tasks, query the completion of each process in the pool, and collect data from the pool. Details about the Condor system are available at www.cs.wisc.edu/condor. To evaluate the performance of exploiting temporal coherence among path integrals, we compared animations rendered with time perturbations to animations rendered frame by frame on the Cornell Box (CB) scene, a room scene, a chessboard scene, and a basement scene, starting with the same number of initial PT paths and using the same settings for the regeneration methods. Since Condor distributes computational tasks based on the load of the system and the priority of the user, the completion time of each task is not predictable. Comparing overall rendering time between rendering methods would not be fair because the load on Condor varies over time; even the same rendering algorithm may show very different overall rendering times. Thus, the statistics listed in Table 6.2 were collected from the computation of the first frame in the first iteration of each animation on a local machine. Table 6.1 lists the scene-specific parameters. Rendering an iterative task of an animation with temporal perturbations takes about 10% longer than rendering frame by frame. In addition, rendering

with temporal perturbations requires roughly an additional 60 s to update the intermediate result. In each task we allocate an extra 20 frames around the center frame of the task to record the majority of perturbed samples, which avoids frequent writes of off-center sample records to disk and saves disk space for the off-center records; this requires an extra 200 MB of memory. When a perturbed sample falls outside this range, the task writes an off-center record of time, pixel position, and radiance to disk. At the end of the task, the center frames and off-center records are all sent back to the system. A rendering task with temporal perturbations generates 48 to 54.2 times more data than one run in the frame-by-frame manner. In Fig. 6.6, the brightness around the edge of the caustics region rendered frame by frame changes drastically between consecutive frames. In comparison, the caustics region in the animation rendered with time perturbations looks smooth and has similar shape and brightness in consecutive frames. In the accompanying animations, the animation rendered frame by frame has serious flickering artifacts in the caustics regions. We demonstrate the strength of temporal exploration for creating temporally consistent lighting by intentionally using a less smooth caustics region; later, in CB2, we use another set of parameters to generate a smoother animation of the same CB animated scene. Fig. 6.8 shows the 60th frame of the CB2, room, chess board, and basement scenes rendered with temporal perturbation. Checking the accompanying animation results, we find far fewer temporal artifacts in animations rendered with temporal perturbations than in the results without them. In addition, the algorithm with temporal perturbations generates a converged animation faster than the one without. For the sake of comparison, we rendered each frame at a transient moment. However, our system can easily render frames with motion blur by setting a non-zero shutter-open period, because our time perturbation can perturb a path to another path at any moment of time in the entire animation. Fig. 6.7 demonstrates motion blur added to a CB animation.
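The shutter model behind this is simple to sketch. In the toy functions below (our names, not the system's), the shutter-open period is a fraction of the frame period; a zero period degenerates to the delta (instantaneous) shutter used in the comparisons above.

```python
import random

def sample_path_time(frame_index, fps=30.0, shutter_open=0.5):
    """Pick a path time under a box temporal filter: uniform over the
    shutter-open interval at the start of the frame. shutter_open is the
    fraction of the frame period the shutter stays open; 0 would mean a
    delta (instantaneous) shutter, i.e. no motion blur."""
    t_frame = frame_index / fps
    return t_frame + random.random() * shutter_open / fps

def frame_of(t, fps=30.0, shutter_open=0.5):
    """Box filter: a sample at time t contributes to frame k iff t lies
    inside frame k's shutter interval, else it is discarded."""
    k = int(t * fps)
    return k if t - k / fps <= shutter_open / fps else None
```

With a non-zero `shutter_open`, paths whose times spread across the interval average moving geometry into the frame, which is exactly how the blur in Fig. 6.7 arises.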


Scene      N_iteration^host   N_uniform   N_variance   S_caustics
CB1        1                  4           0            0
CB2        1                  4           36864000     0
Room       4                  4           36864000     1.25
Chess      8                  4           36864000     1
Basement   8                  4           36864000     1

Table 6.1: Measurements comparing temporal exploration with frame-by-frame rendering. In all cases, we used a population size of 5000, three spatial perturbations with radii of 5, 10, and 50 pixels, and two temporal perturbations with radii of 0.066 and 0.165 s. In the inner loop z, we perturbed each member 20 times, eliminated 40% of the population based on its remaining energy, and regenerated new paths to keep the number of members in the population constant. In the preprocess we used 16 samples per pixel (SPP) to estimate Ẽ, e_d, R^k_C2G, and the per-frame sample requirement. In the rendering process, N_iteration^host is the total number of iterations, N_uniform is the number of samples per pixel for stratified regeneration, and N_variance is the number of variance-regeneration samples used in an iteration. The number of caustics-regeneration paths in each frame is computed as S_caustics × N^k_expect. Thus, we specify the parameters used to render the animation scenes as (N_iteration^host, N_uniform, N_variance, S_caustics).


Scene      Method              Clt (s)   Dt (s)   Dk (MB)   Mk (MB)   Err
CB1        frame-by-frame      2532      1        5         256       1.93e-2
           time perturbation   2788      59       240       453       1.82e-2
CB2        frame-by-frame      3213      1        5         266       1.5e-2
           time perturbation   3559      62       243       463       6.29e-3
Room       frame-by-frame      1743      1        5         606       3.94e-2
           time perturbation   1844      65       263       803       8.37e-3
Chess      frame-by-frame      2093      1        5         366       5.98e-2
           time perturbation   2244      63       253       563       4.51e-1
Basement   frame-by-frame      2743      1        5         746       2.61e-1
           time perturbation   2954      68       271       943       2.08e-2

Table 6.2: The third column (Clt) is the time needed to finish the first frame in the first iteration of each animation. The fourth column (Dt) is the time required to update the data from the first frame in the first iteration. The fifth column (Dk) is the disk usage for the data when the first frame in the first iteration is finished. The sixth column (Mk) is the memory consumption in computing the task of the first frame in the first iteration. The seventh column (Err) is the average perceptual error, defined in [40], over the entire animation.


6.5 Discussion

Time perturbations find correlated paths across different frames, and our system needs to store the contribution of each correlated path in a set of frames. Theoretically, our algorithm could explore all temporal coherence in the entire animation, but in practice the system has a limited amount of memory. Therefore, we made a trade-off among the extent of temporal exploration, the amount of memory and disk usage, and the time for updating each process. We choose the set of temporal perturbation radii and the number of perturbations in each Markov chain so that samples are distributed roughly within a 20-frame radius of the starting frame, which limits the memory usage for center samples and the disk space for off-center samples. Although this limits the extent of temporal exploration, the failure rate of temporal perturbations grows as the perturbed distance from the starting frame grows, so the limit should have only a slight effect on the rate of generating temporally consistent animations. A more efficient memory, disk, and update scheme, or a better task distribution scheme, is desirable in the future to avoid limiting the temporal exploration radii and to relieve the burden of memory and disk space requirements and the time for updating task-based data.

Currently we use visual inspection and a static perceptual error metric to check the rendered animations. This is time consuming and not ideal. Several animation measurement metrics have been developed for video compression, where the main issue is that data loss causes blockiness artifacts. For MC algorithms, however, the main issue is that noise pops up randomly in the scene and causes serious temporal artifacts, so a proper metric must take this independent disturbance into account. Such an MC-based perceptual metric would help us evaluate rendering results and further adjust our kernels to concentrate on perceptually important regions.
In the current implementation we used key-framed rigid transformations to animate entities in the scene. Our algorithm can easily be adjusted to render objects with other key-framed properties such as material, light intensity, and other information. In addition, it is easy to extend the implementation to include skin-skeleton animation and morph animation. For skin skeletons, each vertex's position is related to the skeleton's position and orientation at the moment. We can use

the intersection point's parameters (u, v) and the three vertices' positions to compute the corresponding position at different moments of time. Similarly, morph animation also uses (u, v) to compute the corresponding vertex position at different moments. This mechanism allows us to compute the same corresponding position for each intersection point at different moments of time. Our algorithm makes a trade-off between rendering speed and image quality by choosing the average path energy to determine the number of Markov chains for each seed path. As a result, the image quality of dark regions is generally noisier. When rendering a static image, the dark regions are perceptually less important and the noise is therefore hard to notice. However, since our perception is sensitive to temporal inconsistency, noisy dark regions become perceptually important when rendering an animation. Although temporal exploration can reduce the variance in dark regions, it remains perceptible because the number of Markov chains distributed to dark regions is low. Kelemen et al. [81] used a multiple importance sampling framework to combine independent path samples and correlated path samples to relieve this problem. This inspires us to take advantage of the fact that our algorithm generates a set of independent sample paths before the energy redistribution step. We would like to develop a hybrid algorithm combining balance transfer and equal deposition, in which balance transfer explores the low-energy paths and equal deposition spends more computation exploring the high-energy paths. This should relieve the problem of relatively noisy dark regions. In addition, Kelemen et al. also proposed mapping the creation and mutation of paths onto a high-dimensional uniform random number cube to increase the success rate of mutations and enhance rendering efficiency. Our time and lens perturbations are designed to use small perturbations in the time and image-plane domains to increase the perturbation success rate. A comparison of the success rates of local perturbations and uniform-cube perturbations is needed in the future. We would also like to explore combining the uniform-cube perturbation method with our adaptation algorithm and our spatial and temporal perturbation methods, by systematically perturbing the random variables used to control the perturbation of time and of lens sub-paths, to further increase the success rate of perturbation.

Although we demonstrate the strength of temporal exploration based on PMC-ER, temporal exploration and local adjustment can easily be adapted to the MLT and ERPT frameworks. The new time perturbation method can be added to the set of mutation methods in the ERPT and MLT algorithms with an implementation of the image sweep. Rendering the animation in parallel with all regeneration methods and locally adjusted parameters is important for getting a converged animation quickly. Since the ERPT algorithm has a similar parallel energy-distribution structure, we can use the same task distribution framework: each task contains the paths starting from the same frame. In the preprocess we can estimate the sample requirement and R^k_C2G for each frame along with Ẽ and e_d, and the computation of the energy deposition ratio discussed in Sec. 6.2.3 can be applied directly. Applying parallel local adjustment to MLT is different, however. We should first generate a seed path per frame, and then a pool of replacement paths consisting of variance-regeneration and caustics-regeneration paths for the same frame. Then, during the mutation process, we can replace the current seed path with one of the paths from the pool, similar to a lens replacement mutation. The acceptance probability can be computed accordingly to decide whether the exploration path switches to the new replacement path. This should achieve results similar to those presented in our demonstration. Another issue is that when we render a sequence of animated frames, the system uses the same number of iterations for each frame. However, some frames need a larger number of iterations and some need only a few. In this work we manually set up scenes of reasonable length that require a similar number of iterations for each frame, but this is sub-optimal. In the future we would like to develop an algorithm that automatically checks the convergence of each frame and terminates rendering for converged frames. In our Condor experiments, tasks were set up to run roughly 1000 to 4000 s; we would like to run our algorithm on other kinds of computer clusters where we have direct control of the computation. There are several possible extensions to our system, including skin-skeleton and morph animation to allow us to create complex animations, a better intersection acceleration structure, and a more efficient way to read and write the intermediate images for each iteration.


Figure 6.2: A path of the form LDDSSE, used to demonstrate the backward sampling method. We would like to transform the path X̃^t0 at t0, with 6 vertices, into a new path Ỹ^t1 at t1. Since the original path has the form of Case 1, we must reconstruct the sub-path by backward sampling. Thus, we first object-based rigid transform the positions of vertices x^t0_5 and x^t0_4 to y^t1_5 and y^t1_4 and link the edge y^t1_5 y^t1_4 to form the tracing ray at time t1. Then we extend the sub-path through the same specular bounce at y^t1_4 as at the corresponding x^t0_4 to get y^t1_3, and the same specular bounce at y^t1_3 as at the corresponding x^t0_3 to get y^t1_2. Since x^t0_1 and x^t0_0 lie on diffuse surfaces, we only need to object-based rigid transform their positions to get y^t1_1 and y^t1_0 and link the edges y^t1_2 y^t1_1 and y^t1_1 y^t1_0 to form the new path Ỹ^t1.


Figure 6.3: Overview of the animation rendering system. The left part represents the host process and the right part represents clients. The daemon in the host handles the creation of an input file for each client process, queries the completeness of each client process, and collects the data from each client process. A client process receives the input file from the host, executes the iterative computational task, and sends the results back to the host.


Figure 6.4: The key-framed rigid transformation used to represent the animation of an object.

Figure 6.5: Diagram of the movie-like sample recorder.


Figure 6.6: The top row shows cropped images of the caustics region in the 27th and 28th frames of the Cornell Box animation rendered frame by frame. The second row shows cropped images of the caustics region in the same frames rendered with our time perturbations. Our method generates a smoother, temporally consistent caustics region in these two consecutive frames, and the animation likewise has a smoother and more consistent caustics region. The caustics region rendered frame by frame is noisy and differs in shape between consecutive frames; as a result, that animation has noisy, twisting caustics regions.

Figure 6.7: Cropped images of the 5th frame of the Cornell Box scene. The left image is rendered with a finite shutter-open period and the right with a delta (instantaneous) shutter. Motion blur appears naturally: the caustics region, the glass ball, and the shadow regions contain blurred areas around their edges.


Figure 6.8: From left to right, the 60th frames generated from the CB, room, chess board, and basement scenes using the PMC-ER algorithm with temporal perturbations.


Chapter 7 Discussion and Conclusion

The Monte Carlo method is the most general and robust method for solving the global illumination problem. The major challenges in Monte Carlo rendering are sampling the path space efficiently and constructing good estimators to reduce the variance in the rendered images. This dissertation has introduced Population Monte Carlo Energy Redistribution, which learns to become an effective sampler based on information collected in early iterations in order to explore the temporal and spatial coherence among paths. This chapter presents a summary of the main contributions of this work and a discussion of future work.

7.1 Contributions

Sample reuse is an important technique for reducing sampling variance. Markov chain Monte Carlo algorithms such as Metropolis Light Transport [149] and Energy Redistribution Path Tracing [25] apply this technique to sample the path space in a correlated manner in order to reduce the variance of the rendered image. However, the choice of good mutation strategies is non-trivial and has a major impact on image quality. Population Monte Carlo algorithms can automatically adjust their sampling parameters according to previously collected information. Thus, we adapted the PMC framework to the energy redistribution algorithm, producing Population Monte Carlo Energy Redistribution (PMC-ER). Our algorithm perturbs a population of light transport paths to generate new paths based on each path's kernel function, exploring the spatial and temporal coherence among path integrals. The resampling process eliminates well-explored or low-contribution paths from the population, regenerates new paths to explore the path space in a designed pattern, and adapts the kernel function of each path according to the performance of its perturbations. The procedure is then iterated: sample, redistribute, resample, redistribute, resample, and so on. The result is a self-tuning, unbiased rendering algorithm that can explore the spatial and temporal coherence among light transport paths.

Predicting a proper perturbation range for the caustics perturbation is difficult because it depends on scene and path properties. If the predicted range is too large, the caustics perturbation fails often and rendering efficiency suffers. We proposed a new lens perturbation method that replaces both the lens and caustics perturbations, potentially increasing the perturbation success rate and thus the rendering efficiency. Furthermore, its control parameter, the perturbation radius, is intuitive and directly predicts the movement on the image plane for both lens-style and caustics-style perturbations.

Stratified exploration is important for ERPT and PMC-ER to guarantee unbiasedness. However, this choice is sub-optimal because regions of the image are not perceptually equal in importance: some areas of the image plane contain higher perceptual variance than others. In addition, certain types of paths are visually important but hard to find with a general path tracing algorithm. Thus, we proposed variance regeneration, which concentrates more computation on regions of high perceptual variance, and caustics regeneration, which puts more effort into exploring caustics paths. The energy deposited by each perturbation is weighted according to the path's properties and the way the path was generated, so that no bias is introduced. Both techniques reduce the variance in perceptually important areas at the cost of a slight increase in the variance of other areas.
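The sample–redistribute–resample cycle described above can be sketched on a toy one-dimensional "path space". The target function, the acceptance rule, and the kernel-adaptation heuristic below are illustrative simplifications chosen for this sketch, not the exact procedure used in this dissertation:

```python
import random

def target(x):
    # stand-in for a path contribution f(x): a triangular bump on [0, 1]
    return max(0.0, 1.0 - abs(x - 0.5) * 4.0)

def perturb(x, radius):
    # kernel: uniform perturbation of half-width `radius`, clamped to [0, 1]
    return min(max(x + random.uniform(-radius, radius), 0.0), 1.0)

def pmc_er(n_paths=64, n_iters=20, seed=1):
    random.seed(seed)
    pop = [random.random() for _ in range(n_paths)]   # initial path population
    radii = [0.1] * n_paths                           # one kernel per path
    deposited = 0.0
    for _ in range(n_iters):
        # redistribute: perturb each path; adapt its kernel from the outcome
        for i, x in enumerate(pop):
            y = perturb(x, radii[i])
            accept = min(1.0, target(y) / max(target(x), 1e-12))
            if random.random() < accept:
                pop[i] = y
                radii[i] = min(radii[i] * 1.1, 0.5)   # widen on success
            else:
                radii[i] = max(radii[i] * 0.9, 1e-3)  # shrink on failure
            deposited += target(pop[i])               # "deposit" path energy
        # resample: replace low-contribution paths with fresh regenerations
        pop = [x if target(x) > 0.05 else random.random() for x in pop]
    return deposited / (n_paths * n_iters)

avg = pmc_er()
```

The adaptive widening/shrinking of `radii` plays the role of the per-path kernel adaptation; a real implementation would also weight deposited energy to preserve unbiasedness.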
The overall effect is to improve the perceptual quality of the rendered image.

Our perception is sensitive to temporal artifacts. Frame-by-frame rendering algorithms generate artifacts that appear independently in consecutive frames and cause serious temporal artifacts such as flickering and shimmering. In this dissertation we proposed using PMC-ER to exploit the temporal coherence among path integrals when rendering a general animated scene. An animated object in the scene is described by a key-framed rigid transformation. A new perturbation technique, called the time perturbation, was developed to explore the temporal coherence among paths. It reuses path samples with similar properties at different moments in time in order to reduce the temporal artifacts of an animation. Rendering a physically correct animation of a complex scene may take months on a moderate computer. In order to generate a converged animation in reasonable time, our system distributes iterative computational tasks to a pool of computers for parallel rendering. Each task redistributes the energy of a set of initial paths traced through the shutter-open period of a specific frame, with locally adjusted parameters. In this dissertation we demonstrated that this local adaptation of parameters does not introduce bias into the final result. The rendered animations are perceptually pleasing, with far fewer temporal artifacts.
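As an illustration, a time perturbation that keeps proposals inside the shutter-open interval might look like the following sketch; the reflection rule and the interval bounds are assumptions made for this example, not the exact method of Chapter 6:

```python
import random

def perturb_time(t, radius, shutter_open=0.0, shutter_close=1.0):
    """Propose a new sample time within `radius` of t, kept inside the
    shutter-open interval by reflecting off the boundaries (reflection
    keeps the proposal distribution symmetric)."""
    t_new = t + random.uniform(-radius, radius)
    if t_new < shutter_open:
        t_new = 2.0 * shutter_open - t_new
    elif t_new > shutter_close:
        t_new = 2.0 * shutter_close - t_new
    # final clamp guards against radii larger than the interval itself
    return min(max(t_new, shutter_open), shutter_close)

random.seed(0)
times = [perturb_time(0.9, 0.3) for _ in range(1000)]
```

In the full algorithm, the perturbed path would then be re-traced at the new time and accepted or rejected so that the deposited energy stays unbiased.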

7.2 Future Research

PMC-ER uses sample reuse and kernel-function adaptation to become a better sampler for rendering an animation. However, over the course of this research we realized that although these techniques improve rendering efficiency, the algorithms are still too slow for practical use in industry. The following directions may make the algorithm more efficient.

In this work we faced the difficulty of evaluating the perceptual quality of our results. Algorithms exist for evaluating the perceptual quality of statically rendered images and of compressed video. Evaluation of static images ignores temporal coherence in visual perception, and evaluation of compressed video concentrates mainly on the blockiness caused by information loss during compression. Animation rendering, however, does not suffer from information loss but from independent noise in each frame; this temporal inconsistency is highly objectionable to our perception and must be considered when evaluating rendering quality. Currently we inspect the rendered animations visually, which is time-consuming and hard to incorporate into the rendering algorithm to make the process automatic. A proper metric for evaluating quality across consecutive frames would let us adjust our kernel functions to take the entire rendered sequence into account and to evaluate the results.

The ratio between the number of stratified regeneration paths and special regeneration paths is a trade-off between the degree of stratified exploration of the image plane and the effort concentrated on perceptually important regions of the image plane and path space. Currently, the ratio is set manually at the beginning of the rendering process. In the future we would like to develop an algorithm that automatically discovers the ratio that renders an image with the lowest mean squared error. In this dissertation, variance regeneration uses the perceptual variance of sample radiances as the criterion for distributing pixel samples. The resulting distributions are sub-optimal because they do not reflect the true need of the rendered image. We would like an algorithm that analyzes the intermediate result, so that more samples can be distributed to its noisy regions. Observing our rendered results, we find that bright regions are normally smoother than dark regions. This is because we trade quality for speed by choosing Ẽ as the criterion for starting a Markov chain. In the future we would like a hybrid algorithm that uses balance transfer to explore low-energy paths and equal deposition to concentrate more computation on high-energy paths, relieving the problem in the dark regions. Furthermore, most parts of an image converge quickly except for a few areas, where the noise is generally caused by a few high-energy paths whose perturbations fail frequently. We could further improve rendering efficiency as follows. First, use a proper criterion to identify these high-failure, high-energy paths and record them in a list.
Second, analyze the properties of these hard-to-perturb paths and try to distribute their energy into the intermediate result, or simply remove them if doing so does not affect the perception of the image.

Although our time perturbation methods could in principle explore the correlation among paths across an entire animation, the memory required to store all perturbed samples would be too large for a computer system. Therefore, we made a trade-off among the extent of temporal exploration, the amount of memory used, and the time for updating each process: we set reasonable temporal perturbation radii and a proper number of perturbations so that samples are distributed roughly within a 20-frame radius of the starting frame. In the future, a better storage or task-distribution scheme should be developed to remove this limit on the temporal exploration radius. In addition, temporal and spatial perturbations are implemented separately, so the initial number of perturbations assigned to each is relatively low. This may cause our algorithm to favor a perturbation because of an accidentally high success rate. In the future we would like to develop a spatio-temporal perturbation that explores the spatial and temporal domains at the same time, increasing the number of initial test perturbations and reducing the chance of accidentally biasing toward one perturbation.

Skin-and-skeleton animation and morphing animation seem to be direct extensions of the current implementation of key-framed animation. Both can trace the transformation of a diffuse vertex at different moments in time by using the intersection's parameter values (u, v). The ability to trace a diffuse vertex allows us to generate the correlated paths needed by the time perturbation. The current implementation assumes that the change in lighting conditions across the rendered sequence of frames is small enough that all frames converge in a similar number of iterations. In general, however, the lighting conditions of a long animation change greatly, and our algorithm must chop the animated scene into several short animated scenes to satisfy this assumption. In the future we would like an algorithm that automatically checks the convergence of each frame and terminates rendering for converged frames.
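For reference, evaluating a key-framed rigid transformation at an arbitrary time t can be sketched as follows. The keyframe layout (time, rotation angle about a fixed axis, translation) is a simplification chosen for illustration; a production renderer would interpolate full rotations, e.g. with quaternions:

```python
import math

def interpolate_keyframes(keys, t):
    """keys: sorted list of (time, angle_radians, (tx, ty, tz)) keyframes.
    Returns the linearly interpolated (angle, translation) at time t,
    clamped to the first/last keyframe outside the keyed range."""
    if t <= keys[0][0]:
        return keys[0][1], keys[0][2]
    if t >= keys[-1][0]:
        return keys[-1][1], keys[-1][2]
    for (t0, a0, p0), (t1, a1, p1) in zip(keys, keys[1:]):
        if t0 <= t <= t1:
            w = (t - t0) / (t1 - t0)
            angle = a0 + w * (a1 - a0)
            pos = tuple(x0 + w * (x1 - x0) for x0, x1 in zip(p0, p1))
            return angle, pos
    raise ValueError("unreachable for sorted keyframes")

# object rotates half a turn and slides 2 units over the animation
keys = [(0.0, 0.0, (0.0, 0.0, 0.0)), (1.0, math.pi, (2.0, 0.0, 0.0))]
angle, pos = interpolate_keyframes(keys, 0.5)
```

Being able to evaluate the transformation at any continuous t is what lets a time perturbation re-trace the same object intersection at a nearby moment of the animation.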


LIST OF REFERENCES

[1] Precomputed radiance transfer for real-time rendering in dynamic, low-frequency lighting environments. New York, NY, USA, 2002. ACM.
[2] Clustered principal components for precomputed radiance transfer. New York, NY, USA, 2003. ACM.
[3] Precomputed local radiance transfer for real-time lighting design, volume 24. New York, NY, USA, 2005. ACM.
[4] Fast kd-tree construction with an adaptive error-bounded heuristic. IEEE, Sept. 2006.
[5] Large ray packets for real-time Whitted ray tracing, 2008.
[6] Stephen J. Adelson and Larry F. Hodges. Generating exact ray-traced animation frames by reprojection. IEEE Comput. Graph. Appl., 15(3):43–52, 1995.
[7] A. A. Apodaca and L. Gritz. Advanced RenderMan. Morgan Kaufmann, 1999.
[8] James Arvo. Transfer functions in global illumination. In ACM SIGGRAPH '93 Course Notes – Global Illumination, pages 1–28, 1993.
[9] Michael Ashikhmin, Simon Premoze, Peter Shirley, and Brian Smits. A variance analysis of the Metropolis light transport algorithm. Computers & Graphics, 25(2):287–294, 2004.
[10] Kavita Bala, Julie Dorsey, and Seth Teller. Interactive ray-traced scene editing using ray segment trees. In Tenth Eurographics Workshop on Rendering, pages 39–52, 1999.
[11] D. Baum, J. Wallace, M. Cohen, and D. Greenberg. The back-buffer algorithm: an extension of the radiosity method to dynamic environments. The Visual Computer, pages 298–306, 1986.
[12] P. Bekaert, Mateu Sbert, and J. Halton. Accelerating path tracing by re-using paths. In Rendering Techniques '02 (Proceedings of the Thirteenth Eurographics Workshop on Rendering), pages 125–134, 2002.

[13] Aner Ben-Artzi, Ryan Overbeck, and Ravi Ramamoorthi. Real-time BRDF editing in complex lighting. ACM Trans. Graph., 25(3):945–954, 2006.
[14] Gonzalo Besuievsky and Xavier Pueyo. Animating radiosity environments through the multi-frame lighting method. Journal of Visualization and Computer Animation, 12:93–106, 2001.
[15] Gonzalo Besuievsky and Mateu Sbert. The multi-frame lighting method: a Monte Carlo based solution for radiosity in dynamic environments. In Proceedings of the Eurographics Workshop on Rendering Techniques '96, pages 185–ff., London, UK, 1996. Springer-Verlag.
[16] Gary Bishop, Henry Fuchs, Leonard McMillan, and Ellen J. Scher Zagier. Frameless rendering: double buffering considered harmful. Pages 175–176, 1994.
[17] N. Briere and P. Pierre. Hierarchical view-dependent structures for interactive scene manipulation. In SIGGRAPH '96, pages 83–90, 1996.
[18] Brian Cabral, Marc Olano, and Philip Nemec. Reflection space image based rendering. In SIGGRAPH 99, pages 165–170, 1999.
[19] Mike Cammarano and Henrik Wann Jensen. Time dependent photon mapping. In Proceedings of the 13th Eurographics Workshop on Rendering, pages 135–144, 2002.
[20] O. Cappé, A. Guillin, Jean-Michel Marin, and Christian Robert. Population Monte Carlo. Journal of Computational and Graphical Statistics, 13(4):907–929, 2004.
[21] Alan Chalmers and Theresa-Marie Rhyne, editors. Interactive Rendering with Coherent Ray Tracing, volume 20. Blackwell Publishers, Oxford, 2001.
[22] J. Chapman, T. W. Calvert, and J. Dill. Spatio-temporal coherence in ray tracing. In Proceedings of Graphics Interface '91, pages 101–108, June 1991.
[23] S. E. Chen. Incremental radiosity: An extension of progressive radiosity to an interactive image synthesis system. In SIGGRAPH '90: Proceedings of the 17th annual conference on Computer graphics and interactive techniques, pages 135–144, 1990.
[24] P. H. Christensen. Industrial-strength global illumination. In SIGGRAPH '03, Course Notes No. 27, pages 139–149, 2003.
[25] David Cline, Justin Talbot, and Parris Egbert. Energy redistribution path tracing. In SIGGRAPH '05, pages 1186–1195, 2005.
[26] Condor. http://www.cs.wisc.edu/condor.
[27] Cyrille Damez, Kirill Dmitriev, and Karol Myszkowski. State of the art in global illumination for interactive applications and high-quality animations. Computer Graphics Forum, 22(1):55–77, March 2003.

[28] Cyrille Damez and François Sillion. Space-time hierarchical radiosity. In Rendering Techniques '99, pages 235–246. Springer, 1999.
[29] Abhinav Dayal, Cliff Woolley, Benjamin Watson, and David Luebke. Adaptive frameless rendering. In SIGGRAPH '05: ACM SIGGRAPH 2005 Courses, page 24, New York, NY, USA, 2005. ACM.
[30] Paul J. Diefenbach. Pipeline rendering: interaction and realism through hardware-based multi-pass rendering. PhD thesis, Univ. of Pennsylvania, 1996.
[31] Paul J. Diefenbach and Norman I. Badler. Multi-pass pipeline rendering: realism for dynamic environments. In SI3D '97: Proceedings of the 1997 Symposium on Interactive 3D Graphics, pages 59–ff., New York, NY, USA, 1997. ACM.
[32] Kirill Dmitriev, Stefan Brabec, Karol Myszkowski, and Hans-Peter Seidel. Interactive global illumination using selective photon tracing. In Rendering Techniques '02 (Proceedings of the 13th Eurographics Workshop on Rendering), pages 25–36, 2002.
[33] R. Douc, A. Guillin, J. M. Marin, and C. P. Robert. Convergence of adaptive sampling schemes. Technical Report 2005-6, University Paris Dauphine, 2005.
[34] R. Douc, A. Guillin, J. M. Marin, and C. P. Robert. Minimum variance importance sampling via population Monte Carlo. Technical report, University Paris Dauphine, 2005.
[35] A. Doucet, N. de Freitas, and N. Gordon. Sequential Monte Carlo Methods in Practice. Springer-Verlag, 2001.
[36] George Drettakis and François X. Sillion. Interactive update of global illumination using a line-space hierarchy. In SIGGRAPH '97: Proceedings of the 24th annual conference on Computer graphics and interactive techniques, pages 57–64. ACM Press/Addison-Wesley Publishing Co., 1997.
[37] Frédo Durand, George Drettakis, and Claude Puech. The visibility skeleton: a powerful and efficient multi-purpose global visibility tool. In SIGGRAPH '97: Proceedings of the 24th annual conference on Computer graphics and interactive techniques, pages 89–100, New York, NY, USA, 1997. ACM Press/Addison-Wesley Publishing Co.
[38] Frédo Durand, George Drettakis, and Claude Puech. The 3D visibility complex. ACM Trans. Graph., 21(2):176–206, 2002.
[39] Philip Dutré, Kavita Bala, and Philippe Bekaert. Advanced Global Illumination. A K Peters Ltd, 2006.
[40] Shaohua Fan. Sequential Monte Carlo Methods for Physically Based Rendering. PhD thesis, University of Wisconsin–Madison, 2006.

[41] Sebastian Fernandez, Kavita Bala, Moreno A. Piccolotto, and Donald P. Greenberg. Interactive direct lighting in dynamic scenes. Technical Report PCG-00-2, 2000.
[42] James A. Ferwerda, Sumanta N. Pattanaik, Peter Shirley, and Donald P. Greenberg. A model of visual adaptation for realistic image synthesis. In SIGGRAPH '96, pages 249–258, 1996.
[43] T. Foley and J. Sugerman. Kd-tree acceleration structures for a GPU raytracer. In Proceedings of HWWS, pages 15–22, 2005.
[44] D. A. Forsyth, C. Yang, and K. Teo. Efficient radiosity in dynamic environments. In Proceedings of the Fifth Eurographics Workshop on Rendering, pages 221–231, 1994.
[45] Volker Gaede and Oliver Günther. Multidimensional access methods. ACM Computing Surveys, 30:170–231, 1998.
[46] P. Gautron, K. Bouatouch, and S. Pattanaik. Temporal radiance caching. In IEEE Transactions on Visualization and Computer Graphics, volume 13, pages 891–901. IEEE Computer Society, 2007.
[47] D. W. George, F. X. Sillion, and D. P. Greenberg. Radiosity redistribution for dynamic environments. IEEE Computer Graphics and Applications, pages 26–34, 1990.
[48] Abhijeet Ghosh, Arnaud Doucet, and Wolfgang Heidrich. Sequential sampling for dynamic environment map illumination. In Proc. Eurographics Symposium on Rendering, pages 115–126, 2006.
[49] Walter R. Gilks, Sylvia Richardson, and David J. Spiegelhalter. Markov Chain Monte Carlo in Practice. Chapman & Hall, 1996.
[50] A. S. Glassner. Space subdivision for fast ray tracing. Pages 160–167, 1988.
[51] Andrew S. Glassner. Spacetime ray tracing for animation. IEEE Computer Graphics and Applications, 8(2):60–70, March 1988.
[52] Andrew S. Glassner. An Introduction to Ray Tracing. Academic Press, 1989.
[53] Cindy M. Goral, Kenneth E. Torrance, Donald P. Greenberg, and Bennett Battaile. Modeling the interaction of light between diffuse surfaces. In SIGGRAPH '84: Proceedings of the 11th Annual Conference on Computer Graphics and Interactive Techniques, pages 213–222, 1984.
[54] Xavier Granier and George Drettakis. A final reconstruction framework for a unified global illumination algorithm. ACM Transactions on Graphics, 23(2), April 2004.
[55] C. P. Gribble, T. Ize, A. Kensler, I. Wald, and S. G. Parker. A coherent grid traversal approach to visualizing particle-based simulation data. Pages 758–768, 2006.

[56] J. Günther, H. Friedrich, I. Wald, H.-P. Seidel, and P. Slusallek. Ray tracing animated scenes using motion decomposition. In Proc. Eurographics, 2006.
[57] Paul Haeberli and Kurt Akeley. The accumulation buffer: hardware support for high-quality rendering. In SIGGRAPH '90: Proceedings of the 17th annual conference on Computer graphics and interactive techniques, pages 309–318, New York, NY, USA, 1990. ACM.
[58] J. M. Hammersley and D. C. Handscomb. Monte Carlo Methods. John Wiley & Sons, 1965.
[59] W. K. Hastings. Monte Carlo sampling methods using Markov chains and their applications. Biometrika, 57:97–109, 1970.
[60] Vlastimil Havran. Heuristic Ray Shooting Algorithms. PhD thesis, Department of Computer Science and Engineering, Faculty of Electrical Engineering, Czech Technical University in Prague, November 2000.
[61] Vlastimil Havran, Jiří Bittner, and Hans-Peter Seidel. Exploiting temporal coherence in ray casted walkthroughs. In SCCG '03: Proceedings of the 19th Spring Conference on Computer Graphics, pages 149–155, New York, NY, USA, 2003. ACM.
[62] Vlastimil Havran, Cyrille Damez, Karol Myszkowski, and Hans-Peter Seidel. An efficient spatio-temporal architecture for animation rendering. In EGRW '03: Proceedings of the 14th Eurographics Workshop on Rendering, pages 106–117. Eurographics Association, 2003.
[63] Vlastimil Havran, Cyrille Damez, Karol Myszkowski, and Hans-Peter Seidel. An efficient spatio-temporal architecture for animation rendering. In SIGGRAPH '03: ACM SIGGRAPH 2003 Sketches & Applications, pages 1–1, New York, NY, USA, 2003. ACM.
[64] Miloš Hašan, Fabio Pellacini, and Kavita Bala. Direct-to-indirect transfer for cinematic relighting. ACM Trans. Graph., 25(3):1089–1097, 2006.
[65] Miloš Hašan, Fabio Pellacini, and Kavita Bala. Matrix row-column sampling for the many-light problem. In SIGGRAPH '07: ACM SIGGRAPH 2007 Papers, page 26, New York, NY, USA, 2007. ACM.
[66] Miloš Hašan, Edgar Velázquez-Armendáriz, Fabio Pellacini, and Kavita Bala. Tensor clustering for rendering many-light animations. In Proc. of the 19th Eurographics Symposium on Rendering. Eurographics Association, 2008.
[67] Paul S. Heckbert. Adaptive radiosity textures for bidirectional ray tracing. In SIGGRAPH '90, pages 145–154, 1990.
[68] T. Heidmann. Real shadows – real time. Iris Universe, 1991.

[69] Wolfgang Heidrich and Hans-Peter Seidel. Realistic, hardware-accelerated shading and lighting. In Proceedings of the 26th annual conference on Computer graphics and interactive techniques, pages 171–178, 1999.
[70] Michael Herf and Paul S. Heckbert. Fast soft shadows. In Visual Proceedings, SIGGRAPH 96, page 145, 1996.
[71] T. Ize, P. Shirley, and S. G. Parker. Grid creation strategies for efficient ray tracing. In Interactive Ray Tracing 2006, IEEE Symposium on, pages 27–32, 2006.
[72] T. Ize, I. Wald, C. Robertson, and S. G. Parker. An evaluation of parallel grid construction for ray tracing dynamic scenes. In Interactive Ray Tracing 2006, IEEE Symposium on, pages 47–55, 2006.
[73] Henrik Jensen. Global illumination using photon maps. In Rendering Techniques '96: Proceedings of the Eurographics Workshop on Rendering, pages 21–30, 1996.
[74] Henrik W. Jensen. Realistic Image Synthesis Using Photon Mapping. AK Peters, 2001.
[75] David A. Jevans. Object space temporal coherence for ray tracing. In Proceedings of the Conference on Graphics Interface '92, pages 176–183, San Francisco, CA, USA, 1992. Morgan Kaufmann Publishers Inc.
[76] Sig Badt, Jr. Two algorithms for taking advantage of temporal coherence in ray tracing. The Visual Computer, pages 123–132, 1988.
[77] James T. Kajiya. The rendering equation. In SIGGRAPH '86, pages 143–150, 1986.
[78] M. H. Kalos and P. A. Whitlock. Monte Carlo Methods, Volume I: Basics. John Wiley & Sons, 1986.
[79] Jan Kautz and Michael D. McCool. Interactive rendering with arbitrary BRDFs using separable approximations. In Eurographics Rendering Workshop, pages 281–292, 1999.
[80] Jan Kautz, Pere-Pau Vázquez, Wolfgang Heidrich, and Hans-Peter Seidel. A unified approach to prefiltered environment maps. Pages 185–196. Springer, 2000.
[81] Csaba Kelemen, László Szirmay-Kalos, György Antal, and Ferenc Csonka. A simple and robust mutation strategy for the Metropolis light transport algorithm. Volume 21, pages 531–540, 2002.
[82] Alexander Keller. Instant radiosity. In SIGGRAPH '97: Proceedings of the 24th Annual Conference on Computer Graphics and Interactive Techniques, pages 49–56, 1997.
[83] Eric P. Lafortune and Yves D. Willems. Bi-directional path tracing. In Proceedings of Compugraphics, pages 145–153, 1993.

[84] Eric P. Lafortune and Yves D. Willems. Bidirectional path tracing. In Proceedings of the Third International Conference on Computational Graphics and Visualization Techniques (Compugraphics '93), pages 145–153, 1993.
[85] Ares Lagae and Philip Dutré. Accelerating ray tracing using constrained tetrahedralizations. In SIGGRAPH '08: ACM SIGGRAPH 2008 Posters, pages 1–1, New York, NY, USA, 2008. ACM.
[86] Ares Lagae and Philip Dutré. Compact, fast and robust grids for ray tracing. In SIGGRAPH '08: ACM SIGGRAPH 2008 Talks, pages 1–1, New York, NY, USA, 2008. ACM.
[87] Samuli Laine, Hannu Saransaari, Janne Kontkanen, Jaakko Lehtinen, and Timo Aila. Incremental instant radiosity for real-time indirect illumination. In Proceedings of Eurographics Symposium on Rendering 2007, pages 277–286. Eurographics Association, 2007.
[88] Greg Ward Larson. The Holodeck: A parallel ray-caching rendering system. In Proceedings Eurographics Workshop on Parallel Graphics and Visualization, 1998.
[89] Christian Lauterbach, Sung-Eui Yoon, and Dinesh Manocha. RT-DEFORM: Interactive ray tracing of dynamic scenes using BVHs. In Proceedings of the 2006 IEEE Symposium on Interactive Ray Tracing, pages 39–45, 2006.
[90] Jonas Lext and Tomas Akenine-Möller. Towards rapid reconstruction for animated ray tracing. In Eurographics Short Presentations, pages 311–318, 2001.
[91] Jun S. Liu. Monte Carlo Strategies in Scientific Computing. Springer-Verlag, 2001.
[92] I. Martin, X. Pueyo, and D. Tost. A framework for animation in global illumination. Technical report, Univ. of Girona, 1999.
[93] Ignacio Martín, Xavier Pueyo, and Dani Tost. Frame-to-frame coherent animation with two-pass radiosity. IEEE Transactions on Visualization and Computer Graphics, 9(1):70–84, 2003.
[94] W. Martin, S. Parker, E. Reinhard, P. Shirley, and W. Thompson. Temporally coherent interactive ray tracing. Journal of Graphics Tools, 2:41–48, 2002.
[95] A. Mendez-Feliu, M. Sbert, and L. Szirmay-Kalos. Reusing frames in camera animation. Journal of WSCG, 14(3), 2006.
[96] N. Metropolis and S. Ulam. The Monte Carlo method. Journal of the American Statistical Association, 44:335–341, 1949.

[97] Nicholas Metropolis, Arianna W. Rosenbluth, Marshall N. Rosenbluth, Augusta H. Teller, and Edward Teller. Equation of state calculations by fast computing machines. The Journal of Chemical Physics, 21(6):1087–1092, 1953.
[98] Mark Meyer and John Anderson. Statistical acceleration for animated global illumination. ACM Trans. Graph., 25(3):1075–1080, 2006.
[99] S. Muller, W. Kresse, and N. Gatenby. A radiosity approach for the simulation of daylight, 1995.
[100] S. Muller and F. Schoeffel. Fast radiosity repropagation for interactive virtual environments using a shadow-form-factor-list, 1994.
[101] K. Murakami and K. Hirota. Incremental ray tracing. In Eurographics Workshop on Photosimulation, Realism and Physics in Computer Graphics, 1989.
[102] M. J. Muuss. Toward real-time ray-tracing of combinatorial solid geometric models. In Proceedings of the BRL-CAD Symposium, 1995.
[103] M. J. Muuss and M. Lorenzo. High-resolution interactive multispectral missile sensor simulation for ATR and DIS. In Proceedings of the BRL-CAD Symposium, 1995.
[104] Karol Myszkowski, Takehiro Tawara, Hiroyuki Akamine, and Hans-Peter Seidel. Perception-informed accelerated rendering of high quality walkthrough sequences. In Proc. of the 10th Eurographics Workshop on Rendering, pages 5–18, 1999.
[105] Karol Myszkowski, Takehiro Tawara, Hiroyuki Akamine, and Hans-Peter Seidel. Perception-guided global illumination solution for animation rendering. In SIGGRAPH '01: Proceedings of the 28th annual conference on Computer graphics and interactive techniques, pages 221–230. ACM Press, 2001.
[106] Ren Ng, Ravi Ramamoorthi, and Pat Hanrahan. All-frequency shadows using non-linear wavelet lighting approximation. ACM Trans. Graph., 22(3):376–381, 2003.
[107] Jeffry Nimeroff, Julie Dorsey, and Holly Rushmeier. Implementation and analysis of an image-based global illumination framework for animated environments. IEEE Transactions on Visualization and Computer Graphics, 2(4):283–298, 1996.
[108] Rachel Orti, Stéphane Rivière, Frédo Durand, and Claude Puech. Radiosity for dynamic scenes in flatland with the visibility complex. Computer Graphics Forum, 15:237–248, 1996.
[109] John D. Owens, David Luebke, Naga Govindaraju, Mark Harris, Jens Krüger, Aaron E. Lefohn, and Timothy J. Purcell. A survey of general-purpose computation on graphics hardware. Computer Graphics Forum, 26(1):80–113, 2007.
[110] S. Parker, W. Martin, Y. Livnat, P. Sloan, P. Shirley, B. Smits, and C. Hansen. Interactive ray tracing. In Symposium on Interactive 3D Computer Graphics, 1999.

[111] S. Parker, W. Martin, Y. Livnat, P. Sloan, P. Shirley, B. Smits, and C. Hansen. Interactive ray tracing for volume visualization. IEEE Transactions on Visualization and Computer Graphics, 5(3), 1999.
[112] Fabio Pellacini, Kiril Vidimče, Aaron Lefohn, Alex Mohr, Mark Leone, and John Warren. Lpics: a hybrid hardware-accelerated relighting engine for computer cinematography. In SIGGRAPH '05: ACM SIGGRAPH 2005 Papers, pages 464–470, New York, NY, USA, 2005. ACM.
[113] Matt Pharr and Greg Humphreys. Physically Based Rendering: From Theory to Implementation. Morgan Kaufmann, 2004.
[114] Stefan Popov, Johannes Günther, Hans-Peter Seidel, and Philipp Slusallek. Experiences with streaming construction of SAH kd-trees. In Proceedings of the 2006 IEEE Symposium on Interactive Ray Tracing, pages 89–94, September 2006.
[115] X. Pueyo, D. Tost, I. Martin, and B. Garcia. Radiosity for dynamic environments. In The Journal of Visualization and Computer Animation, volume 8, pages 221–231, 1997.
[116] Timothy J. Purcell, Ian Buck, William R. Mark, and Pat Hanrahan. Ray tracing on programmable graphics hardware. ACM Transactions on Graphics, 21(3):703–712, July 2002. ISSN 0730-0301 (Proceedings of ACM SIGGRAPH 2002).
[117] Timothy J. Purcell, Craig Donner, Mike Cammarano, Henrik Wann Jensen, and Pat Hanrahan. Photon mapping on programmable graphics hardware. In Proceedings of the ACM SIGGRAPH/EUROGRAPHICS Conference on Graphics Hardware, pages 41–50. Eurographics Association, 2003.
[118] Werner Purgathofer, editor. Using Temporal and Spatial Coherence for Accelerating the Calculation of Animation Sequences. North-Holland, September 1991.
[119] E. Reinhard, A. G. Chalmers, and F. W. Jansen. State of the art report: overview of parallel photo-realistic graphics, 1998.
[120] Erik Reinhard, Brian Smits, and Charles Hansen. Dynamic acceleration structures for interactive ray tracing. In Proceedings of the Eurographics Workshop on Rendering, pages 299–306, 2000.
[121] Alexander Reshetov, Alexei Soupikov, and Jim Hurley. Multi-level ray tracing algorithm. In SIGGRAPH '05: ACM SIGGRAPH 2005 Papers, pages 1176–1185, New York, NY, USA, 2005. ACM.
[122] Christian P. Robert and George Casella. Monte Carlo Statistical Methods. Springer-Verlag, 2nd edition, 2004.

129 [123] Mateu Sbert and Francesc Castro. Reuse of paths in final gathering step with moving light sources. In International Conference on Computational Science 2004, pages 189­196, 2004. [124] Mateu Sbert, Francesc Castro, and John Halton. Reuse of paths in light source animation. In CGI '04: Proceedings of the Computer Graphics International, pages 532­535, Washington, DC, USA, 2004. IEEE Computer Society. [125] Mateu Sbert, Laszlo Szecsi, and Laszlo Szirmay-Kalos. Real-time light animation. In Computer Graphics Forum (Proceedings Eurographics 2004), volume 23(3), pages 291­ 300, 2004. [126] F. Schoeffel and P.Pomi. Reducing memory requirements for interactive radiosity using movement prediction. In Eurographics Rendering Workshop, pages 125­134, 1999. [127] Mark Segal, Carl Korobkin, Rolf van Widenfelt, Jim Foran, and Paul Haeberli. Fast shadows and lighting effects using texture mapping. SIGGRAPH Comput. Graph., 26(2):249­252, 1992. [128] Benjamin Segovia, Jean-Claude Iehl, Richard Mitanchey, and Bernard P´ roche. Bidirece tional instant radiosity. In Proceedings of the 17th Eurographics Workshop on Rendering, to appear, 2006. [129] Benjamin Segovia, Jean-Claude Iehl, and Bernard P´ roche. Metropolis instant radiosity. In e Proceedings of Eurographics 2007, to appear, 2007. [130] C. H. S´ quin and E. K. Smyrl. Parameterized ray-tracing. In SIGGRAPH '89: Proceedings e of the 16th annual conference on Computer graphics and interactive techniques, pages 307­ 314, New York, NY, USA, 1989. ACM. [131] Jonathan Shade, Dani Lischinski, David H. Salesin, Tony DeRose, and John Snyder. Hierarchical image caching for accelerated walkthroughs of complex environments. In SIGGRAPH '96: Proceedings of the 23rd annual conference on Computer graphics and interactive techniques, pages 75­82, New York, NY, USA, 1996. ACM. [132] E. Shaw. Hierarchical radiosity for dynamic environments. In Computer Graphics Forum (Proceedings Eurographics 1997), volume 16, pages 107­118, 1997. 
[133] Maryann Simmons and Carlo H. Séquin. Tapestry: A dynamic mesh-based display representation for interactive rendering. In Proceedings of the 11th Eurographics Workshop on Rendering, pages 329–340, 2000.
[134] S. J. Gortler and K. Myszkowski, editors. Interactive Distributed Ray Tracing of Highly Complex Models. Springer, 2001. Available at http://graphics.cs.uni-sb.de/ wald/Publications.

[135] Miloslaw Smyk, Shin-ichi Kinuwaki, Roman Durikovic, and Karol Myszkowski. Temporally coherent irradiance caching for high quality animation rendering. In The European Association for Computer Graphics 26th Annual Conference EUROGRAPHICS 2005, volume 24 of Computer Graphics Forum, pages xx–xx, Dublin, Ireland, 2005. Blackwell.
[136] M. Smyk, S. Kinuwaki, R. Durikovic, and K. Myszkowski. Temporally coherent irradiance caching for high quality animation rendering. In Computer Graphics Forum (Proceedings Eurographics 2005), 2005.
[137] Jerome Spanier and Ely M. Gelbard. Monte Carlo Principles and Neutron Transport Problems. Addison-Wesley, Reading, Mass., 1969.
[138] Marc Stamminger, Jorg Haber, Hartmut Schirmacher, and Hans-Peter Seidel. Walkthroughs with corrective texturing. In Proceedings of the 11th Eurographics Workshop on Rendering, pages 377–388, 2000.
[139] Gordon Stoll, William R. Mark, Peter Djeu, Rui Wang, and Ikrima Elhassan. Razor: An architecture for dynamic multiresolution ray tracing. Technical report, University of Texas at Austin, 2005.
[140] K. Sung, A. Pearce, and C. Wang. Spatial-temporal antialiasing. IEEE Transactions on Visualization and Computer Graphics, 8(2):144–153, 2002.
[141] Sven Woop, Jörg Schmittler, and Philipp Slusallek. RPU: A programmable ray processing unit for realtime ray tracing. In Proceedings of ACM SIGGRAPH 2005, pages 434–444, July 2005.
[142] László Szirmay-Kalos. Monte Carlo methods for global illumination. In Spring Conference on Computer Graphics '99, pages 1–28, 1999. Invited talk.
[143] Eric Tabellion and Arnauld Lamorlette. An approximate global illumination system for computer generated films. In SIGGRAPH '04: Proceedings of the 31st Annual Conference on Computer Graphics and Interactive Techniques, pages 469–476, 2004.
[144] T. Tawara, K. Myszkowski, K. Dmitriev, V. Havran, C. Damez, and H.-P. Seidel. Exploiting temporal coherence in global illumination.
In SCCG '04: Proceedings of the 20th Spring Conference on Computer Graphics, pages 23–33. ACM Press, 2004. Invited talk.
[145] T. Tawara, K. Myszkowski, and H.-P. Seidel. Exploiting temporal coherence in final gathering for dynamic scenes. In CGI '04: Proceedings of the Computer Graphics International (CGI'04), pages 110–119. IEEE Computer Society, 2004.
[146] Takehiro Tawara, Karol Myszkowski, and Hans-Peter Seidel. Localizing the final gathering for dynamic scenes using the photon map. In Günther Greiner, Heinrich Niemann, Thomas Ertl, Bernd Girod, and Hans-Peter Seidel, editors, Proceedings of Vision, Modeling, and Visualization VMV 2002, pages 69–76, Erlangen, Germany, November 2002. Akademische Verlagsgesellschaft Aka.

[147] Luke Tierney. A note on Metropolis-Hastings kernels for general state spaces. The Annals of Applied Probability, 8(1):1–9, 1998.
[148] Parag Tole, Fabio Pellacini, Bruce Walter, and Donald P. Greenberg. Interactive global illumination in dynamic scenes. In SIGGRAPH '02: Proceedings of the 29th Annual Conference on Computer Graphics and Interactive Techniques, pages 537–546. ACM Press, 2002.
[149] Eric Veach. Robust Monte Carlo Methods for Light Transport Simulation. PhD thesis, Stanford University, 1997.
[150] Eric Veach and Leonidas J. Guibas. Bidirectional estimators for light transport. In Proc. of the 5th Eurographics Workshop on Rendering, pages 147–162. Eurographics Association, 1994.
[151] Eric Veach and Leonidas J. Guibas. Optimally combining sampling techniques for Monte Carlo rendering. In SIGGRAPH '95: Proceedings of the 22nd Annual Conference on Computer Graphics and Interactive Techniques, pages 419–428, 1995.
[152] Eric Veach and Leonidas J. Guibas. Metropolis light transport. In SIGGRAPH '97, pages 65–76, 1997.
[153] V. Volevich, K. Myszkowski, A. Khodulev, and E. Kopylov. Using the visual differences predictor to improve performance of progressive global illumination computations. ACM Transactions on Graphics, 19(2):122–161, 2000.
[154] I. Wald, C. Benthin, and P. Slusallek. Distributed interactive ray tracing of dynamic scenes. In Proc. of the IEEE Symposium on Parallel and Large-Data Visualization and Graphics, 2003.
[155] I. Wald and V. Havran. On building fast kd-trees for ray tracing, and on doing that in O(N log N). Technical report, University of Utah, 2006.
[156] I. Wald, T. Kollig, C. Benthin, A. Keller, and P. Slusallek. Interactive global illumination. In Proc. of the 13th Eurographics Workshop on Rendering, pages 15–24, 2002.
[157] Ingo Wald. Realtime Ray Tracing and Interactive Global Illumination. PhD thesis, Computer Graphics Group, Saarland University, 2004.
[158] Ingo Wald, Carsten Benthin, and Philipp Slusallek.
Interactive global illumination in complex and highly occluded environments. In EGRW '03: Proceedings of the 14th Eurographics Workshop on Rendering, pages 74–81, Aire-la-Ville, Switzerland, 2003. Eurographics Association.
[159] Ingo Wald, Solomon Boulos, and Peter Shirley. Ray tracing deformable scenes using dynamic bounding volume hierarchies. ACM Transactions on Graphics, 26(1), 2007.

[160] Ingo Wald, Thomas Kollig, Carsten Benthin, Alexander Keller, and Philipp Slusallek. Interactive global illumination using fast ray tracing. In EGRW '02: Proceedings of the 13th Eurographics Workshop on Rendering, pages 15–24, Aire-la-Ville, Switzerland, 2002. Eurographics Association.
[161] Ingo Wald, William R. Mark, Johannes Günther, Solomon Boulos, Thiago Ize, Warren Hunt, Steven G. Parker, and Peter Shirley. State of the art in ray tracing animated scenes. In Dieter Schmalstieg and Jiří Bittner, editors, STAR Proceedings of Eurographics 2007, pages 89–116. The Eurographics Association, September 2007.
[162] Ingo Wald, Thiago Ize, Andrew Kensler, Aaron Knoll, and Steven G. Parker. Ray tracing animated scenes using coherent grid traversal. In SIGGRAPH '06: Proceedings of the 33rd Annual Conference on Computer Graphics and Interactive Techniques, 2006.
[163] Bruce Walter, Adam Arbree, Kavita Bala, and Donald P. Greenberg. Multidimensional lightcuts. In SIGGRAPH '06: ACM SIGGRAPH 2006 Papers, pages 1081–1088, New York, NY, USA, 2006. ACM.
[164] Bruce Walter, George Drettakis, and Donald Greenberg. Enhancing and optimizing the render cache. In Paul Debevec and Simon Gibson, editors, Thirteenth Eurographics Workshop on Rendering (2002), June 2002. Eurographics, ACM Press.
[165] Bruce Walter, George Drettakis, and Steven Parker. Interactive rendering using the render cache. In D. Lischinski and G. W. Larson, editors, Rendering Techniques '99 (Proceedings of the 10th Eurographics Workshop on Rendering), volume 10, pages 235–246, New York, NY, June 1999. Springer-Verlag/Wien.
[166] Bruce Walter, Sebastian Fernandez, Adam Arbree, Kavita Bala, Michael Donikian, and Donald P. Greenberg. Lightcuts: a scalable approach to illumination. ACM Trans. Graph., 24(3):1098–1107, 2005.
[167] Gregory Ward and Maryann Simmons. The holodeck ray cache: an interactive rendering system for global illumination in nondiffuse environments. ACM Trans.
Graph., 18(4):361–368, 1999.
[168] Gregory J. Ward and Paul Heckbert. Irradiance gradients. In Proceedings of the 3rd Eurographics Workshop on Rendering, pages 85–98, 1992.
[169] Gregory J. Ward, Francis M. Rubinstein, and Robert D. Clear. A ray tracing solution for diffuse interreflection. In SIGGRAPH '88: Proceedings of the 15th annual conference on Computer graphics and interactive techniques, pages 85–92, 1988.
[170] Markus Weber, Marco Milch, Karol Myszkowski, Kirill Dmitriev, Przemyslaw Rokita, and Hans-Peter Seidel. Spatio-temporal photon density estimation using bilateral filtering. In CGI '04: Proceedings of the Computer Graphics International (CGI'04), pages 120–127. IEEE Computer Society, 2004.

[171] A. Wojdala, M. Gruszewski, and K. Dudkiewicz. Using hardware texture mapping for efficient image synthesis and walkthrough with specular effects. In Machine Graphics and Vision, pages 137–151, 1994.
[172] Hector Yee, Sumanta Pattanaik, and Donald P. Greenberg. Spatiotemporal sensitivity and visual attention for efficient rendering of dynamic environments. ACM Transactions on Graphics, 20(1):39–65, 2001.
[173] Sung-Eui Yoon, Sean Curtis, and Dinesh Manocha. Ray tracing dynamic scenes using selective restructuring. In SIGGRAPH '07: ACM SIGGRAPH 2007 sketches, page 55, New York, NY, USA, 2007. ACM.
[174] Kun Zhou, Qiming Hou, Rui Wang, and Baining Guo. Real-time kd-tree construction on graphics hardware. In SIGGRAPH Asia '08: ACM SIGGRAPH Asia 2008 papers, pages 1–11, New York, NY, USA, 2008. ACM.
