Toward more realism and robustness in global illumination



HAL Id: tel-01260319

https://tel.archives-ouvertes.fr/tel-01260319

Submitted on 21 Jan 2016

HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.


Adrien Gruson

To cite this version:

Adrien Gruson. Toward more realism and robustness in global illumination. Graphics [cs.GR]. Université Rennes 1, 2015. English. NNT: 2015REN1S059. tel-01260319.


THESIS / UNIVERSITÉ DE RENNES 1

under the seal of the Université Européenne de Bretagne

for the degree of

DOCTOR OF THE UNIVERSITÉ DE RENNES 1

Specialty: Computer Science

Doctoral school MATISSE

presented by

Adrien Gruson

prepared at the research unit UMR 6074 IRISA

and at the IRISA - Rennes Bretagne Atlantique center

ISTIC

Toward more Realism and Robustness in Global Illumination

Thesis defended in Rennes on 6 July 2015

before a jury composed of:

Tamy BOUBEKEUR

Professor at Télécom ParisTech / reviewer

Elmar EISEMANN

Professor at TU Delft / reviewer

Luce MORIN

Professor at INSA Rennes / examiner

Jaroslav KŘIVÁNEK

Associate Professor at Charles University / examiner

Kadi BOUATOUCH

Professor at Univ. de Rennes 1 / thesis advisor

Rémi COZOT

Associate Professor at Univ. de Rennes 1 / thesis co-advisor


The use of computer generated images has been growing for several years. For example, such images are used in the entertainment industry to produce content (films, video games) or to preview future projects and prototypes. Generated images have several levels of realism. In this PhD, we focus on the generation of photo-realistic images using physically based rendering processes.

Such images are often generated to achieve an artistic aim (atmosphere, aesthetics, information, etc.). To do so, the artist has several aspects to manage: the 3D scene itself and all the parameters used at the different steps necessary for generating images (rendering process, post-production steps, etc.). The creation of computer generated images can therefore be a difficult and time consuming process. The aim of this thesis is to help the artist achieve his/her goals.

In the first part of this PhD, we focus on improving the rendering step. This step is responsible for the image generation, taking as input the 3D scene and several other parameters (number of samples, type of rendering technique, etc.). The aim of our work is to accelerate this step or make it more robust, so that it is easier for the artist to use.

In that perspective, we propose a new rendering algorithm that renders participating media on the GPU. This algorithm is fast enough to make visualization almost interactive. However, it only supports participating media, which limits its usage.

This is why we design a more general rendering algorithm that can handle a vast variety of 3D scenes. This rendering algorithm is robust and progressive (making a preview of the final image possible). To achieve this, our technique improves (stochastic) progressive photon mapping (SPPM) by adding participating media support. Moreover, we use a Metropolis sampling procedure to achieve high efficiency on complex 3D scenes.

With this new algorithm, we have a robust and general rendering algorithm. However, it still shares a disadvantage with the Metropolis sampling procedure: a poor distribution of the relative error over the image plane. To address this issue, we propose a new importance function that distributes this error better. We propose two practical versions of this importance function for SPPM: an image-based formulation and a spatial one. Moreover, we use replica exchange and multiple importance sampling (MIS) to make this technique as robust as possible.

The second part of the thesis focuses on assisted tools for the artist. For example, we propose a new way to estimate the reference illuminant in 3D scenes. This illuminant can then be used for white balancing or style transfer during the post-processing step. Another example of our work lies in the possibility of automating some lighting configurations in 3D scenes. Indeed, lighting is important for the final image appearance, but it is difficult for the artist to find adequate parameters to configure it. We propose a new algorithm that takes as input the artist's intention (aesthetics), then optimizes and finds the light source configuration (size and flux) that matches the artist's wishes.


First of all, I would like to thank my two supervisors: Kadi Bouatouch and Rémi Cozot. Each of them helped me in his own area of expertise. I would especially like to thank Kadi for his time and attention. I know that I am not patient and easy-going (at times), but you were always positive and helped me a lot.

I would also like to thank my co-authors and research partners: Jaroslav, Mickaël, Vincent, Charly, Ajit and Sumant. Special thanks to Jaroslav and Mickaël. Jaroslav, thanks for your time and all you taught me about research (I am sorry I was hopeless at respecting deadlines). Moreover, I enjoyed your perfectionism, which pushed me beyond my limits. Mickaël, thanks to you too: you are more than a colleague, you are a friend. Thanks for your collaboration (on almost all my research projects) and your patience (contrary to mine).

Many thanks as well to the FRVSense group (Ricardo, Billal, Matis, Hristina, Ronan, Mahmoud, Christian and Maryse). More globally, I also want to thank the graphics community and the people from the research lab.

Finally, I would like to thank all my relatives and close friends: François for the beers and the SC2 team game sessions (even though your level is still very weak), which helped me relax! Thanks to my parents for their love, support, and for showing me that working is important. A special thank-you to Mathilde for her unconditional support and love throughout these four years.


List of figures
List of tables

1 Introduction
1 Summary of the contributions
2 Publications

I Background on Global Illumination

2 Mathematical and Physical models
1 Radiometric quantities
2 Surface interaction
3 Volume interaction

3 Monte Carlo solutions
1 General formulation
2 Importance sampling
2.1 General framework
2.2 Multiple distributions
3 Practical aspects
3.1 Direct rendering
3.2 Indirect rendering with unbiased estimator
3.2.1 Path tracing
3.2.2 Light tracing
3.2.3 Bidirectional path tracing
3.3 Indirect rendering with biased estimator
3.3.1 Photon mapping
3.3.2 Progressive photon mapping
3.4 Combining biased and unbiased estimators
3.5 Discussion

4 Markov Chain Monte Carlo
1 Introduction
2 Overview of the MLT algorithm
3 Practical aspects
3.2 Importance functions
3.3 Other mathematical tools

II Efficient and robust rendering techniques

5 Light propagation maps on GPU
1 Previous works
2 Fattal's algorithm
3 New method: Parallel and Scalable LPM
3.1 Parallelization
3.2 Streaming
4 Implementation and Results
5 Conclusions & Further works

6 Progressive volume photon tracing
1 Related work
2 Background
3 Overview
4 Implementation details
4.1 Preprocessing step
4.2 Visibility-driven photon shooting step
4.2.1 Radiance update
4.3 Collecting statistics
4.3.1 Image update
4.3.2 Radius update
5 Results
6 Conclusion

7 A spatial importance function for MLT
1 Related work
2 Overview
3 Importance function
4 Algorithm
4.1 Importance function calculation
4.2 Spatial region definition and refinement
4.3 Algorithm overview
4.4 Sampling from the importance function
5 Results
6 Limitations and discussion
7 Conclusions and future work

III Computer-aided global illumination techniques for artists

8 Eye-centered color adaptation in global illumination
1 Introduction
2 Chromatic adaptation
3 Related works
4 Our color adaptation method
4.1 Generalization of chromatic adaptation
4.2 Eye-centered estimate of the adaptation color
5 Results
5.1 Standard test cases
5.2 Complex test cases
5.3 Sequence test cases
6 Conclusion

9 Automatic aesthetics-based lighting design with global illumination
1 Introduction
2 Related works
2.1 Image-based methods
2.2 Global methods
2.3 Discussion
3 Overview of the approach
4 Approaching an aesthetics with function minimization
4.1 Objective function
4.1.1 fmeanObj and fmeanBack
4.1.2 fvarObj and fvarBack
4.1.3 fgrad
4.1.4 fhist
4.2 Free variables
4.3 Optimization
5 Results
6 Future improvements
7 Conclusion

10 Conclusion
1 Future work

Bibliography


List of figures

1.1 The different steps to produce computer generated images
2.1 Measure transformation from surface domain to solid angle
2.2 Reflective material (BRDF) or transmissive material (BTDF)
2.3 Veach path formulation
2.4 Different interactions between the light and the participating media
2.5 Participating media interaction: single and multiple scattering
3.1 CDF usage to sample proportionally to the PDF
3.2 Graphical explanation of the difference between the efficiencies of different sampling strategies for computing direct lighting
3.3 Rendered images with different sampling strategies for computing direct lighting
3.4 Rendered images with MIS for computing direct lighting
3.5 The different rendered images in case of direct or indirect rendering
3.6 Primitive and explicit light source connection path tracing
3.7 Comparison between path tracing and light tracing
3.8 The different path possibilities when using BDPT
3.9 Comparison between path tracing, light tracing and BDPT
3.10 Schematic explanation of the photon mapping / directional relaxation robustness
3.11 Comparison of BDPT and photon mapping
3.12 Knaus and Zwicker approach for progressive photon mapping
3.13 Comparison between BDPT, SPPM and VCM
4.1 Veach's mutations for path MLT
4.2 Manifold exploration for path MLT
4.3 Kelemen MLT using primary sample space
4.4 Different results for the same MLT process using different importance functions
4.5 The importance function of Chen et al. [CWY11] for SPPM
5.1 Errors due to the DOM discretization: false scattering and ray effect
5.2 LPM principle over a 2D domain
5.3 Ray traversal over the 2D domain
5.4 Parallelization issue when we put one thread per ray
5.5 Solution used to run LPM on a GPU
5.6 Streaming slice approach for the GPU implementation
5.8 Memory requirement for 25 propagation directions and 6 storage directions (U and I); comparison between the streamed and non-streamed approaches
5.9 Summary of the speedup between the original CPU algorithm and our implementation on 2 GPUs
5.10 Results of two 128³ participating media lit by an environment map
6.1 Different ways to gather photons: ray marching, BRE and our method
6.2 Different possible view rays in a scene (reflected by a glossy object)
6.3 Example of a beam kd-tree built for a set of beams
6.4 Plots of the RMSE for the "breakfast hall" and "dragon smokes" scenes
6.5 Results obtained for the "dragon smokes" scene
6.6 Results obtained for the breakfast hall scene (courtesy of Greg Zaal)
6.7 Results obtained for the kitchen scene (courtesy of Jay-Artist)
7.1 The importance function Î(Gk) for a measurement point
7.2 The spatial regions used for the spatially based importance function
7.3 Relative error distribution in dinner hall for different techniques
7.4 Comparison of our method using the image-based importance function and the spatially based importance function
7.5 Comparison of our method using only two Markov chains and using all three Markov chains
7.6 Comparison of our method without and with multiple importance sampling
7.7 Results for our technique in simple scenes compared to SPPM
7.8 Comparison matrix between VSPPM [HJ11], Vorba et al. [VKŠ+14] and our method
7.9 Example of style transfer
8.1 Global illumination rendering with and without chromatic adaptation
8.2 The 2 steps of the chromatic adaptation process
8.3 Architecture overview of our chromatic adaptation process
8.4 Results in Wilkie's test cases
8.5 Chromatic adaptation results when a red spotlight partially lights a white statue
8.6 Chromaticity diagram in RGB color space for a map of a 2-room scene and 3 view frustums
8.7 Spatial coherency of the adaptation color estimate in the case of the 2-room scene
8.8 Results in a map of a 3-room scene with a camera trajectory
8.9 Spatio-temporal coherency issue during a video sequence
8.10 Scene addressing transmission through a glass
9.1 In our technique (automated aesthetics-based lighting design), we mainly address two target aesthetics: high-key and low-key
9.2 Framework of our technique
9.3 Signature cumulated histograms
9.5 Optimization of our algorithm for the two different aesthetics in the teapot scene
9.6 Example of computation of minimal distance
9.7 Results in the Girl and Creature scenes
9.8 Results in the Fruit basket scene
9.9 Evolution of the objective function during the optimization process
10.1 The different steps for generating a computer image; several steps are repeated until the artist reaches the targeted image style
10.2 The different ways to build a light path


List of tables

6.1 Definition of quantities used in the PVPT chapter
6.2 Scene configuration and rendering parameters
9.1 Configurations and weights used for the target function
9.2 Final values for the teapot scene
9.3 Final values, fhist and fq values for the Girl and Creature scenes (fig. 9.7)


1 Introduction

Computer generated images can achieve different levels of realism. In this thesis, we focus on photorealistic images obtained with the help of physics laws related to light propagation. However, these images are generated to meet an artist's aesthetic objective. One way to achieve this goal is to let the artist play with the different input parameters at his/her disposal (green boxes, fig. 1.1): the 3D scene itself and all the parameters needed during the different rendering steps. Specifically, a 3D scene consists of: a virtual camera, a set of light sources, and different 3D objects together with the materials defining their appearance. A material describes how light is changed when it interacts with an object. All these variables (3D scene configuration and all other parameters) make it difficult for the artist to produce images that match his/her intent. A common solution is a time consuming trial-and-error process (fig. 1.1).

[Figure 1.1 pipeline: 3D Scene Description → Rendering Algorithm → Rendered Image → Post Production → Final Image, with parameters as inputs and try/error loops between the steps.]

Figure 1.1 – The different steps to produce computer generated images. All these steps have several inputs (orange boxes) and generate images as output. However, the variety of parameters makes it difficult for the artist to generate an image that matches his/her intent. So, several iterations are required to generate an acceptable image.

The motivation of this PhD work is to simplify the artist's workflow. For that, we have focused on two main topics:

1. Development of new rendering techniques that are more robust and faster. Robustness is needed so that the rendering technique achieves good performance on all possible 3D scenes. As a consequence, the artist can use the same rendering technique for all his/her projects (which reduces potential configuration errors). Moreover, fast feedback to the user (i.e. progressive rendering) is required so that the user does not wait a long time for the result. However, when using progressive rendering, the preview must be consistent with the final image quality.

2. Development of new tools dedicated to the artist, for example a technique that automates a redundant task, saving time for the artist. Another example is a technique that extracts useful information that can be used during the post-production step.

Organization of the dissertation

This manuscript is decomposed into three main parts:

1. Part 1 – Background on GI: physically-based rendering uses physics laws to simulate light/matter interactions. In this part, we will introduce the mathematical model used for physically-based rendering (Chapter 2). Then, we define the Monte Carlo estimators used to evaluate the rendering equation (Chapter 3). Finally, we will introduce Markov Chain Monte Carlo (MCMC), which uses the Metropolis algorithm to render 3D scenes that are complex in terms of geometry, visibility and light/matter interactions (Chapter 4).

2. Part 2 – Efficient and robust rendering techniques: First, we will present our new GPU algorithm to render participating media (Chapter 5). This algorithm allows almost interactive visualization for single and multiple scattering. Second, we will show how we have extended (stochastic) progressive photon mapping (SPPM) to handle participating media placed in a 3D scene, together with their interaction with the scene's objects (Chapter 6). This technique is slower than the first one but is more general and robust: it is able to handle any type of 3D scene (different kinds of material, scene complexity, etc.). Third, we will propose a new importance function for Metropolis-based rendering techniques that better distributes the relative error (Chapter 7). While the SPPM algorithm is robust when it uses a Metropolis sampling procedure, it still has a major drawback: the error of the estimator is not evenly distributed over the image plane. Our new importance function addresses this issue, with two practical implementations: an image-based importance function and a spatial one.

3. Part 3 – Computer-aided GI techniques for the artist: This part contains the description of two methods aiming at helping the artist in his/her creation process. First, we will present our new method to estimate the main illuminant color of a 3D scene (Chapter 8). To do that, we estimate the lighting (average irradiance) arriving at the observer. This illuminant can be used during a post-production step to apply color transformations (white balancing, color transfer, style transfer, etc.). Second, we will present a new method to determine the lighting setup of a 3D scene (light source size and flux) (Chapter 9). Our technique takes as input the user intent and uses it to find a lighting configuration (setup) that matches the artist's desire.


For the reader For a reader familiar with global illumination, chapters 2 and 3 can be skipped. Moreover, if the reader has a good background in Metropolis rendering techniques, chapter 4 can also be skipped. These background chapters do not bring any new contribution; they only help understanding the following chapters, which correspond to our contributions.

1 Summary of the contributions

The work presented in this thesis brings the following contributions to the computer graphics field:

• a new rendering algorithm for participating media implemented on GPU;

• a visibility-guided Metropolis algorithm for photon mapping inside participating media;

• a new importance function for Metropolis rendering to evenly distribute the relative error of the estimator;

• a robust estimation of the reference illuminant in a 3D scene;

• a flexible automatic technique for determining the lighting setup to target a specific aesthetics desired by an artist.

2 Publications

Most of the work presented in this thesis is published in the following papers:

• A. Gruson, A. Hakke Patil, R. Cozot, K. Bouatouch and S. Pattanaik, "Light Propagation Maps on Parallel Graphics Architectures", Eurographics Symposium on Parallel Graphics and Visualization, 2012

• C. Collin, M. Ribardiere, A. Gruson, R. Cozot, S. Pattanaik and K. Bouatouch, "Visibility-driven progressive volume photon tracing", CGI 2013 and The Visual Computer: International Journal of Computer Graphics, Volume 29, Issue 9

• A. Gruson, M. Ribardiere, R. Cozot and K. Bouatouch, "Rendu Progressif basé Metropolis-Hasting dans des scènes à topologies multiples", AFIG 2014 and REFIG Vol. 8

• A. Gruson, M. Ribardiere and R. Cozot, "Eye-Centred Color Adaptation in Global Illumination", Pacific Graphics 2013 and Computer Graphics Forum, Volume 32 (2013), Number 7

• V. Leon, A. Gruson, R. Cozot and K. Bouatouch, "Automatic Aesthetics-based Lighting Design with Global Illumination", Pacific Graphics (Short paper), 2014

Part I

Background on Global Illumination

In this part, we will summarize all the technical and mathematical details needed to produce realistic computer generated images. In particular, we will focus on physically based rendering. In physically based rendering, a physical model describes the interaction between light and the different elements of a 3D environment, often composed of:

1. light sources;

2. surfaces and/or volumes that interact with light stemming from light sources (reflection, refraction, scattering, absorption, etc.);

3. a virtual camera which represents the viewer.

Actually, light can be expressed as a flux emitted by light sources that progressively reaches an energy equilibrium. However, the speed of light is so fast that this energy equilibrium is reached instantly. The aim of physically based rendering techniques is to evaluate this equilibrium numerically so as to compute a final image. This equilibrium can be expressed as an integral equation. In practice, generating an image consists in finding light paths that start from light sources, interact several times with the scene, and finally reach the camera. However, determining these light paths is difficult because of the complexity of the different light interactions. In chapter 2, we will present the physical models used to describe the different light interactions.

The set of possible light paths is infinite, so an approximation is needed. One solution is to use Monte Carlo methods, which randomly create light paths in the scene (chapter 3); the contributions of the randomly sampled paths are then averaged to produce an image. This solution is elegant and can provide high quality images. However, the efficient creation of valid light paths is crucial when it comes to producing noiseless images, and this generation can be difficult in a complex 3D environment with complex materials or difficult visibility.

Intensive research has been done to build efficient strategies to construct valid paths. Some of them are targeted at special scenes or certain light phenomena; others combine several techniques to handle different kinds of phenomena. In the end, the rendering technique can be difficult to implement and not easy to use (e.g. many parameters). One simple solution is to extract some knowledge about the scene from previous sample paths. This solution is often built upon a Markov chain model and uses the Metropolis-Hastings algorithm [MRR+53, Has70] to efficiently produce light paths (chapter 4).


2 Mathematical and Physical models

Light propagates in a 3D space Light can reach a surface (from an incoming direction) or leave it (in an outgoing direction). It can also be emitted, reflected, scattered or absorbed, taking into account the incident/outgoing directions. For this reason, before giving the definitions of the physical light quantities (radiometric quantities), we first review some mathematical concepts in 3D space. Indeed, light propagation can be expressed in the direction domain (direction of scattering) or in the surface domain. In this section, we review the surface and direction domain formulations and their associated measures, used in the rest of the manuscript. Then, we show how to move from one domain to the other.

Surface domain In a 3D environment, we assume that the scene geometry consists of a finite set of surfaces in R³. The union of all the surfaces is denoted M. Its area measure at a surface point ~x is denoted dA(~x). Moreover, each surface point has an associated normal ~n(~x) (written ~n for simplicity) that describes the surface orientation in 3D space.

Directional domain This domain is important because, in physically based rendering, many sampling decisions are taken in it. Each direction is represented by a normalized vector ~ω ∈ R³. The domain of directions originating from a surface point ~x can be divided into two parts. First, we can define the upper hemisphere:

Ω+ = {~ω : |~ω| = 1, ~ω · ~n ≥ 0} (2.1)

Second, the other part of the sphere (the lower hemisphere) is defined as:

Ω− = {~ω : |~ω| = 1, ~ω · ~n ≤ 0} (2.2)

In general, in computer graphics, we are often interested only in the upper hemisphere, which receives the incoming light (except in volume rendering, where the notion of normal is absent and the direction domain is the full sphere). For simplicity, in the rest of the manuscript, we will use Ω for the space of directions. Moreover, we have the equality:

cos θ = ~ω · ~n (2.3)

where θ is the polar angle between the direction ~ω and the surface normal ~n. For simplicity, we use the absolute value in order not to take into account the orientation of these two vectors.

In the directional domain, the measure is the solid angle, denoted dσ(~ω) for a given direction ~ω. A solid angle is the 3D equivalent of an angle defined in 2D space. Solid angles are expressed in steradians.


Figure 2.1 – (a) The possible projections for measures defined in the surface (green) or directional (red) domains. (b) Surface dA(~y) subtended by the solid angle dσ(~ω). With the measure transformation, it is possible to express the solid angle dσ(~ω) using dA(~y) (see eq. (2.6)).

Projection and domain transformation Each measure (in the surface (dA) or directional (dσ) domain) has an equivalent projected measure. For example, to project a solid angle onto a surface, we use the polar angle θ between the direction ~ω and the normal ~n:

dσ~x(~ω) = cos θ dσ(~ω) (2.4)

The inverse operation, projecting a surface onto a solid angle, is also possible:

dA~ω(~x) = cos θ dA(~x) (2.5)

Moreover, if we want to combine different models expressed in different spaces, we need to express them in the same space. This situation occurs often in rendering where some decisions are made in the directional domain and others in the surface domain. However, it is possible to change the measure from the directional to the surface domain, by transformation from the solid angle ~ω to the projected point ~y:

~x(~ω) =

~ x(~ω)

dA(~y) dA(~y) = G(~x↔~y)dA(~y) (2.6) where G(~x↔~y) is the geometry factor that corresponds to the Jacobian relating solid angle to area. This factor is expressed as:

G(~x↔~y) = |~n(~x) · ~ω| × |~n(~y) · ~ω|

||~x − ~y||2 V(~x↔~y) (2.7)

Note the apparition of the term V (~x↔~y) which expresses the visibility between ~x and ~y. In physically based rendering, different decisions are expressed in different domains (direction or surface).
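To make the geometry factor concrete, the following minimal Python sketch evaluates eq. (2.7) for two surface points; the `visible` callback is an assumption here, standing in for the ray-traced visibility query V(~x↔~y) that a real renderer would provide:

    import math

    def geometry_factor(x, n_x, y, n_y, visible):
        # Geometry factor G(x <-> y) of eq. (2.7); `visible` stands in for
        # the ray-traced binary visibility V(x <-> y).
        d = [b - a for a, b in zip(x, y)]            # vector from x to y
        dist2 = sum(c * c for c in d)
        dist = math.sqrt(dist2)
        w = [c / dist for c in d]                    # normalized direction
        cos_x = abs(sum(a * b for a, b in zip(n_x, w)))
        cos_y = abs(sum(a * b for a, b in zip(n_y, w)))
        return cos_x * cos_y / dist2 * (1.0 if visible(x, y) else 0.0)

    # Two parallel patches facing each other at unit distance: G = 1.
    print(geometry_factor((0, 0, 0), (0, 0, 1), (0, 0, 1), (0, 0, -1),
                          lambda a, b: True))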


1 Radiometric quantities

Light is electromagnetic radiation and can be measured with physical quantities called radiometric quantities. By using these quantities correctly, we arrive naturally at the rendering equation used in rendering. This equation expresses the light transport problem as an integral evaluation problem.

Flux (or radiant power), denoted Φ, is the fundamental quantity, expressing the total light energy Q received or emitted per unit of time:

Φ = dQ/dt

Its unit is the watt (W). This quantity is fundamental, as all the following quantities derive from it.

Irradiance, denoted E, is the flux received per unit surface area:

E(~x) = dΦ / dA(~x) (2.8)

Its unit is watt per square meter (W·m⁻²). The relation between this quantity and the flux is:

Φ = ∫_S E(~x) dA(~x) (2.9)

where S is a surface. Note that the same quantity exists for emission and is then named radiosity or emittance.

Radiance, denoted L, is the flux emitted by a surface per unit projected area and per unit solid angle. The radiance emitted at the point ~x into the direction ~ω is denoted L(~x→~ω). For emitted radiance, the differential expression (see fig. 2.1) is:

L(~x→~ω) = d²Φ / (dσ(~ω) dA(~x) cos θ) (2.10)

where d²Φ is the differential flux of the light emitted at point ~x and θ is the angle between the normal ~n and the direction ~ω. This quantity is important because it expresses light as we perceive it (leaving a surface in a direction). Radiance is independent of the distance. Moreover, there exists a relationship between radiosity and emitted radiance (eqs. (2.4) and (2.5)):

L(~x→~ω) = dE(~x→~ω) / dσ~x(~ω) (2.11)

There also exists a notion of incident radiance: the radiance incident at a point ~x from a direction ~ω, denoted L(~x←~ω). In this case, the differential flux comes from another surface.


Luminance is the photometric quantity equivalent to radiance (a radiometric quantity). Luminance equals radiance up to a factor modeling the human eye response. This is why, in the rest of the manuscript, we use both terms to refer to the same quantity.

2 Surface interaction


Figure 2.2 – In general there are two main types of material for surfaces: reflective (BRDF) or transmissive (BTDF). Transmission obeys Snell's law. Moreover, for these two types, different interactions are possible: diffuse, glossy and Dirac. These interactions depend on the roughness of the surface: a diffuse interaction corresponds to an extremely rough surface, while a completely smooth surface creates a single light reflection.

Now that we have defined the physical quantities of light, let us see how light interacts with surfaces. To do that, we describe how the radiometric quantities change with this interaction. Different surface interactions are possible, summarized in fig. 2.2. All these interactions are handled by a general model named BSDF (bidirectional scattering distribution function). This model includes two light phenomena: reflection and transmission. The BSDF is approximated by several mathematical models [TS67, ON94, WMLT07]; for more information, the reader can refer to the PBRT book [PH10]. Moreover, note that BSDF models make an approximation: the incoming and outgoing positions of light are the same. A more general model is the BSSRDF [JMLH01]. However, for simplicity, we will focus our presentation on the BSDF model, denoted fr. To be physically correct, a BSDF model needs to meet two important constraints:

1. Helmholtz reciprocity principle: for every pair of directions ~ωi and ~ωo, we have fr(~x, ~ωi→~ωo) = fr(~x, ~ωo→~ωi).

2. Energy conservation: for every direction ~ωo, the total energy of the reflected light must satisfy:

∫_Ω⊥ fr(~x, ~ωi→~ωo) dσ~x(~ωi) ≤ 1

A BRDF is expressed (using eq. (2.11) for the change of quantity) as the change of the radiance coming from ~ωi into the radiance leaving toward ~ωo:

fr(~x, ~ωi→~ωo) = dL(~x→~ωo) / dE(~x←~ωi) = dL(~x→~ωo) / (L(~x←~ωi) dσ~x(~ωi)) (2.12)

where ~ωi is the incident direction, ~ωo the outgoing direction and L(~x←~ωi) the incident radiance at point ~x. We can integrate this equation and express the outgoing radiance in the direction ~ωo:

dL(~x→~ωo) = fr(~x, ~ωi→~ωo) L(~x←~ωi) dσ~x(~ωi)

L(~x→~ωo) = ∫_Ω⊥ fr(~x, ~ωi→~ωo) L(~x←~ωi) dσ~x(~ωi) (2.13)

In physically based rendering, we need to compute the outgoing radiance of a surface viewed through the camera. It is possible to express it using eq. (2.13). To do so, we need to include the light source emission Le, which gives the rendering equation [Kaj86]:

L(~x→~ωo) = Le(~x→~ωo) + ∫_Ω⊥ fr(~x, ~ωi→~ωo) L(~x←~ωi) dσ~x(~ωi) (2.14)

where L(~x→~ωo) is the radiance viewed by the camera and L(~x←~ωi) the incident radiance. Note that in the integral the incident radiance L(~x←~ωi) is unknown: to estimate it, we need to cast a ray from the position ~x in the direction ~ωi and evaluate the same integral again at the intersection point. This amounts to a recursive evaluation of the integral. By applying the measure change of eq. (2.6), the rendering equation can be expressed in the surface domain M as:

L(~x→~ωo) = Le(~x→~ωo) + ∫_M fr(~x, (~x − ~y)→~ωo) L(~x←~y) G(~x↔~y) dA(~y) (2.15)

where the incident direction is replaced by ~ωi = (~x − ~y), and L(~x←~y) is the incident radiance emitted by the point ~y. Note that the visibility term is included inside the geometry factor.

The previous formulas only deal with radiance computation. However, a computer generated image is a 2D array of pixels. Each pixel is oversampled into points ~x′ (i.e. to solve the aliasing problem or to reduce noise), which corresponds to a set of view rays, each of them passing through a point ~x′. These rays may intersect the scene at a point ~x, where the reflected radiance L(~x→~x′) needs to be evaluated. To compute the radiance viewed from a pixel j, an integration over this pixel is needed:

I_j = ∫_{M×M} W_e^{(j)}(~x→~x′) L(~x→~x′) G(~x↔~x′) dA(~x) dA(~x′) (2.16)

where ~x′ lies on the pixel in image space and W_e^{(j)}(~x→~x′) represents the emitted camera importance for the pixel j. For example, this term includes the filtering operation done on the pixel side.

Evaluating eq. (2.15) requires generating paths of different lengths efficiently, which is difficult: light can bounce multiple times before it reaches the camera. In the model, this is visible in eq. (2.15), where L appears on both sides. A general formulation was introduced by Veach [Vea97]. It uses the surface domain to define the rendering problem for a given pixel j:

I_j = ∫_P f_j(x) dµ(x) (2.17)

where P is the path space, x = ~x0 ~x1 · · · ~xk is a path defined by its vertices and dµ(x) is the measure, the product of surface-area measures at each vertex ~xl:

dµ(x) = ∏_{l=0}^{k} dA(~xl) (2.18)

The contribution function f_j(x) can be expressed as the product of the light source emission, the path throughput and the sensor response:

f_j(x) = Le(~x0→~x1) T(x) W_e^j(~x_{k−1}→~xk) (2.19)

T(x) = G(~x0↔~x1) ∏_{i=1}^{k−1} fr(~x_{i−1}→~xi→~x_{i+1}) G(~xi↔~x_{i+1}) (2.20)

fr(~xi−1→~xi→~xi+1)G(~xi↔~xi+1) (2.20) where Wj

e(~xk−1→~xk) is the sensor response and T (x) the throughput of the given path that includes the geometry factor and the BSDF values. An overview of this formulation is given in fig. 2.3. ~xk ~x0 ~x1 ~xk−1 G(~x0 ↔~x1) Le(~x0→~x1) fr(~x0→~x1→~x2) fr(~xk−2→~xk−1→~xk) G(~xk− 1↔~x k) Wj e(~xk−1→~xk) S

Figure 2.3 – Throughput figure of one path using Veach formulation.

However, the path integral includes paths of all possible lengths. One possibility is to split the different path lengths into different integrals:

I_j = Σ_{k=1}^{∞} ∫_{M^{k+1}} f_j(~x0 . . . ~xk) dA(~x0) . . . dA(~xk) (2.21)

These different integrals over surfaces (eq. (2.21)) need to be evaluated. They are restricted to surface interactions only. However, light can also interact with more complex objects such as smoke or liquids, called participating media. These phenomena introduce light/volume interactions, which we need to take into account and add to our mathematical formulation.
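As an illustration of eqs. (2.19) and (2.20), the following Python sketch accumulates the contribution of one path; the callables Le, We, fr and G are assumptions standing in for the renderer's emission, sensor response, BSDF and geometry factor routines:

    def path_contribution(vertices, Le, We, fr, G):
        # Contribution f_j(x) of eqs. (2.19)-(2.20) for a path x0...xk:
        # emission * throughput * sensor response.
        k = len(vertices) - 1
        T = G(vertices[0], vertices[1])              # throughput T(x)
        for i in range(1, k):
            T *= fr(vertices[i - 1], vertices[i], vertices[i + 1])
            T *= G(vertices[i], vertices[i + 1])
        return Le(vertices[0], vertices[1]) * T * We(vertices[k - 1], vertices[k])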

3 Volume interaction

Volume interactions are due to participating media (such as smoke, fire, clouds, ice, dust, etc.) that interact with light. Participating media are important for realism, particularly in the film industry. However, integrating such media is computationally expensive. Indeed, their integration domain has one more dimension than surfaces: the intersection between a ray and a medium is a line segment, not a point (fig. 2.5). An interaction can occur at each point along this intersection segment, which makes the computation expensive.

Figure 2.4 – The different interactions between light and a participating medium: emission, absorption, out-scattering and in-scattering.

In such media, different light interactions can occur (fig. 2.4):

• Absorption and out-scattering: together also called extinction, these two phenomena are responsible for the energy lost inside the medium.

• In-scattering: the light scattered into the view direction. Indeed, at each point in the medium, light can come from every direction and be partially scattered into the view direction.

• Emission: self-emission (i.e. the medium acts as a light source, such as fire) in the view direction.

All these interactions are expressed by the radiative transfer equation (RTE) [Sub60]. This equation expresses the change of radiance at a position ~y inside the volume in the direction ~ω:

dL(~y→~ω)/d~y = σa(~y) Le(~y→~ω) + σs(~y) Li(~y→~ω) − σt(~y) L(~y→~ω) (2.22)

where Le is the self-emitted radiance and Li the radiance due to incident light that scatters into ~ω. Moreover, σa(~y) is the absorption coefficient at the position ~y, σs(~y) the scattering coefficient and σt(~y) = σs(~y) + σa(~y) the extinction coefficient. In homogeneous media, these coefficients are constant at every point; otherwise the medium is called heterogeneous. The incident light Li at the position ~y gathers all possible incoming directions:

Li(~y→~ω) = ∫_Ω ρ(~y, ~ωi→~ω) L(~y←~ωi) dσ(~ωi) (2.23)

where Ω is the sphere of directions and ρ(~y, ~ωi→~ω) is the phase function, similar to the BSDF, describing the proportion of radiance coming from the direction ~ωi and scattered into the direction ~ω. Moreover, like the coefficients, the phase function may or may not vary inside the medium. Different phase function models exist; for more information, the reader can refer to the book of Engel et al. [HKRs+06].


Figure 2.5 – The incident radiance inside the participating medium can bounce only once (single scattering) or multiple times (multiple scattering).

However, we need to express the RTE (eq. (2.22)) in an integral form to be able to evaluate it (chapter 3). For example, we want to evaluate the total energy L(~x←~ω) received by a viewer at the position ~x from the direction ~ω. To do that, we need to integrate the light interactions along the segment where the view ray crosses the medium. For this purpose, we define an entry point ~xe and an exit point ~xs. Then, we integrate the light interactions over the segment using the RTE (eq. (2.22)). Moreover, we have, if applicable, to take into account the back surface at the position ~x0. All these considerations are shown in fig. 2.5 and expressed as:

L(~x←~ω) = ∫_{~xe}^{~xs} τ(~xe↔~y) σa(~y) Le(~y→~ω) d~y
 + ∫_{~xe}^{~xs} τ(~xe↔~y) σs(~y) [ ∫_Ω ρ(~y, ~ωi→~ω) L(~y←~ωi) dσ(~ωi) ] d~y (2.24)
 + τ(~xe↔~xs) L(~x0→~ω)

where τ(~xe↔~y) = e^{−β(~xe→~y)} is the transmittance, the fraction of radiance that travels from ~xe to ~y without being absorbed or scattered, and β is the optical thickness: the higher its value, the more opaque the volume.

β(~xe→~y) = ∫_{~xe}^{~y} σt(~u) d~u
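The following Python sketch estimates the transmittance τ by ray marching the extinction coefficient along the segment; the sigma_t callable is an assumption (a lookup into the medium), and the fixed-step quadrature is a simple approximation of the optical thickness integral, exact only in the homogeneous case:

    import math

    def transmittance(xe, y, sigma_t, steps=128):
        # tau(xe <-> y) = exp(-beta(xe -> y)), with the optical thickness
        # beta approximated by midpoint ray marching of sigma_t.
        d = [b - a for a, b in zip(xe, y)]
        dist = math.sqrt(sum(c * c for c in d))
        dt = dist / steps
        beta = 0.0
        for i in range(steps):
            t = (i + 0.5) / steps                    # midpoint, as a fraction
            p = [a + c * t for a, c in zip(xe, d)]
            beta += sigma_t(p) * dt
        return math.exp(-beta)

    # Homogeneous check: sigma_t = 0.5 over a segment of length 2 gives e^-1.
    print(transmittance((0, 0, 0), (0, 0, 2), lambda p: 0.5))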

Now that we have defined the volume interaction as an integral problem, we want to express it in the path domain. It is possible to generalize the path framework introduced by Veach [Vea97] to include participating media by changing the throughput formulation:

T(x) = ∏_{i=0}^{k−1} f(~xi) G(~xi↔~x_{i+1}) Vatt(~xi↔~x_{i+1}) (2.25)

f(~xi) = fr(~x_{i−1}→~xi→~x_{i+1}) when ~xi is on a surface; ρ(~x_{i−1}→~xi→~x_{i+1}) when ~xi is in a medium (2.26)

G(~x↔~y) = Dx(~x) Dy(~y) / ||~x − ~y||² (2.27)

Vatt(~x↔~y) = τ(~x↔~y) V(~x↔~y) (2.28)

where Dx(~x) is the projection operator: a cosine term if ~x is on a surface, and 1 if ~x is in a medium. Vatt is the attenuated visibility term: the regular visibility term V(~x↔~y), extracted from the geometry factor, multiplied by the transmittance of the medium. Note that if there is no medium between ~x and ~y, the transmittance is equal to 1.


3 Monte Carlo solutions

In the previous chapter, we defined physically based rendering as an integral evaluation problem (eq. (2.21)). In this chapter, we detail how to evaluate this integral efficiently. Several approaches are possible; Monte Carlo based approaches are good candidates because of their simplicity and flexibility. For these reasons, we detail this approach in the following sections. First, we review the general formulation of the Monte Carlo estimator and its properties. Second, we present improvements such as importance sampling and multiple importance sampling. Finally, we show how to use the estimator in physically based rendering.

1 General formulation

Before giving the Monte Carlo estimator, we need some mathematical background. The expected value E_p[f(x)/p(x)], where p(x) is the probability density function (pdf), is equal to the integral of a function f(x) over a domain Ω:

∫_Ω f(x) dx = ∫_Ω (f(x)/p(x)) p(x) dx = E_p[f(x)/p(x)] (3.1)

The Monte Carlo estimator is an estimate of this expected value: to compute the integral of f(x), Monte Carlo estimates the expected value of f(x)/p(x). Given a set of N random samples xi ∈ Ω drawn from p, the Monte Carlo estimator FN is:

FN = (1/N) Σ_{i=1}^{N} f(xi)/p(xi) (3.2)

where p(xi) is the probability density of generating xi. The simplest density is the uniform one. When using the Monte Carlo approach, the pdf p(x) must be non-zero for all x where |f(x)| > 0. Note that, for a uniform pdf, this condition is always fulfilled.

For Monte Carlo, the variance corresponds to the estimation error; the aim is to reduce this variance as much as possible. The variance of the estimator FN is defined as:

V[FN] = E[(FN)²] − E[FN]² (3.3)

Estimator properties A Monte Carlo estimator FN is consistent if it converges to the correct solution as the number of samples goes to infinity. This can be expressed by the following formula, for any ε > 0:

lim_{N→∞} Prob[ |FN − ∫_Ω f(x) dx| > ε ] = 0 (3.4)

A Monte Carlo estimator FN is unbiased if the expected value of the estimate is equal to the correct solution:

∫_Ω f(x) dx − E[FN] = 0 (3.5)

Note that an unbiased estimator is not automatically consistent. Indeed, some rendering algorithms use a pre-computation step to evaluate some values (for example, the overall brightness of the image plane). These values are then used by an unbiased rendering algorithm. However, as these values are never refined, the resulting algorithm can be unbiased but not consistent.

Infinite dimension of integration In eq. (2.21), a path can bounce a large number of times; however, fixing a maximum number of bounces (i.e. the dimension of the integral) makes the estimator biased. One solution is the Russian roulette approach, which stops the path with a probability q. The throughput of a surviving path then needs to be scaled by the inverse of the continuation probability, i.e. divided by 1 − q. This keeps the estimator unbiased:

E[I] = (1 − q) (E[I] / (1 − q)) + q · 0 = E[I] (3.6)

This technique can increase the variance of the estimator but keeps it unbiased.
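A minimal Python sketch of this Russian roulette scheme, where estimate_tail is an assumed callable that would, in a path tracer, estimate the rest of the path:

    import random

    def rr_estimate(estimate_tail, q=0.5):
        # Russian roulette (eq. (3.6)): kill the path with probability q,
        # otherwise divide the surviving estimate by 1 - q to stay unbiased.
        if random.random() < q:
            return 0.0
        return estimate_tail() / (1.0 - q)

In a path tracer, estimate_tail would itself apply rr_estimate recursively, so that the expected path length stays finite while the estimator remains unbiased.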

Samples placement To increase the efficiency of a Monte Carlo estimator, we need to reduce its variance using sampling strategies such as:

• Stratified sampling: it covers the integration domain better. The idea of this technique is to partition the domain and run a separate Monte Carlo estimator in each stratum; the final estimator is a weighted sum of the estimates over all strata (see the sketch after this list).

• Importance sampling: use a pdf p proportional to the integrand. Indeed, the ratio f(x)/p(x) is then flatter than f(x). This technique reduces variance and is often used in global illumination.

Note that this list is not exhaustive. Different variance reduction techniques can be combined to achieve lower variance. In physically based rendering, importance sampling is a common technique to reduce the variance. This approach is presented in detail in the next section.
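As announced above, here is a minimal Python sketch of stratified sampling over [0, 1): the domain is split into equal strata and one jittered sample is drawn per stratum (equal weights, since the strata have equal size):

    import random

    def stratified_estimate(f, n_strata=100):
        # Split [0, 1) into equal strata and draw one jittered sample per
        # stratum; averaging the values estimates the integral of f on [0, 1).
        total = 0.0
        for i in range(n_strata):
            x = (i + random.random()) / n_strata
            total += f(x)
        return total / n_strata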

2 Importance sampling

2.1 General framework

The key idea is to choose a pdf p proportional to the integrand f. The more the shape of the pdf resembles the function, the more the variance is reduced. The extreme case corresponds to a pdf exactly proportional to the function, i.e. c · p(x) = f(x), where c is a constant. In this special case, the variance of the estimator is zero:

(1/N) V[f(x)/p(x)] = (1/N) V[c] = 0

However, this is not feasible in practice. The main reason is that the constant factor (which is unknown) is equal to

c = ∫ f(x) dx

which is exactly the integral we are trying to evaluate. Moreover, we need to generate samples with a pdf proportional to f(x), which can itself be an issue. For instance, in physically based rendering, the integrand (eq. (2.21)) is a product of several terms, some of which are unknown (e.g. the incident radiance); this makes it harder to determine the CDF (the cumulative distribution function used to sample proportionally to a pdf, fig. 3.1).

Figure 3.1 – The CDF is computed by integrating the PDF. Uniform samples (u1 and u2) are then mapped through the inverse CDF; the mapped samples are distributed proportionally to the PDF.
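As a worked example of fig. 3.1, consider the pdf p(x) = 2x on [0, 1]; integrating gives the CDF P(x) = x², so the inverse-CDF sample is √u. A minimal Python sketch:

    import math, random

    def sample_linear():
        # Inverse-CDF sampling for p(x) = 2x on [0, 1]:
        # P(x) = x^2, hence P^-1(u) = sqrt(u).
        return math.sqrt(random.random())

    # Sanity check: under p(x) = 2x the expected value of x is 2/3.
    print(sum(sample_linear() for _ in range(100_000)) / 100_000)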

Bad importance sampling (when the shapes of p(x) and f(x) are very different) can lead to a variance higher than that obtained with uniform sampling. Determining a pdf p(x) that achieves good importance sampling over a high-dimensional integration domain is a difficult task. One solution is to create the pdf p(x) from several pdfs pi(x), each defined on a sub-domain.

2.2 Multiple distributions

Constructing p(x) from several pdfs to sample the function f(x) may seem like a good idea at first glance, but the problem is how to use the different sample distributions to get a lower variance. Indeed, naively averaging over the different sampling strategies can add variance (see eq. (3.8)). One elegant solution is to combine the different estimators with the multiple importance sampling (MIS) estimator, defined as follows:

F = Σ_{i=1}^{n} (1/ni) Σ_{j=1}^{ni} wi(xi,j) f(xi,j)/pi(xi,j) (3.7)

where n is the number of sampling strategies and ni the number of samples allocated to strategy i; xi,j is the j-th sample from the distribution pi, and each sample is assigned a weight wi. In other words, this formula is a weighted sum of the individual estimators f(xi,j)/pi(xi,j). To obtain an unbiased estimator, the weighting functions wi must meet two conditions:

1. Σ_{i=1}^{n} wi(x) = 1 whenever f(x) ≠ 0;

2. wi(x) = 0 whenever pi(x) = 0.

These conditions imply that the set of sampling techniques must cover the region where f(x) ≠ 0. However, a single sampling technique pi does not need to sample the whole domain, only a sub-domain.

Now, we need to discuss the choice of the weighting functions wi. Suppose that we have three pdfs p1, p2 and p3, and that only one sample is taken from each. This leads to the following estimator:

F = w1 f(x1,1)/p1(x1,1) + w2 f(x2,1)/p2(x2,1) + w3 f(x3,1)/p3(x3,1) (3.8)

If the weighting functions are constant and one sampling strategy is bad, then F will have high variance as well, since:

V[F] = w1² V[F1] + w2² V[F2] + w3² V[F3]

Different weighting strategies are discussed by Veach [Vea97]. One possible weighting strategy is the power heuristic:

wi(x) = (ni pi(x))^β / Σ_k (nk pk(x))^β (3.9)

For β = 1, we obtain another weighting strategy called the balance heuristic. The idea behind these heuristics is to assign a bigger weight to the sampling strategy with the higher probability density. Using the formulation of the balance heuristic, we can rewrite the global estimator as:

F = Σ_{i=1}^{n} (1/ni) Σ_{j=1}^{ni} [ni pi(xi,j) / Σ_k nk pk(xi,j)] f(xi,j)/pi(xi,j)
 = Σ_{i=1}^{n} Σ_{j=1}^{ni} f(xi,j) / Σ_k nk pk(xi,j)
 = (1/N) Σ_{i=1}^{n} Σ_{j=1}^{ni} f(xi,j) / Σ_k ck pk(xi,j)

where ck = nk/N. We can see that this formulation clearly expresses a Monte Carlo estimator whose pdf is a mixture of several pdfs.
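A minimal Python sketch of this combined estimator with the balance heuristic, taking one sample per strategy (ni = 1); the (sample, pdf) pairs are assumptions supplied by the caller, and each pdf must be evaluable at any sample location:

    def mis_estimate(f, strategies):
        # Balance-heuristic MIS (eq. (3.7), beta = 1) with one sample per
        # strategy (n_i = 1): a sample x drawn from p_i contributes
        # f(x) / sum_k p_k(x).
        total = 0.0
        for sample, _ in strategies:
            x = sample()
            denom = sum(pdf(x) for _, pdf in strategies)
            total += f(x) / denom
        return total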


Conclusion We have seen in this section how to use the Monte Carlo estimator to evaluate an integral. Multiple importance sampling proved to be an efficient way to reduce the variance of the Monte Carlo estimator; this technique is commonly used in physically based rendering. However, we still need to define the set of sampling strategies. In the next section, we discuss the different uses of the Monte Carlo estimator in rendering.

3 Practical aspects

In this section, we present the different ways of using Monte Carlo to solve the light transport equation (chapter 2). As we have seen in the previous section, the pdf p(x) needs to mimic the integrand to reduce the variance of the estimator. In rendering, the variance is perceptible as noise in the rendered image. To produce high quality images, this noise has to be kept below a certain threshold so that it is not perceptible by the user.

To build the pdf p(x), different strategies are possible in physically based rendering. The aim of rendering is to find contributive light paths that connect to the camera. To introduce the different sampling strategies, we first consider a sub-problem of the light transport equation: direct rendering, which considers one light bounce only. Due to its low dimensionality, this problem is easy to solve with Monte Carlo, and it makes the different sampling strategies and their respective performances easy to understand. So, at the beginning of this section, we describe a practical implementation for direct rendering. This implementation uses all the techniques described in the Monte Carlo section (importance sampling, multiple importance sampling).

Then, we focus on physically based rendering with several light bounces. We present the classical unbiased estimators used to evaluate the light transport equation (eq. (2.21)): path tracing, light tracing and bidirectional path tracing. Moreover, we discuss their respective advantages and drawbacks compared to the other rendering techniques.

All unbiased techniques become inefficient in certain sampling scenarios, for example when the contributive part of the integrand reduces to a small region of the path domain. This is the case when a light caustic is viewed through a smooth mirror. To address this issue, we introduce biased rendering techniques, which are more robust than unbiased ones.

The convergence rate (how the variance decreases with the number of samples) differs between these two classes of techniques. Unbiased techniques often have a better convergence rate than biased ones; on the other hand, biased techniques are more robust when evaluating the path space. Recent research focuses on combining these two classes of techniques; we review this work at the end of this section.

Note that in this section we mainly consider surface integration (without participating media). However, in some parts of this section, methods will be mentioned when it comes to rendering participating media.


3.1 Direct rendering

The direct rendering problem can be reduced to a visibility problem: it amounts to determining the visibility between the light sources and the surfaces viewed through the camera. Only one light bounce is considered. We can express the problem in the directional domain (similarly to eq. (2.14)), restricting the incoming radiance to the radiance Le emitted from the light sources and arriving at point ~x:

L(~x→~ωo) = ∫_Ω⊥ fr(~x, ~ωi→~ωo) Le(~x←~ωi) dσ~x(~ωi) (3.10)

where ~x is a point on a surface viewed through the camera and ~ωo is the view direction.

The integrand is the product of two terms: the BSDF fr and the incident direct lighting Le. As we have seen before, constructing a sampling strategy that includes both terms is challenging. However, it is possible to construct a sampling strategy that considers only one term: sampling according to the BSDF, or sampling according to the radiance of the light source.

Sampling according to the BSDF To sample the BSDF, the incident direction ~ωi is randomly sampled according to a pdf p(~ωi) proportional to the BSDF value:

p(~ωi) ∝ fr(~x, ~ωi, ~ωo)

where p is expressed using projected solid angles. In practice, to render an image, we first trace a ray from the camera through a given pixel. This ray may intersect the 3D scene at a point ~x. Then we choose a potential incident light direction ~ωi according to the BSDF and trace a ray from the point ~x in this direction. At the intersection point ~x′, we evaluate the emitted radiance Le(~x←~x′). For this sampling strategy, the Monte Carlo estimator is:

L(~x→~ωo) ≈ (1/N) Σ_{j=1}^{N} fr(~x, ~ωj→~ωo) Le(~x←~ωj) / p(~ωj) (3.11)

where N is the number of samples. Note that this sampling strategy does not take into account the incident radiance from the light sources: only one term of eq. (3.10) is considered.
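A minimal Python sketch of the estimator of eq. (3.11); all the callables (sample_bsdf, pdf_bsdf, f_r, trace_emitted) are assumptions standing in for the renderer's BSDF sampling, pdf evaluation, BSDF evaluation and ray tracing routines:

    def direct_lighting_bsdf(x, wo, sample_bsdf, pdf_bsdf, f_r,
                             trace_emitted, n=64):
        # Estimator of eq. (3.11): draw directions according to the BSDF
        # and fetch the emitted radiance by tracing a ray; trace_emitted
        # returns Le at the first hit (0 if no emitter is hit).
        total = 0.0
        for _ in range(n):
            wi = sample_bsdf(x, wo)        # wi ~ p, p proportional to f_r
            total += f_r(x, wi, wo) * trace_emitted(x, wi) / pdf_bsdf(x, wi, wo)
        return total / n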

Sampling according to the emitted radiance To be able to sample the light sources, we need to express eq. (3.10) in the surface domain, using the domain transformation of eq. (2.6):

L(~x→~x′) = ∫_{Ml} fr(~y→~x→~x′) Le(~x←~y) G(~x↔~y) dA(~y) (3.12)

where Ml is the surface domain of the light sources and ~y a point on a light source. In practice, this strategy starts like the previous one. First, we trace a ray from the camera through a given pixel (at position ~x′). This ray may intersect the 3D scene at a point ~x. Then we randomly sample a position ~y on the light source and evaluate the visibility between ~y and ~x. If ~x is visible from ~y, we evaluate the BSDF value fr(~y→~x→~x′). For this sampling strategy, the Monte Carlo estimator is:

L(~x→~x′) ≈ (1/N) Σ_{j=1}^{N} [Le(~x←~yj) G(~x↔~yj) / p(~yj)] fr(~yj→~x→~x′) (3.13)

where p(~yj) is the pdf used to sample a point ~yj on the light source surfaces and N the number of samples.

Figure 3.2 – Graphical explanation of the difference between the efficiencies of the sampling procedures. Two sampling techniques are shown: red dots represent the BSDF samples, blue dots the light source samples. The green region marks the contributive set of directions, where both the BSDF and the incident radiance are non-zero. (a) Large light source, smooth BSDF: BSDF sampling performs better than emitter sampling, since emitter samples are non-contributive due to the BSDF value. (b) Small light source, rough BSDF: emitter sampling performs better than BSDF sampling.

Discussion Figure 3.3 shows the results of the different sampling strategies in a simple scene [Vea97]. The scene contains four light sources with different sizes and colors, and four rectangles with different BSDF roughnesses; the BSDF is smoother the farther a rectangle is from the camera.

In this scene, the different sampling strategies generate noise whose level varies over the image. BSDF sampling works better for smooth BSDFs: in the directional space, the set of directions with a significant BSDF value is smaller than the solid angle subtended by the light sources (fig. 3.2, case (a)), so light sampling can find a valid path but with a zero BSDF value. On the other hand, with a small light source or a rough, non-directional BSDF, there is little chance of hitting a light source with a ray sampled according to the BSDF (fig. 3.2, case (b)). In conclusion, the two strategies fit different sampling scenarios (light source size, BSDF roughness). Combining them yields a more robust sampling strategy: multiple importance sampling (MIS).

Figure 3.3 – Images rendered with the different sampling strategies: (a) BSDF sampling, (b) emitter sampling. If we exclude the diffuse background, we can observe that the two sampling strategies are complementary.

Multiple importance sampling In section 2.2, we have presented the multiple importance sampling framework. Combining several strategies with MIS makes each sample slightly more expensive: for each sample, we need to evaluate $p_i(x)$ for all the sampling strategies. However, this extra computation is negligible compared to the ray tracing operations.

To combine the two strategies with MIS, we need to express them in the same domain. Indeed, the BSDF sampling PDF $p(\vec{\omega}_i)$ is expressed with respect to the projected solid angle, whereas the light source sampling PDF is expressed in the surface domain. One possibility is to express both in the surface domain using the following transformation (based on eq. (2.6)):

$$p(\vec{y}) = p(\vec{\omega}_i)\, G(\vec{x}\leftrightarrow\vec{y}), \qquad (3.14)$$

where $\vec{\omega}_i$ is the direction associated with $\vec{x}\to\vec{y}$.
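For reference, here is a minimal sketch of the power heuristic of eq. (3.9) with $\beta = 2$; the function itself is standard, but the calling convention is an assumption:

```cpp
// Power heuristic (eq. (3.9), beta = 2) for two sampling strategies.
// nf, pf: sample count and PDF of the strategy that produced the sample;
// ng, pg: sample count and PDF of the competing strategy. Both PDFs must
// be expressed in the same domain, e.g. converted with eq. (3.14).
float powerHeuristic(int nf, float pf, int ng, float pg) {
    const float f = float(nf) * pf;
    const float g = float(ng) * pg;
    return (f * f) / (f * f + g * g);
}
```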

Figure 3.4 shows a result obtained with multiple importance sampling. Note that the same number of samples is used as for the images of fig. 3.3. Moreover, without extra knowledge, we have used the power heuristic (eq. (3.9), $\beta = 2$) with an equal number of samples for each sampling strategy. In other words, each sampling procedure (BSDF or emitter) received half as many samples. Assigning the same number of samples to each strategy can generate additional noise (e.g. on the diffuse background): in regions where only one sampling strategy works (its PDF being proportional to the integrand), this strategy receives fewer samples than without MIS. Recent research has been conducted to design better weighting schemes [PBPP11]. Moreover, in this sub-section, we have only presented two sampling techniques (BSDF and emitter); more sampling strategies are possible. For more information, the reader can refer to the article by Shirley et al. [SWZ96].

3.2 Indirect rendering with unbiased estimators

Unlike direct rendering, physically based rendering accounts for multiple light bounces. Light paths composed of more than one light bounce are called indirect paths. Creating an efficient sampling strategy for these paths is challenging. Figure 3.5 shows the direct and indirect components rendered separately, as well as their combination.


Figure 3.4 – On the left (a): the combined result using multiple importance sampling with the balance heuristic. This result has less overall variance than the previous ones, but in areas where only one strategy works there is extra noise, because each strategy uses half as many samples as in the previously rendered images. On the right (b): the relative weighted contribution of each sampling technique shown in false colors. Blue means that light sampling is more efficient than BSDF sampling; conversely, red means that BSDF sampling is better. Green means that the two strategies perform equivalently.

Figure 3.5 – Images rendered with (a) only the direct component, (b) only the indirect component, and (c) the combination of the two. In the direct image, several shadows are due to the light position; this component has discontinuities and some parts of the scene do not receive any lighting. The indirect component is relatively smooth and almost constant over the scene; it is noisier than the direct one because it is harder to compute. The combination of both produces an image of better quality.

Computing the indirect lighting component can be time consuming. Moreover, a rendering algorithm needs to find paths (of several interactions) that connect the camera to a light source; such paths are called contributive paths. There are several reasons why a path may not be contributive: a zero BSDF value, a visibility problem, etc. Several techniques exist to build contributive paths, each with its advantages and drawbacks. For this reason, we briefly present hereafter different techniques that efficiently build such paths: path tracing, light tracing and bidirectional path tracing. We also show how crucial multiple importance sampling is for a robust rendering technique.

3.2.1 Path tracing

Path tracing is often used in rendering engines (Arnold renderer, Cycles, Octane renderer, PRMan, etc.). It incrementally builds light paths from the camera to the light sources. For each pixel, a number of paths are traced to evaluate the incoming lighting (eq. (2.16)). This algorithm makes it possible to control the sampling rate of each pixel. This is a desirable property, because controlling the per-pixel variance allows controlling the noise level in the produced image. By sampling all the pixels equally, the algorithm spreads the variance smoothly over the image space, which matters because the human visual system readily perceives noise.
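To illustrate this per-pixel control of the sampling rate, a minimal rendering loop could look as follows. Film, Camera and Sampler are hypothetical interfaces, and pathTrace refers to the sketch given at the end of this subsection:

```cpp
// Each pixel receives the same number of paths (spp), which spreads the
// variance evenly over the image. All interfaces are hypothetical.
void render(const Scene& scene, const Camera& camera, Film& film,
            Sampler& rng, int spp, int maxDepth) {
    for (int py = 0; py < film.height(); ++py) {
        for (int px = 0; px < film.width(); ++px) {
            Spectrum sum(0.0f);
            for (int s = 0; s < spp; ++s)
                sum += pathTrace(scene, camera.generateRay(px, py, rng),
                                 rng, maxDepth);
            film.set(px, py, sum / float(spp));
        }
    }
}
```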

After tracing the primary ray from the camera and finding the first intersection point, two different path tracing techniques can be used. In the first one, named primitive path tracing, the ray keeps bouncing until it hits a light source. This is similar to BSDF sampling for direct rendering, because it relies only on that strategy; this estimator therefore shares the same drawback as direct rendering with only BSDF sampling: with a small light source, it has a high variance.

The second way to implement path tracing is to use explicit light connections. As in primitive path tracing, new vertices (path nodes) are generated according to the BSDF, but at each intersection point the direct component is evaluated with emitter sampling. Note that when MIS is not used, randomly hitting a light source during the bouncing procedure must not add a contribution: the same (direct) component cannot be sampled in two different ways without combining the estimators.

Figure 3.6 – The two ways of implementing path tracing. On the left: primitive path tracing bounces until it randomly hits a light source. On the right: path tracing with explicit light source connections; each new vertex is connected to a randomly sampled point on the light source, and a ray is traced to check the visibility between this point and the new vertex.

These two techniques are illustrated in fig. 3.6. In the same spirit as direct rendering, we have two different sampling strategies, and in the same way we can use multiple importance sampling to combine them.
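As an illustration of the second variant, here is a minimal sketch of path tracing with explicit light source connections, reusing the hypothetical interfaces introduced earlier (in particular the directEmitterSampling sketch). For brevity, it omits MIS and Russian roulette:

```cpp
// Minimal sketch of path tracing with explicit light source connections.
// Emitters hit after the first bounce are deliberately ignored: the direct
// component is already handled by the explicit connections, and without MIS
// the same component must not be sampled twice.
Spectrum pathTrace(const Scene& scene, Ray ray, Sampler& rng, int maxDepth) {
    Spectrum L(0.0f), throughput(1.0f);
    for (int depth = 0; depth < maxDepth; ++depth) {
        Intersection x;
        if (!scene.intersect(ray, &x)) break;
        // Emitters seen directly through the camera still contribute.
        if (depth == 0 && x.isEmitter())
            L += throughput * x.emittedRadiance(-ray.d);
        // Explicit connection: evaluate the direct component at x.
        L += throughput * directEmitterSampling(scene, x, -ray.d, rng, 1);
        // Extend the path by sampling the BSDF (projected solid-angle PDF,
        // cosine folded into eval()/pdf as before).
        float pdf;
        Vec3 wi = x.bsdf->sample(-ray.d, rng.next2D(), &pdf);
        if (pdf == 0.0f) break;
        throughput *= x.bsdf->eval(wi, -ray.d) / pdf;
        ray = Ray(x.p, wi);
    }
    return L;
}
```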

