

In the document Real-Time Volume Graphics (Page 171-176)

Global Volume Illumination

6.3 Translucent Volume Lighting

6.3.2 Algorithm Details

Set-up. When slicing the volume, you will need to add additional texture coordinates for each vertex. In all, you will need three or four sets: (1) 3D texture coordinates for the volume data, as with any texture-based volume rendering application; (2) 2D texture coordinates for the position that the vertex projects to in the light buffer, much like geometric shadow-mapping algorithms; (3) 3D texture coordinates representing the view direction from the eye to the vertex; and, if the light is not infinitely far from the volume, (4) texture coordinates representing the light direction from the light to the vertex. Texture coordinate sets 3 and 4, the view and light directions, are needed to evaluate the phase function.
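As a concrete illustration, the per-vertex set-up can be sketched on the CPU. This is a minimal sketch, not the book's implementation: the function names, the orthographic stand-in for the light's projection, and the parameter layout are all assumptions of this example.

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def vertex_texcoords(p, volume_min, volume_size, eye_pos, light_pos):
    """Compute the three/four texture-coordinate sets for a slice vertex p.

    Returns:
      tc_volume: 3D coordinates into the volume texture (set 1)
      tc_light:  2D coordinates of p projected into the light buffer (set 2)
      view_dir:  direction from the eye to the vertex (set 3)
      light_dir: direction from the light to the vertex (set 4; only
                 needed when the light is not infinitely far away)
    """
    # (1) 3D volume texture coordinates: position mapped to [0,1]^3.
    tc_volume = tuple((p[i] - volume_min[i]) / volume_size[i] for i in range(3))
    # (2) Light-buffer coordinates: a simple orthographic projection along
    #     the light-space z-axis stands in for the real light matrix here.
    tc_light = (tc_volume[0], tc_volume[1])
    # (3) View direction, eye -> vertex (used by the phase function).
    view_dir = normalize(tuple(p[i] - eye_pos[i] for i in range(3)))
    # (4) Light direction, light -> vertex (local light only).
    light_dir = normalize(tuple(p[i] - light_pos[i] for i in range(3)))
    return tc_volume, tc_light, view_dir, light_dir
```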

Pass 1: Observer update. In the first pass, a slice is rendered from the observer’s point of view. The eye buffer is the render target, and the current light buffer is bound as a texture. In the fragment program, the transfer function is evaluated and the material color is multiplied by the sum of the indirect and direct light previously computed at that slice position in the current light buffer. Remember that the direct light component is achromatic, i.e., it is a scalar value. The total light intensity is expressed as

I = (Li + Ld) I0,   (6.9)

where I is the total (RGB) light intensity available at the current sample, Li is the indirect (RGB) light color from the current light buffer, Ld is the direct (achromatic/scalar) light intensity from the current light buffer, and I0 is the original light color. If phase functions or surface lighting is used, this term should only be applied to the direct lighting component, since we are assuming that the scattered, indirect light is diffuse,

I = (Li + Ld S(·)) I0,   (6.10)

where S(·) is a surface shading or phase function. This color is then blended into the observer buffer using the alpha value from the transfer function in the usual fashion.
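Equations 6.9 and 6.10 amount to a few multiply-adds per fragment. Here is a minimal sketch of that shading step, with Python standing in for the fragment program; the function name and signature are illustrative, not from the book.

```python
def shade_sample(material_rgb, alpha, indirect_rgb, direct, i0_rgb, s=1.0):
    """Evaluate Equation 6.10 for one fragment.

    I = (Li + Ld * S) * I0, where Li (indirect_rgb) is per-channel,
    Ld (direct) is the achromatic scalar from the light buffer's alpha,
    and s is the surface-shading/phase term S (s = 1.0 reduces this to
    Equation 6.9). The result is modulated by the material color from
    the transfer function and returned with its blending alpha.
    """
    total = tuple((li + direct * s) * i0
                  for li, i0 in zip(indirect_rgb, i0_rgb))
    rgb = tuple(m * t for m, t in zip(material_rgb, total))
    return rgb, alpha  # blended into the eye buffer with this alpha
```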

Pass 2: Light update. In the second pass, the same slice is rendered into the next light buffer from the light’s point of view to update the lighting for the next iteration. The current light buffer is bound as a texture.

In the fragment program for this pass, the texture coordinates for the neighborhood samples can be computed by adding a random 2D vector to the current sample location’s texture coordinate. This random vector is stored in a noise texture, similar to those used for volume perturbation discussed in Chapter 12. Four neighborhood samples are usually enough to produce good results. Randomizing the neighborhood sample offsets in this way can mask some artifacts caused by a coarse, regular sampling. The amount of this offset is scaled by either a user-defined blur angle (θ) or an angle based on the anisotropy of the phase function, times the slice-plane spacing (d).
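One plausible way to bound the jittered offsets is by the radius of the blur cone one slice away, d·tan θ. That geometric bound and the uniform-disc sampling below are assumptions of this sketch, not details given in the text; in the shader, the random vectors would come from a precomputed noise texture rather than a PRNG.

```python
import math
import random

def neighborhood_offsets(theta, slice_spacing, n=4, rng=None):
    """Generate n random 2D offsets for reading the current light buffer.

    theta is the blur angle (user-defined, or derived from the phase
    function's anisotropy); slice_spacing is the slice-plane spacing d.
    Offsets are drawn uniformly from the disc of radius d * tan(theta),
    the cross section of the blur cone one slice away (an assumption of
    this sketch).
    """
    rng = rng or random.Random(0)
    radius = math.tan(theta) * slice_spacing
    offsets = []
    for _ in range(n):
        ang = rng.uniform(0.0, 2.0 * math.pi)
        r = radius * math.sqrt(rng.random())  # sqrt -> uniform over disc
        offsets.append((r * math.cos(ang), r * math.sin(ang)))
    return offsets
```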

The current light buffer is then read using the new texture coordinates.

These values are weighted and summed to compute the blurred inward flux (scattered light) at the sample. The transfer function is evaluated for the current sample based on the volume data to obtain the indirect attenuation (αi) and direct attenuation (αd) values for the current slice.

The blurred inward flux is attenuated as described before (Equation 6.7) and written to the RGB components of the next light buffer. The direct light intensity, i.e., the alpha component from the current light buffer read using the unmodified texture coordinates, is attenuated by αd (Equation 6.6) and written out to the alpha component of the next light buffer.
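A sketch of one fragment of the light-update pass follows. The excerpt does not reproduce Equations 6.6 and 6.7, so the simple (1 − α) attenuation used here is an assumption standing in for them, and the function name and argument layout are illustrative.

```python
def update_light_buffer(neighbor_rgb_samples, current_direct,
                        alpha_i, alpha_d, weights=None):
    """One fragment of the light-update pass (pass 2).

    neighbor_rgb_samples are the jittered reads of the current light
    buffer; their weighted sum is the blurred inward flux, attenuated
    by the indirect alpha (cf. Equation 6.7). current_direct is the
    alpha-channel read at the unmodified coordinate, attenuated by the
    direct alpha (cf. Equation 6.6). Returns (RGB, alpha) for the next
    light buffer.
    """
    n = len(neighbor_rgb_samples)
    weights = weights or [1.0 / n] * n  # equal weights by default
    blurred = [0.0, 0.0, 0.0]
    for w, rgb in zip(weights, neighbor_rgb_samples):
        for c in range(3):
            blurred[c] += w * rgb[c]
    next_rgb = tuple((1.0 - alpha_i) * c for c in blurred)  # indirect
    next_direct = (1.0 - alpha_d) * current_direct          # direct
    return next_rgb, next_direct
```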

After this pass has been completed, the light buffers are swapped, so that the next buffer becomes the current one and vice versa. That is, the last light buffer rendered to will be read from when the next slice is rendered. This approach is called ping-pong blending and is discussed further in Section 9.5.
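The slice loop with ping-pong blending can be sketched as follows; the buffer objects and pass callbacks are placeholders for the real render passes, not an API from the book.

```python
def render_slices(n_slices, eye_pass, light_pass, buf_a, buf_b):
    """Ping-pong blending: two light buffers alternate roles.

    For each slice, pass 1 (eye_pass) reads the current light buffer
    while rendering into the eye buffer; pass 2 (light_pass) reads the
    current buffer and writes the next one; then the two buffers swap,
    so the buffer just rendered to is read from for the next slice.
    """
    current, nxt = buf_a, buf_b
    for s in range(n_slices):
        eye_pass(s, current)         # pass 1: observer update
        light_pass(s, current, nxt)  # pass 2: light update
        current, nxt = nxt, current  # swap for the next slice
    return current
```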

6.3.3 Analysis

This empirical volume shading model adds a blurred indirect light contribution at each sample:

I(x1, ω) = T(0, l) I(x0, ω) + ∫₀ˡ T(0, s) C(s) Il(s) ds,   (6.12)

where τi(s) is the indirect light attenuation term from which the transparency T is computed, C(s) is the reflective color at the sample s, S(s) is a surface shading parameter, and Il is the sum of the direct and the indirect light contributions: Il is the intensity of the light as before, Il(s) is the light intensity at a location on the ray as it gets attenuated, and P(θ) is the phase function. Note that the final model in Equation 6.12 includes direct and indirect components as well as the phase function that modulates the direct contribution. Spatially varying indirect contribution (scattering) and the phase function were not included in the classical volume-rendering model discussed in previous chapters.
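Equation 6.12 discretizes into a familiar front-to-back accumulation over ray samples. The following sketch assumes scalar intensities and per-sample transparencies for brevity; the names and the unit sample spacing are illustrative assumptions.

```python
def composite_ray(samples, background, ds=1.0):
    """Discretize Equation 6.12 front-to-back along one ray:

        I = T(0, l) I(x0) + sum_s T(0, s) C(s) Il(s) ds,

    where each sample provides (t, c, il): the transparency t of the
    sample slab, the reflective color c = C(s), and the incoming light
    il = Il(s) (direct plus blurred indirect). The running product of t
    plays the role of T(0, s), i.e., the transmittance of the material
    in front of the sample; background is I(x0, ω).
    """
    intensity = 0.0
    transmittance = 1.0
    for t, c, il in samples:  # ordered from the eye (s = 0) outward
        intensity += transmittance * c * il * ds
        transmittance *= t
    return intensity + transmittance * background
```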


Figure 6.11. Left: general light transport scenario, where at any sample x(s) we must consider incoming light scattered from all directions over the unit sphere Ω. Right: the approximation, which considers only light scattered in the forward direction within a cone of directions about the light direction ωl, with apex angle θ.

It is interesting to compare the nature of this approximation to the physically based light transport equation. One way to think of it is as a forward diffusion process. This is different from traditional diffusion approximations [247, 280, 70] because it cannot backpropagate light. A perhaps more intuitive way to think of the approximation is in terms of what light propagation paths are possible. This is shown in Figure 6.11. The missing paths involve lateral movements outside the cone or any backscattering. This can give some intuition for what effects this model does not approximate, such as reverse bleeding under a barrier. This, however, is less of a problem in volume rendering than it is with surface-based translucency techniques, as materials below the surface are being rendered and will contribute to light paths toward the eye.

Because the effect of indirect lighting in dense media is effectively a diffusion of light through the volume, light travels farther in the volume than it would if only direct attenuation were taken into account. Translucency implies blurring of the light as it travels through the medium due to scattering effects. This effect is approximated by simply blurring the light in some neighborhood and allowing it to attenuate less in the light direction. Figure 6.12 shows how the important phenomenological effect of translucency is captured by this model. The upper-left image, a photo of a wax candle, is an example of a common translucent object. The upper-right image is a volume rendering using this model. Notice that the light penetrates much deeper into the material than it does with direct attenuation alone (volumetric shadows), seen in the lower-right image. Also notice the pronounced hue shift from white to orange to black due to an indirect attenuation term that attenuates blue slightly more than red or green. The lower-left image shows the effect of changing just the reflective (material) color to a pale blue.


Figure 6.12. Translucent volume-shading comparison. The upper-left image is a photograph of a wax block illuminated from above with a focused flashlight. The upper-right image is a volume rendering with a white reflective color and a desaturated orange transport color (1 − indirect attenuation). The lower-left image has a bright blue reflective color and the same transport color as the upper-right image. The lower-right image shows the effect of light transport that takes into account only direct attenuation. (Images reprinted from [129], © 2003 IEEE.)

Surface shading can also be added for use with scalar data sets. For this we recommend the use of a surface-shading parameter, the so-called surface scalar. This is a scalar value between zero and one that describes the degree to which a sample should be surface shaded. It is used to interpolate between surface shading and no surface shading. This value can be added to the transfer function, allowing the user to specify whether or not a classified material should be surface shaded. It can also be determined automatically using the gradient magnitude at the sample. In this case, we assume that classified regions will be surface-like if the gradient magnitude is high and therefore should be shaded as such. In contrast, homogeneous regions, which have low gradient magnitudes, should be shaded using light attenuation only.
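The gradient-magnitude variant of the surface scalar can be sketched as a clamped linear ramp followed by an interpolation. The threshold values below are illustrative tuning parameters, not values from the book.

```python
def surface_scalar(grad_mag, g_low=0.1, g_high=0.5):
    """Map gradient magnitude to a surface scalar in [0, 1].

    High gradient magnitudes mark surface-like regions (full surface
    shading); low magnitudes mark homogeneous regions (attenuation
    only). g_low and g_high are hypothetical tuning thresholds.
    """
    t = (grad_mag - g_low) / (g_high - g_low)
    return max(0.0, min(1.0, t))

def shaded_color(attenuated_rgb, surface_shaded_rgb, grad_mag):
    """Interpolate between the attenuation-only color and the
    surface-shaded color by the surface scalar."""
    s = surface_scalar(grad_mag)
    return tuple(a + s * (b - a)
                 for a, b in zip(attenuated_rgb, surface_shaded_rgb))
```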
