

In the document Real-Time Volume Graphics (Page 127-131)


5.3 Gradient-Based Illumination

5.3.1 Gradient Estimation

There is a variety of techniques for estimating the gradient from discrete volume data. In GPU-based volume rendering, gradient estimation is usually performed in one of two major ways. Either the gradient vector is pre-computed and stored in an additional volume texture that is sampled at runtime, or gradient estimation is implemented on the fly, which means that directional derivatives must be estimated in real time at any point in the volume. The major difference between the two approaches is that pre-computed gradients are commonly calculated at the integer positions of the original grid and interpolated trilinearly, whereas on-the-fly gradients are computed on a per-pixel basis in the fragment shader. There are different methods for estimating gradient vectors, which differ in the computational complexity and the accuracy of the resulting gradients.

Finite differences. Finite differencing schemes are fast and efficient methods for estimating partial derivatives and gradients on discrete data. All finite differencing schemes are based on a Taylor expansion of the function to be differentiated. The Taylor series of a 1D scalar function f(x) in the neighborhood of a point x_0 is defined as the infinite sum

f(x_0 + h) = f(x_0) + \frac{f'(x_0)}{1!} h + \frac{f''(x_0)}{2!} h^2 + \frac{f'''(x_0)}{3!} h^3 + \dots

[Figure 5.2. Finite differencing schemes approximate the derivative of a curve (blue) by substituting the slope of the tangent (dotted red) with the slope of the secant (green). Forward differences (left) construct the secant from the current sample to the next, backward differences (middle) from the current sample and the previous one. Central differences (right) construct the secant from the previous sample to the next.]

If we stop the Taylor expansion after the second term,

f(x_0 + h) = f(x_0) + \frac{f'(x_0)}{1!} h + o(h^2), \qquad (5.7)

and solve for the first-order derivative, we obtain a first approximation,

f'(x_0) = \frac{f(x_0 + h) - f(x_0)}{h} + o(h). \qquad (5.8)

This approximation is called a forward difference. As we see, the approximation error is of the same order as the step size h. The same approach can be used with a backward Taylor expansion,

f(x_0 - h) = f(x_0) - \frac{f'(x_0)}{1!} h + o(h^2), \qquad (5.9)

and results in another approximation for the first-order derivative, called the backward difference:

f'(x_0) = \frac{f(x_0) - f(x_0 - h)}{h} + o(h). \qquad (5.10)

The approximation error of the backward differencing scheme has the same order as that of forward differences. To obtain a finite differencing scheme with a higher-order approximation error, we write down one forward and one backward Taylor expansion up to the third term,

f(x_0 + h) = f(x_0) + \frac{f'(x_0)}{1!} h + \frac{f''(x_0)}{2!} h^2 + o(h^3), \qquad (5.11)

f(x_0 - h) = f(x_0) - \frac{f'(x_0)}{1!} h + \frac{f''(x_0)}{2!} h^2 + o(h^3), \qquad (5.12)

subtract the second equation from the first one,

f(x_0 + h) - f(x_0 - h) = 2 f'(x_0) h + o(h^3), \qquad (5.13)

and solve for the first-order derivative:

f'(x_0) = \frac{f(x_0 + h) - f(x_0 - h)}{2h} + o(h^2). \qquad (5.14)
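The three differencing schemes and their error orders can be checked numerically. The following is a minimal sketch (plain Python; the test function sin, evaluation point, and step size are illustrative choices, not from the text):

```python
import math

def forward_diff(f, x, h):
    # Equation 5.8: f'(x) ~ (f(x+h) - f(x)) / h, error o(h)
    return (f(x + h) - f(x)) / h

def backward_diff(f, x, h):
    # Equation 5.10: f'(x) ~ (f(x) - f(x-h)) / h, error o(h)
    return (f(x) - f(x - h)) / h

def central_diff(f, x, h):
    # Equation 5.14: f'(x) ~ (f(x+h) - f(x-h)) / (2h), error o(h^2)
    return (f(x + h) - f(x - h)) / (2.0 * h)

# The exact derivative of sin at x = 1 is cos(1).
x, h = 1.0, 1e-3
exact = math.cos(x)
err_fwd = abs(forward_diff(math.sin, x, h) - exact)
err_bwd = abs(backward_diff(math.sin, x, h) - exact)
err_ctr = abs(central_diff(math.sin, x, h) - exact)

# The central-difference error is several orders of magnitude
# smaller than the one-sided errors for the same step size h.
assert err_ctr < err_fwd and err_ctr < err_bwd
```

Halving h roughly halves the forward/backward error but quarters the central-difference error, matching the o(h) versus o(h^2) orders derived above.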

The result is a finite differencing scheme called central differences, with an approximation error within the order of magnitude o(h^2). The approximation error for central differences thus is of higher order compared with forward or backward differences.

Central differences are the most common approach for gradient estimation in volume graphics. Each of the three components of the gradient vector \nabla f(x) = \nabla f(x, y, z) is estimated by a central difference, resulting in

\nabla f(x, y, z) \approx \frac{1}{2h} \begin{pmatrix} f(x+h, y, z) - f(x-h, y, z) \\ f(x, y+h, z) - f(x, y-h, z) \\ f(x, y, z+h) - f(x, y, z-h) \end{pmatrix}. \qquad (5.15)

As we see, six additional neighbor samples are taken at a distance h around the position where the gradient is estimated. For pre-computing gradient vectors, the step size h can simply be set to the grid size in order to avoid unnecessary interpolation operations. For computing gradients on the fly, the step size h is set to a constant value that is small with respect to the grid size.
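A minimal CPU-side sketch of the six-look-up central-difference gradient at an integer voxel position follows. The list-of-lists volume layout, function name, and linear test field are illustrative assumptions, not code from the book (which evaluates this in a fragment shader):

```python
def gradient_central(volume, x, y, z, h=1):
    """Estimate the gradient at integer voxel (x, y, z) with six
    neighbor look-ups at distance h; volume is indexed volume[z][y][x]."""
    gx = (volume[z][y][x + h] - volume[z][y][x - h]) / (2.0 * h)
    gy = (volume[z][y + h][x] - volume[z][y - h][x]) / (2.0 * h)
    gz = (volume[z + h][y][x] - volume[z - h][y][x]) / (2.0 * h)
    return (gx, gy, gz)

# Synthetic linear field f(x, y, z) = 2x + 3y + 5z, whose exact
# gradient is (2, 3, 5); central differences are exact for linear data.
N = 5
vol = [[[2 * x + 3 * y + 5 * z for x in range(N)]
        for y in range(N)] for z in range(N)]
assert gradient_central(vol, 2, 2, 2) == (2.0, 3.0, 5.0)
```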

One important property of central differences in this regard is that the order of applying linear, bilinear, or trilinear interpolation and central differencing does not matter, as the result will be exactly the same. This is easily verified: both interpolation and central differencing are linear operators, so they commute. As an implication, central-difference gradients stored in an RGB texture that is sampled using linear, bilinear, or trilinear interpolation are equivalent to performing the six neighbor look-ups of Equation 5.15 with linear, bilinear, or trilinear interpolation on the fly, respectively. In practice, however, care has to be taken with regard to the texture-filtering precision of the GPU.
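This commutativity can be demonstrated in 1D with a few lines of Python. The sample values and query position below are arbitrary illustrative choices; the grid spacing is 1, so the central-difference step h equals the grid size:

```python
def lerp(a, b, t):
    # Linear interpolation between a and b for t in [0, 1].
    return (1.0 - t) * a + t * b

f = [0.0, 1.0, 4.0, 9.0, 16.0, 25.0]   # samples on an integer grid
# Pre-computed central differences (h = 1); g[k] belongs to grid point k+1.
g = [(f[k + 1] - f[k - 1]) / 2.0 for k in range(1, len(f) - 1)]

i, t = 2, 0.3                          # query position x = i + t, off the grid
# (a) interpolate the pre-computed gradients
grad_pre = lerp(g[i - 1], g[i], t)
# (b) on the fly: central difference of the linearly interpolated function
f_interp = lambda x: lerp(f[int(x)], f[int(x) + 1], x - int(x))
grad_fly = (f_interp(i + t + 1) - f_interp(i + t - 1)) / 2.0

# Interpolate-then-differentiate equals differentiate-then-interpolate.
assert abs(grad_pre - grad_fly) < 1e-12
```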

We will address this issue in Chapter 9.

Convolution filtering for gradient estimation. Although finite differences often yield gradients of sufficient quality, more general approaches based on larger filter kernels might achieve considerably better results, yet at a higher computational cost.

The standard approach for filtering a signal or function is to perform a mathematical operation called a convolution of the function with a filter kernel, which is covered in more detail in Sections 1.5.3 and 9.2. The described gradient-estimation techniques based on finite differences are, in fact, special cases of convolution filtering. An important property of convolution with a linear filter is that it obeys the associative law with respect to differentiation: instead of computing the derivative of a function and filtering it afterwards, the function can as well be convolved with the derivative of the filter kernel, yielding the same result.

However, such a linear filter can compute a partial derivative in one direction only. Three different filter kernels are necessary for estimating the full gradient vector. Each filter calculates the directional derivative along one of the major axes. The results yield the x-, y-, and z-components of the gradient. Each of these 3D filter kernels is computed by performing the tensor product of a 1D derivative filter for one axis with a 1D function reconstruction filter for each of the other two axes, e.g., h_x(x, y, z) = h'(x) h(y) h(z) for the directional derivative along the x axis, where h is the function reconstruction filter and h' is the first-order derivative of the reconstruction filter. For example, Figures 9.8 and 9.14 in Section 9.2 show the cubic B-spline for function reconstruction and its first derivative for derivative reconstruction, respectively.

Discrete filter kernels. When the derivative is only needed at the grid points, it is sufficient to represent the filter kernel as a collection of discrete filter weights, which are the values of the filter kernel where it intersects the grid. This approach is very common in image processing. Because a discrete filter has a single value at its center, the width of such a filter is usually odd, e.g., 3×3×3 or 5×5×5 in 3D.

A common discrete filter kernel for gradient estimation is the Sobel operator, which is also often used for edge detection. The standard 3D Sobel kernel has size 3×3×3 and can be computed from a triangle filter for function reconstruction with smoothing (h(−1) = 1, h(0) = 2, h(1) = 1; with normalization factor 1/4) and central differences (h'(−1) = −1, h'(0) = 0, h'(1) = 1; with normalization factor 1/2) for derivative reconstruction.

The 3D kernel for differentiation in x is then h_x(x, y, z) = h'(x) h(y) h(z): the slice at x = −1 holds the negated 2D triangle-filter weights −[1 2 1; 2 4 2; 1 2 1], the slice at x = 0 is all zeros, and the slice at x = +1 holds the same 2D weights with positive sign.

In order to estimate the correct gradient magnitude, these weights have to be normalized by a factor of 1/32. The other two kernels h_y(x, y, z) and h_z(x, y, z) can be obtained by computing the respective tensor product or by simply rotating the axes of h_x(x, y, z). There are several variants of the Sobel kernel that use slightly different weights, e.g., using h(−1) = 3, h(0) = 10, and h(1) = 3 with a normalization factor of 1/16.
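The tensor-product construction of the Sobel x-kernel can be sketched in a few lines of Python. Index 0..2 stands for the offsets −1, 0, +1; the variable names are illustrative:

```python
# 1D triangle (smoothing) filter and 1D central-difference derivative
# filter, each given over the offsets -1, 0, +1.
smooth = [1, 2, 1]     # h(-1), h(0), h(1); normalization 1/4
deriv  = [-1, 0, 1]    # h'(-1), h'(0), h'(1); normalization 1/2

# Tensor product h_x(x, y, z) = h'(x) h(y) h(z), stored as kernel[z][y][x].
kernel = [[[deriv[x] * smooth[y] * smooth[z] for x in range(3)]
           for y in range(3)] for z in range(3)]

# The x = 0 slice is all zeros, so only 18 of the 27 taps are nonzero,
# and the sum of absolute weights is 32, hence the 1/32 normalization.
total = sum(abs(w) for plane in kernel for row in plane for w in row)
assert total == 32
assert all(kernel[z][y][1] == 0 for z in range(3) for y in range(3))
```

Rotating the roles of `deriv` and `smooth` across the three axes yields h_y and h_z, as described above.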

Although it produces gradients of better quality than the simple central differences scheme, an obvious disadvantage of a full 3×3×3 filter kernel such as the Sobel is its computational complexity. The Sobel kernel shown above requires 54 neighbor sample look-ups (18 for each of the three gradient components) and the corresponding multiplications and additions for evaluating the convolution. Therefore, filter kernels of this size are usually only used for pre-computing gradients that are then stored in a texture for easy and fast retrieval in the fragment shader.

Other examples of filter kernels for gradient reconstruction are the Prewitt edge-detection filter, which uses a box filter where the Sobel filter uses a triangle filter, and the Gaussian and its derivative, which, however, are usually only used with sizes of 5×5×5 and above.

Continuous filter kernels. When the function or derivative is needed between grid points and the discrete filters described above are used, they have to be applied at all neighboring grid points, e.g., at the eight corners of a cube, and then interpolated, e.g., using trilinear interpolation. However, a much more natural choice is to use continuous filter kernels instead of discrete filters in order to reconstruct the function and its derivatives at arbitrary points in the volume directly from the original grid of function samples.

Fast filtering with a continuous cubic B-spline and its derivatives is described in detail in Section 9.2. On-the-fly gradient reconstruction with the cubic B-spline is possible in real time on current GPUs for rendering isosurfaces. This is especially easy when deferred shading is used, a general image-space method that is described in Section 8.7. The cubic B-spline can also be used as a basis for real-time computation of implicit isosurface curvature, which is described in Section 14.4.
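As a small CPU-side illustration of continuous-kernel reconstruction (a sketch in plain Python, not the GPU formulation of Section 9.2), the uniform cubic B-spline can evaluate the function between grid points from the four surrounding samples:

```python
def bspline_weights(t):
    # Uniform cubic B-spline weights for the four samples around the
    # query point, with fractional offset t in [0, 1); the weights sum to 1.
    return ((1 - t) ** 3 / 6.0,
            (3 * t ** 3 - 6 * t ** 2 + 4) / 6.0,
            (-3 * t ** 3 + 3 * t ** 2 + 3 * t + 1) / 6.0,
            t ** 3 / 6.0)

def bspline_reconstruct(f, x):
    # Reconstruct f at an arbitrary (non-integer) position x from
    # the samples f[i-1], f[i], f[i+1], f[i+2].
    i, t = int(x), x - int(x)
    w = bspline_weights(t)
    return sum(w[k] * f[i - 1 + k] for k in range(4))

f = [float(k) for k in range(8)]   # f(x) = x sampled on an integer grid
# The cubic B-spline is an approximating (not interpolating) filter,
# but it reproduces linear functions exactly.
assert abs(bspline_reconstruct(f, 3.25) - 3.25) < 1e-12
```

Replacing `bspline_weights` by its term-wise derivative in t yields the derivative reconstruction needed for gradients, which is the approach detailed in Section 9.2.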
