
1.5 Volume Data and Reconstruction Filters

1.5.2 Data Sources and Volume Acquisition

Data for volume rendering can come from a variety of different areas of application. An important type of application is the scientific visualization of scalar data. More specifically, medical imaging was one of the early fields that adopted volume rendering. In medical imaging, 3D data is typically acquired by some kind of scanning device.


CT (computerized tomography) is a frequently used method for obtaining medical 3D data. The physical scanning process is based on x-rays.

The x-rays are emitted onto the patient’s body from one side and the radiation that traverses the body is recorded on the other side. Through the interaction between x-rays and the different materials that make up the patient’s body, radiation is attenuated. The radiation emitter rotates around the patient to obtain attenuation data for different directions through the body. From this collection of detected radiation, a 3D image is finally reconstructed.

MRI (magnetic resonance imaging) relies on nuclear magnetic resonance to identify different materials in a 3D spatial context. An MRI scanner needs a strong magnetic field—often in the range of a few tesla—to align the spins of atomic nuclei. Through a separate, weaker magnetic field, MRI scanners can perturb the aligned spins of the nuclei by an excitation pulse.

When the spin realigns with the outer magnetic field, radiation is emitted.

This radiation is recorded by the scanner. Different types of nuclei (i.e., different types of atoms) have different radiation characteristics, which can be used to identify materials. Moreover, the excitation field is modified by a magnetic gradient field in order to introduce a spatial dependency into the signals. In this way, materials can be located in 3D space.

Although MRI and CT are the most common methods for 3D medical imaging, other modalities are also used. For example, ultrasound or PET (positron emission tomography, which is based on the emission of positrons from a short-lived radioactive tracer isotope) can be applied for 3D reconstruction. All these medical imaging modalities have in common that a discretized volume data set is reconstructed from the detected feedback (mostly radiation). This means that some kind of transformation process takes place and that we typically do not use the original data for volume rendering. More examples of sources of volume data in medical imaging are described by Lichtenbelt et al. [165].

Simulation results are another class of data sources. Typical examples are CFD (computational fluid dynamics) simulations in engineering, computed electromagnetic fields in physical sciences, or simulations of fire and explosions for special effects. Here, the grid used for the simulation is often different from the grid used for visualization. The simulation grid might be designed for a well-behaved simulation and therefore adapted to the physical phenomenon (e.g., by using an adaptive, unstructured grid), whereas the visualization is often based on a uniform grid, which facilitates fast volume-rendering methods. As a consequence, even simulation data is often transformed before it is given to the volume renderer.

Another typical source of volume data is voxelization (see Section 12.2).

Voxelization turns a surface representation of a 3D object (e.g., a triangle mesh representation) into a volumetric object description. Here, the accuracy of the representation can be controlled by choosing an appropriate resolution for the volumetric grid. A related data source is procedural modeling (see Section 12.3). Here, the volume is specified by algorithms or code segments that define rules to evaluate the scalar field at any given point in space. Therefore, procedural descriptions are not affected by accuracy problems.

In general, the type of data source has to be taken into account when the reliability and quality of volume rendering need to be assessed. In particular, the errors introduced by measurements and the inaccuracies from data transformations should be considered. Whereas reliability might be less important for special effects rendering, it can be crucial in scientific and medical visualization.

1.5.3 Reconstruction

As discussed in Section 1.5.1, a volume data set is usually represented in discretized form—typically on a uniform or tetrahedral grid. This discretization leads to the issue of reconstructing a scalar function on all points in the 3D domain.

The problem of a faithful reconstruction is addressed intensively in the context of signal processing. In this book, we only briefly review a few of the most relevant aspects of signal processing. More background information can be found in textbooks on the field, for example by Oppenheim and Schafer [204].

We first consider the 1D case, and later extend the discussion to 3D reconstruction. An important question for an appropriate reconstruction is: is the number of samples sufficient to reconstruct the underlying continuous function? The answer to this question is given by the Nyquist-Shannon sampling theorem of information theory [201, 238]. This theorem states that the sampling frequency must be greater than twice the highest frequency of the input signal to be able to reconstruct the original signal from the sampled version. Otherwise the signal will be aliased; i.e., the continuous signal will be reconstructed incorrectly from the discrete signal.

In mathematical notation, appropriate sampling can be described as follows: for a continuous and periodic input signal represented by the function f(t), we first determine its maximum frequency ν_f. Maximum frequency means that the Fourier transform of f(t) is zero outside the frequency interval [−ν_f, ν_f]. Then the critical sampling frequency—the Nyquist frequency—is ν_N = 2ν_f. For an appropriate sampling, more than 2ν_f samples have to be chosen per unit distance. For a uniform sampling at a frequency ν_s, the sample points can be described by f_i = f(i/ν_s), with integer numbers i.
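As a brief worked example, with illustrative numbers that are not taken from the text:

```latex
% Illustrative numbers, chosen only for this sketch.
% The signal f(t) = sin(2*pi*10*t) is band-limited with maximum frequency nu_f = 10,
% so its Nyquist frequency is nu_N = 2*nu_f = 20.
f(t) = \sin(2\pi \cdot 10\, t) \quad\Rightarrow\quad \nu_f = 10, \qquad \nu_N = 2\nu_f = 20 .
% Any uniform sampling with nu_s > 20, e.g. nu_s = 25 and f_i = f(i/25),
% permits exact reconstruction, whereas sampling at nu_s = 15 would alias the signal.
```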


Provided that the original signal is sampled at a frequency ν_s > ν_N, the signal can be recovered from the samples f_i according to

f(t) = \sum_i f_i \, \mathrm{sinc}(\pi (\nu_s t - i)) .    (1.19)

The sinc function (for sinus cardinalis) is defined

\mathrm{sinc}(t) = \begin{cases} \sin(t)/t & \text{if } t \neq 0 \\ 1 & \text{if } t = 0 \end{cases} .

A signal whose Fourier transform is zero outside the frequency interval [−ν_f, ν_f] is called band-limited because its bandwidth (i.e., its range of frequencies) is bounded. In practice, the frequency content of an input data set may be unknown. In these cases, a low-pass filter can be applied to restrict the maximum frequency to a controlled value.
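To make the reconstruction formula concrete, here is a minimal Python sketch. The function names and the test signal are chosen for illustration, and the infinite sum of Equation 1.19 is necessarily truncated to the finitely many samples available, so the result is only approximate:

```python
import math

def sinc(t):
    # sinc(t) = sin(t) / t, with sinc(0) = 1, as defined above
    return 1.0 if t == 0.0 else math.sin(t) / t

def reconstruct(samples, nu_s, t):
    # Shannon reconstruction (Equation 1.19); the infinite sum is truncated
    # to the finitely many samples we actually have, so this is approximate.
    return sum(f_i * sinc(math.pi * (nu_s * t - i)) for i, f_i in enumerate(samples))

# A 4 Hz sine sampled at nu_s = 10 > nu_N = 8 samples per unit distance
nu_s = 10.0
f = lambda t: math.sin(2.0 * math.pi * 4.0 * t)
samples = [f(i / nu_s) for i in range(200)]         # f_i = f(i / nu_s)
print(f(1.234), reconstruct(samples, nu_s, 1.234))  # the two values should be close
```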

Equation 1.19 is one example of a convolution. In general, the convolution of two discrete functions f(i) = f_i and h(i) = h_i is given by

g(m) = (f \ast h)(m) = \sum_i f(i) \, h(m - i) .    (1.20)

The summation index i is chosen in a way that f(·) and h(·) are evaluated at all positions within their respective supports (i.e., at all positions where f(·) and h(·) are nonzero). Therefore, the above sum is a finite sum if at least one of the functions f(·) and h(·) has finite support.
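A direct implementation of Equation 1.20 for two finite-support signals might look like the following Python sketch; the function name and the example data are illustrative:

```python
def convolve(f, h):
    # Discrete convolution (Equation 1.20): g(m) = sum_i f(i) * h(m - i)
    # for two finite-support signals given as Python lists.
    g = [0.0] * (len(f) + len(h) - 1)
    for i, f_i in enumerate(f):
        for j, h_j in enumerate(h):
            g[i + j] += f_i * h_j
    return g

# Smoothing an input signal f with a normalized box kernel h
print(convolve([1.0, 4.0, 2.0, 5.0], [0.5, 0.5]))  # [0.5, 2.5, 3.0, 3.5, 2.5]
```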

The analogue of the above equation for the continuous case leads to the convolution integral

g(t) = (f \ast h)(t) = \int_{-\infty}^{\infty} f(t') \, h(t - t') \, \mathrm{d}t' .    (1.21)

In general, convolution is a prominent approach to filtering. In this context, f(·) is the input signal, h(·) is the filter kernel, and g(·) is the filtered output. In this notation, Equation 1.19 describes the convolution of a sampled input signal f_i with a sinc filter kernel.

Unfortunately, the sinc filter has an unlimited extent; i.e., it oscillates around zero over its whole domain. As a consequence, the convolution has to be evaluated for all input samples f_i, which can be very time-consuming. Therefore, in practice, reconstruction filters with finite support are often applied. A typical example is the box filter, which leads to nearest-neighbor interpolation when the box width is identical to the sampling distance (i.e., the reconstructed function value is set to the value of the nearest sample point). Another example is the tent filter, which leads to piecewise linear reconstruction when the width of one side of the tent is identical to the sampling distance. Figure 1.8 illustrates different reconstruction filters.

Figure 1.8. Three reconstruction filters: (a) box, (b) tent, and (c) sinc filters.
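The following Python sketch (with hypothetical function names, and assuming unit sample spacing) evaluates these two finite-support kernels and shows that reconstruction with the box filter reproduces nearest-neighbor interpolation, while the tent filter yields linear interpolation:

```python
def box(t):
    # Box filter of width 1: reconstruction with it is nearest-neighbor interpolation.
    return 1.0 if -0.5 <= t < 0.5 else 0.0

def tent(t):
    # Tent filter with one-sided width 1: reconstruction with it is linear interpolation.
    return max(0.0, 1.0 - abs(t))

def reconstruct(samples, t, kernel):
    # Convolve samples taken at unit spacing with a finite-support reconstruction kernel.
    return sum(f_i * kernel(t - i) for i, f_i in enumerate(samples))

samples = [0.0, 1.0, 0.0, 2.0]
print(reconstruct(samples, 1.25, box))   # 1.0  (value of the nearest sample)
print(reconstruct(samples, 1.25, tent))  # 0.75 (linear interpolation between 1.0 and 0.0)
```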

Another approach to overcome the problem of the infinite support of the sinc filter is to compute the convolution in frequency space, where the sinc filter has finite support. For example, Artner et al. [3] adopt this approach in the context of volume rendering.

So far, we have discussed functions of only one variable. By applying a tensor-product approach, 1D reconstruction can be immediately extended to n dimensions and, in particular, to 3D volumes. The essential idea is to perform the reconstruction for each dimension in a combined way. For the 3D case, a tensor-product reconstruction filter is h(x, y, z) = h_x(x) h_y(y) h_z(z), where h_x(·), h_y(·), h_z(·) are one-parameter filters along the x, y, and z directions. One important advantage of uniform grids (see Section 1.5.1) is their direct support for tensor-product reconstruction.
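As a small illustration of the separable construction (the names are mine, not from the text), a 3D tensor-product kernel can be assembled from three 1D kernels; with three tent filters this yields the kernel of trilinear interpolation:

```python
def tensor_product_kernel(hx, hy, hz):
    # Build a separable 3D reconstruction filter h(x, y, z) = hx(x) * hy(y) * hz(z)
    # from three 1D filter kernels.
    return lambda x, y, z: hx(x) * hy(y) * hz(z)

# With three tent filters, the tensor product is the trilinear reconstruction kernel.
tent = lambda t: max(0.0, 1.0 - abs(t))
h = tensor_product_kernel(tent, tent, tent)
print(h(0.25, 0.5, 0.0))  # 0.75 * 0.5 * 1.0 = 0.375
```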

We discuss the example of tensor-product linear interpolations in more detail because they are widely used in volume rendering. As illustrated in Figure 1.9, the tensor-product approach separates the interpolations along the different dimensions and therefore allows us to compute the reconstructed value by a sequence of linear interpolations.

Figure 1.9. Tensor-product linear interpolations: linear, bilinear, and trilinear.

To simplify the notation, the following discussion assumes normalized coordinate values x, y, and z that are in the interval [0, 1] for points within the cell (i.e., line in 1D, rectangle in 2D, and cube in 3D). Normalized coordinates can be obtained from arbitrary coordinates by scaling and translation. Linear interpolation between two points a and b can then be computed by

f(p) = (1 − x) f(a) + x f(b),

where f(a) and f(b) are the function values at the sample points a and b, respectively. The result is the interpolated function value at point p.

Tensor-product linear interpolation in two dimensions is called bilinear interpolation. Bilinear interpolation at point p can be computed by successive linear interpolations in the following way:

f(p) = (1 − y) f(p_ab) + y f(p_cd),

with the intermediate results from linear interpolations along the x direction according to

f(p_ab) = (1 − x) f(a) + x f(b),    f(p_cd) = (1 − x) f(d) + x f(c).

By combining these expressions, we obtain a single expression for bilinear interpolation:

f(p) = (1 − x)(1 − y) f(a) + (1 − x) y f(d) + x (1 − y) f(b) + x y f(c),

which explicitly shows that bilinear interpolation is not linear because it contains quadratic terms (terms of second order).

Similarly, trilinear interpolation in three dimensions can be computed by the linear interpolation between two intermediate results obtained from bilinear interpolation:

f(p) = (1 − z) f(p_abcd) + z f(p_efgh).

The terms f(p_abcd) and f(p_efgh) are determined by bilinear interpolation within two faces of the cube. Trilinear interpolation contains terms up to cubic order; i.e., trilinear interpolation is not linear.
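The successive linear interpolations described above translate directly into code. The following Python sketch implements bilinear and trilinear interpolation for normalized cell coordinates; the function names and the corner ordering a, b, c, d and e, f, g, h are assumptions that follow the conventions of the equations above:

```python
def lerp(fa, fb, x):
    # Linear interpolation: f(p) = (1 - x) * f(a) + x * f(b)
    return (1.0 - x) * fa + x * fb

def bilerp(fa, fb, fc, fd, x, y):
    # Bilinear interpolation: two interpolations along x, then one along y,
    # with f(p_ab) from the edge a-b and f(p_cd) from the edge d-c as in the text.
    return lerp(lerp(fa, fb, x), lerp(fd, fc, x), y)

def trilerp(face_abcd, face_efgh, x, y, z):
    # Trilinear interpolation: bilinear interpolation on two opposite cube faces
    # (each given as its four corner values), then linear interpolation along z.
    return lerp(bilerp(*face_abcd, x, y), bilerp(*face_efgh, x, y), z)

# Cell center of a cube with corner values 0..3 on the bottom face and 4..7 on the top
print(trilerp((0.0, 1.0, 2.0, 3.0), (4.0, 5.0, 6.0, 7.0), 0.5, 0.5, 0.5))  # 3.5
```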

Tensor-product linear interpolations play a dominant role in volume rendering because they are fast to compute. In particular, graphics hardware provides direct support for this kind of interpolation within 1D, 2D, or 3D textures. Although tensor-product linear interpolations are often used, it should be noted that they might not result in appropriate rendering quality. As discussed in Chapter 9, especially in the context of volume filtering and reconstruction (Section 9.2), better filtering methods have been developed for real-time volume rendering. Figure 1.10 serves as an example image that motivates the use of more accurate reconstruction filters: trilinear interpolation (Figure 1.10 (left)) shows significant artifacts, whereas a higher-order filter (Figure 1.10 (right)) removes most of these artifacts.

Figure 1.10. Comparison between trilinear filtering (left) and cubic B-spline filtering (right).

Uniform grids are most often used in volume rendering and, thus, corresponding tensor-product reconstruction filters are frequently employed.

For other grid structures, slightly different reconstruction methods may be applied. For example, barycentric interpolation is a common technique for tetrahedral cells. Barycentric interpolation provides a linear interpolant, and it might be better known as the interpolation method for values within triangles. For example, graphics hardware interpolates values within the triangles of a mesh in this way.
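For illustration, a minimal Python sketch of barycentric interpolation inside a 2D triangle is given below; the weights are ratios of signed areas, and the same construction carries over to tetrahedral cells with signed volumes. The function names and example data are my own:

```python
def barycentric_interpolate(p, triangle, values):
    # Barycentric interpolation of scalar values inside a 2D triangle; the weights
    # are ratios of signed areas. The analogous construction for tetrahedra uses
    # signed volumes. Names and data are illustrative, not from the text.
    (ax, ay), (bx, by), (cx, cy) = triangle
    px, py = p

    def area(x0, y0, x1, y1, x2, y2):
        # Twice the signed area of a triangle (the factor 1/2 cancels in the ratios)
        return (x1 - x0) * (y2 - y0) - (x2 - x0) * (y1 - y0)

    total = area(ax, ay, bx, by, cx, cy)
    wa = area(px, py, bx, by, cx, cy) / total
    wb = area(ax, ay, px, py, cx, cy) / total
    wc = area(ax, ay, bx, by, px, py) / total   # wa + wb + wc == 1
    return wa * values[0] + wb * values[1] + wc * values[2]

# The value at the centroid is the average of the three vertex values
triangle = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
print(barycentric_interpolate((1.0 / 3.0, 1.0 / 3.0), triangle, (3.0, 6.0, 9.0)))  # 6.0
```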

1.6 Volume-Rendering Pipeline and Basic Approaches
