
3.2 2D Texture–Based Volume Rendering

3.2.1 Texture Set-Up

// entry point reconstructed; the excerpt begins mid-listing
float4 main(half2 texUV : TEXCOORD0,
            uniform sampler2D slice) : COLOR
{
    float4 result = tex2D(slice, texUV);
    return result;
}

Listing 3.1. A simple fragment program in Cg that samples the given 2D texture image.

During texture set-up, we must prepare the texture images for the three stacks of slices and upload them to local graphics memory. During rendering, the geometry set-up will take care that the correct textures are bound for each polygon. OpenGL automatically applies a least-recently-used (LRU) texture-management strategy: if storage space is needed to roll in additional textures, the texture images with the oldest time stamp are swapped out. This is appropriate as long as we have enough local graphics memory to store all textures required during one frame. In some cases the LRU strategy is inefficient. In fact, a most-recently-used (MRU) strategy is advantageous if the texture data required to render one frame does not fit into local graphics memory all at once. In this case, texture priorities must be used to control the memory management.

As mentioned above, we assume for now that we are directly given emission and absorption values for each voxel instead of the scalar value.

The information stored in each texel is an RGBA quadruplet. The RGB part defines the intensity and color of the emitted light. The A component specifies opacity, i.e., the amount of light absorbed by the voxel. For now, we create a number of texture objects with an internal texture format of RGBA. We will change the internal format later, when we assign the optical properties using transfer functions (see Chapter 4).
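As a minimal sketch of this set-up for one of the three stacks (the names textureNamesStackX and pSliceData as well as the slice layout are our placeholders, not code from the book; slices perpendicular to the x-axis are assumed to hold YDIM × ZDIM texels):

GLuint textureNamesStackX[XDIM];

void CreateTextureStackX()
{
    glGenTextures(XDIM, textureNamesStackX);
    for (int slice = 0; slice < XDIM; ++slice) {
        glBindTexture(GL_TEXTURE_2D, textureNamesStackX[slice]);
        // bilinear filtering within each slice image
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
        // upload one RGBA slice (emission in RGB, absorption in A);
        // pSliceData(slice) is a hypothetical accessor to the voxel data
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, YDIM, ZDIM, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, pSliceData(slice));
    }
}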

For shading the fragments, we use the simple fragment program displayed in Listing 3.1. The final color of the fragment is replaced by the sample from the active 2D texture. More elaborate fragment programs will be introduced in later chapters, when we look at transfer functions and illumination techniques.
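Loading and activating such a program with the Cg runtime might look roughly as follows; the file name slice_shader.cg and the entry point name are our assumptions, and error handling is omitted:

#include <Cg/cg.h>
#include <Cg/cgGL.h>

// create a Cg context and load the fragment program of Listing 3.1
CGcontext context = cgCreateContext();
CGprofile profile = cgGLGetLatestProfile(CG_GL_FRAGMENT);
CGprogram program = cgCreateProgramFromFile(context, CG_SOURCE,
        "slice_shader.cg", profile, "main", NULL);
cgGLLoadProgram(program);

// activate it before drawing the slice polygons
cgGLEnableProfile(profile);
cgGLBindProgram(program);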

3.2.2 Geometry Set-Up

A code fragment implementing the view-dependent geometry set-up in OpenGL is given in Listing 3.2.

GLfloat pModelViewMatrix[16];
GLfloat pModelViewMatrixInv[16];

// get the current modelview matrix
glGetFloatv(GL_MODELVIEW_MATRIX, pModelViewMatrix);
// invert the modelview matrix
InvertMatrix(pModelViewMatrix, pModelViewMatrixInv);

// rotate the initial viewing direction
GLfloat pViewVector[4] = {0.0f, 0.0f, -1.0f, 0.0f};
MatVecMultiply(pModelViewMatrixInv, pViewVector);

// find the maximal vector component
int nMax = FindAbsMaximum(pViewVector);

switch (nMax) {
case X:
    if (pViewVector[X] > 0.0f) {
        DrawSliceStack_PositiveX();
    } else {
        DrawSliceStack_NegativeX();
    }
    break;
case Y:
    if (pViewVector[Y] > 0.0f) {
        DrawSliceStack_PositiveY();
    } else {
        DrawSliceStack_NegativeY();
    }
    break;
case Z:
    if (pViewVector[Z] > 0.0f) {
        DrawSliceStack_PositiveZ();
    } else {
        DrawSliceStack_NegativeZ();
    }
    break;
}

Listing 3.2. OpenGL code for selecting the slice direction. An example implementation for the drawing functions can be found in Listing 3.3.

To compute the viewing direction relative to the volume object, the modelview matrix must be obtained from the current OpenGL state. This matrix represents the transformation from the local coordinate system of the volume into camera space. The viewing direction in camera space (the negative z-axis in OpenGL) must be transformed by the inverse of this matrix. According to the maximum component of the transformed viewing vector, the appropriate stack of slices is chosen. This code sample assumes that all object and camera transformations are stored in the modelview matrix stack. You should not misuse the projection matrix for storing them. Note that the multiplication of the negative z-axis with the viewing matrix in this example can further be simplified by directly extracting and negating the third column vector from the 4×4 matrix.
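As a sketch of that simplification (our illustration, reusing pModelViewMatrixInv from Listing 3.2 and assuming OpenGL's column-major array layout):

// multiplying the inverse matrix by (0, 0, -1, 0) just selects the
// negated third column; in column-major layout, column 2 occupies
// array elements 8, 9, and 10
GLfloat pViewVector[4] = { -pModelViewMatrixInv[8],
                           -pModelViewMatrixInv[9],
                           -pModelViewMatrixInv[10],
                            0.0f };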

The selected stack of object-aligned polygons is displayed by drawing it in back-to-front order.

// draw slices perpendicular to the x-axis
// in back-to-front order
void DrawSliceStack_NegativeX()
{
    double dXPos = -1.0;
    double dXStep = 2.0 / double(XDIM);

    for (int slice = 0; slice < XDIM; ++slice) {
        // select the texture image corresponding to the slice
        glBindTexture(GL_TEXTURE_2D, textureNamesStackX[slice]);

        // draw the slice polygon
        glBegin(GL_QUADS);
        glTexCoord2d(0.0, 0.0); glVertex3d(dXPos, -1.0, -1.0);
        glTexCoord2d(0.0, 1.0); glVertex3d(dXPos, -1.0,  1.0);
        glTexCoord2d(1.0, 1.0); glVertex3d(dXPos,  1.0,  1.0);
        glTexCoord2d(1.0, 0.0); glVertex3d(dXPos,  1.0, -1.0);
        glEnd();

        dXPos += dXStep;
    }
}

Listing 3.3. OpenGL code for drawing a stack of object-aligned textured polygons in back-to-front order along the negative x-axis. The volume is assumed to lie within the unit cube and has a resolution of XDIM × YDIM × ZDIM voxels. In a practical implementation, a display list should be used and the geometry should be written into vertex buffers in order to minimize the number of function calls.

During rasterization, each polygon is textured with the image information directly obtained from its corresponding 2D texture map. Bilinear interpolation within the texture image is accelerated by the texturing subsystem. Note that the third interpolation step for a full trilinear interpolation is completely omitted in this approach.

Let us assume that our volume is defined within the unit cube (x, y, z ∈ [−1, 1]) and has a resolution of XDIM × YDIM × ZDIM voxels. Listing 3.3 shows the code for drawing a slice stack along the negative x-axis. The drawing function for the positive x-axis is simply obtained by reversing the for loop in Listing 3.3. This means that dXPos is initialized with a value of 1.0 and decremented with each pass. In this case, the texture names must be bound in reverse order: the index into the array must be XDIM-slice-1 instead of slice.
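Put together, a minimal sketch of this reversed drawing function (our illustration, reusing XDIM and textureNamesStackX from Listing 3.3) could look like this:

// draw slices perpendicular to the x-axis in back-to-front
// order for a viewer on the positive side of the stack
void DrawSliceStack_PositiveX()
{
    double dXPos = 1.0;
    double dXStep = 2.0 / double(XDIM);

    for (int slice = 0; slice < XDIM; ++slice) {
        // bind the texture images in reverse order
        glBindTexture(GL_TEXTURE_2D, textureNamesStackX[XDIM - slice - 1]);

        glBegin(GL_QUADS);
        glTexCoord2d(0.0, 0.0); glVertex3d(dXPos, -1.0, -1.0);
        glTexCoord2d(0.0, 1.0); glVertex3d(dXPos, -1.0,  1.0);
        glTexCoord2d(1.0, 1.0); glVertex3d(dXPos,  1.0,  1.0);
        glTexCoord2d(1.0, 0.0); glVertex3d(dXPos,  1.0, -1.0);
        glEnd();

        dXPos -= dXStep;
    }
}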

Drawing functions for the remaining viewing directions are simply obtained by permutation of the vector components and by using the array of texture names that corresponds to the selected major axis. For most efficient rendering, the geometry should also be stored in a vertex array or a vertex buffer, if available. This will reduce the number of function calls and the amount of data transferred to the GPU. The entire for loop including the texture binding operations can be compiled into a display list, as sketched below.
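A minimal display-list sketch along these lines (the variable name displayListNegX is our own; glBindTexture calls are recorded in display lists since OpenGL 1.1):

// compile the texture binds and slice quads once ...
GLuint displayListNegX = glGenLists(1);
glNewList(displayListNegX, GL_COMPILE);
DrawSliceStack_NegativeX();
glEndList();

// ... and replay them with a single call per frame
glCallList(displayListNegX);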

3.2.3 Compositing

According to the physical model described in Section 1.4, the equation of radiative transfer can be iteratively solved by discretization along the viewing ray. As described above, the internal format for our 2D textures is RGBA, which means that each texel allocates four fixed-point values: one value each for the red (R), green (G), and blue (B) components, plus one for the opacity (A) value. For each voxel, the color value (RGB) is the source term ci from Equation 1.13. The opacity value A is the inverted transparency (1 − Ti) from Equation 1.12. Using this configuration, the radiance I resulting from an integration along a viewing ray can be approximated by the use of alpha blending.

The blending equation specifies a component-wise linear combination of the RGBA quadruplet of an incoming fragment (source) with the values already contained in the frame buffer (destination). If blending is disabled, the destination value is replaced by the source value. With blending enabled, the source and the destination RGBA quadruplets are combined by a weighted sum forming a new destination value. In order to compute the iterative solution according to Equation 1.11, the opacity (1 − Ti) stored in the A component of the texture map must be used as blending factor.


// alpha blending for colors pre-multiplied with opacity
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);

// standard alpha blending set-up
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

Listing 3.4. Compositing: OpenGL code for back-to-front alpha blending. The upper listing assumes that the color values are pre-multiplied with opacity in order to avoid color bleeding during interpolation. The lower listing is the standard set-up for alpha blending in OpenGL.

To implement the back-to-front compositing scheme from Equation 1.15, a color component C ∈ {R, G, B} is computed by a blending equation as follows:

Cdest = Csrc + Cdest (1 − Asrc) . (3.1)

This blending scheme corresponds to the OpenGL alpha-blending set-up displayed in the upper part of Listing 3.4. It is important to note that this blending set-up uses associated colors as explained in Section 1.4.4.

Associated colors consist of RGB components that are already weighted by their corresponding opacity A.

The described blending set-up is different from the standard way of alpha blending you might be familiar with.


Figure 3.3. Example of color bleeding during interpolation: a triangle is drawn with different colors specified at the vertices. Color values are interpolated in the interior of the triangle. In the middle image, the red and green vertices have been set to completely transparent, but their colors are still "bleeding" into the interior of the triangle due to linear interpolation. In the right image, color bleeding was suppressed by pre-multiplying the vertex colors by their opacity value before interpolation.

OpenGL applications often use a different equation for back-to-front blending, denoted

Cdest = Csrc · Asrc + Cdest (1 − Asrc) . (3.2)

This equation is equivalent to the blending set-up displayed in the lower part of Listing 3.4. It assumes that the RGB components of the incoming fragment are not pre-multiplied with opacity A. The color values are thus weighted by the opacity at the blending stage, before they are written into the frame buffer. Although at first glance both set-ups seem to be equivalent, they are actually not. The benefit of associated colors is the fact that color-bleeding artifacts that may occur during interpolation are avoided.

To understand the principle of color bleeding, let us examine the simple case outlined in Figure 3.3. A triangle is drawn with different color values at the vertices. If we enable smooth shading, the color values for fragments in the interior of the triangle are interpolated from the values given at the vertices. If we set the opacity value A for some of the vertices to 0 (full transparency), the color value of the vertex should not have any influence on the rendering at all. However, as can be seen in the middle image in Figure 3.3, this is not the case if standard interpolation and blending are used.

The colors of the red and green vertices are still visible, due to component-wise linear interpolation of the RGBA quadruplets across the triangle. Examine a fragment that lies halfway between the fully transparent red vertex (RGBA = [1, 0, 0, 0]) and the fully opaque blue vertex (RGBA = [0, 0, 1, 1]). It will receive an RGBA value of [1/2, 0, 1/2, 1/2]. The red component is not equal to 0, although the red vertex should be invisible.

Contrary to the example illustrated in Figure 3.3, in our volume-rendering approach color-bleeding effects occur during texture filtering instead of fragment color interpolation, but the effect is the same. Both effects can easily be suppressed by using associated colors. To avoid color bleeding, it is only necessary to pre-multiply the RGB vertex colors by their corresponding opacity value A prior to interpolation. In this case, a completely transparent vertex would receive an RGBA value of [0, 0, 0, 0] regardless of its original color. As can be seen in the right image of Figure 3.3, the color-bleeding artifacts have been successfully removed. The blending weight for the source color is here set to one (see Listing 3.4, top), because we have already multiplied it with the source opacity value before the interpolation. As we see, such a blending set-up allows color-bleeding effects to be removed at no additional cost.
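Applied to our 2D textures, this amounts to one multiplication per texel before upload; a minimal sketch, assuming 8-bit RGBA texels in a hypothetical buffer pTexels:

// pre-multiply R, G, and B by A/255 for every texel,
// rounding to the nearest representable value
void PremultiplyAlpha(unsigned char* pTexels, int nTexels)
{
    for (int i = 0; i < nTexels; ++i) {
        unsigned char* t = pTexels + 4 * i;
        t[0] = (unsigned char)((t[0] * t[3] + 127) / 255);
        t[1] = (unsigned char)((t[1] * t[3] + 127) / 255);
        t[2] = (unsigned char)((t[2] * t[3] + 127) / 255);
    }
}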

As an alternative, the back-to-front scheme may be substituted by front-to-back compositing (see Section 1.4.2). Only a few modifications to the code are necessary: the slices must now be drawn in reverse order. This can easily be achieved by exchanging the drawing functions for the positive and the negative case in Listing 3.3. In the upper part of Listing 3.4, the blending weights must be replaced by GL_ONE_MINUS_DST_ALPHA and GL_ONE for associated colors. The result is a blending equation according to

Cdest = Csrc (1 − Adest) + Cdest . (3.3)

For nonassociated colors, the RGB value of each fragment must be multiplied by its alpha component in the fragment program. The drawback of front-to-back compositing is that an alpha buffer is required for storing the accumulated opacity. The back-to-front compositing scheme manages without the alpha buffer because the alpha value of the incoming fragment is used as the blending weight. Front-to-back compositing, however, is required to implement early ray termination and occlusion-culling techniques, as we will see in Chapter 8.
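A minimal sketch of this front-to-back set-up for associated colors, using exactly the blending factors named above:

// front-to-back compositing for associated colors;
// the accumulated opacity lives in the destination alpha buffer
glEnable(GL_BLEND);
glBlendFunc(GL_ONE_MINUS_DST_ALPHA, GL_ONE);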

Maximum intensity projection. As an alternative to solving the equation of radiative transfer, maximum intensity projection (MIP) is a common technique that does not require numerical integration at all. Instead, the color of a pixel in the final image is determined as the maximum of all the intensity values sampled along the ray, according to

I = max_{k=0..N} sk , (3.4)

with sk denoting the original scalar value sampled along the ray.

Unfortunately, the maximum operation in the blending stage is not part of the standard OpenGL fragment operations. Implementing MIP is a simple example for the use of the widely supported OpenGL extension EXT_blend_minmax. This extension introduces a new OpenGL function, glBlendEquationEXT, which enables both maximum and minimum computation between source and destination RGBA quadruplets. The respective blending set-up is displayed in Listing 3.5.

#ifdef GL_EXT_blend_minmax
// enable alpha blending
glEnable(GL_BLEND);
// enable maximum selection
glBlendEquationEXT(GL_MAX_EXT);
// set up arguments for the blending equation
glBlendFunc(GL_SRC_COLOR, GL_DST_COLOR);
#endif

Listing 3.5. OpenGL compositing set-up for maximum intensity projection in the per-fragment operations using the widely supported extension EXT_blend_minmax.


Figure 3.4. CT angiography: a comparison between the emission-absorption model (a) and maximum intensity projection (b). Note that the depth relations in image (b) are unclear because only the largest value along the ray is displayed, regardless of occlusion.

Maximum intensity projection is frequently used in medical applications. It is applicable to tomographic data recorded after injecting contrast dye of high signal, such as angiography data. A visual comparison of MIP and ray integration is exemplified in Figure 3.4 by means of CTA¹ data of blood vessels inside the human head. Whereas for the emission-absorption model (Figure 3.4(a)) a transfer function table must be assigned to extract the vessels (see Chapter 4), the same vascular structures are immediately displayed in the MIP image (Figure 3.4(b)). Note that in comparison to ray integration, the surface structure of the bone is not visible in the MIP image. Bone structures have the highest signal intensity in CT data.

Hence, all rays that hit a bone voxel somewhere inside the data set are set to bright white. In consequence, a major drawback of MIP is the fact that depth information is completely lost in the output images. This comes with a certain risk of misinterpreting the spatial relationships of different structures.

3.2.4 Discussion

The main benefits of our first solution based on 2D texture mapping are its simplicity and its performance. The high rendering speed is achieved by utilizing the bilinear interpolation performed by the graphics hardware. Because only 2D texturing capabilities are used, fast implementations can be achieved on almost every OpenGL-compliant hardware. We will see, however, that this first solution comes with several severe drawbacks if we analyze the quality of the generated images.

¹CTA: computerized tomography angiography.


Figure 3.5. Aliasing artifacts become visible at the edges of the slice polygons.

The image quality is equivalent to a CPU implementation using a shear-warp factorization [149], because the same computational mechanisms are applied. Magnification of the images often results in typical aliasing artifacts, as displayed in Figure 3.5. Such artifacts become visible at the edges of the slice polygons and are caused by an insufficient sampling rate.

The sampling rate in our implementation cannot be changed. It is determined by the distance between two slice images. This distance is fixed and restricted by the number of texture images we have created. We will see in Chapter 4 that a fixed sampling rate is impractical, especially if used in conjunction with transfer functions that contain sharp boundaries. The sampling rate must be increased significantly to accommodate the additional high frequencies introduced into the data.

The strong aliasing artifacts in Figure 3.5 originate from an inaccuracy during ray integration. We could easily remove such artifacts by precomputing and inserting multiple intermediate slices. This would be equivalent to increasing the sampling rate. Interpolating additional slices from the original discrete volume data and uploading them as texture images, however, would mean that we waste graphics memory by storing redundant information on the GPU. Obviously, the sampling rate we use is too low, and bilinear interpolation is not accurate enough. In Chapter 4, we will examine the sampling-rate problem in more detail. It becomes evident that we need a mechanism for increasing the sampling rate at runtime without increasing the resolution of the volume in memory.

Before we proceed, let us have a look at other inaccuracies introduced by the algorithm. In order to analyze image quality, it is important to examine how numerical integration is performed in this implementation.

Let us reconsider the physical model described in Chapter 1. Both the discretized transparency Ti and the source term ci are built upon the notion of a constant length ∆x of ray segments. This segment length is the distance between subsequent sampling points along the viewing ray, and it is determined by the spacing between two adjacent slice planes with respect to the viewing direction. The distance between two slices, of course, is fixed.


Figure 3.6. The distance between adjacent sampling points depends on the viewing angle.

The source terms and opacity coefficients stored in the 2D textures are only valid if we assume a fixed distance between the sampling points along a ray. This, however, is not true for the described algorithm, because the distance between adjacent sampling points depends on the angle at which the assumed viewing ray intersects the slices (see Figure 3.6). In consequence, the result of the numerical integration will only be accurate for one particular viewing direction in the case of orthographic projection. For perspective projection, the angle between the viewing ray and a slice polygon is not even constant within one image. Throughout our experiments, however, we have observed that this lack of accuracy is hardly visible as long as the field of view is not extremely large.
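The effect can be quantified with a back-of-the-envelope estimate of ours (not taken from the text): if d is the slice spacing along the selected stack axis and v is the normalized viewing direction, the effective segment length along an orthographic ray is

∆x = d / |v_axis| , with 1/√3 ≤ |v_axis| ≤ 1 ,

where the lower bound holds because the stack is always chosen along the largest component of v. The segment length thus varies by up to a factor of √3 with the viewing angle, which is exactly the inaccuracy described above.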

In addition to the sampling artifacts, a flickering may be visible when the algorithm switches between different stacks of polygon slices. The reason for such effects is an abrupt shift of the sampling positions. Figure 3.7 illustrates this problem. Figures 3.7(a) and (b) show the viewing direction at which the slicing direction is ambiguous. If we examine the location of the sampling points by superimposing both configurations (Figure 3.7(c)), it becomes clear that the actual position of the sampling points changes abruptly, although the sampling rate remains the same. According to the sampling theorem, the exact position of the sampling points should not have any influence on the reconstructed signal. However, this assumes an ideal reconstruction filter and not a tent filter. The magnitude of the numerical error introduced by linear approximation has an upper limit that


Figure 3.7. Flickering is caused by changing between different slice stacks (a) and (b). The superposition (c) shows that the location of the sampling points abruptly changes, which results in visible switching effects.
