
The Aliasing Problem


No matter how good a line drawing algorithm is, it is impossible to avoid giving most discrete lines a staircase effect (the “jaggies”). They just will not look “straight.”

Increasing the resolution of the raster helps but does not resolve the problem entirely.

In order to draw the best looking straight lines one has to first understand the “real”

underlying problem which is one of sampling.

The geometric curves and surfaces one is typically trying to display are continuous and consist of an infinite number of points. Since a computer can only show a finite (discrete) set of points, how one chooses the finite set that is to represent the object is clearly important. Consider the sinusoidal curve in Figure 2.11. If we sample such a sine wave badly, say at the points A, B, C, and D, then it will look like a straight line. If we had sampled at the points A, E, F, and D, then we would think that it had a different frequency.
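This is easy to reproduce numerically. The following Python fragment (a minimal sketch; the 9 Hz and 8 Hz figures are hypothetical, not taken from Figure 2.11) undersamples a sine wave and shows that the resulting samples are indistinguishable from those of a much lower-frequency wave:

```python
import numpy as np

# Hypothetical numbers for illustration: a 9 Hz sine sampled only 8 times
# per second, well below the Nyquist rate of 18 samples per second.
f0, fs = 9.0, 8.0
t = np.arange(8) / fs                  # eight sample instants
samples = np.sin(2 * np.pi * f0 * t)   # the undersampled sine wave

# At these instants the 9 Hz wave agrees exactly with a 1 Hz wave, just as
# the samples A, B, C, D in Figure 2.11 suggest the wrong curve entirely.
alias = np.sin(2 * np.pi * 1.0 * t)
print(np.allclose(samples, alias))     # True
```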

The basic problem in sampling theory: How many samples does one have to take so that no information is lost?

This is a question that is studied in the field of signal processing. The theory of the Fourier transform plays a big role in the analysis. Chapter 21, in particular Section 21.6, gives an overview of some of the relevant mathematics. For more details of the mathematics involved in answering the sampling problem see [GonW87], [RosK76], or [Glas95]. We shall only summarize a few of the main findings here and indicate some practical solutions that are consequences of the theory.

Definition. A function whose Fourier transform vanishes outside a finite interval is called a band-limited function.

One of the basic theorems in sampling theory is the following:

The Whittaker-Shannon Sampling Theorem. Let f(x) be a band-limited function and assume that its Fourier transform vanishes outside [-w,w]. Then f(x) can be reconstructed exactly from its samples provided that the sampling interval is no bigger than 1/(2w).

Figure 2.11. Aliasing caused by bad sampling.

If T is a sampling interval, then 1/T is called the sampling frequency and 1/(2T) is called the Nyquist limit. The Whittaker-Shannon Theorem says that if a function is sampled less often than its Nyquist limit, then a complete recovery is impossible. One says that the function is undersampled in that case. Undersampling leads to a phenomenon referred to as aliasing, where fake frequencies or patterns appear that were not in the original object. The two-dimensional situation is similar, but in practice one must sample a lot more because of limitations of available reconstruction algorithms.
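In code the definitions are one-liners. The helper names below are ours, purely for illustration:

```python
def nyquist_limit(T):
    # Highest frequency recoverable from samples taken every T units: 1/(2T).
    return 1.0 / (2.0 * T)

def apparent_frequency(f, fs):
    # Frequency that an undersampled signal of frequency f appears to have
    # when sampled fs times per unit: frequencies fold about multiples of fs.
    r = f % fs
    return min(r, fs - r)

print(nyquist_limit(0.01))             # 50.0: 10 ms sampling recovers up to 50 Hz
print(apparent_frequency(60.0, 50.0))  # 10.0: a 60 Hz signal masquerades as 10 Hz
```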

Now in the discussion above, it was assumed that we were taking an infinite number of samples, something that we obviously cannot do in practice. What happens if we only take a finite number of samples? Mathematically, this corresponds to multiplying the sampled result by a function that vanishes outside a finite interval.

The main result is that it is in general impossible to faithfully reconstruct a function that has only been sampled over a finite range. To put it another way, no function that is nonzero over only a finite interval can be band-limited, and conversely, any band-limited function is nonzero over an unbounded set.

The practical consequences of the theory sketched above can be seen in lots of places. Aliasing is most apparent along edges, near small objects, along skinny highlights, and in textured regions. Ad hoc schemes for dealing with the problem may be disappointing because of the human visual system’s extreme sensitivity to edge discontinuities (vernier acuity). Aliasing is also a problem in animation. The best-known example of temporal aliasing is the case of the wagon wheel appearing to reverse its direction of motion as it spins faster and faster. Other examples are small objects flashing off and on the screen, slightly larger objects appearing to change shape and size randomly, and simple horizontal lines jumping from one raster line to another as they move vertically. See Figure 2.12. This happens because objects fall sometimes on and sometimes between sampled points.

Jaggies do not seem to appear in television because the signal generated by a television camera, which is sampled only in the vertical direction, is already band-limited before sampling. A slightly out of focus television camera will extract image samples that can be successfully reconstructed on the home television set. People working in computer graphics usually have no control over the reconstruction process; it is part of the display hardware. In practice, antialiasing techniques are embedded in algorithms (like line-drawing or visible surface determination algorithms). The approaches distinguish between the case of drawing isolated lines, lines that come from borders of polygons, and the interior of polygons.

There are essentially two methods used to lessen the aliasing problem. Intuitively speaking, one method treats pixels as having area and the other involves sampling at a higher rate.

Figure 2.12. Objects appearing, disappearing, changing size.

The obvious approach to the aliasing problem, where one simply increases the resolution of the display device, is a special case of the latter. Mathematically, the two methods are

(1) prefiltering, and

(2) supersampling or postfiltering.

Prefiltering. This amounts to treating each sample point as representing a finite area rather than simply a dot. Because lines often fall between pixels, this would avoid concentrating everything at a pixel in a hit-or-miss fashion. Mathematically, the process corresponds to applying a convolutional filter before sampling. One must make sure that the highest frequency of a signal in a scene does not exceed one-half the sampling rate.
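A one-dimensional sketch of the idea, with made-up rates and frequencies, where a dense array stands in for the continuous signal:

```python
import numpy as np

# A dense array stands in for the continuous signal: a 3 Hz component we
# want plus a 78 Hz component that a 10 Hz sampling rate cannot represent.
t = np.linspace(0.0, 1.0, 1000, endpoint=False)
signal = np.sin(2 * np.pi * 3 * t) + 0.5 * np.sin(2 * np.pi * 78 * t)

step = 100                      # keep every 100th value: 10 samples per second
naive = signal[::step]          # the 78 Hz term folds down to a spurious 2 Hz

box = np.ones(step) / step      # box filter covering one sampling interval
prefiltered = np.convolve(signal, box, mode='same')[::step]
# The box average attenuates the 78 Hz term by a factor of roughly 40 before
# sampling, so `prefiltered` follows the 3 Hz component almost exactly.
```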

Two widely used models for computing the area subtended by a pixel are

(1) One considers the image to be a square grid as in Figure 2.13, with the pixels at the centers of the squares.

(2) One computes the area using a weighting function similar to a Gaussian function. This in fact models the effect of the electron beam of a CRT and of printing processes more closely. The pixels are larger and overlap, and details near the center now count more heavily than those near the edge.

Model (1) is easier than (2), but (2) produces better pictures. Internal details, such as highlights, are harder to handle.
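Model (2) can be sketched in a few lines; the spread sigma below is an assumed value, not one prescribed by the model:

```python
import math

def pixel_weight(dx, dy, sigma=0.5):
    # Gaussian-like weight of a detail at offset (dx, dy) from the pixel
    # center, measured in pixel units. Details near the center count more.
    return math.exp(-(dx * dx + dy * dy) / (2.0 * sigma * sigma))

print(pixel_weight(0.0, 0.0))   # 1.0 at the center
print(pixel_weight(0.5, 0.0))   # ~0.61 halfway to the pixel's edge
```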

In the case of boundaries of polygons we can use shading to suggest the position of the edges and can make the picture look as if it had higher resolution than it in fact has. To that end, one associates to each pixel an intensity proportional to the percentage of its area that is covered by the polygon. For example, if the intensity values range from 0 to 15, then we might assign pixel A in Figure 2.13 a value of 2 and pixel B a value of 8. This approach could obviously increase the amount of computation substantially. However, by using an efficient approximation of the area that is needed, it turns out that a slight modification to the Bresenham algorithm yields an efficient implementation, namely the Pitteway-Watkinson algorithm. See [PitW80] or [Roge98].
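The bookkeeping is trivial once the coverage is known; the fractions below are made-up stand-ins for pixels A and B:

```python
def coverage_to_intensity(coverage, levels=16):
    # Map the fraction of the pixel's area covered by the polygon
    # (a number in [0, 1]) to one of `levels` intensity values.
    return round(coverage * (levels - 1))

print(coverage_to_intensity(0.13))  # 2, like pixel A in Figure 2.13
print(coverage_to_intensity(0.53))  # 8, like pixel B
```

The hard part, of course, is computing the coverage cheaply, which is precisely what the Pitteway-Watkinson modification of Bresenham accomplishes.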

Another approach for drawing antialiased lines treats the lines as having a thickness. An algorithm of this type is the Gupta-Sproull algorithm. See [GupS81], [Thom90], or [FVFH90]. It also starts with the standard Bresenham algorithm and then adds some checks for nearby pixels above and below each pixel that would be drawn by that algorithm.
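The flavor of such distance-based shading can be sketched as follows; this illustrates the idea only and is not the Gupta-Sproull filter itself:

```python
import math

def thick_line_intensity(px, py, x0, y0, x1, y1, half_width=1.0):
    # Shade the pixel centered at (px, py) by its perpendicular distance to
    # the line through (x0, y0) and (x1, y1): full intensity on the line,
    # fading linearly to zero at the edge of the line's assumed thickness.
    dx, dy = x1 - x0, y1 - y0
    dist = abs((px - x0) * dy - (py - y0) * dx) / math.hypot(dx, dy)
    return max(0.0, 1.0 - dist / half_width)
```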

Figure 2.13. Pixel intensities based on percentage of area covered.

Supersampling. Here we sample at more points than will actually be displayed.

More precisely, we sample at n uniformly situated points within the region associated to each pixel and then assign the average of these values to the pixel. One usually oversamples the same amount in each direction, so that n = s² for some scaling factor s.

For example, to create a 512 × 512 image we would sample at 1536 × 1536 points if s is 3. The samples would be taken 1/3 of a pixel width apart. In Figure 2.14, each square corresponds to a pixel in the final image and the dots show the locations of the nine samples per pixel.
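A direct, if slow, implementation might look like this; `render(x, y)` is an assumed callback that returns the scene's intensity at a point given in pixel coordinates:

```python
import numpy as np

def supersample(render, width, height, s=3):
    # Average s*s uniformly spaced samples inside each pixel. For s = 3 the
    # offsets are 1/6, 1/2, 5/6, i.e. 1/3 of a pixel apart as in Figure 2.14.
    offsets = (np.arange(s) + 0.5) / s
    image = np.empty((height, width))
    for j in range(height):
        for i in range(width):
            image[j, i] = np.mean([render(i + ox, j + oy)
                                   for oy in offsets for ox in offsets])
    return image
```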

Postfiltering. In supersampling the sample values for each pixel are averaged. This gives each sample the same weight. Postfiltering uses the same approach but allows each sample to have a different weight. Supersampling is therefore a special case of postfiltering. Different weighting or “window” functions can be used. For example, if we represent the weighting operation in matrix form, with the ij-th entry being the weighting factor for the ij-th sample, then rather than using the uniform supersampling matrix

    (1/9) | 1  1  1 |
          | 1  1  1 |
          | 1  1  1 |

we could use a center-weighted window such as

    (1/16) | 1  2  1 |
           | 2  4  2 |
           | 1  2  1 |
Mathematically, postfiltering corresponds to a convolution and filtering operation on the samples. The cost of generating an image with supersampling and postfiltering is proportional to the number of scan lines. The cost of calculations involving shading is proportional to the square of the number of scan lines. This means that the algorithm is particularly expensive for visible surface determination algorithms.
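As a sketch, replacing the uniform average by the center-weighted window above takes only a line or two (the particular matrix is illustrative; any normalized window is used the same way):

```python
import numpy as np

W = np.array([[1, 2, 1],
              [2, 4, 2],
              [1, 2, 1]]) / 16.0   # weights sum to 1

def postfilter_pixel(samples):
    # `samples` is the 3 x 3 array of values taken inside one pixel.
    return float((W * samples).sum())
```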

In conclusion, antialiasing techniques add a large amount of computation time to any algorithm that uses them. To minimize this extra work, one tries to do it only for areas where problems occur and makes no special computations for the rest. Of course, this assumes that one knows all about the picture, say a jar defined via many polygons. For lots more about antialiasing techniques see [FVFH90].


Figure 2.14. Supersampling with scaling factor 3.
