2.3.6 Errors in object position and geometry

The tessellation of space in discrete images limits the accuracy of the estimation of the position of an object and thus of all other geometrical quantities such as distance, area, circumference, and orientation of lines. It is obvious that the accuracy of the position of a single point is only in the order of the lattice constant. The interesting question is, however, how this error propagates into position errors for larger objects and other relations. This question is of significant importance because of the relatively low spatial resolution of images as compared to other measuring instruments. Without much effort many physical quantities such as frequency, voltage, and distance can be measured with an accuracy better than 1 ppm, that is, 1 in 1,000,000, while images have a spatial resolution in the order of 1 in 1000 due to the limited number of pixels. Thus only highly accurate position estimates in the order of 1/100 of the pixel size result in an accuracy of about 1 in 100,000.

The discussion of position errors in this section will be limited to orthogonal lattices. These lattices have the significant advantage that the errors in the different directions can be discussed independently. Thus the following discussion is not only valid for 2-D images but for any type of multidimensional signal, and we need to consider only one component.

In order to estimate the accuracy of the position estimate of a single point it is assumed that all positions are equally probable. This means a constant probability density function in the interval ∆x. Then the variance σ_x² introduced by the position discretization is given by Papoulis [2, p. 106]

$$\sigma_x^2 = \frac{1}{\Delta x} \int\limits_{x_n - \Delta x/2}^{x_n + \Delta x/2} (x - x_n)^2 \, \mathrm{d}x = \frac{(\Delta x)^2}{12} \qquad (2.13)$$

Thus the standard deviation σ_x is about 1/√12 ≈ 0.3 times the lattice constant ∆x. The maximum error is, of course, 0.5 ∆x.
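The factor (∆x)²/12 is easy to verify numerically. The following minimal sketch (Python/NumPy; the lattice constant and sample count are arbitrary choices for illustration) rounds uniformly distributed positions to a lattice and compares the empirical variance of the rounding error with Eq. (2.13):

```python
import numpy as np

rng = np.random.default_rng(0)
dx = 1.0                                      # lattice constant (pixel size), arbitrary
x = rng.uniform(0.0, 1000.0, 1_000_000)       # "true" positions, uniformly distributed

x_discrete = np.round(x / dx) * dx            # positions snapped to the lattice
error = x - x_discrete                        # discretization error, uniform in [-dx/2, dx/2]

print(error.var(), dx**2 / 12)                # empirical vs. theoretical variance, Eq. (2.13)
print(error.std())                            # about 0.29 dx, as stated above
```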

All other errors for geometrical measurements of segmented objects can be related to this basic position error by statistical error propagation. We will illustrate this with a simple example computing the area and center of gravity of an object. For the sake of simplicity, we start with the unrealistic assumption that any cell that contains even the smallest fraction of the object is regarded as a cell of the object.

We further assume that this segmentation is exact, that is, the signal itself does not contain noise and separates without errors from the background. In this way we separate all other errors from the errors introduced by the discrete lattice.

The area of the object is simply given as the product of the number N of cells and the area A_c of a cell. This simple estimate is, however, biased towards a larger area because the cells at the border of the object are only partly covered by the object. On average, the border cells are only half covered. Hence an unbiased estimate of the area is given by

$$A = A_c \,(N - 0.5\,N_b) \qquad (2.14)$$

where N_b is the number of border cells. With this equation, the variance of the estimate can be determined. Only the statistical error in the area of the border cells must be considered. According to the laws of error propagation with independent random variables, the variance of the area estimate σ_A² is given by

$$\sigma_A^2 = 0.25\, A_c^2\, N_b\, \sigma_x^2 \qquad (2.15)$$

If we assume a compact object, for example, a square with a length of D pixels, it has D² pixels and 4D border pixels. Using σ_x ≈ 0.3 (Eq. (2.13)), the absolute and relative standard deviations of the area are approximately given by

$$\sigma_A \approx 0.3\, A_c \sqrt{D} \quad\text{and}\quad \frac{\sigma_A}{A} \approx \frac{0.3}{D^{3/2}} \quad\text{if } D \gg 1 \qquad (2.16)$$

Thus the standard deviation of the area error for an object with a length of 10 pixels is just about the area of a pixel, and the relative error is about 1 %. Equations (2.14) and (2.15) are also valid for volumetric images if the area of the elementary cell is replaced by the volume of the cell. Only the number of border cells is now different. If we again assume a compact object, for example, a cube with a length of D, we now have D³ cells in the object and 6D² border cells. Then the absolute and relative standard deviations are approximately given by

$$\sigma_V \approx 0.45\, V_c\, D \quad\text{and}\quad \frac{\sigma_V}{V} \approx \frac{0.45}{D^2} \quad\text{if } D \gg 1 \qquad (2.17)$$

Now the standard deviation of the volume for an object with a diameter of 10 pixels is about 5 times the volume of the cells, but the relative error is about 0.5 %. Note that the absolute error for volume measurements increases faster with the size of the object than for area measurements, whereas the relative error decreases faster.
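A small sketch (Python/NumPy; the function names are only illustrative) that applies the error propagation of Eq. (2.15) to a square and a cube of edge length D = 10 reproduces the orders of magnitude quoted above; note that the prefactor obtained for the cube in this way comes out somewhat smaller than the rounded value 0.45 used in Eq. (2.17), depending on how the border cells are counted:

```python
import numpy as np

SIGMA_X = 1 / np.sqrt(12)    # position error per border cell in units of the cell size, Eq. (2.13)

def area_error(D, A_c=1.0):
    """Absolute and relative standard deviation of the area of a square
    object with edge length D pixels, Eqs. (2.14)/(2.15)."""
    N, N_b = D**2, 4 * D                          # object cells and border cells
    sigma_A = 0.5 * A_c * np.sqrt(N_b) * SIGMA_X  # Eq. (2.15)
    return sigma_A, sigma_A / (A_c * N)

def volume_error(D, V_c=1.0):
    """Same error propagation for a cubic object in a volumetric image."""
    N, N_b = D**3, 6 * D**2
    sigma_V = 0.5 * V_c * np.sqrt(N_b) * SIGMA_X
    return sigma_V, sigma_V / (V_c * N)

print(area_error(10))    # about 0.9 pixel areas absolute, about 1 % relative
print(volume_error(10))  # a few voxel volumes absolute, well below 1 % relative
```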

The computations for the error of the center of gravity are quite similar. With the same assumptions about the segmentation process, an unbiased estimate of the center of gravity is given by

$$\boldsymbol{x}_g = \frac{1}{N - 0.5\,N_b} \left( \sum_{n \,\in\, \text{interior}} \boldsymbol{x}_n \;+\; 0.5 \sum_{n \,\in\, \text{border}} \boldsymbol{x}_n \right) \qquad (2.18)$$

Again the border pixels are counted only half. As the first part of the estimate with the nonborder pixels is exact, errors are caused only by the variation in the area of the border pixels. Therefore the variance of the estimate for each component of the center of gravity is given by

$$\sigma_g^2 = \frac{N_b}{4 N^2}\, \sigma_x^2 \qquad (2.19)$$

where σ_x² is again the variance in the position of the fractional cells at the border of the object. Thus the standard deviation of the center of gravity for a compact object with a diameter of D pixels is

$$\sigma_g \approx \frac{0.3}{D^{3/2}} \quad\text{if } D \gg 1 \qquad (2.20)$$

Thus the standard deviation for the center of gravity of an object with 10 pixel diameter is only about 0.01 pixel. For a volumetric object with a diameter of D pixels, the standard deviation becomes

$$\sigma_{g,V} \approx \frac{0.45}{D^2} \quad\text{if } D \gg 1 \qquad (2.21)$$


Figure 2.7:Steps from a continuous to a discrete signal.

This result clearly shows that the position of objects, and all related geometrical quantities such as distances, can be determined even with binary images (segmented objects) with an accuracy well into the range of 1/100 pixel. It is interesting that the relative errors for the area and volume estimates in Eqs. (2.16) and (2.17) are equal to the standard deviations of the center of gravity in Eqs. (2.20) and (2.21). Note that only the statistical error has been discussed. A bias in the segmentation might easily result in much higher systematic errors.
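The sub-pixel accuracy of the center of gravity of a segmented object can also be checked by simulation. The sketch below (Python/NumPy; the disk radius, grid size, and number of trials are arbitrary assumptions) shifts a binary disk by random sub-pixel offsets, computes the center of gravity of the segmented pixels, and reports the standard deviation of the resulting position error:

```python
import numpy as np

rng = np.random.default_rng(1)
R, grid, trials = 5.0, 32, 2000          # disk radius (pixels), image size, number of trials
yy, xx = np.mgrid[0:grid, 0:grid]        # pixel center coordinates

errors = []
for _ in range(trials):
    cx = grid / 2 + rng.uniform(-0.5, 0.5)        # true sub-pixel center of the disk
    cy = grid / 2 + rng.uniform(-0.5, 0.5)
    mask = (xx - cx)**2 + (yy - cy)**2 <= R**2    # binary segmentation of the disk
    errors.append((xx[mask].mean() - cx, yy[mask].mean() - cy))

errors = np.asarray(errors)
print(errors.std(axis=0))   # per-component error of the center of gravity, typically a few 1/100 pixel
```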

2.4 Relation between continuous and discrete signals

A continuous function g(q) is a useful mathematical description of a signal as discussed in Section 2.2. Real-world signals, however, can only be represented and processed as discrete or digital signals. Therefore a detailed knowledge of the relation between these two types of signals is required. It is not only necessary to understand the whole chain of the image formation process from a continuous spatial radiance distribution to a digital image but also to perform subpixel-accurate image interpolation (Chapter 8) and warping of images (Chapter 9) as it is, for example, required for multiscale image operations (Chapter 14).

The chain of processes that leads from the “true” signal to the digital signal includes all the steps of the image formation process as illustrated in Fig. 2.7. First the signal of interest, s(x), such as the reflectivity, temperature, etc. of an object, is related to the radiance L(x) emitted by the object by a generally nonlinear function (Volume 1, Chapter 3).

In some cases this relation is linear (e.g., reflectivity), in others it is highly nonlinear (e.g., temperature). Often other parameters that are not controlled or not even known influence the signal as well. As an example, the radiance of an object is the product of its reflectivity and the irradiance. Moreover, the radiance of the beam from the object to the camera may be attenuated by absorption or scattering of radiation (Volume 1, Section 3.4.1). Thus the radiance of the object may vary with many other unknown parameters until it finally reaches the radiation-collecting system (optics).

The optical system generates an irradiance E(x) at the image plane that is proportional to the object radiance (Volume 1, Chapter 5). There is, however, no point-to-point correspondence between the two, because the resolution of the optical system is limited by physical effects (e.g., diffraction) and by imperfections of the optical system (various aberrations, Volume 1, Section 4.5). This blurring of the signal is described by the point spread function (PSF) of the optical system and, in the Fourier domain, by the optical transfer function. The nonzero area of the individual sensor elements of the sensor array (or the scanning mechanism) results in a further spatial and temporal blurring of the irradiance at the image plane.
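As a small numerical illustration of this relation (a sketch assuming a simple Gaussian PSF; no particular optical system is implied), the optical transfer function can be obtained as the Fourier transform of the point spread function:

```python
import numpy as np

# 1-D Gaussian PSF sampled on a fine grid (assumed shape, for illustration only)
x = np.linspace(-16, 16, 257)
sigma = 2.0
psf = np.exp(-x**2 / (2 * sigma**2))
psf /= psf.sum()                          # normalize the PSF to unit area

# The OTF is the Fourier transform of the PSF; for a Gaussian PSF it is again
# Gaussian and falls off toward higher spatial frequencies (blurring = low-pass).
otf = np.abs(np.fft.fftshift(np.fft.fft(np.fft.ifftshift(psf))))
print(otf.max(), otf.min())               # response 1 at zero frequency, strongly attenuated at high frequencies
```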

The conversion to an electrical signal U adds noise and possibly further nonlinearities to the signal g(x, t) that is finally measured. In a last step, the analog electrical signal is converted by an analog-to-digital converter (ADC) into digital numbers. The basic relation between continuous and digital signals is established by the sampling theorem. It describes the effects of spatial and temporal sampling on continuous signals and thus also tells us how to reconstruct a continuous signal from its samples. The discretization of the amplitudes of the signal (quantization) is discussed in Section 2.5.

The image formation process itself thus includes two essential steps. First, the whole image formation process blurs the signal. Second, the continuous signal at the image plane is sampled. Although both processes often happen together, they can be separated for an easier mathematical treatment.
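These two steps are easy to mimic numerically. The following sketch (Python with NumPy and SciPy; the Gaussian PSF width and the sampling factor are arbitrary assumptions, not values from the text) first blurs a finely resolved signal with a point spread function and then samples the blurred signal on a coarse lattice:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

# "Continuous" signal approximated on a fine grid
x = np.linspace(0.0, 1.0, 1000)
g = np.sin(2 * np.pi * 5 * x) + 0.5 * np.sin(2 * np.pi * 12 * x)

# Step 1: blurring by the PSF of the optics and the sensor aperture (here: Gaussian)
g_blurred = gaussian_filter1d(g, sigma=8.0, mode='nearest')

# Step 2: sampling the blurred signal on a coarse lattice (every 25th grid point)
step = 25
x_sampled = x[::step]
g_sampled = g_blurred[::step]

print(len(x_sampled), g_sampled[:5])   # 40 samples represent the blurred signal
```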