
In the document L’Université de La Rochelle (Pages 48-54)

Color Processing

3.3 Color Restoration

Ancient map archives represent an important part of our collective memory. In the introduction, we described the difficulties of analyzing ancient documents that have degraded over time because of usage conditions or storage environment. A real need for image restoration has therefore emerged. As a pre-process, a faded color correction [Chambah 2000] was applied to bring colors back toward their original state, or at least to recover their significance. It works automatically by increasing the color saturation of washed-out pigments non-uniformly, without affecting the dominant color.

Here, we present some advances in automating the color fading restoration process, in particular the automatic color correction technique. First, let us illustrate the particularities of our images.


Figure 3.1: Color is a triple that depends on the source light (illuminant), the object, and the sensor.

3.3.1 Color illuminant

Color vision is the capacity of an organism or machine to distinguish objects based on the wavelengths (or frequencies) of the light they reflect, emit, or transmit. The nervous system derives color by comparing the responses to light from the several types of cone photoreceptors in the eye. These cone photoreceptors are sensitive to different portions of the visible spectrum. For humans, the visible spectrum ranges approximately from 380 to 740 nm, and there are normally three types of cones.

What we see depends on the triple composed of the object, the source light it reflects (called the illuminant), and the sensor (our eyes), as shown in figure 3.1. An object may be viewed under various conditions: it may be illuminated by sunlight, the light of a fire, or a harsh electric light. In all of these situations, human vision does not perceive the object as having the same color; an apple does not appear the same red at sunrise as it does at midday. In our case, the map sheets were digitized by a commercial scanner using a cool-white fluorescent light (standard illuminant reference: F2).

3.3.2 Image characteristics

Cadastral maps have some important characteristics that we want to identify, and our goal was to analyze the color properties of our map collection. Since a complete analysis of every image would have been too time-consuming, we randomly picked 50 images and performed a color analysis on each of them. From this smaller image set, we report the basic properties of a single image that is representative of our problem; this keeps the discussion clearer while remaining valid and likely extensible to the rest of the corpus. In the rest of the chapter we focus on the image presented in figure 3.2. Our first test was to visualize the color distribution in the RGB space. Figure 3.3 illustrates how color pixels are spread in the RGB cube. A first observation from this experiment is the alignment of color points along the "gray" axis, the straight line of equation x = y = z. Secondly, although the "gray" axis gives the main direction, the color cloud tends to become wider as the RGB values get higher (r > 200, g > 200, b > 200). This denotes a greater variability where colors are under-saturated. Saturation describes the intensity, or "purity", of a color: a highly saturated image has vivid colors, while lower saturation means less pure, more washed-out colors, as in pastels. Our next step was to visualize a color histogram of the test image. Figure 3.4 shows the occurrence of colors in the RGB space; the color space was discretized to compute a 3D histogram.

The discretization is a simple quantization step: the RGB cube is divided into 100 smaller cubes, and the number of pixels falling in each cube is counted. From this histogram, we observe that most of the information is concentrated in a sphere; the center of the sphere is located near r ≈ 220, g ≈ 220, b ≈ 220, with a radius of less than 50. Our last attempt to characterize our images uses a conventional statistical framework: a Principal Component Analysis (PCA) was carried out. Let X be the color vector of a given pixel:
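The quantization step above can be sketched as follows, a minimal illustration with NumPy. The exact sub-cube split is not specified in the text (100 sub-cubes do not divide evenly per axis), so a bin count of 5 per axis (125 sub-cubes) is assumed here:

```python
import numpy as np

def rgb_histogram_3d(image, bins=5):
    """Quantize the RGB cube into bins**3 sub-cubes and count the pixels
    falling in each one. `image` is an (H, W, 3) uint8 array; bins=5 is an
    assumption standing in for the chapter's ~100 sub-cubes."""
    pixels = image.reshape(-1, 3).astype(float)
    hist, _ = np.histogramdd(pixels, bins=bins, range=[(0, 256)] * 3)
    return hist

# Synthetic test image clustered near r, g, b ~ 220, mimicking the
# under-saturated sphere observed in figure 3.4.
rng = np.random.default_rng(0)
img = rng.normal(220, 15, size=(64, 64, 3)).clip(0, 255).astype(np.uint8)
hist = rgb_histogram_3d(img)
print(hist.shape)   # (5, 5, 5)
print(hist.sum())   # 4096 -- every pixel falls in exactly one sub-cube
```

The cube containing the histogram's maximum then gives the approximate center of the color sphere.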

X = (R, G, B)^T

For an image with N pixels, the covariance matrix can be written as follows:

C =
        R          G          B
R   cov(R)     cov(R,G)   cov(R,B)
G   cov(G,R)   cov(G)     cov(G,B)
B   cov(B,R)   cov(B,G)   cov(B)

where cov(·,·) is the covariance between two variables. For example, the covariance between R and G can be expressed as follows:

θ_{R,G} = cov(R, G) = (1/N) Σ_{i=1}^{N} (X_i^R − µ_R)(X_i^G − µ_G)

where X_i^R denotes the R component of the i-th pixel, µ is the mean vector, and µ_R the R component of the mean vector:

µ_R = (1/N) Σ_{i=1}^{N} X_i^R
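Under these definitions, the mean vector and covariance matrix can be computed directly with NumPy (a minimal sketch; the centering and the 1/N normalization follow the formulas above):

```python
import numpy as np

def color_covariance(image):
    """Per-channel mean vector mu and 3x3 covariance matrix C of an
    (H, W, 3) image, matching the chapter's definition
    C[a, b] = (1/N) * sum_i (X_i^a - mu_a)(X_i^b - mu_b)."""
    X = image.reshape(-1, 3).astype(float)   # N x 3 matrix of color vectors
    mu = X.mean(axis=0)                      # (mu_R, mu_G, mu_B)
    D = X - mu                               # centered data
    C = (D.T @ D) / X.shape[0]               # 1/N normalization as in the text
    return mu, C

rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(32, 32, 3), dtype=np.uint8)
mu, C = color_covariance(img)
print(mu.shape, C.shape)                     # (3,) (3, 3)
print(np.allclose(C, C.T))                   # True -- C is symmetric
```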

To find the eigenvectors and eigenvalues of the covariance matrix, we compute the matrix V of eigenvectors which diagonalizes the covariance matrix C:

V⁻¹CV = D

where D is the diagonal matrix of eigenvalues of C. The matrix D takes the form of an M × M diagonal matrix (here M = 3), where

D[p, q] = λ_m for p = q = m,

λ_m is the m-th eigenvalue of the covariance matrix C, and D[p, q] = 0 for p ≠ q.

V =
       V0       V1       V2
R   0.6126  -0.5664  -0.5513
G   0.5895  -0.1373   0.7960
B  -0.5266   0.8126  -0.2498

Table 3.1: Eigenvectors for the test image.

Figure 3.2: A representative image of our problem. The map sheet was digitized by a commercial scanner using a cool-white fluorescent light (standard illuminant reference: F2).

The matrix V, also of dimension M × M, contains M column vectors, each of length M, which represent the M eigenvectors of the covariance matrix C. The eigenvalues and eigenvectors are ordered and paired: the m-th eigenvalue corresponds to the m-th eigenvector. The eigenvectors are reported in table 3.1. The eigenvalues represent the distribution of the source data's energy among the eigenvectors, which form a basis for the data. The first axis (V0) explains 75.78% of the information, while the cumulative inertia of the first two components reaches 99%. This color analysis leads to some remarkable observations. Firstly, in the RGB space, colors are distributed along the "gray" axis while being spread all around it. Secondly, most of our pixels can be delineated by a sphere located in the under-saturated area of the RGB cube. Finally, the PCA reveals a high variability of the data, since two axes are required to explain a significant share of the information contained in the image. Building on this last observation, we describe a color restoration method based on PCA.
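The eigendecomposition and the explained-inertia shares can be sketched as follows (the numeric covariance matrix used here is hypothetical, since the chapter does not report C for the test image):

```python
import numpy as np

def pca_axes(C):
    """Eigendecomposition of the covariance matrix C (V^-1 C V = D) and the
    share of inertia each factorial axis explains, sorted descending."""
    eigvals, V = np.linalg.eigh(C)           # eigh: C is symmetric
    order = np.argsort(eigvals)[::-1]        # largest eigenvalue first
    eigvals, V = eigvals[order], V[:, order]
    explained = eigvals / eigvals.sum()      # e.g. ~0.7578 for V0 in the text
    return V, eigvals, explained

# Hypothetical covariance matrix with one dominant direction, standing in
# for the test image's C.
C = np.array([[9.0, 2.0, 1.0],
              [2.0, 3.0, 0.5],
              [1.0, 0.5, 1.0]])
V, eigvals, explained = pca_axes(C)
# V diagonalizes C; since V is orthonormal, V^-1 = V^T.
print(np.allclose(V.T @ C @ V, np.diag(eigvals)))   # True
print(explained)                                     # descending, sums to 1
```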

(a) (b)

Figure 3.3: Color pixel distribution in the RGB cube.

(a) (b)

Figure 3.4: Color pixel histogram in the RGB cube.


3.3.3 Color enhancement based on PCA

Let Y be the data expressed in an independent axis system:

Y = V(X − µ)

where:

• V is the matrix of eigenvectors of the covariance matrix.

• µ is the mean vector.

Let Y′ be the data stretched along the directions of the main factorial axes:

Y′ = KY

where K = diag(k1, k2, k3). The restoration matrix is given as follows:

M = V⁻¹KV

Let X′ be the vector containing the restored values:

X′ = M(X − µ) + µ

The parameters k1, k2, k3 are calculated automatically. We want to extend the dynamic range of the factorial axes as much as possible, but if the parameters are pushed too high they cause the color primaries to clip (X′_{R,G,B} > 255) and create false colors. To avoid this situation, we increase the parameters iteratively and cautiously until the upper bound (255) is reached: for a given set of parameters k, we verify that no restored RGB value is above 255. The problem is formulated in the following equation 3.1. An example is presented in figure 3.5: the restored image looks visually more saturated, and colors appear warmer and more intense.
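The whole restoration loop can be sketched as follows. This is an assumed reading of the procedure: a single uniform gain on all three axes and the k_step/k_max schedule are simplifications, since the chapter computes k1, k2, k3 automatically but does not give the exact schedule:

```python
import numpy as np

def restore_colors(image, k_step=0.05, k_max=3.0):
    """Sketch of the PCA-based restoration X' = M(X - mu) + mu, with
    M = V K V^T (equal to the chapter's V^-1 K V because V is orthonormal).
    The gain is raised iteratively and the loop stops before any restored
    value leaves the valid range."""
    X = image.reshape(-1, 3).astype(float)
    mu = X.mean(axis=0)
    C = np.cov(X.T, bias=True)               # 3x3 covariance, 1/N normalization
    _, V = np.linalg.eigh(C)                 # columns of V are eigenvectors
    k, best = 1.0, X
    while k + k_step <= k_max:
        M = V @ np.diag([k + k_step] * 3) @ V.T
        Xp = (X - mu) @ M.T + mu
        # Peak phenomenon: the next step would clip (the text checks > 255;
        # the < 0 guard is an extra safety assumption).
        if Xp.max() > 255 or Xp.min() < 0:
            break
        k += k_step
        best = Xp
    return best.reshape(image.shape).round().clip(0, 255).astype(np.uint8)

rng = np.random.default_rng(2)
img = rng.normal(128, 8, size=(32, 32, 3)).clip(0, 255).astype(np.uint8)
out = restore_colors(img)
```

Stretching the cloud along its principal axes increases the spread of the colors, which reads as a non-uniform increase in saturation, as in figure 3.5.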

(a) Original image (b) Restored image

Figure 3.5: Image restoration by means of a non-uniform increase of saturation.
