Symmetric Logarithmic Image Processing Model, Application to Laplacian Edge Detection



HAL Id: hal-00709350

https://hal.archives-ouvertes.fr/hal-00709350v2

Preprint submitted on 26 Jun 2012

HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.


Symmetric Logarithmic Image Processing Model, Application to Laplacian Edge Detection

Laurent Navarro, Guy Courbebaisse

To cite this version:

Laurent Navarro, Guy Courbebaisse. Symmetric Logarithmic Image Processing Model, Application to Laplacian Edge Detection. 2011. ⟨hal-00709350v2⟩


Symmetric Logarithmic Image Processing Model

Application to Laplacian Edge Detection

Laurent Navarro, Guy Courbebaisse

Abstract—This paper introduces a new model for logarithmic image processing, called Symmetric Logarithmic Image Processing (SLIP), that provides an algebraic framework for the processing of transmitted light images and intensity images. The SLIP model is inspired by the previously developed Logarithmic Image Processing (LIP) model and has been built to exhibit a symmetric structure that makes it possible to deal with negative values during logarithmic processing. Structured with a combination law and an amplification law, the SLIP model defines a vector space structure on a symmetric bounded set instead of the positive cone structure that was characteristic of the LIP model. Furthermore, in the continuation of the LIP model, the SLIP model is physically consistent with transmitted light formation and human vision's brightness perception laws, and also allows the two physical entities to be unified. This article introduces the mathematical notions and operations defining the SLIP model, then explains why it is physically and psychophysically well justified, and finally the SLIP model's specificity is illustrated with a real application example.

Index Terms—Logarithmic image processing, Symmetric Logarithmic Image Processing.

I. INTRODUCTION

In this paper, a new model for logarithmic image processing is proposed. Logarithmic image processing has been widely developed in recent years. Its relevance stems from the analogy with the non-linearities of human perception or those of transmitted light images, for example. The homomorphic theory introduced by Oppenheim [1] is the starting point of logarithmic image processing models. The principle is to introduce a logarithmic homomorphic function allowing the mapping of an image into a superior algebraic structure. In 1972, Stockham [2] proposed an image enhancement method based on the homomorphic theory.

In 1988, in the first article introducing the LIP model [3], Jourlin and Pinoli presented a new algebraic structure for image processing. The main idea of this model was to provide a framework allowing the processing of transmitted light images in a bounded intensity range. The principle is to represent an image by the light filter through which it has been formed. Later, Brailean et al. [4] showed that the LIP model is also consistent with the non-linear (logarithmic) human visual system. A mathematical generalization of the LIP model, the Parameterized LIP model [5], has also been proposed by Panetta et al.; it allows interesting adaptive adjustments

L. Navarro is with the Centre Ingénierie et Santé (CIS), LPMG-UMR CNRS 5148, Ecole Nationale Supérieure des Mines de Saint-Etienne, 158 cours Fauriel, 42023 Saint-Etienne cedex 2, FRANCE. Email: navarro@emse.fr.

G. Courbebaisse is with the CREATIS, CNRS UMR 5220, Inserm U 1044, UCB Lyon 1, INSA Lyon, 7 Av. J. Capelle, 69621 Villeurbanne, FRANCE, Email: guy.courbebaisse@insa-lyon.fr.

of the model. Image processing operations have been achieved with the parameterized LIP model, such as edge detection [6] or image fusion [7].

In the LIP model, the intensity of an image is modeled by its gray tone function valued in the bounded set [0, M). The combination and amplification laws of these gray tone functions introduced by Jourlin and Pinoli [3] first defined a cone structure on the [0, M) range. In order to define a vector space structure, they further extended the [0, M) range to (−∞, M). However, the problem with the LIP model's negative extension is that the LIP operations are not physically justified on the negative part. In addition, in many cases of image processing, the operators deal with negative parts. Ultrasound medical imaging or color image processing based on the color opponent process theory are specific examples of images containing negative parts. The negative part is also a problem in image processing operations such as the wavelet transform or Laplacian edge detection. Some authors have presented models that can deal with the negative parts of images. Patrascu et al. [8] elaborated a homomorphic logarithmic image processing model defined on the (−1, 1) set. Florea et al. [9] also extended the pseudo-logarithmic model, on the basis of Vertan et al.'s model [10], in order to obtain a vector space structure on (−1, 1). Shvaytser et al. [11] proposed another way to treat negative values; this model has been used in [12]. It will not be detailed in this article because it is not a mathematically symmetric model, as it is defined in the [0, M) range.

The original Symmetric Logarithmic Image Processing (SLIP) model proposed in this paper is defined in the continuation of the Logarithmic Image Processing (LIP) model and extends the cone space structure of the LIP model to a vector space structure by introducing an odd isomorphism.

The present paper is organized as follows: Section 2 briefly presents the LIP model and the existing symmetric logarithmic image processing models. Section 3 introduces the proposed Symmetric Logarithmic Image Processing (SLIP) model, which preserves the (−M, M) symmetric set for all operations and is still consistent with the human visual system. Finally, Section 4 shows the SLIP model's advantages on a real application example, a Laplacian edge detection.

II. CLASSICAL LIP MODEL AND EXISTING SYMMETRIC LOGARITHMIC MODELS

A. Mathematical considerations

A linear vector space S over a field K, generally R or C, is equipped with a vector addition + and a scalar multiplication × of each element of S by each element of K on the left, these operations satisfying a number of classical properties. A positive linear cone S+ over K+, generally R+ or C+, is equipped with a vector addition + and a scalar multiplication × of each element of S+ by each element of K+ on the left. K+, R+ and C+ are constituted by the positive elements of the fields K, R and C respectively. A positive linear cone has the same properties and operation rules as a linear space, so it is closed for addition and for positive scalar multiplication operations. A neutral element exists for the addition, the addition is commutative and associative, and the positive scalar multiplication also possesses a neutral element and follows the associative and distributive laws.

B. The LIP model

1) The gray tone function: In the LIP model, an image is represented by its associated gray tone function, denoted f, defined on the non-empty spatial domain D in R². The gray tone functions are valued in the bounded real number interval [0, M), where M is strictly positive, called the gray tone range. Elements of [0, M) are called the gray tones. The gray tone function f is related to the gray level function f̃ as follows:

f = M − f̃   (1)

In this approach, the intensity scale is inverted: 0 is total whiteness or transparency, and M absolute blackness or opacity. Indeed, the initial goal of the LIP model was the addition of two transmitted-light images. Physically, the addition of two transmitted-light images follows the classical transmittance law, and total blackness cannot be reached. The scale inversion is justified, as 0 corresponds to total transparency and is the neutral element for a mathematical addition in this case.

2) The structure of the LIP model: The initial aim of the LIP model was to define an additive operation closed in the bounded positive real number intensity range [0, M) [3] [13], i.e. the positive linear cone [0, M).

The addition of two gray tone functions f and g, defined on the spatial support D and valued in the real number interval [0, M), is defined as:

f △+ g = f + g − (f·g)/M   (2)

and the multiplication by a positive real scalar λ is defined as:

λ △× f = M − M·(1 − f/M)^λ   (3)

In order to extend the positive linear cone [0, M) to a vector space, Jourlin and Pinoli [3] defined the opposite △− f of a gray tone function f:

△− f = −M·f / (M − f)   (4)

and they extended the scalar multiplication to any real number. Then the subtraction between two gray tone functions f and g can be introduced:

f △− g = M·(f − g) / (M − g)   (5)

These extended operations enlarge the gray tone range from [0, M) to (−∞, M). The set of gray tone functions valued in this range is related to R through the fundamental isomorphism [13] defined as:

ψ(f) = −M·ln((M − f) / M)   (6)

and

ψ⁻¹(f) = M·(1 − e^(−f/M))   (7)

However, the set on which the vector space is defined is asymmetric. The LIP operations are bounded in the positive part [0, M) and unbounded in the negative part (−∞, 0]. Thus, although it is mathematically consistent, the LIP model distorts negative-value information in real applications (see Section 4).
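As a purely illustrative sketch (not part of the original formulation), the LIP operations (2)-(5) and the isomorphism pair (6)-(7) can be written with NumPy; the bound M = 256 and array-valued gray tone inputs in [0, M) are assumptions of this illustration.

import numpy as np

M = 256.0  # assumed gray tone bound (2**8, as in the digital case of Section IV)

def lip_add(f, g):
    # LIP addition (2), closed in [0, M)
    return f + g - f * g / M

def lip_mult(lam, f):
    # LIP multiplication by a real scalar (3)
    return M - M * (1.0 - f / M) ** lam

def lip_opposite(f):
    # LIP opposite (4)
    return -M * f / (M - f)

def lip_sub(f, g):
    # LIP subtraction (5)
    return M * (f - g) / (M - g)

def lip_psi(f):
    # fundamental isomorphism (6)
    return -M * np.log((M - f) / M)

def lip_psi_inv(f):
    # inverse isomorphism (7)
    return M * (1.0 - np.exp(-f / M))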

C. Other symmetric logarithmic models

1) Homomorphic-LIP (HLIP) model: The Homomorphic-LIP's main idea was purely mathematical. Patrascu et al. [8] noticed that during the processing of an image, the mathematical operations concerning the real functions use the real number algebra, so results are spread over the whole real axis. The problems appear at the end of the processing, when it is necessary to truncate the results in order to represent them on a bounded range.

Thus, in the HLIP model, gray functions are valued in the symmetric set (−1, 1).

The addition of two gray levels f and g is defined as:

f <+> g = (f + g) / (1 + f·g)   (8)

and the multiplication by a scalar λ is defined as:

λ <×> f = [(1 + f)^λ − (1 − f)^λ] / [(1 + f)^λ + (1 − f)^λ]   (9)

The space of gray functions structured with the addition <+> and the multiplication by a real scalar <×> defines a real vector space. The fundamental isomorphism between the space of gray functions valued in (−1, 1) and the classical vector space defined in R is expressed as:

ψ(f) = (1/2)·ln((1 + f) / (1 − f))   (10)

and

ψ⁻¹(f) = (e^f − e^(−f)) / (e^f + e^(−f))   (11)

The HLIP model has no real physical or physiological justification, but the results are bounded in the (−1, 1) set.
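Under the same purely illustrative assumptions (NumPy arrays valued strictly inside (−1, 1)), the HLIP operations (8)-(11) read:

import numpy as np

def hlip_add(f, g):
    # HLIP addition (8)
    return (f + g) / (1.0 + f * g)

def hlip_mult(lam, f):
    # HLIP multiplication by a real scalar (9)
    a = (1.0 + f) ** lam
    b = (1.0 - f) ** lam
    return (a - b) / (a + b)

def hlip_psi(f):
    # HLIP fundamental isomorphism (10)
    return 0.5 * np.log((1.0 + f) / (1.0 - f))

def hlip_psi_inv(f):
    # inverse isomorphism (11); equals tanh(f)
    return np.tanh(f)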

2) Symmetric Pseudo-LIP model: The Pseudo-LIP model has been introduced by Vertan et al. [10] in 2009. They proposed the use of a logarithmic-like image processing model with gray tones valued in the [0, 1) range.

The addition of two gray levels f and g is defined as:

f ⊕ g = (f + g − 2·f·g) / (1 − f·g)   (12)

and the multiplication by a positive real scalar λ is defined as:

λ ⊗ f = λ·f / (1 + (λ − 1)·f)   (13)

The fundamental isomorphism between the space of gray levels valued in [0, 1) and the classical cone space valued in [0, ∞) is defined as:

ψ(f) = f / (1 − f)   (14)

and

ψ⁻¹(f) = f / (1 + f)   (15)

The symmetric Pseudo-LIP model has been introduced by Florea et al. [9] in order to achieve the extension to a vector space structure. The aim of this model is comparable to that of the HLIP, as the results are bounded on the (−1, 1) range. The fundamental isomorphism between the space of gray levels valued in (−1, 1) and the classical vector space valued in (−∞, ∞) is defined as:

ψ(f) = f / (1 − |f|)   (16)

and

ψ⁻¹(f) = f / (1 + |f|)   (17)
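Likewise, a short sketch (illustration only, same assumptions) of the Pseudo-LIP addition (12), its isomorphism pair (14)-(15), and the symmetric Pseudo-LIP isomorphism pair (16)-(17):

import numpy as np

def plip_add(f, g):
    # Pseudo-LIP addition (12), for gray tones in [0, 1)
    return (f + g - 2.0 * f * g) / (1.0 - f * g)

def plip_psi(f):
    # Pseudo-LIP isomorphism (14)
    return f / (1.0 - f)

def plip_psi_inv(f):
    # inverse isomorphism (15)
    return f / (1.0 + f)

def splip_psi(f):
    # symmetric Pseudo-LIP isomorphism (16), for gray levels in (-1, 1)
    return f / (1.0 - np.abs(f))

def splip_psi_inv(f):
    # inverse isomorphism (17)
    return f / (1.0 + np.abs(f))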

III. FUNDAMENTALS OF THE SYMMETRIC LOGARITHMIC IMAGE PROCESSING MODEL

A. Definitions

This section presents the concepts needed in the subsequent sections of this paper: the signum and absolute value functions and their properties. These functions belong to the framework of mathematical distributions [14].

Distributions (generalized functions) are mathematical objects that generalize the notions of function and measure. Distribution theory extends the concept of differentiation to all locally integrable functions [14].

The absolute value function is defined as:

|x| =  x    if x ≥ 0
      −x    if x < 0        (18)

Its derivative, defined for non-zero real numbers as:

sgn(x) = d|x|/dx,  for x ≠ 0   (19)

is the signum function, defined on R as:

sgn(x) = −1 if x < 0,  0 if x = 0,  1 if x > 0   (20)

B. The space of gray levels

In the SLIP model, an image is represented by its associated gray level function, denoted f, defined on the non-empty spatial domain D in R². The gray level functions are valued in the bounded symmetric real number interval (−M, M), where M is strictly positive, called the gray level range. Elements of (−M, M) are called the gray levels. M represents the maximum light intensity and −M the total light absorption. These assumptions are detailed further in subsection III-E.

Fig. 1. Comparison diagram of the LIP and SLIP models. The two functions have the same behavior for positive values, but for negative values the LIP model is not symmetric.

C. The fundamental isomorphism

The SLIP model has been built by creating an odd isomorphism, inspired by the LIP model isomorphism, in order to obtain a model which has the same behavior for positive and negative values. The SLIP fundamental isomorphism is defined as:

ψ(f) = −M·sgn(f)·ln((M − |f|) / M)   (21)

and

ψ⁻¹(f) = M·sgn(f)·(1 − e^(−|f|/M))   (22)

which must be understood in the sense of distributions. A comparison of the LIP and SLIP model isomorphisms shows that the SLIP model is symmetric and well adapted to negative values, whereas the LIP model, if extended beyond 0, presents a non-logarithmic behavior for negative values (fig. 1).
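A minimal numerical sketch of the SLIP isomorphism pair (21)-(22), assuming NumPy arrays valued in (−M, M) and taking M = 256 as in the digital case of Section IV (illustration only):

import numpy as np

M = 256.0  # assumed gray level bound

def slip_psi(f):
    # SLIP fundamental isomorphism (21), applied pointwise
    return -M * np.sign(f) * np.log((M - np.abs(f)) / M)

def slip_psi_inv(f):
    # inverse isomorphism (22)
    return M * np.sign(f) * (1.0 - np.exp(-np.abs(f) / M))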

D. The vectorial operations

In order to define a vector space structure, addition and multiplication by a real scalar are introduced below.

1) The addition: The addition of two gray levels f and g is defined as:

f △+ g = M·sgn(f + g)·[1 − (1 − |f|/M)^γ1 · (1 − |g|/M)^γ2]   (23)

with:

γ1 = sgn(f)·sgn(f + g);  γ2 = sgn(g)·sgn(f + g)   (24)

This can easily be proved using the following property:

|x| = x·sgn(x)   (25)

2) The multiplication by a real scalar: The multiplication of a gray level f by a real scalar λ is defined as:

λ △× f = M·sgn(λ·f)·[1 − (1 − |f|/M)^|λ|]   (26)

The set of gray level functions valued on the symmetric range (−M, M), structured with addition and multiplication by a real scalar, defines a real vector space.
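Continuing the illustrative sketch (M, slip_psi and slip_psi_inv as assumed above), the SLIP addition (23)-(24) and scalar multiplication (26) can be coded directly; the addition is equivalent to slip_psi_inv(slip_psi(f) + slip_psi(g)):

import numpy as np

def slip_add(f, g):
    # SLIP addition (23) with the exponents of (24); closed in (-M, M)
    s = np.sign(f + g)
    g1 = np.sign(f) * s
    g2 = np.sign(g) * s
    return M * s * (1.0 - (1.0 - np.abs(f) / M) ** g1 * (1.0 - np.abs(g) / M) ** g2)

def slip_mult(lam, f):
    # SLIP multiplication by a real scalar (26)
    return M * np.sign(lam * f) * (1.0 - (1.0 - np.abs(f) / M) ** np.abs(lam))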

E. Physical and Physiological considerations

In this subsection, it is shown that the SLIP model makes it possible to unify the physical and psychophysical properties of the LIP model by using the model's symmetry.

In the classical LIP model, a scale inversion of the gray tone range [0, M) is performed, so white values are represented by 0 and black ones by M. This is completely justified for transmitted light images [15], as the value 0 corresponds to total transparency and M to total darkness, which cannot be reached. In this case, images are considered as images resulting from light passed through a filter. In the context of human visual perception, the justification of Pinoli [16] for this inversion rests on the bio-electrical intensity delivered by the retinal stage of the eyes. In fact, Baylor et al. established in 1979 [17] [18] that an increase of the incident light intensity produces a decrease of the bio-electrical intensity of the eye. The problem with this assumption concerning the LIP model is that the "glare limit" of the eye is reached for white values (high intensities), so the logarithmic behavior of the human visual system is located in the white values, and it seems that the inversion is not relevant in the context of human vision. When using the LIP model, it is necessary to consider two application cases: transmitted light images, where a scale inversion is generally performed, and reflected light images, where the LIP operations can be performed directly on image gray levels.

In the SLIP model, in order to unify the two physical entities, the positive part [0, M) of the interval (−M, M) is dedicated to reflected light images and the negative part (−M, 0] is dedicated to light intensity filters. The negative part (−M, 0] of the SLIP model is thus equivalent to the positive part [0, M) of the LIP model. Indeed, a light intensity filter can mathematically be seen as a negative image. With the SLIP model, a reflected light image can be processed directly without any transformation, as it is defined on the [0, M) range. On the other hand, in order to process a transmitted light image f̃ with the SLIP model, it is necessary to drag it along the isomorphism, by subtracting the illumination light M to transform it into its corresponding light intensity filter f (f becomes entirely negative):

f = f̃ − M   (27)

This property is important, because images of different types can be processed together within the SLIP framework. The result of a reflected image passed through a light intensity filter can be calculated; for example, an eye viewing a scene through a slide, as shown in fig. 2. In this example, the filter consists of three horizontal bands of 25% light absorption, i.e. a value of −M/4 (non-absorptive bands have a value of 0), and it is represented by the negative image F whose values lie in the (−M/4, 0] range. It is also important to note that the 0 bound represents both the absence of absorption and the limit of light detection of the eye, and this is where the connection between reflection images and filters is made. The resulting image I′ is the classical Lena image I (fig. 2) viewed through the filter F. This is directly performed with a SLIP addition:

I′ = I △+ F   (28)

Fig. 2. Original Lena image (left) and Lena image viewed through a light intensity filter (right). The filter consists of three horizontal bands of 25% light absorption.
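As a sketch of the situation of fig. 2 (the image, band positions and sizes below are hypothetical stand-ins, and slip_add and M come from the sketches above), the filtered image of eq. (28) could be obtained as:

import numpy as np

M = 256.0
rng = np.random.default_rng(0)
I = rng.uniform(0.0, M, size=(512, 512))   # hypothetical stand-in for a reflected light image

# Light intensity filter F: three horizontal bands absorbing 25% of the light,
# i.e. value -M/4, and 0 (no absorption) elsewhere -- a negative image, cf. eq. (27).
F = np.zeros_like(I)
for top in (100, 250, 400):
    F[top:top + 50, :] = -M / 4.0

I_filtered = slip_add(I, F)   # eq. (28): I' is I SLIP-added with F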

IV. EXPERIMENTAL RESULTS ON LAPLACIAN EDGE DETECTION

A. Laplacian operator

This section aims at confirming the advantages of the SLIP model in dealing with negative parts during processing. The Laplace operator is a good, simple example, as its results run on either side of zero, with negative values. However, more sophisticated operators, such as the Fourier transform, or any operator using a subtraction operation, could benefit from the fact that results are bounded. Gray-level mathematical morphology could also take advantage of the SLIP model, as it widely uses subtraction.

A LIP-Laplacian operator has been developed by Deng and Pinoli [19] in order to perform differentiation-based edge detection:

lap△ f = (1/8) △× ((f1 △+ f3 △+ f7 △+ f9) △+ 2 △× (f2 △+ f4 △+ f6 △+ f8) △− 12 △× f5)   (29)

where f1, ..., f9 denote the gray tones of the 3 × 3 neighborhood. However, this Laplacian-like operator has been built so that there is no negative part in the results. The Laplace operator used in this section is the standard Laplace operator, or Laplacian, used in image processing. It is a second-order differential operator defined as the divergence of the gradient.

In the discrete image processing case, the Laplacian of an image can be calculated using a convolution of the considered image by a Laplacian kernel. The standard V-8 Laplacian kernel for image Laplacian computation is:

        | 1   1   1 |
    k = | 1  -8   1 |   (30)
        | 1   1   1 |


Then the Laplacian filtering is performed using the convolution operation, with I an image of size M × N to be processed and k the kernel of size 3 × 3:

O(i, j) = I(i, j) ∗ k   (31)

This expression is numerically equivalent to:

O(i, j) = Σ_{k=1..3} Σ_{l=1..3} I(i + k − 1, j + l − 1)·k(k, l)   (32)

where i = 1..M − m + 1 and j = 1..N − n + 1, m × n being the kernel size.

The calculus of the SLIP Laplacian filter is:

O(i, j) = ψ⁻¹(ψ(I(i, j)) ∗ k) = I(i, j) △∗ k   (33)
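A sketch of the SLIP Laplacian of eq. (33), assuming SciPy is available and reusing the slip_psi / slip_psi_inv sketches above; the 'nearest' boundary handling is a choice of this illustration, whereas eq. (32) uses valid-style indexing:

import numpy as np
from scipy.ndimage import convolve

M = 256.0  # assumed gray level bound

# V-8 Laplacian kernel (30)
k = np.array([[1.0,  1.0, 1.0],
              [1.0, -8.0, 1.0],
              [1.0,  1.0, 1.0]])

def slip_laplacian(I):
    # eq. (33): map to the isomorphic space, apply the linear convolution (31),
    # then map back so the output remains in the symmetric range (-M, M)
    return slip_psi_inv(convolve(slip_psi(I), k, mode='nearest'))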

CLIP (classical linear image processing) and SLIP Laplacian resulting images are shown in fig. 3. The LIP model fails in computing the standard Laplacian because the negative values are not bounded. These negative values reach such low values that the resulting image is distorted and appears white when normalized in the (0, 1) range.

Fig. 3. Lena image filtered with the CLIP (left) and SLIP (right) V-8 Laplacian filters. Images have been normalized in the (0, 1) range.

The maximum and minimum values of the Laplacian resulting images of Lena computed with the CLIP, LIP and SLIP models are indicated in Table I. In the digital case, M is equal to 2^8 = 256. These results confirm that the SLIP model operators are bounded in the (−M, M) range.

TABLE I
Comparative table of minimum and maximum values of the Laplacian images computed with the CLIP, LIP and SLIP models.

Model   min            max
CLIP    -810           622
SLIP    -254.9748      254.6712
LIP     -2.5831e+006   254.6712

B. Laplacian edge detection

A comparison between the CLIP, SLIP, HLIP and Pseudo-LIP models shows that the differences in the fundamental isomorphisms of each model lead to different Laplacian results. In this subsection, segmentation has been performed using the classical Otsu method [20] in order to find the right threshold to segment each Laplacian image, as summarized in Table II. The histograms of the Laplacian images are bimodal, so the Otsu method is appropriate. Results of the segmentations are shown in fig. 4. The CLIP Laplacian segmentation doesn't exhibit any effective detection in the image. Then, it appears clearly that there is an increase in the level of detection for each model, due to the different thresholds automatically found with the Otsu method. The HLIP Laplacian segmentation fails to detect all edges, as the thresholded image doesn't exhibit all reflections of Lena's hat in the mirror. On the contrary, the Pseudo-LIP Laplacian segmentation exhibits a lot of noise. The SLIP Laplacian segmentation image is a compromise between edge and noise detection and will be more usable for binary mathematical morphology operations.
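The thresholding step could be sketched as follows (assuming scikit-image for the Otsu threshold and the slip_laplacian sketch above; the (0, 1) normalization mirrors the one used for Table II):

import numpy as np
from skimage.filters import threshold_otsu

def normalize01(x):
    # rescale a Laplacian image into the (0, 1) range before thresholding
    return (x - x.min()) / (x.max() - x.min())

lap = normalize01(slip_laplacian(I))   # I: gray level image, slip_laplacian as sketched above
t = threshold_otsu(lap)                # automatic Otsu threshold (cf. Table II)
edges = lap > t                        # binary edge map, as in fig. 4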

Fig. 4. CLIP (top left), SLIP (top right), HLIP (bottom left) and Pseudo-LIP (bottom right) Laplacians of the Lena image, thresholded with the Otsu method.

TABLE II
Comparative table of threshold values of the Laplacian images processed with the Otsu method for the CLIP, SLIP, HLIP and Pseudo-LIP models.

Model        Threshold value
CLIP         0.4706
SLIP         0.4118
HLIP         0.3725
Pseudo-LIP   0.4980

V. CONCLUSION

In this paper, the SLIP (Symmetric Logarithmic Image Processing) model has been proposed, in the continuation of the LIP model. Based on an odd function, it provides an algebraic structure for the processing of transmitted light images and intensity images, reunified on the same bounded range (−M, M). It is also physically justified and consistent with transmitted light image formation and human vision brightness perception laws. Moreover, the SLIP model, structured with combination and amplification laws, defines a vector space structure on the symmetric bounded range, which is mathematically consistent.

Finally, the SLIP model's specificity has been illustrated with a real application example, a Laplacian edge detection, where the SLIP model is far better adapted. Indeed, the resulting images are bounded on the symmetric interval (−M, M), whereas the same operator, in the case of the classical LIP model, gives resulting images in the (−∞, M) set and distorts information. The SLIP model opens strong new perspectives, as it is equipped with a vector space structure on the bounded range (−M, M) and is also physically and psychophysically justified. Our current research focuses on other image processing operations and on more sophisticated tools, such as the SLIP Fourier transform, mathematical morphology and the wavelet transform [21]. Taking into account recent work in logarithmic image processing, the extension of the PLIP to a Parameterized SLIP could easily be implemented, in order to define PSLIP operators dealing with negative values.

REFERENCES

[1] A. Oppenheim, "Superposition in a class of nonlinear systems," Technical report 432, Research Laboratory of Electronics, Massachusetts Institute of Technology, 1965.

[2] T. Stockham, "Image processing in the context of a visual model," Proceedings of the IEEE, vol. 60, no. 7, pp. 828–842, 1972.

[3] M. Jourlin and J. Pinoli, "A model for logarithmic image processing," Journal of Microscopy, vol. 149, no. 1, pp. 21–35, 1988.

[4] J. Brailean, B. Sullivan, C. Chen, and M. Giger, "Evaluating the EM algorithm for image processing using a human visual fidelity criterion," in Acoustics, Speech, and Signal Processing (ICASSP-91), 1991 IEEE International Conference on, 1991, pp. 2957–2960.

[5] K. Panetta, E. Wharton, and S. Agaian, "Parameterization of logarithmic image processing models," IEEE Trans. Systems, Man, and Cybernetics, Part A: Systems and Humans, 2007.

[6] E. Wharton, K. Panetta, and S. Agaian, "Logarithmic edge detection with applications," in Systems, Man and Cybernetics (ISIC), 2007 IEEE International Conference on, 2007, pp. 3346–3351.

[7] S. Nercessian, K. Panetta, and S. Agaian, "Multiresolution decomposition schemes using the parameterized logarithmic image processing model with application to image fusion," EURASIP Journal on Advances in Signal Processing, vol. 2011, p. 1, 2011.

[8] V. Patrascu, V. Buzuloiu, and C. Vertan, "Fuzzy image enhancement in the framework of logarithmic models," Studies in Fuzziness and Soft Computing, vol. 122, pp. 219–236, 2003.

[9] C. Florea and C. Vertan, "Piecewise linear approximation of logarithmic image processing models for dynamic range enhancement," University "Politehnica" of Bucharest Scientific Bulletin, Series C: Electrical Engineering, vol. 71, no. 2, pp. 3–14, 2009.

[10] C. Vertan, A. Oprea, C. Florea, and L. Florea, "A pseudo-logarithmic image processing framework for edge detection," in Advanced Concepts for Intelligent Vision Systems, Springer, 2008, pp. 637–644.

[11] H. Shvayster and S. Peleg, "Inversion of picture operators," Pattern Recognition Letters, vol. 5, no. 1, pp. 49–61, 1987.

[12] G. Deng, "A generalized unsharp masking algorithm," IEEE Transactions on Image Processing, no. 99, pp. 1–1, 2011.

[13] M. Jourlin and J. Pinoli, "Logarithmic image processing: The mathematical and physical framework for the representation and processing of transmitted images," Advances in Imaging and Electron Physics, vol. 115, pp. 129–196, 2001.

[14] L. Schwartz, Méthodes mathématiques pour les sciences physiques. Hermann, 1961.

[15] F. Mayet, J. Pinoli, and M. Jourlin, "Physical justifications and applications of the LIP model for the processing of transmitted light images," TS. Traitement du Signal, vol. 13, no. 3, pp. 251–262, 1996.

[16] J.-C. Pinoli, "The logarithmic image processing model: Connections with human brightness perception and contrast estimators," Journal of Mathematical Imaging and Vision, vol. 7, no. 4, pp. 341–358, 1997.

[17] D. Baylor, T. Lamb, and K. Yau, "Responses of retinal rods to single photons," The Journal of Physiology, vol. 288, no. 1, p. 613, 1979.

[18] D. Baylor, B. Nunn, and J. Schnapf, "The photocurrent, noise and spectral sensitivity of rods of the monkey Macaca fascicularis," The Journal of Physiology, vol. 357, no. 1, p. 575, 1984.

[19] G. Deng and J.-C. Pinoli, "Differentiation-based edge detection using the logarithmic image processing model," Journal of Mathematical Imaging and Vision, vol. 8, no. 2, pp. 161–180, 1998.

[20] N. Otsu, "A threshold selection method from gray-level histograms," Automatica, vol. 11, pp. 285–296, 1975.

[21] G. Courbebaisse, F. Trunde, and M. Jourlin, "Wavelet transform and LIP model," Image Anal. Stereol., vol. 21, no. 2, pp. 121–125, 2002.

Laurent Navarro is a Research Engineer at the Ecole Nationale Supérieure des Mines de Saint-Etienne. He is with the Center for Health Engineering, LPMG-UMR CNRS 5148. His research areas include time-frequency signal processing, mechanical simulation and image processing in the field of biomedical engineering.

Guy Courbebaisse is the project leader and scientific coordinator of the FP7 European Project 'THROMBUS' (www.thrombus-vph.eu). He is with CREATIS, CNRS UMR 5220, INSA-Lyon. He worked successively in the domains of airborne counter-measure systems, spark-ignition engines, and polymer processing. Nowadays his field of activity concerns medical imaging.

