

HAL Id: hal-01280355

https://hal.archives-ouvertes.fr/hal-01280355

Submitted on 29 Feb 2016


To cite this version:
Bertrand Gauthier, Luc Pronzato. Optimal Design for Prediction in Random Field Models via Covariance Kernel Expansions. MODA 11, J. Kunert, Ch.H. Müller, Jun 2016, Hamminkeln-Dingden, Germany. hal-01280355

Optimal Design for Prediction in Random Field Models via Covariance Kernel Expansions

Bertrand Gauthier and Luc Pronzato

Abstract We consider experimental design for the prediction of a realization of a second-order random field Z with known covariance function, or kernel, K. When the mean of Z is known, the integrated mean squared error of the best linear predictor, approximated by spectral truncation, coincides with that obtained with a Bayesian linear model. The machinery of approximate design theory is then available to determine optimal design measures, from which exact designs (collections of sites where to observe Z) can be extracted. The situation is more complex in the presence of an unknown linear parametric trend, and we show how a Bayesian linear model especially adapted to the trend can be obtained via a suitable projection of Z which yields a reduction of K.

1 Introduction

We consider a centered second-order random field $(Z_x)_{x \in \mathcal{X}}$ indexed over a compact subset $\mathcal{X}$ of $\mathbb{R}^d$, $d \geq 1$, with covariance function $\mathsf{E}\{Z_x Z_{x'}\} = K(x, x') = K(x', x)$. We also consider a $\sigma$-finite measure $\mu$ on $\mathcal{X}$ which will be used to define the Integrated Mean Squared Error (IMSE) for prediction, see (1). We assume that $\mu(\mathcal{X}) = 1$ without any loss of generality. We suppose that $K: \mathcal{X} \times \mathcal{X} \to \mathbb{R}$ is continuous and thus belongs to $L^2_{\mu \times \mu}(\mathcal{X} \times \mathcal{X})$. This framework is now classical for computer experiments, where $(Z_x)_{x \in \mathcal{X}}$ represents the output of a computer code with input variables $x \in \mathcal{X}$, in the sense that it forms a prior on the possible behavior of the code when $x$ varies in $\mathcal{X}$, see in particular [10].

B. Gauthier
KU Leuven, ESAT-STADIUS Center for Dynamical Systems, Signal Processing and Data Analytics, Belgium, e-mail: bgauthie@esat.kuleuven.be

L. Pronzato
CNRS, Laboratoire I3S – UMR 7271 Université de Nice-Sophia Antipolis/CNRS, France, e-mail: pronzato@cnrs.fr


The design problem then consists in choosing $n$ sites $x_1, \ldots, x_n$ where to observe $Z_x$ without error (i.e., $n$ input values for the computer code) in order to predict at best the realization of $Z_x$ over $\mathcal{X}$. Note that the objective concerns the prediction of one instance of the random field, based on a finite set of error-free observations.

The best linear predictor of $Z_x$, unbiased and optimal in terms of mean squared error (the simple kriging predictor), based on the vector of observations $\mathbf{z}_n = (Z_{x_1}, \ldots, Z_{x_n})^\top$ obtained with the design $D_n = (x_1, \ldots, x_n)$, is given by $\mathbf{k}^\top(x) \mathbf{K}^{-1} \mathbf{z}_n$, where $\mathbf{k}(x) = (K(x_1, x), \ldots, K(x_n, x))^\top$ and the matrix $\mathbf{K}$ has elements $\{\mathbf{K}\}_{ij} = K(x_i, x_j)$. Its Mean Squared Error (MSE) is
$$\mathrm{MSE}(x; D_n) = K(x, x) - \mathbf{k}^\top(x) \mathbf{K}^{-1} \mathbf{k}(x) \,.$$

The IMSE will be computed for the measure $\mu$, which weighs the importance given to the precision of predictions in different parts of $\mathcal{X}$,
$$\mathrm{IMSE}(D_n) = \int_{\mathcal{X}} \mathrm{MSE}(x; D_n) \, \mathrm{d}\mu(x) \,. \qquad (1)$$
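As a concrete illustration (a minimal sketch of ours, not part of the paper: one-dimensional inputs, the Matérn 3/2 kernel of (5) below, and $\mu$ approximated by a uniform grid), the kriging MSE and the quadrature version of (1) can be evaluated as follows:

```python
import numpy as np

def matern32(x, y, theta=10.0):
    # C_{3/2,theta}(t) = (theta*t + 1) * exp(-theta*t), cf. (5) below
    t = np.abs(np.subtract.outer(x, y))
    return (theta * t + 1.0) * np.exp(-theta * t)

def imse(design, quad_pts, quad_w, kern=matern32):
    """Quadrature approximation of IMSE(D_n), eq. (1), error-free observations."""
    K = kern(design, design)              # n x n matrix {K}_ij = K(x_i, x_j)
    k = kern(design, quad_pts)            # n x Q cross-covariances k(x)
    # MSE(x; D_n) = K(x, x) - k(x)^T K^{-1} k(x) at all quadrature points
    mse = np.diag(kern(quad_pts, quad_pts)) - np.sum(k * np.linalg.solve(K, k), axis=0)
    return float(quad_w @ mse)

design = np.linspace(0.1, 0.9, 5)         # a 5-point design on [0, 1]
quad = np.linspace(0.0, 1.0, 101)         # support of the discrete mu
print(imse(design, quad, np.full(quad.size, 1 / quad.size)))
```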

Choosing a design $D_n$ with minimum IMSE is then a rather natural objective; however, IMSE-optimal designs are considered as difficult to compute, see [10]. The introduction of the following integral operator on $L^2_\mu(\mathcal{X})$ yields a spectral representation of the IMSE and is at the core of the design approach presented in this paper.

For all $f \in L^2_\mu(\mathcal{X})$ and $x \in \mathcal{X}$, we define
$$T_\mu[f](x) = \int_{\mathcal{X}} f(t) K(x, t) \, \mathrm{d}\mu(t) \,.$$

From Mercer's theorem, there exists an orthonormal set $\{\tilde\phi_k, k \in \mathbb{N}\}$ of eigenfunctions of $T_\mu$ in $L^2_\mu(\mathcal{X})$ associated with nonnegative eigenvalues, the eigenfunctions corresponding to non-zero eigenvalues being continuous on $S_\mu$, the support of $\mu$. Moreover, for $x, x' \in S_\mu$, $K$ has the representation $K(x, x') = \sum_{k=1}^{\infty} \lambda_k \tilde\phi_k(x) \tilde\phi_k(x')$, where the $\lambda_k$ denote the positive eigenvalues, which we assume to be sorted by decreasing values, and the convergence is uniform on compact subsets of $S_\mu$. As shown below, see (2), the integration over $\mathcal{X}$ in (1) can then be replaced by a summation over $k$, the index of eigenvalues. There is a slight technicality here, due to the fact that $S_\mu$ may be strictly included in $\mathcal{X}$, so that the eigenfunctions $\tilde\phi_k$, which are only defined $\mu$-almost everywhere, may not be defined at a general design point $x_i \in \mathcal{X}$. This is the case for instance when $\mu$ is a discrete measure supported on a finite set of points $\mathcal{X}_Q \subset \mathcal{X}$ (thus in particular when the IMSE is approximated via a quadrature rule) and $D_n$ is not a subset of $\mathcal{X}_Q$. A general way to avoid this difficulty is via the introduction of the canonical extensions
$$\phi_k = (1/\lambda_k) \, T_\mu[\tilde\phi_k] \,, \quad \lambda_k > 0 \,,$$
which are defined on the whole set $\mathcal{X}$ and satisfy $\phi_k(x) = \tilde\phi_k(x)$ on $S_\mu$. Note that the $\phi_k$ are continuous since we assumed that $K(\cdot, \cdot)$ is continuous. One may refer to [4] for more details. Then, $K(x, x') = \sum_{k=1}^{\infty} \lambda_k \phi_k(x) \tilde\phi_k(x')$ for all $x \in \mathcal{X}$ and $x' \in S_\mu$, and we obtain

$$\mathrm{IMSE}(D_n) = \tau - \sum_{k=1}^{\infty} \lambda_k^2 \, \boldsymbol{\phi}_k^\top \mathbf{K}^{-1} \boldsymbol{\phi}_k \,, \qquad (2)$$

where $\tau = \sum_{k=1}^{\infty} \lambda_k$ and $\boldsymbol{\phi}_k = (\phi_k(x_1), \ldots, \phi_k(x_n))^\top$. A spectral truncation that uses only the $n_{\mathrm{trc}}$ largest eigenvalues yields an approximation of $\mathrm{IMSE}(D_n)$,

$$\mathrm{IMSE}_{\mathrm{trc}}(D_n) = \tau_{\mathrm{trc}} - \sum_{k=1}^{n_{\mathrm{trc}}} \lambda_k^2 \, \boldsymbol{\phi}_k^\top \mathbf{K}^{-1} \boldsymbol{\phi}_k = \mathrm{trace}(\Lambda_{\mathrm{trc}} - \Lambda_{\mathrm{trc}} \Phi_{\mathrm{trc}}^\top \mathbf{K}^{-1} \Phi_{\mathrm{trc}} \Lambda_{\mathrm{trc}}) \,, \qquad (3)$$

with $\tau_{\mathrm{trc}} = \sum_{k=1}^{n_{\mathrm{trc}}} \lambda_k$, $\Phi_{\mathrm{trc}}$ the $n \times n_{\mathrm{trc}}$ matrix with elements $\{\Phi_{\mathrm{trc}}\}_{ik} = \phi_k(x_i)$ and $\Lambda_{\mathrm{trc}} = \mathrm{diag}\{\lambda_1, \ldots, \lambda_{n_{\mathrm{trc}}}\}$. This satisfies $\mathrm{IMSE}_{\mathrm{trc}}(D_n) \leq \mathrm{IMSE}(D_n) \leq \mathrm{IMSE}_{\mathrm{trc}}(D_n) + \tau_{\mathrm{err}}$, where $\tau_{\mathrm{err}} = \tau - \tau_{\mathrm{trc}} = \sum_{k > n_{\mathrm{trc}}} \lambda_k$, see [4, 6].
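In matrix form, (3) involves only the Gram matrix of the design and the truncated eigenpairs; a minimal sketch (ours, with `K`, `lam` and `Phi` assumed precomputed for a given design):

```python
import numpy as np

def imse_trc(K, lam, Phi):
    """Truncated IMSE, eq. (3): K is the n x n Gram matrix of the design,
    lam the n_trc largest eigenvalues, Phi the n x n_trc matrix of phi_k(x_i)."""
    Lam = np.diag(lam)
    return float(np.trace(Lam - Lam @ Phi.T @ np.linalg.solve(K, Phi) @ Lam))
```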

Following [2, 13], we recall below (Sect. 2) how, using its Karhunen-Loève decomposition, $(Z_x)_{x \in \mathcal{X}}$ can be interpreted as a Bayesian Linear Model (BLM) with correlated errors having the eigenfunctions $\phi_k$, $k = 1, \ldots, n_{\mathrm{trc}}$, as regressors, and for which the IMSE of prediction coincides with $\mathrm{IMSE}_{\mathrm{trc}}(D_n)$. A similar decomposition has been used in [3], but for the estimation of trend parameters (or, equivalently, for the prediction of the mean of $Z_x$). The IMSE for a BLM with uncorrelated errors takes the form of a Bayesian A-optimality criterion, which opens the way for the construction of approximate optimal designs (design measures) by convex optimization, see [8]. Exact designs $D_n$, that is, collections of sites where to observe $Z$, can be extracted from these optimal measures, see Sect. 3. However, replacing correlated errors by uncorrelated ones, in order to be able to use the approximate design machinery, introduces an additional approximation (besides spectral truncation). Whereas the approximation in [13] relies on a homoscedastic model, the one we propose in Sect. 2 uses a more accurate heteroscedastic model. The situation is more complex when a linear parametric trend is present, and in Sect. 4 we show how a suitable projection of $(Z_x)_{x \in \mathcal{X}}$ on the linear subspace spanned by the trend, equivalent to a kernel reduction, yields a BLM with smaller errors than the direct approach of [13] that uses the original kernel $K$.

2 Bayesian Linear Models

Exact BLM The Karhunen-Loève decomposition of $Z_x$ yields
$$Z_x = \sum_{k=1}^{n_{\mathrm{trc}}} \beta_k \phi_k(x) + \varepsilon_x \,, \qquad (4)$$
where $\varepsilon_x$, $x \in \mathcal{X}$, is a centered random field with covariance given by
$$\mathsf{E}\{\varepsilon_x \varepsilon_{x'}\} = K_{\mathrm{err}}(x, x') = K(x, x') - K_{\mathrm{trc}}(x, x') \,,$$


with $K_{\mathrm{trc}}(x, x') = \sum_{k=1}^{n_{\mathrm{trc}}} \lambda_k \phi_k(x) \phi_k(x')$, and where the $\beta_k$ are mutually uncorrelated centered random variables, orthogonal to $\varepsilon_x$ for all $x \in \mathcal{X}$, with $\mathrm{var}(\beta_k) = \lambda_k$. No approximation is involved at this stage, and this BLM gives an exact representation of $(Z_x)_{x \in \mathcal{X}}$. Consider the predictor $\widehat{Z}_x = \sum_{k=1}^{n_{\mathrm{trc}}} \widehat{\beta}_k \phi_k(x) = \boldsymbol{\phi}_{\mathrm{trc}}^\top(x) \widehat{\beta}$, with $\boldsymbol{\phi}_{\mathrm{trc}}(x) = (\phi_1(x), \ldots, \phi_{n_{\mathrm{trc}}}(x))^\top$ and $\widehat{\beta} = (\Phi_{\mathrm{trc}}^\top \mathbf{K}_{\mathrm{err}}^{-1} \Phi_{\mathrm{trc}} + \Lambda_{\mathrm{trc}}^{-1})^{-1} \Phi_{\mathrm{trc}}^\top \mathbf{K}_{\mathrm{err}}^{-1} \mathbf{z}_n$ the estimator that minimizes the regularized LS criterion $J(\beta) = (\mathbf{z}_n - \Phi_{\mathrm{trc}} \beta)^\top \mathbf{K}_{\mathrm{err}}^{-1} (\mathbf{z}_n - \Phi_{\mathrm{trc}} \beta) + \beta^\top \Lambda_{\mathrm{trc}}^{-1} \beta$ ($\widehat{\beta}$ coincides with the posterior mean of $\beta$ when $Z_x$ is Gaussian). Direct calculation shows that the IMSE of $\widehat{Z}_x$ equals $\mathrm{IMSE}_{\mathrm{trc}}(D_n)$ given by (3) and can also be written as
$$\mathrm{IMSE}_{\mathrm{trc}}(D_n) = \mathrm{trace}[(\Phi_{\mathrm{trc}}^\top \mathbf{K}_{\mathrm{err}}^{-1} \Phi_{\mathrm{trc}} + \Lambda_{\mathrm{trc}}^{-1})^{-1}] \,.$$
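This trace expression, together with the estimator $\widehat{\beta}$, translates directly into code; a sketch (ours), assuming $\mathbf{K}_{\mathrm{err}}$ is nonsingular at the design points:

```python
import numpy as np

def blm_predictor_imse(z, lam, Phi, Kerr):
    """Regularized LS estimate of beta and the associated truncated IMSE."""
    KiPhi = np.linalg.solve(Kerr, Phi)          # K_err^{-1} Phi_trc
    A = Phi.T @ KiPhi + np.diag(1.0 / lam)      # Bayesian information matrix
    beta_hat = np.linalg.solve(A, KiPhi.T @ z)  # posterior mean of beta
    return beta_hat, float(np.trace(np.linalg.inv(A)))
```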

Approximate BLM with uncorrelated errors In [13], the correlated errors $\varepsilon_x$ of (4) are replaced by uncorrelated homoscedastic errors having variance $\sigma^2 = \tau_{\mathrm{err}} = \tau - \tau_{\mathrm{trc}}$. A more accurate approximation of the exact model (4) is obtained when using uncorrelated but heteroscedastic errors, with the same variance $\sigma^2(x) = K_{\mathrm{err}}(x, x)$ as $\varepsilon_x$. We shall call this model the heteroscedastic BLM and denote by $\Sigma_{\mathrm{err}}$ the diagonal matrix $\mathrm{diag}\{K_{\mathrm{err}}(x_1, x_1), \ldots, K_{\mathrm{err}}(x_n, x_n)\}$. The curve (solid line) on the top of Fig. 3-Right shows $\sigma^2(x)$, $x \in [0, 1]$, for the Matérn 3/2 kernel $K(x, x') = C_{3/2,10}(|x - x'|)$, where
$$C_{3/2,\vartheta}(t) = (\vartheta t + 1) \exp(-\vartheta t) \,, \qquad (5)$$
with $n_{\mathrm{trc}} = 10$. The first 4 eigenfunctions $\phi_k$ for this kernel are plotted in Fig. 2-Left. The strongly oscillating behavior of $\sigma^2(x)$, due to the form of the eigenfunctions in $K_{\mathrm{trc}}$, motivates the use of the heteroscedastic BLM instead of an approximate model with homoscedastic errors. This seems important within the framework of computer experiments, but would be less critical in the presence of additive uncorrelated measurement errors, as considered in [13]; in that case, the errors $\varepsilon_x$ due to spectral truncation are even neglected in [3]. The IMSE for prediction with the heteroscedastic BLM is
$$\mathrm{IMSE}_{\mathrm{trc}}^{\mathrm{hBLM}}(D_n) = \mathrm{trace}[(\Phi_{\mathrm{trc}}^\top \Sigma_{\mathrm{err}}^{-1} \Phi_{\mathrm{trc}} + \Lambda_{\mathrm{trc}}^{-1})^{-1}] \,. \qquad (6)$$
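Equation (6) only differs from the exact-BLM trace formula by replacing $\mathbf{K}_{\mathrm{err}}$ with its diagonal $\Sigma_{\mathrm{err}}$; a corresponding sketch (ours):

```python
import numpy as np

def imse_hblm(lam, Phi, sigma2):
    """Heteroscedastic-BLM IMSE, eq. (6); sigma2[i] = K_err(x_i, x_i)."""
    A = (Phi / sigma2[:, None]).T @ Phi + np.diag(1.0 / lam)
    return float(np.trace(np.linalg.inv(A)))
```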

3 Optimal Design

The IMSE (6), considered as a function of the exact design $D_n$, has the form of an A-optimality criterion for the Bayesian information matrix $\Phi_{\mathrm{trc}}^\top \Sigma_{\mathrm{err}}^{-1} \Phi_{\mathrm{trc}} + \Lambda_{\mathrm{trc}}^{-1}$, see [8]. For $\xi$ a probability measure on $\mathcal{X}$ (i.e., a design measure), let
$$M(\xi) = \int_{\mathcal{X}} \sigma^{-2}(x) \, \boldsymbol{\phi}_{\mathrm{trc}}(x) \boldsymbol{\phi}_{\mathrm{trc}}^\top(x) \, \mathrm{d}\xi(x) \,,$$
which satisfies $\mathrm{IMSE}_{\mathrm{trc}}^{\mathrm{hBLM}}(D_n) = \mathrm{trace}[(n M(\xi_n) + \Lambda_{\mathrm{trc}}^{-1})^{-1}]$, with $\xi_n$ the empirical measure $(1/n) \sum_{i=1}^n \delta_{x_i}$ associated with $D_n$. For any $\alpha \geq 0$, the Bayesian A-optimality criterion
$$\psi_\alpha(\xi) = \mathrm{trace}[(\alpha M(\xi) + \Lambda_{\mathrm{trc}}^{-1})^{-1}]$$

is a convex function of $\xi$ which can easily be minimized using an algorithm for optimal design, see, e.g., [9, Chap. 9]. The solution is much facilitated when (i) $\mu$ is a discrete measure with finite support $\mathcal{X}_Q$ (a quadrature approximation is used to compute the IMSE) and (ii) only design measures $\xi$ dominated by $\mu$ are considered (design points are chosen among the quadrature points in $\mathcal{X}_Q$). Indeed, only the values $\tilde\phi_k(x_j)$ for $x_j \in \mathcal{X}_Q$ are then used in all calculations, without having to compute the canonical extensions $\phi_k = (1/\lambda_k) T_\mu[\tilde\phi_k]$, see Sect. 1, and the $\tilde\phi_k(x_j)$ are obtained by the spectral decomposition of a $Q \times Q$ matrix, with $Q = |\mathcal{X}_Q|$, see [4, 5].
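For such a discrete $\mu$ with weights $w_j$ on $\mathcal{X}_Q$, a standard way to obtain the $\tilde\phi_k(x_j)$, consistent with [4, 5], is to solve the symmetric eigenproblem for $W^{1/2} \mathbf{K}_Q W^{1/2}$; a sketch (ours, with hypothetical names):

```python
import numpy as np

def discrete_eigpairs(KQ, w, n_trc):
    """Eigenpairs of T_mu for a discrete mu: KQ is the Q x Q kernel matrix on
    X_Q, w the weights. Solves W^{1/2} KQ W^{1/2} u = lambda u, phi = W^{-1/2} u,
    so that the eigenfunctions are orthonormal in L2_mu."""
    sw = np.sqrt(w)
    vals, vecs = np.linalg.eigh(sw[:, None] * KQ * sw[None, :])
    idx = np.argsort(vals)[::-1][:n_trc]    # keep the n_trc largest eigenvalues
    return vals[idx], vecs[:, idx] / sw[:, None]
```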

Once an optimal design measure $\xi_\alpha^*$ minimizing $\psi_\alpha(\cdot)$ has been determined, we still have to extract from it an exact design $D_n$ with $n$ points (and no repetitions, since there are no observation errors). We have tried several procedures; all are based on the selection of $n$ points among the support of $\xi_\alpha^*$, $S_{\xi_\alpha^*} = (x^{(1)}, \ldots, x^{(m)})$ say, and involve some trial and error for tuning the values of $\alpha$ and $n_{\mathrm{trc}}$ in order to facilitate the extraction of an $n$-point design when $n$ is given a priori. They are listed below:

1. Keep the $n$ points $x^{(i)}$ of $S_{\xi_\alpha^*}$ with largest weight $\xi_\alpha^*(x^{(i)})$.
2. Perform a greedy (one-point-at-a-time) minimization of the IMSE using the finite set $S_{\xi_\alpha^*}$ as design space (see the sketch after this list).
3. Perform a minimum-spanning-tree clustering of $S_{\xi_\alpha^*}$, with the metric $\Delta(x, x') = K(x, x) + K(x', x') - 2K(x, x')$ induced by $K$, with $n$ clusters (the $x_i$ then correspond to the barycenters of the $x^{(j)}$ in clusters, with weights the $\xi_\alpha^*(x^{(j)})$).

Choosing $\alpha$ and $n_{\mathrm{trc}}$ of the same order of magnitude as $n$ seems reasonable. Note that while 1. and 2. maintain the $x_i$ within $\mathcal{X}_Q$ when $\mu$ has finite support $\mathcal{X}_Q$ and $\xi_\alpha^*$ is dominated by $\mu$, this is not the case for 3., and the canonical extensions $\phi_k = (1/\lambda_k) T_\mu[\tilde\phi_k]$ must then be computed (which requires summations over $\mathcal{X}_Q$). After extraction of an $n$-point design $D_n$, a local minimization of $\mathrm{IMSE}(D_n)$ (a problem in $\mathbb{R}^{n \times d}$) can be performed using any standard algorithm for unconstrained optimization (e.g., conjugate gradient, variable metric), since optimal points for observation of $Z_x$ need to be as correlated as possible with points within $\mathcal{X}$, and therefore lie in the interior of $\mathcal{X}$. Here also canonical extensions must be computed.
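As an example, procedure 2. admits a simple (if brute-force) implementation over the finite support of $\xi_\alpha^*$; a sketch (ours), with `imse_fn` any of the IMSE evaluations sketched earlier, closed over the quadrature data:

```python
import numpy as np

def greedy_extract(support, n, imse_fn):
    """Procedure 2: add one point of `support` at a time, each time the one
    giving the smallest IMSE of the augmented design."""
    design = []
    for _ in range(n):
        candidates = [i for i in range(len(support)) if i not in design]
        best = min(candidates, key=lambda i: imse_fn(support[design + [i]]))
        design.append(best)
    return support[design]
```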

An illustration of procedure 3. is presented in Fig. 1 for $\mathcal{X} = [0, 1]^2$ and $K(x, x') = C_{3/2,10}(\|x - x'\|)$, see (5), with $n_{\mathrm{trc}} = \alpha = 10$; $\mu$ is the uniform measure on a regular grid $\mathcal{X}_Q$ of $Q = 33 \times 33$ points and a vertex-exchange algorithm with Armijo-type line search [1] is used to determine $\xi_\alpha^*$ supported on $\mathcal{X}_Q$.

4 Kernel Reduction for Models with Parametric Trend

Fig. 1 The 20 squares correspond to the support of $\xi_\alpha^*$ when the vertex-exchange algorithm is stopped (the directional derivative of $\psi_\alpha(\xi)$ exceeds $-10^{-4}$); the associated minimum spanning tree identifies $n = 12$ clusters; the stars indicate the optimal 12-point design obtained by local minimization of the IMSE

Consider now the random field $Y_x = \mathbf{g}^\top(x)\theta + Z_x$, where $Z_x$ is as in Sect. 1, $\mathbf{g}(x) = (g_1(x), \ldots, g_p(x))^\top$ is a vector of (known) real-valued trend functions defined on $\mathcal{X}$, and where $\theta = (\theta_1, \ldots, \theta_p)^\top \in \mathbb{R}^p$ is an unknown vector of parameters. The Best Linear Unbiased Predictor (BLUP) of $Y_x$ (the universal kriging predictor), based on the observations $\mathbf{y}_n = (Y_{x_1}, \ldots, Y_{x_n})^\top$, is $\mathbf{g}^\top(x)\widehat{\theta} + \mathbf{k}^\top(x) \mathbf{K}^{-1} (\mathbf{y}_n - \mathbf{G}\widehat{\theta})$, where $\widehat{\theta} = (\mathbf{G}^\top \mathbf{K}^{-1} \mathbf{G})^{-1} \mathbf{G}^\top \mathbf{K}^{-1} \mathbf{y}_n$, with $\mathbf{G}$ the $n \times p$ matrix with elements $\{\mathbf{G}\}_{ij} = g_j(x_i)$ (we assume that $\mathbf{G}$ has full column rank). Its MSE is
$$\mathrm{MSE}(x; D_n) = K(x, x) - \mathbf{k}^\top(x) \mathbf{K}^{-1} \mathbf{k}(x) + [\mathbf{g}(x) - \mathbf{G}^\top \mathbf{K}^{-1} \mathbf{k}(x)]^\top (\mathbf{G}^\top \mathbf{K}^{-1} \mathbf{G})^{-1} [\mathbf{g}(x) - \mathbf{G}^\top \mathbf{K}^{-1} \mathbf{k}(x)] \,,$$
see, e.g., [11, Chaps. 3, 4], and $\mathrm{IMSE}(D_n) = \int_{\mathcal{X}} \mathrm{MSE}(x; D_n) \, \mathrm{d}\mu(x)$.
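A direct matrix transcription of this MSE (a sketch, ours, for a single site $x$ with all quantities precomputed):

```python
import numpy as np

def uk_mse(Kxx, k, K, G, gx):
    """Universal-kriging MSE at one site x: Kxx = K(x, x), k = k(x),
    K the n x n Gram matrix, G the n x p trend matrix, gx = g(x)."""
    Kik = np.linalg.solve(K, k)           # K^{-1} k(x)
    r = gx - G.T @ Kik                    # g(x) - G^T K^{-1} k(x)
    GtKiG = G.T @ np.linalg.solve(K, G)   # G^T K^{-1} G
    return float(Kxx - k @ Kik + r @ np.linalg.solve(GtKiG, r))
```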

Similarly to Sect. 2, we can consider an exact BLM $Y_x = \mathbf{f}^\top(x)\gamma + \varepsilon_x$, where $\mathbf{f}^\top(x) = (\mathbf{g}^\top(x), \boldsymbol{\phi}_{\mathrm{trc}}^\top(x))$ and $\gamma = (\theta^\top, \beta^\top)^\top$. The predictor of $Y_x$ is then $\widehat{Y}_x = \mathbf{f}^\top(x)\widehat{\gamma}$, where $\widehat{\gamma} = (\mathbf{F}^\top \mathbf{K}_{\mathrm{err}}^{-1} \mathbf{F} + \Gamma^{-1})^{-1} \mathbf{F}^\top \mathbf{K}_{\mathrm{err}}^{-1} \mathbf{y}_n$, with $\mathbf{F} = (\mathbf{G}, \Phi_{\mathrm{trc}})$ and $\Gamma^{-1} = \mathrm{diag}\{0, \ldots, 0, \lambda_1^{-1}, \ldots, \lambda_{n_{\mathrm{trc}}}^{-1}\}$, where there are $p$ zeros. Its IMSE coincides with the truncated version of $\mathrm{IMSE}(D_n)$ and is given by
$$\mathrm{IMSE}_{\mathrm{trc}}(D_n) = \mathrm{trace}[(\mathbf{F}^\top \mathbf{K}_{\mathrm{err}}^{-1} \mathbf{F} + \Gamma^{-1})^{-1} \mathbf{U}] \,,$$
where
$$\mathbf{U} = (\mathbf{f} | \mathbf{f}^\top)_{L^2_\mu} = \int_{\mathcal{X}} \mathbf{f}(x) \mathbf{f}^\top(x) \, \mathrm{d}\mu(x) = \begin{pmatrix} (\mathbf{g} | \mathbf{g}^\top)_{L^2_\mu} & (\mathbf{g} | \boldsymbol{\phi}_{\mathrm{trc}}^\top)_{L^2_\mu} \\ (\boldsymbol{\phi}_{\mathrm{trc}} | \mathbf{g}^\top)_{L^2_\mu} & \mathrm{Id}_{n_{\mathrm{trc}}} \end{pmatrix} ,$$
with $\mathrm{Id}_{n_{\mathrm{trc}}}$ the $n_{\mathrm{trc}}$-dimensional identity matrix. Note that $\mathbf{U}$ does not depend on $D_n$. Approximation of $Y_x$ by a heteroscedastic BLM, where uncorrelated errors with variance $\sigma^2(x) = K_{\mathrm{err}}(x, x)$ are substituted for $\varepsilon_x$, gives a predictor with $\mathrm{IMSE}_{\mathrm{trc}}^{\mathrm{hBLM}}(D_n) = \mathrm{trace}[(\mathbf{F}^\top \Sigma_{\mathrm{err}}^{-1} \mathbf{F} + \Gamma^{-1})^{-1} \mathbf{U}]$, i.e., a Bayesian L-optimality criterion which can be minimized following the same lines as in Sect. 3. However, in general the realizations of $(Z_x)_{x \in \mathcal{X}}$ are not orthogonal to the linear subspace $\mathcal{T}$ spanned by the trend functions $\mathbf{g}$, so that, roughly speaking, this direct approach involves unnecessarily large error variances $\sigma^2(x)$.

Denote by $p$ the orthogonal projection of $L^2_\mu(\mathcal{X})$ onto $\mathcal{T}$: for any $f \in L^2_\mu(\mathcal{X})$,
$$p f = \mathbf{g}^\top (\mathbf{g} | \mathbf{g}^\top)^{-1}_{L^2_\mu} (\mathbf{g} | f)_{L^2_\mu} \,.$$

Fig. 2 First 4 eigenfunctions $\phi_k(x)$, $x \in \mathcal{X} = [0, 1]$, for the kernel $K(x, x') = C_{3/2,10}(|x - x'|)$ (Left) and for $K_q(x, x')$ adapted to the trend functions $\mathbf{g}(x) = (1, x, x^2)^\top$ (Right)

Assuming that the realizations of $(Z_x)_{x \in \mathcal{X}}$ belong to $L^2_\mu(\mathcal{X})$, define $p Z_x = \mathbf{g}^\top(x) (\mathbf{g} | \mathbf{g}^\top)^{-1}_{L^2_\mu} \int_{\mathcal{X}} \mathbf{g}(t) Z_t \, \mathrm{d}\mu(t)$, so that $Y_x = \mathbf{g}^\top(x)\theta + Z_x = \mathbf{g}^\top(x)\theta + p Z_x + q Z_x = \mathbf{g}^\top(x)\theta_0 + q Z_x$, where $q = \mathrm{id}_{L^2_\mu} - p$ and $\theta_0 = \theta + (\mathbf{g} | \mathbf{g}^\top)^{-1}_{L^2_\mu} \int_{\mathcal{X}} \mathbf{g}(t) Z_t \, \mathrm{d}\mu(t)$. The covariance kernel of $(q Z_x)_{x \in \mathcal{X}}$, called the reduction of the kernel $K$ in [12], is
$$K_q(x, x') = \mathsf{E}\{(q Z_x)(q Z_{x'})\} = K(x, x') + \mathbf{g}^\top(x) \mathbf{S} \mathbf{g}(x') - \mathbf{b}^\top(x) \mathbf{g}(x') - \mathbf{g}^\top(x) \mathbf{b}(x') \,,$$
where $\mathbf{S} = (\mathbf{g} | \mathbf{g}^\top)^{-1}_{L^2_\mu} (T_\mu[\mathbf{g}] | \mathbf{g}^\top)_{L^2_\mu} (\mathbf{g} | \mathbf{g}^\top)^{-1}_{L^2_\mu}$ and $\mathbf{b}(x) = (\mathbf{g} | \mathbf{g}^\top)^{-1}_{L^2_\mu} T_\mu[\mathbf{g}](x)$. A key property here is that, for a given design $D_n$, the two random fields $\mathbf{g}^\top(x)\theta + Z_x$ and $\mathbf{g}^\top(x)\theta + q Z_x$ with unknown $\theta$ yield the same BLUP and the same IMSE. This can be related to results on intrinsic random functions, see [7]; notice, however, that the kernel reduction considered here applies to any linear parametric trend and is not restricted to polynomials in $x$.
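For a discrete $\mu$, the reduction amounts to applying the matrix of the projection $q$ on both sides of the Gram matrix on $\mathcal{X}_Q$; a sketch (ours), with `G` the $Q \times p$ matrix of trend values and `w` the weights of $\mu$:

```python
import numpy as np

def reduce_kernel(KQ, G, w):
    """Values of K_q on X_Q x X_Q: apply q = id - p on both sides of K, with
    p the L2_mu-orthogonal projection onto the span of the trend functions."""
    W = np.diag(w)
    P = G @ np.linalg.solve(G.T @ W @ G, G.T @ W)  # matrix of the projection p
    Q = np.eye(KQ.shape[0]) - P                    # matrix of q
    return Q @ KQ @ Q.T
```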

Figure 2 presents the first four eigenfunctions of the kernels $K(x, x')$ (Left) and $K_q(x, x')$ (Right) for a one-dimensional random process on $[0, 1]$ with Matérn 3/2 covariance function, see (5), and trend functions $\mathbf{g}(x) = (1, x, x^2)^\top$. The eigenfunctions oscillate at higher frequency for $K_q(x, x')$ than for $K(x, x')$: in some sense, kernel reduction has removed from $K$ the energy which can be transferred to the slowly varying trend functions. This is confirmed by Fig. 3-Left, which shows the first 20 eigenvalues of both kernels. One can thus expect that, for the same number $n_{\mathrm{trc}}$ of eigenfunctions, a heteroscedastic BLM will be more accurate when based on $K_q$ than when based on $K$. This is confirmed by Fig. 3-Right, where $\sigma^2(x) = K_{\mathrm{err}}(x, x)$ is plotted for both kernels with $n_{\mathrm{trc}} = 10$.

Acknowledgements This work was partially supported by the ANR project DESIRE (DESIgns for spatial Random fiElds), no. 2011-IS01-001-01, joint with the Statistics Dept. of the JKU Linz (Austria). B. Gauthier has also been supported by the MRI Dept. of EdF Chatou from Sept. to Dec. 2014.

Fig. 3 Left: first 20 eigenvalues (sorted by decreasing values) for the kernel $K(x, x') = C_{3/2,10}(|x - x'|)$ (top, solid line) and for $K_q(x, x')$ adapted to the trend functions $\mathbf{g}(x) = (1, x, x^2)^\top$ (bottom, dashed line). Right: $\sigma^2(x)$, $x \in \mathcal{X} = [0, 1]$, for the heteroscedastic BLM with $K(x, x')$ (top, solid line) and with $K_q(x, x')$ (bottom, dashed line); $n_{\mathrm{trc}} = 10$ in both cases

References

1. Armijo, L.: Minimization of functions having Lipschitz continuous first partial derivatives. Pacific Journal of Mathematics 16(1), 1–3 (1966)
2. Fedorov, V.: Design of spatial experiments: model fitting and prediction. In: S. Ghosh, C. Rao (eds.) Handbook of Statistics, vol. 13, chap. 16, pp. 515–553. Elsevier, Amsterdam (1996)
3. Fedorov, V., Müller, W.: Optimum design for correlated processes via eigenfunction expansions. In: J. López-Fidalgo, J. Rodríguez-Díaz, B. Torsney (eds.) mODa'8 – Advances in Model-Oriented Design and Analysis, Proceedings of the 8th Int. Workshop, Almagro (Spain), pp. 57–66. Physica Verlag, Heidelberg (2007)
4. Gauthier, B., Pronzato, L.: Spectral approximation of the IMSE criterion for optimal designs in kernel-based interpolation models. SIAM/ASA Journal on Uncertainty Quantification 2, 805–825 (2014)
5. Gauthier, B., Pronzato, L.: Approximation of IMSE-optimal designs via quadrature rules and spectral decomposition. Communications in Statistics – Simulation and Computation (2015). DOI: 10.1080/03610918.2014.972518, to appear
6. Harari, O., Steinberg, D.: Optimal designs for Gaussian process models via spectral decomposition. Journal of Statistical Planning and Inference 154, 87–101 (2014)
7. Matheron, G.: The intrinsic random functions and their applications. Advances in Applied Probability 5, 439–468 (1973)
8. Pilz, J.: Bayesian Estimation and Experimental Design in Linear Regression Models, vol. 55. Teubner-Texte zur Mathematik, Leipzig (1983). (also Wiley, New York, 1991)
9. Pronzato, L., Pázman, A.: Design of Experiments in Nonlinear Models. Asymptotic Normality, Optimality Criteria and Small-Sample Properties. Springer, LNS 212, New York, Heidelberg (2013)
10. Sacks, J., Welch, W., Mitchell, T., Wynn, H.: Design and analysis of computer experiments. Statistical Science 4(4), 409–435 (1989)
11. Santner, T., Williams, B., Notz, W.: The Design and Analysis of Computer Experiments. Springer, New York (2003)
12. Schaback, R.: Native Hilbert spaces for radial basis functions I. In: New Developments in Approximation Theory, pp. 255–282. Springer (1999)
13. Spöck, G., Pilz, J.: Spatial sampling design and covariance-robust minimax prediction based on convex design ideas. Stochastic Environmental Research and Risk Assessment 24(3), 463–482 (2010)

