Resampling-based confidence regions and multiple tests

Sylvain Arlot1,2,3

joint work with Gilles Blanchard4 and Étienne Roquain5

1 CNRS

2 École Normale Supérieure de Paris

3 Willow, Inria Paris-Rocquencourt

4 Universität Potsdam, Institut für Mathematik

5 Université Pierre et Marie Curie, LPMA

JSTAR 2010, Statistiques en grande dimension, Rennes, October 21, 2010

Model

Observations: Y = (Y^1, …, Y^n) is the K × n matrix (Y_k^i)_{1≤k≤K, 1≤i≤n}, whose columns Y^1, …, Y^n ∈ R^K are i.i.d. and symmetric.

Unknown mean µ = (µ_k)_k, unknown covariance matrix Σ, and n ≪ K.

Aims: find a confidence region for µ, or for {k s.t. µ_k ≠ 0}.


Illustration (1): K = 16384 ≫ n, spatially correlated noise

[Figure: four 128 × 128 noise images, for parameter values b = 2, 6, 12, 40]

Illustration (2): K = 16384 ≫ n, textured noise

Illustration (3): neuroimaging (neural activity) and microarrays (gene expression levels)

Multiple simultaneous hypothesis testing

For everyk we test: H0,k: “µk = 0” againstH1,k: “µk 6= 0”.

Amultiple testing procedure rejects:

R(Y)⊂ {1, . . . ,K}.

Type I errors measured by the Family Wise Error Rate:

FWER(R) =P(∃k ∈R(Y) s.t.µk = 0).

⇒build a procedure R such that FWER(R)≤α?

strong controlof the FWER: ∀µ∈RK power: Card(R(Y)) as large as possible


Thresholding

Reject R(Y) = {k s.t. √n |Ȳ_k| > t}, where
- Ȳ = (1/n) Σ_{i=1}^n Y^i is the empirical mean,
- t = t_α(Y) is a threshold (independent of k ∈ {1, …, K}).

FWER(R) = P(∃k s.t. µ_k = 0 and √n |Ȳ_k| > t)
= P(√n sup_{k s.t. µ_k=0} |Ȳ_k| > t)
≤ P(√n sup_k |Ȳ_k − µ_k| > t)
= P(‖Ȳ − µ‖_∞ > t n^{−1/2})

So, an L^∞ confidence region ⇒ control of the FWER.
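As a minimal sketch of this thresholding rule (hypothetical Gaussian data; the threshold t = 3.0 is arbitrary here, not one of the calibrated thresholds discussed later):

```python
import numpy as np

def reject(Y, t):
    """Thresholding rule: Y is a K x n matrix of observations;
    reject H_{0,k} whenever sqrt(n) * |Ybar_k| > t."""
    K, n = Y.shape
    Ybar = Y.mean(axis=1)  # empirical mean, one entry per coordinate
    return np.flatnonzero(np.sqrt(n) * np.abs(Ybar) > t)

rng = np.random.default_rng(0)
n, K = 100, 8
mu = np.array([3.0, 0, 0, 0, 3.0, 0, 0, 0])  # two non-zero coordinates
Y = mu[:, None] + rng.standard_normal((K, n))
print(reject(Y, t=3.0))  # includes coordinates 0 and 4 (the true signals)
```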


Bonferroni threshold, Gaussian case

Union bound:

FWER(R) ≤ P(∃k s.t. √n |Ȳ_k − µ_k| > t)
≤ K sup_k P(√n |Ȳ_k − µ_k| > t)
≤ 2K Φ̄(t/σ),

where Φ̄ is the standard Gaussian upper-tail function.

Bonferroni's threshold: t_α^Bonf = σ Φ̄^{−1}(α/(2K)).

- deterministic threshold
- too conservative if there are strong correlations between the coordinates Y_k

⇒ How to do better?
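A minimal computation of the Bonferroni threshold, using the standard library's NormalDist for the Gaussian upper-tail inverse (σ = 1 assumed):

```python
from statistics import NormalDist

def bonferroni_threshold(alpha, K, sigma=1.0):
    """t_alpha^Bonf = sigma * Phibar^{-1}(alpha / (2K)), where Phibar is
    the standard Gaussian upper-tail function."""
    return sigma * NormalDist().inv_cdf(1 - alpha / (2 * K))

# The threshold grows only like sqrt(2 log K) with the number of tests:
for K in (10, 1000, 16384):
    print(K, round(bonferroni_threshold(0.05, K), 3))
```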


Goal

For every p ∈ [1, +∞], find a threshold t_α(Y) such that

∀µ ∈ R^K, P(√n ‖Ȳ − µ‖_p > t_α(Y)) ≤ α.

⇒ L^p confidence ball for µ at level α (FWER(R) ≤ α if p = +∞)
- Non-asymptotic: for all K, n
- General correlations

Assumptions:
(Gauss) Y^i ~ N(µ, Σ), with σ = (Σ_{k,k}^{1/2})_{k=1…K} known
(SB) Y^i − µ ~ µ − Y^i (symmetry) and ∀k, |Y_k^i| ≤ M a.s.

Ideal threshold: t = q_α^⋆, the (1−α)-quantile of D(√n ‖Ȳ − µ‖_p).
q_α^⋆ depends on the unknown D(Y) ⇒ estimated by resampling.


Classical procedures and known results

- Parametric statistics: no, because K² ≫ nK (far more covariance parameters than observations).
- Asymptotic results (e.g. [van der Vaart and Wellner 1996]): not valid, because K ≫ n.
- Holm's multiple testing procedure: better than Bonferroni, but inefficient with strong correlations.
- (Multiple) tests by symmetrization:
  - do not work if H_{0,k} = {µ_k ≤ 0};
  - do not give confidence regions;
  - not translation-invariant ⇒ less powerful if ‖µ‖ is large.


Resampling principle [Efron 1979; ...]

Sample Y^1, …, Y^n → (resampling) → weighted sample (W_1, Y^1), …, (W_n, Y^n)

"Y^i is kept W_i times in the resample"

Weight vector (W_1, …, W_n), independent of Y.

Example 1: Efron's bootstrap ⇔ n-sample with replacement ⇔ (W_1, …, W_n) ~ M(n; n^{−1}, …, n^{−1}) (multinomial)

Example 2: Rademacher weights: W_i i.i.d. ~ (1/2)δ_{−1} + (1/2)δ_{1} ⇔ subsampling, with subsample size ≈ n/2

Heuristics: D(sample | true distribution) ≈ D(resample | sample)
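The two weight schemes can be sketched as follows (toy data; numpy's multinomial and choice generators stand in for the abstract weight distributions):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
Y = rng.standard_normal((3, n))  # a small K x n sample (hypothetical data)

# Example 1: Efron's bootstrap weights -- Multinomial(n; 1/n, ..., 1/n)
W_boot = rng.multinomial(n, np.full(n, 1.0 / n))

# Example 2: Rademacher weights -- each W_i is +1 or -1 with probability 1/2
W_rad = rng.choice([-1, 1], size=n)

# Resampling empirical mean (1/n) * sum_i W_i Y^i for each scheme
Ybar_boot = (Y * W_boot).mean(axis=1)
Ybar_rad = (Y * W_rad).mean(axis=1)
print(W_boot.sum())  # bootstrap weights always sum to n
```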


Concentration method

‖Ȳ − µ‖_p concentrates around its expectation, with standard deviation ≤ ‖σ‖_p n^{−1/2}.

Estimate E‖Ȳ − µ‖_p by resampling:

⇒ q_α^conc(Y) = cst × √n E[‖Ȳ^W − W̄ Ȳ‖_p | Y] + remainder(‖σ‖_p, α, n)

Works well if expectations (∝ √log(K)) are larger than fluctuations (∝ Φ̄^{−1}(α/2)).


Quantile method

Ideal threshold: q_α^⋆ = (1−α)-quantile of D(√n ‖Ȳ − µ‖_p)

⇒ Resampling estimate of q_α^⋆:

q_α^quant(Y) = (1−α)-quantile of D(√n ‖Ȳ^W − W̄ Ȳ‖_p | Y),

with W̄ := (1/n) Σ_{i=1}^n W_i and Ȳ^W := (1/n) Σ_{i=1}^n W_i Y^i (resampling empirical mean).

q_α^quant(Y) depends only on Y ⇒ can be computed with real data.
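A Monte-Carlo sketch of q_α^quant with Rademacher weights (B = 1000 draws; the data and parameters are illustrative, not those of the talk's simulations):

```python
import numpy as np

def quantile_threshold(Y, alpha, B=1000, p=np.inf, rng=None):
    """Monte-Carlo version of q_alpha^quant: the (1 - alpha)-quantile of
    sqrt(n) * ||Ybar^W - Wbar * Ybar||_p over B Rademacher weight draws,
    conditionally on Y (a K x n matrix)."""
    rng = rng or np.random.default_rng()
    K, n = Y.shape
    Ybar = Y.mean(axis=1)
    stats = np.empty(B)
    for b in range(B):
        W = rng.choice([-1.0, 1.0], size=n)
        YbarW = (Y * W).mean(axis=1)  # (1/n) sum_i W_i Y^i
        Wbar = W.mean()
        stats[b] = np.sqrt(n) * np.linalg.norm(YbarW - Wbar * Ybar, ord=p)
    return np.quantile(stats, 1 - alpha)

rng = np.random.default_rng(0)
Y = rng.standard_normal((16, 200))  # centred Gaussian toy sample
t = quantile_threshold(Y, alpha=0.05, rng=rng)
print(t)  # roughly the 95% quantile of the max of 16 standardised coordinates
```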


Concentration theorem

Theorem. Assume (Gauss) and W Rademacher. For every α ∈ (0,1),

q_α^{conc,1}(Y) := √n E[‖Ȳ^W − W̄ Ȳ‖_p | Y] / B_W + ‖σ‖_p Φ̄^{−1}(α/2) (C_W / (√n B_W) + 1)

satisfies

P(√n ‖Ȳ − µ‖_p > q_α^{conc,1}(Y)) ≤ α,

with σ := (√var(Y_k^1))_{1≤k≤K}, and

B_W := E[((1/n) Σ_{i=1}^n (W_i − W̄)²)^{1/2}] = 1 − O(n^{−1/2}) and C_W = 1.

Main tool: Gaussian concentration theorem [Cirel'son, Ibragimov and Sudakov, 1976]

Remarks

- Valid for quite general weights (with B_W and C_W independent of K and easy to compute).
- Similar result under assumption (SB) (with larger constants).
- ‖·‖_p can be replaced by sup_k(·)_+ ⇒ one-sided multiple tests.
- Almost deterministic threshold: if p = ∞, q_α^{conc,2}(Y) ≈ min(t_α^Bonf, q_α^{conc,1}(Y)) still has a FWER ≤ α.


Practical computation

- Monte-Carlo approximation with W^1, …, W^B only (with an additional term of order B^{−1/2}).
- Alternatively, V-fold cross-validation weights ⇒ computation time ∝ V, accuracy C_W B_W^{−1} ≈ √(n/V).
- Estimation of σ: under (Gauss), if

σ̂_k := ((1/n) Σ_{i=1}^n (Y_k^i − Ȳ_k)²)^{1/2},

then for every δ ∈ (0,1),

P(‖σ‖_p (C_n − n^{−1/2} Φ̄^{−1}(δ/2)) ≤ ‖σ̂‖_p) ≥ 1 − δ, with C_n = 1 − O(n^{−1}).
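A sketch of the plug-in estimate σ̂ on Gaussian toy data (note the 1/n convention inside the square root, which matches numpy's default ddof = 0):

```python
import numpy as np

def sigma_hat(Y):
    """Coordinate-wise standard-deviation estimate
    sigma_hat_k = sqrt((1/n) * sum_i (Y_k^i - Ybar_k)^2), with Y a K x n matrix."""
    return Y.std(axis=1)  # numpy's default ddof=0 gives the 1/n convention

rng = np.random.default_rng(0)
Y = 2.0 * rng.standard_normal((4, 5000))  # true sigma_k = 2 for every coordinate
print(np.round(sigma_hat(Y), 2))  # close to 2 in each coordinate
```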


Simulations: p = +∞, n = 1000, K = 16384, σ = 1


Quantile method

Rademacher weights only: W_i i.i.d. ~ (1/2)δ_{−1} + (1/2)δ_{1}

q_α^quant(Y) = (1−α)-quantile of D(√n ‖Ȳ^W − W̄ Ȳ‖_p | Y)

Heuristics ⇒ should satisfy P(√n ‖Ȳ − µ‖_p > q_α^quant(Y)) ≤ α

Quantile theorem: need for a supplementary term

Theorem. Y symmetric, W Rademacher, α, δ, γ ∈ (0,1). If f is a non-negative threshold with level bounded by αγ:

P(√n ‖Ȳ − µ‖_p > f(Y)) ≤ αγ,

then

q_α^{quant+f}(Y) = q_{α(1−δ)(1−γ)}^{quant}(Y) + √(2 log(2/(δα))/n) · f(Y)

yields a level bounded by α:

P(√n ‖Ȳ − µ‖_p > q_α^{quant+f}(Y)) ≤ α
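Numerically, the supplementary term is indeed second order; a sketch with illustrative values (the resampled quantile and f(Y) are both set to 1 here):

```python
import numpy as np

def quant_plus_f(q_quant_sub, f_val, alpha, delta, n):
    """Corrected threshold of the quantile theorem:
    q^{quant+f} = q^{quant}_{alpha(1-delta)(1-gamma)} + sqrt(2*log(2/(delta*alpha))/n) * f(Y).
    q_quant_sub is the resampled quantile already computed at the shrunken
    level alpha*(1-delta)*(1-gamma); f_val stands for f(Y)."""
    return q_quant_sub + np.sqrt(2 * np.log(2 / (delta * alpha)) / n) * f_val

# For n = 1000 and alpha = delta = 0.05, the multiplier on f(Y) is only
# sqrt(2 * log(800) / 1000), i.e. about 0.12:
print(round(float(quant_plus_f(1.0, 1.0, 0.05, 0.05, 1000)), 3))
```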

Remarks

- Uses only the symmetry of Y around its mean.
- ‖·‖_p can be replaced by sup_k(·)_+ ⇒ one-sided multiple tests.
- The supplementary threshold f only appears in a second-order term.
- (Gauss) ⇒ three possible thresholds: take f among q_{αγ}^Bonf (if p = +∞), q_{αγ}^{conc,1} and q_{αγ}^{conc,2}.
- (SB) ⇒ f = q_{αγ}^{conc,SB}.
- In simulation experiments, f is almost unnecessary.
- q_{α(1−δ)(1−γ)}^{quant}(Y) can be replaced by a Monte-Carlo estimated quantile (i.e., simulate only W^1, …, W^B) ⇒ lose at most (B+1)^{−1} in the level, nothing if α(1−δ)(1−γ)(B+1) ∈ N.


Simulations: n = 1000, K = 16384, σ = 1, p = +∞

Simulations: without the additive term?

Simulations: role of p (n = 1000, p = 16)

Simulations: role of p (n = 1000, p = 10)

Simulations: role of p (n = 1000, p = 2)

Simulations: role of p (n = 1000, p = 1)

Quantile approach vs. Symmetrization

Define q_α^sym(Y, p) as the (1−α)-quantile of D(√n ‖Ȳ^W‖_p | Y).

Symmetrization argument: if Y is symmetric and W Rademacher, then

P(√n ‖Ȳ − µ‖_p > q_α^sym(Y − µ, p)) ≤ α,

since

(Ȳ − µ)^W = (1/n) Σ_{i=1}^n W_i (Y^i − µ) ~ (1/n) Σ_{i=1}^n (Y^i − µ) = Ȳ − µ.

q_α^sym(Y − µ, p) is unknown ⇒ replacing µ by Ȳ leads to

q_α^quant(Y, p) = q_α^sym(Y − Ȳ, p)


Multiple testing with the uncentered quantile threshold

If t ≥ q_α^sym((Y_k)_{k s.t. µ_k=0}, +∞), then

FWER(R) = P(∃k s.t. µ_k = 0 and √n |Ȳ_k| > t)
= P(√n sup_{k s.t. µ_k=0} |Ȳ_k| > t) ≤ α

by symmetry of Y. This holds in particular for

q_α^{quant. uncent.}(Y) := q_α^sym(Y, +∞) ≥ q_α^sym((Y_k)_{k s.t. µ_k=0}, +∞)

⇒ can be used for multiple testing, but more conservative, especially when the signal µ is strong.

Simulations: n = 1000, K = 16384, σ = 1, 0 ≤ µ_k ≤ 2.9

Step-down procedure [Holm 1979; Westfall and Young 1993; Romano and Wolf 2005]

Reorder the coordinates: |Ȳ_{σ(1)}| ≥ ⋯ ≥ |Ȳ_{σ(K)}|.

Define the thresholds t_k = t(Ȳ_{σ(k)}, …, Ȳ_{σ(K)}) for k = 1, …, K.

Define k̂ = max{k s.t. ∀k′ ≤ k, √n |Ȳ_{σ(k′)}| > t_{k′}(Y)}; reject H_{0,k} for all k ≤ k̂.

⇒ This procedure has a FWER controlled by α if each t_k has level α on the corresponding subset of coordinates (use that t_K = t((Ȳ_k)_{k∈K}) is a non-decreasing function of the set K).
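A generic sketch of the step-down loop; the toy threshold function below is a hypothetical stand-in for the resampled thresholds of the talk (any level-α threshold that is non-decreasing in the remaining set would do):

```python
import numpy as np

def step_down(stats, threshold_fn):
    """Generic step-down procedure.  stats[k] = sqrt(n) * |Ybar_k|;
    threshold_fn(idx) returns a threshold computed on the coordinates in
    idx, and must be non-decreasing in that set."""
    order = np.argsort(-stats)  # sigma: coordinates by decreasing statistic
    rejected = []
    for k in range(len(stats)):
        remaining = order[k:]  # sigma(k), ..., sigma(K): hypotheses still alive
        if stats[order[k]] > threshold_fn(remaining):
            rejected.append(int(order[k]))  # reject and keep stepping down
        else:
            break  # first non-rejection stops the procedure
    return rejected

# Toy run: a threshold that is non-decreasing in the size of the remaining set
stats = np.array([6.0, 0.5, 4.0, 0.2])
thr = lambda idx: 2.0 + 0.1 * len(idx)  # hypothetical threshold function
print(step_down(stats, thr))  # rejects coordinates 0 and 2, then stops
```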

Simulations: n = 1000, K = 16384, σ = 1, 0 ≤ µ_k ≤ 2.9

Simulations: power (0 ≤ µ_k ≤ 2.9)

Hybrid approach, computational cost

Idea: the (centered) quantile is better when the signal is strong; the uncentered quantile is better for weak signals.

First step: use q_α^{quant+Bonf}(Y) to reject a first set of hypotheses.

Second step: apply the step-down procedure associated with q_{α(1−γ)}^{quant. uncent.}(Y) to the remaining hypotheses.

Goal: reduce the computational cost / increase the power for a given number of iterations.


Hybrid approach: average number of iterations

[Figure: average number of iterations vs. signal range (dB), comparing the step-down uncentered quantile procedure with the hybrid approach]

With early stopping: power vs. number of iterations

[Figure: average (power / max. power) vs. signal range (dB), for the step-down quantile procedure (sdqu) and the hybrid approach, each with t = 1, 2, 3]

Conclusion

Two resampling-based procedures:
- concentration (almost deterministic threshold)
- quantile (related to symmetrization techniques)

⇒ Multiple testing + confidence regions
- FWER / level control (non-asymptotic; K may be ≫ n)
- very general correlation structures allowed
- Simulations: efficient in the presence of correlations
- step-down procedures are possible

Open problems:
- quantile threshold without f? with other weights?
- with non-symmetric Y?

