
Frequency approach for detecting nonstationarity in dependent data



HAL Id: hal-02126749

https://hal.archives-ouvertes.fr/hal-02126749v2

Preprint submitted on 4 Feb 2021


Frequency approach for detecting nonstationarity in dependent data

Mohamedou Ould Haye, Anne Philippe

To cite this version:

Mohamedou Ould Haye, Anne Philippe. Frequency approach for detecting nonstationarity in dependent data. 2019. hal-02126749v2


Frequency approach for detecting nonstationarity in dependent data

M. Ould Haye^1, A. Philippe^2*

^1 School of Mathematics and Statistics, Carleton University, 1125 Colonel By Dr., Ottawa, ON, Canada, K1S 5B6

^2 Laboratoire de Mathématiques Jean Leray et ANJA INRIA - Rennes, 2 rue de la Houssinière, Université de Nantes, 44322 Nantes, France

Abstract

Distinguishing long memory behaviour from nonstationarity can be very difficult, as in both cases the sample autocovariance function decays very slowly. Available stationarity tests either do not include long memory or fare poorly in terms of empirical size, especially near the boundary between long memory and nonstationarity. We propose a parameter-free decision rule based on evaluating periodograms at different epochs. We establish some asymptotic theorems in order to validate the method. The limiting distributions are easily tractable as sums of weighted independent χ² random variables. Moreover, numerical studies are provided to show that the proposed approach outperforms existing methods. We also apply our method to a well-known empirical data set, often cited as an example of confusion between long memory and nonstationarity.

Keywords: long memory, dependence, stationarity, frequency

1 Introduction

Let us consider a stationary process (X_t) with a spectral density of the semi-parametric form
$$ f(\lambda) = |\lambda|^{-2d} f^*(\lambda) \qquad (1) $$
with −1/2 < d < 1/2 and f* an even, positive, continuous function on [−π, π]. Hereafter, we will refer to processes satisfying condition (1) as satisfying the null hypothesis of stationarity H₀. It is worth noticing that spectral densities of the form (1) encompass a large variety of

stationary processes, such as the very popular fractional autoregressive integrated moving average processes FARIMA(p, d, q):
$$ \Phi(B)(1-B)^d X_t = \Theta(B)\varepsilon_t, $$
where (ε_t) is a white noise with variance σ², B is the lag operator BX_t = X_{t−1}, and where Φ(B) = 1 − φ₁B − ⋯ − φ_p B^p and Θ(B) = 1 + θ₁B + ⋯ + θ_q B^q are the autoregressive and moving average polynomials. The FARIMA spectral density is then given by
$$ f(\lambda) = \frac{\sigma^2}{2\pi}\left|\frac{\Theta(e^{i\lambda})}{\Phi(e^{i\lambda})}\right|^2 \left|1-e^{i\lambda}\right|^{-2d} = |\lambda|^{-2d}\left[\frac{\sigma^2}{2\pi}\left(\frac{\sin(\lambda/2)}{\lambda/2}\right)^{-2d}\left|\frac{\Theta(e^{i\lambda})}{\Phi(e^{i\lambda})}\right|^2\right]. $$

We say that the process (X_t) has short memory if d = 0 (i.e. the spectral density f in (1) is continuous and positive over [−π, π]), and long memory if 0 < d < 1/2 (i.e. the spectral density is unbounded at zero). The case −1/2 < d < 0 corresponds to the so-called antipersistent case, where f(0) = 0. In time-domain terms, short memory means that the covariance function γ(h) = Cov(X_i, X_{i+h}) is summable (it goes to zero fast as the lag h increases), and long memory means that the covariance function is not summable (it goes to zero slowly as the lag increases). The literature is full of real-world examples of both types of processes. Over the past decade, several monographs have been written on the statistical and probabilistic aspects of long memory processes; the reader is referred to the recent book by Beran et al. (2013) as well as the book of Giraitis et al. (2012) and references therein. Under mild conditions, the switch between the time domain and the frequency domain is easily made via the two reciprocal representations
$$ f(\lambda) = \frac{1}{2\pi}\sum_{h=-\infty}^{\infty} e^{-ih\lambda}\gamma(h) = \frac{1}{2\pi}\left[\gamma(0) + 2\sum_{h=1}^{\infty}\cos(\lambda h)\gamma(h)\right], \qquad |\lambda| \le \pi, $$
and
$$ \gamma(h) = \int_{-\pi}^{\pi} e^{ih\lambda} f(\lambda)\,d\lambda = \int_{-\pi}^{\pi} \cos(\lambda h)\, f(\lambda)\,d\lambda, \qquad h = 0, \pm 1, \pm 2, \ldots. $$
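The reciprocal relations above can be checked numerically for a process whose spectral density and autocovariance are both known in closed form. The sketch below (a choice of this illustration, not part of the paper's method) uses an AR(1) process, a short memory example with d = 0.

```python
import numpy as np

# Numerically verify γ(h) = ∫_{-π}^{π} cos(λh) f(λ) dλ for an AR(1)
# process X_t = a X_{t-1} + ε_t (short memory, d = 0), where both
# sides are known in closed form.
a, sigma2 = 0.6, 1.0

def f(lam):
    # AR(1) spectral density: f(λ) = σ² / (2π |1 - a e^{iλ}|²)
    return sigma2 / (2 * np.pi * np.abs(1 - a * np.exp(1j * lam)) ** 2)

def gamma_exact(h):
    # closed-form AR(1) autocovariance: γ(h) = σ² a^{|h|} / (1 - a²)
    return sigma2 * a ** abs(h) / (1 - a ** 2)

lam = np.linspace(-np.pi, np.pi, 200001)
dlam = lam[1] - lam[0]
for h in range(4):
    gamma_num = np.sum(np.cos(lam * h) * f(lam)) * dlam  # Riemann sum
    print(h, gamma_num, gamma_exact(h))
```

With this fine grid the Riemann sum agrees with the closed-form autocovariance to several decimal places for each lag.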

From an applied point of view, long memory behaviour is often confused with a lack of stationarity, as time series exhibiting such behaviour tend to have a sample autocorrelation function with large spikes at several lags, which many practitioners know as the signature of nonstationarity. There are many stationarity tests, such as the Phillips and Perron (1988) tests and the frequency test of Bailey and Giraitis (2016), which can be seen as extensions of the augmented Dickey-Fuller tests since their stationarity assumptions include a larger class than the i.i.d. case: the former allows for mixing and the latter includes short memory linear processes. However, the first (and, to the best of our knowledge, so far the only) statistical test for stationarity under a large umbrella of dependence, i.e. short and long memory dependence, was introduced by Giraitis et al. (2006). We refer to this test as the V/S test or statistic. In the present paper we introduce a new frequency approach, which turns out to be much easier to use, as the limiting distribution under the stationarity assumption is simply a weighted sum of independent χ² random variables rather than a functional of stochastic integrals as in the limit of V/S. Moreover, unlike the latter, the proposed test statistic does not depend on the memory parameter d being tested. This frequency approach seems very promising in the sense that it often yields a much simpler limit than the time-domain approaches that have been used in time series analysis; this fact was pointed out in our previous paper (see Gromykov et al. (2018)) when testing only for long memory versus short memory. From a sample X₁, . . . , X_n of the process (X_t), we are interested in building a

testing procedure to discriminate between dependence and nonstationarity. The centerpiece of this method is to construct a classification rule as the limit (when ε → 0) of rejection regions R_ε testing whether the data come from a stationary process with a memory parameter d < 1/2 − ε.

From these tests, with asymptotic significance level α for H₀^ε : d < 1/2 − ε, we reach a classification rule that asymptotically (in n) wrongly rejects H₀ with probability zero.

The proposed statistic is built from periodograms taken at different epochs. More precisely, the procedure is as follows: we split our initial sample X₁, . . . , X_n into m blocks (or epochs), each of size ℓ, and we construct the periodogram I_{n,i} on the i-th block X_{(i−1)ℓ+1}, . . . , X_{iℓ}. This averaging of periodograms is known as Bartlett's method for estimating the spectral density. Let
$$ Q_{n,m}(s,d) = m^{-2d}\sum_{j=1}^{s}\frac{I_n(\lambda_j)}{\frac{1}{m}\sum_{i=1}^{m} I_{n,i}(\lambda'_j)}, \qquad (2) $$
with
$$ I_n(\lambda) = \frac{1}{2\pi n}\left|\sum_{t=1}^{n} X_t e^{it\lambda}\right|^2, \qquad I_{n,h}(\lambda') = \frac{1}{2\pi \ell}\left|\sum_{t=(h-1)\ell+1}^{h\ell} X_t e^{it\lambda'}\right|^2, \qquad (3) $$

and s is the number of Fourier frequencies. Throughout the paper s is fixed, and the statistic Q_{n,m}(s,d) depends on the Fourier frequencies λ₁, . . . , λ_s, with λ_j = 2πj/n, and λ'₁, . . . , λ'_s, with λ'_j = 2πj/ℓ. We want to stress that our test statistic is Q_{n,m}(s, 1/2), which is d-free and therefore does not require estimating d; depending on the very parameter being tested is not a desirable property for a test.

We assume that m = m(n) → ∞ and ℓ = ℓ(n) → ∞ as n → ∞, with m = n/ℓ. It is important to emphasize that m and ℓ increase with n and are not constant; we simply write m and ℓ rather than m(n) and ℓ(n) for the sake of simplicity.
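To make the construction concrete, here is a minimal sketch of how Q_{n,m}(s, 1/2) could be computed from a sample: the full-sample periodogram at the first s Fourier frequencies, divided by the Bartlett average of the per-epoch periodograms, scaled by m^{−2d} with d = 1/2. The bandwidth choice m ≈ √n anticipates Section 4; all implementation details (truncation to mℓ observations, loop-based periodograms) are choices of this illustration, not prescriptions of the paper.

```python
import numpy as np

def periodogram(x, j):
    # I(λ_j) = (1 / (2π n)) |Σ_{t=1}^n x_t e^{i t λ_j}|², with λ_j = 2πj/n
    n = len(x)
    lam = 2 * np.pi * j / n
    t = np.arange(1, n + 1)
    return np.abs(np.sum(x * np.exp(1j * t * lam))) ** 2 / (2 * np.pi * n)

def Q_stat(x, s=5, d=0.5):
    # Q_{n,m}(s, d) of (2): full-sample periodogram at λ_1, ..., λ_s over
    # the Bartlett average of per-epoch periodograms at λ'_1, ..., λ'_s.
    n = len(x)
    m = int(np.sqrt(n))      # bandwidth m ≈ n^{1/2} (Section 4's choice)
    ell = n // m             # epoch length ℓ, so that m·ℓ ≤ n
    x = np.asarray(x, dtype=float)[: m * ell]
    blocks = x.reshape(m, ell)
    num = np.array([periodogram(x, j) for j in range(1, s + 1)])
    den = np.array([np.mean([periodogram(b, j) for b in blocks])
                    for j in range(1, s + 1)])
    return m ** (-2 * d) * np.sum(num / den)

rng = np.random.default_rng(0)
print(Q_stat(rng.standard_normal(1000)))             # stationary: typically small
print(Q_stat(np.cumsum(rng.standard_normal(1000))))  # unit root: typically much larger
```

Under stationarity the ratio of each frequency's numerator to its denominator stays bounded, so the m^{−1} scaling drives the statistic down; under a trend the low-frequency full-sample periodogram blows up relative to the epoch average.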

The rest of the paper is organized as follows. Section 2 contains the main limiting theorems related to the statistic Q_{n,m}(s,d), including its behaviour under a wide range of nonstationary alternatives. It also sheds light on what happens when we only consider the high-frequency spectrum, where we cannot distinguish between short and long memory (see Proposition 1), which turns out to severely impact the power performance of the test. In Section 3 we present a decision rule to detect nonstationarity using the asymptotic properties obtained in Section 2; in particular, we establish the theoretical performance of the classification rule. Section 4 contains a Monte Carlo study illustrating the performance of our proposed method, as well as an application to the well-known annual varve time series, often cited as an illustration of the confusion between long memory and nonstationarity. The paper ends with an appendix containing some technical parts of the proofs.

2 Main results

We start by stating a clarifying proposition on a modified version Q'_{n,m}(s,d) of our main statistic Q_{n,m}(s,d), which shows that when we stay far away from frequency zero, the effects of short and long memory on our periodogram-based statistics are asymptotically the same. We then move to the main theorem, which distinguishes between the two effects at low frequencies. Before stating our results, we set their context. Let (X_t) be a

linear process of the form
$$ X_t = \sum_{j=0}^{\infty} a_j \varepsilon_{t-j}, \qquad (4) $$

where (ε_j) are i.i.d. random variables with zero mean and finite fourth moment. Denote σ² = E(ε₁²) and η = E(ε₁⁴). In addition, and throughout the rest of the paper, we will assume that if the process (X_t) has spectral density f_d of the form (1) with −1/2 < d < 0, then
$$ a_j = c\, j^{-1+2d}\big(1 + O(j^{-1})\big), \qquad j \ge 1,\; c > 0. \qquad (5) $$

Consider the statistic
$$ Q'_{n,m}(s,d) = m^{-2d}\sum_{k=1}^{s}\left(\frac{j_k}{j'_k}\right)^{2d}\frac{I_n(\lambda_{j_k})}{\frac{1}{m}\sum_{i=1}^{m} I_{n,i}(\lambda'_{j'_k})}, \qquad (6) $$
where, for k = 1, . . . , s, λ_{j_k} = 2πj_k/n and λ'_{j'_k} = 2πj'_k/ℓ, and where j_k > K_n → ∞ and j'_k > K_ℓ → ∞.

Proposition 1. Let (X_t) be as defined in (4), with a spectral density of the form (1) with −1/2 < d < 1/2. If m → ∞, m = o(n) and s ≥ 1 is fixed, then
$$ Q'_{n,m}(s,d) \xrightarrow{D} Q(s,0), $$
where →^D denotes convergence in distribution and Q(s,0) has a Gamma distribution with parameters (s, 1) (i.e. with mean and variance equal to s).

Proof. The proof is relegated to Section 5.1 of the appendix.

In what follows, we will focus on the statistic Q_{n,m}(s,d) defined in (2). The next theorem gives its limiting distribution when the process is stationary. As for its behaviour under the alternative "the process has a stochastic or deterministic trend", it is given in Theorem 2.

Theorem 1. Let (X_t) be a linear process of the form (4) admitting a spectral density of the form (1) with −1/2 < d < 1/2, and let Q_{n,m}(s,d) be as defined in (2), where s ≥ 1 is fixed, with ℓ = O(n^{1−δ}) and m = O(n^{1−δ'}) for some δ, δ' > 0. Then
$$ Q_{n,m}(s,d) \xrightarrow{D} Q(s,d) = \sum_{i=1}^{2s} \zeta_i(d)\, Q_i, \qquad (7) $$
where Q₁, . . . , Q_{2s} are i.i.d. χ²₁ random variables and ζ₁(d), . . . , ζ_{2s}(d) are the eigenvalues of the covariance matrix Σ(d) defined in Section 5.3.

Remark 1. The limiting distribution (7) is an elaborated form of the limit (9) (see the proof below). Indeed, (7) can be seen as a convolution of independent Gamma random variables, so its quantiles can easily be computed with high resolution (see for instance Hu et al. (2020)). Evaluating the limit in (9) directly would require Monte Carlo simulations, which may yield unstable quantiles.
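As a rough sketch of how quantiles of a weighted sum of independent χ²₁ variables as in (7) can be approximated, the snippet below uses plain Monte Carlo. The weights are hypothetical placeholders; in the paper they would be the eigenvalues ζ_i(d) of Σ(d) from Section 5.3, and exact convolution methods (e.g. those referenced in Hu et al. (2020)) would give higher resolution than simulation.

```python
import numpy as np

# Monte Carlo approximation of the (1 - α) quantile of Σ_{i=1}^{2s} ζ_i Q_i,
# where Q_i are i.i.d. χ²₁.  The weights ζ below are HYPOTHETICAL
# placeholders, not the eigenvalues ζ_i(d) of Σ(d) from the paper.
rng = np.random.default_rng(1)
s, alpha = 5, 0.05
zeta = np.linspace(1.0, 0.1, 2 * s)                 # placeholder weights
draws = rng.chisquare(df=1, size=(200_000, 2 * s)) @ zeta
q_alpha = np.quantile(draws, 1 - alpha)
print(q_alpha)
```

As a sanity check, taking all weights equal to 1 reduces the limit to a χ² distribution with 2s degrees of freedom, whose known quantiles the simulation recovers.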

Proof. The case d = 0 is already established in Gromykov et al. (2018). Assume 0 < |d| < 1/2. From (2) we get
$$ Q_{n,m}(s,d) = m^{-2d}\sum_{j=1}^{s}\left(\frac{f(\lambda_j)}{f(\lambda'_j)}\right)\frac{I_n(\lambda_j)/f(\lambda_j)}{\frac{1}{m}\sum_{i=1}^{m} I_{n,i}(\lambda'_j)/f(\lambda'_j)} \sim \sum_{j=1}^{s}\frac{I_n(\lambda_j)/f(\lambda_j)}{\frac{1}{m}\sum_{i=1}^{m} I_{n,i}(\lambda'_j)/f(\lambda'_j)}, \qquad \text{as } n \to \infty. $$

We know from Hurvich and Beltrão (1993) and Terrin and Hurvich (1994) that
$$ \left(\frac{I_n(\lambda_1)}{f(\lambda_1)}, \cdots, \frac{I_n(\lambda_s)}{f(\lambda_s)}\right) \xrightarrow{D} \big[L_1(d)\big(Z_1^2(1)+Z_2^2(1)\big), \cdots, L_s(d)\big(Z_1^2(s)+Z_2^2(s)\big)\big]. $$

We note in passing that two errors have slipped into the proof of Theorem 5 of Hurvich and Beltrão (1993), on page 471 of the article, leading to a coefficient of (−1)^{j+k+1} in their formulae (7) and (8), while there should be no such coefficient.

By Slutsky's lemma, it is then enough to show that, for each fixed j, as n → ∞ (and therefore as ℓ, m → ∞),
$$ \frac{1}{m}\sum_{i=1}^{m}\frac{I_{n,i}(\lambda'_j)}{f(\lambda'_j)} \xrightarrow{P} L_j(d), \qquad (8) $$
to obtain
$$ Q_{n,m}(s,d) \xrightarrow{D} Z_1^2(1) + Z_2^2(1) + \cdots + Z_1^2(s) + Z_2^2(s). \qquad (9) $$


The limiting distribution above is that of the squared norm of a (2s × 1) zero-mean Gaussian vector Z with covariance matrix Σ(d). Writing
$$ \Sigma(d) = \Omega'(d)\Lambda(d)\Omega(d), $$
where Λ(d) is the diagonal matrix of Σ(d)'s eigenvalues (denoted ζ_i(d), i = 1, . . . , 2s) and Ω(d) is an orthogonal matrix, we get
$$ Z_1^2(1) + Z_2^2(1) + \cdots + Z_1^2(s) + Z_2^2(s) = \|Z\|^2 = \|\Omega(d)Z\|^2 = \|\Lambda(d)^{1/2}W\|^2 = \sum_{i=1}^{2s}\zeta_i(d)\,W_i^2, $$
where W is a (2s × 1) standard Gaussian vector. The limit can therefore be written as a sum of weighted i.i.d. χ²₁ random variables, as stated.

Now we prove (8). Using the fact that I_{n,i}(λ'_j)/f(λ'_j), i = 1, . . . , m, are identically distributed, and then Hurvich and Beltrão (1993), we have for fixed j,
$$ E\left(\frac{1}{m}\sum_{i=1}^{m}\frac{I_{n,i}(\lambda'_j)}{f(\lambda'_j)}\right) = E\left(\frac{I_{n,1}(\lambda'_j)}{f(\lambda'_j)}\right) \to L_j(d). $$
Since f(λ'_j) ∼ cℓ^{2d}, it is then enough to show that
$$ \ell^{-4d}\,\mathrm{Var}\left(\frac{1}{m}\sum_{i=1}^{m} I_{n,i}(\lambda'_j)\right) \to 0. $$

Let us first consider the case 0 < d < 1/2. Computations similar to those in Gromykov et al. (2018) show that the left-hand side of the expression above can be asymptotically bounded by
$$ \frac{2\ell^{-4d}}{m}\sum_{u=1}^{m}\frac{1}{\ell}\sum_{p=1}^{\ell}\sum_{q=1}^{\ell}\sum_{h=1+q-\ell}^{\ell-p+1} T_{\ell,u}(h,p,q), \qquad (10) $$
where T_{ℓ,u}(h,p,q) = T^{(1)}_{ℓ,u}(h,p,q) + T^{(2)}_{ℓ,u}(h,p,q) + T^{(3)}_{ℓ,u}(h,p,q), with
$$ T^{(1)}_{\ell,u}(h,p,q) = \gamma(\ell u - h)\,\gamma(\ell u - h + q - p), \qquad T^{(2)}_{\ell,u}(h,p,q) = \gamma(\ell u - h + q)\,\gamma(\ell u - h - p), $$


and
$$ T^{(3)}_{\ell,u}(h,p,q) = (\eta - 3)\,\sigma^4 \sum_{i=0}^{\infty} a_i\, a_{i+p}\, a_{i+h+\ell u}\, a_{i+h+\ell u+q}. $$

Let us handle T^{(1)}_{ℓ,u}(h,p,q) first. With k = q − p, assuming without loss of generality that γ(h) ≥ 0 and is asymptotically of the form O(h^{2d−1}), and letting u start from 3, we have for some constants C₁, C₂, C₃,
$$ \frac{1}{m\ell^{4d+1}}\sum_{u=3}^{m}\sum_{p=1}^{\ell}\sum_{q=1}^{\ell}\sum_{h=1-\ell+q}^{\ell-p}\gamma(\ell u-h)\,\gamma(\ell u-h-p+q) = \frac{1}{m\ell^{4d+1}}\sum_{u=3}^{m}\sum_{k=1-\ell}^{\ell}\sum_{p=1}^{\ell-k}\sum_{h=1-\ell+k+p}^{\ell-p}\gamma(\ell u-h)\,\gamma(\ell u-h+k) $$
$$ \le \frac{1}{m\ell^{4d}}\sum_{u=3}^{m}\sum_{k=-\ell}^{\ell}\sum_{h=-\ell}^{\ell}\gamma(\ell u-h)\,\gamma(\ell u-h+k) = \frac{1}{m\ell^{4d}}\sum_{u=3}^{m}\sum_{k=-\ell}^{\ell}\sum_{s=\ell(u-1)}^{\ell u}\gamma(s)\,\gamma(s+k) $$
$$ \le C_1\,\frac{1}{m}\int_{2}^{m}\left(\int_{-1}^{1}(y+x)^{2d-1}\,dy\right)x^{2d-1}\,dx \le \frac{C_2}{m}\int_{2}^{m}x^{2d-1}\,dx \le C_3\, m^{2d-1} \to 0. $$

T^{(2)}_{ℓ,u}(h,p,q) is treated the same way. For the last term T^{(3)}_{ℓ,u}(h,p,q), we have, uniformly in u,
$$ \sum_{h=-\ell}^{\ell} T^{(3)}_{\ell,u}(h,p,q) = (\eta-3)\,\sigma^4\sum_{i=0}^{\infty} a_i\, a_{i+p}\sum_{s=i+(u-1)\ell}^{i+(u+1)\ell} a_s\, a_{s+q} \le (\eta-3)\,\sigma^4\sum_{i=0}^{\infty} a_i\, a_{i+p}\sum_{s=\ell}^{\infty} a_s\, a_{s+q} = \gamma(p)\,o(1), $$
as ℓ → ∞, and therefore
$$ \ell^{-4d}\,\frac{1}{\ell}\sum_{p=1}^{\ell}\sum_{q=1}^{\ell}\sum_{h=-\ell}^{\ell} T^{(3)}_{\ell,u}(h,p,q) = \ell^{-4d}\left(\sum_{p=1}^{\ell}\gamma(p)\right)\frac{o(\ell)}{\ell} = \ell^{-2d}\,\frac{o(\ell)}{\ell} \to 0, \qquad \text{as } \ell \to \infty. $$

Finally, let us consider the case −1/2 < d < 0. We then have, using (5), γ(k) ∼ C k^{−1+2d} as k → ∞, and similar calculations easily show that the quantity in (10) converges to zero as n → ∞. This completes the proof of Theorem 1.


Theorem 2. Let (Y_t) be a linear process with a spectral density of the form (1), and consider the following two classes of processes (X_t):

• Stochastic trend (unit root): X_t = X_{t−1} + Y_t.

• Deterministic trend (structural breaks): X_t = g_n(t) + Y_t, where g_n(t) = n^α g(t/n), with α > 0, and where g is either differentiable with bounded derivative satisfying
$$ \int_0^1 g(x)\, e^{i 2\pi j_k x}\, dx \neq 0, \quad \text{for some } k \in \{1, 2, \ldots, s\}, \qquad (11) $$
or g is a step function.

Then for all 0 ≤ γ ≤ 1/2 we have
$$ Q_{n,m}(s,\gamma) \xrightarrow{P} \infty, $$
where →^P denotes convergence in probability. Moreover, if j_k = o(m^{1/2+d}) and j'_k = o(ℓ), then
$$ Q'_{n,m}(s,\gamma) \xrightarrow{P} \infty. $$

Proof. The proof is relegated to Section 5.2 of the appendix.

Remark 2. Although the statistic Q'_{n,m} does not distinguish between short and long memory (see Proposition 1), it does detect nonstationarity, but with lower power, because lower frequencies are not used in building this statistic. Numerical studies (not reported here) confirm this loss of power compared to Q_{n,m}.

Remark 3. We note in passing that condition (11) is satisfied by most nonconstant functions, for example nonnegative step functions (allowing a change in the mean), polynomials, etc.
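As an illustration of Remark 3, condition (11) can be checked numerically for the change-in-mean step function g(x) = 1_{[1/2,1]}(x); the discretization below is a choice of this sketch. The Fourier coefficient is nonzero for every odd k, which is all that (11) requires (some k in {1, . . . , s}).

```python
import numpy as np

# Numerical check of condition (11) for the step function
# g(x) = 1_{[1/2, 1]}(x) (a change in the mean): compute
# c_k = ∫₀¹ g(x) e^{i 2π k x} dx and verify c_k ≠ 0 for some k.
x = np.linspace(0.0, 1.0, 400001)
g = (x >= 0.5).astype(float)
dx = x[1] - x[0]
for k in range(1, 4):
    c_k = np.sum(g * np.exp(1j * 2 * np.pi * k * x)) * dx  # Riemann sum
    print(k, abs(c_k))  # odd k: |c_k| = 1/(kπ) > 0; even k: ≈ 0
```

For this g the coefficients vanish at even k, so it is the existence of *some* nonzero coefficient among k = 1, . . . , s that matters, exactly as (11) is stated.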

For testing stationarity, a decision rule cannot be wisely derived directly from Q_{n,m}(s,d), as it requires estimating d. The convergence results established in Theorems 1 and 2 remain valid for Q_{n,m}(s, d̂) if d̂ is a consistent estimator of d such that d̂ ∈ (−1/2, 1/2) and d̂ − d = o_P(1/ln n) for stationary linear processes with spectral density of the form (1). This is a classical approach to dealing with an unknown parameter in a test statistic; see e.g. the construction of the V/S test in Giraitis et al. (2006). However, the poor quality of existing estimators of d near 1/2 has enormous consequences on the empirical size of estimated-d based tests, such as V/S or Q_{n,m}(s, d̂).


3 A classification rule

We want to test the stationarity of the process. More precisely, we define the null hypothesis H₀: "the process (X_t) is stationary", as defined in Theorem 1, versus H₁: "the process (X_t) has a stochastic or deterministic trend", as defined in Theorem 2.

We propose a decision rule that does not require the estimation of the long memory parameter. Our approach uses the fact that the limiting distribution Q(s, 1/2) is well defined (see Theorem 1). Note that this strategy cannot be applied to V/S, since its limiting distribution is degenerate at d = 1/2.

Let ε ∈ (0, 1/2) be fixed. We consider the restricted null hypothesis H₀^ε: d ≤ 1/2 − ε. For testing H₀^ε versus nonstationarity, the critical region R_ε = {Q_{n,m}(s, 1/2 − ε) > q_α(s, 1/2 − ε)} provides an asymptotically unbiased test with significance level α. This test is also consistent under the alternative H₁ previously defined, but it has a "blind spot" d ∈ (1/2 − ε, 1/2). These properties are immediate consequences of Theorems 1 and 2.

These properties are immediate consequences of Theorems 1 and 2.

When ε tends to zero, the null hypothesis H₀^ε converges to H₀ and the critical region R_ε to R = {Q_{n,m}(s, 1/2) > q_α(s, 1/2)}, where q_α(s, 1/2) is the upper-α quantile of Q(s, 1/2), i.e.
$$ P\big(Q(s, 1/2) > q_\alpha(s, 1/2)\big) = \alpha. $$
The limit of R_ε comes from the fact that the statistic Q_{n,m}(s,d) and the quantile q_α(s,d) are continuous at d = 1/2 (see the proof of Proposition 2 below). In the light of this result, we adopt the following rule:

Classification rule: we classify the process (X_t) as nonstationary if Q_{n,m}(s, 1/2) > q_α(s, 1/2); otherwise we classify it as stationary.

Asymptotic properties of the classification rule: As n → ∞, we have the following properties:

• under the null hypothesis H₀,
$$ P\big(Q_{n,m}(s, 1/2) > q_\alpha(s, 1/2)\big) \to 0; $$

• under the alternative H₁,
$$ P\big(Q_{n,m}(s, 1/2) > q_\alpha(s, 1/2)\big) \to 1. $$

We are fully aware that replacing q_α(s, 1/2) above by any positive constant would also work; this particular choice of constant is explained in Proposition 2.

Link with memory parameter testing. We compare the decisions of our rule with those of the testing procedures based on R_ε.

Proposition 2. For a sample (X₁, . . . , X_n),

1. if Q_{n,m}(s, 1/2) > q_α(s, 1/2), i.e. if the process is classified as nonstationary, then for all small positive ε, the R_ε-based test rejects H₀^ε at asymptotic significance level α;

2. if Q_{n,m}(s, 1/2) < q_α(s, 1/2), i.e. if the process is classified as stationary, then for all small positive ε, the R_ε-based test fails to reject H₀^ε at asymptotic significance level α.

That is, deciding that the process at hand is stationary with our classification method is equivalent to accepting it as stationary with a certain (unknown) memory parameter d ∈ (−1/2, 1/2).

Proof. The proof relies on the properties of the statistic Q_{n,m}(s,d), which is a continuous, decreasing function of d (when all the other parameters n, m are fixed). Moreover, the quantile q_α(s,d) is continuous at d = 1/2 (when s and α are fixed). According to (7), d only appears in the weights, so it is enough to prove the continuity of the functions defined in (29)–(32); the latter can easily be checked by the dominated convergence theorem.

1. Assume Q_{n,m}(s, 1/2) > q_α(s, 1/2). Then Q_{n,m}(s, 1/2 − ε) > q_α(s, 1/2 − ε) for all small positive ε, thanks to the monotonicity of Q_{n,m}(s,d) and the continuity of q_α(s,d) at d = 1/2. Indeed, there exists δ > 0 such that
$$ Q_{n,m}(s, 1/2) > \delta + q_\alpha(s, 1/2), $$
and there exists ε_δ > 0 such that for all ε < ε_δ,
$$ \big|q_\alpha(s, 1/2) - q_\alpha(s, 1/2 - \varepsilon)\big| < \delta. $$
By the monotonicity of Q_{n,m}(s,d), we get
$$ Q_{n,m}(s, 1/2 - \varepsilon) \ge Q_{n,m}(s, 1/2) > q_\alpha(s, 1/2 - \varepsilon). $$
We therefore reject H₀^ε at significance level α for all such ε, and hence reject H₀ : d < 1/2, in accordance with our classification rule.

2. Assume Q_{n,m}(s, 1/2) < q_α(s, 1/2). Then there exists λ such that 0 < λ < q_α(s, 1/2) − Q_{n,m}(s, 1/2). By the continuity of Q_{n,m}(s,d) and of q_α(s,d) at d = 1/2, there exists ε_λ such that for all ε < ε_λ,
$$ Q_{n,m}(s, 1/2 - \varepsilon) - Q_{n,m}(s, 1/2) \le \lambda/2 \quad \text{and} \quad -\lambda/2 \le q_\alpha(s, 1/2) - q_\alpha(s, 1/2 - \varepsilon) \le \lambda/2, $$
which implies
$$ -\lambda/2 \le Q_{n,m}(s, 1/2 - \varepsilon) - q_\alpha(s, 1/2 - \varepsilon) + q_\alpha(s, 1/2) - Q_{n,m}(s, 1/2) \le \lambda, $$
and so
$$ Q_{n,m}(s, 1/2 - \varepsilon) - q_\alpha(s, 1/2 - \varepsilon) \le \lambda - \big(q_\alpha(s, 1/2) - Q_{n,m}(s, 1/2)\big) < 0 $$
for all ε < ε_λ. This means that we fail to reject H₀^ε at asymptotic significance level α for ε < ε_λ, and therefore we accept H₀. This is again in accordance with our classification rule.

4 Numerical Studies

4.1 Simulated models

We investigate two kinds of null hypotheses:

• Long memory processes (d ≠ 0). We simulate FARIMA(0,d,0) processes X_t = (1 − B)^{−d} ε_t, where d ∈ (−.5, .5) and (ε_t) are standard Gaussian innovations.

• Short memory processes (d = 0). We simulate AR(1) processes X_t = aX_{t−1} + ε_t, where a ∈ (−1, 1) and (ε_t) are standard Gaussian innovations.

Two alternatives are simulated as follows: starting from a FARIMA(0,d,0) process (Y_t) with d ∈ (−1/2, 1/2), we take

• a unit-root process X_t = Σ_{i=1}^{t} Y_i. Note that, as an integrated process, (X_t) can be written as a FARIMA(0, d + 1, 0) with d ∈ (−1/2, 1/2), which is not stationary (see Taqqu (2003) for a precise definition);

• structural breaks (a change point in the mean at the middle n/2): X_t = μ_t + Y_t, where
$$ \mu_t = \begin{cases} 0 & \text{if } t < n/2, \\ \Delta & \text{if } t \ge n/2. \end{cases} \qquad (12) $$
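The simulated designs above can be sketched as follows; the truncated moving-average approximation of (1 − B)^{−d} and the truncation length are choices of this illustration, not the paper's exact simulation code.

```python
import numpy as np

def farima0d0(n, d, rng, trunc=2000):
    # Truncated MA(∞) approximation of X_t = (1 - B)^{-d} ε_t, using the
    # coefficient recursion ψ_0 = 1, ψ_j = ψ_{j-1} (j - 1 + d) / j.
    psi = np.ones(trunc)
    for j in range(1, trunc):
        psi[j] = psi[j - 1] * (j - 1 + d) / j
    eps = rng.standard_normal(n + trunc)
    # X_t = Σ_{j < trunc} ψ_j ε_{t-j}
    return np.array([psi @ eps[t:t + trunc][::-1] for t in range(n)])

rng = np.random.default_rng(3)
n, d, delta = 500, 0.3, 1.5
y = farima0d0(n, d, rng)                       # null: stationary long memory
x_unit_root = np.cumsum(farima0d0(n, d, rng))  # alternative: stochastic trend
mu = np.where(np.arange(1, n + 1) < n / 2, 0.0, delta)
x_break = mu + farima0d0(n, d, rng)            # alternative: change point (12)
```

Exact simulation methods (e.g. circulant embedding of the FARIMA autocovariance) would avoid the truncation bias, but the sketch above suffices to reproduce the qualitative behaviour of the three designs.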


4.2 Empirical size and power evaluation

Fundamentally, the statistic Q_{n,m}(s, 1/2) depends on the choice of the bandwidth parameter m and the number of Fourier frequencies s. Our strategy is to calibrate m and s so as to ensure an empirical size that remains smaller than the value α (= 5%) set in advance, in particular when d is very close to 1/2. In doing so, and contrary to the V/S test, which is known to suffer from a high empirical size, we drastically reduce the likelihood of rejecting the stationarity of processes with strong long memory.

Figure 1 shows the empirical size, as a function of d, of the test Q_{n,m}(s, 1/2) at different values of n. Each curve corresponds to a selected value of m and s. Among these values we see that the choice m = n^{1/2} is the optimal one, uniformly in d. Indeed, even for d close to 1/2 and moderate sample sizes, the empirical probability of wrongly rejecting H₀ does not exceed .05. The choice of s does not seem to be that crucial for moderate sample sizes. In Figure 1, we also represent the empirical size of the V/S test (see the discussion at the end of Section 2). It confirms that V/S does suffer from very large empirical sizes in the neighbourhood of d = .5; in this context a comparison between the V/S test and ours in terms of power is not relevant. Our procedure performs well in recognizing the stationarity of strong long memory processes (see the results in Table 1 for d close to .5).

Figure 2 shows that, with s = 1, our statistic's performance is comparable to that of V/S for stationary AR(1) processes near the unit root (a close to 1). We notice that near the unit root the choice of the number of frequencies s is important, in particular for moderate sample sizes: we should keep s very low. Of course, the Dickey-Fuller test is better than both V/S and our statistic for testing stationary AR processes against a unit root. However, our null hypothesis H₀ is much wider, as it contains long memory, which cannot be seen as stationary by the Dickey-Fuller test. Note that away from the unit root, stationarity is accepted with probability one by all the above tests.

Based on this performance in terms of empirical size, we keep the choice m = √n to illustrate the behaviour of the power function under both alternatives: stochastic and deterministic trends (see Figures 3, 4 and 5).

Figure 3 confirms the consistency of our decision rule for stochastic trends (FARIMA(0, d, 0) with d > 1/2). Namely, when it comes to choosing s (the number of lower frequencies to be used), there does not seem to be a need for a trade-off between a good empirical size and good power; the choice s = 5 is satisfactory for both. We were able to avoid this standard trade-off in part because the probability of wrongly rejecting H₀ tends to zero rather than to α (as n → ∞). Figure 4 shows the convergence of the empirical distribution function of the P-values to a Dirac mass at 0 and at 1 as d tends to −1/2 and to 3/2, respectively. The convergence rate, as a function of d, appears symmetric around d = 1/2.

From Figure 5, we note that the non-increasing behaviour (as a function of d) of the power function in the structural-breaks case is consistent with previous findings in the time-domain



Figure 1: Empirical size evaluated on FARIMA(0,d,0) processes as a function of the long memory parameter d ∈ (−.5, .5). We compare the statistics V/S and Q_{n,m}(s, 1/2) for different values of the parameters (m, s) and different sample sizes n.

context (see for instance Giraitis et al., 2006). This is due to the fact that the marginal variance Var(Y₁) = σ²Γ(1 − 2d)/Γ²(1 − d) increases with d, so that any small-to-moderate jump becomes more difficult to detect for d close to 1/2, as the stationary noise Y_t dominates. Contrary to the stochastic-trend case, the power function is very sensitive to the choice of the number of frequencies s: for small jumps and positive d, only the choice s = 1 yields satisfactory results.

                 m = n^{1/2}                        m = n^{1/3}
  n     d    s=1   s=2   s=3   s=4   s=5      s=1   s=2   s=3   s=4   s=5
 5000  .41  0.00  0.00  0.00  0.00  0.00     0.02  0.01  0.01  0.01  0.00
       .49  0.04  0.04  0.04  0.04  0.04     0.06  0.06  0.05  0.05  0.06
  500  .41  0.01  0.01  0.00  0.00  0.00     0.03  0.03  0.03  0.02  0.02
       .49  0.06  0.06  0.06  0.05  0.05     0.07  0.08  0.08  0.08  0.08

Table 1: Empirical size of the Q-test for FARIMA(0,d,0) processes when d is close to .5.

4.3 Empirical application: the varve dataset

We examine the glacial varve dataset (annual data over 634 years, beginning about 12,600 years ago). This dataset is available in the R package astsa (see Stoffer, 2019), which can be downloaded from the Comprehensive R Archive Network (see R Core Team, 2018). It is often cited as an illustration of the confusion that might occur between long memory and nonstationarity. In Shumway and Stoffer (2017) the stationarity of this (logged) time series is discussed.



Figure 2: Empirical size evaluated on AR(1) processes (n = 500) as a function of the coefficient a. We compare V/S, the Dickey-Fuller test, and Q_{n,√n}(s, 1/2) for different values of s.

Figure 3: Empirical probability of rejecting the null hypothesis as a function of the parameter d of the fractional process, d ∈ (−1/2, 3/2) (panels: n = 500, a zoom around d = 1/2, and n = 5000). We compare Q_{n,√n}(s, 1/2) for different values of s.


Figure 4: Empirical cumulative distribution function of the P-value of the Q-test for different values of d ∈ (−.5, 1.5), with n = 500, m = n^{1/2} and s = 1.

Figure 5: Empirical probability of rejecting stationarity H₀ as a function of d for the change-point model (12), with n = 500 and jump sizes Δ = 1.5, 2.5, 3.5, for Q_{n,√n}(s, 1/2) with different values of s.


[Figure: the logged varve series and its sample autocorrelation function.]

Looking at the sample autocorrelation function, which dies down extremely slowly, we suspect nonstationarity in the dataset.

However, our Q_{n,m}(s, 1/2) test fails to reject the stationarity assumption at 5%, for all s ∈ {1, . . . , 5} and m = n^{1/2} or n^{1/3}, with p-values ranging from .08 to .47. Our findings are in line with the augmented Dickey-Fuller test and the Phillips and Perron (1988) test (as mentioned in Shumway and Stoffer (2017)). The conclusions of the three tests support the decision that the logged varve series is stationary; however, only our test allows for long memory stationarity. Moreover, the local Whittle estimator of d gives a value of .411, quite below the boundary .5. Note that the V/S test rejects the stationarity hypothesis at 5% for this series, but such a conclusion should be viewed with caution, as the empirical size of V/S increases drastically with d (see Figure 1). We also mention that the spectral test proposed in Bailey and Giraitis (2016) concludes that the logged varve series is nonstationary; however, that test does not allow for long memory, as it only tests unit root versus short memory.

5 Appendix

5.1 Proof of Proposition 1

The proof is essentially inspired by the results of Chapter 5 of Giraitis et al. (2012); the difference here is that we compute the periodograms over different epochs. Noticing that, as n = mℓ → ∞,
$$ f(\lambda_j) \sim \left(\frac{2\pi j}{m\ell}\right)^{-2d} f^*(0), \qquad f(\lambda'_j) \sim \left(\frac{2\pi j}{\ell}\right)^{-2d} f^*(0), $$
we easily get, as n → ∞,
$$ Q'_{n,m}(s,d) \sim \sum_{k=1}^{s}\frac{I_n(\lambda_{j_k})/f(\lambda_{j_k})}{\frac{1}{m}\sum_{i=1}^{m} I_{n,i}(\lambda'_{j'_k})/f(\lambda'_{j'_k})}. \qquad (13) $$

Using Theorem 5.3.1 of Giraitis et al. (2012), the numerators in the sum above are asymptotically independent and exponentially distributed. Therefore it is enough to show that each denominator in the sum (13) converges to 1 in probability. First, since I_{n,i}(λ'_{j'_k}), i = 1, . . . , m, are stationary, using Proposition 5.3.1 of Giraitis et al. (2012) we have
$$ E\left(\frac{1}{m}\sum_{i=1}^{m} I_{n,i}(\lambda'_{j'_k})/f(\lambda'_{j'_k})\right) = E\left(I_{n,1}(\lambda'_{j'_k})/f(\lambda'_{j'_k})\right) \to 1. \qquad (14) $$
So we need to show that, for every fixed k = 1, . . . , s,
$$ \mathrm{Var}\left(\frac{1}{m}\sum_{i=1}^{m} I_{n,i}(\lambda'_{j'_k})/f(\lambda'_{j'_k})\right) \to 0. \qquad (15) $$

For fixed k = 1, . . . , s, let
$$ \tilde{I}_{n,i} = \frac{I_{n,i}(\lambda'_{j'_k})}{f(\lambda'_{j'_k})}. $$
Using the stationarity of Ĩ_{n,i}, i = 1, . . . , m, the left-hand side of (15) can be written as
$$ \frac{1}{m^2}\sum_{i=1}^{m}\sum_{j=1}^{m}\mathrm{Cov}\big(\tilde{I}_{n,i}, \tilde{I}_{n,j}\big) = \frac{1}{m}\left(\mathrm{Var}(\tilde{I}_{n,1}) + 2\sum_{h=1}^{m-1}\left(1 - \frac{h}{m}\right)\gamma_I(h)\right), \quad \text{where } \gamma_I(h) = \mathrm{Cov}\big(\tilde{I}_{n,1}, \tilde{I}_{n,h+1}\big). $$

Using equation (5.3.21) of Theorem 5.3.2 in Giraitis et al. (2012), we obtain that Var(Ĩ_{n,1}) → 1 as n → ∞. For the cross-epoch covariances, we can write
$$ \tilde{I}_{n,1} = \tilde{I}_{n,1}(\varepsilon) + r_{n,1}, \qquad \tilde{I}_{n,h+1} = \tilde{I}_{n,h+1}(\varepsilon) + r_{n,h+1}, $$
where Ĩ_{n,1}(ε) is the scaled periodogram built from the innovations ε₁, . . . , ε_ℓ, and Ĩ_{n,h+1}(ε) is built from ε_{ℓh+1}, . . . , ε_{ℓ(h+1)}, which are clearly independent for h ≥ 1. With this, we have (using also the stationarity of Ĩ_{n,i} and r_{n,i}, i = 1, . . . , m),
$$ \gamma_I(h) = \mathrm{Cov}\big(\tilde{I}_{n,1}(\varepsilon), r_{n,h+1}\big) + \mathrm{Cov}\big(r_{n,1}, \tilde{I}_{n,h+1}(\varepsilon)\big) + \mathrm{Cov}\big(r_{n,1}, r_{n,h+1}\big). $$


Each of the previous three terms converges uniformly (in h) to zero, due to the fact that E(r²_{n,1}) → 0, E(Ĩ_{n,1}) → 1 and E(Ĩ_{n,1})² → 1 (see Giraitis et al. (2012), Proposition 5.3.1). This completes the proof of (15).

5.2 Proof of Theorem 2

The proof will be done for Q'_{n,m}; it is essentially the same for Q_{n,m}. First consider the

deterministic trend case: Xt = gn(t) + Yt. In what follows and for simplicity, c designates

a generic positive constant that may be different from one expression to another. For fixed k = 1, . . . , s, and omitting the coefficient 2π in the denominator, we have from (3), with In,Y

denoting the periodogram built from Y1, . . . , Yn,

In(λjk) = In,Y(λjk) + n 2α+1|D n(g)|2+ 2nα+1/2 Re  Dn(g)Dn(Y )  (16) where Dn(g) = n X t=1 g t n  ei2πtjk/n1 n, and Dn(Y ) = 1 √ n n X t=1 Ytei2πtjk/n. As n → ∞, we have E(In,Y(λjk)) ∼ cf (λjk) ∼ c  n jk 2d , and by Riemann sum approximation of an integral,

|Dn(g)|2 ∼ c 6= 0,

according to assumption (11), which is also satisfied by step functions. As for the cross-term, we have

E|(Dn(g)Dn(Y )| = |(Dn(g)|E|Dn(Y )| ≤ |Dn(g)|E1/2(In,Y(λjk) ≤ c(n/jk)

d. (17)

So the leading term of the numerator In(λjk) in the statistic

In(λjk) 1 m Pm i=1In,i(λ 0 jk) (18) is n2α+1|Dn(g)|2, so that In(λjk) n2α+1 P → c. (19)


For the denominator in (18), and similarly to (16), recall from (3) that

\[
I_{n,i}(\lambda) = \frac{1}{2\pi\ell}\,\Big|\sum_{t=(i-1)\ell+1}^{i\ell}\big(Y_t + g_n(t)\big)\, e^{it\lambda}\Big|^2.
\]

We first mention that if \(Z_n \ge 0\) and \(E(Z_n) = O(1)\), then \(Z_n = O_P(1)\). So we have (using (14) and a decomposition similar to (16)), with \(I_{n,i,Y}\) denoting the periodogram built from the i-th epoch of the stationary process \(Y_t\),

\[
\ell^{-1}\,\frac{1}{m}\sum_{i=1}^{m} I_{n,i,Y}(\lambda'_{j_k}) = O_P\big(\ell^{2d-1}\big).
\]

For the trend part, we have

\[
\ell^{-2\gamma}\,\frac{1}{m}\sum_{u=1}^{m}\frac{1}{\ell}
\Big|\sum_{t=1}^{\ell} g_n(t+(u-1)\ell)\, e^{it\lambda'_{j_k}}\Big|^2
= \ell^{-2\gamma-1} n^{2\alpha}
\sum_{t=1}^{\ell}\sum_{s=1}^{\ell}
\Big(\frac{1}{m}\sum_{u=1}^{m}
g\Big(\frac{t+(u-1)\ell}{n}\Big)\, g\Big(\frac{s+(u-1)\ell}{n}\Big)\Big)
e^{i(t-s)\lambda'_{j_k}}
\]
\[
\sim \ell^{-2\gamma-1} n^{2\alpha}
\sum_{t=1}^{\ell}\sum_{s=1}^{\ell}
\Big(\int_0^1 g\Big(\frac{t}{n}+x\Big)\, g\Big(\frac{s}{n}+x\Big)\,dx\Big)
e^{i(t-s)\lambda'_{j_k}}
= \ell^{-2\gamma-1} n^{2\alpha}
\int_0^1 \Big|\sum_{t=1}^{\ell} g\Big(\frac{t}{n}+x\Big) e^{it\lambda'_{j_k}}\Big|^2 dx. \tag{20}
\]

(In the integrals above we extended g beyond (0, 1) with the value 0.) If g is differentiable with g′ bounded, then, since \(\sum_{t=1}^{\ell} e^{it\lambda'_{j_k}} = 0\), (20) can be written as

\[
\ell^{-2\gamma-1} n^{2\alpha}
\int_0^1 \Big|\sum_{t=1}^{\ell}\Big(g(x) + \frac{t}{n}\, g'(\alpha_n(t,x))\Big) e^{it\lambda'_{j_k}}\Big|^2 dx,
\qquad \alpha_n(t,x) \in (x,\, t/n + x),
\]
\[
= \ell^{-2\gamma-1} n^{2\alpha}
\int_0^1 \Big|\sum_{t=1}^{\ell}\frac{t}{n}\, g'(\alpha_n(t,x))\, e^{it\lambda'_{j_k}}\Big|^2 dx
\le \|g'\|_\infty^2\, \frac{\ell^{3-2\gamma}\, n^{2\alpha}}{n^2},
\]

which is of much smaller order than \(n^{-2\gamma} I_n(\lambda_{j_k})\), given (19).

If g is a step function with a single jump at \(x_0\) (the general case is treated in the same way), then the integral in (20) is equal to

\[
\int_{x_0-2\ell/n}^{x_0}
\Big|\sum_{t=1}^{\ell} g\Big(\frac{t}{n}+x\Big) e^{it\lambda'_{j_k}}\Big|^2 dx
\le \|g\|_\infty^2\, \ell^2 \times \frac{2\ell}{n} = C\,\frac{\ell^3}{n}.
\]


In both cases of g, the trend part is of order \(\ell^{2-2\gamma} n^{2\alpha-1}\), which is of much smaller order than \(n^{-2\gamma} I_n(\lambda_{j_k})\).

For the cross-terms, and similarly to (17), we easily obtain that

\[
\ell^{-2\gamma}\, E\Big(\frac{1}{m}\sum_{u=1}^{m}\frac{1}{\ell}
\Big|\sum_{t=1}^{\ell} g_n(t+(u-1)\ell)\, e^{it\lambda'_{j_k}}\Big|
\Big|\sum_{t=1}^{\ell} Y_{t+(u-1)\ell}\, e^{it\lambda'_{j_k}}\Big|\Big)
\]
\[
= \ell^{1/2-2\gamma}
\Big(\frac{1}{n}\sum_{u=1}^{m}\Big|\sum_{t=1}^{\ell} g_n(t+(u-1)\ell)\, e^{it\lambda'_{j_k}}\Big|\Big)\,
E\Big|\frac{1}{\sqrt\ell}\sum_{t=1}^{\ell} Y_t\, e^{it\lambda'_{j_k}}\Big|
\]
\[
\le \ell^{1/2-2\gamma}\, n^{\alpha}
\Big(\frac{1}{n}\sum_{k=1}^{n}\Big|g\Big(\frac{k}{n}\Big)\Big|\Big)\,
E^{1/2}\big(I_{n,1,Y}(\lambda'_{j_k})\big)
\sim n^{\alpha}\, \ell^{1/2-2\gamma}\Big(\int_0^1 |g(x)|\,dx\Big)\, c\Big(\frac{\ell}{j_k}\Big)^{d},
\]

also of much smaller order than \(n^{-2\gamma} I_n(\lambda_{j_k})\). Hence, in total, the denominator is of much smaller order than the numerator, and therefore, for each fixed frequency \(\lambda_{j_k}\), the ratio statistic

\[
\Big(\frac{n}{\ell}\Big)^{-2\gamma}
\frac{I_n(\lambda_{j_k})}{\frac{1}{m}\sum_{i=1}^{m} I_{n,i}(\lambda'_{j_k})}
\xrightarrow{P} \infty,
\]

so that both \(Q'_{n,m}(s,\gamma)\) and \(Q_{n,m}(s,\gamma)\) tend to infinity.

We now consider a stochastic trend: \(X_t = X_{t-1} + Y_t\), where \(Y_t\) has a spectral density of the form (1). Let us fix a Fourier frequency \(\lambda_{j_k}\). We begin with the numerator, assuming μ = 0. With \(S_t = Y_1 + \cdots + Y_t\), we write \(X_t = X_0 + S_t\); using the well-known summation-by-parts formula

\[
\sum_{t=1}^{n} A_t b_t = A_n B_n - \sum_{t=1}^{n-1} a_{t+1} B_t,
\qquad A_k = a_1 + \cdots + a_k,\quad B_k = b_1 + \cdots + b_k,
\]
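The summation-by-parts (Abel) identity is easy to confirm numerically on arbitrary sequences; deterministic pseudo-random inputs are used here so the check is reproducible:

```python
import math

n = 50
a = [math.sin(3.0 * t) for t in range(1, n + 1)]        # a_1, ..., a_n
b = [math.cos(5.0 * t + 1.0) for t in range(1, n + 1)]  # b_1, ..., b_n
A = [sum(a[:k + 1]) for k in range(n)]                  # A_k = a_1 + ... + a_k
B = [sum(b[:k + 1]) for k in range(n)]                  # B_k = b_1 + ... + b_k

# sum_{t=1}^n A_t b_t  ==  A_n B_n - sum_{t=1}^{n-1} a_{t+1} B_t
lhs = sum(A[t] * b[t] for t in range(n))
rhs = A[-1] * B[-1] - sum(a[t + 1] * B[t] for t in range(n - 1))
assert abs(lhs - rhs) < 1e-9
```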

and the fact that, at a Fourier frequency,

\[
\sum_{t=1}^{n} e^{it\lambda_{j_k}} = 0, \tag{21}
\]

we obtain

\[
I_n(\lambda_{j_k})
= \frac{1}{n}\Big|\sum_{t=1}^{n}(X_0+S_t)\, e^{it\lambda_{j_k}}\Big|^2
= \frac{1}{n}\Big|\sum_{t=1}^{n} S_t\, e^{it\lambda_{j_k}}\Big|^2
= \frac{1}{n}\Big|\sum_{t=1}^{n} Y_t\,\frac{1-e^{it\lambda_{j_k}}}{1-e^{i\lambda_{j_k}}}\Big|^2
\sim \frac{n^{2+2d}}{(j_k)^2}\,
\Big|\frac{1}{n^{1/2+d}}\sum_{t=1}^{n}\big(1-e^{it\lambda_{j_k}}\big)Y_t\Big|^2.
\]
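The rewriting of the random-walk periodogram via summation by parts can be checked exactly at finite n. The sketch below uses the algebraically exact kernel \((e^{i\lambda}-e^{it\lambda})/(e^{i\lambda}-1)\), which differs from the displayed kernel only by a phase shift of the exponent that is absorbed in the asymptotics; deterministic pseudo-noise stands in for \(Y_t\):

```python
import cmath
import math

n, j = 64, 3
lam = 2 * math.pi * j / n                              # Fourier frequency lambda_j
y = [math.sin(7.0 * t * t) for t in range(1, n + 1)]   # stand-in for Y_1, ..., Y_n
s = [sum(y[:t + 1]) for t in range(n)]                 # partial sums S_t

# Left side: periodogram-type sum of the random walk S_t at lambda_j.
lhs = abs(sum(s[t] * cmath.exp(1j * lam * (t + 1)) for t in range(n))) ** 2 / n

# Right side: the same quantity rewritten by summation by parts, using
# sum_{t=1}^n e^{i t lambda_j} = 0 at a Fourier frequency (equation (21)).
e = cmath.exp(1j * lam)
rhs = abs(sum(y[t] * (e - e ** (t + 1)) / (e - 1) for t in range(n))) ** 2 / n

assert abs(lhs - rhs) < 1e-7 * max(1.0, abs(lhs))
```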

Let us start with the case where \(j_k \ge K_n \to \infty\). Then, for −1/2 < d < 1/2, we will have, using Theorem 4.3.1 of Giraitis et al. (2012),

\[
\frac{1}{n^{1/2+d}}\sum_{t=1}^{n} Y_t \xrightarrow{D} Z, \tag{22}
\]

where Z is a Gaussian random variable (with positive variance), and, by Theorem 5.3.1 of Giraitis et al. (2012),

\[
\frac{j_k^{\,d}}{n^{1/2+d}}\sum_{t=1}^{n} e^{it\lambda_{j_k}} Y_t \xrightarrow{D} Z_1 + iZ_2, \tag{23}
\]

where \(Z_1\) and \(Z_2\) are i.i.d. nondegenerate Gaussian random variables, so that, since \(j_k \to \infty\),

\[
\frac{1}{n^{1/2+d}}\sum_{t=1}^{n}\big(1-e^{it\lambda_{j_k}}\big)Y_t
\begin{cases}
\xrightarrow{D} Z & \text{if } 0 < d < 1/2,\\[2pt]
\overset{D}{\sim} j_k^{-d}(Z_1+iZ_2) & \text{if } -1/2 < d < 0.
\end{cases}
\]

For the case d = 0 and \(j_k \to \infty\), we have (using again (21) and putting h = t − s)

\[
\operatorname{Cov}\Big(\frac{1}{\sqrt n}\sum_{s=1}^{n} Y_s,\;
\frac{1}{\sqrt n}\sum_{t=1}^{n} e^{it\lambda_{j_k}} Y_t\Big)
= \frac{1}{n}\sum_{t=1}^{n}\sum_{s=1}^{n} e^{it\lambda_{j_k}}\,\gamma(t-s)
= \frac{1}{n}\,\frac{e^{i\lambda_{j_k}}}{1-e^{i\lambda_{j_k}}}
\sum_{h=1}^{n-1}\Big(\big(e^{ih\lambda_{j_k}}-1\big)+\big(1-e^{-ih\lambda_{j_k}}\big)\Big)\gamma(h)
\]
\[
= \frac{2i}{n}\,\frac{e^{i\lambda_{j_k}}}{1-e^{i\lambda_{j_k}}}
\sum_{h=1}^{n-1}\sin\Big(\frac{2\pi j_k h}{n}\Big)\gamma(h)
\sim \frac{i}{n}\,\frac{4}{(2\pi j_k/n)^2}\,\frac{2\pi j_k}{n}\sum_{h=1}^{n}\gamma(h)
\sim \frac{2i}{\pi j_k}\sum_{h=1}^{\infty}\gamma(h) \to 0
\quad\text{as } n \to \infty. \tag{24}
\]

Therefore, invoking Theorem 4.3.2 of Giraitis et al. (2012), when d = 0 and \(j_k \to \infty\),

\[
\frac{1}{n^{1/2}}\sum_{t=1}^{n}\big(1-e^{it\lambda_{j_k}}\big)Y_t \xrightarrow{D} Z + Z_1 + iZ_2,
\]

where \(Z, Z_1, Z_2\) are independent nondegenerate Gaussian random variables. In summary, when \(j_k \to \infty\) we have

\[
\frac{1}{n^{1/2+d}}\sum_{t=1}^{n}\big(1-e^{it\lambda_{j_k}}\big)Y_t
\begin{cases}
\xrightarrow{D} Z & \text{if } 0 < d < 1/2,\\[2pt]
\xrightarrow{D} Z + Z_1 + iZ_2 & \text{if } d = 0,\\[2pt]
\overset{D}{\sim} j_k^{-d}(Z_1+iZ_2) & \text{if } -1/2 < d < 0.
\end{cases}
\]

Now, if \(j_k\) is a fixed integer, then, by Theorem 2 of Deo (1997) applied with the function \(g(x) = 1 - e^{i2\pi j_k x}\), we obtain for 0 < d < 1/2 that

\[
\Big|\frac{1}{n^{1/2+d}}\sum_{t=1}^{n}\big(1-e^{it\lambda_{j_k}}\big)Y_t\Big|^2
\xrightarrow{D}
\Big|\int_0^1\big(1-e^{i2\pi j_k s}\big)\, dB_H(s)\Big|^2,
\]

where \(B_H\) is a fractional Brownian motion with parameter H = 1/2 + d. The limit above is a positive random variable, being a sum of two positively weighted independent \(\chi^2_1\) random variables. The case where d = 0 and \(j_k\) is fixed is similar to the case d = 0 with \(j_k \to \infty\) above: when we compute the limiting covariances as in (24), we see that only Z and \(Z_2\) are correlated, so the Gaussian limit \(Z + Z_1 + iZ_2\) remains nondegenerate when \(j_k\) is fixed. It remains to treat the case −1/2 < d < 0. Here we still have the convergences (22) and (23) and the summability of the covariance function γ(h). Hence a similar computation of the covariances as in (24) can be carried out to obtain, again, that only the two Gaussian limits Z and \(Z_2\) are correlated, with

\[
\operatorname{Cov}(Z, Z_2) = \frac{2}{\pi j_k}\,\lim_{n\to\infty}\frac{1}{n^{2d}}\sum_{h=1}^{n}\gamma(h).
\]

Hence we still have

\[
\frac{1}{n^{1/2+d}}\sum_{t=1}^{n}\big(1-e^{it\lambda_{j_k}}\big)Y_t \xrightarrow{D} Z + Z_1 + iZ_2,
\]

a nondegenerate Gaussian limit. In summary, in both cases (\(j_k\) a constant integer or \(j_k \to \infty\), with −1/2 < d < 1/2), we will have

\[
\Big|\frac{1}{n^{1/2+d}}\sum_{t=1}^{n}\big(1-e^{it\lambda_{j_k}}\big)Y_t\Big|^2 \xrightarrow{D} Q,
\]

where Q is a positive random variable or ∞, making \(I_n(\lambda_{j_k})\) asymptotically (as n → ∞) at least of the order \(n^{2+2d}/(j_k)^2\). Now let us handle the denominator. Similarly to the numerator, we can easily write, for any given Fourier frequency \(\lambda'_{j'_k}\) (with \(j'_k = o(\ell)\)), using

\[
E\big|W_{\ell,u}(\lambda'_{j'_k})\big|^2 \sim c\,\ell\,(\ell/j'_k)^{2d},
\]

that

\[
\frac{1}{m}\sum_{u=1}^{m} I_{n,u}(\lambda'_{j'_k})
= \frac{1}{m}\sum_{u=1}^{m}\frac{1}{\ell}
\Big|\sum_{j=1}^{\ell} S_{j+(u-1)\ell}\, e^{ij\lambda'_{j'_k}}\Big|^2
= \frac{1}{m}\sum_{u=1}^{m}\frac{1}{\ell}
\Big|S_{\ell,u} - W_{\ell,u}(\lambda'_{j'_k})\Big|^2\,
\big|1-e^{i\lambda'_{j'_k}}\big|^{-2}
\]
\[
\le \frac{2}{n}\sum_{u=1}^{m}
\Big(|S_{\ell,u}|^2 + \big|W_{\ell,u}(\lambda'_{j'_k})\big|^2\Big)\,
O\Big(\frac{\ell^2}{(j'_k)^2}\Big)
= \frac{\ell^2 m}{n}\, O_P\big(\ell^{2d+1}\big) = O_P\big(\ell^{2d+2}\big),
\]

where

\[
S_{\ell,u} = \sum_{j=1}^{\ell} Y_{j+(u-1)\ell},
\qquad
W_{\ell,u}(\lambda'_{j'_k}) = \sum_{j=1}^{\ell} Y_{j+(u-1)\ell}\, e^{ij\lambda'_{j'_k}}.
\]

Finally, combining the results on the numerator and the denominator, we obtain that, asymptotically,

\[
\Big(\frac{n}{\ell}\Big)^{-2\gamma}
\frac{I_n(\lambda_{j_k})}{\frac{1}{m}\sum_{u=1}^{m} I_{n,u}(\lambda'_{j_k})}
= \frac{n^{2+2d-2\gamma}}{(j_k)^2\, O_P\big(\ell^{2+2d-2\gamma}\big)}\, Q
\xrightarrow{P} \infty,
\quad\text{as long as } j_k = o\big(m^{1/2+d}\big). \qquad\square
\]

5.3 Definition of the covariance Σ(d)

The matrix Σ(d) is the covariance matrix of a zero-mean Gaussian vector \((Z_1(1), Z_2(1), \ldots, Z_1(s), Z_2(s))\) such that, for j, k = 1, …, s, \(Z_1(j)\) and \(Z_2(k)\) are independent for all j, k, and

\[
\operatorname{Var}(Z_1(j)) = \frac{1}{2} - \frac{R_j(d)}{L_j(d)}, \tag{25}
\]
\[
\operatorname{Var}(Z_2(j)) = \frac{1}{2} + \frac{R_j(d)}{L_j(d)}, \tag{26}
\]

and, for j ≠ k,

\[
\operatorname{Cov}(Z_1(j), Z_1(k)) = \frac{L_{j,k}(d) - R_{j,k}(d)}{\sqrt{L_j(d)\, L_k(d)}}, \tag{27}
\]
\[
\operatorname{Cov}(Z_2(j), Z_2(k)) = \frac{L_{j,k}(d) + R_{j,k}(d)}{\sqrt{L_j(d)\, L_k(d)}}, \tag{28}
\]

with

\[
L_j(d) = \frac{2}{\pi}\int_{-\infty}^{\infty}
\frac{\sin^2(\lambda/2)}{(2\pi j - \lambda)^2}
\Big|\frac{\lambda}{2\pi j}\Big|^{-2d} d\lambda, \tag{29}
\]
\[
R_j(d) = \frac{1}{\pi}\int_{-\infty}^{\infty}
\frac{\sin^2(\lambda/2)}{(2\pi j - \lambda)(2\pi j + \lambda)}
\Big|\frac{\lambda}{2\pi j}\Big|^{-2d} d\lambda, \tag{30}
\]
\[
L_{j,k}(d) = \frac{(jk)^d}{\pi}\int_{-\infty}^{\infty}
\frac{\sin^2(\lambda/2)}{(2\pi k - \lambda)(2\pi j - \lambda)}
\Big|\frac{\lambda}{2\pi}\Big|^{-2d} d\lambda, \tag{31}
\]


and

\[
R_{j,k}(d) = \frac{(jk)^d}{\pi}\int_{-\infty}^{\infty}
\frac{\sin^2(\lambda/2)}{(2\pi k + \lambda)(2\pi j - \lambda)}
\Big|\frac{\lambda}{2\pi}\Big|^{-2d} d\lambda. \tag{32}
\]

Simplifications of formulae (25)–(32). We now give some simplifications of these variance–covariance formulae. This is helpful when computing them numerically with a statistical software such as R. We should mention that the main issues with numerical computation arise when d < 0. First, one can easily check that

\[
R_j(d) = \frac{1}{\pi^2 j}\int_0^{\infty}
\frac{\sin^2(\pi j \lambda)}{1-\lambda^2}\,\lambda^{-2d}\, d\lambda,
\]

which can easily be computed numerically, directly over a compact interval (e.g. [0, 5]) and then via an integration by parts over (5, ∞). We chose the value 5 just to stay away from the value 1 in the infinite interval. Then one can also easily find that

\[
L_j(d) = \frac{4}{\pi^2 j}\int_0^{\infty}
\frac{\sin^2(\pi j \lambda)}{(1-\lambda^2)^2}\,\lambda^{-2d}\, d\lambda - 2R_j(d),
\]

the first integral on the right-hand side being directly computable numerically. As for the covariances, one can also easily check that

\[
R_{j,k}(d) = \frac{(jk)^d\,(jk - k^2)}{\pi^2}\int_0^{\infty}
\frac{\sin^2(\pi\lambda)}{(j^2-\lambda^2)(k^2-\lambda^2)}\,\lambda^{-2d}\, d\lambda
+ (k/j)^d\, R_j(d),
\]

the integral on the right-hand side being directly computable numerically, and that

\[
L_{j,k}(d) = \frac{(jk)^d\,(jk + k^2)}{\pi^2}\int_0^{\infty}
\frac{\sin^2(\pi\lambda)}{(j^2-\lambda^2)(k^2-\lambda^2)}\,\lambda^{-2d}\, d\lambda
- (k/j)^d\, R_j(d).
\]
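As an illustration, the simplified formulae can be evaluated with a crude midpoint rule. This is a Python sketch rather than the R routine used in the paper: the integration-by-parts treatment of the tail is replaced by plain truncation at `upper`, and the function names are ours. The singularity at λ = 1 is removable (sin²(πjλ) vanishes there), and the midpoint grid never hits λ = 1 exactly; integrability at 0 requires d ∈ (−1/2, 1/2).

```python
import math

def r_j(d, j, upper=100.0, h=5e-4):
    """Midpoint-rule evaluation of the simplified R_j(d) formula above.
    The tail beyond `upper` is simply dropped (it is O(1/upper) at d = 0)."""
    total, lam = 0.0, h / 2
    while lam < upper:
        total += math.sin(math.pi * j * lam) ** 2 / (1.0 - lam * lam) * lam ** (-2 * d)
        lam += h
    return total * h / (math.pi ** 2 * j)

def l_j(d, j, upper=100.0, h=5e-4):
    """Midpoint-rule evaluation of the simplified L_j(d) formula above."""
    total, lam = 0.0, h / 2
    while lam < upper:
        total += math.sin(math.pi * j * lam) ** 2 / (1.0 - lam * lam) ** 2 * lam ** (-2 * d)
        lam += h
    return 4.0 * total * h / (math.pi ** 2 * j) - 2.0 * r_j(d, j, upper, h)
```

At d = 0 the normalized periodogram ordinates are asymptotically standard exponential, which suggests R_j(0) = 0 and L_j(0) = 1; the sketch reproduces these values within the truncation error.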

References

Bailey, N. and Giraitis, L. (2016). Spectral approach to parameter-free unit root testing. Comput. Statist. Data Anal., 100:4–16.

Beran, J., Feng, Y., Ghosh, S., and Kulik, R. (2013). Long-memory processes. Springer, Heidelberg. Probabilistic properties and statistical methods.

Deo, R. S. (1997). Asymptotic theory for certain regression models with long memory errors. J. Time Ser. Anal., 18(4):385–393.

Giraitis, L., Koul, H. L., and Surgailis, D. (2012). Large sample inference for long memory processes. Imperial College Press, London.

Giraitis, L., Leipus, R., and Philippe, A. (2006). A test for stationarity versus trends and unit roots for a wide class of dependent errors. Econometric Theory, 22(6):989–1029.


Gromykov, G., Ould Haye, M., and Philippe, A. (2018). A frequency-domain test for long range dependence. Statistical Inference for Stochastic Processes, 21(3):513–526.

Hu, C., Pozdnyakov, V., and Yan, J. (2020). Density and distribution evaluation for convo-lution of independent gamma variables. Computational Statistics, 35(1):327–342.

Hurvich, C. M. and Beltrão, K. I. (1993). Asymptotics for the low-frequency ordinates of the periodogram of a long-memory time series. J. Time Ser. Anal., 14(5):455–472.

Phillips, P. C. B. and Perron, P. (1988). Testing for a unit root in time series regression. Biometrika, 75:335–346.

R Core Team (2018). R: A Language and Environment for Statistical Computing. R Foun-dation for Statistical Computing, Vienna, Austria.

Shumway, R. H. and Stoffer, D. S. (2017). Time series analysis and its applications with R examples. Springer Texts in Statistics. Springer, fourth edition.

Stoffer, D. (2019). astsa: Applied Statistical Time Series Analysis. R package version 1.9.

Taqqu, M. (2003). Fractional Brownian motion and long-range dependence. In Theory and Applications of Long-Range Dependence, pages 5–38. Eds Doukhan, P., Oppenheim, G., and Taqqu, M. Birkhäuser, Boston.

Terrin, N. and Hurvich, C. M. (1994). An asymptotic Wiener–Itô representation for the low frequency ordinates of the periodogram of a long memory time series. Stochastic Process. Appl., 54(2):297–307.

Figure 1: Empirical level evaluated on a FARIMA(0, d, 0) process as a function of the long-memory parameter d ∈ (−.5, .5).

Figure 3: Empirical probability of rejecting the null hypothesis as a function of d, the parameter of the fractional process, with d ∈ (−1/2, 3/2).

Figure 4: Empirical cumulative distribution function of the p-value of the Q-test for different values of d ∈ (−.5, 1.5).
