HAL Id: hal-01102806
https://hal.archives-ouvertes.fr/hal-01102806
Submitted on 13 Jan 2015
To cite this version:
Sylvain Delattre, Nicolas Fournier, Marc Hoffmann. Hawkes processes on large networks. Annals of Applied Probability, Institute of Mathematical Statistics (IMS), 2016. hal-01102806
HAWKES PROCESSES ON LARGE NETWORKS
SYLVAIN DELATTRE, NICOLAS FOURNIER AND MARC HOFFMANN
Abstract. We generalise the construction of multivariate Hawkes processes to a possibly infinite network of counting processes on a directed graph G. The process is constructed as the solution to a system of Poisson-driven stochastic differential equations, for which we prove pathwise existence and uniqueness under some reasonable conditions.
We next investigate how to approximate a standard N-dimensional Hawkes process by a simple inhomogeneous Poisson process in the mean-field framework where each pair of individuals interacts in the same way, in the limit N → ∞. In the so-called linear case for the interaction, we further investigate the large time behaviour of the process. We study in particular the stability of the central limit theorem when exchanging the limits N, T → ∞ and exhibit different possible behaviours.
We finally consider the case G = Z^d with nearest neighbour interactions. In the linear case, we prove some (large time) laws of large numbers and exhibit different behaviours, reminiscent of the infinite setting. Finally we study the propagation of a single impulsion started at a given point of Z^d at time 0. We compute the probability of extinction of such an impulsion and, in some particular cases, we can accurately describe how it propagates to the whole space.
Mathematics Subject Classification (2010): 60F05, 60G55, 60G57.
Keywords: Point processes. Multivariate Hawkes processes. Stochastic differential equations. Limit theorems. Mean-field approximations. Interacting particle systems.
1. Introduction
1.1. Motivation. In several apparently different applied fields, a growing interest has been observed recently for a better understanding of stochastic interactions between multiple entities evolving through time. These include: seismology for modelling earthquake replicas (Helmstetter-Sornette [21], Kagan [27], Ogata [36], Bacry-Muzy [5]), neuroscience for modelling spike trains in brain activity (Grün et al. [16], Okatan et al. [37], Pillow et al. [38], Reynaud et al. [40, 41]), genome analysis (Reynaud-Schbath [39]), financial contagion (Ait-Sahalia et al. [1]), high-frequency finance (order arrivals, see Bauwens-Hautsch [6], Hewlett [22], market microstructure, see Bacry et al. [2], and market impact, see Bacry-Muzy [4, 5]), financial price modelling across scales (Bacry et al. [3], Jaisson-Rosenbaum [24]), social networks interactions (Blundell et al. [7], Simma-Jordan [43], Zhou et al. [46]) and epidemiology, like for instance viral diffusion on a network (Hang-Zha [45]), to name but a few. In all these contexts, observations are often represented as events (like spikes or features) associated to agents or nodes on a given network, events that arrive randomly through time but that are not stochastically independent.
In practice, we observe a multivariate counting process (Z^1_t, …, Z^N_t)_{t≥0}, each component Z^i_t recording the number of events of the i-th component of the system during [0, t], or equivalently the time stamps of the observed events. Under relatively weak general assumptions, a multivariate counting process (Z^1_t, …, Z^N_t)_{t≥0} is characterised by its intensity process (λ^1_t, …, λ^N_t)_{t≥0}, informally defined by

Pr( Z^i has a jump in [t, t + dt] | F_t ) = λ^i_t dt,  i = 1, …, N,
where F_t denotes the sigma-field generated by (Z^i)_{1≤i≤N} up to time t. For modelling the interactions, a particularly attractive family of multivariate point processes is given by the class of (mutually exciting) Hawkes processes (Hawkes [18], Hawkes-Oakes [19]), with intensity process given by

λ^i_t = h_i( Σ_{j=1}^N ∫_0^{t−} ϕ_{ji}(t − s) dZ^j_s ),

where the causal functions ϕ_{ji} : [0,∞) → R model how Z^j acts on Z^i by affecting its intensity process λ^i. The nonnegative functions h_i account for some non-linearity, but if we set h_i(x) = μ_i + x with μ_i ≥ 0, we obtain linear Hawkes processes, where μ_i can be interpreted as a baseline Poisson intensity. In the degenerate case ϕ_{ji} = 0, we actually retrieve standard Poisson processes.
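The linear case with exponential kernels is also easy to simulate, which may help fix intuition. The sketch below is our own illustration (the function name, its parameters, and the restriction to kernels ϕ_{ji}(t) = α_{ji} e^{−βt} are assumptions, not the paper's setup): it implements thinning in the style of Ogata, using that between jumps the intensities only decay, so their current sum dominates them until the next event.

```python
import math
import random

def simulate_hawkes(mu, alpha, beta, T, seed=0):
    """Simulate a linear Hawkes process with intensities
    lambda_i(t) = mu[i] + sum_j sum_{T^j_k < t} alpha[j][i]*exp(-beta*(t - T^j_k))
    by thinning.  Returns the list of (jump time, component) pairs on [0, T]."""
    rng = random.Random(seed)
    N = len(mu)
    # s[j][i] holds sum over past jumps T^j_k of alpha[j][i]*exp(-beta*(t - T^j_k)).
    s = [[0.0] * N for _ in range(N)]
    t, events = 0.0, []
    while True:
        lam = [mu[i] + sum(s[j][i] for j in range(N)) for i in range(N)]
        bound = sum(lam)  # dominates sum_i lambda_i until the next jump
        w = rng.expovariate(bound)
        if t + w > T:
            return events
        t += w
        decay = math.exp(-beta * w)
        for j in range(N):
            for i in range(N):
                s[j][i] *= decay  # excitations decay between candidate times
        lam = [mu[i] + sum(s[j][i] for j in range(N)) for i in range(N)]
        if rng.random() * bound <= sum(lam):  # thinning: accept the candidate
            # attribute the jump to component i with probability lam[i]/sum(lam)
            u, acc, comp = rng.random() * sum(lam), 0.0, N - 1
            for i in range(N):
                acc += lam[i]
                if u <= acc:
                    comp = i
                    break
            events.append((t, comp))
            for i in range(N):
                s[comp][i] += alpha[comp][i]  # the jump of Z^comp excites each Z^i
```

With this parametrisation the L^1 norm of ϕ_{ji} is α_{ji}/β, which governs sub/supercriticality in the sense discussed later in the paper.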
Multivariate Hawkes processes have long been studied in probability theory (see for instance the comprehensive textbook of Daley-Vere-Jones [12] and the references therein, Brémaud-Massoulié [9] and Massoulié [32], or the recent results of Zhu [47, 48]). Their statistical inference is relatively well understood too, from a classical parametric angle (Ogata [34]) together with recent significant advances in nonparametrics (Reynaud-Bouret-Schbath [39], Hansen et al. [20]). However, the frontier is progressively moving to understanding the case of large N, when the number of components may become increasingly large or possibly infinite, see Massoulié [32] and Galves-Löcherbach [17] for some constructions in that direction. This context is potentially of major importance for future developments in the aforementioned applied fields. This is the topic of the present paper.
1.2. Setting. We work on a filtered probability space (Ω, F, (F_t)_{t≥0}, Pr). We say that (X_t)_{t≥0} is a counting process if it is non-decreasing, càdlàg, integer-valued (and finite for all times), with all its jumps of height 1. For (X_t)_{t≥0} an (F_t)_{t≥0}-adapted counting process, there is a unique non-decreasing predictable process (Λ_t)_{t≥0}, called the compensator of (X_t)_{t≥0}, such that (X_t − Λ_t)_{t≥0} is an (F_t)_{t≥0}-local martingale, see Jacod-Shiryaev [25, Chapter I].
We consider a countable directed graph G = (S, E) with vertices (or nodes) i ∈ S and (directed) edges e ∈ E. We write e = (j, i) ∈ E for the oriented edge. We also need to specify the following parameters: a kernel ϕ = (ϕ_{ji}, (j, i) ∈ E) with ϕ_{ji} : [0,∞) → R, and a nonlinear intensity component h = (h_i, i ∈ S) with h_i : R → [0,∞). The natural generalisation of finite-dimensional Hawkes processes is the following.
Definition 1. A Hawkes process with parameters (G, ϕ, h) is a family of (F_t)_{t≥0}-adapted counting processes (Z^i_t)_{i∈S, t≥0} such that

(i) almost surely, for all i ≠ j, (Z^i_t)_{t≥0} and (Z^j_t)_{t≥0} never jump simultaneously,

(ii) for every i ∈ S, the compensator (Λ^i_t)_{t≥0} of (Z^i_t)_{t≥0} has the form Λ^i_t = ∫_0^t λ^i_s ds, where the intensity process (λ^i_t)_{t≥0} is given by

λ^i_t = h_i( Σ_{j→i} ∫_0^{t−} ϕ_{ji}(t − s) dZ^j_s ),

with the notation Σ_{j→i} for summation over {j : (j, i) ∈ E}.
This model can be seen as a particular case of that introduced by Massoulié [32], who considers more general sets of sites (possibly R^d) and a larger class of intensities. We say that a Hawkes process is linear when h_i(x) = μ_i + x for some μ_i ≥ 0, as in the finite-dimensional case recalled above.
We will give some general existence, uniqueness and approximation results for nonlinear Hawkes processes, but all the precise large-time estimates we will prove concern the linear case.
A Hawkes process (Zti)i∈S,t≥0 with parameters (G, ϕ, h) behaves as follows. For each i ∈ S,
the rate of jump of Zi is, at time t, λ
i(t) = hi(Pj→i
P
k≥1ϕji(t− Tkj)1{Tkj<t}), where (T
j k)k≥1
are the jump times of Zj. In other words, each time one of the Zj’s has a jump, it excites its
neighbours in that it increases their rate of jump (in the natural situation where h is increasing and ϕ is positive). If ϕ is positive and decreases to 0, the case of almost all applications we have in mind, the influence of a jump decreases and tends to 0 as time evolves.
1.3. Main results. In the case where G is a finite graph, under some appropriate assumptions on the parameters, the construction of (Z^i_t)_{i∈S, t≥0} is standard. However, for an infinite graph, the situation is more delicate: we have to check, in some sense, that the interaction does not come from infinity.
The first part of this paper (Section 2) consists of writing a Hawkes process as the solution to a system of Poisson-driven S.D.E.s and of finding a set of assumptions on G and on the parameters (ϕ, h) under which we can prove pathwise existence and uniqueness for this system of S.D.E.s. Representing counting processes as solutions to S.D.E.s is classical, see Lewis-Shedler [31], Ogata [35], Brémaud-Massoulié [9], Chevallier [11]. However, the well-posedness of such S.D.E.s is not obvious when G is an infinite graph.
In a second part (Section 3), we study the mean-field situation: we assume that we have a finite (large) number N of particles behaving similarly, with no geometry. In other words, S = {1, …, N} is endowed with the set of all possible edges E = {(i, j) : i, j ∈ S}, and there are two functions h and ϕ such that h_i = h and ϕ_{ij} = N^{−1}ϕ for all i, j ∈ S. We show that, as N → ∞, Hawkes processes can be approximated by an i.i.d. family of inhomogeneous Poisson processes. Concerning the large-time behaviour, we discuss, in the linear case, the possible law of large numbers and central limit theorems as (t, N) → (∞, ∞) and we observe some different situations according to the position of ∫_0^∞ ϕ(t) dt with respect to 1 (the so-called critical case).
Finally, we consider in Section 4 the case where G is Z^d, endowed with the set of edges E = {(i, j) : |i − j| = 0 or 1}, where |·| denotes the Euclidean distance. We study the large time behaviour, in the linear case where h_i(x) = μ_i + x and when ϕ_{ij} = (2d + 1)^{−1}ϕ does not depend on i, j. We first assume that μ_i does not depend too much on i (consider e.g. the case where the μ_i are random, i.i.d. and bounded) and show that (i) if ∫_0^∞ ϕ(t) dt > 1, then there is a law of large numbers and the interaction makes everything flat, in the sense that Z^i_t ∼ Z^j_t as t → ∞ for all i ≠ j; (ii) if ∫_0^∞ ϕ(t) dt < 1, then there is again a law of large numbers, but the limiting value depends on i. We also explain why these results are reminiscent of the infinite setting and of the interaction. Finally, we study the case where μ_i = 0 for all i but where there is an impulsion at time 0 at i = 0. We compute the probability of extinction of such an impulsion and, in some particular cases, we study how it propagates to the whole space (when it does not die out).

1.4. Notation. The Laplace transform of ϕ : [0,∞) → R is defined, when it exists, by

Lϕ(α) = ∫_0^∞ e^{−αt} ϕ(t) dt.

We also introduce the convolution of g, h : [0,∞) → R, defined (when it exists) by

(g ⋆ h)_t = ∫_0^t g_s h_{t−s} ds = ∫_0^t g_{t−s} h_s ds.
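To illustrate these notations on the kernel used repeatedly below, here is a short worked computation (our own illustrative example, for an exponential kernel):

```latex
% Illustrative example: exponential kernel \varphi(t) = a e^{-bt}, a, b > 0.
\mathcal{L}\varphi(\alpha) = \int_0^\infty e^{-\alpha t}\, a e^{-bt}\, dt
                           = \frac{a}{\alpha + b}, \qquad \alpha > -b.
% In particular \int_0^\infty \varphi(t)\,dt = \mathcal{L}\varphi(0) = a/b,
% so this kernel is subcritical iff a < b, and in the supercritical case
% a > b the equation \mathcal{L}\varphi(\alpha_0) = 1 has the explicit root
\alpha_0 = a - b.
```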
2. Well-posedness using a Poisson S.D.E.

We will study Hawkes processes through a system of Poisson-driven stochastic differential equations. This will allow us to speak of pathwise existence and uniqueness and to prove some propagation of chaos using some simple coupling arguments.

Consider, on a filtered probability space (Ω, F, (F_t)_{t≥0}, Pr), a family (π^i(ds dz), i ∈ S) of i.i.d. (F_t)_{t≥0}-Poisson measures with intensity measure ds dz on [0,∞) × [0,∞).
Definition 2. A family (Z^i_t)_{i∈S, t≥0} of càdlàg (F_t)_{t≥0}-adapted processes is called a Hawkes process with parameters (G, ϕ, h) if a.s., for all i ∈ S, all t ≥ 0,

(1)  Z^i_t = ∫_0^t ∫_0^∞ 1_{{z ≤ h_i( Σ_{j→i} ∫_0^{s−} ϕ_{ji}(s − u) dZ^j_u )}} π^i(ds dz).

This formulation is consistent with Definition 1.
Proposition 3. (a) A Hawkes process in the sense of Definition 2 is also a Hawkes process in the sense of Definition 1.

(b) Consider a Hawkes process in the sense of Definition 1 (on some filtered probability space (Ω, F, (F_t)_{t≥0}, Pr)). Then we can build, on a possibly enlarged probability space ( ˜Ω, ˜F, ( ˜F_t)_{t≥0}, ˜Pr), a family (π^i(ds dz), i ∈ S) of i.i.d. ( ˜F_t)_{t≥0}-Poisson measures with intensity measure ds dz on [0,∞) × [0,∞) such that (Z^i_t)_{i∈S, t≥0} is a Hawkes process in the sense of Definition 2.
Point (a) is very easy: for a Hawkes process (Z^i_t)_{i∈S, t≥0} in the sense of Definition 2, it is clear that for every i ∈ S, the compensator of Z^i is ∫_0^t ∫_0^∞ 1_{{z ≤ h_i( Σ_{j→i} ∫_0^{s−} ϕ_{ji}(s − u) dZ^j_u )}} dz ds, which is equal to ∫_0^t h_i( Σ_{j→i} ∫_0^{s−} ϕ_{ji}(s − u) dZ^j_u ) ds. Furthermore, the independence of the Poisson random measures (π^i(ds dz), i ∈ S) guarantees that for all i ≠ j, (Z^i_t)_{t≥0} and (Z^j_t)_{t≥0} a.s. never jump simultaneously.
Point (b) is more delicate but standard, and a very similar result was given in Brémaud-Massoulié [9]. Their proof is based on results found in the book of Jacod [23], one of whose main goals is exactly this topic: proving the equivalence between martingale problems and S.D.E.s. The many results of [23] generalise those of Grigelionis [15]. See also the pioneering work of Kerstan [28]. We also refer to Chevallier [11, Section IV], where a very complete proof is given as well as a historical survey. Let us mention that the idea of integrating an indicator function with respect to a Poisson measure in order to produce an inhomogeneous Poisson process with given intensity was first introduced by Lewis-Shedler [31], and later extended by Ogata [35] to the case of a stochastic intensity.
The following set of assumptions will guarantee the well-posedness of (1).
Assumption 4. There are some nonnegative constants (c_i)_{i∈S}, some positive weights (p_i)_{i∈S} and a locally integrable function φ : [0,∞) → [0,∞) such that

(a) for every i ∈ S, every x, y ∈ R, |h_i(x) − h_i(y)| ≤ c_i |x − y|,

(b) Σ_{i∈S} h_i(0) p_i < ∞,

(c) for every s ∈ [0,∞), every j ∈ S, Σ_{i : (j,i)∈E} c_i p_i |ϕ_{ji}(s)| ≤ p_j φ(s).
Remark 5. (i) If S is finite, then Assumption 4 holds true, with the choice p_i = 1, as soon as h_i is Lipschitz continuous for all i ∈ S and ϕ_{ji} is locally integrable for all (j, i) ∈ E.

(ii) If S = Z^d is endowed with E = {(i, j) : |i − j| = 0 or 1}, then Assumption 4 holds, with the choice p_i = 2^{−|i|}, if Σ_{i∈Z^d} 2^{−|i|} |h_i(0)| < ∞ and if there are c > 0 and ϕ ∈ L^1_loc([0,∞)) such that |h_i(x) − h_i(y)| ≤ c|x − y| and |ϕ_{jk}(t)| ≤ ϕ(t) for all i ∈ S, x, y ∈ R, (j, k) ∈ E and t ≥ 0.

(iii) Consider next S = Z^d endowed with the set of all possible edges E = {(i, j) : i, j ∈ Z^d} and assume that there is c > 0 such that |h_i(0)| ≤ c and |h_i(x) − h_i(y)| ≤ c|x − y| for all i ∈ S, x, y ∈ R. Assume that there are ϕ ∈ L^1_loc([0,∞)) and a nonincreasing a : [0,∞) → [0,∞) such that |ϕ_{ji}(t)| ≤ a(|i − j|) ϕ(t) for all (i, j) ∈ E and t ≥ 0. Then if Σ_{i∈Z^d} a(|i|) < ∞, Assumption 4 holds true.

(iv) Consider the (strongly oriented) graph Z_+ endowed with the set of edges E = {(i, i + 1) : i ∈ Z_+}. Then Assumption 4 holds true as soon as there is ϕ ∈ L^1_loc([0,∞)) such that for every i ∈ Z_+, there are c_i > 0 and a_i > 0 such that |h_i(x) − h_i(y)| ≤ c_i |x − y| and |ϕ_{i(i+1)}| ≤ a_i ϕ.

Points (ii) and (iii) of course extend to other graphs. In (iv), there is no growth condition on |h_i(0)|, c_i and a_i. This comes from the fact that the interaction is directed: Z^0 is actually a Poisson process with rate h_0(0), the intensity of Z^1 is entirely determined by that of Z^0, and so on. Hence this example is not very interesting. But we can mix e.g. points (ii) and (iv): informally, coefficients corresponding to edges directed towards the origin have to be well-controlled, while coefficients corresponding to edges directed towards infinity require fewer assumptions.
Proof. Point (i) is obvious. To check (ii), simply note that for all j ∈ S, Σ_{i : (j,i)∈E} c 2^{−|i|} |ϕ_{ji}(s)| ≤ c ϕ(s) 2^{−|j|} Σ_{i : (j,i)∈E} 2^{|j|−|i|} ≤ 2(2d + 1) c ϕ(s) 2^{−|j|}, and define φ = 2(2d + 1) c ϕ. Point (iv) holds with (p_i)_{i∈Z_+} defined by p_0 = 1 and, by induction, p_{i+1} = min{ 2^{−i}/(1 + h_{i+1}(0)), p_i/(1 + a_i c_{i+1}) }. This of course implies that Σ_{i∈Z_+} p_i |h_i(0)| < ∞ and that for all j, Σ_{i : (j,i)∈E} c_i p_i |ϕ_{ji}| = c_{j+1} p_{j+1} |ϕ_{j(j+1)}| ≤ c_{j+1} p_{j+1} a_j ϕ ≤ p_j ϕ, as desired.
To prove (iii), we work with the sup norm |i| = |(i_1, …, i_d)| = max{|i_1|, …, |i_d|}. The delicate part consists in showing that there are b : N → [0,∞) and a constant C > 0 such that Σ_{i∈Z^d} b(|i|) < ∞ and, for all j ∈ Z^d, Σ_{i∈Z^d} b(|i|) a(|i − j|) ≤ C b(|j|). Then the result will easily follow, with the choices p_i = b(|i|) and φ = C c ϕ. We define b recursively, by b(0) = a(0) and b(k + 1) = max{ a(k + 1), [(k + 1)/(k + 2)]^{2d} b(k) }. Using that a is nonincreasing, we easily check that b is nonincreasing. We next check that Σ_{i∈Z^d} b(|i|) < ∞, i.e. that Σ_{k≥0} k^{d−1} b(k) < ∞, knowing by assumption that Σ_{k≥0} k^{d−1} a(k) < ∞. We have, for k ≥ 0,

b(k + 1) − a(k + 1) = ( [(k + 1)/(k + 2)]^{2d} b(k) − a(k + 1) )_+
 ≤ [(k + 1)/(k + 2)]^{2d} (b(k) − a(k))_+ + ( [(k + 1)/(k + 2)]^{2d} a(k) − a(k + 1) )_+
 ≤ [(k + 1)/(k + 2)]^{2d} (b(k) − a(k)) + (a(k) − a(k + 1)),

the last inequality using that a is nonincreasing. Recalling that b(0) = a(0), one gets by iteration

b(k) − a(k) ≤ Σ_{ℓ=1}^{k} (a(ℓ − 1) − a(ℓ)) [(ℓ + 1)/(k + 1)]^{2d}.

Hence

Σ_{k≥1} k^{d−1} (b(k) − a(k)) ≤ Σ_{k≥1} k^{d−1} Σ_{ℓ=1}^{k} (a(ℓ − 1) − a(ℓ)) [(ℓ + 1)/(k + 1)]^{2d}
 = Σ_{ℓ≥1} (a(ℓ − 1) − a(ℓ)) (ℓ + 1)^{2d} Σ_{k≥ℓ} k^{d−1} (k + 1)^{−2d}
 ≤ C Σ_{ℓ≥1} (a(ℓ − 1) − a(ℓ)) ℓ^d.

This last quantity is nothing but C Σ_{ℓ≥1} a(ℓ) [(ℓ + 1)^d − ℓ^d] ≤ C Σ_{ℓ≥1} a(ℓ) ℓ^{d−1} < ∞. We have thus checked that Σ_{k≥0} k^{d−1} b(k) = Σ_{k≥1} k^{d−1} a(k) + Σ_{k≥1} k^{d−1} (b(k) − a(k)) < ∞.
We finally prove that for all j ∈ Z^d, Σ_{i∈Z^d} b(|i|) a(|i − j|) ≤ C b(|j|). First, we claim that there is C such that b(k) ≤ C b(2k) for all k ≥ 0. This is easily checked by iterating the inequality b(k) ≤ [(k + 2)/(k + 1)]^{2d} b(k + 1). Next we write, using that a and b are nonincreasing,

Σ_{i∈Z^d} b(|i|) a(|i − j|) ≤ Σ_{|i|<|j|/2} b(|i|) a(|i − j|) + Σ_{|i|≥|j|/2} b(|i|) a(|i − j|)
 ≤ a(|j|/2) Σ_{i∈Z^d} b(|i|) + b(|j|/2) Σ_{i∈Z^d} a(|i − j|)
 ≤ C a(|j|/2) + C b(|j|/2).

By definition of b, we have a(|j|/2) ≤ b(|j|/2), and we have just seen that b(|j|/2) ≤ C b(|j|). We have thus checked that Σ_{i∈Z^d} b(|i|) a(|i − j|) ≤ C b(|j|) as desired. □
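The recursive construction of b can be checked numerically. The sketch below is our own illustration (in dimension d = 1, with the assumed choice a(k) = (1 + k)^{−3}, which is nonincreasing and summable over Z): it builds b and evaluates the ratio Σ_{i∈Z} b(|i|) a(|i − j|) / b(|j|) on a truncated range, where it indeed stays of order 1.

```python
# Numerical illustration of the proof of Remark 5-(iii), in dimension d = 1,
# with the assumed (illustrative) kernel bound a(k) = (1+k)^(-3).
d = 1
K = 5000                                   # truncate Z to {-(K-1), ..., K-1}
a = [(1.0 + k) ** -3 for k in range(2 * K)]

# b(0) = a(0) and b(k+1) = max{ a(k+1), [(k+1)/(k+2)]^{2d} b(k) }.
b = [a[0]]
for k in range(K - 1):
    b.append(max(a[k + 1], ((k + 1) / (k + 2)) ** (2 * d) * b[k]))

def ratio(j):
    """Truncated version of sum_{i in Z} b(|i|) a(|i-j|) / b(|j|)."""
    s = sum(b[abs(i)] * a[abs(i - j)] for i in range(-(K - 1), K))
    return s / b[j]

# The proof predicts this stays bounded uniformly in j.
worst = max(ratio(j) for j in range(0, 101))
```

Here one can also verify that b is nonincreasing, as claimed in the proof.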
Our well-posedness result is the following.

Theorem 6. Under Assumption 4, there exists a pathwise unique Hawkes process (Z^i_t)_{i∈S, t≥0} such that Σ_{i∈S} p_i E[Z^i_t] < ∞ for all t ≥ 0.

Observe that this result is not completely obvious in the case of an infinite graph. In some sense, we have to check that the interaction does not come from infinity. Let us insist on the fact that, even in simple situations, a graphical construction is not possible: consider e.g. the case of Z endowed with the set of edges E = {(i, j) : |i − j| = 0 or 1}, assume that h_i(x) = 1 + x for all i ∈ S and that ϕ_{ij} = 1 for all (i, j) ∈ E. Then one easily gets convinced that we cannot determine the values of (Z^0_t)_{t∈[0,T]} by observing the Poisson measures π^i in a (random) finite box.
As a second comment, let us mention that we believe it is not possible, or at least quite difficult, to obtain full uniqueness, i.e. uniqueness outside the class of processes satisfying Σ_{i∈S} p_i E[Z^i_t] < ∞ (or something similar). Indeed, consider again the case of Z endowed with E = {(i, j) : |i − j| = 0 or 1}, assume that h_i(x) = 1 + x for all i ∈ S and that ϕ_{ji} = 1 for all (i, j) ∈ E. One easily checks that for (Z^i_t)_{i∈S, t≥0} a Hawkes process, the means m^i_t = E[Z^i_t] satisfy m^i_t = t + ∫_0^t (m^{i−1}_s + m^i_s + m^{i+1}_s) ds for every i. This infinite system of equations is of course closely related to the heat equation ∂_t u(t, x) = 1 + ∂_{xx} u(t, x) on [0,∞) × R with initial condition u(0, x) = 0. As is well-known, uniqueness for this equation fails to hold without imposing some growth conditions as |x| → ∞. See e.g. Tychonov's counterexample to uniqueness, which can be found in John [26, Chapter 7].
To compare Theorem 6 with the results of Massoulié [32], let us consider a very simple situation. Let S = Z^d be endowed with E = {(i, j) : |i − j| = 0 or 1}, let h_i(x) = 1 + x for all i ∈ S and ϕ_{ij}(t) = ϕ(t) > 0 for all (i, j) ∈ E. Then Theorem 6 applies as soon as ϕ ∈ L^1_loc([0,∞)), see Remark 5-(ii). Theorem 1 of [32] does not apply, for two reasons: sup_{i∈S} h_i is not bounded and, more importantly, it does not hold that Σ_{(i,j)∈E} ϕ_{ij} ∈ L^1_loc([0,∞)), since Σ_{(i,j)∈E} ϕ_{ij}(t) = +∞ for all t ≥ 0. On the contrary, [32, Theorem 2] applies, but only in the subcritical case where (2d + 1) ∫_0^∞ ϕ(t) dt < 1.
Proof. We first prove uniqueness. Let thus (Z^i_t)_{i∈S, t≥0} and ( ˜Z^i_t)_{i∈S, t≥0} be two solutions to (1) satisfying the required condition. Set Δ^i_t = ∫_0^t |d(Z^i_s − ˜Z^i_s)| for i ∈ S, t ≥ 0. In other words, Δ^i_t is the total variation norm of the signed measure d(Z^i_s − ˜Z^i_s) on [0, t]. We also put δ^i_t = E[Δ^i_t] and first prove that

(2)  δ^i_t ≤ c_i ∫_0^t Σ_{j→i} |ϕ_{ji}(t − s)| δ^j_s ds.

We have

Δ^i_t = ∫_0^t ∫_0^∞ | 1_{{z ≤ h_i( Σ_{j→i} ∫_0^{s−} ϕ_{ji}(s − u) dZ^j_u )}} − 1_{{z ≤ h_i( Σ_{j→i} ∫_0^{s−} ϕ_{ji}(s − u) d ˜Z^j_u )}} | π^i(ds dz).

Taking expectations, we deduce that

(3)  δ^i_t = ∫_0^t E[ | h_i( Σ_{j→i} ∫_0^{s−} ϕ_{ji}(s − u) dZ^j_u ) − h_i( Σ_{j→i} ∫_0^{s−} ϕ_{ji}(s − u) d ˜Z^j_u ) | ] ds ≤ c_i Σ_{j→i} E[ ∫_0^t ∫_0^{s−} |ϕ_{ji}(s − u)| dΔ^j_u ds ]

by Assumption 4-(a). Using Lemma 22, we see that ∫_0^t ds ∫_0^{s−} |ϕ_{ji}(s − u)| dΔ^j_u = ∫_0^t |ϕ_{ji}(t − u)| Δ^j_u du, which, plugged into (3), yields (2).

Set δ_t = Σ_{i∈S} p_i δ^i_t, where the weights p_i were introduced in Assumption 4. By assumption, δ_t is well-defined and finite. We infer from (2) that

δ_t ≤ ∫_0^t Σ_{i∈S} p_i c_i Σ_{j→i} |ϕ_{ji}(t − s)| δ^j_s ds.

By Assumption 4-(c),

δ_t ≤ ∫_0^t Σ_{j∈S} δ^j_s Σ_{i : (j,i)∈E} c_i p_i |ϕ_{ji}(t − s)| ds ≤ ∫_0^t Σ_{j∈S} p_j δ^j_s φ(t − s) ds = ∫_0^t φ(t − s) δ_s ds.

Since δ is locally bounded and φ is locally integrable, the Grönwall-type Lemma 23 implies that δ_t = 0 for all t ≥ 0, which proves uniqueness.
We now quickly prove existence by a Picard iteration. Let Z^{i,0}_t = 0 and, for n ≥ 0,

(4)  Z^{i,n+1}_t = ∫_0^t ∫_0^∞ 1_{{z ≤ h_i( Σ_{j→i} ∫_0^{s−} ϕ_{ji}(s − u) dZ^{j,n}_u )}} π^i(ds dz).

We define δ^{i,n}_t = E[ ∫_0^t |dZ^{i,n+1}_s − dZ^{i,n}_s| ] and δ^n_t = Σ_{i∈S} p_i δ^{i,n}_t. As in the proof of uniqueness, we obtain, for n ≥ 0,

(5)  δ^{n+1}_t ≤ ∫_0^t φ(t − s) δ^n_s ds.

Next, we put m^{i,n}_t = E[Z^{i,n}_t]. By Assumption 4-(a), h_i(x) ≤ h_i(0) + c_i |x|, whence

m^{i,n+1}_t ≤ E[ ∫_0^t ( h_i(0) + c_i Σ_{j→i} ∫_0^{s−} |ϕ_{ji}(s − u)| dZ^{j,n}_u ) ds ] ≤ ∫_0^t ( h_i(0) + c_i Σ_{j→i} |ϕ_{ji}(t − s)| m^{j,n}_s ) ds,

where we used that, by Lemma 22, ∫_0^t ∫_0^{s−} |ϕ_{ji}(s − u)| dZ^{j,n}_u ds = ∫_0^t |ϕ_{ji}(t − u)| Z^{j,n}_u du. Setting u^n_t = Σ_{i∈S} p_i m^{i,n}_t and using Assumption 4-(b)-(c),

(6)  u^{n+1}_t ≤ t Σ_{i∈S} h_i(0) p_i + ∫_0^t Σ_{i∈S} p_i c_i Σ_{j→i} |ϕ_{ji}(t − s)| m^{j,n}_s ds ≤ Ct + ∫_0^t φ(t − s) u^n_s ds.

Since u^0_t = 0 and φ is locally integrable, we easily check by induction that u^n is locally bounded for all n ≥ 0. Consequently, δ^n is also locally bounded for all n ≥ 0. Lemma 23-(ii) implies that for all T ≥ 0, Σ_{n≥1} δ^n_T < ∞. This classically implies that the Picard sequence is Cauchy and thus converges: there exists a family (Z^i_t)_{i∈S, t≥0} of càdlàg nonnegative adapted processes such that for all T ≥ 0, lim_n Σ_{i∈S} p_i E[ ∫_0^T |dZ^i_s − dZ^{i,n}_s| ] = 0. It is then not hard to pass to the limit in (4) to deduce that (Z^i_t)_{i∈S, t≥0} solves (1). Finally, Lemma 23-(iii) implies that sup_n u^n_t < ∞ for all t ≥ 0, from which Σ_{i∈S} p_i E[Z^i_t] < ∞ as desired. □
3. Mean-field limit

In this section, we work in the following setting.

Assumption 7. Let h : R → [0,∞) be such that |h|_lip = sup_{x≠y} |x − y|^{−1} |h(x) − h(y)| < ∞ and let ϕ : [0,∞) → R be a locally square integrable function.

For each N ≥ 1, we consider the complete graph G_N with vertices S_N = {1, …, N} and edges E_N = {(i, j) : i, j ∈ S_N}, i.e. all pairs of points in S_N are connected. We put h^N_i = h for all i ∈ S_N and ϕ^N_{ji} = N^{−1}ϕ for (i, j) ∈ E_N.

Under Assumption 7, the triplet (G_N, ϕ^N, h^N) satisfies Assumption 4 (the graph G_N is finite) for each N ≥ 1. Therefore, a Hawkes process (Z^{N,1}_t, …, Z^{N,N}_t)_{t≥0} with parameters (G_N, ϕ^N, h^N) is uniquely defined by Theorem 6. Introduce the limit equation

(7)  Z_t = ∫_0^t ∫_0^∞ 1_{{z ≤ h( ∫_0^s ϕ(s − u) dE[Z_u] )}} π(ds dz) for every t ≥ 0,

where π(ds dz) is a Poisson measure on [0,∞) × [0,∞) with intensity measure ds dz. Also, dE[Z_u] is the measure on [0,∞) associated to the (necessarily) non-decreasing function u → E[Z_u]. Note that a solution Z = (Z_t)_{t≥0}, if it exists, is an inhomogeneous Poisson process on [0,∞) with (deterministic) intensity h( ∫_0^t ϕ(t − u) dE[Z_u] ).
3.1. Propagation of chaos. The main result of this section reads as follows. We denote by D([0,∞), R) the set of càdlàg R-valued functions on [0,∞) and by P(D([0,∞), R)) the set of probability measures on D([0,∞), R).

Theorem 8. Work under Assumption 7.

(i) There is a pathwise unique solution (Z_t)_{t≥0} to (7) such that (E[Z_t])_{t≥0} is locally bounded.

(ii) It is possible to build simultaneously the Hawkes process (Z^{N,1}_t, …, Z^{N,N}_t)_{t≥0} with parameters (G_N, ϕ^N, h^N) and an i.i.d. family (Z̄^i_t)_{t≥0, i=1,…,N} of solutions to (7) in such a way that for all T > 0, all i = 1, …, N,

E[ sup_{[0,T]} |Z^{N,i}_t − Z̄^i_t| ] ≤ C_T N^{−1/2},

the constant C_T depending only on h, ϕ and T (see Remark 9 below for some bounds on C_T in a few situations).

(iii) Consequently, we have the mean-field approximation

N^{−1} Σ_{i=1}^N δ_{(Z^{N,i}_t)_{t≥0}} → L((Z_t)_{t≥0}) in probability, as N → ∞,

where P(D([0,∞), R)) is endowed with the weak convergence topology associated with the topology (on D([0,∞), R)) of uniform convergence on compact time intervals.
Proof. For (Z_t)_{t≥0} a solution to (7), the equation satisfied by m_t = E[Z_t] writes

(8)  m_t = ∫_0^t h( ∫_0^s ϕ(s − u) dm_u ) ds for every t ≥ 0.

By Lemma 24, we know that this equation has a unique non-decreasing locally bounded solution, which furthermore is of class C^1 on [0,∞). We now split the proof into several steps.

Step 1. Here we prove the well-posedness of (7). For (Z_t)_{t≥0} a solution to (7), its expectation m_t = E[Z_t] solves (8) and is thus uniquely determined. Thus the right-hand side of (7) is uniquely determined, which proves uniqueness. For the existence, consider m the unique solution to (8) and put Z_t = ∫_0^t ∫_0^∞ 1_{{z ≤ h( ∫_0^s ϕ(s − u) dm_u )}} π(ds dz). We then only have to prove that E[Z_t] = m_t. But E[Z_t] = ∫_0^t h( ∫_0^s ϕ(s − u) dm_u ) ds, which is nothing but m_t since m solves (8).
Step 2. We next introduce a suitable coupling. Let (π^i(ds dz))_{i≥1} be an i.i.d. family of Poisson measures with common intensity measure ds dz on [0,∞) × [0,∞). For each N ≥ 1, we consider the Hawkes process (Z^{N,1}_t, …, Z^{N,N}_t)_{t≥0} defined by

Z^{N,i}_t = ∫_0^t ∫_0^∞ 1_{{z ≤ h( N^{−1} Σ_{j=1}^N ∫_0^{s−} ϕ(s − u) dZ^{N,j}_u )}} π^i(ds dz).

Next, still denoting by m the unique solution to (8), we put, for every i ≥ 1,

Z̄^i_t = ∫_0^t ∫_0^∞ 1_{{z ≤ h( ∫_0^{s−} ϕ(s − u) dm_u )}} π^i(ds dz).

Step 3. Here we introduce Δ^i_N(t) = ∫_0^t |d(Z̄^i_u − Z^{N,i}_u)| and δ_N(t) = E[Δ^i_N(t)], which obviously does not depend on i (by exchangeability). Observe that sup_{[0,t]} |Z̄^i_u − Z^{N,i}_u| ≤ Δ^i_N(t), whence

(9)  E[ sup_{[0,t]} |Z̄^i_u − Z^{N,i}_u| ] ≤ δ_N(t).

The first inequality follows from the fact that |Z̄^i_u − Z^{N,i}_u| ≤ |∫_0^u d(Z̄^i_r − Z^{N,i}_r)| ≤ ∫_0^u |d(Z̄^i_r − Z^{N,i}_r)|.

We show in this step that for all t > 0,

(10)  δ_N(t) ≤ |h|_lip N^{−1/2} ∫_0^t ( ∫_0^s ϕ^2(s − u) dm_u )^{1/2} ds + |h|_lip ∫_0^t |ϕ(t − s)| δ_N(s) ds.

First, Δ^1_N(t) equals

∫_0^t ∫_0^∞ | 1_{{z ≤ h( N^{−1} Σ_{j=1}^N ∫_0^{s−} ϕ(s − u) dZ^{N,j}_u )}} − 1_{{z ≤ h( ∫_0^{s−} ϕ(s − u) dm_u )}} | π^1(ds dz).

Taking expectations, we find

δ_N(t) = ∫_0^t E[ | h( ∫_0^s ϕ(s − u) dm_u ) − h( N^{−1} Σ_{j=1}^N ∫_0^s ϕ(s − u) dZ^{N,j}_u ) | ] ds,

whence

(11)  δ_N(t) ≤ |h|_lip ∫_0^t E[ | ∫_0^s ϕ(s − u) dm_u − N^{−1} Σ_{j=1}^N ∫_0^s ϕ(s − u) dZ̄^j_u | ] ds + |h|_lip ∫_0^t E[ N^{−1} Σ_{j=1}^N ∫_0^s |ϕ(s − u)| |d(Z̄^j_u − Z^{N,j}_u)| ] ds = |h|_lip (A + B).

Using exchangeability and Lemma 22,

(12)  B ≤ ∫_0^t E[ ∫_0^s |ϕ(s − u)| dΔ^1_N(u) ] ds = ∫_0^t |ϕ(t − u)| δ_N(u) du.

Next, we use that the X^j_s = ∫_0^s ϕ(s − u) dZ̄^j_u are i.i.d. with mean ∫_0^s ϕ(s − u) dm_u, whence

(13)  A ≤ N^{−1/2} ∫_0^t (Var X^1_s)^{1/2} ds.

But it holds that

X^1_s = ∫_0^s ∫_0^∞ 1_{{z ≤ h( ∫_0^u ϕ(u − r) dm_r )}} ϕ(s − u) π^1(du dz).

Since the integrand is deterministic, denoting by π̃^1 the compensated Poisson measure,

X^1_s − E[X^1_s] = ∫_0^s ∫_0^∞ 1_{{z ≤ h( ∫_0^u ϕ(u − r) dm_r )}} ϕ(s − u) π̃^1(du dz).

Recalling Assumption 7, we find

(14)  Var X^1_s = ∫_0^s ϕ^2(s − u) h( ∫_0^u ϕ(u − r) dm_r ) du = ∫_0^s ϕ^2(s − u) dm_u.

Gathering (11), (12), (13) and (14) yields (10).
Step 4. Here we conclude that for all T ≥ 0, sup_{[0,T]} δ_N(t) ≤ C_T N^{−1/2}. This will end the proof of (ii) by (9). This is not hard: it suffices to start from (10), to apply Lemma 23-(i) and to observe that ∫_0^t ( ∫_0^s ϕ^2(s − u) dm_u )^{1/2} ds is locally bounded (which follows from the assumption that ϕ is locally square integrable and the fact that m is C^1 on [0,∞)).

Step 5. Finally, (iii) follows from (ii): by Sznitman [42, Proposition 2.2], it suffices to check that for each fixed ℓ ≥ 1, ((Z^{N,1}_t)_{t≥0}, …, (Z^{N,ℓ}_t)_{t≥0}) goes in law, as N → ∞, to ℓ independent copies of (Z_t)_{t≥0} (for the uniform topology on compact time intervals). This clearly follows from (ii). □
We now show that the constant C_T obtained can be quite satisfactory.

Remark 9. Work under Assumption 7.

(a) Assume that |h|_lip ∫_0^∞ |ϕ(s)| ds < 1 (subcritical case) and that ∫_0^∞ ϕ^2(s) ds < ∞. Then (ii) of Theorem 8 holds with C_T = CT, for some constant C > 0. This slow growth is satisfactory.

(b) Assume that h(x) = μ + x for some μ > 0 and that ϕ(t) = a e^{−bt} for some a > b > 0 (if a < b, then point (a) applies). Then m_t = E[Z_t] ∼ μa(a − b)^{−2} e^{(a−b)t} as t → ∞ and (ii) of Theorem 8 holds with C_T = C e^{(a−b)T}, for some constant C > 0. This is again quite satisfactory: the error is of order N^{−1/2} m_T.
Proof. We start with (a). Using the notation of the previous proof, it suffices (see (9)) to show that δ_N(T) ≤ CTN^{−1/2}. Setting Λ = |h|_lip ∫_0^∞ |ϕ(s)| ds < 1, starting from (10) and observing that δ_N is non-decreasing, we find δ_N(t) ≤ |h|_lip N^{−1/2} ∫_0^t ( ∫_0^s ϕ^2(s − u) dm_u )^{1/2} ds + Λ δ_N(t), whence δ_N(t) ≤ C N^{−1/2} ∫_0^t ( ∫_0^s ϕ^2(s − u) dm_u )^{1/2} ds. We thus only have to check that ∫_0^s ϕ^2(s − u) dm_u is bounded on [0,∞). Since ∫_0^∞ ϕ^2(s) ds < ∞, it suffices to prove that m′ is bounded on [0,∞). But m′_t = h( ∫_0^t ϕ(t − u) m′_u du ) ≤ h(0) + |h|_lip ∫_0^t |ϕ(t − u)| m′_u du, whence sup_{[0,T]} m′_t ≤ h(0) + Λ sup_{[0,T]} m′_t and thus sup_{[0,T]} m′_t ≤ h(0)/(1 − Λ) for any T > 0.

We next check (b). First, (8) rewrites m_t = μt + a ∫_0^t ∫_0^s e^{−b(s−u)} dm_u ds, with unique solution

m_t = −μbt/(a − b) + μa (e^{(a−b)t} − 1)/(a − b)^2 ∼ μa (a − b)^{−2} e^{(a−b)t}.

Next, using (10) and the explicit expressions of h, ϕ and m, we find

δ_N(t) ≤ N^{−1/2} ∫_0^t ( ∫_0^s ϕ^2(s − u) dm_u )^{1/2} ds + ∫_0^t ϕ(t − s) δ_N(s) ds ≤ C N^{−1/2} e^{(a−b)t/2} + a ∫_0^t e^{−b(t−s)} δ_N(s) ds.

Setting u_N(t) = δ_N(t) e^{bt}, we get u_N(t) ≤ C N^{−1/2} e^{(a+b)t/2} + a ∫_0^t u_N(s) ds. By Grönwall's lemma, u_N(t) ≤ C N^{−1/2} e^{(a+b)t/2} + a ∫_0^t C N^{−1/2} e^{(a+b)s/2} e^{a(t−s)} ds. One easily deduces, since a > b, that u_N(t) ≤ C N^{−1/2} e^{at}, so that δ_N(t) ≤ C N^{−1/2} e^{(a−b)t}. The use of (9) ends the proof. □
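The explicit formula for m can be checked numerically. The sketch below is our own illustration (function names and the discretization are arbitrary): it integrates (8) for h(x) = μ + x and ϕ(t) = a e^{−bt} with a forward Euler scheme, using the auxiliary state y(t) = ∫_0^t e^{−b(t−u)} dm_u, so that m′ = μ + a y and y′ = m′ − b y, and compares the result with the closed form from the proof of Remark 9-(b).

```python
import math

def m_numeric(mu, a, b, T, n=200000):
    """Forward Euler for m solving (8) with h(x) = mu + x, phi(t) = a*exp(-b*t).
    State y(t) = int_0^t exp(-b(t-u)) dm_u, so that m'(t) = mu + a*y(t)
    and y'(t) = m'(t) - b*y(t)."""
    dt = T / n
    y = m = 0.0
    for _ in range(n):
        mp = mu + a * y          # current value of m'(t)
        y += dt * (mp - b * y)   # y'(t) = m'(t) - b*y(t)
        m += dt * mp
    return m

def m_explicit(mu, a, b, t):
    """Closed form obtained in the proof of Remark 9-(b) (case a > b)."""
    return (-mu * b * t / (a - b)
            + mu * a * (math.exp((a - b) * t) - 1.0) / (a - b) ** 2)

# For mu = 1, a = 2, b = 1 and T = 3, both values are close to 35.17.
```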
3.2. Large time behaviour. We now address the important problem of the large time behaviour. Since the solution (Z_t)_{t≥0} to (7) is nothing but an inhomogeneous Poisson process, its large-time behaviour is easily and precisely described, provided we have sufficient information on the solution to (8). The question is thus: can we use the large time estimates of the mean-field limit to describe the large-time behaviour of the true Hawkes process with a large number of particles? To fix ideas, we consider the linear case. For A a symmetric nonnegative matrix, we denote by N(0, A) the centered Gaussian distribution with covariance matrix A.

We treat separately the subcritical and supercritical cases.
Theorem 10. Work under Assumption 7 with ϕ nonnegative and h(x) = μ + x for some μ > 0. Assume also that Λ = ∫_0^∞ ϕ(s) ds < 1. For each N ≥ 1, consider the Hawkes process (Z^{N,1}_t, …, Z^{N,N}_t)_{t≥0} with parameters (G_N, ϕ^N, h^N). Consider also the unique solution (m_t)_{t≥0} to (8).

1. We have m_t ∼ a_0 t as t → ∞, where a_0 = μ/(1 − Λ).

2. For any fixed i ≥ 1, Z^{N,i}_t/m_t tends to 1 in probability as t → ∞, uniformly in N. More precisely, E[ |Z^{N,i}_t/m_t − 1| ] ≤ C m_t^{−1/2} for some constant C.

3. For any fixed ℓ ≥ 1, (m_t^{1/2} (Z^{N,i}_t/m_t − 1))_{i=1,…,ℓ} goes in law to N(0, I_ℓ) as (t, N) → (∞, ∞) (without condition on the regime).
Point 2 is of course related to the classical law of large numbers for multivariate Hawkes processes, see e.g. Brémaud-Massoulié [9] or [3]. What we prove here is that this law of large numbers is uniform in N. From this result, we deduce that Pr( |Z^{N,i}_t/m_t − 1| > ε ) ≤ C ε^{−1} m_t^{−1/2}, uniformly in N. In view of the papers by Bordenave-Torrisi [8] and Zhu [49, 48], which concern one-dimensional processes, one might expect that a much faster decay (a large deviation bound) could be proved under a Cramér condition on ϕ. It would be interesting to decide whether such a bound is uniform in N.
Theorem 11. Work under Assumption 7 with ϕ nonnegative and h(x) = μ + x for some μ > 0. Assume also that Λ = ∫_0^∞ ϕ(s) ds ∈ (1, ∞]. Assume finally that t → ∫_0^t |dϕ(s)| has at most polynomial growth. For each N ≥ 1, consider the Hawkes process (Z^{N,1}_t, …, Z^{N,N}_t)_{t≥0} with parameters (G_N, ϕ^N, h^N). Consider also the unique solution (m_t)_{t≥0} to (8).

1. We have m_t ∼ a_0 e^{α_0 t} as t → ∞, where α_0 > 0 is determined by Lϕ(α_0) = 1 and where a_0 = μ α_0^{−2} ( ∫_0^∞ t ϕ(t) e^{−α_0 t} dt )^{−1}.

2. For any fixed i ≥ 1, Z^{N,i}_t/m_t tends to 1 in probability as (t, N) → (∞, ∞). More precisely, there is a constant C such that E[ |Z^{N,i}_t/m_t − 1| ] ≤ m_t^{−1/2} + C N^{−1/2} (1 + m_t^{−1}).

3. For any fixed ℓ ≥ 1,

(i) (m_t^{1/2} (Z^{N,i}_t/m_t − 1))_{i=1,…,ℓ} goes in law to N(0, I_ℓ) if t → ∞ and N → ∞ with m_t/N → 0;

(ii) (N^{1/2} (Z^{N,i}_t/m_t − 1))_{i=1,…,ℓ} goes in law to (X, …, X) if t → ∞ and N → ∞ with m_t/N → ∞. Here X is an N(0, σ^2)-distributed random variable, where σ^2 = α_0^2 μ^{−2} ∫_0^∞ e^{−2α_0 s} m′_s ds.
Let us summarize. At first order (law of large numbers), the mean-field approximation is always good for large times. At second order (central limit theorem), the mean field approximation is always good for large times in the subcritical case, but fails to be relevant for too large times (depending on N ) in the supercritical case: the independence property breaks down.
In the supercritical case, we have the technical condition that $t \mapsto \int_0^t |d\varphi(s)|$ has at most polynomial growth. This is useful to obtain some precise estimates on the solution $m$ to (8). It is, e.g., always satisfied when $\varphi$ is bounded and non-increasing, as is often the case in applications. It is slightly restrictive however, since it forces $\varphi(0)$ to be finite.
It should be possible to study the critical case as well, but the situation is then more intricate: many regimes might arise. With a little more work, we could also study, in the supercritical case, the regime where $m_t/N \to x \in (0,\infty)$.
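In the supercritical case, $\alpha_0$ and $a_0$ are only defined implicitly through the Laplace transform $\mathcal{L}_\varphi$. As a sketch of how one would compute them in practice (our own illustration, under the hypothetical kernel $\varphi(t) = \Lambda\beta e^{-\beta t}$ with $\Lambda > 1$), the snippet below finds $\alpha_0$ by bisection on a numerical quadrature of $\mathcal{L}_\varphi$ and evaluates $a_0 = \mu\alpha_0^{-2}(\int_0^\infty t\varphi(t)e^{-\alpha_0 t}\,dt)^{-1}$. For this particular kernel both quantities are also known in closed form ($\alpha_0 = \beta(\Lambda-1)$ and $a_0 = \mu\Lambda/(\beta(\Lambda-1)^2)$), which serves as a check.

```python
import math

# Hypothetical supercritical kernel (our choice): phi(t) = Lam*beta*exp(-beta*t), Lam > 1
Lam, beta, mu = 2.0, 1.0, 1.0

def phi(t):
    return Lam * beta * math.exp(-beta * t)

def quad(f, h=2e-3, T=50.0):
    # crude trapezoidal quadrature of int_0^T f(t) dt (tail beyond T is negligible here)
    n = int(T / h)
    s = 0.5 * (f(0.0) + f(T))
    for k in range(1, n):
        s += f(k * h)
    return s * h

def laplace_phi(a):
    return quad(lambda t: phi(t) * math.exp(-a * t))

# L_phi is decreasing with L_phi(0+) = Lam > 1, so bisection locates alpha0
lo, hi = 1e-9, 50.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if laplace_phi(mid) > 1.0 else (lo, mid)
alpha0 = 0.5 * (lo + hi)

moment = quad(lambda t: t * phi(t) * math.exp(-alpha0 * t))
a0 = mu / (alpha0 ** 2 * moment)
# closed forms for this kernel: alpha0 = beta*(Lam-1), a0 = mu*Lam/(beta*(Lam-1)**2)
```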
In order to prove Theorems 10 and 11, we will use the following central limit theorem for martingales. For two càdlàg martingales $M$ and $N$, we denote by $[M,N]$ the quadratic covariation defined by $[M,N]_t = M_t N_t - \int_0^t M_{s-}\,dN_s - \int_0^t N_{s-}\,dM_s$. When $M$ and $N$ are purely discontinuous, it holds that $[M,N]_t = \sum_{s\le t} \Delta M_s \Delta N_s$, see Jacod-Shiryaev [25, Chapter I, §4e].
Lemma 12. Let $\ell \ge 1$ be fixed. For $N \ge 1$, consider a family $(M_t^{N,1}, \ldots, M_t^{N,\ell})_{t\ge0}$ of $\ell$-dimensional local martingales satisfying $M_0^{N,i} = 0$. Assume that all their jumps are uniformly bounded and that $[M^{N,i}, M^{N,j}]_t = 0$ for every $N \ge 1$, $i \ne j$ and $t \ge 0$. Assume also that there is a continuous increasing function $(v_t)_{t\ge0} : [0,\infty) \mapsto [0,\infty)$ such that for all $i = 1,\ldots,\ell$, $\lim_{(t,N)\to(\infty,\infty)} v_t^{-2}[M^{N,i}, M^{N,i}]_t = 1$ in probability. In the case where $v_\infty = \lim_{t\to\infty} v_t < \infty$, assume moreover that for all $i = 1,\ldots,\ell$, all $t_0 > 0$, uniformly in $t \ge t_0$, $\lim_{N\to\infty} [M^{N,i}, M^{N,i}]_t = v_t^2$ in probability.
Then $v_t^{-1}(M_t^{N,1}, \ldots, M_t^{N,\ell})$ converges in law to the Gaussian distribution $\mathcal{N}(0, I_\ell)$ as $(t,N) \to (\infty,\infty)$, where $I_\ell$ is the $\ell \times \ell$ identity matrix.
Proof. Let $(t_N)_{N\ge1}$ be a sequence of positive numbers such that $t_N \to \infty$. We want to prove that $v_{t_N}^{-1}(M_{t_N}^{N,1}, \ldots, M_{t_N}^{N,\ell})$ converges in law to $\mathcal{N}(0, I_\ell)$. For all $u \in [0,1]$, set
$$\tau_u^N = \inf\{t \ge 0 : v_t^2 \ge u\, v_{t_N}^2\}.$$
Since $v$ is increasing and continuous, $\tau^N$ is also continuous and increasing for each $N$. We also clearly have $v_{\tau_u^N}^2 = u\, v_{t_N}^2$ for all $u \in [0,1]$ and $\tau_1^N = t_N$. Finally, for each $u > 0$ fixed, the sequence $\tau_u^N$ is increasing.
For all $u \in (0,1]$, $\lim_N v_{\tau_u^N}^{-2}\,[M^{N,i}, M^{N,i}]_{\tau_u^N} = 1$ in probability. Indeed, in the case $v_\infty = \infty$, this follows from the facts that $\lim_N \tau_u^N = \infty$ and $\lim_{(t,N)\to(\infty,\infty)} v_t^{-2}[M^{N,i}, M^{N,i}]_t = 1$. When $v_\infty < \infty$, the additional assumption (uniformity in $t \ge t_0$ of the convergence as $N \to \infty$) clearly suffices, since the sequence $\tau_u^N$ is increasing and thus bounded from below.
We define the martingales $(L_u^{N,i})_{u\in[0,1]}$ by $L_u^{N,i} = v_{t_N}^{-1} M_{\tau_u^N}^{N,i}$. All their jumps are uniformly bounded (because those of $M^{N,i}$ are assumed to be uniformly bounded and because $\sup_N v_{t_N}^{-1} < \infty$ since $v$ is increasing). We also have $[L^{N,i}, L^{N,j}]_u = 0$ for all $i \ne j$, all $u \in [0,1]$. Furthermore, using that $v_{\tau_u^N}^2 = u\, v_{t_N}^2$,
$$[L^{N,i}, L^{N,i}]_u = \frac{[M^{N,i}, M^{N,i}]_{\tau_u^N}}{v_{t_N}^2} = \frac{[M^{N,i}, M^{N,i}]_{\tau_u^N}}{v_{\tau_u^N}^2}\, u \to u$$
in probability. Therefore, according to Jacod-Shiryaev [25] (Theorem VIII-3.11), the process $(L_u^{N,1}, \ldots, L_u^{N,\ell})_{u\in[0,1]}$ converges in law to $(B_u^1, \ldots, B_u^\ell)_{u\in[0,1]}$ where the $B^i$ are independent standard Brownian motions. In particular, $(L_1^{N,1}, \ldots, L_1^{N,\ell})$ goes in law to $\mathcal{N}(0, I_\ell)$. To conclude the proof, it thus suffices to observe that $L_1^{N,i} = v_{t_N}^{-1} M_{\tau_1^N}^{N,i} = v_{t_N}^{-1} M_{t_N}^{N,i}$.
We can now give the
Proof of Theorem 10. In the present (linear) case, we can rewrite (8) as $m_t = \int_0^t (\mu + \int_0^s \varphi(s-u)\,dm_u)\,ds = \mu t + \int_0^t \varphi(t-s)\, m_s\,ds$ by Lemma 22. Recalling that $\Lambda = \int_0^\infty \varphi(s)\,ds < 1$, we have $m'_t \sim a_0$ and $m_t \sim a_0 t$ as $t \to \infty$, where $a_0 = \mu/(1-\Lambda)$, which proves point 1. The proof is now divided into several steps. Step 1 will also be used in the supercritical case.

Step 1. Recall that, for some i.i.d. family $(\pi^i(ds\,dz))_{i\ge1}$ of Poisson measures on $[0,\infty)\times[0,\infty)$ with intensity measure $ds\,dz$,
$$Z_t^{N,i} = \int_0^t \int_0^\infty \mathbf{1}_{\{z \le \mu + N^{-1}\sum_{j=1}^N \int_0^{s-} \varphi(s-u)\,dZ_u^{N,j}\}}\; \pi^i(ds\,dz).$$
We have $\mathbb{E}[Z_t^{N,i}] = m_t$. Indeed, by exchangeability, we see that $\mathbb{E}[Z_t^{N,i}] = \mathbb{E}[Z_t^{N,1}]$ and that
$$\mathbb{E}[Z_t^{N,1}] = \int_0^t \Big(\mu + N^{-1}\sum_{j=1}^N \int_0^s \varphi(s-u)\,d\mathbb{E}[Z_u^{N,j}]\Big)\,ds = \int_0^t \Big(\mu + \int_0^s \varphi(s-u)\,d\mathbb{E}[Z_u^{N,1}]\Big)\,ds,$$
whence $(\mathbb{E}[Z_t^{N,1}])_{t\ge0}$ solves (8), of which the unique solution is $(m_t)_{t\ge0}$ by Lemma 24.
We next introduce $U_t^{N,i} = Z_t^{N,i} - m_t$ and the martingales (here $\tilde\pi^i(ds\,dz) = \pi^i(ds\,dz) - ds\,dz$)
$$M_t^{N,i} = \int_0^t \int_0^\infty \mathbf{1}_{\{z \le \mu + N^{-1}\sum_{j=1}^N \int_0^{s-} \varphi(s-u)\,dZ_u^{N,j}\}}\; \tilde\pi^i(ds\,dz).$$
We consider the mean processes $\bar Z_t^N = N^{-1}\sum_1^N Z_t^{N,i}$, $\bar U_t^N = N^{-1}\sum_1^N U_t^{N,i}$ and finally $\bar M_t^N = N^{-1}\sum_1^N M_t^{N,i}$. An easy computation using (8) and Lemma 22 shows that
$$U_t^{N,i} = M_t^{N,i} + \int_0^t \Big(N^{-1}\sum_{j=1}^N \int_0^s \varphi(s-u)\,dZ_u^{N,j}\Big)\,ds - m_t = M_t^{N,i} + \int_0^t \int_0^s \varphi(s-u)\Big(N^{-1}\sum_{j=1}^N dZ_u^{N,j} - dm_u\Big)\,ds = M_t^{N,i} + \int_0^t \varphi(t-s)\Big(N^{-1}\sum_{j=1}^N Z_s^{N,j} - m_s\Big)\,ds,$$
so that
$$U_t^{N,i} = M_t^{N,i} + \int_0^t \varphi(t-s)\,\bar U_s^N\,ds. \tag{15}$$
This directly implies that
$$\bar U_t^N = \bar M_t^N + \int_0^t \varphi(t-s)\,\bar U_s^N\,ds. \tag{16}$$
Next, we observe that $[M^{N,i}, M^{N,j}]_t = 0$ for all $i \ne j$ (because these martingales a.s. never jump simultaneously) and that $[M^{N,i}, M^{N,i}]_t = Z_t^{N,i}$. Hence $[\bar M^N, \bar M^N]_t = N^{-1}\bar Z_t^N$. We thus have $\mathbb{E}[(M_t^{N,i})^2] = \mathbb{E}[Z_t^{N,i}] = m_t$ and $\mathbb{E}[(\bar M_t^N)^2] = N^{-1}\mathbb{E}[\bar Z_t^N] = N^{-1} m_t$.
Step 2. Recalling (16) and using that $\Lambda = \int_0^\infty \varphi(s)\,ds < 1$, we observe that $\sup_{[0,t]} |\bar U_s^N| \le \sup_{[0,t]} |\bar M_s^N| + \Lambda \sup_{[0,t]} |\bar U_s^N|$. Consequently,
$$\mathbb{E}\Big[\sup_{[0,t]} |\bar U_s^N|\Big] \le (1-\Lambda)^{-1}\,\mathbb{E}\Big[\sup_{[0,t]} |\bar M_s^N|\Big] \le C N^{-1/2} m_t^{1/2}$$
by the Doob and Cauchy-Schwarz inequalities. We easily deduce that
$$\mathbb{E}\Big[\int_0^t \varphi(t-s)\,|\bar U_s^N|\,ds\Big] \le \Lambda\,\mathbb{E}\Big[\sup_{[0,t]} |\bar U_s^N|\Big] \le C N^{-1/2} m_t^{1/2},$$
whence finally, recalling (15),
$$m_t^{-1}\,\mathbb{E}[|U_t^{N,i}|] \le m_t^{-1}\,\mathbb{E}[|M_t^{N,i}|] + C m_t^{-1} N^{-1/2} m_t^{1/2} \le C m_t^{-1/2}.$$
This says that $\mathbb{E}[|Z_t^{N,i}/m_t - 1|] \le C m_t^{-1/2}$ and thus proves point 2.
Step 3. We then fix $\ell \ge 1$ and use (15) to write, for $i = 1, \ldots, \ell$,
$$m_t^{1/2}(Z_t^{N,i}/m_t - 1) = m_t^{-1/2}\, U_t^{N,i} = m_t^{-1/2}\, M_t^{N,i} + m_t^{-1/2} \int_0^t \varphi(t-s)\,\bar U_s^N\,ds.$$
First, $\mathbb{E}[m_t^{-1/2} \int_0^t \varphi(t-s)\,|\bar U_s^N|\,ds] \le C N^{-1/2}$, which tends to 0 as $(t,N)\to(\infty,\infty)$, by the estimate proved in Step 2. To conclude the proof of point 3, we thus only have to prove that $(m_t^{-1/2} M_t^{N,i})_{i=1,\ldots,\ell}$ goes in law to $\mathcal{N}(0, I_\ell)$ as $(t,N)\to(\infty,\infty)$. To this end, we apply Lemma 12. The jumps of the martingales $M^{N,i}$ are uniformly bounded (by 1) and we have seen that $[M^{N,i}, M^{N,j}]_t = 0$ for all $i \ne j$. The function $(m_t)_{t\ge0}$ is continuous and increases to infinity. It thus suffices to check that, as $(t,N)\to(\infty,\infty)$, $m_t^{-1}[M^{N,i}, M^{N,i}]_t \to 1$ in probability. Since $[M^{N,i}, M^{N,i}]_t = Z_t^{N,i}$, this is an immediate consequence of point 2.
We now turn to the supercritical case.

Proof of Theorem 11. We rewrite (8) as $m_t = \int_0^t (\mu + \int_0^s \varphi(s-u)\,dm_u)\,ds = \mu t + \int_0^t \varphi(t-s)\, m_s\,ds$ by Lemma 22. This equation is studied in detail in Lemma 26: there is a unique $\alpha_0 > 0$ such that $\mathcal{L}_\varphi(\alpha_0) = 1$ and, defining $a_0 \in (0,\infty)$ as in the statement, we have $m_t \sim a_0 e^{\alpha_0 t}$ and $m'_t \sim a_0\alpha_0 e^{\alpha_0 t}$ as $t \to \infty$, which proves point 1. We also know that $\Gamma(t) = \sum_{n\ge1} \varphi^{\star n}(t) \sim (a_0\alpha_0^2/\mu)\, e^{\alpha_0 t}$, that $\Upsilon(t) = \int_0^t \Gamma(s)\,ds \sim (a_0\alpha_0/\mu)\, e^{\alpha_0 t}$, and further properties of $m, m', \Gamma, \Upsilon$ are proved in Lemma 26.
Step 1. We adopt the same notation as in the proof of Theorem 10-Step 1, all the results of which remain valid in the present case. Point 1 follows from Lemma 26-(a).

Step 2. First, (16) says exactly that $\bar U^N = \bar M^N + \varphi \star \bar U^N$. Using Lemma 26-(e), we deduce that $\bar U^N = \bar M^N + \Gamma \star \bar M^N$ (the processes $\bar U^N$ and $\bar M^N$ are clearly a.s. càdlàg and thus locally bounded). Since $\mathbb{E}[(\bar M_t^N)^2] = N^{-1} m_t$ by Step 1,
$$\mathbb{E}[|\bar U_t^N|] \le \mathbb{E}[|\bar M_t^N|] + \int_0^t \Gamma(t-s)\,\mathbb{E}[|\bar M_s^N|]\,ds \le N^{-1/2} m_t^{1/2} + \int_0^t \Gamma(t-s)\, N^{-1/2} m_s^{1/2}\,ds \le C N^{-1/2}(1 + m_t).$$
The last inequality easily follows from Lemma 26-(b). Using (16) again, we see that $\int_0^t \varphi(t-s)\,\bar U_s^N\,ds = \bar U_t^N - \bar M_t^N$, whence
$$\mathbb{E}\Big[\Big|\int_0^t \varphi(t-s)\,\bar U_s^N\,ds\Big|\Big] \le \mathbb{E}[|\bar M_t^N|] + \mathbb{E}[|\bar U_t^N|] \le N^{-1/2} m_t^{1/2} + C N^{-1/2}(1 + m_t) \le C N^{-1/2}(1 + m_t).$$
On the other hand, we know from Step 1 that $\mathbb{E}[(M_t^{N,i})^2] = m_t$. Using (15), we conclude that
$$\mathbb{E}[|Z_t^{N,i}/m_t - 1|] = m_t^{-1}\,\mathbb{E}[|U_t^{N,i}|] \le m_t^{-1/2} + C N^{-1/2}(1 + m_t^{-1}),$$
which ends the proof of point 2.
Step 3. We then fix $\ell \ge 1$ and write, for $i = 1, \ldots, \ell$, by (15),
$$Z_t^{N,i}/m_t - 1 = m_t^{-1}\, U_t^{N,i} = m_t^{-1}\, M_t^{N,i} + m_t^{-1} \int_0^t \varphi(t-s)\,\bar U_s^N\,ds.$$

Step 3.1. We first consider the regime $(t,N)\to(\infty,\infty)$ with $m_t/N \to 0$ and study
$$m_t^{1/2}(Z_t^{N,i}/m_t - 1) = m_t^{-1/2}\, M_t^{N,i} + m_t^{-1/2} \int_0^t \varphi(t-s)\,\bar U_s^N\,ds.$$
The second term tends to 0 in probability, because we can bound, using Step 2, its $L^1$-norm by $C m_t^{-1/2} N^{-1/2}(1 + m_t)$, which tends to 0 in the present regime. We thus just have to prove that $(m_t^{-1/2} M_t^{N,i})_{i=1,\ldots,\ell}$ goes in law to $\mathcal{N}(0, I_\ell)$. We use Lemma 12: the martingales $M^{N,i}$ have uniformly bounded (by 1) jumps and we have seen that $[M^{N,i}, M^{N,j}]_t = 0$ for $i \ne j$. The function $(m_t)_{t\ge0}$ is continuous and increases to infinity. It only remains to check that $m_t^{-1}[M^{N,i}, M^{N,i}]_t$ tends to 1 in probability. But $[M^{N,i}, M^{N,i}]_t = Z_t^{N,i}$, so that the conclusion follows from point 2.
Step 3.2. We finally consider the regime $(t,N)\to(\infty,\infty)$ with $m_t/N \to \infty$ and study
$$N^{1/2}(Z_t^{N,i}/m_t - 1) = N^{1/2} m_t^{-1}\, M_t^{N,i} + N^{1/2} m_t^{-1} \int_0^t \varphi(t-s)\,\bar U_s^N\,ds.$$
First, $N^{1/2} m_t^{-1}\, M_t^{N,i} \to 0$ in probability, because its $L^1$-norm is bounded by $N^{1/2} m_t^{-1/2}$ (recall that $\mathbb{E}[(M_t^{N,i})^2] = m_t$), which tends to 0 in the present regime. Since $V_t^N := N^{1/2} m_t^{-1} \int_0^t \varphi(t-s)\,\bar U_s^N\,ds$ does not depend on $i$, it only remains to prove that $V_t^N$ goes in law to $\mathcal{N}(0,\sigma^2)$. We write, using (16), recalling that $\bar U^N = \bar M^N + \Gamma \star \bar M^N$ (see Step 2) and integrating by parts (recall that $\Upsilon(t) = \int_0^t \Gamma(s)\,ds$),
$$V_t^N = N^{1/2} m_t^{-1}(\bar U_t^N - \bar M_t^N) = N^{1/2} m_t^{-1} \int_0^t \Gamma(t-s)\,\bar M_s^N\,ds = N^{1/2} m_t^{-1} \int_0^t \Upsilon(t-s)\,d\bar M_s^N.$$
Introduce $W_t^N = (\alpha_0/\mu)\, N^{1/2} \int_0^t e^{-\alpha_0 s}\,d\bar M_s^N$ and observe that, since $\mathbb{E}[[\bar M^N, \bar M^N]_t] = N^{-1} m_t$,
$$\mathbb{E}[(V_t^N - W_t^N)^2] = \mathbb{E}\Big[N \int_0^t \big(m_t^{-1}\Upsilon(t-s) - (\alpha_0/\mu) e^{-\alpha_0 s}\big)^2\, d[\bar M^N, \bar M^N]_s\Big] = \int_0^t \big(m_t^{-1}\Upsilon(t-s) - (\alpha_0/\mu) e^{-\alpha_0 s}\big)^2\, m'_s\,ds.$$
Lemma 26-(c) tells us that this tends to 0 as $t \to \infty$. We thus only have to prove that $W_t^N$ goes in law to $\mathcal{N}(0,\sigma^2)$ as $(t,N)\to(\infty,\infty)$.

This follows again from Lemma 12 (with $\ell = 1$): the jumps of the martingale $(W_t^N)_{t\ge0}$ are bounded by $(\alpha_0/\mu) N^{-1/2}$ (because those of $\bar M^N$ are bounded by $N^{-1}$). The function $v_t = (\alpha_0/\mu)(\int_0^t e^{-2\alpha_0 s}\, m'_s\,ds)^{1/2}$ is continuous and increasing to the finite limit $v_\infty = \sigma$ (which was defined in the statement). We thus only have to prove that (a) $v_t^{-2}[W^N, W^N]_t \to 1$ in probability as $(t,N)\to(\infty,\infty)$ and (b) for all $t_0 > 0$, uniformly in $t \ge t_0$, $[W^N, W^N]_t \to v_t^2$ in probability as $N \to \infty$. By Lemma 12, we will deduce that $v_t^{-1} W_t^N$ goes in law to $\mathcal{N}(0,1)$ as $(t,N)\to(\infty,\infty)$, which of course implies that $W_t^N$ goes in law to $\mathcal{N}(0,\sigma^2)$ as desired.
We have, since $[\bar M^N, \bar M^N]_t = N^{-1}\bar Z_t^N$,
$$[W^N, W^N]_t = (\alpha_0/\mu)^2\, N \int_0^t e^{-2\alpha_0 s}\, d[\bar M^N, \bar M^N]_s = (\alpha_0/\mu)^2 \int_0^t e^{-2\alpha_0 s}\, d\bar Z_s^N.$$
Using that $\bar Z_t^N = \bar U_t^N + m_t$ and performing an integration by parts, we see that
$$[W^N, W^N]_t = v_t^2 + (\alpha_0/\mu)^2 \int_0^t e^{-2\alpha_0 s}\, d\bar U_s^N = v_t^2 + (\alpha_0/\mu)^2 e^{-2\alpha_0 t}\,\bar U_t^N + 2(\alpha_0^3/\mu^2) \int_0^t e^{-2\alpha_0 s}\,\bar U_s^N\,ds.$$
Recalling that $\mathbb{E}[|\bar U_t^N|] \le C N^{-1/2}(1 + m_t)$, we infer
$$\mathbb{E}\Big[\Big|(\alpha_0/\mu)^2 e^{-2\alpha_0 t}\,\bar U_t^N + 2(\alpha_0^3/\mu^2) \int_0^t e^{-2\alpha_0 s}\,\bar U_s^N\,ds\Big|\Big] \le \frac{C}{N^{1/2}}\Big(e^{-2\alpha_0 t}(1 + m_t) + \int_0^t e^{-2\alpha_0 s}(1 + m_s)\,ds\Big),$$
which is bounded by $C N^{-1/2}$ by Lemma 26. We have proved that $\sup_{t\ge0} \mathbb{E}[|[W^N, W^N]_t - v_t^2|] \le C N^{-1/2}$, from which points (a) and (b) above immediately follow. The proof is complete.
4. Nearest neighbour model
We consider here the case where $G$ is a regular grid, on which particles interact (directly) only if they are neighbours. We will work on $\mathbb{Z}^d$, endowed with the set of edges
$$E = \{(i,j) \in (\mathbb{Z}^d)^2 : |i-j| = 0 \text{ or } 1\},$$
where $|(i_1,\ldots,i_d)| = (\sum_{r=1}^d i_r^2)^{1/2}$. Thus each point has $2d+1$ neighbours (including itself). One might hesitate to include self-interaction, but doing so avoids some needless complications due to the periodicity of the underlying random walk on $\mathbb{Z}^d$.
Assumption 13. (i) The graph $G = (S, E)$ is $S = \mathbb{Z}^d$ (for some $d \ge 1$) endowed with the above set of edges $E$.
(ii) There is a nonnegative locally integrable function $\varphi : [0,\infty) \mapsto [0,\infty)$ such that for all $(j,i) \in E$, $\varphi_{ji} = (2d+1)^{-1}\varphi$.
(iii) For all $i \in \mathbb{Z}^d$, there is $\mu_i \ge 0$ such that $h_i(x) = \mu_i + x$. The family $(\mu_i)_{i\in\mathbb{Z}^d}$ is bounded.
We next introduce some notation. In the whole section, we call vector (and write in bold) a family of numbers indexed by $\mathbb{Z}^d$. We call matrix a family indexed by $\mathbb{Z}^d \times \mathbb{Z}^d$. The identity matrix $I$ is of course defined as $I(i,j) = \mathbf{1}_{\{i=j\}}$. We will often use the product of a matrix and a vector. The matrix $A = (A(i,j))_{i,j\in\mathbb{Z}^d}$ defined by
$$A(i,j) = (2d+1)^{-1}\,\mathbf{1}_{\{(i,j)\in E\}} \tag{17}$$
will play an important role. Since $A$ is a stochastic matrix, we can define, for any $\Lambda \in (0,1)$,
$$Q_\Lambda(i,j) = \sum_{n\ge0} \Lambda^n A^n(i,j). \tag{18}$$
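Since $A$ is stochastic, the series (18) converges geometrically and its rows sum to $\sum_{n\ge0}\Lambda^n = 1/(1-\Lambda)$. A minimal numerical sketch (our own illustration, in dimension $d = 1$, where $A(i,j) = 1/3$ for $|i-j| \le 1$) truncates the series and checks this mass identity:

```python
# Truncation of the series (18) in dimension d = 1, where A(i, j) = 1/3 for |i - j| <= 1
Lam, n_max = 0.5, 60

row = {0: 1.0}                 # A^0(0, .) = identity row
Q = {0: 1.0}                   # accumulates sum_n Lam^n A^n(0, .)
coef = 1.0
for n in range(1, n_max + 1):
    new = {}
    for j, p in row.items():   # one lazy nearest-neighbour step
        for k in (-1, 0, 1):
            new[j + k] = new.get(j + k, 0.0) + p / 3.0
    row, coef = new, coef * Lam
    for j, p in row.items():
        Q[j] = Q.get(j, 0.0) + coef * p

mass = sum(Q.values())
# A stochastic => sum_j Q_Lam(0, j) = sum_n Lam^n = 1/(1 - Lam),
# up to a truncation error of order Lam^{n_max + 1}
```

The same truncation gives a practical way to evaluate the limit $\sum_j Q_\Lambda(i,j)\mu_j$ appearing in Theorem 14 below.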
4.1. Large-time behaviour. Under Assumption 13, we can use Theorem 6 (with $p_i = 2^{-|i|}$, see Remark 5-(ii)): there is a unique Hawkes process $(Z_t^i)_{i\in\mathbb{Z}^d, t\ge0}$ with parameters $(G, \varphi, h)$ such that $\sum_{i\in\mathbb{Z}^d} 2^{-|i|}\,\mathbb{E}[Z_t^i] < \infty$. Let us state the first results of this section. As usual, we treat separately the subcritical and supercritical cases.
Theorem 14. Work under Assumption 13 and assume further that $\Lambda = \int_0^\infty \varphi(t)\,dt < 1$. Consider the unique Hawkes process $(Z_t^i)_{i\in\mathbb{Z}^d, t\ge0}$ with parameters $(G, \varphi, h)$. For all $i \in \mathbb{Z}^d$, $t^{-1} Z_t^i$ goes in probability, as $t\to\infty$, to $\sum_{j\in\mathbb{Z}^d} Q_\Lambda(i,j)\,\mu_j$.
Theorem 15. Work under Assumption 13 and assume further that $\Lambda = \int_0^\infty \varphi(t)\,dt \in (1,\infty]$ and that $t \mapsto \int_0^t |d\varphi(s)|$ has at most polynomial growth. Consider $\alpha_0 > 0$ uniquely defined by $\mathcal{L}_\varphi(\alpha_0) = 1$. Assume finally that the "mean value"
$$\mu = \lim_{r\to\infty} \frac{1}{\#\{i \in \mathbb{Z}^d : |i| \le r\}} \sum_{|i|\le r} \mu_i \quad \text{exists and is positive.} \tag{19}$$
Consider the unique Hawkes process $(Z_t^i)_{i\in\mathbb{Z}^d, t\ge0}$ with parameters $(G, \varphi, h)$. Then for all $i \in \mathbb{Z}^d$, $e^{-\alpha_0 t} Z_t^i$ goes in probability, as $t\to\infty$, to $a_0 = \mu\alpha_0^{-2}\,(\int_0^\infty t\varphi(t)e^{-\alpha_0 t}\,dt)^{-1}$.
Let us comment on these results. In the subcritical case, the parameter $\boldsymbol{\mu} = (\mu_i)_{i\in\mathbb{Z}^d}$ is strongly present in the limiting behaviour: the limit of $t^{-1} Z_t^i$ depends on a certain mean of $\boldsymbol{\mu}$ around the site $i$ and thus depends on $i$. In the supercritical case, the behaviour is very different: the limit value of $e^{-\alpha_0 t} Z_t^i$ does not depend on $i$, and depends on $\boldsymbol{\mu} = (\mu_i)_{i\in\mathbb{Z}^d}$ only through a global mean value. Observe also that for a finite-dimensional (e.g. scalar) Hawkes process, there is no law of large numbers: one can get a limit of something like $e^{-\alpha_0 t} Z_t^i$, but the limit is random, see Zhu [49, Section 5.4] (in particular Theorem 23 and Corollary 1). In that sense, we can say that in the supercritical case, the law of large numbers is reminiscent of the infinite dimension and of the interaction.
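To make the contrast concrete, here is a small numerical illustration (our own, not from the paper) in dimension $d = 1$ with the hypothetical alternating baseline $\mu_j = 1$ for $j$ even and $\mu_j = 0$ for $j$ odd, whose mean value (19) is $1/2$. Truncating (18) with $\Lambda = 1/2$ shows that the subcritical limits $(Q_\Lambda\boldsymbol{\mu})_0$ and $(Q_\Lambda\boldsymbol{\mu})_1$ differ from site to site, while $(A^n\boldsymbol{\mu})_i$ converges to the global mean $1/2$, which is the mechanism behind the site-independent supercritical limit (Lemma 16-(ii) below).

```python
# d = 1, Lam = 1/2; hypothetical baseline mu_j = 1 (j even), 0 (j odd), mean value 1/2
Lam, n_max, M = 0.5, 60, 150
vec = {j: (1.0 if j % 2 == 0 else 0.0) for j in range(-M, M + 1)}
Q0, Q1 = vec[0], vec[1]        # n = 0 terms of (Q_Lam mu)_0 and (Q_Lam mu)_1
coef, lo, hi = 1.0, -M, M
for n in range(1, n_max + 1):
    lo, hi = lo + 1, hi - 1    # the window shrinks: A^n mu is exact on [-M+n, M-n]
    vec = {j: (vec[j - 1] + vec[j] + vec[j + 1]) / 3.0 for j in range(lo, hi + 1)}
    coef *= Lam
    Q0 += coef * vec[0]
    Q1 += coef * vec[1]
# For this pattern (A^n mu)_0 = 1/2 + (-1/3)^n / 2, so summing the geometric series
# gives Q0 = 10/7 and Q1 = 4/7: the subcritical limit depends on the site.
# Meanwhile vec[0] = (A^{n_max} mu)_0 is indistinguishable from the mean value 1/2.
```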
We will need a precise approximation for $A^n(i,j)$, where $A$ is defined by (17). It is given by the local central limit theorem, since $A$ is the transition matrix of an aperiodic symmetric random walk on $\mathbb{Z}^d$ with bounded jumps. Precisely, we infer from Lawler-Limic [29, Theorem 2.1.1 and (2.5)] that there is a constant $C$ such that for all $n \ge 1$, all $i \in \mathbb{Z}^d$,
$$|A^n(0,i) - p_n(i)| \le \frac{C}{n^{(d+2)/2}}, \tag{20}$$
where, for $t > 0$ and $x \in \mathbb{R}^d$,
$$p_t(x) = \Big(\frac{2d+1}{4\pi t}\Big)^{d/2} \exp\Big(-\frac{(2d+1)|x|^2}{4t}\Big). \tag{21}$$
To apply [29, Theorem 2.1.1], we needed to compute the covariance matrix $\Gamma$ corresponding to our random walk; we found $\Gamma = 2(2d+1)^{-1} I_d$, $I_d$ being the $d \times d$ identity matrix.
Lemma 16. Consider the matrix $(A(i,j))_{i,j\in\mathbb{Z}^d}$ defined by (17).
(i) It holds that $\varepsilon_n = \sum_{j\in\mathbb{Z}^d} (A^n(i,j))^2$ does not depend on $i \in \mathbb{Z}^d$ and tends to 0 as $n\to\infty$.
(ii) Let $\boldsymbol{\mu} = (\mu_i)_{i\in\mathbb{Z}^d}$ be bounded and satisfy (19). Then for all $i \in \mathbb{Z}^d$, $\lim_{n\to\infty} (A^n\boldsymbol{\mu})_i = \mu$.

Proof. Point (i) is easy: since $A^n(i,j) = A^n(0, j-i)$ and since $A$ is stochastic, one has $\varepsilon_n = \sum_{j\in\mathbb{Z}^d} (A^n(0,j))^2 \le \sup_{j\in\mathbb{Z}^d} A^n(0,j)$. Moreover, by (20), $A^n(0,j) \le p_n(j) + C n^{-(d+2)/2} \le C n^{-d/2}$. We conclude that $\varepsilon_n \le C n^{-d/2} \to 0$ as desired.
Now we turn to the proof of (ii). Let $i \in \mathbb{Z}^d$. First we show that $\lim_n [(A^n\boldsymbol{\mu})_i - \sum_{j\in\mathbb{Z}^d} p_n(j)\mu_j] = 0$. Since $(A^n\boldsymbol{\mu})_i = \sum_{j\in\mathbb{Z}^d} A^n(i,j)\mu_j = \sum_{j\in\mathbb{Z}^d} A^n(0, j-i)\mu_j$ and since the family $(\mu_j)_{j\in\mathbb{Z}^d}$ is bounded, it suffices to prove that $v_n = \sum_{j\in\mathbb{Z}^d} |A^n(0, j-i) - p_n(j)| \to 0$. We write
$$v_n \le \sum_{|j|\le n^{1/2+1/4d}} |A^n(0, j-i) - p_n(j)| + \sum_{|j|>n^{1/2+1/4d}} \big(A^n(0, j-i) + p_n(j)\big) = v_n^1 + v_n^2.$$
On the one hand, we use that $\sum_{j\in\mathbb{Z}^d} |j|^2 A^n(0,j) \le Cn$ (the variance of the random walk at time $n$ is of order $n$), so that $\sum_{j\in\mathbb{Z}^d} |j|^2 A^n(0, j-i) = \sum_{k\in\mathbb{Z}^d} |i+k|^2 A^n(0,k) \le C(|i|^2 + n)$ and thus
$$\sum_{|j|>n^{1/2+1/4d}} A^n(0, j-i) \le C n^{-1-1/2d} \sum_{j\in\mathbb{Z}^d} |j|^2 A^n(0, j-i) \le C n^{-1-1/2d}(|i|^2 + n).$$
Similarly, we have $\sum_{j\in\mathbb{Z}^d} |j|^2 p_n(j) \le Cn$ and thus
$$\sum_{|j|>n^{1/2+1/4d}} p_n(j) \le n^{-1-1/2d} \sum_{j\in\mathbb{Z}^d} |j|^2 p_n(j) \le C n^{-1/2d}.$$
Consequently $\lim_n v_n^2 = 0$. On the other hand,
$$v_n^1 \le \sum_{|j|\le n^{1/2+1/4d}} |A^n(0, j-i) - p_n(j-i)| + \sum_{|j|\le n^{1/2+1/4d}} |p_n(j-i) - p_n(j)|.$$
From (20), the first sum is bounded by $C n^{-(d+2)/2}\,\#\{j \in \mathbb{Z}^d : |j| \le n^{1/2+1/4d}\} \le C n^{-3/4} \to 0$. For the second sum, we use that, with $c_d = (2d+1)/4$,
$$|p_n(j-i) - p_n(j)| = p_n(j)\,\Big|1 - \exp\Big(-\frac{c_d}{n}|i|^2 + \frac{2c_d}{n}\, i\cdot j\Big)\Big|.$$
Hence for $|j| \le n^{1/2+1/4d}$ and for $n$ large enough (e.g. so that $|i|\, n^{-1/2+1/4d} \le 1$),
$$|p_n(j-i) - p_n(j)| \le C\, p_n(j)\big(|i|^2 n^{-1} + |i|\, n^{-1/2+1/4d}\big) \le C\, p_n(j)(1 + |i|^2)\, n^{-1/4}.$$
Thus $\sum_{|j|\le n^{1/2+1/4d}} |p_n(j-i) - p_n(j)| \le C(1 + |i|^2)\, n^{-1/4}$ and we deduce that $\lim_n v_n^1 = 0$.
We have shown that $\lim_n v_n = 0$. It only remains to check that $\lim_n \sum_{j\in\mathbb{Z}^d} \mu_j p_n(j) = \mu$. Let $(r_k)_{k\ge0}$ be the increasing sequence of nonnegative numbers such that $\{r_k\}_{k\ge0} = \{|j| : j \in \mathbb{Z}^d\}$ and observe that
$$\sum_{j\in\mathbb{Z}^d} \mu_j p_n(j) = \sum_{k\ge0} p_n(r_k) \sum_{|j|=r_k} \mu_j.$$
A discrete integration by parts shows that
$$\sum_{j\in\mathbb{Z}^d} \mu_j p_n(j) = \sum_{k\ge0} \big(p_n(r_k) - p_n(r_{k+1})\big) \sum_{|j|\le r_k} \mu_j = \sum_{k\ge0} v(r_k)\big(p_n(r_k) - p_n(r_{k+1})\big)\, \frac{1}{v(r_k)} \sum_{|j|\le r_k} \mu_j,$$
where $v(r) = \#\{j \in \mathbb{Z}^d : |j| \le r\}$. We easily conclude that $\lim_n \sum_{j\in\mathbb{Z}^d} \mu_j p_n(j) = \mu$ as desired, because
(a) $\lim_{k\to\infty} \frac{1}{v(r_k)} \sum_{|j|\le r_k} \mu_j = \mu$;
(b) for all $k \ge 0$ fixed, $\lim_n v(r_k)\big(p_n(r_k) - p_n(r_{k+1})\big) = 0$;
(c) $\lim_n \sum_{k\ge0} v(r_k)\big(p_n(r_k) - p_n(r_{k+1})\big) = 1$.
Point (a) follows from our condition (19) on $\boldsymbol{\mu}$, and point (b) is obvious (because $|v(r_k)(p_n(r_k) - p_n(r_{k+1}))| \le v(r_k) \sup_{i\in\mathbb{Z}^d} p_n(i) \le C v(r_k)\, n^{-d/2} \to 0$). To check (c), we write $\sum_{k\ge0} v(r_k)(p_n(r_k) - p_n(r_{k+1})) = \sum_{j\in\mathbb{Z}^d} p_n(j) = \sum_{j\in\mathbb{Z}^d} A^n(0,j) + \sum_{j\in\mathbb{Z}^d} [p_n(j) - A^n(0,j)] = 1 + \sum_{j\in\mathbb{Z}^d} [p_n(j) - A^n(0,j)]$. This tends to 1, because $\lim_n \sum_{j\in\mathbb{Z}^d} |p_n(j) - A^n(0,j)| = 0$, as seen in the first part of the proof (this is $v_n$ in the special case where $i = 0$).
Let us now give the

Proof of Theorem 14. We split the proof into several steps. We assume that there is at least one $i \in \mathbb{Z}^d$ such that $\mu_i > 0$, because otherwise the result is obvious (since then $Z_t^i = 0$ for all $i \in \mathbb{Z}^d$, all $t \ge 0$). The first step will also be used in the supercritical case.

Step 1. We write as usual, for some i.i.d. family $(\pi^i(ds\,dz))_{i\in\mathbb{Z}^d}$ of Poisson measures on $[0,\infty)\times[0,\infty)$ with intensity measure $ds\,dz$,
$$Z_t^i = \int_0^t \int_0^\infty \mathbf{1}_{\{z \le \mu_i + (2d+1)^{-1}\sum_{j\to i} \int_0^{s-} \varphi(s-u)\,dZ_u^j\}}\; \pi^i(ds\,dz).$$
Let us put $m_t^i = \mathbb{E}[Z_t^i]$ and $\mathbf{m}_t = (m_t^i)_{i\in\mathbb{Z}^d}$. A simple computation (using one more time Lemma 22) gives us, for all $i \in \mathbb{Z}^d$,
$$m_t^i = \mu_i t + \int_0^t (2d+1)^{-1} \sum_{j\to i} \varphi(t-s)\, m_s^j\,ds.$$
Using the vector formalism, this rewrites $\mathbf{m}_t = \boldsymbol{\mu} t + \int_0^t \varphi(t-s)\,(A\mathbf{m}_s)\,ds$. We furthermore know (from Theorem 6) that for all $t \ge 0$, $\sum_{i\in\mathbb{Z}^d} 2^{-|i|} m_t^i < \infty$. Applying Lemma 27, we see that $m^i$ is of class $C^1$ on $[0,\infty)$ for each $i \in \mathbb{Z}^d$, that
$$\mathbf{m}'_t = \boldsymbol{\mu} + \int_0^t \varphi(t-s)\, A\mathbf{m}'_s\,ds \tag{22}$$
and that
$$\mathbf{m}'_t = \Big(I + \sum_{n\ge1} A^n \int_0^t \varphi^{\star n}(s)\,ds\Big)\boldsymbol{\mu}. \tag{23}$$
Lemma 27 also tells us that $u_t = \sup_{i\in\mathbb{Z}^d} \sup_{[0,t]} (m_s^i)'$ is locally bounded, which of course implies that $\sup_{i\in\mathbb{Z}^d} \sup_{[0,t]} m_s^i$ is also locally bounded (because $m_0^i = 0$ for all $i \in \mathbb{Z}^d$), and that
$$u_t \le C + \int_0^t \varphi(t-s)\, u_s\,ds. \tag{24}$$
We introduce the martingales, for $i \in \mathbb{Z}^d$ (we use a tilde for compensation),
$$M_t^i = \int_0^t \int_0^\infty \mathbf{1}_{\{z \le \mu_i + (2d+1)^{-1}\sum_{j\to i} \int_0^{s-} \varphi(s-u)\,dZ_u^j\}}\; \tilde\pi^i(ds\,dz)$$
and observe as usual that $[M^i, M^j]_t = 0$ when $i \ne j$ (because these martingales a.s. never jump at the same time) while $[M^i, M^i]_t = Z_t^i$. We finally introduce $U_t^i = Z_t^i - m_t^i$, the vectors $\mathbf{U}_t = (U_t^i)_{i\in\mathbb{Z}^d}$ and $\mathbf{M}_t = (M_t^i)_{i\in\mathbb{Z}^d}$, and observe that
$$\mathbf{U}_t = \mathbf{M}_t + \int_0^t \varphi(t-s)\, A\mathbf{U}_s\,ds. \tag{25}$$
Indeed, for every $i \in \mathbb{Z}^d$, using Lemma 22 and the equation satisfied by $m_t^i$, we find
$$U_t^i = M_t^i + \int_0^t (2d+1)^{-1}\sum_{j\to i} \varphi(t-s)\,(Z_s^j - m_s^j)\,ds = M_t^i + \int_0^t \varphi(t-s)\,(A\mathbf{U}_s)_i\,ds.$$
Equation (25) can be solved as usual as
$$\mathbf{U}_t = \mathbf{M}_t + \sum_{n\ge1} \int_0^t \varphi^{\star n}(t-s)\, A^n \mathbf{M}_s\,ds. \tag{26}$$
Finally, we easily check that $v_t = \sup_{i\in\mathbb{Z}^d} \sup_{[0,t]} \mathbb{E}[|U_s^i|]$ is locally bounded (because $\mathbb{E}[|U_t^i|] \le \mathbb{E}[|Z_t^i|] + m_t^i \le 2m_t^i$) and satisfies (start from (25), use that $\mathbb{E}[|M_t^i|] \le \mathbb{E}[[M^i, M^i]_t]^{1/2} = \mathbb{E}[Z_t^i]^{1/2} = (m_t^i)^{1/2} \le (\int_0^t u_s\,ds)^{1/2}$, and that $A$ is stochastic)
$$v_t \le \Big(\int_0^t u_s\,ds\Big)^{1/2} + \int_0^t \varphi(t-s)\, v_s\,ds. \tag{27}$$
Step 2. Here we prove that there is a constant $C$ such that for all $i \in \mathbb{Z}^d$, $(m_t^i)' \le C$ (and thus also $m_t^i \le Ct$). This follows from (24), which implies that $u_t \le C + \Lambda u_t$, whence $u_t \le C/(1-\Lambda)$.
Step 3. For all $i \in \mathbb{Z}^d$, $(m_t^i)' \sim (Q_\Lambda\boldsymbol{\mu})_i$, whence also $m_t^i \sim (Q_\Lambda\boldsymbol{\mu})_i\, t$, as $t\to\infty$. Indeed, starting from (23), using the monotone convergence theorem and that $\int_0^\infty \varphi^{\star n}(s)\,ds = (\int_0^\infty \varphi(s)\,ds)^n = \Lambda^n$,
$$\lim_{t\to\infty} (m_t^i)' = \Big(\Big(I + \sum_{n\ge1} \Lambda^n A^n\Big)\boldsymbol{\mu}\Big)_i = (Q_\Lambda\boldsymbol{\mu})_i.$$
Step 4. There is a constant $C$ such that for all $i \in \mathbb{Z}^d$, all $t \ge 0$, $\mathbb{E}[|U_t^i|] \le C t^{1/2}$. Indeed, this follows from (27) and Step 2, which imply that $v_t \le C t^{1/2} + \Lambda v_t$, whence $v_t \le C t^{1/2}/(1-\Lambda)$.

Step 5. The conclusion follows immediately, writing
$$\mathbb{E}\Big[\Big|\frac{Z_t^i}{t} - (Q_\Lambda\boldsymbol{\mu})_i\Big|\Big] \le \mathbb{E}\Big[\frac{|U_t^i|}{t}\Big] + \Big|\frac{m_t^i}{t} - (Q_\Lambda\boldsymbol{\mu})_i\Big|,$$
which tends to 0 as $t\to\infty$ by Steps 3 and 4.
We now turn to the supercritical case.
Proof of Theorem 15. We consider $m$ (not to be confused with $\mathbf{m}$), the unique solution to $m_t = \mu t + \int_0^t \varphi(t-s)\, m_s\,ds$, where $\mu$ is the mean value defined by (19). This equation is studied in detail in Lemma 26: with $\alpha_0$ and $a_0$ defined in the statement, we have $m_t \sim a_0 e^{\alpha_0 t}$ and $m'_t \sim a_0\alpha_0 e^{\alpha_0 t}$ as $t\to\infty$, as well as $\Gamma(t) = \sum_{n\ge1} \varphi^{\star n}(t) \sim (a_0\alpha_0^2/\mu)\, e^{\alpha_0 t}$ and $\Upsilon(t) = \int_0^t \Gamma(s)\,ds \sim (a_0\alpha_0/\mu)\, e^{\alpha_0 t}$.
Step 1. We adopt the notation introduced in Step 1 of the proof of Theorem 14.

Step 2. Here we check that there is $C$ such that for all $i \in \mathbb{Z}^d$, $(m_t^i)' \le C e^{\alpha_0 t}$ (and thus $m_t^i \le C e^{\alpha_0 t}$). This follows from (24), which tells us that $u_t = \sup_{i\in\mathbb{Z}^d} \sup_{[0,t]} (m_s^i)'$ is locally bounded and satisfies $u_t \le C + \int_0^t \varphi(t-s)\, u_s\,ds$. Setting $h_t = u_t - \int_0^t \varphi(t-s)\, u_s\,ds$, we see that $h_t$ is locally bounded (from above and from below), because $u$ is locally bounded and $\varphi$ is locally integrable. We furthermore have $u = h + u \star \varphi$. Applying Lemma 26-(e), we deduce that $u = h + h \star \Gamma$. But $h$ is bounded from above by $C$. Consequently, $u \le C + C \star \Gamma = C(1 + \Upsilon)$, where $\Upsilon$ was defined in Lemma 26. The conclusion follows from Lemma 26-(b).
Step 3. We now show that for all $i \in \mathbb{Z}^d$, $(m_t^i)' \sim m'_t$ (whence $m_t^i \sim m_t$) as $t\to\infty$. Let us fix $i \in \mathbb{Z}^d$ and set $r_t^i = (m_t^i)' - m'_t$, which satisfies
$$r_t^i = \mu_i - \mu + \sum_{n\ge1} \big((A^n\boldsymbol{\mu})_i - \mu\big) \int_0^t \varphi^{\star n}(s)\,ds.$$
We know from Lemma 16-(ii) that $\eta_n = (A^n\boldsymbol{\mu})_i - \mu$ tends to 0 as $n\to\infty$. Consequently, Lemma 26-(d) tells us that $e^{-\alpha_0 t} \sum_{n\ge1} ((A^n\boldsymbol{\mu})_i - \mu) \int_0^t \varphi^{\star n}(s)\,ds$ tends to 0 as $t\to\infty$. Hence $e^{-\alpha_0 t}\, r_t^i \to 0$ as $t\to\infty$. Using finally that $m'_t \sim a_0\alpha_0 e^{\alpha_0 t}$ as $t\to\infty$, we conclude that $(m_t^i)'/m'_t = 1 + r_t^i/m'_t \sim 1 + (a_0\alpha_0)^{-1} e^{-\alpha_0 t}\, r_t^i \to 1$ as desired.
Step 4. Here we check that for every $i \in \mathbb{Z}^d$, $e^{-\alpha_0 t}\,\mathbb{E}[|U_t^i|]$ tends to 0 as $t\to\infty$. We start from (26) to write
$$|U_t^i| \le |M_t^i| + \sum_{n\ge1} \int_0^t \varphi^{\star n}(t-s)\,|(A^n\mathbf{M}_s)_i|\,ds.$$
But $\mathbb{E}[|M_t^i|] \le \mathbb{E}[Z_t^i]^{1/2} = (m_t^i)^{1/2} \le C e^{\alpha_0 t/2}$ by Step 2, and $\mathbb{E}[(A^n\mathbf{M}_s)_i^2] = \sum_j (A^n(i,j))^2\, m_s^j \le C e^{\alpha_0 s} \sum_j (A^n(i,j))^2 \le C e^{\alpha_0 s}\,\varepsilon_n$ by Lemma 16-(i), with $\varepsilon_n \to 0$ as $n\to\infty$. Consequently,
$$e^{-\alpha_0 t}\,\mathbb{E}[|U_t^i|] \le C e^{-\alpha_0 t/2} + C e^{-\alpha_0 t} \sum_{n\ge1} \varepsilon_n^{1/2} \int_0^t \varphi^{\star n}(t-s)\, e^{\alpha_0 s/2}\,ds.$$
Lemma 26-(d) allows us to conclude.

Step 5. The conclusion follows, writing
$$\mathbb{E}\Big[\Big|\frac{Z_t^i}{a_0 e^{\alpha_0 t}} - 1\Big|\Big] \le \mathbb{E}\Big[\frac{|U_t^i|}{a_0 e^{\alpha_0 t}}\Big] + \Big|\frac{m_t^i}{a_0 e^{\alpha_0 t}} - 1\Big|,$$
and using Steps 3, 4, and that $m_t \sim a_0 e^{\alpha_0 t}$ by Lemma 26-(a).
4.2. Study of an impulsion. Here we want to study how an impulsion at time 0 at $i = 0$ propagates. To this end, we work under Assumption 13 with $\mu_i = 0$ for all $i \in \mathbb{Z}^d$, but we assume that $Z^0$ has a jump at time 0. Such a study is of course important: it allows us to measure, in some sense, the range of the interaction.
We first define precisely the process under study.
Definition 17. We work under Assumption 13-(i)-(ii) and consider a family $(\pi^i(ds\,dz))_{i\in\mathbb{Z}^d}$ of i.i.d. $(\mathcal{F}_t)_{t\ge0}$-Poisson measures on $[0,\infty)\times[0,\infty)$ with intensity measure $ds\,dz$. We say that a family $(Z_t^i)_{i\in\mathbb{Z}^d, t\ge0}$ of $(\mathcal{F}_t)_{t\ge0}$-adapted counting processes is an impulsion Hawkes process if
$$\forall\, i \in \mathbb{Z}^d, \quad Z_t^i = \int_0^t \int_0^\infty \mathbf{1}_{\{z \le \sum_{j\to i} (2d+1)^{-1}[\int_0^{s-} \varphi(s-u)\,dZ_u^j + \varphi(s)\mathbf{1}_{\{j=0\}}]\}}\; \pi^i(ds\,dz).$$
As said previously, the term $\sum_{j\to i} \varphi(s)\mathbf{1}_{\{j=0\}}$ is interpreted as an excitation due to a forced jump of $Z^0$ at time 0: simply rewrite it as $\mathbf{1}_{\{0\to i\}} \int_0^{s-} \varphi(s-u)\,\delta_0(du)$.
The following proposition is easy.

Proposition 18. Adopt the assumptions and notation of Definition 17. There exists a pathwise unique impulsion Hawkes process $(Z_t^i)_{i\in\mathbb{Z}^d, t\ge0}$ such that $\sum_{i\in\mathbb{Z}^d} \mathbb{E}[Z_t^i] < \infty$ for all $t \ge 0$.
Exactly as Theorem 6, this result can be deduced from Massoulié [32, Theorem 2] in the subcritical case where $\int_0^\infty \varphi(s)\,ds < 1$.
Proof. The proof much resembles that of Theorem 6, so we only sketch it. We start with uniqueness and thus consider two impulsion Hawkes processes $(Z_t^i)_{i\in\mathbb{Z}^d, t\ge0}$ and $(\tilde Z_t^i)_{i\in\mathbb{Z}^d, t\ge0}$ such that $\sum_{i\in\mathbb{Z}^d} \mathbb{E}[Z_t^i + \tilde Z_t^i] < \infty$. We set $\Delta_t^i = \int_0^t$