C.1 Section 3
Proof of Proposition 1.
To obtain expression (16), we derive a formula for $C$ thanks to the method of characteristics. Fix $z \ge 0$ and denote, for all $0 \le t \le z$: Hence, by reversing the characteristics, we get:
$$C(t,z) = \cdots$$
Moreover, we have: which gives (16), using equality (48).
Formula (17) is the classical computation of the solution of a linear differential equation, and (18) was obtained just above.
Proof of Proposition 2.
We denote by $f$ and $g$ the same functions as defined in the proof of Theorem 1. It thus remains to compute the derivative of $f$ with respect to $z$ at $z = 0$:
$$\partial_z f(t,0,\tau) = e^{-\int_\tau^t \beta(s)\,ds}\,\mathbb{E}[\cdots]$$
Proof of Proposition 3.
By differentiating the first equation of system (15) with respect to $z$, we have:
$$\partial^2_{tz} C(t,z) = \beta'(z) + \partial^2_{zz} C(t,z) + d\,\frac{\mathbb{E}[m^* e^{z m^*}] - \partial_z C(t,z)\,\mathbb{E}[e^{z m^*}]}{N(t)}\,e^{-C(t,z)}.$$
Evaluating this last relation at $z = 0$, we get:
$$\partial_t \partial_z C(t,0) = \beta'(0) + V(t) + d\,\frac{\mathbb{E}[m^*] - \langle m\rangle(t)}{N(t)}.$$
In particular, we have $\beta'(0) = U \int_{\mathbb{R}} m\,J(m)\,dm = U\mu_J$, with $\mu_J$ the mean of the mutation kernel $J$.
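As a sanity check of this identity, one can differentiate under the integral sign. Here we assume the moment-generating form $\beta(z) = U\bigl(\int_{\mathbb{R}} e^{zm} J(m)\,dm - 1\bigr)$; the precise definition of $\beta$ is given in the main text, and this form is only an assumption consistent with the properties used in this appendix ($\beta(0)=0$, convexity, $\beta'(0)=U\mu_J$):

```latex
% Sketch, under the ASSUMED form \beta(z) = U\left(\int_{\mathbb{R}} e^{zm} J(m)\,dm - 1\right):
\beta'(z) = U \int_{\mathbb{R}} m\, e^{zm}\, J(m)\,dm,
\qquad\text{so}\qquad
\beta'(0) = U \int_{\mathbb{R}} m\, J(m)\,dm = U \mu_J .
```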
Proposition 26. Assume that $J$ includes some beneficial mutations, i.e.
$$\mathrm{supp}(J) \cap (0,\infty) \ne \emptyset.$$
Then $\lim_{t\to\infty} \beta(t) = \lim_{t\to\infty} \beta'(t) = +\infty$.
Proof. See Gil et al. (2017).
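A heuristic for the divergence, under the same assumed moment-generating form of $\beta$ as above (a sketch only; the rigorous proof is the one in Gil et al. (2017)): the beneficial part of the support makes the integral blow up exponentially.

```latex
% Sketch, ASSUMING \beta(t) = U\left(\int_{\mathbb{R}} e^{tm} J(m)\,dm - 1\right) with J \ge 0.
% Pick a > 0 and \varepsilon > 0 such that \int_a^{a+\varepsilon} J(m)\,dm > 0
% (possible since \mathrm{supp}(J) \cap (0,\infty) \ne \emptyset). Then
\beta(t) \;\ge\; U\left( e^{ta} \int_a^{a+\varepsilon} J(m)\,dm \;-\; 1 \right)
\xrightarrow[t\to\infty]{} +\infty,
% and similarly \beta'(t) \ge U a\, e^{ta} \int_a^{a+\varepsilon} J(m)\,dm \to +\infty.
```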
Proposition 27. Let $m_0 \in (-\infty,+\infty]$ be defined by $m_0 = \sup(\mathrm{supp}(p_0))$.
The function $C_0$ is convex and $C_0'(t) \to m_0$ as $t \to +\infty$. Furthermore, if $m_0 < +\infty$, then
$$\lim_{t\to\infty} C_0''(t) = 0.$$
Proof. See Gil et al. (2017).
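A heuristic for this statement, assuming (as the notation suggests, though it is defined in the main text) that $C_0$ is the cumulant generating function of the initial distribution $p_0$; the rigorous argument is in Gil et al. (2017):

```latex
% Sketch, ASSUMING C_0(t) = \log \int_{\mathbb{R}} e^{tm} p_0(m)\,dm.
% Convexity: C_0''(t) is the variance of m under the tilted density
%   p_t(m) \propto e^{tm} p_0(m), hence C_0''(t) \ge 0.
% Limit of the slope: as t \to +\infty, the tilted density p_t concentrates
% near the upper edge of \mathrm{supp}(p_0), so
C_0'(t) = \int_{\mathbb{R}} m\, p_t(m)\,dm \xrightarrow[t\to\infty]{} \sup(\mathrm{supp}(p_0)) = m_0 .
```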
Proof of Theorem 4.
1. Assume that $J$ includes some beneficial mutations. Thanks to Propositions 26 and 27, we know that for $t$ large enough:
$$N_0\,(C_0'(t) + \beta(t))\,e^{C_0(t)} > 0.$$
Thus, using Proposition 16, we get (multiplying the numerator and the denominator by $e^{\int_0^t \beta(s)\,ds}$ and, for $t$ large enough, dropping the first, positive, term of the numerator):
$$\langle m\rangle(t) = \frac{N_0\,(C_0'(t)+\beta(t))\,e^{C_0(t)} + d\,\mathbb{E}[e^{m^* t}] - d\,e^{-\int_0^t \beta(s)\,ds}}{N_0\,e^{C_0(t)} + d\int_0^t \mathbb{E}[e^{\tau m^*}]\,e^{-\int_\tau^t \beta(s)\,ds}\,d\tau}
\;\ge\; \frac{d\,\mathbb{E}[e^{m^* t}]\,e^{\int_0^t \beta(s)\,ds} - d}{N_0\,e^{\int_0^t (C_0'(s)+\beta(s))\,ds} + d\int_0^t \mathbb{E}[e^{\tau m^*}]\,e^{\int_0^\tau \beta(s)\,ds}\,d\tau}.$$
Thanks to Proposition 26, we know that $\beta$ diverges to $+\infty$, and so
$$\mathbb{E}[e^{t m^*}]\,e^{\int_0^t \beta(s)\,ds} \;\ge\; \left(\int_0^{+\infty} e^{ty}\,p^*(y)\,dy\right) e^{\int_0^t \beta(s)\,ds}$$
diverges too. Thus, for $t$ large enough, $\langle m\rangle(t)$ is positive, which gives the result.
2. We prove the expected result by the same arguments, using the fact that $\beta(t) \to -U$ as $t$ tends to $+\infty$.
3. Let us prove that the numerator (denoted here by $h(t)$) in formula (16) is negative for all $t \ge 0$.
First, we know that $\beta$ is convex, with $\beta(0) = 0$. Additionally, we can prove that $\beta(t)$ tends to $-U$ as $t$ tends to $+\infty$; a convex function with a finite limit at $+\infty$ is nonincreasing, so $\beta(t)$ is nonpositive for all $t \ge 0$.
Second, Proposition 27 yields that $C_0'(t) < 0$ for all $t \ge 0$, thanks to the hypothesis $m_0 < 0$. So we have:
$$h(t) < d\,\mathbb{E}[e^{m^* t}] - d\,e^{-\int_0^t \beta(s)\,ds}.$$
This upper bound is a decreasing function of $t$, equal to $0$ at $t = 0$. Therefore we get $h(t) < 0$, which gives $\langle m\rangle(t) < 0$, and so the characteristic time is infinite.
Proof of Proposition 5.
By convexity of $\beta$, we have $\beta(t) \ge \beta'(0)\,t$ for all $t > 0$. Thus, Jensen's inequality (using the convexity of the exponential function) yields: which gives the result (because $t_0 > 0$).
Proof of Proposition 6.
Thanks to the strict convexity of $\beta$, the existence and uniqueness of $\tau > 0$ such that $\beta(\tau) = 0$ follow from the properties:
$$\beta(0) = 0\,;\quad \beta'(0) = U\mu_J < 0 \quad\text{and}\quad \beta(+\infty) = +\infty.$$
Again by convexity, it is clear that $\beta'(\tau) > 0$ and $t_0 > 0$.
Furthermore, the convexity of $\beta$ yields that $\int_0^{t_0} \cdots$
Thanks to (21), Jensen's inequality yields that $e^{t_0\,\mathbb{E}[m^*]} \le \mathbb{E}[e^{t_0 m^*}]$.
The discriminant of $P$ is
$$\Delta = 4\left[\left(\frac{\mathbb{E}[m^*]}{\beta'(\tau)} - \tau\right)^2 - \tau^2\,\frac{\beta'(0)}{\beta'(\tau)}\right],$$
which is positive (because $\beta'(0) = U\mu_J < 0$ and $\beta'(\tau) > 0$, so the second term of the bracket is nonnegative). Therefore $P$ has two roots $x_- < 0$ and $x_+ > 0$ defined by
$$x_\pm = \tau - \frac{\mathbb{E}[m^*]}{\beta'(\tau)} \pm \sqrt{\left(\frac{\mathbb{E}[m^*]}{\beta'(\tau)} - \tau\right)^2 - \tau^2\,\frac{\beta'(0)}{\beta'(\tau)}}.$$
The sign of $P(t_0)$ gives the result.
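The ordering $x_- < 0 < x_+$ can be read off from the product of the roots; this is a direct computation from the formula above, using only $\beta'(0) < 0 < \beta'(\tau)$:

```latex
x_-\,x_+ \;=\; \left(\tau - \frac{\mathbb{E}[m^*]}{\beta'(\tau)}\right)^{\!2}
- \left[\left(\frac{\mathbb{E}[m^*]}{\beta'(\tau)} - \tau\right)^{\!2} - \tau^2\,\frac{\beta'(0)}{\beta'(\tau)}\right]
\;=\; \tau^2\,\frac{\beta'(0)}{\beta'(\tau)} \;<\; 0,
% so the two real roots have opposite signs: x_- < 0 < x_+.
```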
Proposition 28. We have:
$$\frac{\beta(t_0)\,t_0}{2} \;>\; \int_0^{t_0} \beta(s)\,ds \;=\; -m^*\,t_0.$$
Proof. This is given by the strict convexity of $\beta$.
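Explicitly, since $\beta$ is strictly convex with $\beta(0) = 0$, its graph lies strictly below the chord joining $(0,0)$ to $(t_0, \beta(t_0))$; integrating this chord bound gives the stated inequality:

```latex
% For s \in (0, t_0), strict convexity and \beta(0) = 0 give
\beta(s) \;<\; \left(1 - \tfrac{s}{t_0}\right)\beta(0) + \tfrac{s}{t_0}\,\beta(t_0)
\;=\; \tfrac{s}{t_0}\,\beta(t_0),
% and integrating over (0, t_0):
\int_0^{t_0} \beta(s)\,ds \;<\; \frac{\beta(t_0)}{t_0}\cdot\frac{t_0^2}{2} \;=\; \frac{\beta(t_0)\,t_0}{2}.
```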
Proof of Proposition 8.
Relation (22) clearly yields the independence of $t_0$ from $d$.
The equality (22) yields
$$t_0 + m^*\,\frac{\partial t_0}{\partial m^*} = -\beta(t_0)\,\frac{\partial t_0}{\partial m^*},$$
which gives formula (23). The sign is a consequence of it, using Proposition 28.
Thanks to the definition of $\beta$, we get, for all $U > 0$:
$$m^*\,\frac{\partial t_0}{\partial U} = -\frac{1}{U}\int_0^{t_0} \beta(s)\,ds - \beta(t_0)\,\frac{\partial t_0}{\partial U}.$$
Adding the relation (22), we get:
$$\frac{\partial t_0}{\partial U} = \frac{1}{U}\,\frac{m^*\,t_0}{\beta(t_0) + m^*}.$$
By Proposition 28, we get $\frac{\partial t_0}{\partial U} < 0$.
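The final sign can be traced explicitly. Here we assume $m^* < 0$ (this sign hypothesis is not stated in this excerpt and is labeled as an assumption; it is the configuration in which Proposition 28 is conclusive):

```latex
% ASSUMING m^* < 0. Proposition 28 gives \beta(t_0)\,t_0/2 > -m^*\,t_0,
% i.e. \beta(t_0) > -2m^* > 0, hence
\beta(t_0) + m^* \;>\; -m^* \;>\; 0,
% so the denominator below is positive while the numerator m^* t_0 is negative:
\frac{\partial t_0}{\partial U} \;=\; \frac{1}{U}\,\frac{m^*\,t_0}{\beta(t_0) + m^*} \;<\; 0 .
```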
C.2 Section 6
Proof of Proposition 14.
Let $v$ be the distribution defined by:
$$v(t,m) := \left(\zeta(t,m) + \frac{d}{m^* - U}\,\delta_{m^*}(m)\right) e^{-tm}, \quad \forall t > 0,\ \text{a.e. } m \in \mathbb{R}.$$
This map is a solution of the PDE
$$\partial_t v(t,m) = U\,[J_t \star v - v] - \frac{dU}{m^* - U}\,J_t(m - m^*)\,e^{-t m^*}, \quad\text{where } J_t(m) = e^{-tm} J(m).$$
Thanks to Grönwall's lemma, we have $N_x(t) \le N_x(0) - \cdots$ with the continuous functions $F, G$ (well-defined thanks to (38)) defined as follows: $F(t) = U\cdots$
Proof of Proposition 15.
As for Proposition 14, we can check that for all $t, x > 0$, $\|v(t,\cdot)\|_x \le \|v(0,\cdot)\|_x - \cdots$ with the notation introduced in the proof of Proposition 14. Moreover, we have
$$\|v(t,\cdot)\|_x = \|\zeta(t,\cdot)\|_{x-t} + \frac{d}{m^* - U}\,e^{(x-t)|m^*|}.$$
It is easy to see that for all $(t,x) \in [0,T]\times[0,X]$, $F(t) \le b$ and $-G(t) \le -c$.
Proof of Proposition 16.
Let $\varphi(t) := \int_{\mathbb{R}} p(t,m)\,dm$. Equation (34) yields:
$$\varphi'(t) = \langle m\rangle(t)\,(1 - \varphi(t)) + \frac{d}{N(t)}\,(1 - \varphi(t)).$$
Since $\langle m\rangle + d/N$ is continuous, this Cauchy problem (with $\varphi(0) = 1$) has a unique solution, and so $\varphi \equiv 1$.
Let us turn to the proof of (40). We have, for $t \ge 0$: $\cdots \int_{\mathbb{R}} y\,J(y)\,dy$, thanks to the mass preservation, $= \langle m\rangle(s) + \mu_J$, because of (38).
Proof of Corollary 1.
Thanks to Equation (35), we have:
$$\int_0^t \cdots$$
By Inequality (40), we get
$$\langle m\rangle(t) \ge \langle m\rangle(0) + t\,U\mu_J + d\,m^* \cdots$$
Proof of Theorem 17.
Let us study the PDE introduced during the proof of Proposition 14:
$$\partial_t v(t,m) = U\,[J_t \star v - v] - \frac{dU}{m^* - U}\,J_t(m - m^*)\,e^{-t m^*}, \quad\text{where } J_t(m) = e^{-tm} J(m).$$
Let $E$ be the space $L^\infty(\mathbb{R})$. The differential equation can be seen as
$$v'(t) = U\,[J_t \star v(t) - v(t)] - \frac{dU}{m^* - U}\,J_t(\cdot - m^*)\,e^{-t m^*},$$
with $v : \mathbb{R}_+ \to E$. Let $T > 0$ and let $\mathcal{G}$ be the following map:
$$\mathcal{G} : [0,T] \times E \to L^\infty(\mathbb{R}), \qquad (t,y) \mapsto U\,[J_t \star y - y] - \frac{dU}{m^* - U}\,J_t(\cdot - m^*)\,e^{-t m^*}.$$
Let us remark that $E$ is a Banach space. If $\mathcal{G}$ is Lipschitz continuous in $y$, the Cauchy–Lipschitz theorem yields that there exists a unique solution $v \in C^1([0,T], E)$ of
$$\begin{cases} v'(t) = U\,[J_t \star v - v] - \dfrac{dU}{m^* - U}\,J_t(\cdot - m^*)\,e^{-t m^*}, \\[4pt] v(0,\cdot) = \zeta_0 + \dfrac{d}{m^* - U}\,\delta_{m^*}, \end{cases}$$
which is equivalent to the existence and uniqueness of the solution $\zeta$ of (42) in $C^1([0,T], L^\infty_{loc}(\mathbb{R}))$, thanks to the relation:
$$\zeta(t,m) = e^{tm}\,v(t,m) - \frac{d}{m^* - U}\,\delta_{m^*}(m).$$
This is true for any $T > 0$; by uniqueness of the solution on $[0,T]$, we can extend $\zeta$ to $\mathbb{R}_+$. Moreover, by Proposition 14, we establish the last property of $\zeta$.
Let us prove the Lipschitz continuity of $\mathcal{G}$. Let $y_1, y_2 \in E$ and $t \in [0,T]$. Then we have, for all $m \in \mathbb{R}$:
$$|\mathcal{G}(t,y_1)(m) - \mathcal{G}(t,y_2)(m)| = U\,|J_t \star (y_1 - y_2) - (y_1 - y_2)| \le U\,(\|J_t\|_{L^1(\mathbb{R})} + 1)\,\|y_1 - y_2\|_{L^\infty(\mathbb{R})}$$
$$\le U\left(\int_{\mathbb{R}} J(m)\,e^{t|m|}\,dm + 1\right)\|y_1 - y_2\|_{L^\infty(\mathbb{R})} \le U\left(\int_{\mathbb{R}} J(m)\,e^{T|m|}\,dm + 1\right)\|y_1 - y_2\|_{L^\infty(\mathbb{R})},$$
which yields
$$\|\mathcal{G}(t,y_1) - \mathcal{G}(t,y_2)\|_{L^\infty(\mathbb{R})} \le c\,\|y_1 - y_2\|_{L^\infty(\mathbb{R})},$$
where $c$ is the constant $U\left(\int_{\mathbb{R}} J(m)\,e^{T|m|}\,dm + 1\right)$.
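The first inequality in the chain uses the standard convolution bound (Young's inequality with exponents $1$ and $\infty$), combined with an elementary estimate on $\|J_t\|_{L^1}$; spelled out:

```latex
% Young's inequality (L^1 * L^\infty \subset L^\infty):
\|J_t \star y\|_{L^\infty(\mathbb{R})} \;\le\; \|J_t\|_{L^1(\mathbb{R})}\,\|y\|_{L^\infty(\mathbb{R})},
% and, since J_t(m) = e^{-tm} J(m) with J \ge 0 and 0 \le t \le T,
\|J_t\|_{L^1(\mathbb{R})} = \int_{\mathbb{R}} e^{-tm} J(m)\,dm
\;\le\; \int_{\mathbb{R}} e^{t|m|} J(m)\,dm
\;\le\; \int_{\mathbb{R}} e^{T|m|} J(m)\,dm .
```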
The positivity of the solution will be admitted.
Proof of Theorem 18.
Let $\zeta_0 = N_0\,p_0$. As $p_0$, and hence $\zeta_0$, decays faster than any exponential function, Theorem 17 yields the existence (and uniqueness) of the solution $\zeta$ of (33) with the initial condition $\zeta_0$. Let $N(t) = \int_{\mathbb{R}} \zeta(t,m)\,dm$. Then $N$ is a smooth function.
We can also define $p(t,m) = \zeta(t,m)/N(t)$, which is in $C^\infty(\mathbb{R}_+, L^\infty_{loc}(\mathbb{R}))$. As in the introduction, $(p, N)$ is a solution of the Cauchy problem (43).
Let $(\tilde p, \tilde N)$ be another solution of (43) and $\tilde\zeta = \tilde p\,\tilde N$. We can check that $\tilde\zeta \in C^1(\mathbb{R}_+, L^\infty_{loc}(\mathbb{R}))$ is another non-negative solution of the same Cauchy problem as $\zeta$. By uniqueness of the solution, $\zeta = \tilde\zeta$. Thanks to the mass preservation ($\int_{\mathbb{R}} \tilde p = \int_{\mathbb{R}} p = 1$), we deduce $\tilde N = N$ and hence $\tilde p = p$.
Proof of Proposition 19.
Assume that $\mathrm{supp}(J) \subset (-\infty, 0]$. Since $J \in L^1(\mathbb{R})$, Lebesgue's dominated convergence theorem gives $\lim_{t\to\infty} \beta(t) = -U$. Hence we obtain: thanks to Lebesgue's dominated convergence theorem.
Assume here that $m_0 < U$. If $m_0 \ne 0$, then, thanks to Proposition 27, we get the same asymptotic equivalent as before.
Assume here $m_0 = U$. Hence: Moreover, Equation (16) yields:
$$\lim_{t\to\infty} \langle m\rangle(t) = \frac{N_0\,(m_0 - U)}{N_0} = m_0 - U.$$
Let us now deal with property (iv). We know that $\lim_{t\to\infty} C_0'(t) = +\infty$ by Proposition 27, which implies that $\lim_{t\to\infty} C_0(t) = +\infty$. So, the same kind of study as in the last case yields:
$$\langle m\rangle(t) \underset{t\to\infty}{\sim} C_0'(t) + \beta(t) \xrightarrow[t\to\infty]{} +\infty.$$
Proof of Proposition 20.
At first, we remark that:
$$\int_0^t e^{m^* \tau}\,e^{-\int_\tau^t \beta(s)\,ds}\,d\tau = \int_0^1 t\,e^{m^* t\tau}\,e^{-t\int_\tau^1 \beta(ts)\,ds}\,d\tau.$$
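This equality is simply the change of variables $\tau = t\sigma$, rescaling the integration interval to $[0,1]$, with the same rescaling applied inside the inner integral:

```latex
% Substituting \tau = t\sigma (d\tau = t\,d\sigma) in the left-hand side:
\int_0^t e^{m^*\tau}\, e^{-\int_\tau^t \beta(s)\,ds}\,d\tau
= \int_0^1 t\, e^{m^* t\sigma}\, e^{-\int_{t\sigma}^{t} \beta(s)\,ds}\,d\sigma,
% and, substituting s = tu in the inner integral,
\int_{t\sigma}^{t} \beta(s)\,ds = t \int_\sigma^1 \beta(tu)\,du .
```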
Here, thanks to Proposition 26, we have $\lim_{t\to\infty} t \int_\tau^1 \beta(ts)\,ds = +\infty$. So Lebesgue's dominated convergence theorem yields:
$$\int_0^t e^{\tau m^*}\,e^{-\int_\tau^t \beta(s)\,ds}\,d\tau \xrightarrow[t\to\infty]{} 0.$$
However, the mean fitness verifies:
$$\langle m\rangle(t) = \frac{N_0\,(C_0'(t)+\beta(t))\,e^{C_0(t)} + d\,e^{m^* t} - d\,e^{-\int_0^t \beta(s)\,ds}}{N_0\,e^{C_0(t)} + d\int_0^t e^{\tau m^*}\,e^{-\int_\tau^t \beta(s)\,ds}\,d\tau}.$$
Therefore, if $m_0 \ge 0$, then $C_0(t)$ tends to $+\infty$ or converges (thanks to Proposition 27) as $t$ tends to $+\infty$, and so $\langle m\rangle(t) \underset{t\to\infty}{\sim} C_0'(t) + \beta(t)$, which tends to $+\infty$ as $t$ tends to $+\infty$.
Now assume that $m_0 < 0$. Then Proposition 27 yields $C_0(t) \underset{t\to\infty}{\sim} m_0 t$ and:
$$\left(\int_0^t e^{\tau m^*}\,e^{-\int_\tau^t \beta(s)\,ds}\,d\tau\right) e^{-m_0 t} = \int_0^1 t\,e^{m^* t\tau}\,e^{-t\left(m_0 + \int_\tau^1 \beta(ts)\,ds\right)}\,d\tau.$$
However, $m_0 + \int_\tau^1 \beta(ts)\,ds \to +\infty$, and so, by Lebesgue's dominated convergence theorem, this integral tends to zero as $t$ tends to $+\infty$. Similarly, we check that $\lim_{t\to\infty} e^{-t\left(m_0 + \int_0^1 \beta(ts)\,ds\right)} = 0$. Hence, we have:
$$\lim_{t\to\infty} \langle m\rangle(t) = \lim_{t\to\infty}\left(m_0 + \beta(t) + d\,e^{(m^*-m_0)t}\right) \ge \lim_{t\to\infty}\left(m_0 + \beta(t)\right) = +\infty.$$