## Convergence results for a coarsening model using global linearization

### Thierry Gallay

Institut Fourier, Université de Grenoble I

BP 74

F-38402 Saint-Martin d'Hères

### Alexander Mielke

Institut für Analysis, Dynamik und Modellierung

Universität Stuttgart, Pfaffenwaldring 57, D-70569 Stuttgart

### December 12, 2002

Abstract

We study a coarsening model describing the dynamics of interfaces in the one-dimensional Allen-Cahn equation. Given a partition of the real line into intervals of length greater than one, the model consists in repeatedly eliminating the shortest interval of the partition by merging it with its two neighbors. We show that the mean-field equation for the time-dependent distribution of interval lengths can be explicitly solved using a global linearization transformation. This allows us to derive rigorous results on the long-time asymptotics of the solutions. If the average length of the intervals is finite, we prove that all distributions approach a uniquely determined self-similar solution. We also obtain global stability results for the family of self-similar profiles which correspond to distributions with infinite expectation.

### 1 Introduction

Consider a domain D ⊂ R^{n} which is divided into a large number of subdomains (or
cells) of different sizes, separated by domain walls, and assume that the system evolves
in such a way that the larger subdomains grow with time while the smaller ones shrink
and eventually disappear. In particular, the average size of the cells increases, so that the
subdivision of D becomes rougher and rougher. Such a coarsening dynamics is observed
in many physical situations, especially near a phase transition when a system is quenched
from a homogeneous state into a state of coexisting phases. Typical examples are the
formation of microstructure in alloy solidification [LiS61, KoO02] and the phase separation
in lattice spin systems [De97, KBN97]. Closely related to coarsening is the coagulation
(or aggregation) process which describes the dynamics of growing and coalescing droplets
[DGY91, PeR92, Vo85]. In this case, the system consists of a large number of particles of
different masses which interact by forming clusters. Again, the total mass is preserved,
so that the average mass per cluster increases with time.

Given a coarsening or a coagulation model, the main task is to predict the long-time
evolution of the size distribution of the cells, or the mass distribution of the clusters. In
many cases, experiments and numerical calculations show that this behavior is asymptotically
self-similar: the system can be described by a single length scale L(t), and the
distribution approaches the scaling form L(t)^{−1}Φ(x/L(t)) as t → ∞. The profile Φ and
the asymptotics of L(t) can sometimes be determined exactly [NaK86, BDG94]. However,
even in simple situations, it is very difficult to prove that the distribution actually
converges to a self-similar profile.

In this work, we consider a simple coarsening model related to the one-dimensional
Allen-Cahn equation ∂_{t}u = ∂_{x}^{2}u + (1/2)(u−u^{3}), where x ∈ R. The equilibria of this system
are the homogeneous steady states u = ±1, together with the kinks u(x) = ±tanh(x/2),
which represent domain walls separating regions of different "phases". If u is any bounded
solution of this equation, then for t > 0 sufficiently large the graph of u(t,·) will typically
look like a (countable) family of kinks separated by large intervals on which u ≈ ±1. If
we denote by x_{j}(t) the position of the j^{th} kink and if we assume that x_{j+1}(t)−x_{j}(t) ≫ 1
for all j ∈ Z, a rigorous asymptotic analysis shows that ẋ_{j} ≈ F(x_{j+1}−x_{j}) − F(x_{j}−x_{j−1}),
where F(y) = 24e^{−y} [CaP89]. In other words, the positions of the domain walls behave
like a system of point particles with short-range attractive pair interactions. Thus, on
an appropriate time scale, only the closest pairs of kinks will really move; in such pairs,
kinks will attract each other until they eventually annihilate.

This kink dynamics suggests the following coarsening model [NaK86, DGY91, CaP92,
BDG94, RuB94, BrD95, CaP00]. Consider a partition of the real line R into a countable
union of disjoint intervals I_{j}, with ℓ(I_{j}) ≥ 1 for all j ∈ Z. In the previous picture, the
intervals I_{j} correspond to regions where u is close to ±1. A dynamics on this configuration
space is defined by iterating the following coarsening step: choose the "smallest" interval
in the partition, and merge it with its two nearest neighbors. This model clearly mimics
the dynamics of the domain walls in the one-dimensional Allen-Cahn equation. However,
proving that the formal procedure described above actually defines a well-posed evolution
(e.g. for almost all initial configurations) and investigating its statistical properties after
many coarsening iterations is a non-trivial task, which has not been accomplished so far.

Instead, the coarsening model has been studied in the mean-field approximation, which consists in merging the minimal interval not with its true neighbors, but with two intervals chosen at random in the configuration {I_{j}}_{j∈Z}. This approximation is valid provided the lengths of consecutive intervals stay uncorrelated during the coarsening process; see [BDG94] for an argument indicating that the correlations indeed disappear as the number of intervals tends to infinity.
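The mean-field rule can be simulated directly. The following minimal Monte Carlo sketch (sample sizes, seed, and function names are our own choices, not from the references) removes the smallest interval at each step and merges it with two partners chosen uniformly at random; it illustrates that the total length is conserved while the mean interval length grows.

```python
import random

def mean_field_step(lengths):
    """One mean-field coarsening event: remove the smallest interval and
    merge it with two intervals chosen uniformly at random."""
    i = min(range(len(lengths)), key=lengths.__getitem__)
    smallest = lengths.pop(i)
    j, k = random.sample(range(len(lengths)), 2)
    merged = smallest + lengths[j] + lengths[k]
    for idx in sorted((j, k), reverse=True):  # delete larger index first
        del lengths[idx]
    lengths.append(merged)

random.seed(0)
n0 = 2000
lengths = [1.0 + random.random() for _ in range(n0)]
total = sum(lengths)
for _ in range(900):                          # each event removes 2 intervals
    mean_field_step(lengths)

assert len(lengths) == n0 - 2*900
assert abs(sum(lengths) - total) < 1e-6       # total length is conserved
assert sum(lengths)/len(lengths) > total/n0   # mean length has grown
```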

Under this assumption, it is possible to write a closed evolution equation for the distribution f(t, x) (per unit length) of intervals of length x ≥ 1 at time t [CaP92].

Denoting by N(t) = ∫_{0}^{∞} f(t, x) dx the total number of intervals per unit length, and by L(t) the length of the smallest interval, the equation reads

∂_{t}f(t, x) = (L̇(t)f(t,L(t)) / N(t)^{2}) ( ∫_{0}^{x−L(t)} f(t, y)f(t, x−y−L(t)) dy − 2f(t, x)N(t) ),   (1.1)

for x ≥ L(t), whereas f(t, x) = 0 for x < L(t) by the definition of L(t). By construction, N(t) decreases with time, while the total length of the intervals ∫_{0}^{∞} xf(t, x) dx is conserved.

We prefer to work with the distribution density ρ(t, x) = f(t, x)/N(t), which satisfies ρ(t, x) = 0 for x < L(t) and the normalization ∫_{0}^{∞} ρ(t, x) dx = 1 for all t. The evolution equation for ρ reads

∂_{t}ρ(t, x) = L̇(t)ρ(t,L(t)) ∫_{0}^{x−L(t)} ρ(t, y)ρ(t, x−y−L(t)) dy for x ≥ L(t).   (1.2)

Of course, systems (1.1) and (1.2) are equivalent. In particular, once the density ρ(t, x) is known, the total number N(t) can be recovered by solving the ordinary differential equation Ṅ(t) = −2L̇(t)ρ(t,L(t))N(t), and the distribution f(t, x) is then given by N(t)ρ(t, x).

It is important to note that equations (1.1), (1.2) are invariant under reparametrizations of time. As a consequence, the minimal length L(t) is not determined by the initial data, but can be prescribed to be an arbitrary (increasing) function of time. In [CaP92], the authors define an "intrinsic time" by imposing the relation f(t,L(t))L̇(t) = 1, which means that the number of merging events per unit time is constant. We find it more convenient to use the "coarsening time" defined by the simple relation L(t) = t. In other words, we choose to parameterize the coarsening process by the length of the smallest remaining interval, forgetting about how much physical time elapses between or during the merging events. With our choice, equation (1.2) becomes

∂_{t}ρ(t, x) = ρ(t, t) ∫_{0}^{x−t} ρ(t, y)ρ(t, x−y−t) dy for x ≥ t.   (1.3)

Since we do not allow for intervals of length smaller than 1, we impose our initial condition at time t = 1: ρ(1, x) = ρ_{1}(x).

The aim of this paper is to show that the dynamics of (1.3) can be completely understood using a global linearization transformation. As a consequence, we are able to prove that solutions of (1.3) satisfying ∫_{0}^{∞} xρ(t, x) dx < ∞ approach a non-trivial self-similar profile as t → ∞. To achieve this goal, we first rewrite (1.3) in similarity coordinates by setting

ρ(t, x) = (1/t) η(log t, x/t), or η(τ, y) = e^{τ}ρ(e^{τ}, e^{τ}y),

where τ = log t ≥ 0 and y = x/t ∈ [1,∞). Then the rescaled density η(τ,·) lies in the time-independent space

P = { η ∈ L^{1}((1,∞),R_{+}) | ∫_{1}^{∞} η(y) dy = 1 },   (1.4)

which is a closed convex subset of L^{1}((1,∞)). Moreover, (1.3) is transformed into the autonomous evolution equation

∂_{τ}η(τ, y) = ∂_{y}(y η(τ, y)) + η(τ, 1) ∫_{1}^{y−2} η(τ, z)η(τ, y−z−1) dz for y ≥ 1.   (1.5)

In Section 3 we show that, for all initial data η_{0} ∈ P, (1.5) has a unique global solution η ∈ C^{0}([0,∞),P) with η(0) = η_{0}.

We now define a nonlinear map N : P → L^{1}_{loc}([1,∞),R_{+}) by

N = F^{−1} ∘ φ ∘ F,

where F is the Fourier transform and φ(z) = (1/2) log((1+z)/(1−z)). If η(τ,·) is a solution of (1.5) in P, a direct calculation reveals that w(τ,·) = N(η(τ,·)) satisfies the linear equation ∂_{τ}w(τ, y) = ∂_{y}(y w(τ, y)). As a consequence,

w(τ, y) = (S_{τ}w_{0})(y) = { e^{τ}w_{0}(e^{τ}y) if y ≥ 1; 0 if y < 1 },

where w_{0} = N(η_{0}). It follows that any solution η ∈ C^{0}([0,∞),P) of (1.5) satisfies N(η(τ)) = S_{τ}N(η_{0}) for all τ ≥ 0. In other words, the nonlinear evolution defined by (1.5) is conjugated (via the map N) to the linear semigroup (S_{τ}). Thus, the difficulty of solving (1.5) is carried over to the study of the mapping N and of its inverse N^{−1} = F^{−1} ∘ φ^{−1} ∘ F. Although the properties of these maps are not fully understood, it is possible to obtain some information on them using the analyticity properties of the Fourier-Laplace transform.
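The conjugacy rests on the fact that w(τ, y) = e^{τ}w_{0}(e^{τ}y) solves the transport equation ∂_{τ}w = ∂_{y}(yw); indeed ∂_{τ}w = e^{τ}w_{0}(e^{τ}y) + e^{2τ}y w_{0}′(e^{τ}y) = ∂_{y}(yw). This can also be checked numerically, as in the following sketch with a smooth test profile of our own choosing (central differences at an arbitrary test point).

```python
import math

# smooth test profile (our choice, only for the check)
w0 = lambda y: math.exp(-(y - 3.0)**2)

def w(tau, y):              # candidate solution w(tau, y) = e^tau w0(e^tau y)
    return math.exp(tau) * w0(math.exp(tau) * y)

tau, y, d = 0.3, 2.0, 1e-5
lhs = (w(tau + d, y) - w(tau - d, y)) / (2*d)                  # d/dtau w
rhs = ((y + d)*w(tau, y + d) - (y - d)*w(tau, y - d)) / (2*d)  # d/dy (y w)
assert abs(lhs - rhs) < 1e-5
```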

In Section 4 we investigate the steady states of (1.5), which form a one-parameter family {η_{θ}^{∗}}_{θ∈R}. Here η_{θ}^{∗} = N^{−1}((θ/2)w^{∗}), where w^{∗}(y) = y^{−1}1_{{y≥1}}. More explicitly, we have

η̂_{θ}^{∗}(ξ) = (Fη_{θ}^{∗})(ξ) = tanh((θ/2)E_{1}(iξ)) for ξ ∈ R,   (1.6)

where E_{1} is the exponential integral [AS72]. We prove that η_{θ}^{∗} ∈ P if and only if θ ∈ (0,1]. Moreover, η_{1}^{∗}(y) decays exponentially as y → ∞, while η_{θ}^{∗}(y) ∼ y^{−(1+θ)} if 0 < θ < 1. In particular, η_{1}^{∗} is the only steady state for which the average length ∫_{1}^{∞} yη_{1}^{∗}(y) dy is finite.

Finally, Section 5 is devoted to the convergence results. If the initial data η_{0} ∈ P satisfy y^{γ}η_{0} ∈ L^{2}((1,∞)) for some γ > 3/2 (so that ∫_{1}^{∞} yη_{0}(y) dy < ∞), we prove that the corresponding solution of (1.5) converges exponentially to the steady state η_{1}^{∗}:

‖y^{γ−1}(η(τ)−η_{1}^{∗})‖_{L^{2}((1,∞))} = O(e^{−(γ−3/2)τ}) for τ → ∞.

In terms of the original variables, this shows that the density ρ(t, x) asymptotically approaches the self-similar solution t^{−1}η_{1}^{∗}(x/t) of (1.3). Moreover, the remainder is O(t^{−(γ−3/2)}), so that the convergence is very fast if γ is large, i.e., if the initial data decay rapidly at infinity. Similarly, if 0 < θ < 1 and if η_{0} ∈ P satisfies y^{γ}(η_{0}−νη_{θ}^{∗}) ∈ L^{2}((1,∞)) for some γ > θ+1/2 and some ν > 0, we prove that the solution of (1.5) with initial data η_{0} converges to the steady state η_{θ}^{∗}.

To conclude this section, we briefly comment on previous results and possible generalizations. The mean-field equations (1.1) and especially the self-similar solutions (1.6) can be found in the physics literature [NaK86, DGY91, BDG94, RuB94, BrD95]. The first mathematical work is [CaP92], where the authors prove the existence of global solutions to (1.1). They also show that the profile η_{1}^{∗} is a positive function (a crucial property that is often tacitly assumed!) and study its asymptotic behavior as y → ∞. Our main contribution is the introduction of the linearization transformation N, which allows us to prove the convergence results. We also extend the analysis of [CaP92] to the equilibria η_{θ}^{∗} with 0 < θ < 1.

The "two-sided" coarsening model discussed in this introduction is clearly not the most general system to which our analysis applies. For instance, we can consider the "one-sided" variant in which the minimal interval is merged with one of its neighbors only [CaP00]. More generally, we can assume that, for j = 1, . . . , N, the minimal interval has a probability p_{j} of being merged with j of its neighbors, where p_{1}+···+p_{N} = 1. In the mean-field approximation, this leads to an evolution equation similar to (1.3), where the quadratic convolution in the right-hand side is replaced by a more general convolution polynomial. Except for a modified definition of the mapping N, this extension does not affect our analysis in any essential way. Therefore, in the rest of this paper, all results will be stated and proved in this general situation.

Acknowledgments. The authors are grateful for financial support through the French-German grant PROCOPE 00307 TK, "Attractors for Extended Systems".

### 2 The coarsening equation and its solution

As explained in the introduction, we shall study a general coarsening model for which the number of intervals involved in each merging event is not necessarily fixed. Instead, we allow for some randomness by choosing nonnegative real numbers p_{1}, . . . , p_{N} satisfying p_{1}+···+p_{N} = 1, where p_{j} is interpreted as the probability for an interval of minimal length to merge with j other intervals. We define the polynomial

Q(z) = Σ_{j=1}^{N} p_{j}z^{j},

which satisfies Q(1) = 1. The original coarsening model related to the Allen-Cahn equation corresponds to the particular case where Q(z) = z^{2}.

If ρ ∈ L^{1}(R), we set

Q[ρ] = Σ_{j=1}^{N} p_{j}ρ^{∗j},   (2.1)

where ρ^{∗j} = ρ∗ρ∗···∗ρ (j factors) and ∗ denotes the convolution product in L^{1}(R). In particular, we have ∫_{0}^{∞} Q[ρ](x) dx = Q(∫_{0}^{∞} ρ(x) dx). In what follows, we shall mainly use the space P of probability densities defined by (1.4). Any ρ ∈ P can be extended to the whole real line by setting ρ(x) = 0 for x < 1. This natural extension, still denoted by ρ, will be used in the sequel without further mention. As an example of this abuse of notation, if ρ ∈ P, we have Q[ρ] ∈ P and supp(Q[ρ]) ⊂ [2,∞) (here and in the sequel, we denote by supp(f) the support of a function f).

The problem we are interested in can now be stated as follows. Given ρ_{1} ∈ P, find a density ρ : [1,∞)^{2} → R_{+} satisfying ρ(1, x) = ρ_{1}(x) for x ≥ 1, ρ(t, x) = 0 for 1 ≤ x < t, and

∂_{t}ρ(t, x) = ρ(t, t)Q[ρ(t,·)](x−t) for x ≥ t ≥ 1.   (2.2)

If Q(z) = z^{2}, the evolution equation (2.2) reduces to (1.3).

By assumption, the density ρ(t, x) is nonzero only in the sector {(t, x) ∈ R^{2} | 1 ≤ t ≤ x}, where it satisfies (2.2). An important role will be played by the values of ρ on the boundaries of this domain, namely the initial density ρ_{1} and the trace of ρ on the diagonal x = t, which we denote by α:

α(t) = ρ(t, t) for t ≥ 1.

Any sufficiently smooth solution of (2.2) satisfies ρ(t,·) ∈ P for all t ≥ 1 provided ρ_{1} ∈ P. Indeed, it is obvious from (2.2) that ρ stays nonnegative. Moreover, if m(t) = ∫_{t}^{∞} ρ(t, x) dx, a direct calculation shows that

(d/dt) m(t) = α(t)(Q(m(t))−1) for t ≥ 1.   (2.3)

Therefore, if m(1) = 1, then m(t) = 1 for all t ≥ 1.
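The computation behind (2.3) is short: differentiating m(t), substituting (2.2), and using the change of variable u = x−t together with ∫_{0}^{∞} Q[ρ](u) du = Q(∫ρ), one finds

```latex
\frac{d}{dt}m(t)
  = -\rho(t,t) + \int_t^{\infty} \partial_t\rho(t,x)\,dx
  = -\alpha(t) + \alpha(t)\int_0^{\infty} Q[\rho(t,\cdot)](u)\,du
  = \alpha(t)\bigl(Q(m(t)) - 1\bigr),
```

where the last step uses that ρ(t,·) has total mass m(t).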

A very remarkable property of equation (2.2) is that it can be explicitly solved using the Fourier (or Laplace) transform. If ρ ∈ P, we define

ρ̂(ξ) = (Fρ)(ξ) = ∫_{1}^{∞} e^{−iξx}ρ(x) dx for ξ ∈ R.

Then ρ̂ ∈ C^{0}(R,C) satisfies ρ̂(0) = 1, |ρ̂(ξ)| < 1 for all ξ ≠ 0, and ρ̂(ξ) → 0 as ξ → ±∞. Moreover, ρ̂ is a positive definite function (in the sense of Bochner). Since supp(ρ) ⊂ [1,∞), the Fourier transform ρ̂ can be continuously extended to the lower complex half-plane

L^{−} = {ξ ∈ C | Im ξ ≤ 0}.

This extension (still denoted by ρ̂) is analytic in the interior of L^{−} and satisfies the bound

|ρ̂(ξ)| ≤ e^{Im ξ} for all ξ ∈ L^{−}.

Remark. The closely related Laplace transform is defined by

ρ̃(p) = (Lρ)(p) = ∫_{1}^{∞} e^{−px}ρ(x) dx for Re p ≥ 0,

so that ρ̃(p) = ρ̂(−ip). In the sequel, we prefer using the Fourier transform instead of the Laplace transform because the inversion formula is more natural.

Applying the Fourier transform to (2.2) and using the fact that convolutions are turned into multiplications, we find the equation

∂_{t}ρ̂(t, ξ) = α(t) e^{−iξt}(Q(ρ̂(t, ξ))−1) for t ≥ 1,   (2.4)

where α(t) = ρ(t, t). To solve (2.4), we introduce the nonlinear complex transformation φ defined by

φ′(z) = 1/(1−Q(z)), φ(0) = 0.   (2.5)

Remark that φ′(z) = Σ_{k=0}^{∞} [Q(z)]^{k}, so that φ has a power series expansion with nonnegative coefficients whose radius of convergence is equal to 1. In particular, the map φ : [0,1) → [0,∞) is one-to-one and onto. Let ψ = φ^{−1} be the inverse map, which satisfies

ψ′(w) = 1−Q(ψ(w)), ψ(0) = 0.   (2.6)

By construction, ψ is analytic in a neighborhood of the positive real axis. In the particular case where Q(z) = z^{2}, one finds

φ(z) = (1/2) log((1+z)/(1−z)) and ψ(w) = tanh(w).
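For Q(z) = z^{2} these identities are easy to test numerically. The sketch below (the test point z = 0.37 and step size are arbitrary choices of ours) confirms that ψ = tanh inverts φ and that the derivative relations (2.5), (2.6) hold.

```python
import math

Q = lambda z: z*z
phi = lambda z: 0.5*math.log((1.0 + z)/(1.0 - z))   # = artanh(z)
psi = math.tanh                                     # inverse of phi

z = 0.37
w = phi(z)
assert abs(psi(w) - z) < 1e-12                      # psi = phi^{-1}

d = 1e-6
# phi'(z) = 1/(1 - Q(z)): compare with a central difference
num = (phi(z + d) - phi(z - d))/(2*d)
assert abs(num - 1.0/(1.0 - Q(z))) < 1e-6
# psi'(w) = 1 - Q(psi(w)): tanh'(w) = 1 - tanh(w)^2
num = (psi(w + d) - psi(w - d))/(2*d)
assert abs(num - (1.0 - Q(psi(w)))) < 1e-6
```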

Applying the nonlinear transformation φ simplifies equation (2.4) a lot. The function ŵ(t, ξ) = φ(ρ̂(t, ξ)), which is defined at least for Im ξ < 0, satisfies the differential equation

∂_{t}ŵ(t, ξ) = −α(t)e^{−iξt} for t ≥ 1,

which has the explicit solution

ŵ(t, ξ) = ŵ(1, ξ) − ∫_{1}^{t} α(s)e^{−iξs} ds for t ≥ 1 and Im ξ < 0.   (2.7)

Remark that |ρ̂(t, ξ)| ≤ e^{t Im ξ} for all t ≥ 1 and all ξ ∈ L^{−}, because ρ(t,·) ∈ P and supp(ρ(t,·)) ⊂ [t,∞). Since φ(z) = z + O(|z|^{2}) as z → 0, it follows that |ŵ(t, ξ)| = |φ(ρ̂(t, ξ))| → 0 as t → ∞ if Im ξ < 0. Thus, taking the limit t → ∞ in (2.7), we find ŵ(1, ξ) = ∫_{1}^{∞} α(t)e^{−iξt} dt, which in turn implies

ŵ(t, ξ) = ∫_{t}^{∞} α(s)e^{−iξs} ds for t ≥ 1 and Im ξ < 0.   (2.8)
This formula has a very nice interpretation. Let N be the nonlinear transformation defined (at least formally) by

N = F^{−1} ∘ φ ∘ F or N^{−1} = F^{−1} ∘ ψ ∘ F.   (2.9)

Setting t = 1 in (2.8), we obtain φ(ρ̂_{1}) = α̂, that is, α = N(ρ_{1}). In other words, the trace α(t) = ρ(t, t) is obtained from the initial density ρ_{1}(x) = ρ(1, x) by applying the nonlinear map N. Moreover, if U(t) is the linear operator defined for t ≥ 1 by

(U(t)w)(s) = 1_{{s≥t}}w(s) = { 0 if s < t; w(s) if s ≥ t },   (2.10)

then (2.8) reads ŵ(t,·) = φ(ρ̂(t,·)) = F(U(t)α), which means N(ρ(t,·)) = U(t)α. Therefore, the solution of (2.2) satisfies

N(ρ(t,·)) = U(t)N(ρ_{1}) for t ≥ 1.   (2.11)

This shows that the dynamics of the nonlinear system (2.2) is conjugated via the nonlinear mapping N to the linear evolution U. Since N(ρ_{1}) is the trace function defined by α(t) = ρ(t, t), it is very natural that the evolution of α is obtained just by cutting off the history in [1, t).

It is not difficult to show that the map N is well-defined on the space P, cf. (1.4):

Proposition 2.1 If ρ ∈ P, then N(ρ) ∈ L^{1}_{loc}([1,∞),R_{+}), and the mapping ρ ↦ N(ρ) is one-to-one.

Proof. For ρ ∈ P we construct w = N(ρ) as follows. Define ŵ : L^{−}_{∗} → C by ŵ(ξ) = φ(ρ̂(ξ)), where L^{−}_{∗} = L^{−} \ {0}. We recall that ρ̂ is continuous on L^{−}, analytic in the interior of L^{−}, and that |ρ̂(ξ)| < 1 for ξ ≠ 0. Since φ is analytic in the unit disk of C, it follows that ŵ is continuous on L^{−}_{∗} and analytic in the interior of L^{−}. Moreover, |ŵ(ξ)| ≤ φ(|ρ̂(ξ)|) ≤ φ(e^{Im ξ}), hence |ŵ(ξ)| = O(e^{Im ξ}) as Im ξ → −∞. These properties imply (see [Sch66], Ch. VIII) that ŵ is the Fourier transform of a uniquely determined distribution w ∈ D′(R) with support in [1,∞).

The injectivity of N follows from the facts that the mapping φ : {z | |z| < 1} → C is locally injective (as φ′(z) = 1/(1−Q(z)) ≠ 0) and that φ : [0,1) → R is globally injective (as φ′(s) ≥ 1 for s ∈ [0,1)). If N(ρ_{1}) = N(ρ_{2}), then, by the above, we have φ(ρ̂_{1}(ξ)) = φ(ρ̂_{2}(ξ)) for ξ ∈ L^{−}_{∗}. This proves ρ̂_{1}(−ip) = ρ̂_{2}(−ip) for p > 0, as ρ̂_{j}(−ip) ∈ [0,1). By continuity of ρ̂_{j} and local invertibility we obtain ρ̂_{1} = ρ̂_{2} on L^{−}_{∗}, and hence ρ_{1} = ρ_{2}.

To prove w = N(ρ) ∈ L^{1}_{loc}([1,∞)), choose any ε > 0 and consider the distribution w_{ε} : x ↦ e^{−εx}w(x). It belongs to S′(R) (the space of tempered distributions) and its Fourier transform satisfies

ŵ_{ε}(ξ) = ŵ(ξ−iε) = φ(ρ̂(ξ−iε)) for Im ξ ≤ 0.

Now we observe that ρ̂(ξ−iε) = ρ̂_{ε}(ξ), where ρ_{ε}(x) = e^{−εx}ρ(x). Since ‖ρ_{ε}‖_{L^{1}} ≤ e^{−ε} < 1, the series Σ_{k=1}^{∞} (φ^{(k)}(0)/k!) ρ_{ε}^{∗k} converges in L^{1}(R) to some function W_{ε} ∈ L^{1}([1,∞),R_{+}). (Here we use the crucial fact that φ^{(k)}(0) ≥ 0 for all k ∈ N.) By construction,

Ŵ_{ε}(ξ) = Σ_{k=1}^{∞} (φ^{(k)}(0)/k!) ρ̂_{ε}(ξ)^{k} = φ(ρ̂(ξ−iε)) = ŵ_{ε}(ξ) for Im ξ ≤ 0,

giving w_{ε} = W_{ε} ∈ L^{1}((1,∞),R_{+}), and hence w : x ↦ e^{εx}w_{ε}(x) lies in L^{1}_{loc}([1,∞),R_{+}).

Remarks.

1. Under the assumptions of Proposition 2.1, one has that w = N(ρ) ∈ S′(R), i.e., w is a tempered distribution. In fact, there exists a constant C > 0 such that |ŵ(ξ)| = |φ(ρ̂(ξ))| ≤ C max{1, −log |ξ|} for ξ ≠ 0, see the proof of Proposition 5.1 below. This means that the singularity of ŵ(ξ) at ξ = 0 is (not worse than) logarithmic.

2. More information on N can be extracted from the proof of Proposition 2.1. For instance, if ρ ∈ P, then N(ρ)(x) = ρ(x) for almost all x ∈ (1, n+1), where

n = min{ j ∈ {1, . . . , N} | p_{j} > 0 } ≥ 1   (2.12)

is the largest integer such that |Q(z)| = O(|z|^{n}) as z → 0. Indeed, in view of (2.5), one has φ(z) = z + O(|z|^{n+1}) as z → 0. It follows that

W_{ε} = ρ_{ε} + Σ_{k=n+1}^{∞} (φ^{(k)}(0)/k!) ρ_{ε}^{∗k},

where the second term in the right-hand side is supported in the interval [n+1,∞). Thus W_{ε} = ρ_{ε} almost everywhere in [1, n+1], which proves the claim. Similarly, using the observation that supp(ρ_{ε}^{∗k}) ⊂ [k,∞), it is easy to show that, if ρ : [1,∞) → R_{+} is continuous, so is N(ρ).

The formula (2.11) is very nice, but it does not provide an effective method for solving the Cauchy problem associated with (2.2). Indeed, Proposition 2.1 does not give a sufficient characterization of the set N(P), which is also the domain of N^{−1}. It is not even clear a priori that this set is left invariant by the linear evolution U(t). For this reason, we shall use standard PDE techniques to prove the existence of solutions to (2.2) in the next section. But the representation (2.11) will be very useful to find self-similar solutions of (2.2) in Section 4, and to study their stability in Section 5.

### 3 The Cauchy problem for the rescaled system

The evolution equation (2.2) is not autonomous, and it is defined on the time-dependent domain {x ∈ R_{+} | x ≥ t}. These drawbacks are eliminated if we rescale the density ρ(t, x) by setting

ρ(t, x) = (1/t) η(log t, x/t) for x ≥ t ≥ 1,   (3.1)

or equivalently

η(τ, y) = e^{τ}ρ(e^{τ}, e^{τ}y) for τ ≥ 0, y ≥ 1.   (3.2)

In what follows, we denote by τ = log t and y = x/t the new time and space coordinates. The rescaled density η(τ,·) now belongs to the fixed space P defined in (1.4). Moreover, it satisfies the autonomous evolution equation

∂_{τ}η(τ, y) = ∂_{y}(y η(τ, y)) + β(τ)Q[η(τ,·)](y−1) for y ≥ 1,   (3.3)

where β(τ) = η(τ, 1) is the new trace, which relates to α(t) via β(τ) = e^{τ}α(e^{τ}). The initial condition for (3.3) is η(0, y) = η_{0}(y), where η_{0} = ρ_{1} ∈ P.

The nonlinearity in (3.3) has the form β(τ)T_{1}Q[η(τ)], where T_{1} : P → P is the shift operator defined by

(T_{1}η)(y) = { η(y−1) if y ≥ 2; 0 if y < 2 }.   (3.4)

In particular, for all η ∈ P, the support of T_{1}Q[η] is contained in [2,∞), or even in [n+1,∞), where n ≥ 1 is defined in (2.12). Thus, any solution of (3.3) satisfies the linear equation ∂_{τ}η = ∂_{y}(yη) in the strip {(τ, y) | τ ≥ 0, 1 ≤ y ≤ 2}. It follows that η(τ, y) = e^{τ−τ_{0}}η(τ_{0}, e^{τ−τ_{0}}y) for all τ ≥ τ_{0} ≥ 0 and all y ≥ 1 such that e^{τ−τ_{0}}y ≤ 2. Setting y = 1, we obtain the important relation

β(τ) = e^{τ−τ_{0}}η(τ_{0}, e^{τ−τ_{0}}) for 0 ≤ τ−τ_{0} ≤ log 2,   (3.5)

which means that the trace β(τ) for τ ∈ [τ_{0}, τ_{0}+log 2] can be determined from the solution η(τ_{0},·). This formula will be useful to define the trace β properly when the solution η(τ,·) of (3.3) is not continuous. For instance, if η(τ,·) ∈ P for all τ ≥ 0 and if β satisfies (3.5), then β ∈ L^{1}_{loc}([0,∞),R_{+}).

The main purpose of this section is to show that (3.3) defines a well-posed evolution in the space P. To do this, we consider the associated integral equation

η(τ) =Sτη0+ Z τ

0

β(s)Sτ−sT1Q[η(s)] ds for τ ≥ 0, (3.6) where (Sτ)τ≥0 is the linear semigroup on P defined by

(S_{τ}η)(y) =

e^{τ}η(e^{τ}y) if y≥1;

0 if y <1. (3.7)

To formulate our convergence results in Section 5, we shall need some weighted L^{p} spaces
which we now introduce. For p∈[1,∞) and γ ≥0, we denote by L^{p}_{γ} the function space

L^{p}_{γ} ={w∈L^{1}_{loc}([1,∞),R)| kwk_{p,γ} <∞}, (3.8)
where

kwk_{p,γ} =ky^{γ}wk_{L}^{p} =
Z ∞

1

(y^{γ}|w(y)|)^{p}dy
1/p

.

When γ = 0, we simply write L^{p} instead ofL^{p}_{0} and kwk_{p} instead of kwk_{p,0}. Remark that
L^{p}_{γ} ,→L^{1} if and only ifγ >1−1/p(whenp >1) orγ ≥0 (whenp= 1). In what follows,
we shall often restrict ourselves to such values of p, γ.

We first give a few basic estimates on the semigroup (S_{τ}) and the nonlinearity Q acting on L^{p}_{γ}.

Lemma 3.1 Let p ∈ [1,∞) and γ ≥ 0. Then (3.7) defines a strongly continuous semigroup (S_{τ})_{τ≥0} in L^{p}_{γ}, and

‖S_{τ}η‖_{p,γ} ≤ e^{−τ(γ−1+1/p)}‖η‖_{p,γ},   (3.9)

for all η ∈ L^{p}_{γ} and all τ ≥ 0. Moreover, equality holds in (3.9) if and only if η(y) = 0 for almost all y ∈ [1, e^{τ}].

Lemma 3.2 Let Q be the nonlinear map defined by (2.1).

a) If η ∈ L^{1}, then Q[η] ∈ L^{1} and ‖Q[η]‖_{1} ≤ Q(‖η‖_{1}). If η, η̃ ∈ L^{1}, then

‖Q[η]−Q[η̃]‖_{1} ≤ Q′(r)‖η−η̃‖_{1},

where r = max{‖η‖_{1}, ‖η̃‖_{1}}. Finally, if η ∈ P, then Q[η] ∈ P.

b) Let p ∈ [1,∞) and γ > 1−1/p. If η ∈ L^{p}_{γ}, then Q[η] ∈ L^{p}_{γ}, and there exists C > 0 (independent of η) such that

‖T_{1}Q[η]‖_{p,γ} ≤ CQ′(‖η‖_{1})‖η‖_{p,γ}.   (3.10)

If η, η̃ ∈ L^{p}_{γ} and R = max{‖η‖_{p,γ}, ‖η̃‖_{p,γ}}, then

‖T_{1}Q[η]−T_{1}Q[η̃]‖_{p,γ} ≤ CQ′(R)‖η−η̃‖_{p,γ}.

Proof. Estimate (3.9) is a straightforward calculation, and the proof of Lemma 3.2 will be outlined in Appendix C.
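The decay rate in (3.9), and its equality case, can be checked by quadrature. In the sketch below (grid, exponents, and the test profile are our own choices), η vanishes on [1, e^{τ}] with τ = log 2, so near-equality in (3.9) is expected up to discretization error.

```python
import numpy as np

h = 1e-3
y = 1.0 + h*np.arange(20000)                 # grid on [1, 21)
p, gamma = 2.0, 1.0
tau = np.log(2.0)

def norm_pg(w):                              # ||w||_{p,gamma}, rectangle rule
    return (np.sum((y**gamma*np.abs(w))**p)*h)**(1.0/p)

eta = np.where(y >= 2.0, np.exp(-(y - 2.0)), 0.0)   # vanishes on [1, 2]
# (S_tau eta)(y) = e^tau eta(e^tau y); sample eta at e^tau*y by interpolation
S_eta = np.exp(tau)*np.interp(np.exp(tau)*y, y, eta, right=0.0)

lhs = norm_pg(S_eta)
rhs = np.exp(-tau*(gamma - 1.0 + 1.0/p))*norm_pg(eta)
assert abs(lhs - rhs) < 1e-2   # equality case: eta = 0 a.e. on [1, e^tau]
```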

We are now ready to state the main result of this section:

Theorem 3.3 For any η_{0} ∈ L^{1}((1,∞),R) with ‖η_{0}‖_{1} ≤ 1, equations (3.6), (3.5) have a unique global solution η ∈ C^{0}([0,∞), L^{1}), which satisfies ‖η(τ)‖_{1} ≤ 1 for all τ ≥ 0. In addition,

1) if η_{0} ∈ P, then η(τ) ∈ P for all τ ≥ 0;

2) if η_{0} ∈ L^{p}_{γ} for some p ≥ 1 and some γ > 1−1/p, then η ∈ C^{0}([0,∞), L^{p}_{γ}).

Proof. Fix η_{0} ∈ B_{1}, where B_{1} = {η ∈ L^{1} | ‖η‖_{1} ≤ 1}. Setting τ_{0} = 0 in (3.5), we obtain

β(τ) = e^{τ}η_{0}(e^{τ}) for 0 ≤ τ ≤ log 2.   (3.11)

The first step is to show that (3.6), (3.11) have a unique solution η ∈ C^{0}([0, log 2], L^{1}).

Let q = Q′(1) ≥ 1, and let T = (log 2)/m, where m ∈ N^{∗} is sufficiently large so that, for all k = 1, . . . , m,

∫_{(k−1)T}^{kT} e^{s}|η_{0}(e^{s})| ds < 1/q.   (3.12)

We introduce the Banach space X = C^{0}([0, T], L^{1}) equipped with the norm

‖η‖_{X} = sup_{0≤τ≤T} ‖η(τ)‖_{1}.

Let B = {η ∈ X | ‖η‖_{X} ≤ 1}, and let F : X → X be the nonlinear map defined by

(F[η])(τ) = S_{τ}η_{0} + ∫_{0}^{τ} β(s)S_{τ−s}T_{1}Q[η(s)] ds for 0 ≤ τ ≤ T,

where β(s) is given by (3.11). We claim that F(B) ⊂ B and that F is a strict contraction in B. Indeed:

a) Assume that η ∈ B. Using Lemmas 3.1 and 3.2, we find, for all τ ∈ [0, T],

‖(F[η])(τ)‖_{1} ≤ ‖S_{τ}η_{0}‖_{1} + ∫_{0}^{τ} |β(s)| ‖S_{τ−s}T_{1}Q[η(s)]‖_{1} ds

= ∫_{1}^{∞} e^{τ}|η_{0}(e^{τ}y)| dy + ∫_{0}^{τ} |β(s)| ‖T_{1}Q[η(s)]‖_{1} ds

= ∫_{e^{τ}}^{∞} |η_{0}(y)| dy + ∫_{0}^{τ} e^{s}|η_{0}(e^{s})| ‖Q[η(s)]‖_{1} ds   (3.13)

≤ ∫_{e^{τ}}^{∞} |η_{0}(y)| dy + Q(‖η‖_{X}) ∫_{1}^{e^{τ}} |η_{0}(y)| dy ≤ 1,

since Q(‖η‖_{X}) ≤ Q(1) = 1 and ‖η_{0}‖_{1} ≤ 1. This shows that F(B) ⊂ B.

b) If η, η̃ ∈ B, then for all τ ∈ [0, T],

‖(F[η])(τ)−(F[η̃])(τ)‖_{1} ≤ ∫_{0}^{τ} |β(s)| ‖S_{τ−s}(T_{1}Q[η(s)]−T_{1}Q[η̃(s)])‖_{1} ds

= ∫_{0}^{τ} e^{s}|η_{0}(e^{s})| ‖Q[η(s)]−Q[η̃(s)]‖_{1} ds

≤ ∫_{0}^{τ} e^{s}|η_{0}(e^{s})| Q′(1) ‖η(s)−η̃(s)‖_{1} ds

≤ ( q ∫_{0}^{T} e^{s}|η_{0}(e^{s})| ds ) ‖η−η̃‖_{X}.

In view of (3.12), this shows that F is a strict contraction in B.

Let η ∈ X be the unique fixed point of F in the ball B. Then η satisfies (3.6), and using Gronwall's lemma it is readily verified that η is in fact the unique solution of (3.6) in the whole space X = C^{0}([0, T], L^{1}). Repeating the same argument m times (where m is such that (3.12) holds), we conclude that equations (3.6), (3.11) have a unique solution η ∈ C^{0}([0, log 2], L^{1}), which satisfies ‖η(τ)‖_{1} ≤ 1 for all τ ∈ [0, log 2]. Moreover, it is clear that (3.5) holds for all τ_{0} ∈ [0, log 2] and almost all τ ∈ [τ_{0}, log 2].

For τ ∈ [0, log 2], let Ξ_{τ} : B_{1} → B_{1} be the nonlinear map defined by Ξ_{τ}η_{0} = η(τ), where η(τ) is the solution of (3.6) we have just constructed. Then it is easy to verify that Ξ_{τ_{1}+τ_{2}} = Ξ_{τ_{1}} ∘ Ξ_{τ_{2}} for 0 ≤ τ_{1}+τ_{2} ≤ log 2. It follows that the family (Ξ_{τ}) can be extended to a continuous semiflow (Ξ_{τ})_{τ≥0}. By construction, if η_{0} ∈ B_{1} and if we set η(τ) = Ξ_{τ}η_{0} for all τ ≥ 0, then η ∈ C^{0}([0,∞), L^{1}) is the unique solution of (3.6), (3.5), and η(τ) ∈ B_{1} for all τ ≥ 0. This proves the first part of Theorem 3.3.

Assume now that η_{0} ∈ P. Keeping the same notation as above, we define

B̃ = {η ∈ X | η(τ) ∈ P for all τ ∈ [0, T]}.

In particular, B̃ is a closed subset of B, as P is closed in B_{1} ⊂ L^{1}. If η ∈ B̃, it is clear that (F[η])(τ) ∈ L^{1}((1,∞),R_{+}) for all τ ∈ [0, T], and that all inequalities in (3.13) can be replaced by equalities. Thus F(B̃) ⊂ B̃, hence the solution η ∈ C^{0}([0,∞), L^{1}) of (3.6) satisfies η(τ) ∈ P for all τ ∈ [0, T]. Proceeding as above, we then show that η(τ) ∈ P for all τ ∈ [0, log 2], hence for all τ ≥ 0. This proves assertion 1) in Theorem 3.3.

Finally, assume that η_{0} ∈ L^{p}_{γ} for some p ≥ 1 and some γ > 1−1/p, and that ‖η_{0}‖_{1} ≤ 1.

Using Lemmas 3.1, 3.2 and a fixed point argument as before, it is straightforward to show that the solution η ∈ C^{0}([0,∞), L^{1}) of (3.6) satisfies η ∈ C^{0}([0, T], L^{p}_{γ}) for some T > 0 (depending on η_{0}). Let

T^{∗} = sup{ T > 0 | η ∈ C^{0}([0, T], L^{p}_{γ}) } ∈ (0,∞].

We claim that T^{∗} = ∞. Indeed, assume on the contrary that 0 < T^{∗} < ∞. Since ‖η(τ)‖_{1} ≤ 1 for all τ ≥ 0, it follows from (3.6), (3.9), (3.10) that

‖η(τ)‖_{p,γ} ≤ ‖η_{0}‖_{p,γ} + Cq ∫_{0}^{τ} |β(s)| ‖η(s)‖_{p,γ} ds for 0 ≤ τ < T^{∗}.

Using Gronwall's lemma and the fact that β ∈ L^{1}_{loc}([0,∞)), we deduce that ‖η(τ)‖_{p,γ} ≤ C′ for all τ ∈ [0, T^{∗}). In view of (3.6), (3.10), this in turn implies that η(τ) has a limit in L^{p}_{γ} as τ ↗ T^{∗}, giving η ∈ C^{0}([0, T^{∗}], L^{p}_{γ}). Since we have a local existence result in L^{p}_{γ}, we conclude that η ∈ C^{0}([0, T], L^{p}_{γ}) for some T > T^{∗}, which contradicts the definition of T^{∗}. This proves assertion 2) in Theorem 3.3.
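For Q(z) = z^{2}, the evolution constructed in Theorem 3.3 can be mimicked by a crude explicit scheme. The sketch below (grid, time step, and the upwind discretization are our own choices, not from the paper) integrates (3.3) and illustrates that positivity and the normalization defining P are approximately preserved.

```python
import numpy as np

h, dt = 0.05, 0.001
y = 1.0 + h*np.arange(int(14.0/h) + 1)      # grid on [1, 15]
eta = np.exp(-(y - 1.0))
eta /= eta.sum()*h                          # eta in P: integral = 1
shift = int(round(2.0/h))                   # T1 Q[eta] is supported in y >= 3

def rhs(eta):
    beta = eta[0]                           # trace beta(tau) = eta(tau, 1)
    flux = y*eta                            # transport term d/dy (y eta)
    dflux = np.empty_like(eta)
    dflux[:-1] = (flux[1:] - flux[:-1])/h   # upwind: characteristics move left
    dflux[-1] = -flux[-1]/h                 # eta = 0 beyond the grid
    conv = np.convolve(eta, eta)[:len(y)]*h # (eta*eta)(2 + k h), rectangle rule
    src = np.zeros_like(eta)
    src[shift:] = conv[:len(y) - shift]     # shifted to argument y - 1
    return dflux + beta*src

for _ in range(500):                        # explicit Euler up to tau = 0.5
    eta = eta + dt*rhs(eta)

assert (eta >= -1e-9).all()                 # the scheme preserves positivity
assert abs(eta.sum()*h - 1.0) < 0.05        # mass stays near 1 (P is invariant)
```

The mass balance works because the transport term loses exactly β(τ) through the boundary y = 1, which the convolution term restores when ∫η = 1, mirroring the computation behind (2.3).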

The nonlinear map N introduced in the previous section can also be used to linearize (3.3). Indeed, the Fourier transforms of ρ and η are related via ρ̂(t, ξ) = η̂(log t, tξ), so that (3.1) is just a rescaling of the Fourier variable ξ. As is clear from (2.9), this transformation commutes with the action of N. Thus, if ρ is a solution of (2.2) with initial data ρ_{1} and if η is the corresponding solution of (3.3) given by (3.2), it follows from (2.11) that

(1/t) N(η(log t,·))(x/t) = N(η_{0})(x) for x ≥ t ≥ 1,   (3.14)

where η_{0} = ρ_{1}. Setting τ = log t and y = x/t, we obtain the representation formula

N(η(τ)) = S_{τ}N(η_{0}) for τ ≥ 0,   (3.15)

where (S_{τ}) is the linear semigroup (3.7). The last result of this section shows that this formula is indeed correct:

Proposition 3.4 Let η_{0} ∈ P, and let η ∈ C^{0}([0,∞),P) be the solution of (3.6) given by Theorem 3.3. Then N(η(τ)) = S_{τ}N(η_{0}) for all τ ≥ 0.

Proof. We establish the formula by returning to the unscaled variables (t, x) and by showing that the formal steps of Section 2 can be made rigorous for the solutions of (3.3). Define ρ : [1,∞)^{2} → R_{+} by ρ(t, x) = (1/t)η(log t, x/t) if x ≥ t ≥ 1 and ρ(t, x) = 0 if 1 ≤ x < t. Then ρ ∈ C^{0}([1,∞),P), and rescaling (3.6) we find

ρ(t) = U(t)( ρ_{1} + ∫_{1}^{t} α(s)T_{s}Q[ρ(s)] ds ) for t ≥ 1,   (3.16)

where ρ_{1} = η_{0} ∈ P, α(t) = (1/t)β(log t), U(t) is the linear operator (2.10), and T_{s} is the shift operator defined as in (3.4). To simplify the notation, we set f(s, x) = (T_{s}Q[ρ(s)])(x).

Then f ∈ C^{0}([1,∞), L^{1}), so that (s, x) ↦ α(s)f(s, x) ∈ L^{1}_{loc}([1,∞), L^{1}). By construction, the trace α satisfies the identity

α(t) = ρ_{1}(t) + ∫_{1}^{t} α(s)f(s, t) ds for a.a. t ≥ 1.

We now apply the Fourier transform to (3.16). For any ξ ∈ L^{−} and any t ≥ 1, we find

ρ̂(t, ξ) = ∫_{t}^{∞} ρ_{1}(x)e^{−iξx} dx + ∫_{t}^{∞} ( ∫_{1}^{t} α(s)f(s, x) ds ) e^{−iξx} dx.

Since ρ_{1} ∈ P, the first term in the right-hand side is absolutely continuous with respect to t, and

∂_{t} ∫_{t}^{∞} ρ_{1}(x)e^{−iξx} dx = −ρ_{1}(t)e^{−iξt} for a.a. t ≥ 1.

The second term can be decomposed as h_{1}(t, ξ)−h_{2}(t, ξ), where

h_{1}(t, ξ) = ∫_{1}^{∞} ( ∫_{1}^{t} α(s)f(s, x) ds ) e^{−iξx} dx = ∫_{1}^{t} α(s)f̂(s, ξ) ds,

h_{2}(t, ξ) = ∫_{1}^{t} ( ∫_{1}^{t} α(s)f(s, x) ds ) e^{−iξx} dx.

Clearly, h_{1}(t, ξ) is absolutely continuous with respect to t, and

∂_{t}h_{1}(t, ξ) = α(t)f̂(t, ξ) = α(t)e^{−iξt}Q(ρ̂(t, ξ)) for a.a. t ≥ 1.

Next, since f(s, x) = 0 for x < s, we have ∫_{1}^{t} α(s)f(s, x) ds = ∫_{1}^{x} α(s)f(s, x) ds, and this expression is a locally integrable function of x. It follows that h_{2}(t, ξ) is absolutely continuous with respect to t, and

∂_{t}h_{2}(t, ξ) = e^{−iξt} ∫_{1}^{t} α(s)f(s, t) ds for a.a. t ≥ 1.

Summarizing, we have shown that, for any ξ ∈ L^{−}, the Fourier transform ρ̂(t, ξ) is absolutely continuous with respect to t and satisfies

∂_{t} ρ̂(t, ξ) = −e^{−iξt} [ρ_{1}(t) + ∫_{1}^{t} α(s) f(s, t) ds] + α(t) e^{−iξt} Q(ρ̂(t, ξ)) = α(t) e^{−iξt} [Q(ρ̂(t, ξ)) − 1] for a.a. t ≥ 1.

This gives (2.4). Now, proceeding exactly as in Section 2, we deduce that (2.8) holds for all t ≥ 1 if Im ξ < 0, and this in turn is equivalent to (2.11). Finally, using the transformation (3.14) we obtain (3.15).

### 4 Properties of the steady states

This section is devoted to the time-independent solutions of (3.3) in the space P defined by (1.4).

Definition. We say that η_{0} ∈ P is a steady state of (3.3) if the solution η ∈ C^{0}([0,∞), P) of (3.6) given by Theorem 3.3 satisfies η(τ) = η_{0} for all τ ≥ 0.

The steady states of (3.3) will also be called “equilibria” or “stationary solutions”.

Lemma 4.1 If η_{0} ∈ P is a steady state of (3.3), there exists β ≥ 0 such that η_{0}(y) = β/y for almost all y ∈ [1, 2].

Proof. If η(τ) ≡ η_{0}, (3.6) implies that η_{0}(y) = e^{τ} η_{0}(e^{τ} y) for all τ ∈ [0, log 2] and a.a. y ∈ [1, 2e^{−τ}], because the nonlinearity in (3.6) vanishes identically for such values of τ, y. We define F : x ↦ ∫_{1}^{e^{x}} η_{0}(y) dy ≥ 0 and obtain

F(x + y) = F(x) + F(y) for x, y ≥ 0 and x + y ≤ log 2.

Since F is continuous, we conclude that F(x) = βx for some β ≥ 0. Differentiating implies β = e^{x} η_{0}(e^{x}) for a.a. x ∈ [0, log 2], which gives the desired result.

Let η_{0} ∈ P be a steady state. Since η_{0} coincides almost everywhere in [1, 2] with a continuous function, the constant β in Lemma 4.1 can be identified with η_{0}(1). Clearly, the trace function defined by (3.5) satisfies β(τ) = β for all τ ≥ 0. In particular, the integral equation (3.6) reduces to

η_{0} = S_{τ} η_{0} + β ∫_{0}^{τ} S_{s} T_{1} Q[η_{0}] ds for τ ≥ 0. (4.1)

From η_{0} ∈ P we now conclude that β > 0.

On the other hand, if η_{0} ∈ P and w = N(η_{0}), it follows from Propositions 2.1 and 3.4 that η_{0} is a steady state if and only if S_{τ} w = w for all τ ≥ 0. In view of (3.7), this is the case if and only if there exists β′ ∈ R such that w = β′ w^{∗}, where

w^{∗}(y) = 1/y if y ≥ 1, 0 if y < 1. (4.2)

But since w(y) = η_{0}(y) for a.a. y ∈ [1, 2] (see Remark 2 after Proposition 2.1), we necessarily have β′ = β = η_{0}(1).

Finally, since equilibria are time-independent solutions of (3.3), we certainly expect them to solve the ordinary differential equation

(yη)′(y) + β (T_{1} Q[η])(y) = 0 for y ≥ 1, η(1) = β. (4.3)

Remark that the initial value β also appears as a parameter in front of the nonlinear term. It is not difficult to show that (4.3) has global solutions:

Lemma 4.2 For any β ∈ R, equation (4.3) has a unique global solution η : [1,∞) → R.

Proof. For any k ∈ N, let I_{k} = [kn + 1, (k+1)n + 1], where n ∈ N_{∗} is defined in (2.12). For any η ∈ L^{1}_{loc}([1,∞), R), the nonlinear term (T_{1} Q[η])(y) only depends on the values of η(z) for z ≤ y − n. In particular, (T_{1} Q[η])(y) = 0 for y ≤ n + 1, so that any solution of (4.3) satisfies η(y) = β/y for y ∈ I_{0} = [1, n+1]. Using this information, one can compute (T_{1} Q[η])(y) explicitly for y ∈ I_{1} = [n+1, 2n+1], and then solve (4.3) on this interval to determine η(y) for y ∈ I_{1}. By construction, η is smooth on both I_{0} and I_{1}, but η has a discontinuity of order n at y = n + 1, in the sense that the derivatives η^{(k)}(y) are continuous for k = 0, . . . , n−1, whereas η^{(n)}(y) has different limits to the left and to the right at y = n + 1 (if β ≠ 0). Iterating this procedure, we find that (4.3) has a unique global solution η ∈ C^{n−1,1}([1,∞), R), which satisfies η ∈ C^{∞}(I_{k}) for all k ∈ N.
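To make the stepwise construction concrete, here is a small numerical sketch (our illustration, not part of the paper) for the special case Q(z) = z, where n = 1 and the shifted nonlinearity reduces to the pure delay term η(y−1). Equation (4.3) then reads (yη)′(y) = −β η(y−1) with η(y) = β/y on I_{0} = [1, 2], and on [2, 3] one finds the closed form η(y) = (β − β² log(y−1))/y, which the marching scheme reproduces:

```python
import math

def solve_eta(beta=1.0, ymax=4.0, steps_per_unit=1000):
    """Stepwise solution of (y*eta)'(y) = -beta*eta(y-1), with eta = beta/y on [1,2].

    Illustrative scheme for equation (4.3) in the special case Q(z) = z,
    where T_1 Q[eta] is the pure delay term eta(y-1)."""
    h = 1.0 / steps_per_unit
    n = int(round((ymax - 1.0) * steps_per_unit))
    ys = [1.0 + i * h for i in range(n + 1)]
    eta = [0.0] * (n + 1)
    # On I_0 = [1, 2] the delay term vanishes, so eta(y) = beta/y there.
    for i in range(steps_per_unit + 1):
        eta[i] = beta / ys[i]
    # March forward using y*eta(y) - y0*eta(y0) = -beta * int_{y0}^{y} eta(s-1) ds,
    # with the integral increment approximated by the trapezoid rule.
    for i in range(steps_per_unit + 1, n + 1):
        inc = -beta * 0.5 * (eta[i - 1 - steps_per_unit] + eta[i - steps_per_unit]) * h
        eta[i] = (ys[i - 1] * eta[i - 1] + inc) / ys[i]
    return ys, eta

ys, eta = solve_eta(beta=1.0)
# On [2, 3] the exact solution for beta = 1 is eta(y) = (1 - log(y-1))/y.
assert abs(eta[1500] - (1 - math.log(1.5)) / 2.5) < 1e-3
```

The marching step mirrors the proof of Lemma 4.2: on each interval the delay term only involves values of η that have already been computed.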

The following result shows that equilibria of (3.3) indeed correspond to solutions of the differential equation (4.3).

Proposition 4.3 If η_{0} ∈ P and β > 0, the following assertions are equivalent:

a) η_{0} is a steady state of (3.3) with η_{0}(1) = β.

b) η_{0} coincides almost everywhere with the solution of (4.3).

c) N(η_{0}) = β w^{∗}.

Proof. We already proved that a) ⇔ c). If η_{0} ∈ P is a steady state with η_{0}(1) = β, it follows from (4.1) that

(S_{τ} η_{0} − η_{0})/τ + (β/τ) ∫_{0}^{τ} S_{s} T_{1} Q[η_{0}] ds = 0

for all τ > 0. Using (3.7), it is not difficult to verify that the first term converges to (yη_{0})′ in D′((1,∞)) as τ → 0, while the second one tends to β T_{1} Q[η_{0}] in L^{1}((1,∞)). This shows that (after modification on a set of measure zero) η_{0} is absolutely continuous on (1,∞) and satisfies the differential equation (4.3) for almost all y > 1. It follows easily that η_{0} is the solution of (4.3) in the sense of Lemma 4.2. Thus a) ⇒ b).

Conversely, assume that η_{0} ∈ P satisfies (4.3). Applying the semigroup S_{τ} to (4.3) and integrating over τ, we immediately obtain (4.1), which implies that η_{0} is a steady state. This proves that b) ⇒ a).

The main goal of this section is to determine for which values of β > 0 the solution η of (4.3) belongs to P. Our strategy is to use the characterization c) in Proposition 4.3. Therefore, we are led to study the image of β w^{∗} under the map N^{−1}, and this requires very precise information on the complex transformations (2.5) and (2.6). The following quantities, related to the polynomial Q(z), will play an important role in the sequel:

q = Q′(1) ≥ 1 and κ = exp(∫_{0}^{1} [1/(1−z) − q/(1−Q(z))] dz) ≤ 1. (4.4)

Lemma 4.4 Let

Φ(z) = 1 − e^{−qφ(z)} for |z| < 1,

where φ is defined in (2.5). Then Φ can be extended analytically to a neighborhood of the real positive axis R_{+}. This extension satisfies Φ(z) ≥ 0 and Φ′(z) > 0 for all z ≥ 0. Moreover, Φ(0) = 0, Φ′(0) = q, Φ(1) = 1, Φ′(1) = κ, and Φ(z) → R as z → ∞, where

R = 1 + exp(∫_{0}^{2} [1/(1−z) − q/(1−Q(z))] dz − ∫_{2}^{∞} q/(1−Q(z)) dz). (4.5)

Note that R = ∞ if Q(z) = z and 1 < R < ∞ otherwise.

Proof. Since the polynomial 1 − Q(z) has the unique real positive root z = 1, which is a simple root because Q′(1) = q ≠ 0, it is clear that the function

χ(z) = exp(∫_{0}^{z} [1/(1−t) − q/(1−Q(t))] dt) = e^{−qφ(z)}/(1−z) for |z| < 1,

can be extended to an analytic map in a neighborhood of the real positive axis R_{+}. Moreover, χ(0) = 1, χ(1) = κ, and using z − 1 = exp(−∫_{2}^{z} dt/(1−t)) shows that (z−1) χ(z) → R − 1 for z → ∞, where R is defined in (4.5). Since Φ(z) = 1 − (1−z) χ(z), we conclude that the function Φ has the desired properties. In particular,

Φ′(z) = q χ(z) (1−z)/(1−Q(z)),

so that Φ′(z) > 0 for all z ≥ 0.

It follows from Lemma 4.4 that the map Φ : [0,∞) → [0, R) is one-to-one and onto. Let Ψ = Φ^{−1} : [0, R) → [0,∞) be the inverse map. Then Ψ(0) = 0, Ψ′(0) = 1/q, Ψ(1) = 1, Ψ′(1) = 1/κ, and Ψ′(u) > 0 for all u ∈ [0, R). By construction,

Ψ(u) = ψ(−(1/q) log(1−u)) for 0 ≤ u < 1. (4.6)

Lemma 4.5 The function Ψ : [0, R) → [0,∞) is absolutely monotone, i.e. Ψ^{(k)}(u) ≥ 0 for all k ∈ N and all u ∈ [0, R). In particular, Ψ can be extended to an analytic function on the disc |u| < R, and there exist nonnegative coefficients (Ψ_{k})_{k∈N_{∗}} such that

Ψ(u) = Σ_{k=1}^{∞} Ψ_{k} u^{k} for |u| < R.

Proof. Since Ψ = Φ^{−1}, we already know that Ψ is analytic in a neighborhood of [0, R). We first show by induction that, for all n ∈ N_{∗}, there exists a polynomial P_{n} such that

Ψ^{(n)}(u) = P_{n}(Ψ(u)) / (q^{n} (1−u)^{n}) for 0 < u < 1. (4.7)

Indeed, differentiating (4.6) and using (2.6), we obtain

Ψ′(u) = (1 − Q(Ψ(u))) / (q(1−u)) for 0 < u < 1. (4.8)

Thus (4.7) holds for n = 1 with P_{1}(z) = 1 − Q(z). On the other hand, differentiating (4.7) and using (4.8), we find, for 0 < u < 1,

Ψ^{(n+1)}(u) = P_{n+1}(Ψ(u)) / (q^{n+1} (1−u)^{n+1}) with P_{n+1}(z) = P_{n}′(z)(1−Q(z)) + nq P_{n}(z). (4.9)

Therefore, (4.7) is established.

We next show that, for all n ∈ N_{∗}, there exists a polynomial R_{n}(z) with nonnegative coefficients such that

P_{n}(z) = (1−Q(z)) (1−z)^{n−1} R_{n}(z). (4.10)

Obviously, (4.10) holds for n = 1 with R_{1}(z) = 1. Combining (4.9) and (4.10), we obtain the recursion relation

R_{n+1}(z) = A_{1}(z) R_{n}′(z) + A_{2}(z) R_{n}(z) + (n−1) A_{3}(z) R_{n}(z),

where the coefficient functions A_{j} are given by

A_{1}(z) = (1−Q(z))/(1−z) = Σ_{j=1}^{N} p_{j} (1−z^{j})/(1−z),

A_{2}(z) = (q−Q′(z))/(1−z) = Σ_{j=2}^{N} j p_{j} (1−z^{j−1})/(1−z),

A_{3}(z) = q/(1−z) − (1−Q(z))/(1−z)^{2} = Σ_{j=2}^{N} p_{j} Σ_{k=1}^{j−1} (1−z^{k})/(1−z).

Because of p_{j} ≥ 0, all A_{1}, A_{2}, A_{3} are polynomials (in z) with nonnegative coefficients. Thus, the same property holds for R_{n} by induction over n.
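The induction can be checked mechanically. The sketch below (our illustration, with hypothetical helper names) runs the recursion for the monomial case Q(z) = z³, where q = 3, A_{1} = 1 + z + z², A_{2} = 3 + 3z, and A_{3} = 2 + z, and verifies both the nonnegativity of the R_{n} and the factorization (4.10) against P_{n} computed directly from (4.9):

```python
# Minimal integer polynomial arithmetic; a polynomial is its coefficient list.
def padd(p, r):
    out = [0] * max(len(p), len(r))
    for i, c in enumerate(p):
        out[i] += c
    for i, c in enumerate(r):
        out[i] += c
    return out

def pmul(p, r):
    out = [0] * (len(p) + len(r) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(r):
            out[i + j] += a * b
    return out

def pscale(p, c):
    return [c * a for a in p]

def pder(p):
    return [i * a for i, a in enumerate(p)][1:] or [0]

def ptrim(p):
    while len(p) > 1 and p[-1] == 0:
        p = p[:-1]
    return p

# Data for Q(z) = z^3, so q = Q'(1) = 3.
q = 3
one_minus_Q = [1, 0, 0, -1]
one_minus_z = [1, -1]
A1 = [1, 1, 1]   # (1-Q)/(1-z)
A2 = [3, 3]      # (q-Q')/(1-z)
A3 = [2, 1]      # q/(1-z) - (1-Q)/(1-z)^2

# P_n from (4.9) and R_n from the recursion; check (4.10) at each step.
P = one_minus_Q
R = [1]
for n in range(1, 6):
    assert all(c >= 0 for c in R)              # nonnegative coefficients
    factor = one_minus_Q
    for _ in range(n - 1):
        factor = pmul(factor, one_minus_z)
    assert ptrim(pmul(factor, R)) == ptrim(P)  # factorization (4.10)
    P = padd(pmul(pder(P), one_minus_Q), pscale(P, n * q))
    R = padd(pmul(A1, pder(R)), padd(pmul(A2, R), pscale(pmul(A3, R), n - 1)))
```

For instance, the loop produces R_{2} = 3 + 3z and R_{3} = 18 + 30z + 15z², both with nonnegative coefficients, as the lemma predicts.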

Since 0 < Ψ(u) < 1 and 0 < Q(Ψ(u)) < 1 for all u ∈ (0, 1), it follows from (4.7) and (4.10) that Ψ^{(n)}(u) ≥ 0 for all n ∈ N and all u ∈ (0, 1), hence also for u ∈ [0, 1]. By a classical result of Bernstein (see [Fe71], Section VII.2), the power series

Σ_{k=1}^{∞} Ψ_{k} u^{k}, where Ψ_{k} = (1/k!) Ψ^{(k)}(0) ≥ 0, (4.11)

converges absolutely and uniformly for |u| ≤ 1, and defines an analytic continuation of Ψ to the unit disk. Moreover, if R_{1} ≥ 1 denotes the radius of convergence of the series (4.11), it is well-known (see for instance [Ru87], exercise 16.1) that the analytic function defined by (4.11) has a singularity at u = R_{1}. Since Ψ(u) → ∞ as u ↗ R, it follows that R = R_{1}. This concludes the proof.
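For a concrete check of the nonnegativity Ψ_{k} ≥ 0, one can generate the Taylor coefficients directly from the differential relation (4.8) by matching powers of u. The sketch below (ours, not from the paper) does this for Q(z) = z², where q = 2 and in fact Ψ(u) = u/(2−u), so Ψ_{k} = 2^{−k}:

```python
def psi_coeffs(K, q=2.0):
    """Taylor coefficients Psi_1..Psi_K of Psi for Q(z) = z^2, obtained by
    matching powers of u in q*(1-u)*Psi'(u) = 1 - Psi(u)^2, i.e. (4.8)."""
    c = [0.0, 1.0 / q]  # c[k] = Psi_k; Psi_0 = 0 and q*Psi_1 = 1 - Q(0) = 1
    for k in range(1, K):
        conv = sum(c[j] * c[k - j] for j in range(1, k))  # [u^k] of Psi^2
        # order u^k: q*((k+1)*Psi_{k+1} - k*Psi_k) = -conv
        c.append((q * k * c[k] - conv) / (q * (k + 1)))
    return c[1:]

coeffs = psi_coeffs(12)
assert all(ck >= 0 for ck in coeffs)                           # Lemma 4.5
assert all(abs(ck - 0.5 ** (k + 1)) < 1e-12
           for k, ck in enumerate(coeffs))                     # Psi_k = 2^{-k}
```

The same order-by-order recursion works for any polynomial Q, with the convolution replaced by the coefficients of Q(Ψ(u)).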

Example. To conclude this study of the mappings Φ and Ψ, we give an example of a nonlinearity Q for which these functions can be computed in closed form. Let Q(z) = (1−a)z + az^{2}, where a ∈ [0, 1]. The value a = 1 corresponds to the coarsening equation (1.3), while a = 0 is a particular case of a model studied in [CaP00]. Then q = 1 + a = 1/κ, R = 1 + 1/a, and

φ(z) = (1/(1+a)) log((1+az)/(1−z)), ψ(w) = (1−e^{−qw})/(1+a e^{−qw}).

The auxiliary functions Φ, Ψ are:

Φ(z) = (1+a)z/(1+az), Ψ(u) = u/(1+a−au).

We are now ready to state and prove the main result of this section.

Theorem 4.6 (Steady states of (3.3))

Fix θ > 0 and let η_{θ}^{∗} : [1,∞) → R be the solution of (4.3) with β = θ/q. Then

a) η_{θ}^{∗} ∈ P if and only if 0 < θ ≤ 1.

b) If θ ∈ (0, 1], η_{θ}^{∗} ∈ P is positive and strictly decreasing, so that y η_{θ}^{∗}(y) → 0 as y → ∞.

c) If 0 < θ < 1, then

lim_{y→∞} y^{1+θ} η_{θ}^{∗}(y) = θ e^{θγ_{E}} / (κ Γ(1−θ)), (4.12)

where Γ is the Gamma function and γ_{E} = −Γ′(1) ≈ 0.577216 is Euler's constant.

d) If θ = 1, then

∫_{1}^{∞} y η_{1}^{∗}(y) dy = e^{γ_{E}}/κ. (4.13)

Moreover, if deg Q > 1, there exists λ > 0 such that

lim_{y→∞} (log η_{1}^{∗}(y))/y = −λ. (4.14)

For Q(z) = z we have

lim_{y→∞} (log η_{1}^{∗}(y))/(y log y) = −1. (4.15)

Remark. It follows from Theorem 4.6 and Proposition 4.3 that (3.3) has a unique steady state η_{1}^{∗} ∈ P such that ∫_{1}^{∞} y η_{1}^{∗}(y) dy < ∞.

Proof. We first show that η_{θ}^{∗} ∈ P if 0 < θ ≤ 1. According to Proposition 4.3, it is sufficient to prove that there exists an element of P (still denoted by η_{θ}^{∗}) such that N(η_{θ}^{∗}) = (θ/q) w^{∗}. Since N^{−1} = F^{−1} ∘ ψ ∘ F and ψ(w) = Ψ(1−e^{−qw}) by (4.6), this relation is equivalent to

η̂_{θ}^{∗} = Ψ(1 − e^{−θŵ^{∗}}), (4.16)

where η̂_{θ}^{∗} = Fη_{θ}^{∗} and ŵ^{∗} = Fw^{∗}. In view of (4.2),

ŵ^{∗}(ξ) = ∫_{1}^{∞} (e^{−iξy}/y) dy = E_{1}(iξ), (4.17)

where E_{1} is the exponential integral, see [AS72]. It is well-known that

E_{1}(z) = −log z − γ_{E} + χ(z) for |arg z| < π, (4.18)

where χ : C → C is an entire function with χ(0) = 0 and χ′(0) = 1. Thus, ŵ^{∗} is analytic in the interior of L^{−}, where L^{−} = {ξ ∈ C | Im ξ ≤ 0}. Moreover, Re(ŵ^{∗}(ξ)) → ∞ as ξ → 0 within L^{−}.

In Appendix A, we prove that |1 − e^{−θŵ^{∗}(ξ)}| < 1 for all ξ ∈ L^{−} \ {0} and all θ ∈ (0, 1], see also Figure A.1. From Lemma 4.5, we also know that Ψ is analytic in the disk of radius R > 1 centered at the origin. Therefore, the map η̂_{θ}^{∗} defined by (4.16) is continuous over L^{−} (with η̂_{θ}^{∗}(0) = 1) and analytic in the interior of L^{−}. In addition, since |Ψ(u)| ≤ |u| whenever |u| ≤ 1, we have the bound

|η̂_{θ}^{∗}(ξ)| ≤ |1 − e^{−θŵ^{∗}(ξ)}| ≤ 2θ |ŵ^{∗}(ξ)| for ξ ∈ L^{−} \ {0}.

In particular, |η̂_{θ}^{∗}(ξ)| = O(e^{Im ξ}) as Im ξ → −∞. By the Paley-Wiener Theorem (see for instance [Ru87]), we conclude that η_{θ}^{∗} = F^{−1} η̂_{θ}^{∗} ∈ L^{2}((1,∞)).

To prove that η_{θ}^{∗} is nonnegative, we argue as in [CaP92]. Consider the Laplace transform η̃_{θ}^{∗} = Lη_{θ}^{∗}, which satisfies η̃_{θ}^{∗}(p) = η̂_{θ}^{∗}(−ip). As is well-known (see [Fe71], Section XIII.4), positivity of η_{θ}^{∗} is equivalent to complete monotonicity of η̃_{θ}^{∗}, namely (−1)^{k} η̃_{θ}^{∗(k)}(p) ≥ 0 for all k ∈ N and p > 0. Recall that

η̃_{θ}^{∗}(p) = Ψ(1 − e^{−θw̃^{∗}(p)}) = Ψ(1 − e^{−θE_{1}(p)}) for p > 0. (4.19)

We apply Lemma 4.7 below with

f_{1} : (0, 1) → R, u ↦ Ψ(1−u); and g_{1} : (0,∞) → (0, 1), p ↦ e^{−θE_{1}(p)}.

By Lemma 4.5, f_{1} is completely monotone, thus it remains to show that g_{1}′ is completely monotone. Observe that g_{1}′ = f_{2} ∘ g_{2}, where f_{2} : R → R is defined by f_{2}(w) = θ e^{−w} and g_{2} : (0,∞) → R by

g_{2}(p) = θ E_{1}(p) − log(−E_{1}′(p)) = θ E_{1}(p) + p + log p.