Stochastic Interacting System for MK Optimal Mass Transport Problem



HAL Id: hal-03170065

https://hal.archives-ouvertes.fr/hal-03170065

Preprint submitted on 15 Mar 2021


To cite this version:

Noureddine Igbida. Stochastic Interacting System for MK Optimal Mass Transport Problem. 2013.

hal-03170065


Stochastic Interacting System for MK Optimal Mass Transport Problem

Noureddine Igbida March 4, 2013

Abstract

We introduce and study a stochastic model that we can associate with the Monge-Kantorovich (MK) optimal mass transport problem. It consists in a random process evolving over time which gives an interacting particle system describing the random optimal mass transportation.

We treat the problem as a set of particles moving according to simple rules in a network of positions. During a time step, we imagine a random clock placed at each position that rings to manage the transportation by making the particles spread, if necessary, to random positions in order to fulfill a required constraint. Starting for instance from zero, our dynamic provides Markov processes modeling a random evolution of the transportation. The main feature of our random dynamic is to achieve, in the long run, good price values at each site and, in turn, an optimal transportation of the mass. We prove that the continuum limit, that is the limit of the rescaled stochastic process, provides a deterministic dynamical description of how to reach the optimal solution for both the MK-problem and its dual formulation (DMK-problem). It corresponds to a system of evolution equations whose solutions converge, for large time, to the solutions of the MK-problem and the DMK-problem.

1 Introduction

The optimal mass transport problem considers how to move one distribution of mass to another one as efficiently as possible with respect to a cost function c(x, y) specifying the transportation tariff per unit mass. This is an intensively studied topic; we shall not mention all contributions here but refer to [17, 18], [1] and [7] for more details.

Let ν1 and ν2 be two nonnegative Radon measures with disjoint supports Ω1 and Ω2, respectively, satisfying the balance condition ν1(IR^N) = ν2(IR^N). The Monge-Kantorovich problem (MK-problem for short) of optimal mass transportation consists in determining a nonnegative Radon measure µ with respective marginals ν1 and ν2 that minimizes the Monge-Kantorovich cost

∫∫ c(x, y) dµ(x, y),

where the cost function c : IR^N × IR^N → IR⁺ is a given lower semi-continuous function. We can imagine that ν1 is a density of material we want to transport to a destination of capacities measured by ν2, and c(x, y) is the price we have to pay for each unit of material transported from x ∈ Ω1 to y ∈ Ω2. An arbitrary transport plan µ(x, y) gives the quantity of material we load from x ∈ Ω1 to the destination y ∈ Ω2. The Monge-Kantorovich problem aims to minimize the total cost among all the admissible transport plans µ(x, y) with respective marginals ν1 and ν2.

This work has been partially supported by the Spanish MEC and FEDER, project MTM2008-03176.

Institut de recherche XLIM-DMI, UMR-CNRS 7252, Faculté des Sciences et Techniques, Université de Limoges, 123, Avenue Albert Thomas, 87060 Limoges, France. Email: noureddine.igbida@unilim.fr

To recall the dual formulation, let us introduce u1(x) (resp. u2(y)), an arbitrary price of loading (resp. unloading) one unit of material at position x ∈ Ω1 (resp. at destination y ∈ Ω2).

The quantity

∫ u1 dν1 + ∫ u2 dν2    (1)

gives the total price we need to pay for the transport of ν1 into ν2, with the price distribution u1 χ_{Ω1} + u2 χ_{Ω2}. The new problem now asks how to arrange the price distribution in such a way that the total price is (almost) as much as we would have been ready to pay by minimizing the Monge-Kantorovich cost. Of course, to be profitable, we can check very easily that the prices need to fulfill the constraint

u1(x) + u2(y) ≤ c(x, y), for any (x, y) ∈ Ω1 × Ω2.    (2)

More precisely, in the new problem we seek a couple (u1, u2) satisfying the constraint (2) that maximizes the quantity (1). This is the so-called dual Monge-Kantorovich problem (DMK-problem for short). The Kantorovich duality says that we can find a couple of prices (u1, u2) and a transport plan such that we pay the same price in either formulation. Moreover, this price will be the optimal one for both problems, the MK-problem and the DMK-problem.

There is an extensive literature on the MK-problem and the DMK-problem. They were introduced by Kantorovich ([11]) as a relaxed formulation of the optimal mass transport problem, whose study was initiated by Monge himself ([12]). Several other authors continued his work (we mention the bibliographical notes [17, 18], [1] and [7] for more details). The optimal mass transportation problem has become a famous optimization problem, with applications appearing in economics, meteorology, astrophysics, probability, and image analysis.

Our main interest here is to give a new interpretation of the optimal mass transportation problem. We introduce a random process evolving over time which gives a stochastic interacting system associated with the MK-problem and the DMK-problem. Roughly speaking, we present a dynamical random way to manage the price of loading and/or unloading the mass, as well as a dynamical random way to transport the mass between ν1 and ν2, so as to converge towards the optimal solutions, which correspond to the solutions of the MK-problem and the DMK-problem, respectively.

In our method, we begin by transforming the problem into a discrete one. By using a small rescaling parameter ε, we redistribute the total mass ν1 (resp. ν2) over a discrete set D1 (resp. D2) of positions, that we call sites. We adapt the cost function to the new configuration of the problem by rescaling c into a new one ĉ defined on D1 × D2. This gives rise to a discrete optimal transport problem of how to move the atomic distribution of mass ν1^ε concentrated in D1 to ν2^ε, which is concentrated in D2, with respect to the cost ĉ. We treat the price function as a set of particles moving according to simple rules in the network of positions in D := D1 ∪ D2. We define the space of configurations S to consist of functions η : D → ZZ such that η(x) − η(y) ≤ ĉ(x, y) for any (x, y) ∈ D1 × D2. We start with an arbitrary configuration (for instance η ≡ 0). The configuration η attempts to increase (resp. decrease) at a site x ∈ D1 (resp. y ∈ D2) by one unit when the random clock rings at the position x ∈ D1 (resp. y ∈ D2). The increase (resp. decrease) takes place if the new configuration η + δx (resp. η − δy) remains in S. Otherwise, the increase (resp. decrease) is replaced by an increase (resp. decrease) at a position which is randomly selected among the allowable positions in D2 (resp. D1). In this way we construct a stochastic process (η(t, x), t ≥ 0, x ∈ D) to describe the random evolution of the price function.

Moreover, we construct a stochastic process (κ(t, x, y), t ≥ 0, (x, y) ∈ D1 × D2) to describe the random evolution of the transportation between D1 and D2. Then, we derive a macroscopic system of equations for the limits

u = lim_{ε→0} uε  and  µ = lim_{ε→0} µε,

where uε and µε are the expectations of random functions obtained by rescaling η and κ, respectively. Roughly speaking, the rescaling of η and κ we are using aims to cover, on one hand, the control of the rate of arrival of new events, which consists in a faster and faster increase/decrease of the value of the random price. On the other hand, it aims to cover some kind of homogenization that brings the discrete problem back to the continuous one.

The macroscopic system of equations we obtain as a continuum limit is a new evolution equation of nonlocal type, where the unknown is the couple (u, µ) satisfying a system of coupled equations. As t → ∞, u(t) and µ(t) converge to the solutions of the DMK and MK problems, respectively.

The paper is organized as follows. In Section 2, we recall some basic facts on the optimal mass transport problem and set our assumptions on the measures ν1 and ν2. For technical reasons, we assume that ν1 and ν2 satisfy additional assumptions which are not true in general, but which are fulfilled in many concrete situations, like atomic measures, measures which are absolutely continuous with respect to the Lebesgue measure, and many others. Then, we approximate the optimal mass transport problem by a discrete one, where the transportation problem basically aims to find the best way to fulfill the demand of N1 demand points using the capacities of N2 supply points. We denote by D1 the set of demand points and by D2 the set of supply points. In Section 3, we set our main results. In a first part, we introduce our stochastic model for the discrete optimal mass transportation problem. The random dynamics that keeps the configuration allowable is managed by a probability distribution of the price. We define a couple of Markov processes η(t) and κ(t, x, y) to describe the random price of loading and/or unloading material and the random plan distribution of the material, respectively, for any time t ≥ 0. Here η χ_{D1} is the random loading price and −η χ_{D2} is the random unloading price. The stability condition requires, for each time t ≥ 0, that η(t, x) − η(t, y) ≤ ĉ(x, y) for any (x, y) ∈ D1 × D2, where ĉ is a rescaled cost function. The stochastic model gives a random description of the discrete approximation. The connection with the original problem appears in the study of the so-called fluid limit. This is the aim of the second part of Section 3. The limit gives two equivalent evolution problems: one evolution equation governed by a sub-differential operator (for which the large time behavior gives a solution of the DMK-problem), and one system of two nonlocal equations (for which the large time behavior gives a solution of the MK-problem) describing the exchanges between the positions x ∈ Ω1 and the positions y ∈ Ω2. The rest of the paper is devoted to the proofs. In Section 4, we give some preparatory results. In Section 5, we prove the convergence to the evolution DMK-problem. In Section 6, we prove the convergence to the evolution MK-problem. In the last section, we show that the two problems are equivalent and that the large time behavior provides a solution of the MK-problem as well as a solution of the DMK-problem.


2 Preliminaries, assumptions and functional setting

Let ν1 and ν2 be two nonnegative Radon measures with disjoint supports Ω1 and Ω2, respectively, satisfying the balance condition

ν1(Ω1) = ν2(Ω2).    (3)

We fix c : IR^N × IR^N → [0, +∞] a lower semi-continuous cost function.

2.1 Preliminaries

The pushforward measure of µ by T, a map from a measure space (X, µ) to an arbitrary space Y, is denoted by T#µ and is given explicitly by

(T#µ)[B] = µ[T^{-1}(B)].

We will use the usual convention of denoting by πx, πy : IR^N × IR^N → IR^N the projections

πx(a, b) := a  and  πy(a, b) := b,  for any (a, b) ∈ IR^N × IR^N.

Given a Radon measure µ in Ω × Ω, its marginals are defined by

projx(µ) := πx#µ  and  projy(µ) := πy#µ.

Monge-Kantorovich problem (MK): Let us denote by

Π(ν1, ν2) := { µ ∈ M⁺_b(Ω × Ω) : projx(µ) = ν1 and projy(µ) = ν2 },

and consider the functional

K(µ) := ∫ c(x, y) dµ(x, y).    (4)

The Monge-Kantorovich problem is to find a measure µ∗ ∈ Π(ν1, ν2) which minimizes the cost functional K(µ), that is, to find a solution to the minimization problem

K(µ∗) = min{ K(µ) : µ ∈ Π(ν1, ν2) }.    (5)

The elements µ ∈ Π(ν1, ν2) are called transport plans between ν1 and ν2, and µ∗ satisfying (5) is called an optimal transport plan between ν1 and ν2.

Proposition (see [1, 14] and the references therein) Under the above assumptions, there exists an optimal transport plan µ∗ ∈ Π(ν1, ν2) solving MK.

Monge-Kantorovich dual problem (DMK-problem): As we said in the introduction, the dual problem is to find (u, v) that maximize

D(u, v) := ∫ u dν1 + ∫ v dν2    (6)

in the set

Φ_c(ν1, ν2) := { (u, v) ∈ L¹(Ω, dν1) × L¹(Ω, dν2) : u(x) + v(y) ≤ c(x, y) }.


Theorem 1 (see [2, Theorem 3.1] or [17, Theorem 1.3]) We have

min{ K(µ) : µ ∈ Π(ν1, ν2) } = sup{ D(u, v) : (u, v) ∈ Φ_c(ν1, ν2) }.    (7)

Furthermore, it does not change the value of the supremum in the right-hand side of (7) if one restricts the definition of Φ_c(ν1, ν2) to those functions (u, v) which are bounded and continuous.

An interesting particular situation, which appears in many applications, corresponds to the case where the measures ν1 and ν2 are supported respectively on a finite number of points. That is,

Ω1 = { x1, x2, ..., xN1 } ⊂ IR^N  and  Ω2 = { y1, y2, ..., yN2 } ⊂ IR^N,

with prescribed masses f1(x1), f1(x2), ..., f1(xN1) and f2(y1), f2(y2), ..., f2(yN2), respectively; the measures νk, for k = 1, 2, are given by

νk = ∑_{x∈Ωk} fk(x) δx,

where δx denotes the Dirac mass concentrated at x. In this particular case, the transportation problem basically aims to find the best way to fulfill the demand of N1 demand points using the capacities of N2 supply points. In this case, the set of transport plans reads

Π(f1, f2) := { µ : Ω1 × Ω2 → IR : ∑_{y∈Ω2} µ(x, y) = f1(x) and ∑_{x∈Ω1} µ(x, y) = f2(y) },

and the Kantorovich functional becomes

K(µ) := ∑_{(x,y)∈Ω1×Ω2} c(x, y) µ(x, y).

The dual problem reads: how to achieve the maximum of

D(u1, u2) := ∑_{x∈Ω1} u1(x) f1(x) + ∑_{x∈Ω2} u2(x) f2(x)

among the couples (u1, u2) living in the set

Φ_c(ν1, ν2) := { (u, v) : Ω1 × Ω2 → IR × IR : u(x) + v(y) ≤ c(x, y), for any (x, y) ∈ Ω1 × Ω2 }.
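To make this finite formulation concrete, the following is a minimal numerical sketch (not taken from the paper) that solves the discrete MK-problem as a linear program; the helper name discrete_mk, the example data and the use of scipy.optimize.linprog are our own illustrative choices.

```python
import numpy as np
from scipy.optimize import linprog

# Sketch: solve the discrete MK-problem
#   min sum_{i,j} c[i,j] * mu[i,j]
#   s.t. sum_j mu[i,j] = f1[i],  sum_i mu[i,j] = f2[j],  mu >= 0.
def discrete_mk(c, f1, f2):
    n1, n2 = c.shape
    A_eq = np.zeros((n1 + n2, n1 * n2))
    for i in range(n1):
        A_eq[i, i * n2:(i + 1) * n2] = 1.0   # marginal on the demand points: sum_j mu[i,j] = f1[i]
    for j in range(n2):
        A_eq[n1 + j, j::n2] = 1.0            # marginal on the supply points: sum_i mu[i,j] = f2[j]
    b_eq = np.concatenate([f1, f2])
    res = linprog(c.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
    return res.x.reshape(n1, n2), res.fun

# Tiny balanced example with two demand and two supply points (illustrative data).
c = np.array([[1.0, 2.0],
              [3.0, 1.0]])
f1 = np.array([0.5, 0.5])
f2 = np.array([0.5, 0.5])
plan, cost = discrete_mk(c, f1, f2)
print(plan, cost)
```

By Theorem 1, the optimal value returned here also equals the supremum of D(u1, u2) over Φ_c(ν1, ν2), so the same small example can be used to check a candidate price couple.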

2.2 Assumptions and functional setting

Now, let us consider λ a nonnegative Radon measure in IR^N, concentrated in a bounded domain, such that

νk ≪ λ, for k = 1, 2.    (8)

We denote by f1 and f2 the densities of ν1 and ν2, respectively. That is,

νk = fk λ, for k = 1, 2.    (9)


Let us denote

Xp := L^p(Ω, dλ), where Ω := Ω1 ∪ Ω2.

The space X2 is a Hilbert space when equipped with the inner product

⟨η, ξ⟩ = ∫ η ξ dλ

and the norm

‖η‖ = ( ∫ η² dλ )^{1/2}.

We consider the convex set

K := { u ∈ X2 ; u(x) − u(y) ≤ c(x, y), Λ-a.e. (x, y) ∈ Ω1 × Ω2 },

where Λ is the product measure concentrated in Ω1 × Ω2, given by

Λ = λ ⊗ λ ⌊ (Ω1 × Ω2).

We set

f := f1 χ_{Ω1} − f2 χ_{Ω2}.

Then, it is not difficult to see that the couple (u1, u2) is a solution of the DMK-problem if and only if, setting

u := u1 χ_{Ω1} − u2 χ_{Ω2},

we have

f ∈ ∂II_K(u),    (10)

where ∂II_K denotes the subdifferential of II_K in X2. Here II_K : X2 → [0, ∞] is defined by

II_K(z) = 0 if z ∈ K,  +∞ otherwise.

In particular, this implies that the DMK-problem and the MK-problem are closely connected to the nonlinear dynamic (cf. [4]):

du(t)/dt + ∂II_K(u(t)) ∋ f, for t ≥ 0,
u(0) = 0.    (11)
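To illustrate the structure of (11) in a finite setting, here is a minimal sketch under our own assumptions (finitely many sites, Euclidean norm): since II_K is the indicator of the convex set K, one backward Euler step of (11) reduces to a projection, u_{n+1} = P_K(u_n + Δt f). The routine project_K below approximates P_K by Dykstra's alternating projections onto the half-spaces defining K; the names project_K and evolve are illustrative, and this is a standard discretization of such subdifferential inclusions, not the construction used in the paper.

```python
import numpy as np

# Sites 0..n1-1 play the role of Omega_1 (demand), sites n1..n1+n2-1 the role of Omega_2 (supply).
# K = { u : u[i] - u[n1 + j] <= c[i, j] for all i, j }.
def project_K(v, c, iters=200):
    n1, n2 = c.shape
    u = v.astype(float).copy()
    corr = np.zeros((n1, n2, v.size))        # Dykstra correction term, one per constraint
    for _ in range(iters):
        for i in range(n1):
            for j in range(n2):
                w = u + corr[i, j]
                viol = w[i] - w[n1 + j] - c[i, j]
                p = w.copy()
                if viol > 0:                  # project onto the half-space u[i] - u[n1+j] <= c[i,j]
                    p[i] -= viol / 2
                    p[n1 + j] += viol / 2
                corr[i, j] = w - p
                u = p
    return u

# Backward Euler for du/dt + dII_K(u) ∋ f, u(0) = 0:  u_{n+1} = P_K(u_n + dt * f).
def evolve(f, c, dt=0.1, steps=100):
    u = np.zeros_like(f, dtype=float)
    for _ in range(steps):
        u = project_K(u + dt * f, c)
    return u
```

For large times such a trajectory approaches points ū with f ∈ ∂II_K(ū), which is the link the paper exploits between (11) and the solutions of the DMK-problem.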

For a given ε > 0, we consider partitions (I1^i)_{i∈N1^ε} and (I2^j)_{j∈N2^ε} of Ω1 and Ω2, respectively, such that

I_p^i ∩ I_q^j = ∅, for any (i, p) ≠ (j, q).


We assume that there exists νε > 0 such that

λ(I1^i) = λ(I2^j) = νε, for any i ∈ N1^ε and j ∈ N2^ε,    (12)

and, for any h ∈ X2, as ε → 0,

∑_{i∈N1^ε} (1/νε) ∫_{I1^i} h dλ χ_{I1^i} + ∑_{j∈N2^ε} (1/νε) ∫_{I2^j} h dλ χ_{I2^j} → h, in X2.    (13)

We consider the sets of arbitrary points

D1^ε = { xi ; xi ∈ I1^i ∩ Ω1, i ∈ N1^ε }  and  D2^ε = { yj ; yj ∈ I2^j ∩ Ω2, j ∈ N2^ε }.

We rescale the cost function and introduce

ĉ(xi, yj) = [| (Pε/νε²) ∫∫_{I1^i×I2^j} c(x, y) dΛ(x, y) |], for any i ∈ N1^ε and j ∈ N2^ε,

where [|A|] denotes the integer part of the real number A, and Pε is a given integer parameter satisfying

lim_{ε→0} Pε = ∞.    (14)

We see that the choice of the sites xi and yj is arbitrary; however, the value of the cost function does not depend on that choice.

At last, we introduce the functions f̂1 : D1^ε → IR⁺ (resp. f̂2 : D2^ε → IR⁺) defined by

f̂1(xi) = (1/νε) ∫_{I1^i} f1 dλ, for i ∈ N1^ε  (resp. f̂2(yj) = (1/νε) ∫_{I2^j} f2 dλ, for j ∈ N2^ε).

Remark 1 Our assumptions (12) and (13) are not true in general; they depend on the measures ν1 and ν2. However, they remain true in some concrete situations, such as the following.

1. Assume that ν1 and ν2 are two atomic measures; that is,

ν1 = a1 δx1 + ... + ap δxp  and  ν2 = b1 δy1 + ... + bq δyq,

for given x1, ..., xp, y1, ..., yq ∈ IR^N and a1, ..., ap, b1, ..., bq ∈ IR. In this case, we see that we can take

• λ = δx1 + ... + δxp + δy1 + ... + δyq.

• Ω1 = D1^ε = { x1, ..., xp } and Ω2 = D2^ε = { y1, ..., yq }.

• f̂1(xi) = f1(xi) = ai, for any i = 1, ..., p, and f̂2(yj) = f2(yj) = bj, for any j = 1, ..., q.

• ĉ(x, y) = [| Pε c(x, y) |], for any (x, y) ∈ D1^ε × D2^ε.


Indeed, in this case there exists ε0 > 0 such that, for any ε < ε0, taking

I1^i = B(xi, ε) and I2^j = B(yj, ε), for any i = 1, ..., N1 and j = 1, ..., N2,

we have νε = 1.

2. Another concrete situation is the case where λ corresponds to the Lebesgue measure in IR^N, that is, λ = L^N. In this case, for any ε > 0, one can construct the partitions (I1^i)_{i∈N1^ε} and (I2^j)_{j∈N2^ε} such that νε = ε^N.
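As a concrete illustration of this Lebesgue case, the following sketch assembles the partition, the sites and the rescaled integer cost ĉ in dimension N = 1, under our own simplifying assumptions (Ω1 and Ω2 are disjoint intervals whose lengths are multiples of ε, and c is a Python callable); discretize and its parameters are illustrative names.

```python
import numpy as np

def discretize(a1, b1, a2, b2, eps, c, P_eps, n_quad=20):
    # One cell of length eps per site; the cell midpoints serve as the arbitrary sites x_i, y_j.
    x_sites = np.arange(a1 + eps / 2, b1, eps)
    y_sites = np.arange(a2 + eps / 2, b2, eps)
    nu_eps = eps                               # lambda(I_1^i) = lambda(I_2^j) = eps when N = 1
    c_hat = np.empty((x_sites.size, y_sites.size), dtype=int)
    for i, xi in enumerate(x_sites):
        for j, yj in enumerate(y_sites):
            xs = np.linspace(xi - eps / 2, xi + eps / 2, n_quad)
            ys = np.linspace(yj - eps / 2, yj + eps / 2, n_quad)
            avg = np.mean([[c(x, y) for y in ys] for x in xs])  # cell average of c over I_1^i x I_2^j
            c_hat[i, j] = int(P_eps * avg)     # integer part of (P_eps / nu_eps^2) * integral of c
    return x_sites, y_sites, nu_eps, c_hat

# Example: Omega_1 = (0, 1), Omega_2 = (2, 3), c(x, y) = |x - y|, eps = 0.25, P_eps = 100.
x_sites, y_sites, nu_eps, c_hat = discretize(0.0, 1.0, 2.0, 3.0, 0.25, lambda x, y: abs(x - y), 100)
```

The masses f̂1 and f̂2 would be obtained in the same way, as cell averages of the densities f1 and f2.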

3 Main results

3.1 The stochastic model

Thanks to (12), f̂1 and f̂2 satisfy the discrete balance condition

∑_{i∈N1^ε} f̂1(xi) = ∑_{j∈N2^ε} f̂2(yj).

So, it is possible to consider the Monge-Kantorovich optimal mass transportation associated with the atomic measures

ν1^ε := ∑_{x∈D1^ε} f̂1(x) δx  and  ν2^ε := ∑_{y∈D2^ε} f̂2(y) δy,

with respect to the cost function ĉ.

Now the MK-problem and the DMK-problem aim to find the best way to fulfill the demand of the demand points x ∈ D1^ε using the capacities of the supply points y ∈ D2^ε. Our aim now is to give the stochastic model that we can associate with the optimal mass transportation with respect to the cost ĉ. Here ε > 0 and Pε are fixed.

Throughout this section, we omit the subscript ε. The points of D := D1 ∪ D2 will be called sites. An (admissible) configuration is a mapping η : D → ZZ satisfying the constraint

η(x) − η(y) ≤ ĉ(x, y) for any (x, y) ∈ D1 × D2.

The state space is

S := { η : D → ZZ ; η is a configuration },

which is a subspace of the Hilbert space H := l²(D) equipped with the inner product

⟨ξ1, ξ2⟩ = ∑_{x∈D} ξ1(x) ξ2(x).


Let η ∈ S be a given configuration which aims to describe the distribution of the price over D1 ∪ D2 (not necessarily optimal for the DMK-problem). A right dynamic that could converge to the optimal price consists in increasing the price in D1 and decreasing it in D2. So, imagine we have a sequence of Poisson clocks at each site of D, and the value of η at a position x ∈ D1 (resp. y ∈ D2) increases (resp. decreases) by one unit when the clock rings at x ∈ D1 (resp. y ∈ D2). It is clear that this may produce a non-admissible configuration. This is the situation, for instance, if there exists z ∈ D2 (resp. z ∈ D1) such that η(x) − η(z) = ĉ(x, z) (resp. η(z) − η(y) = ĉ(z, y)). In this situation, a natural dynamic that could keep the new configuration admissible is to increase (resp. decrease) in turn the value of η at the position z ∈ D2 (resp. z ∈ D1). In general, such a z is not unique, so we need to select randomly among the allowable positions. To define the probability to distribute the price over the allowable positions, for any (x, y) ∈ D1 × D2, we consider the subsets

λ1(x, η) = { z ∈ D2 ; η(x) − η(z) = ĉ(x, z) }  if x ∈ D1,   ∅  if x ∈ D2,

and

λ2(y, η) = { z ∈ D1 ; η(z) − η(y) = ĉ(z, y) }  if y ∈ D2,   ∅  if y ∈ D1.

Then, for any (x, y) ∈ D × D, we define

p1(η, x, y) = 1/#λ1(x, η)  if #λ1(x, η) ≠ 0 and y ∈ λ1(x, η);   1  if #λ1(x, η) = 0 and x = y;   0  otherwise,

and

p2(η, y, x) = 1/#λ2(y, η)  if #λ2(y, η) ≠ 0 and x ∈ λ2(y, η);   1  if #λ2(y, η) = 0 and y = x;   0  otherwise.

See that, for any x ∈ D1 (resp. y ∈ D2),

∑_{y∈D} p1(η, x, y) = 1  (resp. ∑_{x∈D} p2(η, y, x) = 1).    (15)

In other words, p1(η, x, y) (resp. p2(η, y, x)) is the probability that a proposed action to increase (resp. decrease) the price at the position x ∈ D1 (resp. y ∈ D2) will end up as an increase (resp. decrease) of the price at the position y ∈ D2 (resp. x ∈ D1).

Now, we set

B(S) := { F : S → IR ; F bounded and measurable },


and we define the linear operator A on B(S) by

A F(η) = ∑_{x,y∈D} p1(η, x, y) f̂1(x) (F(η + Ty) − F(η)) + ∑_{x,y∈D} p2(η, x, y) f̂2(x) (F(η − Ty) − F(η)),  ∀ η ∈ S,    (16)

where, for any y ∈ D, Ty : D → IN is given by

Ty(x) = 1 if x = y,  0 otherwise.

The operator A is the infinitesimal generator of a continuous-time Markov process on S, that we denote by (η(t), t ≥ 0). One of the main features of A is the C0-semigroup T(t) : B(S) → B(S) that it generates. Indeed (cf. [6]), for any F ∈ B(S), we have

IE[F(η(t)) | η(0) = 0] = T(t)F(0), for any t ≥ 0,

and T(t)F is the solution of the evolution equation

(d/dt) T(t)F = A T(t)F = T(t) A F, for any t > 0.    (17)

That is,

IE[F(η(t)) | η(0) = 0] = e^{tA} F(0), for any t > 0.

Another feature of the operator A that we will use in this paper is the time-dependent martingale stochastic integral equation:

F(η(t), t) = ∫_0^t ( ∂F/∂s + A F )(η(s), s) ds + M(t),    (18)

for any F : S × (0,∞) → IR Lipschitz continuous in t such that F(η(0), 0) = 0. Here M is a martingale satisfying M(0) = 0.

While (η(t); t ≥ 0) is the Markov process which describes the random evolution of the price for the transportation of f̂1 into f̂2, we see that the quantity

κ(t, x, y) = p1(η(t), x, y) f̂1(x) + p2(η(t), y, x) f̂2(y)  for any (x, y) ∈ D1 × D2,  and 0 otherwise,

describes how the mass is distributed between the positions x ∈ D1 and the positions y ∈ D2. Again, (κ(t), t ≥ 0) is a Markov process, and moreover

Support(κ(t)) = { (x, y) ∈ D1 × D2 ; η(t, x) − η(t, y) = ĉ(x, y) }, for any t ≥ 0.

As we will see in the following sections, while η and its expectation contain all the information concerning the solution of the DMK-problem, the random plan κ and its expectation contain all the information concerning the optimal transport plan, i.e. the solution of MK.
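To fix ideas, here is a minimal simulation sketch of one trajectory of the price process η, following our reading of the definitions of p1 and p2 and of the generator A above: a site x ∈ D1 (resp. y ∈ D2) carries an exponential clock of rate f̂1(x) (resp. f̂2(y)), and a ring either changes the price at that site or is redirected, uniformly at random, to a site where the constraint is saturated. The function simulate and the data passed to it are illustrative, not part of the paper.

```python
import numpy as np
rng = np.random.default_rng(0)

def simulate(c_hat, f1_hat, f2_hat, t_max):
    n1, n2 = c_hat.shape
    eta1 = np.zeros(n1, dtype=int)               # eta restricted to D1 (loading prices)
    eta2 = np.zeros(n2, dtype=int)               # eta restricted to D2 (minus the unloading prices)
    rates = np.concatenate([f1_hat, f2_hat])
    total = rates.sum()
    t = 0.0
    while True:
        t += rng.exponential(1.0 / total)        # superposition of the Poisson clocks
        if t > t_max:
            return eta1, eta2
        site = rng.choice(n1 + n2, p=rates / total)
        if site < n1:                            # clock rings at x in D1: proposal to increase eta(x)
            x = site
            saturated = np.where(eta1[x] - eta2 == c_hat[x, :])[0]   # lambda_1(x, eta)
            if saturated.size == 0:
                eta1[x] += 1
            else:                                # redirect the increase to a random saturated y in D2
                eta2[rng.choice(saturated)] += 1
        else:                                    # clock rings at y in D2: proposal to decrease eta(y)
            y = site - n1
            saturated = np.where(eta1 - eta2[y] == c_hat[:, y])[0]   # lambda_2(y, eta)
            if saturated.size == 0:
                eta2[y] -= 1
            else:                                # redirect the decrease to a random saturated x in D1
                eta1[rng.choice(saturated)] -= 1

# Illustrative run in the atomic setting of Remark 1 (integer cost c_hat = [P_eps * c]).
eta1, eta2 = simulate(np.array([[2, 3], [3, 2]]), np.array([1.0, 1.0]), np.array([1.0, 1.0]), 50.0)
```

Recording, along such a run, the pairs (x, y) with η(t, x) − η(t, y) = ĉ(x, y) and the weights p1 f̂1 + p2 f̂2 gives the process κ(t), in line with the description of Support(κ(t)) above.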


3.2 Continuum limits

Our aim here is to let ε → 0 in the following rescaled Markov processes:

ηε(t) = (1/Pε) [ ∑_{i∈N1^ε} η(Pεt, xi) χ_{I1^i} + ∑_{j∈N2^ε} η(Pεt, yj) χ_{I2^j} ],  for any t ≥ 0,    (19)

and

κε(t) = (1/νε) ∑_{(i,j)∈N1^ε×N2^ε} κ(Pεt, xi, yj) χ_{I1^i×I2^j},  for any t ≥ 0.    (20)

Theorem 2 We have

lim_{ε→0} IE[ ∫ (ηε(t) − u(t))² dλ ] = 0, for any t ≥ 0,

where u is the unique solution of (11) in the sense that u ∈ W^{1,∞}_loc(0,∞; X2), u(0) = 0, and, for any t ≥ 0, u(t) ∈ K and

∫ ( f − (d/dt) u(t) ) ( u(t) − ξ ) dλ ≥ 0, for any ξ ∈ K.

In particular, by setting

vε(t) = IE[ηε(t)], for any t ≥ 0,

and using the Jensen inequality, we see that

∫ |vε(t) − u(t)| dλ ≤ ∫ IE[ |ηε(t) − u(t)| ] dλ ≤ IE[ ∫ |ηε(t) − u(t)| dλ ].

So, by using the Hölder inequality, we deduce in particular the following result.

Corollary 1 Under the assumptions of Theorem 2, we have

lim_{ε→0} ∫ |vε(t) − u(t)| dλ = 0, for any t ≥ 0.

For the connection with MK, we assume that c is continuous and we first introduce the transformation T : Mb(Ω1×Ω2) → Mb(Ω×Ω), defined, for any µ ∈ Mb(Ω1×Ω2), by

∫ ξ dT(µ) = ∫ (ξ(x, y) − ξ(y, x)) dµ(x, y), for any ξ ∈ C0(Ω×Ω).


Theorem 3 Let us denote by

µε(t) = IE[κε(t)], for any t ≥ 0,

and assume that (14) is fulfilled. There exists a subsequence, that we denote again by ε, such that

µε → µ, in L^∞(0,∞; w−Mb(Ω1×Ω2)),

and µ is a solution of the following system:

πx#T(µ(t)) = ν1 − ν2 − (du(t)/dt) λ, for any t > 0,
∫ c dµ(t) ≤ ∫ ( f − (d/dt) u(t) ) u(t) dλ,    (21)

where u is the solution of (11) in X2.

Remark 2 Coming back to the case where ν1 and ν2 are atomic measures (see the first item of Remark 1), recall that in this situation νε = 1 and the assumption (13) is fulfilled. In this case, we do not need to rescale in space; a time rescaling is enough to give the stochastic model for the optimal mass transportation.

See that our approach gives in particular a deterministic dynamical optimal transportation problem. Actually, starting from 0, the evolution problem (21) provides a dynamic to build a solution of the MK-problem and the DMK-problem, as t → ∞. More precisely, we have

Theorem 4 Let u ∈ C([0,∞), X2) be such that u(t) ∈ K for any t ≥ 0. Then, u is a solution of (11) if and only if u ∈ W^{1,∞}(0,∞; X2), u(0) = 0 and there exists µ ∈ L^∞(0,∞; w−Mb(Ω1×Ω2)⁺) such that, for any t ≥ 0, we have (21).

Theorem 5 Let (u, µ) be a solution of (21). We have:

• As t → ∞,

u(t) → u∞, in X2-weak,

and the couple (u∞ χ_{Ω1}, −u∞ χ_{Ω2}) is a solution of the DMK-problem.

• There exists a subsequence, that we denote again by t → ∞, such that

µ(t) → µ∞, in Mb(Ω1×Ω2)-weak,    (22)

and µ∞ is a solution of the MK-problem.

Remark 3 Let (u, µ) be a solution of (21). As t → ∞, Theorem 5 implies the weak stabilization of u(t). In general, we do not know whether u(t) converges strongly in X2 or not. Using standard theory, we can also prove an ergodic convergence of u(t): there exists u∞ ∈ K such that f ∈ ∂II_K(u∞) and

(1/t) ∫_0^t u(s) ds → u∞, in X2.

Of course, the couple (u∞ χ_{Ω1}, −u∞ χ_{Ω2}) is a solution of the DMK-problem.


4 Preliminary results

Now, we set

v̂(t) = IE[η(t)]  and  κ̂(t) = IE[κ(t)], for any t ≥ 0.

We will use the same notation T for the transformation l²(D1×D2) → l²(D×D) defined by

T(κ)(x, y) = κ(x, y) − κ(y, x), for any κ ∈ l²(D1×D2).

We begin with the connection between (v̂(t))_{t≥0} and (κ̂(t))_{t≥0}; this connection is very useful for the rest of the paper.

Proposition 1 The couple (v̂, κ̂) satisfies the following system:

(dv̂/dt)(t, x) + ∑_{y∈D} T(κ̂(t))(x, y) = f̂(x), for any t ≥ 0 and x ∈ D,

∑_{x,y∈D} ĉ(x, y) κ̂(t, x, y) ≤ ∑_{x∈D} ( f̂(x) − (dv̂/dt)(t, x) ) v̂(t, x) + Ê(t), for any t ≥ 0,    (23)

where f̂ = f̂1 χ_{D1} − f̂2 χ_{D2}, and

Ê(t) := 2 M̂ IE[ ∑_{x∈D} |η(t, x) − v̂(t, x)| ],

with

M̂ := max{ |f̂(x)| ; x ∈ D }.

Proof: For a given x ∈ D, let us consider F(ξ) = ξ(x), for any ξ ∈ S. By definition of A, we have

A F(η(t)) = A η(t, x) = −∑_{y∈D} T(κ(t))(x, y) + f̂(x).

So, thanks to (17) and by using the fact that

T(κ̂(t)) = IE[T(κ(t))], for any t ≥ 0,    (24)

we get

(d/dt) v̂(t, x) + ∑_{y∈D} T(κ̂(t))(x, y) = f̂(x), for any t ≥ 0 and x ∈ D.    (25)

(25)

For the second inequality, recall thatκ(t, x, y) = 0 if and only ifp1(η(t), x, y) =p2(η(t), y, x) = 0, which is equivalent toη(t, x)−η(t, y)6= ˆc(x, y).So,

X

x,y∈D

ˆ

c(x, y)ˆκ(t, x, y) = IE

 X

x,y∈D

ˆ

c(x, y)κ(t, x, y)

(15)

= IE

 X

x,y∈D

(η(t, x)−η(t, y))κ(t, x, y)

= IE

 X

x,y∈D

η(t, x)T(κ(t))(x, y)

.

This implies that X

x,y∈D

ˆ

c(x, y)ˆκ(t, x, y) = IE

 X

x,y∈D

η(t, x)−v(t, x)ˆ

T(κ(t))(x, y)

+IE

 X

x,y∈D

ˆ

v(t, x)T(κ(t))(x, y)

= IE

 X

x,y∈D

η(t, x)−v(t, x)ˆ

T(κ(t))(x, y)

+ X

x,y∈D

ˆ

v(t, x)T(ˆκ(t))(x, y), where we use again (24). Thanks to (15), we see that

X

y∈D

|T(ˆκ(t)| ≤2 ˆM , for any (x, y)∈D×D; so that

X

x,y∈D

c(x, y)Tˆ (ˆκ(t)≤2 ˆM IE

"

X

x∈D

η(t, x)−v(t, x)ˆ

#

+ X

x,y∈D

ˆ

v(t, x)T(ˆκ(t))(x, y).

At last, multiplying the first equation of (23) and summing up over x ∈ D, the result of the proposition follows.

Now, in order to pass to the limit in the rescaled stochastic process, we introduce the following nonlinear dynamic in H:

(d/dt) û(t) + ∂II_K̂(û(t)) ∋ f̂, for any t ≥ 0,
û(0) = 0,    (26)

where ∂II_K̂ denotes the sub-differential of II_K̂ in H and

K̂ := { ξ̂ ∈ H ; ξ̂(x) − ξ̂(y) ≤ ĉ(x, y) for any (x, y) ∈ D1 × D2 }.

Since K̂ is a closed and convex subset of H, the dynamic (26) has a unique solution û (cf. [4]), in the sense that û ∈ W^{1,∞}_loc(0,∞; H), û(0) = 0 and, for any t ≥ 0, û(t) ∈ K̂ and

∑_{x∈D} ( f̂(x) − (d/dt) û(t, x) ) ( û(t, x) − ξ̂(x) ) ≥ 0, for any ξ̂ ∈ K̂.


Lemma 1 Under the assumptions of Proposition 2, for any ŵ ∈ K̂, we have

∑_{x,y∈D} p1(t, x, y) f̂1(x) (η(t, y) − ŵ(y)) ≤ ∑_{x∈D} f̂1(x) (η(t, x) − ŵ(x))    (27)

and

∑_{x,y∈D} p2(t, x, y) f̂2(x) (η(t, y) − ŵ(y)) ≥ ∑_{x∈D} f̂2(x) (η(t, x) − ŵ(x)).    (28)

Proof: We see that

∑_{x,y∈D} p1(t, x, y) f̂1(x) (η(t, y) − ŵ(y)) = I + ∑_{x,y∈D} p1(t, x, y) f̂1(x) (η(t, x) − ŵ(x)),

where

I = ∑_{x,y∈D} p1(t, x, y) f̂1(x) [ (ŵ(x) − ŵ(y)) − (η(t, x) − η(t, y)) ].

Since ∑_{y∈D} p1(t, x, y) = 1 for any x ∈ D and t > 0, it is clear that

∑_{x,y∈D} p1(t, x, y) f̂1(x) (η(t, x) − ŵ(x)) = ∑_{x∈D} f̂1(x) (η(t, x) − ŵ(x)).

Let us prove that I ≤ 0. Recall that p1(t, x, y) ≠ 0 if and only if (x, y) ∈ D1 × D2 and η(t, x) − η(t, y) = ĉ(x, y), so that

I = ∑_{x,y∈D} p1(t, x, y) f̂1(x) [ (ŵ(x) − ŵ(y)) − ĉ(x, y) ] ≤ 0,

where we used the fact that ŵ(x) − ŵ(y) ≤ ĉ(x, y) for any (x, y) ∈ D1 × D2 (since ŵ ∈ K̂). The proof of (28) follows in the same way.

Lemma 2 For any ŵ ∈ K̂ and t ≥ 0, we have

∑_{x,y∈D} T(κ(t))(x, y) (η(t, x) − ŵ(x)) ≥ 0.

Proof: This is a simple consequence of Lemma 1 and the fact that

∑_{x,y∈D} T(κ(t))(x, y) (η(t, x) − ŵ(x)) = ∑_{x∈D} (f̂1(x) − f̂2(x)) (η(t, x) − ŵ(x))
− ∑_{x,y∈D} p1(t, x, y) f̂1(x) (η(t, y) − ŵ(y)) + ∑_{x,y∈D} p2(t, y, x) f̂2(y) (η(t, x) − ŵ(x)).


Lemma 3 For any ŵ ∈ K̂ and t ≥ 0, we have

(1/2) A ∑_{x∈D} (η(t, x) − ŵ(x))² ≤ ∑_{y∈D} f̂(y) (η(t, y) − ŵ(y)) + (1/2) ∑_{y∈D} |f̂(y)|.

Proof: For ŵ ∈ K̂ fixed, we consider F : S → IR defined by

F(ξ) = ∑_{p∈D} (ξ(p) − ŵ(p))², for any ξ ∈ S.

Using the definition of A, we have

(1/2) A ∑_{p∈D} (η(p, t) − ŵ(p))² = I1 + I2,

where

I1 := (1/2) ∑_{x,y∈D} p1(t, x, y) f̂1(x) [ ∑_{p∈D} (η(p, t) + Ty(p) − ŵ(p))² − ∑_{p∈D} (η(p, t) − ŵ(p))² ],

and

I2 := (1/2) ∑_{x,y∈D} p2(t, x, y) f̂2(x) [ ∑_{p∈D} (η(p, t) − Ty(p) − ŵ(p))² − ∑_{p∈D} (η(p, t) − ŵ(p))² ].

We see that

I1 = (1/2) ∑_{x,y∈D} p1(t, x, y) f̂1(x) ∑_{p∈D} (2η(p, t) + Ty(p) − 2ŵ(p)) Ty(p)
= (1/2) ∑_{x,y∈D} p1(t, x, y) f̂1(x) (2η(y) + 1 − 2ŵ(y))
= ∑_{x,y∈D} p1(t, x, y) f̂1(x) (η(y) − ŵ(y)) + (1/2) ∑_{x,y∈D} p1(t, x, y) f̂1(x)
= ∑_{x,y∈D} (p1(t, x, y) f̂1(x) + p2(t, y, x) f̂2(y)) (η(y) − ŵ(y)) + (1/2) ∑_{x,y∈D} p1(t, x, y) f̂1(x)
− ∑_{y∈D} f̂2(y) (η(y) − ŵ(y)).

In the same way, we have

I2 = −∑_{x,y∈D} (p2(t, x, y) f̂2(x) + p1(t, y, x) f̂1(y)) (η(y) − ŵ(y)) + (1/2) ∑_{x,y∈D} p2(t, x, y) f̂2(x)
+ ∑_{y∈D} f̂1(y) (η(y) − ŵ(y)).


This implies that

(1/2) A ∑_{x∈D} (η(t, x) − ŵ(x))² = −∑_{x,y∈D} T(κ(t))(x, y) (η(t, x) − ŵ(x)) + ∑_{y∈D} f̂(y) (η(t, y) − ŵ(y)) + (1/2) ∑_{x∈D} |f̂(x)|.

Then, by applying Lemma 2, the result follows.

Proposition 2 Let û be the solution of (26) and (η(t), t ≥ 0) be the stochastic process generated by A. Then, for any t ≥ 0, we have

IE[ ∑_{x∈D} (η(t, x) − û(t, x))² ] ≤ t ∑_{x∈D} |f̂(x)|.    (29)

Proof: Let F : S × (0,∞) → IR be given by

F(ξ̂, t) = (1/2) ∑_{x∈D} (ξ̂(x) − û(t, x))², for any (ξ̂, t) ∈ S × (0,∞).

We have

(∂F/∂t)(ξ̂, t) = −∑_{x∈D} (dû/dt)(t, x) (ξ̂(x) − û(t, x)), for any (ξ̂, t) ∈ S × (0,∞).

So, (18) implies that, for any t ≥ 0,

(1/2) IE[ ∑_{x∈D} (η(t, x) − û(t, x))² ] = IE[ ∫_0^t ( ∑_{x∈D} (dû/dt)(s, x) (û(s, x) − η(s, x)) + A F(η(·, s), s) ) ds ].

Then, by using Lemma 3, we deduce that

(1/2) IE[ ∑_{x∈D} (η(t, x) − û(t, x))² ] ≤ IE[ ∫_0^t ( ∑_{x∈D} (dû/dt)(s, x) (û(s, x) − η(s, x)) + ∑_{y∈D} f̂(y) (η(s, y) − û(s, y)) ) ds ] + (t/2) ∑_{y∈D} |f̂(y)|.

Since û is a solution of (26) and η(s) ∈ K̂ for any s ≥ 0, we have

∑_{x∈D} (dû/dt)(s, x) (û(s, x) − η(s, x)) + ∑_{x∈D} f̂(x) (η(s, x) − û(s, x)) ≤ 0, for any s ≥ 0,

and the result of the proposition follows.


5 Convergence to the evolution DMK-problem

To pass to the limit, for any ε > 0, we introduce fε : Ω → IR, the function given by

fε = ∑_{i∈N1^ε} f̂1(xi) χ_{I1^i} − ∑_{j∈N2^ε} f̂2(yj) χ_{I2^j},

and we consider the convex set given by

Kε := { ξ ∈ X2 ; ξ(x) − ξ(y) ≤ cε(x, y) for any (x, y) ∈ Ω1 × Ω2 },

where cε : Ω1 × Ω2 → IR⁺ is the cost function given by

cε = (1/Pε) ∑_{(i,j)∈N1^ε×N2^ε} ĉ(xi, yj) χ_{I1^i×I2^j},

with Pε the integer parameter satisfying (14); that is,

cε = (1/Pε) ∑_{(i,j)∈N1^ε×N2^ε} [| (Pε/νε²) ∫∫_{I1^i×I2^j} c(x, y) dΛ(x, y) |] χ_{I1^i×I2^j}.

To prove Theorem 2, we rescale again û, the solution of (26), and introduce

uε(t) = (1/Pε) [ ∑_{i∈N1^ε} û(Pεt, xi) χ_{I1^i} + ∑_{j∈N2^ε} û(Pεt, yj) χ_{I2^j} ],  for any t ≥ 0.

Lemma 4 The function uε is the unique solution of the following evolution equation in X2:

(d/dt) uε(t) + ∂II_Kε(uε(t)) ∋ fε, t ≥ 0,
uε(0) = 0.    (30)

Proof: Using the definition of uε, it is not difficult to see that uε ∈ Kε. Now, let ξ ∈ Kε and set

I := ∫ ( fε − (d/dt) uε(t) ) (uε(t) − ξ) dλ.

We have

I = ∑_{i∈N1^ε} ∫_{I1^i} ( fε − (d/dt) uε(t) ) (uε(t) − ξ) dλ + ∑_{j∈N2^ε} ∫_{I2^j} ( fε − (d/dt) uε(t) ) (uε(t) − ξ) dλ
= ∑_{i∈N1^ε} ∫_{I1^i} ( f̂ − (d/dt) û(Pεt) ) ( (1/Pε) û(Pεt) − ξ ) dλ + ∑_{j∈N2^ε} ∫_{I2^j} ( f̂ − (d/dt) û(Pεt) ) ( (1/Pε) û(Pεt) − ξ ) dλ
= (νε/Pε) ∑_{x∈Dε} ( f̂(x) − (d/dt) û(Pεt, x) ) ( û(Pεt, x) − ξ̄(x) ),

where

ξ̄(x) = (Pε/νε) ∫_{I1^i} ξ dλ, for any x ∈ I1^i, i ∈ N1^ε,   and   ξ̄(x) = (Pε/νε) ∫_{I2^j} ξ dλ, for any x ∈ I2^j, j ∈ N2^ε.

It is not difficult to see that ξ̄ ∈ K̂, so that, using the fact that û is a solution of (26), we deduce that I ≥ 0, and the proof of the lemma is complete.

Lemma 5 For any ε > 0 and t ≥ 0, we have

IE[ ∫ (ηε(t) − uε(t))² dλ ] ≤ (t/Pε) ∫ |fε| dλ.

Proof: Using Proposition 2 and Lemma 4, we have

IE[ ∫ (ηε(t) − uε(t))² dλ ] = (1/Pε²) IE[ νε ∑_{x∈Dε} |η(Pεt, x) − û(Pεt, x)|² ]
≤ (t/Pε) ∑_{x∈Dε} νε ( f̂1(x) + f̂2(x) )
≤ (t/Pε) [ ∑_{i∈N1^ε} ∫_{I1^i} |fε| dλ + ∑_{j∈N2^ε} ∫_{I2^j} |fε| dλ ]
≤ (t/Pε) ∫ |fε| dλ.

Lemma 6 As ε → 0, we have

uε → u, in C([0,∞), X2),    (31)

and

(d/dt) uε → (d/dt) u, in L²_loc([0,∞), X2),    (32)

where u is the solution of (11).

Proof: Recall that, as ε → 0,

fε → f, in X2.    (33)

Now, let us prove that

∂II_Kε → ∂II_K, in the graph sense.    (34)


To this aim, it is enough to prove the Mosco convergence of II_Kε to II_K, as ε → 0 (cf. [3] and [5]). That is,

w−lim sup_{ε→0} Epi(II_Kε) ⊆ Epi(II_K) ⊆ s−lim inf_{ε→0} Epi(II_Kε),    (35)

where we denote by Epi the epigraph, by w−lim sup_{ε→0} the weak limsup and by s−lim inf_{ε→0} the strong liminf. See that

Epi(II_Kε) = Kε × [0,∞)  and  Epi(II_K) = K × [0,∞).

For the proof of the first inclusion of (35), we consider ξε ∈ Kε such that ξε → ξ in X2-weak. Since

ξε(x) − ξε(y) ≤ cε(x, y), Λ-a.e. in Ω1 × Ω2,

and cε → c in L²(Ω1×Ω2, dΛ), we deduce that the weak limit ξ satisfies

ξ(x) − ξ(y) ≤ c(x, y), Λ-a.e. in Ω1 × Ω2.

Thus ξ ∈ K. For the second inclusion, we see that for any ξ ∈ K we can define, for any ε > 0, ξε : Ω → IR, by

ξε = ∑_{i∈N1^ε} χ_{I1^i} ( (1/νε) ∫_{I1^i} ξ dλ − 1/(2Pε) ) + ∑_{j∈N2^ε} χ_{I2^j} ( (1/νε) ∫_{I2^j} ξ dλ + 1/(2Pε) ).

See that ξε ∈ Kε. Indeed, if (x, y) ∈ I1^i × I2^j, we have

ξε(x) − ξε(y) = (1/νε) ∫_{I1^i} ξ dλ − (1/νε) ∫_{I2^j} ξ dλ − 1/Pε
≤ (1/νε²) ∫∫_{I1^i×I2^j} c(x, y) dΛ(x, y) − 1/Pε
≤ (1/Pε) [| (Pε/νε²) ∫∫_{I1^i×I2^j} c(x, y) dΛ(x, y) |]
≤ cε(x, y).

Moreover, thanks to (13), we know that ξε → ξ in X2. This implies that Epi(II_K) ⊆ s−lim inf_{ε→0} Epi(II_Kε), and the proof of (35) is complete. At last, the proof of (31) and (32) follows by using (34), (33) and standard perturbation results of [4].

Proof of Theorem 2: Using the Jensen inequality, we have

IE[ ∫ (ηε(t) − u(t))² dλ ] ≤ ∫ (uε(t) − u(t))² dλ + IE[ ∫ (ηε(t) − uε(t))² dλ ]
+ 2 ( ∫ (uε(t) − u(t))² dλ )^{1/2} ( IE[ ∫ (ηε(t) − uε(t))² dλ ] )^{1/2}.


Then, thanks to Lemma 5, we get

IE[ ∫ (ηε(t) − u(t))² dλ ] ≤ ∫ (uε(t) − u(t))² dλ + (t/Pε) ∫ |fε| dλ + 2 ( ∫ (uε(t) − u(t))² dλ )^{1/2} ( (t/Pε) ∫ |fε| dλ )^{1/2}
= [ ( ∫ (uε(t) − u(t))² dλ )^{1/2} + ( (t/Pε) ∫ |fε| dλ )^{1/2} ]².

At last, letting ε → 0, the result of the theorem follows by using Lemma 6 and (14).

6 Convergence to the evolution MK-problem

The proof of Theorem 3 follows as a consequence of the following sequence of Lemmas.

Lemma 7 The couple (vε, µε) satisfies the following system:

(d/dt) vε(t, x) + ∫ T(µε(t))(x, y) dλ(y) = fε(x), (t, x) ∈ (0,∞) × Ω,
∫∫ c µε(t) dΛ ≤ ∫ ( fε − (d/dt) vε(t) ) vε(t) dλ + Eε(t),    (36)

where

Eε(t) := 2M IE[ ∫ |ηε(t) − vε(t)| dλ ]  and  M := ‖f‖_{X∞}.

Proof: Recall that v̂ ∈ K̂ and

vε(t, x) = (1/Pε) [ ∑_{i∈N1^ε} v̂(Pεt, xi) χ_{I1^i} + ∑_{j∈N2^ε} v̂(Pεt, yj) χ_{I2^j} ],  for any t ≥ 0.

So, for any t ≥ 0, vε(t) ∈ Kε,

fε − (d/dt) vε(t) = ∑_{i∈N1^ε} ( f̂(xi) − (dv̂/dt)(Pεt, xi) ) χ_{I1^i} + ∑_{j∈N2^ε} ( f̂(yj) − (dv̂/dt)(Pεt, yj) ) χ_{I2^j},

and

∫ T(µε(t))(x, y) dλ(y) = ∑_{y∈Dε} T(κ̂(Pεt))(x, y).
