Convex discretization of functionals involving the Monge-Ampère operator


Quentin Mérigot

CNRS / Université Paris-Dauphine

Joint work with J.D. Benamou, G. Carlier and É. Oudet

Workshop on Optimal Transport in the Applied Sciences


1. Motivation: Gradient flows in Wasserstein space

Background: Optimal transport

• Wasserstein distance between µ, ν ∈ P_2(R^d), where
  P_2(R^d) = {µ ∈ P(R^d); ∫ ‖x‖^2 dµ(x) < +∞},
  P_2^ac(R^d) = P_2(R^d) ∩ L^1(R^d),
  Γ(µ, ν) := {π ∈ P(R^d × R^d); p_1#π = µ, p_2#π = ν}.

  Definition: W_2^2(µ, ν) := min_{π ∈ Γ(µ,ν)} ∫ ‖x − y‖^2 dπ(x, y).

  [Figure: a transport plan π ∈ P(R^d × R^d), with marginals µ = p_1#π and ν = p_2#π]

• Relation to convex functions. Def: K := finite convex functions on R^d.

  Theorem (Brenier '91): Given µ ∈ P_2^ac(R^d), the map φ ∈ K ↦ ∇φ#µ ∈ P_2(R^d) is surjective, and moreover

  W_2^2(µ, ∇φ#µ) = ∫_{R^d} ‖x − ∇φ(x)‖^2 dµ(x).

  Given any µ ∈ P_2^ac(R^d), this yields a "parameterization" of P_2(R^d), as "seen" from µ.
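As a concrete illustration (not part of the talk), W_2^2 between two small discrete measures can be computed directly from the Kantorovich definition above by solving the linear program over couplings π ∈ Γ(µ, ν); a brute-force sketch using scipy's LP solver:

```python
import numpy as np
from scipy.optimize import linprog

# Compute W_2^2 between two discrete measures from the Kantorovich definition,
# by solving the linear program over couplings pi in Gamma(mu, nu).
# Illustrative brute-force sketch, not the method of the talk.
def w2_squared(x, y, mu, nu):
    n, m = len(x), len(y)
    cost = np.sum((x[:, None, :] - y[None, :, :]) ** 2, axis=2)  # c_ij = |x_i - y_j|^2
    A_eq = []
    for i in range(n):                       # row marginals: sum_j pi_ij = mu_i
        row = np.zeros((n, m)); row[i, :] = 1.0; A_eq.append(row.ravel())
    for j in range(m):                       # column marginals: sum_i pi_ij = nu_j
        col = np.zeros((n, m)); col[:, j] = 1.0; A_eq.append(col.ravel())
    res = linprog(cost.ravel(), A_eq=np.array(A_eq),
                  b_eq=np.concatenate([mu, nu]), bounds=(0, None))
    return res.fun

# two unit point masses at distance 1: W_2^2 = 1
print(w2_squared(np.array([[0.0]]), np.array([[1.0]]),
                 np.array([1.0]), np.array([1.0])))   # ≈ 1.0
```

The LP has n·m variables, so this only scales to small point clouds; it is meant to make the definition concrete, not to compete with the geometric methods of Part 4.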

Motivation 1: Crowd Motion Under Congestion   [Maury-Roudneff-Chupin-Santambrogio '10]

• JKO scheme for crowd motion with hard congestion:

  ρ^τ_{k+1} = argmin_{σ ∈ P_2(X)} (1/2τ) W_2^2(ρ^τ_k, σ) + E(σ) + U(σ),   (∗)

  where X ⊆ R^d is convex and bounded, and
  U(ν) := 0 if dν = f dH^d with f ≤ 1, and +∞ otherwise   (congestion),
  E(ν) := ∫_{R^d} V(x) dν(x)   (potential energy).

• Assuming σ = ∇φ#ρ^τ_k with φ convex, the Wasserstein term becomes explicit:

  min_φ (1/2τ) ∫_{R^d} ‖x − ∇φ(x)‖^2 ρ^τ_k(x) dx + E(∇φ#ρ^τ_k) + U(∇φ#ρ^τ_k).

  On the other hand, the constraint becomes strongly nonlinear:

  U(∇φ#ρ^τ_k) < +∞ ⟺ det D^2φ(x) ≥ ρ^τ_k(x).
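The nonlinear form of the constraint comes from the change-of-variables formula: if σ = ∇φ#ρ has density f, then f(∇φ(x)) det D^2φ(x) = ρ(x), so f ≤ 1 is exactly det D^2φ ≥ ρ. A quick 1-d Monte-Carlo sanity check of that formula, with a hypothetical choice of ρ and φ (not from the slides):

```python
import numpy as np

# 1-d sanity check of the change-of-variables formula behind the constraint:
# if sigma = grad(phi)#rho has density f, then f(grad(phi)(x)) * phi''(x) = rho(x),
# so f <= 1 amounts to phi'' >= rho.  Hypothetical choice (illustration only):
# rho = uniform on [0,1], phi(x) = x^2, so grad(phi)(x) = 2x and phi'' = 2.
samples = np.random.default_rng(0).uniform(0.0, 1.0, 200_000)
pushed = 2.0 * samples                 # grad(phi) applied to samples of rho
hist, _ = np.histogram(pushed, bins=50, range=(0.0, 2.0), density=True)
# predicted push-forward density: rho(x) / phi''(x) = 1/2 on [0, 2]
assert np.allclose(hist, 0.5, atol=0.05)
```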

Motivation 2: Nonlinear Diffusion

  ∂ρ/∂t = div[ρ ∇(U′(ρ) + V + W ∗ ρ)],   ρ(0, ·) = ρ_0,   ρ(t, ·) ∈ P^ac(R^d).   (∗)

• Formally, (∗) can be seen as the W_2-gradient flow of U + E, with

  U(ν) := ∫_{R^d} U(f(x)) dx if dν = f dH^d, and +∞ otherwise
  (internal energy; e.g. U(r) = r log r ⟹ entropy),

  E(ν) := ∫_{R^d} V(x) dν(x) + ∫_{R^d × R^d} W(x − y) d[ν ⊗ ν](x, y)
  (potential and interaction energies).

• JKO time-discrete scheme [Jordan-Kinderlehrer-Otto '98]: for τ > 0,

  ρ^τ_{k+1} = argmin_{σ ∈ P(R^d)} (1/2τ) W_2^2(ρ^τ_k, σ) + U(σ) + E(σ).

→ Many applications: porous medium equation, cell movement via chemotaxis, ...

Displacement Convex Setting

For X convex bounded and µ ∈ P^ac(X),

  min_{ν ∈ P(X)} (1/2τ) W_2^2(µ, ν) + E(ν) + U(ν),   spt(ν) ⊆ X   (∗_X)

  ⟺ min_{φ ∈ K_X} (1/2τ) W_2^2(µ, ∇φ#µ) + U(∇φ#µ) + E(∇φ#µ),   K_X := {φ convex; ∇φ ∈ X}.

When is the minimization problem (∗_X) convex? Here

  E(ν) := ∫_{R^d} (V + W ∗ ν) dν,   U(ν) := ∫_{R^d} U(dν/dH^d) dx.

NB: U(∇φ#ρ) = ∫ U(ρ(x)/MA[φ](x)) MA[φ](x) dx, where MA[φ](x) := det(D^2φ(x)).

Theorem [McCann '94]: (∗_X) is convex if
  (H1) V, W : R^d → R are convex functions, and
  (H2) r^d U(r^{-d}) is convex non-increasing, with U(0) = 0.
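For the entropy, (H2) can be checked by hand: with U(r) = r log r one gets g(r) := r^d U(r^{-d}) = r^d · r^{-d} · log(r^{-d}) = −d log r, which is convex and non-increasing on (0, ∞). A small finite-difference confirmation of this computation (illustration only):

```python
import numpy as np

# (H2) for the entropy U(r) = r log r:
# g(r) := r^d * U(r^{-d}) = r^d * r^{-d} * log(r^{-d}) = -d log r,
# convex and non-increasing on (0, inf); finite-difference spot check with d = 3.
d = 3
r = np.linspace(0.1, 10.0, 1000)
g = r**d * r**(-d) * np.log(r**(-d))
assert np.allclose(g, -d * np.log(r))   # closed form -d log r
assert np.all(np.diff(g) <= 0)          # non-increasing
assert np.all(np.diff(g, 2) >= 0)       # discretely convex
```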

Numerical applications of the JKO scheme

• Numerical applications of the (variational) JKO scheme are still limited:
  → 1D: monotone rearrangement, e.g. [Kinderlehrer-Walkington '99] [Blanchet-Calvez-Carrillo '08] [Agueh-Bowles '09]
  → 2D: optimal transport plans → diffeomorphisms [Carrillo-Moll '09]
  → U = hard congestion term [Maury-Roudneff-Chupin-Santambrogio '10]

• Our goal is to approximate a JKO step numerically, in dimension ≥ 2: for X convex bounded and µ ∈ P^ac(X),

  min_{ν ∈ P(X)} (1/2τ) W_2^2(µ, ν) + E(ν) + U(ν).   (∗_X)

• Plan of the talk:
  Part 2: Convex discretization of the problem (∗_X) under McCann's hypotheses.
  Part 3: Γ-convergence results from the discrete problem to the continuous one.
  Part 4: Numerical simulations: non-linear diffusion, crowd motion.
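The 1D monotone rearrangement mentioned above is explicit: the optimal map is T = G^{-1} ∘ F for the two cumulative distribution functions, which gives the quantile formula W_2^2(µ, ν) = ∫_0^1 |F^{-1}(s) − G^{-1}(s)|^2 ds. A minimal sketch for equal-size empirical measures (an illustration of why 1D is easy, not the talk's method, which targets dimension ≥ 2):

```python
import numpy as np

# 1-d: the optimal map is the monotone rearrangement T = G^{-1} o F, giving
# W_2^2(mu, nu) = int_0^1 |F^{-1}(s) - G^{-1}(s)|^2 ds (quantile formula).
# For equal-size empirical measures the quantiles are just the sorted samples.
def w2_squared_1d(x, y):
    return np.mean((np.sort(x) - np.sort(y)) ** 2)

# translating a sample by 3 gives W_2^2 = 9
x = np.random.default_rng(1).normal(size=1000)
assert abs(w2_squared_1d(x, x + 3.0) - 9.0) < 1e-9
```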


2. Convex discretization of a JKO step

Prior work: Discretizing the Space of Convex Functions

  min_{φ ∈ K} ∫_X F(φ(x), ∇φ(x)) dµ(x),   K := {φ convex}

• Finite elements: piecewise-linear convex functions over a fixed mesh P ⊆ X   [Choné-Le Meur '99]
  → number of constraints is ∼ N := card(P)
  → non-convergence result

• Finite differences, using convex interpolates   [Carlier-Lachand-Robert-Maury '01] [Ekeland-Moreno-Bromberg '10]
  → convergent, but number of constraints is ≃ N^2
  → adaptive method [Mirebeau '14]
  → exterior parameterization [Oudet-M. '14] [Oberman '14]

Discretizing the Space of Convex Functions

  min_{φ ∈ K_X} (1/2τ) W_2^2(µ, ∇φ#µ) + E(∇φ#µ) + U(∇φ#µ),   K_X := {φ cvx; ∇φ ∈ X}

Definition: Given P ⊆ X finite, K_X(P) := {ψ|_P ; ψ ∈ K_X}.
  → finite-dimensional convex set
  → extension of φ ∈ K_X(P):  φ̂ := max{ψ; ψ ∈ K_X and ψ|_P ≤ φ} ∈ K_X

  [Figure: φ : P → R and its extension φ̂ : R^d → R in K_X, for X = [−1, 1]]

Discrete push-forward of µ_P = Σ_{p∈P} µ_p δ_p ∈ P(P):

Definition: For φ ∈ K_X(P), let V_p := ∂φ̂(p) and

  G_φ#µ_P := Σ_{p∈P} (µ_p / H^d(V_p)) H^d⌞V_p ∈ P^ac(X).
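In 1D the extension φ̂ is simply the lower convex envelope of the points (p, φ(p)), as in the X = [−1, 1] picture. A hypothetical helper (not the authors' code) that extracts the points actually lying on φ̂, by a monotone-chain lower-hull sweep:

```python
import numpy as np

# In 1-d, phi_hat = max{psi convex; psi|_P <= phi} is the lower convex envelope
# of the points (p, phi(p)).  Hypothetical helper, for illustration only:
# returns the indices of the points lying on phi_hat (the lower hull).
def lower_convex_envelope(p, f):
    order = np.argsort(p)
    hull = []
    for i in order:
        while len(hull) >= 2:
            j, k = hull[-2], hull[-1]
            # drop k if it lies on or above the segment from j to i
            if (f[k] - f[j]) * (p[i] - p[j]) >= (f[i] - f[j]) * (p[k] - p[j]):
                hull.pop()
            else:
                break
        hull.append(i)
    return hull

p = np.array([0.0, 0.5, 1.0]); f = np.array([0.0, 1.0, 0.0])
assert lower_convex_envelope(p, f) == [0, 2]   # the middle point lies above phi_hat
```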

Convex Space-Discretization of One JKO Step

Theorem [Benamou, Carlier, M., Oudet '14]: Under McCann's hypotheses (H1) and (H2), the problem

  min_{φ ∈ K_X(P)} (1/2τ) W_2^2(µ_P, H_φ#µ_P) + E(H_φ#µ_P) + U(G_φ#µ_P)

is convex, and the minimum is unique if r^d U(r^{-d}) is strictly convex.

• Unfortunately, two different definitions of push-forward seem necessary.

• The convexity of φ ∈ K_X(P) ↦ U(G_φ#µ_P) follows from the log-concavity of the discrete Monge-Ampère operator:

  MA[φ](p) := H^d(∂φ̂(p)).

• If lim_{r→∞} U(r)/r = +∞, the internal energy is a barrier for convexity, i.e.

  U(G_φ#µ_P) < +∞ ⟹ ∀p ∈ P, MA[φ](p) = H^d(∂φ̂(p)) > 0 ⟹ φ is in the interior of K_X(P).

Convex Space-Discretization of the Internal Energy

Proposition: Under McCann's assumption, φ ∈ K_X(P) ↦ U(G_φ#µ_P) is convex.

Discrete Monge-Ampère operator: MA[φ](p) := H^d(∂φ̂(p)).

Proof: Recall G_φ#µ_P = Σ_{p∈P} [µ_p/MA[φ](p)] 1_{∂φ̂(p)}. With U(σ) = ∫ U(σ(x)) dx,

  U(G_φ#µ_P) = Σ_{p∈P} U(µ_p/MA[φ](p)) MA[φ](p)

(note the similarity with U(∇φ#ρ) = ∫ U(ρ(x)/MA[φ](x)) MA[φ](x) dx). In the case U(r) = r log r,

  U(G_φ#µ_P) = −Σ_{p∈P} µ_p log(MA[φ](p)) + Σ_{p∈P} µ_p log(µ_p).

Lemma: Given φ_0, φ_1 ∈ K_X(P) and φ_t := (1 − t)φ_0 + tφ_1, one has

  ∂φ̂_t(p) ⊇ (1 − t)∂φ̂_0(p) ⊕ t∂φ̂_1(p)   (⊕ = Minkowski sum).

Hence, by Brunn-Minkowski,

  log H^d(∂φ̂_t(p)) ≥ log H^d((1 − t)∂φ̂_0(p) ⊕ t∂φ̂_1(p))
                   ≥ (1 − t) log H^d(∂φ̂_0(p)) + t log H^d(∂φ̂_1(p)).
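In one dimension ∂φ̂(p) is an interval of slopes, and the lemma's Minkowski-sum inclusion says its length is superadditive along the segment φ_t; log-concavity then follows since log((1−t)a + tb) ≥ (1−t) log a + t log b. A numeric spot check of this mechanism (an illustrative sketch, not the talk's code):

```python
import numpy as np

# 1-d spot check: MA[phi](p) is the length of the interval
# {y : phi(q) >= phi(p) + (q-p)y for all q in P}, and the Minkowski-sum
# inclusion of the lemma gives, whenever both cells are nonempty,
#   MA[phi_t](p) >= (1-t) MA[phi_0](p) + t MA[phi_1](p),
# hence log-concavity by concavity of the logarithm.  Illustration only.
def ma_1d(P, phi, i):
    left = max((phi[i] - phi[j]) / (P[i] - P[j]) for j in range(len(P)) if P[j] < P[i])
    right = min((phi[j] - phi[i]) / (P[j] - P[i]) for j in range(len(P)) if P[j] > P[i])
    return max(right - left, 0.0)

rng = np.random.default_rng(0)
P = np.linspace(0.0, 1.0, 6)
for _ in range(100):
    phi0, phi1 = rng.normal(size=6), rng.normal(size=6)
    for t in (0.25, 0.5, 0.75):
        a, b = ma_1d(P, phi0, 2), ma_1d(P, phi1, 2)
        c = ma_1d(P, (1 - t) * phi0 + t * phi1, 2)
        if a > 0 and b > 0:                      # both subdifferentials nonempty
            assert c >= (1 - t) * a + t * b - 1e-9
```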


3. A Γ-convergence result

Convergence of the Space-Discretization

Setting: X bounded and convex, µ ∈ P^ac(X) with density c^{-1} ≤ ρ ≤ c.

  min_{ν ∈ P(X)} (1/2τ) W_2^2(µ, ν) + E(ν) + U(ν)   (∗)

(C1) E continuous, U l.s.c. on (P(X), W_2);
(C2) U(ρ) = ∫ U(ρ(x)) dx, where U ≥ M is convex
     (≠ McCann's condition for displacement convexity).

Theorem [Benamou, Carlier, M., Oudet '14]: Let P_n ⊆ X finite, µ_n ∈ P(P_n) with lim W_2(µ_n, µ) = 0, and consider

  min_{φ ∈ K_X(P_n)} (1/2τ) W_2^2(µ_n, H_φ#µ_n) + E(H_φ#µ_n) + U(G_φ#µ_n).   (∗)_n

If φ_n minimizes (∗)_n, then (G_φ_n#µ_n) is a minimizing sequence for (∗).

• The proof relies on Caffarelli's regularity theorem.

Proof of the convergence theorem

JKO step: X bounded and convex, µ ∈ P^ac(X) with density c^{-1} ≤ ρ ≤ c,

  F(σ) := (1/2τ) W_2^2(µ, σ) + E(σ) + U(σ),   F_n(σ) := (1/2τ) W_2^2(µ_n, σ) + E(σ) + U(σ).

Let m := min_{σ ∈ P(X)} F(σ) and m_n := min_{φ_n ∈ K_X(P_n)} F_n(G_φ_n#µ_n).

• Step 1 (lower bound): lim inf m_n ≥ m.

• Step 2 (upper bound): lim sup m_n ≤ m = min_{σ ∈ P(X)} F(σ).
  Using (C2) and a convolution argument, this follows from:
  given any probability density σ ∈ C^0(X) with ε < σ < ε^{-1}, there exists φ_n ∈ K_X(P_n)
  such that lim_{n→∞} ‖σ − σ_n‖_{L^∞(X)} = 0, where σ_n is the density of G_φ_n#µ_n.

• Step 2': proof of this approximation claim.
  [Figure: µ with spt(µ_n) = P_n ⊆ X, and the optimal map ∇φ sending µ to σ = ∇φ#µ]
  (a) Given µ_n = Σ_{p∈P_n} µ_p δ_p, take φ_n ∈ K_X(P_n) an optimal transport potential
      between µ_n and σ: ∀p ∈ P_n, σ(V_p^n) = µ_p, where V_p^n := ∂φ̂_n(p).
  (b) σ_n := density of G_φ_n#µ_n = Σ_{p∈P_n} (σ(V_p^n)/H^d(V_p^n)) 1_{V_p^n}.
  (c) If max_{p∈P_n} diam(V_p^n) → 0 as n → ∞,  (1)
      then lim_{n→∞} ‖σ − σ_n‖_{L^∞(X)} = 0.
  (d) Assume (1) fails: ∃p_n ∈ P_n s.t. diam(V_{p_n}^n) ≥ ε. Up to extraction, φ̂_n → φ
      uniformly, so that diam(∂φ(x)) ≥ ε, where x = lim_n p_n ∈ X.  (2)
  (e) Moreover, ∇φ#ρ = σ. By Caffarelli's regularity theorem, φ is C^1, which contradicts (2).


4. Numerical results

Computing the discrete Monge-Ampère operator

  U(G_φ#µ_P) = Σ_{p∈P} U(µ_p/MA[φ](p)) MA[φ](p), with MA[φ](p) := H^d(∂φ̂(p)).

• Goal: fast computation of MA[φ](p) and its 1st/2nd derivatives w.r.t. φ.

• We rely on the notion of Laguerre (or power) cell from computational geometry.

Definition: Given a function φ : P → R,

  Lag_φ^P(p) := {y ∈ R^d; ∀q ∈ P, φ(q) ≥ φ(p) + ⟨q − p | y⟩}.

→ For φ(p) = ‖p‖^2/2, one gets the Voronoi cell:
  Lag_φ^P(p) = {y; ∀q ∈ P, ‖q − y‖^2 ≥ ‖p − y‖^2}.

→ Computation in time O(|P| log |P|) in 2D.
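In 1D the Laguerre cell reduces to an interval bounded by extreme difference quotients, and its length is MA[φ](p); a hypothetical helper for illustration (the 2D case of the talk uses a power-diagram computation instead):

```python
import numpy as np

# 1-d Laguerre cell Lag_phi^P(p) = {y : phi(q) >= phi(p) + (q-p)y for all q in P}:
# an interval whose endpoints are extreme difference quotients, and whose length
# is the discrete Monge-Ampere operator MA[phi](p) = H^1(partial phi_hat(p)).
# Hypothetical helper, illustration only; 2-d needs a power-diagram library.
def laguerre_cell_1d(P, phi, i):
    left = max(((phi[i] - phi[j]) / (P[i] - P[j]) for j in range(len(P)) if P[j] < P[i]),
               default=-np.inf)
    right = min(((phi[j] - phi[i]) / (P[j] - P[i]) for j in range(len(P)) if P[j] > P[i]),
                default=np.inf)
    return left, right

P = np.array([0.0, 1.0, 2.0])
phi = P ** 2 / 2        # for phi(p) = |p|^2/2 the Laguerre cell is the Voronoi cell
assert laguerre_cell_1d(P, phi, 1) == (0.5, 1.5)
```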
