HAL Id: hal-02173637
https://hal.inria.fr/hal-02173637
Submitted on 4 Jul 2019
HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.
Beyond Petrov-Galerkin projection by using “ multi-space ” priors
Cédric Herzet, Mamadou Diallo, Patrick Héas
To cite this version:
Cédric Herzet, Mamadou Diallo, Patrick Héas. Beyond Petrov-Galerkin projection by using “multi-space” priors. European Conference on Numerical Mathematics and Advanced Applications (ENUMATH), Sep 2017, Voss, Norway. ⟨hal-02173637⟩
Beyond Petrov-Galerkin projection
by using « multi-space » priors
Cédric Herzet Inria, France
Joint work with M. Diallo & P. Héas
The target problem…

Find $h^\star \in H$ such that $a(h^\star, h) = b(h)$ for all $h \in H$,

where $H$ is a Hilbert space (with inner product $\langle \cdot, \cdot \rangle$ and norm $\|\cdot\|$), $a : H \times H \to \mathbb{R}$ is a bilinear operator and $b : H \to \mathbb{R}$ is a linear operator.
… and its Petrov-Galerkin approximation

Find $\hat{h}_{PG} \in V_n$ such that $a(\hat{h}_{PG}, h) = b(h)$ for all $h \in Z_n$.
The precision of Petrov-Galerkin can be quantified by an « instance optimal property »
Standard methods constructing $V_n$ often return a set of subspaces and their « widths »
Standard outputs of methods constructing $V_n$ (e.g., « reduced basis » methods): a nested family

$V_0 \subset V_1 \subset \ldots \subset V_n$, with $\dim(V_k) = k$,

such that $\mathrm{dist}(h^\star, V_k) \le \hat{\epsilon}_k$, $k = 0 \ldots n$.
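As a side illustration (not part of the talk), the widths $\mathrm{dist}(h^\star, V_k)$ of a nested family can be computed with orthogonal projections. The dimension, the orthonormal basis `Q`, and the vector `h_star` below are hypothetical stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 50, 5
# Hypothetical nested family V_0 ⊂ V_1 ⊂ ... ⊂ V_n: V_k is spanned by
# the first k columns of an orthonormal matrix Q.
Q, _ = np.linalg.qr(rng.standard_normal((d, n)))
h_star = rng.standard_normal(d)

# dist(h*, V_k) = ||h* - P_{V_k} h*||  with  P_{V_k} = Q_k Q_k^T
widths = [np.linalg.norm(h_star - Q[:, :k] @ (Q[:, :k].T @ h_star))
          for k in range(n + 1)]
```

Because the subspaces are nested, the widths are non-increasing: $\hat{\epsilon}_0 \ge \hat{\epsilon}_1 \ge \ldots \ge \hat{\epsilon}_n$.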
The Petrov-Galerkin projection discards most of the available information
Can we use this information to improve the projection process?
The Petrov-Galerkin projection can be reformulated as a variational problem
$\hat{h}_{PG} = \arg\min_{h \in V_n} \sum_{j=1}^{n} \left( b_j - \langle a_j, h \rangle \right)^2$

where $Z_n = \mathrm{span}\left(\{z_j\}_{j=1}^{n}\right)$, $a_j$ is the Riesz representer of $a(\cdot, z_j)$ and $b_j = b(z_j)$.
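In coordinates, writing $h = \sum_i c_i v_i$ for a basis $\{v_i\}$ of $V_n$, the variational reformulation becomes an ordinary least-squares problem in $c$, with $G[j, i] = \langle a_j, v_i \rangle$ and $b_j = b(z_j)$. A minimal numerical sketch with hypothetical random data:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
G = rng.standard_normal((n, n))   # G[j, i] = <a_j, v_i>  (hypothetical values)
b = rng.standard_normal(n)        # b[j]    = b(z_j)

# Petrov-Galerkin coefficients: minimize sum_j (b_j - (G c)_j)^2
c_pg, *_ = np.linalg.lstsq(G, b, rcond=None)
```

When $G$ is square and invertible, the minimizer solves $G c = b$ exactly, i.e., it recovers the Petrov-Galerkin system.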
The proposed « multi-space » decoder adds new constraints to the variational problem
$\hat{h}_{MS} = \arg\min_{h \in V_n} \sum_{j=1}^{n} \left( b_j - \langle a_j, h \rangle \right)^2$, subject to $\mathrm{dist}(h, V_k) \le \hat{\epsilon}_k$, $k = 0 \ldots n-1$.
A graphical representation of the problem

[Figure: for $n = 2$ and $V_k = \mathrm{span}(\{v_i\}_{i=1}^{k})$, the iso-contours of $\sum_{j=1}^{n} (b_j - \langle a_j, h \rangle)^2$ in $\mathrm{span}(V_n)$, together with the feasibility regions $\{h : \mathrm{dist}(h, V_k) \le \hat{\epsilon}_k\}$; $\hat{h}_{PG}$ minimizes the cost alone, while $\hat{h}_{MS}$ must also lie in the feasibility regions, bringing it closer to $P_{V_n}(h^\star)$.]

The shape of the iso-contours depends on $G = [\langle a_i, v_j \rangle]_{ij}$.

The feasibility region depends on $\{V_k\}$ and $\{\hat{\epsilon}_k\}$.
Can we give some guarantee on the precision of the « multi-space » decoder?

Instance optimality properties:

• Petrov-Galerkin: $\|h^\star - \hat{h}_{PG}\| \le C(G)\, \mathrm{dist}(h^\star, V_n)$.

• « Multi-space » decoder: $\|h^\star - \hat{h}_{MS}\| \le \Big( \sum_{j=\ell+1}^{n} \delta_j^2 + \rho_\ell^2 + \mathrm{dist}(h^\star, V_n)^2 \Big)^{1/2}$, where $\ell$, the $\delta_j$'s and $\rho_\ell$ are "easily-computable" quantities.
Particularization of the instance optimality bound to some examples

[Figure: the same graphical representation, showing $\hat{h}_{PG}$, $\hat{h}_{MS}$, $P_{V_n}(h^\star)$ and the dual basis vectors $v_1^*, v_2^*$ in $\mathrm{span}(V_n)$.]

Example: let the singular values $\sigma_j$ of $G$ satisfy $\sigma_j = 1$ for $j = 1 \ldots n-3$, $\sigma_j = \epsilon^{1/2}$ for $j = n-2, n-1$, $\sigma_j = \epsilon$ for $j = n$, and let $\hat{\epsilon}_j = 1$ for $j = 0 \ldots n-3$, $\hat{\epsilon}_j = \epsilon^{1/2}$ for $j = n-2, n-1$, $\hat{\epsilon}_j = \epsilon$ for $j = n$, with $\mathrm{dist}(h^\star, V_k) = \hat{\epsilon}_k$. Then

$\|h^\star - \hat{h}_{PG}\|$ can be of order $1$, whereas $\|h^\star - \hat{h}_{MS}\| \le 3\epsilon^{1/2}$.
PG projection can be carried out efficiently with a complexity per iteration of $O(n^2)$

$\hat{h}_{PG} = \arg\min_{h \in V_n} \sum_{j=1}^{n} \left( b_j - \langle a_j, h \rangle \right)^2$

« Least-square » problem: it can be solved efficiently via gradient-based methods with a complexity per iteration of $O(n^2)$.
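A gradient-descent sketch of the $O(n^2)$-per-iteration claim (the matrix `G`, the data `b` and the iteration count are hypothetical; the step size is $1/L$ with $L$ the Lipschitz constant of the gradient):

```python
import numpy as np

def pg_gradient(G, b, iters=2000):
    """Minimize ||b - G c||^2 by gradient descent.

    Each iteration needs two matrix-vector products with the n x n
    matrix G, hence O(n^2) operations per iteration."""
    step = 1.0 / (2.0 * np.linalg.norm(G, 2) ** 2)  # 1/L, L = 2*sigma_max(G)^2
    c = np.zeros(G.shape[1])
    for _ in range(iters):
        c -= step * 2.0 * G.T @ (G @ c - b)  # gradient of the quadratic cost
    return c

rng = np.random.default_rng(2)
G = np.eye(4) + 0.1 * rng.standard_normal((4, 4))  # well-conditioned example
b = rng.standard_normal(4)
c = pg_gradient(G, b)
```

For a well-conditioned $G$, the iterates converge linearly to the least-squares solution.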
Our decoder can also be implemented with a complexity per iteration of $O(n^2)$

Our problem can be rewritten as:

$\hat{h}_{MS} = \arg\min_{h \in V_n} \sum_{j=1}^{n} \left( b_j - \langle a_j, h \rangle \right)^2$, subject to $\|P^{\perp}_{V_k}(h)\| \le \hat{\epsilon}_k$, $k = 0 \ldots n-1$.

We use the « Alternating Direction Method of Multipliers » (ADMM) to solve this convex problem with a complexity per iteration of $O(n^2)$.
Some results
We consider a parametric mass transfer problem

$\mu_1 \Delta h + b(\mu_2) \cdot \nabla h = s$ in $\Omega$,
$\mu_1 \nabla h \cdot n = 0$ on $\partial\Omega$,

where $b(\mu_2) = [\cos(\mu_2)\ \sin(\mu_2)]^T$, $s = \exp\!\left( -\frac{\|x - m\|_2^2}{2\sigma^2} \right)$, $\mu_1 \in [0.03, 0.05]$, $\mu_2 \in [0, 2\pi]$.
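A minimal finite-difference sketch of this advection-diffusion problem (not the authors' solver): centered differences on the unit square, ghost-point mirroring for the homogeneous Neumann condition, and a least-squares solve because the pure-Neumann operator has constants in its kernel. The grid size, source center `m = (0.5, 0.5)` and width `sigma` are hypothetical choices.

```python
import numpy as np

def solve_transfer(mu1, mu2, N=25):
    """Sketch: mu1*Lap(h) + b(mu2).grad(h) = s on [0,1]^2, Neumann BC."""
    dx = 1.0 / (N - 1)
    bx, by = np.cos(mu2), np.sin(mu2)
    xs = np.linspace(0.0, 1.0, N)
    X, Y = np.meshgrid(xs, xs, indexing="ij")
    sigma = 0.1                                  # hypothetical source width
    s = np.exp(-((X - 0.5) ** 2 + (Y - 0.5) ** 2) / (2 * sigma ** 2))

    def ref(k):                                  # mirror index (ghost points)
        return -k if k < 0 else (2 * (N - 1) - k if k > N - 1 else k)

    idx = lambda i, j: i * N + j
    A = np.zeros((N * N, N * N))
    for i in range(N):
        for j in range(N):
            r = idx(i, j)
            # diffusion mu1 * Laplacian, centered differences
            A[r, r] += -4.0 * mu1 / dx ** 2
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                A[r, idx(ref(i + di), ref(j + dj))] += mu1 / dx ** 2
            # advection b . grad h, centered differences
            A[r, idx(ref(i + 1), j)] += bx / (2 * dx)
            A[r, idx(ref(i - 1), j)] -= bx / (2 * dx)
            A[r, idx(i, ref(j + 1))] += by / (2 * dx)
            A[r, idx(i, ref(j - 1))] -= by / (2 * dx)
    # Rank-deficient pure-Neumann operator: take the least-squares solution.
    h, *_ = np.linalg.lstsq(A, s.ravel(), rcond=None)
    return h.reshape(N, N)

h = solve_transfer(mu1=0.04, mu2=1.0)
```

Solving this for many parameter values $(\mu_1, \mu_2)$ produces the snapshots from which reduced spaces $V_k$ are typically built.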
The multi-space decoder improves the worst-case approximation error over PG

[Figure: worst-case error $\sup_{h^\star \in M} \|h^\star - \hat{h}\|$ versus $\dim(V_n)$ (10 to 50), on a log scale from $10^{-5}$ to $10^2$, for the orthogonal projection, the Galerkin projection and the multi-space projection.]
Our contributions

• We derive an instance optimal guarantee for our « multi-space » decoder
• We propose an efficient implementation for the « multi-space » decoder
• We exploit additional information in the Petrov-Galerkin projection

Appendix
ADMM (first step): problem reformulation

We add $n$ new variables and $n$ new constraints:

$\hat{h}_{MS} = \arg\min_{h_n \in V_n} \sum_{j=1}^{n} \left( b_j - \langle a_j, h_n \rangle \right)^2$, subject to $\|P^{\perp}_{V_k}(h_k)\| \le \hat{\epsilon}_k$, $k = 0 \ldots n-1$, and $h_0 = h_1 = \ldots = h_n$.

ADMM (second step): main recursions

$h_n^{(l+1)} = \arg\min_{h_n \in V_n} \sum_{j=1}^{n} \left( b_j - \langle a_j, h_n \rangle \right)^2 + \frac{\rho}{2} \sum_{k=0}^{n-1} \left\| h_n - h_k^{(l)} + u_k^{(l)} \right\|^2$,

$h_k^{(l+1)} = \arg\min_{h_k} \left\| h_k - h_n^{(l+1)} - u_k^{(l)} \right\|^2$ subject to $\|P^{\perp}_{V_k}(h_k)\| \le \hat{\epsilon}_k$,

$u_k^{(l+1)} = u_k^{(l)} + h_n^{(l+1)} - h_k^{(l+1)}$,

with $h_k^{(l)} \in V_n$ and $u_k^{(l)} \in V_n$ for all $k, l$.
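A coefficient-space sketch of these recursions (not the authors' code): assume an orthonormal basis $(v_1, \ldots, v_n)$ of $V_n$, so that $\mathrm{dist}(h, V_k)$ is the norm of the coefficients beyond index $k$; the penalty $\rho$ and iteration count are hypothetical choices.

```python
import numpy as np

def multi_space_admm(G, b, eps, rho=1.0, iters=500):
    """ADMM for  min ||b - G c||^2  s.t.  ||c[k:]|| <= eps[k], k = 0..n-1,
    the multi-space problem in an orthonormal basis of V_n."""
    n = G.shape[1]
    K = len(eps)
    # The c-update is penalized least squares with a fixed matrix: factor it
    # once; afterwards each iteration costs O(n^2) (matrix-vector products).
    M = np.linalg.inv(2.0 * G.T @ G + rho * K * np.eye(n))
    c = np.zeros(n)
    ck = np.zeros((K, n))          # the auxiliary variables h_k
    u = np.zeros((K, n))           # scaled dual variables
    for _ in range(iters):
        # h_n-update: closed-form penalized least squares
        c = M @ (2.0 * G.T @ b + rho * (ck - u).sum(axis=0))
        for k in range(K):
            # h_k-update: project c + u_k onto {x : ||x[k:]|| <= eps[k]}
            z = c + u[k]
            tail = np.linalg.norm(z[k:])
            if tail > eps[k]:
                z[k:] *= eps[k] / tail
            ck[k] = z
            # dual update
            u[k] += c - ck[k]
    return c

# Toy usage: with G = I and b = (1,1,1), the binding constraint ||c|| <= 0.5
# makes the solution the projection of b onto that ball.
c = multi_space_admm(np.eye(3), np.ones(3), eps=[0.5, 0.5, 0.5])
```

The tail-scaling step is the exact Euclidean projection onto each constraint set, so each `h_k`-update costs only $O(n)$.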