Parametric optimization of pulsating jets in unsteady flow by Multiple-Gradient Descent Algorithm (MGDA)


HAL Id: hal-01414741
https://hal.inria.fr/hal-01414741
Submitted on 12 Dec 2016


Jean-Antoine Desideri, Régis Duvigneau

To cite this version:

Jean-Antoine Desideri, Régis Duvigneau. Parametric optimization of pulsating jets in unsteady flow by Multiple-Gradient Descent Algorithm (MGDA). J. Périaux; W. Fitzgibbon; B. Chetverushkin; O. Pironneau. Numerical Methods for Differential Equations, Optimization, and Technological Problems, Modeling, Simulation and Optimization for Science and Technology, 2017. ⟨hal-01414741⟩


J.-A. Désidéri and R. Duvigneau

Abstract Two numerical methodologies are combined to optimize six design characteristics of a system of pulsating jets acting on a laminar boundary layer governed by the compressible Navier-Stokes equations in a time-periodic regime. The flow is simulated by a finite-volume method of second order in both time and space, and the simulation provides the drag as a function of time. Simultaneously, the sensitivity equations, obtained by differentiating the governing equations with respect to the six parameters, are also marched in time, and this provides the six-component parametric gradient of drag. When the periodic regime is reached numerically, one thus has at hand an objective function, the drag, to be minimized, together with its parametric gradient, at all times of a period. The parametric optimization is then conducted as a multi-point problem by the Multiple-Gradient Descent Algorithm (MGDA), which makes it possible to reduce the objective function at all times simultaneously, and not merely in the sense of a weighted average.

Key words: Active-flow control, time-dependent Navier-Stokes equations, finite-volume schemes, sensitivity equations, multi-objective differentiable optimization, descent methods, robust design

1 Introduction: active flow control issues

Active flow control consists in exploiting natural flow instabilities by the use of actuators, such as oscillatory jets or vibrating membranes, to obtain some desirable effects at a moderate energy expense. It has been a growing research area over the last decades, since this approach has demonstrated its ability to improve aerodynamic performance [9] for a large range of applications. It is especially appealing in the case of separated flows, for which natural instability phenomena can be efficiently exploited to manipulate flow characteristics using periodic flow excitation.

J.-A. Désidéri
INRIA Acumes Team, 2004, Route des Lucioles, 06902 Sophia-Antipolis (France), e-mail: Jean-Antoine.Desideri@inria.fr

R. Duvigneau
INRIA Acumes Team, 2004, Route des Lucioles, 06902 Sophia-Antipolis (France), e-mail: Regis.Duvigneau@inria.fr

In this context, a major difficulty is related to the choice of actuation parameters, such as excitation frequency, amplitude and location, to obtain the expected flow response. In cases implying a single isolated actuator, it is relatively easy to carry out an experimental or numerical study to determine efficient control parameters. However, in the perspective of industrial applications involving hundreds of actuators, this task is far from straightforward, and the use of an automated optimization strategy is thus proposed, in the spirit of previous works [6, 7, 8].

The application of an optimization procedure to such problems faces the following difficulties. First, the choice of the optimization algorithm is constrained by the huge computational cost of the unsteady-flow simulation, and it is necessary to consider several objectives concurrently: typically, improving the time-averaged performance alone is not satisfactory for realistic applications. Second, sensitivity analysis is tedious in the context of unsteady flows, due to the backward integration of the adjoint equation, which requires the storage, or partial storage and partial re-computation, of the unsteady solution.

The proposed work is based on two methodological ingredients to overcome the difficulties described above: on one side, the Sensitivity Equation Method (SEM) for unsteady flows, which allows one to compute the gradient of a cost functional with respect to (w.r.t.) the control parameters at any time using a forward time-integration; on the other side, the Multiple-Gradient Descent Algorithm (MGDA), an extension of the classical steepest-descent method to multiobjective problems, which permits the computation of a descent direction common to a possibly large set of cost functions.

2 Problem description: optimization of pulsating jets

We consider as model problem the two-dimensional compressible flow over a flat plate equipped with three periodically oscillating jets (see Fig. 1). The Reynolds number based on the length $h$ is $R = 10^3$ and the flow is laminar, while the Mach number is $M = 10^{-1}$. For the three jets, the crosswise velocity is imposed as:

$$v_k(x,t) = A_k \sin(2\pi N_k t + \varphi_k)\,\zeta(x), \qquad k = 1, 2, 3, \qquad (1)$$

where $\zeta(x)$ corresponds to a squared-sine distribution. The jet frequencies are set to the fixed values $N_1 = N_\infty$, $N_2 = 2N_\infty$ and $N_3 = \frac{1}{2}N_\infty$, with $N_\infty = u_\infty/h$. The jet amplitudes $A_k$ and phases $\varphi_k$ constitute the six control parameters. Initial values are chosen somewhat arbitrarily as $x_0 = \{u_\infty,\ \frac{3}{2}u_\infty,\ 2u_\infty,\ 0,\ \pi/4,\ 3\pi/4\}$.

Fig. 1 Problem description.

Fig. 2 Computational mesh.

The grid employed in this study counts 111,161 nodes (see Fig. 2). The initial solution corresponds to uniform flow variables based on inlet conditions. The time step is set to $\Delta t = 1/(400N_\infty)$. The unsteady flow tends rapidly to a periodic regime, whose period corresponds to the lowest actuation frequency $\frac{1}{2}N_\infty$ and is thus described by 800 time-steps. Once transient effects have vanished, one period is defined as the observation interval (see Fig. 3). Instantaneous velocity fields are shown in Figs. 4-5 as illustration.

We consider a set of objective functions $\{f_j(x)\}_{j=1,\dots,m}$, defined as the values of the drag $\mathcal{J}$, estimated at discrete times $\{t_j\}_{j=1,\dots,m}$ chosen in the observation interval:

$$f_j(x) = \mathcal{J}(W_j) \quad \text{with} \quad W_j = W(x, t_j) \quad \forall j \in \{1, \cdots, m\}, \qquad (2)$$

where $x \in \mathbb{R}^n$ represents the vector of the $n = 6$ control parameters and $W$ the flow variables. The objective of this work is to reduce these $m$ cost functions simultaneously. The following sections describe how the gradient $\nabla_x f_j$ of $f_j$ w.r.t. $x$ is evaluated, and the optimization algorithm proposed to conduct the simultaneous optimization of these functions.


Fig. 3 History of drag for initial parameters and observation area.

Fig. 4 Snapshot of the streamwise velocity field.


3 Sensitivity analysis for unsteady flow

3.1 Method

The governing flow equations are written in conservative form as follows:

$$\frac{\partial W}{\partial t} + \nabla\cdot\mathcal{F} = \nabla\cdot\mathcal{G}, \qquad (3)$$

where $W = (\rho, \rho u, \rho v, \rho e)$ is the vector of conservative mean-flow variables, $\rho$ is the density, $u$ and $v$ are the velocity components, and $e$ the total energy per unit mass; $\mathcal{F} = (\mathcal{F}_x(W), \mathcal{F}_y(W))$ and $\mathcal{G} = (\mathcal{G}_x(W), \mathcal{G}_y(W))$ are the vectors of convective and diffusive fluxes respectively. Here $\nabla$ stands for the gradient w.r.t. the spatial Cartesian coordinates $x$ and $y$, and $(\nabla\cdot)$ for the divergence operator. The pressure $p$ is obtained from the perfect-gas state equation:

$$p = \rho(\gamma - 1)\left(e - \frac{u^2 + v^2}{2}\right) = \rho(\gamma - 1)\,e_i, \qquad (4)$$

where $\gamma = 7/5$ is the ratio of the specific heats for a diatomic gas, and $e_i$ is the internal energy.

The inviscid fluxes are given by:

$$\mathcal{F}_x(W) = \begin{pmatrix} \rho u \\ \rho u^2 + p \\ \rho u v \\ \rho u \left(e + \frac{p}{\rho}\right) \end{pmatrix}, \qquad \mathcal{F}_y(W) = \begin{pmatrix} \rho v \\ \rho v u \\ \rho v^2 + p \\ \rho v \left(e + \frac{p}{\rho}\right) \end{pmatrix}. \qquad (5)$$

The viscous fluxes are written as:

$$\mathcal{G}_x(W) = \begin{pmatrix} 0 \\ \tau_{xx} \\ \tau_{yx} \\ u\tau_{xx} + v\tau_{yx} - q_x \end{pmatrix}, \qquad \mathcal{G}_y(W) = \begin{pmatrix} 0 \\ \tau_{xy} \\ \tau_{yy} \\ u\tau_{xy} + v\tau_{yy} - q_y \end{pmatrix}, \qquad (6)$$

where $\bar{\bar{\tau}}$ is the symmetric viscous stress tensor and $q$ the heat flux.

We can now introduce the sensitivity field $W'$, defined as the derivative of the flow solution $W$ w.r.t. a given control parameter $a$, component of $x$:

$$W' = \frac{\partial W}{\partial a}. \qquad (7)$$

The equations governing the sensitivity field can be obtained by differentiating (3) w.r.t. $a$:

$$\frac{\partial}{\partial a}\left(\frac{\partial W}{\partial t}\right) + \frac{\partial}{\partial a}(\nabla\cdot\mathcal{F}) = \frac{\partial}{\partial a}(\nabla\cdot\mathcal{G}). \qquad (8)$$


By switching the derivatives w.r.t. $a$ and those w.r.t. the time or space coordinates, one obtains:

$$\frac{\partial}{\partial t}\left(\frac{\partial W}{\partial a}\right) + \nabla\cdot\left(\frac{\partial \mathcal{F}}{\partial a}\right) = \nabla\cdot\left(\frac{\partial \mathcal{G}}{\partial a}\right), \qquad (9)$$

or:

$$\frac{\partial W'}{\partial t} + \nabla\cdot\mathcal{F}' = \nabla\cdot\mathcal{G}', \qquad (10)$$

which is formally similar to (3), by introducing the sensitivity of the convective flux $\mathcal{F}' = (\mathcal{F}'_x(W, W'), \mathcal{F}'_y(W, W'))$ and the sensitivity of the diffusive flux $\mathcal{G}' = (\mathcal{G}'_x(W, W'), \mathcal{G}'_y(W, W'))$. The sensitivity of the convective fluxes can be expressed as:

$$\mathcal{F}'_x(W, W') = \begin{pmatrix} (\rho u)' \\ (\rho u)' u + (\rho u)\, u' + p' \\ (\rho u)' v + (\rho u)\, v' \\ (\rho u)'\left(e + \frac{p}{\rho}\right) + (\rho u)\left(e' + \left(\frac{p}{\rho}\right)'\right) \end{pmatrix} \qquad (11)$$

$$\mathcal{F}'_y(W, W') = \begin{pmatrix} (\rho v)' \\ (\rho v)' u + (\rho v)\, u' \\ (\rho v)' v + (\rho v)\, v' + p' \\ (\rho v)'\left(e + \frac{p}{\rho}\right) + (\rho v)\left(e' + \left(\frac{p}{\rho}\right)'\right) \end{pmatrix}. \qquad (12)$$

The sensitivity of the diffusive fluxes reads:

$$\mathcal{G}'_x(W, W') = \begin{pmatrix} 0 \\ \tau'_{xx} \\ \tau'_{yx} \\ u'\tau_{xx} + v'\tau_{yx} + u\tau'_{xx} + v\tau'_{yx} - q'_x \end{pmatrix} \qquad (13)$$

$$\mathcal{G}'_y(W, W') = \begin{pmatrix} 0 \\ \tau'_{xy} \\ \tau'_{yy} \\ u'\tau_{xy} + v'\tau_{yy} + u\tau'_{xy} + v\tau'_{yy} - q'_y \end{pmatrix}, \qquad (14)$$

where $\bar{\bar{\tau}}'$ is the sensitivity of the viscous stress tensor and $q'$ the sensitivity of the heat flux. The boundary conditions for the sensitivity equations are obtained by differentiating the boundary conditions applied to the flow.
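Hand-coding sensitivities such as (11)-(14) is error-prone, so it is worth checking them against a finite difference of the underlying flux. The sketch below (a Python illustration of ours, not the authors' solver) implements the convective x-flux (5) and its sensitivity (11) for a single state vector and verifies their consistency:

```python
import numpy as np

GAMMA = 7.0 / 5.0  # ratio of specific heats for a diatomic gas, Eq. (4)

def flux_x(W):
    """Inviscid x-flux Fx(W) of Eq. (5); W = (rho, rho*u, rho*v, rho*e)."""
    rho, ru, rv, re = W
    u, v, e = ru / rho, rv / rho, re / rho
    p = (GAMMA - 1.0) * (re - 0.5 * (ru**2 + rv**2) / rho)
    return np.array([ru, ru * u + p, ru * v, ru * (e + p / rho)])

def flux_x_sensitivity(W, dW):
    """Hand-coded flux sensitivity F'x(W, W') of Eq. (11)."""
    rho, ru, rv, re = W
    drho, dru, drv, dre = dW
    u, v, e = ru / rho, rv / rho, re / rho
    du = (dru - u * drho) / rho            # u' from (rho*u)' and rho'
    dv = (drv - v * drho) / rho
    de = (dre - e * drho) / rho
    p = (GAMMA - 1.0) * (re - 0.5 * (ru**2 + rv**2) / rho)
    dp = (GAMMA - 1.0) * (dre - (ru * dru + rv * drv) / rho
                          + 0.5 * (ru**2 + rv**2) * drho / rho**2)
    dpr = (dp * rho - p * drho) / rho**2   # (p/rho)'
    return np.array([dru,
                     dru * u + ru * du + dp,
                     dru * v + ru * dv,
                     dru * (e + p / rho) + ru * (de + dpr)])

# Finite-difference check on an arbitrary state and perturbation direction
W  = np.array([1.2, 0.5, 0.1, 2.5])
dW = np.array([0.3, -0.2, 0.4, 0.1])
h = 1e-7
fd = (flux_x(W + h * dW) - flux_x(W)) / h
assert np.allclose(fd, flux_x_sensitivity(W, dW), atol=1e-5)
```

The same directional-derivative check applies verbatim to the y-flux (12) and, given expressions for $\tau'$ and $q'$, to the viscous flux sensitivities (13)-(14).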

Since the flow and sensitivity equations are formally similar, both are solved using the same finite-volume approach [5], based on a second-order vertex-centered discretization scheme. Temporal integration relies on a second-order implicit backward method, with a dual time-stepping technique. Note that the implicit part of the scheme is the same for the flow and sensitivity equations, since both involve the same Jacobian matrix.

The sensitivity equation depends on the parameter $a$, the component of $x$ of interest. Therefore, six sensitivity equations have to be solved, possibly in parallel, to estimate the components of the gradient for all $m$ cost functions:

$$\nabla_a f_j = \partial_W \mathcal{J}(W_j) \cdot W'(t_j) \qquad \forall j \in \{1, \cdots, m\}. \qquad (15)$$

We emphasize that if m is large, the solution of six sensitivity equations is far more cost-efficient than solving m adjoint equations backward in time. Additionally, the memory-storage requirement remains moderate.
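The forward-in-time character of the SEM can be illustrated on a scalar model ODE (a minimal sketch of ours, not the compressible solver): the state equation and its derivative w.r.t. the parameter $a$ are marched together by the same explicit scheme, and the computed sensitivity is checked against a finite difference of two nonlinear solves.

```python
import numpy as np

def march(a, dt=1e-3, T=5.0):
    """Forward-Euler marching of a model ODE dW/dt = -a*W + sin(t), W(0) = 1,
    together with its sensitivity W' = dW/da, governed by the differentiated
    equation dW'/dt = -a*W' - W (same time loop, same implicit 'Jacobian' -a)."""
    n = int(T / dt)
    W, dWda = 1.0, 0.0
    for k in range(n):
        t = k * dt
        # both updates use the old (W, dWda): a consistent coupled Euler step
        W, dWda = W + dt * (-a * W + np.sin(t)), dWda + dt * (-a * dWda - W)
    return W, dWda

a = 0.8
W, dWda = march(a)
# Verify against a finite difference of two nonlinear solves
eps = 1e-6
fd = (march(a + eps)[0] - march(a)[0]) / eps
assert abs(dWda - fd) < 1e-4
```

The key point mirrors the paper's setting: the sensitivity recursion is obtained by differentiating the discrete scheme, so it shares the time loop with the state equation and never requires a backward integration.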

3.2 Verification

To verify the implementation of the sensitivity equations, we compute a neighboring solution according to the first-order extrapolation $W(a) + W'\delta a$ and compare it with the solution $W(a + \delta a)$. This exercise is conducted in a simplified case including a single jet, $a$ being the jet amplitude $A_1$. Figs. 6-7 provide illustrations for the flow fields at selected times, and Fig. 8 for the resulting drag history, for a perturbation of the jet amplitude $\delta A_1 = A_1/4$. A similar exercise has been achieved for the phase, to fully verify the gradient estimation.

Fig. 6 Linear extrapolation of streamwise velocity field $u$ w.r.t. jet amplitude $A_1$ for blowing (top) and suction (bottom) phases: reference state $u(A_1)$ in green, extrapolated state $u(A_1) + u'\delta A_1$ in red, non-linear perturbed state $u(A_1 + \delta A_1)$ in blue, for $\delta A_1 = A_1/4$.

Fig. 7 Linear extrapolation of pressure field $p$ w.r.t. jet amplitude $A_1$ for blowing (top) and suction (bottom) phases: reference state $p(A_1)$ in green, extrapolated state $p(A_1) + p'\delta A_1$ in red, non-linear perturbed state $p(A_1 + \delta A_1)$ in blue, for $\delta A_1 = A_1/4$.


Fig. 8 Linear extrapolation of the drag w.r.t. jet amplitude $A_1$: reference drag $\mathcal{J}(A_1)$ in green, extrapolated drag $\mathcal{J}(A_1) + \mathcal{J}'\delta A_1$ in red, non-linear perturbed drag $\mathcal{J}(A_1 + \delta A_1)$ in blue, for $\delta A_1 = A_1/4$.

4 Multiobjective descent algorithm MGDA

Equipped with procedures for calculating the objective functions and their gradients, we now turn to the issue of constructing the multi-objective optimization method.

The Multiple-Gradient Descent Algorithm (MGDA) was originally introduced in [1] and [2] to solve general multi-objective optimization problems involving differentiable cost functions. Variants were proposed in [3], but more recently the algorithm was slightly revised in [4] to apply to cases where the number $m$ of objective functions exceeds the dimension $n$ of the working design space. We recall here the basic definition of the revised version and provide some details about its application to the present parametric optimization.

4.1 Multi-Objective problem statement

Let $m$ and $n$ be two arbitrary integers, and consider the multi-objective optimization problem consisting in minimizing $m$ differentiable objective functions $\{f_j(x)\}$ in some open admissible domain $\Omega_a \subseteq \mathbb{R}^n$ ($j = 1, \dots, m$; $f_j \in C^1(\Omega_a)$). Given a starting point $x_0 \in \Omega_a$ and a vector $d \in \mathbb{R}^n$, one forms the directional derivatives

$$f'_j = [\nabla_x f_j(x_0)]^t d, \qquad (16)$$

where $\nabla_x$ is the symbol for the gradient w.r.t. $x$ and the superscript $t$ stands for transposition. One seeks a vector $d$ such that

$$f'_j > 0 \quad (\forall j). \qquad (17)$$

If such a vector $d$ exists, the direction of the vector $(-d)$ is said to be a local descent direction common to all objective functions. Then evidently, infinitely many other such directions also exist, and our algorithm permits the identification of at least one.

4.2 Convex hull, two lemmas and basic MGDA

We recall the following:

Definition 1. The convex hull of a family of $m$ vectors $\{u_j\}$ ($j = 1, \dots, m$; $u_j \in \mathbb{R}^n$) is the set of all their convex combinations:

$$\mathcal{U} = \left\{ u \in \mathbb{R}^n \ \text{such that} \ u = \sum_{j=1}^m \alpha_j u_j;\ \alpha_j \in \mathbb{R}^+;\ \sum_{j=1}^m \alpha_j = 1 \right\}. \qquad (18)$$


Then, we have:

Lemma 1. Given an $n \times n$ real-symmetric positive-definite matrix $A_n$, the associated scalar product

$$\langle u, v \rangle = u^t A_n v \quad (u, v \in \mathbb{R}^n), \qquad (19)$$

and Euclidean norm

$$\|u\| = \sqrt{u^t A_n u}, \qquad (20)$$

the convex hull $\mathcal{U}$ admits a unique element $\omega$ of minimum norm.

Proof.
- Existence: $\mathcal{U}$ is closed and $\|.\|$ is a continuous function.
- Uniqueness: suppose that $\omega_1$ and $\omega_2$ are two realizations of the minimum $\mu = \min_{u \in \mathcal{U}} \|u\|$, so that $\mu = \|\omega_1\| = \|\omega_2\|$, and let

$$\omega_s = \tfrac{1}{2}(\omega_2 + \omega_1), \qquad \omega_d = \tfrac{1}{2}(\omega_2 - \omega_1),$$

so that:

$$\langle \omega_s, \omega_d \rangle = \tfrac{1}{4}\langle \omega_2 + \omega_1,\ \omega_2 - \omega_1 \rangle = \tfrac{1}{4}\left( \|\omega_2\|^2 - \|\omega_1\|^2 \right) = 0.$$

Hence $\omega_s \perp \omega_d$, and since $\omega_s \in \mathcal{U}$, $\|\omega_s\| \geq \mu$, and:

$$\mu^2 = \|\omega_2\|^2 = \|\omega_s + \omega_d\|^2 = \|\omega_s\|^2 + \|\omega_d\|^2 \geq \mu^2 + \|\omega_d\|^2 \ \Longrightarrow\ \omega_d = 0.$$

Lemma 2. The minimum-norm element $\omega$ defined in Lemma 1 satisfies:

$$\forall u \in \mathcal{U}, \quad \langle u, \omega \rangle \geq \|\omega\|^2. \qquad (21)$$

Proof. Let $u \in \mathcal{U}$ be arbitrary, and let $\delta = u - \omega$; by convexity of $\mathcal{U}$: $\forall \varepsilon \in [0, 1]$, $(1 - \varepsilon)\omega + \varepsilon u = \omega + \varepsilon\delta \in \mathcal{U}$, and by definition of $\omega$, $\|\omega + \varepsilon\delta\| \geq \|\omega\|$, that is:

$$\langle \omega + \varepsilon\delta,\ \omega + \varepsilon\delta \rangle - \langle \omega, \omega \rangle = 2\varepsilon \langle \omega, \delta \rangle + \varepsilon^2 \|\delta\|^2 \geq 0,$$

and this requires that the coefficient of $\varepsilon$ be non-negative, which is precisely (21).

Then consider the case where

$$u_j = \nabla_x f_j(x_0) \quad (\forall j). \qquad (22)$$

If the vector $\omega$ defined in Lemma 1 is nonzero, the vector

$$d = A_n \omega \qquad (23)$$

is also nonzero, and is a solution to the problem stated in (16)-(17) since, by virtue of Lemma 2:

$$\langle u_j, \omega \rangle = u_j^t A_n \omega = u_j^t d \geq \|\omega\|^2 > 0. \qquad (24)$$

The situation in which $\omega = 0$, or equivalently,

$$\exists \alpha = \{\alpha_j\} \in (\mathbb{R}^+)^m \ \text{such that} \ \sum_{j=1}^m \alpha_j \nabla f_j(x_0) = 0 \ \text{and} \ \sum_{j=1}^m \alpha_j = 1, \qquad (25)$$

is said to be one of "Pareto-stationarity". The relationship between Pareto-optimality and Pareto-stationarity was made precise by the following result [3]-[4]:

Theorem 1. If the objective functions are differentiable and convex in some open ball $B \subseteq \Omega_a$ about $x_0$, and if $x_0$ is Pareto-optimal, then the Pareto-stationarity condition is satisfied at $x_0$.

Hence, the Pareto-stationarity condition generalizes, to the multi-objective context, the classical stationarity condition expressing that an unconstrained differentiable function is extremal.

We now return to the non-trivial case of a point $x_0$ that is not Pareto-stationary, and we suppose that the vectors $\omega$ and $d$ ($\omega \neq 0$; $d \neq 0$) have been identified (see next subsection). Then we define MGDA as the iteration which transforms $x_0$ into

$$x_1 = x_0 - \rho d, \qquad (26)$$

where $\rho > 0$ is some appropriate step-size. Thus MGDA is an extension to the multi-objective context of the classical steepest-descent method, in which the direction of search is taken to be the vector $d$ defined above. At convergence, the limiting point is Pareto-stationary.
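For two objectives, the minimum-norm element $\omega$ has a closed form (projection onto a segment), which makes a compact illustration of the MGDA iteration (26). The toy example below (ours, with the Euclidean scalar product $A_n = I$, so $d = \omega$) drives a point to Pareto-stationarity for two convex quadratics:

```python
import numpy as np

def min_norm_2(u1, u2):
    """Minimum-norm element of the convex hull of two vectors (Euclidean case):
    minimize ||(1-t)*u1 + t*u2|| over t in [0, 1]."""
    diff = u1 - u2
    denom = diff @ diff
    t = 0.5 if denom == 0.0 else np.clip((u1 @ diff) / denom, 0.0, 1.0)
    return (1.0 - t) * u1 + t * u2

# Two toy objectives on R^2 with distinct minimizers (1,0) and (0,1)
f1 = lambda x: (x[0] - 1.0)**2 + x[1]**2
f2 = lambda x: x[0]**2 + (x[1] - 1.0)**2
g1 = lambda x: np.array([2.0 * (x[0] - 1.0), 2.0 * x[1]])
g2 = lambda x: np.array([2.0 * x[0], 2.0 * (x[1] - 1.0)])

x = np.array([2.0, -1.0])
rho = 0.25
for _ in range(50):
    omega = min_norm_2(g1(x), g2(x))   # common descent direction d = omega
    x = x - rho * omega                # MGDA step, Eq. (26)

# At convergence the limit point is Pareto-stationary: omega -> 0
assert np.linalg.norm(min_norm_2(g1(x), g2(x))) < 1e-6
assert f1(x) < 2.0 and f2(x) < 8.0     # both objectives reduced from the start
```

For $m > 2$ gradients, the closed form is replaced by the QP formulation of the next subsection.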

We now examine how the vector $d$ can be computed in practice.

4.3 QP formulation and hierarchical Gram-Schmidt orthogonalization

By letting

$$\omega = \sum_{j=1}^m \alpha_j u_j = U\alpha, \qquad (27)$$

where $u_j = \nabla f_j(x_0)$ and $U$ is the $n \times m$ matrix whose $j$th column contains the $n$ components of vector $u_j$, the identification of the vector $\omega$ can be made by solving the following Quadratic-Programming (QP) problem for the unknown vector of coefficients $\alpha = \{\alpha_j\}$:

$$\omega = U\alpha^*, \qquad \alpha^* = \arg\min_{\alpha \in \mathbb{R}^m} \tfrac{1}{2}\, \alpha^t H \alpha, \qquad (28)$$

subject to:

$$\alpha_j \geq 0 \ (\forall j), \qquad \sum_{j=1}^m \alpha_j = 1, \qquad (29)$$

where $H = U^t A_n U$. Note that while the vector $\omega$ is unique, the vector $\alpha$ may not be.
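In practice, the QP (28)-(29) can be handed to any library routine (the paper uses Scilab's qpsolve). The sketch below uses scipy.optimize.minimize with SLSQP as a stand-in, and checks the property of Lemma 2 on random gradients biased into a common half-space (the data and tolerances are illustrative, not the paper's):

```python
import numpy as np
from scipy.optimize import minimize

def mgda_direction(U):
    """Minimum-norm element omega of the convex hull of the columns of U,
    for the Euclidean scalar product (A_n = I), via the QP (28)-(29)."""
    n, m = U.shape
    H = U.T @ U                              # H = U^t U
    res = minimize(lambda a: 0.5 * a @ H @ a,
                   np.full(m, 1.0 / m),      # feasible uniform start
                   jac=lambda a: H @ a,
                   bounds=[(0.0, None)] * m, # alpha_j >= 0
                   constraints=({'type': 'eq',
                                 'fun': lambda a: a.sum() - 1.0},),
                   method='SLSQP', options={'ftol': 1e-12, 'maxiter': 500})
    return U @ res.x

# Random gradients pushed into a common half-space so that omega != 0
rng = np.random.default_rng(0)
U = rng.standard_normal((6, 20))      # n = 6 parameters, m = 20 gradients
U[0] = np.abs(U[0]) + 1.0             # first component >= 1 for every column
omega = mgda_direction(U)

# Descent property (17) and Lemma 2, up to solver tolerance
assert np.all(U.T @ omega > 0.0)
assert np.all(U.T @ omega >= omega @ omega - 1e-3)
```

With the hierarchical orthogonalization and the scalar product (33) below, the same QP is solved in the transformed basis; the Euclidean version above is only the baseline case.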

If the family of gradients is linearly independent, which requires in particular that $m \leq n$, it is possible to choose the scalar product, through the definition of the matrix $A_n$, in such a way that these gradients form an orthogonal basis of their span. Then the vector $\omega$ is explicitly determined by the orthogonal projection of $0$ onto the convex hull $\mathcal{U}$:

$$\alpha_j = \frac{1/\|u_j\|^2}{\sum_{k=1}^m 1/\|u_k\|^2}. \qquad (30)$$

In the inverse case where $m > n$ (and even $m \gg n$), the focus of interest presently, let $r \leq n < m$ be the rank of the family of gradients. Using first the standard Euclidean scalar product ($A_n = I_n$), the Gram-Schmidt orthogonalization process stops in $r$ steps and produces an orthogonal basis (in the usual sense) $\{v_j\}$ ($j = 1, \dots, r$) that spans a subspace $\mathcal{G}$ of dimension $r$. The orthogonal vectors $\{v_j\}$ are calculated from a subfamily of the original vectors, $\{u_j\}$, $j \in J$, where $J$ is a subset of $r$ indices from $1$ to $m$. In this process, a hierarchical principle was introduced in [4] to select these indices in such a way that the cone bounded by the reduced family $\{u_j\}$ ($j \in J$) be as large as possible, so as to contain, in the most favorable situation, the directions of all the other gradients, unused in the Gram-Schmidt process by redundancy. When this occurs, the subfamily $\{u_j\}$ ($j \in J$) not only is a basis of the subspace $\mathcal{G}$, but also its convex hull contains all the directions of interest. In the more general case where the directions of some gradients among the unused vectors $\{u_j\}$ ($j \notin J$) are not in the cone, we resort to the QP formulation, but expressed after changing the basis to $\{u_j\}$ ($j \in J$) and choosing a new scalar product that makes this subfamily orthogonal. The technical steps are the following [4]:

• Once the $n \times m$ matrix $U$ is formed with the components of the given gradients $\{u_j\}$ ($u_j \in \mathbb{R}^n$, $j = 1, \dots, m$), these gradients, or column-vectors, are made (physically) dimensionless by component-wise normalization, to form the initial matrix $G$:

$$\forall i \in \{1, \dots, n\}: \quad s_i = \max_j |u_{i,j}|, \qquad S = \mathrm{Diag}(s_i), \qquad G = S^{-1}U. \qquad (31)$$

This normalization is essential to make the subsequent calculations of scalar products physically meaningful and computationally well-balanced.

• Throughout the Gram-Schmidt process, columns of the $G$ matrix are permuted by the hierarchical selection of basis vectors. Upon exit, the actually-used gradients are placed in the first $r$ columns. The corresponding $n \times r$ leftmost block of the final matrix is then denoted $G$. Note that in the present version of the Gram-Schmidt process, the computed orthogonal vectors $\{v_j\}$ are not normalized to unity, but in a special way for which $r$ directional derivatives are equal [4]. Let these vectors be stored in matrix $V$, and define the following diagonal matrix:

$$\Delta = \mathrm{Diag}(v_j^t v_j). \qquad (32)$$

Then:

$$A_n = W^t W + (I - \Pi)^2, \qquad W = (G^t G)^{-1} G^t V \Delta^{-1} V^t, \qquad (33)$$

where $\Pi = V \Delta^{-1} V^t$ is the projection matrix onto the subspace $\mathcal{G}$ [4].

In this way, the QP formulation is well-conditioned and easily solved by a library procedure. We have used the procedure qpsolve from the Scilab library, which is equivalent to the quadprog procedure from the MATLAB library. As a result, exactly $r$ directional derivatives $\{f'_j\}$ are equal, and by experience, the remaining ones differ only slightly.

5 Results

For each control parameter, component of $x = \{A_1, A_2, A_3, \varphi_1, \varphi_2, \varphi_3\}$, the SEM is applied to obtain sensitivity fields. We provide as illustration (see Figs. 9-10) instantaneous sensitivity fields of the velocity w.r.t. the amplitude of the first jet. The derivatives of the drag w.r.t. the control parameters are then computed at all time-steps, yielding 800 values of the cost function and its gradient over the whole observation period, as illustrated in Fig. 11.

Fig. 9 Sensitivity of streamwise velocity w.r.t. first jet amplitude.

Fig. 11 Gradient components for the 800 cost functionals (grad1-3: amplitudes of jets 1-3; grad4-6: phases of jets 1-3).

We now aim at determining the vector of parameters $x = \{A_1, A_2, A_3, \varphi_1, \varphi_2, \varphi_3\}$ that reduces simultaneously the 800 cost functions associated with the observation period. To somewhat reduce the computational complexity without greatly altering the transient behavior, the MGDA approach is applied to only $m = 20$ homogenized gradients, obtained by averaging the gradients over time-intervals of 40 time-steps. As a result, one obtains a common descent direction associated with a vector $d$ satisfying (17).
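The homogenization step is a plain block average of the per-time-step gradients; with placeholder data (random here, where the paper uses the six SEM solves), it can be sketched as:

```python
import numpy as np

# Placeholder: one 6-component drag gradient per time-step of the 800-step period
rng = np.random.default_rng(1)
grads = rng.standard_normal((800, 6))

# Homogenize by averaging over blocks of 40 consecutive time-steps:
# 800 gradients -> m = 20 averaged gradients handed to MGDA
homog = grads.reshape(20, 40, 6).mean(axis=1)
assert homog.shape == (20, 6)
assert np.allclose(homog[0], grads[:40].mean(axis=0))
```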

Once the vector $d$ is determined, a practical step-size $\rho$ must be estimated. For this, we first note that a natural scale for the variations of a time-dependent objective function is given by its standard deviation $\bar{\sigma}$, a more significant value than its average, which can be $0$. Then, if $\delta x = -\bar{\rho}\, d$, the variation of the objective-function average can be estimated as $-\bar{\rho}\, \bar{g}\cdot d$, where $\bar{g}$ is the average gradient. Thus, a meaningful reference step-size can be defined by the condition:

$$\bar{\rho}\, \bar{g}\cdot d = \bar{\sigma}, \qquad (34)$$

where $\bar{\sigma} = \sqrt{\frac{1}{m}\sum_{j=1}^m \left(f_j - \bar{f}\right)^2}$, $\bar{f} = \frac{1}{m}\sum_{j=1}^m f_j$, and $\bar{g} = \frac{1}{m}\sum_{j=1}^m \nabla f_j(x_0)$. This gives:

$$\bar{\rho} = \frac{\bar{\sigma}}{\bar{g}\cdot d}. \qquad (35)$$
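The reference step-size (34)-(35) involves only elementary statistics of the sampled cost values and gradients; a small sketch with illustrative numbers (not the paper's data):

```python
import numpy as np

def reference_step(f, grads, d):
    """rho_bar = sigma_bar / (g_bar . d), Eqs. (34)-(35);
    f: (m,) cost values, grads: (m, n) gradients, d: (n,) descent direction."""
    sigma_bar = np.sqrt(np.mean((f - f.mean())**2))  # population std deviation
    g_bar = grads.mean(axis=0)                       # average gradient
    return sigma_bar / (g_bar @ d)

# Illustrative sampled drags, gradients and MGDA direction (m = 4, n = 2)
f = np.array([0.30, 0.31, 0.305, 0.32])
grads = np.array([[0.010, -0.020], [0.012, -0.015],
                  [0.008, -0.018], [0.011, -0.020]])
d = np.array([0.010, -0.018])

rho_bar = reference_step(f, grads, d)
rho = rho_bar / 10.0   # the paper then uses one tenth of the reference step
assert rho_bar > 0.0
```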

In practice, in the present experiments, we have used the step-size $\rho = \frac{1}{10}\bar{\rho}$ to update the control parameters by the descent method, and this resulted in a successful iteration, stable and effective.

The history of the drag over the observation period is represented in Fig. 12, from the baseline flow to full convergence of the optimization approach. As expected, each update of the design vector has resulted in a diminished drag over the entire observation period. As can be noticed, some points in time are more critical than others. In contrast, when one applies, more classically, the steepest-descent method to the time-averaged cost function $\mathcal{J}$, by setting the search direction to the average gradient, an increase of the drag can be observed at some times, as illustrated in Fig. 13. Finally, also note that actuation permits a significant drag reduction w.r.t. the case without suction/blowing, for which the value of the drag is indicated on the figure by a dotted horizontal line.

Finally, a second exercise is conducted: the drag values computed over the last 40% of the observation period only are considered as optimization criteria in the MGDA approach (in this interval, the drag is especially high for the baseline flow). The history of the drag in the observation period is represented in Fig. 14, for the first 16 iterations of the optimization algorithm. As expected, a more significant decrease is achieved during the last 40% of the observation period, whereas the drag is free to vary in the first 60%, and in fact, increases.

Fig. 12 Evolution of the drag history w.r.t. optimization iterations: case of the MGDA approach over the full interval (blue: initial, red: final, black: intermediate iterations; the drag level without blowing-suction is also indicated).


Fig. 13 Evolution of the drag history w.r.t. optimization iterations: case of a mean-direction descent (blue: initial, red: after 2 iterations, black: intermediate iteration).

Fig. 14 Evolution of the drag history w.r.t. optimization iterations: case of the MGDA approach based on the last 40% of the observation period (blue: initial, red: after 16 iterations, black: intermediate iterations; the drag level without blowing-suction is also indicated).


6 Conclusion

In this work, we solved, for demonstration, an exercise of active-flow control in which the drag over a flat plate has been reduced by three pulsating jets acting on the boundary layer. The flow, governed by the time-dependent compressible Navier-Stokes equations, has been simulated numerically by a finite-volume method of second order in time and space, yielding, once the periodic regime is achieved, the drag as a function of time. The simultaneous solution of the sensitivity equations has additionally provided the six-component gradient of the drag w.r.t. the design characteristics of the jets. The accuracy of these gradients has been verified by comparison with finite differences via fine-mesh computations.

By the Multiple-Gradient Descent Algorithm (MGDA), a direction of search has been identified that permits the reduction of the drag at all times of the period, or over a selected segment of it. The process was repeated iteratively, and at all intermediate steps of the optimization process, as well as at convergence, the drag was effectively reduced over the entire observation time-interval.

Hence, we have at hand a numerical optimization tool whose efficacy is demonstrated uniformly, that is, over a possibly large range of operational conditions, here the different discretization times. This contrasts with more classical approaches in which a single functional, usually defined as a somewhat arbitrary weighted average, is minimized at the risk of degrading certain elements composing the average.

This method is currently being extended to solve more general robust design problems.

References

1. J.-A. Désidéri. Multiple-gradient descent algorithm (MGDA). Research Report 6953, INRIA, 2009 (revised version November 5, 2012). http://hal.inria.fr/inria-00389811/fr/.

2. J.-A. Désidéri. Multiple-gradient descent algorithm (MGDA) for multiobjective optimization. Comptes Rendus de l'Académie des Sciences Paris, 350:313-318, 2012.

3. J.-A. Désidéri. Multiple-Gradient Descent Algorithm (MGDA) for Pareto-Front Identification. In Numerical Methods for Differential Equations, Optimization, and Technological Problems, volume 34 of Modeling, Simulation and Optimization for Science and Technology, W. Fitzgibbon, Y.A. Kuznetsov, P. Neittaanmäki, O. Pironneau, Eds. Springer-Verlag, 2014. J. Périaux and R. Glowinski Jubilees.

4. J.-A. Désidéri. Révision de l'algorithme de descente à gradients multiples (MGDA) par orthogonalisation hiérarchique. Research Report 8710, INRIA, April 2015. https://hal.inria.fr/hal-01139994.

5. R. Duvigneau. A sensitivity equation method for unsteady compressible flows: implementation and verification. Research Report 8739, INRIA, June 2015.

6. R. Duvigneau, A. Hay, and M. Visonneau. Optimal location of a synthetic jet on an airfoil for stall control. Journal of Fluids Engineering, 129(7):825-833, July 2007.

7. R. Duvigneau, J. Labroquère, and E. Guilmineau. Comparison of turbulence closures for optimized active control. Computers & Fluids, 124:67-77, January 2016.

8. R. Duvigneau and M. Visonneau. Optimization of a synthetic jet actuator for aerodynamic stall control. Computers & Fluids, 35:624-638, July 2006.

9. M. Gad-el-Hak, A. Pollard, and J.-P. Bonnet. Flow Control: Fundamentals and Practices. Springer-Verlag, 1998.
