
HAL Id: hal-02886685

https://hal.archives-ouvertes.fr/hal-02886685

Submitted on 1 Jul 2020



To cite this version: Matteo Della Rossa, Aneel Tanwani, Luca Zaccarian. Max-Min Lyapunov Functions for Switched Systems and Related Differential Inclusions. Automatica, Elsevier, 2020, 120, pp. 109123. doi:10.1016/j.automatica.2020.109123. hal-02886685.

Max-Min Lyapunov Functions for Switched Systems and Related Differential Inclusions

Matteo Della Rossa, Aneel Tanwani, Luca Zaccarian

Abstract

Starting from a finite family of continuously differentiable positive definite functions, we study conditions under which a function obtained by max-min combinations is a Lyapunov function, establishing stability for two kinds of nonlinear dynamical systems: a) Differential inclusions where the set-valued right-hand-side comprises the convex hull of a finite number of vector fields, and b) Autonomous switched systems with a state-dependent switching signal. We investigate generalized notions of directional derivatives for these max-min functions, and use them in deriving stability conditions with various degrees of conservatism, where more conservative conditions are numerically more tractable. The proposed constructions also provide nonconvex Lyapunov functions, which are shown to be useful for systems with state-dependent switching that do not admit a convex Lyapunov function. Several examples are included to illustrate the results.

1 Introduction

Lyapunov functions play an instrumental role in the stability analysis of dynamical systems. The textbook [25] and the research monographs [4] and [29] provide an overview of the developments in this field. When considering dynamical systems resulting from switching among a finite number of dynamical subsystems described by ordinary differential equations (ODEs) of the form $\dot x = f_i(x)$, $f_i(0) = 0$, with $f_i : \mathbb{R}^n \to \mathbb{R}^n$ locally Lipschitz continuous and $i \in \{1, 2, \dots, M\}$, different constructions of Lyapunov functions are proposed in the literature to analyze stability of the common equilibrium point: the origin. Overviews of such methods and related references can be found in [26], [38], and [28].

When the evolution of state trajectories results from arbitrary switching among the individual subsystems, the stability analysis problem is equivalently addressed by considering the differential inclusion (DI), described by
\[
\dot x \in \mathrm{co}\{ f_i(x) \mid i \in \{1, \dots, M\} \}, \qquad (1)
\]
where $\mathrm{co}\{S\}$ denotes the convex hull of the set $S$. For the linear differential inclusion (LDI) case (that is, $f_i(x) = A_i x$ for some $A_i \in \mathbb{R}^{n \times n}$), it is shown in [13], [31] that asymptotic stability is equivalent to the existence of a common Lyapunov function that is convex, homogeneous of degree 2, and $C^1(\mathbb{R}^n, \mathbb{R})$. By addressing a similar question, the paper [30] establishes the existence of a common homogeneous polynomial Lyapunov function for asymptotically stable LDIs. Various parameterizations can approximate such homogeneous convex functions, such as maxima of quadratic functions and their convex conjugates [17], [19], which are shown to be universal in [20]. Constructions involving functions with convex polyhedral level sets are proposed in [31] and in [7]. These functions are mostly locally Lipschitz but not continuously differentiable, therefore the notion of set-valued derivatives, studied in [10, Chapter 2], [3], is important. Results analyzing nonsmooth Lyapunov functions using such notions of derivatives appear in [9], [39], [11, Chapter 4]. For general differential inclusions, converse Lyapunov theorems are proved in [40].

M. Della Rossa, A. Tanwani and L. Zaccarian are with LAAS-CNRS, University of Toulouse (31400), France. L. Zaccarian is also with Dipartimento di Ingegneria Industriale, University of Trento, Italy. Corresponding author: mdellaro@laas.fr. This work was supported by the ANR project ConVan with grant number ANR-17-CE40-0019-01.

Other techniques for constructing common Lyapunov functions come from imposing strong structural assumptions, such as commuting vector fields [32], or triangular structure or solvability/nilpotency of the Lie algebra generated by $\{f_i\}_{i=1}^{M}$ [27]. Without any structural conditions, the Lyapunov functions for system (1) are, in general, not finitely constructible.

On the other hand, for discrete-time systems with arbitrary switching, some constructions involving combinatorial methods have recently appeared. Path-complete Lyapunov functions are proposed in [1] to approximate the joint spectral radius, and it is shown in [2] that this class of functions can be written more explicitly in the form of maxima and minima over a set of smooth functions. Our conference paper [15] uses the construction based on max-min of smooth functions to study stability of the continuous-time system (1) using Clarke's derivative, and this article provides new results in this direction.

In contrast to studying stability uniformly over all possible switching signals as in (1), it is also of interest to study dynamical systems driven by a given switching function $\sigma : \mathbb{R}^n \to \{1, \dots, M\}$, resulting in
\[
\dot x = f_{\sigma(x)}(x), \qquad (2)
\]
so that the solution set for system (2) is a strict subset of the solution set of system (1). As already mentioned, the existence of a convex Lyapunov function is necessary for asymptotic stability of LDIs. However, it is possible that system (2) is asymptotically stable with $\sigma$ fixed, but does not admit a convex Lyapunov function [8]. It is possible to provide sufficient conditions for a minimum of quadratics (clearly nonconvex) to be a Lyapunov function in this context, see [21] and [41]. In general, constructions involving piecewise quadratic functions have been found quite useful [23], and LMI-based formulations have been proposed to compute such functions [14], [36]. Beyond piecewise quadratics, sum-of-squares techniques have been used for polynomial Lyapunov functions [35].

In this article, the problem of interest is to construct a Lyapunov function for systems (1) and (2) which guarantees asymptotic stability of the origin $\{0\} \subset \mathbb{R}^n$. We consider the Lyapunov functions obtained by taking the maximum, minimum, or their combination over a finite family of continuously differentiable positive definite functions, see Definition 3 for details. Such max-min type Lyapunov functions were recently proposed in the context of discrete-time switching systems [1], [2]. For the continuous-time case treated in this paper, studying this class of functions naturally requires certain additional tools from nonsmooth and set-valued analysis, and one such fundamental tool is the generalized directional derivative. In our conference paper [15], we provide stability results based on Clarke's notion of generalized directional derivative for max-min functions. The construction of nonsmooth Lyapunov functions for system (2) using Clarke's generalized gradient is also presented in [5]. However, this notion turns out to be rather conservative, as is seen in several examples (including the one given in Section 2). To overcome this conservatism due to Clarke's generalized derivative, we work with the set-valued Lie derivative, which is formally introduced in Definition 2. Focusing on this latter notion of generalized directional derivative for the class of max-min Lyapunov functions, the major contributions of this paper are listed as follows:

• Describe max-min functions and study generalized notions of set-valued derivatives for such functions.
• Provide stability results for systems (1) and (2) using the set-valued Lie derivative.
• Obtain stability conditions using matrix inequalities for the case of linear vector fields in (1) and (2), and Lyapunov functions obtained by max-min of quadratics.

The notion of set-valued Lie derivative was introduced in [3] for locally Lipschitz regular functions. In the context of stability analysis of a differential inclusion, the set-valued Lie derivative was used in [24] to identify and remove infeasible directions from the differential inclusion. For the max-min candidate Lyapunov functions studied in this paper, which are not regular in general, we compute set-valued Lie derivatives and use them to derive stability conditions for systems (1) and (2). The resulting conditions turn out to be less conservative than the ones obtained by using Clarke's derivative in [15], which are here recovered as a corollary. When restricting our attention to the linear case $f_i(x) = A_i x$ and max-min functions obtained from quadratic forms, the resulting stability conditions are expressed as matrix inequalities (see Section 6).


It should be noted that, since we allow for the minimum operation in the construction, certain elements in our proposed class of Lyapunov functions are nonconvex. In our approach, when we construct a homogeneous of degree 2 nonconvex Lyapunov function for the LDI problem, a convexification of such functions also provides a Lyapunov function [19, Proposition 2.2]. In fact, the sublevel sets of max-min functions approximate the convex sublevel sets of a homogeneous of degree 2 convex Lyapunov function (which is known to exist) with nonconvex sets obtained via intersections and unions of ellipsoids.

When addressing system (2), our approach provides a more general class of nonconvex and nondifferentiable Lyapunov functions obtained via max-min operations. To describe the solutions of switched systems, we adopt Filippov regularizations [16], and establish stability conditions for the resulting system. Considering such regularized differential inclusions for the switched systems also allows considering sliding motions along the switching surfaces. In this setting, our adopted notion of set-valued Lie derivative turns out to be crucial and has an interesting geometrical interpretation in terms of the tangent subspace to the switching surface.

The paper is organized as follows: In Section 2 we provide an example of a two-dimensional switched system that does not admit a convex Lyapunov function, but for which a max-min Lyapunov function can be found. In Section 3 we introduce generalized notions of derivatives for Lipschitz continuous functions, while in Section 4 the class of max-min functions is presented and we show our main stability results in the setting of differential inclusions. In Section 5 we apply our results to switched systems, written as differential inclusions using Filippov regularizations, and we study asymptotic stability along with an instructive example. In Section 6, we analyze in depth the case of linear switched systems and propose an algorithmic procedure to construct max-min Lyapunov functions, followed by some concluding remarks in Section 7.

2 A Motivating Example

We consider a switched system for which there does not exist any convex Lyapunov function. However, this system is asymptotically stable and our results will allow constructing a Lyapunov function $V$ defined as
\[
V(x) := \max\big\{ \min\{ x^\top P_1 x,\; x^\top P_2 x \},\; x^\top P_3 x \big\}, \qquad (3)
\]
for some positive definite matrices $P_i \in \mathbb{R}^{2\times 2}$, $i = 1, 2, 3$. This example was introduced in [15], where we did not include the proof of Proposition 1, given below.

Example 1. Consider a linear switched system as in (2), with three subsystems and a state-dependent switching rule $x \mapsto \sigma(x) \in \{1, 2, 3\}$, namely
\[
\dot x = A_{\sigma(x)} x \qquad (4)
\]
where
\[
(A_1, A_2, A_3) = \left( \begin{bmatrix} -0.1 & 1 \\ -5 & -0.1 \end{bmatrix},\; \begin{bmatrix} -0.1 & 5 \\ -1 & -0.1 \end{bmatrix},\; \begin{bmatrix} 1.9 & 3 \\ -3 & -2.1 \end{bmatrix} \right).
\]
To define the switching signal $\sigma$, introduce the matrices
\[
(Q_1, Q_2, Q_3) := \left( \begin{bmatrix} -(1+\sqrt2) & -\tfrac{2+\sqrt2}{2} \\ -\tfrac{2+\sqrt2}{2} & -1 \end{bmatrix},\; \begin{bmatrix} -\tfrac{1}{1+\sqrt2} & -\tfrac{\sqrt2}{2} \\ -\tfrac{\sqrt2}{2} & -1 \end{bmatrix},\; \begin{bmatrix} 1 & \sqrt2 \\ \sqrt2 & 1 \end{bmatrix} \right)
\]
and the switching signal
\[
\sigma(x) := \begin{cases} 1, & \text{if } x \in S_1 := \{ x^\top Q_1 x > 0 \} \cup S_{13}, \\ 2, & \text{if } x \in S_2 := \{ x^\top Q_2 x > 0 \} \cup S_{21}, \\ 3, & \text{if } x \in S_3 := \{ x^\top Q_3 x > 0 \} \cup S_{32}, \end{cases} \qquad (5)
\]
where the subspaces $S_{ij}$, $i \neq j$, are defined as $S_{ij} := \{ x \in \mathbb{R}^2 \mid x^\top Q_i x = x^\top Q_j x \}$, namely
\[
S_{13} := \{ x \in \mathbb{R}^2 \mid x_2 = -(1+\sqrt2)\, x_1 \}, \quad S_{21} := \{ x \in \mathbb{R}^2 \mid x_2 = -x_1 \}, \quad S_{32} := \{ x \in \mathbb{R}^2 \mid x_2 = -\tfrac{1}{1+\sqrt2}\, x_1 \}.
\]


Figure 1: The solid blue line shows a trajectory of system (4) starting at $z_0$ and moving in the clockwise direction. The red dashed line indicates a level set of the max-min Lyapunov function (3). The solid black line indicates the set $R_0$ used in the analysis.

We note that in (5), we have $S_1 \cup S_2 \cup S_3 = \mathbb{R}^2$ and that the only point of intersection among the three sets is the origin.

Proposition 1. There does not exist a convex Lyapunov function for system (4).

Proof. Given a set $R_0 \subset \mathbb{R}^2$ and a time $T > 0$, let $C(T; R_0)$ be the set of reachable points of solutions of system (4) after time $T$, starting in $R_0$, that is,
\[
C(T; R_0) := \{ x(t) \in \mathbb{R}^n \mid x \text{ solves (4)},\; x(0) \in R_0,\; t \geq T \}.
\]
Following [8, Lemma 2.1], if we show that there exist a compact set $R_0 \neq \{0\}$ and a $T > 0$ such that $R_0 \subset \mathrm{co}\{C(T; R_0)\}$ (where $\mathrm{co}\{S\}$ is the convex hull of $S$), then the system does not admit a convex Lyapunov function. Toward this end, we choose $z_0 := [-1 \;\; 1]^\top \in S_{21}$, and the compact set $R_0 := \{ \alpha z_0 : \alpha \in [0, 1] \} \subset S_{21}$, i.e. the line segment connecting $0$ and $z_0$. We compute

\[
e^{A_1 t} = e^{-\frac{t}{10}} \begin{bmatrix} \cos(\sqrt5\, t) & \tfrac{\sqrt5}{5}\sin(\sqrt5\, t) \\ -\sqrt5\,\sin(\sqrt5\, t) & \cos(\sqrt5\, t) \end{bmatrix}, \qquad
e^{A_2 t} = e^{-\frac{t}{10}} \begin{bmatrix} \cos(\sqrt5\, t) & \sqrt5\,\sin(\sqrt5\, t) \\ -\tfrac{\sqrt5}{5}\sin(\sqrt5\, t) & \cos(\sqrt5\, t) \end{bmatrix},
\]
\[
e^{A_3 t} = e^{-\frac{t}{10}} \begin{bmatrix} \cos(\sqrt5\, t) + \tfrac{2}{\sqrt5}\sin(\sqrt5\, t) & \tfrac{3}{\sqrt5}\sin(\sqrt5\, t) \\ -\tfrac{3}{\sqrt5}\sin(\sqrt5\, t) & \cos(\sqrt5\, t) - \tfrac{2}{\sqrt5}\sin(\sqrt5\, t) \end{bmatrix},
\]

which allows us to write analytically the solution of the system starting from any given initial condition. We let $t_1 > 0$ be the smallest time such that $z_1 := e^{A_1 t_1} z_0 \in S_{13}$, and $t_2 > t_1$ be the smallest time such that $z_2 := e^{A_3(t_2 - t_1)} z_1 \in S_{32}$. We finally choose $t_3 > t_2$ as the smallest time such that $z_3 := e^{A_2(t_3 - t_2)} z_2 \in S_{21}$. It turns out that $|z_3| = 1.2671$. Thus, the half turn, starting with $z_0 \in S_{21}$ and reaching $z_3 \in S_{21}$, decreases the norm of the state by a factor of $\beta := \frac{|z_3|}{|z_0|} = 0.8961$. Due to the central symmetry of the dynamics (that is, if $x$ is a solution, then $-x$ is also a solution), the solution will reach the set $R_0$ at the point $\beta^2 [-1 \;\; 1]^\top$ at time $\tilde t_3 = 2 t_3$. Hence, the set $R_1 := \{ \alpha z_0 : \alpha \in [0, \beta^2] \}$ is (strictly) contained in the set $C(t_1; R_0)$. To show that $R_0 \subset \mathrm{co}\{C(t_1; R_0)\}$, it thus remains to check that
\[
\{ \alpha z_0 : \alpha \in [\beta^2, 1] \} \subset \mathrm{co}\{ C(t_1; R_0) \}. \qquad (6)
\]
Property (6) is graphically illustrated in Figure 1 and is proven by the fact that the points $a = [\sqrt2 - 2 \;\; \sqrt2]^\top \in S_{13}$ and $b = [-\sqrt2 \;\; 2 - \sqrt2]^\top \in S_{32}$ satisfy $|a| < |z_1|$ and $|b| < |e^{A_3(t_2 - t_1)} e^{A_1 t_1} z_3|$, and thus $a, b \in C(t_1; R_0)$, and $z_0 = \tfrac12 a + \tfrac12 b \in \mathrm{co}\{ C(t_1; R_0) \}$. Having already shown that $0 \in \mathrm{co}\{ C(t_1; R_0) \}$, property (6) indeed holds.

In Section 5, we will study conditions that lead to the construction of a Lyapunov function for state-dependent switched systems. In particular, for the aforementioned example, we will find matrices $P_i > 0$, $i \in \{1, 2, 3\}$, such that $V$ in (3) is indeed a Lyapunov function (see Section 6.2).


3 Generalized Gradients and Directional Derivatives

The function in (3) is nonsmooth and requires generalized notions of gradient and directional derivatives, recalled here from [10, Chapter 2], [9].

Let $F : \mathbb{R}^n \rightrightarrows \mathbb{R}^n$ be an upper semicontinuous¹ map with nonempty, compact, convex values, and consider the differential inclusion (resembling dynamics (1) and (2)),
\[
\dot x \in F(x), \qquad x(0) = x_0 \in \mathbb{R}^n. \qquad (7)
\]

We recall that a solution of (7) on an interval $[0, T) \subset \mathbb{R}$ is a function $x : [0, T) \to \mathbb{R}^n$ such that $x(\cdot)$ is absolutely continuous, $x(0) = x_0$, and $\dot x(t) \in F(x(t))$ for almost all $t \in [0, T)$. In the case $T = +\infty$ the solution is said to be complete. The origin of (7) is asymptotically stable (AS) if it is Lyapunov stable (for each $\varepsilon > 0$ there exists $\delta(\varepsilon) > 0$ such that all solutions satisfy $|x(0)| < \delta(\varepsilon) \Rightarrow |x(t)| < \varepsilon$, $\forall t > 0$) and attractive (there exists $M > 0$ such that solutions satisfying $|x(0)| < M$ also satisfy $\lim_{t\to\infty} |x(t)| = 0$). If attractivity is global (it holds for every $M > 0$), then we say that the origin is globally asymptotically stable (GAS). We are only concerned with stability of the origin in this article, and use the statement that a system is (G)AS to refer to the stability of the origin for the corresponding system. Given an open and connected set $D \subset \mathbb{R}^n$ such that $0 \in D$, we say that a locally Lipschitz function $V : D \to \mathbb{R}$ is a Lyapunov function for (7) if there exist class $\mathcal{K}$ functions² $\underline\chi, \overline\chi$ and a positive definite $\gamma \in \mathcal{PD}$ such that
\[
\underline\chi(|x|) \leq V(x) \leq \overline\chi(|x|), \qquad \forall x \in D,
\]
and there exists a $\delta > 0$ such that, given any solution $x : [0, T) \to D$ of (7) with $|x(0)| < \delta$, we have
\[
\frac{d}{dt} V(x(t)) \leq -\gamma(|x(t)|), \qquad \text{for almost every } t \in [0, T).
\]
The existence of a Lyapunov function implies the asymptotic stability of system (7). If, moreover, $D = \mathbb{R}^n$, $\underline\chi, \overline\chi \in \mathcal{K}_\infty$, and $\delta$ can be arbitrarily large, the existence of such a $V$ implies global asymptotic stability of (7), see [25, Chapter 4].

Given an open set $D \subset \mathbb{R}^n$ and a locally Lipschitz function $V : D \to \mathbb{R}$, we first consider Clarke's generalized gradient $x \mapsto \partial V(x)$ [10, Chapter 2], which, due to the equivalence in [10, Theorem 2.5.1, page 63], can be defined as
\[
\partial V(x) := \mathrm{co}\Big\{ v \in \mathbb{R}^n \;\Big|\; \exists\, x_k \to x,\; x_k \notin N_V, \text{ s.t. } v = \lim_{k\to\infty} \nabla V(x_k) \Big\}, \qquad (8)
\]
where $N_V \subset \mathbb{R}^n$ is the set of measure zero where $\nabla V$ is not defined. [10, Theorem 2.5.1, page 63] proves the existence of at least one sequence $x_k$ as considered in (8), namely $\partial V(x) \neq \emptyset$, for all $x \in D$. Moreover, the following property of locally Lipschitz functions will be used in what follows.

Definition 1. Given an open set $D \subset \mathbb{R}^n$, a locally Lipschitz function $V : D \to \mathbb{R}$ is regular at $x \in D$ if, for every $v \in \mathbb{R}^n$, the directional derivative $V'(x; v) := \lim_{h \to 0^+} \frac{V(x + hv) - V(x)}{h}$ exists and the equality
\[
V'(x; v) = \max\{ w^\top v \mid w \in \partial V(x) \}, \qquad \forall v \in \mathbb{R}^n, \qquad (9)
\]
holds. $V$ is called regular if it is regular at each $x \in D$.

Definition 1 is in fact a characterization of regularity for locally Lipschitz functions, which follows from [10, Proposition 2.1.2]. For an alternative definition we refer to [10, Definition 2.3.4]. The right-hand side of (9)

is also called Clarke's generalized directional derivative of $V$ at $x$ along $v$ (denoted by $V^\circ(x; v)$ and defined in [10, Section 2.1]). The results of this paper could be equivalently stated by referring to Clarke's generalized directional derivative instead of the Clarke generalized gradient in (8), but we believe that the gradient is a more familiar concept in the control community.

¹ A set-valued map $F : \mathbb{R}^n \rightrightarrows \mathbb{R}^n$ is said to be upper semicontinuous at $x$ if, for every $\varepsilon > 0$, there exists a $\delta > 0$ such that if $y \in B(x, \delta)$ then $F(y) \subset F(x) + B(0, \varepsilon)$. It is said to be upper semicontinuous if it is upper semicontinuous at every $x \in \mathbb{R}^n$. For comparisons with a related notion of outer semicontinuity, see [18, Lemma 5.15].

² A function $\alpha : \mathbb{R}_{\geq 0} \to \mathbb{R}$ is positive definite ($\alpha \in \mathcal{PD}$) if it is continuous, $\alpha(0) = 0$, and $\alpha(s) > 0$ if $s \neq 0$. A function $\alpha : \mathbb{R}_{\geq 0} \to \mathbb{R}_{\geq 0}$ is class $\mathcal{K}$ ($\alpha \in \mathcal{K}$) if it is continuous, $\alpha(0) = 0$, and strictly increasing. It is said to be class $\mathcal{K}_\infty$ if it is class $\mathcal{K}$ and unbounded.

We now introduce two different notions of the generalized directional derivative with respect to differential inclusions (7). We will show that the first one, the more “natural” one, leads to more conservative stability results than the second one. In particular the second one is needed for proving GAS of the motivating example introduced in Section 2.

Definition 2 ([3, 12]). Consider the differential inclusion (7), an open set $D \subset \mathbb{R}^n$, and a locally Lipschitz function $V : D \to \mathbb{R}$. Given $x \in D$, the Clarke generalized derivative of $V$ with respect to $F$ is defined as
\[
\dot V_F(x) := \{ p^\top f \mid p \in \partial V(x),\; f \in F(x) \}. \qquad (10)
\]
Additionally, we define the set-valued Lie derivative of $V$ with respect to $F$ as
\[
\dot{\overline V}_F(x) := \{ a \in \mathbb{R} \mid \exists f \in F(x) :\; p^\top f = a,\; \forall p \in \partial V(x) \}. \qquad (11)
\]

In the case where $V$ is continuously differentiable at $x$, one has $\partial V(x) = \{\nabla V(x)\}$ and $\dot{\overline V}_F(x) = \dot V_F(x) = \{ \nabla V(x)^\top f \mid f \in F(x) \}$. Moreover, it is clear that
\[
\dot{\overline V}_F(x) \subset \dot V_F(x). \qquad (12)
\]
In fact, given $a \in \dot{\overline V}_F(x)$, there exists $f \in F(x)$ such that $a = p^\top f$ for all $p \in \partial V(x)$, and thus in particular $a \in \dot V_F(x)$. Intuitively, this means that when defining $\dot{\overline V}_F(x)$ we do not consider every possible scalar product between vectors of $\partial V(x)$ and $F(x)$; rather, we only consider directions $f \in F(x)$ that are "meaningful" in the sense of possible flowing directions of solutions. Recalling that the Euclidean scalar product $\langle \cdot, \cdot \rangle$ is bilinear in its arguments and, for each $v \in \mathbb{R}^n$, $\langle v, \cdot \rangle$ is continuous, it can be shown that, for each fixed $x \in D$, $\dot{\overline V}_F(x)$ and $\dot V_F(x)$ are compact intervals, possibly empty. Concluding this section, we illustrate the differences between these notions of set-valued derivatives in the following example.

Example 2. Consider the function $V : \mathbb{R} \to \mathbb{R}$ defined as $V(x) = |x|$, which is Lipschitz continuous and differentiable everywhere except at $0$. From (8), Clarke's generalized gradient at $0$ is $\partial V(0) = [-1, 1]$. Now let us suppose that a set-valued map $F : \mathbb{R} \rightrightarrows \mathbb{R}$ is given such that $F(0) := [f_1, f_2] \subset \mathbb{R}$. Using (10), we compute
\[
\dot V_F(0) = \{ p f \mid p \in [-1, 1],\; f \in [f_1, f_2] \} = \big[ -\max\{|f_1|, |f_2|\},\; \max\{|f_1|, |f_2|\} \big].
\]
On the other hand, using (11) and noting that $p_1 f = p_2 f$ for each $p_1, p_2 \in [-1, 1]$ if and only if $f = 0$, we get
\[
\dot{\overline V}_F(0) = \begin{cases} \{0\} & \text{if } 0 \in [f_1, f_2], \\ \emptyset & \text{if } 0 \notin [f_1, f_2]. \end{cases}
\]
It is easily verified that $\dot{\overline V}_F(0)$ is a subset of $\dot V_F(0)$.
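The two sets can be rendered in a few lines of Python (a toy illustration; the helper names are ours, not from the paper):

```python
# Example 2 in code: Clarke derivative (10) vs. set-valued Lie derivative (11) of V(x)=|x| at 0.
def clarke_derivative_at_zero(f1, f2):
    m = max(abs(f1), abs(f2))
    return (-m, m)                          # interval of all products p*f, p in [-1,1], f in [f1,f2]

def lie_derivative_at_zero(f1, f2):
    # p1*f = p2*f for every p1, p2 in [-1,1] forces f = 0, so the set is {0} or empty
    return {0.0} if f1 <= 0.0 <= f2 else set()

print(clarke_derivative_at_zero(-1.0, 2.0), lie_derivative_at_zero(-1.0, 2.0))  # (-2, 2) and {0}
print(clarke_derivative_at_zero(1.0, 2.0), lie_derivative_at_zero(1.0, 2.0))    # (-2, 2) and set()
```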

4 Stability Using Max-Min Functions

In this section, we use the generalized derivatives to study a particular class of locally Lipschitz Lyapunov functions, establishing sufficient stability conditions for system (1).


4.1 Max-Min Functions

The following definition was introduced in [2] in the context of path-complete Lyapunov functions for discrete-time switching systems.

Definition 3. Consider an open and connected set $D \subset \mathbb{R}^n$. Given $K$ base functions $V_1, \dots, V_K \in C^1(D, \mathbb{R})$, a max-min function $V_{\mathrm{Mm}} : D \to \mathbb{R}$ is either defined as
\[
V_{\mathrm{Mm}}(x) := \max_{j \in \{1, \dots, J\}} \Big\{ \min_{k \in S_j} \{ V_k(x) \} \Big\}, \qquad (13a)
\]
for some $J \geq 1$ and nonempty sets $S_1, \dots, S_J \subset \{1, \dots, K\}$, or
\[
V_{\mathrm{Mm}}(x) = \min_{j \in \{1, \dots, J^\star\}} \Big\{ \max_{k \in S^\star_j} \{ V_k(x) \} \Big\}, \qquad (13b)
\]
for some $J^\star \geq 1$ and nonempty sets $S^\star_1, \dots, S^\star_{J^\star} \subset \{1, \dots, K\}$.

The following proposition states the equivalence between (13a) and (13b), which is obtained by applying the distributive property of the max and min operators. For a formal proof we refer to [33] and references therein. In the sequel, all our derivations apply to both equivalent expressions (13a) and (13b), but for definiteness we use the notation adopted in (13a).

Proposition 2. Given $J \geq 1$ (resp. $J^\star \geq 1$), and $S_1, \dots, S_J$ (resp. $S^\star_1, \dots, S^\star_{J^\star}$) nonempty subsets of $\{1, \dots, K\}$, there exist $J^\star \geq 1$ (resp. $J$) and nonempty subsets $S^\star_1, \dots, S^\star_{J^\star}$ (resp. $S_1, \dots, S_J$) of $\{1, \dots, K\}$ such that the expressions (13a) and (13b) coincide, for all $x \in D$ and for any $V_1, \dots, V_K \in C^1(D, \mathbb{R})$.

We denote by $\mathrm{Mm}(V_1, \dots, V_K)$ the set of all the possible max-min functions obtained from $K$ base functions $V_1, \dots, V_K$. Given $V \in \mathrm{Mm}(V_1, \dots, V_K)$, it is noted that at each point $x \in D$ where a strict ordering holds between the values of the base functions, that is, $V_{\ell_1}(x) < V_{\ell_2}(x) < \dots < V_{\ell_K}(x)$, the function value $V(x)$ coincides with $V_{\tilde\ell}(x)$, for some $\tilde\ell \in \{1, \dots, K\}$. At points where two or more base functions are equal, the function $V$ may switch between different base functions. For every $\ell \in \{1, \dots, K\}$, we may define the set where the function $V_\ell$ is active, more precisely
\[
C_\ell := \{ x \in D \mid V(x) = V_\ell(x) \}, \qquad (14)
\]
which are closed sets by continuity of $V, V_1, \dots, V_K$. We can associate a mapping with every $V \in \mathrm{Mm}(V_1, \dots, V_K)$. This map is useful to characterize the generalized derivatives introduced in Definition 2.

Definition 4 (Essentially active index map). Given a function $V \in \mathrm{Mm}(V_1, \dots, V_K)$, the corresponding essentially active index map $\alpha_V : D \rightrightarrows \{1, \dots, K\}$ is defined as
\[
\alpha_V(x) := \{ \ell \in \{1, \dots, K\} \mid x \in \mathrm{cl}(\mathrm{int}(C_\ell)) \}, \qquad (15)
\]
where $\mathrm{cl}(C)$ and $\mathrm{int}(C)$ represent the closure and the interior of a set $C \subset \mathbb{R}^n$, respectively. Indices $\ell \in \alpha_V(x)$ are called essentially active indices of $V$ at $x$.

It will be shown in Lemma 1 that $\alpha_V(x)$ is nonempty for every $x \in D$. Here, instead, we highlight that
\[
\alpha_V(x) \subset \{ \ell \in \{1, \dots, K\} \mid V(x) = V_\ell(x) \}, \qquad \forall x \in D. \qquad (16)
\]

The set appearing on the right-hand side of inclusion (16) is called the active index set in the context of piecewise $C^1$ functions, for example in [34] and [37, Chapter 4]. To obtain the inclusion (16), consider any $\ell \in \alpha_V(x)$; then, from Definition 4 and $D$ being open, there is a sequence $x_k \to x$ such that $x_k \in \mathrm{int}(C_\ell)$, $\forall k \in \mathbb{N}$. By continuity of $V$ and $V_\ell$, we have $V(x) = \lim_{k\to\infty} V(x_k) = \lim_{k\to\infty} V_\ell(x_k) = V_\ell(x)$.

We emphasize that, in general, the inclusion in (16) is strict and equality does not necessarily hold. Moreover, given $V \in \mathrm{Mm}(V_1, \dots, V_K)$, the map $\alpha_V : \mathbb{R}^n \rightrightarrows \{1, \dots, K\}$ contains all the necessary information about the local behavior of $V$, as formalized in the following lemma.


Lemma 1. Consider $V \in \mathrm{Mm}(V_1, \dots, V_K)$. For each $x \in D$ the set $\alpha_V(x)$ is nonempty and there exists a neighborhood $U$ of $x$ such that
\[
(z \in U) \;\Rightarrow\; (\exists\, \ell_z \in \alpha_V(x) \text{ such that } V(z) = V_{\ell_z}(z)). \qquad (17)
\]
The proof of Lemma 1 is given in Section 4.3.

4.2 Gradients and Stability Conditions

The following statement draws connections between Clarke's generalized gradient $\partial V$ and the set-valued Lie derivative $\dot{\overline V}_F$ in (11) for a generic $V \in \mathrm{Mm}(V_1, \dots, V_K)$, using the mapping $\alpha_V$.

Proposition 3. Given $V \in \mathrm{Mm}(V_1, \dots, V_K)$ and $x \in D$, the following equality holds:
\[
\partial V(x) = \mathrm{co}\{ \nabla V_\ell(x) \mid \ell \in \alpha_V(x) \}. \qquad (18)
\]
In particular, given $F : \mathbb{R}^n \rightrightarrows \mathbb{R}^n$, the Lie derivative in (11) reads
\[
\dot{\overline V}_F(x) = \{ a \in \mathbb{R} \mid \exists f \in F(x) :\; a = \nabla V_\ell(x)^\top f,\; \forall \ell \in \alpha_V(x) \}. \qquad (19)
\]
Proof. Max-min functions are in particular piecewise $C^1$ functions, as defined in [37, Chapter 4]. Then, equation (18) is proved following the arguments presented in [34, Lemma 2] or in [37, Proposition 4.3.1]. Combining (18) with the definition of $\dot{\overline V}_F$ given in (11), we obtain (19).
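A small numerical companion to (18) (a sketch only: the essentially active indices are approximated by sampling around the point, and the positive definite matrices are illustrative):

```python
# Sketch of Proposition 3: at a kink of V = max{min{V1,V2}, V3}, Clarke's gradient is the
# convex hull of the gradients of the essentially active base functions (approximated by sampling).
import numpy as np

P = [np.diag([5.0, 1.0]), np.diag([1.0, 5.0]), np.array([[3.0, 2.0], [2.0, 3.0]])]
V = lambda x: max(min(x @ P[0] @ x, x @ P[1] @ x), x @ P[2] @ x)

def essentially_active(x, eps=1e-4, samples=200, tol=1e-9):
    """Indices l such that V coincides with V_l on a full-measure set near x (sampling estimate)."""
    rng, active = np.random.default_rng(0), set()
    for _ in range(samples):
        z = x + eps * rng.standard_normal(2)
        vals = [z @ Pk @ z for Pk in P]
        active.update(k for k, v in enumerate(vals) if abs(v - V(z)) < tol)
    return sorted(active)

x = np.array([1.0, -1.0])                # here V1 = V2 = 6 > V3 = 2: a genuine kink of V
idx = essentially_active(x)
print(idx, [2 * P[k] @ x for k in idx])  # expected [0, 1]; Clarke gradient = co of these gradients
```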

We now propose a sufficient condition for asymptotic stability of system (7) in terms of $\dot{\overline V}_F$ given in (19), while adopting the convention that $\max \emptyset = -\infty$.

Theorem 1. Given system (7), an open and connected set $D \subset \mathbb{R}^n$ such that $0 \in D$, and $K$ positive definite functions $V_1, \dots, V_K \in C^1(D, \mathbb{R})$, consider a max-min function $V \in \mathrm{Mm}(V_1, \dots, V_K)$ with $\dot{\overline V}_F$ given in (19). If there exists a function $\gamma \in \mathcal{PD}$ such that, for every $x \in D$,
\[
\max \dot{\overline V}_F(x) \leq -\gamma(|x|), \qquad (20)
\]
then $V$ is a Lyapunov function and system (7) is AS. If $D = \mathbb{R}^n$ and, in addition, each $V_j$, $j \in \{1, \dots, K\}$, is radially unbounded, then the origin of (7) is GAS.

A fundamental result for proving Theorem 1 appears in Lemma 2 given below. The proof of Lemma 2 with some related discussions is deferred to Section 4.4.

Lemma 2. Consider a function $V \in \mathrm{Mm}(V_1, \dots, V_K)$ and a solution $\varphi : [0, T) \to D$ of the differential inclusion (7). For $t \in [0, T)$,
\[
\frac{d}{dt} V(\varphi(t)) \text{ exists almost everywhere, and} \qquad (21a)
\]
\[
\frac{d}{dt} V(\varphi(t)) \in \dot{\overline V}_F(\varphi(t)) \text{ almost everywhere.} \qquad (21b)
\]

Remark 1 (Comparison with other approaches). In Lemma 2, we relate the Dini derivative of $V$ along the solutions of system (7) with the Lie derivative $\dot{\overline V}_F$. In [11, Chapter 4.2], we also see a relationship between the Dini derivative and the directional derivative along vector fields in the context of weak stability. In particular, it is shown there that, for every $\zeta \in \partial V(\varphi(t))$, $\inf_{\dot\varphi(t) \in F(\varphi(t))} \frac{d}{dt} V(\varphi(t)) \in \inf_{\dot\varphi(t) \in F(\varphi(t))} \zeta^\top \dot\varphi(t)$, for almost every $t \geq 0$. On the other hand, for strong stability, it would be natural to work with the relation $\sup_{\dot\varphi(t) \in F(\varphi(t))} \frac{d}{dt} V(\varphi(t)) \in \sup_{\dot\varphi(t) \in F(\varphi(t))} \zeta^\top \dot\varphi(t) \leq -\gamma(|\varphi(t)|)$, for every $\zeta \in \partial V(\varphi(t))$. However, such a relation is conservative for our purposes, as can be seen in Example 1 (see Remark 9), where the supremum on the right-hand side of the foregoing inclusion is strictly positive along certain directions in the set $F(x)$ for some $x \in \mathbb{R}^2$. The use of the Lie derivative in Lemma 2 thus provides tighter bounds on the Dini derivative.

Remark 2. Stability results involving the set-valued Lie derivative (11) and condition (20) are proved in [3, Proposition 1] for locally Lipschitz and regular (recall Definition 1) Lyapunov functions. Set-valued Lie derivatives are also used in [24] to identify and remove infeasible directions from a differential inclusion, when limiting the attention to regular locally Lipschitz functions. Showing that this condition is sufficient when considering locally Lipschitz functions obtained via a max-min composition nontrivially generalizes such results. In fact, a function $V \in \mathrm{Mm}(V_1, \dots, V_K)$ is in general not regular: recalling (18), the definition in (9) requires, for a regular function $V$, that
\[
\lim_{h \to 0^+} \frac{V(x + hv) - V(x)}{h} = \max\{ \nabla V_\ell(x)^\top v \mid \ell \in \alpha_V(x) \}, \qquad (22)
\]
for all $x \in D$ and for all $v \in \mathbb{R}^n$. However, considering for example $V(x) = \min\{V_1(x), V_2(x)\}$, we have that the left-hand side of (22) is equal to $\min\{ \nabla V_\ell(x)^\top v \mid \ell \in \alpha_V(x) \}$, and thus in general equality (9) does not hold. In this sense, Lemma 2 is a generalization of [3, Proposition 1] to a class of nonregular functions.

Recalling inclusion (12), we can state the following result specifically for (1), using the notion of Clarke's generalized derivative, which is generally more conservative than Theorem 1. This result is also reported in our preliminary conference paper [15, Theorem 1].

Corollary 1. Consider the DI (1). Given an open and connected set $D \subset \mathbb{R}^n$ such that $0 \in D$ and $K$ positive definite functions $V_1, \dots, V_K \in C^1(D, \mathbb{R})$, consider a max-min function $V \in \mathrm{Mm}(V_1, \dots, V_K)$. Suppose that there exists a function $\gamma \in \mathcal{PD}$ such that, for all $x \in D$,
\[
\nabla V_\ell(x)^\top f_i(x) \leq -\gamma(|x|), \qquad \forall\, \ell \in \alpha_V(x), \qquad (23)
\]
for all $i \in \{1, \dots, M\}$. Then the origin of (1) is AS and $V$ is a Lyapunov function for system (1). If $D = \mathbb{R}^n$ and, in addition, each $V_j$, $j \in \{1, \dots, K\}$, is radially unbounded, then the origin of (1) is GAS.

Proof. Consider a point $x \in D$, and suppose that $\alpha_V(x) = \{\ell_1, \dots, \ell_p\}$. Recalling Proposition 3, for each $v \in \partial V(x)$, there exist $\lambda_1, \dots, \lambda_p \geq 0$, $\sum_{j=1}^p \lambda_j = 1$, such that $v = \sum_{j=1}^p \lambda_j \nabla V_{\ell_j}(x)$. Consequently, for each $i \in \{1, \dots, M\}$, (23) yields
\[
v^\top f_i(x) = \sum_{j=1}^p \lambda_j \nabla V_{\ell_j}(x)^\top f_i(x) \leq -\sum_{j=1}^p \lambda_j \gamma(|x|) = -\gamma(|x|),
\]
which implies that $v^\top f \leq -\gamma(|x|)$ for each $v \in \partial V(x)$ and every $f \in \mathrm{co}\{ f_i(x) \mid i \in \{1, \dots, M\} \}$. Recalling (12), inequality (20) holds and the result follows from Theorem 1.

While Theorem 1 holds for a general differential inclusion (7), the statement of Corollary 1 is specifically tailored for system (1).

4.3 Proof of Lemma 1

Proof. Case 1: Consider first the case where $V(x) = V_\ell(x)$ for some $\ell \in \{1, \dots, K\}$ and $V(x) \neq V_j(x)$ for all $j \neq \ell$. By continuity of $V, V_1, \dots, V_K$ there exists a neighborhood $U \subset D$ of $x$ where the non-equality relations are preserved and thus $U \subset \mathrm{int}(C_\ell)$, which implies $x \in \mathrm{int}(C_\ell)$ and $\alpha_V(x) = \{\ell\}$, in addition to (17) with $\ell_z \equiv \ell$.

Case 2: Let us now consider the general case $V(x) = V_{\ell_1}(x) = \dots = V_{\ell_p}(x)$ and $V(x) \neq V_j(x)$ if $j \notin \{\ell_1, \dots, \ell_p\}$, for some $\ell_1, \dots, \ell_p \in \{1, \dots, K\}$. By continuity of $V, V_1, \dots, V_K$ there exists a neighborhood $U_0 \subset D$ of $x$ such that the non-equality relations $V(z) \neq V_j(z)$, $\forall j \notin \{\ell_1, \dots, \ell_p\}$, are preserved for any $z \in U_0$. Recalling (16), $\alpha_V(x) \subset \{\ell_1, \dots, \ell_p\}$; when $\alpha_V(x) = \{\ell_1, \dots, \ell_p\}$ we are done, by proceeding exactly as in Case 1. Otherwise, when $\alpha_V(x) \neq \{\ell_1, \dots, \ell_p\}$, consider without loss of generality that $\ell_1 \notin \alpha_V(x)$. By Definition 4, $\ell_1 \notin \alpha_V(x)$ implies $x \notin \mathrm{cl}(\mathrm{int}(C_{\ell_1}))$, therefore there exists an open neighborhood $U_1$ of $x$ such that
\[
U_1 \cap \mathrm{int}(C_{\ell_1}) = \emptyset. \qquad (24)
\]

Consider now, if any, each point $\bar x \in U_0$ such that $V(\bar x) = V_{\ell_1}(\bar x)$ and $V(\bar x) \neq V_\ell(\bar x)$ for all $\ell \in \{\ell_2, \dots, \ell_p\}$; it again follows from continuity that $V(z) \neq V_\ell(z)$ for every $\ell \in \{\ell_2, \dots, \ell_p\}$ and every $z$ in some neighborhood $\mathcal{V} \subset U_0$ of $\bar x$. Moreover we have, by our choice of $U_0$, $V(z) \neq V_j(z)$ if $j \notin \{\ell_1, \dots, \ell_p\}$ for every $z \in \mathcal{V}$, which implies $V(z) = V_{\ell_1}(z)$, $\forall z \in \mathcal{V}$. As a consequence $\bar x \in \mathrm{int}(C_{\ell_1})$, and by equation (24) we have $\bar x \notin U_1$. In other words, we have shown that
\[
(z \in U_1) \;\Rightarrow\; (\exists\, \ell_z \in \{\ell_2, \dots, \ell_p\} \text{ s.t. } V(z) = V_{\ell_z}(z)). \qquad (25)
\]
Now, if $\alpha_V(x) = \{\ell_2, \dots, \ell_p\}$, (17) holds with $U = U_1$ and $\ell_z \in \{\ell_2, \dots, \ell_p\}$. Otherwise we can iterate this argument supposing $\ell_2 \notin \alpha_V(x)$, and so on. At each iteration $\nu \in \{2, \dots, p-1\}$, generalizing (24) and (25), we construct an open neighborhood $U_\nu$ of $x$ such that $U_\nu \subset U_{\nu-1}$ and
\[
(z \in U_\nu) \;\Rightarrow\; (\exists\, \ell_z \in \{\ell_{\nu+1}, \dots, \ell_p\} \text{ s.t. } V(z) = V_{\ell_z}(z)). \qquad (26)
\]
Either $\alpha_V(x) = \{\ell_{\nu+1}, \dots, \ell_p\}$ and the proof is complete with $U = U_\nu$, or we need to iterate again. Note that when $\nu = p-1$, the existence of $\ell_z \in \{\ell_p\}$ as in (26) implies $V(z) = V_{\ell_p}(z)$ for all $z \in U_{p-1}$, thus proving $U_{p-1} \subset \mathrm{int}(C_{\ell_p})$ and hence $\alpha_V(x) = \{\ell_p\}$. This completes the proof of (17) and of the fact that $\alpha_V$ is nonempty.

4.4 Proof of Lemma 2

Lemma 2 is the key result used in the proof of Theorem 1, establishing properties of the directional derivative of $V \in \mathrm{Mm}(V_1, \dots, V_K)$ along the solutions of (7). In its proof we will use the following result.

Claim 1. Given functions $\xi_1, \dots, \xi_J : \mathbb{R} \to \mathbb{R}$ continuous at $0$, we have that
\[
\lim_{h \to 0} \min_{j \in \{1, \dots, J\}} \xi_j(h) = \min_{j \in \{1, \dots, J\}} \lim_{h \to 0} \xi_j(h).
\]
Proof of Claim 1. Define $\xi(h) := \min_{j \in \{1, \dots, J\}} \xi_j(h)$ for all $h \in \mathbb{R}$; $\xi$ is continuous at $0$ since it is the pointwise minimum of finitely many functions continuous at $0$. We have
\[
\lim_{h \to 0} \min_{j \in \{1, \dots, J\}} \xi_j(h) = \lim_{h \to 0} \xi(h) = \xi(0) = \min_{j \in \{1, \dots, J\}} \xi_j(0) = \min_{j \in \{1, \dots, J\}} \lim_{h \to 0} \xi_j(h),
\]
thus concluding the proof.

Proof of Lemma 2. Recalling that $\varphi(\cdot)$ is an absolutely continuous solution of the differential inclusion (7) and that $V$ is a locally Lipschitz function, the function $V \circ \varphi : [0, T) \to \mathbb{R}$ is absolutely continuous, and hence $\frac{d}{dt} V(\varphi(t))$ exists almost everywhere in $[0, T)$, proving (21a). Moreover, there exists a set $N_0$ of measure zero such that, for every $t \in [0, T) \setminus N_0$, both $\dot\varphi(t)$ and $\frac{d}{dt} V(\varphi(t))$ exist, and $\dot\varphi(t) \in F(\varphi(t))$.

To prove (21b), from Proposition 2, we use the representation (13b) of $V \in \mathrm{Mm}(V_1, \dots, V_K)$, dropping the superscript "$\star$" for notational simplicity, that is,
\[
V(x) := \min_{j \in \{1, \dots, J\}} \Big\{ \max_{\ell \in S_j} \{ V_\ell(x) \} \Big\},
\]
where $J \geq 1$ and $S_1, \dots, S_J$ are nonempty subsets of $\{1, \dots, K\}$. By Lemma 1, for each $t \in [0, T)$ and for each $x$ in a neighborhood of $\varphi(t)$, $V(x)$ can be expressed as
\[
V(x) := \min_{j \in \{1, \dots, J\}} \Big\{ \max_{\ell \in S_j \cap \alpha_V(\varphi(t))} \{ V_\ell(x) \} \Big\};
\]
namely, only the active indices in $\alpha_V(\varphi(t))$ play a role (possibly ruling out the sets $S_j$ for which $S_j \cap \alpha_V(\varphi(t)) = \emptyset$). Let us introduce the notation
\[
V_j(x) := \max_{\ell \in S_j \cap \alpha_V(\varphi(t))} \{ V_\ell(x) \}. \qquad (27)
\]

To proceed in a constructive manner, consider the set $\mathrm{M}(V_1, \dots, V_K)$ containing all the functions obtained by max (and only max) combinations over $V_1, \dots, V_K$. The cardinality of $\mathrm{M}(V_1, \dots, V_K)$ is finite and equal to $N_K := 2^K - 1$, and we can denote its elements by $W_k$, for $k \in \{1, \dots, N_K\}$. Reasoning as before, for each $k$ define $N_k$ as the subset of $[0, T)$ where $W_k \circ \varphi$ is not differentiable. Since the $W_k$ are locally Lipschitz, each $N_k$ has measure zero. Fix any $t \in [0, T) \setminus \big( \bigcup_{k \in \{0, \dots, N_K\}} N_k \big)$. From the fact that $V_j$ in (27) is locally Lipschitz for each $j \in \{1, \dots, J\}$, we obtain
\[
\frac{d}{dt} V_j(\varphi(t)) = \lim_{h \to 0} \frac{V_j(\varphi(t) + h \dot\varphi(t)) - V_j(\varphi(t))}{h}, \qquad (28)
\]
where the limit exists because $t \notin \bigcup_{k \in \{1, \dots, N_K\}} N_k$. The functions $V_j$ in (27) are regular (Definition 1). We can follow the idea of [3, Lemma 1]: by letting $h$ go to zero from the right, recalling inclusion (16), we get
\[
\frac{d}{dt} V_j(\varphi(t)) = \max_{\ell \in S_j \cap \alpha_V(\varphi(t))} \big\{ \nabla V_\ell(\varphi(t))^\top \dot\varphi(t) \big\}. \qquad (29)
\]
Similarly, by letting $h$ go to zero from the left in (28), we get
\[
\frac{d}{dt} V_j(\varphi(t)) = \min_{\ell \in S_j \cap \alpha_V(\varphi(t))} \big\{ \nabla V_\ell(\varphi(t))^\top \dot\varphi(t) \big\}. \qquad (30)
\]
Since $\frac{d}{dt} V_j(\varphi(t))$ exists, we have (29) = (30), and thus for each $j \in \{1, \dots, J\}$ we can write, for all $\ell \in S_j \cap \alpha_V(\varphi(t))$,
\[
\frac{d}{dt} V_j(\varphi(t)) = \nabla V_\ell(\varphi(t))^\top \dot\varphi(t) =: a_j(t). \qquad (31)
\]
Now consider the function $V(x) = \min_{j \in \{1, \dots, J\}} V_j(x)$, for $x$ in some neighborhood of $\varphi(t)$. For all $h > 0$, we use the fact that $V_j(\varphi(t)) = V(\varphi(t))$ for all $j \in \{1, \dots, J\}$ to obtain
\[
\xi(h) := \frac{V(\varphi(t) + h\dot\varphi(t)) - V(\varphi(t))}{h} = \frac{\min_j \{ V_j(\varphi(t) + h\dot\varphi(t)) \} - V(\varphi(t))}{h} = \min_{j \in \{1, \dots, J\}} \Big\{ \frac{V_j(\varphi(t) + h\dot\varphi(t)) - V_j(\varphi(t))}{h} \Big\} =: \min_{j \in \{1, \dots, J\}} \xi_j(h).
\]
Then, applying Claim 1 and (31), we have
\[
\frac{d}{dt} V(\varphi(t)) = \lim_{h \to 0^+} \min_{j \in \{1, \dots, J\}} \{ \xi_j(h) \} = \min_{j \in \{1, \dots, J\}} \Big\{ \lim_{h \to 0^+} \xi_j(h) \Big\} = \min_{j \in \{1, \dots, J\}} \Big\{ \frac{d}{dt} V_j(\varphi(t)) \Big\} = \min_{j \in \{1, \dots, J\}} \{ a_j(t) \}. \qquad (32)
\]
Using again Claim 1, we can also write
\[
\frac{d}{dt} V(\varphi(t)) = \lim_{h \to 0^-} \frac{V(\varphi(t) + h\dot\varphi(t)) - V(\varphi(t))}{h}
= -\lim_{h \to 0^-} \min_{j \in \{1, \dots, J\}} \Big\{ \frac{V_j(\varphi(t) + h\dot\varphi(t)) - V_j(\varphi(t))}{-h} \Big\}
= -\min_{j \in \{1, \dots, J\}} \Big\{ \lim_{h \to 0^-} \frac{V_j(\varphi(t) + h\dot\varphi(t)) - V_j(\varphi(t))}{-h} \Big\}
= -\min_{j \in \{1, \dots, J\}} \{ -a_j(t) \} = \max_{j \in \{1, \dots, J\}} \{ a_j(t) \}. \qquad (33)
\]


Figure 2: A geometric interpretation of the set $\dot{\overline V}_{F^{\mathrm{sw}}}(\tilde x)$ in $\mathbb{R}^2$. (a) The vector fields $f_1(\tilde x)$ and $f_2(\tilde x)$ point into the same half-plane, which corresponds to the case $\dot{\overline V}_{F^{\mathrm{sw}}}(\tilde x) = \emptyset$. (b) A convex combination of the vector fields $f_1(\tilde x)$ and $f_2(\tilde x)$ aligns with the tangent space of $S_{12}$ at $\tilde x$ and thus $\dot{\overline V}_{F^{\mathrm{sw}}}(\tilde x) \neq \emptyset$.

Summarizing, from (32) and (33) it follows that $a_1(t) = \dots = a_J(t) =: a(t)$. Therefore, from (31) we get, for each $j \in \{1, \dots, J\}$, that $\ell \in S_j \cap \alpha_V(\varphi(t))$ implies $\nabla V_\ell(\varphi(t))^\top \dot\varphi(t) = a(t)$. Finally, recalling that $\alpha_V(\varphi(t)) = \bigcup_j S_j \cap \alpha_V(\varphi(t))$, we have
\[
\nabla V_\ell(\varphi(t))^\top \dot\varphi(t) = a(t), \qquad \forall \ell \in \alpha_V(\varphi(t)).
\]
From (19), it follows that $a(t) \in \dot{\overline V}_F(\varphi(t))$, which then implies (21b).

5 Switched systems

We now focus our attention on system (2). In contrast to (1), where the vector fields may switch to any value at any point in the state space, the switching in (2) occurs according to the pre-specified function x 7→ σ(x), which determines the active vector field as a function of the state. As a consequence, solutions of (2) are also solutions of (1) and Theorem 1 also implies GAS of (2). However we search here for less conservative stability conditions. Let f1, . . . , fM be C1(Rn, Rn) in (2). The class of switching functions x 7→ σ(x) that we

consider for system (2) is introduced in the following assumption.

Assumption 1. There exist finitely many analytic functions $H_1, \dots, H_M : \mathbb{R}^n \to \mathbb{R}$, defining open sets $D_1, \dots, D_M \subset \mathbb{R}^n$ by
\[
D_i := \{ x \in \mathbb{R}^n \mid H_i(x) > 0 \}, \qquad \forall i \in \{1, \dots, M\},
\]
such that $\sigma$ is constant and equal to $i$ on each $D_i$, $\bigcup_{i=1}^M \overline{D}_i = \mathbb{R}^n$, and $D_i \cap D_j = \emptyset$ if $i \neq j$.

Note that, in Assumption 1 the value of σ remains unspecified on ∂Di, i.e. the boundaries of Di,

i = {1, . . . , M }. Since ∂Di ⊂ {x ∈ Rn | Hi(x) = 0}, and the set of zeros of an analytic function has zero

Lebesgue measure, this ambiguity will not affect the solution set of (2), as explained in the sequel.

Given $f_1, \dots, f_M \in C^1(\mathbb{R}^n, \mathbb{R}^n)$ and $\sigma : \mathbb{R}^n \to \{1, \dots, M\}$ satisfying Assumption 1, we define $f^{\mathrm{sw}} : \mathbb{R}^n \to \mathbb{R}^n$ as
\[
f^{\mathrm{sw}}(x) := f_{\sigma(x)}(x). \qquad (34)
\]
Because the vector field in (34) is in general discontinuous, we define an appropriate notion of solution of (34), arising from the Filippov regularization.

Figure 3: Trajectories of the switched system (40) in Example 3. (a) The blue line shows a trajectory starting from $(0, 1)$, the red line a trajectory starting from $(0.5, 0)$, and the red dashed line indicates a level set of $V(x)$. (b) The red arrows represent the vector field on the whole state space; note the converging sliding motion on the line $S_1$ and the nongeneric case on the line $S_2$. (c) The blue arrows represent the elements of $F^{\mathrm{sw}}(x)$, and in particular the convex combination of $A_1 x$ and $A_2 x$ that points toward $0$ near the origin and diverges away from the origin.

Definition 5 ([16]). Given $f^{\mathrm{sw}} : \mathbb{R}^n \to \mathbb{R}^n$ in (34), and the system
\[
\dot x = f^{\mathrm{sw}}(x), \qquad (35)
\]
define the set-valued Filippov regularization
\[
\dot x \in F^{\mathrm{sw}}(x) := \bigcap_{\varepsilon > 0} \; \bigcap_{N \subset \mathbb{R}^n,\, \mu(N) = 0} \overline{\mathrm{co}}\, f^{\mathrm{sw}}\big( B_\varepsilon(x) \setminus N \big), \qquad (36)
\]

where $\mu(N)$ is the Lebesgue measure of $N \subset \mathbb{R}^n$ and $\overline{\mathrm{co}}$ denotes the closed convex hull. We say that $x : \mathbb{R}_{\geq 0} \to \mathbb{R}^n$ is a Filippov solution of system (35) starting at $x_0$ if

1. $x$ is absolutely continuous, with $x(0) = x_0$;
2. $\dot x(t) \in F^{\mathrm{sw}}(x(t))$ for almost all $t > 0$.

For the vector field $f^{\mathrm{sw}}$ in (34), the computation of $F^{\mathrm{sw}}$ is simplified as observed in [12, Page 51] and is summarized below.

Proposition 4. Consider the vector field $f^{\mathrm{sw}}$ in (34) with $\sigma$ satisfying Assumption 1. Introduce the set-valued map $I : \mathbb{R}^n \rightrightarrows \{1, \dots, M\}$ as
\[
I(x) := \{ i \mid x \in \overline{D}_i \} = \bigcap_{\varepsilon > 0} \; \bigcup_{\substack{y \in B_\varepsilon(x) \\ y \in \bigcup_i D_i}} \sigma(y). \qquad (37)
\]
Then $F^{\mathrm{sw}}$ in (36) satisfies
\[
F^{\mathrm{sw}}(x) = \mathrm{co}\{ f_i(x) \mid i \in I(x) \}. \qquad (38)
\]
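For a concrete reading of (37)-(38), the following toy Python snippet (helper names ours) computes $I(x)$ for the two-region partition used in Example 3 below, where $H_2(x) = -H_1(x) = x^\top Q x$:

```python
# Active-mode set I(x) of Proposition 4 for the partition H2(x) = -H1(x) = x^T Q x, Q = diag(1,-1).
import numpy as np

Q = np.diag([1.0, -1.0])
def active_modes(x, tol=1e-12):
    h = x @ Q @ x
    modes = []
    if -h > -tol: modes.append(1)     # x in closure of D1 = {x : x^T Q x < 0}
    if h > -tol: modes.append(2)      # x in closure of D2 = {x : x^T Q x > 0}
    return modes                      # F^sw(x) = co{ f_i(x) : i in active_modes(x) }

print(active_modes(np.array([0.0, 1.0])),
      active_modes(np.array([1.0, 0.0])),
      active_modes(np.array([1.0, 1.0])))
# [1] [2] [1, 2]: on the switching lines both vector fields enter the Filippov convex hull
```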

We underline that under Assumption 1 the Filippov regularization Fswis an upper semi-continuous map with Fsw(x) being nonempty, compact, and convex for each x ∈ Rn. Thus, we can study stability of switched systems in (34) using the results developed in Section 4. DefiningV˙Fsw(x) as in (19) with F replaced by

Fsw, Theorem 1 leads to the following statement in the context of switched systems.

Theorem 2. Consider system (2) and a switching law $\sigma : \mathbb{R}^n \to \{1, \dots, M\}$ satisfying Assumption 1. Consider an open and connected set $D \subset \mathbb{R}^n$ such that $0 \in D$ and $K$ positive definite functions $V_1, \dots, V_K \in C^1(D, \mathbb{R})$. If, for a max-min function $V \in \mathrm{Mm}(V_1, \dots, V_K)$, there exists $\gamma \in \mathcal{PD}$ such that
\[
\max \dot{\overline V}_{F^{\mathrm{sw}}}(x) \leq -\gamma(|x|), \qquad \forall x \in D, \qquad (39)
\]
then the origin of (36) is AS. If $D = \mathbb{R}^n$ and each $V_j$, $j \in \{1, \dots, K\}$, is radially unbounded, then (36) is GAS.

Theorem 2 simultaneously accounts for points x where I(x) (associated to σ), and/or points where αV(x)

(associated to V ) are multivalued. Interesting things happen when these points coincide, namely when V mimics the patchy shape of Fsw.

Remark 3. Consider the simplest nontrivial case, taking an $\tilde x \in D$ such that $I(\tilde x) = \{1, 2\}$ and $\alpha_V(\tilde x) = \{\ell_1, \ell_2\}$, for some $\ell_1, \ell_2 \in \{1, \dots, K\}$. We may give a geometric interpretation of (39). Parameterizing an $f \in F^{\mathrm{sw}}(\tilde x)$ as $f = \lambda f_1(\tilde x) + (1 - \lambda) f_2(\tilde x)$ in expression (19), we have that $\dot{\overline V}_{F^{\mathrm{sw}}}(\tilde x) \neq \emptyset$ if and only if there exists $\lambda \in [0, 1]$ such that (we omit the argument $\tilde x$ of the gradients to simplify the notation)
\[
\lambda (\nabla V_{\ell_1} - \nabla V_{\ell_2})^\top f_1(\tilde x) = -(1 - \lambda)(\nabla V_{\ell_1} - \nabla V_{\ell_2})^\top f_2(\tilde x),
\]
which holds only if
\[
\big( (\nabla V_{\ell_1} - \nabla V_{\ell_2})^\top f_1(\tilde x) \big) \big( (\nabla V_{\ell_1} - \nabla V_{\ell_2})^\top f_2(\tilde x) \big) \leq 0.
\]
It follows that $\dot{\overline V}_{F^{\mathrm{sw}}}(\tilde x) \neq \emptyset$ only if the vector fields $f_1(\tilde x)$ and $f_2(\tilde x)$ are such that the inner product of their respective components normal to the hypersurface $S_{12} = \{ x \in \mathbb{R}^n \mid V_{\ell_1}(x) = V_{\ell_2}(x) \}$ is nonpositive, namely they do not both point to the same side of $S_{12}$. Figure 2 provides an illustration of this fact in the planar case. In Example 3, an illustration of this idea is provided.

Example 3. We consider a system of the form (2) and analyze its stability using Theorem 2. Given
\[
A_1 = \begin{bmatrix} -0.1 & 1 \\ -5 & -0.1 \end{bmatrix}, \qquad A_2 = \begin{bmatrix} -0.1 & -5 \\ 1 & -0.1 \end{bmatrix}, \qquad Q = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix},
\]
consider the switched system
\[
\dot x = \begin{cases} f_1(x) := A_1 x - b\, \tilde g(x), & \text{if } x^\top Q x < 0, \\ f_2(x) := A_2 x - b\, \tilde g(x), & \text{if } x^\top Q x > 0, \end{cases} \qquad (40)
\]
where $b \geq 0$ and the function $\tilde g : \mathbb{R}^2 \to \mathbb{R}^2$ is defined as
\[
\tilde g(x_1, x_2) = \begin{bmatrix} g(x_1) \\ g(x_2) \end{bmatrix} = \begin{bmatrix} \arctan(x_1) \\ \arctan(x_2) \end{bmatrix}.
\]
System (40) can be written as (34), and satisfies Assumption 1 with $H_2(x) = -H_1(x) = x^\top Q x$. Consider now $P_1 = \begin{bmatrix} 5 & 0 \\ 0 & 1 \end{bmatrix}$, $P_2 = \begin{bmatrix} 1 & 0 \\ 0 & 5 \end{bmatrix}$; we prove that $V(x) = \min\{ x^\top P_1 x,\; x^\top P_2 x \}$ is a Lyapunov function in the sense of Theorem 2. Noting that $P_1 - P_2 = 4Q$, the points where $V$ is not differentiable coincide with the points where $\sigma$ is not continuous. To show inequality (39), we proceed in three steps.

Step 1: Each subsystem is GAS. Analyzing each subsystem where $V$ is differentiable, it can be shown that
\[
\nabla V(x)^\top f \leq -0.1 |x|^2, \qquad \forall f \in F^{\mathrm{sw}}(x), \text{ if } x^\top Q x \neq 0.
\]
The next step is to check inequality (39) where $V$ is not differentiable, that is, on the lines $S_1 := \{ x \in \mathbb{R}^2 \mid x_2 = x_1 \}$ and $S_2 := \{ x \in \mathbb{R}^2 \mid x_2 = -x_1 \}$, so that $S_1 \cup S_2$ is the set where $x^\top Q x = 0$.

Step 2: Line $S_1$ with converging sliding motion. We compute the set-valued derivative $\dot{\overline V}_{F^{\mathrm{sw}}}(x)$ for a point $x \in S_1$. Proceeding as in Remark 3, based on (19), it is seen that
\[
\lambda\, x^\top (P_1 - P_2) f_1(x) + (1 - \lambda)\, x^\top (P_1 - P_2) f_2(x) = 0 \qquad (41)
\]
holds with $\lambda = 0.5$, for every $x \in S_1$. Consequently, for each $x \in S_1$, we have
\[
\dot{\overline V}_{F^{\mathrm{sw}}}(x) = \Big\{ 2 x^\top P_1 \big( \tfrac12 f_1(x) + \tfrac12 f_2(x) \big) \Big\} = \big\{ x^\top P_1 (A_1 x + A_2 x) - 2 b\, x^\top P_1 \tilde g(x) \big\}.
\]
By construction, the same singleton would be obtained if we replaced $P_1$ by $P_2$. Substituting the values of $A_i$ and $P_i$, $i = 1, 2$, it thus follows that $\max \dot{\overline V}_{F^{\mathrm{sw}}}(x) < -\tfrac{25}{2} |x|^2$, $\forall x \in S_1$; in particular, this bound also covers the converging "sliding" solutions along $S_1$.

Step 3: Line $S_2$ with diverging sliding motion. Choosing $x \in S_2$ and following the same reasoning as in Step 2, it is seen that the set $\dot{\overline V}_{F^{\mathrm{sw}}}(x)$ is nonempty because (41) holds with $\lambda = 0.5$, for every $x \in S_2$. As a result,
\[
\dot{\overline V}_{F^{\mathrm{sw}}}(x) = \big\{ x^\top P_1 (A_1 x + A_2 x) - 2 b\, x^\top P_1 \tilde g(x) \big\}.
\]
Analyzing the linear term, we have $x^\top P_1 (A_1 x + A_2 x) = 22.8\, x_1^2$; for the nonlinear term, for each $x \in S_2$, we have
\[
-2 b\, x^\top P_1 \tilde g(x) = -12 b\, x_1 g(x_1).
\]
For $x_1$ small enough, we see that $x_1 g(x_1) = x_1 \arctan(x_1) = x_1^2 + o(x_1^2)$, where $\lim_{x_1 \to 0} \frac{o(x_1^2)}{x_1^2} = 0$. Thus, for sufficiently large values of $b > 0$, there exists a $\delta > 0$ such that
\[
\max \dot{\overline V}_{F^{\mathrm{sw}}}(x) = 22.8\, x_1^2 - 12 b\, x_1^2 + o(x_1^2) < -0.1 |x|^2, \qquad (42)
\]
if $x \in S_2$ and $|x| < \delta$.

Combining the three steps, we have proved (39) (with $\gamma(|x|) = 0.1 |x|^2$) on a small open neighborhood $D = B_\delta(0)$ of the origin, and Theorem 2 establishes local asymptotic stability of the origin by using the minimum of two quadratics as a Lyapunov function. Condition (39) fails to be true on the line $S_2$, away from the origin, regardless of the selection of $b > 0$. Hence, there exist Filippov solutions starting in $S_2$ with large enough initial condition that stay in $S_2$ and diverge; see Figures 3b and 3c for an illustration. We want to underline that, since $x^\top (A_1^\top P_2 + P_2 A_1) x > 0$ for all $x \in S_2$, recalling (10), it holds that $\max \dot V_{F^{\mathrm{sw}}}(x) > 0$, $\forall x \in S_2$. This observation again shows the utility of the Lie derivative compared to Clarke's derivative in (10), which does not allow establishing asymptotic stability of the origin.
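The sliding-mode computations of Steps 2 and 3 can be spot-checked numerically; the Python sketch below (with the arbitrary choice $b = 3$, so that $12 b > 22.8$; this value is not from the paper) evaluates the single element of $\dot{\overline V}_{F^{\mathrm{sw}}}(x)$ on both lines using the Filippov combination with $\lambda = 1/2$.

```python
# Spot-check of Steps 2-3 in Example 3: value of the set-valued Lie derivative on the sliding lines.
import numpy as np

A1 = np.array([[-0.1, 1.0], [-5.0, -0.1]])
A2 = np.array([[-0.1, -5.0], [1.0, -0.1]])
P1, P2 = np.diag([5.0, 1.0]), np.diag([1.0, 5.0])
b = 3.0                                   # illustrative choice with 12*b > 22.8
g = np.arctan                             # componentwise arctan

def lie_on_sliding(x):
    """2 x^T P_l f with f = (f1(x)+f2(x))/2; the two values (l = 1, 2) coincide on S1 and S2."""
    f = 0.5 * ((A1 @ x - b * g(x)) + (A2 @ x - b * g(x)))
    return 2 * x @ (P1 @ f), 2 * x @ (P2 @ f)

for s in (0.1, 3.0):
    print(s, lie_on_sliding(np.array([s, s])),      # S1: negative at every scale (converging)
             lie_on_sliding(np.array([s, -s])))     # S2: negative near 0, positive for large |x|
```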

6 Linear Switched Systems and Quadratic Basis

We are now interested in applying Theorem 2 to switched systems (34) with linear vector fields and a partition given by symmetric cones. More precisely, given $A_1, \dots, A_M \in \mathbb{R}^{n\times n}$, we consider the differential inclusion
\[
\dot x \in F^{\mathrm{sw}}_{\mathrm{lin}}(x) := \mathrm{co}\{ A_i x \mid i \in I(x) \}. \qquad (43)
\]
The set-valued map $I : \mathbb{R}^n \rightrightarrows \{1, \dots, M\}$ arises from a switching function $x \mapsto \sigma(x)$ satisfying Assumption 1, where the sets $D_1, \dots, D_M \subset \mathbb{R}^n$ are defined by
\[
D_i := \{ x \in \mathbb{R}^n \mid x^\top Q_i x > 0 \}, \qquad (44)
\]
with properly chosen symmetric matrices $Q_i \in \mathrm{Sym}(\mathbb{R}^n) := \{ R \in \mathbb{R}^{n\times n} \mid R^\top = R \}$, where $Q_i$ is not negative semidefinite for each $i \in \{1, \dots, M\}$. The sets $D_i$ in (44) are symmetric open cones (if $x \in D_i$ then $\lambda x \in D_i$ for all $\lambda \in \mathbb{R} \setminus \{0\}$). The map $I : \mathbb{R}^n \rightrightarrows \{1, \dots, M\}$ in (37) can be rewritten in this context as follows:
\[
I(x) := \{ i \in \{1, \dots, M\} \mid x^\top Q_i x \geq 0 \}.
\]
Indeed, $Q_i$ not negative semidefinite implies $\overline{D}_i = \{ x \in \mathbb{R}^n \mid x^\top Q_i x \geq 0 \}$.

Remark 4. Another possible kind of partition of the state space arises by considering polyhedral cones (with a common vertex at the origin), that is, sets $D_1, \dots, D_M \subset \mathbb{R}^n$ (satisfying Assumption 1) defined by linear inequalities $D_i := \{ x \in \mathbb{R}^n \mid K_i x \geq_c 0 \}$, where $K_i \in \mathbb{R}^{k_i \times n}$ for all $i \in \{1, \dots, M\}$ and $\geq_c$ denotes the component-wise relation. The techniques employed in what follows could be adapted also to this case.

We restrict our attention to Lyapunov functions homogeneous of degree 2, considering max-min functions obtained from quadratic forms. This choice is motivated by the fact that, as proved in [20], max-of-quadratics Lyapunov functions are universal (existence is necessary and sufficient) for GAS of linear differential inclusions (LDIs). For linear state-dependent switched systems (43), as we noted, nonconvex (but still homogeneous) Lyapunov functions are required, and thus the min operator was added to provide this flexibility. The study of universality of max-min of quadratics for (43) is open for further research. The construction of "piecewise" quadratic Lyapunov functions, in similar settings, is studied also in [23], [19], and references therein.


Definition 6. Given $K$ distinct, symmetric and positive definite matrices $P_1, \dots, P_K \in \mathbb{R}^{n\times n}$, a max-min of quadratics is denoted by $V \in \mathrm{Mm}_q(P_1, \dots, P_K)$, and is defined as
\[
V(x) = \max_{j \in \{1, \dots, J\}} \Big\{ \min_{k \in S_j} x^\top P_k x \Big\}, \qquad (45)
\]
where $J \geq 1$ and, for each $j \in \{1, \dots, J\}$, the set $S_j \subset \{1, \dots, K\}$ is nonempty.

Remark 5 (Homogeneity). Since the sets $D_i$ are symmetric cones, the set-valued map in (43) is homogeneous of degree 1, in the sense that $F^{\mathrm{sw}}_{\mathrm{lin}}(\lambda x) = \lambda F^{\mathrm{sw}}_{\mathrm{lin}}(x)$, $\forall x \in \mathbb{R}^n$, $\forall \lambda \in \mathbb{R}$. Similarly, a max-min of quadratics function defined as in (45) is homogeneous of degree 2, that is, $V(\lambda x) = \lambda^2 V(x)$, $\forall x \in \mathbb{R}^n$, $\forall \lambda \in \mathbb{R}$, and $\alpha_V$ is constant along rays emanating from the origin, that is, $\alpha_V(\lambda x) = \alpha_V(x)$, $\forall x \in \mathbb{R}^n$, $\forall \lambda \in \mathbb{R} \setminus \{0\}$.

6.1 Stability Conditions with Set-Valued Lie Derivative

We first specialize the conditions of Theorem 2 for system (43) with $V$ of the form (45). To this end, points $x \in \mathbb{R}^n$ where $\alpha_V(x) = \{\ell(x)\}$ is a singleton are easily characterized, because they satisfy $x \in \mathrm{int}(C_{\ell(x)})$. Instead, consider any $x \in \mathbb{R}^n$ such that $\alpha_V(x) = \{\ell_1, \dots, \ell_p\}$ with $p > 1$, namely any point $x$ where the locally Lipschitz function $V$ is not continuously differentiable. Define now the probability simplex of dimension $m$ as
\[
\Lambda^m_0 := \Big\{ \lambda \in \mathbb{R}^m_{\geq 0} \;\Big|\; \sum_{j=1}^m \lambda_j = 1 \Big\}.
\]
Denoting $I(x) = \{i_1, \dots, i_m\} \subseteq \{1, \dots, M\}$, and proceeding as in Remark 3, by (19) we have that $\dot{\overline V}_{F^{\mathrm{sw}}_{\mathrm{lin}}}(x) \neq \emptyset$ if and only if there exists $\lambda = (\lambda_1, \dots, \lambda_m) \in \Lambda^m_0$ such that
\[
\nabla V_{\ell_{k+1}}(x)^\top \Big( \sum_{j=1}^m \lambda_j A_{i_j} x \Big) = \nabla V_{\ell_k}(x)^\top \Big( \sum_{j=1}^m \lambda_j A_{i_j} x \Big), \qquad (46)
\]
for each $k \in \{1, \dots, p-1\}$. Based on (46), define the set $\Lambda(x, \{A_i\}_{i \in I(x)}) \subset \Lambda^m_0$ as
\[
\lambda \in \Lambda(x, \{A_i\}_{i \in I(x)}) \;\Longleftrightarrow\;
\begin{cases}
\sum_{j=1}^m \lambda_j\, x^\top (P_{\ell_2} - P_{\ell_1}) A_{i_j} x = 0, \\
\quad \vdots \\
\sum_{j=1}^m \lambda_j\, x^\top (P_{\ell_p} - P_{\ell_{p-1}}) A_{i_j} x = 0,
\end{cases} \qquad (47)
\]
where $\lambda = (\lambda_1, \dots, \lambda_m) \in \Lambda^m_0$. Then, recalling (19), we have, for any $\ell \in \alpha_V(x)$,
\[
\dot{\overline V}_{F^{\mathrm{sw}}_{\mathrm{lin}}}(x) = \Big\{ 2 x^\top P_\ell (\lambda_1 A_{i_1} + \dots + \lambda_m A_{i_m}) x \;:\; (\lambda_1, \dots, \lambda_m) \in \Lambda(x, \{A_i\}_{i \in I(x)}) \Big\}. \qquad (48)
\]

The equivalence (48) is used to prove the next corollary of Theorem 2.
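For the simplest case $p = m = 2$, membership in $\Lambda(x, \{A_i\}_{i\in I(x)})$ reduces to a single linear equation in $\lambda \in [0, 1]$, as the following sketch shows (the helper is ours; the sample data reuse the linear parts of Example 3, for which $\lambda = 0.5$ is recovered on the switching line):

```python
# Lambda(x, {A_i}) of (47) for two active base functions and two active modes.
import numpy as np

def lambda_set_2x2(x, P_l1, P_l2, A_i1, A_i2):
    D = P_l2 - P_l1
    a1, a2 = x @ D @ (A_i1 @ x), x @ D @ (A_i2 @ x)
    if np.isclose(a1, a2):                       # degenerate: the whole simplex or nothing
        return [(0.0, 1.0)] if np.isclose(a1, 0.0) else []
    lam = a2 / (a2 - a1)                         # solves lam*a1 + (1 - lam)*a2 = 0
    return [lam] if 0.0 <= lam <= 1.0 else []

A1 = np.array([[-0.1, 1.0], [-5.0, -0.1]])
A2 = np.array([[-0.1, -5.0], [1.0, -0.1]])
print(lambda_set_2x2(np.array([1.0, 1.0]), np.diag([5.0, 1.0]), np.diag([1.0, 5.0]), A1, A2))  # [0.5]
```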

Corollary 2. Consider system (43) and a max-min of quadratics $V \in \mathrm{Mm}_q(P_1, \dots, P_K)$, where $P_1, \dots, P_K$ are symmetric, positive definite, and pairwise distinct matrices. Suppose that there exists $\varepsilon > 0$ such that:

(i) For each $x \in \mathbb{R}^n$ with $\alpha_V(x) = \{\ell\}$ and $I(x) = \{i\}$ being singletons, it holds that
\[
x^\top (A_i^\top P_\ell + P_\ell A_i)\, x \leq -\varepsilon |x|^2. \qquad (49)
\]
(ii) For each $x \in \mathbb{R}^n$ satisfying $\alpha_V(x) = \{\ell_1, \dots, \ell_p\} \subset \{1, \dots, K\}$ with $p > 1$, and $I(x) = \{i_1, \dots, i_m\} \subset \{1, \dots, M\}$ with $m > 1$, there exists $\ell \in \alpha_V(x)$ such that
\[
\sum_{i \in I(x)} \lambda_i\, x^\top (P_\ell A_i + A_i^\top P_\ell)\, x \leq -\varepsilon |x|^2, \qquad (50)
\]
for all $(\lambda_1, \dots, \lambda_m) \in \Lambda(x, \{A_i\}_{i \in I(x)})$.

Then the origin of (43) is GAS.

Proof. It follows from Theorem 2 that the origin of (43) is GAS if (39) holds for all $x \in \mathbb{R}^n$. We proceed by analyzing four cases, depending on whether the sets $I(x)$ and $\alpha_V(x)$ are singletons or not.

First, consider $x$ such that $\alpha_V(x) = \{\ell\}$ and $I(x) = \{i\}$ are singletons. In this case,
\[
\dot{\overline V}_{F^{\mathrm{sw}}_{\mathrm{lin}}}(x) = \big\{ x^\top (A_i^\top P_\ell + P_\ell A_i)\, x \big\} \leq -\varepsilon |x|^2,
\]
where the inequality is due to condition (i).

Secondly, for a point $x$ with $\alpha_V(x) = \{\ell_1, \dots, \ell_p\}$, $p > 1$, and $I(x) = \{i_1, \dots, i_m\}$, $m > 1$, it follows from (48) and condition (ii) that $\max \dot{\overline V}_{F^{\mathrm{sw}}_{\mathrm{lin}}}(x) \leq -\varepsilon |x|^2$.

Next, consider the case where $\alpha_V(x) = \{\ell\}$ is a singleton and $I(x) = \{i_1, \dots, i_m\}$ with $m > 1$, that is, a point where $V$ is continuously differentiable and the set $F^{\mathrm{sw}}_{\mathrm{lin}}(x)$ in (43) is multivalued. We thus have $\partial V(x) = \{\nabla V_\ell(x)\}$, and from linearity we have
\[
\max \dot{\overline V}_{F^{\mathrm{sw}}_{\mathrm{lin}}}(x) \leq \max_{\lambda \in \Lambda^m_0} \sum_{j=1}^m 2 \lambda_j\, x^\top P_\ell A_{i_j} x = 2 x^\top P_\ell A_{i^\star} x, \qquad (51)
\]
where $i^\star \in \arg\max_{i = i_1, \dots, i_m} 2 x^\top P_\ell A_i x$. Since $i^\star \in I(x)$, by (37) $x \in \overline{D}_{i^\star}$; from item (i) we have
\[
x_k^\top (A_{i^\star}^\top P_\ell + P_\ell A_{i^\star})\, x_k \leq -\varepsilon |x_k|^2
\]
for some sequence $x_k \to x$ with $x_k \in D_{i^\star} \cap \mathrm{int}(C_\ell)$, $\forall k \in \mathbb{N}$. By continuity we thus have $x^\top (A_{i^\star}^\top P_\ell + P_\ell A_{i^\star})\, x \leq -\varepsilon |x|^2$, and from (51) we have $\max \dot{\overline V}_{F^{\mathrm{sw}}_{\mathrm{lin}}}(x) \leq -\varepsilon |x|^2$.

Finally, we consider the case $\alpha_V(x) = \{\ell_1, \dots, \ell_p\}$ with $p > 1$ and $I(x) = \{i\}$, namely a point where the function $V$ is not continuously differentiable and the set $F^{\mathrm{sw}}_{\mathrm{lin}}(x)$ is a singleton, since $x \in D_i$. If $\dot{\overline V}_{F^{\mathrm{sw}}_{\mathrm{lin}}}(x) = \emptyset$ we are done. Otherwise, in view of (47), $\dot{\overline V}_{F^{\mathrm{sw}}_{\mathrm{lin}}}(x) \neq \emptyset$ implies
\[
\{ 2 x^\top P_{\ell_1} A_i x \} = \dots = \{ 2 x^\top P_{\ell_p} A_i x \} = \dot{\overline V}_{F^{\mathrm{sw}}_{\mathrm{lin}}}(x). \qquad (52)
\]
Considering, without loss of generality, the index $\ell_1 \in \alpha_V(x)$, by Definition 4 and recalling that $D_i$ is open, we can consider a sequence $x_k \to x$ such that $x_k \in D_i \cap \mathrm{int}(C_{\ell_1})$, for all $k \in \mathbb{N}$. By condition (i) we have $x_k^\top (A_i^\top P_{\ell_1} + P_{\ell_1} A_i)\, x_k \leq -\varepsilon |x_k|^2$, $\forall k \in \mathbb{N}$. By continuity, $x^\top (A_i^\top P_{\ell_1} + P_{\ell_1} A_i)\, x \leq -\varepsilon |x|^2$; recalling (52), this implies that $\max \dot{\overline V}_{F^{\mathrm{sw}}_{\mathrm{lin}}}(x) \leq -\varepsilon |x|^2$.

Having analyzed all the cases, we conclude that (39) holds for all $x \in \mathbb{R}^n$ and the assertion follows from Theorem 2.

6.2 Checking Item (i) of Corollary 2

In this section, we exploit the properties of system (43) and the family of candidate max-min Lyapunov functions in (45) to computationally check condition (i) of Corollary 2. We do so by following two steps: first, fixing $K \geq 1$, $J \geq 1$, nonempty subsets $S_1, \dots, S_J \subset \{1, \dots, K\}$, and hence the corresponding max-min combination in (45), we construct an auxiliary function $\Phi$, which characterizes the regions where $\alpha_V : \mathbb{R}^n \rightrightarrows \{1, \dots, K\}$ is single-valued. Notably, this function is independent of $P_1, \dots, P_K$. Secondly, we use $\Phi$ to compute matrices $P_1, \dots, P_K$ satisfying item (i) of Corollary 2 by only checking the feasibility of a finite set of matrix inequalities.

Step 0. Consider the symmetric group of order $K$, denoted by $\mathcal{S}_K$, which is the group of all possible permutations of the first $K$ positive integers. Given any $K$ pairwise distinct quadratic functions associated with some $P_1, \dots, P_K > 0$, for any $\rho = (\rho_1, \dots, \rho_K) \in \mathcal{S}_K$, define the open set
\[
E_\rho := \{ x \in \mathbb{R}^n \mid x^\top P_{\rho_1} x < \dots < x^\top P_{\rho_K} x \}, \qquad (53)
\]
which is a (possibly empty) cone where a strict ordering among the $K$ quadratic functions holds. For a given max-min combination in (45), namely given $J \geq 1$ and nonempty sets $S_j \subset \{1, \dots, K\}$, $\forall j \in \{1, \dots, J\}$, in each $E_\rho$ the function $\alpha_V : \mathbb{R}^n \rightrightarrows \{1, \dots, K\}$ defined in (15) is constant and single-valued; let us denote its value by $\Phi(\rho) := \alpha_V(E_\rho) \in \{1, \dots, K\}$.

In Algorithm 0, we present how to numerically construct $\Phi : \mathcal{S}_K \to \{1, \dots, K\}$, independently of the matrices $(P_1, \dots, P_K)$.

Remark 6. We emphasize that the function $\Phi$ is independent of $P_1, \dots, P_K$, and only depends on the max-min policy defined by the sets $S_1, \dots, S_J$. As an example, considering $J = K$ and $S_j = \{j\}$, the max-min combination (45) coincides with the maximum of the $K$ quadratic functions. In this case, $\Phi$ will be defined as $\Phi((\rho_1, \dots, \rho_K)) = \rho_K$, $\forall \rho = (\rho_1, \dots, \rho_K) \in \mathcal{S}_K$, because of (53). Also, to relate $\Phi$ with $\alpha_V$, it is seen that for any $K$ base quadratics defined by $(P_1, \dots, P_K)$ with a specific max-min combination determined by $V$, the mapping $\alpha_V$ in (15) corresponds to $\alpha_V(x) = \bigcap_{\varepsilon > 0} \{ \Phi(\rho) \mid E_\rho \cap B(x, \varepsilon) \neq \emptyset \}$.

Next, in Step 1, we use the function $\Phi$ to check condition (i) of Corollary 2.

Step 1 (Conditions on $E_\rho$). Consider system (43), and take $K \in \mathbb{N}$, $J \geq 1$ and nonempty sets $S_1, \dots, S_J \subset \{1, \dots, K\}$. Find $P_1, \dots, P_K > 0$, $\beta_i(\rho) \geq 0$, $\tau_{i,k}(\rho) \geq 0$, $\forall \rho = (\rho_1, \dots, \rho_K) \in \mathcal{S}_K$, $\forall k \in \{1, \dots, K-1\}$, and $\forall i \in \{1, \dots, M\}$, such that
\[
A_i^\top P_{\Phi(\rho)} + P_{\Phi(\rho)} A_i + \sum_{k=1}^{K-1} \tau_{i,k}(\rho) (P_{\rho_{k+1}} - P_{\rho_k}) + \beta_i(\rho) Q_i < 0. \qquad (54)
\]
In Proposition 5 below, we prove that the feasibility of Step 1 yields $K$ matrices such that condition (i) of Corollary 2 holds, while in Algorithm 1 we formalize this step of computationally checking condition (54).

Algorithm 0: The function $\Phi : \mathcal{S}_K \to \{1, \dots, K\}$.
Data: $K \in \mathbb{N}$, $J \geq 1$, $S_1, \dots, S_J \subset \{1, \dots, K\}$
Input: $\rho = (\rho_1, \dots, \rho_K) \in \mathcal{S}_K$
Output: out $= \Phi_{K, J, S_1, \dots, S_J}(\rho)$

Function $\Phi_{K, J, S_1, \dots, S_J}(\rho)$:
  Set: out $= 0$, $S_{\min} = \emptyset$
  for ($j = 1$; $j \leq J$; $j = j + 1$) do
    for ($i = 1$; $i \leq K$; $i = i + 1$) do
      if $\rho_i \in S_j$ then add $\rho_i$ to $S_{\min}$, break end
    end
  end
  for ($i = K$; $i \geq 1$; $i = i - 1$) do
    if $\rho_i \in S_{\min}$ then out $= \rho_i$, break end
  end
  return out
End Function

(20)

Algorithm 1: Lyapunov conditions: differentiable case.
Data: $A_1, \dots, A_M \in \mathbb{R}^{n\times n}$, $Q_1, \dots, Q_M \in \mathrm{Sym}(\mathbb{R}^n)$.
Initialization: Choose the max-min structure: take $K \in \mathbb{N}$, $J \geq 1$, $S_1, \dots, S_J \subset \{1, \dots, K\}$, and construct $\Phi : \mathcal{S}_K \to \{1, \dots, K\}$ (Algorithm 0).
Lyapunov conditions on $E_\rho$, $\forall \rho \in \mathcal{S}_K$ (Step 1): check the feasibility of
\[
A_i^\top P_{\Phi(\rho)} + P_{\Phi(\rho)} A_i + \sum_{k=1}^{K-1} \tau_{i,k}(\rho) (P_{\rho_{k+1}} - P_{\rho_k}) + \beta_i(\rho) Q_i < 0, \quad P_1, \dots, P_K > 0, \quad \beta_i(\rho), \tau_{i,k}(\rho) \geq 0, \qquad (55)
\]
for all $\rho = (\rho_1, \dots, \rho_K) \in \mathcal{S}_K$, $k \in \{1, \dots, K-1\}$, $i \in \{1, \dots, M\}$.
if (55) is feasible then Output: matrices $(P_1, \dots, P_K)$; else Output: $\emptyset$.

Proposition 5. Consider K ∈ N, J ≥ 1, nonempty sets S1, . . . , SJ ⊂ {1, . . . , K}, positive definite matrices (P1, . . . , PK), and V defined as in (45). If, for any ρ = (ρ1, . . . , ρK) ∈ SK, any i ∈ {1, . . . , M } and any k ∈ {1, . . . , K − 1}, there exist βi(ρ) ≥ 0, τi,k(ρ) ≥ 0 such that (54) holds, then item (i) of Corollary 2 holds.

Proof. The set Eρ ∩ Di can be written as

Eρ ∩ Di = { x ∈ R^n | x^⊤ Qi x > 0 and x^⊤ (P_{ρ_{k+1}} − P_{ρ_k}) x > 0, ∀ k ∈ {1, . . . , K − 1} }.

If (54) holds then, due to the strict inequality, there exists εi,ρ > 0 such that

x^⊤ (A_i^⊤ P_{Φ(ρ)} + P_{Φ(ρ)} A_i) x ≤ −εi,ρ |x|^2, ∀ x ∈ Di ∩ Eρ.   (56)

Indeed, (54) implies that the matrix on its left-hand side is smaller than −εi,ρ I for some εi,ρ > 0; pre- and post-multiplying by x^⊤ and x, and noting that the terms τi,k(ρ) x^⊤(P_{ρ_{k+1}} − P_{ρ_k})x and βi(ρ) x^⊤ Qi x are nonnegative on Di ∩ Eρ, yields (56). By Step 0 we have αV(x) = {Φ(ρ)} and, by (44), I(x) = {i} for all x ∈ Di ∩ Eρ, and thus (56) implies that (49) holds for all x ∈ Di ∩ Eρ. Defining ε := min_{i,ρ} εi,ρ, we have that (49) holds for each x ∈ R^n with αV(x) and I(x) being singletons, thus concluding the proof.

Remark 7 (Polyhedral cones). Consider again the alternative state-space partition discussed in Remark 4. More precisely, consider polyhedral cones D1, . . . , DM ⊂ R^n defined by Di := {x ∈ R^n | Ki x ≥ 0 componentwise}, where, for each i ∈ {1, . . . , M }, Ki ∈ R^{ki×n} for some ki ∈ N. Equivalently, the sets Di can be represented as Di = cone{v_1, . . . , v_{Mi}}, where v_1, . . . , v_{Mi} ∈ R^n are the rays of the cone Di. Let Ri ∈ R^{n×Mi} denote the matrix whose columns are the vectors v_1, . . . , v_{Mi}. As presented in [22, Lemma 1], given any symmetric matrix S ∈ R^{n×n}, if there exists a symmetric and entry-wise positive matrix Ni ∈ R^{Mi×Mi} such that Ri^⊤ S Ri + Ni ≤ 0, then x^⊤ S x < 0, ∀x ∈ Di \ {0}. Using this result, the procedure presented in Step 1 and Proposition 5 can be adapted to the polyhedral cones case by requiring that, for any ρ = (ρ1, . . . , ρK) ∈ SK, any i ∈ {1, . . . , M } and any k ∈ {1, . . . , K − 1}, there exist τi,k(ρ) ≥ 0 and a symmetric entry-wise positive matrix Ni(ρ) such that

Ri^⊤ S(ρ) Ri + Ni(ρ) ≤ 0,

with S(ρ) := A_i^⊤ P_{Φ(ρ)} + P_{Φ(ρ)} A_i + Σ_{k=1}^{K−1} τ_{i,k}(ρ) (P_{ρ_{k+1}} − P_{ρ_k}).
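To make this adaptation concrete, the following sketch (ours, assuming numpy and cvxpy are available) checks, for a fixed symmetric matrix S, e.g. S(ρ) evaluated at candidate matrices Pk and fixed multipliers τi,k(ρ), and a cone with ray matrix R, whether a symmetric entry-wise positive N exists such that R^⊤ S R + N ≤ 0; with S and R fixed, this is a standard semidefinite feasibility problem.

import numpy as np
import cvxpy as cp

def negative_on_cone(S, R, eps=1e-8):
    # S: fixed symmetric n-by-n matrix; R: n-by-m matrix whose columns are the rays of the cone.
    m = R.shape[1]
    C = R.T @ S @ R
    C = 0.5 * (C + C.T)                               # numerical symmetrization
    N = cp.Variable((m, m), symmetric=True)
    constraints = [N >= eps,                          # entry-wise positive
                   C + N << 0]                        # R'SR + N negative semidefinite
    prob = cp.Problem(cp.Minimize(0), constraints)
    prob.solve()
    return prob.status in ("optimal", "optimal_inaccurate")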

Remark 8 (Computational burden). It is noted that, in general, since |SK| = K!, Algorithm 1 requires studying the feasibility of M · K! inequalities, which involve M · K · K! non-negative scalars and K symmetric positive definite matrices. It is clear that the computational burden grows quickly as a function of the number K of chosen base quadratics. However, fixing J ≥ 1 and S1, . . . , SJ ⊂ {1, . . . , K} in (45) (thus fixing a particular max-min structure), the computational burden can be reduced. In [15] we showed how the number of required inequalities depends on the choice of the sets Sj in the case of three quadratics, i.e., K = 3.


Example 1 - Continued: Item (i). We have already proved that there does not exist a convex Lyapunov function for system (4). We will construct a max-min of quadratics Lyapunov function V of the form (3). In other words, we have fixed K = 3, J = 2, S1 = {1, 2} and S2 = {3}. Using Algorithm 0 we construct the function Φ, which reads Φ(ρ1) = Φ(ρ2) = Φ(ρ3) = Φ(ρ4) = 3, where ρ1 = (1, 2, 3), ρ2 = (1, 3, 2), ρ3 = (2, 1, 3), ρ4 = (2, 3, 1); Φ(ρ5) = 1, where ρ5 = (3, 1, 2); and Φ(ρ6) = 2, where ρ6 = (3, 2, 1). In these cases, the matrix inequalities of Algorithm 1 (after the reductions outlined in Remark 8) read

A_2^⊤ P_2 + P_2 A_2 + τ_1(P_2 − P_3) + τ_2(P_1 − P_2) + β_1 Q_2 < 0,
A_1^⊤ P_1 + P_1 A_1 + τ_3(P_1 − P_3) + τ_4(P_2 − P_1) + β_2 Q_1 < 0,
A_3^⊤ P_3 + P_3 A_3 + τ_5(P_3 − P_1) + β_3 Q_3 < 0,
A_3^⊤ P_3 + P_3 A_3 + τ_6(P_3 − P_2) + τ_7(P_1 − P_3) + β_4 Q_4 < 0,
τ_k ≥ 0, ∀k ∈ {1, . . . , 7}, β_i ≥ 0, ∀i ∈ {1, . . . , 4}, P_1, P_2, P_3 > 0.

Using numerical solvers, it follows that these inequalities are feasible, and in particular they are satisfied by

P_1 = [ 5 0 ; 0 1 ],  P_2 = [ 1 0 ; 0 5 ],  P_3 = [ 3 2 ; 2 3 ],   (57)

τ = (0.258, 0.102, 0.258, 0.102, 0.284, 0.193, 0.090) and βi = 0, ∀ i ∈ {1, . . . , 4}. A level set of V is plotted in Fig. 1. This proves that V in (3) with Pi as in (57) satisfies item (i) of Corollary 2.

6.3 Checking item (ii) of Corollary 2 in R^2.

To study GAS of system (43), we also need to check item (ii) of Corollary 2, which is computationally harder than item (i). We now discuss how this condition simplifies in the planar case, that is, when n = 2. To do so, let us analyze the geometry of the switching rule proposed in (44). To non-trivially satisfy Assumption 1, we will suppose that the matrices Q1, . . . , QM ∈ Sym(R^2) are sign indefinite. We will characterize the sets Di in (44) using the following result.

Lemma 3. Given any sign indefinite matrix Q ∈ Sym(R^2), there exist θ_1, θ_2 ∈ R^2 \ {0}, with θ_2 ∉ span(θ_1), such that

Q = θ_1 θ_2^⊤ + θ_2 θ_1^⊤.   (58)

Sketch of the proof. Let us denote by λ_− < 0 < λ_+ the eigenvalues of Q, and by v_−, v_+ ∈ R^2 the corresponding unit eigenvectors (|v_−| = |v_+| = 1). By the spectral decomposition we have Q = λ_+ v_+ v_+^⊤ + λ_− v_− v_−^⊤. Setting η = √( −λ_− / (λ_+ − λ_−) ) > 0 and κ = √( (λ_+ − λ_−) / 2 ) > 0, and choosing θ_1 = κ [ √(1 − η^2) v_+ − η v_− ] and θ_2 = κ [ √(1 − η^2) v_+ + η v_− ], it is seen that (58) holds.
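The decomposition can be checked numerically. The sketch below (ours, assuming numpy) builds θ_1, θ_2 as in the proof, with κ = √((λ_+ − λ_−)/2), which is the value making (58) hold with the stated choice of θ_1, θ_2, and verifies the identity on a sample sign indefinite matrix.

import numpy as np

def lemma3_factors(Q):
    lam, V = np.linalg.eigh(Q)                     # lam[0] < 0 < lam[1] for a sign indefinite Q
    l_minus, l_plus = lam[0], lam[1]
    v_minus, v_plus = V[:, 0], V[:, 1]
    eta = np.sqrt(-l_minus / (l_plus - l_minus))
    kappa = np.sqrt((l_plus - l_minus) / 2.0)
    theta1 = kappa * (np.sqrt(1 - eta**2) * v_plus - eta * v_minus)
    theta2 = kappa * (np.sqrt(1 - eta**2) * v_plus + eta * v_minus)
    return theta1, theta2

Q = np.array([[0.0, 1.0], [1.0, 0.0]])             # a sign indefinite example
t1, t2 = lemma3_factors(Q)
assert np.allclose(np.outer(t1, t2) + np.outer(t2, t1), Q)   # identity (58)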

Lemma 3 allows checking algorithmically condition (ii) of Corollary 2 in the planar case. This is done in two steps.

Step 2a. Given M sign indefinite matrices Q1, . . . , QM ∈ Sym(R^2) that satisfy Assumption 1, the non-overlapping and covering conditions in Assumption 1 imply that the matrices Qi, i = 1, . . . , M, decomposed as in (58), can be suitably ordered³ in such a way that

Q_i = θ_i θ_{i+1}^⊤ + θ_{i+1} θ_i^⊤,  for i = 1, . . . , M − 1,
Q_M = θ_M (−θ_1)^⊤ + (−θ_1) θ_M^⊤,   (59)

for some suitable selection of linearly independent vectors θ_1, . . . , θ_M ∈ R^2 ∩ {(x_1, x_2) ∈ R^2 | x_1 ≥ 0}. For each i ∈ {1, . . . , M }, take v_i ∈ R^2 as a unit vector generating the subspace θ_i^⊥ := {x ∈ R^2 | θ_i^⊤ x = 0}. ⋄

³For the ordering of the matrices Qi, via the vectors θ_i in (58), we can associate an angle with each one of the lines θ_i, i = 1, . . . , M,


Step 2b. Consider V ∈ Mmq(P1, . . . , PK) satisfying condition (i) of Corollary 2. For every v_i such that αV(v_i) = {ℓ^i_1, ℓ^i_2} is multivalued, solve the system

0 ≤ λ ≤ 1,
λ v_i^⊤ (P_{ℓ^i_2} − P_{ℓ^i_1}) A_{i−1} v_i + (1 − λ) v_i^⊤ (P_{ℓ^i_2} − P_{ℓ^i_1}) A_i v_i = 0,   (60)

(with i − 1 = M if i = 1), and denote by Λ_i ⊂ [0, 1] the set of solutions of (60) for v_i (possibly empty). ⋄

In the following we formally prove the effectiveness of Steps 2a and 2b.

Proposition 6. Consider A1, . . . , AM ∈ R^{2×2} and M sign indefinite matrices Q1, . . . , QM ∈ Sym(R^2) that satisfy Assumption 1 and are parameterized as in (59). Suppose that there exist P1, . . . , PK > 0 such that V ∈ Mmq(P1, . . . , PK) satisfies condition (i) of Corollary 2. If, for all i ∈ {1, . . . , M } such that αV(v_i) is multivalued, we have

λ v_i^⊤ (P_{ℓ^i_1} A_{i−1} + A_{i−1}^⊤ P_{ℓ^i_1}) v_i + (1 − λ) v_i^⊤ (P_{ℓ^i_1} A_i + A_i^⊤ P_{ℓ^i_1}) v_i < 0,   (61)

for all λ ∈ Λ_i, then item (ii) of Corollary 2 holds.

Proof. Recalling (44), the parametrization in (59) characterizes the points x where the map I(x) is multivalued. From (59), we have that Di ∩ Di+1 = θ_{i+1}^⊥ for all i = 1, . . . , M − 1, and DM ∩ D1 = θ_1^⊥. Thus

I(x) =
  {i, i + 1},  if x ∈ θ_{i+1}^⊥, i = 1, . . . , M − 1,
  {1, M },     if x ∈ θ_1^⊥,
  {i},         if x^⊤ Q_i x > 0.   (62)

Let us now consider a function V ∈ Mmq(P1, . . . , PK) that satisfies condition (i) of Corollary 2. From Remark 5, for any max-min function V ∈ Mmq(P1, . . . , PK), the value of the map αV : R^2 ⇒ {1, . . . , K} has at most 2 elements. To check item (ii) of Corollary 2 we must consider all the points x ∈ R^2 such that I(x) and αV(x) are multivalued. As shown in (62), the set of points where the map I is multivalued coincides with the union of the M lines θ_1^⊥, . . . , θ_M^⊥. From Remark 5, the homogeneity of F^sw_lin and of V ∈ Mmq(P1, . . . , PK) implies that it is sufficient to check condition (ii) of Corollary 2 only for the chosen unit vectors v_1, . . . , v_M, which span θ_1^⊥, . . . , θ_M^⊥ respectively. We conclude by noting that, for each i ∈ {1, . . . , M } such that αV(v_i) is multivalued, system (60) corresponds to (47), and equation (50) follows from (61) by selecting a small enough ε > 0.

Proposition 6 shows that for a planar linear switched system (43), (44) involving M subsystems, it is sufficient to identify the unit vectors v_i, i = 1, . . . , M, generating the switching lines, and verify inequality (50) at these M points. Item (ii) of Corollary 2 then follows from homogeneity. This result allows us to conclude the analysis of Example 1.
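As a rough illustration of how Steps 2a-2b and Proposition 6 can be automated (our sketch, assuming numpy; all names are illustrative), the function below computes, at a single switching direction v_i, the solution set Λ_i of (60) and evaluates the left-hand side of (61) at each of its elements.

import numpy as np

def planar_check(v, P_l1, P_l2, A_prev, A_cur, tol=1e-9):
    # v: unit vector spanning the switching line theta_i-perp;
    # P_l1, P_l2: the two quadratics active at v (alpha_V(v) = {l1, l2});
    # A_prev, A_cur: the matrices A_{i-1} and A_i of the two neighboring modes.
    D = P_l2 - P_l1
    a = v @ D @ A_prev @ v                    # coefficient multiplying lambda in (60)
    b = v @ D @ A_cur @ v                     # coefficient multiplying (1 - lambda) in (60)
    if abs(a - b) < tol:
        # degenerate case a = b: (60) reads a = 0, so either every lambda in [0, 1]
        # solves it (check a grid of values) or no lambda does
        Lambda = list(np.linspace(0.0, 1.0, 11)) if abs(a) < tol else []
    else:
        lam = b / (b - a)                     # unique solution of lam*a + (1 - lam)*b = 0
        Lambda = [lam] if 0.0 <= lam <= 1.0 else []
    decreasing = True
    for lam in Lambda:                        # condition (61) of Proposition 6
        lhs = lam * (v @ (P_l1 @ A_prev + A_prev.T @ P_l1) @ v) \
              + (1 - lam) * (v @ (P_l1 @ A_cur + A_cur.T @ P_l1) @ v)
        decreasing = decreasing and (lhs < 0)
    return Lambda, decreasing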

Example 1 - Continued: Item (ii). As a last step to show that the origin of (4), (5) is GAS, we have to check condition (ii) of Corollary 2. Since the signal (5) can be rewritten in the form (59), we can follow Steps 2a and 2b, taking v_1 ∈ S13, v_2 ∈ S21, v_3 ∈ S32 such that |v_j| = 1 for all j ∈ {1, 2, 3}. Considering system (60), it is easily checked that Λ_j = ∅, ∀ j ∈ {1, 2, 3}. Recalling (48), V̇_F(v_j) = ∅ for j = 1, 2, 3.

Then, by Proposition 6, the function V in (3) is a Lyapunov function for system (4), which certifies GAS.

Remark 9. In Example 1, it can be shown that V in (3) does not satisfy the condition V̇_{F^sw}(x) < 0 for some x ∈ R^2: consider the point v_1 ∈ S13, where we have shown that V̇_F(v_1) = ∅. Since v_1 ∈ S13, we have ∂V(v_1) = co{2P_1v_1, 2P_3v_1} and F^sw(v_1) = co{A_1v_1, A_3v_1}. Straightforward computations yield v_1^⊤(P_3A_1 + A_1^⊤P_3)v_1 = 8.65 > 0, and thus there exist w ∈ ∂V(v_1) and f ∈ F^sw(v_1) such that 0 < w^⊤f ∈ V̇_{F^sw}(v_1). This implies that Corollary 1 is not applicable and well illustrates the fact that Corollary 2 provides less conservative conditions.

Figures

Figure 1: The solid blue line shows a trajectory of system (4) starting at z_0 and moving in the clockwise direction.
Figure 2: A geometric interpretation of the set V̇_{F^sw}(x) in R^2.
Figure 3: Trajectories of switched system (40) in Example 3.
Figure 4: The evolution of the Lyapunov function V along solutions φ_i, i = 1, . . . , 5.
