DOI 10.1007/s10898-012-9982-4

Superimposed optimization methods for the mixed equilibrium problem and variational inclusion

Yonghong Yao · Yeong-Cheng Liou · Ngai-Ching Wong

Received: 20 March 2012 / Accepted: 11 September 2012 / Published online: 21 September 2012

© Springer Science+Business Media, LLC. 2012

Abstract The purpose of this paper is to construct two superimposed optimization methods for solving the mixed equilibrium problem and variational inclusion. We show that the proposed superimposed methods converge strongly to a solution of some optimization problem.

Note that our methods do not involve any projection.

Keywords Mixed equilibrium problem · Variational inclusion · Monotone operator · Superimposed optimization method · Resolvent

Mathematics Subject Classification (2000) 58E35 · 47J20 · 49J40 · 47H09 · 65J15

1 Introduction

Let $H$ be a real Hilbert space with inner product $\langle\cdot,\cdot\rangle$ and norm $\|\cdot\|$. Let $C$ be a nonempty closed convex subset of $H$. Let $B : C \to H$ be a nonlinear mapping and $K : C \times C \to \mathbb{R}$ be a bifunction. The equilibrium problem is to find an $x$ in $C$ such that

$$K(x,y) + \langle Bx, y - x\rangle \ge 0, \quad \forall y \in C.$$

The theory of equilibrium problems provides a natural, novel and unified framework to study a wide class of applications. The ideas and techniques involved are being used in

Y. Yao
Department of Mathematics, Tianjin Polytechnic University, Tianjin 300387, China
e-mail: yaoyonghong@yahoo.cn

Y.-C. Liou
Department of Information Management, Cheng Shiu University, Kaohsiung 833, Taiwan
e-mail: simplex_liou@hotmail.com

N.-C. Wong (corresponding author)
Department of Applied Mathematics, National Sun Yat-sen University, Kaohsiung 80424, Taiwan
e-mail: wong@math.nsysu.edu.tw


a variety of diverse areas and proved to be productive and innovative. It has been shown by Blum and Oettli [1] and Noor and Oettli [15] that variational inequalities and mathematical programming problems can be viewed as special realizations of abstract equilibrium problems. Equilibrium problems have numerous applications, including but not limited to problems in economics, game theory, finance, traffic analysis, circuit network analysis and mechanics. There are a great number of numerical methods for solving equilibrium problems under various assumptions on $K$ and $B$. Please see, e.g., [2,4,5,7,11–13,17,18,21,24,27–29,31–34].

In this paper, we are interested in solving the equilibrium problem with those $K$ given by
$$K(x,y) = F(x,y) + G(x,y),$$
where $F, G : C \times C \to \mathbb{R}$ are two bifunctions satisfying some special properties (see Sect. 2). This is the well-known mixed equilibrium problem, i.e., to find an $x$ in $C$ such that
$$F(x,y) + G(x,y) + \langle Bx, y - x\rangle \ge 0, \quad \forall y \in C. \qquad (1.1)$$
The solution set of (1.1) is denoted by $EP(F,G,B)$.

On the other hand, let $A : H \to H$ be a single-valued nonlinear mapping and $R : H \to 2^H$ be a set-valued mapping. We consider the following variational inclusion, which is to find a point $x$ in $H$ such that
$$0 \in A(x) + R(x), \qquad (1.2)$$
where $0$ is the zero vector in $H$. The set of solutions of problem (1.2) is denoted by $(A+R)^{-1}0$.

If $H = \mathbb{R}^m$, problem (1.2) becomes the generalized equation introduced by Robinson [19]. If $A = 0$, problem (1.2) becomes the inclusion problem introduced by Rockafellar [20]. Problem (1.2) provides a convenient framework for a unified study of mathematical programming, complementarity, variational inequalities, optimal control, mathematical economics, equilibria, game theory, etc. Meanwhile, various types of variational inclusion problems have been extended and generalized.

In 1997, Combettes and Hirstoaga [6] introduced an iterative method solving equilibrium problems, and proved a strong convergence theorem. Subsequently, Takahashi and Takahashi [23], Yao et al. [30], and Zeng and Yao [35] considered several iterative schemes for finding a common element of the set of solutions of the equilibrium problem and the set of common fixed points of a family of finitely or infinitely many nonexpansive mappings. Moreover, Cianciaruso et al. [3] presented an iterative method for finding common solutions of an equilibrium problem and a variational inequality.

Recently, Zhang et al. [36] introduced a new iterative scheme for finding a common element of the set of solutions of problem (1.1) and the set of fixed points of nonexpansive mappings in Hilbert spaces. Peng et al. [16] introduced another iterative scheme by the viscosity approximation method for finding a common element of the set of solutions of a variational inclusion with set-valued maximal monotone mapping and inverse strongly monotone mappings, the set of solutions of an equilibrium problem, and the set of fixed points of a nonexpansive mapping. For more related works, please see also [8–10,22].

Motivated and inspired by the above works, the purpose of this paper is to construct two superimposed optimization methods for solving the mixed equilibrium problem and variational inclusion. Consequently, we show that the suggested superimposed methods converge strongly to a solution of some optimization problem. Note that our methods do not use any projection.

We would like to express our deep thanks to the referees. With their valuable comments and suggestions, we have made several improvements in the final version.


2 Preliminaries

Let $H$ be a real Hilbert space with inner product $\langle\cdot,\cdot\rangle$ and norm $\|\cdot\|$. We write $x_n \rightharpoonup x$ and $x_n \to x$ for the weak and strong convergence of $\{x_n\}$ to $x$ in $H$, respectively. The set of fixed points of a mapping $T$ on a subset $C$ of $H$ is $\mathrm{Fix}(T) = \{x \in C : Tx = x\}$.

Let $R$ be a mapping of $H$ into $2^H$. The effective domain of $R$ is $\mathrm{dom}(R) = \{x \in H : Rx \neq \emptyset\}$.

A multi-valued mapping $R$ is said to be a monotone operator on $H$ if
$$\langle x - y, u - v\rangle \ge 0, \quad \forall x, y \in \mathrm{dom}(R),\ \forall u \in Rx,\ \forall v \in Ry.$$

A monotone operator $R$ on $H$ is said to be maximal if its graph is not strictly contained in the graph of any other monotone operator on $H$. Let $R$ be a maximal monotone operator on $H$ and let $R^{-1}0 = \{x \in H : 0 \in Rx\}$.

For a maximal monotone operator $R$ on $H$ and $\lambda > 0$, we may define a single-valued operator
$$J_\lambda^R = (I + \lambda R)^{-1} : H \to \mathrm{dom}(R),$$
which is called the resolvent of $R$ for $\lambda$. It is known that the resolvent $J_\lambda^R$ is firmly nonexpansive, i.e.,
$$\|J_\lambda^R x - J_\lambda^R y\|^2 \le \langle J_\lambda^R x - J_\lambda^R y,\, x - y\rangle, \quad \forall x, y \in H,$$
and $R^{-1}0 = \mathrm{Fix}(J_\lambda^R)$ for all $\lambda > 0$.
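As a concrete one-dimensional illustration (ours, not from the paper): for $R = \partial|\cdot|$ on $\mathbb{R}$, the resolvent $J_\lambda^R$ is the familiar soft-thresholding map, and firm nonexpansivity can be checked numerically on random pairs:

```python
import random

lam = 0.7  # lambda > 0

def resolvent(x):
    # J_lam^R = (I + lam * d|.|)^{-1} is soft-thresholding when R is the
    # subdifferential of the absolute value on the real line
    if x > lam:
        return x - lam
    if x < -lam:
        return x + lam
    return 0.0

random.seed(0)
for _ in range(1000):
    x, y = random.uniform(-5, 5), random.uniform(-5, 5)
    jx, jy = resolvent(x), resolvent(y)
    # firm nonexpansivity: |Jx - Jy|^2 <= <Jx - Jy, x - y>
    assert (jx - jy) ** 2 <= (jx - jy) * (x - y) + 1e-12
print("firm nonexpansivity verified on 1000 random pairs")
```

Note also that $\mathrm{Fix}(J_\lambda^R) = \{0\} = R^{-1}0$ here, matching the identity above.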

Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. Recall that a mapping $S : C \to C$ is said to be nonexpansive if
$$\|Sx - Sy\| \le \|x - y\|, \quad \forall x, y \in C.$$
A mapping $A : C \to H$ is $\alpha$-inverse-strongly monotone if there exists $\alpha > 0$ such that
$$\langle Ax - Ay,\, x - y\rangle \ge \alpha\|Ax - Ay\|^2, \quad \forall x, y \in C.$$
It is clear that any $\alpha$-inverse-strongly monotone mapping is monotone and $\frac{1}{\alpha}$-Lipschitz continuous.
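For completeness, the Lipschitz claim is one application of the Cauchy–Schwarz inequality (a standard argument spelled out here):

```latex
\alpha\|Ax - Ay\|^2 \le \langle Ax - Ay,\, x - y\rangle \le \|Ax - Ay\|\,\|x - y\|
\quad\Longrightarrow\quad
\|Ax - Ay\| \le \frac{1}{\alpha}\|x - y\|,
```

and monotonicity is immediate since the left-hand side $\alpha\|Ax - Ay\|^2$ is nonnegative.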

Throughout this paper, we assume that the two bifunctions $F, G : C \times C \to \mathbb{R}$ satisfy the following conditions:

(F1) $F(x,x) = 0$ for all $x$ in $C$;

(F2) $F$ is monotone, i.e., $F(x,y) + F(y,x) \le 0$ for all $x, y$ in $C$;

(F3) for each $x, y, z$ in $C$, we have $\limsup_{t \to 0^+} F(tz + (1-t)x,\, y) \le F(x,y)$;

(F4) for each $x$ in $C$, the function $y \mapsto F(x,y)$ is convex and weakly lower semicontinuous.

(G1) $G(x,x) = 0$ for all $x$ in $C$;

(G2) $G$ is monotone, and weakly upper semicontinuous in the first variable;

(G3) $G$ is convex in the second variable.

(H) For fixed $\mu > 0$ and $x$ in $C$, there exist a bounded set $K \subseteq C$ and $a$ in $K$ such that
$$F(a,z) + G(z,a) + \frac{1}{\mu}\langle a - z,\, z - x\rangle < 0, \quad \forall z \in C \setminus K.$$


Lemma 2.1 [3] Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. Let $F, G : C \times C \to \mathbb{R}$ be two bifunctions which satisfy conditions (F1)–(F4), (G1)–(G3) and (H). Let $\mu > 0$ and $x \in C$. Then, there exists a unique $z := T_\mu x$ in $C$ such that
$$F(z,y) + G(z,y) + \frac{1}{\mu}\langle y - z,\, z - x\rangle \ge 0, \quad \forall y \in C.$$
Furthermore,

(i) the single-valued map $T_\mu$ thus defined is firmly nonexpansive, i.e.,
$$\|T_\mu x - T_\mu y\|^2 \le \langle T_\mu x - T_\mu y,\, x - y\rangle, \quad \forall x, y \in H;$$

(ii) $EP(F,G,0)$ is closed and convex, and $EP(F,G,0) = \mathrm{Fix}(T_\mu)$.

Lemma 2.2 [14] Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. Let a mapping $A : C \to H$ be $\alpha$-inverse strongly monotone and $\lambda > 0$ be a constant. Then, we have
$$\|(I - \lambda A)x - (I - \lambda A)y\|^2 \le \|x - y\|^2 + \lambda(\lambda - 2\alpha)\|Ax - Ay\|^2, \quad \forall x, y \in C.$$
In particular, if $0 < \lambda \le 2\alpha$, then $I - \lambda A$ is nonexpansive.

Lemma 2.3 [26] Assume $\{a_n\}$ is a sequence of nonnegative real numbers such that
$$a_{n+1} \le (1 - \gamma_n)a_n + \delta_n,$$
where $\{\gamma_n\}$ is a sequence in $(0,1)$ and $\{\delta_n\}$ is a real sequence such that

(1) $\sum_{n=1}^{\infty} \gamma_n = +\infty$;

(2) $\limsup_{n \to \infty} \delta_n/\gamma_n \le 0$ or $\sum_{n=1}^{\infty} |\delta_n| < +\infty$.

Then $\lim_{n \to \infty} a_n = 0$.
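As a quick numerical illustration of Lemma 2.3 (our addition, not part of the paper): take $\gamma_n = 1/(n+2)$, so that $\sum_n \gamma_n = +\infty$, and $\delta_n = \gamma_n^2$, so that $\delta_n/\gamma_n \to 0$. Even driving $a_n$ by the worst case, with equality in the recursion, forces $a_n \to 0$:

```python
a = 10.0  # a_0, any nonnegative starting value
n_steps = 200000
for n in range(n_steps):
    gamma = 1.0 / (n + 2)        # gamma_n in (0,1), sum gamma_n diverges
    delta = gamma * gamma        # delta_n / gamma_n = gamma_n -> 0
    a = (1 - gamma) * a + delta  # worst case: recursion holds with equality
print(a)  # decays toward 0 as n grows
```

A short calculation shows the homogeneous part decays like $a_0/n$ and the forcing contributes on the order of $(\log n)/n$, consistent with the printed value.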

3 Results

Let $C$ be a nonempty closed and convex subset of a real Hilbert space $H$. Let $A, B$ be two nonlinear operators of $C$ into $H$. Let $R$ be a maximal monotone operator on $H$ with the resolvent $J_\lambda^R = (I + \lambda R)^{-1}$. Let $F, G : C \times C \to \mathbb{R}$ be two bifunctions. Set
$$\Omega := EP(F,G,B) \cap (A+R)^{-1}0.$$
We consider the following optimization problem:
$$\min\{\|x\| : x \in \Omega\}, \qquad (3.1)$$
that is, finding the minimum-norm element of $\Omega$; such elements arise, for instance, as the least-squares solutions to constrained linear inverse problems.

Throughout, we assume $\Omega \neq \emptyset$, together with the following conditions.

(1) $0 < a \le \lambda \le b < 2\alpha$ and $0 < c \le \mu \le d < 2\beta$ are all constants.

(2) $A, B$ are $\alpha$-, $\beta$-inverse strongly monotone mappings of $C$ into $H$, respectively.

(3) The effective domain of $R$ is included in $C$.

(4) $F$ and $G$ satisfy conditions (F1)–(F4), (G1)–(G3) and (H).

Let $T_\mu$ be the single-valued firmly nonexpansive map defined in Lemma 2.1. It is easy to see that
$$z = T_\mu(I - \mu B)z = J_\lambda^R(I - \lambda A)z, \quad \forall z \in \Omega. \qquad (3.2)$$
In order to solve (3.1), we introduce the following superimposed optimization method:
$$x_t = T_\mu(I - \mu B)J_\lambda^R\big((1-t)I - \lambda A\big)x_t, \quad t \in \left(0,\ 1 - \frac{\lambda}{2\alpha}\right).$$

Note that the net $\{x_t\}$ above is well defined. More precisely, for any $t$ in $(0,\, 1 - \frac{\lambda}{2\alpha})$, we define a mapping
$$S_t := T_\mu(I - \mu B)J_\lambda^R\big((1-t)I - \lambda A\big).$$
Since $T_\mu$, $J_\lambda^R$, $I - \mu B$ and $I - \frac{\lambda}{1-t}A$ (see Lemmas 2.1 and 2.2) are all nonexpansive, we have, for any $x, y$ in $C$,
$$\begin{aligned}
\|S_t x - S_t y\| &= \left\|T_\mu(I - \mu B)J_\lambda^R\big((1-t)I - \lambda A\big)x - T_\mu(I - \mu B)J_\lambda^R\big((1-t)I - \lambda A\big)y\right\| \\
&\le \left\|\big((1-t)I - \lambda A\big)x - \big((1-t)I - \lambda A\big)y\right\| \\
&= (1-t)\left\|\left(x - \frac{\lambda}{1-t}Ax\right) - \left(y - \frac{\lambda}{1-t}Ay\right)\right\| \\
&\le (1-t)\|x - y\|.
\end{aligned}$$
Thus, the mapping $S_t$ is a contraction on $C$, and consequently there exists a unique fixed point $x_t$ of $S_t$ in $C$.
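Because $S_t$ is a $(1-t)$-contraction, the fixed point $x_t$ can be approximated by Picard iteration. The sketch below is our own illustration, not from the paper: it instantiates the maps with the concrete data worked out in Example 3.5 ($J_\lambda^R x = x/(1+\lambda)$, $T_\mu x = x + \mu$, $Ax = -1$, $Bx = \max\{0,x\}$) and checks the iterate against the closed-form $x_t$ computed there.

```python
lam, mu, t = 1.0, 0.5, 0.1                   # lam < 2*alpha = 2, mu < 2*beta = 2, t < 1 - lam/2
resolvent_R = lambda x: x / (1 + lam)        # J_lam^R for R x = {x}
T_mu        = lambda x: x + mu               # resolvent map of F(x,y) = x - y, G = 0
A           = lambda x: -1.0
B           = lambda x: max(0.0, x)

def S_t(x):
    # S_t = T_mu (I - mu B) J_lam^R ((1-t) I - lam A)
    w = resolvent_R((1 - t) * x - lam * A(x))
    return T_mu(w - mu * B(w))

x = 0.0
for _ in range(200):                         # Picard iteration; contraction factor <= 1 - t
    x = S_t(x)

closed_form = (lam + mu) / (lam + mu + t * (1 - mu))
print(x, closed_form)                        # the two values agree
```

As $t \to 0^+$ the computed $x_t$ approaches $1$, the minimum-norm element of $\Omega$ in that example.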

Theorem 3.1 The net $\{x_t\}$ defined by
$$x_t = T_\mu(I - \mu B)J_\lambda^R\big((1-t)I - \lambda A\big)x_t, \quad t \in \left(0,\ 1 - \frac{\lambda}{2\alpha}\right), \qquad (3.3)$$
converges strongly, as $t \to 0^+$, to the unique point $x^*$ in $\Omega$ of minimum norm. Moreover, we have
$$\|x^*\|^2 \le \langle x^*, z\rangle, \quad \forall z \in \Omega.$$

Proof Step 1. We first show that $\{x_t\}$ and $\{Ax_t\}$ are both uniformly bounded, and
$$\lim_{t \to 0^+}\|Ax_t - Az\| = 0, \quad \forall z \in \Omega. \qquad (3.4)$$

Let $z \in \Omega$. Note that
$$z = J_\lambda^R(z - \lambda Az) = J_\lambda^R\left(tz + (1-t)\left(z - \frac{\lambda Az}{1-t}\right)\right).$$

From (3.2), (3.3), the nonexpansivity of $T_\mu$, $J_\lambda^R$, $I - \mu B$ and $I - \lambda A/(1-t)$ (Lemmas 2.1 and 2.2), and the convexity of $\|\cdot\|^2$, we obtain
$$\begin{aligned}
\|x_t - z\|^2 &= \left\|T_\mu(I - \mu B)J_\lambda^R\big((1-t)I - \lambda A\big)x_t - T_\mu(I - \mu B)z\right\|^2 \\
&\le \left\|J_\lambda^R\big((1-t)x_t - \lambda Ax_t\big) - z\right\|^2 \qquad (3.5) \\
&= \left\|J_\lambda^R\left((1-t)\left(x_t - \frac{\lambda Ax_t}{1-t}\right)\right) - J_\lambda^R\left(tz + (1-t)\left(z - \frac{\lambda Az}{1-t}\right)\right)\right\|^2 \\
&\le \left\|(1-t)\left(x_t - \frac{\lambda Ax_t}{1-t}\right) - tz - (1-t)\left(z - \frac{\lambda Az}{1-t}\right)\right\|^2 \\
&= \left\|(1-t)\left[\left(x_t - \frac{\lambda Ax_t}{1-t}\right) - \left(z - \frac{\lambda Az}{1-t}\right)\right] + t(-z)\right\|^2 \qquad (3.6) \\
&\le (1-t)\left\|\left(x_t - \frac{\lambda Ax_t}{1-t}\right) - \left(z - \frac{\lambda Az}{1-t}\right)\right\|^2 + t\|z\|^2 \qquad (3.7) \\
&\le (1-t)\|x_t - z\|^2 + t\|z\|^2.
\end{aligned}$$
It follows that
$$\|x_t - z\| \le \|z\|.$$
Therefore, $\{x_t\}$ is bounded. Since $A$ is $\alpha$-inverse strongly monotone, it is $\frac{1}{\alpha}$-Lipschitz continuous. We deduce immediately that $\{Ax_t\}$ is also bounded.

On the other hand, from the $\alpha$-inverse strong monotonicity of $A$, we derive
$$\begin{aligned}
&\left\|\left(x_t - \frac{\lambda Ax_t}{1-t}\right) - \left(z - \frac{\lambda Az}{1-t}\right)\right\|^2 \\
&\quad = \left\|(x_t - z) - \frac{\lambda}{1-t}(Ax_t - Az)\right\|^2 \\
&\quad = \|x_t - z\|^2 - \frac{2\lambda}{1-t}\langle Ax_t - Az,\, x_t - z\rangle + \frac{\lambda^2}{(1-t)^2}\|Ax_t - Az\|^2 \\
&\quad \le \|x_t - z\|^2 - \frac{2\alpha\lambda}{1-t}\|Ax_t - Az\|^2 + \frac{\lambda^2}{(1-t)^2}\|Ax_t - Az\|^2 \\
&\quad = \|x_t - z\|^2 + \frac{\lambda}{(1-t)^2}\big(\lambda - 2(1-t)\alpha\big)\|Ax_t - Az\|^2. \qquad (3.8)
\end{aligned}$$
It follows from (3.7) and (3.8) that
$$\|x_t - z\|^2 \le (1-t)\left[\|x_t - z\|^2 + \frac{\lambda}{(1-t)^2}\big(\lambda - 2(1-t)\alpha\big)\|Ax_t - Az\|^2\right] + t\|z\|^2.$$
Consequently,
$$\frac{\lambda}{1-t}\big(2(1-t)\alpha - \lambda\big)\|Ax_t - Az\|^2 \le t\|z\|^2 \to 0, \quad \text{as } t \to 0^+.$$
As a result,
$$\lim_{t \to 0^+}\|Ax_t - Az\| = 0.$$

Step 2. Next, we show
$$\lim_{t \to 0^+}\left\|x_t - J_\lambda^R\big((1-t)x_t - \lambda Ax_t\big)\right\| = 0. \qquad (3.9)$$

Using the firm nonexpansivity of $J_\lambda^R$, for any $z$ in $\Omega$ we have
$$\begin{aligned}
&\left\|J_\lambda^R\big((1-t)x_t - \lambda Ax_t\big) - z\right\|^2 \\
&\quad = \left\|J_\lambda^R\big((1-t)x_t - \lambda Ax_t\big) - J_\lambda^R(z - \lambda Az)\right\|^2 \\
&\quad \le \left\langle (1-t)x_t - \lambda Ax_t - (z - \lambda Az),\ J_\lambda^R\big((1-t)x_t - \lambda Ax_t\big) - z\right\rangle \\
&\quad = \frac{1}{2}\Big[\big\|(1-t)x_t - \lambda Ax_t - (z - \lambda Az)\big\|^2 + \left\|J_\lambda^R\big((1-t)x_t - \lambda Ax_t\big) - z\right\|^2 \\
&\qquad\qquad - \left\|(1-t)x_t - \lambda(Ax_t - Az) - J_\lambda^R\big((1-t)x_t - \lambda Ax_t\big)\right\|^2\Big].
\end{aligned}$$
By the nonexpansivity of $I - \lambda A/(1-t)$, we have
$$\begin{aligned}
\big\|(1-t)x_t - \lambda Ax_t - (z - \lambda Az)\big\|^2
&= \left\|(1-t)\left[\left(x_t - \frac{\lambda Ax_t}{1-t}\right) - \left(z - \frac{\lambda Az}{1-t}\right)\right] + t(-z)\right\|^2 \\
&\le (1-t)\left\|\left(x_t - \frac{\lambda Ax_t}{1-t}\right) - \left(z - \frac{\lambda Az}{1-t}\right)\right\|^2 + t\|z\|^2 \\
&\le (1-t)\|x_t - z\|^2 + t\|z\|^2.
\end{aligned}$$
Thus,
$$\begin{aligned}
\left\|J_\lambda^R\big((1-t)x_t - \lambda Ax_t\big) - z\right\|^2
\le \frac{1}{2}\Big[&(1-t)\|x_t - z\|^2 + t\|z\|^2 + \left\|J_\lambda^R\big((1-t)x_t - \lambda Ax_t\big) - z\right\|^2 \\
&- \left\|(1-t)x_t - J_\lambda^R\big((1-t)x_t - \lambda Ax_t\big) - \lambda(Ax_t - Az)\right\|^2\Big].
\end{aligned}$$
This together with (3.5) gives
$$\begin{aligned}
\|x_t - z\|^2 &\le \left\|J_\lambda^R\big((1-t)x_t - \lambda Ax_t\big) - z\right\|^2 \\
&\le (1-t)\|x_t - z\|^2 + t\|z\|^2 - \left\|(1-t)x_t - J_\lambda^R\big((1-t)x_t - \lambda Ax_t\big) - \lambda(Ax_t - Az)\right\|^2 \\
&= (1-t)\|x_t - z\|^2 + t\|z\|^2 - \left\|(1-t)x_t - J_\lambda^R\big((1-t)x_t - \lambda Ax_t\big)\right\|^2 \\
&\quad + 2\lambda\left\langle (1-t)x_t - J_\lambda^R\big((1-t)x_t - \lambda Ax_t\big),\ Ax_t - Az\right\rangle - \lambda^2\|Ax_t - Az\|^2 \\
&\le (1-t)\|x_t - z\|^2 + t\|z\|^2 - \left\|(1-t)x_t - J_\lambda^R\big((1-t)x_t - \lambda Ax_t\big)\right\|^2 \\
&\quad + 2\lambda\left\|(1-t)x_t - J_\lambda^R\big((1-t)x_t - \lambda Ax_t\big)\right\|\,\|Ax_t - Az\|. \qquad (3.10)
\end{aligned}$$
Hence,
$$\left\|(1-t)x_t - J_\lambda^R\big((1-t)x_t - \lambda Ax_t\big)\right\|^2 \le t\|z\|^2 + 2\lambda\left\|(1-t)x_t - J_\lambda^R\big((1-t)x_t - \lambda Ax_t\big)\right\|\,\|Ax_t - Az\|.$$
Since $\|Ax_t - Az\| \to 0$ as $t \to 0^+$ by Step 1, we deduce
$$\lim_{t \to 0^+}\left\|(1-t)x_t - J_\lambda^R\big((1-t)x_t - \lambda Ax_t\big)\right\| = 0.$$
Therefore,
$$\lim_{t \to 0^+}\left\|x_t - J_\lambda^R\big((1-t)x_t - \lambda Ax_t\big)\right\| = 0.$$

Step 3. We show that $\|x_t - T_\mu(I - \mu B)x_t\| \to 0$ as $t \to 0^+$. Set
$$u_t = J_\lambda^R\big((1-t)x_t - \lambda Ax_t\big), \quad \forall t \in \left(0,\ 1 - \frac{\lambda}{2\alpha}\right).$$
Then by (3.3), $x_t = T_\mu(I - \mu B)u_t$. It follows from Step 2 and the nonexpansivity of $T_\mu$ and $I - \mu B$ that
$$\|x_t - T_\mu(I - \mu B)x_t\| = \|T_\mu(I - \mu B)u_t - T_\mu(I - \mu B)x_t\| \le \|u_t - x_t\| \to 0.$$

Step 4. We show that $\{x_t\}$ converges strongly, as $t \to 0^+$, to the unique point $x^*$ in $\Omega$ of minimum norm.

From (3.6) and the nonexpansivity of $I - \frac{\lambda}{1-t}A$, we have
$$\begin{aligned}
\|x_t - z\|^2 &\le \left\|(1-t)\left[\left(x_t - \frac{\lambda}{1-t}Ax_t\right) - \left(z - \frac{\lambda}{1-t}Az\right)\right] - tz\right\|^2 \\
&= (1-t)^2\left\|\left(x_t - \frac{\lambda}{1-t}Ax_t\right) - \left(z - \frac{\lambda}{1-t}Az\right)\right\|^2 \\
&\quad - 2t(1-t)\left\langle z,\ \left(x_t - \frac{\lambda}{1-t}Ax_t\right) - \left(z - \frac{\lambda}{1-t}Az\right)\right\rangle + t^2\|z\|^2 \\
&\le (1-t)^2\|x_t - z\|^2 - 2t(1-t)\left\langle z,\ x_t - \frac{\lambda}{1-t}(Ax_t - Az) - z\right\rangle + t^2\|z\|^2 \\
&= (1-2t)\|x_t - z\|^2 - 2t(1-t)\left\langle z,\ x_t - \frac{\lambda}{1-t}(Ax_t - Az) - z\right\rangle + t^2\big(\|z\|^2 + \|x_t - z\|^2\big).
\end{aligned}$$
It follows that
$$\begin{aligned}
\|x_t - z\|^2 &\le -(1-t)\left\langle z,\ x_t - \frac{\lambda}{1-t}(Ax_t - Az) - z\right\rangle + \frac{t}{2}\big(\|z\|^2 + \|x_t - z\|^2\big) \\
&\le -\left\langle z,\ x_t - \frac{\lambda}{1-t}(Ax_t - Az) - z\right\rangle + \frac{t}{2}\big(\|z\|^2 + \|x_t - z\|^2\big) \\
&\quad + t\|z\|\left\|x_t - \frac{\lambda}{1-t}(Ax_t - Az) - z\right\| \\
&\le -\left\langle z,\ x_t - \frac{\lambda}{1-t}(Ax_t - Az) - z\right\rangle + tM. \qquad (3.11)
\end{aligned}$$
Here, $M$ is some constant such that
$$\frac{\|z\|^2 + \|x_t - z\|^2}{2} + \|z\|\left\|x_t - \frac{\lambda}{1-t}(Ax_t - Az) - z\right\| \le M, \quad \forall t \in \left(0,\ 1 - \frac{\lambda}{2\alpha}\right).$$
Let $x^*$ be any weak cluster point in $C$ of the bounded net $\{x_t\}$ as $t \to 0^+$. Assume $t_n \to 0^+$ in $(0,\, 1 - \frac{\lambda}{2\alpha})$ as $n \to \infty$, and $x_{t_n} \rightharpoonup x^*$. Put $x_n := x_{t_n}$ and $u_n := u_{t_n}$. From (3.11), we have
$$\|x_n - z\|^2 \le -\left\langle z,\ x_n - \frac{\lambda}{1-t_n}(Ax_n - Az) - z\right\rangle + t_n M, \quad \forall z \in \Omega. \qquad (3.12)$$


Now we show $x^* \in EP(F,G,B)$. Setting $v_n = T_\mu(I - \mu B)x_n$, for any $y$ in $C$ we have
$$F(v_n, y) + G(v_n, y) + \frac{1}{\mu}\big\langle y - v_n,\ v_n - (x_n - \mu Bx_n)\big\rangle \ge 0.$$
From the monotonicity of $F$, we have
$$G(v_n, y) + \left\langle y - v_n,\ \frac{v_n - x_n}{\mu} + Bx_n\right\rangle \ge F(y, v_n), \quad \forall y \in C. \qquad (3.13)$$
Put $z_s = sy + (1-s)x^* \in C$ for $s$ in $(0,1]$ and $y$ in $C$. It follows from the $\beta$-inverse strong monotonicity of $B$ and (3.13) that
$$\begin{aligned}
\langle z_s - v_n, Bz_s\rangle
&= \langle z_s - v_n,\ Bz_s - Bx_n\rangle + \langle Bx_n,\ z_s - v_n\rangle \\
&\ge \langle z_s - v_n,\ Bz_s - Bx_n\rangle + F(z_s, v_n) - G(v_n, z_s) - \frac{1}{\mu}\langle z_s - v_n,\ v_n - x_n\rangle \\
&= \langle z_s - v_n,\ Bz_s - Bv_n\rangle + \langle z_s - v_n,\ Bv_n - Bx_n\rangle + F(z_s, v_n) - G(v_n, z_s) - \frac{1}{\mu}\langle z_s - v_n,\ v_n - x_n\rangle \\
&\ge \langle z_s - v_n,\ Bv_n - Bx_n\rangle + F(z_s, v_n) - G(v_n, z_s) - \frac{1}{\mu}\langle z_s - v_n,\ v_n - x_n\rangle. \qquad (3.14)
\end{aligned}$$
Note that $\|Bv_n - Bx_n\| \le \frac{1}{\beta}\|v_n - x_n\| \to 0$ by Step 3, and also that $v_n \rightharpoonup x^*$ weakly. Letting $n \to \infty$ in (3.14), we have from the assumptions (F4) and (G2) that
$$\langle z_s - x^*, Bz_s\rangle \ge F(z_s, x^*) - G(x^*, z_s). \qquad (3.15)$$
From (F1), (F3), (G1), (G2), (G3) and (3.15), we also have
$$\begin{aligned}
0 &= F(z_s, z_s) + G(z_s, z_s) \\
&\le sF(z_s, y) + (1-s)F(z_s, x^*) + sG(z_s, y) + (1-s)G(z_s, x^*) \\
&\le sF(z_s, y) + sG(z_s, y) + (1-s)\big[F(z_s, x^*) - G(x^*, z_s)\big] \\
&\le sF(z_s, y) + sG(z_s, y) + (1-s)\langle z_s - x^*, Bz_s\rangle \\
&= s\big[F(z_s, y) + G(z_s, y) + (1-s)\langle y - x^*, Bz_s\rangle\big],
\end{aligned}$$
and hence
$$0 \le F(z_s, y) + G(z_s, y) + (1-s)\langle Bz_s,\ y - x^*\rangle. \qquad (3.16)$$
Letting $s \to 0^+$ in (3.16), we have, for each $y$ in $C$,
$$0 \le F(x^*, y) + G(x^*, y) + \langle y - x^*, Bx^*\rangle.$$
That is, $x^* \in EP(F,G,B)$.

Further, we show that $x^*$ is also in $(A+R)^{-1}0$. Setting
$$u_n = J_\lambda^R\big((1-t_n)x_n - \lambda Ax_n\big), \quad n = 1, 2, \ldots,$$
we have
$$(1-t_n)x_n - \lambda Ax_n \in (I + \lambda R)u_n \;\Longrightarrow\; \frac{(1-t_n)x_n - \lambda Ax_n - u_n}{\lambda} \in Ru_n.$$
Since $R$ is monotone, we have, for any $(u, v)$ in the graph of $R$,
$$\begin{aligned}
&\left\langle \frac{(1-t_n)x_n - \lambda Ax_n - u_n}{\lambda} - v,\ u_n - u\right\rangle \ge 0 \\
&\Longrightarrow\ \big\langle (1-t_n)x_n - \lambda Ax_n - u_n - \lambda v,\ u_n - u\big\rangle \ge 0 \\
&\Longrightarrow\ \langle Ax_n + v,\ u_n - u\rangle \le \frac{1}{\lambda}\langle x_n - u_n,\ u_n - u\rangle - \frac{t_n}{\lambda}\langle x_n,\ u_n - u\rangle \\
&\Longrightarrow\ \langle Ax^* + v,\ u_n - u\rangle \le \frac{1}{\lambda}\langle x_n - u_n,\ u_n - u\rangle - \frac{t_n}{\lambda}\langle x_n,\ u_n - u\rangle + \langle Ax^* - Ax_n,\ u_n - u\rangle \\
&\qquad\qquad\qquad \le \frac{1}{\lambda}\|x_n - u_n\|\,\|u_n - u\| + \frac{t_n}{\lambda}\|x_n\|\,\|u_n - u\| + \|Ax^* - Ax_n\|\,\|u_n - u\|.
\end{aligned}$$
It follows that
$$\begin{aligned}
\langle Ax^* + v,\ x^* - u\rangle &\le \frac{1}{\lambda}\|x_n - u_n\|\,\|u_n - u\| + \frac{t_n}{\lambda}\|x_n\|\,\|u_n - u\| \\
&\quad + \|Ax^* - Ax_n\|\,\|u_n - u\| + \langle Ax^* + v,\ x^* - u_n\rangle. \qquad (3.17)
\end{aligned}$$
Since $x_n \rightharpoonup x^*$, $\langle Ax_n - Ax^*,\ x_n - x^*\rangle \ge \alpha\|Ax_n - Ax^*\|^2$, and $Ax_n \to Az$ by Step 1, we have
$$Ax_n \to Ax^*. \qquad (3.18)$$
We also observe that $t_n \to 0$ and $\|x_n - u_n\| \to 0$ (Step 2), and thus $u_n \rightharpoonup x^*$. Letting $n \to \infty$ in (3.17), we derive
$$\langle -Ax^* - v,\ x^* - u\rangle \ge 0.$$
Since $R$ is maximal monotone, we have $-Ax^* \in Rx^*$. This shows that $0 \in (A+R)x^*$. Hence, we have $x^* \in EP(F,G,B) \cap (A+R)^{-1}0 = \Omega$.

At this moment, we can substitute $x^*$ for $z$ in (3.12) to get
$$\|x_n - x^*\|^2 \le -\left\langle x^*,\ x_n - \frac{\lambda}{1-t_n}(Ax_n - Ax^*) - x^*\right\rangle + t_n M.$$
Consequently, the weak convergence of $\{x_n\}$ to $x^*$ actually implies that $x_n \to x^*$ strongly, by (3.18).

Finally, we return to (3.12) again and take the limit as $n \to \infty$ to get
$$\|x^* - z\|^2 \le -\langle z,\ x^* - z\rangle, \quad \forall z \in \Omega.$$
Equivalently,
$$\|x^*\|^2 \le \langle x^*, z\rangle, \quad \forall z \in \Omega. \qquad (3.19)$$
This clearly implies that
$$\|x^*\| \le \|z\|, \quad \forall z \in \Omega.$$
This together with (3.19) implies that $x^*$ is the unique element of $\Omega$ of minimum norm.

If $x^{**}$ is another weak cluster point of the net $\{x_t\}$ as $t \to 0^+$, we will also see that $x^{**}$ is in $\Omega$ with $\|x^{**}\| = \|x^*\|$. Then (3.19) implies $x^{**} = x^*$. This ensures that $x_t \to x^*$ in norm as $t \to 0^+$.

Theorem 3.2 Let $y_0 \in C$. Define
$$y_{n+1} = T_\mu(I - \mu B)J_\lambda^R\big((1-\alpha_n)I - \lambda A\big)y_n, \quad \forall n = 0, 1, 2, \ldots. \qquad (3.20)$$
Here, $\{\alpha_n\}$ is a sequence in $(0,\, 1 - \frac{\lambda}{2\alpha})$ satisfying the conditions

(i) $\lim_{n \to \infty} \alpha_n = 0$ and $\sum_{n=0}^{\infty} \alpha_n = +\infty$;

(ii) $\lim_{n \to \infty} \alpha_{n+1}/\alpha_n = 1$.

Then the sequence $\{y_n\}$ converges strongly to the unique point $x^*$ in $\Omega$ of minimum norm.

Proof Step 1. Arguing in parallel to the proof of Theorem 3.1, we first show that both $\{y_n\}$ and $\{Ay_n\}$ are uniformly bounded, and
$$\lim_{n \to \infty}\|Ay_n - Az\| = 0, \quad \forall z \in \Omega.$$
Take any $z$ in $\Omega$. Note that
$$J_\lambda^R(z - \lambda Az) = J_\lambda^R\left(\alpha_n z + (1-\alpha_n)\left(z - \frac{\lambda Az}{1-\alpha_n}\right)\right)$$
for all $n = 0, 1, 2, \ldots$. As in deriving (3.7), we have
$$\begin{aligned}
\left\|J_\lambda^R\big((1-\alpha_n)y_n - \lambda Ay_n\big) - z\right\|^2
&\le (1-\alpha_n)\left[\|y_n - z\|^2 + \frac{\lambda\big(\lambda - 2(1-\alpha_n)\alpha\big)}{(1-\alpha_n)^2}\|Ay_n - Az\|^2\right] + \alpha_n\|z\|^2 \qquad (3.21) \\
&\le (1-\alpha_n)\|y_n - z\|^2 + \alpha_n\|z\|^2.
\end{aligned}$$
From (3.20) and (3.21), we have
$$\begin{aligned}
\|y_{n+1} - z\|^2 &= \left\|T_\mu(I - \mu B)J_\lambda^R\big((1-\alpha_n)I - \lambda A\big)y_n - T_\mu(I - \mu B)z\right\|^2 \\
&\le \left\|J_\lambda^R\big((1-\alpha_n)y_n - \lambda Ay_n\big) - z\right\|^2 \qquad (3.22) \\
&\le (1-\alpha_n)\|y_n - z\|^2 + \alpha_n\|z\|^2 \\
&\le \max\big\{\|y_n - z\|^2,\ \|z\|^2\big\}.
\end{aligned}$$
By induction,
$$\|y_n - z\| \le \max\big\{\|y_0 - z\|,\ \|z\|\big\}.$$
Therefore, $\{y_n\}$ is bounded. Since $A$ is $\alpha$-inverse strongly monotone, it is $\frac{1}{\alpha}$-Lipschitz continuous. We deduce immediately that $\{Ay_n\}$ is also bounded.

From (3.20) and Lemma 2.2, we have
$$\begin{aligned}
\|y_{n+1} - y_n\| &= \left\|T_\mu(I - \mu B)J_\lambda^R\big((1-\alpha_n)I - \lambda A\big)y_n - T_\mu(I - \mu B)J_\lambda^R\big((1-\alpha_{n-1})I - \lambda A\big)y_{n-1}\right\| \\
&\le \left\|\big((1-\alpha_n)I - \lambda A\big)y_n - \big((1-\alpha_{n-1})I - \lambda A\big)y_{n-1}\right\| \\
&= \left\|(1-\alpha_n)\left(I - \frac{\lambda A}{1-\alpha_n}\right)y_n - (1-\alpha_n)\left(I - \frac{\lambda A}{1-\alpha_n}\right)y_{n-1} + (\alpha_{n-1} - \alpha_n)y_{n-1}\right\| \\
&\le (1-\alpha_n)\left\|\left(I - \frac{\lambda A}{1-\alpha_n}\right)y_n - \left(I - \frac{\lambda A}{1-\alpha_n}\right)y_{n-1}\right\| + |\alpha_n - \alpha_{n-1}|\,\|y_{n-1}\| \\
&\le (1-\alpha_n)\|y_n - y_{n-1}\| + |\alpha_n - \alpha_{n-1}|\,\sup_k\|y_k\|.
\end{aligned}$$
It follows from Lemma 2.3 that
$$\lim_{n \to \infty}\|y_{n+1} - y_n\| = 0. \qquad (3.23)$$
By (3.21) and (3.22), we obtain
$$\|y_{n+1} - z\|^2 \le (1-\alpha_n)\|y_n - z\|^2 + \frac{\lambda}{1-\alpha_n}\big(\lambda - 2(1-\alpha_n)\alpha\big)\|Ay_n - Az\|^2 + \alpha_n\|z\|^2.$$
Therefore,
$$\begin{aligned}
\frac{\lambda}{1-\alpha_n}\big(2(1-\alpha_n)\alpha - \lambda\big)\|Ay_n - Az\|^2
&\le \|y_n - z\|^2 - \|y_{n+1} - z\|^2 + \alpha_n\big(\|z\|^2 - \|y_n - z\|^2\big) \\
&\le \|y_{n+1} - y_n\|\big(\|y_n - z\| + \|y_{n+1} - z\|\big) + \alpha_n\big(\|z\|^2 - \|y_n - z\|^2\big) \\
&\to 0.
\end{aligned}$$
This implies that
$$\lim_{n \to \infty}\|Ay_n - Az\| = 0.$$

Step 2. Next, we verify
$$\lim_{n \to \infty}\left\|y_n - J_\lambda^R\big((1-\alpha_n)y_n - \lambda Ay_n\big)\right\| = 0.$$
Using (3.22) and arguing as in deriving (3.10) in the proof of Theorem 3.1, we have
$$\begin{aligned}
\|y_{n+1} - z\|^2 &\le \left\|J_\lambda^R\big((1-\alpha_n)y_n - \lambda Ay_n\big) - z\right\|^2 \\
&\le (1-\alpha_n)\|y_n - z\|^2 + \alpha_n\|z\|^2 - \left\|(1-\alpha_n)y_n - J_\lambda^R\big((1-\alpha_n)y_n - \lambda Ay_n\big)\right\|^2 \\
&\quad + 2\lambda\left\|(1-\alpha_n)y_n - J_\lambda^R\big((1-\alpha_n)y_n - \lambda Ay_n\big)\right\|\,\|Ay_n - Az\|.
\end{aligned}$$
Hence,
$$\begin{aligned}
\left\|(1-\alpha_n)y_n - J_\lambda^R\big((1-\alpha_n)y_n - \lambda Ay_n\big)\right\|^2
&\le \|y_n - z\|^2 - \|y_{n+1} - z\|^2 + \alpha_n\big(\|z\|^2 - \|y_n - z\|^2\big) \\
&\quad + 2\lambda\left\|(1-\alpha_n)y_n - J_\lambda^R\big((1-\alpha_n)y_n - \lambda Ay_n\big)\right\|\,\|Ay_n - Az\| \\
&\le \big(\|y_n - z\| + \|y_{n+1} - z\|\big)\|y_{n+1} - y_n\| + \alpha_n\big(\|z\|^2 - \|y_n - z\|^2\big) \\
&\quad + 2\lambda\left\|(1-\alpha_n)y_n - J_\lambda^R\big((1-\alpha_n)y_n - \lambda Ay_n\big)\right\|\,\|Ay_n - Az\|.
\end{aligned}$$
Since $\alpha_n \to 0$, $\|y_{n+1} - y_n\| \to 0$ by (3.23) and $\|Ay_n - Az\| \to 0$ by Step 1, we deduce
$$\lim_{n \to \infty}\left\|(1-\alpha_n)y_n - J_\lambda^R\big((1-\alpha_n)y_n - \lambda Ay_n\big)\right\| = 0.$$
Therefore,
$$\lim_{n \to \infty}\left\|y_n - J_\lambda^R\big((1-\alpha_n)y_n - \lambda Ay_n\big)\right\| = 0.$$

Step 3. We next show that
$$\lim_{n \to \infty}\|y_n - T_\mu(I - \mu B)y_n\| = 0.$$
Set
$$u_n = J_\lambda^R\big((1-\alpha_n)y_n - \lambda Ay_n\big), \quad \forall n = 0, 1, 2, \ldots.$$
Then
$$y_{n+1} = T_\mu(I - \mu B)u_n, \quad \forall n = 0, 1, 2, \ldots.$$
It follows from Step 2 that
$$\|y_{n+1} - T_\mu(I - \mu B)y_n\| = \|T_\mu(I - \mu B)u_n - T_\mu(I - \mu B)y_n\| \le \|u_n - y_n\| \to 0.$$
By (3.23) we see that
$$\|y_n - T_\mu(I - \mu B)y_n\| \to 0.$$

Step 4. We show that $y_n \to x^*$, the unique element of $\Omega$ of minimum norm (as given in Theorem 3.1).

Let $\{y_{n_k}\}$ be a subsequence of the bounded sequence $\{y_n\}$ weakly converging to some $\tilde{x}$ in $C$. By a similar argument as that of Step 4 in the proof of Theorem 3.1, we can show that $\tilde{x} \in \Omega$. It follows from Theorem 3.1 that
$$\lim_{k \to \infty}\langle x^*,\ y_{n_k} - x^*\rangle = \langle x^*,\ \tilde{x} - x^*\rangle = \langle x^*, \tilde{x}\rangle - \|x^*\|^2 \ge 0.$$
As this holds true for all weakly convergent subsequences of $\{y_n\}$, we can conclude that $\liminf_{n \to \infty}\langle x^*,\ y_n - x^*\rangle \ge 0$. Since $\|y_n - u_n\| \to 0$ by Step 2, we also have
$$\liminf_{n \to \infty}\langle x^*,\ u_n - x^*\rangle \ge 0. \qquad (3.24)$$
Using the firm nonexpansivity of $J_\lambda^R$ and the nonexpansivity of $I - \frac{\lambda}{1-\alpha_n}A$, we get
$$\begin{aligned}
\|u_n - x^*\|^2 &= \left\|J_\lambda^R\big((1-\alpha_n)y_n - \lambda Ay_n\big) - J_\lambda^R(x^* - \lambda Ax^*)\right\|^2 \\
&\le \big\langle (1-\alpha_n)y_n - \lambda Ay_n - (x^* - \lambda Ax^*),\ u_n - x^*\big\rangle \\
&= (1-\alpha_n)\left\langle \left(y_n - \frac{\lambda Ay_n}{1-\alpha_n}\right) - \left(x^* - \frac{\lambda Ax^*}{1-\alpha_n}\right),\ u_n - x^*\right\rangle - \alpha_n\langle x^*,\ u_n - x^*\rangle \\
&\le (1-\alpha_n)\left\|\left(y_n - \frac{\lambda Ay_n}{1-\alpha_n}\right) - \left(x^* - \frac{\lambda Ax^*}{1-\alpha_n}\right)\right\|\,\|u_n - x^*\| - \alpha_n\langle x^*,\ u_n - x^*\rangle \\
&\le (1-\alpha_n)\|y_n - x^*\|\,\|u_n - x^*\| - \alpha_n\langle x^*,\ u_n - x^*\rangle \\
&\le \frac{1-\alpha_n}{2}\|y_n - x^*\|^2 + \frac{1}{2}\|u_n - x^*\|^2 - \alpha_n\langle x^*,\ u_n - x^*\rangle.
\end{aligned}$$
It follows that
$$\|u_n - x^*\|^2 \le (1-\alpha_n)\|y_n - x^*\|^2 - 2\alpha_n\langle x^*,\ u_n - x^*\rangle.$$
Therefore, by (3.22) we have
$$\|y_{n+1} - x^*\|^2 \le \|u_n - x^*\|^2 \le (1-\alpha_n)\|y_n - x^*\|^2 - 2\alpha_n\langle x^*,\ u_n - x^*\rangle.$$
Applying Lemma 2.3 with (3.24) to the last inequality, we deduce $y_n \to x^*$ as asserted.

Remark 3.3 We note that in Theorems 3.1 and 3.2 we add an additional assumption: the effective domain of $R$ is included in $C$. This assumption is indeed not restrictive. The readers can find an example which satisfies this assumption in [25].

Remark 3.4 In Theorem 3.1, we have proved that $\lim_{t \to 0^+}\|Ax_t - Az\| = 0$ for all $z \in \Omega$ (by (3.4)) and $\lim_{t \to 0^+} x_t = x^*$. Thus, we deduce immediately that the image of $\Omega$ under $A$ consists of exactly one point $Ax^*$, that is, $A(\Omega) = \{Ax^*\}$.

This fact raises a question: does $A(\Omega) = \{Ax^*\}$ imply that $\Omega$ is a singleton set?

The answer is no. We give here an example to clarify this point. Let $T : C \to C$ be a nonexpansive mapping with a nonempty fixed point set $\mathrm{Fix}(T)$. It is easy to see that $I - T$ is monotone and Lipschitzian. If we take $A = I - T$ and $\Omega = \mathrm{Fix}(T)$, then $A(\Omega) = \{Ax^*\} = \{0\}$ for any fixed point $x^*$ of $T$. However, $\mathrm{Fix}(T)$ can contain more than one point in general.

We end this paper with an example showing that the set $\Omega = EP(F,G,B) \cap (A+R)^{-1}0$ can be nonempty in our setting.

Example 3.5 Let $C = H = \mathbb{R}$. Let $F, G : C \times C \to \mathbb{R}$ be defined by
$$F(x,y) = x - y \quad \text{and} \quad G(x,y) = 0, \quad \forall x, y \in C.$$
Let $A, B : C \to H$ be defined by
$$Ax = -1 \quad \text{and} \quad Bx = \max\{0, x\}, \quad \forall x \in C.$$
Define $R : H \to 2^H$ by
$$Rx = \{x\}, \quad \forall x \in H.$$
Clearly, $F, G$ satisfy conditions (F1)–(F4) and (G1)–(G3). For condition (H), we can set $a = x$ and $K = [x - \mu,\, x + \mu]$. On the other hand, by setting $\alpha = \beta = 1$, we see that $A, B$ are $\alpha$-, $\beta$-inverse strongly monotone from $C$ into $H$, respectively. The effective domain of the maximal monotone operator $R$ is $C$. So all the assumptions on $A, B, F, G$ and $R$ in Theorems 3.1 and 3.2 are satisfied.

In this case, it is easy to see that $EP(F,G,B) = \{1\}$ and
$$(A+R)^{-1}0 = \{x \in C : 0 \in Ax + Rx\} = \{x \in C : 0 \in \{-1 + x\}\} = \{1\}.$$
Hence, $\Omega = EP(F,G,B) \cap (A+R)^{-1}0 = \{1\}$ is nonempty.

Let $\lambda, \mu \in (0,2)$. It is straightforward to obtain
$$J_\lambda^R x = \frac{x}{1+\lambda} \quad \text{and} \quad T_\mu x = x + \mu, \quad \forall x \in H.$$
Direct computation gives
$$x_t = \frac{\lambda + \mu}{\lambda + \mu + t(1-\mu)}, \quad \forall t \in (0,\, 1 - \lambda/2),$$
as given in Theorem 3.1. As $t \to 0^+$, we see that $x_t \to 1$, which is the unique point in $\Omega$. Similarly, with the positive null sequence $\{\alpha_n\}$ given in Theorem 3.2, we have
$$y_{n+1} = \frac{(1-\alpha_n)y_n + \lambda}{1+\lambda}(1-\mu) + \mu, \quad \forall n = 0, 1, 2, \ldots.$$
It is not difficult to see that $y_n \to 1$, the unique point in $\Omega$.
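The recursion above is easy to run. As a numerical sanity check (our addition, not part of the paper), take $\alpha_n = 1/(n+2)$, which satisfies conditions (i)–(ii) of Theorem 3.2; the iterates indeed approach $1$, the unique point of $\Omega$:

```python
lam, mu = 1.0, 0.5                  # lambda, mu in (0, 2)
y = 0.0                             # y_0, any starting point in C = R
n_steps = 100000
for n in range(n_steps):
    alpha = 1.0 / (n + 2)           # alpha_n -> 0, sum alpha_n diverges, alpha_{n+1}/alpha_n -> 1
    y = ((1 - alpha) * y + lam) / (1 + lam) * (1 - mu) + mu
print(y)  # close to 1
```

The residual behaves like $\alpha_n/3$ for these parameters, so after $10^5$ steps the iterate is within about $3 \times 10^{-6}$ of the limit.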

Acknowledgments Yonghong Yao was supported in part by NSFC 11071279 and NSFC 71161001-G0105.

Yeong-Cheng Liou was partially supported by the Taiwan NSC grant 101-2628-E-230-001-MY3. Ngai-Ching Wong was partially supported by the Taiwan NSC grant 99-2115-M-110-007-MY3.


References

1. Blum, E., Oettli, W.: From optimization and variational inequalities to equilibrium problems. Math. Stud. 63, 123–145 (1994)
2. Ceng, L.C., Al-Homidan, S., Ansari, Q.H., Yao, J.C.: An iterative scheme for equilibrium problems and fixed point problems of strict pseudocontraction mappings. J. Comput. Appl. Math. 223, 967–974 (2009)
3. Cianciaruso, F., Marino, G., Muglia, L., Yao, Y.: A hybrid projection algorithm for finding solutions of mixed equilibrium problem and variational inequality problem. Fixed Point Theory Appl. 2010(383740), 19 (2010)
4. Chadli, O., Wong, N.C., Yao, J.C.: Equilibrium problems with applications to eigenvalue problems. J. Optim. Theory Appl. 117, 245–266 (2003)
5. Colao, V., Marino, G.: Strong convergence for a minimization problem on points of equilibrium and common fixed points of an infinite family of nonexpansive mappings. Nonlinear Anal. 73, 3513–3524 (2010)
6. Combettes, P.L., Hirstoaga, S.A.: Equilibrium programming using proximal-like algorithms. Math. Program. 78, 29–41 (1997)
7. Combettes, P.L., Hirstoaga, S.A.: Equilibrium programming in Hilbert spaces. J. Nonlinear Convex Anal. 6, 117–136 (2005)
8. Giannessi, F., Maugeri, A., Pardalos, P.M.: Equilibrium Problems and Variational Models. Kluwer, Dordrecht (2001)
9. Giannessi, F., Pardalos, P.M., Rapcsak, T.: New Trends in Equilibrium Systems. Kluwer, Dordrecht (2001)
10. Gilbert, R.P., Panagiotopoulos, P.D., Pardalos, P.M.: From Convexity to Nonconvexity. Kluwer, Dordrecht (2001)
11. Konnov, I.V., Schaible, S., Yao, J.C.: Combined relaxation method for mixed equilibrium problems. J. Optim. Theory Appl. 126, 309–322 (2005)
12. Moudafi, A.: Weak convergence theorems for nonexpansive mappings and equilibrium problems. J. Nonlinear Convex Anal. 9, 37–43 (2008)
13. Moudafi, A., Théra, M.: Proximal and dynamical approaches to equilibrium problems. In: Lecture Notes in Economics and Mathematical Systems, vol. 477, pp. 187–201. Springer (1999)
14. Nadezhkina, N., Takahashi, W.: Weak convergence theorem by an extragradient method for nonexpansive mappings and monotone mappings. J. Optim. Theory Appl. 128, 191–201 (2006)
15. Noor, M.A., Oettli, W.: On general nonlinear complementarity problems and quasi equilibria. Matematiche (Catania) 49, 313–331 (1994)
16. Peng, J.W., Wang, Y., Shyu, D.S., Yao, J.C.: Common solutions of an iterative scheme for variational inclusions, equilibrium problems and fixed point problems. J. Inequal. Appl. 2008(720371), 15 (2008)
17. Peng, J.W., Yao, J.C.: A new hybrid-extragradient method for generalized mixed equilibrium problems and fixed point problems and variational inequality problems. Taiwan. J. Math. 12, 1401–1433 (2008)
18. Qin, X., Cho, Y.J., Kang, S.M.: Viscosity approximation methods for generalized equilibrium problems and fixed point problems with applications. Nonlinear Anal. 72, 99–112 (2010)
19. Robinson, S.M.: Generalized equations and their solutions, part I: basic theory. Math. Program. Study 10, 128–141 (1979)
20. Rockafellar, R.T.: Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 14, 877–898 (1976)
21. Saewan, S., Kumam, P.: A modified hybrid projection method for solving generalized mixed equilibrium problems and fixed point problems in Banach spaces. Comput. Math. Appl. 62, 1723–1735 (2011)
22. Shehu, Y.: Strong convergence theorems for nonlinear mappings, variational inequality problems and system of generalized mixed equilibrium problems. Math. Comput. Model. 54, 2259–2276 (2011)
23. Takahashi, S., Takahashi, W.: Viscosity approximation methods for equilibrium problems and fixed point problems in Hilbert spaces. J. Math. Anal. Appl. 331, 506–515 (2007)
24. Takahashi, S., Takahashi, W.: Strong convergence theorem for a generalized equilibrium problem and a nonexpansive mapping in a Hilbert space. Nonlinear Anal. 69, 1025–1033 (2008)
25. Takahashi, S., Takahashi, W., Toyoda, M.: Strong convergence theorems for maximal monotone operators with nonlinear mappings in Hilbert spaces. J. Optim. Theory Appl. 147, 27–41 (2010)
26. Xu, H.K.: Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 66, 240–256 (2002)
27. Yao, Y., Cho, Y.J., Chen, R.: An iterative algorithm for solving fixed point problems, variational inequality problems and mixed equilibrium problems. Nonlinear Anal. 71, 3363–3373 (2009)
28. Yao, Y., Cho, Y.J., Liou, Y.C.: Algorithms of common solutions for variational inclusions, mixed equilibrium problems and fixed point problems. Eur. J. Oper. Res. 212, 242–250 (2011)
29. Yao, Y., Liou, Y.C.: Composite algorithms for minimization over the solutions of equilibrium problems and fixed point problems. Abstr. Appl. Anal. 2010, Article ID 763506, 19 pp. (2010). doi:10.1155/2010/763506
