A modified double inertial subgradient extragradient algorithm for non-monotone variational inequality with applications
Watanjeet Singh*¹ and Sumit Chandok¹

¹Department of Mathematics, Thapar Institute of Engineering and Technology, Patiala, 147001, Punjab, India
watanjeetsingh@gmail.com, sumit.chandok@thapar.edu

Abstract

This paper presents a modified iterative approach to solve the variational inequality problem using the double inertial technique in the context of a real Hilbert space. Our iterative technique involves a projection onto a generalized half-space and a self-adaptive step-size rule which works without prior knowledge of the Lipschitz constant of the operator. We establish a weak convergence result for a variational inequality involving a non-monotone cost operator, along with weak and strong convergence results for quasi-monotone and strongly pseudo-monotone operators, respectively. Under a simplified framework, linear convergence of the proposed method is also discussed. Additionally, we provide some numerical experiments to demonstrate the effectiveness of our iterative algorithm compared to previously established algorithms in solving real-world applications. Finally, we carry out a sensitivity analysis of our algorithm to demonstrate its effectiveness across various parameter settings.

Keywords: Variational inequality, Non-monotone mapping, Weak and strong convergence.
2020 Mathematics Subject Classification: 47J20, 65K15, 47J25, 47H05.

1 Introduction

Consider C to be a nonempty, closed and convex subset of a real Hilbert space H. Then the classical variational inequality for an operator F : H → H is of the form: find x ∈ C such that

⟨Fx, y − x⟩ ≥ 0, for all y ∈ C. (1.1)

The solution set of (1.1) is denoted by VI(C, F). This variational inequality problem was independently introduced by Fichera [5] and Stampacchia [25] to address problems arising from mechanics.
Let S_F be the solution set of the dual variational inequality problem, i.e.,

S_F = {x ∈ C : ⟨Fy, y − x⟩ ≥ 0, for all y ∈ C}.

The set S_F is a closed and convex subset of C; furthermore, since C is convex and F is continuous, we have S_F ⊂ VI(C, F). Also, if F is pseudo-monotone and continuous, we have S_F = VI(C, F).

The variational inequality problem, in short VIP, has earned considerable attention in optimization and nonlinear analysis because the theory provides a unified and natural treatment for various mathematical problems, including network equilibrium problems, optimal control problems, oligopolistic market equilibrium problems, and image restoration problems.

The earliest and simplest algorithm to tackle the VIP is the projected gradient method. However, the implementation of this method is limited, as it requires F to be strongly monotone or inverse strongly monotone. In 1976, Korpelevich [11] introduced the extragradient method (EGM) to avoid the strict requirement of monotonicity in the setting of finite-dimensional Euclidean spaces, assuming the cost operator to be both monotone and L-Lipschitz continuous. Nonetheless, employing the EGM necessitates the calculation of two projections onto the set C in each iteration, which may pose challenges in computing projections during numerical experiments. To overcome this, He [8] and Sun [26] proposed the projection and contraction method (PCM). The PCM has received a great deal of interest because of its computational efficiency. Another valuable modification of the EGM is the subgradient extragradient method (SEGM) proposed by Censor et al. [1]. The SEGM operates by replacing the second projection onto the set C with a projection onto a particular constructible half-space.

*Corresponding author: watanjeetsingh@gmail.com
This adjustment enhances the method's efficiency, as the projection onto a half-space is explicitly defined. In 2019, Dong et al. [3] combined the SEGM and the PCM using two different step-sizes in each iteration. It has been demonstrated that the iterative scheme generated by merging these methods offers a great computational advantage. Subsequently, numerous iterative algorithms have been developed by combining the advantages of the EGM (or SEGM) and the PCM (see [2, 6, 21] and references cited therein).

Polyak [20] studied the convergence of an inertial extrapolation algorithm and demonstrated that it enhances the algorithm's convergence speed. Over the years, the inertial technique has attracted enormous interest among researchers as one of the important tools to accelerate the convergence of algorithms. Numerical iterative methods incorporating multi-step inertial extrapolation can significantly accelerate convergence in solving optimization problems. Recently, Yao et al. [34] proposed an iterative scheme with double inertial steps to solve pseudo-monotone variational inequalities, showing that it significantly improves on the single inertial step, as one of the inertial parameters is allowed to be 1. Wang et al. [33] proposed a double inertial projection method for quasi-monotone variational inequalities. Recently, Li et al. [12] proposed a double inertial subgradient extragradient method combined with the projection-contraction method to solve quasi-monotone variational inequalities. For more literature on the double inertial technique, we refer to [10, 23, 24, 29] and the references therein.

When the Lipschitz constant of the cost operator is unknown, classical methods may no longer be applicable. To address this issue, self-adaptive step-size rules and the Armijo line search rule have been adopted, allowing algorithms to operate without prior knowledge of the operator's Lipschitz constant.
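For concreteness, one Korpelevich extragradient iteration performs the two projections onto C described above. The following is a minimal sketch on a toy problem of our own choosing (a monotone affine operator over the nonnegative orthant, not an example from this paper):

```python
import numpy as np

def extragradient_step(x, F, proj, tau):
    """One Korpelevich EGM iteration: two projections onto C per step."""
    y = proj(x - tau * F(x))      # predictor: projected gradient step at x
    return proj(x - tau * F(y))   # corrector: re-evaluates F at the predictor y

# Toy monotone, 1-Lipschitz operator F(x) = Ax + b with A skew-symmetric,
# and C the nonnegative orthant, so P_C is a componentwise clip.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
b = np.array([-1.0, 1.0])
F = lambda x: A @ x + b                  # the VI solution here is x* = (1, 1)
P_C = lambda x: np.maximum(x, 0.0)

x = np.array([2.0, 2.0])
for _ in range(500):
    x = extragradient_step(x, F, P_C, tau=0.3)   # tau < 1/L ensures convergence
```

Replacing the corrector's projection onto C with a projection onto a constructible half-space containing C is precisely the modification the SEGM makes; only the predictor step still requires P_C.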
Recently, Long et al. [15] proposed a single inertial projection algorithm incorporating a line search strategy, where uniform continuity was assumed instead of L-Lipschitz continuity. Tan et al. [27] introduced a Mann-type subgradient extragradient method for pseudo-monotone variational inequalities, relying on a weaker form of Lipschitz continuity. However, Armijo-type line search schemes often require multiple projections onto the feasible set at each iteration, which increases the computational burden. To mitigate this drawback, Yang and Liu [35] proposed a novel self-adaptive step-size rule. Although their approach assumes L-Lipschitz continuity, it does not require explicit knowledge of the Lipschitz constant.

In recent years, considerable attention has been given to variational inequality problems without monotonicity assumptions on the cost operator. In particular, Liu and Yang [13] established a weak convergence result for non-monotone variational inequalities in 2019. Subsequently, Thong et al. [30] proposed a relaxed two-step Tseng-type extragradient method for non-monotone variational inequality problems by incorporating an inertial technique. For further studies on non-monotone variational inequality problems, we refer the reader to [31, 37].

2 Motivation

Variational inequalities in Hilbert spaces offer a powerful mathematical framework with broad applications to network flow, economics, engineering, and optimization. In economic contexts like the Nash-Cournot oligopoly model, variational inequalities are used for analyzing and optimizing competitive behaviors, enhancing the efficiency of oligopoly markets and ensuring better resource allocation and operational strategies. In the Nash-Cournot oligopolistic market equilibrium model, Murphy et al.
[17] examined how multiple firms can reach a stable state, or Nash equilibrium, in an oligopolistic market, where a small number of firms dominate the industry and each acts independently while supplying the same product. In this context, a Nash equilibrium means no firm can increase its profit by changing its production level while the others keep theirs unchanged. Murphy et al. [17] provided a method to find this equilibrium by accounting for each firm's costs and the overall market demand. The analysis focuses on determining how much each firm should produce, considering their production costs and the quantity they can sell, based on the total market supply. It was shown that under certain conditions, like predictable production costs and a clear relationship between price and demand, this equilibrium can be calculated. Harker [7] framed a monotone variational inequality problem for this model. The detailed mathematical model of this problem is discussed in Section 5.

In another scenario, suppose a traffic network comprises a finite set of nodes connected by directed edges. Let D be the set of all edges in the network and let W denote the set of directed node pairs. Let u = (i, j) ∈ W, where i and j denote the origin and destination nodes, respectively. Then A_u represents the set of all paths from node i to node j. The set of all paths in the network is given by T = ∪_{u∈W} A_u. For every path t ∈ T, let x_t represent the number of vehicles on that path. Each u ∈ W corresponds to a positive value d_u indicating the flow demand from node i to node j. The set of feasible flows, denoted by C, is defined as follows:

C = {x : Σ_{t∈A_u} x_t = d_u, for all u ∈ W; x_t ≥ 0, for all t ∈ T}.

Given the flow vector x, the traffic flow on each edge d ∈ D is given by

h_d = Σ_{u∈W} Σ_{t∈A_u} ρ_{td} x_t, where ρ_{td} = 1 if edge d belongs to path t, and ρ_{td} = 0 otherwise.
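To make the model concrete, consider a hypothetical instance with one origin-destination pair of demand d split over two parallel paths, with linear path costs chosen by us purely for illustration. At equilibrium, every used path carries equal, minimal cost, and the equilibrium flow satisfies the variational inequality ⟨F(x*), x − x*⟩ ≥ 0 over the feasible set:

```python
import numpy as np

# Hypothetical two-path network for one origin-destination pair:
# the demand d must split as x1 + x2 = d, x >= 0, with linear path costs.
d = 10.0
cost = lambda x: np.array([1.0 * x[0] + 0.0,    # cost of path 1
                           2.0 * x[1] + 5.0])   # cost of path 2

# Both used paths have equal cost at equilibrium:
# x1 = d - x2 and  x1 = 2*x2 + 5  =>  x2 = (d - 5) / 3
x2 = (d - 5.0) / 3.0
x_star = np.array([d - x2, x2])

c = cost(x_star)
assert abs(c[0] - c[1]) < 1e-9        # equal path costs at equilibrium

# Verify the VI characterization <F(x*), x - x*> >= 0 on sampled feasible flows
rng = np.random.default_rng(0)
for _ in range(100):
    t = rng.uniform(0.0, d)
    x = np.array([t, d - t])          # any feasible split of the demand
    assert cost(x_star) @ (x - x_star) >= -1e-9
```

Here the inequality even holds with equality, since both components of F(x*) coincide and every feasible flow meets the same total demand.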
The total cost for path t is calculated as F_t(x) = Σ_{d∈D} ρ_{td} k_d, where k_d is the expense on edge d ∈ D. A feasible flow vector x* ∈ C with positive components is called an equilibrium if, for all u ∈ W and t ∈ A_u, we have

F_t(x*) = min_{s∈A_u} F_s(x*). (2.1)

Solving (2.1) is equivalent to solving the following classical variational inequality problem: find x* ∈ C such that ⟨Fx*, x − x*⟩ ≥ 0, for all x ∈ C.

Furthermore, in image restoration, variational inequalities provide robust techniques to recover original images from noisy versions, thus improving quality and accuracy. The idea is to convert the corresponding least squares problem into a variational inequality problem. This framework leads to improved restoration outcomes and an accurate representation of the original image.

Motivated by the research going on in this direction, we propose a new double inertial iterative algorithm by generalizing the SEGM combined with the PCM, along with a generalized non-monotonic step-size. The present work builds on and generalizes earlier studies in this area, resulting in a broader and more flexible framework for the treatment of non-monotone variational inequalities. Unlike other existing methods, our method involves a projection onto a generalized half-space, which extends many results in the literature. The key features of our paper are:

1. We examine two inertial extrapolation steps, where one inertia can be selected from [0, 1] and the other as close as possible to 1.
2. A generalized non-monotonic step-size is considered, which works without prior knowledge of the Lipschitz constant.
3. A projection onto a closed, convex set C is calculated, followed by a projection onto the generalized half-space.
4. We provide a weak convergence result for a non-monotone variational inequality problem.
5.
We also provide a comparison of our double inertial iterative scheme with other well-known iterative schemes involving no inertial or a single inertial parameter.

Our paper is organized as follows. In Section 3, we recall some basic definitions and results that are required to understand the main results. In Section 4, under certain assumptions, we first establish a weak convergence result for a non-monotone variational inequality, and then present a weak convergence result for a variational inequality involving a quasi-monotone operator. Additionally, we give a strong convergence outcome for a variational inequality involving a strongly pseudo-monotone operator. We also discuss the linear convergence of the algorithm under a simplified framework. In Section 5, we demonstrate the efficiency of our algorithm in addressing real-life applications, including network equilibrium flow, market equilibrium models, and image restoration problems.

3 Preliminaries

In this section, we outline fundamental definitions and lemmas essential for understanding our main result. Throughout the paper, we denote by C a nonempty, closed, and convex subset of the real Hilbert space H.

Definition 3.1. An operator F : H → H is said to be:

(i) L-Lipschitz continuous if there exists a constant L > 0 such that ∥Fx − Fy∥ ≤ L∥x − y∥, for all x, y ∈ H.
(ii) monotone if ⟨Fx − Fy, x − y⟩ ≥ 0, for all x, y ∈ H.
(iii) pseudo-monotone if ⟨Fx, y − x⟩ ≥ 0 implies ⟨Fy, y − x⟩ ≥ 0, for all x, y ∈ H.
(iv) quasi-monotone if ⟨Fx, y − x⟩ > 0 implies ⟨Fy, y − x⟩ ≥ 0, for all x, y ∈ H.
(v) k-strongly pseudo-monotone if there exists k > 0 such that ⟨Fx, y − x⟩ ≥ 0 implies ⟨Fy, y − x⟩ ≥ k∥y − x∥², for all x, y ∈ H.

For each point x ∈ H, there is a unique nearest point P_C(x) in C such that P_C(x) = arg min_{y∈C} ∥x − y∥. The mapping P_C : H → C is called the metric projection of H onto C.
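As a concrete instance, P_C has a closed form for simple sets C. The sketch below (with C a Euclidean ball, our own illustrative choice) also checks numerically the variational characterization of the projection, ⟨x − P_C(x), y − P_C(x)⟩ ≤ 0 for every y ∈ C:

```python
import numpy as np

def project_ball(x, center, radius):
    """Metric projection P_C onto the closed ball C = {y : ||y - center|| <= radius}."""
    v = x - center
    n = np.linalg.norm(v)
    return x.copy() if n <= radius else center + (radius / n) * v

c, r = np.zeros(3), 1.0
x = np.array([3.0, 0.0, 0.0])
p = project_ball(x, c, r)             # nearest point of C to x: (1, 0, 0)

# Characterizing inequality: <x - P_C(x), y - P_C(x)> <= 0 for all y in C
rng = np.random.default_rng(1)
for _ in range(100):
    y = project_ball(rng.normal(size=3), c, r)   # a sample point of C
    assert (x - p) @ (y - p) <= 1e-12
```

The half-space projection used later by the subgradient extragradient method is even cheaper: it is an explicit rank-one correction, which is why replacing the second projection onto C by one onto a half-space pays off.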
It is known that the metric projection P_C is nonexpansive. Some of the properties of the metric projection are:

⟨x − P_C(x), y − P_C(x)⟩ ≤ 0, for all x ∈ H, y ∈ C, (3.1)

and

∥P_C(x) − P_C(y)∥² ≤ ⟨P_C(x) − P_C(y), x − y⟩, for all x, y ∈ H. (3.2)

Lemma 3.1. The following results hold in H:

(i) ∥x + y∥² = ∥x∥² + ∥y∥² + 2⟨x, y⟩, for all x, y ∈ H.
(ii) ∥αx + βy∥² = α(α + β)∥x∥² + β(α + β)∥y∥² − αβ∥x − y∥², for all x, y ∈ H and α, β ∈ ℝ.

Lemma 3.2. [36] Suppose that one of the following holds:

(i) F is pseudo-monotone on C and VI(C, F) ≠ ∅;
(ii) F is the gradient of G, where G is a differentiable quasiconvex function on an open set K with C ⊂ K, and G attains its global minimum on C;
(iii) F is quasi-monotone on C, F ≢ 0 on C and C is bounded;
(iv) F is quasi-monotone on C, F ≢ 0 on C and there exists a positive number t such that, for every r ∈ C with ∥r∥ ≥ t, there exists w ∈ C such that ∥w∥ ≤ t and ⟨Fr, w − r⟩ ≤ 0;
(v) F is quasi-monotone on C, int C is non-empty and there exists v* ∈ VI(C, F) such that Fv* ≠ 0.

Then S_F is non-empty.

Lemma 3.3. [14] Suppose {η_n}, {ν_n} and {θ_n} are sequences in [0, +∞) such that

η_{n+1} ≤ η_n + ν_n(η_n − η_{n−1}) + θ_n for all n ≥ 1, with Σ_{n=1}^∞ θ_n < +∞,

and there exists a real number ν with 0 ≤ ν_n ≤ ν < 1 for all n ∈ ℕ. Then the following hold:

(i) Σ_{n=1}^∞ [η_n − η_{n−1}]_+ < +∞, where [u]_+ = max{u, 0};
(ii) there exists η* ∈ [0, +∞) such that lim_{n→∞} η_n = η*.

Lemma 3.4. [19] Suppose C is a nonempty subset of H and {x_n} is a sequence in H such that the following two conditions hold:

(i) for each u ∈ C, lim_{n→∞} ∥x_n − u∥ exists;
(ii) every sequential weak cluster point of {x_n} is in C.

Then {x_n} converges weakly to a point in C.

4 Main Results

In this section, we present our main algorithm and theorems. The following assumptions are required to prove our main results.
(A1): The mapping F is L-Lipschitz continuous on H and sequentially weakly continuous on C.
(A2): S_F ≠ ∅.
(A3): If x_n ⇀ v* and lim sup_{n→∞} ⟨Fx_n, x_n⟩ ≤ ⟨Fv*, v*⟩, then lim_{n→∞} ⟨Fx_n, x_n⟩ = ⟨Fv*, v*⟩.
(A4): 0 ≤ ν_n ≤ ν_{n+1} ≤ 1.
(A5): 0 ≤ ξ_n ≤ ξ_{n+1} ≤ ξ < min{(θ̄ − √(2θ̄))/θ̄, ν_1}, where θ̄ ∈ (2, +∞).
(A6): 0 < α ≤ α_n ≤ α_{n+1} < 1/(1 + θ̄), where θ̄ ∈ (2, +∞).
(A7): {δ_n} ⊂ [1, +∞) with lim_{n→∞} δ_n = 1; {χ_n} ⊂ [1, +∞) with Σ_{n=1}^∞ (χ_n − 1) < +∞; and {ζ_n} ⊂ [0, +∞) with Σ_{n=0}^∞ ζ_n < +∞.

Algorithm 4.1 Modified double inertial subgradient extragradient method (MDISEM)

Initialization: Choose μ ∈ (0, 1), λ_1 > 0, σ ∈ (0, 2μ) and β ∈ (σ/2, 1/μ). Choose x_0, x_1 ∈ H and calculate the iterate x_{n+1} as follows.

Step 1. Compute

w_n = x_n + ν_n(x_n − x_{n−1}), (4.1)
y_n = P_C(w_n − βλ_n Fw_n), (4.2)

and update the step-size λ_{n+1} as

λ_{n+1} = min{μδ_n ∥w_n − y_n∥ / ∥Fw_n − Fy_n∥, χ_n λ_n + ζ_n} if Fw_n ≠ Fy_n; λ_{n+1} = χ_n λ_n + ζ_n otherwise. (4.3)

We consider y_n a solution of the VIP if w_n = y_n (or Fy_n = 0). Otherwise:

Step 2. Compute

u_n = P_{T_n}(w_n − σλ_n d_n Fy_n), where T_n = {x ∈ H : ⟨w_n − βλ_n Fw_n − y_n, x − y_n⟩ ≤ 0},

and

d_n = ⟨w_n − y_n, η_n⟩ / ∥η_n∥², (4.4)
η_n = w_n − y_n − βλ_n(Fw_n − Fy_n). (4.5)

Step 3. Compute

v_n = x_n + ξ_n(x_n − x_{n−1}), (4.6)
x_{n+1} = (1 − α_n)v_n + α_n u_n. (4.7)

Set n ← n + 1 and go to Step 1.

A flow chart of Algorithm 4.1, illustrating its operational steps (initialization; Step 1 with the stopping test "w_n = y_n (or Fy_n = 0)?"; Step 2; Step 3; and the loop back to Step 1), is given in Table 1.

Table 1: Flowchart of Algorithm 4.1 with initialization and iterative steps.
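Under a simplified parameter setting (δ_n = χ_n = 1, ζ_n = 0, and constant inertial parameters satisfying (A4)-(A6) with θ̄ = 3), one run of Algorithm 4.1 can be sketched as follows. The operator and feasible set are our own toy choices (a monotone affine F over the nonnegative orthant), not an example from the paper:

```python
import numpy as np

# Toy monotone VI: F(x) = Ax + b with A skew-symmetric, C = R^2_+,
# whose unique solution is x* = (1, 1).
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
b = np.array([-1.0, 1.0])
F = lambda x: A @ x + b
P_C = lambda x: np.maximum(x, 0.0)

mu, lam, sigma, beta = 0.5, 1.0, 0.8, 1.0   # mu in (0,1), sigma in (0,2mu), beta in (sigma/2, 1/mu)
nu, xi, alpha = 0.5, 0.1, 0.2               # constant nu_n, xi_n, alpha_n satisfying (A4)-(A6)

x_prev = x = np.zeros(2)
for n in range(1000):
    w = x + nu * (x - x_prev)               # Step 1: first inertial extrapolation
    Fw = F(w)
    y = P_C(w - beta * lam * Fw)
    if np.linalg.norm(w - y) < 1e-12:       # w = y  =>  y solves the VIP
        break
    Fy = F(y)
    lam_next = lam                          # self-adaptive rule (4.3) with delta_n = chi_n = 1, zeta_n = 0
    if np.linalg.norm(Fw - Fy) > 0:
        lam_next = min(mu * np.linalg.norm(w - y) / np.linalg.norm(Fw - Fy), lam)
    eta = w - y - beta * lam * (Fw - Fy)    # Step 2
    d = (w - y) @ eta / (eta @ eta)
    a = w - beta * lam * Fw - y             # normal vector of the half-space T_n
    z = w - sigma * lam * d * Fy
    t = a @ (z - y)                         # explicit projection of z onto T_n
    u = z - (t / (a @ a)) * a if (a @ a) > 0 and t > 0 else z
    v = x + xi * (x - x_prev)               # Step 3: second inertial extrapolation
    x_prev, x = x, (1 - alpha) * v + alpha * u
    lam = lam_next
```

Note that the half-space projection in Step 2 is a closed-form rank-one correction, so only Step 1 involves the (possibly expensive) projection onto C.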
Remark 4.1. (i) It is worth noting that Li et al. [12] introduced a double inertial subgradient extragradient method combined with the projection contraction method, incorporating a line search rule, to address quasi-monotone variational inequalities. In contrast, we focus on the double inertial subgradient extragradient method combined with the projection contraction method to address non-monotone variational inequalities, without requiring any monotonicity assumption on the cost operator. Moreover, unlike the line search strategy adopted by Li et al. [12], our algorithm utilizes a self-adaptive step-size rule. Additionally, the second projection in our method is computed onto a generalized half-space.

(ii) The step-size sequence is well defined under Assumption (A7), and the existence of lim_{n→∞} λ_n is assured. Furthermore, it is straightforward to demonstrate that the sequence {λ_n} possesses the lower bound min{μ/L, λ_1}. For more details, we refer to [28].

(iii) We utilize the parameters β and σ to enhance the subgradient extragradient method and the projection and contraction method. Through numerical experiments, we aim to demonstrate that choosing appropriate values for β and σ can significantly improve both convergence speed and accuracy.

(iv) Assumption (A3) holds in several important cases (see [13]), that is, (a) when x_n ⇀ p* and F is sequentially weakly continuous and monotone; and (b) when x_n ⇀ p* and F is sequentially weakly-strongly continuous (x_n ⇀ p* ⟹ Fx_n → Fp*).

We now prove the weak convergence result using Algorithm 4.1 (MDISEM).

Lemma 4.1. Assume that Assumptions (A1)-(A7) hold. Let v* be a weak cluster point of a subsequence {w_{n_j}} of {w_n}. If lim_{j→∞} ∥w_{n_j} − y_{n_j}∥ = 0, then v* ∈ VI(C, F).

Proof.
We see that w_{n_j} ⇀ v* and lim_{j→∞} ∥w_{n_j} − y_{n_j}∥ = 0. It follows that y_{n_j} ⇀ v*, and since y_n ∈ C, we have v* ∈ C. We now divide the proof into the following two cases.

Case 1: If lim sup_{j→∞} ∥Fy_{n_j}∥ = 0, then lim_{j→∞} ∥Fy_{n_j}∥ = lim inf_{j→∞} ∥Fy_{n_j}∥ = 0. Since y_{n_j} ⇀ v* and F is sequentially weakly continuous on C by Assumption (A1), we have Fy_{n_j} ⇀ Fv*, and by the weak lower semicontinuity of the norm,

∥Fv*∥ ≤ lim inf_{j→∞} ∥Fy_{n_j}∥ = 0.

This means that Fv* = 0, and hence v* ∈ VI(C, F).

Case 2: If lim sup_{j→∞} ∥Fy_{n_j}∥ > 0, then without loss of generality we can assume that lim sup_{j→∞} ∥Fy_{n_j}∥ = T_1 > 0. Thus, we can find j_0 ≥ 1 such that ∥Fy_{n_j}∥ > T_1/2 for all j ≥ j_0. By the definition of y_{n_j}, we have

⟨w_{n_j} − βλ_{n_j} Fw_{n_j} − y_{n_j}, x − y_{n_j}⟩ ≤ 0, for all x ∈ C,

which is equivalent to

(1/(βλ_{n_j})) ⟨w_{n_j} − y_{n_j}, x − y_{n_j}⟩ ≤ ⟨Fw_{n_j}, x − y_{n_j}⟩, for all x ∈ C.

Thus, we have

(1/(βλ_{n_j})) ⟨w_{n_j} − y_{n_j}, x − y_{n_j}⟩ + ⟨Fw_{n_j}, y_{n_j} − w_{n_j}⟩ ≤ ⟨Fw_{n_j}, x − w_{n_j}⟩, for all x ∈ C. (4.8)

Since {w_{n_j}} is weakly convergent, it is bounded. Therefore, from the Lipschitz continuity of F, {Fw_{n_j}} is also bounded. Also, as lim_{j→∞} ∥w_{n_j} − y_{n_j}∥ = 0, we see that {y_{n_j}} is also bounded. From the fact that {λ_n} and β are bounded, taking j → ∞ in (4.8), we have

lim inf_{j→∞} ⟨Fw_{n_j}, x − w_{n_j}⟩ ≥ 0, for all x ∈ C. (4.9)

Also,

⟨Fy_{n_j}, x − y_{n_j}⟩ = ⟨Fy_{n_j} − Fw_{n_j}, x − w_{n_j}⟩ + ⟨Fw_{n_j}, x − w_{n_j}⟩ + ⟨Fy_{n_j}, w_{n_j} − y_{n_j}⟩. (4.10)

Since lim_{j→∞} ∥w_{n_j} − y_{n_j}∥ = 0 and F is L-Lipschitz continuous on H, we get lim_{j→∞} ∥Fw_{n_j} − Fy_{n_j}∥ = 0, which together with (4.9) and (4.10) implies that

lim inf_{j→∞} ⟨Fy_{n_j}, x − y_{n_j}⟩ ≥ 0, for all x ∈ C. (4.11)

We choose a positive sequence {s_j} with lim_{j→∞} s_j = 0 such that

⟨Fy_{n_j}, x − y_{n_j}⟩ + s_j > 0, for all j ≥ 0.

This implies

⟨Fy_{n_j}, x⟩ + s_j > ⟨Fy_{n_j}, y_{n_j}⟩, for all j ≥ 0. (4.12)

Setting x := v* in (4.12), we get

⟨Fy_{n_j}, v*⟩ + s_j > ⟨Fy_{n_j}, y_{n_j}⟩, for all j ≥ 0.
(4.13)

Taking j → ∞ in (4.13) and using the fact that y_{n_j} ⇀ v*, we get

⟨Fv*, v*⟩ ≥ lim sup_{j→∞} ⟨Fy_{n_j}, y_{n_j}⟩. (4.14)

Using (4.14) and Assumption (A3), we get lim_{j→∞} ⟨Fy_{n_j}, y_{n_j}⟩ = ⟨Fv*, v*⟩. Finally, from (4.12), we get

⟨Fv*, x⟩ = lim_{j→∞} (⟨Fy_{n_j}, x⟩ + s_j) ≥ lim inf_{j→∞} ⟨Fy_{n_j}, y_{n_j}⟩ = lim_{j→∞} ⟨Fy_{n_j}, y_{n_j}⟩ = ⟨Fv*, v*⟩.

Thus, we get ⟨Fv*, x − v*⟩ ≥ 0 for all x ∈ C. Hence v* ∈ VI(C, F).

Lemma 4.2. Assume that Assumptions (A1)-(A7) hold. Let {x_n} be the sequence obtained by Algorithm 4.1. Then the following results hold:

(i) {x_n} is bounded.
(ii) lim_{n→∞} ∥x_{n+1} − x_n∥ = 0.
(iii) lim_{n→∞} ∥x_n − p*∥ exists for all p* ∈ S_F.
(iv) lim_{n→∞} ∥x_n − w_n∥ = 0.

Proof. (i): First, we prove that {x_n} is bounded. From the definition of u_n and the properties of the projection, we have, for any p* ∈ S_F,

∥u_n − p*∥² = ∥P_{T_n}(w_n − σλ_n d_n Fy_n) − p*∥²
≤ ∥w_n − σλ_n d_n Fy_n − p*∥² − ∥w_n − σλ_n d_n Fy_n − u_n∥²
= ∥w_n − p*∥² + ∥σλ_n d_n Fy_n∥² − 2σλ_n d_n ⟨w_n − p*, Fy_n⟩ − ∥w_n − u_n∥² − ∥σλ_n d_n Fy_n∥² + 2σλ_n d_n ⟨w_n − u_n, Fy_n⟩
= ∥w_n − p*∥² − 2σλ_n d_n ⟨Fy_n, w_n − p*⟩ − ∥w_n − u_n∥² + 2σλ_n d_n ⟨Fy_n, w_n − u_n⟩
= ∥w_n − p*∥² − 2σλ_n d_n ⟨Fy_n, u_n − p*⟩ − ∥w_n − u_n∥². (4.15)

Since y_n ∈ C and p* ∈ S_F, we get ⟨Fy_n, y_n − p*⟩ ≥ 0. This can be rewritten as ⟨Fy_n, y_n − u_n + u_n − p*⟩ ≥ 0, which implies

⟨Fy_n, p* − u_n⟩ ≤ ⟨Fy_n, y_n − u_n⟩.
(4.16)

Using (4.4) and (4.5), we have

d_n = ⟨w_n − y_n, η_n⟩ / ∥η_n∥²
= ⟨w_n − y_n, w_n − y_n − βλ_n(Fw_n − Fy_n)⟩ / ∥η_n∥²
= (∥w_n − y_n∥² − βλ_n ⟨w_n − y_n, Fw_n − Fy_n⟩) / ∥η_n∥²
≥ (∥w_n − y_n∥² − βλ_n ∥w_n − y_n∥ ∥Fw_n − Fy_n∥) / ∥η_n∥². (4.17)

Using the step-size rule (4.3), we have

∥Fw_n − Fy_n∥ ≤ (μδ_n / λ_{n+1}) ∥w_n − y_n∥. (4.18)

Using (4.18) in (4.17), we get

d_n ≥ (1 − βμδ_n λ_n / λ_{n+1}) ∥w_n − y_n∥² / ∥η_n∥². (4.19)

Now, again from (4.5) and (4.18), we have

∥η_n∥ = ∥w_n − y_n − βλ_n(Fw_n − Fy_n)∥ ≤ ∥w_n − y_n∥ + βλ_n ∥Fw_n − Fy_n∥ ≤ (1 + βμδ_n λ_n / λ_{n+1}) ∥w_n − y_n∥. (4.20)

From Assumption (A7) and Remark 4.1(ii), we have lim_{n→∞} δ_n λ_n / λ_{n+1} = 1. Thus, there exists n_0 ∈ ℕ such that 1 − βμδ_n λ_n / λ_{n+1} > 0 for all n ≥ n_0. Combining (4.19) and (4.20), we get

d_n ≥ (1 − βμδ_n λ_n / λ_{n+1}) / (1 + βμδ_n λ_n / λ_{n+1})² > 0, for all n ≥ n_0. (4.21)

Using the definition of T_n and the fact that u_n ∈ T_n, we get ⟨w_n − βλ_n Fw_n − y_n, u_n − y_n⟩ ≤ 0. This implies

⟨w_n − y_n − βλ_n(Fw_n − Fy_n), u_n − y_n⟩ ≤ βλ_n ⟨Fy_n, u_n − y_n⟩. (4.22)

From (4.16) and (4.21), we have

−2σλ_n d_n ⟨Fy_n, u_n − p*⟩ ≤ −2σλ_n d_n ⟨Fy_n, u_n − y_n⟩. (4.23)

Now from (4.4), (4.5), (4.23) and (4.22), we have

−2σλ_n d_n ⟨Fy_n, u_n − p*⟩ ≤ −2(σ/β) d_n ⟨η_n, u_n − y_n⟩
= −2(σ/β) d_n ⟨η_n, w_n − y_n⟩ + 2(σ/β) d_n ⟨η_n, w_n − u_n⟩
= −2(σ/β) d_n² ∥η_n∥² + 2(σ/β) d_n ⟨η_n, w_n − u_n⟩. (4.24)

Using Lemma 3.1, we have

2(σ/β) d_n ⟨η_n, w_n − u_n⟩ = ∥w_n − u_n∥² + (σ²/β²) d_n² ∥η_n∥² − ∥w_n − u_n − (σ/β) d_n η_n∥².
(4.25)

From (4.17) and (4.20), we get

d_n² ∥η_n∥² ≥ d_n (1 − βμδ_n λ_n / λ_{n+1}) ∥w_n − y_n∥²
≥ (1 − βμδ_n λ_n / λ_{n+1})² ∥w_n − y_n∥⁴ / ∥η_n∥²
≥ ((1 − βμδ_n λ_n / λ_{n+1})² / (1 + βμδ_n λ_n / λ_{n+1})²) ∥w_n − y_n∥². (4.26)

Using (4.24), (4.25) and (4.26) in (4.15), we get

∥u_n − p*∥² ≤ ∥w_n − p*∥² − ∥w_n − u_n − (σ/β) d_n η_n∥² − (σ/β²)(2β − σ) ((1 − βμδ_n λ_n / λ_{n+1})² / (1 + βμδ_n λ_n / λ_{n+1})²) ∥w_n − y_n∥², for all n ≥ n_0. (4.27)

As 2β − σ > 0, we get

∥u_n − p*∥ ≤ ∥w_n − p*∥, for all n ≥ n_0. (4.28)

Using (4.28) and Lemma 3.1, we have

∥x_{n+1} − p*∥² = ∥(1 − α_n)v_n + α_n u_n − p*∥²
= ∥(1 − α_n)(v_n − p*) + α_n(u_n − p*)∥²
= (1 − α_n)∥v_n − p*∥² + α_n∥u_n − p*∥² − α_n(1 − α_n)∥v_n − u_n∥²
≤ (1 − α_n)∥v_n − p*∥² + α_n∥w_n − p*∥² − α_n(1 − α_n)∥v_n − u_n∥², for all n ≥ n_0. (4.29)

As x_{n+1} = (1 − α_n)v_n + α_n u_n, we get

∥v_n − u_n∥ = (1/α_n)∥x_{n+1} − v_n∥, for all n ≥ 1. (4.30)

Combining (4.29) and (4.30), we get

∥x_{n+1} − p*∥² ≤ (1 − α_n)∥v_n − p*∥² + α_n∥w_n − p*∥² − ((1 − α_n)/α_n)∥x_{n+1} − v_n∥², for all n ≥ n_0. (4.31)

Using Lemma 3.1, we have

∥w_n − p*∥² = ∥x_n + ν_n(x_n − x_{n−1}) − p*∥²
= ∥(1 + ν_n)(x_n − p*) − ν_n(x_{n−1} − p*)∥²
= (1 + ν_n)∥x_n − p*∥² − ν_n∥x_{n−1} − p*∥² + ν_n(1 + ν_n)∥x_n − x_{n−1}∥². (4.32)

Similarly,

∥v_n − p*∥² = (1 + ξ_n)∥x_n − p*∥² − ξ_n∥x_{n−1} − p*∥² + ξ_n(1 + ξ_n)∥x_n − x_{n−1}∥². (4.33)

Furthermore,

∥x_{n+1} − v_n∥² = ∥x_{n+1} − x_n − ξ_n(x_n − x_{n−1})∥²
= ∥x_{n+1} − x_n∥² + ξ_n²∥x_n − x_{n−1}∥² − 2ξ_n⟨x_{n+1} − x_n, x_n − x_{n−1}⟩
≥ ∥x_{n+1} − x_n∥² + ξ_n²∥x_n − x_{n−1}∥² − 2ξ_n∥x_{n+1} − x_n∥∥x_n − x_{n−1}∥
≥ (1 − ξ_n)∥x_{n+1} − x_n∥² + (ξ_n² − ξ_n)∥x_n − x_{n−1}∥².
(4.34)

Substituting (4.32), (4.33) and (4.34) into (4.31), we get

∥x_{n+1} − p*∥² ≤ (1 − α_n)[(1 + ξ_n)∥x_n − p*∥² − ξ_n∥x_{n−1} − p*∥² + ξ_n(1 + ξ_n)∥x_n − x_{n−1}∥²]
+ α_n[(1 + ν_n)∥x_n − p*∥² − ν_n∥x_{n−1} − p*∥² + ν_n(1 + ν_n)∥x_n − x_{n−1}∥²]
− ((1 − α_n)/α_n)[(1 − ξ_n)∥x_{n+1} − x_n∥² + (ξ_n² − ξ_n)∥x_n − x_{n−1}∥²]
= (1 + α_n ν_n + ξ_n(1 − α_n))∥x_n − p*∥² − (α_n ν_n + ξ_n(1 − α_n))∥x_{n−1} − p*∥² − κ_n∥x_{n+1} − x_n∥² + π_n∥x_n − x_{n−1}∥², (4.35)

where κ_n = (1 − α_n)(1 − ξ_n)/α_n and π_n = (1 − α_n)ξ_n(1 + ξ_n) + α_n ν_n(1 + ν_n) − (1 − α_n)(ξ_n² − ξ_n)/α_n.

We define

Ω_n = ∥x_n − p*∥² − (α_n ν_n + ξ_n(1 − α_n))∥x_{n−1} − p*∥² + π_n∥x_n − x_{n−1}∥², n ≥ 1. (4.36)

Then

Ω_{n+1} − Ω_n = ∥x_{n+1} − p*∥² − (α_{n+1} ν_{n+1} + ξ_{n+1}(1 − α_{n+1}))∥x_n − p*∥² + π_{n+1}∥x_{n+1} − x_n∥² − ∥x_n − p*∥² + (α_n ν_n + ξ_n(1 − α_n))∥x_{n−1} − p*∥² − π_n∥x_n − x_{n−1}∥²
≤ (α_n ν_n + ξ_n(1 − α_n) − α_{n+1} ν_{n+1} − ξ_{n+1}(1 − α_{n+1}))∥x_n − p*∥² − κ_n∥x_{n+1} − x_n∥² + π_{n+1}∥x_{n+1} − x_n∥²
= (α_n(ν_n − ξ_n) − α_{n+1}(ν_{n+1} − ξ_{n+1}) − (ξ_{n+1} − ξ_n))∥x_n − p*∥² − κ_n∥x_{n+1} − x_n∥² + π_{n+1}∥x_{n+1} − x_n∥². (4.37)

By 0 ≤ ξ_n ≤ ξ_{n+1} < ν_1 ≤ ν_n ≤ ν_{n+1} and 0 < α_n ≤ α_{n+1} < 1, we have −α_{n+1}(ν_{n+1} − ξ_{n+1}) ≤ −α_n(ν_{n+1} − ξ_{n+1}) and −(ξ_{n+1} − ξ_n) ≤ −α_n(ξ_{n+1} − ξ_n). Therefore, we get

α_n(ν_n − ξ_n) − α_{n+1}(ν_{n+1} − ξ_{n+1}) − (ξ_{n+1} − ξ_n) ≤ α_n(ν_n − ξ_n) − α_n(ν_{n+1} − ξ_{n+1}) − α_n(ξ_{n+1} − ξ_n) = α_n(ν_n − ν_{n+1}) ≤ 0.
Thus, from (4.37), we have

Ω_{n+1} − Ω_n ≤ −κ_n∥x_{n+1} − x_n∥² + π_{n+1}∥x_{n+1} − x_n∥² = −(κ_n − π_{n+1})∥x_{n+1} − x_n∥². (4.38)

Consider

κ_n − π_{n+1} = (1 − ξ_n)(1 − α_n)/α_n − (1 − α_{n+1})ξ_{n+1}(1 + ξ_{n+1}) − α_{n+1}ν_{n+1}(1 + ν_{n+1}) + (1 − α_{n+1})(ξ_{n+1}² − ξ_{n+1})/α_{n+1}
≥ (1 − ξ_n)(1 − α_{n+1})/α_{n+1} + (1 − α_{n+1})(ξ_{n+1}² − ξ_{n+1})/α_{n+1} − (1 − α_{n+1})ξ_{n+1}(1 + ξ_{n+1}) − α_{n+1}ν_{n+1}(1 + ν_{n+1})
≥ ((1 − α_{n+1})/α_{n+1})(ξ_{n+1}² − ξ_{n+1} − ξ_n + 1) − 2(1 − α_{n+1}) − 2α_{n+1}
≥ ((1 − α_{n+1})/α_{n+1})(ξ_{n+1}² − 2ξ_{n+1} + 1) − 2
≥ θ̄(ξ_{n+1}² − 2ξ_{n+1} + 1) − 2
≥ θ̄ξ² − 2θ̄ξ + θ̄ − 2. (4.39)

Indeed, from Assumption (A5) we have ξ_{n+1} ≤ ξ < (θ̄ − √(2θ̄))/θ̄ < 1, which implies

θ̄ξ_{n+1}² − 2θ̄ξ_{n+1} + θ̄ − 2 ≥ θ̄ξ² − 2θ̄ξ + θ̄ − 2 > 0.

Thus, from (4.38) and (4.39), we have

Ω_{n+1} − Ω_n ≤ −r∥x_{n+1} − x_n∥², (4.40)

where r = θ̄ξ² − 2θ̄ξ + θ̄ − 2 > 0. Therefore, the sequence {Ω_n} is non-increasing. From (4.36), we have

Ω_n = ∥x_n − p*∥² − (α_n ν_n + ξ_n(1 − α_n))∥x_{n−1} − p*∥² + π_n∥x_n − x_{n−1}∥²
≥ ∥x_n − p*∥² − (α_n ν_n + ξ_n(1 − α_n))∥x_{n−1} − p*∥².

This implies

∥x_n − p*∥² ≤ (α_n ν_n + ξ_n(1 − α_n))∥x_{n−1} − p*∥² + Ω_n
≤ (1/(1 + θ̄) + ξ(1 − α))∥x_{n−1} − p*∥² + Ω_n
= ε∥x_{n−1} − p*∥² + Ω_n
≤ ⋯ ≤ ε^n∥x_0 − p*∥² + (1 + ε + ε² + ⋯ + ε^{n−1})Ω_1
≤ ε^n∥x_0 − p*∥² + Ω_1/(1 − ε),

where ε = 1/(1 + θ̄) + ξ(1 − α) < 1. So the sequence {∥x_n − p*∥} is bounded, and hence {x_n} is bounded.

(ii): Now we show that lim_{n→∞} ∥x_{n+1} − x_n∥ = 0. Consider

Ω_{n+1} = ∥x_{n+1} − p*∥² − (α_{n+1} ν_{n+1} + ξ_{n+1}(1 − α_{n+1}))∥x_n − p*∥² + π_{n+1}∥x_{n+1} − x_n∥²
≥ −(α_{n+1} ν_{n+1} + ξ_{n+1}(1 − α_{n+1}))∥x_n − p*∥².

Thus, we have

−Ω_{n+1} ≤ (α_{n+1} ν_{n+1} + ξ_{n+1}(1 − α_{n+1}))∥x_n − p*∥² ≤ ε∥x_n − p*∥² ≤ ε^{n+1}∥x_0 − p*∥² + (ε/(1 − ε))Ω_1.
Also, from (4.40), we have r∥x_{n+1} − x_n∥² ≤ Ω_n − Ω_{n+1}. It implies

r Σ_{n=1}^t ∥x_{n+1} − x_n∥² ≤ Σ_{n=1}^t (Ω_n − Ω_{n+1}) = Ω_1 − Ω_{t+1} ≤ Ω_1 + ε^{t+1}∥x_0 − p*∥² + (ε/(1 − ε))Ω_1 = ε^{t+1}∥x_0 − p*∥² + Ω_1/(1 − ε).

Therefore,

Σ_{n=1}^∞ ∥x_{n+1} − x_n∥² ≤ Ω_1 / (r(1 − ε)) < +∞,

and hence lim_{n→∞} ∥x_{n+1} − x_n∥ = 0.

(iii): Next, we show that lim_{n→∞} ∥x_n − p*∥ exists. From (4.35), we have

∥x_{n+1} − p*∥² ≤ (1 + α_n ν_n + ξ_n(1 − α_n))∥x_n − p*∥² − (α_n ν_n + ξ_n(1 − α_n))∥x_{n−1} − p*∥² − κ_n∥x_{n+1} − x_n∥² + π_n∥x_n − x_{n−1}∥²
≤ ∥x_n − p*∥² + (α_n ν_n + ξ_n(1 − α_n))(∥x_n − p*∥² − ∥x_{n−1} − p*∥²) + π_n∥x_n − x_{n−1}∥².

Moreover,

π_n = (1 − α_n)ξ_n(1 + ξ_n) + α_n ν_n(1 + ν_n) − (1 − α_n)(ξ_n² − ξ_n)/α_n ≤ (1 − α)ξ(1 + ξ) + 2/(1 + θ̄) + (1 − α)/(4α),

so {π_n} is bounded and, by part (ii), Σ_{n=1}^∞ π_n∥x_n − x_{n−1}∥² < +∞. Thus, applying Lemma 3.3, we have that lim_{n→∞} ∥x_n − p*∥ exists.

(iv): Finally, we show that lim_{n→∞} ∥x_n − w_n∥ = 0. Consider

∥x_{n+1} − v_n∥ ≤ ∥x_{n+1} − x_n∥ + ξ_n∥x_n − x_{n−1}∥ → 0 as n → ∞.

Along similar lines,

∥x_{n+1} − w_n∥ ≤ ∥x_{n+1} − x_n∥ + ν_n∥x_n − x_{n−1}∥ → 0 as n → ∞.

Also,

∥x_{n+1} − v_n∥ = ∥(1 − α_n)v_n + α_n u_n − v_n∥ = α_n∥v_n − u_n∥ ≥ α∥v_n − u_n∥,

which implies lim_{n→∞} ∥v_n − u_n∥ = 0. Furthermore,

∥w_n − v_n∥ = ∥x_n + ν_n(x_n − x_{n−1}) − x_n − ξ_n(x_n − x_{n−1})∥ ≤ (ν_n + ξ_n)∥x_n − x_{n−1}∥ ≤ 2∥x_n − x_{n−1}∥ → 0 as n → ∞.

This implies lim_{n→∞} ∥w_n − v_n∥ = 0. Also, it is easy to see that lim_{n→∞} ∥w_n − u_n∥ = 0. Now, from (4.27), we have

(σ/β²)(2β − σ)((1 − βμδ_n λ_n/λ_{n+1})²/(1 + βμδ_n λ_n/λ_{n+1})²)∥w_n − y_n∥² + ∥w_n − u_n − (σ/β)d_n η_n∥²
≤ ∥w_n − p*∥² − ∥u_n − p*∥² ≤ (∥w_n − p*∥ + ∥u_n − p*∥)∥w_n − u_n∥.
Therefore, we have $\lim_{n\to\infty}\|w_n-y_n\|=0$ and $\lim_{n\to\infty}\big\|w_n-u_n-\frac{\sigma}{\beta}d_n\eta_n\big\|=0$. Also, it can easily be shown that $\lim_{n\to\infty}\|x_n-v_n\|=0$, $\lim_{n\to\infty}\|x_n-w_n\|=0$ and $\lim_{n\to\infty}\|x_n-y_n\|=0$.

Now, we prove our main theorem.

Theorem 4.1. Assume that Assumptions (A1)-(A7) hold and $VI(C,F)$ is a finite set. Then the sequence $\{x_n\}$ generated by Algorithm 4.1 converges weakly to an element of $VI(C,F)$.

Proof. From Lemma 4.2, we see that the sequence $\{x_n\}$ is bounded. Thus, we can choose a subsequence $\{x_{n_j}\}$ of $\{x_n\}$ such that $x_{n_j}\rightharpoonup v^*$ as $j\to\infty$. Again from Lemma 4.2, we see that $\lim_{n\to\infty}\|x_n-w_n\|=0$ and $\lim_{n\to\infty}\|w_n-y_n\|=0$. From Lemma 4.1, it follows that every weak cluster point of $\{x_n\}$ lies in $VI(C,F)$. From the assumption that $VI(C,F)$ is finite, the sequence $\{x_n\}$ has finitely many weak cluster points in $VI(C,F)$. As $\lim_{n\to\infty}\|x_{n+1}-x_n\|=0$, there exists $N\in\mathbb{N}$ such that $\|x_{n+1}-x_n\|<p$ for all $n\ge N$. Assume that $\{x_n\}$ has more than one weak cluster point in $VI(C,F)$. Then, from [13, Lemma 3.5], there exists $N_1>N$ such that $x_{N_1}\in B_i$ and $x_{N_1+1}\in B_j$, where $i\ne j$, $i,j\in\{1,2,\dots,m\}$, $m\ge 2$ and
$$B_i=\bigcap_{j=1,\,j\ne i}^{m}\Big\{v:\Big\langle v,\frac{v_i-v_j}{\|v_i-v_j\|}\Big\rangle>\frac{\|v_i\|^2-\|v_j\|^2}{2\|v_i-v_j\|}+p\Big\},$$
where $p:=\min_{i,j\in\{1,2,\dots,m\},\,i\ne j}\Big\{\frac{\|v_i-v_j\|}{4}\Big\}$, and $v_1,v_2,\dots,v_m$ are the finitely many weak cluster points of $\{x_n\}$. In particular, we have
$$\|x_{N_1+1}-x_{N_1}\|<p. \quad (4.41)$$
Now
$$x_{N_1}\in B_i=\bigcap_{j=1,\,j\ne i}^{m}\Big\{v:\Big\langle v,\frac{v_i-v_j}{\|v_i-v_j\|}\Big\rangle>p+\frac{\|v_i\|^2-\|v_j\|^2}{2\|v_i-v_j\|}\Big\},$$
and
$$\begin{aligned}
x_{N_1+1}\in B_j&=\bigcap_{i=1,\,i\ne j}^{m}\Big\{v:\Big\langle v,\frac{v_j-v_i}{\|v_j-v_i\|}\Big\rangle>p+\frac{\|v_j\|^2-\|v_i\|^2}{2\|v_j-v_i\|}\Big\}\\
&=\bigcap_{i=1,\,i\ne j}^{m}\Big\{v:\Big\langle -v,\frac{v_j-v_i}{\|v_j-v_i\|}\Big\rangle<-p+\frac{\|v_i\|^2-\|v_j\|^2}{2\|v_i-v_j\|}\Big\}\\
&=\bigcap_{i=1,\,i\ne j}^{m}\Big\{v:\Big\langle -v,\frac{v_i-v_j}{\|v_i-v_j\|}\Big\rangle>p+\frac{\|v_j\|^2-\|v_i\|^2}{2\|v_i-v_j\|}\Big\}.
\end{aligned}$$
Thus, we get
$$\Big\langle x_{N_1},\frac{v_i-v_j}{\|v_i-v_j\|}\Big\rangle>p+\frac{\|v_i\|^2-\|v_j\|^2}{2\|v_i-v_j\|} \quad (4.42)$$
and
$$\Big\langle -x_{N_1+1},\frac{v_i-v_j}{\|v_i-v_j\|}\Big\rangle>p+\frac{\|v_j\|^2-\|v_i\|^2}{2\|v_i-v_j\|}. \quad (4.43)$$
Adding (4.42) and (4.43) and using (4.41), we get
$$2p<\Big\langle x_{N_1}-x_{N_1+1},\frac{v_i-v_j}{\|v_i-v_j\|}\Big\rangle\le\|x_{N_1}-x_{N_1+1}\|<p.$$
This is a contradiction. Hence, the sequence $\{x_n\}$ has only one weak cluster point $v^*\in VI(C,F)$. Thus, we see that $\{x_n\}$ converges weakly to an element of $VI(C,F)$.

Using our Algorithm 4.1 (MDISEM), we now give a weak convergence result in which $F$ is a quasi-monotone operator. Instead of Lemma 4.1, we use the following lemma:

Lemma 4.3. Assume that Assumptions (A1)-(A2) and (A4)-(A7) hold and $F$ is quasi-monotone on $H$. Let $v^*$ be one of the weak cluster points of the subsequence $\{w_{n_j}\}$ of $\{w_n\}$. If $\lim_{j\to\infty}\|w_{n_j}-y_{n_j}\|=0$, then $v^*\in S_F$ or $Fv^*=0$.

Proof. We see that $w_{n_j}\rightharpoonup v^*$ and $\lim_{j\to\infty}\|w_{n_j}-y_{n_j}\|=0$. It implies that $y_{n_j}\rightharpoonup v^*$ and, since $y_n\in C$, we have $v^*\in C$. We now divide the proof into the following two cases.

Case 1: If $\limsup_{j\to\infty}\|Fy_{n_j}\|=0$, then we have $\lim_{j\to\infty}\|Fy_{n_j}\|=\liminf_{j\to\infty}\|Fy_{n_j}\|=0$. Then, from Assumption (A1), we see that $0\le\|Fv^*\|\le\liminf_{j\to\infty}\|Fy_{n_j}\|=0$. This means that $Fv^*=0$.

Case 2: If $\limsup_{j\to\infty}\|Fy_{n_j}\|>0$, then, without loss of generality, we can assume that $\limsup_{j\to\infty}\|Fy_{n_j}\|=T_1>0$.
Thus, we can find $j_0\ge 1$ such that $\|Fy_{n_j}\|>\frac{T_1}{2}$ for all $j\ge j_0$. We have
$$\langle w_{n_j}-\beta\lambda_{n_j}Fw_{n_j}-y_{n_j},\,x-y_{n_j}\rangle\le 0 \quad \text{for all } x\in C,$$
which is equivalent to
$$\frac{1}{\beta\lambda_{n_j}}\langle w_{n_j}-y_{n_j},x-y_{n_j}\rangle\le\langle Fw_{n_j},x-y_{n_j}\rangle \quad \text{for all } x\in C.$$
Thus, we have
$$\frac{1}{\beta\lambda_{n_j}}\langle w_{n_j}-y_{n_j},x-y_{n_j}\rangle+\langle Fw_{n_j},y_{n_j}-w_{n_j}\rangle\le\langle Fw_{n_j},x-w_{n_j}\rangle \quad \text{for all } x\in C. \quad (4.44)$$
Since $\{w_{n_j}\}$ is weakly convergent, it is bounded. Therefore, by the Lipschitz continuity of $F$, $\{Fw_{n_j}\}$ is also bounded. Also, as $\lim_{j\to\infty}\|w_{n_j}-y_{n_j}\|=0$, the sequence $\{y_{n_j}\}$ is bounded. From the fact that $\{\lambda_n\}$ and $\beta$ are bounded, taking $j\to\infty$ in (4.44), we have
$$\liminf_{j\to\infty}\langle Fw_{n_j},x-w_{n_j}\rangle\ge 0, \quad \text{for all } x\in C. \quad (4.45)$$
Also,
$$\langle Fy_{n_j},x-y_{n_j}\rangle=\langle Fy_{n_j}-Fw_{n_j},x-w_{n_j}\rangle+\langle Fw_{n_j},x-w_{n_j}\rangle+\langle Fy_{n_j},w_{n_j}-y_{n_j}\rangle. \quad (4.46)$$
Since $\lim_{j\to\infty}\|w_{n_j}-y_{n_j}\|=0$ and $F$ is $L$-Lipschitz continuous on $H$, we get $\lim_{j\to\infty}\|Fw_{n_j}-Fy_{n_j}\|=0$, which together with (4.45) and (4.46) implies that
$$\liminf_{j\to\infty}\langle Fy_{n_j},x-y_{n_j}\rangle\ge 0, \quad \text{for all } x\in C. \quad (4.47)$$
We now consider the following two subcases.

Case 2a: If $\limsup_{j\to\infty}\langle Fy_{n_j},x-y_{n_j}\rangle>0$, then we can choose a subsequence $\{y_{n_{j_k}}\}$ such that $\lim_{k\to\infty}\langle Fy_{n_{j_k}},x-y_{n_{j_k}}\rangle>0$. Thus, there exists $k_0\ge 1$ such that $\langle Fy_{n_{j_k}},x-y_{n_{j_k}}\rangle>0$ for all $k\ge k_0$. By the quasi-monotonicity of $F$ on $C$, we get $\langle Fx,x-y_{n_{j_k}}\rangle\ge 0$. Letting $k\to\infty$ in the above inequality, we have $\langle Fx,x-v^*\rangle\ge 0$. It implies that $v^*\in S_F$.

Case 2b: If $\limsup_{j\to\infty}\langle Fy_{n_j},x-y_{n_j}\rangle=0$, then, by (4.47), we get $\lim_{j\to\infty}\langle Fy_{n_j},x-y_{n_j}\rangle=0$ for all $x\in C$. It further implies that
$$\langle Fy_{n_j},x-y_{n_j}\rangle+\big|\langle Fy_{n_j},x-y_{n_j}\rangle\big|+\frac{1}{j+1}>0, \quad \text{for all } x\in C.$$
Let $\tau_j=\big|\langle Fy_{n_j},x-y_{n_j}\rangle\big|+\frac{1}{j+1}>0$; then we have
$$\langle Fy_{n_j},x-y_{n_j}\rangle+\tau_j>0, \quad \text{for all } j\ge j_0. \quad (4.48)$$
Furthermore, as $y_{n_j}\in C$, we have $Fy_{n_j}\ne 0$ for all $j\ge 1$. Setting $v_{n_j}=\frac{Fy_{n_j}}{\|Fy_{n_j}\|^2}$, we get $\langle Fy_{n_j},v_{n_j}\rangle=1$ for each $j\ge j_0$. It follows from (4.48) that, for each $j\ge j_0$,
$$\langle Fy_{n_j},x+\tau_jv_{n_j}-y_{n_j}\rangle>0.$$
Since $F$ is quasi-monotone, we have
$$\langle F(x+\tau_jv_{n_j}),x+\tau_jv_{n_j}-y_{n_j}\rangle\ge 0.$$
Now, for all $j\ge j_0$, we finally have
$$\begin{aligned}
\langle Fx,x+\tau_jv_{n_j}-y_{n_j}\rangle&=\langle Fx-F(x+\tau_jv_{n_j}),x+\tau_jv_{n_j}-y_{n_j}\rangle+\langle F(x+\tau_jv_{n_j}),x+\tau_jv_{n_j}-y_{n_j}\rangle\\
&\ge\langle Fx-F(x+\tau_jv_{n_j}),x+\tau_jv_{n_j}-y_{n_j}\rangle\\
&\ge-\|Fx-F(x+\tau_jv_{n_j})\|\,\|x+\tau_jv_{n_j}-y_{n_j}\|\\
&\ge-\tau_jL\|v_{n_j}\|\,\|x+\tau_jv_{n_j}-y_{n_j}\|\\
&=-\tau_jL\frac{1}{\|Fy_{n_j}\|}\|x+\tau_jv_{n_j}-y_{n_j}\|\\
&\ge-\tau_jL\frac{2}{T_1}\|x+\tau_jv_{n_j}-y_{n_j}\|.
\end{aligned}$$
Since $\|x+\tau_jv_{n_j}-y_{n_j}\|$ is bounded and $\tau_j\to 0$ as $j\to\infty$, taking $j\to\infty$ in the inequality above, we have
$$\langle Fx,x-v^*\rangle\ge 0, \quad \text{for all } x\in C. \quad (4.49)$$
Hence $v^*\in S_F$.

Theorem 4.2. Suppose that Assumptions (A1)-(A2) and (A4)-(A7) hold, $F$ is quasi-monotone on $H$ and $Fx\ne 0$ for all $x\in C$. Then the sequence $\{x_n\}$ obtained by MDISEM converges weakly to a point in $S_F\subset VI(C,F)$.

Proof. Using similar arguments as in Lemma 4.2, we obtain that $\lim_{n\to\infty}\|x_{n+1}-x_n\|=0$, $\lim_{n\to\infty}\|x_n-p^*\|$ exists and $\lim_{n\to\infty}\|x_n-w_n\|=0$. Let $\omega(x_n)$ denote the set of weak cluster points of $\{x_n\}$. Moreover, we see that $\omega(x_n)=\omega(y_n)=\omega(w_n)$. We show that $\omega(x_n)\subset S_F$. Take $v^*\in\omega(x_n)$; then there exists a subsequence $\{x_{n_k}\}$ of $\{x_n\}$ such that $x_{n_k}\rightharpoonup v^*$ as $k\to\infty$. Since $C$ is weakly closed, we have $v^*\in C$. As $Fx\ne 0$ for all $x\in C$, we get $Fv^*\ne 0$. Therefore, it follows from Lemma 4.3 that $v^*\in S_F$. By Lemma 3.4, $\{x_n\}$ converges weakly to a point in the solution set $S_F$.

Remark 4.2.
It is important to note that, when dealing with a non-monotone variational inequality, we impose the assumption that $VI(C,F)$ is a finite set. This assumption ensures that the sequence $\{x_n\}$ has only finitely many weak cluster points in $VI(C,F)$. In fact, it is shown that $\{x_n\}$ admits a unique weak cluster point in $VI(C,F)$, which yields weak convergence. In contrast, when dealing with a quasi-monotone variational inequality, we impose the condition $Fx\ne 0$ for all $x\in C$. Dropping this condition may still allow the existence of solutions in $VI(C,F)$; however, it no longer guarantees that the weak limit belongs to the solution set $S_F$, since $S_F\subset VI(C,F)$ in general.

Next, we establish the strong convergence of our Algorithm 4.1 under the assumption that $F$ is strongly pseudo-monotone and $L$-Lipschitz continuous on $H$. It has been shown that if $F$ is strongly pseudo-monotone, then $VI(C,F)$ has a unique solution; furthermore, $S_F=VI(C,F)$.

Theorem 4.3. Suppose that Assumptions (A4)-(A7) hold, and $F$ is $k$-strongly pseudo-monotone and $L$-Lipschitz continuous on $H$. Then the sequence $\{x_n\}$ generated by MDISEM converges strongly to the unique element of $VI(C,F)$.

Proof. Let $p^*\in VI(C,F)$ be the unique element. This implies that $\langle Fp^*,y_n-p^*\rangle\ge 0$; therefore, by the strong pseudo-monotonicity of $F$, we get $\langle Fy_n,y_n-p^*\rangle\ge k\|y_n-p^*\|^2$. So
$$\langle Fy_n,y_n-u_n+u_n-p^*\rangle\ge k\|y_n-p^*\|^2.$$
This further implies
$$\langle Fy_n,u_n-p^*\rangle\ge k\|y_n-p^*\|^2+\langle Fy_n,u_n-y_n\rangle,$$
and hence
$$-2\sigma\lambda_nd_n\langle Fy_n,u_n-p^*\rangle\le-2\sigma\lambda_nd_n\langle Fy_n,u_n-y_n\rangle-2\sigma d_n\lambda_nk\|y_n-p^*\|^2. \quad (4.50)$$
Using (4.50) in (4.15), we get
$$\begin{aligned}
\|u_n-p^*\|^2&\le\|w_n-\sigma\lambda_nd_nFy_n-p^*\|^2-\|w_n-\sigma\lambda_nd_nFy_n-u_n\|^2\\
&=\|w_n-p^*\|^2-\|w_n-u_n\|^2-2\sigma\lambda_nd_n\langle Fy_n,u_n-p^*\rangle\\
&\le\|w_n-p^*\|^2-\|w_n-u_n\|^2-2\sigma\lambda_nd_n\langle Fy_n,u_n-y_n\rangle-2\sigma d_n\lambda_nk\|y_n-p^*\|^2. \quad (4.51)
\end{aligned}$$
From (4.22) and using (4.24) and (4.25), we get
$$-2\sigma d_n\lambda_n\langle Fy_n,u_n-y_n\rangle\le-\frac{2\sigma}{\beta}d_n\langle\eta_n,u_n-y_n\rangle=-\frac{2\sigma}{\beta}d_n^2\|\eta_n\|^2+\|w_n-u_n\|^2+\frac{\sigma^2}{\beta^2}d_n^2\|\eta_n\|^2-\Big\|w_n-u_n-\frac{\sigma}{\beta}d_n\eta_n\Big\|^2.$$
Therefore, we have
$$\begin{aligned}
\|u_n-p^*\|^2&\le\|w_n-p^*\|^2-\|w_n-u_n\|^2-2\sigma d_n\lambda_nk\|y_n-p^*\|^2-\frac{2\sigma}{\beta}d_n^2\|\eta_n\|^2+\|w_n-u_n\|^2+\frac{\sigma^2}{\beta^2}d_n^2\|\eta_n\|^2-\Big\|w_n-u_n-\frac{\sigma}{\beta}d_n\eta_n\Big\|^2\\
&=\|w_n-p^*\|^2-2\sigma d_n\lambda_nk\|y_n-p^*\|^2-\frac{\sigma}{\beta^2}d_n^2\|\eta_n\|^2(2\beta-\sigma)-\Big\|w_n-u_n-\frac{\sigma}{\beta}d_n\eta_n\Big\|^2. \quad (4.52)
\end{aligned}$$
Using (4.26) in (4.52), we get
$$\|u_n-p^*\|^2\le\|w_n-p^*\|^2-\Big\|w_n-u_n-\frac{\sigma}{\beta}d_n\eta_n\Big\|^2-\frac{\sigma}{\beta^2}(2\beta-\sigma)\frac{\Big(1-\beta\mu\delta_n\frac{\lambda_n}{\lambda_{n+1}}\Big)^2}{\Big(1+\beta\mu\delta_n\frac{\lambda_n}{\lambda_{n+1}}\Big)^2}\|w_n-y_n\|^2-2\sigma d_n\lambda_nk\|y_n-p^*\|^2.$$
Setting $\rho\le\frac{(1-\beta\mu)^2}{(1+\beta\mu)^2}$, we get
$$1>\lim_{n\to\infty}\frac{\Big(1-\beta\mu\delta_n\frac{\lambda_n}{\lambda_{n+1}}\Big)^2}{\Big(1+\beta\mu\delta_n\frac{\lambda_n}{\lambda_{n+1}}\Big)^2}=\frac{(1-\beta\mu)^2}{(1+\beta\mu)^2}\ge\rho,$$
where $\rho$ is a fixed number. Also,
$$d_n\ge\frac{1-\beta\mu\delta_n\frac{\lambda_n}{\lambda_{n+1}}}{\Big(1+\beta\mu\delta_n\frac{\lambda_n}{\lambda_{n+1}}\Big)^2}.$$
Thus, we get $\lim_{n\to\infty}d_n\ge\frac{1-\beta\mu}{(1+\beta\mu)^2}=d$ (say). Hence, there exists $n_1\ge n_0$ such that, for all $n\ge n_1$, we have
$$\|u_n-p^*\|^2\le\|w_n-p^*\|^2-\Big\|w_n-u_n-\frac{\sigma}{\beta}d_n\eta_n\Big\|^2-\frac{\sigma}{\beta^2}(2\beta-\sigma)\rho\|w_n-y_n\|^2-2\sigma d_n\lambda_nk\|y_n-p^*\|^2.$$
Given that $\{\lambda_n\}$ has the lower bound $\min\{\frac{\mu}{L},\lambda_1\}=\lambda$ (say), we get
$$\begin{aligned}
\|u_n-p^*\|^2&\le\|w_n-p^*\|^2-\frac{\sigma}{\beta^2}(2\beta-\sigma)\rho\|w_n-y_n\|^2-2\sigma d\lambda k\|y_n-p^*\|^2\\
&=\|x_n+\nu_n(x_n-x_{n-1})-p^*\|^2-\frac{\sigma}{\beta^2}(2\beta-\sigma)\rho\|w_n-y_n\|^2-2\sigma d\lambda k\|y_n-p^*\|^2\\
&=(1+\nu_n)\|x_n-p^*\|^2-\nu_n\|x_{n-1}-p^*\|^2+\nu_n(1+\nu_n)\|x_n-x_{n-1}\|^2-\frac{\sigma}{\beta^2}(2\beta-\sigma)\rho\|w_n-y_n\|^2-2\sigma d\lambda k\|y_n-p^*\|^2. \quad (4.53)
\end{aligned}$$
Substituting (4.53) in (4.29), we get
$$\begin{aligned}
\|x_{n+1}-p^*\|^2&\le(1-\alpha_n)\|v_n-p^*\|^2+\alpha_n(1+\nu_n)\|x_n-p^*\|^2-\alpha_n\nu_n\|x_{n-1}-p^*\|^2+\alpha_n\nu_n(1+\nu_n)\|x_n-x_{n-1}\|^2\\
&\quad-\frac{\sigma}{\beta^2}(2\beta-\sigma)\rho\alpha_n\|w_n-y_n\|^2-2\alpha_n\sigma d\lambda k\|y_n-p^*\|^2-\alpha_n(1-\alpha_n)\|u_n-v_n\|^2\\
&=(1-\alpha_n)\|v_n-p^*\|^2+\alpha_n(1+\nu_n)\|x_n-p^*\|^2-\alpha_n\nu_n\|x_{n-1}-p^*\|^2+\alpha_n\nu_n(1+\nu_n)\|x_n-x_{n-1}\|^2\\
&\quad-\frac{\sigma}{\beta^2}(2\beta-\sigma)\rho\alpha_n\|w_n-y_n\|^2-2\alpha_n\sigma d\lambda k\|y_n-p^*\|^2-\frac{1-\alpha_n}{\alpha_n}\|x_{n+1}-v_n\|^2\\
&\le(1-\alpha_n)\big[(1+\xi_n)\|x_n-p^*\|^2-\xi_n\|x_{n-1}-p^*\|^2+\xi_n(1+\xi_n)\|x_n-x_{n-1}\|^2\big]\\
&\quad+\alpha_n(1+\nu_n)\|x_n-p^*\|^2-\alpha_n\nu_n\|x_{n-1}-p^*\|^2+\alpha_n\nu_n(1+\nu_n)\|x_n-x_{n-1}\|^2\\
&\quad-\frac{\sigma}{\beta^2}(2\beta-\sigma)\rho\alpha_n\|w_n-y_n\|^2-2\alpha_n\sigma d\lambda k\|y_n-p^*\|^2-\frac{1-\alpha_n}{\alpha_n}\big[(1-\xi_n)\|x_{n+1}-x_n\|^2+(\xi_n^2-\xi_n)\|x_n-x_{n-1}\|^2\big]\\
&=\big[(1-\alpha_n)(1+\xi_n)+\alpha_n(1+\nu_n)\big]\|x_n-p^*\|^2-\big(\xi_n(1-\alpha_n)+\alpha_n\nu_n\big)\|x_{n-1}-p^*\|^2\\
&\quad+\Big[(1-\alpha_n)\xi_n(1+\xi_n)+\alpha_n\nu_n(1+\nu_n)-\frac{(1-\alpha_n)(\xi_n^2-\xi_n)}{\alpha_n}\Big]\|x_n-x_{n-1}\|^2\\
&\quad-\frac{\sigma}{\beta^2}(2\beta-\sigma)\rho\alpha_n\|w_n-y_n\|^2-2\alpha_n\sigma d\lambda k\|y_n-p^*\|^2-\frac{(1-\alpha_n)(1-\xi_n)}{\alpha_n}\|x_{n+1}-x_n\|^2.
\end{aligned}$$
By the assumptions on $\alpha_n$, $\nu_n$ and $\xi_n$, we get
$$(1-\alpha_n)\xi_n(1+\xi_n)+\alpha_n\nu_n(1+\nu_n)-\frac{(1-\alpha_n)(\xi_n^2-\xi_n)}{\alpha_n}\le(1-\alpha)\xi(1+\xi)+\frac{2}{1+\bar\theta}+\frac{1-\alpha}{4\alpha}=K^* \ \text{(say)}.$$
Therefore, we have
$$\|x_{n+1}-p^*\|^2\le(1+\alpha_n\nu_n+\xi_n(1-\alpha_n))\|x_n-p^*\|^2-(\alpha_n\nu_n+\xi_n(1-\alpha_n))\|x_{n-1}-p^*\|^2+K^*\|x_n-x_{n-1}\|^2-2\alpha\sigma d\lambda k\|y_n-p^*\|^2.$$
It implies
$$2\alpha\sigma d\lambda k\|y_n-p^*\|^2\le\|x_n-p^*\|^2-\|x_{n+1}-p^*\|^2+(\alpha_n\nu_n+\xi_n(1-\alpha_n))\big(\|x_n-p^*\|^2-\|x_{n-1}-p^*\|^2\big)+K^*\|x_n-x_{n-1}\|^2.$$
Taking the summation, we get
$$2\alpha\sigma d\lambda k\sum_{i=N}^{n}\|y_i-p^*\|^2\le\|x_N-p^*\|^2-\|x_{n+1}-p^*\|^2+(\alpha_n\nu_n+\xi_n(1-\alpha_n))\|x_n-p^*\|^2-(\alpha_{N-1}\nu_{N-1}+\xi_{N-1}(1-\alpha_{N-1}))\|x_{N-1}-p^*\|^2+K^*\sum_{i=N}^{n}\|x_i-x_{i-1}\|^2.$$
As we have already shown that the sequence $\{x_n\}$ is bounded and $\sum_{i=N}^{\infty}\|x_i-x_{i-1}\|^2<+\infty$, we get $\sum_{i=N}^{\infty}\|y_i-p^*\|^2<+\infty$. Therefore, $\lim_{n\to\infty}\|y_n-p^*\|=0$. Consequently, we get
$$\|x_n-p^*\|\le\|x_n-w_n\|+\|w_n-y_n\|+\|y_n-p^*\|\to 0 \ \text{ as } n\to\infty.$$
Hence the result.

Now, we derive from Algorithm 4.1 another algorithm with fewer parameters by choosing $\nu_n=1$, $\xi_n=0$, $\delta_n=1$, $\chi_n=1$, $\zeta_n=0$, $\gamma=1$, $\sigma=1$ and $\beta=1$, which is straightforward to implement in practical applications.

Algorithm 4.1a
Initialization: Choose $\mu\in(0,1)$, $\lambda_1>0$, $x_0$ and $x_1\in H$. Calculate the next iterate $x_{n+1}$ as follows:
Step 1. Compute $w_n=2x_n-x_{n-1}$, $y_n=P_C(w_n-\lambda_nFw_n)$ and update the step size as
$$\lambda_{n+1}=\begin{cases}\min\Big\{\dfrac{\mu\|w_n-y_n\|}{\|Fw_n-Fy_n\|},\,\lambda_n\Big\}&\text{if }Fw_n\ne Fy_n,\\[2mm] \lambda_n&\text{otherwise}.\end{cases}$$
We consider $y_n$ a solution of (VIP) if $w_n=y_n$ (or $Fy_n=0$). Otherwise:
Step 2. Compute $u_n=P_{T_n}(w_n-\lambda_nd_nFy_n)$, where
$$T_n=\{x\in H:\langle w_n-\lambda_nFw_n-y_n,x-y_n\rangle\le 0\},\qquad d_n=\frac{\langle w_n-y_n,\eta_n\rangle}{\|\eta_n\|^2},\qquad \eta_n=w_n-y_n-\lambda_n(Fw_n-Fy_n).$$
Step 3. Compute $x_{n+1}=(1-\alpha_n)x_n+\alpha_nu_n$. Set $n\leftarrow n+1$ and go to Step 1.

Remark 4.3. For Algorithm 4.1a, Lemmas 4.1, 4.2 and 4.3, as well as Theorems 4.1 and 4.2, continue to hold.
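In finite dimensions, the steps of Algorithm 4.1a can be sketched as follows. This is a minimal illustration and not the authors' Matlab code: the box feasible set $C=[0,2]^2$, the affine monotone operator $F(x)=Mx+q$, the starting points and all parameter values are assumptions chosen for the example, and the half-space projection uses the standard explicit formula.

```python
import numpy as np

def project_box(x, lo, hi):
    # P_C for the illustrative box C = [lo, hi]^n
    return np.clip(x, lo, hi)

def project_halfspace(x, a, b):
    # Projection onto the half-space {z : <a, z> <= b}
    viol = a @ x - b
    return x if viol <= 0 else x - (viol / (a @ a)) * a

def algorithm_41a(F, proj_C, x0, x1, mu=0.5, lam=0.1, alpha=0.5,
                  tol=1e-10, max_iter=20000):
    x_prev, x = x0.astype(float), x1.astype(float)
    for n in range(max_iter):
        w = 2.0 * x - x_prev                      # Step 1: inertia with nu_n = 1
        Fw = F(w)
        y = proj_C(w - lam * Fw)
        if np.linalg.norm(w - y) < tol:           # stopping rule of Step 1
            return y, n
        Fy = F(y)
        eta = w - y - lam * (Fw - Fy)
        d = (w - y) @ eta / (eta @ eta)
        a = w - lam * Fw - y                      # normal vector of T_n
        u = project_halfspace(w - lam * d * Fy, a, a @ y)   # Step 2
        diff = np.linalg.norm(Fw - Fy)            # self-adaptive step size
        if diff > 0:
            lam = min(mu * np.linalg.norm(w - y) / diff, lam)
        x_prev, x = x, (1 - alpha) * x + alpha * u          # Step 3
    return x, max_iter

# Toy monotone VI: F(x) = Mx + q on C = [0, 2]^2; since the zero of F,
# x* = (1/3, 1/3), lies in the interior of C, it is the VI solution.
M = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([-1.0, -1.0])
F = lambda x: M @ x + q
proj_C = lambda x: project_box(x, 0.0, 2.0)
sol, iters = algorithm_41a(F, proj_C, np.zeros(2), np.ones(2))
```

On this toy instance the adaptive $\lambda_{n+1}$ never drops below its initial value $0.1<\mu/L$, and the iterates approach $(1/3,1/3)$ without ever invoking a nontrivial half-space projection, since $w_n-\lambda F w_n$ stays inside the box.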
We now establish the linear convergence of Algorithm 4.1 (MDISEM) for a constant step size $\lambda_n=\lambda\in\big(0,\frac{1}{L}\big)$ and parameters satisfying $\xi_n=0$, $\beta=1$, $\sigma=1$, $\gamma=1$, $\nu_n=\nu$ and $\alpha_n=\alpha$.

Algorithm 4.1b (Linear inertial subgradient extragradient method)
Initialization: For $x_0$ and $x_1\in H$, the next iterate $x_{n+1}$ is calculated as follows:
Step 1. Compute $w_n=x_n+\nu(x_n-x_{n-1})$, $y_n=P_C(w_n-\lambda Fw_n)$.
Step 2. Compute $u_n=P_{T_n}(w_n-\lambda d_nFy_n)$, where
$$T_n=\{x\in H:\langle w_n-\lambda Fw_n-y_n,x-y_n\rangle\le 0\},\qquad d_n=\frac{\langle w_n-y_n,\eta_n\rangle}{\|\eta_n\|^2},\qquad \eta_n=w_n-y_n-\lambda(Fw_n-Fy_n).$$
Step 3. Compute $x_{n+1}=(1-\alpha)x_n+\alpha u_n$. Set $n\leftarrow n+1$ and go to Step 1.

Theorem 4.4. The sequence $\{x_n\}$ generated by Algorithm 4.1b converges linearly to the unique element of $VI(C,F)$ if the following assumptions are satisfied:
(B1): $F$ is $k$-strongly pseudo-monotone and $L$-Lipschitz continuous;
(B2): the step size $\lambda\in(0,\frac{1}{L})$;
(B3): $0\le\nu<\frac{1}{t}-1$, where $t=1-\frac{1}{2}\min\Big\{\frac{(1-\lambda L)^2}{(1+\lambda L)^2},\frac{2\lambda k(1-\lambda L)}{(1+\lambda L)^2}\Big\}\in(0,1)$;
(B4): $0<\alpha<\frac{1}{3}$.

Proof. Let $p^*\in VI(C,F)$ be the unique element. Then we have $\langle Fp^*,x-p^*\rangle\ge 0$ for all $x\in C$. Since $F$ is $k$-strongly pseudo-monotone, we have $\langle Fy_n,y_n-p^*\rangle\ge k\|y_n-p^*\|^2$. It implies
$$\langle Fy_n,y_n-u_n\rangle+\langle Fy_n,u_n-p^*\rangle\ge k\|y_n-p^*\|^2.$$
Therefore, we have
$$\langle Fy_n,u_n-p^*\rangle\ge k\|y_n-p^*\|^2+\langle Fy_n,u_n-y_n\rangle. \quad (4.54)$$
On similar lines to (4.52), we get
$$\|u_n-p^*\|^2\le\|w_n-p^*\|^2-d_n^2\|\eta_n\|^2-2d_n\lambda k\|y_n-p^*\|^2. \quad (4.55)$$
From the definition of $d_n$, we have
$$d_n=\frac{\langle w_n-y_n,\eta_n\rangle}{\|\eta_n\|^2}=\frac{\langle w_n-y_n,w_n-y_n-\lambda(Fw_n-Fy_n)\rangle}{\|\eta_n\|^2}\ge\frac{(1-\lambda L)\|w_n-y_n\|^2}{\|\eta_n\|^2}. \quad (4.56)$$
Consider
$$\|\eta_n\|=\|w_n-y_n-\lambda(Fw_n-Fy_n)\|\le\|w_n-y_n\|+\lambda\|Fw_n-Fy_n\|\le(1+\lambda L)\|w_n-y_n\|. \quad (4.57)$$
Using (4.57) in (4.56), we get
$$d_n\ge\frac{1-\lambda L}{(1+\lambda L)^2}. \quad (4.58)$$
Therefore,
$$d_n^2\|\eta_n\|^2\ge\frac{(1-\lambda L)^2}{(1+\lambda L)^2}\|w_n-y_n\|^2. \quad (4.59)$$
Using (4.58) and (4.59) in (4.55), we get
$$\|u_n-p^*\|^2\le\|w_n-p^*\|^2-\frac{(1-\lambda L)^2}{(1+\lambda L)^2}\|w_n-y_n\|^2-\frac{2\lambda k(1-\lambda L)}{(1+\lambda L)^2}\|y_n-p^*\|^2. \quad (4.60)$$
As
$$\frac{(1-\lambda L)^2}{(1+\lambda L)^2}\|w_n-y_n\|^2+\frac{2\lambda k(1-\lambda L)}{(1+\lambda L)^2}\|y_n-p^*\|^2\ge\min\Big\{\frac{(1-\lambda L)^2}{(1+\lambda L)^2},\frac{2\lambda k(1-\lambda L)}{(1+\lambda L)^2}\Big\}\big(\|w_n-y_n\|^2+\|y_n-p^*\|^2\big)\ge\frac{1}{2}\min\Big\{\frac{(1-\lambda L)^2}{(1+\lambda L)^2},\frac{2\lambda k(1-\lambda L)}{(1+\lambda L)^2}\Big\}\|w_n-p^*\|^2,$$
inequality (4.60) becomes
$$\|u_n-p^*\|^2\le t\|w_n-p^*\|^2. \quad (4.61)$$
From the definition of $x_{n+1}$, we have
$$\|x_n-u_n\|^2=\frac{1}{\alpha^2}\|x_{n+1}-x_n\|^2. \quad (4.62)$$
Using (4.61) and (4.62), we obtain
$$\begin{aligned}
\|x_{n+1}-p^*\|^2&\le(1-\alpha)\|x_n-p^*\|^2+\alpha t\|w_n-p^*\|^2-\frac{1-\alpha}{\alpha}\|x_{n+1}-x_n\|^2\\
&\le(1-\alpha)\|x_n-p^*\|^2+\alpha t\big[(1+\nu)\|x_n-p^*\|^2-\nu\|x_{n-1}-p^*\|^2+\nu(1+\nu)\|x_n-x_{n-1}\|^2\big]-\frac{1-\alpha}{\alpha}\|x_{n+1}-x_n\|^2.
\end{aligned}$$
It implies that
$$\|x_{n+1}-p^*\|^2+\frac{1-\alpha}{\alpha}\|x_{n+1}-x_n\|^2\le\big[1-\alpha(1-t(1+\nu))\big]\|x_n-p^*\|^2-\alpha t\nu\|x_{n-1}-p^*\|^2+\nu(1+\nu)\alpha t\|x_n-x_{n-1}\|^2.$$
As $0<\alpha<\frac{1}{3}$, we have
$$\|x_{n+1}-p^*\|^2+\|x_{n+1}-x_n\|^2\le\big[1-\alpha(1-t(1+\nu))\big]\|x_n-p^*\|^2+\nu(1+\nu)\alpha t\|x_n-x_{n-1}\|^2.$$
Since $\frac{\nu(1+\nu)\alpha t}{1-\alpha(1-t(1+\nu))}<1$, we get
$$\|x_{n+1}-p^*\|^2+\|x_{n+1}-x_n\|^2\le\big[1-\alpha(1-t(1+\nu))\big]\big(\|x_n-p^*\|^2+\|x_n-x_{n-1}\|^2\big).$$
We define $b_n=\|x_n-p^*\|^2+\|x_n-x_{n-1}\|^2$ for all $n\ge 1$. Thus, we have $b_{n+1}\le[1-\alpha(1-t(1+\nu))]b_n$. By induction, we get $b_{n+1}\le[1-\alpha(1-t(1+\nu))]^n b_1$. Thus, we have
$$\|x_{n+1}-p^*\|^2\le[1-\alpha(1-t(1+\nu))]^n b_1.$$
This completes the proof.

5 Numerical Illustrations

In this section, we provide some numerical experiments to validate our main results. We compare the performance of our algorithm against the well-established algorithms of Thong et al.
[32], Shehu et al. [22], Liu and Yang [13] and Thong et al. [31]. Algorithm 4.1 (MDISEM) incorporates two inertial parameters, and we compare its performance with well-known algorithms that use no inertial parameter and those with a single inertial parameter (see [13, 22, 31, 32]). For convenience, we denote Algorithm 1 of Thong et al. [32], Algorithm 1 of Shehu et al. [22], Algorithm 3.1 of Liu and Yang [13] and Algorithm 4.1 of Thong et al. [31] as the T Algorithm, S Algorithm, L Algorithm and T1 Algorithm, respectively.

We define $E_n=\|w_n-y_n\|$ to measure the $n$-th iteration error; the convergence $E_n\to 0$ implies that the sequence $\{x_n\}$ converges to $p^*$. We terminate the iterative process for the network equilibrium flow and Nash-Cournot problems if the error term $E_n$ falls below $10^{-6}$. For the image restoration problem, we calculate the relative error $R_n$ and terminate the process if $R_n$ falls below a desired threshold $\epsilon$. All projections onto the set $C$ are calculated using the Matlab built-in function quadprog, which is part of the Optimization Toolbox. The projection onto the half-space is calculated using an explicit formula (see [9] for details). All computations are performed using MATLAB 2018a on an Intel(R) Core(TM) i3-10110U CPU @ 2.10GHz computer with 8.00 GB of RAM.

5.1 Network equilibrium flow

In this example, we deal with one of the most important problems in traffic networks, namely, the network equilibrium flow. Mastroeni and Pappalardo [16] formulated the variational inequality model of this problem. We provide only a short description; for more details, we refer to [16] and [18]. The following notation will help in understanding the model:

1. $x_i$ is the flow on the arc $P_i=(r,s)$, and $x=(x_1,x_2,\dots,x_n)^T$ is the vector of the flows across all arcs;
2.
an upper bound $d_i$ is associated with each arc $P_i$ on its capacity, $d=(d_1,d_2,\dots,d_n)$;
3. for each arc $P_i$, $F_i(x)$ is the cost variation as a function of the flows, and $Fx=(F_1(x),F_2(x),\dots,F_n(x))^T$, with $Fx\ge 0$;
4. $r_j$ is the balance at the node $j$, $j=1,2,\dots,q$, and $r=(r_1,r_2,\dots,r_q)^T$;
5. $T=(a_{ij})\in\mathbb{R}^{q\times n}$ is the node-arc incidence matrix whose elements are
$$a_{ij}=\begin{cases}-1&\text{if }i\text{ is the initial node of the arc }P_j,\\ +1&\text{if }i\text{ is the final node of the arc }P_j,\\ 0&\text{otherwise}.\end{cases}$$

A flow $x$ is said to be a variational equilibrium flow for the capacitated model if and only if it solves the following variational inequality: find $p^*\in C_x$ such that
$$\langle Fp^*,x-p^*\rangle\ge 0 \quad \text{for all } x\in C_x, \quad (5.1)$$
where $C_x=\{x\in\mathbb{R}^n:\ Tx=r,\ 0\le x\le d\}$.

In the numerical experiments, we take $r=(-2,0,0,0,0,2)^T$, $d=(2,1,1,1,1,1,2,2)$ and
$$T=\begin{pmatrix}
-1&-1&0&0&0&0&0&0\\
1&0&-1&-1&0&0&0&0\\
0&1&0&0&-1&-1&0&0\\
0&0&1&0&1&0&-1&0\\
0&0&0&1&0&1&0&-1\\
0&0&0&0&0&0&1&1
\end{pmatrix}.$$
The cost function is defined as $Fx=\operatorname{diag}(D)\,x$, where $D=(5.5,1,2,3,4,50,3.5,1.5)$. Now, it is clear that
$$\langle Fx-Fy,x-y\rangle=\sum_{i=1}^{8}D_i\|x_i-y_i\|^2\ge 0.$$
Thus, $F$ is monotone, and it is easy to see that $F$ is $\|D\|$-Lipschitz continuous. Also, if $x_n\to p^*$, then $Fx_n\to Fp^*$. Thus, it follows from Remark 4.1 (iv) that the cost operator $F$ satisfies Assumption (A3). The solution to this problem is given by
$$p^*=(1.000,\,1.000,\,0.1575,\,0.8425,\,0.885,\,0.115,\,1.0425,\,0.9575)^T. \quad (5.2)$$
We use the following parameters for the computation:
- MDISEM: $\mu=0.6$, $\lambda_1=0.6$, $\beta=0.8$, $\sigma=1.5$, $\alpha_n=0.5$, $\delta_n=1+\frac{1}{n}$, $\chi_n=1+\frac{1}{(n+1)^{1.1}}$, $\zeta_n=\frac{1}{(n+1)^{1.1}}$, $\xi=0.4990$ and $\nu_n=1$.
- L Algorithm: $\mu=0.6$, $\lambda_0=0.6$ and $p_n=\frac{1}{(n+1)^{1.1}}$.
- S Algorithm: $\mu=0.6$, $\lambda_1=0.6$, $\alpha_n=0.2$, $\nu_n=1$ and $\gamma=1.9$.
- T Algorithm: $\mu=0.6$, $\nu_1=0.6$, $\theta=0.2$ and $\alpha_n=\frac{1}{n^{3/2}}$.

(Figure 1: Comparison of algorithms for network equilibrium flow; $E_n$ versus CPU time.)

Based on Figure 1 and Table 2, it is evident that the sequence produced by our algorithm converges to the solution much faster than the previously established algorithms. Moreover, Figure 2 and Table 3 provide a detailed sensitivity analysis for varying $\mu$, $\beta$ and $\sigma$. The performance of our algorithm on the network equilibrium flow problem demonstrates reliable efficiency and consistency across varying values of $\mu$, $\beta$ and $\sigma$. Through extensive testing with different parameters (see Table 3), the algorithm consistently exhibits fast convergence, ensuring computational efficiency. Moreover, the fluctuation in solution convergence remains minimal, further validating the robustness and stability of the proposed iterative scheme. This is also validated graphically in Figure 2 for $\mu=0.6$. These results highlight the algorithm's effectiveness in solving network equilibrium flow problems under diverse conditions.

Table 2: Numerical results for network equilibrium flow.
| Algorithm | CPU time (sec.) | Iterations |
| MDISEM | 0.25 | 58 |
| L Algorithm | 1.32 | 213 |
| S Algorithm | 2.22 | 605 |
| T Algorithm | 1.38 | 205 |

Table 3: Sensitivity analysis of Algorithm 4.1 (MDISEM) in network equilibrium flow for different values of $\sigma$ and $\beta$ (iteration counts).
$\mu=0.2323$: $\sigma=1.8$: 56 ($\beta=1.4$), 88 ($\beta=2.6$), 126 ($\beta=3.1$), 199 ($\beta=4.6$); $\sigma=4.9$: 74 ($\beta=2.5$), 65 ($\beta=3.1$), 87 ($\beta=3.9$), 189 ($\beta=4.1$); $\sigma=5.6$: 49 ($\beta=2.9$), 59 ($\beta=3.3$), 64 ($\beta=3.7$), 81 ($\beta=4.01$).
$\mu=0.3332$: $\sigma=0.49$: 90 ($\beta=0.30$), 160 ($\beta=1.1$), 571 ($\beta=2.6$), 729 ($\beta=2.8$); $\sigma=1.21$: 56 ($\beta=0.8$), 70 ($\beta=1.2$), 141 ($\beta=2.2$), 232 ($\beta=2.7$); $\sigma=2.44$: 60 ($\beta=1.23$), 44 ($\beta=1.4$), 76 ($\beta=2.6$), 187 ($\beta=3$).
$\mu=0.464$: $\sigma=0.5$: 76 ($\beta=0.3$), 217 ($\beta=1.4$), 413 ($\beta=1.9$), 624 ($\beta=2.1$); $\sigma=1.8$: 59 ($\beta=1$), 47 ($\beta=1.23$), 82 ($\beta=1.96$), 119 ($\beta=2.04$); $\sigma=2.9$: 50 ($\beta=1.56$), 55 ($\beta=1.72$), 47 ($\beta=1.89$), 71 ($\beta=2.06$).

(Figure 2: Sensitivity analysis of Algorithm 4.1 (MDISEM) in network equilibrium flow with $\mu=0.6$ and varying values of $\sigma$ and $\beta$; $E_n$ versus number of iterations for $\sigma=0.8,\,1,\,1.2,\,1.4$.)

5.2 Nash-Cournot oligopolistic market equilibrium model

Here, we discuss the Nash-Cournot oligopolistic market equilibrium model, initially formulated as a convex optimization problem by Murphy et al. [17]. A monotone variational inequality was developed from the model by Harker [7]. For further details on this problem, we refer to [4]. Consider $M$ firms supplying homogeneous products non-cooperatively. Let $g_i(x_i)$ represent the cost of the $i$-th firm's supply, with $x_i\ge 0$. The total supply in the market is $R\ge 0$, i.e. $R=\sum_{i=1}^{M}x_i$, and $q(R)$ is the inverse demand curve. The Nash equilibrium solution for the market, denoted as $(p_1^*,p_2^*,\dots,p_M^*)$, satisfies $p_i^*\ge 0$ for each $i=1,2,\dots,M$ and represents an optimal solution to the problem
$$\max_{x_i\ge 0}\ x_i\,q(x_i+R_i^*)-g_i(x_i), \quad (5.3)$$
where $R_i^*=\sum_{j=1,\,j\ne i}^{M}p_j^*$. Problem (5.3) is further equivalent to the VIP of finding $p^*=(p_1^*,p_2^*,\dots,p_M^*)\in\mathbb{R}_+^M$ such that
$$\langle Fp^*,x-p^*\rangle\ge 0 \quad (5.4)$$
for each $x\in\mathbb{R}_+^M$. Here $Fp^*=(F_1(p^*),F_2(p^*),\dots,F_M(p^*))$ and
$$F_i(p^*)=g_i'(p_i^*)-q\Big(\sum_{j=1}^{M}p_j^*\Big)-p_i^*\,q'\Big(\sum_{j=1}^{M}p_j^*\Big).$$
The cost function $g_i$ and the inverse demand curve are structured as
$$g_i(x_i)=e_ix_i+\frac{r_i}{r_i+1}\,O_i^{-\frac{1}{r_i}}\,x_i^{\frac{r_i+1}{r_i}} \quad \text{and} \quad q(R)=5000^{1/1.1}\,R^{-1/1.1},$$
where the parameters $e_i$, $O_i$, $r_i$ are shown in Table 4 below. Since $F$ is monotone and Lipschitz continuous [4], it follows from Remark 4.1 (iv) that the cost operator $F$ satisfies Assumption (A3). Here, we compute for $M=5$ firms, and the solution to the problem is
$$p^*=(36.912,\,41.842,\,43.705,\,42.665,\,39.182).$$
We use the following parameters for the computation:
- MDISEM: $\mu=0.6$, $\lambda_1=0.6$, $\beta=0.8$, $\sigma=1.5$, $\alpha_n=0.5$, $\delta_n=1+\frac{1}{n}$, $\chi_n=1+\frac{1}{(n+1)^{1.1}}$, $\zeta_n=\frac{1}{(n+1)^{1.1}}$, $\xi=0.4990$ and $\nu_n=1$.
- L Algorithm: $\mu=0.6$, $\lambda_0=0.6$ and $p_n=\frac{1}{(n+1)^{1.1}}$.
- S Algorithm: $\mu=0.6$, $\lambda_1=0.6$, $\alpha_n=0.2$, $\nu_n=1$ and $\gamma=1.9$.
- T Algorithm: $\mu=0.6$, $\nu_1=0.6$, $\theta=0.2$ and $\alpha_n=\frac{1}{n^{3/2}}$.
- T1 Algorithm: $\mu=0.6$, $\tau_0=0.6$, $\gamma=0.2$, $\theta=0.8$, $\beta=-0.6$, $\lambda=0.5$ and $\alpha_n=\frac{1}{(n+1)^{1.1}}$.

Based on Figure 3 and Table 5, it is evident that the sequence produced by our algorithm converges to the solution much faster than the previously established algorithms. Moreover, Figure 4 and Table 6 provide a detailed sensitivity analysis for varying $\mu$, $\beta$ and $\sigma$.

From Table 6, the sensitivity analysis demonstrates that the iterative scheme reliably converges across varying values of $\mu$, $\beta$ and $\sigma$, ensuring stability in computing equilibrium quantities. Since the Nash-Cournot model involves firms adjusting their production strategies based on competitors' actions, the fast and stable convergence of our algorithm implies that firms can reach an equilibrium state efficiently, minimizing computational overhead.
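As a sanity check, the cost operator above can be evaluated at the reported equilibrium: at an interior point of $\mathbb{R}_+^M$ the variational inequality reduces to $F(p^*)=0$, so each component should be close to zero. The sketch below uses the closed forms $g_i'(x)=e_i+O_i^{-1/r_i}x^{1/r_i}$ and $q'(R)=-\frac{1}{1.1}\,q(R)/R$ derived from the expressions in the text; the function and variable names are ours, not from the paper.

```python
import numpy as np

# Table 4 data: e_i, O_i, r_i for the five firms
e = np.array([10.0, 8.0, 6.0, 4.0, 2.0])
O = np.array([5.0, 5.0, 5.0, 5.0, 5.0])
r = np.array([1.2, 1.1, 1.0, 0.9, 0.8])

def F(p):
    # F_i(p) = g_i'(p_i) - q(R) - p_i * q'(R), with R = sum(p)
    R = p.sum()
    q = 5000.0 ** (1 / 1.1) * R ** (-1 / 1.1)   # inverse demand q(R)
    dq = -(1 / 1.1) * q / R                      # q'(R)
    dg = e + O ** (-1.0 / r) * p ** (1.0 / r)    # marginal cost g_i'(p_i)
    return dg - q - p * dq

# Reported equilibrium for M = 5 firms
p_star = np.array([36.912, 41.842, 43.705, 42.665, 39.182])
residual = np.abs(F(p_star)).max()
```

With the tabulated parameters, the largest component of $|F(p^*)|$ comes out on the order of $10^{-2}$, consistent with the three-decimal rounding of the reported solution.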
(Figure 3: Comparison of algorithms for the Nash-Cournot problem; $E_n$ versus CPU time.)

(Figure 4: Sensitivity analysis of Algorithm 4.1 (MDISEM) in the Nash-Cournot problem with $\mu=0.6$ and varying values of $\sigma$ and $\beta$; $E_n$ versus number of iterations for $\sigma=0.8,\,1,\,1.2,\,1.4$.)

Table 4: Parameters for the computation.
| Firm $i$ | $e_i$ | $O_i$ | $r_i$ |
| 1 | 10 | 5 | 1.2 |
| 2 | 8 | 5 | 1.1 |
| 3 | 6 | 5 | 1.0 |
| 4 | 4 | 5 | 0.9 |
| 5 | 2 | 5 | 0.8 |

Table 5: Numerical results for the Nash-Cournot problem.
| Algorithm | CPU time (sec.) | Iterations |
| MDISEM | 0.30 | 80 |
| L Algorithm | 0.49 | 132 |
| S Algorithm | 1.76 | 350 |
| T Algorithm | 0.74 | 149 |
| T1 Algorithm | 1.44 | 230 |

5.3 Image restoration problem

In this section, we discuss the image restoration problem. The general model for image recovery can be formulated as
$$b=Ax+v, \quad (5.5)$$
where $x\in\mathbb{R}^{n\times 1}$ is the original image, $A\in\mathbb{R}^{m\times n}$ is the blurring matrix, $v\in\mathbb{R}^{m\times 1}$ is the additive noise and $b\in\mathbb{R}^{m\times 1}$ is the observed image. In this case, we aim to approximate the original image by minimizing the additive noise, which leads to the least-squares problem
$$\min_{x}\ \frac{1}{2}\|Ax-b\|^2. \quad (5.6)$$
Problem (5.6) is further equivalent to solving the variational inequality problem of finding $p^*\in\mathbb{R}^n$ such that
$$\nabla f(p^*)^T(x-p^*)\ge 0, \quad \text{for all } x\in\mathbb{R}^n, \quad (5.7)$$
where $\nabla f(x)=A^T(Ax-b)$. Now, $Fx-Fy=A^TA(x-y)$; thus
$$\langle A^TA(x-y),x-y\rangle=\langle A(x-y),A(x-y)\rangle\ge 0.$$
It implies that $\nabla f$ is monotone. Also, $\|Fx-Fy\|=\|A^TA(x-y)\|\le\|A^TA\|\,\|x-y\|$. Hence $\nabla f$ is $\|A^TA\|$-Lipschitz continuous.
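The monotonicity and Lipschitz facts for $\nabla f(x)=A^T(Ax-b)$ can be checked directly on a small instance. The random matrix below is an assumed stand-in for a blurring matrix; the sketch is illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 12, 8
A = rng.standard_normal((m, n))      # stand-in for a blurring matrix
b = rng.standard_normal(m)           # stand-in for the observed image

def grad_f(x):
    # gradient of (1/2) * ||Ax - b||^2
    return A.T @ (A @ x - b)

L = np.linalg.norm(A.T @ A, 2)       # Lipschitz constant ||A^T A|| (spectral norm)

x, y = rng.standard_normal(n), rng.standard_normal(n)
gap = grad_f(x) - grad_f(y)
mono = gap @ (x - y)                 # equals ||A(x - y)||^2, hence >= 0
lip_ok = np.linalg.norm(gap) <= L * np.linalg.norm(x - y) + 1e-9
```

Because $\langle\nabla f(x)-\nabla f(y),x-y\rangle=\|A(x-y)\|^2$, the monotonicity check holds for every draw, not just this seed.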
Thus, $\nabla f$ satisfies Assumption (A3). So, we apply our algorithm to restore the quality of images corrupted by different blurs. The test images are the built-in Matlab pictures Cameraman (Figure 6a) and Peppers (Figure 7a). The following blur types are used to corrupt the images:
1. Gaussian blur of size $5\times 5$ with standard deviation 1.5 on Cameraman (Figure 6b).
2. Motion blur with blur length 5 and blur angle 60 on Peppers (Figure 7b).

We compute the relative error $R_n=\frac{\|x_{n+1}-x_n\|}{\|x_n\|}$ and terminate the process once it falls below the predefined threshold $\epsilon$. We use the following parameters for the computation:
- MDISEM: $\mu=0.6$, $\lambda_1=0.6$, $\beta=0.76$, $\sigma=1.5$, $\alpha_n=0.5$, $\delta_n=1+\frac{1}{n}$, $\chi_n=1+\frac{1}{(n+1)^{1.1}}$, $\zeta_n=\frac{1}{(n+1)^{1.1}}$, $\xi=0.4990$ and $\nu_n=0.4$.
- L Algorithm: $\mu=0.6$, $\lambda_0=0.6$ and $p_n=\frac{1}{(n+1)^{1.1}}$.
- S Algorithm: $\mu=0.6$, $\lambda_1=0.6$, $\alpha_n=0.2$, $\nu_n=1$ and $\gamma=1.9$.
- T Algorithm: $\mu=0.6$, $\nu_1=0.6$, $\theta=0.2$ and $\alpha_n=\frac{1}{(n+1)^{3/2}}$.
- T1 Algorithm: $\mu=0.6$, $\tau_0=0.2$, $\gamma=1.1$, $\theta=0.3$, $\beta=-0.2$, $\lambda=0.5$ and $\alpha_n=\frac{1}{(n+1)^{1.1}}$.

Table 6: Sensitivity analysis of Algorithm 4.1 (MDISEM) in the Nash-Cournot problem for different values of $\sigma$ and $\beta$ (iteration counts).
$\mu=0.2323$: $\sigma=1.8$: 83 ($\beta=1.4$), 76 ($\beta=2.6$), 76 ($\beta=3.1$), 67 ($\beta=4.2$); $\sigma=4.9$: 55 ($\beta=2.5$), 49 ($\beta=3.1$), 31 ($\beta=3.9$), 65 ($\beta=4.1$); $\sigma=5.6$: 48 ($\beta=2.9$), 44 ($\beta=3.3$), 30 ($\beta=3.7$), 31 ($\beta=4.01$).
$\mu=0.3332$: $\sigma=0.49$: 241 ($\beta=0.30$), 182 ($\beta=1.1$), 96 ($\beta=2.6$), 97 ($\beta=2.8$); $\sigma=1.21$: 92 ($\beta=0.8$), 82 ($\beta=1.2$), 78 ($\beta=2.2$), 77 ($\beta=2.7$); $\sigma=2.44$: 81 ($\beta=1.23$), 83 ($\beta=1.4$), 78 ($\beta=2.6$), 74 ($\beta=3$).
$\mu=0.464$: $\sigma=0.5$: 203 ($\beta=0.3$), 88 ($\beta=1.4$), 90 ($\beta=1.9$), 90 ($\beta=2.1$); $\sigma=1.8$: 146 ($\beta=1$), 116 ($\beta=1.23$), 90 ($\beta=1.96$), 91 ($\beta=2.04$); $\sigma=2.9$: 95 ($\beta=1.56$), 92 ($\beta=1.72$), 90 ($\beta=1.89$), 91 ($\beta=2.06$).

(Figure 5: Comparison of algorithms in the image restoration problem; (a) Gaussian blur with $\epsilon=10^{-3}$, (b) motion blur with $\epsilon=10^{-2}$; $R_n$ versus CPU time.)

From Figure 5 and Table 7, it is evident that our algorithm outperforms the previously established algorithms. The recovered images are shown in Figures 6 and 7.

Table 7: Numerical results for different blur types.
| Algorithm | Gaussian blur: CPU time (sec.) | Iterations | Motion blur: CPU time (sec.) | Iterations |
| MDISEM | 2.12 | 16 | 10.44 | 9 |
| L Algorithm | 2.82 | 23 | 12.17 | 13 |
| S Algorithm | 2.75 | 24 | 10.71 | 11 |
| T Algorithm | 2.82 | 26 | 10.78 | 11 |
| T1 Algorithm | 2.83 | 20 | 15.60 | 13 |

(Figure 6: Image restoration from Gaussian blur: (a) Original, (b) Gaussian blur, (c) MDISEM, (d) L Algorithm, (e) S Algorithm, (f) T Algorithm, (g) T1 Algorithm.)

(Figure 7: Image restoration from motion blur: (a) Original, (b) Motion blur, (c) MDISEM, (d) L Algorithm, (e) S Algorithm, (f) T Algorithm, (g) T1 Algorithm.)

6 Conclusion

In this paper, we introduced a new, efficient iterative algorithm to solve variational inequalities in the setting of a real Hilbert space. The proposed algorithm is motivated by the double inertial method, in which one of the inertial parameters is allowed to equal 1. Moreover, our iterative algorithm uses a generalized step size that is non-monotonic. Finally, we give some real-life applications to network equilibrium flow, oligopolistic market equilibrium problems and image restoration problems. We also conclude that our iterative scheme performs much better in terms of computational time and converges to a solution in fewer iterations. For future work, we aim to extend this work to the setting of reflexive Banach spaces.

Acknowledgements.
The authors are thankful to the learned referees for their valuable suggestions and appreciation of the work. The first author is also grateful to CSIR, New Delhi, India, for providing a senior research fellowship (File 09/0677(13166)/2022-EMR-I).

Declarations

Ethical Approval. Not applicable, as this work involves no human and/or animal studies.

Data Availability. No underlying data was collected or produced in this study.

Competing interests. The authors declare that there is no competing interest in the publication of this paper.

Funding. There is no funding.

Authors' contributions. All authors contributed equally.

References

[1] Y. Censor, A. Gibali and S. Reich, The subgradient extragradient method for solving variational inequalities in Hilbert space. J. Optim. Theory Appl. 148 (2011) 318–335.

[2] P. Cholamjiak, D. V. Thong and Y. J. Cho, A novel inertial projection and contraction method for solving pseudomonotone variational inequality problems. Acta Appl. Math. 169 (2020) 217–245.

[3] Q. L. Dong, D. Jiang and A. Gibali, A modified subgradient extragradient method for solving the variational inequality problem. Numer. Algorithms 79 (2018) 927–940.

[4] F. Facchinei and J. S. Pang, Finite-Dimensional Variational Inequalities and Complementarity Problems. Springer Series in Operations Research, vols. I and II. Springer, New York (2003).

[5] G. Fichera, Sul problema elastostatico di Signorini con ambigue condizioni al contorno. Atti Accad. Naz. Lincei, VIII. Ser., Rend., Cl. Sci. Fis. Mat. Nat. 34 (1963) 138–142.

[6] A. Gibali, D. V. Thong and P. A. Tuan, Two simple projection-type methods for solving variational inequalities. Anal. Math. Phys. 9 (2019) 2203–2225.

[7] P. T. Harker, A variational inequality approach for the determination of oligopolistic market equilibrium. Mathematical Programming 30 (1984) 105–111.

[8] B. S. He, A class of projection-contraction methods for monotone variational inequalities. Appl. Math. Optim. 35 (1997) 69–76.

[9] S. He, C. Yang and P. Duan, Realization of the hybrid method for Mann iterations. Appl. Math. Comput. 217 (2010) 4239–4247.

[10] Y. Huang, L. You, G. Cai and Q. L. Dong, Subgradient extragradient algorithm with double inertial steps for solving variational inequality problems and fixed point problems in Hilbert spaces. Rend. Circ. Mat. Palermo (2) 74 (2025).

[11] G. M. Korpelevich, The extragradient method for finding saddle points and other problems. Ekonom. i Mat. Metody 12 (1976) 747–756.

[12] H. Li, X. Wang and F. Wang, Projection and contraction method with double inertial steps for quasi-monotone variational inequalities. Optimization 74 (2025) 1643–1674.

[13] H. Liu and J. Yang, Weak convergence of iterative methods for solving quasimonotone variational inequalities. Comput. Optim. Appl. 77 (2020) 491–508.

[14] P. E. Mainge, Convergence theorems for inertial KM-type algorithms. J. Comput. Appl. Math. 219 (2008) 223–236.

[15] X. J. Long, J. Yang and Y. J. Cho, Modified subgradient extragradient algorithms with a new line-search rule for variational inequalities. Bull. Malays. Math. Sci. Soc. 140 (2023).

[16] G. Mastroeni and M. Pappalardo, A variational model for equilibrium problems in a traffic network. RAIRO-Oper. Res. 38 (2004) 3–12.

[17] F. H. Murphy, H. D. Sherali and A. L. Soyster, A mathematical programming approach for determining oligopolistic market equilibrium. Mathematical Programming 24 (1982) 92–106.

[18] A. Nagurney and D. Zhang, Projected Dynamical Systems and Variational Inequalities with Applications. Kluwer Academic (1996).

[19] Z. Opial, Weak convergence of the sequence of successive approximations for nonexpansive mappings. Bull. Amer. Math. Soc. 73 (1967) 591–597.

[20] B. T. Polyak, Some methods of speeding up the convergence of iteration methods. USSR Comput. Math. Math. Phys. 4 (1964) 1–17.

[21] Y. Shehu, Q. L. Dong and D. Jiang, Single projection method for pseudo-monotone variational inequality in Hilbert spaces. Optimization 68 (2019) 385–409.

[22] Y. Shehu, O. S. Iyiola and J. C. Yao, New projection methods with inertial steps for variational inequalities. Optimization 71 (2022) 4731–4762.

[23] W. Singh and S. Chandok, A double inertial projection-contraction method for solving variational inequality problems. Numer. Algebra Control Optim. (2025) doi: 10.3934/naco.2026004.

[24] W. Singh and S. Chandok, Mann-type extragradient algorithm for solving variational inequality and fixed point problems. Comp. Appl. Math. 43 (2024) Article ID 259.

[25] G. Stampacchia, Formes bilinéaires coercitives sur les ensembles convexes. C. R. Acad. Sci. Paris 258 (1964) 4413–4416.

[26] D. F. Sun, A class of iterative methods for solving nonlinear projection equations. J. Optim. Theory Appl. 91 (1996) 123–140.

[27] B. Tan, S. Li and S. Y. Cho, Inertial projection and contraction methods for pseudomonotone variational inequalities with non-Lipschitz operators and applications. Appl. Anal. 102 (2023) 1199–1221.

[28] B. Tan and S. Li, Modified inertial projection and contraction algorithms with non-monotonic step-sizes for solving variational inequalities and their applications. Optimization 73 (2022) 793–832.

[29] D. V. Thong, V. T. Dung, P. K. Anh and H. V. Thang, A single projection algorithm with double inertial extrapolation steps for solving pseudomonotone variational inequalities in Hilbert space. J. Comput. Appl. Math. 426 (2023) Article ID 115099.

[30] D. V. Thong, P. K. Anh and D. V. Tien, Relaxed two-step inertial Tseng's extragradient method for nonmonotone variational inequalities. J. Optim. Theory Appl. 7 (2025).

[31] D. V. Thong, V. T. Dung, P. T. Huong Huyen and H. T. Thanh Tam, On approximating solutions to non-monotone variational inequality problems: an approach through modified projection method. Netw. Spat. Econ. 24 (2024) 789–818.

[32] D. V. Thong, P. T. Vuong, P. K. Anh and L. D. Muu, A new projection-type method with nondecreasing adaptive step-sizes for pseudo-monotone variational inequalities. Netw. Spat. Econ. 22 (2022) 803–829.

[33] K. Wang, Y. Wang, O. S. Iyiola and Y. Shehu, Double inertial projection method for variational inequalities with quasi-monotonicity. Optimization 73 (2024) 707–739.

[34] Y. Yao, O. S. Iyiola and Y. Shehu, Subgradient extragradient method with double inertial steps for variational inequalities. J. Sci. Comput. 90 (2022) Article ID 71.

[35] J. Yang and H. Liu, Strong convergence result for solving monotone variational inequalities in Hilbert space. Numer. Algorithms 80 (2019) 741–752.

[36] M. Ye and Y. He, A double projection method for solving variational inequalities without monotonicity. Comput. Optim. Appl. 60 (2015) 141–150.

[37] Z. Zhu, K. Zheng and S. Wang, A new double inertial subgradient extragradient method for solving a non-monotone variational inequality problem in Hilbert space. AIMS Mathematics 9 (2024) 20956–20975.