Article

A General Iterative Procedure for Solving Nonsmooth Constrained Generalized Equations

School of Mathematics, Yunnan Normal University, Kunming 650500, China
* Author to whom correspondence should be addressed.
Current address: Yunnan Key Laboratory of Modern Analytical Mathematics and Applications, Kunming 650500, China.
Mathematics 2023, 11(22), 4577; https://doi.org/10.3390/math11224577
Submission received: 13 September 2023 / Revised: 31 October 2023 / Accepted: 6 November 2023 / Published: 8 November 2023

Abstract:
In this paper, we study an abstract iterative procedure for solving nonsmooth constrained generalized equations. This procedure employs both the property of weak point-based approximation and a search for a feasible inexact projection onto the constraint set. Using the contraction mapping principle, we establish higher-order local convergence of the proposed method under a metric regularity assumption, which ensures that the procedure generates a sequence converging to a solution of the constrained generalized equation. Under the strong metric regularity assumption, we show that every sequence generated by this procedure converges to a solution. Furthermore, a restricted version of the proposed method is considered, for which we establish the desired convergence of every iterative sequence without a strong metric subregularity condition. The obtained results are new even for generalized equations without a constraint set.

1. Introduction and Background

Generalized equations were introduced by Robinson [1] in the following form:
f(x) + F(x) ∋ 0,   (1)
where f : X → Y is a single-valued mapping and F : X ⇉ Y is a set-valued mapping between arbitrary Banach spaces. Model (1), as well as its various specifications, has been widely recognized as a useful way to study optimization-related mathematical problems, such as linear and nonlinear complementarity problems, variational inequalities, first-order necessary conditions for nonlinear programming, and equilibrium problems in both engineering and economics; see, e.g., [2,3,4,5,6] and the references therein. Specifically, (1) is called a variational system when F stands for the set of limiting subgradients, and a variational inequality when F is the normal cone mapping associated with a closed convex set. For more details, please refer to [7,8] and the bibliographies therein.
To find an approximate solution to the generalized equation, different versions of Newton's method have been studied extensively under the assumption of strong metric regularity (cf. [9,10,11,12,13,14,15,16,17,18,19,20,21]). Newton's method for the unconstrained generalized Equation (1) dates back to Josephy [22] and is stated as follows: given the k-th iterate x_k ∈ X, the (k+1)-th iterate x_{k+1} is computed from the inclusion
f(x_k) + f′(x_k)(x_{k+1} − x_k) + F(x_{k+1}) ∋ 0,  for all k ∈ ℕ,   (2)
where f′ represents the derivative of f. It simplifies to the regular version of Newton's method for solving the nonlinear equation f(x) = 0 when F is the zero mapping. When the single-valued mapping f is smooth, convergence rates of Newton's method (2) were established under the assumption that the partial linearization of the set-valued mapping x ↦ f(x̄) + f′(x̄)(x − x̄) + F(x) is (strongly) metrically regular around x̄ for 0, where x̄ is a solution of (1). It is well understood that there exists a sequence generated by (2) which converges linearly if f′ is continuous on a neighborhood of x̄, and quadratically provided that f′ is Lipschitz continuous on a neighborhood of x̄. When the function f in (2) is nonsmooth, the usual technique of partially linearizing f is no longer available. In this situation, there are different ways of constructing abstract iterative procedures, mainly based on the idea of a point-based approximation (PBA). The concept of PBA was first developed by Robinson [23] and has been studied by many researchers. Geoffroy and Piétrus proposed in [24] a generalized concept of point-based approximation to generate an iterative procedure for generalized equations. The authors obtained convergence results for the nonsmooth Newton-type procedure in both local and semilocal versions (see [12,13,16,24,25,26] and the references therein).
Inexact Newton methods for solving the smooth equation f(x) = 0 in finite dimensions (i.e., (1) with F ≡ 0 and X = Y = ℝⁿ) were introduced by Dembo, Eisenstat, and Steihaug [27]. Specifically, for a given sequence {η_k} ⊂ (0, +∞) and a starting point x_0, the (k+1)-th iterate is selected to satisfy the condition
f(x_k) + f′(x_k)(x_{k+1} − x_k) ∈ B_{η_k‖f(x_k)‖}(0),   (3)
where B_{η_k‖f(x_k)‖}(0) stands for the closed ball of radius η_k‖f(x_k)‖ centered at 0. For solving generalized Equation (1) in the Banach space setting, Dontchev and Rockafellar [15] proposed the following inexact Newton method:
(f(x_k) + f′(x_k)(x_{k+1} − x_k) + F(x_{k+1})) ∩ R_k(x_k, x_{k+1}) ≠ ∅,  for all k ∈ ℕ,   (4)
where R_k : X × X ⇉ Y is a sequence of set-valued mappings with closed graphs which represent the inexactness of the general model (1) and need not be calculated in any specified manner. Under the metric regularity assumption, Dontchev and Rockafellar [15] showed that the aforementioned method is executable and generates a sequence which converges either linearly, superlinearly, or quadratically.
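To make the residual condition of the inexact Newton method above concrete, it can be sketched for a scalar equation. This is only an illustration, not the implementation of [27]; the forcing sequence η_k and the damped step are illustrative choices:

```python
# Inexact Newton sketch for a scalar equation f(x) = 0 (illustrative choices).
def inexact_newton(f, df, x, eta=0.5, tol=1e-12, max_iter=50):
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) <= tol:
            break
        # Any step s with |f(x) + f'(x) * s| <= eta * |f(x)| is acceptable.
        # Damping the exact Newton step leaves a linear-model residual of
        # (eta / 2) * |f(x)|, which satisfies the required ball inclusion.
        s = -(1.0 - eta / 2.0) * fx / df(x)
        x += s
        eta /= 2.0  # driving the forcing terms to 0 gives superlinear convergence
    return x

root = inexact_newton(lambda t: t * t - 2.0, lambda t: 2.0 * t, x=2.0)
```

Here the step is accepted as soon as the linearized residual falls below the forcing tolerance, which is the essential difference from exact Newton steps.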
In this paper, we focus on the study of a general iterative procedure for solving the nonsmooth constrained generalized equation
x ∈ C,  f(x) + F(x) ∋ 0,   (5)
where Ω ⊆ ℝⁿ is an open set, C ⊆ Ω is a closed convex set, f : Ω → ℝᵐ is a single-valued mapping which is not necessarily smooth, and F : Ω ⇉ ℝᵐ is a set-valued mapping with closed graph. Due to the presence of the constraint set C, the constrained generalized Equation (5) can be viewed as an abstract model which covers several constrained optimization problems, such as the Constrained Variational Inequality Problem and, in particular, the Split Variational Inequality Problem. For more details about these problems, please refer to [28,29] and the references therein.
For solving the constrained generalized Equation (5) when f is smooth, Oliveira et al. [30] proposed a Newton's method with feasible inexact projection (the Newton-InexP method). Incorporating a feasible inexact projection rectifies the shortcoming that, in the standard Newton's method (2), the next iterate x_{k+1} may be infeasible for the constraint set C. Under the condition of metric regularity and assuming that the derivative f′ is Lipschitz continuous, the authors of [30] established linear and quadratic convergence of the Newton-InexP method.
When the single-valued mapping f in the constrained generalized Equation (5) is not smooth, the partial linearization technique of the Newton-InexP approach in [30] is no longer applicable, and hence a new approach that does not involve the derivative of f is needed. To this end, in this paper, we introduce a weak version of point-based approximation. For the class of single-valued functions which admit weak point-based approximations, we propose a general inexact iterative procedure for solving (5) which incorporates a feasible inexact projection onto the constraint set. We aim to establish higher-order convergence results for the proposed method, assuming metric regularity of the weak point-based approximation of the mapping which generates the generalized equation. Taking into account the fact that, in general, the metric regularity property cannot guarantee that every sequence generated by this method converges to a solution, we also consider a restricted version of the aforementioned procedure and establish convergence results for every iterative sequence accordingly.
The rest of this paper is structured in the following way. In Section 2, we provide the notations and a few technical results that we will use in the rest of the paper. In Section 3, we define the general iterative procedure for the nonsmooth generalized Equation (5) and conduct a local convergence analysis. Precise conditions are provided to ensure higher-order convergence of this method, as well as convergence of an arbitrary iterative sequence of a restricted version of the aforementioned procedure. In Section 5, we provide a numerical example to illustrate the assumptions and the local convergence result of the proposed approach.

2. Notation and Auxiliary Results

In this section, we collect a few notations, definitions, and results that are utilized throughout the paper. Let ℕ = {0, 1, 2, …}. The symbol B_{ℝⁿ} stands for the closed unit ball of the space ℝⁿ, while B_r(x) indicates the closed ball of radius r > 0 centered at x ∈ ℝⁿ. Given subsets C, D ⊆ ℝⁿ, the distance from x ∈ ℝⁿ to C and the excess from C to D are defined by
d(x, C) := inf{‖x − c‖ : c ∈ C}  and  e(C, D) := sup{d(c, D) : c ∈ C},   (6)
respectively, with the conventions d(x, ∅) := +∞, e(∅, D) := 0 if D ≠ ∅, and e(∅, D) := +∞ if D = ∅. Let F : ℝⁿ ⇉ ℝᵐ be a set-valued mapping; its graph is defined as
gph(F) := {(x, y) ∈ ℝⁿ × ℝᵐ : y ∈ F(x)}.
F is said to have a closed graph if the set gph(F) is closed in the product space ℝⁿ × ℝᵐ. We use F⁻¹ : ℝᵐ ⇉ ℝⁿ to represent the inverse mapping of F, with F⁻¹(y) := {x ∈ ℝⁿ : y ∈ F(x)} for all y ∈ ℝᵐ. A single-valued mapping g : X → Y is said to be Hölder calm at x̄ ∈ X of order p ≥ 0 if there exist constants a, L > 0 such that
‖g(x) − g(x̄)‖ ≤ L ‖x − x̄‖^p  for all x ∈ B_a(x̄).
We say that g is Lipschitzian on Ω ⊆ X with modulus L if
‖g(x) − g(x′)‖ ≤ L ‖x − x′‖  for all x, x′ ∈ Ω.
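As a quick illustration of the distance and the excess, both can be computed directly for finite sets; the following sketch uses finite subsets of ℝ standing in for general sets:

```python
# d(x, C) and e(C, D) for finite subsets of the real line (illustrative).
def dist(x, C):
    return min(abs(x - c) for c in C) if C else float("inf")

def excess(C, D):
    # e(C, D) = sup over c in C of d(c, D); by convention e(empty, D) = 0 if D is nonempty
    if not C:
        return 0.0 if D else float("inf")
    return max(dist(c, D) for c in C)

C, D = [0.0, 1.0, 2.0], [0.0, 1.0]
print(dist(3.0, C))   # 1.0
print(excess(C, D))   # 1.0  (the point 2 is at distance 1 from D)
print(excess(D, C))   # 0.0  (the excess is not symmetric)
```

Note that e(C, D) and e(D, C) generally differ, which is why the excess, unlike the Hausdorff distance, is a one-sided quantity.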
We first recall the concept of ( n , α ) -point-based approximation (also called ( n , α ) -PBA), which was introduced in [24].
Definition 1.
Let Ω be an open subset of a metric space (X, d), let Y be a normed linear space, and let f : Ω → Y be a single-valued mapping. Fix n ∈ ℕ and α > 0. We say that the mapping A : Ω × Ω → Y is an (n, α)-PBA on Ω for f with modulus κ > 0 if both of the following assertions hold:
(a) ‖f(v) − A(u, v)‖ ≤ (κ/π_{n,α}) d(u, v)^{n+α} for all u, v ∈ Ω, where π_{n,α} := ∏_{i=1}^{n} (α + i);
(b) the mapping A(u, ·) − A(v, ·) is Lipschitzian on Ω with modulus γ(κ) d(u, v)^α, where γ(κ) is a positive function of κ.
It is easy to see that when both n and α take the value one in the above assertions, the (1, 1)-PBA reduces to the PBA of f on Ω in the sense of Robinson [23]. In the nonsmooth framework, the normal maps are examples of functions that have a (1, 1)-PBA. For the smooth case, the authors showed in [24] that, if a function f is twice Fréchet differentiable on Ω and its second derivative f″ is Hölder continuous with exponent α ∈ [0, 1] and constant κ > 0, then f has a (2, α)-PBA given by A(u, v) = f(u) + f′(u)(v − u) + (1/2) f″(u)(v − u)². For more details, please refer to the appendix of [24].
Next, we define the concept of ( n , α ) -weak-point-based approximation for single-valued mappings at given points, which is essential in the generalized iterative procedure studied in Section 3.
Definition 2.
Let Ω be an open subset of a metric space (X, d), let Y be a normed linear space, and let f : Ω → Y be a single-valued mapping. Fix n ∈ ℕ and α ≥ 0. We say that the mapping A : Ω × Ω → Y is an (n, α)-weak-point-based approximation ((n, α)-WPBA) at x̄ ∈ Ω for f with modulus κ > 0 and constant a > 0 if both of the following assertions hold:
(a) ‖f(x̄) − A(x, x̄)‖ ≤ (κ/π_{n,α}) d(x, x̄)^{n+α} for all x ∈ B_a(x̄), where π_{n,α} := ∏_{i=1}^{n} (α + i);
(b) for any x ∈ B_a(x̄), the mapping A(x̄, ·) − A(x, ·) is Lipschitzian on Ω with modulus γ(κ) d(x, x̄)^α, where γ(κ) is a positive function of κ.
It is clear that the notion of (n, α)-WPBA is weaker than the notion of (n, α)-PBA. In the smooth setting, it was proved in Lemma 3.1 of [31] that any mapping f which is continuously differentiable around x̄ and whose derivative f′ is Hölder calm at x̄ of order α ≥ 0 (a condition weaker than Lipschitz continuity) admits a (1, α)-WPBA given by A(x, u) = f(x) + f′(x)(u − x). Let us observe that assertion (a) implies in particular that A(x̄, x̄) = f(x̄).
In the following, we present the definition of (strong) metric regularity, which plays an important role in our later analysis.
Definition 3.
Let κ, a, b > 0, let F : ℝⁿ ⇉ ℝᵐ be a set-valued mapping, and let (x̄, ȳ) ∈ gph(F). F is said to be metrically regular at x̄ for ȳ with constants κ, a, and b if
d(x, F⁻¹(y)) ≤ κ d(y, F(x))  for all x ∈ B_a(x̄) and y ∈ B_b(ȳ).   (7)
F is said to be strongly metrically regular at x̄ for ȳ with constants κ, a, and b if (7) holds and F⁻¹(y) ∩ B_a(x̄) is a singleton for each y ∈ B_b(ȳ).
It is widely understood that F is strongly metrically regular at x̄ for ȳ with constants κ, a, and b if and only if the mapping B_b(ȳ) ∋ y ↦ F⁻¹(y) ∩ B_a(x̄) is single-valued and Lipschitz continuous on B_b(ȳ); for more details, see [7]. If f : ℝⁿ → ℝᵐ is smooth around x̄ ∈ ℝⁿ, then f is strongly metrically regular at x̄ for f(x̄) if and only if the derivative f′(x̄) is invertible.
In [30], the authors introduced the following concept of feasible inexact projection, which is the basic structure of the Newton-InexP method studied therein.
Definition 4.
Let C ⊆ ℝⁿ be a closed convex set, x ∈ C, and θ ≥ 0. The feasible inexact projection mapping onto C relative to x with error tolerance θ, denoted by P_C(·, x, θ) : ℝⁿ ⇉ C, is defined as follows:
P_C(u, x, θ) := {w ∈ C : ⟨u − w, z − w⟩ ≤ θ ‖u − x‖² for all z ∈ C},  for all u ∈ ℝⁿ.
Any element w ∈ P_C(u, x, θ) is said to be a feasible inexact projection of u onto C relative to x with error tolerance θ.
Since C ⊆ ℝⁿ is a closed convex set, Proposition 2.1.3 of [32] implies that, for each u ∈ ℝⁿ and x ∈ C, we have P_C(u) ∈ P_C(u, x, θ) and {P_C(u)} = P_C(u, x, 0), where P_C denotes the exact projection mapping (see Remark 2 of [30]). In particular, a point w ∈ P_C(u, x, θ) is an approximate feasible solution of the projection subproblem min_{z∈C} ‖z − u‖²/2 which satisfies ⟨u − w, z − w⟩ ≤ θ ‖u − x‖² for all z ∈ C.
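The defining inequality of a feasible inexact projection can be verified directly when C is an interval, since the left-hand side is affine in z. The following sketch (the interval, the points, and the tolerances are illustrative choices) checks membership in P_C(u, x, θ):

```python
# Membership test for the feasible inexact projection onto C = [lo, hi] (a sketch).
def is_feasible_inexact_projection(w, u, x, theta, lo=-1.0, hi=1.0):
    # w must lie in C and satisfy <u - w, z - w> <= theta * |u - x|^2 for all z in C.
    # In one dimension the left-hand side is affine in z, so checking the
    # endpoints of the interval suffices.
    if not lo <= w <= hi:
        return False
    bound = theta * (u - x) ** 2
    return all((u - w) * (z - w) <= bound for z in (lo, hi))

u, x = 2.0, 0.5                                                # u to project, x in C
assert is_feasible_inexact_projection(1.0, u, x, theta=0.0)    # the exact projection
assert is_feasible_inexact_projection(0.9, u, x, theta=0.05)   # inexact, still acceptable
assert not is_feasible_inexact_projection(0.9, u, x, theta=0.0)
```

With θ = 0 only the exact projection qualifies, while a positive tolerance θ enlarges the set of acceptable points around it.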
The next result, Lemma 1 of [30], is useful in the remainder of this paper.
Lemma 1.
Let y, ỹ ∈ ℝⁿ, x, x̃ ∈ C, and θ ≥ 0. Then, for any w ∈ P_C(y, x, θ), we have
‖P_C(ỹ, x̃, 0) − w‖ ≤ ‖y − ỹ‖ + √(2θ) ‖y − x‖.
We end this section by recalling the well-known contraction mapping principle for set-valued mappings (see Theorem 5E.2 of [7]).
Lemma 2.
Let Φ : X ⇉ X be a set-valued mapping defined on a complete metric space X, let x̄ ∈ X, and let r > 0 be such that the set gph(Φ) ∩ (B_r(x̄) × B_r(x̄)) is closed in X × X. Given α ∈ (0, 1), impose the following assumptions:
1. d(x̄, Φ(x̄)) < r(1 − α);
2. e(Φ(u) ∩ B_r(x̄), Φ(v)) ≤ α d(u, v)  for all u, v ∈ B_r(x̄).
Then, Φ has a fixed point in B_r(x̄); i.e., there exists x ∈ B_r(x̄) such that x ∈ Φ(x). In addition, if Φ is single-valued, then Φ has a unique fixed point in B_r(x̄).

3. Convergence Analysis

In this section, employing the notions of (n, α)-WPBA and the feasible inexact projection defined in Section 2, we propose a general iterative procedure for solving the nonsmooth constrained generalized Equation (5).
Let x̄ ∈ C be such that f(x̄) + F(x̄) ∋ 0, and let A : Ω × Ω → ℝᵐ be an (n, α)-WPBA at x̄ for f. To formulate the iterative procedure, we choose x_0 ∈ C and {θ_k} ⊂ [0, +∞) as the input data, and R_k : ℝⁿ ⇉ ℝᵐ for k = 0, 1, 2, … as the inexactness (Algorithm 1).
Algorithm 1 General inexact projection method
Step 0. Let x_0 ∈ C and {θ_j} ⊂ [0, +∞) be given, and set k = 0.
Step 1. If f(x_k) + F(x_k) ∋ 0, then stop; otherwise, compute u_k ∈ ℝⁿ such that
(A(x_k, u_k) + F(u_k)) ∩ R_k(x_k) ≠ ∅.   (8)
Step 2. If u_k ∈ C, set x_{k+1} = u_k; otherwise, take any x_{k+1} satisfying
x_{k+1} ∈ P_C(u_k, x_k, θ_k).   (9)
Step 3. Set k ← k + 1, and go to Step 1.
Note that, in comparison with (4), the mapping R_k in Step 1, which represents the inexactness, now depends on the current iterate x_k only. Moreover, Step 1 utilizes the weak point-based approximation of f in place of the linearization technique for the smooth case applied in [30]. In Step 2, the symbol P_C(u_k, x_k, θ_k) represents the set of feasible inexact projections of u_k onto C relative to x_k with error tolerance θ_k.
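As a minimal illustration of the loop structure of Algorithm 1, consider the simplifying assumptions F ≡ {0}, A(x, u) = f(x) + f′(x)(u − x), R_k ≡ {0}, and exact projection (θ_k = 0); the instance f(x) = x² − 2 with C = [0, 1.5] is an illustrative choice, not taken from the paper's general setting:

```python
# Algorithm 1 sketch on the instance f(x) = x^2 - 2, F = {0}, C = [0, 1.5].
def subproblem(f, df, x):
    # Step 1 with A(x, u) = f(x) + f'(x)(u - x) and R_k = {0}:
    # solve A(x, u) = 0 for u (the classical Newton subproblem).
    return x - f(x) / df(x)

f, df = lambda t: t * t - 2.0, lambda t: 2.0 * t
lo, hi = 0.0, 1.5
x = 1.0                                               # x_0 in C
for _ in range(30):
    u = subproblem(f, df, x)                          # Step 1
    x = u if lo <= u <= hi else min(max(u, lo), hi)   # Step 2, exact projection
# x now approximates the solution sqrt(2), which lies in C
```

In this smooth special case the sketch reduces to the Newton-InexP scheme of [30] with zero tolerance; the general algorithm replaces the linearization by a WPBA and allows inexact projections.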
To conduct the convergence analysis of the proposed method, for each fixed x ∈ Ω, we define the auxiliary mapping g_x : Ω → ℝᵐ by
g_x(u) := A(x̄, u) − A(x, u),  u ∈ Ω.   (10)
For convenience, we also define L_x : Ω ⇉ ℝᵐ as the approximation of the set-valued mapping f + F:
L_x(u) := A(x, u) + F(u),  u ∈ Ω.   (11)
Our analysis is carried out under the assumptions that an approximation of the set-valued mapping f + F is metrically regular (or strongly metrically regular) and that f admits a weak point-based approximation, a condition weaker than that of a point-based approximation.
Before proving our main result, we first present some technical results that will be helpful in the later analysis. The following lemma can be shown by some simple calculations.
Lemma 3.
Let α ≥ 0, κ, a > 0, n ∈ ℕ, and let f : Ω → ℝᵐ be a single-valued mapping. Assume that A : Ω × Ω → ℝᵐ is an (n, α)-WPBA at x̄ for f with modulus κ and constant a. Then,
‖g_x(u) − g_x(u′)‖ ≤ γ(κ) ‖x − x̄‖^α ‖u − u′‖  for all x ∈ B_a(x̄), u, u′ ∈ Ω,   (12)
and
‖g_x(u)‖ ≤ (κ/π_{n,α}) ‖x − x̄‖^{n+α} + γ(κ) ‖x − x̄‖^α ‖u − x̄‖  for all x ∈ B_a(x̄), u ∈ Ω.   (13)
Proof. 
Since A is an (n, α)-WPBA at x̄ for f with modulus κ and constant a, we have
‖g_x(u) − g_x(u′)‖ = ‖A(x̄, u) − A(x, u) − (A(x̄, u′) − A(x, u′))‖ ≤ γ(κ) ‖x − x̄‖^α ‖u − u′‖  for all x ∈ B_a(x̄), u, u′ ∈ Ω.
Noting that f(x̄) = A(x̄, x̄), for any fixed x ∈ B_a(x̄) and u ∈ Ω one has
‖g_x(u)‖ = ‖A(x̄, u) − A(x, u)‖ ≤ ‖f(x̄) − A(x, x̄)‖ + ‖A(x̄, u) − A(x, u) − (A(x̄, x̄) − A(x, x̄))‖ ≤ (κ/π_{n,α}) ‖x − x̄‖^{n+α} + γ(κ) ‖x − x̄‖^α ‖u − x̄‖,
which establishes (12) and (13).    □
Pick x ∈ Ω and v ∈ ℝᵐ and let them be fixed. For convenience, we define the following auxiliary set-valued mapping:
Φ_{x,v}(u) := L_{x̄}⁻¹(g_x(u) + v),  u ∈ Ω,   (14)
where L_{x̄}⁻¹(y) := {u ∈ Ω : y ∈ L_{x̄}(u)} denotes the inverse of L_{x̄} defined in (11). It is easy to observe that u ∈ Φ_{x,v}(u) if and only if x, u, and v satisfy
A(x, u) + F(u) ∋ v.
Lemma 4.
Assume that the assumptions in Lemma 3 hold. Let η, τ, b, r_* ∈ (0, +∞) be such that
r_* ≤ a,  τγ(κ) r_*^α < 1,  ((κ + η π_{n,α}) τ r_*^{n+α}) / (π_{n,α}(1 − τγ(κ) r_*^α)) < r_*,  and  ((κ + η π_{n,α}) / π_{n,α}) r_*^{n+α} + γ(κ) r_*^{1+α} ≤ b.   (15)
If L_{x̄} is metrically regular at x̄ for 0 with constants τ, a, and b, then for any x ∈ B_{r_*}(x̄) and v ∈ η r_*^{n+α} B_{ℝᵐ}, there exists a fixed point ū ∈ Φ_{x,v}(ū) such that
‖ū − x̄‖ ≤ (κτ ‖x − x̄‖^{n+α} + π_{n,α} τ ‖v‖) / (π_{n,α}(1 − τγ(κ) ‖x − x̄‖^α)).   (16)
In particular, ū ∈ B_{r_*}(x̄). In addition, if the mapping L_{x̄} is strongly metrically regular at x̄ for 0, then Φ_{x,v} has exactly one fixed point in B_{r_*}(x̄), and this fixed point satisfies (16).
Proof. 
Pick any x ∈ B_{r_*}(x̄) and v ∈ η r_*^{n+α} B_{ℝᵐ}. Let
ρ := (κτ ‖x − x̄‖^{n+α} + π_{n,α} τ ‖v‖) / (π_{n,α}(1 − τγ(κ) ‖x − x̄‖^α)).
It is easy to obtain from the choice of the constants that
τγ(κ) ‖x − x̄‖^α ≤ τγ(κ) r_*^α < 1   (17)
and
ρ ≤ ((κ + η π_{n,α}) τ r_*^{n+α}) / (π_{n,α}(1 − τγ(κ) r_*^α)) < r_* ≤ a.
Recall that L_{x̄} is metrically regular at x̄ for 0 with constants τ, a, and b; that is,
d(x, L_{x̄}⁻¹(y)) ≤ τ d(y, L_{x̄}(x))  for all (x, y) ∈ B_a(x̄) × B_b(0),   (18)
which indicates that L_{x̄}⁻¹(y) ≠ ∅ for any y ∈ B_b(0). By (13) and (15), we have
‖g_x(x̄)‖ ≤ (κ/π_{n,α}) ‖x − x̄‖^{n+α}   (19)
and
‖g_x(u) + v‖ ≤ (κ/π_{n,α}) r_*^{n+α} + γ(κ) r_*^{1+α} + ‖v‖ ≤ b  for all u ∈ B_{r_*}(x̄).   (20)
Hence, Φ_{x,v} is well-defined on B_{r_*}(x̄) ⊆ B_a(x̄). Since 0 ∈ L_{x̄}(x̄), it follows from (17)–(19) that
d(x̄, Φ_{x,v}(x̄)) = d(x̄, L_{x̄}⁻¹(g_x(x̄) + v)) ≤ τ d(g_x(x̄) + v, L_{x̄}(x̄)) ≤ τ ‖g_x(x̄) + v‖ ≤ (κτ/π_{n,α}) ‖x − x̄‖^{n+α} + τ ‖v‖ = ρ (1 − τγ(κ) ‖x − x̄‖^α).   (21)
Furthermore, it follows from (12) and (18) that
e(Φ_{x,v}(u) ∩ B_ρ(x̄), Φ_{x,v}(u′)) = sup{d(y, Φ_{x,v}(u′)) : y ∈ Φ_{x,v}(u) ∩ B_ρ(x̄)} ≤ τ sup{d(g_x(u′) + v, L_{x̄}(y)) : y ∈ B_ρ(x̄), g_x(u) + v ∈ L_{x̄}(y)} ≤ τ ‖g_x(u) − g_x(u′)‖ ≤ τγ(κ) ‖x − x̄‖^α ‖u − u′‖   (22)
holds for all u, u′ ∈ B_ρ(x̄). Noting that τγ(κ) ‖x − x̄‖^α < 1 and B_ρ(x̄) ⊆ B_a(x̄), an application of Lemma 2 with Φ = Φ_{x,v}, r = ρ, and α = τγ(κ) ‖x − x̄‖^α ensures the existence of ū ∈ Φ_{x,v}(ū) ∩ B_ρ(x̄); i.e., inequality (16) holds for this ū. Due to the fact that ‖ū − x̄‖ ≤ ρ < r_*, we arrive at ū ∈ B_{r_*}(x̄).
Next, we assume that the mapping L_{x̄} is strongly metrically regular at x̄ for 0. Then the mapping B_b(0) ∋ y ↦ L_{x̄}⁻¹(y) ∩ B_a(x̄) is single-valued, and thus the mapping B_{r_*}(x̄) ∋ u ↦ Φ_{x,v}|_{B_a(x̄)}(u) := Φ_{x,v}(u) ∩ B_a(x̄) is single-valued (thanks to (20)). Similar to the proofs of (21) and (22), we have
d(x̄, Φ_{x,v}|_{B_a(x̄)}(x̄)) ≤ ρ (1 − τγ(κ) ‖x − x̄‖^α) ≤ r_* (1 − τγ(κ) ‖x − x̄‖^α)
and
d(Φ_{x,v}|_{B_a(x̄)}(u), Φ_{x,v}|_{B_a(x̄)}(u′)) ≤ τγ(κ) ‖x − x̄‖^α ‖u − u′‖  for all u, u′ ∈ B_{r_*}(x̄).
It follows from Lemma 2 that Φ_{x,v}|_{B_a(x̄)} has a unique fixed point in B_{r_*}(x̄). Besides, since B_{r_*}(x̄) ⊆ B_a(x̄), Φ_{x,v} itself has a unique fixed point in B_{r_*}(x̄). By the first part of the proof, Φ_{x,v} has a fixed point ū ∈ B_ρ(x̄) ⊆ B_{r_*}(x̄) satisfying (16); hence, ū is the unique fixed point of Φ_{x,v} in B_{r_*}(x̄), which completes the proof.    □
The following lemma shows that generalized Equation (1) has a unique solution in B_{r_*}(x̄) under the strong metric regularity assumption.
Lemma 5.
Let the assumptions in Lemmas 3 and 4 hold. If the mapping L_{x̄} is strongly metrically regular at x̄ for 0 with constants τ, a, and b, then x̄ is the unique solution of (1) in B_{r_*}(x̄).
Proof. 
Let x̂ be a solution of (1) in B_{r_*}(x̄). Since A is an (n, α)-WPBA at x̄ for f, we have
‖f(x̂) − A(x̄, x̂)‖ ≤ (κ/π_{n,α}) ‖x̂ − x̄‖^{n+α} < (κ/π_{n,α}) r_*^{n+α} ≤ b.   (23)
Recall that L_{x̄} is strongly metrically regular at x̄ for 0 with constants τ, a, and b; hence the mapping y ↦ L_{x̄}⁻¹(y) ∩ B_a(x̄) is single-valued on B_b(0) and (18) holds. Furthermore, we know that
0 ∈ f(x̂) + F(x̂) = f(x̂) − A(x̄, x̂) + L_{x̄}(x̂).
Hence, we conclude that
x̂ ∈ L_{x̄}⁻¹(A(x̄, x̂) − f(x̂)) ∩ B_a(x̄).
Note that 0 ∈ L_{x̄}(x̄). By (18) and (23), one has
‖x̄ − x̂‖ = d(x̄, L_{x̄}⁻¹(A(x̄, x̂) − f(x̂))) ≤ τ d(A(x̄, x̂) − f(x̂), L_{x̄}(x̄)) ≤ τ ‖A(x̄, x̂) − f(x̂)‖ ≤ (τκ/π_{n,α}) ‖x̂ − x̄‖^{n+α}.
Since (τκ/π_{n,α}) ‖x̂ − x̄‖^{n+α−1} ≤ (τκ/π_{n,α}) r_*^{n+α−1} < 1 (thanks to the third inequality in (15)), it follows that ‖x̂ − x̄‖ = 0. Hence, x̄ is the unique solution of (1) in B_{r_*}(x̄).    □
The next lemma plays an important role in the convergence analysis; its proof follows the lines of Lemma 4 of [30].
Lemma 6.
Let the assumptions in Lemma 4 hold and θ ≥ 0. If x ∈ B_{r_*}(x̄) \ {x̄} and u ∈ Φ_{x,v}(u) satisfies (16), then, for any w ∈ P_C(u, x, θ), we have
‖w − x̄‖ ≤ (1 + √(2θ)) (κτ ‖x − x̄‖^{n+α} + π_{n,α} τ ‖v‖) / (π_{n,α}(1 − τγ(κ) ‖x − x̄‖^α)) + √(2θ) ‖x − x̄‖.   (24)
Proof. 
Pick any w ∈ P_C(u, x, θ). Applying Lemma 1 with y = u and ỹ = x̃ = x̄, we have
‖P_C(x̄, x̄, 0) − w‖ ≤ ‖u − x̄‖ + √(2θ) ‖x − u‖ ≤ ‖u − x̄‖ + √(2θ)(‖x − x̄‖ + ‖u − x̄‖).
Note that x ∈ B_{r_*}(x̄) and P_C(x̄, x̄, 0) = x̄. It follows from (16) that
‖w − x̄‖ ≤ (1 + √(2θ)) (κτ ‖x − x̄‖^{n+α} + π_{n,α} τ ‖v‖) / (π_{n,α}(1 − τγ(κ) ‖x − x̄‖^α)) + √(2θ) ‖x − x̄‖,
which establishes (24).    □
Now, we are ready to present our main result. We derive the exact relationship between the rate of convergence of the proposed method and the constant of the weak point-based approximation.
Theorem 1.
Consider the nonsmooth constrained generalized Equation (5). Let r := sup{t ∈ ℝ : B_t(x̄) ⊆ Ω}, {θ_k} ⊂ [0, 1/2), θ̃ := sup_k θ_k < 1/2, and let α ≥ 0 and κ, τ, η, a, b, r_* > 0 satisfy (15) and
r_* < r,  (1 + √(2θ̃)) ((κ + η π_{n,α}) τ r_*^{n+α−1}) / (π_{n,α}(1 − τγ(κ) r_*^α)) + √(2θ̃) < 1.   (25)
Assume that x̄ ∈ C with f(x̄) + F(x̄) ∋ 0, the set-valued mapping L_{x̄} is metrically regular at x̄ for 0 with constants τ, a, and b, and the function A : Ω × Ω → ℝᵐ is an (n, α)-WPBA at x̄ for f with modulus κ and constant a. Furthermore, suppose that the sequence of set-valued mappings {R_k} satisfies
sup_{k∈ℕ} sup_{v ∈ R_k(x)} ‖v‖ ≤ η ‖x − x̄‖^{n+α}  for all x ∈ B_a(x̄).   (26)
Then, for every starting point x_0 ∈ C ∩ B_{r_*}(x̄) \ {x̄}, there exists a sequence {x_k} generated by the general inexact projection method associated with {θ_k} and {R_k} which is contained in C ∩ B_{r_*}(x̄) and converges to x̄ with
‖x_{k+1} − x̄‖ ≤ [(1 + √(2θ_k)) ((κ + η π_{n,α}) τ ‖x_k − x̄‖^{n+α−1}) / (π_{n,α}(1 − τγ(κ) ‖x_k − x̄‖^α)) + √(2θ_k)] ‖x_k − x̄‖  for all k ∈ ℕ.   (27)
In particular, if θ_k = 0 for all k = 0, 1, 2, …, then
‖x_{k+1} − x̄‖ ≤ ((κ + η π_{n,α}) τ) / (π_{n,α}(1 − τγ(κ) r_*^α)) ‖x_k − x̄‖^{n+α}  for all k ∈ ℕ,   (28)
and {x_k} converges to x̄ superlinearly with order n + α. Furthermore, if the mapping L_{x̄} is strongly metrically regular at x̄ for 0, then x̄ is the unique solution of (5) in B_{r_*}(x̄), and every sequence generated by the general inexact projection method which starts at x_0 ∈ C ∩ B_{r_*}(x̄) \ {x̄}, is contained in B_{r_*}(x̄), and is associated with {θ_k}, {R_k} satisfies (27) and converges to x̄.
Proof. 
First, we show by induction on k that, for any starting point x_0 ∈ C ∩ B_{r_*}(x̄) \ {x̄}, there exists a sequence {x_k} generated by the proposed method satisfying (27), together with sequences {u_k} ⊂ ℝⁿ and {v_k} ⊂ ℝᵐ associated with {x_k}, such that
x_{k+1} ∈ C ∩ B_{r_*}(x̄),  v_k ∈ (A(x_k, u_k) + F(u_k)) ∩ R_k(x_k),  for all k ∈ ℕ.   (29)
To this end, take x_0 ∈ C ∩ B_{r_*}(x̄) and v_0 ∈ R_0(x_0). By (26), one has ‖v_0‖ ≤ η ‖x_0 − x̄‖^{n+α} ≤ η r_*^{n+α}. According to Lemma 4, we obtain u_0 ∈ Φ_{x_0,v_0}(u_0) such that u_0 ∈ B_{r_*}(x̄) and (16) holds with x = x_0, u = u_0, and v = v_0. Then,
v_0 ∈ (A(x_0, u_0) + F(u_0)) ∩ R_0(x_0).
If u_0 ∈ C, then set x_1 := u_0 ∈ C ∩ B_{r_*}(x̄), and by using (16) we conclude that (27) holds for k = 0. Otherwise, if u_0 ∉ C, take x_1 ∈ P_C(u_0, x_0, θ_0). Then, by using Lemma 6 with x = x_0, u = u_0, and v = v_0, we obtain from (24) that (27) holds for k = 0. Note that P_C(u_0, x_0, θ_0) ⊆ C and ‖x_0 − x̄‖ ≤ r_*. By (25), one has
(1 + √(2θ_0)) ((κ + η π_{n,α}) τ ‖x_0 − x̄‖^{n+α−1}) / (π_{n,α}(1 − τγ(κ) ‖x_0 − x̄‖^α)) + √(2θ_0) < 1,
and then x_1 ∈ C ∩ B_{r_*}(x̄). Therefore, there exist x_1, u_0, and v_0 satisfying (27) and (29) for k = 0. Assume by induction that there exist x_{k+1}, u_k, and v_k satisfying (27) and (29) for k = 0, 1, …, i − 1. Taking v_i ∈ R_i(x_i) and arguing as in the case k = 0, we obtain x_{i+1}, u_i, and v_i satisfying (27) and (29) for k = i, which completes the induction. Therefore, there exists a sequence {x_k} ⊆ C ∩ B_{r_*}(x̄) generated by the general inexact projection method, associated with {θ_k, R_k} and starting at x_0, which satisfies (27).
Now, we proceed to show that the sequence {x_k} converges to x̄. Indeed, it is easy to observe from (25) that, for any k ∈ ℕ,
(1 + √(2θ_k)) ((κ + η π_{n,α}) τ ‖x_k − x̄‖^{n+α−1}) / (π_{n,α}(1 − τγ(κ) ‖x_k − x̄‖^α)) + √(2θ_k) ≤ (1 + √(2θ̃)) ((κ + η π_{n,α}) τ r_*^{n+α−1}) / (π_{n,α}(1 − τγ(κ) r_*^α)) + √(2θ̃) =: μ < 1.
Then, we conclude from (27) that ‖x_{k+1} − x̄‖ ≤ μ ‖x_k − x̄‖ for all k ∈ ℕ, which implies that {x_k} converges to x̄ at least linearly. On the other hand, if θ_k = 0 for all k ∈ ℕ, then (28) follows directly from (27); consequently, {x_k} converges to x̄ with order n + α.
Furthermore, if the mapping L_{x̄} is strongly metrically regular at x̄ for 0, then Lemma 5 implies that x̄ is the unique solution of (5) in B_{r_*}(x̄). By the first part of the proof, we know that the general inexact projection method is surely executable. To show the last statement of the theorem, we take an arbitrary iterative sequence {x_k} which is contained in B_{r_*}(x̄), associated with {θ_k, R_k}, and starting at x_0. According to the structure of the proposed method, there exist u_k and v_k associated with {x_k} satisfying
v_k ∈ R_k(x_k)  and  u_k ∈ Φ_{x_k, v_k}(u_k),  for all k ∈ ℕ.
It follows from the second part of Lemma 4 that u_k is the unique fixed point of Φ_{x_k, v_k} in B_{r_*}(x̄) for each k ∈ ℕ. Then, taking into account the construction of {x_k}, we conclude that (27) holds for each k ∈ ℕ. Indeed, if u_k ∈ C, then x_{k+1} = u_k, and Lemma 4 implies that (27) holds; if u_k ∉ C, then x_{k+1} ∈ P_C(u_k, x_k, θ_k), and we obtain from Lemma 6 that (27) holds. By arguments similar to those in the first part of the proof, such a sequence converges to x̄. For the sake of simplicity, we omit the details here.    □
Remark 1.
It is worth mentioning that, for positive n ∈ ℕ, conditions (15) and (25) hold true as long as we pick a sufficiently small value for r_*. In this case, if lim_{k→+∞} θ_k = 0, then {x_k} converges to x̄ superlinearly. In fact, passing to the limit in (27) as k → +∞, we obtain
lim sup_{k→+∞} ‖x_{k+1} − x̄‖ / ‖x_k − x̄‖ = 0.
For n = α = 0, one needs to make κ, τ, and η sufficiently small to ensure the validity of (15) and (25), and in this case we have linear convergence.
Remark 2.
For the case of f being smooth, under the condition of metric regularity (respectively, strong metric regularity) for an approximation of the set-valued mapping f + F, and assuming Lipschitz continuity of the derivative f′, the authors showed in Theorem 2 of [30] that the sequence generated by the Newton-InexP method converges to a solution of (5) with a linear, superlinear, or Q-quadratic convergence rate, respectively.
In contrast, the method that we investigated in Theorem 1 incorporates both inexactness and nonsmoothness. In fact, if f is continuously differentiable around x̄, we can set A(x, u) = f(x) + f′(x)(u − x). Then, by Lemma 3.1 of [31], the condition that the derivative f′ is Hölder calm of order α ≥ 0 implies that A : Ω × Ω → Y is a (1, α)-WPBA at x̄ for f. Recall that Hölder calmness of the derivative f′ is strictly weaker than the Lipschitz continuity used in Theorem 2 of [30] (see Example 3.1 of [31]). Therefore, even in the smooth case, Theorem 1 improves Theorem 2 of [30]. Additionally, it is worth pointing out that, even for generalized equations without a constraint, i.e., C = ℝⁿ, Theorem 1 is new and supplements Theorem 3.1 of [31].
In general, under the assumption of metric regularity, the sequence generated by the general inexact projection method is not unique.
The following example shows that, under the assumptions of Theorem 1, one cannot guarantee that every iterative sequence converges to a solution, even in the case R_k ≡ 0 for all k ∈ ℕ.
Example 1.
Let f : R R be such that f ( x ) = x 2 + 2 x for all x 0 , and f ( x ) = x 3 for all x < 0 . Then, f is not differentiable at 0. Let n = 2 , α = 0 , and A : R × R R be such that A ( x , u ) = ( 2 x + 2 ) u x 2 for all ( x , u ) [ 0 , + ) × R and A ( x , u ) = 3 x 2 u 2 x 3 for all ( x , u ) ( , 0 ) × R . It is clear that A is a ( 2 , 0 ) -WPBA at 0 for f. Let x ¯ = 0 , C = [ 1 , 1 ] , θ ˜ = 0 , R k 0 (for all k N ), and F : R R be such that F ( x ) = [ x , + ) for all x R . It is easy to see that L x ¯ is metrically regular at 0 for 0, and it is not strongly metrically regular at 0 for 0. Then, it follows from Theorem 1 that, for any x 0 [ 1 6 , 1 6 ] , there exists a sequence { x k } generated by the proposed method and contained in C which converges to 0 superlinearly of order 2. For each k N , let x k be the kth generation of the proposed method. In fact, if  x k > 0 , we know that any element taken from [ 1 , x k 2 2 x k + 3 ] satisfies (8), so we choose u k = x k 2 2 x k + 3 . For the case of x k < 0 , since any element taken from [ 1 , 2 x k 3 3 x k 2 + 1 ] satisfies (8), we pick u k = 2 x k 3 3 x k 2 + 1 . Note that u k C , so we set x k + 1 = u k . We also have | x k + 1 0 | 1 3 | x k 0 | 2 . This shows that the sequence { x k } converges to 0 superlinearly of order 2.
On the other hand, for the starting point x 0 = 1/6 ∈ C , we can find a sequence { 1/6 , − 1 , 1/6 , − 1 , … } which is generated with the proposed method and does not converge to a solution of the aforementioned constrained generalized equation.
Clearly, the condition that ( x k , u k ) satisfies (8) is equivalent to the fact that u k ∈ L x k 1 ( R k ( x k ) ) . It is easy to observe from Example 1 that u k should be chosen near the boundary of L x k 1 ( R k ( x k ) ) and not too far away from the given solution point.
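The convergent sequence of Example 1 can be checked numerically. The following sketch (our own illustrative code; the helper names are not from the paper) iterates the text's choice of u k and verifies the quadratic estimate | x k + 1 | ≤ | x k | 2 / 3 along the sequence started at x 0 = 1/6 .

```python
def wpba_step(x):
    # Text's choice of u_k in Example 1 (R_k = 0): the right endpoint of the
    # admissible interval in (8); for x in [-1/6, 1/6] it already lies in C.
    return x * x / (2.0 * x + 3.0) if x > 0 else 2.0 * x**3 / (3.0 * x * x + 1.0)

def iterate(x0, n):
    """Return the first n+1 iterates x_0, ..., x_n of the proposed method."""
    xs = [x0]
    for _ in range(n):
        xs.append(wpba_step(xs[-1]))
    return xs
```

Starting from x 0 = 1/6 , five steps drive | x k | below 10^−12, consistent with superlinear convergence of order 2.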
To overcome the shortcoming that not every sequence produced by the general inexact projection method reaches a solution, we examine a modified version of the proposed method for solving nonsmooth constrained generalized equations (Algorithm 2).
Algorithm 2 Restricted generalized inexact projection method
Step 0. Let x 0 ∈ C , λ > 1 , and { θ k } ⊂ [ 0 , + ∞ ) be given, and set k = 0 .
Step 1. If  0 ∈ f ( x k ) + F ( x k ) , then stop; otherwise, compute u k ∈ R n such that
( A ( x k , u k ) + F ( u k ) ) ∩ R k ( x k ) ≠ ∅ with ‖ u k − x ¯ ‖ ≤ λ d ( x ¯ , L x k 1 ( R k ( x k ) ) ) .

Step 2. If  u k ∈ C , set x k + 1 = u k ; otherwise, take any x k + 1 satisfying
x k + 1 ∈ P C ( u k , x k , θ k ) .

Step 3. Set k k + 1 , and go to Step 1.
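For intuition, the control flow of Algorithm 2 can be sketched as follows. The sketch is conceptual: the selection rule (30) references the unknown solution x ¯ , but when the subproblem at x k has a unique solution — as for a single-valued branch of F — that solution satisfies (30) automatically for any λ > 1 . All oracle names below are our own assumptions, not from the paper; the instantiation uses the branch T 2 ( x ) = f ( x ) − x of Example 2, whose linearized subproblem solves in closed form.

```python
def restricted_method(x0, solve_subproblem, residual, in_C, project_C,
                      tol=1e-10, max_iter=50):
    """Sketch of the restricted generalized inexact projection method.
    Oracles (hypothetical names, supplied by the user):
      solve_subproblem(x): a u satisfying (30) at the current iterate x
      residual(x):  how far 0 is from f(x) + F(x) (stop test, Step 1)
      in_C(u):      membership test for the constraint set C (Step 2)
      project_C(u, x): a feasible inexact projection of u onto C (Step 2)
    """
    x = x0
    for k in range(max_iter):
        if residual(x) <= tol:                  # Step 1: stop test
            return x, k
        u = solve_subproblem(x)                 # Step 1: subproblem step
        x = u if in_C(u) else project_C(u, x)   # Step 2: keep feasibility
    return x, max_iter                          # Step 3 is the loop itself

# Instantiation for the single-valued branch T_2(x) = f(x) - x of Example 2
# (F(x) = {x, -x}, C = [-1, 1]); the linearized subproblem has a closed form.
f = lambda x: x * x + 2.0 * x if x >= 0 else x**3
x_star, k = restricted_method(
    x0=0.4,
    solve_subproblem=lambda x: x * x / (2.0 * x + 1.0) if x >= 0
    else 2.0 * x**3 / (3.0 * x * x - 1.0),
    residual=lambda x: abs(f(x) - x),
    in_C=lambda u: -1.0 <= u <= 1.0,
    project_C=lambda u, x: max(-1.0, min(1.0, u)),  # exact projection (theta_k = 0)
)
```

Starting from x 0 = 0.4 , the instantiated iteration reaches the solution 0 in a handful of steps, in line with the convergence rate of Theorem 2.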
It is clear that (30) is equivalent to the relationship
x k + 1 ∈ L x k 1 ( R k ( x k ) ) with ‖ x k + 1 − x ¯ ‖ ≤ λ d ( x ¯ , L x k 1 ( R k ( x k ) ) ) .
Since λ > 1 , the restricted generalized inexact projection method is surely executable whenever L x 1 ( R k ( x ) ) ≠ ∅ for any x near x ¯ and k ∈ N .
For convergence analysis of the restricted method, we need the following lemma.
Lemma 7.
Assume that the assumptions of Lemmas 3 and 4 hold. Then, for any x ∈ B r * ( x ¯ ) and v ∈ η r * n + α B R m , we have L x 1 ( v ) ≠ ∅ and
d ( x ¯ , L x 1 ( v ) ) ≤ [ τ / ( 1 − τ γ ( κ ) ‖ x − x ¯ ‖ α ) ] d ( v , L x ( x ¯ ) ) .
Proof. 
Pick any x ∈ B r * ( x ¯ ) and v ∈ η r * n + α B R m . Note that 0 ∈ L x ¯ ( x ¯ ) . One has
A ( x , x ¯ ) − f ( x ¯ ) ∈ L x ( x ¯ ) ,
and then, it follows from (13) that
d ( v , L x ( x ¯ ) ) ≤ ‖ v + f ( x ¯ ) − A ( x , x ¯ ) ‖ ≤ ‖ v ‖ + ( κ / π n , α ) ‖ x − x ¯ ‖ n + α ≤ ( κ + η π n , α ) r * n + α / π n , α .
For any sufficiently small ε ∈ ( 0 , ( 1 − τ γ ( κ ) r * α ) r * / τ − ( κ + η π n , α ) τ r * n + α / ( π n , α ( 1 − τ γ ( κ ) r * α ) ) ) , take y ∈ L x ( x ¯ ) such that ‖ v − y ‖ < d ( v , L x ( x ¯ ) ) + ε . Let
ρ : = τ ‖ v − y ‖ / ( 1 − τ γ ( κ ) ‖ x − x ¯ ‖ α ) .
It is clear that ρ ≤ [ τ / ( 1 − τ γ ( κ ) r * α ) ] [ ( κ + η π n , α ) r * n + α / π n , α + ε ] < r * ≤ a , and then B ρ ( x ¯ ) ⊂ B r * ( x ¯ ) .
According to the assumption that L x ¯ is metrically regular at x ¯ for 0 with constants τ , a , and b, we conclude that (18) holds, and then L x ¯ 1 ( y ) ≠ ∅ for any y ∈ B b ( 0 ) . By (20), we have g x ( u ) + v ∈ B b ( 0 ) , and, therefore, Φ x , v is well defined on B r * ( x ¯ ) , where Φ x , v is defined by (14). Note that y ∈ L x ( x ¯ ) . We have
y + f ( x ¯ ) − A ( x , x ¯ ) ∈ L x ¯ ( x ¯ ) .
In combination with (18), we have
d ( x ¯ , Φ x , v ( x ¯ ) ) = d ( x ¯ , L x ¯ 1 ( g x ( x ¯ ) + v ) ) ≤ τ d ( g x ( x ¯ ) + v , L x ¯ ( x ¯ ) ) ≤ τ ‖ g x ( x ¯ ) + v − y − f ( x ¯ ) + A ( x , x ¯ ) ‖ = τ ‖ v − y ‖ = ρ ( 1 − τ γ ( κ ) ‖ x − x ¯ ‖ α ) .
Furthermore, for any u , u ′ ∈ B ρ ( x ¯ ) , it follows from (22) that
e ( Φ x , v ( u ) ∩ B ρ ( x ¯ ) , Φ x , v ( u ′ ) ) ≤ τ γ ( κ ) ‖ x − x ¯ ‖ α ‖ u − u ′ ‖ .
Since τ γ ( κ ) ‖ x − x ¯ ‖ α < 1 and B ρ ( x ¯ ) ⊂ B r * ( x ¯ ) , by applying Lemma 2 with Φ = Φ x , v , x ¯ = x ¯ , r = ρ , and α = τ γ ( κ ) ‖ x − x ¯ ‖ α , we obtain a fixed point u ∈ Φ x , v ( u ) ∩ B ρ ( x ¯ ) , which establishes that u ∈ L x 1 ( v ) and ‖ u − x ¯ ‖ ≤ ρ . Then, we have
d ( x ¯ , L x 1 ( v ) ) ≤ ρ = τ ‖ v − y ‖ / ( 1 − τ γ ( κ ) ‖ x − x ¯ ‖ α ) ≤ [ τ / ( 1 − τ γ ( κ ) ‖ x − x ¯ ‖ α ) ] ( d ( v , L x ( x ¯ ) ) + ε ) .
Since ε is arbitrarily chosen, we conclude that (33) holds. □
The following result shows that, under proper conditions, every sequence generated with the aforementioned restricted method converges to a solution of the nonsmooth constrained generalized equation.
Theorem 2.
Consider the constrained generalized Equation (5) and assume that the assumptions of Theorem 1 hold. Let λ > 1 be such that
( 1 + 2 θ ˜ ) ( κ + η π n , α ) λ τ r * n + α − 1 / ( π n , α ( 1 − τ γ ( κ ) r * α ) ) + 2 θ ˜ < 1 .
Then, for every sequence { x k } generated by the restricted generalized inexact projection method, which starts from x 0 ∈ C ∩ B r * ( x ¯ ) ∖ { x ¯ } , is associated with { θ k } and { R k } , and is contained in C ∩ B r * ( x ¯ ) , we have the following convergence estimate:
‖ x k + 1 − x ¯ ‖ ≤ [ ( 1 + 2 θ k ) ( κ + η π n , α ) λ τ ‖ x k − x ¯ ‖ n + α − 1 / ( π n , α ( 1 − τ γ ( κ ) ‖ x k − x ¯ ‖ α ) ) + 2 θ k ] ‖ x k − x ¯ ‖ , ∀ k ∈ N .
In particular, if θ k = 0 for all k = 0 , 1 , 2 , , then
‖ x k + 1 − x ¯ ‖ ≤ [ ( κ + η π n , α ) λ τ / ( π n , α ( 1 − τ γ ( κ ) r * α ) ) ] ‖ x k − x ¯ ‖ n + α , ∀ k ∈ N ,
and { x k } converges to x ¯ superlinearly of order n + α .
Proof. 
Take any x , u ∈ B r * ( x ¯ ) and k ∈ N . By (26), one has R k ( u ) ⊂ η r * n + α B R m . Then, it follows from Lemma 7 that L x 1 ( R k ( u ) ) ≠ ∅ . Since λ > 1 , the restricted generalized inexact projection method is surely executable. Now, pick any x 0 ∈ C ∩ B r * ( x ¯ ) ∖ { x ¯ } and consider any iterative sequence { x k } generated by the aforementioned method associated with { θ k } , { R k } and starting at x 0 . Then, for each k ∈ N , there exist u k and v k associated with { x k } satisfying
v k ∈ ( A ( x k , u k ) + F ( u k ) ) ∩ R k ( x k ) and ‖ u k − x ¯ ‖ ≤ λ d ( x ¯ , L x k 1 ( R k ( x k ) ) ) .
If u k ∈ C , then x k + 1 = u k ; otherwise, x k + 1 ∈ P C ( u k , x k , θ k ) . By (37), one has
v k ∈ R k ( x k ) , u k ∈ L x k 1 ( v k ) and ‖ u k − x ¯ ‖ ≤ λ d ( x ¯ , L x k 1 ( v k ) ) .
Next, we show by induction that x k ∈ B r * ( x ¯ ) and (35) holds for each k ∈ N . Since 0 ∈ L x ¯ ( x ¯ ) , we have
A ( x 0 , x ¯ ) − f ( x ¯ ) ∈ L x 0 ( x ¯ ) .
If u 0 ∈ C , it follows from (26), (33), and (38) that
‖ u 0 − x ¯ ‖ ≤ λ d ( x ¯ , L x 0 1 ( v 0 ) ) ≤ [ λ τ / ( 1 − τ γ ( κ ) ‖ x 0 − x ¯ ‖ α ) ] d ( v 0 , L x 0 ( x ¯ ) ) ≤ [ λ τ / ( 1 − τ γ ( κ ) ‖ x 0 − x ¯ ‖ α ) ] ‖ v 0 + f ( x ¯ ) − A ( x 0 , x ¯ ) ‖ ≤ [ ( κ + η π n , α ) λ τ / ( π n , α ( 1 − τ γ ( κ ) ‖ x 0 − x ¯ ‖ α ) ) ] ‖ x 0 − x ¯ ‖ n + α .
In this case, x 1 = u 0 ∈ C . Hence, (39) implies that (35) holds for k = 0 . If u 0 ∉ C , then x 1 ∈ P C ( u 0 , x 0 , θ 0 ) ⊂ C . Similar to the proof of Lemma 6, applying (39) in place of (16), we obtain that (35) holds for k = 0 . Note that x 0 ∈ B r * ( x ¯ ) . By (34) and (35), one has ‖ x 1 − x ¯ ‖ ≤ ‖ x 0 − x ¯ ‖ ≤ r * . Hence, x 1 ∈ C ∩ B r * ( x ¯ ) . Assume, for induction, that x k ∈ C ∩ B r * ( x ¯ ) and (35) holds for k = 0 , … , i − 1 . Note that 0 ∈ L x ¯ ( x ¯ ) . One has
A ( x i , x ¯ ) − f ( x ¯ ) ∈ L x i ( x ¯ ) .
If u i ∈ C , it follows from (26), (33), and (38) that
‖ u i − x ¯ ‖ ≤ λ d ( x ¯ , L x i 1 ( v i ) ) ≤ [ λ τ / ( 1 − τ γ ( κ ) ‖ x i − x ¯ ‖ α ) ] d ( v i , L x i ( x ¯ ) ) ≤ [ λ τ / ( 1 − τ γ ( κ ) ‖ x i − x ¯ ‖ α ) ] ‖ v i + f ( x ¯ ) − A ( x i , x ¯ ) ‖ ≤ [ ( κ + η π n , α ) λ τ / ( π n , α ( 1 − τ γ ( κ ) ‖ x i − x ¯ ‖ α ) ) ] ‖ x i − x ¯ ‖ n + α .
In this case, x i + 1 = u i ∈ C , and, hence, (40) implies that (35) holds for k = i . If u i ∉ C , then x i + 1 ∈ P C ( u i , x i , θ i ) . Similar to the proof of Lemma 6, applying (40) instead of (16), we obtain that (35) holds for k = i . Note that x i ∈ B r * ( x ¯ ) . By (34) and (35), one has ‖ x i + 1 − x ¯ ‖ ≤ ‖ x i − x ¯ ‖ ≤ r * , and then x i + 1 ∈ C ∩ B r * ( x ¯ ) . Thus, the induction step is complete, and (35) holds for all k ∈ N . If θ k = 0 for all k ∈ N , then (36) follows directly from (35). □

4. Numerical Example

In this section, we provide a one-dimensional numerical example to illustrate the practical performance of our proposed approach.
Example 2.
Consider a nonsmooth constrained generalized equation of the form (5), where f : R → R is defined by f ( x ) = x 2 + 2 x for all x ≥ 0 and f ( x ) = x 3 for all x < 0 , F : R ⇉ R is defined by F ( x ) = { x , − x } for all x ∈ R , and C = [ − 1 , 1 ] . It is clear that f is not differentiable at 0. To apply the general inexact projection method, we calculate that A ( x , u ) = ( 2 x + 2 ) u − x 2 for all ( x , u ) ∈ [ 0 , + ∞ ) × R and A ( x , u ) = 3 x 2 u − 2 x 3 for all ( x , u ) ∈ ( − ∞ , 0 ) × R . It is clear that A is a ( 2 , 0 ) -WPBA at 0 for f. Let x ¯ = 0 , θ ˜ = 0 , and R k ≡ 0 (for all k ∈ N ); it is easy to see that L x ¯ is metrically regular at 0 for 0. Recall that the generalized equation contains two branches: T 1 ( x ) = f ( x ) + x and T 2 ( x ) = f ( x ) − x . We examine below the performance of the proposed method for T 1 and T 2 , respectively, by choosing u k according to (8).
In Figure 1 and Figure 2, we consider positive and negative initial points, respectively, and show the values of x k , T 1 ( x k ) , and the distances e k = | x k + 1 − x k | against the number of iterations. In Figure 3 and Figure 4, we show the values of x k , T 2 ( x k ) , and the distances e k = | x k + 1 − x k | against the number of iterations for both positive and negative initial points. The stopping condition is | x k − x k − 1 | ≤ 10^−8 or a maximum of 50 iterations.
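A minimal sketch of the computation behind the figures (our own illustrative code, not the authors' implementation): the linearization step for each branch has a closed form, the projection onto C = [ − 1 , 1 ] is exact ( θ k = 0 ), and iteration stops once | x k − x k − 1 | ≤ 10^−8 or 50 iterations are reached.

```python
def newton_step(x, branch):
    """One linearization step for the branch T(x) = f(x) + branch * x of
    Example 2 (branch = +1 gives T_1, branch = -1 gives T_2): solve
    A(x_k, u) + branch * u = 0 for u in closed form."""
    if x >= 0:
        return x * x / (2.0 * x + 2.0 + branch)
    return 2.0 * x**3 / (3.0 * x * x + branch)

def run(x0, branch, tol=1e-8, max_iter=50):
    """Iterate with exact projection onto C = [-1, 1] (theta_k = 0), stopping
    once |x_k - x_{k-1}| <= tol or max_iter iterations are reached."""
    x, history = x0, [x0]
    for _ in range(max_iter):
        u = newton_step(x, branch)
        x_next = max(-1.0, min(1.0, u))   # projection onto C = [-1, 1]
        history.append(x_next)
        if abs(x_next - x) <= tol:
            break
        x = x_next
    return history
```

For all four starting points used in Figures 1–4, the iterates stay in C and reach the solution 0 in a handful of steps.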

5. Conclusions

In this paper, we propose an abstract general iterative procedure for solving nonsmooth constrained generalized equations in which the nonsmooth single-valued mapping admits weak point-based approximations (WPBAs). The proposed method incorporates the aforementioned nonsmoothness property as well as an inexact feasible projection onto the constraint set. We prove higher order convergence of the iterative sequence and establish a relationship between the order of convergence and the parameters of the WPBA property. The proposed general inexact projection method is an extension of the Newton-InexP method, which deals with smooth constrained generalized equations, to nonsmooth cases. The obtained results improve existing ones in the literature even for cases where the single-valued mapping is smooth or the constraint set vanishes.

Author Contributions

Conceptualization, W.O.; writing—original draft preparation, W.O. and K.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of the People’s Republic of China, grant number 12261109, and the Basic Research Program of Yunnan Province, grant number 202301AT070080.

Data Availability Statement

Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Robinson, S.M. Generalized equations and their solutions. I. Basic theory. Point-to-set maps and mathematical programming. Math. Progr. Stud. 1979, 10, 128–141. [Google Scholar]
  2. Li, G.; Zhang, Y.; Guan, Y.; Li, W. Stability analysis of multi-point boundary conditions for fractional differential equation with non-instantaneous integral impulse. Math. Biosci. Eng. 2023, 20, 7020–7041. [Google Scholar] [CrossRef] [PubMed]
  3. Xue, Y.; Han, J.; Tu, Z.; Chen, X. Stability analysis and design of cooperative control for linear delta operator system. AIMS Math. 2023, 8, 12671–12693. [Google Scholar] [CrossRef]
  4. Wang, C.; Liu, X.; Jiao, F.; Mai, H.; Chen, H.; Lin, R. Generalized Halanay inequalities and relative application to time-delay dynamical systems. Mathematics 2023, 11, 1940. [Google Scholar] [CrossRef]
  5. Wang, B.; Zhu, Q. Stability analysis of discrete time semi-Markov jump linear systems. IEEE Trans. Automat. Contr. 2020, 65, 5415–5421. [Google Scholar] [CrossRef]
  6. Wang, B.; Zhu, Q. Stability analysis of discrete-time semi-Markov jump linear systems with time delay. IEEE Trans. Automat. Contr. 2023, 68, 6758–6765. [Google Scholar] [CrossRef]
  7. Dontchev, A.L.; Rockafellar, R.T. Implicit Functions and Solution Mappings; Springer: Berlin/Heidelberg, Germany, 2009. [Google Scholar]
  8. Izmailov, A.F.; Solodov, M.V. Newton-Type Methods for Optimization and Variational Problems; Springer: New York, NY, USA, 2014. [Google Scholar]
  9. Adly, S.; Cibulka, R.; Ngai, H.V. Newton’s method for solving inclusions using set-valued approximations. SIAM J. Optim. 2015, 25, 159–184. [Google Scholar] [CrossRef]
  10. Adly, S.; Ngai, H.V.; Nguyen, V.V. Stability of metric regularity with set-valued perturbations and application to Newton’s method for solving generalized equations. Set-Valued Var. Anal. 2017, 25, 543–567. [Google Scholar] [CrossRef]
  11. Aragón Artacho, F.J.; Dontchev, A.L.; Gaydu, M.; Geoffroy, M.H.; Veliov, V.M. Metric regularity of Newton’s iteration. SIAM J. Optim. 2011, 49, 339–362. [Google Scholar] [CrossRef]
  12. Aragón Artacho, F.J.; Belyakov, A.; Dontchev, A.L.; López, M. Local convergence of quasi-Newton methods under metric regularity. Comput. Optim. Appl. 2014, 58, 225–247. [Google Scholar] [CrossRef]
  13. Cibulka, R.; Dontchev, A.L.; Geoffroy, M.H. Inexact Newton methods and Dennis-Moré theorem for nonsmooth generalized equations. SIAM J. Control Optim. 2015, 53, 1003–1019. [Google Scholar] [CrossRef]
  14. Dontchev, A.L.; Rockafellar, R.T. Newton’s method for generalized equations: A sequential implicit function theorem. Math. Progr. Ser. B 2010, 123, 139–159. [Google Scholar] [CrossRef]
  15. Dontchev, A.L.; Rockafellar, R.T. Convergence of inexact Newton methods for generalized equations. Math. Progr. Ser. B 2013, 139, 115–137. [Google Scholar] [CrossRef]
  16. Ferreira, O.P. A robust semi-local convergence analysis of Newton’s method for cone inclusion problems in Banach spaces under affine invariant majorant condition. J. Comput. Appl. Math. 2015, 279, 318–335. [Google Scholar] [CrossRef]
  17. Ferreira, O.P.; Silva, G.N. Kantorovich’s theorem on Newton’s method for solving strongly regular generalized equation. SIAM J. Optim. 2017, 27, 910–926. [Google Scholar] [CrossRef]
  18. Ferreira, O.P.; Silva, G.N. Local convergence analysis of Newton’s method for solving strongly regular generalized equations. J. Math. Anal. Appl. 2018, 458, 481–496. [Google Scholar] [CrossRef]
  19. Marini, L.; Morini, B.; Porcelli, M. Quasi-Newton methods for constrained nonlinear systems: Complexity analysis and applications. Comput. Optim. Appl. 2018, 71, 147–170. [Google Scholar] [CrossRef]
  20. Ouyang, W.; Zhang, B. Newton’s method for fully parameterized generalized equations. Optimization 2018, 67, 2061–2080. [Google Scholar] [CrossRef]
  21. Robinson, S.M. Strongly regular generalized equations. Math. Oper. Res. 1980, 5, 43–62. [Google Scholar] [CrossRef]
  22. Josephy, N.H. Newton’s Method for Generalized Equations and the Pies Energy Model. Ph.D. Thesis, University of Wisconsin-Madison, Madison, WI, USA, 1979. [Google Scholar]
  23. Robinson, S.M. Newton’s method for a class of nonsmooth functions. Set Valued Anal. 1994, 2, 291–305. [Google Scholar] [CrossRef]
  24. Geoffroy, M.H.; Piétrus, A. A general iterative procedure for solving nonsmooth generalized equations. Comput. Optim. Appl. 2005, 31, 57–67. [Google Scholar] [CrossRef]
  25. Gaydu, M.; Silva, G.N. A general iterative procedure to solve generalized equations with differentiable multifunction. J. Optim. Theory Appl. 2020, 185, 207–222. [Google Scholar] [CrossRef]
  26. Geoffroy, M.H.; Piétrus, A. Local convergence of some iterative methods for generalized equations. J. Math. Anal. Appl. 2001, 290, 497–505. [Google Scholar] [CrossRef]
  27. Dembo, R.S.; Eisenstat, S.C.; Steihaug, T. Inexact Newton methods. SIAM J. Numer. Anal. 1982, 19, 400–408. [Google Scholar] [CrossRef]
  28. Censor, Y.; Gibali, A.; Reich, S. Algorithms for the split variational inequality problem. Numer. Algorithms 2012, 59, 301–323. [Google Scholar] [CrossRef]
  29. He, H.; Ling, C.; Xu, H.K. A relaxed projection method for split variational inequalities. J. Optim. Theory Appl. 2015, 166, 213–233. [Google Scholar] [CrossRef]
  30. De Oliveira, F.R.; Ferreira, O.P.; Silva, G.N. Newton’s method with feasible inexact projections for solving constrained generalized equations. Comput. Optim. Appl. 2019, 72, 159–177. [Google Scholar] [CrossRef]
  31. Wang, J.; Ouyang, W. Newton’s method for solving generalized equations without Lipschitz condition. J. Optim. Theory Appl. 2022, 192, 510–532. [Google Scholar] [CrossRef]
  32. Bertsekas, D.P. Nonlinear Programming. In Athena Scientific Optimization and Computation Series, 2nd ed.; Athena Scientific: Belmont, MA, USA, 1999. [Google Scholar]
Figure 1. T 1 ( x ) with initial point x 0 = 0.3 .
Figure 2. T 1 ( x ) with initial point x 0 = − 0.3 .
Figure 3. T 2 ( x ) with initial point x 0 = 0.4 .
Figure 4. T 2 ( x ) with initial point x 0 = − 0.4 .