**Applied Functional Analysis and Its Applications**

Editors

**Jen-Chih Yao Shahram Rezapour**

MDPI • Basel • Beijing • Wuhan • Barcelona • Belgrade • Manchester • Tokyo • Cluj • Tianjin

*Editors*

Jen-Chih Yao, China Medical University Hospital, China Medical University, Taiwan

Shahram Rezapour, Azarbaijan Shahid Madani University, Iran; China Medical University, Taiwan

*Editorial Office* MDPI St. Alban-Anlage 66 4052 Basel, Switzerland

This is a reprint of articles from the Special Issue published online in the open access journal *Mathematics* (ISSN 2227-7390) (available at: https://www.mdpi.com/journal/mathematics/special_issues/Applied_Functional_Analysis_Its_Applications).

For citation purposes, cite each article independently as indicated on the article page online and as indicated below:

LastName, A.A.; LastName, B.B.; LastName, C.C. Article Title. *Journal Name* **Year**, *Article Number*, Page Range.

**ISBN 978-3-03936-776-4 (Hbk) ISBN 978-3-03936-777-1 (PDF)**

© 2020 by the authors. Articles in this book are Open Access and distributed under the Creative Commons Attribution (CC BY) license, which allows users to download, copy and build upon published articles, as long as the author and publisher are properly credited, which ensures maximum dissemination and a wider impact of our publications.

The book as a whole is distributed by MDPI under the terms and conditions of the Creative Commons license CC BY-NC-ND.

## **About the Editors**

**Jen-Chih Yao** (Professor) is Chair Professor of the Center for General Education at China Medical University (Taiwan) and Director of the Research Center for Interneural Computing, China Medical University Hospital, China Medical University. He has published numerous papers in different fields (e.g., variational inequalities, complementarity problems, fixed point theorems, variational analysis, optimization, vector optimization problems, etc.). He is the editor-in-chief and a member of the editorial board of several journals. He was also the chairman of the organizing and scientific committees of several international conferences and workshops on nonlinear analysis and optimization. His name has been included on lists of highly cited researchers in 2014, 2015, 2016, 2017, 2018, and 2019.

**Shahram Rezapour** (Professor) is a professor of mathematics at Azarbaijan Shahid Madani University (Iran). He simultaneously holds a visiting professor position at China Medical University (Taiwan). He has published numerous papers in different fields (e.g., approximation theory, fixed point theory, fractional integro-differential equations and inclusions, finite differences, numerical and approximate solutions of singular fractional differential equations, stability, and optimization). He is a member of the editorial boards of several journals. He was also the chairman of the organizing and scientific committees of several international conferences. His name has been included on lists of highly cited researchers in 2016 and 2017.

## **Preface to "Applied Functional Analysis and Its Applications"**

It is well known that applied functional analysis plays an important role in most applied research fields, and its influence has grown in recent decades. Many novel works have used the techniques, ideas, notions, and methods of applied functional analysis, which encompasses both linear and nonlinear problems.

The scope of this field is so wide that it cannot be expressed in a few books. This book covers a limited section of this field, namely, fixed point theory and applications, nonlinear methods and variational inequalities, and set-valued optimization problems.

The most important application of fixed point theory is proving the existence of solutions of fractional integro-differential equations and, therefore, increasing our ability to model different kinds of phenomena. Optimization, in turn, arises in most everyday matters; its importance has attracted many researchers to this field over the past few decades and has produced new ideas, concepts, and techniques.

> **Jen-Chih Yao, Shahram Rezapour**
> *Editors*

## *Article* **Informal Norm in Hyperspace and Its Topological Structure**

#### **Hsien-Chung Wu**

Department of Mathematics, National Kaohsiung Normal University, Kaohsiung 802, Taiwan; hcwu@nknucc.nknu.edu.tw

Received: 4 September 2019; Accepted: 8 October 2019; Published: 11 October 2019

**Abstract:** The hyperspace consists of all subsets of a vector space. Owing to the lack of additive inverse elements, the hyperspace cannot form a vector space. In this paper, we shall consider a so-called informal norm on the hyperspace, in which the axioms regarding the informal norm are almost the same as the axioms of the conventional norm. Under this consideration, we shall propose two different concepts of open balls. Based on the open balls, we shall also propose different types of open sets. In this case, the topologies generated by these different concepts of open sets are investigated.

**Keywords:** hyperspace; informal open sets; informal norms; null set; open balls

#### **1. Introduction**

The topic of set-valued analysis (or multivalued analysis) has been studied for an extensive period. Detailed discussions can be found in Aubin and Frankowska [1] and Hu and Papageorgiou [2,3]. Applications in nonlinear analysis are given by Agarwal and O'Regan [4], Burachik and Iusem [5], and Tarafdar and Chowdhury [6]. More specific applications to differential inclusions can be found in Aubin and Cellina [7]. On the other hand, fixed point theory for set-valued mappings is treated in Górniewicz [8], and set-valued optimization in Chen et al. [9], Khan et al. [10] and Hamel et al. [11]. Also, set optimization, which differs from set-valued optimization, is studied in Wu [12] and the references therein.

Let P(*X*) be the collection of all subsets of a vector space *X*. Set-valued analysis usually studies the mathematical structure of P(*X*), in which each element of P(*X*) is treated as a subset of *X*. In this paper, we shall treat each element of P(*X*) as a "point". In other words, each subset of *X* is compressed into a point, and the family P(*X*) is treated as a universal set. In this case, the original vector space *X* plays no role in the settings. Therefore, we want to endow a vector structure upon P(*X*). Although we can define the vector addition and scalar multiplication in P(*X*) in the usual way, owing to the lack of additive inverse elements, the family P(*X*) cannot form a vector space. In this paper, we shall endow a so-called informal norm upon P(*X*) even though P(*X*) is not a vector space. Then, the conventional techniques of functional analysis and topological vector spaces can be used by referring to the monographs [13–23]. The main purpose of this paper is to study the topological structure of the informal normed hyperspace P(*X*). Based on these topological structures, potential applications in nonlinear analysis, differential inclusion and set-valued optimization (or set optimization) are possible after suitable formulation.

Given a (conventional) vector space *X*, we denote by P(*X*) the collection of all subsets of *X*. For any *A*, *B* ∈ P(*X*), the set addition is defined by

$$A \oplus B = \{a + b : a \in A \text{ and } b \in B\}.$$

Given a scalar *λ* in R, the scalar multiplication in P(*X*) is defined by

$$\lambda A = \{ \lambda a : a \in A \}.$$

The subtraction between *A* and *B* is denoted and defined by

$$A \ominus B \equiv A \oplus (-B) = \{a - b : a \in A \text{ and } b \in B\}.$$
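These operations are concrete enough to compute with. The following minimal Python sketch (not part of the paper's formal development; the helper names are ours) realizes ⊕, scalar multiplication, and ⊖ for finite subsets of the real line:

```python
# Minkowski-style set operations on P(X), realized for finite subsets
# of the reals. Helper names are illustrative only.

def minkowski_add(A, B):
    """A ⊕ B = {a + b : a ∈ A and b ∈ B}."""
    return {a + b for a in A for b in B}

def scalar_mul(lam, A):
    """λA = {λa : a ∈ A}."""
    return {lam * a for a in A}

def minkowski_sub(A, B):
    """A ⊖ B = A ⊕ (−B) = {a − b : a ∈ A and b ∈ B}."""
    return minkowski_add(A, scalar_mul(-1, B))

A, B = {0, 1}, {2, 5}
print(minkowski_add(A, B))  # {2, 3, 5, 6}
print(minkowski_sub(A, B))  # {-5, -4, -2, -1}
```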

We denote by *θX* the zero element of *X*. Let *θ*P(*X*) = {*θX*} be a singleton set. We see that

$$A \oplus \theta_{\mathcal{P}(X)} = A \oplus \{\theta_X\} = A,$$

which says that {*θX*} is the zero element of P(*X*). It is clear to see that *A* ⊖ *A* ≠ {*θX*} in general, which says that *A* ⊖ *A* cannot be the zero element of P(*X*). That is to say, the additive inverse element of *A* in P(*X*) does not exist. Therefore, the hyperspace P(*X*) cannot form a vector space under the above set addition and scalar multiplication. Since *A* ⊖ *A* is not the zero element, we consider the null set of P(*X*) defined by

$$\Omega = \left\{ A \ominus A : A \in \mathcal{P}(X) \right\}, \tag{1}$$

which may be treated as a kind of "zero element" of P(*X*). It is clear to see that the null set Ω is closed under the addition ⊕.
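The null set Ω can be inspected in the same computational way. The sketch below (our own illustration; `add` and `sub` are ad hoc helpers) shows that A ⊖ A is not the singleton {0}, and that the sum of two elements of Ω is again of the form (A ⊕ B) ⊖ (A ⊕ B), so Ω is closed under ⊕:

```python
def add(A, B):
    """A ⊕ B for finite subsets of the reals."""
    return {a + b for a in A for b in B}

def sub(A, B):
    """A ⊖ B for finite subsets of the reals."""
    return {a - b for a in A for b in B}

A, B = {0, 1}, {2, 5}
omega1 = sub(A, A)  # {-1, 0, 1}, which is not {0}
omega2 = sub(B, B)  # {-3, 0, 3}
# ω1 ⊕ ω2 equals (A ⊕ B) ⊖ (A ⊕ B), again an element of Ω:
print(add(omega1, omega2) == sub(add(A, B), add(A, B)))  # True
```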

In this paper, we shall consider the so-called informal norm in P(*X*). The axioms of the informal norm will be almost the same as the axioms of the conventional norm. The only difference is that the null set will be involved in the axioms of the informal norm. In order to study the topological structure of (P(*X*), ‖·‖), we need to consider the open balls. Let us recall that if (*X*, ‖·‖) is a (conventional) normed space, then we see that

$$\{y : \| x - y \| < \epsilon\} = \{x + z : \| z \| < \epsilon\}$$

by taking *y* = *x* + *z*. However, for the space (P(*X*), ‖·‖) and *A*, *B*, *C* ∈ P(*X*), the following equality

$$\{B : \| A \ominus B \| < \epsilon\} = \{A \oplus C : \| C \| < \epsilon\}$$

does not hold. The reason is that, by taking *B* = *A* ⊕ *C*, we can just have

$$\| A \ominus B \| = \| A \ominus (A \oplus C) \| = \| \omega \ominus C \| \neq \| C \|,$$

where *ω* = *A* ⊖ *A* ∈ Ω. In this case, two types of open balls will be considered in (P(*X*), ‖·‖). Therefore, many types of open sets will also be considered. Based on the different types of openness, we shall study the topological structure of the informal normed hyperspace (P(*X*), ‖·‖).

In Section 2, many interesting properties in P(*X*) are presented in order to study the topology generated by the so-called informal norm. In Section 3, we introduce the concept of informal norms and provide many useful properties for further investigation. In Section 4, we provide the non-intuitive properties for the open balls. In Section 5, we propose many types of informal open sets based on the different types of open balls. Finally, in Section 6, we investigate the topologies generated by these different types of open sets.

#### **2. Hyperspaces**

Since the null set Ω defined in (1) can be treated as a kind of "zero element", we propose the almost identical concept for elements in P(*X*) as follows.

**Definition 1.** *For any A*, *B* ∈ P(*X*)*, the elements A and B are said to be almost identical if there exist ω*1, *ω*2 ∈ Ω *satisfying A* ⊕ *ω*1 = *B* ⊕ *ω*2*. In this case, we write A* =Ω *B.*

For *A* ⊖ *B* = *C*, we cannot have *A* = *B* ⊕ *C*. However, we can obtain *A* =Ω *B* ⊕ *C*. Indeed, let *B* ⊖ *B* ≡ *ω* ∈ Ω. Since *A* ⊖ *B* = *C*, by adding *B* on both sides, we have *A* ⊕ *ω* = *B* ⊕ *C*, which says that *A* =Ω *B* ⊕ *C*.
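The identity used above, namely that A ⊖ B = C forces A ⊕ (B ⊖ B) = B ⊕ C, can be checked on finite sets (a sketch of ours with ad hoc helpers):

```python
def add(A, B):
    """A ⊕ B = {a + b : a ∈ A and b ∈ B}."""
    return {a + b for a in A for b in B}

def sub(A, B):
    """A ⊖ B = {a - b : a ∈ A and b ∈ B}."""
    return {a - b for a in A for b in B}

A, B = {3, 7}, {1, 2}
C = sub(A, B)       # C = A ⊖ B
omega = sub(B, B)   # ω = B ⊖ B, an element of Ω
# A ⊕ ω = B ⊕ C, so A and B ⊕ C are almost identical:
print(add(A, omega) == add(B, C))  # True
```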

**Proposition 1.** *Given any A*, *B* ∈ P(*X*)*, we have the following properties.*

(i) *If A* ⊖ *B* ∈ Ω*, then A* =Ω *B.*
(ii) *If A* =Ω *B, then* (*A* ⊖ *B*) ⊕ *ω* ∈ Ω *for some ω* ∈ Ω*.*
**Proof.** To prove part (i), we first note that there exists *ω*<sup>1</sup> ∈ Ω such that

$$A \ominus B = A \oplus (-B) = \omega\_1.$$

By adding *B* on both sides, we obtain *A* ⊕ (−*B*) ⊕ *B* = *ω*1 ⊕ *B*. Therefore, we have *A* ⊕ *ω*2 = *ω*1 ⊕ *B*, where *ω*2 = *B* ⊖ *B* ∈ Ω.

To prove part (ii), since *A* =Ω *B*, there exist *ω*1, *ω*2 ∈ Ω such that *A* ⊕ *ω*2 = *ω*1 ⊕ *B*. By adding −*B* on both sides, we obtain (*A* ⊖ *B*) ⊕ *ω*2 = *ω*1 ⊕ *ω*3 ∈ Ω, where *ω*3 = *B* ⊖ *B* ∈ Ω. This completes the proof.

**Proposition 2.** *The following statements hold true.*

(i) *For any nonempty subset* A *of* P(*X*)*, we have* A ⊆ A ⊕ Ω*.*
(ii) *We have* Ω ⊕ Ω = Ω*; in particular,* (A ⊕ Ω) ⊕ Ω = A ⊕ Ω *for any nonempty subset* A *of* P(*X*)*.*
(iii) *Every ω* ∈ Ω *can be expressed as ω* = *ω*1 ⊕ *ω*2 *for some ω*1, *ω*2 ∈ Ω*.*
**Proof.** To prove part (i), since *<sup>θ</sup>*P(*X*) ≡ {*θX*} ∈ <sup>Ω</sup>, given any *<sup>A</sup>* ∈ A, we have

$$A = A \oplus \{\theta\_X\} = A \oplus \theta\_{\mathcal{P}(X)} \in \mathcal{A} \oplus \Omega.$$

To prove part (ii), given any *ω*1, *ω*2 ∈ Ω, we have *ω*1 = *A* ⊖ *A* and *ω*2 = *B* ⊖ *B* for some *A*, *B* ∈ P(*X*). Therefore we obtain

$$\omega_1 \oplus \omega_2 = (A \ominus A) \oplus (B \ominus B) = (A \oplus B) \ominus (A \oplus B) \in \Omega,$$

which says that Ω ⊕ Ω ⊆ Ω. Now, for any *ω* ∈ Ω, since *θ*P(*X*) ≡ {*θX*} ∈ Ω, we have

$$\omega = \omega \oplus \{\theta_X\} = \omega \oplus \theta_{\mathcal{P}(X)} \in \Omega \oplus \Omega,$$

which says that Ω ⊆ Ω ⊕ Ω. Therefore we obtain Ω ⊕ Ω = Ω. On the other hand, writing A¯ = A ⊕ Ω, we have

$$\bar{\mathcal{A}} \oplus \Omega = \mathcal{A} \oplus \Omega \oplus \Omega = \mathcal{A} \oplus \Omega = \bar{\mathcal{A}}.$$

To prove part (iii), given any *B* ⊆ *X*, we have *B* = *B*1 ⊕ *B*2 for some subsets *B*1 and *B*2 of *X*. For example, we can take *B*1 = {*b*} and *B*2 = *B* ⊖ {*b*} for some *b* ∈ *B*. Therefore we have

$$\omega = B \ominus B = (B_1 \oplus B_2) \ominus (B_1 \oplus B_2) = (B_1 \ominus B_1) \oplus (B_2 \ominus B_2) \equiv \omega_1 \oplus \omega_2.$$

This completes the proof.

The following interesting results will be used for discussing the topological structure of informal normed hyperspace.

**Proposition 3.** *Let* A<sup>1</sup> *and* A<sup>2</sup> *be subsets of* P(*X*)*. Then the following inclusion is satisfied:*

$$(\mathcal{A}_1 \cap \mathcal{A}_2) \oplus \Omega \subseteq (\mathcal{A}_1 \oplus \Omega) \cap (\mathcal{A}_2 \oplus \Omega).$$

*If we further assume that* A<sup>1</sup> ⊕ Ω ⊆ A<sup>1</sup> *and* A<sup>2</sup> ⊕ Ω ⊆ A2*, then the following equality is satisfied:*

$$(\mathcal{A}_1 \oplus \Omega) \cap (\mathcal{A}_2 \oplus \Omega) = (\mathcal{A}_1 \cap \mathcal{A}_2) \oplus \Omega.$$

**Proof.** For *B* ∈ (A<sup>1</sup> ∩ A2) ⊕ Ω, we have *B* = *A* ⊕ *ω* with *A* ∈ A*<sup>i</sup>* for *i* = 1, 2 and *ω* ∈ Ω, which also says that *B* ∈ [(A<sup>1</sup> ⊕ Ω) ∩ (A<sup>2</sup> ⊕ Ω)], i.e., (A<sup>1</sup> ∩ A2) ⊕ Ω ⊆ [(A<sup>1</sup> ⊕ Ω) ∩ (A<sup>2</sup> ⊕ Ω)]. Under the assumption, using part (i) of Proposition 2, we have

$$[(\mathcal{A}\_1 \oplus \Omega) \cap (\mathcal{A}\_2 \oplus \Omega)] \subseteq \mathcal{A}\_1 \cap \mathcal{A}\_2 \subseteq (\mathcal{A}\_1 \cap \mathcal{A}\_2) \oplus \Omega.$$

This completes the proof.

#### **3. Informal Norms**

Many kinds of informal norms on P(*X*) are proposed below.

**Definition 2.** *Consider the nonnegative real-valued function* ‖·‖: P(*X*) → R<sup>+</sup> *and the following conditions:*

(i) ‖*λA*‖ = |*λ*| ‖*A*‖ *for any A* ∈ P(*X*) *and λ* ∈ R*;*
(ii) ‖*A* ⊕ *B*‖ ≤ ‖*A*‖ + ‖*B*‖ *for any A*, *B* ∈ P(*X*)*;*
(iii) ‖*A*‖ = 0 *implies A* ∈ Ω*.*

*The informal norm* ‖·‖ *is said to satisfy the null condition when condition (iii) is replaced by* ‖*A*‖ = 0 *if and only if A* ∈ Ω*.*

*Different kinds of informal normed hyperspaces are defined below.*


*We further consider the following conditions:*

- ‖·‖ *is said to satisfy the null super-inequality when* ‖*A* ⊕ *ω*‖ ≥ ‖*A*‖ *for any A* ∈ P(*X*) *and ω* ∈ Ω*;*
- ‖·‖ *is said to satisfy the null sub-inequality when* ‖*A* ⊕ *ω*‖ ≤ ‖*A*‖ *for any A* ∈ P(*X*) *and ω* ∈ Ω*;*
- ‖·‖ *is said to satisfy the null equality when* ‖*A* ⊕ *ω*‖ = ‖*A*‖ *for any A* ∈ P(*X*) *and ω* ∈ Ω*.*


**Example 1.** *Let* (*X*, ·*X*) *be a (conventional) normed space. Given any element A* ∈ P(*X*)*, we define*

$$\| A \| = \sup_{a \in A} \| a \|_X.$$

*We are going to claim that* (P(*X*), ‖·‖) *is an informal normed hyperspace.*


$$\| \lambda A \| = \sup_{a \in \lambda A} \| a \|_X = \sup_{b \in A} \| \lambda b \|_X = |\lambda| \sup_{b \in A} \| b \|_X = |\lambda| \| A \|.$$

• *We want to prove the triangle inequality* ‖*A* ⊕ *B*‖ ≤ ‖*A*‖ + ‖*B*‖*. Let*

$$\zeta_1 = \sup_{\{(a,b) : a \in A, b \in B\}} \| a \|_X \text{ and } \zeta_2 = \sup_{\{(a,b) : a \in A, b \in B\}} \| b \|_X.$$

*It is clear to see that* ‖*a*‖*X* + ‖*b*‖*X* ≤ *ζ*1 + *ζ*2 *for all a* ∈ *A and b* ∈ *B, which implies*

$$\sup_{\{(a,b) : a \in A, b \in B\}} \left( \| a \|_X + \| b \|_X \right) \le \zeta_1 + \zeta_2 = \sup_{\{(a,b) : a \in A, b \in B\}} \| a \|_X + \sup_{\{(a,b) : a \in A, b \in B\}} \| b \|_X.$$

*Then, we obtain*

$$\begin{aligned} \| A \oplus B \| &= \sup_{c \in A \oplus B} \| c \|_X = \sup_{\{(a,b) : a \in A, b \in B\}} \| a + b \|_X \\ &\le \sup_{\{(a,b) : a \in A, b \in B\}} \left( \| a \|_X + \| b \|_X \right) \\ &\le \sup_{\{(a,b) : a \in A, b \in B\}} \| a \|_X + \sup_{\{(a,b) : a \in A, b \in B\}} \| b \|_X \\ &= \sup_{a \in A} \| a \|_X + \sup_{b \in B} \| b \|_X = \| A \| + \| B \|. \end{aligned}$$

*Therefore, we conclude that* (P(*X*), ‖·‖) *is indeed an informal normed hyperspace. Given any ω* ∈ Ω*, there exists B* ∈ P(*X*) *satisfying ω* = *B* ⊖ *B. Therefore, we obtain*

$$\| \omega \| = \| B \ominus B \| = \sup_{\{(b_1, b_2) : b_1, b_2 \in B\}} \| b_1 - b_2 \|_X.$$

*Since* ‖*ω*‖ *is not equal to zero in general, the null condition is not satisfied.*
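Example 1 can be checked numerically for finite subsets of the reals with ‖a‖_X = |a| (our own sketch; the helper names are not from the paper). In particular, ‖B ⊖ B‖ is nonzero whenever B contains more than one point, which is exactly why the null condition fails:

```python
def norm(A):
    """Informal norm of Example 1: sup of |a| over a ∈ A."""
    return max(abs(a) for a in A)

def add(A, B):
    """A ⊕ B for finite subsets of the reals."""
    return {a + b for a in A for b in B}

def sub(A, B):
    """A ⊖ B for finite subsets of the reals."""
    return {a - b for a in A for b in B}

A, B = {-2, 3}, {1, 4}
omega = sub(B, B)                # an element of the null set Ω
print(norm(omega))               # 3, not 0: the null condition fails
assert norm({2 * a for a in A}) == 2 * norm(A)  # ‖λA‖ = |λ|‖A‖
assert norm(add(A, B)) <= norm(A) + norm(B)     # triangle inequality
```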

**Proposition 4.** *Let* (P(*X*), ‖·‖) *be an informal pseudo-seminormed hyperspace. Suppose that the informal norm* ‖·‖ *satisfies the null super-inequality. For any A*, *C*, *B*1, ··· , *Bm* ∈ P(*X*)*, we have*

$$\| A \ominus C \| \le \| A \ominus B_1 \| + \| B_1 \ominus B_2 \| + \cdots + \| B_j \ominus B_{j+1} \| + \cdots + \| B_m \ominus C \|.$$

**Proof.** We have

$$\begin{aligned} \| A \ominus C \| &\le \| A \oplus (-C) \oplus B_1 \oplus \cdots \oplus B_m \oplus (-B_1) \oplus \cdots \oplus (-B_m) \| \quad \text{(using the null super-inequality } m \text{ times)} \\ &= \| [A \oplus (-B_1)] \oplus [B_1 \oplus (-B_2)] \oplus \cdots \oplus [B_j \oplus (-B_{j+1})] \oplus \cdots \oplus [B_m \oplus (-C)] \| \\ &\le \| A \ominus B_1 \| + \| B_1 \ominus B_2 \| + \cdots + \| B_j \ominus B_{j+1} \| + \cdots + \| B_m \ominus C \| \quad \text{(using the triangle inequality)}. \end{aligned}$$

This completes the proof.

#### **4. Open Balls**

If (*X*, ‖·‖) is a (conventional) seminormed space, then we see that

$$\{y : \| x - y \| < \epsilon\} = \{x + z : \| z \| < \epsilon\}$$

by taking *y* = *x* + *z*. Let (P(*X*), ‖·‖) be an informal seminormed hyperspace. Then the following equality

$$\{B : \| A \ominus B \| < \epsilon\} = \{A \oplus C : \| C \| < \epsilon\}$$

does not hold. The reason is that, by taking *B* = *A* ⊕ *C*, we can just have

$$\| A \ominus B \| = \| A \ominus (A \oplus C) \| = \| (-C) \oplus \omega \| \neq \| C \|,$$

where *ω* = *A* ⊖ *A* ∈ Ω. Therefore we can define two types of open balls.

**Definition 3.** *Let* (P(*X*), ‖·‖) *be an informal pseudo-seminormed hyperspace. Two types of open balls with radius ε are defined by*

$$\mathcal{B}^{\diamond}(A; \epsilon) = \{ A \oplus C : \| C \| < \epsilon \}$$

*and*

$$\mathcal{B}(A; \epsilon) = \{ B : \| A \ominus B \| = \| B \ominus A \| < \epsilon \}.$$

**Example 2.** *Continued from Example 1, for any A* ∈ P(*X*)*, we define*

$$\| A \| = \sup_{a \in A} \| a \|_X.$$

*The open balls* B(*A*; *ε*) *and* B⋄(*A*; *ε*) *with radius ε are given by*

$$\mathcal{B}(A; \epsilon) = \left\{ B \in \mathcal{P}(X) : \| A \ominus B \| < \epsilon \right\} = \left\{ B \in \mathcal{P}(X) : \sup_{a \in A \ominus B} \| a \|_X < \epsilon \right\}$$

*and*

$$\mathcal{B}^{\diamond}(A; \epsilon) = \{ A \oplus C \in \mathcal{P}(X) : \| C \| < \epsilon \} = \left\{ A \oplus C \in \mathcal{P}(X) : \sup_{c \in C} \| c \|_X < \epsilon \right\}.$$
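A concrete computation (ours, continuing the finite-set sketch) confirms the point made before Definition 3: B = A ⊕ C with small ‖C‖ always lies in B⋄(A; ε), yet ‖A ⊖ B‖ = ‖ω ⊖ C‖ can be much larger than ‖C‖, so B need not lie in B(A; ε):

```python
def norm(A):
    """Informal norm of Examples 1 and 2: sup of |a| over a ∈ A."""
    return max(abs(a) for a in A)

def add(A, B):
    """A ⊕ B for finite subsets of the reals."""
    return {a + b for a in A for b in B}

def sub(A, B):
    """A ⊖ B for finite subsets of the reals."""
    return {a - b for a in A for b in B}

A = {0, 10}
C = {1}                 # ‖C‖ = 1, so B = A ⊕ C belongs to B⋄(A; 2)
B = add(A, C)           # {1, 11}
print(norm(sub(A, B)))  # 11: ‖A ⊖ B‖ = ‖ω ⊖ C‖ with ω = A ⊖ A = {-10, 0, 10}
# Hence B is in B⋄(A; 2) but not in B(A; 2).
```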

**Remark 1.** *Let* (P(*X*), ‖·‖) *be an informal pseudo-seminormed hyperspace. Then we have the following observations.*


**Proposition 5.** *Let* (P(*X*), ‖·‖) *be an informal pseudo-seminormed hyperspace.*

(i) *Given A* ∈ P(*X*)*, let ωA* = *A* ⊖ *A. Then* B(*A*; *ε*) ⊕ *ωA* ⊆ B⋄(*A*; *ε*)*.*
(ii) *If* ‖·‖ *satisfies the null sub-inequality, then* B⋄(*A*; *ε*) ⊆ B(*A*; *ε*)*.*
(iii) *If* ‖·‖ *satisfies the null sub-inequality, then* B(*A*; *ε*) ⊕ *ωA* ⊆ B⋄(*A*; *ε*) ⊆ B(*A*; *ε*)*.*
**Proof.** To prove part (i), for any *B* ∈ B(*A*; *ε*), i.e., ‖*B* ⊖ *A*‖ < *ε*, if we take *C* = *B* ⊖ *A*, then ‖*C*‖ < *ε* and *B* ⊕ *ωA* = *A* ⊕ *C*. This shows the inclusion

$$\mathcal{B}(A; \epsilon) \oplus \omega_A \subseteq \{ A \oplus C : \| C \| < \epsilon \} = \mathcal{B}^{\diamond}(A; \epsilon).$$

To prove part (ii), for *C* ∈ P(*X*) with ‖*C*‖ < *ε*, since ‖·‖ satisfies the null sub-inequality, it follows that

$$\| (A \oplus C) \ominus A \| = \| \omega_A \oplus C \| \le \| C \| < \epsilon,$$

which says that *A* ⊕ *C* ∈ B(*A*; *ε*) and shows the inclusion

$$\mathcal{B}^{\diamond}(A; \epsilon) = \{ A \oplus C : \| C \| < \epsilon \} \subseteq \mathcal{B}(A; \epsilon).$$

Part (iii) follows from parts (i) and (ii) immediately. This completes the proof.

**Proposition 6.** *Let* (P(*X*), ‖·‖) *be an informal pseudo-seminormed hyperspace.*

(i) *Suppose that* ‖·‖ *satisfies the null super-inequality. Then* B(*A* ⊕ *ω*; *ε*) ⊆ B(*A*; *ε*) *for any ω* ∈ Ω*.*
(ii) *Suppose that* ‖·‖ *satisfies the null sub-inequality. Then* B(*A*; *ε*) ⊆ B(*A* ⊕ *ω*; *ε*) *and* B⋄(*A* ⊕ *ω*; *ε*) ⊆ B⋄(*A*; *ε*) *for any ω* ∈ Ω*.*

(iii) *If* ‖·‖ *satisfies the null equality, then* B(*A* ⊕ *ω*; *ε*) = B(*A*; *ε*) *for any ω* ∈ Ω*.*

**Proof.** To prove part (i), the inclusion B(*A* ⊕ *ω*; *ε*) ⊆ B(*A*; *ε*) follows from the following expressions

$$\epsilon > \| (A \oplus \omega) \ominus B \| = \| (A \ominus B) \oplus \omega \| \ge \| A \ominus B \|$$

and

$$\epsilon > \| B \ominus (A \oplus \omega) \| = \| (B \ominus A) \oplus \omega \| \ge \| B \ominus A \|.$$

To prove the first case of part (ii), the inclusion B(*A*; *ε*) ⊆ B(*A* ⊕ *ω*; *ε*) follows from the following expression

$$\epsilon > \| A \ominus B \| \ge \| (A \ominus B) \oplus \omega \| = \| (A \oplus \omega) \ominus B \|.$$

To prove the second case of part (ii), for *B* = *A* ⊕ *ω* ⊕ *C* ∈ B⋄(*A* ⊕ *ω*; *ε*) with ‖*C*‖ < *ε*, let *C*¯ = *ω* ⊕ *C*. Then, using the null sub-inequality, we have

$$\| \bar{C} \| = \| \omega \oplus C \| \le \| C \| < \epsilon, \tag{2}$$

which says that *B* = *A* ⊕ *C*¯ ∈ B⋄(*A*; *ε*). Therefore we obtain the inclusion B⋄(*A* ⊕ *ω*; *ε*) ⊆ B⋄(*A*; *ε*). Part (iii) follows from parts (i) and (ii) immediately. This completes the proof.

In the (conventional) normed space (*X*, ‖·‖), we have the equality

$$\mathcal{B}(x; \epsilon) \oplus \hat{x} = \mathcal{B}(x \oplus \hat{x}; \epsilon). \tag{3}$$

However, in the informal normed hyperspace (P(*X*), ‖·‖), the intuitive observation (3) will not hold true in general. The following proposition presents the exact relationship.

**Proposition 7.** *Let* (P(*X*), ‖·‖) *be an informal pseudo-seminormed hyperspace.*

(i) *For any A*, *Â* ∈ P(*X*)*, we have* B⋄(*A*; *ε*) ⊕ *Â* = B⋄(*A* ⊕ *Â*; *ε*)*.*
(ii) *Suppose that* ‖·‖ *satisfies the null sub-inequality. For any A*, *Â* ∈ P(*X*)*, we have* B(*A*; *ε*) ⊕ *Â* ⊆ B(*A* ⊕ *Â*; *ε*)*; if* ‖·‖ *also satisfies the null equality, then* B(*A*; *ε*) ⊕ *ω* ⊆ B(*A*; *ε*) *for any ω* ∈ Ω *and* B(*ω*; *ε*) ⊕ *Â* ⊆ B(*Â*; *ε*)*.*
(iii) *Suppose that* ‖·‖ *satisfies the null sub-inequality. For any Â* ∈ B(*A*; *ε*) *with ωA* = *A* ⊖ *A, we have Â* ⊕ *ωA* ∈ *A* ⊕ B(*ωA*; *ε*)*.*
(iv) *For any Â* ∈ P(*X*) *with ωÂ* = *Â* ⊖ *Â, we have the inclusion*

$$\mathcal{B}(A \oplus \hat{A}; \epsilon) \oplus \omega_{\hat{A}} \subseteq \mathcal{B}(A; \epsilon) \oplus \hat{A}.$$

**Proof.** Part (i) follows from the following equality

$$(A \oplus C) \oplus \hat{A} = (A \oplus \hat{A}) \oplus C \text{ for } \| C \| < \epsilon.$$

To prove part (ii), for *B* ∈ B(*A*; *ε*) ⊕ *Â*, we have *B* = *B̂* ⊕ *Â* with ‖*A* ⊖ *B̂*‖ < *ε*. Then, by the null sub-inequality, we can obtain

$$\| (A \oplus \hat{A}) \ominus B \| = \| (A \oplus \hat{A}) \ominus (\hat{B} \oplus \hat{A}) \| = \| (A \ominus \hat{B}) \oplus (\hat{A} \ominus \hat{A}) \| \le \| A \ominus \hat{B} \| < \epsilon,$$

which says that *B* ∈ B(*A* ⊕ *Â*; *ε*). Therefore we obtain the inclusion B(*A*; *ε*) ⊕ *Â* ⊆ B(*A* ⊕ *Â*; *ε*). Now we take *Â* = *ω* ∈ Ω. By part (iii) of Proposition 6, we have

$$\mathcal{B}(A;\epsilon) \oplus \omega \subseteq \mathcal{B}(A \oplus \omega; \epsilon) = \mathcal{B}(A;\epsilon).$$

Similarly, if we take *A* = *ω*, then we have

$$\mathcal{B}(\omega; \epsilon) \oplus \hat{A} \subseteq \mathcal{B}(\omega \oplus \hat{A}; \epsilon) = \mathcal{B}(\hat{A}; \epsilon).$$

To prove part (iii), for *Â* ∈ B(*A*; *ε*), we have *Â* ⊕ *ωA* = *A* ⊕ (*Â* ⊖ *A*). The null sub-inequality gives

$$\| \omega_A \oplus (\hat{A} \ominus A) \| \le \| \hat{A} \ominus A \| < \epsilon,$$

which says that *Â* ⊖ *A* ∈ B(*ωA*; *ε*), i.e.,

$$\hat{A} \oplus \omega_A = A \oplus (\hat{A} \ominus A) \in A \oplus \mathcal{B}(\omega_A; \epsilon).$$

To prove part (iv), for *B* ∈ B(*A* ⊕ *Â*; *ε*), we have ‖*B* ⊖ (*A* ⊕ *Â*)‖ < *ε*. We also have

$$\epsilon > \| B \ominus (A \oplus \hat{A}) \| = \| (B \ominus \hat{A}) \ominus A \|.$$

This shows that *B* ⊖ *Â* ∈ B(*A*; *ε*). Let *ωÂ* = *Â* ⊖ *Â* ∈ Ω. Since *B* ⊕ *ωÂ* = (*B* ⊖ *Â*) ⊕ *Â*, it says that *B* ⊕ *ωÂ* ∈ B(*A*; *ε*) ⊕ *Â*. In other words, we have the inclusion

$$\mathcal{B}(A \oplus \hat{A}; \epsilon) \oplus \omega_{\hat{A}} \subseteq \mathcal{B}(A; \epsilon) \oplus \hat{A}.$$

This completes the proof.

**Proposition 8.** *Let* (P(*X*), ‖·‖) *be an informal pseudo-seminormed hyperspace.*

(i) *Suppose that* ‖·‖ *satisfies the null super-inequality. For any ω* ∈ Ω*, if A* ⊕ *ω* ∈ B(*A*0; *ε*)*, then A* ∈ B(*A*0; *ε*)*. Suppose that* ‖·‖ *satisfies the null sub-inequality. For any ω* ∈ Ω*, if A* ∈ B(*A*0; *ε*)*, then A* ⊕ *ω* ∈ B(*A*0; *ε*)*, and if A* ∈ B⋄(*A*0; *ε*)*, then A* ⊕ *ω* ∈ B⋄(*A*0; *ε*)*. Suppose that* ‖·‖ *satisfies the null equality. Then, for any ω* ∈ Ω*, A* ⊕ *ω* ∈ B(*A*0; *ε*) *if and only if A* ∈ B(*A*0; *ε*)*.*

(ii) *We have the inclusions*

$$\mathcal{B}(A; \epsilon) \subseteq \mathcal{B}(A; \epsilon) \oplus \Omega \text{ and } \mathcal{B}^{\diamond}(A; \epsilon) \subseteq \mathcal{B}^{\diamond}(A; \epsilon) \oplus \Omega.$$

*If we further assume that* ‖·‖ *satisfies the null sub-inequality, then*

$$\mathcal{B}(A; \epsilon) \oplus \Omega = \mathcal{B}(A; \epsilon) \text{ and } \mathcal{B}^{\diamond}(A; \epsilon) \oplus \Omega = \mathcal{B}^{\diamond}(A; \epsilon).$$

(iii) *Suppose that* ‖·‖ *satisfies the null condition. Given a fixed ω* ∈ Ω*, we have*

$$\Omega \oplus \omega \subseteq \mathcal{B}^{\diamond}(\omega; \epsilon) \text{ and } \Omega \subseteq \mathcal{B}(\omega; \epsilon).$$

(iv) *For any ω* ∈ Ω *and α* ∈ R *with α* ≠ 0*, we have α*B(*ω*; *ε*) ⊆ B(*αω*; |*α*|*ε*)*.*

(v) *For any ω* ∈ Ω *and α* ∈ R *with α* ≠ 0*, we have*

$$\alpha \mathcal{B}^{\diamond}(\omega; \epsilon) \subseteq \mathcal{B}^{\diamond}(\alpha\omega; |\alpha|\epsilon) \text{ and } \mathcal{B}^{\diamond}(\alpha\omega; |\alpha|\epsilon) \subseteq \alpha \mathcal{B}^{\diamond}(\omega; \epsilon).$$

**Proof.** The first case of part (i) follows from the following expression

$$\| A \ominus A_0 \| \le \| (A \oplus \omega) \ominus A_0 \| < \epsilon.$$

The second case of part (i) regarding the open ball B(*A*0; *ε*) follows from the following expression

$$\| (A \oplus \omega) \ominus A_0 \| \le \| A \ominus A_0 \| < \epsilon. \tag{4}$$

For the open ball B⋄(*A*0; *ε*), if *A* ∈ B⋄(*A*0; *ε*), then *A* = *A*0 ⊕ *C* with ‖*C*‖ < *ε*. Given an *ω* ∈ Ω, let *C*¯ = *C* ⊕ *ω*. Therefore we have *A* ⊕ *ω* = *A*0 ⊕ *C*¯, where

$$\| \bar{C} \| = \| C \oplus \omega \| \le \| C \| < \epsilon, \tag{5}$$

which says that *A* ⊕ *ω* ∈ B⋄(*A*0; *ε*). The third case of part (i) follows from the previous two cases.

To prove part (ii), since *θ*P(*X*) ∈ Ω is the zero element of P(*X*), it follows that *B* = *B* ⊕ *θ*P(*X*). Therefore we have B(*A*; *ε*) ⊆ B(*A*; *ε*) ⊕ Ω and B⋄(*A*; *ε*) ⊆ B⋄(*A*; *ε*) ⊕ Ω. On the other hand, for *A* ∈ B(*A*0; *ε*) and *ω* ∈ Ω, from (4), we see that *A* ⊕ *ω* ∈ B(*A*0; *ε*), which shows the inclusion B(*A*0; *ε*) ⊕ Ω ⊆ B(*A*0; *ε*). Also, for *B* = *A* ⊕ *C* ∈ B⋄(*A*; *ε*) with ‖*C*‖ < *ε*, let *C*¯ = *ω* ⊕ *C*. By (5), we have *B* ⊕ *ω* = *A* ⊕ *C*¯ ∈ B⋄(*A*; *ε*), which shows the inclusion B⋄(*A*; *ε*) ⊕ Ω ⊆ B⋄(*A*; *ε*). This proves part (ii).

To prove part (iii), for any *ω*′ ∈ Ω, we have ‖*ω*′‖ = 0 by the null condition, which says that *ω*′ ⊕ *ω* ∈ B⋄(*ω*; *ε*). Therefore we obtain the inclusion Ω ⊕ *ω* ⊆ B⋄(*ω*; *ε*). On the other hand, we also have

$$\| \omega' \ominus \omega \| = \| \omega' \oplus (-\omega) \| \le \| \omega' \| + \| -\omega \| = \| \omega' \| + \| \omega \| = 0,$$

which shows that *ω*′ ∈ B(*ω*; *ε*), i.e., Ω ⊆ B(*ω*; *ε*).

To prove part (iv), for *A* ∈ B(*ω*; *ε*), since *αω* ∈ Ω, we have

$$\| \alpha\omega \ominus \alpha A \| = \| \alpha(\omega \ominus A) \| = |\alpha| \| \omega \ominus A \| < |\alpha| \epsilon,$$

i.e., *αA* ∈ B(*αω*; |*α*|*ε*). This shows the inclusion *α*B(*ω*; *ε*) ⊆ B(*αω*; |*α*|*ε*).

To prove the first inclusion of part (v), for *A* ∈ B⋄(*ω*; *ε*), we have *A* = *ω* ⊕ *C* with ‖*C*‖ < *ε*. It follows that *αA* = *αω* ⊕ *αC*. Let *C*¯ = *αC*. Then ‖*C*¯‖ < |*α*|*ε*, which shows the inclusion *α*B⋄(*ω*; *ε*) ⊆ B⋄(*αω*; |*α*|*ε*). To prove the second inclusion of part (v), for *A* ∈ B⋄(*αω*; |*α*|*ε*), we have *A* = *αω* ⊕ *C* with ‖*C*‖ < |*α*|*ε*. Let *C*¯ = *C*/*α*. Then

$$A = \alpha\omega \oplus C = \alpha\omega \oplus \alpha(C/\alpha) = \alpha\omega \oplus \alpha\bar{C} = \alpha(\omega \oplus \bar{C}) \text{ with } \| \bar{C} \| < \epsilon,$$

which says that *A* ∈ *α*B⋄(*ω*; *ε*). This completes the proof.

#### **5. Informal Open Sets**

Let (P(*X*), ‖·‖) be an informal pseudo-seminormed hyperspace. We are going to consider the open subsets of P(*X*).

**Definition 4.** *Let* (P(*X*), ‖·‖) *be an informal pseudo-seminormed hyperspace, and let* A *be a nonempty subset of* P(*X*)*.*


*The different types of informal* ⋄*-interior points based on the open ball* B⋄(*A*0; *ε*) *can be similarly defined. For example, int*⋄*(III)*(A) *denotes the informal* ⋄*-type-III-interior of* A*.*

**Remark 2.** *Recall that we cannot have the property A* ∈ B(*A*; *ε*) *in general by Remark 1, unless* ‖·‖ *satisfies the null condition. Given any A* ∈ P(*X*) *with* ‖*A* ⊖ *A*‖ ≠ 0*, it follows that A* ∉ B(*A*; *ε*∗) *for ε*∗ < ‖*A* ⊖ *A*‖*. Now, given ε* < *ε*∗*, it is clear that* B(*A*; *ε*) ⊆ B(*A*; *ε*∗)*. Let us take* A = B(*A*; *ε*∗)*. It means that the open ball* B(*A*; *ε*) *is contained in* A *even though the center A is not in* A*.*

**Remark 3.** *From Remark 2, it can happen that there exists an open ball such that* B(*A*; *ε*) *is contained in* A *even though the center A is not in* A*. In this situation, we will not say that A is an informal interior point, since A is not in* A*. Also, the sets* B(*A*; *ε*) ⊕ Ω *and* B⋆(*A*; *ε*) ⊕ Ω *will not necessarily contain the center A. In other words, it can happen that there exists an open ball such that* B(*A*; *ε*) ⊕ Ω *is contained in* A *even though the center A is not in* A*. In this situation, we will not say that A is an informal type-I-interior point, since A is not in* A*. We also have the following observations.*


According to Remark 3, we can define the different concepts of informal pseudo-interior point.

**Definition 5.** *Let* (P(*X*), ‖ · ‖) *be an informal pseudo-seminormed hyperspace, and let* A *be a nonempty subset of* P(*X*)*.*


*The different types of informal ⋆-pseudo-interior point based on the open ball* B⋆(*A*0; *ε*) *can be similarly defined.*

**Remark 4.** *We have to remark that the difference between Definitions 4 and 5 is that we consider A*<sup>0</sup> ∈ A *in Definition 4, and consider A*<sup>0</sup> ∈ P(*X*) *in Definition 5. From Remark 2, if ε*∗ < ‖*A* ⊖ *A*‖*, then A is a pseudo-interior point of* B(*A*; *ε*∗)*. We also have the following observations.*

	- **–** *Suppose that* ‖ · ‖ *satisfies the null condition. Then these concepts of informal interior point and informal pseudo-interior point are equivalent, since A*<sup>0</sup> *is in the open ball* B(*A*0; *ε*)*.*
	- **–** *Suppose that* ‖*θX*‖ = 0*. Then these concepts of informal ⋆-type of interior point and informal ⋆-type of pseudo-interior point are equivalent, since A*<sup>0</sup> *is in the open ball* B⋆(*A*0; *ε*)*.*

**Remark 5.** *From part (ii) of Proposition 8, if* ‖ · ‖ *satisfies the null sub-inequality, then these concepts of informal interior point and informal type-I-interior point are equivalent, and these concepts of informal type-II-interior point and informal type-III-interior point are equivalent. The same situation also applies to the cases of informal pseudo-interior points. We also remark that if* ‖ · ‖ *satisfies the null condition, then* ‖ · ‖ *satisfies the null sub-inequality, since we have* ‖*A* ⊕ *ω*‖ ≤ ‖*A*‖ + ‖*ω*‖ = ‖*A*‖ *for any ω* ∈ Ω*.*

**Remark 6.** *Suppose that* ‖ · ‖ *satisfies the null sub-inequality. From part (ii) of Proposition 5, we see that if A*<sup>0</sup> *is an informal interior (respectively type-I-interior, type-II-interior, type-III-interior) point, then it is also an informal ⋆-interior (resp. ⋆-type-I-interior, ⋆-type-II-interior, ⋆-type-III-interior) point. In other words, from Remark 5, we have*

$$\operatorname{int}(\mathcal{A}) = \operatorname{int}^{(I)}(\mathcal{A}) \subseteq \operatorname{int}^{\star}(\mathcal{A}) = \operatorname{int}^{\star(I)}(\mathcal{A})$$

*and*

$$\operatorname{int}^{(II)}(\mathcal{A}) = \operatorname{int}^{(III)}(\mathcal{A}) \subseteq \operatorname{int}^{\star(II)}(\mathcal{A}) = \operatorname{int}^{\star(III)}(\mathcal{A}).$$

*Regarding the different concepts of pseudo-interior point, we also have*

$$\operatorname{pint}(\mathcal{A}) = \operatorname{pint}^{(I)}(\mathcal{A}) \subseteq \operatorname{pint}^{\star}(\mathcal{A}) = \operatorname{pint}^{\star(I)}(\mathcal{A})$$

*and*

$$\operatorname{pint}^{(II)}(\mathcal{A}) = \operatorname{pint}^{(III)}(\mathcal{A}) \subseteq \operatorname{pint}^{\star(II)}(\mathcal{A}) = \operatorname{pint}^{\star(III)}(\mathcal{A}).$$

**Remark 7.** *Let* (P(*X*), ‖ · ‖) *be an informal pseudo-seminormed hyperspace.*

• *Suppose that the center A*<sup>0</sup> *is in the open ball* B(*A*0; *ε*)*. Then the concepts of informal interior point and informal pseudo-interior point are equivalent. It follows that pint*(A) = *int*(A) ⊆ A*. Similarly, if the center A*<sup>0</sup> *is in the open ball* B⋆(*A*0; *ε*)*, then pint*⋆(A) = *int*⋆(A) ⊆ A*.*

• *From part (ii) of Proposition 8, we have* B(*A*; *ε*) ⊆ B(*A*; *ε*) ⊕ Ω *and* B⋆(*A*; *ε*) ⊆ B⋆(*A*; *ε*) ⊕ Ω*. Suppose that the center A*<sup>0</sup> *is in the open ball* B(*A*0; *ε*)*. Let A*<sup>0</sup> *be an informal type-I-pseudo-interior point of* A*. Since*

$$A_0 \in \mathcal{B}(A_0; \varepsilon) \subseteq \mathcal{B}(A_0; \varepsilon) \oplus \Omega \subseteq \mathcal{A},$$

*using Remark 4, we obtain*

$$\operatorname{pint}^{(I)}(\mathcal{A}) \subseteq \operatorname{int}(\mathcal{A}) \subseteq \mathcal{A} \ \text{ and } \ \operatorname{pint}^{(I)}(\mathcal{A}) \subseteq \operatorname{int}^{(I)}(\mathcal{A}) \subseteq \operatorname{pint}^{(I)}(\mathcal{A}),$$

*which also implies pint(I)*(A) = *int(I)*(A)*. Similarly, if the center A*<sup>0</sup> *is in the open ball* B⋆(*A*0; *ε*)*, then pint*⋆(I)(A) = *int*⋆(I)(A)*.*

• *Suppose that* A ⊕ Ω ⊆ A*. We have the following observations. Assume that the center A*<sup>0</sup> *is in the open ball* B(*A*0; *ε*)*. Let A*<sup>0</sup> *be an informal type-II-pseudo-interior point of* A*. Since*

$$A_0 \in \mathcal{B}(A_0; \varepsilon) \subseteq \mathcal{A} \oplus \Omega \subseteq \mathcal{A},$$

*we obtain*

$$\operatorname{pint}^{(II)}(\mathcal{A}) \subseteq \operatorname{int}(\mathcal{A}) \subseteq \mathcal{A} \ \text{ and } \ \operatorname{pint}^{(II)}(\mathcal{A}) \subseteq \operatorname{int}^{(II)}(\mathcal{A}) \subseteq \operatorname{pint}^{(II)}(\mathcal{A}),$$

*which also implies pint(II)*(A) = *int(II)*(A)*. Similarly, if the center A*<sup>0</sup> *is in the open ball* B⋆(*A*0; *ε*)*, then pint*⋆(II)(A) = *int*⋆(II)(A)*.*

• *Suppose that* A ⊕ Ω ⊆ A*. We have the following observations. From part (ii) of Proposition 8, we have* B(*A*; *ε*) ⊆ B(*A*; *ε*) ⊕ Ω *and* B⋆(*A*; *ε*) ⊆ B⋆(*A*; *ε*) ⊕ Ω*. Assume that the center A*<sup>0</sup> *is in the open ball* B(*A*0; *ε*)*. Let A*<sup>0</sup> *be an informal type-III-pseudo-interior point of* A*. Since*

$$A_0 \in \mathcal{B}(A_0; \varepsilon) \subseteq \mathcal{B}(A_0; \varepsilon) \oplus \Omega \subseteq \mathcal{A} \oplus \Omega \subseteq \mathcal{A},$$

*we obtain*

$$\operatorname{pint}^{(III)}(\mathcal{A}) \subseteq \operatorname{int}(\mathcal{A}) \subseteq \mathcal{A} \ \text{ and } \ \operatorname{pint}^{(III)}(\mathcal{A}) \subseteq \operatorname{int}^{(III)}(\mathcal{A}) \subseteq \operatorname{pint}^{(III)}(\mathcal{A}),$$

*which also implies pint(III)*(A) = *int(III)*(A)*. Similarly, if the center A*<sup>0</sup> *is in the open ball* B⋆(*A*0; *ε*)*, then pint*⋆(III)(A) = *int*⋆(III)(A)*.*

**Definition 6.** *Let* (P(*X*), ‖ · ‖) *be an informal pseudo-seminormed hyperspace, and let* A *be a nonempty subset of* P(*X*)*. The set* A *is said to be informally open if* A = *int*(A)*. The set* A *is said to be informally type-I-open if* A = *int(I)*(A)*. The set* A *is said to be informally type-II-open if* A = *int(II)*(A)*. The set* A *is said to be informally type-III-open if* A = *int(III)*(A)*. We can similarly define the informal ⋆-open set based on the informal ⋆-interior. Also, the informal pseudo-openness can be similarly defined.*

We adopt the convention ∅ ⊕ Ω = ∅.

**Remark 8.** *Let* (P(*X*), ‖ · ‖) *be an informal pseudo-seminormed hyperspace, and let* A *be a nonempty subset of* P(*X*)*. We consider the extreme cases of the empty set* ∅ *and the whole set* P(*X*)*.*


• *Since* ∅ ⊕ Ω = ∅*, the empty set* ∅ *is informally type-II-open and type-II-pseudo-open. Now, for any A* ∈ P(*X*) *and any open ball* B*, we have A* ∈ B ⊆ P(*X*) ⊆ P(*X*) ⊕ Ω *by part (i) of Proposition 2, i.e.,* P(*X*) ⊆ *int(II)*(P(*X*)) *and* P(*X*) ⊆ *pint(II)*(P(*X*))*. This shows that* P(*X*) *is informally type-II-open and type-II-pseudo-open.*

• *Since* ∅ ⊕ Ω = ∅*, it means that* ∅ *is informally type-III-open and type-III-pseudo-open. Now, for any A* ∈ P(*X*) *and any open ball* B*, we have A* ∈ B ⊆ P(*X*)*, which says that* B ⊕ Ω ⊆ P(*X*) ⊕ Ω*, i.e.,* P(*X*) ⊆ *int(III)*(P(*X*)) *and* P(*X*) ⊆ *pint(III)*(P(*X*))*. This shows that* P(*X*) *is informally type-III-open and type-III-pseudo-open.*

*We have similar results for the different types of informal ⋆-open sets.*

**Proposition 9.** *Let* (P(*X*), ‖ · ‖) *be an informal pseudo-seminormed hyperspace, and let* A *be a nonempty subset of* P(*X*)*.*


**Proof.** If *A* is an informal pseudo-interior point of A, i.e., *A* ∈ pint(A) = A, then there exists *ε* > 0 such that B(*A*; *ε*) ⊆ A. Since *A* ∈ A, it follows that *A* is also an informal interior point, i.e., pint(A) ⊆ int(A). From the first observation of Remark 4, we obtain the desired result. The remaining cases can be similarly realized, and the proof is complete.

**Proposition 10.** *Let* (P(*X*), ‖ · ‖) *be an informal pseudo-seminormed hyperspace.*

	- *If* A *is any type of informally pseudo-open, then A* ∈ A *implies A* ⊕ *ω* ∈ A *for any ω* ∈ Ω*.*
	- *If* A *is informally open, then A* ∈ A *implies A* ⊕ *ω* ∈ pint(A) *for any ω* ∈ Ω*.*
	- *If* A *is informally type-I-open, then A* ∈ A *implies A* ⊕ *ω* ∈ pint(I)(A) *for any ω* ∈ Ω*.*
	- *If* A *is informally type-II-open, then A* ∈ A *implies A* ⊕ *ω* ∈ pint(II)(A) *for any ω* ∈ Ω*.*
	- *If* A *is informally type-III-open, then A* ∈ A *implies A* ⊕ *ω* ∈ pint(III)(A) *for any ω* ∈ Ω*.*
	- *A* ⊕ *ω* ∈ A *implies A* ∈ A *for any ω* ∈ Ω*.*
	- A ⊕ *ω* ⊆ A *for any ω* ∈ Ω *and* A ⊕ Ω ⊆ A*.*
	- *A* ⊕ *ω* ∈A⊕ *ω implies A* ∈ A *for any ω* ∈ Ω*.*
	- *We have* A = A ⊕ Ω*.*

**Proof.** To prove part (i), suppose that A is informally type-III-pseudo-open. For *A* ∈ A = pint(III)(A), by definition, there exists *ε* > 0 such that B(*A*; *ε*) ⊕ Ω ⊆ A ⊕ Ω. From part (i) of Proposition 6, we also have B(*A* ⊕ *ω*; *ε*) ⊕ Ω ⊆ A ⊕ Ω, which says that *A* ⊕ *ω* ∈ pint(III)(A) = A. Now we assume that A is informally type-III-open. Then *A* ∈ A = int(III)(A) ⊆ pint(III)(A). We can also obtain *A* ⊕ *ω* ∈ pint(III)(A) = A. The other types of openness can be similarly handled.

To prove the first case of part (ii), we consider the informal type-III-pseudo-open sets. If *A* ⊕ *ω* ∈ A = pint(III)(A), there exists *ε* > 0 such that B(*A* ⊕ *ω*; *ε*) ⊕ Ω ⊆ A ⊕ Ω. From part (ii) of Proposition 6, we also have B(*A*; *ε*) ⊕ Ω ⊆ A ⊕ Ω, which shows that *A* ∈ pint(III)(A) = A.

To prove the second case of part (ii), we consider the informal type-III-pseudo-open sets. If *A* ∈ A ⊕ *ω*, then *A* = *Ā* ⊕ *ω* for some *Ā* ∈ A = pint(III)(A). Therefore there exists *ε* > 0 such that B(*Ā*; *ε*) ⊕ Ω ⊆ A ⊕ Ω. Since B(*A*; *ε*) = B(*Ā* ⊕ *ω*; *ε*) ⊆ B(*Ā*; *ε*) by part (ii) of Proposition 6, we see that B(*A*; *ε*) ⊕ Ω ⊆ A ⊕ Ω, i.e., *A* ∈ pint(III)(A) = A. Now, for *A* ∈ A ⊕ Ω, we see that *A* ∈ A ⊕ *ω* for some *ω* ∈ Ω, which implies *A* ∈ A. Therefore we obtain A ⊕ Ω ⊆ A.

To prove the third case of part (ii), using the second case of part (ii), we have

$$
A \oplus \omega \in \mathcal{A} \oplus \omega \subseteq \mathcal{A} \oplus \Omega \subseteq \mathcal{A}.
$$

Using the first case of part (ii), we obtain *A* ∈ A.

To prove the fourth case of part (ii), since *A* = *A* ⊕ {*θX*} and {*θX*} ∈ Ω, it follows that A ⊆ A ⊕ Ω. By the second case of part (ii), we obtain the desired result.

To prove part (iii), from part (ii) of Proposition 6, we have B(*A* ⊕ *ω*; *ε*) ⊆ B(*A*; *ε*). Therefore, using an argument similar to the proof of part (i), we can obtain the desired results. This completes the proof.

We remark that the results in Proposition 10 need not hold for the various types of informal open sets. For example, in the proof of part (i), the inclusion B(*A* ⊕ *ω*; *ε*) ⊕ Ω ⊆ A ⊕ Ω can only say that *A* ⊕ *ω* ∈ pint(III)(A), since we do not know whether *A* ⊕ *ω* is in A or not.

**Proposition 11.** *Let* (P(*X*), ‖ · ‖) *be an informal pseudo-seminormed hyperspace.*

	- *We have* int(A) ⊕ Ω = int(I)(A) ⊕ Ω ⊆ A*. In particular, if* A *is informally open or type-I-open, then* A ⊕ Ω ⊆ A*.*
	- *We have* int(II)(A) = int(III)(A) ⊆ A ⊕ Ω*.*

*Moreover the concept of informal* (*resp. type-I, type-II, type-III*) *open set is equivalent to the concept of informal* (*resp. type-I, type-II, type-III*) *pseudo-open set.*

(ii) *Suppose that* ‖ · ‖ *satisfies the null sub-inequality. Then*

$$\left(\operatorname{pint}^{(II)}(\mathcal{A})\right)^{c} \oplus \Omega = \left(\operatorname{pint}^{(III)}(\mathcal{A})\right)^{c} \oplus \Omega \subseteq \left(\operatorname{pint}^{(II)}(\mathcal{A})\right)^{c} = \left(\operatorname{pint}^{(III)}(\mathcal{A})\right)^{c}.$$

*In particular, if* <sup>A</sup> *is informally type-II-pseudo-open or type-III-pseudo-open, then* <sup>A</sup>*<sup>c</sup>* <sup>⊕</sup> <sup>Ω</sup> ⊆ A*c.*

**Proof.** To prove the first case of part (i), for any *A* ∈ int(I)(A), there exists an open ball B(*A*; *ε*) such that B(*A*; *ε*) ⊕ Ω ⊆ A. Since *A* ∈ B(*A*; *ε*) by the first observation of Remark 1, we have *A* ⊕ Ω ⊆ B(*A*; *ε*) ⊕ Ω ⊆ A. This shows int(I)(A) ⊕ Ω ⊆ A. Using Remark 5, we obtain the desired results.

To prove the second case of part (i), for any *A* ∈ int(II)(A), there exists an open ball B(*A*; *ε*) such that B(*A*; *ε*) ⊆ A ⊕ Ω. Then we have *A* ∈ A ⊕ Ω, since *A* ∈ B(*A*; *ε*). This shows int(II)(A) ⊆ A ⊕ Ω. Using Remark 5, we obtain the desired results. From Remark 4, we see that the concept of informal (resp. type-I, type-II, type-III) open set is equivalent to the concept of informal (resp. type-I, type-II, type-III) pseudo-open set.

To prove part (ii), for any *Â* ∈ (pint(II)(A))*<sup>c</sup>* ⊕ Ω, we have *Â* = *A* ⊕ *ω* for some *A* ∈ (pint(II)(A))*<sup>c</sup>* and *ω* ∈ Ω. By definition, we see that B(*A*; *ε*) ⊄ A ⊕ Ω for every *ε* > 0. By part (ii) of Proposition 6, we also have B(*Â*; *ε*) ⊄ A ⊕ Ω for every *ε* > 0. This says that *Â* is not an informal type-II-pseudo-interior point of A, i.e., *Â* ∈ (pint(II)(A))*<sup>c</sup>*. This completes the proof.

**Proposition 12.** *Let* (P(*X*), ‖ · ‖) *be an informal pseudo-seminormed hyperspace.*


**Proof.** To prove part (i), for any *A* ∈ B⋆(*A*0; *ε*), we have *A* = *A*<sup>0</sup> ⊕ *C* with ‖*C*‖ < *ε*. Let *ε̂* = *ε* − ‖*C*‖ > 0. For any *Â* ∈ B⋆(*A*; *ε̂*), i.e., *Â* = *A* ⊕ *D* with ‖*D*‖ < *ε̂*, we obtain *Â* = *A*<sup>0</sup> ⊕ *C* ⊕ *D* and

$$\| C \oplus D \| \le \| C \| + \| D \| = \varepsilon - \hat{\varepsilon} + \| D \| < \varepsilon - \hat{\varepsilon} + \hat{\varepsilon} = \varepsilon,$$

which means that *Â* ∈ B⋆(*A*0; *ε*), i.e.,

$$\mathcal{B}^{\star}(A; \hat{\varepsilon}) \subseteq \mathcal{B}^{\star}(A_0; \varepsilon). \tag{6}$$

This shows that B⋆(*A*0; *ε*) ⊆ int⋆(B⋆(*A*0; *ε*)). Therefore we obtain B⋆(*A*0; *ε*) = int⋆(B⋆(*A*0; *ε*)). We can similarly obtain the inclusion B⋆(*A*0; *ε*) ⊆ pint⋆(B⋆(*A*0; *ε*)). However, we cannot have the equality B⋆(*A*0; *ε*) = pint⋆(B⋆(*A*0; *ε*)), since pint⋆(B⋆(*A*0; *ε*)) is not necessarily contained in B⋆(*A*0; *ε*). From (6), we have B⋆(*A*; *ε̂*) ⊕ Ω ⊆ B⋆(*A*0; *ε*) ⊕ Ω. This says that B⋆(*A*0; *ε*) is informally ⋆-type-III-open. On the other hand, from (6) and part (ii) of Proposition 8, we also have

$$\mathcal{B}^{\star}(A; \hat{\varepsilon}) \subseteq \mathcal{B}^{\star}(A_0; \varepsilon) \subseteq \mathcal{B}^{\star}(A_0; \varepsilon) \oplus \Omega.$$

This shows that B⋆(*A*0; *ε*) is informally ⋆-type-II-open.

To prove part (ii), for any *A* ∈ B(*A*0; *ε*), we have ‖*A* ⊖ *A*0‖ < *ε*. Let *ε̂* = ‖*A* ⊖ *A*0‖. For any *Â* ∈ B(*A*; *ε* − *ε̂*), we have ‖*Â* ⊖ *A*‖ < *ε* − *ε̂*. Therefore, by Proposition 4, we obtain

$$\| \hat{A} \ominus A_0 \| \le \| \hat{A} \ominus A \| + \| A \ominus A_0 \| = \hat{\varepsilon} + \| \hat{A} \ominus A \| < \hat{\varepsilon} + \varepsilon - \hat{\varepsilon} = \varepsilon,$$

which means that *Â* ∈ B(*A*0; *ε*), i.e.,

$$\mathcal{B}(A; \varepsilon - \hat{\varepsilon}) \subseteq \mathcal{B}(A_0; \varepsilon). \tag{7}$$

This shows that B(*A*0; *ε*) ⊆ int(B(*A*0; *ε*)).

Therefore we obtain B(*A*0; *ε*) = int(B(*A*0; *ε*)). We can similarly obtain the inclusion B(*A*0; *ε*) ⊆ pint(B(*A*0; *ε*)). From (7), we have B(*A*; *ε* − *ε̂*) ⊕ Ω ⊆ B(*A*0; *ε*) ⊕ Ω. This says that B(*A*0; *ε*) is informally type-III-open. On the other hand, from (7) and part (ii) of Proposition 8, we also have

$$\mathcal{B}(A; \varepsilon - \hat{\varepsilon}) \subseteq \mathcal{B}(A_0; \varepsilon) \subseteq \mathcal{B}(A_0; \varepsilon) \oplus \Omega.$$

This shows that B(*A*0; *ε*) is informally type-II-open.

To prove part (iii), from (6), (7) and part (ii) of Proposition 8, we have

$$\mathcal{B}^{\star}(A; \hat{\varepsilon}) \oplus \Omega \subseteq \mathcal{B}^{\star}(A_0; \varepsilon) \oplus \Omega = \mathcal{B}^{\star}(A_0; \varepsilon)$$

and

$$\mathcal{B}(A; \varepsilon - \hat{\varepsilon}) \oplus \Omega \subseteq \mathcal{B}(A_0; \varepsilon) \oplus \Omega = \mathcal{B}(A_0; \varepsilon).$$

This shows that B⋆(*A*0; *ε*) is informally ⋆-type-I-open, and that B(*A*0; *ε*) is informally type-I-open. We complete the proof.

**Proposition 13.** *Let* (P(*X*), ‖ · ‖) *be an informal pseudo-seminormed hyperspace. Suppose that the center A*<sup>0</sup> *is in the open balls* B⋆(*A*0; *ε*) *and* B(*A*0; *ε*)*. The following statements hold true:*


**Proof.** The results follow from Proposition 12, Remark 7 and part (ii) of Proposition 8 immediately.

#### **6. Topological Spaces**

Now we are in a position to investigate the topological structure generated by the informal pseudo-seminormed hyperspace (P(*X*), ‖ · ‖) based on the different kinds of openness. We denote by *τ*<sup>0</sup> and *τ*⋆<sup>0</sup> the families of all informal open and informal ⋆-open subsets of P(*X*), respectively, and by p*τ*<sup>0</sup> and p*τ*⋆<sup>0</sup> the families of all informal pseudo-open and informal ⋆-pseudo-open subsets of P(*X*), respectively. We denote by *τ*(I) and *τ*⋆(I) the families of all informal type-I-open and informal ⋆-type-I-open subsets of P(*X*), respectively, and by p*τ*(I) and p*τ*⋆(I) the families of all informal type-I-pseudo-open and informal ⋆-type-I-pseudo-open subsets of P(*X*), respectively. We can similarly define the families *τ*(II), *τ*(III), *τ*⋆(II), *τ*⋆(III), p*τ*(II), p*τ*(III), p*τ*⋆(II) and p*τ*⋆(III).

**Proposition 14.** *Let* (P(*X*), ‖ · ‖) *be an informal pseudo-seminormed hyperspace.*


**Proof.** To prove part (i), by the second observation of Remark 8, we see that ∅ ∈ *τ*(I) and P(*X*) ∈ *τ*(I). Let A = A<sup>1</sup> ∩ · · · ∩ A*<sup>n</sup>*, where the A*<sup>i</sup>* are informal type-I-open sets for all *i* = 1, ··· , *n*. For *A* ∈ A, we have *A* ∈ A*<sup>i</sup>* for all *i* = 1, ··· , *n*. Then there exist *ε<sup>i</sup>* such that B(*A*; *εi*) ⊕ Ω ⊆ A*<sup>i</sup>* for all *i* = 1, ··· , *n*. Let *ε* = min{*ε*1, ··· , *εn*}. Then B(*A*; *ε*) ⊕ Ω ⊆ B(*A*; *εi*) ⊕ Ω ⊆ A*<sup>i</sup>* for all *i* = 1, ··· , *n*, which says that B(*A*; *ε*) ⊕ Ω ⊆ A<sup>1</sup> ∩ · · · ∩ A*<sup>n</sup>* = A, i.e., A ⊆ int(I)(A). Therefore the intersection A is informally type-I-open by Remark 4. On the other hand, let A = ∪*<sup>δ</sup>* A*δ*. Then *A* ∈ A implies that *A* ∈ A*<sup>δ</sup>* for some *δ*. This indicates that B(*A*; *ε*) ⊕ Ω ⊆ A*<sup>δ</sup>* ⊆ A for some *ε* > 0, i.e., A ⊆ int(I)(A). Therefore the union A is informally type-I-open. This shows that (P(*X*), *τ*(I)) is a topological space. For the case of informal ⋆-type-I-open subsets of P(*X*), we can similarly obtain the desired result. Parts (ii) and (iii) follow from Remark 7 and part (i) immediately. This completes the proof.

Remark 1 gives sufficient conditions for the open ball B(*A*; *ε*) to contain its center *A*.

**Proposition 15.** *Let* (P(*X*), ‖ · ‖) *be an informal pseudo-seminormed hyperspace.*


**Proof.** The empty set ∅ and P(*X*) are informally open by the first observation of Remark 8. The remaining proof follows from an argument similar to that of Proposition 14, without considering the null set Ω.

Let (P(*X*), ‖ · ‖) be an informal pseudo-seminormed hyperspace. We consider the following families:

$$\hat{\tau}^{(II)} = \left\{ \mathcal{A} \in \tau^{(II)} : \mathcal{A} \oplus \Omega \subseteq \mathcal{A} \right\}$$

and

$$\hat{\tau}^{(III)} = \left\{ \mathcal{A} \in \tau^{(III)} : \mathcal{A} \oplus \Omega \subseteq \mathcal{A} \right\}.$$

We can similarly define *τ̂*⋆(II) and *τ̂*⋆(III). Then *τ̂*(II) ⊆ *τ*(II), *τ̂*(III) ⊆ *τ*(III), *τ̂*⋆(II) ⊆ *τ*⋆(II) and *τ̂*⋆(III) ⊆ *τ*⋆(III). We can also similarly define p*τ̂*(II), p*τ̂*(III), p*τ̂*⋆(II) and p*τ̂*⋆(III) regarding the informal pseudo-openness. Then p*τ̂*(II) ⊆ p*τ*(II), p*τ̂*(III) ⊆ p*τ*(III), p*τ̂*⋆(II) ⊆ p*τ*⋆(II) and p*τ̂*⋆(III) ⊆ p*τ*⋆(III).

**Proposition 16.** *Let* (P(*X*), ‖ · ‖) *be an informal pseudo-seminormed hyperspace. Suppose that* ‖ · ‖ *satisfies the null sub-inequality. Then*

$$\mathrm{p}\hat{\tau}^{(II)} = \mathrm{p}\tau^{(II)} = \mathrm{p}\tau^{(III)} = \mathrm{p}\hat{\tau}^{(III)} \quad \text{and} \quad \hat{\tau}^{(II)} = \tau^{(II)} = \tau^{(III)} = \hat{\tau}^{(III)}.$$

**Proof.** The results follow from Remark 5 and part (ii) of Proposition 10 immediately.

**Proposition 17.** *Let* (P(*X*), ‖ · ‖) *be an informal pseudo-seminormed hyperspace.*

	- *Suppose that each open ball* B(*A*; *ε*) *contains the center A. Then* (P(*X*), p*τ̂*(II)) = (P(*X*), *τ̂*(II)) *is a topological space.*
	- *Suppose that each open ball* B⋆(*A*; *ε*) *contains the center A. Then* (P(*X*), p*τ̂*⋆(II)) = (P(*X*), *τ̂*⋆(II)) *is a topological space.*

**Proof.** To prove part (i), given A1, A<sup>2</sup> ∈ *τ̂*(II), let A = A<sup>1</sup> ∩ A2. For *A* ∈ A, we have *A* ∈ A*<sup>i</sup>* for *i* = 1, 2. Then there exist *ε<sup>i</sup>* such that B(*A*; *εi*) ⊆ A*<sup>i</sup>* ⊕ Ω for *i* = 1, 2. Let *ε* = min{*ε*1, *ε*2}. Then

$$\mathcal{B}(A; \varepsilon) \subseteq \mathcal{B}(A; \varepsilon_i) \subseteq \mathcal{A}_i \oplus \Omega,$$

for all *i* = 1, 2, which says that

$$\mathcal{B}(A; \varepsilon) \subseteq \left[ (\mathcal{A}_1 \oplus \Omega) \cap (\mathcal{A}_2 \oplus \Omega) \right] = (\mathcal{A}_1 \cap \mathcal{A}_2) \oplus \Omega = \mathcal{A} \oplus \Omega$$

by Proposition 3. This shows that A is informally type-II-open. For *A* ∈ A ⊕ Ω, we have *A* = *Ā* ⊕ *ω* for some *Ā* ∈ A and *ω* ∈ Ω. Since *Ā* ∈ A<sup>1</sup> ∩ A2, it follows that *A* ∈ A<sup>1</sup> ⊕ Ω ⊆ A<sup>1</sup> and *A* ∈ A<sup>2</sup> ⊕ Ω ⊆ A2, which says that *A* ∈ A<sup>1</sup> ∩ A<sup>2</sup> = A, i.e., A ⊕ Ω ⊆ A. This shows that A is indeed in *τ̂*(II). Therefore, the intersection of finitely many members of *τ̂*(II) is a member of *τ̂*(II).

Now, given a family {A*δ*}*δ*∈<sup>Λ</sup> ⊂ *τ̂*(II), let A = ∪*δ*∈<sup>Λ</sup> A*δ*. Then *A* ∈ A implies that *A* ∈ A*<sup>δ</sup>* for some *δ* ∈ Λ. This says that

$$\mathcal{B}(A; \varepsilon) \subseteq \mathcal{A}_{\delta} \oplus \Omega \subseteq \mathcal{A} \oplus \Omega$$

for some *ε* > 0. Therefore, the union A is informally type-II-open. For *A* ∈ A ⊕ Ω, we have *A* = *Ā* ⊕ *ω*, where *Ā* ∈ A, i.e., *Ā* ∈ A*<sup>δ</sup>* for some *δ* ∈ Λ. It also says that *A* ∈ A*<sup>δ</sup>* ⊕ Ω ⊆ A*<sup>δ</sup>* ⊆ A, i.e., A ⊕ Ω ⊆ A. This shows that A is indeed in *τ̂*(II). By the third observation of Remark 8, we see that ∅ and P(*X*) are also informally type-II-open. It is not hard to see that ∅ ⊕ Ω = ∅ and P(*X*) ⊕ Ω ⊆ P(*X*), which shows that ∅, P(*X*) ∈ *τ̂*(II). Therefore, (P(*X*), *τ̂*(II)) is indeed a topological space. The above arguments are also valid for *τ̂*⋆(II).

Part (ii) follows immediately from the third observation of Remark 7 and part (i). This completes the proof.

**Proposition 18.** *Let* (P(*X*), ‖ · ‖) *be an informal pseudo-seminormed hyperspace.*

	- *Suppose that each open ball* B(*A*; *ε*) *contains the center A. Then* (P(*X*), p*τ̂*(III)) = (P(*X*), *τ̂*(III)) *is a topological space.*
	- *Suppose that each open ball* B⋆(*A*; *ε*) *contains the center A. Then* (P(*X*), p*τ̂*⋆(III)) = (P(*X*), *τ̂*⋆(III)) *is a topological space.*

**Proof.** To prove part (i), by the fourth observation of Remark 8, it is clear that ∅, P(*X*) ∈ *τ*(III). Since ∅ ⊕ Ω = ∅ and P(*X*) ⊕ Ω ⊆ P(*X*), it follows that ∅, P(*X*) ∈ *τ̂*(III). Given A1, A<sup>2</sup> ∈ *τ̂*(III), let A = A<sup>1</sup> ∩ A2. For *A* ∈ A, there exist *ε<sup>i</sup>* such that B(*A*; *εi*) ⊕ Ω ⊆ A*<sup>i</sup>* ⊕ Ω for *i* = 1, 2. Let *ε* = min{*ε*1, *ε*2}. Then

$$\mathcal{B}(A; \varepsilon) \oplus \Omega \subseteq \mathcal{B}(A; \varepsilon_i) \oplus \Omega \subseteq \mathcal{A}_i \oplus \Omega,$$

for all *i* = 1, 2, which says that

$$\mathcal{B}(A; \varepsilon) \oplus \Omega \subseteq \left[ (\mathcal{A}_1 \oplus \Omega) \cap (\mathcal{A}_2 \oplus \Omega) \right] = (\mathcal{A}_1 \cap \mathcal{A}_2) \oplus \Omega = \mathcal{A} \oplus \Omega$$

by Proposition 3. This shows that A is informally type-III-open. From the proof of Proposition 17, we also see that A ⊕ Ω ⊆ A. Therefore, the intersection of finitely many members of *τ̂*(III) is a member of *τ̂*(III).

Now, given a family {A*δ*}*δ*∈<sup>Λ</sup> ⊂ *τ̂*(III), let A = ∪*δ*∈<sup>Λ</sup> A*δ*. Then *A* ∈ A implies that *A* ∈ A*<sup>δ</sup>* for some *δ* ∈ Λ. This says that

$$\mathcal{B}(A; \varepsilon) \oplus \Omega \subseteq \mathcal{A}_{\delta} \oplus \Omega \subseteq \mathcal{A} \oplus \Omega$$

for some *ε* > 0. Therefore, the union A is informally type-III-open. From the proof of Proposition 17, we also see that A ⊕ Ω ⊆ A, i.e., A ∈ *τ̂*(III). This shows that (P(*X*), *τ̂*(III)) is indeed a topological space. The above arguments are also valid for *τ̂*⋆(III).

Part (ii) follows immediately from the fourth observation of Remark 7 and part (i). This completes the proof.

**Proposition 19.** *Let* (P(*X*), ‖ · ‖) *be an informal pseudo-seminormed hyperspace. Suppose that* ‖ · ‖ *satisfies the null sub-inequality. If each open ball* B(*A*; *ε*) *contains the center A, then* (P(*X*), p*τ*(II)) = (P(*X*), p*τ*(III)) *is a topological space.*

**Proof.** By the third observation of Remark 8, we see that ∅, P(*X*) ∈ p*τ*(II). Given A1, A<sup>2</sup> ∈ p*τ*(II), let A = A<sup>1</sup> ∩ A2. We want to show A = pint(II)(A). For *A* ∈ A, we have *A* ∈ A*<sup>i</sup>* for *i* = 1, 2. There exist *ε<sup>i</sup>* such that B(*A*; *εi*) ⊆ A*<sup>i</sup>* ⊕ Ω for *i* = 1, 2. Let *ε* = min{*ε*1, *ε*2}. Then B(*A*; *ε*) ⊆ B(*A*; *εi*) ⊆ A*<sup>i</sup>* ⊕ Ω for *i* = 1, 2, which says that, using part (ii) of Proposition 10,

$$\mathcal{B}(A; \varepsilon) \subseteq \left[ (\mathcal{A}_1 \oplus \Omega) \cap (\mathcal{A}_2 \oplus \Omega) \right] = \mathcal{A}_1 \cap \mathcal{A}_2 = (\mathcal{A}_1 \cap \mathcal{A}_2) \oplus \Omega = \mathcal{A} \oplus \Omega.$$

This shows that *A* ∈ int(II)(A), i.e., A ⊆ int(II)(A) ⊆ pint(II)(A) by Remark 4. On the other hand, for *A* ∈ pint(II)(A), there exists *ε* > 0 such that, using part (ii) of Proposition 10,

$$A \in \mathcal{B}(A; \varepsilon) \subseteq \mathcal{A} \oplus \Omega = (\mathcal{A}_1 \cap \mathcal{A}_2) \oplus \Omega \subseteq \mathcal{A}_1 \oplus \Omega = \mathcal{A}_1.$$

We can similarly obtain *A* ∈ A2, i.e., *A* ∈ A<sup>1</sup> ∩ A<sup>2</sup> = A. This shows that pint(II)(A) ⊆ A. Therefore, we conclude that the intersection of finitely many members of p*τ*(II) is a member of p*τ*(II).

Now, given a family {A*δ*}*δ*∈<sup>Λ</sup> ⊂ p*τ*(II), let A = ∪*δ*∈<sup>Λ</sup> A*δ*. Then *A* ∈ A implies that *A* ∈ A*<sup>δ</sup>* for some *δ* ∈ Λ. This says that

$$\mathcal{B}(A; \varepsilon) \subseteq \mathcal{A}_{\delta} \oplus \Omega \subseteq \mathcal{A} \oplus \Omega$$

for some *ε* > 0. Therefore we obtain A ⊆ int(II)(A) ⊆ pint(II)(A). On the other hand, for *A* ∈ pint(II)(A), we have

$$A \in \mathcal{B}(A; \varepsilon) \subseteq \mathcal{A} \oplus \Omega = \mathcal{A}$$

by part (ii) of Proposition 10. This shows that pint(II)(A) ⊆ A, i.e., A = pint(II)(A). Therefore, by Remark 5, we conclude that (P(*X*), p*τ*(II))=(P(*X*), p*τ*(III)) is a topological space. This completes the proof.

#### **7. Conclusions**

The hyperspace denoted by P(*X*) is the collection of all subsets of a vector space *X*. Under the set addition

$$A \oplus B = \{a + b : a \in A \text{ and } b \in B\}$$

and the scalar multiplication

*λA* = {*λa* : *a* ∈ *A*} ,

the hyperspace P(*X*) cannot form a vector space. The reason is that each *A* ∈ P(*X*) does not have an additive inverse element. In this paper, the null set, defined by

$$\Omega = \{ A \ominus A : A \in \mathcal{P}(X) \},$$

can be treated as a kind of "zero element" of P(*X*). Although P(*X*) is not a vector space, a so-called informal norm is introduced on P(*X*), which mimics the conventional norm. Using this informal norm, two different concepts of open balls are proposed, and these are used to define many types of open sets. Therefore, we can generate many types of topologies based on these different concepts of open sets.
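The failure of additive inverses summarized above is easy to see on finite subsets of the real line. The following sketch is a numerical illustration only (the sample sets are our choice, and X = ℝ): it computes A ⊕ B, λA and A ⊖ A = A ⊕ ((−1)A), and confirms that A ⊖ A is not the singleton {0}, which is precisely why the null set Ω has to serve as a substitute "zero element."

```python
# Set addition, scalar multiplication, and the null set on finite subsets of the reals.

def oplus(A, B):
    # A (+) B = {a + b : a in A and b in B}
    return {a + b for a in A for b in B}

def scalar(lam, A):
    # lam * A = {lam * a : a in A}
    return {lam * a for a in A}

def ominus(A, B):
    # A (-) B is defined as A (+) ((-1)B)
    return oplus(A, scalar(-1, B))

A = {0, 1}
B = {2, 5}
print(oplus(A, B))     # equals {2, 3, 5, 6} as a set
print(scalar(3, A))    # equals {0, 3}
print(ominus(A, A))    # equals {-1, 0, 1}: not {0}, so A has no additive inverse
```

Since A ⊖ A ∈ Ω for every A, the last line also exhibits a typical nonzero member of the null set Ω.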

As we mentioned before, the theory of set-valued analysis has been applied to nonlinear analysis, differential inclusion, fixed point theory and set-valued optimization, which treats each element of P(*X*) as a subset of *X*. In this paper, each element of P(*X*) is instead treated as a "point", and the family P(*X*) is treated as a universal set. The topological structures studied in this paper may provide potential applications in nonlinear analysis, differential inclusion, fixed point theory and set-valued optimization (or set optimization) based on this different point of view regarding the elements of P(*X*), which we leave for future research.

**Funding:** This research received no external funding.

**Conflicts of Interest:** The author declares no conflict of interest.

#### **References**


c 2019 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## *Article* **Modified Inertial Hybrid and Shrinking Projection Algorithms for Solving Fixed Point Problems**

#### **Bing Tan 1, Shanshan Xu <sup>2</sup> and Songxiao Li 1,\***


Received: 22 January 2020; Accepted: 10 February 2020; Published: 12 February 2020

**Abstract:** In this paper, we introduce two modified inertial hybrid and shrinking projection algorithms for solving fixed point problems by combining the modified inertial Mann algorithm with the projection algorithm. We establish strong convergence theorems under certain suitable conditions. Finally, our algorithms are applied to the convex feasibility problem, the variational inequality problem, and location theory. The algorithms and results presented in this paper summarize and unify many corresponding results previously known in this field.

**Keywords:** conjugate gradient method; steepest descent method; hybrid projection; shrinking projection; inertial Mann; strong convergence; nonexpansive mapping

**MSC:** 49J40; 47H05; 90C52

#### **1. Introduction**

Throughout this paper, let *C* denote a nonempty closed convex subset of a real Hilbert space *H* with inner product $\langle \cdot, \cdot \rangle$ and induced norm $\|\cdot\|$. A mapping *T* : *C* → *C* is said to be nonexpansive if $\|Tx - Ty\| \le \|x - y\|$ for all *x*, *y* ∈ *C*. We use Fix(*T*) := {*x* ∈ *C* : *Tx* = *x*} to denote the set of fixed points of a mapping *T* : *C* → *C*. The main purpose of this paper is to consider the following fixed point problem: find *x*<sup>∗</sup> ∈ *C* such that *T*(*x*<sup>∗</sup>) = *x*<sup>∗</sup>, where *T* : *C* → *C* is nonexpansive with Fix(*T*) ≠ ∅.

There are various specific applications for approximating fixed point problems with nonexpansive mappings, such as monotone variational inequalities, convex optimization problems, convex feasibility problems, and image restoration problems; see, e.g., [1–6]. It is well known that the Picard iteration method may not converge, and an effective way to overcome this difficulty is to use Mann iterative method, which generates sequences {*xn*} recursively:

$$x_{n+1} = \alpha_n x_n + \left(1 - \alpha_n\right) T x_n, \quad n \ge 0. \tag{1}$$

The iterative sequence {*xn*} defined by (1) converges weakly to a fixed point of *T* provided that $\{\alpha_n\} \subset (0, 1)$ satisfies the condition $\sum_{n=1}^{\infty} \alpha_n \left(1 - \alpha_n\right) = +\infty$.
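To make the iteration concrete, here is a minimal numerical sketch (not from the paper) of the Mann iteration (1) in Python. The nonexpansive map used for illustration is the metric projection onto the unit ball, and the constant step $\alpha_n = 1/2$ clearly satisfies the divergence condition above; the helper name `mann_iteration` is ours.

```python
import numpy as np

def mann_iteration(T, x0, alpha, tol=1e-8, max_iter=10_000):
    """Mann iteration x_{n+1} = a_n x_n + (1 - a_n) T x_n for a nonexpansive
    map T; alpha(n) supplies the step sequence {a_n} in (0, 1)."""
    x = np.asarray(x0, dtype=float)
    for n in range(max_iter):
        Tx = T(x)
        if np.linalg.norm(x - Tx) < tol:  # approximate fixed point reached
            break
        a = alpha(n)
        x = a * x + (1.0 - a) * Tx
    return x

# Toy nonexpansive map: metric projection onto the closed unit ball in R^2.
def T(x):
    nx = np.linalg.norm(x)
    return x if nx <= 1.0 else x / nx

x_star = mann_iteration(T, [3.0, 4.0], alpha=lambda n: 0.5)
print(x_star)  # approaches the fixed point [0.6, 0.8] on the unit sphere
```

In finite dimensions weak and strong convergence coincide, so the iterates here converge in norm; the paper's point is that in infinite-dimensional spaces only weak convergence is guaranteed.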

Many practical applications, for instance, quantum physics and image reconstruction, are in infinite dimensional spaces. To investigate these problems, norm convergence is usually preferable to weak convergence. Therefore, modifying the Mann iteration method to obtain strong convergence is an important research topic. For recent works, see [7–12] and the references therein. On the other hand, the Ishikawa iterative method can strongly converge to the fixed point of nonlinear mappings. For more discussion, see [13–16]. In 2003, Nakajo and Takahashi [17] established strong convergence of the Mann iteration with the aid of projections. Indeed, they considered the following algorithm:

$$\begin{cases} \begin{aligned} y_n &= \alpha_n x_n + \left(1 - \alpha_n\right) T x_n, \\ C_n &= \{ z \in C : \|y_n - z\| \le \|x_n - z\| \}, \\ Q_n &= \{ z \in C : \langle x_n - z, x_n - x_0 \rangle \le 0 \}, \\ x_{n+1} &= P_{C_n \cap Q_n} x_0, \quad n \in \mathbb{N}, \end{aligned} \end{cases} \tag{2}$$

where {*αn*} ⊂ [0, 1), *T* is a nonexpansive mapping on *C* and *PCn*∩*Qn* is the metric projection from *C* onto *Cn* ∩ *Qn*. This method is now referred to as the hybrid projection method. Inspired by Nakajo and Takahashi [17], Takahashi, Takeuchi, and Kubota [18] also proposed a projection-based method and obtained strong convergence results, which is now called the shrinking projection method. In recent years, many authors gained new algorithms based on projection method; see [10,18–23].

Generally, the Mann algorithm has a slow convergence rate. In recent years, there has been tremendous interest in developing the fast convergence of algorithms, especially for the inertial type extrapolation method, which was first proposed by Polyak in [24]. Recently, some researchers have constructed different fast iterative algorithms by means of inertial extrapolation techniques, for example, inertial Mann algorithm [25], inertial forward–backward splitting algorithm [26,27], inertial extragradient algorithm [28,29], inertial projection algorithm [30,31], and fast iterative shrinkage–thresholding algorithm (FISTA) [32]. The results of these algorithms and other related ones not only theoretically analyze the convergence properties of inertial type extrapolation algorithms, but also numerically demonstrate their computational performance on some data analysis and image processing problems.

In 2008, Mainge [25] proposed the following inertial Mann algorithm based on the idea of the Mann algorithm and inertial extrapolation:

$$\begin{cases} w_n = x_n + \delta_n \left( x_n - x_{n-1} \right), \\ x_{n+1} = \left( 1 - \eta_n \right) w_n + \eta_n T w_n, \quad n \ge 1. \end{cases} \tag{3}$$

It should be pointed out that the iteration sequence {*xn*} defined by (3) only obtains weak convergence results under the following assumptions:

(C1) $\delta_n \in [0, 1)$ and $0 < \inf_{n \ge 1} \eta_n \le \sup_{n \ge 1} \eta_n < 1$;
(C2) $\sum_{n=1}^{\infty} \delta_n \|x_n - x_{n-1}\|^2 < +\infty$.

It should be noted that condition (C2) is quite strong: it involves the iterates themselves and therefore hinders the implementation of related algorithms. Recently, Bot and Csetnek [33] removed condition (C2); for more details, see Theorem 5 in [33].

In 2014, Sakurai and Iiduka [34] introduced an algorithm to accelerate the Halpern fixed point algorithm in Hilbert spaces by means of conjugate gradient methods that can accelerate the convergence rate of the steepest descent method. Very recently, inspired by the work of Sakurai and Iiduka [34], Dong et al. [35] proposed a modified inertial Mann algorithm by combining the inertial method, the Picard algorithm and the conjugate gradient method. Their numerical results showed that the proposed algorithm has some advantages over other algorithms. Indeed, they obtained the following result:

**Theorem 1.** *Let T* : *C* → *C be a nonexpansive mapping with* Fix(*T*) ≠ ∅*. Set μ* ∈ (0, 1] *and η* > 0*, choose x*0, *x*<sup>1</sup> ∈ *H arbitrarily, and set d*<sup>0</sup> := (*Tx*<sup>0</sup> − *x*0)/*η. Define a sequence* {*xn*} *by the following algorithm:*

$$\begin{cases} w_n = x_n + \delta_n \left( x_n - x_{n-1} \right), \\ d_{n+1} = \frac{1}{\eta} \left( T w_n - w_n \right) + \psi_n d_n, \\ y_n = w_n + \eta d_{n+1}, \\ x_{n+1} = \mu \nu_n w_n + \left( 1 - \mu \nu_n \right) y_n, \quad n \ge 1. \end{cases} \tag{4}$$


*The iterative sequence* {*xn*} *defined by* (4) *converges weakly to a point in* Fix(*T*) *under the following conditions:*


Inspired and motivated by the above works, in this paper, based on the modified inertial Mann algorithm (4) and the projection algorithm (2), we propose two new modified inertial hybrid and shrinking projection algorithms, respectively. We obtain strong convergence results under some mild conditions. Finally, our algorithms are applied to a convex feasibility problem, a variational inequality problem, and location theory.

The structure of the paper is the following. Section 2 gives the mathematical preliminaries. Section 3 presents the modified inertial hybrid and shrinking projection algorithms for nonexpansive mappings in Hilbert spaces and analyzes their convergence. Section 4 gives some numerical experiments to compare the convergence behavior of our proposed algorithms with previously known algorithms. Section 5 concludes the paper with a brief summary.

#### **2. Preliminaries**

We use the notation *xn* → *x* and *xn* ⇀ *x* to denote the strong and weak convergence of a sequence {*xn*} to a point *x* ∈ *H*, respectively. Let $\omega_w \{x_n\} := \{ x : \exists \{x_{n_j}\} \subset \{x_n\} \text{ such that } x_{n_j} \rightharpoonup x \}$ denote the weak limit set of {*xn*}. For any *x*, *y* ∈ *H* and *t* ∈ ℝ, we have $\|tx + (1-t)y\|^2 = t\|x\|^2 + (1-t)\|y\|^2 - t(1-t)\|x - y\|^2$.

For any *x* ∈ *H*, there is a unique nearest point *PCx* in *C*, that is, $P_C(x) := \operatorname{argmin}_{y \in C} \|x - y\|$. The mapping *PC* is called the metric projection of *H* onto *C*. It has the following characterization:

$$P_C x \in C \quad \text{and} \quad \langle P_C x - x, P_C x - y \rangle \le 0, \quad \forall y \in C. \tag{5}$$

From this characterization, the following inequality can be obtained

$$\|x - P_C x\|^2 + \|y - P_C x\|^2 \le \|x - y\|^2, \quad \forall x \in H, \ \forall y \in C. \tag{6}$$

We give some special cases with simple analytical solutions:

(1) The Euclidean projection of *x* onto a Euclidean ball $B[c, r] = \{x : \|x - c\| \le r\}$ is given by

$$P_{B[c,r]}(x) = c + \frac{r}{\max\{\|x - c\|, r\}} \left(x - c\right).$$

(2) The Euclidean projection of *x* onto a box $\operatorname{Box}[\ell, u] = \{x : \ell \le x \le u\}$ is given componentwise by

$$P_{\operatorname{Box}[\ell, u]}(x)_i = \min\left\{ \max\left\{ x_i, \ell_i \right\}, u_i \right\}.$$

(3) The Euclidean projection of *x* onto a halfspace $H^{-}_{a,b} = \{x : \langle a, x \rangle \le b\}$ is given by

$$P_{H^{-}_{a,b}}(x) = x - \frac{[\langle a, x \rangle - b]_+}{\|a\|^2}\, a.$$
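These three closed-form projections translate directly into code. The following is a small Python sketch (the helper names are ours, not the paper's):

```python
import numpy as np

def proj_ball(x, c, r):
    """Projection onto the Euclidean ball B[c, r]."""
    x, c = np.asarray(x, float), np.asarray(c, float)
    return c + r / max(np.linalg.norm(x - c), r) * (x - c)

def proj_box(x, lo, hi):
    """Componentwise projection onto the box Box[lo, hi]."""
    return np.clip(np.asarray(x, float), lo, hi)

def proj_halfspace(x, a, b):
    """Projection onto the halfspace {x : <a, x> <= b}."""
    x, a = np.asarray(x, float), np.asarray(a, float)
    return x - max(a @ x - b, 0.0) / (a @ a) * a
```

For example, `proj_ball([3, 4], [0, 0], 1)` returns the boundary point `[0.6, 0.8]`, and a point already inside any of the three sets is returned unchanged.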

Next we give some results that will be used in our main proof.

**Lemma 1.** *[36] Let C be a nonempty closed convex subset of a real Hilbert space H and let T* : *C* → *H be a nonexpansive mapping with* Fix(*T*) ≠ ∅*. If* {*xn*} *is a sequence in C and x* ∈ *H such that xn* ⇀ *x and* ‖*Txn* − *xn*‖ → 0 *as n* → ∞*, then x* ∈ Fix(*T*)*.*

**Lemma 2.** *[37] Let C be a nonempty closed convex subset of a real Hilbert space H. For any x*, *y*, *z* ∈ *H and a* ∈ ℝ*, the set* $\{ v \in C : \|y - v\|^2 \le \|x - v\|^2 + \langle z, v \rangle + a \}$ *is convex and closed.*

**Lemma 3.** *[38] Let C be a nonempty closed convex subset of a real Hilbert space H. Let* {*xn*} ⊂ *H, u* ∈ *H and m* = *PCu. If ω<sup>w</sup>*{*xn*} ⊂ *C and* ‖*xn* − *u*‖ ≤ ‖*u* − *m*‖ *for all n* ∈ ℕ*, then xn* → *m.*

#### **3. Modified Inertial Hybrid and Shrinking Projection Algorithms**

In this section, we introduce two modified inertial hybrid and shrinking projection algorithms for nonexpansive mappings in Hilbert spaces using the ideas of the inertial method, the Picard algorithm, the conjugate gradient method, and the projection method. We prove the strong convergence of the algorithms under suitable conditions.

**Theorem 2.** *Let C be a bounded closed convex subset of a real Hilbert space H and let T* : *C* → *C be a nonexpansive mapping with* Fix(*T*) ≠ ∅*. Assume that the following conditions are satisfied:*

$$\eta > 0, \quad \{\delta_n\} \subset [\delta_1, \delta_2], \ \delta_1 \in (-\infty, 0], \ \delta_2 \in [0, \infty), \quad \{\psi_n\} \subset [0, \infty), \ \lim_{n \to \infty} \psi_n = 0, \quad \{\nu_n\} \subset (0, \nu], \ 0 < \nu < 1.$$

*Set x*−1, *x*<sup>0</sup> ∈ *H arbitrarily and set d*<sup>0</sup> := (*Tx*<sup>0</sup> − *x*0)/*η. Define a sequence* {*xn*} *by the following:*

$$\begin{cases} \begin{aligned} &w_n = x_n + \delta_n (x_n - x_{n-1}), \\ &d_{n+1} = \frac{1}{\eta} \left(T w_n - w_n\right) + \psi_n d_n, \\ &y_n = w_n + \eta d_{n+1}, \\ &z_n = \nu_n w_n + (1 - \nu_n)\, y_n, \\ &C_n = \left\{ z \in H : \|z_n - z\|^2 \le \|w_n - z\|^2 - \nu_n \left(1 - \nu_n\right) \|w_n - y_n\|^2 + \xi_n \right\}, \\ &Q_n = \left\{ z \in H : \langle x_n - z, x_n - x_0 \rangle \le 0 \right\}, \\ &x_{n+1} = P_{C_n \cap Q_n} x_0, \quad n \ge 0, \end{aligned} \end{cases} \tag{7}$$

*where the sequence* $\{\xi_n\}$ *is defined by* $\xi_n := \eta \psi_n M_2 \left[ \eta \psi_n M_2 + 2 M_1 \right]$*,* $M_1 := \operatorname{diam} C = \sup_{x, y \in C} \|x - y\|$ *and* $M_2 := \max\left\{ \max_{1 \le k \le n_0} \|d_k\|, \frac{2}{\eta} M_1 \right\}$*, where* $n_0$ *satisfies* $\psi_n \le \frac{1}{2}$ *for all* $n \ge n_0$*. Then the iterative sequence* {*xn*} *defined by* (7) *converges to P*Fix(*T*)*x*<sup>0</sup> *in norm.*

**Proof.** We divide the proof into three steps.

**Step 1.** To begin with, we show that Fix(*T*) ⊂ *Cn* ∩ *Qn*. It is easy to check that *Cn* is convex and closed by Lemma 2. Next we prove Fix(*T*) ⊂ *Cn* for all *n* ≥ 0. First we show by induction that {*dn*} is bounded. Assume that ‖*dn*‖ ≤ *M*<sup>2</sup> for some *n* ≥ *n*0. The triangle inequality ensures that

$$\|d_{n+1}\| = \left\| \frac{1}{\eta} \left(T w_n - w_n\right) + \psi_n d_n \right\| \le \frac{1}{\eta} \|T w_n - w_n\| + \psi_n \|d_n\| \le M_2,$$

which implies that ‖*dn*‖ ≤ *M*<sup>2</sup> for all *n* ≥ 0, that is, {*dn*} is bounded. Since *wn* ∈ *C*, we get ‖*wn* − *u*‖ ≤ *M*<sup>1</sup> for all *u* ∈ Fix(*T*). From the definition of {*yn*} and the nonexpansivity of *T*, we obtain

$$\begin{aligned} \|y_n - u\| &= \left\| w_n + \eta \left( \frac{1}{\eta} \left(T w_n - w_n\right) + \psi_n d_n \right) - u \right\| = \|T w_n + \eta \psi_n d_n - u\| \\ &\le \|w_n - u\| + \eta \psi_n M_2. \end{aligned}$$

Therefore,

$$\begin{aligned} \|z_n - u\|^2 &= \|\nu_n (w_n - u) + (1 - \nu_n)(y_n - u)\|^2 \\ &= \nu_n \|w_n - u\|^2 + (1 - \nu_n) \|y_n - u\|^2 - \nu_n (1 - \nu_n) \|w_n - y_n\|^2 \\ &\le \|w_n - u\|^2 + 2 \eta \psi_n M_2 \|w_n - u\| + (\eta \psi_n M_2)^2 - \nu_n (1 - \nu_n) \|w_n - y_n\|^2 \\ &\le \|w_n - u\|^2 - \nu_n (1 - \nu_n) \|w_n - y_n\|^2 + \xi_n, \end{aligned}$$

where *ξ<sup>n</sup>* = *ηψnM*<sup>2</sup> [*ηψnM*<sup>2</sup> + 2*M*1]. Thus we have *u* ∈ *Cn* for all *n* ≥ 0 and hence Fix(*T*) ⊂ *Cn* for all *n* ≥ 0. On the other hand, it is easy to see that Fix(*T*) ⊂ *C* = *Q*<sup>0</sup> when *n* = 0. Suppose that Fix(*T*) ⊂ *Qn*−1; by combining the fact that *xn* = *PCn*−1∩*Qn*−<sup>1</sup> *x*<sup>0</sup> with (5), we obtain ⟨*xn* − *z*, *xn* − *x*0⟩ ≤ 0 for any *z* ∈ *Cn*−<sup>1</sup> ∩ *Qn*−1. According to the induction assumption we have Fix(*T*) ⊂ *Cn*−<sup>1</sup> ∩ *Qn*−1, and it follows from the definition of *Qn* that Fix(*T*) ⊂ *Qn*. Therefore, we get Fix(*T*) ⊂ *Cn* ∩ *Qn* for all *n* ≥ 0.

**Step 2.** We prove that ‖*xn*+<sup>1</sup> − *xn*‖ → 0 as *n* → ∞. Combining the definition of *Qn* and Fix(*T*) ⊂ *Qn*, we obtain

$$\|x_n - x_0\| \le \|u - x_0\| \quad \text{for all } u \in \operatorname{Fix}(T).$$

We note that {*xn*} is bounded and

$$\|x_n - x_0\| \le \|x^* - x_0\|, \quad \text{where } x^* = P_{\operatorname{Fix}(T)} x_0. \tag{8}$$

Since *xn*+<sup>1</sup> ∈ *Qn*, we have ‖*xn* − *x*0‖ ≤ ‖*xn*+<sup>1</sup> − *x*0‖, which means that lim*n*→<sup>∞</sup> ‖*xn* − *x*0‖ exists. Using (6), one sees that

$$\|x_n - x_{n+1}\|^2 \le \|x_{n+1} - x_0\|^2 - \|x_n - x_0\|^2,$$

which implies that *xn*+<sup>1</sup> − *xn* → 0 as *n* → ∞. Next, by the definition of *wn*, we have

$$\|w_n - x_n\| = |\delta_n| \, \|x_n - x_{n-1}\| \le \delta_2 \|x_n - x_{n-1}\| \to 0 \ (n \to \infty),$$

which further yields that

$$\|w_n - x_{n+1}\| \le \|w_n - x_n\| + \|x_n - x_{n+1}\| \to 0 \ (n \to \infty).$$

**Step 3.** It remains to show that *xn* → *x*∗, where *x*<sup>∗</sup> = *P*Fix(*T*)*x*0. From *xn*+<sup>1</sup> ∈ *Cn* we get

$$\|z_n - x_{n+1}\|^2 \le \|w_n - x_{n+1}\|^2 - \nu_n \left(1 - \nu_n\right) \|w_n - y_n\|^2 + \xi_n.$$

Therefore,

$$\|z_n - x_{n+1}\| \le \|w_n - x_{n+1}\| + \sqrt{\xi_n}.$$

On the other hand, since *zn* = *νnwn* + (1 − *νn*) *Twn* + (1 − *νn*) *ηψndn* and *ν<sup>n</sup>* ≤ *ν*, we obtain

$$\begin{aligned} \|T w_n - w_n\| &= \frac{1}{1 - \nu_n} \|z_n - w_n - (1 - \nu_n)\, \eta \psi_n d_n\| \\ &\le \frac{1}{1 - \nu} \|z_n - w_n\| + \eta \psi_n \|d_n\| \\ &\le \frac{1}{1 - \nu} \left( \|z_n - x_{n+1}\| + \|w_n - x_{n+1}\| \right) + \eta \psi_n M_2 \\ &\le \frac{1}{1 - \nu} \left( 2 \|w_n - x_{n+1}\| + \sqrt{\xi_n} \right) + \eta \psi_n M_2 \to 0 \ (n \to \infty). \end{aligned}$$

Consequently,

$$\begin{aligned} \|T x_n - x_n\| &\le \|T x_n - T w_n\| + \|T w_n - w_n\| + \|w_n - x_n\| \\ &\le 2 \|w_n - x_n\| + \|T w_n - w_n\| \to 0 \ (n \to \infty). \end{aligned} \tag{9}$$

In view of (9) and Lemma 1, every weak limit point of {*xn*} is a fixed point of *T*, i.e., *ω<sup>w</sup>*{*xn*} ⊂ Fix(*T*). By means of Lemma 3 and inequality (8), we get that {*xn*} converges to *P*Fix(*T*)*x*<sup>0</sup> in norm. The proof is complete.

**Theorem 3.** *Let C be a bounded closed convex subset of a real Hilbert space H and let T* : *C* → *C be a nonexpansive mapping with* Fix(*T*) ≠ ∅*. Assume that the following conditions are satisfied:*

$$\eta > 0, \quad \{\delta_n\} \subset [\delta_1, \delta_2], \ \delta_1 \in (-\infty, 0], \ \delta_2 \in [0, \infty), \quad \{\psi_n\} \subset [0, \infty), \ \lim_{n \to \infty} \psi_n = 0, \quad \{\nu_n\} \subset (0, \nu], \ 0 < \nu < 1.$$

*Set x*−1, *x*<sup>0</sup> ∈ *H arbitrarily and set d*<sup>0</sup> := (*Tx*<sup>0</sup> − *x*0)/*η. Define a sequence* {*xn*} *by the following:*

$$\begin{cases} \begin{aligned} &w\_{n} = \mathbf{x}\_{n} + \delta\_{n}(\mathbf{x}\_{n} - \mathbf{x}\_{n-1}) \\ &d\_{n+1} = \frac{1}{\eta} \left(Tw\_{n} - w\_{n}\right) + \psi\_{n}d\_{n} \\ &y\_{n} = w\_{n} + \eta d\_{n+1} \\ &z\_{n} = \nu\_{n}w\_{n} + \left(1 - \nu\_{n}\right)y\_{n} \\ &\mathbb{C}\_{n+1} = \left\{z \in \mathbb{C}\_{n} : \left\|z\_{n} - z\right\|^{2} \le \left\|w\_{n} - z\right\|^{2} - \nu\_{n}\left(1 - \nu\_{n}\right)\left\|w\_{n} - y\_{n}\right\|^{2} + \xi\_{n}\right\}, \\ &x\_{n+1} = P\_{\mathbb{C}\_{n+1}}x\_{0}, \quad n \ge 0. \end{aligned} \end{cases} \tag{10}$$

*where the sequence* $\{\xi_n\}$ *is defined by* $\xi_n := \eta \psi_n M_2 \left[ \eta \psi_n M_2 + 2 M_1 \right]$*,* $M_1 := \operatorname{diam} C = \sup_{x, y \in C} \|x - y\|$ *and* $M_2 := \max\left\{ \max_{1 \le k \le n_0} \|d_k\|, \frac{2}{\eta} M_1 \right\}$*, where* $n_0$ *satisfies* $\psi_n \le \frac{1}{2}$ *for all* $n \ge n_0$*. Then the iterative sequence* {*xn*} *defined by* (10) *converges to P*Fix(*T*)*x*<sup>0</sup> *in norm.*

**Proof.** We divide the proof into three steps.

**Step 1.** Our first goal is to show that Fix(*T*) ⊂ *Cn*+<sup>1</sup> for all *n* ≥ 0. According to Step 1 in Theorem 2, for all *u* ∈ Fix(*T*), we obtain

$$\|z_n - u\|^2 \le \|w_n - u\|^2 - \nu_n \left(1 - \nu_n\right) \|w_n - y_n\|^2 + \xi_n.$$

Therefore, *u* ∈ *Cn*+<sup>1</sup> for each *n* ≥ 0 and hence Fix(*T*) ⊂ *Cn*+<sup>1</sup> ⊂ *Cn*.

**Step 2.** The next thing to do is to show that ‖*xn*+<sup>1</sup> − *xn*‖ → 0 as *n* → ∞. Using the fact that *xn* = *PCn x*<sup>0</sup> and Fix(*T*) ⊂ *Cn*, we have

$$\|x_n - x_0\| \le \|u - x_0\| \quad \text{for all } u \in \operatorname{Fix}(T).$$

It follows that {*xn*} is bounded. In addition, we note that

$$\left\|\mathbf{x}\_{n} - \mathbf{x}\_{0}\right\| \le \left\|\mathbf{x}^{\*} - \mathbf{x}\_{0}\right\|, \quad \text{where } \mathbf{x}^{\*} = P\_{\text{Fix}(T)} \mathbf{x}\_{0}. \tag{11}$$

On the other hand, since *xn*+<sup>1</sup> ∈ *Cn*, we obtain ‖*xn* − *x*0‖ ≤ ‖*xn*+<sup>1</sup> − *x*0‖, which implies that lim*n*→<sup>∞</sup> ‖*xn* − *x*0‖ exists. In view of (6), we have

$$\|x_n - x_{n+1}\|^2 \le \|x_{n+1} - x_0\|^2 - \|x_n - x_0\|^2,$$

which further implies that lim*n*→<sup>∞</sup> ‖*xn*+<sup>1</sup> − *xn*‖ = 0. Also, we have lim*n*→<sup>∞</sup> ‖*wn* − *xn*‖ = 0 and lim*n*→<sup>∞</sup> ‖*wn* − *xn*+1‖ = 0.

**Step 3.** Finally, we have to show that *xn* → *x*∗, where *x*<sup>∗</sup> = *P*Fix(*T*)*x*0. The remainder of the argument is analogous to that in Theorem 2 and is left to the reader.

**Remark 1.** *We remark here that the modified inertial hybrid projection algorithm* (7) *(in short, MIHPA) and the modified inertial shrinking projection algorithm* (10) *(in short, MISPA) contain some previously known results. When δ<sup>n</sup>* = 0 *and ψ<sup>n</sup>* = 0*, the MIHPA becomes the hybrid projection algorithm (in short, HPA) proposed by Nakajo and Takahashi [17] and the MISPA becomes the shrinking projection algorithm (in short, SPA) proposed by Takahashi, Takeuchi, and Kubota [18]. When δ<sup>n</sup>* = 0 *and ψ<sup>n</sup>* ≠ 0*, the MIHPA becomes the modified hybrid projection algorithm (in short, MHPA) proposed by Dong et al. [35], and the MISPA becomes the modified shrinking projection algorithm (in short, MSPA).*

#### **4. Numerical Experiments**

In this section, we provide three numerical applications to demonstrate the computational performance of our proposed algorithms and compare them with some existing ones. All the programs are implemented in MATLAB R2018a on a personal computer with an Intel(R) Core(TM) i5-8250U CPU @ 1.60 GHz and 8.00 GB RAM.

**Example 1.** *As a first example, we consider the convex feasibility problem: given nonempty closed convex sets $C_i \subset \mathbb{R}^N$ $(i = 0, 1, \ldots, m)$, find $x^* \in C := \bigcap_{i=0}^{m} C_i$, where we assume that $C \neq \emptyset$. A mapping $T : \mathbb{R}^N \to \mathbb{R}^N$ is defined by $T := P_0 \left( \frac{1}{m} \sum_{i=1}^{m} P_i \right)$, where $P_i = P_{C_i}$ stands for the metric projection onto $C_i$. Since each $P_i$ is nonexpansive, the mapping T is also nonexpansive. Furthermore, we note that $\operatorname{Fix}(T) = \operatorname{Fix}(P_0) \cap \bigcap_{i=1}^{m} \operatorname{Fix}(P_i) = C_0 \cap \bigcap_{i=1}^{m} C_i = C$. In this experiment, we take each $C_i$ to be a closed ball with center $c_i \in \mathbb{R}^N$ and radius $r_i > 0$; thus $P_i$ can be computed with*

$$P\_i(\mathbf{x}) := \begin{cases} \ c\_i + \frac{r\_i}{||c\_i - \mathbf{x}||} \left( \mathbf{x} - c\_i \right), & \text{if } ||c\_i - \mathbf{x}|| > r\_i \\\ \mathbf{x}, & \text{if } ||c\_i - \mathbf{x}|| \le r\_i. \end{cases}$$

*Choose $r_i = 1$ $(i = 0, 1, \ldots, m)$, $\mathbf{c}_0 = [0, 0, \ldots, 0]$, $\mathbf{c}_1 = [1, 0, \ldots, 0]$, and $\mathbf{c}_2 = [-1, 0, \ldots, 0]$; $\mathbf{c}_i$ is randomly selected from $(-1/\sqrt{N}, 1/\sqrt{N})^N$ $(i = 3, \ldots, m)$. We have $\operatorname{Fix}(T) = \{\mathbf{0}\}$ from the special choice of $\mathbf{c}_1$, $\mathbf{c}_2$ and $r_1$, $r_2$. In Algorithms* (7) *and* (10)*, we set $m = 30$, $N = 30$, $\eta = 1$, $\psi_n = \frac{1}{100(n+1)^2}$, $\nu_n = 0.1$. The iteration stops when the iteration error $E_n = \|x_n - T x_n\|^2 < 10^{-2}$ is satisfied. We test our algorithms under different inertial parameters and initial values. Results are shown in Table 1, where "Iter." represents the number of iterations.*
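The construction of T in Example 1 can be sketched in a few lines of Python under the stated choices of the centers and radii. For illustration we iterate T directly in Picard fashion (not the paper's Algorithms (7) and (10)); this converges here because T, as a composition of averaged projections, is itself averaged. The random-seed choice is ours.

```python
import numpy as np

def proj_ball(x, c, r):
    """Metric projection P_i onto the closed ball B[c, r]."""
    d = np.linalg.norm(x - c)
    return x if d <= r else c + (r / d) * (x - c)

rng = np.random.default_rng(0)
N, m = 30, 30
centers = [np.zeros(N), np.eye(N)[0], -np.eye(N)[0]]          # c_0, c_1, c_2
centers += [rng.uniform(-1 / np.sqrt(N), 1 / np.sqrt(N), N) for _ in range(m - 2)]
radii = [1.0] * (m + 1)

def T(x):
    """T := P_0( (1/m) * sum_{i=1}^m P_i ); Fix(T) = {0} for this data."""
    avg = sum(proj_ball(x, centers[i], radii[i]) for i in range(1, m + 1)) / m
    return proj_ball(avg, centers[0], radii[0])

x = rng.standard_normal(N)
for _ in range(10_000):
    Tx = T(x)
    if np.linalg.norm(x - Tx) ** 2 < 1e-2:   # stopping rule E_n < 10^-2
        break
    x = Tx
print(np.linalg.norm(x - T(x)) ** 2)          # below the tolerance 10^-2
```

Note that the loose tolerance $10^{-2}$ does not force the final iterate close to the unique common point $\mathbf{0}$, since the balls around $\mathbf{c}_1$ and $\mathbf{c}_2$ meet tangentially there.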

**Table 1.** Computational results for Example 1.


**Example 2.** *Another example considers the following variational inequality problem (in short, VI): for a nonempty closed convex set $C \subset \mathbb{R}^N$,*

$$\text{find } \mathbf{x}^\* \in \mathbb{C} \text{ such that } \langle f\left(\mathbf{x}^\*\right), \mathbf{x} - \mathbf{x}^\*\rangle \ge 0, \quad \forall \mathbf{x} \in \mathbb{C}, \tag{12}$$

*where $f : \mathbb{R}^N \to \mathbb{R}^N$ is a mapping. Let $\operatorname{VI}(C, f)$ denote the solution set of VI* (12)*. A mapping $T : \mathbb{R}^N \to \mathbb{R}^N$ is defined by $T := P_C(I - \gamma f)$, where $0 < \gamma < 2/L$ and L is the Lipschitz constant of the mapping f. In [39], Xu showed that T is an averaged mapping, i.e., T can be written as the average of the identity mapping I and a nonexpansive mapping. Since $\operatorname{Fix}(T) = \operatorname{VI}(C, f)$, we can solve VI* (12) *by finding a fixed point of T. Take $f : \mathbb{R}^2 \to \mathbb{R}^2$ as follows:*

$$f(x, y) = \left( 2x + 2y + \sin(x), \ -2x + 2y + \sin(y) \right), \quad \forall x, y \in \mathbb{R}.$$

*The feasible set C is given by $C = \{ x \in \mathbb{R}^2 : -10\mathbf{e} \le x \le 10\mathbf{e} \}$, where $\mathbf{e} = (1, 1)^T$. It is not hard to check that f is Lipschitz continuous with constant $L = \sqrt{26}$ and 1-strongly monotone [40]. Therefore, VI* (12) *has a unique solution $\mathbf{x}^* = (0, 0)^T$.*

*We use Algorithm* (7) *(MIHPA), Algorithm* (10) *(MISPA), the modified hybrid projection algorithm (MHPA), the modified shrinking projection algorithm (MSPA), the hybrid projection algorithm (HPA), and the shrinking projection algorithm (SPA) to solve Example 2. We set $\gamma = 0.9/\sqrt{26}$, $\eta = 1$, $\psi_n = \frac{1}{100(n+1)^2}$, $\nu_n = 0$ (since T is an averaged mapping). The initial values are randomly generated by the MATLAB function rand(2,1). We use $E_n = \|x_n - x^*\|^2$ to denote the iteration error and a maximum of 300 iterations as the stopping criterion. Results are reported in Table 2, where "Iter." denotes the number of iterations.*
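A minimal sketch of the fixed-point formulation of Example 2 (not of the projection algorithms themselves): we iterate $T = P_C(I - \gamma f)$ directly. As an assumption of this sketch, we take the more conservative step size $\gamma = \mu/L^2 = 1/26$, which makes $I - \gamma f$ a contraction (f is 1-strongly monotone and $\sqrt{26}$-Lipschitz) so plain Picard iteration provably converges; the paper's experiments instead use $\gamma = 0.9/\sqrt{26}$ inside the projection algorithms.

```python
import numpy as np

def f(v):
    """The mapping of Example 2: 1-strongly monotone, sqrt(26)-Lipschitz."""
    x, y = v
    return np.array([2 * x + 2 * y + np.sin(x), -2 * x + 2 * y + np.sin(y)])

def proj_box(v, lo=-10.0, hi=10.0):
    """Projection onto the feasible box C = [-10, 10]^2."""
    return np.clip(v, lo, hi)

gamma = 1.0 / 26.0           # mu / L^2: makes I - gamma*f a contraction
x = np.array([7.3, -4.1])    # arbitrary starting point inside C
for _ in range(5000):
    x_new = proj_box(x - gamma * f(x))   # one step of T = P_C(I - gamma f)
    done = np.linalg.norm(x_new - x) < 1e-12
    x = x_new
    if done:
        break
print(x)  # approaches the unique solution (0, 0)
```

The contraction factor here is $\sqrt{1 - 2\gamma\mu + \gamma^2 L^2} = \sqrt{25/26} \approx 0.98$, so convergence is linear but slow, which is exactly the behavior the inertial modifications aim to improve.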

**Table 2.** Computational results for Example 2.


**Example 3.** *The Fermat–Weber problem is a famous model in location theory. It can be formulated mathematically as the problem of finding $\mathbf{x} \in \mathbb{R}^n$ that solves*

$$\min\_{\mathbf{x}} \left\{ f(\mathbf{x}) := \sum\_{i=1}^{m} \omega\_i \left\| \mathbf{x} - \mathbf{a}\_i \right\|\_2 \right\} \tag{13}$$

*where $\omega_i > 0$ are given weights and $\mathbf{a}_i \in \mathbb{R}^n$ are anchor points. It is easy to check that the objective function f in* (13) *is convex and coercive; therefore, the problem has a nonempty solution set. It should be noted that f is not differentiable at the anchor points. The most famous method for solving problem* (13) *is the Weiszfeld algorithm; see [41] for more discussion. Weiszfeld proposed the fixed point algorithm $\mathbf{x}_{n+1} = T(\mathbf{x}_n)$, $n \in \mathbb{N}$, where the mapping $T : \mathbb{R}^n \setminus \mathbf{A} \to \mathbb{R}^n$ is defined by*

$$T(\mathbf{x}) := \frac{1}{\sum_{i=1}^{m} \frac{\omega_i}{\|\mathbf{x} - \mathbf{a}_i\|}} \sum_{i=1}^{m} \frac{\omega_i \mathbf{a}_i}{\|\mathbf{x} - \mathbf{a}_i\|},$$

*with $\mathbf{A} = \{\mathbf{a}_1, \mathbf{a}_2, \ldots, \mathbf{a}_m\}$. We consider a small example with $n = 2$ and $m = 4$ anchor points,*

$$\mathbf{a}_1 = \begin{pmatrix} 0 \\ 0 \end{pmatrix}, \quad \mathbf{a}_2 = \begin{pmatrix} 10 \\ 0 \end{pmatrix}, \quad \mathbf{a}_3 = \begin{pmatrix} 0 \\ 10 \end{pmatrix}, \quad \mathbf{a}_4 = \begin{pmatrix} 10 \\ 10 \end{pmatrix},$$

*and $\omega_i = 1$ for all i. It follows from the special selection of the anchor points $\mathbf{a}_i$ $(i = 1, 2, 3, 4)$ that the optimal solution of* (13) *is $\mathbf{x}^* = (5, 5)^T$.*

*We use the same algorithms as in Example 2 with the following parameter settings: $\eta = 1$, $\psi_n = \frac{1}{100(n+1)^2}$, $\nu_n = 0.1$. We use $E_n = \|x_n - x^*\|^2 < 10^{-4}$ or a maximum of 300 iterations as the stopping criterion. The initial values are randomly generated by the MATLAB function 10rand(2,1). Figures 1 and 2 show the convergence behavior of the iterative sequence* {*xn*} *and of the iteration error E<sup>n</sup>, respectively.*
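A compact Python sketch of the Weiszfeld iteration for this instance follows. As assumptions of the sketch, we stop on the step length rather than on $E_n$ (since $\mathbf{x}^*$ is known here only by symmetry), and the seed mimicking MATLAB's `10*rand(2,1)` is arbitrary.

```python
import numpy as np

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
weights = np.ones(len(anchors))

def weiszfeld_step(x):
    """One Weiszfeld update T(x) = (sum w_i a_i / d_i) / (sum w_i / d_i);
    undefined at the anchor points themselves."""
    d = np.linalg.norm(x - anchors, axis=1)   # distances ||x - a_i||
    coef = weights / d
    return (coef @ anchors) / coef.sum()

rng = np.random.default_rng(1)
x = 10 * rng.random(2)                        # initial value, cf. 10*rand(2,1)
for _ in range(300):                          # maximum of 300 iterations
    x_new = weiszfeld_step(x)
    step = np.linalg.norm(x_new - x)
    x = x_new
    if step < 1e-8:
        break
print(x)  # converges to the Fermat-Weber point (5, 5)
```

For this symmetric configuration the linearization of T at $(5, 5)^T$ has spectral radius $1/2$, so the iteration contracts quickly once near the solution.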

**Figure 1.** Convergence process at different initial values for Example 3.

**Figure 2.** Convergence behavior of iteration error {*En*} for Example 3.

**Remark 2.** *From Examples 1–3, we know that our proposed algorithms are effective and easy to implement. Moreover, initial values do not affect the computational performance of our algorithms. However, it should be mentioned that the MIHPA, MISPA, MHPA, and MSPA algorithms are slower and less accurate than the HPA and SPA algorithms; the acceleration effect may be cancelled by the projections onto the sets $C_n \cap Q_n$ and $C_{n+1}$.*

#### **5. Conclusions**

In this paper, we proposed two modified inertial hybrid and shrinking projection algorithms based on the inertial method, the Picard algorithm, the conjugate gradient method, and the projection method. We established strong convergence theorems for them under suitable conditions. However, the numerical experiments showed that our algorithms do not accelerate the previously known algorithms.

**Author Contributions:** Supervision, S.L.; Writing—original draft, B.T. and S.X. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research received no external funding.

**Acknowledgments:** We greatly appreciate the reviewers for their helpful comments and suggestions.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## *Article* **Convergence Theorems for Modified Implicit Iterative Methods with Perturbation for Pseudocontractive Mappings**

#### **Jong Soo Jung**

Department of Mathematics, Dong-a University, Busan 49315, Korea; jungjs@dau.ac.kr

Received: 14 October 2019; Accepted: 26 December 2019; Published: 2 January 2020

**Abstract:** In this paper, first, we introduce a path for a convex combination of a pseudocontractive type of mappings with a perturbed mapping and prove strong convergence of the proposed path in a real reflexive Banach space having a weakly continuous duality mapping. Second, we propose two modified implicit iterative methods with a perturbed mapping for a continuous pseudocontractive mapping in the same Banach space. Strong convergence theorems for the proposed iterative methods are established. The results in this paper substantially develop and complement the previous well-known results in this area.

**Keywords:** modified implicit iterative methods with perturbed mapping; pseudocontractive mapping; strongly pseudocontractive mapping; nonexpansive mapping; weakly continuous duality mapping; fixed point

#### **1. Introduction**

Let *E* be a real Banach space, and let *E*<sup>∗</sup> be the dual space of *E*. Let *C* be a nonempty closed convex subset of *E*. Recall that a mapping *f* : *C* → *C* is called *contractive* if there exists *k* ∈ (0, 1) such that ‖*f x* − *f y*‖ ≤ *k*‖*x* − *y*‖, ∀*x*, *y* ∈ *C*, and that a mapping *S* : *C* → *C* is called *nonexpansive* if ‖*Sx* − *Sy*‖ ≤ ‖*x* − *y*‖, ∀*x*, *y* ∈ *C*.

Let *J* denote the normalized duality mapping from *E* into 2<sup>*E*∗</sup> defined by

$$J(x) = \{ f \in E^* : \langle x, f \rangle = \|x\| \|f\|, \ \|f\| = \|x\| \}, \quad x \in E,$$

where ⟨·, ·⟩ denotes the generalized duality pairing between *E* and *E*<sup>∗</sup>. The mapping *T* : *C* → *C* is called *pseudocontractive* (respectively, *strongly pseudocontractive*) if there exists *j*(*x* − *y*) ∈ *J*(*x* − *y*) such that

$$\langle Tx - Ty, j(x - y) \rangle \le \|x - y\|^2, \quad \forall x, y \in C,$$

(respectively, ⟨*Tx* − *Ty*, *j*(*x* − *y*)⟩ ≤ *β*‖*x* − *y*‖<sup>2</sup> for some *β* ∈ (0, 1)).

The class of pseudocontractive mappings is one of the most important classes of mappings in nonlinear analysis, and it has been attracting mathematician's interest. Apart from them being a generalization of nonexpansive mappings, interest in pseudocontractive mappings stems mainly from their firm connection with the class of accretive mappings, where a mapping *A* with domain *D*(*A*) and range *R*(*A*) in *E* is called accretive if the inequality

$$\|x - y\| \le \|x - y + s(Ax - Ay)\|$$

holds for every *x*, *y* ∈ *D*(*A*) and for all *s* > 0.
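A standard observation (not stated in the paper but classical) makes the connection concrete: in a Hilbert space, accretivity reduces to monotonicity. Indeed, expanding the squared norm gives, for all *s* > 0,

$$\|x - y + s(Ax - Ay)\|^2 = \|x - y\|^2 + 2s\langle Ax - Ay, x - y \rangle + s^2\|Ax - Ay\|^2,$$

so the defining inequality holds for every *s* > 0 if and only if ⟨*Ax* − *Ay*, *x* − *y*⟩ ≥ 0 (let *s* → 0 for the forward direction).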

Over the past 50 years or so, many authors have devoted their studies to the existence of zeros of accretive mappings or fixed points of pseudocontractive mappings, and to iterative methods for finding such zeros or fixed points. We refer to References [1–14] and the references therein.

In 2007, Morales [15] introduced the following viscosity iterative method for pseudocontractive mapping:

$$\mathbf{x}\_t = tf\mathbf{x}\_t + (1-t)T\mathbf{x}\_t, \ t \in (0,1), \tag{1}$$

where *T* : *C* → *E* is a continuous pseudocontractive mapping satisfying the weakly inward condition and *f* : *C* → *C* is a bounded continuous strongly pseudocontractive mapping. In a reflexive Banach space with a uniformly Gâteaux differentiable norm such that every closed convex bounded subset of *C* has the fixed point property for nonexpansive self-mappings, he proved the strong convergence of the sequences generated by the iterative method in Equation (1) to a point *q* in *Fix*(*T*) (the set of fixed points of *T*), where *q* is the unique solution to the following variational inequality:

$$
\langle fq - q, j(p - q) \rangle \le 0, \quad \forall p \in \text{Fix}(T). \tag{2}
$$

In 2009, using the method of Reference [16], Ceng et al. [17] introduced the following modified viscosity iterative method and modified implicit viscosity iterative method with a perturbed mapping for a pseudocontractive mapping:

$$x_t = t f x_t + r_t S x_t + (1 - t - r_t) T x_t, \quad t \in (0, 1), \tag{3}$$

where 0 < *r<sub>t</sub>* < 1 − *t*, *T* : *C* → *C* is a continuous pseudocontractive mapping, *S* : *C* → *C* is a nonexpansive mapping, and *f* : *C* → *C* is a Lipschitz strongly pseudocontractive mapping. They also introduced the following iterative schemes with a perturbed mapping:

$$\begin{cases} y_n = \alpha_n x_n + (1 - \alpha_n) T y_n, \\ x_{n+1} = \beta_n f y_n + \gamma_n S y_n + (1 - \beta_n - \gamma_n) y_n, \end{cases} \tag{4}$$

and

$$\begin{cases} x_n = \alpha_n y_n + (1 - \alpha_n) T y_n, \\ y_n = \beta_n f x_{n-1} + \gamma_n S x_{n-1} + (1 - \beta_n - \gamma_n) x_{n-1}, \end{cases} \tag{5}$$

where *f* : *C* → *C* is a contractive mapping, *x*<sub>0</sub> ∈ *C* is an arbitrary initial point, and {*α<sub>n</sub>*}, {*β<sub>n</sub>*}, {*γ<sub>n</sub>*} ⊂ (0, 1] are such that lim<sub>*n*→∞</sub>(*γ<sub>n</sub>*/*β<sub>n</sub>*) = 0 and *β<sub>n</sub>* + *γ<sub>n</sub>* < 1. In a reflexive and strictly convex Banach space with a uniformly Gâteaux differentiable norm, they proved the strong convergence of the sequences generated by the iterative methods in Equations (3)–(5) to a point *q* in *Fix*(*T*), where *q* is the unique solution to the variational inequality in Equation (2). Their results developed and improved the corresponding results of Song and Chen [11], Zeng and Yao [16], Xu [18], Xu and Ori [19], and Chen et al. [20].

In this paper, continuing this line of research, in a reflexive Banach space having a weakly sequentially continuous duality mapping *J<sub>ϕ</sub>* with gauge function *ϕ*, we consider the viscosity iterative methods in Equations (3)–(5) for a continuous pseudocontractive mapping *T*, a continuous bounded strongly pseudocontractive mapping *f*, and a nonexpansive mapping *S*. We establish strong convergence of the sequences generated by the proposed iterative methods to a fixed point of the mapping *T*, which solves a variational inequality related to *f*. The main results develop and supplement the corresponding results of Song and Chen [11], Morales [15], Ceng et al. [17], and Xu [18] in a different Banach space setting, as well as those of Zeng and Yao [16], Xu and Ori [19], Chen et al. [20], and the references therein.

#### **2. Preliminaries**

Throughout the paper, we use the following notations: "⇀" for weak convergence, "⇀<sup>∗</sup>" for weak<sup>∗</sup> convergence, and "→" for strong convergence.

Let *E* be a real Banach space with norm ‖·‖, and let *E*<sup>∗</sup> be its dual. The value of *x*<sup>∗</sup> ∈ *E*<sup>∗</sup> at *x* ∈ *E* will be denoted by ⟨*x*, *x*<sup>∗</sup>⟩. Let *C* be a nonempty closed convex subset of *E*, and let *T* : *C* → *C* be a mapping. We denote the set of fixed points of the mapping *T* by *Fix*(*T*); that is, *Fix*(*T*) := {*x* ∈ *C* : *Tx* = *x*}.

Recall that a Banach space *E* is said to be *smooth* if for each *x* ∈ *S<sub>E</sub>* = {*x* ∈ *E* : ‖*x*‖ = 1}, there exists a unique functional *j<sub>x</sub>* ∈ *E*<sup>∗</sup> such that ⟨*x*, *j<sub>x</sub>*⟩ = ‖*x*‖ and ‖*j<sub>x</sub>*‖ = 1, and that a Banach space *E* is said to be *strictly convex* [21] if the following implication holds for *x*, *y* ∈ *E*:

$$\|x\| \le 1, \quad \|y\| \le 1, \quad \|x - y\| > 0 \implies \left\| \frac{x + y}{2} \right\| < 1.$$

By a gauge function, we mean a continuous strictly increasing function *ϕ* defined on R<sup>+</sup> := [0, ∞) such that *ϕ*(0) = 0 and lim<sub>*r*→∞</sub> *ϕ*(*r*) = ∞. The mapping *J<sub>ϕ</sub>* : *E* → 2<sup>*E*∗</sup> defined by

$$J_{\varphi}(x) = \{ f \in E^* : \langle x, f \rangle = \|x\| \|f\|, \ \|f\| = \varphi(\|x\|) \} \quad \text{for all } x \in E$$

is called the *duality mapping* with gauge function *ϕ*. In particular, the duality mapping with gauge function *ϕ*(*t*) = *t* denoted by *J* is referred to as the *normalized duality mapping*. It is known that a Banach space *E* is smooth if and only if the normalized duality mapping *J* is single-valued. The following property of duality mapping is also well-known:

$$J_{\varphi}(\lambda x) = \operatorname{sign} \lambda \left( \frac{\varphi(|\lambda| \cdot \|x\|)}{\|x\|} \right) J(x) \quad \text{for all } x \in E \setminus \{0\}, \ \lambda \in \mathbb{R}, \tag{6}$$

where R is the set of all real numbers. The following are some elementary properties of the duality mapping *J* [21,22]:


We say that a Banach space *E* has a *weakly continuous duality mapping* if there exists a gauge function *ϕ* such that the duality mapping *J<sub>ϕ</sub>* is single-valued and continuous from the weak topology to the weak<sup>∗</sup> topology; that is, for any {*x<sub>n</sub>*} ⊂ *E* with *x<sub>n</sub>* ⇀ *x*, *J<sub>ϕ</sub>*(*x<sub>n</sub>*) ⇀<sup>∗</sup> *J<sub>ϕ</sub>*(*x*). A duality mapping *J<sub>ϕ</sub>* is weakly continuous at 0 if *J<sub>ϕ</sub>* is single-valued and *x<sub>n</sub>* ⇀ 0 implies *J<sub>ϕ</sub>*(*x<sub>n</sub>*) ⇀<sup>∗</sup> 0. For example, every *l*<sup>*p*</sup> space (1 < *p* < ∞) has a weakly continuous duality mapping with gauge function *ϕ*(*t*) = *t*<sup>*p*−1</sup> [21–23]. Set

$$\Phi(t) = \int\_0^t \varphi(\tau)d\tau \quad \text{for all } t \in \mathbb{R}^+.$$

Then it is known that *J<sub>ϕ</sub>*(*x*) is the subdifferential of the convex functional Φ(‖·‖) at *x*. If a Banach space *E* has a weakly continuous duality mapping, then *E* satisfies Opial's property; this means that whenever *x<sub>n</sub>* ⇀ *x* and *y* ≠ *x*, we have lim sup<sub>*n*→∞</sub> ‖*x<sub>n</sub>* − *x*‖ < lim sup<sub>*n*→∞</sub> ‖*x<sub>n</sub>* − *y*‖ [21,23].
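A standard concrete instance (a well-known computation, included here for orientation): in *l*<sup>*p*</sup> (1 < *p* < ∞) with gauge function *ϕ*(*t*) = *t*<sup>*p*−1</sup>, one has

$$\Phi(t) = \int_0^t \tau^{p-1}\, d\tau = \frac{t^p}{p}, \qquad J_{\varphi}(x) = \left( |x_1|^{p-2}x_1, |x_2|^{p-2}x_2, \dots \right),$$

since then ⟨*x*, *J<sub>ϕ</sub>*(*x*)⟩ = ‖*x*‖<sup>*p*</sup><sub>*p*</sub> = ‖*x*‖<sub>*p*</sub> *ϕ*(‖*x*‖<sub>*p*</sub>) and ‖*J<sub>ϕ</sub>*(*x*)‖<sub>*q*</sub> = ‖*x*‖<sup>*p*−1</sup><sub>*p*</sub> = *ϕ*(‖*x*‖<sub>*p*</sub>), where 1/*p* + 1/*q* = 1.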

The following lemma is Lemma 2.1 of Jung [24].

**Lemma 1.** ([24]) *Let E be a reflexive Banach space having a weakly continuous duality mapping Jϕ with gauge function ϕ. Let* {*xn*} *be a bounded sequence of E and f* : *E* → *E be a continuous mapping. Let g* : *E* → R *be defined by*

$$g(z) = \limsup_{n \to \infty} \langle z - fz, J_{\varphi}(z - x_n) \rangle$$

*for z* ∈ *E. Then, g is a real valued continuous function on E.*

We need the following well-known lemma for the proof of our main result [21,22].

**Lemma 2.** *Let E be a real Banach space, and let ϕ be a continuous strictly increasing function on* R<sup>+</sup> *such that ϕ*(0) = 0 *and* lim*r*→<sup>∞</sup> *ϕ*(*r*) = ∞*. Define*

$$\Phi(t) = \int_0^t \varphi(\tau)\, d\tau \quad \text{for all } t \in \mathbb{R}^+.$$

*Then, the following inequalities hold:*

$$
\Phi(kt) \le k \Phi(t), \ 0 < k < 1,
$$

$$\Phi(\|x + y\|) \le \Phi(\|x\|) + \langle y, j_{\varphi}(x + y) \rangle \quad \text{for all } x, y \in E,$$

*where jϕ*(*x* + *y*) ∈ *Jϕ*(*x* + *y*)*.*

The following lemma can be found in Reference [18].

**Lemma 3.** ([18]) *Let* {*sn*} *be a sequence of nonnegative real numbers satisfying*

$$s_{n+1} \le (1 - \lambda_n)s_n + \lambda_n \delta_n, \quad n \ge 0,$$

*where* {*λn*} *and* {*δn*} *satisfy the following conditions*:
*(i)* {*λ<sub>n</sub>*} ⊂ [0, 1] *and* ∑<sub>*n*=0</sub><sup>∞</sup> *λ<sub>n</sub>* = ∞*; (ii)* lim sup<sub>*n*→∞</sub> *δ<sub>n</sub>* ≤ 0 *or* ∑<sub>*n*=0</sub><sup>∞</sup> |*λ<sub>n</sub>δ<sub>n</sub>*| < ∞*.*

*Then,* lim*n*→<sup>∞</sup> *sn* = 0*.*
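As a quick numerical illustration of Lemma 3 (ours, not from the paper), take *λ<sub>n</sub>* = 1/(*n* + 1) (so that ∑*λ<sub>n</sub>* = ∞) and *δ<sub>n</sub>* = 1/(*n* + 1) → 0, with equality in the recursion:

```python
# Iterate s_{n+1} = (1 - lam_n) * s_n + lam_n * delta_n, the extreme case of
# the inequality in Lemma 3, with lam_n = delta_n = 1/(n+1).
s = 1.0
for n in range(100_000):
    lam = 1.0 / (n + 1)
    s = (1.0 - lam) * s + lam * (1.0 / (n + 1))
# Multiplying the recursion by (n+1) gives (n+1) * s_{n+1} = n * s_n + 1/(n+1),
# so (n+1) * s_{n+1} equals the harmonic sum H_{n+1}; hence s_n ~ ln(n)/n -> 0.
```

After 10<sup>5</sup> iterations *s<sub>n</sub>* is of order 10<sup>−4</sup>, consistent with the lemma's conclusion lim *s<sub>n</sub>* = 0.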

Let *C* be a nonempty closed convex subset of a real Banach space *E*. Recall that *S* : *C* → *C* is called *accretive* if *I* − *S* is pseudocontractive. If *T* : *C* → *C* is a pseudocontractive mapping, then *I* − *T* is accretive. We denote *A* = *J*<sub>1</sub> = (2*I* − *T*)<sup>−1</sup>. Then *Fix*(*A*) = *Fix*(*T*), and the operator *A* : *R*(2*I* − *T*) → *C* is nonexpansive and single-valued, where *I* denotes the identity mapping.

We also need the following result which can be found in Reference [11].

**Lemma 4.** ([11]) *Let C be a nonempty closed convex subset of a real Banach space E, and let T* : *C* → *C be a continuous pseudocontractive mapping. We denote A* = (2*<sup>I</sup>* <sup>−</sup> *<sup>T</sup>*)−1*.*

*(i) The mapping A is a nonexpansive self-mapping on C; i.e., for all x*, *y* ∈ *C, there holds*

‖*Ax* − *Ay*‖ ≤ ‖*x* − *y*‖, *and Ax* ∈ *C*.

*(ii) If* lim<sub>*n*→∞</sub> ‖*x<sub>n</sub>* − *Tx<sub>n</sub>*‖ = 0*, then* lim<sub>*n*→∞</sub> ‖*x<sub>n</sub>* − *Ax<sub>n</sub>*‖ = 0*.*
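Lemma 4 can be made concrete on the real line. The following one-dimensional illustration is ours, not from the paper: *T*(*x*) = 2 − *x* is nonexpansive, hence continuous pseudocontractive, and its resolvent *A* = (2*I* − *T*)<sup>−1</sup> is single-valued, nonexpansive, and has the same fixed point.

```python
def T(x):
    """T(x) = 2 - x: nonexpansive, hence pseudocontractive; Fix(T) = {1}."""
    return 2.0 - x

def A(y):
    """A = (2I - T)^{-1}: here (2I - T)(x) = 3x - 2, so A(y) = (y + 2) / 3."""
    return (y + 2.0) / 3.0

# A is in fact a 1/3-Lipschitz contraction, and Fix(A) = Fix(T) = {1}.
```

This mirrors the general situation: even when *T* is merely pseudocontractive, the resolvent *A* is a better-behaved (nonexpansive) mapping with the same fixed point set.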

The following Lemmas, which are well-known, can be found in many books in the geometry of Banach spaces (see References [21,23]).

**Lemma 5.** (Demiclosedness Principle) *Let C be a nonempty closed convex subset of a Banach space E, and let T* : *C* → *C be a nonexpansive mapping. Then, x<sub>n</sub>* ⇀ *x in C and* (*I* − *T*)*x<sub>n</sub>* → *y imply that* (*I* − *T*)*x* = *y.*

**Lemma 6.** *If E is a Banach space such that E*∗ *is strictly convex, then E is smooth and any duality mapping is norm-to-weak*∗*-continuous.*

Finally, we need the following result which was given by Deimling [4].

**Lemma 7.** ([4]) *Let C be a nonempty closed convex subset of a Banach space E, and let T* : *C* → *C be a continuous strongly pseudocontractive mapping with a pseudocontractive coefficient β* ∈ (0, 1)*. Then, T has a unique fixed point in C.*

#### **3. Convergence of Path with Perturbed Mapping**

As we know, path convergence plays an important role in proving the convergence of iterative methods for approximating fixed points. In this direction, we first prove the existence of a path for a convex combination of a pseudocontractive type of mapping with a perturbed mapping, together with the boundedness of this path.

**Proposition 1.** *Let C be a nonempty closed convex subset of a real Banach space E. Let T* : *C* → *C be a continuous pseudocontractive mapping, let S* : *C* → *C be a nonexpansive mapping, and let f* : *C* → *C be a continuous strongly pseudocontractive mapping with a pseudocontractive coefficient β* ∈ (0, 1)*.*

*(i) There exists a unique path t* → *xt* ∈ *C, t* ∈ (0, 1)*, satisfying*

$$x_t = t f x_t + r_t S x_t + (1 - t - r_t) T x_t, \tag{7}$$

*provided rt* : (0, 1) → [0, 1 − *t*) *is continuous and* lim*t*→0(*rt*/*t*) = 0*. (ii) In particular, if T has a fixed point in C, then the path* {*xt*} *is bounded.*

**Proof.** (i) For each *t* ∈ (0, 1), define the mapping *T*<sub>(*S*, *f*)</sub> : *C* → *C* as follows:

$$T_{(S,f)} = t f + r_t S + (1 - t - r_t)T,$$

where 0 < *r<sub>t</sub>* < 1 − *t* and lim<sub>*t*→0</sub>(*r<sub>t</sub>*/*t*) = 0. Then, it is easy to show that the mapping *T*<sub>(*S*, *f*)</sub> is a continuous strongly pseudocontractive self-mapping of *C*. Therefore, by Lemma 7, *T*<sub>(*S*, *f*)</sub> has a unique fixed point in *C*; i.e., for each given *t* ∈ (0, 1), there exists *x<sub>t</sub>* ∈ *C* such that

$$x_t = t f x_t + r_t S x_t + (1 - t - r_t) T x_t.$$

To show continuity, let *t*, *t*<sub>0</sub> ∈ (0, 1). Then, there exists *j* ∈ *J*(*x<sub>t</sub>* − *x<sub>t<sub>0</sub></sub>*) such that

$$
\begin{split}
\langle x_t - x_{t_0}, j \rangle &= \langle t f x_t + r_t S x_t + (1 - t - r_t) T x_t - (t_0 f x_{t_0} + r_{t_0} S x_{t_0} + (1 - t_0 - r_{t_0}) T x_{t_0}), j \rangle \\
&= t \langle f x_t - f x_{t_0}, j \rangle + (t - t_0) \langle f x_{t_0}, j \rangle + r_t \langle S x_t - S x_{t_0}, j \rangle + (r_t - r_{t_0}) \langle S x_{t_0}, j \rangle \\
&\quad + (1 - t - r_t) \langle T x_t - T x_{t_0}, j \rangle + ((t_0 - t) + (r_{t_0} - r_t)) \langle T x_{t_0}, j \rangle,
\end{split}
$$

and this implies that

$$\begin{split} \|x_t - x_{t_0}\|^2 &\le t\beta \|x_t - x_{t_0}\|^2 + |t - t_0| \|f x_{t_0}\| \|x_t - x_{t_0}\| \\ &\quad + r_t \|x_t - x_{t_0}\|^2 + |r_t - r_{t_0}| \|S x_{t_0}\| \|x_t - x_{t_0}\| \\ &\quad + (1 - t - r_t) \|x_t - x_{t_0}\|^2 + |t - t_0| \|T x_{t_0}\| \|x_t - x_{t_0}\| + |r_t - r_{t_0}| \|T x_{t_0}\| \|x_t - x_{t_0}\|, \end{split}$$

and, hence,

$$\begin{split} \|x_t - x_{t_0}\| &\le t\beta \|x_t - x_{t_0}\| + r_t \|x_t - x_{t_0}\| + (1 - t - r_t) \|x_t - x_{t_0}\| \\ &\quad + |t - t_0| \|f x_{t_0}\| + |r_t - r_{t_0}| \|S x_{t_0}\| + |t - t_0| \|T x_{t_0}\| + |r_t - r_{t_0}| \|T x_{t_0}\| \\ &= (1 - (1 - \beta)t) \|x_t - x_{t_0}\| + (\|f x_{t_0}\| + \|T x_{t_0}\|)|t - t_0| + (\|S x_{t_0}\| + \|T x_{t_0}\|)|r_t - r_{t_0}|. \end{split}$$

Therefore,

$$\|x_t - x_{t_0}\| \le \frac{\|f x_{t_0}\| + \|T x_{t_0}\|}{(1 - \beta)t}|t - t_0| + \frac{\|S x_{t_0}\| + \|T x_{t_0}\|}{(1 - \beta)t}|r_t - r_{t_0}|,$$

which guarantees continuity.

(ii) By the same argument as in the proof of Theorem 2.1 of Reference [17], we can prove that {*xt*} defined by Equation (7) is bounded for *t* ∈ (0, *t*0) for some *t*<sup>0</sup> ∈ (0, 1), and so we omit its proof.

The above path of Equation (7) is called the *modified viscosity iterative method with perturbed mapping*, where *S* is called the perturbed mapping.
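A one-dimensional illustration of the path in Equation (7), with hypothetical choices of the mappings that are ours and not from the paper: take *f*(*x*) = *x*/2 (strongly pseudocontractive, *β* = 1/2), *S*(*x*) = −*x* (nonexpansive), and *T*(*x*) = 2 − *x* (pseudocontractive, *Fix*(*T*) = {1}). Equation (7) is then linear in *x<sub>t</sub>* and can be solved in closed form; choosing *r<sub>t</sub>* = *t*<sup>2</sup> (so that *r<sub>t</sub>*/*t* → 0), the path tends to the fixed point of *T* as *t* → 0.

```python
def path_point(t, r):
    """Closed-form solution of x = t*f(x) + r*S(x) + (1-t-r)*T(x) on the real
    line for f(x) = x/2, S(x) = -x, T(x) = 2 - x (so Fix(T) = {1})."""
    # x - t*x/2 + r*x + (1-t-r)*x = 2*(1-t-r)  =>  x*(2 - 1.5*t) = 2*(1-t-r)
    return 2.0 * (1.0 - t - r) / (2.0 - 1.5 * t)

# With r_t = t^2, the path x_t approaches the fixed point 1 of T as t -> 0.
values = [path_point(t, t**2) for t in (0.1, 0.01, 0.001)]
```

The computed values increase toward 1 as *t* decreases, matching the convergence behavior established in Lemma 8 below in the general Banach space setting.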

The following result gives conditions for existence of a solution of a variational inequality:

$$\langle (I - f)q, J_{\varphi}(q - p) \rangle \le 0, \quad \forall p \in \text{Fix}(T). \tag{8}$$

**Theorem 1.** *Let E be a real Banach space such that E*<sup>∗</sup> *is strictly convex, and let C be a nonempty closed convex subset of E. Let T* : *C* → *C be a continuous pseudocontractive mapping with Fix*(*T*) ≠ ∅*, let S* : *C* → *C be a nonexpansive mapping, and let f* : *C* → *C be a continuous strongly pseudocontractive mapping with a pseudocontractive coefficient β* ∈ (0, 1)*. Suppose that* {*x<sub>t</sub>*} *defined by Equation* (7) *converges strongly to a point in Fix*(*T*)*. If we define q* := lim<sub>*t*→0</sub> *x<sub>t</sub>, then q is a solution of the variational inequality in Equation* (8)*.*

**Proof.** First, from Lemma 6, we note that *E* is smooth and *Jϕ* is norm-to-weak∗-continuous.

Since

$$(I - f)x_t = -\frac{1 - t - r_t}{t}(I - T)x_t - \frac{r_t}{t}(I - S)x_t,$$

we have for *p* ∈ *Fix*(*T*)

$$\begin{split} \langle (I - f)x_t, J_{\varphi}(x_t - p) \rangle &= -\frac{1 - t - r_t}{t} \langle (I - T)x_t - (I - T)p, J_{\varphi}(x_t - p) \rangle \\ &\quad + \frac{r_t}{t} \langle (S - I)x_t, J_{\varphi}(x_t - p) \rangle. \end{split} \tag{9}$$

Since *I* − *T* is accretive and *J*(*x<sub>t</sub>* − *p*) is a positive-scalar multiple of *J<sub>ϕ</sub>*(*x<sub>t</sub>* − *p*) (see Equation (6)), it follows from Equation (9) that

$$\begin{split} \langle (I - f)x_t, J_{\varphi}(x_t - p) \rangle &\le \frac{r_t}{t} \langle (S - I)x_t, J_{\varphi}(x_t - p) \rangle \\ &\le \frac{r_t}{t} \|(S - I)x_t\| \varphi(\|x_t - p\|). \end{split} \tag{10}$$

Taking the limit as *t* → 0 and using lim<sub>*t*→0</sub>(*r<sub>t</sub>*/*t*) = 0, we obtain

$$\langle (I - f)q, J_{\varphi}(q - p) \rangle \le 0, \quad \forall p \in \text{Fix}(T).$$

This completes the proof.

The following lemma provides conditions under which {*xt*} defined by Equation (7) converges strongly to a point in *Fix*(*T*).

**Lemma 8.** *Let E be a reflexive smooth Banach space having Opial's property and having some duality mapping J<sub>ϕ</sub> weakly continuous at* 0*. Let C be a nonempty closed convex subset of E. Let T* : *C* → *C be a continuous pseudocontractive mapping with Fix*(*T*) ≠ ∅*, let S* : *C* → *C be a nonexpansive mapping, and let f* : *C* → *C be a continuous bounded strongly pseudocontractive mapping with a pseudocontractive coefficient β* ∈ (0, 1)*. Then,* {*x<sub>t</sub>*} *defined by Equation* (7) *converges strongly to a point in Fix*(*T*) *as t* → 0*.*

**Proof.** First, from Proposition 1 (ii), we know that {*xt* : *t* ∈ (0, *t*0)} is bounded for *t* ∈ (0, *t*0) for some *t*<sup>0</sup> ∈ (0, 1).

Since *f* is a bounded mapping and *S* is a nonexpansive mapping, {*f x<sub>t</sub>* : *t* ∈ (0, *t*<sub>0</sub>)} and {*Sx<sub>t</sub>* : *t* ∈ (0, *t*<sub>0</sub>)} are bounded. Moreover, noting that *x<sub>t</sub>* = *t f x<sub>t</sub>* + *r<sub>t</sub>Sx<sub>t</sub>* + (1 − *t* − *r<sub>t</sub>*)*Tx<sub>t</sub>*, we have

$$Tx_t = \frac{1}{1 - t - r_t}x_t - \frac{t}{1 - t - r_t}f x_t - \frac{r_t}{1 - t - r_t}Sx_t,$$

*Mathematics* **2020**, *8*, 72

which implies that

$$\|Tx_t\| \le \frac{1}{1 - t - r_t}\|x_t\| + \frac{t}{1 - t - r_t}\|f x_t\| + \frac{r_t}{1 - t - r_t}\|Sx_t\|.$$

Thus, we obtain

$$\|Tx_t\| \le 2\|x_t\| + 2t\|f x_t\| + 2r_t\|Sx_t\|, \quad \forall t \in (0, t_0),$$

and so {*Txt* : *t* ∈ (0, *t*0)} is bounded. This implies that

$$\lim\_{t \to 0} \|\mathbf{x}\_t - T\mathbf{x}\_t\| \le \lim\_{t \to 0} t \|f\mathbf{x}\_t - T\mathbf{x}\_t\| + \lim\_{t \to 0} r\_t \|\mathbf{S}\mathbf{x}\_t - T\mathbf{x}\_t\| = 0. \tag{11}$$

Now, let *tm* ∈ (0, *t*0) for some *t*<sup>0</sup> ∈ (0, 1) be such that *tm* → 0, and let {*xm*} := {*xtm* } be a subsequence of {*xt*}. Then,

$$x_m = t_m f x_m + r_m S x_m + (1 - t_m - r_m) T x_m.$$

Let *p* ∈ *Fix*(*T*). Then, we have

$$\mathbf{x}\_m - p = t\_m(f\mathbf{x}\_m - p) + r\_m(S\mathbf{x}\_m - p) + (1 - t\_m - r\_m)(T\mathbf{x}\_m - Tp)$$

and

$$\begin{split} \|x_m - p\| \varphi(\|x_m - p\|) &= \langle x_m - p, J_{\varphi}(x_m - p) \rangle \\ &\le t_m \langle f x_m - p, J_{\varphi}(x_m - p) \rangle + r_m \langle Sx_m - p, J_{\varphi}(x_m - p) \rangle \\ &\quad + (1 - t_m - r_m) \|x_m - p\| \varphi(\|x_m - p\|). \end{split}$$

Thus, it follows that

$$\|x_m - p\| \varphi(\|x_m - p\|) \le \frac{t_m}{t_m + r_m} \langle f x_m - p, J_{\varphi}(x_m - p) \rangle + \frac{r_m}{t_m + r_m} \langle Sx_m - p, J_{\varphi}(x_m - p) \rangle. \tag{12}$$

Hence, we get

$$\langle p - f x_m, J_{\varphi}(x_m - p) \rangle \le -\frac{t_m + r_m}{t_m} \|x_m - p\| \varphi(\|x_m - p\|) + \frac{r_m}{t_m} \langle Sx_m - p, J_{\varphi}(x_m - p) \rangle,$$

that is,

$$\langle p - f x_m, J_{\varphi}(p - x_m) \rangle \ge \frac{t_m + r_m}{t_m} \|x_m - p\| \varphi(\|x_m - p\|) + \frac{r_m}{t_m} \langle p - Sx_m, J_{\varphi}(x_m - p) \rangle.$$

Therefore, we have

$$
\begin{split}
\langle x_m - f x_m, J_{\varphi}(p - x_m) \rangle &= \langle x_m - p, J_{\varphi}(p - x_m) \rangle + \langle p - f x_m, J_{\varphi}(p - x_m) \rangle \\
&\ge -\|x_m - p\| \varphi(\|x_m - p\|) + \frac{t_m + r_m}{t_m} \|x_m - p\| \varphi(\|x_m - p\|) \\
&\quad + \frac{r_m}{t_m} \langle p - Sx_m, J_{\varphi}(x_m - p) \rangle \\
&= \frac{r_m}{t_m} \|x_m - p\| \varphi(\|x_m - p\|) + \frac{r_m}{t_m} \langle p - Sx_m, J_{\varphi}(x_m - p) \rangle.
\end{split}
$$

On the other hand, since {*x<sub>m</sub>*} is bounded and *E* is reflexive, {*x<sub>m</sub>*} has a weakly convergent subsequence {*x<sub>m<sub>k</sub></sub>*}, say, *x<sub>m<sub>k</sub></sub>* ⇀ *u* ∈ *E*. From Equation (11), it follows that

$$\|x_m - Tx_m\| \le t_m \|f x_m - Tx_m\| + r_m \|Sx_m - Tx_m\| \to 0.$$

From Lemma 4, we know that the mapping *A* = (2*I* − *T*)<sup>−1</sup> : *C* → *C* is nonexpansive, that *Fix*(*A*) = *Fix*(*T*), and that ‖*x<sub>m</sub>* − *Ax<sub>m</sub>*‖ → 0. Thus, by Lemma 5, *u* ∈ *Fix*(*A*) = *Fix*(*T*). Therefore, by Equation (12) and the assumption that *J<sub>ϕ</sub>* is weakly continuous at 0, we obtain

$$\begin{split} \|x_{m_k} - u\| \varphi(\|x_{m_k} - u\|) &\le \frac{t_{m_k}}{t_{m_k} + r_{m_k}} \langle f x_{m_k} - u, J_{\varphi}(x_{m_k} - u) \rangle + \frac{r_{m_k}}{t_{m_k} + r_{m_k}} \langle Sx_{m_k} - u, J_{\varphi}(x_{m_k} - u) \rangle \\ &\le |\langle f x_{m_k} - u, J_{\varphi}(x_{m_k} - u) \rangle| + \frac{r_{m_k}}{t_{m_k}} |\langle Sx_{m_k} - u, J_{\varphi}(x_{m_k} - u) \rangle| \to 0. \end{split}$$

Since *ϕ* is continuous and strictly increasing, we must have *xmk* → *u*.

Now, we will show that every weakly convergent subsequence of {*x<sub>m</sub>*} has the same limit. Suppose that *x<sub>m<sub>k</sub></sub>* ⇀ *u* and *x<sub>m<sub>j</sub></sub>* ⇀ *v*. Then, by the above proof, we have *u*, *v* ∈ *Fix*(*T*), *x<sub>m<sub>k</sub></sub>* → *u*, and *x<sub>m<sub>j</sub></sub>* → *v*. By Equation (12), we have the following for all *p* ∈ *Fix*(*T*):

$$\begin{split} \|x_{m_k} - p\| \varphi(\|x_{m_k} - p\|) &\le \frac{t_{m_k}}{t_{m_k} + r_{m_k}} \langle f x_{m_k} - p, J_{\varphi}(x_{m_k} - p) \rangle + \frac{r_{m_k}}{t_{m_k} + r_{m_k}} \langle Sx_{m_k} - p, J_{\varphi}(x_{m_k} - p) \rangle \\ &\le \frac{t_{m_k}}{t_{m_k} + r_{m_k}} \langle f x_{m_k} - p, J_{\varphi}(x_{m_k} - p) \rangle + \frac{r_{m_k}}{t_{m_k}} |\langle Sx_{m_k} - p, J_{\varphi}(x_{m_k} - p) \rangle| \end{split}$$

and

$$\begin{split} \|x_{m_j} - p\| \varphi(\|x_{m_j} - p\|) &\le \frac{t_{m_j}}{t_{m_j} + r_{m_j}} \langle f x_{m_j} - p, J_{\varphi}(x_{m_j} - p) \rangle + \frac{r_{m_j}}{t_{m_j} + r_{m_j}} \langle Sx_{m_j} - p, J_{\varphi}(x_{m_j} - p) \rangle \\ &\le \frac{t_{m_j}}{t_{m_j} + r_{m_j}} \langle f x_{m_j} - p, J_{\varphi}(x_{m_j} - p) \rangle + \frac{r_{m_j}}{t_{m_j}} |\langle Sx_{m_j} - p, J_{\varphi}(x_{m_j} - p) \rangle|. \end{split}$$

Taking limits, we get

$$\Phi(\|u - v\|) = \|u - v\| \varphi(\|u - v\|) \le \langle f u - v, J_{\varphi}(u - v) \rangle \tag{13}$$

and

$$\Phi(\|v - u\|) = \|v - u\| \varphi(\|v - u\|) \le \langle f v - u, J_{\varphi}(v - u) \rangle. \tag{14}$$

Adding up Equations (13) and (14) yields

$$\begin{aligned} 2\Phi(\|u - v\|) = 2\|u - v\|\varphi(\|u - v\|) &\le \|u - v\|\varphi(\|u - v\|) + \langle f u - f v, J_{\varphi}(u - v) \rangle \\ &\le (1 + \beta)\|u - v\|\varphi(\|u - v\|) = (1 + \beta)\Phi(\|u - v\|). \end{aligned}$$

Since *β* ∈ (0, 1), this implies Φ(‖*u* − *v*‖) ≤ 0; that is, *u* = *v*. Hence, {*x<sub>m</sub>*} converges strongly to a point in *Fix*(*T*) as *t<sub>m</sub>* → 0.

The same argument shows that, if *tl* → 0, then the subsequence {*xl*} := {*xtl* } of {*xt* : *t* ∈ (0, *t*0)} for some *t*<sup>0</sup> ∈ (0, 1) is strongly convergent to the same limit. Thus, as *t* → 0, {*xt*} converges strongly to a point in *Fix*(*T*).

Using Theorem 1 and Lemma 8, we show the existence of a unique solution of the variational inequality in Equation (8) in a reflexive Banach space having a weakly continuous duality mapping.

**Theorem 2.** *Let E be a reflexive Banach space having a weakly continuous duality mapping J<sub>ϕ</sub> with gauge function ϕ, and let C be a nonempty closed convex subset of E. Let T* : *C* → *C be a continuous pseudocontractive mapping such that Fix*(*T*) ≠ ∅*, let S* : *C* → *C be a nonexpansive mapping, and let f* : *C* → *C be a continuous bounded strongly pseudocontractive mapping with a pseudocontractive coefficient β* ∈ (0, 1)*. Then, the variational inequality in Equation* (8) *has a unique solution q* ∈ *Fix*(*T*)*, where q* := lim<sub>*t*→0</sub> *x<sub>t</sub> with x<sub>t</sub> being defined by Equation* (7)*.*

**Proof.** We note that the weak continuity of the duality mapping *J<sub>ϕ</sub>* implies that *E* is smooth; since *E* is reflexive, *E*<sup>∗</sup> is then strictly convex. By Lemma 8, {*x<sub>t</sub>*} defined by Equation (7) converges strongly to a point *q* in *Fix*(*T*) as *t* → 0. Hence, by Theorem 1, *q* is a solution of the variational inequality in Equation (8). For uniqueness, suppose that *q*, *p* ∈ *Fix*(*T*) both satisfy the variational inequality in Equation (8). Then, we have

$$\langle (I - f)q, J_{\varphi}(q - p) \rangle \le 0 \quad \text{and} \quad \langle (I - f)p, J_{\varphi}(p - q) \rangle \le 0.$$

Adding these two inequalities, we have

$$(1 - \beta)\Phi(\|q - p\|) = (1 - \beta)\|q - p\|\varphi(\|q - p\|) \le \langle (I - f)q - (I - f)p, J_{\varphi}(q - p) \rangle \le 0,$$

and so *q* = *p*.

As a direct consequence of Theorem 2, we have the following result.

**Corollary 1.** ([20, Theorem 3.2]) *Let E be a reflexive Banach space having a weakly continuous duality mapping J<sup>ϕ</sup> with gauge function ϕ, and let C be a nonempty closed convex subset of E. Let T* : *C* → *C be a continuous pseudocontractive mapping such that Fix*(*T*) ≠ ∅*, and let f* : *C* → *C be a continuous bounded strongly pseudocontractive mapping with a pseudocontractive coefficient β* ∈ (0, 1)*. Let* {*xt*} *be defined by*

$$x_t = t f x_t + (1 - t) T x_t, \quad \forall t \in (0, 1).$$

*Then, as t* → 0*, xt converges strongly to a fixed point q of T such that q is the unique solution of the variational inequality in Equation* (8)*.*

**Proof.** Put *S* = *I* and *rt* = 0 for all *t* ∈ (0, 1). Then, the result follows immediately from Theorem 2.
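As a purely illustrative check of Corollary 1, the path *xt* = *tf xt* + (1 − *t*)*Txt* can be run on the real line. The choices below are ours, not part of the corollary: *T* = cos (nonexpansive on R, with the Dottie number 0.739085... as its unique fixed point), *f*(*y*) = *y*/2 (a contraction with *k* = 1/2), and a Picard iteration to resolve the implicit equation, which is justified here because *y* ↦ *tf*(*y*) + (1 − *t*)cos *y* has Lipschitz constant at most 1 − *t*/2 < 1.

```python
import math

def x_t(t, T, f, tol=1e-14, max_iter=100_000):
    # Solve the implicit equation x = t*f(x) + (1 - t)*T(x) by Picard iteration;
    # for T = cos and f(y) = y/2 the map is a contraction with constant 1 - t/2.
    x = 0.0
    for _ in range(max_iter):
        x_next = t * f(x) + (1 - t) * T(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

T = math.cos           # nonexpansive on R; Fix(T) = {0.739085...}
f = lambda y: 0.5 * y  # contraction with coefficient k = 1/2

for t in (0.5, 0.1, 0.01, 0.001):
    print(t, x_t(t, T, f))  # x_t approaches the fixed point of cos as t -> 0
```

As *t* decreases, the printed values approach the fixed point of cos, in line with the strong convergence asserted by the corollary.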

**Remark 1.** *(1) Theorem 2 develops and supplements Theorem 2.1 of Ceng et al. [17] in the following aspects:*


#### **4. Modified Implicit Iterative Methods with Perturbed Mapping**

First, we prepare the following result.

**Theorem 3.** *Let E be a reflexive Banach space having a weakly continuous duality mapping Jϕ with gauge function ϕ, and let C be a nonempty closed convex subset of E. Let T* : *C* → *C be a continuous pseudocontractive mapping such that Fix*(*T*) ≠ ∅*, let S* : *C* → *C be a nonexpansive mapping, and let f* : *C* → *C be a continuous bounded strongly pseudocontractive mapping with a pseudocontractive coefficient β* ∈ (0, 1)*. Let* {*xt*} *be defined by Equation* (7)*. If there exists a bounded sequence* {*xn*} *such that* lim*n*→<sup>∞</sup> ‖*xn* − *Txn*‖ = 0 *and q* = lim*t*→<sup>0</sup> *xt, then*

$$\limsup_{n \to \infty} \langle fq - q, J_{\varphi}(x_n - q) \rangle \le 0.$$

*Mathematics* **2020**, *8*, 72

**Proof.** Using the equality

$$x_t - x_n = (1 - t - r_t)(Tx_t - x_n) + t(fx_t - x_n) + r_t(Sx_t - x_n)$$

and the inequality

$$\langle Tx - Ty, J_{\varphi}(x - y)\rangle \le \|x - y\|\varphi(\|x - y\|), \quad \forall x, y \in C,$$

we derive

$$\begin{split} \|x_t - x_n\|\varphi(\|x_t - x_n\|) &= (1 - t - r_t)\langle Tx_t - x_n, J_{\varphi}(x_t - x_n)\rangle + t\langle fx_t - x_n, J_{\varphi}(x_t - x_n)\rangle \\ &\quad + r_t\langle Sx_t - x_n, J_{\varphi}(x_t - x_n)\rangle \\ &= (1 - t - r_t)(\langle Tx_t - Tx_n, J_{\varphi}(x_t - x_n)\rangle + \langle Tx_n - x_n, J_{\varphi}(x_t - x_n)\rangle) \\ &\quad + t\langle fx_t - x_t, J_{\varphi}(x_t - x_n)\rangle + t\|x_t - x_n\|\varphi(\|x_t - x_n\|) \\ &\quad + r_t\langle Sx_t - x_t, J_{\varphi}(x_t - x_n)\rangle + r_t\|x_t - x_n\|\varphi(\|x_t - x_n\|) \\ &\le \|x_t - x_n\|\varphi(\|x_t - x_n\|) + \|Tx_n - x_n\|\varphi(\|x_t - x_n\|) \\ &\quad + t\langle fx_t - x_t, J_{\varphi}(x_t - x_n)\rangle + r_t\|Sx_t - x_t\|\varphi(\|x_t - x_n\|) \end{split}$$

and, hence,

$$\langle x_t - fx_t, J_{\varphi}(x_t - x_n)\rangle \le \frac{\|Tx_n - x_n\|}{t}\varphi(\|x_t - x_n\|) + \frac{r_t}{t}\|Sx_t - x_t\|\varphi(\|x_t - x_n\|).$$

Therefore, by lim sup*n*→<sup>∞</sup> *ϕ*(‖*xt* − *xn*‖) < ∞, we have

$$\begin{split} \limsup_{n \to \infty} \langle x_t - fx_t, J_{\varphi}(x_t - x_n)\rangle &\le \limsup_{n \to \infty} \frac{\|Tx_n - x_n\|}{t}\varphi(\|x_t - x_n\|) \\ &\quad + \limsup_{n \to \infty} \frac{r_t}{t}\|Sx_t - x_t\|\varphi(\|x_t - x_n\|) \\ &= \limsup_{n \to \infty} \frac{r_t}{t}\|Sx_t - x_t\|\varphi(\|x_t - x_n\|) \\ &= \frac{r_t}{t}\|Sx_t - x_t\|\limsup_{n \to \infty}\varphi(\|x_t - x_n\|). \end{split}$$

Thus, noting that lim*t*→<sup>0</sup> lim sup*n*→<sup>∞</sup> *ϕ*(‖*xt* − *xn*‖) < ∞, by Lemma 1, we conclude

$$\begin{split} \limsup_{n \to \infty}\langle fq - q, J_{\varphi}(x_n - q)\rangle &= \lim_{t \to 0}\limsup_{n \to \infty}\langle fx_t - x_t, J_{\varphi}(x_n - x_t)\rangle \\ &\le \lim_{t \to 0}\Big[\frac{r_t}{t}\|Sx_t - x_t\|\Big]\lim_{t \to 0}\limsup_{n \to \infty}\varphi(\|x_t - x_n\|) \\ &= 0 \cdot \lim_{t \to 0}\limsup_{n \to \infty}\varphi(\|x_t - x_n\|) = 0. \end{split}$$

This completes the proof.

**Theorem 4.** *Let E be a reflexive Banach space having a weakly continuous duality mapping Jϕ with gauge function ϕ, and let C be a nonempty closed convex subset of E. Let T* : *C* → *C be a continuous pseudocontractive mapping such that Fix*(*T*) ≠ ∅*, let S* : *C* → *C be a nonexpansive mapping, and let f* : *C* → *C be a contractive mapping with a contractive coefficient k* ∈ (0, 1)*. For x*<sup>0</sup> ∈ *C, let* {*xn*} *be defined by the following iterative scheme:*

$$\begin{cases} y_n = \alpha_n x_n + (1 - \alpha_n)Ty_n, \\ x_{n+1} = \beta_n fy_n + \gamma_n Sy_n + (1 - \beta_n - \gamma_n)y_n, \quad \forall n \ge 0, \end{cases} \tag{15}$$

*where* {*αn*}*,* {*βn*}*, and* {*γn*} *are three sequences in* (0, 1] *satisfying the following conditions:*



*Then,* {*xn*} *converges strongly to a fixed point x*<sup>∗</sup> *of T, which is the unique solution of the following variational inequality*

$$\langle (I - f)x^*, J_{\varphi}(x^* - p)\rangle \le 0, \quad \forall p \in Fix(T). \tag{16}$$

**Proof.** First, put *zt* = *tfzt* + *rtSzt* + (1 − *t* − *rt*)*Tzt*. Then, it follows from Theorem 2 that, as *t* → 0, *zt* converges strongly to some fixed point *x*<sup>∗</sup> of *T* such that *x*<sup>∗</sup> is the unique solution in *Fix*(*T*) to the variational inequality in Equation (16).

Now, we divide the proof into several steps.

**Step 1.** We show that {*xn*} is bounded. To this end, let *p* ∈ *Fix*(*T*). Then, we have

$$\begin{aligned} \|y_n - p\|\varphi(\|y_n - p\|) &= \langle \alpha_n x_n + (1 - \alpha_n)Ty_n - p, J_{\varphi}(y_n - p)\rangle \\ &\le (1 - \alpha_n)\langle Ty_n - Tp, J_{\varphi}(y_n - p)\rangle + \alpha_n\|x_n - p\|\varphi(\|y_n - p\|) \\ &\le (1 - \alpha_n)\|y_n - p\|\varphi(\|y_n - p\|) + \alpha_n\|x_n - p\|\varphi(\|y_n - p\|) \end{aligned}$$

and, hence,

$$\|y_n - p\| \le \|x_n - p\|, \quad \forall n \ge 0.$$

Thus, we obtain

$$\begin{split} \|\mathbf{x}\_{n+1} - p\| \le & \beta\_n \|f y\_n - p\| + \gamma\_n \|S y\_n - p\| + (1 - \beta\_n - \gamma\_n) \|y\_n - p\| \\ \le & \beta\_n (\|f y\_n - f p\| + \|f p - p\|) + \gamma\_n (\|S y\_n - S p\| + \|S p - p\|) \\ & + (1 - \beta\_n - \gamma\_n) \|\mathbf{x}\_n - p\| \\ \le & \beta\_n k \|y\_n - p\| + \beta\_n \|f p - p\| + \gamma\_n \|y\_n - p\| + \gamma\_n \|S p - p\| \\ & + (1 - \beta\_n - \gamma\_n) \|\mathbf{x}\_n - p\| \\ \le & \beta\_n k \|\mathbf{x}\_n - p\| + \beta\_n \|f p - p\| + \gamma\_n \|\mathbf{x}\_n - p\| + \gamma\_n \|S p - p\| \\ & + (1 - \beta\_n - \gamma\_n) \|\mathbf{x}\_n - p\| \\ = & (1 - (1 - k)\beta\_n) \|\mathbf{x}\_n - p\| + \beta\_n \|fp - p\| + \gamma\_n \|S p - p\|. \end{split} \tag{17}$$

Since lim*n*→∞(*γn*/*βn*) = 0, we may assume without loss of generality that *γ<sup>n</sup>* ≤ *β<sup>n</sup>* for all *n* > 0. Therefore, it follows from Equation (17) that

$$\begin{aligned} \|\mathbf{x}\_{n+1} - p\| &\le \left(1 - (1 - k)\beta\_n\right) \|\mathbf{x}\_n - p\| + (1 - k)\beta\_n \cdot \frac{1}{1 - k} (\|fp - p\| + \|Sp - p\|) \\ &\le \max\left\{ \|\mathbf{x}\_n - p\|, \frac{1}{1 - k} (\|fp - p\| + \|Sp - p\|) \right\}. \end{aligned}$$

By induction, we derive

$$\|x_n - p\| \le \max\Big\{\|x_0 - p\|, \frac{1}{1-k}(\|fp - p\| + \|Sp - p\|)\Big\}, \quad \forall n \ge 0.$$

This shows that {*xn*} is bounded and so is {*yn*}.

**Step 2.** We show that { *f yn*}, {*Syn*}, and {*Tyn*} are bounded. Indeed, observe that

$$||f y\_n|| \le ||f y\_n - f p|| + ||f p|| \le k||y\_n - p|| + ||f p||$$

and

$$||\mathcal{S}y\_n|| \le ||\mathcal{S}y\_n - \mathcal{S}p|| + ||\mathcal{S}p|| \le ||y\_n - p|| + ||\mathcal{S}p||.$$

Thus, { *f yn*} and {*Syn*} are bounded. Since lim*n*→<sup>∞</sup> *α<sup>n</sup>* = 0, there exist *n*<sup>0</sup> ≥ 0 and *a* ∈ (0, 1) such that *α<sup>n</sup>* ≤ *a* for all *n* ≥ *n*0. Noting that *yn* = *αnxn* + (1 − *αn*)*Tyn*, we have

$$Ty_n = \frac{1}{1 - \alpha_n}y_n - \frac{\alpha_n}{1 - \alpha_n}x_n$$

and so

$$\|Ty_n\| \le \frac{1}{1 - \alpha_n}\|y_n\| + \frac{\alpha_n}{1 - \alpha_n}\|x_n\| \le \frac{1}{1 - a}\|y_n\| + \frac{a}{1 - a}\|x_n\|.$$

Consequently, the sequence {*Tyn*} is also bounded.

**Step 3.** We show that lim sup*n*→∞⟨*f x*<sup>∗</sup> − *x*∗, *Jϕ*(*yn* − *x*∗)⟩ ≤ 0. In fact, from condition (i) and the boundedness of {*xn*} and {*Tyn*}, we get

$$\|y_n - Ty_n\| = \alpha_n\|x_n - Ty_n\| \to 0 \quad (n \to \infty). \tag{18}$$

Thus, it follows from Equation (18) and Theorem 3 that lim sup*n*→∞⟨*f x*<sup>∗</sup> − *x*∗, *Jϕ*(*yn* − *x*∗)⟩ ≤ 0.

**Step 4.** We show that lim sup*n*→∞⟨*f x*<sup>∗</sup> − *x*∗, *Jϕ*(*xn*+<sup>1</sup> − *x*∗)⟩ ≤ 0. Indeed, by Equations (15) and (18), we have

$$\begin{aligned} \|x_{n+1} - y_n\| &= \|\beta_n fy_n + \gamma_n Sy_n + (1 - \beta_n - \gamma_n)y_n - (\alpha_n x_n + (1 - \alpha_n)Ty_n)\| \\ &\le \alpha_n\|x_n - Ty_n\| + \beta_n\|fy_n - y_n\| + \gamma_n\|Sy_n - y_n\| + \|y_n - Ty_n\| \to 0 \ (n \to \infty). \end{aligned}$$

Since the duality mapping *Jϕ* is single-valued and weakly continuous, we have

$$\lim_{n \to \infty}\langle fx^* - x^*, J_{\varphi}(x_{n+1} - x^*) - J_{\varphi}(y_n - x^*)\rangle = 0.$$

Therefore, we obtain from step 3 that

$$\begin{split} \limsup_{n \to \infty}\langle fx^* - x^*, J_{\varphi}(x_{n+1} - x^*)\rangle &\le \limsup_{n \to \infty}\langle fx^* - x^*, J_{\varphi}(y_n - x^*)\rangle \\ &\quad + \limsup_{n \to \infty}\langle fx^* - x^*, J_{\varphi}(x_{n+1} - x^*) - J_{\varphi}(y_n - x^*)\rangle \\ &= \limsup_{n \to \infty}\langle fx^* - x^*, J_{\varphi}(y_n - x^*)\rangle \le 0. \end{split}$$

**Step 5.** We show that lim*n*→<sup>∞</sup> ‖*xn* − *x*∗‖ = 0. In fact, it follows from Equation (15) that

$$\begin{split} \mathbf{x}\_{n+1} - \mathbf{x}^\* &= \beta\_n (f y\_n - f \mathbf{x}^\*) + \gamma\_n (\mathbf{S} y\_n - \mathbf{S} \mathbf{x}^\*) + (1 - \beta\_n - \gamma\_n) (y\_n - \mathbf{x}^\*) \\ &+ \beta\_n (f \mathbf{x}^\* - \mathbf{x}^\*) + \gamma\_n (\mathbf{S} \mathbf{x}^\* - \mathbf{x}^\*). \end{split}$$

Therefore, using the inequalities ‖*yn* − *x*∗‖ ≤ ‖*xn* − *x*∗‖, ‖*f x* − *f y*‖ ≤ *k*‖*x* − *y*‖, and ‖*Sx* − *Sy*‖ ≤ ‖*x* − *y*‖ and using Lemma 2, we have

$$\begin{split} \Phi(\|x_{n+1} - x^*\|) &\le \Phi(\|\beta_n(fy_n - fx^*) + \gamma_n(Sy_n - Sx^*) + (1 - \beta_n - \gamma_n)(y_n - x^*)\|) \\ &\quad + \beta_n\langle fx^* - x^*, J_{\varphi}(x_{n+1} - x^*)\rangle + \gamma_n\langle Sx^* - x^*, J_{\varphi}(x_{n+1} - x^*)\rangle \\ &\le \Phi(\beta_n k\|y_n - x^*\| + \gamma_n\|y_n - x^*\| + (1 - \beta_n - \gamma_n)\|y_n - x^*\|) \\ &\quad + \beta_n\langle fx^* - x^*, J_{\varphi}(x_{n+1} - x^*)\rangle + \gamma_n\langle Sx^* - x^*, J_{\varphi}(x_{n+1} - x^*)\rangle \\ &\le \Phi((1 - (1 - k)\beta_n)\|x_n - x^*\|) \\ &\quad + \beta_n\langle fx^* - x^*, J_{\varphi}(x_{n+1} - x^*)\rangle + \gamma_n\langle Sx^* - x^*, J_{\varphi}(x_{n+1} - x^*)\rangle \\ &\le (1 - (1 - k)\beta_n)\Phi(\|x_n - x^*\|) + \beta_n\langle fx^* - x^*, J_{\varphi}(x_{n+1} - x^*)\rangle \\ &\quad + \gamma_n\|Sx^* - x^*\|\varphi(\|x_{n+1} - x^*\|) \\ &= (1 - \lambda_n)\Phi(\|x_n - x^*\|) + \lambda_n\delta_n, \end{split} \tag{19}$$

where *λ<sup>n</sup>* = (1 − *k*)*β<sup>n</sup>* and

$$\delta_n = \frac{1}{1-k}\Big[\langle fx^* - x^*, J_{\varphi}(x_{n+1} - x^*)\rangle + \frac{\gamma_n}{\beta_n}\|Sx^* - x^*\|\varphi(\|x_{n+1} - x^*\|)\Big].$$

From conditions (ii) and (iii) and from step 4, it is easily seen that ∑<sub>*n*=0</sub><sup>∞</sup> *λn* = ∞ and lim sup*n*→<sup>∞</sup> *δn* ≤ 0. Thus, applying Lemma 3 to Equation (19), we conclude that lim*n*→<sup>∞</sup> Φ(‖*xn* − *x*∗‖) = 0 and, hence, lim*n*→<sup>∞</sup> ‖*xn* − *x*∗‖ = 0. This completes the proof.
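For intuition, the scheme in Equation (15) can be simulated on the real line. The sketch below uses our own illustrative choices, not anything prescribed by Theorem 4: *T* = cos (nonexpansive, hence a continuous pseudocontractive mapping on R), *S* the identity, *f*(*y*) = *y*/2 (a contraction with *k* = 1/2), and sequences *αn* = *βn* = 1/(*n* + 2), *γn* = *βn*/(*n* + 2), which are of the kind used in the proof above (*αn* → 0, ∑*βn* = ∞, *γn*/*βn* → 0); the implicit half-step is resolved by a Picard iteration.

```python
import math

def solve_implicit(x, alpha, T, tol=1e-12, max_iter=100_000):
    # Solve the implicit half-step y = alpha*x + (1 - alpha)*T(y);
    # the right-hand side is a contraction in y when T is nonexpansive and alpha > 0.
    y = x
    for _ in range(max_iter):
        y_next = alpha * x + (1 - alpha) * T(y)
        if abs(y_next - y) < tol:
            return y_next
        y = y_next
    return y

T = math.cos            # nonexpansive on R; Fix(T) = {0.739085...}
S = lambda y: y         # nonexpansive
f = lambda y: 0.5 * y   # contraction with coefficient k = 1/2

x = 5.0                 # x_0
for n in range(2000):
    alpha_n = 1.0 / (n + 2)
    beta_n = 1.0 / (n + 2)
    gamma_n = beta_n / (n + 2)
    y = solve_implicit(x, alpha_n, T)
    x = beta_n * f(y) + gamma_n * S(y) + (1 - beta_n - gamma_n) * y

print(x)  # close to the fixed point 0.739085... of cos
```

Since *Fix*(*T*) is a singleton here, the iterates approach the Dottie number, consistent with the strong convergence in Theorem 4.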

**Theorem 5.** *Let E be a reflexive Banach space having a weakly continuous duality mapping Jϕ with gauge function ϕ, and let C be a nonempty closed convex subset of E. Let T* : *C* → *C be a continuous pseudocontractive mapping such that Fix*(*T*) ≠ ∅*, let S* : *C* → *C be a nonexpansive mapping, and let f* : *C* → *C be a contractive mapping with a contractive coefficient k* ∈ (0, 1)*. For x*<sup>0</sup> ∈ *C, let* {*xn*} *be defined by the following iterative scheme:*

$$\begin{cases} x_n = \alpha_n y_n + (1 - \alpha_n)Tx_n, \\ y_n = \beta_n fx_{n-1} + \gamma_n Sx_{n-1} + (1 - \beta_n - \gamma_n)x_{n-1}, \quad \forall n \ge 1, \end{cases} \tag{20}$$

*where* {*αn*}*,* {*βn*}*, and* {*γn*} *are three sequences in* (0, 1] *satisfying the following conditions:*


*Then,* {*xn*} *converges strongly to a fixed point x*<sup>∗</sup> *of T, which is the unique solution of the variational inequality in Equation* (16)*.*

**Proof.** First, as in Theorem 4, we put *zt* = *tfzt* + *rtSzt* + (1 − *t* − *rt*)*Tzt*. Then, from Theorem 2, it follows that, as *t* → 0, *zt* converges strongly to some fixed point *x*<sup>∗</sup> of *T* such that *x*<sup>∗</sup> is the unique solution in *Fix*(*T*) to the variational inequality in Equation (16).

Now, we divide the proof into several steps.

**Step 1.** We show that {*xn*} is bounded. To this end, let *p* ∈ *Fix*(*T*). Then, by Equation (20), we have

$$\begin{aligned} \|x_n - p\|\varphi(\|x_n - p\|) &= \langle \alpha_n y_n + (1 - \alpha_n)Tx_n - p, J_{\varphi}(x_n - p)\rangle \\ &\le (1 - \alpha_n)\langle Tx_n - Tp, J_{\varphi}(x_n - p)\rangle + \alpha_n\|y_n - p\|\varphi(\|x_n - p\|) \\ &\le (1 - \alpha_n)\|x_n - p\|\varphi(\|x_n - p\|) + \alpha_n\|y_n - p\|\varphi(\|x_n - p\|) \end{aligned}$$

and, hence,

$$\|x_n - p\| \le \|y_n - p\|, \quad \forall n \ge 1.$$

Thus, we obtain

$$\begin{split} \|\mathbf{x}\_{n} - p\| &\le \|y\_{n} - p\| \\ &\le \beta\_{n} \|f\mathbf{x}\_{n-1} - p\| + \gamma\_{n} \|S\mathbf{x}\_{n-1} - p\| + (1 - \beta\_{n} - \gamma\_{n}) \|\mathbf{x}\_{n-1} - p\| \\ &\le \beta\_{n} (\|f\mathbf{x}\_{n-1} - fp\| + \|fp - p\|) + \gamma\_{n} (\|S\mathbf{x}\_{n-1} - Sp\| + \|Sp - p\|) \\ &+ (1 - \beta\_{n} - \gamma\_{n}) \|\mathbf{x}\_{n-1} - p\| \\ &\le \beta\_{n} k \|\mathbf{x}\_{n-1} - p\| + \beta\_{n} \|fp - p\| + \gamma\_{n} \|\mathbf{x}\_{n-1} - p\| + \gamma\_{n} \|Sp - p\| \\ &+ (1 - \beta\_{n} - \gamma\_{n}) \|\mathbf{x}\_{n-1} - p\| \\ &= (1 - (1 - k)\beta\_{n}) \|\mathbf{x}\_{n-1} - p\| + \beta\_{n} \|fp - p\| + \gamma\_{n} \|Sp - p\|. \end{split} \tag{21}$$

Since lim*n*→∞(*γn*/*βn*) = 0, we may assume without loss of generality that *γ<sup>n</sup>* ≤ *β<sup>n</sup>* for all *n* > 0. Therefore, it follows from Equation (21) that

$$\begin{aligned} \|x_n - p\| &\le (1 - (1 - k)\beta_n)\|x_{n-1} - p\| + (1 - k)\beta_n \cdot \frac{1}{1-k}(\|fp - p\| + \|Sp - p\|) \\ &\le \max\Big\{\|x_{n-1} - p\|, \frac{1}{1-k}(\|fp - p\| + \|Sp - p\|)\Big\}. \end{aligned}$$

By induction, we derive

$$\|x_n - p\| \le \max\Big\{\|x_0 - p\|, \frac{1}{1-k}(\|fp - p\| + \|Sp - p\|)\Big\}, \quad \forall n \ge 0.$$

This shows that {*xn*} is bounded and so is {*yn*}.

**Step 2.** We show that { *f xn*}, {*Sxn*}, and {*Txn*} are bounded. Indeed, observe that

$$\|fx_n\| \le \|fx_n - fp\| + \|fp\| \le k\|x_n - p\| + \|fp\|$$

and

$$\|Sx_n\| \le \|Sx_n - Sp\| + \|Sp\| \le \|x_n - p\| + \|Sp\|.$$

Thus, { *f xn*} and {*Sxn*} are bounded. Since lim*n*→<sup>∞</sup> *α<sup>n</sup>* = 0, there exist *n*<sup>0</sup> ≥ 0 and *a* ∈ (0, 1) such that *α<sup>n</sup>* ≤ *a* for all *n* ≥ *n*0. Noting that *xn* = *αnyn* + (1 − *αn*)*Txn*, we have

$$Tx_n = \frac{1}{1 - \alpha_n}x_n - \frac{\alpha_n}{1 - \alpha_n}y_n$$

and so

$$\|Tx_n\| \le \frac{1}{1 - \alpha_n}\|x_n\| + \frac{\alpha_n}{1 - \alpha_n}\|y_n\| \le \frac{1}{1 - a}\|x_n\| + \frac{a}{1 - a}\|y_n\|.$$

Consequently, the sequence {*Txn*} is also bounded.

**Step 3.** We show that lim sup*n*→∞⟨*f x*<sup>∗</sup> − *x*∗, *Jϕ*(*xn* − *x*∗)⟩ ≤ 0. In fact, from condition (i) and the boundedness of {*xn*} and {*Txn*}, we get

$$\|x_n - Tx_n\| = \alpha_n\|y_n - Tx_n\| \to 0 \quad (n \to \infty). \tag{22}$$

Thus, it follows from Equation (22) and Theorem 3 that lim sup*n*→∞⟨*f x*<sup>∗</sup> − *x*∗, *Jϕ*(*xn* − *x*∗)⟩ ≤ 0.

**Step 4.** We show that lim*n*→<sup>∞</sup> ‖*xn* − *x*∗‖ = 0. In fact, using the equality

$$\begin{aligned} x_n - x^* &= \alpha_n[\beta_n(fx_{n-1} - fx^*) + \gamma_n(Sx_{n-1} - Sx^*) + (1 - \beta_n - \gamma_n)(x_{n-1} - x^*)] \\ &\quad + \alpha_n[\beta_n(fx^* - x^*) + \gamma_n(Sx^* - x^*)] + (1 - \alpha_n)(Tx_n - x^*) \end{aligned}$$

by Equation (20) and the inequalities ⟨*Tx* − *Ty*, *Jϕ*(*x* − *y*)⟩ ≤ ‖*x* − *y*‖*ϕ*(‖*x* − *y*‖) = Φ(‖*x* − *y*‖), ‖*f x* − *f y*‖ ≤ *k*‖*x* − *y*‖, and ‖*Sx* − *Sy*‖ ≤ ‖*x* − *y*‖, from Lemma 2, we derive


$$\begin{split} \Phi(\|x_n - x^*\|) &\le \Phi(\alpha_n\|\beta_n(fx_{n-1} - fx^*) + \gamma_n(Sx_{n-1} - Sx^*) + (1 - \beta_n - \gamma_n)(x_{n-1} - x^*)\|) \\ &\quad + \alpha_n\beta_n\langle fx^* - x^*, J_{\varphi}(x_n - x^*)\rangle + \alpha_n\gamma_n\langle Sx^* - x^*, J_{\varphi}(x_n - x^*)\rangle \\ &\quad + (1 - \alpha_n)\langle Tx_n - x^*, J_{\varphi}(x_n - x^*)\rangle \\ &\le \alpha_n\Phi(\beta_n k\|x_{n-1} - x^*\| + \gamma_n\|x_{n-1} - x^*\| + (1 - \beta_n - \gamma_n)\|x_{n-1} - x^*\|) \\ &\quad + \alpha_n\beta_n\langle fx^* - x^*, J_{\varphi}(x_n - x^*)\rangle + \alpha_n\gamma_n\langle Sx^* - x^*, J_{\varphi}(x_n - x^*)\rangle \\ &\quad + (1 - \alpha_n)\|x_n - x^*\|\varphi(\|x_n - x^*\|) \\ &\le \alpha_n(1 - (1 - k)\beta_n)\Phi(\|x_{n-1} - x^*\|) + \alpha_n\beta_n\langle fx^* - x^*, J_{\varphi}(x_n - x^*)\rangle \\ &\quad + \alpha_n\gamma_n\|Sx^* - x^*\|\varphi(\|x_n - x^*\|) + (1 - \alpha_n)\Phi(\|x_n - x^*\|). \end{split} \tag{23}$$

By Equation (23), we obtain

$$\begin{split} \Phi(\|x_n - x^*\|) &\le (1 - (1 - k)\beta_n)\Phi(\|x_{n-1} - x^*\|) + \beta_n\langle fx^* - x^*, J_{\varphi}(x_n - x^*)\rangle \\ &\quad + \gamma_n\|Sx^* - x^*\|\varphi(\|x_n - x^*\|) \\ &\le (1 - (1 - k)\beta_n)\Phi(\|x_{n-1} - x^*\|) + \beta_n\langle fx^* - x^*, J_{\varphi}(x_n - x^*)\rangle + \gamma_n\|Sx^* - x^*\|M, \end{split} \tag{24}$$

where *M* > 0 is a constant such that *ϕ*(‖*xn* − *x*∗‖) ≤ *M* for all *n* ≥ 1. Put *λn* = (1 − *k*)*βn* and

$$\delta_n = \frac{1}{1-k}\Big[\langle fx^* - x^*, J_{\varphi}(x_n - x^*)\rangle + \frac{\gamma_n}{\beta_n}\|Sx^* - x^*\|M\Big].$$

From conditions (ii) and (iii) and from step 3, it is easily seen that ∑<sub>*n*=0</sub><sup>∞</sup> *λn* = ∞ and lim sup*n*→<sup>∞</sup> *δn* ≤ 0. Since Equation (24) reduces to

$$\Phi(\|x_n - x^*\|) \le (1 - \lambda_n)\Phi(\|x_{n-1} - x^*\|) + \lambda_n\delta_n, \tag{25}$$

applying Lemma 3 to Equation (25), we conclude that lim*n*→<sup>∞</sup> Φ(‖*xn* − *x*∗‖) = 0 and, hence, lim*n*→<sup>∞</sup> ‖*xn* − *x*∗‖ = 0. This completes the proof.

**Remark 2.** *(1) Theorem 3 develops Theorem 2.3 of Ceng et al. [17] in the following aspects:*


**Funding:** This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science, and Technology (2018R1D1A1B07045718).

**Acknowledgments:** The author thanks the anonymous reviewers for their reading and helpful comments and suggestions along with providing recent related papers, which improved the presentation of this manuscript.

**Conflicts of Interest:** The author declares no conflict of interest.

#### **References**


© 2020 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## *Article* **Properties for** *ψ***-Fractional Integrals Involving a General Function** *ψ* **and Applications**

#### **Jin Liang \* and Yunyi Mu**

School of Mathematical Sciences, Shanghai Jiao Tong University, Shanghai 200240, China; muyunyi@sjtu.edu.cn **\*** Correspondence: jinliang@sjtu.edu.cn

Received: 28 April 2019; Accepted: 5 June 2019; Published: 6 June 2019

**Abstract:** In this paper, we are concerned with the *ψ*-fractional integrals, which are a generalization of the well-known Riemann–Liouville fractional integrals and the Hadamard fractional integrals and are useful in the study of various fractional integral equations, fractional differential equations, and fractional integrodifferential equations. Our main goal is to present some new properties for *ψ*-fractional integrals involving a general function *ψ* by establishing several new equalities for the *ψ*-fractional integrals. We also give two applications of our new equalities.

**Keywords:** fractional calculus; *ψ*-fractional integrals; fractional differential equations

#### **1. Introduction**

Fractional integrals and fractional derivatives are generalizations of classical integer-order integrals and integer-order derivatives, respectively, which have been found to be more adequate in the study of a lot of real world problems. In recent decades, various fractional-order models have been used in plasma physics, automatic control, robotics, and many other branches of science (cf., [1–24] and the references therein).

It is known that the *ψ*-fractional derivative operator, which was introduced in [22], extends the well-known Riemann–Liouville fractional derivative operator. Moreover, it is also easy to see that the *ψ*-fractional integral operator [14] extends the well-known Riemann–Liouville fractional integral operator and the Hadamard fractional integral operator (see Remark 1 below). Both the *ψ*-fractional derivative operator and the *ψ*-fractional integral operator are useful in the study of various fractional integral equations, fractional differential equations, and fractional integrodifferential equations.

The following known definitions about fractional integrals are used later.

**Definition 1.** [14] *Let* [*t*1, *t*2] ⊂ R *and α* > 0*. The Riemann–Liouville fractional integrals (left-sided and right-sided) of order α are defined by*

$$\mathcal{J}_{t_1+}^{\alpha}f(\mu) := \frac{1}{\Gamma(\alpha)}\int_{t_1}^{\mu}\frac{f(s)}{(\mu - s)^{1-\alpha}}ds, \ \mu > t_1,$$

*and*

$$\mathcal{J}_{t_2-}^{\alpha}f(\mu) := \frac{1}{\Gamma(\alpha)}\int_{\mu}^{t_2}\frac{f(s)}{(s - \mu)^{1-\alpha}}ds, \ \mu < t_2,$$

*respectively, where*

$$
\Gamma(t) = \int\_0^\infty \mathbf{s}^{t-1} \mathbf{e}^{-s} d\mathbf{s},
$$

*is Euler's gamma function.*

**Definition 2.** [23] *Let* [*t*1, *t*2] ⊂ R *and α* > 0*. The Hadamard fractional integrals (left-sided and right-sided) of order α are defined by*

$$H_{t_1+}^{\alpha}f(\mu) := \frac{1}{\Gamma(\alpha)}\int_{t_1}^{\mu}\Big(\ln\frac{\mu}{s}\Big)^{\alpha-1}\frac{f(s)}{s}ds, \ \mu > t_1,$$

*and*

$$H_{t_2-}^{\alpha}f(\mu) := \frac{1}{\Gamma(\alpha)}\int_{\mu}^{t_2}\Big(\ln\frac{s}{\mu}\Big)^{\alpha-1}\frac{f(s)}{s}ds, \ \mu < t_2,$$

*respectively.*

**Definition 3.** [14] *Let* [*t*1, *t*2] ⊂ R *and α* > 0*. Suppose that ψ*(*μ*) > 0 *is an increasing function on* (*t*1, *t*2] *and ψ*′(*μ*) *is continuous on* (*t*1, *t*2)*. The ψ-fractional integrals (left-sided and right-sided) of order α are defined by*

$$I_{t_1+}^{\alpha;\psi}f(\mu) = \frac{1}{\Gamma(\alpha)}\int_{t_1}^{\mu}\psi'(s)(\psi(\mu) - \psi(s))^{\alpha-1}f(s)ds, \ \mu > t_1 \tag{1}$$

*and*

$$I_{t_2-}^{\alpha;\psi}f(\mu) = \frac{1}{\Gamma(\alpha)}\int_{\mu}^{t_2}\psi'(s)(\psi(s) - \psi(\mu))^{\alpha-1}f(s)ds, \ \mu < t_2, \tag{2}$$

*respectively.*

#### **Remark 1.**

*(i) From [14], we know that, for a function f, the left-sided and right-sided Riemann–Liouville fractional integrals of order α are defined by*

$$\mathcal{J}\_{a+}^{\alpha}f(x) := \frac{1}{\Gamma(\alpha)} \int\_{a}^{x} \frac{f(t)}{(x-t)^{1-\alpha}} dt, \text{ } x > a$$

*and*

$$\mathcal{J}_{b-}^{\alpha}f(x) := \frac{1}{\Gamma(\alpha)}\int_x^b\frac{f(t)}{(t - x)^{1-\alpha}}dt, \ x < b,$$

*respectively. If we take ψ*(*x*) = *x, then it follows from (1) that*

$$I_{a+}^{\alpha;\psi}f(x) = \frac{1}{\Gamma(\alpha)}\int_a^x(x - t)^{\alpha-1}f(t)dt = \mathcal{J}_{a+}^{\alpha}f(x),$$

*which is the left-sided Riemann–Liouville fractional integral.*

*(ii) From [23], we know that, for a function f, the left-sided and right-sided Hadamard fractional integrals of order α are defined by*

$$H\_{a+}^{a}f(x) := \frac{1}{\Gamma(\alpha)} \int\_{a}^{x} \left(\ln \frac{x}{t}\right)^{\alpha-1} \frac{f(t)}{t} dt, \ x > a$$

*and*

$$H_{b-}^{\alpha}f(x) := \frac{1}{\Gamma(\alpha)}\int_x^b\Big(\ln\frac{t}{x}\Big)^{\alpha-1}\frac{f(t)}{t}dt, \ x < b,$$

*respectively. Hence, taking ψ*(*x*) = ln*x in (1), we have*

$$\begin{aligned} I_{a+}^{\alpha;\ln x}f(x) &= \frac{1}{\Gamma(\alpha)}\int_a^x\frac{1}{t}(\ln x - \ln t)^{\alpha-1}f(t)dt \\ &= \frac{1}{\Gamma(\alpha)}\int_a^x\Big(\ln\frac{x}{t}\Big)^{\alpha-1}f(t)\frac{dt}{t} \\ &= H_{a+}^{\alpha}f(x), \end{aligned}$$

*which is the left-sided Hadamard fractional integral.*
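The two reductions in Remark 1 can be checked numerically. The sketch below is only an illustration with data we chose ourselves (f ≡ 1, a = 1, x = 2, α = 2): a midpoint rule approximates the left-sided integral in (1), and for f ≡ 1 the closed form is (ψ(x) − ψ(a))^α/Γ(α + 1), which gives (x − a)^α/Γ(α + 1) for ψ(s) = s (Riemann–Liouville) and (ln(x/a))^α/Γ(α + 1) for ψ(s) = ln s (Hadamard).

```python
import math

def psi_fractional_integral(f, psi, dpsi, alpha, a, x, n=100_000):
    # Left-sided psi-fractional integral, Eq. (1), via a composite midpoint rule.
    h = (x - a) / n
    total = 0.0
    for i in range(n):
        s = a + (i + 0.5) * h
        total += dpsi(s) * (psi(x) - psi(s)) ** (alpha - 1) * f(s)
    return total * h / math.gamma(alpha)

a, x, alpha = 1.0, 2.0, 2.0
f = lambda t: 1.0

# psi(s) = s recovers the Riemann-Liouville integral: (x - a)**alpha / Gamma(alpha + 1)
rl = psi_fractional_integral(f, lambda s: s, lambda s: 1.0, alpha, a, x)
print(rl, (x - a) ** alpha / math.gamma(alpha + 1))

# psi(s) = ln s recovers the Hadamard integral: (ln(x/a))**alpha / Gamma(alpha + 1)
hd = psi_fractional_integral(f, math.log, lambda s: 1.0 / s, alpha, a, x)
print(hd, math.log(x / a) ** alpha / math.gamma(alpha + 1))
```

Each printed pair should agree to quadrature accuracy.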

Throughout this paper, we suppose that *ψ*(*μ*) is a strictly increasing function on (0, ∞), *ψ*′(*μ*) is continuous, and 0 ≤ *t*1 < *t*2. *ζ*(*μ*) is the inverse function of *ψ*(*μ*), and

$$\phi(\mu) := f(\mu) + f(t\_1 + t\_2 - \mu).$$

The rest of the paper is organized as follows. In Section 2, we give some new equalities for *ψ*-fractional integrals involving a general function *ψ*. To illustrate the applicability of our new equalities, we give two examples in Section 3 by introducing the *ψ*-means and presenting relationships between the arithmetic mean and the *ψ*-means, and by establishing a prior estimate for a class of fractional differential equations in view of the equalities established in Section 2.

#### **2. Equalities for** *ψ***-Fractional Integrals**

**Theorem 1.** *Let the function f* : [*t*1, *t*2] → R *be differentiable. Then, for the ψ-fractional integrals in (1) and (2), we have*

$$\begin{split} &\frac{f(t_1) + f(t_2)}{2} - \frac{\Gamma(\alpha+1)}{2(\psi(t_2) - \psi(t_1))^{\alpha}}[I_{t_1+}^{\alpha;\psi}f(t_2) + I_{t_2-}^{\alpha;\psi}f(t_1)] \\ &= \frac{\psi(t_2) - \psi(t_1)}{2}\int_0^1[(1-\mu)^{\alpha} - \mu^{\alpha}]f'(\zeta((1-\mu)\psi(t_2) + \mu\psi(t_1))) \\ &\qquad \cdot \zeta'((1-\mu)\psi(t_2) + \mu\psi(t_1))d\mu. \end{split} \tag{3}$$

**Proof.** Write

$$\begin{aligned} I &= \int_0^1[(1-\mu)^{\alpha} - \mu^{\alpha}]f'(\zeta((1-\mu)\psi(t_2) + \mu\psi(t_1)))\zeta'((1-\mu)\psi(t_2) + \mu\psi(t_1))d\mu \\ &= I_1 + I_2, \end{aligned} \tag{4}$$

where

$$I\_1 = \int\_0^1 (1-\mu)^a f'(\zeta((1-\mu)\psi(t\_2) + \mu\psi(t\_1))) \zeta'((1-\mu)\psi(t\_2) + \mu\psi(t\_1))d\mu.$$

$$I\_2 = -\int\_0^1 \mu^a f'(\zeta((1-\mu)\psi(t\_2) + \mu\psi(t\_1)))\zeta'((1-\mu)\psi(t\_2) + \mu\psi(t\_1))d\mu.$$

Then, for *I*1, we have

$$\begin{split} I_1 &= \int_0^1(1-\mu)^{\alpha}f'(\zeta((1-\mu)\psi(t_2) + \mu\psi(t_1)))\zeta'((1-\mu)\psi(t_2) + \mu\psi(t_1))d\mu \\ &= (1-\mu)^{\alpha}\frac{f(\zeta(\mu\psi(t_1) + (1-\mu)\psi(t_2)))}{\psi(t_1) - \psi(t_2)}\bigg|_0^1 \\ &\quad - \frac{\alpha}{\psi(t_2) - \psi(t_1)}\int_0^1(1-\mu)^{\alpha-1}f(\zeta(\mu\psi(t_1) + (1-\mu)\psi(t_2)))d\mu \\ &= \frac{f(t_2)}{\psi(t_2) - \psi(t_1)} - \frac{\alpha}{(\psi(t_2) - \psi(t_1))^{\alpha+1}}\int_{t_1}^{t_2}(\psi(\mu) - \psi(t_1))^{\alpha-1}f(\mu)\psi'(\mu)d\mu \\ &= \frac{f(t_2)}{\psi(t_2) - \psi(t_1)} - \frac{\Gamma(\alpha+1)}{(\psi(t_2) - \psi(t_1))^{\alpha+1}}I_{t_2-}^{\alpha;\psi}f(t_1). \end{split} \tag{5}$$

For *I*2, we obtain

$$\begin{split} I\_{2} &= -\int\_{0}^{1} \mu^{a} f' \left( \zeta((1-\mu)\psi(t\_{2}) + \mu\psi(t\_{1})) \right) \zeta'((1-\mu)\psi(t\_{2}) + \mu\psi(t\_{1})) d\mu \\ &= \mu^{a} \frac{f(\zeta(\mu\psi(t\_{1}) + (1-\mu)\psi(t\_{2})))}{\psi(t\_{2}) - \psi(t\_{1})} \bigg|\_{0}^{1} - \frac{a}{\psi(t\_{2}) - \psi(t\_{1})} \int\_{0}^{1} \mu^{a-1} f(\zeta(\mu\psi(t\_{1}) + (1-\mu)\psi(t\_{2}))) d\mu \\ &= \frac{f(t\_{1})}{\psi(t\_{2}) - \psi(t\_{1})} - \frac{a}{(\psi(t\_{2}) - \psi(t\_{1}))^{a+1}} \int\_{t\_{1}}^{t\_{2}} (\psi(t\_{2}) - \psi(\mu))^{a-1} f(\mu) \psi'(\mu) d\mu \\ &= \frac{f(t\_{1})}{\psi(t\_{2}) - \psi(t\_{1})} - \frac{\Gamma(a+1)}{(\psi(t\_{2}) - \psi(t\_{1}))^{a+1}} I\_{t\_{1}+}^{a;\psi} f(t\_{2}). \end{split} \tag{6}$$

Thus, by (4)–(6), we get

$$I = \frac{f(t\_1) + f(t\_2)}{\psi(t\_2) - \psi(t\_1)} - \frac{\Gamma(a+1)}{(\psi(t\_2) - \psi(t\_1))^{a+1}} [I\_{t\_1+}^{a;\psi} f(t\_2) + I\_{t\_2-}^{a;\psi} f(t\_1)].\tag{7}$$

This implies that equality (3) is true.
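As a sanity check, equality (3) can be verified numerically for one concrete instance. All choices below are illustrative, not from the paper: *ψ*(*t*) = ln *t* (so *ζ* = *ψ*<sup>−1</sup> = exp), *a* = 1 and *f*(*t*) = *t*<sup>2</sup> on [1, 2]; for *a* = 1 the *ψ*-fractional integrals reduce to ordinary weighted integrals and Γ(*a* + 1) = 1.

```python
import numpy as np

# Numerical sanity check of equality (3): psi(t) = ln t (zeta = exp),
# a = 1, f(t) = t^2 on [1, 2] -- all illustrative choices.
t1, t2, a = 1.0, 2.0, 1.0
f, fp = (lambda t: t**2), (lambda t: 2*t)
psi, psip = np.log, (lambda t: 1.0 / t)
zeta, zetap = np.exp, np.exp        # zeta = psi^{-1} and zeta' = exp for psi = ln

def trap(y, x):                     # composite trapezoidal rule
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2)

s = np.linspace(t1, t2, 200001)
frac = trap(psip(s) * f(s), s)      # = I_{t1+}^{1;psi} f(t2) = I_{t2-}^{1;psi} f(t1)
lhs = (f(t1) + f(t2)) / 2 - (frac + frac) / (2 * (psi(t2) - psi(t1))**a)

mu = np.linspace(0.0, 1.0, 200001)
arg = (1 - mu) * psi(t2) + mu * psi(t1)
rhs = (psi(t2) - psi(t1)) / 2 * trap(((1 - mu)**a - mu**a) * fp(zeta(arg)) * zetap(arg), mu)

assert abs(lhs - rhs) < 1e-6        # both sides agree to quadrature accuracy
```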

Based on Theorem 1, we can obtain the following Theorems 2 and 3.

**Theorem 2.** *If the function f* : [*t*1, *t*2] → R *is differentiable, then for the ψ-fractional integrals in (1) and (2), we have*

$$\frac{\Gamma(a+1)}{2(\psi(t\_2) - \psi(t\_1))^a} [I\_{t\_1+}^{a;\psi} f(t\_2) + I\_{t\_2-}^{a;\psi} f(t\_1)] - f\left(\frac{t\_1 + t\_2}{2}\right)$$

$$= \frac{t\_2 - t\_1}{2} \int\_0^1 g(\mu) f'(\mu t\_1 + (1 - \mu)t\_2) d\mu$$

$$- \frac{\psi(t\_2) - \psi(t\_1)}{2} \int\_0^1 [(1 - \mu)^a - \mu^a] f'(\zeta((1 - \mu)\psi(t\_2) + \mu\psi(t\_1)))$$

$$\cdot \zeta'((1 - \mu)\psi(t\_2) + \mu\psi(t\_1)) d\mu,\tag{8}$$

*where*

$$g(\mu) = \begin{cases} 1, & \mu \in \left[0, \frac{1}{2}\right), \\ -1, & \mu \in \left[\frac{1}{2}, 1\right]. \end{cases}$$

**Proof.** Notice that

$$\begin{split} & \frac{t\_2 - t\_1}{2} \int\_0^1 g(\mu) f' \left( \mu t\_1 + (1 - \mu) t\_2 \right) d\mu \\ &= \frac{t\_2 - t\_1}{2} \int\_0^{\frac{1}{2}} f'(\mu t\_1 + (1 - \mu) t\_2) d\mu - \frac{t\_2 - t\_1}{2} \int\_{\frac{1}{2}}^1 f'(\mu t\_1 + (1 - \mu) t\_2) d\mu \\ &= \left. -\frac{1}{2} f(\mu t\_1 + (1 - \mu) t\_2) \right|\_0^{\frac{1}{2}} + \frac{1}{2} f(\mu t\_1 + (1 - \mu) t\_2) \Big|\_{\frac{1}{2}}^1 \\ &= \frac{f(t\_1) + f(t\_2)}{2} - f\left(\frac{t\_1 + t\_2}{2}\right). \end{split} \tag{9}$$

Combining (3) from Theorem 1 and (9), we get (8).
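The step-function identity (9) is easy to confirm numerically; the test function *f*(*t*) = sin *t* and the interval [0, 2] are arbitrary illustrative choices.

```python
import numpy as np

# Numerical confirmation of identity (9); f(t) = sin t on [0, 2] is an
# arbitrary illustrative choice of differentiable function.
t1, t2 = 0.0, 2.0
f, fp = np.sin, np.cos

def trap(y, x):                     # composite trapezoidal rule
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2)

mu = np.linspace(0.0, 1.0, 400001)
g = np.where(mu < 0.5, 1.0, -1.0)   # the step function g from Theorem 2
lhs = (t2 - t1) / 2 * trap(g * fp(mu * t1 + (1 - mu) * t2), mu)
rhs = (f(t1) + f(t2)) / 2 - f((t1 + t2) / 2)

assert abs(lhs - rhs) < 1e-5        # agreement up to the jump-point error
```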

**Theorem 3.** *Let the function f* : [*t*1, *t*2] → R *be differentiable. Then,*

$$\frac{\Gamma(a+1)}{2(\psi(t\_2) - \psi(t\_1))^a} [I\_{t\_1+}^{a;\psi} f(t\_2) + I\_{t\_2-}^{a;\psi} f(t\_1)] - f\left(\zeta\left(\frac{\psi(t\_1) + \psi(t\_2)}{2}\right)\right)$$

$$=\frac{\psi(t\_2) - \psi(t\_1)}{2} \int\_0^1 g(\mu) f'\left(\zeta((1-\mu)\psi(t\_2) + \mu\psi(t\_1))\right) \zeta'((1-\mu)\psi(t\_2) + \mu\psi(t\_1)) d\mu$$

$$- \frac{\psi(t\_2) - \psi(t\_1)}{2} \int\_0^1 [(1-\mu)^a - \mu^a] f'\left(\zeta((1-\mu)\psi(t\_2) + \mu\psi(t\_1))\right) \zeta'((1-\mu)\psi(t\_2) + \mu\psi(t\_1)) d\mu,\tag{10}$$

*where*

$$g(\mu) = \begin{cases} 1, & \mu \in \left[0, \frac{1}{2}\right), \\ -1, & \mu \in \left[\frac{1}{2}, 1\right]. \end{cases}$$

**Proof.** Observe

$$\begin{split} & \frac{\psi(t\_{2}) - \psi(t\_{1})}{2} \int\_{0}^{1} g(\mu) f'(\zeta((1-\mu)\psi(t\_{2}) + \mu\psi(t\_{1}))) \zeta'((1-\mu)\psi(t\_{2}) + \mu\psi(t\_{1})) d\mu \\ &= \frac{\psi(t\_{2}) - \psi(t\_{1})}{2} \int\_{0}^{\frac{1}{2}} f'(\zeta((1-\mu)\psi(t\_{2}) + \mu\psi(t\_{1}))) \zeta'((1-\mu)\psi(t\_{2}) + \mu\psi(t\_{1})) d\mu \\ &\quad - \frac{\psi(t\_{2}) - \psi(t\_{1})}{2} \int\_{\frac{1}{2}}^{1} f'(\zeta((1-\mu)\psi(t\_{2}) + \mu\psi(t\_{1}))) \zeta'((1-\mu)\psi(t\_{2}) + \mu\psi(t\_{1})) d\mu \\ &= \left. -\frac{1}{2} f(\zeta(\mu\psi(t\_{1}) + (1-\mu)\psi(t\_{2}))) \right|\_{0}^{\frac{1}{2}} + \frac{1}{2} f(\zeta(\mu\psi(t\_{1}) + (1-\mu)\psi(t\_{2}))) \Big|\_{\frac{1}{2}}^{1} \\ &= \frac{f(t\_{1}) + f(t\_{2})}{2} - f\left(\zeta\left(\frac{\psi(t\_{1}) + \psi(t\_{2})}{2}\right)\right). \end{split} \tag{11}$$

Combining (3) of Theorem 1 and (11), we get the equality (10).

The following result involves a point *μ* between *t*<sup>1</sup> and *t*2.

**Theorem 4.** *If the function f* : [*t*1, *t*2] → R *is differentiable, then we have*

$$\Gamma(a+1)[I\_{\mu-}^{a;\psi}f(t\_1) + I\_{\mu+}^{a;\psi}f(t\_2)] - [f(t\_1)(\psi(\mu) - \psi(t\_1))^a + f(t\_2)(\psi(t\_2) - \psi(\mu))^a]$$

$$= (\psi(t\_2) - \psi(\mu))^{a+1} \int\_0^1 (s^a - 1) f'(\zeta((1-s)\psi(t\_2) + s\psi(\mu))) \zeta'((1-s)\psi(t\_2) + s\psi(\mu)) ds$$

$$- (\psi(\mu) - \psi(t\_1))^{a+1} \int\_0^1 (s^a - 1) f'(\zeta((1-s)\psi(t\_1) + s\psi(\mu)))$$

$$\cdot \zeta'((1-s)\psi(t\_1) + s\psi(\mu)) ds,\tag{12}$$

*where μ* ∈ (*t*1, *t*2)*.*

**Proof.** Observe

$$\begin{split} & \int\_{0}^{1} (s^{a} - 1) f' \left( \zeta((1-s)\psi(t\_{2}) + s\psi(\mu)) \right) \zeta'((1-s)\psi(t\_{2}) + s\psi(\mu)) ds \\ &= \frac{1}{\psi(\mu) - \psi(t\_{2})} (s^{a} - 1) f(\zeta(s\psi(\mu) + (1-s)\psi(t\_{2}))) \Big|\_{0}^{1} \\ &\quad + \frac{a}{\psi(t\_{2}) - \psi(\mu)} \int\_{0}^{1} s^{a-1} f(\zeta(s\psi(\mu) + (1-s)\psi(t\_{2}))) ds \\ &= -\frac{f(t\_{2})}{\psi(t\_{2}) - \psi(\mu)} + \frac{a}{(\psi(t\_{2}) - \psi(\mu))^{a+1}} \int\_{\mu}^{t\_{2}} (\psi(t\_{2}) - \psi(s))^{a-1} f(s) \psi'(s) ds \\ &= -\frac{f(t\_{2})}{\psi(t\_{2}) - \psi(\mu)} + \frac{\Gamma(a+1)}{(\psi(t\_{2}) - \psi(\mu))^{a+1}} I\_{\mu+}^{a;\psi} f(t\_{2}), \end{split} \tag{13}$$

and

$$\begin{split} & \int\_{0}^{1} (s^{a}-1) f' \left( \zeta((1-s)\psi(t\_{1}) + s\psi(\mu)) \right) \zeta' \left( (1-s)\psi(t\_{1}) + s\psi(\mu) \right) ds \\ &= \frac{1}{\psi(\mu) - \psi(t\_{1})} (s^{a}-1) f(\zeta(s\psi(\mu) + (1-s)\psi(t\_{1}))) \Big|\_{0}^{1} \\ &\quad - \frac{a}{\psi(\mu) - \psi(t\_{1})} \int\_{0}^{1} s^{a-1} f(\zeta(s\psi(\mu) + (1-s)\psi(t\_{1}))) ds \\ &= \frac{f(t\_{1})}{\psi(\mu) - \psi(t\_{1})} - \frac{a}{(\psi(\mu) - \psi(t\_{1}))^{a+1}} \int\_{t\_{1}}^{\mu} (\psi(s) - \psi(t\_{1}))^{a-1} f(s) \psi'(s) ds \\ &= \frac{f(t\_{1})}{\psi(\mu) - \psi(t\_{1})} - \frac{\Gamma(a+1)}{(\psi(\mu) - \psi(t\_{1}))^{a+1}} I\_{\mu-}^{a;\psi} f(t\_{1}). \end{split} \tag{14}$$

Combining (13) and (14), we get the result (12).

Next, we will give two equalities involving a function *φ*.

**Theorem 5.** *Let the function φ* : [*t*1, *t*2] → R *be differentiable. If φ* ∈ *L*[*t*1, *t*2]*, then*

$$\frac{\phi(t\_1) + \phi(t\_2)}{2} - \frac{\Gamma(a+1)}{2(\psi(t\_2) - \psi(t\_1))^a} [I\_{t\_1 +}^{a; \psi} \phi(t\_2) + I\_{t\_2 -}^{a; \psi} \phi(t\_1)]$$

$$= \frac{t\_2 - t\_1}{2(\psi(t\_2) - \psi(t\_1))^a} \int\_0^1 \mathfrak{g}(s) \phi'((1-s)t\_1 + t\_2s) ds,\tag{15}$$

*where*

$$\mathfrak{g}(\mu) = (\psi((1-\mu)t\_1 + t\_2\mu) - \psi(t\_1))^a - (\psi(t\_2) - \psi((1-\mu)t\_1 + t\_2\mu))^a.$$

**Proof.** Write

$$\begin{aligned} I &=& \int\_0^1 \mathfrak{g}(s) \phi'((1-s)t\_1 + t\_2s) ds \\ &=& \int\_0^1 (\psi((1-s)t\_1 + t\_2s) - \psi(t\_1))^a \phi'((1-s)t\_1 + t\_2s) ds \\ &&- \int\_0^1 (\psi(t\_2) - \psi((1-s)t\_1 + t\_2s))^a \phi'((1-s)t\_1 + t\_2s) ds \\ &=& I\_1 + I\_2. \end{aligned}$$

Then, for *I*1, we have

$$\begin{split} I\_{1} &= \int\_{0}^{1} (\psi((1-s)t\_{1} + t\_{2}s) - \psi(t\_{1}))^{a} \phi'((1-s)t\_{1} + t\_{2}s) ds \\ &= \frac{1}{t\_{2} - t\_{1}} \int\_{t\_{1}}^{t\_{2}} (\psi(s) - \psi(t\_{1}))^{a} d\phi(s) \\ &= \frac{(\psi(s) - \psi(t\_{1}))^{a} \phi(s)}{t\_{2} - t\_{1}} \Big|\_{t\_{1}}^{t\_{2}} - \frac{a}{t\_{2} - t\_{1}} \int\_{t\_{1}}^{t\_{2}} \frac{\psi'(s)}{(\psi(s) - \psi(t\_{1}))^{1 - a}} \phi(s) ds \\ &= \frac{(\psi(t\_{2}) - \psi(t\_{1}))^{a}}{t\_{2} - t\_{1}} \phi(t\_{2}) - \frac{\Gamma(a + 1)}{t\_{2} - t\_{1}} I\_{t\_{2}-}^{a; \psi} \phi(t\_{1}). \end{split} \tag{16}$$

For *I*2, we obtain

$$\begin{split} I\_{2} &= -\int\_{0}^{1} (\psi(t\_{2}) - \psi((1-s)t\_{1} + t\_{2}s))^{a} \phi'((1-s)t\_{1} + t\_{2}s) ds \\ &= -\frac{1}{t\_{2} - t\_{1}} \int\_{t\_{1}}^{t\_{2}} (\psi(t\_{2}) - \psi(s))^{a} d\phi(s) \\ &= -\frac{(\psi(t\_{2}) - \psi(s))^{a} \phi(s)}{t\_{2} - t\_{1}} \Bigg|\_{t\_{1}}^{t\_{2}} - \frac{a}{t\_{2} - t\_{1}} \int\_{t\_{1}}^{t\_{2}} \frac{\psi'(s)}{(\psi(t\_{2}) - \psi(s))^{1 - a}} \phi(s) ds \\ &= \frac{(\psi(t\_{2}) - \psi(t\_{1}))^{a}}{t\_{2} - t\_{1}} \phi(t\_{1}) - \frac{\Gamma(a + 1)}{t\_{2} - t\_{1}} I\_{t\_{1}+}^{a;\psi} \phi(t\_{2}). \end{split} \tag{17}$$

By adding (16) and (17), we get

$$I = \frac{(\psi(t\_2) - \psi(t\_1))^a}{t\_2 - t\_1} [\phi(t\_1) + \phi(t\_2)] - \frac{\Gamma(a+1)}{t\_2 - t\_1} [I\_{t\_2-}^{a;\psi} \phi(t\_1) + I\_{t\_1+}^{a;\psi} \phi(t\_2)].$$

This implies that the equality (15) is true.
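Equality (15) can likewise be checked numerically for one concrete instance; the choices *ψ*(*t*) = ln *t*, *a* = 1, *φ*(*t*) = *t*<sup>2</sup> on [1, 2] are illustrative, not from the paper.

```python
import numpy as np

# Sketch check of equality (15): psi(t) = ln t, a = 1, phi(t) = t^2 on
# [1, 2] (illustrative choices; for a = 1 the psi-fractional integrals
# reduce to int psi'(s) phi(s) ds and Gamma(a + 1) = 1).
t1, t2, a = 1.0, 2.0, 1.0
phi, phip = (lambda t: t**2), (lambda t: 2*t)
psi, psip = np.log, (lambda t: 1.0 / t)

def trap(y, x):                     # composite trapezoidal rule
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2)

s = np.linspace(t1, t2, 200001)
frac = trap(psip(s) * phi(s), s)    # both psi-fractional integrals coincide for a = 1
lhs = (phi(t1) + phi(t2)) / 2 - (frac + frac) / (2 * (psi(t2) - psi(t1))**a)

u = np.linspace(0.0, 1.0, 200001)
x = (1 - u) * t1 + t2 * u
g = (psi(x) - psi(t1))**a - (psi(t2) - psi(x))**a   # the weight from Theorem 5
rhs = (t2 - t1) / (2 * (psi(t2) - psi(t1))**a) * trap(g * phip(x), u)

assert abs(lhs - rhs) < 1e-6
```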

**Theorem 6.** *Let φ* : [*t*1, *t*2] → R *be a differentiable function and φ* ∈ *L*[*t*1, *t*2]*. If h* : [*t*1, *t*2] → R *is integrable, then*

$$\begin{aligned} & \frac{\phi(t\_1) + \phi(t\_2)}{2} [I\_{t\_1+}^{a;\psi} h(t\_2) + I\_{t\_2-}^{a;\psi} h(t\_1)] - [I\_{t\_1+}^{a;\psi} (h\phi)(t\_2) + I\_{t\_2-}^{a;\psi} (h\phi)(t\_1)] \\ &= \frac{1}{2\Gamma(a)} \int\_{t\_1}^{t\_2} \left[ \int\_{t\_1}^{\mu} \mathfrak{p}(s) h(s) ds - \int\_{\mu}^{t\_2} \mathfrak{p}(s) h(s) ds \right] \phi'(\mu) d\mu, \end{aligned} \tag{18}$$

*where*

$$\mathfrak{p}(\mu) = \frac{\psi'(\mu)}{(\psi(t\_2) - \psi(\mu))^{1-a}} + \frac{\psi'(\mu)}{(\psi(\mu) - \psi(t\_1))^{1-a}}.$$

**Proof.** Write

$$\begin{array}{rcl} I &=& \int\_{t\_1}^{t\_2} \left[ \int\_{t\_1}^{\mu} \mathfrak{p}(s) h(s) ds - \int\_{\mu}^{t\_2} \mathfrak{p}(s) h(s) ds \right] \phi'(\mu) d\mu \\ &=& \int\_{t\_1}^{t\_2} \int\_{t\_1}^{\mu} \mathfrak{p}(s) h(s) ds\, \phi'(\mu) d\mu - \int\_{t\_1}^{t\_2} \int\_{\mu}^{t\_2} \mathfrak{p}(s) h(s) ds\, \phi'(\mu) d\mu \\ &=& I\_1 + I\_2. \end{array}$$

Then, for *I*1, we have

$$\begin{split} I\_{1} &= \int\_{t\_{1}}^{t\_{2}} \int\_{t\_{1}}^{\mu} \mathfrak{p}(s) h(s) ds\, \phi'(\mu) d\mu \\ &= \left. \int\_{t\_{1}}^{\mu} \mathfrak{p}(s) h(s) ds\, \phi(\mu) \right|\_{t\_{1}}^{t\_{2}} - \int\_{t\_{1}}^{t\_{2}} \mathfrak{p}(\mu) h(\mu) \phi(\mu) d\mu \\ &= \int\_{t\_{1}}^{t\_{2}} \mathfrak{p}(s) h(s) ds\, \phi(t\_{2}) - \int\_{t\_{1}}^{t\_{2}} \mathfrak{p}(\mu) h(\mu) \phi(\mu) d\mu \\ &= \Gamma(a) \big[ I\_{t\_{1}+}^{a;\psi} h(t\_{2}) + I\_{t\_{2}-}^{a;\psi} h(t\_{1}) \big] \phi(t\_{2}) - \Gamma(a) \big[ I\_{t\_{1}+}^{a;\psi} (h\phi)(t\_{2}) + I\_{t\_{2}-}^{a;\psi} (h\phi)(t\_{1}) \big]. \end{split} \tag{19}$$

For *I*2, we obtain

$$\begin{split} I\_{2} &= -\int\_{t\_{1}}^{t\_{2}} \int\_{\mu}^{t\_{2}} \mathfrak{p}(s) h(s) ds\, \phi'(\mu) d\mu \\ &= \left. -\int\_{\mu}^{t\_{2}} \mathfrak{p}(s) h(s) ds\, \phi(\mu) \right|\_{t\_{1}}^{t\_{2}} - \int\_{t\_{1}}^{t\_{2}} \mathfrak{p}(\mu) h(\mu) \phi(\mu) d\mu \\ &= \int\_{t\_{1}}^{t\_{2}} \mathfrak{p}(s) h(s) ds\, \phi(t\_{1}) - \int\_{t\_{1}}^{t\_{2}} \mathfrak{p}(\mu) h(\mu) \phi(\mu) d\mu \\ &= \Gamma(a) [I\_{t\_{1}+}^{a;\psi} h(t\_{2}) + I\_{t\_{2}-}^{a;\psi} h(t\_{1})] \phi(t\_{1}) - \Gamma(a) [I\_{t\_{1}+}^{a;\psi} (h\phi)(t\_{2}) + I\_{t\_{2}-}^{a;\psi} (h\phi)(t\_{1})]. \end{split} \tag{20}$$

Combining (19) and (20), we get

$$I = \Gamma(a)[I\_{t\_1+}^{a;\psi}h(t\_2) + I\_{t\_2-}^{a;\psi}h(t\_1)](\phi(t\_1) + \phi(t\_2)) - 2\Gamma(a)[I\_{t\_1+}^{a;\psi}(h\phi)(t\_2) + I\_{t\_2-}^{a;\psi}(h\phi)(t\_1)].$$

This implies the equality (18).

For the last result of this section, we suppose that *ψ*(0) = 0 and *ψ*(1) = 1.

**Theorem 7.** *Let the function f* : [*ψ*(*t*1), *ψ*(*t*2)] → R *be differentiable. Then, the following equality holds:*

$$\frac{f(\psi(t\_1)) + f(\psi(t\_2))}{2} - \frac{\Gamma(a + 1)}{2(\psi(t\_2) - \psi(t\_1))^a} [I\_{t\_1 +}^{a;\psi} f \circ \psi(t\_2) + I\_{t\_2 -}^{a;\psi} f \circ \psi(t\_1)]$$

$$= \frac{\psi(t\_2) - \psi(t\_1)}{2} \int\_0^1 [(1 - \psi(\mu))^a - \psi^a(\mu)] \psi'(\mu)$$

$$\cdot f'((1 - \psi(\mu))\psi(t\_2) + \psi(\mu)\psi(t\_1))d\mu,\tag{21}$$

*where f* ◦ *ψ*(*μ*) = *f*(*ψ*(*μ*))*.*

**Proof.** Write

$$\begin{aligned} I &=& \int\_0^1 [(1 - \psi(\mu))^a - \psi^a(\mu)] \psi'(\mu) f'((1 - \psi(\mu))\psi(t\_2) + \psi(\mu)\psi(t\_1)) d\mu \\ &=& \int\_0^1 (1 - \psi(\mu))^a \psi'(\mu) f'((1 - \psi(\mu))\psi(t\_2) + \psi(\mu)\psi(t\_1)) d\mu \\ &&- \int\_0^1 \psi^a(\mu) \psi'(\mu) f'((1 - \psi(\mu))\psi(t\_2) + \psi(\mu)\psi(t\_1)) d\mu \\ &=& I\_1 + I\_2. \end{aligned}$$

Then, for *I*1, we get

$$\begin{split} I\_{1} &= \int\_{0}^{1} (1 - \psi(\mu))^{a} \psi'(\mu) f'((1 - \psi(\mu))\psi(t\_{2}) + \psi(\mu)\psi(t\_{1})) d\mu \\ &= \left. \frac{(1 - \psi(\mu))^{a}}{\psi(t\_{1}) - \psi(t\_{2})} f((1 - \psi(\mu))\psi(t\_{2}) + \psi(\mu)\psi(t\_{1})) \right|\_{0}^{1} \\ &\quad + \int\_{0}^{1} \frac{a(1 - \psi(\mu))^{a - 1} \psi'(\mu)}{\psi(t\_{1}) - \psi(t\_{2})} f((1 - \psi(\mu))\psi(t\_{2}) + \psi(\mu)\psi(t\_{1})) d\mu \\ &= \frac{f(\psi(t\_{2}))}{\psi(t\_{2}) - \psi(t\_{1})} - \frac{a}{\psi(t\_{2}) - \psi(t\_{1})} \int\_{t\_{1}}^{t\_{2}} \left(\frac{\psi(\mu) - \psi(t\_{1})}{\psi(t\_{2}) - \psi(t\_{1})}\right)^{a - 1} \frac{\psi'(\mu)}{\psi(t\_{2}) - \psi(t\_{1})} f(\psi(\mu)) d\mu \\ &= \frac{f(\psi(t\_{2}))}{\psi(t\_{2}) - \psi(t\_{1})} - \frac{\Gamma(a + 1)}{(\psi(t\_{2}) - \psi(t\_{1}))^{a + 1}} I\_{t\_{2}-}^{a;\psi} f \circ \psi(t\_{1}). \end{split} \tag{22}$$

For *I*2, we have

$$\begin{split} I\_{2} &= -\int\_{0}^{1} \psi^{a}(\mu) \psi'(\mu) f'((1-\psi(\mu))\psi(t\_{2}) + \psi(\mu)\psi(t\_{1})) d\mu \\ &= \left. \frac{\psi^{a}(\mu)}{\psi(t\_{2}) - \psi(t\_{1})} f((1-\psi(\mu))\psi(t\_{2}) + \psi(\mu)\psi(t\_{1})) \right|\_{0}^{1} \\ &\quad - \int\_{0}^{1} \frac{a \psi^{a-1}(\mu)\psi'(\mu)}{\psi(t\_{2}) - \psi(t\_{1})} f((1-\psi(\mu))\psi(t\_{2}) + \psi(\mu)\psi(t\_{1})) d\mu \\ &= \frac{f(\psi(t\_{1}))}{\psi(t\_{2}) - \psi(t\_{1})} - \frac{a}{\psi(t\_{2}) - \psi(t\_{1})} \int\_{t\_{1}}^{t\_{2}} \left(\frac{\psi(t\_{2}) - \psi(\mu)}{\psi(t\_{2}) - \psi(t\_{1})}\right)^{a-1} \frac{\psi'(\mu)}{\psi(t\_{2}) - \psi(t\_{1})} f(\psi(\mu)) d\mu \\ &= \frac{f(\psi(t\_{1}))}{\psi(t\_{2}) - \psi(t\_{1})} - \frac{\Gamma(a+1)}{(\psi(t\_{2}) - \psi(t\_{1}))^{a+1}} I\_{t\_{1}+}^{a;\psi} f \circ \psi(t\_{2}). \end{split} \tag{23}$$

By (22) and (23), we see that

$$I = \frac{f(\psi(t\_1)) + f(\psi(t\_2))}{\psi(t\_2) - \psi(t\_1)} - \frac{\Gamma(a+1)}{(\psi(t\_2) - \psi(t\_1))^{a+1}} [I\_{t\_1+}^{a;\psi} f \circ \psi(t\_2) + I\_{t\_2-}^{a;\psi} f \circ \psi(t\_1)],$$

which means that equality (21) is true.

#### **3. Applications**

To illustrate the applicability of the new equalities established in the previous section, we give two examples in this section.

**Example 1.** *The arithmetic mean A is defined by*

$$A(t\_1, t\_2) := \frac{t\_1 + t\_2}{2}, \ t\_1, \ t\_2 > 0.$$

*Now, we introduce the following ψ-means M<sup>ψ</sup> and Mψ*,*n:*

$$M\_{\Psi}(t\_1, t\_2) := \frac{\int\_{t\_1}^{t\_2} \mu \psi'(\mu) d\mu}{\psi(t\_2) - \psi(t\_1)}, \ t\_1 \neq t\_2. \tag{24}$$

*and*

$$\overline{\mathcal{M}}\_{\psi,n}(t\_1, t\_2) := \frac{\psi^{n+1}(t\_2) - \psi^{n+1}(t\_1)}{(n+1)(\psi(t\_2) - \psi(t\_1))}, \ t\_1 \neq t\_2, \ n \in \mathbb{N}.$$

*As we can see from (24), the ψ-mean Mψ*(*t*1, *t*2) *is just the following logarithmic mean* [25] *when ψ*(*μ*) = ln *μ:*

$$L(t\_1, t\_2) := \frac{t\_2 - t\_1}{\ln t\_2 - \ln t\_1}, \ t\_1 \neq t\_2.$$

*Moreover, we see that, when ψ*(*μ*) = *μ, the ψ-mean Mψ*(*t*1, *t*2) *is just the arithmetic mean A*(*t*1, *t*2)*.*
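These two special cases of the *ψ*-mean are easy to confirm numerically; the endpoints *t*1 = 2, *t*2 = 5 below are arbitrary illustrative choices.

```python
import numpy as np

# The psi-mean (24) recovers the logarithmic mean L for psi = ln, and the
# arithmetic mean A for psi(t) = t; t1 = 2, t2 = 5 are arbitrary.
t1, t2 = 2.0, 5.0

def trap(y, x):                     # composite trapezoidal rule
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2)

def M_psi(psi, psip, t1, t2, n=200001):
    s = np.linspace(t1, t2, n)
    return trap(s * psip(s), s) / (psi(t2) - psi(t1))

L = (t2 - t1) / (np.log(t2) - np.log(t1))           # logarithmic mean
assert abs(M_psi(np.log, lambda t: 1.0 / t, t1, t2) - L) < 1e-8
assert abs(M_psi(lambda t: t, np.ones_like, t1, t2) - (t1 + t2) / 2) < 1e-8
```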

*The following two results, which are deduced by virtue of our new equalities in the last section, show new relationships between the arithmetic mean A and the two ψ-means above.*

**Theorem 8.** *Let* 0 < *t*<sup>1</sup> < *t*2*. Then,*

$$|A(t\_1, t\_2) - M\_{\psi}(t\_1, t\_2)| \le \frac{\psi(t\_2) - \psi(t\_1)}{2(q+1)^{1/q}} \left( \int\_0^1 [\zeta'(\mu \psi(t\_1) + (1-\mu)\psi(t\_2))]^{q'} d\mu \right)^{1/q'},$$

*where q* > 1 *and* 1/*q* + 1/*q*′ = 1.

**Proof.** Taking *α* = 1 and *f*(*μ*) = *μ* in Theorem 1 and using the Hölder inequality, we obtain

$$\begin{split} & \left| A(t\_1, t\_2) - M\_{\psi}(t\_1, t\_2) \right| \\ &\leq \frac{\psi(t\_2) - \psi(t\_1)}{2} \int\_0^1 |1 - 2\mu| \zeta'(\mu\psi(t\_1) + (1 - \mu)\psi(t\_2)) d\mu \\ &\leq \frac{\psi(t\_2) - \psi(t\_1)}{2} \left( \int\_0^1 |1 - 2\mu|^q d\mu \right)^{1/q} \left( \int\_0^1 \left[ \zeta'(\mu\psi(t\_1) + (1 - \mu)\psi(t\_2)) \right]^{q'} d\mu \right)^{1/q'} . \end{split}$$

Noticing that

$$\left(\int\_0^1 |1 - 2\mu|^q d\mu\right)^{1/q} = \frac{1}{(q+1)^{1/q}},$$

we get the desired result.
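A numerical illustration of the bound in Theorem 8, with the hypothetical choices *ψ* = ln (so *ζ*′ = exp), *q* = *q*′ = 2, and [*t*1, *t*2] = [1, 2]:

```python
import numpy as np

# Theorem 8 illustration: psi = ln (zeta' = exp), q = q' = 2 on [1, 2];
# the gap |A - M_psi| should sit below the stated Holder bound.
t1, t2, q = 1.0, 2.0, 2.0
qp = q / (q - 1)                     # conjugate exponent q'
psi = np.log

def trap(y, x):                      # composite trapezoidal rule
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2)

s = np.linspace(t1, t2, 200001)
M = trap(s * (1.0 / s), s) / (psi(t2) - psi(t1))    # psi-mean = logarithmic mean here
A = (t1 + t2) / 2                                   # arithmetic mean

mu = np.linspace(0.0, 1.0, 200001)
zp = np.exp(mu * psi(t1) + (1 - mu) * psi(t2))      # zeta'(x) = e^x for psi = ln
bound = (psi(t2) - psi(t1)) / (2 * (q + 1)**(1 / q)) * trap(zp**qp, mu)**(1 / qp)

assert abs(A - M) <= bound           # the inequality of Theorem 8 holds
```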

**Theorem 9.** *Let* 0 < *t*<sup>1</sup> < *t*2*, ψ*(0) = 0 *and ψ*(1) = 1*. Then,*

$$\begin{aligned} & \left| A(\boldsymbol{\psi}^n(t\_1), \boldsymbol{\psi}^n(t\_2)) - \overline{M}\_{\boldsymbol{\psi}, \boldsymbol{n}}(t\_1, t\_2) \right| \\ & \leq \ \ \ \frac{n (\boldsymbol{\psi}(t\_2) - \boldsymbol{\psi}(t\_1))^{1 - 1/q'}}{2(q+1)^{1/q} (q'(n-1) + 1)^{1/q'}} \left( \boldsymbol{\psi}^{q'(n-1) + 1}(t\_2) - \boldsymbol{\psi}^{q'(n-1) + 1}(t\_1) \right)^{1/q'} \end{aligned}$$

*where q* > 1 *and* 1/*q* + 1/*q*′ = 1.

**Proof.** Taking *α* = 1 and *f*(*μ*) = *μ<sup>n</sup>* in Theorem 7 and using the Hölder inequality, we obtain

$$\begin{split} & \left| A(\psi^{n}(t\_{1}), \psi^{n}(t\_{2})) - \overline{M}\_{\psi, n}(t\_{1}, t\_{2}) \right| \\ &\leq \frac{n(\psi(t\_{2}) - \psi(t\_{1}))}{2} \int\_{0}^{1} |1 - 2\psi(\mu)| \psi'(\mu) (\psi(\mu)\psi(t\_{1}) + (1 - \psi(\mu))\psi(t\_{2}))^{n - 1} d\mu \\ &\leq \frac{n(\psi(t\_{2}) - \psi(t\_{1}))}{2} \left( \int\_{0}^{1} |1 - 2\psi(\mu)|^{q} \psi'(\mu) d\mu \right)^{1/q} \\ &\quad \cdot \left( \int\_{0}^{1} \left[ \psi(\mu)\psi(t\_{1}) + (1 - \psi(\mu)) \psi(t\_{2}) \right]^{q'(n - 1)} \psi'(\mu) d\mu \right)^{1/q'} . \end{split}$$

Observing that

$$\left(\int\_0^1 \left[\psi(\mu)\psi(t\_1) + (1-\psi(\mu))\psi(t\_2)\right]^{q'(n-1)} \psi'(\mu) d\mu\right)^{1/q'} = \frac{\left(\psi^{q'(n-1)+1}(t\_2) - \psi^{q'(n-1)+1}(t\_1)\right)^{1/q'}}{(q'(n-1)+1)^{1/q'}(\psi(t\_2) - \psi(t\_1))^{1/q'}},$$

we get the desired result.

**Example 2.** *Consider the following fractional integrodifferential equations of Sobolev type with nonlocal conditions in* R*:*

$$\begin{cases} \ ^cD^{\alpha;\psi}u(t) = f\left(t, u(t), \int\_a^t \rho(t, s)h(t, s, u(s))ds\right), \ t \in J := [a, T],\\ u(a) = u\_a - g(u), \end{cases} \tag{25}$$

*where <sup>c</sup>D<sup>α</sup>*;*<sup>ψ</sup>, α* ∈ (0, 1)*, is the ψ-Caputo fractional derivative of order α with the lower limit a* > 0*, ua* ∈ R *and g* : *C*(*J*, R) → R*, f* : *J* ×R×R → R*, ρ* : Δ → R *and h* : Δ ×R → R (Δ = {(*t*,*s*) ∈ [*a*, *T*] × [*a*, *T*] : *t* ≥ *s*}) *are given functions.*

*Applying the operator Iα*;*<sup>ψ</sup> <sup>a</sup>*<sup>+</sup> *to the first equation of the problem (25), we get for each t* ∈ (*a*, *T*],

$$\begin{split} u(t) &= u(a) + \frac{1}{\Gamma(a)} \int\_{a}^{t} \psi'(s)(\psi(t) - \psi(s))^{a-1} f\left(s, u(s), \int\_{a}^{s} \rho(s, \tau)h(s, \tau, u(\tau))d\tau\right) ds \\ &= u\_{a} - g(u) + \frac{1}{\Gamma(a)} \int\_{a}^{t} \psi'(s)(\psi(t) - \psi(s))^{a-1} \end{split}$$
 
$$f\left(s, u(s), \int\_{a}^{s} \rho(s, \tau)h(s, \tau, u(\tau))d\tau\right) ds. \tag{26}$$

*Substituting (26) into (3) of Theorem 1 with f* = *u, t*<sup>1</sup> = *a and t*<sup>2</sup> = *t, we can obtain*

$$\begin{split} &u\_{a} - g(u) + \frac{1}{2\Gamma(a)} \int\_{a}^{t} \psi'(s) (\psi(t) - \psi(s))^{a-1} f \left( s, u(s), \int\_{a}^{s} \rho(s, \tau) h(s, \tau, u(\tau)) d\tau \right) ds \\ &\quad - \frac{a}{2(\psi(t) - \psi(a))^{a}} \left[ \int\_{a}^{t} \psi'(s) (\psi(t) - \psi(s))^{a-1} u(s) ds + \int\_{a}^{t} \psi'(s) (\psi(s) - \psi(a))^{a-1} u(s) ds \right] \\ &= \frac{\psi(t) - \psi(a)}{2} \int\_{0}^{1} [(1-s)^{a} - s^{a}] \zeta'(s\psi(a) + (1-s)\psi(t)) u' \left( \zeta(s\psi(a) + (1-s)\psi(t)) \right) ds. \end{split}$$

*Therefore, we have*

$$\begin{split} &\int\_{a}^{t} \psi'(s)(\psi(t) - \psi(s))^{a-1} u(s) ds + \int\_{a}^{t} \psi'(s)(\psi(s) - \psi(a))^{a-1} u(s) ds \\ &= \frac{2(\psi(t) - \psi(a))^{a}}{a} (u\_{a} - g(u)) + \frac{(\psi(t) - \psi(a))^{a}}{\Gamma(a+1)} \int\_{a}^{t} \psi'(s)(\psi(t) - \psi(s))^{a-1} \\ &\quad f\left(s, u(s), \int\_{a}^{s} \rho(s, \tau) h(s, \tau, u(\tau)) d\tau\right) ds - \frac{(\psi(t) - \psi(a))^{a+1}}{a} \int\_{0}^{1} [(1 - s)^{a} - s^{a}] \\ &\quad \zeta'(s\psi(a) + (1 - s)\psi(t)) u'\left(\zeta(s\psi(a) + (1 - s)\psi(t))\right) ds. \end{split} \tag{27}$$

*Using the fact that* |*a<sup>α</sup>* − *b<sup>α</sup>*| ≤ |*a* − *b*|*<sup>α</sup>* (*a*, *b* > 0; 0 < *α* < 1) *and the Hölder inequality in (27), we obtain the following result.*

**Theorem 10.** *For each solution u*(*t*) ∈ *C*<sup>1</sup>[*a*, *T*] *of the problem (25), if* |*u*′(*t*)| ≤ *M, then we have the following a priori estimate:*

$$\begin{split} & \left| \int\_{a}^{t} \psi'(s) (\psi(t) - \psi(s))^{a-1} u(s) ds + \int\_{a}^{t} \psi'(s) (\psi(s) - \psi(a))^{a-1} u(s) ds \right| \\ &\leq \frac{2 (\psi(t) - \psi(a))^{a}}{a} |u\_{a} - g(u)| \\ &\quad + \frac{(\psi(t) - \psi(a))^{a}}{\Gamma(a+1)} \int\_{a}^{t} \psi'(s) (\psi(t) - \psi(s))^{a-1} \left| f \left( s, u(s), \int\_{a}^{s} \rho(s, \tau) h(s, \tau, u(\tau)) d\tau \right) \right| ds \\ &\quad + \frac{M(\psi(t) - \psi(a))^{a+1}}{q^{1/q} a^{1+1/q}} \left( \int\_{0}^{1} \left[ \zeta'(s\psi(a) + (1-s)\psi(t)) \right]^{q'} ds \right)^{1/q'}, \ \forall t \in [a, T], \end{split}$$

*where q* > 1 *and* 1/*q* + 1/*q*′ = 1.

#### **4. Conclusions**

In this paper, we present new properties of *ψ*-fractional integrals involving a general function *ψ* by establishing several new equalities for the *ψ*-fractional integrals. Since the *ψ*-fractional integrals generalize both the Riemann–Liouville and the Hadamard fractional integrals, our equalities are correspondingly general and new. To illustrate their applicability, we introduce the *ψ*-means and explore the relationships between the arithmetic mean and the *ψ*-means with the aid of our equalities. Moreover, we use our equalities to obtain an a priori estimate for a class of fractional differential equations. Several questions remain open: how to study the properties of solutions to fractional equations involving the *ψ*-Caputo fractional derivative, how to reveal other new properties of *ψ*-fractional integrals, and how to find more applications of these properties. We will pay attention to these problems in our future research.

**Author Contributions:** All the authors contributed equally and significantly in writing this paper. All authors read and approved the final manuscript.

**Funding:** The work was supported partly by the NSF of China (11571229).

**Acknowledgments:** The authors would like to thank the reviewers very much for valuable comments and suggestions.

**Conflicts of Interest:** The authors declare that they have no competing interests.

#### **References**


c 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## *Article* **A Solution for Volterra Fractional Integral Equations by Hybrid Contractions**

#### **Badr Alqahtani 1, Hassen Aydi 2,3, Erdal Karapınar 3,\* and Vladimir Rakočević 4,\***


Received: 30 June 2019; Accepted: 29 July 2019; Published: 1 August 2019

**Abstract:** In this manuscript, we propose a solution for Volterra type fractional integral equations by using a hybrid type contraction that unifies both nonlinear and linear type inequalities in the context of metric spaces. Besides this main goal, we also aim to combine and merge several existing fixed point theorems that were formulated by linear and nonlinear contractions.

**Keywords:** contraction; hybrid contractions; Volterra fractional integral equations; fixed point

**MSC:** 47H10; 54H25; 46J10

#### **1. Introduction and Preliminaries**

In the last few decades, one of the most attractive research topics in nonlinear functional analysis is to solve fractional differential and fractional integral equations that can be reduced properly to standard differential equations and integral equations, respectively. In this paper, we aim to get a proper solution for Volterra type fractional integral equations by using a hybrid type contraction. For this purpose, we first initialize the new hybrid type contractions that combine linear and nonlinear inequalities.

We first recall the auxiliary functions that we shall use effectively: Let Φ be the set of all nondecreasing functions Λ : [0, ∞) → [0, ∞) in a way that

(ΛΣ) there are *<sup>i</sup>*<sup>0</sup> <sup>∈</sup> <sup>N</sup> and *<sup>δ</sup>* <sup>∈</sup> (0, 1) and a convergent series <sup>∑</sup><sup>∞</sup> *<sup>i</sup>*=<sup>1</sup> *vi* such that *vi* ≥ 0 and

$$
\Lambda^{i+1}\left(t\right) \le \delta \Lambda^{i}\left(t\right) + v\_{i}, \tag{1}
$$

for *i* ≥ *i*<sup>0</sup> and *t* ≥ 0.

Each Λ ∈ Φ is called a (*c*)-comparison function (see [1,2]).

The following lemma demonstrates the usability and power of such auxiliary functions:

**Lemma 1** ([2])**.** *If* Λ ∈ Φ*, then*


Throughout the paper, a pair (*X*, *d*) denotes a **complete metric space** if it is not mentioned otherwise. In addition, the letter *T* denotes a self-mapping on (*X*, *d*).

In what follows, we shall state the definition of a new hybrid contraction:

**Definition 1.** *A mapping T* : (*X*, *d*) → (*X*, *d*) *is called a hybrid contraction of type A, if there is* Λ *in* Φ *so that*

$$d(T\Omega, T\omega) \le \Lambda \left( \mathcal{A}\_T^p(\Omega, \omega) \right),\tag{2}$$

*where p* ≥ 0 *and σ<sup>i</sup>* ≥ 0, *i* = 1, 2, 3, 4, *such that* 4 ∑ *i*=1 *σ<sup>i</sup>* = 1 *and*

$$\mathcal{A}\_{T}^{p}(\Omega,\omega) = \begin{cases} \left[\sigma\_{1}(d(\Omega,\omega))^{p} + \sigma\_{2}(d(\Omega,T\Omega))^{p} + \sigma\_{3}(d(\omega,T\omega))^{p} + \sigma\_{4}\left(\frac{d(\omega,T\Omega) + d(\Omega,T\omega)}{2}\right)^{p}\right]^{1/p}, & \text{for } p > 0, \ \Omega, \omega \in X, \\ (d(\Omega,\omega))^{\sigma\_{1}}(d(\Omega,T\Omega))^{\sigma\_{2}}(d(\omega,T\omega))^{\sigma\_{3}}, & \text{for } p = 0, \ \Omega, \omega \in X \setminus \mathcal{F}\_{T}(X), \end{cases} \tag{3}$$

*where* $\mathcal{F}\_{T}(X) = \{\Omega \in X : T\Omega = \Omega\}$*.*

Let us underline some particular cases of Definition 1.

1. For *p* = 1, *σ*<sup>4</sup> = 0 and *μ<sup>i</sup>* = *κσi*, for *i* = 1, 2, 3, we get a contraction of Reich–Rus–Ćirić type:

*d*(*T*Ω, *Tω*) ≤ *μ*1*d*(Ω, *ω*) + *μ*2*d*(Ω, *T*Ω) + *μ*3*d*(*ω*, *Tω*),

for Ω, *ω* ∈ *X*, where *κ* ∈ [0, 1), see [2–4].

2. In the statement above, for *μ<sup>i</sup>* = 1/3, we find a particular form of the Reich–Rus–Ćirić type contraction,

$$d(T\Omega, T\omega) \le \frac{1}{3} \left[ d(\Omega, \omega) + d(\Omega, T\Omega) + d(\omega, T\omega) \right],$$

for Ω, *ω* ∈ *X*.

3. If *p* = 2, and *σ*<sup>1</sup> = *σ*<sup>2</sup> = *σ*<sup>3</sup> = 1/3, *σ*<sup>4</sup> = 0, we find the following condition,

$$d(T\Omega, T\omega) \le \frac{\kappa}{\sqrt{3}} [d^2(\Omega, \omega) + d^2(\Omega, T\Omega) + d^2(\omega, T\omega)]^{1/2}$$

for all Ω, *ω* ∈ *X*, where *κ* ∈ [0, 1).

4. If *p* = 1 and *σ*<sup>2</sup> = *σ*<sup>3</sup> = 1/2, *σ*<sup>1</sup> = *σ*<sup>4</sup> = 0, we have a Kannan type contraction,

$$d(T\Omega, T\omega) \le \frac{\kappa}{2} [d(\Omega, T\Omega) + d(\omega, T\omega)],$$

for all Ω, *ω* ∈ *X*, see [5].

5. If *p* = 2 and *σ*<sup>2</sup> = *σ*<sup>3</sup> = 1/2, *σ*<sup>1</sup> = *σ*<sup>4</sup> = 0, we have

$$d(T\Omega, T\omega) \le \frac{\kappa}{\sqrt{2}} [d^2(\Omega, T\Omega) + d^2(\omega, T\omega)]^{1/2}$$

for all Ω, *ω* ∈ *X*.

6. If *p* = 0 and *σ*<sup>1</sup> = 0, *σ*<sup>2</sup> = *δ*, *σ*<sup>3</sup> = 1 − *δ*, *σ*<sup>4</sup> = 0, we get an interpolative contraction of Kannan type:

$$d(T\Omega, T\omega) \le \kappa (d(\Omega, T\Omega))^\delta (d(\omega, T\omega))^{1-\delta},$$

for all Ω, *ω* ∈ *X*\F*<sup>T</sup>*(*X*), where *κ* ∈ [0, 1), see [6].

7. If *p* = 0 and *σ*<sup>1</sup> = *α*, *σ*<sup>2</sup> = *β*, *σ*<sup>3</sup> = 1 − *β* − *α*, *σ*<sup>4</sup> = 0 with *α*, *β* ∈ (0, 1), then

$$d(T\Omega, T\omega) \le \kappa (d(\Omega, \omega))^{\alpha} (d(\Omega, T\Omega))^{\beta} (d(\omega, T\omega))^{1-\beta-\alpha},$$

for all Ω, *ω* ∈ *X*\F*<sup>T</sup>*(*X*). It is an interpolative contraction of Reich–Rus–Ćirić type [7] (for other related interpolative contraction type mappings, see [8–11]).
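The case analysis above can be collected in a small sketch of the hybrid quantity *A<sub>T</sub><sup>p</sup>* from (3) on (R, |·|); the metric *d*, the map *T*, and the sample points are illustrative choices, not taken from the paper.

```python
import math

# A sketch of the hybrid quantity A_T^p from (3) on the real line with
# d(a, b) = |a - b|; T and the sample points are illustrative only.
def A_T_p(d, T, x, y, p, sigma):
    s1, s2, s3, s4 = sigma
    t = [d(x, y), d(x, T(x)), d(y, T(y)), (d(y, T(x)) + d(x, T(y))) / 2]
    if p > 0:
        return (s1 * t[0]**p + s2 * t[1]**p + s3 * t[2]**p + s4 * t[3]**p)**(1 / p)
    # p = 0: the interpolative (product) branch, used off the fixed-point set
    return t[0]**s1 * t[1]**s2 * t[2]**s3

d = lambda a, b: abs(a - b)
T = lambda x: x / 2                  # a plain contraction on R with fixed point 0

# p = 1 with sigma = (1, 0, 0, 0) reduces A_T^p to d(x, y) itself
assert math.isclose(A_T_p(d, T, 1.0, 3.0, 1, (1, 0, 0, 0)), d(1.0, 3.0))
# Kannan-type choice (case 4): p = 1, sigma2 = sigma3 = 1/2
assert math.isclose(A_T_p(d, T, 1.0, 3.0, 1, (0, 0.5, 0.5, 0)),
                    0.5 * d(1.0, T(1.0)) + 0.5 * d(3.0, T(3.0)))
```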

In this paper, we provide some fixed point results involving the hybrid contraction (2). At the end, we give a concrete example and we solve a Volterra fractional type integral equation.

#### **2. Main Results**

Our essential result is the following.

**Theorem 1.** *Suppose that a self-mapping T on* (*X*, *d*) *is a hybrid contraction of type A. Then, T possesses a fixed point <sup>ρ</sup> and, for any <sup>ς</sup>*<sup>0</sup> <sup>∈</sup> *X, the sequence* {*Tnς*0} *converges to <sup>ρ</sup> if either*

*(*C1*) T is continuous at ρ; (*C2*) or* $[\sigma\_2^{1/p} + (\sigma\_4/2)^{1/p}] < 1$*; (*C3*) or* $[\sigma\_3^{1/p} + (\sigma\_4/2)^{1/p}] < 1$*.*

**Proof.** We shall use the standard Picard algorithm to prove the claims of the theorem. Take an arbitrary point *x* ∈ *X*, rename it *x* = *ς*<sub>0</sub>, and define the sequence {*ς<sub>n</sub>*} by the recursive relation *ς*<sub>*n*+1</sub> = *Tς<sub>n</sub>*, *n* ≥ 0. Hereafter, we shall assume that

$$
\varsigma_n \neq \varsigma_{n+1} \Leftrightarrow d(\varsigma_n, \varsigma_{n+1}) > 0 \quad \text{for all } n \in \mathbb{N}_0.
$$

Indeed, the converse case is trivial and terminates the proof. More precisely, if there is *n*<sub>0</sub> such that *ς*<sub>*n*<sub>0</sub></sub> = *ς*<sub>*n*<sub>0</sub>+1</sub> = *Tς*<sub>*n*<sub>0</sub></sub>, then *ς*<sub>*n*<sub>0</sub></sub> turns out to be a fixed point of *T*.

Now, we shall examine the cases *p* = 0 and *p* > 0, separately. We first consider the case *p* > 0. On account of the given condition (18), we find

$$d(\varsigma_{n+1}, \varsigma_n) \le \Lambda \left( \mathcal{A}_T^p(\varsigma_n, \varsigma_{n-1}) \right), \tag{4}$$

where

$$\begin{split} \mathcal{A}_T^p(\varsigma_n, \varsigma_{n-1}) &= \left[\sigma_1(d(\varsigma_n, \varsigma_{n-1}))^p + \sigma_2(d(\varsigma_n, \varsigma_{n+1}))^p + \sigma_3(d(\varsigma_{n-1}, \varsigma_n))^p + \sigma_4\left(\frac{d(\varsigma_{n-1}, \varsigma_{n+1}) + d(\varsigma_n, \varsigma_n)}{2}\right)^p\right]^{1/p} \\ &\le \left[\sigma_1(d(\varsigma_n, \varsigma_{n-1}))^p + \sigma_2(d(\varsigma_n, \varsigma_{n+1}))^p + \sigma_3(d(\varsigma_{n-1}, \varsigma_n))^p + \sigma_4\left(\frac{1}{2}[d(\varsigma_{n-1}, \varsigma_n) + d(\varsigma_n, \varsigma_{n+1})]\right)^p\right]^{1/p}. \end{split}$$

Suppose that *d*(*ς<sub>n</sub>*, *ς*<sub>*n*+1</sub>) ≥ *d*(*ς*<sub>*n*−1</sub>, *ς<sub>n</sub>*). With an elementary estimation on the right-hand side of (4), keeping ∑<sub>*i*=1</sub><sup>4</sup> *σ<sub>i</sub>* = 1 in mind, we find that

$$d(\varsigma_{n+1}, \varsigma_n) \le \Lambda \left( d(\varsigma_{n+1}, \varsigma_n) \sqrt[p]{\sum_{i=1}^4 \sigma_i} \right) = \Lambda \left( d(\varsigma_{n+1}, \varsigma_n) \right) < d(\varsigma_{n+1}, \varsigma_n), \tag{5}$$

a contradiction. Accordingly, we find that *d*(*ς<sub>n</sub>*, *ς*<sub>*n*+1</sub>) < *d*(*ς*<sub>*n*−1</sub>, *ς<sub>n</sub>*) and further

$$d(\varsigma_{n+1}, \varsigma_n) \le \Lambda \left( d(\varsigma_{n-1}, \varsigma_n) \right) < d(\varsigma_{n-1}, \varsigma_n). \tag{6}$$

Inductively, from the inequalities above, we deduce

$$d(\varsigma_{n+1}, \varsigma_n) \le \Lambda^n(d(\varsigma_1, \varsigma_0)), \text{ for all } n \in \mathbb{N}.\tag{7}$$

From (7) and the triangular inequality, for all *k* ≥ 1, we have

$$\begin{aligned} d(\varsigma_n, \varsigma_{n+k}) &\le d(\varsigma_n, \varsigma_{n+1}) + \ldots + d(\varsigma_{n+k-1}, \varsigma_{n+k}) \\ &\le \sum_{r=n}^{n+k-1} \Lambda^r(d(\varsigma_1, \varsigma_0)) \\ &\le \sum_{r=n}^{+\infty} \Lambda^r(d(\varsigma_1, \varsigma_0)) \rightarrow 0 \quad \text{as } n \rightarrow \infty. \end{aligned}$$

Thus, the constructed sequence {*ς<sub>n</sub>*} is Cauchy in (*X*, *d*). Taking the completeness of the metric space (*X*, *d*) into account, we conclude that there exists *ρ* ∈ *X* such that

$$\lim_{n \to \infty} d(\varsigma_n, \rho) = 0.\tag{8}$$

Now, we show that *ρ* is the desired fixed point of *T* under the given assumptions. Suppose that (C1) holds, that is, *T* is continuous. Then,

$$\rho = \lim\_{n \to \infty} \varsigma\_{n+1} = \lim\_{n \to \infty} T\varsigma\_n = T(\lim\_{n \to \infty} \varsigma\_n) = T\rho.$$

Now, we suppose that (C2) holds, that is, *σ*<sub>2</sub><sup>1/*p*</sup> + (*σ*<sub>4</sub>/2)<sup>1/*p*</sup> < 1. Then,

$$\begin{split} 0 < d(T\rho,\rho) &\le d(T\rho,\varsigma_{n+1}) + d(\varsigma_{n+1},\rho) \\ &= d(T\rho,T\varsigma_n) + d(\varsigma_{n+1},\rho) \\ &\le \Lambda\left(\mathcal{A}_T^p(\rho,\varsigma_n)\right) + d(\varsigma_{n+1},\rho) \\ &< \mathcal{A}_T^p(\rho,\varsigma_n) + d(\varsigma_{n+1},\rho), \end{split} \tag{9}$$

where

$$\mathcal{A}_T^p(\rho,\varsigma_n) = \left[\sigma_1(d(\rho,\varsigma_n))^p + \sigma_2(d(\rho,T\rho))^p + \sigma_3(d(\varsigma_n,\varsigma_{n+1}))^p + \sigma_4\left(\frac{d(\varsigma_n,T\rho) + d(\rho,\varsigma_{n+1})}{2}\right)^p\right]^{1/p}.$$

As *n* → ∞, we have

$$0 < d(T\rho, \rho) \le \Delta d(T\rho, \rho),$$

where Δ := *σ*<sub>2</sub><sup>1/*p*</sup> + (*σ*<sub>4</sub>/2)<sup>1/*p*</sup>. Since Δ < 1, this is a contradiction; that is, *Tρ* = *ρ*.

We skip the details of the case (C3), since its proof is verbatim that of the case (C2). Indeed, the only difference follows from the fact that A<sub>*T*</sub><sup>*p*</sup>(*ρ*, *ς<sub>n</sub>*) ≠ A<sub>*T*</sub><sup>*p*</sup>(*ς<sub>n</sub>*, *ρ*), since *σ*<sub>2</sub> need not be equal to *σ*<sub>3</sub>.

As a last step, we consider the case *p* = 0. Here, (18) and (3) become

$$d(T\Omega, T\omega) \le \Lambda \left( (d(\Omega, \omega))^{\sigma_1} (d(\Omega, T\Omega))^{\sigma_2} (d(\omega, T\omega))^{\sigma_3} \left[\frac{d(T\Omega, \omega) + d(\Omega, T\omega)}{2}\right]^{1 - \sigma_1 - \sigma_2 - \sigma_3} \right) \tag{10}$$

for all Ω, *ω* ∈ *X*\Fix(*T*), where *σ*<sub>1</sub>, *σ*<sub>2</sub>, *σ*<sub>3</sub> ∈ (0, 1). Setting Ω = *θ<sub>n</sub>* and *ω* = *θ*<sub>*n*−1</sub> in the inequality (10), we find that

$$\begin{split} d(\theta_{n+1},\theta_n) = d(T\theta_n,T\theta_{n-1}) &\le \Lambda \left( [d(\theta_n,\theta_{n-1})]^{\sigma_1} \cdot [d(\theta_n,T\theta_n)]^{\sigma_2} \cdot [d(\theta_{n-1},T\theta_{n-1})]^{\sigma_3} \cdot \left[ \frac{1}{2} \left( d(\theta_n,\theta_n) + d(\theta_{n-1},\theta_{n+1}) \right) \right]^{1-\sigma_1-\sigma_2-\sigma_3} \right) \\ &\le \Lambda \left( [d(\theta_n,\theta_{n-1})]^{\sigma_1} \cdot [d(\theta_n,\theta_{n+1})]^{\sigma_2} \cdot [d(\theta_{n-1},\theta_n)]^{\sigma_3} \cdot \left[ \frac{1}{2} \left( d(\theta_{n-1},\theta_n) + d(\theta_n,\theta_{n+1}) \right) \right]^{1-\sigma_1-\sigma_2-\sigma_3} \right). \end{split} \tag{11}$$

Suppose that *d* (*θn*−1, *θn*) < *d* (*θn*, *θn*+1) for some *n* ≥ 1. Thus,

$$\frac{1}{2}(d(\theta_{n-1}, \theta_n) + d(\theta_n, \theta_{n+1})) \le d(\theta_n, \theta_{n+1}).$$

Consequently, inequality (11) yields that

$$[d(\theta_n,\theta_{n+1})]^{\sigma_1+\sigma_3} \le \Lambda \left( [d(\theta_{n-1},\theta_n)]^{\sigma_1+\sigma_3} \right) < [d(\theta_{n-1},\theta_n)]^{\sigma_1+\sigma_3}.\tag{12}$$

Thus, we conclude that *d*(*θ<sub>n</sub>*, *θ*<sub>*n*+1</sub>) < *d*(*θ*<sub>*n*−1</sub>, *θ<sub>n</sub>*), which contradicts our assumption. Thus, we have

$$d(\theta_n, \theta_{n+1}) \le d(\theta_{n-1}, \theta_n) \quad \text{for all } n \ge 1.$$

Hence, {*d* (*θn*−1, *θn*)} is a non-increasing sequence with positive terms. On account of the simple observation below,

$$\frac{1}{2}(d(\theta_{n-1}, \theta_n) + d(\theta_n, \theta_{n+1})) \le d(\theta_{n-1}, \theta_n), \quad \text{for all } n \ge 1,$$

together with an elementary simplification, the inequality (11) implies that

$$d(\theta_n, \theta_{n+1}) \le \Lambda\left(d(\theta_{n-1}, \theta_n)\right) < d(\theta_{n-1}, \theta_n) \tag{13}$$

for all *n* ∈ ℕ. Since the inequality (13) is equivalent to (6), by following the corresponding lines, we derive that the iterated sequence {*θ<sub>n</sub>*} is Cauchy and converges to some *θ*<sup>∗</sup> ∈ *X*, that is, lim<sub>*n*→∞</sub> *d*(*θ<sub>n</sub>*, *θ*<sup>∗</sup>) = 0. Suppose that *θ*<sup>∗</sup> ≠ *Tθ*<sup>∗</sup>. Since *θ*<sub>*n*+1</sub> = *Tθ<sub>n</sub>* for each *n* ≥ 0, by letting *x* = *θ<sub>n</sub>* and *y* = *θ*<sup>∗</sup> in (18), we have

$$\begin{split} d(\theta_{n+1},T\theta^*) = d(T\theta_n,T\theta^*) \le \Lambda \Big( & [d(\theta_n,\theta^*)]^{\sigma_1} \cdot [d(\theta_n,T\theta_n)]^{\sigma_2} \cdot [d(\theta^*,T\theta^*)]^{\sigma_3} \\ & \cdot \left[\frac{1}{2}(d(\theta_{n+1},\theta^*) + d(\theta_n,T\theta^*))\right]^{1-\sigma_1-\sigma_2-\sigma_3} \Big). \end{split} \tag{14}$$

Letting *n* → ∞ in the inequality (14), we get *d*(*θ*<sup>∗</sup>, *Tθ*<sup>∗</sup>) = 0, which is a contradiction. That is, *Tθ*<sup>∗</sup> = *θ*<sup>∗</sup>.
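The Picard scheme underlying the proof can be illustrated numerically. The sketch below iterates an elementary Banach contraction on the real line (an assumed toy example, not the general hybrid mapping of the theorem):

```python
def picard(T, x0, tol=1e-12, max_iter=10_000):
    """Iterate x_{n+1} = T(x_n) until successive terms are within tol."""
    x = x0
    for _ in range(max_iter):
        x_next = T(x)
        if abs(x_next - x) <= tol:
            return x_next
        x = x_next
    raise RuntimeError("Picard iteration did not converge")

# Illustrative Banach contraction on R: T(x) = x/2 + 1, fixed point ρ = 2.
rho = picard(lambda x: x / 2 + 1, x0=0.0)
```

The iterates form a Cauchy sequence exactly as in the proof, and the stopping rule mirrors the estimate ∑<sub>*r*≥*n*</sub> Λ<sup>*r*</sup>(*d*(*ς*<sub>1</sub>, *ς*<sub>0</sub>)) → 0.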

**Corollary 1.** *Let T be a self-mapping on* (*X*, *d*)*. Suppose that there is κ* ∈ [0, 1) *such that*

$$d(T\Omega, T\omega) \le \kappa \mathcal{A}\_T^p(\Omega, \omega),\tag{15}$$

*where p* ≥ 0*. Then, there is a fixed point ρ of T if either (i) T is continuous at ρ; (ii) or,* *σ*<sub>2</sub><sup>1/*p*</sup> + (*σ*<sub>4</sub>/2)<sup>1/*p*</sup> < 1*; (iii) or,* *σ*<sub>3</sub><sup>1/*p*</sup> + (*σ*<sub>4</sub>/2)<sup>1/*p*</sup> < 1 *(as in Theorem 1).*


**Definition 2.** *A self-mapping T on* (*X*, *d*) *is called a hybrid contraction of type B if there is* Λ ∈ Φ *such that*

$$d(T\Omega, T\omega) \le \Lambda \left(\mathcal{W}\_T^p(\Omega, \omega)\right),\tag{16}$$

*where p* ≥ 0*, a* = (*σ*<sub>1</sub>, *σ*<sub>2</sub>, *σ*<sub>3</sub>)*, σ<sub>i</sub>* ≥ 0*, i* = 1, 2, 3*, such that σ*<sub>1</sub> + *σ*<sub>2</sub> + *σ*<sub>3</sub> = 1 *and*

$$\mathcal{W}_T^p(\Omega,\omega) = \begin{cases} \left[\sigma_1(d(\Omega,\omega))^p + \sigma_2(d(\Omega,T\Omega))^p + \sigma_3(d(\omega,T\omega))^p\right]^{1/p}, & p > 0,\ \Omega, \omega \in X, \\ (d(\Omega,\omega))^{\sigma_1}(d(\Omega,T\Omega))^{\sigma_2}(d(\omega,T\omega))^{\sigma_3}, & p = 0,\ \Omega, \omega \in X \setminus \mathrm{Fix}(T). \end{cases} \tag{17}$$

Notice that a hybrid contraction of type *A* and a hybrid contraction of type *B* are also called a weighted contraction of type *A* and type *B*, respectively.

As corollaries of Theorem 1, we also have the following.

**Corollary 2.** *Let T be a self-mapping on* (*X*, *d*)*. Suppose that either T is a hybrid contraction of type B, or there is κ* ∈ [0, 1) *so that*

$$d(T\Omega, T\omega) \le \kappa \mathcal{W}\_T^p(\Omega, \omega),\tag{18}$$

*where p* ≥ 0*. Then, there is a fixed point ρ of T if either*

*(i) T is continuous at such point ρ; (ii) or, σ*<sup>2</sup> < 1*; (iii) or, σ*<sup>3</sup> < 1.

**Corollary 3.** *Let T be a self-mapping on* (*X*, *d*)*. Suppose that:*

$$d(T\Omega, T\omega) \le \kappa d^{\sigma_1}(\Omega, \omega) \cdot d^{\sigma_2}(\Omega, T\Omega) \cdot d^{\sigma_3}(\omega, T\omega),\tag{19}$$

*for all* Ω, *ω* ∈ *X*\Fix(*T*)*, where κ* ∈ [0, 1)*, σ*<sub>1</sub>, *σ*<sub>2</sub>, *σ*<sub>3</sub> ≥ 0 *and σ*<sub>1</sub> + *σ*<sub>2</sub> + *σ*<sub>3</sub> = 1*. Then, there is a fixed point ρ of T.*

**Proof.** In Corollary 2, put *p* = 0 and *a* = (*σ*<sub>1</sub>, *σ*<sub>2</sub>, *σ*<sub>3</sub>).

**Remark 1.** *Using Corollary 3, we get Theorem 2 in [7] (for metric spaces).*

**Corollary 4.** *Let T be a self-mapping on* (*X*, *d*) *such that*

$$d(T\Omega, T\omega) \le \kappa \sqrt[3]{d(\Omega, \omega) \cdot d(\Omega, T\Omega) \cdot d(\omega, T\omega)},\tag{20}$$

*for all* Ω, *ω* ∈ *X*\Fix(*T*)*, where κ* ∈ [0, 1)*. Then, there is a fixed point ρ of T.*

**Proof.** In Corollary 2, put *p* = 0 and *a* = (1/3, 1/3, 1/3).

**Corollary 5.** *Let T be a self-mapping on* (*X*, *d*) *such that*

$$d(T\Omega, T\omega) \le \frac{\kappa}{3} [d(\Omega, \omega) + d(\Omega, T\Omega) + d(\omega, T\omega)],\tag{21}$$

*for all* Ω, *ω* ∈ *X, where κ* ∈ [0, 1)*.*

*Then, there is a fixed point ρ of T.*


**Proof.** In Corollary 2, put *p* = 1 and *a* = (1/3, 1/3, 1/3).

**Corollary 6.** *Let T be a self-mapping on* (*X*, *d*) *such that*

$$d(T\Omega, T\omega) \le \frac{\kappa}{\sqrt{3}} [d^2(\Omega, \omega) + d^2(\Omega, T\Omega) + d^2(\omega, T\omega)]^{1/2},\tag{22}$$

*for all* Ω, *ω* ∈ *X, where κ* ∈ [0, 1)*. Then, T has a fixed point ρ in X, and the sequence* {*T<sup>n</sup>ς*<sub>0</sub>} *converges to ρ.*


**Proof.** In Corollary 2, put *p* = 2 and *a* = (1/3, 1/3, 1/3).

Corollary 2 is illustrated by the following.

**Example 1.** *Choose X* = {*τ*1, *τ*2, *τ*3, *τ*4} ∪ [0, ∞) *(where τ*1, *τ*2, *τ*<sup>3</sup> *and τ*<sup>4</sup> *are negative reals). Take*



$$\text{Consider } T: \begin{pmatrix} \tau\_1 & \tau\_2 & \tau\_3 & \tau\_4\\ \tau\_3 & \tau\_4 & \tau\_3 & \tau\_4 \end{pmatrix} \text{ and } T\Omega = \frac{\Omega}{8} \text{ for } \Omega \in [0, \infty).$$

*For* Ω ∈ [0, ∞)*, the main theorem is satisfied straightforwardly. Thus, we examine the case* Ω ∈ {*τ*<sub>1</sub>, *τ*<sub>2</sub>, *τ*<sub>3</sub>, *τ*<sub>4</sub>}*. Note that there is no κ* ∈ [0, 1) *such that*

$$d(T\tau_1, T\tau_2) \le \frac{\kappa}{3} \left[ d(\tau_1, \tau_2) + d(\tau_1, T\tau_1) + d(\tau_2, T\tau_2) \right],$$

*namely, we have,*

$$2 \le \frac{\kappa}{3} \left[ 1 + 2 + 3 \right].$$

*Thus, Corollary 5 is not applicable.*

*Using (20), we have*

$$d(T\tau\_1, T\tau\_2) \le \kappa \sqrt[3]{d(\tau\_1, \tau\_2) \cdot d(\tau\_1, T\tau\_1) \cdot d(\tau\_2, T\tau\_2)}.$$

*i.e.,* 2 ≤ *κ*·6<sup>1/3</sup>*, so κ* ≥ 2/6<sup>1/3</sup> > 1*. Hence, Corollary 4 is not applicable. Corollary 6 is applicable: in fact, for* Ω, *ω* ∈ *X, we have, for κ* = (6/7)<sup>1/2</sup>*,*

$$d(T\Omega, T\omega) \le \frac{\kappa}{\sqrt{3}} [d^2(\Omega, \omega) + d^2(\Omega, T\Omega) + d^2(\omega, T\omega)]^{1/2}.$$

*Here,* {0, *τ*3, *τ*4} *is the set of fixed points of T.*
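The arithmetic in Example 1 can be checked directly. The distance values below are the ones implied by the displayed inequalities for the pair (*τ*<sub>1</sub>, *τ*<sub>2</sub>); the labels `arith`, `geom`, `quad` are ours:

```python
import math

# Distances implied by "2 ≤ (κ/3)[1 + 2 + 3]" for the pair (τ1, τ2):
d12, d1T1, d2T2 = 1.0, 2.0, 3.0   # d(τ1,τ2), d(τ1,Tτ1), d(τ2,Tτ2)
lhs = 2.0                          # d(Tτ1, Tτ2) = d(τ3, τ4)

# Corollary 5 needs lhs ≤ (κ/3)(1 + 2 + 3) = 2κ, forcing κ ≥ 1.
arith = (d12 + d1T1 + d2T2) / 3
# Corollary 4 needs lhs ≤ κ(1·2·3)^{1/3}, forcing κ ≥ 2/6^{1/3} > 1.
geom = (d12 * d1T1 * d2T2) ** (1 / 3)
# Corollary 6 with κ = sqrt(6/7) < 1 works: (κ/√3)(1 + 4 + 9)^{1/2} = lhs.
kappa = math.sqrt(6 / 7)
quad = kappa / math.sqrt(3) * math.sqrt(d12**2 + d1T1**2 + d2T2**2)
```

Here `quad` evaluates to exactly 2, so the quadratic-mean bound of Corollary 6 holds with equality at this pair while the arithmetic and geometric bounds fail.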

#### **3. Application on Volterra Fractional Integral Equations**

The fractional Schrödinger equation (FSE) is known as the fundamental equation of fractional quantum mechanics. As compared to the standard Schrödinger equation, it contains the fractional Laplacian operator instead of the usual one. This change brings profound differences in the behavior of the wave function. Zhang et al. [12] investigated analytically and numerically the propagation of optical beams in the FSE with a harmonic potential. In addition, Zhang et al. [13] suggested a real physical system (the honeycomb lattice) as a possible realization of the FSE system, through utilization of the Dirac–Weyl equation, while Zhang et al. [14] investigated the dynamics of waves in the FSE with a PT-symmetric potential. Still in fractional calculus, in this section, we study a nonlinear Volterra fractional integral equation.

Set 0 < *τ* < 1 and *J* = [*σ*0, *σ*<sup>0</sup> + *a*] in R (*a* > 0). Denote by *X* = *C*(*J*, R) the set of continuous real-valued functions on *J*.

Now, particularly, we consider the following nonlinear Volterra fractional integral equation (in short, VFIE)

$$\mathcal{J}(t) = \mathcal{F}(t) + \frac{1}{\Gamma(\tau)} \int\_{\sigma\_0}^t (t - s)^{\tau - 1} h(s, \mathcal{J}(s)) ds,\tag{23}$$

for all *t* ∈ *J*, where Γ is the gamma function, F : *J* → R and *h* : *J* × R → R are continuous functions. The VFIE (23) has been investigated in the literature on fractional calculus and its applications, see [15–17].

In the following result, under some assumptions, we ensure the existence of a solution for the VFIE (23).

#### **Theorem 2.** *Suppose that*

*(H1) There are constants M* > 0 *and N* > 0 *such that*

$$|h(t,\boldsymbol{u}) - h(t,\boldsymbol{v})| \le \frac{M|\boldsymbol{u} - \boldsymbol{v}|}{N + |\boldsymbol{u} - \boldsymbol{v}|}\tag{24}$$

*for all u*, *v* ∈ R*;*

*(H2) Such M and N verify that*

$$\frac{Ma}{\Gamma(\tau+1)} \le N. \tag{25}$$

*Then, the VFIE (23) has a solution in X.*

**Proof.** For *ξ*, *η* ∈ *X*, consider the metric

$$d(\xi, \eta) = \sup_{t \in J} |\xi(t) - \eta(t)| =: \|\xi - \eta\|.$$

Take the operator

$$T\xi(t) = \mathcal{F}(t) + \frac{1}{\Gamma(\tau)} \int_{\sigma_0}^t (t - s)^{\tau - 1} h(s, \xi(s)) ds, \quad t \in J. \tag{26}$$

Clearly, *T* is well defined. Let *ξ*, *η* ∈ *X*, then for each *t* ∈ *J*,

$$\begin{split} |T\xi(t) - T\eta(t)| &= \left|\frac{1}{\Gamma(\tau)} \int_{\sigma_0}^t (t - s)^{\tau - 1} (h(s, \xi(s)) - h(s, \eta(s))) ds\right| \\ &\le \frac{1}{\Gamma(\tau)} \int_{\sigma_0}^t (t - s)^{\tau - 1} |h(s, \xi(s)) - h(s, \eta(s))| ds \\ &\le \frac{1}{\Gamma(\tau)} \int_{\sigma_0}^t (t - s)^{\tau - 1} \frac{M|\xi(s) - \eta(s)|}{N + |\xi(s) - \eta(s)|} ds \\ &\le \frac{Ma}{\Gamma(\tau + 1)} \frac{\|\xi - \eta\|}{N + \|\xi - \eta\|}. \end{split}$$

We deduce that

$$\|T\xi - T\eta\| \le \frac{Ma}{\Gamma(\tau + 1)} \frac{\|\xi - \eta\|}{N + \|\xi - \eta\|} = \Lambda(\|\xi - \eta\|),\tag{27}$$

where Λ(*t*) = (*Ma*/Γ(*τ* + 1)) · *t*/(*N* + *t*) for *t* ≥ 0. By hypothesis (*H*2), Λ ∈ Φ. Then,

$$d(T\xi, T\eta) \le \Lambda \left(\mathcal{A}_T^p(\xi, \eta)\right),\tag{28}$$

for *p* > 0, with *σ*<sub>2</sub> = *σ*<sub>3</sub> = *σ*<sub>4</sub> = 0 and *σ*<sub>1</sub> = 1. Applying Theorem 1, *T* has a fixed point in *X*, so the VFIE (23) has a solution in *X*.
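Under these hypotheses, the solution can be approximated by Picard iteration on the operator (26). The following discretization is an illustrative sketch only; the grid scheme, the data F and h, and the parameter values are our assumptions, not taken from the paper:

```python
import math

def solve_vfie(F, h, tau, s0, a, n=200, iters=30):
    """Picard iteration for the VFIE (23), discretized on a uniform grid.

    The weakly singular kernel (t - s)^{τ-1} is integrated exactly on each
    subinterval while h(s, J(s)) is frozen at the left node (product rule).
    """
    dt = a / n
    ts = [s0 + i * dt for i in range(n + 1)]
    J = [F(t) for t in ts]                      # initial guess J_0 = F
    for _ in range(iters):
        J_new = []
        for i, t in enumerate(ts):
            acc = 0.0
            for j in range(i):
                # exact integral of (t - s)^{τ-1} over [ts[j], ts[j+1]]
                w = ((t - ts[j]) ** tau - (t - ts[j + 1]) ** tau) / tau
                acc += w * h(ts[j], J[j])
            J_new.append(F(t) + acc / math.gamma(tau))
        J = J_new
    return ts, J

# Illustrative data: h(t, u) = u / (2(1 + u)) satisfies (H1) with M = 1/2,
# N = 1 on the non-negative iterates arising here, and (H2) holds since
# (1/2) * 1 / Γ(3/2) ≈ 0.564 ≤ 1.
ts, J = solve_vfie(F=lambda t: 1.0, h=lambda t, u: u / (2 * (1 + u)),
                   tau=0.5, s0=0.0, a=1.0)
```

Since the contraction factor in (27) is below one, the discrete Picard iterates settle quickly; at *t* = *σ*<sub>0</sub> the integral term vanishes and the solution equals F.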

#### **4. Conclusions**

The obtained results unify several existing results in a single theorem. We have listed some of the consequences, but it is clear that there are more consequences of our main results; for reasons of length, we omit them.

**Author Contributions:** B.A. analyzed and prepared the manuscript, H.A. analyzed and prepared/edited the manuscript, E.K. analyzed and prepared/edited the manuscript, V.R. analyzed and prepared the manuscript. All authors read and approved the final manuscript.

**Funding:** We declare that funding is not applicable for our paper.

**Acknowledgments:** The authors are grateful to the handling editor and reviewers for their careful reviews and useful comments. The authors would like to extend their sincere appreciation to the Deanship of Scientific Research at King Saud University for funding this group No. RG-1437-017.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


c 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## *Article* **A General Inertial Viscosity Type Method for Nonexpansive Mappings and Its Applications in Signal Processing**

#### **Yinglin Luo 1, Meijuan Shang 2,\* and Bing Tan <sup>1</sup>**


Received: 9 January 2020; Accepted: 7 February 2020; Published: 20 February 2020

**Abstract:** In this paper, we propose viscosity algorithms with two different inertia parameters for solving fixed points of nonexpansive and strictly pseudocontractive mappings. Strong convergence theorems are obtained in Hilbert spaces and applications to signal processing are considered. Moreover, some numerical experiments on the proposed algorithms and comparisons with existing algorithms are given to demonstrate the efficiency of the proposed algorithms. The numerical results show that our algorithms are superior to some related algorithms.

**Keywords:** nonexpansive mapping; strict pseudo-contraction; variational inequality problem; inclusion problem; signal processing

**MSC:** 49J40; 47H05; 90C52; 47J20; 47H09

#### **1. Introduction**

In this paper, *H* denotes a real Hilbert space with inner product ⟨·, ·⟩ and norm ‖·‖. We denote the set of fixed points of an operator *T* by Fix(*T*); more precisely, Fix(*T*) := {*x* ∈ *H* : *Tx* = *x*}.

Recall that a mapping *T* : *H* → *H* is said to be an *η*-strict pseudo-contraction if ‖*Tx* − *Ty*‖² − *η*‖(*I* − *T*)*x* − (*I* − *T*)*y*‖² ≤ ‖*x* − *y*‖², ∀*x*, *y* ∈ *H*, where *η* ∈ [0, 1) is a real number. A mapping *T* : *H* → *H* is said to be nonexpansive if ‖*Tx* − *Ty*‖ ≤ ‖*x* − *y*‖, ∀*x*, *y* ∈ *H*. It is evident that the class of *η*-strict pseudo-contractions includes the class of nonexpansive mappings, as *T* is nonexpansive if and only if *T* is a 0-strict pseudo-contraction. Many classical mathematical problems can be cast as fixed-point problems for nonexpansive mappings, such as the inclusion problem, equilibrium problem, variational inequality problem, saddle point problem, and split feasibility problem; see [1–3]. Approximating fixed points of nonexpansive mappings is an important field in many areas of pure and applied mathematics. One of the most well-known algorithms for solving such a problem is the Mann iterative algorithm [4]:

$$x_{n+1} = (1 - \theta_n)Tx_n + \theta_n x_n,$$

where {*θ<sub>n</sub>*} is a sequence in (0, 1). One knows that the iterative sequence {*x<sub>n</sub>*} converges weakly to a fixed point of *T* provided that ∑<sub>*n*=0</sub><sup>∞</sup> *θ<sub>n</sub>*(1 − *θ<sub>n</sub>*) = +∞. This algorithm is slow in terms of convergence speed and, moreover, its convergence is only weak. To obtain more effective methods, many authors have done a lot of work in this area; see [5–8]. A mapping *f* : *H* → *H* is called a contraction if there exists a constant *τ* ∈ [0, 1) such that ‖*f*(*x*) − *f*(*y*)‖ ≤ *τ*‖*x* − *y*‖, ∀*x*, *y* ∈ *H*. One of the celebrated ways to study nonexpansive operators is to regularize them with a contractive operator, taking a convex combination of the contraction and the nonexpansive operator. The viscosity type method for nonexpansive mappings is defined as follows,

$$\mathbf{x}\_{n+1} = (1 - \alpha\_n) T \mathbf{x}\_n + \alpha\_n f(\mathbf{x}\_n),\tag{1}$$

where {*α<sub>n</sub>*} is a sequence in (0, 1), *T* is the nonexpansive operator, and *f* is the contractive operator. In this method, a special fixed point of the nonexpansive operator is obtained by regularizing the nonexpansive operator via the contraction. This method was proposed by Attouch [9] in 1996 and further promoted by Moudafi [10] in 2000. Motivated by Moudafi, Takahashi and Takahashi [11] introduced a strong convergence theorem by the viscosity type approximation method for finding the fixed point of nonexpansive mappings in Hilbert spaces. In 2019, Qin and Yao [12] introduced a viscosity iterative method for solving a split feasibility problem. For viscosity approximation methods, one refers to [13,14]. In practical applications, one not only studies different algorithms, but also pursues the speed of these algorithms. To obtain faster convergence algorithms, many scholars have given various acceleration techniques; see, e.g., [15–19]. One of the most commonly used methods is the inertial method. In [20], Polyak introduced an inertial extrapolation based on the heavy ball method for solving the smooth convex minimization problem. Shehu et al. [21] introduced a Halpern-type algorithm with inertial terms for approximating fixed points of a nonexpansive mapping. They obtained strong convergence in real Hilbert spaces under some assumptions on the sequence of parameters. To get a more general inertial Mann algorithm for nonexpansive mappings, Dong et al. [22] introduced a general inertial Mann algorithm which includes some classical algorithms as its special cases; however, they only obtained weak convergence results.
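The viscosity iteration (1) can be sketched as follows; the mapping T, the contraction f, and the step sizes below are illustrative choices of ours, not taken from the paper:

```python
def viscosity(T, f, x0, alphas, iters=2000):
    """Viscosity iteration (1): x_{n+1} = (1 - α_n) T x_n + α_n f(x_n)."""
    x = x0
    for n in range(iters):
        a = alphas(n)
        x = (1 - a) * T(x) + a * f(x)
    return x

# Illustrative: T(x) = -x is nonexpansive on R with Fix(T) = {0}; the
# contraction f(x) = x/4 + 1 regularizes the iteration, and with the
# diminishing, non-summable steps α_n = 1/(n + 2) the iterates approach
# the fixed point 0.
x_star = viscosity(T=lambda x: -x, f=lambda x: x / 4 + 1,
                   x0=5.0, alphas=lambda n: 1 / (n + 2))
```

Note how the contraction damps the oscillation of the nonexpansive map T(x) = −x, which the plain Picard iteration would never overcome.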

Inspired by the above works, we give two algorithms for solving fixed point problems of nonexpansive mappings via viscosity and inertial techniques in this paper. One highlight is that our algorithms, which are more consistent and efficient, are accelerated via the inertial technique and the viscosity technique. In addition, the solution also uniquely solves a monotone variational inequality. Another highlight is that we consider two different inertial parameter sequences, in contrast with existing results. We establish strong convergence results in infinite dimensional Hilbert spaces without compactness assumptions. We also investigate the applications of the two proposed algorithms to variational inequality problems and inclusion problems. Furthermore, we give some numerical experiments to illustrate the convergence efficiency of our algorithms. The numerical experiments show that our algorithms are superior to some related algorithms.

In this paper, Section 2 is devoted to some required prior knowledge, which will be used throughout. In Section 3, based on the viscosity type method, we propose an algorithm for solving fixed point problems of nonexpansive mappings and give an algorithm for strict pseudo-contractive mappings. In Section 4, some applications of our algorithms in real Hilbert spaces are given. Finally, some numerical experiments on our algorithms and comparisons with other algorithms in signal processing are given in Section 5. Section 6 concludes the paper.

#### **2. Toolbox**

In this section, we give some essential lemmas for our main convergence theorems.

**Lemma 1** ([23])**.** *Let* {*a<sub>n</sub>*} *be a non-negative real sequence,* {*b<sub>n</sub>*} *a real sequence, and* {*α<sub>n</sub>*} *a real sequence in* (0, 1) *such that* ∑<sub>*n*=1</sub><sup>∞</sup> *α<sub>n</sub>* = ∞*. Assume that a*<sub>*n*+1</sub> ≤ *α<sub>n</sub>b<sub>n</sub>* + (1 − *α<sub>n</sub>*)*a<sub>n</sub>*, ∀*n* ≥ 1*. If, for every subsequence* {*a*<sub>*n<sub>k</sub>*</sub>} *of* {*a<sub>n</sub>*} *satisfying* lim inf<sub>*k*→∞</sub>(*a*<sub>*n<sub>k</sub>*+1</sub> − *a*<sub>*n<sub>k</sub>*</sub>) ≥ 0*,* lim sup<sub>*k*→∞</sub> *b*<sub>*n<sub>k</sub>*</sub> ≤ 0 *holds, then* lim<sub>*n*→∞</sub> *a<sub>n</sub>* = 0*.*

**Lemma 2** ([24])**.** *Suppose that T* : *H* → *H is a nonexpansive mapping. Let* {*x<sub>n</sub>*} *be a vector sequence in H and let p be a vector in H. If x<sub>n</sub>* ⇀ *p and* ‖*x<sub>n</sub>* − *Tx<sub>n</sub>*‖ → 0*, then p* ∈ Fix(*T*)*.*

**Lemma 3** ([14])**.** *Let* {*σ<sub>n</sub>*} *be a non-negative real sequence such that there exists a subsequence* {*σ*<sub>*n<sub>i</sub>*</sub>} *of* {*σ<sub>n</sub>*} *satisfying σ*<sub>*n<sub>i</sub>*</sub> < *σ*<sub>*n<sub>i</sub>*+1</sub> *for all i* ∈ ℕ*. Then, there exists a nondecreasing sequence* {*m<sub>k</sub>*} *of* ℕ *such that* lim<sub>*k*→∞</sub> *m<sub>k</sub>* = ∞ *and the following properties are satisfied for all (sufficiently large) k* ∈ ℕ*: σ*<sub>*m<sub>k</sub>*</sub> ≤ *σ*<sub>*m<sub>k</sub>*+1</sub> *and σ<sub>k</sub>* ≤ *σ*<sub>*m<sub>k</sub>*+1</sub>*.*

It is known that *mk* is the largest number in the set {1, 2, ··· , *k*} such that *σmk* < *σmk*+1.

**Lemma 4** ([25])**.** *Let* {*s<sub>n</sub>*} *be a sequence of non-negative real numbers such that s*<sub>*n*+1</sub> = (1 − *β<sub>n</sub>*)*s<sub>n</sub>* + *δ<sub>n</sub>*, ∀*n* ≥ 0*, where* {*β<sub>n</sub>*} *is a sequence in* (0, 1) *with* ∑<sub>*n*=0</sub><sup>∞</sup> *β<sub>n</sub>* = ∞ *and* {*δ<sub>n</sub>*} *satisfies* lim sup<sub>*n*→∞</sub> *δ<sub>n</sub>*/*β<sub>n</sub>* ≤ 0 *or* ∑<sub>*n*=0</sub><sup>∞</sup> |*δ<sub>n</sub>*| < ∞*. Then,* lim<sub>*n*→∞</sub> *s<sub>n</sub>* = 0*.*
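A quick numerical illustration of Lemma 4; the parameter choices *β<sub>n</sub>* = 1/(*n* + 1) and *δ<sub>n</sub>* = *β<sub>n</sub>*/(*n* + 1) below are our assumptions:

```python
# β_n = 1/(n+1) is non-summable, and δ_n = β_n/(n+1) gives
# δ_n / β_n = 1/(n+1) -> 0, so Lemma 4 predicts s_n -> 0.
s = 10.0
for n in range(1, 100_000):
    beta = 1.0 / (n + 1)
    s = (1 - beta) * s + beta / (n + 1)
```

After the loop, `s` has decayed from 10 to a value of order ln(*n*)/*n*, in line with the lemma's conclusion.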

#### **3. Main Results**

In this section, we give two strong convergence theorems for approximating the fixed points of nonexpansive mappings and strict pseudo-contractive mappings. First, we propose some assumptions which will be used in our statements.

**Condition 1.** *Suppose that* {*αn*}, {*βn*} *and* {*γn*} *are three real sequences in* (0, 1) *satisfying the following conditions.*


#### **Algorithm 1** The viscosity type algorithm for nonexpansive mappings

**Initialization:** Let *x*0, *x*<sup>1</sup> ∈ *H* be arbitrary.

**Iterative Steps**: Given the current iterate *x<sub>n</sub>*, calculate *x*<sub>*n*+1</sub> as follows: **Step 1.** Compute

$$\begin{cases} y_n = \theta_n (x_n - x_{n-1}) + x_n, \\ z_n = \varepsilon_n (x_n - x_{n-1}) + x_n. \end{cases} \tag{2}$$

**Step 2.** Compute

$$x_{n+1} = \alpha_n f(x_n) + \beta_n y_n + \gamma_n T z_n. \tag{3}$$

**Step 3.** Set *n* ← *n* + 1 and go to **Step 1**.

**Remark 2.** *Part (2) of Condition 1 is well defined, as the inertial parameters θ<sub>n</sub> and ε<sub>n</sub> in* (2) *can be chosen such that* 0 ≤ *θ<sub>n</sub>* ≤ *θ*<sub>*n*</sub><sup>∗</sup> *and* 0 ≤ *ε<sub>n</sub>* ≤ *ε*<sub>*n*</sub><sup>∗</sup>*, where*

$$\theta_n^* = \begin{cases} \min\left\{\theta, \frac{\delta_n}{\|x_n - x_{n-1}\|}\right\}, & x_n \neq x_{n-1},\\ \theta, & \text{otherwise}, \end{cases} \quad \varepsilon_n^* = \begin{cases} \min\left\{\varepsilon, \frac{\delta_n}{\|x_n - x_{n-1}\|}\right\}, & x_n \neq x_{n-1},\\ \varepsilon, & \text{otherwise}, \end{cases} \tag{4}$$

*and* {*δ<sub>n</sub>*} *is a positive sequence such that* lim<sub>*n*→∞</sub> *δ<sub>n</sub>*/*α<sub>n</sub>* = 0*. It is easy to verify that* lim<sub>*n*→∞</sub> *θ<sub>n</sub>*‖*x<sub>n</sub>* − *x*<sub>*n*−1</sub>‖ = 0 *and* lim<sub>*n*→∞</sub> (*θ<sub>n</sub>*/*α<sub>n</sub>*)‖*x<sub>n</sub>* − *x*<sub>*n*−1</sub>‖ = 0*.*
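Algorithm 1 with the inertial caps of Remark 2 can be sketched as follows; the mapping T, the contraction f, and the parameter sequences are illustrative assumptions of ours, shown on the real line for simplicity:

```python
def algorithm1(T, f, x0, x1, alpha, beta, gamma, theta=0.5, eps=0.5,
               delta=lambda n: 1.0 / (n + 1) ** 2, iters=500):
    """Sketch of Algorithm 1 on R with the parameter caps of Remark 2.

    alpha, beta, gamma: callables n -> α_n, β_n, γ_n with α_n + β_n + γ_n = 1;
    theta and eps are the upper inertial constants θ and ε of (4).
    """
    x_prev, x = x0, x1
    for n in range(1, iters + 1):
        diff = abs(x - x_prev)
        cap = delta(n) / diff if diff > 0 else float("inf")
        th, ep = min(theta, cap), min(eps, cap)   # θ_n ≤ θ_n*, ε_n ≤ ε_n*
        y = x + th * (x - x_prev)                 # (2): first inertial point
        z = x + ep * (x - x_prev)                 # (2): second inertial point
        x_prev, x = x, alpha(n) * f(x) + beta(n) * y + gamma(n) * T(z)  # (3)
    return x

# Illustrative: T(x) = -x is nonexpansive with Fix(T) = {0}; f(x) = x/4 + 1
# is a 1/4-contraction; α_n = 1/(n+1) is diminishing and non-summable.
p = algorithm1(T=lambda x: -x, f=lambda x: x / 4 + 1, x0=3.0, x1=2.0,
               alpha=lambda n: 1 / (n + 1),
               beta=lambda n: 0.6 * (1 - 1 / (n + 1)),
               gamma=lambda n: 0.4 * (1 - 1 / (n + 1)))
```

The cap δ<sub>*n*</sub>/‖*x<sub>n</sub>* − *x*<sub>*n*−1</sub>‖ guarantees θ<sub>*n*</sub>‖*x<sub>n</sub>* − *x*<sub>*n*−1</sub>‖ ≤ δ<sub>*n*</sub> → 0, which is exactly what Remark 2 requires.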

**Theorem 1.** *Let T* : *H* → *H be a nonexpansive mapping with* Fix(*T*) ≠ ∅ *and let f* : *H* → *H be a contraction with constant k* ∈ [0, 1)*. Suppose that* {*x<sub>n</sub>*} *is any sequence generated by Algorithm 1 and Condition 1 holds. Then,* {*x<sub>n</sub>*} *converges strongly to p* = *P*<sub>Fix(*T*)</sub> ◦ *f*(*p*)*.*

**Proof.** The proof is divided into three steps.

**Step 1**. One claims that {*xn*} is bounded.

Let *p* ∈ Fix(*T*). As *yn* = *θn*(*xn* − *xn*−1) + *xn*, one concludes

$$||y\_n - p|| \le \theta\_n ||\mathbf{x}\_n - \mathbf{x}\_{n-1}|| + ||\mathbf{x}\_n - p||.\tag{5}$$

Similarly, one gets

$$\|z\_n - p\| \le \|\mathbf{x}\_n - p\| + \varepsilon\_n \|\mathbf{x}\_n - \mathbf{x}\_{n-1}\|.\tag{6}$$

From (3), one obtains

$$\begin{split} \|x_{n+1} - p\| &\le \gamma_n \|Tz_n - p\| + \beta_n \|y_n - p\| + \alpha_n \|f(x_n) - p\| \\ &\le \gamma_n \|z_n - p\| + \beta_n \|y_n - p\| + \alpha_n \|f(x_n) - f(p)\| + \alpha_n \|f(p) - p\| \\ &\le (1 - \alpha_n(1 - k)) \|x_n - p\| \\ &\quad + \alpha_n(1 - k) \frac{\|f(p) - p\| + \beta_n \frac{\theta_n}{\alpha_n} \|x_n - x_{n-1}\| + \gamma_n \frac{\varepsilon_n}{\alpha_n} \|x_n - x_{n-1}\|}{1 - k}. \end{split} \tag{7}$$

In view of Condition 1 (2), one sees that sup<sub>*n*≥1</sub> (*θ<sub>n</sub>*/*α<sub>n</sub>*)‖*x<sub>n</sub>* − *x*<sub>*n*−1</sub>‖ and sup<sub>*n*≥1</sub> (*ε<sub>n</sub>*/*α<sub>n</sub>*)‖*x<sub>n</sub>* − *x*<sub>*n*−1</sub>‖ exist. Taking *M* := 3 max{‖*f*(*p*) − *p*‖, sup<sub>*n*≥1</sub> (*θ<sub>n</sub>*/*α<sub>n</sub>*)‖*x<sub>n</sub>* − *x*<sub>*n*−1</sub>‖, sup<sub>*n*≥1</sub> (*ε<sub>n</sub>*/*α<sub>n</sub>*)‖*x<sub>n</sub>* − *x*<sub>*n*−1</sub>‖}, one gets from (7) that

$$\begin{aligned} \|x_{n+1} - p\| &\le (1 - \alpha_n(1 - k)) \|x_n - p\| + \alpha_n(1 - k)M\\ &\le \max\{\|x_n - p\|, M\} \le \dots \le \max\{\|x_1 - p\|, M\}. \end{aligned}$$

This implies that {*xn*} is bounded.

**Step 2**. One claims that if $\{x_n\}$ converges weakly to $z \in H$, then $z \in \mathrm{Fix}(T)$. Letting $w_{n+1} = \alpha_n f(w_n) + \beta_n w_n + \gamma_n Tw_n$, from (1), one arrives at

$$\|w_n - y_n\| \le \theta_n \|x_n - x_{n-1}\| + \|w_n - x_n\|\tag{8}$$

and

$$\|w_n - z_n\| \le \epsilon_n \|x_n - x_{n-1}\| + \|w_n - x_n\|.\tag{9}$$

By the definition of *wn*+1, (8) and (9), one obtains

$$\begin{split} \|w_{n+1} - x_{n+1}\| &\le \alpha_n\|f(w_n) - f(x_n)\| + \beta_n\|w_n - y_n\| + \gamma_n\|Tw_n - Tz_n\| \\ &\le k\alpha_n\|w_n - x_n\| + \beta_n\|w_n - y_n\| + \gamma_n\|w_n - z_n\| \\ &\le (1 - \alpha_n(1-k))\|w_n - x_n\| + \left(\theta_n\|x_n - x_{n-1}\| + \epsilon_n\|x_n - x_{n-1}\|\right). \end{split} \tag{10}$$

From Condition 1 and Lemma 4, one sees that (10) implies $\lim_{n\to\infty} \|w_{n+1} - x_{n+1}\| = 0$. Therefore, it follows from Step 1 that $\{w_n\}$ is bounded. By the definition of $w_{n+1}$, one also obtains

$$\begin{split} \|w_{n+1} - p\|^2 &\le \|\alpha_n(f(w_n) - f(p)) + \beta_n(w_n - p) + \gamma_n(Tw_n - p)\|^2 + 2\alpha_n\langle f(p) - p, w_{n+1} - p\rangle \\ &\le \alpha_n k^2\|w_n - p\|^2 + \beta_n\|w_n - p\|^2 + \gamma_n\|Tw_n - p\|^2 - \beta_n\gamma_n\|w_n - Tw_n\|^2 \\ &\quad + 2\alpha_n\langle f(p) - p, w_{n+1} - p\rangle \\ &\le (1 - \alpha_n(1 - k^2))\|w_n - p\|^2 + 2\alpha_n\langle f(p) - p, w_{n+1} - p\rangle - \beta_n\gamma_n\|w_n - Tw_n\|^2. \end{split} \tag{11}$$

Taking $s_n = \|w_n - p\|^2$, one sees that (11) is equivalent to

$$s_{n+1} \le (1 - \alpha_n(1 - k^2))s_n - \beta_n\gamma_n\|w_n - Tw_n\|^2 + 2\alpha_n\langle f(p) - p, w_{n+1} - p\rangle. \tag{12}$$

Now, we show *z* ∈ Fix(*T*) by considering two possible cases on sequence {*sn*}.

Case 1. Suppose that there exists an $n_0 \in \mathbb{N}$ such that $s_{n+1} \le s_n$ for all $n \ge n_0$. This implies that $\lim_{n\to\infty} s_n$ exists. From (12), one has

$$\beta_n\gamma_n\|w_n - Tw_n\|^2 \le (1 - \alpha_n(1 - k^2))s_n + 2\alpha_n\langle f(p) - p, w_{n+1} - p\rangle - s_{n+1}. \tag{13}$$

As {*wn*} is bounded, from Condition 1 and (13), one deduces that

$$\lim_{n \to \infty} \beta_n\gamma_n\|w_n - Tw_n\|^2 = 0. \tag{14}$$

As lim inf*n*→<sup>∞</sup> *βnγ<sup>n</sup>* > 0, (14) implies that

$$\lim_{n \to \infty} \|w_n - Tw_n\|^2 = 0. \tag{15}$$

As $x_n \rightharpoonup z$ and $\lim_{n\to\infty}\|w_{n+1} - x_{n+1}\| = 0$, one has $w_n \rightharpoonup z$. By using Lemma 2, one gets $z \in \mathrm{Fix}(T)$.

Case 2. There exists a subsequence $\{s_{n_j}\}$ of $\{s_n\}$ such that $s_{n_j} < s_{n_j+1}$ for all $j \in \mathbb{N}$. In this case, it follows from Lemma 3 that there is a nondecreasing sequence $\{m_k\}$ of $\mathbb{N}$ such that $\lim_{k\to\infty} m_k = \infty$ and the following inequalities hold for all $k \in \mathbb{N}$:

$$s\_{m\_k} \le s\_{m\_k + 1} \quad \text{and} \quad s\_k \le s\_{m\_k + 1}. \tag{16}$$

Using a similar argument as in Case 1, it is easy to get that $\lim_{k\to\infty} \|Tw_{m_k} - w_{m_k}\| = 0$. It is known that $x_n \rightharpoonup z$, which implies $x_{m_k} \rightharpoonup z$. Therefore, $z \in \mathrm{Fix}(T)$.

**Step 3**. One claims that {*xn*} converges strongly to *p* = *P*Fix(*T*) ◦ *f*(*p*). From (11), we deduce that

$$\|w_{n+1} - p\|^2 \le (1 - \alpha_n(1 - k^2))\|w_n - p\|^2 + 2\alpha_n\langle f(p) - p, w_{n+1} - p\rangle. \tag{17}$$

In the following, we show that the sequence $\{\|w_n - p\|\}$ converges to zero. As $\{w_n\}$ is bounded, in view of Condition 1 and Lemma 1, we only need to show that, for each subsequence $\{\|w_{n_k} - p\|\}$ of $\{\|w_n - p\|\}$ such that $\liminf_{k\to\infty}(\|w_{n_k+1} - p\| - \|w_{n_k} - p\|) \ge 0$, one has $\limsup_{k\to\infty}\langle f(p) - p, w_{n_k+1} - p\rangle \le 0$. For this purpose, assume that $\{\|w_{n_k} - p\|\}$ is such a subsequence. This implies that

$$\begin{split} \liminf_{k \to \infty}\left(\|w_{n_k+1} - p\|^2 - \|w_{n_k} - p\|^2\right) &= \liminf_{k \to \infty}\big((\|w_{n_k+1} - p\| - \|w_{n_k} - p\|) \\ &\quad \times (\|w_{n_k+1} - p\| + \|w_{n_k} - p\|)\big) \ge 0. \end{split} \tag{18}$$

From the definition of *wn*, we obtain

$$\begin{split} \|w_{n_k+1} - w_{n_k}\| &\le \|\alpha_{n_k}(f(w_{n_k}) - w_{n_k}) + \gamma_{n_k}(Tw_{n_k} - w_{n_k})\| \\ &\le \alpha_{n_k}\|f(w_{n_k}) - w_{n_k}\| + \gamma_{n_k}\|Tw_{n_k} - w_{n_k}\| \\ &\le \alpha_{n_k}\left(k\|w_{n_k} - p\| + \|f(p) - w_{n_k}\|\right) + \gamma_{n_k}\|Tw_{n_k} - w_{n_k}\|. \end{split} \tag{19}$$

Using the argument of Case 1 and Case 2 in Step 2, there exists a subsequence of {*wnk*}, still denoted by {*wnk*}, such that

$$\lim\_{k \to \infty} ||Tw\_{n\_k} - w\_{n\_k}|| = 0. \tag{20}$$

By the boundedness of {*wn*}, one deduces from Condition 1, (19), and (20) that

$$\lim_{k \to \infty} \|w_{n_k+1} - w_{n_k}\| = 0. \tag{21}$$

As $\{w_{n_k}\}$ is bounded, there exists a subsequence $\{w_{n_{k_j}}\}$ of $\{w_{n_k}\}$ that converges weakly to some $z \in H$. This implies that

*Mathematics* **2020**, *8*, 288

$$\limsup\_{k \to \infty} \langle f(p) - p, w\_{n\_k} - p \rangle = \limsup\_{j \to \infty} \langle f(p) - p, w\_{n\_{k\_j}} - p \rangle = \langle f(p) - p, z - p \rangle.$$

From Step 2, one gets *z* ∈ Fix(*T*). Since *p* = *P*Fix(*T*) ◦ *f*(*p*), one arrives at

$$\limsup_{k \to \infty}\langle f(p) - p, w_{n_k} - p\rangle = \langle f(p) - p, z - p\rangle \le 0.$$

From (21), one obtains

$$\begin{split} \limsup\_{k \to \infty} \langle f(p) - p, w\_{n\_k + 1} - p \rangle &= \limsup\_{k \to \infty} \langle f(p) - p, w\_{n\_k} - p \rangle + \limsup\_{k \to \infty} \langle f(p) - p, w\_{n\_k + 1} - w\_{n\_k} \rangle \\ &= \langle f(p) - p, z - p \rangle \le 0. \end{split} \tag{22}$$

Therefore, one has $\|w_n - p\| \to 0$. Since $\lim_{n\to\infty} \|w_n - x_n\| = 0$, one gets $\|x_n - p\| \to 0$.

In the following, we give a strong convergence theorem for strict pseudo-contractions.

**Theorem 2.** *Let T* : *H* → *H be an η-strict pseudo-contraction with* Fix(*T*) ≠ ∅ *and let f* : *H* → *H be a contraction with constant k* ∈ [0, 1)*. Suppose that* {*xn*} *is a sequence generated by Algorithm 2 and Condition 1 holds. Then,* {*xn*} *converges strongly to p* = *P*Fix(*T*) ◦ *f*(*p*)*.*

**Algorithm 2** The viscosity type algorithm for strict pseudo-contractions

**Initialization:** Let $x_0, x_1 \in H$ be arbitrary and let $\delta \in [\eta, 1)$.

**Iterative Steps**: Given the current iterate $x_n$, calculate $x_{n+1}$ as follows.

**Step 1.** Compute

$$\begin{cases} y_n = \theta_n (x_n - x_{n-1}) + x_n, \\ z_n = \epsilon_n (x_n - x_{n-1}) + x_n. \end{cases} \tag{23}$$

**Step 2.** Compute

$$x_{n+1} = \alpha_n f(x_n) + \beta_n y_n + \gamma_n(\delta z_n + (1 - \delta)Tz_n). \tag{24}$$

**Step 3.** Set $n \leftarrow n+1$ and go to Step 1.

**Proof.** Define *Q* : *H* → *H* by *Qx* = *δx* + (1 − *δ*)*Tx*. It is easy to verify that Fix(*T*) = Fix(*Q*). By the definition of strict pseudo-contraction, one has

$$\begin{split} \|Qx - Qy\|^2 &= \delta\|x - y\|^2 + (1-\delta)\|Tx - Ty\|^2 - \delta(1-\delta)\|(x - y) - (Tx - Ty)\|^2 \\ &\le \delta\|x - y\|^2 + (1-\delta)\|x - y\|^2 + \eta(1-\delta)\|(x - y) - (Tx - Ty)\|^2 \\ &\quad - \delta(1-\delta)\|(x - y) - (Tx - Ty)\|^2 \\ &= \|x - y\|^2 - (\delta - \eta)(1-\delta)\|(x - y) - (Tx - Ty)\|^2 \\ &\le \|x - y\|^2. \end{split}$$

Therefore, *Q* is nonexpansive. Then, we get the conclusions from Theorem 1 immediately.

In the following, we give some corollaries for Theorem 1.

Recall that $T$ is called a $\rho$-averaged mapping if and only if it can be written as the average of the identity mapping $I$ and a nonexpansive mapping, that is, $T := (1-\rho)I + \rho S$, where $\rho \in (0, 1)$ and $S : H \to H$ is a nonexpansive mapping. It is known that every $\rho$-averaged mapping is nonexpansive and $\mathrm{Fix}(T) = \mathrm{Fix}(S)$. A mapping $T : H \to H$ is said to be quasi-nonexpansive if $\|Tx - Tp\| \le \|x - p\|$ for all $x \in H$ and all $p \in \mathrm{Fix}(T)$. $T$ is said to be strongly nonexpansive if $x_n - y_n - (Tx_n - Ty_n) \to 0$ whenever $\{x_n\}$ and $\{y_n\}$ are two sequences in $H$ such that $\{x_n - y_n\}$ is bounded and $\|x_n - y_n\| - \|Tx_n - Ty_n\| \to 0$. $T$ is said to be strongly quasi-nonexpansive if $T$ is quasi-nonexpansive and $x_n - Tx_n \to 0$ whenever $\{x_n\}$ is a bounded sequence in $H$ such that $\|x_n - p\| - \|Tx_n - Tp\| \to 0$ for all $p \in \mathrm{Fix}(T)$. By using Theorem 1, we obtain the following corollaries easily.

**Corollary 1.** *Let H be a Hilbert space and let f* : *H* → *H be a contraction with constant k* ∈ [0, 1)*. Let T* : *H* → *H be a ρ-averaged mapping with* Fix(*T*) ≠ ∅*. Suppose that Condition 1 holds. Then, the sequence* {*xn*} *generated by Algorithm 1 converges to p* = *P*Fix(*T*) ◦ *f*(*p*) *in norm.*

**Corollary 2.** *Let H be a Hilbert space and let f* : *H* → *H be a contraction with constant k* ∈ [0, 1)*. Let T* : *H* → *H be a quasi-nonexpansive mapping with* Fix(*T*) ≠ ∅ *such that I* − *T is demiclosed at the origin. Suppose that Condition 1 holds. Then, the sequence* {*xn*} *generated by Algorithm 1 converges to p* = *P*Fix(*T*) ◦ *f*(*p*) *in norm.*

**Corollary 3.** *Let H be a Hilbert space and let f* : *H* → *H be a contraction with constant k* ∈ [0, 1)*. Let T* : *H* → *H be a strongly quasi-nonexpansive mapping with* Fix(*T*) ≠ ∅ *such that I* − *T is demiclosed at the origin. Suppose that Condition 1 holds. Then, the sequence* {*xn*} *generated by Algorithm 1 converges to p* = *P*Fix(*T*) ◦ *f*(*p*) *in norm.*

#### **4. Applications**

In this section, we will give some applications of our algorithms to variational inequality problems, inclusion problems, and the corresponding convex minimization problems.

#### *4.1. Variational Inequality Problems*

In this subsection, we consider the following variational inequality problem (for short, VIP): find *x* ∈ *C* such that

$$
\langle Ax, y - x \rangle \ge 0, \quad \forall y \in C, \tag{25}
$$

where $A : H \to H$ is a single-valued operator and $C$ is a nonempty convex closed set in $H$. The solution set of VIP (25) is denoted by $\Omega$. It is known that $x^*$ is a solution of VIP (25) if and only if $x^* = P_C(x^* - \lambda Ax^*)$, where $\lambda$ is an arbitrary positive constant. In recent decades, the VIP has received a lot of attention, and various methods have been proposed to solve it; see, e.g., [26–28]. In this subsection, we will give some applications of our algorithms to the VIP (25). For this purpose, we introduce a lemma proposed by Shehu et al. [21].
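The projection characterization above lends itself to a quick numerical check. The following NumPy sketch (an illustration, not part of the paper) assumes the toy choices $C = [0,1]^2$ and $A(x) = x - a$ for a fixed vector $a$, in which case the VIP solution is simply $P_C(a)$:

```python
import numpy as np

# Projection onto the box C = [0, 1]^2 (a toy choice of convex closed set).
def proj_C(x):
    return np.clip(x, 0.0, 1.0)

# Hypothetical strongly monotone operator A(x) = x - a, the gradient of
# (1/2)||x - a||^2; its VIP solution over C is P_C(a).
a = np.array([2.0, 0.5])
A = lambda x: x - a

x_star = proj_C(a)      # expected VIP solution: (1.0, 0.5)
lam = 0.3               # an arbitrary positive constant

# x* solves VIP (25) if and only if x* = P_C(x* - lam * A(x*)).
print(np.allclose(proj_C(x_star - lam * A(x_star)), x_star))  # True
```

Since any positive $\lambda$ works in the characterization, the residual $\|x - P_C(x - \lambda Ax)\|$ is a convenient stopping criterion in practice.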

**Lemma 5.** *Let H be a Hilbert space and let C be a nonempty convex and closed set in H. Suppose that A* : *H* → *H is a monotone L-Lipschitz operator on C and that λ is a positive number. Let V* := *PC*(*I* − *λA*) *and let S* := *V* − *λ*(*AV* − *A*)*. Then, I* − *V is demi-closed at the origin. Moreover, if λL* < 1*, S is a strongly quasi-nonexpansive operator and* Fix(*S*) = Fix(*V*) = Ω*.*

By using Lemma 5 and Corollary 3, we obtain the following corollary for VIP (25) immediately.

**Corollary 4.** *Let H be a Hilbert space and let C be a nonempty convex closed set in H. Let $f : H \to H$ be a contraction with constant $k \in [0, 1)$. Let $A : H \to H$ be a monotone $L$-Lipschitz operator and let $\tau \in \left(0, \frac{1}{L}\right)$. Suppose that Condition 1 holds. Then, the sequence $\{x_n\}$ generated by Algorithm 3 converges to $p = P_{\Omega} \circ f(p)$ in norm.*

**Proof.** Let *S* := *PC*(*I* − *τA*) − *τ*(*A*(*PC*(*I* − *τA*)) − *A*). We see from Lemma 5 that *S* is strongly quasi-nonexpansive and Fix(*S*) = Ω. Then, we get the conclusions from Corollary 3 immediately.


**Algorithm 3** The viscosity type algorithm for solving VIP (25)

**Initialization:** Let $x_0, x_1 \in H$ be arbitrary.

**Iterative Steps**: Given the current iterate $x_n$, calculate $x_{n+1}$ as follows.

**Step 1.** Compute

$$\begin{cases} y\_n = \mathbf{x}\_n + \theta\_n (\mathbf{x}\_n - \mathbf{x}\_{n-1}), \\ z\_n = \mathbf{x}\_n + \epsilon\_n (\mathbf{x}\_n - \mathbf{x}\_{n-1}). \end{cases} \tag{26}$$

**Step 2.** Compute
$$\begin{cases} w_n = P_C(I - \lambda A)z_n, \\ x_{n+1} = \alpha_n f(x_n) + \beta_n y_n + \gamma_n\left(w_n - \lambda(Aw_n - Az_n)\right). \end{cases} \tag{27}$$

**Step 3.** Set $n \leftarrow n+1$ and go to Step 1.
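The steps of Algorithm 3 can be sketched numerically. The fragment below is an illustrative Python translation (not the paper's MATLAB code), assuming the toy data $A(x) = x$, $C = [-5, 5]^m$, $f(x) = 0.5x$, and simple parameter choices in the spirit of Condition 1; the unique solution of the corresponding VIP is then $x^* = 0$:

```python
import numpy as np

np.random.seed(0)
m = 5
proj_C = lambda x: np.clip(x, -5.0, 5.0)   # P_C for the box [-5, 5]^m
A = lambda x: x                             # a monotone 1-Lipschitz operator (assumption)
f = lambda x: 0.5 * x                       # contraction with k = 0.5
lam = 0.7                                   # step size with lam * L < 1

x_prev = np.random.randn(m)
x = np.random.randn(m)
for n in range(1, 301):
    alpha = 1.0 / (n + 1)                   # alpha_n -> 0, sum alpha_n = infinity
    beta = gamma = (1.0 - alpha) / 2.0
    theta = eps = 1.0 / (n + 1) ** 2        # small inertial parameters
    y = x + theta * (x - x_prev)            # Step 1
    z = x + eps * (x - x_prev)
    w = proj_C(z - lam * A(z))              # Step 2: w_n = P_C(I - lam A) z_n
    x_prev, x = x, alpha * f(x) + beta * y + gamma * (w - lam * (A(w) - A(z)))

print(np.linalg.norm(x))  # close to the unique solution x* = 0
```

With these toy choices the iteration contracts geometrically, so a few hundred steps already drive the error far below typical stopping thresholds.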

*4.2. Inclusion Problems*

Let $H$ be a Hilbert space and let $A : H \to H$ be a single-valued mapping. Then, $A$ is said to be monotone if $\langle Ax - Ay, x - y\rangle \ge 0$ for all $x, y \in H$; $A$ is said to be $\alpha$-inverse strongly monotone if $\langle Ax - Ay, x - y\rangle \ge \alpha\|Ax - Ay\|^2$ for all $x, y \in H$. A set-valued operator $A : H \to 2^H$ is said to be monotone if $\langle x - y, u - v\rangle \ge 0$ for all $x, y \in H$, $u \in Ax$ and $v \in Ay$. Furthermore, $A$ is said to be maximal monotone if, for each $(x, u) \in H \times H$, $\langle x - y, u - v\rangle \ge 0$ for all $(y, v) \in \mathrm{Graph}(A)$ implies that $u \in Ax$. Recall that the resolvent operator $J_r^A : H \to H$ associated with $A$ is defined by $J_r^A x = (I + rA)^{-1}x$, where $r > 0$ and $I$ denotes the identity operator on $H$. If $A$ is maximal monotone, then $J_r^A$ is single-valued and firmly nonexpansive. Consider the following simple inclusion problem: find $x^* \in H$ such that

$$0 \in Ax^*,$$

where $A : H \to H$ is a maximal monotone operator. It is known that $0 \in A(x)$ if and only if $x \in \mathrm{Fix}(J_r^A)$. By using Theorem 1, we obtain the following corollary.
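The equivalence $0 \in A(x) \Leftrightarrow x \in \mathrm{Fix}(J_r^A)$ can be illustrated numerically. The sketch below (a toy example, not from the paper) takes $A(x) = Mx$ with $M$ positive semidefinite, so applying the resolvent amounts to solving a linear system:

```python
import numpy as np

# A hypothetical linear maximal monotone operator A(x) = M x with M positive
# semidefinite; its zeros form the null space of M.
M = np.array([[2.0, 0.0],
              [0.0, 0.0]])
r = 0.5

# Resolvent J_r^A = (I + rA)^{-1}, computed by solving a linear system.
J = lambda x: np.linalg.solve(np.eye(2) + r * M, x)

x1 = np.array([0.0, 3.0])   # M x1 = 0, so x1 should be a fixed point of J
x2 = np.array([1.0, 0.0])   # M x2 != 0, so x2 should move

print(np.allclose(J(x1), x1))   # True: 0 in A(x1) <=> x1 in Fix(J_r^A)
print(np.allclose(J(x2), x2))   # False
```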

**Corollary 5.** *Let H be a Hilbert space and let $f : H \to H$ be a contraction with constant $k \in [0, 1)$. Let $A : H \to H$ be a maximal monotone operator such that $A^{-1}(0) \neq \emptyset$. Suppose that Condition 1 holds. Then, the sequence $\{x_n\}$ generated by Algorithm 4 converges strongly to $p = P_{A^{-1}(0)} \circ f(p)$.*


**Algorithm 4** The viscosity type algorithm for solving inclusion problems

**Initialization:** Let $x_0, x_1 \in H$ be arbitrary.

**Iterative Steps**: Given the current iterate $x_n$, calculate $x_{n+1}$ as follows.

**Step 1.** Compute
$$\begin{cases} y_n = x_n + \theta_n(x_n - x_{n-1}), \\ z_n = x_n + \epsilon_n(x_n - x_{n-1}). \end{cases}$$

**Step 2.** Compute
$$x_{n+1} = \alpha_n f(x_n) + \beta_n y_n + \gamma_n J_r^A(z_n).$$

**Step 3.** Set $n \leftarrow n+1$ and go to Step 1.

**Proof.** As $\mathrm{Fix}(J_r^A) = A^{-1}(0)$ and $J_r^A$ is firmly nonexpansive, one has that $J_r^A$ is $\frac{1}{2}$-averaged. Therefore, there exists a nonexpansive mapping $S$ such that $J_r^A = \frac{1}{2}I + \frac{1}{2}S$ and $\mathrm{Fix}(J_r^A) = \mathrm{Fix}(S)$. By using Corollary 1, we obtain the conclusions immediately.

Now, we solve the following convex minimization problem.

$$\min\_{\mathbf{x}\in H} h(\mathbf{x}),\tag{31}$$

where $h : H \to (-\infty, +\infty]$ is a proper lower semi-continuous convex function. The subdifferential $\partial h$ of $h$ is defined by $\partial h(x) = \{u \in H : h(y) \ge h(x) + \langle u, y - x\rangle, \ \forall y \in H\}$. It is known that $\partial h$ is maximal monotone, and $x^*$ is a solution of problem (31) if and only if $0 \in \partial h(x^*)$. Taking $A = \partial h$, we have $J_r^A = \mathrm{prox}_{rh}$, where $r > 0$ and $\mathrm{prox}_{rh}$ is defined by

$$\operatorname{prox}\_{rh}(\boldsymbol{u}) = \arg\min\_{\boldsymbol{x}\in H} \left\{ \frac{1}{2r} \left\| \boldsymbol{x} - \boldsymbol{u} \right\|^2 + h(\boldsymbol{x}) \right\}.$$
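As a quick sanity check of this definition (illustrative only, not from the paper), take the simple choice $h(x) = \frac{1}{2}\|x\|^2$; setting the gradient of the objective to zero gives the closed form $\mathrm{prox}_{rh}(u) = u/(1+r)$, which the sketch below confirms by comparing objective values:

```python
import numpy as np

# For h(x) = (1/2)||x||^2 (an assumption for illustration), the prox solves
# min_x (1/2r)||x - u||^2 + (1/2)||x||^2; the optimality condition
# (x - u)/r + x = 0 gives x = u / (1 + r).
r = 0.7
u = np.array([2.0, -1.0, 0.5])
phi = lambda x: np.sum((x - u) ** 2) / (2 * r) + 0.5 * np.sum(x ** 2)

cand = u / (1.0 + r)

# Sanity check: the closed-form candidate beats random perturbations of itself.
rng = np.random.default_rng(1)
ok = all(phi(cand) <= phi(cand + 0.1 * rng.standard_normal(3)) for _ in range(100))
print(ok)  # True
```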

**Corollary 6.** *Let H be a Hilbert space and let $f : H \to H$ be a contraction with constant $k \in [0, 1)$. Let $h : H \to (-\infty, +\infty]$ be a proper closed lower semi-continuous convex function such that $\arg\min h \neq \emptyset$. Suppose that Condition 1 holds. Then, the sequence $\{x_n\}$ generated by Algorithm 5 converges to a solution of the convex minimization problem (31) in norm.*

#### **Algorithm 5** The viscosity type algorithm for solving convex minimization problems

**Initialization:** Let $x_0, x_1 \in H$ be arbitrary.

**Iterative Steps**: Given the current iterate $x_n$, calculate $x_{n+1}$ as follows.

**Step 1.** Compute
$$\begin{cases} y_n = x_n + \theta_n(x_n - x_{n-1}), \\ z_n = x_n + \epsilon_n(x_n - x_{n-1}). \end{cases} \tag{32}$$

**Step 2.** Compute
$$x_{n+1} = \alpha_n f(x_n) + \beta_n y_n + \gamma_n \operatorname{prox}_{rh}(z_n). \tag{33}$$

**Step 3.** Set *n* ← *n* + 1 and go to Step 1.

**Proof.** It is known that the subdifferential operator $\partial h$ is maximal monotone since $h$ is a proper, closed, lower semi-continuous, convex function. Therefore, $\operatorname{prox}_{rh} = J_r^{\partial h}$. Then, we get the conclusions from Corollary 5 immediately.
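A minimal numerical run of Algorithm 5 can be sketched as follows (illustrative Python, not the paper's code), assuming $h(x) = \frac{1}{2}\|x - c\|^2$, so that $\arg\min h = \{c\}$ and $\mathrm{prox}_{rh}(u) = (u + rc)/(1+r)$; the iterates should then approach $c$:

```python
import numpy as np

# Toy objective h(x) = (1/2)||x - c||^2 with unique minimizer c (assumption);
# its prox has the closed form (u + r*c) / (1 + r).
c = np.array([1.0, 2.0])
r = 1.0
prox = lambda u: (u + r * c) / (1.0 + r)
f = lambda x: 0.5 * x                      # contraction with k = 0.5

x_prev = np.array([5.0, -3.0])
x = np.array([4.0, 0.0])
for n in range(1, 3001):
    alpha = 1.0 / (n + 1)
    beta = gamma = (1.0 - alpha) / 2.0
    theta = eps = 1.0 / (n + 1) ** 2       # small inertial parameters
    y = x + theta * (x - x_prev)           # Step 1
    z = x + eps * (x - x_prev)
    x_prev, x = x, alpha * f(x) + beta * y + gamma * prox(z)   # Step 2

print(np.linalg.norm(x - c))  # small: the iterates approach the minimizer c
```

The residual decays roughly like $\alpha_n$ here, which matches the diminishing step sequence rather than the geometric rate of the prox alone.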

Next, we consider the following inclusion problem: find $x^* \in H$ such that

$$0 \in A(\mathbf{x}^\*) + B(\mathbf{x}^\*),\tag{34}$$

where $A : H \to H$ is an $\alpha$-inverse strongly monotone mapping and $B : H \to 2^H$ is a set-valued maximal monotone operator. It is known that $\mathrm{Fix}(J_r^B(I - rA)) = (A + B)^{-1}(0)$. Many problems can be modelled as this inclusion problem, such as convex programming problems, inverse problems, split feasibility problems, and minimization problems, see [29–32]. Moreover, this problem is also widely applied in machine learning, signal processing, statistical regression, and image restoration, see [33–35]. By using Theorem 1, we obtain the following corollary.
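The identity $\mathrm{Fix}(J_r^B(I - rA)) = (A+B)^{-1}(0)$ can be illustrated in one dimension. The sketch below (a toy assumption, not from the paper) takes $A(x) = x$, which is $1$-inverse strongly monotone, and $B$ the normal cone of $C = [1, 2]$, whose resolvent is the projection onto $C$:

```python
# One-dimensional illustration: A(x) = x, B = normal cone of C = [1, 2].
# Zeros of A + B require -A(x) in N_C(x); x* = 1 works since -1 <= 0.
r = 0.5
proj_C = lambda x: min(max(x, 1.0), 2.0)   # resolvent of B (projection onto C)
T = lambda x: proj_C((1.0 - r) * x)        # J_r^B (I - rA) with A = identity

x = 5.0
for _ in range(20):                        # simple fixed-point iteration
    x = T(x)

print(x)  # 1.0, a fixed point of T and hence a zero of A + B
```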

**Corollary 7.** *Let H be a Hilbert space and let $f : H \to H$ be a contraction with constant $k \in [0, 1)$. Let $A : H \to H$ be an $\alpha$-inverse strongly monotone mapping with $0 < r \le 2\alpha$ and let $B : H \to 2^H$ be a maximal monotone operator. Suppose that $(A+B)^{-1}(0) \neq \emptyset$ and Condition 1 holds. Then, the sequence $\{x_n\}$ generated by Algorithm 6 converges to $p = P_{(A+B)^{-1}(0)} \circ f(p)$ in norm.*

**Proof.** As $A$ is $\alpha$-inverse strongly monotone and $0 < r \le 2\alpha$, one has that $I - rA$ is nonexpansive. Therefore, the operator $J_r^B(I - rA)$ is nonexpansive. Then, we get the conclusions from Theorem 1 immediately.

#### **5. Numerical Results**

In this section, we give three numerical examples to illustrate the computational performance of our proposed algorithms. All programs were run in MATLAB R2018a on a desktop PC with an Intel(R) Core(TM) i5-8250U CPU @ 1.60 GHz and 8.00 GB RAM.

**Example 1.** *In this example, we consider a case in which the usual gradient method is not convergent. Take the feasible set* $C := \{x \in \mathbb{R}^m : -5 \le x_i \le 5,\ i = 1, 2, \cdots, m\}$ *and an* $m \times m$ *square matrix* $A := (a_{ij})_{1\le i,j\le m}$ *whose entries are given by*

$$a\_{ij} = \begin{cases} 1, & \text{if } j = m+1-i \text{ and } j < i, \\\ -1, & \text{if } j = m+1-i \text{ and } j > i, \\\ 0, & \text{otherwise.} \end{cases}$$
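As a concrete check (not part of the paper), the matrix above can be constructed and verified to be skew-symmetric; its quadratic form then vanishes identically, so the associated operator is monotone with $\langle Ax, x\rangle = 0$, which is exactly the situation where the plain gradient method stalls:

```python
import numpy as np

# Build the matrix A of Example 1 (1-based indices as in the paper).
def make_A(m):
    A = np.zeros((m, m))
    for i in range(1, m + 1):
        j = m + 1 - i          # the only possibly non-zero column in row i
        if j < i:
            A[i - 1, j - 1] = 1.0
        elif j > i:
            A[i - 1, j - 1] = -1.0
    return A

A = make_A(100)
x = np.random.default_rng(0).standard_normal(100)
print(np.allclose(A.T, -A))        # True: A is skew-symmetric
print(abs(x @ (A @ x)) < 1e-9)     # True: <Ax, x> = 0, so A is monotone
```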

*One knows that the zero vector $x^* = (0, \dots, 0)$ is a solution of this problem. First, one tests Algorithm 3 with different choices of the inertial parameters $\theta_n$ and $\epsilon_n$. Setting $f(x) = 0.5x$, $\delta_n = \frac{1}{(n+1)^2}$, $\alpha_n = \frac{n}{(n+1)^{1.1}}$, $\beta_n = \gamma_n = \frac{1-\alpha_n}{2}$, $\lambda = 0.7$, the numerical results are shown in Tables 1 and 2.*

*To compare the efficiency of the algorithms, we consider our proposed Algorithm 3, the extragradient method (EGM) in [36], the subgradient extragradient method (SEGM) in [26], and the new inertial subgradient extragradient method (NISEGM) in [27]. The parameters are selected as follows. The initial points $x_0, x_1 \in \mathbb{R}^m$ are generated randomly in MATLAB, and we take different values of $m$ into consideration. In EGM and SEGM, we take $\lambda = 0.7$. In Algorithm 3, we take $f(x) = 0.5x$, $\lambda = 0.7$, $\delta_n = \frac{1}{(n+1)^2}$, $\theta = 0.7$ and $\epsilon = 0.8$ in (4), $\alpha_n = \frac{n}{(n+1)^{1.1}}$, $\beta_n = \gamma_n = \frac{1-\alpha_n}{2}$. We set $\alpha_n = 0.1$, $\tau_n = \frac{n}{(n+1)^{1.1}}$, $\lambda_n = 0.8$ in NISEGM. The stopping criterion is $E_n = \|x_n - x^*\|^2 < 10^{-4}$. The results are presented in Table 3 and Figure 1.*

**Table 1.** Number of iterations of Algorithm 3 with *θ* = 0.5, *m* = 100


**Table 2.** Number of iterations of Algorithm 3 with *ε* = 0.7, *m* = 100.


**Remark 3.** *By Tables 1 and 2, one concludes that the number of iterations is small for Algorithm 3 with $\theta \in [0.5, 1]$ and $\epsilon \in [0.5, 1]$.*

**Remark 4.** *(1) By the numerical results of Example 1, we find that our Algorithm 3 is efficient, easy to implement, and fast. Moreover, the dimension does not affect the computational performance of our algorithm.*

*(2) Obviously, by Example 1, we also find that our proposed Algorithm 3 outperforms the extragradient method (EGM), the subgradient extragradient method (SEGM) and the new inertial subgradient extragradient method (NISEGM) in both CPU time and number of iterations.*

**Table 3.** Comparison between Algorithm 3, EGM, SEGM, and NISEGM in Example 1.


**Figure 1.** Convergence behavior of iteration error {*En*} with different dimension in Example 1.

**Algorithm 6** The viscosity type algorithm for solving inclusion problem (34)

**Initialization:** Let *x*0, *x*<sup>1</sup> ∈ *H* be arbitrary.

**Iterative Steps**: Given the current iterate $x_n$, calculate $x_{n+1}$ as follows.

**Step 1.** Compute

$$\begin{cases} y\_n = \mathbf{x}\_n + \theta\_n (\mathbf{x}\_n - \mathbf{x}\_{n-1}), \\ z\_n = \mathbf{x}\_n + \epsilon\_n (\mathbf{x}\_n - \mathbf{x}\_{n-1}). \end{cases} \tag{35}$$

**Step 2.** Compute

$$x_{n+1} = \alpha_n f(x_n) + \beta_n y_n + \gamma_n J_r^B(I - rA)z_n. \tag{36}$$

**Step 3.** Set $n \leftarrow n+1$ and go to Step 1.

**Example 2.** *In this example, we consider H* = *L*2([0, 2*π*]) *and the following half-space,*

$$\mathcal{C} = \left\{ \mathbf{x} \in L\_2([0, 2\pi]) \, \Big| \, \int\_0^{2\pi} \mathbf{x}(t) dt \le 1 \right\}, \text{ and } Q = \left\{ \mathbf{x} \in L\_2([0, 2\pi]) \, \Big| \, \int\_0^{2\pi} |\mathbf{x}(t) - \sin(t)|^2 \, dt \le 16 \right\}.$$

*Define a linear continuous operator $T : L_2([0, 2\pi]) \to L_2([0, 2\pi])$ by $(Tx)(t) := x(t)$. Then, $(T^*x)(t) = x(t)$ and $\|T\| = 1$. Now, we solve the following problem:*

$$\text{find } \mathfrak{x}^\* \in \mathbb{C} \quad \text{such that } T\mathfrak{x}^\* \in \mathbb{Q}. \tag{37}$$

*As $(Tx)(t) = x(t)$, (37) is actually a convex feasibility problem: find $x^* \in C \cap Q$. Moreover, it is evident that $x(t) = 0$ is a solution, so the solution set of (37) is nonempty. Take $Ax = \nabla\left(\frac{1}{2}\|Tx - P_Q Tx\|^2\right) = T^*(I - P_Q)Tx$ and $B = \partial \iota_C$. Then, (37) can be written in the form (34). It is clear that $A$ is $1$-Lipschitz continuous and $B$ is maximal monotone. For our numerical computation, the projections onto the sets $C$ and $Q$ can be written as follows, see [37]:*

$$P\_C(z) = \begin{cases} \frac{1 - \int\_0^{2\pi} z(t)dt}{4\pi^2} + z, & \int\_0^{2\pi} z(t)dt > 1, \\ z, & \int\_0^{2\pi} z(t)dt \le 1. \end{cases}$$

*and*

$$P_Q(w) = \begin{cases} \sin + \frac{4}{\sqrt{\int_0^{2\pi} |w(t) - \sin(t)|^2 dt}} (w - \sin), & \int_0^{2\pi} |w(t) - \sin(t)|^2 dt > 16, \\ w, & \int_0^{2\pi} |w(t) - \sin(t)|^2 dt \le 16. \end{cases}$$

*In this numerical experiment, we consider different initial values x*<sup>0</sup> *and x*1*. The error of the iterative algorithms is denoted by*

$$E\_n = \frac{1}{2} \left\| P\_{\mathbb{C}} \left( \mathfrak{x}\_n \right) - \mathfrak{x}\_n \right\|\_2^2 + \frac{1}{2} \left\| P\_{\mathbb{Q}} \left( T \left( \mathfrak{x}\_n \right) \right) - T \left( \mathfrak{x}\_n \right) \right\|\_2^2.$$

*Now, we give some numerical comparisons between our Algorithm 6 and Algorithm 5.2 proposed by Shehu et al. [21], which we denote by Shehu et al. Algorithm 5.2. In Shehu et al. Algorithm 5.2, one sets $\lambda = 0.25$, $\epsilon_n = \frac{1}{(n+1)^2}$, $\theta = 0.5$, $\alpha_n = \frac{1}{n+1}$, $\beta_n = \gamma_n = \frac{n}{2(n+1)}$, $e_n = \frac{1}{(n+1)^2}$. In Algorithm 6, one sets $f(x) = 0.5x$, $r = 0.25$, $\delta_n = \frac{1}{(n+1)^2}$, $\theta = 0.5$, $\epsilon = 0.7$, $\alpha_n = \frac{1}{n+1}$, and $\beta_n = \gamma_n = \frac{n}{2(n+1)}$. Our stopping criterion is a maximum of 200 iterations or $E_n < 10^{-3}$. The results are presented in Table 4 and Figure 2.*

**Figure 2.** Convergence behavior of iteration error {*En*} with different initial values in Example 2.


**Table 4.** Comparison between our Algorithm 6 and Shehu et al.'s Algorithm 5.2 in Example 2.

**Remark 5.** *(1) By observing the numerical results of Example 2, we find that our Algorithm 6 is more efficient and faster than Shehu et al.'s Algorithm 5.2.*

*(2) Our Algorithm 6 is consistent since the choice of initial value does not affect the number of iterations needed to achieve the expected results.*

**Example 3.** *In this example, we consider a linear inverse problem $b = Ax_0 + w$, where $x_0 \in \mathbb{R}^N$ is the (unknown) signal to recover, $w \in \mathbb{R}^M$ is a noise vector, and $A \in \mathbb{R}^{M\times N}$ models the acquisition device. To recover an approximation of the signal $x_0$, we use the basis pursuit denoising method; that is, one uses the $\ell_1$ norm as a sparsity-enforcing penalty:*

$$\min_{x\in\mathbb{R}^N} \Phi(x) = \frac{1}{2}\|b - Ax\|^2 + \lambda\|x\|_1, \tag{38}$$

*where $\|x\|_1 = \sum_i |x_i|$ and $\lambda$ is a parameter related to the noise $w$. Problem (38) is referred to as the least absolute shrinkage and selection operator problem, that is, the LASSO problem. The LASSO problem (38) is a special case of minimizing $F + G$, where*

$$F(x) = \frac{1}{2} \|b - Ax\|^2, \quad \text{and} \quad G(x) = \lambda \|x\|\_1.$$

*It is easy to see that $F$ is a smooth function with $L$-Lipschitz continuous gradient $\nabla F(x) = A^*(Ax - b)$, where $L = \|A^*A\|$. The $\ell_1$-norm is "simple", as its proximal operator is a soft thresholding:*

$$\text{prox}\_{\gamma G}(\mathbf{x}\_k) = \max\left(0, 1 - \frac{\lambda \gamma}{|\mathbf{x}\_k|}\right) \mathbf{x}\_k.$$
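Both pieces needed by a forward–backward scheme for (38) can be sanity-checked numerically. The sketch below (illustrative Python with random data, not the paper's code) verifies the gradient formula by central differences and implements the soft-thresholding prox in the equivalent sign/max form, which avoids the division by $|x_k|$ at zero:

```python
import numpy as np

# Random toy data for F(x) = (1/2)||b - Ax||^2 (real-valued, so A^* = A^T).
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 4))
b = rng.standard_normal(6)
x = rng.standard_normal(4)

# grad F(x) = A^T (A x - b), checked against central finite differences.
F = lambda v: 0.5 * np.sum((b - A @ v) ** 2)
grad = A.T @ (A @ x - b)
h = 1e-6
num = np.array([(F(x + h * e) - F(x - h * e)) / (2 * h) for e in np.eye(4)])
print(np.allclose(grad, num, atol=1e-4))      # True

# Soft thresholding, rewritten as sign(x) * max(|x| - lam*gamma, 0) so the
# x_k = 0 case needs no special handling.
def prox_l1(v, lam_gamma):
    return np.sign(v) * np.maximum(np.abs(v) - lam_gamma, 0.0)

print(prox_l1(np.array([3.0, -0.5, 0.0, 1.5]), 1.0))  # thresholds to [2, 0, 0, 0.5]
```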

*In our experiment, we want to recover a sparse signal x*<sub>0</sub> ∈ **R**<sup>*N*</sup> *with k (k* ≪ *N) non-zero elements. A simple linearized model of signal processing considers a linear operator, that is, a filtering Ax* = *ϕ* ∗ *x, where ϕ is a second derivative of a Gaussian. We observe b* = *Ax*<sub>0</sub> + *w, where w is a realization of Gaussian white noise with variance* 10<sup>−2</sup>*. Therefore, we need to solve* (38)*. We compare our Algorithm 6 with another strong convergence algorithm, proposed by Gibali and Thong in [38], which we denote by G-T Algorithm 1. In addition, we also compare both algorithms with the classic Forward–Backward algorithm in [33]. Our parameter settings are as follows. In all algorithms, we set the regularization parameter λ* = 1/2 *in* (38)*. In the Forward–Backward algorithm, we set the step size γ* = 1.9/*L. In G-T Algorithm 1, we set the step size γ* = 1.9/*L, α<sub>n</sub>* = 1/(*n* + 1)*, β<sub>n</sub>* = *n*/(2(*n* + 1)) *and μ* = 0.5*. In Algorithm 6, we set the step size r* = 1.9/*L, f*(*x*) = 0.1*x, θ* = 0.9*, δ<sub>n</sub>* = 1/(*n* + 1)<sup>2</sup>*, α<sub>n</sub>* = 1/(*n* + 1)*, β<sub>n</sub>* = 1/(1000(*n* + 1)<sup>3</sup>)*, γ<sub>n</sub>* = 1 − *α<sub>n</sub>* − *β<sub>n</sub>. We take a maximum of* 5 × 10<sup>4</sup> *iterations as a common stopping criterion. In addition, we use the signal-to-noise ratio (SNR) to measure the quality of recovery; a larger SNR means a better recovery quality. Numerical results are reported in Table 5 and Figures 3–5. We tested the computational performance of the above algorithms for different dimensions N and sparsity levels k (Case I: N* = 400*, k* = 12*; Case II: N* = 400*, k* = 20*; Case III: N* = 1000*, k* = 30*; Case IV: N* = 1000*, k* = 50*). Figure 3 shows the original and noisy signals for the different N and k. Figure 4 shows the recovery results of the different algorithms in each case; the corresponding numerical results are shown in Table 5. Figure 5 shows the convergence behavior of* Φ(*x*) *in* (38) *with the number of iterations.*
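For concreteness, the baseline Forward–Backward iteration *x*<sub>*k*+1</sub> = prox<sub>*γG*</sub>(*x<sub>k</sub>* − *γ*∇*F*(*x<sub>k</sub>*)) can be sketched as follows. The random matrix `A`, the problem sizes, and the iteration count are illustrative stand-ins for the Gaussian-filter operator and the cases of the experiment, not the paper's setup:

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t*||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

rng = np.random.default_rng(0)
N, M, k = 100, 60, 5                          # illustrative sizes only
A = rng.standard_normal((M, N)) / np.sqrt(M)  # stand-in for the filtering operator
x_true = np.zeros(N)
x_true[rng.choice(N, k, replace=False)] = rng.standard_normal(k)
b = A @ x_true + 0.01 * rng.standard_normal(M)  # noisy measurements

lam = 0.5
L = np.linalg.norm(A.T @ A, 2)                # Lipschitz constant of grad F
gamma = 1.9 / L                               # step size, as in the experiment
x = np.zeros(N)
for _ in range(2000):
    grad = A.T @ (A @ x - b)                  # gradient of 0.5*||Ax - b||^2
    x = soft_threshold(x - gamma * grad, lam * gamma)

# Objective Phi(x) = 0.5*||Ax - b||^2 + lam*||x||_1 should not exceed its value at 0.
phi = 0.5 * np.linalg.norm(A @ x - b) ** 2 + lam * np.linalg.norm(x, 1)
phi0 = 0.5 * np.linalg.norm(b) ** 2
print(phi <= phi0)
```

The SNR comparison of Table 5 would then be computed from `x` against the ground-truth signal.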

**Table 5.** Comparison of the SNR among Algorithm 6, G-T Algorithm 1, and the Forward–Backward algorithm in Example 3.

**Figure 3.** Original signals and noise signals at different *N* and *k* in Example 3.

**Figure 4.** Recovery results under different algorithms in Example 3.

**Figure 5.** Convergence behavior of {Φ(*x*)} with different *N* and *k* in Example 3.


#### **6. Conclusions**

In this paper, we proposed a viscosity algorithm with two different inertial parameters for solving the fixed-point problem of nonexpansive mappings. We also established a strong convergence theorem for strict pseudo-contractive mappings. By choosing different parameter values in the inertial sequences, we analyzed the convergence behavior of the proposed algorithms. One highlight is that, in contrast with existing methods, our algorithms are based on two different inertial parameter sequences and are accelerated via both the inertial technique and the viscosity technique. Another highlight is that, to show the effectiveness of our algorithms, we compared them with other existing algorithms in terms of convergence rate and of applications in signal processing. Numerical experiments show that our algorithms are consistent and efficient. Finally, we remark that the underlying framework here is a Hilbert space; it would be of interest to extend our results to the framework of Banach spaces or Hadamard manifolds.

**Author Contributions:** All the authors contributed equally to this work. All authors have read and agreed to the published version of the manuscript.

**Funding:** This paper was supported by the National Natural Science Foundation of China under Grant 11601348.

**Acknowledgments:** The authors are grateful to the referees for useful suggestions, which improved the contents of this paper.

**Conflicts of Interest:** The authors declare no conflicts of interest.

#### **References**


© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## *Article* **Inertial-Like Subgradient Extragradient Methods for Variational Inequalities and Fixed Points of Asymptotically Nonexpansive and Strictly Pseudocontractive Mappings**

**Lu-Chuan Ceng 1, Adrian Petru¸sel 2, Ching-Feng Wen 3,4,\* and Jen-Chih Yao <sup>5</sup>**


Received: 18 August 2019; Accepted: 9 September 2019; Published: 17 September 2019

**Abstract:** Let VIP indicate the variational inequality problem with Lipschitzian and pseudomonotone operator and let CFPP denote the common fixed-point problem of an asymptotically nonexpansive mapping and a strictly pseudocontractive mapping in a real Hilbert space. Our object in this article is to establish strong convergence results for solving the VIP and CFPP by utilizing an inertial-like gradient-like extragradient method with line-search process. Via suitable assumptions, it is shown that the sequences generated by such a method converge strongly to a common solution of the VIP and CFPP, which also solves a hierarchical variational inequality (HVI).

**Keywords:** inertial-like subgradient-like extragradient method with line-search process; pseudomonotone variational inequality problem; asymptotically nonexpansive mapping; strictly pseudocontractive mapping; sequentially weak continuity

**MSC:** 47H05; 47H09; 47H10; 90C52

#### **1. Introduction**

Throughout this paper we assume that *C* is a nonempty, convex and closed subset of a real Hilbert space (*H*, ‖·‖), whose inner product is denoted by ⟨·, ·⟩. Moreover, let *PC* denote the metric projection of *H* onto *C*.

Suppose *A* : *H* → *H* is a mapping. In this paper, we shall consider the following variational inequality (VI) of finding *x*<sup>∗</sup> ∈ *C* such that

$$\langle x - x^\*, A x^\* \rangle \ge 0, \quad \forall x \in C. \tag{1}$$

*Mathematics* **2019**, *7*, 860; doi:10.3390/math7090860 www.mdpi.com/journal/mathematics

The set of solutions to Equation (1) is denoted by VI(*C*, *A*). In 1976, Korpelevich [1] first introduced an extragradient method, which is one of the most popular approximation ones for solving Equation (1) till now. That is, for any initial *u*<sup>0</sup> ∈ *C*, the sequence {*un*} is generated by

$$\begin{cases} v\_n = P\_C(u\_n - \tau A u\_n), \\ u\_{n+1} = P\_C(u\_n - \tau A v\_n), \quad \forall n \ge 0, \end{cases} \tag{2}$$

where *τ* is a constant in (0, 1/*L*) and *L* > 0 is the Lipschitz constant of the mapping *A*. In the case where VI(*C*, *A*) ≠ ∅, the sequence {*un*} constructed by Equation (2) converges weakly to a point in VI(*C*, *A*). Recently, light has been shed on approximation methods for solving problem Equation (1) by many researchers; see, e.g., [2–11] and references therein.
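For intuition, Korpelevich's scheme (2) can be sketched in a few lines. Here *A* is taken to be an affine monotone operator and *C* a box, both purely illustrative choices of ours:

```python
import numpy as np

def project_box(x, lo=-1.0, hi=1.0):
    """Metric projection P_C onto the box C = [lo, hi]^n."""
    return np.clip(x, lo, hi)

# Illustrative monotone Lipschitz operator A(u) = M u + q
# (the symmetric part of M is positive definite, hence A is monotone).
M = np.array([[2.0, 1.0], [-1.0, 2.0]])
q = np.array([-1.0, 1.0])
A = lambda u: M @ u + q
L = np.linalg.norm(M, 2)     # Lipschitz constant of A
tau = 0.9 / L                # tau in (0, 1/L), as required by (2)

u = np.array([1.0, -1.0])
for _ in range(500):
    v = project_box(u - tau * A(u))   # extrapolation step
    u = project_box(u - tau * A(v))   # correction step

# At a solution u*, <x - u*, A(u*)> >= 0 for all x in C; since this is
# linear in x, it suffices to check the corners of the box.
corners = [np.array([sx, sy]) for sx in (-1.0, 1.0) for sy in (-1.0, 1.0)]
print(all((c - u) @ A(u) >= -1e-6 for c in corners))
```

Because the symmetric part of `M` is positive definite, the VI solution is unique and the iteration converges linearly; for merely monotone operators only weak convergence is available, as stated above.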

Let *T* : *C* → *C* be a mapping. We denote by Fix(*T*) the set of fixed points of *T*, i.e., Fix(*T*) = {*x* ∈ *C* : *x* = *Tx*}. *T* is said to be asymptotically nonexpansive if there exists {*θn*} ⊂ [0, +∞) with lim<sub>*n*→∞</sub> *θ<sub>n</sub>* = 0 such that ‖*T<sup>n</sup>u* − *T<sup>n</sup>v*‖ ≤ ‖*u* − *v*‖ + *θn*‖*u* − *v*‖, ∀*n* ≥ 1, *u*, *v* ∈ *C*. If *θ<sub>n</sub>* ≡ 0, then *T* is nonexpansive. Also, *T* is said to be strictly pseudocontractive if there exists *ζ* ∈ [0, 1) such that ‖*Tu* − *Tv*‖<sup>2</sup> ≤ ‖*u* − *v*‖<sup>2</sup> + *ζ*‖(*I* − *T*)*u* − (*I* − *T*)*v*‖<sup>2</sup>, ∀*u*, *v* ∈ *C*. If *ζ* = 0, then *T* reduces to a nonexpansive mapping. One knows that the class of strict pseudocontractions strictly includes the class of nonexpansive mappings. Both strict pseudocontractions and nonexpansive mappings have been studied extensively by a large number of authors via iterative approximation methods; see, e.g., [12–18] and references therein.

Let the mappings *A*, *B* : *C* → *H* both be inverse-strongly monotone and let *T* : *C* → *C* be an asymptotically nonexpansive mapping with a sequence {*θn*}. Let *f* : *C* → *C* be a *δ*-contraction with *δ* ∈ [0, 1). By using a modified extragradient method, Cai et al. [19] designed a viscosity implicit rule for finding a point in the common solution set Ω of the VIs for *A* and *B* and the FPP of *T*, i.e., for arbitrarily given *x*<sup>1</sup> ∈ *C*, {*xn*} is the sequence constructed by

$$\begin{cases} u\_n = s\_n x\_n + (1 - s\_n) y\_n, \\ y\_n = P\_C(I - \lambda A) P\_C(u\_n - \mu B u\_n), \\ x\_{n+1} = P\_C[(T^n y\_n - \alpha\_n \rho F T^n y\_n) + \alpha\_n f(x\_n)], \end{cases}$$

where {*αn*}, {*sn*} ⊂ (0, 1]. Under appropriate conditions imposed on {*αn*}, {*sn*}, they proved that {*xn*} converges strongly to an element *x*<sup>∗</sup> ∈ Ω provided ∑<sup>∞</sup><sub>*n*=1</sub> ‖*T*<sup>*n*+1</sup>*yn* − *T<sup>n</sup>yn*‖ < ∞.

In the context of extragradient techniques, one has to compute two metric projections at each computational step. Without doubt, if *C* is a general convex and closed set, the computation of the projection onto *C* might be quite time-consuming. In 2011, inspired by Korpelevich's extragradient method, Censor et al. [20] first designed the subgradient extragradient method, where a projection onto a half-space is used in place of the second projection onto *C*. In 2014, Kraikaew and Saejung [21] proposed the Halpern subgradient extragradient method for solving Equation (1), and proved strong convergence of the proposed method to a solution of Equation (1).
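The computational point of the half-space replacement is that projecting onto a half-space has a closed form, unlike projecting onto a general convex set. A minimal sketch (the function name is ours):

```python
import numpy as np

def project_halfspace(x, a, y):
    """Projection of x onto the half-space T = {u : <u - y, a> <= 0}, a != 0.

    If x already satisfies the constraint it is returned unchanged;
    otherwise x is shifted along a onto the boundary hyperplane.
    """
    s = (x - y) @ a
    if s <= 0:
        return x
    return x - (s / (a @ a)) * a

a = np.array([1.0, 0.0])
y = np.zeros(2)
print(project_halfspace(np.array([2.0, 3.0]), a, y))   # lands on the boundary
print(project_halfspace(np.array([-1.0, 3.0]), a, y))  # already feasible, unchanged
```

In the subgradient extragradient method, `a` plays the role of *v<sub>n</sub>* − *τ<sub>n</sub>Av<sub>n</sub>* − *y<sub>n</sub>* and `y` the role of *y<sub>n</sub>*, so the second projection costs one inner product instead of a general convex program.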

In 2018, via the inertial technique, Thong and Hieu [22] studied the inertial subgradient extragradient method, and proved weak convergence of their method to a solution of Equation (1). Very recently, they [23] constructed two inertial subgradient extragradient algorithms with linear-search process for finding a common solution of problem Equation (1) with operator *A* and the FPP of operator *T* with demiclosedness property in a real Hilbert space, where *A* is Lipschitzian and monotone, and *T* is quasi-nonexpansive. The constructed inertial subgradient extragradient algorithms (Algorithms 1 and 2) are as below:

**Algorithm 1:** Inertial subgradient extragradient algorithm (I) (see [[23], Algorithm 1]).
**Initialization:** Given *u*0, *u*<sup>1</sup> ∈ *H* arbitrarily. Let *γ* > 0, *l* ∈ (0, 1), *μ* ∈ (0, 1).
**Iterative Steps:** Compute *un*+<sup>1</sup> as follows:
Step 1. Put *vn* = *αn*(*un* − *un*−1) + *un* and calculate *yn* = *PC*(*vn* − *τnAvn*), where *τ<sup>n</sup>* is chosen to be the largest *τ* ∈ {*γ*, *γl*, *γl*<sup>2</sup>, ...} satisfying *τ*‖*Avn* − *Ayn*‖ ≤ *μ*‖*vn* − *yn*‖.
Step 2. Calculate *zn* = *PTn*(*vn* − *τnAyn*) with *Tn* := {*x* ∈ *H* : ⟨*x* − *yn*, *vn* − *τnAvn* − *yn*⟩ ≤ 0}.
Step 3. Calculate *un*+<sup>1</sup> = *βnTzn* + (1 − *βn*)*vn*. If *vn* = *zn* = *un*+<sup>1</sup>, then *vn* ∈ Fix(*T*) ∩ VI(*C*, *A*). Set *n* := *n* + 1 and go to Step 1.


**Algorithm 2:** Inertial subgradient extragradient algorithm (II) (see [[23], Algorithm 2]).
**Initialization:** Given *u*0, *u*<sup>1</sup> ∈ *H* arbitrarily. Let *γ* > 0, *l* ∈ (0, 1), *μ* ∈ (0, 1).
**Iterative Steps:** Calculate *un*+<sup>1</sup> as follows:
Step 1. Put *vn* = *αn*(*un* − *un*−1) + *un* and calculate *yn* = *PC*(*vn* − *τnAvn*), where *τ<sup>n</sup>* is chosen to be the largest *τ* ∈ {*γ*, *γl*, *γl*<sup>2</sup>, ...} satisfying *τ*‖*Avn* − *Ayn*‖ ≤ *μ*‖*vn* − *yn*‖.
Step 2. Calculate *zn* = *PTn*(*vn* − *τnAyn*) with *Tn* := {*x* ∈ *H* : ⟨*x* − *yn*, *vn* − *τnAvn* − *yn*⟩ ≤ 0}.
Step 3. Calculate *un*+<sup>1</sup> = *βnTzn* + (1 − *βn*)*un*. If *vn* = *zn* = *un* = *un*+<sup>1</sup>, then *un* ∈ Fix(*T*) ∩ VI(*C*, *A*). Set *n* := *n* + 1 and go to Step 1.

Under mild assumptions, they proved that the sequences generated by the proposed algorithms are weakly convergent to a point in Fix(*T*) ∩ VI(*C*, *A*). Recently, gradient-like methods have been studied extensively by many authors; see, e.g., [24–38].
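The step-size rule in Step 1 of both algorithms is an Armijo-type backtracking: start from *γ* and shrink by the factor *l* until *τ*‖*Av<sub>n</sub>* − *Ay<sub>n</sub>*‖ ≤ *μ*‖*v<sub>n</sub>* − *y<sub>n</sub>*‖ holds. A sketch, with `proj_C` and the operator `A` supplied by the caller and all names ours:

```python
import numpy as np

def armijo_tau(v, A, proj_C, gamma=1.0, l=0.5, mu=0.9, max_halvings=60):
    """Largest tau in {gamma, gamma*l, gamma*l^2, ...} such that
    tau * ||A(v) - A(y)|| <= mu * ||v - y||, where y = P_C(v - tau*A(v))."""
    tau = gamma
    Av = A(v)
    for _ in range(max_halvings):
        y = proj_C(v - tau * Av)
        if tau * np.linalg.norm(Av - A(y)) <= mu * np.linalg.norm(v - y):
            return tau, y
        tau *= l
    raise RuntimeError("line search failed (A may not be Lipschitz near v)")

# Example: A(u) = 3u (Lipschitz with L = 3), C = closed unit ball.
A = lambda u: 3.0 * u
proj_C = lambda u: u / max(1.0, np.linalg.norm(u))
tau, y = armijo_tau(np.array([2.0, 0.0]), A, proj_C)
# Lemma 6 below guarantees min{gamma, mu*l/L} <= tau <= gamma.
assert min(1.0, 0.9 * 0.5 / 3.0) <= tau <= 1.0
```

When *A* is *L*-Lipschitz the search stops after finitely many halvings, which is exactly the content of Lemma 6 below.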

Inspired by the research work of [23], we introduce two inertial-like subgradient extragradient algorithms with line-search process for solving Equation (1) with a Lipschitzian and pseudomonotone operator and the common fixed point problem (CFPP) of an asymptotically nonexpansive operator and a strictly pseudocontractive operator in *H*. The proposed algorithms comprehensively combine the inertial subgradient extragradient method with line-search process, the viscosity approximation method, the Mann iteration method and asymptotically nonexpansive mappings. Via suitable assumptions, it is shown that the sequences generated by the suggested algorithms converge strongly to a common solution of the VIP and CFPP, which also solves a hierarchical variational inequality (HVI).

#### **2. Preliminaries**

Let *x* ∈ *H* and {*xn*} ⊂ *H*. We use the notation *xn* → *x* (resp., *xn* ⇀ *x*) to indicate the strong (resp., weak) convergence of {*xn*} to *x*. Recall that a mapping *T* : *C* → *H* is said to be:


For metric projections, it is well known that the following assertions hold:


**Lemma 1.** *[39] Assume that A* : *C* → *H is a continuous pseudomonotone mapping. Then u*<sup>∗</sup> ∈ *C is a solution to the VI* ⟨*Au*<sup>∗</sup>, *v* − *u*<sup>∗</sup>⟩ ≥ 0, ∀*v* ∈ *C, iff* ⟨*Av*, *v* − *u*<sup>∗</sup>⟩ ≥ 0, ∀*v* ∈ *C.*

**Lemma 2.** *[40] Let the real sequence* {*tn*} ⊂ [0, ∞) *satisfy the condition tn*+<sup>1</sup> ≤ (1 − *sn*)*tn* + *snbn*, ∀*n* ≥ 1*, where* {*sn*} *and* {*bn*} *are sequences in* (−∞, ∞) *such that (i)* {*sn*} ⊂ [0, 1] *and* ∑<sup>∞</sup><sub>*n*=1</sub> *sn* = ∞*, and (ii)* lim sup<sub>*n*→∞</sub> *bn* ≤ 0 *or* ∑<sup>∞</sup><sub>*n*=1</sub> |*snbn*| < ∞*. Then* lim<sub>*n*→∞</sub> *tn* = 0*.*
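The conclusion of Lemma 2 can be checked numerically. A quick sketch with the illustrative choices *s<sub>n</sub>* = 1/(*n* + 1) (so that ∑ *s<sub>n</sub>* = ∞) and *b<sub>n</sub>* = 1/*n* → 0, both ours:

```python
# Numerical illustration of Lemma 2: t_{n+1} <= (1 - s_n) t_n + s_n b_n drives
# t_n to 0 when sum s_n diverges and limsup b_n <= 0.
t = 5.0  # arbitrary nonnegative starting value
for n in range(1, 200_000):
    s, b = 1.0 / (n + 1), 1.0 / n
    t = (1 - s) * t + s * b
print(t)  # decays toward 0
```

Note that ∑ *s<sub>n</sub>* = ∞ is essential: with the summable choice *s<sub>n</sub>* = 1/*n*<sup>2</sup> the recursion need not drive *t<sub>n</sub>* to zero.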

**Lemma 3.** *[33] Let T* : *C* → *C be a ζ-strict pseudocontraction. If the sequence* {*un*} ⊂ *C satisfies un u* ∈ *C and* (*I* − *T*)*un* → 0*, then u* ∈ Fix(*T*)*, where I is the identity operator of H.*

**Lemma 4.** *[33] Let T* : *C* → *C be a ζ-strictly pseudocontractive mapping. Let the real numbers γ*, *δ* ≥ 0 *satisfy* (*γ* + *δ*)*ζ* ≤ *γ. Then* ‖*γ*(*x* − *y*) + *δ*(*Tx* − *Ty*)‖ ≤ (*γ* + *δ*)‖*x* − *y*‖, ∀*x*, *y* ∈ *C.*

**Lemma 5.** *[41] Let the Banach space X admit a weakly continuous duality mapping, the subset C* ⊂ *X be nonempty, convex and closed, and the asymptotically nonexpansive mapping T* : *C* → *C have a fixed point, i.e.,* Fix(*T*) = ∅*. Then I* − *T is demiclosed at zero, i.e., if the sequence* {*un*} ⊂ *C satisfies un u* ∈ *C and* (*I* − *T*)*un* → 0*, then* (*I* − *T*)*u* = 0*, where I is the identity mapping of X.*

#### **3. Main Results**

Unless otherwise stated, we suppose the following.

	- (i) sup<sub>*n*≥1</sub> *σn*/*α<sup>n</sup>* < ∞ and *β<sup>n</sup>* + *γ<sup>n</sup>* + *δ<sup>n</sup>* = 1, ∀*n* ≥ 1;
	- (ii) ∑<sup>∞</sup><sub>*n*=1</sub> *α<sup>n</sup>* = ∞, lim<sub>*n*→∞</sub> *α<sup>n</sup>* = lim<sub>*n*→∞</sub> *θn*/*α<sup>n</sup>* = 0;
	- (iii) (*γ<sup>n</sup>* + *δn*)*ζ* ≤ *γ<sup>n</sup>* < (1 − 2*δ*)*δn*, ∀*n* ≥ 1 and lim inf<sub>*n*→∞</sub>((1 − 2*δ*)*δ<sup>n</sup>* − *γn*) > 0;
	- (iv) lim sup<sub>*n*→∞</sub> *β<sup>n</sup>* < 1, lim inf<sub>*n*→∞</sub> *β<sup>n</sup>* > 0 and lim inf<sub>*n*→∞</sub> *δ<sup>n</sup>* > 0.

We first introduce an inertial-like subgradient extragradient algorithm (Algorithm 3) with line-search process as follows:

**Algorithm 3:** Inertial-like subgradient extragradient algorithm (I).

**Initialization:** Given *x*0, *x*<sup>1</sup> ∈ *H* arbitrarily. Let *γ* > 0, *l* ∈ (0, 1), *μ* ∈ (0, 1).

**Iterative Steps:** Compute *xn*+<sup>1</sup> as follows:
Step 1. Put *wn* = *σn*(*xn* − *xn*−1) + *T<sup>n</sup>xn* and calculate *yn* = *PC*(*I* − *τnA*)*wn*, where *τ<sup>n</sup>* is chosen to be the largest *τ* ∈ {*γ*, *γl*, *γl*<sup>2</sup>, ...} such that

$$\tau \|A w\_n - A y\_n\| \le \mu \|w\_n - y\_n\|.$$

Step 2. Calculate *zn* = (1 − *αn*)*PCn*(*wn* − *τnAyn*) + *α<sup>n</sup> f*(*xn*) with *Cn* := {*x* ∈ *H* : ⟨*wn* − *τnAwn* − *yn*, *x* − *yn*⟩ ≤ 0}.
Step 3. Calculate

$$x\_{n+1} = \gamma\_n P\_{C\_n}(w\_n - \tau\_n A y\_n) + \delta\_n S z\_n + \beta\_n T^n x\_n.$$

Again set *n* := *n* + 1 and return to Step 1.

**Lemma 6.** *In Step 1 of Algorithm 3, the Armijo-like search rule*

$$\tau \|A w\_n - A y\_n\| \le \mu \|w\_n - y\_n\| \tag{3}$$

*is well defined, and the following inequality holds:* min{*γ*, *μl*/*L*} ≤ *τ<sup>n</sup>* ≤ *γ.*

**Proof.** Since *A* is *L*-Lipschitzian, Equation (3) holds for all *γl<sup>m</sup>* ≤ *μ*/*L*, and so *τ<sup>n</sup>* is well defined. It is clear that *τ<sup>n</sup>* ≤ *γ*. Next we discuss two cases. In the case where *τ<sup>n</sup>* = *γ*, the inequality is valid. In the case where *τ<sup>n</sup>* < *γ*, the step size *τn*/*l* must violate Equation (3), so that ‖*Awn* − *APC*(*wn* − (*τn*/*l*)*Awn*)‖ > (*μl*/*τn*)‖*wn* − *PC*(*wn* − (*τn*/*l*)*Awn*)‖. Also, since *A* is *L*-Lipschitzian, we get *τ<sup>n</sup>* > *μl*/*L*. Therefore the inequality is true.

**Lemma 7.** *Assume that* {*wn*}, {*yn*}, {*zn*} *are the sequences constructed by Algorithm 3. Then*

$$\begin{array}{rl} \|z\_n - p\|^2 \le & [1 - \alpha\_n(1 - \delta)] \|x\_n - p\|^2 + (1 - \alpha\_n)\Lambda\_n - (1 - \alpha\_n)(1 - \mu) \times \\ & \times \left[\|w\_n - y\_n\|^2 + \|u\_n - y\_n\|^2\right] + 2\alpha\_n \langle (f - I)p, z\_n - p \rangle, \quad \forall p \in \Omega, \end{array} \tag{4}$$

*where un* := *PCn*(*wn* − *τnAyn*) *and* Λ*<sup>n</sup>* := *σn*‖*xn* − *xn*−1‖[2(1 + *θn*)‖*xn* − *p*‖ + *σn*‖*xn* − *xn*−1‖] + *θn*(2 + *θn*)‖*xn* − *p*‖<sup>2</sup> *for all n* ≥ 1*.*

**Proof.** We observe that

$$\begin{array}{rl} 2\|u\_n - p\|^2 &= 2\|P\_{C\_n}(w\_n - \tau\_n A y\_n) - P\_{C\_n} p\|^2 \le 2\langle u\_n - p, w\_n - \tau\_n A y\_n - p\rangle \\ &= \|u\_n - p\|^2 + \|w\_n - p\|^2 - \|u\_n - w\_n\|^2 - 2\langle u\_n - p, \tau\_n A y\_n\rangle. \end{array}$$

So, it follows that ‖*un* − *p*‖<sup>2</sup> ≤ ‖*wn* − *p*‖<sup>2</sup> − ‖*un* − *wn*‖<sup>2</sup> − 2⟨*un* − *p*, *τnAyn*⟩. Since *A* is pseudomonotone and *p* ∈ VI(*C*, *A*), we deduce that ⟨*Ayn*, *yn* − *p*⟩ ≥ 0 and

$$\begin{array}{rl} \|u\_n - p\|^2 &\le \|w\_n - p\|^2 + 2\tau\_n(\langle A y\_n, p - y\_n \rangle + \langle A y\_n, y\_n - u\_n \rangle) - \|u\_n - w\_n\|^2 \\ &\le \|w\_n - p\|^2 + 2\tau\_n \langle A y\_n, y\_n - u\_n \rangle - \|u\_n - w\_n\|^2 \\ &= \|w\_n - p\|^2 - \|y\_n - w\_n\|^2 + 2\langle w\_n - \tau\_n A y\_n - y\_n, u\_n - y\_n \rangle - \|u\_n - y\_n\|^2. \end{array} \tag{5}$$

Since *un* = *PCn*(*wn* − *τnAyn*) with *Cn* := {*x* ∈ *H* : ⟨*wn* − *τnAwn* − *yn*, *x* − *yn*⟩ ≤ 0}, we have ⟨*un* − *yn*, *wn* − *τnAwn* − *yn*⟩ ≤ 0, which, together with Equation (3), implies that

$$\begin{array}{rl} 2\langle w\_n - \tau\_n A y\_n - y\_n, u\_n - y\_n \rangle &= 2\langle w\_n - \tau\_n A w\_n - y\_n, u\_n - y\_n \rangle + 2\tau\_n\langle A w\_n - A y\_n, u\_n - y\_n \rangle \\ &\le 2\mu\|w\_n - y\_n\|\|u\_n - y\_n\| \le \mu(\|w\_n - y\_n\|^2 + \|u\_n - y\_n\|^2). \end{array}$$

Also, from *wn* <sup>=</sup> *<sup>σ</sup>n*(*xn* <sup>−</sup> *xn*−1) + *<sup>T</sup>nxn* we get

$$\begin{array}{rl} \|w\_n - p\|^2 &= \|\sigma\_n(x\_n - x\_{n-1}) + T^n x\_n - p\|^2 \\ &\le \left[(1 + \theta\_n)\|x\_n - p\| + \sigma\_n\|x\_n - x\_{n-1}\|\right]^2 \\ &= (1 + \theta\_n)^2\|x\_n - p\|^2 + \sigma\_n\|x\_n - x\_{n-1}\|\left[2(1 + \theta\_n)\|x\_n - p\| + \sigma\_n\|x\_n - x\_{n-1}\|\right] \\ &= \|x\_n - p\|^2 + \theta\_n(2 + \theta\_n)\|x\_n - p\|^2 + \sigma\_n\|x\_n - x\_{n-1}\|\left[2(1 + \theta\_n)\|x\_n - p\| + \sigma\_n\|x\_n - x\_{n-1}\|\right] \\ &= \|x\_n - p\|^2 + \Lambda\_n, \end{array}$$

where Λ*<sup>n</sup>* := *θn*(2 + *θn*)‖*xn* − *p*‖<sup>2</sup> + *σn*‖*xn* − *xn*−1‖[2(1 + *θn*)‖*xn* − *p*‖ + *σn*‖*xn* − *xn*−1‖]. Therefore, substituting the last two inequalities into Equation (5), we infer that

$$\begin{array}{rl} \|u\_n - p\|^2 &\le \|w\_n - p\|^2 - (1 - \mu)\|w\_n - y\_n\|^2 - (1 - \mu)\|u\_n - y\_n\|^2 \\ &\le \|x\_n - p\|^2 + \Lambda\_n - (1 - \mu)\|w\_n - y\_n\|^2 - (1 - \mu)\|u\_n - y\_n\|^2, \quad \forall p \in \Omega. \end{array} \tag{6}$$

In addition, from Algorithm 3 we have

$$z\_n - p = (1 - \alpha\_n)(u\_n - p) + \alpha\_n(f - I)p + \alpha\_n(f(x\_n) - f(p)).$$

Since the function *h*(*t*) = *t*<sup>2</sup>, ∀*t* ∈ **R**, is convex, from Equation (6) we have

$$\begin{array}{rl} \|z\_n - p\|^2 \le & [\alpha\_n\delta\|x\_n - p\| + (1 - \alpha\_n)\|u\_n - p\|]^2 + 2\alpha\_n\langle (f - I)p, z\_n - p\rangle \\ \le & \alpha\_n\delta\|x\_n - p\|^2 + (1 - \alpha\_n)\left[\|x\_n - p\|^2 + \Lambda\_n - (1 - \mu)\|w\_n - y\_n\|^2 - (1 - \mu)\|u\_n - y\_n\|^2\right] \\ & + 2\alpha\_n\langle (f - I)p, z\_n - p\rangle \\ = & [1 - \alpha\_n(1 - \delta)]\|x\_n - p\|^2 + (1 - \alpha\_n)\Lambda\_n - (1 - \alpha\_n)(1 - \mu)\left[\|w\_n - y\_n\|^2 + \|u\_n - y\_n\|^2\right] \\ & + 2\alpha\_n\langle (f - I)p, z\_n - p\rangle. \end{array}$$

This completes the proof.

**Lemma 8.** *Assume that* {*xn*}, {*yn*}, {*zn*} *are bounded vector sequences constructed by Algorithm 3. If* ‖*T<sup>n</sup>xn* − *T*<sup>*n*+1</sup>*xn*‖ → 0, ‖*xn* − *xn*+1‖ → 0, ‖*wn* − *xn*‖ → 0, ‖*wn* − *zn*‖ → 0 *and there exists* {*wnk*} ⊂ {*wn*} *such that wnk* ⇀ *z* ∈ *H, then z* ∈ Ω*.*

**Proof.** In terms of Algorithm 3, we deduce *wn* − *xn* = *T<sup>n</sup>xn* − *xn* + *σn*(*xn* − *xn*−1), ∀*n* ≥ 1, and hence ‖*T<sup>n</sup>xn* − *xn*‖ ≤ ‖*wn* − *xn*‖ + *σn*‖*xn* − *xn*−1‖ ≤ ‖*wn* − *xn*‖ + ‖*xn* − *xn*−1‖. Using the conditions ‖*xn* − *xn*+1‖ → 0 and ‖*wn* − *xn*‖ → 0, we get

$$\lim\_{n \to \infty} \|T^n \mathbf{x}\_n - \mathbf{x}\_n\| = 0. \tag{7}$$

Combining the assumptions *wn* − *xn* → 0 and *wn* − *zn* → 0 yields

$$\|z\_n - x\_n\| \le \|w\_n - z\_n\| + \|w\_n - x\_n\| \to 0 \quad (n \to \infty).$$

Then, from Equation (4) it follows that

$$\begin{array}{rl} & (1 - \alpha\_n)(1 - \mu)\left[\|w\_n - y\_n\|^2 + \|u\_n - y\_n\|^2\right] \\ \le & [1 - \alpha\_n(1 - \delta)]\|x\_n - p\|^2 + (1 - \alpha\_n)\Lambda\_n - \|z\_n - p\|^2 + 2\alpha\_n\langle (f - I)p, z\_n - p\rangle \\ \le & \|x\_n - p\|^2 - \|z\_n - p\|^2 + \Lambda\_n + 2\alpha\_n\|(f - I)p\|\|z\_n - p\| \\ \le & \|x\_n - z\_n\|(\|x\_n - p\| + \|z\_n - p\|) + \Lambda\_n + 2\alpha\_n\|(f - I)p\|\|z\_n - p\|, \end{array}$$

where Λ*<sup>n</sup>* := *θn*(2 + *θn*)‖*xn* − *p*‖<sup>2</sup> + *σn*‖*xn* − *xn*−1‖[2(1 + *θn*)‖*xn* − *p*‖ + *σn*‖*xn* − *xn*−1‖]. Since *α<sup>n</sup>* → 0, Λ*<sup>n</sup>* → 0 and ‖*xn* − *zn*‖ → 0, from the boundedness of {*xn*}, {*zn*} we get

$$\lim\_{n \to \infty} ||w\_n - y\_n|| = 0 \quad \text{and} \quad \lim\_{n \to \infty} ||u\_n - y\_n|| = 0.$$

Thus as *n* → ∞,

$$\|w\_n - u\_n\| \le \|w\_n - y\_n\| + \|y\_n - u\_n\| \to 0 \quad \text{and} \quad \|x\_n - u\_n\| \le \|x\_n - w\_n\| + \|w\_n - u\_n\| \to 0.$$

Furthermore, using Algorithm 3 we have *xn*+<sup>1</sup> − *zn* = *γn*(*un* − *zn*) + *δn*(*Szn* − *zn*) + *βn*(*T<sup>n</sup>xn* − *zn*), which hence implies

$$\begin{array}{rl} \delta\_n\|S z\_n - z\_n\| &= \|x\_{n+1} - z\_n - \beta\_n(T^n x\_n - z\_n) - \gamma\_n(u\_n - z\_n)\| \\ &= \|x\_{n+1} - x\_n + \delta\_n(x\_n - z\_n) - \gamma\_n(u\_n - x\_n) - \beta\_n(T^n x\_n - x\_n)\| \\ &\le \|x\_{n+1} - x\_n\| + \|x\_n - z\_n\| + \|u\_n - x\_n\| + \|T^n x\_n - x\_n\|. \end{array}$$

Note that ‖*xn* − *xn*+1‖ → 0, ‖*zn* − *xn*‖ → 0, ‖*xn* − *un*‖ → 0, ‖*xn* − *T<sup>n</sup>xn*‖ → 0 and lim inf<sub>*n*→∞</sub> *δ<sup>n</sup>* > 0. So we obtain

$$\lim\_{n \to \infty} \|z\_n - Sz\_n\| = 0. \tag{8}$$

Noticing *yn* = *PC*(*I* − *τnA*)*wn*, we have ⟨*x* − *yn*, *wn* − *τnAwn* − *yn*⟩ ≤ 0, ∀*x* ∈ *C*, and hence

$$\langle w\_n - y\_n, x - y\_n \rangle + \tau\_n \langle A w\_n, y\_n - w\_n \rangle \le \tau\_n \langle A w\_n, x - w\_n \rangle, \quad \forall x \in C. \tag{9}$$

Since *A* is Lipschitzian, we infer from the boundedness of {*wnk*} that {*Awnk*} is bounded. From ‖*wn* − *yn*‖ → 0, we get the boundedness of {*ynk*}. Taking into account *τ<sup>n</sup>* ≥ min{*γ*, *μl*/*L*}, from Equation (9) we have lim inf<sub>*k*→∞</sub> ⟨*Awnk*, *x* − *wnk*⟩ ≥ 0, ∀*x* ∈ *C*. Moreover, note that ⟨*Ayn*, *x* − *yn*⟩ = ⟨*Ayn* − *Awn*, *x* − *wn*⟩ + ⟨*Awn*, *x* − *wn*⟩ + ⟨*Ayn*, *wn* − *yn*⟩. Since *A* is *L*-Lipschitzian, from ‖*wn* − *yn*‖ → 0 we get ‖*Awn* − *Ayn*‖ → 0. According to Equation (9), we have lim inf<sub>*k*→∞</sub> ⟨*Aynk*, *x* − *ynk*⟩ ≥ 0, ∀*x* ∈ *C*.

We now claim that ‖*xn* − *Txn*‖ → 0. Indeed, note that

$$\begin{array}{rl} \|T x\_n - x\_n\| &\le \|T x\_n - T^{n+1} x\_n\| + \|T^{n+1} x\_n - T^n x\_n\| + \|T^n x\_n - x\_n\| \\ &\le (2 + \theta\_1)\|x\_n - T^n x\_n\| + \|T^{n+1} x\_n - T^n x\_n\|. \end{array}$$

Hence from Equation (7) and the assumption ‖*T<sup>n</sup>xn* − *T*<sup>*n*+1</sup>*xn*‖ → 0 we get

$$\lim\_{n \to \infty} \|x\_n - T x\_n\| = 0. \tag{10}$$

We now choose a sequence {*εk*} ⊂ (0, 1) such that *ε<sup>k</sup>* ↓ 0 as *k* → ∞. For each *k* ≥ 1, we denote by *mk* the smallest natural number satisfying

$$\langle A y\_{n\_j}, x - y\_{n\_j} \rangle + \varepsilon\_k \ge 0, \quad \forall j \ge m\_k.$$

From the decreasing property of {*εk*}, it is easy to see that {*mk*} is increasing. Considering that {*ymk*} ⊂ *C* implies *Aymk* ≠ 0, ∀*k* ≥ 1, we put

$$\mu\_{m\_k} = \frac{A y\_{m\_k}}{||A y\_{m\_k}||^2}.$$

So we have ⟨*Aymk*, *μmk*⟩ = 1, ∀*k* ≥ 1. Thus, from Equation (9), we have ⟨*x* + *εkμmk* − *ymk*, *Aymk*⟩ ≥ 0, ∀*k* ≥ 1. Also, since *A* is pseudomonotone, we get

$$\langle A(x + \varepsilon\_k \mu\_{m\_k}), x + \varepsilon\_k \mu\_{m\_k} - y\_{m\_k} \rangle \ge 0, \quad \forall k \ge 1.$$

Consequently,

$$\langle x - y\_{m\_k}, A x \rangle \ge \langle x + \varepsilon\_k \mu\_{m\_k} - y\_{m\_k}, A x - A(x + \varepsilon\_k \mu\_{m\_k}) \rangle - \varepsilon\_k \langle \mu\_{m\_k}, A x \rangle, \quad \forall k \ge 1. \tag{11}$$

We show $\lim_{k\to\infty}\varepsilon_k\mu_{m_k} = 0$. In fact, since $w_{n_k} \rightharpoonup z$ and $w_n - y_n \to 0$, we get $y_{n_k} \rightharpoonup z$. So, $\{y_n\} \subset C$ guarantees $z \in C$. Also, since $A$ is sequentially weakly continuous on $C$, we deduce that $Ay_{n_k} \rightharpoonup Az$ with $Az \neq 0$. By the weak lower semicontinuity of the norm, it follows that $0 < \|Az\| \le \liminf_{k\to\infty}\|Ay_{n_k}\|$. Since $\{y_{m_k}\} \subset \{y_{n_k}\}$ and $\varepsilon_k \downarrow 0$ as $k \to \infty$, we obtain that

$$0 \le \limsup_{k\to\infty}\|\varepsilon_k\mu_{m_k}\| = \limsup_{k\to\infty}\frac{\varepsilon_k}{\|Ay_{m_k}\|} \le \frac{\limsup_{k\to\infty}\varepsilon_k}{\liminf_{k\to\infty}\|Ay_{m_k}\|} = 0.$$

Thus $\varepsilon_k\mu_{m_k} \to 0$.

The last step is to show $z \in \Omega$. Indeed, we have $x_{n_k} \rightharpoonup z$. From Equation (10) we also have $x_{n_k} - Tx_{n_k} \to 0$. Note that Lemma 5 yields the demiclosedness of $I - T$ at zero. Thus $z \in \operatorname{Fix}(T)$. Moreover, since $w_n - z_n \to 0$ and $w_{n_k} \rightharpoonup z$, we have $z_{n_k} \rightharpoonup z$. From Equation (8) we get $z_{n_k} - Sz_{n_k} \to 0$. By Lemma 5 we know that $I - S$ is demiclosed at zero, and hence $(I - S)z = 0$, i.e., $z \in \operatorname{Fix}(S)$. In addition, letting $k \to \infty$ in Equation (11), the right-hand side converges to zero by the Lipschitz continuity of $A$, the boundedness of $\{y_{m_k}\}$ and $\{\mu_{m_k}\}$, and the limit $\lim_{k\to\infty}\varepsilon_k\mu_{m_k} = 0$. Therefore, $\langle Ax, x - z\rangle = \liminf_{k\to\infty}\langle Ax, x - y_{m_k}\rangle \ge 0$, $\forall x \in C$. From Lemma 3 we get $z \in \operatorname{VI}(C, A)$, and hence $z \in \Omega$. This completes the proof.

**Theorem 1.** *Let $\{x_n\}$ be the sequence constructed by Algorithm 3. Suppose that $T^n x_n - T^{n+1}x_n \to 0$. Then*

$$x_n \to x^* \in \Omega \iff \begin{cases} x_n - x_{n+1} \to 0, \\ x_n - T^n x_n \to 0, \\ \sup_{n\ge1}\|(T^n - f)x_n\| < \infty, \end{cases}$$

*where $x^* \in \Omega$ is the unique solution of the HVI: $\langle(f - I)x^*, p - x^*\rangle \le 0$, $\forall p \in \Omega$.*

**Proof.** Without loss of generality, we may assume that $\{\beta_n\} \subset [a, b] \subset (0, 1)$. We claim that $P_\Omega \circ f$ is a contraction: since $P_\Omega$ is nonexpansive and $f$ is a $\delta$-contraction, $\|P_\Omega f(x) - P_\Omega f(y)\| \le \delta\|x - y\|$ for all $x, y \in H$. Banach's Contraction Principle thus ensures that $P_\Omega \circ f$ has a unique fixed point, i.e., $P_\Omega f(x^*) = x^*$. So, there exists a unique solution $x^* \in \Omega$ of the HVI

$$\langle (I - f)x^*, p - x^*\rangle \ge 0, \quad \forall p \in \Omega. \tag{12}$$
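For clarity, the fixed-point formulation is equivalent to Equation (12) via the standard characterization of the metric projection, $u = P_\Omega v \iff \langle v - u, p - u\rangle \le 0$ for all $p \in \Omega$; spelled out:

$$x^* = P_\Omega f(x^*) \iff \langle f(x^*) - x^*,\; p - x^*\rangle \le 0 \ \ \forall p \in \Omega \iff \langle (I - f)x^*,\; p - x^*\rangle \ge 0 \ \ \forall p \in \Omega.$$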

It is clear that the necessity of the theorem is valid. In fact, if $x_n \to x^* \in \Omega$, then as $n \to \infty$ we obtain $x_n - x_{n+1} \to 0$, $\|x_n - T^n x_n\| \le \|x_n - x^*\| + \|x^* - T^n x_n\| \le (2 + \theta_n)\|x_n - x^*\| \to 0$, and

$$\begin{aligned}
\sup_{n\ge1}\|T^n x_n - f(x_n)\| &\le \sup_{n\ge1}(\|T^n x_n - x^*\| + \|x^* - f(x^*)\| + \|f(x^*) - f(x_n)\|)\\
&\le \sup_{n\ge1}[(1+\theta_n)\|x_n - x^*\| + \|x^* - f(x^*)\| + \delta\|x^* - x_n\|]\\
&\le \sup_{n\ge1}[(2+\theta_n)\|x_n - x^*\| + \|x^* - f(x^*)\|] < \infty.
\end{aligned}$$

We now assume that $\lim_{n\to\infty}(\|x_n - x_{n+1}\| + \|x_n - T^n x_n\|) = 0$ and $\sup_{n\ge1}\|(T^n - f)x_n\| < \infty$, and prove the sufficiency by the following steps.

**Step 1.** We claim the boundedness of $\{x_n\}$. In fact, take a fixed $p \in \Omega$ arbitrarily. From Equation (6) we get

$$\|w_n - p\|^2 - (1-\mu)\|w_n - y_n\|^2 - (1-\mu)\|u_n - y_n\|^2 \ge \|u_n - p\|^2, \tag{13}$$

which hence yields

$$\|w_n - p\| \ge \|u_n - p\|, \quad \forall n \ge 1. \tag{14}$$

By the definition of $w_n$, we have

$$\begin{aligned}
\|w_n - p\| &\le (1+\theta_n)\|x_n - p\| + \sigma_n\|x_n - x_{n-1}\|\\
&= (1+\theta_n)\|x_n - p\| + \alpha_n \cdot \frac{\sigma_n}{\alpha_n}\|x_n - x_{n-1}\|.
\end{aligned} \tag{15}$$

From $\sup_{n\ge1}\frac{\sigma_n}{\alpha_n} < \infty$ and $\sup_{n\ge1}\|x_n - x_{n-1}\| < \infty$, we deduce that $\sup_{n\ge1}\frac{\sigma_n}{\alpha_n}\|x_n - x_{n-1}\| < \infty$, which immediately implies that $\exists M_1 > 0$ s.t.

$$M_1 \ge \frac{\sigma_n}{\alpha_n}\|x_n - x_{n-1}\|, \quad \forall n \ge 1. \tag{16}$$

From Equations (14)–(16), we obtain

$$\|u_n - p\| \le \|w_n - p\| \le (1+\theta_n)\|x_n - p\| + \alpha_n M_1, \quad \forall n \ge 1. \tag{17}$$

Note that $A(C)$ is bounded, $y_n = P_C(w_n - \tau_n Aw_n)$, $f(H) \subset C \subset C_n$ and $u_n = P_{C_n}(w_n - \tau_n Ay_n)$. Hence, we know that $\{Ay_n\}$ is a bounded sequence. So, from $\sup_{n\ge1}\|(T^n - f)x_n\| < \infty$, it follows that

$$\begin{aligned}
\|u_n - f(x_n)\| &= \|P_{C_n}(w_n - \tau_n Ay_n) - P_{C_n}f(x_n)\| \le \|w_n - \tau_n Ay_n - f(x_n)\|\\
&\le \|w_n - T^n x_n\| + \|T^n x_n - f(x_n)\| + \tau_n\|Ay_n\|\\
&\le \|x_n - x_{n-1}\| + \|(T^n - f)x_n\| + \gamma\|Ay_n\| \le M_0,
\end{aligned}$$

where $\sup_{n\ge1}(\|x_n - x_{n-1}\| + \|(T^n - f)x_n\| + \gamma\|Ay_n\|) \le M_0$ for some $M_0 > 0$. Taking into account $\lim_{n\to\infty}\frac{\theta_n(2+\theta_n)}{\alpha_n(1-\beta_n)} = 0$, we know that $\exists n_0 \ge 1$ such that

$$\theta_n(2+\theta_n) \le \frac{\alpha_n(1-\beta_n)(1-\delta)}{2}\left(\le \frac{\alpha_n(1-\delta)}{2}\right), \quad \forall n \ge n_0.$$

So, from Algorithm 3 and Equation (17) it follows that for all $n \ge n_0$,

$$\begin{aligned}
\|z_n - p\| &\le \alpha_n\delta\|x_n - p\| + (1-\alpha_n)\|u_n - p\| + \alpha_n\|(f-I)p\|\\
&\le [1 - \alpha_n(1-\delta) + \theta_n]\|x_n - p\| + \alpha_n(M_1 + \|(f-I)p\|)\\
&\le \Big[1 - \frac{\alpha_n(1-\delta)}{2}\Big]\|x_n - p\| + \alpha_n(M_1 + \|(f-I)p\|),
\end{aligned}$$

which, together with Lemma 4 and $(\gamma_n + \delta_n)\zeta \le \gamma_n$, implies that for all $n \ge n_0$,

$$\begin{aligned}
\|x_{n+1} - p\| &= \|\beta_n(T^n x_n - p) + \gamma_n(z_n - p) + \delta_n(Sz_n - p) + \gamma_n(u_n - z_n)\|\\
&\le \beta_n(1+\theta_n)\|x_n - p\| + (1-\beta_n)\|z_n - p\| + \gamma_n\alpha_n\|u_n - f(x_n)\|\\
&\le \beta_n(1+\theta_n)\|x_n - p\| + (1-\beta_n)\Big[\Big(1 - \frac{\alpha_n(1-\delta)}{2}\Big)\|x_n - p\| + \alpha_n(M_0 + M_1 + \|(f-I)p\|)\Big]\\
&\le \Big[1 - \frac{\alpha_n(1-\beta_n)(1-\delta)}{2} + \beta_n\frac{\alpha_n(1-\beta_n)(1-\delta)}{2}\Big]\|x_n - p\| + \alpha_n(1-\beta_n)(M_0 + M_1 + \|(f-I)p\|)\\
&= \Big[1 - \frac{\alpha_n(1-\beta_n)^2(1-\delta)}{2}\Big]\|x_n - p\| + \frac{\alpha_n(1-\beta_n)^2(1-\delta)}{2}\cdot\frac{2(M_0 + M_1 + \|(f-I)p\|)}{(1-\delta)(1-\beta_n)}.
\end{aligned}$$

By induction, we obtain $\|x_n - p\| \le \max\big\{\|x_{n_0} - p\|, \frac{2(M_0 + M_1 + \|(f-I)p\|)}{(1-\delta)(1-b)}\big\}$, $\forall n \ge n_0$. Therefore, we derive the boundedness of $\{x_n\}$ and hence that of the sequences $\{u_n\}$, $\{w_n\}$, $\{y_n\}$, $\{z_n\}$, $\{f(x_n)\}$, $\{Sz_n\}$, $\{T^n x_n\}$.
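The induction here is the standard boundedness argument for recursions of the form $a_{n+1} \le (1 - s_n)a_n + s_n c$ with $s_n \in (0, 1]$: the bound $\max\{a_{n_0}, c\}$ is invariant under each step. A quick numeric sketch (with illustrative, hypothetical values for $s_n$ and $c$) checks this invariant:

```python
import random

random.seed(0)
c = 3.0          # plays the role of 2(M0 + M1 + ||(f - I)p||) / ((1 - delta)(1 - b))
a = 10.0         # plays the role of ||x_{n_0} - p||
bound = max(a, c)

for _ in range(1000):
    s = random.uniform(0.01, 1.0)        # s_n in (0, 1]
    a = (1 - s) * a + s * c              # a_{n+1} <= (1 - s_n) a_n + s_n c
    assert a <= bound + 1e-12            # the max-bound is never exceeded
```

Each step is a convex combination of $a_n$ and $c$, so $a_{n+1}$ cannot exceed $\max\{a_n, c\}$; iterating gives the claimed bound.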

**Step 2.** We claim that $\exists M_4 > 0$ s.t.

$$(1-\alpha_n)(1-\beta_n)(1-\mu)[\|w_n - y_n\|^2 + \|u_n - y_n\|^2] \le \|x_n - p\|^2 - \|x_{n+1} - p\|^2 + \alpha_n M_4, \quad \forall n \ge n_0.$$

In fact, using Lemmas 4 and 7 and the convexity of $\|\cdot\|^2$, we get

$$\begin{aligned}
\|x_{n+1} - p\|^2 &= \|\beta_n(T^n x_n - p) + \gamma_n(z_n - p) + \delta_n(Sz_n - p) + \gamma_n(u_n - z_n)\|^2\\
&\le \beta_n\|T^n x_n - p\|^2 + (1-\beta_n)\Big\|\frac{1}{1-\beta_n}[\gamma_n(z_n - p) + \delta_n(Sz_n - p)]\Big\|^2\\
&\quad + 2(1-\beta_n)\alpha_n\|u_n - f(x_n)\|\|x_{n+1} - p\|\\
&\le \beta_n\|T^n x_n - p\|^2 + (1-\beta_n)\{[1 - \alpha_n(1-\delta)]\|x_n - p\|^2 + (1-\alpha_n)\Lambda_n\\
&\quad - (1-\alpha_n)(1-\mu)[\|w_n - y_n\|^2 + \|u_n - y_n\|^2] + 2\alpha_n\langle(f-I)p, z_n - p\rangle\}\\
&\quad + 2(1-\beta_n)\alpha_n\|u_n - f(x_n)\|\|x_{n+1} - p\|\\
&\le \beta_n\|T^n x_n - p\|^2 + (1-\beta_n)\{[1 - \alpha_n(1-\delta)]\|x_n - p\|^2 + (1-\alpha_n)\Lambda_n\\
&\quad - (1-\alpha_n)(1-\mu)[\|w_n - y_n\|^2 + \|u_n - y_n\|^2] + \alpha_n M_2\},
\end{aligned} \tag{18}$$

where

$$\Lambda_n := \theta_n(2+\theta_n)\|x_n - p\|^2 + \sigma_n\|x_n - x_{n-1}\|[2(1+\theta_n)\|x_n - p\| + \sigma_n\|x_n - x_{n-1}\|],$$

and

$$\sup_{n\ge1}2(\|(f-I)p\|\|z_n - p\| + \|u_n - f(x_n)\|\|x_{n+1} - p\|) \le M_2$$

for some $M_2 > 0$. Also, from Equation (16) we have

$$\begin{aligned}
\Lambda_n &= \theta_n(2+\theta_n)\|x_n - p\|^2 + \sigma_n\|x_n - x_{n-1}\|[2(1+\theta_n)\|x_n - p\| + \sigma_n\|x_n - x_{n-1}\|]\\
&\le \theta_n(2+\theta_n)\|x_n - p\|^2 + \alpha_n M_1[2(1+\theta_n)\|x_n - p\| + \alpha_n M_1]\\
&= \alpha_n\Big\{\frac{\theta_n}{\alpha_n}(2+\theta_n)\|x_n - p\|^2 + M_1[2(1+\theta_n)\|x_n - p\| + \alpha_n M_1]\Big\} \le \alpha_n M_3,
\end{aligned} \tag{19}$$

where

$$\sup_{n\ge1}\Big\{\frac{\theta_n}{\alpha_n}(2+\theta_n)\|x_n - p\|^2 + M_1[2(1+\theta_n)\|x_n - p\| + \alpha_n M_1]\Big\} \le M_3$$

for some $M_3 > 0$. Note that

$$\theta_n(2+\theta_n) \le \frac{\alpha_n(1-\beta_n)(1-\delta)}{2}, \quad \forall n \ge n_0.$$

Substituting Equation (19) into Equation (18), we obtain that for all $n \ge n_0$,

$$\begin{aligned}
\|x_{n+1} - p\|^2 &\le \beta_n(1+\theta_n)^2\|x_n - p\|^2 + (1-\beta_n)\{[1 - \alpha_n(1-\delta)]\|x_n - p\|^2 + (1-\alpha_n)\alpha_n M_3\\
&\quad - (1-\alpha_n)(1-\mu)[\|w_n - y_n\|^2 + \|u_n - y_n\|^2] + \alpha_n M_2\}\\
&\le \Big[1 - \frac{\alpha_n(1-\beta_n)(1-\delta)}{2}\Big]\|x_n - p\|^2 + \alpha_n M_3\\
&\quad - (1-\alpha_n)(1-\beta_n)(1-\mu)[\|w_n - y_n\|^2 + \|u_n - y_n\|^2] + \alpha_n M_2\\
&\le \|x_n - p\|^2 - (1-\alpha_n)(1-\beta_n)(1-\mu)[\|w_n - y_n\|^2 + \|u_n - y_n\|^2] + \alpha_n M_4,
\end{aligned}$$

where $M_4 := M_2 + M_3$. This immediately implies that for all $n \ge n_0$,

$$(1-\alpha_n)(1-\beta_n)(1-\mu)[\|w_n - y_n\|^2 + \|u_n - y_n\|^2] \le \|x_n - p\|^2 - \|x_{n+1} - p\|^2 + \alpha_n M_4. \tag{20}$$

**Step 3.** We claim that $\exists M > 0$ s.t.

$$\begin{aligned}
\|x_{n+1} - p\|^2 &\le \Big[1 - \frac{(1-2\delta)\delta_n - \gamma_n}{1 - \alpha_n\gamma_n}\alpha_n\Big]\|x_n - p\|^2 + \frac{[(1-2\delta)\delta_n - \gamma_n]\alpha_n}{1 - \alpha_n\gamma_n}\cdot\Big\{\frac{2\gamma_n}{(1-2\delta)\delta_n - \gamma_n}\|f(x_n) - p\|\|z_n - x_{n+1}\|\\
&\quad + \frac{2\delta_n}{(1-2\delta)\delta_n - \gamma_n}\|f(x_n) - p\|\|z_n - x_n\| + \frac{2\delta_n}{(1-2\delta)\delta_n - \gamma_n}\langle f(p) - p, x_n - p\rangle\\
&\quad + \frac{\gamma_n + \delta_n}{(1-2\delta)\delta_n - \gamma_n}\Big(\frac{\theta_n}{\alpha_n}\cdot\frac{2M^2}{1-b} + \frac{\sigma_n}{\alpha_n}\|x_n - x_{n-1}\|\,3M\Big)\Big\}.
\end{aligned}$$

In fact, we get

$$\begin{aligned}
\|w_n - p\|^2 &\le [(1+\theta_n)\|x_n - p\| + \sigma_n\|x_n - x_{n-1}\|]^2\\
&= \|x_n - p\|^2 + \theta_n(2+\theta_n)\|x_n - p\|^2 + \sigma_n\|x_n - x_{n-1}\|[2(1+\theta_n)\|x_n - p\| + \sigma_n\|x_n - x_{n-1}\|]\\
&\le \|x_n - p\|^2 + \theta_n 2M^2 + \sigma_n\|x_n - x_{n-1}\|\,3M,
\end{aligned} \tag{21}$$

where $M > 0$ is a constant such that $M \ge \sup_{n\ge1}\max\{(1+\theta_n)\|x_n - p\|,\ \sigma_n\|x_n - x_{n-1}\|\}$. From Algorithm 3 and the convexity of $\|\cdot\|^2$, we have

$$\begin{aligned}
\|x_{n+1} - p\|^2 &= \|\beta_n(T^n x_n - p) + \gamma_n(z_n - p) + \delta_n(Sz_n - p) + \gamma_n(u_n - z_n)\|^2\\
&\le \|\beta_n(T^n x_n - p) + \gamma_n(z_n - p) + \delta_n(Sz_n - p)\|^2 + 2\gamma_n\alpha_n\langle u_n - f(x_n), x_{n+1} - p\rangle\\
&\le \beta_n\|T^n x_n - p\|^2 + (1-\beta_n)\Big\|\frac{1}{1-\beta_n}[\gamma_n(z_n - p) + \delta_n(Sz_n - p)]\Big\|^2\\
&\quad + 2\gamma_n\alpha_n\langle u_n - p, x_{n+1} - p\rangle + 2\gamma_n\alpha_n\langle p - f(x_n), x_{n+1} - p\rangle,
\end{aligned}$$

which, together with Lemma 4, leads to

$$\begin{aligned}
\|x_{n+1} - p\|^2 &\le \beta_n(1+\theta_n)^2\|x_n - p\|^2 + (1-\beta_n)\|z_n - p\|^2 + 2\gamma_n\alpha_n\|u_n - p\|\|x_{n+1} - p\|\\
&\quad + 2\gamma_n\alpha_n\langle p - f(x_n), x_{n+1} - p\rangle\\
&\le \beta_n(1+\theta_n)^2\|x_n - p\|^2 + (1-\beta_n)[(1-\alpha_n)\|u_n - p\|^2 + 2\alpha_n\langle f(x_n) - p, z_n - p\rangle]\\
&\quad + \gamma_n\alpha_n(\|u_n - p\|^2 + \|x_{n+1} - p\|^2) + 2\gamma_n\alpha_n\langle p - f(x_n), x_{n+1} - p\rangle.
\end{aligned}$$

From Equations (17) and (21) we know that

$$\|u_n - p\|^2 \le \|x_n - p\|^2 + \theta_n 2M^2 + \sigma_n\|x_n - x_{n-1}\|\,3M.$$

Hence, we have

$$\begin{aligned}
\|x_{n+1} - p\|^2 &\le [1 - \alpha_n(1-\beta_n)]\|x_n - p\|^2 + \beta_n\theta_n 2M^2 + (1-\beta_n)(1-\alpha_n)(\theta_n 2M^2 + \sigma_n\|x_n - x_{n-1}\|\,3M)\\
&\quad + 2\alpha_n\delta_n\langle f(x_n) - p, z_n - p\rangle + \gamma_n\alpha_n(\|x_n - p\|^2 + \|x_{n+1} - p\|^2)\\
&\quad + (1-\beta_n)\alpha_n(\theta_n 2M^2 + \sigma_n\|x_n - x_{n-1}\|\,3M) + 2\gamma_n\alpha_n\langle f(x_n) - p, z_n - x_{n+1}\rangle\\
&\le [1 - \alpha_n(1-\beta_n)]\|x_n - p\|^2 + 2\gamma_n\alpha_n\|f(x_n) - p\|\|z_n - x_{n+1}\| + 2\alpha_n\delta_n\delta\|x_n - p\|^2\\
&\quad + 2\alpha_n\delta_n\langle f(p) - p, x_n - p\rangle + 2\alpha_n\delta_n\|f(x_n) - p\|\|z_n - x_n\|\\
&\quad + \gamma_n\alpha_n(\|x_n - p\|^2 + \|x_{n+1} - p\|^2) + (1-\beta_n)\Big(\frac{\theta_n 2M^2}{1-\beta_n} + \sigma_n\|x_n - x_{n-1}\|\,3M\Big),
\end{aligned}$$

which immediately yields

$$\begin{aligned}
\|x_{n+1} - p\|^2 &\le \Big[1 - \frac{(1-2\delta)\delta_n - \gamma_n}{1 - \alpha_n\gamma_n}\alpha_n\Big]\|x_n - p\|^2 + \frac{[(1-2\delta)\delta_n - \gamma_n]\alpha_n}{1 - \alpha_n\gamma_n}\cdot\Big\{\frac{2\gamma_n}{(1-2\delta)\delta_n - \gamma_n}\|f(x_n) - p\|\|z_n - x_{n+1}\|\\
&\quad + \frac{2\delta_n}{(1-2\delta)\delta_n - \gamma_n}\|f(x_n) - p\|\|z_n - x_n\| + \frac{2\delta_n}{(1-2\delta)\delta_n - \gamma_n}\langle f(p) - p, x_n - p\rangle\\
&\quad + \frac{\gamma_n + \delta_n}{(1-2\delta)\delta_n - \gamma_n}\Big(\frac{\theta_n}{\alpha_n}\cdot\frac{2M^2}{1-b} + \frac{\sigma_n}{\alpha_n}\|x_n - x_{n-1}\|\,3M\Big)\Big\}.
\end{aligned} \tag{22}$$

**Step 4.** We claim that $\{x_n\}$ converges strongly to the unique solution $x^* \in \Omega$ of the HVI Equation (12). In fact, setting $p = x^*$, from Equation (22) we know that

$$\begin{aligned}
\|x_{n+1} - x^*\|^2 &\le \Big[1 - \frac{(1-2\delta)\delta_n - \gamma_n}{1 - \alpha_n\gamma_n}\alpha_n\Big]\|x_n - x^*\|^2 + \frac{[(1-2\delta)\delta_n - \gamma_n]\alpha_n}{1 - \alpha_n\gamma_n}\cdot\Big\{\frac{2\gamma_n}{(1-2\delta)\delta_n - \gamma_n}\|f(x_n) - x^*\|\|z_n - x_{n+1}\|\\
&\quad + \frac{2\delta_n}{(1-2\delta)\delta_n - \gamma_n}\|f(x_n) - x^*\|\|z_n - x_n\| + \frac{2\delta_n}{(1-2\delta)\delta_n - \gamma_n}\langle f(x^*) - x^*, x_n - x^*\rangle\\
&\quad + \frac{\gamma_n + \delta_n}{(1-2\delta)\delta_n - \gamma_n}\Big(\frac{\theta_n}{\alpha_n}\cdot\frac{2M^2}{1-b} + \frac{\sigma_n}{\alpha_n}\|x_n - x_{n-1}\|\,3M\Big)\Big\}.
\end{aligned}$$

According to Lemma 4, it is sufficient to prove that $\limsup_{n\to\infty}\langle(f-I)x^*, x_n - x^*\rangle \le 0$. Since $x_n - x_{n+1} \to 0$, $\alpha_n \to 0$ and $\{\beta_n\} \subset [a,b] \subset (0,1)$, from Equation (20) we get

$$\begin{aligned}
&\limsup_{n\to\infty}\,(1-\alpha_n)(1-b)(1-\mu)[\|w_n - y_n\|^2 + \|u_n - y_n\|^2]\\
&\le \limsup_{n\to\infty}\,[\|x_n - p\|^2 - \|x_{n+1} - p\|^2 + \alpha_n M_4]\\
&\le \limsup_{n\to\infty}\,(\|x_n - p\| + \|x_{n+1} - p\|)\|x_n - x_{n+1}\| = 0,
\end{aligned}$$

which hence leads to

$$\lim_{n\to\infty}\|w_n - y_n\| = \lim_{n\to\infty}\|u_n - y_n\| = 0. \tag{23}$$

Obviously, the assumptions $x_n - x_{n+1} \to 0$ and $x_n - T^n x_n \to 0$ guarantee that $\|w_n - x_n\| \le \|T^n x_n - x_n\| + \|x_n - x_{n-1}\| \to 0$ $(n\to\infty)$. Thus,

$$\|x_n - y_n\| \le \|x_n - w_n\| + \|w_n - y_n\| \to 0, \quad (n\to\infty).$$

Since $z_n = (1-\alpha_n)u_n + \alpha_n f(x_n)$ with $u_n := P_{C_n}(w_n - \tau_n Ay_n)$, from Equation (23) and the boundedness of $\{x_n\}$, $\{u_n\}$, we get

$$\|z_n - y_n\| \le \alpha_n(\|f(x_n)\| + \|u_n\|) + \|u_n - y_n\| \to 0, \quad (n\to\infty), \tag{24}$$

and hence

$$\|z_n - x_n\| \le \|z_n - y_n\| + \|y_n - x_n\| \to 0, \quad (n\to\infty).$$

Obviously, combining Equations (23) and (24) guarantees that

$$\|w_n - z_n\| \le \|w_n - y_n\| + \|y_n - z_n\| \to 0, \quad (n\to\infty).$$

Since $\{x_n\}$ is bounded, we know that there exists a subsequence $\{x_{n_k}\} \subset \{x_n\}$ s.t.

$$\limsup_{n\to\infty}\langle(f-I)x^*, x_n - x^*\rangle = \lim_{k\to\infty}\langle(f-I)x^*, x_{n_k} - x^*\rangle. \tag{25}$$

Next, we may suppose that $x_{n_k} \rightharpoonup \tilde{x}$. Hence from Equation (25) we get

$$\limsup_{n\to\infty}\langle(f-I)x^*, x_n - x^*\rangle = \lim_{k\to\infty}\langle(f-I)x^*, x_{n_k} - x^*\rangle = \langle(f-I)x^*, \tilde{x} - x^*\rangle. \tag{26}$$

From $w_n - x_n \to 0$ and $x_{n_k} \rightharpoonup \tilde{x}$ it follows that $w_{n_k} \rightharpoonup \tilde{x}$.

Since $T^n x_n - T^{n+1}x_n \to 0$, $x_n - x_{n+1} \to 0$, $w_n - x_n \to 0$, $w_n - z_n \to 0$ and $w_{n_k} \rightharpoonup \tilde{x}$, from Lemma 8 we conclude that $\tilde{x} \in \Omega$. Therefore, from Equations (12) and (26) we infer that

$$\limsup_{n\to\infty}\langle(f-I)x^*, x_n - x^*\rangle = \langle(f-I)x^*, \tilde{x} - x^*\rangle \le 0.$$

Note that

$$\sum_{n=0}^{\infty}\frac{(1-2\delta)\delta_n - \gamma_n}{1 - \alpha_n\gamma_n}\alpha_n = \infty.$$

It is clear that

$$\begin{aligned}
\limsup_{n\to\infty}\Big\{&\frac{2\gamma_n}{(1-2\delta)\delta_n - \gamma_n}\|f(x_n) - x^*\|\|z_n - x_{n+1}\| + \frac{2\delta_n}{(1-2\delta)\delta_n - \gamma_n}\|f(x_n) - x^*\|\|z_n - x_n\|\\
&+ \frac{2\delta_n}{(1-2\delta)\delta_n - \gamma_n}\langle f(x^*) - x^*, x_n - x^*\rangle + \frac{\gamma_n + \delta_n}{(1-2\delta)\delta_n - \gamma_n}\Big(\frac{\theta_n}{\alpha_n}\cdot\frac{2M^2}{1-b} + \frac{\sigma_n}{\alpha_n}\|x_n - x_{n-1}\|\,3M\Big)\Big\} \le 0.
\end{aligned}$$

Consequently, all conditions of Lemma 4 are satisfied, and hence we immediately deduce that $x_n \to x^*$. This completes the proof.

Next, we introduce another inertial-like subgradient extragradient algorithm (Algorithm 4) with a line-search process.

It is remarkable that Lemmas 6–8 are still valid for Algorithm 4.

**Algorithm 4:** Inertial-like subgradient extragradient algorithm (II).

**Initialization:** Given $x_0, x_1 \in H$ arbitrarily. Let $\gamma > 0$, $l \in (0,1)$, $\mu \in (0,1)$. **Iterative Steps:** Compute $x_{n+1}$ as follows:

Step 1. Put $w_n = \sigma_n(x_n - x_{n-1}) + T^n x_n$ and calculate $y_n = P_C(w_n - \tau_n Aw_n)$, where $\tau_n$ is chosen to be the largest $\tau \in \{\gamma, \gamma l, \gamma l^2, \ldots\}$ such that

$$\tau\|Aw_n - Ay_n\| \le \mu\|w_n - y_n\|.$$

Step 2. Calculate $z_n = (1-\alpha_n)P_{C_n}(w_n - \tau_n Ay_n) + \alpha_n f(x_n)$ with $C_n := \{x \in H : \langle w_n - \tau_n Aw_n - y_n, x - y_n\rangle \le 0\}$.
Step 3. Calculate

$$x_{n+1} = \gamma_n P_{C_n}(w_n - \tau_n Ay_n) + \delta_n Sz_n + \beta_n T^n w_n.$$

Again set $n := n + 1$ and return to Step 1.
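To make the iteration concrete, here is a minimal NumPy sketch of one step in the spirit of Algorithm 4, under simplifying assumptions: $C$ is the closed unit ball (so $P_C$ has a closed form), $C_n$ is a half-space (also with a closed-form projection), the parameters $\sigma_n, \alpha_n, \beta_n, \gamma_n, \delta_n$ are frozen to illustrative constants, and the operators $A, T, S, f$ are supplied by the caller. All names (`project_ball`, `project_halfspace`, `algorithm4_step`) and parameter values are ours, not the paper's:

```python
import numpy as np

def project_ball(x, radius=1.0):
    """Closed-form projection P_C onto the ball C = {x : ||x|| <= radius}."""
    nrm = np.linalg.norm(x)
    return x if nrm <= radius else x * (radius / nrm)

def project_halfspace(x, a, b):
    """Projection onto the half-space {x : <a, x> <= b}."""
    viol = a @ x - b
    return x if viol <= 0 else x - (viol / (a @ a)) * a

def algorithm4_step(x_prev, x_curr, A, T, S, f,
                    sigma=0.1, alpha=0.1, beta=0.2, gamma_n=0.5, delta_n=0.3,
                    gamma=1.0, l=0.5, mu=0.4, max_backtracks=60):
    """One inertial-like subgradient extragradient step (illustrative sketch)."""
    # Step 1: inertial point, then Armijo-type line search for tau.
    w = sigma * (x_curr - x_prev) + T(x_curr)
    tau = gamma
    for _ in range(max_backtracks):
        y = project_ball(w - tau * A(w))
        if tau * np.linalg.norm(A(w) - A(y)) <= mu * np.linalg.norm(w - y):
            break
        tau *= l  # tau runs over {gamma, gamma*l, gamma*l^2, ...}
    # Step 2: u = P_{C_n}(w - tau A y) with C_n = {x : <w - tau A(w) - y, x - y> <= 0}.
    a = w - tau * A(w) - y
    v = w - tau * A(y)
    u = project_halfspace(v, a, a @ y) if a @ a > 0 else v
    z = (1 - alpha) * u + alpha * f(x_curr)
    # Step 3: convex combination (here gamma_n + delta_n + beta = 1).
    return gamma_n * u + delta_n * S(z) + beta * T(w)
```

With $A = T = S = \mathrm{Id}$ and $f(x) = \tfrac{1}{2}x$ on the unit ball, every operation above pulls the iterate toward the common solution $0$, which gives a quick sanity check of the sketch.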

**Theorem 2.** *Let $\{x_n\}$ be the sequence constructed by Algorithm 4. Suppose that $T^n x_n - T^{n+1}x_n \to 0$. Then*

$$x_n \to x^* \in \Omega \iff \begin{cases} x_n - x_{n+1} \to 0, \\ x_n - T^n x_n \to 0, \\ \sup_{n\ge1}\|(T^n - f)x_n\| < \infty, \end{cases}$$

*where $x^* \in \Omega$ is the unique solution of the HVI: $\langle(I - f)x^*, p - x^*\rangle \ge 0$, $\forall p \in \Omega$.*

**Proof.** Using the same reasoning as in the proof of Theorem 1, we know that there is a unique solution $x^* \in \Omega$ of Equation (12), and that the necessity of the theorem is true.

We now prove the sufficiency. To this end, we suppose that $\lim_{n\to\infty}(\|x_n - x_{n+1}\| + \|x_n - T^n x_n\|) = 0$ and $\sup_{n\ge1}\|(T^n - f)x_n\| < \infty$, and proceed by the following steps.

**Step 1.** We claim the boundedness of $\{x_n\}$. In fact, using the same reasoning as in Step 1 of the proof of Theorem 1, we obtain that the inequalities in Equations (13)–(17) hold. Noticing $\lim_{n\to\infty}\frac{\theta_n(2+\theta_n)}{\alpha_n(1-\beta_n)} = 0$, we infer that $\exists n_0 \ge 1$ s.t.

$$\theta_n(2+\theta_n) \le \frac{\alpha_n(1-\beta_n)(1-\delta)}{2}\left(\le \frac{\alpha_n(1-\delta)}{2}\right), \quad \forall n \ge n_0.$$

So, from Algorithm 4 and Equation (17) it follows that for all $n \ge n_0$,

$$\begin{aligned}
\|z_n - p\| &\le \alpha_n\delta\|x_n - p\| + (1-\alpha_n)[(1+\theta_n)\|x_n - p\| + \alpha_n M_1] + \alpha_n\|(f-I)p\|\\
&\le \Big[1 - \frac{\alpha_n(1-\delta)}{2}\Big]\|x_n - p\| + \alpha_n(M_1 + \|(f-I)p\|),
\end{aligned}$$

which, together with Lemma 4 and $(\gamma_n + \delta_n)\zeta \le \gamma_n$, implies that for all $n \ge n_0$,

$$\begin{aligned}
\|x_{n+1} - p\| &= \|\beta_n(T^n w_n - p) + \gamma_n(z_n - p) + \delta_n(Sz_n - p) + \gamma_n(u_n - z_n)\|\\
&\le \beta_n(1+\theta_n)\|w_n - p\| + (1-\beta_n)\|z_n - p\| + \gamma_n\alpha_n\|u_n - f(x_n)\|\\
&\le \Big[1 - \frac{\alpha_n(1-\beta_n)(1-\delta)}{2} + \beta_n\theta_n(2+\theta_n)\Big]\|x_n - p\| + \beta_n(1+\theta_n)\alpha_n M_1 + \alpha_n(1-\beta_n)(M_0 + M_1 + \|(f-I)p\|)\\
&\le \Big[1 - \frac{\alpha_n(1-\beta_n)(1-\delta)}{2} + \beta_n\frac{\alpha_n(1-\beta_n)(1-\delta)}{2}\Big]\|x_n - p\| + \alpha_n(1-\beta_n)\Big(M_0 + M_1\frac{1+\theta_n}{1-\beta_n} + \|(f-I)p\|\Big)\\
&= \Big[1 - \frac{\alpha_n(1-\beta_n)^2(1-\delta)}{2}\Big]\|x_n - p\| + \frac{\alpha_n(1-\beta_n)^2(1-\delta)}{2}\cdot\frac{2\big(M_0 + M_1\frac{1+\theta_n}{1-\beta_n} + \|(f-I)p\|\big)}{(1-\delta)(1-\beta_n)}.
\end{aligned}$$

Hence,

$$\|x_n - p\| \le \max\Big\{\|x_{n_0} - p\|, \frac{2\big(M_0 + M_1\frac{2}{1-b} + \|(f-I)p\|\big)}{(1-\delta)(1-b)}\Big\}, \quad \forall n \ge n_0.$$

Thus, the sequence $\{x_n\}$ is bounded.

**Step 2.** We claim that for all $n \ge n_0$,

$$\|x_n - p\|^2 - \|x_{n+1} - p\|^2 + \alpha_n M_4 \ge (1-\alpha_n)(1-\beta_n)(1-\mu)[\|w_n - y_n\|^2 + \|u_n - y_n\|^2],$$

with constant $M_4 > 0$. Indeed, utilizing Lemmas 4 and 7 and the convexity of $\|\cdot\|^2$, one reaches

$$\begin{aligned}
\|x_{n+1} - p\|^2 &= \|\beta_n(T^n w_n - p) + \gamma_n(z_n - p) + \delta_n(Sz_n - p) + \gamma_n(u_n - z_n)\|^2\\
&\le \beta_n\|T^n w_n - p\|^2 + (1-\beta_n)\Big\|\frac{1}{1-\beta_n}[\gamma_n(z_n - p) + \delta_n(Sz_n - p)]\Big\|^2 + 2(1-\beta_n)\alpha_n\|u_n - f(x_n)\|\|x_{n+1} - p\|\\
&\le \beta_n(1+\theta_n)^2\|w_n - p\|^2 + (1-\beta_n)\{[1 - \alpha_n(1-\delta)]\|x_n - p\|^2 + (1-\alpha_n)\Lambda_n\\
&\quad - (1-\alpha_n)(1-\mu)[\|w_n - y_n\|^2 + \|u_n - y_n\|^2] + 2\alpha_n\langle(f-I)p, z_n - p\rangle\}\\
&\quad + 2(1-\beta_n)\alpha_n\|u_n - f(x_n)\|\|x_{n+1} - p\|\\
&\le \beta_n(1+\theta_n)^2(\|x_n - p\|^2 + \Lambda_n) + (1-\beta_n)\{[1 - \alpha_n(1-\delta)]\|x_n - p\|^2 + (1-\alpha_n)\Lambda_n\\
&\quad - (1-\alpha_n)(1-\mu)[\|w_n - y_n\|^2 + \|u_n - y_n\|^2] + \alpha_n M_2\},
\end{aligned} \tag{27}$$

where Λ_n := θ_n(2 + θ_n)‖x_n − p‖² + σ_n‖x_n − x_{n−1}‖[2(1 + θ_n)‖x_n − p‖ + σ_n‖x_n − x_{n−1}‖], and sup_{n≥1} 2(‖(f − I)p‖‖z_n − p‖ + ‖u_n − f(x_n)‖‖x_{n+1} − p‖) ≤ M₂ for some M₂ > 0. Also, from Equation (16) we have

$$
\begin{aligned}
\Lambda_{n}&=\theta_{n}(2+\theta_{n})\|x_{n}-p\|^{2}+\sigma_{n}\|x_{n}-x_{n-1}\|\,[2(1+\theta_{n})\|x_{n}-p\|+\sigma_{n}\|x_{n}-x_{n-1}\|]\\
&\le \alpha_{n}\Big\{\frac{\theta_{n}}{\alpha_{n}}(2+\theta_{n})\|x_{n}-p\|^{2}+M_{1}[2(1+\theta_{n})\|x_{n}-p\|+\alpha_{n}M_{1}]\Big\}\le \alpha_{n}M_{3},
\end{aligned}\tag{28}
$$

where sup_{n≥1}{(θ_n/α_n)(2 + θ_n)‖x_n − p‖² + M₁[2(1 + θ_n)‖x_n − p‖ + α_n M₁]} ≤ M₃ for some M₃ > 0. Note that θ_n(2 + θ_n) ≤ α_n(1 − β_n)(1 − δ)/2 for all n ≥ n₀. Substituting Equation (28) into Equation (27), we obtain that for all n ≥ n₀,

$$
\begin{aligned}
\|x_{n+1}-p\|^{2}&\le [1-\alpha_{n}(1-\beta_{n})(1-\delta)+\beta_{n}\theta_{n}(2+\theta_{n})]\|x_{n}-p\|^{2}+\beta_{n}(1+\theta_{n})^{2}\alpha_{n}M_{3}\\
&\quad+(1-\beta_{n})(1-\alpha_{n})\alpha_{n}M_{3}-(1-\alpha_{n})(1-\beta_{n})(1-\mu)[\|w_{n}-y_{n}\|^{2}\\
&\quad+\|u_{n}-y_{n}\|^{2}]+(1-\beta_{n})\alpha_{n}M_{2}\\
&\le \|x_{n}-p\|^{2}-(1-\alpha_{n})(1-\beta_{n})(1-\mu)[\|w_{n}-y_{n}\|^{2}+\|u_{n}-y_{n}\|^{2}]+\alpha_{n}M_{4},
\end{aligned}
$$

where M₄ := M₂ + 4M₃. This immediately implies that for all n ≥ n₀,

$$(1-\alpha_{n})(1-\beta_{n})(1-\mu)[\|w_{n}-y_{n}\|^{2}+\|u_{n}-y_{n}\|^{2}]\le\|x_{n}-p\|^{2}-\|x_{n+1}-p\|^{2}+\alpha_{n}M_{4}.$$

**Step 3.** We claim that ∃*M* > 0 s.t.

$$
\begin{aligned}
\|x_{n+1}-p\|^{2}
&\le \Big[1-\frac{(1-2\delta)\delta_{n}-\gamma_{n}}{1-\gamma_{n}\alpha_{n}}\,\alpha_{n}\Big]\|x_{n}-p\|^{2}+\frac{[(1-2\delta)\delta_{n}-\gamma_{n}]\alpha_{n}}{1-\gamma_{n}\alpha_{n}}\cdot\Big\{\frac{2\gamma_{n}}{(1-2\delta)\delta_{n}-\gamma_{n}}\|f(x_{n})-p\|\,\|z_{n}-x_{n+1}\|\\
&\quad+\frac{2\delta_{n}}{(1-2\delta)\delta_{n}-\gamma_{n}}\|f(x_{n})-p\|\,\|z_{n}-x_{n}\|+\frac{2\delta_{n}}{(1-2\delta)\delta_{n}-\gamma_{n}}\langle f(p)-p,\,x_{n}-p\rangle\\
&\quad+\frac{\gamma_{n}+\delta_{n}}{(1-2\delta)\delta_{n}-\gamma_{n}}\Big(\frac{\theta_{n}}{\alpha_{n}}\cdot\frac{2M^{2}(1+b(1+\theta_{n})^{2})}{1-b}+\frac{\sigma_{n}}{\alpha_{n}}\|x_{n}-x_{n-1}\|\,\frac{3M(1+b\theta_{n}(2+\theta_{n}))}{1-b}\Big)\Big\}.
\end{aligned}\tag{29}
$$

In fact, we get

$$\|w_{n}-p\|^{2}\le [(1+\theta_{n})\|x_{n}-p\|+\sigma_{n}\|x_{n}-x_{n-1}\|]^{2}\le\|x_{n}-p\|^{2}+\theta_{n}2M^{2}+\sigma_{n}\|x_{n}-x_{n-1}\|3M,\tag{30}$$

where ∃M > 0 s.t. sup_{n≥1}{(1 + θ_n)‖x_n − p‖, σ_n‖x_n − x_{n−1}‖} ≤ M. From Algorithm 4 and the convexity of ‖·‖², we have

$$
\begin{aligned}
\|x_{n+1}-p\|^{2}&=\|\beta_{n}(T^{n}w_{n}-p)+\gamma_{n}(z_{n}-p)+\delta_{n}(Sz_{n}-p)+\gamma_{n}(u_{n}-z_{n})\|^{2}\\
&\le \beta_{n}\|T^{n}w_{n}-p\|^{2}+(1-\beta_{n})\Big\|\frac{1}{1-\beta_{n}}[\gamma_{n}(z_{n}-p)+\delta_{n}(Sz_{n}-p)]\Big\|^{2}\\
&\quad+2\gamma_{n}\alpha_{n}\langle u_{n}-p,\,x_{n+1}-p\rangle+2\gamma_{n}\alpha_{n}\langle p-f(x_{n}),\,x_{n+1}-p\rangle,
\end{aligned}
$$

which, together with Lemma 4, leads to

$$
\begin{aligned}
\|x_{n+1}-p\|^{2}&\le \beta_{n}(1+\theta_{n})^{2}\|w_{n}-p\|^{2}+(1-\beta_{n})\|z_{n}-p\|^{2}+2\gamma_{n}\alpha_{n}\|u_{n}-p\|\,\|x_{n+1}-p\|\\
&\quad+2\gamma_{n}\alpha_{n}\langle p-f(x_{n}),\,x_{n+1}-p\rangle\\
&\le \beta_{n}(1+\theta_{n})^{2}(\|x_{n}-p\|^{2}+\theta_{n}2M^{2}+\sigma_{n}\|x_{n}-x_{n-1}\|3M)+(1-\beta_{n})[(1-\alpha_{n})\|u_{n}-p\|^{2}\\
&\quad+2\alpha_{n}\langle f(x_{n})-p,\,z_{n}-p\rangle]+\gamma_{n}\alpha_{n}(\|u_{n}-p\|^{2}+\|x_{n+1}-p\|^{2})\\
&\quad+2\gamma_{n}\alpha_{n}\langle p-f(x_{n}),\,x_{n+1}-p\rangle.
\end{aligned}
$$

By Step 3 of Algorithm 4 and Equation (30), we know that ‖u_n − p‖² ≤ ‖x_n − p‖² + θ_n 2M² + σ_n‖x_n − x_{n−1}‖3M. Hence, we have

$$
\begin{aligned}
\|x_{n+1}-p\|^{2}&\le [1-\alpha_{n}(1-\beta_{n})]\|x_{n}-p\|^{2}+\beta_{n}\theta_{n}2M^{2}+(1-\beta_{n})(1-\alpha_{n})(\theta_{n}2M^{2}+\sigma_{n}\|x_{n}-x_{n-1}\|3M)\\
&\quad+2\alpha_{n}\delta_{n}\langle f(x_{n})-p,\,z_{n}-p\rangle+\gamma_{n}\alpha_{n}(\|x_{n}-p\|^{2}+\|x_{n+1}-p\|^{2})\\
&\quad+(1-\beta_{n})\alpha_{n}(\theta_{n}2M^{2}+\sigma_{n}\|x_{n}-x_{n-1}\|3M)+2\gamma_{n}\alpha_{n}\langle f(x_{n})-p,\,z_{n}-x_{n+1}\rangle\\
&\quad+\beta_{n}(1+\theta_{n})^{2}(\theta_{n}2M^{2}+\sigma_{n}\|x_{n}-x_{n-1}\|3M)\\
&\le [1-\alpha_{n}(1-\beta_{n})]\|x_{n}-p\|^{2}+2\gamma_{n}\alpha_{n}\|f(x_{n})-p\|\,\|z_{n}-x_{n+1}\|\\
&\quad+2\alpha_{n}\delta_{n}\langle f(x_{n})-p,\,x_{n}-p\rangle+2\alpha_{n}\delta_{n}\langle f(x_{n})-p,\,z_{n}-x_{n}\rangle\\
&\quad+\gamma_{n}\alpha_{n}(\|x_{n}-p\|^{2}+\|x_{n+1}-p\|^{2})+(1-\beta_{n})\Big[\theta_{n}\frac{2M^{2}(1+\beta_{n}(1+\theta_{n})^{2})}{1-\beta_{n}}+\sigma_{n}\|x_{n}-x_{n-1}\|\frac{3M(1+\beta_{n}\theta_{n}(2+\theta_{n}))}{1-\beta_{n}}\Big]\\
&\le [1-\alpha_{n}(1-\beta_{n})]\|x_{n}-p\|^{2}+2\gamma_{n}\alpha_{n}\|f(x_{n})-p\|\,\|z_{n}-x_{n+1}\|+2\alpha_{n}\delta_{n}\delta\|x_{n}-p\|^{2}\\
&\quad+2\alpha_{n}\delta_{n}\langle f(p)-p,\,x_{n}-p\rangle+2\alpha_{n}\delta_{n}\|f(x_{n})-p\|\,\|z_{n}-x_{n}\|+\gamma_{n}\alpha_{n}(\|x_{n}-p\|^{2}+\|x_{n+1}-p\|^{2})\\
&\quad+(1-\beta_{n})\Big[\theta_{n}\frac{2M^{2}(1+b(1+\theta_{n})^{2})}{1-b}+\sigma_{n}\|x_{n}-x_{n-1}\|\frac{3M(1+b\theta_{n}(2+\theta_{n}))}{1-b}\Big],
\end{aligned}
$$

which immediately yields Equation (29).

**Step 4.** We claim the strong convergence of {*xn*} to a unique solution *x*<sup>∗</sup> ∈ Ω of HVI Equation (12). In fact, using the same reasoning as in Step 4 of the proof of Theorem 1, we derive the desired conclusion. This completes the proof.

Next, we show how to solve the VIP and CFPP in the following illustrative example. The initial point x₀ = x₁ is randomly chosen in **R** = (−∞, ∞). Take f(x) = (1/4)sin x, γ = l = μ = 1/2, σ_n = α_n = 1/(n+1), β_n = 1/3, γ_n = 1/6, and δ_n = 1/2. Then we know that δ = 1/4 and f(**R**) ⊂ [−1/4, 1/4].

We first provide an example of a Lipschitz continuous and pseudomonotone mapping A, an asymptotically nonexpansive mapping T and a strictly pseudocontractive mapping S with Ω = Fix(T) ∩ Fix(S) ∩ VI(C, A) ≠ ∅. Let C = [−1.5, 1] and H = **R** with the inner product ⟨a, b⟩ = ab and induced norm ‖·‖ = |·|. Let A, T, S : H → H be defined as Ax := 1/(1 + |sin x|) − 1/(1 + |x|), Tx := (4/5)sin x and Sx := (1/3)x + (1/2)sin x for all x ∈ H. We first show that A is pseudomonotone and Lipschitz continuous with L = 2 such that A(C) is bounded. Indeed, it is clear that A(C) is bounded. Moreover, for all x, y ∈ H we have

$$
\begin{aligned}
\|Ax-Ay\|&=\Big|\frac{1}{1+|\sin x|}-\frac{1}{1+|x|}-\frac{1}{1+|\sin y|}+\frac{1}{1+|y|}\Big|\\
&\le \Big|\frac{|\sin y|-|\sin x|}{(1+|\sin x|)(1+|\sin y|)}\Big|+\Big|\frac{|y|-|x|}{(1+|x|)(1+|y|)}\Big|\\
&\le |\sin x-\sin y|+|x-y|\le 2\|x-y\|.
\end{aligned}
$$

This implies that A is Lipschitz continuous with L = 2. Next, we show that A is pseudomonotone. For any given x, y ∈ H, the following implication holds:

$$\langle Ax,\,y-x\rangle=\Big(\frac{1}{1+|\sin x|}-\frac{1}{1+|x|}\Big)(y-x)\ge 0\ \Rightarrow\ \langle Ay,\,y-x\rangle=\Big(\frac{1}{1+|\sin y|}-\frac{1}{1+|y|}\Big)(y-x)\ge 0.$$
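Since A is given in closed form, the claimed constants can be spot-checked numerically. The following sketch (our own code, not part of the paper) verifies the Lipschitz bound L = 2 on random samples, together with the pseudomonotonicity implication in the non-degenerate case ⟨Ax, y − x⟩ > 0:

```python
import math, random

def A(x):
    # A(x) = 1/(1 + |sin x|) - 1/(1 + |x|); note A(x) >= 0 since |sin x| <= |x|
    return 1.0 / (1.0 + abs(math.sin(x))) - 1.0 / (1.0 + abs(x))

random.seed(0)
for _ in range(20000):
    x, y = random.uniform(-10, 10), random.uniform(-10, 10)
    # Lipschitz bound |Ax - Ay| <= 2|x - y|
    assert abs(A(x) - A(y)) <= 2.0 * abs(x - y) + 1e-12
    # pseudomonotonicity spot-check: <Ax, y - x> > 0  =>  <Ay, y - x> >= 0
    if A(x) * (y - x) > 0.0:
        assert A(y) * (y - x) >= -1e-12
print("checks passed")
```

Note that A(x) ≥ 0 everywhere, so whenever ⟨Ax, y − x⟩ > 0 we must have y > x, and the conclusion follows at once; the check above confirms this numerically.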

Furthermore, it is easy to see that T is asymptotically nonexpansive with θ_n = (4/5)ⁿ, ∀n ≥ 1, such that ‖T^{n+1}x_n − T^n x_n‖ → 0 as n → ∞. Indeed, we observe that

$$\|T^{n}x-T^{n}y\|\le\frac{4}{5}\|T^{n-1}x-T^{n-1}y\|\le\cdots\le\Big(\frac{4}{5}\Big)^{n}\|x-y\|\le (1+\theta_{n})\|x-y\|,$$

and

$$\|T^{n+1}x_{n}-T^{n}x_{n}\|\le\Big(\frac{4}{5}\Big)^{n-1}\|T^{2}x_{n}-Tx_{n}\|=\Big(\frac{4}{5}\Big)^{n-1}\Big\|\frac{4}{5}\sin(Tx_{n})-\frac{4}{5}\sin x_{n}\Big\|\le 2\Big(\frac{4}{5}\Big)^{n}\to 0\quad(n\to\infty).$$

It is clear that Fix(*T*) = {0} and

$$\lim_{n\to\infty}\frac{\theta_{n}}{\alpha_{n}}=\lim_{n\to\infty}\frac{(4/5)^{n}}{1/(n+1)}=0.$$

Moreover, it is readily seen that sup_{n≥1}|(T^n − f)x_n| = sup_{n≥1}|(4/5)sin(T^{n−1}x_n) − (1/4)sin x_n| ≤ 21/20 < ∞. In addition, it is clear that S is strictly pseudocontractive with constant ζ = 1/4. Indeed, we observe that for all x, y ∈ H,

$$\|Sx-Sy\|^{2}\le\Big[\frac{1}{3}\|x-y\|+\frac{1}{2}\|\sin x-\sin y\|\Big]^{2}\le\|x-y\|^{2}+\frac{1}{4}\|(I-S)x-(I-S)y\|^{2}.$$
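The strict-pseudocontraction bound for S with ζ = 1/4 can likewise be spot-checked on random samples; the sketch below (our own code, not the paper's) tests the displayed inequality directly:

```python
import math, random

def S(x):
    # S(x) = x/3 + (1/2) sin x, the strictly pseudocontractive map of the example
    return x / 3.0 + 0.5 * math.sin(x)

random.seed(1)
for _ in range(20000):
    x, y = random.uniform(-5, 5), random.uniform(-5, 5)
    lhs = (S(x) - S(y)) ** 2
    # zeta = 1/4 strict pseudocontraction bound
    rhs = (x - y) ** 2 + 0.25 * ((x - S(x)) - (y - S(y))) ** 2
    assert lhs <= rhs + 1e-12
print("strict pseudocontraction bound holds on all samples")
```

In fact |Sx − Sy| ≤ (5/6)|x − y|, so the left-hand side never exceeds the first term on the right; the ζ = 1/4 term only adds slack.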

It is clear that (γ_n + δ_n)ζ = (1/6 + 1/2) · (1/4) ≤ 1/6 = γ_n < (1 − 2δ)δ_n = (1 − 2 · (1/4)) · (1/2) = 1/4 for all n ≥ 1. Therefore, Ω = Fix(T) ∩ Fix(S) ∩ VI(C, A) = {0} ≠ ∅. In this case, Algorithm 3 can be rewritten as follows:

$$\begin{cases}
w_{n}=T^{n}x_{n}+\frac{1}{n+1}(x_{n}-x_{n-1}),\\
y_{n}=P_{C}(w_{n}-\tau_{n}Aw_{n}),\\
z_{n}=\frac{1}{n+1}f(x_{n})+\frac{n}{n+1}P_{C_{n}}(w_{n}-\tau_{n}Ay_{n}),\\
x_{n+1}=\frac{1}{3}T^{n}x_{n}+\frac{1}{6}P_{C_{n}}(w_{n}-\tau_{n}Ay_{n})+\frac{1}{2}Sz_{n},\quad\forall n\ge 1,
\end{cases}$$

where C_n and τ_n are chosen as in Algorithm 3. Thus, by Theorem 1, we know that {x_n} converges to 0 ∈ Ω if and only if |x_n − x_{n+1}| + |x_n − T^n x_n| → 0 as n → ∞.

On the other hand, Algorithm 4 can be rewritten as follows:

$$\begin{cases}
w_{n}=T^{n}x_{n}+\frac{1}{n+1}(x_{n}-x_{n-1}),\\
y_{n}=P_{C}(w_{n}-\tau_{n}Aw_{n}),\\
z_{n}=\frac{1}{n+1}f(x_{n})+\frac{n}{n+1}P_{C_{n}}(w_{n}-\tau_{n}Ay_{n}),\\
x_{n+1}=\frac{1}{3}T^{n}w_{n}+\frac{1}{6}P_{C_{n}}(w_{n}-\tau_{n}Ay_{n})+\frac{1}{2}Sz_{n},\quad\forall n\ge 1,
\end{cases}$$

where C_n and τ_n are chosen as in Algorithm 4. Thus, by Theorem 2, we know that {x_n} converges to 0 ∈ Ω if and only if |x_n − x_{n+1}| + |x_n − T^n x_n| → 0 as n → ∞.
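The rewritten scheme for this example is simple enough to run directly. The following sketch is our own implementation (tolerances, the starting point and the iteration count are illustrative): in **R** the half-space C_n reduces to a half-line, and the step size τ_n is found by the Armijo-type rule with γ = l = μ = 1/2.

```python
import math

C_LO, C_HI = -1.5, 1.0          # C = [-1.5, 1]
GAMMA = L_FACTOR = MU = 0.5     # gamma = l = mu = 1/2

def proj_C(x):
    return min(max(x, C_LO), C_HI)

def A(x):
    return 1.0 / (1.0 + abs(math.sin(x))) - 1.0 / (1.0 + abs(x))

def T_pow(x, n):                # n-th iterate of T(x) = (4/5) sin x
    for _ in range(n):
        x = 0.8 * math.sin(x)
    return x

def S(x):
    return x / 3.0 + 0.5 * math.sin(x)

def f(x):
    return 0.25 * math.sin(x)

def armijo(w):
    # largest tau in {gamma, gamma*l, ...} with tau*|Aw - Ay| <= mu*|w - y|
    tau = GAMMA
    while True:
        y = proj_C(w - tau * A(w))
        if tau * abs(A(w) - A(y)) <= MU * abs(w - y) + 1e-15:
            return tau, y
        tau *= L_FACTOR

def step(n, x, x_prev):
    w = T_pow(x, n) + (x - x_prev) / (n + 1)
    tau, y = armijo(w)
    d = w - tau * A(w) - y      # C_n = {v : d*(y - v) >= 0}, a half-line in R
    t = w - tau * A(y)
    h = t if d == 0 else (min(t, y) if d > 0 else max(t, y))
    z = f(x) / (n + 1) + n / (n + 1) * h
    return T_pow(x, n) / 3.0 + h / 6.0 + S(z) / 2.0

x_prev, x = 0.5, 0.5
for n in range(1, 300):
    x_prev, x = x, step(n, x, x_prev)
print(abs(x))  # small: the iterates approach 0, the unique point of Omega
```

Running the loop, |x_n| decays rapidly, consistent with the strong convergence of {x_n} to 0 asserted above.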

**Author Contributions:** The authors made equal contributions to this paper. Conceptualization, methodology, formal analysis and investigation: L.-C.C., A.P., C.-F.W. and J.-C.Y.; writing—original draft preparation: L.-C.C. and A.P.; writing—review and editing: C.-F.W. and J.-C.Y.

**Funding:** This research was partially supported by the Innovation Program of Shanghai Municipal Education Commission (15ZZ068), Ph.D. Program Foundation of Ministry of Education of China (20123127110002) and Program for Outstanding Academic Leaders in Shanghai City (15XD1503100). This research was also supported by the Ministry of Science and Technology, Taiwan [grant number: 107-2115-M-037-001].

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**




© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

*Article*

## **On Mann Viscosity Subgradient Extragradient Algorithms for Fixed Point Problems of Finitely Many Strict Pseudocontractions and Variational Inequalities**

**Lu-Chuan Ceng 1, Adrian Petruşel 2,3 and Jen-Chih Yao 4,\***


Received: 21 August 2019; Accepted: 26 September 2019; Published: 4 October 2019

**Abstract:** In a real Hilbert space, let CFPP and VIP denote a common fixed point problem of finitely many strict pseudocontractions and a variational inequality problem for a Lipschitzian, pseudomonotone operator, respectively. This paper explores how to find a common solution of the CFPP and VIP. To this end, we propose Mann viscosity algorithms with a line-search process built on subgradient extragradient techniques. The designed algorithms fully assimilate the Mann approximation approach, the viscosity iteration algorithm and the inertial subgradient extragradient technique with line-search process. Under suitable assumptions, it is proven that the sequences generated by the designed algorithms converge strongly to a common solution of the CFPP and VIP, which is the unique solution to a hierarchical variational inequality (HVI).

**Keywords:** method with line-search process; pseudomonotone variational inequality; strictly pseudocontractive mappings; common fixed point; sequentially weak continuity

**MSC:** 47H05; 47H09; 47H10; 90C52

#### **1. Introduction and Preliminaries**

Throughout this article, we suppose that H is a real Hilbert space and that C is a nonempty, closed and convex subset of H. An operator S : C → H is called:

(i) *L*-Lipschitzian if there exists L > 0 such that ‖Su − Sv‖ ≤ L‖u − v‖ ∀u, v ∈ C;

(ii) sequentially weakly continuous if for any {w_n} ⊂ C, the following implication holds: w_n ⇀ w ⇒ Sw_n ⇀ Sw;

(iii) pseudomonotone if ⟨Su, u − v⟩ ≤ 0 ⇒ ⟨Sv, u − v⟩ ≤ 0 ∀u, v ∈ C;

(iv) monotone if ⟨Su − Sv, v − u⟩ ≤ 0 ∀u, v ∈ C;

(v) *γ*-strongly monotone if ∃γ > 0 s.t. ⟨Su − Sw, u − w⟩ ≥ γ‖u − w‖² ∀u, w ∈ C.

It is not difficult to observe that monotonicity ensures pseudomonotonicity. A self-mapping S : C → C is called an *η*-strict pseudocontraction if the relation ⟨Su − Sv, u − v⟩ ≤ ‖u − v‖² − ((1 − η)/2)‖(I − S)u − (I − S)v‖² holds for all u, v ∈ C for some η ∈ [0, 1). By [1] we know that, in the case where S is η-strictly pseudocontractive, S is Lipschitzian, i.e., ‖Su − Sv‖ ≤ ((1 + η)/(1 − η))‖u − v‖ ∀u, v ∈ C. It is clear that the class of strict pseudocontractions includes the class of nonexpansive operators, i.e., those with ‖Su − Sv‖ ≤ ‖u − v‖ ∀u, v ∈ C. Both classes of nonlinear operators have received much attention, and many numerical algorithms have been designed for calculating their fixed points in Hilbert or Banach spaces; see e.g., [2–11].

Let A be a self-mapping on H. The classical variational inequality problem (VIP) is to find z ∈ C such that ⟨Az, y − z⟩ ≥ 0 ∀y ∈ C. The solution set of this VIP is denoted by VI(C, A). To the best of our knowledge, one of the most effective methods for solving the VIP is the gradient-projection method. Recently, many authors have numerically investigated the VIP in finite-dimensional spaces, Hilbert spaces or Banach spaces; see e.g., [12–20].

In 2014, Kraikaew and Saejung [21] suggested a Halpern-type gradient-like algorithm to deal with the VIP:

$$\begin{cases}
v_{k}=P_{C}(u_{k}-\ell Au_{k}),\\
C_{k}=\{v\in H:\langle u_{k}-\ell Au_{k}-v_{k},\,v_{k}-v\rangle\ge 0\},\\
w_{k}=P_{C_{k}}(u_{k}-\ell Av_{k}),\\
u_{k+1}=\varrho_{k}u_{0}+(1-\varrho_{k})w_{k}\quad\forall k\ge 0,
\end{cases}$$

where ℓ ∈ (0, 1/L), {ϱ_k} ⊂ (0, 1), lim_{k→∞} ϱ_k = 0 and ∑_{k=1}^∞ ϱ_k = +∞, and established strong convergence theorems for approximating solutions in Hilbert spaces. Later, Thong and Hieu [22] designed an inertial algorithm, i.e., for arbitrarily given u₀, u₁ ∈ H, the sequence {u_k} is constructed by

$$\begin{cases}
z_{k}=u_{k}+\varrho_{k}(u_{k}-u_{k-1}),\\
v_{k}=P_{C}(z_{k}-\ell Az_{k}),\\
C_{k}=\{v\in H:\langle z_{k}-\ell Az_{k}-v_{k},\,v_{k}-v\rangle\ge 0\},\\
u_{k+1}=P_{C_{k}}(z_{k}-\ell Av_{k})\quad\forall k\ge 1,
\end{cases}$$

with ℓ ∈ (0, 1/L). Under mild assumptions, they proved that {u_k} converges weakly to a point of VI(C, A). Very recently, Thong and Hieu [23] suggested two inertial algorithms with line-search process to solve the VIP for a Lipschitzian, monotone operator A and the FPP for a quasi-nonexpansive operator S satisfying a demiclosedness property in H. Under appropriate assumptions, they proved that the sequences constructed by the suggested algorithms converge weakly to a point of Fix(S) ∩ VI(C, A). For further research on common solution problems, we refer the readers to [24–38].
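A computational point worth noting is that the second projection in these subgradient extragradient schemes is onto a half-space C_k, which admits a closed form. The sketch below is our own code; the operator A(u) = u on C = [1, 2] is an illustrative monotone example (not from the paper) whose VIP solution is z* = 1.

```python
def proj_interval(x, lo, hi):
    return min(max(x, lo), hi)

def proj_halfspace(x, a, b):
    """Closed-form projection of x onto {v : a*v <= b} (trivial if a = 0)."""
    if a == 0.0:
        return x
    return x - max(0.0, (a * x - b) / (a * a)) * a

A = lambda u: u               # monotone, 1-Lipschitz (L = 1)
ell, rho = 0.5, 0.2           # step size ell in (0, 1/L), inertia rho
u_prev, u = 2.0, 2.0
for _ in range(50):
    z = u + rho * (u - u_prev)                    # inertial extrapolation
    v = proj_interval(z - ell * A(z), 1.0, 2.0)   # projection onto C
    a = z - ell * A(z) - v                        # C_k = {v' : a*v' <= a*v}
    u_prev, u = u, proj_halfspace(z - ell * A(v), a, a * v)
print(u)  # converges to z* = 1.0
```

Only the first projection touches C; the second is the cheap half-space formula, which is precisely what makes the method attractive when P_C is expensive.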

In this paper, we first introduce Mann viscosity algorithms via subgradient extragradient techniques, and then establish some strong convergence theorems in Hilbert spaces. It is remarkable that our algorithms involve line-search process.

The following lemmas are useful for the convergence analysis of our algorithms in the sequel.

**Lemma 1.** *[39] Let the operator A be pseudomonotone and continuous on C, and let w ∈ C. Then the following relation holds: ⟨Aw, w − y⟩ ≤ 0 ∀y ∈ C ⇔ ⟨Ay, w − y⟩ ≤ 0 ∀y ∈ C.*

**Lemma 2.** *[40] Suppose that {s_k} is a sequence in [0, +∞) such that s_{k+1} ≤ t_k b_k + (1 − t_k)s_k ∀k ≥ 1, where {t_k} and {b_k} lie in the real line **R** := (−∞, ∞) and satisfy:*

*(a)* {t_k} ⊂ [0, 1] and ∑_{k=1}^∞ t_k = ∞; *(b)* lim sup_{k→∞} b_k ≤ 0 or ∑_{k=1}^∞ |t_k b_k| < ∞. *Then s_k → 0 as k → ∞.*
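Lemma 2 is the standard quantitative tool behind the strong convergence proofs. A quick numerical illustration (our own code; the choices t_k = 1/(k + 1) and b_k = 1/k satisfy (a) and (b)):

```python
def lemma2_sequence(steps):
    """Iterate s_{k+1} = (1 - t_k) s_k + t_k b_k with t_k = 1/(k+1)
    (so sum t_k diverges) and b_k = 1/k -> 0; s_k should tend to 0."""
    s = 10.0
    for k in range(1, steps + 1):
        t = 1.0 / (k + 1)
        s = (1.0 - t) * s + t * (1.0 / k)
    return s

print(lemma2_sequence(20000))  # decays toward 0
```

Despite the large starting value s₁ = 10, the divergence of ∑ t_k forgets the initial condition and lim sup b_k ≤ 0 drives the residual to zero, exactly as the lemma predicts.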

From Ceng et al. [2] it is not difficult to find that the following lemmas hold.

**Lemma 3.** *Let* Γ *be an η-strictly pseudocontractive self-mapping on C. Then I* − Γ *is demiclosed at zero.*

**Lemma 4.** *For l* = 1, ..., *N, let* Γ*<sup>l</sup> be an ηl-strictly pseudocontractive self-mapping on C. Then for l* = 1, ..., *N, the mapping* Γ*<sup>l</sup> is an η-strict pseudocontraction with η* = max{*η<sup>l</sup>* : 1 ≤ *l* ≤ *N*}*, such that*

$$\|\Gamma_{l}u-\Gamma_{l}v\|\le\frac{1+\eta}{1-\eta}\|u-v\|\quad\forall u,v\in C.$$

**Lemma 5.** *Let Γ be an η-strictly pseudocontractive self-mapping on C and let γ, β ∈ [0, +∞). If (γ + β)η ≤ γ, then ‖γ(u − v) + β(Γu − Γv)‖ ≤ (γ + β)‖u − v‖ ∀u, v ∈ C.*

#### **2. Main Results**

Our first algorithm is specified below.

#### **Algorithm 1**

**Initial Step:** Given x₀, x₁ ∈ H arbitrarily. Let γ > 0, l ∈ (0, 1), μ ∈ (0, 1).

**Iteration Steps:** Compute x_{n+1} as follows:

Step 1. Put v_n = x_n − σ_n(x_{n−1} − x_n) and calculate u_n = P_C(v_n − ℓ_n A v_n), where ℓ_n is picked to be the largest ℓ ∈ {γ, γl, γl², ...} s.t.

$$\ell\|Av_{n}-Au_{n}\|\le\mu\|v_{n}-u_{n}\|.\tag{1}$$

Step 2. Calculate z_n = (1 − α_n)P_{C_n}(v_n − ℓ_n A u_n) + α_n f(x_n) with C_n := {v ∈ H : ⟨v_n − ℓ_n A v_n − u_n, u_n − v⟩ ≥ 0}.

Step 3. Calculate

$$x_{n+1}=\gamma_{n}P_{C_{n}}(v_{n}-\ell_{n}Au_{n})+\delta_{n}T_{n}z_{n}+\beta_{n}x_{n}.\tag{2}$$

Update n := n + 1 and return to Step 1.

In this section, we always suppose that the following hypotheses hold: T_k is a ζ_k-strictly pseudocontractive self-mapping on H for k = 1, ..., N s.t. ζ ∈ [0, 1) with ζ = max{ζ_k : 1 ≤ k ≤ N}; A is an L-Lipschitzian, pseudomonotone self-mapping on H, sequentially weakly continuous on C, such that Ω := ∩_{k=1}^{N} Fix(T_k) ∩ VI(C, A) ≠ ∅; and f : H → C is a δ-contraction with δ ∈ [0, 1/2). Moreover, {σ_n} ⊂ [0, 1] and {α_n}, {β_n}, {γ_n}, {δ_n} ⊂ (0, 1) are such that:

(i) β_n + γ_n + δ_n = 1 and sup_{n≥1} σ_n/α_n < ∞;

(ii) (1 − 2δ)δ_n > γ_n ≥ (γ_n + δ_n)ζ ∀n ≥ 1 and lim inf_{n→∞}((1 − 2δ)δ_n − γ_n) > 0;

(iii) lim_{n→∞} α_n = 0 and ∑_{n=1}^∞ α_n = ∞;

(iv) lim inf_{n→∞} β_n > 0, lim inf_{n→∞} δ_n > 0 and lim sup_{n→∞} β_n < 1.

Following Xu and Kim [40], we denote T_n := T_{n mod N}, ∀n ≥ 1, where the mod function takes values in {1, 2, ..., N}, i.e., whenever n = jN + q for some j ≥ 0 and 0 ≤ q < N, we obtain that T_n = T_N in the case of q = 0 and T_n = T_q in the case of 0 < q < N.

**Lemma 6.** *The Armijo-like search rule (1) is well defined, and min{γ, μl/L} ≤ ℓ_n ≤ γ.*

**Proof.** Obviously, (1) holds for all ℓ = γl^m ≤ μ/L, so ℓ_n is well defined and ℓ_n ≤ γ. In the case of ℓ_n = γ, the inequality is true. In the case of ℓ_n < γ, the rule (1) fails at step size ℓ_n/l, i.e., (ℓ_n/l)‖Av_n − A P_C(v_n − (ℓ_n/l)Av_n)‖ > μ‖v_n − P_C(v_n − (ℓ_n/l)Av_n)‖. The L-Lipschitzian property of A then yields ℓ_n > μl/L.
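The backtracking in rule (1) and the lower bound of Lemma 6 can be illustrated numerically. In the sketch below (our own code, with an illustrative operator A(x) = 2x of Lipschitz constant L = 2 on C = [−1, 1], not taken from the paper), the search backtracks once and stops at ℓ = 0.25 ∈ [μl/L, γ]:

```python
def proj(x, lo=-1.0, hi=1.0):
    return min(max(x, lo), hi)

A = lambda x: 2.0 * x   # illustrative operator with L = 2 (ours, not the paper's)

def armijo_step(v, gamma=0.5, l=0.5, mu=0.5):
    """Largest ell in {gamma, gamma*l, gamma*l**2, ...} with
    ell*|A(v) - A(u)| <= mu*|v - u|, where u = P_C(v - ell*A(v))."""
    ell = gamma
    while True:
        u = proj(v - ell * A(v))
        if ell * abs(A(v) - A(u)) <= mu * abs(v - u) + 1e-15:
            return ell, u
        ell *= l

ell, u = armijo_step(0.3)
print(ell)  # 0.25, inside [min(gamma, mu*l/L), gamma] = [0.125, 0.5]
```

Lemma 6 guarantees the loop terminates: once ℓ ≤ μ/L the Lipschitz bound makes the test succeed, so at most finitely many backtracking steps occur.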

**Lemma 7.** *Let* {*vn*}, {*un*} *and* {*zn*} *be the sequences constructed by Algorithm 1. Then*

$$
\begin{aligned}
\|z_{n}-\omega\|^{2}&\le (1-\alpha_{n})\|v_{n}-\omega\|^{2}+\alpha_{n}\delta\|x_{n}-\omega\|^{2}-(1-\alpha_{n})(1-\mu)[\|v_{n}-u_{n}\|^{2}\\
&\quad+\|h_{n}-u_{n}\|^{2}]+2\alpha_{n}\langle f(\omega)-\omega,\,z_{n}-\omega\rangle\quad\forall\omega\in\Omega,
\end{aligned}\tag{3}
$$

*where h_n := P_{C_n}(v_n − ℓ_n A u_n) ∀n ≥ 1.*

**Proof.** First, taking an arbitrary *p* ∈ Ω ⊂ *C* ⊂ *Cn*, we observe that

$$
\begin{aligned}
2\|h_{n}-p\|^{2}&\le 2\langle h_{n}-p,\,v_{n}-\ell_{n}Au_{n}-p\rangle\\
&=\|h_{n}-p\|^{2}+\|v_{n}-p\|^{2}-\|h_{n}-v_{n}\|^{2}-2\langle \ell_{n}Au_{n},\,h_{n}-p\rangle.
\end{aligned}
$$

So, it follows that ‖h_n − p‖² ≤ ‖v_n − p‖² − ‖h_n − v_n‖² − 2⟨ℓ_n A u_n, h_n − p⟩. Since p ∈ VI(C, A) and u_n ∈ C, Lemma 1 guarantees ⟨Au_n, p − u_n⟩ ≤ 0, and hence

$$
\begin{aligned}
\|h_{n}-p\|^{2}&\le\|v_{n}-p\|^{2}-\|h_{n}-v_{n}\|^{2}+2\ell_{n}(\langle Au_{n},\,p-u_{n}\rangle+\langle Au_{n},\,u_{n}-h_{n}\rangle)\\
&\le\|v_{n}-p\|^{2}-\|u_{n}-h_{n}\|^{2}-\|v_{n}-u_{n}\|^{2}+2\langle u_{n}-v_{n}+\ell_{n}Au_{n},\,u_{n}-h_{n}\rangle.
\end{aligned}\tag{4}
$$

Since h_n = P_{C_n}(v_n − ℓ_n A u_n) with C_n := {v ∈ H : ⟨u_n − v_n + ℓ_n A v_n, u_n − v⟩ ≤ 0}, we have ⟨u_n − v_n + ℓ_n A v_n, u_n − h_n⟩ ≤ 0, which together with (1) implies that

$$
\begin{aligned}
2\langle u_{n}-v_{n}+\ell_{n}Au_{n},\,u_{n}-h_{n}\rangle&=2\langle u_{n}-v_{n}+\ell_{n}Av_{n},\,u_{n}-h_{n}\rangle+2\ell_{n}\langle Av_{n}-Au_{n},\,h_{n}-u_{n}\rangle\\
&\le 2\mu\|u_{n}-v_{n}\|\,\|u_{n}-h_{n}\|\le\mu(\|v_{n}-u_{n}\|^{2}+\|h_{n}-u_{n}\|^{2}).
\end{aligned}
$$

Therefore, substituting the last inequality into (4), we infer that

$$\|h_n - p\|^2 \le \|v_n - p\|^2 - (1-\mu)\|v_n - u_n\|^2 - (1-\mu)\|h_n - u_n\|^2 \quad \forall p \in \Omega. \tag{5}$$
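Since each $C_n$ above is a half-space, the second projection $P_{C_n}$ has a closed form, which is what makes the subgradient extragradient step inexpensive. As a minimal sketch in plain Python (the function name and example data are ours, not the paper's): writing a half-space as $\{v : \langle a, v\rangle \le b\}$, the projection of a point $x$ steps back along the normal $a$ by the positive part of the constraint violation divided by $\|a\|^2$.

```python
def project_halfspace(x, a, b):
    """Project the point x onto the half-space {v : <a, v> <= b}.

    A point already inside the half-space is returned unchanged;
    otherwise we subtract ((<a, x> - b) / ||a||^2) * a, which lands
    exactly on the bounding hyperplane.
    """
    dot = sum(ai * xi for ai, xi in zip(a, x))
    violation = dot - b
    if violation <= 0:
        return list(x)
    norm_sq = sum(ai * ai for ai in a)
    return [xi - (violation / norm_sq) * ai for xi, ai in zip(x, a)]

# Example: project (2, 0) onto {v : v_1 <= 1} -> [1.0, 0.0]
print(project_halfspace([2.0, 0.0], [1.0, 0.0], 1.0))
```

For $C_n = \{v : \langle u_n - v_n + \ell_n Av_n, u_n - v\rangle \le 0\}$ one would take $a = -(u_n - v_n + \ell_n Av_n)$ and $b = \langle a, u_n\rangle$ before calling this routine.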

In addition, we have

$$z\_n - p = (1 - \alpha\_n)(h\_n - p) + \alpha\_n(f - I)p + \alpha\_n(f(\mathbf{x}\_n) - f(p)).$$

Using the convexity of the function $h(t) = t^2$ $\forall t \in \mathbf{R}$, from (5) we get

$$\begin{array}{rcl} \|z_n - p\|^2 & \leq & [\alpha_n\delta\|x_n - p\| + (1-\alpha_n)\|h_n - p\|]^2 + 2\alpha_n\langle (f-I)p, z_n - p\rangle \\ & \leq & \alpha_n\delta\|x_n - p\|^2 + (1-\alpha_n)\|h_n - p\|^2 + 2\alpha_n\langle (f-I)p, z_n - p\rangle \\ & \leq & \alpha_n\delta\|x_n - p\|^2 + (1-\alpha_n)\|v_n - p\|^2 - (1-\alpha_n)(1-\mu)[\|v_n - u_n\|^2 + \|h_n - u_n\|^2] \\ & & +\, 2\alpha_n\langle (f-I)p, z_n - p\rangle. \end{array}$$

**Lemma 8.** *Let* $\{x_n\}$, $\{u_n\}$, *and* $\{z_n\}$ *be bounded sequences constructed by Algorithm 1. If* $\|x_n - x_{n+1}\| \to 0$, $\|v_n - u_n\| \to 0$, $\|v_n - z_n\| \to 0$ *and* $\exists\{v_{n_i}\} \subset \{v_n\}$ *s.t.* $v_{n_i} \rightharpoonup z \in H$, *then* $z \in \Omega$*.*

**Proof.** According to Algorithm 1, we get $\sigma_n(x_n - x_{n-1}) = v_n - x_n$ $\forall n \ge 1$, and hence $\|x_n - x_{n-1}\| \ge \|v_n - x_n\|$. Using the assumption $\|x_n - x_{n+1}\| \to 0$, we have

$$\lim_{n \to \infty}\|v_n - x_n\| = 0. \tag{6}$$

So,

$$\|z_n - x_n\| \le \|v_n - z_n\| + \|v_n - x_n\| \to 0.$$

Since $\{x_n\}$ is bounded, from $v_n = x_n - \sigma_n(x_{n-1} - x_n)$ we know that $\{v_n\}$ is a bounded vector sequence. According to (5), we obtain that $h_n := P_{C_n}(v_n - \ell_n Au_n)$ is a bounded vector sequence. Also, by Algorithm 1 we get $\alpha_n f(x_n) + h_n - x_n - \alpha_n h_n = z_n - x_n$. So, the boundedness of $\{x_n\}$, $\{h_n\}$ guarantees that as $n \to \infty$,

$$\|h_n - x_n\| = \|z_n - x_n - \alpha_n f(x_n) + \alpha_n h_n\| \le \|z_n - x_n\| + \alpha_n(\|f(x_n)\| + \|h_n\|) \to 0.$$

It follows that

$$x_{n+1} - z_n = \gamma_n(h_n - x_n) + \delta_n(T_n z_n - z_n) + (1 - \delta_n)(x_n - z_n),$$

which immediately yields

$$\begin{array}{rcl} \delta_n\|T_n z_n - z_n\| & = & \|x_{n+1} - x_n + x_n - z_n - (1-\delta_n)(x_n - z_n) - \gamma_n(h_n - x_n)\| \\ & = & \|x_{n+1} - x_n + \delta_n(x_n - z_n) - \gamma_n(h_n - x_n)\| \\ & \le & \|x_{n+1} - x_n\| + \|x_n - z_n\| + \gamma_n\|h_n - x_n\|. \end{array}$$

Since $\|x_n - x_{n+1}\| \to 0$, $\|z_n - x_n\| \to 0$, $\|h_n - x_n\| \to 0$ and $\liminf_{n\to\infty}\delta_n > 0$, we obtain $\|z_n - T_n z_n\| \to 0$ as $n \to \infty$. This further implies that

$$\begin{array}{rcl} \|\mathbf{x}\_{n} - T\_{n}\mathbf{x}\_{n}\| & \leq \|\mathbf{x}\_{n} - z\_{n}\| + \|z\_{n} - T\_{n}z\_{n}\| + \frac{1+\zeta}{1-\zeta} \|z\_{n} - \mathbf{x}\_{n}\|\\ & \leq \frac{2}{1-\zeta} \|\mathbf{x}\_{n} - z\_{n}\| + \|z\_{n} - T\_{n}z\_{n}\| \to 0 \quad (n \to \infty). \end{array} \tag{7}$$

We have $\langle v_n - \ell_n Av_n - u_n, v - u_n\rangle \le 0$ $\forall v \in C$, and

$$
\left< \boldsymbol{\upsilon}\_{\boldsymbol{n}} - \boldsymbol{\mathsf{u}}\_{\boldsymbol{n}}, \boldsymbol{\upsilon} - \boldsymbol{\mathsf{u}}\_{\boldsymbol{n}} \right> + \ell\_{\boldsymbol{n}} \left< A \boldsymbol{\upsilon}\_{\boldsymbol{n}}, \boldsymbol{u}\_{\boldsymbol{n}} - \boldsymbol{\upsilon}\_{\boldsymbol{n}} \right> \leq \ell\_{\boldsymbol{n}} \left< A \boldsymbol{\upsilon}\_{\boldsymbol{n}}, \boldsymbol{\upsilon} - \boldsymbol{\upsilon}\_{\boldsymbol{n}} \right> \quad \forall \boldsymbol{\upsilon} \in \mathsf{C}. \tag{8}
$$

Note that $\ell_n \ge \min\{\gamma, \frac{\mu l}{L}\}$. So, $\liminf_{i\to\infty}\langle Av_{n_i}, v - v_{n_i}\rangle \ge 0$ $\forall v \in C$. This yields $\liminf_{i\to\infty}\langle Au_{n_i}, v - u_{n_i}\rangle \ge 0$ $\forall v \in C$. Since $\|v_n - x_n\| \to 0$ and $v_{n_i} \rightharpoonup z$, we get $x_{n_i} \rightharpoonup z$. We may assume $k = n_i \,\mathrm{mod}\, N$ for all $i$. By the assumption $\|x_n - x_{n+k}\| \to 0$, we have $x_{n_i+j} \rightharpoonup z$ for all $j \ge 1$. Hence, $\|x_{n_i+j} - T_{k+j}x_{n_i+j}\| = \|x_{n_i+j} - T_{n_i+j}x_{n_i+j}\| \to 0$. Then the demiclosedness principle implies that $z \in \mathrm{Fix}(T_{k+j})$ for all $j$. This ensures that

$$z \in \bigcap\_{k=1}^{N} \text{Fix}(T\_k). \tag{9}$$

We now take a sequence {*ςi*} ⊂ (0, 1) satisfying *ς<sup>i</sup>* ↓ 0 as *i* → ∞. For all *i* ≥ 1, we denote by *mi* the smallest natural number satisfying

$$\langle Au_{n_j}, v - u_{n_j}\rangle + \varsigma_i \ge 0 \quad \forall j \ge m_i. \tag{10}$$

Since $\{\varsigma_i\}$ is decreasing, it is clear that $\{m_i\}$ is increasing. Noticing that $\{u_{m_i}\} \subset C$ ensures $Au_{m_i} \neq 0$ $\forall i \ge 1$, we set $e_{m_i} = \frac{Au_{m_i}}{\|Au_{m_i}\|^2}$; then $\langle Au_{m_i}, e_{m_i}\rangle = 1$ $\forall i \ge 1$. So, from (10) we get $\langle Au_{m_i}, v + \varsigma_i e_{m_i} - u_{m_i}\rangle \ge 0$ $\forall i \ge 1$. Also, the pseudomonotonicity of $A$ implies $\langle A(v + \varsigma_i e_{m_i}), v + \varsigma_i e_{m_i} - u_{m_i}\rangle \ge 0$ $\forall i \ge 1$. This immediately leads to

$$\langle Av - A(v + \varsigma_i e_{m_i}), v + \varsigma_i e_{m_i} - u_{m_i}\rangle - \varsigma_i\langle Av, e_{m_i}\rangle \le \langle Av, v - u_{m_i}\rangle \quad \forall i \ge 1. \tag{11}$$

We claim $\lim_{i\to\infty}\|\varsigma_i e_{m_i}\| = 0$. Indeed, from $v_{n_i} \rightharpoonup z$ and $\|v_n - u_n\| \to 0$, we obtain $u_{n_i} \rightharpoonup z$. So, $\{u_n\} \subset C$ ensures $z \in C$. Also, the sequentially weak continuity of $A$ guarantees that $Au_{n_i} \rightharpoonup Az$. Thus, we have $Az \neq 0$ (otherwise, $z$ is a solution). Moreover, the sequentially weak lower semicontinuity of $\|\cdot\|$ ensures $0 < \|Az\| \le \liminf_{i\to\infty}\|Au_{n_i}\|$. Since $\{u_{m_i}\} \subset \{u_{n_i}\}$ and $\varsigma_i \downarrow 0$ as $i \to \infty$, we deduce that $0 \le \limsup_{i\to\infty}\|\varsigma_i e_{m_i}\| = \limsup_{i\to\infty}\frac{\varsigma_i}{\|Au_{m_i}\|} \le \frac{\limsup_{i\to\infty}\varsigma_i}{\liminf_{i\to\infty}\|Au_{n_i}\|} = 0$. Hence we get $\varsigma_i e_{m_i} \to 0$.

Finally we claim $z \in \Omega$. In fact, letting $i \to \infty$, we conclude that the left hand side of (11) tends to zero by the Lipschitzian property of $A$, the boundedness of $\{u_{m_i}\}$, $\{e_{m_i}\}$ and the limit $\lim_{i\to\infty}\varsigma_i e_{m_i} = 0$. Thus, we get $\langle Av, v - z\rangle = \liminf_{i\to\infty}\langle Av, v - u_{m_i}\rangle \ge 0$ $\forall v \in C$. So, $z \in \mathrm{VI}(C, A)$. Therefore, from (9) we have $z \in \bigcap_{k=1}^N \mathrm{Fix}(T_k) \cap \mathrm{VI}(C, A) = \Omega$.

**Theorem 1.** *Assume A*(*C*) *is bounded. Let* {*xn*} *be constructed by Algorithm 1. Then*

$$x_n \to x^* \in \Omega \iff \begin{cases} x_n - x_{n+1} \to 0, \\ \sup_{n \ge 1}\|x_n - f(x_n)\| < \infty, \end{cases}$$

*where* $x^* \in \Omega$ *is the unique solution to the hierarchical variational inequality (HVI):* $\langle (I - f)x^*, x^* - \omega\rangle \le 0$, $\forall \omega \in \Omega$*.*

**Proof.** Taking into account condition (iv) on {*γn*}, we may suppose that {*βn*} ⊂ [*a*, *b*] ⊂ (0, 1). Applying Banach's Contraction Principle, we obtain existence and uniqueness of a fixed point *x*<sup>∗</sup> ∈ *H* for the mapping *P*<sup>Ω</sup> ◦ *f* , which means that *x*<sup>∗</sup> = *P*<sup>Ω</sup> *f*(*x*∗). Hence, the HVI

$$\langle (I - f)\mathbf{x}^\*, \mathbf{x}^\* - \omega \rangle \le 0, \quad \forall \omega \in \Omega \tag{12}$$

has a unique solution $x^* \in \Omega := \bigcap_{k=1}^N \mathrm{Fix}(T_k) \cap \mathrm{VI}(C, A)$.
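The existence step above is exactly the Banach Contraction Principle applied to the contraction $P_\Omega \circ f$: iterating $x \mapsto P_\Omega(f(x))$ converges geometrically to the unique fixed point $x^* = P_\Omega f(x^*)$. A toy one-dimensional illustration (the interval, the contraction $f$, and all names below are our own assumptions, not the paper's setting):

```python
def banach_fixed_point(g, x0, tol=1e-12, max_iter=1000):
    """Iterate x_{k+1} = g(x_k); for a contraction g this converges
    to the unique fixed point at a geometric rate."""
    x = x0
    for _ in range(max_iter):
        x_next = g(x)
        if abs(x_next - x) <= tol:
            return x_next
        x = x_next
    return x

# Toy setting: Omega = [0, 2], f(x) = 0.5*x + 1 (a 0.5-contraction).
proj = lambda t: min(2.0, max(0.0, t))   # P_Omega on the interval [0, 2]
g = lambda x: proj(0.5 * x + 1.0)        # the composition P_Omega o f
print(banach_fixed_point(g, 5.0))        # fixed point x* = 2.0
```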

It is now obvious that the necessity of the theorem is true. In fact, if $x_n \to x^* \in \Omega$, then we get $\sup_{n\ge 1}\|x_n - f(x_n)\| \le \sup_{n\ge 1}(\|x_n - x^*\| + \|x^* - f(x^*)\| + \|f(x^*) - f(x_n)\|) < \infty$ and

$$\|\mathbf{x}\_{\mathsf{n}} - \mathbf{x}\_{\mathsf{n}+1}\| \le \|\mathbf{x}\_{\mathsf{n}} - \mathbf{x}^\*\| + \|\mathbf{x}\_{\mathsf{n}+1} - \mathbf{x}^\*\| \to 0 \quad (\mathsf{n} \to \infty).$$

For the sufficient condition, let us suppose $\|x_n - x_{n+1}\| \to 0$ and $\sup_{n\ge 1}\|(I - f)x_n\| < \infty$. The sufficiency of our conclusion is proved in the following steps.

**Step 1.** We show the boundedness of {*xn*}. In fact, let *p* be an arbitrary point in Ω. Then *Tn p* = *p* ∀*n* ≥ 1, and

$$\|v_n - p\|^2 - (1-\mu)\|h_n - u_n\|^2 - (1-\mu)\|v_n - u_n\|^2 \ge \|h_n - p\|^2, \tag{13}$$

which hence leads to

$$\|v_n - p\| \ge \|h_n - p\| \quad \forall n \ge 1. \tag{14}$$

By the definition of *vn*, we have

$$\|v_n - p\| \le \|x_n - p\| + \sigma_n\|x_n - x_{n-1}\| = \|x_n - p\| + \alpha_n \cdot \frac{\sigma_n}{\alpha_n}\|x_n - x_{n-1}\|. \tag{15}$$

Noticing $\sup_{n\ge 1}\frac{\sigma_n}{\alpha_n} < \infty$ and $\sup_{n\ge 1}\|x_n - x_{n-1}\| < \infty$, we obtain that $\sup_{n\ge 1}\frac{\sigma_n}{\alpha_n}\|x_n - x_{n-1}\| < \infty$. This ensures that $\exists M_1 > 0$ s.t.

$$\frac{\sigma_n}{\alpha_n}\|x_n - x_{n-1}\| \le M_1 \quad \forall n \ge 1. \tag{16}$$

Combining (14)–(16), we get

$$\|h_n - p\| \le \|v_n - p\| \le \|x_n - p\| + \alpha_n M_1 \quad \forall n \ge 1. \tag{17}$$

Note that $A(C)$ is bounded, $u_n = P_C(v_n - \ell_n Av_n)$, $f(H) \subset C \subset C_n$ and $h_n = P_{C_n}(v_n - \ell_n Au_n)$. Hence we know that $\{Au_n\}$ is bounded. So, from $\sup_{n\ge 1}\|(I - f)x_n\| < \infty$, it follows that

$$\begin{array}{rcl} \|h_n - f(x_n)\| & \le & \|v_n - \ell_n Au_n - f(x_n)\| \\ & \le & \|x_n - x_{n-1}\| + \|x_n - f(x_n)\| + \gamma\|Au_n\| \le M_0, \end{array}$$

where $\exists M_0 > 0$ s.t. $M_0 \ge \sup_{n\ge 1}(\|x_n - x_{n-1}\| + \|x_n - f(x_n)\| + \gamma\|Au_n\|)$ (due to the assumption $\|x_n - x_{n+1}\| \to 0$). Consequently,

$$\begin{array}{rcl} \|z_n - p\| & \le & \alpha_n\delta\|x_n - p\| + (1-\alpha_n)\|h_n - p\| + \alpha_n\|(f-I)p\| \\ & \le & (1 - \alpha_n(1-\delta))\|x_n - p\| + \alpha_n(M_1 + \|(f-I)p\|), \end{array}$$

which together with (*γ<sup>n</sup>* + *δn*)*ζ* ≤ *γn*, yields

$$\begin{array}{rcl} \|x_{n+1} - p\| & \le & \beta_n\|x_n - p\| + (1-\beta_n)\left\|\frac{1}{1-\beta_n}[\gamma_n(z_n - p) + \delta_n(T_n z_n - p)]\right\| + \gamma_n\|h_n - z_n\| \\ & \le & \beta_n\|x_n - p\| + (1-\beta_n)\left[(1 - \alpha_n(1-\delta))\|x_n - p\| + \alpha_n(M_0 + M_1 + \|(f-I)p\|)\right] \\ & = & \left[1 - \alpha_n(1-\beta_n)(1-\delta)\right]\|x_n - p\| + \alpha_n(1-\beta_n)(1-\delta)\frac{M_0 + M_1 + \|(f-I)p\|}{1-\delta}. \end{array}$$

This shows that $\|x_n - p\| \le \max\{\|x_1 - p\|, \frac{M_0 + M_1 + \|(I-f)p\|}{1-\delta}\}$ $\forall n \ge 1$. Thus, $\{x_n\}$ is bounded, and so are the sequences $\{h_n\}$, $\{v_n\}$, $\{u_n\}$, $\{z_n\}$, $\{T_n z_n\}$.

**Step 2.** We show that $\exists M_4 > 0$ s.t.

$$(1-\alpha_n)(1-\beta_n)(1-\mu)[\|v_n - u_n\|^2 + \|h_n - u_n\|^2] \le \|x_n - p\|^2 - \|x_{n+1} - p\|^2 + \alpha_n M_4.$$

In fact, using Lemma 7 and the convexity of $\|\cdot\|^2$, we get

$$\begin{array}{rcl} \|x_{n+1} - p\|^2 & \le & \|\beta_n(x_n - p) + \gamma_n(z_n - p) + \delta_n(T_n z_n - p)\|^2 + 2\gamma_n\alpha_n\langle h_n - f(x_n), x_{n+1} - p\rangle \\ & \le & \beta_n\|x_n - p\|^2 + (1-\beta_n)\|z_n - p\|^2 + 2(1-\beta_n)\alpha_n\|h_n - f(x_n)\|\|x_{n+1} - p\| \\ & \le & \beta_n\|x_n - p\|^2 + (1-\beta_n)\{\alpha_n\delta\|x_n - p\|^2 + (1-\alpha_n)\|v_n - p\|^2 \\ & & -\,(1-\alpha_n)(1-\mu)[\|v_n - u_n\|^2 + \|h_n - u_n\|^2] + \alpha_n M_2\}, \end{array} \tag{18}$$

where $\exists M_2 > 0$ s.t. $M_2 \ge \sup_{n\ge 1} 2(\|(f-I)p\|\|z_n - p\| + \|h_n - f(x_n)\|\|x_{n+1} - p\|)$. Also,

$$\begin{array}{rcl} \|v_n - p\|^2 & \le & \|x_n - p\|^2 + \alpha_n(2M_1\|x_n - p\| + \alpha_n M_1^2) \\ & \le & \|x_n - p\|^2 + \alpha_n M_3, \end{array} \tag{19}$$

where $\exists M_3 > 0$ s.t. $M_3 \ge \sup_{n\ge 1}(2M_1\|x_n - p\| + \alpha_n M_1^2)$. Substituting (19) into (18), we have

$$\begin{array}{rcl} \|x_{n+1} - p\|^2 & \le & \beta_n\|x_n - p\|^2 + (1-\beta_n)\{(1 - \alpha_n(1-\delta))\|x_n - p\|^2 + (1-\alpha_n)\alpha_n M_3 \\ & & -\,(1-\alpha_n)(1-\mu)[\|v_n - u_n\|^2 + \|h_n - u_n\|^2] + \alpha_n M_2\} \\ & \le & \|x_n - p\|^2 - (1-\alpha_n)(1-\beta_n)(1-\mu)[\|v_n - u_n\|^2 + \|h_n - u_n\|^2] + \alpha_n M_4, \end{array} \tag{20}$$

where *M*<sup>4</sup> := *M*<sup>2</sup> + *M*3. This immediately implies that

$$(1-\alpha_n)(1-\beta_n)(1-\mu)\left[\|v_n - u_n\|^2 + \|h_n - u_n\|^2\right] \le \|x_n - p\|^2 - \|x_{n+1} - p\|^2 + \alpha_n M_4. \tag{21}$$

**Step 3.** We show that ∃*M* > 0 s.t.

$$\begin{array}{rl} & \|x_{n+1} - p\|^2 \\ \le & \left[1 - \frac{(1-2\delta)\delta_n - \gamma_n}{1 - \alpha_n\gamma_n}\alpha_n\right]\|x_n - p\|^2 + \frac{[(1-2\delta)\delta_n - \gamma_n]\alpha_n}{1 - \alpha_n\gamma_n}\cdot\Big\{\frac{2\gamma_n}{(1-2\delta)\delta_n - \gamma_n}\|f(x_n) - p\|\|z_n - x_{n+1}\| \\ & +\,\frac{2\delta_n}{(1-2\delta)\delta_n - \gamma_n}\|f(x_n) - p\|\|z_n - x_n\| + \frac{2\delta_n}{(1-2\delta)\delta_n - \gamma_n}\langle f(p) - p, x_n - p\rangle \\ & +\,\frac{\gamma_n + \delta_n}{(1-2\delta)\delta_n - \gamma_n}\cdot\frac{\sigma_n}{\alpha_n}\|x_n - x_{n-1}\|3M\Big\}. \end{array}$$

In fact, we get

$$\begin{array}{rcl} \|v_n - p\|^2 & \le & \|x_n - p\|^2 + \sigma_n\|x_n - x_{n-1}\|(2\|x_n - p\| + \sigma_n\|x_n - x_{n-1}\|) \\ & \le & \|x_n - p\|^2 + \sigma_n\|x_n - x_{n-1}\|3M, \end{array} \tag{22}$$

where $\exists M > 0$ s.t. $M \ge \sup_{n\ge 1}\{\|x_n - p\|, \sigma_n\|x_n - x_{n-1}\|\}$. By Algorithm 1 and the convexity of $\|\cdot\|^2$, we have

$$\begin{array}{rcl} \|x_{n+1} - p\|^2 & \le & \|\beta_n(x_n - p) + \gamma_n(z_n - p) + \delta_n(T_n z_n - p)\|^2 + 2\gamma_n\alpha_n\langle h_n - f(x_n), x_{n+1} - p\rangle \\ & \le & \beta_n\|x_n - p\|^2 + (1-\beta_n)\left\|\frac{1}{1-\beta_n}[\gamma_n(z_n - p) + \delta_n(T_n z_n - p)]\right\|^2 \\ & & +\,2\gamma_n\alpha_n\langle h_n - p, x_{n+1} - p\rangle + 2\gamma_n\alpha_n\langle p - f(x_n), x_{n+1} - p\rangle, \end{array}$$

which leads to

$$\begin{array}{rcl} \|x_{n+1} - p\|^2 & \le & \beta_n\|x_n - p\|^2 + (1-\beta_n)[(1-\alpha_n)\|h_n - p\|^2 + 2\alpha_n\langle f(x_n) - p, z_n - p\rangle] \\ & & +\,\gamma_n\alpha_n(\|h_n - p\|^2 + \|x_{n+1} - p\|^2) + 2\gamma_n\alpha_n\langle p - f(x_n), x_{n+1} - p\rangle. \end{array}$$

Using (17) and (22) we obtain that $\|h_n - p\|^2 \le \|x_n - p\|^2 + \sigma_n\|x_n - x_{n-1}\|3M$. Hence,

$$\begin{array}{rl} & \|x_{n+1} - p\|^2 \\ \le & [1 - \alpha_n(1-\beta_n)]\|x_n - p\|^2 + (1-\beta_n)(1-\alpha_n)\sigma_n\|x_n - x_{n-1}\|3M + 2\alpha_n\delta_n\langle f(x_n) - p, z_n - p\rangle \\ & +\,\gamma_n\alpha_n(\|x_n - p\|^2 + \|x_{n+1} - p\|^2) + (1-\beta_n)\alpha_n\sigma_n\|x_n - x_{n-1}\|3M + 2\gamma_n\alpha_n\langle f(x_n) - p, z_n - x_{n+1}\rangle \\ \le & [1 - \alpha_n(1-\beta_n)]\|x_n - p\|^2 + 2\gamma_n\alpha_n\|f(x_n) - p\|\|z_n - x_{n+1}\| + 2\alpha_n\delta_n\langle f(x_n) - p, x_n - p\rangle \\ & +\,2\alpha_n\delta_n\langle f(x_n) - p, z_n - x_n\rangle + \gamma_n\alpha_n(\|x_n - p\|^2 + \|x_{n+1} - p\|^2) + (1-\beta_n)\sigma_n\|x_n - x_{n-1}\|3M \\ \le & [1 - \alpha_n(1-\beta_n)]\|x_n - p\|^2 + 2\gamma_n\alpha_n\|f(x_n) - p\|\|z_n - x_{n+1}\| + 2\alpha_n\delta_n\delta\|x_n - p\|^2 \\ & +\,2\alpha_n\delta_n\langle f(p) - p, x_n - p\rangle + 2\alpha_n\delta_n\|f(x_n) - p\|\|z_n - x_n\| \\ & +\,\gamma_n\alpha_n(\|x_n - p\|^2 + \|x_{n+1} - p\|^2) + (1-\beta_n)\sigma_n\|x_n - x_{n-1}\|3M, \end{array}$$

which immediately yields

$$\begin{array}{rl} & \|x_{n+1} - p\|^2 \\ \le & \left[1 - \frac{(1-2\delta)\delta_n - \gamma_n}{1 - \alpha_n\gamma_n}\alpha_n\right]\|x_n - p\|^2 + \frac{[(1-2\delta)\delta_n - \gamma_n]\alpha_n}{1 - \alpha_n\gamma_n}\cdot\Big\{\frac{2\gamma_n}{(1-2\delta)\delta_n - \gamma_n}\|f(x_n) - p\|\|z_n - x_{n+1}\| \\ & +\,\frac{2\delta_n}{(1-2\delta)\delta_n - \gamma_n}\|f(x_n) - p\|\|z_n - x_n\| + \frac{2\delta_n}{(1-2\delta)\delta_n - \gamma_n}\langle f(p) - p, x_n - p\rangle \\ & +\,\frac{\gamma_n + \delta_n}{(1-2\delta)\delta_n - \gamma_n}\cdot\frac{\sigma_n}{\alpha_n}\|x_n - x_{n-1}\|3M\Big\}. \end{array} \tag{23}$$

**Step 4.** We show that *xn* → *x*<sup>∗</sup> ∈ Ω, where *x*<sup>∗</sup> is the unique solution of (12). Indeed, putting *p* = *x*∗, we infer from (23) that

$$\begin{array}{rl} & \|x_{n+1} - x^*\|^2 \\ \le & \left[1 - \frac{(1-2\delta)\delta_n - \gamma_n}{1 - \alpha_n\gamma_n}\alpha_n\right]\|x_n - x^*\|^2 + \frac{[(1-2\delta)\delta_n - \gamma_n]\alpha_n}{1 - \alpha_n\gamma_n}\cdot\Big\{\frac{2\gamma_n}{(1-2\delta)\delta_n - \gamma_n}\|f(x_n) - x^*\|\|z_n - x_{n+1}\| \\ & +\,\frac{2\delta_n}{(1-2\delta)\delta_n - \gamma_n}\|f(x_n) - x^*\|\|z_n - x_n\| + \frac{2\delta_n}{(1-2\delta)\delta_n - \gamma_n}\langle f(x^*) - x^*, x_n - x^*\rangle \\ & +\,\frac{\gamma_n + \delta_n}{(1-2\delta)\delta_n - \gamma_n}\cdot\frac{\sigma_n}{\alpha_n}\|x_n - x_{n-1}\|3M\Big\}. \end{array} \tag{24}$$

It is sufficient to show that $\limsup_{n\to\infty}\langle (f-I)x^*, x_n - x^*\rangle \le 0$. From (21), $\|x_n - x_{n+1}\| \to 0$, $\alpha_n \to 0$ and $\{\beta_n\} \subset [a,b] \subset (0,1)$, we get

$$\begin{aligned} &\limsup\_{n\to\infty} (1-\mathfrak{a}\_n)(1-b)(1-\mu)[||\upsilon\_n - \mathfrak{u}\_n||^2 + ||h\_n - \mathfrak{u}\_n||^2] \\ &\le \limsup\_{n\to\infty} [(||\mathfrak{x}\_n - p|| + ||\mathfrak{x}\_{n+1} - p||)|\mathfrak{x}\_n - \mathfrak{x}\_{n+1}|| + \mathfrak{a}\_n M\_4] = 0. \end{aligned}$$

This ensures that

$$\lim\_{n \to \infty} \|v\_n - u\_n\| = 0 \quad \text{and} \quad \lim\_{n \to \infty} \|h\_n - u\_n\| = 0. \tag{25}$$

Consequently,

$$||\mathfrak{x}\_n - \mathfrak{u}\_n|| \le ||\mathfrak{x}\_n - \mathfrak{v}\_n|| + ||\mathfrak{v}\_n - \mathfrak{u}\_n|| \to 0 \quad (n \to \infty).$$

Since $z_n = \alpha_n f(x_n) + (1-\alpha_n)h_n$ with $h_n := P_{C_n}(v_n - \ell_n Au_n)$, we get

$$\begin{aligned} \|z\_n - u\_n\| &= \|a\_n f(\mathbf{x}\_n) - a\_n h\_n + h\_n - u\_n\| \\ &\le a\_n (\|f(\mathbf{x}\_n)\| + \|h\_n\|) + \|h\_n - u\_n\| \to 0 \quad (n \to \infty), \end{aligned} \tag{26}$$

and hence

$$\|\|z\_{\rm n} - \mathbf{x}\_{\rm n}\|\| \le \|z\_{\rm n} - \mathbf{u}\_{\rm n}\|\| + \|\|\mathbf{u}\_{\rm n} - \mathbf{x}\_{\rm n}\|\| \to 0 \quad (\mathbf{n} \to \infty). \tag{27}$$

Obviously, combining (25) and (26) guarantees that

$$\|v_n - z_n\| \le \|v_n - u_n\| + \|u_n - z_n\| \to 0 \quad (n \to \infty).$$

From the boundedness of {*xn*}, it follows that ∃{*xni* }⊂{*xn*} s.t.

$$\limsup\_{n \to \infty} \langle (f - I)\mathbf{x}^\*, \mathbf{x}\_n - \mathbf{x}^\* \rangle = \lim\_{i \to \infty} \langle (f - I)\mathbf{x}^\*, \mathbf{x}\_{\mathbb{R}\_i} - \mathbf{x}^\* \rangle. \tag{28}$$

Since $\{x_n\}$ is bounded, we may suppose that $x_{n_i} \rightharpoonup \tilde{x}$. Hence from (28) we get

$$\limsup\_{n \to \infty} \langle (f - I) \mathbf{x}^\*, \mathbf{x}\_{\mathbb{R}} - \mathbf{x}^\* \rangle = \lim\_{i \to \infty} \langle (f - I) \mathbf{x}^\*, \mathbf{x}\_{\mathbb{R}\_i} - \mathbf{x}^\* \rangle = \langle (f - I) \mathbf{x}^\*, \mathbf{\tilde{x}} - \mathbf{x}^\* \rangle. \tag{29}$$

It is easy to see from $\|v_n - x_n\| \to 0$ and $x_{n_i} \rightharpoonup \tilde{x}$ that $v_{n_i} \rightharpoonup \tilde{x}$. Since $\|x_n - x_{n+1}\| \to 0$, $\|v_n - u_n\| \to 0$, $\|v_n - z_n\| \to 0$ and $v_{n_i} \rightharpoonup \tilde{x}$, by Lemma 8 we infer that $\tilde{x} \in \Omega$. Therefore, from (12) and (29) we conclude that

$$\limsup\_{n \to \infty} \langle (f - I)\mathbf{x}^\*, \mathbf{x}\_n - \mathbf{x}^\* \rangle = \langle (f - I)\mathbf{x}^\*, \mathbf{\tilde{x}} - \mathbf{x}^\* \rangle \le 0. \tag{30}$$

Note that $\liminf_{n\to\infty}\frac{(1-2\delta)\delta_n - \gamma_n}{1 - \alpha_n\gamma_n} > 0$. It follows that $\sum_{n=0}^\infty \frac{(1-2\delta)\delta_n - \gamma_n}{1 - \alpha_n\gamma_n}\alpha_n = \infty$. It is clear that

$$\begin{array}{rl} \limsup_{n\to\infty}\Big\{ & \frac{2\gamma_n}{(1-2\delta)\delta_n - \gamma_n}\|f(x_n) - x^*\|\|z_n - x_{n+1}\| + \frac{2\delta_n}{(1-2\delta)\delta_n - \gamma_n}\|f(x_n) - x^*\|\|z_n - x_n\| \\ & +\,\frac{2\delta_n}{(1-2\delta)\delta_n - \gamma_n}\langle f(x^*) - x^*, x_n - x^*\rangle + \frac{\gamma_n + \delta_n}{(1-2\delta)\delta_n - \gamma_n}\cdot\frac{\sigma_n}{\alpha_n}\|x_n - x_{n-1}\|3M\Big\} \le 0. \end{array} \tag{31}$$

Therefore, by Lemma 2 we immediately deduce that *xn* → *x*∗.

Next, we introduce another Mann viscosity algorithm with line-search process by the subgradient extragradient technique.

#### **Algorithm 2**

**Initial Step:** Given $x_0, x_1 \in H$ arbitrarily. Let $\gamma > 0$, $l \in (0,1)$, $\mu \in (0,1)$.

**Iteration Steps:** Compute $x_{n+1}$ below:

Step 1. Put $v_n = x_n - \sigma_n(x_{n-1} - x_n)$ and calculate $u_n = P_C(v_n - \ell_n Av_n)$, where $\ell_n$ is picked to be the largest $\ell \in \{\gamma, \gamma l, \gamma l^2, ...\}$ s.t.

$$\ell\|Av_n - Au_n\| \le \mu\|v_n - u_n\|. \tag{32}$$

Step 2. Calculate $z_n = (1-\alpha_n)P_{C_n}(v_n - \ell_n Au_n) + \alpha_n f(x_n)$ with $C_n := \{v \in H : \langle v_n - \ell_n Av_n - u_n, u_n - v\rangle \ge 0\}$.

Step 3. Calculate

$$x_{n+1} = \gamma_n P_{C_n}(v_n - \ell_n Au_n) + \delta_n T_n z_n + \beta_n v_n. \tag{33}$$

Update *n* := *n* + 1 and return to Step 1.
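Under stated assumptions, one run of Algorithm 2 can be sketched numerically. The toy data below are entirely ours, not the paper's: $H = \mathbf{R}$, $C = [0,5]$, the monotone Lipschitz operator $A(x) = x - 1$ (so $\mathrm{VI}(C,A) = \{1\}$), $T = I$, the contraction $f(x) = 0.5x$, constant weights $\beta_n, \gamma_n, \delta_n$, vanishing $\alpha_n$, and inertia switched off ($\sigma_n = 0$); in one dimension both projections reduce to clipping.

```python
def proj_interval(t, lo=0.0, hi=5.0):
    """P_C for the toy feasible set C = [lo, hi]."""
    return min(hi, max(lo, t))

def armijo_step(v, A, gamma=1.0, l=0.5, mu=0.4, max_back=50):
    """Largest ell in {gamma, gamma*l, gamma*l^2, ...} satisfying (32),
    with u = P_C(v - ell*A(v)) recomputed for each trial ell."""
    ell = gamma
    for _ in range(max_back):
        u = proj_interval(v - ell * A(v))
        if ell * abs(A(v) - A(u)) <= mu * abs(v - u) + 1e-16:
            return ell, u
        ell *= l
    return ell, proj_interval(v - ell * A(v))

def proj_Cn(y, v, u, ell, A):
    """1-D projection onto C_n = {w : (v - ell*A(v) - u)*(u - w) >= 0}."""
    c = v - ell * A(v) - u
    if c > 0:            # constraint reads w <= u
        return min(y, u)
    if c < 0:            # constraint reads w >= u
        return max(y, u)
    return y             # c == 0: C_n is the whole line

# Toy problem data (ours): Omega = Fix(T) ∩ VI(C, A) = {1}.
A = lambda x: x - 1.0
f = lambda x: 0.5 * x
T = lambda x: x
x_prev, x = 2.0, 4.0
for n in range(1, 501):
    alpha, sigma = 1.0 / (n + 1), 0.0       # vanishing viscosity, no inertia
    beta, gam, delta = 0.5, 0.25, 0.25      # beta + gam + delta = 1
    v = x - sigma * (x_prev - x)            # Step 1 (inertial point)
    ell, u = armijo_step(v, A)              # Step 1 (line search, (32))
    h = proj_Cn(v - ell * A(u), v, u, ell, A)
    z = (1 - alpha) * h + alpha * f(x)      # Step 2
    x_prev, x = x, gam * h + delta * T(z) + beta * v   # Step 3, (33)
print(round(x, 3))   # approaches the HVI solution x* = 1
```

With these choices the iterates settle near $x^* = 1$ at a rate limited by the decay of $\alpha_n$, which is consistent with the viscosity term vanishing in Theorem 2.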

It is remarkable that Lemmas 6, 7 and 8 remain true for Algorithm 2.

**Theorem 2.** *Assume A*(*C*) *is bounded. Let* {*xn*} *be constructed by Algorithm 2. Then*

$$\mathbf{x}\_{\mathfrak{n}} \to \mathbf{x}^\* \in \Omega \iff \begin{cases} \quad \mathbf{x}\_{\mathfrak{n}} - \mathbf{x}\_{\mathfrak{n}+1} \to 0, \\ \quad \quad \sup\_{n \ge 1} ||(I - f)\mathbf{x}\_{\mathfrak{n}}|| < \infty. \end{cases}$$

*where* $x^* \in \Omega$ *is the unique solution of the HVI:* $\langle (I - f)x^*, x^* - \omega\rangle \le 0$, $\forall \omega \in \Omega$*.*

**Proof.** For the necessity of our proof, we can observe that, by a similar approach to that in the proof of Theorem 1, we obtain that there is a unique solution *x*<sup>∗</sup> ∈ Ω of (12).

We show the sufficiency below. To this aim, we suppose $\|x_n - x_{n+1}\| \to 0$ and $\sup_{n\ge 1}\|(I - f)x_n\| < \infty$, and prove the sufficiency by the following steps.

**Step 1.** We show the boundedness of $\{x_n\}$. In fact, by an inference similar to that in Step 1 of the proof of Theorem 1, we obtain that (13)–(17) hold. So, using Algorithm 2 and (17) we obtain

$$||z\_{\mathfrak{n}} - p|| \quad \le (1 - \mathfrak{a}\_{\mathfrak{n}}(1 - \delta)) ||x\_{\mathfrak{n}} - p|| + \mathfrak{a}\_{\mathfrak{n}}(M\_1 + ||(f - I)p||),$$

which together with (*γ<sup>n</sup>* + *δn*)*ζ* ≤ *γn*, yields

$$\begin{split} \|\mathbf{x}\_{n+1} - \boldsymbol{p}\| &\leq \beta\_{\boldsymbol{n}} \|\boldsymbol{v}\_{\boldsymbol{n}} - \boldsymbol{p}\| + (1 - \beta\_{\boldsymbol{n}}) \|\frac{1}{1 - \beta\_{\boldsymbol{n}}} [\gamma\_{\boldsymbol{n}}(\boldsymbol{z}\_{\boldsymbol{n}} - \boldsymbol{p}) + \delta\_{\boldsymbol{n}}(T\_{\boldsymbol{n}}\boldsymbol{z}\_{\boldsymbol{n}} - \boldsymbol{p})] \| + \gamma\_{\boldsymbol{n}} \|\boldsymbol{h}\_{\boldsymbol{n}} - \boldsymbol{z}\_{\boldsymbol{n}} \| \\ &\leq \beta\_{\boldsymbol{n}} (\|\mathbf{x}\_{\boldsymbol{n}} - \boldsymbol{p}\| + \boldsymbol{a}\_{\boldsymbol{n}} M\_{1}) + (1 - \beta\_{\boldsymbol{n}}) [(1 - \boldsymbol{a}\_{\boldsymbol{n}}(1 - \delta)) \|\mathbf{x}\_{\boldsymbol{n}} - \boldsymbol{p}\| \\ &\quad + \boldsymbol{a}\_{\boldsymbol{n}} (M\_{0} + M\_{1} + \|(f - I)\boldsymbol{p}\|) \| \\ &= [1 - \boldsymbol{a}\_{\boldsymbol{n}} (1 - \beta\_{\boldsymbol{n}}) (1 - \delta)] \|\mathbf{x}\_{\boldsymbol{n}} - \boldsymbol{p}\| + \boldsymbol{a}\_{\boldsymbol{n}} (1 - \beta\_{\boldsymbol{n}}) (1 - \delta) \frac{M\_{0} + \frac{1}{1 - \beta\boldsymbol{n}} M\_{1} + \|(f - I)\boldsymbol{p}\|}{1 - \delta}. \end{split}$$

Therefore, we get the boundedness of $\{x_n\}$ and hence that of the sequences $\{h_n\}$, $\{v_n\}$, $\{u_n\}$, $\{z_n\}$, $\{T_n z_n\}$.

**Step 2.** We show that $\exists M_4 > 0$ s.t.

$$(1-\alpha_n)(1-\beta_n)(1-\mu)[\|v_n - u_n\|^2 + \|h_n - u_n\|^2] \le \|x_n - p\|^2 - \|x_{n+1} - p\|^2 + \alpha_n M_4.$$

In fact, by Lemma 7 and the convexity of $\|\cdot\|^2$, we get

$$\begin{array}{rcl} \|x_{n+1} - p\|^2 & \le & \|\beta_n(v_n - p) + \gamma_n(z_n - p) + \delta_n(T_n z_n - p)\|^2 + 2\gamma_n\alpha_n\langle h_n - f(x_n), x_{n+1} - p\rangle \\ & \le & \beta_n\|v_n - p\|^2 + (1-\beta_n)\|z_n - p\|^2 + 2(1-\beta_n)\alpha_n\|h_n - f(x_n)\|\|x_{n+1} - p\| \\ & \le & \beta_n\|v_n - p\|^2 + (1-\beta_n)\{\alpha_n\delta\|x_n - p\|^2 + (1-\alpha_n)\|v_n - p\|^2 \\ & & -\,(1-\alpha_n)(1-\mu)[\|v_n - u_n\|^2 + \|h_n - u_n\|^2] + \alpha_n M_2\}, \end{array} \tag{34}$$

where $\exists M_2 > 0$ s.t. $M_2 \ge \sup_{n\ge 1} 2(\|(f-I)p\|\|z_n - p\| + \|h_n - f(x_n)\|\|x_{n+1} - p\|)$. Also,

$$\begin{array}{rcl} \|v_n - p\|^2 & \le & \|x_n - p\|^2 + \alpha_n(2M_1\|x_n - p\| + \alpha_n M_1^2) \\ & \le & \|x_n - p\|^2 + \alpha_n M_3, \end{array} \tag{35}$$

where $\exists M_3 > 0$ s.t. $M_3 \ge \sup_{n\ge 1}(2M_1\|x_n - p\| + \alpha_n M_1^2)$. Substituting (35) into (34), we have

$$\begin{array}{rcl} \|x_{n+1} - p\|^2 & \le & \beta_n\|x_n - p\|^2 + (1-\beta_n)\{(1 - \alpha_n(1-\delta))\|x_n - p\|^2 + (1-\alpha_n)\alpha_n M_3 \\ & & -\,(1-\alpha_n)(1-\mu)[\|v_n - u_n\|^2 + \|h_n - u_n\|^2] + \alpha_n M_2\} + \beta_n\alpha_n M_3 \\ & \le & \|x_n - p\|^2 - (1-\alpha_n)(1-\beta_n)(1-\mu)[\|v_n - u_n\|^2 + \|h_n - u_n\|^2] + \alpha_n M_4, \end{array} \tag{36}$$

where *M*<sup>4</sup> := *M*<sup>2</sup> + *M*3. This ensures that

$$(1 - a_n)(1 - \beta_n)(1 - \mu)\left[\|v_n - u_n\|^2 + \|h_n - u_n\|^2\right] \leq \|\mathbf{x}_n - p\|^2 - \|\mathbf{x}_{n+1} - p\|^2 + a_n M_4. \tag{37}$$

**Step 3.** We show that ∃*M* > 0 s.t.

$$\begin{array}{l} \|\mathbf{x}_{n+1} - p\|^2 \\ \leq \left[1 - \frac{(1-2\delta)\delta_n - \gamma_n}{1 - a_n\gamma_n} a_n\right] \|\mathbf{x}_n - p\|^2 + \frac{[(1-2\delta)\delta_n - \gamma_n] a_n}{1 - a_n\gamma_n} \cdot \left\{ \frac{2\gamma_n}{(1-2\delta)\delta_n - \gamma_n} \|f(\mathbf{x}_n) - p\| \, \|\mathbf{z}_n - \mathbf{x}_{n+1}\| \right. \\ \quad + \frac{2\delta_n}{(1-2\delta)\delta_n - \gamma_n} \|f(\mathbf{x}_n) - p\| \, \|\mathbf{z}_n - \mathbf{x}_n\| + \frac{2\delta_n}{(1-2\delta)\delta_n - \gamma_n} \langle f(p) - p, \mathbf{x}_n - p \rangle \\ \quad \left. + \frac{1}{(1-2\delta)\delta_n - \gamma_n} \cdot \frac{\sigma_n}{a_n} \|\mathbf{x}_n - \mathbf{x}_{n-1}\| \, 3M \right\}. \end{array}$$

In fact, we get

$$\begin{array}{rcl} \|v_n - p\|^2 & \leq & \|\mathbf{x}_n - p\|^2 + \sigma_n \|\mathbf{x}_n - \mathbf{x}_{n-1}\| \left( 2\|\mathbf{x}_n - p\| + \sigma_n \|\mathbf{x}_n - \mathbf{x}_{n-1}\| \right) \\ & \leq & \|\mathbf{x}_n - p\|^2 + \sigma_n \|\mathbf{x}_n - \mathbf{x}_{n-1}\| \, 3M, \end{array} \tag{38}$$

where $\exists M > 0$ s.t. $M \geq \sup_{n \geq 1} \{\|x_n - p\|, \sigma_n \|x_n - x_{n-1}\|\}$. Using Algorithm 1 and the convexity of $\|\cdot\|^2$, we get

$$\begin{array}{lll} \|\mathbf{x}\_{n+1} - \boldsymbol{p}\|^2 &\leq \|\beta\_{\boldsymbol{n}}(\boldsymbol{v}\_{n} - \boldsymbol{p}) + \gamma\_{\boldsymbol{n}}(\boldsymbol{z}\_{n} - \boldsymbol{p}) + \delta\_{\boldsymbol{n}}(T\_{n}\boldsymbol{z}\_{n} - \boldsymbol{p})\|^2 + 2\gamma\_{\boldsymbol{n}}a\_{\boldsymbol{n}}\langle h\_{n} - \boldsymbol{f}(\mathbf{x}\_{n}), \mathbf{x}\_{n+1} - \boldsymbol{p} \rangle \\ &\leq \beta\_{\boldsymbol{n}}\|\boldsymbol{v}\_{n} - \boldsymbol{p}\|^2 + (1 - \beta\_{\boldsymbol{n}})\|\frac{1}{1 - \beta\_{\boldsymbol{n}}}[\gamma\_{\boldsymbol{n}}(\boldsymbol{z}\_{n} - \boldsymbol{p}) + \delta\_{\boldsymbol{n}}(T\_{n}\boldsymbol{z}\_{n} - \boldsymbol{p})]\|^2 \\ &+ 2\gamma\_{\boldsymbol{n}}a\_{\boldsymbol{n}}\langle h\_{n} - \boldsymbol{p}, \mathbf{x}\_{n+1} - \boldsymbol{p} \rangle + 2\gamma\_{\boldsymbol{n}}a\_{\boldsymbol{n}}\langle \boldsymbol{p} - \boldsymbol{f}(\mathbf{x}\_{n}), \mathbf{x}\_{n+1} - \boldsymbol{p} \rangle, \end{array}$$

which leads to

$$\begin{array}{rl} \|\mathbf{x}_{n+1} - p\|^2 \leq & \beta_n \|v_n - p\|^2 + (1 - \beta_n)[(1 - a_n)\|h_n - p\|^2 + 2a_n \langle f(\mathbf{x}_n) - p, z_n - p \rangle] \\ & + \gamma_n a_n (\|h_n - p\|^2 + \|\mathbf{x}_{n+1} - p\|^2) + 2\gamma_n a_n \langle p - f(\mathbf{x}_n), \mathbf{x}_{n+1} - p \rangle. \end{array}$$

Using (17) and (38), we deduce that $\|h_n - p\|^2 \leq \|v_n - p\|^2 \leq \|x_n - p\|^2 + \sigma_n \|x_n - x_{n-1}\| \, 3M$. Hence,

$$\begin{array}{rl} \|x_{n+1} - p\|^2 \leq & [1 - \alpha_n(1 - \beta_n)]\|x_n - p\|^2 + [1 - \alpha_n(1 - \beta_n)]\sigma_n \|x_n - x_{n-1}\| \, 3M + 2\alpha_n\delta_n \langle f(x_n) - p, z_n - p \rangle \\ & + \gamma_n\alpha_n (\|x_n - p\|^2 + \|x_{n+1} - p\|^2) + (1 - \beta_n)\alpha_n\sigma_n \|x_n - x_{n-1}\| \, 3M + 2\gamma_n\alpha_n \langle f(x_n) - p, z_n - x_{n+1} \rangle \\ \leq & [1 - \alpha_n(1 - \beta_n)]\|x_n - p\|^2 + 2\gamma_n\alpha_n \|f(x_n) - p\| \|z_n - x_{n+1}\| + 2\alpha_n\delta_n \langle f(x_n) - p, x_n - p \rangle \\ & + 2\alpha_n\delta_n \langle f(x_n) - p, z_n - x_n \rangle + \gamma_n\alpha_n (\|x_n - p\|^2 + \|x_{n+1} - p\|^2) + \sigma_n \|x_n - x_{n-1}\| \, 3M \\ \leq & [1 - \alpha_n(1 - \beta_n)]\|x_n - p\|^2 + 2\gamma_n\alpha_n \|f(x_n) - p\| \|z_n - x_{n+1}\| + 2\alpha_n\delta_n\delta \|x_n - p\|^2 + 2\alpha_n\delta_n \langle f(p) - p, x_n - p \rangle \\ & + 2\alpha_n\delta_n \|f(x_n) - p\| \|z_n - x_n\| + \gamma_n\alpha_n (\|x_n - p\|^2 + \|x_{n+1} - p\|^2) + \sigma_n \|x_n - x_{n-1}\| \, 3M, \end{array}$$

which immediately yields

$$\begin{array}{l} \|\mathbf{x}_{n+1} - p\|^2 \\ \leq \left[1 - \frac{(1-2\delta)\delta_n - \gamma_n}{1 - a_n\gamma_n} a_n\right] \|\mathbf{x}_n - p\|^2 + \frac{[(1-2\delta)\delta_n - \gamma_n] a_n}{1 - a_n\gamma_n} \cdot \left\{ \frac{2\gamma_n}{(1-2\delta)\delta_n - \gamma_n} \|f(\mathbf{x}_n) - p\| \, \|\mathbf{z}_n - \mathbf{x}_{n+1}\| \right. \\ \quad + \frac{2\delta_n}{(1-2\delta)\delta_n - \gamma_n} \|f(\mathbf{x}_n) - p\| \, \|\mathbf{z}_n - \mathbf{x}_n\| + \frac{2\delta_n}{(1-2\delta)\delta_n - \gamma_n} \langle f(p) - p, \mathbf{x}_n - p \rangle \\ \quad \left. + \frac{1}{(1-2\delta)\delta_n - \gamma_n} \cdot \frac{\sigma_n}{a_n} \|\mathbf{x}_n - \mathbf{x}_{n-1}\| \, 3M \right\}. \end{array} \tag{39}$$

**Step 4.** In order to show that *xn* → *x*<sup>∗</sup> ∈ Ω, which is the unique solution of (12), we can follow a similar method to that in Step 4 for the proof of Theorem 1.

Finally, we apply our main results to solve the VIP and the common fixed point problem (CFPP) in the following illustrative example.

The starting point $x_0 = x_1$ is picked randomly in the real line. Put $f(u) = \frac{1}{8}\sin u$, $\gamma = l = \mu = \frac{1}{2}$, $\sigma_n = \alpha_n = \frac{1}{n+1}$, $\beta_n = \frac{1}{3}$, $\gamma_n = \frac{1}{6}$ and $\delta_n = \frac{1}{2}$.

We first provide an example of a Lipschitz continuous, pseudomonotone self-mapping $A$ such that $A(C)$ is bounded, together with a strictly pseudocontractive self-mapping $T_1$ with $\Omega = \operatorname{Fix}(T_1) \cap \operatorname{VI}(C, A) \neq \emptyset$. Let $C = [-1, 2]$ and let $H$ be the real line with the inner product $\langle a, b \rangle = ab$ and induced norm $\|\cdot\| = |\cdot|$. Then $f$ is a $\delta$-contractive map with $\delta = \frac{1}{8} \in [0, \frac{1}{2})$ and $f(H) \subset C$, because $|f(u) - f(v)| = \frac{1}{8}|\sin u - \sin v| \leq \frac{1}{8}|u - v|$ for all $u, v \in H$.

Let $A : H \to H$ and $T_1 : H \to H$ be defined as $Au := \frac{1}{1 + |\sin u|} - \frac{1}{1 + |u|}$ and $T_1 u := \frac{1}{2}u - \frac{3}{8}\sin u$ for all $u \in H$. We first show that $A$ is an $L$-Lipschitzian, pseudomonotone operator with $L = 2$, such that $A(C)$ is bounded. In fact, for all $u, v \in H$ we get

$$\begin{array}{rcl} |Au - Av| & \leq & \left| \frac{1}{1 + |u|} - \frac{1}{1 + |v|} \right| + \left| \frac{1}{1 + |\sin u|} - \frac{1}{1 + |\sin v|} \right| \\ & = & \frac{||v| - |u||}{(1 + |u|)(1 + |v|)} + \frac{||\sin v| - |\sin u||}{(1 + |\sin u|)(1 + |\sin v|)} \\ & \leq & \frac{|u - v|}{(1 + |u|)(1 + |v|)} + \frac{|\sin u - \sin v|}{(1 + |\sin u|)(1 + |\sin v|)} \\ & \leq & 2|u - v|. \end{array}$$

This implies that *A* is 2-Lipschitzian. Next, we show that *A* is pseudomonotone: for any given *u*, *v* ∈ *H*, the following implication holds:

$$\langle Au, u - v \rangle = (\frac{1}{1 + |\sin u|} - \frac{1}{1 + |u|})(u - v) \le 0 \Rightarrow \langle Av, u - v \rangle = (\frac{1}{1 + |\sin v|} - \frac{1}{1 + |v|})(u - v) \le 0.$$
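Both claimed properties of $A$ can be spot-checked numerically. The sketch below is only an illustration, not part of the proof; the sampling range, seed, and tolerance are arbitrary choices we introduce here. It tests the 2-Lipschitz bound and the pseudomonotonicity implication on random pairs:

```python
import math
import random

def A(u):
    # Au = 1/(1 + |sin u|) - 1/(1 + |u|)
    return 1.0 / (1.0 + abs(math.sin(u))) - 1.0 / (1.0 + abs(u))

random.seed(0)
pairs = [(random.uniform(-5.0, 5.0), random.uniform(-5.0, 5.0))
         for _ in range(10_000)]

# 2-Lipschitz bound: |Au - Av| <= 2|u - v|
lipschitz_ok = all(abs(A(u) - A(v)) <= 2.0 * abs(u - v) + 1e-12
                   for u, v in pairs)

# pseudomonotonicity: <Au, u - v> <= 0  implies  <Av, u - v> <= 0
pseudo_ok = all(A(v) * (u - v) <= 1e-12
                for u, v in pairs if A(u) * (u - v) <= 0.0)

print(lipschitz_ok, pseudo_ok)
```

Random sampling is only evidence, of course; the displayed inequalities above are what establish the properties.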

Furthermore, it is easy to see that $T_1$ is strictly pseudocontractive with constant $\zeta_1 = \frac{1}{4}$. In fact, we observe that for all $u, v \in H$,

$$|T_1 u - T_1 v| \leq \frac{1}{2}|u - v| + \frac{3}{8}|\sin u - \sin v| \leq |u - v| + \frac{1}{4}|(I - T_1)u - (I - T_1)v|.$$

It is clear that $(\gamma_n + \delta_n)\zeta_1 = (\frac{1}{6} + \frac{1}{2}) \cdot \frac{1}{4} \leq \frac{1}{6} = \gamma_n < (1 - 2\delta)\delta_n = (1 - 2 \cdot \frac{1}{8}) \cdot \frac{1}{2} = \frac{3}{8}$ for all $n \geq 1$. In addition, it is clear that $\operatorname{Fix}(T_1) = \{0\}$ and $A0 = 0$, because the derivative $d(T_1 u)/du = \frac{1}{2} - \frac{3}{8}\cos u > 0$. Therefore, $\Omega = \{0\} \neq \emptyset$. In this case, Algorithm 1 can be rewritten as follows:

$$\begin{cases} v_n = x_n - \frac{1}{n+1}(x_{n-1} - x_n), \\ u_n = P_C(v_n - \ell_n A v_n), \\ z_n = \frac{1}{n+1} f(x_n) + \frac{n}{n+1} P_{C_n}(v_n - \ell_n A u_n), \\ x_{n+1} = \frac{1}{3} x_n + \frac{1}{6} P_{C_n}(v_n - \ell_n A u_n) + \frac{1}{2} T_1 z_n \quad \forall n \geq 1, \end{cases}$$

with $\{C_n\}$ and $\{\ell_n\}$ selected as in Algorithm 1. Then, by Theorem 1, we know that $x_n \to 0 \in \Omega$ iff $x_n - x_{n+1} \to 0$ $(n \to \infty)$ and $\sup_{n \geq 1} |x_n - \frac{1}{8}\sin x_n| < \infty$.
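For intuition, the specialized scheme can be simulated directly. The sketch below is illustrative only: it replaces the step sizes $\ell_n$ and the sets $C_n$ of Algorithm 1 (whose exact construction is not reproduced here) with a fixed step $\ell = 0.1$ and the set $C = [-1, 2]$ itself, both simplifying assumptions; even so, the iterates contract toward the common solution $0$:

```python
import math

def proj_C(x, lo=-1.0, hi=2.0):
    # metric projection onto C = [-1, 2]
    return min(max(x, lo), hi)

def A(u):
    return 1.0 / (1.0 + abs(math.sin(u))) - 1.0 / (1.0 + abs(u))

def T1(u):
    return 0.5 * u - 0.375 * math.sin(u)

def f(u):
    return 0.125 * math.sin(u)

def run(x0=1.0, iters=200, ell=0.1):
    x_prev = x = x0                       # x_0 = x_1
    for n in range(1, iters + 1):
        v = x - (x_prev - x) / (n + 1)    # inertial extrapolation
        u = proj_C(v - ell * A(v))        # projection step
        w = proj_C(v - ell * A(u))        # extragradient-type step (C_n ~ C)
        z = f(x) / (n + 1) + n / (n + 1) * w
        x_prev, x = x, x / 3 + w / 6 + T1(z) / 2
    return x

print(abs(run()) < 1e-6)
```

Near $0$ the update behaves roughly like $x_{n+1} \approx 0.56\, x_n$, which explains the fast numerical convergence seen here.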

On the other hand, Algorithm 2 can be rewritten as follows:

$$\begin{cases} v_n = x_n - \frac{1}{n+1}(x_{n-1} - x_n), \\ u_n = P_C(v_n - \ell_n A v_n), \\ z_n = \frac{1}{n+1} f(x_n) + \frac{n}{n+1} P_{C_n}(v_n - \ell_n A u_n), \\ x_{n+1} = \frac{1}{3} v_n + \frac{1}{6} P_{C_n}(v_n - \ell_n A u_n) + \frac{1}{2} T_1 z_n \quad \forall n \geq 1, \end{cases}$$

with $\{C_n\}$ and $\{\ell_n\}$ selected as in Algorithm 2. Then, by Theorem 2, we know that $x_n \to 0 \in \Omega$ iff $x_n - x_{n+1} \to 0$ $(n \to \infty)$ and $\sup_{n \geq 1} |x_n - \frac{1}{8}\sin x_n| < \infty$.

**Author Contributions:** All authors contributed equally to this manuscript.

**Funding:** This research was partially supported by the Innovation Program of Shanghai Municipal Education Commission (15ZZ068), Ph.D. Program Foundation of Ministry of Education of China (20123127110002) and Program for Outstanding Academic Leaders in Shanghai City (15XD1503100).

**Conflicts of Interest:** The authors certify that they have no affiliations with or involvement in any organization or entity with any financial or non-financial interest in the subject matter discussed in this manuscript.

© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## **Generalized Nonsmooth Exponential-Type Vector Variational-Like Inequalities and Nonsmooth Vector Optimization Problems in Asplund Spaces**

**Syed Shakaib Irfan 1,†, Mijanur Rahaman 2,†, Iqbal Ahmad 1,\*, Rais Ahmad 2,† and Saddam Husain 2,†**


Received: 27 February 2019; Accepted: 4 April 2019; Published: 10 April 2019

**Abstract:** The aim of this article is to study new types of generalized nonsmooth exponential-type vector variational-like inequality problems involving the Mordukhovich limiting subdifferential operator. We establish some relationships between generalized nonsmooth exponential-type vector variational-like inequality problems and vector optimization problems under some invexity assumptions. The celebrated Fan-KKM theorem is used to obtain the existence of solutions of generalized nonsmooth exponential-type vector variational-like inequality problems. In support of our main result, some examples are given. The results presented in this article improve, extend, and generalize some known results offered in the literature.

**Keywords:** vector variational-like inequalities; vector optimization problems; limiting (*p*,*r*)-*α*-(*η*, *θ*)-invexity; Lipschitz continuity; Fan-KKM theorem

#### **1. Introduction**

The vector variational inequality was introduced and studied in [1] in finite-dimensional Euclidean spaces. Vector variational inequalities have emerged as an efficient tool for deriving optimality conditions for vector optimization problems. Vector variational-like inequalities for nonsmooth mappings are useful generalizations of vector variational inequalities. For more details on vector variational inequalities and their generalizations, see the references [2–8]. In 1998, Giannessi [9] proved a necessary and sufficient condition for the existence of an efficient solution of a vector optimization problem for differentiable and convex mappings by using a Minty type vector variational inequality problem. Under different assumptions, many researchers have studied vector optimization problems by using different types of Minty type vector variational inequality problems. Yang et al. [8] generalized the result of Giannessi [9] to differentiable but pseudoconvex mappings.

On the other hand, Yang and Yang [10] considered the vector variational-like inequality problem and showed relationships between the vector variational-like inequality and the vector optimization problem under the assumptions of pseudoinvexity or invariant pseudomonotonicity. Later, some researchers extended the above problems in the direction of nonsmooth mappings. Rezaie and Zafarani [11] established a correspondence between a solution of the generalized vector variational-like inequality problem and the nonsmooth vector optimization problem under the same assumptions as Yang and Yang [10] in the setting of Clarke's subdifferentiability. Because the Clarke subdifferential is a larger class than the Mordukhovich limiting subdifferential, many authors studied vector variational-like inequality problems and vector optimization problems by means of the Mordukhovich

limiting subdifferential. Later, Long et al. [12] and Oveisiha and Zafarani [13] studied generalized vector variational-like inequality problem and discussed the relationships between generalized vector variational-like inequality problem and nonsmooth vector optimization problem for pseudoinvex mappings, whereas Chen and Huang [14] obtained similar results for invex mappings by means of Mordukhovich limiting subdifferential.

Due to the several applications of invex sets and exponential mappings in engineering, economics, population growth, and mathematical modelling problems, Antczak [15] introduced exponential (*p*,*r*)-invex sets and mappings. After that, Mandal and Nahak [16] introduced (*p*,*r*)-*ρ*-(*η*, *θ*)-invex mappings, which generalize the result of Antczak [15]. By using (*p*,*r*)-invexity, Jayaswal and Choudhury [17] introduced an exponential-type vector variational-like inequality problem involving locally Lipschitz mappings.

In this paper, we introduce generalized nonsmooth exponential-type vector variational-like inequality problems involving the Mordukhovich limiting subdifferential in Asplund spaces. We obtain some relationships between an efficient solution of nonsmooth vector optimization problems and these generalized nonsmooth exponential-type vector variational-like inequality problems using limiting (*p*,*r*)-*α*-(*η*, *θ*)-invex mappings. Employing the Fan-KKM theorem, we establish an existence result for our problem in Asplund spaces.

#### **2. Preliminaries**

Suppose that $X$ is a real Banach space with dual space $X^*$ and $\langle \cdot, \cdot \rangle$ is the duality pairing between them. Assume that $K \subseteq X$ is a nonempty subset, $C \subset \mathbb{R}^n$ is a pointed, closed, convex cone with nonempty interior, i.e., $\operatorname{int} C \neq \emptyset$, and $f : K \to \mathbb{R}$ is a non-differentiable mapping. When the mappings are non-differentiable, many authors use subdifferentials such as the Fréchet subdifferential, the Mordukhovich limiting subdifferential, and the Clarke subdifferential. Now, we mention some notions and results already known in the literature.

**Definition 1.** *Suppose that $f : X \to \mathbb{R}$ is a proper lower semicontinuous mapping on a Banach space $X$. Then, the mapping $f$ is said to be Fréchet subdifferentiable and $\xi^*$ is a Fréchet subderivative of $f$ at $x$ (i.e., $\xi^* \in \partial_F f(x)$) if $x \in \operatorname{dom} f$ and*

$$\liminf\_{||h||\to 0} \frac{f(x+h) - f(x) - \langle \xi^\*, h \rangle}{||h||} \ge 0.$$
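As a standard illustration (not taken from the text), consider $f(x) = |x|$ on $X = \mathbb{R}$ at $x = 0$. For $\xi^* = \xi \in \mathbb{R}$,

$$\liminf_{|h| \to 0} \frac{|0 + h| - |0| - \xi h}{|h|} = \liminf_{|h| \to 0}\left(1 - \xi \operatorname{sign}(h)\right) = 1 - |\xi|,$$

which is nonnegative precisely when $|\xi| \leq 1$; hence $\partial_F |\cdot|(0) = [-1, 1]$.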

**Definition 2** ([18])**.** *Suppose that* Ω *is a nonempty subset of a normed vector space X. Then, for any x* ∈ *X and ε* ≥ 0*, the set of ε-normals to* Ω *at x is defined as*

$$\hat{N}_{\varepsilon}(x; \Omega) = \left\{ x^* \in X^* : \limsup_{u \xrightarrow{\Omega} x} \frac{\langle x^*, u - x \rangle}{\|u - x\|} \leq \varepsilon \right\}.$$

*For $\tilde{x} \in \Omega$, the limiting normal cone to $\Omega$ at $\tilde{x}$ is*

$$N(\tilde{x}; \Omega) = \limsup_{x \xrightarrow{\Omega} \tilde{x}, \, \varepsilon \downarrow 0} \hat{N}_{\varepsilon}(x; \Omega).$$

*Consider a mapping $f : X \to \mathbb{R} \cup \{\pm\infty\}$ and a point $\tilde{x} \in X$ at which $f$ is finite. Then, the limiting subdifferential of $f$ at $\tilde{x}$ is the set*

$$\partial_L f(\tilde{x}) = \{x^* \in X^* : (x^*, -1) \in N((\tilde{x}, f(\tilde{x})); \operatorname{epi} f)\},$$

*where epi f is defined as epi f* = {(*x*, *a*) ∈ *X* × R : *f*(*x*) ≤ *a*}*. If* | *f*(*x*˜)| = ∞*, then we put ∂<sup>L</sup> f*(*x*˜) = ∅*.*

**Remark 1** ([18])**.** *It is noted that the Clarke subdifferential is a larger class than the Fréchet subdifferential and the limiting subdifferential, with the relation $\partial_F f(x) \subseteq \partial_L f(x) \subseteq \partial_C f(x)$.*

**Definition 3.** *A Banach space $X$ is said to be an Asplund space if, whenever $K$ is an open subset of $X$ and $f : K \to \mathbb{R}$ is a continuous convex mapping, $f$ is Fréchet subdifferentiable at every point of a dense subset of $K$.*

**Remark 2.** *It is remarked that a Banach space $X$ has the Asplund property if every separable subspace of $X$ has a separable dual. The concept of an Asplund space captures the differentiability characteristics of continuous convex mappings on Euclidean space. Every reflexive Banach space is an Asplund space. The space $c_0$ of real sequences converging to 0 is a non-reflexive separable Banach space, but it is an Asplund space. For more details, we refer to [19].*

**Definition 4.** *A bi-mapping $\eta : K \times K \to K$ is said to be affine with respect to the first argument if, for any $\lambda \in [0, 1]$ and $u_1, u_2 \in K$ with $\lambda u_1 + (1 - \lambda)u_2 \in K$,*

$$
\eta(\lambda u\_1 + (1 - \lambda)u\_2, \upsilon) = \lambda \eta(u\_1, \upsilon) + (1 - \lambda)\eta(u\_2, \upsilon), \ \forall \upsilon \in K.
$$

**Definition 5.** *A bi-mapping η* : *K* × *K* −→ *X is said to be continuous in the first argument if,*

$$\|\eta(u, z) - \eta(v, z)\| \to 0 \text{ as } \|u - v\| \to 0, \ \forall u, v \in K, \ z \text{ fixed.}$$

**Definition 6** ([20])**.** *Suppose that K is a subset of a topological vector space Y. A set-valued mapping T* : *<sup>K</sup>* −→ <sup>2</sup>*<sup>Y</sup> is called a KKM-mapping if, for each nonempty finite subset* {*y*1, *<sup>y</sup>*2, ··· , *yn*} ⊂ *K, we have*

$$\operatorname{Co}\{y_1, y_2, \dots, y_n\} \subseteq \bigcup_{i=1}^n T(y_i),$$

*where Co denotes the convex hull.*

**Theorem 1** (Fan-KKM Theorem [20])**.** *Suppose that K is a subset of a topological vector space Y and <sup>T</sup>* : *<sup>K</sup>* −→ <sup>2</sup>*<sup>Y</sup> is a KKM-mapping. If, for each <sup>y</sup>* <sup>∈</sup> *<sup>K</sup>*, *<sup>T</sup>*(*y*) *is closed and for at least one <sup>y</sup>* <sup>∈</sup> *<sup>K</sup>*, *<sup>T</sup>*(*y*) *is compact, then*

$$\bigcap_{y \in K} T(y) \neq \emptyset.$$

**Definition 7.** *A mapping $f : X \to \mathbb{R}^n$ is called locally Lipschitz continuous at $x_0$ if there exist $L > 0$ and a neighbourhood $N$ of $x_0$ such that*

$$\|f(y) - f(z)\| \leq L \|y - z\|, \ \forall y, z \in N(x_0).$$

*If f is locally Lipschitz continuous for each x*<sup>0</sup> *in X, then f is locally Lipschitz continuous mapping on X.*

Slightly modifying the structure of the definition of (*p*,*r*)-*α*-(*η*, *θ*)-invexity given in [16], we have the following definition.

**Definition 8.** *Suppose that f* : *<sup>X</sup>* −→ <sup>R</sup>*<sup>n</sup> is a locally Lipschitz continuous mapping, e* = (1, 1, ··· , 1) <sup>∈</sup> <sup>R</sup>*<sup>n</sup> and p*,*r are arbitrary real numbers. If there exist the mappings η*, *θ* : *X* × *X* −→ *X and a constant α* ∈ R *such that one of the following relations*

$$\begin{array}{ll} \frac{1}{r}\left(\exp^{r(f(x) - f(u))} - 1\right) \geq \frac{1}{p}\left\langle \xi; \exp^{p\eta(x,u)} - e \right\rangle + \alpha \|\theta(x, u)\|^2 e \ (> \text{ if } x \neq u) & \text{for } p \neq 0, r \neq 0, \\ \frac{1}{r}\left(\exp^{r(f(x) - f(u))} - 1\right) \geq \langle \xi; \eta(x, u) \rangle + \alpha \|\theta(x, u)\|^2 e \ (> \text{ if } x \neq u) & \text{for } p = 0, r \neq 0, \\ f(x) - f(u) \geq \frac{1}{p}\left\langle \xi; \exp^{p\eta(x,u)} - e \right\rangle + \alpha \|\theta(x, u)\|^2 e \ (> \text{ if } x \neq u) & \text{for } p \neq 0, r = 0, \\ f(x) - f(u) \geq \langle \xi; \eta(x, u) \rangle + \alpha \|\theta(x, u)\|^2 e \ (> \text{ if } x \neq u) & \text{for } p = 0, r = 0, \end{array}$$

*holds for each ξ* ∈ *∂<sup>L</sup> f*(*u*)*, then f is called limiting* (*p*,*r*)*-α-*(*η*, *θ*)*-invex* (*strictly limiting* (*p*,*r*)*-α-*(*η*, *θ*)*-invex*) *with respect to η and θ at the point u on X. If f is limiting* (*p*,*r*)*-α-*(*η*, *θ*)*-invex with respect to η and θ at each u* ∈ *X, then f is limiting* (*p*,*r*)*-α-*(*η*, *θ*)*-invex with respect to the same η and θ on X.*

**Remark 3.** *We only consider the case $p \neq 0, r \neq 0$ when proving the results; the other cases are excluded, as they follow by straightforward modifications of the inequalities. Throughout the proofs of the results, we assume that $r > 0$. Under the other condition $r < 0$, the directions of the inequalities in the proofs are reversed.*
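To see how Definition 8 recovers familiar notions, note (a sanity check, not from the original) that in the case $p = 0$, $r = 0$ with $\alpha = 0$ and $\eta(x, u) = x - u$, the defining relation reduces to

$$f(x) - f(u) \geq \langle \xi; x - u \rangle, \quad \forall \xi \in \partial_L f(u),$$

which is the usual subgradient inequality; a general $\eta$ and the penalty term $\alpha \|\theta(x, u)\|^2 e$ relax this in the invex direction.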

**Problem 1.** *Suppose that $f = (f_1, f_2, \cdots, f_n) : K \to \mathbb{R}^n$ is a vector-valued mapping such that each $f_i : K \to \mathbb{R}$ $(i = 1, 2, \cdots, n)$ is a locally Lipschitz continuous mapping. The nonsmooth vector optimization problem is to*

$$\underset{C}{\operatorname{Min}} \, f(x) = (f_1(x), f_2(x), \dots, f_n(x)) \tag{P_1}$$

$$\text{subject to } x \in K,$$

*where $C \subset \mathbb{R}^n$ is a pointed, closed and convex cone with $\operatorname{int} C \neq \emptyset$.*

**Definition 9.** *Suppose that f* : *<sup>K</sup>* −→ <sup>R</sup>*<sup>n</sup> is a vector-valued mapping. A point <sup>x</sup>*¯ <sup>∈</sup> *K is called*

(*i*) *an efficient solution* (*Pareto solution*) *of* (*P*1) *if and only if*

$$f(y) - f(\bar{x}) \notin -C \setminus \{0\}, \ \forall y \in K;$$

(*ii*) *a weak efficient solution* (*weak Pareto solution*) *of* (*P*1) *if and only if*

$$f(y) - f(\bar{x}) \notin -\operatorname{int} C, \ \forall y \in K.$$

Now, we introduce the following two kinds of generalized nonsmooth exponential-type vector variational-like inequality problems. Suppose that $K \neq \emptyset$ is a subset of an Asplund space $X$ and $C \subset \mathbb{R}^n$ is a pointed, closed and convex cone such that $\operatorname{int} C \neq \emptyset$. Assume that $f = (f_1, f_2, \cdots, f_n) : K \to \mathbb{R}^n$ is a non-differentiable locally Lipschitz continuous mapping, $\eta, \theta : K \times K \to X$ are continuous mappings, $\beta$ and $p$ are arbitrary real numbers, and $e = (1, 1, \cdots, 1) \in \mathbb{R}^n$.

**Problem 2.** *Generalized nonsmooth exponential-type strong vector variational like inequality problem is to find a vector x*¯ ∈ *K such that*

$$\left.\begin{array}{ll} \frac{1}{p}\left\langle \xi; \left(\exp^{p\eta(y,\bar{x})} - e\right)\right\rangle + \beta \|\theta(y, \bar{x})\|^2 e \notin -C \setminus \{0\}, & \text{for } p \neq 0, \\ \langle \xi; \eta(y, \bar{x}) \rangle + \beta \|\theta(y, \bar{x})\|^2 e \notin -C \setminus \{0\}, & \text{for } p = 0, \end{array}\right\} \ \forall \xi \in \partial_L f(\bar{x}), \, y \in K. \tag{P2}$$

**Problem 3.** *Generalized nonsmooth exponential-type weak vector variational like inequality problem is to find a vector x*¯ ∈ *K such that*

$$\left.\begin{array}{ll} \frac{1}{p}\left\langle \xi; \left(\exp^{p\eta(y,\bar{x})} - e\right)\right\rangle + \beta \|\theta(y, \bar{x})\|^2 e \notin -\operatorname{int} C, & \text{for } p \neq 0, \\ \langle \xi; \eta(y, \bar{x}) \rangle + \beta \|\theta(y, \bar{x})\|^2 e \notin -\operatorname{int} C, & \text{for } p = 0, \end{array}\right\} \ \forall \xi \in \partial_L f(\bar{x}), \, y \in K. \tag{P3}$$

#### **Special Cases:**


Clearly, every solution of (*P*2) is also a solution of (*P*3). We construct the following example in support of (*P*2).

**Example 1.** *Let us consider $X = \mathbb{R}$, $K = [-1, 1]$, $C = \mathbb{R}^2_+$, $p = 1$, and let the mapping $f = (f_1, f_2)$ be defined by*

$$f\_1(\mathbf{x}) = \begin{cases} \mathbf{x}, & \text{if } \mathbf{x} \ge \mathbf{0}, \\ \mathbf{0}, & \text{if } \mathbf{x} < \mathbf{0}, \end{cases} \quad \text{and} \quad f\_2(\mathbf{x}) = \begin{cases} \mathbf{x}^2 + 2\mathbf{x}, & \text{if } \mathbf{x} \ge \mathbf{0}, \\ \mathbf{0}, & \text{if } \mathbf{x} < \mathbf{0}. \end{cases}$$

*Now, the limiting subdifferential of f is*

$$\partial\_L f(\mathbf{x}) = \begin{cases} (1, 2\mathbf{x} + 2), & \text{if } \mathbf{x} > \mathbf{0}, \\ \{ (\mathbf{s}, t) : \mathbf{s} \in [0, 1], t \in [0, 2] \}, & \text{if } \mathbf{x} = \mathbf{0}, \\ (0, 0), & \text{if } \mathbf{x} < \mathbf{0}. \end{cases}$$
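The smooth branches of this formula can be checked with finite differences. The sketch below is illustrative only; the step size $h = 10^{-6}$ and the sample points $x = 1$ and $x = -1$ are arbitrary choices:

```python
def f1(x):
    return x if x >= 0 else 0.0

def f2(x):
    return x * x + 2.0 * x if x >= 0 else 0.0

h = 1e-6
# for x = 1 > 0 the formula predicts the singleton gradient (1, 2x + 2) = (1, 4)
g_pos = ((f1(1.0 + h) - f1(1.0)) / h, (f2(1.0 + h) - f2(1.0)) / h)
# for x = -1 < 0 it predicts (0, 0)
g_neg = ((f1(-1.0 + h) - f1(-1.0)) / h, (f2(-1.0 + h) - f2(-1.0)) / h)
print([round(g, 3) for g in g_pos], [round(g, 3) for g in g_neg])
```

The finite-difference quotients come out close to $(1, 4)$ and $(0, 0)$, respectively; only the set at the kink $x = 0$ requires the limiting-subdifferential construction itself.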

*Define the mappings η*, *θ* : *K* × *K* −→ *X by*

$$
\eta(y, x) = \ln\left(|y - x| + 1\right) \text{ and } \theta(y, x) = \frac{y - x}{2}, \ \forall y, x \in K.
$$

*Then, the problem* (*P*2) *is to find a point x*¯ ∈ *K such that*

$$\left\langle \xi; \left(\exp^{\eta(y,\bar{x})} - e\right)\right\rangle + \beta \|\theta(y, \bar{x})\|^2 e \notin -C \setminus \{0\}, \ \forall \xi \in \partial_L f(\bar{x}), \, y \in K,$$

*which is equivalent to saying that*

$$\left\langle \partial_L f(\bar{x}); \left(\exp^{\eta(y,\bar{x})} - e\right)\right\rangle + \beta \|\theta(y, \bar{x})\|^2 e \nsubseteq -C \setminus \{0\}, \ \forall y \in K.$$

*For x*¯ = 0 *and β* ≥ 4*, we can see that*

$$\begin{array}{rl} & \left\langle \partial_L f(\bar{x}); \left(\exp^{\eta(y,\bar{x})} - e\right)\right\rangle + \beta \|\theta(y, \bar{x})\|^2 e \\ = & \left\{\left(s\left(\exp^{\ln(|y - \bar{x}| + 1)} - 1\right), t\left(\exp^{\ln(|y - \bar{x}| + 1)} - 1\right)\right) : s \in [0, 1], t \in [0, 2]\right\} + \beta \left\|\frac{y - \bar{x}}{2}\right\|^2 e \\ = & \left\{\left(s|y - \bar{x}|, t|y - \bar{x}|\right) : s \in [0, 1], t \in [0, 2]\right\} + \frac{\beta}{4}\|y - \bar{x}\|^2 e \\ = & \left\{(s|y|, t|y|) : s \in [0, 1], t \in [0, 2]\right\} + \frac{\beta}{4}|y|^2 e \\ \nsubseteq & -C \setminus \{0\}. \end{array}$$

*Hence, x*¯ = 0 *is the solution of the problem* (*P*2)*.*
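The membership claim of the example can also be verified exhaustively on a grid: since both components of every element of the set are nonnegative, no element can lie in $-C \setminus \{0\}$. The sketch below (the grid resolution is an arbitrary choice) confirms this for $\bar{x} = 0$ and $\beta = 4$:

```python
beta, xbar = 4.0, 0.0
violations = 0
for y in [i / 10.0 for i in range(-10, 11)]:        # y in K = [-1, 1]
    for s in [i / 10.0 for i in range(0, 11)]:      # s in [0, 1]
        for t in [i / 10.0 for i in range(0, 21)]:  # t in [0, 2]
            a = s * abs(y - xbar) + (beta / 4.0) * (y - xbar) ** 2
            b = t * abs(y - xbar) + (beta / 4.0) * (y - xbar) ** 2
            # (a, b) lies in -C \ {0} iff a <= 0, b <= 0, and (a, b) != (0, 0)
            if a <= 0.0 and b <= 0.0 and (a, b) != (0.0, 0.0):
                violations += 1
print(violations)  # 0: no grid point lands in -C \ {0}
```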

#### **3. Main Results**

Now, we prove a result which ensures that the solution of (*P*2) is an efficient solution of (*P*1).

**Theorem 2.** *Suppose that $K \neq \emptyset$ is a subset of an Asplund space $X$, $C = \mathbb{R}^n_+$ and $f = (f_1, f_2, \cdots, f_n) : K \longrightarrow \mathbb{R}^n$ is a locally Lipschitz continuous mapping on $K$. Let $\eta, \theta : K \times K \longrightarrow X$ be mappings such that each $f_i$ $(i = 1, 2, \cdots, n)$ is a limiting $(p,r)$-$\alpha_i$-$(\eta,\theta)$-invex mapping with respect to $\eta$ and $\theta$. If $\bar{x} \in K$ is a solution of $(P2)$, then $\bar{x}$ is an efficient solution of $(P1)$.*

**Proof.** Assume that *x*¯ ∈ *K* is a solution of (*P*2). We will prove that *x*¯ ∈ *K* is an efficient solution of (*P*1). Indeed, let us assume that *x*¯ ∈ *K* is not an efficient solution of (*P*1). Then, ∃*y* ∈ *K* such that

$$f(y) - f(\bar{x}) = \left( f_1(y) - f_1(\bar{x}), f_2(y) - f_2(\bar{x}), \dots, f_n(y) - f_n(\bar{x}) \right) \in -C \setminus \{0\},$$

which implies that

$$f_i(y) - f_i(\bar{x}) \le 0, \ \forall i = 1, 2, \dots, n, \tag{1}$$

and strict inequality holds for some 1 ≤ *k* ≤ *n*.

Since $C = \mathbb{R}^n_+$, the exponential mapping is monotonically increasing and $r > 0$; hence, from (1), we have

$$\frac{1}{r}\left(\exp^{r(f_i(y)-f_i(\bar{x}))} - 1\right) \le 0, \quad \forall i = 1, 2, \dots, n. \tag{2}$$

Since each *fi* is limiting (*p*,*r*)-*αi*-(*η*, *θ*)-invex mapping with respect to *η* and *θ* at *x*¯, therefore for all *ξ<sup>i</sup>* ∈ *∂<sup>L</sup> fi*(*x*¯), we have

$$\frac{1}{r}\left(\exp^{r(f_i(y)-f_i(\bar{x}))} - 1\right) \ge \frac{1}{p}\left\langle \xi_i; \left(\exp^{p\eta(y,\bar{x})} - e\right) \right\rangle + \alpha_i \|\theta(y,\bar{x})\|^2 e. \tag{3}$$

Set $\beta = \min\{\alpha_1, \alpha_2, \cdots, \alpha_n\}$; then, from (3), we have

$$\frac{1}{r}\left(\exp^{r(f_i(y)-f_i(\bar{x}))} - 1\right) \ge \frac{1}{p}\left\langle \xi_i; \left(\exp^{p\eta(y,\bar{x})} - e\right) \right\rangle + \beta \|\theta(y,\bar{x})\|^2 e. \tag{4}$$

Now by using (2) and (4), we get

$$\frac{1}{p}\left\langle \xi_i; \left(\exp^{p\eta(y,\bar{x})} - e\right) \right\rangle + \beta \|\theta(y,\bar{x})\|^2 e \le 0,$$

which implies that for all *ξ<sup>i</sup>* ∈ *∂<sup>L</sup> fi*(*x*¯)

$$\frac{1}{p}\left\langle \xi_i; \left(\exp^{p\eta(y,\bar{x})} - e\right) \right\rangle + \beta \|\theta(y,\bar{x})\|^2 e \in -C \setminus \{0\},$$

which contradicts the hypothesis that *x*¯ is a solution of (*P*2). Hence, *x*¯ is an efficient solution of (*P*1). This completes the proof.

Next, we show the converse of the above conclusion.

**Theorem 3.** *Suppose that $f = (f_1, f_2, \cdots, f_n) : K \longrightarrow \mathbb{R}^n$ is a locally Lipschitz continuous mapping on $K$. If each $-f_i$ is a limiting $(p,r)$-$\alpha_i$-$(\eta,\theta)$-invex mapping with respect to $\eta$ and $\theta$, and $\bar{x}$ is an efficient solution of $(P1)$, then $\bar{x}$ is a solution of $(P2)$.*

**Proof.** Assume that *x*¯ is an efficient solution of (*P*1). On the contrary, suppose that *x*¯ is not a solution of (*P*2). Then, for each *β*, there exists $x_\beta \in K$ satisfying

$$\frac{1}{p}\left\langle \xi_i; \left(\exp^{p\eta(x_\beta,\bar{x})} - e\right) \right\rangle + \beta \|\theta(x_\beta,\bar{x})\|^2 e \in -C \setminus \{0\},$$

for all $\xi_i \in \partial_L f_i(x_\beta)$. Since $C = \mathbb{R}^n_+$, from the above relation, we have

$$\frac{1}{p}\left\langle \xi_i; \left(\exp^{p\eta(x_\beta,\bar{x})} - e\right) \right\rangle + \beta \|\theta(x_\beta,\bar{x})\|^2 e \le 0, \tag{5}$$

and strict inequality holds for some 1 ≤ *k* ≤ *n*.

As each −*fi* is limiting (*p*,*r*)-*αi*-(*η*, *θ*)-invex mapping with respect to *η* and *θ* with constants *αi*, therefore for any *y* ∈ *K*, ∃*ξ<sup>i</sup>* ∈ *∂<sup>L</sup> fi*(*y*) such that

$$\frac{1}{r}\left(\exp^{r(-f_i(y)+f_i(\bar{x}))} - 1\right) \ge \frac{1}{p}\left\langle (-\xi_i); \left(\exp^{p\eta(y,\bar{x})} - e\right) \right\rangle + \alpha_i \|\theta(y,\bar{x})\|^2 e,$$

which implies that

$$\frac{1}{r}\left(\exp^{r(-f_i(y)+f_i(\bar{x}))} - 1\right) \ge \frac{1}{p}\left\langle (-\xi_i); \left(\exp^{p\eta(y,\bar{x})} - e\right) \right\rangle + \beta \|\theta(y,\bar{x})\|^2 e, \tag{6}$$

where *β* = min{*α*1, *α*2, ··· , *αn*}.

Using (5), (6) and the monotonicity of the exponential mapping, it is easy to deduce that ∃*y* ∈ *K* such that

$$f_i(\bar{x}) - f_i(y) \ge 0,$$

and strict inequality holds for *i* = *k*; equivalently,

$$f(\bar{x}) - f(y) \in C \setminus \{0\},$$

which contradicts the hypothesis that *x*¯ is an efficient solution of (*P*1). Therefore, *x*¯ is a solution of (*P*2). This completes the proof.

Based on equivalent arguments as used in Theorems 2 and 3, we have the following theorem which associates the problems (*P*1) and (*P*3).

**Theorem 4.** *Suppose that $K \neq \emptyset$ is a subset of an Asplund space $X$, $C = \mathbb{R}^n_+$ and $f = (f_1, f_2, \cdots, f_n) : K \longrightarrow \mathbb{R}^n$ is a locally Lipschitz continuous mapping on $K$. If each $-f_i$ $(1 \le i \le n)$ is a strictly limiting $(p,r)$-$\alpha_i$-$(\eta,\theta)$-invex mapping with respect to $\eta$ and $\theta$ and $\bar{x} \in K$ is a weakly efficient solution of $(P1)$, then $\bar{x} \in K$ is also a solution of $(P3)$. Conversely, if each $f_i$ $(1 \le i \le n)$ is a limiting $(p,r)$-$\alpha_i$-$(\eta,\theta)$-invex mapping with respect to $\eta$ and $\theta$ and $\bar{x} \in K$ is a solution of $(P3)$, then $\bar{x} \in K$ is also a weakly efficient solution of $(P1)$.*

We contrive the following example in support of Theorem 4.

**Example 2.** *Let us consider $X = \mathbb{R}$, $K = [0,1]$, $C = \mathbb{R}^2_+$ and $p = 1$. Define the nonsmooth vector optimization problem*

$$\begin{aligned} \min\_{\mathbb{C}} f(\mathbf{x}) &= (f\_1(\mathbf{x}), f\_2(\mathbf{x})) \\ \text{subject to } & \mathbf{x} \in \mathbb{K}, \end{aligned} \tag{7}$$

*where $f_1(x) = \ln\left(x^2 + \sqrt{x} + 1\right)$ and $f_2(x) = \ln\left(2x^2 + \sqrt{x}\right)$. Clearly, $f$ is a locally Lipschitz mapping at $\bar{x} = 0$. Now, the limiting subdifferential of $f$ is as follows:*

$$\partial_L f(x) = \begin{cases} \left( \dfrac{2x + \frac{1}{2\sqrt{x}}}{x^2 + \sqrt{x} + 1},\ \dfrac{4x + \frac{1}{2\sqrt{x}}}{2x^2 + \sqrt{x}} \right), & \text{if } x > 0, \\[2ex] \left\{ (s, t) : s, t \in [0, \infty) \right\}, & \text{if } x = 0. \end{cases}$$

*Define the mappings θ*, *η* : *K* × *K* −→ *X by*

$$
\eta(y, x) = \ln \left( -\frac{\sqrt{y}}{2} + x + 1 \right) \text{ and } \theta(y, x) = y - x, \ \forall y, x \in K.
$$

*For $r = 1$ and $\alpha = 1$, we can see that at $\bar{x} = 0$*

$$\begin{split}
& \quad \left( \exp^{f_1(y) - f_1(\bar{x})} - 1 \right) - \left\langle \xi_1; \left( \exp^{\eta(y,\bar{x})} - e \right) \right\rangle - \alpha \|\theta(y,\bar{x})\|^2 \\
&= \left( \exp^{\ln\left(\frac{y^2 + \sqrt{y} + 1}{\bar{x}^2 + \sqrt{\bar{x}} + 1}\right)} - 1 \right) - \left\langle \xi_1; \left( \exp^{\ln\left(-\frac{\sqrt{y}}{2} + \bar{x} + 1\right)} - e \right) \right\rangle - \|y - \bar{x}\|^2 \\
&= \left( \frac{y^2 + \sqrt{y} + 1}{\bar{x}^2 + \sqrt{\bar{x}} + 1} - 1 \right) - \left\langle \xi_1; \left( -\frac{\sqrt{y}}{2} + \bar{x} + 1 \right) - 1 \right\rangle - \|y - \bar{x}\|^2 \\
&= \left( y^2 + \sqrt{y} \right) + \xi_1 \left( \frac{\sqrt{y}}{2} \right) - |y|^2 \\
&= y^2 + \sqrt{y}\left( 1 + \frac{\xi_1}{2} \right) - |y|^2 \ge 0.
\end{split}$$

*Similarly, we can show that*

$$\left(\exp^{f_2(y) - f_2(\bar{x})} - 1\right) - \left\langle \xi_2; \left(\exp^{\eta(y,\bar{x})} - e\right) \right\rangle - \alpha \|\theta(y,\bar{x})\|^2 \ge 0.$$

*Therefore, f is* (1, 1)*-*1*-*(*η*, *θ*)*-invex mapping at x*¯ = 0*.*
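The closed-form derivatives entering the limiting subdifferential above can be checked numerically (a sketch, not part of the paper; it assumes $f_2(x) = \ln(2x^2 + \sqrt{x})$, the form consistent with the quotient $\frac{4x + 1/(2\sqrt{x})}{2x^2 + \sqrt{x}}$):

```python
# Finite-difference check of the subdifferential formula in Example 2:
# for x > 0 the limiting subdifferential reduces to the ordinary gradient,
#   f1'(x) = (2x + 1/(2*sqrt(x))) / (x^2 + sqrt(x) + 1),
#   f2'(x) = (4x + 1/(2*sqrt(x))) / (2x^2 + sqrt(x)).

import math

def f1(x): return math.log(x**2 + math.sqrt(x) + 1)
def f2(x): return math.log(2 * x**2 + math.sqrt(x))

def d1(x): return (2 * x + 1 / (2 * math.sqrt(x))) / (x**2 + math.sqrt(x) + 1)
def d2(x): return (4 * x + 1 / (2 * math.sqrt(x))) / (2 * x**2 + math.sqrt(x))

h = 1e-6
for x in (0.25, 0.5, 0.9):
    num1 = (f1(x + h) - f1(x - h)) / (2 * h)   # central difference
    num2 = (f2(x + h) - f2(x - h)) / (2 * h)
    assert abs(num1 - d1(x)) < 1e-5
    assert abs(num2 - d2(x)) < 1e-5

print("closed-form derivatives match central finite differences")
```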

*Now, problem* (*P*3) *is to find x*¯ ∈ [0, 1] *such that*

$$\frac{1}{p}\left\langle \xi; \left(\exp^{p\eta(y,\bar{x})} - e\right) \right\rangle + \alpha \|\theta(y,\bar{x})\|^2 e \notin -\mathrm{int}\, C, \ \forall \xi \in \partial_L f(\bar{x}), \ y \in K,$$

*which is equivalent to the following problem:*

$$\frac{1}{p}\left\langle \partial_L f(\bar{x}); \left(\exp^{p\eta(y,\bar{x})} - e\right) \right\rangle + \alpha \|\theta(y,\bar{x})\|^2 e \not\subseteq -\mathrm{int}\, C, \ \forall y \in K.$$

*Now, for α* = *p* = 1*, we deduce that*

$$\begin{split}
& \quad \left\langle \partial_L f(\bar{x}); \left( \exp^{\eta(y,\bar{x})} - e \right) \right\rangle + \alpha \|\theta(y,\bar{x})\|^2 e \\
&= \left\{ \left( s\left( \exp^{\ln\left(-\frac{\sqrt{y}}{2}+\bar{x}+1\right)} - 1 \right), t\left( \exp^{\ln\left(-\frac{\sqrt{y}}{2}+\bar{x}+1\right)} - 1 \right) \right) : s, t \in [0, \infty) \right\} + \|y - \bar{x}\|^2 e \\
&= \left\{ \left( s\left(-\frac{\sqrt{y}}{2}+\bar{x}\right), t\left(-\frac{\sqrt{y}}{2}+\bar{x}\right) \right) : s, t \in [0, \infty) \right\} + \|y\|^2 e \\
&\not\subseteq -\mathrm{int}\, C.
\end{split}$$

*Therefore, x*¯ = 0 *is the solution of the problem* (*P*3)*. One can easily show that x*¯ = 0 *is a weakly efficient solution of vector optimization problem* (7) *by using Theorem 4.*

The following is an existence theorem for solutions of the generalized nonsmooth exponential-type weak vector variational-like inequality problem (*P*3), obtained by employing the Fan-KKM Theorem.

**Theorem 5.** *Suppose that $K \neq \emptyset$ is a convex subset of an Asplund space $X$, $C$ is a pointed, closed and convex cone, and $f = (f_1, f_2, \cdots, f_n) : K \longrightarrow \mathbb{R}^n$ is a locally Lipschitz mapping such that each $f_i$ $(1 \le i \le n)$ is a limiting $(p,r)$-$\alpha_i$-$(\eta,\theta)$-invex mapping with respect to $\eta$ and $\theta$ with constants $\alpha_i$. Suppose that $\eta, \theta : K \times K \longrightarrow X$ are continuous mappings which are affine in the first argument and satisfy $\eta(x,x) = 0 = \theta(x,x)$ for all $x \in K$. If there exist a nonempty compact subset $B$ of $K$ and $y_0 \in B$ with the property*

$$\frac{1}{p}\left\langle \xi; \left(\exp^{p\eta(y_0,x)} - e\right) \right\rangle + \beta \|\theta(y_0,x)\|^2 e \in -\mathrm{int}\, C, \ \forall x \in K \setminus B, \ \xi \in \partial_L f(x), \tag{8}$$

*where β* = min{*α*1, *α*2, ··· , *αn*}*, then generalized nonsmooth exponential-type weak vector variational like inequality problem* (*P*3) *admits a solution.*

**Proof.** For any $y \in K$, consider the mapping $F : K \longrightarrow 2^K$ defined by

$$F(y) = \left\{ x \in K : \frac{1}{p}\left\langle \xi; \left(\exp^{p\eta(y,x)} - e\right) \right\rangle + \beta \|\theta(y,x)\|^2 e \notin -\mathrm{int}\, C, \ \forall \xi \in \partial_L f(x) \right\}.$$

Since $y \in F(y)$ for each $y \in K$, $F(y)$ is nonempty.

Now, we will prove that *F* is a KKM-mapping on *K*. On the contrary, assume that *F* is not a KKM-mapping. Then, we can find a finite set $\{x_1, x_2, \cdots, x_n\} \subseteq K$ and $t_i \ge 0$, $i = 1, 2, \cdots, n$ with $\sum_{i=1}^n t_i = 1$ such that

$$x_0 = \sum_{i=1}^n t_i x_i \notin \bigcup_{i=1}^n F(x_i),$$

which implies that $x_0 \notin F(x_i)$, $\forall i = 1, 2, \cdots, n$, i.e.,

$$\frac{1}{p}\left\langle \xi; \left(\exp^{p\eta(x_i,x_0)} - e\right) \right\rangle + \beta \|\theta(x_i,x_0)\|^2 e \in -\mathrm{int}\, C, \ \forall i = 1, 2, \dots, n.$$

In view of the convexity of $\left(\exp^{\lambda x} - e\right)$ for all $x \in \mathbb{R}$ and any $\lambda > 0$, and the affinity of $\eta$ and $\theta$ in the first argument with $\eta(x,x) = 0 = \theta(x,x)$, we obtain

$$\begin{split}
0 &= \frac{1}{p}\left\langle \xi; \left( \exp^{p\eta(x_0,x_0)} - e \right) \right\rangle + \beta \sum_{i=1}^n t_i \|\theta(x_0,x_0)\|^2 e \\
&= \frac{1}{p}\left\langle \xi; \left( \exp^{p\eta\left(\sum_{i=1}^n t_i x_i,\, x_0\right)} - e \right) \right\rangle + \beta \sum_{i=1}^n t_i \left\| \theta\left( \sum_{i=1}^n t_i x_i,\, x_0 \right) \right\|^2 e \\
&= \frac{1}{p}\left\langle \xi; \left( \exp^{p \sum_{i=1}^n t_i \eta(x_i,x_0)} - e \right) \right\rangle + \beta \sum_{i=1}^n t_i \left\| \sum_{i=1}^n t_i \theta(x_i,x_0) \right\|^2 e \\
&\leq_C \frac{1}{p}\left\langle \xi; \sum_{i=1}^n t_i \left( \exp^{p\eta(x_i,x_0)} - e \right) \right\rangle + \beta \sum_{i=1}^n t_i \|\theta(x_i,x_0)\|^2 e \\
&= \frac{1}{p} \sum_{i=1}^n t_i \left\langle \xi; \left( \exp^{p\eta(x_i,x_0)} - e \right) \right\rangle + \beta \sum_{i=1}^n t_i \|\theta(x_i,x_0)\|^2 e \in -\mathrm{int}\, C,
\end{split}$$

which implies that $0 \in -\mathrm{int}\, C$, a contradiction. Therefore, *F* is a KKM-mapping.

Next, to show that $F(y)$ is a closed set for each $y \in K$, consider any sequence $\{x_n\} \subseteq F(y)$ which converges to $\bar{x}$. This implies that

$$z_n = \frac{1}{p}\left\langle \xi_n; \left(\exp^{p\eta(y,x_n)} - e\right) \right\rangle + \beta \|\theta(y,x_n)\|^2 e \notin -\mathrm{int}\, C, \ \forall \xi_n \in \partial_L f(x_n). \tag{9}$$

Using locally Lipschitz continuity property of *f* , we have

$$\|f(x) - f(y)\| \le L \|x - y\|, \ \forall x, y \in N(\bar{x}),$$

where $L > 0$ is a constant and $N(\bar{x})$ is a neighbourhood of $\bar{x}$. Then, for any $x \in N(\bar{x})$ and $\xi \in \partial_L f(x)$, we have

$$\|\xi\| \le L.$$

Since $\partial_L f(x_n)$ is $w^*$-compact, the sequence $\{\xi_n\}$ has a convergent subsequence $\{\xi_m\}$ such that $\xi_m \rightarrow \bar{\xi} \in \partial_L f(\bar{x})$. Since $\eta$ and $\theta$ are continuous mappings, we have

$$\bar{z} = \lim_m z_m = \frac{1}{p}\left\langle \bar{\xi}; \left(\exp^{p\eta(y,\bar{x})} - e\right) \right\rangle + \beta \|\theta(y,\bar{x})\|^2 e.$$

From (9), it follows that $\bar{z} \notin -\mathrm{int}\, C$ and therefore, we have

$$\frac{1}{p}\left\langle \bar{\xi}; \left(\exp^{p\eta(y,\bar{x})} - e\right) \right\rangle + \beta \|\theta(y,\bar{x})\|^2 e \notin -\mathrm{int}\, C.$$

Hence *x*¯ ∈ *F*(*y*), and thus *F*(*y*) is closed set.

Using hypothesis (8), for the nonempty compact subset $B$ of $K$ and $y_0 \in B$, we have

$$\frac{1}{p}\left\langle \xi; \left(\exp^{p\eta(y_0,x)} - e\right) \right\rangle + \beta \|\theta(y_0,x)\|^2 e \in -\mathrm{int}\, C, \ \forall x \in K \setminus B, \ \xi \in \partial_L f(x),$$

which shows that $F(y_0) \subseteq B$. Since $B$ is compact and $F(y_0)$ is closed, $F(y_0)$ is also compact. Therefore, by applying the Fan-KKM Theorem 1, we obtain

$$\bigcap_{y \in K} F(y) \neq \emptyset.$$

Therefore, ∃*x*˜ ∈ *K* such that

$$\frac{1}{p}\left\langle \xi; \left(\exp^{p\eta(y,\tilde{x})} - e\right) \right\rangle + \beta \|\theta(y,\tilde{x})\|^2 e \notin -\mathrm{int}\, C, \ \forall \xi \in \partial_L f(\tilde{x}), \ y \in K.$$

Thus, generalized nonsmooth exponential-type weak vector variational like inequality problem (*P*3) has a solution. This completes the proof.

#### **4. Conclusions**

We have introduced and studied a new type of generalized nonsmooth exponential-type vector variational-like inequality problem involving the Mordukhovich limiting subdifferential operator in Asplund spaces. We proved the relationships between our considered problems and vector optimization problems using a generalized concept of invexity, which we called limiting (*p*,*r*)-*α*-(*η*, *θ*)-invexity of mappings. We also derived an existence result for our considered problem using the Fan-KKM theorem. It is remarked that our problems and related results are more general than the previously known results.

**Author Contributions:** The authors S.S.I., M.R., I.A., R.A. and S.H. carried out this work and drafted the manuscript together. All the authors studied and validated the article.

**Funding:** The research was supported by the Deanship of Scientific Research, Qassim University, Saudi Arabia grant number 3611-qec-2018-1-14-S.

**Acknowledgments:** We are grateful for the comments and suggestions of the reviewers and Editor, which improve the paper a lot. The first and third authors are thankful to Deanship of Scientific Research, Qassim University, Saudi Arabia for technical and financial support of the research project 3611-qec-2018-1-14-S.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


c 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## *Article* **A Kind of New Higher-Order Mond-Weir Type Duality for Set-Valued Optimization Problems**

#### **Liu He 1,†, Qi-Lin Wang 1,\*, Ching-Feng Wen 2,3,\*, Xiao-Yan Zhang <sup>1</sup> and Xiao-Bing Li <sup>1</sup>**


Received: 9 March 2019; Accepted: 17 April 2019; Published: 24 April 2019

**Abstract:** In this paper, we introduce the notion of higher-order weak adjacent epiderivative for a set-valued map without lower-order approximating directions and obtain existence theorem and some properties of the epiderivative. Then by virtue of the epiderivative and Benson proper efficiency, we establish the higher-order Mond-Weir type dual problem for a set-valued optimization problem and obtain the corresponding weak duality, strong duality and converse duality theorems, respectively.

**Keywords:** set-valued optimization problems; higher-order weak adjacent epiderivatives; higher-order Mond-Weir type dual; Benson proper efficiency

#### **1. Introduction**

The theory of duality and optimality conditions for optimization problems has received considerable attention (see [1–10]). The derivative (epiderivative) plays an important role in studying duality and optimality conditions for set-valued optimization problems. The contingent derivatives [1], the contingent epiderivatives [11] and the generalized contingent epiderivatives [12] for set-valued maps have been employed by different authors to investigate necessary and/or sufficient optimality conditions for set-valued optimization problems. Later, the second-order epiderivatives [13], higher-order generalized contingent (adjacent) epiderivatives [14] and generalized higher-order contingent (adjacent) derivatives [15] for set-valued maps were used to study second- or higher-order necessary and/or sufficient optimality conditions for set-valued optimization problems. Chen et al. [2] utilized weak efficiency to introduce the higher-order weak adjacent (contingent) epiderivative for a set-valued map, and then investigated higher-order Mond-Weir (Wolfe) type duality and higher-order Kuhn-Tucker type optimality conditions for constrained set-valued optimization problems. Li et al. [3] used the higher-order contingent derivatives to discuss the weak duality, strong duality and converse duality of a higher-order Mond-Weir type dual for a set-valued optimization problem. Wang et al. [4] used the higher-order generalized adjacent derivative to extend the main results of [3] from convexity to non-convexity. Anh [6] used the higher-order radial derivatives [16] to discuss mixed duality of set-valued optimization problems.

It is well known that the lower-order approximating directions are very important for defining the higher-order derivatives (epiderivatives) in [2–4,6,14,15]. This limits their practical applications when the lower-order approximating directions are unknown. So, it is necessary to introduce some higher-order derivatives (epiderivatives) without lower-order approximating directions. As we know, few papers are devoted to this topic. Motivated by [17], Li et al. [7] proposed the higher-order upper and lower Studniarski derivatives of a set-valued map to establish necessary and sufficient conditions for a strict local minimizer of a constrained set-valued optimization problem. Anh [8] introduced the higher-order radial epiderivative to establish mixed type duality in constrained set-valued optimization problems. Anh [18] proposed the higher-order upper and lower Studniarski derivatives of a set-valued map to establish Fritz John type and Kuhn-Tucker type conditions, and discussed the higher-order Mond-Weir type dual for constrained set-valued optimization problems. Anh [19] further defined the notion of higher-order Studniarski epiderivative and established higher-order optimality conditions for generalized set-valued optimization problems. Anh [20] noted that the epiderivatives in [8,19] are singletons, proposed a notion of the higher-order generalized Studniarski epiderivative, which is set-valued, and discussed its applications in optimality conditions and duality of set-valued optimization problems.

As is known, the existence conditions for a weakly efficient point of a set are weaker than those for an efficient point. Inspired by [2,8,18–20], we introduce the higher-order weak adjacent set without lower-order approximating directions for set-valued maps. Furthermore, we use the higher-order weak adjacent set and weak efficiency to introduce the higher-order weak adjacent epiderivative for a set-valued map; we use it and Benson proper efficiency to discuss the higher-order Mond-Weir type dual of a constrained set-valued optimization problem, and then obtain the corresponding weak duality, strong duality and converse duality, respectively.

The rest of the article is organized as follows. In Section 2, we recall some definitions and notations needed in the paper, and define the higher-order adjacent set of a set-valued map without lower-order approximating directions, which has some nice properties. In Section 3, we use the higher-order adjacent set of Section 2 to define the higher-order weak adjacent epiderivative for a set-valued map, and discuss its properties, such as existence and subdifferential. In Section 4, we introduce a higher-order Mond-Weir type dual for a constrained set-valued optimization problem and establish the corresponding weak duality, strong duality and converse duality, respectively.

#### **2. Preliminaries**

Throughout the paper, let *X* , *Y* and *Z* be three real normed linear spaces. The spaces *Y* and *Z* are partially ordered by nontrivial pointed closed convex cones *C* ⊆ *Y* and *D* ⊆ *Z* with nonempty interior, respectively. By 0*<sup>Y</sup>* we denote the zero vector of *Y*. *Y*∗ stands for the topological dual space of *Y*. The dual cone *C*<sup>+</sup> of *C* is defined as

$$\mathbb{C}^+ := \{ f \in \mathcal{Y}^\* | f(c) \ge 0, \forall c \in \mathcal{C} \}.$$

Its quasi-interior *C*+*<sup>i</sup>* is defined as

$$C^{+i} := \{ f \in Y^* \mid f(c) > 0, \forall c \in C \setminus \{0_Y\} \}.$$

Let *M* be a nonempty subset of *Y*. We denote the closure, the interior and the cone hull of *M* by cl*M*, int*M* and cone*M*, respectively. We denote by *B*(*c*,*r*) the open ball of radius *r* centered at *c*. A nonempty subset *B* of *C* is called a base of *C* if and only if *C* = cone*B* and 0*<sup>Y</sup>* ∈/ cl*B*.

Let $E \subseteq X$ be a nonempty subset and $F : E \longrightarrow 2^Y$ be a set-valued map. The domain, image, graph and epigraph of $F$ are, respectively, defined as

$$\mathrm{dom}F := \{ x \in E \mid F(x) \neq \emptyset \}, \quad \mathrm{im}F := \{ y \in Y \mid y \in F(x) \text{ for some } x \in E \},$$

$$\mathrm{gr}F := \{ (x, y) \in E \times Y \mid y \in F(x), x \in E \}$$

and

$$\mathrm{epi}F := \{ (x, y) \in E \times Y \mid y \in F(x) + C, x \in E \}.$$

**Definition 1.** *[9] Let $M \subseteq Y$ and $y_0 \in M$. (i) $y_0$ is said to be a Pareto efficient point of $M$ ($y_0 \in \mathrm{Min}_C M$) if*

$$(M - \{y_0\}) \cap (-C \setminus \{0_Y\}) = \emptyset.$$

*(ii) Let $\mathrm{int}C \neq \emptyset$. $y_0$ is said to be a weakly efficient point of $M$ ($y_0 \in \mathrm{WMin}_C M$) if*

$$(M - \{y_0\}) \cap (-\mathrm{int}C) = \emptyset.$$
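For instance (an illustrative example, not from the paper), take $Y = \mathbb{R}^2$, $C = \mathbb{R}^2_+$ and $M = [0,1] \times [0,1]$; the two notions then separate:

```latex
% Pareto vs. weakly efficient points of M = [0,1]^2 under C = R^2_+ :
% only the origin is Pareto efficient, since any other y_0 in M satisfies
% (0,0) - y_0 in -C minus {0_Y}; a point y_0 is weakly efficient exactly
% when it has a zero coordinate, for then M - y_0 misses -int C.
\mathrm{Min}_C M = \{(0,0)\}, \qquad
\mathrm{WMin}_C M = \bigl(\{0\}\times[0,1]\bigr)\cup\bigl([0,1]\times\{0\}\bigr).
```

In particular, every Pareto efficient point is weakly efficient, but not conversely.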

**Definition 2.** *[10,21,22] (i) The cone C is called Daniell if any decreasing sequence in Y that has a lower bound converges to its infimum.*

*(ii) A subset M of Y is said to be minorized if there is a y* ∈ *Y such that*

$$M \subseteq \{y\} + C.$$

*(iii) The weak domination property is said to hold for a subset M of Y if*

$$M \subseteq \mathrm{WMin}_C M + \mathrm{int}C \cup \{0_Y\}.$$

**Definition 3.** *Let $A \subseteq X \times Y$, $(x_0, y_0) \in \mathrm{cl}A$ and $m \in N \setminus \{0\}$. (i) [9] The mth-order adjacent set of $A$ at $(x_0, v_1, \cdots, v_{m-1})$ is defined by*

$$\begin{aligned} T_A^{\flat(m)}(x_0, v_1, \dots, v_{m-1}) := \{ y \in X \mid {} & \forall t_n \to 0^+, \exists y_n \to y, \text{ s.t.} \\ & x_0 + t_n v_1 + \dots + t_n^{m-1} v_{m-1} + t_n^m y_n \in A \}, \end{aligned}$$

*where vi* ∈ *X*(*i* = 1, ··· , *m* − 1)*.*

*(ii) [19] The mth-order Studniarski set of A at* (*x*0, *y*0) *is defined by*

$$S_A^m(x_0, y_0) := \{ (x, y) \in X \times Y \mid \exists t_n \to 0^+, \ \exists (x_n, y_n) \to (x, y), \ \text{s.t. } (x_0 + t_n x_n,\ y_0 + t_n^m y_n) \in A \}.$$

**Definition 4.** *Let K* ⊆ *X* × *Y,* (*x*0, *y*0) ∈ cl*K and m* ∈ *N* \ {0}*. The mth-order adjacent set of K at* (*x*0, *y*0) *is defined by*

$$T_K^{(m)}(x_0, y_0) := \{ (x, y) \in X \times Y \mid \forall t_n \to 0^+, \ \exists (x_n, y_n) \to (x, y), \ \text{s.t. } (x_0 + t_n x_n,\ y_0 + t_n^m y_n) \in K \}.$$

We can obtain the equivalent characterization of $T_K^{(m)}(x_0, y_0)$ in terms of sequences: $(x, y) \in T_K^{(m)}(x_0, y_0)$ if and only if, for every sequence $\{t_n\} \to 0^+$, there exists a sequence $\{(x'_n, y'_n)\} \subseteq K$ such that

$$\lim_{n \to \infty} \left( \frac{x'_n - x_0}{t_n}, \frac{y'_n - y_0}{t_n^m} \right) = (x, y).$$

Now, we establish a few properties of $T_K^{(m)}(x_0, y_0)$.

**Proposition 1.** *Let $K \subseteq X \times Y$, $(x_0, y_0) \in K$ and $(x, y) \in T_K^{(m)}(x_0, y_0)$. Then*

$$(\lambda x, \lambda^m y) \in T_K^{(m)}(x_0, y_0), \ \forall \lambda \ge 0.$$

**Proof.** We divide *λ* into two cases to show the proposition.

Case 1: $\lambda = 0$. Note that $(x_0, y_0) \in K$; for any sequence $\{t_n\}$ with $t_n \to 0^+$, we choose $(x_n, y_n) = (0_X, 0_Y)$ such that $(x_0 + t_n x_n, y_0 + t_n^m y_n) \in K$. This means that $(0_X, 0_Y) \in T_K^{(m)}(x_0, y_0)$.

Case 2: $\lambda > 0$. Let $(x, y) \in T_K^{(m)}(x_0, y_0)$. Then for any sequence $\{t_n\}$ with $t_n \to 0^+$, there exists a sequence $\{(x_n, y_n)\}$ with $(x_n, y_n) \to (x, y)$ such that

$$K \ni (x_0 + t_n x_n,\ y_0 + t_n^m y_n) = \left( x_0 + \left(\frac{t_n}{\lambda}\right) \lambda x_n,\ y_0 + \left(\frac{t_n}{\lambda}\right)^m \lambda^m y_n \right).$$

Naturally, $\frac{t_n}{\lambda} \to 0^+$ and $(\lambda x_n, \lambda^m y_n) \to (\lambda x, \lambda^m y)$, so $(\lambda x, \lambda^m y) \in T_K^{(m)}(x_0, y_0)$. This completes the proof.

**Remark 1.** *Let $K \subseteq X \times Y$ and $(x_0, y_0) \in \mathrm{cl}K$. The mth-order adjacent set $T_K^{(m)}(x_0, y_0)$ of $K$ at $(x_0, y_0)$ may not be a cone; see Example 1.*

**Example 1.** *Let $K = \{(x, y) \in \mathbb{R}^2 \mid y \ge x^4, x \in \mathbb{R}\}$, $(x_0, y_0) = (0, 0)$ and $m = 4$. A simple calculation shows that*

$$T_K^{(4)}(0,0) = \{ (x, y) \in \mathbb{R}^2 \mid y \ge x^4 \}.$$

*Take $(x, y) = (1, 1) \in T_K^{(4)}(0, 0)$ and $\lambda = 2$. Then $\lambda(x, y) = (2, 2) \notin T_K^{(4)}(0, 0)$, i.e., $T_K^{(4)}(0, 0)$ is not a cone here.*
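The scaling behaviour in Example 1 and Proposition 1 can be illustrated numerically (a small sketch, using the membership test $y \ge x^4$ for the set computed above):

```python
# Illustration of Example 1 and Proposition 1: the 4th-order adjacent set of
# K = {(x, y) : y >= x^4} at (0, 0) equals K itself; it is stable under the
# weighted scaling (x, y) -> (l*x, l^4*y) but not under plain scaling.

def in_T4(x, y):
    """Membership in T_K^{(4)}(0, 0) = {(x, y) in R^2 : y >= x^4}."""
    return y >= x**4

assert in_T4(1, 1)                     # (1, 1) belongs to the adjacent set
lam = 2
assert not in_T4(lam * 1, lam * 1)     # (2, 2): fails, so the set is not a cone
assert in_T4(lam * 1, lam**4 * 1)      # (2, 16): weighted scaling of Proposition 1

print("T_K^(4)(0,0) is closed under (x, y) -> (l x, l^4 y) but is not a cone")
```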

**Proposition 2.** *Let $F : E \longrightarrow 2^Y$ be a set-valued map and $(x_0, y_0) \in \mathrm{gr}F$. Then, (i) $T_{\mathrm{epi}F}^{(m)}(x_0, y_0) = T_{\mathrm{epi}F}^{(m)}(x_0, y_0) + \{0_X\} \times C$; (ii) $\{ y \in Y \mid (x, y) \in T_{\mathrm{epi}F}^{(m)}(x_0, y_0) \} = \{ y \in Y \mid (x, y) \in T_{\mathrm{epi}F}^{(m)}(x_0, y_0) \} + C$, $\forall x \in X$.*

**Proof.** Since $0_Y \in C$, it is clear that $T_{\mathrm{epi}F}^{(m)}(x_0, y_0) \subseteq T_{\mathrm{epi}F}^{(m)}(x_0, y_0) + \{0_X\} \times C$. Therefore, we only need to prove $T_{\mathrm{epi}F}^{(m)}(x_0, y_0) + \{0_X\} \times C \subseteq T_{\mathrm{epi}F}^{(m)}(x_0, y_0)$.

Let $(u, v) \in T^{\flat(m)}_{\mathrm{epi}F}(x_0, y_0)$ and $c \in C$. Then for any sequence $\{t_n\}$ with $t_n \to 0^+$, there exists a sequence $\{(u_n, v_n)\} \subseteq X \times Y$ with $(u_n, v_n) \to (u, v)$ such that

$$(x_0 + t_n u_n,\ y_0 + t_n^m v_n) \in \mathrm{epi}F,$$

namely,

$$y_0 + t_n^m v_n \in F(x_0 + t_n u_n) + C.$$

Since $c \in C$, $t_n \to 0^+$ and $C + C \subseteq C$, one has

$$y_0 + t_n^m(v_n + c) \in F(x_0 + t_n u_n) + C + \{t_n^m c\} \subseteq F(x_0 + t_n u_n) + C.$$

Thus

$$(x_0 + t_n u_n,\ y_0 + t_n^m(v_n + c)) \in \mathrm{epi}F.$$

This together with $(u_n, v_n + c) \to (u, v + c)$ implies $(u, v + c) \in T^{\flat(m)}_{\mathrm{epi}F}(x_0, y_0)$, and so $T^{\flat(m)}_{\mathrm{epi}F}(x_0, y_0) + \{0_X\} \times C \subseteq T^{\flat(m)}_{\mathrm{epi}F}(x_0, y_0)$.

(ii) Obviously, (ii) follows from (i). The proof is complete. □

**Proposition 3.** *Let $K \subseteq X \times Y$ and $(x_0, y_0) \in \mathrm{cl}K$. If $K$ is a convex set, then $T^{\flat(m)}_K(x_0, y_0)$ is a convex set.*

**Proof.** Let $(x^i, y^i) \in T^{\flat(m)}_K(x_0, y_0)$ $(i = 1, 2)$ and $\lambda \in [0, 1]$. Then for any $t_n \to 0^+$, there exist $(x^i_n, y^i_n) \to (x^i, y^i)$ $(i = 1, 2)$ such that

$$(x_0 + t_n x^i_n,\ y_0 + t_n^m y^i_n) \in K\ (i = 1, 2).$$

From the convexity of $K$, we have

$$\left(x_0 + t_n[(1 - \lambda)x^1_n + \lambda x^2_n],\ y_0 + t_n^m[(1 - \lambda)y^1_n + \lambda y^2_n]\right) \in K.$$

It is obvious that

$$((1 - \lambda)x^1_n + \lambda x^2_n,\ (1 - \lambda)y^1_n + \lambda y^2_n) \to ((1 - \lambda)x^1 + \lambda x^2,\ (1 - \lambda)y^1 + \lambda y^2).$$

It follows from the definition of $T^{\flat(m)}_K(x_0, y_0)$ that

$$(1 - \lambda)(x^1, y^1) + \lambda(x^2, y^2) = ((1 - \lambda)x^1 + \lambda x^2,\ (1 - \lambda)y^1 + \lambda y^2) \in T^{\flat(m)}_K(x_0, y_0).$$

Thus, $T^{\flat(m)}_K(x_0, y_0)$ is a convex set and the proof is complete. □
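Proposition 3 can be sanity-checked numerically on the convex set $K = \{(x, y) : y \ge x^2\}$, whose second-order adjacent set at the origin is again $\{(u, v) : v \ge u^2\}$. The sketch below uses illustrative names and checks that convex combinations of sample members stay inside.

```python
# Convexity check (sketch for Proposition 3): T = {(u, v) : v >= u^2} is the
# second-order adjacent set at (0, 0) of the convex set K = {(x, y): y >= x^2}.
def in_T(u, v):
    return v >= u**2

pts = [(1.0, 1.0), (-2.0, 5.0), (0.0, 3.0)]   # sample members of T
for (u1, v1) in pts:
    for (u2, v2) in pts:
        for lam in [0.0, 0.25, 0.5, 0.75, 1.0]:
            u = (1 - lam) * u1 + lam * u2
            v = (1 - lam) * v1 + lam * v2
            assert in_T(u, v)   # convex combinations stay in the set
print("convexity check passed")
```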


#### **3. Higher-Order Weak Adjacent Epiderivatives**

In this section, we introduce the notion of higher-order weak adjacent epiderivative of a set-valued map without lower-order approximating directions, and obtain some properties of the epiderivative.

Firstly, we recall the notions of *m*th-order weak adjacent epiderivative with lower-order approximating directions and generalized Studniarski epiderivative without lower-order approximating directions.

**Definition 5.** *[2] Let $F : X \to 2^Y$, $(x_0, y_0) \in \mathrm{gr}F$ and $(u_i, v_i) \in X \times Y$ $(i = 1, \cdots, m-1)$. The mth-order weak adjacent epiderivative $D^{\flat(m)}_w F(x_0, y_0, u_1, v_1, \cdots, u_{m-1}, v_{m-1})$ of $F$ at $(x_0, y_0)$ for vectors $(u_1, v_1), \cdots, (u_{m-1}, v_{m-1})$ is the set-valued map from $X$ to $Y$ defined by*

$$D^{\flat(m)}_w F(x_0, y_0, u_1, v_1, \cdots, u_{m-1}, v_{m-1})(x) := \mathrm{WMin}_C\{y \in Y \mid (x, y) \in T^{\flat(m)}_{\mathrm{epi}F}(x_0, y_0, u_1, v_1, \cdots, u_{m-1}, v_{m-1})\}.$$

**Definition 6.** *[20] Let $F : X \to 2^Y$ and $(x_0, y_0) \in \mathrm{gr}F$. The mth-order generalized Studniarski epiderivative $G\text{-}ED^m_S F(x_0, y_0)$ of $F$ at $(x_0, y_0)$ is the set-valued map from $X$ to $Y$ defined by*

$$G\text{-}ED^m_S F(x_0, y_0)(x) := \mathrm{Min}_C\{y \in Y \mid (x, y) \in S^m_{\mathrm{epi}F}(x_0, y_0)\}.$$

Motivated by Definitions 5 and 6, we introduce the higher-order epiderivative without lower-order approximating directions.

**Definition 7.** *Let $F : E \to 2^Y$ and $(x_0, y_0) \in \mathrm{gr}F$. The mth-order weak adjacent epiderivative of $F$ at $(x_0, y_0)$ is the set-valued map $ED^{\flat(m)}_w F(x_0, y_0) : E \to 2^Y$ defined by*

$$ED^{\flat(m)}_w F(x_0, y_0)(x) := \mathrm{WMin}_C\{y \in Y \mid (x, y) \in T^{\flat(m)}_{\mathrm{epi}F}(x_0, y_0)\}.$$

**Remark 2.** *There are many examples showing that $ED^{\flat(m)}_w F(x_0, y_0)$ may exist even if $D^{\flat(m)}_w F(x_0, y_0, u_1, v_1, \cdots, u_{m-1}, v_{m-1})$ and $G\text{-}ED^m_S F(x_0, y_0)$ do not; see Examples 2 and 3. Therefore it is interesting to study this derivative and employ it to investigate Mond-Weir duality for set-valued optimization problems.*

**Example 2.** *Let $E = X = Y = \mathbb{R}$, $C = \mathbb{R}_+$ and $F : E \to 2^Y$ be defined by $F(x) := \{y \in Y \mid y \ge x^2\}$. Take $(x_0, y_0) = (0, 0) \in \mathrm{gr}F$ and $(u, v) = (1, -1)$. Then, simple calculations show that*

$$T^{\flat(2)}_{\mathrm{epi}F}((0,0),(1,-1)) = \emptyset$$

*and*

$$T^{\flat(2)}_{\mathrm{epi}F}(0,0) = \{(x, y) \in \mathbb{R} \times \mathbb{R} \mid x \in \mathbb{R},\ y \ge x^2\}.$$

*So, for any $x \in E$, $D^{\flat(2)}_w F((0,0),(1,-1))(x) = \emptyset$, but $ED^{\flat(2)}_w F(0,0)(x) = \{x^2\}$.*
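The second adjacent set in this example can be checked numerically: $(tu, t^2 v) \in \mathrm{epi}F$ reduces to $v \ge u^2$. The sketch below uses illustrative helper names, sampling a few small values of $t$.

```python
# Sketch for Example 2: F(x) = {y : y >= x^2}, so epi F = {(x, y) : y >= x^2}.
def in_epiF(x, y):
    return y >= x**2

def in_T2(u, v, ts=(1e-1, 1e-2, 1e-3)):
    """(u, v) lies in the second-order adjacent set of epi F at (0, 0)
    iff (t*u, t**2 * v) stays in epi F for small t > 0 (sampled here)."""
    return all(in_epiF(t * u, t**2 * v) for t in ts)

# (t*u, t^2*v) in epi F  <=>  t^2*v >= (t*u)^2  <=>  v >= u^2,
# so the adjacent set is again the epigraph of x^2:
assert in_T2(1, 1) and in_T2(2, 4)
assert not in_T2(2, 3)
```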

**Example 3.** *Let $E = X = \mathbb{R}$, $Y = \mathbb{R}^2$, $C = \mathbb{R}^2_+$ and $F : E \to 2^Y$ be defined by $F(x) := \{(y_1, y_2) \in Y \mid y_1 \in \mathbb{R},\ y_2 \ge x^2\}$. Take $(x_0, y_0) = (0_X, 0_Y) \in \mathrm{gr}F$. Then*

$$S^2_{\mathrm{epi}F}(0_X, 0_Y) = T^{\flat(2)}_{\mathrm{epi}F}(0_X, 0_Y) = \{(x, (y_1, y_2)) \in \mathbb{R} \times \mathbb{R}^2 \mid x \in \mathbb{R},\ y_1 \in \mathbb{R},\ y_2 \ge x^2\}.$$

*Therefore, for any $x \in E$, $ED^{\flat(2)}_w F(0_X, 0_Y)(x) = \{(y_1, y_2) \in \mathbb{R}^2 \mid y_1 \in \mathbb{R},\ y_2 = x^2\}$, but $G\text{-}ED^2_S F(0_X, 0_Y)(x) = \emptyset$.*

**Theorem 1.** *Let $F : E \to 2^Y$ and $(x_0, y_0) \in \mathrm{gr}F$. Let $C$ be a pointed closed convex cone which is Daniell. If $P(x) := \{y \in Y \mid (x, y) \in T^{\flat(m)}_{\mathrm{epi}F}(x_0, y_0)\}$ is minorized for all $x \in \mathrm{dom}P$, then $ED^{\flat(m)}_w F(x_0, y_0)$ exists.*

**Proof.** The proof is similar to that of Theorem 3.1 in [2]. □

**Definition 8.** *[23] Let $M \subseteq \mathbb{R}^n$ be a nonempty set and $x_0 \in M$. $M$ is called star-shaped at $x_0$ if, for any point $x \in M$ with $x \ne x_0$, the segment*

$$[x, x_0] := \{y \in \mathbb{R}^n \mid y = (1 - \lambda)x_0 + \lambda x,\ 0 \le \lambda \le 1\} \subseteq M.$$

**Definition 9.** *[10] Let $E$ be a nonempty convex set. The map $F$ is said to be $C$-convex on $E$ if, for any $x_1, x_2 \in E$ and $\lambda \in [0, 1]$,*

$$\lambda F(x_1) + (1 - \lambda)F(x_2) \subseteq F(\lambda x_1 + (1 - \lambda)x_2) + C.$$

Motivated by Definition 9, we introduce the following concept.

**Definition 10.** *Let $E$ be a star-shaped set at $x_0 \in E$. The map $F$ is said to be generalized $C$-convex at $x_0$ on $E$ if, for any $x \in E$ and $\lambda \in [0, 1]$,*

$$(1 - \lambda)F(x_0) + \lambda F(x) \subseteq F((1 - \lambda)x_0 + \lambda x) + C.$$

**Remark 3.** *Let $E$ be a convex set and $x_0 \in E$. If $F$ is $C$-convex on $E$, then $F$ is generalized $C$-convex at $x_0$ on $E$. However, the converse implication is not true.*

To understand Remark 3, we give the following example.

**Example 4.** *Let $E_1 = (-\infty, -1] \subseteq \mathbb{R}$, $E_2 = (-1, 1] \subseteq \mathbb{R}$, $E = X = E_1 \cup E_2 \subseteq \mathbb{R}$, $Y = \mathbb{R}$, $C = \mathbb{R}_+$ and $F : E \to 2^Y$ be defined by*

$$F(x) = \begin{cases} \{y \in Y \mid y \ge 1\}, & x \in E_1, \\ \{y \in Y \mid y \ge x^2\}, & x \in E_2. \end{cases}$$

*Take $x_0 = -1 \in E$. Then $E$ is a convex set, and $F$ is generalized $C$-convex at $x_0$ on $E$. Take $x_1 = -4 \in E_1 \subseteq E$, $x_2 = 0 \in E_2 \subseteq E$ and $\lambda = \frac{1}{2}$; then*

$$\frac{1}{2}F(x_1) + \frac{1}{2}F(x_2) = \{y \mid y \ge \frac{1}{2}\}$$

*and*

$$F(\frac{1}{2}x_1 + \frac{1}{2}x_2) = \{y \mid y \ge 1\}.$$

*Thus*

$$\frac{1}{2}F(x_1) + (1 - \frac{1}{2})F(x_2) \not\subseteq F(\frac{1}{2}x_1 + (1 - \frac{1}{2})x_2) + C.$$

*Therefore F is not C-convex on E.*
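Since each value $F(x)$ in Example 4 is a half-line $[f(x), +\infty)$ and $C = \mathbb{R}_+$, the inclusion $A \subseteq B + C$ reduces to comparing left endpoints. The following sketch (illustrative names, not from the paper) confirms both claims of the example.

```python
# Numerical check of Example 4 (a sketch). Each set F(x) is the half-line
# [f(x), +inf), so A ⊆ B + R_+ reduces to min A >= min B.
def f(x):                      # left endpoint of F(x)
    return 1.0 if x <= -1 else x**2

def c_convex_holds(x1, x2, lam):
    # lam*F(x1) + (1-lam)*F(x2) ⊆ F(lam*x1 + (1-lam)*x2) + C
    lhs = lam * f(x1) + (1 - lam) * f(x2)
    rhs = f(lam * x1 + (1 - lam) * x2)
    return lhs >= rhs

# C-convexity fails for x1 = -4, x2 = 0, lam = 1/2: 1/2 < 1.
assert not c_convex_holds(-4, 0, 0.5)

# Generalized C-convexity at x0 = -1 holds on a grid of test points.
x0 = -1
assert all(c_convex_holds(x0, x, lam)
           for x in [-5, -2, -1, -0.5, 0, 0.5, 1]
           for lam in [0.0, 0.25, 0.5, 0.75, 1.0])
```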

**Definition 11.** *[24] Let $U \subseteq X$ be a star-shaped set at $x_0 \in U$. A set-valued map $F : U \to 2^Y$ is said to be decreasing-along-rays at $x_0$ if, for any $x \in U$ and $0 \le t_1 \le t_2$ with $t_i x + (1 - t_i)x_0 \in U$ $(i = 1, 2)$, one has*

$$F(t_1 x + (1 - t_1)x_0) \subseteq F(t_2 x + (1 - t_2)x_0) + C.$$

Next, we give an important property of the *m*th-order weak adjacent epiderivative.

**Proposition 4.** *Let $E$ be a star-shaped set at $x_0 \in E$. Let $F : E \to 2^Y$ be a set-valued map and $(x_0, y_0) \in \mathrm{gr}F$. Suppose that the following conditions are satisfied:*

*(i) $F$ is decreasing-along-rays at $x_0$;*

*(ii) $F$ is generalized $C$-convex at $x_0$ on $E$;*

*(iii) the set $P(x) := \{y \in Y \mid (x, y) \in T^{\flat(m)}_{\mathrm{epi}F}(x_0, y_0)\}$ fulfills the weak domination property for all $x \in \mathrm{dom}P$.*

*Then for all $x \in E$, one has $x - x_0 \in \Omega := \mathrm{dom}\,ED^{\flat(m)}_w F(x_0, y_0)$ and*

$$F(x) - \{y_0\} \subseteq ED^{\flat(m)}_w F(x_0, y_0)(x - x_0) + C.$$

**Proof.** Let $x \in E$ and $y \in F(x)$. For any $\lambda_n \in (0, 1)$ with $\lambda_n \to 0^+$, one has $(\frac{\lambda_n}{2})^m \le \frac{\lambda_n}{2}$. Since $E$ is a star-shaped set at $x_0$,

$$x_n := x_0 + \frac{\lambda_n}{2}(x - x_0) = (1 - \frac{\lambda_n}{2})x_0 + \frac{\lambda_n}{2}x \in E,$$

and

$$x_0 + (\frac{\lambda_n}{2})^m(x - x_0) = (1 - (\frac{\lambda_n}{2})^m)x_0 + (\frac{\lambda_n}{2})^m x \in E.$$

Combining this with conditions (i) and (ii), we obtain

$$\begin{aligned} y_n &:= y_0 + (\frac{\lambda_n}{2})^m(y - y_0) = (1 - (\frac{\lambda_n}{2})^m)y_0 + (\frac{\lambda_n}{2})^m y \\ &\in (1 - (\frac{\lambda_n}{2})^m)F(x_0) + (\frac{\lambda_n}{2})^m F(x) \subseteq F((1 - (\frac{\lambda_n}{2})^m)x_0 + (\frac{\lambda_n}{2})^m x) + C \\ &\subseteq F(x_n) + C + C \subseteq F(x_n) + C. \end{aligned}$$

Hence, $(x_n, y_n) \in \mathrm{epi}F$. It follows from the definition of $T^{\flat(m)}_K(x_0, y_0)$ that $(x - x_0, y - y_0) \in T^{\flat(m)}_{\mathrm{epi}F}(x_0, y_0)$. Replacing $x$ in condition (iii) with $x - x_0 \in \mathrm{dom}P$, from the definition of $ED^{\flat(m)}_w F(x_0, y_0)$, we have

$$\begin{aligned} P(x - x_0) &\subseteq ED^{\flat(m)}_w F(x_0, y_0)(x - x_0) + \mathrm{int}C \cup \{0_Y\} \\ &\subseteq ED^{\flat(m)}_w F(x_0, y_0)(x - x_0) + C. \end{aligned}$$

Thus $x - x_0 \in \Omega$ and

$$F(x) - \{y_0\} \subseteq ED^{\flat(m)}_w F(x_0, y_0)(x - x_0) + C.$$

This completes the proof. □

We now give an example to explain Proposition 4.

**Example 5.** *Let $E = [0, +\infty) \subseteq \mathbb{R}$, $Y = \mathbb{R}$, $C = \mathbb{R}_+$ and $F : E \to 2^Y$ be defined as $F(x) = \{y \in Y \mid y \ge 0\}$. Take $(x_0, y_0) = (0, 0) \in \mathrm{gr}F$. Then, simple calculations show that $T^{\flat(2)}_{\mathrm{epi}F}(0, 0) = \mathbb{R}^2_+$ and*

$$ED^{\flat(2)}_w F(0, 0)(x - x_0) = \{0\},\ \forall x \ge 0.$$

*We can easily see that all conditions of Proposition 4 are satisfied. For any $x \in E$, one has $x - 0 \in \Omega := \mathrm{dom}\,ED^{\flat(2)}_w F(0, 0) = \{x \mid x \ge 0\}$ and*

$$F(x) - \{y_0\} \subseteq ED^{\flat(2)}_w F(0, 0)(x - x_0) + C.$$

*Therefore Proposition 4 is applicable here.*

The following examples show that every condition of Proposition 4 is necessary.

**Example 6.** *Let $E = [0, +\infty) \subseteq \mathbb{R}$, $Y = \mathbb{R}$, $C = \mathbb{R}_+$ and $F : E \to 2^Y$ be a set-valued map satisfying $F(x) = \{y \in Y \mid y \ge x\}$. Take $(x_0, y_0) = (0, 0) \in \mathrm{gr}F$. By a simple calculation, we obtain*

$$T^{\flat(2)}_{\mathrm{epi}F}(0, 0) = \{(0, y) \in \mathbb{R} \times \mathbb{R} \mid y \ge 0\}$$

*and*

$$ED^{\flat(2)}_w F(0, 0)(x) = \begin{cases} \{0\}, & x = 0, \\ \emptyset, & x \ne 0. \end{cases}$$

*Thus $x - 0 \notin \Omega := \mathrm{dom}\,ED^{\flat(2)}_w F(0, 0) = \{0\}$ for any $x \in (0, +\infty)$. Obviously, conditions (ii) and (iii) of Proposition 4 are satisfied while condition (i) is not, and*

*$F(x) - \{y_0\} \not\subseteq ED^{\flat(2)}_w F(x_0, y_0)(x - x_0) + C$, $x \in (0, +\infty)$.*

*Thus Proposition 4 does not hold here and condition (i) of Proposition 4 is essential.*
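The failure of condition (i) in Example 6 is easy to verify numerically: with $F(x) = [x, +\infty)$ and $C = \mathbb{R}_+$, the containment in Definition 11 reduces to comparing left endpoints. The helper names below are illustrative.

```python
# Sketch for Example 6: F(x) = [x, +inf) on E = [0, +inf), x0 = 0.
def f(x):                      # left endpoint of F(x) = {y : y >= x}
    return x

def decreasing_along_rays(x, t1, t2, x0=0.0):
    # F(t1*x + (1-t1)*x0) ⊆ F(t2*x + (1-t2)*x0) + R_+  <=>  compare endpoints:
    return f(t1 * x + (1 - t1) * x0) >= f(t2 * x + (1 - t2) * x0)

# Take x = 1, t1 = 0.5 < t2 = 1: F(0.5) = [0.5, inf) is not contained in
# F(1) + R_+ = [1, inf), so condition (i) of Proposition 4 fails.
assert not decreasing_along_rays(1.0, 0.5, 1.0)
```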

**Example 7.** *Let $E_1 = [0, 1] \subseteq \mathbb{R}$, $E_2 = (1, +\infty) \subseteq \mathbb{R}$, $E = X = E_1 \cup E_2 \subseteq \mathbb{R}$, $Y = \mathbb{R}$, $C = \mathbb{R}_+$ and $F : E \to 2^Y$ be given by*

$$F(x) = \begin{cases} \{y \in \mathbb{R} \mid y \ge -x^2\}, & x \in E_1, \\ \{y \in \mathbb{R} \mid y \ge -x^3\}, & x \in E_2. \end{cases}$$

*Take $(x_0, y_0) = (0, 0) \in \mathrm{gr}F$. Then,*

$$T^{\flat(2)}_{\mathrm{epi}F}(0, 0) = \{(x, y) \in \mathbb{R} \times \mathbb{R} \mid y \ge -x^2,\ x \ge 0\}$$

*and*

$$ED^{\flat(2)}_w F(0, 0)(x - x_0) = \{y \in \mathbb{R} \mid y = -x^2\},\ \forall x \ge 0.$$

*Clearly, conditions (i) and (iii) of Proposition 4 are satisfied while condition (ii) is not, and for any $x \in E_2$,*

$$F(x) - \{y_0\} \not\subseteq ED^{\flat(2)}_w F(x_0, y_0)(x - x_0) + C.$$

*Therefore Proposition 4 does not hold here and condition (ii) of Proposition 4 is essential.*
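The failing containment in Example 7 reduces to comparing half-line endpoints: for $x > 1$ the left endpoint $-x^3$ of $F(x)$ lies strictly below $-x^2$. A small illustrative sketch:

```python
# Sketch for Example 7: for x in E2 = (1, +inf), the containment
# F(x) - {y0} ⊆ ED_w F(0,0)(x - 0) + C fails.
def lhs_endpoint(x):           # left endpoint of F(x) - {0} on E2
    return -x**3

def rhs_endpoint(x):           # left endpoint of {-x^2} + R_+
    return -x**2

# Containment of half-lines reduces to lhs_endpoint >= rhs_endpoint,
# which fails for every x > 1:
assert all(lhs_endpoint(x) < rhs_endpoint(x) for x in [1.5, 2.0, 10.0])
```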

**Example 8.** *Let $E = X = \mathbb{R}$, $Y = \mathbb{R}^2$, $C = \mathbb{R}^2_+$ and $F : E \to 2^Y$ be defined by $F(x) := \{(y_1, y_2) \in Y \mid y_1 \in \mathbb{R},\ y_2 \ge 0\}$. Take $(x_0, y_0) = (0, (0, 1)) \in \mathrm{gr}F$. Then a simple calculation shows that*

$$T^{\flat(2)}_{\mathrm{epi}F}(0, (0, 1)) = \mathbb{R} \times \mathbb{R}^2.$$

*This means that: (i) $\mathrm{dom}P = \mathbb{R}$ and $P(x) = \mathbb{R}^2$, $\forall x \in \mathrm{dom}P$; (ii) $ED^{\flat(2)}_w F(0, (0, 1))(x) = \emptyset$ for each $x \in \mathbb{R}$. Obviously, $P(x) := \{y \in Y \mid (x, y) \in \mathbb{R} \times \mathbb{R}^2\}$ does not fulfill the weak domination property for each $x \in \mathbb{R}$, and $\Omega = \emptyset$. Thus Proposition 4 does not hold here and condition (iii) of Proposition 4 is essential.*

#### **4. Higher-Order Mond-Weir Type Duality**

In this section, by virtue of the higher-order weak adjacent epiderivative of a set-valued map, we establish Mond-Weir duality theorems for a constrained optimization problem under Benson proper efficiency.

Let $E \subseteq X$, and let $F : E \to 2^Y$ and $G : E \to 2^Z$ be two set-valued maps. We consider the following constrained set-valued optimization problem:

$$(\mathrm{SOP})\left\{\begin{array}{ll}\mathrm{Min}_C & F(x),\\ \text{s.t.} & x \in E,\ G(x) \cap (-D) \neq \emptyset.\end{array}\right.$$

Let $M := \{x \in E \mid G(x) \cap (-D) \neq \emptyset\}$ and $F(M) := \cup_{x \in M}F(x)$. We denote $F(x) \times G(x)$ by $(F, G)(x)$. The point $(x_0, y_0) \in E \times Y$ is said to be a feasible solution of (SOP) if $x_0 \in M$ and $y_0 \in F(x_0)$.

**Definition 12.** *[25] The feasible solution $(x_0, y_0)$ is called a Benson proper efficient solution of (SOP) if*

$$\mathrm{clcone}(F(M) + C - \{y_0\}) \cap (-C) = \{0_Y\}.$$

Let $(\tilde{x}, \tilde{y}, \tilde{z}) \in \mathrm{gr}(F, G)$, $\nu \in Y^*$, $\omega \in Z^*$ and $x \in \Theta := \mathrm{dom}\,ED^{\flat(m)}_w(F, G)(\tilde{x}, \tilde{y}, \tilde{z})$. Inspired by [2], we establish a new higher-order Mond-Weir type dual problem (DSOP) of (SOP) as follows:

$$\max \qquad \tilde{y}$$

$$\text{s.t.} \qquad \nu(y) + \omega(z) \ge 0,\ \forall (y, z) \in ED^{\flat(m)}_w(F, G)(\tilde{x}, \tilde{y}, \tilde{z})(x),\ x \in \Theta,\tag{1}$$

$$\omega(\tilde{z}) \ge 0,\tag{2}$$

$$\nu \in C^{+i},\tag{3}$$

$$\omega \in D^+.\tag{4}$$

The point $(\tilde{x}, \tilde{y}, \tilde{z}, \nu, \omega)$ is called a feasible solution of (DSOP) if it satisfies conditions (1)–(4) of (DSOP). A feasible solution $(x_0, y_0, z_0, \nu_0, \omega_0)$ is called a maximal solution of (DSOP) if for all $\tilde{y} \in M_D$, $(\{\tilde{y}\} - \{y_0\}) \cap (C \setminus \{0_Y\}) = \emptyset$, where $M_D := \{\tilde{y} \in F(\tilde{x}) \mid (\tilde{x}, \tilde{y}, \tilde{z}) \in \mathrm{gr}(F, G),\ \nu \in C^{+i},\ \omega \in D^+,\ \text{and}\ (\tilde{x}, \tilde{y}, \tilde{z}, \nu, \omega)\ \text{is a feasible solution of (DSOP)}\}$.

**Definition 13.** *[26] Let $K \subseteq X$. The interior tangent cone of $K$ at $x_0$ is defined by*

$$IT_K(x_0) := \{\mu \in X \mid \exists \lambda > 0,\ \forall t \in (0, \lambda),\ \forall \mu' \in B_X(\mu, \lambda),\ x_0 + t\mu' \in K\},$$

*where $B_X(\mu, \lambda)$ stands for the closed ball centered at $\mu \in X$ with radius $\lambda$.*

**Theorem 2.** *(Weak Duality) Let $E$ be a star-shaped set at $\tilde{x} \in E$ and $(\tilde{x}, \tilde{y}, \tilde{z}) \in \mathrm{gr}(F, G)$. Let $(x_0, y_0)$ and $(\tilde{x}, \tilde{y}, \tilde{z}, \nu, \omega)$ be feasible solutions of (SOP) and (DSOP), respectively. Then the weak duality $\nu(y_0) \ge \nu(\tilde{y})$ holds if the following conditions are satisfied:*

*(i) $(F, G)$ is decreasing-along-rays at $\tilde{x}$;*

*(ii) $(F, G)$ is generalized $C \times D$-convex at $\tilde{x}$ on $E$;*

*(iii) the set $P_{(F,G)}(x_0 - \tilde{x}) := \{(y, z) \in Y \times Z \mid (x_0 - \tilde{x}, y, z) \in T^{\flat(m)}_{\mathrm{epi}(F,G)}(\tilde{x}, \tilde{y}, \tilde{z})\}$ fulfills the weak domination property.*

**Proof.** Since $(x_0, y_0)$ is a feasible solution of (SOP), $G(x_0) \cap (-D) \neq \emptyset$. Take $z_0 \in G(x_0) \cap (-D)$. It follows from (2) and (4) that

$$\omega(z_0 - \tilde{z}) \le 0.\tag{5}$$

From Proposition 4 it follows that $x_0 - \tilde{x} \in \Theta := \mathrm{dom}\,ED^{\flat(m)}_w(F, G)(\tilde{x}, \tilde{y}, \tilde{z})$ and

$$(y_0, z_0) - (\tilde{y}, \tilde{z}) \in ED^{\flat(m)}_w(F, G)(\tilde{x}, \tilde{y}, \tilde{z})(x_0 - \tilde{x}) + C \times D.\tag{6}$$

Noting that $\nu \in C^{+i}$ and $\omega \in D^+$, we have by (1) and (6) that $\nu(y_0 - \tilde{y}) + \omega(z_0 - \tilde{z}) \ge 0$. Combining this with (5), one has

$$\nu(y_0 - \tilde{y}) \ge 0.$$

Thus $\nu(y_0) \ge \nu(\tilde{y})$ and the proof is complete. □

Theorem 2 extends [2], Theorem 4.1 from cone convexity to generalized cone convexity. We now give an example to illustrate that Theorem 2 applies where [2], Theorem 4.1 does not.

**Example 9.** *Let $X = Y = Z = \mathbb{R}$, $C = D = \mathbb{R}_+$, $F : E \to 2^Y$ be given as $F(x) = \{y \in Y \mid y \ge 0\}$ and $G : E \to 2^Z$ be defined by*

$$G(x) = \begin{cases} \{z \in Z \mid z \ge 0\}, & x \le 0, \\ \mathbb{R}, & x > 0. \end{cases}$$

*Then the sets of feasible solutions of (DSOP) and (SOP) are $\{(\tilde{x}, \tilde{y}, \tilde{z}, \nu, \omega) \mid \tilde{x} = 0,\ \tilde{y} = 0,\ \tilde{z} \ge 0,\ \nu \in C^{+i},\ \omega = 0\}$ and $\{(x_0, y_0) \mid x_0 \in \mathbb{R},\ y_0 \ge 0\}$, respectively. Thus $\nu(y_0) \ge \nu(\tilde{y}) = \nu(0)$ and Theorem 2 holds here. However, [2], Theorem 4.1 is not applicable here because $G$ is not $C$-convex on $E$.*
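The weak-duality inequality in Example 9 can be spot-checked numerically: every SOP-feasible value satisfies $y_0 \ge 0$, the only dual objective value is $\tilde{y} = 0$, and any $\nu \in C^{+i}$ on $\mathbb{R}$ is of the form $\nu(y) = ay$ with $a > 0$. The names and sample values below are illustrative.

```python
# Illustrative spot-check of weak duality in Example 9.
def nu(y, a=2.0):              # a strictly positive linear functional on R
    return a * y

sop_values = [0.0, 0.5, 1.0, 3.7]   # sample y0 from {y0 : y0 >= 0}
dsop_value = 0.0                    # the only dual objective value, y~ = 0

# nu(y0) >= nu(y~) for every sampled primal-feasible value:
assert all(nu(y0) >= nu(dsop_value) for y0 in sop_values)
```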

**Lemma 1.** *[27] Let $x_0 \in K \subseteq X$ and $\mathrm{int}K \neq \emptyset$. If $K$ is convex, then*

$$IT_{\mathrm{int}K}(x_0) = \mathrm{intcone}(K - \{x_0\}).$$

The inclusion relation between the generalized second-order adjacent epiderivative and the convex cones $C$ and $D$ was established by Wang and Yu in [28], Theorem 5.2. Inspired by [28], Theorem 5.2, we next establish the analogous relation for the higher-order weak adjacent epiderivative and the convex cones $C$ and $D$, which will be used in the proof of the strong duality theorem.

**Lemma 2.** *Let $(x_0, y_0, z_0) \in \mathrm{gr}(F, G)$ and $z_0 \in -D$. If $(x_0, y_0)$ is a Benson proper efficient solution of (SOP), then for all $x \in \Theta := \mathrm{dom}\,ED^{\flat(m)}_w(F, G)(x_0, y_0, z_0)$,*

$$\left(ED^{\flat(m)}_w(F, G)(x_0, y_0, z_0)(x) + C \times D + \{(0_Y, z_0)\}\right) \cap \left(-\left((C \setminus \{0_Y\}) \times \mathrm{int}D\right)\right) = \emptyset.\tag{7}$$

**Proof.** We can easily see that (7) is equivalent to

$$\left(ED^{\flat(m)}_w(F, G)(x_0, y_0, z_0)(x) + C \times D\right) \cap \left(-\left((C \setminus \{0_Y\}) \times (\mathrm{int}D + \{z_0\})\right)\right) = \emptyset.\tag{8}$$

Thus we only need to prove that (8) holds. Suppose on the contrary that there exist $x \in \Theta$, $(y, z) \in ED^{\flat(m)}_w(F, G)(x_0, y_0, z_0)(x)$ and $(c_0, d_0) \in C \times D$ such that

$$z + d_0 \in -(\mathrm{int}D + \{z_0\})\tag{9}$$

and

$$y + c_0 \in -(C \setminus \{0_Y\}).\tag{10}$$

It follows from $(y, z) \in ED^{\flat(m)}_w(F, G)(x_0, y_0, z_0)(x)$ that $(x, y, z) \in T^{\flat(m)}_{\mathrm{epi}(F,G)}(x_0, y_0, z_0)$. Then for any sequence $\{t_n\}$ with $t_n \to 0^+$, there exists $\{(x_n, y_n, z_n)\} \subseteq \mathrm{epi}(F, G)$ such that

$$\left(\frac{x_n - x_0}{t_n}, \frac{(y_n, z_n) - (y_0, z_0)}{t_n^m}\right) \to (x, y, z).\tag{11}$$

From (9) and (11), there exists a sufficiently large natural number $N_1$ such that

$$\bar{z}_n := \frac{z_n - z_0 + t_n^m d_0}{t_n^m} \in -(\mathrm{int}D + \{z_0\}) \subseteq -\mathrm{intcone}(D + \{z_0\}) \subseteq -IT_{\mathrm{int}D}(-z_0),\ \forall n > N_1,\tag{12}$$

where the last inclusion follows from Lemma 1. According to Definition 13, there exists $\lambda > 0$ such that

$$-z_0 + t_n\mu' \in \mathrm{int}D,\ \forall t_n \in (0, \lambda),\ \mu' \in B_Z(-\bar{z}_n, \lambda),\ n > N_1.\tag{13}$$

Since $t_n \to 0^+$, there exists a sufficiently large natural number $N_2$ with $N_2 \ge N_1$ such that $t_n^m \in (0, \lambda)$, $\forall n > N_2$. Combining this with (13), one has

$$-z_0 + t_n^m(-\bar{z}_n) \in \mathrm{int}D,\ \forall n > N_2.\tag{14}$$

From (12) and (14), we have

$$-z_0 - (z_n - z_0 + t_n^m d_0) = -z_n - t_n^m d_0 \in \mathrm{int}D,\ \forall n > N_2.$$

It follows from $d_0 \in D$, $t_n^m \to 0^+$ and $\mathrm{int}D + D \subseteq \mathrm{int}D$ that

$$z_n \in -\mathrm{int}D,\ \forall n > N_2.\tag{15}$$

Noting that $\{(x_n, y_n, z_n)\} \subseteq \mathrm{epi}(F, G)$, there exist $x_n \in E$, $\hat{z}_n \in G(x_n)$, $\hat{y}_n \in F(x_n)$ and $(c_n, d_n) \in C \times D$ such that $y_n = \hat{y}_n + c_n$ and $z_n = \hat{z}_n + d_n$. By (15), $\hat{z}_n \in -\mathrm{int}D - \{d_n\} \subseteq -\mathrm{int}D \subseteq -D$, $\forall n > N_2$. Therefore

$$x_n \in M,\ \forall n > N_2.\tag{16}$$

Clearly, we have

$$\begin{aligned} \frac{y_n - y_0}{t_n^m} + c_0 &= \frac{y_n + t_n^m c_0 - y_0}{t_n^m} \in \frac{F(x_n) + C - \{y_0\}}{t_n^m} \\ &\subseteq \frac{F(M) + C - \{y_0\}}{t_n^m} \\ &\subseteq \mathrm{clcone}(F(M) + C - \{y_0\}). \end{aligned}$$

It follows from (11) and (16) that $y + c_0 \in \mathrm{clcone}(F(M) + C - \{y_0\})$. Combining this with (10), one has

$$y + c_0 \in \mathrm{clcone}(F(M) + C - \{y_0\}) \cap (-(C \setminus \{0_Y\})),$$

which contradicts the fact that $(x_0, y_0)$ is a Benson proper efficient solution of (SOP). Thus (7) holds and the proof is complete. □

According to Theorem 2.3 of [29], we have the following lemma.

**Lemma 3.** *[29] Let $W$ be a locally convex space, and let $H$ and $Q$ be cones in $W$. If $H$ is closed, $Q$ has a compact base and $H \cap Q = \{0_W\}$, then there is a pointed convex cone $\tilde{A}$ such that $Q \setminus \{0_W\} \subseteq \mathrm{int}\tilde{A}$ and $\tilde{A} \cap H = \{0_W\}$.*

**Theorem 3.** *(Strong Duality) Let $E$ be a convex subset of $X$, $(x_0, y_0, z_0) \in \mathrm{gr}(F, G)$ and $z_0 \in -D$. Suppose that the following conditions are satisfied:*

*(i) $(F, G)$ is $C \times D$-convex on $E$;*

*(ii) $P(x) := \{(y, z) \in Y \times Z \mid (x, y, z) \in T^{\flat(m)}_{\mathrm{epi}(F,G)}(x_0, y_0, z_0)\}$ fulfills the weak domination property for all $x \in \mathrm{dom}P$;*

*(iii) $C$ has a compact base;*

*(iv) $(x_0, y_0)$ is a Benson proper efficient solution of (SOP);*

*(v) for any $x \in E$, $G(x) \cap (-D) \neq \emptyset$.*

*Then there exist $\nu \in C^{+i}$ and $\omega \in D^+$ such that $(x_0, y_0, z_0, \nu, \omega)$ is a maximal solution of (DSOP).*

**Proof.** Define

$$\Psi := ED^{\flat(m)}_w(F, G)(x_0, y_0, z_0)(\Theta) + C \times D + \{(0_Y, z_0)\},$$

where $\Theta := \mathrm{dom}\,ED^{\flat(m)}_w(F, G)(x_0, y_0, z_0)$.

Step 1. We firstly prove that Ψ is a convex set. Indeed, it is sufficient to show the convexity of Ψ<sup>0</sup> := Ψ − {(0*Y*, *z*0)}.

Let (*yi*, *zi*) ∈ Ψ<sup>0</sup> (*i* = 1, 2). Then there exist *xi* ∈ Θ, (*y i* , *z i* ) <sup>∈</sup> *ED*(*m*) *<sup>w</sup>* (*F*, *<sup>G</sup>*)(*x*0, *<sup>y</sup>*0, *<sup>z</sup>*0)(*xi*) and (*ci*, *di*) ∈ *C* × *D* (*i* = 1, 2) such that

$$(y_i, z_i) = (y'_i, z'_i) + (c_i, d_i) \quad (i = 1, 2). \tag{17}$$

According to the definition of $ED^{(m)}_w(F, G)(x_0, y_0, z_0)$, one has $(x_i, y'_i, z'_i) \in T^{(m)}_{\operatorname{epi}(F,G)}(x_0, y_0, z_0)$ $(i = 1, 2)$.

Since (*F*, *<sup>G</sup>*) is *<sup>C</sup>* <sup>×</sup> *<sup>D</sup>*-convex on *<sup>E</sup>*, epi(*F*, *<sup>G</sup>*) is a convex set. From Proposition 3, *<sup>T</sup>*(*m*) epi(*F*,*G*) (*x*0, *y*0, *z*0) is a convex set. So for any *t* ∈ [0, 1],

$$t(x_1, y'_1, z'_1) + (1 - t)(x_2, y'_2, z'_2) \in T^{(m)}_{\operatorname{epi}(F,G)}(x_0, y_0, z_0).$$

By (ii), we have

$$\begin{aligned} t(y'_1, z'_1) + (1 - t)(y'_2, z'_2) &\in ED^{(m)}_w(F, G)(x_0, y_0, z_0)(tx_1 + (1 - t)x_2) + \operatorname{int}(C \times D) \cup \{(0_Y, 0_Z)\} \\ &\subseteq ED^{(m)}_w(F, G)(x_0, y_0, z_0)(tx_1 + (1 - t)x_2) + C \times D. \end{aligned}$$

Combining this with (17), one has

$$t(y_1, z_1) + (1 - t)(y_2, z_2) \in \Psi_0 + C \times D = \Psi_0.$$

Therefore Ψ<sup>0</sup> is a convex set and so Ψ = Ψ<sup>0</sup> + {(0*Y*, *z*0)} is a convex set.

Step 2. We prove that there exist *<sup>ν</sup>* <sup>∈</sup> *<sup>C</sup>*+*<sup>i</sup>* and *<sup>ω</sup>* <sup>∈</sup> *<sup>D</sup>*<sup>+</sup> such that (*x*0, *<sup>y</sup>*0, *<sup>z</sup>*0, *<sup>ν</sup>*, *<sup>ω</sup>*) is a feasible solution of (DSOP).

Define

$$
\Phi := \operatorname{clcone}\Psi.
$$

Since Ψ is a convex set, Φ is a convex cone. According to Lemma 2, we have

$$\Phi \cap \left( -\left( (C \setminus \{0_Y\}) \times \operatorname{int}D \right) \right) = \emptyset. \tag{18}$$

Hence, we can conclude

$$\Phi \cap \left( -\left( C \times \{0_Z\} \right) \right) = \{(0_Y, 0_Z)\}. \tag{19}$$

In fact, assume that (19) does not hold. Since Φ is a cone, there exists *b* ∈ −*C* \ {0*Y*} such that

$$(b, 0_Z) \in \Phi \cap \left( -\left( (C \setminus \{0_Y\}) \times \{0_Z\} \right) \right).$$

Then there exist $x^n \in \Theta$, $(y^n, z^n) \in ED^{(m)}_w(F, G)(x_0, y_0, z_0)(x^n)$, $(c_n, d_n) \in C \times D$ and $\lambda_n \geq 0$ such that

$$b = \lim_{n \to \infty} \lambda_n (y^n + c_n). \tag{20}$$

According to the definition of $ED^{(m)}_w(F, G)(x_0, y_0, z_0)$, for any $t_k \to 0^+$, there exists $(x^n_k, y^n_k, z^n_k) \in \operatorname{epi}(F, G)$ such that

$$\lim_{k \to \infty} \left( \frac{x^n_k - x_0}{t_k}, \frac{y^n_k - y_0 + t^m_k c_n}{t^m_k}, \frac{z^n_k - z_0 + t^m_k d_n}{t^m_k} \right) = (x^n, y^n + c_n, z^n + d_n). \tag{21}$$

This together with condition (v) implies

$$\begin{aligned} \lambda_n \frac{y^n_k - y_0 + t^m_k c_n}{t^m_k} &\in \lambda_n \frac{F(x^n_k) + C - \{y_0\} + t^m_k c_n}{t^m_k} \\ &\subseteq \operatorname{clcone}[F(E) + C - \{y_0\}] \\ &\subseteq \operatorname{clcone}[F(M) + C - \{y_0\}]. \end{aligned} \tag{22}$$

It follows from (20), (21), (22) and *b* ∈ −(*C* \ {0*Y*}) that

$$b \in \operatorname{clcone}(F(M) + C - \{y_0\}) \cap \left( -(C \setminus \{0_Y\}) \right),$$

which contradicts that (*x*0, *y*0) is a Benson proper efficient solution of (SOP). Thus (19) holds.

Since *C* has a compact base, −(*C* × {0*Z*}) also has a compact base. Combining this with (19) and Lemma 3, replacing *<sup>H</sup>* and *<sup>Q</sup>* with <sup>Φ</sup> and <sup>−</sup>(*<sup>C</sup>* × {0*Z*}), there exists a pointed convex cone *<sup>A</sup>*˜ such that

$$-\left( C \times \{0_Z\} \right) \setminus \{(0_Y, 0_Z)\} \subseteq \operatorname{int}\tilde{A} \tag{23}$$

and

$$
\Phi \cap \tilde{A} = \{ (0\_Y, 0\_Z) \}. \tag{24}
$$

Let *<sup>B</sup>*˜ :<sup>=</sup> *<sup>A</sup>* ∪ {(0*Y*, 0*Z*)}, where *<sup>A</sup>* :<sup>=</sup> <sup>−</sup>((*<sup>C</sup>* \ {0*Y*}) <sup>×</sup> (int*<sup>D</sup>* ∪ {0*Z*})) + *<sup>A</sup>*˜. Thus *<sup>B</sup>*˜ is a convex cone. Next, we further prove that *<sup>B</sup>*˜ is a pointed cone. According to Proposition 1, we get (0*X*, 0*Y*, 0*Z*) <sup>∈</sup> *T*(*m*) epi(*F*,*G*) (*x*0, *y*0, *z*0). Combining this with the weak domination property of *P*, we get

$$(0_Y, 0_Z) \in ED^{(m)}_w(F, G)(x_0, y_0, z_0)(0_X) + C \times D. \tag{25}$$

For *z*<sup>0</sup> ∈ *G*(*x*0) ∩ (−*D*) and (*c*, *d*) ∈ *C* × *D*, we have

$$\begin{aligned} (c, d) &= (0_Y, 0_Z) + (c, d - z_0) + (0_Y, z_0) \\ &\in ED^{(m)}_w(F, G)(x_0, y_0, z_0)(0_X) + C \times D + \{(0_Y, z_0)\} \\ &\subseteq \Phi, \end{aligned}$$

and so

$$
\mathbb{C} \times D \subseteq \Phi. \tag{26}
$$

It follows from (24) and (26) that $(C \times D) \cap \tilde{A} = \{(0_Y, 0_Z)\}$. Hence,

$$\left( (C \setminus \{0_Y\}) \times (\operatorname{int}D \cup \{0_Z\}) \right) \cap \tilde{A} = \emptyset.$$

Combining with the definition of *A*, one has

$$(0\_Y, 0\_Z) \notin A. \tag{27}$$

Thus

$$A \cap (-A) = \emptyset. \tag{28}$$

To obtain this result, we suppose on the contrary that there exists $(c, d) \in A \cap (-A)$. Then there exist $(c_i, d_i) \in (C \setminus \{0_Y\}) \times (\operatorname{int}D \cup \{0_Z\})$ $(i = 1, 2)$ and $(c'_i, d'_i) \in \tilde{A}$ $(i = 1, 2)$ such that

$$(c, d) = -(c_1, d_1) + (c'_1, d'_1)$$

and

$$(c, d) = (c_2, d_2) - (c'_2, d'_2).$$

So

$$\begin{aligned} (0_Y, 0_Z) &= \left( -(c_1, d_1) + (c'_1, d'_1) \right) - \left( (c_2, d_2) - (c'_2, d'_2) \right) \\ &= -(c_1 + c_2, d_1 + d_2) + (c'_1 + c'_2, d'_1 + d'_2) \in A, \end{aligned}$$

which contradicts (27). Therefore (28) holds. Then *B*˜ is a pointed convex cone and (0*Y*, 0*Z*) ∉ int*B*˜.

Now, we can conclude

$$
\Phi \cap \tilde{B} = \{(0_Y, 0_Z)\}. \tag{29}
$$

To see the conclusion, we suppose on the contrary that there exists $(y, z) \neq (0_Y, 0_Z)$ such that

$$(y, z) \in \Phi \cap \tilde{B}, \tag{30}$$

because *B*˜ is a pointed convex cone and Φ is a convex cone. From the definition of *B*˜, there exist (*y*1, *<sup>z</sup>*1) ∈ −((*<sup>C</sup>* \ {0*Y*}) <sup>×</sup> (int*<sup>D</sup>* ∪ {0*Z*})) and (*y*2, *<sup>z</sup>*2) <sup>∈</sup> *<sup>A</sup>*˜ such that

$$(y, z) = (y\_1, z\_1) + (y\_2, z\_2).$$

According to the definition of $\Phi$, there exist $x'_n \in \Theta$, $(y'_n, z'_n) \in ED^{(m)}_w(F, G)(x_0, y_0, z_0)(x'_n)$, $(c'_n, d'_n) \in C \times D$ and $\lambda'_n \geq 0$ such that

$$(y, z) = \lim_{n \to \infty} \lambda'_n (y'_n + c'_n, z'_n + d'_n + z_0).$$

Since $(y, z) \neq (0_Y, 0_Z)$, without loss of generality, we may assume that $\lambda'_n > 0$. It follows from the definition of $\Phi$ that

$$\begin{aligned} (y, z) - (y_1, z_1) &= \lim_{n \to \infty} \lambda'_n (y'_n + c'_n, z'_n + d'_n + z_0) - (y_1, z_1) \\ &= \lim_{n \to \infty} \lambda'_n \left( y'_n + c'_n - \frac{y_1}{\lambda'_n}, z'_n + d'_n - \frac{z_1}{\lambda'_n} + z_0 \right) \\ &\in \Phi, \end{aligned}$$

and so

$$(y_2, z_2) = (y, z) - (y_1, z_1) \in \Phi \cap \tilde{A} = \{(0_Y, 0_Z)\}.$$

Thus

$$(y, z) = (y\_1, z\_1) \in -((C \backslash \{0\_Y\}) \times \text{int}D). \tag{31}$$

By (30) and (31), we have

$$(y, z) \in \Phi \cap \left( -\left( (C \setminus \{0_Y\}) \times \operatorname{int}D \right) \right),$$

which contradicts (18).

We claim that

$$-\left( (C \setminus \{0_Y\}) \times (\operatorname{int}D \cup \{0_Z\}) \right) \subseteq \operatorname{int}\tilde{B}. \tag{32}$$

To obtain this conclusion, we replace $B$ and $C$ in [30], Theorem 2.2 with $-\left( (C \setminus \{0_Y\}) \times (\operatorname{int}D \cup \{0_Z\}) \right)$ and $\operatorname{int}\tilde{A}$, respectively, which together with the fact that $(0_Y, 0_Z) \notin \operatorname{int}\tilde{B}$ yields

$$\operatorname{int}\tilde{B} = -\left( (C \setminus \{0_Y\}) \times (\operatorname{int}D \cup \{0_Z\}) \right) + \operatorname{int}\tilde{A}. \tag{33}$$

Let *c* ∈ *C* \ {0*Y*} and *d* ∈ int*D* ∪ {0*Z*}. Then by (23) and (33), one has

$$-(c, d) = -\left( \frac{c}{2}, d \right) - \left( \frac{c}{2}, 0_Z \right) \in -\left( (C \setminus \{0_Y\}) \times (\operatorname{int}D \cup \{0_Z\}) \right) + \operatorname{int}\tilde{A} = \operatorname{int}\tilde{B},$$

and so (32) holds.

According to the separation theorem for convex sets and (29), there exist $\nu \in Y^*$ and $\omega \in Z^*$ such that

$$
\nu(\check{y}) + \omega(\check{z}) < 0, \quad \forall (\check{y}, \check{z}) \in \operatorname{int}\tilde{B} \tag{34}
$$

and

$$
\nu(\check{y}) + \omega(\check{z}) \ge 0, \forall (\check{y}, \check{z}) \in \Phi. \tag{35}
$$

By (32) and (34), we have

$$
\nu(\bar{y}) + \omega(\bar{z}) > 0, \quad \forall (\bar{y}, \bar{z}) \in (C \setminus \{0_Y\}) \times (\operatorname{int}D \cup \{0_Z\}). \tag{36}
$$

Taking $\bar{z} = 0_Z$ in (36), one has $\nu(\bar{y}) > 0$ for all $\bar{y} \in C \setminus \{0_Y\}$, thus $\nu \in C^{+i}$. For any $\varepsilon > 0$, take $\bar{y} \in (C \setminus \{0_Y\}) \cap B(0_Y, \varepsilon)$ in (36). Then we can observe that $\omega(\bar{z}) \geq 0$ for all $\bar{z} \in \operatorname{int}D$, which implies $\omega \in D^{+}$.

It follows from (35) that

$$\nu(y) + \omega(z) \geq 0, \quad \forall (y, z) \in ED^{(m)}_w(F, G)(x_0, y_0, z_0)(\Theta) + C \times D + \{(0_Y, z_0)\}. \tag{37}$$

Together with (25), we get *<sup>ω</sup>*(*z*0) <sup>≥</sup> 0. It follows from *<sup>z</sup>*<sup>0</sup> ∈ −*<sup>D</sup>* and *<sup>ω</sup>* <sup>∈</sup> *<sup>D</sup>*<sup>+</sup> that *<sup>ω</sup>*(*z*0) <sup>≤</sup> 0. Thus,

$$
\omega(z\_0) = 0.
$$

Combining this with (37), one has

$$
\omega(y') + \omega(z') \ge 0,\\
\forall (y', z') \in E D\_w^{\flat(\mathfrak{m})}(F, G)(\mathfrak{x}\_{0\prime} y\_{0\prime} z\_0)(\Theta) + \mathcal{C} \times D\_{\prime\prime}
$$

and so

$$
\nu(y) + \omega(z) \geq 0, \quad \forall (y, z) \in ED^{(m)}_w(F, G)(x_0, y_0, z_0)(\Theta).
$$

Thus (*x*0, *y*0, *z*0, *ν*, *ω*) is a feasible solution of (DSOP).

Step 3. We prove that (*x*0, *y*0, *z*0, *ν*, *ω*) is a maximal solution of (DSOP).

Suppose on the contrary that there exists a feasible solution $(\hat{x}, \hat{y}, \hat{z}, \nu', \omega')$ such that $\hat{y} - y_0 \in C \setminus \{0_Y\}$. By $\nu' \in C^{+i}$, we have

$$
\nu'(\hat{y}) > \nu'(y_0). \tag{38}
$$

Since $(x_0, y_0)$ is a feasible solution of (SOP), it follows from Theorem 2 that $\nu'(y_0) \geq \nu'(\hat{y})$, which contradicts (38). The proof is complete. $\Box$

**Theorem 4.** *(Converse Duality) Let E be a star-shaped set at $x_0 \in E$. Let $y_0 \in F(x_0)$, $z_0 \in G(x_0) \cap (-D)$, $\nu \in C^{+i}$ and $\omega \in D^{+}$ be such that $(x_0, y_0, z_0, \nu, \omega)$ is a feasible solution of (DSOP). Then $(x_0, y_0)$ is a Benson proper efficient solution of (SOP) if the following conditions are satisfied:*

*(i)* (*F*, *G*) *is decreasing-along-rays at x*0*;*

*(ii)* (*F*, *G*) *is a generalized C* × *D-convex at x*<sup>0</sup> *on E;*

*(iii) the set $P_{(F,G)}(x - x_0) := \{(y, z) \in Y \times Z \mid (x - x_0, y, z) \in T^{(m)}_{\operatorname{epi}(F,G)}(x_0, y_0, z_0)\}$ fulfills the weak domination property for all $x \in \operatorname{dom}P_{(F,G)}$.*

**Proof.** It follows from (1), (3) and (4) that

$$\begin{aligned} \nu(y) + \omega(z) \geq 0, \quad \forall (y, z) \in ED^{(m)}_w(F, G)(x_0, y_0, z_0)(x) + C \times D, \\ \forall x \in \Theta := \operatorname{dom}ED^{(m)}_w(F, G)(x_0, y_0, z_0). \end{aligned} \tag{39}$$

According to Proposition 4, we get

$$\begin{aligned} (y - y_0, z - z_0) \in ED^{(m)}_w(F, G)(x_0, y_0, z_0)(x - x_0) + C \times D, \\ \forall x \in M,\ y \in F(x),\ z \in G(x) \cap (-D). \end{aligned} \tag{40}$$

By (2), we have $\omega(z_0) \geq 0$. It follows from $z_0 \in G(x_0) \cap (-D)$ and $\omega \in D^{+}$ that $\omega(z_0) \leq 0$, thus $\omega(z_0) = 0$. Then

$$
\omega(z - z\_0) = \omega(z) - \omega(z\_0) = \omega(z) \leqslant 0, \forall z \in G(\mathbf{x}) \cap (-D), \mathbf{x} \in M. \tag{41}
$$

It follows from (39), (40) and (41) that

$$\nu(y - y_0) \geq 0, \quad \forall y \in F(x),\ x \in M.$$

Furthermore, we can get

$$\nu(y + c - y_0) \geq 0, \quad \forall y \in F(x),\ x \in M,\ c \in C,$$

and so

$$\nu(y) \geq 0, \quad \forall y \in \operatorname{clcone}(F(M) + C - \{y_0\}). \tag{42}$$

Assume that the feasible solution $(x_0, y_0)$ is not a Benson proper efficient solution of (SOP). Then there exists $y' \in -(C \setminus \{0_Y\})$ such that $y' \in \operatorname{clcone}(F(M) + C - \{y_0\})$. This together with (42) implies that

$$
\nu(y') \ge 0.\tag{43}
$$

It follows from $\nu \in C^{+i}$ and $y' \in -(C \setminus \{0_Y\})$ that $\nu(y') < 0$, which contradicts (43). Thus $(x_0, y_0)$ is a Benson proper efficient solution of (SOP) and the proof is complete. $\Box$

**Remark 4.** *Example 9 also illustrates that Theorem 4 extends [2], Theorem 4.3 from the cone convexity to generalized cone convexity. Indeed, take* (*x*0, *y*0, *z*0)=(0, 0, 0)*. Then simple calculations show that*

$$T^{(2)}_{\operatorname{epi}(F,G)}(0, 0, 0) = \{(x, y, z) \in X \times Y \times Z \mid x \leq 0,\ y \geq 0,\ z \geq 0\} \cup \{(x, y, z) \in X \times Y \times Z \mid x > 0,\ y \geq 0,\ z \in \mathbb{R}\}$$

*and*

$$ED^{(2)}_w(F, G)(0, 0, 0)(x) = \begin{cases} \{(y, z) \in Y \times Z \mid y = 0,\ z \geq 0\} \cup \{(y, z) \in Y \times Z \mid y \geq 0,\ z = 0\}, & x \leq 0, \\ \{(y, z) \in Y \times Z \mid y = 0,\ z \in \mathbb{R}\}, & x > 0. \end{cases}$$

*Then we can choose ν* = 1 *and ω* = 0 *such that* (*x*0, *y*0, *z*0, *ν*, *ω*)=(0, 0, 0, 1, 0) *is a feasible solution of (DSOP). It is easy to show that all the conditions of Theorem 4 are fulfilled and* (0, 0) *is a Benson proper efficient solution of (SOP). Thus Theorem 4 holds here. However, [2], Theorem 4.3 is not applicable here because G is not C-convex on E.*

**Author Contributions:** The authors made equal contributions to this paper.

**Funding:** This research was funded by Chongqing Jiaotong University Graduate Education Innovation Foundation Project (No. 2018S0152), Chongqing Natural Science Foundation Project of CQ CSTC(Nos. 2015jcyjA30009, 2015jcyjBX0131, 2017jcyjAX0382), the Program of Chongqing Innovation Team Project in University (No. CXTDX201601022) and the National Natural Science Foundation of China (No. 11571055). Ching-Feng Wen was supported by the Taiwan MOST [grant number 107-2115-M-037-001].

**Conflicts of Interest:** The authors declare no conflicts of interest.

#### **References**


c 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## *Article* **An Inequality Approach to Approximate Solutions of Set Optimization Problems in Real Linear Spaces**

**Elisabeth Köbis 1,\*,†, Markus A. Köbis 2,† and Xiaolong Qin 3,†**


Received: 29 October 2019; Accepted: 10 January 2020; Published: 20 January 2020

**Abstract:** This paper explores new notions of approximate minimality in set optimization using a set approach. We propose characterizations of several approximate minimal elements of families of sets in real linear spaces by means of general functionals, which can be unified in an inequality approach. As particular cases, we investigate the use of the prominent Tammer–Weidner nonlinear scalarizing functionals, without assuming any topology, in our context. We also derive numerical methods to obtain approximate minimal elements of families of finitely many sets by means of our obtained results.

**Keywords:** set optimization; set relations; nonlinear scalarizing functional; algebraic interior; vector closure

**MSC:** 90C29; 90C26

#### **1. Introduction**

Set optimization has become an important research area and has gained tremendous interest within the optimization community due to its wide and important applications; see, e.g., [1–4]. There exist various research fields that directly lead to problems which can most satisfactorily be modeled and solved in the unified framework provided by set optimization. For example, duality in vector optimization, gap functions for vector variational inequalities, fuzzy optimization, as well as many problems in image processing, viability theory, economics etc. all lead to optimization problems that can be modeled as set-valued optimization problems. For an introduction to set optimization and its applications, we refer to [5].

For example, it is well known that uncertain optimization problems can be modeled by means of set optimization. Uncertainty here means that some parameters are not known. Instead, possibly only an estimated value or a set of possible values can be determined. As inaccurate data can have severe impacts on the model and therefore on the computed solution, it is important to take such uncertainty into account when modeling an optimization problem. If uncertainty is included in the optimization model, one is left with not only one objective function value, but possibly a whole set of values. This leads to a set-valued optimization problem, where the objective map is set-valued.

Recently, it has been shown that certain concepts of robustness for dealing with uncertainties in vector optimization can be described using approaches from set-valued optimization (see [2,3] and a practical application in the context of layout optimization of photovoltaic powerplants in [6]). The concept of interval arithmetic for computations with strict error bounds [7] is also a special case of dealing with set-valued mappings.

To obtain minimal solutions of a set-valued optimization problem, one must analyze whether one set dominates another set in a certain sense, i.e., by means of a given set relation. As it turns out, however, (depending on the chosen set relation), this intuitive and natural mathematical modeling framework often reaches its limitations and leads to very large or—even worse—empty solution sets. This is especially important throughout the design and implementation process of numerical algorithms for set optimization problems: The criteria involved in the definition of the set relations are usually based on set inclusions which for continuous problems are very sensitive to numerical inaccuracies or even just round-off errors.

A simple way to remedy this is to use *approximate solution* concepts: Here, the strict set inclusions are in a way relaxed by extending (enlarging/translating) the quantities that are to be compared such that one obtains more robust results for the involved inclusion tests.

The goal of this paper lies in the characterization of several well-known set relations by means of a very broad, manageable and easy-to-compute functional in the context of approximate solutions to set optimization problems using the set approach. In contrast to recent results in this area (for example see [8–11]), we assume that the spaces in which the sets are compared are not endowed with a particular topology. Therefore, our results generalize those found in the literature by dismissing topological properties. Please note that the references [10,11] present results on scalarizing functionals, but the functional acts on a real linear topological space and no relation to approximate solutions is presented there. Moreover, in [8,9], the oriented distance functional (which implicitly requires a topology) is used to derive characterizations of set relations. To the best of our knowledge, our approach of combining algebraic tools with approximate minimality notions in set optimization is original. That way, our results are not only valid in a broader mathematical setting but also provide some further insight into the purely algebraic tools and theoretical requirements necessary to acquire our findings. This is not only mathematically interesting, but deepens the theoretical understanding of approximate minimality in set optimization. It is furthermore in line with the recent increased interest in studying optimality conditions and separation concepts in spaces without a particular topology underneath it, see [12–21] and the references therein.

#### **2. Preliminaries**

Throughout this work, let *Y* be a real linear space. Following the nomenclature of [22], for a nonempty set *F* ⊆ *Y*, we denote by

$$\operatorname{core}F := \{y \in Y \mid \forall v \in Y\ \exists \lambda > 0 \text{ s.t. } y + [0, \lambda]v \subseteq F\},$$

the algebraic interior of *F*, and for any given *k* ∈ *Y*, let

$$\operatorname{vcl}_k F := \{y \in Y \mid \forall \lambda > 0\ \exists \lambda' \in [0, \lambda] \text{ s.t. } y + \lambda' k \in F\}.$$

We say that *F* is *k*-vectorially closed if vcl*<sup>k</sup> F* = *F*. Obviously, it holds *F* ⊆ vcl*<sup>k</sup> F* for all *k* ∈ *Y*.
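A one-dimensional example (our own illustration, not taken from [22]) shows that the vectorial closure genuinely depends on the direction $k$: let $Y = \mathbb{R}$ and $F = [0, 1)$. For $k = 1$ and $y = 1$, no $\lambda' \in [0, \lambda]$ yields $1 + \lambda' \in F$, whereas for $k = -1$ every sufficiently small $\lambda' > 0$ yields $1 - \lambda' \in F$. Hence

$$\operatorname{vcl}_{1} F = [0, 1), \qquad \operatorname{vcl}_{-1} F = [0, 1],$$

so $F$ is $1$-vectorially closed but not $(-1)$-vectorially closed.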

We denote by P(*Y*) := {*A* ⊆ *Y* | *A* is nonempty} the power set of *Y* without the empty set. For two elements *A*, *B* of P(*Y*), we denote the sum of sets by

$$A + B := \{ a + b \mid a \in A, \ b \in B \}.$$

The set *F* ⊆ *Y* is a cone if for all *f* ∈ *F* and *λ* ≥ 0, *λ f* ∈ *F* holds true. The cone *F* is convex if *F* + *F* ⊆ *F*.

Now let $\emptyset \neq C \subseteq Y$ and $k \in Y \setminus \{0\}$. We recall the functional $z^{C,k} : Y \to \mathbb{R} \cup \{+\infty\} \cup \{-\infty\} =: \overline{\mathbb{R}}$ from Gerstewitz [23] (which has very recently been extended to the space $Y$ without assuming any topology, see [24] and the references therein)

$$z^{\mathbb{C},k}(y) := \begin{cases} +\infty & \text{if } y \notin \mathbb{R}k - \mathbb{C}, \\ \inf\{t \in \mathbb{R} \mid y \in tk - \mathbb{C}\} & \text{otherwise}. \end{cases} \tag{1}$$

The functional $z^{C,k}$ was originally introduced as a scalarizing functional in vector optimization. Please note that the construction of $z^{C,k}$ was mentioned by Krasnosel'skiĭ [25] (see Rubinov [26]) in the context of operator theory. Figure 1 visualizes the functional $z^{C,k}$, where $C = \mathbb{R}^2_+$ has been taken as the natural ordering cone in $\mathbb{R}^2$ and $k \in \operatorname{core}C$. We can see that the set $-C$ is moved along the line $\mathbb{R} \cdot k$ until $y$ belongs to $tk - C$. The functional $z^{C,k}$ assigns the smallest value $t$ such that the property $y \in tk - C$ is fulfilled.

**Figure 1.** Illustration of the functional $z^{C,k}(y) := \inf\{t \in \mathbb{R} \mid y \in tk - C\}$.
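In the setting of Figure 1 ($C = \mathbb{R}^n_+$ and $k \in \operatorname{core}C = \operatorname{int}\mathbb{R}^n_+$), the infimum in (1) admits the well-known closed form $z^{C,k}(y) = \max_i y_i / k_i$. The following sketch is a numerical illustration under this standard finite-dimensional assumption (it is not part of the paper); it evaluates the closed form and checks the translation property of Proposition 1(d):

```python
def z(y, k):
    """Gerstewitz functional z^{C,k}(y) for C = R^n_+ and k in int R^n_+.

    The smallest t with y in t*k - C, i.e. y_i <= t*k_i for every i,
    is t = max_i y_i / k_i.
    """
    return max(yi / ki for yi, ki in zip(y, k))

y, k = (1.0, 3.0), (1.0, 2.0)
assert z(y, k) == 1.5  # need t >= 1 and t >= 3/2

# Translation property (Proposition 1(d)): z(y + r*k) = z(y) + r.
r = 0.5
shifted = tuple(yi + r * ki for yi, ki in zip(y, k))
assert abs(z(shifted, k) - (z(y, k) + r)) < 1e-12
```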

The functional *zC*,*<sup>k</sup>* plays an important role as nonlinear separation functional for not necessarily convex sets. Applications of *zC*,*<sup>k</sup>* include coherent risk measures in financial mathematics (see, for instance, [27]) and uncertain programming (see [2,3]). Several important properties of *zC*,*<sup>k</sup>* (in the case that *Y* is endowed with a topology) were studied in [28,29]. Now let us recall the definition of *E*-monotonicity of a functional.

**Definition 1.** *Let E* ∈ P(*Y*)*. A functional z* : *<sup>Y</sup>* <sup>→</sup> <sup>R</sup>¯ *is called E-monotone if*

$$y_1, y_2 \in Y:\ y_1 \in y_2 - E\ \Longrightarrow\ z(y_1) \leq z(y_2).$$

Below we provide some properties of the functional *zC*,*<sup>k</sup>* introduced in (1).

**Proposition 1** ([22])**.** *Let C and E be nonempty subsets of Y, and let k* ∈ *Y* \ {0}*. Then the following properties hold.*

*(a)* $\forall y \in Y:\ z^{C,k}(y) \leq 0 \iff y \in (-\infty, 0]k - \operatorname{vcl}_k C$.
*(b)* $\forall y \in Y:\ z^{C,k}(y) < 0 \iff y \in (-\infty, 0)k - \operatorname{vcl}_k C$.
*(c)* $z^{C,k}$ *is E-monotone if and only if* $E + C \subseteq [0, +\infty)k + \operatorname{vcl}_k C$.
*(d)* $\forall y \in Y,\ \forall r \in \mathbb{R}:\ z^{C,k}(y + rk) = z^{C,k}(y) + r$.

The set relations to be defined below rely on set inclusions where the set *C* is attached pointwise to the considered sets *A*, *B* ∈ P(*Y*). The following corollary relates *A* + *C* and *A* − *C* respectively by means of the functional *zC*,*<sup>k</sup>* in the case that *C* is a convex cone.

**Corollary 1** ([14], Corollary 2.3)**.** *Let C* ⊆ *Y be a convex cone, A* ∈ P(*Y*) *and k* ∈ *Y* \ {0}*. Then it holds*

$$\sup\_{a \in A} z^{\mathbb{C}, k}(a) = \sup\_{y \in A - \mathbb{C}} z^{\mathbb{C}, k}(y) \text{ and } \inf\_{a \in A} z^{\mathbb{C}, k}(a) = \inf\_{y \in A + \mathbb{C}} z^{\mathbb{C}, k}(y) \text{ .}$$

A well-known set relation is the upper set less order relation introduced by Kuroiwa [30,31]. We recall a generalized version of this relation here, where the underlying set *C* is not necessarily a convex cone and thus the resulting relation is not necessarily an order.

**Definition 2** (Upper Set Less Relation, [32])**.** *Let $C \subseteq Y$. The upper set less relation $\preceq^u_C$ is defined for two sets $A, B \in \mathcal{P}(Y)$ by*

$$A \preceq^u_C B :\Longleftrightarrow A \subseteq B - C.$$
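For families of finitely many points in $\mathbb{R}^n$ ordered by $C = \mathbb{R}^n_+$, the inclusion $A \subseteq B - C$ reduces to an elementary componentwise dominance test: every $a \in A$ must satisfy $a \leq b$ for some $b \in B$. A minimal sketch of this check (our own illustration of the definition, assuming this finite, finite-dimensional setting):

```python
def upper_set_less(A, B):
    """A ⪯^u_C B for C = R^n_+ and finite A, B given as tuples in R^n.

    A ⊆ B - C holds iff every a in A is componentwise dominated by
    some b in B (then a = b - (b - a) with b - a in C).
    """
    return all(
        any(all(ai <= bi for ai, bi in zip(a, b)) for b in B)
        for a in A
    )

A = [(0.0, 1.0), (1.0, 0.0)]
B = [(1.0, 1.0)]
assert upper_set_less(A, B)       # both points of A lie below (1, 1)
assert not upper_set_less(B, A)   # (1, 1) is dominated by no point of A
```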

The following theorem shows a first connection between the upper set less relation and the nonlinear scalarizing functional *zC*,*k*.

**Theorem 1** ([14], Theorem 3.2)**.** *Let C* ⊆ *Y be a convex cone, A*, *B* ∈ P(*Y*) *and k* ∈ *Y* \ {0}*. Then*

$$A \preceq\_{\mathbb{C}}^{\mathfrak{u}} B \implies \sup\_{a \in A} z^{\mathbb{C},k}(a) \le \sup\_{b \in B} z^{\mathbb{C},k}(b).$$

The converse implication in Theorem 1 is not generally fulfilled, even if the underlying sets are convex, see ([33], Example 3.2). However, we have the following result.

**Theorem 2** ([14], Theorem 3.3)**.** *Let C* ⊆ *Y. For two sets A*, *B* ∈ P(*Y*) *and k* ∈ *Y* \ {0}*, it holds*

$$A \preceq^u_C B \quad \implies \quad \sup_{a \in A} \inf_{b \in B} z^{C,k}(a - b) \leq 0.$$

*Assume on the other hand that there exists a <sup>k</sup>*<sup>0</sup> <sup>∈</sup> *<sup>Y</sup>* \ {0} *such that* inf*b*∈*<sup>B</sup> <sup>z</sup>C*,*k*<sup>0</sup> (*<sup>a</sup>* <sup>−</sup> *<sup>b</sup>*) *is attained for all a* ∈ *A, C is k*0*-vectorially closed and* [0, +∞)*k*<sup>0</sup> + *C* ⊆ *C. Then*

$$\sup_{a \in A} \inf_{b \in B} z^{C,k_0}(a - b) \leq 0 \quad \implies \quad A \preceq^u_C B.$$
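Theorem 2 becomes fully constructive for finite sets in $\mathbb{R}^2$ with $C = \mathbb{R}^2_+$ and $k_0 = (1, 1)$: then $z^{C,k_0}(v) = \max(v_1, v_2)$, the infima over finite $B$ are attained, $C$ is $k_0$-vectorially closed and $[0, +\infty)k_0 + C \subseteq C$, so both implications hold. A sketch (our own illustration under these assumptions) comparing the minimax criterion with the set inclusion:

```python
def z(v):
    # z^{C,k0}(v) = max(v_1, v_2) for C = R^2_+, k0 = (1, 1)
    return max(v)

def minimax(A, B):
    # sup_{a in A} inf_{b in B} z^{C,k0}(a - b); attained since A, B are finite
    return max(min(z(tuple(ai - bi for ai, bi in zip(a, b))) for b in B) for a in A)

def upper_set_less(A, B):
    # A ⊆ B - C: every a componentwise dominated by some b
    return all(any(all(ai <= bi for ai, bi in zip(a, b)) for b in B) for a in A)

A = [(0.0, 1.0), (1.0, 0.0)]
B = [(1.0, 1.0)]
# Theorem 2, both directions, on this data:
assert (minimax(A, B) <= 0) == upper_set_less(A, B)    # both hold
assert (minimax(B, A) <= 0) == upper_set_less(B, A)    # both fail
```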

**Remark 1.** *(1) Please note that for any $A, B \in \mathcal{P}(Y)$, the set relation $A \preceq^u_C B$ by Theorem 2 also implies $\sup_{k \in Y \setminus \{0\}} \sup_{a \in A} \inf_{b \in B} z^{C,k}(a - b) \leq 0$.*

*(2) Let $A, B \in \mathcal{P}(Y)$ and $C \subseteq Y$. If there exists an element $k_0 \in C \setminus \{0\}$ such that $\inf_{b \in B} z^{C,k_0}(a - b)$ is attained for all $a \in A$, $C$ is $k_0$-vectorially closed and $[0, +\infty)k_0 + C = C$, then it follows from Theorem 2 that*

$$A \preceq^u_C B \iff \sup_{a \in A} \inf_{b \in B} z^{C,k_0}(a - b) \leq 0 \iff \sup_{k \in Y \setminus \{0\}} \sup_{a \in A} \inf_{b \in B} z^{C,k}(a - b) \leq 0.$$

In the second part of Theorem 2, we need the assumption that there exists a $k_0 \in Y \setminus \{0\}$ such that $\inf_{b \in B} z^{C,k_0}(a - b)$ is attained for all $a \in A$. Sufficient conditions for such an attainment property, i.e., assertions concerning the existence of solutions of the corresponding optimization problems (extremal principles), are given in the literature. The well-known Weierstrass theorem says that a lower semicontinuous function on a nonempty weakly compact set in a reflexive Banach space has a minimum. An extension of this theorem is given by Zeidler ([34], Proposition 9.13): a proper lower semicontinuous and quasi-convex function on a nonempty closed bounded convex subset of a reflexive Banach space has a minimum. Since the functional $z^{C,k_0}$ is studied here in the context of real linear spaces that are not endowed with a particular topology, we cannot rely on continuity assumptions. Therefore, we propose the following theorem without any attainment property.

**Theorem 3** ([14], Theorem 3.6)**.** *Let $C \subseteq Y$, $A, B \in \mathcal{P}(Y)$ and $k_0 \in Y \setminus \{0\}$ be such that $(-\infty, 0)k_0 - \operatorname{vcl}_{k_0} C \subseteq -C$ and $\operatorname{vcl}_{-k_0}(B - C) \subseteq B - C$. Then*

$$\sup\_{a \in A} \inf\_{b \in B} z^{\mathbb{C}, k\_0} (a - b) \le 0 \quad \implies \quad A \preceq\_{\mathbb{C}}^u B.$$

We also consider the following set relation, which compares sets based on their lower bounds (compare [30,31] for the according definition for orders).

**Definition 3** (Lower Set Less Relation, [32])**.** *Let $C \subseteq Y$. The lower set less relation $\preceq^l_C$ is defined for two sets $A, B \in \mathcal{P}(Y)$ by*

$$A \preceq^l_C B :\Longleftrightarrow B \subseteq A + C.$$

Because $A \preceq^u_C B$ is equivalent to $-B \preceq^l_C -A$, we obtain the following corollaries from Theorems 1, 2 and 3.

**Corollary 2** ([14], Corollary 3.9)**.** *Let C* ⊆ *Y be a convex cone, A*, *B* ∈ P(*Y*) *and k* ∈ *Y* \ {0}*. Then*

$$A \preceq^l_C B \implies \inf_{a \in A} z^{C,k}(a) \le \inf_{b \in B} z^{C,k}(b).$$

**Corollary 3** ([14], Corollary 3.10)**.** *Let C* ⊆ *Y. For two sets A*, *B* ∈ P(*Y*) *and k* ∈ *Y* \ {0}*, it holds*

$$A \preceq^l_C B \quad \implies \quad \sup_{b \in B} \inf_{a \in A} z^{C,k}(a-b) \le 0.$$

*Assume on the other hand that there exists a $k^0 \in Y \setminus \{0\}$ such that $\inf_{a \in A} z^{C,k^0}(a-b)$ is attained for all $b \in B$, $C$ is $k^0$-vectorially closed and $[0,+\infty)k^0 + C \subseteq C$. Then*

$$\sup_{b \in B} \inf_{a \in A} z^{C,k^0}(a-b) \le 0 \quad \implies \quad A \preceq^l_C B.$$

**Corollary 4** ([14], Corollary 3.11)**.** *Let $C \subseteq Y$, $A, B \in \mathcal{P}(Y)$ and $k^0 \in Y \setminus \{0\}$ such that $(-\infty,0)k^0 - \operatorname{vcl}_{k^0} C \subseteq -C$ and $\operatorname{vcl}_{-k^0}(-A - C) \subseteq -A - C$. Then*

$$\sup_{b \in B} \inf_{a \in A} z^{C,k^0}(a-b) \le 0 \quad \implies \quad A \preceq^l_C B.$$

We also study the so-called *set less relation* (see [35,36] for the case where the underlying set *C* is a convex cone).

**Definition 4** (Set Less Relation, [32])**.** *Let $C \subseteq Y$. The set less relation $\preceq^s_C$ is defined for two sets $A, B \in \mathcal{P}(Y)$ by*

$$A \preceq^s_C B :\iff A \preceq^u_C B \text{ and } A \preceq^l_C B.$$

We immediately obtain the following results.

**Corollary 5** ([14], Corollary 3.13)**.** *Let C* ⊆ *Y be a convex cone, A*, *B* ∈ P(*Y*) *and k* ∈ *Y* \ {0}*. Then*

$$A \preceq^s_C B \implies \sup_{a \in A} z^{C,k}(a) \le \sup_{b \in B} z^{C,k}(b) \text{ and } \inf_{a \in A} z^{C,k}(a) \le \inf_{b \in B} z^{C,k}(b).$$

**Corollary 6** ([14], Corollary 3.14)**.** *Let C* ⊆ *Y. For two sets A*, *B* ∈ P(*Y*) *and k* ∈ *Y* \ {0}*, it holds*

$$A \preceq^s_C B \quad \implies \quad \sup_{a \in A} \inf_{b \in B} z^{C,k}(a-b) \le 0 \text{ and } \sup_{b \in B} \inf_{a \in A} z^{C,k}(a-b) \le 0.$$

*Assume on the other hand that there exists a $k^0 \in Y \setminus \{0\}$ such that $\inf_{b \in B} z^{C,k^0}(a-b)$ is attained for all $a \in A$, and there exists $k^1 \in Y \setminus \{0\}$ such that $\inf_{a \in A} z^{C,k^1}(a-b)$ is attained for all $b \in B$, $C$ is both $k^0$- and $k^1$-vectorially closed, $[0,+\infty)k^0 + C \subseteq C$ and $[0,+\infty)k^1 + C \subseteq C$. Then*

$$\sup_{a \in A} \inf_{b \in B} z^{C,k^0}(a-b) \le 0 \text{ and } \sup_{b \in B} \inf_{a \in A} z^{C,k^1}(a-b) \le 0 \quad \implies \quad A \preceq^s_C B.$$

**Corollary 7** ([14], Corollary 3.15)**.** *Let $C \subseteq Y$, $A, B \in \mathcal{P}(Y)$ and $k^0, k^1 \in Y \setminus \{0\}$ such that $(-\infty,0)k^0 - \operatorname{vcl}_{k^0} C \subseteq -C$, $(-\infty,0)k^1 - \operatorname{vcl}_{k^1} C \subseteq -C$, $\operatorname{vcl}_{-k^0}(B - C) \subseteq B - C$ and $\operatorname{vcl}_{-k^1}(-A - C) \subseteq -A - C$. Then*

$$\sup_{a \in A} \inf_{b \in B} z^{C,k^0}(a-b) \le 0 \text{ and } \sup_{b \in B} \inf_{a \in A} z^{C,k^1}(a-b) \le 0 \quad \implies \quad A \preceq^s_C B.$$

#### **3. Approximate Minimal Elements of Set Optimization Problems**

The following definition describes minimality in the setting of a family of sets (see ([5], Definition 2.6.19) for the corresponding definition for preorders).

**Definition 5** (Minimal Elements)**.** *Let $\mathcal{A}$ be a family of elements of $\mathcal{P}(Y)$. $\overline{A} \in \mathcal{A}$ is called a minimal element of $\mathcal{A}$ w. r. t. $\preceq$ if*

$$A \preceq \overline{A},\ A \in \mathcal{A} \quad \implies \quad \overline{A} \preceq A.$$

*The set of all minimal elements of $\mathcal{A}$ w. r. t. $\preceq$ will be denoted by $\mathcal{A}_{\min}$.*

Please note that if the elements of $\mathcal{A}$ are singletons and $A \preceq \overline{A} :\iff A \in \overline{A} - C$ with $C \subseteq Y$ being a convex cone, then Definition 5 reduces to the standard notion of minimality in vector optimization (compare, for example, ([15], Definition 4.1)). From vector optimization, it is well known that the existence of minimal elements can usually only be guaranteed under additional assumptions (for an existence result of minimal elements in set optimization, see, for example, [37]). Since the set $\mathcal{A}_{\min}$ may be empty, it is common practice to use a weaker notion of minimality, so-called approximate minimality. For this reason, we extend three notions of approximate minimality that were originally introduced in [38]. In [38], the following definitions are given for $\preceq \,=\, \preceq^l_C$ (see Definition 3). In order to stay as general as possible, we define approximate minimality using set relations that are not required to possess any ordering structure.

**Definition 6.** *Let $\mathcal{A}$ be a family of elements of $\mathcal{P}(Y)$, $H \in \mathcal{P}(Y)$, $H \neq Y$, and let $\preceq$ be a binary relation on $\mathcal{A}$.*

*(a) $\overline{A} \in \mathcal{A}$ is called an $H_1$–approximate minimal element of $\mathcal{A}$ w. r. t. $\preceq$ if*

*A* " *A*, *A* ∈ A =⇒ *A* " *A* + *H* .

*(b) $\overline{A} \in \mathcal{A}$ is called an $H_2$–approximate minimal element of $\mathcal{A}$ w. r. t. $\preceq$ if*

$$A + H \preceq \overline{A},\ A \in \mathcal{A} \quad \implies \quad \overline{A} \preceq A + H.$$

*(c) $\overline{A} \in \mathcal{A}$ is called an $H_3$–approximate minimal element of $\mathcal{A}$ w. r. t. $\preceq$ if $A + H \not\preceq \overline{A}$ for all $A \in \mathcal{A} \setminus \{\overline{A}\}$. The set of all $H_i$–approximate minimal elements of $\mathcal{A}$ w. r. t. $\preceq$ ($i = 1,2,3$) will be denoted by $\mathcal{A}_{H_i}$.*
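For a family with finitely many members, the three notions in Definition 6 can be tested directly. A minimal sketch (our own illustration: `rel` is an arbitrary set relation supplied as a callable, and `add_H` realizes the shift $A \mapsto A + H$; both names are ours):

```python
# Illustrative checks of Definition 6 for a finite family (assumed helpers:
# rel(A, B) decides A ⪯ B, add_H(A) returns A + H).

def is_H1_minimal(Abar, family, rel, add_H):
    """(a): A ⪯ Abar for some A in the family forces Abar ⪯ A + H."""
    return all(rel(Abar, add_H(A)) for A in family if rel(A, Abar))

def is_H2_minimal(Abar, family, rel, add_H):
    """(b): A + H ⪯ Abar forces Abar ⪯ A + H."""
    return all(rel(Abar, add_H(A)) for A in family if rel(add_H(A), Abar))

def is_H3_minimal(Abar, family, rel, add_H):
    """(c): A + H ⪯̸ Abar for all A in the family other than Abar."""
    return all(not rel(add_H(A), Abar) for A in family if A is not Abar)
```

With one-dimensional "sets" $\{a\}$, `rel = lambda a, b: a <= b` and $H = \{0.5\}$, the element $1.0$ of the family $\{0.0, 0.3, 1.0\}$ is neither $H_1$- nor $H_3$-approximate minimal, while $0.0$ is.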

Please note that Definition 6 (a) is a natural formulation for approximate minimality, while Definition 6 (b) is derived from the standard notion of approximate efficiency for vector-valued maps (see ([38], Remark 2.5)). Definition 6 (c) represents an approximate version of the well-known nondomination concept of vector optimization.

Here we consider a set-valued optimization problem in the following setting: let $S \subseteq \mathbb{R}^n$, a set-valued mapping $F : S \rightrightarrows Y$ and a set relation $\preceq$ be given. We are looking for **approximate minimal elements** w. r. t. the order relation $\preceq$ in the sense of Definition 6 of the problem

$$\min\_{\mathbf{x}\in S} F(\mathbf{x})\,. \tag{2}$$

We say that $\bar{x} \in S$ is an $H_i$**–approximate minimal solution** ($i = 1,2,3$) of (2) w. r. t. $\preceq$ if $F(\bar{x})$ is an $H_i$–approximate minimal element of the family of sets $F(x)$, $x \in S$, w. r. t. $\preceq$. The family of sets $F(x)$, $x \in S$, is denoted by $\mathcal{A}$.

Now we will present characterizations of approximate minimal solutions of (2) w. r. t. $\preceq$. In what follows, we will use the following notation. For some $\bar{x} \in S$, let us denote

$$[F(\bar{x})]^{H_1}_{\preceq} := \{x \in S \mid F(x) \preceq F(\bar{x}),\ F(\bar{x}) \preceq F(x) + H\}$$

and

$$[F(\bar{x})]^{H_2}_{\preceq} := \{x \in S \mid F(x) + H \preceq F(\bar{x}),\ F(\bar{x}) \preceq F(x) + H\}.$$

The following proposition will be useful in the theorem below.

**Proposition 2.** *$\bar{x} \in S$ is an $H_1$-approximate minimal solution of the problem (2) w. r. t. $\preceq$ if and only if for any $x \in S \setminus [F(\bar{x})]^{H_1}_{\preceq}$, we have $F(x) \not\preceq F(\bar{x})$.*

**Proof.** First note that $x \in S \setminus [F(\bar{x})]^{H_1}_{\preceq}$ means that $x \in S$ such that $F(x) \not\preceq F(\bar{x})$ or $F(\bar{x}) \not\preceq F(x) + H$. Let $\bar{x} \in S$ be an $H_1$-approximate minimal solution of the problem (2) w. r. t. $\preceq$. Then we must consider two cases:

**Case 1:** For $x \in S$ and $F(x) \not\preceq F(\bar{x})$, there is nothing left to show.

**Case 2:** For $x \in S$ and $F(\bar{x}) \not\preceq F(x) + H$, we obtain $F(x) \not\preceq F(\bar{x})$ by the contraposition of $\bar{x}$'s $H_1$-approximate minimality, as desired.

Conversely, assume that $F(x) \not\preceq F(\bar{x})$ holds true for all $x \in S \setminus [F(\bar{x})]^{H_1}_{\preceq}$. Suppose, by contradiction, that $\bar{x}$ is not an $H_1$-approximate minimal solution of the problem (2) w. r. t. $\preceq$. This implies the existence of some $x \in S$ with the properties $F(x) \preceq F(\bar{x})$ and $F(\bar{x}) \not\preceq F(x) + H$. Such an $x$ belongs to $S \setminus [F(\bar{x})]^{H_1}_{\preceq}$, in contradiction to the assumption.

Now we consider a functional $g^{H_1} : S \times S \to \mathbb{R} \cup \{\pm\infty\}$ with the property

$$\forall\, x, \bar{x} \in S: \quad g^{H_1}(x,\bar{x}) \le 0 \quad \iff \quad F(x) \preceq F(\bar{x}).$$

Then we have the following characterization of $H_1$-approximate minimal solutions of the problem (2) w. r. t. $\preceq$.

**Theorem 4.** *$\bar{x} \in S$ is an $H_1$-approximate minimal solution of the problem (2) w. r. t. $\preceq$ if and only if the following system* (*in the unknown $x$*)

$$g^{H_1}(x,\bar{x}) \le 0,\ x \in S \setminus [F(\bar{x})]^{H_1}_{\preceq}$$

*is impossible.*

**Proof.** First note that due to Proposition 2, $\bar{x} \in S$ is an $H_1$-approximate minimal solution of the problem (2) w. r. t. $\preceq$ if and only if for all $x \in S \setminus [F(\bar{x})]^{H_1}_{\preceq}$, we have $F(x) \not\preceq F(\bar{x})$. Furthermore, we have

$$\begin{aligned}
& g^{H_1}(x,\bar{x}) \le 0,\ x \in S \setminus [F(\bar{x})]^{H_1}_{\preceq}\ \text{ is impossible}\\
\iff\ & \nexists\, x \in S \setminus [F(\bar{x})]^{H_1}_{\preceq}:\ g^{H_1}(x,\bar{x}) \le 0\\
\iff\ & \forall\, x \in S \setminus [F(\bar{x})]^{H_1}_{\preceq}:\ g^{H_1}(x,\bar{x}) > 0\\
\iff\ & \forall\, x \in S \setminus [F(\bar{x})]^{H_1}_{\preceq}:\ F(x) \not\preceq F(\bar{x}).
\end{aligned}$$

In a similar manner as Proposition 2 and Theorem 4, one can verify the following results. For this, we assume that we are given a functional $g^{H_2} : S \times S \to \mathbb{R} \cup \{\pm\infty\}$ with the property

$$\forall\, x, \bar{x} \in S: \quad g^{H_2}(x,\bar{x}) \le 0 \quad \iff \quad F(x) + H \preceq F(\bar{x}).$$

**Proposition 3.** *$\bar{x} \in S$ is an $H_2$-approximate minimal solution of the problem (2) w. r. t. $\preceq$ if and only if for any $x \in S \setminus [F(\bar{x})]^{H_2}_{\preceq}$, we have $F(x) + H \not\preceq F(\bar{x})$.*

**Theorem 5.** *$\bar{x} \in S$ is an $H_2$-approximate minimal solution of the problem (2) w. r. t. $\preceq$ if and only if the following system (in the unknown $x$)*

$$g^{H_2}(x,\bar{x}) \le 0,\ x \in S \setminus [F(\bar{x})]^{H_2}_{\preceq}$$

*is impossible.*

Let us now consider problem (2) with the set relation $\preceq \,=\, \preceq^u_C$. Motivated by Theorem 3 and Corollary 4 above, we consider the functionals $g^{H_i}_u : S \times S \to \mathbb{R} \cup \{\pm\infty\}$ ($i = 1,2$) defined by

$$\begin{aligned}
g^{H_1}_u(x,\bar{x}) &:= \sup_{y \in F(x)} \inf_{\bar{y} \in F(\bar{x})} z^{C,k}(y - \bar{y}),\\
g^{H_2}_u(x,\bar{x}) &:= \sup_{y \in F(x) + H} \inf_{\bar{y} \in F(\bar{x})} z^{C,k}(y - \bar{y}).
\end{aligned}$$
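For the special case $Y = \mathbb{R}^n$, $C = \mathbb{R}^n_+$ and $k = (1,\dots,1)^T$, the functional $z^{C,k}(y) = \inf\{t \in \mathbb{R} : y \in tk - C\}$ has the closed form $z^{C,k}(y) = \max_i y_i$, so for finite images $F(x)$ the functionals above become finite sup-inf expressions. A sketch under these assumptions (illustrative only; the names are ours):

```python
# Illustrative sketch for C = R^n_+ and k = (1, ..., 1): then
# z^{C,k}(y) = inf{t : y <= t*(1,...,1)} = max_i y_i.

def z(y):
    """Scalarizing functional z^{C,k} for C = R^n_+, k = (1, ..., 1)."""
    return max(y)

def g_u_H1(Fx, Fxbar):
    """g_u^{H_1} = sup_{y in F(x)} inf_{ybar in F(xbar)} z^{C,k}(y - ybar)."""
    return max(
        min(z(tuple(yi - ybi for yi, ybi in zip(y, yb))) for yb in Fxbar)
        for y in Fx
    )
```

For instance, `g_u_H1([(0, 0)], [(1, 1)])` evaluates to $-1 \le 0$, consistent with $\{(0,0)\} \preceq^u_C \{(1,1)\}$; the $H_2$-variant is obtained by passing the shifted image $F(x) + H$ as the first argument.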

**Assumption 1.** *For $C \subseteq Y$, $k \in Y \setminus \{0\}$, and $\bar{x} \in S$ we assume that*

	- *(a-$H_i$) $C$ is $k$-vectorially closed, $[0,+\infty)k + C \subseteq C$, and for all $x \in S$, $\inf_{\bar{y} \in F(\bar{x})} z^{C,k}(y - \bar{y})$ is attained for all $y \in F(x)$ (for all $y \in F(x) + H$ if $i = 2$);*
	- *(b)* $(-\infty,0)k - \operatorname{vcl}_k C \subseteq -C$ *and* $\operatorname{vcl}_{-k}(F(\bar{x}) - C) \subseteq F(\bar{x}) - C$*.*

We next present a sufficient and necessary condition for $H_i$-approximate minimal solutions of the problem (2) w. r. t. the relation $\preceq^u_C$.

**Corollary 8.** *Let Assumption 1 (a-$H_i$) or (b) be satisfied. Then $\bar{x} \in S$ is an $H_i$-approximate minimal solution ($i = 1,2$) of the problem (2) w. r. t. $\preceq^u_C$ if and only if the following system* (*in the unknown $x$*)

$$g^{H_i}_u(x,\bar{x}) \le 0,\ x \in S \setminus [F(\bar{x})]^{H_i}_{\preceq^u_C}$$

*is impossible.*

**Proof.** The proof follows by Theorems 2, 3, 4 and 5.

Furthermore, let us consider problem (2) with $\preceq \,=\, \preceq^l_C$. We define the functions $g^{H_i}_l : S \times S \to \mathbb{R} \cup \{\pm\infty\}$ for $i = 1,2$ by

$$\begin{aligned}
g^{H_1}_l(x,\bar{x}) &:= \sup_{\bar{y} \in F(\bar{x})} \inf_{y \in F(x)} z^{C,k}(y - \bar{y}),\\
g^{H_2}_l(x,\bar{x}) &:= \sup_{\bar{y} \in F(\bar{x})} \inf_{y \in F(x) + H} z^{C,k}(y - \bar{y}).
\end{aligned}$$

**Assumption 2.** *For $C \subseteq Y$, $k \in Y \setminus \{0\}$, and $\bar{x} \in S$ we assume that*

	- *(a-$H_i$) $C$ is $k$-vectorially closed, $[0,+\infty)k + C \subseteq C$, and for all $x \in S$, $\inf_{y \in F(x)} z^{C,k}(y - \bar{y})$ is attained for all $\bar{y} \in F(\bar{x})$ ($\inf_{y \in F(x)+H} z^{C,k}(y - \bar{y})$ is attained for all $\bar{y} \in F(\bar{x})$ if $i = 2$);*
	- *(b)* $(-\infty,0)k - \operatorname{vcl}_k C \subseteq -C$ *and for all* $x \in S$: $\operatorname{vcl}_{-k}(-F(x) - C) = -F(x) - C$*.*

In the following, we present a sufficient and necessary condition for $H_i$-approximate minimal solutions of the problem (2) w. r. t. $\preceq^l_C$.

**Corollary 9.** *Let Assumption 2 (a-$H_i$) or (b) be satisfied. Then $\bar{x}$ is an $H_i$-approximate minimal solution ($i = 1,2$) of the problem (2) w. r. t. $\preceq^l_C$ if and only if the following system (in the unknown $x$)*

$$g^{H_i}_l(x,\bar{x}) \le 0,\ x \in S \setminus [F(\bar{x})]^{H_i}_{\preceq^l_C}$$

*is impossible.*

**Proof.** The proof follows by Corollaries 3 and 4 as well as Theorems 4 and 5.

Finally, we have the following result for $H_i$-approximate minimal solutions of the problem (2) w. r. t. $\preceq^s_C$.

**Corollary 10.** *Let $i \in \{1,2\}$ and suppose that Assumptions 1 (a-$H_i$) and 2 (a-$H_i$) or Assumptions 1 (b) and 2 (b) are satisfied for the same $k \in Y \setminus \{0\}$. Then $\bar{x}$ is an $H_i$-approximate minimal solution of the problem (2) w. r. t. $\preceq^s_C$ if and only if the following system* (*in the unknown $x$*)*:*

$$g^{H_i}_u(x,\bar{x}) \le 0 \text{ and } g^{H_i}_l(x,\bar{x}) \le 0,\ x \in S \setminus \left([F(\bar{x})]^{H_i}_{\preceq^u_C} \cup [F(\bar{x})]^{H_i}_{\preceq^l_C}\right)$$

*is impossible.*

#### **4. Numerical Procedure for Computing** *H<sup>i</sup>* **-Approximate Minimal Elements of a Family of Finitely Many Elements**

Finding $H_i$-approximate minimal elements of a family of finitely many elements of $\mathcal{P}(Y)$ is an important task. A first approach to deriving and implementing numerical methods for obtaining $H_i$-approximate minimal elements has been presented in [38] for the lower set less relation $\preceq^l_C$. The assumption that the given family is finite is oftentimes not a restriction, as many continuous set optimization problems can be appropriately discretized; see the discussion in [39] and the theoretical investigations for linear programs [40] as well as the numerical studies in [41]. In this section, we propose numerical methods for obtaining approximate minimal elements as proposed in Definition 6 for general set relations under suitable assumptions.

Please note that the following algorithms can be found in [38] for the specific case that the set relation is equal to $\preceq^l_C$. We present them here for general set relations $\preceq$. The following algorithm is an extension of the so-called *Graef–Younes method* [42,43] and is useful for sorting out elements which do not belong to the set of $H_i$–approximate minimal elements.

**Algorithm 1:** (Method for sorting out elements of a family of finitely many sets which are not *H*1- (*H*2-, *H*3-, respectively) approximate minimal elements).


*Input:* $\mathcal{A} := \{A_1, \dots, A_m\}$, set relation $\preceq$, $H \in \mathcal{P}(Y)$
% initialization
$\mathcal{T} := \{A_1\}$
% iteration loop
**for** $j = 2:1:m$ **do**
**if** $\big(A \preceq A_j,\ A \in \mathcal{T} \implies A_j \preceq A + H\big)$
$\big(A + H \preceq A_j,\ A \in \mathcal{T} \implies A_j \preceq A + H\big)$, respectively
$\big(A + H \not\preceq A_j,\ A \in \mathcal{T}\big)$, respectively
**then** $\mathcal{T} := \mathcal{T} \cup \{A_j\}$
**end if**
**end for**
*Output:* $\mathcal{T}$
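In executable terms, the loop only needs a predicate `condition(A, Aj)` encoding the bracketed if-test of the chosen $H_i$-variant; for $H_3$, for example, `condition(A, Aj)` would be `not rel(add_H(A), Aj)` with hypothetical helpers `rel` and `add_H` as before. A sketch of this forward sorting step (our illustration, not the authors' code):

```python
# Illustrative sketch of Algorithm 1 (Graef-Younes forward sorting):
# keep A_j whenever the if-condition holds against every set kept so far.

def graef_younes_forward(family, condition):
    kept = [family[0]]                      # T := {A_1}
    for Aj in family[1:]:                   # for j = 2, ..., m
        if all(condition(A, Aj) for A in kept):
            kept.append(Aj)                 # T := T ∪ {A_j}
    return kept                             # output T
```

With one-dimensional sets, $H = \{0.5\}$ and the $H_3$-test `lambda A, Aj: not (A + 0.5 <= Aj)`, the family `[0.0, 0.3, 1.0, 0.1]` is reduced to `[0.0, 0.3, 0.1]`.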


**Theorem 6.** *1. Algorithm 1 is well-defined.*
*2. $\mathcal{T} \subseteq \mathcal{A}$ and $\mathcal{T} \neq \emptyset$.*
*3. Every $H_1$- ($H_2$-, $H_3$-, respectively) approximate minimal element of $\mathcal{A}$ is contained in $\mathcal{T}$.*


**Proof.** Statements 1 and 2 are easily checked (we loop over a finite number of elements, all the necessary comparisons are well-defined, and after the first step the set $\mathcal{T}$ already consists of an element), and therefore their proofs are omitted. Now let $A_j$ be an $H_1$- ($H_2$-, $H_3$-, respectively) approximate minimal element of $\mathcal{A}$. Then we have

$$\begin{aligned}
A \preceq A_j,\ A \in \mathcal{A} &\implies A_j \preceq A + H\\
(A + H \preceq A_j,\ A \in \mathcal{A} &\implies A_j \preceq A + H, \text{ respectively}),\\
(A + H &\not\preceq A_j,\ A \in \mathcal{A} \setminus \{A_j\}, \text{ respectively}).
\end{aligned}$$

Because of $\mathcal{T} \subseteq \mathcal{A}$, by the above implications we directly obtain

$$\begin{aligned}
A \preceq A_j,\ A \in \mathcal{T} &\implies A_j \preceq A + H\\
(A + H \preceq A_j,\ A \in \mathcal{T} &\implies A_j \preceq A + H, \text{ respectively}),\\
(A + H &\not\preceq A_j,\ A \in \mathcal{T}, \text{ respectively}),
\end{aligned}$$

which verifies that the if-condition in Algorithm 1 is satisfied and $A_j$ is added to $\mathcal{T}$.

The application of Algorithm 1 only yields a smaller set $\mathcal{T}$ containing all the approximate minimal elements of the original family of sets. To filter out solely the approximate minimal elements, another step is required, which we handle in the following algorithm:

**Algorithm 2:** (Method for finding *H*1- (*H*2-, *H*3-, respectively) approximate minimal elements of a family A of finitely many sets).

*Input:* $\mathcal{A} := \{A_1, \dots, A_m\}$, set relation $\preceq$, $H \in \mathcal{P}(Y)$
% initialization
$\mathcal{T} := \{A_1\}$
% forward iteration loop
**for** $j = 2:1:m$ **do**
**if** $\big(A \preceq A_j,\ A \in \mathcal{T} \implies A_j \preceq A + H\big)$
$\big(A + H \preceq A_j,\ A \in \mathcal{T} \implies A_j \preceq A + H\big)$, respectively
$\big(A + H \not\preceq A_j,\ A \in \mathcal{T}\big)$, respectively
**then** $\mathcal{T} := \mathcal{T} \cup \{A_j\}$
**end if**
**end for**
$\{A_1, \dots, A_p\} := \mathcal{T}$
$\mathcal{U} := \{A_p\}$
% backward iteration loop
**for** $j = p-1 : -1 : 1$ **do**
**if** $\big(A \preceq A_j,\ A \in \mathcal{U} \implies A_j \preceq A + H\big)$
$\big(A + H \preceq A_j,\ A \in \mathcal{U} \implies A_j \preceq A + H\big)$, respectively
$\big(A + H \not\preceq A_j,\ A \in \mathcal{U}\big)$, respectively
**then** $\mathcal{U} := \mathcal{U} \cup \{A_j\}$
**end if**
**end for**
*Output:* $\mathcal{U}$
$\{A_1, \dots, A_q\} := \mathcal{U}$
$\mathcal{V} := \emptyset$
% final comparison
**for** $j = 1:1:q$ **do**
**if** $\big(A \preceq A_j,\ A \in \mathcal{A} \setminus \mathcal{U} \implies A_j \preceq A + H\big)$
$\big(A + H \preceq A_j,\ A \in \mathcal{A} \setminus \mathcal{U} \implies A_j \preceq A + H\big)$, respectively
$\big(A + H \not\preceq A_j,\ A \in \mathcal{A} \setminus \mathcal{U}\big)$, respectively
**then** $\mathcal{V} := \mathcal{V} \cup \{A_j\}$
**end if**
**end for**
*Output:* $\mathcal{V}$
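The three loops can be sketched compactly by reusing the forward sweep: the backward iteration is a forward sweep over the reversed candidate list, and the final comparison tests the survivors against the discarded sets $\mathcal{A} \setminus \mathcal{U}$. Again an illustration only, with the same generic `condition(A, Aj)` predicate as for Algorithm 1:

```python
# Illustrative sketch of Algorithm 2 (forward-backward method with final
# comparison); condition(A, Aj) encodes the bracketed if-test.

def graef_younes_forward_backward(family, condition):
    def sweep(candidates):                  # one Graef-Younes pass
        kept = [candidates[0]]
        for Aj in candidates[1:]:
            if all(condition(A, Aj) for A in kept):
                kept.append(Aj)
        return kept

    T = sweep(family)                       # forward iteration loop
    U = sweep(list(reversed(T)))            # backward iteration loop
    rest = [A for A in family if A not in U]            # A \ U
    V = [Aj for Aj in U                                 # final comparison
         if all(condition(A, Aj) for A in rest)]
    return U, V
```

Continuing the one-dimensional toy data from before ($H = \{0.5\}$, $H_3$-test), the family `[0.0, 0.3, 1.0, 0.1]` yields $\mathcal{U} = \mathcal{V} = \{0.1, 0.3, 0.0\}$, i.e., exactly the $H_3$-approximate minimal elements.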

**Remark 3.** *1. Again, for determining whether the implications in the definition of minimality are fulfilled, one must loop over the elements of the sets $\mathcal{T}$, $\mathcal{U}$ and $\mathcal{A} \setminus \mathcal{U}$, respectively.*


*2. Please note that we formulated Algorithm 2 to have two outputs, $\mathcal{U}$ and $\mathcal{V}$. For practical purposes, it would suffice to use $\mathcal{V}$, which in fact contains all the approximate minimal elements and no more. However, the theoretical investigations below show that the set $\mathcal{U}$ is in its own right interesting enough to be examined further.*

*Mathematics* **2020**, *8*, 143

We start the investigation of the above algorithms with the (arguably simplest) case of $H_3$-approximate minimality. The following result shows that every element of the set $\mathcal{U}$ is an $H_3$-approximate minimal element of $\mathcal{U}$ w. r. t. $\preceq$ (but not necessarily an $H_3$-approximate minimal element of the set $\mathcal{A}$).

**Lemma 1.** *Every element of $\mathcal{U}$ generated by Algorithm 2 after the backward iteration is also an $H_3$-approximate minimal element of $\mathcal{U}$ w. r. t. $\preceq$.*

**Proof.** Let $A_j \in \mathcal{U} = \{A_1, \dots, A_q\}$. By the forward iteration, we obtain

$$\forall\, i < j\ (i \ge 1):\ A_i + H \not\preceq A_j.$$

The backward iteration yields

$$\forall\, i > j\ (i \le q):\ A_i + H \not\preceq A_j.$$

This means that

$$\forall\, i \neq j\ (1 \le i \le q):\ A_i + H \not\preceq A_j,$$

which is equivalent to

$$\forall\, A_i \in \mathcal{U} \setminus \{A_j\}:\ A_i + H \not\preceq A_j.$$

This is the definition of an $H_3$-approximate minimal element of $\mathcal{U}$ w. r. t. $\preceq$.

**Theorem 7.** *Algorithm 2 generates exactly all $H_3$-approximate minimal elements of $\mathcal{A}$ w. r. t. $\preceq$ within the set $\mathcal{V}$.*

**Proof.** Let $A_j$ be an arbitrary element of $\mathcal{V}$. Then $A_j \in \mathcal{U}$, as $\mathcal{V} \subseteq \mathcal{U}$, and due to the third if-statement in Algorithm 2,

$$A + H \not\preceq A_j,\ A \in \mathcal{A} \setminus \mathcal{U}. \tag{3}$$

Suppose that $A_j$ is not $H_3$-approximate minimal in $\mathcal{A}$. Then there exists some $A \in \mathcal{A} \setminus \{A_j\}$ such that

$$A + H \preceq A_j. \tag{4}$$

If $A \notin \mathcal{U}$, then this is a contradiction to (3). If $A \in \mathcal{U}$, then due to the $H_3$-approximate minimality of $A_j$ in $\mathcal{U}$ (see Lemma 1), we obtain $A + H \not\preceq A_j$, a contradiction to (4).

Conversely, let $A_j$ be $H_3$-approximate minimal in $\mathcal{A}$. This means, by definition, that

$$A + H \not\preceq A_j,\ A \in \mathcal{A} \setminus \{A_j\}.$$

Now let us assume, by contradiction, that $A_j \notin \mathcal{V}$. Then there exists some $A \in \mathcal{A} \setminus \mathcal{U}$ with $A + H \preceq A_j$, a contradiction.

To obtain similar results as in Lemma 1 and Theorem 7 for $H_1$- ($H_2$-, respectively) approximate minimal elements of $\mathcal{U}$ w. r. t. $\preceq$, we need the following assumptions.

**Assumption 3.** *Suppose that one of the following conditions holds:*

*1. The set relation $\preceq$ is irreflexive.*

*2. The set relation $\preceq$ is reflexive and for every $A \in \mathcal{A}$, $A \preceq A + H$.*

**Assumption 4.** *Suppose that for all $A \in \mathcal{A}$, we have $A + H \not\preceq A$ or $A \preceq A + H$.*

Below we give some examples of set relations that fulfill the above assumptions.

**Example 1.** *1. Consider the* certainly less relation*, which is defined as (see ([32], Definition 3.12))*

*<sup>A</sup>* "*cert <sup>C</sup> B* ⇐⇒ ∀ *a* ∈ *A*, ∀ *b* ∈ *B* : *a* ∈ *b* − *C*,

*where $C \in \mathcal{P}(Y)$. Then $\preceq^{\mathrm{cert}}_C$ is irreflexive if $C$ is pointed, i.e., $C \cap (-C) = \emptyset$ (hence, $0 \notin C$).*
*2. Let us recall the* possibly less relation*, given as (compare [32,37,44])*

$$A \preceq^{p}_C B \iff \exists\, a \in A,\ \exists\, b \in B:\ a \in b - C,$$

*where $C \in \mathcal{P}(Y)$ such that $0 \in C$. Then $\preceq^{p}_C$ is reflexive. If $C$ is a convex cone with $H \subseteq C$, then $A \preceq^{p}_C A + H$ for all $A \in \mathcal{A}$.*

*3. If $C$ is a convex cone with $0 \in C$ and $H \subseteq -C$, then $A \preceq^{u}_C A + H$ holds true for all $A \in \mathcal{A}$.*

**Lemma 2.** *Let Assumption 3 (Assumption 4, respectively) be fulfilled. Then every element of $\mathcal{U}$ generated by Algorithm 2 is also an $H_1$- ($H_2$-, respectively) approximate minimal element of $\mathcal{U}$ w. r. t. $\preceq$.*

**Proof.** Let $A_j \in \mathcal{U} = \{A_1, \dots, A_q\}$. By the forward iteration, we obtain

$$\forall\, i < j\ (i \ge 1):\ A_i \preceq A_j \quad \implies \quad A_j \preceq A_i + H, \tag{5}$$

$$\left(\forall\, i < j\ (i \ge 1):\ A_i + H \preceq A_j \quad \implies \quad A_j \preceq A_i + H, \text{ respectively}\right). \tag{6}$$

The backward iteration yields (5) ((6), respectively) for every *i* > *j* (*i* ≤ *q*). Together, this means that

$$\forall\, i \neq j:\ A_i \preceq A_j \quad \implies \quad A_j \preceq A_i + H, \tag{7}$$

$$\left(\forall\, i \neq j:\ A_i + H \preceq A_j \quad \implies \quad A_j \preceq A_i + H, \text{ respectively}\right). \tag{8}$$

Due to Assumption 3, the set relation is either irreflexive or reflexive with $A \preceq A + H$ for every $A \in \mathcal{A}$; hence, (7) is equivalent to the implication given in Definition 6 (a), and $A_j$ is an $H_1$–approximate minimal element of $\mathcal{U}$ w. r. t. $\preceq$. Similarly, according to Assumption 4, for all $A \in \mathcal{A}$ it holds that $A + H \not\preceq A$ or $A \preceq A + H$. With this in mind, the implication (8) coincides with Definition 6 (b), and hence $A_j \in \mathcal{U}_{H_2}$.

**Theorem 8.** *Let Assumption 3 (Assumption 4, respectively) be fulfilled. Then Algorithm 2 generates exactly all $H_1$- ($H_2$-, respectively) approximate minimal elements of $\mathcal{A}$ w. r. t. $\preceq$ within the set $\mathcal{V}$.*

**Proof.** Let $A_j$ be an arbitrary element of $\mathcal{V}$. Then $A_j \in \mathcal{U}$, as $\mathcal{V} \subseteq \mathcal{U}$, and due to the third if-statement in Algorithm 2,

$$A \preceq A_j,\ A \in \mathcal{A} \setminus \mathcal{U} \quad \implies \quad A_j \preceq A + H, \tag{9}$$

$$\left(A + H \preceq A_j,\ A \in \mathcal{A} \setminus \mathcal{U} \quad \implies \quad A_j \preceq A + H, \text{ respectively}\right). \tag{10}$$

Suppose that $A_j$ is not $H_1$- ($H_2$-, respectively) approximate minimal in $\mathcal{A}$. Then there exists some $A \in \mathcal{A}$ such that

$$A \preceq A_j \text{ and } A_j \not\preceq A + H, \tag{11}$$

$$\left(A + H \preceq A_j \text{ and } A_j \not\preceq A + H, \text{ respectively}\right). \tag{12}$$

If $A \notin \mathcal{U}$, then this is a contradiction to (9) ((10), respectively). If $A \in \mathcal{U}$, then $A_j \preceq A + H$, as $A_j$ is $H_1$- ($H_2$-, respectively) approximate minimal in $\mathcal{U}$ according to Lemma 2. But this contradicts (11) ((12), respectively).

Conversely, let $A_j$ be an $H_1$- ($H_2$-, respectively) approximate minimal element of the set $\mathcal{A}$, i.e.,

$$\begin{aligned}
A \preceq A_j,\ A \in \mathcal{A} &\implies A_j \preceq A + H,\\
\left(A + H \preceq A_j,\ A \in \mathcal{A} &\implies A_j \preceq A + H, \text{ respectively}\right).
\end{aligned} \tag{13}$$

Now let us assume, by contradiction, that $A_j \notin \mathcal{V}$. Then there exists some $A \in \mathcal{A} \setminus \mathcal{U}$ with $A \preceq A_j$ ($A + H \preceq A_j$, respectively), but $A_j \not\preceq A + H$, a contradiction to (13).

To illustrate the algorithms, we apply the forward and backward iterations to a rather academic example in $\mathbb{R}^2$. Note, however, that their (even computerized) application is not limited to such finite-dimensional structures, as the algorithms are based on elementary finite iteration loops. So, once a way has been established to numerically verify the relation $A \preceq B$ for two sets $A$ and $B$ out of a certain family of sets, the algorithms can be applied directly. For the case of polyhedral sets, such a comparison principle has, for example, been established in [45], and similar computational approaches were developed in [46].

**Example 2.** *For this example, let $C := \mathbb{R}^2_+$, $\preceq \,:=\, \preceq^{\mathrm{cert}}_C$ and $H = \{(1,1)^T\}$. As the family of sets $\mathcal{A}$, we have randomly computed 1000 sets; for easy comparison, each set is a ball of radius one in $\mathbb{R}^2$. We are interested in the $H_2$-approximate minimal elements of the set $\mathcal{A}$ and make use of Algorithm 2 to obtain those. Notice that Assumption 4 is trivially fulfilled. Out of the 1000 sets, a total number of 177 are $H_2$-approximate minimal w. r. t. $\preceq$. Algorithm 2 generates at first 189 sets in $\mathcal{T}$; then, 177 sets are collected within the sets $\mathcal{U}$ and $\mathcal{V}$. We used the same data as in Examples 4.7 and 4.14 from [32], and according to our earlier results, a total number of 93 elements are minimal. In Figure 2, the sets within $\mathcal{T}$ are the lightly and darkly filled circles, while the $H_2$-approximate minimal elements of the set $\mathcal{A}$ (that is, the sets in $\mathcal{U}$ and $\mathcal{V}$) are the darkly filled circles. For comparison, Algorithm 2 is also used on the same family of sets with $H = \{(0,0)^T\}$ (see ([32], Examples 4.7 and 4.14)), with 103 sets within $\mathcal{T}$ and 93 sets within $\mathcal{U}$ and $\mathcal{V}$; see Figure 3. Let us note that this example is chosen to illustrate the efficiency of Algorithm 2, as is to be expected for problems with a relatively homogeneous distribution of set size and structure; see the corresponding discussion in the vector-valued case [15,43].*
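The structure of this experiment can be reconstructed in a few lines; since the random data below differ from the authors' sample, the counts 1000/189/177 will not be reproduced. For closed balls of common radius $r$ with centers $c_A, c_B$ and $C = \mathbb{R}^2_+$, the certainly less relation reduces to the center test $c_{A,i} + 2r \le c_{B,i}$ ($i = 1,2$), and adding $H = \{(1,1)^T\}$ simply shifts a center by $(1,1)$. An illustrative sketch under these assumptions (our own code, smaller sample size):

```python
import random

R = 1.0                                   # common ball radius
H = (1.0, 1.0)                            # H = {(1, 1)^T}

def cert_less(cA, cB):
    """A ⪯^cert_C B for balls of radius R, C = R^2_+: cA_i + 2R <= cB_i."""
    return all(a + 2 * R <= b for a, b in zip(cA, cB))

def shift(c):
    """Center of A + H when A is the ball centered at c."""
    return (c[0] + H[0], c[1] + H[1])

def h2_minimal(cbar, centers):
    """Definition 6(b) with ⪯ = ⪯^cert_C: A + H ⪯ Abar forces Abar ⪯ A + H."""
    return all(cert_less(cbar, shift(c))
               for c in centers if cert_less(shift(c), cbar))

random.seed(0)
centers = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(100)]
minimal = [c for c in centers if h2_minimal(c, centers)]
```

Full use of Algorithm 2 would apply the forward-backward sweeps with the condition of Definition 6 (b) instead of the direct check `h2_minimal`; for this small sample, the direct check is enough to visualize the minimal balls.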

**Figure 2.** A randomly generated family of sets. The lightly and darkly filled circles belong to the set $\mathcal{T}$ generated by Algorithm 2, while the $H_2$-approximate minimal elements of the set $\mathcal{A}$ are exactly the darkly filled circles (see Example 2).

*Of course, the notion of approximate minimality makes sense when minimal elements do not exist (in the vector-valued case, this can happen when the set of feasible elements in the objective space is open). In the future, we will study continuity notions of set-valued mappings that appear in set optimization problems and investigate existence results.*

**Figure 3.** The randomly generated family of sets from Example 2 with *H* = {(0, 0)<sup>T</sup>}, i.e., we do not consider approximate minimal elements here, but look for the minimal elements of the family of sets A. The lightly and darkly filled circles belong to the set T generated by Algorithm 2, while the minimal elements of the set A are the darkly filled circles.

#### **5. Conclusions**

This paper investigates different kinds of approximate minimal solutions of set optimization problems. In particular, we present an inequality approach to characterize these approximate minimal solutions by means of a prominent scalarizing functional. To be as general as possible, our analysis is developed in real linear spaces without assuming any topology on the spaces and therefore relies only on algebraic relations and set inclusions between the involved quantities. It would be interesting to study whether different scalarizing functionals may be used for a similar analysis, as the separation functionals of Tammer–Weidner type have recently been embedded into a larger class of functionals [47]. We have proposed effective algorithms that select approximate minimal elements out of a family of finitely many sets. As a next step, it will be necessary to test our algorithms on practical examples.
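The scalarizing functional alluded to above is of Tammer–Weidner (Gerstewitz) type; one standard algebraic form from the literature (shown here for orientation only — the exact variant used in the paper may differ) is:

```latex
% Tammer--Weidner (Gerstewitz) scalarizing functional on a real linear
% space Y, for a convex cone C \subseteq Y and a direction k \in C \setminus \{0\}:
\varphi_{C,k}(y) := \inf \{\, t \in \mathbb{R} \;:\; y \in t\,k - C \,\}
```

Inequalities of the form φ<sub>C,k</sub>(·) ≤ ε over suitable differences of sets then yield characterizations of approximate minimality without any topological assumptions.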

**Author Contributions:** Conceptualization, E.K., M.A.K. and X.Q.; Methodology, E.K., M.A.K. and X.Q.; Software, E.K. and M.A.K.; Investigation, E.K., M.A.K. and X.Q.; Writing–original draft preparation, E.K. and M.A.K.; writing–review and editing, E.K., M.A.K. and X.Q.; visualization, E.K. and M.A.K. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research received no external funding.

**Conflicts of Interest:** The authors declare no conflict of interest.

© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

*Mathematics* Editorial Office E-mail: mathematics@mdpi.com www.mdpi.com/journal/mathematics