**Symmetry in Mathematical Analysis and Functional Analysis**

Editors: **Octav Olteanu, Savin Treanta**

MDPI · Basel · Beijing · Wuhan · Barcelona · Belgrade · Manchester · Tokyo · Cluj · Tianjin

*Editors* Octav Olteanu, Mathematics and Informatics, University Politehnica of Bucharest, Bucharest, Romania

Savin Treanta, Applied Mathematics, University Politehnica of Bucharest, Bucharest, Romania

*Editorial Office* MDPI, St. Alban-Anlage 66, 4052 Basel, Switzerland

This is a reprint of articles from the Special Issue published online in the open access journal *Symmetry* (ISSN 2073-8994) (available at: www.mdpi.com/journal/symmetry/special issues/Symmetry Mathematical Analysis Functional Analysis).

For citation purposes, cite each article independently as indicated on the article page online and as indicated below:

LastName, A.A.; LastName, B.B.; LastName, C.C. Article Title. *Journal Name* **Year**, *Volume Number*, Page Range.

**ISBN 978-3-0365-6591-0 (Hbk) ISBN 978-3-0365-6590-3 (PDF)**

© 2023 by the authors. Articles in this book are Open Access and distributed under the Creative Commons Attribution (CC BY) license, which allows users to download, copy and build upon published articles, as long as the author and publisher are properly credited, which ensures maximum dissemination and a wider impact of our publications.

The book as a whole is distributed by MDPI under the terms and conditions of the Creative Commons license CC BY-NC-ND.

## **Contents**


## *Editorial* **Special Issue of Symmetry: "Symmetry in Mathematical Analysis and Functional Analysis"**

**Octav Olteanu**

Department of Mathematics and Informatics, University Politehnica of Bucharest, 060042 Bucharest, Romania; octav.olteanu50@gmail.com

This Special Issue consists of 11 papers recently published in MDPI's journal *Symmetry* under the general thematic title "Symmetry in Mathematical Analysis and Functional Analysis" (see [1–11]). The deadline for manuscript submissions was 31 July 2022. This Special Issue belongs to the section of the journal entitled "Mathematics and Symmetry/Asymmetry".

Among other aspects of the theory underlying this area of research, the content of these 11 published papers (and their references) covers, but is not limited to, the following subjects:


In the first part of paper [11], the symmetry of sublinear continuous operators *P* : *X* → *Y* (*P*(*x*) = *P*(−*x*) for all *x* ∈ *X*) appears in Theorem 2 and in some of its consequences. Notably, if *X*, *Y* are Banach lattices, with *Y* order complete, then the norm of a continuous sublinear operator from *X* into *Y* controls the norms of all its subgradients. In the second part of the same paper, elements of the theory of the Markov moment problem are explored. Since this thematic area is closely related to many other fields of mathematics, we briefly review some notions regarding the classical one-dimensional and, in particular, the multidimensional moment problem and its relationship with other areas of research, such as:

- the explicit form of any non-negative polynomial on a closed subset of R*<sup>n</sup>* in terms of sums of squares of polynomials;
- the extension of positive linear functionals and operators;
- the extension of linear operators dominated by a convex continuous operator and dominating a given continuous concave operator (constraints that might hold only on the positive cone of the domain space);
- measure theory, the notion of a moment determinate measure, and the study of determinacy;
- matrix theory and spaces of commuting self-adjoint operators (in particular, spaces of commuting symmetric matrices with real entries);
- inequalities, Banach lattices of functions, and self-adjoint operators;
- the existence, uniqueness, and eventual construction of the linear solution to an interpolation problem with one or two constraints;
- examples of continuous sublinear (or only convex) operators, operator theory, and complex functions of complex variables.

In the present editorial, only analysis and functional analysis over the real field are addressed.
As is well known and pointed out by the authors of [12], symmetric matrices with real entries have special properties, and there exists a natural order relation on the real vector space *Sym*(*n* × *n*, R) of all such matrices. With respect to this order relation, for *n* ≥ 2, the

**Citation:** Olteanu, O. Special Issue of Symmetry: "Symmetry in Mathematical Analysis and Functional Analysis". *Symmetry* **2022**, *14*, 2665. https://doi.org/10.3390/ sym14122665

Received: 13 December 2022 Accepted: 14 December 2022 Published: 16 December 2022

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2022 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

ordered vector space *Sym*(*n* × *n*, R) is not a lattice. On the other hand, the multiplication of such *n* × *n* matrices is not commutative for *n* ≥ 2. Clearly, the corresponding assertions hold true for the space A(*H*) of all self-adjoint operators acting on a real or complex Hilbert space *H* with dim(*H*) ≥ 2. The same article [12] contains a simple proof of the fact that any positive linear operator mapping an ordered Banach space *X* into an ordered Banach space *Y* is continuous. In particular, any positive linear operator mapping an arbitrary Banach lattice into a Banach lattice is continuous. In order to avoid the two main difficulties mentioned above regarding the space A(*H*), for any *A* ∈ A(*H*), as demonstrated in [13], one constructs a commutative real Banach algebra *Y*(*A*), which is also an order complete Banach lattice (endowed with the operatorial norm of A(*H*)). In this Banach lattice, we have |*U*| := sup{*U*, −*U*} = (*U*<sup>2</sup>)<sup>1/2</sup> for all *U* ∈ *Y*(*A*); in other words, the modulus of *U* equals the positive square root of the positive self-adjoint operator *U*<sup>2</sup>. Moreover, due to the order completeness of the vector lattice *Y*(*A*), Hahn–Banach-type extension theorems for linear operators with *Y*(*A*) as a codomain hold. In the classical one-dimensional moment problem, given a sequence (*y<sub>j</sub>*)<sub>*j*∈N</sub> of real numbers, one looks for necessary and sufficient conditions for the existence of a positive regular Borel measure *ν* on a closed subset *F* ⊆ R satisfying the interpolation conditions ∫<sub>*F*</sub> *t<sup>j</sup>dν*(*t*) = *y<sub>j</sub>*, *j* ∈ N := {0, 1, 2, . . .}. This is an inverse problem, because the measure *ν* is not known; it must be identified starting from its moments ∫<sub>*F*</sub> *t<sup>j</sup>dν*(*t*), *j* ∈ N.
If such a measure does exist, its uniqueness and, eventually, its construction can be studied. The multidimensional real moment problem can be formulated in a similar way. In the case of an *n*-dimensional moment problem, we have *j* = (*j*<sub>1</sub>, . . . , *j<sub>n</sub>*) ∈ N*<sup>n</sup>*, *t* = (*t*<sub>1</sub>, . . . , *t<sub>n</sub>*) ∈ *F* ⊆ R*<sup>n</sup>*, where *n* ≥ 2 is a fixed integer. Considering the unique linear form *L*<sub>0</sub> on the space of all polynomials with real coefficients satisfying the interpolation conditions *L*<sub>0</sub>(*ϕ<sub>j</sub>*) = *y<sub>j</sub>*, *j* ∈ N*<sup>n</sup>*, the existence of a solution is reduced to the representation of *L*<sub>0</sub> by a positive measure *dν*; namely, by linearity, the equality *L*<sub>0</sub>(*p*) = ∫<sub>*F*</sub> *p*(*t*)*dν*(*t*) holds for all polynomials *p* ∈ R[*t*<sub>1</sub>, . . . , *t<sub>n</sub>*]. This motivates the terminology *representing measure* for *L*<sub>0</sub>. According to the Haviland theorem [14], the necessary and sufficient condition for the existence of the representing positive measure *dν* for *L*<sub>0</sub> is *L*<sub>0</sub>(*p*) ≥ 0 for any polynomial *p* ∈ R[*t*<sub>1</sub>, . . . , *t<sub>n</sub>*] satisfying *p*(*t*) ≥ 0 for all *t* = (*t*<sub>1</sub>, . . . , *t<sub>n</sub>*) ∈ *F*. In the important case *n* = 1, *F* = R, this positivity condition can be expressed in terms of the positive semidefiniteness of quadratic forms, since each polynomial (with real coefficients) which is non-negative on the whole real axis is the sum of two squares of polynomials from R[*t*] (see [15,16]). With the abovementioned notations, the coefficients of these quadratic forms are the *y*<sub>*i*+*j*</sub>. This is the one-dimensional Hamburger moment problem. It provides a good example of symmetry, given by the symmetric Hankel matrices (*y*<sub>*i*+*j*</sub>)<sub>0≤*i*,*j*≤*m*</sub>, *m* ∈ N.
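To make the Hankel-matrix positivity condition concrete, here is a small numerical sketch (our own illustrative example, not taken from the cited papers): the moments of the Lebesgue measure on [0, 1] are *y<sub>j</sub>* = 1/(*j* + 1), and the resulting Hankel matrices (*y*<sub>*i*+*j*</sub>)<sub>0≤*i*,*j*≤*m*</sub> are the Hilbert matrices, which are positive definite, as the positivity condition for quadratic forms requires.

```python
import numpy as np

# Moments of the Lebesgue measure on [0, 1]: y_j = integral of t^j over [0, 1] = 1/(j + 1).
def moment(j):
    return 1.0 / (j + 1)

# Hankel matrix (y_{i+j})_{0 <= i, j <= m}; positive semidefiniteness of these
# matrices is the classical positivity condition in the Hamburger moment problem.
def hankel(m):
    return np.array([[moment(i + j) for j in range(m + 1)] for i in range(m + 1)])

for m in range(1, 6):
    eigs = np.linalg.eigvalsh(hankel(m))
    assert eigs.min() > 0  # each Hilbert matrix is in fact positive definite
```

The smallest eigenvalue shrinks rapidly with *m*, which is why such matrices are also standard examples of ill-conditioning.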
A similar remark is valid for the one-dimensional moment problem on [0, +∞): *p* ∈ R[*t*] satisfies *p*(*t*) ≥ 0 for all *t* ∈ [0, +∞) if and only if *p*(*t*) = *q*<sup>2</sup>(*t*) + *tr*<sup>2</sup>(*t*) for all *t* ∈ [0, +∞), for some polynomials *q*, *r* ∈ R[*t*]. Unlike the one-dimensional case, there are non-negative polynomials on R<sup>2</sup> which are not sums of squares of polynomials in R[*t*<sub>1</sub>, *t*<sub>2</sub>] (see [16]). Up to now, the terms of the sequence (*y<sub>j</sub>*)<sub>*j*∈N<sup>*n*</sup></sub> have been numbers; this is the scalar moment problem. Next, we consider a sequence (*y<sub>j</sub>*)<sub>*j*∈N<sup>*n*</sup></sub> of elements of an ordered vector space *Y* and, with the notations above, we study the existence of a linear positive operator *T* : *X*<sub>1</sub> → *Y* such that *T*(*ϕ<sub>j</sub>*) = *y<sub>j</sub>* for all *j* ∈ N*<sup>n</sup>*. Here, *X*<sub>1</sub> is an ordered vector space of real functions containing the polynomials and the space *C<sub>c</sub>*(*F*) of all continuous compactly supported functions on *F*, such that the subspace of the polynomials is a majorizing subspace in *X*<sub>1</sub>. For example, if *X* := *L*<sup>*p*</sup><sub>*ν*</sub>(*F*), *p* ∈ [1, +∞), the space *X*<sub>1</sub> will be the sublattice of *X* formed by all functions *f* from *X* whose modulus | *f* | is dominated by a polynomial on the entire subset *F*. Then, it is easy to observe that the subspace of the polynomials is a majorizing subspace in *X*<sub>1</sub> and, clearly, *X*<sub>1</sub> contains *C<sub>c</sub>*(*F*) and the space of the polynomials. Assuming that *Y* is order complete, we consider the unique linear operator *T*<sub>0</sub> mapping the

space of the polynomials to *Y*, defined by *T*<sub>0</sub>(∑<sub>*j*∈*J*<sub>0</sub></sub> *α<sub>j</sub>ϕ<sub>j</sub>*) := ∑<sub>*j*∈*J*<sub>0</sub></sub> *α<sub>j</sub>y<sub>j</sub>*, where *J*<sub>0</sub> ⊂ N*<sup>n</sup>* is an arbitrary finite subset. Additionally, assume that *T*<sub>0</sub>(*p*) ∈ *Y*<sub>+</sub> for all non-negative polynomials *p* on *F*. The application of the Kantorovich extension theorem for positive linear operators (see [17]) leads to the existence of a linear positive extension *T*<sub>1</sub> of *T*<sub>0</sub>, where *T*<sub>1</sub> maps *X*<sub>1</sub> to *Y*. If we prove the continuity of *T*<sub>1</sub> on *X*<sub>1</sub>, then there exists a unique continuous positive extension *T* : *X* → *Y* of *T*<sub>1</sub>. This follows from the density of *C<sub>c</sub>*(*F*) in *X* = *L*<sup>*p*</sup><sub>*ν*</sub>(*F*), *p* ∈ [1, +∞) (see [18]). When an upper constraint on the solution *T* is required, we have a Markov moment problem. Usually, the following constraints on the solution *T* of the interpolation problem are required: 0 ≤ *T* ≤ *T*<sub>2</sub> on the positive cone *X*<sub>+</sub>, where *T*<sub>2</sub> is a given linear positive operator mapping the Banach lattice *X* to the order complete Banach lattice *Y*. In [19], the explicit form of non-negative polynomials on a strip is highlighted in terms of sums of squares. In the papers [20–36], various results on the full and truncated moment problem are provided. These results refer to connections with fixed-point theory (see [23]); the moment problem on compact subsets with a nonempty interior in R*<sup>n</sup>* and the decomposition of positive polynomials on such compact subsets (see [25]); and the moment problem and the decomposition of positive polynomials on compact semi-algebraic subsets (see [26–29]). In [29], a class of moment problems on unbounded semi-algebraic sets is also discussed. The truncated Markov moment problem, including the construction of a solution, is emphasized in the articles [30,31].
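The decomposition *p*(*t*) = *q*<sup>2</sup>(*t*) + *tr*<sup>2</sup>(*t*) for polynomials non-negative on [0, +∞), recalled above, can be checked numerically. The following sketch uses a polynomial of our own choosing (*q*(*t*) = *t* − 1, *r*(*t*) = *t* − 2), not one from the cited papers.

```python
import numpy as np

# Numerical check of the decomposition p(t) = q(t)^2 + t * r(t)^2, which
# characterizes polynomials non-negative on [0, +inf).
# Illustrative choice: q(t) = t - 1, r(t) = t - 2, so p(t) = t^3 - 3t^2 + 2t + 1.
def p(t): return t**3 - 3*t**2 + 2*t + 1
def q(t): return t - 1
def r(t): return t - 2

ts = np.linspace(0.0, 10.0, 1001)
assert np.allclose(p(ts), q(ts)**2 + ts * r(ts)**2)  # the identity holds
assert (p(ts) >= 0).all()  # hence p is non-negative on [0, +inf)
```

Expanding *q*<sup>2</sup> + *tr*<sup>2</sup> = (*t*<sup>2</sup> − 2*t* + 1) + (*t*<sup>3</sup> − 4*t*<sup>2</sup> + 4*t*) confirms the identity term by term.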
For optimization related to the truncated moment problem, see [32,33]. A solution to a full moment problem obtained as a limit of solutions of the associated truncated moment problems is provided by the authors of [34]. In [35], an operator-valued moment problem is solved, while an *L*-moment problem is discussed in [36]. The geometric aspects of functional analysis in nonstandard spaces are discussed in the papers [37,38], without any connection with the moment problem. Iterative methods regarding fixed-point and related optimization problems are discussed in [39–41]. In the monograph [42], the authors study the sandwich condition *T*<sub>1</sub> ≤ *T* ≤ *T*<sub>2</sub> on the positive cone of the domain space, where *T*<sub>1</sub>, *T*<sub>2</sub> are given linear functionals and *T* is a solution satisfying a finite number of interpolation moment conditions. The article [43] provides necessary and sufficient conditions for the existence of a positive linear operator solution dominated by a convex operator. A result of G. Cassier (see [25]) is then used to apply the first theorem of [43] to the classical multidimensional Markov moment problem on a compact subset with a nonempty interior in R*<sup>n</sup>*. A characterization of the existence of a linear operator solution *T* for an arbitrary infinite number of moment conditions, such that the sandwich constraint *T*<sub>1</sub> ≤ *T* ≤ *T*<sub>2</sub> holds on *X*<sub>+</sub>, is also provided; here, *T*<sub>1</sub>, *T*<sub>2</sub> are given linear operators. In the article [44], sufficient conditions for the determinacy of probability distributions on R or on [0, +∞), respectively, are studied. We recall that a measure is a determinate measure on the closed subset *F* if it is uniquely determined by its moments on *F*. In the paper [45], the notion of a finite simplicial set is reviewed and applied to a nonstandard sandwich theorem on such a set.
Notably, a finite simplicial set can be unbounded with respect to any locally convex topology on the vector space in which the set is contained. As we have already seen, for *n* ≥ 2 the non-negative polynomials on R*<sup>n</sup>* are not all expressible in terms of sums of squares. This is the motivation for the polynomial approximation results provided in [46,47] and applied to the Markov moment problem with an operator solution in [46–49]. These results are essentially based on the notion of a moment determinate measure. In [46], it was proved that for a moment determinate measure *ν*, the non-negative polynomials on *F* are dense in the positive cone of *L*<sup>1</sup><sub>*ν*</sub>(*F*). Consequently, the subspace of the polynomials is dense in *L*<sup>1</sup><sub>*ν*</sub>(*F*). Notably, if *n* ≥ 2, there exist moment determinate measures *ν* on R*<sup>n</sup>* such that the polynomials are not dense in *L*<sup>2</sup><sub>*ν*</sub>(R*<sup>n</sup>*) (see [22]). We assume that all the measures are positive regular Borel measures on *F*, with finite moments of all orders. In [47–49], the authors prove that for products *ν* = *ν*<sub>1</sub> × · · · × *ν<sub>n</sub>* of *n* moment determinate measures *ν<sub>i</sub>* on R, any function in the positive cone of *L*<sup>1</sup><sub>*ν*</sub>(*F*) can be approximated by finite sums of special products of polynomials, *p*(*t*) = *p*<sub>1</sub>(*t*<sub>1</sub>) · · · *p<sub>n</sub>*(*t<sub>n</sub>*), where each *p<sub>i</sub>* is non-negative on R and, hence, is a sum of (two) squares, *i* = 1, . . . , *n*. For

such measures *ν*, this enables us to solve the multidimensional Markov moment problems on R*<sup>n</sup>* mentioned above in terms of quadratic forms. The corresponding result holds for products of *n* moment determinate measures on [0, +∞)*<sup>n</sup>*. Here, assume that *Y* is an order complete Banach lattice and *T*<sub>1</sub>, *T*<sub>2</sub> are bounded linear operators mapping *L*<sup>1</sup><sub>*ν*</sub>(*F*) into *Y*. In this case, the linear solution *T* of the problem under investigation is also bounded, due to the constraint *T*<sub>1</sub> ≤ *T* ≤ *T*<sub>2</sub> on the positive cone of *L*<sup>1</sup><sub>*ν*</sub>(*F*). The uniqueness of the solution follows from the density of the polynomials in *L*<sup>1</sup><sub>*ν*</sub>(*F*). To conclude, we observe that polynomial approximation on unbounded subsets solves the existence, as well as the uniqueness, of the solution to a large class of Markov moment problems on R*<sup>n</sup>* or on [0, +∞)*<sup>n</sup>*, *n* ≥ 2.

**Data Availability Statement:** This study uses only theoretical results and their applications published in the cited references.

**Conflicts of Interest:** The author declares no conflict of interest.

#### **References**


## *Article* **Relation-Theoretic Coincidence and Common Fixed Point Results in Extended Rectangular** *b***-Metric Spaces with Applications**

**Yan Sun** <sup>1</sup> **and Xiaolan Liu** <sup>1,2,\*</sup>


**Abstract:** The objective of this paper is to obtain new relation-theoretic coincidence and common fixed point results for mappings *F* and *g* via hybrid contractions and auxiliary functions in extended rectangular *b*-metric spaces, which improve and extend several existing results. Finally, some nontrivial examples and applications are given to justify the main results.

**Keywords:** coincidence point; common fixed point; relation-theoretic; auxiliary functions; hybrid contractions; extended rectangular *b*-metric space

**MSC:** 47H10; 54H25

#### **1. Introduction and Preliminaries**

Throughout the article, we denote by R the set of all real numbers; by R+, the set of all non-negative real numbers; and by N, the set of all non-negative integers. We begin by recalling several known metric-type spaces, which will be useful in what follows.

In 1993, Czerwik [1] formally introduced and studied an interesting generalized metric space called the *b*-metric space. Since then, many scholars have extended and developed fixed point theorems in *b*-metric spaces; recent studies can be found in [2–4].

**Definition 1** ([1])**.** *Let* Ω ≠ ∅ *and s* > 1 *be a given real number. If a function d* : Ω × Ω → R<sup>+</sup> *satisfies the following conditions:*

(*d*1) *d*(*u*, *v*) = 0 *if and only if u* = *v;*

(*d*2) *d*(*u*, *v*) = *d*(*v*, *u*)*, for all u*, *v* ∈ Ω*;*

(*d*3) *d*(*u*, *v*) ≤ *s*[*d*(*u*, *w*) + *d*(*w*, *v*)]*, for all u*, *v*, *w* ∈ Ω*,*

*then d is said to be a b-metric, and* (Ω, *d*) *is said to be a b-metric space with the coefficient s.*
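A standard illustration (assumed here, not quoted from [1]): *d*(*u*, *v*) = |*u* − *v*|<sup>2</sup> on R is a *b*-metric with coefficient *s* = 2 but not a metric, since the ordinary triangle inequality fails. A quick numerical check:

```python
import random

# Standard example: d(u, v) = |u - v|^2 on the reals is a b-metric with s = 2,
# but not a metric (e.g. d(0, 2) = 4 > d(0, 1) + d(1, 2) = 2).
def d(u, v):
    return abs(u - v) ** 2

random.seed(0)
for _ in range(10_000):
    u, v, w = (random.uniform(-100, 100) for _ in range(3))
    # (d3) with s = 2, a consequence of (a + b)^2 <= 2(a^2 + b^2)
    assert d(u, v) <= 2 * (d(u, w) + d(w, v))

assert d(0, 2) > d(0, 1) + d(1, 2)  # ordinary triangle inequality fails
```

Conditions (*d*1) and (*d*2) are immediate, so only the relaxed inequality (*d*3) needs checking.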

In 2000, a generalized metric that replaces the triangle inequality with a quadrilateral inequality was proposed by Branciari [5].

**Definition 2** ([5])**.** *Let* Ω ≠ ∅*. For all u*, *v* ∈ Ω *and all distinct points w*, *t* ∈ Ω \ {*u*, *v*}*, if a function d<sub>r</sub>* : Ω × Ω → [0, ∞) *satisfies the following conditions:*

(*d*1) *dr*(*u*, *v*) = 0 ⇔ *u* = *v;*

(*d*2) *dr*(*u*, *v*) = *dr*(*v*, *u*)*; and*

(*d*3) *dr*(*u*, *v*) ≤ *dr*(*u*, *w*) + *dr*(*w*, *t*) + *dr*(*t*, *v*)*,*

*then d<sup>r</sup> is said to be a rectangular metric and* (Ω, *dr*) *is said to be a rectangular metric space (Branciari distance space).*

In 2015, the rectangular *b*-metric was introduced by George et al. [6] as a development of both the *b*-metric and the rectangular metric.

**Citation:** Sun, Y.; Liu, X. Relation-Theoretic Coincidence and Common Fixed Point Results in Extended Rectangular *b*-Metric Spaces with Applications. *Symmetry* **2022**, *14*, 1588. https://doi.org/ 10.3390/sym14081588

Academic Editors: Savin Treanta and Juan Luis García Guirao

Received: 30 June 2022 Accepted: 28 July 2022 Published: 2 August 2022


**Copyright:** © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

**Definition 3** ([6])**.** *Let* Ω ≠ ∅ *and s* > 1 *be a given real number. If, for all u*, *v* ∈ Ω *and for all distinct points w*, *t* ∈ Ω\{*u*, *v*}*, a function drb* : Ω × Ω → R<sup>+</sup> *satisfies the following conditions:*

(*drb*1) *drb*(*u*, *v*) = 0 *if and only if u* = *v;*

(*drb*2) *drb*(*u*, *v*) = *drb*(*v*, *u*)*; and*

(*drb*3) *drb*(*u*, *v*) ≤ *s*[*drb*(*u*, *w*) + *drb*(*w*, *t*) + *drb*(*t*, *v*)]*,*

*then drb is said to be a rectangular b-metric and* (Ω, *drb*) *is said to be a rectangular b-metric space with the coefficient s.*

In 2017, Kamran et al. [7] used a binary function *θ* to introduce a novel metric-type space.

**Definition 4** ([7])**.** *Let* Ω ≠ ∅ *and θ* : Ω × Ω → [1, ∞)*. A function d<sup>θ</sup>* : Ω × Ω → [0, ∞) *is said to be an extended b-metric if it satisfies the following conditions:*

(*dθ*1) *d<sup>θ</sup>* (*u*, *v*) = 0 *if and only if u* = *v;*

(*dθ*2) *d<sup>θ</sup>* (*u*, *v*) = *d<sup>θ</sup>* (*v*, *u*)*, for all u*, *v* ∈ Ω*;*

(*dθ*3) *d<sup>θ</sup>* (*u*, *v*) ≤ *θ*(*u*, *v*)[*d<sup>θ</sup>* (*u*, *w*) + *d<sup>θ</sup>* (*w*, *v*)]*, for all u*, *v*, *w* ∈ Ω*,*

*then* (Ω, *d<sup>θ</sup>* ) *is said to be an extended b-metric space with θ.*

In 2019, inspired by [5,7], Asim et al. [8] presented a more generalized metric space called the extended rectangular *b*-metric space (also called the extended Branciari *b*-distance in [9]).

**Definition 5** ([8])**.** *Let* Ω ≠ ∅ *and ξ* : Ω × Ω → [1, ∞)*. A function d<sup>ξ</sup>* : Ω × Ω → [0, ∞) *is said to be an extended rectangular b-metric, if for all u*, *v* ∈ Ω *and all distinct points w*, *t* ∈ Ω \ {*u*, *v*}*, d<sup>ξ</sup> satisfies the following conditions:*

(*dξ*1) *d<sup>ξ</sup>* (*u*, *v*) = 0 ⇔ *u* = *v;*

(*dξ*2) *d<sup>ξ</sup>* (*u*, *v*) = *d<sup>ξ</sup>* (*v*, *u*)*; and*

(*dξ*3) *d<sup>ξ</sup>* (*u*, *v*) ≤ *ξ*(*u*, *v*)[*d<sup>ξ</sup>* (*u*, *w*) + *d<sup>ξ</sup>* (*w*, *t*) + *d<sup>ξ</sup>* (*t*, *v*)]*,*

*then* (Ω, *d<sup>ξ</sup>* ) *is said to be an extended rectangular b-metric space.*

**Remark 1.** *The relationship between these types of metric spaces are shown in Figure 1.*

**Figure 1.** The relationship between these types of metric spaces.

Now, we review some topological properties of the extended rectangular *b*-metric space.

**Definition 6** ([8])**.** *Let* (Ω, *d<sup>ξ</sup>* ) *be an extended rectangular b-metric space.*

(*i*) *a sequence* {*un*} *in* Ω *is said to be a Cauchy sequence if* lim<sub>*n*,*m*→∞</sub> *d<sup>ξ</sup>* (*un*, *um*) = 0*;*

(*ii*) *a sequence* {*un*} *in* Ω *is said to be convergent to u if* lim<sub>*n*→∞</sub> *d<sup>ξ</sup>* (*un*, *u*) = 0*; and*

(*iii*) (Ω, *d<sup>ξ</sup>* ) *is said to be complete if every Cauchy sequence in* Ω *converges to some point in* Ω*.*

Next, we recall the simulation function, which was introduced by Khojasteh et al. [10]. It plays an important role in recent studies on fixed point theory and has inspired many scholars; some results obtained via simulation functions can be found in [11–14].

**Definition 7** ([10])**.** *A function η* : R<sup>+</sup> × R<sup>+</sup> → R *is said to be a simulation function, if it satisfies the following conditions:*

(*η*1) *η*(0, 0) = 0*;*

(*η*2) *η*(*u*, *v*) < *v* − *u*, *for u*, *v* > 0*; and*

(*η*3) *if* {*un*}, {*vn*} *are sequences in* (0, ∞) *such that* lim<sub>*n*→∞</sub> *u<sub>n</sub>* = lim<sub>*n*→∞</sub> *v<sub>n</sub>* > 0*, then*

$$\limsup\_{n \to \infty} \eta(u\_n, v\_n) < 0.$$

We denote the set of all simulation functions by Z.
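A classical member of Z from the simulation-function literature (stated here as an illustrative assumption): *η*(*u*, *v*) = *λv* − *u* with *λ* ∈ (0, 1). Conditions (*η*1)–(*η*3) can be checked directly; a numerical sketch:

```python
# Example simulation function: eta(u, v) = lam * v - u with lam in (0, 1).
LAM = 0.5

def eta(u, v):
    return LAM * v - u

assert eta(0, 0) == 0  # (eta1)
# (eta2): for u, v > 0, eta(u, v) = lam*v - u < v - u since lam < 1
for u, v in [(0.1, 0.2), (1.0, 3.0), (5.0, 0.5)]:
    assert eta(u, v) < v - u
# (eta3): if u_n, v_n -> L > 0 then eta(u_n, v_n) -> (lam - 1) * L < 0,
# so in particular limsup eta(u_n, v_n) < 0
L = 2.0
u_n = [L + 1.0 / n for n in range(1, 100)]
v_n = [L - 1.0 / (2 * n) for n in range(1, 100)]
tail = [eta(u, v) for u, v in zip(u_n, v_n)][-10:]
assert max(tail) < 0
```

Here (*η*3) holds because the values converge to (*λ* − 1)*L* < 0, so every tail of the sequence is negative.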

**Definition 8** ([10])**.** *Let* (Ω, *d*) *be a metric space, F* : Ω → Ω *be a mapping and η* ∈ Z*. Then, F is called a* Z*-contraction with respect to η if the following condition holds:*

$$
\eta(d(Fu, Fv), d(u, v)) \geqslant 0,
$$

*for all u*, *v* ∈ Ω *with u* ≠ *v.*

**Theorem 1** ([10])**.** *Every* Z*-contraction on a complete metric space has a unique fixed point.*
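A minimal sketch of Theorem 1 (our own toy example, not from [10]): *F*(*u*) = *u*/2 on the complete metric space (R, | · |) is a Z-contraction with respect to *η*(*u*, *v*) = 0.6*v* − *u*, since *η*(*d*(*Fu*, *Fv*), *d*(*u*, *v*)) = 0.6|*u* − *v*| − 0.5|*u* − *v*| ≥ 0, and Picard iteration converges to its unique fixed point 0.

```python
# Toy Z-contraction: F(u) = u / 2 on (R, |.|) with eta(u, v) = 0.6 * v - u.
def F(u):
    return u / 2

def eta(u, v):
    return 0.6 * v - u

# The Z-contraction condition eta(d(Fu, Fv), d(u, v)) >= 0 for a sample pair:
u, v = 3.0, -7.0
assert eta(abs(F(u) - F(v)), abs(u - v)) >= 0

# Picard iterates converge to the unique fixed point u* = 0 (Theorem 1)
x = 100.0
for _ in range(60):
    x = F(x)
assert abs(x) < 1e-12
```

The same check works for any pair *u* ≠ *v*, because *d*(*Fu*, *Fv*) is exactly half of *d*(*u*, *v*) in this example.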

Another new variant of the Banach contraction principle involving a binary relation was proposed by Alam and Imdad [15] on complete metric spaces. In this case, the contraction condition is weaker than the usual one, since it only needs to hold for those elements that are related under the binary relation, rather than on the whole space. With the introduction of binary relations, the study of fixed point theory has become considerably richer.

For instance, Al-Sulami et al. [15] introduced the (*θ*, <)-contraction via a binary relation and applied it to nonlinear matrix equations; Alfaqih et al. [16] proposed the (*F*, <)*g*-contraction in metric spaces endowed with a binary relation and investigated the existence and uniqueness of solutions of integral equations of Volterra type; and Zadal and Sarwar [17] obtained common fixed point results for two mappings under a binary relation. Now, we recall some basic definitions concerning binary relations, which play an important role in our main results.

**Definition 9** ([18])**.** *Let* Ω 6= ∅ *and* < *be a binary relation on* Ω*. For any u*<*v or* (*u*, *v*) ∈ <*, where u*, *v* ∈ Ω*, we say that "u is* <*-related to v" or "u relates to v under* <*".*

**Definition 10** ([18])**.** *Let* Ω 6= ∅*,* < *be a binary relation on* Ω *and F* : Ω → Ω *be a mapping.*

(*i*) *A sequence* {*un*} *is called an* <*-preserving sequence if un*<*un*+1*, for all n* ∈ N*.*

(*ii*) *A binary relation* < *on* Ω *is said to be F-closed if Fu*<*Fv, whenever u*<*v.*

(*iii*) *A binary relation* < *on* Ω *is said to be d-self-closed if for any sequence* {*un*} ⊆ Ω *such that* {*un*} *is* <*-preserving with u<sup>n</sup>* → *u* ∈ Ω*, there exists a subsequence* {*un<sup>k</sup>* } *of* {*un*} *such that unk*<*u or u*<*un<sup>k</sup> , for all k* ∈ N*.*

(*iv*) *A binary relation* < *on* Ω *is said to be transitive if u*<*v and v*<*w implies that u*<*w.*

**Definition 11** ([18])**.** *For u*, *v* ∈ Ω*, a path of length p* ∈ N *in* < *from u to v is a finite sequence* {*u*0, *u*1, · · · , *up*} *such that u*<sup>0</sup> = *u*, *u<sup>p</sup>* = *v and ui*<*ui*+<sup>1</sup> *for all i* ∈ {0, 1, · · · , *p* − 1}*.*

In addition, Alam and Imdad [19] utilized some relatively weaker notions to prove results on the existence and uniqueness of coincidence points involving a pair of mappings defined on a metric space endowed with an arbitrary binary relation. For completeness, we first review some of the relevant definitions that are known.

**Definition 12** ([19])**.** *Let* (Ω, *d*) *be a metric space,* < *be a binary relation on* Ω *and F*, *g* : Ω → Ω *be two mappings.*

(*i*) *The set* Ω *is* <*-complete if every* <*-preserving Cauchy sequence in* Ω *converges to a limit in* Ω*.*

(*ii*) *A binary relation* < *on* Ω *is said to be* (*F*, *g*)*-closed if Fu*<*Fv, whenever gu*<*gv.*

(*iii*) *A binary relation* < *on* Ω *is said to be* (*g*, *d*)*-self-closed if for any sequence* {*un*} ⊆ Ω

*such that* {*un*} *is* <*-preserving with* lim<sub>*n*→∞</sub> *u<sub>n</sub>* = *u, there exists a subsequence* {*un<sub>k</sub>*} *of* {*un*} *such that gun<sub>k</sub>*<*gu or gu*<*gun<sub>k</sub>, for all k* ∈ N*.*

(*iv*) *F is* <*-continuous at u* ∈ Ω *if, for any* <*-preserving sequence* {*un*} *such that* lim<sub>*n*→∞</sub> *u<sub>n</sub>* = *u, we have* lim<sub>*n*→∞</sub> *Fu<sub>n</sub>* = *Fu. Moreover, F is called* <*-continuous if it is* <*-continuous at each point of* Ω*.*

(*v*) *F is* (*g*, <)*-continuous at u if, for any sequence* {*un*} ⊆ Ω *such that* {*gun*} *is* <*-preserving with* lim<sub>*n*→∞</sub> *gu<sub>n</sub>* = *gu, we have* lim<sub>*n*→∞</sub> *Fu<sub>n</sub>* = *Fu. Moreover, F is called* (*g*, <)*-continuous if it is* (*g*, <)*-continuous at each point of* Ω*.*

(*vi*) (*F*, *g*) *is* <*-compatible if, for any sequence* {*un*} ⊆ Ω *such that* {*gun*} *and* {*Fun*} *are* <*-preserving and* lim<sub>*n*→∞</sub> *gu<sub>n</sub>* = lim<sub>*n*→∞</sub> *Fu<sub>n</sub>* = *u* ∈ Ω*, we have* lim<sub>*n*→∞</sub> *d*(*Fgun*, *gFun*) = 0*.*

(*vii*) *A subset E* ⊆ Ω *is said to be* <*-connected, if for any u*, *v* ∈ *E, there exists a path in* < *from u to v.*

**Definition 13** ([19])**.** *Let* (Ω, *d*) *be a metric space and F and g be two self-mappings on* Ω*. Then,*

(*i*) *a point u* ∈ Ω *is called a coincidence point of F and g if gu* = *Fu;*

(*ii*) *if u* ∈ Ω *is a coincidence point of F and g, and there exists a point ū such that ū* = *gu* = *Fu, then ū is called a point of coincidence of F and g;*

(*iii*) *if u* ∈ Ω *is a coincidence point of F and g and u* = *gu* = *Fu, then u is called a common fixed point of F and g; and*

(*iv*) *F and g are called weakly compatible if for all u* ∈ Ω *with Fu* = *gu implies F*(*gu*) = *g*(*Fu*)*.*

**Theorem 2** ([19])**.** *Let* (Ω, *d*) *be a metric space with a binary relation* <*, and* △ *be an* <*-complete subspace of* Ω*. F and g are two self-mappings on* Ω*, which satisfy*

*d*(*Fu*, *Fv*) ≤ *kd*(*gu*, *gv*), *for all gu*<*gv*,

*where k* ∈ (0, 1)*. In addition, if F and g satisfy the following conditions:*

(*i*) *there exists v*<sup>0</sup> ∈ Ω *such that gv*0<*Fv*0*;*

(*ii*) < *is* (*F*, *g*)*-closed;*

(*iii*) *F*(Ω) ⊆ (△ ∩ *g*(Ω))*; and*

	- (*b*) *one of the following conditions holds:*
		- (1) *F is* (*g*, <)*-continuous;*
		- (2) *F and g are continuous; or*
		- (3) <|<sub>Ω</sub> *is d-self-closed,*

*or alternatively,*

(*iv*′) (*a*′) *F and g are* <*-compatible;*

	- (1′) *F is* <*-continuous; and*
	- (2′) < *is* (*g*, *d*)*-self-closed,*

*then F and g have a coincidence point.*

The following lemma plays a crucial role in proving the main results of this paper.

**Lemma 1** ([19])**.** *Let* Ω *be a nonempty set and g* : Ω → Ω*. Then, there exists a subset E of* Ω *such that g*(*E*) = *g*(Ω) *and g* : *E* → *E is one to one.*

From the above, we see that extended rectangular *b*-metric spaces form a class of generalized metric spaces that includes metric spaces, rectangular metric spaces and *b*-metric spaces. As far as we know, in metric spaces, rectangular metric spaces and *b*-metric spaces, there are still some contractions that have not been studied; thus, we intend to study coincidence point and common fixed point results for mappings *F* and *g* in extended rectangular *b*-metric spaces endowed with a binary relation <, which develops the results of [1,6,8,14,18–23].

#### **2. Main Results**

In this section, we introduce an auxiliary function before we begin our discussion of the main results. Let Ψ be the set of all increasing functions *ψ* : [0, ∞) → [0, ∞) satisfying the following condition: lim<sub>*n*→∞</sub> *ψ<sup>n</sup>*(*t*) = 0, for all *t* > 0.

**Remark 2.** *If ψ* ∈ Ψ*, then ψ*(*t*) < *t, for all t* > 0*.*
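A minimal member of Ψ (our illustrative choice): *ψ*(*t*) = *t*/2 is increasing, *ψ<sup>n</sup>*(*t*) = *t*/2<sup>*n*</sup> → 0 for every *t* > 0, and *ψ*(*t*) < *t* as in Remark 2. A quick check:

```python
# psi(t) = t / 2 belongs to Psi: it is increasing on [0, inf) and its
# iterates psi^n(t) = t / 2^n tend to 0 for every t > 0.
def psi(t):
    return t / 2

t = 5.0
x = t
for _ in range(100):
    x = psi(x)   # compute psi^100(t)
assert x < 1e-25  # psi^n(t) -> 0
assert psi(t) < t  # as predicted by Remark 2
```

Any *ψ*(*t*) = *λt* with *λ* ∈ (0, 1) works the same way; the specific constant 1/2 is only for concreteness.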

**Theorem 3.** *Let* (Ω, *d<sup>ξ</sup>* ) *be an extended rectangular b-metric space with a binary relation* < *such that* < *is* (*F*, *g*)*-closed, and* △ *be an* <*-complete subspace of* Ω*. F and g are two self-mappings on* Ω*, which satisfy F*(Ω) ⊆ (△ ∩ *g*(Ω)) *and*

$$
\eta(d\_{\xi}(Fu, Fv), \psi(M\_{F,g}(u, v))) \geqslant 0, \text{ for all } gu < gv,\tag{1}
$$

*where η* ∈ Z*, ψ* ∈ Ψ *and*

$$M\_{F,g}(u,v) = \max\left\{ d\_{\xi}(gu,gv),\ d\_{\xi}(gu,Fu),\ d\_{\xi}(Fv,gv),\ \frac{d\_{\xi}(gv,Fv)\left(1 + d\_{\xi}(gu,Fu)\right)}{1 + d\_{\xi}(gu,gv)},\ \frac{d\_{\xi}(gu,Fu)\left(1 + d\_{\xi}(gv,Fv)\right)}{1 + d\_{\xi}(gu,gv)} \right\}.$$

*In addition, if F and g satisfy the following conditions:*

(*i*) *there exists v*0 ∈ Ω *such that gv*0<*Fv*0 *and gv*0<*Fv*1*, where v*1 *is such that gv*1 = *Fv*0*;*

(*ii*) *for v*0 *in* (*i*)*, we have*

$$\limsup_{n \to \infty} \frac{\psi^{n+1}(t)}{\psi^{n}(t)}\, \xi(u_{n+1}, u_p) < 1,$$

*for all p, n* ∈ N*, where u<sup>n</sup>* = *Fv<sup>n</sup>* = *gvn*+1 *and t* ∈ (0, *d<sup>ξ</sup>* (*u*0, *u*1)] *with u*0 ≠ *u*1*;*

(*iii*) (*a*) ∆ ⊆ *g*(Ω)*;*

(*b*) *F is* (*g*, <)*-continuous, or F and g are continuous, or* <|*g*(Ω) *is d<sup>ξ</sup>-self-closed and d<sup>ξ</sup>* (*gw*, *Fw*) > 0 *for w* ∈ Ω *such that*

$$\limsup_{t \to d_{\xi}(gw, Fw)} \psi(t) < \frac{d_{\xi}(gw, Fw)}{\xi(Fw, gw)} \quad \text{or} \quad \limsup_{t \to d_{\xi}(gw, Fw)} \psi(t) < \frac{d_{\xi}(gw, Fw)}{\xi(gw, Fw)};$$

*or alternatively,*

(*iii*′) *d<sup>ξ</sup> is continuous,* (*F*, *g*) *is* <*-compatible, and g and F are* <*-continuous;*

*then F and g have a coincidence point.*

**Proof.** For *gu*<*gv*, by (1) and (*η*1), it is easy to show that

$$d_{\xi}(Fu, Fv) \leqslant \psi(M_{F,g}(u, v)), \text{ for } M_{F,g}(u, v) \neq 0. \tag{2}$$

Considering *F*(Ω) ⊆ (∆ ∩ *g*(Ω)), we deduce that *F*(Ω) ⊆ *g*(Ω). Now, we define two sequences {*un*} and {*vn*} by *u<sup>n</sup>* = *Fv<sup>n</sup>* = *gvn*+1. Since *gv*0<*Fv*0 and < is (*F*, *g*)-closed, it follows that

$$
gv_0 < Fv_0 \Rightarrow gv_0 < gv_1 \Rightarrow Fv_0 < Fv_1 \Rightarrow gv_1 < gv_2. \tag{3}
$$

Combining (3) with the fact that < is (*F*, *g*)-closed, we have

$$
gv_1 < gv_2 \Rightarrow Fv_1 < Fv_2 \Rightarrow gv_2 < gv_3. \tag{4}
$$

Repeating the above process, we can find

$$Fv_n < Fv_{n+1} \tag{5}$$

and

$$
gv_n < gv_{n+1}. \tag{6}
$$

Since *gv*0<*Fv*1 and < is (*F*, *g*)-closed, we obtain

$$
gv_0 < Fv_1 \Rightarrow gv_0 < gv_2 \Rightarrow Fv_0 < Fv_2 \Rightarrow gv_1 < gv_3. \tag{7}
$$

Taking (7), (*i*) and the (*F*, *g*)-closedness of < into account, we find

$$
gv_1 < gv_3 \Rightarrow Fv_1 < Fv_3 \Rightarrow gv_2 < gv_4. \tag{8}
$$

Repeating the above process, it follows that

$$Fv_n < Fv_{n+2} \tag{9}$$

and

$$
gv_n < gv_{n+2}. \tag{10}
$$

If there exists *n*0 ∈ N such that *un*<sup>0</sup> = *un*<sup>0</sup>+1, that is, *gvn*<sup>0</sup>+1 = *Fvn*<sup>0</sup>+1, then *vn*<sup>0</sup>+1 is a coincidence point of *F* and *g*, and the proof is complete.

Now, suppose that *u<sup>n</sup>* ≠ *un*+1 for all *n* ∈ N. Letting *u* = *vn*, *v* = *vn*+1 in (2) and using (6), we have

$$\begin{split}
d_{\xi}(u_n, u_{n+1}) &= d_{\xi}(Fv_n, Fv_{n+1}) \leqslant \psi(M_{F,g}(v_n, v_{n+1})) \\
&= \psi\Big(\max\Big\{ d_{\xi}(gv_n, gv_{n+1}),\, d_{\xi}(gv_n, Fv_n),\, d_{\xi}(Fv_{n+1}, gv_{n+1}), \\
&\qquad \frac{d_{\xi}(gv_{n+1}, Fv_{n+1})(1 + d_{\xi}(gv_n, Fv_n))}{1 + d_{\xi}(gv_n, gv_{n+1})},\, \frac{d_{\xi}(gv_n, Fv_n)(1 + d_{\xi}(gv_{n+1}, Fv_{n+1}))}{1 + d_{\xi}(gv_n, gv_{n+1})} \Big\}\Big) \\
&= \psi\Big(\max\Big\{ d_{\xi}(u_{n-1}, u_n),\, d_{\xi}(u_{n-1}, u_n),\, d_{\xi}(u_{n+1}, u_n), \\
&\qquad \frac{d_{\xi}(u_n, u_{n+1})(1 + d_{\xi}(u_{n-1}, u_n))}{1 + d_{\xi}(u_{n-1}, u_n)},\, \frac{d_{\xi}(u_{n-1}, u_n)(1 + d_{\xi}(u_n, u_{n+1}))}{1 + d_{\xi}(u_{n-1}, u_n)} \Big\}\Big) \\
&= \psi(\max\{ d_{\xi}(u_{n-1}, u_n),\, d_{\xi}(u_n, u_{n+1}) \}). \tag{11}
\end{split}$$

If

$$\max\{ d_{\xi}(u_{n-1}, u_n),\, d_{\xi}(u_n, u_{n+1}) \} = d_{\xi}(u_n, u_{n+1}),$$

by (11) and Remark 2, we gain

$$d_{\xi}(u_n, u_{n+1}) \leqslant \psi(d_{\xi}(u_n, u_{n+1})) < d_{\xi}(u_n, u_{n+1}).$$

This is a contradiction. Thus,

$$\max\{ d_{\xi}(u_{n-1}, u_n),\, d_{\xi}(u_n, u_{n+1}) \} = d_{\xi}(u_{n-1}, u_n).$$

In view of (11), we can deduce that

$$d_{\xi}(u_n, u_{n+1}) \leqslant \psi(d_{\xi}(u_{n-1}, u_n)). \tag{12}$$

By (12), we acquire

$$d_{\xi}(u_n, u_{n+1}) \leqslant \psi(d_{\xi}(u_{n-1}, u_n)) \leqslant \cdots \leqslant \psi^{n}(d_{\xi}(u_0, u_1)). \tag{13}$$

Letting *n* → ∞ on both sides of (13), we have

$$\lim_{n \to \infty} d_{\xi}(u_n, u_{n+1}) = 0. \tag{14}$$

Now, we show that *u<sup>n</sup>* ≠ *um* for all *n* ≠ *m* ∈ N. If there exist *n*0, *m*0 ∈ N such that *un*<sup>0</sup> = *um*<sup>0</sup> with *n*0 < *m*0, we have

$$\begin{split}
d_{\xi}(u_{n_0}, u_{n_0+1}) &= d_{\xi}(u_{m_0}, u_{n_0+1}) = d_{\xi}(Fv_{m_0}, Fv_{n_0+1}) \leqslant \psi(M_{F,g}(v_{m_0}, v_{n_0+1})) \\
&= \psi\Big(\max\Big\{ d_{\xi}(gv_{m_0}, gv_{n_0+1}),\, d_{\xi}(gv_{m_0}, Fv_{m_0}),\, d_{\xi}(Fv_{n_0+1}, gv_{n_0+1}), \\
&\qquad \frac{d_{\xi}(gv_{n_0+1}, Fv_{n_0+1})(1 + d_{\xi}(gv_{m_0}, Fv_{m_0}))}{1 + d_{\xi}(gv_{m_0}, gv_{n_0+1})},\, \frac{d_{\xi}(gv_{m_0}, Fv_{m_0})(1 + d_{\xi}(gv_{n_0+1}, Fv_{n_0+1}))}{1 + d_{\xi}(gv_{m_0}, gv_{n_0+1})} \Big\}\Big) \\
&= \psi\Big(\max\Big\{ d_{\xi}(u_{m_0-1}, u_{n_0}),\, d_{\xi}(u_{m_0-1}, u_{m_0}),\, d_{\xi}(u_{n_0+1}, u_{n_0}), \\
&\qquad \frac{d_{\xi}(u_{n_0}, u_{n_0+1})(1 + d_{\xi}(u_{m_0-1}, u_{m_0}))}{1 + d_{\xi}(u_{m_0-1}, u_{n_0})},\, \frac{d_{\xi}(u_{m_0-1}, u_{m_0})(1 + d_{\xi}(u_{n_0}, u_{n_0+1}))}{1 + d_{\xi}(u_{m_0-1}, u_{n_0})} \Big\}\Big) \\
&= \psi(\max\{ d_{\xi}(u_{m_0-1}, u_{m_0}),\, d_{\xi}(u_{n_0}, u_{n_0+1}) \}) < d_{\xi}(u_{n_0}, u_{n_0+1}),
\end{split}$$

which contradicts *d<sup>ξ</sup>* (*un*<sup>0</sup> , *un*<sup>0</sup>+1) > 0. Thus, *u<sup>n</sup>* ≠ *um* for all *n* ≠ *m* ∈ N. Letting *u* = *vn*, *v* = *vn*+2 in (2), by (10), we obtain

$$\begin{split}
d_{\xi}(u_n, u_{n+2}) &= d_{\xi}(Fv_n, Fv_{n+2}) \leqslant \psi(M_{F,g}(v_n, v_{n+2})) \\
&= \psi\Big(\max\Big\{ d_{\xi}(gv_n, gv_{n+2}),\, d_{\xi}(gv_n, Fv_n),\, d_{\xi}(Fv_{n+2}, gv_{n+2}), \\
&\qquad \frac{d_{\xi}(gv_{n+2}, Fv_{n+2})(1 + d_{\xi}(gv_n, Fv_n))}{1 + d_{\xi}(gv_n, gv_{n+2})},\, \frac{d_{\xi}(gv_n, Fv_n)(1 + d_{\xi}(gv_{n+2}, Fv_{n+2}))}{1 + d_{\xi}(gv_n, gv_{n+2})} \Big\}\Big) \\
&= \psi\Big(\max\Big\{ d_{\xi}(u_{n-1}, u_{n+1}),\, d_{\xi}(u_{n-1}, u_n),\, d_{\xi}(u_{n+2}, u_{n+1}), \\
&\qquad \frac{d_{\xi}(u_{n+2}, u_{n+1})(1 + d_{\xi}(u_{n-1}, u_n))}{1 + d_{\xi}(u_{n-1}, u_{n+1})},\, \frac{d_{\xi}(u_{n-1}, u_n)(1 + d_{\xi}(u_{n+2}, u_{n+1}))}{1 + d_{\xi}(u_{n-1}, u_{n+1})} \Big\}\Big) \\
&\leqslant \psi(A_n), \tag{15}
\end{split}$$

where

$$A_n = \max\Big\{ d_{\xi}(u_{n-1}, u_{n+1}),\ d_{\xi}(u_{n-1}, u_n),\ \frac{d_{\xi}(u_{n-1}, u_n)(1 + d_{\xi}(u_{n-1}, u_n))}{1 + d_{\xi}(u_{n-1}, u_{n+1})} \Big\}.$$

If *A<sup>n</sup>* = *d<sup>ξ</sup>* (*un*−1, *un*+1), by (15), we have

$$d_{\xi}(u_n, u_{n+2}) \leqslant \psi(d_{\xi}(u_{n-1}, u_{n+1})) \leqslant \cdots \leqslant \psi^{n}(d_{\xi}(u_0, u_2)). \tag{16}$$

If *A<sup>n</sup>* = *d<sup>ξ</sup>* (*un*−1, *un*), from (13) and (15), we gain

$$d_{\xi}(u_n, u_{n+2}) \leqslant \psi(d_{\xi}(u_{n-1}, u_n)) \leqslant \psi^{n-1}(d_{\xi}(u_0, u_1)). \tag{17}$$

$$\text{If } A_n = \frac{d_{\xi}(u_{n-1}, u_n)(1 + d_{\xi}(u_{n-1}, u_n))}{1 + d_{\xi}(u_{n-1}, u_{n+1})}, \text{ combining (13) and (15) with Remark 2, we acquire}$$

$$d\_{\xi}(u\_{n}, u\_{n+2}) \leqslant \psi(\frac{d\_{\xi}(u\_{n-1}, u\_{n})(1 + d\_{\xi}(u\_{n-1}, u\_{n}))}{1 + d\_{\xi}(u\_{n-1}, u\_{n+1})})$$

$$\quad < \frac{d\_{\xi}(u\_{n-1}, u\_{n})(1 + d\_{\xi}(u\_{n-1}, u\_{n}))}{1 + d\_{\xi}(u\_{n-1}, u\_{n+1})}$$

$$\quad < d\_{\xi}(u\_{n-1}, u\_{n})(1 + d\_{\xi}(u\_{n-1}, u\_{n})) $$

$$\quad \leqslant \psi^{n-1}(d\_{\xi}(u\_{0}, u\_{1}))(1 + \psi^{n-1}(d\_{\xi}(u\_{0}, u\_{1}))).\tag{18}$$

Letting *n* → ∞ in (16), (17) and (18), and using lim*n*→<sup>∞</sup> *ψ<sup>n</sup>*(*t*) = 0 for all *t* > 0, we find

$$\lim_{n \to \infty} d_{\xi}(u_n, u_{n+2}) = 0. \tag{19}$$

Now, we show that {*un*} is a Cauchy sequence. The next discussion can be divided into the following cases.

Case I: when *m* = *n* + 2*k* + 1 with *k* ⩾ 1. By (*dξ*3) and (13), for all *n* ∈ N, we have

$$\begin{split}
d_{\xi}(u_n, u_{n+2k+1}) &\leqslant \xi(u_n, u_{n+2k+1})[d_{\xi}(u_n, u_{n+1}) + d_{\xi}(u_{n+1}, u_{n+2}) + d_{\xi}(u_{n+2}, u_{n+2k+1})] \\
&= \xi(u_n, u_{n+2k+1})(d_n + d_{n+1}) + \xi(u_n, u_{n+2k+1})\, d_{\xi}(u_{n+2}, u_{n+2k+1}) \\
&\leqslant \xi(u_n, u_{n+2k+1})(d_n + d_{n+1}) + \xi(u_n, u_{n+2k+1})\,\xi(u_{n+2}, u_{n+2k+1})(d_{n+2} + d_{n+3}) \\
&\quad + \xi(u_n, u_{n+2k+1})\,\xi(u_{n+2}, u_{n+2k+1})\, d_{\xi}(u_{n+4}, u_{n+2k+1}) \\
&\leqslant \xi(u_n, u_{n+2k+1})(d_n + d_{n+1}) + \xi(u_n, u_{n+2k+1})\,\xi(u_{n+2}, u_{n+2k+1})(d_{n+2} + d_{n+3}) + \cdots \\
&\quad + \xi(u_n, u_{n+2k+1})\,\xi(u_{n+2}, u_{n+2k+1}) \cdots \xi(u_{n+2k-2}, u_{n+2k+1})(d_{n+2k-2} + d_{n+2k-1}) \\
&\quad + \xi(u_n, u_{n+2k+1})\,\xi(u_{n+2}, u_{n+2k+1}) \cdots \xi(u_{n+2k-2}, u_{n+2k+1})\, d_{\xi}(u_{n+2k}, u_{n+2k+1}) \\
&\leqslant \xi(u_n, u_{n+2k+1})(\psi^{n}(G_0) + \psi^{n+1}(G_0)) + \xi(u_n, u_{n+2k+1})\,\xi(u_{n+2}, u_{n+2k+1})(\psi^{n+2}(G_0) + \psi^{n+3}(G_0)) \\
&\quad + \cdots + \xi(u_n, u_{n+2k+1})\,\xi(u_{n+2}, u_{n+2k+1}) \cdots \xi(u_{n+2k-2}, u_{n+2k+1})(\psi^{n+2k-2}(G_0) + \psi^{n+2k-1}(G_0)) \\
&\quad + \xi(u_n, u_{n+2k+1})\,\xi(u_{n+2}, u_{n+2k+1}) \cdots \xi(u_{n+2k-2}, u_{n+2k+1})\,\psi^{n+2k}(G_0) \\
&\leqslant \sum_{i=n}^{n+2k} \psi^{i}(G_0) \prod_{j=0}^{i} \xi(u_j, u_{n+2k+1}), \tag{20}
\end{split}$$

where *d<sup>n</sup>* = *d<sup>ξ</sup>* (*un*, *un*+1) and *ψ<sup>n</sup>*(*G*0) = *ψ<sup>n</sup>*(*d<sup>ξ</sup>* (*u*0, *u*1)), for all *n* ∈ N. Let

$$S_n = \sum_{i=0}^{n} \psi^{i}(G_0) \prod_{j=0}^{i} \xi(u_j, u_{n+2k+1}).$$

By (20), we obtain

$$d_{\xi}(u_n, u_{n+2k+1}) \leqslant S_{n+2k} - S_{n-1}. \tag{21}$$

Suppose that $a_n = \psi^{n}(G_0) \prod_{j=0}^{n} \xi(u_j, u_{n+2k+1})$. We have

$$\frac{a_{n+1}}{a_n} = \frac{\psi^{n+1}(G_0) \prod_{j=0}^{n+1} \xi(u_j, u_{n+2k+1})}{\psi^{n}(G_0) \prod_{j=0}^{n} \xi(u_j, u_{n+2k+1})} = \frac{\psi^{n+1}(G_0)}{\psi^{n}(G_0)}\, \xi(u_{n+1}, u_{n+2k+1}).$$

By (*ii*) and the ratio test, we deduce that the series $\sum_{i=0}^{\infty} \psi^{i}(G_0) \prod_{j=0}^{i} \xi(u_j, u_{n+2k+1})$ converges. Letting *n* → ∞ in (21), we have

$$d_{\xi}(u_n, u_m) \to 0, \ n \to \infty.$$
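The ratio-test step can also be seen numerically. The sketch below uses illustrative values, not the paper's sequence: with *ψ*(*t*) = 2*t*/9 the consecutive-term ratio *ψ*<sup>*n*+1</sup>(*G*0)/*ψ<sup>n</sup>*(*G*0) equals 2/9, and an assumed uniform bound *ξ*(*u<sup>j</sup>*, *u<sup>m</sup>*) = 4 gives a geometric ratio 8/9 < 1, so the partial sums stabilize.

```python
# Numerical sketch of the ratio-test step (illustrative values, not the
# paper's data): with psi(t) = 2t/9 the ratio psi^{n+1}(G0)/psi^n(G0) is 2/9,
# so if every coefficient xi(u_j, u_m) is bounded by 4 (an assumption here),
# the series sum_i psi^i(G0) * prod_{j<=i} xi(u_j, u_m) converges.

G0 = 1.0
xi = 4.0                        # assumed uniform bound; (2/9) * 4 = 8/9 < 1

term = G0 * xi                  # i = 0 term: psi^0(G0) * xi
s = term
partial_sums = []
for i in range(1, 200):
    term *= (2.0 / 9.0) * xi    # next term = previous term * geometric ratio
    s += term
    partial_sums.append(s)

# consecutive partial sums become indistinguishable: geometric ratio 8/9
assert abs(partial_sums[-1] - partial_sums[-2]) < 1e-6
print("limit of the partial sums ~", partial_sums[-1])
```
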

Case II: when *m* = *n* + 2*k* with *k* ⩾ 1. By (*dξ*3) and (13), for all *n* ∈ N, we obtain

$$\begin{split}
d_{\xi}(u_n, u_{n+2k}) &\leqslant \xi(u_n, u_{n+2k})[d_{\xi}(u_n, u_{n+2}) + d_{\xi}(u_{n+2}, u_{n+3}) + d_{\xi}(u_{n+3}, u_{n+2k})] \\
&= \xi(u_n, u_{n+2k})[d_{\xi}(u_n, u_{n+2}) + d_{n+2}] + \xi(u_n, u_{n+2k})\, d_{\xi}(u_{n+3}, u_{n+2k}) \\
&\leqslant \xi(u_n, u_{n+2k})(d_{\xi}(u_n, u_{n+2}) + d_{n+2}) + \xi(u_n, u_{n+2k})\,\xi(u_{n+3}, u_{n+2k})(d_{n+3} + d_{n+4}) \\
&\quad + \xi(u_n, u_{n+2k})\,\xi(u_{n+3}, u_{n+2k})\, d_{\xi}(u_{n+5}, u_{n+2k}) \\
&\leqslant \xi(u_n, u_{n+2k})(d_{\xi}(u_n, u_{n+2}) + d_{n+2}) + \xi(u_n, u_{n+2k})\,\xi(u_{n+3}, u_{n+2k})(d_{n+3} + d_{n+4}) + \cdots \\
&\quad + \xi(u_n, u_{n+2k})\,\xi(u_{n+3}, u_{n+2k}) \cdots \xi(u_{n+2k-3}, u_{n+2k})(d_{n+2k-3} + d_{n+2k-2} + d_{n+2k-1}) \\
&< \xi(u_n, u_{n+2k})\, d_{\xi}(u_n, u_{n+2}) + \sum_{i=n+2}^{n+2k-1} \psi^{i}(G_0) \prod_{j=0}^{i} \xi(u_j, u_{n+2k}), \tag{22}
\end{split}$$

where *ψ<sup>n</sup>*(*G*0) = *ψ<sup>n</sup>*(*d<sup>ξ</sup>* (*u*0, *u*1)) and *d<sup>n</sup>* = *d<sup>ξ</sup>* (*un*, *un*+1). For all *n* ∈ N, assume that

$$R_n = \sum_{i=0}^{n} \psi^{i}(G_0) \prod_{j=0}^{i} \xi(u_j, u_{n+2k}).$$

According to (22), we find

$$d_{\xi}(u_n, u_{n+2k}) < \xi(u_n, u_{n+2k})\, d_{\xi}(u_n, u_{n+2}) + R_{n+2k-1} - R_{n+1}. \tag{23}$$

Now, let $w_n = \psi^{n}(G_0) \prod_{j=0}^{n} \xi(u_j, u_{n+2k})$. It follows that

$$\frac{w\_{n+1}}{w\_n} = \frac{\psi^{n+1}(\mathcal{G}\_0) \prod\_{j=0}^{n+1} \tilde{\xi}(u\_j, u\_{n+2k})}{\psi^n(\mathcal{G}\_0) \prod\_{j=0}^n \tilde{\xi}(u\_j, u\_{n+2k})} = \frac{\psi^{n+1}(\mathcal{G}\_0)}{\psi^n(\mathcal{G}\_0)} \tilde{\xi}(u\_{n+1}, u\_{n+2k}).$$

Arguing as in Case I, we deduce that the series $\sum_{i=0}^{\infty} \psi^{i}(G_0) \prod_{j=0}^{i} \xi(u_j, u_{n+2k})$ converges. Letting *n* → ∞ in (23) and using (19), we have

$$d_{\xi}(u_n, u_m) \to 0, \ n \to \infty.$$

In both cases, lim *<sup>n</sup>*,*m*→<sup>∞</sup> *d<sup>ξ</sup>* (*un*, *um*) = 0.

Thus, {*un*} is a Cauchy sequence.

Now, we show that *F* and *g* have a coincidence point. We discuss the following cases.

Case I: (*iii*) holds.

Since (∆, *d<sup>ξ</sup>* ) is <-complete, *F*(Ω) ⊆ ∆, *u<sup>n</sup>* = *Fv<sup>n</sup>* = *gvn*+1 and (6) holds, there exists *u* ∈ ∆ such that

$$\lim_{n \to \infty} d_{\xi}(gv_n, u) = 0. \tag{24}$$

Since ∆ ⊆ *g*(Ω), there exists *v* ∈ Ω such that *u* = *gv*. That is,

$$\lim_{n \to \infty} d_{\xi}(gv_n, gv) = 0. \tag{25}$$

By *u<sup>n</sup>* = *Fv<sup>n</sup>* = *gvn*+1, we have

$$\lim_{n \to \infty} d_{\xi}(Fv_n, gv) = 0. \tag{26}$$

If there exists an infinite subsequence {*un<sup>k</sup>* } of {*un*} such that *un<sup>k</sup>* = *Fv* or *un<sup>k</sup>* = *gv*, this leads to a contradiction with *u<sup>n</sup>* ≠ *um* for all *n* ≠ *m* ∈ N. Thus, we assume that *u<sup>n</sup>* ≠ *Fv* and *u<sup>n</sup>* ≠ *gv* for all *n* ∈ N.

Suppose that *F* is (*g*, <)-continuous. Taking (6) and (25) into account, we obtain

$$\lim\_{n \to \infty} d\_{\tilde{\xi}}(Fv\_n, Fv) = 0. \tag{27}$$

By (*dξ*3), it follows that

$$\begin{split} d_{\xi}(Fv, gv) &\leqslant \xi(Fv, gv)[d_{\xi}(Fv, Fv_n) + d_{\xi}(Fv_n, Fv_{n+1}) + d_{\xi}(Fv_{n+1}, gv)] \\ &= \xi(Fv, gv)[d_{\xi}(Fv, Fv_n) + d_{\xi}(u_n, u_{n+1}) + d_{\xi}(Fv_{n+1}, gv)]. \end{split} \tag{28}$$

Letting *n* → ∞ in (28), keeping (14), (26) and (27) in mind, we deduce that

$$d\_{\xi}(Fv, gv) = 0.$$

Thus, *v* is a coincidence point of *F* and *g*.

Assume that *F* and *g* are continuous. By Lemma 1, it is not difficult to see that there exists *E* ⊂ Ω such that *g*(*E*) = *g*(Ω) and *g* : *E* → *E* is one-to-one. Define a function *T* : *g*(*E*) → *g*(*E*) by *Tge* = *Fe*, where *e* ∈ *E*. Clearly, *T* is well defined. Since *F* and *g* are continuous, *T* is continuous as well. Without loss of generality, we choose {*vn*} ⊆ *E* and *v* ∈ *E*. By (25), we obtain

$$\begin{aligned} \lim_{n \to \infty} d_{\xi}(Fv_n, Fv) &= \lim_{n \to \infty} d_{\xi}(Tgv_n, Fv) \\ &= \lim_{n \to \infty} d_{\xi}(Tgv_n, Tgv) \\ &= 0, \end{aligned}$$

that is, (27) holds. Letting *n* → ∞ in (28), keeping (14), (26) and (27) in mind, we deduce that

$$d\_{\tilde{\xi}}(Fv, gv) = 0.$$

Then, *v* is a coincidence point of *F* and *g*.

If <|*g*(Ω) is *d<sup>ξ</sup>* -self-closed, from (6) and (25), there exists a subsequence {*gvn<sup>k</sup>* } of {*gvn*} satisfying

$$
gv_{n_k} < gv \text{ or } gv < gv_{n_k}. \tag{29}
$$

Assume that *gvn<sup>k</sup>*<*gv*. Letting *u* = *vn<sup>k</sup>* in (2) and keeping (29) in mind, we have

$$\begin{split}
d_{\xi}(Fv_{n_k}, Fv) &\leqslant \psi(M_{F,g}(v_{n_k}, v)) \\
&= \psi\Big(\max\Big\{ d_{\xi}(gv_{n_k}, gv),\, d_{\xi}(gv_{n_k}, Fv_{n_k}),\, d_{\xi}(Fv, gv), \\
&\qquad \frac{d_{\xi}(gv, Fv)(1 + d_{\xi}(gv_{n_k}, Fv_{n_k}))}{1 + d_{\xi}(gv_{n_k}, gv)},\, \frac{d_{\xi}(gv_{n_k}, Fv_{n_k})(1 + d_{\xi}(gv, Fv))}{1 + d_{\xi}(gv_{n_k}, gv)} \Big\}\Big) \\
&= \psi\Big(\max\Big\{ d_{\xi}(gv_{n_k}, gv),\, d_{\xi}(u_{n_k-1}, u_{n_k}),\, d_{\xi}(Fv, gv), \\
&\qquad \frac{d_{\xi}(gv, Fv)(1 + d_{\xi}(u_{n_k-1}, u_{n_k}))}{1 + d_{\xi}(gv_{n_k}, gv)},\, \frac{d_{\xi}(u_{n_k-1}, u_{n_k})(1 + d_{\xi}(gv, Fv))}{1 + d_{\xi}(gv_{n_k}, gv)} \Big\}\Big). \tag{30}
\end{split}$$

Taking the upper limit on both sides of (30), we obtain

$$\limsup_{k \to \infty} d_{\xi}(Fv_{n_k}, Fv) \leqslant \limsup_{t \to d_{\xi}(gv, Fv)} \psi(t). \tag{31}$$

Taking the upper limit on both sides of (28) and using (14), (25) and (31), we have

$$d\_{\xi}(Fv, gv) \leqslant \xi(Fv, gv) \limsup\_{t \to d\_{\xi}(gv, Fv)} \psi(t).$$

This leads to a contradiction with

$$\limsup\_{t \to d\_{\xi}(gv,Fv)} \psi(t) < \frac{d\_{\xi}(gv,Fv)}{\xi(Fv, gv)}.$$

Thus, *d<sup>ξ</sup>* (*gv*, *Fv*) = 0.

If *gv*<*gvn<sup>k</sup>*, by a similar discussion, keeping

$$\limsup\_{t \to d\_{\xi}(gw, Fw)} \psi(t) < \frac{d\_{\xi}(gw, Fw)}{\xi(gw, Fw)}$$

in mind, we can also find that *d<sup>ξ</sup>* (*gv*, *Fv*) = 0.

Case II: (*iii*′) holds.

By *F*(Ω) ⊆ (∆ ∩ *g*(Ω)), the <-completeness of ∆ and the construction of the sequence {*un*}, there exists *u* ∈ (∆ ∩ *g*(Ω)) such that

$$\lim_{n \to \infty} Fv_n = u \tag{32}$$

and

$$\lim_{n \to \infty} gv_n = u. \tag{33}$$

If *F* and *g* are <-continuous, we obtain

$$\lim\_{n \to \infty} gFv\_n = gu \tag{34}$$

and

$$\lim\_{n \to \infty} Fgv\_n = Fu. \tag{35}$$

Considering (32), (33) and (*F*, *g*) is <-compatible, we gain

$$\lim_{n \to \infty} d_{\xi}(Fgv_n, gFv_n) = 0. \tag{36}$$

Clearly, by (34)–(36) and the continuity of *d<sup>ξ</sup>*, we have

$$d\_{\xi}(Fu, gu) = 0.$$

The proof is complete.

**Example 1.** *Let* Ω = [0, 1] *with u*<*v if and only if u*, *v* ∈ [1/32, 1/16]*, and let d<sup>ξ</sup>* (*u*, *v*) = (*u* − *v*)<sup>2</sup>/2 *with ξ*(*u*, *v*) = *u* + *v* + 4 *for all u*, *v* ∈ Ω*. Suppose that* ∆ = [0, 5/32]*. Clearly,* (Ω, *d<sup>ξ</sup>* ) *is an extended rectangular b-metric space and* ∆ *is* <*-complete. Indeed, since d<sup>ξ</sup> is generated from the standard metric, every* <*-preserving Cauchy sequence* {*un*} *in* Ω *converges to a point in* Ω*. Define the mappings F*, *g* : Ω → Ω *by*

$$Fu = \begin{cases} \frac{u}{2}, & \text{if } u \in [0, \frac{1}{4}], \\ \frac{1}{8}, & \text{otherwise}, \end{cases}$$

*and*

$$gu = \begin{cases} \frac{u}{2}, & \text{if } u \in [0, \frac{1}{2}], \\ \frac{1}{4}, & \text{otherwise}. \end{cases}$$

*Clearly, F*(Ω) ⊆ ∆ ⊆ *g*(Ω) *and* < *is* (*F*, *g*)*-closed. Indeed, for all gu*<*gv, we obtain u*, *v* ∈ [1/16, 1/8]*, and then Fu*, *Fv* ∈ [1/32, 1/16]*, that is, Fu*<*Fv. Suppose that a sequence* {*un*} ⊆ Ω *and a point u* ∈ Ω *are such that* lim*n*→<sup>∞</sup> *u<sup>n</sup>* = *u. For the mapping F, if u<sup>n</sup>* ∈ [0, 1/4]*, by the definitions of d<sup>ξ</sup> and F, we have u* ∈ [0, 1/4]*, Fu<sup>n</sup>* = *un*/2 *and Fu* = *u*/2*; then* lim*n*→<sup>∞</sup> *Fu<sup>n</sup>* = *Fu. If u<sup>n</sup>* ∈ (1/4, 1]*, we have u* ∈ [1/4, 1] *and Fu<sup>n</sup>* = *Fu* = 1/8*; then* lim*n*→<sup>∞</sup> *Fu<sup>n</sup>* = *Fu. Thus, F is continuous. A similar discussion shows that g is continuous as well. In addition, there exists v*0 = 1/32 *such that gv*0<*Fv*0 *and gv*0<*Fv*1*, where v*1 *is such that gv*1 = *Fv*0*. Take η*(*u*, *v*) = (1/2)*v* − *u and*

$$\psi(t) = \begin{cases} \frac{2}{9}t, & \text{if } t \in [0, 1], \\ \frac{260}{1161}, & \text{otherwise}. \end{cases}$$

*For all t* ∈ (0, *d<sup>ξ</sup>* (*u*0, *u*1)] *and for all p* ∈ N*, we have*

$$\begin{aligned} \limsup\_{n \to \infty} \frac{\psi^{n+1}(t)}{\psi^n(t)} \tilde{\xi}(u\_{n+1}, u\_p) &= \limsup\_{n \to \infty} \frac{2}{9} (4 + u\_{n+1} + u\_p) \\ &= \limsup\_{n \to \infty} \frac{2}{9} (4 + \frac{v\_0}{2} + \frac{v\_0}{2}) \\ &= \frac{2}{9} (4 + \frac{1}{32}) \\ &< 1. \end{aligned}$$

*Now, we show that F and g satisfy condition* (1)*. Indeed, for all gu*<*gv,*

$$\begin{split} \frac{1}{2}\psi(M_{F,g}(u,v)) - d_{\xi}(Fu, Fv) &\geqslant \frac{1}{9} \max\Big\{ d_{\xi}(gu, Fu),\, d_{\xi}(gv, Fv) \Big\} - \frac{(\frac{u}{2} - \frac{v}{2})^2}{2} \\ &= \frac{1}{9} \max\Big\{ \frac{9u^2}{8},\, \frac{9v^2}{8} \Big\} - \frac{(u-v)^2}{8} \\ &\geqslant \Big(\frac{9}{9} - 1\Big) d_{\xi}(Fu, Fv) \\ &\geqslant 0. \end{split}$$

*Thus, by Theorem 3, there exists v* = 1/32 *such that F*(1/32) = *g*(1/32)*.*
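The coincidence point claimed in Example 1 can be checked directly; the snippet below encodes the mappings *F* and *g* from the example (a sanity check only, not part of the example itself):

```python
# Sanity check of Example 1: with the mappings F and g defined above,
# F(1/32) = g(1/32), so 1/32 is a coincidence point. The check also shows
# the coincidence point is not unique (cf. the remark after Example 2).

def F(u: float) -> float:
    return u / 2.0 if u <= 0.25 else 0.125

def g(u: float) -> float:
    return u / 2.0 if u <= 0.5 else 0.25

assert F(1.0 / 32.0) == g(1.0 / 32.0) == 1.0 / 64.0

# every u in [0, 1/4] satisfies F(u) = g(u), so coincidence points abound
coincidences = [u / 100.0 for u in range(26) if F(u / 100.0) == g(u / 100.0)]
print(len(coincidences), "coincidence points found on the sampled grid")
```
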

**Example 2.** *Let* Ω = [0, 3)*, u*<*v if and only if* (*u*, *v*) ∈ [0, 1/8] × [0, 1/8]*, and d<sup>ξ</sup>* (*u*, *v*) = (*u* − *v*)<sup>2</sup> *with ξ*(*u*, *v*) = *u* + *v* + 4*, for all u*, *v* ∈ Ω*. Define the mappings F*, *g* : Ω → Ω *by*

$$Fu = \begin{cases} \frac{u}{4}, & \text{if } u \in [0, \frac{1}{2}], \\ 2, & \text{if } u \in (\frac{1}{2}, 3), \end{cases}$$

*and*

$$gu = \begin{cases} u, & \text{if } u \in [0, \frac{1}{2}], \\\\ 2, & \text{if } u \in (\frac{1}{2}, 3). \end{cases}$$

*Clearly, F*(Ω) ⊆ ∆ ⊆ *g*(Ω)*,* < *is* (*F*, *g*)*-closed and F is* (*g*, <)*-continuous. Indeed, for all gu*<*gv, we obtain u*, *v* ∈ [0, 1/8]*, and then Fu*, *Fv* ∈ [0, 1/32]*, that is, Fu*<*Fv. For any sequence* {*un*} ⊆ Ω *such that* {*gun*} *is* <*-preserving with* lim*n*→<sup>∞</sup> *gu<sup>n</sup>* = *gu, we have* {*un*} ⊆ [0, 1/32] *and u* ∈ [0, 1/32]*, so* lim*n*→<sup>∞</sup> *Fu<sup>n</sup>* = *Fu. Both F and g are discontinuous at u* = 1/2*, and* ∆ *is* <*-complete since d<sup>ξ</sup> is generated from the standard metric, where* ∆ = [0, 1/2] ∪ {2}*. In addition, there exists v*0 = 1/8 *such that gv*0<*Fv*0 *and gv*0<*Fv*1*, where v*1 *is such that gv*1 = *Fv*0*. Take η*(*u*, *v*) = (1/2)*v* − *u and ψ*(*t*) = (4/17)*t for all t* ∈ [0, ∞)*. For every t* ∈ (0, *d<sup>ξ</sup>* (*u*0, *u*1)] *and p* ∈ N*, we have*

$$\begin{aligned} \limsup\_{n \to \infty} \frac{\psi^{n+1}(t)}{\psi^n(t)} \xi(u\_{n+1}, u\_p) &= \limsup\_{n \to \infty} \frac{4}{17} (4 + u\_{n+1} + u\_p) \\ &= \limsup\_{n \to \infty} \frac{4}{17} (4 + \frac{v\_0}{4^{n+1}} + \frac{v\_0}{4^{p+1}}) \\ &= \frac{4}{17} (4 + \frac{1}{8}) \\ &< 1. \end{aligned}$$

*Now, we show that condition* (1) *for F and g holds. Indeed, for all gu*<*gv,*

$$\begin{aligned} \eta(d_{\xi}(Fu, Fv), \psi(M_{F,g}(u, v))) &= \frac{1}{2}\psi(M_{F,g}(u, v)) - d_{\xi}(Fu, Fv) \\ &\geqslant \frac{2}{17} d_{\xi}(gu, gv) - \Big(\frac{u}{4} - \frac{v}{4}\Big)^2 \\ &= \Big(\frac{2}{17} - \frac{1}{16}\Big) d_{\xi}(gu, gv) \\ &\geqslant 0. \end{aligned}$$

*So, by Theorem 3, there exists v* = 0 *such that F*0 = *g*0*. Further, we claim that the common fixed point theorems in [20,21] are not valid in proving the existence of common fixed points of F and g. Indeed, for u* = 0*, v* = 2*, d<sup>ξ</sup>* (*Fu*, *Fv*) > *k*1*d<sup>ξ</sup>* (*gu*, *gv*) *and d<sup>ξ</sup>* (*Fu*, *Fv*) > *k*2[*d<sup>ξ</sup>* (*gu*, *Fu*) + *d<sup>ξ</sup>* (*gv*, *Fv*)]*, where k*1 ∈ (0, 1) *and k*2 ∈ (0, 1/2)*.*
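Example 2 also lends itself to a direct numerical sanity check (not a proof): the snippet below encodes *F*, *g*, *d<sup>ξ</sup>*, *ψ* and *η* from the example, verifies condition (1) on a grid of the region where *gu*<*gv* holds, checks the ratio bound 4/17 · (4 + 1/8) < 1, and iterates toward the common fixed point 0.

```python
# Numerical check of Example 2 (a sanity check, not a proof): on the region
# where gu < gv holds, i.e. u, v in [0, 1/8], condition (1) with
# eta(a, b) = b/2 - a and psi(t) = 4t/17 is verified on a grid of points.

d   = lambda u, v: (u - v) ** 2               # d_xi(u, v) = (u - v)^2
F   = lambda u: u / 4.0 if u <= 0.5 else 2.0  # F from Example 2
g   = lambda u: u if u <= 0.5 else 2.0        # g from Example 2
psi = lambda t: 4.0 * t / 17.0
eta = lambda a, b: b / 2.0 - a

def M(u, v):
    """The quantity M_{F,g}(u, v) from Theorem 3."""
    return max(d(g(u), g(v)), d(g(u), F(u)), d(F(v), g(v)),
               d(g(v), F(v)) * (1 + d(g(u), F(u))) / (1 + d(g(u), g(v))),
               d(g(u), F(u)) * (1 + d(g(v), F(v))) / (1 + d(g(u), g(v))))

grid = [i / 800.0 for i in range(101)]        # sample points of [0, 1/8]
assert all(eta(d(F(u), F(v)), psi(M(u, v))) >= 0 for u in grid for v in grid)

# the ratio bound of condition (ii): 4/17 * (4 + 1/8) < 1
assert 4.0 / 17.0 * (4.0 + 1.0 / 8.0) < 1.0

# the iterates v_{n+1} = F(v_n) converge to the common fixed point 0
v = 1.0 / 8.0
for _ in range(60):
    v = F(v)
assert v < 1e-30 and F(0.0) == g(0.0) == 0.0
print("Example 2 passes the sampled checks")
```
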

According to Examples 1 and 2, the coincidence point of *F* and *g* is not unique. Thus, Theorem 3 shows only the existence of a coincidence point of *F* and *g*. Now, we add some conditions to ensure that the point of coincidence of *F* and *g* is unique.

**Theorem 4.** *In addition to the assumptions of Theorem 3, suppose the following condition:* (*iv*) *gu*<*gv or gv*<*gu for all u, v* ∈ *C*(*F*, *g*)*, where C*(*F*, *g*) = {*u* ∈ Ω : *Fu* = *gu*}*. Then the point of coincidence of F and g is unique.*

**Proof.** Assume that there exist *u*, *v* ∈ *C*(*F*, *g*) with *d<sup>ξ</sup>* (*Fu*, *Fv*) > 0. If *gu*<*gv*, by (2), we have

$$\begin{split} d_{\xi}(Fu, Fv) &\leqslant \psi(M_{F,g}(u, v)) \\ &= \psi\Big(\max\Big\{ d_{\xi}(gu, gv),\, d_{\xi}(gu, Fu),\, d_{\xi}(Fv, gv), \\ &\qquad \frac{d_{\xi}(gv, Fv)(1 + d_{\xi}(gu, Fu))}{1 + d_{\xi}(gu, gv)},\, \frac{d_{\xi}(gu, Fu)(1 + d_{\xi}(gv, Fv))}{1 + d_{\xi}(gu, gv)} \Big\}\Big) \\ &= \psi(\max\{ d_{\xi}(Fu, Fv), 0, 0, 0, 0 \}) \\ &= \psi(d_{\xi}(Fu, Fv)) \\ &< d_{\xi}(Fu, Fv), \end{split}$$

which leads to a contradiction with *d<sup>ξ</sup>* (*Fu*, *Fv*) > 0. Thus, *d<sup>ξ</sup>* (*Fu*, *Fv*) = 0. If *gv*<*gu*, by a similar discussion, we have *d<sup>ξ</sup>* (*Fu*, *Fv*) = 0. The proof is complete.

Theorem 4 shows that the point of coincidence of *F* and *g* is unique. Now, we add a condition to show that *F* and *g* have a unique common fixed point.

**Theorem 5.** *In addition to the assumptions of Theorem 4, if* (*F*, *g*) *is weakly compatible, then F and g have a unique common fixed point.*

**Proof.** By Theorem 3, there exists *v* ∈ Ω such that *Fv* = *gv*. Set *u* = *Fv* = *gv*. Since (*F*, *g*) is weakly compatible, we have *Fu* = *Fgv* = *gFv* = *gu*. By Theorem 4, we have *Fu* = *gu* = *Fv* = *gv* = *u*. Thus, *u* is a common fixed point of *F* and *g*. Suppose that there exists *s* such that *s* = *Fs* = *gs* and *s* ≠ *u*. Since *s* ≠ *u*, we have *Fs* ≠ *Fu*, which contradicts Theorem 4. Thus, *s* = *u*. The proof is complete.

**Remark 3.** (*i*) *In the proof of Theorem 3, we use only property* (*η*1) *of the function η.*

(*ii*) *In the proofs of Theorems 3, 4 and 5, we mainly use* (2) *rather than* (1)*. Thus, if we replace* (1) *with*

$$d_{\xi}(Fu, Fv) \leqslant \psi(M_{F,g}(u, v)), \text{ for all } gu < gv,$$

*in Theorem 3, these results still hold.*

(*iii*) *We observe that*

$$\begin{aligned}
\frac{d_{\xi}(gv_n, gv_{n+1})\, d_{\xi}(gv_{n+1}, Fv_{n+1})}{1 + d_{\xi}(gv_n, Fv_n)} &\leqslant d_{\xi}(u_n, u_{n+1}); \\
\frac{d_{\xi}(gv_n, gv_{n+2})\, d_{\xi}(gv_{n+2}, Fv_{n+2})}{1 + d_{\xi}(gv_n, Fv_n)} &= \frac{d_{\xi}(u_{n-1}, u_{n+1})\, d_{\xi}(u_{n+1}, u_{n+2})}{1 + d_{\xi}(u_{n-1}, u_n)}; \\
\frac{d_{\xi}(gv_{n_k}, gv)\, d_{\xi}(gv, Fv)}{1 + d_{\xi}(gv_{n_k}, Fv_{n_k})} &= \frac{d_{\xi}(u_{n_k-1}, gv)\, d_{\xi}(gv, Fv)}{1 + d_{\xi}(u_{n_k-1}, u_{n_k})}; \\
\frac{d_{\xi}(gv_n, Fv_n)\, d_{\xi}(gv_{n+1}, Fv_{n+1})}{1 + d_{\xi}(Fv_n, Fv_{n+1})} &\leqslant d_{\xi}(u_{n-1}, u_n); \\
\frac{d_{\xi}(gv_n, Fv_n)\, d_{\xi}(gv_{n+2}, Fv_{n+2})}{1 + d_{\xi}(Fv_n, Fv_{n+2})} &= \frac{d_{\xi}(u_{n-1}, u_n)\, d_{\xi}(u_{n+1}, u_{n+2})}{1 + d_{\xi}(u_n, u_{n+2})}; \\
\frac{d_{\xi}(gv_n, Fv_n)\, d_{\xi}(gv, Fv)}{1 + d_{\xi}(Fv_n, Fv)} &= \frac{d_{\xi}(u_{n-1}, u_n)\, d_{\xi}(gv, Fv)}{1 + d_{\xi}(u_n, Fv)}.
\end{aligned}$$

*Thus, if we add* $\frac{d_{\xi}(gu,gv)\, d_{\xi}(gv,Fv)}{1+d_{\xi}(gu,Fu)}$ *and* $\frac{d_{\xi}(gu,Fu)\, d_{\xi}(gv,Fv)}{1+d_{\xi}(Fu,Fv)}$ *to* $M_{F,g}(u, v)$*, the above results still hold.*

#### **3. Corollaries**

**Corollary 1.** *Let* $(\Omega, d_{\xi})$ *be an extended rectangular b-metric space with a binary relation* <*, and let F be a self-mapping on* Ω *satisfying*

$$\eta\big(d_{\xi}(Fu, Fv), \psi(M(u, v))\big) \geqslant 0, \text{ for all } u \Re v,\tag{37}$$

*where η* ∈ Z*, ψ* ∈ Ψ *and*

$$\begin{split} M(u,v) = \max\Big\{ &d_{\xi}(u,v),\, d_{\xi}(u,Fu),\, d_{\xi}(Fv,v), \\ &\frac{d_{\xi}(v,Fv)(1+d_{\xi}(u,Fu))}{1+d_{\xi}(u,v)},\, \frac{d_{\xi}(u,Fu)(1+d_{\xi}(v,Fv))}{1+d_{\xi}(u,v)} \Big\}. \end{split}$$

*In addition, if F satisfies the following conditions:*


(*i*) *There exists* $v_0 \in \Omega$ *such that* $v_0 < Fv_0$ *and* $v_0 < F^2v_0$*;*

(*ii*) < *is F-closed;*

(*iii*) *For* $v_0$ *in* (*i*)*, we have* $\limsup_{n\to\infty} \frac{\psi^{n+1}(t)}{\psi^{n}(t)}\, \xi(v_{n+1}, v_p) < 1$*, where* $p \in \mathbb{N}$*,* $v_{n+1} = Fv_n$ *and* $t \in (0, d_{\xi}(v_0, v_1)]$ *with* $v_0 \neq v_1$*.*

(*iv*) *There exists* $\Delta \subseteq \Omega$ *such that* $F(\Omega) \subseteq \Delta$ *and* $(\Delta, d_{\xi})$ *is* <*-complete;*

(*v*) *One of the conditions holds:*

(*a*) *F is* <*-continuous; or*

(*b*) $<|_{\Omega}$ *is* $d_{\xi}$*-self-closed and, for all* $w \in \Omega$ *with* $d_{\xi}(w, Fw) > 0$*,*

$$\limsup_{t \to d_{\xi}(w, Fw)} \psi(t) < \frac{d_{\xi}(w, Fw)}{\xi(Fw, w)} \text{ or } \limsup_{t \to d_{\xi}(w, Fw)} \psi(t) < \frac{d_{\xi}(w, Fw)}{\xi(w, Fw)},$$

*then F has a fixed point.*

*In addition, if*

(*vi*) *u*<*v or v*<*u, for all u, v with u* = *Fu and v* = *Fv, then F has a unique fixed point.*

**Proof.** Taking *g* = *I* (the identity map) in Theorem 5 immediately yields the result.

**Corollary 2.** *Let* $(\Omega, d_{\xi})$ *be an extended rectangular b-metric space with a binary relation* < *and let* $\Delta$ *be an* <*-complete subspace of* Ω*. Suppose that F and g are self-mappings on* Ω *satisfying* $F(\Omega) \subseteq (g(\Omega) \cap \Delta)$ *and*

$$d_{\xi}(Fu, Fv) \leqslant k\, M_{F,g}(u, v), \text{ for all } gu \Re gv,\tag{38}$$

*where k* ∈ (0, 1) *and*

$$\begin{split} M_{F,g}(u,v) = \max\Big\{ &d_{\xi}(gu,gv),\, d_{\xi}(gu,Fu),\, d_{\xi}(Fv,gv), \\ &\frac{d_{\xi}(gv,Fv)(1+d_{\xi}(gu,Fu))}{1+d_{\xi}(gu,gv)},\, \frac{d_{\xi}(gu,Fu)(1+d_{\xi}(gv,Fv))}{1+d_{\xi}(gu,gv)} \Big\}. \end{split}$$

*In addition, if F and g satisfy the following conditions:*

(*i*) *there exists* $v_0 \in \Omega$ *such that* $gv_0 < Fv_0$ *and* $gv_0 < Fv_1$*, where* $v_1$ *is such that* $gv_1 = Fv_0$*;*

(*ii*) < *is* (*F*, *g*)*-closed;*

(*iii*) *for* $v_0$ *in* (*i*)*, we have* $\limsup_{n\to\infty} \xi(u_{n+1}, u_p) < \frac{1}{k}$*, where* $p \in \mathbb{N}$*,* $u_n = Fv_n = gv_{n+1}$*, and* $u_0 \neq u_1$*; and*

(*iv*) (*a*) $\Delta \subseteq g(\Omega)$*; and*

(*b*) *F is* (*g*, <)*-continuous, or F and g are continuous, or* $<|_{g(\Omega)}$ *is* $d_{\xi}$*-self-closed and, for all* $w \in \Omega$ *with* $d_{\xi}(gw, Fw) > 0$*,* $\xi(Fw, w) < \frac{1}{k}$ *or* $\xi(w, Fw) < \frac{1}{k}$*; or alternatively,*

(*iv*′) $d_{\xi}$ *is continuous,* (*F*, *g*) *is* <*-compatible, and g and F are* <*-continuous;*

(*v*) *gu*<*gv or gv*<*gu, for all u, v with gu* = *Fu and gv* = *Fv; and*

(*vi*) (*F*, *g*) *is weakly compatible,*

*then F and g have a unique common fixed point.*

**Proof.** Taking *ψ*(*u*) = *ku* with *k* ∈ (0, 1), the result follows from Remark 3 and Theorem 5.

**Remark 4.** *Taking* $< \,= \Omega^2$ *and* $M_{F,g}(u, v) = d_{\xi}(gu, gv)$ *in Corollary 2, we recover the results of Hassen et al. [20].*

**Corollary 3.** *Let* $(\Omega, d_{\xi})$ *be an extended rectangular b-metric space with a binary relation* <*, and let F be a self-mapping on* Ω *satisfying*

$$d_{\xi}(Fu, Fv) \leqslant k M(u, v), \text{ for all } u \Re v,$$

*where k* ∈ (0, 1) *and*

$$\begin{split} M(u,v) = \max\Big\{ &d_{\xi}(u,v),\, d_{\xi}(u,Fu),\, d_{\xi}(Fv,v), \\ &\frac{d_{\xi}(v,Fv)(1+d_{\xi}(u,Fu))}{1+d_{\xi}(u,v)},\, \frac{d_{\xi}(u,Fu)(1+d_{\xi}(v,Fv))}{1+d_{\xi}(u,v)} \Big\}. \end{split}$$

*In addition, if F satisfies the following conditions:*

(*i*) *there exists* $v_0 \in \Omega$ *such that* $v_0 < Fv_0$ *and* $v_0 < F^2v_0$*;*

(*ii*) < *is F-closed;*

(*iii*) *for* $v_0$ *in* (*i*)*, we have* $\limsup_{n\to\infty} \xi(v_{n+1}, v_p) < \frac{1}{k}$*, where* $p \in \mathbb{N}$ *and* $v_{n+1} = Fv_n$*;*

(*iv*) (*a*) *there exists* $\Delta \subseteq \Omega$ *such that* $F(\Omega) \subseteq \Delta$ *and* $(\Delta, d_{\xi})$ *is* <*-complete; and*

(*b*) *F is* <*-continuous, or* $<|_{\Omega}$ *is* $d_{\xi}$*-self-closed and, for all* $w \in \Omega$ *with* $d_{\xi}(w, Fw) > 0$*,* $\xi(Fw, w) < \frac{1}{k}$ *or* $\xi(w, Fw) < \frac{1}{k}$*; and*

(*v*) *u*<*v or v*<*u, for all u, v with u* = *Fu and v* = *Fv,*

*then F has a unique fixed point.*

**Proof.** Taking *g* = *I* in Corollary 2 immediately yields the result.

**Example 3.** *Let* $\Omega = [1, 4]$ *with* $< \,= [1, 2]^2$*,* $d_{\xi}(u, v) = |u - v|$ *and* $\xi(u, v) = u + v + 1$ *for all* $u, v \in \Omega$*. Clearly,* $(\Omega, d_{\xi})$ *is an* <*-complete extended rectangular b-metric space. Consider the mapping* $F : \Omega \to \Omega$ *defined by*

$$F(u) = \begin{cases} \frac{7}{4} & \text{if } u \in [1,2], \\ \frac{u}{10} & \text{otherwise.} \end{cases}$$

*Then, for all* $u < v$*, we have* $u, v \in [1, 2]$*, so* $Fu = Fv = \frac{7}{4} \in [1, 2]$*, that is,* $Fu < Fv$*; hence* < *is F-closed. For any* <*-preserving sequence* $\{u_n\} \subseteq \Omega$ *with* $\lim_{n\to\infty} u_n = u$*, we have* $u_n \in [1, 2]$ *for all* $n \in \mathbb{N}$ *and* $u \in [1, 2]$*, so* $Fu = Fu_n = \frac{7}{4}$ *for all* $n \in \mathbb{N}$*; that is,* $<|_{\Omega}$ *is* $d_{\xi}$*-self-closed. Moreover, there exists* $v_0 = \frac{7}{4}$ *such that* $v_0 < Fv_0$ *and* $v_0 < F^2v_0$*. Clearly,*

$$d_{\xi}(Fu, Fv) \leqslant \frac{1}{10} M(u, v), \text{ for all } u \Re v,$$

*and, for all* $w \in \Omega$ *with* $d_{\xi}(w, Fw) > 0$*, we have* $\xi(Fw, w) < 10$ *and* $\limsup_{n\to\infty} \xi(v_{n+1}, v_p) < 10$*. Thus, by Corollary 3,* $\frac{7}{4}$ *is the unique fixed point of F.*
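The claims of Example 3 are easy to check numerically. The following sketch (our own illustration, with sample points of our choosing) verifies that every point of the related part [1, 2] is mapped directly to the fixed point 7/4:

```python
# Numerical check of Example 3 (a sketch; the sample points are our choice).
def F(u):
    return 7/4 if 1 <= u <= 2 else u/10

def xi(u, v):
    return u + v + 1  # the extended b-metric control function of the example

# On the relation < = [1,2]^2, every point is mapped straight to 7/4,
# so 7/4 is a fixed point and iteration reaches it in one step.
for u0 in [1.0, 1.3, 1.75, 2.0]:
    assert F(u0) == 7/4

# The contraction bound d_xi(Fu, Fv) <= (1/10) M(u, v) holds trivially on
# [1,2]^2, since Fu = Fv there; and xi(Fw, w) = 7/4 + w + 1 < 10 on Omega.
assert F(7/4) == 7/4
assert all(xi(F(w), w) < 10 for w in [1.0, 2.0, 3.0, 4.0])
print("unique fixed point:", F(7/4))
```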

**Corollary 4.** *Let* $(\Omega, d_{\xi})$ *be an extended rectangular b-metric space, and let F be a self-mapping on* Ω *satisfying*

$$d_{\xi}(Fu, Fv) \leqslant k M(u, v), \text{ for all } u, v \in \Omega,$$

*where k* ∈ [0, 1) *and*

$$\begin{split} M(u,v) = \max\Big\{ &d_{\xi}(u,v),\, d_{\xi}(u,Fu),\, d_{\xi}(Fv,v), \\ &\frac{d_{\xi}(v,Fv)(1+d_{\xi}(u,Fu))}{1+d_{\xi}(u,v)},\, \frac{d_{\xi}(u,Fu)(1+d_{\xi}(v,Fv))}{1+d_{\xi}(u,v)} \Big\}. \end{split}$$

*In addition, if F satisfies the following conditions:*


(*i*) *there exists* $v_0 \in \Omega$ *such that* $\limsup_{n\to\infty} \xi(v_{n+1}, v_p) < \frac{1}{k}$*, where* $p \in \mathbb{N}$ *and* $v_{n+1} = Fv_n$*; and*

(*ii*) *one of the following holds:*

(*a*) *F is continuous; or*

(*b*) *for all* $w \in \Omega$ *with* $d_{\xi}(w, Fw) > 0$*,* $k < \frac{1}{\xi(Fw, w)}$ *or* $k < \frac{1}{\xi(w, Fw)}$*,*

*then F has a unique fixed point.*

**Proof.** Taking $< \,= \Omega^2$ in Corollary 3, the proof is complete.

**Corollary 5.** *Let* $(\Omega, d_{\xi})$ *be an extended rectangular b-metric space with a binary relation* <*. Assume that F is a self-mapping on* Ω *satisfying*

$$\begin{aligned} d_{\xi}(Fu, Fv) \leqslant{}& a_1 d_{\xi}(u, v) + a_2 d_{\xi}(u, Fu) + a_3 d_{\xi}(Fv, v) \\ &+ a_4 \frac{d_{\xi}(v, Fv)(1 + d_{\xi}(u, Fu))}{1 + d_{\xi}(u, v)} + a_5 \frac{d_{\xi}(u, Fu)(1 + d_{\xi}(v, Fv))}{1 + d_{\xi}(u, v)}, \text{ for all } u \Re v, \end{aligned}$$

*where* $\sum_{i=1}^{5} a_i \in (0, 1)$*. In addition, if F satisfies the following conditions:*

(*i*) *there exists* $v_0 \in \Omega$ *such that* $v_0 < Fv_0$ *and* $v_0 < F^2v_0$*;*

(*ii*) < *is F-closed;*

(*iii*) *for* $v_0$ *in* (*i*)*, we have* $\limsup_{n\to\infty} \xi(v_{n+1}, v_p) < \frac{1}{\sum_{i=1}^{5} a_i}$*, where* $p \in \mathbb{N}$ *and* $v_{n+1} = Fv_n$*;*

(*iv*) *there exists* $\Delta \subseteq \Omega$ *such that* $F(\Omega) \subseteq \Delta$ *and* $(\Delta, d_{\xi})$ *is* <*-complete;*

(*v*) *one of the conditions holds:*


(*a*) *F is* <*-continuous; or*

(*b*) $<|_{\Omega}$ *is* $d_{\xi}$*-self-closed and, for all* $w \in \Omega$ *with* $d_{\xi}(w, Fw) > 0$*,*

$$\sum_{i=1}^{5} a_i < \frac{1}{\xi(Fw, w)} \text{ or } \sum_{i=1}^{5} a_i < \frac{1}{\xi(w, Fw)};$$

*and*

(*vi*) *u*<*v or v*<*u, for all u, v with Fu* = *u and Fv* = *v, then F has a unique fixed point.*

**Proof.** For all *u*<*v*,

$$\begin{split} d_{\xi}(Fu,Fv) \leqslant{}& a_1 d_{\xi}(u,v) + a_2 d_{\xi}(u,Fu) + a_3 d_{\xi}(Fv,v) \\ &+ a_4\frac{d_{\xi}(v,Fv)(1+d_{\xi}(u,Fu))}{1+d_{\xi}(u,v)} + a_5\frac{d_{\xi}(u,Fu)(1+d_{\xi}(v,Fv))}{1+d_{\xi}(u,v)} \\ \leqslant{}& \sum_{i=1}^{5} a_i \max\Big\{d_{\xi}(u,v),\, d_{\xi}(u,Fu),\, d_{\xi}(Fv,v), \\ &\qquad \frac{d_{\xi}(v,Fv)(1+d_{\xi}(u,Fu))}{1+d_{\xi}(u,v)},\, \frac{d_{\xi}(u,Fu)(1+d_{\xi}(v,Fv))}{1+d_{\xi}(u,v)}\Big\} \\ ={}& k M(u,v), \end{split}$$

where $k = \sum_{i=1}^{5} a_i$. By Corollary 4, the proof is complete.

**Remark 5.** (*i*) *In Corollary 5, taking* $a_i = 0$*,* $i = 2, 3, 4, 5$*, our results generalize the results of Alam et al. [18] to extended rectangular b-metric spaces.*

(*ii*) *In Corollary 5, if* $a_i = 0$*,* $i = 2, 3, 5$*, then we extend the result of Hossain et al. [23] to extended rectangular b-metric spaces.*

**Corollary 6.** *Let* $(\Omega, d_{\xi})$ *be an extended rectangular b-metric space, and let F be a self-mapping on* Ω *satisfying*

$$\begin{aligned} d_{\xi}(Fu, Fv) \leqslant{}& a_1 d_{\xi}(u, v) + a_2 d_{\xi}(u, Fu) + a_3 d_{\xi}(Fv, v) \\ &+ a_4 \frac{d_{\xi}(v, Fv)(1 + d_{\xi}(u, Fu))}{1 + d_{\xi}(u, v)} + a_5 \frac{d_{\xi}(u, Fu)(1 + d_{\xi}(v, Fv))}{1 + d_{\xi}(u, v)}, \text{ for all } u, v \in \Omega, \end{aligned}$$

*where* $\sum_{i=1}^{5} a_i \in (0, 1)$*. In addition, if F satisfies the following conditions:*

(*i*) *there exists* $v_0 \in \Omega$ *such that* $\limsup_{n\to\infty} \xi(v_{n+1}, v_p) < \frac{1}{\sum_{i=1}^{5} a_i}$*, where* $p \in \mathbb{N}$ *and* $v_{n+1} = Fv_n$*; and*

(*a*) *F is continuous; or*

$$(b) \text{ for all } w \in \Omega \text{ with } d_{\xi}(w, Fw) > 0 \text{ such that } \sum_{i=1}^{5} a_i < \frac{1}{\xi(Fw, w)} \text{ or } \sum_{i=1}^{5} a_i < \frac{1}{\xi(w, Fw)},$$

*then F has a unique fixed point.*

**Proof.** Taking $< \,= \Omega^2$ in Corollary 5, the proof is complete.

**Remark 6.** (*i*) *In Corollary 6, if* $a_i = 0$*,* $i = 2, 3, 4, 5$*, we obtain a Banach-type fixed point theorem.*

(*ii*) *In Corollary 6, if* $a_i = 0$*,* $i = 1, 4, 5$*, we obtain a Kannan-type fixed point theorem.*

(*iii*) *In Corollary 6, if* $a_i = 0$*,* $i = 2, 3, 5$*, we extend the result of Dass et al. [22] to extended rectangular b-metric spaces.*

#### **4. Applications**

*4.1. Application to Ordinary Differential Equations with Periodic Boundary Conditions*

In this subsection, we apply our results to show the existence of solutions to the following ordinary differential equation with a periodic boundary condition:

$$\begin{cases} u'(t) = f(t, u(t)), \quad t \in [0, T],\\ u(0) = u(T), \end{cases} \tag{39}$$

where *T* ∈ (0, ∞) is a constant, *u* : [0, *T*] → R is the unknown function and *f* : [0, *T*] × R → R is continuous. Clearly, solving (39) is equivalent to solving the following integral equation:

$$u(t) = \int\_0^T G(t, s)[f(s, u(s)) + \lambda u(s)]ds, \; t \in [0, T], \tag{40}$$

where *λ* > 0 and

$$G(t,s) = \begin{cases} \frac{e^{\lambda(T+s-t)}}{e^{\lambda T}-1}, & 0 \le s < t \le T, \\\frac{e^{\lambda(s-t)}}{e^{\lambda T}-1}, & 0 \le t < s \le T. \end{cases}$$
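A useful property of this Green's function, used later in the proof of Theorem 6, is that $\int_0^T G(t, s)\, ds = \frac{1}{\lambda}$ for every $t \in [0, T]$. The following sketch (our own numerical check, with λ = 2 and T = 1 chosen for illustration) verifies the identity by midpoint quadrature:

```python
# Numerical check (our own sketch): for the Green's function above,
# the identity  ∫_0^T G(t, s) ds = 1/λ  holds for every t in [0, T].
import math

def G(t, s, lam, T):
    # piecewise kernel; the two branches meet with a jump at s = t
    if s < t:
        return math.exp(lam * (T + s - t)) / (math.exp(lam * T) - 1)
    return math.exp(lam * (s - t)) / (math.exp(lam * T) - 1)

def integral_G(t, lam, T, n=20000):
    # midpoint rule; accurate enough despite the jump at s = t
    h = T / n
    return sum(G(t, (j + 0.5) * h, lam, T) for j in range(n)) * h

lam, T = 2.0, 1.0
for t in [0.0, 0.25, 0.5, 0.75, 1.0]:
    assert abs(integral_G(t, lam, T) - 1 / lam) < 1e-3
print("∫_0^T G(t,s) ds = 1/λ verified at sample points")
```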

Let *C*([0, *T*], R) be the set of all continuous real-valued functions defined on [0, *T*]. For all *u*, *v* ∈ *C*([0, *T*], R), we define the functions *ξ* and $d_{\xi}$ and a mapping *F* by *ξ*(*u*, *v*) = |*u*| + |*v*| + 4,

$$d_{\xi}(u, v) = \max_{t \in [0, T]} |u(t) - v(t)|^2$$

and

$$F(u(t)) = \int\_0^T G(t, s)[f(s, u(s)) + \lambda u(s)]ds, \ t \in [0, T]. \tag{41}$$

Clearly, $(C([0, T], \mathrm{R}), d_{\xi})$ is a complete extended rectangular *b*-metric space and *F* is continuous.

**Theorem 6.** *If the following conditions hold,*

(*i*) *there exist* $\lambda, \mu > 0$ *with* $\mu < \lambda^2$ *and* $\psi \in \Psi$ *such that*

$$0 \leqslant f(t, u) + \lambda u - [f(t, v) + \lambda v], \text{ for all } u \leqslant v,$$

*and*

$$\big|f(t,u) + \lambda u - [f(t,v) + \lambda v]\big|^2 \leqslant \mu\, \psi\Big(\max_{t \in [0,T]} |u(t) - v(t)|^2\Big), \text{ for all } u \leqslant v;$$

(*ii*) (39) *has a lower solution, that is, there exists u*0(*t*) ∈ *C*([0, *T*], R) *such that*

$$\begin{cases} u_0'(t) \leqslant f(t, u_0(t)), \quad t \in [0, T],\\ u_0(0) \leqslant u_0(T); \end{cases}$$

*and*

(*iii*) *for* $u_0$ *in* (*ii*)*, we have* $\limsup_{n\to\infty} \frac{\psi^{n+1}(t)}{\psi^{n}(t)}\, \xi(u_{n+1}, u_p) < 1$*, where* $p, n \in \mathbb{N}$*,* $u_{n+1} = Fu_n$ *and* $t \in (0, d_{\xi}(u_0, u_1)]$ *with* $u_0 \neq u_1$*, then the periodic boundary value problem* (39) *has a solution.*

**Proof.** First, we define a binary relation < by *u*<*v* if and only if $u(t) \leqslant v(t)$ for all $t \in [0, T]$. Clearly, by (*ii*), we have $u_0(t) \leqslant Fu_0(t)$. By $0 \leqslant f(t, u) + \lambda u - [f(t, v) + \lambda v]$ for all $u \leqslant v$, together with $u_0(t) \leqslant Fu_0(t)$ and the definition of *F*, we have $F(u_0(t)) \leqslant F^2(u_0(t))$.

Hence $u_0(t) \leqslant Fu_0(t)$ and $u_0(t) \leqslant F^2(u_0(t))$, so by the definition of < there exists $u_0(t)$ such that $u_0(t) < F(u_0(t))$ and $u_0(t) < F^2(u_0(t))$. We conclude that < is *F*-closed, again using $0 \leqslant f(t, u) + \lambda u - [f(t, v) + \lambda v]$ for all $u \leqslant v$, $u_0(t) \leqslant Fu_0(t)$, and the definitions of *F* and <. Now, we prove that *F* satisfies (37). Indeed, for all *u*<*v*, we have

$$\begin{split} |Fu(t) - Fv(t)|^2 &= \left| \int_0^T G(t, s)[f(s, u(s)) + \lambda u(s)]ds - \int_0^T G(t, s)[f(s, v(s)) + \lambda v(s)]ds \right|^2 \\ &= \left| \int_0^T G(t, s)\big\{ [f(s, u(s)) + \lambda u(s)] - [f(s, v(s)) + \lambda v(s)] \big\} ds \right|^2 \\ &\leqslant \max_{s \in [0, T]} \big|[f(s, u(s)) + \lambda u(s)] - [f(s, v(s)) + \lambda v(s)]\big|^2 \left| \int_0^T G(t, s)\, ds \right|^2 \\ &\leqslant \mu\, \psi\Big(\max_{t \in [0, T]} |u(t) - v(t)|^2\Big) \Big| \max_{t \in [0, T]} \int_0^T G(t, s)\, ds \Big|^2 \\ &\leqslant \mu\, \psi(M(u, v)) \Big| \max_{t \in [0, T]} \int_0^T G(t, s)\, ds \Big|^2 \\ &= \mu\, \psi(M(u, v)) \times \frac{1}{\lambda^2} \\ &= \frac{\mu}{\lambda^2}\, \psi(M(u, v)), \end{split}$$

that is, $d_{\xi}(Fu, Fv) \leqslant \frac{\mu}{\lambda^2}\, \psi(M(u, v))$. Therefore, all conditions of Corollary 1 are satisfied, and thus (39) has a solution.
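The fixed point of (41) can be approximated by Picard iteration. The following sketch is our own numerical illustration, with f(t, u) = cos(2πt) − u, T = 1 and λ = 2 chosen for concreteness (these data are not from the paper); the discrete iterates converge geometrically, since the contraction factor here is |λ − 1|/λ = 1/2:

```python
# Picard iteration for u(t) = ∫_0^T G(t,s)[f(s,u(s)) + λu(s)] ds, Equation (40).
# Our own illustrative data: f(t,u) = cos(2πt) - u, T = 1, λ = 2, midpoint grid.
import math

T, lam, n = 1.0, 2.0, 200
h = T / n
ts = [(j + 0.5) * h for j in range(n)]  # midpoint nodes

def G(t, s):
    # Green's function defined above
    if s < t:
        return math.exp(lam * (T + s - t)) / (math.exp(lam * T) - 1)
    return math.exp(lam * (s - t)) / (math.exp(lam * T) - 1)

def f(t, u):
    return math.cos(2 * math.pi * t) - u

Gm = [[G(t, s) for s in ts] for t in ts]  # precomputed kernel matrix

def F(u):
    # discrete version of (41) via the midpoint rule
    return [h * sum(row[j] * (f(ts[j], u[j]) + lam * u[j]) for j in range(n))
            for row in Gm]

u, diff = [0.0] * n, 1.0
for _ in range(80):
    u_new = F(u)
    diff = max(abs(a - b) for a, b in zip(u_new, u))
    u = u_new
    if diff < 1e-12:
        break

# compare with the exact periodic solution of u' + u = cos(2πt)
w = 2 * math.pi
A, B = 1 / (1 + w * w), w / (1 + w * w)
err = max(abs(u[j] - (A * math.cos(w * ts[j]) + B * math.sin(w * ts[j])))
          for j in range(n))
print("successive diff:", diff, " error vs exact solution:", err)
```

The fixed point of the discrete operator matches the exact periodic solution up to the quadrature error of the midpoint rule across the kink of G at s = t.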

#### *4.2. Application to Linear Matrix Equations*

In this subsection, Corollary 3 is used to prove the existence of solutions to a class of linear matrix equations. For convenience, we first fix the following notation.

Let $M^m$ denote the set of all $m \times m$ complex matrices, $H^m$ the set of all $m \times m$ Hermitian matrices, and $P^m$ and $H^m_+$ the sets of all $m \times m$ positive definite and positive semi-definite matrices, respectively. Clearly, $P^m \subseteq H^m \subseteq M^m$ and $H^m_+ \subseteq H^m$. Here, $A_1 \succ O$ ($O$ denotes the null matrix of the same order) and $A_1 \succeq O$ mean that $A_1 \in P^m$ and $A_1 \in H^m_+$, respectively; for $A_1 - A_2 \succ O$ and $A_1 - A_2 \succeq O$, we write $A_1 \succ A_2$ and $A_1 \succeq A_2$, respectively.

In what follows, we investigate the existence of solutions to the following linear matrix equation:

$$
U = G + \sum_{i=1}^{m} A_i^* U A_i + \sum_{i=1}^{m} B_i^* U B_i \tag{42}
$$

where $G \in P^m$ and $A_i$, $B_i$ are arbitrary $m \times m$ matrices for each $i$. We use the metric $d(A, B) = \|A - B\|_{tr,X} = \|X^{\frac{1}{2}}(A - B)X^{\frac{1}{2}}\|_{tr}$, which is induced by the norm $\|A\|_{tr} = \sum_{i=1}^{m} \sigma_i(A)$, where $X \in P^m$, $A, B \in H^m$ and $\sigma_i(A)$, $i = 1, 2, \ldots, m$, are the singular values of $A \in M^m$. Clearly, the set $H^m$ equipped with the metric $d$ is a complete metric space, and hence $(H^m, d)$ is a complete extended rectangular *b*-metric space with respect to $\xi = 3$.
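The weighted trace metric above is straightforward to compute. The following sketch (our own illustration, not part of the paper) evaluates $\|A\|_{tr}$ as the sum of singular values and $d(A, B)$ via a symmetric square root of X; with X = I the metric reduces to the plain trace (nuclear) norm:

```python
# Sketch (our own illustration): the X-weighted trace metric d(A, B) used above.
import numpy as np

def tr_norm(A):
    # ||A||_tr = sum of singular values (the nuclear norm)
    return np.linalg.svd(A, compute_uv=False).sum()

def sqrtm_spd(X):
    # symmetric square root of a symmetric positive definite X
    w, V = np.linalg.eigh(X)
    return V @ np.diag(np.sqrt(w)) @ V.T

def d(A, B, X):
    R = sqrtm_spd(X)
    return tr_norm(R @ (A - B) @ R)

A = np.diag([3.0, 1.0])
B = np.diag([1.0, 0.0])
# With X = I: d(A, B) = ||diag(2, 1)||_tr = 2 + 1 = 3.
assert np.isclose(d(A, B, np.eye(2)), 3.0)
print(d(A, B, np.eye(2)))
```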

Define < and the mapping $F : H^m \to H^m$ by $A < B$ iff $B - A \in H^m_+$ and

$$F(U) = G + \sum_{i=1}^{m} A_i^* U A_i + \sum_{i=1}^{m} B_i^* U B_i, \text{ for all } U \in H^m.$$

Note that the solutions of the matrix Equation (42) are exactly the fixed points of the mapping $F$. Furthermore, the mapping $F$ is continuous in $H^m$, < is $F$-closed, and there exists $U_0$ such that $U_0 < F(U_0)$ and $U_0 < F^2(U_0)$.

To establish the existence result, we need the following lemmas.

**Lemma 2** ([24])**.** *If* $A, B \in H^m_+$*, then* $0 \leqslant tr(AB) \leqslant \|A\|\, tr(B)$*.*

**Lemma 3** ([24])**.** *If* $A \in H^m$ *is such that* $A \prec I_m$*, then* $\|A\| < 1$*.*

**Theorem 7.** *If* $X \in P^m$*,* $\sum_{i=1}^{m} A_i^* X A_i \prec \frac{1}{7} X$ *and* $\sum_{i=1}^{m} B_i^* X B_i \prec \frac{1}{7} X$*, then the mapping F has a fixed point in* $H^m$*.*

**Proof.** Suppose that *<sup>U</sup>*, *<sup>V</sup>* <sup>∈</sup> *<sup>H</sup><sup>m</sup>* and *<sup>U</sup>*<*V*. Consider

$$\begin{split}
\|F(U) - F(V)\|_{tr,X} &= tr\Big(X^{\frac{1}{2}}\big(F(U) - F(V)\big)X^{\frac{1}{2}}\Big) \\
&= tr\Big(\sum_{i=1}^{m} X^{\frac{1}{2}}\big(A_i^*(U - V)A_i + B_i^*(U - V)B_i\big)X^{\frac{1}{2}}\Big) \\
&= \sum_{i=1}^{m} tr\big(A_i^* X A_i (U - V)\big) + \sum_{i=1}^{m} tr\big(B_i^* X B_i (U - V)\big) \\
&= tr\Big(\sum_{i=1}^{m} X^{-\frac{1}{2}} A_i^* X A_i X^{-\frac{1}{2}}\, X^{\frac{1}{2}}(U - V)X^{\frac{1}{2}}\Big) + tr\Big(\sum_{i=1}^{m} X^{-\frac{1}{2}} B_i^* X B_i X^{-\frac{1}{2}}\, X^{\frac{1}{2}}(U - V)X^{\frac{1}{2}}\Big) \\
&\leqslant \Big(\Big\|\sum_{i=1}^{m} X^{-\frac{1}{2}} A_i^* X A_i X^{-\frac{1}{2}}\Big\| + \Big\|\sum_{i=1}^{m} X^{-\frac{1}{2}} B_i^* X B_i X^{-\frac{1}{2}}\Big\|\Big)\, \|U - V\|_{tr,X} \\
&= k\, \|U - V\|_{tr,X} \leqslant k\, M(U, V),
\end{split}$$

where the inequality follows from Lemma 2, $k = \big\|\sum_{i=1}^{m} X^{-\frac{1}{2}} A_i^* X A_i X^{-\frac{1}{2}}\big\| + \big\|\sum_{i=1}^{m} X^{-\frac{1}{2}} B_i^* X B_i X^{-\frac{1}{2}}\big\|$ and

$$\begin{split} M(U,V) = \max\Big\{ &\|U-V\|_{tr,X},\, \|U-F(U)\|_{tr,X},\, \|V-F(V)\|_{tr,X}, \\ &\frac{\|V-F(V)\|_{tr,X}\,(1+\|U-F(U)\|_{tr,X})}{1+\|U-V\|_{tr,X}},\, \frac{\|U-F(U)\|_{tr,X}\,(1+\|V-F(V)\|_{tr,X})}{1+\|U-V\|_{tr,X}} \Big\}. \end{split}$$

By Lemma 3, we have $k < \frac{2}{7}$. The mapping $F$ and the relation < satisfy the conditions of Corollary 3; therefore, $F$ has a fixed point, and the linear matrix Equation (42) has a solution.
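Since F is a contraction under the hypotheses of Theorem 7, the solution of (42) can be computed by plain fixed-point iteration. The following sketch is our own numerical illustration (with X = I, real random matrices, and sizes of our choosing): the families are scaled so that $\sum A_i^* A_i \prec \frac{1}{7} I$ and $\sum B_i^* B_i \prec \frac{1}{7} I$, and the iteration drives the residual of (42) to machine precision:

```python
# Sketch (our own illustration): solving U = G + Σ A_i* U A_i + Σ B_i* U B_i
# by fixed-point iteration, with X = I and real matrices scaled so that the
# contraction condition of Theorem 7 holds.
import numpy as np

rng = np.random.default_rng(0)
m = 4
G = np.eye(m) + 0.5 * np.ones((m, m))  # a positive definite right-hand side

def small_family(count):
    # random matrices rescaled so that Σ M^T M ≺ (1/7) I  (strictly: 1/7.5)
    Ms = [rng.standard_normal((m, m)) for _ in range(count)]
    S = sum(M.T @ M for M in Ms)
    scale = np.sqrt(1 / (7.5 * np.linalg.eigvalsh(S).max()))
    return [scale * M for M in Ms]

As, Bs = small_family(m), small_family(m)

def F(U):
    # A^T plays the role of A* since the matrices are real here
    return G + sum(A.T @ U @ A for A in As) + sum(B.T @ U @ B for B in Bs)

U = np.zeros((m, m))
for _ in range(200):
    U = F(U)

resid = np.abs(F(U) - U).max()
assert resid < 1e-10  # U solves Equation (42) to numerical precision
print("residual:", resid)
```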


#### **5. Conclusions**


In this paper, some new relation-theoretic coincidence and common fixed point results for the mappings *F* and *g* are obtained by using hybrid contractions and auxiliary functions in extended rectangular *b*-metric spaces. We improve and extend some recent results, and we justify the results with examples and applications. Finally, regarding the main results of this paper, we derive several corollaries. Due to the importance of fixed point theory, we also consider possible future research directions.

The following are potential directions for future work:


**Author Contributions:** Conceptualization, Y.S. and X.L.; formal analysis, Y.S. and X.L.; investigation, Y.S.; writing—original draft preparation, Y.S. and X.L.; writing—review and editing, Y.S. and X.L. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research was funded by National Natural Science Foundation of China (Grant No. 11872043), Central Government Funds of Guiding Local Scientific and Technological Development for Sichuan Province (Grant No. 2021ZYD0017), Zigong Science and Technology Program (Grant No. 2020YGJC03), and the 2021 Innovation and Entrepreneurship Training Program for College Students of Sichuan University of Science and Engineering (Grant No. cx2021150).

**Data Availability Statement:** The data that support the findings of this study are available from the corresponding author upon reasonable request.

**Acknowledgments:** The authors thank the anonymous reviewers for their excellent comments, suggestions, and ideas that helped improve this article.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


## *Article* **Sufficiency for Weak Minima in Optimal Control Subject to Mixed Constraints**

**Gerardo Sánchez Licea**

Departamento de Matemáticas, Facultad de Ciencias, Universidad Nacional Autónoma de México, Ciudad de México 04510, Mexico; gesl@ciencias.unam.mx

**Abstract:** For optimal control problems of Bolza involving time-state-control mixed constraints, containing inequalities and equalities, a fixed initial end-point, a variable final end-point, and nonlinear dynamics, sufficient conditions for weak minima are derived. The proposed approach allows us to avoid hypotheses such as the continuity of the second derivatives of the functions delimiting the problems, the continuity of the optimal controls, or the parametrization of the variable final end-point. We also present a relaxation relative to some similar works, in the sense that we arrive at essentially the same conclusions under weaker assumptions.

**Keywords:** optimal control; mixed constraints; free final end-point; sufficiency; weak minima

**MSC:** 49K15

#### **1. Introduction**

In this paper, we study sufficiency conditions for a weak minimum in two constrained parametric and nonparametric optimal control problems having nonlinear dynamics, a fixed left end-point, a variable right end-point and mixed time-state-control restrictions involving inequalities and equalities. In the parametric problem, we show how the deviation between admissible costs and optimal costs is estimated by some functions playing the role of the squares of some norms; in particular, a fundamental component is a functional whose structure is very similar to the square of the classical norm of the space of Lebesgue measurable functions. See [1–4], where the authors study sufficient conditions for optimality and obtain a similar behaviour with respect to the corresponding deviations between optimal and feasible costs. In the parametric problem, the variable end-point is subject to a parametrization involving a twice continuously differentiable manifold; in the nonparametric problem, we relax that concept because the final end-point is not only variable but completely free, in the sense that it may belong to any set contained in a surface having continuous second derivatives with respect to the independent variable. Another important relaxation of this paper is that we avoid imposing two functional restrictions involving the maximum of some crucial integrals, one concerning derivatives of admissible and optimal dynamics and the other concerning the admissible and optimal controls; see [5,6]. In contrast, we show how, by fixing the left end-point, we are able to eliminate the integral depending on the admissible dynamics of the problem and to make a weaker hypothesis involving only the integral of the admissible and optimal controls. It is worth emphasizing that the conclusions are very similar while the hypotheses are weaker.

On the other hand, the sufficiency technique employed to prove the main theorem of the paper is self-contained because it is independent of classical approaches used to obtain sufficiency in optimal control such as the Hamilton–Jacobi theory, the incorporation of symmetric solutions of some matrix-valued Riccati equations or the use of fundamental concepts appealing to Jacobi's theory in terms of conjugate points, see [7–9], respectively.

**Citation:** Licea, G.S. Sufficiency for Weak Minima in Optimal Control Subject to Mixed Constraints. *Symmetry* **2022**, *14*, 1520. https:// doi.org/10.3390/sym14081520

Academic Editors: Octav Olteanu and Jan Awrejcewicz

Received: 26 June 2022 Accepted: 20 July 2022 Published: 25 July 2022

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

In contrast, our approach is direct in nature, since it strongly depends upon three fundamental concepts: the first concerns a similar version of the Legendre–Clebsch necessary condition; the second is related to the positivity of the second variation over the cone of critical directions; and the third involves a crucial integral inequality involving a Weierstrass verification excess function and the integral of a mapping whose behavior is very similar to the quadratic function around zero and very analogous to the absolute value function around infinity and minus infinity. As the right end-point is variable in both the parametric and the nonparametric optimal control problems, our hypotheses also impose a transversality condition, and the proof of the theorem of the article shows that the fulfillment of a second-order inequality is crucial. This second-order inequality has its origin in a symmetric inequality presented in hypothesis (ii) of Theorem 1 and Corollary 1 of [5,6]. The absence of the continuity of the proposed optimal controls is also one of the essential components of this work. See [7–21], where that assumption of continuity appears uniformly in sufficiency approaches with a degree of generality very similar to that obtained in this article, even though the admissible controls must only lie in the family of measurable functions. To be more precise, in the works mentioned above the optimal controls need to be confined to the space of continuous functions, while all the feasible controls must only be measurable; see [5,6,22], where we show that this assumption of continuity on the optimal controls is very strong.

The paper is organized as follows: In Section 2, we state the parametric optimal control problem that we shall study and some basic definitions, and we enunciate the main theorem of the article. In Section 3, we pose the nonparametric optimal control problem we are going to study, together with a fundamental lemma and a corollary which turns out to be the principal result of the paper. In the same section, we illustrate with two examples how even the non-expert can apply the main corollary of the article. In Section 4, we establish three supplementary lemmas, whose proofs can be found in [23] and on which the proof of the theorem is strongly based. In Section 5, we prove the theorem of the paper by means of two lemmas. In Section 6, we present a discussion concerning the relations between necessary and sufficient conditions, we add some comments about an experimental economic model, and we exhibit some relevant references containing the fundamental subject of mixed constraints. Finally, in Section 7, we provide the main conclusions of the article.

#### **2. An Auxiliary Theorem**

Suppose that we are given an interval T := [*t*<sub>1</sub>, *t*<sub>2</sub>] in **R**, a fixed point *ξ*<sub>1</sub> ∈ **R**<sup>*n*</sup>, any nonempty subset *C* of **R**<sup>*s*</sup>, called the set of *parameters*, and functions *γ*: **R**<sup>*s*</sup> → **R**, Ψ: **R**<sup>*s*</sup> → **R**<sup>*n*</sup>, Γ(*t*, *x*, *u*): T × **R**<sup>*n*</sup> × **R**<sup>*m*</sup> → **R**, *f*(*t*, *x*, *u*): T × **R**<sup>*n*</sup> × **R**<sup>*m*</sup> → **R**<sup>*n*</sup> and *ϕ*(*t*, *x*, *u*): T × **R**<sup>*n*</sup> × **R**<sup>*m*</sup> → **R**<sup>*q*</sup>. Set

$$\mathcal{R} := \left\{ (t, x, u) \in \mathcal{T} \times \mathbf{R}^n \times \mathbf{R}^m \mid \varphi_{\sigma}(t, x, u) \le 0 \ (\sigma \in P), \ \varphi_{\varsigma}(t, x, u) = 0 \ (\varsigma \in Q) \right\}$$

where *P* := {1, . . . , *p*} and *Q* := {*p* + 1, . . . , *q*} (*p* = 0, 1, . . . , *q*). If *p* = 0, then *P* is empty, and we disregard statements about *ϕ<sub>σ</sub>*. If *p* = *q*, then *Q* is empty, and we disregard statements about *ϕ<sub>ς</sub>*.

Throughout the paper, we suppose that Γ, *f* and *ϕ* = (*ϕ*<sub>1</sub>, . . . , *ϕ<sub>q</sub>*) have first and second derivatives with respect to *x* and *u*. Additionally, if *G*(*t*, *x*, *u*) denotes either Γ(*t*, *x*, *u*), *f*(*t*, *x*, *u*), *ϕ*(*t*, *x*, *u*) or any of their partial derivatives of order ≤ 2 with respect to *x* and *u*, we assume that, whenever G is a bounded subset of T × **R**<sup>*n*</sup> × **R**<sup>*m*</sup>, |*G*(G)| is a bounded subset of **R**. In addition, we suppose that, if ((*h<sub>q</sub>*, *l<sub>q</sub>*)) is any sequence in *AC*(T; **R**<sup>*n*</sup>) × *L*<sup>∞</sup>(T; **R**<sup>*m*</sup>) such that, for some (*h*, *l*) ∈ *AC*(T; **R**<sup>*n*</sup>) × *L*<sup>∞</sup>(T; **R**<sup>*m*</sup>), (*h<sub>q</sub>*(·), *l<sub>q</sub>*(·)) → (*h*(·), *l*(·)) in *L*<sup>∞</sup> on T, then, for all *q* ∈ **N**, *G*(·, *h<sub>q</sub>*(·), *l<sub>q</sub>*(·)) is measurable on T and

$$G(\cdot, h\_q(\cdot), l\_q(\cdot)) \stackrel{L^\infty}{\longrightarrow} G(\cdot, h(\cdot), l(\cdot)) \text{ on } \mathcal{T}.$$

It is worth observing that the conditions given above are satisfied if the functions Γ, *f*, *ϕ* and their first and second derivatives relative to *x* and *u* are continuous on T × **R**<sup>*n*</sup> × **R**<sup>*m*</sup>. We also suppose that the functions *γ* and Ψ are of class *C*<sup>2</sup> on their domains.

Designate by *X* := {*x*: T → **R**<sup>*n*</sup> | *x* is absolutely continuous} and, for any positive integer *s*, set *U<sub>s</sub>* := *L*<sup>∞</sup>(T; **R**<sup>*s*</sup>). Define *A* := *X* × *U<sub>m</sub>* × **R**<sup>*s*</sup>. The notation *z<sub>a</sub>* := (*z*, *a*) = (*x*, *u*, *a*) denotes any element *z<sub>a</sub>* ∈ *A*.

We are going to study a parametric optimal control problem, denoted by *P*(*γ*, Γ, *C*, *f* , *ξ*1, Ψ, *R*,*s*), consisting of minimizing a functional of the form

$$I(z_a) := \gamma(a) + \int_{t_1}^{t_2} \Gamma(t, x(t), u(t))\,dt$$

over all *z<sup>a</sup>* in *A* satisfying the constraints

$$\begin{cases} a \in C, \\ \dot{x}(t) = f(t, x(t), u(t)) \ (\text{a.e. in } \mathcal{T}), \\ x(t_1) = \xi_1, \ x(t_2) = \Psi(a), \\ (t, x(t), u(t)) \in \mathcal{R} \ (t \in \mathcal{T}). \end{cases}$$

Elements *a* = (*a*<sub>1</sub>, . . . , *a<sub>s</sub>*)<sup>∗</sup> in **R**<sup>*s*</sup> (<sup>∗</sup> denotes transpose) will be called *parameters*, members *z<sub>a</sub>* of *A* will be called *processes*, and a process is *admissible* if it satisfies the constraints.
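To make these definitions concrete, the functional *I* and the admissibility of a discretized process can be evaluated numerically. The following sketch uses hypothetical one-dimensional data chosen only for illustration (*f*(*t*, *x*, *u*) = *u*, Γ(*t*, *x*, *u*) = *u*<sup>2</sup>, *γ*(*a*) = *a*<sup>2</sup>, Ψ(*a*) = *a*, *ξ*<sub>1</sub> = 0; none of this is taken from the paper):

```python
# Hypothetical one-dimensional data (n = m = s = 1, T = [0, 1]):
# f(t, x, u) = u, Gamma(t, x, u) = u^2, gamma(a) = a^2, Psi(a) = a, xi_1 = 0.
t1, t2, N = 0.0, 1.0, 1000

def gamma(a):
    return a * a

def Gamma(t, x, u):
    return u * u

def f(t, x, u):
    return u

def Psi(a):
    return a

def I(a, u, n=N):
    """Euler integration of x' = f(t, x, u(t)) from x(t1) = xi_1 = 0,
    with a left-endpoint Riemann sum for the integral term of I(z_a)."""
    h = (t2 - t1) / n
    x = 0.0          # x(t1) = xi_1
    integral = 0.0
    for k in range(n):
        t = t1 + k * h
        integral += Gamma(t, x, u(t)) * h
        x += f(t, x, u(t)) * h
    # admissibility requires the endpoint condition x(t2) = Psi(a)
    assert abs(x - Psi(a)) < 1e-6
    return gamma(a) + integral

# u = 1 drives x(t) = t, so a = 1 and I(z_a) = gamma(1) + (integral of 1) = 2
assert abs(I(1.0, lambda t: 1.0) - 2.0) < 1e-6
```

This is only a direct-transcription sketch of the definitions; the paper itself works with exact (measurable) processes, not discretizations.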

• A process *ẑ*<sub>*â*</sub> *solves* *P*(*γ*, Γ, *C*, *f*, *ξ*<sub>1</sub>, Ψ, *R*, *s*) if it is admissible and *I*(*ẑ*<sub>*â*</sub>) ≤ *I*(*z<sub>a</sub>*) for all admissible processes *z<sub>a</sub>*. An admissible process *ẑ*<sub>*â*</sub> is a *weak minimum* of *P*(*γ*, Γ, *C*, *f*, *ξ*<sub>1</sub>, Ψ, *R*, *s*) if it is a minimum of *I* relative to the norm

$$\|z_a\| := |a| + \|(x, u)\|_{\infty},$$

that is, if, for some *ε* > 0, *I*(*ẑ*<sub>*â*</sub>) ≤ *I*(*z<sub>a</sub>*) for all admissible processes *z<sub>a</sub>* satisfying ‖*z<sub>a</sub>* − *ẑ*<sub>*â*</sub>‖ < *ε*.

• For all (*t*, *x*, *u*, *ω*, *ν*) ∈ T × **R**<sup>*n*</sup> × **R**<sup>*m*</sup> × **R**<sup>*n*</sup> × **R**<sup>*q*</sup>, define the augmented Hamiltonian by

$$\mathcal{H}(t, x, u, \omega, \nu) := \omega^* f(t, x, u) - \Gamma(t, x, u) - \nu^* \varphi(t, x, u).$$

If *ω* ∈ *X* and *ν* ∈ *U<sub>q</sub>* are given, set, for all (*t*, *x*, *u*) ∈ T × **R**<sup>*n*</sup> × **R**<sup>*m*</sup>,

$$\mathcal{F}(t, x, u) := -\mathcal{H}(t, x, u, \omega(t), \nu(t)) - \dot{\omega}^*(t)x$$

and let

$$J(z_a) := \omega^*(t_2)x(t_2) - \omega^*(t_1)x(t_1) + \gamma(a) + \int_{t_1}^{t_2} \mathcal{F}(t, x(t), u(t))\,dt.$$

• The *second variation* of *J* with respect to *z<sub>a</sub>* in the direction *w<sub>α</sub>* is given by

$$J''(z_a; w_\alpha) := \alpha^* \gamma''(a)\alpha + \int_{t_1}^{t_2} 2\Omega(t, x(t), u(t); y(t), v(t))\,dt,$$

where, for all (*t*, *y*, *v*) ∈ T × **R** *<sup>n</sup>* <sup>×</sup> **<sup>R</sup>** *m*,

$$2\Omega(t, x(t), u(t); y, v) := y^* \mathcal{F}_{xx}(t, x(t), u(t))y + 2y^* \mathcal{F}_{xu}(t, x(t), u(t))v + v^* \mathcal{F}_{uu}(t, x(t), u(t))v,$$

> and the notation *w<sub>α</sub>* means any element (*y*, *v*, *α*) ∈ *X* × *L*<sup>2</sup>(T; **R**<sup>*m*</sup>) × **R**<sup>*s*</sup>. In addition, *γ*″(*a*) is the second derivative of *γ* evaluated at *a*.

• Let *E*(*t*, *x*, *u*, *v*) := F(*t*, *x*, *v*) − F(*t*, *x*, *u*) − F*<sub>u</sub>*(*t*, *x*, *u*)(*v* − *u*).

• Define

$$\mathcal{D}(u) := \int\_{t\_1}^{t\_2} L(u(t))dt \quad \text{where} \quad L(c) := (1 + |c|^2)^{1/2} - 1 \ (c \in \mathbb{R}^m).$$
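The introduction describes the integrand of D as behaving like the quadratic function near zero and like the absolute value function near ±∞; this can be checked directly for *L*. A small numerical sketch, added here for illustration only:

```python
import math

def L(c):
    # L(c) = sqrt(1 + |c|^2) - 1 for a scalar argument c
    return math.sqrt(1.0 + c * c) - 1.0

# Near zero, L behaves like the quadratic c^2/2 ...
for c in (1e-3, -1e-3, 1e-2):
    assert abs(L(c) - c * c / 2.0) < 1e-6

# ... while for large |c| it grows like the absolute value |c|.
for c in (1e3, -1e3, 1e4):
    assert abs(L(c) / abs(c) - 1.0) < 1e-2

# L is dominated by the quadratic everywhere: sqrt(1 + z) <= 1 + z/2.
for c in (-5.0, -1.0, 0.0, 0.5, 2.0, 10.0):
    assert L(c) <= c * c / 2.0
```

The last loop records the elementary bound √(1 + *c*²) − 1 ≤ *c*²/2, which is used when verifying condition (v) in the examples below.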

Finally, if (*t*, *x*, *u*) ∈ T × **R**<sup>*n*</sup> × **R**<sup>*m*</sup> is given, denote by

$$i(t, x, u) := \{\sigma \in P \mid \varphi_{\sigma}(t, x, u) = 0\},$$

the set of active indices of (*t*, *x*, *u*) relative to the inequality constraints. For all *z<sub>a</sub>* ∈ *A*, let *Y*(*z<sub>a</sub>*) be the cone of all *w<sub>α</sub>* ∈ *X* × *L*<sup>2</sup>(T; **R**<sup>*m*</sup>) × **R**<sup>*s*</sup> satisfying

$$\begin{cases} \dot{y}(t) = f_x(t, x(t), u(t))y(t) + f_u(t, x(t), u(t))v(t) \ (\text{a.e. in } \mathcal{T}), \\ y(t_1) = 0, \ y(t_2) = \Psi'(a)\alpha, \\ \varphi_{\sigma x}(t, x(t), u(t))y(t) + \varphi_{\sigma u}(t, x(t), u(t))v(t) \le 0 \ (\text{a.e. in } \mathcal{T}, \ \sigma \in i(t, x(t), u(t))), \\ \varphi_{\varsigma x}(t, x(t), u(t))y(t) + \varphi_{\varsigma u}(t, x(t), u(t))v(t) = 0 \ (\text{a.e. in } \mathcal{T}, \ \varsigma \in Q). \end{cases}$$

The set *Y*(*za*) is the *cone of critical directions* with respect to *za*.

**Theorem 1.** *Let ẑ*<sub>*â*</sub> *be an admissible process. Assume that i*(·, *x̂*(·), *û*(·)) *is piecewise constant on* T*, that there exist ω* ∈ *X, ν* ∈ *U<sub>q</sub> with ν<sub>σ</sub>*(*t*) ≥ 0*, ν<sub>σ</sub>*(*t*)*ϕ<sub>σ</sub>*(*t*, *x̂*(*t*), *û*(*t*)) = 0 (*σ* ∈ *P*, *t* ∈ T)*, and δ*, *ε* > 0*, such that*

$$\dot{\omega}(t) = -\mathcal{H}_x^*(t, \hat{x}(t), \hat{u}(t), \omega(t), \nu(t)) \ (\text{a.e. in } \mathcal{T}),$$

$$\mathcal{H}_u^*(t, \hat{x}(t), \hat{u}(t), \omega(t), \nu(t)) = 0 \ (t \in \mathcal{T}),$$

*and the following is satisfied:*


*Then, for some ρ*<sub>1</sub>, *ρ*<sub>2</sub> > 0 *and all admissible processes z<sub>a</sub> satisfying* ‖*z<sub>a</sub>* − *ẑ*<sub>*â*</sub>‖ < *ρ*<sub>1</sub>*,*

$$I(z_a) \ge I(\hat{z}_{\hat{a}}) + \rho_2 \min\{|a - \hat{a}|^2, \mathcal{D}(u - \hat{u})\}.$$

*In particular, ẑ*<sub>*â*</sub> *is a weak minimum of P*(*γ*, Γ, *C*, *f*, *ξ*<sub>1</sub>, Ψ, *R*, *s*)*.*


#### **3. The Principal Result**

Suppose that an interval T := [*t*<sub>1</sub>, *t*<sub>2</sub>] in **R** is given, together with a fixed point Υ<sub>1</sub> ∈ **R**<sup>*n*</sup>, a set *B* ⊂ **R**<sup>*n*</sup> and functions ℓ: **R**<sup>*n*</sup> → **R**, L(*t*, *x*, *u*): T × **R**<sup>*n*</sup> × **R**<sup>*m*</sup> → **R**, *g*(*t*, *x*, *u*): T × **R**<sup>*n*</sup> × **R**<sup>*m*</sup> → **R**<sup>*n*</sup> and *φ*(*t*, *x*, *u*): T × **R**<sup>*n*</sup> × **R**<sup>*m*</sup> → **R**<sup>*q*</sup>. Set

$$\mathcal{R} := \left\{ (t, x, u) \in \mathcal{T} \times \mathbf{R}^n \times \mathbf{R}^m \mid \phi_{\sigma}(t, x, u) \le 0 \ (\sigma \in P), \ \phi_{\varsigma}(t, x, u) = 0 \ (\varsigma \in Q) \right\}$$

where *P* := {1, . . . , *p*} and *Q* := {*p* + 1, . . . , *q*} (*p* = 0, 1, . . . , *q*). If *p* = 0, then *P* is empty, and we disregard statements about *φ<sub>σ</sub>*. If *p* = *q*, then *Q* is empty, and we disregard statements about *φ<sub>ς</sub>*.

In this section, we assume that L, *g* and *φ* = (*φ*<sub>1</sub>, . . . , *φ<sub>q</sub>*) satisfy the regularity hypotheses of Section 2. In particular, if L, *g* and *φ* have continuous first and second partial derivatives with respect to *x* and *u* on T × **R**<sup>*n*</sup> × **R**<sup>*m*</sup>, then they satisfy those hypotheses. Moreover, we assume that the function ℓ is of class *C*<sup>2</sup> on **R**<sup>*n*</sup>.

Set A := *X* × *U<sub>m</sub>*, where, as before, *X* is the space of absolutely continuous functions mapping T to **R**<sup>*n*</sup>, and *U<sub>m</sub>* is the space of all essentially bounded measurable functions mapping T to **R**<sup>*m*</sup>.

In this section, we study the nonparametric optimal control problem P(ℓ, L, *g*, Υ<sub>1</sub>, *B*, R, *n*) of minimizing the functional

$$\mathcal{J}(x, u) := \ell(x(t_2)) + \int_{t_1}^{t_2} \mathcal{L}(t, x(t), u(t))\,dt$$

over all pairs (*x*, *u*) in A verifying the constraints

$$\begin{cases} \dot{x}(t) = g(t, x(t), u(t)) \ (\text{a.e. in } \mathcal{T}), \\ x(t_1) = \Upsilon_1, \ x(t_2) \in B, \\ (t, x(t), u(t)) \in \mathcal{R} \ (t \in \mathcal{T}). \end{cases}$$

The elements (*x*, *u*) in A will be called *processes*. A process is admissible if it satisfies the restrictions.

A process (*x̂*, *û*) is a *global solution* of P(ℓ, L, *g*, Υ<sub>1</sub>, *B*, R, *n*) if it is admissible and J(*x̂*, *û*) ≤ J(*x*, *u*) for all admissible (*x*, *u*). An admissible process (*x̂*, *û*) is a *weak minimum* of P(ℓ, L, *g*, Υ<sub>1</sub>, *B*, R, *n*) if it is a minimum of J with respect to the essential supremum norm, that is, if, for some *ε* > 0, J(*x̂*, *û*) ≤ J(*x*, *u*) for all admissible processes satisfying ‖(*x*, *u*) − (*x̂*, *û*)‖<sub>∞</sub> < *ε*.

Let Ψ: **R**<sup>*n*</sup> → **R**<sup>*n*</sup> be any twice continuously differentiable function such that *B* ⊂ Ψ(**R**<sup>*n*</sup>). We connect the nonparametric optimal control problem P(ℓ, L, *g*, Υ<sub>1</sub>, *B*, R, *n*) with the parametric optimal control problem of Section 2, denoted by *P*(*γ*, Γ, *C*, *f*, *ξ*<sub>1</sub>, Ψ, *R*, *s*), via the following data: *γ* = ℓ ∘ Ψ, Γ = L, *C* = Ψ<sup>−1</sup>(*B*), *f* = *g*, *ξ*<sub>1</sub> = Υ<sub>1</sub>, Ψ the function given above, *R* = R and *s* = *n*.

**Lemma 1.** *The following conditions are satisfied:*


$$
\mathcal{J}(\mathbf{x}, \boldsymbol{u}) = I(z\_a).
$$

*(iii) If ẑ*<sub>*â*</sub> *solves P*(*γ*, Γ, *C*, *f*, *ξ*<sub>1</sub>, Ψ, *R*, *n*)*, then* (*x̂*, *û*) *solves* P(ℓ, L, *g*, Υ<sub>1</sub>, *B*, R, *n*)*.*

**Proof.** Condition (i) follows from the definitions of the problems. In order to prove (ii), note that, if *z<sub>a</sub>* is an admissible process of *P*(*γ*, Γ, *C*, *f*, *ξ*<sub>1</sub>, Ψ, *R*, *n*), then, by (i), (*x*, *u*) is an admissible process of P(ℓ, L, *g*, Υ<sub>1</sub>, *B*, R, *n*) and *x*(*t*<sub>2</sub>) = Ψ(*a*). Then,

$$\begin{aligned} \mathcal{J}(x, u) &= \ell(x(t_2)) + \int_{t_1}^{t_2} \mathcal{L}(t, x(t), u(t))\,dt \\ &= \ell(\Psi(a)) + \int_{t_1}^{t_2} \Gamma(t, x(t), u(t))\,dt \\ &= \gamma(a) + \int_{t_1}^{t_2} \Gamma(t, x(t), u(t))\,dt = I(z_a). \end{aligned}$$

Finally, in order to prove (iii), let *z<sub>a</sub>* be an admissible process of *P*(*γ*, Γ, *C*, *f*, *ξ*<sub>1</sub>, Ψ, *R*, *n*). By (i), (*x̂*, *û*) and (*x*, *u*) are admissible processes of P(ℓ, L, *g*, Υ<sub>1</sub>, *B*, R, *n*). Then, by (ii) and the optimality of *ẑ*<sub>*â*</sub>,

$$\mathcal{J}(\hat{x}, \hat{u}) = I(\hat{z}_{\hat{a}}) \le I(z_a) = \mathcal{J}(x, u).$$

Corollary 1 below is a straightforward consequence of Theorem 1 and Lemma 1. It provides sufficient conditions for weak minima of the nonparametric problem P(ℓ, L, *g*, Υ<sub>1</sub>, *B*, R, *n*). It is worth observing that, as in Theorem 1, the proposed optimal control is not necessarily continuous but only measurable.

**Corollary 1.** *Let* Ψ: **R**<sup>*n*</sup> → **R**<sup>*n*</sup> *be any twice continuously differentiable function such that B* ⊂ Ψ(**R**<sup>*n*</sup>)*, and let P*(*γ*, Γ, *C*, *f*, *ξ*<sub>1</sub>, Ψ, *R*, *n*) *be the parametric optimal control problem defined before Lemma 1. Let ẑ*<sub>*â*</sub> *be an admissible process of P*(*γ*, Γ, *C*, *f*, *ξ*<sub>1</sub>, Ψ, *R*, *n*)*. Suppose that i*(·, *x̂*(·), *û*(·)) *is piecewise constant on* T*, that there exist ω* ∈ *X, ν* ∈ *U<sub>q</sub> satisfying ν<sub>σ</sub>*(*t*) ≥ 0 *and ν<sub>σ</sub>*(*t*)*ϕ<sub>σ</sub>*(*t*, *x̂*(*t*), *û*(*t*)) = 0 (*σ* ∈ *P*, *t* ∈ T)*, and two positive numbers δ*, *ε such that*

$$
\dot{\omega}(t) = -\mathcal{H}\_{\mathbf{x}}^\*(t, \mathbf{\hat{x}}(t), \mathbf{\hat{u}}(t), \omega(t), \nu(t)) \text{ (a.e. in } \mathcal{T}),
$$

$$\mathcal{H}_u^*(t, \hat{x}(t), \hat{u}(t), \omega(t), \nu(t)) = 0 \ (t \in \mathcal{T}),$$

*and the following conditions are satisfied:*


*Then,* (*x̂*, *û*) *is a weak minimum of* P(ℓ, L, *g*, Υ<sub>1</sub>, *B*, R, *n*)*.*

Examples 1 and 2 below show how even a non-expert can apply Corollary 1. Both examples concern an inequality–equality constrained optimal control problem in which one has to verify that an element (*x̂*, *û*, *ω*, *ν*) satisfies the sufficient conditions

$$
\dot{\omega}(t) = -\mathcal{H}_x^*(t, \hat{x}(t), \hat{u}(t), \omega(t), \nu(t)) \ (\text{a.e. in } \mathcal{T}), \quad \mathcal{H}_u^*(t, \hat{x}(t), \hat{u}(t), \omega(t), \nu(t)) = 0 \ (t \in \mathcal{T}),
$$

and that this element also satisfies conditions (i), (ii), (iii), (iv) and (v) of Corollary 1, implying that (*x̂*, *û*) is a weak minimum of P(ℓ, L, *g*, Υ<sub>1</sub>, *B*, R, *n*).

**Example 1.** *Consider the nonparametric optimal control problem* P(ℓ, L, *g*, Υ<sub>1</sub>, *B*, R, *n*) *of minimizing the functional*

$$\mathcal{J}(x, u) = x^2(1) - x(1) + \int_0^1 \{\exp(tu(t)) + \sinh x(t)\}\,dt$$

*over all* (*x*, *u*) *in* A *verifying the constraints*

$$\begin{cases} \dot{x}(t) = u(t) \ \text{almost everywhere in } [0,1], \\ x(0) = 0, \ x(1) \in (-\infty, 0], \\ (t, x(t), u(t)) \in \mathcal{R} \ (t \in [0,1]), \end{cases}$$

*where*

$$\mathcal{R} := \{(t, \mathbf{x}, u) \in [0, 1] \times \mathbf{R} \times \mathbf{R} \mid (3/2)u^2 - \mathbf{x}^2 - \exp(-\mathbf{x}) - \mathbf{x} + 1 \le 0\},$$

$$\mathcal{A} := \mathbf{X} \times \mathcal{U}\_{\mathbf{1}}.$$

$$X := \{\mathbf{x} : [0, 1] \to \mathbf{R} \mid \mathbf{x} \text{ is absolutely continuous on } [0, 1]\},$$

$$\mathcal{U}\_1 := \{u \colon [0, 1] \to \mathbf{R} \mid u \text{ is essentially bounded on } [0, 1]\}.$$

In this case, the data of the proposed nonparametric problem are given by T = [0, 1], *m* = 1, *p* = 1, *q* = 1, ℓ(*x*) = *x*<sup>2</sup> − *x*, L(*t*, *x*, *u*) = exp(*tu*) + sinh *x*, *g*(*t*, *x*, *u*) = *u*, Υ<sub>1</sub> = 0, *B* = (−∞, 0], R = {(*t*, *x*, *u*) ∈ T × **R** × **R** | (3/2)*u*<sup>2</sup> − *x*<sup>2</sup> − exp(−*x*) − *x* + 1 ≤ 0} and *n* = 1. Observe that

$$\phi_1(t, x, u) = (3/2)u^2 - x^2 - \exp(-x) - x + 1.$$

The functions L, *g*, *φ* = *φ*<sub>1</sub> and their first and second derivatives relative to *x* and *u* are continuous on T × **R** × **R**. Additionally, the function ℓ is *C*<sup>2</sup> on **R**.

Moreover, one can verify that the process (*x̂*, *û*) ≡ (0, 0) is admissible for P(ℓ, L, *g*, Υ<sub>1</sub>, *B*, R, *n*). Let Ψ: **R** → **R** be given by Ψ(*b*) := *b*. Clearly, Ψ is *C*<sup>2</sup> on **R** and *B* ⊂ Ψ(**R**). The connected parametric problem *P*(*γ*, Γ, *C*, *f*, *ξ*<sub>1</sub>, Ψ, *R*, *s*) has the following data: *γ* = ℓ ∘ Ψ, Γ = L, *C* = Ψ<sup>−1</sup>(*B*), *f* = *g*, *ξ*<sub>1</sub> = Υ<sub>1</sub>, Ψ the function given above, *R* = R and *s* = *n*.

Observe that, if we set *â* := 0, then *ẑ*<sub>*â*</sub> = (*x̂*, *û*, *â*) ≡ (0, 0, 0) is admissible for *P*(*γ*, Γ, *C*, *f*, *ξ*<sub>1</sub>, Ψ, *R*, *n*). Moreover, *i*(·, *x̂*(·), *û*(·)) ≡ {1} is constant on T. Let *ω*(*t*) ≡ *t*, *ν*<sub>1</sub> ≡ 1 and observe that (*ω*, *ν*) ∈ *X* × *U*<sub>1</sub>, *ν<sub>σ</sub>* ≥ 0 and *ν<sub>σ</sub>*(*t*)*ϕ<sub>σ</sub>*(*t*, *x̂*(*t*), *û*(*t*)) = 0 (*t* ∈ T, *σ* = 1). Recall that *ϕ* = *φ*.

Now,

$$\mathcal{H}(t, x, u, \omega, \nu) = \omega u - \exp(tu) - \sinh x - \nu_1[(3/2)u^2 - x^2 - \exp(-x) - x + 1]$$

and observe that

$$\mathcal{H}_x(t, x, u, \omega, \nu) = -\cosh x - \nu_1[-2x + \exp(-x) - 1],$$

$$\mathcal{H}_u(t, x, u, \omega, \nu) = \omega - t\exp(tu) - 3\nu_1 u.$$
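As a quick numerical sanity check (added here for the reader; not part of the original computation), the displayed partial derivatives and the choice *ω*(*t*) = *t*, *ν*<sub>1</sub> ≡ 1 can be verified with central finite differences:

```python
import math

def H(t, x, u, w, nu1):
    # Augmented Hamiltonian of Example 1 (w stands for omega, nu1 for nu_1)
    return (w * u - math.exp(t * u) - math.sinh(x)
            - nu1 * (1.5 * u * u - x * x - math.exp(-x) - x + 1.0))

def Hx(t, x, u, w, nu1):
    # displayed formula for H_x
    return -math.cosh(x) - nu1 * (-2.0 * x + math.exp(-x) - 1.0)

def Hu(t, x, u, w, nu1):
    # displayed formula for H_u
    return w - t * math.exp(t * u) - 3.0 * nu1 * u

# Central finite differences agree with the displayed partial derivatives.
h = 1e-6
for (t, x, u) in [(0.3, 0.1, -0.2), (0.9, -0.5, 0.4)]:
    w, nu1 = t, 1.0
    dx = (H(t, x + h, u, w, nu1) - H(t, x - h, u, w, nu1)) / (2.0 * h)
    du = (H(t, x, u + h, w, nu1) - H(t, x, u - h, w, nu1)) / (2.0 * h)
    assert abs(dx - Hx(t, x, u, w, nu1)) < 1e-6
    assert abs(du - Hu(t, x, u, w, nu1)) < 1e-6

# Along (x_hat, u_hat) = (0, 0) with omega(t) = t and nu_1 = 1:
for t in (0.0, 0.25, 0.5, 1.0):
    assert abs(Hu(t, 0.0, 0.0, t, 1.0)) < 1e-12          # H_u = 0 on T
    assert abs(Hx(t, 0.0, 0.0, t, 1.0) + 1.0) < 1e-12    # -H_x = 1 = d/dt omega(t)
```

The last loop confirms numerically that the adjoint equation and the stationarity condition of the corollary hold along the candidate process.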

Then,

$$
\dot{\omega}(t) = -\mathcal{H}_x(t, \hat{x}(t), \hat{u}(t), \omega(t), \nu(t)) \ (\text{a.e. in } \mathcal{T}) \quad \text{and} \quad \mathcal{H}_u(t, \hat{x}(t), \hat{u}(t), \omega(t), \nu(t)) = 0 \ (t \in \mathcal{T})
$$

and hence (*x̂*, *û*, *ω*, *ν*) verifies the first-order sufficiency conditions of Corollary 1. Since Ψ(*b*) = *b* (*b* ∈ **R**), we have *γ*(*b*) = *b*<sup>2</sup> − *b* (*b* ∈ **R**). Then,

$$\gamma'(\hat{a}) + \Psi'(\hat{a})\omega(1) = 0$$

and hence condition (i) of Corollary 1 is verified. Moreover, one can verify that

$$
\omega(1)\Psi''(\hat{a}; h) = 0 \ \text{for all } h \in \mathbf{R}
$$

and then condition (ii) of Corollary 1 is verified. Now, for all (*t*, *x*, *u*) ∈ T × **R** × **R**,

$$\mathcal{H}(t, \mathbf{x}, u, \omega(t), \nu(t)) = tu - \exp(tu) - \sinh \mathbf{x} - \left[ (3/2)u^2 - \mathbf{x}^2 - \exp(-\mathbf{x}) - \mathbf{x} + 1 \right]$$

and hence, for all *t* ∈ T ,

$$\mathcal{H}_{uu}(t, \hat{x}(t), \hat{u}(t), \omega(t), \nu(t)) = -t^2 - 3 \le 0$$

implying that (*x*ˆ, *u*ˆ, *ω*, *ν*) satisfies condition (iii) of Corollary 1.

Additionally, note that, for all *t* ∈ T ,

$$f_x(t, \hat{x}(t), \hat{u}(t)) = 0 \quad \text{and} \quad f_u(t, \hat{x}(t), \hat{u}(t)) = 1,$$

$$
\varphi_x(t, \hat{x}(t), \hat{u}(t)) = 0 \quad \text{and} \quad \varphi_u(t, \hat{x}(t), \hat{u}(t)) = 0.
$$

Consequently, *Y*(*ẑ*<sub>*â*</sub>) is given by all *w<sub>α</sub>* ∈ *X* × *L*<sup>2</sup>(T; **R**) × **R** verifying

$$\begin{cases} \text{ } \dot{y}(t) = v(t) \text{ (a.e. in } \mathcal{T}). \\ y(0) = 0, \ y(1) = \alpha. \end{cases} $$

In addition, observe that, for all (*t*, *x*, *u*) ∈ T × **R** × **R**,

$$\mathcal{F}(t, \mathbf{x}, u) = -tu + \exp(tu) + (3/2)u^2 + \sinh x - x^2 - \exp(-x) - 2x + 1$$

and, for all *t* ∈ T ,

$$\mathcal{F}_{xx}(t, \hat{x}(t), \hat{u}(t)) = -3, \quad \mathcal{F}_{xu}(t, \hat{x}(t), \hat{u}(t)) = 0, \quad \mathcal{F}_{uu}(t, \hat{x}(t), \hat{u}(t)) = t^2 + 3.$$

Thus, for all *w<sub>α</sub>* ∈ *Y*(*ẑ*<sub>*â*</sub>),

$$J''(\hat{z}_{\hat{a}}; w_\alpha) = 2\alpha^2 + \int_0^1 3\{v^2(t) - y^2(t)\}\,dt + \int_0^1 t^2 v^2(t)\,dt \ge 2\alpha^2 + \int_0^1 3\{\dot{y}^2(t) - y^2(t)\}\,dt.$$
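The last lower bound is strictly positive on nonzero critical directions; one way to see this (a sketch added for the reader, using a standard Cauchy–Schwarz estimate, which may differ from the original argument) is as follows:

```latex
% Since y(0) = 0 and y(t) = \int_0^t \dot y(s)\,ds on [0,1], Cauchy--Schwarz gives
y^2(t) \le t \int_0^1 \dot y^2(s)\,ds
\quad\Longrightarrow\quad
\int_0^1 y^2(t)\,dt \le \frac{1}{2}\int_0^1 \dot y^2(t)\,dt,
% so that
2\alpha^2 + \int_0^1 3\{\dot y^2(t) - y^2(t)\}\,dt
  \;\ge\; 2\alpha^2 + \frac{3}{2}\int_0^1 \dot y^2(t)\,dt \;\ge\; 0,
% with equality only if \alpha = 0 and \dot y \equiv 0; in that case y \equiv 0 and
% J''(\hat z_{\hat a}; w_\alpha) = \int_0^1 (t^2 + 3)v^2(t)\,dt > 0
% whenever v \not\equiv 0.
```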

Hence,

$$J''(\hat{z}_{\hat{a}}; w_\alpha) > 0$$

for all *w<sub>α</sub>* ∈ *Y*(*ẑ*<sub>*â*</sub>) with *w<sub>α</sub>* ≢ (0, 0, 0), and hence condition (iv) of Corollary 1 is fulfilled. Now, note that, if *z<sub>a</sub>* is admissible, then, for all *t* ∈ T,

$$E(t, \hat{x}(t), \hat{u}(t), u(t)) = -tu(t) + \exp(tu(t)) + (3/2)u^2(t) - 1.$$

Thus, if *z<sub>a</sub>* is admissible,

$$\begin{aligned} \int_0^1 E(t, \hat{x}(t), \hat{u}(t), u(t))\,dt &= \int_0^1 \{-tu(t) + \exp(tu(t)) + (3/2)u^2(t) - 1\}\,dt \ge \int_0^1 (1/2)u^2(t)\,dt \\ &\ge \int_0^1 L(u(t) - \hat{u}(t))\,dt = \mathcal{D}(u - \hat{u}). \end{aligned}$$

Therefore, condition (v) of Corollary 1 is satisfied for any *ε* > 0 and *δ* = 1. By Corollary 1, (*x̂*, *û*) is a weak minimum of P(ℓ, L, *g*, Υ<sub>1</sub>, *B*, R, *n*).
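The pointwise inequalities behind condition (v) in Example 1 can also be spot-checked numerically on a grid (an illustrative check added here, resting on exp(*z*) ≥ 1 + *z* and √(1 + *c*²) − 1 ≤ *c*²/2; it is not a substitute for the analytic argument):

```python
import math

def excess(t, u):
    # E(t, x_hat(t), u_hat(t), u) along (x_hat, u_hat) = (0, 0) in Example 1
    return -t * u + math.exp(t * u) + 1.5 * u * u - 1.0

def L(c):
    # L(c) = sqrt(1 + |c|^2) - 1, the integrand of D
    return math.sqrt(1.0 + c * c) - 1.0

# Grid check of the chain E >= (1/2)u^2 >= L(u) used to verify condition (v):
# exp(z) >= 1 + z gives exp(tu) - 1 - tu >= 0, and sqrt(1+c^2) - 1 <= c^2/2.
for i in range(11):
    t = i / 10.0
    for j in range(-20, 21):
        u = j / 4.0                      # u ranges over [-5, 5]
        assert excess(t, u) >= 0.5 * u * u - 1e-12
        assert 0.5 * u * u >= L(u)
```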

**Example 2.** *Let us study the nonparametric optimal control problem* P(ℓ, L, *g*, Υ<sub>1</sub>, *B*, R, *n*) *of minimizing the functional*

$$\mathcal{J}(x, u) = x^2(1) + \int_0^1 \left\{\tfrac{1}{2}(u_1(t) + u_2(t))^2 + u_1(t)\right\}dt$$

*over all* (*x*, *u*) *in* A *satisfying the constraints*

$$\begin{cases} \dot{x}(t) = u_1(t) + u_2(t) + x^3(t) \ \text{almost everywhere in } [0, 1], \\ x(0) = 0, \ x(1) \in \mathbf{R}, \\ (t, x(t), u(t)) \in \mathcal{R} \ (t \in [0, 1]), \end{cases}$$

*where*

$$\mathcal{R} := \{(t, x, u) \in [0, 1] \times \mathbf{R} \times \mathbf{R}^2 \mid -\tfrac{1}{2}x^2 - u_1 \le 0, \ \sin u_2 = 0\},$$

$$\mathcal{A} := \boldsymbol{X} \times \mathcal{U}\_2,$$

$$\boldsymbol{X} := \{\mathbf{x} : [0, 1] \to \mathbf{R} \mid \mathbf{x} \text{ is absolutely continuous on } [0, 1]\},$$

$$\mathcal{U}\_2 := \{\boldsymbol{\mu} : [0, 1] \to \mathbf{R}^2 \mid \boldsymbol{\mu} \text{ is essentially bounded on } [0, 1]\}.$$

In this case, the data of the nonparametric problem are given by T = [0, 1], *m* = 2, *p* = 1, *q* = 2, ℓ(*x*) = *x*<sup>2</sup>, L(*t*, *x*, *u*) = (1/2)(*u*<sub>1</sub> + *u*<sub>2</sub>)<sup>2</sup> + *u*<sub>1</sub>, *g*(*t*, *x*, *u*) = *u*<sub>1</sub> + *u*<sub>2</sub> + *x*<sup>3</sup>, Υ<sub>1</sub> = 0, *B* = **R**, R = {(*t*, *x*, *u*) ∈ T × **R** × **R**<sup>2</sup> | −(1/2)*x*<sup>2</sup> − *u*<sub>1</sub> ≤ 0, sin *u*<sub>2</sub> = 0} and *n* = 1. Observe that

$$\phi_1(t, x, u) = -\tfrac{1}{2}x^2 - u_1 \quad \text{and} \quad \phi_2(t, x, u) = \sin u_2.$$

The functions L, *g*, *φ* = (*φ*<sub>1</sub>, *φ*<sub>2</sub>) and their first and second derivatives with respect to *x* and *u* are continuous on T × **R** × **R**<sup>2</sup>. Additionally, the function ℓ is *C*<sup>2</sup> on **R**.

Moreover, as one readily verifies, the process (*x̂*, *û*) ≡ (0, 0, 0) is admissible for P(ℓ, L, *g*, Υ<sub>1</sub>, *B*, R, *n*). Let Ψ: **R** → **R** be defined by Ψ(*b*) := *b*. Clearly, Ψ is *C*<sup>2</sup> on **R** and *B* ⊂ Ψ(**R**). The connected parametric problem *P*(*γ*, Γ, *C*, *f*, *ξ*<sub>1</sub>, Ψ, *R*, *s*) has the following data: *γ* = ℓ ∘ Ψ, Γ = L, *C* = Ψ<sup>−1</sup>(*B*), *f* = *g*, *ξ*<sub>1</sub> = Υ<sub>1</sub>, Ψ the function given above, *R* = R and *s* = *n*.

Observe that, if we set *â* := 0, then *ẑ*<sub>*â*</sub> = (*x̂*, *û*, *â*) ≡ (0, 0, 0, 0) is admissible for *P*(*γ*, Γ, *C*, *f*, *ξ*<sub>1</sub>, Ψ, *R*, *n*). Moreover, *i*(·, *x̂*(·), *û*(·)) ≡ {1} is constant on T. Let *ω* ≡ 0, *ν*<sub>1</sub> ≡ 1, *ν*<sub>2</sub> ≡ 0 and observe that (*ω*, *ν*) ∈ *X* × *U*<sub>2</sub>, *ν<sub>σ</sub>* ≥ 0 and *ν<sub>σ</sub>*(*t*)*ϕ<sub>σ</sub>*(*t*, *x̂*(*t*), *û*(*t*)) = 0 (*t* ∈ T, *σ* = 1). Recall that *ϕ* = *φ*.

Now,

$$\mathcal{H}(t, x, u, \omega, \nu) = \omega u_1 + \omega u_2 + \omega x^3 - \tfrac{1}{2}(u_1 + u_2)^2 - u_1 + \tfrac{1}{2}\nu_1 x^2 + \nu_1 u_1 - \nu_2 \sin u_2$$

and observe that

$$\mathcal{H}\_{\mathbf{x}}(t, \mathbf{x}, \boldsymbol{\mu}, \omega, \nu) = 3\omega \mathbf{x}^2 + \nu\_1 \mathbf{x},$$

$$\mathcal{H}_u(t, x, u, \omega, \nu) = (\omega - u_1 - u_2 - 1 + \nu_1, \ \omega - u_1 - u_2 - \nu_2 \cos u_2).$$

Consequently,

$$\dot{\omega}(t) = -\mathcal{H}_x(t, \hat{x}(t), \hat{u}(t), \omega(t), \nu(t)) \ (\text{a.e. in } \mathcal{T}) \quad \text{and} \quad \mathcal{H}_u(t, \hat{x}(t), \hat{u}(t), \omega(t), \nu(t)) = (0, 0) \ (t \in \mathcal{T})$$

and hence (*x̂*, *û*, *ω*, *ν*) satisfies the first-order sufficiency conditions of Corollary 1. Since Ψ(*b*) = *b* (*b* ∈ **R**), we have *γ*(*b*) = *b*<sup>2</sup> (*b* ∈ **R**). Then,

$$
\gamma'(\hat{a}) + \Psi'(\hat{a})\omega(1) = 0
$$

and then condition (i) of Corollary 1 is satisfied. Moreover, one can verify that

$$
\omega(1)\Psi''(\hat{a}; h) = 0 \ \text{for all } h \in \mathbf{R}
$$

and hence condition (ii) of Corollary 1 is fulfilled. Now, for all (*t*, *x*, *u*) ∈ T × **R** × **R** 2 ,

$$\mathcal{H}(t, \mathbf{x}, \boldsymbol{\mu}, \omega(t), \nu(t)) = -\frac{1}{2}(\boldsymbol{\mu}\_1 + \boldsymbol{\mu}\_2)^2 + \frac{1}{2}\mathbf{x}^2$$

and hence, for all *t* ∈ T ,

$$\mathcal{H}_{uu}(t, \hat{x}(t), \hat{u}(t), \omega(t), \nu(t)) = \begin{pmatrix} -1 & -1 \\ -1 & -1 \end{pmatrix} \le 0$$

implying that (*x̂*, *û*, *ω*, *ν*) verifies condition (iii) of Corollary 1. Additionally, note that, for all *t* ∈ T,

$$f_x(t, \hat{x}(t), \hat{u}(t)) = 0 \quad \text{and} \quad f_u(t, \hat{x}(t), \hat{u}(t)) = (1, 1),$$

$$
\varphi_x(t, \hat{x}(t), \hat{u}(t)) = \begin{pmatrix} 0 \\ 0 \end{pmatrix} \quad \text{and} \quad \varphi_u(t, \hat{x}(t), \hat{u}(t)) = \begin{pmatrix} -1 & 0 \\ 0 & 1 \end{pmatrix}.
$$

Therefore, *Y*(*ẑ*<sub>*â*</sub>) is given by all *w<sub>α</sub>* ∈ *X* × *L*<sup>2</sup>(T; **R**<sup>2</sup>) × **R** verifying

$$\begin{cases} \dot{y}(t) = v_1(t) + v_2(t) \ (\text{a.e. in } \mathcal{T}), \\ y(0) = 0, \ y(1) = \alpha, \\ -v_1(t) \le 0, \ v_2(t) = 0 \ (\text{a.e. in } \mathcal{T}). \end{cases}$$

In addition, observe that, for all (*t*, *x*, *u*) ∈ T × **R** × **R** 2 ,

$$\mathcal{F}(t, x, u) = \tfrac{1}{2}(u_1 + u_2)^2 - \tfrac{1}{2}x^2$$

and, for all *t* ∈ T ,

$$\mathcal{F}_{xx}(t, \hat{x}(t), \hat{u}(t)) = -1, \quad \mathcal{F}_{xu}(t, \hat{x}(t), \hat{u}(t)) = (0, 0), \quad \mathcal{F}_{uu}(t, \hat{x}(t), \hat{u}(t)) = \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}.$$

Thus, for all *w<sub>α</sub>* ∈ *Y*(*ẑ*<sub>*â*</sub>),

$$J''(\hat{z}_{\hat{a}}; w_\alpha) = 2\alpha^2 + \int_0^1 \{(v_1(t) + v_2(t))^2 - y^2(t)\}\,dt = 2\alpha^2 + \int_0^1 \{\dot{y}^2(t) - y^2(t)\}\,dt.$$

Hence,

$$J''(\hat{z}_{\hat{a}}; w_\alpha) > 0$$

for all *w<sub>α</sub>* ∈ *Y*(*ẑ*<sub>*â*</sub>) with *w<sub>α</sub>* ≢ (0, 0, 0, 0), and then condition (iv) of Corollary 1 is verified. Now, note that, if *z<sub>a</sub>* is admissible, then, for all *t* ∈ T,

$$E(t, \hat{x}(t), \hat{u}(t), u(t)) = \tfrac{1}{2}(u_1(t) + u_2(t))^2.$$

Therefore, if *z<sub>a</sub>* is admissible,

$$\int_0^1 E(t, \hat{x}(t), \hat{u}(t), u(t))\,dt = \int_0^1 \tfrac{1}{2}(u_1(t) + u_2(t))^2\,dt \ge \int_0^1 L(u(t) - \hat{u}(t))\,dt = \mathcal{D}(u - \hat{u}).$$

Thus, condition (v) of Corollary 1 is verified for any *ε* > 0 and *δ* = 1. By Corollary 1, (*x̂*, *û*) is a weak minimum of P(ℓ, L, *g*, Υ<sub>1</sub>, *B*, R, *n*).
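Two of the computations in Example 2 lend themselves to a quick numerical spot-check (added for illustration; the analytic verification above is the actual argument). The quadratic form of the displayed H*<sub>uu</sub>* matrix is −(*a* + *b*)², hence negative semidefinite, and for admissible controls near *û* the constraint sin *u*<sub>2</sub> = 0 forces *u*<sub>2</sub> = 0, reducing the excess inequality to (1/2)*u*<sub>1</sub>² ≥ *L*(*u*<sub>1</sub>, 0):

```python
import math

# Condition (iii): H_uu at the candidate is [[-1, -1], [-1, -1]], whose
# quadratic form is -(a + b)^2 <= 0, i.e., negative semidefinite.
for a in (-2.0, -0.5, 0.0, 1.0, 3.0):
    for b in (-1.0, 0.0, 2.0):
        q = -a * a - 2.0 * a * b - b * b      # (a, b) H_uu (a, b)^T
        assert abs(q + (a + b) ** 2) < 1e-12
        assert q <= 0.0

def L(c1, c2):
    # L(c) = sqrt(1 + |c|^2) - 1 for c = (c1, c2)
    return math.sqrt(1.0 + c1 * c1 + c2 * c2) - 1.0

# Condition (v): for admissible controls with u2 = 0 (forced by sin u2 = 0
# near u_hat), the excess inequality reduces to (1/2) u1^2 >= L(u1, 0).
for j in range(-20, 21):
    u1 = j / 5.0
    assert 0.5 * u1 * u1 >= L(u1, 0.0)
```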

#### **4. Supplementary Lemmas**

We now state three supplementary lemmas that will be fundamental in proving Theorem 1. These lemmas are direct consequences of Lemmas 3.1–3.3 of [23].

If (Σ*<sub>n</sub>*) is a sequence of measurable functions and Σ is a measurable function, we shall denote uniform convergence of (Σ*<sub>n</sub>*) to Σ by Σ*<sub>n</sub>* → Σ uniformly, strong convergence in *L<sup>p</sup>* by Σ*<sub>n</sub>* → Σ in *L<sup>p</sup>*, and weak convergence by Σ*<sub>n</sub>* ⇀ Σ in *L<sup>p</sup>*.

In the next three lemmas, we suppose that *û* ∈ *L*<sup>1</sup>(T; **R**<sup>*m*</sup>) and a sequence (*u<sub>q</sub>*) in *L*<sup>1</sup>(T; **R**<sup>*m*</sup>) are given such that

$$\lim_{q \to \infty} \mathcal{D}(u_q - \hat{u}) = 0 \quad \text{and} \quad d_q := \left[2\mathcal{D}(u_q - \hat{u})\right]^{1/2} > 0 \quad (q \in \mathbf{N}).$$

For all *q* ∈ **N**, define

$$v\_q := \frac{u\_q - \hat{u}}{d\_q}.$$
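To make the normalization concrete, here is a small numerical sketch of the quantities D(*u<sub>q</sub>* − *û*) and *d<sub>q</sub>*, using an illustrative sequence *u<sub>q</sub>*(*t*) = *t*/*q* with *û* ≡ 0 on [0, 1] that is not taken from the paper:

```python
import math

def D(u, n=2000):
    # D(u) = integral over [0, 1] of L(u(t)) dt via a midpoint Riemann sum
    total = 0.0
    for k in range(n):
        t = (k + 0.5) / n
        c = u(t)
        total += math.sqrt(1.0 + c * c) - 1.0
    return total / n

# Illustrative sequence (not from the paper): u_q(t) = t/q converging to u_hat = 0.
vals = []
for q in (1, 2, 4, 8):
    Dq = D(lambda t, q=q: t / q)        # D(u_q - u_hat) with u_hat = 0
    dq = math.sqrt(2.0 * Dq)            # d_q = [2 D(u_q - u_hat)]^{1/2} > 0
    assert Dq > 0.0 and dq > 0.0
    vals.append(Dq)

# D(u_q - u_hat) decreases to 0 along the sequence, as the lemmas require;
# v_q(t) = (u_q(t) - u_hat(t)) / d_q is then well defined for every q.
assert vals[0] > vals[1] > vals[2] > vals[3]
assert vals[-1] < 1e-2
```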

**Lemma 2.** *For some v̂* ∈ *L*<sup>2</sup>(T; **R**<sup>*m*</sup>) *and some subsequence of* (*u<sub>q</sub>*) *(without relabeling), v<sub>q</sub>* ⇀ *v̂ weakly in L*<sup>1</sup> *on* T*.*

**Lemma 3.** *Let A<sub>q</sub>* ∈ *L*<sup>∞</sup>(T; **R**<sup>*n*×*n*</sup>) *and B<sub>q</sub>* ∈ *L*<sup>∞</sup>(T; **R**<sup>*n*×*m*</sup>) *be matrix-valued functions for which there exist constants m*<sub>0</sub>, *m*<sub>1</sub> > 0 *such that* ‖*A<sub>q</sub>*‖<sub>∞</sub> ≤ *m*<sub>0</sub>*,* ‖*B<sub>q</sub>*‖<sub>∞</sub> ≤ *m*<sub>1</sub> (*q* ∈ **N**)*, and, for all q* ∈ **N***, denote by y<sub>q</sub> the solution of the initial value problem*

$$\dot{y}(t) = A\_q(t)y(t) + B\_q(t)v\_q(t) \text{ (a.e. in } \mathcal{T}), \quad y(t\_1) = 0.$$

*Then, there exist ζ* ∈ *L*<sup>2</sup>(T; **R**<sup>*n*</sup>) *and a subsequence (without relabeling) such that ẏ<sub>q</sub>* ⇀ *ζ weakly in L*<sup>1</sup> *on* T*; hence, if ŷ*(*t*) := ∫<sub>*t*<sub>1</sub></sub><sup>*t*</sup> *ζ*(*τ*)*dτ* (*t* ∈ T)*, then y<sub>q</sub>* → *ŷ uniformly on* T*.*

**Lemma 4.** *Suppose* $u_q \xrightarrow{L^\infty} \hat{u}$ *on* $\mathcal{T}$*, let* $\Phi_q, \Phi \in L^\infty(\mathcal{T}; \mathbf{R}^{m \times m})$*, suppose that* $\Phi_q \xrightarrow{L^\infty} \Phi$ *on* $\mathcal{T}$*,* $\Phi(t) \ge 0$ *(a.e. in* $\mathcal{T}$*), and let* $\hat{v}$ *be the function given in Lemma 2. Then,*

$$\liminf_{q \to \infty} \int_{t_1}^{t_2} v_q^*(t) \Phi_q(t) v_q(t)\,dt \ge \int_{t_1}^{t_2} \hat{v}^*(t) \Phi(t) \hat{v}(t)\,dt.$$

#### **5. Proof of Theorem 1**

The proof of Theorem 1 will be divided into two lemmas. In Lemmas 5 and 6 below, we shall suppose that all the hypotheses of Theorem 1 hold. Before stating the lemmas, let us present some definitions.

Note first that, given $x = (x_1, \ldots, x_n)^*$ in $\mathbf{R}^n$ and $a = (a_1, \ldots, a_s)^*$ in $\mathbf{R}^s$, if we set $x\mathbf{i}$, $a\mathbf{j}$ in $\mathbf{R}^{n+s}$ by $x\mathbf{i} := (x_1, \ldots, x_n, 0, \ldots, 0)^*$ and $a\mathbf{j} := (0, \ldots, 0, a_1, \ldots, a_s)^*$, then

$$x\mathbf{i} + a\mathbf{j} = (x_1, \ldots, x_n, a_1, \ldots, a_s)^* = \begin{pmatrix} x \\ a \end{pmatrix} \in \mathbf{R}^{n+s}.$$

Define $\tilde{\mathcal{F}} : \mathcal{T} \times \mathbf{R}^{n+s} \times \mathbf{R}^m \to \mathbf{R}$ by

$$
\tilde{\mathcal{F}}(t,\xi,\mu) := \frac{\gamma(\xi\_{n+1},\ldots,\xi\_{n+s})}{t\_2 - t\_1} + \mathcal{F}(t,\xi\_1,\ldots,\xi\_{n},\mu).
$$

Observe that the Weierstrass function $\tilde{E} : \mathcal{T} \times \mathbf{R}^{n+s} \times \mathbf{R}^m \times \mathbf{R}^m \to \mathbf{R}$ of $\tilde{\mathcal{F}}$ is given by

$$
\tilde{E}(t, \xi, u, v) := \tilde{\mathcal{F}}(t, \xi, v) - \tilde{\mathcal{F}}(t, \xi, u) - \tilde{\mathcal{F}}_u(t, \xi, u)(v - u).
$$
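A basic property of the Weierstrass excess, used in the proof below, is that it is nonnegative whenever the integrand is convex in its last (control) argument. The following sketch checks this numerically for a toy integrand, invented purely for illustration; it is not the paper's $\tilde{\mathcal{F}}$:

```python
# Toy check (invented integrand): if F is convex in the control u, the
# Weierstrass excess
#     E(t, xi, u, v) = F(t, xi, v) - F(t, xi, u) - F_u(t, xi, u)(v - u)
# is nonnegative.  Here F(t, xi, u) = u**4 + xi*u, which is convex in u.

def F(t, xi, u):
    return u ** 4 + xi * u

def F_u(t, xi, u):  # partial derivative of F with respect to u
    return 4 * u ** 3 + xi

def excess(t, xi, u, v):
    return F(t, xi, v) - F(t, xi, u) - F_u(t, xi, u) * (v - u)

vals = [excess(0.0, xi, u, v)
        for xi in (-1.0, 0.0, 2.0)
        for u in (-1.5, 0.0, 0.7)
        for v in (-2.0, 0.3, 1.9)]
print(min(vals) >= 0.0)  # True: convexity in u forces E >= 0
```

Note that the linear term $\xi u$ cancels inside the excess, so nonnegativity here reflects exactly the convexity of $u \mapsto u^4$.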

It is not difficult to see that, for all $(t, x, u, v) \in \mathcal{T} \times \mathbf{R}^n \times \mathbf{R}^m \times \mathbf{R}^m$ and all $a$ in $\mathbf{R}^s$,

$$
\tilde{E}(t, x\mathbf{i} + a\mathbf{j}, u, v) = E(t, x, u, v).
$$

Set

$$\tilde{J}(z_a) := \omega^*(t_2)x(t_2) - \omega^*(t_1)x(t_1) + \int_{t_1}^{t_2} \tilde{\mathcal{F}}(t, x(t)\mathbf{i} + a\mathbf{j}, u(t))\,dt.$$

As one readily verifies, $J(z_a) = \tilde{J}(z_a)$ for all $z_a$ in $A$, and

$$\tilde{J}(z_a) = \tilde{J}(\hat{z}_{\hat{a}}) + \tilde{J}'(\hat{z}_{\hat{a}}; z_a - \hat{z}_{\hat{a}}) + \tilde{\mathcal{K}}(\hat{z}_{\hat{a}}; z_a) + \tilde{\mathcal{E}}(\hat{z}_{\hat{a}}; z_a), \tag{1}$$

where

$$
\begin{aligned}
\tilde{\mathcal{E}}(\hat{z}_{\hat{a}}; z_a) &:= \int_{t_1}^{t_2} \tilde{E}(t, x(t)\mathbf{i} + a\mathbf{j}, \hat{u}(t), u(t))\,dt, \\
\tilde{\mathcal{K}}(\hat{z}_{\hat{a}}; z_a) &:= \int_{t_1}^{t_2} \{\tilde{\mathcal{M}}(t, x(t)\mathbf{i} + a\mathbf{j}) + [u^*(t) - \hat{u}^*(t)]\tilde{\mathcal{N}}(t, x(t)\mathbf{i} + a\mathbf{j})\}\,dt,
\end{aligned}
$$

$$
\begin{aligned}
\tilde{J}'(\hat{z}_{\hat{a}}; z_a - \hat{z}_{\hat{a}}) := {} & \omega^*(t_2)[x(t_2) - \hat{x}(t_2)] - \omega^*(t_1)[x(t_1) - \hat{x}(t_1)] \\
& + \int_{t_1}^{t_2} \{\tilde{\mathcal{F}}_{\xi}(t, \hat{x}(t)\mathbf{i} + \hat{a}\mathbf{j}, \hat{u}(t))([x(t) - \hat{x}(t)]\mathbf{i} + [a - \hat{a}]\mathbf{j}) \\
& + \tilde{\mathcal{F}}_{u}(t, \hat{x}(t)\mathbf{i} + \hat{a}\mathbf{j}, \hat{u}(t))(u(t) - \hat{u}(t))\}\,dt,
\end{aligned}
$$

and $\tilde{\mathcal{M}}$, $\tilde{\mathcal{N}}$ are defined by

$$
\begin{aligned}
\tilde{\mathcal{M}}(t, x\mathbf{i} + a\mathbf{j}) := {} & \tilde{\mathcal{F}}(t, x\mathbf{i} + a\mathbf{j}, \hat{u}(t)) - \tilde{\mathcal{F}}(t, \hat{x}(t)\mathbf{i} + \hat{a}\mathbf{j}, \hat{u}(t)) \\
& - \tilde{\mathcal{F}}_{\xi}(t, \hat{x}(t)\mathbf{i} + \hat{a}\mathbf{j}, \hat{u}(t))([x - \hat{x}(t)]\mathbf{i} + [a - \hat{a}]\mathbf{j}),
\end{aligned}
$$

$$
\tilde{\mathcal{N}}(t, x\mathbf{i} + a\mathbf{j}) := \tilde{\mathcal{F}}^*_u(t, x\mathbf{i} + a\mathbf{j}, \hat{u}(t)) - \tilde{\mathcal{F}}^*_u(t, \hat{x}(t)\mathbf{i} + \hat{a}\mathbf{j}, \hat{u}(t)).
$$

By Taylor's theorem,

$$\tilde{\mathcal{M}}(t, x\mathbf{i} + a\mathbf{j}) = \frac{1}{2}([x^* - \hat{x}^*(t)]\mathbf{i} + [a^* - \hat{a}^*]\mathbf{j})\tilde{\mathcal{P}}(t, x\mathbf{i} + a\mathbf{j})([x - \hat{x}(t)]\mathbf{i} + [a - \hat{a}]\mathbf{j}), \tag{2a}$$

$$\tilde{\mathcal{N}}(t, x\mathbf{i} + a\mathbf{j}) = \tilde{\mathcal{Q}}(t, x\mathbf{i} + a\mathbf{j})([x - \hat{x}(t)]\mathbf{i} + [a - \hat{a}]\mathbf{j}), \tag{2b}$$

where

$$
\begin{aligned}
\tilde{\mathcal{P}}(t, x\mathbf{i} + a\mathbf{j}) &:= 2\int_0^1 (1 - \theta)\tilde{\mathcal{F}}_{\xi\xi}(t, [\hat{x}(t) + \theta(x - \hat{x}(t))]\mathbf{i} + [\hat{a} + \theta(a - \hat{a})]\mathbf{j}, \hat{u}(t))\,d\theta, \\
\tilde{\mathcal{Q}}(t, x\mathbf{i} + a\mathbf{j}) &:= \int_0^1 \tilde{\mathcal{F}}_{u\xi}(t, [\hat{x}(t) + \theta(x - \hat{x}(t))]\mathbf{i} + [\hat{a} + \theta(a - \hat{a})]\mathbf{j}, \hat{u}(t))\,d\theta.
\end{aligned}
$$
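The maps $\tilde{\mathcal{P}}$ and $\tilde{\mathcal{Q}}$ above are integral-form Taylor remainders. As a numerical sanity check, the scalar analogue of the identity behind (2a) can be verified with an arbitrary smooth toy function, chosen here only for illustration:

```python
# Scalar sanity check (toy function, not the paper's data) of the
# integral-remainder identity behind (2a):
#     g(x) - g(a) - g'(a)(x - a) = (1/2)(x - a)**2 * P(a, x),
#     P(a, x) := 2 * integral_0^1 (1 - theta) * g''(a + theta*(x - a)) dtheta.
import math

g, dg = math.sin, math.cos

def d2g(x):
    return -math.sin(x)

def P(a, x, n=100_000):
    # midpoint rule for 2 * \int_0^1 (1 - theta) g''(a + theta*(x - a)) dtheta
    h = 1.0 / n
    return 2.0 * h * sum(
        (1.0 - (k + 0.5) * h) * d2g(a + (k + 0.5) * h * (x - a))
        for k in range(n))

a, x = 0.3, 1.1
lhs = g(x) - g(a) - dg(a) * (x - a)
rhs = 0.5 * (x - a) ** 2 * P(a, x)
print(abs(lhs - rhs) < 1e-8)  # True: both sides agree up to quadrature error
```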

**Lemma 5.** *If the conclusion of Theorem 1 is false, then there exists a sequence* $(z^q_{a_q})$ *of admissible processes such that*

$$\lim_{q \to \infty} \mathcal{D}(u_q - \hat{u}) = 0 \quad \text{and} \quad d_q := \left[2\mathcal{D}(u_q - \hat{u})\right]^{1/2} > 0 \quad (q \in \mathbf{N}).$$

**Proof.** If the conclusion of Theorem 1 is false, then, for all $\rho_1, \rho_2 > 0$, there exists an admissible process $z_a$ such that

$$\|z_a - \hat{z}_{\hat{a}}\| < \rho_1 \quad \text{and} \quad I(z_a) < I(\hat{z}_{\hat{a}}) + \rho_2 \min\{|a - \hat{a}|^2, \mathcal{D}(u - \hat{u})\}. \tag{3}$$

Since

$$\nu_\sigma(t) \ge 0 \quad (\sigma \in P, \text{ a.e. in } \mathcal{T}),$$

if *z<sup>a</sup>* is admissible, then *I*(*za*) ≥ *J*(*za*). Additionally, as

$$\nu_\sigma(t)\varphi_\sigma(t, \hat{x}(t), \hat{u}(t)) = 0 \quad (\sigma \in P, \text{ a.e. in } \mathcal{T}),$$

then $I(\hat{z}_{\hat{a}}) = J(\hat{z}_{\hat{a}})$. Thus, (3) implies that, for all $\rho_1, \rho_2 > 0$, there exists an admissible $z_a$ such that

$$\|z_a - \hat{z}_{\hat{a}}\| < \rho_1 \quad \text{and} \quad J(z_a) < J(\hat{z}_{\hat{a}}) + \rho_2 \min\{|a - \hat{a}|^2, \mathcal{D}(u - \hat{u})\}.$$

Therefore, if the conclusion of Theorem 1 is false, then there exists a sequence of admissible processes $(z^q_{a_q})$ such that, for all $q \in \mathbf{N}$,

$$\|z^q_{a_q} - \hat{z}_{\hat{a}}\| < \min\{\varepsilon, 1/q\}, \quad J(z^q_{a_q}) - J(\hat{z}_{\hat{a}}) < \min\left\{\frac{|a_q - \hat{a}|^2}{q}, \frac{\mathcal{D}(u_q - \hat{u})}{q}\right\}. \tag{4}$$

The first relation in (4) assures that

$$\lim_{q \to \infty} \mathcal{D}(u_q - \hat{u}) = 0.$$

Moreover, as (*z q aq* ) is a sequence of admissible processes, we see that D(*u<sup>q</sup>* − *u*ˆ) = 0 if and only if *z <sup>q</sup>* = *z*ˆ. Hence, the second relation of (4) implies that

$$\mathcal{D}(u_q - \hat{u}) = 0 \Longrightarrow a_q \ne \hat{a}.$$

Assume that D(*u<sup>q</sup>* − *u*ˆ) = 0 for infinitely many *q*'s. We have

$$0 = x_q(t_2) - \hat{x}(t_2) = \Psi(a_q) - \Psi(\hat{a}) = \int_0^1 \Psi'(\hat{a} + \theta[a_q - \hat{a}])(a_q - \hat{a})\,d\theta, \tag{5}$$

$$0 = \Psi(a_q) - \Psi(\hat{a}) = \Psi'(\hat{a})(a_q - \hat{a}) + \int_0^1 (1 - \theta)\Psi''(\hat{a} + \theta[a_q - \hat{a}]; a_q - \hat{a})\,d\theta. \tag{6}$$

If we denote by $(a_q, \hat{a})$ the line segment in $\mathbf{R}^s$ joining the points $a_q$ and $\hat{a}$, then, by the second relation of (4), by hypothesis (i) of Theorem 1, by (6), and by the mean value theorem, there exists $\Theta_q \in (a_q, \hat{a})$ such that

$$
\begin{aligned}
0 > J(\hat{z}_{a_q}) - J(\hat{z}_{\hat{a}}) &= \gamma(a_q) - \gamma(\hat{a}) = \gamma'(\hat{a})(a_q - \hat{a}) + \frac{1}{2}(a_q - \hat{a})^*\gamma''(\Theta_q)(a_q - \hat{a}) \\
&= -\omega^*(t_2)\Psi'(\hat{a})(a_q - \hat{a}) + \frac{1}{2}(a_q - \hat{a})^*\gamma''(\Theta_q)(a_q - \hat{a}) \\
&= \int_0^1 (1 - \theta)\omega^*(t_2)\Psi''(\hat{a} + \theta[a_q - \hat{a}]; a_q - \hat{a})\,d\theta + \frac{1}{2}(a_q - \hat{a})^*\gamma''(\Theta_q)(a_q - \hat{a}).
\end{aligned}\tag{7}
$$

Select a suitable subsequence of $((a_q - \hat{a})/|a_q - \hat{a}|)$ (without relabeling) such that

$$\lim_{q \to \infty} \frac{a_q - \hat{a}}{|a_q - \hat{a}|} = \hat{\alpha} \tag{8}$$

for some *α*ˆ ∈ **R** *s* satisfying |*α*ˆ| = 1. By (5),

$$
\Psi'(\hat{a})\hat{\alpha} = 0.
$$

By (7) and (8) and hypothesis (ii) of Theorem 1, we see that

$$0 \ge \frac{1}{2}\omega^*(t_2)\Psi''(\hat{a}; \hat{\alpha}) + \frac{1}{2}\hat{\alpha}^*\gamma''(\hat{a})\hat{\alpha} \ge \frac{1}{2}\hat{\alpha}^*\gamma''(\hat{a})\hat{\alpha} = \frac{1}{2}J''(\hat{z}_{\hat{a}}; 0_{\hat{\alpha}}),$$

contradicting (iv) of Theorem 1. Consequently, we may suppose that, for all *q* ∈ **N**,

$$d_q = \left[2\mathcal{D}(u_q - \hat{u})\right]^{1/2} > 0.$$

**Lemma 6.** *If the conclusion of Theorem 1 is false, then condition (iv) of Theorem 1 is false.*

**Proof.** Let (*z q aq* ) be the sequence of admissible processes provided in Lemma 5. Hence,

$$\lim_{q \to \infty} \mathcal{D}(u_q - \hat{u}) = 0 \quad \text{and} \quad d_q = \left[2\mathcal{D}(u_q - \hat{u})\right]^{1/2} > 0 \quad (q \in \mathbf{N}).$$

Case (1): First, assume that the sequence $((a_q - \hat{a})/d_q)$ is bounded in $\mathbf{R}^s$. For all $q \in \mathbf{N}$, set

$$y_q := \frac{x_q - \hat{x}}{d_q}, \quad v_q := \frac{u_q - \hat{u}}{d_q}, \quad \sigma_q := y_q\mathbf{i} + \frac{a_q - \hat{a}}{d_q}\mathbf{j}.$$

By Lemma 2, there exist $\hat{v} \in L^2(\mathcal{T}; \mathbf{R}^m)$ and a subsequence of $(z^q_{a_q})$ (without relabeling) such that $v_q \overset{L^1}{\rightharpoonup} \hat{v}$ on $\mathcal{T}$. We have, for all $q \in \mathbf{N}$, that

$$\dot{y}\_q(t) = A\_q(t)y\_q(t) + B\_q(t)v\_q(t) \text{ (a.e. in } \mathcal{T}), \quad y\_q(t\_1) = 0,$$

where

$$A_q(t) := \int_0^1 f_x(t, \hat{x}(t) + \theta[x_q(t) - \hat{x}(t)], \hat{u}(t) + \theta[u_q(t) - \hat{u}(t)])\,d\theta,$$

$$B_q(t) := \int_0^1 f_u(t, \hat{x}(t) + \theta[x_q(t) - \hat{x}(t)], \hat{u}(t) + \theta[u_q(t) - \hat{u}(t)])\,d\theta.$$

We obtain the existence of $m_0, m_1 > 0$ such that $\|A_q\|_\infty \le m_0$, $\|B_q\|_\infty \le m_1$ $(q \in \mathbf{N})$. By Lemma 3, there exist $\zeta \in L^2(\mathcal{T}; \mathbf{R}^n)$ and some subsequence of $(z^q_{a_q})$ (we do not relabel) such that, if $\hat{y}(t) := \int_{t_1}^{t} \zeta(\tau)\,d\tau$ for all $t \in \mathcal{T}$, then

$$y_q \xrightarrow{u} \hat{y} \text{ on } \mathcal{T}. \tag{9}$$

As the sequence ((*a<sup>q</sup>* − *a*ˆ)/*dq*) is bounded in **R** *s* , then we can suppose that there exists some *α*ˆ ∈ **R** *s* such that

$$\lim_{q \to \infty} \frac{a_q - \hat{a}}{d_q} = \hat{\alpha}. \tag{10}$$

First, we shall show that

$$
\hat{y}(t_2) = \Psi'(\hat{a})\hat{\alpha}. \tag{11}
$$

Note that, for all $q \in \mathbf{N}$,

$$y\_q(t\_2) = \int\_0^1 \Psi'(\hat{a} + \theta[a\_q - \hat{a}]) \frac{(a\_q - \hat{a})}{d\_q} d\theta. \tag{12}$$

By (9), (10), and (12), as one readily verifies, (11) holds. Now, we claim that

$$J''(\hat{z}_{\hat{a}}; \hat{w}_{\hat{\alpha}}) \le 0 \quad \text{and} \quad \hat{w}_{\hat{\alpha}} = (\hat{y}, \hat{v}, \hat{\alpha}) \not\equiv (0, 0, 0). \tag{13}$$

In order to prove it, note that, by (2), (9), and (10),

$$\frac{\tilde{\mathcal{M}}(\cdot, x_q(\cdot)\mathbf{i} + a_q\mathbf{j})}{d_q^2} = \frac{1}{2}\sigma_q^*(\cdot)\tilde{\mathcal{P}}(\cdot, x_q(\cdot)\mathbf{i} + a_q\mathbf{j})\sigma_q(\cdot) \xrightarrow{L^\infty} \frac{1}{2}[\hat{y}^*(\cdot)\mathbf{i} + \hat{\alpha}^*\mathbf{j}]\tilde{\mathcal{F}}_{\xi\xi}(\cdot, \hat{x}(\cdot)\mathbf{i} + \hat{a}\mathbf{j}, \hat{u}(\cdot))[\hat{y}(\cdot)\mathbf{i} + \hat{\alpha}\mathbf{j}],$$

$$\frac{\tilde{\mathcal{N}}(\cdot, x_q(\cdot)\mathbf{i} + a_q\mathbf{j})}{d_q} = \tilde{\mathcal{Q}}(\cdot, x_q(\cdot)\mathbf{i} + a_q\mathbf{j})\sigma_q(\cdot) \xrightarrow{L^\infty} \tilde{\mathcal{F}}_{u\xi}(\cdot, \hat{x}(\cdot)\mathbf{i} + \hat{a}\mathbf{j}, \hat{u}(\cdot))[\hat{y}(\cdot)\mathbf{i} + \hat{\alpha}\mathbf{j}],$$

both on T . This fact together with Lemma 2 implies that

$$
\begin{aligned}
\lim_{q \to \infty} \frac{\tilde{\mathcal{K}}(\hat{z}_{\hat{a}}; z^q_{a_q})}{d_q^2} = \frac{1}{2}\int_{t_1}^{t_2} \{ & [\hat{y}^*(t)\mathbf{i} + \hat{\alpha}^*\mathbf{j}]\tilde{\mathcal{F}}_{\xi\xi}(t, \hat{x}(t)\mathbf{i} + \hat{a}\mathbf{j}, \hat{u}(t))[\hat{y}(t)\mathbf{i} + \hat{\alpha}\mathbf{j}] \\
& + 2\hat{v}^*(t)\tilde{\mathcal{F}}_{u\xi}(t, \hat{x}(t)\mathbf{i} + \hat{a}\mathbf{j}, \hat{u}(t))[\hat{y}(t)\mathbf{i} + \hat{\alpha}\mathbf{j}]\}\,dt.
\end{aligned}\tag{14}
$$

As (*x*ˆ, *u*ˆ, *ω*, *ν*) satisfies the first order sufficient conditions

$$
\dot{\omega}(t) = -\mathcal{H}^*_x(t, \hat{x}(t), \hat{u}(t), \omega(t), \nu(t)) \text{ (a.e. in } \mathcal{T}), \quad \mathcal{H}^*_u(t, \hat{x}(t), \hat{u}(t), \omega(t), \nu(t)) = 0 \ (t \in \mathcal{T}),
$$

and, by condition (i) of Theorem 1, we obtain

$$
\begin{aligned}
\lim_{q \to \infty} \frac{\tilde{J}'(\hat{z}_{\hat{a}}; z^q_{a_q} - \hat{z}_{\hat{a}})}{d_q^2} &= \lim_{q \to \infty} \frac{1}{d_q^2}[\omega^*(t_2)(x_q(t_2) - \hat{x}(t_2)) + \gamma'(\hat{a})(a_q - \hat{a})] \\
&= \lim_{q \to \infty} \frac{1}{d_q^2}[\omega^*(t_2)(\Psi(a_q) - \Psi(\hat{a})) - \omega^*(t_2)\Psi'(\hat{a})(a_q - \hat{a})] \\
&= \lim_{q \to \infty} \frac{1}{d_q^2}\omega^*(t_2)(\Psi(a_q) - \Psi(\hat{a}) - \Psi'(\hat{a})(a_q - \hat{a})) \\
&= \lim_{q \to \infty} \frac{1}{d_q^2}\int_0^1 (1 - \theta)\omega^*(t_2)\Psi''(\hat{a} + \theta[a_q - \hat{a}]; a_q - \hat{a})\,d\theta \\
&= \frac{1}{2}\omega^*(t_2)\Psi''(\hat{a}; \hat{\alpha}).
\end{aligned}\tag{15}
$$

Then, by (1), the fact that

$$|J(z^q_{a_q}) - J(\hat{z}_{\hat{a}})| < \min\left\{\frac{|a_q - \hat{a}|^2}{q}, \frac{\mathcal{D}(u_q - \hat{u})}{q}\right\},$$

Equation (15) and hypothesis (ii) of Theorem 1,

$$0 \ge \lim_{q \to \infty} \frac{\tilde{\mathcal{K}}(\hat{z}_{\hat{a}}; z^q_{a_q})}{d_q^2} + \liminf_{q \to \infty} \frac{\tilde{\mathcal{E}}(\hat{z}_{\hat{a}}; z^q_{a_q})}{d_q^2}. \tag{16}$$

Now, we have, for all *t* ∈ T and *q* ∈ **N**, that

$$\frac{1}{d_q^2}\tilde{E}(t, x_q(t)\mathbf{i} + a_q\mathbf{j}, \hat{u}(t), u_q(t)) = \frac{1}{2}v_q^*(t)\Phi_q(t)v_q(t),$$

where

$$\Phi_q(t) := 2\int_0^1 (1 - \theta)\tilde{\mathcal{F}}_{uu}(t, x_q(t)\mathbf{i} + a_q\mathbf{j}, \hat{u}(t) + \theta[u_q(t) - \hat{u}(t)])\,d\theta.$$

We have

$$\Phi_q(\cdot) \xrightarrow{L^\infty} \Phi(\cdot) := \tilde{\mathcal{F}}_{uu}(\cdot, \hat{x}(\cdot)\mathbf{i} + \hat{a}\mathbf{j}, \hat{u}(\cdot)) \text{ on } \mathcal{T}.$$

By condition (iii) of Theorem 1, we have

$$\tilde{\mathcal{F}}_{uu}(t, \hat{x}(t)\mathbf{i} + \hat{a}\mathbf{j}, \hat{u}(t)) = \Phi(t) \ge 0 \text{ (a.e. in } \mathcal{T}). \tag{17}$$

By the fact that

$$\|z^q_{a_q} - \hat{z}_{\hat{a}}\| < \frac{1}{q},$$

we have $u_q \xrightarrow{L^\infty} \hat{u}$ on $\mathcal{T}$. Keeping this in mind, by (17) and Lemma 4,

$$
\begin{aligned}
\liminf_{q \to \infty} \frac{\tilde{\mathcal{E}}(\hat{z}_{\hat{a}}; z^q_{a_q})}{d_q^2} &= \liminf_{q \to \infty} \frac{1}{d_q^2}\int_{t_1}^{t_2} \tilde{E}(t, x_q(t)\mathbf{i} + a_q\mathbf{j}, \hat{u}(t), u_q(t))\,dt \\
&= \frac{1}{2}\liminf_{q \to \infty} \int_{t_1}^{t_2} v_q^*(t)\Phi_q(t)v_q(t)\,dt \ge \frac{1}{2}\int_{t_1}^{t_2} \hat{v}^*(t)\Phi(t)\hat{v}(t)\,dt.
\end{aligned}\tag{18}
$$

By (16) and (18), we have

$$
\begin{aligned}
0 \ge {} & \int_{t_1}^{t_2} \{\hat{v}^*(t)\tilde{\mathcal{F}}_{uu}(t, \hat{x}(t)\mathbf{i} + \hat{a}\mathbf{j}, \hat{u}(t))\hat{v}(t) + 2\hat{v}^*(t)\tilde{\mathcal{F}}_{u\xi}(t, \hat{x}(t)\mathbf{i} + \hat{a}\mathbf{j}, \hat{u}(t))[\hat{y}(t)\mathbf{i} + \hat{\alpha}\mathbf{j}] \\
& + [\hat{y}^*(t)\mathbf{i} + \hat{\alpha}^*\mathbf{j}]\tilde{\mathcal{F}}_{\xi\xi}(t, \hat{x}(t)\mathbf{i} + \hat{a}\mathbf{j}, \hat{u}(t))[\hat{y}(t)\mathbf{i} + \hat{\alpha}\mathbf{j}]\}\,dt \\
= {} & \hat{\alpha}^*\gamma''(\hat{a})\hat{\alpha} + \int_{t_1}^{t_2} \{\hat{v}^*(t)\mathcal{F}_{uu}(t, \hat{x}(t), \hat{u}(t))\hat{v}(t) + 2\hat{v}^*(t)\mathcal{F}_{ux}(t, \hat{x}(t), \hat{u}(t))\hat{y}(t) \\
& + \hat{y}^*(t)\mathcal{F}_{xx}(t, \hat{x}(t), \hat{u}(t))\hat{y}(t)\}\,dt \\
= {} & \hat{\alpha}^*\gamma''(\hat{a})\hat{\alpha} + \int_{t_1}^{t_2} 2\Omega(t, \hat{x}(t), \hat{u}(t); \hat{y}(t), \hat{v}(t))\,dt = J''(\hat{z}_{\hat{a}}; \hat{w}_{\hat{\alpha}}).
\end{aligned}
$$

Now, let us prove that $\hat{w}_{\hat{\alpha}} \not\equiv (0, 0, 0)$. By (16) and hypothesis (v) of Theorem 1, we have

$$0 \ge \lim_{q \to \infty} \frac{\tilde{\mathcal{K}}(\hat{z}_{\hat{a}}; z^q_{a_q})}{d_q^2} + \liminf_{q \to \infty} \frac{\delta}{d_q^2}\mathcal{D}(u_q - \hat{u}) = \lim_{q \to \infty} \frac{\tilde{\mathcal{K}}(\hat{z}_{\hat{a}}; z^q_{a_q})}{d_q^2} + \frac{\delta}{2}.$$

Keeping this in mind together with (14), if we assume that $\hat{w}_{\hat{\alpha}} \equiv (0, 0, 0)$, then $\delta$ would be nonpositive, which is a contradiction, and this proves (13). Now, let us show that

$$\frac{d}{dt}\hat{y}(t) = f_x(t, \hat{x}(t), \hat{u}(t))\hat{y}(t) + f_u(t, \hat{x}(t), \hat{u}(t))\hat{v}(t) \text{ (a.e. in } \mathcal{T}). \tag{19}$$

In fact, since

$$A_q(\cdot) \xrightarrow{L^\infty} f_x(\cdot, \hat{x}(\cdot), \hat{u}(\cdot)), \quad B_q(\cdot) \xrightarrow{L^\infty} f_u(\cdot, \hat{x}(\cdot), \hat{u}(\cdot)), \quad y_q \xrightarrow{u} \hat{y}, \quad v_q \overset{L^1}{\rightharpoonup} \hat{v},$$

all on T , we see that

$$\dot{y}_q(\cdot) \overset{L^1}{\rightharpoonup} f_x(\cdot, \hat{x}(\cdot), \hat{u}(\cdot))\hat{y}(\cdot) + f_u(\cdot, \hat{x}(\cdot), \hat{u}(\cdot))\hat{v}(\cdot) \text{ on } \mathcal{T}.$$

By Lemma 3, $\dot{y}_q \overset{L^1}{\rightharpoonup} \zeta = \frac{d\hat{y}}{dt}$ on $\mathcal{T}$. Consequently, (19) is fulfilled. Additionally, we claim that

i. $\varphi_{\sigma x}(t, \hat{x}(t), \hat{u}(t))\hat{y}(t) + \varphi_{\sigma u}(t, \hat{x}(t), \hat{u}(t))\hat{v}(t) \le 0$ (a.e. in $\mathcal{T}$, $\sigma \in i(t, \hat{x}(t), \hat{u}(t))$).
ii. $\varphi_{\varsigma x}(t, \hat{x}(t), \hat{u}(t))\hat{y}(t) + \varphi_{\varsigma u}(t, \hat{x}(t), \hat{u}(t))\hat{v}(t) = 0$ (a.e. in $\mathcal{T}$, $\varsigma \in Q$).

As one readily verifies, (i) and (ii) above follow by repeating the proofs of (13)–(15) of [24].

Hence, from (11), (19), and (i) and (ii) above, we see that $\hat{w}_{\hat{\alpha}} \in \mathcal{Y}(\hat{z}_{\hat{a}})$. This fact combined with (13) contradicts condition (iv) of Theorem 1.

Case (2): Now, suppose that the sequence ((*a<sup>q</sup>* − *a*ˆ)/*dq*) is not bounded. Then,

$$\lim_{q \to \infty} \left|\frac{a_q - \hat{a}}{d_q}\right| = +\infty. \tag{20}$$

Select a suitable subsequence of $((a_q - \hat{a})/|a_q - \hat{a}|)$ (without relabeling) and $\tilde{\alpha} \in \mathbf{R}^s$ satisfying $|\tilde{\alpha}| = 1$ such that

$$\lim_{q \to \infty} \frac{a_q - \hat{a}}{|a_q - \hat{a}|} = \tilde{\alpha}. \tag{21}$$

For all *q* ∈ **N** and *t* ∈ T , set

$$\tilde{\sigma}_q(t) := \frac{x_q(t) - \hat{x}(t)}{|a_q - \hat{a}|}\mathbf{i} + \frac{a_q - \hat{a}}{|a_q - \hat{a}|}\mathbf{j}.$$

By Lemma 2 and (20),

$$\frac{x_q(\cdot) - \hat{x}(\cdot)}{|a_q - \hat{a}|} = y_q(\cdot) \cdot \frac{d_q}{|a_q - \hat{a}|} \xrightarrow{u} \hat{y}(\cdot) \cdot 0 = 0 \text{ on } \mathcal{T}. \tag{22}$$

For all *q* ∈ **N**, we have

$$\frac{x_q(t_2) - \hat{x}(t_2)}{|a_q - \hat{a}|} = \int_0^1 \Psi'(\hat{a} + \theta[a_q - \hat{a}])\left(\frac{a_q - \hat{a}}{|a_q - \hat{a}|}\right)d\theta. \tag{23}$$

By (21)–(23),

$$
\Psi'(\hat{a})\tilde{\alpha} = 0. \tag{24}
$$

Now, by (2), (21), and (22),

$$\frac{\tilde{\mathcal{M}}(\cdot, x_q(\cdot)\mathbf{i} + a_q\mathbf{j})}{|a_q - \hat{a}|^2} = \frac{1}{2}\tilde{\sigma}_q^*(\cdot)\tilde{\mathcal{P}}(\cdot, x_q(\cdot)\mathbf{i} + a_q\mathbf{j})\tilde{\sigma}_q(\cdot) \xrightarrow{L^\infty} \frac{1}{2}[\tilde{\alpha}^*\mathbf{j}]\tilde{\mathcal{F}}_{\xi\xi}(\cdot, \hat{x}(\cdot)\mathbf{i} + \hat{a}\mathbf{j}, \hat{u}(\cdot))[\tilde{\alpha}\mathbf{j}] = \frac{\tilde{\alpha}^*\gamma''(\hat{a})\tilde{\alpha}}{2(t_2 - t_1)},$$

$$\begin{aligned} \frac{\tilde{\mathcal{N}}(\cdot,\mathbf{x}\_{\emptyset}(\cdot)\mathbf{i}+a\_{\emptyset}\mathbf{j})}{|a\_{\emptyset}-\hat{\mathfrak{a}}|} &=& \tilde{\mathcal{Q}}(\cdot,\mathbf{x}\_{\emptyset}(\cdot)\mathbf{i}+a\_{\emptyset}\mathbf{j})\tilde{\mathcal{a}}\_{\emptyset}(\cdot) \\ &\stackrel{L^{\infty}}{\longrightarrow} \mathcal{F}\_{\mathsf{u}\_{\emptyset}^{\mathsf{T}}}(\cdot,\mathbf{x}(\cdot)\mathbf{i}+\mathfrak{a}\mathbf{j},\mathfrak{a}(\cdot))\mathbf{0}\_{\emptyset} = \mathbf{0} \end{aligned}$$

both on $\mathcal{T}$. Combining this fact with Lemma 2, we obtain

$$
\begin{aligned}
\lim_{q \to \infty} \frac{\tilde{\mathcal{K}}(\hat{z}_{\hat{a}}; z^q_{a_q})}{|a_q - \hat{a}|^2} &= \frac{1}{2}\tilde{\alpha}^*\gamma''(\hat{a})\tilde{\alpha} + \lim_{q \to \infty} \int_{t_1}^{t_2} \frac{d_q}{|a_q - \hat{a}|} \cdot v_q^*(t)\frac{\tilde{\mathcal{N}}(t, x_q(t)\mathbf{i} + a_q\mathbf{j})}{|a_q - \hat{a}|}\,dt \\
&= \frac{1}{2}\tilde{\alpha}^*\gamma''(\hat{a})\tilde{\alpha}.
\end{aligned}\tag{25}
$$

As in (15), we have

$$\lim_{q \to \infty} \frac{\tilde{J}'(\hat{z}_{\hat{a}}; z^q_{a_q} - \hat{z}_{\hat{a}})}{|a_q - \hat{a}|^2} = \frac{1}{2}\omega^*(t_2)\Psi''(\hat{a}; \tilde{\alpha}). \tag{26}$$

In addition, by (1), (4), and (26) and condition (ii) of Theorem 1,

$$0 \ge \lim_{q \to \infty} \frac{\tilde{\mathcal{K}}(\hat{z}_{\hat{a}}; z^q_{a_q})}{|a_q - \hat{a}|^2} + \liminf_{q \to \infty} \frac{\tilde{\mathcal{E}}(\hat{z}_{\hat{a}}; z^q_{a_q})}{|a_q - \hat{a}|^2}. \tag{27}$$

Hence, as $\tilde{\mathcal{E}}(\hat{z}_{\hat{a}}; z^q_{a_q}) \ge 0$ $(q \in \mathbf{N})$, by (25) and (27),

$$0 \ge \frac{1}{2}\tilde{\alpha}^*\gamma''(\hat{a})\tilde{\alpha} = \frac{1}{2}J''(\hat{z}_{\hat{a}}; 0_{\tilde{\alpha}}). \tag{28}$$

Accordingly, (24) and (28) contradict condition (iv) of Theorem 1.

#### **6. Discussion**

Let us point out that our hypotheses are designed so that the first and second order sufficient conditions remain closely related to the necessary conditions for optimality. For instance, the sufficient conditions

$$
\dot{\omega}(t) = -\mathcal{H}^*_x(t, \hat{x}(t), \hat{u}(t), \omega(t), \nu(t)) \text{ (a.e. in } \mathcal{T}), \quad \mathcal{H}^*_u(t, \hat{x}(t), \hat{u}(t), \omega(t), \nu(t)) = 0 \ (t \in \mathcal{T}),
$$

are the Pontryagin maximum principle in normal form. On the other hand, a cone of critical directions that we strengthen in the article is the following:

$$\mathcal{Y}(\hat{z}_{\hat{a}}) := \left\{ w_a = (y, v, a) \;\middle|\;
\begin{array}{l}
\dot{y}(t) = f_x(t, \hat{x}(t), \hat{u}(t))y(t) + f_u(t, \hat{x}(t), \hat{u}(t))v(t) \ \text{(a.e. in } \mathcal{T}), \\
y(t_1) = 0, \quad y(t_2) = \Psi'(\hat{a})a, \\
\varphi_{\sigma x}(t, \hat{x}(t), \hat{u}(t))y(t) + \varphi_{\sigma u}(t, \hat{x}(t), \hat{u}(t))v(t) \le 0 \ \text{a.e. in } \mathcal{T},\ \sigma \in i(t, \hat{x}(t), \hat{u}(t)) \ \text{with } \nu_\sigma(t) = 0, \\
\varphi_{\varsigma x}(t, \hat{x}(t), \hat{u}(t))y(t) + \varphi_{\varsigma u}(t, \hat{x}(t), \hat{u}(t))v(t) = 0 \ \text{a.e. in } \mathcal{T},\ \varsigma \in P \ \text{with } \nu_\varsigma(t) > 0 \ \text{or } \varsigma \in Q
\end{array}\right\}.$$

Here, condition (iv) of Theorem 1 and Corollary 1 asks for

$$J''(\hat{z}_{\hat{a}}; w_a) > 0 \text{ for all } w_a \in \mathcal{Y}(\hat{z}_{\hat{a}}), \ w_a \not\equiv (0, 0, 0),$$

that is, the positivity of the second variation on *Y*(*z*ˆ*a*ˆ), which can be considered as a strengthening of the second order necessary condition

$$J''(\hat{z}_{\hat{a}}; w_a) \ge 0 \text{ for all } w_a \in \mathcal{Y}(\hat{z}_{\hat{a}}).$$

Additionally, condition (i),

$$\gamma'^*(\hat{a}) + \Psi'^*(\hat{a})\omega(t_2) = 0,$$

is the classical transversality condition. It is well-known that the transversality condition is a necessary condition for a weak minimum of problem *P*(*γ*, Γ, *C*, *f* , *ξ*1, Ψ, *R*,*s*). As explained in the article, condition (iii),

$$\mathcal{H}_{uu}(t, \hat{x}(t), \hat{u}(t), \omega(t), \nu(t)) \le 0 \text{ (a.e. in } \mathcal{T}),$$

is a variant of the Legendre–Clebsch necessary condition. It is not exactly the Legendre–Clebsch necessary condition because the latter is less restrictive; that is, it requires that

$$\mathcal{H}_{uu}(t, \hat{x}(t), \hat{u}(t), \omega(t), \nu(t))$$

be negative semidefinite not almost everywhere on $\mathcal{T}$, but only on a subset related to the kernel of the linear transformation $\varphi_u(t, \hat{x}(t), \hat{u}(t))$. In the fixed-endpoint problem of the calculus of variations, it is well known that, if $\hat{x}$ is a smooth nonsingular extremal satisfying the Legendre necessary condition, then, for some $\varepsilon > 0$,

$$E(t, x, \dot{x}, u) > 0 \ \text{for } (t, x, \dot{x}, u) \in T(\hat{x}, \varepsilon), \ u \ne \dot{x},$$

is a sufficient condition for a weak minimum. Here,

$$T(\hat{x}, \varepsilon) := \left\{ (t, x, \dot{x}, u) \in \mathcal{T} \times \mathbf{R}^n \times \mathbf{R}^n \times \mathbf{R}^n \mid |x - \hat{x}(t)| < \varepsilon, \ |\dot{x} - (d/dt)\hat{x}(t)| < \varepsilon \right\}.$$

In fact, as can be seen in [10], the above condition implies that

$$E(t, x, \dot{x}, u) \ge \delta L(u - \dot{x}) \ \text{for } (t, x, \dot{x}, u) \in T(\hat{x}, \varepsilon) \tag{29}$$

for some $\delta, \varepsilon > 0$. Then, (29) implies that, for some $\delta, \varepsilon > 0$,

$$\int_{t_1}^{t_2} E(t, x(t), (d/dt)\hat{x}(t), \dot{x}(t))\,dt \ge \delta\int_{t_1}^{t_2} L(\dot{x}(t) - (d/dt)\hat{x}(t))\,dt = \delta\mathcal{D}(\dot{x} - (d/dt)\hat{x}), \tag{30}$$

whenever $x$ is such that $\|x - \hat{x}\|_1 < \varepsilon$, where

$$\|x\|_1 := \|x\|_\infty + \|\dot{x}\|_\infty.$$
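On a uniform grid, this norm can be approximated as follows; an illustrative sketch only, in which the grid, the sample function, and the forward-difference approximation of $\dot{x}$ are assumptions of the example, not part of the paper:

```python
# Discretized sketch of the C^1-type norm  ||x||_1 = ||x||_inf + ||x_dot||_inf.
# A uniform grid and forward differences are assumptions of this example.
import math

def c1_norm(xs, h):
    """xs: samples of x on a uniform grid with step h."""
    sup_x = max(abs(v) for v in xs)
    # forward differences approximate the derivative x_dot
    sup_dx = max(abs(xs[k + 1] - xs[k]) / h for k in range(len(xs) - 1))
    return sup_x + sup_dx

h = 1e-3
xs = [math.sin(2 * math.pi * k * h) for k in range(1001)]  # x(t) = sin(2*pi*t) on [0, 1]
norm = c1_norm(xs, h)
# sup|x| = 1 and sup|x_dot| = 2*pi, so the result is close to 1 + 2*pi
print(abs(norm - (1 + 2 * math.pi)) < 0.05)  # True
```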

It is worth mentioning that (30) gave us the inspiration to obtain the sufficient condition (v) of Theorem 1 and Corollary 1. Condition (ii) arises from the properties of the algorithm established to prove Theorem 1. In summary, our goal is to provide an alternative model of sufficiency. Even though our hypotheses do not necessarily leave no gap between the necessary and the sufficient conditions for optimality, we follow a classical way of obtaining sufficient conditions by strengthening the necessary ones. Finally, in [25], one can find an experimental application involving an economic model of population growth. More precisely, [25] presents an application concerning a model for a one-sector economy that takes population growth into account. In the proposed economic model, it is shown that the only factor decreasing the capital per worker is the addition of workers to the economy, and the only factor increasing the economy is the rate of production. The presence of nonlinear time-state-control mixed constraints plays a crucial role in that model; see [25] for details. For comparison, it is worth mentioning some of the literature studying necessary and sufficient conditions involving mixed constraints; some relevant works on that issue are [26–36].
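As a toy numerical illustration of the kernel-restricted Legendre–Clebsch condition discussed above, the following sketch (all data invented for illustration) exhibits a symmetric matrix that is not negative semidefinite on the whole space but is negative semidefinite on the kernel of a linear map playing the role of $\varphi_u(t, \hat{x}(t), \hat{u}(t))$:

```python
# Invented data (not from the paper): the symmetric matrix M below is NOT
# negative semidefinite on all of R^2, but it IS negative semidefinite on
# ker(phi_u) -- the weaker, kernel-restricted requirement of the
# Legendre-Clebsch necessary condition.

M = [[1.0, 0.0],
     [0.0, -1.0]]        # indefinite quadratic form on R^2
phi_u = [1.0, 0.0]       # linear map R^2 -> R; its kernel is span{(0, 1)}

def quad(v):
    """v^T M v for v in R^2."""
    return sum(v[i] * M[i][j] * v[j] for i in range(2) for j in range(2))

def in_kernel(v):
    return phi_u[0] * v[0] + phi_u[1] * v[1] == 0

not_nsd_everywhere = quad([1.0, 0.0]) > 0   # v = (1, 0) violates NSD on R^2
nsd_on_kernel = all(in_kernel([0.0, s]) and quad([0.0, s]) <= 0
                    for s in (-2.0, -0.5, 1.0, 3.0))
print(not_nsd_everywhere and nsd_on_kernel)  # True
```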

#### **7. Conclusions**

In this article, we derived sufficiency conditions for weak minima in optimal control problems of Bolza in both the parametric and the nonparametric forms. These problems include nonlinear dynamics, a fixed initial end-point, a variable final end-point, and nonlinear mixed time-state-control constraints involving inequalities and equalities. In the nonparametric optimal control problem, the final end-point is not only variable but also completely free, in the sense that it need not be confined to a parametrization; it must only be contained in the image of a twice continuously differentiable manifold. Because the left end-point is fixed, we were able to relax the hypotheses, in the sense that we arrived at essentially the same conclusions under weaker assumptions. This relaxation is relative to some recently published works whose initial left end-point is not necessarily fixed. The algorithm used to prove the main theorem of the paper is independent of classical tools such as Hamilton–Jacobi theory, the verification of bounded solutions of certain matrix Riccati equations, and extended notions of conjugate-point theory. Finally, in the parametric problem, we showed how the deviation between optimal and admissible costs is estimated by quadratic functions; in particular, the square of the norm, in the classical Banach space of integrable functions, of the deviation mentioned above is a fundamental component.

**Funding:** This research was financially supported by Dirección General de Asuntos del Personal Académico, DGAPA-UNAM, by the project PAPIIT-IN102220.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Not applicable.

**Acknowledgments:** The author is grateful to Dirección General de Asuntos del Personal Académico, Universidad Nacional Autónoma de México, for the management of funds granted by the project PAPIIT-IN102220. The author also appreciates the encouraging suggestions made by the three referees in their reports.

**Conflicts of Interest:** The author declares no conflict of interest.

#### **References**


## *Article* **Abstraction of Interpolative Reich–Rus–Ćirić-Type Contractions and Simplest Proof Technique**

**Monairah Alansari 1,† and Muhammad Usman Ali 2,\* ,†**


**Abstract:** The concept of symmetry is a very vast topic that is involved in the studies of several phenomena. This concept enables us to discuss a phenomenon in some systematic pattern depending upon its type. Each phenomenon has its own type of symmetry. The phenomenon used in the discussion of this article is a symmetric distance-measuring function. This article presents the notions of abstract interpolative Reich–Rus–Ćirić-type contractions with a shrink map and examines the existence of *φ*-fixed points for such maps in complete metric spaces. These notions are defined through special types of simulation functions. The proof technique of the results presented in this article is easier to understand than that of the existing literature on interpolative Reich–Rus–Ćirić-type contractions.

**Keywords:** *φ*-fixed points; interpolative Kannan contraction; abstract interpolative Reich–Rus–Ćirić-type contractions with a shrink map

#### **1. Introduction and Preliminaries**

Metric fixed point theory has made a significant contribution to nonlinear analysis and its applications. This branch of fixed point theory is based on the work of the famous mathematician Banach. He proved [1] that, on a complete metric space, every contraction map possesses a unique fixed point. Later on, Kannan [2] and Chatterjea [3] modified the contraction inequality to study the existence of fixed points of discontinuous self-maps on a complete metric space. Afterward, this field flourished with several interesting results. A few results have been obtained for the following aspects:


Recently, Karapınar [4] derived the interpolative Kannan contraction, which can be considered a modified form of the Kannan contraction. Inspiration from this work led several researchers to extend the existing contraction type inequalities in the pattern of interpolative Kannan contraction.

A few generalizations of contraction inequality have been obtained using some special types of simulation functions, for example [5,6].

Symmetry is a very vast topic that is involved in the studies of several phenomena. Each phenomenon has its own definition of symmetry, which helps to discuss the phenomenon in a systematic pattern. A metric is a symmetric distance-measuring function, and it is the one used in the discussion of this article. In the literature related to interpolative Kannan contractions, we have seen several results based on a symmetric distance-measuring function, for example [7,8], and on an asymmetric distance-measuring function, for example [9,10].

In this article, we use special types of simulation functions to extend interpolative Reich–Rus–Ćirić-type contraction inequalities. The proof technique of the fixed point results

**Citation:** Alansari, M.; Ali, M.U. Abstraction of Interpolative Reich–Rus–Ćirić-Type Contractions and Simplest Proof Technique. *Symmetry* **2022**, *14*, 1504. https://doi.org/10.3390/sym14081504

Academic Editor: Savin Treanta

Received: 4 July 2022 Accepted: 17 July 2022 Published: 22 July 2022


**Copyright:** © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

involving interpolative contraction-type inequalities is more complicated than that of the fixed point results involving contraction-type inequalities. With the help of a simulation function, we have tried to minimize these complications, and the presented proofs are now easier to understand.

Before moving on to the next section, we recall some basic concepts that it requires: the interpolative Kannan contraction, a few of its generalizations, well-known simulation functions, and some other notions.

Let (*V*, *dV*) be a metric space and let *Q* : *V* → *V* be a self map. Then, we have the following notions.

• A map *Q* : *V* → *V* is said to be an interpolative Kannan contraction [4], if

$$d\_V(Qk, Ql) \le \eta d\_V(k, Qk)^{\omega\_1} d\_V(l, Ql)^{1-\omega\_1}$$

for all *k*, *l* ∈ *V* with *k* ≠ *Qk*, where *η* ∈ [0, 1) and *ω*1 ∈ (0, 1).
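Read as an algorithm, this inequality can be checked by brute force on a finite sample of points. The sketch below is our own illustration, not from the paper: the checker name, the metric, and the two sample maps are assumptions, chosen only so the verdicts are easy to confirm by hand.

```python
def is_interpolative_kannan(Q, points, d, eta, omega1, tol=1e-12):
    """Check d(Qk, Ql) <= eta * d(k, Qk)**w1 * d(l, Ql)**(1 - w1)
    over all pairs of non-fixed points drawn from `points`."""
    non_fixed = [p for p in points if d(p, Q(p)) > tol]
    for k in non_fixed:
        for l in non_fixed:
            lhs = d(Q(k), Q(l))
            rhs = eta * d(k, Q(k))**omega1 * d(l, Q(l))**(1 - omega1)
            if lhs > rhs + tol:
                return False
    return True

d = lambda a, b: abs(a - b)
pts = [0.0, 1.0, 2.0, 3.0]

# A constant map satisfies the inequality trivially (the left side is always 0).
const = lambda x: 5.0
print(is_interpolative_kannan(const, pts, d, eta=0.5, omega1=0.5))  # True

# A unit translation violates it: lhs = |k - l| can reach 3, while rhs = eta < 1.
shift = lambda x: x + 1.0
print(is_interpolative_kannan(shift, pts, d, eta=0.5, omega1=0.5))  # False
```

Such a finite check can only refute the inequality on the sampled pairs; it does not prove it on all of *V*.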

Later on, it was observed by Karapinar et al. [11] that the above inequality does not ensure the existence of a unique fixed point of a map in complete metric space. Hence, to discuss the uniqueness of a fixed point, the above inequality was redefined in the following way.

• A map *Q* : *V* → *V* is said to be an improved interpolative Kannan contraction [11], if

$$d\_V(Qk, Ql) \le \eta d\_V(k, Qk)^{\omega\_1} d\_V(l, Ql)^{1-\omega\_1}$$

for all *k*, *l* ∈ *V* \ *Fix*(*Q*), where *η* ∈ [0, 1), *ω*1 ∈ (0, 1) and *Fix*(*Q*) = {*k* ∈ *V* : *Qk* = *k*}.
• A map *Q* : *V* → *V* is said to be an interpolative Reich–Rus–Ćirić-type contraction [12], if

$$d\_V(Qk, Ql) \le \eta d\_V(k, l)^{\omega\_1} d\_V(k, Qk)^{\omega\_2} d\_V(l, Ql)^{1-\omega\_1-\omega\_2}$$

for each *k*, *l* ∈ *V* \ *Fix*(*Q*), where *η* ∈ [0, 1) and *ω*1, *ω*2 ∈ (0, 1) with *ω*1 + *ω*2 < 1.

In the literature, *CB*(*V*) represents the collection of all nonvoid closed and bounded subsets of *V* and the Pompeiu–Hausdorff distance is a map *H<sup>V</sup>* : *CB*(*V*) × *CB*(*V*) → [0, ∞) defined by

$$H\_V(E, F) = \max\{\sup\_{e \in E} d\_V(e, F), \sup\_{f \in F} d\_V(f, E)\}$$

where *dV*(*f* , *E*) = inf{*dV*(*f* ,*e*) : *e* ∈ *E*}.
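For finite sets, the Pompeiu–Hausdorff distance can be computed directly from this definition. The following minimal sketch (the function name and the sample sets are ours) mirrors the two suprema and the infimum above:

```python
def hausdorff(E, F, d):
    """Pompeiu-Hausdorff distance between two nonempty finite sets E, F:
    max of sup_{e in E} d(e, F) and sup_{f in F} d(f, E),
    where d(x, S) = inf_{s in S} d(x, s)."""
    d_point_set = lambda x, S: min(d(x, s) for s in S)
    return max(max(d_point_set(e, F) for e in E),
               max(d_point_set(f, E) for f in F))

d = lambda a, b: abs(a - b)
# The point 3 lies at distance 2 from {0, 1}, which dominates all other terms.
print(hausdorff({0, 1}, {0, 3}, d))  # 2
```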

A set-valued generalization of the interpolative Reich–Rus–Ćirić-type contraction is defined in the following way: a map *Q* : *V* → *CB*(*V*) is said to be a set-valued interpolative Reich–Rus–Ćirić-type contraction [13], if

$$H\_V(Qk, Ql) \quad \le \quad \eta d\_V(k, l)^{\omega\_1} d\_V(k, Qk)^{\omega\_2} d\_V(l, Ql)^{1-\omega\_1-\omega\_2}$$

for each *k*, *l* ∈ *V* \ *Fix*(*Q*), where *η* ∈ [0, 1) and *ω*1, *ω*2 ∈ (0, 1) with *ω*1 + *ω*2 < 1.

In the literature, we have seen many auxiliary type functions from [0, ∞) × [0, ∞) into R, for example, simulation functions, R-functions and C-class functions. Recently, Karapinar [14] used the simulation function *ζ* : [0, ∞) × [0, ∞) → R given by Khojasteh et al. [15] to define the following notion.

A map *Q* : *V* → *V* is said to be an interpolative Hardy–Rogers type *Z*-contraction, if

$$\zeta(d\_V(Qk, Ql), \mathcal{C}(k, l)) \ge 0,$$

for each *k*, *l* ∈ *V* \ *Fix*(*Q*), where *ω*1, *ω*2, *ω*3 ∈ (0, 1) with *ω*1 + *ω*2 + *ω*3 < 1, and

$$\mathcal{C}(k,l) = d\_V(k,l)^{\omega\_1} d\_V(k,Qk)^{\omega\_2} d\_V(l,Ql)^{\omega\_3} \left[ \frac{d\_V(k,Ql) + d\_V(l,Qk)}{2} \right]^{1-\omega\_1-\omega\_2-\omega\_3}.$$

A few more studies related to interpolative type contractions are available in [16–18].

In the next section, we use the following family of functions defined in [19]: Θ*F* is the collection of functions $\theta\_f : [0, \infty)^4 \to [0, \infty)$ with the given properties


It is well-known that for a self-map *Q* : *V* → *V*, a point *v* ∈ *V* with *v* = *Qv* is called a fixed point of *Q*. If *v* is a fixed point of *Q* with *φ*(*v*) = 0 for a map *φ* : *V* → [0, ∞), then *v* is called a *φ*-fixed point of *Q*. This notion is presented in [20].

#### **2. Results**

In this section, we denote by Ξ*F* the collection of functions $\xi\_f : [0, \infty)^3 \to [0, \infty)$ such that


**Example 1.** *The following functions belong to* Ξ*F.*

*(E1)* $\xi\_f(a, b, c) = abc$; *(E2)* $\xi\_f(a, b, c) = \frac{ac}{1+b} \cdot \frac{ab}{1+c} \cdot \frac{bc}{1+a}$.
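As a quick numerical illustration, the two members of Ξ*F* from Example 1 can be coded directly. This only evaluates the two formulas (the function names are ours); it does not verify the defining properties of Ξ*F*.

```python
def xi_E1(a, b, c):
    # (E1): the plain product abc
    return a * b * c

def xi_E2(a, b, c):
    # (E2): the product of the three rational factors
    return (a * c / (1 + b)) * (a * b / (1 + c)) * (b * c / (1 + a))

print(xi_E1(1.0, 2.0, 3.0))  # 6.0
print(xi_E2(1.0, 1.0, 1.0))  # 0.125, i.e. (1/2)**3
```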

Throughout this article, $\xi\_f$ belongs to Ξ*F*, $\theta\_f$ belongs to Θ*F*, *φ* represents a map from *V* into [0, ∞), and (*V*, *dV*) is a metric space.

The following definition is the first form of an abstract interpolative Reich–Rus–Ćirić-type contraction with a shrink map.

**Definition 1.** *A self-map Q* : *V* → *V is called an abstract interpolative Reich–Rus–Ćirić type-I contraction with φ shrink, if the below-stated inequalities hold:*

$$\begin{split} d\_{V}(Qk, Ql) &\le \eta \xi\_{f}\big(d\_{V}(k,l)^{\omega\_{1}}, d\_{V}(k, Qk)^{\omega\_{2}}, d\_{V}(l, Ql)^{\omega\_{3}}\big) \\ &\quad + L\theta\_{f}\big(d\_{V}(k,l)^{\omega\_{1}}, d\_{V}(k, Qk)^{\omega\_{2}}, d\_{V}(l, Ql)^{\omega\_{3}}, d\_{V}(l, Qk)^{\omega\_{4}}\big) \end{split} \tag{1}$$

*for each k*, *l* ∈ *V* \ *Fix*(*Q*) *with l* ≠ *k, where ω*1, *ω*2, *ω*3 ∈ [0, 1] *with ω*1 + *ω*2 + *ω*3 = 1*, ω*4 > 0*, and L* ≥ 0*;*

*for every l* ∈ *V, we have*

$$
\phi(Ql) \le \eta \phi(l),
\tag{2}
$$

*where η* ∈ [0, 1) *and Fix*(*Q*) = {*v* ∈ *V* : *v* = *Qv*}*.*

The following theorem ensures the existence of *φ*-fixed points of the map *Q* satisfying the above definition.

**Theorem 1.** *Let Q* : *V* → *V be an abstract interpolative Reich–Rus–Ćirić type-I contraction with φ shrink on a complete metric space* (*V*, *dV*)*. Then at least one φ-fixed point of Q exists in V.*

**Proof.** Take an arbitrary point $l\_0 \in V$, and define an iterative sequence $l\_n = Ql\_{n-1}$ for all $n \in \mathbb{N}$. If $l\_{n\_0} = l\_{n\_0+1}$ for some $n\_0$, then $l\_{n\_0}$ is a fixed point of $Q$. Moreover, by (2) we get $\phi(l\_{n\_0}) = \phi(Ql\_{n\_0}) \le \eta\phi(l\_{n\_0})$. This gives $\phi(l\_{n\_0}) = 0$. Hence, $l\_{n\_0}$ is a $\phi$-fixed point of $Q$. Now, consider $l\_{n-1} \neq l\_n$ for all $n \in \mathbb{N}$. By (1), for each $n \in \mathbb{N}$, we get

$$\begin{split} d\_{V}(Ql\_{n-1}, Ql\_{n}) &\le \eta \xi\_{f}\big(d\_{V}(l\_{n-1}, l\_{n})^{\omega\_{1}}, d\_{V}(l\_{n-1}, Ql\_{n-1})^{\omega\_{2}}, d\_{V}(l\_{n}, Ql\_{n})^{\omega\_{3}}\big) \\ &\quad + L\theta\_{f}\big(d\_{V}(l\_{n-1}, l\_{n})^{\omega\_{1}}, d\_{V}(l\_{n-1}, Ql\_{n-1})^{\omega\_{2}}, d\_{V}(l\_{n}, Ql\_{n})^{\omega\_{3}}, d\_{V}(l\_{n}, Ql\_{n-1})^{\omega\_{4}}\big). \end{split} \tag{3}$$

That is,

$$d\_V(l\_n, l\_{n+1}) \le \eta \xi\_f\big(d\_V(l\_{n-1}, l\_n)^{\omega\_1}, d\_V(l\_{n-1}, l\_n)^{\omega\_2}, d\_V(l\_n, l\_{n+1})^{\omega\_3}\big) \ \forall n \in \mathbb{N}. \tag{4}$$

Now, we claim that $d\_V(l\_n, l\_{n+1}) < d\_V(l\_{n-1}, l\_n)$ for all $n \in \mathbb{N}$. If the claim is wrong, then we have $m\_0 \in \mathbb{N}$ with $d\_V(l\_{m\_0}, l\_{m\_0+1}) \ge d\_V(l\_{m\_0-1}, l\_{m\_0})$. By (4) we get

$$\begin{split} d\_{V}(l\_{m\_{0}}, l\_{m\_{0}+1}) &\le \eta \xi\_{f}\big(d\_{V}(l\_{m\_{0}-1}, l\_{m\_{0}})^{\omega\_{1}}, d\_{V}(l\_{m\_{0}-1}, l\_{m\_{0}})^{\omega\_{2}}, d\_{V}(l\_{m\_{0}}, l\_{m\_{0}+1})^{\omega\_{3}}\big) \\ &\le \eta \xi\_{f}\big(d\_{V}(l\_{m\_{0}}, l\_{m\_{0}+1})^{\omega\_{1}}, d\_{V}(l\_{m\_{0}}, l\_{m\_{0}+1})^{\omega\_{2}}, d\_{V}(l\_{m\_{0}}, l\_{m\_{0}+1})^{\omega\_{3}}\big) \\ &\le \eta d\_{V}(l\_{m\_{0}}, l\_{m\_{0}+1}) \end{split}$$

which is only possible when $d\_V(l\_{m\_0}, l\_{m\_0+1}) = 0$, contradicting our assumption. Thus, the claim is true. Since $d\_V(l\_n, l\_{n+1}) < d\_V(l\_{n-1}, l\_n)$ for all $n \in \mathbb{N}$, then by (4) we get

$$\begin{split} d\_{V}(l\_{n}, l\_{n+1}) &\le \eta \xi\_{f}\big(d\_{V}(l\_{n-1}, l\_{n})^{\omega\_{1}}, d\_{V}(l\_{n-1}, l\_{n})^{\omega\_{2}}, d\_{V}(l\_{n}, l\_{n+1})^{\omega\_{3}}\big) \\ &\le \eta \xi\_{f}\big(d\_{V}(l\_{n-1}, l\_{n})^{\omega\_{1}}, d\_{V}(l\_{n-1}, l\_{n})^{\omega\_{2}}, d\_{V}(l\_{n-1}, l\_{n})^{\omega\_{3}}\big) \\ &\le \eta d\_{V}(l\_{n-1}, l\_{n}) \ \forall n \in \mathbb{N}. \end{split} \tag{5}$$

The above inequality implies that

$$d\_V(l\_n, l\_{n+1}) \le \eta^n d\_V(l\_0, l\_1) \ \forall n \in \mathbb{N}.\tag{6}$$

To verify that the sequence $\{l\_n\}$ is Cauchy, consider $m, n \in \mathbb{N}$ with $n > m$. By the triangle inequality and (6) we obtain

$$d\_V(l\_m, l\_n) \le \sum\_{j=m}^{n-1} d\_V(l\_j, l\_{j+1}) \le \sum\_{j=m}^{n-1} \eta^j d\_V(l\_0, l\_1).$$

Since $\sum\_{j=1}^{\infty} \eta^j$ is a convergent series, the above inequality gives $\lim\_{n,m\to\infty} d\_V(l\_m, l\_n) = 0$. As $(V, d\_V)$ is complete and $\{l\_n\}$ is Cauchy in $V$, there exists an element $l^\* \in V$ with $l\_n \to l^\*$. Now, we claim that $l^\* = Ql^\*$. If the claim is wrong, then $d\_V(l^\*, Ql^\*) > 0$. Since $\{l\_n\}$ is an iterative sequence with $l\_n \to l^\*$, we get

$$\max \{ d\_V(l\_n, l^\*), d\_V(l\_n, l\_{n+1}), d\_V(l^\*, \mathcal{Q}l^\*) \} = d\_V(l^\*, \mathcal{Q}l^\*) \,\forall n \ge N\_0 \tag{7}$$

for some *N*<sup>0</sup> ∈ N. By (1), for each *n* ∈ N, we obtain

$$\begin{split} d\_{V}(Ql\_{n}, Ql^{\*}) &\le \eta \xi\_{f}\big(d\_{V}(l\_{n}, l^{\*})^{\omega\_{1}}, d\_{V}(l\_{n}, Ql\_{n})^{\omega\_{2}}, d\_{V}(l^{\*}, Ql^{\*})^{\omega\_{3}}\big) \\ &\quad + L\theta\_{f}\big(d\_{V}(l\_{n}, l^{\*})^{\omega\_{1}}, d\_{V}(l\_{n}, Ql\_{n})^{\omega\_{2}}, d\_{V}(l^{\*}, Ql^{\*})^{\omega\_{3}}, d\_{V}(l^{\*}, Ql\_{n})^{\omega\_{4}}\big). \end{split} \tag{8}$$

From (7) and (8), for each *n* ≥ *N*0, we get

$$\begin{split} d\_{V}(l\_{n+1}, Ql^{\*}) &\le \eta \xi\_{f}\big(d\_{V}(l\_{n}, l^{\*})^{\omega\_{1}}, d\_{V}(l\_{n}, l\_{n+1})^{\omega\_{2}}, d\_{V}(l^{\*}, Ql^{\*})^{\omega\_{3}}\big) \\ &\quad + L\theta\_{f}\big(d\_{V}(l\_{n}, l^{\*})^{\omega\_{1}}, d\_{V}(l\_{n}, Ql\_{n})^{\omega\_{2}}, d\_{V}(l^{\*}, Ql^{\*})^{\omega\_{3}}, d\_{V}(l^{\*}, l\_{n+1})^{\omega\_{4}}\big) \\ &\le \eta \xi\_{f}\big(d\_{V}(l^{\*}, Ql^{\*})^{\omega\_{1}}, d\_{V}(l^{\*}, Ql^{\*})^{\omega\_{2}}, d\_{V}(l^{\*}, Ql^{\*})^{\omega\_{3}}\big) \\ &\quad + L\theta\_{f}\big(d\_{V}(l\_{n}, l^{\*})^{\omega\_{1}}, d\_{V}(l\_{n}, Ql\_{n})^{\omega\_{2}}, d\_{V}(l^{\*}, Ql^{\*})^{\omega\_{3}}, d\_{V}(l^{\*}, l\_{n+1})^{\omega\_{4}}\big) \\ &\le \eta d\_{V}(l^{\*}, Ql^{\*}) + L\theta\_{f}\big(d\_{V}(l\_{n}, l^{\*})^{\omega\_{1}}, d\_{V}(l\_{n}, Ql\_{n})^{\omega\_{2}}, d\_{V}(l^{\*}, Ql^{\*})^{\omega\_{3}}, d\_{V}(l^{\*}, l\_{n+1})^{\omega\_{4}}\big). \end{split} \tag{9}$$

By applying the limit *n* → ∞ in (9), we get

$$d\_V(l^\*, \mathcal{Q}l^\*) \le \eta d\_V(l^\*, \mathcal{Q}l^\*).$$

As *η* < 1, the above inequality holds only when $d\_V(l^\*, Ql^\*) = 0$. Hence, the claim is correct. Since $l^\* = Ql^\*$, then by (2) we get

$$
\phi(l^\*) = \phi(Ql^\*) \le \eta \phi(l^\*).
$$

This implies that $\phi(l^\*) = 0$. Hence, $l^\*$ is a $\phi$-fixed point of $Q$.
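Computationally, the proof's iterative scheme $l\_n = Ql\_{n-1}$ is a Picard iteration. The sketch below is our own illustration of that scheme, not the paper's method: the stopping rule, the tolerance, and the sample contraction $Q(x) = x/2 + 1$ (a Banach-type contraction with fixed point 2) are assumptions chosen for transparency.

```python
def picard_fixed_point(Q, l0, d, tol=1e-10, max_iter=10_000):
    """Iterate l_n = Q(l_{n-1}) until consecutive terms are tol-close,
    mirroring the iterative sequence used in the proof of Theorem 1."""
    l = l0
    for _ in range(max_iter):
        l_next = Q(l)
        if d(l, l_next) <= tol:
            return l_next
        l = l_next
    raise RuntimeError("no convergence within max_iter")

d = lambda a, b: abs(a - b)
Q = lambda x: 0.5 * x + 1.0   # sample contraction on R, fixed point 2
print(picard_fixed_point(Q, 0.0, d))  # approximately 2.0
```

Here the gap $d(l\_n, l\_{n+1})$ halves at each step, the discrete analogue of the geometric bound (6).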

By letting $\xi\_f(a, b, c) = abc$ and $\theta\_f(a, b, c, d) = abcd$ in Theorem 1, we get the following result.

**Corollary 1.** *Let* (*V*, *dV*) *be a complete metric space. Let Q* : *V* → *V and φ* : *V* → [0, ∞) *be two maps such that*

$$\begin{array}{rcl}d\_V(Qk, Ql) & \leq & \eta d\_V(k, l)^{\omega\_1} d\_V(k, Qk)^{\omega\_2} d\_V(l, Ql)^{\omega\_3} \\ & & + L d\_V(k, l)^{\omega\_1} d\_V(k, Qk)^{\omega\_2} d\_V(l, Ql)^{\omega\_3} d\_V(l, Qk)^{\omega\_4} \end{array}$$

*for each k*, *l* ∈ *V* \ *Fix*(*Q*) *with l* ≠ *k, where ω*1, *ω*2, *ω*3 ∈ [0, 1] *with ω*1 + *ω*2 + *ω*3 = 1 *and ω*4 > 0*; further, for every l* ∈ *V, we have*

$$
\phi(Ql) \le \eta \phi(l),
$$

*where η* ∈ [0, 1) *and L* ≥ 0*. Then at least one φ-fixed point of Q exists in V.*

By taking *ω*1 = *ω*4 = 1 and *ω*2 = *ω*3 = 0 in the above-mentioned corollary, we obtain the following result.

**Corollary 2.** *Let* (*V*, *dV*) *be a complete metric space. Let Q* : *V* → *V and φ* : *V* → [0, ∞) *be two maps such that*

$$\begin{array}{rcl}d\_V(Qk, Ql) & \leq & \eta d\_V(k, l) + L d\_V(k, l) d\_V(l, Qk), \end{array}$$

*for each k*, *l* ∈ *V* \ *Fix*(*Q*) *with l* ≠ *k; further, for every l* ∈ *V, we have*

$$
\phi(Ql) \le \eta \phi(l),
$$

*where η* ∈ [0, 1) *and L* ≥ 0*. Then at least one φ-fixed point of Q exists in V.*

**Corollary 3.** *Let* (*V*, *dV*) *be a complete metric space. Let Q* : *V* → *V be a map such that*

$$d\_V(Qk, Ql) \quad \le \quad \eta d\_V(k, l)^{\omega\_1} d\_V(k, Qk)^{\omega\_2} d\_V(l, Ql)^{\omega\_3} \tag{10}$$

*for each k*, *l* ∈ *V* \ *Fix*(*Q*) *with l* ≠ *k, where ω*1, *ω*2, *ω*3 ∈ [0, 1] *with ω*1 + *ω*2 + *ω*3 = 1*, and η* ∈ [0, 1)*. Then a fixed point of Q exists in V.*

The above result follows from Corollary 1 by taking *L* = 0 and *φ*(*k*) = 0 ∀*k* ∈ *V*.

The following corollary follows from Corollary 3 by setting *ω*1 = *τ*1, *ω*2 = *τ*2 and *ω*3 = 1 − *τ*1 − *τ*2.

**Corollary 4.** *Let* (*V*, *dV*) *be a complete metric space. Let Q* : *V* → *V be a map such that*

$$d\_V(Qk, Ql) \quad \le \quad \eta d\_V(k, l)^{\tau\_1} d\_V(k, Qk)^{\tau\_2} d\_V(l, Ql)^{1-\tau\_1-\tau\_2} \tag{11}$$

*for each k*, *l* ∈ *V* \ *Fix*(*Q*) *with l* ≠ *k, where τ*1, *τ*2 ∈ (0, 1) *with τ*1 + *τ*2 < 1*, and η* ∈ [0, 1)*. Then a fixed point of Q exists in V.*

Inequality (12) can be considered a rational-type interpolative contraction inequality obtained from (1) by taking $\xi\_f(a, b, c) = \frac{ac}{1+b} \cdot \frac{ab}{1+c} \cdot \frac{bc}{1+a}$ and *L* = 0. Some interesting results related to rational-type contraction conditions are given in [21].

**Corollary 5.** *Let* (*V*, *dV*) *be a complete metric space. Let Q* : *V* → *V and φ* : *V* → [0, ∞) *be two maps such that*

$$d\_V(Qk, Ql) \le \eta \left( \frac{d\_V(k, l)^{\omega\_1} d\_V(l, Ql)^{\omega\_3}}{1 + d\_V(k, Qk)^{\omega\_2}} \right) \left( \frac{d\_V(k, l)^{\omega\_1} d\_V(k, Qk)^{\omega\_2}}{1 + d\_V(l, Ql)^{\omega\_3}} \right) \left( \frac{d\_V(k, Qk)^{\omega\_2} d\_V(l, Ql)^{\omega\_3}}{1 + d\_V(k, l)^{\omega\_1}} \right) \tag{12}$$

*for each k*, *l* ∈ *V* \ *Fix*(*Q*) *with k* ≠ *l, where ω*1, *ω*2, *ω*3 ∈ [0, 1] *with ω*1 + *ω*2 + *ω*3 = 1*; further, for every l* ∈ *V, we have*

$$
\phi(Ql) \le \eta \phi(l)
$$

*where η* ∈ [0, 1)*. Then at least one φ-fixed point of Q exists in V.*

Consider a simulation function $\beta\_\psi : [0, \infty)^2 \to \mathbb{R}$ with the properties:


where $\psi : [0, \infty) \to [0, \infty)$ is a nondecreasing function such that $\sum\_{j=1}^{\infty} \psi^j(s)$ is convergent for each $s > 0$; moreover, $\psi(0) = 0$ and $\psi(s) < s$ if $s > 0$.

**Example 2.** *A function β<sup>ψ</sup>* : [0, ∞) × [0, ∞) → R *defined by βψ*(*k*, *l*) = *αl* − *k for each k*, *l* ∈ [0, ∞)*, where ψ*(*l*) = *αl and α* ∈ (0, 1)*, is the simplest example of the above-defined simulation function.*
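Numerically, this simplest simulation function is a one-liner, and $\beta\_\psi(k, l) \ge 0$ is exactly the Banach-type bound $k \le \alpha l$. A minimal sketch (the choice $\alpha = 0.5$ and the variable names are ours):

```python
ALPHA = 0.5
psi = lambda s: ALPHA * s          # psi(s) = alpha*s; sum of psi^j(s) is geometric, so it converges
beta = lambda k, l: ALPHA * l - k  # beta_psi(k, l) = alpha*l - k, as in Example 2

# beta(k, l) >= 0 holds exactly when k <= alpha * l:
print(beta(0.4, 1.0) >= 0)  # True:  0.4 <= 0.5
print(beta(0.6, 1.0) >= 0)  # False: 0.6 >  0.5
```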

Throughout the article, *βψ* represents the above simulation function. Now, we define an abstract interpolative Reich–Rus–Ćirić type-II contraction with *φ* shrink by using the simulation function *βψ*.

**Definition 2.** *A self-map Q* : *V* → *V is called an abstract interpolative Reich–Rus–Ćirić type-II contraction with φ shrink, if the below-stated inequalities hold:*

$$\begin{split} \beta\_{\psi}\Big(d\_{V}(Qk, Ql), \xi\_{f}\big(d\_{V}(k, l)^{\omega\_{1}}, d\_{V}(k, Qk)^{\omega\_{2}}, d\_{V}(l, Ql)^{\omega\_{3}}\big)\Big) \\ + L\theta\_{f}\big(d\_{V}(k, l)^{\omega\_{1}}, d\_{V}(k, Qk)^{\omega\_{2}}, d\_{V}(l, Ql)^{\omega\_{3}}, d\_{V}(l, Qk)^{\omega\_{4}}\big) \ge 0 \end{split} \tag{13}$$

*for each k*, *l* ∈ *V* \ *Fix*(*Q*) *with l* ≠ *k, where ω*1, *ω*2, *ω*3 ∈ [0, 1] *with ω*1 + *ω*2 + *ω*3 = 1*, ω*4 > 0*, and L* ≥ 0*;*

*for every l* ∈ *V, we have*

$$
\beta\_{\psi}(\phi(Ql), \phi(l)) \ge 0. \tag{14}
$$

Now, we discuss the following *φ*-fixed point result for self-maps satisfying the above definition.

**Theorem 2.** *Let Q* : *V* → *V be an abstract interpolative Reich–Rus–Ćirić type-II contraction with φ shrink on a complete metric space* (*V*, *dV*)*. Then at least one φ-fixed point of Q exists in V.*

**Proof.** Define an iterative sequence $\{l\_n\}$ by $l\_n = Ql\_{n-1}$ for all $n \in \mathbb{N}$, for an arbitrary point $l\_0 \in V$. If $l\_{n\_0} = l\_{n\_0+1}$ for some $n\_0$, then $l\_{n\_0}$ is a fixed point of $Q$. Moreover, from (14) we obtain $0 \le \beta\_\psi(\phi(Ql\_{n\_0}), \phi(l\_{n\_0})) \le \psi(\phi(l\_{n\_0})) - \phi(Ql\_{n\_0})$; that is, $\phi(l\_{n\_0}) = \phi(Ql\_{n\_0}) \le \psi(\phi(l\_{n\_0}))$. This gives $\phi(l\_{n\_0}) = 0$. Hence, $l\_{n\_0}$ is a $\phi$-fixed point of $Q$. To proceed with the proof, we consider $l\_{n-1} \neq l\_n$ for all $n \in \mathbb{N}$. By (13), for each $n \in \mathbb{N}$, we get

$$\begin{split} \beta\_{\psi}\Big(d\_{V}(Ql\_{n-1}, Ql\_{n}), \xi\_{f}\big(d\_{V}(l\_{n-1}, l\_{n})^{\omega\_{1}}, d\_{V}(l\_{n-1}, Ql\_{n-1})^{\omega\_{2}}, d\_{V}(l\_{n}, Ql\_{n})^{\omega\_{3}}\big)\Big) \\ + L\theta\_{f}\big(d\_{V}(l\_{n-1}, l\_{n})^{\omega\_{1}}, d\_{V}(l\_{n-1}, Ql\_{n-1})^{\omega\_{2}}, d\_{V}(l\_{n}, Ql\_{n})^{\omega\_{3}}, d\_{V}(l\_{n}, Ql\_{n-1})^{\omega\_{4}}\big) \ge 0. \end{split} \tag{15}$$

Using (b2) and (15), we get

$$\begin{split} &\psi\Big(\xi\_f\big(d\_V(l\_{n-1}, l\_n)^{\omega\_1}, d\_V(l\_{n-1}, Ql\_{n-1})^{\omega\_2}, d\_V(l\_n, Ql\_n)^{\omega\_3}\big)\Big) - d\_V(Ql\_{n-1}, Ql\_n) \\ &\qquad + L\theta\_f\big(d\_V(l\_{n-1}, l\_n)^{\omega\_1}, d\_V(l\_{n-1}, Ql\_{n-1})^{\omega\_2}, d\_V(l\_n, Ql\_n)^{\omega\_3}, d\_V(l\_n, Ql\_{n-1})^{\omega\_4}\big) \\ &\ge \beta\_\psi\Big(d\_V(Ql\_{n-1}, Ql\_n), \xi\_f\big(d\_V(l\_{n-1}, l\_n)^{\omega\_1}, d\_V(l\_{n-1}, Ql\_{n-1})^{\omega\_2}, d\_V(l\_n, Ql\_n)^{\omega\_3}\big)\Big) \\ &\qquad + L\theta\_f\big(d\_V(l\_{n-1}, l\_n)^{\omega\_1}, d\_V(l\_{n-1}, Ql\_{n-1})^{\omega\_2}, d\_V(l\_n, Ql\_n)^{\omega\_3}, d\_V(l\_n, Ql\_{n-1})^{\omega\_4}\big) \ge 0. \end{split}$$

This implies

$$\begin{split} d\_{V}(Ql\_{n-1}, Ql\_{n}) &\le \psi\big(\xi\_{f}\big(d\_{V}(l\_{n-1}, l\_{n})^{\omega\_{1}}, d\_{V}(l\_{n-1}, Ql\_{n-1})^{\omega\_{2}}, d\_{V}(l\_{n}, Ql\_{n})^{\omega\_{3}}\big)\big) \\ &\quad + L\theta\_{f}\big(d\_{V}(l\_{n-1}, l\_{n})^{\omega\_{1}}, d\_{V}(l\_{n-1}, Ql\_{n-1})^{\omega\_{2}}, d\_{V}(l\_{n}, Ql\_{n})^{\omega\_{3}}, d\_{V}(l\_{n}, Ql\_{n-1})^{\omega\_{4}}\big). \end{split} \tag{16}$$

That is,

$$d\_V(l\_n, l\_{n+1}) \le \psi\big(\xi\_f\big(d\_V(l\_{n-1}, l\_n)^{\omega\_1}, d\_V(l\_{n-1}, Ql\_{n-1})^{\omega\_2}, d\_V(l\_n, Ql\_n)^{\omega\_3}\big)\big) \ \forall n \in \mathbb{N}. \tag{17}$$

Now, let us claim that $d\_V(l\_n, l\_{n+1}) < d\_V(l\_{n-1}, l\_n)$ for all $n \in \mathbb{N}$. Assume that the claim is wrong; then we have $m\_0 \in \mathbb{N}$ with $d\_V(l\_{m\_0}, l\_{m\_0+1}) \ge d\_V(l\_{m\_0-1}, l\_{m\_0})$. By (17) we get

$$\begin{split} d\_{V}(l\_{m\_{0}}, l\_{m\_{0}+1}) &\le \psi\big(\xi\_{f}\big(d\_{V}(l\_{m\_{0}-1}, l\_{m\_{0}})^{\omega\_{1}}, d\_{V}(l\_{m\_{0}-1}, l\_{m\_{0}})^{\omega\_{2}}, d\_{V}(l\_{m\_{0}}, l\_{m\_{0}+1})^{\omega\_{3}}\big)\big) \\ &\le \psi\big(\xi\_{f}\big(d\_{V}(l\_{m\_{0}}, l\_{m\_{0}+1})^{\omega\_{1}}, d\_{V}(l\_{m\_{0}}, l\_{m\_{0}+1})^{\omega\_{2}}, d\_{V}(l\_{m\_{0}}, l\_{m\_{0}+1})^{\omega\_{3}}\big)\big) \\ &\le \psi\big(d\_{V}(l\_{m\_{0}}, l\_{m\_{0}+1})\big) \end{split}$$

which is impossible, since $l\_{m\_0} \neq l\_{m\_0+1}$ and $\psi(s) < s$ for $s > 0$. Hence, the claim holds. As $d\_V(l\_n, l\_{n+1}) < d\_V(l\_{n-1}, l\_n)$ for all $n \in \mathbb{N}$, then by (17) we get

$$\begin{split} d\_{V}(l\_{n}, l\_{n+1}) &\le \psi\big(\xi\_{f}\big(d\_{V}(l\_{n-1}, l\_{n})^{\omega\_{1}}, d\_{V}(l\_{n-1}, l\_{n})^{\omega\_{2}}, d\_{V}(l\_{n}, l\_{n+1})^{\omega\_{3}}\big)\big) \\ &\le \psi\big(\xi\_{f}\big(d\_{V}(l\_{n-1}, l\_{n})^{\omega\_{1}}, d\_{V}(l\_{n-1}, l\_{n})^{\omega\_{2}}, d\_{V}(l\_{n-1}, l\_{n})^{\omega\_{3}}\big)\big) \\ &\le \psi\big(d\_{V}(l\_{n-1}, l\_{n})\big) \ \forall n \in \mathbb{N}. \end{split} \tag{18}$$

This yields

$$d\_V(l\_n, l\_{n+1}) \le \psi^n\big(d\_V(l\_0, l\_1)\big) \ \forall n \in \mathbb{N}.\tag{19}$$

Consider *m*, *n* ∈ N with *n* > *m*. By triangle inequality and (19) we obtain

$$d\_V(l\_m, l\_n) \le \sum\_{j=m}^{n-1} d\_V(l\_j, l\_{j+1}) \le \sum\_{j=m}^{n-1} \psi^j\big(d\_V(l\_0, l\_1)\big).$$

Since $\sum\_{j=1}^{\infty} \psi^j(s)$ is a convergent series for each $s > 0$, the above inequality gives $\lim\_{n,m\to\infty} d\_V(l\_m, l\_n) = 0$. The completeness of $(V, d\_V)$ confirms the existence of an element $l^\* \in V$ with $l\_n \to l^\*$. Now, let us claim that $l^\* = Ql^\*$. Suppose the claim is wrong; then $d\_V(l^\*, Ql^\*) > 0$. Since $\{l\_n\}$ is an iterative sequence with $l\_n \to l^\*$, we get

$$\max \{ d\_V(l\_n, l^\*), d\_V(l\_n, l\_{n+1}), d\_V(l^\*, Ql^\*) \} = d\_V(l^\*, Ql^\*) \,\forall n \ge N\_0 \tag{20}$$

for some *N*<sup>0</sup> ∈ N. By (13), for each *n* ∈ N, we obtain

$$\begin{split} \beta\_{\psi}\Big(d\_{V}(Ql\_{n}, Ql^{\*}), \xi\_{f}\big(d\_{V}(l\_{n}, l^{\*})^{\omega\_{1}}, d\_{V}(l\_{n}, Ql\_{n})^{\omega\_{2}}, d\_{V}(l^{\*}, Ql^{\*})^{\omega\_{3}}\big)\Big) \\ + L\theta\_{f}\big(d\_{V}(l\_{n}, l^{\*})^{\omega\_{1}}, d\_{V}(l\_{n}, Ql\_{n})^{\omega\_{2}}, d\_{V}(l^{\*}, Ql^{\*})^{\omega\_{3}}, d\_{V}(l^{\*}, Ql\_{n})^{\omega\_{4}}\big) \ge 0. \end{split} \tag{21}$$

This gives

$$\begin{split} d\_{V}(Ql\_{n}, Ql^{\*}) &\le \psi\Big(\xi\_{f}\big(d\_{V}(l\_{n}, l^{\*})^{\omega\_{1}}, d\_{V}(l\_{n}, Ql\_{n})^{\omega\_{2}}, d\_{V}(l^{\*}, Ql^{\*})^{\omega\_{3}}\big)\Big) \\ &\quad + L\theta\_{f}\big(d\_{V}(l\_{n}, l^{\*})^{\omega\_{1}}, d\_{V}(l\_{n}, Ql\_{n})^{\omega\_{2}}, d\_{V}(l^{\*}, Ql^{\*})^{\omega\_{3}}, d\_{V}(l^{\*}, Ql\_{n})^{\omega\_{4}}\big). \end{split} \tag{22}$$

By (20) and (22), for each *n* ≥ *N*0, we get

$$\begin{split} d\_{V}(l\_{n+1}, Ql^{\*}) &\le \psi\big(\xi\_{f}\big(d\_{V}(l\_{n}, l^{\*})^{\omega\_{1}}, d\_{V}(l\_{n}, l\_{n+1})^{\omega\_{2}}, d\_{V}(l^{\*}, Ql^{\*})^{\omega\_{3}}\big)\big) \\ &\quad + L\theta\_{f}\big(d\_{V}(l\_{n}, l^{\*})^{\omega\_{1}}, d\_{V}(l\_{n}, Ql\_{n})^{\omega\_{2}}, d\_{V}(l^{\*}, Ql^{\*})^{\omega\_{3}}, d\_{V}(l^{\*}, l\_{n+1})^{\omega\_{4}}\big) \\ &\le \psi\big(\xi\_{f}\big(d\_{V}(l^{\*}, Ql^{\*})^{\omega\_{1}}, d\_{V}(l^{\*}, Ql^{\*})^{\omega\_{2}}, d\_{V}(l^{\*}, Ql^{\*})^{\omega\_{3}}\big)\big) \\ &\quad + L\theta\_{f}\big(d\_{V}(l\_{n}, l^{\*})^{\omega\_{1}}, d\_{V}(l\_{n}, Ql\_{n})^{\omega\_{2}}, d\_{V}(l^{\*}, Ql^{\*})^{\omega\_{3}}, d\_{V}(l^{\*}, l\_{n+1})^{\omega\_{4}}\big) \\ &\le \psi\big(d\_{V}(l^{\*}, Ql^{\*})\big) + L\theta\_{f}\big(d\_{V}(l\_{n}, l^{\*})^{\omega\_{1}}, d\_{V}(l\_{n}, Ql\_{n})^{\omega\_{2}}, d\_{V}(l^{\*}, Ql^{\*})^{\omega\_{3}}, d\_{V}(l^{\*}, l\_{n+1})^{\omega\_{4}}\big). \end{split} \tag{23}$$

Letting *n* → ∞ in (23), we get

$$d\_V(l^\*, \mathcal{Q}l^\*) \le \psi(d\_V(l^\*, \mathcal{Q}l^\*))\,.$$

The above inequality holds only when $d\_V(l^\*, Ql^\*) = 0$, since $\psi(s) < s$ for $s > 0$. Hence, the claim is correct: $l^\* = Ql^\*$. By (14) we get $0 \le \beta\_\psi(\phi(Ql^\*), \phi(l^\*)) \le \psi(\phi(l^\*)) - \phi(Ql^\*)$; that is, $\phi(l^\*) = \phi(Ql^\*) \le \psi(\phi(l^\*))$. This implies that $\phi(l^\*) = 0$. Hence, $l^\*$ is a $\phi$-fixed point of $Q$.

We will extend the above results by considering *Q* as a set-valued map. In the following, *CB*(*V*) represents the collection of all nonvoid closed and bounded subsets of *V* and *CL*(*V*) represents the collection of all nonvoid closed subsets of *V*.

**Definition 3.** *A set-valued map Q* : *V* → *CB*(*V*) *is called an abstract interpolative Reich–Rus–Ćirić type-I set-valued contraction with φ shrink, if the below-stated inequalities hold:*

$$\begin{split} H\_{V}(Qk, Ql) &\le \eta \xi\_{f}\big(d\_{V}(k, l)^{\omega\_{1}}, d\_{V}(k, Qk)^{\omega\_{2}}, d\_{V}(l, Ql)^{\omega\_{3}}\big) \\ &\quad + L\theta\_{f}\big(d\_{V}(k, l)^{\omega\_{1}}, d\_{V}(k, Qk)^{\omega\_{2}}, d\_{V}(l, Ql)^{\omega\_{3}}, d\_{V}(l, Qk)^{\omega\_{4}}\big) \end{split} \tag{24}$$

*for each k*, *l* ∈ *V* \ *Fix*(*Q*) *with l* ≠ *k, where ω*1, *ω*2, *ω*3 ∈ [0, 1] *with ω*1 + *ω*2 + *ω*3 = 1*, ω*4 > 0*, and L* ≥ 0*;*

*for every k* ∈ *V, we have*

$$\sup\_{l \in Qk} \phi(l) \le \eta \phi(k),\tag{25}$$

*where η* ∈ (0, 1) *and Fix*(*Q*) = {*v* ∈ *V* : *v* ∈ *Qv*}*.*

The following theorem can be used to validate the existence of *φ*-fixed points for a map satisfying the above definition.

**Theorem 3.** *Let Q* : *V* → *CB*(*V*) *be an abstract interpolative Reich–Rus–Ćirić type-I set-valued contraction with φ shrink on a complete metric space* (*V*, *dV*)*. Then at least one φ-fixed point of Q exists in V; that is, there exists a point $v^* \in V$ with $v^* \in Qv^*$ and $\phi(v^*) = 0$.*

**Proof.** For an arbitrary point $l_0 \in V$, we get some $l_1 \in Ql_0$. If $l_0 = l_1$, then $l_0$ is a fixed point of $Q$. Moreover, by (25) we get $\phi(l_0) \le \sup_{l \in Ql_0} \phi(l) \le \eta\phi(l_0)$; that is, $\phi(l_0) = 0$. Hence, $l_0$ is a $\phi$-fixed point of $Q$. Suppose that neither $l_0$ nor $l_1$ is a fixed point of $Q$; then by (24) we get

$$\begin{split} d\_{V}(l\_{1}, Ql\_{1}) &\leq \quad H\_{V}(Ql\_{0}, Ql\_{1}) \\ &\leq \quad \eta \xi\_{f} (d\_{V}(l\_{0}, l\_{1})^{\omega\_{1}}, d\_{V}(l\_{0}, Ql\_{0})^{\omega\_{2}}, d\_{V}(l\_{1}, Ql\_{1})^{\omega\_{3}}) \\ &\quad + \mathbf{L}\theta\_{f} (d\_{V}(l\_{0}, l\_{1})^{\omega\_{1}}, d\_{V}(l\_{0}, Ql\_{0})^{\omega\_{2}}, d\_{V}(l\_{1}, Ql\_{1})^{\omega\_{3}}, d\_{V}(l\_{1}, Ql\_{0})^{\omega\_{4}}). \end{split} \tag{26}$$

That is,

$$d_V(l_1, Ql_1) \le \eta\, \xi_f\big(d_V(l_0, l_1)^{\omega_1}, d_V(l_0, Ql_0)^{\omega_2}, d_V(l_1, Ql_1)^{\omega_3}\big). \tag{27}$$

Since $\eta \in (0,1)$, we have $\frac{1}{\sqrt{\eta}} > 1$; thus, there exists $l_2 \in Ql_1$ satisfying the inequality

$$d\_V(l\_1, l\_2) \le \frac{1}{\sqrt{\eta}} d\_V(l\_1, Ql\_1). \tag{28}$$

To proceed with the proof, we assume that $l_1 \neq l_2$; otherwise $l_2$ is a $\phi$-fixed point. From (27) and (28), we get

$$d_V(l_1, l_2) \le \sqrt{\eta}\, \xi_f\big(d_V(l_0, l_1)^{\omega_1}, d_V(l_0, Ql_0)^{\omega_2}, d_V(l_1, Ql_1)^{\omega_3}\big). \tag{29}$$

From the facts that $l_1 \in Ql_0$, $l_2 \in Ql_1$, and the nondecreasing property of $\xi_f$, by (29) we get

$$d\_V(l\_1, l\_2) \quad \le \quad \sqrt{\eta} \xi\_f(d\_V(l\_0, l\_1)^{\omega\_1}, d\_V(l\_0, l\_1)^{\omega\_2}, d\_V(l\_1, l\_2)^{\omega\_3}). \tag{30}$$

If $d_V(l_0, l_1) \le d_V(l_1, l_2)$, then from the above inequality we get $d_V(l_1, l_2) = 0$, which is impossible. Thus, $d_V(l_1, l_2) < d_V(l_0, l_1)$. Now, by (30), we get

$$\begin{array}{rcl} d_V(l_1, l_2) &\le& \sqrt{\eta}\, \xi_f\big(d_V(l_0, l_1)^{\omega_1}, d_V(l_0, l_1)^{\omega_2}, d_V(l_1, l_2)^{\omega_3}\big) \\ &\le& \sqrt{\eta}\, \xi_f\big(d_V(l_0, l_1)^{\omega_1}, d_V(l_0, l_1)^{\omega_2}, d_V(l_0, l_1)^{\omega_3}\big) \\ &\le& \sqrt{\eta}\, d_V(l_0, l_1). \end{array} \tag{31}$$

Continuing the proof along the above lines, we can obtain a sequence $\{l_n\}$ with $l_n \in Ql_{n-1}$ $\forall n \in \mathbb{N}$, $l_{n-1} \neq l_n$ $\forall n \in \mathbb{N}$, and

$$d_V(l_n, l_{n+1}) \le (\sqrt{\eta})^n d_V(l_0, l_1) \ \forall n \in \mathbb{N}.$$
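For completeness, the Cauchy property claimed below follows from this geometric decay via the standard triangle-inequality estimate (a routine verification): for $m > n$,

$$d_V(l_n, l_m) \le \sum_{i=n}^{m-1} d_V(l_i, l_{i+1}) \le \sum_{i=n}^{m-1} (\sqrt{\eta})^i\, d_V(l_0, l_1) \le \frac{(\sqrt{\eta})^n}{1 - \sqrt{\eta}}\, d_V(l_0, l_1) \to 0 \quad \text{as } n \to \infty.$$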

Moreover, it is trivial to conclude that $\{l_n\}$ is a Cauchy sequence in the complete metric space $(V, d_V)$; thus, there is a point $l^* \in V$ with $l_n \to l^*$. Now, we claim that $l^* \in Ql^*$. If this is wrong, then $d_V(l^*, Ql^*) > 0$. Thus, we can obtain $N_0 \in \mathbb{N}$ such that

$$\max \{ d\_V(l\_n, l^\*), d\_V(l\_n, l\_{n+1}), d\_V(l^\*, Ql^\*) \} = d\_V(l^\*, Ql^\*) \,\forall n \ge N\_0. \tag{32}$$

By (24), for *k* = *l<sup>n</sup>* and *l* = *l* ∗ , we obtain

$$\begin{split} d_V(l_{n+1}, Ql^*) &\le H_V(Ql_n, Ql^*) \\ &\le \eta\, \xi_f\big(d_V(l_n, l^*)^{\omega_1}, d_V(l_n, Ql_n)^{\omega_2}, d_V(l^*, Ql^*)^{\omega_3}\big) \\ &\quad + L\theta_f\big(d_V(l_n, l^*)^{\omega_1}, d_V(l_n, Ql_n)^{\omega_2}, d_V(l^*, Ql^*)^{\omega_3}, d_V(l^*, Ql_n)^{\omega_4}\big) \ \forall n \in \mathbb{N}. \end{split} \tag{33}$$

From (32) and (33), for each *n* ≥ *N*0, we get

$$\begin{split} d_V(l_{n+1}, Ql^*) &\le \eta\, \xi_f\big(d_V(l_n, l^*)^{\omega_1}, d_V(l_n, l_{n+1})^{\omega_2}, d_V(l^*, Ql^*)^{\omega_3}\big) \\ &\quad + L\theta_f\big(d_V(l_n, l^*)^{\omega_1}, d_V(l_n, Ql_n)^{\omega_2}, d_V(l^*, Ql^*)^{\omega_3}, d_V(l^*, l_{n+1})^{\omega_4}\big) \\ &\le \eta\, \xi_f\big(d_V(l^*, Ql^*)^{\omega_1}, d_V(l^*, Ql^*)^{\omega_2}, d_V(l^*, Ql^*)^{\omega_3}\big) \\ &\quad + L\theta_f\big(d_V(l_n, l^*)^{\omega_1}, d_V(l_n, Ql_n)^{\omega_2}, d_V(l^*, Ql^*)^{\omega_3}, d_V(l^*, l_{n+1})^{\omega_4}\big) \\ &\le \eta\, d_V(l^*, Ql^*) + L\theta_f\big(d_V(l_n, l^*)^{\omega_1}, d_V(l_n, Ql_n)^{\omega_2}, d_V(l^*, Ql^*)^{\omega_3}, d_V(l^*, l_{n+1})^{\omega_4}\big). \end{split} \tag{34}$$

By applying the limit *n* → ∞ in (34), we get

$$d_V(l^*, Ql^*) \le \eta\, d_V(l^*, Ql^*).$$

The above inequality is impossible when $d_V(l^*, Ql^*) > 0$. Hence, the claim is correct: $l^* \in Ql^*$. By (25) we get

$$\phi(l^*) \le \sup_{l \in Ql^*} \phi(l) \le \eta\, \phi(l^*).$$

This implies that *φ*(*l* ∗ ) = 0. Hence, *l* ∗ is a *φ*-fixed point of *Q*.

The following result examines the existence of *φ*-fixed points for a set-valued map *Q* : *V* → *CL*(*V*).

**Theorem 4.** *Let* (*V*, *dV*) *be a complete metric space and let Q* : *V* → *CL*(*V*) *be a set-valued map and φ* : *V* → [0, ∞) *be another map fulfilling the following inequalities:*

$$d_V(l, Ql) \le \eta\, \xi_f\big(d_V(k, l)^{\omega_1}, d_V(k, Qk)^{\omega_2}, d_V(l, Ql)^{\omega_3}\big) \tag{35}$$

*for each k*, *l* ∈ *V* \ *Fix*(*Q*) *with l* ∈ *Qk, where ω*1, *ω*2, *ω*3 ∈ [0, 1] *with ω*1 + *ω*2 + *ω*3 = 1 *and ω*3 ≠ 1*; further, for every k* ∈ *V, we have*

$$\sup\_{l \in Qk} \phi(l) \le \eta \phi(k),\tag{36}$$

*where η* ∈ (0, 1)*. Moreover, assume that Graph*(*Q*) = {(*k*, *l*) : *k* ∈ *V*, *l* ∈ *Qk*} *is closed. Then at least one φ-fixed point of Q exists in V.*

**Proof.** Following the proof of Theorem 3, one can easily obtain a Cauchy sequence $\{l_n\}$ in the complete metric space $(V, d_V)$ with $l_n \in Ql_{n-1}$ $\forall n \in \mathbb{N}$, $l_{n-1} \neq l_n$ $\forall n \in \mathbb{N}$, and

$$d_V(l_n, l_{n+1}) \le (\sqrt{\eta})^n d_V(l_0, l_1) \ \forall n \in \mathbb{N}.$$

Furthermore, there exists a point $l^* \in V$ with $l_n \to l^*$. Since $l_n \in Ql_{n-1}$ $\forall n \in \mathbb{N}$, we have $(l_{n-1}, l_n) \in Graph(Q)$ $\forall n \in \mathbb{N}$. As $Graph(Q)$ is closed, $(l^*, l^*) \in Graph(Q)$; that is, $l^* \in Ql^*$. Hence, $l^*$ is a fixed point of $Q$. By considering (36), we conclude that $l^*$ is a $\phi$-fixed point of $Q$.

Now we present the definition of the abstract interpolative Reich–Rus–Ćirić type-II set-valued contraction with *φ* shrink.

**Definition 4.** *A set-valued map Q* : *V* → *CB*(*V*) *is called an abstract interpolative Reich–Rus–Ćirić type-II set-valued contraction with φ shrink if the below-stated inequalities are fulfilled:*

$$\begin{split} \beta_\psi\Big(H_V(Qk, Ql),\, \xi_f\big(d_V(k, l)^{\omega_1}, d_V(k, Qk)^{\omega_2}, d_V(l, Ql)^{\omega_3}\big)\Big) \\ + L\theta_f\big(d_V(k, l)^{\omega_1}, d_V(k, Qk)^{\omega_2}, d_V(l, Ql)^{\omega_3}, d_V(l, Qk)^{\omega_4}\big) \ge 0 \end{split} \tag{37}$$

*for each k*, *l* ∈ *V* \ *Fix*(*Q*) *with l* ≠ *k, where ω*1, *ω*2, *ω*3 ∈ [0, 1] *with ω*1 + *ω*2 + *ω*3 = 1*, ω*3 ≠ 0*, ω*4 > 0*, and L* ≥ 0*;*

*for every k* ∈ *V, we have*

$$\beta_\psi\left(\sup_{l \in Qk} \phi(l), \phi(k)\right) \ge 0. \tag{38}$$

In the following theorems, we assume that $\xi_f$ and $\psi$ are strictly increasing instead of nondecreasing.

**Theorem 5.** *Let Q* : *V* → *CB*(*V*) *be an abstract interpolative Reich–Rus–Ćirić type-II set-valued contraction with φ shrink on a complete metric space* (*V*, *dV*)*. Then at least one φ-fixed point of Q exists in V.*

**Proof.** For an arbitrary point $l_0 \in V$, we get a point $l_1 \in Ql_0$. If $l_0 = l_1$, then $l_0$ is a fixed point of $Q$. Moreover, by (38), we get $0 \le \beta_\psi\big(\sup_{l \in Ql_0} \phi(l), \phi(l_0)\big) \le \psi(\phi(l_0)) - \sup_{l \in Ql_0} \phi(l)$; since $l_0 \in Ql_0$, this implies $\phi(l_0) \le \psi(\phi(l_0))$, which forces $\phi(l_0) = 0$. Hence, $l_0$ is a $\phi$-fixed point of $Q$. Suppose that neither $l_0$ nor $l_1$ is a fixed point of $Q$; then by (37) we get

$$\begin{split} \beta_\psi\Big(H_V(Ql_0, Ql_1),\, \xi_f\big(d_V(l_0, l_1)^{\omega_1}, d_V(l_0, Ql_0)^{\omega_2}, d_V(l_1, Ql_1)^{\omega_3}\big)\Big) \\ + L\theta_f\big(d_V(l_0, l_1)^{\omega_1}, d_V(l_0, Ql_0)^{\omega_2}, d_V(l_1, Ql_1)^{\omega_3}, d_V(l_1, Ql_0)^{\omega_4}\big) \ge 0. \end{split} \tag{39}$$

This implies that

$$\begin{split} H_V(Ql_0, Ql_1) &\le \psi\big(\xi_f\big(d_V(l_0, l_1)^{\omega_1}, d_V(l_0, Ql_0)^{\omega_2}, d_V(l_1, Ql_1)^{\omega_3}\big)\big) \\ &\quad + L\theta_f\big(d_V(l_0, l_1)^{\omega_1}, d_V(l_0, Ql_0)^{\omega_2}, d_V(l_1, Ql_1)^{\omega_3}, d_V(l_1, Ql_0)^{\omega_4}\big). \end{split} \tag{40}$$

Since $l_1 \in Ql_0$, the above inequality yields

$$d_V(l_1, Ql_1) \le \psi\big(\xi_f\big(d_V(l_0, l_1)^{\omega_1}, d_V(l_0, l_1)^{\omega_2}, d_V(l_1, Ql_1)^{\omega_3}\big)\big). \tag{41}$$

If $d_V(l_0, l_1) \le d_V(l_1, Ql_1)$, then by (41) we get $d_V(l_1, Ql_1) \le \psi(d_V(l_1, Ql_1)) < d_V(l_1, Ql_1)$, which is impossible. Thus, we conclude $d_V(l_0, l_1) > d_V(l_1, Ql_1)$. By considering the strictly increasing behavior of $\psi$ and $\xi_f$, and using (41), we get

$$\begin{split} d_V(l_1, Ql_1) &\le \psi\big(\xi_f\big(d_V(l_0, l_1)^{\omega_1}, d_V(l_0, l_1)^{\omega_2}, d_V(l_1, Ql_1)^{\omega_3}\big)\big) \\ &< \psi\big(\xi_f\big(d_V(l_0, l_1)^{\omega_1}, d_V(l_0, l_1)^{\omega_2}, d_V(l_0, l_1)^{\omega_3}\big)\big) \\ &\le \psi\big(d_V(l_0, l_1)\big). \end{split} \tag{42}$$

As $d_V(l_1, Ql_1) < \psi(d_V(l_0, l_1))$, there exists some real number $e_1 > 0$ such that $d_V(l_1, Ql_1) + e_1 = \psi(d_V(l_0, l_1))$. Thus, we get $l_2 \in Ql_1$ such that $d_V(l_1, l_2) \le d_V(l_1, Ql_1) + e_1$. Hence, we conclude that

$$d\_V(l\_1, l\_2) \le \psi\left(d\_V(l\_0, l\_1)\right). \tag{43}$$

Continuing the proof along the above lines, we can obtain a sequence $\{l_n\}$ with $l_n \in Ql_{n-1}$ $\forall n \in \mathbb{N}$, $l_{n-1} \neq l_n$ $\forall n \in \mathbb{N}$, and

$$d_V(l_n, l_{n+1}) \le \psi^n(d_V(l_0, l_1)) \ \forall n \in \mathbb{N}.$$

Further, it can be seen that $\{l_n\}$ is a Cauchy sequence in the complete metric space $(V, d_V)$, and there exists $l^* \in V$ with $l_n \to l^*$. Now, we claim that $l^* \in Ql^*$. If this is wrong, then $d_V(l^*, Ql^*) > 0$. Thus, we can obtain $N_0 \in \mathbb{N}$ such that

$$\max\{d_V(l_n, l^*), d_V(l_n, l_{n+1}), d_V(l^*, Ql^*)\} = d_V(l^*, Ql^*) \ \forall n \ge N_0. \tag{44}$$

By (37), for $k = l_n$ and $l = l^*$, we get

$$\begin{split} \beta_\psi\Big(H_V(Ql_n, Ql^*),\, \xi_f\big(d_V(l_n, l^*)^{\omega_1}, d_V(l_n, Ql_n)^{\omega_2}, d_V(l^*, Ql^*)^{\omega_3}\big)\Big) \\ + L\theta_f\big(d_V(l_n, l^*)^{\omega_1}, d_V(l_n, Ql_n)^{\omega_2}, d_V(l^*, Ql^*)^{\omega_3}, d_V(l^*, Ql_n)^{\omega_4}\big) \ge 0 \ \forall n \in \mathbb{N}. \end{split} \tag{45}$$

From the above inequality, we obtain

$$\begin{split} d_V(l_{n+1}, Ql^*) &\le H_V(Ql_n, Ql^*) \\ &\le \psi\big(\xi_f\big(d_V(l_n, l^*)^{\omega_1}, d_V(l_n, Ql_n)^{\omega_2}, d_V(l^*, Ql^*)^{\omega_3}\big)\big) \\ &\quad + L\theta_f\big(d_V(l_n, l^*)^{\omega_1}, d_V(l_n, Ql_n)^{\omega_2}, d_V(l^*, Ql^*)^{\omega_3}, d_V(l^*, Ql_n)^{\omega_4}\big) \ \forall n \in \mathbb{N}. \end{split} \tag{46}$$

From (44) and (46), for each *n* ≥ *N*0, we get

$$\begin{split} d_V(l_{n+1}, Ql^*) &\le \psi\big(\xi_f\big(d_V(l_n, l^*)^{\omega_1}, d_V(l_n, l_{n+1})^{\omega_2}, d_V(l^*, Ql^*)^{\omega_3}\big)\big) \\ &\quad + L\theta_f\big(d_V(l_n, l^*)^{\omega_1}, d_V(l_n, Ql_n)^{\omega_2}, d_V(l^*, Ql^*)^{\omega_3}, d_V(l^*, l_{n+1})^{\omega_4}\big) \\ &\le \psi\big(\xi_f\big(d_V(l^*, Ql^*)^{\omega_1}, d_V(l^*, Ql^*)^{\omega_2}, d_V(l^*, Ql^*)^{\omega_3}\big)\big) \\ &\quad + L\theta_f\big(d_V(l_n, l^*)^{\omega_1}, d_V(l_n, Ql_n)^{\omega_2}, d_V(l^*, Ql^*)^{\omega_3}, d_V(l^*, l_{n+1})^{\omega_4}\big) \\ &\le \psi\big(d_V(l^*, Ql^*)\big) + L\theta_f\big(d_V(l_n, l^*)^{\omega_1}, d_V(l_n, Ql_n)^{\omega_2}, d_V(l^*, Ql^*)^{\omega_3}, d_V(l^*, l_{n+1})^{\omega_4}\big). \end{split} \tag{47}$$

By letting *n* → ∞ in (47), we get

$$d_V(l^*, Ql^*) \le \psi(d_V(l^*, Ql^*)),$$

which is impossible for $d_V(l^*, Ql^*) > 0$. Hence, the claim is correct: $l^* \in Ql^*$. Moreover, by (38) we get $0 \le \beta_\psi\big(\sup_{l \in Ql^*} \phi(l), \phi(l^*)\big) \le \psi(\phi(l^*)) - \sup_{l \in Ql^*} \phi(l)$. As $l^* \in Ql^*$, we have $\phi(l^*) \le \sup_{l \in Ql^*} \phi(l) \le \psi(\phi(l^*))$. This implies that $\phi(l^*) = 0$. Hence, $l^*$ is a $\phi$-fixed point of $Q$.

The following theorem examines the existence of $\phi$-fixed points for a set-valued map $Q : V \to CL(V)$.

**Theorem 6.** *Let* (*V*, *dV*) *be a complete metric space and let Q* : *V* → *CL*(*V*) *be a set-valued map and φ* : *V* → [0, ∞) *be another map fulfilling the following inequalities:*

$$\beta_\psi\Big(d_V(l, Ql),\, \xi_f\big(d_V(k, l)^{\omega_1}, d_V(k, Qk)^{\omega_2}, d_V(l, Ql)^{\omega_3}\big)\Big) \ge 0 \tag{48}$$

*for each k*, *l* ∈ *V* \ *Fix*(*Q*) *with l* ∈ *Qk, where ω*1, *ω*<sup>2</sup> ∈ [0, 1] *and ω*<sup>3</sup> ∈ (0, 1) *with ω*<sup>1</sup> + *ω*<sup>2</sup> + *ω*<sup>3</sup> = 1*; further, for every k* ∈ *V, we have*

$$\beta_\psi\left(\sup_{l \in Qk} \phi(l), \phi(k)\right) \ge 0. \tag{49}$$

*Furthermore, assume that Graph*(*Q*) = {(*k*, *l*) : *k* ∈ *V*, *l* ∈ *Qk*} *is closed. Then at least one φ-fixed point of Q exists in V.*

#### **3. Application**

A suitable application of the work can be seen as an existence theorem for the following type of fractional-order integral equation:

$$k(t) = q(t) + \frac{\mu}{[\Gamma(\alpha)]^2} \int_0^{p(t)} (p(t) - s)^{\alpha-1} w(s, k(s))\, ds, \quad \alpha \in (0, 1), \quad t \in J = [a, b], \tag{50}$$

where $q : J \to \mathbb{R}$, $p : J \to \mathbb{R}^+ = [0, \infty)$, and $w : J \times \mathbb{R} \to \mathbb{R}$ are continuous functions, $\mu$ is a constant real number, and $\Gamma$ is the Euler gamma function; that is, $\Gamma(\alpha) = \int_0^\infty t^{\alpha-1} e^{-t}\, dt$.

Consider $V = C([a, b], \mathbb{R})$, the space of all continuous and bounded real-valued functions defined on $J = [a, b]$. Define a metric on $V$ by

$$d_V(k, l) = \|k - l\| = \max_{t \in J} |k(t) - l(t)| \ \forall k, l \in V.$$

Clearly, $(V, d_V)$ is a complete metric space. Now, we move towards the existence theorem for (50).

**Theorem 7.** *Consider $V = C([a, b], \mathbb{R})$ and consider the operator*

$$Q \colon V \to V, \quad Qk(t) = q(t) + \frac{\mu}{[\Gamma(\alpha)]^2} \int_0^{p(t)} (p(t) - s)^{\alpha-1} w(s, k(s))\, ds, \quad \alpha \in (0, 1), \quad t \in J,$$

*where $q : J \to \mathbb{R}$, $p : J \to \mathbb{R}^+ = [0, \infty)$, and $w : J \times \mathbb{R} \to \mathbb{R}$ are continuous functions, $\mu$ is constant, and $\Gamma$ is the Euler gamma function; that is, $\Gamma(\alpha) = \int_0^\infty t^{\alpha-1} e^{-t}\, dt$. Moreover, consider that there are $\omega_1, \omega_2, \omega_3 \in [0, 1]$ with $\omega_1 + \omega_2 + \omega_3 = 1$ satisfying*

$$\frac{|w(s,k(s)) - w(s,l(s))|}{||k - Qk||^{\omega\_2}||l - Ql||^{\omega\_3}} \le [\Gamma(\alpha + 1)]^2 |k(s) - l(s)|^{\omega\_1} \tag{51}$$

*for all $s \in J$ and for each $k, l \in V$ with $\min\{\|k - l\|, \|k - Qk\|, \|l - Ql\|\} > 0$; moreover,*

$$\sup_{t \in J} \left|\mu (p(t))^\alpha\right| \le 1.$$

*Then, (50) possesses at least one solution.*

**Proof.** For each $k, l \in V$ with $\min\{\|k - l\|, \|k - Qk\|, \|l - Ql\|\} > 0$, we obtain

$$\begin{split} |Qk(t) - Ql(t)| &= \left|\frac{\mu}{[\Gamma(\alpha)]^2} \int_0^{p(t)} (p(t) - s)^{\alpha-1} [w(s, k(s)) - w(s, l(s))]\, ds\right| \\ &\le \left|\frac{\mu}{[\Gamma(\alpha)]^2} \int_0^{p(t)} (p(t) - s)^{\alpha-1}\, ds\right| [\Gamma(\alpha+1)]^2 \|k - l\|^{\omega_1} \|k - Qk\|^{\omega_2} \|l - Ql\|^{\omega_3} \\ &= \left|\frac{\mu}{[\Gamma(\alpha)]^2} \frac{(p(t))^\alpha}{\alpha}\right| [\alpha\Gamma(\alpha)]^2 \|k - l\|^{\omega_1} \|k - Qk\|^{\omega_2} \|l - Ql\|^{\omega_3} \\ &= \alpha\, |\mu| (p(t))^\alpha \|k - l\|^{\omega_1} \|k - Qk\|^{\omega_2} \|l - Ql\|^{\omega_3} \ \forall t \in J. \end{split}$$

Thus, we get

$$\|Qk - Ql\| \le \alpha \|k - l\|^{\omega\_1} \|k - Qk\|^{\omega\_2} \|l - Ql\|^{\omega\_3}$$

for each $k, l \in V \setminus Fix(Q)$ with $k \neq l$. Thus, by Corollary 3, a fixed point of $Q$ exists; that is, the integral Equation (50) possesses at least one solution.
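To make Theorem 7 concrete, the following sketch iterates the operator $Q$ numerically for one hypothetical instance of (50). The data $q(t) = t$, $p(t) = t$, $w(s, k) = \cos(k)/2$, $\alpha = 0.5$, and $\mu = 0.9$ on $J = [0, 1]$ are our own illustrative choices, not taken from the paper, and the midpoint quadrature after the substitution $s = p(t)u$ is only one simple way to handle the weakly singular kernel.

```python
import math

# Hypothetical instance of Equation (50) on J = [0, 1] (illustrative data,
# not from the paper): q(t) = t, p(t) = t, w(s, k) = cos(k)/2, alpha = 0.5,
# mu = 0.9, so that sup_t |mu * p(t)**alpha| <= 1 holds.
alpha, mu = 0.5, 0.9
q = lambda t: t
p = lambda t: t
w = lambda s, k: math.cos(k) / 2.0

N = 60                                   # grid points on J
ts = [i / (N - 1) for i in range(N)]

def Q(k):
    """One application of the integral operator: midpoint quadrature after
    the substitution s = p(t)*u, which avoids the endpoint singularity."""
    g2 = math.gamma(alpha) ** 2
    M = 200                              # quadrature nodes
    out = []
    for t in ts:
        pt = p(t)
        acc = 0.0
        for j in range(M):
            u = (j + 0.5) / M
            s = pt * u
            x = s * (N - 1)              # piecewise-linear interpolation of k at s
            lo = min(int(x), N - 2)
            ks = k[lo] + (x - lo) * (k[lo + 1] - k[lo])
            acc += (1 - u) ** (alpha - 1) * w(s, ks) / M
        out.append(q(t) + mu / g2 * pt ** alpha * acc)
    return out

k = [0.0] * N
for _ in range(25):                      # Picard iteration toward the fixed point
    k_new = Q(k)
    diff = max(abs(a - b) for a, b in zip(k, k_new))
    k = k_new
# diff shrinks geometrically, so k approximates a solution of this instance
```

Because $w$ here is Lipschitz with a small constant, the successive differences contract roughly like the factor $\mu/\Gamma(\alpha)^2 \cdot 2 \cdot 1/2 \approx 0.29$ per step, in line with the fixed point argument above.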

**Example 3.** *Consider V* = {0, 1, 2 · · · , 20} *and define*

$$d\_V(k,l) = \begin{cases} 0, & k=l\\ \max\{k,l\}, & k\neq l. \end{cases}$$

*Define Q* : *V* → *V and φ* : *V* → [0, ∞) *by*

$$Q(k) = \begin{cases} 0, & k = 0\\ k - 1, & otherwise \end{cases}$$

*and*

$$
\phi(k) = \frac{k}{2}.
$$

*Then, it is easy to verify that the axioms of Theorem 1 are valid by taking $\xi_f(a, b, c) = abc$, $\omega_1 = 0.99$, $\omega_2 = 0.005$, $\omega_3 = 0.005$, $L = 0$, and $\eta = \frac{99}{100}$. Thus, there is an element $k \in V$ with $Qk = k$ and $\phi(k) = 0$.*
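Example 3 can be checked by brute force; the sketch below (our own translation of the axioms into code, with the interpolative inequality written out for $\xi_f(a, b, c) = abc$ and $L = 0$) verifies the contractive condition over all admissible pairs and the $\phi$-shrink condition over $V$.

```python
# Brute-force check of Example 3: Q(k) = k - 1 (with Q(0) = 0) on
# V = {0, ..., 20}, phi(k) = k/2, xi_f(a, b, c) = a*b*c,
# omega = (0.99, 0.005, 0.005), L = 0, eta = 99/100.
eta, w1, w2, w3 = 0.99, 0.99, 0.005, 0.005
V = range(21)
d = lambda k, l: 0 if k == l else max(k, l)
Q = lambda k: 0 if k == 0 else k - 1
phi = lambda k: k / 2

fix = {k for k in V if Q(k) == k}                 # Fix(Q) = {0}
ok_contraction = all(
    d(Q(k), Q(l)) <= eta * d(k, l)**w1 * d(k, Q(k))**w2 * d(l, Q(l))**w3
    for k in V for l in V
    if k != l and k not in fix and l not in fix
)
ok_shrink = all(phi(Q(k)) <= eta * phi(k) for k in V)
print(ok_contraction, ok_shrink, phi(0))          # → True True 0.0
```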

**Example 4.** *Consider V* = W *the set of all whole numbers and define*

$$d\_V(k,l) = \begin{cases} 0, & k=l\\ \max\{k,l\}, & k\neq l. \end{cases}$$

*Define Q* : *V* → *CB*(*V*) *and φ* : *V* → [0, ∞) *by*

$$Q(k) = \begin{cases} \{0\}, & k \in \{0, 1\} \\ \{0, k-1\}, & k \in \{2, 3, \cdots, 10\} \\ \{0, k\}, & \text{otherwise} \end{cases}$$

*and*

$$\phi(k) = \begin{cases} k/2, & k \in \{1, 2, \dots, 10\}, \\ 0, & \text{otherwise}. \end{cases}$$

*Then, it is easy to check that the axioms of Theorem 6 are valid by taking $\xi_f(a, b, c) = abc$, $\beta_\psi(k, l) = (49/50)l - k$, $\omega_1 = 0.99$, $\omega_2 = 0.005$, and $\omega_3 = 0.005$, since*

$$(k-1)^{0.995} \le (49/50)k^{0.995} \text{ for each } k \in \{1, 2, \dots, 10\}.$$

*Hence, there is an element k* ∈ *V with k* ∈ *Qk and φ*(*k*) = 0*.*
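Similarly, the displayed inequality and the $\beta_\psi$-shrink condition of Example 4 can be verified numerically; the following sketch encodes $Q$, $\phi$, and $\beta_\psi(k, l) = (49/50)l - k$ directly (the range of $k$ checked is our own finite truncation of the whole numbers).

```python
# Brute-force check of Example 4 on a finite truncation of W.
def Q(k):
    if k in (0, 1):
        return {0}
    if 2 <= k <= 10:
        return {0, k - 1}
    return {0, k}

phi = lambda k: k / 2 if 1 <= k <= 10 else 0.0
beta_psi = lambda a, b: (49 / 50) * b - a

# the displayed inequality
ineq = all((k - 1) ** 0.995 <= (49 / 50) * k ** 0.995 for k in range(1, 11))
# the phi-shrink condition beta_psi(sup_{l in Qk} phi(l), phi(k)) >= 0
shrink = all(beta_psi(max(phi(l) for l in Q(k)), phi(k)) >= 0
             for k in range(0, 100))
print(ineq, shrink)   # → True True
```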

#### **4. Conclusions**

In this article, we have studied the existence of *φ*-fixed points for mappings satisfying abstract interpolative Reich–Rus–Ćirić-type contractions with a shrink map on a complete metric space. An abstract interpolative Reich–Rus–Ćirić-type contraction with a shrink map has the following characteristics:


Finally, we have studied the existence of a solution for a fractional-order integral equation using our results.

**Author Contributions:** Both authors contributed equally to this article and approved the final manuscript. All authors have read and agreed to the published version of the manuscript.

**Funding:** The Deanship of Scientific Research (DSR) at King Abdulaziz University, Jeddah, Saudi Arabia has funded this project, under grant number FP-083-43.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Not applicable.

**Acknowledgments:** The authors are grateful to the Deanship of Scientific Research (DSR) at King Abdulaziz University, Jeddah, Saudi Arabia for funding this project, under grant number FP-083-43.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


## *Article* **Solving a System of Differential Equations with Infinite Delay by Using Tripled Fixed Point Techniques on Graphs**

**Hasanen A. Hammad 1,2,\* ,† and Mohra Zayed 3,†**


**Abstract:** In this manuscript, some similar tripled fixed point results under certain restrictions on a *b*−metric space endowed with graphs are established. Furthermore, an example is provided to support our results. The obtained results extend, generalize, and unify several similar significant contributions in the literature. Finally, to further extend our results, the existence of a solution to a system of ordinary differential equations with infinite delay is derived.

**Keywords:** tripled fixed point; edge-preserving; directed graph; *b*-metric space; differential equation with infinite delay

#### **1. Introduction and Basic Concepts**

One of the most crucial methods for comprehending the world around us is mathematics, and its various fields allow other sciences to be analyzed. Integral and differential equations are crucial for creating models that deepen this understanding, and they in turn rely heavily on fixed point theory.

In 2011, Berinde and Borcut [1] defined the notion of a tripled fixed point (TFP) for self-mappings and established some interesting consequences in partially ordered metric spaces. TFP theory has a large number of significant applications that have been successfully employed to address a wide variety of issues; researchers have focused on these issues to examine possible solutions, as seen in [2–7].

In 2008, Jachymski [8] proposed considering partial order sets as graphs in metric spaces. He obtained novel contraction mappings using this concept, which generalized many of the prior contractions. Moreover, in a metric space endowed with a graph, some results of the fixed points under these contractions were successfully deduced. Several authors have used this contribution in various applications. See the series of papers [9–12].

As a continuation of this approach, the results of coupled fixed points and TFPs for edge-preserving mappings with applications in abstract spaces have been investigated. For more details, see [13–17].

Czerwik [18] introduced the concept of *b*−metric spaces as a generalization of ordinary metric spaces as follows:

**Definition 1.** *Let $\chi \neq \emptyset$ be a set and $s \ge 1$ be a real number. A function $v : \chi \times \chi \to \mathbb{R}^+$ is said to be a b-metric on χ if, for each $z, d, r \in \chi$, the hypotheses below hold:*


**Citation:** Hammad, H.A.; Zayed, M. Solving a System of Differential Equations with Infinite Delay by Using Tripled Fixed Point Techniques on Graphs. *Symmetry* **2022**, *14*, 1388. https://doi.org/10.3390/ sym14071388

Academic Editor: Ioan Rașa

Received: 8 June 2022 Accepted: 4 July 2022 Published: 6 July 2022

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

*The pair* (*χ*, *v*) *is known as a b-metric space.*

In the context of a metric space (*χ*, *v*), let ∇ = {(*z*, *z*) : *z* ∈ *χ*} be the set of self loops and f = (∨(f), Ξ(f)) be a directed graph where ∨(f) represents the set of vertices and Ξ(f) refers to the set of edges, so Ξ(f) ⊇ ∇ and f has no parallel edges.

Consider $z, d \in \vee(f)$; a path from $z$ to $d$ is a finite sequence $\{z_t\}_{t=0}^N \subseteq \vee(f)$, where $z_0 = z$, $z_N = d$, and $(z_t, z_{t-1}) \in \Xi(f)$ for $t = 1, 2, \ldots, N$. For simplicity, we write

$[z]_f = \{d \in \chi : \text{there is a path from } z \text{ to } d\}.$

If $\vee(f) = [z]_f$ for every $z \in \chi$, then $f$ is said to be connected.

By reversing the directions of the edges of a directed graph $f$, we obtain the directed graph $f^{-1}$; i.e., $\vee(f^{-1}) = \vee(f)$ and

$$\Xi\big(f^{-1}\big) = \{(d, z) : (z, d) \in \Xi(f)\}.$$

Moreover, by neglecting the direction of the edges, we obtain the undirected graph $\widetilde{f}$; i.e., $\vee(\widetilde{f}) = \vee(f)$ and

$$\Xi\big(\widetilde{f}\big) = \Xi(f) \cup \Xi\big(f^{-1}\big).$$
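The notions above translate directly into code; the following sketch (with a small hypothetical edge set of our own choosing) computes $[z]_f$ by breadth-first search and derives the reversed and undirected edge sets from $\Xi(f)$.

```python
from collections import deque

# A small hypothetical directed graph f on chi = {0, 1, 2, 3}; Xi(f) must
# contain all self loops (the diagonal set from the text).
chi = {0, 1, 2, 3}
edges = {(z, z) for z in chi} | {(0, 1), (1, 2), (3, 0)}   # Xi(f)

def reachable(z):
    """[z]_f : all d in chi such that there is a path from z to d."""
    seen, todo = {z}, deque([z])
    while todo:
        u = todo.popleft()
        for (a, b) in edges:
            if a == u and b not in seen:
                seen.add(b)
                todo.append(b)
    return seen

edges_inv = {(d, z) for (z, d) in edges}   # edges of the reversed graph
edges_und = edges | edges_inv              # edges with direction neglected
print(sorted(reachable(0)), reachable(3) == chi)   # → [0, 1, 2] True
```

Here the graph is connected from vertex 3 (every vertex is reachable) but not from vertex 0, matching the definition of connectedness above.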

Herein, we assume that (*χ*, *v*) is a *b*−metric space, and f is a directed graph, so ∨(f) = *χ* and Ξ(f) ⊇ ∇. Further, we define another graph f on the product *χ* × *χ* × *χ* as follows:

$$\big((z, d, r), (\bar{z}, \bar{d}, \bar{r})\big) \in \Xi(f) \Leftrightarrow (z, \bar{z}) \in \Xi(f),\ (\bar{d}, d) \in \Xi(f),\ \text{and } (r, \bar{r}) \in \Xi(f)$$

for all $(z, d, r), (\bar{z}, \bar{d}, \bar{r}) \in \chi^3$.

**Definition 2** ([1])**.** *A trio $(z, d, r) \in \chi^3$ is called a TFP of the mapping $\Omega : \chi^3 \to \chi$ if*

$$z = \Omega(z, d, r), \; d = \Omega(d, r, z), \; \text{and} \; r = \Omega(r, z, d).$$
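For intuition, Definition 2 can be illustrated with a toy contraction (our own choice, not from the paper): for $\Omega(z, d, r) = (z + d + r)/6$, iterating the tripled scheme used later in the proof of Theorem 1 drives any starting trio to the TFP $(0, 0, 0)$.

```python
# Toy illustration of a TFP: Omega(z, d, r) = (z + d + r) / 6 fixes (0, 0, 0).
Omega = lambda z, d, r: (z + d + r) / 6.0

z, d, r = 3.0, -1.0, 5.0
for _ in range(100):   # tripled Picard scheme z_{k+1} = Omega(z_k, d_k, r_k), etc.
    z, d, r = Omega(z, d, r), Omega(d, r, z), Omega(r, z, d)

residual = max(abs(z - Omega(z, d, r)),
               abs(d - Omega(d, r, z)),
               abs(r - Omega(r, z, d)))
# residual is numerically zero, so (z, d, r) satisfies Definition 2
```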

**Definition 3** ([15])**.** *Let* Ω : *χ* × *χ* → *χ be a given mapping defined on a complete metric space* (*χ*, *v*) *equipped with a directed graph* f. *We say that* Ω *has the mixed* f−*monotone property if for all z*, *z*1, *z*2, *d*, *d*1, *d*<sup>2</sup> ∈ *χ*,

$$(z_1, z_2) \in \Xi(f) \text{ implies } (\Omega(z_1, d), \Omega(z_2, d)) \in \Xi(f),$$

*and*

$$(d_1, d_2) \in \Xi(f) \text{ implies } (\Omega(z, d_2), \Omega(z, d_1)) \in \Xi(f).$$

In a similar vein, our work seeks to create a new generalization of TFP results in the context of a *b*-metric space with a graph. Our results extend and unify the results of Alfuraidan and Khamsi [15], Luong and Thuan [19], and Işık and Türkoğlu [20] in partially ordered metric spaces. Our theoretical findings have been used to show that a system of ordinary differential equations with infinite delay has a solution.

#### **2. Main Results**

This section starts with a generalization of Definition 3 as follows:

**Definition 4.** *Let $\Omega : \chi^3 \to \chi$ be a function defined on a complete metric space $(\chi, v)$ with a directed graph. We say that $\Omega$ has the mixed $f$-monotone property if for all $z, z_1, z_2, d, d_1, d_2, r, r_1, r_2 \in \chi$,*

$$\begin{array}{rcl} (z_1, z_2) &\in& \Xi(f) \text{ implies } (\Omega(z_1, d, r), \Omega(z_2, d, r)) \in \Xi(f), \\ (d_1, d_2) &\in& \Xi(f) \text{ implies } (\Omega(z, d_1, r), \Omega(z, d_2, r)) \in \Xi(f), \end{array}$$

*and*

$$(r_1, r_2) \in \Xi(f) \text{ implies } (\Omega(z, d, r_1), \Omega(z, d, r_2)) \in \Xi(f).$$

In order to facilitate our study, we denote by Γ the set of pairs of functions (*θ*, *ϑ*), where *θ*, *ϑ* : [0, ∞) → [0, ∞) fulfill the constraints below:


The lemma below is useful for our main results.

**Lemma 1.** *Assume that $(\chi, v)$ is a b-metric space with $s \ge 1$. Suppose that $\{\ell_k\}$, $\{\delta_k\}$, and $\{\lambda_k\}$ are three sequences in χ, and there is $\sigma \in [0, \frac{1}{s})$ justifying*

$$v(\ell_k, \ell_{k+1}) + v(\delta_k, \delta_{k+1}) + v(\lambda_k, \lambda_{k+1}) \le \sigma\big(v(\ell_{k-1}, \ell_k) + v(\delta_{k-1}, \delta_k) + v(\lambda_{k-1}, \lambda_k)\big) \tag{1}$$

*for any $k \in \mathbb{N}$. Then, $\{\ell_k\}$, $\{\delta_k\}$, and $\{\lambda_k\}$ are Cauchy sequences.*

**Proof.** Let *j*, *k* ∈ N, and *j* < *k*. Then,

$$\begin{split} v(\ell_j, \ell_k) + v(\delta_j, \delta_k) + v(\lambda_j, \lambda_k) &\le s\big(v(\ell_j, \ell_{j+1}) + v(\ell_{j+1}, \ell_k)\big) + s\big(v(\delta_j, \delta_{j+1}) + v(\delta_{j+1}, \delta_k)\big) \\ &\quad + s\big(v(\lambda_j, \lambda_{j+1}) + v(\lambda_{j+1}, \lambda_k)\big) \\ &\le s\big(v(\ell_j, \ell_{j+1}) + v(\delta_j, \delta_{j+1}) + v(\lambda_j, \lambda_{j+1})\big) \\ &\quad + s^2\big(v(\ell_{j+1}, \ell_{j+2}) + v(\delta_{j+1}, \delta_{j+2}) + v(\lambda_{j+1}, \lambda_{j+2})\big) \\ &\quad + s^2\big(v(\ell_{j+2}, \ell_k) + v(\delta_{j+2}, \delta_k) + v(\lambda_{j+2}, \lambda_k)\big) \\ &\le \cdots \\ &\le s\big(v(\ell_j, \ell_{j+1}) + v(\delta_j, \delta_{j+1}) + v(\lambda_j, \lambda_{j+1})\big) \\ &\quad + s^2\big(v(\ell_{j+1}, \ell_{j+2}) + v(\delta_{j+1}, \delta_{j+2}) + v(\lambda_{j+1}, \lambda_{j+2})\big) + \cdots \\ &\quad + s^{k-j-1}\big(v(\ell_{k-2}, \ell_{k-1}) + v(\delta_{k-2}, \delta_{k-1}) + v(\lambda_{k-2}, \lambda_{k-1})\big) \\ &\quad + s^{k-j}\big(v(\ell_{k-1}, \ell_k) + v(\delta_{k-1}, \delta_k) + v(\lambda_{k-1}, \lambda_k)\big). \end{split}$$

From the fact that *sσ* < 1, and using (1), we have

$$\begin{split} v(\ell_j, \ell_k) + v(\delta_j, \delta_k) + v(\lambda_j, \lambda_k) &\le \big(s\sigma^j + s^2\sigma^{j+1} + \cdots + s^{k-j-1}\sigma^{k-2} + s^{k-j}\sigma^{k-1}\big)\big(v(\ell_0, \ell_1) + v(\delta_0, \delta_1) + v(\lambda_0, \lambda_1)\big) \\ &= s\sigma^j\big(1 + s\sigma + \cdots + (s\sigma)^{k-j-1}\big)\big(v(\ell_0, \ell_1) + v(\delta_0, \delta_1) + v(\lambda_0, \lambda_1)\big) \\ &\le \frac{s\sigma^j}{1 - s\sigma}\big(v(\ell_0, \ell_1) + v(\delta_0, \delta_1) + v(\lambda_0, \lambda_1)\big). \end{split}$$

It follows that

$$\lim_{j, k \to \infty}\left(v(\ell_j, \ell_k) + v(\delta_j, \delta_k) + v(\lambda_j, \lambda_k)\right) = 0.$$

Hence, $\{\ell_k\}$, $\{\delta_k\}$, and $\{\lambda_k\}$ are Cauchy sequences.
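The geometric bound driving this Cauchy argument can be checked numerically. The sketch below uses illustrative values $s = 2$ and $\sigma = 0.3$ (so that $s\sigma < 1$) and compares the telescoped sum against the closed form $s\sigma^j/(1 - s\sigma)$:

```python
# Numerical sanity check of the bound used in Lemma 1 (illustrative values):
# sum_{i=j}^{k-1} s^(i-j+1) * sigma^i  <=  s * sigma^j / (1 - s*sigma),
# which holds whenever s*sigma < 1.
s, sigma = 2.0, 0.3
assert s * sigma < 1

def telescoped(j, k):
    # s*sigma^j + s^2*sigma^(j+1) + ... + s^(k-j)*sigma^(k-1)
    return sum(s ** (i - j + 1) * sigma ** i for i in range(j, k))

closed_form = lambda j: s * sigma ** j / (1 - s * sigma)

for j, k in [(0, 5), (2, 10), (3, 50)]:
    assert telescoped(j, k) <= closed_form(j) + 1e-12

# the bound tends to 0 as j grows, which is what forces the sequences to be Cauchy
assert closed_form(40) < 1e-12
```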

Now, we formulate and prove the first main result.

**Theorem 1.** *On $(\chi, \Xi, v)$, let $(\chi, v)$ be a complete $b$-metric space with $s \ge 1$, and let $\Omega : \chi^3 \to \chi$ be a continuous mapping that has the mixed $f$-monotone property on $\chi$, for which there is a pair $(\theta, \vartheta) \in \Gamma$ so that*

$$\theta\left(s^2 v(\Omega(z, d, r), \Omega(z^*, d^*, r^*))\right) \le \frac{1}{3}\vartheta\left(v(z, z^*) + v(d, d^*) + v(r, r^*)\right), \tag{2}$$

*for all $(z, d, r), (z^*, d^*, r^*) \in \chi^3$ with $((z, d, r), (z^*, d^*, r^*)) \in \Xi(f)$. If there are $z_0, d_0, r_0 \in \chi$ so that*

$$((z_0, d_0, r_0), (\Omega(z_0, d_0, r_0), \Omega(d_0, r_0, z_0), \Omega(r_0, z_0, d_0))) \in \Xi(f),$$

*then $\Omega$ has a TFP $\left(\widehat{z}, \widehat{d}, \widehat{r}\right) \in \chi^3$.*

**Proof.** Put $z_{k+1} = \Omega(z_k, d_k, r_k)$, $d_{k+1} = \Omega(d_k, r_k, z_k)$, and $r_{k+1} = \Omega(r_k, z_k, d_k)$. Based on our assumption, we have

$$((z_0, d_0, r_0), (z_1, d_1, r_1)) \in \Xi(f),$$

which leads to

$$\begin{aligned} \theta\left(s^2 v(z_2, z_1)\right) &= \theta\left(s^2 v(\Omega(z_1, d_1, r_1), \Omega(z_0, d_0, r_0))\right) \\ &\le \frac{1}{3}\vartheta\left(v(z_1, z_0) + v(d_1, d_0) + v(r_1, r_0)\right). \end{aligned}$$

Analogously, since $((d_0, r_0, z_0), (d_1, r_1, z_1)) \in \Xi(f)$, one can obtain

$$\theta\left(s^2 v(d_2, d_1)\right) \le \frac{1}{3}\vartheta\left(v(d_1, d_0) + v(r_1, r_0) + v(z_1, z_0)\right).$$

Similarly, since $((r_0, z_0, d_0), (r_1, z_1, d_1)) \in \Xi(f)$, we can write

$$\theta\left(s^2 v(r_2, r_1)\right) \le \frac{1}{3}\vartheta\left(v(r_1, r_0) + v(z_1, z_0) + v(d_1, d_0)\right).$$

Because $\Omega$ has the mixed $f$-monotone property, we have, for $k \ge 1$,

$$((z_k, d_k, r_k), (z_{k+1}, d_{k+1}, r_{k+1})) \in \Xi(f), \qquad ((d_k, r_k, z_k), (d_{k+1}, r_{k+1}, z_{k+1})) \in \Xi(f),$$

and

$$((r_k, z_k, d_k), (r_{k+1}, z_{k+1}, d_{k+1})) \in \Xi(f).$$

Then,

$$\theta\left(s^2 v(z_{k+1}, z_k)\right) \le \frac{1}{3}\vartheta\left(v(z_k, z_{k-1}) + v(d_k, d_{k-1}) + v(r_k, r_{k-1})\right), \tag{3}$$

$$\theta\left(s^2 v(d_{k+1}, d_k)\right) \le \frac{1}{3}\vartheta\left(v(d_k, d_{k-1}) + v(r_k, r_{k-1}) + v(z_k, z_{k-1})\right), \tag{4}$$

and

$$\theta\left(s^2 v(r_{k+1}, r_k)\right) \le \frac{1}{3}\vartheta\left(v(r_k, r_{k-1}) + v(z_k, z_{k-1}) + v(d_k, d_{k-1})\right). \tag{5}$$

Adding (3)–(5), we obtain

$$\theta\left(s^2 v(z_{k+1}, z_k)\right) + \theta\left(s^2 v(d_{k+1}, d_k)\right) + \theta\left(s^2 v(r_{k+1}, r_k)\right) \le \vartheta\left(v(z_k, z_{k-1}) + v(d_k, d_{k-1}) + v(r_k, r_{k-1})\right).$$

It follows from the properties of $(\theta, \vartheta)$ that

$$\theta\left(s^2\left(v(z_{k+1}, z_k) + v(d_{k+1}, d_k) + v(r_{k+1}, r_k)\right)\right) \le \vartheta\left(v(z_k, z_{k-1}) + v(d_k, d_{k-1}) + v(r_k, r_{k-1})\right);$$

again, from the properties of $(\theta, \vartheta)$, we have

$$\theta\left(s^2\left(v(z_{k+1}, z_k) + v(d_{k+1}, d_k) + v(r_{k+1}, r_k)\right)\right) \le \theta\left(v(z_k, z_{k-1}) + v(d_k, d_{k-1}) + v(r_k, r_{k-1})\right);$$

since $\theta$ is non-decreasing, we obtain

$$s^2\left(v(z_{k+1}, z_k) + v(d_{k+1}, d_k) + v(r_{k+1}, r_k)\right) \le v(z_k, z_{k-1}) + v(d_k, d_{k-1}) + v(r_k, r_{k-1}),$$

which leads to

$$v(z_{k+1}, z_k) + v(d_{k+1}, d_k) + v(r_{k+1}, r_k) \le \frac{1}{s^2}\left(v(z_k, z_{k-1}) + v(d_k, d_{k-1}) + v(r_k, r_{k-1})\right).$$

Because $0 \le \frac{1}{s^2} < \frac{1}{s}$, Lemma 1 shows that $\{z_k\}$, $\{d_k\}$, and $\{r_k\}$ are Cauchy sequences. The completeness of $\chi$ implies that there are $\widehat{z}, \widehat{d}, \widehat{r} \in \chi$ so that

$$\lim_{k \to \infty} z_k = \widehat{z}, \quad \lim_{k \to \infty} d_k = \widehat{d}, \quad \text{and} \quad \lim_{k \to \infty} r_k = \widehat{r}.$$

Since Ω is continuous, we obtain

$$\begin{aligned} \widehat{z} &= \lim_{k \to \infty} z_k = \lim_{k \to \infty} \Omega(z_{k-1}, d_{k-1}, r_{k-1}) = \Omega\left(\lim_{k \to \infty} z_{k-1}, \lim_{k \to \infty} d_{k-1}, \lim_{k \to \infty} r_{k-1}\right) = \Omega\left(\widehat{z}, \widehat{d}, \widehat{r}\right), \\ \widehat{d} &= \lim_{k \to \infty} d_k = \lim_{k \to \infty} \Omega(d_{k-1}, r_{k-1}, z_{k-1}) = \Omega\left(\lim_{k \to \infty} d_{k-1}, \lim_{k \to \infty} r_{k-1}, \lim_{k \to \infty} z_{k-1}\right) = \Omega\left(\widehat{d}, \widehat{r}, \widehat{z}\right), \\ \widehat{r} &= \lim_{k \to \infty} r_k = \lim_{k \to \infty} \Omega(r_{k-1}, z_{k-1}, d_{k-1}) = \Omega\left(\lim_{k \to \infty} r_{k-1}, \lim_{k \to \infty} z_{k-1}, \lim_{k \to \infty} d_{k-1}\right) = \Omega\left(\widehat{r}, \widehat{z}, \widehat{d}\right). \end{aligned}$$

This proves that $\left(\widehat{z}, \widehat{d}, \widehat{r}\right)$ is a TFP of $\Omega$.

In the case where $\Omega$ is not continuous, we can state another sufficient condition for the existence of a TFP by imposing the following postulate on the trio $(\chi, \Xi, v)$:

(p) for any sequence $\{z_k\}_{k \in \mathbb{N}}$ in $\chi$ such that $(z_k, z_{k+1}) \in \Xi(f)$, $(z_{k+1}, z_k) \in \Xi(f)$, and $\lim_{k \to \infty} z_k = z$, we have $(z_k, z) \in \Xi(f)$ and $(z, z_k) \in \Xi(f)$.

Now, our second theoretical result is as follows:

**Theorem 2.** *On $(\chi, \Xi, v)$, suppose that $(\chi, v)$ is a complete $b$-metric space with $s \ge 1$ and that $(\chi, \Xi, v)$ satisfies Postulate (p). Suppose also that the mapping $\Omega : \chi^3 \to \chi$ has the mixed $f$-monotone property on $\chi$, and assume that there is $(\theta, \vartheta) \in \Gamma$ so that the contractive condition (2) holds. If there are $z_0, d_0, r_0 \in \chi$ so that*

$$((z_0, d_0, r_0), (\Omega(z_0, d_0, r_0), \Omega(d_0, r_0, z_0), \Omega(r_0, z_0, d_0))) \in \Xi(f),$$

*then $\Omega$ possesses a TFP $\left(\widehat{z}, \widehat{d}, \widehat{r}\right) \in \chi^3$.*

**Proof.** Following the same lines as the proof of Theorem 1, and since

$$\begin{aligned} \lim_{k \to \infty} z_{k+1} &= \lim_{k \to \infty} \Omega(z_k, d_k, r_k) = \widehat{z}, \\ \lim_{k \to \infty} d_{k+1} &= \lim_{k \to \infty} \Omega(d_k, r_k, z_k) = \widehat{d}, \\ \lim_{k \to \infty} r_{k+1} &= \lim_{k \to \infty} \Omega(r_k, z_k, d_k) = \widehat{r}, \end{aligned}$$

and

$$(z_k, z_{k+1}) \in \Xi(f), \ (d_k, d_{k+1}) \in \Xi(f), \text{ and } (r_k, r_{k+1}) \in \Xi(f),$$

then, by Postulate (p), one can write

$$(z_k, \widehat{z}) \in \Xi(f), \ \left(d_k, \widehat{d}\right) \in \Xi(f), \text{ and } (r_k, \widehat{r}) \in \Xi(f).$$

Then,

$$\left((z_k, d_k, r_k), \left(\widehat{z}, \widehat{d}, \widehat{r}\right)\right) \in \Xi(f).$$

Hence, we obtain

$$\theta\left(s^2 v\left(\Omega(z_k, d_k, r_k), \Omega\left(\widehat{z}, \widehat{d}, \widehat{r}\right)\right)\right) \le \frac{1}{3}\vartheta\left(v(z_k, \widehat{z}) + v\left(d_k, \widehat{d}\right) + v(r_k, \widehat{r})\right). \tag{6}$$

Analogously, we obtain

$$\theta\left(s^2 v\left(\Omega(d_k, r_k, z_k), \Omega\left(\widehat{d}, \widehat{r}, \widehat{z}\right)\right)\right) \le \frac{1}{3}\vartheta\left(v\left(d_k, \widehat{d}\right) + v(r_k, \widehat{r}) + v(z_k, \widehat{z})\right), \tag{7}$$

and

$$\theta\left(s^2 v\left(\Omega(r_k, z_k, d_k), \Omega\left(\widehat{r}, \widehat{z}, \widehat{d}\right)\right)\right) \le \frac{1}{3}\vartheta\left(v(r_k, \widehat{r}) + v(z_k, \widehat{z}) + v\left(d_k, \widehat{d}\right)\right). \tag{8}$$

Taking the limit as *k* → ∞ in (6)–(8), we have

$$\lim_{k \to \infty} v\left(\Omega(z_k, d_k, r_k), \Omega\left(\widehat{z}, \widehat{d}, \widehat{r}\right)\right) = 0, \quad \lim_{k \to \infty} v\left(\Omega(d_k, r_k, z_k), \Omega\left(\widehat{d}, \widehat{r}, \widehat{z}\right)\right) = 0,$$
$$\text{and} \quad \lim_{k \to \infty} v\left(\Omega(r_k, z_k, d_k), \Omega\left(\widehat{r}, \widehat{z}, \widehat{d}\right)\right) = 0.$$

This implies that

$$\lim_{k \to \infty} z_{k+1} = \Omega\left(\widehat{z}, \widehat{d}, \widehat{r}\right), \quad \lim_{k \to \infty} d_{k+1} = \Omega\left(\widehat{d}, \widehat{r}, \widehat{z}\right), \quad \text{and} \quad \lim_{k \to \infty} r_{k+1} = \Omega\left(\widehat{r}, \widehat{z}, \widehat{d}\right),$$

which yields that

$$\widehat{z} = \Omega\left(\widehat{z}, \widehat{d}, \widehat{r}\right), \quad \widehat{d} = \Omega\left(\widehat{d}, \widehat{r}, \widehat{z}\right), \quad \text{and} \quad \widehat{r} = \Omega\left(\widehat{r}, \widehat{z}, \widehat{d}\right);$$

that is, $\left(\widehat{z}, \widehat{d}, \widehat{r}\right)$ is a TFP of $\Omega$ on $\chi$.

Next, we state some consequences of Theorems 1 and 2 and relate them to the literature.

The results of Alfuraidan and Khamsi [15] can be generalized if we take $\theta(a) = a$ and $\vartheta(a) = \ell a$ in Theorems 1 and 2 with $s = 1$, as follows:

**Corollary 1.** *Let $(\chi, v)$ be a complete metric space with a directed graph $\Xi$, and let the mapping $\Omega : \chi^3 \to \chi$ have the mixed $f$-monotone property on $\chi$, for which there exists $\ell \in [0, 1)$ such that*

$$v(\Omega(z, d, r), \Omega(z^*, d^*, r^*)) \le \frac{\ell}{3}\left(v(z, z^*) + v(d, d^*) + v(r, r^*)\right),$$

*for all $(z, d, r), (z^*, d^*, r^*) \in \chi^3$ with $((z, d, r), (z^*, d^*, r^*)) \in \Xi(f)$. Assume that either $\Omega$ is a continuous mapping or the triple $(\chi, \Xi, v)$ has property (p). If there are $z_0, d_0, r_0 \in \chi$ so that*

$$((z_0, d_0, r_0), (\Omega(z_0, d_0, r_0), \Omega(d_0, r_0, z_0), \Omega(r_0, z_0, d_0))) \in \Xi(f),$$

*then $\Omega$ has a TFP $\left(\widehat{z}, \widehat{d}, \widehat{r}\right) \in \chi^3$.*

It should be noted that if $(\theta, \vartheta) \in \Gamma$ and $\vartheta_1(a) = \theta(a) - 3\vartheta\left(\frac{a}{3}\right)$, then $(\theta, \vartheta_1) \in \Gamma$. Based on this observation, the results of Luong and Thuan [19] in a metric space endowed with a graph can be reformulated as follows:

**Corollary 2.** *Let $(\chi, v)$ be a complete metric space with a directed graph $\Xi$, and let the mapping $\Omega : \chi^3 \to \chi$ have the mixed $f$-monotone property. Let $(\theta, \vartheta) \in \Gamma$ be so that*

$$\begin{aligned} \theta\left(v(\Omega(z, d, r), \Omega(z^*, d^*, r^*))\right) \le\ & \frac{1}{3}\theta\left(v(z, z^*) + v(d, d^*) + v(r, r^*)\right) \\ & - \vartheta\left(\frac{v(z, z^*) + v(d, d^*) + v(r, r^*)}{3}\right), \end{aligned}$$

*for all $(z, d, r), (z^*, d^*, r^*) \in \chi^3$ with $((z, d, r), (z^*, d^*, r^*)) \in \Xi(f)$. Assume that either the mapping $\Omega$ is continuous or the trio $(\chi, \Xi, v)$ satisfies Postulate (p). If there are $z_0, d_0, r_0 \in \chi$ so that*

$$((z_0, d_0, r_0), (\Omega(z_0, d_0, r_0), \Omega(d_0, r_0, z_0), \Omega(r_0, z_0, d_0))) \in \Xi(f),$$

*then $\Omega$ has a TFP $\left(\widehat{z}, \widehat{d}, \widehat{r}\right) \in \chi^3$.*

In the following, we discuss the uniqueness of a TFP of the mapping Ω.

**Theorem 3.** *In addition to the assumptions of Theorem 1 or 2, assume that for any $(z, d, r), (z^*, d^*, r^*) \in \chi^3$, there is $\left(\widehat{\ell}, \widetilde{\ell}, \overline{\ell}\right) \in \chi^3$ so that*

$$\left((z, d, r), \left(\widehat{\ell}, \widetilde{\ell}, \overline{\ell}\right)\right) \in \Xi(f) \text{ and } \left((z^*, d^*, r^*), \left(\widehat{\ell}, \widetilde{\ell}, \overline{\ell}\right)\right) \in \Xi(f).$$

*Then,* Ω *has a unique TFP.*

**Proof.** Assume that there are two TFPs $(z, d, r)$ and $(z^*, d^*, r^*)$ of $\Omega$. By our hypothesis, there is $(\varkappa, \eta, \zeta) \in \chi^3$ so that $((z, d, r), (\varkappa, \eta, \zeta)) \in \Xi(f)$ and $((z^*, d^*, r^*), (\varkappa, \eta, \zeta)) \in \Xi(f)$. Define three sequences $\{\varkappa_k\}$, $\{\eta_k\}$, and $\{\zeta_k\}$ by

$\varkappa_0 = \varkappa$, $\eta_0 = \eta$, $\zeta_0 = \zeta$, and $\varkappa_{k+1} = \Omega(\varkappa_k, \eta_k, \zeta_k)$, $\eta_{k+1} = \Omega(\eta_k, \zeta_k, \varkappa_k)$, $\zeta_{k+1} = \Omega(\zeta_k, \varkappa_k, \eta_k)$, for all $k$.

Since $((z, d, r), (\varkappa, \eta, \zeta)) \in \Xi(f)$ and $\Omega$ has the mixed $f$-monotone property, we can show that $((z, d, r), (\varkappa_k, \eta_k, \zeta_k)) \in \Xi(f)$. Then,

$$\theta\left(s^2 v(z, \varkappa_{k+1})\right) = \theta\left(s^2 v(\Omega(z, d, r), \Omega(\varkappa_k, \eta_k, \zeta_k))\right) \le \frac{1}{3}\vartheta\left(v(z, \varkappa_k) + v(d, \eta_k) + v(r, \zeta_k)\right). \tag{9}$$

Similarly, we can write

$$\theta\left(s^2 v(d, \eta_{k+1})\right) = \theta\left(s^2 v(\Omega(d, r, z), \Omega(\eta_k, \zeta_k, \varkappa_k))\right) \le \frac{1}{3}\vartheta\left(v(d, \eta_k) + v(r, \zeta_k) + v(z, \varkappa_k)\right), \tag{10}$$

and

$$\theta\left(s^2 v(r, \zeta_{k+1})\right) = \theta\left(s^2 v(\Omega(r, z, d), \Omega(\zeta_k, \varkappa_k, \eta_k))\right) \le \frac{1}{3}\vartheta\left(v(r, \zeta_k) + v(z, \varkappa_k) + v(d, \eta_k)\right). \tag{11}$$

Combining (9)–(11) and using the properties of *θ* and *ϑ*, we have

$$\theta\left(s^2\left(v(z, \varkappa_{k+1}) + v(d, \eta_{k+1}) + v(r, \zeta_{k+1})\right)\right) \le \vartheta\left(v(z, \varkappa_k) + v(d, \eta_k) + v(r, \zeta_k)\right). \tag{12}$$

Because $\theta$ is a non-decreasing function and $\theta(a) > \vartheta(a)$ for $a > 0$, we have

$$s^2\left(v(z, \varkappa_{k+1}) + v(d, \eta_{k+1}) + v(r, \zeta_{k+1})\right) \le v(z, \varkappa_k) + v(d, \eta_k) + v(r, \zeta_k).$$

Since *s* ≥ 1, we obtain

$$v(z, \varkappa_{k+1}) + v(d, \eta_{k+1}) + v(r, \zeta_{k+1}) \le v(z, \varkappa_k) + v(d, \eta_k) + v(r, \zeta_k).$$

Hence, $\{v(z, \varkappa_k) + v(d, \eta_k) + v(r, \zeta_k)\}$ is a nonnegative non-increasing sequence; consequently, there is $\rho \ge 0$ so that

$$\lim_{k \to \infty}\left(v(z, \varkappa_k) + v(d, \eta_k) + v(r, \zeta_k)\right) = \rho.$$

As the functions $\theta$ and $\vartheta$ are continuous, taking $k \to \infty$ in (12), one can write

$$\theta\left(s^2\rho\right) \le \vartheta(\rho).$$

It follows from the properties of *θ* and *ϑ* that *ρ* = 0. Hence,

$$\lim_{k \to \infty}\left(v(z, \varkappa_k) + v(d, \eta_k) + v(r, \zeta_k)\right) = 0;$$

that is,

$$\lim_{k \to \infty} v(z, \varkappa_k) = 0, \quad \lim_{k \to \infty} v(d, \eta_k) = 0, \quad \text{and} \quad \lim_{k \to \infty} v(r, \zeta_k) = 0.$$

Following the same scenario, we have

$$\lim_{k \to \infty} v(z^*, \varkappa_k) = 0, \quad \lim_{k \to \infty} v(d^*, \eta_k) = 0, \quad \text{and} \quad \lim_{k \to \infty} v(r^*, \zeta_k) = 0.$$

Let *k* → ∞ in the following inequalities

$$\begin{aligned} v(z, z^*) &\le s\left(v(z, \varkappa_k) + v(\varkappa_k, z^*)\right), \\ v(d, d^*) &\le s\left(v(d, \eta_k) + v(\eta_k, d^*)\right), \\ v(r, r^*) &\le s\left(v(r, \zeta_k) + v(\zeta_k, r^*)\right). \end{aligned}$$

Thus, $v(z, z^*) = 0$, $v(d, d^*) = 0$, and $v(r, r^*) = 0$. Hence, $z = z^*$, $d = d^*$, and $r = r^*$.

**Theorem 4.** *Assume that $\left(\widehat{z}, \widehat{d}\right), \left(\widehat{d}, \widehat{r}\right), (\widehat{r}, \widehat{z}) \in \Xi(f)$ and that the assumptions of Theorem 1 or 2 hold. If $\left(\widehat{z}, \widehat{d}, \widehat{r}\right)$ is a TFP of $\Omega$, then $\widehat{z} = \widehat{d} = \widehat{r}$.*

**Proof.** Because $\left(\widehat{z}, \widehat{d}\right), \left(\widehat{d}, \widehat{r}\right), (\widehat{r}, \widehat{z}) \in \Xi(f)$, we have

$$\begin{aligned} \theta\left(s^2 v\left(\widehat{z}, \widehat{d}\right)\right) &= \theta\left(s^2 v\left(\Omega\left(\widehat{z}, \widehat{d}, \widehat{r}\right), \Omega\left(\widehat{d}, \widehat{r}, \widehat{z}\right)\right)\right) \\ &\le \frac{1}{3}\vartheta\left(v\left(\widehat{z}, \widehat{d}\right) + v\left(\widehat{d}, \widehat{r}\right) + v(\widehat{r}, \widehat{z})\right). \end{aligned}$$

Similarly, we can write

$$\theta\left(s^2 v\left(\widehat{d}, \widehat{r}\right)\right) \le \frac{1}{3}\vartheta\left(v\left(\widehat{d}, \widehat{r}\right) + v(\widehat{r}, \widehat{z}) + v\left(\widehat{z}, \widehat{d}\right)\right),
$$

and

$$\theta\left(s^2 v(\widehat{r}, \widehat{z})\right) \le \frac{1}{3}\vartheta\left(v(\widehat{r}, \widehat{z}) + v\left(\widehat{z}, \widehat{d}\right) + v\left(\widehat{d}, \widehat{r}\right)\right).
$$

Combining the above three inequalities, we have

$$\begin{aligned} \theta\left(s^2\left[v\left(\widehat{z}, \widehat{d}\right) + v\left(\widehat{d}, \widehat{r}\right) + v(\widehat{r}, \widehat{z})\right]\right) &\le \vartheta\left(v\left(\widehat{z}, \widehat{d}\right) + v\left(\widehat{d}, \widehat{r}\right) + v(\widehat{r}, \widehat{z})\right) \\ &< \theta\left(v\left(\widehat{z}, \widehat{d}\right) + v\left(\widehat{d}, \widehat{r}\right) + v(\widehat{r}, \widehat{z})\right), \end{aligned}$$

whenever $v\left(\widehat{z}, \widehat{d}\right) + v\left(\widehat{d}, \widehat{r}\right) + v(\widehat{r}, \widehat{z}) > 0$.

Since the function *θ* is non-decreasing, we obtain

$$s^2\left(v\left(\widehat{z}, \widehat{d}\right) + v\left(\widehat{d}, \widehat{r}\right) + v(\widehat{r}, \widehat{z})\right) < v\left(\widehat{z}, \widehat{d}\right) + v\left(\widehat{d}, \widehat{r}\right) + v(\widehat{r}, \widehat{z}),$$

which is impossible for $s \ge 1$ unless the sum vanishes.

Hence, $v\left(\widehat{z}, \widehat{d}\right) + v\left(\widehat{d}, \widehat{r}\right) + v(\widehat{r}, \widehat{z}) = 0$; that is, $v\left(\widehat{z}, \widehat{d}\right) = 0$, $v\left(\widehat{d}, \widehat{r}\right) = 0$, and $v(\widehat{r}, \widehat{z}) = 0$. So, $\widehat{z} = \widehat{d} = \widehat{r}$. This completes the proof.

At the end of this part, we present the following example to support our theoretical results.

**Example 1.** *Let $\chi = \mathbb{R}$ and $v(z, d) = |z - d|^2$, which is a $b$-metric on $\chi$ with $s = 2$. Define a directed graph $f$ on $\chi$ by*

$$((z, d, r), (z^*, d^*, r^*)) \in \Xi(f) \text{ if and only if } z \le z^*, \ d^* \le d, \text{ and } r \le r^*.$$

*Define the mapping $\Omega : \chi^3 \to \chi$ by $\Omega(z, d, r) = \frac{1}{6}(z + d + r)$ for all $(z, d, r) \in \chi^3$. It is clear that $\Omega$ has the mixed $f$-monotone property. For any $(z, d, r), (z^*, d^*, r^*) \in \chi^3$ with $((z, d, r), (z^*, d^*, r^*)) \in \Xi(f)$, we have*

$$\begin{aligned} \theta\left(s^2 v(\Omega(z, d, r), \Omega(z^*, d^*, r^*))\right) &= \frac{1}{4}\left(2^2\left(\frac{z + d + r}{6} - \frac{z^* + d^* + r^*}{6}\right)^2\right) \\ &= \frac{1}{36}\left((z - z^*) + (d - d^*) + (r - r^*)\right)^2 \\ &\le \frac{1}{9}\left((z - z^*)^2 + (d - d^*)^2 + (r - r^*)^2\right) \\ &= \frac{1}{3}\vartheta\left(v(z, z^*) + v(d, d^*) + v(r, r^*)\right). \end{aligned}$$

*Hence, condition (2) is satisfied with $\theta(a) = \frac{1}{4}a$ and $\vartheta(a) = \frac{1}{3}a$. Clearly, $(\theta, \vartheta) \in \Gamma$. Therefore, all requirements of Theorem 1 are fulfilled. Moreover, $((0, 0, 0), (0, 0, 0)) \in \Xi(f)$. So, by Theorems 1 and 3, the point $(0, 0, 0)$ is the unique TFP of the mapping $\Omega$.*
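The iterative scheme from the proof of Theorem 1 can be run directly on this example. The sketch below (with an arbitrary starting triple) illustrates the convergence of $(z_k, d_k, r_k)$ to the unique TFP $(0, 0, 0)$:

```python
# Tripled fixed-point iteration of Theorem 1 applied to Example 1, where
# Omega(z, d, r) = (z + d + r)/6 on chi = R with v(z, d) = |z - d|^2, s = 2.
def Omega(z, d, r):
    return (z + d + r) / 6.0

z, d, r = 5.0, -3.0, 7.0  # arbitrary starting triple (z0, d0, r0)
for _ in range(60):
    z, d, r = Omega(z, d, r), Omega(d, r, z), Omega(r, z, d)

# the iterates approach the unique TFP (0, 0, 0) guaranteed by Theorems 1 and 3
assert max(abs(z), abs(d), abs(r)) < 1e-12
```

After one step all three iterates coincide, and each further sweep halves the common value, so the convergence is geometric.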

#### **3. Solving a System of Ordinary Differential Equations**

This section is the mainstay of our paper: in it, the existence and uniqueness of a solution to a system of ordinary differential equations is investigated. The system is given as follows:

$$\begin{cases} z'(\nu) = \wp(\nu, z_\nu, u_\nu, r_\nu), \\ u'(\nu) = \wp(\nu, u_\nu, r_\nu, z_\nu), \\ r'(\nu) = \wp(\nu, r_\nu, z_\nu, u_\nu), \end{cases} \quad \nu \in \chi, \tag{13}$$

under the conditions

$$z(\nu) = v_1(\nu), \ u(\nu) = v_2(\nu), \text{ and } r(\nu) = v_3(\nu), \quad \nu \in (-\infty, 0], \tag{14}$$

where $\chi = [0, b]$, $\wp : \chi \times \mathfrak{a}^3 \to \mathbb{R}^k$ (where $\mathfrak{a}^3 = \mathfrak{a} \times \mathfrak{a} \times \mathfrak{a}$), $v_1, v_2, v_3 \in \mathfrak{a}$, and $z_\nu, u_\nu, r_\nu$ denote the history of the state from $-\infty$ up to the time $\nu$. The histories $z_\nu, u_\nu, r_\nu$ lie in $\mathfrak{a}$, where $(\mathfrak{a}, \|\cdot\|_{\mathfrak{a}})$ is a seminormed linear space of functions $z : (-\infty, 0] \to \mathbb{R}^k$, $k \in \mathbb{N}$, satisfying the hypotheses below, which were presented by Hale and Kato [21] for the ODE:

- (1) $z_a \in \mathfrak{a}$;
- (2) $\|z(a)\| \le \|z_a\|_{\mathfrak{a}}$;
- (3) $\|z_a\|_{\mathfrak{a}} \le \tau \sup\{\|z(c)\| : 0 \le c \le a\} + \xi\|z_0\|_{\mathfrak{a}}$.
Now, we consider the following space to define a solution for Problems (13) and (14):

$$\Theta = \left\{z : (-\infty, b] \to \mathbb{R}^k : z|_{\chi} \in C\left(\chi, \mathbb{R}^k\right), \ k \in \mathbb{N}, \ z(\nu) = v_1(\nu) \text{ for } \nu \in (-\infty, 0], \ v_1 \in \mathfrak{a}\right\},$$

equipped with the following seminorm

$$\|z\|_{\Theta} = \|z_0\|_{\mathfrak{a}} + \sup_{0 \le c \le b}\|z(c)\|.$$

It should be noted that the function (*z*, *<sup>u</sup>*,*r*) <sup>∈</sup> <sup>Θ</sup><sup>3</sup> (where <sup>Θ</sup><sup>3</sup> <sup>=</sup> <sup>Θ</sup> <sup>×</sup> <sup>Θ</sup> <sup>×</sup> <sup>Θ</sup>) is a solution of (13) and (14), if (*z*, *u*,*r*) fulfills (13) and (14).

Define the operator $\Upsilon : \Theta^3 \to \Theta$ by

$$\Upsilon(z, u, r)(\nu) = \begin{cases} v_1(\nu) & \text{if } \nu \in (-\infty, 0], \\ v_1(0) + \int_0^\nu \wp(\hbar, z_\hbar, u_\hbar, r_\hbar)\, d\hbar & \text{if } \nu \in \chi, \end{cases}$$

$$\Upsilon(u, r, z)(\nu) = \begin{cases} v_2(\nu) & \text{if } \nu \in (-\infty, 0], \\ v_2(0) + \int_0^\nu \wp(\hbar, u_\hbar, r_\hbar, z_\hbar)\, d\hbar & \text{if } \nu \in \chi, \end{cases}$$

and

$$\Upsilon(r, z, u)(\nu) = \begin{cases} v_3(\nu) & \text{if } \nu \in (-\infty, 0], \\ v_3(0) + \int_0^\nu \wp(\hbar, r_\hbar, z_\hbar, u_\hbar)\, d\hbar & \text{if } \nu \in \chi. \end{cases}$$

Assume that $\widetilde{v}_1, \widetilde{v}_2, \widetilde{v}_3 : (-\infty, b] \to \mathbb{R}^k$ are the functions defined by

$$\widetilde{v}_1(\nu) = \begin{cases} v_1(\nu) & \text{if } \nu \in (-\infty, 0], \\ v_1(0) & \text{if } \nu \in \chi, \end{cases} \qquad \widetilde{v}_2(\nu) = \begin{cases} v_2(\nu) & \text{if } \nu \in (-\infty, 0], \\ v_2(0) & \text{if } \nu \in \chi, \end{cases}$$

and

$$\widetilde{v}_3(\nu) = \begin{cases} v_3(\nu) & \text{if } \nu \in (-\infty, 0], \\ v_3(0) & \text{if } \nu \in \chi. \end{cases}$$

Then, $(\widetilde{v}_1)_0 = v_1$, $(\widetilde{v}_2)_0 = v_2$, and $(\widetilde{v}_3)_0 = v_3$. For each $\delta_1, \delta_2, \delta_3 \in C\left([0, b], \mathbb{R}^k\right)$ with $\delta_1(0) = 0$, $\delta_2(0) = 0$, and $\delta_3(0) = 0$, define the functions $\widehat{\delta}_1$, $\widehat{\delta}_2$, and $\widehat{\delta}_3$ by

$$\widehat{\delta}_1(\nu) = \begin{cases} 0 & \text{if } \nu \in (-\infty, 0], \\ \delta_1(\nu) & \text{if } \nu \in \chi, \end{cases} \qquad \widehat{\delta}_2(\nu) = \begin{cases} 0 & \text{if } \nu \in (-\infty, 0], \\ \delta_2(\nu) & \text{if } \nu \in \chi, \end{cases}$$

and

$$\widehat{\delta}_3(\nu) = \begin{cases} 0 & \text{if } \nu \in (-\infty, 0], \\ \delta_3(\nu) & \text{if } \nu \in \chi. \end{cases}$$

If $z(\cdot)$, $u(\cdot)$, and $r(\cdot)$ satisfy the integral equations

$$\begin{aligned} z(\nu) &= v_1(0) + \int_0^\nu \wp(\hbar, z_\hbar, u_\hbar, r_\hbar)\, d\hbar, \\ u(\nu) &= v_2(0) + \int_0^\nu \wp(\hbar, u_\hbar, r_\hbar, z_\hbar)\, d\hbar, \end{aligned}$$

and

$$r(\nu) = v_3(0) + \int_0^\nu \wp(\hbar, r_\hbar, z_\hbar, u_\hbar)\, d\hbar,$$

then we can decompose $z(\cdot)$, $u(\cdot)$, and $r(\cdot)$ as $z(\nu) = \widehat{\delta}_1(\nu) + \widetilde{v}_1(\nu)$, $u(\nu) = \widehat{\delta}_2(\nu) + \widetilde{v}_2(\nu)$, and $r(\nu) = \widehat{\delta}_3(\nu) + \widetilde{v}_3(\nu)$ for every $0 \le \nu \le b$. In addition, the functions $\delta_1$, $\delta_2$, and $\delta_3$ satisfy

$$\begin{aligned} \delta_1(\nu) &= \int_0^\nu \wp\left(\hbar, \widehat{\delta}_1(\hbar) + \widetilde{v}_1(\hbar), \widehat{\delta}_2(\hbar) + \widetilde{v}_2(\hbar), \widehat{\delta}_3(\hbar) + \widetilde{v}_3(\hbar)\right) d\hbar, \\ \delta_2(\nu) &= \int_0^\nu \wp\left(\hbar, \widehat{\delta}_2(\hbar) + \widetilde{v}_2(\hbar), \widehat{\delta}_3(\hbar) + \widetilde{v}_3(\hbar), \widehat{\delta}_1(\hbar) + \widetilde{v}_1(\hbar)\right) d\hbar, \end{aligned}$$

and

$$\delta_3(\nu) = \int_0^\nu \wp\left(\hbar, \widehat{\delta}_3(\hbar) + \widetilde{v}_3(\hbar), \widehat{\delta}_1(\hbar) + \widetilde{v}_1(\hbar), \widehat{\delta}_2(\hbar) + \widetilde{v}_2(\hbar)\right) d\hbar.$$

Put $C_0 = \left\{\delta \in C\left([0, b], \mathbb{R}^k\right) : \delta(0) = 0\right\}$, equipped with the $b$-metric $v(z, u) = \sup_{\nu \in \chi}\|z(\nu) - u(\nu)\|^2$ with $s = 2$.

Consider the following partial order relation on $C_0^3$ (where $C_0^3 = C_0 \times C_0 \times C_0$):

$$(z_1, u_1, r_1) \le (z_2, u_2, r_2) \Leftrightarrow z_1(a) \le z_2(a), \ u_1(a) \ge u_2(a), \text{ and } r_1(a) \le r_2(a), \quad a \in \chi.$$
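As a quick sanity check that the sup-norm-squared functional above is indeed a $b$-metric with $s = 2$, the following sketch samples a few grid functions over $[0, b]$ (taking $b = 1$ for illustration) and verifies the relaxed triangle inequality, which follows from $(a + b)^2 \le 2(a^2 + b^2)$:

```python
# Check v(z, u) = sup_nu ||z(nu) - u(nu)||^2 against the relaxed triangle
# inequality v(z, u) <= s * (v(z, w) + v(w, u)) with s = 2, on sampled functions.
import numpy as np

rng = np.random.default_rng(0)
grid_size = 101  # grid over chi = [0, b] with b = 1 (illustrative)
v = lambda z, u: float(np.max(np.abs(z - u) ** 2))  # sup norm squared on the grid

fns = [rng.uniform(-5, 5, grid_size) for _ in range(6)]
for z in fns:
    for w in fns:
        for u in fns:
            assert v(z, u) <= 2 * (v(z, w) + v(w, u)) + 1e-9
```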

Now, Problems (13) and (14) will be considered under the following hypotheses:

**Hypothesis 1** (**H1**)**.** *The function $\wp : \chi \times \mathfrak{a}^3 \to \mathbb{R}^k$, $k \in \mathbb{N}$, is continuous.*

**Hypothesis 2** (**H2**)**.** *For all $z, u, r, z_1, u_1, r_1 \in \mathbb{R}^k$ with $z \le z_1$, $u_1 \le u$, and $r \le r_1$,*

$$\wp(\nu, z, u, r) \le \wp(\nu, z_1, u_1, r_1).$$

**Hypothesis 3** (**H3**)**.** *For each $\nu \in [0, b]$ and $z, u, r, z_1, u_1, r_1 \in \mathbb{R}^k$ with $z \le z_1$, $u_1 \le u$, and $r \le r_1$, we have*

$$\|\wp(\nu, z, u, r) - \wp(\nu, z_1, u_1, r_1)\|^2 \le \frac{1}{12b^2}\ln\left(1 + \frac{1}{\tau}\left(\|z - z_1\|_{\mathfrak{a}}^2 + \|u - u_1\|_{\mathfrak{a}}^2 + \|r - r_1\|_{\mathfrak{a}}^2\right)\right).$$

**Theorem 5.** *Consider Problems (13) and (14) under hypotheses (H1)*–*(H3). If there are $(e, f, g) \in C_0^3$ so that*

$$\begin{aligned} e(\nu) &\ge \int_0^\nu \wp\left(\hbar, \widehat{e}(\hbar) + \widetilde{v}_1(\hbar), \widehat{f}(\hbar) + \widetilde{v}_2(\hbar), \widehat{g}(\hbar) + \widetilde{v}_3(\hbar)\right) d\hbar, \\ f(\nu) &\le \int_0^\nu \wp\left(\hbar, \widehat{f}(\hbar) + \widetilde{v}_1(\hbar), \widehat{g}(\hbar) + \widetilde{v}_2(\hbar), \widehat{e}(\hbar) + \widetilde{v}_3(\hbar)\right) d\hbar, \end{aligned}$$

*and*

$$g(\nu) \ge \int_0^\nu \wp\left(\hbar, \widehat{g}(\hbar) + \widetilde{v}_1(\hbar), \widehat{e}(\hbar) + \widetilde{v}_2(\hbar), \widehat{f}(\hbar) + \widetilde{v}_3(\hbar)\right) d\hbar,$$

*then there is at least one solution to Problems (13) and (14).*

**Proof.** Let $\aleph : C_0^3 \to C_0$ be the operator defined by

$$\aleph(\delta_1, \delta_2, \delta_3)(\nu) = \int_0^\nu \wp\left(\hbar, \widehat{\delta}_1(\hbar) + \widetilde{v}_1(\hbar), \widehat{\delta}_2(\hbar) + \widetilde{v}_2(\hbar), \widehat{\delta}_3(\hbar) + \widetilde{v}_3(\hbar)\right) d\hbar.$$

It is clear that $\Upsilon$ has a TFP if and only if $\aleph$ has a TFP. So the existence of a solution of Problems (13) and (14) is equivalent to finding a TFP of the mapping $\aleph$. To achieve this, we demonstrate that $\aleph$ fulfills the requirements of Theorem 1 or 2.
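To make the role of $\aleph$ concrete, the sketch below runs the successive approximations $(\delta_1, \delta_2, \delta_3) \mapsto (\aleph(\delta_1, \delta_2, \delta_3), \aleph(\delta_2, \delta_3, \delta_1), \aleph(\delta_3, \delta_1, \delta_2))$ for a hypothetical right-hand side $\wp(\nu, z, u, r) = (z - u + r)/10$ (nondecreasing in the first and third arguments and nonincreasing in the second, as (H2) requires), with the histories $\widetilde{v}_i$ taken to be zero and the integral discretized by the trapezoidal rule:

```python
# A minimal successive-approximation sketch for a toy instance of the operator
# aleph from Theorem 5's proof, under simplifying assumptions: zero histories
# and a hypothetical monotone right-hand side p(z, u, r) = (z - u + r)/10.
import numpy as np

b, n = 1.0, 1000
h = b / n
t = np.linspace(0.0, b, n + 1)

def p(z, u, r):
    return (z - u + r) / 10.0  # nondecreasing in z, r; nonincreasing in u

def N(d1, d2, d3):
    # (aleph(d1, d2, d3))(nu) = int_0^nu p(d1, d2, d3) dh, via trapezoidal sums
    g = p(d1, d2, d3)
    return np.concatenate(([0.0], np.cumsum((g[:-1] + g[1:]) / 2.0) * h))

# successive approximations starting from arbitrary functions in C_0
d1, d2, d3 = np.sin(t), np.cos(t) - 1.0, t ** 2
for _ in range(40):
    d1, d2, d3 = N(d1, d2, d3), N(d2, d3, d1), N(d3, d1, d2)

# one more sweep changes essentially nothing: a TFP of aleph has been reached
e1, e2, e3 = N(d1, d2, d3), N(d2, d3, d1), N(d3, d1, d2)
for old, new in [(d1, e1), (d2, e2), (d3, e3)]:
    assert np.max(np.abs(old - new)) < 1e-10
```

Each sweep contracts the error by roughly the Lipschitz factor of the integral operator (here about $0.3$), mirroring the iteration that Theorems 1 and 2 formalize.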

Define the graph ℧ with $V(\mho) = C\_0^3$ and

$$\Xi(\mho) = \left\{ ((z, u, r), (z^\*, u^\*, r^\*)) \in C\_0^3 \times C\_0^3 : z \le z^\*,\ u^\* \le u \text{ and } r \le r^\* \right\}.$$

It follows that

$$((z,u,r),(z^\*,u^\*,r^\*)) \in \Xi(\mho) \Leftrightarrow (z,z^\*) \in \Xi(\mho),\ (u^\*,u) \in \Xi(\mho) \text{ and } (r,r^\*) \in \Xi(\mho),$$

for all $((z, u, r), (z^\*, u^\*, r^\*)) \in C\_0^3 \times C\_0^3$.

Consider $z, u, r, z\_1, u\_1, r\_1, z\_2, u\_2, r\_2 \in C\_0$. If $(z\_1, z\_2) \in \Xi(\mho)$, then, from (*H*2), we can write

$$\begin{aligned} \aleph(z\_1, u, r) &= \int\_0^\nu \wp(\hbar, \widehat{z}\_1(\hbar) + \widetilde{\varpi}\_1(\hbar), \widehat{u}(\hbar) + \widetilde{\varpi}\_2(\hbar), \widehat{r}(\hbar) + \widetilde{\varpi}\_3(\hbar)) d\hbar \\ &\leq \int\_0^\nu \wp(\hbar, \widehat{z}\_2(\hbar) + \widetilde{\varpi}\_1(\hbar), \widehat{u}(\hbar) + \widetilde{\varpi}\_2(\hbar), \widehat{r}(\hbar) + \widetilde{\varpi}\_3(\hbar)) d\hbar \\ &= \aleph(z\_2, u, r), \end{aligned}$$

which implies that $(\aleph(z\_1, u, r), \aleph(z\_2, u, r)) \in \Xi(\mho)$. Moreover, if $(u\_1, u\_2) \in \Xi(\mho)$, we can write

$$\begin{aligned} \aleph(z, u\_2, r) &= \int\_0^\nu \wp(\hbar, \widehat{z}(\hbar) + \widetilde{\varpi}\_1(\hbar), \widehat{u}\_2(\hbar) + \widetilde{\varpi}\_2(\hbar), \widehat{r}(\hbar) + \widetilde{\varpi}\_3(\hbar)) d\hbar \\ &\leq \int\_0^\nu \wp(\hbar, \widehat{z}(\hbar) + \widetilde{\varpi}\_1(\hbar), \widehat{u}\_1(\hbar) + \widetilde{\varpi}\_2(\hbar), \widehat{r}(\hbar) + \widetilde{\varpi}\_3(\hbar)) d\hbar \\ &= \aleph(z, u\_1, r), \end{aligned}$$

which leads to $(\aleph(z, u\_2, r), \aleph(z, u\_1, r)) \in \Xi(\mho)$. Analogously, we obtain $(\aleph(z, u, r\_1), \aleph(z, u, r\_2)) \in \Xi(\mho)$. Hence, ℵ has the mixed ℧-monotone property. In order to prove the contractive condition of Theorem 1, assume that $((z, u, r), (z\_1, u\_1, r\_1)) \in C\_0^3 \times C\_0^3$ is such that

$$((z, u, r), (z\_1, u\_1, r\_1)) \in \Xi(\mho) \Leftrightarrow (z, z\_1) \in \Xi(\mho),\ (u\_1, u) \in \Xi(\mho), \text{ and } (r, r\_1) \in \Xi(\mho);$$

then, by using the assumptions (*H*1)–(*H*3), we have

$$\begin{aligned} \left\| \aleph(z, u, r) - \aleph(z\_1, u\_1, r\_1) \right\|^2 &= \left\| \int\_0^\nu \wp(\hbar, \widehat{z}(\hbar) + \widetilde{\varpi}\_1(\hbar), \widehat{u}(\hbar) + \widetilde{\varpi}\_2(\hbar), \widehat{r}(\hbar) + \widetilde{\varpi}\_3(\hbar))\, d\hbar \right. \\ &\qquad \left. - \int\_0^\nu \wp(\hbar, \widehat{z}\_1(\hbar) + \widetilde{\varpi}\_1(\hbar), \widehat{u}\_1(\hbar) + \widetilde{\varpi}\_2(\hbar), \widehat{r}\_1(\hbar) + \widetilde{\varpi}\_3(\hbar))\, d\hbar \right\|^2 \\ &\le b \int\_0^\nu \left\| \wp(\hbar, \widehat{z}(\hbar) + \widetilde{\varpi}\_1(\hbar), \widehat{u}(\hbar) + \widetilde{\varpi}\_2(\hbar), \widehat{r}(\hbar) + \widetilde{\varpi}\_3(\hbar)) - \wp(\hbar, \widehat{z}\_1(\hbar) + \widetilde{\varpi}\_1(\hbar), \widehat{u}\_1(\hbar) + \widetilde{\varpi}\_2(\hbar), \widehat{r}\_1(\hbar) + \widetilde{\varpi}\_3(\hbar)) \right\|^2 d\hbar \\ &\le \frac{1}{12b} \int\_0^\nu \ln\left(1 + \frac{1}{\tau} \left\|\widehat{z}(\hbar) - \widehat{z}\_1(\hbar)\right\|\_{\odot}^2 + \frac{1}{\tau} \left\|\widehat{u}(\hbar) - \widehat{u}\_1(\hbar)\right\|\_{\odot}^2 + \frac{1}{\tau} \left\|\widehat{r}(\hbar) - \widehat{r}\_1(\hbar)\right\|\_{\odot}^2\right) d\hbar \\ &\le \frac{1}{12} \ln\left(\sup\_{\hbar \in \chi} \left\|\widehat{z}(\hbar) - \widehat{z}\_1(\hbar)\right\|\_{\odot}^2 + \sup\_{\hbar \in \chi} \left\|\widehat{u}(\hbar) - \widehat{u}\_1(\hbar)\right\|\_{\odot}^2 + \sup\_{\hbar \in \chi} \left\|\widehat{r}(\hbar) - \widehat{r}\_1(\hbar)\right\|\_{\odot}^2\right), \end{aligned}$$

which yields

$$\theta\left(s^2\,\mathcal{O}(\aleph(z,u,r), \aleph(z\_1,u\_1,r\_1))\right) \le \frac{1}{3}\,\vartheta\big(\mathcal{O}(z,z\_1) + \mathcal{O}(u,u\_1) + \mathcal{O}(r,r\_1)\big),$$

where *θ*(*a*) = *a*, and *ϑ*(*a*) = ln(1 + *a*). Obviously, the pair (*θ*, *ϑ*) ∈ Γ. Hence, by our assumptions, we conclude that

$$((z, u, r), (\aleph(z, u, r), \aleph(u, r, z), \aleph(r, z, u))) \in \Xi(\mho).$$

The operator ℵ is continuous, and the triple $(C\_0, \Xi, v)$ satisfies the property (p). Hence, all requirements of Theorems 1 and 3 are fulfilled, so there is a TFP of the mapping ℵ in $C\_0^3$, which represents a solution to Problems (13) and (14).
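The successive-approximation scheme behind this proof can be illustrated numerically. The following sketch is a toy example, not the paper's system (13) and (14): the kernel `p` below is hypothetical, chosen with a small Lipschitz constant so that the tripled map $(z, u, r) \mapsto (\aleph(z,u,r), \aleph(u,r,z), \aleph(r,z,u))$ contracts in the sup norm, and the iteration approaches a tripled fixed point, as the theorem predicts.

```python
import numpy as np

# Toy tripled Picard iteration (hypothetical kernel, for illustration only):
# N(d1, d2, d3)(v) = integral_0^v p(h, d1(h), d2(h), d3(h)) dh, iterated as
# (z, u, r) -> (N(z, u, r), N(u, r, z), N(r, z, u)).

def p(h, x, y, w):
    # Right-hand side with small Lipschitz constant in (x, y, w), so the
    # induced integral operator on [0, 1] is a contraction in the sup norm.
    return 0.1 * np.sin(x) + 0.1 * np.cos(y) - 0.1 * np.tanh(w) + h

def apply_N(d1, d2, d3, t):
    # Cumulative trapezoidal approximation of the integral operator.
    vals = p(t, d1, d2, d3)
    return np.concatenate(([0.0], np.cumsum((vals[1:] + vals[:-1]) / 2 * np.diff(t))))

t = np.linspace(0.0, 1.0, 201)
z = u = r = np.zeros_like(t)
diffs = []
for _ in range(30):
    z_new = apply_N(z, u, r, t)
    u_new = apply_N(u, r, z, t)
    r_new = apply_N(r, z, u, t)
    diffs.append(max(np.max(np.abs(z_new - z)),
                     np.max(np.abs(u_new - u)),
                     np.max(np.abs(r_new - r))))
    z, u, r = z_new, u_new, r_new

print(diffs[0], diffs[-1])  # successive differences decay toward 0
```

The monotone decay of `diffs` is the numerical signature of the contractive condition: each sweep shrinks the distance between consecutive iterates by a fixed factor.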

#### **4. Conclusions**

The theory of delay differential equations has developed considerably, driven by a variety of practical problems whose study requires solving delay equations. Equations of this kind are needed to describe processes whose rate depends on their prior states; such processes are commonly called "delay processes" or "processes with aftereffects". The present paper was dedicated to the study of the existence and uniqueness of tripled fixed points in a *b*-metric space endowed with a directed graph. Common tripled fixed point results were also provided. Moreover, some applications of the main results to solving different types of tripled equation systems were presented. Then, using our main results, we studied the existence and uniqueness of a solution to a system of ordinary differential equations with infinite delay. Our results improve some results from the related literature and provide new directions in the study of economic phenomena using the tripled fixed point technique.

**Author Contributions:** All authors contributed equally and significantly in writing this article. All authors have read and agreed to the published version of the manuscript.

**Funding:** This work was funded through research groups program under grant R.G.P.2/207/43 provided by the Deanship of Scientific Research at King Khalid University, Saudi Arabia.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** No data were associated with this study.

**Acknowledgments:** The authors thank the anonymous referees for their constructive reviews that greatly improved the paper. M. Zayed appreciates the support by the Deanship of Scientific Research at King Khalid University, Saudi Arabia through the research groups program under grant R.G.P.2/207/43.

**Conflicts of Interest:** The authors declare that they have no conflict of interest.

#### **References**


## *Review* **On Special Properties for Continuous Convex Operators and Related Linear Operators**

**Octav Olteanu**

Department Mathematics-Informatics, University Politehnica of Bucharest, Splaiul Independenței 313, 060042 Bucharest, Romania; octav.olteanu50@gmail.com

**Abstract:** This paper provides a uniform boundedness theorem for a class of convex operators, analogous to the Banach–Steinhaus theorem for families of continuous linear operators. The case of continuous symmetric sublinear operators is outlined. Second, a general theorem characterizing the existence of the solution of the Markov moment problem is reviewed, and a related minimization problem is solved. Convexity is the common point of the two aims of the paper mentioned above.

**Keywords:** convex operator; uniform boundedness; symmetric operators; Hahn–Banach type theorems; Markov moment problems; constrained minimization

#### **1. Introduction**

This paper provides an overview of a few basic topics in functional analysis, joined together by the notion of convexity and its applications. The references partially illustrate old and recent research in this area and the relationships between them. The motivation of this paper consists of pointing out two different main aspects of convexity: convex operators and their properties, and Hahn–Banach type theorems applied to the moment problem. Concerning the second aspect, a related optimization problem with infinitely many linear constraints is solved. For basic notions in analysis and functional analysis related to this work, see references [1–9]. First, we prove a uniform boundedness theorem for a class of convex continuous operators. The corresponding result for classes of bounded linear operators is the well-known Banach–Steinhaus theorem, whose proof is based on Baire's theorem. We assume that the domain space, which is a topological vector space, cannot be written as a union of a sequence of closed subsets, each of them having an empty interior. Results similar to our Theorem 1 proved below, concerning classes of continuous convex operators, were published in [9–11]. Notably, in [9], the case of sublinear operators is considered. Following the idea of [10], we prove the existence of a common convex neighborhood of the origin *W*<sup>0</sup> in the domain space, for all involved convex operators, without assuming that the domain space is locally convex. The convexity of *W*<sup>0</sup> is a consequence of the properties of the codomain and of the convex continuous operators in the given class. The important case of classes of continuous sublinear operators is also treated. We study the classes of sublinear operators *P* satisfying the symmetry condition *P*(*x*) = *P*(−*x*) for all *x* in the domain space *X*. We point out an example related to this first part.
The relevance consists not only in reviewing the result from [10] but also in completing it with some consequences and remarks, discussed at the end of Section 3.1. Such theorems and their consequences are published in [11]. From the point of view of uniform boundedness, references [12,13] discuss collections of linear operators in more detail. In the papers [14–17], the interested reader can find similar properties formulated in a physics setting and possible interactions, especially concerning new results on Jensen-type inequalities.

**Citation:** Olteanu, O. On Special Properties for Continuous Convex Operators and Related Linear Operators. *Symmetry* **2022**, *14*, 1390. https://doi.org/10.3390/sym14071390

Academic Editor: Palle E. T. Jorgensen

Received: 6 June 2022; Accepted: 4 July 2022; Published: 6 July 2022

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2022 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

The second part of the Results section is first motivated by solving the existence problems related to the moment problem. Basic results on this subject are outlined in [1–4] and [18]. Second, we continue with results on the extension of linear functionals and linear operators, most of them related to the moment problem. The classical moment problem is formulated as follows: given a sequence $(y\_j)\_{j \in \mathbb{N}^n}$ of real numbers and a nonempty closed subset $F \subseteq \mathbb{R}^n$, find a positive regular Borel measure $\nu$ on $F$ such that the interpolation moment conditions hold:

$$\int\_{F} \varphi\_{j}(t) d\nu = \int\_{F} t^{j} d\nu = y\_{j}, \ j \in \mathbb{N}^{n}. \tag{1}$$

Here, we use the notations:

$$\begin{array}{l} \mathbb{N} = \{0, 1, 2, \ldots\}, \varphi\_{j}(t) = t^{j} = t\_{1}^{j\_{1}} \cdots t\_{n}^{j\_{n}}, j = (j\_{1}, \ldots, j\_{n}) \in \mathbb{N}^{n}, \\\ t = (t\_{1}, \ldots, t\_{n}) \in F \subseteq \mathbb{R}^{n}, \mathcal{P} = \mathbb{R}[t\_{1}, \ldots, t\_{n}], n \in \mathbb{N}, n \ge 1. \end{array} \tag{2}$$
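As a hedged numerical illustration of the moment conditions (1) in the one-dimensional case $n = 1$: for the Lebesgue measure $d\nu = dt$ on $F = [0, 1]$, the moments are $y\_j = \int\_0^1 t^j dt = 1/(j+1)$, which a simple quadrature confirms.

```python
import numpy as np

# Moments of the Lebesgue measure on [0, 1]: y_j = 1 / (j + 1).
# We approximate the integrals in (1) by the trapezoidal rule on a fine grid.
t = np.linspace(0.0, 1.0, 100001)
moments = [float(np.sum((t[1:]**j + t[:-1]**j) / 2 * np.diff(t))) for j in range(6)]
expected = [1.0 / (j + 1) for j in range(6)]
print(moments)  # close to [1, 1/2, 1/3, 1/4, 1/5, 1/6]
```

The inverse problem studied in the text goes the other way: recover the measure from the list of moments, which is far more delicate.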

If *n* = 1, we have a one-dimensional moment problem, while for *n* ≥ 2, the corresponding moment problem is called a multidimensional moment problem. Starting from the scalar moment problem (1), many authors have studied vector-valued (or operator-valued, or matrix-valued) moment problems, when the $y\_j$, $j \in \mathbb{N}^n$ are elements of an ordered vector space *Y* with additional properties, whose elements are vectors, functions, self-adjoint operators, or symmetric matrices with real entries. The moment problem is an inverse problem, since we are looking for an unknown positive measure *ν* satisfying the moment conditions (1), knowing only its given moments $\int\_F t^j d\nu$, $j \in \mathbb{N}^n$. Finding the measure means studying its existence, uniqueness, and construction. In the case of the vector-valued moment problem, the codomain *Y* is assumed to be an order complete vector space. This condition is required since we need to extend the linear operator

$$T\_0: \mathcal{P} \to Y, \qquad T\_0 \left(\sum\_{j \in J\_0} \alpha\_j \varphi\_j\right) = \sum\_{j \in J\_0} \alpha\_j y\_j, \tag{3}$$

from the vector space $\mathcal{P}$ of all polynomials with real coefficients to an ordered Banach function space *X* which contains $\mathcal{P}$ and the vector space $C\_c(F)$ of all real-valued continuous compactly supported functions defined on *F*. In Equation (3), $J\_0 \subset \mathbb{N}^n$ is a finite subset and $\alpha\_j \in \mathbb{R}$. To ensure the existence of a linear positive extension *T* : *X* → *Y* of *T*0, we need a Hahn–Banach type extension result, which requires the order completeness of *Y*. From (3), it follows that

$$T(\varphi\_j) = T\_0(\varphi\_j) = y\_j, \ j \in \mathbb{N}^n,$$

which is the vector-valued variant of (1). There are moment problems when, besides the positivity of the solution *T*, we naturally obtain, from the proof of its existence, the property

$$T(\mathbf{x}) \le P(\mathbf{x}),\tag{4}$$

for all *x* ∈ *X*, where *X*,*Y* are Banach lattices, *Y* is order complete, and *P* : *X* → *Y* is a continuous convex or sublinear operator. Such problems are Markov moment problems. Sometimes, the constraints on the solution *T* are *T*<sup>1</sup> ≤ *T* ≤ *T*<sup>2</sup> on the positive cone of the domain space *X*, where *T<sup>i</sup>* , *i* = 1, 2 are two given bounded linear operators from *X* to *Y*.

The moment problems mentioned up to now are called full moment problems, because they involve the moment conditions $T(\varphi\_j) = y\_j$ for all $j \in \mathbb{N}^n$. The reduced (or truncated) moment problem requires the conditions $T(\varphi\_j) = y\_j$ only for

$$j = (j\_1, \ldots, j\_n),\ j\_k \in \{0, 1, \ldots, d\}, \ k = 1, \ldots, n,$$

where *d* is a fixed natural number. For a basic result on the extension of linear positive operators, see [19]. Other extension results for linear operators, with two constraints, were published in [20–22]. Such old theorems have found new applications in characterizing the isotonicity of continuous convex operators on a convex cone, recently published in [23]. We recall that an operator $P : X\_+ \to Y$, defined on the positive cone $X\_+$ of the ordered vector space *X* and taking values in the ordered vector space *Y*, is called isotone (monotone increasing) if:

$$\mathbf{0} \le \mathbf{x}\_1 \le \mathbf{x}\_2 \text{ in } X \text{ implies } P(\mathbf{x}\_1) \le P(\mathbf{x}\_2).$$

Various aspects of the full and reduced moment problem are discussed in [24–34]. These results include the existence, the uniqueness, and the construction of the solution. Obviously, the uniqueness of the solution makes sense only for the full moment problem. At the end of the article [34], a minimization problem related to a Markov moment problem is discussed. Here, we start from an idea appearing in the PhD thesis [28], also using some other methods. This is the second purpose of the paper. Optimization problems are studied in the articles [35–39], the last three of which provide corresponding iterative methods and algorithms. As is well known, in any reflexive Banach space, for a non-empty closed convex subset not containing the origin, there exists at least one element of minimum norm in that subset. The point of this work is to discuss the case when the convex subset under consideration arises from natural constraints related to a Markov moment problem.
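A finite-dimensional analogue of this minimum-norm problem can be sketched as follows (the matrix `A` and the data `y` below are made-up illustrations, not taken from the paper): among all vectors satisfying a truncated set of linear "moment" constraints $Ax = y$, the closed convex (affine) feasible set has a unique element of minimum Euclidean norm, and for a consistent underdetermined system `np.linalg.lstsq` returns exactly that element.

```python
import numpy as np

# Minimum-norm point of the affine feasible set {x : Ax = y}.
# Two constraints, three unknowns (hypothetical data).
A = np.array([[1.0, 1.0, 1.0],
              [0.0, 1.0, 2.0]])
y = np.array([3.0, 3.0])

# lstsq returns the least-norm solution of a consistent underdetermined system.
x_min, *_ = np.linalg.lstsq(A, y, rcond=None)

print(x_min, np.linalg.norm(x_min))  # feasible point of smallest norm
```

For these data, the minimizer works out to $(1, 1, 1)$, of norm $\sqrt{3}$; it is characterized by being orthogonal to the null space of $A$, the finite-dimensional shadow of the variational argument used in reflexive Banach spaces.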

Thus, the purpose of the first part of this paper is to recall and, mainly, to complete the uniform boundedness results for some classes of convex operators, a subject which is not very well covered in the literature, except for the references cited here. The significance of the second part consists in pointing out a necessary and sufficient condition for the existence of a solution of a Markov moment problem (an interpolation problem with two constraints), accompanied by a related minimization problem with infinitely many constraints. One characterizes the non-emptiness of the set of feasible solutions, and the existence of at least one minimum point is also proved (see Theorem 4). The uniqueness of such a point is briefly discussed (see Remark 7). The reader can find details and completions to the second part of this work by means of our references.

The rest of the paper is organized as follows. In Section 2, the main methods used in the sequel are pointed out. Section 3 contains the results on the subjects briefly mentioned above and is divided into two subsections. The common point is the notion of convexity for operators and for real valued functions, and its relationships with linear operators. Section 4 discusses the relevant results and concludes the paper.

#### **2. Methods**

The main methods used in what follows are:


#### **3. Results**

#### *3.1. Uniform Boundedness for Families of Convex Operators and Related Consequences*

In the sequel, *X* will be a (not necessarily locally convex) topological vector space which cannot be expressed as a countable union of closed subsets having empty interiors, and *Y* will be a locally convex vector lattice on which the lattice operations are continuous and there exists a fundamental system V of neighborhoods *V* of 0*<sup>Y</sup>* which are convex, closed, and solid subsets, i.e.,

$$|y\_1| \le |y\_2|,\ y\_2 \in V \Rightarrow y\_1 \in V.$$

Both spaces *X*,*Y* are vector spaces over the real field. Consider a class C of convex continuous operators *P* : *X* → *Y*, *P*(**0**) = **0**. Recall that we can always reduce the problem of proving the equicontinuity of a family of convex operators at a point *x*<sup>0</sup> ∈ *X* to the equicontinuity of a corresponding family of convex operators at **0**, where each element *P* of the latter family satisfies the condition *P*(**0**) = **0** (cf. [10], the proof of Theorem 3.1). The next result was published in [11].

**Theorem 1.** *Additionally assume that for each V* ∈ V, *and any x* ∈ *X*, *there exists a small enough positive number r such that*

$$rP(x) \in V \,\,\forall P \in \mathcal{C}.$$

*Then, for any V*<sup>0</sup> ∈ V*, there exists a closed convex neighborhood W*<sup>0</sup> *of* 0*<sup>X</sup> such that*

$$\bigcup\_{P \in \mathcal{C}} P(\,\, W\_0) \subset \, V\_0.$$

*One writes* $\lim\_{x \to 0\_X} P(x) = \mathbf{0}\_Y$ *uniformly in* $P \in \mathcal{C}$.

**Proof.** For any $V\_0 \in \mathcal{V}$ and any $P \in \mathcal{C}$, define $P\_1 : X \to Y$, $P\_1(x) := \sup\{P(x), P(-x)\}$, $x \in X$. The operator $P\_1$ is obviously convex. An additional property of $P\_1$ is $P\_1(x) = P\_1(-x)$, $x \in X$. Consequently, the codomain of $P\_1$ is $Y\_+$, since $\mathbf{0}\_Y = P\_1(\mathbf{0}\_X) = P\_1\left(\frac{1}{2}x + \frac{1}{2}(-x)\right) \le \frac{1}{2} \cdot 2P\_1(x) = P\_1(x)$, $x \in X$. The operator $P\_1$ is also continuous, as the least upper bound of two continuous operators, thanks to the continuity of the "sup" operation from $Y \times Y$ to $Y$. The subset $P\_1^{-1}(V\_0)$ is closed, due to the continuity of $P\_1$. Now, we prove that it is also convex. Indeed, for $x\_1, x\_2 \in P\_1^{-1}(V\_0)$, $t \in [0, 1]$, the following relations hold:

$$P\_1((1-t)x\_1 + tx\_2) \le (1-t)P\_1(x\_1) + tP\_1(x\_2) \in V\_0,$$

since $V\_0$ is convex and $P\_1$ is convex too. Now, using the assumption that $V\_0$ is solid, it follows that

$$P\_1((1-t)x\_1 + tx\_2) \in V\_0 \Leftrightarrow (1-t)x\_1 + tx\_2 \in P\_1^{-1}(V\_0).$$

We define

$$\mathcal{W}\_0 := \bigcap\_{P \in \mathcal{C}} P\_1^{-1}(V\_0).$$

The subset $W\_0$ is closed and convex, as an intersection of such subsets. Clearly, $\bigcup\_{P \in \mathcal{C}} P\_1(W\_0) \subset V\_0$. For any $x \in W\_0$ and any $P \in \mathcal{C}$, it follows that

$$|P(x)| \le \sup \{ P(x), P(-x) \} = P\_1(x) \in V\_0,$$

because $-P(x) \le P(-x)$, $x \in X$. Indeed, $\mathbf{0}\_Y = P(\mathbf{0}\_X) \le \frac{1}{2}(P(x) + P(-x))$, $x \in X$. Having in mind the property of $V\_0$, we infer that $P(x) \in V\_0$, $\forall x \in W\_0$, $\forall P \in \mathcal{C}$. The first conclusion is $\bigcup\_{P \in \mathcal{C}} P(W\_0) \subset V\_0$. To finish the proof, we have to show that $W\_0$ is a neighborhood of $\mathbf{0}\_X$. For any $x \in X$ and for any $V\_0 \in \mathcal{V}$, there exists a sufficiently small $r\_0 > 0$ such that $\alpha P\_1(x) \in V\_0$ $\forall \alpha \in \mathbb{R}$, $|\alpha| \le r\_0$, $\forall P \in \mathcal{C}$. We can suppose that $r\_0 \le 1$. From the preceding considerations, it follows that

$$\alpha \in [0, r\_0] \subset [0, 1] \Rightarrow P\_1(\alpha x) = P\_1((1 - \alpha)\mathbf{0}\_X + \alpha x) \le \alpha P\_1(x) \in V\_0 \Rightarrow P\_1(\alpha x) \in V\_0,$$

$$\alpha \in [-r\_0, 0] \Rightarrow P\_1(\alpha x) = P\_1((-\alpha)(-x)) \le (-\alpha)P\_1(-x) \le r\_0 P\_1(x) \in V\_0, \ P \in \mathcal{C}.$$

These relations lead to: $x \in X$, $|\alpha| \le r\_0 \Rightarrow \alpha x \in W\_0 \Rightarrow x \in \frac{1}{|\alpha|}W\_0 \subset nW\_0$ for a sufficiently large $n \in \mathbb{N}$. Consequently, the following basic relation holds true: $X = \bigcup\_{n \in \mathbb{N}} nW\_0$. Now, recall that $W\_0$ is closed and convex, and our assumption on *X* yields $int(W\_0) \neq \emptyset$, so that there exists $x\_0 \in int(W\_0) \Rightarrow \mathbf{0}\_X = \frac{1}{2}(x\_0 + (-x\_0)) \in int(W\_0)$. This concludes the proof.

**Corollary 1.** *Let X be a Banach space, Y a Banach lattice, and* $\mathcal{C}$ *a collection of continuous convex operators* $P : X \to Y$, $P(\mathbf{0}) = \mathbf{0}$, *such that for any* $x \in X$, *we have* $\sup\_{P \in \mathcal{C}} ||P(x)||\_Y < \infty$. *Then the following relation holds:* $\sup\_{P \in \mathcal{C},\, ||x|| \le 1} ||P(x)||\_Y < \infty$.

In the sequel, *X* will be an (F) space, i.e., a metrizable complete (not necessarily locally convex) topological vector space, *Y* will be a normed vector lattice (in particular, its norm is monotone on *Y*<sup>+</sup> : (**0***<sup>Y</sup>* ≤ *y*<sup>1</sup> ≤ *y*<sup>2</sup> ⇒ ||*y*1||*<sup>Y</sup>* ≤ ||*y*2||*Y*) and the multiplication with scalars is continuous). Recall that a normed vector lattice *Y* is a vector lattice endowed with a solid norm (|*y*1| ≤ |*y*2| ⇒ ||*y*1|| ≤ ||*y*2||), so the lattice operations are continuous. Consider a class S of sublinear operators Φ : *X* → *Y*<sup>+</sup> such that Φ(*x*) = Φ(−*x*) ∀*x* ∈ *X*, ∀Φ ∈ S.

**Corollary 2.** *Let* $X, Y, \mathcal{S}$ *be as above. Assume that* $\Phi$ *is continuous* $\forall \Phi \in \mathcal{S}$ *and* $\sup\_{\Phi \in \mathcal{S}} ||\Phi(x)||\_Y < \infty$ $\forall x \in X$. *Then there exists a convex closed neighborhood* $U$ *of* $\mathbf{0}\_X$ *such that* $\bigcup\_{\Phi \in \mathcal{S}} \Phi(U) \subset B\_{1,Y} := B\_1(\mathbf{0}\_Y)$, *where* $B\_1(\mathbf{0}\_Y)$ *is the closed unit ball centered at the origin of the space* $Y$.

The proof follows the ideas of the proof of Theorem 1, also applying Baire's theorem.

**Remark 1.** *Under previous conditions, assuming that Y is a normed vector lattice (the norm on Y is solid and the lattice operations are continuous), Corollary 2 says that*

$$x\_1 - x\_2 \in U \Rightarrow |\Phi(x\_1) - \Phi(x\_2)| \le \Phi(x\_1 - x\_2) \in B\_{1,Y} \Rightarrow$$

$$\Phi(\mathbf{x}\_1) - \Phi(\mathbf{x}\_2) \in \mathcal{B}\_{1,Y} \; \forall \Phi \in \mathcal{S}.$$

*It follows that* $\mathcal{S}$ *is equicontinuous.*

**Example 1.** *Using the above notations, let* $\mathcal{L}$ *be a family of linear continuous operators from X to Y such that* $\sup\_{T \in \mathcal{L}} ||T(x)||\_Y < \infty$ $\forall x \in X$. *Define* $\Phi(x) = \Phi\_T(x) := |T(x)|$, $x \in X$, $T \in \mathcal{L}$. *Then, the family* $\mathcal{S} = \{\Phi\_T\}\_{T \in \mathcal{L}}$ *verifies the condition* $\sup\_{T \in \mathcal{L}} ||\Phi\_T(x)||\_Y < \infty$ $\forall x \in X$.

**Remark 2.** *Theorem 1 holds true when X is a Banach space, Y is a normed vector lattice, and the other conditions of Theorem 1 are fulfilled. A similar result may be true for more general spaces X (involving the notion of a barreled TVS). However, only for a few spaces can it be easily proved that they are barreled, without using Baire's theorem. On the other hand, for applications, the most important spaces are Banach spaces, especially Banach lattices.*

**Theorem 2.** *Let X be a Banach space and Y an order complete normed vector lattice with strong order unit u*0, *such that B*1,*<sup>Y</sup>* = [−*u*0, *u*0]. *Let* S *be a class of sublinear operators with the properties mentioned in Corollary 2. Additionally, assume that* Φ(*x*) = Φ(−*x*) ∀*x* ∈ *X*, ∀Φ ∈ S. *Then, the relation*

$$\widetilde{\Phi}(x) = \sup\_{\Phi \in \mathcal{S}} \Phi(x) \quad \forall x \in X$$

*defines a sublinear Lipschitz operator* $\widetilde{\Phi}$, *such that* $\widetilde{\Phi}(x) = \widetilde{\Phi}(-x) \in Y\_+$ $\forall x \in X$.

**Proof.** Application of Corollary 2 leads to the existence of a closed ball of sufficiently small radius *r* > 0 such that

$$||\mathfrak{x}||\_X \le r \Rightarrow \Phi(\mathfrak{x}) \in B\_{1,Y} = [-\mathfrak{u}\_0, \mathfrak{u}\_0] \quad \forall \Phi \in \mathcal{S}.$$

It results

$$\Phi\left(r\frac{\mathbf{x}}{||\mathbf{x}||\_{X}}\right) \le u\_0 \Leftrightarrow \ \Phi(\mathbf{x}) \le \frac{||\mathbf{x}||\_{X}}{r} u\_0 \ \forall \mathbf{x} \in X \backslash \{0\_X\}, \ \forall \Phi \in \mathcal{S}.\tag{5}$$

Thus, according to (5), for any fixed *x* ∈ *X*, the set {Φ(*x*); Φ ∈ S } is bounded from above in *Y*. Thanks to the hypothesis on order completeness of *Y*, there exists

$$\widetilde{\Phi}(x) := \sup\_{\Phi \in \mathcal{S}} \Phi(x) \le \frac{||x||\_X}{r} u\_0 \quad \forall x \in X. \tag{6}$$

It is easy to see that $\widetilde{\Phi}$ is sublinear and has the property $\widetilde{\Phi}(x) = \widetilde{\Phi}(-x) \in Y\_+$ $\forall x \in X$. Next, we prove the Lipschitz property of $\widetilde{\Phi}$. To do this, one uses the subadditivity of $\widetilde{\Phi}$, the fact that the norm of *Y* is monotone on $Y\_+$, and relation (6). Namely, the following implications hold:

$$\begin{aligned} x\_1, x\_2 \in X, \ |\widetilde{\Phi}(x\_1) - \widetilde{\Phi}(x\_2)| \le \widetilde{\Phi}(x\_1 - x\_2) \Rightarrow \\ ||\widetilde{\Phi}(x\_1) - \widetilde{\Phi}(x\_2)||\_Y \le ||\widetilde{\Phi}(x\_1 - x\_2)||\_Y \le \left\| \frac{||x\_1 - x\_2||\_X}{r} u\_0 \right\|\_Y = \frac{||x\_1 - x\_2||\_X}{r}. \end{aligned}$$

Hence, $\widetilde{\Phi}$ is a Lipschitz mapping from *X* to $Y\_+$. This concludes the proof.

**Remark 3.** *Under the hypothesis of Theorem 2, each element* $\Phi \in \mathcal{S}$ *is a Lipschitz operator, with the same Lipschitz constant* $1/r$.

**Remark 4.** *It seems that topological completeness of Y is not necessary for the above results. However, the usual concrete spaces verifying the hypothesis of Theorem 2 are Banach spaces.*

**Remark 5.** *The set* $\mathcal{C}$ *of all continuous sublinear operators* $\Phi$ *from X to* $Y\_+$, *such that* $\Phi(x) = \Phi(-x)$ $\forall x \in X$, $\forall \Phi \in \mathcal{C}$, *and* $\sup\_{\Phi \in \mathcal{C}} ||\Phi(x)||\_Y < \infty$ $\forall x \in X$, *is a convex cone. With the notations and under the assumptions of Theorem 2, the subset of* $\mathcal{C}$ *formed by all elements* $\Phi$ *with the property* $\Phi(B\_{1,X}) \subset B\_{1,Y}$ *is convex, and its elements are the non-expansive operators from* $\mathcal{C}$. *If the constant r from the proof of Theorem 2 is strictly greater than* 1, *then the elements of* $\mathcal{S}$ *(as well as the operator* $\widetilde{\Phi}$*) are contractions.*

**Remark 6.** *An arbitrary sublinear operator* $\Phi : X \to Y\_+$ *is a Lipschitz operator if and only if* $\Phi$ *is continuous at* $\mathbf{0}\_X$.

**Corollary 3.** *Let X and Y be as in Theorem 2, and let* $\mathcal{S} = \{\Phi\_n;\ n \in \mathbb{N}\}$ *be a countable set of sublinear continuous operators from X to Y, such that* $\Phi\_n(x) = \Phi\_n(-x)$ $\forall x \in X$, $\forall n \in \mathbb{N}$, *and* $\sup\_{n \in \mathbb{N}} ||\Phi\_n(x)||\_Y < \infty$ $\forall x \in X$. *Then, the relation*

$$\tilde{\Phi}(\mathfrak{x}) = \sup\_{n \in \mathbb{N}} \Phi\_n(\mathfrak{x}) \quad \forall \mathfrak{x} \in X$$

*defines a sublinear Lipschitz operator* $\widetilde{\Phi} : X \to Y\_+$, *such that* $\widetilde{\Phi}(x) = \widetilde{\Phi}(-x)$ $\forall x \in X$.

**Corollary 4.** *Let X, Y be as in Theorem 2, and let* $\mathcal{T} = \{\Phi\_n;\ n \in \mathbb{N}, n \ge 1\}$ *be a countable set of sublinear continuous operators from X to* $Y\_+$, *such that* $\Phi\_n(x) = \Phi\_n(-x)$ $\forall x \in X$, $\forall n \in \{1, 2, \ldots\}$, *and*

$$\sup\_{\substack{n \in \mathbb{N}, \\ n \ge 1}} \left\| \sum\_{k=1}^{n} \Phi\_{k}(x) \right\|\_{Y} < \infty \quad \forall x \in X.$$

*Then, the relation*

$$\widetilde{\Phi}(x) = \sup\_{\substack{n \in \mathbb{N}, \\ n \ge 1}} \left( \sum\_{k=1}^{n} \Phi\_{k}(x) \right) \quad \forall x \in X,$$

*defines a sublinear Lipschitz operator* $\widetilde{\Phi} : X \to Y\_+$, *such that* $\widetilde{\Phi}(x) = \widetilde{\Phi}(-x)$ $\forall x \in X$.

**Example 2.** *Let K be a Hausdorff compact topological space, endowed with a regular Borel probability measure* $\mu$, $X := C(K)$ *the Banach lattice of all real-valued continuous functions on K, and* $Y := l^\infty$ *the space of all bounded sequences of real numbers. The norm on X is the sup-norm* $||\cdot||\_{\sup}$, *and the norm on Y is the usual norm* $||\cdot||\_Y = ||\cdot||\_\infty$, $||(x\_n)\_{n \ge 1}||\_\infty = \sup\_{n \ge 1} |x\_n|$. *The space* $Y = l^\infty$ *verifies the hypothesis of Theorem 2, since it is an order complete normed vector lattice, the appropriate strong order unit being the sequence* $u\_0$ *with all terms equal to* 1. *Define the scalar-valued norms on X*

$$N\_k(f) := \left(\int\_K |f|^k d\mu\right)^{1/k}, \ f \in X, \ k \in \mathbb{N}, \ k \ge 1,$$

*and the finite dimensional vector-valued norms on X*

$$S\_n(f) := ||f||\_n : X \to Y,$$

$$||f||\_n := \left(N\_1(f),\ 2^{1/2}N\_2(f),\ \dots,\ n^{1/n}N\_n(f), \ 0, \dots, 0, \dots\right), \ n \in \mathbb{N}, \ n \ge 1, \ f \in X,$$

$$N\_k(f) \le ||f||\_{\sup}\,(\mu(K))^{1/k} = ||f||\_{\sup}, \quad N\_k(\mathbf{1}) = 1 \Rightarrow \sup\_{||f||\_{\sup} = 1} N\_k(f) = 1, \quad k \in \{1, 2, \dots\}, \ f \in X.$$
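These estimates can be checked numerically in a simple instance (a hedged illustration: $\mu$ is taken as the uniform probability measure on $K = [0,1]$ and $f(t) = t$, so that $\|f\|\_{\sup} = 1$ and $N\_k(f) = (1/(k+1))^{1/k}$):

```python
import numpy as np

# Check N_k(f) <= ||f||_sup and N_k(1) = 1 for mu = uniform measure on [0, 1].
t = np.linspace(0.0, 1.0, 100001)

def N(f_vals, k):
    # (integral_K |f|^k dmu)^(1/k), approximated by the trapezoidal rule.
    integral = np.sum((np.abs(f_vals[1:])**k + np.abs(f_vals[:-1])**k) / 2 * np.diff(t))
    return integral ** (1.0 / k)

for k in range(1, 8):
    assert N(t, k) <= 1.0 + 1e-9                      # N_k(f) <= ||f||_sup
    assert abs(N(np.ones_like(t), k) - 1.0) < 1e-12   # N_k(1) = 1
print([round(N(t, k), 4) for k in range(1, 8)])
```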

*Consider the elementary function t* → *g*(*t*) := *ln*(*t*)/*t*, *t* ∈ [1, ∞)*, which is increasing on* [1,*e*] *and decreasing on the interval* [*e*, ∞). *This function has a global maximum point at t*<sup>0</sup> = *e* ∈ (2, 3). *It results that the function*

$$h: (1, \infty) \to (0, \infty), \ h(t) := t^{1/t} = e^{\ln(t)/t}$$

*has the same monotonicity properties; hence,*

$$\max\_{1 \le k \le n} k^{1/k} \le \max \left\{ 2^{1/2}, 3^{1/3} \right\} = 3^{1/3} \quad \forall n \in \{1, 2, \ldots\}.$$
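This elementary bound is easy to confirm numerically: over the integers, $k^{1/k}$ attains its maximum at $k = 3$, in line with $h(t) = t^{1/t}$ increasing on $[1, e]$ and decreasing on $[e, \infty)$.

```python
# Check that k^(1/k) over the integers is maximized at k = 3.
values = {k: k ** (1.0 / k) for k in range(1, 50)}
k_max = max(values, key=values.get)
print(k_max, values[k_max])  # 3, 3^(1/3) (about 1.4422)
```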

*Thus, we obtain*

$$f \in X \Rightarrow S\_n(f) = ||f||\_n \leq \max\_{1 \leq k \leq n} k^{1/k} ||f||\_{\sup} u\_0 \leq \mathfrak{J}^{1/3} \, ||f||\_{\sup} u\_0 \,\,\forall n \in \{1, 2, \ldots\} \Rightarrow$$

$$\widetilde{\Phi}(f) = \sup\_{n \in \mathbb{N}} S\_n(f) = \left( n^{1/n} N\_n(f) \right)\_{n \geq 1} \leq 3^{1/3}\, ||f||\_{\sup} u\_0,$$

$$u\_0 = (1, \ldots, 1, \ldots), \ f \in X,$$

*where* Φ̃ *is the sublinear operator from Corollary 4. Observe that* Φ̃ *has Lipschitz constant* 3<sup>1/3</sup> > 1*. Next, we apply the same method, replacing n*<sup>1/*n*</sup> *by*

$$
n^{-1/n} = \exp(-\ln(n)/n) \le 1 \quad \forall n \in \{1, 2, \ldots\}.
$$

*In this case, the above estimations turn into the following ones:*

$$\widetilde{\Phi}(f) = \sup\_{n \in \mathbb{N}} \mathcal{S}\_n(f) = \left( n^{-1/n} \mathcal{N}\_n(f) \right)\_{n \ge 1} \le ||f||\_{\sup} u\_0 \,\,\,\forall f \in \mathcal{X} \Rightarrow$$

$$|\widetilde{\Phi}(f) - \widetilde{\Phi}(g)| \le \widetilde{\Phi}(f - g) \le ||f - g||\_{\sup} u\_0 \Rightarrow$$

$$||\widetilde{\Phi}(f) - \widetilde{\Phi}(g)||\_Y \le ||\widetilde{\Phi}(f - g)||\_Y \le ||f - g||\_{\sup} \ \forall f, g \in X.$$

*To conclude, in this case,* Φ̃ *is a nonexpansive vector-valued norm from X to Y*. *To obtain contractions* Φ̃*, consider*

$$(c\_n)\_{n\geq1} \in Y = l^{\infty}, \ 0 \leq c\_n \leq q < 1 \ \forall n \geq 1,$$

$$S\_n(f) = (c\_1 N\_1(f), \dots, c\_n N\_n(f), 0, \dots, 0, \dots),$$

$$\widetilde{\Phi}(f) = \sup\_{n\geq 1} S\_n(f) = (c\_n N\_n(f))\_{n\geq 1} \leq q\, ||f||\_{\sup} u\_0 \ \forall f \in X \Rightarrow$$

$$|\widetilde{\Phi}(f) - \widetilde{\Phi}(g)| \leq \widetilde{\Phi}(f - g) \leq q\, ||f - g||\_{\sup} u\_0 \ \Rightarrow$$

$$||\widetilde{\Phi}(f) - \widetilde{\Phi}(g)||\_Y \leq q\, ||f - g||\_{\sup} ||u\_0||\_Y = q\, ||f - g||\_{\sup} \ \forall f, g \in X.$$

*Thus,* Φ̃ : *X* → *Y*<sub>+</sub> *is a contraction vector-valued norm, of contraction constant q, and the best value for q is q* = sup*<sub>n≥1</sub> c<sub>n</sub>*. *In particular, if* 0 ≤ inf*<sub>n≥1</sub> c<sub>n</sub>* ≤ sup*<sub>n≥1</sub> c<sub>n</sub>* = 1/2*, then* Φ̃ *is a contraction operator, of contraction constant q* = 1/2*. In this example, the operators* Φ*<sub>n</sub> mentioned in Corollary 4 stand for f* ↦ (0, . . . , 0, *c<sub>n</sub>N<sub>n</sub>*(*f*), 0, 0, . . .)*, and c<sub>n</sub>N<sub>n</sub>*(*f*) *is the n-th coordinate of the vector S<sub>n</sub>*(*f*) ∈ *Y*<sub>+</sub>.
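The contraction estimate of this example can be illustrated numerically. The sketch below (our own approximation, not part of the paper) takes *K* = [0, 1] with Lebesgue measure, *c<sub>n</sub>* ≡ 1/2, approximates each *N<sub>k</sub>*(*f*) by a midpoint Riemann sum, truncates the sequence index, and checks that ||Φ̃(*f*) − Φ̃(*g*)||<sub>∞</sub> ≤ (1/2)||*f* − *g*||<sub>sup</sub> for a pair of sample functions.

```python
import math

M = 2000      # number of midpoint Riemann-sum nodes on [0, 1]
NMAX = 40     # truncation of the sequence index n

def N(f, k):
    """Riemann-sum approximation of N_k(f) = (∫_0^1 |f|^k dx)^(1/k)."""
    s = sum(abs(f((i + 0.5) / M)) ** k for i in range(M)) / M
    return s ** (1.0 / k)

def Phi(f, q=0.5):
    """Truncated version of Φ̃(f) = (c_n N_n(f))_{n>=1} with c_n = q."""
    return [q * N(f, n) for n in range(1, NMAX + 1)]

f, g = math.sin, math.exp
diff_Phi = max(abs(a - b) for a, b in zip(Phi(f), Phi(g)))
sup_diff = max(abs(f((i + 0.5) / M) - g((i + 0.5) / M)) for i in range(M))
print(diff_Phi, 0.5 * sup_diff)  # the first value does not exceed the second
```

The discrete Minkowski inequality guarantees |*N<sub>n</sub>*(*f*) − *N<sub>n</sub>*(*g*)| ≤ *N<sub>n</sub>*(*f* − *g*) on the sample grid as well, so the check holds exactly, not only up to discretization error.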

#### *3.2. A Constrained Minimization Problem Related to a Markov Moment Problem*

The present subsection is motivated by proving results similar to some of those of [28]. A result is proved in a general setting, by means of Theorem 3 stated below. A related constrained optimization problem in infinite-dimensional spaces is solved as well. The results presented in the sequel were published in [34]. In particular, using the latter theorem, one obtains a necessary and sufficient condition for the existence of a feasible solution (see Theorem 4 below). Under such a condition, the existence of an optimal feasible solution follows as well. On the other hand, the uniqueness and the construction of the optimal solution do not seem to be easily obtained by such general methods. Therefore, we focus mainly on the existence problem. For other aspects of such problems concerning an optimal solution (uniqueness or non-uniqueness, construction of a unique solution, etc.), see [28]. In the latter work, one considers the following primal problem (P): study the constrained minimization problem:

$$\nu = \inf \left\{ ||\varphi||\_{\infty}; \ \varphi \in L\_{\mu}^{\infty}(Z), \ \int\_Z \varphi f\_j d\mu = b\_j, \ j = 1, \dots, n, \ \mathbf{0} \le \alpha \le \varphi \le \beta \right\},$$

where *α*, *β* are in *L*<sup>∞</sup><sub>*µ*</sub>(*Z*), {*f<sub>j</sub>*}<sup>*n*</sup><sub>*j*=1</sub> is a subset of *L*<sup>1</sup><sub>*µ*</sub>(*Z*), and *b* = (*b*<sub>1</sub>, . . . , *b<sub>n</sub>*)<sup>*t*</sup> ∈ R<sup>*n*</sup>. The function *ϕ* is unknown, and in general, it is not determined by a finite number of moments. The next theorem discusses some of the above existence-type results for a feasible solution. Here, (*Z*, M) is a measure space endowed with a *σ*-finite positive measure *µ*, and M is the *σ*-algebra of all measurable subsets of *Z*.
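To make problem (P) concrete before the abstract results, here is a tiny discretized illustration with entirely hypothetical data (ours, not from [28] or [34]): *Z* consists of two points with equal weights, there is a single moment function, and the minimum of ||*ρ*||<sub>∞</sub> over the feasible set is found by a grid scan.

```python
# Two-point discretization of problem (P): minimize max(r1, r2) subject to
# the single moment constraint mu1*f1*r1 + mu2*f2*r2 = b and alpha <= r <= beta.
mu = (0.5, 0.5)       # weights of the two points of Z (hypothetical data)
f = (1.0, 2.0)        # values f(z1), f(z2) of the moment function
b = 1.0               # prescribed moment
alpha, beta = 0.2, 2.0

best = None
steps = 100000
for i in range(steps + 1):
    r1 = alpha + (beta - alpha) * i / steps
    r2 = (b - mu[0] * f[0] * r1) / (mu[1] * f[1])  # enforce the constraint
    if alpha <= r2 <= beta:
        cand = max(r1, r2)
        if best is None or cand < best:
            best = cand
print(best)  # close to 2/3, attained at r1 = r2 = 2/3
```

With these data the constraint reads 0.5·*r*<sub>1</sub> + *r*<sub>2</sub> = 1, and balancing the two coordinates gives the minimal sup-norm 2/3; in the infinite-dimensional setting this elementary scan is replaced by the duality and compactness arguments below.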

**Theorem 3.** *See* [22]. *Let X be an ordered vector space, Y an order complete vector lattice,* {*ϕ<sub>j</sub>*}<sub>*j*∈*J*</sub> ⊂ *X and* {*y<sub>j</sub>*}<sub>*j*∈*J*</sub> ⊂ *Y given arbitrary families, and T*<sub>1</sub>, *T*<sub>2</sub> ∈ *L*(*X*, *Y*) *two linear operators. The following statements are equivalent:*

*(a) there is a linear operator T* ∈ *L*(*X*,*Y*)*, such that*

$$T\_1(\mathbf{x}) \le T(\mathbf{x}) \le T\_2(\mathbf{x}) \,\,\forall \mathbf{x} \in X\_+, \; T(\varphi\_j) = y\_j \,\,\forall j \in J;$$

*(b) for any finite subset J*<sub>0</sub> ⊂ *J and any* {*λ<sub>j</sub>*; *j* ∈ *J*<sub>0</sub>} ⊂ R*, the following implication holds true:*

$$\left(\sum\_{j \in J\_0} \lambda\_j \varphi\_j = \psi\_2 - \psi\_1, \ \psi\_1, \psi\_2 \in X\_+\right) \Rightarrow \sum\_{j \in J\_0} \lambda\_j y\_j \le T\_2(\psi\_2) - T\_1(\psi\_1);$$

*If X is a vector lattice, then assertions (a) and (b) are equivalent to (c), where (c) is formulated as follows:*

*(c) T*<sub>1</sub>(*w*) ≤ *T*<sub>2</sub>(*w*) *for all w* ∈ *X*<sub>+</sub>*, and for any finite subset J*<sub>0</sub> ⊂ *J and* ∀ {*λ<sub>j</sub>*; *j* ∈ *J*<sub>0</sub>} ⊂ R*, we have*

$$\sum\_{j \in J\_0} \lambda\_j y\_j \le T\_2 \left( \left( \sum\_{j \in J\_0} \lambda\_j \varphi\_j \right)^+ \right) - T\_1 \left( \left( \sum\_{j \in J\_0} \lambda\_j \varphi\_j \right)^- \right).$$

The next result is an application of Theorem 3 stated above, also using a constrained minimization argument.

**Theorem 4.** *Let p* ∈ (1, ∞) *and let q be the conjugate of p. Let* {*f<sub>j</sub>*}<sub>*j*∈*J*</sub> *be an arbitrary family of functions in L*<sup>*p*</sup><sub>*µ*</sub>(*Z*)*, where the measure µ is σ-finite, and* {*b<sub>j</sub>*}<sub>*j*∈*J*</sub> *a family of real numbers. Assume that α*, *β* ∈ *L*<sup>*q*</sup><sub>*µ*</sub>(*Z*) *are such that* **0** ≤ *α* ≤ *β. The following statements are equivalent:*

*(a) there exists ϕ* ∈ *L*<sup>*q*</sup><sub>*µ*</sub>(*Z*) *such that α* ≤ *ϕ* ≤ *β µ-almost everywhere and* ∫*<sub>Z</sub> ϕ f<sub>j</sub> dµ* = *b<sub>j</sub>* *for all j* ∈ *J;*

*(b) for any finite subset J*<sub>0</sub> ⊂ *J and any* {*λ<sub>j</sub>*; *j* ∈ *J*<sub>0</sub>} ⊂ R*,*

$$\sum\_{j \in J\_0} \lambda\_j f\_j = \psi\_2 - \psi\_1, \ \psi\_1, \ \psi\_2 \in \left(L^p\_\mu(Z)\right)\_+ \implies \sum\_{j \in J\_0} \lambda\_j b\_j \le \int\_Z \beta \psi\_2 \, d\mu - \int\_Z \alpha \psi\_1 \, d\mu;$$

*Moreover, the set of all feasible solutions ϕ (satisfying the conditions of (a)) is weakly compact with respect to the dual pair* (*L<sup>p</sup>*, *L<sup>q</sup>*)*, and the infimum*

$$\nu = \inf \left\{ ||\varphi||\_{q}: \ \varphi \in L^q\_{\mu}(Z), \ \int\_Z \varphi f\_j d\mu = b\_j, \ j \in J, \ \mathbf{0} \le \alpha \le \varphi \le \beta \right\} \ge ||\alpha||\_{q}$$

*is attained for at least one optimal feasible solution ϕ*0.

**Proof.** Since the implication (a) ⟹ (b) is obvious, the next step consists in proving that (b) ⟹ (a). We define the linear positive (continuous) forms *T*<sub>1</sub>, *T*<sub>2</sub> on *X* = *L*<sup>*p*</sup><sub>*µ*</sub>(*Z*) by

$$T\_1(f) = \int\_Z \alpha f d\mu, \ \ T\_2(f) = \int\_Z \beta f d\mu, \ \ f \in X.$$

Then, condition (b) of the present theorem coincides with condition (b) of Theorem 3. A straightforward application of the latter theorem leads to the existence of a linear form *T* on *X* such that the interpolation conditions *T*(*f<sub>j</sub>*) = *b<sub>j</sub>*, *j* ∈ *J*, are verified and

$$\int\_Z \alpha \psi d\mu \le T(\psi) \le \int\_Z \beta \psi d\mu, \ \ \psi \in X\_+.$$

In particular, the linear form *T* is positive on *X* = *L*<sup>*p*</sup><sub>*µ*</sub>(*Z*), and this space is a Banach lattice. It is known that on such spaces, any positive linear functional is continuous (see [5], [8], or [23]). The conclusion is that *T* can be represented by means of a nonnegative function *ϕ* ∈ *L*<sup>*q*</sup><sub>*µ*</sub>(*Z*). From the previous relations, we infer that

$$\int\_Z \alpha \psi d\mu \le \int\_Z \varphi \psi d\mu \le \int\_Z \beta \psi d\mu, \ \ \psi \in X\_+.$$

Writing these relations for *ψ* = *χ<sub>B</sub>*, where *B* is an arbitrary measurable set with *µ*(*B*) > 0, one deduces

$$\int\_{B} (\varphi - \alpha)d\mu \ge 0, \ \int\_{B} (\beta - \varphi)d\mu \ge 0, \ B \in \mathcal{M}, \ \mu(B) > 0.$$

Now, a standard measure theory argument shows that *α* ≤ *ϕ* ≤ *β* almost everywhere in *Z*. This finishes the proof of (b) ⟹ (a). To prove the last assertion of the theorem, observe that the set of all feasible solutions is weakly compact in *L*<sup>*q*</sup><sub>*µ*</sub>(*Z*) by Alaoglu's theorem; it is a weakly closed subset of the closed ball centered at the origin, of radius ||*β*||<sub>*q*</sub>, and *L*<sup>*q*</sup><sub>*µ*</sub>(*Z*) is reflexive. On the other hand, the norm of any normed linear space is weakly lower semi-continuous, as the supremum of continuous linear forms, which are also weakly continuous with respect to the dual pair (*L*<sup>*q*</sup><sub>*µ*</sub>(*Z*), *L*<sup>*p*</sup><sub>*µ*</sub>(*Z*)), 1 < *p* < ∞, 1/*p* + 1/*q* = 1. Since *L*<sup>*q*</sup><sub>*µ*</sub>(*Z*) is reflexive for 1 < *q* < ∞, we conclude that the norm ||·||<sub>*q*</sub> is weakly lower semi-continuous on the weakly compact (convex) set described at point (a), so that it attains its minimum at a function *ϕ*<sub>0</sub> of this set. Hence, there exists at least one optimal feasible solution. This concludes the proof.

**Remark 7.** *If the set* {*f<sub>j</sub>*}<sub>*j*∈*J*</sub> *is total in the space L*<sup>*p*</sup><sub>*µ*</sub>(*Z*)*, then the set of all feasible solutions is a singleton, so that there exists a unique solution.*

**Remark 8.** *In the proof of Theorem 4, we claimed that any positive linear functional on L*<sup>*p*</sup><sub>*µ*</sub>(*Z*), 1 < *p* < ∞*, is continuous. Actually, there is a much more general result on this subject. Namely, any positive linear operator acting between two ordered Banach spaces is continuous (see* [8] *and/or* [23]*). In particular, this result holds for positive linear operators acting between Banach lattices.*

#### **4. Discussion**

In the first part of Section 3, this paper brings a few new elements and completions with respect to the basic results previously published on this subject. The main completions are formulated as corollaries, remarks, and two examples. The second subsection of Section 3 reviews the main Theorem 3 and gives one of its applications, stated as Theorem 4. The latter theorem can be applied to the existence of at least one feasible solution for the constrained minimization problem formulated in the same theorem. The problem under consideration is solved on a concrete function space. The index set *J* appearing in Theorems 3 and 4 is arbitrary: finite, countable, or uncountable. In the case of the full moment problem on a closed subset of R*<sup>n</sup>*, we have *J* = N*<sup>n</sup>*, *n* ∈ N, *n* ≥ 1, so in this case, *J* is a countably infinite set of indices. Theorem 4 provides a necessary and sufficient condition for the feasible set of a minimization problem with countably many constraints to be non-empty. The common point of the two subsections of Section 3 is the notion of convexity, applied to real-valued functions and to operators. The connection of convex functions (respectively, convex operators) with linear functionals (respectively, linear operators) is emphasized in both subsections. As a direction for future work, we recall the importance of Markov linear operators. Many such operators arise as solutions of Markov moment problems. They are dominated by a given continuous sublinear operator and map the strong order unit of the domain space to the strong order unit of the codomain space (assuming that both the domain and the codomain are endowed with a strong order unit).

**Funding:** This research received no external funding.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Not applicable.

**Acknowledgments:** The author would like to thank the reviewers for their comments and suggestions, leading to the improvement of the presentation of this paper.

**Conflicts of Interest:** The author declares no conflict of interest.

#### **References**


## *Article* **Sharp Bounds for Trigonometric and Hyperbolic Functions with Application to Fractional Calculus**

**Vuk Stojiljković 1,\*,†, Slobodan Radojević 2,†, Eyüp Çetin 3,4,†, Vesna Šešum-Čavić 5,† and Stojan Radenović 2,†**


**Abstract:** Sharp bounds for cosh(*x*)/*x*, sinh(*x*)/*x*, and sin(*x*)/*x* were obtained, as well as one new bound for (*e<sup>x</sup>* + arctan(*x*))/√*x*. A new feature of the obtained bounds is the symmetry between the upper and the lower bound: the upper bound differs from the lower bound by a constant. New consequences of the inequalities were obtained in terms of the Riemann–Liouville fractional integral and in terms of the standard integral.

**Keywords:** polynomial bounds; L'Hôpital's rule of monotonicity; Jordan's inequality; trigonometric functions

**MSC:** 26D05; 26D07; 26D20

#### **1. Introduction and Preliminaries**

Inequalities have been an ongoing topic of research since their discovery. As proof of how interesting they are, many books have been written in the field; for example, refer to the famous book [1]. The sin(*x*)/*x* inequality will be improved in this paper; thus, we must mention the first inequality of that nature, known as Jordan's inequality.

$$\frac{2}{\pi} < \frac{\sin(\mathfrak{x})}{\mathfrak{x}} < 1; 0 < \mathfrak{x} < \frac{\pi}{2}.$$
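Jordan's inequality is easy to verify numerically; the short plain-Python check below (an illustration of ours, not part of the paper) samples sin(*x*)/*x* on a grid inside (0, *π*/2).

```python
import math

# Sample sin(x)/x on a grid in (0, pi/2); every value must lie strictly
# between 2/pi (approached as x -> pi/2) and 1 (approached as x -> 0).
grid = [i * (math.pi / 2) / 100 for i in range(1, 100)]
ratios = [math.sin(x) / x for x in grid]
lo, hi = min(ratios), max(ratios)
print(lo, hi)  # both strictly between 2/pi ≈ 0.6366 and 1
```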

Multiple proofs of Jordan's inequality exist, and we refer the reader to the following papers for more detail [2–4]. Jordan's inequality was improved on the left-hand side by Mitrinović–Adamović, while the right-hand side is the known Cusa inequality. We state it here for educational purposes.

$$(\cos(\mathfrak{x}))^{\frac{1}{3}} < \frac{\sin(\mathfrak{x})}{\mathfrak{x}} < \frac{2 + \cos(\mathfrak{x})}{3}.$$

Recently, the authors of [5] sharpened Jordan's inequality further.

$$\left(1 - \frac{\mathfrak{x}^2}{\pi^2}\right) e^{-\frac{\ln(2)}{\pi^2} \mathfrak{x}^2} < \frac{\sin(\mathfrak{x})}{\mathfrak{x}} < \left(1 - \frac{\mathfrak{x}^2}{\pi^2}\right) e^{(\frac{1}{\pi^2} - \frac{1}{6})\mathfrak{x}^2}; 0 < \mathfrak{x} < \pi.$$

**Citation:** Stojiljkovi´c, V.; Radojevi´c, S.; Çetin, E.; Cavi´c, V.Š.; Radenovi´c, S. ´ Sharp Bounds for Trigonometric and Hyperbolic Functions with Application to Fractional Calculus. *Symmetry* **2022**, *14*, 1260. https:// doi.org/10.3390/sym14061260

Academic Editor: Savin Treanta

Received: 26 May 2022 Accepted: 15 June 2022 Published: 18 June 2022

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

They also provided other interesting bounds in another paper [6].

$$\begin{aligned} \left(1-\frac{\mathfrak{x}^2}{\pi^2}\right)^{\frac{\mathfrak{x}^4}{90}}e^{(\frac{\mathfrak{x}^2}{90}-\frac{1}{9})\mathfrak{x}^2} &< \frac{\sin(\mathfrak{x})}{\mathfrak{x}}; 0 < \mathfrak{x} < \pi \\\\ \frac{\sin(\mathfrak{x})}{\mathfrak{x}} &< \frac{2}{3} + \frac{1}{3}\left(1-\frac{4\mathfrak{x}^2}{\pi^2}\right)^{\frac{\mathfrak{x}^4}{96}}e^{(\frac{\mathfrak{x}^2}{24}-\frac{1}{2})\mathfrak{x}^2}; 0 < \mathfrak{x} < \frac{\pi}{2} \end{aligned}$$

In this paper, we will sharpen these bounds in a simple and efficient manner. More about such inequalities can be found in the following papers [7–11].


We provide our first definition of a fractional integral that will be used in the corollaries of the results.

**Definition 1.** *The generalized hypergeometric function <sub>p</sub>F<sub>q</sub>*(*a*; *b*; *x*) *is defined as follows [12]:*

$$\,\_pF\_q(a;b;\mathop{\bf x}) = \sum\_{k=0}^{+\infty} \frac{(a\_1)\_k \dots (a\_p)\_k}{(b\_1)\_k \dots (b\_q)\_k} \frac{\mathfrak{x}^k}{k!}$$

*where* (*a*)<sub>*k*</sub> *is the Pochhammer symbol defined as follows [12].*

$$(a)\_k = \frac{\Gamma(a+k)}{\Gamma(a)} = a(a+1)\dots(a+k-1).$$

**Definition 2.** *The Riemann–Liouville fractional integral is defined by [13–15], where* ℜ(*α*) > 0 *and f is locally integrable:*

$$\_aI\_t^\alpha f(t) = \frac{1}{\Gamma(\alpha)} \int\_a^t (t-\mathfrak{x})^{\alpha-1} f(\mathfrak{x}) d\mathfrak{x}.$$
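For readers who want to experiment with this definition, here is a minimal midpoint-rule sketch (our own illustration; the function name and discretization are assumptions, not from the paper). For *α* = 1 the operator reduces to the ordinary integral, which gives an easy sanity check; for *α* < 1 the integrand is singular at *x* = *t* and a naive quadrature rule converges slowly.

```python
import math

def rl_integral(f, a, t, alpha, n=20000):
    """Midpoint-rule approximation of the Riemann-Liouville integral aI_t^alpha f."""
    h = (t - a) / n
    s = 0.0
    for i in range(n):
        x = a + (i + 0.5) * h
        s += (t - x) ** (alpha - 1) * f(x)
    return s * h / math.gamma(alpha)

# alpha = 1, f(x) = x on [0, 1]: the exact value is 1/2.
val1 = rl_integral(lambda x: x, 0.0, 1.0, 1.0)
# alpha = 2, f(x) = x on [0, 1]: exact value ∫_0^1 (1 - x) x dx / Γ(2) = 1/6.
val2 = rl_integral(lambda x: x, 0.0, 1.0, 2.0)
print(val1, val2)
```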

The functions on which we apply the Riemann–Liouville fractional integral are well defined in terms of the integral formula. We will require the following lemma, known as L'Hôpital's rule of monotonicity ([16], p. 10); it is a very useful tool in the theory of inequalities.

**Lemma 1.** *Let f*, *g* : [*m*, *n*] → R *be two continuous functions which are differentiable on* (*m*, *n*)*, with g*′ ≠ 0 *in* (*m*, *n*)*. If f*′/*g*′ *is increasing (or decreasing) on* (*m*, *n*)*, then the functions* (*f*(*x*) − *f*(*m*))/(*g*(*x*) − *g*(*m*)) *and* (*f*(*x*) − *f*(*n*))/(*g*(*x*) − *g*(*n*)) *are also increasing (or decreasing) on* (*m*, *n*)*. If f*′/*g*′ *is strictly monotone, then the monotonicity in the conclusion is also strict.*
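The lemma can be illustrated numerically (an example of ours, not from the paper): take *f* = sin and *g*(*x*) = *x* on (0, *π*/2), so *f*′/*g*′ = cos is decreasing, and the lemma predicts that (*f*(*x*) − *f*(0))/(*g*(*x*) − *g*(0)) = sin(*x*)/*x* is decreasing as well.

```python
import math

# Sample sin(x)/x on an increasing grid inside (0, pi/2) and check that the
# ratio decreases from point to point, as Lemma 1 predicts.
grid = [0.1 + 0.01 * i for i in range(140)]   # stays below pi/2 ≈ 1.5708
ratios = [math.sin(x) / x for x in grid]
decreasing = all(a > b for a, b in zip(ratios, ratios[1:]))
print(decreasing)
```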

#### **2. Main Results**

We provide our first Theorem in the paper.

**Theorem 1.** *The following bounds hold for x* ∈ (0, 1)*.*

$$\frac{1}{\sqrt{x}} + \frac{x^{\frac{3}{2}}}{2} + \sqrt{x} < \frac{e^{x} + \arctan(x)}{\sqrt{x}} < e + \frac{1}{4}(\pi - 10) + \frac{1}{\sqrt{x}} + \frac{x^{\frac{3}{2}}}{2} + \sqrt{x}.$$

**Proof.** Set the following:

$$g(\mathbf{x}) = \frac{e^{\mathbf{x}} - 1 + \arctan(\mathbf{x}) - \frac{\mathbf{x}^2}{2} - \mathbf{x}}{\sqrt{\mathbf{x}}} = \frac{h\_1(\mathbf{x})}{h\_2(\mathbf{x})}$$

where *h*<sub>1</sub>(*x*) = *e<sup>x</sup>* − 1 + arctan(*x*) − *x*²/2 − *x* and *h*<sub>2</sub>(*x*) = √*x*, with *h*<sub>1</sub>(0) = 0 and *h*<sub>2</sub>(0) = 0. After differentiating, we obtain the following.

$$\frac{h\_1'(\mathbf{x})}{h\_2'(\mathbf{x})} = \left(-1 + e^{\mathbf{x}} - \mathbf{x} + \frac{1}{1 + \mathbf{x}^2}\right) \cdot 2\sqrt{\mathbf{x}}.$$

Taking the following:

$$f(\mathbf{x}) = \left(-1 + e^{\mathbf{x}} - \mathbf{x} + \frac{1}{1 + \mathbf{x}^2}\right) \cdot 2\sqrt{\mathbf{x}}$$

and by differentiating it, we obtain the following.

$$f'(\mathbf{x}) = \frac{(2e^{\mathbf{x}} - 3)\mathbf{x}^5 + (e^{\mathbf{x}} - 1)\mathbf{x}^4 + 2(2e^{\mathbf{x}} - 3)\mathbf{x}^3 + (2e^{\mathbf{x}} - 5)\mathbf{x}^2 + (2e^{\mathbf{x}} - 3)\mathbf{x} + e^{\mathbf{x}}}{\sqrt{\mathbf{x}}(\mathbf{x}^2 + 1)^2}$$

The denominator is positive for all *x* ∈ (0, 1). We need to show that *q*(*x*) > 0, where *q*(*x*) denotes the numerator. Using the simple estimates *e<sup>x</sup>* ≥ 1 + *x* and *x*² < 1 for *x* ∈ (0, 1), we obtain the following.

$$q(x) > 2x^6 + 4x^4 > 0.$$

Therefore, *f*′(*x*) > 0, which implies that *f*(*x*) is increasing; therefore, *h*′<sub>1</sub>(*x*)/*h*′<sub>2</sub>(*x*) is increasing, which by Lemma 1 means that (*h*<sub>1</sub>(*x*) − *h*<sub>1</sub>(0))/(*h*<sub>2</sub>(*x*) − *h*<sub>2</sub>(0)) is increasing. However, since we chose the functions *h*<sub>1</sub>(*x*), *h*<sub>2</sub>(*x*) such that *h*<sub>1</sub>(0) = 0 and *h*<sub>2</sub>(0) = 0, we obtain that the following:

$$g(\mathbf{x}) = \frac{e^{\mathbf{x}} - 1 + \arctan(\mathbf{x}) - \mathbf{x} - \frac{\mathbf{x}^2}{2}}{\sqrt{\mathbf{x}}} = \frac{h\_1(\mathbf{x})}{h\_2(\mathbf{x})}$$

is increasing. Therefore, the following inequality holds:

$$
g(0\_+) < g(x) < g(1).
$$

which provides us with the following inequality.

$$0 < \frac{e^{x} - 1 + \arctan(x) - x - \frac{x^2}{2}}{\sqrt{x}} < e + \frac{1}{4}(\pi - 10)$$

This is rearranged and provides us with the desired inequality.

$$
\frac{1}{\sqrt{x}} + \frac{x^{\frac{3}{2}}}{2} + \sqrt{x} < \frac{e^{x} + \arctan(x)}{\sqrt{x}} < e + \frac{1}{4}(\pi - 10) + \frac{1}{\sqrt{x}} + \frac{x^{\frac{3}{2}}}{2} + \sqrt{x}
$$
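A quick numerical spot check of these bounds (our own verification, not part of the paper), with *C* = *e* + (*π* − 10)/4 the constant separating the upper bound from the lower one:

```python
import math

# Check both inequalities of Theorem 1 on a grid in (0, 1); "worst" is the
# smallest of the two margins over all grid points and must be positive.
C = math.e + (math.pi - 10) / 4
worst = float("inf")
for i in range(1, 100):
    x = i / 100
    low = 1 / math.sqrt(x) + x ** 1.5 / 2 + math.sqrt(x)
    mid = (math.exp(x) + math.atan(x)) / math.sqrt(x)
    worst = min(worst, mid - low, C + low - mid)
print(worst)  # positive: both inequalities hold at every grid point
```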

We provide a corollary in which we provide an estimate of the fractional inequality using the previous theorem.

**Corollary 1.** *The following inequality holds for* 0 < *a* < *t,* ℜ(*α*) > 0*, and t* ∈ (0, 1)*:*

$$\begin{split} & \quad \frac{1}{\Gamma(\alpha)} \left( \frac{\sqrt{\pi} \Gamma(\alpha) t^{a - \frac{1}{2}}}{\Gamma\left(\alpha + \frac{1}{2}\right)} - 2\sqrt{a} t^{a - 1} \,\_2F\_1\left(\frac{1}{2}, 1 - \alpha; \frac{3}{2}; \frac{a}{t}\right) + \psi(a, t, a) \right) \\ & \quad + \frac{t^{a} \left( \frac{4\left(1 - \frac{4}{t}\right)^{a} (2aa - a + t) - 4t \,\_2F\_1\left(-\frac{1}{2}, 1 - a; \frac{1}{2}; \frac{a}{t}\right)}{4a^2 - 1} + \frac{\sqrt{\pi} \left( (at)^{3/2} - \sqrt{at}^3 \right) \Gamma(a)}{t(a - t) \Gamma\left(a + \frac{3}{2}\right)} \right)}{2\sqrt{a}} \,\_2F\_1\left(\frac{1}{a}, 1 - \alpha; \frac{a}{t}\right) \end{split}$$

$$\begin{split} &< \,\_aI\_t^a \left( \frac{e^x + \arctan(x)}{\sqrt{x}} \right) < \frac{1}{\Gamma(a)} \left( \frac{(-10 + 4e + \pi)(t - a)^a}{4\alpha} \right) \\ &+ \frac{\sqrt{\pi}\Gamma(a)t^{a - \frac{1}{2}}}{\Gamma\left(a + \frac{1}{2}\right)} - 2\sqrt{a}t^{a - 1} \,\_2F\_1\left(\frac{1}{2}, 1 - a; \frac{3}{2}; \frac{a}{t}\right) + \psi(a, t, a) \\ &+ \frac{t^a \left(\frac{4\left(1 - \frac{4}{t}\right)^a (2aa - a + t) - 4t \,\_2F\_1\left(-\frac{1}{2}, 1 - a; \frac{1}{2}; \frac{a}{t}\right)}{4a^2 - 1} + \frac{\sqrt{\pi}\left(\left(at\right)^{3/2} - \sqrt{at^3}\right) \Gamma(a)}{t(a - t)\Gamma\left(a + \frac{2}{3}\right)} \right) \\ &= \frac{2\sqrt{a}}{\sqrt{a}} \end{split}$$

*where* $\psi(a, t, \alpha) = {}\_aI\_t^{\alpha}\left(\frac{x^{3/2}}{2}\right)\Gamma(\alpha)$*.*

**Proof.** Let us first consider the convergence of the integral for the sake of completeness.

$$\_aI\_t^{\mathfrak{a}}\left(\frac{e^{\mathfrak{x}}+\arctan(\mathfrak{x})}{\sqrt{\mathfrak{x}}}\right) = \int\_a^t (t-\mathfrak{x})^{\mathfrak{a}-1} \frac{\arctan(\mathfrak{x}) + e^{\mathfrak{x}}}{\sqrt{\mathfrak{x}}} d\mathfrak{x}.$$

As we can see, the quantity that can cause a problem is (*t* − *x*)<sup>*α*−1</sup> when *x* → *t*. The thing to note here is that *α* > 0, so *α* − 1 > −1, which means that (*t* − *x*)<sup>*α*−1</sup> is integrable near *x* = *t*; therefore, there is no division by zero. Another situation to note is that when *a* = 0, the factor 1/√*x* in the integrand can be integrated around zero.

Similar discussions in the other corollaries lead to the same conclusion; therefore, they are omitted.

Now we are certain about applying the formula. Applying the Riemann–Liouville integral transform:

$$\_aI\_t^{\alpha} f(t) = \frac{1}{\Gamma(\alpha)} \int\_a^t (t-x)^{\alpha-1} f(x) dx$$

on both sides of the inequality we derived in the last theorem:

$$\frac{1}{\sqrt{x}} + \frac{x^{\frac{3}{2}}}{2} + \sqrt{x} < \frac{e^{x} + \arctan(x)}{\sqrt{x}} < e + \frac{1}{4}(\pi - 10) + \frac{1}{\sqrt{x}} + \frac{x^{\frac{3}{2}}}{2} + \sqrt{x}$$

and we obtain the following inequality.

**Corollary 2.** *The derived inequality can be used to approximate the solution of a first-order nonlinear ordinary differential equation, with the right-hand side mapping* (0, 1) *into* (0, 1) *and y*(*t*<sub>0</sub>) *given. Consider the following.*

$$y' = \frac{\sqrt{y}\,x}{e^y + \arctan(y)}.$$


*Separating the variables and integrating from t*<sup>0</sup> *to t, we obtain the following.*

$$\int\_{t\_0}^{t} \frac{e^y + \arctan(y)}{\sqrt{y}} dy = \int\_{t\_0}^{t} x\, dx.$$

*Using the inequality and solving the integral, which is then in terms of polynomials, we obtain the following solution.*

The following inequality provides an estimate for cosh(*x*) *x* .

**Theorem 2.** *The following bounds hold for x* ∈ (0, 1)*,*

$$
\frac{1}{x} + \frac{x}{2} + \frac{x^3}{24} + \frac{x^5}{720} < \frac{\cosh(x)}{x} < \cosh(1) - \frac{1111}{720} + \frac{1}{x} + \frac{x}{2} + \frac{x^3}{24} + \frac{x^5}{720}.
$$

**Proof.** Let us consider the following function.

$$g(\mathbf{x}) = \frac{\cosh(\mathbf{x}) - 1 - \frac{\mathbf{x}^2}{2} - \frac{\mathbf{x}^4}{24} - \frac{\mathbf{x}^6}{720}}{\mathbf{x}} = \frac{h\_1(\mathbf{x})}{h\_2(\mathbf{x})}$$

where *h*<sub>1</sub>(*x*) = cosh(*x*) − 1 − *x*²/2 − *x*⁴/24 − *x*⁶/720 and *h*<sub>2</sub>(*x*) = *x*. Taking its derivative, we obtain the following.

$$\frac{h\_1'(x)}{h\_2'(x)} = \sinh(x) - x - \frac{x^3}{6} - \frac{x^5}{120}$$

Now we realize that the terms with a negative sign are exactly the terms in the sinh(*x*) Taylor expansion

$$\sinh(\mathbf{x}) = \sum\_{n=0}^{+\infty} \frac{\mathbf{x}^{2n+1}}{(2n+1)!}.$$

$$\frac{h\_1'(\mathbf{x})}{h\_2'(\mathbf{x})} = \sum\_{n=3}^{+\infty} \frac{\mathbf{x}^{2n+1}}{(2n+1)!}$$

This is obviously positive. Next, we need to show it is increasing. We take the following.

$$G(\mathfrak{x}) = \frac{h\_1'(\mathfrak{x})}{h\_2'(\mathfrak{x})} = \sum\_{n=3}^{+\infty} \frac{\mathfrak{x}^{2n+1}}{(2n+1)!}$$

Taking a derivative, we obtain the following:

$$G'(\mathfrak{x}) = \left(\frac{h\_1'(\mathfrak{x})}{h\_2'(\mathfrak{x})}\right)' = \sum\_{n=3}^{+\infty} (2n+1) \frac{\mathfrak{x}^{2n}}{(2n+1)!} > 0$$

which means that *G*(*x*) is increasing. Therefore, according to Lemma 1, the function *g*(*x*) = (*h*<sub>1</sub>(*x*) − *h*<sub>1</sub>(0))/(*h*<sub>2</sub>(*x*) − *h*<sub>2</sub>(0)) is increasing. However, since we chose *h*<sub>1</sub>, *h*<sub>2</sub> to be zero at *x* = 0, we obtain that *g*(*x*) itself is increasing. Therefore, the following inequality holds.

$$g(0) < \frac{\cosh(x) - 1 - \frac{x^2}{2} - \frac{x^4}{24} - \frac{x^6}{720}}{x} < g(1),$$

This provides us with the following:

$$0 < \frac{\cosh(x) - 1 - \frac{x^2}{2} - \frac{x^4}{24} - \frac{x^6}{720}}{x} < \cosh(1) - \frac{1111}{720}$$

which when rearranged provides us with the desired inequality.
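A numerical spot check of these bounds (ours, not part of the paper). The constant *C* = cosh(1) − 1111/720 ≈ 2.5 × 10⁻⁵ is very small, so the two bounds are extremely tight; we sample away from 0 so the margins stay above floating-point rounding error.

```python
import math

# Check both inequalities of Theorem 2 at x = 0.2, 0.3, ..., 0.9; "worst"
# is the smallest of the two margins and must be positive.
C = math.cosh(1) - 1111 / 720
worst = float("inf")
for i in range(2, 10):
    x = i / 10
    low = 1 / x + x / 2 + x ** 3 / 24 + x ** 5 / 720
    worst = min(worst, math.cosh(x) / x - low, C + low - math.cosh(x) / x)
print(worst)
```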

The following Corollary shows how our inequality can be paired up with the fractional integral to produce an effective inequality for *<sup>a</sup> I α t* ( cosh(*x*) *x* ).

**Corollary 3.** *The following inequality holds for* 0 < *a* < *t and* ℜ(*α*) > 0*, t* ∈ (0, 1)*:*

$$-\frac{1}{\Gamma(\alpha)} \left( \psi(a, t, \alpha) + \zeta(a, t, \alpha) \right) < {}\_a I\_t^{\alpha} \left( \frac{\cosh(x)}{x} \right) <$$

$$\frac{1}{\Gamma(\alpha)} \left( \frac{(720 \cosh(1) - 1111)(t - a)^{\alpha}}{720\,\alpha} + \psi(a, t, \alpha) + \zeta(a, t, \alpha) \right)$$

*where*

$$\psi(a,t,\alpha) = \frac{(t-a)^{\alpha}(\alpha a+t)}{2\alpha(\alpha+1)}$$

$$+\frac{(t-a)^{\alpha}\left(\alpha(\alpha+1)(\alpha+2)a^{3}+3\alpha(\alpha+1)a^{2}t+6\alpha a t^{2}+6t^{3}\right)}{24\alpha(\alpha+1)(\alpha+2)(\alpha+3)} + {}\_{a}I\_{t}^{\alpha}\left(\frac{x^{5}}{720}\right)\Gamma(\alpha)$$

$$\zeta(a,t,\alpha) = t^{\alpha-2}\left(a(\alpha-1)\,{}\_{3}F\_{2}\left(1,1,2-\alpha;2,2;\frac{a}{t}\right) - t\left(\log(a) + \psi^{(0)}(\alpha) - \log(t) + \gamma\right)\right)$$

**Proof.** Applying the Riemann–Liouville integral transform on both sides of the inequality we derived in the last Theorem and evaluating the left and right hand side, we arrive at the following inequality.

**Corollary 4.** *Using reasoning similar to Corollary 2, we can form the following differential equation, with f* : (0, 1) → (0, 1) *and y*(*t*<sub>0</sub>) *given.*

$$y' = \frac{yx}{\cosh(y)}.$$

*Separating the variables and using the inequality, we can find the following solution. We omit the calculations for obvious reasons.*

A similar construction of Corollaries for other Theorems can be performed, and we omit them due to obvious reasons.

The following Theorem sharpens Jordan's inequality.

**Theorem 3.** *The following bounds hold for x* <sup>∈</sup> (0, *<sup>π</sup>* 2 )*.*

$$1 - \frac{x^2}{3!} + \frac{x^4}{5!} - \frac{x^6}{7!} + \frac{x^8}{9!} - \frac{x^{10}}{11!} < \frac{\sin(x)}{x} <$$

$$1 - \frac{x^2}{3!} + \frac{x^4}{5!} - \frac{x^6}{7!} + \frac{x^8}{9!} - \frac{x^{10}}{11!} - 1 + \frac{2}{\pi} + \frac{\pi^2}{24} - \frac{\pi^4}{1920} + \frac{\pi^6}{322560} - \frac{\pi^8}{92897280} + \frac{\pi^{10}}{40874803200}.$$

**Proof.** Let us consider the following function.

$$g(x) = \frac{\sin(x) - x + \frac{x^3}{3!} - \frac{x^5}{5!} + \frac{x^7}{7!} - \frac{x^9}{9!} + \frac{x^{11}}{11!}}{x} = \frac{h\_1(x)}{h\_2(x)}$$

Differentiating *h*<sup>1</sup> and *h*2, respectively, we obtain the following.

$$\frac{h\_1'(\mathbf{x})}{h\_2'(\mathbf{x})} = \cos(\mathbf{x}) - 1 + \frac{\mathbf{x}^2}{2!} - \frac{\mathbf{x}^4}{4!} + \frac{\mathbf{x}^6}{6!} - \frac{\mathbf{x}^8}{8!} + \frac{\mathbf{x}^{10}}{10!}$$

Expanding cos(*x*) into a Taylor series:

$$\cos(x) = \sum\_{k=0}^{+\infty} \frac{(-1)^k x^{2k}}{(2k)!}$$

we realize that the terms following cos(*x*) cancel exactly the first six terms (*k* = 0, . . . , 5) of the cos(*x*) expansion, which leaves us with the following:

$$\frac{h\_1'(x)}{h\_2'(x)} = \sum\_{k=6}^{+\infty} \frac{(-1)^k x^{2k}}{(2k)!}$$


which is positive, since it is the remainder of the alternating Taylor expansion of cos(*x*): on (0, *π*/2) its terms decrease in absolute value and the first term is positive. Now, we need an increasing form. Taking the following:

$$G(\mathfrak{x}) = \frac{h\_1'(\mathfrak{x})}{h\_2'(\mathfrak{x})} = \sum\_{k=6}^{+\infty} \frac{(-1)^k \mathfrak{x}^{2k}}{(2k)!}.$$

and differentiating *G*(*x*), we obtain the following:

$$G'(x) = \left(\frac{h\_1'(x)}{h\_2'(x)}\right)' = \sum\_{k=6}^{+\infty} 2k \frac{(-1)^k x^{2k-1}}{(2k)!} > 0,$$

which means that *G*(*x*) is increasing. Therefore, *h*′<sub>1</sub>(*x*)/*h*′<sub>2</sub>(*x*) is increasing; hence, (*h*<sub>1</sub>(*x*) − *h*<sub>1</sub>(0))/(*h*<sub>2</sub>(*x*) − *h*<sub>2</sub>(0)) is increasing, and we chose *h*<sub>1</sub>(*x*), *h*<sub>2</sub>(*x*) such that *h*<sub>1,2</sub>(0) = 0. Therefore, since *g*(*x*) is an increasing function, the following relation holds:

$$g(0) < g(\mathfrak{x}) < g\left(\frac{\pi}{2}\right).$$

which is evaluated at the following.

$$0 < \frac{\sin(x) - x + \frac{x^3}{3!} - \frac{x^5}{5!} + \frac{x^7}{7!} - \frac{x^9}{9!} + \frac{x^{11}}{11!}}{x} <$$

$$-1 + \frac{2}{\pi} + \frac{\pi^2}{24} - \frac{\pi^4}{1920} + \frac{\pi^6}{322560} - \frac{\pi^8}{92897280} + \frac{\pi^{10}}{40874803200}$$

When rearranged, it provides us with the desired inequality.

In the following, we provide a corollary of the previously improved inequality.

**Corollary 5.** *The following inequality holds.*

$$1.37076216382 < \int\_0^{\frac{\pi}{2}} \frac{\sin(x)}{x} dx < 1.37076222008$$

**Proof.** Integrating the inequality derived in the last Theorem from 0 to *π*/2 term by term, we obtain the stated bounds.
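The two decimal bounds can be cross-checked numerically; the sketch below (an illustration of ours, not part of the paper) evaluates the sine integral Si(*π*/2) = ∫₀^{π/2} sin(*x*)/*x* d*x* by term-by-term integration of the Taylor series of sin(*x*)/*x*.

```python
import math

def si(t, terms=40):
    """Si(t) via term-by-term integration of the series for sin(x)/x."""
    return sum((-1) ** k * t ** (2 * k + 1)
               / ((2 * k + 1) * math.factorial(2 * k + 1))
               for k in range(terms))

value = si(math.pi / 2)
print(value)  # lies strictly between 1.37076216382 and 1.37076222008
```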

The next Theorem provides an estimate on the sinh(*x*) *x* inequality.

**Theorem 4.** *The following bounds hold for x* ∈ (0, 1)*.*

$$1 + \frac{x^2}{3!} + \frac{x^4}{5!} + \frac{x^6}{7!} < \frac{\sinh(x)}{x} < 1 + \frac{x^2}{3!} + \frac{x^4}{5!} + \frac{x^6}{7!} + \sinh(1) - \frac{5923}{5040}.$$

**Proof.** Let us consider the following function.

$$g(\mathbf{x}) = \frac{\sinh(\mathbf{x}) - \mathbf{x} - \frac{\mathbf{x}^3}{3!} - \frac{\mathbf{x}^5}{5!} - \frac{\mathbf{x}^7}{7!}}{\mathbf{x}} = \frac{h\_1(\mathbf{x})}{h\_2(\mathbf{x})}$$

Taking the derivatives of *h*1(*x*) and *h*2(*x*), we obtain the following.

$$\frac{h\_1'(x)}{h\_2'(x)} = \cosh(x) - 1 - \frac{x^2}{2!} - \frac{x^4}{4!} - \frac{x^6}{6!}.$$

Now we expand cosh(*x*) into its Taylor series and observe that the subtracted terms are exactly the first four terms of that series. Therefore, we obtain the following:

$$\frac{h\_1'(x)}{h\_2'(x)} = \sum\_{n=4}^{+\infty} \frac{x^{2n}}{(2n)!},$$

which is positive. We must also show that it is increasing. Setting

$$G(x) = \frac{h\_1'(x)}{h\_2'(x)} = \sum\_{n=4}^{+\infty} \frac{x^{2n}}{(2n)!}$$

and taking a derivative, we obtain the following:

$$G'(x) = \left(\frac{h\_1'(x)}{h\_2'(x)}\right)' = \sum\_{n=4}^{+\infty} 2n \frac{x^{2n-1}}{(2n)!},$$

which is positive; therefore, *G*(*x*) is increasing. From the Lemma, the function $\frac{h_1(x)-h_1(0)}{h_2(x)-h_2(0)}$ is increasing as well. However, since we chose the functions *h*1, *h*2 to be zero at *x* = 0, *g*(*x*) is increasing. Therefore, we obtain the following inequality.

$$g(0) < g(x) < g(1).$$

When the expression is solved for sinh(*x*)/*x*, we obtain the desired inequality.
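A spot-check of the Theorem's bounds on a grid in (0, 1) (illustrative only; the names `lower` and `C` are our own):

```python
import math

# Lower bound of the Theorem, and the constant added to it on the right.
def lower(x):
    return 1 + x ** 2 / 6 + x ** 4 / 120 + x ** 6 / 5040

C = math.sinh(1) - 5923 / 5040  # = 1/9! + 1/11! + ... > 0

for i in range(10, 100):  # x = 0.10, 0.11, ..., 0.99
    x = i / 100
    assert lower(x) < math.sinh(x) / x < lower(x) + C
```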

The following Corollary illustrates how the improved bounds can be used in estimating the integral.

**Corollary 6.** *The following bounds for the integral hold.*

$$1.05725056689 < \int\_0^1 \frac{\sinh(x)}{x} dx < 1.05725334784.$$

**Proof.** Integrating the inequality of the previous Theorem from 0 to 1 term by term, we obtain the desired bounds.
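These bounds, too, can be reproduced numerically; the series below for the integral of sinh(*x*)/*x* is a standard identity used for illustration:

```python
import math

# ∫_0^1 sinh(x)/x dx = Σ_{n>=0} 1 / ((2n+1)(2n+1)!)
val = sum(1 / ((2 * n + 1) * math.factorial(2 * n + 1)) for n in range(20))
assert 1.05725056689 < val < 1.05725334784  # the Corollary's bounds
```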

#### **3. Conclusions**


**Author Contributions:** Conceptualization, V.Š.Č. and S.R. (Stojan Radenović); methodology, V.Š.Č., S.R. (Stojan Radenović) and E.Ç.; formal analysis, V.Š.Č. and S.R. (Stojan Radenović); writing—original draft preparation, V.Š.Č. and S.R. (Stojan Radenović); supervision, S.R. (Stojan Radenović), S.R. (Slobodan Radojević), V.Š.Č. and E.Ç. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research received no external funding.

**Data Availability Statement:** Not applicable.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


## *Article* **Hermite–Hadamard Type Inclusions for Interval-Valued Coordinated Preinvex Functions**

**Kin Keung Lai <sup>1,</sup>\*, Shashi Kant Mishra <sup>2</sup>, Jaya Bisht <sup>2</sup> and Mohd Hassan <sup>2</sup>**


**Abstract:** The connection between generalized convexity and symmetry has been studied by many authors in recent years. Due to this strong connection, generalized convexity and symmetry have arisen as a new topic in the subject of inequalities. In this paper, we introduce the concept of interval-valued preinvex functions on the coordinates in a rectangle from the plane and prove Hermite–Hadamard type inclusions for interval-valued preinvex functions on coordinates. Further, we establish Hermite–Hadamard type inclusions for the product of two interval-valued coordinated preinvex functions. These results are motivated by the symmetric results obtained in the recent article by Kara et al. in 2021 on weighted Hermite–Hadamard type inclusions for products of coordinated convex interval-valued functions. Our established results generalize and extend some recent results obtained in the existing literature. Moreover, we provide suitable examples in support of our theoretical results.

**Keywords:** invex set; coordinated preinvex functions; Hermite–Hadamard inequalities; interval-valued functions

#### **1. Introduction**

In recent years, many researchers have made efforts to generalize and extend the classical convexity in different directions and discovered new integral inequalities for this generalized and extended convexity; see, for instance, [1–6]. In 1981, Hanson [7] introduced a useful generalization of convex functions known as invex functions. Craven and Glover [8] showed that the class of invex functions is equivalent to the class of functions whose stationary points are global minima. The concept of preinvex functions was introduced by Ben-Israel and Mond [9]. It is well known that preinvex functions are nonconvex functions. This concept inspired a large number of research papers dealing with the analysis and applications of this newly defined nonconvex function in optimization theory and related fields; see [10–12].

Noor [13] obtained Hermite–Hadamard (H-H) inequality for the preinvex functions, which is a generalization of the classical H-H inequality. Dragomir [14] defined the concept of classical convex functions on coordinates and demonstrated H-H type inequalities for these functions. Further, Latif and Dragomir [15] defined preinvex functions on the coordinates and established some H-H type inequalities for functions whose second-order partial derivatives in absolute value are preinvex on the coordinates. Matłoka [16] introduced the class of (*h*1, *h*2)-preinvex functions on the coordinates and proved H-H and Fejér type inequalities using the symmetricity of the positive function. For more details on preinvex functions and related inequalities, see [17–21].

The concept of interval analysis was first considered by Moore [22]. In 1979, Moore [23] studied the integration of interval-valued functions and investigated interval methods for computing upper and lower bounds on exact values of integrals of interval-valued functions. Bhurjee and Panda [24] presented a general multi-objective fractional programming problem whose parameters in the objective functions and constraints are intervals and

**Citation:** Lai, K.K.; Mishra, S.K.; Bisht, J.; Hassan, M. Hermite– Hadamard Type Inclusions for Interval-Valued Coordinated Preinvex Functions. *Symmetry* **2022**, *14*, 771. https://doi.org/10.3390/ sym14040771

Academic Editors: Octav Olteanu and Savin Treanta

Received: 18 March 2022 Accepted: 2 April 2022 Published: 8 April 2022

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

developed a methodology to determine its efficient solutions. Zhang et al. [25] extended the concepts of invexity and preinvexity to interval-valued functions and derived KKT optimality conditions for LU-preinvex and invex optimization problems with an interval-valued objective function. Zhao et al. [26] introduced the interval double integral for interval-valued functions and gave Chebyshev type inequalities for interval-valued functions. Practical applications of interval analysis include areas of economics, chemical engineering, beam physics, control circuitry design, global optimization, robotics, error analysis, signal processing, and computer graphics (see [27–31]).

Budak et al. [32] defined interval-valued right-sided Riemann–Liouville fractional integral and derived H-H type inequalities for interval-valued Riemann–Liouville fractional integrals. Sharma et al. [33] introduced interval-valued preinvex function and established fractional H-H type inequalities for these functions. Recently, Zhao et al. [34,35] proposed the notion of interval-valued convex functions on coordinates and established H-H type inequalities for these interval-valued coordinated convex functions. Further, Budak et al. [36] described a new concept of interval-valued fractional integrals on coordinates and investigated H-H type inequalities for interval-valued coordinated convex functions using these fractional integrals. Kara et al. [37] proved H–H–Fejér type inclusions for the product of two interval-valued convex functions on coordinates. For more details of the relationships between the different forms of interval-valued functions and integral inequalities, we refer to [38–43] and references therein.

The work in this research paper is mainly motivated by Zhao et al. [34] and Sharma et al. [33]. We propose the notion of interval-valued preinvex functions on coordinates, which is a generalization of interval-valued convex functions on coordinates, and prove new H-H type inclusions for these interval-valued coordinated preinvex functions. We also present H-H type inclusions for the product of two interval-valued preinvex functions on coordinates. Moreover, we illustrate our results with the help of some suitable examples. The results established in this paper include the previously known results for interval-valued convex functions on coordinates as a special case. For future directions, we can investigate H-H type inclusions for interval-valued coordinated preinvex functions using interval-valued fractional integrals on coordinates.

The organization of this paper is as follows: In Section 2, we present some necessary preliminaries. In Section 3, we define preinvex interval-valued functions on coordinates and investigate H-H type inclusions for coordinated preinvex interval-valued functions. Further, we present H-H type inclusions for the product of two interval-valued preinvex functions on coordinates. Some special cases of these results are also investigated in Section 3. In Section 4, we discuss the conclusions and future directions of this study.

#### **2. Preliminaries**

In this section, we recall some notations, basic definitions, and related results that are necessary for this paper.

Let $\mathbb{R}_I$, $\mathbb{R}_I^+$, and $\mathbb{R}_I^-$ be the set of all closed intervals of R, the set of all positive closed intervals of R, and the set of all negative closed intervals of R, respectively. If *Λ* ∈ $\mathbb{R}_I$, then the interval *Λ* is defined by:

$$\Lambda = [\underline{\Lambda}, \overline{\Lambda}] = \{ \mu \in \mathbb{R} : \underline{\Lambda} \le \mu \le \overline{\Lambda} \}, \ \underline{\Lambda}, \overline{\Lambda} \in \mathbb{R}.$$

The interval $\Lambda = [\underline{\Lambda}, \overline{\Lambda}]$ is called degenerate if $\underline{\Lambda} = \overline{\Lambda}$; positive if $\underline{\Lambda} > 0$; and negative if $\overline{\Lambda} < 0$.

Let $\Lambda_1 = [\underline{\Lambda}_1, \overline{\Lambda}_1]$, $\Lambda_2 = [\underline{\Lambda}_2, \overline{\Lambda}_2] \in \mathbb{R}_I$. We say $\Lambda_1 \subseteq \Lambda_2$ (or $\Lambda_2 \supseteq \Lambda_1$) if and only if $\underline{\Lambda}_2 \le \underline{\Lambda}_1$ and $\overline{\Lambda}_1 \le \overline{\Lambda}_2$.

The Hausdorff distance between $\Lambda_1 = [\underline{\Lambda}_1, \overline{\Lambda}_1]$ and $\Lambda_2 = [\underline{\Lambda}_2, \overline{\Lambda}_2]$ is defined as

$$d(\Lambda\_1, \Lambda\_2) = d([\underline{\Lambda}\_1, \overline{\Lambda}\_1], [\underline{\Lambda}\_2, \overline{\Lambda}\_2]) = \max\{|\underline{\Lambda}\_1 - \underline{\Lambda}\_2|, |\overline{\Lambda}\_1 - \overline{\Lambda}\_2|\}.$$

For more properties and notations of intervals, we refer to [23,28].
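For readers who want to experiment, the two notions above translate directly into code; this is a minimal sketch with intervals as `(lower, upper)` pairs, and the function names are our own, not from [23,28]:

```python
def hausdorff(lam1, lam2):
    # d(Λ1, Λ2) = max(|Λ1_low - Λ2_low|, |Λ1_up - Λ2_up|)
    return max(abs(lam1[0] - lam2[0]), abs(lam1[1] - lam2[1]))

def contains(lam1, lam2):
    # Λ1 ⊇ Λ2  iff  Λ1_low <= Λ2_low and Λ2_up <= Λ1_up
    return lam1[0] <= lam2[0] and lam2[1] <= lam1[1]
```

For example, `hausdorff((1, 4), (2, 3))` is 1, and `contains((1, 4), (2, 3))` is `True` while `contains((2, 3), (1, 4))` is `False`.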

**Definition 1** ([23])**.** *A function* Ω *is called an interval-valued function on* [*p*, *q*] *if it assigns a nonempty interval to each u* ∈ [*p*, *q*] *and*

$$\Omega(u) = [\underline{\Omega}(u), \overline{\Omega}(u)],$$

*where* Ω *and* Ω *are real-valued functions.*

A partition *P*1 of [*p*, *q*] is a set of numbers $\{\omega_{i-1}, \nu_i, \omega_i\}_{i=1}^m$ such that

$$P\_1: p = \omega\_0 < \omega\_1 < \dots < \omega\_m = q$$

with $\omega_{i-1} \le \nu_i \le \omega_i$ for all *i* = 1, 2, . . . , *m*. Partition *P*1 is said to be *δ*-fine if $\Delta\omega_i < \delta$ for all *i*, where $\Delta\omega_i = \omega_i - \omega_{i-1}$. Let the set of all *δ*-fine partitions of [*p*, *q*] be denoted by P(*δ*, [*p*, *q*]). If $\{\omega_{i-1}, \nu_i, \omega_i\}_{i=1}^m$ is a *δ*-fine partition *P*1 of [*p*, *q*] and $\{\sigma_{j-1}, \mu_j, \sigma_j\}_{j=1}^n$ is a *δ*-fine partition *P*2 of [*r*,*s*], then the rectangles

$$\Delta\_{i,j} = [\omega\_{i-1}, \omega\_i] \times [\sigma\_{j-1}, \sigma\_j]$$

partition the rectangle ∆ = [*p*, *q*] × [*r*,*s*], with the points $(\nu_i, \mu_j)$ inside the rectangles $[\omega_{i-1}, \omega_i] \times [\sigma_{j-1}, \sigma_j]$. Furthermore, we denote by P(*δ*, ∆) the set of all *δ*-fine partitions *P*1 × *P*2 of ∆, where *P*1 ∈ P(*δ*, [*p*, *q*]) and *P*2 ∈ P(*δ*, [*r*,*s*]). Let $\Delta A_{i,j}$ be the area of the rectangle $\Delta_{i,j}$. Choosing an arbitrary $(\nu_i, \mu_j)$ from each rectangle $\Delta_{i,j}$, where 1 ≤ *i* ≤ *m*, 1 ≤ *j* ≤ *n*, we get

$$S(\Omega, P, \delta, \Delta) = \sum\_{i=1}^{m} \sum\_{j=1}^{n} \Omega(\nu\_i, \mu\_j) \Delta A\_{i,j},$$

where Ω : ∆ → $\mathbb{R}_I$; *S*(Ω, *P*, *δ*, ∆) denotes the integral sum of Ω corresponding to *P* ∈ P(*δ*, ∆).

**Definition 2** ([26])**.** *A function* Ω : [*p*, *q*] → $\mathbb{R}_I$ *is called interval Riemann integrable ((IR)-integrable) on* [*p*, *q*] *with* (*IR*)*-integral* $I = (IR)\int_p^q \Omega(\lambda)d\lambda$ *if for each ε* > 0, *there exists δ* > 0 *such that*

$$d(S(\Omega, P, \delta, [p, q]), I) < \epsilon$$

*for each P* ∈ P(*δ*, [*p*, *q*])*.*

The collection of all (*IR*)-integrable functions on [*p*, *q*] is denoted by *IR*([*p*,*q*]).

**Definition 3** ([26])**.** *A function* Ω : ∆ → $\mathbb{R}_I$ *is called interval double integrable ((ID)-integrable) on* ∆ *with* (*ID*)*-integral* $I = (ID)\int\int_{\Delta} \Omega(u, v)dA$ *if for each ε* > 0, *there exists δ* > 0 *such that*

$$d(S(\Omega, P, \delta, \Delta), I) < \epsilon$$

*for each P* ∈ P(*δ*, ∆)*.*

The collection of all (*ID*)-integrable functions on ∆ is denoted by *ID*(∆).

**Theorem 1** ([28])**.** *Let* Ω : [*p*, *q*] → *R<sup>I</sup> be an interval-valued function such that* Ω = [Ω, Ω]. *Then,* Ω *is* (*IR*)*-integrable on* [*p*, *q*] *if and only if* $\underline{\Omega}$ *and* $\overline{\Omega}$ *are R-integrable on* [*p*, *q*] *and*

$$(IR)\int\_p^q \Omega(u)du = \left[ (\mathcal{R})\int\_p^q \underline{\Omega}(u)du, (\mathcal{R})\int\_p^q \overline{\Omega}(u)du \right].$$
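Theorem 1 is what makes interval integrals computable in practice: integrate the two endpoint functions separately. A minimal sketch under our own naming, with a plain midpoint rule standing in for the Riemann integral:

```python
def interval_integral(omega_low, omega_up, p, q, n=10_000):
    # (IR)∫ Ω = [ (R)∫ Ω_low, (R)∫ Ω_up ], each approximated by a midpoint rule
    h = (q - p) / n
    lo = h * sum(omega_low(p + (i + 0.5) * h) for i in range(n))
    up = h * sum(omega_up(p + (i + 0.5) * h) for i in range(n))
    return (lo, up)

# Example: Ω(u) = [u^2, u^2 + 1] on [0, 1] gives approximately [1/3, 4/3]
lo, up = interval_integral(lambda u: u * u, lambda u: u * u + 1, 0.0, 1.0)
```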

**Theorem 2** ([26])**.** *Let* ∆ = [*p*, *q*] × [*r*,*s*]*. If* Ω : ∆ → *R<sup>I</sup> is an interval-valued function such that* Ω = [Ω, Ω] *and* Ω ∈ *ID*(∆)*, then we have*

$$(ID)\int\int\_{\Delta} \Omega(u,v)dA = (ID)\int\_{p}^{q} (ID)\int\_{r}^{s} \Omega(u,v)dvdu.$$

**Definition 4** ([12])**.** *The set X* ⊆ R*<sup>n</sup> is said to be invex with respect to the vector function η* : R*<sup>n</sup>* × R*<sup>n</sup>* → R*<sup>n</sup>, if*

$$
v + \lambda \eta(u, v) \in \mathcal{X}, \quad \text{for all } u, v \in \mathcal{X}, \ \lambda \in [0, 1].
$$

**Remark 1.** *Every convex set is invex with respect to η*(*u*, *v*) = *u* − *v but not conversely.*

**Definition 5** ([12])**.** *The function* Ω *on the invex set X is said to be preinvex with respect to η, if*

$$
\Omega(v + \lambda \eta(u, v)) \le (1 - \lambda)\Omega(v) + \lambda \Omega(u), \text{ for all } u, v \in \mathcal{X}, \ \lambda \in [0, 1].
$$

**Remark 2.** *Every convex function is preinvex with respect to η*(*u*, *v*) = *u* − *v but not conversely.*

**Condition C** [10] Let *X* ⊆ R be an invex set with respect to *η*(., .). Then, function *η* satisfies Condition C if for any *λ* ∈ [0, 1] and any *u*, *v* ∈ *X*,

$$
\eta(v, v + \lambda \eta(u, v)) = -\lambda \eta(u, v),
$$

$$
\eta(u, v + \lambda \eta(u, v)) = (1 - \lambda)\eta(u, v).
$$

From Condition C, for all *λ*1, *λ*<sup>2</sup> ∈ [0, 1] and *u*, *v* ∈ *X*, we have

$$
\eta(v + \lambda\_2 \eta(u, v), v + \lambda\_1 \eta(u, v)) = (\lambda\_2 - \lambda\_1)\eta(u, v).
$$

**Theorem 3** ([13])**.** *Let* Ω : [*p*, *p* + *η*(*q*, *p*)] → (0, ∞) *be a preinvex function on the interval of the real numbers X o (the interior of X) and p*, *q* ∈ *X <sup>o</sup> with p* < *p* + *η*(*q*, *p*)*. Then the following inequality holds:*

$$
\Omega\left(\frac{2p+\eta(q,p)}{2}\right) \le \frac{1}{\eta(q,p)} \int\_p^{p+\eta(q,p)} \Omega(u) du \le \frac{\Omega(p)+\Omega(q)}{2}.
$$

**Definition 6** ([33])**.** *If X* ⊆ R *is an invex set with respect to η*(., .) *and* $\Omega(u) = [\underline{\Omega}(u), \overline{\Omega}(u)]$ *is an interval-valued function on X, then* Ω *is a preinvex interval-valued function on X with respect to η*(., .) *if*

$$
\Omega(v + \lambda \eta(u, v)) \supseteq \lambda \Omega(u) + (1 - \lambda)\Omega(v), \text{ for all } u, v \in X, \ \lambda \in [0, 1].
$$

Let *X*<sup>1</sup> and *X*<sup>2</sup> be two nonempty subsets of R*<sup>n</sup>* , *<sup>η</sup>*<sup>1</sup> : *<sup>X</sup>*<sup>1</sup> <sup>×</sup> *<sup>X</sup>*<sup>1</sup> <sup>→</sup> <sup>R</sup>*<sup>n</sup>* and *<sup>η</sup>*<sup>2</sup> : *<sup>X</sup>*<sup>2</sup> <sup>×</sup> *<sup>X</sup>*<sup>2</sup> <sup>→</sup> R*n* .

**Definition 7** ([16])**.** *Let* (*u*, *v*) ∈ *X*<sup>1</sup> × *X*2*. The set X*<sup>1</sup> × *X*<sup>2</sup> *is said to be invex at* (*u*, *v*) *with respect to η*<sup>1</sup> *and η*2*, if for each* (*w*, *z*) ∈ *X*<sup>1</sup> × *X*<sup>2</sup> *and λ*1*,λ*2∈ [0, 1]*,*

$$(u + \lambda\_1 \eta\_1(w, u), v + \lambda\_2 \eta\_2(z, v)) \in X\_1 \times X\_2.$$

*X*<sup>1</sup> × *X*<sup>2</sup> is said to be invex set with respect to *η*<sup>1</sup> and *η*<sup>2</sup> if *X*<sup>1</sup> × *X*<sup>2</sup> is invex at each (*w*, *z*) ∈ *X*<sup>1</sup> × *X*2.

**Theorem 4** ([33])**.** *Let X* ⊆ R *be an open invex subset with respect to η* : *X* × *X* → R *and p*, *q* ∈ *X with p* < *p* + *η*(*q*, *p*). *If* Ω : [*p*, *p* + *η*(*q*, *p*)] → R + *I is a preinvex interval-valued function such that* Ω(*λ*) = [Ω(*λ*), Ω(*λ*)]; Ω ∈ *L*[*p*, *p* + *η*(*q*, *p*)] *and η satisfies Condition C and α* > 0, *then*

$$\begin{aligned} \Omega\left(p+\frac{\eta(q,p)}{2}\right) &\supseteq \frac{\Gamma(\alpha+1)}{2\eta^{\alpha}(q,p)}[J\_{p+}^{\alpha}\Omega(p+\eta(q,p))+J\_{(p+\eta(q,p))}^{\alpha}\Omega(p)]\\ &\supseteq \frac{\Omega(p)+\Omega(p+\eta(q,p))}{2} \supseteq \frac{\Omega(p)+\Omega(q)}{2}.\end{aligned}$$

**Corollary 1.** *If α = 1, then Theorem 4 reduces to the following result:*

$$\begin{aligned} \Omega\left(p + \frac{\eta(q,p)}{2}\right) &\supseteq \frac{1}{\eta(q,p)} \int\_p^{p+\eta(q,p)} \Omega(\lambda) d\lambda\\ &\supseteq \frac{\Omega(p) + \Omega(p + \eta(q,p))}{2} \supseteq \frac{\Omega(p) + \Omega(q)}{2} \end{aligned}$$

**Theorem 5** ([33])**.** *Let X* ⊆ R *be an open invex subset with respect to η* : *X* × *X* → R *and p*, *q* ∈ *X with p* < *p* + *η*(*q*, *p*). *If* Ω, *Υ* : [*p*, *p* + *η*(*q*, *p*)] → $\mathbb{R}_I^+$ *are preinvex interval-valued functions such that* Ω(*λ*) = [Ω(*λ*), Ω(*λ*)] *and Υ*(*λ*) = [*Υ*(*λ*),*Υ*(*λ*)]; Ω,*Υ* ∈ *L*[*p*, *p* + *η*(*q*, *p*)] *and η satisfies Condition C and α* > 0, *then*

$$\begin{split} &\frac{\Gamma(\alpha+1)}{2\eta^{\alpha}(q,p)}[J\_{p+}^{\alpha}\Omega(p+\eta(q,p))\Upsilon(p+\eta(q,p))+J\_{(p+\eta(q,p))}^{\alpha}\Omega(p)\Upsilon(p)] \\ &\supseteq \left(\frac{1}{2} - \frac{\alpha}{(\alpha+1)(\alpha+2)}\right)F(p,p+\eta(q,p)) + \frac{\alpha}{(\alpha+1)(\alpha+2)}G(p,p+\eta(q,p)) \end{split} \tag{1}$$

*and*

$$\begin{split} &2\Omega\left(p+\frac{1}{2}\eta(q,p)\right)\Upsilon\left(p+\frac{1}{2}\eta(q,p)\right) \\ &\supseteq \frac{\Gamma(\alpha+1)}{2\eta^{\alpha}(q,p)}[J\_{p+}^{\alpha}\Omega(p+\eta(q,p))\Upsilon(p+\eta(q,p))+J\_{(p+\eta(q,p))}^{\alpha}\Omega(p)\Upsilon(p)] \\ &+\left(\frac{1}{2}-\frac{\alpha}{(\alpha+1)(\alpha+2)}\right)G(p,p+\eta(q,p))+\frac{\alpha}{(\alpha+1)(\alpha+2)}F(p,p+\eta(q,p)), \end{split} \tag{2}$$

*where F*(*p*, *p* + *η*(*q*, *p*)) = Ω(*p*)*Υ*(*p*) + Ω(*p* + *η*(*q*, *p*))*Υ*(*p* + *η*(*q*, *p*)) *and G*(*p*, *p* + *η*(*q*, *p*)) = Ω(*p*)*Υ*(*p* + *η*(*q*, *p*)) + Ω(*p* + *η*(*q*, *p*))*Υ*(*p*).

**Corollary 2.** *If α = 1, then (1) reduces to the following result:*

$$\frac{1}{\eta(q,p)} \int\_p^{p+\eta(q,p)} \Omega(\lambda)\Upsilon(\lambda) d\lambda \supseteq \frac{1}{3} F(p, p+\eta(q,p)) + \frac{1}{6} G(p, p+\eta(q,p)).$$

**Corollary 3.** *If α = 1, then (2) reduces to the following result:*

$$\begin{aligned} &2\Omega\Big(p+\frac{1}{2}\eta(q,p)\Big)\Upsilon\Big(p+\frac{1}{2}\eta(q,p)\Big) \\ \supseteq &\frac{1}{\eta(q,p)}\int\_{p}^{p+\eta(q,p)}\Omega(\lambda)\Upsilon(\lambda)d\lambda+\frac{1}{3}G(p,p+\eta(q,p))+\frac{1}{6}F(p,p+\eta(q,p)).\end{aligned}$$

#### **3. Main Results**

In this section, first, we give the definition of interval-valued coordinated preinvex function.

**Definition 8.** *Let X*<sup>1</sup> × *X*<sup>2</sup> *be an invex set with respect to η*<sup>1</sup> *and η*2*, and let* Ω = [Ω, Ω] *be an interval-valued function defined on X*<sup>1</sup> × *X*2*. The function* Ω *is said to be an interval-valued coordinated preinvex function with respect to η*<sup>1</sup> *and η*<sup>2</sup> *if the partial mappings* Ω*<sup>v</sup>* : *X*<sup>1</sup> → $\mathbb{R}_I^+$, Ω*v*(*w*) = Ω(*w*, *v*) *and* Ω*<sup>u</sup>* : *X*<sup>2</sup> → $\mathbb{R}_I^+$, Ω*u*(*z*) = Ω(*u*, *z*) *are interval-valued preinvex functions with respect to η*<sup>1</sup> *and η*2*, respectively, for all u* ∈ *X*<sup>1</sup> *and v* ∈ *X*2*.*

**Remark 3.** *From the definition of interval-valued coordinated preinvex functions, it follows that if* Ω *is an interval-valued coordinated preinvex function, then*

$$\begin{aligned} \Omega(u + \lambda\_1 \eta\_1(w, u), v + \lambda\_2 \eta\_2(z, v)) &\supseteq (1 - \lambda\_1)(1 - \lambda\_2)\Omega(u, v) + (1 - \lambda\_1)\lambda\_2\Omega(u, z) \\ &+ \lambda\_1(1 - \lambda\_2)\Omega(w, v) + \lambda\_1\lambda\_2\Omega(w, z), \end{aligned}$$

*for all* (*u*, *v*),(*u*, *z*),(*w*, *v*),(*w*, *z*) ∈ *X*<sup>1</sup> × *X*<sup>2</sup> *and λ*1, *λ*<sup>2</sup> ∈ [0, 1]*.*

If *η*1(*w*, *u*) = *w* − *u* and *η*2(*z*, *v*) = *z* − *v*, then the definition of interval-valued coordinated preinvex function reduces to the definition of interval-valued coordinated convex function proposed by Zhao et al. [34].

**Example 1.** *An interval-valued function* Ω : [0, 1] × [1/2, 1] → $\mathbb{R}_I^+$ *defined as* Ω(*u*, *v*) = [*u* + *v*, (2 − *u*)(2 − *v*)] *is an interval-valued coordinated preinvex function with respect to η*1(*w*, *u*) = *w* − *u* − 1 *and η*2(*z*, *v*) = *z* − 2*v for all u*, *w* ∈ [0, 1] *and v*, *z* ∈ [1/2, 1]*.*
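The coordinated-preinvexity inclusion of Remark 3 can be verified for Example 1 on a grid of points. The helper functions below (componentwise interval addition, nonnegative scalar multiplication, and the inclusion test) are our own illustration code:

```python
# Ω(u, v) = [u + v, (2 - u)(2 - v)], η1(w, u) = w - u - 1, η2(z, v) = z - 2v
def omega(u, v):
    return (u + v, (2 - u) * (2 - v))

def scale(c, iv):      # c >= 0 times an interval
    return (c * iv[0], c * iv[1])

def add(iv1, iv2):     # interval addition
    return (iv1[0] + iv2[0], iv1[1] + iv2[1])

def contains(a, b):    # a ⊇ b
    return a[0] <= b[0] and b[1] <= a[1]

def remark3_holds(u, w, v, z, t1, t2):
    lhs = omega(u + t1 * (w - u - 1), v + t2 * (z - 2 * v))
    rhs = add(add(scale((1 - t1) * (1 - t2), omega(u, v)),
                  scale((1 - t1) * t2, omega(u, z))),
              add(scale(t1 * (1 - t2), omega(w, v)),
                  scale(t1 * t2, omega(w, z))))
    return contains(lhs, rhs)

grid = [i / 4 for i in range(5)]
ok = all(remark3_holds(u, w, 0.5 + v / 2, 0.5 + z / 2, t1, t2)
         for u in grid for w in grid for v in grid for z in grid
         for t1 in grid for t2 in grid)
```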

Now, we establish H-H type inclusions for interval-valued preinvex functions on coordinates. In what follows, without any confusion, we will not include the symbol (*R*), (*IR*), or (*ID*) before the integral sign.

**Theorem 6.** *Let X*<sup>1</sup> × *X*<sup>2</sup> *be an invex set with respect to η*<sup>1</sup> *and η*2*. If* Ω : *X*<sup>1</sup> × *X*<sup>2</sup> → R + *I is an interval-valued coordinated preinvex function with respect to η*<sup>1</sup> *and η*<sup>2</sup> *such that* Ω = [Ω, Ω] *and p* < *p* + *η*1(*q*, *p*), *r* < *r* + *η*2(*s*,*r*)*, where p*, *q* ∈ *X*<sup>1</sup> *and r*,*s* ∈ *X*2*. If η*1, *η*<sup>2</sup> *satisfy Condition C, then we have*

$$\begin{aligned} \Omega\left(p+\frac{1}{2}\eta\_1(q,p)\_\prime r+\frac{1}{2}\eta\_2(s,r)\right) &\supseteq \frac{1}{\eta\_1(q,p)\eta\_2(s,r)} \int\_p^{p+\eta\_1(q,p)} \int\_r^{r+\eta\_2(s,r)} \Omega(u,v)dvdu \\ &\supseteq \frac{1}{4}[\Omega(p,r)+\Omega(q,r)+\Omega(p,s)+\Omega(q,s)].\end{aligned}$$

**Proof.** Since Ω is an interval-valued preinvex function on coordinates with respect to *η*<sup>1</sup> and *η*2, we have

$$\begin{split} \Omega(p + \lambda\_1 \eta\_1(q, p), r + \lambda\_2 \eta\_2(s, r)) &\supseteq (1 - \lambda\_1)(1 - \lambda\_2)\Omega(p, r) + (1 - \lambda\_1)\lambda\_2 \Omega(p, s) \\ &+ \lambda\_1 (1 - \lambda\_2) \Omega(q, r) + \lambda\_1 \lambda\_2 \Omega(q, s) . \end{split} \tag{3}$$

Integrating (3) with respect to (*λ*1, *λ*2) over [0, 1] × [0, 1], we get

$$\begin{split} &\int\_{0}^{1} \int\_{0}^{1} \Omega(p+\lambda\_{1}\eta\_{1}(q,p),r+\lambda\_{2}\eta\_{2}(s,r))d\lambda\_{2}d\lambda\_{1} \\ &\supseteq \int\_{0}^{1} \int\_{0}^{1} (1-\lambda\_{1})(1-\lambda\_{2})\Omega(p,r)d\lambda\_{2}d\lambda\_{1} + \int\_{0}^{1} \int\_{0}^{1} (1-\lambda\_{1})\lambda\_{2}\Omega(p,s)d\lambda\_{2}d\lambda\_{1} \\ &\quad + \int\_{0}^{1} \int\_{0}^{1} \lambda\_{1}(1-\lambda\_{2})\Omega(q,r)d\lambda\_{2}d\lambda\_{1} + \int\_{0}^{1} \int\_{0}^{1} \lambda\_{1}\lambda\_{2}\Omega(q,s)d\lambda\_{2}d\lambda\_{1}. \end{split}$$

This implies that

$$\frac{1}{\eta\_1(q,p)\eta\_2(s,r)} \int\_p^{p+\eta\_1(q,p)} \int\_r^{r+\eta\_2(s,r)} \Omega(u,v)dvdu \supseteq \frac{1}{4} [\Omega(p,r) + \Omega(p,s) + \Omega(q,r) + \Omega(q,s)].\tag{4}$$

Using the definition of an interval-valued coordinated preinvex function and Condition C for *η*1, *η*2, we get

$$\begin{aligned} &\Omega\left(p+\frac{1}{2}\eta\_1(q,p),r+\frac{1}{2}\eta\_2(s,r)\right) \\ &= \Omega\Big(p+\lambda\_1\eta\_1(q,p)+\frac{1}{2}\eta\_1(p+(1-\lambda\_1)\eta\_1(q,p),p+\lambda\_1\eta\_1(q,p)), \\ &\qquad r+\lambda\_2\eta\_2(s,r)+\frac{1}{2}\eta\_2(r+(1-\lambda\_2)\eta\_2(s,r),r+\lambda\_2\eta\_2(s,r))\Big) \\ &\supseteq \frac{1}{4}\Big[\Omega(p+\lambda\_1\eta\_1(q,p),r+\lambda\_2\eta\_2(s,r))+\Omega(p+\lambda\_1\eta\_1(q,p),r+(1-\lambda\_2)\eta\_2(s,r)) \\ &\qquad +\Omega(p+(1-\lambda\_1)\eta\_1(q,p),r+\lambda\_2\eta\_2(s,r))+\Omega(p+(1-\lambda\_1)\eta\_1(q,p),r+(1-\lambda\_2)\eta\_2(s,r))\Big] \end{aligned} \tag{5}$$

Thus, integrating (5) with respect to (*λ*1, *λ*2) over [0, 1] × [0, 1], we get

$$\begin{split} &\int\_{0}^{1} \int\_{0}^{1} \Omega \Big( p + \frac{1}{2} \eta\_{1}(q, p), r + \frac{1}{2} \eta\_{2}(s, r) \Big) d\lambda\_{2} d\lambda\_{1} \\ &\supseteq \frac{1}{4} \int\_{0}^{1} \int\_{0}^{1} [\Omega(p + \lambda\_{1} \eta\_{1}(q, p), r + \lambda\_{2} \eta\_{2}(s, r)) + \Omega(p + \lambda\_{1} \eta\_{1}(q, p), r + (1 - \lambda\_{2}) \eta\_{2}(s, r)) \\ &\quad + \Omega(p + (1 - \lambda\_{1}) \eta\_{1}(q, p), r + \lambda\_{2} \eta\_{2}(s, r)) + \Omega(p + (1 - \lambda\_{1}) \eta\_{1}(q, p), r + (1 - \lambda\_{2}) \eta\_{2}(s, r))] d\lambda\_{2} d\lambda\_{1}. \end{split}$$

This implies

$$\Omega\left(p+\frac{1}{2}\eta\_1(q,p),r+\frac{1}{2}\eta\_2(s,r)\right) \supseteq \frac{1}{\eta\_1(q,p)\eta\_2(s,r)} \int\_p^{p+\eta\_1(q,p)} \int\_r^{r+\eta\_2(s,r)} \Omega(u,v)dvdu. \tag{6}$$

From (4) and (6), we get the desired result.

**Theorem 7.** *Let X*<sup>1</sup> × *X*<sup>2</sup> *be an invex set with respect to η*<sup>1</sup> *and η*2*. If* Ω : [*p*, *p* + *η*1(*q*, *p*)] × [*r*,*r* + *η*2(*s*,*r*)] → R + *I is an interval-valued coordinated preinvex function with respect to η*<sup>1</sup> *and η*<sup>2</sup> *such that* Ω = [Ω, Ω] *and p* < *p* + *η*1(*q*, *p*), *r* < *r* + *η*2(*s*,*r*)*, where p*, *q* ∈ *X*<sup>1</sup> *and r*,*s* ∈ *X*2*. If η*1, *η*<sup>2</sup> *satisfy Condition C, then we have*

$$\begin{split} &\frac{1}{\eta\_{1}(q,p)}\int\_{p}^{p+\eta\_{1}(q,p)}\Omega\left(u,r+\frac{1}{2}\eta\_{2}(s,r)\right)du+\frac{1}{\eta\_{2}(s,r)}\int\_{r}^{r+\eta\_{2}(s,r)}\Omega\left(p+\frac{1}{2}\eta\_{1}(q,p),v\right)dv\\ &\supseteq\frac{2}{\eta\_{1}(q,p)\eta\_{2}(s,r)}\int\_{p}^{p+\eta\_{1}(q,p)}\int\_{r}^{r+\eta\_{2}(s,r)}\Omega(u,v)dvdu\\ &\supseteq\frac{1}{2}\Big[\frac{1}{\eta\_{1}(q,p)}\int\_{p}^{p+\eta\_{1}(q,p)}(\Omega(u,r)+\Omega(u,r+\eta\_{2}(s,r)))du\\ &\qquad+\frac{1}{\eta\_{2}(s,r)}\int\_{r}^{r+\eta\_{2}(s,r)}(\Omega(p,v)+\Omega(p+\eta\_{1}(q,p),v))dv\Big]. \end{split} \tag{7}$$

**Proof.** Since Ω is an interval-valued preinvex function on coordinates [*p*, *p* + *η*1(*q*, *p*)] × [*r*,*r* + *η*2(*s*,*r*)], then Ω*<sup>u</sup>* : [*r*,*r* + *η*2(*s*,*r*)] → R + *I* , Ω*u*(*v*) = Ω(*u*, *v*) is an interval-valued preinvex function on [*r*,*r* + *η*2(*s*,*r*)] for all *u* ∈ [*p*, *p* + *η*1(*q*, *p*)]. From Corollary 1, we have

$$
\left(\Omega\_{\mathfrak{u}}\left(r+\frac{1}{2}\eta\_{2}(s,r)\right)\right) \supseteq \frac{1}{\eta\_{2}(s,r)} \int\_{r}^{r+\eta\_{2}(s,r)} \Omega\_{\mathfrak{u}}(v)dv \supseteq \frac{\Omega\_{\mathfrak{u}}(r)+\Omega\_{\mathfrak{u}}(r+\eta\_{2}(s,r))}{2}.
$$

This implies

$$\Omega\left(u, r + \frac{1}{2}\eta\_2(s, r)\right) \supseteq \frac{1}{\eta\_2(s, r)} \int\_r^{r + \eta\_2(s, r)} \Omega(u, v) dv \supseteq \frac{\Omega(u, r) + \Omega(u, r + \eta\_2(s, r))}{2}. \tag{8}$$

Integrating (8) over [*p*, *p* + *η*1(*q*, *p*)] with respect to *u*, then dividing by *η*1(*q*, *p*), we get

$$\begin{split} &\frac{1}{\eta\_{1}(q,p)} \int\_{p}^{p+\eta\_{1}(q,p)} \Omega\left(u,r+\frac{1}{2}\eta\_{2}(s,r)\right) du \\ &\supseteq \frac{1}{\eta\_{1}(q,p)\eta\_{2}(s,r)} \int\_{p}^{p+\eta\_{1}(q,p)} \int\_{r}^{r+\eta\_{2}(s,r)} \Omega(u,v) dv du \\ &\supseteq \frac{1}{2\eta\_{1}(q,p)} \int\_{p}^{p+\eta\_{1}(q,p)} (\Omega(u,r)+\Omega(u,r+\eta\_{2}(s,r))) du. \end{split} \tag{9}$$

Similarly, Ω*<sup>v</sup>* : [*p*, *p* + *η*1(*q*, *p*)] → R + *I* , Ω*v*(*u*) = Ω(*u*, *v*) is an interval-valued preinvex function on [*p*, *p* + *η*1(*q*, *p*)] for all *v* ∈ [*r*,*r* + *η*2(*s*,*r*)]. Then, we have

$$\begin{split} &\frac{1}{\eta\_{2}(s,r)}\int\_{r}^{r+\eta\_{2}(s,r)}\Omega\left(p+\frac{1}{2}\eta\_{1}(q,p),v\right)dv \\ &\supseteq \frac{1}{\eta\_{1}(q,p)\eta\_{2}(s,r)}\int\_{p}^{p+\eta\_{1}(q,p)}\int\_{r}^{r+\eta\_{2}(s,r)}\Omega(u,v)dvdu \\ &\supseteq \frac{1}{2\eta\_{2}(s,r)}\int\_{r}^{r+\eta\_{2}(s,r)}(\Omega(p,v)+\Omega(p+\eta\_{1}(q,p),v))dv. \end{split} \tag{10}$$

By adding (9) and (10), we have

$$\begin{split} &\frac{1}{\eta\_{1}(q,p)}\int\_{p}^{p+\eta\_{1}(q,p)}\Omega\left(u,r+\frac{1}{2}\eta\_{2}(s,r)\right)du+\frac{1}{\eta\_{2}(s,r)}\int\_{r}^{r+\eta\_{2}(s,r)}\Omega\left(p+\frac{1}{2}\eta\_{1}(q,p),v\right)dv\\ &\supseteq\frac{2}{\eta\_{1}(q,p)\eta\_{2}(s,r)}\int\_{p}^{p+\eta\_{1}(q,p)}\int\_{r}^{r+\eta\_{2}(s,r)}\Omega(u,v)dvdu\\ &\supseteq\frac{1}{2}\Big[\frac{1}{\eta\_{1}(q,p)}\int\_{p}^{p+\eta\_{1}(q,p)}(\Omega(u,r)+\Omega(u,r+\eta\_{2}(s,r)))du\\ &\qquad+\frac{1}{\eta\_{2}(s,r)}\int\_{r}^{r+\eta\_{2}(s,r)}(\Omega(p,v)+\Omega(p+\eta\_{1}(q,p),v))dv\Big]. \end{split}$$

This completes the proof.

**Example 2.** *Let* [*p*, *p* + *η*1(*q*, *p*)] = [1/4, 1/2]*,* [*r*,*r* + *η*2(*s*,*r*)] = [1/4, 1/2] *and η*1(*q*, *p*) = *q* − 2*p, η*2(*s*,*r*) = *s* − 2*r. Let* Ω : [1/4, 1/2] × [1/4, 1/2] → $\mathbb{R}_I^+$ *be defined by* Ω(*u*, *v*) = [*uv*, (1 − *u*)(1 − *v*)] *for all u* ∈ [1/4, 1/2] *and v* ∈ [1/4, 1/2]*. Then all assumptions of Theorem 7 are satisfied.*
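The chain of inclusions (7) can be checked numerically for Example 2 (here *p* = *r* = 1/4 and *η*1(*q*, *p*) = *η*2(*s*,*r*) = 1/4). Since both endpoint functions are bilinear, a midpoint rule is exact up to rounding; the small tolerance below absorbs that rounding (for this example the chain in fact holds with equality). The quadrature helpers are our own:

```python
def omega(u, v):
    return (u * v, (1 - u) * (1 - v))

p = r = 0.25
e1 = e2 = 0.25          # eta1(q, p) = eta2(s, r) = 1/4

def avg1(f, a, length, n=200):
    # (1/length) ∫_a^{a+length} f, midpoint rule
    h = length / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h / length

def pair_avg1(g, a, length):
    return (avg1(lambda t: g(t)[0], a, length),
            avg1(lambda t: g(t)[1], a, length))

# left-hand side of (7)
t1 = pair_avg1(lambda u: omega(u, r + e2 / 2), p, e1)
t2 = pair_avg1(lambda v: omega(p + e1 / 2, v), r, e2)
lhs = (t1[0] + t2[0], t1[1] + t2[1])

# middle term: 2 x mean of Ω over the rectangle
mid = (2 * avg1(lambda u: avg1(lambda v: omega(u, v)[0], r, e2), p, e1),
       2 * avg1(lambda u: avg1(lambda v: omega(u, v)[1], r, e2), p, e1))

# right-hand side of (7)
b1 = pair_avg1(lambda u: omega(u, r), p, e1)
b2 = pair_avg1(lambda u: omega(u, r + e2), p, e1)
b3 = pair_avg1(lambda v: omega(p, v), r, e2)
b4 = pair_avg1(lambda v: omega(p + e1, v), r, e2)
rhs = (0.5 * (b1[0] + b2[0] + b3[0] + b4[0]),
       0.5 * (b1[1] + b2[1] + b3[1] + b4[1]))

tol = 1e-9
# lhs ⊇ mid ⊇ rhs: lower ends do not increase, upper ends do not decrease
assert lhs[0] <= mid[0] + tol and mid[0] <= rhs[0] + tol
assert rhs[1] <= mid[1] + tol and mid[1] <= lhs[1] + tol
```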

**Theorem 8.** *Let X*<sup>1</sup> × *X*<sup>2</sup> *be an invex set with respect to η*<sup>1</sup> *and η*2*. If* Ω : [*p*, *p* + *η*1(*q*, *p*)] × [*r*,*r* + *η*2(*s*,*r*)] → R + *I is an interval-valued coordinated preinvex function with respect to η*<sup>1</sup> *and η*<sup>2</sup> *such that* Ω = [Ω, Ω] *and p* < *p* + *η*1(*q*, *p*), *r* < *r* + *η*2(*s*,*r*)*, where p*, *q* ∈ *X*<sup>1</sup> *and r*,*s* ∈ *X*2*. If η*1, *η*<sup>2</sup> *satisfy Condition C, then we have*

$$\begin{split} &\Omega\Big(p+\frac{1}{2}\eta\_{1}(q,p),r+\frac{1}{2}\eta\_{2}(s,r)\Big) \\ &\supseteq \frac{1}{2}\bigg[\frac{1}{\eta\_{1}(q,p)}\int\_{p}^{p+\eta\_{1}(q,p)}\Omega\Big(u,r+\frac{1}{2}\eta\_{2}(s,r)\Big)du+\frac{1}{\eta\_{2}(s,r)}\int\_{r}^{r+\eta\_{2}(s,r)}\Omega\Big(p+\frac{1}{2}\eta\_{1}(q,p),v\Big)dv\bigg] \\ &\supseteq \frac{1}{\eta\_{1}(q,p)\eta\_{2}(s,r)}\int\_{p}^{p+\eta\_{1}(q,p)}\int\_{r}^{r+\eta\_{2}(s,r)}\Omega(u,v)dvdu \\ &\supseteq \frac{1}{4}\bigg[\frac{1}{\eta\_{1}(q,p)}\int\_{p}^{p+\eta\_{1}(q,p)}(\Omega(u,r)+\Omega(u,r+\eta\_{2}(s,r)))du \\ &\qquad +\frac{1}{\eta\_{2}(s,r)}\int\_{r}^{r+\eta\_{2}(s,r)}(\Omega(p,v)+\Omega(p+\eta\_{1}(q,p),v))dv\bigg] \\ &\supseteq \frac{1}{4}[\Omega(p,r)+\Omega(p+\eta\_{1}(q,p),r)+\Omega(p,r+\eta\_{2}(s,r))+\Omega(p+\eta\_{1}(q,p),r+\eta\_{2}(s,r))] \\ &\supseteq \frac{1}{4}[\Omega(p,r)+\Omega(q,r)+\Omega(p,s)+\Omega(q,s)]. \end{split}$$

**Proof.** Since Ω is an interval-valued preinvex function on coordinates [*p*, *p* + *η*1(*q*, *p*)] × [*r*,*r* + *η*2(*s*,*r*)], then from Corollary 1 we get

$$\Omega\left(p+\frac{1}{2}\eta\_1(q,p),r+\frac{1}{2}\eta\_2(s,r)\right) \supseteq \frac{1}{\eta\_1(q,p)}\int\_p^{p+\eta\_1(q,p)} \Omega\left(u,r+\frac{1}{2}\eta\_2(s,r)\right)du,\tag{11}$$

$$\Omega\left(p+\frac{1}{2}\eta\_1(q,p),r+\frac{1}{2}\eta\_2(s,r)\right) \supseteq \frac{1}{\eta\_2(s,r)} \int\_r^{r+\eta\_2(s,r)} \Omega\left(p+\frac{1}{2}\eta\_1(q,p),v\right)dv. \tag{12}$$

Adding (11) and (12), we have

$$\begin{split} &\Omega\left(p+\frac{1}{2}\eta\_{1}(q,p),r+\frac{1}{2}\eta\_{2}(s,r)\right) \\ \supseteq &\frac{1}{2}\left[\frac{1}{\eta\_{1}(q,p)}\int\_{p}^{p+\eta\_{1}(q,p)}\Omega\left(u,r+\frac{1}{2}\eta\_{2}(s,r)\right)du+\frac{1}{\eta\_{2}(s,r)}\int\_{r}^{r+\eta\_{2}(s,r)}\Omega\left(p+\frac{1}{2}\eta\_{1}(q,p),v\right)dv\right]. \end{split} \tag{13}$$

Again from Corollary 1, we get

$$\frac{1}{\eta\_1(q,p)} \int\_p^{p+\eta\_1(q,p)} \Omega(u,r) du \supseteq \frac{\Omega(p,r) + \Omega(p+\eta\_1(q,p),r)}{2},\tag{14}$$

$$\frac{1}{\eta\_1(q,p)} \int\_p^{p+\eta\_1(q,p)} \Omega(u,r+\eta\_2(s,r)) du \supseteq \frac{\Omega(p,r+\eta\_2(s,r)) + \Omega(p+\eta\_1(q,p),r+\eta\_2(s,r))}{2}, \tag{15}$$

$$\frac{1}{\eta\_2(s,r)} \int\_r^{r+\eta\_2(s,r)} \Omega(p,v)dv \supseteq \frac{\Omega(p,r) + \Omega(p,r+\eta\_2(s,r))}{2},\tag{16}$$

$$\frac{1}{\eta\_2(s,r)} \int\_r^{r+\eta\_2(s,r)} \Omega(p+\eta\_1(q,p),v)dv \supseteq \frac{\Omega(p+\eta\_1(q,p),r) + \Omega(p+\eta\_1(q,p),r+\eta\_2(s,r))}{2}.\tag{17}$$

Adding (14)–(17), we get

$$\begin{split} &\frac{1}{\eta\_{1}(q,p)} \int\_{p}^{p+\eta\_{1}(q,p)} (\Omega(u,r) + \Omega(u,r+\eta\_{2}(s,r))) du \\ &+ \frac{1}{\eta\_{2}(s,r)} \int\_{r}^{r+\eta\_{2}(s,r)} (\Omega(p,v) + \Omega(p+\eta\_{1}(q,p),v)) dv \\ &\supseteq \Omega(p,r) + \Omega(p+\eta\_{1}(q,p),r) + \Omega(p,r+\eta\_{2}(s,r)) + \Omega(p+\eta\_{1}(q,p),r+\eta\_{2}(s,r)). \end{split} \tag{18}$$

By Corollary 1, we also have

$$
\begin{split} \Omega(p,r) + \Omega(p+\eta\_1(q,p),r) + \Omega(p,r+\eta\_2(s,r)) + \Omega(p+\eta\_1(q,p),r+\eta\_2(s,r)) \\ \supseteq \Omega(p,r) + \Omega(q,r) + \Omega(p,s) + \Omega(q,s). \end{split} \tag{19}
$$

From (7), (13), (18), and (19), we get the desired result.

**Remark 4.** *If we put η*1(*q*, *p*) = *q* − *p and η*2(*s*,*r*) = *s* − *r in Theorem 8, we obtain Theorem 7 of [34].*

Next, we prove H-H type inclusions for the product of two interval-valued coordinated preinvex functions.

**Theorem 9.** *Let* $X\_1 \times X\_2$ *be an invex set with respect to* $\eta\_1$ *and* $\eta\_2$*. If* $\Omega, \Upsilon : [p, p + \eta\_1(q,p)] \times [r, r + \eta\_2(s,r)] \to \mathbb{R}\_I^+$ *are interval-valued coordinated preinvex functions with respect to* $\eta\_1$ *and* $\eta\_2$ *such that* $\Omega = [\underline{\Omega}, \overline{\Omega}]$*,* $\Upsilon = [\underline{\Upsilon}, \overline{\Upsilon}]$ *and* $p < p + \eta\_1(q,p)$*,* $r < r + \eta\_2(s,r)$*, where* $p, q \in X\_1$ *and* $r, s \in X\_2$*. If* $\eta\_1, \eta\_2$ *satisfy Condition C, then*

$$\begin{aligned} &\frac{1}{\eta\_1(q,p)\eta\_2(s,r)} \int\_p^{p+\eta\_1(q,p)} \int\_r^{r+\eta\_2(s,r)} \Omega(u,v) \mathcal{Y}(u,v) dv du \\ &\supseteq \frac{1}{9} \mathcal{N}\_1(p,q,r,s) + \frac{1}{18} \mathcal{N}\_2(p,q,r,s) + \frac{1}{18} \mathcal{N}\_3(p,q,r,s) + \frac{1}{36} \mathcal{N}\_4(p,q,r,s), \end{aligned}$$

*where*

$$\begin{split} \mathcal{N}\_1(p,q,r,s) = \ &\Omega(p,r)\Upsilon(p,r) + \Omega(p+\eta\_1(q,p),r)\Upsilon(p+\eta\_1(q,p),r) \\ &+ \Omega(p,r+\eta\_2(s,r))\Upsilon(p,r+\eta\_2(s,r)) + \Omega(p+\eta\_1(q,p),r+\eta\_2(s,r))\Upsilon(p+\eta\_1(q,p),r+\eta\_2(s,r)), \end{split}$$

$$\begin{split} \mathcal{N}\_2(p,q,r,s) = \ &\Omega(p,r)\Upsilon(p+\eta\_1(q,p),r) + \Omega(p+\eta\_1(q,p),r)\Upsilon(p,r) \\ &+ \Omega(p,r+\eta\_2(s,r))\Upsilon(p+\eta\_1(q,p),r+\eta\_2(s,r)) + \Omega(p+\eta\_1(q,p),r+\eta\_2(s,r))\Upsilon(p,r+\eta\_2(s,r)), \end{split}$$

$$\begin{split} \mathcal{N}\_3(p,q,r,s) = \ &\Omega(p,r)\Upsilon(p,r+\eta\_2(s,r)) + \Omega(p+\eta\_1(q,p),r)\Upsilon(p+\eta\_1(q,p),r+\eta\_2(s,r)) \\ &+ \Omega(p,r+\eta\_2(s,r))\Upsilon(p,r) + \Omega(p+\eta\_1(q,p),r+\eta\_2(s,r))\Upsilon(p+\eta\_1(q,p),r), \end{split}$$

$$\begin{split} \mathcal{N}\_4(p,q,r,s) = \ &\Omega(p,r)\Upsilon(p+\eta\_1(q,p),r+\eta\_2(s,r)) + \Omega(p+\eta\_1(q,p),r)\Upsilon(p,r+\eta\_2(s,r)) \\ &+ \Omega(p,r+\eta\_2(s,r))\Upsilon(p+\eta\_1(q,p),r) + \Omega(p+\eta\_1(q,p),r+\eta\_2(s,r))\Upsilon(p,r). \end{split}$$
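To make the weighted corner sums concrete, the following sketch (an illustration, not from the paper) takes Ω = Υ with $\Omega(u,v) = [uv, (1-u)(1-v)]$ on $[1/4, 1/2]^2$, so that $\eta\_1, \eta\_2$ reduce to ordinary differences, and checks the inclusion of Theorem 9 endpointwise:

```python
# Hedged numerical check of Theorem 9 with Omega = Upsilon on [1/4, 1/2]^2.
# [a, b] ⊇ [c, d] is checked endpointwise: a <= c and d <= b.
import numpy as np

lo, hi = 0.25, 0.5
us = np.linspace(lo, hi, 401)
U, V = np.meshgrid(us, us)

def om_lo(u, v): return u * v              # lower endpoint of Omega
def om_hi(u, v): return (1 - u) * (1 - v)  # upper endpoint of Omega

# Left side: mean of the product Omega * Omega over the square.
lhs_lo = (om_lo(U, V) ** 2).mean()
lhs_hi = (om_hi(U, V) ** 2).mean()

def rhs(f, p0=lo, p1=hi, r0=lo, r1=hi):
    """(1/9)N1 + (1/18)N2 + (1/18)N3 + (1/36)N4 with Upsilon = Omega = f."""
    N1 = f(p0, r0)**2 + f(p1, r0)**2 + f(p0, r1)**2 + f(p1, r1)**2
    N2 = 2*f(p0, r0)*f(p1, r0) + 2*f(p0, r1)*f(p1, r1)
    N3 = 2*f(p0, r0)*f(p0, r1) + 2*f(p1, r0)*f(p1, r1)
    N4 = 2*f(p0, r0)*f(p1, r1) + 2*f(p1, r0)*f(p0, r1)
    return N1 / 9 + N2 / 18 + N3 / 18 + N4 / 36

eps = 1e-3
assert lhs_lo <= rhs(om_lo) + eps and rhs(om_hi) <= lhs_hi + eps
```

For this bilinear choice both sides agree (the lower endpoints equal $49/2304$), so the inclusion is attained with equality up to discretization error.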

**Proof.** Since Ω and *Υ* are interval-valued coordinated preinvex functions on [*p*, *p* + *η*1(*q*, *p*)] × [*r*,*r* + *η*2(*s*,*r*)], we have

$$
\Omega\_{u}(v) : [r, r + \eta\_2(s,r)] \to \mathbb{R}\_I^+, \quad \Omega\_{u}(v) = \Omega(u, v)
$$

and

$$
\Upsilon\_{u}(v) : [r, r + \eta\_2(s,r)] \to \mathbb{R}\_I^+, \quad \Upsilon\_{u}(v) = \Upsilon(u, v)
$$

are interval-valued preinvex functions on [*r*,*r* + *η*2(*s*,*r*)] for all *u* ∈ [*p*, *p* + *η*1(*q*, *p*)]. Similarly,

$$\Omega\_{v}(u) : [p, p + \eta\_1(q,p)] \to \mathbb{R}\_I^+, \quad \Omega\_{v}(u) = \Omega(u, v)$$

and

$$\Upsilon\_{v}(u) : [p, p + \eta\_1(q,p)] \to \mathbb{R}\_I^+, \quad \Upsilon\_{v}(u) = \Upsilon(u, v)$$

are interval-valued preinvex functions on [*p*, *p* + *η*1(*q*, *p*)] for all *v* ∈ [*r*,*r* + *η*2(*s*,*r*)].

From Corollary 2, we get

$$\begin{aligned} &\frac{1}{\eta\_2(s,r)} \int\_r^{r+\eta\_2(s,r)} \Omega\_{\mathfrak{u}}(v) Y\_{\mathfrak{u}}(v) dv \\ &\supseteq \frac{1}{3} [\Omega\_{\mathfrak{u}}(r) Y\_{\mathfrak{u}}(r) + \Omega\_{\mathfrak{u}}(r+\eta\_2(s,r)) Y\_{\mathfrak{u}}(r+\eta\_2(s,r))] + \frac{1}{6} [\Omega\_{\mathfrak{u}}(r) Y\_{\mathfrak{u}}(r+\eta\_2(s,r)) + \Omega\_{\mathfrak{u}}(r+\eta\_2(s,r)) Y\_{\mathfrak{u}}(r)] .\end{aligned}$$

This implies

$$\begin{split} &\frac{1}{\eta\_2(s,r)} \int\_r^{r+\eta\_2(s,r)} \Omega(u,v) \mathcal{Y}(u,v) dv \\ &\supseteq \frac{1}{3} [\Omega(u,r)\mathcal{Y}(u,r) + \Omega(u,r+\eta\_2(s,r))\mathcal{Y}(u,r+\eta\_2(s,r))] \\ &\quad + \frac{1}{6} [\Omega(u,r)\mathcal{Y}(u,r+\eta\_2(s,r)) + \Omega(u,r+\eta\_2(s,r))\mathcal{Y}(u,r)]. \end{split} \tag{20}$$

Integrating (20) with respect to *u* over [*p*, *p* + *η*1(*q*, *p*)] and then dividing by *η*1(*q*, *p*), we find

$$\begin{split} &\frac{1}{\eta\_{1}(q,p)\eta\_{2}(s,r)} \int\_{p}^{p+\eta\_{1}(q,p)} \int\_{r}^{r+\eta\_{2}(s,r)} \Omega(u,v) \mathcal{Y}(u,v) dv du \\ &\supseteq \frac{1}{3\eta\_{1}(q,p)} \int\_{p}^{p+\eta\_{1}(q,p)} [\Omega(u,r)\mathcal{Y}(u,r) + \Omega(u,r+\eta\_{2}(s,r))\mathcal{Y}(u,r+\eta\_{2}(s,r))] du \\ &\quad + \frac{1}{6\eta\_{1}(q,p)} \int\_{p}^{p+\eta\_{1}(q,p)} [\Omega(u,r)\mathcal{Y}(u,r+\eta\_{2}(s,r)) + \Omega(u,r+\eta\_{2}(s,r))\mathcal{Y}(u,r)] du. \end{split} \tag{21}$$

Again from Corollary 2, we have

$$\begin{split} \frac{1}{\eta\_1(q,p)} \int\_p^{p+\eta\_1(q,p)} \Omega(u,r)Y(u,r)du \\ \supseteq \frac{1}{3} [\Omega(p,r)Y(p,r) + \Omega(p+\eta\_1(q,p),r)Y(p+\eta\_1(q,p),r)] \\ + \frac{1}{6} [\Omega(p,r)Y(p+\eta\_1(q,p),r) + \Omega(p+\eta\_1(q,p),r)Y(p,r)], \end{split} \tag{22}$$

$$\begin{split} &\frac{1}{\eta\_1(q,p)} \int\_p^{p+\eta\_1(q,p)} \Omega(u,r+\eta\_2(s,r)) Y(u,r+\eta\_2(s,r)) du \\ &\supseteq \frac{1}{3} [\Omega(p,r+\eta\_2(s,r)) Y(p,r+\eta\_2(s,r)) + \Omega(p+\eta\_1(q,p),r+\eta\_2(s,r)) Y(p+\eta\_1(q,p),r+\eta\_2(s,r))] \\ &\quad + \frac{1}{6} [\Omega(p,r+\eta\_2(s,r)) Y(p+\eta\_1(q,p),r+\eta\_2(s,r)) + \Omega(p+\eta\_1(q,p),r+\eta\_2(s,r)) Y(p,r+\eta\_2(s,r))], \end{split} \tag{23}$$

$$\begin{split} &\frac{1}{\eta\_1(q,p)} \int\_p^{p+\eta\_1(q,p)} \Omega(u,r) Y(u,r+\eta\_2(s,r)) du \\ &\supseteq \frac{1}{3} [\Omega(p,r) Y(p,r+\eta\_2(s,r)) + \Omega(p+\eta\_1(q,p),r) Y(p+\eta\_1(q,p),r+\eta\_2(s,r))] \\ &\quad + \frac{1}{6} [\Omega(p,r) Y(p+\eta\_1(q,p),r+\eta\_2(s,r)) + \Omega(p+\eta\_1(q,p),r) Y(p,r+\eta\_2(s,r))], \end{split} \tag{24}$$

$$\begin{split} &\frac{1}{\eta\_1(q,p)} \int\_p^{p+\eta\_1(q,p)} \Omega(u,r+\eta\_2(s,r)) Y(u,r) du \\ &\supseteq \frac{1}{3} [\Omega(p,r+\eta\_2(s,r)) Y(p,r) + \Omega(p+\eta\_1(q,p),r+\eta\_2(s,r)) Y(p+\eta\_1(q,p),r)] \\ &\quad + \frac{1}{6} [\Omega(p,r+\eta\_2(s,r)) Y(p+\eta\_1(q,p),r) + \Omega(p+\eta\_1(q,p),r+\eta\_2(s,r)) Y(p,r)]. \end{split} \tag{25}$$

Substituting (22)–(25) into (21), we obtain the desired result. Similarly, the same result can be obtained by applying Corollary 2 to the product $\Omega\_v(u)\Upsilon\_v(u)$ on $[p, p+\eta\_1(q,p)]$.

**Remark 5.** *If we put η*1(*q*, *p*) = *q* − *p and η*2(*s*,*r*) = *s* − *r in Theorem 9, we obtain Theorem 8 of [34].*

**Theorem 10.** *Let* $X\_1 \times X\_2$ *be an invex set with respect to* $\eta\_1$ *and* $\eta\_2$*. If* $\Omega, \Upsilon : [p, p + \eta\_1(q,p)] \times [r, r + \eta\_2(s,r)] \to \mathbb{R}\_I^+$ *are interval-valued coordinated preinvex functions with respect to* $\eta\_1$ *and* $\eta\_2$ *such that* $\Omega = [\underline{\Omega}, \overline{\Omega}]$*,* $\Upsilon = [\underline{\Upsilon}, \overline{\Upsilon}]$ *and* $p < p + \eta\_1(q,p)$*,* $r < r + \eta\_2(s,r)$*, where* $p, q \in X\_1$ *and* $r, s \in X\_2$*. If* $\eta\_1, \eta\_2$ *satisfy Condition C, then we have*

$$\begin{split} &4\Omega\Big(p+\frac{1}{2}\eta\_{1}(q,p),r+\frac{1}{2}\eta\_{2}(s,r)\Big)\mathcal{Y}\Big(p+\frac{1}{2}\eta\_{1}(q,p),r+\frac{1}{2}\eta\_{2}(s,r)\Big) \\ &\supseteq \frac{1}{\eta\_{1}(q,p)\eta\_{2}(s,r)}\int\_{p}^{p+\eta\_{1}(q,p)}\int\_{r}^{r+\eta\_{2}(s,r)}\Omega(u,v)\mathcal{Y}(u,v)dvdu \\ &+\frac{5}{36}\mathcal{N}\_{1}(p,q,r,s)+\frac{7}{36}\mathcal{N}\_{2}(p,q,r,s)+\frac{7}{36}\mathcal{N}\_{3}(p,q,r,s)+\frac{2}{9}\mathcal{N}\_{4}(p,q,r,s), \end{split}$$

*where* $\mathcal{N}\_1(p,q,r,s)$, $\mathcal{N}\_2(p,q,r,s)$, $\mathcal{N}\_3(p,q,r,s)$*, and* $\mathcal{N}\_4(p,q,r,s)$ *are defined as above.*

**Proof.** Since Ω and *Υ* are interval-valued coordinated preinvex functions, from Corollary 3 we have

$$\begin{split} 2\Omega\left(p+\frac{1}{2}\eta\_{1}(q,p),r+\frac{1}{2}\eta\_{2}(s,r)\right)Y\left(p+\frac{1}{2}\eta\_{1}(q,p),r+\frac{1}{2}\eta\_{2}(s,r)\right) \\ \supseteq \frac{1}{\eta\_{1}(q,p)}\int\_{p}^{p+\eta\_{1}(q,p)}\Omega(u,r+\frac{1}{2}\eta\_{2}(s,r))Y(u,r+\frac{1}{2}\eta\_{2}(s,r))du \\ +\frac{1}{6}\left[\Omega(p,r+\frac{1}{2}\eta\_{2}(s,r))Y(p,r+\frac{1}{2}\eta\_{2}(s,r)) \\ +\Omega(p+\eta\_{1}(q,p),r+\frac{1}{2}\eta\_{2}(s,r))Y(p+\eta\_{1}(q,p),r+\frac{1}{2}\eta\_{2}(s,r))\right] \\ +\frac{1}{3}\left[\Omega(p,r+\frac{1}{2}\eta\_{2}(s,r))Y(p+\eta\_{1}(q,p),r+\frac{1}{2}\eta\_{2}(s,r)) \\ +\Omega(p+\eta\_{1}(q,p),r+\frac{1}{2}\eta\_{2}(s,r))Y(p,r+\frac{1}{2}\eta\_{2}(s,r))\right] \\ \end{split} \tag{26}$$

and

$$\begin{split} &2\Omega\left(p+\frac{1}{2}\eta\_{1}(q,p),r+\frac{1}{2}\eta\_{2}(s,r)\right)Y\left(p+\frac{1}{2}\eta\_{1}(q,p),r+\frac{1}{2}\eta\_{2}(s,r)\right) \\ \supseteq &\frac{1}{\eta\_{2}(s,r)}\int\_{r}^{r+\eta\_{2}(s,r)}\Omega(p+\frac{1}{2}\eta\_{1}(q,p),v)Y(p+\frac{1}{2}\eta\_{1}(q,p),v)dv \\ &+\frac{1}{6}\left[\Omega(p+\frac{1}{2}\eta\_{1}(q,p),r)Y(p+\frac{1}{2}\eta\_{1}(q,p),r) \\ &+\Omega(p+\frac{1}{2}\eta\_{1}(q,p),r+\eta\_{2}(s,r))Y(p+\frac{1}{2}\eta\_{1}(q,p),r+\eta\_{2}(s,r))\right] \\ &+\frac{1}{3}\left[\Omega(p+\frac{1}{2}\eta\_{1}(q,p),r)Y(p+\frac{1}{2}\eta\_{1}(q,p),r+\eta\_{2}(s,r)) \\ &+\Omega(p+\frac{1}{2}\eta\_{1}(q,p),r+\eta\_{2}(s,r))Y(p+\frac{1}{2}\eta\_{1}(q,p),r)\right]. \tag{27} \end{split}$$

Adding (26) and (27) and multiplying both sides of the result by 2, we find

$$\begin{split} &8\Omega\Big(p+\frac{1}{2}\eta\_{1}(q,p),r+\frac{1}{2}\eta\_{2}(s,r)\Big)\Upsilon\Big(p+\frac{1}{2}\eta\_{1}(q,p),r+\frac{1}{2}\eta\_{2}(s,r)\Big) \\ &\supseteq \frac{2}{\eta\_{1}(q,p)}\int\_{p}^{p+\eta\_{1}(q,p)}\Omega(u,r+\frac{1}{2}\eta\_{2}(s,r))\Upsilon(u,r+\frac{1}{2}\eta\_{2}(s,r))du \\ &\quad +\frac{2}{\eta\_{2}(s,r)}\int\_{r}^{r+\eta\_{2}(s,r)}\Omega(p+\frac{1}{2}\eta\_{1}(q,p),v)\Upsilon(p+\frac{1}{2}\eta\_{1}(q,p),v)dv \\ &\quad +\frac{1}{6}\Big[2\Omega(p,r+\frac{1}{2}\eta\_{2}(s,r))\Upsilon(p,r+\frac{1}{2}\eta\_{2}(s,r)) \\ &\qquad +2\Omega(p+\eta\_{1}(q,p),r+\frac{1}{2}\eta\_{2}(s,r))\Upsilon(p+\eta\_{1}(q,p),r+\frac{1}{2}\eta\_{2}(s,r)) \\ &\qquad +2\Omega(p+\frac{1}{2}\eta\_{1}(q,p),r)\Upsilon(p+\frac{1}{2}\eta\_{1}(q,p),r) \\ &\qquad +2\Omega(p+\frac{1}{2}\eta\_{1}(q,p),r+\eta\_{2}(s,r))\Upsilon(p+\frac{1}{2}\eta\_{1}(q,p),r+\eta\_{2}(s,r))\Big] \\ &\quad +\frac{1}{3}\Big[2\Omega(p,r+\frac{1}{2}\eta\_{2}(s,r))\Upsilon(p+\eta\_{1}(q,p),r+\frac{1}{2}\eta\_{2}(s,r)) \\ &\qquad +2\Omega(p+\eta\_{1}(q,p),r+\frac{1}{2}\eta\_{2}(s,r))\Upsilon(p,r+\frac{1}{2}\eta\_{2}(s,r)) \\ &\qquad +2\Omega(p+\frac{1}{2}\eta\_{1}(q,p),r)\Upsilon(p+\frac{1}{2}\eta\_{1}(q,p),r+\eta\_{2}(s,r)) \\ &\qquad +2\Omega(p+\frac{1}{2}\eta\_{1}(q,p),r+\eta\_{2}(s,r))\Upsilon(p+\frac{1}{2}\eta\_{1}(q,p),r)\Big]. \end{split} \tag{28}$$

Now, from Corollary 3, we have

$$\begin{split} 2\Omega(p,r+\frac{1}{2}\eta\_{2}(s,r))Y(p,r+\frac{1}{2}\eta\_{2}(s,r)) \\ \supseteq & \frac{1}{\eta\_{2}(s,r)}\int\_{r}^{r+\eta\_{2}(s,r)}\Omega(p,v)Y(p,v)dv \\ + \frac{1}{6}[\Omega(p,r)Y(p,r)+\Omega(p,r+\eta\_{2}(s,r))Y(p,r+\eta\_{2}(s,r))] \\ + \frac{1}{3}[\Omega(p,r)Y(p,r+\eta\_{2}(s,r))+\Omega(p,r+\eta\_{2}(s,r))Y(p,r)], \end{split} \tag{29}$$

$$\begin{split} &2\Omega(p+\eta\_{1}(q,p),r+\frac{1}{2}\eta\_{2}(s,r))Y(p+\eta\_{1}(q,p),r+\frac{1}{2}\eta\_{2}(s,r)) \\ &\supseteq \frac{1}{\eta\_{2}(s,r)}\int\_{r}^{r+\eta\_{2}(s,r)}\Omega(p+\eta\_{1}(q,p),v)Y(p+\eta\_{1}(q,p),v)dv \\ &+\frac{1}{6}[\Omega(p+\eta\_{1}(q,p),r)Y(p+\eta\_{1}(q,p),r)+\Omega(p+\eta\_{1}(q,p),r+\eta\_{2}(s,r))Y(p+\eta\_{1}(q,p),r+\eta\_{2}(s,r))] \\ &+\frac{1}{3}[\Omega(p+\eta\_{1}(q,p),r)Y(p+\eta\_{1}(q,p),r+\eta\_{2}(s,r))+\Omega(p+\eta\_{1}(q,p),r+\eta\_{2}(s,r))Y(p+\eta\_{1}(q,p),r)], \end{split} \tag{30}$$

$$\begin{split} 2\Omega\left(p+\frac{1}{2}\eta\_{1}(q,p),r\right)\mathcal{Y}\left(p+\frac{1}{2}\eta\_{1}(q,p),r\right) \\ \supseteq &\frac{1}{\eta\_{1}(q,p)}\int\_{p}^{p+\eta\_{1}(q,p)}\Omega(u,r)\mathcal{Y}(u,r)du \\ +\frac{1}{6}[\Omega(p,r)\mathcal{Y}(p,r)+\Omega(p+\eta\_{1}(q,p),r)\mathcal{Y}(p+\eta\_{1}(q,p),r)] \\ +\frac{1}{3}[\Omega(p,r)\mathcal{Y}(p+\eta\_{1}(q,p),r)+\Omega(p+\eta\_{1}(q,p),r)\mathcal{Y}(p,r)], \end{split} \tag{31}$$

$$\begin{split} 2\Omega\left(p+\frac{1}{2}\eta\_{1}(q,p),r+\eta\_{2}(s,r)\right)\mathcal{Y}\left(p+\frac{1}{2}\eta\_{1}(q,p),r+\eta\_{2}(s,r)\right) \\ \supseteq \frac{1}{\eta\_{1}(q,p)}\int\_{p}^{p+\eta\_{1}(q,p)}\Omega(u,r+\eta\_{2}(s,r))\mathcal{Y}(u,r+\eta\_{2}(s,r))du \\ +\frac{1}{6}[\Omega(p,r+\eta\_{2}(s,r))\mathcal{Y}(p,r+\eta\_{2}(s,r))+\Omega(p+\eta\_{1}(q,p),r+\eta\_{2}(s,r))\mathcal{Y}(p+\eta\_{1}(q,p),r+\eta\_{2}(s,r))] \\ +\frac{1}{3}[\Omega(p,r+\eta\_{2}(s,r))\mathcal{Y}(p+\eta\_{1}(q,p),r+\eta\_{2}(s,r))+\Omega(p+\eta\_{1}(q,p),r+\eta\_{2}(s,r))\mathcal{Y}(p,r+\eta\_{2}(s,r))], \end{split} \tag{32}$$

$$\begin{split} 2\Omega(p,r+\frac{1}{2}\eta\_{2}(s,r))\Upsilon(p+\eta\_{1}(q,p),r+\frac{1}{2}\eta\_{2}(s,r)) \\ \supseteq \frac{1}{\eta\_{2}(s,r)}\int\_{r}^{r+\eta\_{2}(s,r)}\Omega(p,v)\Upsilon(p+\eta\_{1}(q,p),v)dv \\ +\frac{1}{6}[\Omega(p,r)\Upsilon(p+\eta\_{1}(q,p),r)+\Omega(p,r+\eta\_{2}(s,r))\Upsilon(p+\eta\_{1}(q,p),r+\eta\_{2}(s,r))] \\ +\frac{1}{3}[\Omega(p,r)\Upsilon(p+\eta\_{1}(q,p),r+\eta\_{2}(s,r))+\Omega(p,r+\eta\_{2}(s,r))\Upsilon(p+\eta\_{1}(q,p),r)], \end{split} \tag{33}$$

$$\begin{split} &2\Omega(p+\eta\_{1}(q,p),r+\frac{1}{2}\eta\_{2}(s,r))Y(p,r+\frac{1}{2}\eta\_{2}(s,r)) \\ \supseteq &\frac{1}{\eta\_{2}(s,r)}\int\_{r}^{r+\eta\_{2}(s,r)}\Omega(p+\eta\_{1}(q,p),v)Y(p,v)dv \\ &+\frac{1}{6}[\Omega(p+\eta\_{1}(q,p),r)Y(p,r)+\Omega(p+\eta\_{1}(q,p),r+\eta\_{2}(s,r))Y(p,r+\eta\_{2}(s,r))] \\ &+\frac{1}{3}[\Omega(p+\eta\_{1}(q,p),r)Y(p,r+\eta\_{2}(s,r))+\Omega(p+\eta\_{1}(q,p),r+\eta\_{2}(s,r))Y(p,r)], \end{split} \tag{34}$$

$$\begin{split} &2\Omega\Big(p+\frac{1}{2}\eta\_{1}(q,p),r\Big)\mathcal{Y}\Big(p+\frac{1}{2}\eta\_{1}(q,p),r+\eta\_{2}(s,r)\Big) \\ \supseteq &\frac{1}{\eta\_{1}(q,p)}\int\_{p}^{p+\eta\_{1}(q,p)}\Omega(u,r)\mathcal{Y}(u,r+\eta\_{2}(s,r))du \\ &+\frac{1}{6}[\Omega(p,r)\mathcal{Y}(p,r+\eta\_{2}(s,r))+\Omega(p+\eta\_{1}(q,p),r)\mathcal{Y}(p+\eta\_{1}(q,p),r+\eta\_{2}(s,r))] \\ &+\frac{1}{3}[\Omega(p,r)\mathcal{Y}(p+\eta\_{1}(q,p),r+\eta\_{2}(s,r))+\Omega(p+\eta\_{1}(q,p),r)\mathcal{Y}(p,r+\eta\_{2}(s,r))], \end{split} \tag{35}$$

$$\begin{split} &2\Omega\Big(p+\frac{1}{2}\eta\_{1}(q,p),r+\eta\_{2}(s,r)\Big)Y\Big(p+\frac{1}{2}\eta\_{1}(q,p),r\Big) \\ \supseteq &\frac{1}{\eta\_{1}(q,p)}\int\_{p}^{p+\eta\_{1}(q,p)}\Omega(u,r+\eta\_{2}(s,r))Y(u,r)du \\ &+\frac{1}{6}[\Omega(p,r+\eta\_{2}(s,r))Y(p,r)+\Omega(p+\eta\_{1}(q,p),r+\eta\_{2}(s,r))Y(p+\eta\_{1}(q,p),r)] \\ &+\frac{1}{3}[\Omega(p,r+\eta\_{2}(s,r))Y(p+\eta\_{1}(q,p),r)+\Omega(p+\eta\_{1}(q,p),r+\eta\_{2}(s,r))Y(p,r)]. \end{split} \tag{36}$$

Using (29)–(36) in (28), we get

$$\begin{split} &8\Omega\Big(p+\frac{1}{2}\eta\_{1}(q,p),r+\frac{1}{2}\eta\_{2}(s,r)\Big)\Upsilon\Big(p+\frac{1}{2}\eta\_{1}(q,p),r+\frac{1}{2}\eta\_{2}(s,r)\Big) \\ &\supseteq \frac{2}{\eta\_{1}(q,p)}\int\_{p}^{p+\eta\_{1}(q,p)}\Omega(u,r+\frac{1}{2}\eta\_{2}(s,r))\Upsilon(u,r+\frac{1}{2}\eta\_{2}(s,r))du \\ &\quad +\frac{2}{\eta\_{2}(s,r)}\int\_{r}^{r+\eta\_{2}(s,r)}\Omega(p+\frac{1}{2}\eta\_{1}(q,p),v)\Upsilon(p+\frac{1}{2}\eta\_{1}(q,p),v)dv \\ &\quad +\frac{1}{6\eta\_{2}(s,r)}\int\_{r}^{r+\eta\_{2}(s,r)}(\Omega(p,v)\Upsilon(p,v)+\Omega(p+\eta\_{1}(q,p),v)\Upsilon(p+\eta\_{1}(q,p),v))dv \\ &\quad +\frac{1}{3\eta\_{2}(s,r)}\int\_{r}^{r+\eta\_{2}(s,r)}(\Omega(p,v)\Upsilon(p+\eta\_{1}(q,p),v)+\Omega(p+\eta\_{1}(q,p),v)\Upsilon(p,v))dv \\ &\quad +\frac{1}{6\eta\_{1}(q,p)}\int\_{p}^{p+\eta\_{1}(q,p)}(\Omega(u,r)\Upsilon(u,r)+\Omega(u,r+\eta\_{2}(s,r))\Upsilon(u,r+\eta\_{2}(s,r)))du \\ &\quad +\frac{1}{3\eta\_{1}(q,p)}\int\_{p}^{p+\eta\_{1}(q,p)}(\Omega(u,r)\Upsilon(u,r+\eta\_{2}(s,r))+\Omega(u,r+\eta\_{2}(s,r))\Upsilon(u,r))du \\ &\quad +\frac{1}{18}\mathcal{N}\_{1}(p,q,r,s)+\frac{1}{9}\mathcal{N}\_{2}(p,q,r,s)+\frac{1}{9}\mathcal{N}\_{3}(p,q,r,s)+\frac{2}{9}\mathcal{N}\_{4}(p,q,r,s). \end{split} \tag{37}$$

Again from Corollary 3, we have

$$\begin{split} &\frac{2}{\eta\_{2}(s,r)}\int\_{r}^{r+\eta\_{2}(s,r)}\Omega(p+\frac{1}{2}\eta\_{1}(q,p),v)Y(p+\frac{1}{2}\eta\_{1}(q,p),v)dv \\ &\supseteq \frac{1}{\eta\_{1}(q,p)\eta\_{2}(s,r)}\int\_{p}^{p+\eta\_{1}(q,p)}\int\_{r}^{r+\eta\_{2}(s,r)}\Omega(u,v)Y(u,v)dvdu \\ &+\frac{1}{6\eta\_{2}(s,r)}\int\_{r}^{r+\eta\_{2}(s,r)}(\Omega(p,v)Y(p,v)+\Omega(p+\eta\_{1}(q,p),v)Y(p+\eta\_{1}(q,p),v))dv \\ &+\frac{1}{3\eta\_{2}(s,r)}\int\_{r}^{r+\eta\_{2}(s,r)}(\Omega(p,v)Y(p+\eta\_{1}(q,p),v)+\Omega(p+\eta\_{1}(q,p),v)Y(p,v))dv,\end{split} \tag{38}$$

$$\begin{split} &\frac{2}{\eta\_{1}(q,p)}\int\_{p}^{p+\eta\_{1}(q,p)}\Omega(u,r+\frac{1}{2}\eta\_{2}(s,r))Y(u,r+\frac{1}{2}\eta\_{2}(s,r))du \\ \supseteq &\frac{1}{\eta\_{1}(q,p)\eta\_{2}(s,r)}\int\_{p}^{p+\eta\_{1}(q,p)}\int\_{r}^{r+\eta\_{2}(s,r)}\Omega(u,v)Y(u,v)dvdu \\ &+\frac{1}{6\eta\_{1}(q,p)}\int\_{p}^{p+\eta\_{1}(q,p)}(\Omega(u,r)Y(u,r)+\Omega(u,r+\eta\_{2}(s,r))Y(u,r+\eta\_{2}(s,r)))du \\ &+\frac{1}{3\eta\_{1}(q,p)}\int\_{p}^{p+\eta\_{1}(q,p)}(\Omega(u,r)Y(u,r+\eta\_{2}(s,r))+\Omega(u,r+\eta\_{2}(s,r))Y(u,r))du. \end{split} \tag{39}$$

Using (38) and (39) in (37), we get

$$\begin{split} &8\Omega\left(p+\frac{1}{2}\eta\_{1}(q,p),r+\frac{1}{2}\eta\_{2}(s,r)\right)\mathcal{Y}\left(p+\frac{1}{2}\eta\_{1}(q,p),r+\frac{1}{2}\eta\_{2}(s,r)\right) \\ &\supseteq \frac{2}{\eta\_{1}(q,p)\eta\_{2}(s,r)}\int\_{p}^{p+\eta\_{1}(q,p)}\int\_{r}^{r+\eta\_{2}(s,r)}\Omega(u,v)\mathcal{Y}(u,v)dvdu \\ &\quad +\frac{1}{3\eta\_{2}(s,r)}\int\_{r}^{r+\eta\_{2}(s,r)}(\Omega(p,v)\mathcal{Y}(p,v)+\Omega(p+\eta\_{1}(q,p),v)\mathcal{Y}(p+\eta\_{1}(q,p),v) \\ &\qquad +2\Omega(p,v)\mathcal{Y}(p+\eta\_{1}(q,p),v)+2\Omega(p+\eta\_{1}(q,p),v)\mathcal{Y}(p,v))dv \\ &\quad +\frac{1}{3\eta\_{1}(q,p)}\int\_{p}^{p+\eta\_{1}(q,p)}(\Omega(u,r)\mathcal{Y}(u,r)+\Omega(u,r+\eta\_{2}(s,r))\mathcal{Y}(u,r+\eta\_{2}(s,r)) \\ &\qquad +2\Omega(u,r)\mathcal{Y}(u,r+\eta\_{2}(s,r))+2\Omega(u,r+\eta\_{2}(s,r))\mathcal{Y}(u,r))du \\ &\quad +\frac{1}{18}\mathcal{N}\_{1}(p,q,r,s)+\frac{1}{9}\mathcal{N}\_{2}(p,q,r,s)+\frac{1}{9}\mathcal{N}\_{3}(p,q,r,s)+\frac{2}{9}\mathcal{N}\_{4}(p,q,r,s). \end{split} \tag{40}$$

Applying Corollary 3 to each integral on the right-hand side of (40), we obtain our desired result.

**Remark 6.** *If we put η*1(*q*, *p*) = *q* − *p and η*2(*s*,*r*) = *s* − *r in Theorem 10, we obtain Theorem 9 of [34].*

#### **4. Conclusions**

In this article, we have introduced the concept of interval-valued preinvex functions on coordinates as a generalization of convex interval-valued functions on coordinates. We have established H-H type inclusions for coordinated preinvex interval-valued functions. Moreover, some new H-H type inclusions for the product of two coordinated preinvex interval-valued functions have been investigated. The results obtained in this paper may be extended to other kinds of interval-valued preinvex functions on the coordinates. In the future, H-H type and H–H–Fejér type inclusions for interval-valued coordinated preinvex functions may be investigated via interval-valued fractional integrals on coordinates. We hope that the ideas and results obtained in this article will encourage further investigation by interested readers.

**Author Contributions:** Formal analysis, K.K.L., S.K.M., J.B. and M.H.; funding acquisition, K.K.L.; investigation, K.K.L., S.K.M., J.B. and M.H.; methodology, J.B.; supervision, K.K.L. and S.K.M.; validation, J.B. and M.H.; writing—original draft preparation, S.K.M. and J.B.; writing—review and editing, J.B. and M.H. All authors have read and agreed to the published version of the manuscript.

**Funding:** The second author is financially supported by "Research Grant for Faculty" (IoE Scheme) under Dev. Scheme NO. 6031 and Department of Science and Technology, SERB, New Delhi, India through grant no.: MTR/2018/000121, and the third author is financially supported by the Ministry of Science and Technology, Department of Science and Technology, New Delhi, India, through Registration No. DST/INSPIRE Fellowship/[IF190355].

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** No data were used to support this study.

**Acknowledgments:** The authors are indebted to the anonymous reviewers for their valuable comments and remarks that helped to improve the presentation and quality of the manuscript.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


## *Article* **Fractional Calculus for Convex Functions in Interval-Valued Settings and Inequalities**

**Muhammad Bilal Khan <sup>1</sup>, Hatim Ghazi Zaini <sup>2</sup>, Savin Treanță <sup>3,\*</sup>, Gustavo Santos-García <sup>4,\*</sup>, Jorge E. Macías-Díaz <sup>5,6</sup> and Mohamed S. Soliman <sup>7</sup>**

	- <sup>4</sup> Facultad de Economía y Empresa and Multidisciplinary Institute of Enterprise (IME), University of Salamanca, 37007 Salamanca, Spain
	- <sup>5</sup> Departamento de Matemáticas y Física, Universidad Autónoma de Aguascalientes, Avenida Universidad 940, Ciudad Universitaria, Aguascalientes 20131, Mexico; jemacias@correo.uaa.mx
	- <sup>6</sup> Department of Mathematics, School of Digital Technologies, Tallinn University, Narva Rd. 25, 10120 Tallinn, Estonia
	- <sup>7</sup> Department of Electrical Engineering, College of Engineering, Taif University, P.O. Box 11099, Taif 21944, Saudi Arabia; soliman@tu.edu.sa
	- **\*** Correspondence: savin.treanta@upb.ro (S.T.); santos@usal.es (G.S.-G.)

**Abstract:** In this paper, we discuss the Riemann–Liouville fractional integral operator for left and right convex interval-valued functions (left and right convex *I*·*V-F*s), as well as various related notions and concepts. First, the authors used the Riemann–Liouville fractional integral to prove Hermite–Hadamard type (H–H type) inequalities. Furthermore, H–H type inequalities for the product of two left and right convex *I*·*V-F*s have been established. Finally, for left and right convex *I*·*V-F*s, we obtained the Riemann–Liouville fractional integral Hermite–Hadamard–Fejér type inequality (H–H–Fejér type inequality). The findings of this research show that this methodology may be applied directly and is computationally simple and precise.

**Keywords:** left and right convex interval-valued function; fractional integral operator; Hermite–Hadamard type inequality; Hermite–Hadamard–Fejér type inequality

#### **1. Introduction**

Convex functions are used throughout mathematical inequalities, finance, engineering, statistics, and probability. Convex and symmetric convex functions have strong relationships with inequalities. Because of their intriguing features in the mathematical sciences, there are expansive properties and strong links between the symmetric function and different fields of convexity, including convex functions, probability theory, and convex geometry on convex sets. Convex functions have a long and illustrious history in science, and they have been a focus of study for more than a century. Several researchers have proposed different conjectures, extensions, and variants of convex functions. Many inequalities and equalities, such as the Ostrowski-type inequality, Hardy-type inequality, Opial-type inequality, Simpson inequality, Fejér-type inequality, and Chebyshev-type inequalities, have been established using convex functions. Among these inequalities, the H–H inequality [1,2], on which many publications have appeared, is likely the one that attracts the most attention from scholars. The H–H inequality has been regarded as one of the most useful inequalities in mathematical analysis since its discovery in 1883; it is also known as the classical H–H inequality. Its expansions and generalizations have piqued the curiosity of a number of mathematicians. For various classes of convex functions and mappings, a number of

**Citation:** Khan, M.B.; Zaini, H.G.; Treanță, S.; Santos-García, G.; Macías-Díaz, J.E.; Soliman, M.S. Fractional Calculus for Convex Functions in Interval-Valued Settings and Inequalities. *Symmetry* **2022**, *14*, 341. https://doi.org/10.3390/sym14020341

Academic Editor: Clemente Cesarano

Received: 19 January 2022 Accepted: 4 February 2022 Published: 7 February 2022

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

mathematicians in the fields of pure and applied mathematics have worked to expand, generalize, counterpart, and enhance the H–H inequality (references [3–13] are a good place to start for interested readers).

Historically, Leibniz and L'Hôpital (1695) are credited with the invention of fractional calculus; however, Riemann, Liouville, and Grünwald–Letnikov, among others, made significant contributions to the field later on. The ability of fractional operators to describe natural phenomena in a systematic and comprehensive fashion [14–19] has piqued the curiosity of researchers. By offering an enhanced form of an integral representation for the Appell k-series, Mubeen and Iqbal [20] have contributed to the present research.

Moreover, Khan et al. [21] exploited fuzzy order relations to introduce a new class of convex fuzzy-interval-valued functions (convex *F-I*·*V-F*s), known as (*h*1, *h*2)-convex *F-I*·*V-F*s, as well as a novel version of the H–H type inequality for (*h*1, *h*2)-convex *F-I*·*V-F*s that incorporates the fuzzy interval Riemann integral. Khan et al. went a step further by providing new convex and extended convex *I*·*V-F* classes, as well as new fractional H–H type and H–H type inequalities for left and right (*h*1, *h*2)-preinvex *I*·*V-F* [22], left and right p-convex *I*·*V-Fs* [23], left and right log-h-convex *I*·*V-Fs* [24], and the references therein. For further analysis of the literature on the applications and properties of fuzzy Riemannian integrals, inequalities, and generalized convex fuzzy mappings, we refer the readers to cited works [25–56] and the references therein.

Motivated and inspired by the fascinating features of symmetry, convexity, and the fractional operator, we study the new H–H and related H–H type inequalities for left and right convex *I*·*V-Fs*, based upon the pseudo order relation and the Riemann–Liouville fractional integral operator.

#### **2. Preliminaries**

First, we offer some background information on interval-valued functions, the theory of convexity, interval-valued integration, and interval-valued fractional integration, which will be utilized throughout the article.

We offer some fundamental arithmetic regarding interval analysis in this paragraph, which will be quite useful throughout the article.

$$Y = [Y\_{\ast}, Y^{\ast}], \ Q = [Q\_{\ast}, Q^{\ast}] \quad (Y\_{\ast} \le \omega \le Y^{\ast} \text{ and } Q\_{\ast} \le z \le Q^{\ast}, \ \omega, z \in \mathbb{R})$$

$$\begin{aligned} Y + Q &= [Y\_{\ast}, Y^{\ast}] + [Q\_{\ast}, Q^{\ast}] = [Y\_{\ast} + Q\_{\ast}, Y^{\ast} + Q^{\ast}],\\ Y - Q &= [Y\_{\ast}, Y^{\ast}] - [Q\_{\ast}, Q^{\ast}] = [Y\_{\ast} - Q^{\ast}, Y^{\ast} - Q\_{\ast}],\\ Y \times Q &= [Y\_{\ast}, Y^{\ast}] \times [Q\_{\ast}, Q^{\ast}] = [\min \mathcal{K}, \max \mathcal{K}], \quad \mathcal{K} = \{ Y\_{\ast} Q\_{\ast}, Y\_{\ast} Q^{\ast}, Y^{\ast} Q\_{\ast}, Y^{\ast} Q^{\ast} \},\\ \nu \cdot [Y\_{\ast}, Y^{\ast}] &= \begin{cases} [\nu Y\_{\ast}, \nu Y^{\ast}] & \text{if } \nu > 0, \\ \{0\} & \text{if } \nu = 0, \\ [\nu Y^{\ast}, \nu Y\_{\ast}] & \text{if } \nu < 0. \end{cases} \end{aligned}$$
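A minimal sketch of this interval arithmetic (an illustrative helper class, not from the paper):

```python
# Closed intervals [lo, hi] with the interval operations listed above.
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    lo: float
    hi: float

    def __add__(self, q):
        return Interval(self.lo + q.lo, self.hi + q.hi)

    def __sub__(self, q):
        # [Y*, Y^*] - [Q*, Q^*] = [Y* - Q^*, Y^* - Q*]
        return Interval(self.lo - q.hi, self.hi - q.lo)

    def __mul__(self, q):
        k = [self.lo * q.lo, self.lo * q.hi, self.hi * q.lo, self.hi * q.hi]
        return Interval(min(k), max(k))

    def scale(self, nu):
        # nu . [Y*, Y^*]: the endpoints swap for negative scalars
        if nu == 0:
            return Interval(0.0, 0.0)  # the singleton {0}
        return Interval(nu * self.lo, nu * self.hi) if nu > 0 \
            else Interval(nu * self.hi, nu * self.lo)

y, q = Interval(1.0, 2.0), Interval(0.5, 1.0)
assert y + q == Interval(1.5, 3.0)
assert y - q == Interval(0.0, 1.5)
assert y * q == Interval(0.5, 2.0)
assert y.scale(-2.0) == Interval(-4.0, -2.0)
```

Note the asymmetry of subtraction: the lower endpoint of $Y - Q$ uses the upper endpoint of $Q$, so that the result contains every difference $\omega - z$.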

Let $\mathbb{X}\_I$, $\mathbb{X}\_I^+$, and $\mathbb{X}\_I^-$ be the set of all closed intervals of $\mathbb{R}$, the set of all closed positive intervals of $\mathbb{R}$, and the set of all closed negative intervals of $\mathbb{R}$, respectively.

For $[Y\_{\ast}, Y^{\ast}], [Q\_{\ast}, Q^{\ast}] \in \mathbb{X}\_I$, the inclusion "$\subseteq$" is defined by $[Y\_{\ast}, Y^{\ast}] \subseteq [Q\_{\ast}, Q^{\ast}]$ if and only if $Q\_{\ast} \le Y\_{\ast}$ and $Y^{\ast} \le Q^{\ast}$.

**Remark 1.** *[21] The left and right relation "*$\le\_p$*", defined on* $\mathbb{X}\_I$ *by* $[Y\_{\ast}, Y^{\ast}] \le\_p [Q\_{\ast}, Q^{\ast}]$ *if and only if* $Y\_{\ast} \le Q\_{\ast}$ *and* $Y^{\ast} \le Q^{\ast}$*, for all* $[Y\_{\ast}, Y^{\ast}], [Q\_{\ast}, Q^{\ast}] \in \mathbb{X}\_I$*, is a pseudo order relation. For given* $[Y\_{\ast}, Y^{\ast}], [Q\_{\ast}, Q^{\ast}] \in \mathbb{X}\_I$*, we say that* $[Y\_{\ast}, Y^{\ast}] <\_p [Q\_{\ast}, Q^{\ast}]$ *if and only if* $Y\_{\ast} \le Q\_{\ast}, Y^{\ast} < Q^{\ast}$ *or* $Y\_{\ast} < Q\_{\ast}, Y^{\ast} \le Q^{\ast}$*.*

**Theorem 1.** *[33] If* $Y : [t,s] \subset \mathbb{R} \to \mathbb{X}\_I$ *is an I*·*V-F such that* $Y(\omega) = [Y\_{\ast}(\omega), Y^{\ast}(\omega)]$*, then Y is Riemann integrable over* $[t,s]$ *if and only if* $Y\_{\ast}$ *and* $Y^{\ast}$ *are both Riemann integrable over* $[t,s]$*, such that*

$$(IR)\int\_{t}^{s} Y(\omega)d\omega = \left[ (R)\int\_{t}^{s} Y\_{\*}(\omega)d\omega,\ (R)\int\_{t}^{s} Y^{\*}(\omega)d\omega \right].$$
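For instance (a hedged numeric illustration, not from the paper), for $Y(\omega) = [\omega, \omega + 1]$ on $[0,1]$, Theorem 1 gives $(IR)\int\_0^1 Y(\omega)d\omega = [1/2, \ 3/2]$, computed endpoint by endpoint:

```python
# Endpointwise interval Riemann integral of Y(omega) = [omega, omega + 1] on [0, 1].
def riemann(f, a, b, n=10_000):
    """Midpoint-rule approximation of the Riemann integral of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (k + 0.5) * h) for k in range(n)) * h

lower = riemann(lambda w: w, 0.0, 1.0)        # (R)∫ Y_* = 1/2
upper = riemann(lambda w: w + 1.0, 0.0, 1.0)  # (R)∫ Y^* = 3/2
assert abs(lower - 0.5) < 1e-6 and abs(upper - 1.5) < 1e-6
```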

**Definition 1.** *[28,30] Let* $Y \in \mathcal{L}\big([t,s], \mathbb{X}\_I^+\big)$*. Then, the interval fractional integrals* $\mathcal{Z}\_{t^+}^{\mathfrak{a}}$ *and* $\mathcal{Z}\_{s^-}^{\mathfrak{a}}$ *of order* $\mathfrak{a} > 0$ *are defined by*

$$\mathcal{Z}\_{t^+}^{\mathfrak{a}} \, Y(\omega) = \frac{1}{\varGamma(\mathfrak{a})} \int\_t^{\omega} (\omega - \nu)^{\mathfrak{a} - 1} Y(\nu) d\nu, \quad (\omega > t), \tag{1}$$

*and*

$$\mathcal{Z}\_{\mathbf{s}^{-}}^{\mathfrak{a}} \, Y(\omega) = \frac{1}{\varGamma(\mathfrak{a})} \int\_{\omega}^{\mathfrak{s}} (\nu - \omega)^{\mathfrak{a} - 1} Y(\nu) d\nu, \quad (\omega < s), \tag{2}$$

*respectively, where* $\varGamma(\omega) = \int\_0^{\infty} \nu^{\omega-1} e^{-\nu} d\nu$ *is the Euler gamma function.*
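The left operator in (1) can be approximated numerically; the sketch below (an assumed helper, not from the paper) uses a midpoint rule to avoid the integrable singularity at $\nu = \omega$, and checks the known closed form $\mathcal{Z}\_{t^+}^{\mathfrak{a}} Y(\omega) = (\omega - t)^{\mathfrak{a}} / \varGamma(\mathfrak{a} + 1)$ for the constant function $Y \equiv 1$.

```python
# Midpoint-rule approximation of the left Riemann-Liouville fractional
# integral of a scalar endpoint function.
import math

def rl_left(Y, t, omega, a, n=20_000):
    """Approximate (1/Gamma(a)) * ∫_t^omega (omega - nu)^(a-1) Y(nu) d nu."""
    h = (omega - t) / n
    s = sum((omega - (t + (k + 0.5) * h)) ** (a - 1) * Y(t + (k + 0.5) * h)
            for k in range(n))  # midpoints never hit the singular endpoint
    return s * h / math.gamma(a)

val = rl_left(lambda nu: 1.0, t=0.0, omega=1.0, a=1.5)
exact = 1.0 / math.gamma(2.5)   # (omega - t)^a / Gamma(a + 1)
assert abs(val - exact) < 1e-4
```

Applying it to $Y\_{\ast}$ and $Y^{\ast}$ separately gives the two endpoints of the interval fractional integral, in line with Theorem 1.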

**Definition 2.** *[31] The I*·*V-F* $Y : K \to \mathbb{X}\_I^+$ *is named as the left and right convex I*·*V-F on the convex set K if the following inequality,*

$$\mathcal{Y}(\nu\omega + (1-\nu)z) \le\_p \nu \mathcal{Y}(\omega) + (1-\nu)\mathcal{Y}(z),\tag{3}$$

*holds for all* $\omega, z \in K$ *and* $\nu \in [0, 1]$*. If inequality (3) is reversed, then Y is named as the left and right concave on K*. *Y is affine if and only if it is both left and right convex and left and right concave.*

**Theorem 2.** *[31] Let* $Y : K \to \mathbb{X}\_I^+$ *be an I*·*V-F, such that*

$$\mathcal{Y}(\omega) = [\mathcal{Y}\_\*(\omega), \mathcal{Y}^\*(\omega)], \forall \,\omega \in \mathcal{K} \tag{4}$$

*Then* $\mathcal{Y}$ *is a left and right convex I*·*V-F on* $K$ *if and only if* $Y_*(\omega)$ *and* $Y^*(\omega)$ *are both convex functions on* $K$.

#### **3. Interval Fractional Hermite–Hadamard Inequalities**

The main purpose of this section is to develop a novel version of the H–H inequalities for interval-valued left and right convex functions.

**Theorem 3.** *Let* $\mathcal{Y} : [s,t] \to \mathcal{X}_I^+$ *be a left and right convex I*·*V-F on* $[s,t]$, *given by* $\mathcal{Y}(\omega) = [Y_*(\omega), Y^*(\omega)]$ *for all* $\omega \in [s,t]$. *If* $\mathcal{Y} \in L\left([s,t], \mathcal{X}_I^+\right)$, *then*

$$\mathcal{Y}\left(\frac{s+t}{2}\right) \leq_p \frac{\Gamma(\mathfrak{a}+1)}{2(t-s)^{\mathfrak{a}}} \left[\mathcal{Z}_{s^+}^{\mathfrak{a}} \, \mathcal{Y}(t) + \mathcal{Z}_{t^-}^{\mathfrak{a}} \, \mathcal{Y}(s)\right] \leq_p \frac{\mathcal{Y}(s) + \mathcal{Y}(t)}{2}.\tag{5}$$

*If Y*(*ω*) *is a left and right concave I*·*V-F, then*

$$\mathcal{Y}\left(\frac{s+t}{2}\right) \geq_p \frac{\Gamma(\mathfrak{a}+1)}{2(t-s)^{\mathfrak{a}}} \left[\mathcal{Z}_{s^+}^{\mathfrak{a}} \, \mathcal{Y}(t) + \mathcal{Z}_{t^-}^{\mathfrak{a}} \, \mathcal{Y}(s)\right] \geq_p \frac{\mathcal{Y}(s) + \mathcal{Y}(t)}{2}.\tag{6}$$

**Proof.** Let $\mathcal{Y} : [s,t] \to \mathcal{X}_I^+$ be a left and right convex *I*·*V-F*. Then, by hypothesis, we have:

$$2\mathcal{Y}\left(\frac{s+t}{2}\right) \le\_p \mathcal{Y}(\nu s + (1-\nu)t) + \mathcal{Y}((1-\nu)s + \nu t).$$

Therefore, we have

$$\begin{aligned} 2\mathcal{Y}_*\left(\frac{s+t}{2}\right) &\le \mathcal{Y}_*(\nu s + (1-\nu)t) + \mathcal{Y}_*((1-\nu)s + \nu t), \\ 2\mathcal{Y}^*\left(\frac{s+t}{2}\right) &\le \mathcal{Y}^*(\nu s + (1-\nu)t) + \mathcal{Y}^*((1-\nu)s + \nu t). \end{aligned}$$

Multiplying both sides by $\nu^{\mathfrak{a}-1}$ and integrating the result with respect to $\nu$ over $(0,1)$, we have

$$\begin{cases} 2\int\_{0}^{1} \nu^{a-1} Y\_{\ast} \left( \frac{s+t}{2} \right) d\nu \\ \qquad \le \int\_{0}^{1} \nu^{a-1} Y\_{\ast} (\nu s + (1-\nu)t) d\nu + \int\_{0}^{1} \nu^{a-1} Y\_{\ast} ((1-\nu)s + \nu t) d\nu \\ 2\int\_{0}^{1} \nu^{a-1} Y^{\ast} \left( \frac{s+t}{2} \right) d\nu \\ \qquad \le \int\_{0}^{1} \nu^{a-1} Y^{\ast} (\nu s + (1-\nu)t) d\nu + \int\_{0}^{1} \nu^{a-1} Y^{\ast} ((1-\nu)s + \nu t) d\nu. \end{cases}$$

Let *ω* = *νs* + (1 − *ν*)*t* and *z* = (1 − *ν*)*s* + *νt*. Then, we have

$$\begin{aligned} \frac{2}{\mathfrak{a}}\,Y_*\left(\frac{s+t}{2}\right) &\le \frac{1}{(t-s)^{\mathfrak{a}}}\left[\int_s^t (t-\omega)^{\mathfrak{a}-1}Y_*(\omega)\,d\omega + \int_s^t (z-s)^{\mathfrak{a}-1}Y_*(z)\,dz\right] = \frac{\Gamma(\mathfrak{a})}{(t-s)^{\mathfrak{a}}}\left[\mathcal{Z}_{s^+}^{\mathfrak{a}}\,Y_*(t) + \mathcal{Z}_{t^-}^{\mathfrak{a}}\,Y_*(s)\right], \\ \frac{2}{\mathfrak{a}}\,Y^*\left(\frac{s+t}{2}\right) &\le \frac{1}{(t-s)^{\mathfrak{a}}}\left[\int_s^t (t-\omega)^{\mathfrak{a}-1}Y^*(\omega)\,d\omega + \int_s^t (z-s)^{\mathfrak{a}-1}Y^*(z)\,dz\right] = \frac{\Gamma(\mathfrak{a})}{(t-s)^{\mathfrak{a}}}\left[\mathcal{Z}_{s^+}^{\mathfrak{a}}\,Y^*(t) + \mathcal{Z}_{t^-}^{\mathfrak{a}}\,Y^*(s)\right]. \end{aligned}$$

That is,

$$\frac{2}{a} \left[ Y\_\* \left( \frac{s+t}{2} \right), Y^\* \left( \frac{s+t}{2} \right) \right] \leq\_p \frac{\Gamma(\mathfrak{a})}{(t-s)^{\mathfrak{a}}} \left[ \left[ \mathcal{Z}\_{s^+}^{\mathfrak{a}} \, Y\_\*(t) + \mathcal{Z}\_{t^-}^{\mathfrak{a}} \, Y\_\*(s) \right], \left[ \mathcal{Z}\_{s^+}^{\mathfrak{a}} \, Y^\*(t) + \mathcal{Z}\_{t^-}^{\mathfrak{a}} \, Y^\*(s) \right] \right]$$

Thus:

$$\frac{2}{\mathfrak{a}}\,\,\mathrm{Y}\left(\frac{s+t}{2}\right)\leq\_{p}\frac{\Gamma(\mathfrak{a})}{\left(t-s\right)^{\mathfrak{a}}}\left[\mathcal{Z}\_{s^{+}}^{\mathfrak{a}}\,\,\,\,\mathcal{Y}(t)+\mathcal{Z}\_{t^{-}}^{\mathfrak{a}}\,\,\,\,\mathcal{Y}(s)\right]\tag{7}$$

Similar to the above, we have

$$\frac{\Gamma(\mathfrak{a})}{(t-s)^{\mathfrak{a}}} \left[\mathcal{Z}_{s^+}^{\mathfrak{a}} \, \mathcal{Y}(t) + \mathcal{Z}_{t^-}^{\mathfrak{a}} \, \mathcal{Y}(s)\right] \le_p \frac{\mathcal{Y}(s) + \mathcal{Y}(t)}{\mathfrak{a}}. \tag{8}$$

Combining (7) and (8), we have

$$\mathcal{Y}\left(\frac{s+t}{2}\right) \leq_p \frac{\Gamma(\mathfrak{a}+1)}{2(t-s)^{\mathfrak{a}}} \left[\mathcal{Z}_{s^+}^{\mathfrak{a}} \, \mathcal{Y}(t) + \mathcal{Z}_{t^-}^{\mathfrak{a}} \, \mathcal{Y}(s)\right] \leq_p \frac{\mathcal{Y}(s) + \mathcal{Y}(t)}{2}.$$

Hence, we achieve the required result.

#### **Remark 2.** *We may observe from Theorem 3 that:*

*Let us take* $\mathfrak{a} = 1$. *Then, from Theorem 1 and (5), we obtain the following inequality (see [23]):*

$$\mathbb{Y}\left(\frac{s+t}{2}\right) \le\_p \frac{1}{t-s} \int\_s^t \mathbb{Y}(\omega)d\omega \le\_p \frac{\mathbb{Y}(s) + \mathbb{Y}(t)}{2}.$$

*If we take* $Y_*(\omega) = Y^*(\omega)$, *then from Theorem 3 and (5) we obtain the following inequality (see [32]):*

$$Y\left(\frac{s+t}{2}\right) \le \frac{\Gamma(\mathfrak{a}+1)}{2(t-s)^{\mathfrak{a}}} \left[\mathcal{Z}_{s^+}^{\mathfrak{a}} \, Y(t) + \mathcal{Z}_{t^-}^{\mathfrak{a}} \, Y(s)\right] \le \frac{Y(s) + Y(t)}{2}.$$

*Let us take* $\mathfrak{a} = 1$ *and* $Y_*(\omega) = Y^*(\omega)$. *Then, from Theorem 1 and (5), we obtain the classical* H*–*H *inequality.*

**Example 1.** *Let* $\mathfrak{a} = \frac{1}{2}$, $\omega \in [2,3]$, *and let the I*·*V-F* $\mathcal{Y} : [s,t] = [2,3] \to \mathcal{X}_I^+$ *be given by* $\mathcal{Y}(\omega) = [1,2]\left(2-\omega^{\frac{1}{2}}\right)$. *Since the left and right endpoint functions* $Y_*(\omega) = 2-\omega^{\frac{1}{2}}$ *and* $Y^*(\omega) = 2\left(2-\omega^{\frac{1}{2}}\right)$ *are convex,* $\mathcal{Y}(\omega)$ *is a left and right convex I*·*V-F. Clearly* $\mathcal{Y} \in L\left([s,t], \mathcal{X}_I^+\right)$, *and*

$$\begin{array}{l} \mathcal{Y}\_{\*}\left(\frac{s+t}{2}\right) = \mathcal{Y}\_{\*}\left(\frac{5}{2}\right) = \frac{4-\sqrt{10}}{2} \\ \mathcal{Y}^{\*}\left(\frac{s+t}{2}\right) = \mathcal{Y}^{\*}\left(\frac{5}{2}\right) = 4-\sqrt{10} \\ \frac{\mathcal{Y}\_{\*}(s)+\mathcal{Y}\_{\*}(t)}{2} = \frac{4-\sqrt{2}-\sqrt{3}}{2} \\ \frac{\mathcal{Y}^{\*}(s)+\mathcal{Y}^{\*}(t)}{2} = 4-\sqrt{2}-\sqrt{3} \end{array}$$

*Note that*

$$\begin{aligned} \frac{\Gamma(\mathfrak{a}+1)}{2(t-s)^{\mathfrak{a}}}\left[\mathcal{Z}_{s^+}^{\mathfrak{a}}\,Y_*(t) + \mathcal{Z}_{t^-}^{\mathfrak{a}}\,Y_*(s)\right] &= \frac{\Gamma\left(\frac{3}{2}\right)}{2}\frac{1}{\sqrt{\pi}}\int_2^3 (3-\omega)^{-\frac{1}{2}}\left(2-\omega^{\frac{1}{2}}\right)d\omega + \frac{\Gamma\left(\frac{3}{2}\right)}{2}\frac{1}{\sqrt{\pi}}\int_2^3 (\omega-2)^{-\frac{1}{2}}\left(2-\omega^{\frac{1}{2}}\right)d\omega \\ &= \frac{1}{4}\left[\frac{7393}{10{,}000} + \frac{9501}{10{,}000}\right] = \frac{8447}{20{,}000}, \\ \frac{\Gamma(\mathfrak{a}+1)}{2(t-s)^{\mathfrak{a}}}\left[\mathcal{Z}_{s^+}^{\mathfrak{a}}\,Y^*(t) + \mathcal{Z}_{t^-}^{\mathfrak{a}}\,Y^*(s)\right] &= \frac{\Gamma\left(\frac{3}{2}\right)}{2}\frac{1}{\sqrt{\pi}}\int_2^3 (3-\omega)^{-\frac{1}{2}}\,2\left(2-\omega^{\frac{1}{2}}\right)d\omega + \frac{\Gamma\left(\frac{3}{2}\right)}{2}\frac{1}{\sqrt{\pi}}\int_2^3 (\omega-2)^{-\frac{1}{2}}\,2\left(2-\omega^{\frac{1}{2}}\right)d\omega \\ &= \frac{1}{2}\left[\frac{7393}{10{,}000} + \frac{9501}{10{,}000}\right] = \frac{8447}{10{,}000}. \end{aligned}$$

*Therefore*,

$$\left[\frac{4-\sqrt{10}}{2}, 4-\sqrt{10}\right] \le\_p \left[\frac{8447}{20,000}, \frac{8447}{10,000}\right] \le\_p \left[\frac{4-\sqrt{2}-\sqrt{3}}{2}, 4-\sqrt{2}-\sqrt{3}\right]$$

*and Theorem 3 is verified.*
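Example 1 can also be checked numerically. The sketch below (our own helper names and a crude midpoint rule, not part of the paper) evaluates the middle term of (5) for the endpoint functions of Example 1 and confirms the displayed chain:

```python
import math

def frac_pair(f, s, t, a, n=100_000):
    """Midpoint-rule value of Z^a_{s+} f(t) + Z^a_{t-} f(s) on [s, t]."""
    h = (t - s) / n
    acc = 0.0
    for k in range(n):
        w = s + (k + 0.5) * h
        acc += ((t - w) ** (a - 1.0) + (w - s) ** (a - 1.0)) * f(w)
    return acc * h / math.gamma(a)

a, s, t = 0.5, 2.0, 3.0
y_low = lambda w: 2.0 - math.sqrt(w)           # Y_*
y_up = lambda w: 2.0 * (2.0 - math.sqrt(w))    # Y^*

coef = math.gamma(a + 1.0) / (2.0 * (t - s) ** a)
mid_low = coef * frac_pair(y_low, s, t, a)     # ~ 8447/20,000
mid_up = coef * frac_pair(y_up, s, t, a)       # ~ 8447/10,000
```

Both middle values land between the endpoint bounds $\frac{4-\sqrt{10}}{2} \le \cdot \le \frac{4-\sqrt{2}-\sqrt{3}}{2}$ (and twice those bounds for $Y^*$), as Theorem 3 requires.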

The next two results establish fractional inequalities for the product of left and right convex *I*·*V-Fs*.

**Theorem 4.** *Let* $\mathcal{Y}, \mathfrak{G} : [s,t] \to \mathcal{X}_I^+$ *be two left and right convex I*·*V-Fs on* $[s,t]$, *given by* $\mathcal{Y}(\omega) = [Y_*(\omega), Y^*(\omega)]$ *and* $\mathfrak{G}(\omega) = [\mathfrak{G}_*(\omega), \mathfrak{G}^*(\omega)]$ *for all* $\omega \in [s,t]$. *If* $\mathcal{Y}\times\mathfrak{G} \in L\left([s,t], \mathcal{X}_I^+\right)$, *then*

$$\begin{aligned} &\frac{\Gamma(\mathfrak{a}+1)}{2(t-s)^{\mathfrak{a}}}\left[\mathcal{Z}_{s^+}^{\mathfrak{a}}\,\mathcal{Y}(t)\times\mathfrak{G}(t) + \mathcal{Z}_{t^-}^{\mathfrak{a}}\,\mathcal{Y}(s)\times\mathfrak{G}(s)\right] \\ &\quad\leq_p \left(\frac{1}{2}-\frac{\mathfrak{a}}{(\mathfrak{a}+1)(\mathfrak{a}+2)}\right)\varphi(s,t) + \left(\frac{\mathfrak{a}}{(\mathfrak{a}+1)(\mathfrak{a}+2)}\right)\nabla(s,t), \end{aligned}$$

where $\varphi(s,t) = \mathcal{Y}(s)\times\mathfrak{G}(s) + \mathcal{Y}(t)\times\mathfrak{G}(t)$, $\nabla(s,t) = \mathcal{Y}(s)\times\mathfrak{G}(t) + \mathcal{Y}(t)\times\mathfrak{G}(s)$, $\varphi(s,t) = [\varphi_*(s,t), \varphi^*(s,t)]$, and $\nabla(s,t) = [\nabla_*(s,t), \nabla^*(s,t)]$.

**Proof.** Since *Y*, G are both left and right convex *I*·*V-Fs,* then we have

$$\begin{aligned} \mathcal{Y}\_\*(\nu s + (1 - \nu)t) &\leq \nu \mathcal{Y}\_\*(s) + (1 - \nu)\mathcal{Y}\_\*(t), \\ \mathcal{Y}^\*(\nu s + (1 - \nu)t) &\leq \nu \mathcal{Y}^\*(s) + (1 - \nu)\mathcal{Y}^\*(t). \end{aligned}$$

and

$$
\mathfrak{G}\_\*(\nu s + (1 - \nu)t) \le \nu \mathfrak{G}\_\*(s) + (1 - \nu)\mathfrak{G}\_\*(t),
$$

$$
\mathfrak{G}^\*(\nu s + (1 - \nu)t) \le \nu \mathfrak{G}^\*(s) + (1 - \nu)\mathfrak{G}^\*(t).
$$

From the definition of left and right convex *I*·*V-Fs,* it follows that 0 ≤*<sup>p</sup> Y*(*ω*) and 0 ≤*<sup>p</sup>* G(*ω*), so

$$\begin{array}{c} Y\_\*(\nu s + (1 - \nu)t) \times \mathfrak{G}\_\*(\nu s + (1 - \nu)t) \\ \leq & (\nu Y\_\*(s) + (1 - \nu)Y\_\*(t))(\nu \mathfrak{G}\_\*(s) + (1 - \nu)\mathfrak{G}\_\*(t)) \\ = & \nu^2 Y\_\*(s) \times \mathfrak{G}\_\*(s) + (1 - \nu)^2 Y\_\*(t) \times \mathfrak{G}\_\*(t) \\ \qquad + \nu (1 - \nu) Y\_\*(s) \times \mathfrak{G}\_\*(t) + \nu (1 - \nu) Y\_\*(t) \times \mathfrak{G}\_\*(s) \\ Y^\*(\nu s + (1 - \nu)t) \times \mathfrak{G}^\*(\nu s + (1 - \nu)t) \\ \leq & (\nu Y^\*(s) + (1 - \nu)Y^\*(t))(\nu \mathfrak{G}^\*(s) + (1 - \nu)\mathfrak{G}^\*(t)) \\ = & \nu^2 Y^\*(s) \times \mathfrak{G}^\*(s) + (1 - \nu)^2 Y^\*(t) \times \mathfrak{G}^\*(t) \\ + \nu (1 - \nu) Y^\*(s) \times \mathfrak{G}^\*(t) + \nu (1 - \nu) Y^\*(t) \times \mathfrak{G}^\*(s), \end{array} \tag{9}$$

Analogously, we have

$$\begin{array}{c} Y\_{\*}((1-\nu)s+\nu t)\mathfrak{G}\_{\*}((1-\nu)s+\nu t) \\ \leq & (1-\nu)^{2}Y\_{\*}(s)\times\mathfrak{G}\_{\*}(s)+\nu^{2}Y\_{\*}(t)\times\mathfrak{G}\_{\*}(t) \\ +\nu(1-\nu)Y\_{\*}(s)\times\mathfrak{G}\_{\*}(t)+\nu(1-\nu)Y\_{\*}(t)\times\mathfrak{G}\_{\*}(s) \\ Y^{\*}((1-\nu)s+\nu t)\times\mathfrak{G}^{\*}((1-\nu)s+\nu t) \\ \leq & (1-\nu)^{2}Y^{\*}(s)\times\mathfrak{G}^{\*}(s)+\nu^{2}Y^{\*}(t)\times\mathfrak{G}^{\*}(t) \\ +\nu(1-\nu)Y^{\*}(s)\times\mathfrak{G}^{\*}(t)+\nu(1-\nu)Y^{\*}(t)\times\mathfrak{G}^{\*}(s). \end{array} \tag{10}$$

Adding (9) and (10), we have

$$\begin{array}{l} Y\_{\*} (\upsilon s + (1 - \upsilon)t) \times \mathfrak{G}\_{\*} (\upsilon s + (1 - \upsilon)t) \\ \qquad \qquad + Y\_{\*} ((1 - \upsilon)s + \upsilon t) \times \mathfrak{G}\_{\*} ((1 - \upsilon)s + \upsilon t) \\ \qquad \qquad \leq \left[ \upsilon^{2} + (1 - \upsilon)^{2} \right] [Y\_{\*} (s) \times \mathfrak{G}\_{\*} (s) + Y\_{\*} (t) \times \mathfrak{G}\_{\*} (t)] \\ \qquad \qquad + 2\upsilon (1 - \upsilon) [Y\_{\*} (t) \times \mathfrak{G}\_{\*} (s) + Y\_{\*} (s) \times \mathfrak{G}\_{\*} (t)] \\ Y^{\*} (\upsilon s + (1 - \upsilon)t) \times \mathfrak{G}^{\*} (\upsilon s + (1 - \upsilon)t) \\ \qquad \qquad + Y^{\*} ((1 - \upsilon)s + \upsilon t) \times \mathfrak{G}^{\*} ((1 - \upsilon)s + \upsilon t) \\ \qquad \leq \left[ \upsilon^{2} + (1 - \upsilon)^{2} \right] [Y^{\*} (s) \times \mathfrak{G}^{\*} (s) + Y^{\*} (t) \times \mathfrak{G}^{\*} (t)] \\ \qquad \qquad + 2\upsilon (1 - \upsilon) [Y^{\*} (t) \times \mathfrak{G}^{\*} (s) + Y^{\*} (s) \times \mathfrak{G}^{\*} (t)]. \end{array} \tag{11}$$

Multiplying (11) by $\nu^{\mathfrak{a}-1}$ and integrating the result with respect to $\nu$ over $(0,1)$, we have

$$\begin{aligned} &\int_0^1 \nu^{\mathfrak{a}-1}\,Y_*(\nu s+(1-\nu)t)\times\mathfrak{G}_*(\nu s+(1-\nu)t)\,d\nu + \int_0^1 \nu^{\mathfrak{a}-1}\,Y_*((1-\nu)s+\nu t)\times\mathfrak{G}_*((1-\nu)s+\nu t)\,d\nu \\ &\quad\le \varphi_*(s,t)\int_0^1 \nu^{\mathfrak{a}-1}\left[\nu^2+(1-\nu)^2\right]d\nu + 2\nabla_*(s,t)\int_0^1 \nu^{\mathfrak{a}-1}\nu(1-\nu)\,d\nu, \\ &\int_0^1 \nu^{\mathfrak{a}-1}\,Y^*(\nu s+(1-\nu)t)\times\mathfrak{G}^*(\nu s+(1-\nu)t)\,d\nu + \int_0^1 \nu^{\mathfrak{a}-1}\,Y^*((1-\nu)s+\nu t)\times\mathfrak{G}^*((1-\nu)s+\nu t)\,d\nu \\ &\quad\le \varphi^*(s,t)\int_0^1 \nu^{\mathfrak{a}-1}\left[\nu^2+(1-\nu)^2\right]d\nu + 2\nabla^*(s,t)\int_0^1 \nu^{\mathfrak{a}-1}\nu(1-\nu)\,d\nu. \end{aligned}$$

It follows that

$$\begin{aligned} \frac{\Gamma(\mathfrak{a})}{(t-s)^{\mathfrak{a}}}\left[\mathcal{Z}_{s^+}^{\mathfrak{a}}\,Y_*(t)\times\mathfrak{G}_*(t) + \mathcal{Z}_{t^-}^{\mathfrak{a}}\,Y_*(s)\times\mathfrak{G}_*(s)\right] &\le \frac{2}{\mathfrak{a}}\left(\frac{1}{2}-\frac{\mathfrak{a}}{(\mathfrak{a}+1)(\mathfrak{a}+2)}\right)\varphi_*(s,t) + \frac{2}{\mathfrak{a}}\left(\frac{\mathfrak{a}}{(\mathfrak{a}+1)(\mathfrak{a}+2)}\right)\nabla_*(s,t), \\ \frac{\Gamma(\mathfrak{a})}{(t-s)^{\mathfrak{a}}}\left[\mathcal{Z}_{s^+}^{\mathfrak{a}}\,Y^*(t)\times\mathfrak{G}^*(t) + \mathcal{Z}_{t^-}^{\mathfrak{a}}\,Y^*(s)\times\mathfrak{G}^*(s)\right] &\le \frac{2}{\mathfrak{a}}\left(\frac{1}{2}-\frac{\mathfrak{a}}{(\mathfrak{a}+1)(\mathfrak{a}+2)}\right)\varphi^*(s,t) + \frac{2}{\mathfrak{a}}\left(\frac{\mathfrak{a}}{(\mathfrak{a}+1)(\mathfrak{a}+2)}\right)\nabla^*(s,t). \end{aligned}$$

That is,

$$\begin{aligned} &\frac{\Gamma(\mathfrak{a})}{(t-s)^{\mathfrak{a}}}\left[\left[\mathcal{Z}_{s^+}^{\mathfrak{a}}\,Y_*(t)\times\mathfrak{G}_*(t) + \mathcal{Z}_{t^-}^{\mathfrak{a}}\,Y_*(s)\times\mathfrak{G}_*(s)\right],\ \left[\mathcal{Z}_{s^+}^{\mathfrak{a}}\,Y^*(t)\times\mathfrak{G}^*(t) + \mathcal{Z}_{t^-}^{\mathfrak{a}}\,Y^*(s)\times\mathfrak{G}^*(s)\right]\right] \\ &\quad\le_p \frac{2}{\mathfrak{a}}\left(\frac{1}{2}-\frac{\mathfrak{a}}{(\mathfrak{a}+1)(\mathfrak{a}+2)}\right)\left[\varphi_*(s,t),\ \varphi^*(s,t)\right] + \frac{2}{\mathfrak{a}}\left(\frac{\mathfrak{a}}{(\mathfrak{a}+1)(\mathfrak{a}+2)}\right)\left[\nabla_*(s,t),\ \nabla^*(s,t)\right]. \end{aligned}$$

Thus,

$$\begin{aligned} &\frac{\Gamma(\mathfrak{a}+1)}{2(t-s)^{\mathfrak{a}}}\left[\mathcal{Z}_{s^+}^{\mathfrak{a}}\,\mathcal{Y}(t)\times\mathfrak{G}(t) + \mathcal{Z}_{t^-}^{\mathfrak{a}}\,\mathcal{Y}(s)\times\mathfrak{G}(s)\right] \\ &\quad\leq_p \left(\frac{1}{2}-\frac{\mathfrak{a}}{(\mathfrak{a}+1)(\mathfrak{a}+2)}\right)\varphi(s,t) + \left(\frac{\mathfrak{a}}{(\mathfrak{a}+1)(\mathfrak{a}+2)}\right)\nabla(s,t), \end{aligned}$$

and the theorem has been established.

**Example 2.** *Let* $[s,t] = [0,2]$, $\mathfrak{a} = \frac{1}{2}$, $\mathcal{Y}(\omega) = \left[\frac{\omega}{2}, \frac{3\omega}{2}\right]$, *and* $\mathfrak{G}(\omega) = [\omega, 3\omega]$. *Since the left and right endpoint functions* $Y_*(\omega) = \frac{\omega}{2}$, $Y^*(\omega) = \frac{3\omega}{2}$, $\mathfrak{G}_*(\omega) = \omega$, *and* $\mathfrak{G}^*(\omega) = 3\omega$ *are convex, both* $\mathcal{Y}(\omega)$ *and* $\mathfrak{G}(\omega)$ *are left and right convex I*·*V-Fs. Clearly* $\mathcal{Y}\times\mathfrak{G} \in L\left([s,t], \mathcal{X}_I^+\right)$, *and*

$$\begin{aligned} \frac{\Gamma(1+\mathfrak{a})}{2(t-s)^{\mathfrak{a}}}\left[\mathcal{Z}_{s^+}^{\mathfrak{a}}\,Y_*(t)\times\mathfrak{G}_*(t) + \mathcal{Z}_{t^-}^{\mathfrak{a}}\,Y_*(s)\times\mathfrak{G}_*(s)\right] &= \frac{\Gamma\left(\frac{3}{2}\right)}{2\sqrt{2}}\frac{1}{\sqrt{\pi}}\int_0^2 (2-\omega)^{-\frac{1}{2}}\left(\frac{1}{2}\omega^2\right)d\omega + \frac{\Gamma\left(\frac{3}{2}\right)}{2\sqrt{2}}\frac{1}{\sqrt{\pi}}\int_0^2 \omega^{-\frac{1}{2}}\left(\frac{1}{2}\omega^2\right)d\omega \approx 0.7333, \\ \frac{\Gamma(1+\mathfrak{a})}{2(t-s)^{\mathfrak{a}}}\left[\mathcal{Z}_{s^+}^{\mathfrak{a}}\,Y^*(t)\times\mathfrak{G}^*(t) + \mathcal{Z}_{t^-}^{\mathfrak{a}}\,Y^*(s)\times\mathfrak{G}^*(s)\right] &= \frac{\Gamma\left(\frac{3}{2}\right)}{2\sqrt{2}}\frac{1}{\sqrt{\pi}}\int_0^2 (2-\omega)^{-\frac{1}{2}}\,\frac{9}{2}\omega^2\,d\omega + \frac{\Gamma\left(\frac{3}{2}\right)}{2\sqrt{2}}\frac{1}{\sqrt{\pi}}\int_0^2 \omega^{-\frac{1}{2}}\,\frac{9}{2}\omega^2\,d\omega \approx 6.5997. \end{aligned}$$

*Note that*

$$\begin{cases} \left(\frac{1}{2}-\frac{\mathfrak{a}}{(\mathfrak{a}+1)(\mathfrak{a}+2)}\right)\varphi_*(s,t) = \frac{11}{30}\left[Y_*(s)\times\mathfrak{G}_*(s) + Y_*(t)\times\mathfrak{G}_*(t)\right] = \frac{11}{15}, \\ \left(\frac{1}{2}-\frac{\mathfrak{a}}{(\mathfrak{a}+1)(\mathfrak{a}+2)}\right)\varphi^*(s,t) = \frac{11}{30}\left[Y^*(s)\times\mathfrak{G}^*(s) + Y^*(t)\times\mathfrak{G}^*(t)\right] = \frac{33}{5}, \\ \left(\frac{\mathfrak{a}}{(\mathfrak{a}+1)(\mathfrak{a}+2)}\right)\nabla_*(s,t) = \frac{2}{15}\left[Y_*(s)\times\mathfrak{G}_*(t) + Y_*(t)\times\mathfrak{G}_*(s)\right] = \frac{2}{15}(0), \\ \left(\frac{\mathfrak{a}}{(\mathfrak{a}+1)(\mathfrak{a}+2)}\right)\nabla^*(s,t) = \frac{2}{15}\left[Y^*(s)\times\mathfrak{G}^*(t) + Y^*(t)\times\mathfrak{G}^*(s)\right] = \frac{2}{15}(0). \end{cases}$$

*Therefore, we have*

$$\left(\frac{1}{2}-\frac{\mathfrak{a}}{(\mathfrak{a}+1)(\mathfrak{a}+2)}\right)\varphi(s,t) + \left(\frac{\mathfrak{a}}{(\mathfrak{a}+1)(\mathfrak{a}+2)}\right)\nabla(s,t) = \left[\frac{11}{15}, \frac{33}{5}\right] + \frac{2}{15}[0,0] = \left[\frac{11}{15}, \frac{33}{5}\right].$$

*It follows that*

$$[0.7333, \ 6.5997] \leq\_p \left[\frac{11}{15}, \frac{33}{5}\right]$$

*and Theorem 4 has been demonstrated*.
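The computations of Example 2 are easy to reproduce numerically; the sketch below (our own helper names, not part of the paper) evaluates the fractional side of Theorem 4 for the endpoint products $Y_*\mathfrak{G}_* = \omega^2/2$ and $Y^*\mathfrak{G}^* = 9\omega^2/2$:

```python
import math

def frac_pair(f, s, t, a, n=100_000):
    """Midpoint-rule value of Z^a_{s+} f(t) + Z^a_{t-} f(s) on [s, t]."""
    h = (t - s) / n
    acc = 0.0
    for k in range(n):
        w = s + (k + 0.5) * h
        acc += ((t - w) ** (a - 1.0) + (w - s) ** (a - 1.0)) * f(w)
    return acc * h / math.gamma(a)

a, s, t = 0.5, 0.0, 2.0
coef = math.gamma(a + 1.0) / (2.0 * (t - s) ** a)
lhs_low = coef * frac_pair(lambda w: 0.5 * w * w, s, t, a)   # ~ 11/15
lhs_up = coef * frac_pair(lambda w: 4.5 * w * w, s, t, a)    # ~ 33/5

# Right-hand side of Theorem 4: here nabla(s,t) = [0,0] and phi(s,t) = [2,18].
c = a / ((a + 1.0) * (a + 2.0))   # = 2/15 for a = 1/2
rhs_low = (0.5 - c) * 2.0         # = 11/15
rhs_up = (0.5 - c) * 18.0         # = 33/5
```

For these linear endpoint functions the inequality of Theorem 4 is attained with equality, matching the example.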

**Theorem 5.** *Let* $\mathcal{Y}, \mathfrak{G} : [s,t] \to \mathcal{X}_I^+$ *be two left and right convex I*·*V-Fs, given by* $\mathcal{Y}(\omega) = [Y_*(\omega), Y^*(\omega)]$ *and* $\mathfrak{G}(\omega) = [\mathfrak{G}_*(\omega), \mathfrak{G}^*(\omega)]$ *for all* $\omega \in [s,t]$. *If* $\mathcal{Y}\times\mathfrak{G} \in L\left([s,t], \mathcal{X}_I^+\right)$, *then*

$$\begin{aligned} \frac{1}{\mathfrak{a}}\,\mathcal{Y}\left(\frac{s+t}{2}\right)\times\mathfrak{G}\left(\frac{s+t}{2}\right) \le_p\ &\frac{\Gamma(\mathfrak{a})}{4(t-s)^{\mathfrak{a}}}\left[\mathcal{Z}_{s^+}^{\mathfrak{a}}\,\mathcal{Y}(t)\times\mathfrak{G}(t) + \mathcal{Z}_{t^-}^{\mathfrak{a}}\,\mathcal{Y}(s)\times\mathfrak{G}(s)\right] \\ &+ \frac{1}{2\mathfrak{a}}\left(\frac{1}{2}-\frac{\mathfrak{a}}{(\mathfrak{a}+1)(\mathfrak{a}+2)}\right)\nabla(s,t) + \frac{1}{2\mathfrak{a}}\left(\frac{\mathfrak{a}}{(\mathfrak{a}+1)(\mathfrak{a}+2)}\right)\varphi(s,t), \end{aligned}$$

*where ϕ*(*s*, *t*) = *Y*(*s*) × G(*s*) + *Y*(*t*) × G(*t*), ∇(*s*, *t*) = *Y*(*s*) × G(*t*) + *Y*(*t*) × G(*s*), *ϕ*(*s*, *t*) = [*ϕ*∗(*s*, *t*), *ϕ* ∗ (*s*, *t*)], *and* ∇(*s*, *t*) = [∇∗(*s*, *t*), ∇<sup>∗</sup> (*s*, *t*)].

**Proof.** Consider that $\mathcal{Y}, \mathfrak{G} : [s,t] \to \mathcal{X}_I^+$ are left and right convex *I*·*V-Fs*. Then, by hypothesis, we have

$$\begin{aligned} &Y_*\left(\frac{s+t}{2}\right)\times\mathfrak{G}_*\left(\frac{s+t}{2}\right) \\ &\le \frac{1}{4}\Big[Y_*(\nu s+(1-\nu)t)\times\mathfrak{G}_*(\nu s+(1-\nu)t) + Y_*(\nu s+(1-\nu)t)\times\mathfrak{G}_*((1-\nu)s+\nu t)\Big] \\ &\quad + \frac{1}{4}\Big[Y_*((1-\nu)s+\nu t)\times\mathfrak{G}_*(\nu s+(1-\nu)t) + Y_*((1-\nu)s+\nu t)\times\mathfrak{G}_*((1-\nu)s+\nu t)\Big] \\ &\le \frac{1}{4}\Big[Y_*(\nu s+(1-\nu)t)\times\mathfrak{G}_*(\nu s+(1-\nu)t) + Y_*((1-\nu)s+\nu t)\times\mathfrak{G}_*((1-\nu)s+\nu t)\Big] \\ &\quad + \frac{1}{4}\Big[\big(\nu Y_*(s)+(1-\nu)Y_*(t)\big)\times\big((1-\nu)\mathfrak{G}_*(s)+\nu\mathfrak{G}_*(t)\big) + \big((1-\nu)Y_*(s)+\nu Y_*(t)\big)\times\big(\nu\mathfrak{G}_*(s)+(1-\nu)\mathfrak{G}_*(t)\big)\Big] \\ &= \frac{1}{4}\Big[Y_*(\nu s+(1-\nu)t)\times\mathfrak{G}_*(\nu s+(1-\nu)t) + Y_*((1-\nu)s+\nu t)\times\mathfrak{G}_*((1-\nu)s+\nu t)\Big] \\ &\quad + \frac{1}{4}\Big[\big\{\nu^2+(1-\nu)^2\big\}\nabla_*(s,t) + \{\nu(1-\nu)+(1-\nu)\nu\}\varphi_*(s,t)\Big], \end{aligned} \tag{12}$$

and the analogous chain holds for $Y^*$ and $\mathfrak{G}^*$.

Multiplying (12) by $\nu^{\mathfrak{a}-1}$ and integrating over $(0,1)$, we get

$$\begin{aligned} \frac{1}{\mathfrak{a}}\,Y_*\left(\frac{s+t}{2}\right)\times\mathfrak{G}_*\left(\frac{s+t}{2}\right) &\le \frac{1}{4(t-s)^{\mathfrak{a}}}\left[\int_s^t (t-\omega)^{\mathfrak{a}-1}Y_*(\omega)\times\mathfrak{G}_*(\omega)\,d\omega + \int_s^t (z-s)^{\mathfrak{a}-1}Y_*(z)\times\mathfrak{G}_*(z)\,dz\right] \\ &\quad + \frac{1}{2\mathfrak{a}}\left(\frac{1}{2}-\frac{\mathfrak{a}}{(\mathfrak{a}+1)(\mathfrak{a}+2)}\right)\nabla_*(s,t) + \frac{1}{2\mathfrak{a}}\left(\frac{\mathfrak{a}}{(\mathfrak{a}+1)(\mathfrak{a}+2)}\right)\varphi_*(s,t) \\ &= \frac{\Gamma(\mathfrak{a})}{4(t-s)^{\mathfrak{a}}}\left[\mathcal{Z}_{s^+}^{\mathfrak{a}}\,Y_*(t)\times\mathfrak{G}_*(t) + \mathcal{Z}_{t^-}^{\mathfrak{a}}\,Y_*(s)\times\mathfrak{G}_*(s)\right] \\ &\quad + \frac{1}{2\mathfrak{a}}\left(\frac{1}{2}-\frac{\mathfrak{a}}{(\mathfrak{a}+1)(\mathfrak{a}+2)}\right)\nabla_*(s,t) + \frac{1}{2\mathfrak{a}}\left(\frac{\mathfrak{a}}{(\mathfrak{a}+1)(\mathfrak{a}+2)}\right)\varphi_*(s,t), \end{aligned}$$

together with the same estimate for $Y^*\times\mathfrak{G}^*$.

That is,

$$\begin{aligned} \frac{1}{\mathfrak{a}}\,\mathcal{Y}\left(\frac{s+t}{2}\right)\times\mathfrak{G}\left(\frac{s+t}{2}\right) \le_p\ &\frac{\Gamma(\mathfrak{a})}{4(t-s)^{\mathfrak{a}}}\left[\mathcal{Z}_{s^+}^{\mathfrak{a}}\,\mathcal{Y}(t)\times\mathfrak{G}(t) + \mathcal{Z}_{t^-}^{\mathfrak{a}}\,\mathcal{Y}(s)\times\mathfrak{G}(s)\right] \\ &+ \frac{1}{2\mathfrak{a}}\left(\frac{1}{2}-\frac{\mathfrak{a}}{(\mathfrak{a}+1)(\mathfrak{a}+2)}\right)\nabla(s,t) + \frac{1}{2\mathfrak{a}}\left(\frac{\mathfrak{a}}{(\mathfrak{a}+1)(\mathfrak{a}+2)}\right)\varphi(s,t). \end{aligned}$$

Hence, the required result is achieved.
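As with Theorem 4, the inequality of Theorem 5 can be sanity-checked on the data of Example 2. The sketch below uses our own helper names and the $\Gamma(\mathfrak{a})/4(t-s)^{\mathfrak{a}}$ normalization from the integral computation in the proof; for these linear endpoint functions the two sides coincide, so we test for approximate equality:

```python
import math

def frac_pair(f, s, t, a, n=100_000):
    """Midpoint-rule value of Z^a_{s+} f(t) + Z^a_{t-} f(s) on [s, t]."""
    h = (t - s) / n
    acc = 0.0
    for k in range(n):
        w = s + (k + 0.5) * h
        acc += ((t - w) ** (a - 1.0) + (w - s) ** (a - 1.0)) * f(w)
    return acc * h / math.gamma(a)

a, s, t = 0.5, 0.0, 2.0
y_low = lambda w: 0.5 * w   # Y_*
g_low = lambda w: w         # G_*
m = (s + t) / 2.0
c = a / ((a + 1.0) * (a + 2.0))

lhs = (1.0 / a) * y_low(m) * g_low(m)
nabla = y_low(s) * g_low(t) + y_low(t) * g_low(s)   # = 0 here
phi = y_low(s) * g_low(s) + y_low(t) * g_low(t)     # = 2 here
rhs = (math.gamma(a) / (4.0 * (t - s) ** a) * frac_pair(lambda w: y_low(w) * g_low(w), s, t, a)
       + (0.5 - c) / (2.0 * a) * nabla + c / (2.0 * a) * phi)
```

Both sides come out (up to quadrature error) equal to 1 on the lower endpoint; the upper endpoint functions simply scale everything by 9.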

The upcoming results discuss the H–H Fejér type inequality for left and right convex *I*·*V-Fs.* Firstly, we obtain the second H–H Fejér type inequality.

**Theorem 6.** *Let* $\mathcal{Y} : [s,t] \to \mathcal{X}_I^+$ *be a left and right convex I*·*V-F with* $s < t$, *given by* $\mathcal{Y}(\omega) = [Y_*(\omega), Y^*(\omega)]$ *for all* $\omega \in [s,t]$. *Let* $\mathcal{Y} \in L\left([s,t], \mathcal{X}_I^+\right)$ *and let* $\mathfrak{C} : [s,t] \to \mathbb{R}$, $\mathfrak{C}(\omega) \ge 0$, *be symmetric with respect to* $\frac{s+t}{2}$. *Then,*

$$\left[\mathcal{Z}_{s^+}^{\mathfrak{a}}\,\mathcal{Y}\mathfrak{C}(t) + \mathcal{Z}_{t^-}^{\mathfrak{a}}\,\mathcal{Y}\mathfrak{C}(s)\right] \leq_p \frac{\mathcal{Y}(s) + \mathcal{Y}(t)}{2}\left[\mathcal{Z}_{s^+}^{\mathfrak{a}}\,\mathfrak{C}(t) + \mathcal{Z}_{t^-}^{\mathfrak{a}}\,\mathfrak{C}(s)\right] \tag{13}$$

*If Y is a concave I*·*V-F, then inequality (13) is reversed.*

**Proof.** Let $\mathcal{Y}$ be a left and right convex *I*·*V-F* and $\nu^{\mathfrak{a}-1}\mathfrak{C}(\nu s + (1-\nu)t) \ge 0$. Then, we have

$$\begin{array}{l} \nu^{a-1}Y\_\*(\upsilon s + (1-\upsilon)t)\mathbb{C}(\upsilon s + (1-\upsilon)t) \\ \qquad \le \nu^{a-1}(\upsilon Y\_\*(s) + (1-\upsilon)Y\_\*(t))\mathbb{C}(\upsilon s + (1-\upsilon)t) \\ \nu^{a-1}Y^\*(\upsilon s + (1-\upsilon)t)\mathbb{C}(\upsilon s + (1-\upsilon)t) \\ \qquad \le \nu^{a-1}(\upsilon Y^\*(s) + (1-\upsilon)Y^\*(t))\mathbb{C}(\upsilon s + (1-\upsilon)t). \end{array} \tag{14}$$

and

$$\begin{aligned} \nu^{\mathfrak{a}-1}Y_*((1-\nu)s+\nu t)\,\mathfrak{C}((1-\nu)s+\nu t) &\le \nu^{\mathfrak{a}-1}\big((1-\nu)Y_*(s)+\nu Y_*(t)\big)\,\mathfrak{C}((1-\nu)s+\nu t), \\ \nu^{\mathfrak{a}-1}Y^*((1-\nu)s+\nu t)\,\mathfrak{C}((1-\nu)s+\nu t) &\le \nu^{\mathfrak{a}-1}\big((1-\nu)Y^*(s)+\nu Y^*(t)\big)\,\mathfrak{C}((1-\nu)s+\nu t). \end{aligned} \tag{15}$$

After adding (14) and (15), and integrating over [0, 1], we get

$$\begin{aligned} &\int_0^1 \nu^{\mathfrak{a}-1}Y_*(\nu s+(1-\nu)t)\,\mathfrak{C}(\nu s+(1-\nu)t)\,d\nu + \int_0^1 \nu^{\mathfrak{a}-1}Y_*((1-\nu)s+\nu t)\,\mathfrak{C}((1-\nu)s+\nu t)\,d\nu \\ &\quad\le \int_0^1 \Big[\nu^{\mathfrak{a}-1}Y_*(s)\{\nu\,\mathfrak{C}(\nu s+(1-\nu)t)+(1-\nu)\,\mathfrak{C}((1-\nu)s+\nu t)\} + \nu^{\mathfrak{a}-1}Y_*(t)\{(1-\nu)\,\mathfrak{C}(\nu s+(1-\nu)t)+\nu\,\mathfrak{C}((1-\nu)s+\nu t)\}\Big]\,d\nu, \\ &\int_0^1 \nu^{\mathfrak{a}-1}Y^*(\nu s+(1-\nu)t)\,\mathfrak{C}(\nu s+(1-\nu)t)\,d\nu + \int_0^1 \nu^{\mathfrak{a}-1}Y^*((1-\nu)s+\nu t)\,\mathfrak{C}((1-\nu)s+\nu t)\,d\nu \\ &\quad\le \int_0^1 \Big[\nu^{\mathfrak{a}-1}Y^*(s)\{\nu\,\mathfrak{C}(\nu s+(1-\nu)t)+(1-\nu)\,\mathfrak{C}((1-\nu)s+\nu t)\} + \nu^{\mathfrak{a}-1}Y^*(t)\{(1-\nu)\,\mathfrak{C}(\nu s+(1-\nu)t)+\nu\,\mathfrak{C}((1-\nu)s+\nu t)\}\Big]\,d\nu. \end{aligned}$$

Since C is symmetric, then

$$\begin{array}{l} = \left[Y\_{\ast}(s) + Y\_{\ast}(t)\right] \int\_{0}^{1} \nu^{\mathfrak{a}-1} \mathbb{C}((1-\nu)s + \nu t) \, d\nu\\ = \left[Y^{\ast}(s) + Y^{\ast}(t)\right] \int\_{0}^{1} \nu^{\mathfrak{a}-1} \mathbb{C}((1-\nu)s + \nu t) \, d\nu. \end{array}$$

$$\begin{aligned} &= \frac{Y_*(s)+Y_*(t)}{2}\,\frac{\Gamma(\mathfrak{a})}{(t-s)^{\mathfrak{a}}}\left[\mathcal{Z}_{s^+}^{\mathfrak{a}}\,\mathfrak{C}(t) + \mathcal{Z}_{t^-}^{\mathfrak{a}}\,\mathfrak{C}(s)\right], \\ &= \frac{Y^*(s)+Y^*(t)}{2}\,\frac{\Gamma(\mathfrak{a})}{(t-s)^{\mathfrak{a}}}\left[\mathcal{Z}_{s^+}^{\mathfrak{a}}\,\mathfrak{C}(t) + \mathcal{Z}_{t^-}^{\mathfrak{a}}\,\mathfrak{C}(s)\right]. \end{aligned} \tag{16}$$

Since

$$\begin{aligned} &\int_0^1 \nu^{\mathfrak{a}-1}Y_*(\nu s+(1-\nu)t)\,\mathfrak{C}((1-\nu)s+\nu t)\,d\nu + \int_0^1 \nu^{\mathfrak{a}-1}Y_*((1-\nu)s+\nu t)\,\mathfrak{C}((1-\nu)s+\nu t)\,d\nu \\ &\quad= \frac{1}{(t-s)^{\mathfrak{a}}}\int_s^t (\omega-s)^{\mathfrak{a}-1}Y_*(s+t-\omega)\,\mathfrak{C}(\omega)\,d\omega + \frac{1}{(t-s)^{\mathfrak{a}}}\int_s^t (\omega-s)^{\mathfrak{a}-1}Y_*(\omega)\,\mathfrak{C}(\omega)\,d\omega \\ &\quad= \frac{1}{(t-s)^{\mathfrak{a}}}\int_s^t (t-\omega)^{\mathfrak{a}-1}Y_*(\omega)\,\mathfrak{C}(\omega)\,d\omega + \frac{1}{(t-s)^{\mathfrak{a}}}\int_s^t (\omega-s)^{\mathfrak{a}-1}Y_*(\omega)\,\mathfrak{C}(\omega)\,d\omega \\ &\quad= \frac{\Gamma(\mathfrak{a})}{(t-s)^{\mathfrak{a}}}\left[\mathcal{Z}_{s^+}^{\mathfrak{a}}\,Y_*\mathfrak{C}(t) + \mathcal{Z}_{t^-}^{\mathfrak{a}}\,Y_*\mathfrak{C}(s)\right], \\ &\int_0^1 \nu^{\mathfrak{a}-1}Y^*(\nu s+(1-\nu)t)\,\mathfrak{C}((1-\nu)s+\nu t)\,d\nu + \int_0^1 \nu^{\mathfrak{a}-1}Y^*((1-\nu)s+\nu t)\,\mathfrak{C}((1-\nu)s+\nu t)\,d\nu \\ &\quad= \frac{\Gamma(\mathfrak{a})}{(t-s)^{\mathfrak{a}}}\left[\mathcal{Z}_{s^+}^{\mathfrak{a}}\,Y^*\mathfrak{C}(t) + \mathcal{Z}_{t^-}^{\mathfrak{a}}\,Y^*\mathfrak{C}(s)\right], \end{aligned} \tag{17}$$

then, from (16), we have

$$\begin{aligned} \frac{\Gamma(\mathfrak{a})}{(t-s)^{\mathfrak{a}}}\left[\mathcal{Z}_{s^+}^{\mathfrak{a}}\,Y_*\mathfrak{C}(t) + \mathcal{Z}_{t^-}^{\mathfrak{a}}\,Y_*\mathfrak{C}(s)\right] &\le \frac{Y_*(s)+Y_*(t)}{2}\,\frac{\Gamma(\mathfrak{a})}{(t-s)^{\mathfrak{a}}}\left[\mathcal{Z}_{s^+}^{\mathfrak{a}}\,\mathfrak{C}(t) + \mathcal{Z}_{t^-}^{\mathfrak{a}}\,\mathfrak{C}(s)\right], \\ \frac{\Gamma(\mathfrak{a})}{(t-s)^{\mathfrak{a}}}\left[\mathcal{Z}_{s^+}^{\mathfrak{a}}\,Y^*\mathfrak{C}(t) + \mathcal{Z}_{t^-}^{\mathfrak{a}}\,Y^*\mathfrak{C}(s)\right] &\le \frac{Y^*(s)+Y^*(t)}{2}\,\frac{\Gamma(\mathfrak{a})}{(t-s)^{\mathfrak{a}}}\left[\mathcal{Z}_{s^+}^{\mathfrak{a}}\,\mathfrak{C}(t) + \mathcal{Z}_{t^-}^{\mathfrak{a}}\,\mathfrak{C}(s)\right], \end{aligned}$$

That is,

$$\frac{\Gamma(\mathfrak{a})}{(t-s)^{\mathfrak{a}}}\left[\left[\mathcal{Z}_{s^+}^{\mathfrak{a}}\,Y_*\mathfrak{C}(t) + \mathcal{Z}_{t^-}^{\mathfrak{a}}\,Y_*\mathfrak{C}(s)\right],\ \left[\mathcal{Z}_{s^+}^{\mathfrak{a}}\,Y^*\mathfrak{C}(t) + \mathcal{Z}_{t^-}^{\mathfrak{a}}\,Y^*\mathfrak{C}(s)\right]\right]$$

$$\leq\_{p} \frac{\Gamma(\mathfrak{a})}{(t-s)^{\mathfrak{a}}} \left[ \frac{Y\_{\ast}(\mathfrak{s}) + Y\_{\ast}(t)}{2}, \frac{Y^{\ast}(\mathfrak{s}) + Y^{\ast}(t)}{2} \right] \left[ \mathcal{Z}\_{\mathfrak{s}^{+}}^{\mathfrak{a}} \mathfrak{C}(t) + \mathcal{Z}\_{t^{-}}^{\mathfrak{a}} \mathfrak{C}(\mathfrak{s}) \right],$$


Hence,

$$\left[\mathcal{Z}_{s^+}^{\mathfrak{a}}\,\mathcal{Y}\mathfrak{C}(t) + \mathcal{Z}_{t^-}^{\mathfrak{a}}\,\mathcal{Y}\mathfrak{C}(s)\right] \leq_p \frac{\mathcal{Y}(s) + \mathcal{Y}(t)}{2}\left[\mathcal{Z}_{s^+}^{\mathfrak{a}}\,\mathfrak{C}(t) + \mathcal{Z}_{t^-}^{\mathfrak{a}}\,\mathfrak{C}(s)\right].$$

Hence, the required result is achieved.

Now we obtain the first H–H Fejér-type inequality for left and right convex *I*·*V-Fs*.

**Theorem 7.** *Let Y* : [*s*, *t*] → X + *I be a left and right convex I*·*V-F with s* < *t, defined by Y*(*ω*) = [*Y*∗(*ω*), *Y*<sup>∗</sup>(*ω*)] *for all ω* ∈ [*s*, *t*]*. If Y* ∈ *L*([*s*, *t*], X + *I* ) *and* C : [*s*, *t*] → R *satisfies* C(*ω*) ≥ 0 *and is symmetric with respect to* (*s* + *t*)/2*, then*

$$\mathcal{Y}\left(\frac{s+t}{2}\right)\left[\mathcal{Z}\_{\mathbf{s}^+}^{\mathbf{a}}\,\mathfrak{C}(t)+\mathcal{Z}\_{t^-}^{\mathbf{a}}\,\mathfrak{C}(s)\right] \leq\_p \left[\mathcal{Z}\_{\mathbf{s}^+}^{\mathbf{a}}\,\mathcal{Y}\mathfrak{C}(t)+\mathcal{Z}\_{t^-}^{\mathbf{a}}\,\mathcal{Y}\mathfrak{C}(s)\right] \tag{18}$$

*If Y is a concave I*·*V-F, then inequality (18) is reversed*.

**Proof.** Since *Y* is a left and right convex *I*·*V-F*, we have

$$\begin{aligned} Y_*\left(\frac{s+t}{2}\right) &\le \frac{1}{2}\big(Y_*(\nu s + (1-\nu)t) + Y_*((1-\nu)s + \nu t)\big), \\ Y^*\left(\frac{s+t}{2}\right) &\le \frac{1}{2}\big(Y^*(\nu s + (1-\nu)t) + Y^*((1-\nu)s + \nu t)\big). \end{aligned} \tag{19}$$

Since C(*νs* + (1 − *ν*)*t*) = C((1 − *ν*)*s* + *νt*), multiplying (19) by *ν*<sup>a−1</sup>C((1 − *ν*)*s* + *νt*) and integrating with respect to *ν* over [0, 1], we obtain

$$\begin{aligned} Y_*\left(\frac{s+t}{2}\right)\int_0^1 \nu^{\mathfrak{a}-1}\,\mathfrak{C}((1-\nu)s+\nu t)\,d\nu &\le \frac{1}{2}\Big(\int_0^1 \nu^{\mathfrak{a}-1}\, Y_*(\nu s+(1-\nu)t)\,\mathfrak{C}((1-\nu)s+\nu t)\,d\nu \\ &\qquad\quad + \int_0^1 \nu^{\mathfrak{a}-1}\, Y_*((1-\nu)s+\nu t)\,\mathfrak{C}((1-\nu)s+\nu t)\,d\nu\Big), \\ Y^*\left(\frac{s+t}{2}\right)\int_0^1 \nu^{\mathfrak{a}-1}\,\mathfrak{C}((1-\nu)s+\nu t)\,d\nu &\le \frac{1}{2}\Big(\int_0^1 \nu^{\mathfrak{a}-1}\, Y^*(\nu s+(1-\nu)t)\,\mathfrak{C}((1-\nu)s+\nu t)\,d\nu \\ &\qquad\quad + \int_0^1 \nu^{\mathfrak{a}-1}\, Y^*((1-\nu)s+\nu t)\,\mathfrak{C}((1-\nu)s+\nu t)\,d\nu\Big). \end{aligned} \tag{20}$$

Let *ω* = (1 − *ν*)*s* + *νt*. Then, we have

$$\begin{aligned} &\int_0^1 \nu^{\mathfrak{a}-1}\, Y_*(\nu s+(1-\nu)t)\,\mathfrak{C}((1-\nu)s+\nu t)\,d\nu + \int_0^1 \nu^{\mathfrak{a}-1}\, Y_*((1-\nu)s+\nu t)\,\mathfrak{C}((1-\nu)s+\nu t)\,d\nu \\ &\quad= \frac{1}{(t-s)^{\mathfrak{a}}}\int_s^t (\omega-s)^{\mathfrak{a}-1}\, Y_*(s+t-\omega)\,\mathfrak{C}(\omega)\,d\omega + \frac{1}{(t-s)^{\mathfrak{a}}}\int_s^t (\omega-s)^{\mathfrak{a}-1}\, Y_*(\omega)\,\mathfrak{C}(\omega)\,d\omega \\ &\quad= \frac{1}{(t-s)^{\mathfrak{a}}}\int_s^t (t-\omega)^{\mathfrak{a}-1}\, Y_*(\omega)\,\mathfrak{C}(s+t-\omega)\,d\omega + \frac{1}{(t-s)^{\mathfrak{a}}}\int_s^t (\omega-s)^{\mathfrak{a}-1}\, Y_*(\omega)\,\mathfrak{C}(\omega)\,d\omega \\ &\quad= \frac{\Gamma(\mathfrak{a})}{(t-s)^{\mathfrak{a}}}\left[\mathcal{Z}_{s^+}^{\mathfrak{a}}\, Y_*\mathfrak{C}(t) + \mathcal{Z}_{t^-}^{\mathfrak{a}}\, Y_*\mathfrak{C}(s)\right], \\[4pt] &\int_0^1 \nu^{\mathfrak{a}-1}\, Y^*(\nu s+(1-\nu)t)\,\mathfrak{C}((1-\nu)s+\nu t)\,d\nu + \int_0^1 \nu^{\mathfrak{a}-1}\, Y^*((1-\nu)s+\nu t)\,\mathfrak{C}((1-\nu)s+\nu t)\,d\nu \\ &\quad= \frac{\Gamma(\mathfrak{a})}{(t-s)^{\mathfrak{a}}}\left[\mathcal{Z}_{s^+}^{\mathfrak{a}}\, Y^*\mathfrak{C}(t) + \mathcal{Z}_{t^-}^{\mathfrak{a}}\, Y^*\mathfrak{C}(s)\right]. \end{aligned} \tag{21}$$

Then, from (21), we have

$$\begin{aligned} \frac{\Gamma(\mathfrak{a})}{(t-s)^{\mathfrak{a}}}\, Y_*\left(\frac{s+t}{2}\right)\left[\mathcal{Z}_{s^+}^{\mathfrak{a}}\,\mathfrak{C}(t) + \mathcal{Z}_{t^-}^{\mathfrak{a}}\,\mathfrak{C}(s)\right] &\le \frac{\Gamma(\mathfrak{a})}{(t-s)^{\mathfrak{a}}}\left[\mathcal{Z}_{s^+}^{\mathfrak{a}}\, Y_*\mathfrak{C}(t) + \mathcal{Z}_{t^-}^{\mathfrak{a}}\, Y_*\mathfrak{C}(s)\right], \\ \frac{\Gamma(\mathfrak{a})}{(t-s)^{\mathfrak{a}}}\, Y^*\left(\frac{s+t}{2}\right)\left[\mathcal{Z}_{s^+}^{\mathfrak{a}}\,\mathfrak{C}(t) + \mathcal{Z}_{t^-}^{\mathfrak{a}}\,\mathfrak{C}(s)\right] &\le \frac{\Gamma(\mathfrak{a})}{(t-s)^{\mathfrak{a}}}\left[\mathcal{Z}_{s^+}^{\mathfrak{a}}\, Y^*\mathfrak{C}(t) + \mathcal{Z}_{t^-}^{\mathfrak{a}}\, Y^*\mathfrak{C}(s)\right], \end{aligned}$$

from which, we have

$$\frac{\Gamma(\mathfrak{a})}{(t-s)^{\mathfrak{a}}}\left[Y_*\left(\frac{s+t}{2}\right),\ Y^*\left(\frac{s+t}{2}\right)\right]\left[\mathcal{Z}_{s^+}^{\mathfrak{a}}\,\mathfrak{C}(t) + \mathcal{Z}_{t^-}^{\mathfrak{a}}\,\mathfrak{C}(s)\right] \le_p \frac{\Gamma(\mathfrak{a})}{(t-s)^{\mathfrak{a}}}\left[\mathcal{Z}_{s^+}^{\mathfrak{a}}\, Y_*\mathfrak{C}(t) + \mathcal{Z}_{t^-}^{\mathfrak{a}}\, Y_*\mathfrak{C}(s),\ \mathcal{Z}_{s^+}^{\mathfrak{a}}\, Y^*\mathfrak{C}(t) + \mathcal{Z}_{t^-}^{\mathfrak{a}}\, Y^*\mathfrak{C}(s)\right],$$

That is,

$$\frac{\Gamma(\mathfrak{a})}{(t-s)^{\mathfrak{a}}}\, Y\!\left(\frac{s+t}{2}\right)\left[\mathcal{Z}_{s^+}^{\mathfrak{a}}\,\mathfrak{C}(t) + \mathcal{Z}_{t^-}^{\mathfrak{a}}\,\mathfrak{C}(s)\right] \le_p \frac{\Gamma(\mathfrak{a})}{(t-s)^{\mathfrak{a}}}\left[\mathcal{Z}_{s^+}^{\mathfrak{a}}\, Y\mathfrak{C}(t) + \mathcal{Z}_{t^-}^{\mathfrak{a}}\, Y\mathfrak{C}(s)\right].$$

This completes the proof.

**Example 3.** *We consider the I*·*V-F Y* : [0, 2] → X + *I defined by Y*(*ω*) = [2 − √*ω*, 2(2 − √*ω*)]*. Since the endpoint functions Y*∗(*ω*), *Y*<sup>∗</sup>(*ω*) *are convex, Y*(*ω*) *is a left and right convex I*·*V-F. If*

$$\mathfrak{C}(\omega) = \begin{cases} \sqrt{\omega}, & \omega \in [0, 1], \\ \sqrt{2-\omega}, & \omega \in (1, 2], \end{cases}$$

*then* C(2 − *ω*) = C(*ω*) ≥ 0 *for all ω* ∈ [0, 2]*. Since Y*∗(*ω*) = 2 − √*ω and Y*<sup>∗</sup>(*ω*) = 2(2 − √*ω*)*, taking* a = 1/2*, we compute the following:*

$$\left[\mathcal{Z}_{s^+}^{\mathfrak{a}}\, Y\mathfrak{C}(t)\ \tilde{+}\ \mathcal{Z}_{t^-}^{\mathfrak{a}}\, Y\mathfrak{C}(s)\right] \le_p \frac{Y(s)+Y(t)}{2}\left[\mathcal{Z}_{s^+}^{\mathfrak{a}}\,\mathfrak{C}(t) + \mathcal{Z}_{t^-}^{\mathfrak{a}}\,\mathfrak{C}(s)\right], \tag{22}$$

$$\begin{aligned} \frac{Y_*(s)+Y_*(t)}{2}\left[\mathcal{Z}_{s^+}^{\mathfrak{a}}\,\mathfrak{C}(t) + \mathcal{Z}_{t^-}^{\mathfrak{a}}\,\mathfrak{C}(s)\right] &= \frac{\pi}{\sqrt{2}}\cdot\frac{4-\sqrt{2}}{2}, \\ \frac{Y^*(s)+Y^*(t)}{2}\left[\mathcal{Z}_{s^+}^{\mathfrak{a}}\,\mathfrak{C}(t) + \mathcal{Z}_{t^-}^{\mathfrak{a}}\,\mathfrak{C}(s)\right] &= \frac{\pi}{\sqrt{2}}\left(4-\sqrt{2}\right). \end{aligned} \tag{23}$$

$$\begin{aligned} \left[ \mathcal{Z}\_{s^+}^{\mathfrak{a}} \, \, \mathcal{Y}\_\* \mathfrak{C}(t) + \mathcal{Z}\_{t^-}^{\mathfrak{a}} \, \, \mathcal{Y}\_\* \mathfrak{C}(s) \right] &= \frac{1}{\sqrt{\pi}} \Big( 2\pi + \frac{4 - 8\sqrt{2}}{3} \Big), \\\left[ \mathcal{Z}\_{s^+}^{\mathfrak{a}} \, \, \mathcal{Y}^\* \mathfrak{C}(t) + \mathcal{Z}\_{t^-}^{\mathfrak{a}} \, \, \mathcal{Y}^\* \mathfrak{C}(s) \right] &= \frac{2}{\sqrt{\pi}} \Big( 2\pi + \frac{4 - 8\sqrt{2}}{3} \Big). \end{aligned} \tag{24}$$

*From (22)–(24), inequality (13) gives*

$$\frac{1}{\sqrt{\pi}}\left[2\pi + \frac{4-8\sqrt{2}}{3},\ 2\left(2\pi + \frac{4-8\sqrt{2}}{3}\right)\right] \le_p \frac{\pi}{\sqrt{2}}\left[\frac{4-\sqrt{2}}{2},\ 4-\sqrt{2}\right].$$

*Hence, Theorem 6 is verified. For Theorem 7, we have*

$$\begin{aligned} Y_*\left(\frac{s+t}{2}\right)\left[\mathcal{Z}_{s^+}^{\mathfrak{a}}\,\mathfrak{C}(t) + \mathcal{Z}_{t^-}^{\mathfrak{a}}\,\mathfrak{C}(s)\right] &= \sqrt{\pi}, \\ Y^*\left(\frac{s+t}{2}\right)\left[\mathcal{Z}_{s^+}^{\mathfrak{a}}\,\mathfrak{C}(t) + \mathcal{Z}_{t^-}^{\mathfrak{a}}\,\mathfrak{C}(s)\right] &= 2\sqrt{\pi}. \end{aligned} \tag{25}$$

*From (24) and (25), we have*

$$\sqrt{\pi}\,[1, 2] \ \le_p\ \frac{1}{\sqrt{\pi}}\left[2\pi + \frac{4-8\sqrt{2}}{3},\ 2\left(2\pi + \frac{4-8\sqrt{2}}{3}\right)\right].$$

*Hence, (18) has been verified.*
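As a numerical cross-check, the two verifications above can be reproduced by quadrature. The sketch below is our own addition, not part of the paper: it approximates the Riemann–Liouville integrals for s = 0, t = 2, a = 1/2 and checks the interval inequalities of Theorems 6 and 7 componentwise. It checks only the inequalities themselves, not the printed closed-form values.

```python
# Quadrature check of Example 3: Y(w) = [2 - sqrt(w), 2(2 - sqrt(w))] on [0, 2],
# a = 1/2. All names below are our own illustrative choices.
from math import gamma, sqrt

a, s, t = 0.5, 0.0, 2.0

def C(w):                                  # symmetric weight of Example 3
    return sqrt(w) if w <= 1.0 else sqrt(2.0 - w)

def Y_lo(w): return 2.0 - sqrt(w)          # lower endpoint function Y_*
def Y_up(w): return 2.0 * (2.0 - sqrt(w))  # upper endpoint function Y^*

def midpoint(g, lo, hi, n=50_000):         # composite midpoint rule
    h = (hi - lo) / n
    return h * sum(g(lo + (k + 0.5) * h) for k in range(n))

def I_left(f):   # I^a_{s+} f(t) = (1/Gamma(a)) * int_s^t (t-w)^(a-1) f(w) dw
    return midpoint(lambda w: (t - w) ** (a - 1.0) * f(w), s, t) / gamma(a)

def I_right(f):  # I^a_{t-} f(s) = (1/Gamma(a)) * int_s^t (w-s)^(a-1) f(w) dw
    return midpoint(lambda w: (w - s) ** (a - 1.0) * f(w), s, t) / gamma(a)

wC = I_left(C) + I_right(C)                # common weight term
checks = []
for Y in (Y_lo, Y_up):                     # verify each endpoint function
    YC = lambda w, Y=Y: Y(w) * C(w)
    middle = I_left(YC) + I_right(YC)      # middle term of both theorems
    upper = (Y(s) + Y(t)) / 2.0 * wC       # Theorem 6 upper bound
    lower = Y((s + t) / 2.0) * wC          # Theorem 7 lower bound
    checks.append(lower <= middle + 1e-4 and middle <= upper + 1e-4)

assert all(checks)
```

The singular kernels cancel against the weight at the endpoints (C vanishes there), so a plain midpoint rule is accurate enough for this comparison.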

**Remark 3.** *If one takes* C(*ω*) = 1*, then from (13) and (18) we obtain (5). If instead one takes* a = 1*, then we obtain the following inequality (see [22]):*

$$Y\left(\frac{s+t}{2}\right) \le_p \frac{1}{\int_s^t \mathfrak{C}(\omega)\,d\omega}\int_s^t Y(\omega)\,\mathfrak{C}(\omega)\,d\omega \le_p \frac{Y(s)+Y(t)}{2}.$$

*If we take Y*∗(*ω*) = *Y*<sup>∗</sup>(*ω*)*, then from (13) and (18) we obtain the following inequality (see [33]):*

$$Y\left(\frac{s+t}{2}\right)\left[\mathcal{Z}_{s^+}^{\mathfrak{a}}\,\mathfrak{C}(t) + \mathcal{Z}_{t^-}^{\mathfrak{a}}\,\mathfrak{C}(s)\right] \le_p \left[\mathcal{Z}_{s^+}^{\mathfrak{a}}\, Y\mathfrak{C}(t) + \mathcal{Z}_{t^-}^{\mathfrak{a}}\, Y\mathfrak{C}(s)\right] \le_p \frac{Y(s)+Y(t)}{2}\left[\mathcal{Z}_{s^+}^{\mathfrak{a}}\,\mathfrak{C}(t) + \mathcal{Z}_{t^-}^{\mathfrak{a}}\,\mathfrak{C}(s)\right].$$

*If one takes Y*∗(*ω*) = *Y*<sup>∗</sup>(*ω*) *with* a = 1*, then from (13) and (18) we obtain the classical H–H Fejér inequality (see [26]).*

#### **4. Conclusions**

In applied sciences, convex functions and fractional calculus are essential. The new interval-valued left and right convex functions are presented in this article. Some novel Riemann–Liouville fractional integral H–H and Fejér-type inequalities are provided, utilizing the idea of interval-valued left and right convex functions and some supplementary interval analysis findings. Our results are a generalization of a number of previously published findings. In the future, we will use generalized interval and fuzzy Riemann–Liouville fractional operators to investigate this concept for generalized left and right convex *I*·*V-Fs* and *F-I*·*V-Fs* by using interval Katugampola fractional integrals and fuzzy Katugampola fractional integrals. For applications, see [53–56].

**Author Contributions:** Conceptualization, M.B.K.; methodology, M.B.K.; validation, S.T., H.G.Z., and M.S.S.; formal analysis, G.S.-G.; investigation, M.S.S.; resources, S.T. and J.E.M.-D.; data curation, H.G.Z.; writing—original draft preparation, H.G.Z., M.B.K., and G.S.-G.; writing—review and editing, S.T. and M.B.K.; visualization, J.E.M.-D., and H.G.Z.; supervision, M.S.S. and M.B.K.; project administration, M.B.K.; funding acquisition, M.S.S., G.S.-G., and H.G.Z. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research received no external funding.

**Data Availability Statement:** Not applicable.

**Acknowledgments:** The authors would like to thank the Rector, COMSATS University Islamabad, Islamabad, Pakistan, for providing excellent research facilities. This work was funded by Taif University Researchers Supporting Project (number TURSP-2020/345), Taif University, Taif, Saudi Arabia. Moreover, the work of Santos-García was also partially supported by the Spanish project TRACES TIN2015-67522-C3-3-R.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


## *Article* **Some Fuzzy Riemann–Liouville Fractional Integral Inequalities for Preinvex Fuzzy Interval-Valued Functions**

**Muhammad Bilal Khan <sup>1</sup>, Hatim Ghazi Zaini <sup>2</sup>, Jorge E. Macías-Díaz <sup>3,4,</sup>\*, Savin Treanță <sup>5,</sup>\* and Mohamed S. Soliman <sup>6</sup>**


**Abstract:** The main objective of this study is to introduce new versions of fractional integral inequalities in fuzzy fractional calculus utilizing the introduced preinvexity. Due to the behavior of its definition, the idea of preinvexity plays a significant role in the subject of inequalities. The concepts of preinvexity and symmetry have a tight connection thanks to the significant correlation that has developed between the two in recent years. In this study, we attain the Hermite-Hadamard (*H*·*H*) and Hermite-Hadamard-Fejér (*H*·*H* Fejér) type inequalities for preinvex fuzzy-interval-valued functions (preinvex *F*·*I*·*V*·*Fs*) via Condition C and fuzzy Riemann–Liouville fractional integrals. Furthermore, we establish some refinements of the fuzzy fractional *H*·*H* type inequality. Some specific examples of the reported results are also deduced for various preinvex functions. To support the newly introduced ideas, we have provided some nontrivial and logical examples. The results presented in this research are a significant improvement over earlier results. The notions and tools developed in this paper may energize and revitalize future research on this worthwhile and fascinating topic.

**Keywords:** preinvex fuzzy interval-valued function; fuzzy fractional integral operator; Hermite-Hadamard type inequality; Hermite-Hadamard Fejér type inequality

#### **1. Introduction**

Convex function theory has a wide range of potential applications in a variety of unique and fascinating disciplines of study. Furthermore, this theory is useful in a variety of fields, including physics, information theory, coding theory, engineering, optimization, and inequality theory. This theory is currently making a significant contribution to the extensions and improvements of a wide range of mathematical and practical fields. Many authors analyzed, celebrated, and executed their work on the concept of convexity, and used fruitful methodologies and novel ideas to extend its many variations in helpful ways. In the literature, several new families of classical convex functions have been proposed. The references [1–5] are provided for the benefit of the readers. Many authors and scientists have always attempted to contribute to the theory of inequality by producing high-quality work. Integral inequalities on convex functions, both derivative and integration, have likewise been a hot and engaging area of study in recent years. The theory of inequalities has significant applications in the field of applied analysis, such as geometric function theory, impulsive diffusion equations, coding theory, numerical analysis, and fractional

**Citation:** Khan, M.B.; Zaini, H.G.; Macías-Díaz, J.E.; Treanță, S.; Soliman, M.S. Some Fuzzy Riemann–Liouville Fractional Integral Inequalities for Preinvex Fuzzy Interval-Valued Functions. *Symmetry* **2022**, *14*, 313. https://doi.org/10.3390/sym14020313

Academic Editor: Alexander Zaslavski

Received: 4 January 2022 Accepted: 21 January 2022 Published: 3 February 2022

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

calculus, to name a few. Sun [6] and co-workers [7] recently used the local fractional integral operator to generalize the Hermite-Hadamard inequality for harmonically convex and s-preinvex functions. The references [8–13] are provided for the benefit of the readers.

Several writers have recently proposed novel inequalities for various types of convexities, preinvexities, statistical theory, and other topics. Several discussions show a tight connection between inequality theory and convex functions. Hanson examined the invex function in the context of bi-function *ϕ*(., .) for the first time in 1981 (see [14]). Following Hanson's work, Ben-Israel and Mond attempted to delve deeper into linked invexity, introducing the concepts of invex sets and preinvex functions for the first time (see [15]). Under certain conditions, the preinvex and invex functions in the form of differentiability are comparable, according to Mohan and Neogy [16]. Antczak [17] discovered and analyzed the features of preinvex functions for the first time in 2005.

Note that fuzzy mappings (*F*·*Ms*) are fuzzy-interval-valued functions. On the other hand, the concept of convex *F*·*Ms* from R*<sup>n</sup>* to the set of fuzzy numbers was introduced by Nanda and Kar [18], Syau [19], and Furukawa [20]. They also explored Lipschitz continuity of fuzzy-valued mappings and created other types of convex *F*·*Ms*, such as logarithmic convex *F*·*Ms* and quasi-convex *F*·*Ms*. Based on Goetschel and Voxman's concept of ordering [21], Yan and Xu [22] introduced the conceptions of epigraphs and convexity of *F*·*Ms*, as well as the properties of convex *F*·*Ms* and quasi-convex *F*·*Ms*. Khan et al. [23–26] extended the class of convex *F*·*Ms* and defined *h*-convex and (*h*1, *h*2)-convex *F*·*I*·*V*·*Fs* using a fuzzy partial order relation. Moreover, they introduced *H*·*H, H*·*H* Fejér, *H*·*H* fractional, and *H*·*H* fractional Fejér inequalities for *h*-convex and (*h*1, *h*2)-convex *F*·*I*·*V*·*Fs* via fuzzy Riemann and fuzzy Riemann–Liouville fractional integrals. Noor [27] proposed and investigated the notion of fuzzy preinvex mappings on the invex set. He also showed how to express the fuzzy optimality conditions of differentiable preinvex fuzzy mappings using variational inequalities. Recently, Khan et al. [28] generalized the concept of preinvex fuzzy mappings in terms of (*h*1, *h*2)-preinvex *F*·*I*·*V*·*Fs.* Moreover, they established a relation between *H*·*H* inequalities and (*h*1, *h*2)-preinvex *F*·*I*·*V*·*Fs* by using fuzzy Riemann integrals. Recently, Khan et al. [29–33] proposed the concepts of strongly preinvex *F*·*I*·*V*·*Fs,* higher-order strongly preinvex *F*·*I*·*V*·*Fs,* and generalized strongly preinvex *F*·*I*·*V*·*Fs* and characterized their optimality conditions by introducing different variational-like inequalities. Moreover, they proposed *H*·*H* inequalities for strongly preinvex *F*·*I*·*V*·*Fs* by utilizing fuzzy Riemann integrals.

Going one step further, Khan et al. introduced new classes of convex and generalized convex *F*·*I*·*V*·*Fs*, and derived new *H*·*H* type inequalities for log-s-convex *F*·*I*·*V*·*Fs* in the second sense [34] and log-*h*-convex *F*·*I*·*V*·*Fs* [35]. We refer the readers to [36–56] and the references therein for further literature on the applications and properties of fuzzy intervals, inequalities, and generalized convex *F*·*Ms*.

The goal of this study is to develop the fuzzy Riemann–Liouville fractional integrals for *F*·*I*·*V*·*Fs* and to use these integrals to obtain the *H*·*H* inequalities. These integrals are also used to derive *H*·*H* type inequalities for preinvex *F*·*I*·*V*·*Fs*.

#### **2. Preliminaries**

Let K*<sup>C</sup>* be the space of all closed and bounded intervals of R and *η* ∈ K*<sup>C</sup>* be defined by

$$\eta = [\eta_*,\ \eta^*] = \{\omega \in \mathbb{R} \mid \eta_* \le \omega \le \eta^*\}, \quad (\eta_*,\ \eta^* \in \mathbb{R}).$$

If *η*∗ = *η*<sup>∗</sup>, then *η* is said to be degenerate. In this article, all intervals are assumed to be non-degenerate. If *η*∗ ≥ 0, then [*η*∗, *η*<sup>∗</sup>] is called a positive interval. The set of all positive intervals is denoted by K*<sup>C</sup>*<sup>+</sup> and defined as K*<sup>C</sup>*<sup>+</sup> = {[*η*∗, *η*<sup>∗</sup>] : [*η*∗, *η*<sup>∗</sup>] ∈ K*<sup>C</sup>* and *η*∗ ≥ 0}.

Let *ς* ∈ R and *ςη* be defined by

$$
\varsigma \cdot \eta = \begin{cases} [\varsigma\eta_*,\ \varsigma\eta^*] & \text{if } \varsigma \ge 0, \\ [\varsigma\eta^*,\ \varsigma\eta_*] & \text{if } \varsigma < 0. \end{cases}
\tag{1}
$$

Then the Minkowski difference *ξ* − *η*, addition *ξ* + *η* and multiplication *ξ* × *η* for *η*, *ξ* ∈ K*<sup>C</sup>* are defined by

$$\begin{aligned} [\xi_*,\ \xi^*] - [\eta_*,\ \eta^*] &= [\xi_* - \eta_*,\ \xi^* - \eta^*], \\ [\xi_*,\ \xi^*] + [\eta_*,\ \eta^*] &= [\xi_* + \eta_*,\ \xi^* + \eta^*], \end{aligned} \tag{2}$$

and

$$[\xi_*,\ \xi^*] \times [\eta_*,\ \eta^*] = \left[\min\{\xi_*\eta_*,\ \xi_*\eta^*,\ \xi^*\eta_*,\ \xi^*\eta^*\},\ \max\{\xi_*\eta_*,\ \xi_*\eta^*,\ \xi^*\eta_*,\ \xi^*\eta^*\}\right].$$

The inclusion "⊆" means that

$$\xi \subseteq \eta \text{ if and only if } [\xi_*,\ \xi^*] \subseteq [\eta_*,\ \eta^*], \text{ if and only if } \eta_* \le \xi_* \text{ and } \xi^* \le \eta^*. \tag{3}$$

**Remark 2.1.** [38] The relation " ≤*<sup>I</sup>* " defined on K*<sup>C</sup>* by

$$[\nabla_*,\ \nabla^*] \le_I [\eta_*,\ \eta^*] \text{ if and only if } \nabla_* \le \eta_*,\ \nabla^* \le \eta^*, \tag{4}$$

for all [∇∗, ∇<sup>∗</sup>], [*η*∗, *η*<sup>∗</sup>] ∈ K*<sup>C</sup>*; it is an order relation. For given [∇∗, ∇<sup>∗</sup>], [*η*∗, *η*<sup>∗</sup>] ∈ K*<sup>C</sup>*, we say that [∇∗, ∇<sup>∗</sup>] <*I* [*η*∗, *η*<sup>∗</sup>] if and only if ∇∗ ≤ *η*∗, ∇<sup>∗</sup> < *η*<sup>∗</sup> or ∇∗ < *η*∗, ∇<sup>∗</sup> ≤ *η*<sup>∗</sup>.
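The interval operations (1)–(4) translate directly into code. The following minimal sketch is our own illustration (the class name and layout are not from the paper): it implements the scalar multiple, Minkowski sum and difference, the product, and the order ≤*<sup>I</sup>*:

```python
# Minimal interval arithmetic on K_C per (1)-(4); names are ours.
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    lo: float
    hi: float

    def __post_init__(self):
        assert self.lo <= self.hi      # non-degenerate order of endpoints

    def __add__(self, o):              # (2): [a, b] + [c, d] = [a + c, b + d]
        return Interval(self.lo + o.lo, self.hi + o.hi)

    def __sub__(self, o):              # (2): [a, b] - [c, d] = [a - c, b - d];
        # valid only when the minuend is at least as wide as the subtrahend
        return Interval(self.lo - o.lo, self.hi - o.hi)

    def __rmul__(self, c):             # (1): scalar multiple; endpoints swap if c < 0
        return Interval(c * self.lo, c * self.hi) if c >= 0 else Interval(c * self.hi, c * self.lo)

    def __mul__(self, o):              # product: min/max over endpoint products
        p = (self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi)
        return Interval(min(p), max(p))

    def le_I(self, o):                 # (4): the componentwise order <=_I
        return self.lo <= o.lo and self.hi <= o.hi

x, y = Interval(1, 2), Interval(-1, 3)
assert x + y == Interval(0, 5)
assert x - Interval(0, 0.5) == Interval(1, 1.5)
assert -2 * x == Interval(-4, -2)
assert x * y == Interval(-2, 6)
assert x.le_I(Interval(1, 4)) and not Interval(1, 4).le_I(x)
```

Note that ≤*<sup>I</sup>* is only a partial order: two intervals such as [0, 3] and [1, 2] are incomparable in either direction.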

A fuzzy subset *A* of R is characterized by a mapping *ζ* : R → [0, 1], called the membership function. For *θ* ∈ (0, 1], the *θ*-level set of *ζ* is denoted and defined by *ζθ* = {*u* ∈ R | *ζ*(*u*) ≥ *θ*}. If *θ* = 0, then *supp*(*ζ*) = {*ω* ∈ R | *ζ*(*ω*) > 0} is called the support of *ζ*. By [*ζ*]<sup>0</sup> we denote the closure of *supp*(*ζ*).

Let F(R) be the family of all fuzzy sets on R and let *ζ* ∈ F(R) be a fuzzy set. Then, we define the following:


A fuzzy set is called a fuzzy number or fuzzy interval if it has properties (1), (2), (3) and (4). We denote by F<sup>0</sup> the family of all fuzzy intervals.

Note that *ζ* ∈ F(R) is a fuzzy interval if and only if each *θ*-level [*ζ*]<sup>*θ*</sup> is a nonempty compact convex set of R. From these definitions, we have

$$\left[\zeta\right]^\theta = \left[\zeta\_\*(\theta) \text{ , } \zeta^\*(\theta)\right] \text{ .}$$

where

$$\zeta_*(\theta) = \inf\{\omega \in \mathbb{R} \mid \zeta(\omega) \ge \theta\}, \qquad \zeta^*(\theta) = \sup\{\omega \in \mathbb{R} \mid \zeta(\omega) \ge \theta\}. \tag{5}$$

**Proposition 2.2.** [47] If *ζ*, *η* ∈ F<sup>0</sup>, then the relation "≼" defined on F<sup>0</sup> by

$$\zeta \preccurlyeq \eta \text{ if and only if } [\zeta]^{\theta} \le_I [\eta]^{\theta} \text{ for all } \theta \in [0, 1] \tag{6}$$

is a partial order relation.
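The level characterization (5) and the level-wise order (6) can be made concrete for the simplest shape. The sketch below is our own illustration with triangular fuzzy numbers (a choice of ours, not the paper's); it computes *θ*-levels and compares two fuzzy intervals by sampling levels:

```python
# theta-levels (5) of a triangular fuzzy number (a, b, c) and the level-wise
# partial order (6); the triangular shape is our illustrative choice.
def level(a, b, c, theta):
    """[zeta]^theta for the triangular membership function peaking at b."""
    assert 0 < theta <= 1
    return (a + theta * (b - a), c - theta * (c - b))

def preceq(tri1, tri2, samples=11):
    """zeta preceq eta in the sense of (6): compare theta-levels per (4)."""
    for k in range(1, samples + 1):
        th = k / samples
        l1, u1 = level(*tri1, th)
        l2, u2 = level(*tri2, th)
        if not (l1 <= l2 and u1 <= u2):
            return False
    return True

assert level(0, 1, 2, 1.0) == (1.0, 1.0)     # theta = 1 collapses to the core
assert preceq((0, 1, 2), (1, 2, 3))          # shifted copy dominates level-wise
assert not preceq((0, 1, 2), (0, 0.5, 3))    # lower endpoints violate (4)
```

Sampling finitely many levels is only a sketch of the "for all *θ*" quantifier in (6); for piecewise-linear memberships, checking the endpoint levels would suffice.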

For *ζ*, *η* ∈ F<sup>0</sup> and *ς* ∈ R, the sum *ζ* +̃ *η*, product *ζ* ×̃ *η*, scalar product *ς*·*ζ* and sum with a scalar are defined by:

Then, for all *θ* ∈ [0, 1] , we have

$$\left[\zeta \tilde{+} \eta\right]^{\theta} = [\zeta]^{\theta} + [\eta]^{\theta}, \tag{7}$$

$$\left[\zeta \tilde{\times} \eta\right]^{\theta} = [\zeta]^{\theta} \times [\eta]^{\theta}, \tag{8}$$

$$\left[\varsigma \cdot \zeta\right]^{\theta} = \varsigma \cdot [\zeta]^{\theta}, \tag{9}$$

$$\left[\varsigma \tilde{+} \zeta\right]^{\theta} = \varsigma + [\zeta]^{\theta}. \tag{10}$$

If there exists *ψ* ∈ F<sup>0</sup> such that *ζ* = *η* +̃ *ψ*, then *ψ* is called the Hukuhara difference (H-difference) of *ζ* and *η*, denoted by *ζ* −̃ *η*. If the H-difference exists, then

$$(\psi)^*(\theta) = \left(\zeta \tilde{-} \eta\right)^*(\theta) = \zeta^*(\theta) - \eta^*(\theta), \qquad (\psi)_*(\theta) = \left(\zeta \tilde{-} \eta\right)_*(\theta) = \zeta_*(\theta) - \eta_*(\theta).$$

**Definition 2.3.** [36] A fuzzy map Ψ : [*u*, *ν*] ⊂ R → F<sup>0</sup> is called an *F*·*I*·*V*·*F*. For each *θ* ∈ [0, 1], the *θ*-levels define the family of *I*·*V*·*Fs* Ψ*<sup>θ</sup>* : [*u*, *ν*] ⊂ R → K*<sup>C</sup>* given by Ψ*<sup>θ</sup>*(*ω*) = [Ψ∗(*ω*, *θ*), Ψ<sup>∗</sup>(*ω*, *θ*)] for all *ω* ∈ [*u*, *ν*]. Here, for each *θ* ∈ [0, 1], the left and right real-valued functions Ψ∗(*ω*, *θ*), Ψ<sup>∗</sup>(*ω*, *θ*) : [*u*, *ν*] → R are called the lower and upper functions of Ψ.

**Remark 2.4.** If Ψ : [*u*, *ν*] ⊂ R → F<sup>0</sup> is an *F*·*I*·*V*·*F*, then Ψ is said to be continuous at *ω* ∈ [*u*, *ν*] if, for each *θ* ∈ [0, 1], both the left and right real-valued functions Ψ∗(*ω*, *θ*) and Ψ<sup>∗</sup>(*ω*, *θ*) are continuous at *ω*.

The following fuzzy-interval Riemann–Liouville fractional integral operators were introduced by Allahviranloo et al. [40]:

**Definition 2.5.** Let *β* > 0 and *L*([*µ*, *υ*], F<sup>0</sup>) be the collection of all Lebesgue measurable *F*·*I*·*V*·*Fs* on [*µ*, *υ*]. Then the fuzzy left and right Riemann–Liouville fractional integrals of Ψ ∈ *L*([*µ*, *υ*], F<sup>0</sup>) with order *β* > 0 are defined by

$$\mathcal{I}_{\mu^+}^{\beta}\,\Psi(\omega) = \frac{1}{\Gamma(\beta)}\int_{\mu}^{\omega}(\omega-\varsigma)^{\beta-1}\,\Psi(\varsigma)\,d\varsigma \quad (\omega > \mu), \tag{11}$$

and

$$\mathcal{I}_{\upsilon^-}^{\beta}\,\Psi(\omega) = \frac{1}{\Gamma(\beta)}\int_{\omega}^{\upsilon}(\varsigma-\omega)^{\beta-1}\,\Psi(\varsigma)\,d\varsigma \quad (\omega < \upsilon), \tag{12}$$

respectively, where $\Gamma(\omega) = \int_0^{\infty} \varsigma^{\omega-1} e^{-\varsigma}\, d\varsigma$ is the Euler gamma function. The fuzzy left and right Riemann–Liouville fractional integrals of Ψ can also be expressed through the left and right endpoint functions; that is,

$$\left[\mathcal{I}_{\mu^+}^{\beta}\,\Psi(\omega)\right]^{\theta} = \frac{1}{\Gamma(\beta)}\int_{\mu}^{\omega}(\omega-\varsigma)^{\beta-1}\,\Psi_{\theta}(\varsigma)\,d\varsigma = \frac{1}{\Gamma(\beta)}\int_{\mu}^{\omega}(\omega-\varsigma)^{\beta-1}\,[\Psi_*(\varsigma,\theta),\ \Psi^*(\varsigma,\theta)]\,d\varsigma \quad (\omega > \mu), \tag{13}$$

where

$$\mathcal{I}_{\mu^+}^{\beta}\,\Psi_*(\omega,\theta) = \frac{1}{\Gamma(\beta)}\int_{\mu}^{\omega}(\omega-\varsigma)^{\beta-1}\,\Psi_*(\varsigma,\theta)\,d\varsigma \quad (\omega > \mu), \tag{14}$$

and

$$\mathcal{I}_{\mu^+}^{\beta}\,\Psi^*(\omega,\theta) = \frac{1}{\Gamma(\beta)}\int_{\mu}^{\omega}(\omega-\varsigma)^{\beta-1}\,\Psi^*(\varsigma,\theta)\,d\varsigma \quad (\omega > \mu). \tag{15}$$

Similarly, the right Riemann–Liouville fractional integral of Ψ can be defined through the left and right endpoint functions.
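Numerically, (13)–(15) reduce a fuzzy fractional integral to two ordinary Riemann–Liouville integrals per *θ*-level. The sketch below is our own illustration with a hypothetical *F*·*I*·*V*·*F* whose endpoint functions are Ψ∗(*ω*, *θ*) = *θω* and Ψ<sup>∗</sup>(*ω*, *θ*) = (2 − *θ*)*ω* (not an example from the paper); the substitution *u* = (*ω* − *ς*)<sup>β</sup> removes the integrable endpoint singularity exactly:

```python
# Level-wise evaluation of the fuzzy left Riemann-Liouville integral (13):
# compute (14) and (15) separately and return the interval [lower, upper].
from math import gamma

def midpoint(g, lo, hi, n=20_000):
    h = (hi - lo) / n
    return h * sum(g(lo + (k + 0.5) * h) for k in range(n))

def rl_left(f, mu, omega, beta):
    """I^beta_{mu+} f(omega); the substitution u = (omega - s)^beta turns
    (omega - s)^(beta - 1) ds into du / beta, so the integrand is smooth."""
    U = (omega - mu) ** beta
    return midpoint(lambda u: f(omega - u ** (1.0 / beta)), 0.0, U) / (beta * gamma(beta))

def psi_lower(w, theta): return theta * w            # hypothetical Psi_*(w, theta)
def psi_upper(w, theta): return (2.0 - theta) * w    # hypothetical Psi^*(w, theta)

def fuzzy_rl_left(mu, omega, beta, theta):
    lo = rl_left(lambda w: psi_lower(w, theta), mu, omega, beta)
    hi = rl_left(lambda w: psi_upper(w, theta), mu, omega, beta)
    return lo, hi

lo, hi = fuzzy_rl_left(0.0, 1.0, 0.5, 0.5)
assert lo <= hi                        # a valid interval at every theta-level
l1, h1 = fuzzy_rl_left(0.0, 1.0, 0.5, 1.0)
assert abs(l1 - h1) < 1e-6             # theta = 1 collapses to a single value
assert abs(l1 - 4.0 / (3.0 * gamma(0.5))) < 1e-4   # exact value of I^{1/2} w at w = 1
```

Because the endpoint functions here are linear in *ω* for each fixed *θ*, the monotone ordering Ψ∗ ≤ Ψ<sup>∗</sup> is preserved by the integral, as (13) requires.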

**Definition 2.6.** [18]. The *F*·*I*·*V*·*F* Ψ : [*u*, *ν*] → F<sup>0</sup> is called convex *F*·*I*·*V*·*F* on [*u*, *ν*] if

$$\Psi(\varsigma\omega + (1-\varsigma)y) \preccurlyeq \varsigma\Psi(\omega)\,\tilde{+}\,(1-\varsigma)\Psi(y) \tag{16}$$

for all *ω*, *y* ∈ [*u*, *ν*], *ς* ∈ [0, 1], where Ψ(*ω*) ≽ 0̃ for all *ω* ∈ [*u*, *ν*]. If (16) is reversed, then Ψ is called a concave *F*·*I*·*V*·*F* on [*u*, *ν*]. Ψ is affine if and only if it is both a convex and a concave *F*·*I*·*V*·*F*.

**Definition 2.7.** [27]. The *F*·*I*·*V*·*F* Ψ : [*u*, *ν*] → F<sup>0</sup> is called preinvex *F*·*I*·*V*·*F* on invex interval [*u*, *ν*] if

$$\Psi(\omega + (1-\varsigma)\varphi(y, \omega)) \preccurlyeq \varsigma\Psi(\omega)\,\tilde{+}\,(1-\varsigma)\Psi(y), \tag{17}$$

for all *ω*, *y* ∈ [*u*, *ν*], *ς* ∈ [0, 1], where Ψ(*ω*) ≽ 0̃ for all *ω* ∈ [*u*, *ν*] and *ϕ* : [*u*, *ν*] × [*u*, *ν*] → R. If (17) is reversed, then Ψ is called a preconcave *F*·*I*·*V*·*F* on [*u*, *ν*]. Ψ is affine if and only if it is both a preinvex and a preconcave *F*·*I*·*V*·*F*.

We need the following assumption regarding the function *ϕ* : [*u*, *ν*] × [*u*, *ν*] → R, which plays an important role in upcoming main results.

#### **Condition C.** [16]

$$
\begin{aligned}
\varphi(y,\ \omega + \tau\varphi(y, \omega)) &= (1-\tau)\varphi(y, \omega), \\
\varphi(\omega,\ \omega + \tau\varphi(y, \omega)) &= -\tau\varphi(y, \omega).
\end{aligned}
$$

Note that, for all *ω*, *y* ∈ [*u*, *ν*] and *τ*1, *τ*2 ∈ [0, 1], Condition C gives

$$
\varphi(\omega + \tau\_2 \varphi(y, \omega), \omega + \tau\_1 \varphi(y, \omega)) = (\tau\_2 - \tau\_1)\varphi(y, \omega).
$$

Clearly, for *τ* = 0, we have *ϕ*(*y*, *ω*) = 0 if and only if *y* = *ω*, for all *ω*, *y* ∈ [*u*, *ν*]. For applications of Condition C, see [27–33].
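For the prototypical bifunction *ϕ*(*y*, *ω*) = *y* − *ω* (the choice appearing in Remark 2.9), Condition C can be checked directly:

```latex
\varphi(y,\ \omega + \tau\varphi(y,\omega))
  = y - \omega - \tau(y - \omega)
  = (1-\tau)(y-\omega)
  = (1-\tau)\varphi(y,\omega),
\qquad
\varphi(\omega,\ \omega + \tau\varphi(y,\omega))
  = \omega - \omega - \tau(y-\omega)
  = -\tau\varphi(y,\omega).
```

Condition C therefore holds trivially in the convex case; its real content is for bifunctions other than *y* − *ω*.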

**Theorem 2.8.** [28] *Let* [*u*, *ν*] *be an invex set with respect to the bifunction ϕ and let* Ψ : [*u*, *ν*] → F*C*(R) *be an F*·*I*·*V*·*F with* Ψ(*ω*) ≽ 0̃*, whose θ-levels define the family of I*·*V*·*Fs* Ψ*<sup>θ</sup>* : [*u*, *ν*] ⊂ R → K*<sup>C</sup>*<sup>+</sup> *given by*

$$\Psi\_{\theta}(\omega) = \left[ \Psi\_{\*}(\omega, \theta) \; , \; \Psi^{\*}(\omega, \theta) \right] \; , \; \forall \; \omega \in \left[ \mathfrak{u} \; , \nu \right] \tag{18}$$

*for all ω* ∈ [*u*, *ν*] *and for all θ* ∈ [0, 1]*. Then,* Ψ *is preinvex* F·I·V·F *on* [*u*, *ν*] , *if and only if, for all θ* ∈ [0, 1] , Ψ∗(*ω*, *θ*) *and* Ψ<sup>∗</sup> (*ω*, *θ*) *both are preinvex functions.*

**Remark 2.9.** If *ϕ*(*ω*, *y*) = *ω* − *y*, then we obtain inequality (16).

If Ψ∗(*ω*, *θ*) = Ψ<sup>∗</sup> (*ω*, *θ*) with *θ* = 1, then from (17), we obtain the definition of classical preinvex function, see [16].

If Ψ∗(*ω*, *θ*) = Ψ<sup>∗</sup> (*ω*, *θ*) with *ϕ*(*ω*, *y*) = *ω* − *y* and *θ* = 1, then from (17), we obtain the definition of classical convex function.
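Conversely, preinvexity is strictly weaker than convexity. A standard real-valued illustration, added here by us and drawn from the invexity literature rather than this paper, is *ψ*(*ω*) = −|*ω*|: it is not convex, yet it satisfies the defining inequality of (17) for the bifunction below. The sketch samples the inequality over a grid:

```python
# Numerical check (a sketch) that psi(w) = -|w| satisfies the preinvexity
# inequality of (17) for the classical sign-dependent bifunction, while
# failing midpoint convexity. This example is ours, not the paper's.
def phi(y, w):                    # bifunction of the invex structure
    return y - w if w * y >= 0 else w - y

def psi(w):                       # preinvex but non-convex test function
    return -abs(w)

ok = all(
    psi(w + (1 - s) * phi(y, w)) <= s * psi(w) + (1 - s) * psi(y) + 1e-12
    for w in [-2, -1, -0.5, 0.5, 1, 2]
    for y in [-2, -1, -0.5, 0.5, 1, 2]
    for s in [k / 10 for k in range(11)]
)
assert ok

# psi is not convex: the midpoint inequality fails at w = -1, y = 1
assert not (psi(0.0) <= 0.5 * psi(-1.0) + 0.5 * psi(1.0))
```

A grid check is of course no proof, but it shows how the bifunction steers the comparison point away from the straight segment used by ordinary convexity.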

#### **3. Fuzzy-Interval Fractional Hermite-Hadamard Inequalities**

The major goal of this section is to build new versions of fractional *H*·*H* and *H*·*H* Fejér type inequalities for preinvex *F*·*I*·*V*·*Fs*, a classical topic of study. We also study some related inequalities. In what follows, we denote by *L*([*u*, *u* + *ϕ*(*ν*, *u*)], F<sup>0</sup>) the family of Lebesgue measurable *F*·*I*·*V*·*Fs*.

**Theorem 3.1.** *Let* Ψ : [*u*, *u* + *ϕ*(*ν*, *u*)] → F<sup>0</sup> *be a preinvex F*·*I*·*V*·*F on* [*u*, *u* + *ϕ*(*ν*, *u*)]*, whose θ-levels define the family of I*·*V*·*Fs* Ψ*<sup>θ</sup>* : [*u*, *u* + *ϕ*(*ν*, *u*)] ⊂ R → K*<sup>C</sup>*<sup>+</sup> *given by* Ψ*<sup>θ</sup>*(*ω*) = [Ψ∗(*ω*, *θ*), Ψ<sup>∗</sup>(*ω*, *θ*)] *for all ω* ∈ [*u*, *u* + *ϕ*(*ν*, *u*)] *and all θ* ∈ [0, 1]*. If ϕ satisfies Condition C and* Ψ ∈ *L*([*u*, *u* + *ϕ*(*ν*, *u*)], F<sup>0</sup>)*, then*

$$\Psi\left(\frac{2u+\phi(\nu,u)}{2}\right) \preccurlyeq \frac{\Gamma(\beta+1)}{2(\phi(\nu,u))^{\beta}}\Big[\mathcal{I}^{\beta}_{u^{+}}\,\Psi(u+\phi(\nu,u))\,\widetilde{+}\,\mathcal{I}^{\beta}_{(u+\phi(\nu,u))^{-}}\,\Psi(u)\Big] \preccurlyeq \frac{\Psi(u)\,\widetilde{+}\,\Psi(u+\phi(\nu,u))}{2} \preccurlyeq \frac{\Psi(u)\,\widetilde{+}\,\Psi(\nu)}{2} \tag{19}$$

If Ψ(*ω*) is a preconcave *F*·*I*·*V*·*F*, then

$$\Psi\left(\frac{2u+\phi(\nu,u)}{2}\right) \succcurlyeq \frac{\Gamma(\beta+1)}{2(\phi(\nu,u))^{\beta}}\Big[\mathcal{I}^{\beta}_{u^{+}}\,\Psi(u+\phi(\nu,u))\,\widetilde{+}\,\mathcal{I}^{\beta}_{(u+\phi(\nu,u))^{-}}\,\Psi(u)\Big] \succcurlyeq \frac{\Psi(u)\,\widetilde{+}\,\Psi(u+\phi(\nu,u))}{2} \succcurlyeq \frac{\Psi(u)\,\widetilde{+}\,\Psi(\nu)}{2} \tag{20}$$

**Proof.** Let Ψ : [*u*, *u* + *ϕ*(*ν*, *u*)] → F<sup>0</sup> be a preinvex *F*·*I*·*V*·*F*. Since Condition C holds, by the preinvexity of Ψ we have

$$2\,\Psi\left(\frac{2u+\phi(\nu,u)}{2}\right) \preccurlyeq \Psi(u+(1-\varsigma)\phi(\nu,u))\,\widetilde{+}\,\Psi(u+\varsigma\phi(\nu,u)).$$

Therefore, for every *θ* ∈ [0, 1], we have

$$2\,\Psi_*\left(\frac{2u+\phi(\nu,u)}{2},\theta\right) \le \Psi_*(u+(1-\varsigma)\phi(\nu,u),\theta) + \Psi_*(u+\varsigma\phi(\nu,u),\theta),$$

$$2\,\Psi^*\left(\frac{2u+\phi(\nu,u)}{2},\theta\right) \le \Psi^*(u+(1-\varsigma)\phi(\nu,u),\theta) + \Psi^*(u+\varsigma\phi(\nu,u),\theta).$$

Multiplying both sides by *ς*<sup>*β*−1</sup> and integrating the resulting inequalities with respect to *ς* over (0, 1), we have

$$2\int_0^1 \varsigma^{\beta-1}\,\Psi_*\left(\frac{2u+\phi(\nu,u)}{2},\theta\right)d\varsigma \le \int_0^1 \varsigma^{\beta-1}\,\Psi_*(u+(1-\varsigma)\phi(\nu,u),\theta)\,d\varsigma + \int_0^1 \varsigma^{\beta-1}\,\Psi_*(u+\varsigma\phi(\nu,u),\theta)\,d\varsigma,$$

$$2\int_0^1 \varsigma^{\beta-1}\,\Psi^*\left(\frac{2u+\phi(\nu,u)}{2},\theta\right)d\varsigma \le \int_0^1 \varsigma^{\beta-1}\,\Psi^*(u+(1-\varsigma)\phi(\nu,u),\theta)\,d\varsigma + \int_0^1 \varsigma^{\beta-1}\,\Psi^*(u+\varsigma\phi(\nu,u),\theta)\,d\varsigma.$$

Let *ω* = *u* + (1 − *ς*)*ϕ*(*ν*, *u*) and *y* = *u* + *ςϕ*(*ν*, *u*) . Then we have

$$\begin{aligned}
\frac{2}{\beta}\,\Psi_*\left(\frac{2u+\phi(\nu,u)}{2},\theta\right) &\le \frac{1}{(\phi(\nu,u))^{\beta}}\int_u^{u+\phi(\nu,u)}(u+\phi(\nu,u)-y)^{\beta-1}\,\Psi_*(y,\theta)\,dy + \frac{1}{(\phi(\nu,u))^{\beta}}\int_u^{u+\phi(\nu,u)}(\omega-u)^{\beta-1}\,\Psi_*(\omega,\theta)\,d\omega\\
&= \frac{\Gamma(\beta)}{(\phi(\nu,u))^{\beta}}\Big[\mathcal{I}^{\beta}_{u^{+}}\,\Psi_*(u+\phi(\nu,u),\theta)+\mathcal{I}^{\beta}_{(u+\phi(\nu,u))^{-}}\,\Psi_*(u,\theta)\Big],\\
\frac{2}{\beta}\,\Psi^*\left(\frac{2u+\phi(\nu,u)}{2},\theta\right) &\le \frac{1}{(\phi(\nu,u))^{\beta}}\int_u^{u+\phi(\nu,u)}(u+\phi(\nu,u)-y)^{\beta-1}\,\Psi^*(y,\theta)\,dy + \frac{1}{(\phi(\nu,u))^{\beta}}\int_u^{u+\phi(\nu,u)}(\omega-u)^{\beta-1}\,\Psi^*(\omega,\theta)\,d\omega\\
&= \frac{\Gamma(\beta)}{(\phi(\nu,u))^{\beta}}\Big[\mathcal{I}^{\beta}_{u^{+}}\,\Psi^*(u+\phi(\nu,u),\theta)+\mathcal{I}^{\beta}_{(u+\phi(\nu,u))^{-}}\,\Psi^*(u,\theta)\Big].
\end{aligned}$$

That is,

$$\begin{aligned}
&\frac{2}{\beta}\left[\Psi_*\left(\frac{2u+\phi(\nu,u)}{2},\theta\right),\ \Psi^*\left(\frac{2u+\phi(\nu,u)}{2},\theta\right)\right]\\
&\le_I \frac{\Gamma(\beta)}{(\phi(\nu,u))^{\beta}}\Big[\mathcal{I}^{\beta}_{u^{+}}\,\Psi_*(u+\phi(\nu,u),\theta)+\mathcal{I}^{\beta}_{(u+\phi(\nu,u))^{-}}\,\Psi_*(u,\theta),\ \mathcal{I}^{\beta}_{u^{+}}\,\Psi^*(u+\phi(\nu,u),\theta)+\mathcal{I}^{\beta}_{(u+\phi(\nu,u))^{-}}\,\Psi^*(u,\theta)\Big].
\end{aligned}$$

Thus,

$$\frac{2}{\beta}\,\Psi\left(\frac{2u+\phi(\nu,u)}{2}\right) \preccurlyeq \frac{\Gamma(\beta)}{(\phi(\nu,u))^{\beta}}\Big[\mathcal{I}^{\beta}_{u^{+}}\,\Psi(u+\phi(\nu,u))\,\widetilde{+}\,\mathcal{I}^{\beta}_{(u+\phi(\nu,u))^{-}}\,\Psi(u)\Big] \tag{21}$$

In a similar way as above, we have

$$\frac{\Gamma(\beta+1)}{2(\phi(\nu,u))^{\beta}}\Big[\mathcal{I}^{\beta}_{u^{+}}\,\Psi(u+\phi(\nu,u))\,\widetilde{+}\,\mathcal{I}^{\beta}_{(u+\phi(\nu,u))^{-}}\,\Psi(u)\Big] \preccurlyeq \frac{\Psi(u)\,\widetilde{+}\,\Psi(u+\phi(\nu,u))}{2} \preccurlyeq \frac{\Psi(u)\,\widetilde{+}\,\Psi(\nu)}{2}.\tag{22}$$

Combining (21) and (22), we have

$$\Psi\left(\frac{2u+\phi(\nu,u)}{2}\right) \preccurlyeq \frac{\Gamma(\beta+1)}{2(\phi(\nu,u))^{\beta}}\Big[\mathcal{I}^{\beta}_{u^{+}}\,\Psi(u+\phi(\nu,u))\,\widetilde{+}\,\mathcal{I}^{\beta}_{(u+\phi(\nu,u))^{-}}\,\Psi(u)\Big] \preccurlyeq \frac{\Psi(u)\,\widetilde{+}\,\Psi(u+\phi(\nu,u))}{2} \preccurlyeq \frac{\Psi(u)\,\widetilde{+}\,\Psi(\nu)}{2}.$$

Hence, the required result.

**Remark 3.2.** From Theorem 3.1, we clearly see the following:

If *ϕ*(*ω*, *y*) = *ω* − *y*, then from Theorem 3.1 we obtain the following result in fuzzy fractional calculus; see [23]:

$$\Psi\left(\frac{u+\nu}{2}\right) \preccurlyeq \frac{\Gamma(\beta+1)}{2(\nu-u)^{\beta}}\Big[\mathcal{I}^{\beta}_{u^{+}}\,\Psi(\nu)\,\widetilde{+}\,\mathcal{I}^{\beta}_{\nu^{-}}\,\Psi(u)\Big] \preccurlyeq \frac{\Psi(u)\,\widetilde{+}\,\Psi(\nu)}{2}.$$

Let *β* = 1. Then Theorem 3.1 reduces to the result for preinvex *F*·*I*·*V*·*F* given in [28]:

$$\Psi\left(\frac{2u+\phi(\nu,u)}{2}\right) \preccurlyeq \frac{1}{\phi(\nu,u)}\,(\mathrm{FR})\int_u^{u+\phi(\nu,u)}\Psi(\omega)\,d\omega \preccurlyeq \frac{\Psi(u)\,\widetilde{+}\,\Psi(\nu)}{2}.$$

Let *β* = 1 and *ϕ*(*ω*, *y*) = *ω* − *y*. Then Theorem 3.1 reduces to the result for convex *F*·*I*·*V*·*F* given in [26]:

$$\Psi\left(\frac{u+\nu}{2}\right) \preccurlyeq \frac{1}{\nu-u}\,(\mathrm{FR})\int_u^{\nu}\Psi(\omega)\,d\omega \preccurlyeq \frac{\Psi(u)\,\widetilde{+}\,\Psi(\nu)}{2}.$$

Let *β* = 1 = *θ* and Ψ∗(*ω*, *θ*) = Ψ<sup>∗</sup>(*ω*, *θ*) with *ϕ*(*ω*, *y*) = *ω* − *y*. Then from Theorem 3.1 we obtain the classical *H*·*H* inequality.
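As a quick numerical illustration of this last special case, the classical *H*·*H* chain *f*((*u* + *ν*)/2) ≤ (1/(*ν* − *u*))∫*f* ≤ (*f*(*u*) + *f*(*ν*))/2 can be checked for any convex function; a minimal sketch (the test function *ω*² and the helper name `hh_chain` are ours):

```python
def hh_chain(f, a, b, n=100000):
    """Return the three members of the Hermite-Hadamard chain for f on [a, b]:
    midpoint value, integral average, and endpoint average."""
    h = (b - a) / n
    integral = sum(f(a + (k + 0.5) * h) for k in range(n)) * h
    return f((a + b) / 2), integral / (b - a), (f(a) + f(b)) / 2

lo, mid, hi = hh_chain(lambda w: w * w, 0.0, 2.0)   # convex on [0, 2]
assert lo <= mid <= hi                               # 1.0 <= 4/3 <= 2.0
```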

**Example 3.3.** Let *β* = 1/2, *ω* ∈ [2, 2 + *ϕ*(3, 2)], and let the *F*·*I*·*V*·*F* Ψ : [*u*, *u* + *ϕ*(*ν*, *u*)] = [2, 2 + *ϕ*(3, 2)] → F0 be defined by

$$\Psi(\omega)(\theta) = \begin{cases} \dfrac{\theta}{2-\omega^{1/2}}, & \theta \in \left[0,\, 2-\omega^{1/2}\right] \\[2mm] \dfrac{2\left(2-\omega^{1/2}\right)-\theta}{2-\omega^{1/2}}, & \theta \in \left(2-\omega^{1/2},\, 2\left(2-\omega^{1/2}\right)\right] \\[2mm] 0, & \text{otherwise.} \end{cases}$$

Then, for each *θ* ∈ [0, 1], we have Ψ*<sub>θ</sub>*(*ω*) = [*θ*(2 − *ω*<sup>1/2</sup>), (2 − *θ*)(2 − *ω*<sup>1/2</sup>)]. Since the left and right endpoint functions Ψ∗(*ω*, *θ*) = *θ*(2 − *ω*<sup>1/2</sup>) and Ψ<sup>∗</sup>(*ω*, *θ*) = (2 − *θ*)(2 − *ω*<sup>1/2</sup>) are preinvex with respect to *ϕ*(*ν*, *u*) = *ν* − *u* for each *θ* ∈ [0, 1], Ψ(*ω*) is a preinvex *F*·*I*·*V*·*F*. We clearly see that Ψ ∈ *L*([*u*, *u* + *ϕ*(*ν*, *u*)], F0) and

$$\Psi_*\left(\frac{2u+\phi(\nu,u)}{2},\theta\right) = \Psi_*\left(\frac{5}{2},\theta\right) = \theta\,\frac{4-\sqrt{10}}{2},$$

$$\Psi^*\left(\frac{2u+\phi(\nu,u)}{2},\theta\right) = \Psi^*\left(\frac{5}{2},\theta\right) = (2-\theta)\,\frac{4-\sqrt{10}}{2},$$

$$\frac{\Psi_*(u,\theta)+\Psi_*(u+\phi(\nu,u),\theta)}{2} = \theta\left(\frac{4-\sqrt{2}-\sqrt{3}}{2}\right),$$

$$\frac{\Psi^*(u,\theta)+\Psi^*(u+\phi(\nu,u),\theta)}{2} = (2-\theta)\left(\frac{4-\sqrt{2}-\sqrt{3}}{2}\right).$$

Note that

$$\begin{aligned}
\frac{\Gamma(\beta+1)}{2(\phi(\nu,u))^{\beta}}\Big[\mathcal{I}^{\beta}_{u^{+}}\,\Psi_*(u+\phi(\nu,u),\theta)+\mathcal{I}^{\beta}_{(u+\phi(\nu,u))^{-}}\,\Psi_*(u,\theta)\Big]
&= \frac{\Gamma\!\left(\frac{3}{2}\right)}{2}\,\frac{1}{\sqrt{\pi}}\int_2^{3}(3-\omega)^{-\frac{1}{2}}\,\theta\big(2-\omega^{\frac{1}{2}}\big)\,d\omega\\
&\quad+\frac{\Gamma\!\left(\frac{3}{2}\right)}{2}\,\frac{1}{\sqrt{\pi}}\int_2^{3}(\omega-2)^{-\frac{1}{2}}\,\theta\big(2-\omega^{\frac{1}{2}}\big)\,d\omega\\
&= \frac{1}{4}\,\theta\left[\frac{7393}{10{,}000}+\frac{9501}{10{,}000}\right] = \theta\,\frac{8447}{20{,}000},\\
\frac{\Gamma(\beta+1)}{2(\phi(\nu,u))^{\beta}}\Big[\mathcal{I}^{\beta}_{u^{+}}\,\Psi^*(u+\phi(\nu,u),\theta)+\mathcal{I}^{\beta}_{(u+\phi(\nu,u))^{-}}\,\Psi^*(u,\theta)\Big]
&= \frac{1}{4}\,(2-\theta)\left[\frac{7393}{10{,}000}+\frac{9501}{10{,}000}\right] = (2-\theta)\,\frac{8447}{20{,}000}.
\end{aligned}$$

Therefore

$$\left[\theta\,\frac{4-\sqrt{10}}{2},\ (2-\theta)\,\frac{4-\sqrt{10}}{2}\right] \le_I \left[\theta\,\frac{8447}{20{,}000},\ (2-\theta)\,\frac{8447}{20{,}000}\right] \le_I \left[\theta\left(\frac{4-\sqrt{2}-\sqrt{3}}{2}\right),\ (2-\theta)\left(\frac{4-\sqrt{2}-\sqrt{3}}{2}\right)\right]$$

and Theorem 3.1 is verified.
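The chain of Example 3.3 can also be confirmed by direct numerical quadrature for the lower endpoint function (*θ* = 1); a minimal sketch, where the substitutions *s*² = 3 − *ω* and *s*² = *ω* − 2 remove the kernel singularities and the helper name `frac_mean` is ours:

```python
import math

def frac_mean(f, n=20000):
    """Gamma(beta+1)/(2*phi**beta) * [I^{1/2}_{2+} f(3) + I^{1/2}_{3-} f(2)]
    for beta = 1/2 on [2, 3] (phi(3, 2) = 1), the middle term of the chain."""
    h = 1.0 / n
    s_vals = [(k + 0.5) * h for k in range(n)]
    left = sum(2.0 * f(3.0 - s * s) for s in s_vals) * h    # sqrt(pi) * I^{1/2}_{2+} f(3)
    right = sum(2.0 * f(2.0 + s * s) for s in s_vals) * h   # sqrt(pi) * I^{1/2}_{3-} f(2)
    return math.gamma(1.5) / 2.0 * (left + right) / math.sqrt(math.pi)

psi_low = lambda w: 2.0 - math.sqrt(w)          # Psi_*(w, 1)
lo = (4.0 - math.sqrt(10.0)) / 2.0
mid = frac_mean(psi_low)                        # ≈ 8447/20000
hi = (4.0 - math.sqrt(2.0) - math.sqrt(3.0)) / 2.0
assert lo <= mid <= hi
```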

It is a well-known fact that the *H*·*H* Fejér type inequality is a generalization of the *H*·*H* type inequality. In Theorems 3.4 and 3.5, we obtain the second and first fuzzy fractional *H*·*H* Fejér type inequalities for the introduced preinvex *F*·*I*·*V*·*Fs*.

**Theorem 3.4.** *Let* Ψ : [*u*, *u* + *ϕ*(*ν*, *u*)] → F<sup>0</sup> *be a preinvex F*·*I*·*V*·*F with u* < *ν, whose θ-levels define the family of I*·*V*·*Fs* Ψ*<sub>θ</sub>* : [*u*, *u* + *ϕ*(*ν*, *u*)] ⊂ R → K*<sub>C</sub>*<sup>+</sup> *given by* Ψ*<sub>θ</sub>*(*ω*) = [Ψ∗(*ω*, *θ*), Ψ<sup>∗</sup>(*ω*, *θ*)] *for all ω* ∈ [*u*, *u* + *ϕ*(*ν*, *u*)] *and for all θ* ∈ [0, 1]*. Let* Ψ ∈ *L*([*u*, *u* + *ϕ*(*ν*, *u*)], F0) *and let* Ω : [*u*, *u* + *ϕ*(*ν*, *u*)] → R, Ω(*ω*) ≥ 0, *be symmetric with respect to* (2*u* + *ϕ*(*ν*, *u*))/2*. If ϕ satisfies Condition C, then*

$$\begin{aligned}
\Big[\mathcal{I}^{\beta}_{u^{+}}\,\Psi\Omega(u+\phi(\nu,u))\,\widetilde{+}\,\mathcal{I}^{\beta}_{(u+\phi(\nu,u))^{-}}\,\Psi\Omega(u)\Big]
&\preccurlyeq \frac{\Psi(u)\,\widetilde{+}\,\Psi(u+\phi(\nu,u))}{2}\Big[\mathcal{I}^{\beta}_{u^{+}}\,\Omega(u+\phi(\nu,u))+\mathcal{I}^{\beta}_{(u+\phi(\nu,u))^{-}}\,\Omega(u)\Big]\\
&\preccurlyeq \frac{\Psi(u)\,\widetilde{+}\,\Psi(\nu)}{2}\Big[\mathcal{I}^{\beta}_{u^{+}}\,\Omega(u+\phi(\nu,u))+\mathcal{I}^{\beta}_{(u+\phi(\nu,u))^{-}}\,\Omega(u)\Big]
\end{aligned}\tag{23}$$

If Ψ is preconcave *F*·*I*·*V*·*F*, then inequality (23) is reversed.

**Proof.** Let Ψ be a preinvex *F*·*I*·*V*·*F* and note that *ς*<sup>*β*−1</sup>Ω(*u* + (1 − *ς*)*ϕ*(*ν*, *u*)) ≥ 0. Then, for each *θ* ∈ [0, 1], we have

$$\begin{aligned}
&\varsigma^{\beta-1}\,\Psi_*(u+(1-\varsigma)\phi(\nu,u),\theta)\,\Omega(u+(1-\varsigma)\phi(\nu,u))\\
&\qquad\le \varsigma^{\beta-1}\big(\varsigma\,\Psi_*(u,\theta)+(1-\varsigma)\,\Psi_*(u+\phi(\nu,u),\theta)\big)\,\Omega(u+(1-\varsigma)\phi(\nu,u)),\\
&\varsigma^{\beta-1}\,\Psi^*(u+(1-\varsigma)\phi(\nu,u),\theta)\,\Omega(u+(1-\varsigma)\phi(\nu,u))\\
&\qquad\le \varsigma^{\beta-1}\big(\varsigma\,\Psi^*(u,\theta)+(1-\varsigma)\,\Psi^*(u+\phi(\nu,u),\theta)\big)\,\Omega(u+(1-\varsigma)\phi(\nu,u)),
\end{aligned}\tag{24}$$

and

$$\begin{aligned}
&\varsigma^{\beta-1}\,\Psi_*(u+\varsigma\phi(\nu,u),\theta)\,\Omega(u+\varsigma\phi(\nu,u))\\
&\qquad\le \varsigma^{\beta-1}\big((1-\varsigma)\,\Psi_*(u,\theta)+\varsigma\,\Psi_*(u+\phi(\nu,u),\theta)\big)\,\Omega(u+\varsigma\phi(\nu,u)),\\
&\varsigma^{\beta-1}\,\Psi^*(u+\varsigma\phi(\nu,u),\theta)\,\Omega(u+\varsigma\phi(\nu,u))\\
&\qquad\le \varsigma^{\beta-1}\big((1-\varsigma)\,\Psi^*(u,\theta)+\varsigma\,\Psi^*(u+\phi(\nu,u),\theta)\big)\,\Omega(u+\varsigma\phi(\nu,u)).
\end{aligned}\tag{25}$$

After adding (24) and (25), and integrating over [0, 1] , we get

$$\begin{aligned}
&\int_0^1 \varsigma^{\beta-1}\,\Psi_*(u+(1-\varsigma)\phi(\nu,u),\theta)\,\Omega(u+(1-\varsigma)\phi(\nu,u))\,d\varsigma
+\int_0^1 \varsigma^{\beta-1}\,\Psi_*(u+\varsigma\phi(\nu,u),\theta)\,\Omega(u+\varsigma\phi(\nu,u))\,d\varsigma\\
&\le \int_0^1 \left[\begin{array}{l} \varsigma^{\beta-1}\,\Psi_*(u,\theta)\,\{\varsigma\,\Omega(u+(1-\varsigma)\phi(\nu,u))+(1-\varsigma)\,\Omega(u+\varsigma\phi(\nu,u))\}\\[1mm] +\,\varsigma^{\beta-1}\,\Psi_*(u+\phi(\nu,u),\theta)\,\{(1-\varsigma)\,\Omega(u+(1-\varsigma)\phi(\nu,u))+\varsigma\,\Omega(u+\varsigma\phi(\nu,u))\} \end{array}\right]d\varsigma,\\
&\int_0^1 \varsigma^{\beta-1}\,\Psi^*(u+(1-\varsigma)\phi(\nu,u),\theta)\,\Omega(u+(1-\varsigma)\phi(\nu,u))\,d\varsigma
+\int_0^1 \varsigma^{\beta-1}\,\Psi^*(u+\varsigma\phi(\nu,u),\theta)\,\Omega(u+\varsigma\phi(\nu,u))\,d\varsigma\\
&\le \int_0^1 \left[\begin{array}{l} \varsigma^{\beta-1}\,\Psi^*(u,\theta)\,\{\varsigma\,\Omega(u+(1-\varsigma)\phi(\nu,u))+(1-\varsigma)\,\Omega(u+\varsigma\phi(\nu,u))\}\\[1mm] +\,\varsigma^{\beta-1}\,\Psi^*(u+\phi(\nu,u),\theta)\,\{(1-\varsigma)\,\Omega(u+(1-\varsigma)\phi(\nu,u))+\varsigma\,\Omega(u+\varsigma\phi(\nu,u))\} \end{array}\right]d\varsigma.
\end{aligned}$$

Since Ω is symmetric, then

$$\begin{aligned}
&= \big[\Psi_*(u,\theta)+\Psi_*(u+\phi(\nu,u),\theta)\big]\int_0^1 \varsigma^{\beta-1}\,\Omega(u+\varsigma\phi(\nu,u))\,d\varsigma\\
&= \frac{\Psi_*(u,\theta)+\Psi_*(u+\phi(\nu,u),\theta)}{2}\,\frac{\Gamma(\beta)}{(\phi(\nu,u))^{\beta}}\Big[\mathcal{I}^{\beta}_{u^{+}}\,\Omega(u+\phi(\nu,u))+\mathcal{I}^{\beta}_{(u+\phi(\nu,u))^{-}}\,\Omega(u)\Big],\\
&= \big[\Psi^*(u,\theta)+\Psi^*(u+\phi(\nu,u),\theta)\big]\int_0^1 \varsigma^{\beta-1}\,\Omega(u+\varsigma\phi(\nu,u))\,d\varsigma\\
&= \frac{\Psi^*(u,\theta)+\Psi^*(u+\phi(\nu,u),\theta)}{2}\,\frac{\Gamma(\beta)}{(\phi(\nu,u))^{\beta}}\Big[\mathcal{I}^{\beta}_{u^{+}}\,\Omega(u+\phi(\nu,u))+\mathcal{I}^{\beta}_{(u+\phi(\nu,u))^{-}}\,\Omega(u)\Big].
\end{aligned}\tag{26}$$

Since

$$\begin{aligned}
&\int_0^1 \varsigma^{\beta-1}\,\Psi_*(u+(1-\varsigma)\phi(\nu,u),\theta)\,\Omega(u+(1-\varsigma)\phi(\nu,u))\,d\varsigma
+ \int_0^1 \varsigma^{\beta-1}\,\Psi_*(u+\varsigma\phi(\nu,u),\theta)\,\Omega(u+\varsigma\phi(\nu,u))\,d\varsigma\\
&= \frac{1}{(\phi(\nu,u))^{\beta}}\int_u^{u+\phi(\nu,u)}(u+\phi(\nu,u)-\omega)^{\beta-1}\,\Psi_*(\omega,\theta)\,\Omega(\omega)\,d\omega
+ \frac{1}{(\phi(\nu,u))^{\beta}}\int_u^{u+\phi(\nu,u)}(\omega-u)^{\beta-1}\,\Psi_*(\omega,\theta)\,\Omega(\omega)\,d\omega\\
&= \frac{\Gamma(\beta)}{(\phi(\nu,u))^{\beta}}\Big[\mathcal{I}^{\beta}_{u^{+}}\,\Psi_*\Omega(u+\phi(\nu,u))+\mathcal{I}^{\beta}_{(u+\phi(\nu,u))^{-}}\,\Psi_*\Omega(u)\Big],\\
&\int_0^1 \varsigma^{\beta-1}\,\Psi^*(u+(1-\varsigma)\phi(\nu,u),\theta)\,\Omega(u+(1-\varsigma)\phi(\nu,u))\,d\varsigma
+ \int_0^1 \varsigma^{\beta-1}\,\Psi^*(u+\varsigma\phi(\nu,u),\theta)\,\Omega(u+\varsigma\phi(\nu,u))\,d\varsigma\\
&= \frac{\Gamma(\beta)}{(\phi(\nu,u))^{\beta}}\Big[\mathcal{I}^{\beta}_{u^{+}}\,\Psi^*\Omega(u+\phi(\nu,u))+\mathcal{I}^{\beta}_{(u+\phi(\nu,u))^{-}}\,\Psi^*\Omega(u)\Big].
\end{aligned}\tag{27}$$

Then from (26) and (27), we have

$$\begin{aligned}
&\frac{\Gamma(\beta)}{(\phi(\nu,u))^{\beta}}\Big[\mathcal{I}^{\beta}_{u^{+}}\,\Psi_*\Omega(u+\phi(\nu,u))+\mathcal{I}^{\beta}_{(u+\phi(\nu,u))^{-}}\,\Psi_*\Omega(u)\Big]\\
&\qquad\le \frac{\Psi_*(u,\theta)+\Psi_*(u+\phi(\nu,u),\theta)}{2}\,\frac{\Gamma(\beta)}{(\phi(\nu,u))^{\beta}}\Big[\mathcal{I}^{\beta}_{u^{+}}\,\Omega(u+\phi(\nu,u))+\mathcal{I}^{\beta}_{(u+\phi(\nu,u))^{-}}\,\Omega(u)\Big],\\
&\frac{\Gamma(\beta)}{(\phi(\nu,u))^{\beta}}\Big[\mathcal{I}^{\beta}_{u^{+}}\,\Psi^*\Omega(u+\phi(\nu,u))+\mathcal{I}^{\beta}_{(u+\phi(\nu,u))^{-}}\,\Psi^*\Omega(u)\Big]\\
&\qquad\le \frac{\Psi^*(u,\theta)+\Psi^*(u+\phi(\nu,u),\theta)}{2}\,\frac{\Gamma(\beta)}{(\phi(\nu,u))^{\beta}}\Big[\mathcal{I}^{\beta}_{u^{+}}\,\Omega(u+\phi(\nu,u))+\mathcal{I}^{\beta}_{(u+\phi(\nu,u))^{-}}\,\Omega(u)\Big],
\end{aligned}$$

that is

$$\begin{aligned}
&\frac{\Gamma(\beta)}{(\phi(\nu,u))^{\beta}}\Big[\mathcal{I}^{\beta}_{u^{+}}\,\Psi_*\Omega(u+\phi(\nu,u))+\mathcal{I}^{\beta}_{(u+\phi(\nu,u))^{-}}\,\Psi_*\Omega(u),\ \mathcal{I}^{\beta}_{u^{+}}\,\Psi^*\Omega(u+\phi(\nu,u))+\mathcal{I}^{\beta}_{(u+\phi(\nu,u))^{-}}\,\Psi^*\Omega(u)\Big]\\
&\le_I \frac{\Gamma(\beta)}{(\phi(\nu,u))^{\beta}}\left[\frac{\Psi_*(u,\theta)+\Psi_*(u+\phi(\nu,u),\theta)}{2},\ \frac{\Psi^*(u,\theta)+\Psi^*(u+\phi(\nu,u),\theta)}{2}\right]\Big[\mathcal{I}^{\beta}_{u^{+}}\,\Omega(u+\phi(\nu,u))+\mathcal{I}^{\beta}_{(u+\phi(\nu,u))^{-}}\,\Omega(u)\Big],
\end{aligned}$$

hence

$$\begin{aligned}
\Big[\mathcal{I}^{\beta}_{u^{+}}\,\Psi\Omega(u+\phi(\nu,u))\,\widetilde{+}\,\mathcal{I}^{\beta}_{(u+\phi(\nu,u))^{-}}\,\Psi\Omega(u)\Big]
&\preccurlyeq \frac{\Psi(u)\,\widetilde{+}\,\Psi(u+\phi(\nu,u))}{2}\Big[\mathcal{I}^{\beta}_{u^{+}}\,\Omega(u+\phi(\nu,u))+\mathcal{I}^{\beta}_{(u+\phi(\nu,u))^{-}}\,\Omega(u)\Big]\\
&\preccurlyeq \frac{\Psi(u)\,\widetilde{+}\,\Psi(\nu)}{2}\Big[\mathcal{I}^{\beta}_{u^{+}}\,\Omega(u+\phi(\nu,u))+\mathcal{I}^{\beta}_{(u+\phi(\nu,u))^{-}}\,\Omega(u)\Big].
\end{aligned}$$

This completes the proof.

**Theorem 3.5.** *Let* Ψ : [*u*, *u* + *ϕ*(*ν*, *u*)] → F<sup>0</sup> *be a preinvex F*·*I*·*V*·*F with u* < *ν, whose θ-levels define the family of I*·*V*·*Fs* Ψ*<sub>θ</sub>* : [*u*, *u* + *ϕ*(*ν*, *u*)] ⊂ R → K*<sub>C</sub>*<sup>+</sup> *given by* Ψ*<sub>θ</sub>*(*ω*) = [Ψ∗(*ω*, *θ*), Ψ<sup>∗</sup>(*ω*, *θ*)] *for all ω* ∈ [*u*, *u* + *ϕ*(*ν*, *u*)] *and for all θ* ∈ [0, 1]*. Let* Ψ ∈ *L*([*u*, *u* + *ϕ*(*ν*, *u*)], F0) *and let* Ω : [*u*, *u* + *ϕ*(*ν*, *u*)] → R, Ω(*ω*) ≥ 0, *be symmetric with respect to* (2*u* + *ϕ*(*ν*, *u*))/2*. If ϕ satisfies Condition C, then*

$$\Psi\left(\frac{2u+\phi(\nu,u)}{2}\right)\Big[\mathcal{I}^{\beta}_{u^{+}}\,\Omega(u+\phi(\nu,u))+\mathcal{I}^{\beta}_{(u+\phi(\nu,u))^{-}}\,\Omega(u)\Big] \preccurlyeq \Big[\mathcal{I}^{\beta}_{u^{+}}\,\Psi\Omega(u+\phi(\nu,u))\,\widetilde{+}\,\mathcal{I}^{\beta}_{(u+\phi(\nu,u))^{-}}\,\Psi\Omega(u)\Big]. \tag{28}$$

If Ψ is preconcave *F*·*I*·*V*·*F*, then inequality (28) is reversed.

**Proof.** Since Ψ is a preinvex *F*·*I*·*V*·*F*, then for *θ* ∈ [0, 1] , we have

$$\begin{aligned}
\Psi_*\left(\frac{2u+\phi(\nu,u)}{2},\theta\right) &\le \frac{1}{2}\Big(\Psi_*(u+(1-\varsigma)\phi(\nu,u),\theta)+\Psi_*(u+\varsigma\phi(\nu,u),\theta)\Big),\\
\Psi^*\left(\frac{2u+\phi(\nu,u)}{2},\theta\right) &\le \frac{1}{2}\Big(\Psi^*(u+(1-\varsigma)\phi(\nu,u),\theta)+\Psi^*(u+\varsigma\phi(\nu,u),\theta)\Big).
\end{aligned}\tag{29}$$

Since Ω(*u* + (1 − *ς*)*ϕ*(*ν*, *u*)) = Ω(*u* + *ςϕ*(*ν*, *u*)), multiplying (29) by *ς*<sup>*β*−1</sup>Ω(*u* + *ςϕ*(*ν*, *u*)) and integrating with respect to *ς* over [0, 1], we obtain

$$\begin{aligned}
&\Psi_*\left(\frac{2u+\phi(\nu,u)}{2},\theta\right)\int_0^1 \varsigma^{\beta-1}\,\Omega(u+\varsigma\phi(\nu,u))\,d\varsigma\\
&\qquad\le \frac{1}{2}\left(\int_0^1 \varsigma^{\beta-1}\,\Psi_*(u+(1-\varsigma)\phi(\nu,u),\theta)\,\Omega(u+\varsigma\phi(\nu,u))\,d\varsigma + \int_0^1 \varsigma^{\beta-1}\,\Psi_*(u+\varsigma\phi(\nu,u),\theta)\,\Omega(u+\varsigma\phi(\nu,u))\,d\varsigma\right),\\
&\Psi^*\left(\frac{2u+\phi(\nu,u)}{2},\theta\right)\int_0^1 \varsigma^{\beta-1}\,\Omega(u+\varsigma\phi(\nu,u))\,d\varsigma\\
&\qquad\le \frac{1}{2}\left(\int_0^1 \varsigma^{\beta-1}\,\Psi^*(u+(1-\varsigma)\phi(\nu,u),\theta)\,\Omega(u+\varsigma\phi(\nu,u))\,d\varsigma + \int_0^1 \varsigma^{\beta-1}\,\Psi^*(u+\varsigma\phi(\nu,u),\theta)\,\Omega(u+\varsigma\phi(\nu,u))\,d\varsigma\right).
\end{aligned}\tag{30}$$

Let *ω* = *u* + *ςϕ*(*ν*, *u*). Then we have

$$\begin{aligned}
&\int_0^1 \varsigma^{\beta-1}\,\Psi_*(u+(1-\varsigma)\phi(\nu,u),\theta)\,\Omega(u+\varsigma\phi(\nu,u))\,d\varsigma + \int_0^1 \varsigma^{\beta-1}\,\Psi_*(u+\varsigma\phi(\nu,u),\theta)\,\Omega(u+\varsigma\phi(\nu,u))\,d\varsigma\\
&\qquad= \frac{\Gamma(\beta)}{(\phi(\nu,u))^{\beta}}\Big[\mathcal{I}^{\beta}_{u^{+}}\,\Psi_*\Omega(u+\phi(\nu,u))+\mathcal{I}^{\beta}_{(u+\phi(\nu,u))^{-}}\,\Psi_*\Omega(u)\Big],\\
&\int_0^1 \varsigma^{\beta-1}\,\Psi^*(u+(1-\varsigma)\phi(\nu,u),\theta)\,\Omega(u+\varsigma\phi(\nu,u))\,d\varsigma + \int_0^1 \varsigma^{\beta-1}\,\Psi^*(u+\varsigma\phi(\nu,u),\theta)\,\Omega(u+\varsigma\phi(\nu,u))\,d\varsigma\\
&\qquad= \frac{\Gamma(\beta)}{(\phi(\nu,u))^{\beta}}\Big[\mathcal{I}^{\beta}_{u^{+}}\,\Psi^*\Omega(u+\phi(\nu,u))+\mathcal{I}^{\beta}_{(u+\phi(\nu,u))^{-}}\,\Psi^*\Omega(u)\Big].
\end{aligned}\tag{31}$$

Then from (30) and (31), we have

$$\begin{aligned}
&\frac{\Gamma(\beta)}{(\phi(\nu,u))^{\beta}}\,\Psi_*\left(\frac{2u+\phi(\nu,u)}{2},\theta\right)\Big[\mathcal{I}^{\beta}_{u^{+}}\,\Omega(u+\phi(\nu,u))+\mathcal{I}^{\beta}_{(u+\phi(\nu,u))^{-}}\,\Omega(u)\Big]\\
&\qquad\le \frac{\Gamma(\beta)}{(\phi(\nu,u))^{\beta}}\Big[\mathcal{I}^{\beta}_{u^{+}}\,\Psi_*\Omega(u+\phi(\nu,u))+\mathcal{I}^{\beta}_{(u+\phi(\nu,u))^{-}}\,\Psi_*\Omega(u)\Big],\\
&\frac{\Gamma(\beta)}{(\phi(\nu,u))^{\beta}}\,\Psi^*\left(\frac{2u+\phi(\nu,u)}{2},\theta\right)\Big[\mathcal{I}^{\beta}_{u^{+}}\,\Omega(u+\phi(\nu,u))+\mathcal{I}^{\beta}_{(u+\phi(\nu,u))^{-}}\,\Omega(u)\Big]\\
&\qquad\le \frac{\Gamma(\beta)}{(\phi(\nu,u))^{\beta}}\Big[\mathcal{I}^{\beta}_{u^{+}}\,\Psi^*\Omega(u+\phi(\nu,u))+\mathcal{I}^{\beta}_{(u+\phi(\nu,u))^{-}}\,\Psi^*\Omega(u)\Big],
\end{aligned}$$

from which, we have

$$\begin{aligned}
&\frac{\Gamma(\beta)}{(\phi(\nu,u))^{\beta}}\left[\Psi_*\left(\frac{2u+\phi(\nu,u)}{2},\theta\right),\ \Psi^*\left(\frac{2u+\phi(\nu,u)}{2},\theta\right)\right]\Big[\mathcal{I}^{\beta}_{u^{+}}\,\Omega(u+\phi(\nu,u))+\mathcal{I}^{\beta}_{(u+\phi(\nu,u))^{-}}\,\Omega(u)\Big]\\
&\le_I \frac{\Gamma(\beta)}{(\phi(\nu,u))^{\beta}}\Big[\mathcal{I}^{\beta}_{u^{+}}\,\Psi_*\Omega(u+\phi(\nu,u))+\mathcal{I}^{\beta}_{(u+\phi(\nu,u))^{-}}\,\Psi_*\Omega(u),\ \mathcal{I}^{\beta}_{u^{+}}\,\Psi^*\Omega(u+\phi(\nu,u))+\mathcal{I}^{\beta}_{(u+\phi(\nu,u))^{-}}\,\Psi^*\Omega(u)\Big],
\end{aligned}$$

that is

$$\frac{\Gamma(\beta)}{(\phi(\nu,u))^{\beta}}\,\Psi\left(\frac{2u+\phi(\nu,u)}{2}\right)\Big[\mathcal{I}^{\beta}_{u^{+}}\,\Omega(u+\phi(\nu,u))+\mathcal{I}^{\beta}_{(u+\phi(\nu,u))^{-}}\,\Omega(u)\Big] \preccurlyeq \frac{\Gamma(\beta)}{(\phi(\nu,u))^{\beta}}\Big[\mathcal{I}^{\beta}_{u^{+}}\,\Psi\Omega(u+\phi(\nu,u))\,\widetilde{+}\,\mathcal{I}^{\beta}_{(u+\phi(\nu,u))^{-}}\,\Psi\Omega(u)\Big].$$

This completes the proof.

**Example 3.6.** We consider the *F*·*I*·*V*·*F* Ψ : [0, 2] → F<sup>0</sup> defined by

$$\Psi(\omega)(\theta) = \begin{cases} \dfrac{\theta}{2-\sqrt{\omega}}, & \theta \in \left[0,\, 2-\sqrt{\omega}\right] \\[2mm] \dfrac{2\left(2-\sqrt{\omega}\right)-\theta}{2-\sqrt{\omega}}, & \theta \in \left(2-\sqrt{\omega},\, 2\left(2-\sqrt{\omega}\right)\right] \\[2mm] 0, & \text{otherwise.} \end{cases}$$

Then, for each *θ* ∈ [0, 1], we have Ψ*<sub>θ</sub>*(*ω*) = [*θ*(2 − √*ω*), (2 − *θ*)(2 − √*ω*)]. Since the endpoint functions Ψ∗(*ω*, *θ*) and Ψ<sup>∗</sup>(*ω*, *θ*) are preinvex with respect to *ϕ*(*ν*, *u*) = *ν* − *u* for each *θ* ∈ [0, 1], Ψ(*ω*) is a preinvex *F*·*I*·*V*·*F*. If

$$\Omega(\omega) = \begin{cases} \sqrt{\omega}, & \omega \in [0, 1] \\ \sqrt{2-\omega}, & \omega \in (1, 2], \end{cases}$$

then Ω(2 − *ω*) = Ω(*ω*) ≥ 0 for all *ω* ∈ [0, 2]. Since Ψ∗(*ω*, *θ*) = *θ*(2 − √*ω*) and Ψ<sup>∗</sup>(*ω*, *θ*) = (2 − *θ*)(2 − √*ω*), taking *β* = 1/2 we compute the following:

$$\frac{\Psi_*(u,\theta)+\Psi_*(u+\phi(\nu,u),\theta)}{2}\Big[\mathcal{I}^{\beta}_{u^{+}}\,\Omega(u+\phi(\nu,u))+\mathcal{I}^{\beta}_{(u+\phi(\nu,u))^{-}}\,\Omega(u)\Big]=\sqrt{\pi}\,\theta\left(\frac{4-\sqrt{2}}{2}\right),\tag{32}$$

$$\frac{\Psi^*(u,\theta)+\Psi^*(u+\phi(\nu,u),\theta)}{2}\Big[\mathcal{I}^{\beta}_{u^{+}}\,\Omega(u+\phi(\nu,u))+\mathcal{I}^{\beta}_{(u+\phi(\nu,u))^{-}}\,\Omega(u)\Big]=\sqrt{\pi}\,(2-\theta)\left(\frac{4-\sqrt{2}}{2}\right),\tag{33}$$

$$\begin{aligned}
\Big[\mathcal{I}^{\beta}_{u^{+}}\,\Psi_*\Omega(u+\phi(\nu,u))+\mathcal{I}^{\beta}_{(u+\phi(\nu,u))^{-}}\,\Psi_*\Omega(u)\Big]&=\frac{1}{\sqrt{\pi}}\,\theta\left(2\pi+\frac{4-8\sqrt{2}}{3}\right),\\
\Big[\mathcal{I}^{\beta}_{u^{+}}\,\Psi^*\Omega(u+\phi(\nu,u))+\mathcal{I}^{\beta}_{(u+\phi(\nu,u))^{-}}\,\Psi^*\Omega(u)\Big]&=\frac{1}{\sqrt{\pi}}\,(2-\theta)\left(2\pi+\frac{4-8\sqrt{2}}{3}\right).
\end{aligned}\tag{34}$$

From (32), (33) and (34), we have

$$
\frac{1}{\sqrt{\pi}}\Big[\theta\Big(2\pi+\frac{4-8\sqrt{2}}{3}\Big),\ (2-\theta)\Big(2\pi+\frac{4-8\sqrt{2}}{3}\Big)\Big] \le_{I} \frac{\pi}{\sqrt{2}}\Big[\theta\Big(\frac{4-\sqrt{2}}{2}\Big),\ (2-\theta)\Big(\frac{4-\sqrt{2}}{2}\Big)\Big]
$$

for each *θ* ∈ [0, 1]. Hence, Theorem 10 is verified.

For Theorem 11, we have

$$\begin{split} \Psi\_{\*}\left(\frac{2\mu+\varrho(\nu,\mu)}{2},\theta\right) \left[\mathcal{T}\_{\boldsymbol{u}^{+}}^{\boldsymbol{\beta}}\,\Omega(\boldsymbol{u}+\boldsymbol{\varrho}(\boldsymbol{\nu},\boldsymbol{u})) + \mathcal{T}\_{\boldsymbol{u}+\boldsymbol{\varrho}(\boldsymbol{\nu},\boldsymbol{u})^{-}}^{\boldsymbol{\beta}}\,\Omega(\boldsymbol{u})\right] &= \theta\sqrt{\pi}, \\ \Psi^{\*}\left(\frac{2\mu+\varrho(\boldsymbol{\nu},\boldsymbol{u})}{2},\theta\right) \left[\mathcal{T}\_{\boldsymbol{u}^{+}}^{\boldsymbol{\beta}}\,\Omega(\boldsymbol{u}+\boldsymbol{\varrho}(\boldsymbol{\nu},\boldsymbol{u})) + \mathcal{T}\_{\boldsymbol{u}+\boldsymbol{\varrho}(\boldsymbol{\nu},\boldsymbol{u})^{-}}^{\boldsymbol{\beta}}\,\Omega(\boldsymbol{u})\right] &= (2-\theta)\sqrt{\pi}. \end{split} \tag{35}$$

From (34) and (35), we have
$$
\sqrt{\pi}\,[\theta,\ (2-\theta)] \le_{I} \frac{1}{\sqrt{\pi}}\Big[\theta\Big(2\pi+\frac{4-8\sqrt{2}}{3}\Big),\ (2-\theta)\Big(2\pi+\frac{4-8\sqrt{2}}{3}\Big)\Big],
$$
for each *θ* ∈ [0, 1].
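As a quick numerical sanity check (my own illustration, not part of the original proof), the lower endpoints of the three interval terms in the chain above can be compared directly for a few levels *θ*:

```python
import math

def lower_endpoints(theta):
    """Lower (left) endpoints of the three interval terms, in H-H order."""
    left = math.sqrt(math.pi) * theta
    middle = theta * (2 * math.pi + (4 - 8 * math.sqrt(2)) / 3) / math.sqrt(math.pi)
    right = (math.pi / math.sqrt(2)) * theta * (4 - math.sqrt(2)) / 2
    return left, middle, right

for t in (0.25, 0.5, 1.0):
    left, middle, right = lower_endpoints(t)
    # sqrt(pi)*theta <= weighted-integral term <= Fejer-type bound
    assert left <= middle <= right
```

The upper endpoints carry the factor (2 − *θ*) instead of *θ* and scale identically.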

**Remark 3.7.** If Ω(*ω*) = 1, then from Theorem 3.4 and Theorem 3.5 we obtain Theorem 3.1.

Let *β* = 1. Then we obtain the following *H*·*H* Fejér type inequality for preinvex *F*·*I*·*V*·*Fs*; see [28]:

$$\Psi\left(\frac{2u+\varrho(\nu,u)}{2}\right) \preccurlyeq \frac{1}{\int_{u}^{u+\varrho(\nu,u)}\Omega(\omega)d\omega}\ (\mathrm{FR})\int_{u}^{u+\varrho(\nu,u)}\Psi(\omega)\Omega(\omega)d\omega \preccurlyeq \frac{\Psi(u)\,\widetilde{+}\,\Psi(\nu)}{2}.$$

If Ψ∗(*ω*, *θ*) = Ψ<sup>∗</sup>(*ω*, *θ*) with *ϕ*(*ω*, *y*) = *ω* − *y* and Ω(*ω*) = *β* = 1 = *θ*, then from Theorem 3.4 and Theorem 3.5 we obtain the classical *H*·*H* inequality.

If Ψ∗(*ω*, *θ*) = Ψ<sup>∗</sup>(*ω*, *θ*) with *ϕ*(*ω*, *y*) = *ω* − *y* and *β* = 1, then from Theorem 3.4 and Theorem 3.5 we obtain the classical *H*·*H* Fejér inequality; see [46].

In Theorems 3.8 and 3.9 below, we obtain several fuzzy-interval fractional integral inequalities, linked to the fuzzy-interval fractional *H*·*H* type inequality, for the product of preinvex *F*·*I*·*V*·*Fs*.

**Theorem 3.8.** *Let* $\Psi,\Phi:[u,u+\varrho(\nu,u)]\to\mathbb{F}_0$ *be two preinvex F*·*I*·*V*·*Fs on* $[u,u+\varrho(\nu,u)]$*, whose θ-levels* $\Psi_{\theta},\Phi_{\theta}:[u,u+\varrho(\nu,u)]\subset\mathbb{R}\to\mathcal{K}_{C}^{+}$ *are defined by* $\Psi_{\theta}(\omega)=[\Psi_{*}(\omega,\theta),\Psi^{*}(\omega,\theta)]$ *and* $\Phi_{\theta}(\omega)=[\Phi_{*}(\omega,\theta),\Phi^{*}(\omega,\theta)]$ *for all* $\omega\in[u,u+\varrho(\nu,u)]$ *and all* $\theta\in[0,1]$*. If* $\Psi\,\widetilde{\times}\,\Phi\in L([u,u+\varrho(\nu,u)],\mathbb{F}_0)$ *and* $\varrho$ *satisfies Condition C, then*

$$\frac{\Gamma(\beta)}{2(\varrho(\nu,u))^{\beta}}\left[\mathcal{I}^{\beta}_{u^{+}}\,\Psi(u+\varrho(\nu,u))\,\widetilde{\times}\,\Phi(u+\varrho(\nu,u))\,\widetilde{+}\,\mathcal{I}^{\beta}_{u+\varrho(\nu,u)^{-}}\,\Psi(u)\,\widetilde{\times}\,\Phi(u)\right]$$

$$\preccurlyeq \left(\frac{1}{2}-\frac{\beta}{(\beta+1)(\beta+2)}\right)\Delta(u,u+\varrho(\nu,u))\,\widetilde{+}\,\left(\frac{\beta}{(\beta+1)(\beta+2)}\right)\nabla(u,u+\varrho(\nu,u)),$$

*where* $\Delta(u,u+\varrho(\nu,u))=\Psi(u)\,\widetilde{\times}\,\Phi(u)\,\widetilde{+}\,\Psi(u+\varrho(\nu,u))\,\widetilde{\times}\,\Phi(u+\varrho(\nu,u))$*,* $\nabla(u,u+\varrho(\nu,u))=\Psi(u)\,\widetilde{\times}\,\Phi(u+\varrho(\nu,u))\,\widetilde{+}\,\Psi(u+\varrho(\nu,u))\,\widetilde{\times}\,\Phi(u)$*, and* $\Delta_{\theta}(u,u+\varrho(\nu,u))=[\Delta_{*}((u,u+\varrho(\nu,u)),\theta),\ \Delta^{*}((u,u+\varrho(\nu,u)),\theta)]$ *and* $\nabla_{\theta}(u,\nu)=[\nabla_{*}((u,u+\varrho(\nu,u)),\theta),\ \nabla^{*}((u,u+\varrho(\nu,u)),\theta)]$.

**Proof.** Since Ψ and Φ are both preinvex *F*·*I*·*V*·*Fs* and $\varrho$ satisfies Condition C, for each *θ* ∈ [0, 1] we have

$$\begin{aligned}
\Psi_{*}(u+(1-\varsigma)\varrho(\nu,u),\theta) &= \Psi_{*}(u+\varrho(\nu,u)+\varsigma\varrho(u,u+\varrho(\nu,u)),\theta)\\
&\le \varsigma\Psi_{*}(u,\theta)+(1-\varsigma)\Psi_{*}(u+\varrho(\nu,u),\theta),\\
\Psi^{*}(u+(1-\varsigma)\varrho(\nu,u),\theta) &= \Psi^{*}(u+\varrho(\nu,u)+\varsigma\varrho(u,u+\varrho(\nu,u)),\theta)\\
&\le \varsigma\Psi^{*}(u,\theta)+(1-\varsigma)\Psi^{*}(u+\varrho(\nu,u),\theta),
\end{aligned}$$

and

$$\begin{aligned}
\Phi_{*}(u+(1-\varsigma)\varrho(\nu,u),\theta) &= \Phi_{*}(u+\varrho(\nu,u)+\varsigma\varrho(u,u+\varrho(\nu,u)),\theta)\\
&\le \varsigma\Phi_{*}(u,\theta)+(1-\varsigma)\Phi_{*}(u+\varrho(\nu,u),\theta),\\
\Phi^{*}(u+(1-\varsigma)\varrho(\nu,u),\theta) &= \Phi^{*}(u+\varrho(\nu,u)+\varsigma\varrho(u,u+\varrho(\nu,u)),\theta)\\
&\le \varsigma\Phi^{*}(u,\theta)+(1-\varsigma)\Phi^{*}(u+\varrho(\nu,u),\theta).
\end{aligned}$$

From the definition of preinvex *F*·*I*·*V*·*Fs* it follows that $\widetilde{0}\preccurlyeq\Psi(\omega)$ and $\widetilde{0}\preccurlyeq\Phi(\omega)$, so

$$\begin{aligned}
&\Psi_{*}(u+(1-\varsigma)\varrho(\nu,u),\theta)\times\Phi_{*}(u+(1-\varsigma)\varrho(\nu,u),\theta)\\
&\quad\le \big(\varsigma\Psi_{*}(u,\theta)+(1-\varsigma)\Psi_{*}(u+\varrho(\nu,u),\theta)\big)\big(\varsigma\Phi_{*}(u,\theta)+(1-\varsigma)\Phi_{*}(u+\varrho(\nu,u),\theta)\big)\\
&\quad= \varsigma^{2}\Psi_{*}(u,\theta)\times\Phi_{*}(u,\theta)+(1-\varsigma)^{2}\Psi_{*}(u+\varrho(\nu,u),\theta)\times\Phi_{*}(u+\varrho(\nu,u),\theta)\\
&\qquad+\varsigma(1-\varsigma)\Psi_{*}(u,\theta)\times\Phi_{*}(u+\varrho(\nu,u),\theta)+\varsigma(1-\varsigma)\Psi_{*}(u+\varrho(\nu,u),\theta)\times\Phi_{*}(u,\theta),\\
&\Psi^{*}(u+(1-\varsigma)\varrho(\nu,u),\theta)\times\Phi^{*}(u+(1-\varsigma)\varrho(\nu,u),\theta)\\
&\quad\le \big(\varsigma\Psi^{*}(u,\theta)+(1-\varsigma)\Psi^{*}(u+\varrho(\nu,u),\theta)\big)\big(\varsigma\Phi^{*}(u,\theta)+(1-\varsigma)\Phi^{*}(u+\varrho(\nu,u),\theta)\big)\\
&\quad= \varsigma^{2}\Psi^{*}(u,\theta)\times\Phi^{*}(u,\theta)+(1-\varsigma)^{2}\Psi^{*}(u+\varrho(\nu,u),\theta)\times\Phi^{*}(u+\varrho(\nu,u),\theta)\\
&\qquad+\varsigma(1-\varsigma)\Psi^{*}(u,\theta)\times\Phi^{*}(u+\varrho(\nu,u),\theta)+\varsigma(1-\varsigma)\Psi^{*}(u+\varrho(\nu,u),\theta)\times\Phi^{*}(u,\theta).
\end{aligned}\tag{36}$$

Analogously, we have

$$\begin{aligned}
&\Psi_{*}(u+\varsigma\varrho(\nu,u),\theta)\times\Phi_{*}(u+\varsigma\varrho(\nu,u),\theta)\\
&\quad\le (1-\varsigma)^{2}\Psi_{*}(u,\theta)\times\Phi_{*}(u,\theta)+\varsigma^{2}\Psi_{*}(u+\varrho(\nu,u),\theta)\times\Phi_{*}(u+\varrho(\nu,u),\theta)\\
&\qquad+\varsigma(1-\varsigma)\Psi_{*}(u,\theta)\times\Phi_{*}(u+\varrho(\nu,u),\theta)+\varsigma(1-\varsigma)\Psi_{*}(u+\varrho(\nu,u),\theta)\times\Phi_{*}(u,\theta),\\
&\Psi^{*}(u+\varsigma\varrho(\nu,u),\theta)\times\Phi^{*}(u+\varsigma\varrho(\nu,u),\theta)\\
&\quad\le (1-\varsigma)^{2}\Psi^{*}(u,\theta)\times\Phi^{*}(u,\theta)+\varsigma^{2}\Psi^{*}(u+\varrho(\nu,u),\theta)\times\Phi^{*}(u+\varrho(\nu,u),\theta)\\
&\qquad+\varsigma(1-\varsigma)\Psi^{*}(u,\theta)\times\Phi^{*}(u+\varrho(\nu,u),\theta)+\varsigma(1-\varsigma)\Psi^{*}(u+\varrho(\nu,u),\theta)\times\Phi^{*}(u,\theta).
\end{aligned}\tag{37}$$
 
Adding (36) and (37), we have

$$\begin{aligned}
&\Psi_{*}(u+(1-\varsigma)\varrho(\nu,u),\theta)\times\Phi_{*}(u+(1-\varsigma)\varrho(\nu,u),\theta)+\Psi_{*}(u+\varsigma\varrho(\nu,u),\theta)\times\Phi_{*}(u+\varsigma\varrho(\nu,u),\theta)\\
&\quad\le \big[\varsigma^{2}+(1-\varsigma)^{2}\big]\big[\Psi_{*}(u,\theta)\times\Phi_{*}(u,\theta)+\Psi_{*}(u+\varrho(\nu,u),\theta)\times\Phi_{*}(u+\varrho(\nu,u),\theta)\big]\\
&\qquad+2\varsigma(1-\varsigma)\big[\Psi_{*}(u+\varrho(\nu,u),\theta)\times\Phi_{*}(u,\theta)+\Psi_{*}(u,\theta)\times\Phi_{*}(u+\varrho(\nu,u),\theta)\big],\\
&\Psi^{*}(u+(1-\varsigma)\varrho(\nu,u),\theta)\times\Phi^{*}(u+(1-\varsigma)\varrho(\nu,u),\theta)+\Psi^{*}(u+\varsigma\varrho(\nu,u),\theta)\times\Phi^{*}(u+\varsigma\varrho(\nu,u),\theta)\\
&\quad\le \big[\varsigma^{2}+(1-\varsigma)^{2}\big]\big[\Psi^{*}(u,\theta)\times\Phi^{*}(u,\theta)+\Psi^{*}(u+\varrho(\nu,u),\theta)\times\Phi^{*}(u+\varrho(\nu,u),\theta)\big]\\
&\qquad+2\varsigma(1-\varsigma)\big[\Psi^{*}(u+\varrho(\nu,u),\theta)\times\Phi^{*}(u,\theta)+\Psi^{*}(u,\theta)\times\Phi^{*}(u+\varrho(\nu,u),\theta)\big].
\end{aligned}\tag{38}$$

Multiplying (38) by $\varsigma^{\beta-1}$ and integrating the result with respect to *ς* over (0, 1), we have

$$\begin{aligned}
&\int_{0}^{1}\varsigma^{\beta-1}\,\Psi_{*}(u+(1-\varsigma)\varrho(\nu,u),\theta)\times\Phi_{*}(u+(1-\varsigma)\varrho(\nu,u),\theta)\,d\varsigma+\int_{0}^{1}\varsigma^{\beta-1}\,\Psi_{*}(u+\varsigma\varrho(\nu,u),\theta)\times\Phi_{*}(u+\varsigma\varrho(\nu,u),\theta)\,d\varsigma\\
&\quad\le \Delta_{*}((u,u+\varrho(\nu,u)),\theta)\int_{0}^{1}\varsigma^{\beta-1}\big[\varsigma^{2}+(1-\varsigma)^{2}\big]d\varsigma+2\nabla_{*}((u,u+\varrho(\nu,u)),\theta)\int_{0}^{1}\varsigma^{\beta-1}\varsigma(1-\varsigma)\,d\varsigma,\\
&\int_{0}^{1}\varsigma^{\beta-1}\,\Psi^{*}(u+(1-\varsigma)\varrho(\nu,u),\theta)\times\Phi^{*}(u+(1-\varsigma)\varrho(\nu,u),\theta)\,d\varsigma+\int_{0}^{1}\varsigma^{\beta-1}\,\Psi^{*}(u+\varsigma\varrho(\nu,u),\theta)\times\Phi^{*}(u+\varsigma\varrho(\nu,u),\theta)\,d\varsigma\\
&\quad\le \Delta^{*}((u,u+\varrho(\nu,u)),\theta)\int_{0}^{1}\varsigma^{\beta-1}\big[\varsigma^{2}+(1-\varsigma)^{2}\big]d\varsigma+2\nabla^{*}((u,u+\varrho(\nu,u)),\theta)\int_{0}^{1}\varsigma^{\beta-1}\varsigma(1-\varsigma)\,d\varsigma.
\end{aligned}$$

It follows that

$$\begin{aligned}
&\frac{\Gamma(\beta)}{(\varrho(\nu,u))^{\beta}}\Big[\mathcal{I}^{\beta}_{u^{+}}\,\Psi_{*}(u+\varrho(\nu,u),\theta)\times\Phi_{*}(u+\varrho(\nu,u),\theta)+\mathcal{I}^{\beta}_{u+\varrho(\nu,u)^{-}}\,\Psi_{*}(u,\theta)\times\Phi_{*}(u,\theta)\Big]\\
&\quad\le \frac{2}{\beta}\Big(\frac{1}{2}-\frac{\beta}{(\beta+1)(\beta+2)}\Big)\Delta_{*}((u,u+\varrho(\nu,u)),\theta)+\frac{2}{\beta}\Big(\frac{\beta}{(\beta+1)(\beta+2)}\Big)\nabla_{*}((u,u+\varrho(\nu,u)),\theta),\\
&\frac{\Gamma(\beta)}{(\varrho(\nu,u))^{\beta}}\Big[\mathcal{I}^{\beta}_{u^{+}}\,\Psi^{*}(u+\varrho(\nu,u),\theta)\times\Phi^{*}(u+\varrho(\nu,u),\theta)+\mathcal{I}^{\beta}_{u+\varrho(\nu,u)^{-}}\,\Psi^{*}(u,\theta)\times\Phi^{*}(u,\theta)\Big]\\
&\quad\le \frac{2}{\beta}\Big(\frac{1}{2}-\frac{\beta}{(\beta+1)(\beta+2)}\Big)\Delta^{*}((u,u+\varrho(\nu,u)),\theta)+\frac{2}{\beta}\Big(\frac{\beta}{(\beta+1)(\beta+2)}\Big)\nabla^{*}((u,u+\varrho(\nu,u)),\theta),
\end{aligned}$$

that is,

$$\begin{aligned}
&\frac{\Gamma(\beta)}{(\varrho(\nu,u))^{\beta}}\Big[\mathcal{I}^{\beta}_{u^{+}}\,\Psi_{*}(u+\varrho(\nu,u),\theta)\times\Phi_{*}(u+\varrho(\nu,u),\theta)+\mathcal{I}^{\beta}_{u+\varrho(\nu,u)^{-}}\,\Psi_{*}(u,\theta)\times\Phi_{*}(u,\theta),\\
&\qquad \mathcal{I}^{\beta}_{u^{+}}\,\Psi^{*}(u+\varrho(\nu,u),\theta)\times\Phi^{*}(u+\varrho(\nu,u),\theta)+\mathcal{I}^{\beta}_{u+\varrho(\nu,u)^{-}}\,\Psi^{*}(u,\theta)\times\Phi^{*}(u,\theta)\Big]\\
&\le_{I} \frac{2}{\beta}\Big(\frac{1}{2}-\frac{\beta}{(\beta+1)(\beta+2)}\Big)\big[\Delta_{*}((u,u+\varrho(\nu,u)),\theta),\ \Delta^{*}((u,u+\varrho(\nu,u)),\theta)\big]\\
&\quad+\frac{2}{\beta}\Big(\frac{\beta}{(\beta+1)(\beta+2)}\Big)\big[\nabla_{*}((u,u+\varrho(\nu,u)),\theta),\ \nabla^{*}((u,u+\varrho(\nu,u)),\theta)\big].
\end{aligned}$$

Thus,

$$\frac{\Gamma(\beta)}{2(\varrho(\nu,u))^{\beta}}\left[\mathcal{I}^{\beta}_{u^{+}}\,\Psi(u+\varrho(\nu,u))\,\widetilde{\times}\,\Phi(u+\varrho(\nu,u))\,\widetilde{+}\,\mathcal{I}^{\beta}_{u+\varrho(\nu,u)^{-}}\,\Psi(u)\,\widetilde{\times}\,\Phi(u)\right]$$

$$\preccurlyeq \left(\frac{1}{2}-\frac{\beta}{(\beta+1)(\beta+2)}\right)\Delta(u,u+\varrho(\nu,u))\,\widetilde{+}\,\left(\frac{\beta}{(\beta+1)(\beta+2)}\right)\nabla(u,u+\varrho(\nu,u)),$$

and the theorem has been established.
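The two Beta-type integrals used in the proof above reduce to the stated coefficients; a short check (my own sketch, not part of the paper) via the identity $B(a,b)=\Gamma(a)\Gamma(b)/\Gamma(a+b)$:

```python
from math import gamma

def beta_fn(a, b):
    """Euler Beta function B(a, b) = Gamma(a)Gamma(b)/Gamma(a+b)."""
    return gamma(a) * gamma(b) / gamma(a + b)

def check(beta):
    # int_0^1 s^(b-1) [s^2 + (1-s)^2] ds  =  (2/b)(1/2 - b/((b+1)(b+2)))
    lhs1 = 1 / (beta + 2) + beta_fn(beta, 3)
    rhs1 = (2 / beta) * (0.5 - beta / ((beta + 1) * (beta + 2)))
    # int_0^1 s^(b-1) * s(1-s) ds  =  1/((b+1)(b+2))
    lhs2 = beta_fn(beta + 1, 2)
    rhs2 = 1 / ((beta + 1) * (beta + 2))
    return abs(lhs1 - rhs1) < 1e-12 and abs(lhs2 - rhs2) < 1e-12

assert all(check(b) for b in (0.5, 1.0, 2.0, 3.7))
```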

**Theorem 3.9.** *Let* $\Psi,\Phi:[u,u+\varrho(\nu,u)]\to\mathbb{F}_0$ *be two preinvex F*·*I*·*V*·*Fs, whose θ-levels define the family of I*·*V*·*Fs* $\Psi_{\theta},\Phi_{\theta}:[u,u+\varrho(\nu,u)]\subset\mathbb{R}\to\mathcal{K}_{C}^{+}$ *given by* $\Psi_{\theta}(\omega)=[\Psi_{*}(\omega,\theta),\Psi^{*}(\omega,\theta)]$ *and* $\Phi_{\theta}(\omega)=[\Phi_{*}(\omega,\theta),\Phi^{*}(\omega,\theta)]$ *for all* $\omega\in[u,u+\varrho(\nu,u)]$ *and all* $\theta\in[0,1]$*. If* $\Psi\,\widetilde{\times}\,\Phi\in L([u,u+\varrho(\nu,u)],\mathbb{F}_0)$ *and* $\varrho$ *satisfies Condition C, then*

$$\frac{1}{\beta}\,\Psi\left(\frac{2u+\varrho(\nu,u)}{2}\right)\widetilde{\times}\,\Phi\left(\frac{2u+\varrho(\nu,u)}{2}\right)$$

$$\preccurlyeq \frac{\Gamma(\beta+1)}{4(\varrho(\nu,u))^{\beta}}\left[\mathcal{I}^{\beta}_{u^{+}}\,\Psi(u+\varrho(\nu,u))\,\widetilde{\times}\,\Phi(u+\varrho(\nu,u))\,\widetilde{+}\,\mathcal{I}^{\beta}_{u+\varrho(\nu,u)^{-}}\,\Psi(u)\,\widetilde{\times}\,\Phi(u)\right]$$

$$\widetilde{+}\,\frac{1}{2\beta}\left(\frac{1}{2}-\frac{\beta}{(\beta+1)(\beta+2)}\right)\nabla(u,u+\varrho(\nu,u))\,\widetilde{+}\,\frac{1}{2\beta}\left(\frac{\beta}{(\beta+1)(\beta+2)}\right)\Delta(u,u+\varrho(\nu,u)),$$


*where* $\Delta(u,u+\varrho(\nu,u))=\Psi(u)\,\widetilde{\times}\,\Phi(u)\,\widetilde{+}\,\Psi(u+\varrho(\nu,u))\,\widetilde{\times}\,\Phi(u+\varrho(\nu,u))$*,* $\nabla(u,\nu)=\Psi(u)\,\widetilde{\times}\,\Phi(u+\varrho(\nu,u))\,\widetilde{+}\,\Psi(u+\varrho(\nu,u))\,\widetilde{\times}\,\Phi(u)$*, and* $\Delta_{\theta}(u,u+\varrho(\nu,u))=[\Delta_{*}((u,u+\varrho(\nu,u)),\theta),\ \Delta^{*}((u,u+\varrho(\nu,u)),\theta)]$ *and* $\nabla_{\theta}(u,u+\varrho(\nu,u))=[\nabla_{*}((u,u+\varrho(\nu,u)),\theta),\ \nabla^{*}((u,u+\varrho(\nu,u)),\theta)]$.

**Proof.** Let Ψ, Φ : [*u*, *u* + *ϕ*(*ν*, *u*)] → F<sup>0</sup> be preinvex *F*·*I*·*V*·*Fs*. Then, by hypothesis, for each *θ* ∈ [0, 1], we have

$$\begin{aligned}
&\Psi_{*}\Big(\frac{2u+\varrho(\nu,u)}{2},\theta\Big)\times\Phi_{*}\Big(\frac{2u+\varrho(\nu,u)}{2},\theta\Big)\\
&\le \frac{1}{4}\Big[\Psi_{*}(u+(1-\varsigma)\varrho(\nu,u),\theta)\times\Phi_{*}(u+(1-\varsigma)\varrho(\nu,u),\theta)+\Psi_{*}(u+(1-\varsigma)\varrho(\nu,u),\theta)\times\Phi_{*}(u+\varsigma\varrho(\nu,u),\theta)\Big]\\
&\quad+\frac{1}{4}\Big[\Psi_{*}(u+\varsigma\varrho(\nu,u),\theta)\times\Phi_{*}(u+(1-\varsigma)\varrho(\nu,u),\theta)+\Psi_{*}(u+\varsigma\varrho(\nu,u),\theta)\times\Phi_{*}(u+\varsigma\varrho(\nu,u),\theta)\Big]\\
&\le \frac{1}{4}\Big[\Psi_{*}(u+(1-\varsigma)\varrho(\nu,u),\theta)\times\Phi_{*}(u+(1-\varsigma)\varrho(\nu,u),\theta)+\Psi_{*}(u+\varsigma\varrho(\nu,u),\theta)\times\Phi_{*}(u+\varsigma\varrho(\nu,u),\theta)\Big]\\
&\quad+\frac{1}{4}\Big[\big(\varsigma\Psi_{*}(u,\theta)+(1-\varsigma)\Psi_{*}(u+\varrho(\nu,u),\theta)\big)\times\big((1-\varsigma)\Phi_{*}(u,\theta)+\varsigma\Phi_{*}(u+\varrho(\nu,u),\theta)\big)\\
&\qquad\quad+\big((1-\varsigma)\Psi_{*}(u,\theta)+\varsigma\Psi_{*}(u+\varrho(\nu,u),\theta)\big)\times\big(\varsigma\Phi_{*}(u,\theta)+(1-\varsigma)\Phi_{*}(u+\varrho(\nu,u),\theta)\big)\Big]\\
&= \frac{1}{4}\Big[\Psi_{*}(u+(1-\varsigma)\varrho(\nu,u),\theta)\times\Phi_{*}(u+(1-\varsigma)\varrho(\nu,u),\theta)+\Psi_{*}(u+\varsigma\varrho(\nu,u),\theta)\times\Phi_{*}(u+\varsigma\varrho(\nu,u),\theta)\Big]\\
&\quad+\frac{1}{4}\big\{\varsigma^{2}+(1-\varsigma)^{2}\big\}\nabla_{*}((u,u+\varrho(\nu,u)),\theta)+\frac{1}{4}\{2\varsigma(1-\varsigma)\}\Delta_{*}((u,u+\varrho(\nu,u)),\theta),
\end{aligned}\tag{39}$$

and the analogous chain holds for the upper endpoint functions $\Psi^{*},\Phi^{*}$.

Multiplying (39) by $\varsigma^{\beta-1}$ and integrating over (0, 1), we get

$$\begin{aligned}
&\frac{1}{\beta}\,\Psi_{*}\Big(\frac{2u+\varrho(\nu,u)}{2},\theta\Big)\times\Phi_{*}\Big(\frac{2u+\varrho(\nu,u)}{2},\theta\Big)\\
&\le \frac{1}{4(\varrho(\nu,u))^{\beta}}\bigg[\int_{u}^{u+\varrho(\nu,u)}(u+\varrho(\nu,u)-\omega)^{\beta-1}\,\Psi_{*}(\omega,\theta)\times\Phi_{*}(\omega,\theta)\,d\omega+\int_{u}^{u+\varrho(\nu,u)}(y-u)^{\beta-1}\,\Psi_{*}(y,\theta)\times\Phi_{*}(y,\theta)\,dy\bigg]\\
&\quad+\frac{1}{2\beta}\Big(\frac{1}{2}-\frac{\beta}{(\beta+1)(\beta+2)}\Big)\nabla_{*}((u,u+\varrho(\nu,u)),\theta)+\frac{1}{2\beta}\Big(\frac{\beta}{(\beta+1)(\beta+2)}\Big)\Delta_{*}((u,u+\varrho(\nu,u)),\theta)\\
&= \frac{\Gamma(\beta+1)}{4(\varrho(\nu,u))^{\beta}}\Big[\mathcal{I}^{\beta}_{u^{+}}\,\Psi_{*}(u+\varrho(\nu,u))\times\Phi_{*}(u+\varrho(\nu,u))+\mathcal{I}^{\beta}_{u+\varrho(\nu,u)^{-}}\,\Psi_{*}(u)\times\Phi_{*}(u)\Big]\\
&\quad+\frac{1}{2\beta}\Big(\frac{1}{2}-\frac{\beta}{(\beta+1)(\beta+2)}\Big)\nabla_{*}((u,u+\varrho(\nu,u)),\theta)+\frac{1}{2\beta}\Big(\frac{\beta}{(\beta+1)(\beta+2)}\Big)\Delta_{*}((u,u+\varrho(\nu,u)),\theta),
\end{aligned}$$

and similarly for the upper endpoint functions,

that is

$$\frac{1}{\beta}\,\Psi\left(\frac{2u+\varrho(\nu,u)}{2}\right)\widetilde{\times}\,\Phi\left(\frac{2u+\varrho(\nu,u)}{2}\right)$$

$$\preccurlyeq \frac{\Gamma(\beta+1)}{4(\varrho(\nu,u))^{\beta}}\left[\mathcal{I}^{\beta}_{u^{+}}\,\Psi(u+\varrho(\nu,u))\,\widetilde{\times}\,\Phi(u+\varrho(\nu,u))\,\widetilde{+}\,\mathcal{I}^{\beta}_{u+\varrho(\nu,u)^{-}}\,\Psi(u)\,\widetilde{\times}\,\Phi(u)\right]$$

$$\widetilde{+}\,\frac{1}{2\beta}\left(\frac{1}{2}-\frac{\beta}{(\beta+1)(\beta+2)}\right)\nabla(u,u+\varrho(\nu,u))\,\widetilde{+}\,\frac{1}{2\beta}\left(\frac{\beta}{(\beta+1)(\beta+2)}\right)\Delta(u,u+\varrho(\nu,u)).$$

Hence, the required result follows.

**Example 3.10.** Let [*u*, *u* + *ϕ*(*ν*, *u*)] = [0, *ϕ*(2, 0)], *β* = 1/2, and let the *F*·*I*·*V*·*Fs* Ψ and Φ be defined by

$$\Psi(\omega)(\theta)=\begin{cases}\dfrac{\theta}{\omega}, & \theta\in[0,\omega]\\[4pt] \dfrac{2\omega-\theta}{\omega}, & \theta\in(\omega,2\omega]\\[4pt] 0, & \text{otherwise,}\end{cases}\qquad \Phi(\omega)(\theta)=\begin{cases}\dfrac{\theta}{2\omega}, & \theta\in[0,2\omega]\\[4pt] \dfrac{4\omega-\theta}{2\omega}, & \theta\in(2\omega,4\omega]\\[4pt] 0, & \text{otherwise.}\end{cases}$$
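For concreteness (my own sketch, with function names invented for illustration), the θ-levels of these two fuzzy numbers can be computed and checked against the membership functions:

```python
def theta_cut_psi(omega, theta):
    """theta-level of Psi(omega): membership s/omega on [0, omega] and
    (2*omega - s)/omega on (omega, 2*omega]; the cut is [t*omega, (2-t)*omega]."""
    return (theta * omega, (2 - theta) * omega)

def theta_cut_phi(omega, theta):
    # Phi(omega): membership s/(2*omega) on [0, 2*omega], (4*omega - s)/(2*omega) after.
    return (2 * theta * omega, 2 * (2 - theta) * omega)

# Cross-check the Psi cut against its membership function at sample points.
omega, theta = 1.5, 0.4
lo, hi = theta_cut_psi(omega, theta)
for s in (0.0, 0.3, 1.0, 1.5, 2.0, 2.9):
    mu = s / omega if s <= omega else max(0.0, (2 * omega - s) / omega)
    assert (mu >= theta) == (lo <= s <= hi)
```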

Then, for each *θ* ∈ [0, 1], we have Ψ*<sup>θ</sup>*(*ω*) = [*θω*, (2 − *θ*)*ω*] and Φ*<sup>θ</sup>*(*ω*) = [2*θω*, 2(2 − *θ*)*ω*]. Since the left and right end point functions Ψ∗(*ω*, *θ*) = *θω*, Ψ<sup>∗</sup>(*ω*, *θ*) = (2 − *θ*)*ω*, Φ∗(*ω*, *θ*) = 2*θω*, and Φ<sup>∗</sup>(*ω*, *θ*) = 2(2 − *θ*)*ω* are preinvex functions with respect to *ϕ*(*ν*, *u*) = *ν* − *u* for each *θ* ∈ [0, 1], both Ψ(*ω*) and Φ(*ω*) are preinvex *F*·*I*·*V*·*Fs*. We clearly see that Ψ(*ω*)×̃Φ(*ω*) ∈ *L*([*u*, *u* + *ϕ*(*ν*, *u*)], F<sup>0</sup>) and

$$\begin{aligned}
&\frac{\Gamma(1+\beta)}{2(\varrho(\nu,u))^{\beta}}\Big[\mathcal{I}^{\beta}_{u^{+}}\,\Psi_{*}(u+\varrho(\nu,u))\times\Phi_{*}(u+\varrho(\nu,u))+\mathcal{I}^{\beta}_{u+\varrho(\nu,u)^{-}}\,\Psi_{*}(u)\times\Phi_{*}(u)\Big]\\
&\quad=\frac{\Gamma\big(\frac{3}{2}\big)}{2\sqrt{2}}\,\frac{1}{\sqrt{\pi}}\int_{0}^{\varrho(2,0)}(2-\omega)^{-\frac{1}{2}}\,2\theta^{2}\omega^{2}\,d\omega+\frac{\Gamma\big(\frac{3}{2}\big)}{2\sqrt{2}}\,\frac{1}{\sqrt{\pi}}\int_{0}^{\varrho(2,0)}\omega^{-\frac{1}{2}}\,2\theta^{2}\omega^{2}\,d\omega \approx 2.9332\,\theta^{2},\\
&\frac{\Gamma(1+\beta)}{2(\varrho(\nu,u))^{\beta}}\Big[\mathcal{I}^{\beta}_{u^{+}}\,\Psi^{*}(u+\varrho(\nu,u))\times\Phi^{*}(u+\varrho(\nu,u))+\mathcal{I}^{\beta}_{u+\varrho(\nu,u)^{-}}\,\Psi^{*}(u)\times\Phi^{*}(u)\Big]\\
&\quad=\frac{\Gamma\big(\frac{3}{2}\big)}{2\sqrt{2}}\,\frac{1}{\sqrt{\pi}}\int_{0}^{\varrho(2,0)}(2-\omega)^{-\frac{1}{2}}\,2(2-\theta)^{2}\omega^{2}\,d\omega+\frac{\Gamma\big(\frac{3}{2}\big)}{2\sqrt{2}}\,\frac{1}{\sqrt{\pi}}\int_{0}^{\varrho(2,0)}\omega^{-\frac{1}{2}}\,2(2-\theta)^{2}\omega^{2}\,d\omega \approx 2.9332\,(2-\theta)^{2}.
\end{aligned}$$

Note that

$$\begin{aligned}
\left(\frac{1}{2}-\frac{\beta}{(\beta+1)(\beta+2)}\right)\Delta_{*}(u,u+\varrho(\nu,u)) &= \frac{11}{30}\big[\Psi_{*}(u)\times\Phi_{*}(u)+\Psi_{*}(u+\varrho(\nu,u))\times\Phi_{*}(u+\varrho(\nu,u))\big] = \frac{11}{30}\,8\theta^{2},\\
\left(\frac{1}{2}-\frac{\beta}{(\beta+1)(\beta+2)}\right)\Delta^{*}(u,u+\varrho(\nu,u)) &= \frac{11}{30}\big[\Psi^{*}(u)\times\Phi^{*}(u)+\Psi^{*}(u+\varrho(\nu,u))\times\Phi^{*}(u+\varrho(\nu,u))\big] = \frac{11}{30}\,8(2-\theta)^{2},
\end{aligned}$$

$$\begin{aligned}
\left(\frac{\beta}{(\beta+1)(\beta+2)}\right)\nabla_{*}(u,u+\varrho(\nu,u)) &= \frac{2}{15}\big[\Psi_{*}(u)\times\Phi_{*}(u+\varrho(\nu,u))+\Psi_{*}(u+\varrho(\nu,u))\times\Phi_{*}(u)\big] = \frac{2}{15}\,(0),\\
\left(\frac{\beta}{(\beta+1)(\beta+2)}\right)\nabla^{*}(u,u+\varrho(\nu,u)) &= \frac{2}{15}\big[\Psi^{*}(u)\times\Phi^{*}(u+\varrho(\nu,u))+\Psi^{*}(u+\varrho(\nu,u))\times\Phi^{*}(u)\big] = \frac{2}{15}\,(0).
\end{aligned}$$

Therefore, we have

$$\left(\frac{1}{2}-\frac{\beta}{(\beta+1)(\beta+2)}\right)\Delta_{\theta}((u,u+\varrho(\nu,u)),\theta)+\left(\frac{\beta}{(\beta+1)(\beta+2)}\right)\nabla_{\theta}((u,u+\varrho(\nu,u)),\theta) = \frac{11}{30}\big[8\theta^{2},\ 8(2-\theta)^{2}\big]+\frac{2}{15}\,[0,0] \approx \big[2.9332\,\theta^{2},\ 2.9332\,(2-\theta)^{2}\big].$$

It follows that

$$\big[2.9332\,\theta^{2},\ 2.9332\,(2-\theta)^{2}\big] \le_{I} \big[2.9332\,\theta^{2},\ 2.9332\,(2-\theta)^{2}\big],$$

and Theorem 3.8 has been demonstrated.
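The numbers in this example can be reproduced programmatically; the closed-form values of the two integrals used below are my own computation, offered as a sketch rather than part of the original text:

```python
import math

def fuzzy_frac_hh_sides(theta):
    """Lower-endpoint sides of the product inequality in Example 3.10
    (u = 0, rho(2,0) = 2, beta = 1/2, Psi_*(w,t) = t*w, Phi_*(w,t) = 2*t*w)."""
    coef = math.gamma(1.5) / (2 * math.sqrt(2) * math.sqrt(math.pi))
    i1 = 64 * math.sqrt(2) / 15   # closed form of int_0^2 (2-w)^(-1/2) w^2 dw
    i2 = 8 * math.sqrt(2) / 5     # closed form of int_0^2 w^(3/2) dw
    lhs = coef * 2 * theta**2 * (i1 + i2)
    rhs = (11 / 30) * 8 * theta**2 + (2 / 15) * 0.0  # (11/30)*Delta_* + (2/15)*Nabla_*
    return lhs, rhs

lhs, rhs = fuzzy_frac_hh_sides(1.0)
# Both sides equal 44/15 = 2.9333..., so the inequality holds with equality here.
assert abs(lhs - rhs) < 1e-9 and abs(lhs - 2.9333) < 1e-3
```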

#### **4. Conclusions and Future Plan**

In this article, we established a relation between integral inequalities and preinvex *F*·*I*·*V*·*Fs* using fuzzy Riemann–Liouville fractional integrals and Condition C. We addressed *H*·*H* type inequalities and *H*·*H* Fejér type inequalities for the introduced preinvex *F*·*I*·*V*·*Fs*. Moreover, some related fuzzy fractional inequalities were also obtained, and we gave useful examples to verify the validity of the presented results. In the future, we will try to explore this concept for generalized preinvex *F*·*I*·*V*·*Fs* and, using fuzzy Riemann–Liouville fractional integrals, to obtain new inequalities for preinvex *F*·*I*·*V*·*Fs*. We believe that the implications and methodologies presented in this article will energize and encourage scholars to pursue a more intriguing follow-up in this field. Finally, we think that our findings may be applied to other fractional calculus models having Mittag-Leffler functions

in their kernels, such as the Atangana–Baleanu and Prabhakar fractional operators. This consideration is left as an open problem for academics interested in this topic. Interested researchers may follow the steps outlined in references [52,53].

**Author Contributions:** Conceptualization, M.B.K.; methodology, M.B.K.; validation, S.T., M.S.S. and H.G.Z.; formal analysis, J.E.M.-D.; investigation, M.S.S.; resources, S.T.; data curation, H.G.Z.; writing—original draft preparation, M.B.K., J.E.M.-D. and H.G.Z.; writing—review and editing, M.B.K. and S.T.; visualization, H.G.Z.; supervision, M.B.K. and M.S.S.; project administration, M.B.K.; funding acquisition, J.E.M.-D., M.S.S. and H.G.Z. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research received no external funding.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Not applicable.

**Acknowledgments:** The authors would like to thank the Rector, COMSATS University Islamabad, Islamabad, Pakistan, for providing excellent research. This work was funded by Taif University Researchers Supporting Project number (TURSP-2020/345), Taif University, Taif, Saudi Arabia and this work was also supported by Consejo Nacional de Ciencia y Tecnología, Grant No. A1-S-45928.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


## *Article* **Multiobjective Convex Optimization in Real Banach Space**

**Kin Keung Lai 1,\* ,†, Mohd Hassan 2,†, Jitendra Kumar Maurya 3,†, Sanjeev Kumar Singh 2,† and Shashi Kant Mishra 2,†**


**Abstract:** In this paper, we consider convex multiobjective optimization problems with equality and inequality constraints in real Banach space. We establish saddle point necessary and sufficient Pareto optimality conditions for the considered problems under some constraint qualifications. These results are motivated by the symmetric results obtained in the recent article by Cobos Sánchez et al. in 2021 on Pareto optimality for multiobjective optimization problems of continuous linear operators. The discussions in this paper are also related to second order symmetric duality for nonlinear multiobjective mixed integer programs for arbitrary cones due to Mishra and Wang in 2005. Further, we establish Karush–Kuhn–Tucker optimality conditions using saddle point optimality conditions for the differentiable cases and present some examples to illustrate our results. The study in this article can also be seen and extended as symmetric results of necessary and sufficient optimality conditions for vector equilibrium problems on Hadamard manifolds by Ruiz-Garzón et al. in 2019.

**Keywords:** multiobjective programming; nonlinear programming; convex optimization; saddle point

#### **1. Introduction**

Consider the general multiobjective optimization problem

$$(\text{MOP})\text{ min }f(\mathbf{x}) = (f\_1(\mathbf{x}), \dots, f\_p(\mathbf{x})), \text{ subject to } \operatorname{g}(\mathbf{x}) \le 0, \, h(\mathbf{x}) = 0,\tag{1}$$

where the functions *<sup>f</sup>* : *<sup>X</sup>* <sup>→</sup> <sup>R</sup>*<sup>p</sup>* , *<sup>g</sup>* : *<sup>X</sup>* <sup>→</sup> <sup>R</sup>*<sup>q</sup>* , and *<sup>h</sup>* : *<sup>X</sup>* <sup>→</sup> <sup>R</sup>*<sup>r</sup>* are real vector-valued functions and *X* is a real Banach space.

The multiobjective optimization problem (MOP) arises when two or more objective functions are simultaneously optimized over a feasible region. Multiobjective optimization has been analyzed and studied extensively by many researchers; see, for instance, [1–6]. Multiobjective optimization problems play a crucial role in various fields such as economics, engineering, and management sciences [2,7–11], as well as in many aspects of daily life.

To deal with multiobjective optimization problems, we have to find Pareto optimal solutions. These solutions are non-dominated by one another: a solution is called non-dominated, or Pareto optimal, if none of the objective values can be improved without worsening at least one other objective value. One of the best techniques for dealing with multiobjective optimization problems is scalarization. Wendell and Lee [12] developed the scalarization technique and generalized the results on efficient points for multiobjective optimization problems to nonlinear optimization problems. In the scalarization technique, the multiobjective problem is converted into a single-objective problem.
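The weighted-sum form of this scalarization can be sketched numerically. The toy biobjective problem below is a hypothetical illustration, not taken from the paper: each choice of positive weights picks out one Pareto optimal point by minimizing a single scalar objective over a grid of the feasible interval.

```python
# Weighted-sum scalarization of a toy biobjective problem (illustrative only;
# the objectives and the grid resolution are hypothetical sample choices).
def f1(x): return x**2
def f2(x): return (x - 2.0)**2

def scalarize(w1, w2, xs):
    """Minimize the scalarized objective w1*f1 + w2*f2 over candidate points xs."""
    return min(xs, key=lambda x: w1*f1(x) + w2*f2(x))

xs = [i / 1000 for i in range(2001)]  # grid on the feasible interval [0, 2]
# Each choice of positive weights yields a Pareto optimal point of (f1, f2).
pareto_points = [scalarize(w, 1.0 - w, xs) for w in (0.25, 0.5, 0.75)]
```

Sweeping the weights traces out (part of) the Pareto front; here the three weight choices give the points 1.5, 1.0, and 0.5.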

The saddle point optimality conditions are briefly explained in [13]. Rooyen et al. [14] constructed a Lagrangian function for the convex multiobjective problem and established a relationship between saddle point optimality conditions and Pareto optimal solutions.

**Citation:** Lai, K.K.; Hassan, M.; Maurya, J.K.; Singh, S.K.; Mishra, S.K. Multiobjective Convex Optimization in Real Banach Space. *Symmetry* **2021**, *13*, 2148. https://doi.org/10.3390/ sym13112148

Academic Editors: Octav Olteanu and Savin Treanta

Received: 8 October 2021 Accepted: 27 October 2021 Published: 10 November 2021

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

Cobos Sánchez et al. [15] proposed Pareto optimality conditions for multiobjective optimization problems of continuous linear operators. Recently, Treanta [16] studied a robust saddle point criterion for problems governed by second-order partial differential equations and partial differential inequations.

Rooyen et al. [14] discussed necessary and sufficient optimality conditions for (MOP) without any constraint qualification in Euclidean space. Recently, Antczak and Abdulaleem [17] studied optimality and duality results for E-differentiable functions. Barbu and Precupanu [18] studied the saddle point optimality conditions of convex optimization problems in real Banach spaces. Valyi [19] proposed the concept of approximate saddle point conditions for convex multiobjective optimization problems. Further, Rong and Wu [20] generalized the results of Valyi [19] to set-valued maps.

Karush–Kuhn–Tucker (KKT) optimality conditions [21] play a pivotal role in solving scalar optimization problems as well as multiobjective optimization problems. Recently, Lai et al. [3] discussed unconstrained multiobjective optimization problems. Further, Guu et al. [22] studied strong KKT type sufficient optimality conditions for semi-infinite programming problems.

Motivated by the work of Barbu and Precupanu [18], Rooyen et al. [14], and Wendell and Lee [12], we extend the results on saddle point optimality conditions and Karush–Kuhn–Tucker optimality conditions from single-objective to multiobjective problems with the help of Slater's constraint qualification [13]. We also present some illustrative examples to support the theory.

The organization of this paper is as follows: In Section 2, we recall some preliminaries and basic results. In Section 3, results on saddle point and Karush–Kuhn–Tucker necessary optimality conditions for multiobjective optimization problems are extended. Further, we establish the relationship between the Pareto solution and the saddle point for the Lagrange function using Slater's constraint qualification. The last section is dedicated to conclusions and future remarks.

#### **2. Preliminaries**

In this section, we recall some notions and preliminary results which will be used in this paper. $\mathbb{R}$ denotes the set of real numbers. Let $X$ be a real Banach space and $X^*$ its dual space. For $x, y \in \mathbb{R}^n$, the order relations below have the following meaning:

$$\begin{aligned} x \geqq y &\iff x_i \ge y_i \ \forall i, \\ x \ge y &\iff x_i \ge y_i \ \forall i, \ x \ne y, \\ x > y &\iff x_i > y_i \ \forall i, \\ x = y &\iff x_i = y_i \ \forall i. \end{aligned}$$

We denote the feasible region as

$$S = \{ x \in X : g(x) \le 0, \ h(x) = 0 \}.$$

To deal with the multiobjective optimization problems (MOP), we require some basic definitions.

**Definition 1** (Ref. [2])**.** *A decision vector $\bar{x} \in S$ is a global Pareto optimal solution (global efficient solution) if there does not exist another decision vector $x \in S$ such that*

$$f(x) \le f(\bar{x}).$$

Consider the following scalarized multiobjective optimization problem (SMOP) corresponding to (*MOP*):

$$(\text{SMOP}) \quad \min \sum_{i=1}^{p} f_i(x)$$

subject to $f(x) \leqq f(\bar{x})$, $g(x) \leqq 0$, $h(x) = 0$, where $\bar{x}$ is any feasible point of (MOP).

Now, we recall a result from [2], which relates the solutions of (MOP) and (SMOP).

**Theorem 1** (Ref. [7])**.** *A feasible point $\bar{x} \in S$ is a Pareto optimal solution of (MOP) if and only if $\bar{x}$ is an optimal solution of (SMOP).*

**Definition 2** (Ref. [13])**.** *A subset of a linear space $X$ is said to be convex if, for every pair of distinct points $x$ and $y$ in the subset, it contains $\lambda x + (1 - \lambda)y$ for all $\lambda \in [0, 1]$.*

**Definition 3** (Ref. [13])**.** *A function f is said to be convex on X if the inequality*

$$f(\lambda x + (1 - \lambda)y) \le \lambda f(x) + (1 - \lambda)f(y)$$

*holds for all x*, *y* ∈ *X and for every λ* ∈ [0, 1].

**Definition 4** (Ref. [18])**.** *The function $f : X \to \overline{\mathbb{R}} = [-\infty, +\infty]$ is said to be proper convex if $f(x) > -\infty$ for all $x \in X$ and $f$ is not identically $+\infty$ (that is, $f \not\equiv +\infty$). If $f$ is a convex function, $\mathrm{Dom}(f)$ denotes the effective domain of $f$, which is as follows:*

$$\mathrm{Dom}(f) = \{ x \in X : f(x) < +\infty \}.$$

*If $f$ is proper, then $\mathrm{Dom}(f)$ is nonempty and $f$ is finite on it. Conversely, if $A$ is a nonempty convex subset of $X$ and $f$ is a finite convex function on $A$, then one can obtain a proper convex function on $X$ by setting $f(x) = +\infty$ for $x \in X \setminus A$.*

**Definition 5** (Ref. [18])**.** *The function $f : X \to \overline{\mathbb{R}}$ is called lower semicontinuous at $\bar{x}$ if*

$$f(\bar{x}) \le \liminf_{x \to \bar{x}} f(x).$$

**Corollary 1** (Ref. [18])**.** *If $A_1$ and $A_2$ are two non-empty disjoint convex sets of $\mathbb{R}^n$, there exists a nonzero element $c = (c_1, \dots, c_n) \in \mathbb{R}^n \setminus \{0\}$ such that*

$$\sum_{i=1}^{n} c_i u_i \le \sum_{i=1}^{n} c_i v_i, \ \forall \ u = (u_i) \in A_1, \ \forall \ v = (v_i) \in A_2.$$

**Definition 6** (Ref. [18])**.** *Given the proper convex function f* : *X* →] − ∞, +∞], *the subdifferential of such a function is the mapping ∂ f* : *X* → *X* ∗ *defined by*

$$\partial f(x) = \{ x^* \in X^* : f(u) - f(x) \ge (u - x, x^*), \ \forall \ u \in X \},$$

*where $X^*$ is the dual of $X$ and $(\cdot, \cdot)$ denotes the canonical pairing between $X$ and $X^*$. The element $x^* \in \partial f(x)$ is called a subgradient of $f$ at $x$.*

**Corollary 2** (Ref. [18])**.** *If f is a proper convex function on X, then the minimum (global) of f over X is attained at the point x*¯ ∈ *X if and only if* 0 ∈ *∂ f*(*x*¯).
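Corollary 2 can be illustrated numerically: for a proper convex function, $0 \in \partial f(\bar{x})$ reduces, via the subgradient inequality with subgradient $0$, to checking that $\bar{x}$ is a global minimizer. A minimal sketch, assuming the hypothetical sample function $f(x) = |x - 1|$ and an arbitrary test grid:

```python
# Illustration of Corollary 2 for the hypothetical proper convex f(x) = |x - 1|.
# 0 is a subgradient at x_bar iff f(u) - f(x_bar) >= 0 * (u - x_bar) for all u,
# i.e. iff x_bar is a global minimum point of f.
f = lambda x: abs(x - 1.0)
x_bar = 1.0  # the global minimizer of f
grid = [i * 0.1 for i in range(-50, 51)]  # sample test points on [-5, 5]
zero_in_subdiff = all(f(u) - f(x_bar) >= 0.0 * (u - x_bar) for u in grid)
```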

**Theorem 2** (Ref. [18])**.** *If the functions $f_1$ and $f_2$ are finite at a point at which at least one of them is continuous, then*

$$
\partial (f\_1 + f\_2)(\mathbf{x}) = \partial f\_1(\mathbf{x}) + \partial f\_2(\mathbf{x}) \,\forall \,\mathbf{x} \in X.
$$

**Definition 7** (Slater's constraint qualification)**.** *There exists a point $x_0 \in X_0$ such that $g_j(x_0) < 0$ for all $j = 1, \dots, q$, and*

$$0 \in \operatorname{int}\{ (h_1(x), h_2(x), \dots, h_r(x)) : x \in X_0 \}.$$

#### **3. Saddle Point and Karush–Kuhn–Tucker Optimality Conditions**

In this section, we establish saddle point and Karush–Kuhn–Tucker type optimality conditions for the considered (MOP) in a real Banach space.

**Theorem 3.** *Let $f_1, \dots, f_p$, $g_1, \dots, g_q$ be proper convex functions and $h_1, \dots, h_r$ be affine functions. If $\bar{x}$ is a Pareto optimal solution of (MOP), then there exist real numbers $\lambda_1^f, \dots, \lambda_p^f, \lambda_1^g, \dots, \lambda_q^g, \lambda_1^h, \dots, \lambda_r^h$, not all zero, with the following properties:*

$$\begin{aligned} \sum_{i=1}^{p} \lambda_i^f f_i(\bar{x}) &\le \sum_{i=1}^{p} \lambda_i^f f_i(x) + \sum_{j=1}^{q} \lambda_j^g g_j(x) + \sum_{k=1}^{r} \lambda_k^h h_k(x), \ \forall \ x \in X_0, \\ \lambda_i^f &\ge 0 \ \forall \ i = 1, \dots, p, \quad \lambda_j^g \ge 0 \ \forall \ j = 1, \dots, q, \quad \lambda_j^g g_j(\bar{x}) = 0, \end{aligned} \tag{2}$$

*where* $X_0 = \bigcap_{i=1}^{p} \mathrm{Dom}(f_i) \cap \bigcap_{j=1}^{q} \mathrm{Dom}(g_j)$.

**Proof.** Let $\bar{x}$ be a Pareto optimal solution of the consistent problem (MOP). Then, by Theorem 1, $\bar{x}$ is an optimal solution of the problem

$$\begin{aligned} (\text{SMOP}) \quad \min \ & \sum_{i=1}^{p} f_i(x) \\ \text{subject to } \ & f_i(x) \le f_i(\bar{x}) \ \forall \ i = 1, \dots, p, \\ & g_j(x) \le 0 \ (j = 1, \dots, q), \ h_k(x) = 0 \ (k = 1, \dots, r). \end{aligned}$$

Now, we consider the subset

$$\begin{split} B = \Bigl\{ \Bigl( \sum_{i=1}^{p} f_i(x) - \sum_{i=1}^{p} f_i(\bar{x}) + \alpha_0^f, \ f_1(x) - f_1(\bar{x}) + \alpha_1^f, \ \dots, \ f_p(x) - f_p(\bar{x}) + \alpha_p^f, \\ g_1(x) + \alpha_1^g, \ \dots, \ g_q(x) + \alpha_q^g, \ h_1(x), \ \dots, \ h_r(x) \Bigr) : x \in X_0, \ \alpha_i^f > 0 \ \forall \ i, \ \alpha_j^g > 0 \ \forall \ j \Bigr\}. \end{split} \tag{3}$$

It is easy to see that $B$ is a non-void convex subset of $\mathbb{R}^{1+p+q+r}$ that does not contain the origin. Since $\{0\}$ is a nonempty convex set disjoint from $B$, Corollary 1 yields a separating homogeneous hyperplane; that is, there exist $1 + p + q + r$ real numbers $\hat{\lambda}_0^f, \hat{\lambda}_1^f, \dots, \hat{\lambda}_p^f, \lambda_1^g, \dots, \lambda_q^g, \lambda_1^h, \dots, \lambda_r^h$, not all zero, such that

$$\begin{split} \hat{\lambda}_0^f \Bigl\{ \sum_{i=1}^{p} f_i(x) - \sum_{i=1}^{p} f_i(\bar{x}) + \alpha_0^f \Bigr\} &+ \sum_{i=1}^{p} \hat{\lambda}_i^f \Bigl\{ f_i(x) - f_i(\bar{x}) + \alpha_i^f \Bigr\} \\ &+ \sum_{j=1}^{q} \lambda_j^g \Bigl\{ g_j(x) + \alpha_j^g \Bigr\} + \sum_{k=1}^{r} \lambda_k^h h_k(x) \ge 0, \end{split} \tag{4}$$

for all $x \in X_0$, $\alpha_i^f > 0$ $(i = 0, 1, \dots, p)$, $\alpha_j^g > 0$ $(j = 1, \dots, q)$. Taking $x = \bar{x}$, $\alpha_j^g \downarrow 0$ $(\forall \ j)$, $\alpha_i^f \downarrow 0$ for $i \ne l$ and $\alpha_l^f \uparrow \infty$, and again taking $x = \bar{x}$, $\alpha_i^f \downarrow 0$ $(\forall \ i)$, $\alpha_j^g \downarrow 0$ for $j \ne l$ and $\alpha_l^g \uparrow \infty$, we get

$$\hat{\lambda}_0^f \ge 0, \ \hat{\lambda}_i^f \ge 0 \ \text{and} \ \lambda_j^g \ge 0.$$

Thus, relation (4) becomes

$$\hat{\lambda}_0^f \Bigl\{ \sum_{i=1}^{p} f_i(x) - \sum_{i=1}^{p} f_i(\bar{x}) \Bigr\} + \sum_{i=1}^{p} \hat{\lambda}_i^f \{ f_i(x) - f_i(\bar{x}) \} + \sum_{j=1}^{q} \lambda_j^g g_j(x) + \sum_{k=1}^{r} \lambda_k^h h_k(x) \ge 0,$$

$$\implies \sum_{i=1}^{p} f_i(x) \Bigl\{ \hat{\lambda}_0^f + \hat{\lambda}_i^f \Bigr\} + \sum_{j=1}^{q} \lambda_j^g g_j(x) + \sum_{k=1}^{r} \lambda_k^h h_k(x) \ge \sum_{i=1}^{p} f_i(\bar{x}) \Bigl\{ \hat{\lambda}_0^f + \hat{\lambda}_i^f \Bigr\},$$

$$\implies \sum_{i=1}^{p} \lambda_i^f f_i(x) + \sum_{j=1}^{q} \lambda_j^g g_j(x) + \sum_{k=1}^{r} \lambda_k^h h_k(x) \ge \sum_{i=1}^{p} \lambda_i^f f_i(\bar{x}), \tag{5}$$

where *λ f <sup>i</sup>* <sup>=</sup> *<sup>λ</sup>*<sup>ˆ</sup> *f* <sup>0</sup> <sup>+</sup> *<sup>λ</sup>*<sup>ˆ</sup> *f i* . Since *x*¯ is feasible, therefore

$$\lambda_j^g g_j(\bar{x}) \le 0, \ \forall \ j. \tag{6}$$

Substituting $x = \bar{x}$ in inequality (5), we get

$$\sum_{j=1}^{q} \lambda_j^g g_j(\bar{x}) \ge 0. \tag{7}$$

Now, from (6) and (7) we have $\lambda_j^g g_j(\bar{x}) = 0$, $\forall \ j = 1, \dots, q$, which completes the proof.

**Example 1.** *Consider the problem*

$$\min f(x) = (f_1(x), f_2(x)), \quad \text{subject to } g(x) \leqq 0,$$

$$\begin{aligned} \text{where } f_1(x) &= \begin{cases} x_1^2, & \text{if } -3 \le x_1, x_2 \le 3 \\ +\infty, & \text{otherwise} \end{cases}, \quad f_2(x) = \begin{cases} x_2^2, & \text{if } -3 \le x_1, x_2 \le 3 \\ +\infty, & \text{otherwise} \end{cases}, \\ \text{and } g(x) &= \begin{cases} (x_1 - 1)^2 + (x_2 - 1)^2 - 1, & \text{if } -3 \le x_1, x_2 \le 3 \\ +\infty, & \text{otherwise.} \end{cases} \end{aligned}$$

*Therefore, the feasible region is $S = \{(x_1, x_2) \in \mathbb{R}^2 : (x_1 - 1)^2 + (x_2 - 1)^2 \leqq 1\}$ and the common effective domain is $X_0 = \bigcap_{i=1}^{2} \mathrm{Dom}(f_i) \cap \mathrm{Dom}(g) = \{(x_1, x_2) \in \mathbb{R}^2 : -3 \leqq x_1, x_2 \leqq 3\}$. Since $\bar{x} = (1, 0)$ is a Pareto optimal solution, for $\lambda_1^f = 0$, $\lambda_2^f > 0$, $\lambda^g = 0$ the following inequality is satisfied:*

$$\begin{split} &\lambda\_1^f f\_1(\bar{\mathbf{x}}) + \lambda\_2^f f\_2(\bar{\mathbf{x}}) = \mathbf{0} \leq \lambda\_1^f \mathbf{x}\_1^2 + \lambda\_2^f \mathbf{x}\_2^2 + \lambda^g [(\mathbf{x}\_1 - 1)^2 + (\mathbf{x}\_2 - 1)^2 - 1] \\ &= \lambda\_1^f f\_1(\mathbf{x}) + \lambda\_2^f f\_2(\mathbf{x}) + \lambda^g g(\mathbf{x}), \forall \, \mathbf{x} \in \mathcal{X}\_0. \end{split}$$

*Hence, the result is verified.*
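The saddle point inequality of Example 1 can also be spot-checked numerically on a grid over the box $[-3, 3]^2$. The sketch below uses the multiplier choice $\lambda_1^f = 0$, $\lambda_2^f = 1$, $\lambda^g = 0$ from the example; the grid resolution is an arbitrary assumption.

```python
# Numerical spot-check of the saddle point inequality from Example 1
# (illustrative sketch; the multiplier values and grid step are sample choices).
import itertools

def f1(x1, x2): return x1**2
def f2(x1, x2): return x2**2
def g(x1, x2):  return (x1 - 1)**2 + (x2 - 1)**2 - 1

x_bar = (1.0, 0.0)                      # the Pareto optimal point from Example 1
lam_f1, lam_f2, lam_g = 0.0, 1.0, 0.0   # multipliers as in the example

def lagrangian(x1, x2):
    return lam_f1*f1(x1, x2) + lam_f2*f2(x1, x2) + lam_g*g(x1, x2)

lhs = lam_f1*f1(*x_bar) + lam_f2*f2(*x_bar)   # equals 0 at x_bar
grid = [i * 0.5 for i in range(-6, 7)]        # grid over the box [-3, 3]^2
ok = all(lhs <= lagrangian(x1, x2) for x1, x2 in itertools.product(grid, grid))
```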

Thus, it is natural to call the function

$$L(\mathbf{x}, \boldsymbol{\lambda}^f, \boldsymbol{\lambda}^g, \boldsymbol{\lambda}^h) = \sum\_{i=1}^p \boldsymbol{\lambda}\_i^f f\_i(\mathbf{x}) + \sum\_{j=1}^q \boldsymbol{\lambda}\_j^g \boldsymbol{g}\_j(\mathbf{x}) + \sum\_{k=1}^r \boldsymbol{\lambda}\_k^h h\_k(\mathbf{x}), \tag{8}$$
 
*the Lagrange function associated with (MOP), where* $\lambda^f = (\lambda_i^f) \in \mathbb{R}^p$, $\lambda^g = (\lambda_j^g) \in \mathbb{R}^q$ *and* $\lambda^h = (\lambda_k^h) \in \mathbb{R}^r$.

**Remark 1.** *The necessary conditions (2) with $\bar{x} \in S$ are equivalent to the fact that the point $(\bar{x}, \lambda^f, \lambda^g, \lambda^h)$ is a saddle point of the Lagrange function (8) on $X_0 \times \mathbb{R}^p \times \mathbb{R}^q \times \mathbb{R}^r$, with respect to minimization on $X_0$ and maximization on $\mathbb{R}^p \times \mathbb{R}^q \times \mathbb{R}^r$; that is,*

$$\sum_{i=1}^{p} \lambda_i^f f_i(\bar{x}) + \sum_{j=1}^{q} \lambda_j^g g_j(\bar{x}) + \sum_{k=1}^{r} \lambda_k^h h_k(\bar{x}) \le \sum_{i=1}^{p} \lambda_i^f f_i(x) + \sum_{j=1}^{q} \lambda_j^g g_j(x) + \sum_{k=1}^{r} \lambda_k^h h_k(x)$$

$$\implies L(\bar{x}, \lambda^f, \lambda^g, \lambda^h) \le L(x, \lambda^f, \lambda^g, \lambda^h), \ \forall \ x \in X_0 \tag{9}$$

*and for every* $(x, \lambda^f, \lambda^g, \lambda^h) \in X \times \mathbb{R}^p \times \mathbb{R}^q \times \mathbb{R}^r$.

**Remark 2.** *The necessary optimality conditions (2) with $\lambda^f \ne 0$ and $\bar{x} \in S$ are also sufficient for $\bar{x}$ to be a Pareto optimal solution of (MOP). If $\lambda^f = 0$, then the optimality conditions involve only the constraint functions, giving no information about the functions being minimized.*

**Theorem 4.** *Let $f_1, \dots, f_p$, $g_1, \dots, g_q$ be proper convex functions and let $h_1, \dots, h_r$ be affine functions such that Slater's constraint qualification is satisfied at a feasible point $\bar{x}$ of (MOP). Then, the point $\bar{x}$ is a Pareto optimal solution of (MOP) if and only if there exist $p + q + r$ real numbers $\lambda_1^f, \dots, \lambda_p^f, \lambda_1^g, \dots, \lambda_q^g, \lambda_1^h, \dots, \lambda_r^h$ such that*

$$\sum_{i=1}^{p} \lambda_i^f f_i(\bar{x}) \le \sum_{i=1}^{p} \lambda_i^f f_i(x) + \sum_{j=1}^{q} \lambda_j^g g_j(x) + \sum_{k=1}^{r} \lambda_k^h h_k(x), \tag{10}$$

$$\text{and } \lambda^f \ge 0, \ \lambda^f \ne 0, \ \lambda^g \ge 0, \ \lambda_j^g g_j(\bar{x}) = 0 \ \forall \ j = 1, \dots, q.$$

**Proof.** Let $\bar{x}$ be a Pareto optimal solution of (MOP). Then, by Theorem 3, there exist $\lambda_1^f, \dots, \lambda_p^f, \lambda_1^g, \dots, \lambda_q^g, \lambda_1^h, \dots, \lambda_r^h$, not all zero, such that (2) holds. Suppose $\lambda^f = 0$; taking $x = \bar{x} \in S$, from (2) we get $\sum_{j=1}^{q} \lambda_j^g g_j(\bar{x}) = 0$. Since $\lambda^f = 0$ and $g_j(\bar{x}) < 0$ $(\forall \ j)$, we must have $\lambda_j^g = 0$ $(\forall \ j)$; therefore, from (2) we have

$$\sum_{k=1}^{r} \lambda_k^h h_k(x) \ge 0 \ \forall \ x \in X_0,$$

and not all components of $\lambda^h$ are zero, which contradicts the interiority condition in Slater's constraint qualification. Hence $\lambda^f \ne 0$; that is, some components of $\lambda^f$ are strictly positive.

Conversely, suppose $\bar{x}$ is not a Pareto optimal solution of (MOP). Then there exists $x^* (\ne \bar{x}) \in S$ which is a Pareto optimal solution of (MOP); that is,

$$f(x^*) \le f(\bar{x}). \tag{11}$$

Now, from relation (10) for *x* <sup>∗</sup> ∈ *S*, we have

$$\sum_{i=1}^{p} \lambda_i^f f_i(\bar{x}) \le \sum_{i=1}^{p} \lambda_i^f f_i(x^*),$$

which contradicts inequality (11). Hence, $\bar{x}$ is a Pareto optimal solution of (MOP). Since each $f_i$ is a proper convex function, $f(\bar{x})$ is necessarily finite.

**Theorem 5.** *Under the assumptions of Theorem 4, $\bar{x} \in X$ is a Pareto optimal solution of (MOP) if and only if there exist $\lambda^f = (\lambda_1^f, \dots, \lambda_p^f) \in \mathbb{R}^p$, $\lambda^g = (\lambda_1^g, \dots, \lambda_q^g) \in \mathbb{R}^q$ and $\lambda^h = (\lambda_1^h, \dots, \lambda_r^h) \in \mathbb{R}^r$ such that $(\bar{x}, \lambda^f, \lambda^g, \lambda^h)$ is a saddle point of the Lagrange function on $X_0 \times \mathbb{R}^p \times \mathbb{R}^q \times \mathbb{R}^r$; that is,*

$$\sum_{i=1}^{p} \lambda_i^f f_i(\bar{x}) + \sum_{j=1}^{q} \lambda_j^g g_j(\bar{x}) + \sum_{k=1}^{r} \lambda_k^h h_k(\bar{x}) \le \sum_{i=1}^{p} \lambda_i^f f_i(x) + \sum_{j=1}^{q} \lambda_j^g g_j(x) + \sum_{k=1}^{r} \lambda_k^h h_k(x)$$

*for all* $(x, \lambda^f, \lambda^g, \lambda^h) \in X_0 \times \mathbb{R}^p \times \mathbb{R}^q \times \mathbb{R}^r$.

**Proof.** The proof follows immediately from Theorem 4.

Now, we establish optimality conditions for the nonsmooth case. The following result extends the Karush–Kuhn–Tucker theorem to lower-semicontinuous multiobjective functions.

**Theorem 6.** *Under the hypotheses of Theorem 4, if we suppose that the functions $f_i$ are lower-semicontinuous and $g_j$, $h_k$ are continuous real functions, then the optimality conditions for $\bar{x} \in S$ are equivalent to the condition*

$$0 \in \sum_{i=1}^{p} \lambda_i^f \partial f_i(\bar{x}) + \sum_{j=1}^{q} \lambda_j^g \partial g_j(\bar{x}) + \sum_{k=1}^{r} \lambda_k^h \nabla h_k(\bar{x}). \tag{12}$$

**Proof.** From relation (10), if $\bar{x} \in S$ is the minimum point of the Lagrange function, then

$$\sum_{i=1}^{p} \lambda_i^f f_i(\bar{x}) \le \sum_{i=1}^{p} \lambda_i^f f_i(x) + \sum_{j=1}^{q} \lambda_j^g g_j(x) + \sum_{k=1}^{r} \lambda_k^h h_k(x). \tag{13}$$

Since $g_j(\bar{x}) \leqq 0$ and $h_k(\bar{x}) = 0$, inequality (13) takes the form

$$\sum_{i=1}^{p} \lambda_i^f f_i(\bar{x}) + \sum_{j=1}^{q} \lambda_j^g g_j(\bar{x}) + \sum_{k=1}^{r} \lambda_k^h h_k(\bar{x}) \le \sum_{i=1}^{p} \lambda_i^f f_i(x) + \sum_{j=1}^{q} \lambda_j^g g_j(x) + \sum_{k=1}^{r} \lambda_k^h h_k(x).$$

Now, from Corollary 2, the minimum point of the Lagrange function is a solution of the relation

$$0 \in \partial \left( \sum_{i=1}^{p} \lambda_i^f f_i + \sum_{j=1}^{q} \lambda_j^g g_j + \sum_{k=1}^{r} \lambda_k^h h_k \right)(\bar{x}).$$

Making use of the previous results and the additivity of the subdifferential (Theorem 2), we get

$$0 \in \sum_{i=1}^{p} \lambda_i^f \partial f_i(\bar{x}) + \sum_{j=1}^{q} \lambda_j^g \partial g_j(\bar{x}) + \sum_{k=1}^{r} \lambda_k^h \partial h_k(\bar{x}).$$

Since each $h_k$ is an affine function,

$$\partial h_k(\bar{x}) = \nabla h_k(\bar{x}).$$

Hence, we get the required result.

**Example 2.** *Consider the following problem*

$$\min f(x) = (f_1(x), f_2(x)), \quad \text{subject to } g(x) \leqq 0,$$

*at the feasible point $\bar{x} = (0, 0)$, where $f_1(x) = |x_1|$, $f_2(x) = |x_2|$, and $g(x) = |x_1| + |x_2| - 1$.*

Since $\bar{x}$ is a Pareto optimal solution of the considered problem and Slater's constraint qualification is satisfied because $g(\bar{x}) < 0$, we have $\lambda^f = (\lambda_1^f, \lambda_2^f) \ne 0$, $\lambda^f \geqq 0$ and $\lambda^g g(\bar{x}) = 0 \implies \lambda^g = 0$. Now, from the definition of the subdifferential, we get

$$\partial f_1(\bar{x}) = \{ (\xi, 0) \in \mathbb{R}^2 : -1 \le \xi \le 1 \}, \quad \partial f_2(\bar{x}) = \{ (0, \xi) \in \mathbb{R}^2 : -1 \le \xi \le 1 \},$$

which implies that

$$0 \in \lambda_1^f \partial f_1(\bar{x}) + \lambda_2^f \partial f_2(\bar{x}) + \lambda^g \partial g(\bar{x}).$$

Hence, the result is verified.
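The subdifferential computation in Example 2 can be spot-checked via the subgradient inequality $f(u) - f(\bar{x}) \ge \langle u - \bar{x}, s \rangle$. The sketch below verifies a few sample subgradients at $\bar{x} = (0, 0)$ and that the zero vector lies in both subdifferentials; the test grid and the sample subgradients are arbitrary choices.

```python
# Spot-check of the subgradient inequality at x_bar = (0, 0) for Example 2
# (illustrative sketch; the grid and the chosen subgradients are sample choices).
import itertools

f1 = lambda x1, x2: abs(x1)
f2 = lambda x1, x2: abs(x2)

x_bar = (0.0, 0.0)
# Candidate subgradients: s1 in partial f1(x_bar), s2 in partial f2(x_bar).
s1, s2 = (0.5, 0.0), (0.0, -0.5)

def is_subgradient(f, s):
    """Check f(u) - f(x_bar) >= <u - x_bar, s> on a grid of test points u."""
    grid = [i * 0.25 for i in range(-8, 9)]
    return all(
        f(u1, u2) - f(*x_bar) >= s[0]*(u1 - x_bar[0]) + s[1]*(u2 - x_bar[1])
        for u1, u2 in itertools.product(grid, grid)
    )

ok1, ok2 = is_subgradient(f1, s1), is_subgradient(f2, s2)
# With lambda_g = 0, the zero vector lies in lam1*partial f1 + lam2*partial f2
# since (0, 0) belongs to each subdifferential.
zero_ok = is_subgradient(f1, (0.0, 0.0)) and is_subgradient(f2, (0.0, 0.0))
```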

**Remark 3.** *Since each $h_k$ is affine, there exist a continuous linear functional $x_k^* \in X^*$ and a real number $\alpha_k \in \mathbb{R}$ such that $h_k = x_k^* + \alpha_k$; therefore $\nabla h_k = x_k^*$ and the above condition becomes*

$$0 \in \sum_{i=1}^{p} \lambda_i^f \partial f_i(\bar{x}) + \sum_{j=1}^{q} \lambda_j^g \partial g_j(\bar{x}) + \sum_{k=1}^{r} \lambda_k^h x_k^*. \tag{14}$$

Now, we consider only the case of inequality constraints; that is,

$$S_1 = \{ x \in X : g_j(x) \le 0, \ \forall \ j = 1, \dots, q \}.$$

In this case, Slater's constraint qualification reads as follows: there exists a point $\bar{x} \in \bigcap_{i=1}^{p} \mathrm{Dom}(f_i)$ such that $g_j(\bar{x}) < 0$ for all $j = 1, \dots, q$.

**Theorem 7.** *Let $f_1, \dots, f_p$ be proper convex lower-semicontinuous functions and $g_1, \dots, g_q$ be real convex continuous functions satisfying Slater's constraint qualification at a feasible point $\bar{x}$. Then, the point $\bar{x} \in S_1$ is a Pareto optimal solution of (MOP) if and only if there exist $\lambda^f = (\lambda_1^f, \dots, \lambda_p^f)$, $\lambda^g = (\lambda_1^g, \dots, \lambda_q^g)$ such that*

$$0 \in \sum_{i=1}^{p} \lambda_i^f \partial f_i(\bar{x}) + \sum_{j=1}^{q} \lambda_j^g \partial g_j(\bar{x}), \tag{15}$$

$$\lambda^f \ge 0, \ \lambda^f \ne 0, \ \lambda_j^g \ge 0, \ \lambda_j^g g_j(\bar{x}) = 0, \ \forall \ j = 1, \dots, q. \tag{16}$$

**Proof.** Suppose $\bar{x} \in S_1$ is a Pareto optimal solution of problem (MOP); then

$$\sum_{i=1}^{p} \lambda_i^f f_i(\bar{x}) \le \sum_{i=1}^{p} \lambda_i^f f_i(x) + \sum_{j=1}^{q} \lambda_j^g g_j(x). \tag{17}$$

By Slater's constraint qualification, there exists $\bar{x} \in S_1$ such that

$$\sum_{i=1}^{p} \lambda_i^f f_i(\bar{x}) + \sum_{j=1}^{q} \lambda_j^g g_j(\bar{x}) \le \sum_{i=1}^{p} \lambda_i^f f_i(x) + \sum_{j=1}^{q} \lambda_j^g g_j(x).$$

Now, from Corollary 2, the minimum point of the Lagrange function is a solution of the relation

$$0 \in \partial \left( \sum_{i=1}^{p} \lambda_i^f f_i + \sum_{j=1}^{q} \lambda_j^g g_j \right)(\bar{x}).$$

Using the additive property of subdifferential, we get

$$0 \in \sum_{i=1}^{p} \lambda_i^f \partial f_i(\bar{x}) + \sum_{j=1}^{q} \lambda_j^g \partial g_j(\bar{x}),$$

$$\lambda^f \ge 0, \ \lambda^f \ne 0, \ \lambda_j^g \ge 0, \ \lambda_j^g g_j(\bar{x}) = 0, \ \forall \ j = 1, \dots, q.$$

Conversely, suppose $\bar{x}$ is not a Pareto optimal solution of (MOP). Then there exists $x^* (\ne \bar{x}) \in S_1$ which is a Pareto optimal solution of (MOP); that is,

$$f(\mathbf{x}^\*) \le f(\bar{\mathbf{x}}).\tag{18}$$

Now, from relation (10), for $x^* \in S_1$ we have

$$\sum_{i=1}^{p} \lambda_i^f f_i(\bar{x}) \le \sum_{i=1}^{p} \lambda_i^f f_i(x^*),$$

which contradicts inequality (18). Hence, $\bar{x}$ is a Pareto optimal solution of (MOP). Since each $f_i$ is a proper convex function, $f(\bar{x})$ is necessarily finite.

**Corollary 3.** *Let $f_1, \dots, f_p$, $g_1, \dots, g_q$ be real convex and differentiable functions on $X$ which satisfy Slater's constraint qualification. Then, a feasible point $\bar{x}$ is a Pareto optimal solution of problem (MOP), with the optimality condition (15) now expressed through gradients, if and only if there exist real numbers $\lambda_1^f, \dots, \lambda_p^f, \lambda_1^g, \dots, \lambda_q^g$ such that*

$$\sum_{i=1}^{p} \lambda_i^f \nabla f_i(\bar{x}) + \sum_{j=1}^{q} \lambda_j^g \nabla g_j(\bar{x}) = 0, \tag{19}$$

$$\lambda^f \ge 0, \ \lambda^f \ne 0, \ \lambda_j^g \ge 0, \ \lambda_j^g g_j(\bar{x}) = 0, \ \forall \ j = 1, \dots, q.$$
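The gradient condition (19) can be checked numerically on a small differentiable instance. The biobjective problem below is a hypothetical example, not taken from the paper: $f_1 = x_1^2 + x_2^2$, $f_2 = (x_1 - 2)^2 + x_2^2$, $g = x_1^2 + x_2^2 - 1$, with Pareto optimal point $\bar{x} = (1, 0)$ on the active constraint boundary.

```python
# Numerical check of the stationarity condition (19) for a hypothetical smooth
# biobjective problem: f1 = x1^2 + x2^2, f2 = (x1-2)^2 + x2^2, g = x1^2 + x2^2 - 1.
def grad_f1(x1, x2): return (2*x1, 2*x2)
def grad_f2(x1, x2): return (2*(x1 - 2), 2*x2)
def grad_g(x1, x2):  return (2*x1, 2*x2)

x_bar = (1.0, 0.0)     # a Pareto optimal point on the boundary g(x_bar) = 0
lam_f = (1.0, 1.0)     # lam_f >= 0, lam_f != 0
lam_g = 0.0            # complementary slackness holds since g(x_bar) = 0

gs = [grad_f1(*x_bar), grad_f2(*x_bar), grad_g(*x_bar)]
ws = [lam_f[0], lam_f[1], lam_g]
# The weighted sum of gradients should vanish componentwise, as in (19).
residual = tuple(sum(w * grad[k] for w, grad in zip(ws, gs)) for k in range(2))
```

Here $\nabla f_1(\bar{x}) = (2, 0)$ and $\nabla f_2(\bar{x}) = (-2, 0)$ cancel with equal weights, so the residual is the zero vector.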

#### **4. Conclusions**

In this paper, we have established saddle point optimality conditions for a convex MOP in a real Banach space. We recalled Slater's constraint qualification from [18] and derived saddle point necessary and sufficient Pareto optimality conditions for the considered problem, in which the multipliers of the objective functions never vanish simultaneously. We deduced Karush–Kuhn–Tucker optimality conditions from the saddle point optimality conditions for the subdifferentiable case and presented some examples to verify our results. We characterized saddle point optimality conditions for Pareto points of convex MOPs in real Banach spaces, which is more general than, and uses a different proof technique from, Ehrgott and Wiecek [23]. Further, we deduced Karush–Kuhn–Tucker optimality conditions for both smooth and nonsmooth cases from the saddle point optimality conditions, which is new compared with Ehrgott and Wiecek [23]. Our derived Karush–Kuhn–Tucker optimality conditions agree with those in Miettinen [2] and Haeser and Ramos [24]. These results can be extended to convex semi-infinite programming problems [25,26]. In the future, we can extend these results to interval-valued optimality conditions and deduce some applications motivated by the recent article by Treanta [27]; we can also extend them to vector equilibrium problems on Hadamard manifolds, motivated by Ruiz-Garzón et al. [28].

**Author Contributions:** Writing—original draft preparation, K.K.L., M.H., J.K.M., S.K.S. and S.K.M.; writing—review and editing, K.K.L., M.H., J.K.M., S.K.S. and S.K.M.; funding acquisition, K.K.L. All authors have read and agreed to the published version of the manuscript.

**Funding:** The second author is financially supported by CSIR-UGC JRF, New Delhi, India, through Reference no.: 1009/(CSIR-UGC NET JUNE 2018). The fourth author is financially supported by CSIR-UGC JRF, New Delhi, India, through Reference no.: 1272/(CSIR-UGC NET DEC.2016). The fifth author is financially supported by "Research Grant for Faculty" (IoE Scheme) under Dev. Scheme NO. 6031.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** No data were used to support this study.

**Acknowledgments:** The authors are indebted to the anonymous reviewers for their valuable comments and remarks that helped to improve the presentation and quality of the manuscript.

**Conflicts of Interest:** The authors declare no conflict of interest.



### *Article* **The Well Posedness for Nonhomogeneous Boussinesq Equations**

**Yan Liu <sup>1</sup> and Baiping Ouyang 2,\***


**Abstract:** This paper is devoted to studying the Cauchy problem for nonhomogeneous Boussinesq equations. We build the results on the critical Besov spaces $(\theta, u) \in L_T^\infty(\dot{B}_{p,1}^{N/p}) \times \bigl( L_T^\infty(\dot{B}_{p,1}^{N/p-1}) \cap L_T^1(\dot{B}_{p,1}^{N/p+1}) \bigr)$ with $1 < p < 2N$. We prove the global existence of the solution when the initial velocity is small with respect to the viscosity and the initial temperature approaches a positive constant. Furthermore, we prove uniqueness for $1 < p \le N$. Our results can be seen as a version of symmetry in Besov spaces for the Boussinesq equations.

**Keywords:** nonhomogeneous Boussinesq equations; global well-posedness; Littlewood–Paley decomposition

#### **1. Introduction**

This paper discusses the global well-posedness of the Boussinesq equations. We assume that the viscosity and thermal conductivity are temperature dependent. The coupled mass flow and heat flow of the viscous incompressible fluid are governed by the Boussinesq approximation. The equations we study are as follows:

$$\begin{cases} u_t - \operatorname{div}(\nu(\theta)\nabla u) + u \cdot \nabla u + \alpha\theta g + \nabla p = 0, \\ \operatorname{div}(u) = 0, \\ \theta_t - \operatorname{div}(\kappa(\theta)\nabla\theta) + u \cdot \nabla\theta = 0. \end{cases} \tag{1}$$

Here $u(t, x)$ denotes the velocity of the fluid, $(t, x) \in \mathbb{R}^+ \times \mathbb{R}^N$, where $N \ge 2$ is the spatial dimension; $p(t, x)$ is the hydrostatic pressure; $\theta(t, x)$ is the temperature; $g(t, x)$ is the external force per unit mass; $\nu(\theta)$ is the kinematic viscosity; $\kappa(\theta)$ is the thermal conductivity; and $\alpha$ is a positive constant depending on the coefficient of volume expansion. The Boussinesq system plays an important role in the atmospheric sciences; for more details, see [1,2].

The homogeneous Boussinesq equations correspond to the special case where the coefficients $\nu$ and $\kappa$ are positive constants:

$$\begin{cases} u\_t - \nu \triangle u + u \cdot \nabla u + a\theta g + \nabla p = 0, \\ \operatorname{div}(u) = 0, \\ \theta\_t - \kappa \triangle \theta + u \cdot \nabla \theta = 0. \end{cases} \tag{2}$$

The global well-posedness of (2) with $\nu > 0$, $\kappa > 0$ is well known (see [3]). However, for the case $\nu = 0$ and $\kappa = 0$ in (2), the global existence of solutions is still an outstanding open problem in mathematical fluid mechanics (see [4–6]). Recently, several authors have obtained the global well-posedness of (2) in the partial viscosity cases (i.e., either the zero-diffusivity case $\kappa = 0$, $\nu > 0$, or the zero-viscosity case $\kappa > 0$, $\nu = 0$); see [6–12].

Some attention has been paid to the nonhomogeneous case (1). In [13], the authors investigated the initial-boundary problems of (1) and obtained the global well-posedness.

**Citation:** Liu, Y.; Ouyang, B. The Well Posedness for Nonhomogeneous Boussinesq Equations. *Symmetry* **2021**, *13*, 2110. https://doi.org/10.3390/ sym13112110

Academic Editors: Octav Olteanu and Savin Treanta

Received: 19 October 2021 Accepted: 30 October 2021 Published: 6 November 2021

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

In [14], the authors studied an optimum control problem for a mathematical model describing the steady non-isothermal creep of an incompressible fluid through a locally Lipschitz bounded region. In [15], they studied an optimal control problem for the mathematical model that describes steady non-isothermal creeping flows of an incompressible fluid through a locally Lipschitz bounded domain. In [16], the initial-boundary value problem of the fully incompressible Navier-Stokes equations, with viscosity coefficient $\nu$ and heat conductivity $\kappa$ varying with temperature by the power law of Chapman-Enskog, is studied. When $\kappa = 0$, the method used in [16] is not applicable, and we must seek new methods to overcome the difficulty.

The purpose of this paper is to study the well-posedness of the Boussinesq system (1). System (1) corresponds to physical settings in which the variation of the fluid viscosity (and thermal conductivity) with temperature cannot be ignored (for more details, see [17] and the references therein). The existing literature contains extensive discussion of the constant-viscosity case and much less of temperature-dependent viscosity. This paper therefore provides methods for studying other problems in which the viscosity depends on the temperature. In the present paper, we consider the system (1) without thermal conductivity and with the viscosity $\nu$ dependent on $\theta$. The main difficulty is that we cannot use the results obtained previously for constant viscosity. We first use the method of iteration and then transform the problem into a constant-viscosity problem; this is the main innovation of this paper. Since the Besov scale is finer than the traditional Sobolev scale, the results obtained in this paper are no longer valid in the Sobolev setting. In the present paper, we study the following equations:

$$\begin{cases} u_t - \operatorname{div}(\nu(\theta)\nabla u) + u \cdot \nabla u + a\theta g + \nabla p = 0, \\ \operatorname{div}(u) = 0, \\ \theta_t + u \cdot \nabla \theta = 0, \\ (u, \theta)|_{t=0} = (u_0, \theta_0). \end{cases} \tag{3}$$

In order to have a clear idea of our purpose, we shall recall some research history for the following Navier-Stokes equations:

$$\begin{cases} \partial_t \rho + \nabla \cdot (\rho u) = 0, \\ \partial_t (\rho u) + \nabla \cdot (\rho u \otimes u) - \mu \Delta u + \nabla p = \rho f, \\ \nabla \cdot u = 0. \end{cases} \tag{4}$$

In [18], Fujita and Kato proved the global existence and uniqueness for problem (4) in the critical Sobolev space $\dot H^{N/2} \times (\dot H^{N/2-1})^N$. Precisely, if $(\rho, u)$ is a solution of (4) with initial data $(\rho_0(x), u_0(x))$, then:

$$\rho_\lambda(t,x) = \rho(\lambda^2 t, \lambda x), \quad u_\lambda(t,x) = \lambda u(\lambda^2 t, \lambda x)$$

is also a solution of (4), with initial data $(\rho_{0,\lambda}(x), u_{0,\lambda}(x)) = (\rho_0(\lambda x), \lambda u_0(\lambda x))$. Subsequently, in [19], Danchin generalized the results of Fujita and Kato [18] to the Besov space $(\dot B^{N/2-1}_{2,1})^N$; see also [20–24]. Some ideas of this paper came from [19,20]. Some new results about these equations may be found in [25–30].
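The scaling invariance can be verified term by term; as a routine check (with the natural scalings $p_\lambda(t,x) = \lambda^2 p(\lambda^2 t, \lambda x)$ and $f_\lambda(t,x) = \lambda^3 f(\lambda^2 t, \lambda x)$), one computes:

$$\partial_t(\rho_\lambda u_\lambda) = \lambda^3\, \partial_t(\rho u)(\lambda^2 t, \lambda x), \quad \mu\Delta u_\lambda = \lambda^3\, \mu(\Delta u)(\lambda^2 t, \lambda x), \quad \nabla\cdot(\rho_\lambda u_\lambda \otimes u_\lambda) = \lambda^3\, \nabla\cdot(\rho u \otimes u)(\lambda^2 t, \lambda x),$$

so every term of the momentum equation in (4) picks up the common factor $\lambda^3$, while every term of the mass equation picks up $\lambda^2$.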

We suppose that the initial datum satisfies $\theta_0 > 0$. In the present paper, we shall establish the well-posedness of the non-homogeneous Boussinesq Equation (3) in $\dot B^{N/p}_{p,1}$ (see the definition in Section 2). Since the Besov spaces are symmetric, the results obtained in this paper have the property of symmetry. We shall restrict our work to solutions such that the temperature $\theta$ is a small perturbation of a constant temperature $\underline{\theta}$. As we know, errors are inevitable in the process of modeling or measurement, and we wish to understand the impact of these errors on the behavior of the solutions; this paper addresses that question, on which there are few relevant studies at present. Without loss of generality, in the following we take $\underline{\theta} = 1$, $\nu \in C^\infty$, and $\underline{\nu} = \nu(\underline{\theta}) = \nu(1)$. Therefore, Equation (3) can be rewritten as:

$$\begin{cases} u_t - \underline{\nu} \Delta u + u \cdot \nabla u + \nabla p = G, \\ \operatorname{div}(u) = 0, \\ \theta_t + u \cdot \nabla \theta = 0, \\ (u, \theta)|_{t=0} = (u_0, \theta_0), \end{cases} \tag{5}$$

where,

$$G = \nabla \cdot \big[(\nu(\theta) - \underline{\nu})\nabla u\big] - a\theta g. \tag{6}$$

We write:

$$
\widetilde{\nu}(\theta + 1) = \nu(\theta) \quad \text{and} \quad \widetilde{\nu}(\underline{\theta}) = 1. \tag{7}
$$

Let us now state our main results.

**Theorem 1.** *Let $1 < p < 2N$. Then, for $(\theta_0, u_0) \in \dot B^{N/p}_{p,1} \times (\dot B^{N/p-1}_{p,1})^N$ and $g \in L^1_T(\dot B^{N/p-1}_{p,1}) \cap L^2_T(\dot B^{N/p}_{p,1})$, there exists $T(\theta_0, u_0) > 0$ such that the problem* (5) *admits a solution $(\theta, u)$ with:*

$$\theta \in \mathbb{C}([0, T), \mathring{\mathcal{B}}\_{p, 1}^{N/p}) \bigcap L\_T^{\infty}(\mathring{\mathcal{B}}\_{p, 1}^{N/p}),$$

$$u \in \mathbb{C}([0, T), \mathring{\mathcal{B}}\_{p, 1}^{N/p - 1}) \bigcap L\_T^{\infty}(\mathring{\mathcal{B}}\_{p, 1}^{N/p - 1}) \bigcap L\_T^1(\mathring{\mathcal{B}}\_{p, 1}^{N/p + 1}).$$

*Moreover, if there exists a small constant $\varepsilon$ such that:*

$$\|u_0\|_{L^\infty_T(\dot B^{N/p-1}_{p,1})} + \|g\|_{L^1_T(\dot B^{N/p-1}_{p,1})} \le \varepsilon \underline{\nu},$$

*then T* = +∞*. If* 1 < *p* ≤ *N, the solution is unique.*

The present paper is structured as follows: in the next section, we show some preliminaries. In Section 3, we show the existence of the solution. The uniqueness is presented in Section 4. Some conclusions are included in Section 5.

**Remark 1.** *Throughout this paper, $C$ stands for a 'harmless' uniform constant, and we sometimes use the notation $A \lesssim B$ as an equivalent of $A \le CB$. The notation $A \approx B$ means that $A \lesssim B$ and $B \lesssim A$.*

#### **2. Some Results on Besov Spaces**

*2.1. Littlewood-Paley Theory*

At the beginning, we shall recall the Littlewood-Paley decomposition.

Take $\chi, \varphi \in C^\infty(\mathbb{R}^N)$ supported in $B = \{\xi \in \mathbb{R}^N,\ |\xi| \le 4/3\}$ and $\Gamma = \{\xi \in \mathbb{R}^N,\ 3/4 \le |\xi| \le 8/3\}$, respectively, such that:

$$\sum_{j \in \mathbb{Z}} \varphi(2^{-j}\xi) = 1, \quad \chi(\xi) = 1 - \sum_{j \ge 0} \varphi(2^{-j}\xi), \quad \forall\, \xi \neq 0. \tag{8}$$

Denoting:

$$\Delta_j u = \mathcal{F}^{-1}\big(\varphi(2^{-j}\cdot)\hat u(\cdot)\big) = 2^{Nj} \int_{\mathbb{R}^N} \psi(2^j y)\, u(x-y)\, dy, \quad \text{for } j \in \mathbb{Z},$$

and:

$$S_j u = \sum_{k \le j-1} \Delta_k u = 2^{Nj} \int_{\mathbb{R}^N} \psi_1(2^j y)\, u(x-y)\, dy,$$

where $\hat u = \mathcal{F}(u)$ denotes the Fourier transform of $u$, $\psi = \mathcal{F}^{-1}(\varphi)$, and $\psi_1 = \mathcal{F}^{-1}(\chi)$. The formal decomposition:

$$u = \sum_{j=-\infty}^{\infty} \Delta_j u \tag{9}$$

is called the homogeneous Littlewood-Paley decomposition. This dyadic decomposition has a nice quasi-orthogonality property, and we have:

$$\Delta_i \Delta_j u \equiv 0 \quad \text{if} \quad |i-j| \ge 2, \tag{10}$$

$$\Delta_i(S_{j-1}u\, \Delta_j u) \equiv 0 \quad \text{if} \quad |i-j| \ge 5. \tag{11}$$

The details of Littlewood-Paley decomposition can be found in [31,32].
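As a purely numerical illustration of the dyadic decomposition (9), one can mimic the blocks $\Delta_j u$ on a periodic grid with the FFT, using sharp indicator functions of the dyadic shells in place of the smooth bump $\varphi$ (a toy sketch; the helper `dyadic_blocks` and the sharp cutoffs are our illustration, not the construction used in the analysis):

```python
import numpy as np

def dyadic_blocks(u):
    """Sharp-cutoff analogue of the Littlewood-Paley blocks Delta_j u on a
    periodic grid: block j keeps the frequencies 2^(j-1) <= |k| < 2^j
    (block 0 keeps |k| < 1, i.e., the mean)."""
    n = len(u)
    uhat = np.fft.fft(u)
    k = np.abs(np.fft.fftfreq(n, d=1.0 / n))  # integer frequencies, |k| <= n/2
    jmax = int(np.ceil(np.log2(n)))           # enough shells to reach Nyquist
    blocks = []
    for j in range(jmax + 1):
        lo, hi = (0.0, 1.0) if j == 0 else (2.0 ** (j - 1), 2.0 ** j)
        blocks.append(np.real(np.fft.ifft(uhat * ((k >= lo) & (k < hi)))))
    return blocks

rng = np.random.default_rng(0)
u = rng.standard_normal(256)
blocks = dyadic_blocks(u)

# Discrete analogue of (9): the blocks reassemble u exactly, because the
# sharp shells partition the frequency axis.
assert np.allclose(sum(blocks), u)
```

With the smooth $\varphi$ of (8) the shell supports overlap, which is exactly what produces the quasi-orthogonality (10) rather than exact orthogonality of the blocks.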

#### *2.2. The Homogeneous Besov Spaces*

In the following, we define the functional spaces in which we shall work.

**Definition 1.** *For $s \in \mathbb{R}$, $(p,r) \in [1,+\infty] \times [1,+\infty]$, and $u \in \mathcal{S}'(\mathbb{R}^N)$, define:*

$$\dot B^s_{p,r} = \{u \in \mathcal{S}'(\mathbb{R}^N),\ \|u\|_{\dot B^s_{p,r}} < +\infty\},$$

*where:*

$$\|u\|_{\dot B^s_{p,r}} = \begin{cases} \left(\sum_{j \in \mathbb{Z}} 2^{rjs} \|\Delta_j u\|_{L^p}^r\right)^{\frac{1}{r}}, & r < +\infty, \\ \sup_j 2^{js} \|\Delta_j u\|_{L^p}, & r = +\infty. \end{cases}$$

Let us now recall some classical properties of these Besov spaces (see [23,24]).

#### **Proposition 1.** *The following properties hold:*

*(i) There exists a uniform constant C, such that,*

$$C^{-1}\|u\|_{\dot B^s_{p,r}} \le \|\nabla u\|_{\dot B^{s-1}_{p,r}} \le C\|u\|_{\dot B^s_{p,r}}; \tag{12}$$

*(ii) Sobolev embedding: if $p_1 \le p_2$ and $r_1 \le r_2$, then:*

$$\dot B^{s}_{p_1,r_1} \hookrightarrow \dot B^{\,s - N(\frac{1}{p_1} - \frac{1}{p_2})}_{p_2,r_2}; \tag{13}$$

*(iii) For $s > 0$, $\dot B^s_{p,r} \cap L^\infty$ is an algebra. Moreover, for any $p \in [1,+\infty]$:*

$$
\mathring{\mathcal{B}}\_{p,1}^{N/p} \hookrightarrow \mathring{\mathcal{B}}\_{p,\infty}^{N/p} \bigcap L^{\infty}.\tag{14}
$$

$$(iv)\ \text{Interpolation:}\quad [\dot B^{s_1}_{p,r}, \dot B^{s_2}_{p,r}]_{\theta} = \dot B^{\theta s_1 + (1-\theta)s_2}_{p,r}.$$
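As a quick check of the index arithmetic, the embedding (14) already follows from the Sobolev embedding (13) together with the inclusion $\ell^1 \hookrightarrow \ell^\infty$: taking $p_1 = p$ and $p_2 = \infty$ in (13) gives

$$\dot B^{N/p}_{p,1} \hookrightarrow \dot B^{\,N/p - N\cdot\frac{1}{p}}_{\infty,1} = \dot B^{0}_{\infty,1} \hookrightarrow L^\infty, \qquad \dot B^{N/p}_{p,1} \hookrightarrow \dot B^{N/p}_{p,\infty}.$$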

Throughout this paper, we shall use the product laws in Besov spaces. These product laws are proved in [20,33].

**Proposition 2.** *Let $(p, p_1, p_2) \in [1,+\infty]^3$ be such that:*

$$\frac{1}{p} \le \frac{1}{p_1} + \frac{1}{p_2}.$$

*We get:*

*(i) If:*

$$s_1 + s_2 + N\inf\left(0,\ 1 - \frac{1}{p_1} - \frac{1}{p_2}\right) > 0, \quad s_1 < \frac{N}{p_1} \quad \text{and} \quad s_2 < \frac{N}{p_2},$$

*there holds,*

$$\|uv\|_{\dot B^{\,s_1+s_2-N(\frac{1}{p_1}+\frac{1}{p_2}-\frac{1}{p})}_{p,r}} \lesssim \|u\|_{\dot B^{s_1}_{p_1,r}} \|v\|_{\dot B^{s_2}_{p_2,\infty}}; \tag{15}$$

*furthermore, if $s_1 = \frac{N}{p_1}$ and $s_2 = \frac{N}{p_2}$, we take $r = 1$.*

*(ii) If $|s| < \frac{N}{p}$ and $p \ge 2$, then we get:*

$$\|uv\|_{\dot B^s_{p,r}} \lesssim \|u\|_{\dot B^s_{p,r}} \|v\|_{\dot B^{N/p}_{p,\infty} \cap L^\infty}. \tag{16}$$

*(iii) If $s_1 + s_2 = 0$, $s_1 \in \big(-\frac{N}{p_1}, \frac{N}{p_1}\big]$ and $\frac{1}{p_1} + \frac{1}{p_2} \le 1$, then:*

$$\|uv\|_{\dot B^{\,-N(\frac{1}{p_1}+\frac{1}{p_2}-\frac{1}{p})}_{p,\infty}} \lesssim \|u\|_{\dot B^{s_1}_{p_1,1}} \|v\|_{\dot B^{s_2}_{p_2,\infty}}. \tag{17}$$

Additionally, we need the definition of $\widetilde L^{\alpha}_T(\dot B^s_{p,r})$ introduced in [19,20,31].

**Definition 2.** *Let* (*r*, *α*, *p*) ∈ [1, +∞] 3 *, T* ∈ [0, +∞] *and s* ∈ *R. We set:*

$$\|u\|_{\widetilde L^{\alpha}_T(\dot B^s_{p,r})} \triangleq \left(\sum_{j \in \mathbb{Z}} 2^{jrs} \left(\int_0^T \|\Delta_j u(t)\|_{L^p}^{\alpha}\, dt\right)^{r/\alpha}\right)^{1/r}.$$

By virtue of the Minkowski inequality, we get:

$$\|u\|_{\widetilde L^{\alpha}_T(\dot B^s_{p,r})} \le \|u\|_{L^{\alpha}_T(\dot B^s_{p,r})}, \quad \text{if } \alpha \le r. \tag{18}$$

Conversely,

$$\|u\|_{L^{\alpha}_T(\dot B^s_{p,r})} \le \|u\|_{\widetilde L^{\alpha}_T(\dot B^s_{p,r})}, \quad \text{if } r \le \alpha. \tag{19}$$

Moreover, for *θ* ∈ (0, 1], we have:

$$\|u\|_{\widetilde L^{\alpha}_T(\dot B^s_{p,r})} \le \|u\|_{\widetilde L^{\alpha_1}_T(\dot B^{s_1}_{p,r})}^{\theta}\, \|u\|_{\widetilde L^{\alpha_2}_T(\dot B^{s_2}_{p,r})}^{1-\theta} \tag{20}$$

with,

$$\frac{1}{\alpha} = \frac{\theta}{\alpha\_1} + \frac{1-\theta}{\alpha\_2}, \quad \text{and} \quad s = \theta s\_1 + (1-\theta)s\_2.$$
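For instance, the case used repeatedly below, namely $u \in \widetilde L^\infty_T(\dot B^{N/p-1}_{p,1}) \cap \widetilde L^1_T(\dot B^{N/p+1}_{p,1}) \Rightarrow u \in \widetilde L^2_T(\dot B^{N/p}_{p,1})$, corresponds to $\theta = \frac12$, $(\alpha_1, s_1) = (\infty, \frac{N}{p}-1)$, $(\alpha_2, s_2) = (1, \frac{N}{p}+1)$:

$$\frac{1}{\alpha} = \frac{1/2}{\infty} + \frac{1/2}{1} = \frac12, \qquad s = \frac12\Big(\frac{N}{p}-1\Big) + \frac12\Big(\frac{N}{p}+1\Big) = \frac{N}{p}.$$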

#### *2.3. Estimates for Linear Transport Equation*

In the following, we recall some estimates for the following linear transport equation:

$$\begin{cases} \partial\_t g + \nabla \cdot (vg) = F, \\ g(0, x) = g\_0. \end{cases} \tag{21}$$

The following results hold (see the proofs in [19,24,33]).

**Proposition 3.** *Let $(p,r) \in [1,+\infty]^2$ and let $s$ be such that $-1 - N\inf(\frac{1}{p'}, \frac{1}{p}) < s < 1 + \frac{N}{p}$, where $p'$ is the conjugate exponent of $p$. Let $v$ be a divergence-free vector field such that $\nabla v \in L^1(0,T; \dot B^{N/p}_{p,r} \cap L^\infty)$. Suppose that $g_0 \in \dot B^s_{p,r}$ and $F \in L^1(0,T; \dot B^s_{p,r})$, and let $g$ be a solution of* (21)*. Then:*

$$\|\|g\|\|\_{\tilde{L}^{\infty}\_{T}(\dot{B}^{s}\_{p,r})} \leq \exp\left(\mathsf{C} \|\nabla v\|\|\_{L^{1}\_{T}(\dot{B}^{N/p}\_{p,r} \cap L^{\infty})}\right) \left(\|g\_{0}\|\|\_{\dot{B}^{s}\_{p,r}} + \int\_{0}^{T} \|F(t)\|\|\_{\dot{B}^{s}\_{p,r}} dt\right). \tag{22}$$

**Proposition 4.** *Let $p \in (1,+\infty)$ and $-1 - N\inf(\frac{1}{p}, \frac{1}{p'}) < s < \frac{N}{p}$, where $p'$ is the conjugate exponent of $p$. Let $u_0 \in \dot B^s_{p,r}$, $F \in \widetilde L^1_T(\dot B^s_{p,r})$, and let $v$ be a divergence-free vector field such that $\nabla v \in L^1(0,T; \dot B^{N/p}_{p,r} \cap L^\infty)$. Let $u$ be a solution of the following system:*

$$\begin{cases} \partial\_t u + v \cdot \nabla u - \nu \Delta u + \nabla P = F, \\ \nabla \cdot u = 0, \\ u(0, x) = u\_0 \end{cases} \tag{23}$$

*where ν is a positive constant. Then there exists a constant C such that the following estimates hold:*

$$\|u\|_{\widetilde L^\infty_T(\dot B^s_{p,r})} + \nu\|u\|_{L^1_T(\dot B^{s+2}_{p,r})} + \|\nabla P\|_{L^1_T(\dot B^s_{p,r})} \le \exp\Big(C\|\nabla v\|_{\widetilde L^1_T(\dot B^{N/p}_{p,r} \cap L^\infty)}\Big)\Big(\|u_0\|_{\dot B^s_{p,r}} + C\|F\|_{L^1_T(\dot B^s_{p,r})}\Big). \tag{24}$$

#### **3. The Existence of the Solution**

In this section, we shall prove the existence of the solution of (5). We state the result as follows.

**Theorem 2.** *Let $1 < p < 2N$. Then, for $(\theta_0, u_0) \in \dot B^{N/p}_{p,1} \times (\dot B^{N/p-1}_{p,1})^N$ and $g \in L^1_T(\dot B^{N/p-1}_{p,1}) \cap L^2_T(\dot B^{N/p}_{p,1})$, there exists $T(\theta_0, u_0) > 0$ such that the problem* (5) *admits a solution $(\theta, u)$ with:*

$$\theta \in \mathbb{C}([0, T), \dot{\mathcal{B}}\_{p, 1}^{N/p}) \bigcap L\_T^{\infty}(\dot{\mathcal{B}}\_{p, 1}^{N/p}),$$

$$u \in \mathbb{C}([0, T), \dot{\mathcal{B}}\_{p, 1}^{N/p - 1}) \bigcap L\_T^{\infty}(\dot{\mathcal{B}}\_{p, 1}^{N/p - 1}) \bigcap L\_T^1(\dot{\mathcal{B}}\_{p, 1}^{N/p + 1}).$$

*Moreover, if there exists a small constant $\varepsilon$ such that:*

$$\|u_0\|_{L^\infty_T(\dot B^{N/p-1}_{p,1})} + \|g\|_{L^1_T(\dot B^{N/p-1}_{p,1})} \le \varepsilon \underline{\nu},$$

*then T* = +∞*.*

**Proof.** We shall prove this result by iteration. Denote:

$$\sum_{j \le n} \Delta_j \theta = \theta^n, \quad \sum_{j \le n} \Delta_j u = u^n, \quad \sum_{j \le n} \Delta_j p = p^n.$$

We shall build an approximate smooth solution (*θ n* , *u n* , *p n* ) of (5) satisfying,

$$\begin{cases} \partial_t \theta^{n+1} + u^n \cdot \nabla \theta^{n+1} = 0, \\ \partial_t u^{n+1} + u^n \cdot \nabla u^{n+1} - \underline{\nu} \Delta u^{n+1} + \nabla p^{n+1} = G^n, \\ \nabla \cdot u^{n+1} = 0, \\ (\theta^1, u^1) = S_2(\theta_0, u_0), \\ (\theta^{n+1}, u^{n+1})|_{t=0} = S_{n+2}(\theta_0, u_0), \end{cases} \tag{25}$$

where,

$$G^n = \nabla \cdot \big[(\widetilde\nu(\theta^n + 1) - \underline{\nu})\nabla u^n\big] - a\theta^n g. \tag{26}$$

Obviously, from Propositions 3 and 4, we know that there exists a $T$ such that (25) admits a unique smooth solution on $t \in [0,T]$. The proof of Theorem 2 is then divided into two steps:

(1) The uniform a priori estimates for (*θ n* , *u n* ).

(2) The proof of the convergence of the sequences.
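The scheme (25) replaces the quasilinear problem by a sequence of linear problems with the previous iterate frozen in the coefficients. As a toy numerical illustration of this freeze-and-iterate idea (an ODE analogue with our own discretization choices, not the PDE scheme itself), consider $u' = -u^2$, $u(0) = 1$, whose exact solution is $1/(1+t)$; the linear problems $w' = -u^n w$ converge to it:

```python
import numpy as np

# Toy analogue of the iterative scheme: solve the quasilinear ODE
# u' = -u^2, u(0) = 1 (exact solution 1/(1+t)) by repeatedly solving the
# LINEAR problem w' = -u_prev * w, with the previous iterate frozen in the
# coefficient, each time by implicit Euler.
T, n_steps = 0.5, 2000
t = np.linspace(0.0, T, n_steps + 1)
dt = T / n_steps
exact = 1.0 / (1.0 + t)

u = np.ones_like(t)                    # iterate 0: the constant initial state
for _ in range(30):                    # the fixed-point (Picard-type) loop
    w = np.empty_like(t)
    w[0] = 1.0
    for i in range(n_steps):           # implicit Euler for the frozen problem
        w[i + 1] = w[i] / (1.0 + dt * u[i + 1])
    u = w

# The iterates converge to the solution of the nonlinear problem.
assert np.max(np.abs(u - exact)) < 1e-2
```

On a short enough time interval the frozen-coefficient map is a contraction; this is the same mechanism that yields the local solution of (25) via Propositions 3 and 4.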

We begin by obtaining the uniform estimates for $(\theta^n, u^n)$. Denote:

$$I_0 \triangleq \|\theta_0\|_{\dot B^{N/p}_{p,1}} + \|u_0\|_{\dot B^{N/p-1}_{p,1}}, \tag{27}$$

and,

$$E^n_T = \|\theta^n\|_{\widetilde L^\infty_T(\dot B^{N/p}_{p,1})} + \|u^n\|_{\widetilde L^\infty_T(\dot B^{N/p-1}_{p,1})} + \underline{\nu}\|u^n\|_{\widetilde L^1_T(\dot B^{N/p+1}_{p,1})}.$$

Let:

$$\chi_T = \widetilde L^\infty_T(\dot B^{N/p}_{p,1}) \times \big(\widetilde L^\infty_T(\dot B^{N/p-1}_{p,1}) \cap \widetilde L^1_T(\dot B^{N/p+1}_{p,1})\big)^N.$$

Now, we shall prove that $\{(\theta^n, u^n)\}_{n \in \mathbb{N}}$ is uniformly bounded in $\chi_T$. Moreover, for all $n \in \mathbb{N}$, we have the following conclusion:

$$\textbf{claim:} \quad E^n_T \le 4I_0. \tag{28}$$

We shall prove this by induction. For $n = 0$, it is obviously valid. Assume that, for a fixed $n$, $(\theta^n, u^n) \in \chi_T$ and the claim holds; we shall show that $(\theta^{n+1}, u^{n+1}) \in \chi_T$ and that the claim is also valid at rank $n+1$.

From (25), by Propositions 3 and 4, we have:

$$\|\theta^{n+1}\|_{\widetilde L^\infty_T(\dot B^{N/p}_{p,1})} \le e^{\int_0^T C\|\nabla u^n\|_{\dot B^{N/p}_{p,1}}\,dt}\, \|\theta_0^{n+1}\|_{\dot B^{N/p}_{p,1}} \tag{29}$$

and:

$$\|u^{n+1}\|_{\widetilde L^\infty_T(\dot B^{N/p-1}_{p,1})} + \underline{\nu}\|u^{n+1}\|_{\widetilde L^1_T(\dot B^{N/p+1}_{p,1})} + \|\nabla p^{n+1}\|_{L^1_T(\dot B^{N/p-1}_{p,1})} \le e^{\int_0^T C\|\nabla u^n\|_{\dot B^{N/p}_{p,1}}\,dt}\left(\int_0^T \|G^n\|_{\dot B^{N/p-1}_{p,1}}\,dt + \|u_0\|_{\dot B^{N/p-1}_{p,1}}\right). \tag{30}$$

By the induction hypothesis, taking *T*<sup>1</sup> < *T* small enough, such that:

$$e^{C\|u^n\|_{\widetilde L^1_{T_1}(\dot B^{N/p+1}_{p,1})}} \le 2, \tag{31}$$

then, we obtain:

$$\|\theta^{n+1}\|_{\widetilde L^\infty_{T_1}(\dot B^{N/p}_{p,1})} \le 2\|\theta_0\|_{\dot B^{N/p}_{p,1}}. \tag{32}$$

From (30) and (31), we have:

$$\|u^{n+1}\|_{\widetilde L^\infty_{T_1}(\dot B^{N/p-1}_{p,1})} + \underline{\nu}\|u^{n+1}\|_{\widetilde L^1_{T_1}(\dot B^{N/p+1}_{p,1})} + \|\nabla p^{n+1}\|_{L^1_{T_1}(\dot B^{N/p-1}_{p,1})} \le 2\left(\int_0^{T_1} \|G^n\|_{\dot B^{N/p-1}_{p,1}}\,dt + \|u_0\|_{\dot B^{N/p-1}_{p,1}}\right). \tag{33}$$

We now want to deal with $\int_0^{T_1} \|G^n\|_{\dot B^{N/p-1}_{p,1}}\,dt$. Owing to Taylor's formula and Proposition 2, for $1 < p < 2N$, we obtain:

$$\big\|\nabla \cdot \big[(\widetilde\nu(\theta^n+1) - \underline{\nu})\nabla u^n\big]\big\|_{L^1_{T_1}(\dot B^{N/p-1}_{p,1})} \lesssim \big\|(\widetilde\nu(\theta^n+1) - \underline{\nu})\nabla u^n\big\|_{\widetilde L^1_{T_1}(\dot B^{N/p}_{p,1})} \lesssim \|\theta^n\|_{\widetilde L^\infty_{T_1}(\dot B^{N/p}_{p,1})} \|u^n\|_{\widetilde L^1_{T_1}(\dot B^{N/p+1}_{p,1})}. \tag{34}$$

Combining (26) and (34), and using (15), we can get:

$$\|G^n\|_{L^1_{T_1}(\dot B^{N/p-1}_{p,1})} \lesssim \|\theta^n\|_{\widetilde L^\infty_{T_1}(\dot B^{N/p}_{p,1})} \|u^n\|_{\widetilde L^1_{T_1}(\dot B^{N/p+1}_{p,1})} + a\|\theta^n\|_{\widetilde L^\infty_{T_1}(\dot B^{N/p}_{p,1})} \|g\|_{L^1_{T_1}(\dot B^{N/p-1}_{p,1})} \lesssim 4I_0 \|u^n\|_{\widetilde L^1_{T_1}(\dot B^{N/p+1}_{p,1})} + aI_0 \|g\|_{L^1_{T_1}(\dot B^{N/p-1}_{p,1})}. \tag{35}$$

Therefore, by the induction hypothesis, taking *T*<sup>1</sup> small enough, such that:

$$4I_0 \|u^n\|_{\widetilde L^1_{T_1}(\dot B^{N/p+1}_{p,1})} + aI_0 \|g\|_{L^1_{T_1}(\dot B^{N/p-1}_{p,1})} \le I_0. \tag{36}$$

Then, from (32), (33), (35), and (36), we conclude that:

$$(\theta^{n+1}, u^{n+1}) \in \chi_{T_1}. \tag{37}$$

Repeating the process above, we see that if there exists a constant $\varepsilon$ small enough such that:

$$\|u_0\|_{L^\infty_T(\dot B^{N/p-1}_{p,1})} + \|g\|_{\widetilde L^1_T(\dot B^{N/p-1}_{p,1})} \le \varepsilon \underline{\nu},$$

then the results presented above are valid globally, and (37) holds for all $T$. Thus, we have proved the claim (28).

We begin to get the convergence of the sequences.

To verify the convergence of the sequences $(\theta^n, u^n)$, we shall consider the time derivatives of the solutions. We first show the following lemma.

**Lemma 1.** *Let $0 < \eta < \inf(1, \frac{2N}{p})$ and $1 < p < 2N$ be such that $1 + \eta < \frac{2N}{p}$. Then $\nabla p^n$ is uniformly bounded in $L^{\frac{2}{2-\eta}}_T(\dot B^{\frac{N}{p}-1-\eta}_{p,1})$.*

**Proof.** Since $g \in L^1_T(\dot B^{N/p-1}_{p,1}) \cap L^2_T(\dot B^{N/p}_{p,1})$, we can easily get $g \in L^{\frac{2}{2-\eta}}_T(\dot B^{N/p-1-\eta}_{p,1})$ by interpolation.

Applying $\nabla\cdot$ to the second equation of (25), and noting that $\nabla \cdot u^n = 0$, we get:

$$\nabla \cdot (\nabla p^{n+1}) = \nabla \cdot \Big[\nabla \cdot \big[(\widetilde\nu(\theta^n+1) - \underline{\nu})\nabla u^n\big]\Big] + \nabla \cdot (\underline{\nu}\Delta u^{n+1}) - \nabla \cdot \big[u^n \cdot \nabla u^{n+1}\big] - a\nabla \cdot \big[\theta^n g\big]. \tag{38}$$

By the first step we have proved that:

$$u^n \in \big(\widetilde L^\infty_T(\dot B^{N/p-1}_{p,1}) \cap \widetilde L^1_T(\dot B^{N/p+1}_{p,1})\big)^N.$$

From (19) and (20), we get $u^n \in L^2_T(\dot B^{N/p}_{p,1})$ by interpolation. Similarly, we also have $u^n \in L^{\frac{2}{2-\eta}}_T(\dot B^{\frac{N}{p}+1-\eta}_{p,1})$ and $u^n \in L^{\frac{2}{1-\eta}}_T(\dot B^{\frac{N}{p}-\eta}_{p,1})$, with $0 < \eta < \inf(1, \frac{2N}{p})$. By Taylor's formula, we obtain:

$$\begin{aligned} \big\|\nabla \cdot \nabla \cdot \big[(\widetilde\nu(\theta^n+1) - \underline{\nu})\nabla u^n\big]\big\|_{L^{\frac{2}{2-\eta}}_T(\dot B^{\frac{N}{p}-2-\eta}_{p,1})} &\le \big\|\nabla \cdot \big[(\widetilde\nu(\theta^n+1) - \underline{\nu})\nabla u^n\big]\big\|_{L^{\frac{2}{2-\eta}}_T(\dot B^{\frac{N}{p}-1-\eta}_{p,1})} \\ &\lesssim \big\|(\widetilde\nu(\theta^n+1) - \underline{\nu})\nabla u^n\big\|_{L^{\frac{2}{2-\eta}}_T(\dot B^{\frac{N}{p}-\eta}_{p,1})} \\ &\lesssim \|\theta^n\|_{\widetilde L^\infty_T(\dot B^{\frac{N}{p}}_{p,1})}\, \|u^n\|_{L^{\frac{2}{2-\eta}}_T(\dot B^{\frac{N}{p}+1-\eta}_{p,1})}; \end{aligned} \tag{39}$$

in deriving (39), we have used (14) and (15).

Since *u <sup>n</sup>*+<sup>1</sup> <sup>∈</sup> *<sup>L</sup>* 2 2−*η T* (*B*˙ *N <sup>p</sup>* +1−*η <sup>p</sup>*,1 ), we can easily obtain:

> ∆*u <sup>n</sup>*+<sup>1</sup> <sup>∈</sup> *<sup>L</sup>* 2 2−*η T* (*B*˙ *N <sup>p</sup>* −1−*η <sup>p</sup>*,1 ). (40)

Using (14) and (15) we have:

$$\|u^n \cdot \nabla u^{n+1}\|_{L^{\frac{2}{2-\eta}}_T(\dot B^{\frac{N}{p}-1-\eta}_{p,1})} \lesssim \|u^n\|_{L^{\frac{2}{1-\eta}}_T(\dot B^{\frac{N}{p}-\eta}_{p,1})} \|\nabla u^{n+1}\|_{L^2_T(\dot B^{\frac{N}{p}-1}_{p,1})} \lesssim \|u^n\|_{L^{\frac{2}{1-\eta}}_T(\dot B^{\frac{N}{p}-\eta}_{p,1})} \|u^{n+1}\|_{L^2_T(\dot B^{\frac{N}{p}}_{p,1})}. \tag{41}$$

We now begin to bound the last term $\|\theta^n g\|_{L^{\frac{2}{2-\eta}}_T(\dot B^{\frac{N}{p}-1-\eta}_{p,1})}$. Using (15), we have:

$$\|\theta^n \mathbf{g}\|\|\_{L\_T^{\frac{2}{2-\eta}}(\mathcal{B}\_{p,1}^{\frac{N}{p}-1-\eta})} \lesssim \|\theta^n\|\|\_{L\_T^{\infty}(\mathcal{B}\_{p,1}^{\frac{N}{p}})} \|\mathbf{g}\|\|\_{L\_T^{\frac{2}{2-\eta}}(\mathcal{B}\_{p,1}^{\frac{N}{p}-1-\eta})}.\tag{42}$$

Combining (38)–(42), we obtain the desired result. $\Box$

In order to use the Ascoli theorem, it suffices to estimate the time derivatives of $\theta^n$ and $u^n$.

**Proposition 5.** *(i) The sequence $(\partial_t \theta^n)_{n \in \mathbb{N}}$ is uniformly bounded in $L^2_T(\dot B^{\frac{N}{p}-1}_{p,1})$.*

*(ii) The sequence $(\partial_t u^n)_{n \in \mathbb{N}}$ is uniformly bounded in $L^{\frac{2}{2-\eta}}_T(\dot B^{\frac{N}{p}-1-\eta}_{p,1})$, for:*

$$0 < \eta < \inf\Big(1, \frac{2N}{p} - 1\Big) \quad \text{and} \quad 1 < p < 2N.$$

**Proof.** From (25), we have:

$$
\partial\_l \theta^{n+1} = -\boldsymbol{u}^n \cdot \nabla \theta^{n+1}.\tag{43}
$$

Recall that (*θ n*+1 , *u n* ) ∈ *L* ∞ *T* (*B*˙ *N*/*p <sup>p</sup>*,1 ) × (*L* 2 *T* (*B*˙ *N*/*p <sup>p</sup>*,1 ))*N*, from (43), we have:

$$
\partial\_t \theta^{n+1} \in L\_T^2(\dot{\mathcal{B}}\_{p,1}^{N/p-1}).\tag{44}
$$

Similarly, we get:

$$
\partial\_t u^{n+1} = -u^n \cdot \nabla u^{n+1} + \underline{\nu} \Delta u^{n+1} - \nabla p^{n+1} + \mathcal{G}^n. \tag{45}
$$

Then we get the desired result (ii) from Lemma 1.

Now we turn to the proof of the existence of the solution. According to Proposition 5, the Cauchy-Schwarz inequality and Hölder's inequality, we deduce the following corollary.

**Corollary 1.** *For the sequence* (*θ n* , *u n* )*:*

*(i) The sequence $(\theta^n)_{n \in \mathbb{N}}$ is uniformly bounded in $C^{\frac12}([0,T], \dot B^{\frac{N}{p}-1}_{p,1})$;*

*(ii) The sequence $(u^n)_{n \in \mathbb{N}}$ is uniformly bounded in $C^{\frac{\eta}{2}}([0,T], \dot B^{\frac{N}{p}-1-\eta}_{p,1})^N$, for:*

$$0 < \eta < \inf(1, \frac{2N}{p} - 1).$$

According to Corollary 1, the sequence $(\theta^n, u^n)_{n \in \mathbb{N}}$ is uniformly bounded in $C^{\frac12}([0,T], \dot B^{\frac{N}{p}-1}_{p,1}) \times C^{\frac{\eta}{2}}([0,T], \dot B^{\frac{N}{p}-1-\eta}_{p,1})^N$, and thus is uniformly bounded in $C([0,T], \dot B^{\frac{N}{p}-1}_{p,1}) \times C([0,T], \dot B^{\frac{N}{p}-1-\eta}_{p,1})^N$. We recall that the injection of $\dot B^{s+\varepsilon}_{p,q,\mathrm{loc}}$ into $\dot B^s_{p,q,\mathrm{loc}}$ is compact for all $\varepsilon > 0$ (see the proof in [34]). Using the uniform estimates and applying Ascoli's theorem, there exists a subsequence $(\theta^{n'}, u^{n'})$ which converges to $(\theta, u)$. We gather that $(\theta, u)$ is a solution of (5) belonging to:

$$\big(C([0,T], \dot B^{N/p}_{p,1}) \cap L^\infty_T(\dot B^{N/p}_{p,1})\big) \times \big(C([0,T], \dot B^{N/p-1}_{p,1}) \cap L^\infty_T(\dot B^{N/p-1}_{p,1}) \cap L^1_T(\dot B^{N/p+1}_{p,1})\big). \qquad \Box$$

#### **4. The Uniqueness of the Solution**

In this section, we shall prove the uniqueness of the solution of (5). We shall only establish the uniqueness when $p = N$; the case $1 < p < N$ follows by embedding.

We state the result as follows.

**Theorem 3.** *Let $(\theta^i, u^i, p^i)$, $i = 1, 2$, be two solutions of* (5) *with the same initial data $(\theta_0, u_0)$. Assume that:*

$$g \in L^1([0, T], \dot{\mathcal{B}}\_{N, 1}^0),$$

$$\theta^i \in \mathcal{C}([0, T], \dot{\mathcal{B}}\_{N, 1}^1) \bigcap L^\infty([0, T], \dot{\mathcal{B}}\_{N, 1}^1),\tag{46}$$

$$\mu^i \in \mathcal{C}([0, T], \dot{\mathcal{B}}\_{\text{N}, 1}^0) \bigcap L^\infty([0, T], \dot{\mathcal{B}}\_{\text{N}, 1}^0) \bigcap L^1([0, T], \dot{\mathcal{B}}\_{\text{N}, 1}^2), \tag{47}$$

$$\nabla p^i \in L^1([0, T], \dot{\mathcal{B}}^0_{N, 1}). \tag{48}$$

*There exists a constant $\varepsilon$ small enough such that, if:*

$$\|\theta^1\|_{L^\infty_T(\dot{\mathcal{B}}^1_{N,1})} \le \varepsilon, \tag{49}$$

*then* (*θ* 1 , *u* 1 , ∇*p* 1 ) = (*θ* 2 , *u* 2 , ∇*p* 2 )*.*

**Proof.** Let $(\theta^i, u^i, \nabla p^i)$, $i = 1, 2$, be two solutions of the system (5). We denote:

$$(\delta\theta, \delta u, \delta \nabla p) = (\theta^1 - \theta^2, u^1 - u^2, \nabla p^1 - \nabla p^2),\tag{50}$$

then we have:

$$\begin{cases} \partial_t \delta\theta + u^2 \cdot \nabla \delta\theta + \delta u \cdot \nabla \theta^1 = 0, \\ \partial_t \delta u + u^1 \cdot \nabla \delta u - \underline{\nu} \Delta \delta u + \nabla \delta p = \mathcal{K}, \\ \nabla \cdot (\delta u) = 0, \\ (\delta\theta, \delta u)|_{t=0} = (0, 0), \end{cases} \tag{51}$$

where,

$$\mathcal{K} = \nabla \cdot \big[ (\nu(\theta^1) - \underline{\nu}) \nabla u^1 \big] - \nabla \cdot \big[ (\nu(\theta^2) - \underline{\nu}) \nabla u^2 \big] - \delta u \cdot \nabla u^2 - \alpha\, \delta\theta\, g. \tag{52}$$

We shall prove the uniqueness in the space $\mathfrak{D}_T$, with:

$$\begin{split} \mathfrak{D}_T = \Big(\mathcal{C}([0,T], \dot{\mathcal{B}}^0_{N,\infty}) \bigcap L^\infty([0,T], \dot{\mathcal{B}}^0_{N,\infty})\Big) \times \Big(\mathcal{C}([0,T], \dot{\mathcal{B}}^{-1}_{N,\infty}) \\ \bigcap L^\infty([0,T], \dot{\mathcal{B}}^{-1}_{N,\infty}) \bigcap L^1([0,T], \dot{\mathcal{B}}^1_{N,\infty})\Big) \times L^1([0,T], \dot{\mathcal{B}}^{-1}_{N,\infty}). \end{split} \tag{53}$$

First, we show that $(\delta\theta, \delta u, \delta\nabla p) \in \mathfrak{D}_T$.

According to our assumptions on $(\theta^i, u^i)$, the paraproduct estimates yield $\partial_t \theta^i \in L^2_T(\dot{\mathcal{B}}^0_{N,1})$. Therefore $\bar{\theta}^i = \theta^i - \theta_0$ belongs to $C^{\frac{1}{2}}([0,T], \dot{\mathcal{B}}^0_{N,1})$, which clearly entails by embedding:

$$
\delta\theta \in \mathcal{C}([0, T], \dot{B}^0\_{N, 1}).
$$

We now define:

$$
u^i = u_L + \bar{u}^i, \qquad \nabla p^i = \nabla p_L + \nabla \bar{p}^i.
$$

The quantities *u<sup>L</sup>* and ∇*p<sup>L</sup>* are defined by the system below:

$$\begin{cases} \partial_t u_L - \underline{\nu} \Delta u_L + \nabla p_L = 0, \\ \nabla \cdot u_L = 0, \\ u_L|_{t=0} = u_0. \end{cases}$$

Thanks to Proposition 2.6 above and Proposition 2.1 in [32], we have:

$$u_L \in \mathcal{C}([0, T], \dot{\mathcal{B}}^0_{N, 1}) \bigcap L^1([0, T], \dot{\mathcal{B}}^2_{N, 1}),$$

and,

$$
\nabla p\_L \in L^1([0, T], \dot{B}^0\_{N, 1}).
$$

We obviously have $\bar{u}^i|_{t=0} = 0$, and $(\bar{u}^i, \nabla\bar{p}^i)$ satisfies:

$$\begin{cases} \partial_t \bar{u}^i - \underline{\nu} \Delta \bar{u}^i + \nabla \bar{p}^i = K(\theta^i, u^i), \\ \nabla \cdot \bar{u}^i = 0, \\ \bar{u}^i|_{t=0} = 0, \end{cases}$$

where $K(\theta^i, u^i) = -u^i \cdot \nabla u^i + \nabla \cdot \big[ (\tilde{\nu}(\theta^i) - \underline{\nu}) \nabla u^i \big] - \alpha \theta^i g$.

The product and composition laws in Besov spaces ensure that $K(\theta^i, u^i)$ belongs to $L^1_T(\dot{\mathcal{B}}^{-1}_{N,1})$, and hence also to $L^1_T(\dot{\mathcal{B}}^{-1}_{N,\infty})$.

Proposition 2.6 above and Proposition 2.1 in [32] yield:

$$\bar{u}^i \in \mathcal{C}([0, T], \dot{\mathcal{B}}^{-1}_{N, \infty}) \bigcap L^{\infty}([0, T], \dot{\mathcal{B}}^{-1}_{N, \infty}) \bigcap L^1([0, T], \dot{\mathcal{B}}^1_{N, \infty}),$$

and,

$$
\nabla \bar{p}^i \in L^1([0,T], \dot{\mathcal{B}}^{-1}_{N,\infty}).
$$

Since
$$
\delta\theta = \bar{\theta}^1 - \bar{\theta}^2, \qquad \delta u = \bar{u}^1 - \bar{u}^2 \quad \text{and} \quad \nabla \delta p = \nabla \bar{p}^1 - \nabla \bar{p}^2,
$$

combining the above discussions, we conclude:

$$(\delta\theta, \delta u, \delta \nabla p) \in \mathfrak{D}\_T.$$

To estimate $\|(\delta\theta, \delta u, \delta\nabla p)\|_{\mathfrak{D}_T}$, by Proposition 4 we have:

$$\|\delta u\|_{L^{\infty}_{T}(\dot{\mathcal{B}}^{-1}_{N,\infty})} + \underline{\nu} \|\delta u\|_{L^{1}_{T}(\dot{\mathcal{B}}^{1}_{N,\infty})} + \|\delta \nabla p\|_{L^{1}_{T}(\dot{\mathcal{B}}^{-1}_{N,\infty})} \lesssim \exp\left(\int_{0}^{T} \|\nabla u^{1}\|_{\dot{\mathcal{B}}^{1}_{N,\infty} \cap L^{\infty}}\, dt\right) \int_{0}^{T} \|\mathcal{K}\|_{\dot{\mathcal{B}}^{-1}_{N,\infty}}\, dt. \tag{54}$$

Noting (52) and using Proposition 2, we have:

$$\begin{split} & \|\nabla \cdot \big[ (\tilde{\nu}(\theta^{1}) - \underline{\nu}) \nabla u^{1} \big] - \nabla \cdot \big[ (\tilde{\nu}(\theta^{2}) - \underline{\nu}) \nabla u^{2} \big] \|_{L^{1}_{T}(\dot{\mathcal{B}}^{-1}_{N,\infty})} \\ & \quad \lesssim \|\nabla \cdot \big[ (\tilde{\nu}(\theta^{1}) - \underline{\nu}) \nabla u^{1} \big] - \nabla \cdot \big[ (\tilde{\nu}(\theta^{1}) - \underline{\nu}) \nabla u^{2} \big] \|_{L^{1}_{T}(\dot{\mathcal{B}}^{-1}_{N,\infty})} \\ & \qquad + \|\nabla \cdot \big[ (\tilde{\nu}(\theta^{1}) - \underline{\nu}) \nabla u^{2} \big] - \nabla \cdot \big[ (\tilde{\nu}(\theta^{2}) - \underline{\nu}) \nabla u^{2} \big] \|_{L^{1}_{T}(\dot{\mathcal{B}}^{-1}_{N,\infty})} \\ & \quad \lesssim \|\theta^{1}\|_{L^{\infty}_{T}(\dot{\mathcal{B}}^{1}_{N,1})} \|\delta u\|_{L^{1}_{T}(\dot{\mathcal{B}}^{1}_{N,\infty})} + \|\delta\theta\|_{L^{\infty}_{T}(\dot{\mathcal{B}}^{0}_{N,\infty})} \|u^{2}\|_{L^{1}_{T}(\dot{\mathcal{B}}^{2}_{N,1})}, \end{split} \tag{55}$$

and,

$$\|\delta u \cdot \nabla u^{2}\|_{L^{1}_{T}(\dot{\mathcal{B}}^{-1}_{N,\infty})} \lesssim \|\delta u\|_{L^{\infty}_{T}(\dot{\mathcal{B}}^{-1}_{N,\infty})} \|u^{2}\|_{L^{1}_{T}(\dot{\mathcal{B}}^{2}_{N,1})}. \tag{56}$$

Now, we shall estimate the term *δθ*. By Proposition 3,

$$\|\delta\theta\|_{\tilde{L}^{\infty}_{T}(\dot{\mathcal{B}}^{0}_{N,\infty})} \le \exp\left(\int_{0}^{T} \|\nabla u^{2}\|_{\dot{\mathcal{B}}^{1}_{N,\infty} \cap L^{\infty}}\, dt\right) \int_{0}^{T} \|\delta u \cdot \nabla \theta^{1}\|_{\dot{\mathcal{B}}^{0}_{N,\infty}}\, dt. \tag{57}$$

Using $(51)_3$ and (13), we get:

$$\int_{0}^{T} \|\delta u \cdot \nabla \theta^{1}\|_{\dot{\mathcal{B}}^{0}_{N,\infty}}\, dt = \int_{0}^{T} \|\nabla \cdot (\delta u\, \theta^{1})\|_{\dot{\mathcal{B}}^{0}_{N,\infty}}\, dt \lesssim \int_{0}^{T} \|\delta u\, \theta^{1}\|_{\dot{\mathcal{B}}^{1}_{N,\infty}}\, dt \lesssim \int_{0}^{T} \|\delta u\, \theta^{1}\|_{\dot{\mathcal{B}}^{1}_{N,1}}\, dt. \tag{58}$$
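The first equality in (58) uses only the Leibniz rule together with the divergence-free condition $(51)_3$:

$$\nabla \cdot (\delta u\, \theta^1) = (\nabla \cdot \delta u)\, \theta^1 + \delta u \cdot \nabla \theta^1 = \delta u \cdot \nabla \theta^1.$$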

If we choose $s_1 = 1$, $s_2 = 1$, $p_1 = p_2 = p = N$ in (15), we obtain:

$$\int_0^T \|\delta u\, \theta^1\|_{\dot{\mathcal{B}}^1_{N,1}}\, dt \lesssim \int_0^T \|\delta u\|_{\dot{\mathcal{B}}^1_{N,\infty}} \|\theta^1\|_{\dot{\mathcal{B}}^1_{N,1}}\, dt \lesssim \|\theta^1\|_{L^\infty_T(\dot{\mathcal{B}}^1_{N,1})} \|\delta u\|_{L^1_T(\dot{\mathcal{B}}^1_{N,\infty})}. \tag{59}$$

We now bound $\alpha \|\delta\theta\, g\|_{\tilde{L}^1_T(\dot{\mathcal{B}}^{-1}_{N,\infty})}$:

$$\alpha \|\delta\theta\, g\|_{\tilde{L}^1_T(\dot{\mathcal{B}}^{-1}_{N,\infty})} \lesssim \alpha \|\delta\theta\|_{L^\infty_T(\dot{\mathcal{B}}^0_{N,\infty})} \|g\|_{L^1_T(\dot{\mathcal{B}}^0_{N,1})}. \tag{60}$$

In deriving (60), we have used (17).

Write:

$$\gamma(T) \triangleq \|\delta\theta\|_{\tilde{L}^{\infty}_{T}(\dot{\mathcal{B}}^{0}_{N,\infty})} + \|\delta u\|_{\tilde{L}^{\infty}_{T}(\dot{\mathcal{B}}^{-1}_{N,\infty})} + \underline{\nu} \|\delta u\|_{\tilde{L}^{1}_{T}(\dot{\mathcal{B}}^{1}_{N,\infty})} + \|\delta \nabla p\|_{\tilde{L}^{1}_{T}(\dot{\mathcal{B}}^{-1}_{N,\infty})}. \tag{61}$$

Then from (54)–(60), we get:

$$\begin{split} \gamma(T) & \lesssim \|\theta^1\|_{L^\infty_T(\dot{\mathcal{B}}^1_{N,1})} \|\delta u\|_{L^1_T(\dot{\mathcal{B}}^1_{N,\infty})} + \|\delta \theta\|_{L^\infty_T(\dot{\mathcal{B}}^0_{N,\infty})} \|u^2\|_{L^1_T(\dot{\mathcal{B}}^2_{N,1})} \\ & \quad + \|\delta u\|_{L^\infty_T(\dot{\mathcal{B}}^{-1}_{N,\infty})} \|u^2\|_{L^1_T(\dot{\mathcal{B}}^2_{N,1})} + \alpha \|\delta \theta\|_{L^\infty_T(\dot{\mathcal{B}}^0_{N,\infty})} \|g\|_{L^1_T(\dot{\mathcal{B}}^0_{N,1})}. \end{split} \tag{62}$$

Take $T$ small enough that, for a small positive constant $\varepsilon_0$, we have:

$$\|\nabla \boldsymbol{u}^{2}\|\_{\tilde{L}^{1}\_{\mathcal{T}}(\dot{B}^{1}\_{N,1})} + \|\boldsymbol{g}\|\_{L^{1}\_{\mathcal{T}}(\dot{B}^{0}\_{N,1})} \leq \frac{1}{2} \varepsilon\_{0}.\tag{63}$$

Thus, by (49), (62) and (63), we have:

$$
\gamma(T) \le \frac{1}{2}\gamma(T). \tag{64}
$$
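To make the absorption step explicit: writing $C$ for the implicit constant in (62), each term on the right-hand side of (62) is bounded by a small multiple of a component of $\gamma(T)$, thanks to (49) and (63) (with the viscosity factor $\underline{\nu}$ absorbed into $C$). Hence

$$\gamma(T) \le C(\varepsilon + \varepsilon_0)\, \gamma(T) \le \frac{1}{2}\, \gamma(T) \qquad \text{whenever } C(\varepsilon + \varepsilon_0) \le \frac{1}{2},$$

and since $\gamma(T)$ is finite (because $(\delta\theta, \delta u, \delta\nabla p) \in \mathfrak{D}_T$), this forces $\gamma(T) = 0$.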

Hence,

$$
\gamma(T) \equiv 0,
$$

which yields the uniqueness of the solution. $\Box$

#### **5. Conclusions**

In this paper, we studied the Cauchy problem for the non-homogeneous Boussinesq equations. We proved the global existence of a solution when the initial velocity is small with respect to the viscosity and the initial temperature approaches a positive constant, in the critical Besov spaces $(\theta, u) \in L^\infty_T(\dot{\mathcal{B}}^{N/p}_{p,1}) \times \big( L^\infty_T(\dot{\mathcal{B}}^{N/p-1}_{p,1}) \bigcap L^1_T(\dot{\mathcal{B}}^{N/p+1}_{p,1}) \big)$ with $1 < p < 2N$. Furthermore, we proved uniqueness for $1 < p \le N$. For $N < p < 2N$, uniqueness is more difficult, and the method proposed in this paper does not yield a result; we will consider this range in future work. Similar results can also be obtained for other fluid equations.

**Author Contributions:** Writing—original draft, Y.L. and B.O. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research was funded by the Key Projects of Universities in Guangdong Province (Natural Science) (2019KZDXM042) and the Research Foundations of Guangzhou Huashang College (2021HSKT01, 2020HSDS01).

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Not applicable.

**Acknowledgments:** The authors would like to deeply thank all the reviewers for their insightful and constructive comments.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**

