**Variational Problems and Applications**

Editor **Savin Treanță**

MDPI Basel Beijing Wuhan Barcelona Belgrade Manchester Tokyo Cluj Tianjin

*Editor* Savin Treanță Applied Mathematics University Politehnica of Bucharest Bucharest Romania

*Editorial Office* MDPI St. Alban-Anlage 66 4052 Basel, Switzerland

This is a reprint of articles from the Special Issue published online in the open access journal *Mathematics* (ISSN 2227-7390) (available at: www.mdpi.com/journal/mathematics/special_issues/Variational_Problems_Applications).

For citation purposes, cite each article independently as indicated on the article page online and as indicated below:

LastName, A.A.; LastName, B.B.; LastName, C.C. Article Title. *Journal Name* **Year**, *Volume Number*, Page Range.

**ISBN 978-3-0365-6589-7 (Hbk) ISBN 978-3-0365-6588-0 (PDF)**

© 2023 by the authors. Articles in this book are Open Access and distributed under the Creative Commons Attribution (CC BY) license, which allows users to download, copy and build upon published articles, as long as the author and publisher are properly credited, which ensures maximum dissemination and a wider impact of our publications.

The book as a whole is distributed by MDPI under the terms and conditions of the Creative Commons license CC BY-NC-ND.

## **Contents**


#### **Kin Keung Lai, Jaya Bisht, Nidhi Sharma and Shashi Kant Mishra**

Hermite–Hadamard-Type Fractional Inclusions for Interval-Valued Preinvex Functions. Reprinted from: *Mathematics* **2022**, *10*, 264, doi:10.3390/math10020264 **137**


## *Editorial* **Variational Problems and Applications**

**Savin Treanță 1,2,3**


#### **1. Introduction**

Over the years, many researchers have been interested in obtaining solution procedures in variational (interval/fuzzy) analysis and robust control. In order to formulate necessary and sufficient optimality/efficiency conditions and duality theorems for different classes of robust and interval-valued/fuzzy variational problems, various approaches have been proposed. In this regard, we proposed the Special Issue "Variational Problems and Applications" to cover new advances in these mathematical topics. In this Special Issue, we focused on formulating and demonstrating some characterization results of well-posedness and robust efficient solutions in new classes of (multiobjective) variational (control) problems governed by multiple and/or path-independent curvilinear integral cost functionals and robust mixed and/or isoperimetric constraints involving first- and second-order partial differential equations. In response to our invitation, we received 30 papers from many countries (Romania, China, India, Saudi Arabia, Australia, Egypt, Yemen, Germany, Pakistan, Thailand, Russia), of which 14 were published.

#### **2. Brief Overview of the Contributions**

In a review conducted by Treanță [1], nonlinear dynamics generated by some classes of constrained control problems that involve second-order partial derivatives were comprehensively reviewed. Specifically, necessary optimality conditions were formulated and proved for the considered variational control problems governed by integral functionals. In addition, well-posedness and the associated variational inequalities were considered in this review paper.

Olteanu [2] first reviews a method of approximating any nonnegative real-valued continuous compactly supported function, defined on a closed unbounded subset, by dominating special polynomials that are sums of squares. This method also works in the multidimensional case. To achieve this, a Hahn–Banach-type theorem (the Kantorovich theorem on the extension of positive linear operators), a Haviland theorem, and the notion of a moment-determinate measure were applied. Second, completions and other results on solving full Markov moment problems in terms of quadratic forms are proposed based on polynomial approximation. The existence and uniqueness of the solution are discussed.

Treanță and Das [3] introduced a new class of multi-dimensional robust optimization problems (named (*P*)) with mixed constraints involving second-order partial differential equations (PDEs) and inequations (PDIs). Moreover, they defined an auxiliary (modified) class of robust control problems (named (*P*)(*b*,*c*)), which is much easier to study, and provided some characterization results of (*P*) and (*P*)(*b*,*c*) using the notions of a normal weak robust optimal solution and a robust saddle point associated with a Lagrange functional corresponding to (*P*)(*b*,*c*). To this aim, they considered path-independent curvilinear integral cost functionals and the notion of convexity associated with a curvilinear integral functional generated by a controlled closed (completely integrable) Lagrange 1-form.

**Citation:** Treanță, S. Variational Problems and Applications. *Mathematics* **2023**, *11*, 205. https://doi.org/10.3390/math11010205

Received: 19 December 2022 Accepted: 23 December 2022 Published: 30 December 2022

**Copyright:** © 2022 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

In 1961, Kestelman first proved the change of variable theorem for the Riemann integral in its modern form. In 1970, Preiss and Uher supplemented this result with the inverse statement. Later, in a number of papers (Sarkhel, Výborný, Puoso, Tandra, and Torchinsky), alternative proofs of these theorems were provided within the same formulations. In [4], Kuleshov showed that one of the restrictions (namely, the boundedness of the function f on its entire domain) can be omitted while the change of variable formula still holds.

By considering the new forms of the notions of lower semicontinuity, pseudomonotonicity, hemicontinuity and monotonicity of the considered scalar multiple integral functional, Treanță [5] studied the well-posedness of a new class of variational problems with variational inequality constraints. More specifically, by defining the set of approximating solutions for the class of variational problems under study, he established several results on well-posedness.

Guo et al. [6] studied the derivation of optimality conditions and duality theorems for interval-valued optimization problems based on gH-symmetrical derivatives. Further, the concepts of symmetric pseudo-convexity and symmetric quasi-convexity for interval-valued functions are proposed to extend the above optimization conditions. Examples are also presented to illustrate the corresponding results.

The concepts of convex and non-convex functions play a key role in the study of optimization, and with the help of these ideas some inequalities can also be established. Moreover, the principles of convexity and symmetry are inextricably linked. In the last two years, their considerable association has made convexity and symmetry emerge as a joint field of study. In the work of Khan et al. [7], the authors studied a new version of interval-valued functions (I-V·Fs), known as left and right *χ*-pre-invex interval-valued functions (LR-*χ*-pre-invex I-V·Fs). For this class of non-convex I-V·Fs, they derived numerous new dynamic inequalities via interval Riemann–Liouville fractional integral operators. The applications of these results are taken into account in a unique way.

Lai et al. [8] introduced a new class of interval-valued preinvex functions termed harmonically h-preinvex interval-valued functions. They established new Hermite–Hadamard-type inclusions for harmonically h-preinvex interval-valued functions via interval-valued Riemann–Liouville fractional integrals. Further, they proved fractional Hermite–Hadamard-type inclusions for the product of two harmonically h-preinvex interval-valued functions. In this way, these findings include several well-known and newly obtained results of the existing literature as special cases. Moreover, applications of the main results are demonstrated with some examples.

The principles of convexity and symmetry are inextricably linked. Because of the considerable association that has emerged between the two in recent years, we may apply what we learn from one to the other. The main aim of the study of Khan et al. [9] is to establish the relationship between integral inequalities and interval-valued functions (IV-Fs) based upon the pseudo-order relation. Firstly, the authors discussed the properties of left and right preinvex interval-valued functions (left and right preinvex IV-Fs). Then, they obtained Hermite–Hadamard (H–H) and Hermite–Hadamard–Fejér (H–H–Fejér) type inequalities and some related integral inequalities with the support of left and right preinvex IV-Fs via a pseudo-order relation and the interval Riemann integral. Moreover, some special cases are discussed.

In Alnowibet et al. [10], a hybrid gradient simulated annealing algorithm is proposed to solve constrained optimization problems. When trying to solve constrained optimization problems using deterministic or stochastic optimization methods, or a hybridization between them, penalty function methods are the most popular approach due to their simplicity and ease of implementation. There are many approaches to handling the constraints in a constrained problem. The simulated annealing algorithm (SA) is one of the most successful meta-heuristic strategies. On the other hand, the gradient method is the least expensive of the deterministic methods. In the previous literature, the hybrid gradient simulated annealing algorithm (GLMSA) demonstrated efficiency and effectiveness in solving unconstrained optimization problems. In Alnowibet et al. [10], the GLMSA algorithm is generalized to solve constrained optimization problems. Hence, a new penalty function approach is proposed to handle the constraints. The proposed penalty function is used to guide the hybrid gradient simulated annealing algorithm (GLMSA), yielding a new algorithm (GHMSA) that solves the constrained optimization problem. The performance of the proposed algorithm is tested on several benchmark optimization test problems and some well-known engineering design problems with varying dimensions. Comprehensive comparisons against other methods in the literature are also presented. The results indicate that the proposed method is promising and competitive. The comparison between the GHMSA and four other state-of-the-art meta-heuristic algorithms indicates that the proposed GHMSA algorithm is competitive with, and in some cases superior to, other existing algorithms in terms of the quality, efficiency, convergence rate, and robustness of the final result.
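The GHMSA algorithm itself is not reproduced in this reprint; purely as an illustration of the penalty-function idea that guides it, a minimal simulated annealing sketch is given below. All function names, parameter values and the toy test problem are our own, not the authors' design.

```python
import math
import random

def penalized(f, constraints, x, mu):
    # Quadratic penalty for violated inequality constraints g_i(x) <= 0.
    violation = sum(max(0.0, g(x)) ** 2 for g in constraints)
    return f(x) + mu * violation

def anneal(f, constraints, x0, mu=1e3, t0=1.0, cooling=0.95,
           iters=2000, step=0.1, seed=0):
    # Plain simulated annealing on the penalized objective.
    rng = random.Random(seed)
    x, t = list(x0), t0
    best = list(x)
    for _ in range(iters):
        cand = [xi + rng.uniform(-step, step) for xi in x]
        delta = penalized(f, constraints, cand, mu) - penalized(f, constraints, x, mu)
        # Accept improvements always; accept worse moves with Boltzmann probability.
        if delta < 0 or rng.random() < math.exp(-delta / max(t, 1e-12)):
            x = cand
        if penalized(f, constraints, x, mu) < penalized(f, constraints, best, mu):
            best = list(x)
        t *= cooling
    return best

# Toy problem: minimize (x - 2)^2 subject to x <= 1; the constrained optimum is x = 1.
obj = lambda x: (x[0] - 2.0) ** 2
con = lambda x: x[0] - 1.0          # encodes x - 1 <= 0
sol = anneal(obj, [con], [0.0])
```

The large penalty weight pushes the unconstrained minimizer of the penalized objective close to the constrained optimum; the hybrid GLMSA/GHMSA machinery of [10] replaces the blind random proposals with gradient-informed moves.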

Data-mining applications are growing with the availability of large data, and handling such data is often a demanding task. Segregation of the data for the extraction of useful information is inevitable for designing modern technologies. Considering this fact, the work of Alrasheedi et al. [11] proposes a chaos-embedded marine predator algorithm (CMPA) for feature selection. The optimization routine is designed with the aim of maximizing the classification accuracy with the optimal number of features selected. Well-known benchmark datasets have been chosen for validating the performance of the proposed algorithm. A comparative analysis of the performance with some well-known algorithms proves the applicability of the proposed algorithm. Further, the analysis was extended to some well-known chaotic algorithms; first, the binary versions of these algorithms are developed, and then a comparative analysis of the performance is conducted on the basis of the mean features selected, the classification accuracy obtained and the fitness function values. Statistical significance tests have also been conducted to establish the significance of the proposed algorithm.
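As a hedged illustration of how a chaotic map can drive binary feature selection (the CMPA of [11] is far more elaborate; the map, transfer rule and fitness weights below are generic textbook choices, not the authors' exact design):

```python
def logistic(x):
    # Logistic map x -> 4x(1 - x): a standard chaotic sequence generator on (0, 1).
    return 4.0 * x * (1.0 - x)

def binarize(value, threshold=0.5):
    # Transfer step: continuous chaotic value -> include (1) / exclude (0) a feature.
    return 1 if value > threshold else 0

def fitness(mask, error_rate, alpha=0.99):
    # Weighted objective: classification error dominates; feature count breaks ties.
    if sum(mask) == 0:
        return float("inf")      # selecting no features is invalid
    return alpha * error_rate + (1 - alpha) * sum(mask) / len(mask)

# Drive a binary mask for 6 features from a chaotic orbit started at 0.7.
x, mask = 0.7, []
for _ in range(6):
    x = logistic(x)
    mask.append(binarize(x))
```

With equal classification error, the fitness rewards masks that select fewer features, which is exactly the trade-off the paper's mean-features/accuracy comparison measures.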

In the work of Peng et al. [12], the reverse space-time nonlocal complex modified Korteweg–de Vries (mKdV) equation is investigated by using the consistent tanh expansion (CTE) method. According to the CTE method, a nonauto-Bäcklund transformation theorem for the nonlocal complex mKdV equation is obtained. The interactions between one kink soliton and other different nonlinear excitations are constructed via the nonauto-Bäcklund transformation theorem. By selecting cnoidal periodic waves, the interaction between one kink soliton and the cnoidal periodic waves is derived. A specific Jacobi-function-type solution and the graphs used in its analysis are provided in the paper.

Lai et al. [13] obtained characterizations of the solution sets of interval-valued mathematical programming problems with switching constraints. Stationarity conditions, which are weaker than the standard Karush–Kuhn–Tucker conditions, need to be discussed in order to find the necessary optimality conditions. The authors introduced the corresponding weak, Mordukhovich, and strong stationarity conditions for interval-valued mathematical programming problems with switching constraints (IVPSC) and interval-valued tightened nonlinear problems (IVTNP), because the W-stationarity condition of IVPSC is equivalent to the Karush–Kuhn–Tucker conditions of the IVTNP. Furthermore, they used strong stationarity conditions to characterize the solution sets for IVTNP, the latter being particular solution sets for IVPSC, because the feasible set of the tightened nonlinear problems (IVTNP) is a subset of the feasible set of the mathematical programs with switching constraints (IVPSC).

In the work of Cipu and Barbu [14], the authors are concerned with solutions of Sturm–Liouville problems (SLPs) using a variational problem (VP) formulation of the regular SLP. The minimization problem (MP) is also established, and the connection between the solutions of the two formulations is then proved. Variational estimations (the variational equation associated with the Euler–Lagrange variational principle and Nehari's method, the shooting method and the bisection method) and iterative variational methods (He's method and the homotopy perturbation method (HPM)) for the regular SLP are presented in the final part of the paper, which ends with applications.

**Acknowledgments:** I am thankful to the editors and reviewers of the *Mathematics* journal for their help and support.

**Conflicts of Interest:** The author declares no conflict of interest.

#### **References**


**Disclaimer/Publisher's Note:** The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

**Elena Corina Cipu 1,2,\* and Cosmin Dănuț Barbu <sup>1</sup>**


**Abstract:** In this paper, we are concerned with approximate solutions of Sturm–Liouville problems (SLPs) using a variational problem (VP) formulation of the regular SLP. The minimization problem (MP) is also set forth, and the connection between the solutions of the two formulations is then proved. Variational estimations (the variational equation associated with the Euler–Lagrange variational principle and Nehari's method, the shooting method and the bisection method) and iterative variational methods (He's method and the homotopy perturbation method (HPM)) for the regular SLP are presented in a unified manner in the final part of the paper, which ends with applications.

**Keywords:** BVP nonlinear problems; variational methods; estimating nonlinearities; Green function

**MSC:** 34A12; 34A45

#### **1. Introduction**

Nonlinear problems differ from linear ones in that the governing function, operator or system is nonlinear, or only some of its characteristics are known. The general framework of the Sturm–Liouville problem, with parametric conditions at the boundary, is specified in the first part of the paper. The existence of the solution and its dependence on the data are specified through the connection between the differential operator and Green's function. Based on the properties of Green's function, the operator used to analyze the behavior of the solution with respect to the parameters given by the boundary conditions is specified. Variational problems derived from the initial RSLP are outlined with different types of conditions in order to estimate the solution.

Let $L = -\frac{d}{dx}\left(p(x)\frac{d}{dx}\right) + \rho(x)$ be the operator of the regular Sturm–Liouville problem (RSL). The Sturm–Liouville (SL) problem, expressed by the differential equation and the boundary conditions

$$a(\mathbf{x})\frac{d^2u}{d\mathbf{x}^2} + b(\mathbf{x})\frac{du}{d\mathbf{x}} + c(\mathbf{x})u - \lambda d(\mathbf{x})u = 0,\tag{1}$$

$$\begin{aligned} B\_1: a\_1 u(a) + a\_2 u'(a) &= 0, \ |a\_1| + |a\_2| \neq 0, a\_1, a\_2 \in \mathbb{R}, \\ B\_2: b\_1 u(b) + b\_2 u'(b) &= 0, \ |b\_1| + |b\_2| \neq 0, b\_1, b\_2 \in \mathbb{R} \end{aligned} \tag{2}$$

could be written as

$$Lu + \lambda s(x)u = 0, \ x \in (a, b) = I, \ \lambda \in \mathbb{R} \tag{3}$$

**Citation:** Cipu, E.C.; Barbu, C.D. Variational Estimation Methods for Sturm–Liouville Problems. *Mathematics* **2022**, *10*, 3728. https:// doi.org/10.3390/math10203728

Academic Editors: Simeon Reich and Alessio Pomponio

Received: 31 July 2022 Accepted: 7 October 2022 Published: 11 October 2022

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

with $p(x) = a(x)$, $\rho(x) = -c(x)$, $s(x) = d(x)$ in the case $b(x) = a'(x)$, and with the integrating factor $\mu = k\,e^{\int_a^x \frac{b(t)}{a(t)}\,dt}$, $p(x) = \mu a(x)$, $\rho(x) = -\mu c(x)$, $s(x) = \mu d(x)$ in the case $b(x) \neq a'(x)$ (see [1,2]).

The Sturm–Liouville equation is regular on the interval $[a, b]$ if the functions verify the conditions $p(x) > 0$ and $s(x) > 0$, $\forall x \in I$, or $s \equiv 0$, and the operator $L : \mathcal{H} = \mathcal{L}^2(I) \cap C^2(I) \to \mathcal{L}^2(I)$ is self-adjoint, with real eigenvalues and eigenfunctions orthogonal in the space $\mathcal{L}^2_s(I)$ according to the inner products

$$
\langle f, g \rangle = \int_a^b f g \, \mathrm{d}x, \; f, g \in \mathcal{L}^2(I); \quad
\langle f, g \rangle_s = \int_a^b s f g \, \mathrm{d}x \text{ in } \mathcal{L}^2_s(I), \tag{4}
$$

and, for a given $\lambda$, there exist two linearly independent solutions of an RSL equation on the interval $I$, where $\mathcal{L}^2(I) = \left\{ f : I \to \mathbb{R}, \; \int_a^b |f(x)|^2 \, \mathrm{d}x < \infty \right\}$.

We denote *D*(*L*) as the domain of *L* that is defined by

$$\begin{array}{l} D(L) = \left\{ y \in C([a, b]), \; y'' \in \mathcal{L}^2(I), \; y \text{ satisfies } B_1, B_2 \right\} \text{ in the general case,} \\ D(L) = \left\{ y \in C([a, b]), \; (py')' \in C([a, b]), \; y \text{ satisfies } B_1, B_2 \right\} \text{ in the regular case.} \end{array} \tag{5}$$

The adjoint operator $L^*$ associated with the operator $L$ verifies $\langle Lf, g \rangle = \langle f, L^*g \rangle$, $\forall f, g \in \mathcal{H}$, and $L$ is self-adjoint if $L = L^*$. Additionally, the operator $L$ is symmetric if $\langle Lf, g \rangle = \langle f, Lg \rangle$, $\forall f, g \in D(L)$. For the operator defined for SL problems, one obtains

$$
\langle g, Lf \rangle - \langle f, Lg \rangle = \left[ p \left( fg' - f'g \right) \right] \Big|_a^b, \; \forall f, g \in D(L),
$$

and the condition $\langle Lf, g \rangle = \langle f, Lg \rangle$ holds if $p\left( fg' - f'g \right)\big|_a^b = 0$ is verified in $D(L)$; the Lagrange identity is expressed by

$$g\,Lf - f\,Lg = \left[ p \left( fg' - f'g \right) \right]', \; \forall f, g \in D(L).$$

**Remark 1.** *L for SL problems is the self-adjoint operator if*

$$p\left(u'v - uv'\right)\Big|_a^b = p(b)\left(u'(b)v(b) - u(b)v'(b)\right) - p(a)\left(u'(a)v(a) - u(a)v'(a)\right) = 0. \tag{6}$$

*For example, for p*(*b*) = *p*(*a*) *and periodic conditions*

$$u(a) = u(b) = A, \quad u'(a) = u'(b) = B$$

*or antiperiodic conditions*

$$
u(a) = -u(b) = A, \quad u'(a) = -u'(b) = B
$$

*the operator L is self-adjoint.*

The RSL eigenvalue problem is to find $v \in D(L)$ such that $Lv + \lambda v = 0$, where $\lambda$ is the eigenvalue and $v$ the associated eigenfunction. For RSL problems, all the eigenvalues are real and positive [3–5], and there exists an infinite number of eigenvalues. The sequence of eigenvalues $(\lambda_n)_n$ is ordered such that $\lambda_0 < \lambda_1 < \ldots < \lambda_n < \ldots$ with $\lim_{n \to \infty} \lambda_n = \infty$. For each eigenvalue $\lambda_n$, the corresponding eigenfunction $v_n$ is unique up to a constant factor and has exactly $n - 1$ zeros in the interval $(a, b)$. The set $V = \{(v_n)_n, \; v_n \in D(L)\}$ is complete in the space $D(L)$, and the solution of the RSL problem is represented by a generalized Fourier series of eigenfunctions

$$u = \sum\_{n=1}^{\infty} c\_n v\_n(x). \tag{7}$$
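As a concrete check of the eigenfunction expansion (7), consider the classical Dirichlet problem $-u'' = \lambda u$ on $(0, \pi)$, whose eigenvalues are $\lambda_n = n^2$ with eigenfunctions $\sin(nx)$, orthogonal in $\mathcal{L}^2(0, \pi)$. A short numerical sketch (helper names are ours):

```python
import math

def inner(f, g, a=0.0, b=math.pi, n=2000):
    # Composite trapezoidal approximation of <f, g> = int_a^b f g dx.
    h = (b - a) / n
    s = 0.5 * (f(a) * g(a) + f(b) * g(b))
    for k in range(1, n):
        x = a + k * h
        s += f(x) * g(x)
    return s * h

# Dirichlet eigenfunctions of -u'' = lambda u on (0, pi): v_n(x) = sin(n x).
def v(n):
    return lambda x, n=n: math.sin(n * x)

orth = inner(v(1), v(2))   # distinct eigenfunctions: integral ~ 0
norm = inner(v(3), v(3))   # ||v_3||^2 = pi / 2
```

Orthogonality is what makes the generalized Fourier coefficients $c_n$ in (7) computable one at a time.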

#### **2. General Framework of SLPs**

In this section, we will mention certain conditions that the functions defining the operator *L* fulfill for different SL or RSL problems.

A second method to find the solution of a RSL problem, different from the generalized Fourier series development, is described using the Green function and two linear independent solutions. The section ends with the analysis of the Fourier equation with different types of boundary conditions.

**Remark 2.** *For Sturm–Liouville problems, we consider two types of assumptions that are usually used (see [4])*

*General assumptions:*


*RSL assumptions*


Known equations, such as the Fourier, Graetz–Nusselt, Collatz and Airy equations, for which the RSL assumptions are verified, are given in Table 1, and the first eigenvalues and eigenfunctions are depicted in Figures 1 and 2.

**Table 1.** Examples of regular Sturm–Liouville problems.


**Figure 1.** (**a**) Fourier equation; (**b**) Graetz–Nusselt equation.

**Figure 2.** (**a**) Collatz problem; (**b**) Airy problem.

Some other known equations, such as the Legendre differential equation, Chebyshev's differential equation or the Bessel equation, must be transformed into the Sturm–Liouville form that we considered in (3). These forms are specified in Table 2. Other SL equations for which the general assumptions are fulfilled are exemplified in Table 3.


**Table 2.** Examples of differential equations and their SL form.

**Table 3.** Examples of Sturm–Liouville problems.


For an example of a singular SL problem, a discontinuity at the middle of the interval $[a, b]$ is considered, $x_0 = (b - a)/2$, with $\rho(x) \equiv 0$ and

$$p(x) = \begin{cases} 1, & x \in [0, x_0) \\ c^2, & x \in [x_0, 1] \end{cases}, \quad c \neq 0, \; c \neq 1.$$

The problem to solve is $Lu + \lambda u = 0$, $u(0) = u(1) = 0$, with the transmission conditions $u(x_0^-) - u(x_0^+) = 0$, $u'(x_0^-) - c^2 u'(x_0^+) = 0$.

The *asymptotic behavior* of the eigenvalues leads to $\dfrac{\lambda_n}{n^2} \to \left(\dfrac{2\pi c}{1 + c}\right)^2$ as $n \to \infty$.

#### *2.1. Resolvent Operator and Green Function*

This RSL problem is solved using a Green function for the resolvent operator $R(\lambda) = (L + \lambda I)^{-1}$ of the form

$$R(\lambda)f = \frac{\varphi_\lambda(x)}{\omega(\lambda)} \int_a^x f(t)\psi_\lambda(t)\,dt + \frac{\psi_\lambda(x)}{\omega(\lambda)} \int_x^b f(t)\varphi_\lambda(t)\,dt \tag{8}$$

where *ψλ*, *ϕ<sup>λ</sup>* are non-trivial classical solutions of (*L* + *λI*)*f* = 0 which satisfy

$$\begin{aligned} a\_1 \varphi\_\lambda(a) + a\_2 \varphi'\_\lambda(a) &= 0 \\ b\_1 \psi\_\lambda(b) + b\_2 \psi'\_\lambda(b) &= 0 \end{aligned} \tag{9}$$

A simple normalization that eliminates some complexity can be specified by requiring

$$\begin{aligned} \varphi\_{\lambda}(a) &= a\_2, \varphi'\_{\lambda}(a) = -a\_1 \\ \psi\_{\lambda}(b) &= b\_2, \psi'\_{\lambda}(b) = -b\_1 \end{aligned} \tag{10}$$

Then, the Wronskian $\omega(\lambda) = v_1(x)v_2'(x) - v_1'(x)v_2(x)$ of these solutions (with $v_1 = \varphi_\lambda$, $v_2 = \psi_\lambda$) is a function that depends only on $\lambda$:

$$
\omega(\lambda) = (p\varphi\_{\lambda}^{\prime})\psi\_{\lambda} - \varphi\_{\lambda}(p\psi\_{\lambda}^{\prime}) \tag{11}
$$

Therefore, $\omega(\lambda) \neq 0$, $\forall x \in [a, b]$, or $\omega(\lambda) \equiv 0$. The Wronskian vanishes if $\{\varphi_\lambda, \psi_\lambda\}$ is a dependent set of functions of $x$, which is precisely when both functions satisfy $(L + \lambda I)h = 0$ as well as the specified conditions at $x = a$ and $x = b$, meaning $\lambda$ is an eigenvalue of $L$ (see [6]).

For the RSL case, the equation $Lu = 0$ has two linearly independent solutions, $v_1$ and $v_2$, such that $a_1 v_1(a) + a_2 v_1'(a) = 0$, $b_1 v_2(b) + b_2 v_2'(b) = 0$, and the Green function $G : [a, b] \times [a, b] \to \mathbb{R}$,

$$G(\mathbf{x}, y) = \begin{cases} v\_1(y)v\_2(\mathbf{x})/m, & a \le y \le \mathbf{x} \le b \\ -v\_1(\mathbf{x})v\_2(y)/m, & a \le \mathbf{x} \le y \le b \end{cases} \tag{12}$$

*m* = *p*(*x*)*ω*(*λ*) has the properties


$$\text{(iii)}\ G_x(y^+, y) - G_x(y^-, y) = \lim_{\substack{\varepsilon \to 0 \\ \varepsilon > 0}} \left[ G_x(y + \varepsilon, y) - G_x(y - \varepsilon, y) \right] = \frac{1}{p(x)}, \text{ i.e., } G_x \text{ is discontinuous on } M.$$

Now let *<sup>T</sup>* be the operator *Tu*(*x*) = <sup>Z</sup> *<sup>b</sup> a G*(*x*, *y*)*u*(*y*)*dy* defined on *C*[*a*, *b*]. Using the 2

properties of the Green function *G* and the continuity of *u*, one obtains that *Tu* ∈ *C* [*a*, *b*] and is the solution of the equation *Lu* = *f* . The function *Tu* satisfies the same boundary conditions as *u* ∈ *C* 2 [*a*, *b*], then *T*(*Lu*)(*x*) = *u*(*x*), and *T* is the inverse operator of *L*. The problem of eigenvalues and eigenfunctions *Lu* + *λu* = 0, *B*1*u*(*a*) = 0; *B*2*u*(*b*) = 0 becomes *Tu* = *µu*, with *µ* = −1/*λ*. For results for the construction of the operator T for fractional SLPs, see [7,8].
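The inversion property can be checked numerically in the simplest case $L = -d^2/dx^2$ with Dirichlet conditions $u(0) = u(b) = 0$, whose classical Green function is built, as above, from $v_1(y) = y$ and $v_2(x) = b - x$. A sketch (assumptions: $p \equiv 1$, $[a, b] = [0, 1]$; names are ours):

```python
def green(x, y, b=1.0):
    # Green's function of -u'' = f with u(0) = u(b) = 0.
    return x * (b - y) / b if x <= y else y * (b - x) / b

def T(f, x, b=1.0, n=4000):
    # (T f)(x) = int_0^b G(x, y) f(y) dy, composite trapezoidal rule.
    h = b / n
    s = 0.5 * (green(x, 0.0) * f(0.0) + green(x, b) * f(b))
    for k in range(1, n):
        y = k * h
        s += green(x, y) * f(y)
    return s * h

# For f = 1, the exact solution of -u'' = 1, u(0) = u(1) = 0 is u(x) = x(1 - x)/2.
u_mid = T(lambda y: 1.0, 0.5)   # exact value 1/8
```

Since the integrand is piecewise linear with its kink at a quadrature node, the trapezoidal rule recovers the exact value here.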

**Remark 3** (Rayleigh quotient)**.** *The eigenvalues of the operator L are lower bounded by a real constant. The smallest eigenvalue of the SL eigenvalue problem satisfies*

$$\lambda_0 = \min_{\substack{u \neq 0 \\ u \in D(L)}} \frac{\langle Lu, u \rangle}{\langle u, u \rangle_s} = \min_{\substack{u \neq 0 \\ u \in D(L)}} \frac{-p\,u u'\big|_a^b + \int_a^b \left[ p (u')^2 + \rho u^2 \right] dx}{\int_a^b u^2 s \, dx} \tag{13}$$

*and the minimum is achieved when* $u = u_0$*, the eigenfunction corresponding to* $\lambda_0$*.*
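Remark 3 can be illustrated numerically: for $-u'' = \lambda u$ on $(0, \pi)$ with $u(0) = u(\pi) = 0$ (so $p = s = 1$, $\rho = 0$ and $\lambda_0 = 1$), the Rayleigh quotient of any admissible trial function bounds $\lambda_0$ from above. A sketch (the trial function and all names are our own choices):

```python
import math

def rayleigh(u, du, a=0.0, b=math.pi, n=2000):
    # Rayleigh quotient of (13) with p = s = 1, rho = 0, for a trial function
    # vanishing at both endpoints (so the boundary term -p u u'| drops out).
    h = (b - a) / n
    num = den = 0.0
    for k in range(n):
        x = a + (k + 0.5) * h    # midpoint rule
        num += du(x) ** 2 * h
        den += u(x) ** 2 * h
    return num / den

# Trial function x(pi - x) for -u'' = lambda u, u(0) = u(pi) = 0; lambda_0 = 1.
est = rayleigh(lambda x: x * (math.pi - x), lambda x: math.pi - 2.0 * x)
# Exact quotient for this trial function is 10 / pi^2 ~ 1.0132 >= lambda_0.
```

The estimate sits about 1.3% above the true $\lambda_0$, which is the typical quality of a one-parameter polynomial trial function.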

#### *2.2. Sturm–Liouville Fourier Problems*

Consider the operator $L = -\frac{d}{dx}\left[ p(x) \frac{d}{dx} \right] + \rho(x)$ and the nonhomogeneous equation $Lu + \lambda s(x)u = f(x)$ with $p(x)$ smooth and $\rho(x)$ positive, where also (i) $p(a) = 0$ or $p(b) = 0$ or both, or (ii) the interval $I$ is infinite. In this section, we study the Fourier problem

$$-u'' + \alpha u = f(x), \; x \in (a, b), \text{ with BVP conditions (2) and (3),}$$

with different types of boundary conditions. For example, in the case of the Dirichlet conditions $u(a) = u(b) = 0$, the operator $L = -\frac{d^2}{dx^2} + \alpha$ in the space $C_0^\infty(I)$ is self-adjoint.

In *Example 1*, we study the case $\alpha = 0$. Additionally, for the case $\alpha = n^2 > 0$, the general solutions of the homogeneous equation are $v_n(x) = A \exp(nx) + B \exp(-nx)$, and for the boundary conditions $u(a) = u(b) = 0$, the only RSL solution is $v = 0$.

In *Example 2*, for *α* < 0, we consider different cases, shown in Tables 4 and 5.

**Table 4.** Examples of Sturm–Liouville problems with *a* = 0, *b* > 0, *x* ∈ (0, *b*), case 1: *a*<sup>1</sup> · *a*<sup>2</sup> = 0.


**Table 5.** Examples of Sturm–Liouville problems with *a* = 0, *b* > 0, *x* ∈ (0, *b*), case 2: $a_1 \cdot a_2 \neq 0$.


**Example 1.** *Let us consider the RSL equation* $-u''(x) = f(x)$*, with general solution* $v(x) = mx + n$ *for the homogeneous equation and associated Green's function*

$$G(x, y) = \begin{cases} x\,(b - y)/b, & 0 \le x \le y \le b \\ y\,(b - x)/b, & 0 \le y \le x \le b. \end{cases}$$

*Using the superposition principle, the solution of the problem defined for* $u(0) = A$, $u(b) = B$ *is* $u(x) = v_1(x) + v_2(x)$*, with* $v_1(x) = \frac{(b - x)A + xB}{b}$ *and*

$$v_2(x) = \int_0^b G(x, y) f(y)\,dy = \frac{1}{b}\left[ (b - x)\int_0^x y f(y)\,dy + x\int_x^b (b - y) f(y)\,dy \right].$$

*Changing the boundary conditions in the previous problem, we now consider*

$$-u''(x) = f(x), \; B_1 : u(0) - u'(0) = 0, \; B_2 : u(b) + u'(b) = 0.$$

*Solving the initial value problem* $-u''(x) = f(x)$, $u(0) = A$, $u'(0) = A$*, one finds the solution* $u(x) = A(1 + x) - \int_0^x (x - y) f(y)\,dy$*, and the boundary condition* $B_2$ *leads to* $u(x) = \int_0^b G(x, y) f(y)\,dy$ *with*

$$G(x, y) = \begin{cases} (1 + x)(b + 1 - y)/(b + 2), & x < y \\ (1 + y)(b + 1 - x)/(b + 2), & y < x. \end{cases}$$

**Example 2.** *For* $\alpha = -n^2$ *and* $a = 0$, $b = \pi$*, the general solutions of the equation are* $v_n(x) = A\cos(nx) + B\sin(nx)$*, with* $\lambda_n = n^2$ *the eigenvalues, and for* $u(0) = u(\pi) = 0$ *the eigenfunctions are* $v_n = \sin(nx)$*. The general solution is a Fourier series:*

$$u(\mathbf{x}) = \sum\_{n=1}^{\infty} B\_n \sin(n\mathbf{x}),\\B\_n = \frac{\langle f(\mathbf{x}), \sin(n\mathbf{x}) \rangle}{\langle \sin(n\mathbf{x}), \sin(n\mathbf{x}) \rangle} = \frac{2}{\pi} \int\_0^{\pi} f(\mathbf{x}) \sin(n\mathbf{x}) d\mathbf{x} \tag{14}$$
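The coefficients in (14) are plain quadratures; a sketch computing them for the test function $f(x) = x$, whose classical sine series has $B_n = 2(-1)^{n+1}/n$ (function names are ours):

```python
import math

def sine_coeff(f, n, m=4000):
    # B_n = (2/pi) int_0^pi f(x) sin(n x) dx, midpoint rule, as in (14).
    h = math.pi / m
    s = 0.0
    for k in range(m):
        x = (k + 0.5) * h
        s += f(x) * math.sin(n * x) * h
    return 2.0 / math.pi * s

# For f(x) = x: B_1 = 2, B_2 = -1 from the closed-form coefficients.
b1 = sine_coeff(lambda x: x, 1)
b2 = sine_coeff(lambda x: x, 2)
```

The same routine, applied to any forcing term $f$, produces the expansion used throughout the examples below.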

#### *Case 1.1*

*According to Table 4, for case 1.1, we consider* $a_2 = 0$*, so* $B_1$ *is* $u(0) = 0$ *and the eigenfunctions are* $v_n = \sin(\sqrt{\lambda_n}\,x)$*. The eigenvalues corresponding to cases 1.1a and 1.1b are* $\lambda_n = \left(\frac{n\pi}{b}\right)^2$ *and* $\lambda_n = \left(\frac{(2n + 1)\pi}{2b}\right)^2$*, respectively. For problem* $P_{1c}$*,*

$$P_{1c} : -u''(x) = \lambda u + f(x) \ \text{in } (0, b); \quad u(0) = 0, \; b_1 u(b) + b_2 u'(b) = 0, \tag{15}$$

*the general solution is* $u(x) = \sum_{n=1}^{\infty} c_n v_n(x)$*, with the eigenvalues determined by the equation* $\tan(\sqrt{\lambda_n}\,b) = -\frac{b_2}{b_1}\sqrt{\lambda_n}$*. The determination of the first eigenvalues is graphically presented in Figure 3.*
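This transcendental equation has no closed-form roots; they are typically located by bracketing and bisection, as in the graphical determination of Figure 3a. A sketch for the illustrative choice $b = b_1 = b_2 = 1$ (ours, not from the paper), where the equation becomes $\tan s + s = 0$ for $s = \sqrt{\lambda_n}$:

```python
import math

def bisect(g, lo, hi, tol=1e-12, itmax=200):
    # Standard bisection for a sign change of g on [lo, hi].
    glo = g(lo)
    for _ in range(itmax):
        mid = 0.5 * (lo + hi)
        gm = g(mid)
        if glo * gm <= 0:
            hi = mid
        else:
            lo, glo = mid, gm
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# tan(s) + s changes sign on (pi/2, pi); the first root gives the first
# nonzero eigenvalue lambda_1 = s1^2 ~ 4.116.
s1 = bisect(lambda s: math.tan(s) + s, math.pi / 2 + 0.01, math.pi - 0.01)
lam1 = s1 ** 2
```

Each further eigenvalue is bracketed the same way on the next branch of the tangent, mirroring the intersections plotted in Figure 3a.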

**Figure 3.** (**a**) Eigenvalue determination; (**b**) the first corresponding eigenfunctions for problem (15).

*Case 1.2*

*The eigenfunctions corresponding to case 1.2 are* $v_n = \cos(\sqrt{\lambda_n}\,x)$*, with the eigenvalues determined by the equation* $\tan(\sqrt{\lambda_n}\,b) = \frac{b_1}{b_2\sqrt{\lambda_n}}$*.*

*Case 2*

*The eigenfunctions corresponding to case 2.1 are* $v_n = \sqrt{\lambda_n}\cos(\sqrt{\lambda_n}\,x) + \sin(\sqrt{\lambda_n}\,x)$, *with the eigenvalues determined by the equation* $\tan(\sqrt{\lambda_n}\,b) = -\frac{(b_1+b_2)\sqrt{\lambda_n}}{b_1 - b_2\sqrt{\lambda_n}}$. *If* $b$ *has the form* $\frac{(2n+1)\pi}{2}\cdot\frac{b_2}{b_1}$, *then* $\lambda_n = \left(\frac{b_1}{b_2}\right)^2$ *is an eigenvalue of the problem. In Figure 4a, the determination of the first eigenvalues is graphically presented as the roots of the function* $\tan(x) + \frac{(b_1+b_2)x}{b_1 b - b_2 x}$, *with the notation* $x = \sqrt{\lambda_n}\,b$, *and in Figure 4b, the corresponding eigenfunctions are plotted.*

*For case 2.2, the eigenfunctions are* $v_n = \frac{\sqrt{\lambda_n}}{a_1}\cos(\sqrt{\lambda_n}\,x) + \sin(\sqrt{\lambda_n}\,x)$, *and the eigenvalues are the solutions of the nonlinear equation* $\tan(\sqrt{\lambda_n}\,b) = \frac{(a_1 - b_1)\sqrt{\lambda_n}}{a_1 + \sqrt{\lambda_n}}$.

**Figure 4.** The eigenvalue determination and the corresponding eigenfunctions for the example of case 2.1.

**Example 3.** *The conditions can be considerably weakened with respect to continuity and differentiability. In some cases, changes of the dependent and independent variables may transform a problem from singular to regular; see [4].*

*For the construction of the solutions, the Dirac function is used. Green's function verifies*

$$-\frac{d}{dx}\left[p(x)\frac{dG(x,y)}{dx}\right] + \rho(x)G(x,y) = \delta(x-y),$$

*and expresses the response, under homogeneous boundary conditions, to a forcing function consisting of a concentrated unit of inhomogeneity at* $x = y$.

*For the problem* $-u''(x) = \lambda u(x) + f(x)$ *in* $(0,b)$; $u(0) = u(b) = 0$, $\lambda = n^2$ *(with* $\sin(nb) \neq 0$*), the solution is* $u(x) = \int_0^b G(x,y)f(y)\,dy$, *using the Green function*

$$G(x,y) = \begin{cases} \dfrac{\sin(nx)\sin(n(b-y))}{n\sin(nb)}, & 0 \le x \le y\\[4pt] \dfrac{\sin(ny)\sin(n(b-x))}{n\sin(nb)}, & y \le x \le b. \end{cases}$$

*That leads to the representation*

$$u(x) = \frac{\sin(n(b-x))}{n\sin(nb)}\int_0^x \sin(ny)f(y)\,dy + \frac{\sin(nx)}{n\sin(nb)}\int_x^b \sin(n(b-y))f(y)\,dy.$$
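A numerical sanity check of the Green-function representation, using the symmetric form of the kernel $G(x,y) = \sin(n\,\min(x,y))\sin(n(b - \max(x,y)))/(n\sin(nb))$; the values $n = b = 1$ and $f \equiv 1$ are illustrative choices, compared against the closed-form solution of $-u'' = u + 1$, $u(0) = u(1) = 0$:

```python
import math

n, b = 1.0, 1.0  # lambda = n^2 = 1, not an eigenvalue since sin(n*b) != 0

def G(x, y):
    # symmetric kernel: sin(n*min) * sin(n*(b - max)) / (n * sin(n*b))
    lo, hi = min(x, y), max(x, y)
    return math.sin(n * lo) * math.sin(n * (b - hi)) / (n * math.sin(n * b))

def u_green(x, f, m=4000):
    # u(x) = integral_0^b G(x,y) f(y) dy, split at the kink y = x (trapezoid rule)
    total = 0.0
    for (p, q) in ((0.0, x), (x, b)):
        h = (q - p) / m
        s = 0.5 * (G(x, p) * f(p) + G(x, q) * f(q))
        for i in range(1, m):
            y = p + i * h
            s += G(x, y) * f(y)
        total += s * h
    return total

# for f = 1 the BVP -u'' = u + 1, u(0) = u(1) = 0 has the closed form below
u_exact = lambda x: math.cos(x) + (1 - math.cos(1.0)) / math.sin(1.0) * math.sin(x) - 1.0
```

Splitting the quadrature at $y = x$ matters: the kernel is continuous there but its derivative jumps, so a quadrature rule applied across the kink would lose accuracy.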

**Remark 4.** *Using the definition of norm convergence, namely: "A sequence* $(\varphi_n)_n$ *in* $\mathcal{L}^2_s(I)$ *converges to* $\varphi \in \mathcal{L}^2_s(I)$ *if* $\lim_{n\to\infty}\|\varphi_n - \varphi\|_s = 0$, *i.e.,* $\varphi_n \to \varphi$ *in the* $\mathcal{L}^2_s$ *norm", some sequences* $\delta_n(x)$ *can be used instead of* $\delta(x)$ *in order to obtain the Green function.*

*Starting from the definition* $\delta(x-y) = \begin{cases} 0, & x \neq y\\ \infty, & x = y\end{cases}$ *we use some properties (see [6,9]):* $\delta$ *is symmetric, with* $\delta(ax) = \frac{1}{|a|}\delta(x)$, *and* $\delta(x) = \lim_{n\to\infty}\delta_n(x)$ *for* $\delta_n(x) = \frac{n}{\sqrt{\pi}}e^{-n^2 x^2}$, $\delta_n(x) = \frac{\sin^2(nx)}{n\pi x^2}$, *or* $\delta_n(x) = \frac{n}{\pi(1+n^2 x^2)}$; *also*

$$\delta\left(x^2 - a^2\right) = \frac{1}{|2a|}\left[\delta(x+a) + \delta(x-a)\right] \text{ and } \int_{-\infty}^{+\infty} f(y)\delta(x-y)\,dy = f(x).$$
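The sifting property $\int f(y)\delta_n(x-y)\,dy \approx f(x)$ can be illustrated with the Gaussian sequence $\delta_n$; the test point, test function, and truncation window below are our choices:

```python
import math

def delta_gauss(n, x):
    # nascent delta: delta_n(x) = (n/sqrt(pi)) * exp(-n^2 x^2)
    return n / math.sqrt(math.pi) * math.exp(-(n * x) ** 2)

def sift(f, x, n=200, half_width=0.1, m=20001):
    # integral of f(y) * delta_n(x - y) over [x - w, x + w]; outside this
    # window delta_n is negligible for n = 200 (width ~ 1/n)
    h = 2 * half_width / (m - 1)
    s = 0.0
    for i in range(m):
        y = x - half_width + i * h
        w = 0.5 if i in (0, m - 1) else 1.0   # trapezoid end-point weights
        s += w * f(y) * delta_gauss(n, x - y)
    return s * h

approx = sift(math.cos, 0.3)   # should be close to cos(0.3)
```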

#### **3. Variational RSL Problems**

We define problem *P*<sup>1</sup> as follows:

$$-\left(pu'\right)' + \rho u + \lambda su = f \text{ on } (a,b) \tag{16}$$

$$\begin{aligned} \mathcal{B}\_1 u(a) : a\_1 u(a) + a\_2 u'(a) &= 0 \\ \mathcal{B}\_2 u(b) : b\_1 u(b) + b\_2 u'(b) &= 0 \end{aligned} \tag{17}$$

and the set $V = \left\{v \in C^1([a,b]),\ v' \text{ piecewise continuous on } [a,b],\ B_1 v(a),\ B_2 v(b) \text{ verified}\right\}$. For $v \in D(L)$, we have

$$\int_a^b p u' v'\,dx + \int_a^b (\rho + \lambda s)uv\,dx = \int_a^b f v\,dx,\ \forall v \in V$$

(see [10,11]).

The variational problem $(VP_1)$ associated with the problem $P_1$ is as follows: find $u \in V$ such that $a(u,v) = lv$, $\forall v \in V$, with

$$a(u,v) = \int\_{a}^{b} pu'v' + (\rho + \lambda s)uv \, d\mathbf{x}, \forall u, v \in V; lv = \int\_{a}^{b} fv \, d\mathbf{x}, \forall v \in V. \tag{18}$$

The functional $F : V \to \mathbb{R}$, $Fv = \frac{1}{2}a(v,v) - lv$, $\forall v \in V$, expresses the difference between the internal elastic energy and the load potential.

#### **Lemma 1.**

*The functional* $l$ *is linear on* $V$, *and* $a(\cdot,\cdot)$ *is a bilinear, positive, and symmetric form on* $V$.

**Proof.** (i) Let $v_1, v_2 \in V$ and $\alpha, \beta \in \mathbb{R}$; then $l(\alpha v_1 + \beta v_2) = \alpha\, l v_1 + \beta\, l v_2$, which follows from the linearity of the scalar product.

(ii) Let *u*1, *u*2, *v* ∈ *V* and *α*, *β* ∈ R then

$$\begin{aligned} a(\alpha u_1 + \beta u_2, v) &= \int_a^b p(\alpha u_1 + \beta u_2)'v' + (\rho + \lambda s)(\alpha u_1 + \beta u_2)v\,dx \\ &= \alpha\int_a^b p u_1'v' + (\rho + \lambda s)u_1 v\,dx + \beta\int_a^b p u_2'v' + (\rho + \lambda s)u_2 v\,dx \\ &= \alpha\, a(u_1, v) + \beta\, a(u_2, v) \end{aligned}$$

Let *u*, *v*1, *v*<sup>2</sup> ∈ *V* and *α*, *β* ∈ R, then

$$\begin{aligned} a(u, \alpha v_1 + \beta v_2) &= \int_a^b p u'(\alpha v_1 + \beta v_2)' + (\rho + \lambda s)u(\alpha v_1 + \beta v_2)\,dx \\ &= \alpha\int_a^b p u'v_1' + (\rho + \lambda s)u v_1\,dx + \beta\int_a^b p u'v_2' + (\rho + \lambda s)u v_2\,dx \\ &= \alpha\, a(u, v_1) + \beta\, a(u, v_2) \end{aligned}$$

Let *u* ∈ *V*, then

$$a(u,u) = \int_a^b p(u')^2 + (\rho + \lambda s)u^2\,dx = \int_a^b p(u')^2\,dx + \int_a^b (\rho + \lambda s)u^2\,dx$$

The weight function $p(x)$ is positive on $[a,b]$ and, under the RSL($P_1$) conditions, $\rho + \lambda s$ is a positive function on $[a,b]$; accordingly, $a(u,u) \ge 0$, $\forall u \in V$, and hence $a(\cdot,\cdot)$ is positive. Let $u, v \in V$; then

$$a(u,v) = \int\_{a}^{b} p \, u'v' + (\rho + \lambda s) \, uv \, d\mathbf{x} = a(v,u).$$

Consequently, *a*(·, ·) is symmetric.

**Minimization problem** $(MP_1)$ **associated with** $(VP_1)$ is as follows: find $u \in V$ such that $Fu = \min_{v\in V} Fv$ with

$$Fv = \frac{1}{2} \int\_{a}^{b} p(v')^2 + (\rho + \lambda s)v^2 dx - \int\_{a}^{b} fv \, dx \tag{19}$$
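A minimal Ritz sketch of $(MP_1)$: with $p = 1$, $\rho + \lambda s \equiv 1$, and $f(x) = \sin x$ on $(0, \pi)$ (all choices ours), minimizing $F$ over a sine basis reduces to a small linear system, whose solution can be compared with the exact minimizer $u(x) = \sin(x)/2$:

```python
import numpy as np

# Ritz sketch for (MP1): minimize Fv = (1/2) a(v,v) - lv with p = 1 and
# rho + lambda*s = 1; the basis v_k = sin(kx) satisfies the Dirichlet conditions,
# and minimizing F over span{v_1..v_K} is equivalent to the Galerkin system A c = L
K = 5
x = np.linspace(0.0, np.pi, 20001)
h = x[1] - x[0]

def integ(y):                      # composite trapezoid rule
    return float(np.sum((y[1:] + y[:-1]) / 2.0) * h)

phi  = [np.sin((k + 1) * x) for k in range(K)]
dphi = [(k + 1) * np.cos((k + 1) * x) for k in range(K)]
f = np.sin(x)

A = np.array([[integ(dphi[j] * dphi[k] + phi[j] * phi[k]) for k in range(K)]
              for j in range(K)])
L = np.array([integ(f * phi[k]) for k in range(K)])
c = np.linalg.solve(A, L)          # Ritz coefficients of the minimizer
# the exact solution of -u'' + u = sin x, u(0) = u(pi) = 0 is u(x) = sin(x)/2
```

With this basis, only the first coefficient should be nonzero (equal to $1/2$), which makes the check unambiguous.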

#### **Theorem 1.**

*(1)* $u \in V$ *is a solution of* $(VP_1)$ *if and only if* $u$ *is a solution of* $(MP_1)$*. (2) If a solution* $u$ *of* $(VP_1)$ *is sufficiently smooth, then* $u$ *is a solution of the problem* $P_1$*.*


**Proof.** *(1)* (i) Let *u* ∈ *V* be the solution of (*VP*1), then *a*(*u*, *v*) = *lv*, ∀*v* ∈ *V*. For any *w* ∈ *V*, denoting *v* = *w* − *u* ∈ *V*, we have

$$Fw - Fu = \frac{1}{2}a(u+v, u+v) - l(u+v) - Fu = \left[a(u,v) - lv\right] + \frac{1}{2}a(v,v) = \frac{1}{2}a(v,v) \ge 0,$$

meaning that

$$Fu = \min_{w\in V} Fw,\ \text{i.e.,}\ u \text{ is a solution of } (MP_1).$$

(1) (ii) Let $u \in V$ be the solution of $(MP_1)$; then $Fu = \min_{w\in V} Fw$ and, therefore, $Fw - Fu \ge 0$, $\forall w \in V$.

Using *w* = *u* + *tv*, *t* ∈ R, one finds *F*(*u* + *tv*) − *Fu* ≥ 0, ∀*t* ∈ R meaning

$$\frac{1}{2}a(u+tv, u+tv) - l(u+tv) - \frac{1}{2}a(u,u) + lu \ge 0, \forall t \in \mathbb{R}, \forall u, v \in V$$

$$\frac{1}{2}a(u,u) + ta(u,v) + \frac{t^2}{2}a(v,v) - lu - t\,lv - \frac{1}{2}a(u,u) + lu \ge 0, \forall t \in \mathbb{R}, \forall u, v \in V$$

$$\left[\frac{1}{2}a(v,v)\right]t^2 + [a(u,v) - lv]t \ge 0 \,\forall t \in \mathbb{R}$$

Using the positivity of the term *a*(*v*, *v*), one finds

$$\left[a(u,v) - lv\right]^2 \le 0 \quad\Rightarrow\quad a(u,v) = lv,\quad \forall v \in V,$$

meaning that $u$ is a solution of $(VP_1)$.

(2) Let $u \in V \cap \mathcal{H}$ be a solution of $(VP_1)$; then

$$\begin{aligned} \int\_a^b p u' v' + (\rho + \lambda s) u v \, dx &= \int\_a^b f v \, dx \\ \int\_a^b p u' v' \, dx + \int\_a^b (\rho + \lambda s) u v \, dx &= \int\_a^b f v \, dx \\ \left( p u' \right) v \big|\_a^b - \int\_a^b \left( p u' \right)' v \, dx + \int\_a^b (\rho + \lambda s) u v \, dx &= \int\_a^b f v \, dx \\ \left( p u' \right) v \big|\_a^b + \int\_a^b \left[ - \left( p u' \right)' + (\rho + \lambda s) u - f \right] v \, dx &= 0, \,\forall v \in V \end{aligned}$$

In the case of a self-adjoint operator $L$, such as in the case of periodic or antiperiodic boundary conditions, we have

$$\left.(pu')v\right|_a^b = p(b)u'(b)v(b) - p(a)u'(a)v(a) = 0,\ \forall u,v \in V$$

and one obtains $\int_a^b \varphi v\,dx = 0$, $\forall v \in V$, for $\varphi(x) = -(pu')' + (\rho + \lambda s)u - f \in C(a,b)$; that is, $\varphi \equiv 0$ over the interval $(a,b)$.

This means that $u$ satisfies the equation of problem $P_1$ and, belonging to $V$, also verifies the boundary conditions.

The theorem proved above transfers the search for the solution $u$ of the problem $P_1$ to the search for the solution of the problem $MP_1$, where existence is ensured by the Lax–Milgram theorem for a coercive quadratic form and, more generally, by the Lions–Stampacchia theorem, $a(\cdot,\cdot)$ being a symmetric positive bilinear form.

#### **4. Variational Approaches for VP of RSL**

*4.1. Nehari Variational Method*

Let $u \in C^2([a,b])$ and let $F : [a,b] \times \mathbb{R} \times \mathbb{R} \to \mathbb{R}$ be a function that has continuous second-order derivatives with respect to all of its arguments. According to the Euler–Lagrange variational principle, a necessary condition for the functional

$$J(u) = \int\_{a}^{b} \mathcal{F}(\mathbf{x}, u, u') \, d\mathbf{x},\tag{20}$$

to be stationary at *u* is that *u* is a solution of the Euler–Lagrange equation (see [3])

$$\frac{\partial F}{\partial u} - \frac{d}{dx} \left( \frac{\partial F}{\partial u'} \right) = 0, \quad a \le x \le b \tag{21}$$

with the Dirichlet conditions *u*(*a*) = *A* and *u*(*b*) = *B*.

For a nonlinear RSL problem,

$$-\mu'' = f(\mathbf{x}, \mu^2), f \in \mathcal{H}, \mathbf{x} \in (a, b) = I; \mu(a) = A \text{ and } \mu(b) = B,\tag{22}$$

for $\mathcal{H} = \{f \in C([a,b]\times[0,\infty))\ |\ f \text{ verifies (ip1), (ip2) and the boundary conditions}\}$.

$$\begin{aligned} \text{(ip1)}:&\ f(x,y) > 0 \text{ for } y > 0\\ \text{(ip2)}:&\ \exists\, \nu > 0 \text{ such that } y^{-\nu}f(x,y) \text{ is a non-decreasing function of } y \in [0,\infty) \end{aligned}$$

For the problem (22), we look for the extremum value of the functional

$$J(u) = \int_a^b \left[\left(u'\right)^2 - \int_0^{u^2(x)} f(x,y)\,dy\right]dx, \tag{23}$$

over the set $V_{a,b} = \{u\ |\ u \in C([a,b]),\ u' \text{ piecewise continuous in } [a,b],\ u(a) = u(b) = 0\}$. The functional $J(u)$ is not bounded; using the Nehari method, a new condition on the functions $f$ and $u$ is imposed:

$$\int\_{a}^{b} \left(u'\right)^{2} d\mathbf{x} = \int\_{a}^{b} u^{2} f(\mathbf{x}, u^{2}) d\mathbf{x} \,\tag{24}$$

which is satisfied by the solutions of (22).

Let us consider the set $\mathcal{V}_{a,b} = \left\{u\ |\ u \in V_{a,b} \text{ verifies } (24)\right\}$. For $I$ given, let $\mu(a,b) = \inf_{u\in\mathcal{V}_{a,b}} J(u)$. Then $\exists u \in \mathcal{V}_{a,b}$ such that $J(u) = \mu(a,b)$ and also (see [12]), for $u \in \mathcal{V}_{a,b}$ with $J(u) = \mu(a,b)$, $w = |u| \in \mathcal{V}_{a,b}$ is a positive solution of (22).

The function *µ*(*a*, *b*) is continuous with respect to both arguments and

$$\mu(a,b) = \inf\_{c < d \in [a,b]} \mu(c,d) \text{ with } \lim\_{b \to a} \mu(a,b) = \infty. \tag{25}$$

**Remark 5.** *For a partition* $\Delta_n : a = x_0 < x_1 < \cdots < x_{n-1} < x_n = b$ *of the interval* $I$, *over each subinterval* $[x_i, x_{i+1}]$, *consider* $u_i \in \mathcal{V}_{x_i, x_{i+1}}$ *normalized with the Nehari condition and*

$$\text{For } x \in [x_i, x_{i+1}]:\ u(x) = (-1)^i |u_i(x)|,\qquad J(u) = \mu_{n-1}(x_1, x_2, \dots, x_{n-1}) \tag{26}$$

$$\text{For } \Delta_n \text{ given}:\ \mu_{n-1}(x_1, x_2, \dots, x_{n-1}) = \sum_{i=1}^{n} \mu(x_{i-1}, x_i) \tag{27}$$

*then the solution* $u(x)$ *is in* $V_{a,b}$ *and vanishes* $n-1$ *times over the interval* $I$. *Additionally, if* $|u_k'(x_k)| \neq |u_{k+1}'(x_k)|$, *then* $\mu_{n-1}(x_1, x_2, \dots, x_{n-1})$ *is not a minimum of* $\sum_{i=1}^{n}\mu(x_{i-1}, x_i)$.

#### *4.2. Variational Estimations for RSL*

In the following, two variational estimation methods are presented, the shooting method and the bisection method, consisting of solving the variational equations associated with the given problem.

#### **Shooting method:**

$$(P_2)\ \left\{\begin{array}{ll} -(pu')' + (q + \lambda s)u = 0, & x \in [a,b]\\ u(a) = 0,\ u(b) = 0 \end{array}\right. \tag{28}$$

For $\lambda$ an eigenvalue and $u_\lambda(x)$ the corresponding eigenfunction, $u_\lambda'(a) \neq 0$ and $y = \frac{u_\lambda}{u_\lambda'(a)}$ is the normalized eigenfunction with $y'(a) = 1$, which is the solution of the variational equation associated with (28) with the initial value conditions:

$$(VIP)\left\{\begin{array}{l} -(py')' + (q + \lambda s)y = 0, \quad x \in [a, b] \\ y(a) = 0, y'(a) = 1 \end{array} \right.\tag{29}$$

with *y*(*b*) = 0 (see [4]).

Algorithm of the shooting method:

Step 1 Determine an interval of an eigenvalue and make a guess;

Step 2 Solve VIP (*P*2) and find the eigenfunction *u* = *uλ*(*x*);

Step 3 If *uλ*(*b*) = 0 or |*uλ*(*b*)| < *ε* given, then Stop.

Else, find *λ* the root of *uλ*(*b*) = 0 in a given interval and update *λ*.

GO TO Step 1.
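The algorithm above can be sketched for the model problem $-u'' = \lambda u$, $u(0) = u(1) = 0$, whose first eigenvalue is $\pi^2$; the RK4 integrator and the bracket $[9, 10]$ are our illustrative choices:

```python
import math

def u_at_b(lam, m=2000):
    # classical RK4 for -u'' = lam*u as a system u' = v, v' = -lam*u,
    # with the normalized initial values u(0) = 0, u'(0) = 1
    h, u, v = 1.0 / m, 0.0, 1.0
    for _ in range(m):
        k1u, k1v = v, -lam * u
        k2u, k2v = v + h/2*k1v, -lam * (u + h/2*k1u)
        k3u, k3v = v + h/2*k2v, -lam * (u + h/2*k2u)
        k4u, k4v = v + h*k3v, -lam * (u + h*k3u)
        u, v = (u + h/6*(k1u + 2*k2u + 2*k3u + k4u),
                v + h/6*(k1v + 2*k2v + 2*k3v + k4v))
    return u  # u_lam(b) with b = 1

# Step 1: guess a bracket; Steps 2-3: bisect on the sign of u_lam(1)
lo, hi = 9.0, 10.0        # u_lam(1) = sin(sqrt(lam))/sqrt(lam) changes sign here
while hi - lo > 1e-10:
    mid = 0.5 * (lo + hi)
    if u_at_b(lo) * u_at_b(mid) <= 0:
        hi = mid
    else:
        lo = mid
lam1 = 0.5 * (lo + hi)    # first eigenvalue of -u'' = lam*u, u(0) = u(1) = 0
```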

#### **Bisection method**

For the SL eigenvalue problem $(P_3)$, with functions and constants satisfying the RSL assumptions (1)–(4):

$$(P_3)\ \begin{cases} -\left(pu'\right)' + (q + \lambda s)u = 0, & x \in [a,b]\\ B_1 u(a): a_1 u(a) + a_2 u'(a) = 0\\ B_2 u(b): b_1 u(b) + b_2 u'(b) = 0 \end{cases} \tag{30}$$

the related variational initial value problem *VIP*<sup>3</sup> is

$$(VIP3)\left\{ \begin{array}{l} -(pu')' + (q + \lambda s)u = 0, \\ u(a) = -\frac{a\_2}{\sqrt{a\_1^2 + a\_2^2}}; \ u'(a) = \frac{a\_1}{\sqrt{a\_1^2 + a\_2^2}} \end{array} \right. \tag{31}$$

For the eigenvalue $\lambda$, denoting by $u_\lambda$ the corresponding eigenfunction, with $u_\lambda(a) \neq 0$, $y = u_\lambda$ is the normalized eigenfunction such that $a_1 y(a) + a_2 y'(a) = 0$. In this case, $\lambda$ is an eigenvalue for $P_3$ if $F(\lambda) := B_2 y(b) = b_1 y(b) + b_2 y'(b) = 0$.

The function $w_\lambda(x) = \frac{\partial u_\lambda}{\partial\lambda}$ satisfies the variational initial value problem $(VIVP_3)$:

$$(VIVP_3)\left\{\begin{array}{l} -(pw')' + (q + \lambda s)w + s\,u_\lambda = 0\\ w(a) = 0,\ w'(a) = 0 \end{array}\right. \tag{32}$$

and $F(\lambda) = b_1 u_\lambda(b) + b_2 u_\lambda'(b)$ is a continuously differentiable function of $\lambda$ with $F'(\lambda) = b_1 w_\lambda(b) + b_2 w_\lambda'(b) \neq 0$.

**Remark 6.** *Under the RSL assumptions (1)–(4), if* $\lambda$ *is an eigenvalue of* $(P_3)$ *and* $y = u_\lambda$ *is the corresponding normalized eigenfunction, then there exists an interval* $(\lambda_{\inf}, \lambda_{\sup})$ *containing* $\lambda$ *such that* $F(\lambda_{\inf})F(\lambda_{\sup}) < 0$, *the approximate sequence* $(\lambda_n)_n$ *is convergent,* $\lambda_n \to \lambda$, *and* $y_n = u_{\lambda_n}$ *are the corresponding eigenfunctions obtained by solving* $(VIVP_3)$, *such that* $y_n \to y$ *and* $y_n' \to y'$.
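Once a bracket $(\lambda_{\inf}, \lambda_{\sup})$ with $F(\lambda_{\inf})F(\lambda_{\sup}) < 0$ is found, bisection on $F$ converges to the eigenvalue. A sketch for a constant-coefficient instance of (30) in which $u_\lambda$ is known in closed form (the parameter choices $p = 1$, $q = 0$, $s = -1$, $a_1 = 1$, $a_2 = 0$, $b_1 = b_2 = 1$ are ours):

```python
import math

# with p = 1, q = 0, s = -1 the equation reads u'' + lam*u = 0 on [0, 1];
# normalized data (31) with a1 = 1, a2 = 0 give u(0) = 0, u'(0) = 1, so
# u_lam(x) = sin(sqrt(lam)*x)/sqrt(lam); Robin data b1 = b2 = 1 are illustrative
def F(lam):
    r = math.sqrt(lam)
    return math.sin(r) / r + math.cos(r)   # F = b1*u_lam(1) + b2*u'_lam(1)

lam_inf, lam_sup = 3.0, 5.0                # bracket with F(lam_inf)*F(lam_sup) < 0
assert F(lam_inf) * F(lam_sup) < 0
while lam_sup - lam_inf > 1e-10:
    mid = 0.5 * (lam_inf + lam_sup)
    if F(lam_inf) * F(mid) <= 0:
        lam_sup = mid
    else:
        lam_inf = mid
lam = 0.5 * (lam_inf + lam_sup)            # root of tan(sqrt(lam)) = -sqrt(lam)
```

In the general case, each evaluation of $F(\lambda)$ requires one numerical solve of the initial value problem (31) instead of the closed form used here.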

For instance, the solution of the problem

$$(EP_3)\left\{\begin{array}{ll} -(p(x)u')' + \rho u(x) = f(x), & x \in (0,1)\\ u(0) = 0,\ u(1) = 0 \end{array}\right. \tag{33}$$

with $p, \rho, f \in C([0,1])$ verifying the (RSL) conditions (1)–(4), is obtained by solving the associated

$$(VI\text{-}EP_3)\left\{\begin{array}{l} -(py')' + \rho y = f(x), \quad x \in (0,1)\\ y(0) = 0,\ y'(0) = s \end{array}\right. \tag{34}$$

The solution of $(EP_3)$ is determined as $u_s(x) = u_p(x) + s\,v(x)$ with $u_s(1) = 0$, where $u_p$ is a particular solution of $Ly = f$ and $v$ satisfies

$$Lv = 0, \; v(0) = 0; \; v'(0) = 1.$$
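The superposition $u_s = u_p + s\,v$ can be illustrated on a constant-coefficient instance of $(EP_3)$; the choices $p = 1$, $\rho = 1$, $f \equiv 1$ are ours, and the closed-form solution is used only to check the result:

```python
import math

def integrate(g, y0, dy0, T, m=2000):
    # RK4 for -y'' + y = g, i.e., y'' = y - g(t), y(0) = y0, y'(0) = dy0; returns y(T)
    h, t, y, v = T / m, 0.0, y0, dy0
    for _ in range(m):
        k1y, k1v = v, y - g(t)
        k2y, k2v = v + h/2*k1v, (y + h/2*k1y) - g(t + h/2)
        k3y, k3v = v + h/2*k2v, (y + h/2*k2y) - g(t + h/2)
        k4y, k4v = v + h*k3v, (y + h*k3y) - g(t + h)
        y, v, t = (y + h/6*(k1y + 2*k2y + 2*k3y + k4y),
                   v + h/6*(k1v + 2*k2v + 2*k3v + k4v), t + h)
    return y

one, zero = (lambda t: 1.0), (lambda t: 0.0)
# u_p: Ly = f with y(0) = y'(0) = 0;  v: Lv = 0 with v(0) = 0, v'(0) = 1
s = -integrate(one, 0.0, 0.0, 1.0) / integrate(zero, 0.0, 1.0, 1.0)
u_half = integrate(one, 0.0, 0.0, 0.5) + s * integrate(zero, 0.0, 1.0, 0.5)
# closed form for -u'' + u = 1, u(0) = u(1) = 0: u(x) = 1 - cosh(x - 1/2)/cosh(1/2)
```

Because the problem is linear, a single value of $s$ (no iteration) enforces $u_s(1) = 0$ exactly; this is the "variational" shortcut described above.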

#### *4.3. Iterative Variational Methods for RSL*

Among the analytical estimation methods, the variational iteration method (VIM, or He's method; see [13]) and the homotopy perturbation method (HPM) (see [14,15]) are considered to find approximations for the nonlinear equation

$$L\mu + \lambda \mathbf{s}(\mathbf{x})\mu = f(\mathbf{x}, \mu, \mu'), \mathbf{x} \in (a, b) = I, \lambda \in \mathbb{R},\tag{35}$$

under different boundary conditions (Dirichlet, Neumann or general case *B*1*u*(*a*), *B*2*u*(*b*)).

4.3.1. He's Variational Method (VIM)

For the nonlinear Equation (35), we define *N* as the nonlinear operator such that (35) becomes

$$Lu + Nu = g(\mathbf{x}), \; \mathbf{x} \in (a, b) = I,\tag{36}$$

and the correction functional for the general Lagrange multiplier method is

$$u\_{n+1}(\mathbf{x}) = u\_n(\mathbf{x}) + \int\_0^\mathbf{x} \mu(t, \mathbf{x}, \lambda) [Lu\_n(t) + N\tilde{u}\_n(t) - \mathbf{g}(t)]dt,\tag{37}$$

with $\tilde{u}_n$ considered as a restricted variation, $\delta\tilde{u}_n = 0$, and $\mu(t,x,\lambda)$ a Lagrange multiplier determined through the calculus of variations from (37); see [10,16].

$$
\delta u\_{n+1}(\mathbf{x}) = \delta u\_n(\mathbf{x}) + \delta \int\_0^\mathbf{x} \mu(t, \mathbf{x}, \lambda) [Lu\_n(t) + N\tilde{u}\_n(t) - \mathbf{g}(t)]dt. \tag{38}
$$

#### 4.3.2. Homotopy Perturbation Method (HPM)

For the nonlinear Equation (35), we define the operators $\mathcal{L}$ and $N$ for $q \in [0,1]$:

$$\mathcal{L}[\Phi(\mathbf{x},\boldsymbol{q})] = -\frac{d}{d\mathbf{x}}\Big[p(\mathbf{x})\frac{d}{d\mathbf{x}}\Phi(\mathbf{x},\boldsymbol{q})\Big],\tag{39}$$

$$N[\Phi(\mathbf{x},q)] = -\frac{d}{d\mathbf{x}}\left[p(\mathbf{x})\frac{d}{d\mathbf{x}}\Phi(\mathbf{x},q)\right] + (\rho + \lambda\mathbf{s})(\mathbf{x})\Phi(\mathbf{x},q) - f(\mathbf{x},\Phi(\mathbf{x},q),\Phi\_\mathbf{x}(\mathbf{x},q)), \tag{40}$$

given by the maximum order of derivation in the equation and by the form of the equation; see [14,17]. We write the zeroth-order equation associated with the initial equation:

$$(1 - q)\mathcal{L}[\Phi(\mathbf{x}, q) - \mu\_0(\mathbf{x})] = hq \mathcal{N}[\Phi(\mathbf{x}, q)] \tag{41}$$

with $h$ a nonzero parameter and $u_0$ a first analytical approximation of the function $u$, with the conditions

$$
\Phi(\mathbf{x},0) = \mathfrak{u}\_0(\mathbf{x}); \quad \Phi(\mathbf{x},1) = \mathfrak{u}(\mathbf{x}), \mathbf{x} \in [a,b]. \tag{42}
$$

where $u_0$ is an initial function that verifies the boundary conditions $B_1 u(a): a_1 u(a) + a_2 u'(a) = 0$, $B_2 u(b): b_1 u(b) + b_2 u'(b) = 0$; it could be obtained from a polynomial approximation developing the function $f$.

We develop $\Phi(x,q)$ by a Taylor series in the vicinity of the origin with respect to the second variable:

$$\Phi(x,q) = u_0(x) + \sum_{m=1}^{\infty} u_m(x)q^m;\qquad u_m(x) = \left.\frac{1}{m!}\frac{\partial^m \Phi}{\partial q^m}(x,q)\right|_{q=0} \tag{43}$$

A good choice of $h$ (in relation to the error obtained compared to the initial equation) leads to $u(x) = u_0(x) + \sum_{m=1}^{\infty} u_m(x)$.

The equation of order *m*:

$$\begin{aligned} \text{case } m = 1 &\to \mathcal{L}[u_1(x)] = hN[u_0]\\ \text{case } m \ge 2 &\to \mathcal{L}[u_m(x) - u_{m-1}(x)] = hN[u_{m-1}] \end{aligned}$$

with boundary conditions *B*1*um*(*a*), *B*2*um*(*b*).

For the approximation of order 1, we have

$$-\left[p(\mathbf{x})u\_1'(\mathbf{x})\right]' = h\underbrace{\left\{-\left[p(\mathbf{x})u\_0'(\mathbf{x})\right]' + (\rho(\mathbf{x}) + \lambda\mathbf{s}(\mathbf{x}))u\_0(\mathbf{x}) - f(\mathbf{x}, u\_0(\mathbf{x}), u\_0'(\mathbf{x}))\right\}}\_{\varepsilon\_0(\mathbf{x})}\tag{44}$$

with *ε*1(*x*, *h*) = *N*[*u*1(*x*)].

The parameter $h$ at step 1 is chosen such that the value of $\max_{x\in I}|\varepsilon_1(x,h)|$ is the smallest possible (it can also simply be taken as $h = 1$). Iteratively, for $m \ge 2$,

$$L u\_m + \lambda s(\mathbf{x}) u\_m = f(\mathbf{x}, u\_{m-1}, u\_{m-1}'), \; \mathbf{x} \in I; \; B\_1 u\_m(a), B\_2 u\_m(b) \tag{45}$$

with the start condition $u_0$ known and the stop condition $\max_x|u_m - u_{m-1}| < \varepsilon$.
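A minimal sketch of the iteration (45), with $L = -d^2/dx^2$, $\lambda s \equiv 1$, a manufactured nonlinearity $f(x,u) = u^2 + g(x)$ (all choices ours, made so that the exact solution is known), and a standard finite-difference solve for each linear step:

```python
import numpy as np

# Manufactured instance of (45): -u_m'' + u_m = u_{m-1}^2 + g(x) on (0, 1) with
# u(0) = u(1) = 0 and g = 2 + x(1-x) - x^2(1-x)^2, so the exact solution is
# u*(x) = x(1 - x)
N = 200
x = np.linspace(0.0, 1.0, N + 1)
h = x[1] - x[0]
xi = x[1:-1]                                   # interior nodes
g = 2.0 + xi * (1.0 - xi) - (xi * (1.0 - xi)) ** 2

# tridiagonal finite-difference matrix for -u'' + u with Dirichlet conditions
A = (np.diag(np.full(N - 1, 2.0 / h**2 + 1.0))
     + np.diag(np.full(N - 2, -1.0 / h**2), 1)
     + np.diag(np.full(N - 2, -1.0 / h**2), -1))

u = np.zeros(N - 1)                            # start condition u_0 = 0
for _ in range(100):
    u_new = np.linalg.solve(A, u ** 2 + g)     # one linear solve per iteration
    done = np.max(np.abs(u_new - u)) < 1e-12   # stop condition
    u = u_new
    if done:
        break
err = float(np.max(np.abs(u - xi * (1.0 - xi))))
```

Each sweep solves only a linear problem with the nonlinearity frozen at the previous iterate, which is exactly the structure of (45); for this mild nonlinearity the iteration converges in a handful of steps.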

#### 4.3.3. Applications

Let us consider a nonlinear RSL, such as the following problem:

$$-u''(x) + \lambda u(x) = f(x, u, u'),\quad x \in [0,b];\ B_1 u(0),\ B_2 u(b). \tag{46}$$

The correction functional (37) corresponding to the problem (46) for the variational iteration method leads to the general Lagrange multiplier

$$\text{for } \lambda > 0,\ \mu(t,x,\lambda) = \frac{1}{2a}\left(e^{a(x-t)} - e^{a(t-x)}\right) = -\frac{1}{a}\sinh a(t-x),\ a = \sqrt{\lambda} \tag{47}$$

$$\text{for } \lambda < 0,\ \mu(t,x,\lambda) = -\frac{1}{a}\sin a(t-x),\ a = \sqrt{-\lambda} \tag{48}$$

and for *f*(*x*, *u*, *u* 0 ) = *g*(*x*), one finds

$$u_{n+1}(x) = u_n(x) - \frac{1}{a}\int_0^x \sinh a(t-x)\left[-u_n''(t) + \lambda u_n(t) - g(t)\right]dt,\ \lambda = a^2 > 0, \tag{49}$$

$$u_{n+1}(x) = u_n(x) - \frac{1}{a}\int_0^x \sin a(t-x)\left[-u_n''(t) + \lambda u_n(t) - g(t)\right]dt,\ \lambda = -a^2 < 0 \tag{50}$$

with $u_0(x) = A + Bx$. In particular, for $g(x) = x$, the first step leads to

$$u_1(x) = u_0(x) - \frac{1}{a}\int_0^x \sinh a(t-x)\left[\lambda A + (\lambda B - 1)t\right]dt,\quad \lambda = a^2, \tag{51}$$

$$u_1(x) = u_0(x) - \frac{1}{a}\int_0^x \sin a(t-x)\left[\lambda A + (\lambda B - 1)t\right]dt,\quad \lambda = -a^2 \tag{52}$$

from where

$$u_1(x) = A\cosh(ax) + \frac{a^2 B - 1}{a^3}\sinh(ax) + \frac{x}{a^2},\quad \lambda = a^2, \tag{53}$$

$$u_1(x) = A\cos(ax) + \frac{a^2 B + 1}{a^3}\sin(ax) - \frac{x}{a^2},\quad \lambda = -a^2. \tag{54}$$
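With the exact Lagrange multiplier, one step of the iteration reproduces the closed-form solution of $-u'' + \lambda u = x$ with $u(0) = A$, $u'(0) = B$. A numerical check of these closed forms ($A$, $B$, $a$ below are arbitrary test values):

```python
import math

A, B, a = 0.7, -0.3, 1.5           # arbitrary test data; lam = +/- a^2

def u1_hyp(x):                     # candidate solution for lam = a^2 > 0
    return A*math.cosh(a*x) + (a*a*B - 1)/a**3 * math.sinh(a*x) + x/(a*a)

def u1_trig(x):                    # candidate solution for lam = -a^2 < 0
    return A*math.cos(a*x) + (a*a*B + 1)/a**3 * math.sin(a*x) - x/(a*a)

def residual(u, lam, x, h=1e-5):
    upp = (u(x + h) - 2*u(x) + u(x - h)) / (h*h)   # central 2nd difference
    return -upp + lam*u(x) - x                     # residual of -u'' + lam*u = x

checks = [abs(residual(u1_hyp,  a*a, 0.4)),
          abs(residual(u1_trig, -a*a, 0.4)),
          abs(u1_hyp(0.0) - A),
          abs((u1_hyp(1e-6) - u1_hyp(-1e-6)) / 2e-6 - B)]
```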

For *n* > 1, we have

$$u_{n+1}(x) = u_n(x) - \frac{1}{a}\int_0^x \sinh a(t-x)\left[-u_n''(t) + \lambda u_n(t) - t\right]dt,\ \lambda = a^2 > 0, \tag{55}$$

$$u_{n+1}(x) = u_n(x) - \frac{1}{a}\int_0^x \sin a(t-x)\left[-u_n''(t) + \lambda u_n(t) - t\right]dt,\ \lambda = -a^2 < 0 \tag{56}$$

The constants $A$ and $B$ are determined from the boundary conditions: setting $u(x) = u_n(x)$ for the last computed function $u_n$ and imposing the boundary conditions results in a system for the constants $A$, $B$. Thus, the solution of the problem is obtained.

In the case of using HPM, Equation (44), with $\lambda = a^2$, for the first step becomes

$$-u_1''(x) = h\left(-u_0''(x) + \lambda u_0(x) - x\right) = h\varepsilon_0(x) \tag{57}$$

and for case *m* ≥ 2

$$-\left[u_m'' - u_{m-1}''\right](x) = h\left(-u_{m-1}''(x) + \lambda u_{m-1}(x) - x\right) = h\varepsilon_{m-1}(x,h). \tag{58}$$

One obtains

$$u_1(x) = -h\left(\frac{\lambda A x^2}{2} + \frac{(\lambda B - 1)x^3}{6}\right), \tag{59}$$

$$u_2'' = u_1'' - h\varepsilon_1(x,h) = -h(1+h)\left(\lambda A + (\lambda B - 1)x\right) + hx + h^2\left(\frac{\lambda^2 A x^2}{2} + \lambda\frac{(\lambda B - 1)x^3}{6}\right). \tag{60}$$

from where

$$u_2 = -h(1+h)\left(\lambda A\frac{x^2}{2} + (\lambda B - 1)\frac{x^3}{6}\right) + h\frac{x^3}{6} + h^2\left(\frac{\lambda^2 A x^4}{24} + \lambda\frac{(\lambda B - 1)x^5}{120}\right). \tag{61}$$

The solution will be *u*(*x*) = *u*0(*x*) + *u*1(*x*) + *u*2(*x*) + . . . .

Additionally, if we consider the nonlinearity in (46) through the function $f(x, u, u') = g(x)u$, that is, for $g(x) = -x^2$, $x \in (-l, l)$, we are in the harmonic oscillator case ($\lambda < 0$); then the following correction functionals appear:

$$u_{n+1}(x) = u_n(x) - \frac{1}{a}\int_0^x \sinh a(t-x)\left[-u_n''(t) + (\lambda - g(t))u_n(t)\right]dt,\ \lambda = a^2 > 0, \tag{62}$$

$$u_{n+1}(x) = u_n(x) - \frac{1}{a}\int_0^x \sin a(t-x)\left[-u_n''(t) + (\lambda - g(t))u_n(t)\right]dt,\ \lambda = -a^2 < 0 \tag{63}$$

Hermite polynomials appear in the eigenfunctions of the equation.

For $B_1 u(-l): u(-l) = 0$; $B_2 u(l): u(l) = 0$ and $\lambda < 0$,

$$u_1(x) = A + Bx - \frac{1}{a}\int_0^x \sin a(t-x)\left[(t^2 - a^2)(A + Bt)\right]dt, \tag{64}$$

$$u_{n+1}(x) = u_n(x) - \frac{1}{a}\int_0^x \sin a(t-x)\left[-u_n''(t) + (t^2 - a^2)u_n(t)\right]dt, \tag{65}$$

$$-u_1''(x) = h\left(-u_0''(x) + (x^2 - a^2)u_0(x)\right) = h\varepsilon_0(x) \tag{66}$$

and for case *m* ≥ 2

$$-\left[u_m'' - u_{m-1}''\right](x) = h\left(-u_{m-1}''(x) + (x^2 - a^2)u_{m-1}(x)\right) = h\varepsilon_{m-1}(x,h). \tag{67}$$

The two methods are rapidly convergent.

Variational iteration methods, such as VIM and HPM, can also be used for nonlinear propagation problems in which the temporal variable is considered, for example, for the coupled pseudo-parabolic equation or the one-dimensional coupled Burgers equation numerically studied in [18]. The nonlinear coupled Burgers equations are also studied in [19] as an application of EOHAM (the extension of the optimal homotopy asymptotic method), in which homotopy is combined with perturbation techniques. The Newell–Whitehead–Segel equation (NWSE) was also studied using the VIM technique and He's polynomials [20].

#### **5. Conclusions**

In the first part of the paper, definitions and results connected to regular and singular Sturm–Liouville problems are presented. Some types of direct singular SLPs were solved in [5,21], and a study of the inverse SLP algorithm was made. We defined the SLP in a different manner, and different boundary conditions were considered. All the figures were made using Matlab codes (academic versions).

In the core of the paper, the variational formulation (VP) through a positive and symmetric bilinear form is made. The minimization problem (MP) is also outlined through the energy functional, and the equivalence of the formulations, under some conditions imposed for RSL problems, is proved.

Variational estimations appear in the final part of the paper through the construction of the solution via the variational equations associated with the problem, such as the shooting method and the bisection method, or using a sequential analytical approximate solution constructed according to the accuracy established. Here, we present He's variational method and the homotopy method. In the closing part, an application is taken into account and the sequential transition from one step to another is specified for both methods. Al-Khaled et al. (see [22]) solve an SLP numerically using the general Sinc–Galerkin and Newton methods, but for different types of boundary conditions. In that paper, He's method, the Adomian method, and the Lagrange multiplier for special ODEs were given in detail, numerical results being obtained for the Duffing and Titchmarsh equations. We considered in our applications the interval $(0,b)$ and general conditions $B_1 u(0)$, $B_2 u(b)$ for a linear and a nonlinear case of $f(x, u, u')$.

In [23], spectral problems of the nonlocal SLP with an integral condition $B_2 u(b)$ were studied. The kernel of the operator, properties of the first eigenvalue, and oscillation properties of the eigenfunctions of the nonlocal problem were expressed. Additionally, the solution of the Cauchy problem for the SL equation on a star graph was constructed in [24].

For fractional differential equations, VIM could also be a very powerful instrument, with Equations (36) and (37) written using the Caputo fractional derivative; see [25]. This is our intention for a new study.

Nonlinear RSL problems can appear in the case of non-Newtonian fluid flows. Variational estimation methods are efficient techniques for finding analytical approximate solutions for a class of problems, and also for optimal problems, when looking for a minimum using the energy functional.

**Author Contributions:** Conceptualization, E.C.C.; methodology, E.C.C. and C.D.B.; software, E.C.C. and C.D.B.; validation, E.C.C. and C.D.B.; formal analysis, E.C.C.; investigation, E.C.C. and C.D.B.; writing—original draft preparation, E.C.C. and C.D.B.; writing—review and editing, E.C.C. and C.D.B.; supervision, E.C.C. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research received no external funding.

**Acknowledgments:** The authors wish to thank the referees, who helped us improve our article.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


## *Review* **Markov Moment Problem and Sandwich Conditions on Bounded Linear Operators in Terms of Quadratic Forms**

**Octav Olteanu**

Department of Mathematics and Informatics, University Politehnica of Bucharest, 060042 Bucharest, Romania; octav.olteanu50@gmail.com

**Abstract:** As is well-known, unlike the one-dimensional case, there exist nonnegative polynomials in several real variables that are not sums of squares. First, we briefly review a method of approximating any real-valued nonnegative continuous compactly supported function defined on a closed unbounded subset by dominating special polynomials that are sums of squares. This also works in the several-dimensional case. To perform this, a Hahn–Banach-type theorem (Kantorovich theorem on an extension of positive linear operators), a Haviland theorem, and the notion of a moment-determinate measure are applied. Second, completions and other results on solving full Markov moment problems in terms of quadratic forms are proposed based on polynomial approximation. The existence and uniqueness of the solution are discussed. Third, the characterization of the constraints *T*<sup>1</sup> ≤ *T* ≤ *T*<sup>2</sup> for the linear operator *T*, only in terms of quadratic forms, is deduced. Here, *T*<sup>1</sup>, *T*, and *T*<sup>2</sup> are bounded linear operators. Concrete spaces, operators, and functionals are involved in our corollaries or examples.

**Keywords:** polynomial approximation; unbounded subsets; Markov moment problem; positive operators; solution; existence; uniqueness; sums of squares; Banach lattices

**MSC:** 41A10; 46A22; 46B42; 46B70; 47B65

#### **1. Introduction**

We begin by recalling a few general remarks on approximation theory and its applications. A first fact is that the results of the present review paper focus on the existence and uniqueness of the solution of the solution for a large class of Markov moment problems. The involved solutions are bounded linear operators *T* mapping *L* 1 *ν* (*F*) into an order-complete Banach lattice *Y*, where *ν* is a moment-determinate positive regular Borel measure on the closed unbounded subset *<sup>F</sup>* <sup>⊆</sup> <sup>R</sup>*<sup>n</sup>* , *n* ∈ {1, 2, . . .}. The uniqueness follows from the density of polynomials in *L* 1 *ν* (*F*) (Lemma 1) via the continuity of the operator *T*. Of note, our first result (Lemma 1) also works for *n* ≥ 2, when, unlike the case *n* = 1, there exist momentdeterminate measures *ν* on R*<sup>n</sup>* for which the polynomials are not dense in *L* 2 *ν* (*F*) (according to [1]). Thus, for *n* ≥ 2, Lemmas 1, 2, and 3 are no longer valid if we turn *L* 1 *ν* (*F*) into *L* 2 *ν* (*F*). Moreover, Lemma 1 holds true for any closed (unbounded) subset of *<sup>F</sup>* <sup>⊆</sup> <sup>R</sup>*<sup>n</sup>* . Hence, the nonnegative polynomials on *F* are dense in the positive cone of *L* 1 *ν* (*F*). If *F* = R*<sup>n</sup>* or *F* = R*<sup>n</sup>* +, special convex cones of nonnegative polynomials (which are sums of squares) are dense in the positive cone of *L* 1 *ν* (*F*) (Lemmas 2 and 3). These remarks lead to the characterizations in terms of quadratic forms in the case *n* ≥ 2, which is the main contribution of this review paper. Going back to our aim on the applications of approximation theory, in [2] an interesting connection of a moment problem on [0, 1] (the Hausdorff moment problem) with fixed point theory was pointed out. As a rule, fixed point theorems use an iteration process. In [2], this iteration involved a rational function. 
**Citation:** Olteanu, O. Markov Moment Problem and Sandwich Conditions on Bounded Linear Operators in Terms of Quadratic Forms. *Mathematics* **2022**, *10*, 3288. https://doi.org/10.3390/math10183288

Academic Editor: Stefano De Marchi

Received: 30 July 2022; Accepted: 7 September 2022; Published: 10 September 2022

The solution of the Hausdorff moment problem under attention is regarded as the fixed point of a transformation arising naturally from the context. In [3], deep results on the uniqueness of the solutions of moment problems were carefully discussed. The article [4] provided approximation results on various locally compact spaces, not necessarily related to the moment problem. In references [5,6], the geometric and iterative aspects of optimization theory were emphasized. The article [7] provided several interesting functional equations and new simple proofs of related inequalities involving logarithmic convexity and proposed new conjectures on the subject. In the article [8], an iterative method and its related algorithm, accompanied by a convergence analysis, for solving an optimization problem were discussed. As a general remark, recall that determining the element of minimum norm of a closed convex subset of a Hilbert space not containing the origin is also a passing-to-the-limit process associated with an iterative geometrical method. This method can be adapted to a more general setting. The article [9] provides an iterative method for solving and approximating the solution of an operator equation, starting from Newton's global method for convex monotone increasing (or decreasing) operators. Sometimes, the usual iteration defining Newton's method leads to an iteration $A_{k+1} = \varphi(A_k)$, where the $A_k$ are self-adjoint operators acting on a Hilbert space and $\varphi$ is a contractive convex mapping. As is well known, the convergence of the sequence generated by Newton's method generally works only locally. For convex monotone operators of class $C^1$, it works globally, with control of the norm of the error (providing the speed of convergence). The key point of the article [9] is that the convergence of the sequence of successive approximations associated with the contraction mapping $\varphi$ can be handled more easily than that provided by Newton's method. The contraction constant of $\varphi$ can be determined quite easily. In particular, if the matrices have real entries, the result holds for functions of symmetric matrices.
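For illustration only (our own sketch, not taken from [9]), one classical matrix iteration of this type is Newton's method for the equation $X^2 = M$ with $M$ symmetric positive definite: the iterates have the form $X_{k+1} = \varphi(X_k)$ with $\varphi(X) = (X + X^{-1}M)/2$, acting on symmetric matrices, and the fixed point is the principal square root of $M$.

```python
import numpy as np

# Newton's method for X^2 = M, M symmetric positive definite:
# X_{k+1} = (X_k + X_k^{-1} M) / 2, a fixed-point iteration
# X_{k+1} = phi(X_k) on symmetric matrices, started at X_0 = I.
# M here is an arbitrary illustrative example.
M = np.array([[4.0, 1.0],
              [1.0, 3.0]])
X = np.eye(2)
for _ in range(25):
    X = 0.5 * (X + np.linalg.solve(X, M))  # phi(X) = (X + X^{-1} M) / 2
```

Each iterate commutes with $M$ and stays symmetric, so the limit is the (self-adjoint) principal square root.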
In the end, recall the connection between optimization (such as the best approximation by the elements of a closed subspace of a Hilbert space) and Fourier approximation. This is a useful remark that can be used in controlling the mean square error $\|g - h\|_2^2$ between the solutions $g$, $h$ of the reduced moment problems $\langle g, \psi_j \rangle = y_j$, $\langle h, \psi_j \rangle = m_j$, $j = 0, 1, \ldots, m$, in terms of the squares of the errors $|m_j - y_j|^2$, $j = 0, 1, \ldots, m$. Here, all the involved functions $g$, $h$, $\psi_j$ are elements of the Hilbert space $L^2_\mu(F)$, and $F \subseteq \mathbb{R}^n$ is a closed subset:

$$\psi_j(t) = t^j := t_1^{j_1} \cdots t_n^{j_n}, \ t = (t_1, \ldots, t_n) \in F, \ j = (j_1, \ldots, j_n) \in \mathbb{N}^n,$$

where $y_j$ are the exact values of the moments, determined in the experimental stage, while $m_j$ are the modified values of $y_j$, perturbed by external influences in the real-life measuring stage. Another important field in approximation theory is provided by Korovkin-type theorems and their applications. The article [10] presents such an application to approximating a Kantorovich-type rational operator by means of Korovkin's classical approximation result and a completion technique. Associated inequalities are established as well. The papers [11,12] refer to aspects related to, or similar to, those of the moment problem, being inverse problems, as the moment problem is as well. The references [13,14] contain the polynomial approximation on unbounded subsets discussed at the beginning of this introduction. Another direction for applying these approximation results is that of characterizing sandwich conditions on bounded linear operators defined on $L^1_\nu(F)$ (where $\nu$ is moment-determinate) only in terms of quadratic forms (see below). Another well-known application of approximation theory arises from the Krein–Milman theorem, which leads to approximation by convex combinations of the extreme points of a compact convex subset of a locally convex space. Such results lead to representation theorems and possible applications to optimization (see the references [14–17]).
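The error-control remark above can be made concrete in a small numerical sketch (ours, with hypothetical data, not part of the original results): on $F = [0,1]$ with $\psi_j(t) = t^j$, the minimal-norm solutions of the two reduced problems lie in $\mathrm{span}\{\psi_j\}$, and the squared $L^2$ distance between them is a quadratic form in the moment errors, bounded through the inverse Gram matrix.

```python
import numpy as np

# Reduced problems <g, psi_j> = y_j and <h, psi_j> = m_j in L^2[0,1],
# psi_j(t) = t^j. Minimal-norm solutions lie in span{psi_j}: g = sum c_j psi_j
# with G c = y, where G is the Gram (Hilbert) matrix G_ij = 1/(i+j+1).
m = 4
G = np.array([[1.0 / (i + j + 1) for j in range(m)] for i in range(m)])

y = np.array([1.0, 1/2, 1/3, 1/4])                   # exact moments (those of g == 1)
ypert = y + 1e-4 * np.array([1.0, -1.0, 1.0, -1.0])  # hypothetical perturbed data m_j
c = np.linalg.solve(G, y)
cp = np.linalg.solve(G, ypert)

# squared L^2 distance ||g - h||_2^2 between the two solutions ...
err2 = (c - cp) @ G @ (c - cp)
# ... is controlled by the squared moment errors through ||G^{-1}||:
bound = np.linalg.norm(np.linalg.inv(G), 2) * ((y - ypert) ** 2).sum()
```

Since the chosen $y_j$ are the moments of the constant function $1$, the unperturbed solution is $c = (1, 0, 0, 0)$, and `err2` never exceeds `bound`.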

Before stating our work on the multidimensional Markov moment problem and the related results studied in Section 3, we recall some basic notions and related terminology on compatible structures on usual spaces, which are used in the sequel. The motivation for this is that all concrete spaces of functions and self-adjoint operators have such natural structures. For complete and related information, see the monographs and books [18–27].

An ordered vector space is a real vector space *X* endowed with an order relation compatible with the algebraic structure expressed by the following two properties:

$$\begin{array}{l} x, y \in X, \ x \le y \Rightarrow x + z \le y + z \ \text{for all } z \in X, \\ x \le y \Rightarrow \alpha x \le \alpha y \ \text{for all real } \alpha \in [0, \infty). \end{array}$$

An order relation with the above two compatibility properties is called a linear order relation on $X$. An ordered vector space $X$ with the property that for any $x_1, x_2 \in X$ there exists the least upper bound $\sup\{x_1, x_2\} = x_1 \vee x_2$ of the set $\{x_1, x_2\}$ is called a vector lattice. In a vector lattice $X$, the following basic notations are used:

$$x^+ := x \vee \mathbf{0}, \quad x^- := (-x) \vee \mathbf{0}, \quad |x| := x \vee (-x), \quad x \in X.$$

All the usual vector spaces carry such a natural order relation. If $X$ is an ordered vector space, one denotes by $X_+$ the convex cone with vertex at $\mathbf{0}$ defined by $X_+ := \{x \in X; \ x \ge \mathbf{0}\}$. This cone is called the positive cone of $X$. In the function spaces and in the spaces of symmetric matrices with real entries, as well as in the space of self-adjoint operators acting on an infinite-dimensional Hilbert space, there exist natural norms, which make them Banach spaces. Generally, the structures given by the norms are compatible with the algebraic and order structures on the Banach spaces appearing in applications. An ordered Banach space is a Banach space $X$ endowed with a linear order relation such that the positive cone $X_+$ is topologically closed and the norm is monotone increasing (isotone) on $X_+$:

$$
x_1, x_2 \in X, \ \mathbf{0} \le x_1 \le x_2 \Rightarrow \|x_1\| \le \|x_2\|.
$$

A Banach lattice is a Banach space *X*, which is also a vector lattice, such that the norm is solid on *X* :

$$x_1, x_2 \in X, \ |x_1| \le |x_2| \Rightarrow \|x_1\| \le \|x_2\|.$$

Almost all Banach function spaces have a natural structure of a Banach lattice. From the above definitions, clearly, any Banach lattice is an ordered Banach space. The converse is false. A first example of an ordered Banach space that is not a lattice is the space $\mathcal{SM}(n \times n)$ of all symmetric $n \times n$ matrices with real entries. The order relation on this space is given by:

$$A, B \in \mathcal{SM}(n \times n), \ A \le B \text{ if and only if } \langle Ah, h \rangle \le \langle Bh, h \rangle \text{ for all } h \in \mathbb{R}^n.$$

From this definition, we infer that $A \le B$ if and only if $B - A$ is positive semidefinite. The norm of the symmetric matrix $A$ is $\|A\| = \sup_{\|h\| \le 1} |\langle Ah, h \rangle|$, where $\|h\|$ denotes the Euclidean norm of the vector $h$. These definitions and notations make sense and have motivations in the infinite-dimensional case. Namely, if $H$ is an arbitrary infinite-dimensional real or complex Hilbert space, a linear operator $A : H \to H$ is called a symmetric operator if $\langle Ax, y \rangle = \langle x, Ay \rangle$ for all $x, y \in H$. A linear symmetric (continuous) operator is called a self-adjoint operator. Of note, any symmetric linear operator acting on the whole of $H$ is continuous, and therefore self-adjoint, thanks to the closed graph theorem. The last definition makes sense for linear operators $A : D(A) \to H$, where $D(A) \subseteq H$ is a vector subspace of $H$, called the domain of definition of $A$. In this case, $\langle Ax, y \rangle = \langle x, Ay \rangle$ holds for all $x, y \in D(A)$. To avoid the inconvenience arising from the fact that the real vector space of self-adjoint operators is not a lattice, as well as the noncommutativity of the multiplication (composition) of self-adjoint operators (and of symmetric square matrices), the following subspace has been studied. Let $A \in \mathcal{A}(H)$, where $\mathcal{A}(H)$ is the real vector space of all self-adjoint operators acting on $H$. We define:
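For readers who prefer a computational check, the order and norm on $\mathcal{SM}(n \times n)$ just described can be tested numerically. The following is a sketch of ours (the matrices are arbitrary illustrative examples): $A \le B$ is decided by the smallest eigenvalue of $B - A$, and for a symmetric matrix the norm $\sup_{\|h\| \le 1} |\langle Ah, h \rangle|$ equals the largest absolute eigenvalue.

```python
import numpy as np

# A <= B in SM(n x n) means <Ah, h> <= <Bh, h> for all h, i.e.
# B - A is positive semidefinite; the norm is sup_{||h||<=1} |<Ah, h>|.
A = np.array([[1.0, 0.5],
              [0.5, 1.0]])
B = A + np.diag([1.0, 2.0])          # B - A is PSD, hence A <= B

def leq(A, B):
    # order check via the smallest eigenvalue of B - A
    return np.linalg.eigvalsh(B - A).min() >= -1e-12

def op_norm(A):
    # for symmetric A, the sup equals the largest |eigenvalue|
    return np.abs(np.linalg.eigvalsh(A)).max()
```

Note the order is only partial: for generic symmetric $A$, $B$, neither $A \le B$ nor $B \le A$ holds, which reflects the fact that $\mathcal{SM}(n \times n)$ is not a lattice.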

$$\mathcal{Y}_1(A) := \{V \in \mathcal{A}(H); \ AV = VA\}, \quad \mathcal{Y}(A) := \{W \in \mathcal{Y}_1(A); \ UW = WU \ \forall U \in \mathcal{Y}_1(A)\}.$$

Then, $\mathcal{Y}(A)$ is an order-complete Banach lattice and a commutative real algebra of self-adjoint operators (according to [22]). $\mathcal{P} = \mathbb{R}[t_1, \ldots, t_n]$ is the real vector space of all polynomial functions with real coefficients in $n$ real variables $t_1, \ldots, t_n$. In what follows, $F$ is a closed, unbounded subset of $\mathbb{R}^n$, and $\mathcal{P}_+(F)$ is the convex cone of polynomials $p : F \to \mathbb{R}$ with $p(t) \ge 0$ for all $t \in F$. We denote by $\mathcal{P}_{++}(F)$ a convex subcone of $\mathcal{P}_+(F)$ whose elements are special nonnegative polynomials. For example, $\mathcal{P}_{++}(\mathbb{R}^n)$ can be the convex cone of all sums of polynomials of the form $p_1 \otimes \cdots \otimes p_n$, where:

$$\begin{array}{c} (p\_1 \otimes \cdots \otimes p\_n)(t\_1, \ldots, t\_n) := p\_1(t\_1) \cdots p\_n(t\_n), \ t = (t\_1, \ldots, t\_n) \in \mathbb{R}^n, \\\ p\_i \in \mathcal{P}\_+(\mathbb{R}), \ i = 1, \ldots, n. \end{array} \tag{1}$$

We recall that:

$$p \in \mathcal{P}\_{+}(\mathbb{R}) \Leftrightarrow p = q^{2} + r^{2} \tag{2}$$

for some polynomials *q*,*r* and

$$p \in \mathcal{P}_{+}(\mathbb{R}_{+}) \Leftrightarrow p(t) = q(t)^{2} + t\, r(t)^{2} \ \text{for all } t \in \mathbb{R}_{+} := [0, \infty), \tag{3}$$

for some $q, r \in \mathbb{R}[t]$. We denote by $\mathbb{N} := \{0, 1, \ldots\}$ the set of all nonnegative integers. If $F$ is a closed unbounded subset of $\mathbb{R}^n$, then $C_c(F)$ is the vector space of all real-valued continuous compactly supported functions defined on $F$. In the sequel, all the involved vector spaces and linear operators (or functionals) are considered over the real field.
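Decomposition (2) can be computed explicitly from the complex roots of $p$: writing $p(t) = c \prod_k (t - z_k)(t - \overline{z_k})$ and $g(t) = \sqrt{c}\, \prod_k (t - z_k)$, one gets $p(t) = |g(t)|^2 = (\mathrm{Re}\, g)(t)^2 + (\mathrm{Im}\, g)(t)^2$ on the real line. The following sketch of ours assumes a positive leading coefficient and uses a crude tolerance-based pairing of roots (real roots of even multiplicity are simply split in half), so it is an illustration rather than a robust implementation.

```python
import numpy as np

def sos_decomposition(coeffs):
    """Split p >= 0 on R (coefficients low -> high, positive leading
    coefficient) as p = q^2 + r^2, as in (2), via complex roots."""
    p = np.polynomial.Polynomial(coeffs)
    roots = p.roots()
    # roots come in conjugate pairs; keep one root per pair
    chosen = [z for z in roots if z.imag > 1e-9]
    # real roots have even multiplicity: keep half of them (tolerance-based)
    real = sorted(z.real for z in roots if abs(z.imag) <= 1e-9)
    chosen += [complex(x) for x in real[::2]]
    g = np.polynomial.Polynomial([np.sqrt(coeffs[-1])])
    for z in chosen:
        g = g * np.polynomial.Polynomial([-z, 1.0])
    # on the real line, p(t) = |g(t)|^2 = (Re g)(t)^2 + (Im g)(t)^2
    q = np.polynomial.Polynomial(g.coef.real)
    r = np.polynomial.Polynomial(g.coef.imag)
    return q, r

# p(t) = (t^2 + 1)(t^2 + 4) = t^4 + 5 t^2 + 4 >= 0 on R
q, r = sos_decomposition([4.0, 0.0, 5.0, 0.0, 1.0])
```

For this example one obtains $q(t) = t^2 - 2$ and $r(t) = -3t$, and indeed $(t^2-2)^2 + 9t^2 = t^4 + 5t^2 + 4$.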

The classical moment problem can be formulated as follows: given a sequence $(y_j)_{j \in \mathbb{N}^n}$ of real numbers and a closed subset $F \subseteq \mathbb{R}^n$, $n \in \{1, 2, \ldots\}$, find a positive regular Borel measure $\mu$ on $F$ such that $\int_F t^j d\mu = y_j$, $j \in \mathbb{N}^n$. This is the full moment problem. The existence, uniqueness, and construction of the unknown solution $\mu$ are the focus of attention. The truncated (or reduced) moment problem requires the interpolation moment conditions only for $j_k \le d$, $k = 1, \ldots, n$, $j = (j_1, \ldots, j_n)$, where $d$ is a given positive integer. The numbers $y_j$, $j \in \mathbb{N}^n$, are called the moments of the measure $\mu$. When a sandwich condition on the solution is required, we have a Markov moment problem. The moment problem is an inverse problem since the measure $\mu$ is not known: it must be found starting from its moments. Instead of real number moments, one can work with elements $y_j \in Y$, $j \in \mathbb{N}^n$, where $Y$ is an order-complete Banach lattice of functions or of self-adjoint operators. If the $y_j$ are operators, we have an operator-valued moment problem. When $Y$ is a Banach lattice of functions, we have a vector-valued moment problem.
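As a concrete scalar instance (our illustration, not from the paper): on $F = [0, \infty)$, the measure $d\nu = e^{-t}dt$ has moments $y_j = j!$, and a classical necessary condition for a sequence to be a Stieltjes moment sequence is that both Hankel quadratic forms $\sum \lambda_i \lambda_j y_{i+j}$ and $\sum \lambda_i \lambda_j y_{i+j+1}$ be positive semidefinite.

```python
import numpy as np
from math import factorial

# moments of d(nu) = e^{-t} dt on F = [0, inf): y_j = integral t^j e^{-t} dt = j!
y = [factorial(j) for j in range(7)]

def hankel(m, shift=0):
    """Largest square Hankel matrix (m[i+j+shift])_{i,j} built from m."""
    d = (len(m) + 1 - shift) // 2
    return np.array([[float(m[i + j + shift]) for j in range(d)] for i in range(d)])

# Stieltjes necessary condition: both Hankel forms are positive semidefinite
H0, H1 = hankel(y), hankel(y, shift=1)
psd = lambda H: np.linalg.eigvalsh(H).min() >= -1e-9
```

A sequence such as $(1, 5, 1)$ fails the test, since $\begin{pmatrix}1 & 5\\ 5 & 1\end{pmatrix}$ has a negative eigenvalue, so it cannot be a moment sequence.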
The requirement that $Y$ be order-complete is motivated by the necessity of applying Hahn–Banach-type theorems in order to obtain a linear positive extension $T : X \to Y$ of the linear operator $T_0 : \mathcal{P} \to Y$ satisfying the moment conditions $T_0(\varphi_j) := y_j$, $j \in \mathbb{N}^n$, $\varphi_j(t) = t^j = t_1^{j_1} \cdots t_n^{j_n}$, from $\mathcal{P}$ to an ordered Banach space $X$ containing both $\mathcal{P}$ and $C_c(F)$. When a sandwich condition $T_1 \le T \le T_2$ is required on the extension $T$, where $T_i$, $i = 1, 2$, are given bounded linear operators mapping $X$ into $Y$, we have a Markov moment problem. In this case, the positivity of $T$ on $X_+$ is replaced by the condition $T_1 \le T$, while the requirement $T \le T_2$ controls the norm of the solution $T$. As in the case of a scalar-valued linear solution, we now study the existence, the uniqueness, and eventually the construction of a/the linear solution $T$ satisfying the interpolation moment conditions and the sandwich condition. A basic result in solving the classical moment problem on unbounded closed subsets is the Haviland theorem [28]. In [29], the result of Kantorovich on the extension of positive linear operators preserving linearity and positivity was reviewed and proven. This is a Hahn–Banach-type result. The references [30–43] point out various aspects of the moment problem and related problems. Unlike the case of other unbounded subsets of $\mathbb{R}^n$, $n \ge 2$, the expression of nonnegative polynomials on a strip in terms of sums of squares is known, due to M. Marshall's theorem [39]. Using the polynomial approximation ensured by Lemma 1 and Theorem 1, proven below, the Markov moment problem is solved in terms of quadratic forms (see Theorem 3 below).
Applications of Hahn–Banach-type extension theorems to the study of the isotonicity (increasing monotonicity) of continuous convex operators on the positive cone *X*<sup>+</sup> were

published in the article [44]. References [45–48] focus mainly on several aspects of the truncated or full Markov moment problem. The rest of this paper is organized as follows. Section 2 summarizes the basic methods and results used throughout the proofs of the theorems in the present paper. Section 3 is devoted to the results: polynomial approximation on unbounded subsets in some $L^1_\nu$ spaces; applications of such results, accompanied by other theorems, to the existence and uniqueness of the solution of the Markov moment problem on an unbounded closed subset; and characterizations of the sandwich condition for bounded linear operators. All these applications of approximation-type results are partially or completely formulated in terms of quadratic forms. Section 4 concludes the paper.

#### **2. Methods**

Here are the basic methods used directly or as background of this paper:


#### **3. Results**

*3.1. On Polynomial Approximation on Unbounded Closed Subsets $F \subseteq \mathbb{R}^n$ in Spaces $L^1_\nu(F)$, Where $\nu$ Is a Moment-Determinate Positive Regular Borel Measure on $F$*

In the sequel, the following approximation lemmas are applied.

**Lemma 1.** *Let $F \subseteq \mathbb{R}^n$ be an unbounded closed subset and $\nu$ a moment-determinate positive regular Borel measure on $F$ with finite moments of all natural orders. Then, for any $x \in C_c(F)$ with $x(t) \ge 0$ for all $t \in F$, there exists a sequence $(p_m)_m$, $p_m \ge x$, $m \in \mathbb{N}$, with $p_m \to x$ in $L^1_\nu(F)$. Consequently, we have:*

$$\lim_m \int_F p_m(t)\, d\nu = \int_F x(t)\, d\nu,$$

*and* $\mathcal{P}_+ = \mathcal{P}_+(F)$ *is dense in* $(L^1_\nu(F))_+$, *while* $\mathcal{P}$ *is dense in* $L^1_\nu(F)$.

**Proof.** To prove the assertions of the statement, it is sufficient to show that for any $x \in (C_c(F))_+$ we have

$$Q_1(x) := \inf \left\{ \int_F p(t)\, d\nu; \ p \ge x, \ p \in \mathcal{P} \right\} = \int_F x(t)\, d\nu.$$

Obviously, one has

$$Q_1(x) \ge \int_F x(t)\, d\nu.$$

To prove the converse, we define the linear form

$$T_0 : X_0 := \mathcal{P} \oplus \mathrm{Sp}\{x\} \to \mathbb{R}, \quad T_0(p + \alpha x) := \int_F p(t)\, d\nu + \alpha Q_1(x), \ p \in \mathcal{P}, \ \alpha \in \mathbb{R}.$$

Next, we show that $T_0$ is positive on $X_0$. In fact, for $\alpha < 0$, one has (from the definition of $Q_1$, which is a sublinear functional on $X_1$):

$$p + \alpha x \ge 0 \Rightarrow p \ge (-\alpha)x \Rightarrow Q_1((-\alpha)x) = (-\alpha)Q_1(x) \le \int_F p(t)\, d\nu \Rightarrow T_0(p + \alpha x) \ge 0.$$

If $\alpha \ge 0$, we infer that:

$$\begin{aligned} 0 &= Q_1(\mathbf{0}) = Q_1(\alpha x - \alpha x) \le \alpha Q_1(x) + Q_1(-\alpha x) \Rightarrow \\ \int_F p(t)\, d\nu &\ge Q_1(-\alpha x) \ge -\alpha Q_1(x) \Rightarrow T_0(p + \alpha x) \ge 0, \end{aligned}$$

so that, in both possible cases, $x_0 \in (X_0)_+ \Rightarrow T_0(x_0) \ge 0$. Since $X_0$ contains the space of polynomial functions, which is a majorizing subspace of $X_1$, there exists a linear positive extension $T : X \to \mathbb{R}$ of $T_0$ (cf. [29]), which is continuous on $C_c(F)$ with respect to the sup-norm. Therefore, $T$ has a representation by means of a positive regular Borel measure $\mu$ on $F$ such that

$$T(x) = \int_F x(t)\, d\mu, \ x \in C_c(F).$$

Let $p \in \mathcal{P}_+$ be a nonnegative polynomial function. There is a nondecreasing sequence $(x_m)_m$ of continuous nonnegative functions with compact support such that $x_m \nearrow p$ pointwise on $F$. The positivity of $T$ and Lebesgue's dominated convergence theorem for $\mu$ yield

$$\int_F p(t)\, d\nu = T(p) \ge \sup_m T(x_m) = \sup_m \int_F x_m(t)\, d\mu = \int_F p(t)\, d\mu, \ p \in \mathcal{P}_+.$$

Thanks to Haviland's theorem [28], there exists a positive Borel regular measure *λ* on *F* such that

$$
\lambda(p) = \nu(p) - \mu(p) \Leftrightarrow \nu(p) = \lambda(p) + \mu(p), \ p \in \mathcal{P}.
$$

Since *ν* is assumed to be *M*-determinate, it follows that:

$$\nu(B) = \lambda(B) + \mu(B)$$

for any Borel subset $B$ of $F$. From this last assertion, approximating each $x \in (L^1_\nu(F))_+$ by a nondecreasing sequence of nonnegative simple functions and using Lebesgue's convergence theorem, one obtains, first for positive functions and then for arbitrary $\nu$-integrable functions $\varphi$:

$$\int_F \varphi\, d\nu = \int_F \varphi\, d\lambda + \int_F \varphi\, d\mu, \quad \varphi \in L^1_\nu(F).$$

In particular, we must have

$$\int_F x\, d\nu \ge \int_F x\, d\mu = T(x) = T_0(x) = Q_1(x).$$

The conclusion is that $Q_1(x) = \int_F x(t)\, d\nu$. This ends the proof.

Using Bernstein polynomials of $n$ real variables, Lemma 1 applied to $n = 1$, $F = \mathbb{R}$, and Fubini's theorem, we derive the following multidimensional polynomial approximation result.

**Lemma 2.** *Let $\nu = \nu_1 \times \cdots \times \nu_n$ be a product of $n$ moment-determinate positive regular Borel measures on $\mathbb{R}$ with finite moments of all orders. Then, any nonnegative continuous compactly supported function $\psi \in (C_c(\mathbb{R}^n))_+$ can be approximated by sums of products:*

$$(p_1 \otimes \cdots \otimes p_n)(t_1, \ldots, t_n) := p_1(t_1) \cdots p_n(t_n), \ t = (t_1, \ldots, t_n) \in \mathbb{R}^n,$$

*where p<sup>j</sup> is a nonnegative polynomial on the entire real line*, *j* = 1, . . . , *n, and any such sum of special polynomials dominates ψon* R*<sup>n</sup>* .

**Lemma 3.** *Let $\nu = \nu_1 \times \cdots \times \nu_n$ be a product of $n$ moment-determinate positive regular Borel measures on $\mathbb{R}_+$ with finite moments of all orders. Then, any nonnegative continuous compactly supported function $\psi \in (C_c(\mathbb{R}^n_+))_+$ can be approximated by sums of products:*

$$(p_1 \otimes \cdots \otimes p_n)(t_1, \ldots, t_n) := p_1(t_1) \cdots p_n(t_n), \ t = (t_1, \ldots, t_n) \in \mathbb{R}^n_+,$$

*where p<sup>j</sup> is a nonnegative polynomial on the entire nonnegative semi axes*, *j* = 1, . . . , *n, and any such sum of special polynomials dominates ψ on* R*<sup>n</sup>* +.

**Proof.** Let $f \in (C_c(\mathbb{R}^n_+))_+$, $K_i = pr_i(\mathrm{supp}(f))$, $a_i = \inf K_i$, $b_i = \sup K_i$, $i = 1, \ldots, n$, and $K = [a_1, b_1] \times \cdots \times [a_n, b_n]$.

The restriction of $f$ to the parallelepiped $K$ can be approximated uniformly on $K$ by Bernstein polynomials $B_m$ in $n$ variables. Any such polynomial $B_m$ is a sum of products of the form $q_{m,1} \otimes \cdots \otimes q_{m,n}$, where each $q_{m,i}$ is a polynomial nonnegative on $[a_i, b_i]$, $i = 1, \ldots, n$, $m \in \mathbb{N}$. $B_m$ can be written as:

$$B_m = \sum_{\substack{k_i = 0, \ldots, m, \\ i = 1, \ldots, n}} q_{m,k_1} \otimes \cdots \otimes q_{m,k_n},$$

where $q_{m,k_i}$ is a nonnegative polynomial on $[a_i, b_i]$, $i = 1, \ldots, n$, $m \in \mathbb{N}$. By the Weierstrass–Bernstein uniform approximation theorem, we have:

$$\|f - B_m\|_\infty := \sup_{t \in K} |f(t) - B_m(t)| \to 0, \ m \to \infty.$$

By an abuse of notation, we write $q_{m,i} = q_{m,k_i}$. We need a similar approximation by sums of tensor products of nonnegative polynomials $p_i$, $p_i(t_i) \ge 0$ for all $t_i \in \mathbb{R}_+$, $i = 1, \ldots, n$, in the space $L^1_\nu(\mathbb{R}^n_+)$. To this aim, the idea is to use Lemma 1 for $n = 1$, $F = \mathbb{R}_+$, followed by Fubini's theorem. We define $q_{0,m,i} = q_{m,i} \cdot \chi_{[a_i, b_i]}$, $i = 1, \ldots, n$, and $f_i(t) = q_{m,i}(t)$ for $t \in [a_i, b_i]$, $f_i(t) = 0$ for $t$ outside an interval $[a_i - \varepsilon, b_i + \varepsilon]$ with small $\varepsilon > 0$, the graph of $f_i$ on $[b_i, b_i + \varepsilon]$ being the line segment with endpoints $(b_i, q_{m,i}(b_i))$ and $(b_i + \varepsilon, 0)$. We proceed similarly on the interval $[a_i - \varepsilon, a_i]$. Clearly, for $\varepsilon > 0$ small enough, $f_i$ approximates $q_{0,m,i}$ in $L^1_{\nu_i}(\mathbb{R}_+)$ as accurately as we wish. On the other hand, $f_i$ is nonnegative, compactly supported, and continuous on $\mathbb{R}_+$, so that Lemma 1 ensures the existence of an approximating polynomial $p_i$ with respect to the norm of $L^1_{\nu_i}(\mathbb{R}_+)$, with $p_i(t) \ge 0$ for all $t \in \mathbb{R}_+$, $i = 1, \ldots, n$. According to Fubini's theorem, the preceding reasoning yields $p_1 \otimes \cdots \otimes p_n$, which approximates $f_1 \otimes \cdots \otimes f_n$, and $f_1 \otimes \cdots \otimes f_n$, which approximates $q_{0,m,1} \otimes \cdots \otimes q_{0,m,n} = q_{0,m,k_1} \otimes \cdots \otimes q_{0,m,k_n}$. The approximations hold for finite sums of these products in $L^1_\nu(\mathbb{R}^n_+)$.
Moreover, finite sums of functions $q_{0,m,1} \otimes \cdots \otimes q_{0,m,n}$ approximate $f$ uniformly on $K$ because their restrictions to $K$ are the restrictions to $K$ of the approximating Bernstein polynomials $(B_m)_{m \in \mathbb{N}}$ associated with $f$. Since $f$ and $q_{0,m,1} \otimes \cdots \otimes q_{0,m,n}$ vanish outside $K$, the norm $\| \cdot \|_1$ of $L^1_\nu(\mathbb{R}^n_+)$ is evaluated as:

$$\Big\| f - \sum_{\substack{k_i = 0, \ldots, m, \\ i = 1, \ldots, n}} q_{0,m,k_1} \otimes \cdots \otimes q_{0,m,k_n} \Big\|_1 = \int_K \Big| f - \sum_{\substack{k_i = 0, \ldots, m, \\ i = 1, \ldots, n}} q_{m,k_1} \otimes \cdots \otimes q_{m,k_n} \Big|\, d\nu \le \sup_{t \in K} |f(t) - B_m(t)| \cdot \nu(K) \to 0, \ m \to \infty.$$

The conclusion is that $f$ can be approximated in $L^1_\nu(\mathbb{R}^n_+)$ by sums of products $p_1 \otimes \cdots \otimes p_n$, where each $p_i$ is nonnegative on $\mathbb{R}_+$, $i = 1, \ldots, n$. This ends the proof.
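The tensor-product Bernstein construction used in the proof can be sketched numerically for $n = 2$ on $[0,1]^2$ (our illustration; a general parallelepiped $K$ only rescales the variables). The approximant is a finite sum of products of nonnegative one-dimensional Bernstein basis polynomials, exactly as in the displayed formula for $B_m$.

```python
import numpy as np
from math import comb

def bernstein_2d(f, m, t1, t2):
    """Tensor-product Bernstein polynomial B_m(f) on [0,1]^2, a sum of
    products q_{k1}(t1) * q_{k2}(t2) of nonnegative 1-D polynomials."""
    k = np.arange(m + 1)
    # nonnegative Bernstein basis polynomials in each variable
    b1 = np.array([comb(m, j) * t1**j * (1 - t1)**(m - j) for j in k])
    b2 = np.array([comb(m, j) * t2**j * (1 - t2)**(m - j) for j in k])
    # sampled values f(i/m, j/m) are the (nonnegative) coefficients
    vals = np.array([[f(i / m, j / m) for j in k] for i in k])
    return b1.T @ vals @ b2
```

For the separable function $f(u, v) = uv$ the tensor construction is exact, since each one-dimensional Bernstein operator reproduces linear functions.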

**Example 1.** *For any $\alpha \in (0, \infty)$, $d\nu = e^{-\alpha t}dt$ is a moment-determinate positive Borel measure on $\mathbb{R}_+$, according to [14]. The application of Lemma 3 shows that for the product measure:*

$$d\nu = \exp\Big(-\sum_{j=1}^{n} \alpha_j t_j\Big)\, dt_1 \cdots dt_n = \exp(-\alpha_1 t_1)dt_1 \times \cdots \times \exp(-\alpha_n t_n)dt_n, \quad \alpha_j > 0, \ j = 1, \ldots, n,$$

*the polynomials are dense in $L^1_\nu(\mathbb{R}^n_+)$. In particular, the measure $\nu$ is moment-determinate on $\mathbb{R}^n_+$. A similar consequence follows from Lemma 2 for the measure*

$$d\mu = \exp\Big(-\sum_{j=1}^{n} \alpha_j t_j^2\Big)\, dt_1 \cdots dt_n, \quad \alpha_j > 0, \ j = 1, \ldots, n.$$

In this case, the polynomials are dense in $L^1_\mu(\mathbb{R}^n)$; in particular, $\mu$ is a moment-determinate measure on $\mathbb{R}^n$.
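The moments of the exponential product measure in Example 1 factor through Fubini's theorem into one-dimensional moments $\int_0^\infty t^j e^{-\alpha t}\, dt = j!/\alpha^{j+1}$. A quick numerical check of ours, using Gauss–Laguerre quadrature (the values of $\alpha_j$ are arbitrary):

```python
import numpy as np
from math import factorial

# Gauss-Laguerre nodes/weights integrate  int_0^inf f(t) e^{-t} dt  exactly
# for polynomials f of degree <= 2*20 - 1
x, w = np.polynomial.laguerre.laggauss(20)

def moment(j, alpha):
    # int_0^inf t^j e^{-alpha t} dt = j! / alpha^{j+1}  (substitute s = alpha t)
    return (w @ x**j) / alpha ** (j + 1)

# moments of the product measure exp(-a1 t1 - a2 t2) dt1 dt2 factor,
# by Fubini's theorem, into one-dimensional moments
a1, a2 = 1.5, 0.5
y_23 = moment(2, a1) * moment(3, a2)
```
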

#### *3.2. Solving Markov Moment Problems in Terms of Signatures of Quadratic Forms*

The approximation results reviewed in Section 3.1 allow the extension of sandwich conditions on the solution $T$, preserving the interpolation moment conditions, from the subspace of polynomials to the entire space $L^1_\nu(F)$ for moment-determinate measures $\nu$. The results stated in the sequel complete theorems previously published in [13,14,16].

**Theorem 1.** *Let $F$ be a closed unbounded subset of $\mathbb{R}^n$, $Y$ an order-complete Banach lattice, $(y_j)_{j \in \mathbb{N}^n}$ a given sequence in $Y$, and $\nu$ a moment-determinate positive regular Borel measure on $F$ with finite moments of all orders. Let $T_1, T_2 \in B(L^1_\nu(F), Y)$ be two bounded linear operators from $L^1_\nu(F)$ to $Y$. The following statements are equivalent:*


$$\sum_{j \in J_0} a_j \varphi_j \ge 0 \ \text{on } F \Rightarrow \sum_{j \in J_0} a_j T_1(\varphi_j) \le \sum_{j \in J_0} a_j y_j \le \sum_{j \in J_0} a_j T_2(\varphi_j).$$

**Proof.** We define *T*<sup>0</sup> : P → *Y* by

$$T\_0\left(\sum\_{j\in J\_0} \lambda\_j \varphi\_j\right) := \sum\_{j\in J\_0} \lambda\_j y\_j. \tag{4}$$

Here, $J_0 \subset \mathbb{N}^n$ is an arbitrary finite subset, and $\lambda_j$, $j \in J_0$, are real coefficients. With this notation, point (b) says that

$$T\_1(p) \le T\_0(p) \le T\_2(p), \ p \in \mathcal{P}\_+(F). \tag{5}$$

In other words, $U_1 := T_0 - T_1$ and $U_2 := T_2 - T_1$, $U_i : \mathcal{P} \to Y$, $i = 1, 2$, are linear operators that are positive on the positive cone $\mathcal{P}_+(F)$ of the ordered vector space $\mathcal{P}$, and $U_1|_{\mathcal{P}_+(F)} \le U_2|_{\mathcal{P}_+(F)}$. According to the Kantorovich extension result for positive linear operators, there exists a positive linear extension $V_1$ of $U_1$ from $\mathcal{P}$ to the dense subspace $X_1 := \{f \in X; \ \exists p \in \mathcal{P}, \ |f| \le p\}$ of $X := L^1_\nu(F)$, since $\mathcal{P}$ is a majorizing subspace of $X_1$. Clearly, the space $X_1$ contains both subspaces $C_c(F)$ and $\mathcal{P}$. Then, $V_1 + T_1$ extends $T_0$ to a linear operator:

$$W_1 : X_1 \to Y, \quad W_1 := V_1 + T_1 \ge T_1 \ \text{on } \mathcal{P}_+(F).$$

Using Lemma 1, the continuity of $T_1$, $T_2$, and the inequalities $\mathbf{0} \le U_1 \le U_2$ on $\mathcal{P}_+$, we infer that for any sequence $(g_l)_l$ of nonnegative compactly supported functions with $g_l \to \mathbf{0}$, there exists a sequence of polynomials $(p_l)_l$ with $\mathbf{0} \le g_l \le p_l$ for all $l$ and $p_l - g_l \to \mathbf{0}$, $l \to \infty$. These yield:

$$p\_l = (p\_l - \mathbf{g}\_l) + \mathbf{g}\_l \to \mathbf{0}, \; l \to \infty. \tag{6}$$

On the other hand, (5) and (6) lead to:

$$\mathbf{0} \leftarrow T\_1(p\_l) \le T\_0(p\_l) \le T\_2(p\_l) \to 0.$$

Thus, $W_1(p_l) = T_0(p_l) \to \mathbf{0}$, which further implies

$$\mathbf{0} \le \mathcal{W}\_1(\mathbf{g}\_l) \le \mathcal{W}\_1(p\_l) \to \mathbf{0}.$$

Thus, $W_1(g_l) \to \mathbf{0}$ for any sequence of elements of $(C_c(F))_+$ convergent to zero. Now, let $(g_l)_l$ be an arbitrary sequence in $C_c(F)$ with $g_l \to \mathbf{0}$. Then, $g_l^+ \to \mathbf{0}$ and $g_l^- \to \mathbf{0}$, and the preceding reasoning implies $W_1(g_l^+) \to \mathbf{0}$, $W_1(g_l^-) \to \mathbf{0}$. Therefore, $W_1(g_l) = W_1(g_l^+) - W_1(g_l^-) \to \mathbf{0}$. The conclusion is that the linear operator $W_1$ is continuous on $C_c(F)$. It admits a unique continuous linear extension $T \in B(X, Y)$, since $C_c(F)$ is dense in $X$. Hence, $T$ is continuous and defined on the entire space $X = L^1_\nu(F)$, verifying $T(\varphi_j) = T_0(\varphi_j) = y_j$, $j \in \mathbb{N}^n$. If $\psi \in X_+$, there exists a sequence $(g_l)_l$ of functions in $(C_c(F))_+$ such that $g_l \to \psi$ in $X$. If $(p_l)_l$ is a sequence of polynomial functions with $g_l \le p_l$ for all $l$ and $p_l - g_l \to \mathbf{0}$, then the continuity of the operators $T_1$, $T$, $T_2$ on $X$ and the inequalities (5) yield:

$$T\_1(\psi) = \lim\_{l} T\_1(p\_l) \le \lim\_{l} T\_0(p\_l) = \lim\_{l} T(p\_l) \le \lim\_{l} T\_2(p\_l) = T\_2(\psi), \ \psi \in X\_+.$$

This ends the proof.
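In the scalar case $Y = \mathbb{R}$, with $\varphi_j(t) = t^j$ and nonnegative polynomials on $\mathbb{R}$ written as sums of squares via (2), sandwich conditions of this type reduce to inequalities between Hankel matrices in the positive semidefinite order. A numerical sketch of ours (with $\nu$ Gaussian and hypothetical constant densities $h_1 \equiv 0.5$, $h_2 \equiv 1.5$ playing the roles of $T_1$, $T_2$):

```python
import numpy as np

# d(nu) = e^{-t^2} dt on R; Gauss-Hermite quadrature reproduces its moments
x, w = np.polynomial.hermite.hermgauss(30)
mom = lambda h: np.array([(w * h(x)) @ x**k for k in range(7)])

y  = mom(lambda t: np.ones_like(t))         # moments y_j of the candidate h == 1
m1 = mom(lambda t: 0.5 * np.ones_like(t))   # moments of h1 d(nu), h1 == 0.5
m2 = mom(lambda t: 1.5 * np.ones_like(t))   # moments of h2 d(nu), h2 == 1.5

hank = lambda m: np.array([[m[i + j] for j in range(4)] for i in range(4)])
psd = lambda H: np.linalg.eigvalsh(H).min() >= -1e-9
# quadratic-form sandwich: H(m1) <= H(y) <= H(m2) in the PSD order
```

Here both differences of Hankel matrices are semidefinite, as expected, since $y - m_1$ and $m_2 - y$ are moment sequences of the positive measures $0.5\,d\nu$.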

If the nonnegative polynomials on $F$ are expressible in terms of sums of squares, Theorem 1 allows the characterization of the existence and uniqueness of the solution in terms of quadratic forms. The following consequences hold. We start with the simplest case, when $F = \mathbb{R}$.

**Corollary 1.** *Let $X = L^1_\nu(\mathbb{R})$, where $\nu$ is a moment-determinate positive regular Borel measure on $\mathbb{R}$ with finite moments of all orders. Assume that $Y$ is an arbitrary order-complete Banach lattice and $(y_n)_{n \ge 0}$ is a given sequence with its terms in $Y$. Let $T_1, T_2$ be two linear operators from $X$ to $Y$ such that $\mathbf{0} \le T_1 \le T_2$ on $X_+$. The following statements are equivalent:*


$$\sum\_{i,j \in I\_0} \lambda\_i \lambda\_j T\_1 \left(\varphi\_{i+j}\right) \le \sum\_{i,j \in I\_0} \lambda\_i \lambda\_j y\_{i+j} \le \sum\_{i,j \in I\_0} \lambda\_i \lambda\_j T\_2 \left(\varphi\_{i+j}\right).$$

**Proof.** We apply Theorem 1 for $F = \mathbb{R}$, as well as the explicit form (2) of nonnegative polynomials on the real axis. One uses the obvious equality:

$$p = \sum_{j \in J_0} \lambda_j \varphi_j \Rightarrow p^2 = \sum_{i,j \in J_0} \lambda_i \lambda_j \varphi_i \varphi_j = \sum_{i,j \in J_0} \lambda_i \lambda_j \varphi_{i+j}.$$

Here, $J_0 \subset \mathbb{N}$ is an arbitrary finite subset and $\lambda_j \in \mathbb{R}$, $j \in J_0$. It remains to prove that

$$\|T_1\| \le \|T\| \le \|T_2\|.$$

The positivity of the linear operators $T_1$, $T$, $T_2$, $T - T_1$, $T_2 - T$ on $X_+$ and their continuity yield:

$$\pm T_1(x) = T_1(\pm x) \le T_1(|x|) \le T(|x|),$$

which implies $|T_1(x)| \le T(|x|)$, $x \in X$. Since $Y$ is a Banach lattice, we infer that the inequalities:

$$\|T_1(x)\| \le \|T(|x|)\| \le \|T\|\, \|x\|,$$

hold for all $x \in X$. This proves that $\|T_1\| \le \|T\|$. Similarly, we show that $\|T\| \le \|T_2\|$. This ends the proof.
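The quadratic-form conditions above rest on the identity $\big(\sum_{j \in J_0} \lambda_j t^j\big)^2 = \sum_{i,j \in J_0} \lambda_i \lambda_j t^{i+j}$ used in the proof. A quick numerical sanity check of this identity (a sketch with illustrative values):

```python
import random

def quadratic_form(lambdas, t):
    # sum_{i,j} lambda_i * lambda_j * t^(i+j)
    return sum(li * lj * t ** (i + j)
               for i, li in enumerate(lambdas)
               for j, lj in enumerate(lambdas))

def squared_poly(lambdas, t):
    # (sum_j lambda_j * t^j)^2
    return sum(lj * t ** j for j, lj in enumerate(lambdas)) ** 2

random.seed(0)
lambdas = [random.uniform(-1, 1) for _ in range(5)]
for t in (-1.3, 0.0, 0.7, 2.5):
    assert abs(quadratic_form(lambdas, t) - squared_poly(lambdas, t)) < 1e-9
```

In particular, the double sum is automatically nonnegative for every real $t$, which is why such sums can stand in for the cone of squares.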

Here is the scalar-valued version of Corollary 1.

**Corollary 2.** *Let ν be a positive regular moment-determinate Borel measure on* R, *with finite moments of all orders. Assume that h*1, *h*<sup>2</sup> *are two functions in L* ∞ *ν* (R) *such that* 0 ≤ *h*<sup>1</sup> ≤ *h*<sup>2</sup> *almost everywhere. Let* (*yn*)*n*≥<sup>0</sup> *be a given sequence of real numbers. The following statements are equivalent:*


$$\sum_{i,j \in J_0} \lambda_i \lambda_j \int_{\mathbb{R}} t^{i+j} h_1(t)\, d\nu \le \sum_{i,j \in J_0} \lambda_i \lambda_j y_{i+j} \le \sum_{i,j \in J_0} \lambda_i \lambda_j \int_{\mathbb{R}} t^{i+j} h_2(t)\, d\nu.$$

**Proof.** The implication (a) $\Rightarrow$ (b) is obvious. To prove the converse, we apply Corollary 1 to the case $Y = \mathbb{R}$, $T_i(f) := \int_{\mathbb{R}} h_i(t) f(t)\, d\nu$, $i = 1, 2$. The linear positive (hence continuous) functional $T$ is represented by a function $h \in L^\infty_\nu(\mathbb{R})$, according to the measure theory results from [9]. The moment interpolation conditions from Corollary 1 can be written as

$$\int_{\mathbb{R}} h(t) t^j\, d\nu = T(\varphi_j) = y_j, \ j \in \mathbb{N}.$$

To finish the proof, we must show that $h_1 \le h \le h_2$ $\nu$-almost everywhere in $\mathbb{R}$. According to Corollary 1, we already know that:

$$\int\_{\mathbb{R}} h\_1(t)f(t)d\nu \le \int\_{\mathbb{R}} h(t)f(t)d\nu \le \int\_{\mathbb{R}} h\_2(t)f(t)d\nu.$$

for all $f \in L^1_\nu(\mathbb{R})_+$. Writing this for $f = \chi_B$, where $B \subseteq \mathbb{R}$ is an arbitrary Borel subset with $\nu(B) \in (0, \infty)$, the following conclusion holds:

$$\int_B (h(t) - h_1(t))\, d\nu \ge 0, \quad \int_B (h_2(t) - h(t))\, d\nu \ge 0, \quad B \in \mathcal{B}, \ \nu(B) > 0.$$

Here, $\mathcal{B}$ is the sigma-algebra of all Borel subsets of $\mathbb{R}$. Now, a well-known measure theory argument [9] leads to $h_1(t) \le h(t) \le h_2(t)$ for almost all $t \in \mathbb{R}$ with respect to the measure $\nu$. This ends the proof.

If, in Corollaries 1 and 2, we take $\mathbb{R}_+$ instead of $\mathbb{R}$, the following statements hold, via proofs like those shown above.

**Corollary 3.** *Let $X = L^1_\nu(\mathbb{R}_+)$, where $\nu$ is a positive regular moment-determinate Borel measure on $\mathbb{R}_+$. Assume that $Y$ is an arbitrary order-complete Banach lattice and $(y_n)_{n \ge 0}$ is a given sequence with its terms in $Y$. Let $T_1$, $T_2$ be two linear operators from $X$ to $Y$ such that $0 \le T_1 \le T_2$ on $X_+$. The following statements are equivalent:*

(a) *There exists a unique bounded linear operator $T : X \to Y$ such that $T(\varphi_j) = y_j$ for all $j \in \mathbb{N}$, $0 \le T_1 \le T \le T_2$ on $X_+$, and $\|T_1\| \le \|T\| \le \|T_2\|$;*

(b) *For any finite subset $J_0 \subset \mathbb{N}$ and any $\{\lambda_j : j \in J_0\} \subset \mathbb{R}$:*


$$\sum_{i,j \in J_0} \lambda_i \lambda_j T_1(\varphi_{i+j+k}) \le \sum_{i,j \in J_0} \lambda_i \lambda_j y_{i+j+k} \le \sum_{i,j \in J_0} \lambda_i \lambda_j T_2(\varphi_{i+j+k}), \ k \in \{0, 1\}.$$

**Corollary 4.** *Let ν be a positive regular moment-determinate Borel measure on* R+, *with finite moments of all orders. Assume that h*1, *h*<sup>2</sup> *are two functions in L* ∞ *ν* (R+) *such that* **0** ≤ *h*<sup>1</sup> ≤ *h*<sup>2</sup> *almost everywhere. Let* (*yn*)*n*≥<sup>0</sup> *be a given sequence of real numbers. The following statements are equivalent:*

(a) *There exists a unique $h \in L^\infty_\nu(\mathbb{R}_+)$ such that $h_1 \le h \le h_2$ $\nu$-almost everywhere, and*

$$\int_{\mathbb{R}_+} t^j h(t)\, d\nu = y_j \ \text{for all } j \in \mathbb{N};$$

(b) *If $J_0 \subset \mathbb{N}$ is a finite subset and $\{\lambda_j : j \in J_0\} \subset \mathbb{R}$, then:*

$$\sum_{i,j \in J_0} \lambda_i \lambda_j \int_{\mathbb{R}_+} t^{i+j+k} h_1(t)\, d\nu \le \sum_{i,j \in J_0} \lambda_i \lambda_j y_{i+j+k} \le \sum_{i,j \in J_0} \lambda_i \lambda_j \int_{\mathbb{R}_+} t^{i+j+k} h_2(t)\, d\nu, \ k \in \{0, 1\}.$$

**Example 2.** *If, in Corollary 4, we take $d\nu = e^{-t}dt$, $h_1(t) := te^{-t}$, $h_2(t) := 1/2$, then $d\nu$ is moment-determinate* [14]*, and*

$$\begin{aligned} \int_{\mathbb{R}_+} t^{i+j+k} h_1(t)\, d\nu &= \int_0^\infty t^{i+j+k+1} e^{-2t}\, dt = 2^{-(i+j+k+2)} \int_0^\infty u^{i+j+k+1} e^{-u}\, du = 2^{-(i+j+k+2)} (i+j+k+1)!, \\ \int_{\mathbb{R}_+} t^{i+j+k} h_2(t)\, d\nu &= 2^{-1} (i+j+k)!. \end{aligned}$$

Thus, condition (b) must be written as follows:

$$\sum_{i,j \in J_0} \lambda_i \lambda_j 2^{-(i+j+k+2)} (i+j+k+1)! \le \sum_{i,j \in J_0} \lambda_i \lambda_j y_{i+j+k} \le \sum_{i,j \in J_0} \lambda_i \lambda_j 2^{-1} (i+j+k)!, \quad k \in \{0, 1\},$$

where $J_0 \subset \mathbb{N}$ is an arbitrary finite subset and $\lambda_j$, $j \in J_0$, are arbitrary real numbers.
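The closed-form moments in Example 2 can be checked numerically; the sketch below compares $2^{-(m+2)}(m+1)!$ and $2^{-1} m!$ (with $m = i+j+k$) against a basic trapezoidal quadrature of the defining integrals (the cutoff and step count are illustrative choices, not part of the example):

```python
import math

def trapezoid(f, a, b, n=100_000):
    # Composite trapezoidal rule on [a, b].
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return s * h

for m in range(4):  # m = i + j + k
    # integral of t^m * (t e^{-t}) * e^{-t} dt over [0, inf), tail truncated at 40
    lhs1 = trapezoid(lambda t: t ** (m + 1) * math.exp(-2 * t), 0.0, 40.0)
    assert abs(lhs1 - 2 ** -(m + 2) * math.factorial(m + 1)) < 1e-4
    # integral of t^m * (1/2) * e^{-t} dt over [0, inf)
    lhs2 = trapezoid(lambda t: t ** m * 0.5 * math.exp(-t), 0.0, 40.0)
    assert abs(lhs2 - 0.5 * math.factorial(m)) < 1e-4
```

The substitution $u = 2t$ in the first integral is exactly what produces the factor $2^{-(m+2)}$.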

We go on with the two-dimensional case, starting with the Markov moment problem on a strip. The motivation is that the explicit expression of nonnegative polynomials on a strip in terms of sums of squares is known, due to the following result of M. Marshall [39].

**Theorem 2.** *If $p(t_1, t_2) \in \mathbb{R}[t_1, t_2]$ is nonnegative on the strip $F = [0, 1] \times \mathbb{R}$, then $p(t_1, t_2)$ is expressible as:*

$$p(t_1, t_2) = \sigma(t_1, t_2) + \tau(t_1, t_2)\, t_1(1 - t_1),$$

*where $\sigma(t_1, t_2)$, $\tau(t_1, t_2)$ are sums of squares in $\mathbb{R}[t_1, t_2]$.*

From Theorems 1 and 2, the next result also holds. Let $F = [0, 1] \times \mathbb{R}$, let $\nu$ be a positive regular Borel $M$-determinate (moment-determinate) measure on $F$, and let $X = L^1_\nu(F)$, $\varphi_j(t_1, t_2) := t_1^{j_1} t_2^{j_2}$, $j = (j_1, j_2) \in \mathbb{N}^2$, $(t_1, t_2) \in F$. Let $Y$ be an order-complete Banach lattice and $(y_j)_{j \in \mathbb{N}^2}$ a sequence of given elements in $Y$.

**Theorem 3.** *Let $T_1, T_2 \in B_+(X, Y)$ be two linear (bounded) positive operators mapping $X$ into $Y$. The following statements are equivalent:*

(a) *There exists a unique bounded linear operator $T : X \to Y$ such that $T(\varphi_j) = y_j$ for all $j \in \mathbb{N}^2$ and $0 \le T_1 \le T \le T_2$ on $X_+$;*

(b) *For any finite subset $J_0 \subset \mathbb{N}^2$ and any $\{\lambda_j : j \in J_0\} \subset \mathbb{R}$:*

$$\begin{split} \sum_{i,j \in J_0} \lambda_i \lambda_j T_1\big( \varphi_{i+j} \big) &\le \sum_{i,j \in J_0} \lambda_i \lambda_j y_{i+j} \le \sum_{i,j \in J_0} \lambda_i \lambda_j T_2\big( \varphi_{i+j} \big), \\ \sum_{i,j \in J_0} \lambda_i \lambda_j \big( T_1\big( \varphi_{i_1+j_1+1,\, i_2+j_2} - \varphi_{i_1+j_1+2,\, i_2+j_2} \big) \big) &\le \sum_{i,j \in J_0} \lambda_i \lambda_j \big( y_{i_1+j_1+1,\, i_2+j_2} - y_{i_1+j_1+2,\, i_2+j_2} \big) \\ &\le \sum_{i,j \in J_0} \lambda_i \lambda_j \big( T_2\big( \varphi_{i_1+j_1+1,\, i_2+j_2} - \varphi_{i_1+j_1+2,\, i_2+j_2} \big) \big), \quad i = (i_1, i_2),\ j = (j_1, j_2) \in J_0. \end{split}$$

Unfortunately, similar results cannot be proven for moment problems on $\mathbb{R}^n$ and $\mathbb{R}^n_+$. This motivates reviewing the following result [13].

If $F \subseteq \mathbb{R}^n$ is an arbitrary closed unbounded subset, then we denote by $\mathcal{P}_{++}$ a subcone of $\mathcal{P}_+$ generated by special nonnegative polynomials expressible in terms of sums of squares.

**Theorem 4.** *Let $F \subseteq \mathbb{R}^n$ be a closed unbounded subset; $\nu$ a positive regular Borel moment-determinate measure on $F$, having finite moments of all orders; and $X = L^1_\nu(F)$, $\varphi_j(t) = t^j$, $t \in F$, $j \in \mathbb{N}^n$. Let $Y$ be an order-complete Banach lattice, $(y_j)_{j \in \mathbb{N}^n}$ a given sequence of elements in $Y$, and $T_1$ and $T_2$ two bounded linear operators mapping $X$ into $Y$. Assume that there exists a subcone $\mathcal{P}_{++} \subseteq \mathcal{P}_+$ such that each $f \in (C_c(F))_+$ can be approximated in $X$ by a sequence $(p_l)_l$, $p_l \in \mathcal{P}_{++}$, $p_l \ge f$ for all $l$. The following statements are equivalent:*

(a) *There exists a unique (bounded) linear operator*

$$T : X \to Y, \ T(\varphi_j) = y_j, \ j \in \mathbb{N}^n, \ 0 \le T_1 \le T \le T_2 \ \text{on } X_+, \ \|T_1\| \le \|T\| \le \|T_2\|;$$

(b) *For any finite subset $J_0 \subset \mathbb{N}^n$ and any $\{\lambda_j : j \in J_0\} \subset \mathbb{R}$, the following implications hold true:*

$$\sum_{j \in J_0} \lambda_j \varphi_j \in \mathcal{P}_+(F) \Rightarrow \sum_{j \in J_0} \lambda_j T_1(\varphi_j) \le \sum_{j \in J_0} \lambda_j y_j,$$

$$\sum_{j \in J_0} \lambda_j \varphi_j \in \mathcal{P}_{++} \Rightarrow \sum_{j \in J_0} \lambda_j T_1(\varphi_j) \ge 0, \ \sum_{j \in J_0} \lambda_j y_j \le \sum_{j \in J_0} \lambda_j T_2(\varphi_j).$$

The application of Theorem 4 and Lemma 2 yields the following result.

**Theorem 5.** *Let $\nu = \nu_1 \times \cdots \times \nu_n$, $n \ge 2$, where $\nu_j$ is a positive regular $M$-determinate (moment-determinate) Borel measure on $\mathbb{R}$, $j = 1, \ldots, n$, $X = L^1_\nu(\mathbb{R}^n)$, $\varphi_j(t) = t^j$, $t \in \mathbb{R}^n$, $j \in \mathbb{N}^n$. Additionally, assume that $\nu_j$ has finite moments of all orders, $j = 1, \ldots, n$. Let $Y$ be an order-complete Banach lattice, $(y_j)_{j \in \mathbb{N}^n}$ a given sequence of elements in $Y$, and $T_1$ and $T_2$ two bounded linear operators mapping $X$ into $Y$. The following statements are equivalent:*

(a) *There exists a unique bounded linear operator $T : X \to Y$ such that $T(\varphi_j) = y_j$ for all $j \in \mathbb{N}^n$ and $0 \le T_1 \le T \le T_2$ on $X_+$;*

(b) *For any finite subset $J_0 \subset \mathbb{N}^n$ and any $\{\lambda_j : j \in J_0\} \subset \mathbb{R}$,*

$$\sum_{j \in J_0} \lambda_j \varphi_j \in \mathcal{P}_+ \Rightarrow \sum_{j \in J_0} \lambda_j T_1(\varphi_j) \le \sum_{j \in J_0} \lambda_j y_j.$$

*For any finite subsets $J_k \subset \mathbb{N}$, $k = 1, \ldots, n$, and any $\{\lambda_{j_k}\}_{j_k \in J_k} \subset \mathbb{R}$, the following inequalities hold true:*

$$\begin{split} 0 \le &\sum_{i_1,j_1 \in J_1} \left( \cdots \left( \sum_{i_n,j_n \in J_n} \lambda_{i_1} \lambda_{j_1} \cdots \lambda_{i_n} \lambda_{j_n}\, T_1\big( \varphi_{i_1+j_1, \ldots, i_n+j_n} \big) \right) \cdots \right), \\ &\sum_{i_1,j_1 \in J_1} \left( \cdots \left( \sum_{i_n,j_n \in J_n} \lambda_{i_1} \lambda_{j_1} \cdots \lambda_{i_n} \lambda_{j_n}\, y_{i_1+j_1, \ldots, i_n+j_n} \right) \cdots \right) \le \\ &\sum_{i_1,j_1 \in J_1} \left( \cdots \left( \sum_{i_n,j_n \in J_n} \lambda_{i_1} \lambda_{j_1} \cdots \lambda_{i_n} \lambda_{j_n}\, T_2\big( \varphi_{i_1+j_1, \ldots, i_n+j_n} \big) \right) \cdots \right). \end{split}$$

A similar result holds for products of $n$ moment-determinate measures on $\mathbb{R}_+$, $n \ge 2$, via Theorem 4 and Lemma 3, also using the explicit form (3) of nonnegative polynomials on $\mathbb{R}_+$.

*3.3. Characterizing Sandwich Conditions on Bounded Linear Operators in Terms of Quadratic Forms*

Lemma 2 leads to the following characterization.

**Theorem 6.** *Let $\nu$, $X$ be as in the statement of Theorem 5, $Y$ a Banach lattice, and $T_1$, $T$, $T_2$ bounded linear operators mapping $X$ into $Y$. The following statements are equivalent:*

(a) *$T_1 \le T \le T_2$ on $X_+$;*

(b) *For any finite subsets $J_k \subset \mathbb{N}$ and any $\{\lambda_{j_k}\}_{j_k \in J_k} \subset \mathbb{R}$, $k = 1, \ldots, n$:*


$$\begin{split} &\sum_{i_1,j_1 \in J_1} \left( \cdots \left( \sum_{i_n,j_n \in J_n} \lambda_{i_1} \lambda_{j_1} \cdots \lambda_{i_n} \lambda_{j_n}\, T_1\big( \varphi_{i_1+j_1, \ldots, i_n+j_n} \big) \right) \cdots \right) \\ \le &\sum_{i_1,j_1 \in J_1} \left( \cdots \left( \sum_{i_n,j_n \in J_n} \lambda_{i_1} \lambda_{j_1} \cdots \lambda_{i_n} \lambda_{j_n}\, T\big( \varphi_{i_1+j_1, \ldots, i_n+j_n} \big) \right) \cdots \right) \\ \le &\sum_{i_1,j_1 \in J_1} \left( \cdots \left( \sum_{i_n,j_n \in J_n} \lambda_{i_1} \lambda_{j_1} \cdots \lambda_{i_n} \lambda_{j_n}\, T_2\big( \varphi_{i_1+j_1, \ldots, i_n+j_n} \big) \right) \cdots \right). \end{split}$$

**Proof.** Statement (b) says that $T_1(p) \le T(p) \le T_2(p)$ for all $p \in \mathcal{P}_{++}(\mathbb{R}^n)$, where $\mathcal{P}_{++}(\mathbb{R}^n)$ is the subcone of $\mathcal{P}_+(\mathbb{R}^n)$ formed by all polynomials that can be written as finite sums of polynomials of the form (1), with $p_i \in \mathcal{P}_+(\mathbb{R})$, $i = 1, 2, \ldots, n$. Hence, the implication (a) $\Rightarrow$ (b) is obvious. For the converse, according to a measure-type result [9], for any $\psi \in X_+$ there exists a sequence $(g_l)_{l \in \mathbb{N}}$ of functions from $(C_c(\mathbb{R}^n))_+$ with $\psi = \lim_l g_l$. On the other hand, Lemma 2 implies that there is a sequence of polynomials $(p_l)_{l \in \mathbb{N}}$, $p_l \in \mathcal{P}_{++}(\mathbb{R}^n)$ for all $l$, such that $p_l - g_l \to 0$ as $l \to \infty$. Thus,

$$
\psi - p\_l = (\psi - g\_l) + (g\_l - p\_l) \to 0.
$$

This means that $\psi = \lim_{l \to \infty} p_l$. From (b), we know that $T_1(p_l) \le T(p_l) \le T_2(p_l)$ for all $l \in \mathbb{N}$. Now, the continuity of the three operators involved, $T_1$, $T$, $T_2$, yields

$$T\_1(\psi) = \lim\_{l} T\_1(p\_l) \le \lim\_{l} T(p\_l) = T(\psi) \le \lim\_{l} T\_2(p\_l) = T\_2(\psi), \ \psi \in X\_+.$$

This ends the proof.

Using Lemma 3 and the form (3) of nonnegative polynomials on $\mathbb{R}_+$, the next result holds as well.

**Theorem 7.** *Let $X = L^1_\nu(\mathbb{R}^n_+)$, $\varphi_j(t) = t^j$, $t \in \mathbb{R}^n_+$, $j \in \mathbb{N}^n$, where $\nu$ is as in Lemma 3, $Y$ is a Banach lattice, and $T_1$, $T$, $T_2$ are bounded linear operators mapping $X$ into $Y$. The following statements are equivalent:*

(a) *$T_1 \le T \le T_2$ on $X_+$;*

(b) *For any finite subsets $J_k \subset \mathbb{N}$ and any $\{\lambda_{j_k}\}_{j_k \in J_k} \subset \mathbb{R}$, $k = 1, \ldots, n$:*


$$\begin{split} &\sum_{i_1,j_1 \in J_1} \left( \cdots \left( \sum_{i_n,j_n \in J_n} \lambda_{i_1} \lambda_{j_1} \cdots \lambda_{i_n} \lambda_{j_n}\, T_1\big( \varphi_{l_1+i_1+j_1, \ldots, l_n+i_n+j_n} \big) \right) \cdots \right) \\ \le &\sum_{i_1,j_1 \in J_1} \left( \cdots \left( \sum_{i_n,j_n \in J_n} \lambda_{i_1} \lambda_{j_1} \cdots \lambda_{i_n} \lambda_{j_n}\, T\big( \varphi_{l_1+i_1+j_1, \ldots, l_n+i_n+j_n} \big) \right) \cdots \right) \\ \le &\sum_{i_1,j_1 \in J_1} \left( \cdots \left( \sum_{i_n,j_n \in J_n} \lambda_{i_1} \lambda_{j_1} \cdots \lambda_{i_n} \lambda_{j_n}\, T_2\big( \varphi_{l_1+i_1+j_1, \ldots, l_n+i_n+j_n} \big) \right) \cdots \right) \end{split}$$

*for all $(l_1, \ldots, l_n) \in \{0, 1\}^n$.*

#### **4. Discussion**

The present paper surveys recently published results and offers a new way of presenting them. These results concern the Markov moment problem, which motivated the polynomial approximation on unbounded subsets stated at the beginning of the previous section. Instead of looking for the explicit form of nonnegative polynomials on unbounded closed subsets $F$ of $\mathbb{R}^n$, $n \ge 2$ (which is known not always to be expressible in terms of sums of squares), the approximation by finite sums of the special polynomials pointed out in Lemmas 2 and 3, followed by a passage to the limit, solved the problems discussed in this work partially or completely, respectively. Compared with our own previous results on the subject, this review comes with generalizations and improvements of the theorems. We have not found in the literature a simpler method able to handle both polynomial approximation on unbounded subsets (which is important as a subject of its own) and the applications emphasized in this paper. This work is set in analysis and functional analysis over the real field. The presentation of some statements completes or generalizes the published results on the subject. As a direction for future work, it would be interesting to study what these theorems say when the codomains $Y$ are concrete Banach lattices.

**Funding:** This research received no external funding.

**Data Availability Statement:** Not applicable.

**Acknowledgments:** The author would like to thank the reviewers for their comments and suggestions, which led to an improvement in the presentation of the paper.

**Conflicts of Interest:** The author declares no conflict of interest.

#### **References**


## *Article* **Stationary Conditions and Characterizations of Solution Sets for Interval-Valued Tightened Nonlinear Problems**

**Kin Keung Lai 1,\* ,† , Shashi Kant Mishra 2,†, Sanjeev Kumar Singh 2,† and Mohd Hassan 2,†**


**Abstract:** In this paper, we obtain characterizations of solution sets of interval-valued mathematical programming problems with switching constraints. Stationary conditions that are weaker than the standard Karush–Kuhn–Tucker conditions need to be discussed in order to find the necessary optimality conditions. We introduce the corresponding weak, Mordukhovich, and strong stationary conditions for interval-valued mathematical programming problems with switching constraints (IVPSC) and interval-valued tightened nonlinear problems (IVTNP), because the W-stationary condition of IVPSC is equivalent to the Karush–Kuhn–Tucker conditions of the IVTNP. Furthermore, we use strong stationary conditions to characterize several solution sets for IVTNP, which at the same time are particular solution sets for IVPSC, because the feasible set of the tightened nonlinear problems (IVTNP) is a subset of the feasible set of the mathematical programs with switching constraints (IVPSC).

**Keywords:** nonlinear programming; switching constraints; stationary conditions; interval-valued optimization

**MSC:** 90C30; 90C33; 49K10

#### **1. Introduction**

Mathematical programming problems with equilibrium constraints (MPEC) [1] and mathematical programming problems with vanishing constraints (MPVC) [2] have recently received considerable attention in the areas of optimal control, mathematical equilibrium, truss topology, and other research fields [3], due to their wide range of applications in real-life problems.

Singh et al. [4] established Lagrange-type duality results and saddle point optimality criteria for mathematical programs with equilibrium constraints for differentiable functions. Pandey and Mishra [5] established Wolfe and Mond–Weir-type duality results for mathematical programs with equilibrium constraints using convexificators. Pandey and Mishra [6] obtained optimality and duality results for semi-infinite mathematical programs with equilibrium constraints using convexificators. Pandey and Mishra [7] established that the Mordukhovich (M) stationary conditions [7] are strong KKT-type sufficient optimality conditions for the nonsmooth multiobjective semi-infinite mathematical programs with equilibrium constraints. Mishra et al. [8] obtained duality results for mathematical programs with vanishing constraints for differentiable functions. Mishra et al. [9] showed that Cottle, Slater, and Mangasarian–Fromovitz constraint qualifications do not hold at an efficient solution under fairly mild assumptions, whereas the Guignard constraint qualification was satisfied sometimes for mathematical programs with vanishing constraints. Mishra et al. [9] introduced suitable modifications of said constraint qualifications, established relationships, and derived the KKT-type necessary optimality conditions. Guu et al. [10] established strong KKT-type sufficient optimality conditions for nonsmooth multiobjective

**Citation:** Lai, K.K.; Mishra, S.K.; Singh, S.K.; Hassan, M. Stationary Conditions and Characterizations of Solution Sets for Interval-Valued Tightened Nonlinear Problems. *Mathematics* **2022**, *10*, 2763. https:// doi.org/10.3390/math10152763

Academic Editor: Savin Treanta

Received: 28 June 2022 Accepted: 1 August 2022 Published: 4 August 2022


**Copyright:** © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

semi-infinite programming problems with vanishing constraints. Lai et al. [11] established Fritz–John and KKT-type stationary points conditions for nonsmooth semi-definite multiobjective mathematical programs with vanishing constraints.

Mehlitz [12] introduced the mathematical program with switching constraints (MPSC). It is not surprising that the issues involving the usual constraint qualifications for MPEC and MPVC also exist for MPSC. Mehlitz [12] showed that if an MPSC is treated as a nonlinear program, the Mangasarian–Fromovitz constraint qualifications fail at any feasible point for which there is a pair of switching functions with a value equal to zero. As a result, he introduced the concepts of weak, Mordukhovich (M-), and strong (S-) stationarity for MPSC and presented some constraint qualifications. Kanzow et al. [13] provided several relaxation methods from the numerical solutions of MPEC to MPSC. Liang and Ye [14] obtained various optimality conditions and local error bounds for MPSC. Pandey and Singh [15] studied several constraint qualifications and stationarity for multiobjective mathematical programs with switching constraints.

Uncertainty in the real world is inevitable. Therefore, imposing uncertainty in optimization problems becomes an interesting research topic. Interval-valued nonlinear programming is one such research area; see [16–19]. Lai et al. [20] established sufficient optimality conditions and duality results for semidifferentiable mathematical programming problems. Sharma et al. [21] established the Hermite–Hadamard inequalities for preinvex interval-valued functions. Su and Dinh [22] established duality results for interval-valued pseudoconvex optimization problems with equilibrium constraints with applications. Wang and Wang [23] obtained duality results for nondifferentiable semi-infinite interval-valued optimization problems with vanishing constraints.

The characterization of solution sets in mathematical programming is useful in understanding the development of solution methods for solving the problem. Mangasarian [24] introduced the concept of the characterization of solutions sets for convex programs, and Burke and Ferris [25] provided several characterizations of solution sets for nonsmooth convex programs. Jeyakumar et al. [26] provided Lagrange multiplier-based characterizations of solution sets of cone-constrained convex programs and semidefinite programs. Dinh et al. [27] studied Lagrange multiplier characterizations of solution sets of constrained pseudolinear optimization problems. Furthermore, Jeyakumar et al. [28] gave a dual characterization of the weak and proper solution sets. Jeyakumar et al. [28] discussed Lagrange multiplier characterizations of the solutions sets under regularity conditions. Lalitha and Mehta [29] derived Lagrange multiplier characterizations of solution sets for nonlinear mathematical programs with an h-convex objective and h-pseudolinear constraints. Several Lagrange multiplier characterizations of solution sets for a convex infinite programming problems are obtained in [30]. Mishra et al. [31] established several Lagrange multiplier characterizations of solution sets for constrained nonsmooth pseudolinear optimization problems. Recently, Sisarat and Wangkeeree [32] provided some characterizations of solution sets of constant pseudo Lagrangian-type functions and established Lagrange multiplier characterizations. Some recent developments of significant research on characterizations of solution sets are in [33–43] and references therein. Recently, Treanta [44] provided several characterizations of solution sets of interval-valued variational control problems and discussed its relationship with variational control problems.

Motivated by the above-mentioned work, we first consider interval-valued mathematical programming with switching constraints (IVPSC). We introduce the corresponding weak, Mordukhovich, and strong stationary conditions (W-, M-, and S-stationary for short). We propose an interval-valued tightened nonlinear problem (IVTNP) associated with IVPSC. We provide several characterizations of solution sets for IVPSC with the help of the S-stationary condition and IVTNP. We construct the corresponding Lagrangian function for IVPSC. We use the semiconvex functions introduced by Mifflin [45], extend them to interval-valued nonsmooth functions, and provide the properties of interval-valued semiconvex functions. Furthermore, we prove that the associated Lagrangian is constant under the S-stationary and semiconvexity conditions with a Clarke subdifferential. We also provide an example to support the theoretical findings.

#### **2. Preliminaries**

#### *2.1. Interval Analysis*

We collect some basic concepts and essential definitions related to interval-valued functions from Moore [46] and Wu [18].

We denote by $I(\mathbb{R})$ the class of all closed intervals in $\mathbb{R}$. Let $U = [u^L, u^U]$, where $u^L$ and $u^U$ denote the lower and upper bounds of $U$, respectively. Let $U = [u^L, u^U]$ and $V = [v^L, v^U]$ be in $I(\mathbb{R})$; then, we have

(i) $U + V = \{u + v : u \in U, v \in V\} = [u^L + v^L, u^U + v^U]$,

(ii) $-U = \{-u : u \in U\} = [-u^U, -u^L]$,


Let $U = [u^L, u^U]$ and $V = [v^L, v^U]$ be two closed intervals in $\mathbb{R}$. We write $U \preceq_{LU} V$ if and only if $u^L \le v^L$ and $u^U \le v^U$. It means that $U$ is inferior to $V$, or $V$ is superior to $U$. It is easy to see that "$\preceq_{LU}$" is a partial ordering on $I(\mathbb{R})$.

The function $f : \mathbb{R}^n \to I(\mathbb{R})$ is called an interval-valued function; this means that $f(u) = f(u_1, \cdots, u_n)$ is a closed interval in $\mathbb{R}$ for each $u \in \mathbb{R}^n$. $f$ can be written as $f(u) = [f^L(u), f^U(u)]$, where $f^L$ and $f^U$ are two real-valued functions defined on $\mathbb{R}^n$ such that $f^L(u) \le f^U(u)$ for all $u \in \mathbb{R}^n$.

We write $U \prec_{LU} V$ if and only if $U \preceq_{LU} V$ and $U \ne V$. We say that $U = (U_1, \cdots, U_p)$ is an interval-valued vector if each component $U_k = [u_k^L, u_k^U]$ is a closed interval for $k = 1, \cdots, p$. Suppose $U = (U_1, \cdots, U_p)$ and $V = (V_1, \cdots, V_p)$ are two interval-valued vectors. We write $U \preceq_{LU} V$ if and only if $U_k \preceq_{LU} V_k$ for all $k = 1, \cdots, p$, and $U \prec_{LU} V$ if and only if $U_k \preceq_{LU} V_k$ for all $k = 1, \cdots, p$ and $U_q \prec_{LU} V_q$ for at least one $q$.
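The interval operations and the LU order above can be made concrete with a small sketch (the class and method names are our own, for illustration only):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    lo: float  # u^L, the lower bound
    hi: float  # u^U, the upper bound

    def __add__(self, other):
        # U + V = [u^L + v^L, u^U + v^U]
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __neg__(self):
        # -U = [-u^U, -u^L]
        return Interval(-self.hi, -self.lo)

    def leq_LU(self, other):
        # U ⪯_LU V  iff  u^L <= v^L and u^U <= v^U
        return self.lo <= other.lo and self.hi <= other.hi

    def lt_LU(self, other):
        # U ≺_LU V  iff  U ⪯_LU V and U != V
        return self.leq_LU(other) and self != other

U = Interval(1.0, 3.0)
V = Interval(2.0, 3.0)
assert U + V == Interval(3.0, 6.0)
assert -U == Interval(-3.0, -1.0)
assert U.lt_LU(V) and not V.leq_LU(U)
```

Note that $\preceq_{LU}$ is only a partial order: for instance, $[0, 5]$ and $[1, 2]$ are incomparable, since neither pair of endpoint inequalities holds simultaneously.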

**Definition 1** ([17])**.** *An interval-valued function $f(u) = [f^L(u), f^U(u)]$ defined on $X \subseteq \mathbb{R}^n$ is said to be LU-convex if for all $u, v \in X$ and $\lambda \in (0, 1)$,*

$$f(\lambda u + (1 - \lambda)v) \preceq\_{LU} \lambda f(u) + (1 - \lambda)f(v).$$
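Since the LU order compares endpoints, Definition 1 reduces to the usual convexity inequality for both $f^L$ and $f^U$. For instance, $f(u) = [u^2, u^2 + |u|]$ is LU-convex, since both endpoint functions are convex; a numerical spot check (the function and the sample points are our own illustration):

```python
fL = lambda u: u * u            # lower endpoint, convex
fU = lambda u: u * u + abs(u)   # upper endpoint, convex

def lu_convex_at(u, v, lam):
    # Definition 1 at one triple (u, v, lambda): check the convexity
    # inequality for both endpoint functions (small tolerance for floats).
    w = lam * u + (1 - lam) * v
    return (fL(w) <= lam * fL(u) + (1 - lam) * fL(v) + 1e-12 and
            fU(w) <= lam * fU(u) + (1 - lam) * fU(v) + 1e-12)

assert all(lu_convex_at(u, v, lam)
           for u in (-2.0, 0.5, 3.0)
           for v in (-1.0, 2.0)
           for lam in (0.25, 0.5, 0.9))
```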

#### *2.2. Generalized Derivatives*

We collect the definitions and properties of generalized derivatives from Clarke [47]. Suppose $f : \mathbb{R}^n \to \mathbb{R}$ is a locally Lipschitz function at $u \in \mathbb{R}^n$. The generalized directional derivative of $f$ at $u$ in the direction $d \in \mathbb{R}^n$ is denoted by $f^c(u; d)$ and is defined by

$$f^c(u;d) := \limsup\_{\substack{h \to 0 \\ t \downarrow 0}} \frac{f(u+h+td) - f(u+h)}{t}$$

and the Clarke subdifferential of $f$ at $u$, denoted by $\partial^c f(u)$, is defined by

$$\partial^c f(u) := \{ \eta \in \mathbb{R}^n \, : \, f^c(u; d) \ge \langle \eta, d \rangle, \ \forall d \in \mathbb{R}^n \}.$$

We denote by $\langle u, v \rangle$ the usual inner product in the $n$-dimensional real Euclidean space $\mathbb{R}^n$, i.e.,

$$
\langle u, v \rangle = u^T v, \ \text{for } u, v \in \mathbb{R}^n.
$$

The directional derivative of $f$ at $u$ in the direction $d$, denoted by $f'(u; d)$, is defined by

$$f'(u; d) := \lim_{t \downarrow 0} \frac{f(u + td) - f(u)}{t}, \ \text{provided the limit exists.}$$

$f$ is said to be regular at $u$ in the Clarke sense if $f'(u; d)$ exists and is equal to $f^c(u; d)$ for every $d \in \mathbb{R}^n$ [48].
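For a convex function such as $f(u) = |u|$, the directional derivative exists everywhere and coincides with $f^c(u; d)$, so $f$ is regular; at $u = 0$ one has $f'(0; d) = |d|$. A quick numerical illustration (the step size is an illustrative choice):

```python
def dir_deriv(f, u, d, t=1e-8):
    # One-sided difference quotient approximating f'(u; d)
    # from the definition with t decreasing to 0.
    return (f(u + t * d) - f(u)) / t

f = abs
# At u = 0, f'(0; d) = |d| for f(u) = |u|.
for d in (1.0, -1.0, 2.5):
    assert abs(dir_deriv(f, 0.0, d) - abs(d)) < 1e-6
```

The kink at the origin is exactly why the one-sided limit, rather than a two-sided derivative, is needed here.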

Consider an interval-valued function $f : \mathbb{R}^n \to I(\mathbb{R})$; then, $f(u) = [f^L(u), f^U(u)]$ is regular if both the lower and upper bound functions $f^L$ and $f^U$ are regular.

Suppose $M$ is a closed convex subset of $\mathbb{R}^n$. The normal cone [49] to $M$ at $u$ is

$$N(M, u) = \{ \eta \in \mathbb{R}^n : \langle \eta, v - u \rangle \le 0, \ \forall v \in M \}.$$

**Definition 2** ([45])**.** *Suppose $X$ is a nonempty subset of $\mathbb{R}^n$. A function $f : \mathbb{R}^n \to \mathbb{R}$ is said to be semiconvex at $u \in X$ if $f$ is locally Lipschitz at $u$ and regular at $u$, and it satisfies the following condition:*

$$
u + d \in X, \ d \in \mathbb{R}^n, \ f'(u; d) \ge 0 \implies f(u + d) \ge f(u).
$$

*The interval-valued function $f : \mathbb{R}^n \to I(\mathbb{R})$ is said to be semiconvex on $X$ if $f^L$ and $f^U$ are semiconvex at every $u \in X$.*

We can easily see from the above definition that $f$ is semiconvex at $u$ if there exists $\eta \in \partial^c f(u)$ such that $\langle \eta, v - u \rangle \ge 0 \implies f(v) \ge f(u)$.

Mifflin [45] provided an important result on semiconvex functions, which can be further generalized for interval-valued functions.

**Lemma 1.** *Let the function $f$ be semiconvex on a convex set $X \subset \mathbb{R}^n$. Then, for $u \in X$, $d \in \mathbb{R}^n$ with $u + d \in X$, we have*

$$f(u+d) \le f(u) \implies f'(u;d) \le 0.$$

If the interval-valued function $f : \mathbb{R}^n \to I(\mathbb{R})$ is semiconvex, then, for $u \in X \subset \mathbb{R}^n$, $d \in \mathbb{R}^n$ with $u + d \in X$, we have

$$
f(u+d) \preceq_{LU} f(u) \implies f'(u; d) \preceq_{LU} 0.
$$

This means that

$$
\begin{aligned}
f^L(u+d) \le f^L(u) &\implies (f^L)'(u; d) \le 0 \\
\text{and } f^U(u+d) \le f^U(u) &\implies (f^U)'(u; d) \le 0.
\end{aligned}
$$
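Lemma 1 can be spot-checked numerically for a differentiable convex (hence semiconvex [51]) function; the test function $f(u) = \|u\|^2$ and the helper `dir_deriv` below are our illustrative choices, not from the paper:

```python
import numpy as np

def dir_deriv(f, u, d, t=1e-7):
    """Forward-difference approximation of the directional derivative f'(u; d)."""
    return (f(u + t * d) - f(u)) / t

f = lambda u: float(np.dot(u, u))   # convex, hence semiconvex
rng = np.random.default_rng(1)
u = rng.standard_normal(3)
for _ in range(100):
    d = rng.standard_normal(3)
    if f(u + d) <= f(u):                    # hypothesis of Lemma 1
        assert dir_deriv(f, u, d) <= 1e-4   # conclusion: f'(u; d) <= 0
print("Lemma 1 holds on all sampled directions")
```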

*2.3. Interval-Valued Mathematical Programs with Switching Constraints (IVPSC)*

We consider the following interval-valued mathematical programs with switching constraints (IVPSC)

$$
\begin{aligned}
\min\ & f(u) = \left[ f^L(u), f^U(u) \right] \\
\text{subject to } & g_i(u) \le 0,\ \forall\, i = 1, \dots, p, \\
& h_j(u) = 0,\ \forall\, j = 1, \dots, q, \\
& G_k(u)H_k(u) = 0,\ \forall\, k = 1, \dots, r,
\end{aligned} \tag{1}
$$

where the functions $f^L$, $f^U$, $g_i$, $h_j$, $G_k$, $H_k : \mathbb{R}^n \to \mathbb{R}$ are continuously differentiable on $\mathbb{R}^n$. We call $G_k(u)H_k(u) = 0$ the switching constraint, since at least one of the functions must switch off, that is, $G_k(u) = 0$ or $H_k(u) = 0$ for all $k = 1, \dots, r$, at any feasible point of IVPSC.
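The switching structure just described is easy to test numerically; the following minimal sketch (the function name `is_feasible` and the tolerance are our own, hypothetical choices) checks the constraint system of IVPSC on a toy instance:

```python
def is_feasible(u, g, h, G, H, tol=1e-9):
    """Feasibility test for IVPSC: g_i(u) <= 0, h_j(u) = 0, and the
    switching constraints G_k(u) * H_k(u) = 0."""
    return (all(gi(u) <= tol for gi in g)
            and all(abs(hj(u)) <= tol for hj in h)
            and all(abs(Gk(u) * Hk(u)) <= tol for Gk, Hk in zip(G, H)))

# Toy instance: g(u) = u1 - u2, no equality constraints,
# one switching pair G(u) = u1, H(u) = u2.
g = [lambda u: u[0] - u[1]]
h = []
G = [lambda u: u[0]]
H = [lambda u: u[1]]
print(is_feasible((0.0, 1.0), g, h, G, H))  # True: u1 = 0 switches G off
print(is_feasible((0.5, 1.0), g, h, G, H))  # False: G(u) * H(u) = 0.5
```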

We denote the solution set of IVPSC by S.

$$
S = \{ u \in M : f^L(u) \le f^L(v),\ f^U(u) \le f^U(v),\ g(u) \le 0,\ h(u) = 0,\ G_k(u)H_k(u) = 0,\ \forall v \in M \}.
$$

#### *2.4. Stationary Conditions*

We introduce some index sets in order to define the stationarity conditions at a feasible point $\bar{u}$ of IVPSC.

$$
\begin{aligned}
I_g(\bar{u}) &:= \{ i \in \{1, \dots, p\} : g_i(\bar{u}) = 0 \}, \\
I^G(\bar{u}) &:= \{ k \in \{1, \dots, r\} : G_k(\bar{u}) = 0 \text{ and } H_k(\bar{u}) \neq 0 \}, \\
I^H(\bar{u}) &:= \{ k \in \{1, \dots, r\} : G_k(\bar{u}) \neq 0 \text{ and } H_k(\bar{u}) = 0 \}, \\
I^{GH}(\bar{u}) &:= \{ k \in \{1, \dots, r\} : G_k(\bar{u}) = 0 \text{ and } H_k(\bar{u}) = 0 \}.
\end{aligned}
$$
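These index sets can be computed mechanically at a given point. A minimal Python sketch (the function `index_sets` and the tolerance are our illustrative additions), using the constraints of Example 1 below:

```python
def index_sets(u_bar, g, G, H, tol=1e-9):
    """Active-set partition at u_bar: I_g, I^G, I^H, I^GH as defined above."""
    I_g  = {i for i, gi in enumerate(g) if abs(gi(u_bar)) <= tol}
    I_G  = {k for k, (Gk, Hk) in enumerate(zip(G, H))
            if abs(Gk(u_bar)) <= tol and abs(Hk(u_bar)) > tol}
    I_H  = {k for k, (Gk, Hk) in enumerate(zip(G, H))
            if abs(Gk(u_bar)) > tol and abs(Hk(u_bar)) <= tol}
    I_GH = {k for k, (Gk, Hk) in enumerate(zip(G, H))
            if abs(Gk(u_bar)) <= tol and abs(Hk(u_bar)) <= tol}
    return I_g, I_G, I_H, I_GH

# Constraints of Example 1: g(u) = u1 - u2, G(u) = u1, H(u) = u2.
g = [lambda u: u[0] - u[1]]
G = [lambda u: u[0]]
H = [lambda u: u[1]]
print(index_sets((0.0, 0.0), g, G, H))  # ({0}, set(), set(), {0})
print(index_sets((0.0, 1.0), g, G, H))  # (set(), {0}, set(), set())
```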

We establish some stationary conditions in the Clarke subdifferential form motivated by Mehlitz [12]. In order to define the stationary conditions, we need to introduce the KKT system of IVPSC, which is as follows.

**Definition 3.** *(KKT-type conditions): A feasible point $\bar{u}$ of IVPSC is said to satisfy the KKT-type conditions if there exist multipliers $\lambda^L$, $\lambda^U$, $\lambda_i\ (i \in \{1, \dots, p\})$, $\lambda_j\ (j \in \{1, \dots, q\})$, $\lambda_k$, $\mu_k\ (k \in \{1, \dots, r\})$ such that the following conditions hold*

$$
\begin{aligned}
0 \in\ & \lambda^L \partial^c f^L(\bar{u}) + \lambda^U \partial^c f^U(\bar{u}) + \sum_{i=1}^p \lambda_i \partial^c g_i(\bar{u}) + \sum_{j=1}^q \lambda_j \partial^c h_j(\bar{u}) \\
& + \sum_{k=1}^r \left[ \lambda_k \partial^c G_k(\bar{u}) + \mu_k \partial^c H_k(\bar{u}) \right], \\
& \lambda_i \ge 0\ \forall\, i \in I_g(\bar{u}).
\end{aligned}
$$

1. Weakly stationary point (W-stationary point): A feasible point $\bar{u}$ of IVPSC is called W-stationary if there exist multipliers $\lambda^L$, $\lambda^U$, $\lambda_i\ (i \in \{1, \dots, p\})$, $\lambda_j\ (j \in \{1, \dots, q\})$, $\lambda_k$, $\mu_k\ (k \in \{1, \dots, r\})$ such that the following conditions hold

$$
\begin{aligned}
0 \in\ & \lambda^L \partial^c f^L(\bar{u}) + \lambda^U \partial^c f^U(\bar{u}) + \sum_{i=1}^p \lambda_i \partial^c g_i(\bar{u}) + \sum_{j=1}^q \lambda_j \partial^c h_j(\bar{u}) \\
& + \sum_{k=1}^r \left[ \lambda_k \partial^c G_k(\bar{u}) + \mu_k \partial^c H_k(\bar{u}) \right], \\
& \lambda_i \ge 0\ \forall\, i \in I_g(\bar{u}), \quad \lambda_k = 0\ \forall\, k \in I^H(\bar{u}), \quad \mu_k = 0\ \forall\, k \in I^G(\bar{u}).
\end{aligned}
$$

2. Mordukhovich stationary point (M-stationary point): A feasible point $\bar{u}$ of IVPSC is called M-stationary if there exist multipliers $\lambda^L$, $\lambda^U$, $\lambda_i\ (i \in \{1, \dots, p\})$, $\lambda_j\ (j \in \{1, \dots, q\})$, $\lambda_k$, $\mu_k\ (k \in \{1, \dots, r\})$ such that the following conditions hold

$$
\begin{aligned}
0 \in\ & \lambda^L \partial^c f^L(\bar{u}) + \lambda^U \partial^c f^U(\bar{u}) + \sum_{i=1}^p \lambda_i \partial^c g_i(\bar{u}) + \sum_{j=1}^q \lambda_j \partial^c h_j(\bar{u}) \\
& + \sum_{k=1}^r \left[ \lambda_k \partial^c G_k(\bar{u}) + \mu_k \partial^c H_k(\bar{u}) \right], \\
& \lambda_i \ge 0\ \forall\, i \in I_g(\bar{u}), \quad \lambda_k = 0\ \forall\, k \in I^H(\bar{u}), \quad \mu_k = 0\ \forall\, k \in I^G(\bar{u}), \\
& \text{and } \lambda_k \mu_k = 0 \quad \forall\, k \in I^{GH}(\bar{u}).
\end{aligned}
$$

3. Strong stationary point (S-stationary point): A feasible point $\bar{u}$ of IVPSC is called S-stationary if there exist multipliers $\lambda^L$, $\lambda^U$, $\lambda_i\ (i \in \{1, \dots, p\})$, $\lambda_j\ (j \in \{1, \dots, q\})$, $\lambda_k$, $\mu_k\ (k \in \{1, \dots, r\})$ such that the following conditions hold

$$
\begin{aligned}
0 \in\ & \lambda^L \partial^c f^L(\bar{u}) + \lambda^U \partial^c f^U(\bar{u}) + \sum_{i=1}^p \lambda_i \partial^c g_i(\bar{u}) + \sum_{j=1}^q \lambda_j \partial^c h_j(\bar{u}) \\
& + \sum_{k=1}^r \left[ \lambda_k \partial^c G_k(\bar{u}) + \mu_k \partial^c H_k(\bar{u}) \right], \\
& \lambda_i \ge 0\ \forall\, i \in I_g(\bar{u}), \quad \lambda_k = 0\ \forall\, k \in I^H(\bar{u}), \quad \mu_k = 0\ \forall\, k \in I^G(\bar{u}), \\
& \text{and } \lambda_k = 0,\ \mu_k = 0 \quad \forall\, k \in I^{GH}(\bar{u}).
\end{aligned}
$$

We can easily see that the following relationship holds between the above stationary conditions.

S-stationary condition $\implies$ M-stationary condition $\implies$ W-stationary condition.

The W-stationary condition of IVPSC at one of its feasible points $\bar{u}$ is equivalent to the KKT conditions of the following tightened nonlinear problem.

We consider the interval-valued tightened nonlinear problem (IVTNP) at *u*¯.

$$
\begin{aligned}
(\text{IVTNP}) \qquad \min\ & f(u) = [f^L(u), f^U(u)] \\
\text{subject to}\quad & g_i(u) \le 0,\ \forall\, i = 1, \dots, p, \\
& h_j(u) = 0,\ \forall\, j = 1, \dots, q, \\
& G_k(u) = 0,\ \forall\, k \in I^G(\bar{u}) \cup I^{GH}(\bar{u}), \\
& H_k(u) = 0,\ \forall\, k \in I^H(\bar{u}) \cup I^{GH}(\bar{u}).
\end{aligned} \tag{2}
$$

The feasible set of IVTNP is a subset of the feasible set of IVPSC.

#### **3. Lagrange Multiplier Characterization**

We suppose that there exist multipliers $\lambda^L$, $\lambda^U$, $\lambda_i\ (i \in \{1, \dots, p\})$, $\lambda_j\ (j \in \{1, \dots, q\})$, $\lambda_k$, $\mu_k\ (k \in \{1, \dots, r\})$ such that the following optimality conditions hold

$$
\begin{aligned}
0 \in\ & \lambda^L \partial^c f^L(u) + \lambda^U \partial^c f^U(u) + \sum_{i=1}^p \lambda_i \partial^c g_i(u) + \sum_{j=1}^q \lambda_j \partial^c h_j(u) \\
& + \sum_{k=1}^r \left( \lambda_k \partial^c G_k(u) + \mu_k \partial^c H_k(u) \right) + N(M, u), \\
& \lambda_i g_i(u) = 0,\ \forall\, i \in \{1, \dots, p\}, \quad \lambda_j h_j(u) = 0,\ \forall\, j \in \{1, \dots, q\}, \\
& \lambda_k G_k(u) = 0,\ \forall\, k \in I^G(\bar{u}) \cup I^{GH}(\bar{u}), \quad \mu_k H_k(u) = 0,\ \forall\, k \in I^H(\bar{u}) \cup I^{GH}(\bar{u}).
\end{aligned} \tag{3}
$$

The addition of the normal cone $N(M, u)$ in the above optimality condition is motivated by Theorem 5.1.6 of [50].

The Lagrangian function is defined by

$$
L(u, \lambda, \mu) = \lambda^L f^L(u) + \lambda^U f^U(u) + \sum_{i=1}^p \lambda_i g_i(u) + \sum_{j=1}^q \lambda_j h_j(u) + \sum_{k=1}^r \left( \lambda_k G_k(u) + \mu_k H_k(u) \right). \tag{4}
$$

**Lemma 2.** *Let $\bar{u}$ be the solution to the problem (IVTNP) such that condition* (3) *and the S-stationary condition hold. Suppose that the functions $f^L$, $f^U$, $g_i\ (i \in \{1, \dots, p\})$, $h_j\ (j \in \{1, \dots, q\})$, $G_k$, $H_k\ (k \in \{1, \dots, r\})$ are regular at $\bar{u}$ and the Lagrangian function $L(\cdot, \lambda, \mu)$ is semiconvex at $\bar{u}$; then, $L(\cdot, \lambda, \mu)$ is constant on $S$.*

**Proof.** Let $\bar{u} \in S$, and let there exist multipliers $\lambda^L$, $\lambda^U$, $\lambda_i$, $\lambda_j$, $\lambda_k$, $\mu_k$ such that condition (3) holds. Then, there exist $u^L \in \partial^c f^L(\bar{u})$, $u^U \in \partial^c f^U(\bar{u})$, $w \in N(M, \bar{u})$, $\nu_g \in \partial^c g_i(\bar{u})\ (i \in \{1, \dots, p\})$, $\nu_h \in \partial^c h_j(\bar{u})\ (j \in \{1, \dots, q\})$, $\nu_G \in \partial^c G_k(\bar{u})$, $\nu_H \in \partial^c H_k(\bar{u})\ (k \in \{1, \dots, r\})$, such that

$$
\lambda^L u^L + \lambda^U u^U + \sum_{i=1}^p \lambda_i \nu_g + \sum_{j=1}^q \lambda_j \nu_h + \sum_{k=1}^r \left( \lambda_k \nu_G + \mu_k \nu_H \right) = -w.
$$

As $M$ is a closed convex subset of $X$, $\langle w, v - \bar{u} \rangle \le 0\ \forall v \in M$; hence, we have

$$
\left\langle \lambda^L u^L + \lambda^U u^U + \sum_{i=1}^p \lambda_i \nu_g + \sum_{j=1}^q \lambda_j \nu_h + \sum_{k=1}^r \left( \lambda_k \nu_G + \mu_k \nu_H \right), v - \bar{u} \right\rangle \ge 0. \tag{5}
$$

Now, since *L*(·, *λ*, *µ*) is regular at *u*¯, we have

$$
\begin{aligned}
& \left[ \lambda^L f^L + \lambda^U f^U + \sum_{i \in I_g(\bar{u})} \lambda_i g_i + \sum_{j=1}^q \lambda_j h_j + \sum_{k=1}^r \left( \lambda_k G_k + \mu_k H_k \right) \right]^c (\bar{u}, v - \bar{u}) \\
&= \left[ \lambda^L f^L + \lambda^U f^U + \sum_{i \in I_g(\bar{u})} \lambda_i g_i + \sum_{j=1}^q \lambda_j h_j + \sum_{k=1}^r \left( \lambda_k G_k + \mu_k H_k \right) \right]' (\bar{u}, v - \bar{u}).
\end{aligned} \tag{6}
$$

Using the regularity of $f^L$, $f^U$, $g_i\ (i \in \{1, \dots, p\})$, $h_j\ (j \in \{1, \dots, q\})$, $G_k$, $H_k\ (k \in \{1, \dots, r\})$, and from (5) and (6), we obtain

$$
\left[ \lambda^L f^L + \lambda^U f^U + \sum_{i \in I_g(\bar{u})} \lambda_i g_i + \sum_{j=1}^q \lambda_j h_j + \sum_{k=1}^r \left( \lambda_k G_k + \mu_k H_k \right) \right]' (\bar{u}, v - \bar{u}) \ge 0.
$$

Since *L*(·, *λ*, *µ*) is semiconvex at *u*¯, we have

$$
\begin{aligned}
& f(\bar{u}) + \sum_{i \in I_g(\bar{u})} \lambda_i g_i(\bar{u}) + \sum_{j=1}^q \lambda_j h_j(\bar{u}) + \sum_{k=1}^r \left( \lambda_k G_k(\bar{u}) + \mu_k H_k(\bar{u}) \right) \\
\preceq_{LU}\ & f(v) + \sum_{i \in I_g(\bar{u})} \lambda_i g_i(v) + \sum_{j=1}^q \lambda_j h_j(v) + \sum_{k=1}^r \left( \lambda_k G_k(v) + \mu_k H_k(v) \right).
\end{aligned}
$$

This means

$$
\begin{aligned}
& \lambda^L f^L(v) + \lambda^U f^U(v) + \sum_{i \in I_g(\bar{u})} \lambda_i g_i(v) + \sum_{j=1}^q \lambda_j h_j(v) + \sum_{k=1}^r \left( \lambda_k G_k(v) + \mu_k H_k(v) \right) \\
\ge\ & \lambda^L f^L(\bar{u}) + \lambda^U f^U(\bar{u}) + \sum_{i \in I_g(\bar{u})} \lambda_i g_i(\bar{u}) + \sum_{j=1}^q \lambda_j h_j(\bar{u}) + \sum_{k=1}^r \left( \lambda_k G_k(\bar{u}) + \mu_k H_k(\bar{u}) \right).
\end{aligned} \tag{7}
$$

Since condition (3) and the $S$-stationary condition hold at $\bar{u}$, we have

$$
\begin{aligned}
& \lambda_i g_i(\bar{u}) = 0,\ \forall i \in \{1, \dots, p\}, \quad \lambda_j h_j(\bar{u}) = 0,\ \forall j \in \{1, \dots, q\}, \\
& \lambda_k G_k(\bar{u}) = 0,\ \forall k \in I^G(\bar{u}) \cup I^{GH}(\bar{u}), \quad \mu_k H_k(\bar{u}) = 0,\ \forall k \in I^H(\bar{u}) \cup I^{GH}(\bar{u}).
\end{aligned}
$$

Hence, (7) becomes

$$
\begin{aligned}
& \lambda^L f^L(v) + \lambda^U f^U(v) + \sum_{i \in I_g(\bar{u})} \lambda_i g_i(v) + \sum_{j=1}^q \lambda_j h_j(v) + \sum_{k=1}^r \left( \lambda_k G_k(v) + \mu_k H_k(v) \right) \\
& \ge \lambda^L f^L(\bar{u}) + \lambda^U f^U(\bar{u}).
\end{aligned} \tag{8}
$$

When $v \in S$, this means $v \in M$, $g_i(v) = 0\ \forall i \in I_g(\bar{u})$, and $\lambda^L f^L(v) + \lambda^U f^U(v) = \lambda^L f^L(\bar{u}) + \lambda^U f^U(\bar{u})$. Hence,

$$
\begin{aligned}
\lambda^L f^L(\bar{u}) + \lambda^U f^U(\bar{u}) &= \lambda^L f^L(v) + \lambda^U f^U(v) \\
&\ge \lambda^L f^L(v) + \lambda^U f^U(v) + \sum_{i \in I_g(\bar{u})} \lambda_i g_i(v) + \sum_{j=1}^q \lambda_j h_j(v) + \sum_{k=1}^r \left( \lambda_k G_k(v) + \mu_k H_k(v) \right) \\
&\ge \lambda^L f^L(\bar{u}) + \lambda^U f^U(\bar{u}).
\end{aligned} \tag{9}
$$

Then, it follows from (8) and (9) that

$$
\begin{aligned}
\sum_{i \in I_g(\bar{u})} \lambda_i g_i(v) &= 0, \text{ i.e., } g_i(v) = 0\ (i \in I_g(\bar{u})), \\
\sum_{j=1}^q \lambda_j h_j(v) &= 0, \text{ i.e., } h_j(v) = 0\ (j \in \{1, \dots, q\}), \\
\sum_{k=1}^r \left( \lambda_k G_k(v) + \mu_k H_k(v) \right) &= 0, \text{ i.e., } G_k(v) = 0 = H_k(v)\ (k \in \{1, \dots, r\}).
\end{aligned}
$$

Therefore, *L*(·, *λ*, *µ*) is constant on *S*.

**Theorem 1.** *Let $\bar{u}$ be the solution to the problem (IVTNP), such that condition* (3) *and the S-stationary condition hold. Suppose that the functions $f^L$, $f^U$ are semiconvex on $M$, the Lagrangian function $L(\cdot, \lambda, \mu)$ is semiconvex at $\bar{u}$, and the functions $f^L$, $f^U$, $g_i\ (i \in \{1, \dots, p\})$, $h_j\ (j \in \{1, \dots, q\})$, $G_k$, $H_k\ (k \in \{1, \dots, r\})$ are regular at $\bar{u}$. Then, $S = S_1 = S_1'$, where*

$$
\begin{aligned}
S_1 = \{ v \in M :\ & \exists\, \eta \in \{ \lambda^L \partial^c f^L(\bar{u}) + \lambda^U \partial^c f^U(\bar{u}) \} \cap \{ \lambda^L \partial^c f^L(v) + \lambda^U \partial^c f^U(v) \}, \\
& \langle \eta, \bar{u} - v \rangle = 0,\ g_i(v) = 0\ \forall i \in I_g(\bar{u}),\ g_i(v) \le 0\ \forall i \in \{1, \dots, p\} \setminus I_g(\bar{u}), \\
& h_j(v) = 0\ \forall j \in \{1, \dots, q\},\ G_k(v) = 0\ \forall k \in I^G(\bar{u}) \cup I^{GH}(\bar{u}), \\
& H_k(v) = 0\ \forall k \in I^H(\bar{u}) \cup I^{GH}(\bar{u}) \}, \\
S_1' = \{ v \in M :\ & \exists\, \eta \in \lambda^L \partial^c f^L(v) + \lambda^U \partial^c f^U(v),\ \langle \eta, \bar{u} - v \rangle = 0, \\
& g_i(v) = 0\ \forall i \in I_g(\bar{u}),\ g_i(v) \le 0\ \forall i \in \{1, \dots, p\} \setminus I_g(\bar{u}), \\
& h_j(v) = 0\ \forall j \in \{1, \dots, q\},\ G_k(v) = 0\ \forall k \in I^G(\bar{u}) \cup I^{GH}(\bar{u}), \\
& H_k(v) = 0\ \forall k \in I^H(\bar{u}) \cup I^{GH}(\bar{u}) \}.
\end{aligned}
$$

**Proof.** Clearly, $S_1 \subset S_1'$; we claim that $S \subset S_1$ and $S_1' \subset S$.

Let us suppose that $v \in S_1'$; then, $\exists\, \eta \in \lambda^L \partial^c f^L(v) + \lambda^U \partial^c f^U(v)$ such that $\langle \eta, \bar{u} - v \rangle = 0$, $g_i(v) = 0\ \forall i \in I_g(\bar{u})$, $g_i(v) \le 0\ \forall i \in \{1, \dots, p\} \setminus I_g(\bar{u})$, $h_j(v) = 0\ \forall j \in \{1, \dots, q\}$, $G_k(v) = 0\ \forall k \in I^G(\bar{u}) \cup I^{GH}(\bar{u})$, and $H_k(v) = 0\ \forall k \in I^H(\bar{u}) \cup I^{GH}(\bar{u})$. Since $f^L$ and $f^U$ are semiconvex on $X$, $f^L(\bar{u}) \ge f^L(v)$ and $f^U(\bar{u}) \ge f^U(v)$. In addition, since $\bar{u}, v \in M$ and $\bar{u}$ is a solution to (IVPSC), $v \in S$.

Now, we claim that $S \subset S_1$. Suppose $v \in S$; it follows from Lemma 2 that $g_i(v) = 0\ \forall i \in I_g(\bar{u})$ and $g_i(v) \le 0\ \forall i \in \{1, \dots, p\} \setminus I_g(\bar{u})$.

As $\bar{u}$ satisfies condition (3) with $\lambda_i \in \mathbb{R}_+$ and the $S$-stationary condition holds at $\bar{u}$, there exist $u^L \in \partial^c f^L(\bar{u})$, $u^U \in \partial^c f^U(\bar{u})$, $w \in N(M, \bar{u})$, $\nu_g \in \partial^c g_i(\bar{u})\ (i \in \{1, \dots, p\})$, $\nu_h \in \partial^c h_j(\bar{u})\ (j \in \{1, \dots, q\})$, $\nu_G \in \partial^c G_k(\bar{u})$, $\nu_H \in \partial^c H_k(\bar{u})\ (k \in \{1, \dots, r\})$, such that

$$
\lambda^L u^L + \lambda^U u^U + \sum_{i=1}^p \lambda_i \nu_g + \sum_{j=1}^q \lambda_j \nu_h + \sum_{k=1}^r \left( \lambda_k \nu_G + \mu_k \nu_H \right) = -w.
$$

As $M$ is a closed convex subset of $X$, $\langle w, v - \bar{u} \rangle \le 0\ \forall v \in M$; therefore, for $v \in S \subseteq M$, we obtain

$$
\left\langle \lambda^L u^L + \lambda^U u^U + \sum_{i=1}^p \lambda_i \nu_g + \sum_{j=1}^q \lambda_j \nu_h + \sum_{k=1}^r \left( \lambda_k \nu_G + \mu_k \nu_H \right), v - \bar{u} \right\rangle \ge 0,
$$

i.e.,

$$
\begin{aligned}
\langle \lambda^L u^L + \lambda^U u^U, v - \bar{u} \rangle &\ge -\left\langle \sum_{i=1}^p \lambda_i \nu_g + \sum_{j=1}^q \lambda_j \nu_h + \sum_{k=1}^r \left( \lambda_k \nu_G + \mu_k \nu_H \right), v - \bar{u} \right\rangle \\
&= -\left\langle \sum_{i \in I_g(\bar{u})} \lambda_i \nu_g + \sum_{j=1}^q \lambda_j \nu_h + \sum_{k=1}^r \left( \lambda_k \nu_G + \mu_k \nu_H \right), v - \bar{u} \right\rangle.
\end{aligned} \tag{10}
$$

Since $\lambda_i g_i(\bar{u}) = 0\ \forall i \in \{1, \dots, p\}$, $\lambda_j h_j(\bar{u}) = 0\ \forall j \in \{1, \dots, q\}$, and the $S$-stationary condition holds at $\bar{u}$,

$$
(\lambda_i g_i)'(\bar{u}, v - \bar{u}) = \lim_{t \downarrow 0} \frac{\lambda_i g_i(\bar{u} + t(v - \bar{u})) - \lambda_i g_i(\bar{u})}{t} = \lim_{t \downarrow 0} \frac{\lambda_i g_i(\bar{u} + t(v - \bar{u}))}{t}, \tag{11}
$$

$$
(\lambda_j h_j)'(\bar{u}, v - \bar{u}) = \lim_{t \downarrow 0} \frac{\lambda_j h_j(\bar{u} + t(v - \bar{u})) - \lambda_j h_j(\bar{u})}{t} = \lim_{t \downarrow 0} \frac{\lambda_j h_j(\bar{u} + t(v - \bar{u}))}{t}, \tag{12}
$$

$$
(\lambda_k G_k)'(\bar{u}, v - \bar{u}) = \lim_{t \downarrow 0} \frac{\lambda_k G_k(\bar{u} + t(v - \bar{u})) - \lambda_k G_k(\bar{u})}{t} = \lim_{t \downarrow 0} \frac{\lambda_k G_k(\bar{u} + t(v - \bar{u}))}{t}, \tag{13}
$$

$$
(\mu_k H_k)'(\bar{u}, v - \bar{u}) = \lim_{t \downarrow 0} \frac{\mu_k H_k(\bar{u} + t(v - \bar{u})) - \mu_k H_k(\bar{u})}{t} = \lim_{t \downarrow 0} \frac{\mu_k H_k(\bar{u} + t(v - \bar{u}))}{t}. \tag{14}
$$

Since $M$ is convex, we have $\bar{u} + t(v - \bar{u}) \in M$, provided $\bar{u}, v \in M$ and $t \in (0, 1)$.

Hence,

$$
\begin{aligned}
\lambda_i g_i(\bar{u} + t(v - \bar{u})) &\le 0,\ \forall i \in \{1, \dots, p\}, \\
\lambda_j h_j(\bar{u} + t(v - \bar{u})) &= 0,\ \forall j \in \{1, \dots, q\}, \\
\lambda_k G_k(\bar{u} + t(v - \bar{u})) &= 0,\ \forall k \in I^G(\bar{u}) \cup I^{GH}(\bar{u}), \\
\mu_k H_k(\bar{u} + t(v - \bar{u})) &= 0,\ \forall k \in I^H(\bar{u}) \cup I^{GH}(\bar{u}).
\end{aligned}
$$

From (11)–(14) and the above argument, we obtain

$$
\begin{aligned}
(\lambda_i g_i)'(\bar{u}, v - \bar{u}) &\le 0,\ i \in \{1, \dots, p\}, \\
(\lambda_j h_j)'(\bar{u}, v - \bar{u}) &= 0,\ j \in \{1, \dots, q\}, \\
(\lambda_k G_k)'(\bar{u}, v - \bar{u}) &= 0,\ k \in \{1, \dots, r\}, \\
(\mu_k H_k)'(\bar{u}, v - \bar{u}) &= 0,\ k \in \{1, \dots, r\}.
\end{aligned}
$$

Since $g_i$, $h_j$, $G_k$, $H_k$ are regular at $\bar{u}$, we have

$$
\begin{aligned}
(\lambda_i g_i)'(\bar{u}, v - \bar{u}) &= (\lambda_i g_i)^c(\bar{u}, v - \bar{u}), \\
(\lambda_j h_j)'(\bar{u}, v - \bar{u}) &= (\lambda_j h_j)^c(\bar{u}, v - \bar{u}), \\
(\lambda_k G_k)'(\bar{u}, v - \bar{u}) &= (\lambda_k G_k)^c(\bar{u}, v - \bar{u}), \\
(\mu_k H_k)'(\bar{u}, v - \bar{u}) &= (\mu_k H_k)^c(\bar{u}, v - \bar{u}).
\end{aligned}
$$

Hence, there exist $\nu_g \in \partial^c g_i(\bar{u})\ (i \in \{1, \dots, p\})$, $\nu_h \in \partial^c h_j(\bar{u})\ (j \in \{1, \dots, q\})$, $\nu_G \in \partial^c G_k(\bar{u})$, $\nu_H \in \partial^c H_k(\bar{u})\ (k \in \{1, \dots, r\})$, such that

$$
\begin{aligned}
\langle \lambda_i \nu_g, v - \bar{u} \rangle &\le 0,\ \forall\, i \in \{1, \dots, p\}, \\
\langle \lambda_j \nu_h, v - \bar{u} \rangle &= 0,\ \forall\, j \in \{1, \dots, q\}, \\
\langle \lambda_k \nu_G, v - \bar{u} \rangle &= 0,\ \forall\, k \in I^G(\bar{u}) \cup I^{GH}(\bar{u}), \\
\langle \mu_k \nu_H, v - \bar{u} \rangle &= 0,\ \forall\, k \in I^H(\bar{u}) \cup I^{GH}(\bar{u}).
\end{aligned} \tag{15}
$$

From (15) and (10), we obtain $\langle \lambda^L u^L + \lambda^U u^U, v - \bar{u} \rangle \ge 0$. Now, since $f^L(v) = f^L(\bar{u})$ and $f^U(v) = f^U(\bar{u})$, and $f^L$, $f^U$ are semiconvex at $\bar{u}$, Lemma 1 implies that $f'(\bar{u}, v - \bar{u}) \preceq_{LU} 0$; this means $(\lambda^L f^L + \lambda^U f^U)'(\bar{u}, v - \bar{u}) \le 0$. Therefore,

$$
\begin{aligned}
\langle \lambda^L u^L + \lambda^U u^U, v - \bar{u} \rangle &\le (\lambda^L f^L + \lambda^U f^U)^c(\bar{u}, v - \bar{u}) \\
&= (\lambda^L f^L + \lambda^U f^U)'(\bar{u}, v - \bar{u}) \le 0,
\end{aligned}
$$

where $u^L \in \partial^c f^L(\bar{u})$, $u^U \in \partial^c f^U(\bar{u})$.

Hence, $\langle \lambda^L u^L + \lambda^U u^U, v - \bar{u} \rangle = 0$.

Now, we have to prove that $\lambda^L u^L + \lambda^U u^U \in \left( \lambda^L \partial^c f^L(\bar{u}) + \lambda^U \partial^c f^U(\bar{u}) \right) \cap \left( \lambda^L \partial^c f^L(v) + \lambda^U \partial^c f^U(v) \right)$.

Since $\lambda^L u^L + \lambda^U u^U \in \lambda^L \partial^c f^L(\bar{u}) + \lambda^U \partial^c f^U(\bar{u})$, it remains to prove that $\lambda^L u^L + \lambda^U u^U \in \lambda^L \partial^c f^L(v) + \lambda^U \partial^c f^U(v)$.

$f^L$ and $f^U$ are regular at $\bar{u}$ and $v$, so we have

$$
\begin{aligned}
(\lambda^L f^L + \lambda^U f^U)^c(\bar{u}, d) &= (\lambda^L f^L + \lambda^U f^U)'(\bar{u}, d), \\
(\lambda^L f^L + \lambda^U f^U)^c(v, d) &= (\lambda^L f^L + \lambda^U f^U)'(v, d),\ \forall d \in \mathbb{R}^n.
\end{aligned}
$$

Now, we claim that there does not exist any $d_0 \in \mathbb{R}^n$ such that $(\lambda^L f^L + \lambda^U f^U)'(v, d_0) < (\lambda^L f^L + \lambda^U f^U)'(\bar{u}, d_0)$.

Suppose, on the contrary, that there exists $d_0 \in \mathbb{R}^n$ such that $(\lambda^L f^L + \lambda^U f^U)'(v, d_0) < (\lambda^L f^L + \lambda^U f^U)'(\bar{u}, d_0)$, i.e.,

$$
\begin{aligned}
& \lim_{t_1 \downarrow 0} \frac{(\lambda^L f^L + \lambda^U f^U)(v + t_1 d_0) - (\lambda^L f^L + \lambda^U f^U)(v)}{t_1} \\
& - \lim_{t_2 \downarrow 0} \frac{(\lambda^L f^L + \lambda^U f^U)(\bar{u} + t_2 d_0) - (\lambda^L f^L + \lambda^U f^U)(\bar{u})}{t_2} < 0.
\end{aligned}
$$

Then

$$
\lim_{t \downarrow 0} \left[ \frac{(\lambda^L f^L + \lambda^U f^U)(v + t d_0) - (\lambda^L f^L + \lambda^U f^U)(v)}{t} - \frac{(\lambda^L f^L + \lambda^U f^U)(\bar{u} + t d_0) - (\lambda^L f^L + \lambda^U f^U)(\bar{u})}{t} \right] < 0.
$$

Since $(\lambda^L f^L + \lambda^U f^U)(v) = (\lambda^L f^L + \lambda^U f^U)(\bar{u})$, we have

$$
\lim_{t \downarrow 0} \frac{(\lambda^L f^L + \lambda^U f^U)(v + t d_0) - (\lambda^L f^L + \lambda^U f^U)(\bar{u} + t d_0)}{t} < 0.
$$

Thus, $\exists\, t_0 \in (0, 1)$ and $\epsilon > 0$ small enough such that

$$
(\lambda^L f^L + \lambda^U f^U)(v + t d_0) - (\lambda^L f^L + \lambda^U f^U)(\bar{u} + t d_0) < -\epsilon < 0\ \ \forall t \in (0, t_0). \tag{16}
$$

We can easily see that $F(t) = (\lambda^L f^L + \lambda^U f^U)(v + t d_0) - (\lambda^L f^L + \lambda^U f^U)(\bar{u} + t d_0)$ is continuous at $t = 0$.

Letting $t \to 0$ in (16), we have $(\lambda^L f^L + \lambda^U f^U)(v) - (\lambda^L f^L + \lambda^U f^U)(\bar{u}) < 0$, which is a contradiction. Hence,

$$
\langle \lambda^L u^L + \lambda^U u^U, d \rangle \le (\lambda^L f^L + \lambda^U f^U)'(\bar{u}, d) \le (\lambda^L f^L + \lambda^U f^U)'(v, d) = (\lambda^L f^L + \lambda^U f^U)^c(v, d)\ \forall\, d \in \mathbb{R}^n.
$$

This proves that $\lambda^L u^L + \lambda^U u^U \in \lambda^L \partial^c f^L(\bar{u}) + \lambda^U \partial^c f^U(\bar{u})$ implies $\lambda^L u^L + \lambda^U u^U \in \lambda^L \partial^c f^L(v) + \lambda^U \partial^c f^U(v)$. Thus, $\lambda^L u^L + \lambda^U u^U \in \left( \lambda^L \partial^c f^L(\bar{u}) + \lambda^U \partial^c f^U(\bar{u}) \right) \cap \left( \lambda^L \partial^c f^L(v) + \lambda^U \partial^c f^U(v) \right)$. Hence, $v \in S_1$. This completes the proof.

**Example 1.** *Consider an interval-valued optimization problem (IVPSC)*

$$
\begin{aligned}
\min\ & f(u) \\
\text{subject to } & u_1 - u_2 \le 0, \\
& u_1 u_2 = 0,
\end{aligned}
$$

*where $f : \mathbb{R}^2 \to I(\mathbb{R})$ is defined by*

$$
f(u_1, u_2) = \left[ u_2^2 - u_1^2,\ u_2^2 \right].
$$

*As $f^L(u) = u_2^2 - u_1^2$ and $f^U(u) = u_2^2$ are continuously differentiable, the corresponding Clarke subdifferential and gradient are the same.*

*We have $\nabla f^L(u) = (-2u_1, 2u_2)^T$ and $\nabla f^U(u) = (0, 2u_2)^T$. Consider the set*

$$
M = \{ u = (u_1, u_2) : u_1 - u_2 \le 0,\ u_1 u_2 = 0 \};
$$

*f is LU-convex on the set M.*

*The Lagrangian is $L(\cdot, \lambda, \mu)(u) = \lambda^L (u_2^2 - u_1^2) + \lambda^U u_2^2 + \lambda_g (u_1 - u_2) + \lambda_G u_1 + \lambda_H u_2$. Here, the solution set is $S = \{(0, 0)\}$. The point $\bar{u} = (0, 0)$ satisfies condition* (3), *and $L(\cdot, \lambda, \mu)$ is convex.*

*We can easily see that the condition* (3) *holds for the above interval-valued problem*

$$
\lambda^L \begin{bmatrix} -2u_1 \\ 2u_2 \end{bmatrix} + \lambda^U \begin{bmatrix} 0 \\ 2u_2 \end{bmatrix} + \lambda_g \begin{bmatrix} 1 \\ -1 \end{bmatrix} + \lambda_G \begin{bmatrix} 1 \\ 0 \end{bmatrix} + \lambda_H \begin{bmatrix} 0 \\ 1 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix},
$$

*with $\lambda_g = \lambda_G = \lambda_H = 0$ and any values of $\lambda^L$ and $\lambda^U$ at the point $\bar{u} = (0, 0)$. We can also see that the S-stationary condition holds for IVPSC.*

*Choosing $\eta = 0 \in \lambda^L \partial f^L(\bar{u}) + \lambda^U \partial f^U(\bar{u})$, the condition $\langle \eta, \bar{u} - v \rangle = 0$ holds, and the remaining constraints force $v = (0, 0)$. Hence,*

$$
\begin{aligned}
S_1' = \{ v \in M :\ & \exists\, \eta \in \lambda^L \partial^c f^L(v) + \lambda^U \partial^c f^U(v),\ \langle \eta, \bar{u} - v \rangle = 0, \\
& g_i(v) = 0\ \forall i \in I_g(\bar{u}),\ g_i(v) \le 0\ \forall i \in \{1, \dots, p\} \setminus I_g(\bar{u}), \\
& h_j(v) = 0\ \forall j \in \{1, \dots, q\},\ G_k(v) = 0\ \forall k \in I^G(\bar{u}) \cup I^{GH}(\bar{u}), \\
& H_k(v) = 0\ \forall k \in I^H(\bar{u}) \cup I^{GH}(\bar{u}) \} = \{(0, 0)\} = S.
\end{aligned}
$$

*This verifies the above result.*
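The multiplier computation in the example can be replicated numerically; in this sketch the values $\lambda^L = \lambda^U = 1$ are our arbitrary (nonnegative) illustrative choice:

```python
import numpy as np

# Gradients for Example 1: f^L(u) = u2^2 - u1^2, f^U(u) = u2^2,
# g(u) = u1 - u2, G(u) = u1, H(u) = u2.
grad_fL = lambda u: np.array([-2.0 * u[0], 2.0 * u[1]])
grad_fU = lambda u: np.array([0.0, 2.0 * u[1]])
grad_g = np.array([1.0, -1.0])
grad_G = np.array([1.0, 0.0])
grad_H = np.array([0.0, 1.0])

u_bar = np.array([0.0, 0.0])
lam_L, lam_U = 1.0, 1.0           # arbitrary nonnegative objective multipliers
lam_g = lam_G = lam_H = 0.0       # vanishing multipliers, as in the text

residual = (lam_L * grad_fL(u_bar) + lam_U * grad_fU(u_bar)
            + lam_g * grad_g + lam_G * grad_G + lam_H * grad_H)
print(residual)  # [0. 0.]: the stationarity system of condition (3) holds at (0, 0)
```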

Figure 1 represents the lower and upper bound functions $f^L(u)$ and $f^U(u)$ of the interval-valued objective function $f(u)$. Figure 2 shows the constraint functions $g_i(u)$ and the switching constraints $G_k(u)H_k(u)$ for Example 1.

**Figure 1.** The lower and upper bound objective functions.

**Figure 2.** Constraints *g<sup>i</sup>* (*u*) and *G<sup>k</sup>* (*u*)*H<sup>k</sup>* (*u*).

**Corollary 1.** *Let* $\bar{u}$ *be the solution to the problem (IVTNP) such that the condition* (3) *and the S-stationary condition hold. Suppose that the functions* $f^L$, $f^U$, $g_i\ (i \in \{1, \cdots, p\})$, $h_j\ (j \in \{1, \cdots, q\})$, $G_k$, $H_k\ (k \in \{1, \cdots, r\})$ *are semiconvex on* $M$ *and that the Lagrangian function* $L(\cdot, \lambda, \mu)$ *is semiconvex at* $\bar{u}$*; then* $S = S_1 = S_1'$*, where*

$$\begin{split} S_1 = \Big\{ v \in M :\ & \exists\, \eta \in \{\lambda^L \partial^c f^L(\bar{u}) + \lambda^U \partial^c f^U(\bar{u})\} \cap \{\lambda^L \partial^c f^L(v) + \lambda^U \partial^c f^U(v)\},\\ & \langle \eta, \bar{u} - v \rangle = 0,\ g_i(v) = 0\ \forall\, i \in I_g(\bar{u}),\ g_i(v) \le 0\ \forall\, i \in \{1, \cdots, p\} \setminus I_g(\bar{u}),\\ & h_j(v) = 0\ \forall\, j \in \{1, \cdots, q\},\ G_k(v) = 0\ \forall\, k \in I^G(\bar{u}) \cup I^{GH}(\bar{u}),\\ & H_k(v) = 0\ \forall\, k \in I^H(\bar{u}) \cup I^{GH}(\bar{u}) \Big\},\\ S_1' = \Big\{ v \in M :\ & \exists\, \eta \in \lambda^L \partial^c f^L(v) + \lambda^U \partial^c f^U(v),\ \langle \eta, \bar{u} - v \rangle = 0,\\ & g_i(v) = 0\ \forall\, i \in I_g(\bar{u}),\ g_i(v) \le 0\ \forall\, i \in \{1, \cdots, p\} \setminus I_g(\bar{u}),\\ & h_j(v) = 0\ \forall\, j \in \{1, \cdots, q\},\ G_k(v) = 0\ \forall\, k \in I^G(\bar{u}) \cup I^{GH}(\bar{u}),\\ & H_k(v) = 0\ \forall\, k \in I^H(\bar{u}) \cup I^{GH}(\bar{u}) \Big\}. \end{split}$$

We know that every convex function is semiconvex [51]. In the case where $f^L$, $f^U$, $g_i\ (i \in \{1, \cdots, p\})$, $h_j\ (j \in \{1, \cdots, q\})$, $G_k$, $H_k\ (k \in \{1, \cdots, r\})$ are convex functions, the Clarke subdifferential coincides with the subdifferential of convex analysis.

**Corollary 2.** *Let* $\bar{u}$ *be the solution to the problem (IVTNP) such that the condition* (3) *and the S-stationary condition hold. Suppose that the functions* $f^L$, $f^U$, $g_i\ (i \in \{1, \cdots, p\})$, $h_j\ (j \in \{1, \cdots, q\})$, $G_k$, $H_k\ (k \in \{1, \cdots, r\})$ *are convex; then* $S = S_2 = S_2'$*, where*

$$\begin{split} S_2 = \Big\{ v \in M :\ & \exists\, \eta \in \{\lambda^L \partial f^L(\bar{u}) + \lambda^U \partial f^U(\bar{u})\} \cap \{\lambda^L \partial f^L(v) + \lambda^U \partial f^U(v)\},\\ & \langle \eta, \bar{u} - v \rangle = 0,\ g_i(v) = 0\ \forall\, i \in I_g(\bar{u}),\ g_i(v) \le 0\ \forall\, i \in \{1, \cdots, p\} \setminus I_g(\bar{u}),\\ & h_j(v) = 0\ \forall\, j \in \{1, \cdots, q\},\ G_k(v) = 0\ \forall\, k \in I^G(\bar{u}) \cup I^{GH}(\bar{u}),\\ & H_k(v) = 0\ \forall\, k \in I^H(\bar{u}) \cup I^{GH}(\bar{u}) \Big\},\\ S_2' = \Big\{ v \in M :\ & \exists\, \eta \in \lambda^L \partial f^L(v) + \lambda^U \partial f^U(v),\ \langle \eta, \bar{u} - v \rangle = 0,\\ & g_i(v) = 0\ \forall\, i \in I_g(\bar{u}),\ g_i(v) \le 0\ \forall\, i \in \{1, \cdots, p\} \setminus I_g(\bar{u}),\\ & h_j(v) = 0\ \forall\, j \in \{1, \cdots, q\},\ G_k(v) = 0\ \forall\, k \in I^G(\bar{u}) \cup I^{GH}(\bar{u}),\\ & H_k(v) = 0\ \forall\, k \in I^H(\bar{u}) \cup I^{GH}(\bar{u}) \Big\}. \end{split}$$

We can easily see that

$$\Big[\, \bar{u} \in M,\ \sum_{i \in I_g(\bar{u})} \lambda_i g_i(v) + \sum_{j=1}^q \lambda_j h_j(v) + \sum_{k=1}^r \big(\lambda_k G_k(v) + \mu_k H_k(v)\big) = 0 \,\Big]$$

$$\Leftrightarrow \Big[\, \bar{u} \in M,\ g_i(v) = 0\ \forall\, i \in I_g(\bar{u}),\ g_i(v) \le 0\ \forall\, i \in \{1, \cdots, p\} \setminus I_g(\bar{u}),$$

$$h_j(v) = 0\ \forall\, j \in \{1, \cdots, q\},\ G_k(v) = 0\ \forall\, k \in I^G(\bar{u}) \cup I^{GH}(\bar{u}),$$

$$H_k(v) = 0\ \forall\, k \in I^H(\bar{u}) \cup I^{GH}(\bar{u}) \,\Big],$$

and by Lemma 2, $L(v, \lambda, \mu) = \lambda^L f^L(v) + \lambda^U f^U(v)$ for all $v \in S$.

**Corollary 3.** *Suppose that the functions* $f^L$, $f^U$, $g_i\ (i \in \{1, \cdots, p\})$, $h_j\ (j \in \{1, \cdots, q\})$, $G_k$, $H_k\ (k \in \{1, \cdots, r\})$ *and* $L(\cdot, \lambda, \mu)$ *are semiconvex on* $M$*; then* $S = S_3 = S_3'$*, where*

$$\begin{split} S_3 = \Big\{ v \in M :\ & \sum_{i \in I_g(\bar{u})} \lambda_i g_i(v) + \sum_{j=1}^q \lambda_j h_j(v) + \sum_{k=1}^r \big(\lambda_k G_k(v) + \mu_k H_k(v)\big) = 0,\\ & \exists\, \eta \in \partial^c L(\cdot, \lambda^g, \lambda^h, \lambda^G, \lambda^H)(v),\ \langle \eta, \bar{u} - v \rangle = 0 \Big\},\\ S_3' = \Big\{ v \in M :\ & \sum_{i \in I_g(\bar{u})} \lambda_i g_i(v) + \sum_{j=1}^q \lambda_j h_j(v) + \sum_{k=1}^r \big(\lambda_k G_k(v) + \mu_k H_k(v)\big) = 0,\\ & \exists\, \eta \in \partial^c L(\cdot, \lambda^g, \lambda^h, \lambda^G, \lambda^H)(\bar{u}) \cap \partial^c L(\cdot, \lambda^g, \lambda^h, \lambda^G, \lambda^H)(v),\ \langle \eta, \bar{u} - v \rangle = 0 \Big\}. \end{split}$$

#### **4. Conclusions and Future Remarks**

We have considered the interval-valued mathematical programming problem with switching constraints (IVPSC) and studied the Lagrange multiplier characterizations of solution sets with the help of a semiconvex function and the S-stationary condition. The S-stationary condition is stronger than the W-stationary and M-stationary conditions. We have proved that the associated Lagrangian function is constant for IVTNP when the S-stationary condition holds; thus, the W-stationary condition holds, too. Based on the condition proved by Mehlitz [12] for the W-stationary condition, the feasible set of the tightened nonlinear problem (IVTNP) is a subset of the feasible set of the mathematical program with switching constraints (IVPSC). Therefore, we have characterized the particular solution sets for IVTNP. The IVPSC is a new class of optimization problems with significant applications. MPSC can be used for the discretized version of the optimal control problem with switching structure [12], and we can extend the results to interval-valued optimization problems from a practical point of view. The IVPSC can be reformulated as a mathematical program with disjunctive constraints (MPDC) [14]. We can introduce an alternative approach to LICQ and establish first- and second-order optimality conditions for MPDC with interval-valued objective functions. To the best of our knowledge, there are only a few papers related to characterizations of solution sets in interval-valued nonlinear optimization. This work can be extended to various nonlinear programming problems, such as MPEC, MPVC, MPDC, and many more, from the application point of view.

**Author Contributions:** Formal analysis, K.K.L., S.K.M., S.K.S. and M.H.; Funding acquisition, K.K.L.; Methodology, S.K.S. and M.H.; Supervision, S.K.M.; Validation, K.K.L., S.K.M., S.K.S. and M.H.; Writing—original draft, S.K.S.; Writing—review & editing, S.K.S. All authors have read and agreed to the published version of the manuscript.

**Funding:** The second author is financially supported by the "Research Grant for Faculty" (IoE Scheme) under Dev. Scheme No. 6031. The fourth author is financially supported by CSIR-UGC JRF, New Delhi, India, through Reference no.: 1009/(CSIR-UGC NET JUNE 2018).

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** No data were used to support this study.

**Acknowledgments:** The authors are indebted to the anonymous reviewers for their valuable comments and remarks that helped to improve the presentation and quality of the manuscript.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


## *Review* **Recent Advances of Constrained Variational Problems Involving Second-Order Partial Derivatives: A Review**

**Savin Treanță 1,2,3**


**Abstract:** This paper comprehensively reviews the nonlinear dynamics given by some classes of constrained control problems which involve second-order partial derivatives. Specifically, necessary optimality conditions are formulated and proved for the considered variational control problems governed by integral functionals. In addition, the well-posedness and the associated variational inequalities are considered in the present review paper.

**Keywords:** multi-time controlled Lagrangian of second-order; isoperimetric constraints; Euler– Lagrange equations; multiple integral; differential 1-form; curvilinear integral; variational inequalities

**MSC:** 49K15; 49K20; 49K21; 65K10

#### **1. Introduction**

We all know that the Calculus of Variations and Optimal Control Theory are two strongly connected mathematical fields. In this direction, several researchers have investigated these areas, achieving remarkable results (see Friedman [1], Hestenes [2], Kendall [3], Udriște [4], Petrat and Tumulka [5], Treanță [6] and Deckert and Nickel [7]). The problems (in several time variables) studied by the aforementioned researchers have more recently been continued in the study of multi-dimensional optimization problems. These studies have many applications in different branches of mathematical sciences, web access problems, management science, portfolio selection, engineering design, query optimization in databases, game theory, and so on. In this respect, we mention the papers by Mititelu and Treanță [8], Treanță [9–18], and Jayswal et al. [19]. For other connected but different ideas on this topic, the reader can consult Arisawa and Ishii [20], Lai and Motta [21], Shi et al. [22], An et al. [23], Zhao et al. [24], Hung et al. [25], Chen et al. [26], Antonsev and Shmarev [27], Cekic et al. [28], Chen et al. [29], Diening et al. [30], and Zhikov [31].

This review article is structured as follows. Section 2 introduces the second-order PDE-constrained optimal control problem under study (see Theorem 1); this result formulates the necessary optimality conditions for the considered PDE-constrained optimization problem. Section 3 states the associated necessary optimality conditions for a new class of isoperimetric constrained control problems governed by multiple and curvilinear integrals. In Section 4, by using the pseudomonotonicity, hemicontinuity, and monotonicity of the considered integral functionals, we present the well-posedness of some variational inequality problems determined by second-order partial derivatives. Section 5 formulates some very important open problems to be investigated in the near future. Section 6 contains the conclusions of the paper.

**Citation:** Treanță, S. Recent Advances of Constrained Variational Problems Involving Second-Order Partial Derivatives: A Review. *Mathematics* **2022**, *10*, 2599. https://doi.org/10.3390/math10152599

Academic Editor: Yeol Je Cho

Received: 4 July 2022 Accepted: 25 July 2022 Published: 26 July 2022

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

#### **2. Second-Order PDE-Constrained Control Problem**

Let $\mathcal{H}_\zeta\big(b(t), b_\gamma(t), b_{\alpha\beta}(t), u(t), t\big)$, $\zeta = \overline{1,m}$, be some functions of $C^3$-class, called *multi-time controlled Lagrangians of second order*, where $t = (t^\alpha) = (t^1, \cdots, t^m) \in \Lambda_{t_0,t_1} \subset \mathbb{R}^m_+$, $b = (b^i) = (b^1, \cdots, b^n): \Lambda_{t_0,t_1} \to \mathbb{R}^n$ is a function of $C^4$-class (the *state variable*) and $u = (u^\vartheta) = (u^1, \cdots, u^k): \Lambda_{t_0,t_1} \to \mathbb{R}^k$ is a piecewise continuous function (the *control variable*). In addition, denote $b_\alpha(t) := \frac{\partial b}{\partial t^\alpha}(t)$, $b_{\alpha\beta}(t) := \frac{\partial^2 b}{\partial t^\alpha \partial t^\beta}(t)$, $\alpha, \beta \in \{1, \ldots, m\}$, and consider $\Lambda_{t_0,t_1} = [t_0, t_1]$ (a *multi-time interval* in $\mathbb{R}^m_+$), the hyper-parallelepiped determined by the diagonally opposite points $t_0, t_1 \in \mathbb{R}^m_+$. Moreover, we assume that the previous multi-time controlled Lagrangians of second order determine a closed controlled Lagrange 1-form

$$\mathcal{H}_\zeta\big(b(t), b_\gamma(t), b_{\alpha\beta}(t), u(t), t\big)\, dt^\zeta$$

(with summation over repeated indices), which provides the following curvilinear integral functional:

$$J(b(\cdot), u(\cdot)) = \int_{\Upsilon_{t_0,t_1}} \mathcal{H}_\zeta\big(b(t), b_\gamma(t), b_{\alpha\beta}(t), u(t), t\big)\, dt^\zeta, \tag{1}$$

where $\Upsilon_{t_0,t_1}$ is a smooth curve, included in $\Lambda_{t_0,t_1}$, joining $t_0, t_1 \in \mathbb{R}^m_+$.

*Second-order PDE-constrained control problem. Find the pair* $(b^*, u^*)$ *that minimizes the aforementioned controlled path-independent curvilinear integral functional in Equation* (1)*, among all the pairs* $(b, u)$ *satisfying*

$$b(t_0) = b_0, \quad b(t_1) = b_1, \quad b_\gamma(t_0) = \tilde{b}_{\gamma 0}, \quad b_\gamma(t_1) = \tilde{b}_{\gamma 1}$$

*and the partial speed-acceleration constraints:*

$$g^a_\zeta\big(b(t), b_\gamma(t), b_{\alpha\beta}(t), u(t), t\big) = 0, \quad a = 1, 2, \cdots, r \le n,\ \zeta = 1, 2, \cdots, m.$$

In order to investigate the above controlled optimization problem in Equation (1), associated with the aforementioned partial speed-acceleration constraints, we introduce the Lagrange multiplier $p = (p_a(t))$ and build new multi-time-controlled second-order Lagrangians (with summation over repeated indices):

$$\mathcal{H}_{1\zeta}\big(b(t), b_\gamma(t), b_{\alpha\beta}(t), u(t), p(t), t\big) = \mathcal{H}_\zeta\big(b(t), b_\gamma(t), b_{\alpha\beta}(t), u(t), t\big)$$

$$+\, p_a(t)\, g^a_\zeta\big(b(t), b_\gamma(t), b_{\alpha\beta}(t), u(t), t\big), \quad \zeta = \overline{1, m},$$

which change the initial controlled optimization problem (with second-order PDE constraints) into a partial speed-acceleration, unconstrained, controlled optimization problem:

$$\min_{(b(\cdot), u(\cdot), p(\cdot))} \int_{\Upsilon_{t_0,t_1}} \mathcal{H}_{1\zeta}\big(b(t), b_\gamma(t), b_{\alpha\beta}(t), u(t), p(t), t\big)\, dt^\zeta \tag{2}$$

$$b(t_q) = b_q, \quad b_\gamma(t_q) = \tilde{b}_{\gamma q}, \quad q = 0, 1,$$

if the Lagrange 1-form $\mathcal{H}_{1\zeta}\big(b(t), b_\gamma(t), b_{\alpha\beta}(t), u(t), p(t), t\big)\, dt^\zeta$ is completely integrable.

In accordance with Lagrange theory, an extreme point of Equation (1) is found among the extreme points of Equation (2).

To formulate the necessary optimality conditions associated with the aforementioned control problem, we shall use Saunders's multi-index notation (Saunders [32], Treanță [9–12]).

The following theorem represents the main result of this section (see Treanță [12]). It establishes the necessary optimality conditions associated with the considered second-order PDE-constrained control problem.


**Theorem 1** (Treanță [12])**.** *If* $(b^*(\cdot), u^*(\cdot), p^*(\cdot))$ *solves Equation (2), then*

$$(b^*(\cdot),\ u^*(\cdot),\ p^*(\cdot))$$

*solves the following Euler–Lagrange system of PDEs:*

$$\frac{\partial \mathcal{H}_{1\zeta}}{\partial b^i} - D_\gamma \frac{\partial \mathcal{H}_{1\zeta}}{\partial b^i_\gamma} + \frac{1}{\mu(\alpha,\beta)} D^2_{\alpha\beta} \frac{\partial \mathcal{H}_{1\zeta}}{\partial b^i_{\alpha\beta}} = 0, \quad i = \overline{1, n},\ \zeta = \overline{1, m}$$

$$\frac{\partial \mathcal{H}_{1\zeta}}{\partial u^\theta} - D_\gamma \frac{\partial \mathcal{H}_{1\zeta}}{\partial u^\theta_\gamma} + \frac{1}{\mu(\alpha,\beta)} D^2_{\alpha\beta} \frac{\partial \mathcal{H}_{1\zeta}}{\partial u^\theta_{\alpha\beta}} = 0, \quad \theta = \overline{1, k},\ \zeta = \overline{1, m}$$

$$\frac{\partial \mathcal{H}_{1\zeta}}{\partial p_a} - D_\gamma \frac{\partial \mathcal{H}_{1\zeta}}{\partial p_{a,\gamma}} + \frac{1}{\mu(\alpha,\beta)} D^2_{\alpha\beta} \frac{\partial \mathcal{H}_{1\zeta}}{\partial p_{a,\alpha\beta}} = 0, \quad a = \overline{1, r},\ \zeta = \overline{1, m},$$

*where* $p_{a,\gamma} := \frac{\partial p_a}{\partial t^\gamma}$, $p_{a,\alpha\beta} := \frac{\partial^2 p_a}{\partial t^\alpha \partial t^\beta}$, $u^\theta_{\alpha\beta} := \frac{\partial^2 u^\theta}{\partial t^\alpha \partial t^\beta}$, $\alpha, \beta, \gamma \in \{1, 2, \ldots, m\}$.

**Remark 1** (Treanță [12])**.** *The system of Euler–Lagrange PDEs given in Theorem 1 becomes*

$$\frac{\partial \mathcal{H}_{1\zeta}}{\partial b^i} - D_\gamma \frac{\partial \mathcal{H}_{1\zeta}}{\partial b^i_\gamma} + \frac{1}{\mu(\alpha,\beta)} D^2_{\alpha\beta} \frac{\partial \mathcal{H}_{1\zeta}}{\partial b^i_{\alpha\beta}} = 0, \quad i = \overline{1, n},\ \zeta = \overline{1, m}$$

$$\frac{\partial \mathcal{H}_{1\zeta}}{\partial u^\theta} - D_\gamma \frac{\partial \mathcal{H}_{1\zeta}}{\partial u^\theta_\gamma} + \frac{1}{\mu(\alpha,\beta)} D^2_{\alpha\beta} \frac{\partial \mathcal{H}_{1\zeta}}{\partial u^\theta_{\alpha\beta}} = 0, \quad \theta = \overline{1, k},\ \zeta = \overline{1, m}$$

$$g^a_\zeta\big(b(t), b_\gamma(t), b_{\alpha\beta}(t), u(t), t\big) = 0, \quad a = 1, 2, \cdots, r \le n,\ \zeta = 1, 2, \cdots, m.$$

**Remark 2** (Treanță [12])**.** *(i) The most general Lagrange* 1*-form that can be used in the previous problem is of the form:*

$$\mathcal{H}_{2\zeta}\big(b(t), b_\gamma(t), b_{\alpha\beta}(t), u(t), p(t), t\big) = \mathcal{H}_\zeta\big(b(t), b_\gamma(t), b_{\alpha\beta}(t), u(t), t\big)$$

$$+\, p^\lambda_{a\zeta}(t)\, g^a_\lambda\big(b(t), b_\gamma(t), b_{\alpha\beta}(t), u(t), t\big).$$

*(ii) The closeness conditions* $D_\theta \mathcal{H}_\zeta = D_\zeta \mathcal{H}_\theta$ *associated with the Lagrange* 1*-form* $\mathcal{H}_\zeta\big(b(t), b_\gamma(t), b_{\alpha\beta}(t), u(t), t\big)\, dt^\zeta$ *are actually PDE constraints for the considered problem. The optimization problem of the controlled curvilinear integral cost functional* $J(b(\cdot), u(\cdot))$*, conditioned by* $D_\theta \mathcal{H}_\zeta = D_\zeta \mathcal{H}_\theta$*, can be studied by using the following Lagrange* 1*-form:*

$$\mathcal{H}_{3\zeta}\big(b(t), b_\gamma(t), b_{\alpha\beta}(t), u(t), p(t), t\big) = \mathcal{H}_\zeta\big(b(t), b_\gamma(t), b_{\alpha\beta}(t), u(t), t\big)$$

$$+\, p^{\theta\lambda}_\zeta(t)\big(D_\theta \mathcal{H}_\lambda - D_\lambda \mathcal{H}_\theta\big).$$

*Illustrative example.* Minimize the following objective functional:

$$J(b(\cdot), u(\cdot)) = \int_{\Upsilon_{0,1}} \big(b^2(t) + u^2(t)\big)\, dt^1 + \big(b^2(t) + u^2(t)\big)\, dt^2$$

subject to $b_{t^1}(t) + b_{t^2}(t) = 0$, $b(0,0) = 0$, $b(1,1) = 0$, where $\Upsilon_{0,1}$ is a curve of $C^1$-class in $[0,1]^2$, joining $(0,0)$ and $(1,1)$.

*Solution.* The path-independence of the functional *J*(*b*(·), *u*(·)) gives:

$$b\left(\frac{\partial b}{\partial t^2} - \frac{\partial b}{\partial t^1}\right) = u\left(\frac{\partial u}{\partial t^1} - \frac{\partial u}{\partial t^2}\right).$$

Moreover, for the Lagrange 1-form (Remark 2), we obtain:

$$\Theta_{11} = b^2(t) + u^2(t) + \omega_1(t)\big(b_{t^1}(t) + b_{t^2}(t)\big),$$

$$\Theta_{12} = b^2(t) + u^2(t) + \omega_2(t)\big(b_{t^1}(t) + b_{t^2}(t)\big)$$

and the extreme points are formulated as below:

$$2b - \frac{\partial \omega_1}{\partial t^1} - \frac{\partial \omega_1}{\partial t^2} = 0, \quad 2b - \frac{\partial \omega_2}{\partial t^1} - \frac{\partial \omega_2}{\partial t^2} = 0,$$

$$2u = 0,$$

$$b\_{t^1}(t) + b\_{t^2}(t) = 0.$$

It follows that $(b^*, u^*) = (0, 0)$ is the optimal point of the considered optimization problem, and it satisfies $\frac{\partial \varphi}{\partial t^1} + \frac{\partial \varphi}{\partial t^2} = 0$, where $\varphi := \omega_1 - \omega_2$.
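The optimality of $(b^*, u^*) = (0,0)$ can be checked numerically. The sketch below is our own illustration (the discretization and the sample competitor are not from the paper): it evaluates $J$ along the diagonal curve $t(s) = (s, s)$, where $dt^1 = dt^2 = ds$, and compares the optimal pair with a feasible competitor satisfying $b_{t^1} + b_{t^2} = 0$ and the boundary conditions.

```python
import numpy as np

def trapz(f, x):
    """Composite trapezoidal rule (kept explicit to stay NumPy-version agnostic)."""
    return float(np.sum((f[1:] + f[:-1]) / 2 * np.diff(x)))

def J(b, u, n=401):
    """Evaluate J(b,u) along the diagonal curve t(s) = (s, s), s in [0, 1]:
    there dt^1 = dt^2 = ds, so J = integral of 2*(b^2 + u^2) ds."""
    s = np.linspace(0.0, 1.0, n)
    return trapz(2.0 * (b(s, s) ** 2 + u(s, s) ** 2), s)

# The optimal pair (b*, u*) = (0, 0) gives J = 0.
assert J(lambda t1, t2: 0.0 * t1, lambda t1, t2: 0.0 * t1) == 0.0

# A feasible competitor: b(t) = sin(pi*(t^1 - t^2)) satisfies
# b_{t^1} + b_{t^2} = 0 and b(0,0) = b(1,1) = 0; take u(t) = t^1*t^2.
b = lambda t1, t2: np.sin(np.pi * (t1 - t2))
u = lambda t1, t2: t1 * t2
assert J(b, u) > 0.0  # along the diagonal, J = integral of 2*s^4 ds = 2/5
```

Since the integrand $b^2 + u^2$ is nonnegative, any feasible competitor gives $J \ge 0 = J(b^*, u^*)$, in agreement with the computation above.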

#### **3. Isoperimetric Constrained Controlled Optimization Problem**

In this section, we use similar notations as in the previous section. We consider a $C^3$-class function $\mathcal{H}\big(b(t), b_\gamma(t), b_{\alpha\beta}(t), u(t), t\big)$, called a *multi-time-controlled, second-order Lagrangian*, where $t = (t^\alpha) = (t^1, \cdots, t^m) \in \Lambda_{t_0,t_1} \subset \mathbb{R}^m_+$, $b = (b^i) = (b^1, \cdots, b^n): \Lambda_{t_0,t_1} \to \mathbb{R}^n$ is a function of $C^4$-class (the *state variable*), and $u = (u^\vartheta) = (u^1, \cdots, u^k): \Lambda_{t_0,t_1} \to \mathbb{R}^k$ is a piecewise continuous function (the *control variable*). In addition, denote $b_\alpha(t) := \frac{\partial b}{\partial t^\alpha}(t)$, $b_{\alpha\beta}(t) := \frac{\partial^2 b}{\partial t^\alpha \partial t^\beta}(t)$, $\alpha, \beta \in \{1, \ldots, m\}$, and consider $\Lambda_{t_0,t_1} = [t_0, t_1]$ as the hyper-parallelepiped generated by the diagonally opposite points $t_0, t_1 \in \mathbb{R}^m_+$.

*Isoperimetric constrained control problem. Find the pair* (*b* ∗ , *u* ∗ ) *that minimizes the following multiple integral functional:*

$$J(b(\cdot), u(\cdot)) = \int_{\Lambda_{t_0,t_1}} \mathcal{H}\big(b(t), b_\gamma(t), b_{\alpha\beta}(t), u(t), t\big)\, dt^1 \cdots dt^m \tag{3}$$

*among all the pair functions* (*b*, *u*) *satisfying*

$$b(t_0) = b_0, \quad b(t_1) = b_1, \quad b_\gamma(t_0) = \tilde{b}_{\gamma 0}, \quad b_\gamma(t_1) = \tilde{b}_{\gamma 1},$$

*or*

$$b(t)|\_{\partial \Lambda\_{t\_0, t\_1}} = \text{given}, \quad b\_{\gamma}(t)|\_{\partial \Lambda\_{t\_0, t\_1}} = \text{given}$$

*and the isoperimetric constraints (that is, constant level sets of some functionals) formulated as follows.*

*Isoperimetric Constraints Defined by Controlled Curvilinear Integral Functionals*

Consider the isoperimetric constraints:

$$\int_{\Upsilon_{t_0,t_1}} g^a_\zeta\big(b(t), b_\gamma(t), b_{\alpha\beta}(t), u(t), t\big)\, dt^\zeta = l^a, \quad a = 1, 2, \cdots, r \le n,$$

where $\Upsilon_{t_0,t_1}$ is a smooth curve, included in $\Lambda_{t_0,t_1}$, joining the points $t_0, t_1 \in \mathbb{R}^m_+$, and

$$g^a_\zeta\big(b(t), b_\gamma(t), b_{\alpha\beta}(t), u(t), t\big)\, dt^\zeta, \quad a = 1, 2, \cdots, r$$

are completely integrable differential 1-forms, namely, $D_\gamma g^a_\zeta = D_\zeta g^a_\gamma$, $\gamma, \zeta \in \{1, \cdots, m\}$, $\gamma \neq \zeta$, with $D_\gamma := \frac{\partial}{\partial t^\gamma}$, $\gamma \in \{1, \cdots, m\}$.

In order to investigate the above controlled optimization problem in Equation (3), associated with the aforementioned isoperimetric constraints, we introduce the curve $\Upsilon_{t_0,t} \subset \Upsilon_{t_0,t_1}$ and the auxiliary variables:

$$y^a(t) = \int_{\Upsilon_{t_0,t}} g^a_\zeta\big(b(\tau), b_\gamma(\tau), b_{\alpha\beta}(\tau), u(\tau), \tau\big)\, d\tau^\zeta, \quad a = 1, 2, \cdots, r,$$

which satisfy $y^a(t_0) = 0$, $y^a(t_1) = l^a$. Consequently, the functions $y^a$ fulfill the next first-order PDEs:

$$\frac{\partial y^a}{\partial t^\zeta}(t) = g^a_\zeta\big(b(t), b_\gamma(t), b_{\alpha\beta}(t), u(t), t\big), \quad y^a(t_1) = l^a.$$
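The construction of the auxiliary variables $y^a$ can be made concrete numerically. The sketch below is our own toy instance (not from the paper): for $m = 2$ it cumulatively integrates a closed (completely integrable) 1-form $g_1\, d\tau^1 + g_2\, d\tau^2$ along the diagonal curve $\tau(s) = (s, s)$, and checks the boundary values $y(t_0) = 0$ and $y(t_1) = l$.

```python
import numpy as np

# Toy instance: the closed 1-form g_1 = tau^2, g_2 = tau^1 has the potential
# y = tau^1 * tau^2, so the curvilinear integral is path-independent and
# l = y(1, 1) = 1.  Integrate along the diagonal curve tau(s) = (s, s).
s = np.linspace(0.0, 1.0, 1001)
tau1, tau2 = s, s
g1, g2 = tau2, tau1                                  # components of the 1-form
integrand = g1 * np.gradient(tau1, s) + g2 * np.gradient(tau2, s)
dy = (integrand[1:] + integrand[:-1]) / 2 * np.diff(s)  # trapezoidal increments
y = np.concatenate([[0.0], np.cumsum(dy)])              # cumulative integral

assert abs(y[0]) < 1e-12          # y(t_0) = 0
assert abs(y[-1] - 1.0) < 1e-6    # y(t_1) = l = 1, since y = tau^1 * tau^2
```

Because the form is closed, the same value $l$ is obtained along any curve joining $t_0$ and $t_1$, which is exactly what the complete-integrability condition $D_\gamma g^a_\zeta = D_\zeta g^a_\gamma$ guarantees.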

Considering the Lagrange multiplier $p = (p^\zeta_a(t))$ and denoting $y = (y^a(t))$, we introduce a new multi-time-controlled Lagrangian of second order:

$$\mathcal{H}_1\big(b(t), b_\gamma(t), b_{\alpha\beta}(t), u(t), y(t), y_\zeta(t), p(t), t\big) = \mathcal{H}\big(b(t), b_\gamma(t), b_{\alpha\beta}(t), u(t), t\big)$$

$$+\, p^\zeta_a(t)\left( g^a_\zeta\big(b(t), b_\gamma(t), b_{\alpha\beta}(t), u(t), t\big) - \frac{\partial y^a}{\partial t^\zeta}(t) \right)$$

that changes the initial control problem into an unconstrained control problem

$$\min_{b(\cdot), u(\cdot), y(\cdot), p(\cdot)} \int_{\Lambda_{t_0,t_1}} \mathcal{H}_1\big(b(t), b_\gamma(t), b_{\alpha\beta}(t), u(t), y(t), y_\zeta(t), p(t), t\big)\, dt^1 \cdots dt^m \tag{4}$$

$$b(t_q) = b_q, \quad b_\gamma(t_q) = \tilde{b}_{\gamma q}, \quad q = 0, 1$$

$$y(t\_0) = 0, \quad y(t\_1) = l.$$

In accordance with Lagrange theory, an extreme point of Equation (3) is found among the extreme points of Equation (4).

The following theorem (see Treanță and Ahmad [13]) establishes the necessary optimality conditions associated with the considered isoperimetric constrained control problem.

**Theorem 2** (Treanță and Ahmad [13])**.** *If* $(b^*(\cdot), u^*(\cdot), y^*(\cdot), p^*(\cdot))$ *solves Equation (4), then*

$$(b^*(\cdot),\ u^*(\cdot),\ y^*(\cdot),\ p^*(\cdot))$$

*solves the following Euler–Lagrange system of PDEs:*

$$\frac{\partial \mathcal{H}_1}{\partial b^i} - D_\gamma \frac{\partial \mathcal{H}_1}{\partial b^i_\gamma} + \frac{1}{\mu(\alpha,\beta)} D^2_{\alpha\beta} \frac{\partial \mathcal{H}_1}{\partial b^i_{\alpha\beta}} = 0, \quad i = \overline{1, n}$$

$$\frac{\partial \mathcal{H}_1}{\partial u^\theta} - D_\gamma \frac{\partial \mathcal{H}_1}{\partial u^\theta_\gamma} + \frac{1}{\mu(\alpha,\beta)} D^2_{\alpha\beta} \frac{\partial \mathcal{H}_1}{\partial u^\theta_{\alpha\beta}} = 0, \quad \theta = \overline{1, k}$$

$$\frac{\partial \mathcal{H}_1}{\partial y^a} - D_\zeta \frac{\partial \mathcal{H}_1}{\partial y^a_\zeta} + \frac{1}{\mu(\alpha,\beta)} D^2_{\alpha\beta} \frac{\partial \mathcal{H}_1}{\partial y^a_{\alpha\beta}} = 0, \quad a = \overline{1, r}$$

$$\frac{\partial \mathcal{H}_1}{\partial p^\zeta_a} - D_\gamma \frac{\partial \mathcal{H}_1}{\partial p^\zeta_{a,\gamma}} + \frac{1}{\mu(\alpha,\beta)} D^2_{\alpha\beta} \frac{\partial \mathcal{H}_1}{\partial p^\zeta_{a,\alpha\beta}} = 0,$$

*where p ζ <sup>a</sup>*,*<sup>γ</sup>* := *∂p ζ a ∂t γ* , *p ζ <sup>a</sup>*,*αβ* := *∂* 2 *p ζ a ∂t <sup>α</sup>∂t β* , *u ϑ αβ* := *∂* 2*u ϑ ∂t <sup>α</sup>∂t β* , *y a αβ* := *∂* 2*y a ∂t <sup>α</sup>∂t β* , *α*, *β*, *γ*, *ζ* ∈ {1, 2, ..., *m*}*.*

**Remark 3** (Treanță and Ahmad [13])**.** *The system of Euler–Lagrange PDEs given in Theorem 2 becomes*

$$\frac{\partial \mathcal{H}_1}{\partial b^i} - D_\gamma \frac{\partial \mathcal{H}_1}{\partial b^i_\gamma} + \frac{1}{\mu(\alpha,\beta)} D^2_{\alpha\beta} \frac{\partial \mathcal{H}_1}{\partial b^i_{\alpha\beta}} = 0, \quad i = \overline{1, n}$$

$$\frac{\partial \mathcal{H}_1}{\partial u^\theta} - D_\gamma \frac{\partial \mathcal{H}_1}{\partial u^\theta_\gamma} + \frac{1}{\mu(\alpha,\beta)} D^2_{\alpha\beta} \frac{\partial \mathcal{H}_1}{\partial u^\theta_{\alpha\beta}} = 0, \quad \theta = \overline{1, k}$$

$$\frac{\partial p^\zeta_a}{\partial t^\zeta} = 0, \quad a = \overline{1, r}, \quad \zeta \in \{1, 2, \cdots, m\}$$

$$\frac{\partial y^a}{\partial t^\zeta}(t) = g^a_\zeta\big(b(t), b_\gamma(t), b_{\alpha\beta}(t), u(t), t\big).$$

*In consequence, the Lagrange matrix multiplier p has null total divergence. Moreover, it is well determined only if the optimal solution is not an extreme point for at least one of the functionals*

$$\int_{\Upsilon_{t_0,t_1}} g^a_\zeta\big(b(t), b_\gamma(t), b_{\alpha\beta}(t), u(t), t\big)\, dt^\zeta, \quad a = \overline{1, r}.$$

#### **4. Well-Posedness of Some Variational Inequalities Involving Second-Order Partial Derivatives**

In the following, in accordance with Treanță [14–16], we consider: $\Lambda_{s_1,s_2}$ a compact set in $\mathbb{R}^m$; $s = (s^\zeta)$, $\zeta = \overline{1, m}$, a multi-variate evolution parameter in $\Lambda_{s_1,s_2}$; $\Upsilon \subset \Lambda_{s_1,s_2}$ a piecewise differentiable curve that links the points $s_1 = (s^1_1, \ldots, s^m_1)$, $s_2 = (s^1_2, \ldots, s^m_2)$ in $\Lambda_{s_1,s_2}$; $\mathcal{B}$ the space of $C^4$-class *state* functions $b : \Lambda_{s_1,s_2} \to \mathbb{R}^n$; and $b_\kappa := \frac{\partial b}{\partial s^\kappa}$, $b_{\alpha\beta} := \frac{\partial^2 b}{\partial s^\alpha \partial s^\beta}$ the *partial speed* and the *partial acceleration*, respectively. In addition, let $\mathbf{U}$ be the space of $C^1$-class *control* functions $u : \Lambda_{s_1,s_2} \to \mathbb{R}^k$, and assume that $B \times U$ is a (nonempty) convex and closed subset of $\mathcal{B} \times \mathbf{U}$, equipped with

$$\langle (b, u), (q, z) \rangle = \int_\Upsilon \big[b(s) \cdot q(s) + u(s) \cdot z(s)\big]\, ds^\zeta$$

$$= \int_\Upsilon \Big[\sum_{i=1}^n b^i(s) q^i(s) + \sum_{j=1}^k u^j(s) z^j(s)\Big]\, ds^\zeta$$

$$= \int_\Upsilon \Big[\sum_{i=1}^n b^i(s) q^i(s) + \sum_{j=1}^k u^j(s) z^j(s)\Big]\, ds^1 + \cdots + \Big[\sum_{i=1}^n b^i(s) q^i(s) + \sum_{j=1}^k u^j(s) z^j(s)\Big]\, ds^m,$$

$$\forall\, (b, u), (q, z) \in \mathcal{B} \times \mathbf{U}$$

and the norm induced by it.
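A small numerical sketch of this inner product (our own discretization and sample data, not from the paper) may help fix ideas: along a chosen parametrized curve, each $ds^\zeta$ is pulled back to the curve parameter and the curvilinear integral becomes an ordinary one-dimensional quadrature.

```python
import numpy as np

def inner_product(b, q, u, z, m=2, n=501):
    """<(b,u),(q,z)> = integral over the curve of [b(s)·q(s) + u(s)·z(s)] (ds^1+...+ds^m),
    evaluated along the diagonal curve s(r) = (r, ..., r), r in [0, 1]."""
    r = np.linspace(0.0, 1.0, n)
    pts = np.stack([r] * m, axis=1)                  # points on the curve
    f = np.array([np.dot(b(p), q(p)) + np.dot(u(p), z(p)) for p in pts])
    # along this curve each ds^zeta pulls back to dr, so the 1-form sums to m*dr
    return m * float(np.sum((f[1:] + f[:-1]) / 2 * np.diff(r)))

# Example with m = 2: b = q = (r, r) and scalar u = z = r give the integrand
# 2r^2 + r^2 = 3r^2, so the inner product is 2 * integral of 3r^2 dr = 2.
b = q = lambda p: p
u = z = lambda p: np.array([p[0]])
val = inner_product(b, q, u, z)
assert abs(val - 2.0) < 1e-3
```

The induced norm is then $\|(b,u)\| = \langle (b,u), (b,u)\rangle^{1/2}$, computed by the same quadrature.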

Let $J^2(\mathbb{R}^m, \mathbb{R}^n)$ be the second-order jet bundle of $\mathbb{R}^m$ and $\mathbb{R}^n$. Assume that the Lagrangians $w_\zeta : J^2(\mathbb{R}^m, \mathbb{R}^n) \times \mathbb{R}^k \to \mathbb{R}$, $\zeta = \overline{1, m}$, provide a closed controlled Lagrange 1-form

$$w_\zeta\big(s, b(s), b_\kappa(s), b_{\alpha\beta}(s), u(s)\big)\, ds^\zeta,$$

which gives the following integral functional:

$$\mathcal{W} : \mathcal{B} \times \mathbf{U} \to \mathbb{R}, \quad \mathcal{W}(b, u) = \int_\Upsilon w_\zeta\big(s, b(s), b_\kappa(s), b_{\alpha\beta}(s), u(s)\big)\, ds^\zeta$$

$$= \int_\Upsilon w_1\big(s, b(s), b_\kappa(s), b_{\alpha\beta}(s), u(s)\big)\, ds^1 + \cdots + w_m\big(s, b(s), b_\kappa(s), b_{\alpha\beta}(s), u(s)\big)\, ds^m.$$

In order to state the problem under study, we use Saunders's multi-index notation (Saunders [32]).

Now, we introduce the variational problem: find (*b*, *u*) ∈ *B* × *U* such that

$$\int_\Upsilon \Big[\frac{\partial w_\zeta}{\partial b}(\Psi_{b,u}(s))(q(s) - b(s)) + \frac{\partial w_\zeta}{\partial b_\kappa}(\Psi_{b,u}(s)) D_\kappa(q(s) - b(s))\Big]\, ds^\zeta \tag{5}$$

$$+ \int_\Upsilon \Big[\frac{1}{\mu(\alpha,\beta)} \frac{\partial w_\zeta}{\partial b_{\alpha\beta}}(\Psi_{b,u}(s)) D^2_{\alpha\beta}(q(s) - b(s))\Big]\, ds^\zeta$$

$$+ \int_\Upsilon \Big[\frac{\partial w_\zeta}{\partial u}(\Psi_{b,u}(s))(z(s) - u(s))\Big]\, ds^\zeta \ge 0, \quad \forall (q, z) \in B \times U,$$

where $D_\kappa := \frac{\partial}{\partial s^\kappa}$ is the total derivative operator, $D^2_{\alpha\beta} := D_\alpha(D_\beta)$, and $\Psi_{b,u}(s) := \big(s, b(s), b_\kappa(s), b_{\alpha\beta}(s), u(s)\big)$.

Let Ω be the feasible solution set of (5):

$$\begin{split} \Omega = \Big\{ (b, u) \in B \times U : \int_\Upsilon \Big[ & (q(s) - b(s)) \frac{\partial w_\zeta}{\partial b}(\Psi_{b,u}(s)) \\ & + D_\kappa(q(s) - b(s)) \frac{\partial w_\zeta}{\partial b_\kappa}(\Psi_{b,u}(s)) \\ & + \frac{1}{\mu(\alpha,\beta)} D^2_{\alpha\beta}(q(s) - b(s)) \frac{\partial w_\zeta}{\partial b_{\alpha\beta}}(\Psi_{b,u}(s)) \\ & + (z(s) - u(s)) \frac{\partial w_\zeta}{\partial u}(\Psi_{b,u}(s)) \Big]\, ds^\zeta \ge 0,\ \forall (q, z) \in B \times U \Big\}. \end{split}$$

**Assumption 1.** *The next working hypothesis is assumed:*

$$dG := D_{\kappa} \left[ \frac{\partial w_{\zeta}}{\partial b_{\kappa}} (b - q) \right] ds^{\zeta} \tag{6}$$

*is a total exact differential, with* $G(s_1) = G(s_2)$*.*

According to Equation (6) and the notion of monotonicity associated with variational inequalities, we formulate (see Treanţă et al. [14]) the monotonicity and pseudomonotonicity of *W*.

**Definition 1.** *The functional W is monotone on B* × *U if*

$$\begin{split} \int_{\mathcal{Y}} \Big[ (b(s) - q(s)) \Big( \frac{\partial w_{\zeta}}{\partial b} (\Psi_{b,u}(s)) - \frac{\partial w_{\zeta}}{\partial b} (\Psi_{q,z}(s)) \Big) \\ + (u(s) - z(s)) \Big( \frac{\partial w_{\zeta}}{\partial u} (\Psi_{b,u}(s)) - \frac{\partial w_{\zeta}}{\partial u} (\Psi_{q,z}(s)) \Big) \\ + D_{\kappa} (b(s) - q(s)) \Big( \frac{\partial w_{\zeta}}{\partial b_{\kappa}} (\Psi_{b,u}(s)) - \frac{\partial w_{\zeta}}{\partial b_{\kappa}} (\Psi_{q,z}(s)) \Big) \\ + \frac{1}{n(\alpha,\beta)} D^{2}_{\alpha\beta} (b(s) - q(s)) \Big( \frac{\partial w_{\zeta}}{\partial b_{\alpha\beta}} (\Psi_{b,u}(s)) - \frac{\partial w_{\zeta}}{\partial b_{\alpha\beta}} (\Psi_{q,z}(s)) \Big) \Big] ds^{\zeta} \ge 0, \end{split}$$

$$\forall (b, u), (q, z) \in \mathcal{B} \times \mathcal{U}$$

*is satisfied.*
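In finite dimensions, Definition 1 reduces to the classical monotonicity of an operator: $\langle F(x) - F(y), x - y \rangle \ge 0$ for all $x, y$. The sketch below is a minimal finite-dimensional analogue for illustration only (the map $F(x)_i = x_i^3$ is a hypothetical example, not the functional $\mathcal{W}$ of the paper):

```python
import random

# Finite-dimensional analogue of Definition 1 (illustrative assumption:
# F = grad phi for the convex function phi(x) = sum(x_i^4 / 4), so
# F(x)_i = x_i^3, which is a monotone map).
def F(x):
    return [xi ** 3 for xi in x]

def monotonicity_gap(x, y):
    """<F(x) - F(y), x - y>; nonnegative for every pair iff F is monotone."""
    return sum((fx - fy) * (xi - yi)
               for fx, fy, xi, yi in zip(F(x), F(y), x, y))

random.seed(0)
pairs = [([random.uniform(-2, 2) for _ in range(5)],
          [random.uniform(-2, 2) for _ in range(5)]) for _ in range(100)]
gaps = [monotonicity_gap(x, y) for x, y in pairs]
print(min(gaps) >= 0.0)  # True: (x^3 - y^3)(x - y) >= 0 componentwise
```

The check passes for every pair because each component of $F$ is a nondecreasing scalar function.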

**Definition 2.** *The functional W is pseudomonotone on B* × *U if*

$$\begin{split} \int_{\mathcal{Y}} \Big[ (b(s) - q(s)) \frac{\partial w_{\zeta}}{\partial b} (\Psi_{q,z}(s)) + (u(s) - z(s)) \frac{\partial w_{\zeta}}{\partial u} (\Psi_{q,z}(s)) \\ + D_{\kappa}(b(s) - q(s)) \frac{\partial w_{\zeta}}{\partial b_{\kappa}} (\Psi_{q,z}(s)) \\ + \frac{1}{n(\alpha,\beta)} D^{2}_{\alpha\beta} (b(s) - q(s)) \frac{\partial w_{\zeta}}{\partial b_{\alpha\beta}} (\Psi_{q,z}(s)) \Big] ds^{\zeta} \ge 0 \\ \Longrightarrow \quad \int_{\mathcal{Y}} \Big[ (b(s) - q(s)) \frac{\partial w_{\zeta}}{\partial b} (\Psi_{b,u}(s)) + (u(s) - z(s)) \frac{\partial w_{\zeta}}{\partial u} (\Psi_{b,u}(s)) \\ + D_{\kappa}(b(s) - q(s)) \frac{\partial w_{\zeta}}{\partial b_{\kappa}} (\Psi_{b,u}(s)) \\ + \frac{1}{n(\alpha,\beta)} D^{2}_{\alpha\beta} (b(s) - q(s)) \frac{\partial w_{\zeta}}{\partial b_{\alpha\beta}} (\Psi_{b,u}(s)) \Big] ds^{\zeta} \ge 0, \end{split}$$

$$\forall (b, u), (q, z) \in \mathcal{B} \times \mathcal{U}$$

*is valid.*

Following Usman and Khan [33], we introduce the following definition.

**Definition 3.** *W is hemicontinuous on B* × *U if the map*

$$\lambda \mapsto \left\langle (b(s), u(s)) - (q(s), z(s)), \left( \frac{\delta_{\zeta} \mathcal{W}}{\delta b_{\lambda}}, \frac{\delta_{\zeta} \mathcal{W}}{\delta u_{\lambda}} \right) \right\rangle, \quad 0 \le \lambda \le 1,$$

*is continuous at* $0^+$*, for all* $(b, u), (q, z) \in \mathcal{B} \times \mathcal{U}$*, where*

$$\frac{\delta_{\zeta}\mathcal{W}}{\delta b_{\lambda}} := \frac{\partial w_{\zeta}}{\partial b} (\Psi_{b_{\lambda}, u_{\lambda}}(s)) - D_{\kappa} \frac{\partial w_{\zeta}}{\partial b_{\kappa}} (\Psi_{b_{\lambda}, u_{\lambda}}(s)) + \frac{1}{n(\alpha,\beta)} D^{2}_{\alpha\beta} \frac{\partial w_{\zeta}}{\partial b_{\alpha\beta}} (\Psi_{b_{\lambda}, u_{\lambda}}(s)) \in \mathcal{B},$$

$$\frac{\delta_{\zeta}\mathcal{W}}{\delta u_{\lambda}} := \frac{\partial w_{\zeta}}{\partial u} (\Psi_{b_{\lambda}, u_{\lambda}}(s)) \in \mathcal{U},$$

$$b_{\lambda} := \lambda b + (1 - \lambda)q, \quad u_{\lambda} := \lambda u + (1 - \lambda)z.$$

**Lemma 1** (Treanţă et al. [14])**.** *Let the functional W be hemicontinuous and pseudomonotone on B* × *U. A point* (*b*, *u*) ∈ *B* × *U solves Equation (5) if and only if* (*b*, *u*) ∈ *B* × *U solves:*

$$\begin{split} \int_{\mathcal{Y}} \Big[ (q(s) - b(s)) \frac{\partial w_{\zeta}}{\partial b}(\Psi_{q,z}(s)) + (z(s) - u(s)) \frac{\partial w_{\zeta}}{\partial u}(\Psi_{q,z}(s)) \\ + D_{\kappa}(q(s) - b(s)) \frac{\partial w_{\zeta}}{\partial b_{\kappa}}(\Psi_{q,z}(s)) \\ + \frac{1}{n(\alpha,\beta)} D^{2}_{\alpha\beta}(q(s) - b(s)) \frac{\partial w_{\zeta}}{\partial b_{\alpha\beta}}(\Psi_{q,z}(s)) \Big] ds^{\zeta} \ge 0, \quad \forall (q, z) \in \mathcal{B} \times \mathcal{U}. \end{split}$$

Furthermore, according to Treanţă et al. [14], we present two well-posedness results associated with the considered variational inequality problem involving second-order PDEs.

**Definition 4.** *The sequence* $\{(b_n, u_n)\} \subset \mathcal{B} \times \mathcal{U}$ *is called an approximating sequence of Equation (5) if there exists a sequence of positive real numbers* $\sigma_n \to 0$ *as* $n \to \infty$*, such that:*

$$\begin{split} \int_{\mathcal{Y}} \Big[ (q(s) - b_n(s)) \frac{\partial w_{\zeta}}{\partial b} (\Psi_{b_n,u_n}(s)) + (z(s) - u_n(s)) \frac{\partial w_{\zeta}}{\partial u} (\Psi_{b_n,u_n}(s)) \\ + D_{\kappa}(q(s) - b_n(s)) \frac{\partial w_{\zeta}}{\partial b_{\kappa}} (\Psi_{b_n,u_n}(s)) \\ + \frac{1}{n(\alpha,\beta)} D^{2}_{\alpha\beta} (q(s) - b_n(s)) \frac{\partial w_{\zeta}}{\partial b_{\alpha\beta}} (\Psi_{b_n,u_n}(s)) \Big] ds^{\zeta} + \sigma_n \ge 0, \quad \forall (q, z) \in \mathcal{B} \times \mathcal{U}. \end{split}$$

**Definition 5.** *The problem Equation (5) is called well-posed if it has a unique solution* $(b^0, u^0)$ *and every approximating sequence of Equation (5) converges to* $(b^0, u^0)$*. For* $\sigma > 0$*, consider the approximate solution set:*


$$\begin{split} \Omega_{\sigma} = \Big\{ (b, u) \in \mathcal{B} \times \mathcal{U} : \int_{\mathcal{Y}} \Big[ (q(s) - b(s)) \frac{\partial w_{\zeta}}{\partial b}(\Psi_{b,u}(s)) + (z(s) - u(s)) \frac{\partial w_{\zeta}}{\partial u}(\Psi_{b,u}(s)) \\ + D_{\kappa}(q(s) - b(s)) \frac{\partial w_{\zeta}}{\partial b_{\kappa}}(\Psi_{b,u}(s)) \\ + \frac{1}{n(\alpha,\beta)} D^{2}_{\alpha\beta}(q(s) - b(s)) \frac{\partial w_{\zeta}}{\partial b_{\alpha\beta}}(\Psi_{b,u}(s)) \Big] ds^{\zeta} + \sigma \ge 0, \; \forall (q, z) \in \mathcal{B} \times \mathcal{U} \Big\}. \end{split}$$

**Remark 4.** *We have* $\Omega = \Omega_{\sigma}$ *when* $\sigma = 0$*, and* $\Omega \subseteq \Omega_{\sigma}$ *for all* $\sigma > 0$*. Furthermore, for a set P, the diameter of P is defined as follows:*

$$\operatorname{diam} P = \sup_{\phi,\eta \in P} \|\phi - \eta\|.$$

**Theorem 3** (Treanţă et al. [14])**.** *Let the functional W be hemicontinuous and monotone on B* × *U. The problem Equation (5) is well-posed if and only if:*

$$
\Omega_{\sigma} \neq \emptyset, \; \forall \sigma > 0, \quad \text{and} \quad \operatorname{diam} \Omega_{\sigma} \to 0 \ \text{as} \ \sigma \to 0.
$$

**Theorem 4** (Treanţă et al. [14])**.** *Let the functional W be hemicontinuous and monotone on B* × *U. Then, Equation (5) is well-posed if and only if it has a unique solution.*
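Theorems 3 and 4 can be illustrated on a toy problem. The sketch below is a finite-dimensional assumption-laden analogue (the operator $F(b) = b$ on $K = [-1, 1]$ is illustrative, not the problem (5)): it discretises $K$, builds the approximate solution sets $\Omega_\sigma$ of the scalar variational inequality $F(b)(q - b) + \sigma \ge 0$ for all $q \in K$, and checks that their diameters shrink as $\sigma \to 0$.

```python
# Toy analogue of the sets Omega_sigma (illustrative assumption, not the
# controlled problem of the paper): VI  b*(q - b) >= 0 for all q in [-1, 1],
# whose unique solution is b = 0; Omega_sigma relaxes the inequality by sigma.
def in_omega_sigma(b, sigma, grid):
    return all(b * (q - b) + sigma >= 0 for q in grid)

def diam(points):
    """Diameter of a finite set, as in Remark 4."""
    return max((abs(p - q) for p in points for q in points), default=0.0)

grid = [i / 100 for i in range(-100, 101)]   # discretised K = [-1, 1]
diams = []
for sigma in (0.1, 0.01, 0.001):
    omega = [b for b in grid if in_omega_sigma(b, sigma, grid)]
    diams.append(diam(omega))
print(diams)  # nonincreasing diameters as sigma decreases
```

Since $b = 0$ always belongs to $\Omega_\sigma$, the sets are nonempty, and the shrinking diameters mirror the characterisation in Theorem 3.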

#### **5. Open Problem**

As in the previous sections, we start with $\mathcal{T}$ a compact set in $\mathbb{R}^m$ and $\mathcal{T} \ni \zeta = (\zeta^{\beta})$, $\beta = \overline{1, m}$, as a multi-variable. Let $\mathcal{C} \subset \mathcal{T}$: $\zeta = \zeta(\varsigma)$, $\varsigma \in [p, q]$, be a (piecewise) differentiable curve joining the following two fixed points $\zeta_1 = (\zeta_1^1, \ldots, \zeta_1^m)$ and $\zeta_2 = (\zeta_2^1, \ldots, \zeta_2^m)$ in $\mathcal{T}$. In addition, we consider $\Lambda$ as the space of (piecewise) smooth *state* functions $\sigma : \mathcal{T} \to \mathbb{R}^n$ and $\Omega$ as the space of *control* functions $\eta : \mathcal{T} \to \mathbb{R}^k$, which are considered to be piecewise continuous. Moreover, on the product space $\Lambda \times \Omega$, we consider the scalar product:

$$\langle (\sigma, \eta), (\pi, \chi) \rangle = \int_{\mathcal{C}} [\sigma(\zeta) \cdot \pi(\zeta) + \eta(\zeta) \cdot \chi(\zeta)]\, d\zeta^{\beta}$$

$$= \int_{\mathcal{C}} \left[ \sum_{i=1}^{n} \sigma^{i}(\zeta) \pi^{i}(\zeta) + \sum_{j=1}^{k} \eta^{j}(\zeta) \chi^{j}(\zeta) \right] d\zeta^{1}$$

$$+ \cdots + \int_{\mathcal{C}} \left[ \sum_{i=1}^{n} \sigma^{i}(\zeta) \pi^{i}(\zeta) + \sum_{j=1}^{k} \eta^{j}(\zeta) \chi^{j}(\zeta) \right] d\zeta^{m}, \quad \forall (\sigma, \eta), (\pi, \chi) \in \Lambda \times \Omega,$$

together with the norm induced by it.
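A discretised analogue of this scalar product can be approximated by quadrature. The sketch below is an illustration under simplifying assumptions (it is not part of the paper's formal setting): $m = 1$, the curve $\mathcal{C}$ is parametrised over $[0, 1]$, and the curvilinear integral is approximated by the trapezoidal rule.

```python
# Discretised curvilinear scalar product on Lambda x Omega (sketch under
# the assumptions m = 1, C parametrised by s in [0, 1], trapezoidal rule).
def inner(sigma, eta, pi, chi, zeta, n=1000):
    """Approximate integral over C of [sigma.pi + eta.chi] d zeta."""
    total = 0.0
    for i in range(n):
        s0, s1 = i / n, (i + 1) / n
        def integrand(s):
            t = zeta(s)
            return (sum(a * b for a, b in zip(sigma(t), pi(t)))
                    + sum(a * b for a, b in zip(eta(t), chi(t))))
        # d zeta = zeta'(s) ds, approximated by the increment zeta(s1) - zeta(s0)
        total += 0.5 * (integrand(s0) + integrand(s1)) * (zeta(s1) - zeta(s0))
    return total

# Hypothetical example: straight curve zeta(s) = s, sigma = pi = (t,),
# eta = chi = (1,); the exact value is the integral of (t^2 + 1) on [0, 1].
val = inner(lambda t: (t,), lambda t: (1.0,),
            lambda t: (t,), lambda t: (1.0,), lambda s: s)
print(abs(val - 4/3) < 1e-4)  # True
```

The induced norm is then $\|(\sigma, \eta)\| = \sqrt{\langle (\sigma, \eta), (\sigma, \eta) \rangle}$, computed the same way.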

In the following, we introduce the vector functional defined by curvilinear integrals:

$$\Psi: \Lambda \times \Omega \to \mathbb{R}^p, \quad \Psi(\sigma, \eta) = \int_{\mathcal{C}} \psi_{\beta}(\zeta, \sigma(\zeta), \sigma_{a}(\zeta), \sigma_{ab}(\zeta), \eta(\zeta))\, d\zeta^{\beta}$$

$$= \left( \int_{\mathcal{C}} \psi^{1}_{\beta}(\zeta, \sigma(\zeta), \sigma_{a}(\zeta), \sigma_{ab}(\zeta), \eta(\zeta))\, d\zeta^{\beta}, \ldots, \int_{\mathcal{C}} \psi^{p}_{\beta}(\zeta, \sigma(\zeta), \sigma_{a}(\zeta), \sigma_{ab}(\zeta), \eta(\zeta))\, d\zeta^{\beta} \right),$$

where we used the vector-valued $C^2$-class functions $\psi_{\beta} = (\psi^{l}_{\beta}) : \mathcal{T} \times \mathbb{R}^{n} \times \mathbb{R}^{nm} \times \mathbb{R}^{nm^2} \times \mathbb{R}^{k} \to \mathbb{R}^{p}$, $\beta = \overline{1, m}$, $l = \overline{1, p}$. In addition, $D_{a}$, $a \in \{1, \ldots, m\}$, represents the total derivative operator, and the aforementioned 1-form densities

$$\psi_{\beta} = \left(\psi^{1}_{\beta}, \ldots, \psi^{p}_{\beta}\right) : \mathcal{T} \times \mathbb{R}^{n} \times \mathbb{R}^{nm} \times \mathbb{R}^{nm^{2}} \times \mathbb{R}^{k} \to \mathbb{R}^{p}, \quad \beta = \overline{1, m},$$

are closed ($D_{a}\psi^{l}_{\beta} = D_{\beta}\psi^{l}_{a}$, $\beta, a = \overline{1, m}$, $\beta \neq a$, $l = \overline{1, p}$). Throughout the paper, the following rules for equalities and inequalities are applied:

$$a = b \Leftrightarrow a^{l} = b^{l}, \quad a \le b \Leftrightarrow a^{l} \le b^{l}, \quad a < b \Leftrightarrow a^{l} < b^{l}, \quad a \lneq b \Leftrightarrow a \le b, \ a \neq b, \quad l = \overline{1, p},$$

for all $p$-tuples $a = (a^{1}, \ldots, a^{p})$, $b = (b^{1}, \ldots, b^{p})$ in $\mathbb{R}^p$.
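These componentwise rules translate directly into code. A small sketch (the helper names such as `le_neq` are illustrative, not standard notation):

```python
# Componentwise ordering rules for p-tuples, as stated above.
def eq(a, b):
    return all(x == y for x, y in zip(a, b))

def le(a, b):
    return all(x <= y for x, y in zip(a, b))

def lt(a, b):
    return all(x < y for x, y in zip(a, b))

def le_neq(a, b):
    """a <= b componentwise with a != b (the fourth relation above)."""
    return le(a, b) and not eq(a, b)

a, b = (1, 2, 3), (1, 3, 3)
print(le(a, b), lt(a, b), le_neq(a, b))  # True False True
```

Note that `le_neq` is strictly weaker than `lt`: it only requires strictness in at least one component.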

Next, we formulate the partial differential equation/inequation constrained optimization problem:

$$(\text{CP}) \quad \min_{(\sigma,\eta)} \left\{ \Psi(\sigma,\eta) = \int_{\mathcal{C}} \psi_{\beta}(\zeta, \sigma(\zeta), \sigma_{a}(\zeta), \sigma_{ab}(\zeta), \eta(\zeta))\, d\zeta^{\beta} \right\} \quad \text{subject to} \quad (\sigma,\eta) \in \mathcal{S},$$

where

$$\Psi(\sigma,\eta) = \int_{\mathcal{C}} \psi_{\beta}(\zeta, \sigma(\zeta), \sigma_{a}(\zeta), \sigma_{ab}(\zeta), \eta(\zeta))\, d\zeta^{\beta}$$

$$= \left( \int_{\mathcal{C}} \psi^{1}_{\beta}(\zeta, \sigma(\zeta), \sigma_{a}(\zeta), \sigma_{ab}(\zeta), \eta(\zeta))\, d\zeta^{\beta}, \ldots, \int_{\mathcal{C}} \psi^{p}_{\beta}(\zeta, \sigma(\zeta), \sigma_{a}(\zeta), \sigma_{ab}(\zeta), \eta(\zeta))\, d\zeta^{\beta} \right)$$

$$= \left( \Psi^{1}(\sigma,\eta), \ldots, \Psi^{p}(\sigma,\eta) \right)$$

and

$$\mathcal{S} = \big\{ (\sigma, \eta) \in \Lambda \times \Omega \mid Z(\zeta, \sigma(\zeta), \sigma_a(\zeta), \sigma_{ab}(\zeta), \eta(\zeta)) = 0, \; Y(\zeta, \sigma(\zeta), \sigma_a(\zeta), \sigma_{ab}(\zeta), \eta(\zeta)) \le 0, \; \forall \zeta \in \mathcal{T},$$

$$\sigma|_{\zeta = \zeta_1, \zeta_2} = \text{given}, \; \sigma_a|_{\zeta = \zeta_1, \zeta_2} = \text{given} \big\}.$$

Above, we considered $Z = (Z^{\iota}) : \mathcal{T} \times \mathbb{R}^{n} \times \mathbb{R}^{nm} \times \mathbb{R}^{nm^2} \times \mathbb{R}^{k} \to \mathbb{R}^{t}$, $\iota = \overline{1, t}$, and $Y = (Y^{r}) : \mathcal{T} \times \mathbb{R}^{n} \times \mathbb{R}^{nm} \times \mathbb{R}^{nm^2} \times \mathbb{R}^{k} \to \mathbb{R}^{q}$, $r = \overline{1, q}$, as $C^2$-class functions.

**Definition 6.** *A point* $(\sigma^0, \eta^0) \in \mathcal{S}$ *is called an efficient solution in* (CP) *if there exists no other* $(\sigma, \eta) \in \mathcal{S}$ *such that* $\Psi(\sigma, \eta) \lneq \Psi(\sigma^0, \eta^0)$*; equivalently, there is no* $(\sigma, \eta) \in \mathcal{S}$ *with* $\Psi^{l}(\sigma, \eta) - \Psi^{l}(\sigma^0, \eta^0) \le 0$ *for all* $l = \overline{1, p}$*, with strict inequality for at least one* $l$*.*

**Definition 7.** *A point* $(\sigma^0, \eta^0) \in \mathcal{S}$ *is called a proper efficient solution in* (CP) *if* $(\sigma^0, \eta^0) \in \mathcal{S}$ *is an efficient solution in* (CP) *and there exists a positive real number* $M$ *such that, for all* $l = \overline{1, p}$*, we have*

$$\Psi^{l}(\sigma^0, \eta^0) - \Psi^{l}(\sigma, \eta) \le M \big(\Psi^{s}(\sigma, \eta) - \Psi^{s}(\sigma^0, \eta^0)\big)$$

*for some* $s \in \{1, \ldots, p\}$ *such that*

$$\Psi^{s}(\sigma, \eta) > \Psi^{s}(\sigma^0, \eta^0),$$

*whenever* $(\sigma, \eta) \in \mathcal{S}$ *and*

$$\Psi^{l}(\sigma, \eta) < \Psi^{l}(\sigma^0, \eta^0).$$

**Definition 8.** *A point* $(\sigma^0, \eta^0) \in \mathcal{S}$ *is called a weak efficient solution in* (CP) *if there exists no other* $(\sigma, \eta) \in \mathcal{S}$ *such that* $\Psi(\sigma, \eta) < \Psi(\sigma^0, \eta^0)$*; equivalently, there is no* $(\sigma, \eta) \in \mathcal{S}$ *with* $\Psi^{l}(\sigma, \eta) - \Psi^{l}(\sigma^0, \eta^0) < 0$ *for all* $l = \overline{1, p}$*.*
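On a finite set of objective vectors, Definitions 6 and 8 reduce to the usual Pareto filters. The sketch below is an illustration only (the candidate list is hypothetical; the genuine problem (CP) has an infinite feasible set $\mathcal{S}$):

```python
# Pareto filters for a finite list of objective vectors Psi(sigma, eta) in R^p.
def dominates(v, w):
    """v dominates w: v <= w componentwise and v != w (cf. Definition 6)."""
    return all(x <= y for x, y in zip(v, w)) and v != w

def strictly_dominates(v, w):
    """v < w componentwise (cf. Definition 8)."""
    return all(x < y for x, y in zip(v, w))

values = [(1, 4), (2, 2), (4, 1), (3, 3), (2, 2)]
efficient = [v for v in values
             if not any(dominates(w, v) for w in values)]
weakly_efficient = [v for v in values
                    if not any(strictly_dominates(w, v) for w in values)]
print(sorted(set(efficient)))         # [(1, 4), (2, 2), (4, 1)]
print(sorted(set(weakly_efficient)))  # here: the same set
```

Only (3, 3) is removed (it is dominated, in fact strictly, by (2, 2)); every efficient point is also weakly efficient, matching the inclusion implied by Definitions 6 and 8.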

*According to Treanţă [17,18], for σ* ∈ Λ *and η* ∈ Ω*, we consider the vector functional*

$$K: \Lambda \times \Omega \to \mathbb{R}^p, \quad K(\sigma, \eta) = \int_{\mathcal{C}} \kappa_{\beta}(\zeta, \sigma(\zeta), \sigma_{a}(\zeta), \sigma_{ab}(\zeta), \eta(\zeta))\, d\zeta^{\beta}$$

*and define the concepts of invexity and pseudoinvexity associated with K.*

For examples of invex and/or pseudoinvex curvilinear integral functionals, the reader can consult Treanţă [17].

**Definition 9** (Treanţă [18])**.** *We say that* $X \times Q \subset \Lambda \times \Omega$ *is invex with respect to* $\vartheta$ *and* $\upsilon$ *if*

$$\left(\sigma^0, \eta^0\right) + \lambda \left(\vartheta\left(\zeta, \sigma, \eta, \sigma^0, \eta^0\right), \upsilon\left(\zeta, \sigma, \eta, \sigma^0, \eta^0\right)\right) \in X \times Q,$$

*for all* $(\sigma, \eta), (\sigma^0, \eta^0) \in X \times Q$ *and* $\lambda \in [0, 1]$*.*

*Now, we introduce the following (weak) vector controlled variational inequalities:*

*I. Find* $(\sigma^0, \eta^0) \in \mathcal{S}$ *such that there exists no* $(\sigma, \eta) \in \mathcal{S}$ *satisfying*

$$(VI) \quad \Bigg( \int_{\mathcal{C}} \left[ \frac{\partial \psi^{1}_{\beta}}{\partial \sigma}\big(\zeta, \sigma^{0}(\zeta), \sigma^{0}_{a}(\zeta), \sigma^{0}_{ab}(\zeta), \eta^{0}(\zeta)\big)\, \vartheta + \frac{\partial \psi^{1}_{\beta}}{\partial \eta}\big(\zeta, \sigma^{0}(\zeta), \sigma^{0}_{a}(\zeta), \sigma^{0}_{ab}(\zeta), \eta^{0}(\zeta)\big)\, \upsilon \right] d\zeta^{\beta}$$

$$+ \int_{\mathcal{C}} \left[ \frac{\partial \psi^{1}_{\beta}}{\partial \sigma_{a}}\big(\zeta, \sigma^{0}(\zeta), \sigma^{0}_{a}(\zeta), \sigma^{0}_{ab}(\zeta), \eta^{0}(\zeta)\big)\, D_{a}\vartheta \right] d\zeta^{\beta} + \frac{1}{n(a,b)} \int_{\mathcal{C}} \left[ \frac{\partial \psi^{1}_{\beta}}{\partial \sigma_{ab}}\big(\zeta, \sigma^{0}(\zeta), \sigma^{0}_{a}(\zeta), \sigma^{0}_{ab}(\zeta), \eta^{0}(\zeta)\big)\, D^{2}_{ab}\vartheta \right] d\zeta^{\beta},$$

$$\ldots, \int_{\mathcal{C}} \left[ \frac{\partial \psi^{p}_{\beta}}{\partial \sigma}\big(\zeta, \sigma^{0}(\zeta), \sigma^{0}_{a}(\zeta), \sigma^{0}_{ab}(\zeta), \eta^{0}(\zeta)\big)\, \vartheta + \frac{\partial \psi^{p}_{\beta}}{\partial \eta}\big(\zeta, \sigma^{0}(\zeta), \sigma^{0}_{a}(\zeta), \sigma^{0}_{ab}(\zeta), \eta^{0}(\zeta)\big)\, \upsilon \right] d\zeta^{\beta}$$

$$+ \int_{\mathcal{C}} \left[ \frac{\partial \psi^{p}_{\beta}}{\partial \sigma_{a}}\big(\zeta, \sigma^{0}(\zeta), \sigma^{0}_{a}(\zeta), \sigma^{0}_{ab}(\zeta), \eta^{0}(\zeta)\big)\, D_{a}\vartheta \right] d\zeta^{\beta} + \frac{1}{n(a,b)} \int_{\mathcal{C}} \left[ \frac{\partial \psi^{p}_{\beta}}{\partial \sigma_{ab}}\big(\zeta, \sigma^{0}(\zeta), \sigma^{0}_{a}(\zeta), \sigma^{0}_{ab}(\zeta), \eta^{0}(\zeta)\big)\, D^{2}_{ab}\vartheta \right] d\zeta^{\beta} \Bigg) \lneq 0.$$

*II. Find* $(\sigma^0, \eta^0) \in \mathcal{S}$ *such that there exists no* $(\sigma, \eta) \in \mathcal{S}$ *satisfying*

$$(WVI) \quad \Bigg( \int_{\mathcal{C}} \left[ \frac{\partial \psi^{1}_{\beta}}{\partial \sigma}\big(\zeta, \sigma^{0}(\zeta), \sigma^{0}_{a}(\zeta), \sigma^{0}_{ab}(\zeta), \eta^{0}(\zeta)\big)\, \vartheta + \frac{\partial \psi^{1}_{\beta}}{\partial \eta}\big(\zeta, \sigma^{0}(\zeta), \sigma^{0}_{a}(\zeta), \sigma^{0}_{ab}(\zeta), \eta^{0}(\zeta)\big)\, \upsilon \right] d\zeta^{\beta}$$

$$+ \int_{\mathcal{C}} \left[ \frac{\partial \psi^{1}_{\beta}}{\partial \sigma_{a}}\big(\zeta, \sigma^{0}(\zeta), \sigma^{0}_{a}(\zeta), \sigma^{0}_{ab}(\zeta), \eta^{0}(\zeta)\big)\, D_{a}\vartheta \right] d\zeta^{\beta} + \frac{1}{n(a,b)} \int_{\mathcal{C}} \left[ \frac{\partial \psi^{1}_{\beta}}{\partial \sigma_{ab}}\big(\zeta, \sigma^{0}(\zeta), \sigma^{0}_{a}(\zeta), \sigma^{0}_{ab}(\zeta), \eta^{0}(\zeta)\big)\, D^{2}_{ab}\vartheta \right] d\zeta^{\beta},$$

$$\ldots, \int_{\mathcal{C}} \left[ \frac{\partial \psi^{p}_{\beta}}{\partial \sigma}\big(\zeta, \sigma^{0}(\zeta), \sigma^{0}_{a}(\zeta), \sigma^{0}_{ab}(\zeta), \eta^{0}(\zeta)\big)\, \vartheta + \frac{\partial \psi^{p}_{\beta}}{\partial \eta}\big(\zeta, \sigma^{0}(\zeta), \sigma^{0}_{a}(\zeta), \sigma^{0}_{ab}(\zeta), \eta^{0}(\zeta)\big)\, \upsilon \right] d\zeta^{\beta}$$

$$+ \int_{\mathcal{C}} \left[ \frac{\partial \psi^{p}_{\beta}}{\partial \sigma_{a}}\big(\zeta, \sigma^{0}(\zeta), \sigma^{0}_{a}(\zeta), \sigma^{0}_{ab}(\zeta), \eta^{0}(\zeta)\big)\, D_{a}\vartheta \right] d\zeta^{\beta} + \frac{1}{n(a,b)} \int_{\mathcal{C}} \left[ \frac{\partial \psi^{p}_{\beta}}{\partial \sigma_{ab}}\big(\zeta, \sigma^{0}(\zeta), \sigma^{0}_{a}(\zeta), \sigma^{0}_{ab}(\zeta), \eta^{0}(\zeta)\big)\, D^{2}_{ab}\vartheta \right] d\zeta^{\beta} \Bigg) < 0.$$

*Note. In the above formulation,* $\frac{1}{n(a,b)}$ *represents the factor given by Saunders' multi-index notation.*

*Open Problem. Taking into account the notion of an invex set with respect to some given functions, and the Fréchet differentiability and invexity/pseudoinvexity of the considered (path-independent) curvilinear integral functionals, establish relations between the solutions of the (weak) vector controlled variational inequalities and the (proper, weak) efficient solutions of the associated optimization problem.*

#### **6. Conclusions**

This paper presented the nonlinear dynamics generated by some classes of constrained controlled optimization problems involving second-order partial derivatives. More precisely, we have stated the necessary optimality conditions for the considered variational control problems given by integral functionals. In addition, the well-posedness and the associated variational inequalities have been considered in this review paper.

**Funding:** This research received no external funding.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Not applicable.

**Conflicts of Interest:** The author declares no conflict of interest.

#### **References**


## *Article* **Interaction Behaviours between Soliton and Cnoidal Periodic Waves for Nonlocal Complex Modified Korteweg–de Vries Equation**

**Junda Peng , Bo Ren \* , Shoufeng Shen and Guofang Wang**

Department of Applied Mathematics, Zhejiang University of Technology, Hangzhou 310023, China; junda\_peng@163.com (J.P.); mathssf@zjut.edu.cn (S.S.); 18815598139@163.com (G.W.) **\*** Correspondence: renbo@zjut.edu.cn

**Abstract:** The reverse space-time nonlocal complex modified Korteweg–de Vries (mKdV) equation is investigated by using the consistent tanh expansion (CTE) method. According to the CTE method, a nonauto-Bäcklund transformation theorem of the nonlocal complex mKdV equation is obtained. The interactions between one kink soliton and other different nonlinear excitations are constructed via the nonauto-Bäcklund transformation theorem. By selecting cnoidal periodic waves, the interaction between one kink soliton and the cnoidal periodic waves is derived. The specific Jacobi elliptic function solution and graphs of its analysis are provided in this paper.

**Keywords:** nonlocal modified Korteweg–de Vries equation; consistent tanh expansion method; parity-time symmetry

**MSC:** 35J60; 35N05; 35L05

**1. Introduction**

Physical systems exhibiting parity-time (PT )-symmetries have received increasing attention since a family of non-Hermitian PT -symmetric Hamiltonians with a real constant was first shown by Bender and Boettcher to admit entirely real spectra [1,2]. The study of PT symmetry in mathematics and physics can offer great research value and strong prospects for dynamical systems. PT -symmetric nonlinear systems have become a major focus of nonlinear science, such as soliton theory, fluid mechanics, hydrodynamics and optical theory. Some effective methods have been developed to derive exact solutions of nonlinear integrable systems, such as the inverse scattering transform method [3,4], the dressing method [5], the Hirota direct method [6,7], the Darboux transformations [8–10] and the Bäcklund transformations [11–13], etc.

The modified Korteweg–de Vries (mKdV) equation, which describes the evolution of weakly dispersive wavelets in shallow water, is widely studied. The integrable nonlocal nonlinear Schrödinger equation proposed by Ablowitz and Musslimani [14] attracted many researchers because of its special properties. Ablowitz and Musslimani proposed some new nonlocal nonlinear integrable equations, including the reverse space-time nonlocal complex mKdV equation [15]. In these new types of nonlocal equations, in addition to the terms at the space-time point (*x*, *t*), there are terms at the mirror image point (−*x*, −*t*). The self-induced potential of the nonlocal complex mKdV equation is *V*(*x*, *t*) = *u*(*x*, *t*)*u*\*(−*x*, −*t*) [15]. The PT-symmetry for the nonlocal complex mKdV equation amounts to the invariance of the self-induced potential in the case of classical optics, i.e., *V*(*x*, *t*) = *V*\*(−*x*, −*t*), under the combined effect of parity and time reversal symmetry. A family of traveling solitary wave solutions, including soliton, kink, periodic and singular solutions of the nonlocal mKdV equation, is discussed in [16].

**Citation:** Peng, J.D.; Ren, B.; Shen, S.F.; Wang G.F. Interaction Behaviours between Soliton and Cnoidal Periodic Waves for Nonlocal Complex Modified Korteweg-de Vries Equation. *Mathematics* **2022**, *10*, 1429. https://doi.org/10.3390/ math10091429

Academic Editor: Savin Treanta

Received: 29 March 2022 Accepted: 21 April 2022 Published: 23 April 2022

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

The interaction between solitons and a periodic cnoidal wave of the Korteweg–de Vries equation and the cubic Schrödinger equation has been discussed by using the inverse scattering technique [17,18]. Rogue waves on a periodic background and the nonlinear superposition of two periodic solutions of the mKdV equation are obtained by using the Darboux transformation [19,20]. The soliton excitation of the circular vortex motion can be constructed based on localized-induction approximation equations [21,22]. Recently, the consistent tanh expansion (CTE) method has been proposed to identify CTE-solvable systems [23,24]. The interaction between one soliton and other nonlinear excitations, such as cnoidal periodic waves, can be obtained by using the CTE method. The method has proven valid for classical integrable nonlinear systems, including the nonlinear Schrödinger system [25], the Broer–Kaup system [26], the higher-order KdV equation [27], etc. [28–30]. However, the application of the CTE method to nonlocal PT-symmetric integrable systems is still lacking, so applying it to such systems is both innovative and convenient. In this paper, the CTE method is used to investigate the PT-symmetric nonlocal complex mKdV equation and to construct the interaction solution of a soliton and cnoidal periodic waves.

This paper is organized as follows. In Section 2, a nonauto-Bäcklund transformation theorem is obtained by using the CTE method. The interactions between one kink soliton and other different nonlinear excitations are constructed by the nonauto-Bäcklund transformation theorem. Section 3 discusses the interaction between one kink soliton and cnoidal periodic waves of Jacobi elliptic function type, both analytically and graphically. Sections 4 and 5 include brief discussions and provide conclusions.

#### **2. CTE Method for the Nonlocal Complex mKdV System**

The reverse space-time nonlocal complex mKdV equation reads as follows [15]:

$$
u_t(x, t) - 6\alpha u(x, t) u^{*}(-x, -t) u_x(x, t) + u_{xxx}(x, t) = 0, \tag{1}
$$

where $u = u(x,t)$ is a complex function of real variables $x$ and $t$, $\alpha$ is an arbitrary constant, and $*$ denotes complex conjugation. The self-induced potential $V(x,t) = u(x,t)u^{*}(-x,-t)$ of (1) satisfies the PT-symmetry condition $V(x,t) = V^{*}(-x,-t)$. The nonauto-Bäcklund transformations and the soliton phenomenology of the standard mKdV equation have been systematically studied [31].

For the nonlocal complex mKdV system, one can take the generalized truncated tanh expansion form by using leading order analysis:

$$
u = u_0 + u_1 \tanh(f), \tag{2}
$$

where $u_0$ and $u_1$ are arbitrary functions of $(x, t)$, and $f$ satisfies the constraint $f(x, t) = f^{*}(-x, -t)$.

By substituting (2) into the nonlocal complex mKdV system (1), a complicated polynomial with respect to $\tanh(f)$ is obtained. Collecting the coefficients of $\tanh^4(f)$ and $\tanh^3(f)$, we derive the following.

$$
\alpha u_1(x, t) u_1^{*}(-x, -t) - f_x^2 = 0, \tag{3}
$$

$$\alpha u_1^{2}(x,t) u_0^{*}(-x,-t) f_x + \alpha u_0(x,t) u_1^{*}(-x,-t) u_1(x,t) f_x - \alpha u_1(x,t) u_1^{*}(-x,-t) u_{1,x}(x,t) + u_1(x,t) f_{xx} f_x + u_{1,x}(x,t) f_x^2 = 0. \tag{4}$$

Substituting *u*<sup>1</sup> obtained by solving (3) into (4) and further solving for *u*0, a set of solutions for *u*<sup>1</sup> and *u*<sup>0</sup> is derived as follows.

$$
u_1 = \frac{1}{\sqrt{-\alpha}} f_x, \qquad u_0 = -\frac{1}{2\sqrt{-\alpha}} \frac{f_{xx}}{f_x}. \tag{5}
$$

Substituting (5) into the complicated polynomial obtained before and collecting the coefficients of $\tanh^2(f)$, $\tanh^1(f)$ and $\tanh^0(f)$ via symbolic computation with the help of Maple, we obtain the following three over-determined equations.

$$\frac{3}{2}f_{xx}^2 + 2f_x^4 - f_x f_{xxx} - f_x f_t = 0, \tag{6}$$

$$\frac{3}{2}\frac{f_{xx}^3}{f_x^2} - 3\frac{f_{xx} f_{xxx}}{f_x} - 6f_x^2 f_{xx} + f_{xt} + f_{xxxx} = 0, \tag{7}$$

$$-\frac{3}{2}f_{xx}^2 - 2f_x^4 - \frac{21}{4}\frac{f_{xx}^2 f_{xxx}}{f_x^3} + \frac{9}{4}\frac{f_{xx}^4}{f_x^4} + f_x f_t + 4f_x f_{xxx} + 3f_{xx}^2 - \frac{1}{2}\frac{f_{xxt}}{f_x} - \frac{1}{2}\frac{f_{xxxxx}}{f_x} + \frac{1}{2}\frac{f_{xx} f_{xt}}{f_x^2} + 2\frac{f_{xxxx} f_{xx}}{f_x^2} + \frac{3}{2}\frac{f_{xxx}^2}{f_x^2} = 0. \tag{8}$$

Moreover, the above three Equations (6)–(8) are consistent with each other, meaning that if $f$ satisfies one of them, it is also a solution of the other two. According to the above analysis, we derive the following nonauto-Bäcklund transformation theorem.

**Nonauto-Bäcklund transformation theorem.** If a solution $f$ satisfies (6), then $u$ is obtained with the following:

$$u(x,t) = -\frac{1}{2\sqrt{-\alpha}}\frac{f_{xx}}{f_x} + \frac{1}{\sqrt{-\alpha}} f_x \tanh(f), \tag{9}$$

which is a solution of the reverse space-time nonlocal complex mKdV system (1).

The Miura transformation is known as the transformation connecting the solutions of the KdV and mKdV equations. This nonauto-Bäcklund transformation can be treated as a form of Miura transformation. According to the above theorem, exact solutions of the nonlocal complex mKdV system (1) are obtained by solving (6). Here are some interesting examples.

A quite trivial solution of (6) has the following form:

$$f = i(k_0 x + w_0 t), \quad w_0 = -2k_0^3, \tag{10}$$

where $k_0$ is a free constant and $w_0$ is determined by the dispersion relation. Substituting the trivial solution (10) into (9), one kink soliton solution of the nonlocal complex mKdV system yields the following.

$$u = -\frac{1}{\sqrt{-\alpha}} k_0 \tan(k_0 x - 2k_0^3 t). \tag{11}$$
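Solution (11) can be checked numerically. The sketch below is an illustration under the assumption $\alpha = -1$ (so the solution is real-valued and conjugation at the mirror point is trivial): it evaluates the residual of Equation (1) with finite differences at a few sample points.

```python
import math

# Numerical check (sketch, assuming alpha = -1) that
# u = -k0 * tan(k0 x - 2 k0^3 t) solves equation (1):
# u_t - 6 alpha u(x,t) u*(-x,-t) u_x + u_xxx = 0.
k0, alpha = 0.5, -1.0

def u(x, t):
    return -k0 * math.tan(k0 * x - 2 * k0 ** 3 * t)

def d(f, x, h=1e-4):
    """Central first derivative."""
    return (f(x + h) - f(x - h)) / (2 * h)

def d3(f, x, h=1e-3):
    """Central third derivative."""
    return (f(x + 2*h) - 2*f(x + h) + 2*f(x - h) - f(x - 2*h)) / (2 * h ** 3)

def residual(x, t):
    u_t = d(lambda s: u(x, s), t)
    u_x = d(lambda s: u(s, t), x)
    u_xxx = d3(lambda s: u(s, t), x)
    u_star = u(-x, -t)   # u is real here, so conjugation does nothing
    return u_t - 6 * alpha * u(x, t) * u_star * u_x + u_xxx

print(max(abs(residual(x, t)) for x in (0.1, 0.4) for t in (-0.3, 0.2)))
```

The printed maximum residual is at the level of the finite-difference error, consistent with (11) being an exact solution.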

Some nontrivial solutions of the mKdV equation can be derived from the trivial solution (10). To find interaction solutions between one kink soliton and other nonlinear excitations, we assume the interaction solution takes the following form:

$$f = i(k_0 x + w_0 t) + F(X), \quad X = kx + wt, \tag{12}$$

where $k_0$, $w_0$, $k$ and $w$ are all free constants. Substituting the expression (12) into (6), Equation (6) becomes the following.

$$F_X^4 + \frac{4ik_0}{k}F_X^3 - \frac{12kk_0^2 + w}{2k^3}F_X^2 - \frac{i(8kk_0^3 + kw_0 + k_0 w)}{2k^4}F_X - \frac{1}{2}F_X F_{XXX} + \frac{3}{4}F_{XX}^2 - \frac{ik_0}{2k}F_{XXX} + \frac{k_0(2k_0^3 + w_0)}{2k^4} = 0. \tag{13}$$

Then, the following equation is obtained by using transformation *F<sup>X</sup>* = *F*1.

$$F_1^4 + \frac{4ik_0}{k}F_1^3 - \frac{12kk_0^2 + w}{2k^3}F_1^2 - \frac{i(8kk_0^3 + kw_0 + k_0 w)}{2k^4}F_1 + \frac{3}{4}F_{1,X}^2 - \frac{1}{2}\Big(F_1 + \frac{ik_0}{k}\Big)F_{1,XX} + \frac{k_0(2k_0^3 + w_0)}{2k^4} = 0. \tag{14}$$

The CTE method is valid in many classical integrable systems. For the interaction between a soliton and Jacobi periodic waves in classical integrable systems, one obtains the standard Jacobi elliptic function equation [32]. Here, one only obtains Equation (14) rather than the standard Jacobi elliptic function equation. In order to obtain the Jacobi periodic wave solution of (14), we assume that Equation (14) has a Jacobi elliptic function solution of the form $F_1(X) = c_1 Sn(c_2 X, m)$ [33]. Hence, the solution expressed by (9) is just the explicit exact interaction between one kink soliton and cnoidal periodic waves. To show this form of solution more clearly, we offer one special case of solving (14).

#### **3. Interaction between Soliton and Cnoidal Periodic Waves**

According to the above analysis, the solution of (13) has the following form:

$$F(X) = \int c\_1 S\_n(c\_2 X, m)\, dX = \frac{c\_1 \ln[D\_n(c\_2 X, m) - mC\_n(c\_2 X, m)]}{c\_2 m},\tag{15}$$

where *Sn*, *Cn* and *Dn* are the Jacobian elliptic functions with modulus *m*. As verified by symbolic computation in Maple, (15) satisfies the constraint *f*(*x*, *t*) = *f* ∗ (−*x*, −*t*) and is a real even function. Substituting the undetermined parameter solution (15) into (13) and using symbolic computation with the help of Maple, the parameters must satisfy the following.

$$c\_1 = -\frac{c\_2 m}{2}, \quad c\_2 = \frac{2ik\_0}{k}, \quad w\_0 = -2k\_0^3(3m^2 + 1), \quad w = 2k\_0^2 k(m^2 - 5). \tag{16}$$

The interaction between one kink soliton and the cnoidal wave of the nonlocal complex mKdV system (1) has the following form.

$$u = \frac{2}{\sqrt{-\alpha}\,(c\_1 S\_n + ik\_0)} \left[ \tanh\!\left(\frac{c\_1 \ln(D\_n - mC\_n) + ic\_2 m(k\_0 x + w\_0 t)}{c\_2 m}\right)\left(\frac{c\_1^2 k^2}{2}S\_n^2 + ic\_1 kk\_0 S\_n - \frac{k\_0^2}{2}\right) - \frac{c\_1 c\_2 k^2}{4}C\_n D\_n \right]. \tag{17}$$

The parameters *c*1, *c*2, *w*0 and *w* are given in (16).
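The antiderivative identity behind (15), d/dX [*c*1 ln(*Dn* − *mCn*)/(*c*2*m*)] = *c*1*Sn*(*c*2*X*, *m*), can also be checked numerically without Maple. The sketch below is a pure-Python verification under illustrative constants (*c*1, *c*2, *m* chosen freely, not the constrained values of (16)); it builds the Jacobi elliptic functions by integrating their defining ODE system and compares a finite-difference derivative of *F*(*X*) against *c*1*Sn*(*c*2*X*, *m*):

```python
import math

def jacobi(u, m, n=20000):
    """Jacobi elliptic sn, cn, dn with modulus m, obtained by RK4
    integration of the defining ODE system from u = 0:
      sn' = cn*dn,  cn' = -sn*dn,  dn' = -m**2*sn*cn."""
    s, c, d = 0.0, 1.0, 1.0
    h = u / n
    f = lambda s, c, d: (c * d, -s * d, -m * m * s * c)
    for _ in range(n):
        k1 = f(s, c, d)
        k2 = f(s + h / 2 * k1[0], c + h / 2 * k1[1], d + h / 2 * k1[2])
        k3 = f(s + h / 2 * k2[0], c + h / 2 * k2[1], d + h / 2 * k2[2])
        k4 = f(s + h * k3[0], c + h * k3[1], d + h * k3[2])
        s += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        c += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
        d += h / 6 * (k1[2] + 2 * k2[2] + 2 * k3[2] + k4[2])
    return s, c, d

m, c1, c2 = 0.4, 0.7, 1.3        # illustrative values only

def F(X):
    # antiderivative from Eq. (15): c1 * ln(Dn - m*Cn) / (c2*m)
    s, c, d = jacobi(c2 * X, m)
    return c1 * math.log(d - m * c) / (c2 * m)

X, h = 0.8, 1e-5
lhs = (F(X + h) - F(X - h)) / (2 * h)   # numerical dF/dX
rhs = c1 * jacobi(c2 * X, m)[0]         # c1 * Sn(c2*X, m)
print(abs(lhs - rhs) < 1e-8)            # True: the identity holds
```

The derivative of ln(*Dn* − *mCn*) telescopes to *mSn* via the identities above, which is exactly why the integral in (15) has this closed form.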

We select the parameters as *α* = −1, *k*<sup>0</sup> = 0.4*i*, *m* = 0.4 in Figures 1–3. Figures 1 and 2 plot the interaction solution between one kink soliton and the cnoidal wave, in a three-dimensional view and as wave profiles along the *x*-axis, respectively. Field *u* exhibits one kink soliton propagating on the background of the cnoidal wave. Figure 3 plots the soliton-only and cnoidal-wave-only states at *t* = 0. Their superposition is exactly the interaction between one kink soliton and the cnoidal waves depicted in Figure 2, so the changes before and after superposition are displayed visually. Such interactions between solitary waves and cnoidal periodic waves can describe certain ocean phenomena.

**Figure 1.** Plot of one kink soliton on the cnoidal wave background expressed by (17) of the nonlocal mKdV equation in three dimensions.

**Figure 2.** One-dimensional wave profiles at *t* = −25, 0, 25.

**Figure 3.** Plot of the separate states of one kink soliton and the cnoidal wave expressed by (10) and (15) of the nonlocal mKdV equation at *t* = 0.

#### **4. Discussion**

Kuznetsov and Mikhailov discussed the interaction between solitons and a periodic cnoidal wave of the Korteweg–de Vries equation [17]. Gorshkov and Ostrovsky investigated the interaction between a soliton and a periodic wave via the direct perturbation method [34]. In this paper, the interaction between the Jacobi elliptic periodic wave and the kink soliton for the complex mKdV equation is obtained directly by the CTE method. Compared with the previous two methods, the CTE method yields this type of solution more directly and conveniently. Other reverse space-time nonlocal systems are worth studying with the CTE method.

#### **5. Conclusions**

The reverse space-time nonlocal complex mKdV equation is investigated by using the CTE method. A nonauto-Bäcklund transformation theorem is constructed by means of the CTE method, and the interactions between one kink soliton and the cnoidal waves are derived from this theorem. The dynamics of the interactions are studied both analytically and graphically. These types of interaction solutions can describe certain oceanic phenomena. The method is valid and promising for PT-symmetric models. The interactions between solitons and the cnoidal waves can also be obtained by symmetry reductions related to nonlocal symmetry [27]. Symmetry reductions related to the nonlocal symmetry of the nonlocal complex mKdV equation will be studied in the future.

**Author Contributions:** Methodology, formal analysis and writing—original draft preparation, J.P.; conceptualization, investigation, validation, software and writing—review and editing, B.R.; formal analysis, S.S.; formal analysis, G.W. All authors have read and agreed to the published version of the manuscript.

**Funding:** This work is supported by the National Natural Science Foundation of China No. 11775146 and the Xinyuan Transportation Electronics Company Limited of Zhejiang Province of China Grant No. KYY-HX-20220005.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Not applicable.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


## *Article* **Chaos Embed Marine Predator (CMPA) Algorithm for Feature Selection**

**Adel Fahad Alrasheedi <sup>1</sup> , Khalid Abdulaziz Alnowibet <sup>1</sup> , Akash Saxena 2,\* , Karam M. Sallam <sup>3</sup> and Ali Wagdy Mohamed 4,5,\***


**Abstract:** Data mining applications are growing with the availability of large data; at the same time, handling large data is itself a demanding task. Segregating the data to extract useful information is inevitable for designing modern technologies. Considering this fact, the work proposes a chaos embed marine predator algorithm (CMPA) for feature selection. The optimization routine is designed with the aim of maximizing the classification accuracy with the optimal number of features selected. Well-known benchmark data sets have been chosen for validating the performance of the proposed algorithm. A comparative analysis of the performance with some well-known algorithms advocates the applicability of the proposed algorithm. Further, the analysis has been extended to some well-known chaotic algorithms: first, the binary versions of these algorithms are developed, and then a comparative analysis of the performance is conducted on the basis of mean features selected, classification accuracy obtained and fitness function values. Statistical significance tests have also been conducted to establish the significance of the proposed algorithm.

**Keywords:** metaheuristics; feature selection; classification

**MSC:** 68T01; 68T05; 68T07; 68T09; 68T20; 68T30

#### **1. Introduction**

In recent years, the application of optimization in the field of data mining has been reported in many published approaches. Feature selection (FS) from a large data set is one such optimization problem. The FS problem has many industrial and healthcare-related applications. An effective FS technique can enhance the classification accuracy of the classifier and reduce the complexity of the system, which grows substantially with the dimension of the data. In other words, it speeds up the learning rate and improves the ability of a machine to anticipate the information contained in the data. A recent application of the FS technique in the field of healthcare is reported in [1], where an ensemble-based hybrid feature selection has been employed for the diagnosis of brain tumors; the authors claimed that the proposed method is able to handle imbalanced data. A network intrusion detection scheme based on the Least Square Support Vector Machine has been proposed in [2], and the authors validated the approach on intrusion data sets. The problem of the high dimensionality of the feature space in text characterization has been addressed in reference [3], in which the authors proposed a novel Gini index for the classification and reduction of the features.

**Citation:** Alrasheedi, A.F.; Alnowibet, K.A.; Saxena, A.; Sallam, K.M.; Mohamed, A.W. Chaos Embed Marine Predator (CMPA) Algorithm for Feature Selection. *Mathematics* **2022**, *10*, 1411. https://doi.org/10.3390/math10091411

Academic Editor: Savin Treanta

Received: 12 March 2022 Accepted: 17 April 2022 Published: 22 April 2022

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

Feature selection for the Brain Computer Interface (BCI) has been conducted with the help of information gain ranking, correlation-based feature selection, ReliefF, consistency-based feature selection and 1R ranking methods in the approach of [4]. A brief classification of the feature selection algorithms is given in Figure 1.

**Figure 1.** Classification of feature selection algorithms.

A very interesting approach to path planning for the mobile robot is proposed in reference [5]: for defining the obstacles, the positions of the workers in the Artificial Bee Colony are utilized, and in the second phase the shortest path is selected by Dijkstra's algorithm. A very important application of the ABC algorithm has been reported for the identification of the mechanical parameters of a servo-drive system [6]. A novel Adaptive Procedure for Optimization Algorithms is proposed in reference [7]. Apart from these approaches, recent approaches based on metaheuristic optimization motivated the authors to employ an optimization algorithm in the feature selection task [8–10]. These references provide strong evidence of the capability of optimization algorithms to deal with complex engineering problems.

Apart from the application of metaheuristic and evolution-based algorithms, many deterministic algorithms are also employed for conducting feature selection tasks. Due to their deterministic nature or gradient-based mechanism, these algorithms are often stuck in a local minima trap and exhibit slow and premature convergence. To avoid such problems and to provide a smooth and fast optimization environment, metaheuristic techniques are employed for feature selection problems. The recent trend is to apply metaheuristic optimization algorithms to this task; some fine approaches are depicted in the following references. The Hybrid Whale Optimization Algorithm (HWOA) [11], an amalgamation of the Whale Optimization Algorithm and the Simulated Annealing (SA) algorithm, is explored for this purpose. A chaotic dragonfly algorithm has been proposed and applied to the feature selection task in reference [12]. A similar approach based on the chaotic selfish herd optimizer has been proposed in reference [13]. A rich review of the literature on feature selection methods has been presented in reference [14]. S-shaped and V-shaped functions are employed to create a binary search space in the gaining–sharing knowledge algorithm for the feature selection task in reference [15].
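The S-shaped binarization idea mentioned above can be sketched in a few lines; the helper names below are ours, not from the cited papers, and serve only to illustrate how a continuous position is mapped to a feature-selection bit string:

```python
import math

def s_shaped(x):
    """S-shaped (sigmoid) transfer function mapping a continuous
    position component to a probability of selecting the feature."""
    return 1.0 / (1.0 + math.exp(-x))

def binarize(position, draws):
    """Feature j is selected (bit 1) when a uniform draw falls below
    S(x_j); V-shaped variants use |tanh|-style curves instead."""
    return [1 if r < s_shaped(x) else 0 for x, r in zip(position, draws)]

print(binarize([2.0, -2.0, 0.0], [0.5, 0.5, 0.9]))  # → [1, 0, 0]
```

Any continuous optimizer can thus be turned into a binary feature selector by applying such a transfer function to each position component before evaluation.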

#### *1.1. Some Recent Chaos-Based Approaches for Feature Selection*

A chaotic optimization algorithm based on gaining- and sharing-knowledge-based optimization has been proposed in reference [16], as well as similar applications based on chaotic fruit fly optimization [17], chaotic crow search algorithms [18], the chaotic multi-verse optimizer [19] and chaotic salp swarm optimizers [20].

From these approaches, it is evident that embedding chaos to make naive algorithms suitable for feature selection is a promising area of research. These approaches are strong evidence that, by embedding chaos in the mechanism of an algorithm, a substantial improvement can be achieved in classification accuracy and in the reduction of dimensionality. Based on this discussion, the following subsection presents the research proposal and objectives of the work.

#### *1.2. Research Objectives and Proposal*

Recently, a new metaheuristic based on predatory behavior has been proposed [21]. The algorithm is known as the marine predator algorithm (MPA). The application of this algorithm in a multi-objective domain has been explored in reference [22]. A new improved model of MPA has been established in reference [23]; the paper introduced an opposition-based learning method, a chaos map, self-adaptation of the population, and switching between exploration and exploitation phases, and the application of this algorithm has been explored in the field of controller tuning. Further, a hybrid computational-intelligence-based approach has been proposed for structural damage detection in reference [24].

Keeping these facts in mind, the work proposed in this paper addresses the following objectives.


The remaining part of this paper is organized as follows: in Section 2, brief details of the MPA are discussed. Section 3 presents the basic framework of the chaos embed marine predator algorithm (CMPA). Section 4 presents the problem formulation and details of the objective considered in this study. Section 5 presents the results and analysis of different tests. Section 6 concludes all major findings.

#### **2. Marine Predator Algorithm: An Overview**

The marine predator algorithm (MPA) [21] is a recently developed optimization technique based on the observation that, while the predator searches for prey, the prey also updates its position according to the location of its food. The MPA presents an elegant mimicry of marine predator–prey behavior in terms of mathematical representations. This section briefly discusses the steps incorporated in the development of MPA, which are as follows.

1. Conceptualization of MPA: Like other nature-inspired algorithms, the initial population in MPA is uniformly scattered in the search region, which can be given as:

$$Y\_0 = L\_b + m(U\_b - L\_b) \tag{1}$$

Here, *U<sub>b</sub>* and *L<sub>b</sub>* are the maximum and minimum values of the variables, and *m* is an arbitrary number satisfying 0 < *m* < 1.

Following the well-known Darwinian survival-of-the-fittest principle, in MPA a group of the best predators is selected as the final solution. The initial location of the prey can be expressed as the following matrix of order *n* × *d*, where *n* represents the number of search agents and *d* is the dimension of the problem.

$$TPR^{EM} = \begin{bmatrix} Y\_{1,1}^{tp} & Y\_{1,2}^{tp} & \cdots & Y\_{1,d}^{tp} \\ Y\_{2,1}^{tp} & Y\_{2,2}^{tp} & \cdots & Y\_{2,d}^{tp} \\ \vdots & \vdots & \ddots & \vdots \\ Y\_{n,1}^{tp} & Y\_{n,2}^{tp} & \cdots & Y\_{n,d}^{tp} \end{bmatrix} \tag{2}$$

where *Y<sup>tp</sup>* represents the top predator vector, which is replicated *n* times to construct the *n* × *d* elite matrix *TPR<sup>EM</sup>*. In MPA, the prey searches for food and the predator searches for prey; hence, both can be considered search agents. The prey matrix holds the initial solutions, and after every iteration the positions of the prey are improved; the matrix built from the top predator is the elite matrix *TPR<sup>EM</sup>*. The prey matrix (*TPM*) is given by the following expression.

$$TPM = \begin{bmatrix} Y\_{1,1} & Y\_{1,2} & \cdots & Y\_{1,d} \\ Y\_{2,1} & Y\_{2,2} & \cdots & Y\_{2,d} \\ \vdots & \vdots & \ddots & \vdots \\ Y\_{n,1} & Y\_{n,2} & \cdots & Y\_{n,d} \end{bmatrix} \tag{3}$$

*Y<sub>i,j</sub>* denotes the location of the *i*-th prey in the *j*-th dimension. It is to be noted that during the search process both prey and predators are search agents, as both search for food.
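Equations (1)–(3) amount to a few lines of array code. The NumPy sketch below uses assumed bounds and simply pretends the first row is the top predator; in the real algorithm the elite row is the best solution found so far:

```python
import numpy as np

rng = np.random.default_rng(0)

def init_population(n, d, Lb, Ub):
    """Eq. (1): scatter n prey uniformly in [Lb, Ub]^d."""
    return Lb + rng.random((n, d)) * (Ub - Lb)

TPM = init_population(5, 3, Lb=-1.0, Ub=1.0)   # prey matrix, Eq. (3)
TPR_EM = np.tile(TPM[0], (5, 1))               # elite matrix, Eq. (2):
                                               # the top predator row replicated n times
```

Replicating the best row keeps the update equations purely element-wise, which is why the elite matrix has the same *n* × *d* shape as the prey matrix.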

- Stage 1: The prey moves faster than the predator. This case occurs in the initial iterations. When the proportional velocity ratio is very high, i.e., (≥10), the predator remains almost still. This can be mathematically written as: when *t* < *Tmax*/3,

$$\overrightarrow{step}\_i = \vec{R}\_B \otimes \left(\overrightarrow{TPR}\_i^{EM} - \vec{R}\_B \otimes \overrightarrow{TPM}\_i\right) \tag{4}$$

where *t* is the current iteration and *Tmax* is the maximum number of iterations.

$$\overrightarrow{TPM}\_i = \overrightarrow{TPM}\_i + K \cdot \vec{R} \otimes \overrightarrow{step}\_i \tag{5}$$

where *step<sub>i</sub>* is the step size for the *i*-th prey, ~*R<sub>B</sub>* is a vector of random numbers representing Brownian motion, *K* is a constant taken equal to 0.5 and ~*R* is a vector of arbitrary numbers ∈ [0, 1]. This stage occupies roughly the first third of the total iterations, when exploration is high.

• Stage 2: The proportional velocities of predator and prey are almost the same, which indicates that the prey is looking for its food and the predator is looking for its prey. This case happens in the middle iterations, when exploration is gradually converted into exploitation. At this time, half of the population (the prey, following the Levy motion) is responsible for exploitation, while the other half (the predator, following the Brownian motion) is responsible for exploration; the proportional velocity is then approximately 1. Mathematically, when (1/3)*Tmax* < *t* < (2/3)*Tmax*, for the first part of the population:

$$\overrightarrow{step}\_i = \vec{R}\_L \otimes \left(\overrightarrow{TPR}\_i^{EM} - \vec{R}\_L \otimes \overrightarrow{TPM}\_i\right) \tag{6}$$

$$\overrightarrow{TPM}\_i = \overrightarrow{TPM}\_i + K \cdot \vec{R} \otimes \overrightarrow{step}\_i \tag{7}$$

Here, ~*R<sub>L</sub>* is a vector of random numbers drawn from the Levy distribution. Since most steps in a Levy distribution are very small, this movement mainly performs a fine local search (exploitation).

For the second half of the population, MPA considers:

$$\overrightarrow{step}\_i = \vec{R}\_B \otimes \left(\vec{R}\_B \otimes \overrightarrow{TPR}\_i^{EM} - \overrightarrow{TPM}\_i\right) \tag{8}$$

$$\overrightarrow{TPM}\_i = \overrightarrow{TPR}\_i^{EM} + K \cdot C \times \overrightarrow{step}\_i \tag{9}$$

Here, *C* = (1 − *t*/*Tmax*)<sup>(2*t*/*Tmax*)</sup> is a control parameter that governs the step size of the predator's movements. The predator moves according to the Brownian motion, and the prey follows the predator for its position updates.

• Stage 3: The proportional velocity ratio is low, i.e., the predator is moving faster than the prey. This situation occurs in the last iterations of the optimization and is related to exploitation. The predator adopts the Levy motion in the case of a low proportional velocity (=0.1). This can be given in the following way: if *t* > (2/3)*Tmax*,

$$\overrightarrow{step}\_i = \vec{R}\_L \otimes \left(\vec{R}\_L \otimes \overrightarrow{TPR}\_i^{EM} - \overrightarrow{TPM}\_i\right), \quad i = 1, \ldots, n \tag{10}$$

$$\overrightarrow{TPM}\_i = \overrightarrow{TPR}\_i^{EM} + K \cdot C \times \overrightarrow{step}\_i \tag{11}$$

These three stages represent the different steps of predators in finding their prey. According to this behaviour, the predator is considered to follow the Brownian and Levy motions equally: in stage 1 the predator is almost still, in stage 2 it follows the Brownian motion, and in the last stage it moves in the Levy motion. The prey behaves similarly, as the prey is itself a predator for other marine creatures; for example, bony fishes and marine invertebrates are prey for tuna, while tuna are themselves prey for silky sharks.
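The three stages above can be condensed into one update routine. The NumPy sketch below is our reading of Eqs. (4)–(11), not the authors' implementation; Mantegna's algorithm stands in for the unspecified Levy generator, and the elite matrix is assumed to be supplied by the caller:

```python
import math
import numpy as np

rng = np.random.default_rng(0)

def levy(shape, beta=1.5):
    """Levy-stable step lengths via Mantegna's algorithm (a common
    choice in metaheuristics; the paper does not name its generator)."""
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma = (num / den) ** (1 / beta)
    return rng.normal(0.0, sigma, shape) / np.abs(rng.normal(0.0, 1.0, shape)) ** (1 / beta)

def mpa_step(prey, elite, t, T_max, K=0.5):
    """One iteration over the three MPA stages, Eqs. (4)-(11)."""
    prey = np.array(prey, dtype=float)           # work on a copy
    n, d = prey.shape
    R = rng.random((n, d))
    C = (1 - t / T_max) ** (2 * t / T_max)       # control parameter
    if t < T_max / 3:                            # stage 1: Brownian, Eqs. (4)-(5)
        RB = rng.normal(size=(n, d))
        prey = prey + K * R * (RB * (elite - RB * prey))
    elif t < 2 * T_max / 3:                      # stage 2: split population, Eqs. (6)-(9)
        half = n // 2
        RL = levy((half, d))
        prey[:half] = prey[:half] + K * R[:half] * (RL * (elite[:half] - RL * prey[:half]))
        RB = rng.normal(size=(n - half, d))
        prey[half:] = elite[half:] + K * C * (RB * (RB * elite[half:] - prey[half:]))
    else:                                        # stage 3: Levy, Eqs. (10)-(11)
        RL = levy((n, d))
        prey = elite + K * C * (RL * (RL * elite - prey))
    return prey

pop = rng.random((6, 4))                # 6 prey, 4 decision variables
elite = np.tile(pop[0], (6, 1))         # pretend row 0 is the top predator
pop = mpa_step(pop, elite, t=10, T_max=100)
```

The single control parameter *C* shrinks the predator's steps as *t* grows, which is what shifts the search from large exploratory moves toward fine exploitation.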

2. Fish Aggregating Device (FAD) effect: A FAD is a floating device made by humans to attract specific marine creatures in tropical regions, and it affects marine animals in many other ways. According to [25], sharks spend more than 80% of their time in the vicinity of FADs and the rest taking longer jumps in various dimensions to find prey. These FADs can be considered local optima that trap marine predators. The effect of FADs can be given mathematically as:

$$\overrightarrow{TPM}\_i = \begin{cases} \overrightarrow{TPM}\_i + C\left[\vec{L}\_b + \vec{R} \otimes \left(\vec{U}\_b - \vec{L}\_b\right)\right] \otimes \vec{A} & \text{if } r \le f \\ \overrightarrow{TPM}\_i + [f(1-q) + q]\left(\overrightarrow{TPM}\_{r\_1} - \overrightarrow{TPM}\_{r\_2}\right) & \text{if } r > f \end{cases} \tag{12}$$

Here, *f* is the probability of the FAD effect on the optimizer, taken as *f* = 0.2; *q* is a random number between 0 and 1; and *r*<sub>1</sub> and *r*<sub>2</sub> represent two arbitrary indexes of the prey matrix.

$$\vec{A} = \begin{cases} 0 & \text{if } r < 0.2 \\ 1 & \text{if } r \ge 0.2 \end{cases} \tag{13}$$
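The FAD jump of Eqs. (12)–(13) can be sketched as follows; this is a NumPy illustration under assumed bounds, with the function name and signature ours rather than the authors':

```python
import numpy as np

rng = np.random.default_rng(1)

def fad_effect(prey, Lb, Ub, C, f=0.2):
    """FAD effect, Eqs. (12)-(13): with probability f take a long jump
    inside the bounds, masked by the binary vector A; otherwise mix the
    difference of two randomly chosen prey rows into every position."""
    prey = np.array(prey, dtype=float)
    n, d = prey.shape
    if rng.random() <= f:
        A = (rng.random((n, d)) >= 0.2).astype(float)        # Eq. (13)
        prey += C * (Lb + rng.random((n, d)) * (Ub - Lb)) * A
    else:
        q = rng.random()
        r1, r2 = rng.integers(0, n, size=2)
        prey += (f * (1 - q) + q) * (prey[r1] - prey[r2])
    return prey

pop = rng.random((5, 3))
pop = fad_effect(pop, Lb=0.0, Ub=1.0, C=0.5)
```

Because the jump is occasional and bounded by the search box, it perturbs stagnant agents without destroying the rest of the population.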

3. Memory of marine predators: Almost all marine predators are good at memorizing the locations of successful foraging, which is referred to as the memory saving step in MPA. After the prey updates its location and the FAD effect is applied, the fitness of the prey matrix is evaluated to decide whether to update the elite matrix, and the fitter solutions are retained. This step also helps in the improvement of the solution, according to [26].
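The memory-saving step is a greedy element-wise comparison; a minimal sketch (assuming a minimization problem, with names of our choosing):

```python
import numpy as np

def memory_update(new_prey, new_fit, old_prey, old_fit):
    """Keep each prey's previous position whenever its new position is
    worse, mimicking a predator returning to a remembered foraging spot."""
    keep_old = old_fit < new_fit
    prey = np.where(keep_old[:, None], old_prey, new_prey)
    fit = np.where(keep_old, old_fit, new_fit)
    return prey, fit

old = np.array([[0.0, 0.0], [1.0, 1.0]])
new = np.array([[2.0, 2.0], [0.5, 0.5]])
prey, fit = memory_update(new, np.array([4.0, 0.5]), old, np.array([0.0, 2.0]))
# row 0 reverts to its old position (0.0 < 4.0); row 1 keeps the new one
```

The best row of the resulting matrix is then what gets replicated into the elite matrix for the next iteration.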

#### **3. Development of Chaos Embed Marine Predator Algorithm**

This section presents the development of the chaos embed marine predator algorithm (CMPA). The following are the procedural steps for the development.


$$\beta(J; \nu, \mu, J\_1, J\_2) = \begin{cases} \left(\frac{J - J\_1}{J\_c - J\_1}\right)^{\nu} \left(\frac{J\_2 - J}{J\_2 - J\_c}\right)^{\mu} & \text{if } J \in [J\_1, J\_2] \\ 0 & \text{otherwise} \end{cases} \tag{14}$$

where *ν*, *µ*, *J*1, *J*2 ∈ **R**, *J*1 < *J*2, and *J<sub>c</sub>* ∈ (*J*1, *J*2) is the location of the peak of the β-shaped pulse. The β-chaotic sequence at any iteration *t* is given as:

$$J\_{t+1} = k\,\beta(J\_t; \nu, \mu, J\_1, J\_2) \tag{15}$$
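Iterating Eqs. (14)–(15) is straightforward; in the sketch below *J<sub>c</sub>* is treated as a free parameter (the paper fixes it from *ν*, *µ*, *J*1, *J*2), and the parameter values are illustrative, not taken from the paper:

```python
def beta_map(J, nu, mu, J1, J2, Jc, k):
    """One step of the beta-chaotic map of Eqs. (14)-(15); Jc is the
    peak of the beta-shaped pulse and is assumed given here."""
    if J1 <= J <= J2:
        S = ((J - J1) / (Jc - J1)) ** nu * ((J2 - J) / (J2 - Jc)) ** mu
    else:
        S = 0.0
    return k * S

# illustrative parameters, not from the paper
J, seq = 0.3, []
for _ in range(5):
    J = beta_map(J, nu=2.0, mu=2.0, J1=0.0, J2=1.0, Jc=0.5, k=1.0)
    seq.append(J)
```

With a suitable gain *k* the iterates stay inside [*J*1, *J*2] while wandering non-periodically, which is the property exploited in the position update below.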

(b) For the first part of the population during the second phase, an update mechanism is introduced and represented as:

$$\overrightarrow{step}\_i = \vec{R}\_L \otimes \left(\overrightarrow{TPR}\_i^{EM} - \vec{R}\_L \otimes \overrightarrow{TPM}\_i\right) \tag{16}$$

$$\overrightarrow{TPM}\_i = \overrightarrow{TPM}\_i + K \cdot \vec{R} \otimes \overrightarrow{step}\_i \tag{17}$$

Here, ~*R<sub>L</sub>* is a vector of random numbers drawn from the Levy distribution. Since most steps in a Levy distribution are very small, this movement mainly performs a fine local search (exploitation).

(c) More precisely, the update in the prey position can be governed by the following decision-making loop.

$$\overrightarrow{TPM}\_i = \overrightarrow{TPM}\_i + K \cdot J\_{t+1} \times \overrightarrow{step}\_i \tag{18}$$

In this modification, *R* has been replaced by the chaotic number generated by Equation (15). This implies that at every iteration a new chaotic number is assigned to the decision process. Hence, the decision for the position update is handled with the help of the chaotic function instead of a uniformly distributed random number. Pseudo code of the proposed algorithm is depicted in Algorithm 1.

#### **Algorithm 1** Pseudo code of proposed CMPA.


10: **end while**

11: **Print the values of Fitness, Accuracy and Attributes.**

#### *Discussion*

During stage 2, both prey and predator move at the same pace; hence, there is a chance of local minima stagnation, as the exploration and exploitation rates are almost the same. Hence, to keep the exploration and exploitation phases alive, the position update equation based on a random number has been replaced with chaotic numbers, which are obtained from the sequence generated as per the definitions in Equations (14) and (15).
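The replacement can be sketched in a few lines. In this illustration the logistic map is a simple stand-in for the β-map of Eq. (14), and all names are ours:

```python
def chaotic_sequence(J0, n, chaotic_map):
    """One fresh chaotic number per iteration, as the modification
    assigns a new chaotic value for every position decision."""
    seq, J = [], J0
    for _ in range(n):
        J = chaotic_map(J)
        seq.append(J)
    return seq

def cmpa_update(prey, step, J, K=0.5):
    """Eq. (18): the uniform factor R of Eq. (7) replaced by chaotic J."""
    return [p + K * J * s for p, s in zip(prey, step)]

# logistic map as a simple stand-in for the beta map of Eq. (14)
seq = chaotic_sequence(0.3, 3, lambda J: 3.9 * J * (1 - J))
pos = [0.5, 0.5]
for J in seq:
    pos = cmpa_update(pos, [0.1, -0.1], J)
```

Because the chaotic sequence never settles into a fixed point, consecutive step scalings differ even when the step vectors do not, which is what keeps the stage-2 search agile.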

Embedding chaos at this stage, when the velocities of prey and predator are almost the same, is most meaningful, because at this point the search agents can be drawn toward a local minima spot without changing course or exploring in a different direction; hence, it is necessary to keep the velocity gradient agile. This fact also motivates future experimental investigation of embedding chaos in the other phases. In this work, our focus is to embed chaos and observe the impact of this addition on the optimization performance of the algorithm in the binary domain alone. The following section presents the problem formulation used to evaluate the proposed CMPA.

#### **4. Problem Formulation**

From the evaluation perspective, feature selection approaches can be classified into two broad categories. In the first type, based on filter methods, an effective subset of the features is selected and its performance is evaluated, and the algorithm finally suggests the optimal subset; the subset is not evaluated over the training samples. On the other hand, wrapper-based feature selection approaches evaluate the feature subset, and performance validation is conducted with testing and validation data sets. Feature selection is always considered a multi-objective optimization problem whose objectives are the maximization of the classification accuracy with the minimum number of features in the subset. The two objectives are conflicting in nature; hence, the objective function employed in this study is a weighted combination of these objectives.

$$\text{ObjectiveFunction}(I) = w\_1 \times \text{Er}(D) + w\_2 \times \frac{R\_c}{N} \tag{19}$$

where *Er*(*D*) is the classification error rate of the given classifier (in this work, the K-nearest neighbor (KNN) classifier), *R<sub>c</sub>* is the number of selected features, *N* is the total number of features, and *w*1 and *w*2 are the weights, with *w*1 = 1 − *w*2. The weighted combination philosophy has been adapted from reference [11].
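The weighted objective of Eq. (19) is a one-liner; the sketch below uses *w*2 = 0.01 as an assumed weight (the paper adapts the weighting from reference [11] without stating the value here):

```python
def fs_fitness(error_rate, n_selected, n_total, w2=0.01):
    """Eq. (19): weighted sum of the classification error Er(D) and the
    fraction of selected features Rc/N, with w1 = 1 - w2.
    w2 = 0.01 is an assumed value, not taken from the paper."""
    w1 = 1.0 - w2
    return w1 * error_rate + w2 * n_selected / n_total

print(fs_fitness(0.10, 5, 20))   # 0.99*0.10 + 0.01*0.25 ≈ 0.1015
```

A small *w*2 makes accuracy dominate, so the feature-count term acts only as a tie-breaker between equally accurate subsets.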

#### **5. Results and Discussions**

We compare the proposed variant on the basis of the classification accuracy, the fitness values obtained by the algorithm and the average number of attributes obtained from the optimization runs. In order to assess the performance of the proposed algorithm, 17 classical data sets have been chosen. The details of the data sets are shown in Table 1.

We have reported our results in two sets. In set-1, a comparison is made with contemporary algorithms, and in set-2 the chaotic algorithms are simulated and their comparative analysis is presented.

#### *5.1. Experimental Details*

Designing a mechanism that chooses the optimal features from the given sets is a very important procedure, as randomness can alter the results substantially; hence, a rigorous experimental analysis has been carried out for choosing the number of iterations and the number of search agents, and both the chaotic marine algorithm and the native marine algorithm have been analyzed over many independent runs. We chose the Vote, Tic-Tac-Toe, Sonar, Penguin, Lymphography, Exactly, CongressEw and Breast Cancer data sets for this analysis, varying the number of search agents over (5, 10 and 20) and the maximum number of iterations over (20, 30, 50 and 70). From this analysis, we adopted 10 search agents and a maximum of 100 iterations. The analysis was conducted in such a manner that the parametric impact on the classification accuracy and the fitness values could be observed; with these parameter values, the accuracy of the classification is not compromised and the fitness values are also optimal. Further experimental details of this study are shown in Figure 2.


**Table 1.** Data sets used for experimental verification.

#### Comparison with Previously Published Approaches

For this investigation, the comparison is made with some previously reported approaches in the classification domain, where the objective function depicted in the previous section has been considered with the KNN classifier. The comparison results for the fitness values are shown in Table 2. It is worth mentioning here that the simulation process is time consuming; hence, the mean values of 10 runs are reported in the table. We observe that the fitness values for all the test data sets are optimal for the proposed CMPA, and in some cases the values of CMPA and MPA coincide. This fact establishes the applicability of CMPA in the binary domain. For example, in the case of the CongressEw data, the fitness values are optimal for both CMPA and MPA.

**Figure 2.** Experimental details of the study.


**Table 2.** Fitness value.

Further, a comparative analysis of the classification accuracy has also been conducted with previously published algorithms; we observed that the classification accuracy of the proposed algorithm is better than that of MPA, GA, PSO and ALO. These results are shown in Table 3. For example, in the case of the Zoo data set, the classification accuracy of the CMPA is about 98%; on the other hand, the classification accuracy is substantially lower for ALO (91%), GA (88%) and PSO (83%).

It is also important to showcase that this classification accuracy has been achieved without compromising on feature size. Hence, the attributes (features) selected by every algorithm in each run have been averaged and are shown in Table 4. These values are very important indicators: it can easily be observed from the table that the number of features selected by the algorithm is optimal in many cases, and this happens without compromising the classification accuracy.


**Table 3.** Comparative analysis of classification accuracy.

**Table 4.** Optimized mean of attributes.



**Table 4.** *Cont.*

#### *5.2. Comparative Analysis of MPA and CMPA*

For this analysis, we compared the optimization run results of MPA and CMPA on the basis of the attributes selected, the fitness function values and the classification accuracy achieved for the different data sets. Table 5 showcases the results of the Wilcoxon rank-sum test [30] between MPA and CMPA, with the *p*-values depicted in the table. This test is conducted at the 5% significance level (95% confidence).

**Table 5.** Statistical significance test with MPA.



**Table 5.** *Cont.*

An entry of 1 in the *p*-value column indicates the native algorithm, against which the statistical comparison is executed. Here, MPA is considered the native algorithm, and the rank-sum test has been executed between MPA and the proposed CMPA; results with *p*-values below 0.05 were considered to come from a different distribution. From the entries depicted in the table, it is observed that the CMPA provides competitive results when compared with MPA, providing optimal values of the attributes, fitness function values and classification accuracies for almost all data sets. This fact advocates the applicability of the proposed algorithm to the feature selection problem.

#### *5.3. Comparative Analysis of Performance of the Proposed CMPA with Other Chaotic Algorithms*

Further, it is an established fact that embedding chaos in metaheuristic algorithms improves the optimization efficiency in the binary domain. In order to investigate this fact, some recently published algorithms are considered for the evaluation of the performance of the proposed CMPA. These algorithms are the enhanced chaotic grasshopper optimization algorithm (ECGOA) (with sine map) [31], the sinusoidal-bridging-mechanism-based grasshopper algorithm (with sine map) [32] and the enhanced chaotic artificial bee colony algorithm (ECABC) (with sine map) [33]. The binary versions of these chaotic algorithms are obtained as per reference [11].

To showcase the impact of chaos on the performance of these algorithms, the classification accuracy along with the mean fitness and the attributes selected by the algorithms is depicted in Table 6. From the table, it is observed that for the majority of the data sets the classification accuracy is very competitive, and is achieved with a smaller number of selected features.


**Table 6.** Comparative analysis of performance with chaotic algorithms.

Further, as supporting evidence, a statistical significance test was conducted to compare the proposed algorithm with the other chaotic algorithms. The mean number of features obtained from the optimization runs, along with the *p*-values of the rank-sum test, are shown in Table 7. The following points are observed:


**Table 7.** Statistical significance analysis of CMPA with chaotic algorithms.


**Figure 3.** Graphical representation of the optimization results (set-1).


**Figure 4.** Graphical representation of the optimization results (set-2).

#### **6. Conclusions**

This paper reports an application of the chaotic marine predator algorithm to a feature selection task. A binary version of the chaotic MPA is proposed by altering the decision making of the stage-2 position update phase with a chaotic sequence: the decision process now incorporates chaotic numbers generated from that sequence. The proposed binary algorithm was tested over 17 data sets, and its behavior was analyzed against the native algorithm. We observed that the native algorithm is strong and robust, but some modifications to the position update process make it more suitable for the feature selection task. The results are reported through several analyses. The following are the major conclusions drawn from this work.

1. The algorithm analysis was conducted on the basis of the number of search agents and the number of iterations used for feature selection. Following this analysis, optimal values of the design parameters were selected for executing the feature selection task.


The application of chaos in multiple phases, with normalization and scaled functions, will be evaluated in future work.
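The stage-2 change summarized above — drawing update decisions from a chaotic sequence instead of a stock random generator — can be sketched as follows. The sine map, the 0.5 threshold, and the bit-flip rule are illustrative assumptions, not the paper's exact update:

```python
import math

def sine_map_sequence(x0, length, a=4.0):
    """Chaotic sine map x_{k+1} = (a/4) * sin(pi * x_k); for x0 in (0, 1)
    the iterates stay in [0, 1]."""
    seq, x = [], x0
    for _ in range(length):
        x = (a / 4.0) * math.sin(math.pi * x)
        seq.append(x)
    return seq

def binary_update(position, chaotic_values, threshold=0.5):
    """Hypothetical stage-2 decision rule: flip a bit of the binary
    position whenever the chaotic draw exceeds the threshold."""
    return [1 - b if c > threshold else b
            for b, c in zip(position, chaotic_values)]
```

For example, `binary_update([0, 1, 0], [0.9, 0.1, 0.6])` flips the first and third bits.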

**Author Contributions:** Conceptualization, K.M.S.; Data curation, K.A.A.; Formal analysis, A.S. and A.W.M.; Funding acquisition, A.F.A. and K.A.A.; Investigation, K.M.S.; Methodology, A.S.; Project administration, A.F.A.; Resources, K.A.A.; Supervision, A.W.M.; Writing—original draft, A.S.; Writing—review & editing, A.S. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research is funded by the Researchers Supporting Program at King Saud University, Project number (RSP-2021/323).

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Not applicable.

**Acknowledgments:** The authors present their appreciation to King Saud University for funding this research through the Researchers Supporting Program (Project number RSP-2021/323), King Saud University, Riyadh, Saudi Arabia.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


## *Article* **Guided Hybrid Modified Simulated Annealing Algorithm for Solving Constrained Global Optimization Problems**

**Khalid Abdulaziz Alnowibet <sup>1</sup> , Salem Mahdi <sup>2</sup> , Mahmoud El-Alem <sup>3</sup> , Mohamed Abdelawwad <sup>4</sup> and Ali Wagdy Mohamed 5,6,\***


**Abstract:** In this paper, a hybrid gradient simulated annealing algorithm is guided to solve the constrained optimization problem. When solving constrained optimization problems with deterministic methods, stochastic methods, or hybrids of the two, penalty function methods are the most popular approach for handling the constraints, owing to their simplicity and ease of implementation. The simulated annealing algorithm (SA) is one of the most successful meta-heuristic strategies, while the gradient method is the least expensive of the deterministic methods. In previous literature, the hybrid gradient simulated annealing algorithm (GLMSA) demonstrated efficiency and effectiveness in solving unconstrained optimization problems. In this paper, therefore, the GLMSA algorithm is generalized to solve constrained optimization problems: a new penalty function approach is proposed to handle the constraints, and it is used to guide the GLMSA, yielding a new algorithm (GHMSA) that solves the constrained optimization problem. The performance of the proposed algorithm is tested on several benchmark optimization test problems and some well-known engineering design problems of varying dimensions. Comprehensive comparisons against other methods in the literature are also presented. The results indicate that the proposed method is promising and competitive. The comparison between GHMSA and four state-of-the-art meta-heuristic algorithms indicates that the proposed GHMSA algorithm is competitive with, and in some cases superior to, existing algorithms in terms of the quality, efficiency, convergence rate, and robustness of the final result.

**Keywords:** nonlinear function; constrained optimization; hybrid algorithm; global optima; line search; gradient method; meta-heuristics; simulated annealing algorithm; constraint handling; penalty function; evolutionary computation; numerical comparisons

**MSC:** 65D05

#### **1. Introduction**

Optimization problems arise in different application fields, such as the technical sciences, industrial engineering, economics, networks, and chemical engineering; see, for example, [1–5].

**Citation:** Alnowibet, K.A.; Mahdi, S.; El-Alem, M.; Abdelawwad, M.; Mohamed, A.W. Guided Hybrid Modified Simulated Annealing Algorithm for Solving Constrained Global Optimization Problems. *Mathematics* **2022**, *10*, 1312. https:// doi.org/10.3390/math10081312

Academic Editor: Savin Treanta

Received: 3 March 2022 Accepted: 11 April 2022 Published: 14 April 2022

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

In general, the constrained optimization problem can be formulated as follows:

$$\begin{array}{ll}\min\_{\mathbf{x}\in\mathbb{R}^{n}} & f(\mathbf{x}),\\\text{s.t.} & g\_{l}(\mathbf{x}) \le 0, \quad l = 1, 2, \dots, q, \\ & h\_{d}(\mathbf{x}) = 0, \quad d = 1, 2, \dots, m, \ m < n, \\ & a\_{i} \le x\_{i} \le b\_{i}, \quad i = 1, 2, \dots, n,\end{array} \tag{1}$$

where *a<sup>i</sup>* ∈ {R ∪ {−∞}}, and *b<sup>i</sup>* ∈ {R ∪ {∞}}.

The functions *f*(*x*), *g<sub>l</sub>*(*x*), *h<sub>d</sub>*(*x*) : R<sup>*n*</sup> → R are real-valued, *n* denotes the number of variables in *x*, *q* is the number of inequality constraints, *m* is the number of equality constraints, and *a* and *b* are the lower and upper bounds on *x*, respectively. The objective function *f*, the inequality constraints *g<sub>l</sub>*, *l* = 1, 2, . . . , *q*, and the equality constraints *h<sub>d</sub>*, *d* = 1, 2, . . . , *m*, are assumed to be continuously differentiable nonlinear functions.
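A toy instance of Problem (1) can be expressed with plain callables; the specific functions and bounds below are illustrative, not from the paper:

```python
# A toy instance of Problem (1): one inequality, one equality, box bounds.
def f(x):      # objective f(x)
    return x[0] ** 2 + x[1] ** 2

def g1(x):     # inequality constraint g_1(x) <= 0
    return 1.0 - x[0] - x[1]

def h1(x):     # equality constraint h_1(x) = 0
    return x[0] - x[1]

bounds = [(-5.0, 5.0), (-5.0, 5.0)]   # a_i <= x_i <= b_i

def is_feasible(x, tol=1e-8):
    """Check all three constraint types of Problem (1)."""
    in_box = all(a <= xi <= b for xi, (a, b) in zip(x, bounds))
    return in_box and g1(x) <= tol and abs(h1(x)) <= tol
```

Here the point (0.5, 0.5) is feasible, while the unconstrained minimizer (0, 0) violates the inequality constraint.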

Recently, there has been great development of optimization algorithms that are proposed to find global solutions to optimization problems. See for example [2,6–8].

The global optimization methods are used to prevent convergence to local optima and increase the probability of finding the global optimum [9].

The numerical global optimization algorithms can be classified into two classes: deterministic and stochastic methods. In stochastic methods, the minimization process depends partly on probability. In deterministic methods, in contrast, no probabilistic information is used [9].

Thus, finding the global minimum of an unconstrained problem with deterministic methods requires an exhaustive search over the feasible region of *f*, together with additional assumptions on *f*. In contrast, for stochastic methods one can prove asymptotic convergence in probability, i.e., these methods are asymptotically successful with probability 1; see, for example, [10–12]. In general, the computational results of stochastic methods are better than those of deterministic methods [13].

For these reasons, a meta-heuristic strategy (a stochastic method) is used to guide the search process [13]. A meta-heuristic is a technique designed to solve a problem more quickly when classic methods are too slow, or to find an approximate solution when classic methods fail to find an exact or near-exact one. This is achieved by trading optimality, completeness, accuracy, or precision for speed [14–16].

The simulated-annealing algorithm (SA) is one of the most successful meta-heuristic strategies. In fact, the numerical results display that the simulated annealing technique is very efficient and effective for finding the global minimizer. See, for example, [2,5,17–19].

On the other hand, the gradient method is the most inexpensive method for finding a local minimizer of a continuously differentiable function. It has been proved that the gradient algorithm converges locally to a local minimizer [20]. Therefore, if a line-search (L) is added to the gradient method (G) as a globalization strategy, the resulting algorithm is globally convergent to a local minimizer (GL) [9,21,22].

Hence, when the simulated-annealing algorithm (SA) as a global optimization algorithm is combined with the line-search gradient method (GL) as a globally convergent method, the result is the hybrid gradient simulated annealing algorithm (GLMSA) [23]. The idea behind this hybridization is to gain the benefits and advantages of both the GL algorithm and the MSA algorithm.

In fact, the numerical results demonstrated that the GLMSA algorithm is a very efficient, effective, and strong competitor for finding the global minimizer. For example, Table 4 of [23] shows that the GL algorithm reaches the optimum of all test problems whose objective functions have a single minimum (no local minima except the global one, i.e., convex functions), but becomes stuck at a local minimum for test problems whose objective functions have several local minima with one global minimum (i.e., non-convex functions). Table 6 of [23] demonstrates that the modified simulated annealing algorithm (MSA) finds the global minimum of all test problems from any starting point of the feasible search space *S*. The hybrid gradient simulated annealing algorithm GLMSA, moreover, is faster than MSA, and is efficient and effective compared with other meta-heuristic algorithms.

All the above have motivated and encouraged us to generalize the GLMSA algorithm to solve Problem (1).

The literature shows that constraint handling based on a penalty function is the most widely implemented mechanism, owing to its simplicity and ease of implementation [24–27]. A penalty technique transforms Problem (1) into an unconstrained problem by adding a penalty term for each constraint violation to the objective function value. The remainder of this paper is organized as follows. The next section provides a brief description of the GLMSA algorithm. Constraint handling, the penalty function method, the proposed penalty method, and the interior-point algorithm are presented in Section 3. The guided hybrid simulated annealing algorithm for solving constrained problems is presented in Section 4. Numerical results are given in Section 5. Section 6 contains some concluding remarks.

**Note**: Section Abbreviations provides a list of the abbreviations and symbols which are used in this paper.

#### **2. Summarized Description of GLMSA Algorithm**

The GLMSA algorithm was designed for solving unconstrained optimization problems; in this paper it is generalized to solve Problem (1). The GLMSA algorithm uses two approaches to find a new step at each iteration, the first being the gradient method. In this approach, a candidate point is generated and may be accepted or rejected: if the objective function *f* decreases at this point, it is accepted; otherwise, the second approach is used to generate another point.

#### *2.1. The First Approach (Gradient Method)*

The gradient method solves an unconstrained optimization problem iteratively, such that at each iteration, a step in the direction of the negative gradient is computed and added to the current point as follows. Given an initial guess *<sup>x</sup>*<sup>0</sup> <sup>∈</sup> <sup>R</sup>*<sup>n</sup>* , the gradient method generates a sequence {*x<sup>k</sup>* }, *k* ≥ 0 of the objective function of the unconstrained optimization problem such that:

$$\mathbf{x}\_{k+1} = \mathbf{x}\_k + d\_k, \tag{2}$$

where *d<sup>k</sup>* is the first step, and it is defined by:

$$d\_k = -|\alpha\_k| \mathbf{g}(\mathbf{x}\_k), \tag{3}$$

where *g*(*x<sub>k</sub>*) is the gradient vector of the function *f* at the point *x<sub>k</sub>* and *α<sub>k</sub>* is a step length along the negative gradient direction (−*g*(*x<sub>k</sub>*)). The step length *α<sub>k</sub>* along −*g*(*x<sub>k</sub>*) is defined by:

$$\alpha\_{k} = \frac{f(\mathbf{x}\_{k})}{||\mathbf{g}(\mathbf{x}\_{k})||\_2^2}. \tag{4}$$

The gradient algorithm G is listed in Algorithm 1 of [23]. The step length *λ<sub>k</sub>* computed by the backtracking line-search approach is very important for the global convergence of the gradient method. The following section presents a brief description of the backtracking line-search approach used to globalize the gradient method.

#### Globalizing the First Approach (Gradient Method)

To make the gradient method capable of finding a local minimizer *x*<sup>∗</sup> of the objective function of the unconstrained optimization problem from any starting point *x*<sub>0</sub>, the *G* algorithm (gradient algorithm) is combined with the *L* algorithm (line-search algorithm) to obtain the globally convergent algorithm *GL*. This algorithm is listed in Algorithm 1 below and contains the first approach (gradient algorithm G) and the backtracking line-search algorithm L.

**Algorithm 1** Line-Search Gradient Algorithm "GL"

**Input:** *f* : R<sup>*n*</sup> → R, *f* ∈ *C*<sup>1</sup>, *γ* ∈ (0, 1), *k* = 0, a starting point *x<sub>k</sub>* ∈ R<sup>*n*</sup>, and *ε* > 0.
**Output:** *x*<sup>∗</sup> = *x<sub>ac</sub>*, the local minimizer of *f*, and *f*(*x*<sup>∗</sup>), the value of *f* at *x*<sup>∗</sup>.
1: Set *x<sub>ac</sub>* = *x*<sub>0</sub>. ⊳ *x<sub>ac</sub>* is the accepted solution.
2: Compute *f<sub>ac</sub>* = *f*(*x<sub>ac</sub>*), *g<sub>ac</sub>* = *g*(*x<sub>ac</sub>*) and *d<sub>k</sub>*.
3: **while** ‖*g<sub>ac</sub>*‖<sub>2</sub> > *ε* **do** ⊳ *g<sub>ac</sub>* is the gradient vector at the accepted point *x<sub>ac</sub>*.
4: Set *k* = *k* + 1.
5: *x<sub>k</sub>* = *x<sub>ac</sub>* + *d<sub>k</sub>*. ⊳ *x<sub>ac</sub>* is the accepted point from the previous iteration.
6: Compute *f<sub>k</sub>* = *f*(*x<sub>k</sub>*).
7: Set *λ* = 1.
8: **while** *f<sub>k</sub>* > *f<sub>ac</sub>* + *γλg<sub>ac</sub><sup>T</sup>d<sub>k</sub>* **do**
9: Set *λ* = *λ*/2.
10: *x<sub>k</sub>* = *x<sub>ac</sub>* − *λg<sub>ac</sub>*. ⊳ in this paper the value of *γ* is 10<sup>−4</sup>.
11: Compute *f<sub>k</sub>* = *f*(*x<sub>k</sub>*).
12: **end while**
13: Set *x<sub>ac</sub>* ← *x<sub>k</sub>* and *f<sub>ac</sub>* ← *f*(*x<sub>k</sub>*).
14: Compute *g<sub>ac</sub>* = *g*(*x<sub>ac</sub>*) and *d<sub>k</sub>*.
15: **end while**
16: **return** *x<sub>ac</sub>*, the local minimizer, and its function value *f<sub>ac</sub>*.
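Algorithm 1 can be sketched in Python as follows, assuming an analytic gradient is available; the backtracking uses the standard step form x + λd (equivalent up to the |α| scaling of line 10), and the quadratic in the usage example is an illustrative test function, not one of the paper's benchmarks:

```python
import math

def gl_minimize(f, grad, x0, gamma=1e-4, eps=1e-4, max_iter=500):
    """Line-search gradient method ("GL"): d_k = -|alpha_k| * g(x_k) with
    alpha_k = f(x_k) / ||g(x_k)||^2 (Equations (3)-(4)), lambda halved
    until the Armijo condition f_k <= f_ac + gamma*lambda*g_ac^T d_k holds."""
    x = list(x0)
    for _ in range(max_iter):
        g = grad(x)
        gnorm2 = sum(gi * gi for gi in g)
        if math.sqrt(gnorm2) <= eps:
            break                                   # ||g_ac||_2 small: stop
        alpha = f(x) / gnorm2                       # Equation (4)
        d = [-abs(alpha) * gi for gi in g]          # Equation (3)
        fx = f(x)
        gtd = sum(gi * di for gi, di in zip(g, d))  # g_ac^T d_k (negative)
        lam = 1.0
        x_new = [xi + lam * di for xi, di in zip(x, d)]
        while f(x_new) > fx + gamma * lam * gtd and lam > 1e-12:
            lam *= 0.5                              # backtrack: lambda <- lambda/2
            x_new = [xi + lam * di for xi, di in zip(x, d)]
        x = x_new
    return x
```

For instance, `gl_minimize(lambda v: v[0]**2 + 1.0, lambda v: [2.0 * v[0]], [3.0])` drives the iterate toward the minimizer at 0.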

For more details about the gradient method and the backtracking line-search approach see [23]. The second approach of the GLMSA algorithm is presented in the following subsection.

#### *2.2. The Second Approach (Simulated Annealing SA)*

It must be noted that the modified simulated annealing algorithm in [23] contains three alternatives for generating a new point; in this paper, only the first alternative is used. This choice is very important for reducing the number of function evaluations from three per iteration to one, because more inner iterations are needed when solving constrained optimization problems. This procedure guarantees that the parameters of the penalty function increase sufficiently, which is a necessary condition for non-stationary penalty functions [28], i.e., as *k* → ∞, the parameters must also go to infinity.

The second point is generated by

$$\mathbf{x}\_{k'} = \mathbf{x}\_{ac} + \psi\_{k'}, \tag{5}$$

where *x<sub>ac</sub>* is the best point accepted so far and *ψ<sub>k′</sub>* is the step of the second approach, computed by Algorithm 2 below.

The gradient line-search algorithm (GL) has been listed in Algorithm 1 and a modified simulated annealing algorithm (MSA) is illustrated by Algorithm 3.

**Algorithm 2** The second approach, generating the step *ψ<sub>k′</sub>*.

Step 1: Set *k*′ = 0.
Step 2: Compute *ω<sub>k′</sub>* = 10<sup>(0.1·*k*′)</sup>.
Step 3: Generate a random vector *X<sub>k′</sub>* ∈ [−1, 1]<sup>*n*</sup>.
Step 4: Compute *D<sup>i</sup><sub>k′</sub>* = (−1 + (1 + *ω<sub>k′</sub>*)<sup>|*X<sup>i</sup><sub>k′</sub>*|</sup>)/*ω<sub>k′</sub>*, *i* = 1, 2, . . . , *n*. ⊳ *n* is the number of variables.
Step 5: Set *DX<sup>i</sup><sub>k′</sub>* = *sign*(*X<sup>i</sup><sub>k′</sub>*).
Step 6: Compute *DE<sup>i</sup><sub>k′</sub>* = *D<sup>i</sup><sub>k′</sub>* · *DX<sup>i</sup><sub>k′</sub>*.
Step 7: Compute *ψ<sup>i</sup><sub>k′</sub>* = *b<sup>i</sup>* · *DE<sup>i</sup><sub>k′</sub>*. ⊳ *b<sup>i</sup>* is the upper bound of the feasible search space.
Step 8: *k*′ ← *k*′ + 1.
Step 9: Repeat Steps 2–8 until *k*′ = *N*. ⊳ *N* is the number of iterations, given in advance.
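A minimal sketch of Algorithm 2 in Python follows; the grouping of Step 4 is reconstructed as D = ((1 + ω)<sup>|X|</sup> − 1)/ω, which maps |X| ∈ [0, 1] onto [0, 1] so that each component of ψ stays within ±b<sub>i</sub>:

```python
import random

def generate_step(b, kp):
    """One trial step psi_{k'} following Algorithm 2 (kp plays the role
    of the inner counter k'); the Step-4 grouping is an assumption."""
    omega = 10.0 ** (0.1 * kp)                        # Step 2
    psi = []
    for bi in b:
        x = random.uniform(-1.0, 1.0)                 # Step 3 (one component)
        d = ((1.0 + omega) ** abs(x) - 1.0) / omega   # Step 4: d in [0, 1]
        psi.append(bi * d * (1.0 if x >= 0.0 else -1.0))  # Steps 5-7
    return psi
```

As `kp` grows, ω grows and the distribution of `d` concentrates near 0, so the trial steps shrink over the inner iterations.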

#### **Algorithm 3** Modified Simulated-Annealing "MSA".

**Input:** *x<sub>ac</sub>*, *f<sub>ac</sub>*, *N* and *T*. ⊳ *T* is the control parameter (temperature).
**Output:** *x<sub>best</sub>*, the best of the *N* points, and its value *f<sub>best</sub>*.
1: **for** *k*′ = 0 → *N* **do**
2: *x<sub>k′</sub>* = *x<sub>ac</sub>* + *ψ<sub>k′</sub>*, using Equation (5).
3: Compute ∆*f* = *f*(*x<sub>k′</sub>*) − *f<sub>ac</sub>*.
4: **if** ∆*f* < 0 **then**
5: Set *x<sub>ac</sub>* ← *x<sub>k′</sub>*, *f<sub>ac</sub>* ← *f*(*x<sub>k′</sub>*).
6: **else**
7: Generate a random number *β* ∈ (0, 1).
8: **if** *β* < *e*<sup>−∆*f*/*T*</sup> **then**
9: Set *x<sub>ac</sub>* ← *x<sub>k′</sub>*, *f<sub>ac</sub>* ← *f*(*x<sub>k′</sub>*).
10: **end if**
11: **end if**
12: **end for**
13: **return** *x<sub>ac</sub>* and its function value *f<sub>ac</sub>*. ⊳ *f<sub>ac</sub>* = *f*(*x<sub>ac</sub>*).

where *N* is the maximum number of possible trials (Length Markov Chains of MSA) and *T* is the control parameter (temperature). For more details about the MSA algorithm, please, see [23].
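The Metropolis acceptance loop of Algorithm 3 can be sketched as follows; the candidate list stands in for the points x<sub>ac</sub> + ψ<sub>k′</sub> of Equation (5), and the toy function in the usage example is an assumption:

```python
import math
import random

def msa_accept(f, x_ac, candidates, T):
    """Acceptance loop of Algorithm 3: a better point is always kept,
    and a worse one is kept with probability exp(-delta_f / T)."""
    f_ac = f(x_ac)
    for x_new in candidates:
        delta = f(x_new) - f_ac
        if delta < 0.0 or random.random() < math.exp(-delta / T):
            x_ac, f_ac = x_new, f(x_new)
    return x_ac, f_ac
```

With a very low temperature the rule degenerates to greedy descent: for `f(x) = x*x`, starting at 4.0 with candidates `[3.0, 5.0, 1.0, 2.5]` and `T = 0.01`, only the improving moves 4.0 → 3.0 → 1.0 are accepted.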

For a detailed description of the simulated annealing algorithm SA see for example [18,29–31].

As mentioned above, Algorithm 1 (the gradient line-search algorithm GL) is hybridized with Algorithm 3 (the modified simulated annealing algorithm MSA) to obtain the GLMSA algorithm, which solves the unconstrained optimization problem.

In the next section, the GLMSA algorithm is guided to solve Problem (1) by means of the penalty function method, one of the many methods for handling the constraints of a constrained problem.

#### **3. Constraints Handling**

The algorithms which have been proposed to solve unconstrained optimization problems are unable to deal directly with constrained optimization problems. There are several approaches proposed to handle the existence of the constraints, see for example [27,32,33]. The most popular of them is the penalty function method.

The penalty function method is a successful technique for handling constraints [27,34,35].

#### *3.1. Penalty Function Methods*

Penalty methods have been the most widely studied and used owing to their simplicity of implementation. The defining feature of a penalty function method is the degree to which each constraint is penalized [28]. Several types of penalty method are used to penalize the constraints in constrained optimization problems.

Three groups of penalty function methods are most popular; the first one is a group of methods of static penalties. In these methods, the penalty parameter does not depend on the current iteration, i.e., parameters remain constant through the evolutionary process [24,36].

The second is a set of methods of dynamic penalties. In these methods, the penalty parameters usually depend on the current iteration; in other words, they are functions of the iteration *k*, i.e., they are non-stationary. See [24,37,38].

The third is a set of methods of adaptive penalties; in this group penalty parameters are updated for every iteration [24].

The next section presents a suggested penalty function method with dynamic and adaptive parameters.

#### Proposed Penalty Function Method

This section shows how Problem (1) is transformed to an unconstrained optimization problem which is simple bounded as follows:

$$\begin{array}{ll}\min\_{\mathbf{x}\in\mathbb{R}^n} & \theta(\mathbf{x},r) = f(\mathbf{x}) + rp(\mathbf{x}),\\ \text{s.t.} & a\_i \le x\_i \le b\_i, \quad i = 1,2,\dots,n, \end{array} \tag{6}$$

where *f*(*x*) is the original objective function in Problem (1), *r* is a penalty parameter. The penalty term *p*(*x*) is defined by:

$$p(\mathbf{x}) = \sum\_{l=1}^{q} \left( \max\{0, \mathbf{g}\_l(\mathbf{x})\} \right)^2 + \sum\_{j=1}^{m} |h\_j(\mathbf{x})|^2. \tag{7}$$
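The penalty term of Equation (7) can be sketched directly; the toy constraints in the usage example are illustrative placeholders:

```python
def penalty(x, ineq, eq):
    """Penalty term of Equation (7): squared violations of the
    inequality constraints g_l(x) <= 0 and equality constraints h_j(x) = 0."""
    p = sum(max(0.0, g(x)) ** 2 for g in ineq)
    p += sum(h(x) ** 2 for h in eq)
    return p

# Illustrative toy constraints (not from the paper):
ineq = [lambda x: x[0] - 1.0]     # x_0 - 1 <= 0
eq   = [lambda x: x[0] + x[1]]    # x_0 + x_1 = 0
```

A feasible point contributes zero penalty; at (2, 1) the inequality violation of 1 and equality violation of 3 give a penalty of 1² + 3² = 10.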

The difference between the penalty function methods is in the way of defining the penalty term and its parameter *r* [24].

The penalty function methods force infeasible points toward the feasible region by stepwise increasing the penalty parameter *r* used in the penalizing function *p*(*x*).

Therefore, the solution *x* ∗ minimizes the objective function of Problem (6) and also minimizes the objective function of Problem (1), i.e., as long as *k* → ∞ and *r <sup>k</sup>* → ∞, *x* ∗ approaches the feasible region and *r k p*(*x*) → 0 [28].

In this paper, the penalty function method has two parameters: the first is *r*, which penalizes an inequality constraint that is violated, i.e., when *g<sub>l</sub>*(*x*) > 0; the second is *t*, which penalizes an equality constraint *h<sub>j</sub>*(*x*) whose value is not equal to zero.

Accordingly, the *θ*(*x*,*r*) function is defined by:

$$
\theta(\mathbf{x}, r) = f(\mathbf{x}) + \frac{r}{2} p\_1(\mathbf{x}) + \frac{t}{2} p\_2(\mathbf{x}) \,\tag{8}
$$

where $p\_1(\mathbf{x}) = \sum\_{l=1}^{q} \left(\max\{0, g\_l(\mathbf{x})\}\right)^2$, $p\_2(\mathbf{x}) = \sum\_{j=1}^{m} |h\_j(\mathbf{x})|^2$, and *r* and *t* are the parameters for the inequality and equality constraints, respectively.

The parameters *r* and *t* are updated at each iteration *k* as follows.

$$\begin{cases} r\_{k+1} = r\_k + \varphi\_k \Phi\_k, \\ t\_{k+1} = t\_k + 1, \end{cases} \tag{9}$$

where the parameter *ϕ<sup>k</sup>* is updated by:

$$\varphi\_k = \begin{cases} 0 & \text{if } g\_l(\mathbf{x}) \le 0, \\ 2 & \text{otherwise.} \end{cases} \tag{10}$$

The parameter *ϕ<sub>k</sub>* is adaptive: when the candidate solutions fall outside the feasible region, *ϕ<sub>k</sub>* penalizes a violated constraint by adding the term 2Φ*<sub>k</sub>* to *r<sub>k</sub>*, where *r*<sub>0</sub> = 1 is the initial value of *r*. The parameter Φ is updated as Φ*<sub>k+1</sub>* = Φ*<sub>k</sub>* + 1, and *t*<sub>0</sub> = 1.
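The update rules of Equations (9) and (10) can be sketched as a single step; applying the per-constraint rule (10) across all inequality values via `any(...)` is an interpretation, not the paper's exact code:

```python
def update_penalty_params(r, t, Phi, g_values):
    """One update of Equation (9): phi_k = 2 when any inequality
    constraint is violated (Equation (10)), else 0; t and Phi grow by 1."""
    phi = 2.0 if any(g > 0.0 for g in g_values) else 0.0
    return r + phi * Phi, t + 1.0, Phi + 1.0
```

Starting from r = t = Φ = 1, a violated constraint yields (r, t, Φ) = (3, 2, 2), while a feasible iterate leaves r unchanged at (1, 2, 2).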

**Note**: An equality constraint is more difficult to handle than an inequality constraint because the feasible region it defines is smaller. For example, compare *f*(*x*, *y*) = *xy* s.t. *h*(*x*, *y*) = *x*<sup>2</sup> + *y*<sup>2</sup> − 1 = 0 with *f*(*x*, *y*) = *xy* s.t. *g*(*x*, *y*) = *x*<sup>2</sup> + *y*<sup>2</sup> − 1 ≤ 0. The first problem is much harder than the second because its feasible region is only the circumference of the unit circle, while in the second the feasible region is the whole disk. The parameter *t<sub>k</sub>* must therefore be chosen carefully.

#### *3.2. Mechanism of Working of the Penalty Function Method*

The penalty method solves the general Problem (1) through a succession of unconstrained optimization problems.

Let us discuss two examples in order to illustrate how the parameters of the penalty function are run.

The first example is very simple (one-dimensional): minimize *f*(*x*) = *x*<sup>2</sup> − 3 subject to *g*(*x*) = 0.5 − 0.5*x* ≤ 0, where *S* = [−6, 6] is the search domain.

If we minimize *f*(*x*) = *x*<sup>2</sup> − 3 as an unconstrained problem, the global solution is clearly *x*<sup>∗</sup> = 0 with *f*(*x*<sup>∗</sup>) = −3 for *x* ∈ R. However, minimizing *f*(*x*) = *x*<sup>2</sup> − 3 subject to *g*(*x*) ≤ 0 is much harder, because the point *x*<sup>∗</sup> must minimize *f*(*x*) while also satisfying the constraint *g*(*x*) ≤ 0; this is why the penalty function is applied.

Hence, the problem *f*(*x*) = *x*<sup>2</sup> − 3 subject to *g*(*x*) = 0.5 − 0.5*x* ≤ 0 is transformed into *θ*(*x*, *r*) = *f*(*x*) + (*r*/2)(max{0, (1/2 − 0.5*x*)})<sup>2</sup>. When *g*(*x*) > 0 (the constraint is violated), setting the derivative *dθ*(*x*, *r*)/*dx* = 1 − (*r*/2)(1/2 − 0.5*x*) equal to zero gives *x*<sup>∗</sup> = 1 − 4/*r*. For *r* = {1, 2, 3, . . . , ∞} we obtain *x*<sup>∗</sup> = {−3, −1, −1/3, . . . , 1}, *f*(*x*<sup>∗</sup>) = {6, −2, −26/9, . . . , −2} and *g*(*x*<sup>∗</sup>) = {2, 1, 2/3, . . . , 0}; i.e., as *r* → ∞, *x*<sup>∗</sup> → 1, *g*(*x*<sup>∗</sup>) → 0, *rp*(*x*<sup>∗</sup>) → 0, *f*(*x*<sup>∗</sup>) → −2, and *θ*(*x*<sup>∗</sup>, *r*) → −2.

Hence, the optimal point is *x* ∗ = 1, such that *f*(*x* ∗ ) = −2 and the constraint *g*(*x* ∗ ) = 0 is satisfied.
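The limit behavior of Example 1 can be verified numerically with a brute-force grid minimization of the penalized objective (a check, not the paper's algorithm): as *r* grows, the minimizer of *θ*(·, *r*) over *S* = [−6, 6] approaches *x*<sup>∗</sup> = 1 and *θ* approaches −2:

```python
def theta(x, r):
    """Penalized objective of Example 1: f(x) = x^2 - 3 with
    g(x) = 0.5 - 0.5*x <= 0, in the form of Equation (6)."""
    return x ** 2 - 3.0 + (r / 2.0) * max(0.0, 0.5 - 0.5 * x) ** 2

def argmin_on_grid(r, lo=-6.0, hi=6.0, steps=24001):
    """Brute-force minimizer of theta(., r) over S = [-6, 6]."""
    best_x = lo
    best_v = theta(lo, r)
    for i in range(1, steps):
        x = lo + (hi - lo) * i / (steps - 1)
        v = theta(x, r)
        if v < best_v:
            best_x, best_v = x, v
    return best_x
```

For small *r* the penalized minimizer still violates the constraint; for large *r* it is pushed toward the boundary point *x*<sup>∗</sup> = 1.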

Figure 1 illustrates the behavior of the penalty functions *rp*<sub>1</sub>(*x*), *rp*<sub>2</sub>(*x*) and *rp*<sub>3</sub>(*x*), the objective function *f*(*x*) of the original (constrained) problem, and the objective function *θ*(*x*, *r*) of the transformed (unconstrained) problem for all *x* ∈ *S* = [−6, 6].

Example 2: minimize −*xy* s.t. *g*(*x*, *y*) = *x* + 2*y* − 4 ≤ 0. Here *θ*(*x*, *y*, *r*) = −*xy* + (*r*/2)(max{0, (*x* + 2*y* − 4)})<sup>2</sup>. When *g*(*x*, *y*) > 0 (the constraint is violated), the gradient vector of *θ*(*x*, *y*, *r*) is ∇*θ* = [−*y* + *r*(*x* + 2*y* − 4), −*x* + 2*r*(*x* + 2*y* − 4)]; setting it to zero gives (*x*<sup>∗</sup>, *y*<sup>∗</sup>) = (2/(1 − 1/(4*r*)), 1/(1 − 1/(4*r*))), so (*x*<sup>∗</sup>, *y*<sup>∗</sup>) → (2, 1) as *r* → ∞. This is why the parameters *r<sub>k</sub>* and *t<sub>k</sub>* must be allowed to increase as long as a violated constraint exists, i.e., while the search process is in an infeasible region.
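The algebra of Example 2 can be checked directly: the closed-form point should annihilate the gradient of the penalized objective for any finite *r*, and tend to (2, 1) as *r* grows. A small sketch (function names are ours):

```python
def grad_theta(x, y, r):
    """Gradient of theta(x, y, r) = -x*y + (r/2) * max(0, x + 2y - 4)^2."""
    s = max(0.0, x + 2.0 * y - 4.0)   # constraint violation, if any
    return (-y + r * s, -x + 2.0 * r * s)

def stationary_point(r):
    """Closed-form stationary point (2/(1 - 1/(4r)), 1/(1 - 1/(4r)))
    from the text, valid while the constraint is violated."""
    c = 1.0 - 1.0 / (4.0 * r)
    return 2.0 / c, 1.0 / c
```

For r = 10 the point is roughly (2.051, 1.026), slightly infeasible, with a numerically zero gradient.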

To ensure that the process of searching for the optimal solution remains within the search domain, the interior-point algorithm is used. Therefore, the next section presents a brief description of this technique.

**Figure 1.** The penalty function *rp*(*x*) converges to zero, while *f*(*x*) → −2 and *θ*(*r*, *x*) → −2, the optimal value of the constrained problem.

#### *3.3. Interior-Point Method*

The interior-point method is used in this paper when simple bound constraints are present in a test problem; it ensures that the candidate solution lies inside the feasible region. It is applied as follows: at each iteration *k*, a damping parameter *τ<sub>k</sub>* is applied to ensure that *x<sub>k+1</sub>* is feasible with respect to the limits *a<sub>i</sub>* ≤ *x<sub>i</sub>* ≤ *b<sub>i</sub>*, *i* = 1, 2, . . . , *n*, *k* = 1, 2, . . . , *M*, in the inner loop of Algorithm 4, ref. [39].

#### **Algorithm 4** Guided Hybrid Modified Simulated-Annealing Algorithm (GHMSA).

**Input:** *f*(*x*), *g<sub>l</sub>*(*x*) and *h<sub>d</sub>*(*x*) : R<sup>*n*</sup> → R, *x*<sub>0</sub> ∈ R<sup>*n*</sup>, *M*, *T*, *T<sub>f</sub>*, *T<sub>out</sub>*, *ε*, *r*<sub>0</sub>, Φ<sub>0</sub> and *t*<sub>0</sub>.
1: Set *x<sub>ac</sub>* = *x*<sub>0</sub>. ⊳ at the beginning, the initial point *x*<sub>0</sub> is accepted as the optimal solution.
2: Compute *θ*(*x<sub>ac</sub>*) = *f*(*x<sub>ac</sub>*) + (*r<sub>k</sub>*/2)*p*<sub>1</sub>(*x<sub>ac</sub>*) + (*t<sub>k</sub>*/2)*p*<sub>2</sub>(*x<sub>ac</sub>*). ⊳ using Formula (8).
3: Set *θ<sub>b</sub>* = *θ*(*x<sub>ac</sub>*) and *θ<sub>δ</sub>* = 1. ⊳ the values of *θ<sub>b</sub>* and *θ<sub>δ</sub>* are updated after *M* iterations.
4: **while** *T* > *T<sub>f</sub>* and *θ<sub>δ</sub>* > *ε*, or *T* > *T<sub>out</sub>* **do** ⊳ *T<sub>out</sub>* < *T<sub>f</sub>* ≤ 10<sup>−4</sup> serve as stopping criteria.
5: **for** *k* = 0 to *M* **do**
6: Compute *θ*(*x<sub>ac</sub>*) = *f*(*x<sub>ac</sub>*) + (*r<sub>k</sub>*/2)*p*<sub>1</sub>(*x<sub>ac</sub>*) + (*t<sub>k</sub>*/2)*p*<sub>2</sub>(*x<sub>ac</sub>*).
7: Set *θ<sub>ac</sub>* = *θ*(*x<sub>ac</sub>*).
8: Compute *x*<sub>1</sub> = *x<sub>ac</sub>* + *d<sub>k</sub>*. ⊳ *d<sub>k</sub>* is computed by (16).
9: Apply Formula (14) to ensure that the point *x*<sub>1</sub> lies inside [*a*, *b*]<sup>*n*</sup>.
10: Compute ∆*θ* = *θ*(*x*<sub>1</sub>) − *θ<sub>ac</sub>*.
11: **if** ∆*θ* < 0 **then**
12: Go to Algorithm 1.
13: **else**
14: Go to Formula (5) to generate another point.
15: **end if**
16: **end for**
17: Compute Φ<sub>*k*+1</sub> = Φ<sub>*k*</sub> + 1. ⊳ here the penalty parameters are updated.
18: *T* = *r<sub>T</sub>* · *T*. ⊳ decrease the temperature, where *r<sub>T</sub>* = 0.8.
19: Compute *θ<sub>δ</sub>* = |*θ<sub>b</sub>* − *θ<sub>ac</sub>*| and set *θ<sub>b</sub>* ← *θ<sub>ac</sub>*. ⊳ *θ<sub>δ</sub>* is a stopping criterion for when the solutions converge to an accumulation point.
20: **end while**
21: Set *x<sub>g</sub>* ← *x<sub>ac</sub>*, *θ<sub>g</sub>* ← *θ<sub>ac</sub>*.
22: **return** *x<sub>g</sub>*, the global minimizer, and the value of the objective function *θ*(*x<sub>g</sub>*) at *x<sub>g</sub>*.

The damping parameter *τ<sup>k</sup>* is defined to be:

$$\tau\_k = \min\{1, \min\_i \{u\_k^i, v\_k^i\}\},\tag{11}$$

where

$$u\_k^i = \begin{cases} \frac{\left[a^i - x\_k^i\right]}{\Delta x\_k^i} & \text{if } a^i > -\infty \text{ and } \Delta x\_k^i < 0, \\ 1 & \text{otherwise,} \end{cases} \tag{12}$$

$$v\_k^i = \begin{cases} \frac{\left[b^i - x\_k^i\right]}{\Delta x\_k^i} & \text{if } b^i < \infty \text{ and } \Delta x\_k^i > 0, \\ 1 & \text{otherwise,} \end{cases} \tag{13}$$

where *a<sup>i</sup>* and *b<sup>i</sup>* are the lower and upper bounds of the problem domain, respectively, *i* = 1, 2, . . . , *n*, *n* is the number of variables, *x<sup>i</sup><sub>k</sub>* is the *i*th component of the variable *x* at iteration *k*, and ∆*x<sub>k</sub>* denotes the step obtained by either Formula (2) or Formula (5).

Since the sequence {*x<sub>k</sub>*} is required to satisfy *a* < *x<sub>k</sub>* < *b* for all *k*, the point *x<sub>k+1</sub>* is computed by:

$$
x\_{k+1} = x\_k + 0.99\,\tau\_k \Delta x\_k, \tag{14}
$$

where the constant 0.99 is a damping factor that ensures *x<sub>k+1</sub>* remains feasible with respect to the domain of the function in the problem.
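The damping computation in Formulas (11)–(14) can be sketched as follows. This is an illustrative reading only, assuming NumPy arrays for the iterate, step, and bounds; the helper name `damped_step` is ours, not the paper's.

```python
import numpy as np

def damped_step(x, dx, a, b):
    """Compute tau_k from Eqs. (11)-(13) and the next iterate
    x_{k+1} = x_k + 0.99 * tau_k * dx from Eq. (14).
    a, b are per-coordinate lower/upper bounds (may be -inf / inf);
    the step components dx are assumed nonzero."""
    u = np.where((a > -np.inf) & (dx < 0), (a - x) / dx, 1.0)  # Eq. (12)
    v = np.where((b < np.inf) & (dx > 0), (b - x) / dx, 1.0)   # Eq. (13)
    tau = min(1.0, np.min(np.minimum(u, v)))                   # Eq. (11)
    return x + 0.99 * tau * dx                                 # Eq. (14)

# A step that would overshoot the upper bound is damped back inside [a, b]:
x_next = damped_step(np.array([0.5]), np.array([2.0]),
                     np.array([0.0]), np.array([1.0]))
```

With the full step the iterate would land at 2.5, outside [0, 1]; the damping keeps it strictly inside the box, as Eq. (14) intends.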

#### **4. The Proposed Algorithm for Solving Constrained Optimization Problems (GHMSA)**

According to the above procedures, the GLMSA Algorithm is capable of solving Problem (1) as a constrained optimization problem by solving Problem (6) as an unconstrained optimization problem; hence, some changes are made to the objective function *θ*(*x*, *r*) in Problem (6) so that it fits the first step of the GLMSA Algorithm, as follows.

• the function *f*(*x*) is replaced by the function *θ*(*x*, *r*) defined in Equation (8), and then we calculate

$$\alpha\_k = \frac{\theta(x\_{ac})}{\|g(x\_{ac}, r\_k)\|\_2^2},\tag{15}$$

where *xac* is the accepted solution at iteration *k*,

$$d\_k = -|\alpha\_k|\, g(x\_{ac}, r\_k),\tag{16}$$

where the parameter *r <sup>k</sup>* might denote *r* only or *t* only or both together according to a type of constrained optimization problem, for example, if the constraints contain mixed constraints inequality and equality, then *r <sup>k</sup>* = (*r k* , *t k* ).

• if the constrained problem contains simple bounds, we use Formula (14) to keep the new point inside these bounds.

In light of the above procedures, we rename the GLMSA Algorithm the "Guided Hybrid Modified Simulated-Annealing Algorithm" with the abbreviation "GHMSA".
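The direction computation in Eqs. (15) and (16) can be sketched as follows; this is a minimal illustration under our reading that *g* denotes the gradient of the penalty function *θ* at the accepted point, and the function name `search_direction` is ours.

```python
import numpy as np

def search_direction(theta_val, grad):
    """Sketch of Eqs. (15)-(16): alpha_k = theta(x_ac) / ||g||_2^2 and
    d_k = -|alpha_k| * g, with g read as the gradient of the penalty
    function theta(x, r) from (8) at the accepted point x_ac."""
    grad = np.asarray(grad, dtype=float)
    alpha = theta_val / np.dot(grad, grad)   # Eq. (15)
    return -abs(alpha) * grad                # Eq. (16)

# e.g. theta = 5 and gradient (1, 2) give alpha = 5/5 = 1, so d_k = (-1, -2)
d = search_direction(5.0, [1.0, 2.0])
```

Note that the direction always opposes the gradient (a descent direction for *θ*), while the magnitude |α<sub>k</sub>| scales the step by the current penalty-function value.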

#### *Setting Parameters of GHMSA Algorithm*

The choice of a cooling schedule has an important impact on the performance of the simulated-annealing algorithm. The cooling schedule includes two terms: the initial value of the temperature *T* and the cooling coefficient *r<sup>T</sup>* which is used to reduce *T*. Many suggestions have been proposed in the literature for determining the initial value of the temperature *T* and the cooling coefficient *rT*, see for example [4,18,40–42].

In general, it is widely agreed that the initial temperature *T* must be sufficiently high (to ensure escape from local minima) and that *r<sub>T</sub>* ∈ (0.1, 1) [7,43,44]. In this section, we suggest that the initial value of *T* be related to the number of variables and the value of *f*(*x*) at the starting point *x*<sub>0</sub>. The cooling coefficient is taken as *r<sub>T</sub>* ∈ [0.8, 1) to decrease the temperature *T* slowly.

Therefore, the parameters used in Algorithm 4 are as follows: *M* is the maximum number of iterations of the inner loop, *T* is the control parameter (temperature), *T<sub>out</sub>* is a final value of *T*, *r<sub>T</sub>* is the cooling coefficient, and *T<sub>f</sub>* is a final value of *T* when it is sufficiently small.

The setting of parameters is as follows: *T* = 10<sup>4</sup>, *ε* = 10<sup>−6</sup>, *T<sub>f</sub>* = 10<sup>−14</sup>, *T<sub>out</sub>* = 10<sup>−20</sup>, *r<sub>T</sub>* = 0.8, and *M* = 10*n*.
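As a quick sanity check on these settings, the geometric cooling step *T* ← *r<sub>T</sub>* · *T* (step 18 of the algorithm) takes the temperature from *T* = 10<sup>4</sup> down past *T<sub>f</sub>* = 10<sup>−14</sup> in a bounded number of outer iterations; the snippet below simply counts them.

```python
# Count the outer iterations needed by the geometric cooling schedule
# T <- r_T * T with the parameter values stated in the text.
T, T_f, r_T = 1e4, 1e-14, 0.8

iters = 0
while T > T_f:   # the while-loop of Algorithm 4 also checks theta_delta and T_out
    T *= r_T
    iters += 1
```

This bounds the length of the outer while-loop regardless of the other stopping criteria, which only make the loop terminate earlier.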

#### **5. Numerical Results**

To test the effectiveness and efficiency of the proposed algorithm, it is run on a collection of test problems divided into two sets. The first set is taken from [45]: 24 well-known constrained real-parameter optimization problems. The objective functions in these problems take different shapes, and the number of variables ranges between 2 and 24. These test problems contain four types of constraints: (LI) denotes a linear inequality, (LE) a linear equality, (NI) a nonlinear inequality, and (NE) a nonlinear equality. They are listed in Table 1, where *f*(*x*<sup>∗</sup>) is the best known optimal function value and *a* denotes the number of active constraints at the known optimal solution. The information in Table 1 is taken from [46].



The GHMSA Algorithm solved 18 of the 24 test problems; the remaining problems are either not continuous or not differentiable. The second set of test problems contains four well-known nonlinear engineering design optimization problems. These test problems have no known exact solutions.

#### *5.1. Results of "GHMSA" Algorithm*

The GHMSA algorithm is programmed in MATLAB version 8.5.0.197613 (R2015a) and run on a personal laptop; the machine epsilon is about 1 × 10<sup>−16</sup>.

The results of our algorithm are compared against the results of the CB-ABC Algorithm in [47], the CCiALF Algorithm in [48], the NDE Algorithm in [49] and the CAMDE Algorithm in [50].

Liang et al. [45] suggested that the achieved function error values of the obtained optimal solution *x* after 5 × 10<sup>3</sup>, 5 × 10<sup>4</sup> and 5 × 10<sup>5</sup> function evaluations (FES) be summarized in terms of {Best, Median, Worst, *c*, *v*, Mean, s.d}, where *v* = *p*(*x*)/(*q* + *m*) and *p*(*x*) is the penalty term in Equation (7).

The results are listed in Tables 2–4, where *c* is a concatenation of three numbers indicating the number of constraints violated at the median solution by more than 1.0, between 0.01 and 1.0, and between 0.0001 and 0.01, respectively, and *v* is the mean value of the violations of all constraints at the median solution. The numbers in parentheses after the error values of the Best, Median and Worst solutions are the numbers of constraints not satisfying the feasibility condition at the Best, Median and Worst solutions, respectively. Tables 2–4 show that the GHMSA can determine feasible solutions at each run using 5 × 10<sup>3</sup> FES for 12 test problems {G01, G03, G04, G06, G08, G09, G10, G12, G13, G16, G18, G24}. For problems G11, G14 and G15, the GHMSA Algorithm finds feasible solutions using 5 × 10<sup>4</sup> FES. For the other three test problems, {G05, G07, G19}, the GHMSA Algorithm reaches feasible solutions using 5 × 10<sup>5</sup> FES.

Assume that if the result *x* is a feasible one satisfying (*f*(*x*) − *f*(*x*<sup>∗</sup>)) ≤ 0.0001, then *x* is in a neighborhood (near-optimal) of the optimal point *x*<sup>∗</sup> = *x<sub>g</sub>*. Tables 2–4 indicate that the GHMSA Algorithm can reach near-optimal points for six problems, {G01, G04, G06, G08, G12, G24}, by using only 5 × 10<sup>3</sup> FES, for {G03, G11, G13, G14, G15, G16, G18} by using only 5 × 10<sup>4</sup> FES, and for {G07, G09, G13, G19} by using only 5 × 10<sup>5</sup> FES. However, the GHMSA Algorithm failed to satisfy (*f*(*x*) − *f*(*x*<sup>∗</sup>)) ≤ 0.0001 for two problems, {G05, G10}. As suggested by [45], Table 5 presents the Best, Median, Worst, Mean, and s.d values of successful runs, the feasible rate, the success rate, and the success performance over 40 runs. Let us define the following:

**Feasible run:** A run in which at least one feasible solution is found within Max FES.

**Successful run:** A run in which the algorithm finds a feasible solution *x* satisfying (*f*(*x*) − *f*(*x*<sup>∗</sup>)) ≤ 0.0001.

**Feasible rate** (**f.r**) = (# of feasible runs) / (# of total runs).

**Success rate** (**s.r**) = (# of successful runs) / (# of total runs).

**Success performance** (**s.p**) = mean(FES for successful runs) × (# of total runs) / (# of successful runs).
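These three measures can be computed mechanically from per-run records. The sketch below is ours; the tuple layout and the name `run_metrics` are hypothetical, not from the paper.

```python
# Each run is recorded as (feasible_found, fes_used, best_f);
# f_star is the known optimum and tol = 0.0001 is the success tolerance.
def run_metrics(runs, f_star, tol=1e-4):
    feasible = [r for r in runs if r[0]]
    success = [r for r in feasible if r[2] - f_star <= tol]
    f_rate = len(feasible) / len(runs)
    s_rate = len(success) / len(runs)
    if success:
        mean_fes = sum(r[1] for r in success) / len(success)
        s_perf = mean_fes * len(runs) / len(success)   # success performance
    else:
        s_perf = float('inf')                          # no successful run
    return f_rate, s_rate, s_perf

# Hypothetical example: 3 runs, of which 2 are feasible and 1 is successful.
runs = [(True, 1000, 5e-5), (True, 2000, 0.5), (False, 5000, 0.0)]
f_rate, s_rate, s_perf = run_metrics(runs, f_star=0.0)
```

Note that a run must be feasible before it can count as successful, which is why the third (infeasible) run is excluded even though its error is below the tolerance.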

**Table 2.** Error values achieved if FES <sup>=</sup> <sup>5</sup> <sup>×</sup> <sup>10</sup><sup>3</sup> , FES <sup>=</sup> <sup>5</sup> <sup>×</sup> <sup>10</sup><sup>4</sup> , FES <sup>=</sup> <sup>5</sup> <sup>×</sup> <sup>10</sup><sup>5</sup> for G1, G3, G4, G5, G6 and G7.



**Table 3.** Error values achieved when FES <sup>=</sup> <sup>5</sup> <sup>×</sup> <sup>10</sup><sup>3</sup> , FES <sup>=</sup> <sup>5</sup> <sup>×</sup> <sup>10</sup><sup>4</sup> , FES <sup>=</sup> <sup>5</sup> <sup>×</sup> <sup>10</sup><sup>5</sup> for Problems G8, G9, G10, G11, G12 and G13.

**Table 4.** Error values achieved when FES <sup>=</sup> <sup>5</sup> <sup>×</sup> <sup>10</sup><sup>3</sup> , FES <sup>=</sup> <sup>5</sup> <sup>×</sup> <sup>10</sup><sup>4</sup> , FES <sup>=</sup> <sup>5</sup> <sup>×</sup> <sup>10</sup><sup>5</sup> for Problems G14, G15, G16, G18, G19 and G24.


Table 5 shows that the GHMSA Algorithm obtains a 100% feasible rate and success rate for all 18 problems except G05 and G10.


**Table 5.** Number of FES to achieve the fixed accuracy level ((*f*(*x*) − *f*(*x* ∗ )) ≤ 0.0001), success rate, feasible rate and success performance.

In terms of the success performance in Table 5, to achieve the success condition the GHMSA Algorithm needs:


The GHMSA Algorithm failed to achieve the success condition for two problems, i.e., {G05, G10}. More information about the performance of the GHMSA Algorithm on these problems is given in Figures 2–4. We plot the relationship between *log*<sub>10</sub>(*f*(*x*) − *f*(*x*<sup>∗</sup>)) and FES to show the convergence of the GHMSA at the median run over 40 independent runs. The convergence graphs in Figures 2–4 show that the error values decrease dramatically as FES increases for all test problems.

**Figure 2.** Convergence graph for G01 to G07.

**Figure 3.** Convergence graph for G08 to G13.

**Figure 4.** Convergence graph for G14 to G24.

#### *5.2. Performance of GHMSA Algorithm Using Statistical Hypothesis Testing*

In this section, we use statistical hypothesis testing to evaluate the efficiency of the GHMSA Algorithm versus the efficiency of the CB-ABC, the CCiALF, the NDE and the CAMDE Algorithms.

A statistical hypothesis is a conjecture about a population parameter; this conjecture may be true or false. The null hypothesis, denoted by *H*<sub>0</sub>, is a statistical hypothesis stating that there is no difference between a parameter and a specific value, or that there is no difference between two parameters. The alternative hypothesis, denoted by *H<sub>a</sub>*, is a statistical hypothesis stating that there is a specific difference between a parameter and a specific value, or that there is a difference between two parameters. Hypothesis testing is a form of inferential statistics which allows us to draw conclusions about a whole population based on a representative sample [51]. Parametric tests can produce reliable results even if the continuous data are skewed and non-normally distributed, provided the sample size is greater than 30. The one-sample *t*-test is a parametric test used to compare the mean (average) of a sample with the mean of the population. The important conditions for using the one-sample *t*-test are independence and normality (or a sample size > 30). In our study the sample size is 50, i.e., the number of runs is 50, each started from a random starting point; this criterion is suggested by [45]. The confidence level in this study is 95%, i.e., *α* = 0.05. Our hypotheses are formulated as follows:

*H*0 : the mean (average) of the results of the GHMSA Algorithm and the mean (average) of the results of other algorithms are equal.

*Ha* : the mean (average) of the results of the GHMSA Algorithm and the mean of the results of other algorithms are different.

The above hypotheses can be formulated in Equation (17).

$$\begin{aligned} H\_0 &: Me\_{GHMSA} = Me\_{Algorithm\_l}, \\ H\_a &: Me\_{GHMSA} \neq Me\_{Algorithm\_l}, \end{aligned} \tag{17}$$

where *l* denotes one of the algorithms, CB-ABC, CCiALF, NDE and CAMDE, and *Me* denotes the average results of the algorithms.

In order to compare the performance of the GHMSA Algorithm with the CB-ABC, the CCiALF, the NDE and the CAMDE Algorithms, the *t*-test with a significance level of *α* = 0.05 is performed. To perform the *t*-test, the hypotheses in Equation (17) are considered.

Statistical processing is performed using the SPSS Program. Rejecting or accepting *H*<sub>0</sub> is based on the *p*-value (Sig. (2-tailed)), according to Column 1 of Table 6, while the performance of the algorithm based on the value of the *t*-test is given in Column 3 of Table 6. Thus, Column 4 of Table 6 takes three values according to the probabilities in (18).

$$\text{Decision} = \begin{cases} 1 & \text{if } Me\_{GHMSA} < Me\_{Algorithm\_l}, \\ -1 & \text{if } Me\_{GHMSA} > Me\_{Algorithm\_l}, \\ 0 & \text{if } Me\_{GHMSA} = Me\_{Algorithm\_l}. \end{cases} \tag{18}$$
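The decision rule in (18), combined with the two-sample *t*-test, can be sketched as follows. The paper performs the test in SPSS, so the use of `scipy.stats.ttest_ind` here is our stand-in, and the sample data are hypothetical; for a minimization problem a lower mean is better, so a significantly negative *t* statistic maps to decision 1.

```python
from scipy import stats

def decide(ghmsa_results, other_results, alpha=0.05):
    """Decision rule (18): 0 when H0 is not rejected (p >= alpha),
    otherwise 1 if GHMSA has the lower mean, -1 if it has the higher mean."""
    t, p = stats.ttest_ind(ghmsa_results, other_results)
    if p >= alpha:
        return 0                  # fail to reject H0: means statistically equal
    return 1 if t < 0 else -1     # t < 0  <=>  Me_GHMSA < Me_other

a = [1.00, 1.10, 0.90, 1.05, 0.95]   # hypothetical GHMSA results
b = [5.00, 5.10, 4.90, 5.05, 4.95]   # hypothetical competitor results
```

For instance, `decide(a, b)` rejects *H*<sub>0</sub> and, since the GHMSA mean is lower, records a win for the GHMSA.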

The results of the GHMSA are compared to the results of the CB-ABC, the CCiALF, the NDE and the CAMDE Algorithms. The statistical hypotheses in Equation (17) are tested by using the *t*-test. Tables 7–10 present these results.

The results of the GHMSA are compared with those of the four meta-heuristic algorithms from the literature. The results of the statistical tests are presented in Tables 7, 9 and 10. In Table 7, Column 1 presents the abbreviation of the test problems, denoted by pr. Column 2 presents the results of the statistical tests, which include {b.s, mean, s.d, Decision}, where Decision denotes the wins, losses and draws of the GHMSA compared with the other algorithms. Columns 3–7 give the results of the five algorithms. Tables 9 and 10 are organized similarly to Table 7.

After executing the pairwise *t*-test for all algorithms, if the GHMSA Algorithm is superior, inferior or equal to the compared algorithm, denoted by *algorithm<sub>l</sub>*, then the decision is set to 1, −1 or 0, respectively, as shown in Table 6. The left of Figure 5 summarizes the decisions presented in Tables 7–10. It shows that the GHMSA Algorithm was superior on {7, 6, 9, 5} problems, equal on {6, 6, 3, 5} problems and inferior on {4, 5, 5, 7} problems compared to the CB-ABC, the CCiALF, the NDE and the CAMDE Algorithms, respectively. Although the GHMSA is inferior on seven problems compared to the CAMDE, the GHMSA needs 1,590,905 total FES versus 4,320,000 for the CAMDE, as shown in Figure 6. To reach the success condition from the point of view of successful execution, the GHMSA needs fewer than 5 × 10<sup>3</sup> FES for five problems, i.e., {G01, G04, G08, G12, G24}, whereas the CAMDE needs at least 5 × 10<sup>3</sup> FES for two problems, i.e., {G08, G12}. We can say that the percentages of superior, equal and inferior decisions for the GHMSA are 40%, 30% and 30%, respectively.

**Table 6.** How the null hypothesis is rejected (or accepted) and the decision is made.



**Table 7.** Comparison of results for test problems G01 to G08.

The mark ‡ means that we do not use G03 to compare the results of the GHMSA with the results of the four algorithms, because *h*(*x*<sup>∗</sup>) = 0.0001, i.e., *v* = 0.0001 in [45], whereas *v* for the GHMSA is 4.05 × 10<sup>−7</sup>; see Tables 1, 2 and 8.

**Table 8.** Statistical results of "GHMSA" Algorithm for first set of test problems and four mechanical engineering problems.




**Table 9.** Comparison of results for test problems G09 to G15.


For the four engineering problems, we give a brief description. The pressure vessel problem is a practical problem that is often used as a benchmark for testing optimization algorithms [52]. The left of Figure 7 shows the structure of this problem, where a cylindrical pressure vessel is capped at both ends by hemispherical heads. The aim is to find the minimum total cost of fabrication, including the costs of welding, material and forming. The thickness of the cylindrical skin, *x*<sub>1</sub> (*T<sub>s</sub>*), the thickness of the spherical head, *x*<sub>2</sub> (*T<sub>h</sub>*), the inner radius, *x*<sub>3</sub> (*R*), and the length of the cylindrical segment of the vessel, *x*<sub>4</sub> (*L*), are the optimization design variables of the problem. The GHMSA Algorithm obtains the results *x<sub>GHMSA</sub>* = {0.778168641375105, 0.384649162627902, 40.3196187240987, 200}, i.e.,

*f*(*x<sub>GHMSA</sub>*) = 5885.332774, *c* = {0, −3.8858 × 10<sup>−16</sup>, 1.1642 × 10<sup>−97</sup>, −40}, i.e., *v* = 0; the left of Figure 8 shows the convergence graph of the GHMSA to the best solution for this problem.


**Table 10.** Comparison of results for test problems G16, G18, G19, G24 and Enp1-Enp4.

Another well-known engineering optimization task is the design of a tension/compression spring for minimum weight. This problem has been studied by several authors, for example [52]. The right of Figure 7 shows a tension/compression spring with three design variables. The task is to minimize the weight of the spring subject to constraints on minimum deflection, shear stress, surge frequency, and limits on the outside diameter and on the design variables. The design variables are the wire diameter, *d* (*x*<sub>1</sub>), the mean coil diameter, *D* (*x*<sub>2</sub>), and the number of active coils, *P* (*x*<sub>3</sub>). The GHMSA obtains the results *x<sub>GHMSA</sub>* = {0.0516890825110813, 0.356718255308635, 11.2889355307237}, i.e., *f*(*x<sub>GHMSA</sub>*) = 0.01266523279, *c* = {−1.55 × 10<sup>−10</sup>, 4.44 × 10<sup>−16</sup>, −4.05379, −0.72773}, i.e., *v* = 1.11 × 10<sup>−16</sup>. The convergence graph for Engp2 is presented on the right of Figure 8.

The welded beam design optimization problem has been solved by many researchers [52]. The left of Figure 9 shows the welded beam structure, which consists of a beam *A* and the weld required to hold it to member *B*. The goal of this problem is to minimize the overall cost of fabrication subject to some constraints. This problem has four design variables, *x*<sub>1</sub>, *x*<sub>2</sub>, *x*<sub>3</sub> and *x*<sub>4</sub>, with constraints on the shear stress *τ*, the bending stress in the beam *σ*, the buckling load on the bar *P<sub>c</sub>*, and the end deflection of the beam *δ*. The GHMSA obtains the results *x<sub>GHMSA</sub>* = {0.205729642092758, 3.4704886133955, 9.03662391715327, 0.205729639752274}, i.e., *f*(*x<sub>GHMSA</sub>*) = 1.7248523060, *c* = {−9.03 × 10<sup>−8</sup>, −4.02 × 10<sup>−5</sup>, 2.34 × 10<sup>−9</sup>, −3.43298, −0.08073, −0.23554, −8.73 × 10<sup>−9</sup>}, i.e., *v* = 3.3429 × 10<sup>−10</sup>. The convergence graph for Engp3 is presented on the left of Figure 10.

**Figure 5.** The number of "wins-draws-losses" of GHMSA compared with other algorithms for G01 to G24 and Enp1 to Enp4.

**Figure 6.** Comparison Between GHMSA With CAMDE Regarding FES.

**Figure 7.** Design engineering problems (Engp1 and Engp2).

**Figure 8.** Convergence graph for engineering problems (Engp1 and Engp2).

**Figure 9.** Design engineering problems (Engp3 and Engp4 ).

**Figure 10.** Convergence graph for engineering problems (Engp3 and Engp4 ).

The speed reducer design problem is one of the benchmark structural engineering problems [52]. It has seven design variables, as described in the right of Figure 9: the face width *x*<sub>1</sub>, the module of teeth *x*<sub>2</sub>, the number of teeth on the pinion *x*<sub>3</sub>, the length of the first shaft between bearings *x*<sub>4</sub>, the length of the second shaft between bearings *x*<sub>5</sub>, the diameter of the first shaft *x*<sub>6</sub>, and the diameter of the second shaft *x*<sub>7</sub>. The aim of this problem is to minimize the total weight of the decelerator. The GHMSA obtains the results *x<sub>GHMSA</sub>* = {3.499999999, 0.7, 17, 7.3, 7.715319913, 3.350214666, 5.286654465}, i.e., *f*(*x<sub>GHMSA</sub>*) = 2994.471066, *c* = {−0.073915, −0.198, −0.49917, −0.90464, 8.6365 × 10<sup>−11</sup>, −1.0931 × 10<sup>−11</sup>, −0.7025, 2.86 × 10<sup>−10</sup>, −0.58333, −0.051326, −1.9442 × 10<sup>−10</sup>}, i.e., *v* = 2.6 × 10<sup>−11</sup>. The convergence graph for Engp4 is presented on the right of Figure 10. The four engineering problems are used to compare the performance of the GHMSA against the CB-ABC, the CCiALF, the NDE and the CAMDE Algorithms. The statistical hypotheses in Equation (17) are used to compare the mean of the GHMSA with the means of the CB-ABC, the CCiALF, the NDE and the CAMDE Algorithms. Rows 22–41 of Table 10 present the statistical comparisons of the GHMSA versus the four algorithms for engineering problems Enp1 to Enp4. The right of Figure 5 gives the number of "wins-draws-losses" of the GHMSA compared with the CB-ABC, the CCiALF, the NDE and the CAMDE for Enp1 to Enp4. Figure 11 shows the convergence graph of the standard deviation for problems Enp1, Enp2, Enp3 and Enp4 for the five algorithms.

The relation between the four engineering problems {Enp1, Enp2, Enp3 and Enp4} and their values of *log*<sub>10</sub>(*s*.*d*) is plotted. From the right of Figure 5 and from Figure 11, it can be said that the performance of the GHMSA Algorithm is better than that of the other algorithms for problems Enp1 to Enp4, for the following reasons:

(1) The GHMSA obtains a minimum objective function value (5885.332774) for engineering problem Enp1 (pressure vessel), and the minimum point *x*<sup>∗</sup> is feasible; many algorithms obtained an objective function value equal to or greater than 6059.71; see for example [48–50,52–58].

In addition, if 10 ≤ *x*<sub>4</sub> (*L*) < ∞, then *f*(*x*<sup>∗</sup>) = 5804.37621675626; otherwise, if 10 ≤ *x*<sub>4</sub> (*L*) < 208, then *f*(*x*<sup>∗</sup>) = 5866.99226593889, where *L* is shown in the left of Figure 7.

(2) The right of Figure 5 shows that the GHMSA Algorithm does not lose on any problem versus the other algorithms.

(3) The GHMSA is superior at {2, 1, 4, 2} problems versus the CB-ABC Algorithm, the CCiALF Algorithm, the NDE and the CAMDE Algorithm, respectively.

(4) The GHMSA is equal at {2, 3, 0, 2} problems versus the CB-ABC Algorithm, the CCiALF Algorithm, the NDE and the CAMDE Algorithm, respectively.

(5) Figure 11 shows that the standard deviation (s.d) of the GHMSA Algorithm converges to zero (see the green curve).

**Figure 11.** Convergence graph of standard deviation for Enp1 to Enp4.

#### **6. Conclusions and Future Work**

Unconstrained nonlinear optimization algorithms have been guided to find the global minimizer of the constrained optimization problem. As a result, the "GHMSA" Algorithm has been proposed for finding the global minimizer of the nonlinear constrained optimization problem. The "GHMSA" Algorithm contains a new technique that converts the constrained optimization problem into an unconstrained one. The results of the algorithm demonstrate that the proposed penalty function is a good technique for making the unconstrained algorithm able to deal with the constrained optimization problem. The interior-point technique keeps the candidate solutions inside the search domain. The results on the nonlinear constrained optimization problems and the four nonlinear engineering optimization problems show that the GHMSA algorithm is superior to the other four algorithms on some test problems. In future work, the proposed algorithm can be enhanced and modified to solve multi-objective problems, and the convergence analysis of the modified simulated-annealing algorithm will be performed.

Moreover, future work will consider proposing a new derivative-free approximation of the gradient vector that will be combined (hybridized) with a new simulated-annealing algorithm to solve unconstrained, constrained, or multi-objective optimization problems. Convergence analysis of the GLMSA and GHMSA algorithms will also be considered.

**Author Contributions:** M.E.-A.; Formal analysis, M.A.; Funding acquisition, K.A.A.; Investigation, M.A.; Methodology, A.W.M.; Project administration, K.A.A.; Resources, K.A.A.; Supervision, A.W.M.; Validation, M.E.-A.; Writing—original draft, S.M.; Writing—review & editing, S.M. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research was funded by the Researchers Supporting Program at King Saud University (Project# RSP-2021/305).

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Not applicable.

**Acknowledgments:** The authors present their appreciation to King Saud University for funding the publication of this research through Researchers Supporting Program (RSP-2021/305), King Saud University, Riyadh, Saudi Arabia.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **Abbreviations**

The following abbreviations are used in this manuscript:



#### **References**


#### *Article* **Some New Versions of Integral Inequalities for Left and Right Preinvex Functions in the Interval-Valued Settings** *Article*  **Some New Versions of Integral Inequalities for Left and Right**  *Article*  **Some New Versions of Integral Inequalities for Left and Right**  *Article*  **Some New Versions of Integral Inequalities for Left and Right**  *Article*  **Some New Versions of Integral Inequalities for Left and Right**

**Muhammad Bilal Khan <sup>1</sup> , Savin Treant,aˇ 2 , Mohamed S. Soliman <sup>3</sup> , Kamsing Nonlaopon 4,\* and Hatim Ghazi Zaini <sup>5</sup> Preinvex Functions in the Interval-Valued Settings Muhammad Bilal Khan 1, Savin Treanțǎ 2, Mohamed S. Soliman 3, Kamsing Nonlaopon 4,\* and Hatim Ghazi Zaini 5 Preinvex Functions in the Interval-Valued Settings Muhammad Bilal Khan 1, Savin Treanțǎ 2, Mohamed S. Soliman 3, Kamsing Nonlaopon 4,\* and Hatim Ghazi Zaini 5 Preinvex Functions in the Interval-Valued Settings Muhammad Bilal Khan 1, Savin Treanțǎ 2, Mohamed S. Soliman 3, Kamsing Nonlaopon 4,\* and Hatim Ghazi Zaini 5 Preinvex Functions in the Interval-Valued Settings Muhammad Bilal Khan 1, Savin Treanțǎ 2, Mohamed S. Soliman 3, Kamsing Nonlaopon 4,\* and Hatim Ghazi Zaini 5** 


**Abstract:** The principles of convexity and symmetry are inextricably linked. Because of the considerable association that has emerged between the two in recent years, we may apply what we learn from one to the other. In this paper, our aim is to establish the relation between integral inequalities and interval-valued functions (*IV-Fs*) based upon the pseudo-order relation. Firstly, we discuss the properties of left and right preinvex interval-valued functions (left and right preinvex *IV-Fs*). Then, we obtain Hermite–Hadamard ( **Abstract:** The principles of convexity and symmetry are inextricably linked. Because of the considerable association that has emerged between the two in recent years, we may apply what we learn from one to the other. In this paper, our aim is to establish the relation between integral inequalities and interval-valued functions (*IV-Fs*) based upon the pseudo-order relation. Firstly, we discuss the properties of left and right preinvex interval-valued functions (left and right preinvex *IV-Fs*). Then, we obtain Hermite–Hadamard (-) and Hermite–Hadamard–Fejér (--Fejér) type inequality and some related integral inequalities with the support of left and right preinvex *IV-Fs* via pseudoorder relation and interval Riemann integral. Moreover, some exceptional special cases are also discussed. Some useful examples are also given to prove the validity of our main results. - **Abstract:** The principles of convexity and symmetry are inextricably linked. Because of the considerable association that has emerged between the two in recent years, we may apply what we learn from one to the other. In this paper, our aim is to establish the relation between integral inequalities and interval-valued functions (*IV-Fs*) based upon the pseudo-order relation. Firstly, we discuss the properties of left and right preinvex interval-valued functions (left and right preinvex *IV-Fs*). 
*Article*

**Some New Versions of Integral Inequalities for Left and Right Preinvex Functions in the Interval-Valued Settings**

**Muhammad Bilal Khan 1, Savin Treanțǎ 2, Mohamed S. Soliman 3, Kamsing Nonlaopon 4,\* and Hatim Ghazi Zaini 5**

1 Department of Mathematics, COMSATS University Islamabad, Islamabad 44000, Pakistan; bilal42742@gmail.com
2 Department of Applied Mathematics, University Politehnica of Bucharest, 060042 Bucharest, Romania; savin.treanta@upb.ro
3 Department of Electrical Engineering, College of Engineering, Taif University, P.O. Box 11099, Taif 21944, Saudi Arabia; soliman@tu.edu.sa
4 Department of Mathematics, Faculty of Science, Khon Kaen University, Khon Kaen 40002, Thailand
5 Department of Computer Science, College of Computers and Information Technology, Taif University, P.O. Box 11099, Taif 21944, Saudi Arabia; h.zaini@tu.edu.sa
**\*** Correspondence: nkamsi@kku.ac.th; Tel.: +668-6642-1582

**Abstract:** The principles of convexity and symmetry are inextricably linked. Because of the considerable association that has emerged between the two in recent years, we may apply what we learn from one to the other. In this paper, our aim is to establish the relation between integral inequalities and interval-valued functions (*IV-Fs*) based upon the pseudo-order relation. Firstly, we discuss the properties of left and right preinvex interval-valued functions (left and right preinvex *IV-Fs*). Then, we obtain Hermite–Hadamard (H–H) and Hermite–Hadamard–Fejér (H–H–Fejér) type inequalities and some related integral inequalities with the support of left and right preinvex *IV-Fs* via the pseudo-order relation and the interval Riemann integral. Moreover, some exceptional special cases are also discussed. Some useful examples are also given to prove the validity of our main results.

> **Keywords:** left and right preinvex interval-valued function; interval Riemann integral; Hermite–Hadamard type inequality; Hermite–Hadamard–Fejér type inequality

**Citation:** Khan, M.B.; Treanțǎ, S.; Soliman, M.S.; Nonlaopon, K.; Zaini, H.G. Some New Versions of Integral Inequalities for Left and Right Preinvex Functions in the Interval-Valued Settings. *Mathematics* **2022**, *10*, 611. https://doi.org/10.3390/math10040611

Academic Editors: Simeon Reich and Janusz Brzdęk

Received: 13 December 2021; Accepted: 15 February 2022; Published: 16 February 2022

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

#### **1. Introduction**

Hanson [1] defined the class of invex functions as one of the most significant extensions of convex functions. Weir and Mond [2], in 1988, used the notion of preinvex functions to demonstrate adequate optimality criteria and duality in nonlinear programming. For a differentiable mapping, the concept of fractional integral identities involving Riemann–Liouville and Hadamard fractional integrals was considered by Wang et al. [3], who identified some inequalities using standard convex, *r*-convex, *m*-convex, *s*-convex, (*s*, *m*)-convex, and (*β*, *m*)-convex functions. Moreover, Işcan [4] also used fractional integrals for preinvex functions to obtain various Hermite–Hadamard (H–H) type inequalities. See [5–8] for other generalizations of the H–H inequality.

For accurate solutions to various problems in practical mathematics, Moore [9] used interval arithmetic, *IV-Fs*, and integrals of *IV-Fs* to establish arbitrarily sharp upper and lower limits. Moore [9] showed that, if a real-valued mapping *Y*(κ) meets an ordinary Lipschitz condition in *Y*, |*Y*(κ) − *Y*(*ω*)| ≤ *L*|κ − *ω*| for *ω*, κ ∈ *Y*, then the united extension is a Lipschitz interval extension in *Y*. To combine the study of discrete and continuous dynamical systems, Hilger [10] introduced the theory of time scales. The widespread use of dynamic equations and integral inequalities on time scales, in domains as diverse as electrical engineering, quantum physics, heat transfer, neural networks, combinatorics, and population dynamics [11], has highlighted the need for this theory.
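For context, the two scalar notions at the center of this line of work, preinvexity and the Hermite–Hadamard inequality, can be stated explicitly (standard textbook forms, not quoted from this paper). A function $f$ defined on an invex set $K$ with respect to a kernel $\eta$ is *preinvex* if

$$
f\big(u + t\,\eta(v, u)\big) \le (1 - t)\, f(u) + t\, f(v), \qquad u, v \in K, \; t \in [0, 1],
$$

and the classical Hermite–Hadamard inequality states that every convex $f : [a, b] \to \mathbb{R}$ satisfies

$$
f\!\left(\frac{a + b}{2}\right) \;\le\; \frac{1}{b - a} \int_a^b f(x)\, dx \;\le\; \frac{f(a) + f(b)}{2}.
$$

Taking $\eta(v, u) = v - u$ recovers ordinary convexity as a special case of preinvexity.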



**Citation:** Khan, M.B.; Treanțǎ, S.; Soliman, M.S.; Nonlaopon, K.; Zaini, H.G. Some New Versions of Integral Inequalities for Left and Right Preinvex Functions in the Interval-Valued Settings. *Mathematics* **2022**, *9*, x. https://doi.org/10.3390/xxxxx

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

**Abstract:** The principles of convexity and symmetry are inextricably linked. Because of the considerable association that has emerged between the two in recent years, we may apply what we learn from one to the other. In this paper, our aim is to establish the relation between integral inequalities and interval-valued functions (*IV-Fs*) based upon the pseudo-order relation. Firstly, we discuss the properties of left and right preinvex interval-valued functions (left and right preinvex *IV-Fs*). Then, we obtain Hermite–Hadamard (H-H) and Hermite–Hadamard–Fejér (H-H-Fejér) type inequalities and some related integral inequalities with the support of left and right preinvex *IV-Fs* via the pseudo-order relation and the interval Riemann integral. Moreover, some exceptional special cases are also discussed. Some useful examples are also given to prove the validity of our main results.

**Keywords:** left and right preinvex interval-valued function; interval Riemann integral; Hermite–Hadamard type inequality; Hermite–Hadamard–Fejér type inequality

#### **1. Introduction**

Hanson [1] defined the class of invex functions as one of the most significant extensions of convex functions. Weir and Mond [2], in 1988, used the notion of preinvex functions to demonstrate adequate optimality criteria and duality in nonlinear programming. For a differentiable mapping, the concept of fractional integral identities involving Riemann–Liouville and Hadamard fractional integrals was considered by Wang et al. [3], who identified some inequalities for standard convex functions and several generalized classes of convexity, including (s, m)-convex functions. Moreover, Işcan [4] also used fractional integrals for preinvex functions to obtain various H-H type inequalities. See [5–8] for other generalizations of the H-H inequality.

For accurate solutions to various problems in practical mathematics, Moore [9] used interval arithmetic, *IV-Fs*, and integrals of *IV-Fs* to establish arbitrarily sharp upper and lower limits. Moore [9] showed that, if a real-valued mapping $f$ satisfies an ordinary Lipschitz condition, $|f(\nu) - f(z)| \le L|\nu - z|$ for all $\nu, z$ in its domain, then the united extension is a Lipschitz interval extension on that domain. To combine the study of discrete and continuous dynamical systems, Hilger [10] introduced a time scales theory. The widespread use of dynamic equations and integral inequalities on time scales, in domains as diverse as electrical engineering, quantum physics, heat transfer, neural networks, combinatorics, and population dynamics [11], has highlighted the need for this theory. Young's inequality, Minkowski's inequality, Jensen's inequality, Hölder's inequality, the H-H inequality, Steffensen's inequality, the Opial-type inequality, and Chebyshev's inequality were all explored by Agarwal et al. [11]. Srivastava et al. [12] discovered some generic time scale weighted Opial-type inequalities in 2010. Srivastava et al. [13] also proposed several time-scale expansions and generalizations of Maroni's inequality. Under certain proper conditions, a new local fractional integral analogue of Anderson's inequality on fractal space was introduced by Wei et al. [14], who demonstrated that it is a novel extension of the classical Anderson inequality to fractal space. Tunç et al. [15] also constructed an identity for local fractional integrals and derived numerous modifications of the well-known Steffensen's inequality for fractional integrals. The papers [11,16] and the references therein may be consulted for further information. Bhurjee and Panda [17] identified the parametric form of an *IV-F* and devised a technique to investigate the existence of a solution to a generic interval optimization problem.

Using the notion of the generalized Hukuhara difference, Lupulescu [18] developed differentiability and integrability for *IV-Fs* on time scales. Cano et al. [19] developed a novel form of the Ostrowski inequality for gH-differentiable *IV-Fs* in 2015 and obtained an extension to a class of real functions that are not always differentiable. For gH-differentiable *IV-Fs*, Cano et al. [19] also found error limits for quadrature rules. In addition, Roy and Panda [20] developed the idea of a monotonicity property of *IV-Fs* in higher dimensions and used extended Hukuhara differentiability to obtain various conclusions. We refer to [21–25], and the references therein, for further information on *IV-Fs*. An et al. [26] and Zhao et al. [27] recently proposed an (h1, h2)-convex *IV-F* and a harmonically h-convex *IV-F*, respectively; moreover, they found certain interval H-H type inequalities. Budak et al. [28] also created the H-H inequality for a convex *IV-F* and its product. For more information related to generalized convex functions and fractional inequalities in interval-valued settings, see [29–53] and the references therein.
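For context, the classical Hermite–Hadamard inequality that these interval-valued results generalize states that, for any convex function $f : [a, b] \to \mathbb{R}$,

$$f\!\left(\frac{a+b}{2}\right) \le \frac{1}{b-a}\int_a^b f(\nu)\, d\nu \le \frac{f(a)+f(b)}{2}.$$

The inequalities studied in this paper replace $f$ with an *IV-F*, the usual order with the pseudo-order relation, and the integral with the interval Riemann integral.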


Inspired by the ongoing research, we introduce the concept of left and right preinvex *IV-Fs* and establish the H-H-Fejér inequality for left and right preinvex *IV-Fs* and the product of two left and right preinvex *IV-Fs* using Riemann integrals in interval-valued settings, which are motivated by the above studies and ideas. We also provide some examples to support our ideas.

#### **2. Preliminaries**


First, we offer some background information on interval-valued functions, the theory of convexity, interval-valued integration, and interval-valued fractional integration, which will be utilized throughout the article.


We offer some fundamental arithmetic regarding interval analysis in this paragraph, which will be quite useful throughout the article.

$$\begin{aligned}
\mathcal{Z} &= [\mathcal{Z}_*, \mathcal{Z}^*] = \{\nu \in \mathbb{R} : \mathcal{Z}_* \le \nu \le \mathcal{Z}^*\}, \qquad \mathcal{Q} = [\mathcal{Q}_*, \mathcal{Q}^*] = \{z \in \mathbb{R} : \mathcal{Q}_* \le z \le \mathcal{Q}^*\},\\
\mathcal{Z} + \mathcal{Q} &= [\mathcal{Z}_*, \mathcal{Z}^*] + [\mathcal{Q}_*, \mathcal{Q}^*] = [\mathcal{Z}_* + \mathcal{Q}_*,\ \mathcal{Z}^* + \mathcal{Q}^*],\\
\mathcal{Z} - \mathcal{Q} &= [\mathcal{Z}_*, \mathcal{Z}^*] - [\mathcal{Q}_*, \mathcal{Q}^*] = [\mathcal{Z}_* - \mathcal{Q}_*,\ \mathcal{Z}^* - \mathcal{Q}^*],\\
\mathcal{Z} \times \mathcal{Q} &= [\min X,\ \max X], \quad \text{where } X = \{\mathcal{Z}_*\mathcal{Q}_*,\ \mathcal{Z}^*\mathcal{Q}_*,\ \mathcal{Z}_*\mathcal{Q}^*,\ \mathcal{Z}^*\mathcal{Q}^*\},\\
\nu\,[\mathcal{Z}_*, \mathcal{Z}^*] &= \begin{cases} [\nu\mathcal{Z}_*,\ \nu\mathcal{Z}^*] & \text{if } \nu > 0,\\ \{0\} & \text{if } \nu = 0,\\ [\nu\mathcal{Z}^*,\ \nu\mathcal{Z}_*] & \text{if } \nu < 0. \end{cases}
\end{aligned}$$
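The sum, product, and scalar multiple defined above can be sketched in a few lines of Python. This is an illustrative helper only, not code from the paper; the class name `Interval` and its methods are our own, and interval subtraction is omitted.

```python
# Minimal sketch of the interval arithmetic defined above (illustrative only).
from dataclasses import dataclass


@dataclass(frozen=True)
class Interval:
    lo: float  # lower endpoint, Z_*
    hi: float  # upper endpoint, Z^*

    def __post_init__(self):
        assert self.lo <= self.hi, "a closed interval needs Z_* <= Z^*"

    def __add__(self, other: "Interval") -> "Interval":
        # Z + Q = [Z_* + Q_*, Z^* + Q^*]
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other: "Interval") -> "Interval":
        # Z x Q = [min X, max X], X = {Z_*Q_*, Z^*Q_*, Z_*Q^*, Z^*Q^*}
        x = [self.lo * other.lo, self.hi * other.lo,
             self.lo * other.hi, self.hi * other.hi]
        return Interval(min(x), max(x))

    def scale(self, nu: float) -> "Interval":
        # nu[Z_*, Z^*]: the endpoints swap roles when nu < 0
        if nu == 0:
            return Interval(0.0, 0.0)
        return Interval(min(nu * self.lo, nu * self.hi),
                        max(nu * self.lo, nu * self.hi))


print(Interval(1, 2) + Interval(3, 5))   # Interval(lo=4, hi=7)
print(Interval(-1, 2) * Interval(3, 4))  # Interval(lo=-4, hi=8)
print(Interval(1, 2).scale(-3))          # Interval(lo=-6, hi=-3)
```

Note that the product takes the min and max over all four endpoint products because either factor may straddle zero, as in the second example.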

Let $\mathbb{K}_C$, $\mathbb{K}_C^+$, and $\mathbb{K}_C^-$ denote the set of all closed intervals of $\mathbb{R}$, the set of all closed positive intervals of $\mathbb{R}$, and the set of all closed negative intervals of $\mathbb{R}$, respectively. Then, $\mathbb{K}_C$, $\mathbb{K}_C^+$, and $\mathbb{K}_C^-$ are defined as

tional integrals for preinvex functions to obtain various - type inequalities. See [5–8]

tional integrals for preinvex functions to obtain various - type inequalities. See [5–8]

ditions of the Creative Commons At-

tional integrals for preinvex functions to obtain various - type inequalities. See [5–8]

electrical engineering, quantum physics, heat transfer, neural networks, combinatorics, and population dynamics [11], has highlighted the need for this theory. Young's inequality, Minkoswki's inequality, Jensen's inequality, Hölder's inequality, - inequality,

electrical engineering, quantum physics, heat transfer, neural networks, combinatorics, and population dynamics [11], has highlighted the need for this theory. Young's inequality, Minkoswki's inequality, Jensen's inequality, Hölder's inequality, - inequality,

ditions of the Creative Commons At-

tional integrals for preinvex functions to obtain various - type inequalities. See [5–8]

$$\begin{array}{c} \mathbb{K}\_{C} = \{ [\mathcal{Z}\_{\*}, \mathcal{Z}^{\*}] : \mathcal{Z}\_{\*}, \mathcal{Z}^{\*} \in \mathbb{R} \text{ and } \mathcal{Z}\_{\*} \le \mathcal{Z}^{\*} \}, \\ \mathbb{K}\_{C}^{+} = \{ [\mathcal{Z}\_{\*}, \mathcal{Z}^{\*}] : [\mathcal{Z}\_{\*}, \mathcal{Z}^{\*}] \in \mathbb{K}\_{C} \text{ and } \mathcal{Z}\_{\*} > 0 \}, \\ \mathbb{K}\_{C}^{-} = \{ [\mathcal{Z}\_{\*}, \mathcal{Z}^{\*}] : [\mathcal{Z}\_{\*}, \mathcal{Z}^{\*}] \in \mathbb{K}\_{C} \text{ and } \mathcal{Z}^{\*} < 0 \}. \end{array}$$
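The interval operations displayed above can be sketched numerically. The following is a minimal illustration of ours, not code from the paper; the helper names `iadd`, `imul`, and `iscale` are assumptions, and an interval [z∗, z∗] is represented as a tuple `(lo, hi)`.

```python
# Interval arithmetic on closed intervals Z = (z_lo, z_hi), following the
# addition, multiplication, and scalar-multiplication rules displayed above.

def iadd(Z, Q):
    # [Z_lo, Z_hi] + [Q_lo, Q_hi] = [Z_lo + Q_lo, Z_hi + Q_hi]
    return (Z[0] + Q[0], Z[1] + Q[1])

def imul(Z, Q):
    # Z x Q = [min of endpoint products, max of endpoint products]
    p = (Z[0] * Q[0], Z[1] * Q[0], Z[0] * Q[1], Z[1] * Q[1])
    return (min(p), max(p))

def iscale(nu, Z):
    # nu[Z_lo, Z_hi]: endpoints swap when nu < 0, collapse to {0} when nu = 0
    if nu > 0:
        return (nu * Z[0], nu * Z[1])
    if nu == 0:
        return (0, 0)
    return (nu * Z[1], nu * Z[0])
```

For instance, `imul((-1, 2), (3, 4))` yields `(-4, 8)`, since the four endpoint products are −3, 6, −4, and 8.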

For [Z∗, Z<sup>∗</sup> ], [*Q*∗, *Q*<sup>∗</sup> ] ∈ K*C*, the inclusion " ⊆ " is defined by [Z∗, Z<sup>∗</sup> ] ⊆ [*Q*∗, *Q*<sup>∗</sup> ], if and only if, *Q*∗ ≤ Z∗ and Z<sup>∗</sup> ≤ *Q*<sup>∗</sup> .


**Remark 1.** *[36] The relation* " ≤*<sup>p</sup>* " *defined on* K*<sup>C</sup> by*

$$\left[\mathcal{Q}\_{\*}, \mathcal{Q}^{\*}\right] \le\_p \left[\mathcal{Z}\_{\*}, \mathcal{Z}^{\*}\right] \text{ if and only if } \mathcal{Q}\_{\*} \le \mathcal{Z}\_{\*},\ \mathcal{Q}^{\*} \le \mathcal{Z}^{\*},\tag{1}$$

*for all* [Q∗, Q<sup>∗</sup> ], [Z∗, Z<sup>∗</sup> ] ∈ K*C*, *is a pseudo-order relation.*

**Theorem 1.** *[9] If Y* : [*µ*, *υ*] ⊂ R → K*<sup>C</sup> is an IV-F, such that Y*(*ω*) = [*Y*∗(*ω*), *Y* ∗ (*ω*)]*, then, Y is Riemann integrable over* [*µ*, *υ*] *if and only if, Y*∗(*ω*) *and Y* ∗ (*ω*) *are both Riemann integrable over* [*µ*, *υ*]*, such that*

$$(IR)\int\_{\mu}^{v} Y(\omega)d\omega = \left[ (R)\int\_{\mu}^{v} Y\_\*(\omega)d\omega,\ (R)\int\_{\mu}^{v} Y^\*(\omega)d\omega \right] \tag{2}$$

*where Y*∗,*Y* ∗ : [*µ*, *υ*] → R.

The collections of all Riemann integrable real-valued functions and Riemann integrable *IV-Fs* are denoted by R[*µ*,*υ*] and IR[*µ*,*υ*] , respectively.
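Theorem 1 reduces the interval Riemann integral to a pair of real integrals of the endpoint functions, which is easy to check numerically. The sketch below is ours, not the paper's; the names `riemann` and `interval_integral` are assumptions, and a simple midpoint rule stands in for the exact Riemann integral.

```python
# Interval Riemann integral of an IV-F Y(w) = [Y_lo(w), Y_hi(w)] as the
# interval of the endpoint integrals, per Theorem 1.

def riemann(f, a, b, n=100000):
    """Midpoint-rule approximation of the real Riemann integral of f on [a, b]."""
    h = (b - a) / n
    return sum(f(a + (k + 0.5) * h) for k in range(n)) * h

def interval_integral(Y_lo, Y_hi, a, b):
    """(IR)-integral of the IV-F with endpoint functions Y_lo <= Y_hi."""
    return (riemann(Y_lo, a, b), riemann(Y_hi, a, b))

# Example: Y(w) = [2, 4]w^2 on [0, 1] integrates to [2/3, 4/3].
lo, hi = interval_integral(lambda w: 2 * w**2, lambda w: 4 * w**2, 0.0, 1.0)
```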

**Definition 1.** *A set K* <sup>⊂</sup> <sup>R</sup>*<sup>n</sup> is said to be a convex set, if, for all ω*,κ ∈ *K*, t ∈ [0, 1]*, we have*

$$\mathrm{t}\omega + (1-\mathrm{t})\varkappa \in K.$$

**Definition 2.** *[36] Let <sup>K</sup> be a convex set. Then, IV-F <sup>Y</sup>* : *<sup>K</sup>* → K<sup>+</sup> *C is said to be left and right convex on K if*

$$\mathbf{Y}(\mathbf{t}\omega + (\mathbf{1} - \mathbf{t})\varkappa) \le\_p \mathbf{t}\mathbf{Y}(\omega) + (\mathbf{1} - \mathbf{t})\mathbf{Y}(\varkappa),\tag{3}$$

*for all ω*,κ ∈ *K*, t ∈ [0, 1]. *Y is called left and right concave on K if Equation (3) is reversed.*

**Definition 3.** *[7] A set <sup>A</sup>* <sup>⊂</sup> <sup>R</sup>*<sup>n</sup> is said to be an invex set, if, for all ω*,κ ∈ *A*, t ∈ [0, 1]*, we have*

$$\omega + (1 - \mathrm{t})\zeta(\varkappa, \omega) \in A \text{ or } \omega + \mathrm{t}\zeta(\varkappa, \omega) \in A,$$

*where <sup>ζ</sup>* : <sup>R</sup>*<sup>n</sup>* <sup>×</sup> <sup>R</sup>*<sup>n</sup>* <sup>→</sup> <sup>R</sup>*<sup>n</sup>* .

**Definition 4.** *[6] Let <sup>A</sup> be an invex set. Then, IV-F <sup>Y</sup>* : *<sup>A</sup>* → K<sup>+</sup> *C is said to be left and right preinvex on A with respect to ζ if*

$$Y(\omega + (1-\mathrm{t})\zeta(\varkappa,\omega)) \le\_p \mathrm{t}Y(\omega) + (1-\mathrm{t})Y(\varkappa),\tag{4}$$

*for all <sup>ω</sup>*,<sup>κ</sup> <sup>∈</sup> *<sup>A</sup>*, <sup>t</sup> <sup>∈</sup> [0, 1], *where <sup>ζ</sup>* : <sup>R</sup>*<sup>n</sup>* <sup>×</sup> <sup>R</sup>*<sup>n</sup>* <sup>→</sup> <sup>R</sup>*<sup>n</sup>* . *Y is called left and right preincave on A with respect to ζ if inequality (4) is reversed. Y is called affine if Y is both convex and concave.*

**Remark 2.** *The left and right preinvex IV-Fs have some very nice properties similar to left and right convex IV-F:*


*In the case of ζ*(κ, *ω*) = κ − *ω*, *we obtain (3) from (4).*

The following outcome is very important in the field of interval-valued calculus because, by using this result, we can easily handle *IV-Fs*. Basically, Theorem 2 establishes the relation between an *IV-F Y*(*ω*), its lower function *Y*∗(*ω*), and its upper function *Y* ∗ (*ω*).

The following assumption regarding the bifunction *<sup>ζ</sup>* : <sup>R</sup>*<sup>n</sup>* <sup>×</sup> <sup>R</sup>*<sup>n</sup>* <sup>→</sup> <sup>R</sup>*<sup>n</sup>* will be required to prove the next result; it is known as Condition C:


**Condition C.** *[7] Let A be an invex set with respect to ζ*. *For any* κ, *ω* ∈ *A and* t ∈ [0, 1]*,*

$$\begin{aligned} \zeta(\omega, \omega + \mathfrak{t}\zeta(\varkappa, \omega)) &= -\mathfrak{t}\zeta(\varkappa, \omega), \\ \zeta(\varkappa, \omega + \mathfrak{t}\zeta(\varkappa, \omega)) &= (1 - \mathfrak{t})\zeta(\varkappa, \omega). \end{aligned}$$

*Clearly for* t *= 0, we have ζ*(κ, *ω*) *= 0 if and only if,* κ = *ω, for all* κ, *ω* ∈ *A. For the applications of Condition C, see [26,30,34,35].*
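The canonical choice ζ(κ, ω) = κ − ω mentioned in Remark 2 satisfies both identities of Condition C, which can be confirmed with a quick numerical check. This snippet is illustrative and ours; the function names are assumptions, not notation from the paper.

```python
# Verify Condition C for zeta(k, w) = k - w:
#   zeta(w, w + t*zeta(k, w)) = -t*zeta(k, w)
#   zeta(k, w + t*zeta(k, w)) = (1 - t)*zeta(k, w)

def zeta(k, w):
    return k - w

def condition_C_holds(k, w, t, tol=1e-12):
    y = w + t * zeta(k, w)                               # the shifted point
    eq1 = abs(zeta(w, y) - (-t) * zeta(k, w)) < tol      # first identity
    eq2 = abs(zeta(k, y) - (1 - t) * zeta(k, w)) < tol   # second identity
    return eq1 and eq2

ok = all(condition_C_holds(k, w, t)
         for k in (0.0, 1.5, -2.0)
         for w in (0.5, 3.0)
         for t in (0.0, 0.25, 1.0))
```

Both identities hold exactly here because ζ is affine in its first argument.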

**Theorem 2.** *[6] Let A be an invex set and Y* : *<sup>A</sup>* → K<sup>+</sup> *C be a IV-F such that*

$$Y(\omega) = [Y\_\*(\omega), Y^\*(\omega)],\ \forall\, \omega \in A,\tag{5}$$

*for all ω* ∈ *A. Then, Y is a left and right preinvex IV-F on A if and only if Y*∗(*ω*) *and Y* ∗ (*ω*) *are both preinvex functions.*

**Remark 3.** *If Y*∗(*ω*) = *Y* ∗ (*ω*)*, then, from (4), one can acquire the following inequality, see [2]:*

$$Y(\omega + (1-\mathrm{t})\zeta(\varkappa,\omega)) \le \mathrm{t}Y(\omega) + (1-\mathrm{t})Y(\varkappa),\tag{6}$$

*for all <sup>ω</sup>*,<sup>κ</sup> <sup>∈</sup> *<sup>A</sup>*, t <sup>∈</sup> [0, 1], *where <sup>ζ</sup>* : <sup>R</sup>*<sup>n</sup>* <sup>×</sup> <sup>R</sup>*<sup>n</sup>* <sup>→</sup> <sup>R</sup>*<sup>n</sup>* .

*If Y*∗(*ω*) = *Y* ∗ (*ω*) *with ζ*(κ, *ω*) = κ − *ω, then, from (4), one can acquire the following inequality:*

$$Y(\mathrm{t}\omega + (1-\mathrm{t})\varkappa) \le \mathrm{t}Y(\omega) + (1-\mathrm{t})Y(\varkappa),\tag{7}$$


*for all ω*,κ ∈ *K*, t ∈ [0, 1].

**Example 1.** *We consider the IV-F <sup>Y</sup>* : [0, 1] → K<sup>+</sup> *C defined by Y*(*ω*) = [2, 4]*ω*<sup>2</sup>*. Since the end point functions Y*∗(*ω*), *Y* ∗ (*ω*) *are preinvex functions with respect to ζ*(κ, *ω*) = κ − *ω*, *Y*(*ω*) *is a left and right preinvex IV-F.*
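The preinvexity claim in Example 1 can be spot-checked pointwise against Definition 4. The check below is a sketch of ours (the helper `preinvex_ok` and the grids are assumptions), testing inequality (4) for both endpoint functions on a grid.

```python
# Grid check of inequality (4): f(w + (1-t)*zeta(k, w)) <= t*f(w) + (1-t)*f(k)
# for the endpoint functions of Y(w) = [2, 4]w^2 with zeta(k, w) = k - w.

def preinvex_ok(f, zeta, pts, ts, tol=1e-12):
    return all(
        f(w + (1 - t) * zeta(k, w)) <= t * f(w) + (1 - t) * f(k) + tol
        for w in pts for k in pts for t in ts
    )

zeta = lambda k, w: k - w
pts = [i / 10 for i in range(11)]   # grid on [0, 1]
ts = [j / 10 for j in range(11)]    # grid of t values in [0, 1]
ok = (preinvex_ok(lambda w: 2 * w**2, zeta, pts, ts)
      and preinvex_ok(lambda w: 4 * w**2, zeta, pts, ts))
```

With ζ(κ, ω) = κ − ω the condition reduces to ordinary convexity of 2ω² and 4ω², so every grid point passes.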

#### **3. Main Results**


In this section, we obtain Hermite–Hadamard (H-H) and Hermite–Hadamard–Fejér (H-H-Fejér) type inequalities and some related integral inequalities with the support of left and right preinvex *IV-Fs* via the pseudo-order relation and the interval Riemann integral. Moreover, some special cases are also discussed, and we provide some nontrivial examples to verify the validity of the theory developed in this study.

**Theorem 3.** *Let <sup>Y</sup>* : [*υ*, *<sup>υ</sup>* <sup>+</sup> *<sup>ζ</sup>*(*µ*, *<sup>υ</sup>*)] → K<sup>+</sup> *C be a left and right preinvex IV-F such that Y*(*ω*) = [*Y*∗(*ω*), *Y* ∗ (*ω*)] *for all ω* ∈ [*υ*, *υ* + *ζ*(*µ*, *υ*)]*. If Y* ∈ IR([*υ*, *<sup>υ</sup>*+*ζ*(*µ*, *<sup>υ</sup>*)]), *then*

$$Y\left(\frac{2v+\zeta(\mu,v)}{2}\right) \le\_p \frac{1}{\zeta(\mu,v)}\,(IR) \int\_v^{v+\zeta(\mu,v)} Y(\omega)d\omega \le\_p \frac{Y(v) + Y(v+\zeta(\mu,v))}{2} \le\_p \frac{Y(v) + Y(\mu)}{2}. \tag{8}$$

*If Y is left and right preincave, then we achieve the following inequality:*

$$Y\left(\frac{2v+\zeta(\mu,v)}{2}\right) \ge\_p \frac{1}{\zeta(\mu,v)}\,(IR) \int\_{\nu}^{v+\zeta(\mu,v)} Y(\omega)d\omega \ge\_p \frac{Y(v) + Y(v+\zeta(\mu,v))}{2} \ge\_p \frac{Y(v) + Y(\mu)}{2}. \tag{9}$$

**Proof.** Let *<sup>Y</sup>* : [*υ*, *<sup>υ</sup>* <sup>+</sup> *<sup>ζ</sup>*(*µ*, *<sup>υ</sup>*)] → K<sup>+</sup> *C* be a left and right preinvex *IV-F*. Then, by hypothesis, we have

$$2Y\left(\frac{2v+\zeta(\mu,v)}{2}\right) \le\_p Y(v+(1-t)\zeta(\mu,v)) + Y(v+t\zeta(\mu,v)).$$

Therefore, we have

$$\begin{aligned} 2Y\_\*\left(\frac{2\upsilon+\zeta(\mu,\upsilon)}{2}\right) &\leq Y\_\*(\upsilon+(1-t)\zeta(\mu,\upsilon)) + Y\_\*(\upsilon+t\zeta(\mu,\upsilon)),\\ 2Y^\*\left(\frac{2\upsilon+\zeta(\mu,\upsilon)}{2}\right) &\leq Y^\*(\upsilon+(1-t)\zeta(\mu,\upsilon)) + Y^\*(\upsilon+t\zeta(\mu,\upsilon)).\end{aligned}$$


Then

$$\begin{split} 2\int\_{0}^{1}Y\_{\*}\left(\frac{2\upsilon+\zeta(\mu,\upsilon)}{2}\right)d\mathrm{t} &\leq \int\_{0}^{1}Y\_{\*}\left(\upsilon+(1-\mathrm{t})\zeta(\mu,\upsilon)\right)d\mathrm{t} + \int\_{0}^{1}Y\_{\*}\left(\upsilon+\mathrm{t}\zeta(\mu,\upsilon)\right)d\mathrm{t},\\ 2\int\_{0}^{1}Y^{\*}\left(\frac{2\upsilon+\zeta(\mu,\upsilon)}{2}\right)d\mathrm{t} &\leq \int\_{0}^{1}Y^{\*}\left(\upsilon+(1-\mathrm{t})\zeta(\mu,\upsilon)\right)d\mathrm{t} + \int\_{0}^{1}Y^{\*}\left(\upsilon+\mathrm{t}\zeta(\mu,\upsilon)\right)d\mathrm{t}. \end{split}$$

It follows that

$$\begin{array}{c} Y\_\*\left(\frac{2\upsilon+\zeta(\mu,\upsilon)}{2}\right) \leq \frac{1}{\zeta(\mu,\upsilon)} \int\_{\upsilon}^{\upsilon+\zeta(\mu,\upsilon)} Y\_\*(\omega) d\omega, \\ Y^\*\left(\frac{2\upsilon+\zeta(\mu,\upsilon)}{2}\right) \leq \frac{1}{\zeta(\mu,\upsilon)} \int\_{\upsilon}^{\upsilon+\zeta(\mu,\upsilon)} Y^\*(\omega) d\omega. \end{array}$$

That is

$$\left[ Y\_\*\left(\frac{2\upsilon + \zeta(\mu, \upsilon)}{2}\right),\ Y^\*\left(\frac{2\upsilon + \zeta(\mu, \upsilon)}{2}\right) \right] \le\_p \frac{1}{\zeta(\mu, \upsilon)} \left[ \int\_{\upsilon}^{\upsilon+\zeta(\mu, \upsilon)} Y\_\*(\omega)d\omega,\ \int\_{\upsilon}^{\upsilon+\zeta(\mu, \upsilon)} Y^\*(\omega)d\omega \right].$$

Thus,

$$Y\left(\frac{2\upsilon+\zeta(\mu,\upsilon)}{2}\right) \leq\_p \frac{1}{\zeta(\mu,\upsilon)}\left(IR\right)\int\_{\upsilon}^{\upsilon+\zeta(\mu,\upsilon)}Y(\omega)d\omega.\tag{10}$$

In a similar way to the above, we have

$$\frac{1}{\zeta(\mu,\upsilon)}\left(IR\right)\int\_{\upsilon}^{\upsilon+\zeta(\mu,\upsilon)}Y(\omega)d\omega \leq\_{p} \frac{Y(\upsilon)+Y(\mu)}{2}.\tag{11}$$

Combining (10) and (11), we have

$$Y\left(\frac{2v+\zeta(\mu,\upsilon)}{2}\right) \le\_p \frac{1}{\zeta(\mu,\upsilon)} \left(IR\right) \int\_{\upsilon}^{v+\zeta(\mu,\upsilon)} Y(\omega)d\omega \le\_p \frac{Y(\upsilon)+Y(\mu)}{2}.$$

This completes the proof.

**Remark 4.** *If ζ*(*µ*, *υ*) = *µ* − *υ, then Theorem 3 reduces to the result for left and right convex IV-F, see [29]:*

$$\Upsilon\left(\frac{v+\mu}{2}\right) \le\_p \frac{1}{\mu-v} \left(IR\right) \int\_v^\mu Y(\omega)d\omega \le\_p \frac{\Upsilon(v)+\Upsilon(\mu)}{2}.\tag{12}$$

*If Y*∗(*ω*) = *Y* ∗ (*ω*)*, then Theorem 3 reduces to the result for the preinvex function, see [30]:*

$$Y\left(\frac{2v+\zeta(\mu,\upsilon)}{2}\right) \le \frac{1}{\zeta(\mu,\upsilon)}\left(R\right) \int\_{\upsilon}^{v+\zeta(\mu,\upsilon)} Y(\omega)d\omega \le \left[Y(\upsilon) + Y(\mu)\right] \int\_0^1 \mathrm{t}\,d\mathrm{t}.\tag{13}$$

*If Y*∗(*ω*) = *Y* ∗ (*ω*) *with ζ*(*µ*, *υ*) = *µ* − *υ, then Theorem 3 reduces to the result for the convex function, see [31,32]:*

$$\Upsilon\left(\frac{v+\mu}{2}\right) \le \frac{1}{\mu-\upsilon} \text{ (R)} \int\_{v}^{\mu} \Upsilon(\omega)d\omega \le \frac{\Upsilon(v)+\Upsilon(\mu)}{2}.\tag{14}$$

**Example 2.** *We consider the IV-F <sup>Y</sup>* : [*υ*, *<sup>υ</sup>* <sup>+</sup> *<sup>ζ</sup>*(*µ*, *<sup>υ</sup>*)] <sup>=</sup> [0, *<sup>ζ</sup>*(2, 0)] → K<sup>+</sup> *C defined by Y*(*ω*) = [2*ω*<sup>2</sup>, 4*ω*<sup>2</sup>]*. Since the end point functions <sup>Y</sup>*∗(*ω*) <sup>=</sup> <sup>2</sup>*ω*<sup>2</sup> , *Y* ∗ (*ω*) = 4*ω*<sup>2</sup> *are preinvex functions with respect to ζ*(*µ*, *υ*) = *µ* − *υ, Y*(*ω*) *is a left and right preinvex IV-F with respect to ζ*(*µ*, *υ*) = *µ* − *υ. We now compute the following*

$$\begin{array}{c} Y\left(\frac{2v+\zeta(\mu,\upsilon)}{2}\right) \leq\_p \frac{1}{\zeta(\mu,\upsilon)} \left(IR\right) \int\_{\upsilon}^{v+\zeta(\mu,\upsilon)} Y(\omega) d\omega \leq\_p \frac{Y(\upsilon)+Y(\mu)}{2}, \\ Y\_\*\left(\frac{2\upsilon+\zeta(\mu,\upsilon)}{2}\right) = Y\_\*(1) = 2, \\ \frac{1}{\zeta(\mu,\upsilon)} \int\_{\upsilon}^{v+\zeta(\mu,\upsilon)} Y\_\*(\omega) d\omega = \frac{1}{2} \int\_0^2 2\omega^2 d\omega = \frac{8}{3}, \\ \frac{Y\_\*(\upsilon)+Y\_\*(\mu)}{2} = 4, \end{array}$$

that means

$$2 \le \frac{8}{3} \le 4.$$

Similarly, it can be easily shown that

$$Y^\* \left( \frac{2v + \zeta(\mu, \upsilon)}{2} \right) \le \frac{1}{\zeta(\mu, \upsilon)} \int\_{\upsilon}^{\upsilon + \zeta(\mu, \upsilon)} Y^\*(\omega) d\omega \le \frac{Y^\*(\upsilon) + Y^\*(\mu)}{2}$$

such that

$$\begin{array}{c} Y^\* \left( \frac{2\upsilon + \zeta(\mu, \upsilon)}{2} \right) = Y^\*(1) = 4,\\ \frac{1}{\zeta(\mu, \upsilon)} \int\_{\upsilon}^{\upsilon + \zeta(\mu, \upsilon)} Y^\*(\omega) d\omega = \frac{1}{2} \int\_0^2 4\omega^2 d\omega = \frac{16}{3},\\ \frac{Y^\*(\upsilon) + Y^\*(\mu)}{2} = 8. \end{array}$$

From which, it follows that

$$4 \le \frac{16}{3} \le 8,$$

that is

$$[2,\ 4] \le\_p \left[\frac{8}{3},\ \frac{16}{3}\right] \le\_p [4,\ 8],$$

hence,

$$Y\left(\frac{2v+\zeta(\mu,\upsilon)}{2}\right) \le\_p \frac{1}{\zeta(\mu,\upsilon)} \left(IR\right) \int\_{\upsilon}^{\upsilon+\zeta(\mu,\upsilon)} Y(\omega)d\omega \le\_p \frac{Y(\upsilon)+Y(\mu)}{2}.$$
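The chain of Example 2 can also be cross-checked numerically. The snippet below is a sketch of ours, not from the paper; `riemann` and the variable names are assumptions, and a midpoint rule approximates the integrals.

```python
# Numerical check of the H-H chain for Y(w) = [2w^2, 4w^2] on [0, 2]
# with zeta(mu, v) = mu - v: midpoint <= mean integral <= endpoint average.

def riemann(f, a, b, n=200000):
    h = (b - a) / n
    return sum(f(a + (k + 0.5) * h) for k in range(n)) * h

Y_lo = lambda w: 2 * w**2
Y_hi = lambda w: 4 * w**2
v, mu = 0.0, 2.0                      # zeta(mu, v) = mu - v = 2
mid = (2 * v + (mu - v)) / 2          # = 1

chain_lo = (Y_lo(mid), riemann(Y_lo, v, mu) / (mu - v), (Y_lo(v) + Y_lo(mu)) / 2)
chain_hi = (Y_hi(mid), riemann(Y_hi, v, mu) / (mu - v), (Y_hi(v) + Y_hi(mu)) / 2)
```

`chain_lo` approximates (2, 8/3, 4) and `chain_hi` approximates (4, 16/3, 8), matching the values computed above.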

**Theorem 4.** *Let <sup>Y</sup>*, <sup>D</sup> : [*υ*, *<sup>υ</sup>* <sup>+</sup> *<sup>ζ</sup>*(*µ*, *<sup>υ</sup>*)] → K<sup>+</sup> *C be two left and right preinvex IV-Fs such that Y*(*ω*) = [*Y*∗(*ω*), *Y* ∗ (*ω*)] *and* D(*ω*) = [D∗(*ω*), D<sup>∗</sup> (*ω*)] *for all ω* ∈ [*υ*, *υ* + *ζ*(*µ*, *υ*)]*. If Y*, D *and Y* × D ∈ IR([*υ*, *<sup>υ</sup>*+*ζ*(*µ*, *<sup>υ</sup>*)])*, then*

$$\frac{1}{\zeta(\mu,\upsilon)}\left(IR\right)\int\_{\upsilon}^{\upsilon+\zeta(\mu,\upsilon)}Y(\omega)\times\mathfrak{D}(\omega)d\omega \leq\_{p} \frac{\mathcal{A}(\upsilon,\mu)}{3} + \frac{\mathcal{C}(\upsilon,\mu)}{6},\tag{15}$$

*where* A(*υ*, *µ*) = *Y*(*υ*) × D(*υ*) + *Y*(*µ*) × D(*µ*), C(*υ*, *µ*) = *Y*(*υ*) × D(*µ*) + *Y*(*µ*) × D(*υ*), *and* A(*υ*, *µ*) = [A∗((*υ*, *µ*)), A<sup>∗</sup> ((*υ*, *µ*))] *and* C(*υ*, *µ*) = [C∗((*υ*, *µ*)), C ∗ ((*υ*, *µ*))].

**Proof.** Since *Y*, D are left and right preinvex *IV-Fs* with *Y*, D ∈ IR([*υ*, *<sup>υ</sup>*+*ζ*(*µ*, *<sup>υ</sup>*)]), we have

$$\begin{cases} \mathcal{Y}\_\*(\boldsymbol{\upsilon} + (\mathbf{1} - \mathbf{t})\boldsymbol{\zeta}(\boldsymbol{\mu}, \boldsymbol{\upsilon})) \le \mathbf{t} \mathcal{Y}\_\*(\boldsymbol{\upsilon}) + (\mathbf{1} - \mathbf{t})\mathcal{Y}\_\*(\boldsymbol{\mu}),\\ \mathcal{Y}^\*(\boldsymbol{\upsilon} + (\mathbf{1} - \mathbf{t})\boldsymbol{\zeta}(\boldsymbol{\mu}, \boldsymbol{\upsilon})) \le \mathbf{t} \mathcal{Y}^\*(\boldsymbol{\upsilon}) + (\mathbf{1} - \mathbf{t})\mathcal{Y}^\*(\boldsymbol{\mu}). \end{cases}$$

and

$$
\begin{array}{c}
\mathfrak{D}\_{\*}(\upsilon + (1-\mathsf{t})\zeta(\mu,\upsilon)) \leq \mathsf{t}\mathfrak{D}\_{\*}(\upsilon) + (1-\mathsf{t})\mathfrak{D}\_{\*}(\mu), \\
\mathfrak{D}^{\*}(\upsilon + (1-\mathsf{t})\zeta(\mu,\upsilon)) \leq \mathsf{t}\mathfrak{D}^{\*}(\upsilon) + (1-\mathsf{t})\mathfrak{D}^{\*}(\mu).
\end{array}
$$

From the definition of left and right preinvex *IV-F*, it follows that 0 ≤*<sup>p</sup> Y*(*ω*) and 0 ≤*<sup>p</sup>* D(*ω*), so

$$\begin{aligned} Y\_\*(\upsilon+(1-\mathrm{t})\zeta(\mu,\upsilon)) \times \mathfrak{D}\_\*(\upsilon+(1-\mathrm{t})\zeta(\mu,\upsilon)) &\le \left(\mathrm{t}Y\_\*(\upsilon)+(1-\mathrm{t})Y\_\*(\mu)\right)\left(\mathrm{t}\mathfrak{D}\_\*(\upsilon)+(1-\mathrm{t})\mathfrak{D}\_\*(\mu)\right)\\ &= Y\_\*(\upsilon)\times\mathfrak{D}\_\*(\upsilon)\,\mathrm{t}^2 + Y\_\*(\mu)\times\mathfrak{D}\_\*(\mu)\,(1-\mathrm{t})^2\\ &\quad + Y\_\*(\upsilon)\times\mathfrak{D}\_\*(\mu)\,\mathrm{t}(1-\mathrm{t}) + Y\_\*(\mu)\times\mathfrak{D}\_\*(\upsilon)\,\mathrm{t}(1-\mathrm{t}),\\ Y^\*(\upsilon+(1-\mathrm{t})\zeta(\mu,\upsilon)) \times \mathfrak{D}^\*(\upsilon+(1-\mathrm{t})\zeta(\mu,\upsilon)) &\le \left(\mathrm{t}Y^\*(\upsilon)+(1-\mathrm{t})Y^\*(\mu)\right)\left(\mathrm{t}\mathfrak{D}^\*(\upsilon)+(1-\mathrm{t})\mathfrak{D}^\*(\mu)\right)\\ &= Y^\*(\upsilon)\times\mathfrak{D}^\*(\upsilon)\,\mathrm{t}^2 + Y^\*(\mu)\times\mathfrak{D}^\*(\mu)\,(1-\mathrm{t})^2\\ &\quad + Y^\*(\upsilon)\times\mathfrak{D}^\*(\mu)\,\mathrm{t}(1-\mathrm{t}) + Y^\*(\mu)\times\mathfrak{D}^\*(\upsilon)\,\mathrm{t}(1-\mathrm{t}). \end{aligned}$$

Integrating both sides of the above inequality over [0,1], we obtain

$$\begin{split} \int\_{0}^{1} Y\_{\*}(\upsilon + (1 - \mathrm{t})\zeta(\mu,\upsilon))\, \mathfrak{D}\_{\*}(\upsilon + (1 - \mathrm{t})\zeta(\mu,\upsilon))\, d\mathrm{t} &= \frac{1}{\zeta(\mu,\upsilon)} \int\_{\upsilon}^{\upsilon + \zeta(\mu,\upsilon)} Y\_{\*}(\omega) \mathfrak{D}\_{\*}(\omega)\, d\omega \\ &\leq \left( Y\_{\*}(\upsilon) \mathfrak{D}\_{\*}(\upsilon) + Y\_{\*}(\mu) \mathfrak{D}\_{\*}(\mu) \right) \int\_{0}^{1} \mathrm{t}^{2}\, d\mathrm{t} + \left(Y\_{\*}(\upsilon) \mathfrak{D}\_{\*}(\mu) + Y\_{\*}(\mu) \mathfrak{D}\_{\*}(\upsilon)\right) \int\_{0}^{1} \mathrm{t}(1 - \mathrm{t})\, d\mathrm{t}, \\ \int\_{0}^{1} Y^{\*}(\upsilon + (1 - \mathrm{t})\zeta(\mu,\upsilon))\, \mathfrak{D}^{\*}(\upsilon + (1 - \mathrm{t})\zeta(\mu,\upsilon))\, d\mathrm{t} &= \frac{1}{\zeta(\mu, \upsilon)} \int\_{\upsilon}^{\upsilon + \zeta(\mu, \upsilon)} Y^{\*}(\omega) \mathfrak{D}^{\*}(\omega)\, d\omega \\ &\leq \left( Y^{\*}(\upsilon) \mathfrak{D}^{\*}(\upsilon) + Y^{\*}(\mu) \mathfrak{D}^{\*}(\mu) \right) \int\_{0}^{1} \mathrm{t}^{2}\, d\mathrm{t} + \left(Y^{\*}(\upsilon) \mathfrak{D}^{\*}(\mu) + Y^{\*}(\mu) \mathfrak{D}^{\*}(\upsilon)\right) \int\_{0}^{1} \mathrm{t}(1 - \mathrm{t})\, d\mathrm{t}. \end{split}$$

It follows that,

$$\frac{1}{\zeta(\mu,\upsilon)}\int\_{v}^{v+\zeta(\mu,\upsilon)}Y\_{\*}(\omega)\mathfrak{D}\_{\*}(\omega)d\omega \leq \mathcal{A}\_{\*}((v,\mu))\int\_{0}^{1}\mathrm{t}^{2}d\mathrm{t}+\mathcal{C}\_{\*}((v,\mu))\int\_{0}^{1}\mathrm{t}(1-\mathrm{t})d\mathrm{t},$$

that is

$$\begin{array}{c} \frac{1}{\zeta(\mu,\upsilon)} \left[ \int\_{\upsilon}^{\upsilon+\zeta(\mu,\upsilon)} Y\_\*(\omega) \mathfrak{D}\_\*(\omega) d\omega,\ \int\_{\upsilon}^{\upsilon+\zeta(\mu,\upsilon)} Y^\*(\omega) \mathfrak{D}^\*(\omega) d\omega \right] \\ \le\_p \left[ \frac{\mathcal{A}\_\*((\upsilon,\mu))}{3},\ \frac{\mathcal{A}^\*((\upsilon,\mu))}{3} \right] + \left[ \frac{\mathcal{C}\_\*((\upsilon,\mu))}{6},\ \frac{\mathcal{C}^\*((\upsilon,\mu))}{6} \right]. \end{array}$$

Thus,

$$\frac{1}{\zeta(\mu,\upsilon)}\left(IR\right)\int\_{\upsilon}^{\upsilon+\zeta(\mu,\upsilon)}Y(\omega)\times\mathfrak{D}(\omega)d\omega \leq\_{p} \frac{\mathcal{A}(\upsilon,\mu)}{3} + \frac{\mathcal{C}(\upsilon,\mu)}{6},$$

and the theorem has been established.

**Example 3.** *We consider the IV-Fs <sup>Y</sup>*, <sup>D</sup> : [*υ*, *<sup>υ</sup>* <sup>+</sup> *<sup>ζ</sup>*(*µ*, *<sup>υ</sup>*)] <sup>=</sup> [0, *<sup>ζ</sup>*(1, 0)] → K<sup>+</sup> *C defined by Y*(*ω*) = [2*ω*<sup>2</sup>, 4*ω*<sup>2</sup>] *and* <sup>D</sup>(*ω*) <sup>=</sup> [*ω*, 2*ω*]. *Since the end point functions <sup>Y</sup>*∗(*ω*) <sup>=</sup> <sup>2</sup>*ω*<sup>2</sup> , *Y* ∗ (*ω*) = 4*ω*<sup>2</sup> *and* D∗(*ω*) = *ω,* D<sup>∗</sup> (*ω*) = 2*ω are preinvex functions with respect to ζ*(*µ*, *υ*) = *µ* − *υ, Y and* D *are both left and right preinvex IV-Fs. We now compute the following*

$$\begin{array}{c} \frac{1}{\zeta(\mu,\upsilon)} \int\_{\upsilon}^{\upsilon+\zeta(\mu,\upsilon)} Y\_{\*}(\omega) \times \mathfrak{D}\_{\*}(\omega) d\omega = \frac{1}{2}, \\ \frac{1}{\zeta(\mu,\upsilon)} \int\_{\upsilon}^{\upsilon+\zeta(\mu,\upsilon)} Y^{\*}(\omega) \times \mathfrak{D}^{\*}(\omega) d\omega = 2, \\ \frac{\mathcal{A}\_{\*}((\upsilon,\mu))}{3} = \frac{2}{3}, \quad \frac{\mathcal{A}^{\*}((\upsilon,\mu))}{3} = \frac{8}{3}, \\ \frac{\mathcal{C}\_{\*}((\upsilon,\mu))}{6} = 0, \quad \frac{\mathcal{C}^{\*}((\upsilon,\mu))}{6} = 0, \end{array}$$

that means

$$\frac{1}{2} \le \frac{2}{3}, \qquad 2 \le \frac{8}{3}.$$

Hence, Theorem 4 is verified.
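The quantities of Example 3 can be reproduced numerically as a check on inequality (15). This sketch is ours, not from the paper; the names and the midpoint-rule helper are assumptions.

```python
# Numerical check of (15) for Y(w) = [2w^2, 4w^2], D(w) = [w, 2w] on [0, 1]
# with zeta(mu, v) = mu - v = 1.

def riemann(f, a, b, n=200000):
    h = (b - a) / n
    return sum(f(a + (k + 0.5) * h) for k in range(n)) * h

Y_lo, Y_hi = lambda w: 2 * w**2, lambda w: 4 * w**2
D_lo, D_hi = lambda w: w, lambda w: 2 * w
v, mu = 0.0, 1.0

# Left-hand side of (15), endpoint by endpoint.
lhs_lo = riemann(lambda w: Y_lo(w) * D_lo(w), v, mu) / (mu - v)
lhs_hi = riemann(lambda w: Y_hi(w) * D_hi(w), v, mu) / (mu - v)

# A and C terms from Theorem 4.
A_lo = Y_lo(v) * D_lo(v) + Y_lo(mu) * D_lo(mu)
A_hi = Y_hi(v) * D_hi(v) + Y_hi(mu) * D_hi(mu)
C_lo = Y_lo(v) * D_lo(mu) + Y_lo(mu) * D_lo(v)
C_hi = Y_hi(v) * D_hi(mu) + Y_hi(mu) * D_hi(v)
rhs_lo = A_lo / 3 + C_lo / 6
rhs_hi = A_hi / 3 + C_hi / 6
```

Here `lhs_lo` is about 1/2 against `rhs_lo` = 2/3, and `lhs_hi` is about 2 against `rhs_hi` = 8/3, so the inequality holds on both endpoints.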

**Theorem 5.** *Let <sup>Y</sup>*, <sup>D</sup> : [*υ*, *<sup>υ</sup>* <sup>+</sup> *<sup>ζ</sup>*(*µ*, *<sup>υ</sup>*)] → K<sup>+</sup> *C be two left and right preinvex IV-Fs such that Y*(*ω*) = [*Y*∗(*ω*), *Y* ∗ (*ω*)] *and* D(*ω*) = [D∗(*ω*), D<sup>∗</sup> (*ω*)] *for all ω* ∈ [*υ*, *υ* + *ζ*(*µ*, *υ*)]*. If Y*, D *and Y* × D ∈ IR([*υ*, *<sup>υ</sup>*+*ζ*(*µ*, *<sup>υ</sup>*)]) *and Condition C holds for ζ, then*

$$2\,Y\left(\frac{2v+\zeta(\mu,v)}{2}\right)\times\mathfrak{D}\left(\frac{2v+\zeta(\mu,v)}{2}\right)\leq\_p\frac{1}{\zeta(\mu,v)}\left(IR\right)\int\_v^{v+\zeta(\mu,v)}Y(\omega)\times\mathfrak{D}(\omega)d\omega + \frac{\mathcal{A}(v,\mu)}{6} + \frac{\mathcal{C}(v,\mu)}{3},\tag{16}$$

where A(*υ*, *µ*) = *Y*(*υ*) × D(*υ*) + *Y*(*µ*) × D(*µ*), C(*υ*, *µ*) = *Y*(*υ*) × D(*µ*) + *Y*(*µ*) × D(*υ*), and A(*υ*, *µ*) = [A∗((*υ*, *µ*)), A<sup>∗</sup> ((*υ*, *µ*))] and C(*υ*, *µ*) = [C∗((*υ*, *µ*)), C ∗ ((*υ*, *µ*))].

**Proof.** Using condition C, we can write

$$
\upsilon + \frac{1}{2}\zeta(\mu, \upsilon) = \upsilon + \mathfrak{t}\zeta(\mu, \upsilon) + \frac{1}{2}\zeta(\upsilon + (1 - \mathfrak{t})\zeta(\mu, \upsilon), \upsilon + \mathfrak{t}\zeta(\mu, \upsilon)).
$$

By hypothesis, we have

$$\begin{aligned} &Y\_\*\left(\tfrac{2\upsilon+\zeta(\mu,\upsilon)}{2}\right)\times\mathfrak{D}\_\*\left(\tfrac{2\upsilon+\zeta(\mu,\upsilon)}{2}\right)\\ &\quad= Y\_\*\left(\upsilon+\mathrm{t}\zeta(\mu,\upsilon)+\tfrac{1}{2}\zeta(\upsilon+(1-\mathrm{t})\zeta(\mu,\upsilon),\ \upsilon+\mathrm{t}\zeta(\mu,\upsilon))\right)\times\mathfrak{D}\_\*\left(\upsilon+\mathrm{t}\zeta(\mu,\upsilon)+\tfrac{1}{2}\zeta(\upsilon+(1-\mathrm{t})\zeta(\mu,\upsilon),\ \upsilon+\mathrm{t}\zeta(\mu,\upsilon))\right)\\ &\quad\le \tfrac{1}{4}\big[Y\_\*(\upsilon+(1-\mathrm{t})\zeta(\mu,\upsilon))+Y\_\*(\upsilon+\mathrm{t}\zeta(\mu,\upsilon))\big]\times\big[\mathfrak{D}\_\*(\upsilon+(1-\mathrm{t})\zeta(\mu,\upsilon))+\mathfrak{D}\_\*(\upsilon+\mathrm{t}\zeta(\mu,\upsilon))\big]\\ &\quad\le \tfrac{1}{4}\big[Y\_\*(\upsilon+(1-\mathrm{t})\zeta(\mu,\upsilon))\times\mathfrak{D}\_\*(\upsilon+(1-\mathrm{t})\zeta(\mu,\upsilon))+Y\_\*(\upsilon+\mathrm{t}\zeta(\mu,\upsilon))\times\mathfrak{D}\_\*(\upsilon+\mathrm{t}\zeta(\mu,\upsilon))\big]\\ &\qquad+\tfrac{1}{4}\big[(\mathrm{t}Y\_\*(\upsilon)+(1-\mathrm{t})Y\_\*(\mu))\times((1-\mathrm{t})\mathfrak{D}\_\*(\upsilon)+\mathrm{t}\mathfrak{D}\_\*(\mu))+((1-\mathrm{t})Y\_\*(\upsilon)+\mathrm{t}Y\_\*(\mu))\times(\mathrm{t}\mathfrak{D}\_\*(\upsilon)+(1-\mathrm{t})\mathfrak{D}\_\*(\mu))\big]\\ &\quad= \tfrac{1}{4}\big[Y\_\*(\upsilon+(1-\mathrm{t})\zeta(\mu,\upsilon))\times\mathfrak{D}\_\*(\upsilon+(1-\mathrm{t})\zeta(\mu,\upsilon))+Y\_\*(\upsilon+\mathrm{t}\zeta(\mu,\upsilon))\times\mathfrak{D}\_\*(\upsilon+\mathrm{t}\zeta(\mu,\upsilon))\big]\\ &\qquad+\tfrac{1}{4}\big[\{\mathrm{t}^2+(1-\mathrm{t})^2\}\,\mathcal{C}\_\*((\upsilon,\mu))+\{\mathrm{t}(1-\mathrm{t})+(1-\mathrm{t})\mathrm{t}\}\,\mathcal{A}\_\*((\upsilon,\mu))\big], \end{aligned}$$

and the analogous chain of inequalities holds for the upper functions *Y* ∗ (*ω*) and D<sup>∗</sup> (*ω*).

Integrating over [0, 1], we have

$$\begin{split} 2\,\mathcal{Y}_{*}\left(\frac{2\upsilon+\zeta(\mu,\upsilon)}{2}\right)\times\mathfrak{D}_{*}\left(\frac{2\upsilon+\zeta(\mu,\upsilon)}{2}\right) &\leq \frac{1}{\zeta(\mu,\upsilon)}\int_{\upsilon}^{\upsilon+\zeta(\mu,\upsilon)}\mathcal{Y}_{*}(\omega)\times\mathfrak{D}_{*}(\omega)\,d\omega + \frac{\mathcal{A}_{*}((\upsilon,\mu))}{6} + \frac{\mathcal{C}_{*}((\upsilon,\mu))}{3},\\ 2\,\mathcal{Y}^{*}\left(\frac{2\upsilon+\zeta(\mu,\upsilon)}{2}\right)\times\mathfrak{D}^{*}\left(\frac{2\upsilon+\zeta(\mu,\upsilon)}{2}\right) &\leq \frac{1}{\zeta(\mu,\upsilon)}\int_{\upsilon}^{\upsilon+\zeta(\mu,\upsilon)}\mathcal{Y}^{*}(\omega)\times\mathfrak{D}^{*}(\omega)\,d\omega + \frac{\mathcal{A}^{*}((\upsilon,\mu))}{6} + \frac{\mathcal{C}^{*}((\upsilon,\mu))}{3}, \end{split}$$
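As a short aside (not spelled out in the original derivation), the constants 1/6 and 1/3 arise from integrating the $t$-dependent weights in the preceding chain:

$$\int_{0}^{1}\{t(1-t)+(1-t)t\}\,dt=\frac{1}{3},\qquad \int_{0}^{1}\big\{t^{2}+(1-t)^{2}\big\}\,dt=\frac{2}{3},$$

so, after the prefactor $\tfrac{1}{2}$, the $\mathcal{A}$-term contributes $\tfrac{1}{6}$ and the $\mathcal{C}$-term contributes $\tfrac{1}{3}$.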

from which, we have

$$\begin{array}{c} 2\left[\mathcal{Y}_{*}\left(\frac{2\upsilon+\zeta(\mu,\upsilon)}{2}\right)\times\mathfrak{D}_{*}\left(\frac{2\upsilon+\zeta(\mu,\upsilon)}{2}\right),\ \mathcal{Y}^{*}\left(\frac{2\upsilon+\zeta(\mu,\upsilon)}{2}\right)\times\mathfrak{D}^{*}\left(\frac{2\upsilon+\zeta(\mu,\upsilon)}{2}\right)\right] \\[4pt] \leq_{p} \frac{1}{\zeta(\mu,\upsilon)}\left[\int_{\upsilon}^{\upsilon+\zeta(\mu,\upsilon)}\mathcal{Y}_{*}(\omega)\times\mathfrak{D}_{*}(\omega)\,d\omega,\ \int_{\upsilon}^{\upsilon+\zeta(\mu,\upsilon)}\mathcal{Y}^{*}(\omega)\times\mathfrak{D}^{*}(\omega)\,d\omega\right] \\[4pt] + \left[\frac{\mathcal{A}_{*}((\upsilon,\mu))}{6},\ \frac{\mathcal{A}^{*}((\upsilon,\mu))}{6}\right] + \left[\frac{\mathcal{C}_{*}((\upsilon,\mu))}{3},\ \frac{\mathcal{C}^{*}((\upsilon,\mu))}{3}\right], \end{array}$$

that is


$$2\,\mathcal{Y}\left(\frac{2\upsilon+\zeta(\mu,\upsilon)}{2}\right)\times\mathfrak{D}\left(\frac{2\upsilon+\zeta(\mu,\upsilon)}{2}\right)\leq_{p}\frac{1}{\zeta(\mu,\upsilon)}\,(\mathrm{IR})\int_{\upsilon}^{\upsilon+\zeta(\mu,\upsilon)}\mathcal{Y}(\omega)\times\mathfrak{D}(\omega)\,d\omega + \frac{\mathcal{A}(\upsilon,\mu)}{6} + \frac{\mathcal{C}(\upsilon,\mu)}{3}.$$

This completes the proof.

**Example 4.** *We consider the IV-Fs Y*, D : [*υ*, *υ* + *ζ*(*µ*, *υ*)] = [0, *ζ*(1, 0)] → K<sub>C</sub><sup>+</sup> *defined by Y*(*ω*) = [2*ω*<sup>2</sup>, 4*ω*<sup>2</sup>] *and* D(*ω*) = [1, 2]*ω*; *these functions fulfill all the assumptions of Theorem 5. Since Y*(*ω*) *and* D(*ω*) *are both left and right preinvex IV-Fs with respect to ζ*(*µ*, *υ*) = *µ* − *υ*, *we have Y*∗(*ω*) = 2*ω*<sup>2</sup>, *Y*<sup>∗</sup>(*ω*) = 4*ω*<sup>2</sup> *and* D∗(*ω*) = *ω*, D<sup>∗</sup>(*ω*) = 2*ω*. *We now compute the following:*

$$\begin{aligned} 2\,\mathcal{Y}_{*}\left(\frac{2\upsilon+\zeta(\mu,\upsilon)}{2}\right)\times\mathfrak{D}_{*}\left(\frac{2\upsilon+\zeta(\mu,\upsilon)}{2}\right)&=\frac{1}{2}, & 2\,\mathcal{Y}^{*}\left(\frac{2\upsilon+\zeta(\mu,\upsilon)}{2}\right)\times\mathfrak{D}^{*}\left(\frac{2\upsilon+\zeta(\mu,\upsilon)}{2}\right)&=2,\\ \frac{1}{\zeta(\mu,\upsilon)}\int_{\upsilon}^{\upsilon+\zeta(\mu,\upsilon)}\mathcal{Y}_{*}(\omega)\times\mathfrak{D}_{*}(\omega)\,d\omega&=\frac{1}{2}, & \frac{1}{\zeta(\mu,\upsilon)}\int_{\upsilon}^{\upsilon+\zeta(\mu,\upsilon)}\mathcal{Y}^{*}(\omega)\times\mathfrak{D}^{*}(\omega)\,d\omega&=2,\\ \frac{\mathcal{A}_{*}((\upsilon,\mu))}{6}&=\frac{1}{3}, & \frac{\mathcal{A}^{*}((\upsilon,\mu))}{6}&=\frac{4}{3},\\ \frac{\mathcal{C}_{*}((\upsilon,\mu))}{3}&=0, & \frac{\mathcal{C}^{*}((\upsilon,\mu))}{3}&=0, \end{aligned}$$

that means


$$\frac{1}{2}\leq\frac{1}{2}+\frac{1}{3}+0=\frac{5}{6},\qquad 2\leq 2+\frac{4}{3}+0=\frac{10}{3}.$$


Hence, Theorem 5 is verified.
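The verification above can also be reproduced numerically at the endpoint level. The sketch below is ours, not the paper's: the helper names (`check`, `Y`, `D`) are hypothetical, and the quantities `A` and `C` are assumed to be the endpoint-product and cross-product terms used in the computation above.

```python
# Numeric sanity check of Theorem 5 for Example 4 (illustrative sketch).
# Lower/upper endpoint functions of Y(w) = [2w^2, 4w^2] and D(w) = [1, 2]w
# on [0, 1], with zeta(mu, v) = mu - v, v = 0, mu = 1.

def check(Y, D, v=0.0, mu=1.0, n=100_000):
    zeta = mu - v
    mid = v + zeta / 2
    lhs = 2 * Y(mid) * D(mid)
    # midpoint-rule approximation of (1/zeta) * int_v^{v+zeta} Y(w) D(w) dw
    integral = sum(Y(v + (k + 0.5) * zeta / n) * D(v + (k + 0.5) * zeta / n)
                   for k in range(n)) * (zeta / n)
    A = Y(v) * D(v) + Y(mu) * D(mu)   # endpoint products (assumed definition)
    C = Y(v) * D(mu) + Y(mu) * D(v)   # cross products (assumed definition)
    rhs = integral / zeta + A / 6 + C / 3
    return lhs, rhs

lo = check(lambda w: 2 * w**2, lambda w: w)       # lower level: (0.5, ~0.8333)
up = check(lambda w: 4 * w**2, lambda w: 2 * w)   # upper level: (2.0, ~3.3333)
print(lo, up)
```

Both calls return `lhs ≤ rhs`, matching 1/2 ≤ 5/6 and 2 ≤ 10/3 above.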


It is well known that the classical *H*–*H*-Fejér inequality is a generalization of the classical *H*–*H* inequality. Now we derive the *H*–*H*-Fejér inequality for left and right preinvex *IV-Fs*, and then we will obtain the validity of this inequality with the help of a non-trivial example. Firstly, we obtain the second *H*–*H*-Fejér inequality for left and right preinvex *IV-Fs*.

**Theorem 6.** *Let Y* : [*υ*, *υ* + *ζ*(*µ*, *υ*)] → K<sub>C</sub><sup>+</sup> *be a left and right preinvex IV-F with υ* < *υ* + *ζ*(*µ*, *υ*) *such that Y*(*ω*) = [*Y*∗(*ω*), *Y*<sup>∗</sup>(*ω*)] *for all ω* ∈ [*υ*, *υ* + *ζ*(*µ*, *υ*)]. *If Y* ∈ TR([*υ*, *υ*+*ζ*(*µ*, *υ*)]) *and* S : [*υ*, *υ* + *ζ*(*µ*, *υ*)] → R, S(*ω*) ≥ 0, *is symmetric with respect to υ* + ½ *ζ*(*µ*, *υ*), *then*


$$\frac{1}{\zeta(\mu,\upsilon)}\,(\mathrm{IR})\int_{\upsilon}^{\upsilon+\zeta(\mu,\upsilon)}\mathcal{Y}(\omega)\mathcal{S}(\omega)\,d\omega\leq_{p}\left[\mathcal{Y}(\upsilon)+\mathcal{Y}(\mu)\right]\int_{0}^{1}t\,\mathcal{S}(\upsilon+t\zeta(\mu,\upsilon))\,dt. \tag{17}$$
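Inequality (17) can likewise be sanity-checked numerically at the endpoint level. The sketch below uses *Y* from Example 4 together with a hypothetical weight S(*ω*) = *ω*(1 − *ω*) (our choice, not taken from the paper), which is nonnegative and symmetric about *υ* + ½ *ζ*(*µ*, *υ*) = ½ on [0, 1].

```python
# Numeric sanity check of inequality (17) at the endpoint level (sketch).
# Y from Example 4 (lower 2w^2, upper 4w^2); S(w) = w*(1 - w) is a hypothetical
# nonnegative weight, symmetric about 1/2 on [0, 1]; v = 0, mu = 1.

def fejer_sides(Y, S, v=0.0, mu=1.0, n=100_000):
    zeta = mu - v
    # (1/zeta) * int_v^{v+zeta} Y(w) S(w) dw  (midpoint rule)
    lhs = sum(Y(v + (k + 0.5) * zeta / n) * S(v + (k + 0.5) * zeta / n)
              for k in range(n)) / n
    # [Y(v) + Y(mu)] * int_0^1 t * S(v + t*zeta) dt  (midpoint rule)
    rhs = (Y(v) + Y(mu)) * sum((k + 0.5) / n * S(v + (k + 0.5) * zeta / n)
                               for k in range(n)) / n
    return lhs, rhs

S = lambda w: w * (1 - w)
lo = fejer_sides(lambda w: 2 * w**2, S)   # lower level: (~0.1, ~0.1667)
up = fejer_sides(lambda w: 4 * w**2, S)   # upper level: (~0.2, ~0.3333)
print(lo, up)
```

For this weight the left side stays below the right side on both endpoint levels, as (17) requires.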

**Proof.** Let *Y* be a left and right preinvex *IV-F*. Then, we have


$$\begin{array}{l} \mathcal{Y}\_{\*}\left(\upsilon + (1-t)\zeta(\mu,\upsilon)\right)\mathcal{S}\left(\upsilon + (1-t)\zeta(\mu,\upsilon)\right) \\ \leq \left(t\mathcal{Y}\_{\*}\left(\upsilon\right) + (1-t)\mathcal{Y}\_{\*}\left(\mu\right)\right)\mathcal{S}\left(\upsilon + (1-t)\zeta(\mu,\upsilon)\right), \\ \mathcal{Y}^{\*}\left(\upsilon + (1-t)\zeta(\mu,\upsilon)\right)\mathcal{S}\left(\upsilon + (1-t)\zeta(\mu,\upsilon)\right) \\ \leq \left(t\mathcal{Y}^{\*}\left(\upsilon\right) + (1-t)\mathcal{Y}^{\*}\left(\mu\right)\right)\mathcal{S}\left(\upsilon + (1-t)\zeta(\mu,\upsilon)\right). \end{array} \tag{18}$$


And


$$\begin{split} \mathcal{Y}_{*}(\upsilon+t\zeta(\mu,\upsilon))\,\mathcal{S}(\upsilon+t\zeta(\mu,\upsilon)) &\leq ((1-t)\mathcal{Y}_{*}(\upsilon)+t\mathcal{Y}_{*}(\mu))\,\mathcal{S}(\upsilon+t\zeta(\mu,\upsilon)),\\ \mathcal{Y}^{*}(\upsilon+t\zeta(\mu,\upsilon))\,\mathcal{S}(\upsilon+t\zeta(\mu,\upsilon)) &\leq ((1-t)\mathcal{Y}^{*}(\upsilon)+t\mathcal{Y}^{*}(\mu))\,\mathcal{S}(\upsilon+t\zeta(\mu,\upsilon)). \end{split} \tag{19}$$



**Citation:** Khan, M.B.; Treanțǎ, S.; Soliman, M.S.; Nonlaopon, K.; Zaini, H.G. Some New Versions of Integral Inequalities for Left and Right Preinvex Functions in the Interval-Valued Settings. *Mathematics* **2022**, *9*, x. https://doi.org/10.3390/xxxxx

Academic Editors: Simeon Reich and Janusz Brzdęk

Received: 13 December 2021 Accepted: 15 February 2022 Published: 16 February 2022

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

After adding (18) and (19), and integrating over [0, 1], we get

$$\begin{split}
&\int_{0}^{1} Y_{*}(\upsilon + (1-\mathrm{t})\zeta(\mu, \upsilon))\mathcal{S}(\upsilon + (1-\mathrm{t})\zeta(\mu, \upsilon))d\mathrm{t} + \int_{0}^{1} Y_{*}(\upsilon + \mathrm{t}\zeta(\mu, \upsilon))\mathcal{S}(\upsilon + \mathrm{t}\zeta(\mu, \upsilon))d\mathrm{t}\\
&\quad\leq \int_{0}^{1} \Bigl[\, Y_{*}(\upsilon)\{\mathrm{t}\mathcal{S}(\upsilon + (1-\mathrm{t})\zeta(\mu, \upsilon)) + (1-\mathrm{t})\mathcal{S}(\upsilon + \mathrm{t}\zeta(\mu, \upsilon))\}\\
&\qquad\qquad + Y_{*}(\mu)\{(1-\mathrm{t})\mathcal{S}(\upsilon + (1-\mathrm{t})\zeta(\mu, \upsilon)) + \mathrm{t}\mathcal{S}(\upsilon + \mathrm{t}\zeta(\mu, \upsilon))\}\,\Bigr]d\mathrm{t}\\
&\quad= 2Y_{*}(\upsilon)\int_{0}^{1} \mathrm{t}\mathcal{S}(\upsilon + (1-\mathrm{t})\zeta(\mu, \upsilon))\,d\mathrm{t} + 2Y_{*}(\mu)\int_{0}^{1} \mathrm{t}\mathcal{S}(\upsilon + \mathrm{t}\zeta(\mu, \upsilon))\,d\mathrm{t},
\end{split}$$

and, analogously,

$$\begin{split}
&\int_{0}^{1} Y^{*}(\upsilon + \mathrm{t}\zeta(\mu, \upsilon))\mathcal{S}(\upsilon + \mathrm{t}\zeta(\mu, \upsilon))d\mathrm{t} + \int_{0}^{1} Y^{*}(\upsilon + (1-\mathrm{t})\zeta(\mu, \upsilon))\mathcal{S}(\upsilon + (1-\mathrm{t})\zeta(\mu, \upsilon))d\mathrm{t}\\
&\quad\leq 2Y^{*}(\upsilon)\int_{0}^{1} \mathrm{t}\mathcal{S}(\upsilon + (1-\mathrm{t})\zeta(\mu, \upsilon))\,d\mathrm{t} + 2Y^{*}(\mu)\int_{0}^{1} \mathrm{t}\mathcal{S}(\upsilon + \mathrm{t}\zeta(\mu, \upsilon))\,d\mathrm{t}.
\end{split}$$

Since S is symmetric, S(*υ* + t*ζ*(*µ*, *υ*)) = S(*υ* + (1 − t)*ζ*(*µ*, *υ*)), so the right-hand sides reduce to

$$\begin{split} &= 2[Y_{*}(\upsilon) + Y_{*}(\mu)]\int_{0}^{1} \mathrm{t}\mathcal{S}(\upsilon + \mathrm{t}\zeta(\mu, \upsilon))\,d\mathrm{t},\\ &= 2[Y^{*}(\upsilon) + Y^{*}(\mu)]\int_{0}^{1} \mathrm{t}\mathcal{S}(\upsilon + \mathrm{t}\zeta(\mu, \upsilon))\,d\mathrm{t}. \end{split} \tag{20}$$

Since

$$\begin{split}
\int_{0}^{1} Y_{*}(\upsilon + (1-\mathrm{t})\zeta(\mu, \upsilon))\mathcal{S}(\upsilon + (1-\mathrm{t})\zeta(\mu, \upsilon))d\mathrm{t} &= \int_{0}^{1} Y_{*}(\upsilon + \mathrm{t}\zeta(\mu, \upsilon))\mathcal{S}(\upsilon + \mathrm{t}\zeta(\mu, \upsilon))d\mathrm{t}\\
&= \frac{1}{\zeta(\mu, \upsilon)}\int_{\upsilon}^{\upsilon + \zeta(\mu, \upsilon)} Y_{*}(\omega)\mathcal{S}(\omega)d\omega,\\
\int_{0}^{1} Y^{*}(\upsilon + \mathrm{t}\zeta(\mu, \upsilon))\mathcal{S}(\upsilon + \mathrm{t}\zeta(\mu, \upsilon))d\mathrm{t} &= \int_{0}^{1} Y^{*}(\upsilon + (1-\mathrm{t})\zeta(\mu, \upsilon))\mathcal{S}(\upsilon + (1-\mathrm{t})\zeta(\mu, \upsilon))d\mathrm{t}\\
&= \frac{1}{\zeta(\mu, \upsilon)}\int_{\upsilon}^{\upsilon + \zeta(\mu, \upsilon)} Y^{*}(\omega)\mathcal{S}(\omega)d\omega.
\end{split} \tag{21}$$

From (21), we have


$$\begin{split}
\frac{1}{\zeta(\mu, \upsilon)}\int_{\upsilon}^{\upsilon + \zeta(\mu, \upsilon)} Y_{*}(\omega)\mathcal{S}(\omega)d\omega &\leq [Y_{*}(\upsilon) + Y_{*}(\mu)]\int_{0}^{1} \mathrm{t}\mathcal{S}(\upsilon + \mathrm{t}\zeta(\mu, \upsilon))\,d\mathrm{t},\\
\frac{1}{\zeta(\mu, \upsilon)}\int_{\upsilon}^{\upsilon + \zeta(\mu, \upsilon)} Y^{*}(\omega)\mathcal{S}(\omega)d\omega &\leq [Y^{*}(\upsilon) + Y^{*}(\mu)]\int_{0}^{1} \mathrm{t}\mathcal{S}(\upsilon + \mathrm{t}\zeta(\mu, \upsilon))\,d\mathrm{t},
\end{split}$$

that is

$$\left[\frac{1}{\zeta(\mu, \upsilon)}\int_{\upsilon}^{\upsilon + \zeta(\mu, \upsilon)} Y_{*}(\omega)\mathcal{S}(\omega)d\omega,\ \frac{1}{\zeta(\mu, \upsilon)}\int_{\upsilon}^{\upsilon + \zeta(\mu, \upsilon)} Y^{*}(\omega)\mathcal{S}(\omega)d\omega\right] \leq_{p} \left[Y_{*}(\upsilon) + Y_{*}(\mu),\ Y^{*}(\upsilon) + Y^{*}(\mu)\right]\int_{0}^{1} \mathrm{t}\mathcal{S}(\upsilon + \mathrm{t}\zeta(\mu, \upsilon))\,d\mathrm{t},$$

hence


$$\frac{1}{\zeta(\mu, \upsilon)}\,(IR)\int_{\upsilon}^{\upsilon + \zeta(\mu, \upsilon)} Y(\omega)\mathcal{S}(\omega)d\omega \leq_{p} [Y(\upsilon) + Y(\mu)]\int_{0}^{1} \mathrm{t}\mathcal{S}(\upsilon + \mathrm{t}\zeta(\mu, \upsilon))\,d\mathrm{t}.$$

$$\Box$$
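Before moving on, the inequality just proved can be sanity-checked numerically. The sketch below is not from the paper: the interval-valued function *Y*(*ω*) = [*ω*², 2*ω*² + 1], the weight S, and the choice *ζ*(*µ*, *υ*) = *µ* − *υ* on [0, 2] are hypothetical, picked so that both endpoint functions are preinvex and S is nonnegative and symmetric about *υ* + ½*ζ*(*µ*, *υ*) = 1.

```python
def midpoint(f, a, b, n=20000):
    """Composite midpoint rule for a definite integral."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

# Hypothetical data: upsilon = 0, mu = 2, zeta(mu, upsilon) = mu - upsilon = 2.
upsilon, mu, zeta = 0.0, 2.0, 2.0
Y_low = lambda w: w ** 2                            # lower endpoint function Y_*
Y_up = lambda w: 2 * w ** 2 + 1                     # upper endpoint function Y^*
S = lambda w: min(w - upsilon, upsilon + zeta - w)  # symmetric about upsilon + zeta/2

# Left-hand side: (1/zeta) * int_{v}^{v+zeta} Y(w) S(w) dw, endpointwise.
lhs_low = midpoint(lambda w: Y_low(w) * S(w), upsilon, upsilon + zeta) / zeta
lhs_up = midpoint(lambda w: Y_up(w) * S(w), upsilon, upsilon + zeta) / zeta

# Right-hand side: [Y(v) + Y(mu)] * int_0^1 t S(v + t zeta) dt, endpointwise.
weight = midpoint(lambda t: t * S(upsilon + t * zeta), 0.0, 1.0)
rhs_low = (Y_low(upsilon) + Y_low(mu)) * weight
rhs_up = (Y_up(upsilon) + Y_up(mu)) * weight

# The pseudo-order [a, b] <=_p [c, d] means a <= c and b <= d.
assert lhs_low <= rhs_low and lhs_up <= rhs_up
```

For this choice the check gives roughly [0.58, 1.67] ≤<sub>p</sub> [1.00, 2.50], consistent with the theorem.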

Now, we present a generalized version of the first *H·H*-Fejér inequality for left and right preinvex *IV-Fs*.

**Theorem 7.** *Let Y* : [*υ*, *υ* + *ζ*(*µ*, *υ*)] → K<sup>+</sup><sub>*C*</sub> *be a left and right preinvex IV-F with υ* < *υ* + *ζ*(*µ*, *υ*)*, such that Y*(*ω*) = [*Y*∗(*ω*), *Y*<sup>∗</sup>(*ω*)] *for all ω* ∈ [*υ*, *υ* + *ζ*(*µ*, *υ*)]*. If Y* ∈ TR<sub>([*υ*, *υ*+*ζ*(*µ*, *υ*)])</sub>*, if* S : [*υ*, *υ* + *ζ*(*µ*, *υ*)] → R *satisfies* S(*ω*) ≥ 0*, is symmetric with respect to υ* + ½*ζ*(*µ*, *υ*)*, and* ∫<sub>*υ*</sub><sup>*υ*+*ζ*(*µ*, *υ*)</sup> S(*ω*)*dω* > 0*, and if Condition C holds for ζ, then*

$$Y\left(\upsilon + \frac{1}{2}\zeta(\mu,\upsilon)\right) \leq\_p \frac{1}{\int\_{\upsilon}^{\upsilon+\zeta(\mu,\upsilon)} \mathcal{S}(\omega)d\omega} \ (IR) \int\_{\upsilon}^{\upsilon+\zeta(\mu,\upsilon)} Y(\omega)\mathcal{S}(\omega)d\omega. \tag{22}$$


**Proof.** Using condition C, we can write

$$
\upsilon + \frac{1}{2}\zeta(\mu, \upsilon) = \upsilon + \mathrm{t}\zeta(\mu, \upsilon) + \frac{1}{2}\zeta(\upsilon + (1 - \mathrm{t})\zeta(\mu, \upsilon),\ \upsilon + \mathrm{t}\zeta(\mu, \upsilon)).
$$

Since *Y* is left and right preinvex, we have

$$\begin{split}
Y_{*}\left(\upsilon + \frac{1}{2}\zeta(\mu, \upsilon)\right) &= Y_{*}\left(\upsilon + \mathrm{t}\zeta(\mu, \upsilon) + \frac{1}{2}\zeta(\upsilon + (1 - \mathrm{t})\zeta(\mu, \upsilon),\ \upsilon + \mathrm{t}\zeta(\mu, \upsilon))\right)\\
&\leq \frac{1}{2}\bigl(Y_{*}(\upsilon + (1 - \mathrm{t})\zeta(\mu, \upsilon)) + Y_{*}(\upsilon + \mathrm{t}\zeta(\mu, \upsilon))\bigr),\\
Y^{*}\left(\upsilon + \frac{1}{2}\zeta(\mu, \upsilon)\right) &= Y^{*}\left(\upsilon + \mathrm{t}\zeta(\mu, \upsilon) + \frac{1}{2}\zeta(\upsilon + (1 - \mathrm{t})\zeta(\mu, \upsilon),\ \upsilon + \mathrm{t}\zeta(\mu, \upsilon))\right)\\
&\leq \frac{1}{2}\bigl(Y^{*}(\upsilon + (1 - \mathrm{t})\zeta(\mu, \upsilon)) + Y^{*}(\upsilon + \mathrm{t}\zeta(\mu, \upsilon))\bigr).
\end{split} \tag{23}$$

Multiplying (23) by S(*υ* + t*ζ*(*µ*, *υ*)) = S(*υ* + (1 − t)*ζ*(*µ*, *υ*)) and integrating with respect to t over [0, 1], we obtain

$$\begin{split}
Y_{*}\left(\upsilon + \frac{1}{2}\zeta(\mu, \upsilon)\right)\int_{0}^{1}\mathcal{S}(\upsilon + \mathrm{t}\zeta(\mu, \upsilon))d\mathrm{t}
&\leq \frac{1}{2}\Biggl(\int_{0}^{1} Y_{*}(\upsilon + (1-\mathrm{t})\zeta(\mu, \upsilon))\mathcal{S}(\upsilon + (1-\mathrm{t})\zeta(\mu, \upsilon))d\mathrm{t}\\
&\qquad\quad + \int_{0}^{1} Y_{*}(\upsilon + \mathrm{t}\zeta(\mu, \upsilon))\mathcal{S}(\upsilon + \mathrm{t}\zeta(\mu, \upsilon))d\mathrm{t}\Biggr),\\
Y^{*}\left(\upsilon + \frac{1}{2}\zeta(\mu, \upsilon)\right)\int_{0}^{1}\mathcal{S}(\upsilon + \mathrm{t}\zeta(\mu, \upsilon))d\mathrm{t}
&\leq \frac{1}{2}\Biggl(\int_{0}^{1} Y^{*}(\upsilon + (1-\mathrm{t})\zeta(\mu, \upsilon))\mathcal{S}(\upsilon + (1-\mathrm{t})\zeta(\mu, \upsilon))d\mathrm{t}\\
&\qquad\quad + \int_{0}^{1} Y^{*}(\upsilon + \mathrm{t}\zeta(\mu, \upsilon))\mathcal{S}(\upsilon + \mathrm{t}\zeta(\mu, \upsilon))d\mathrm{t}\Biggr).
\end{split} \tag{24}$$

Since

$$\begin{split}
\int_{0}^{1} Y_{*}(\upsilon + (1-\mathrm{t})\zeta(\mu, \upsilon))\mathcal{S}(\upsilon + (1-\mathrm{t})\zeta(\mu, \upsilon))d\mathrm{t} &= \int_{0}^{1} Y_{*}(\upsilon + \mathrm{t}\zeta(\mu, \upsilon))\mathcal{S}(\upsilon + \mathrm{t}\zeta(\mu, \upsilon))d\mathrm{t}\\
&= \frac{1}{\zeta(\mu, \upsilon)}\int_{\upsilon}^{\upsilon + \zeta(\mu, \upsilon)} Y_{*}(\omega)\mathcal{S}(\omega)d\omega,\\
\int_{0}^{1} Y^{*}(\upsilon + \mathrm{t}\zeta(\mu, \upsilon))\mathcal{S}(\upsilon + \mathrm{t}\zeta(\mu, \upsilon))d\mathrm{t} &= \int_{0}^{1} Y^{*}(\upsilon + (1-\mathrm{t})\zeta(\mu, \upsilon))\mathcal{S}(\upsilon + (1-\mathrm{t})\zeta(\mu, \upsilon))d\mathrm{t}\\
&= \frac{1}{\zeta(\mu, \upsilon)}\int_{\upsilon}^{\upsilon + \zeta(\mu, \upsilon)} Y^{*}(\omega)\mathcal{S}(\omega)d\omega.
\end{split} \tag{25}$$

From (25), we have

$$\begin{split}
Y_{*}\left(\upsilon + \frac{1}{2}\zeta(\mu, \upsilon)\right) &\leq \frac{1}{\int_{\upsilon}^{\upsilon + \zeta(\mu, \upsilon)}\mathcal{S}(\omega)d\omega}\int_{\upsilon}^{\upsilon + \zeta(\mu, \upsilon)} Y_{*}(\omega)\mathcal{S}(\omega)d\omega,\\
Y^{*}\left(\upsilon + \frac{1}{2}\zeta(\mu, \upsilon)\right) &\leq \frac{1}{\int_{\upsilon}^{\upsilon + \zeta(\mu, \upsilon)}\mathcal{S}(\omega)d\omega}\int_{\upsilon}^{\upsilon + \zeta(\mu, \upsilon)} Y^{*}(\omega)\mathcal{S}(\omega)d\omega.
\end{split}$$

From which, we have

$$\left[Y_{*}\left(\upsilon + \frac{1}{2}\zeta(\mu, \upsilon)\right),\ Y^{*}\left(\upsilon + \frac{1}{2}\zeta(\mu, \upsilon)\right)\right] \leq_{p} \frac{1}{\int_{\upsilon}^{\upsilon + \zeta(\mu, \upsilon)}\mathcal{S}(\omega)d\omega}\left[\int_{\upsilon}^{\upsilon + \zeta(\mu, \upsilon)} Y_{*}(\omega)\mathcal{S}(\omega)d\omega,\ \int_{\upsilon}^{\upsilon + \zeta(\mu, \upsilon)} Y^{*}(\omega)\mathcal{S}(\omega)d\omega\right],$$

that is


$$Y(\boldsymbol{v} + \frac{1}{2}\zeta(\boldsymbol{\mu}, \boldsymbol{v})) \leq\_p \frac{1}{\int\_{\boldsymbol{v}}^{\boldsymbol{v} + \zeta(\boldsymbol{\mu}, \boldsymbol{v})} \mathcal{S}(\boldsymbol{\omega}) d\boldsymbol{\omega}} \text{ (IR)} \int\_{\boldsymbol{v}}^{\boldsymbol{v} + \zeta(\boldsymbol{\mu}, \boldsymbol{v})} Y(\boldsymbol{\omega}) \mathcal{S}(\boldsymbol{\omega}) d\boldsymbol{\omega}.$$

This completes the proof.
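Theorem 7 admits the same kind of numerical sanity check. The sketch below is not from the paper: it reuses the hypothetical choices *Y*(*ω*) = [*ω*², 2*ω*² + 1], a triangular weight S, and *ζ*(*µ*, *υ*) = *µ* − *υ* on [0, 2], and verifies inequality (22) endpointwise.

```python
def midpoint(f, a, b, n=20000):
    """Composite midpoint rule for a definite integral."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

# Hypothetical data (not from the paper): upsilon = 0, mu = 2, zeta = mu - upsilon.
upsilon, mu, zeta = 0.0, 2.0, 2.0
Y_low = lambda w: w ** 2                            # lower endpoint function Y_*
Y_up = lambda w: 2 * w ** 2 + 1                     # upper endpoint function Y^*
S = lambda w: min(w - upsilon, upsilon + zeta - w)  # nonnegative, symmetric about 1

a, b = upsilon, upsilon + zeta
mass = midpoint(S, a, b)        # int S(w) dw, must be > 0
mid = upsilon + zeta / 2        # the midpoint upsilon + (1/2) zeta(mu, upsilon)

# Inequality (22): Y(mid) <=_p (1 / int S) * (IR) int Y(w) S(w) dw, endpointwise.
avg_low = midpoint(lambda w: Y_low(w) * S(w), a, b) / mass
avg_up = midpoint(lambda w: Y_up(w) * S(w), a, b) / mass
assert Y_low(mid) <= avg_low and Y_up(mid) <= avg_up
```

Here the midpoint value [1, 3] is bounded by the weighted average, roughly [1.17, 3.33], as (22) predicts.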

**Remark 5.** *If one considers taking ζ*(*µ*, *υ*) = *µ* − *υ, then, by combining inequalities (17) and (22), we achieve the expected inequality.*

*If one considers taking Y*∗(*ω*) = *Y*<sup>∗</sup>(*ω*)*, then, by combining inequalities (17) and (22), we achieve the classical H·H-Fejér inequality, see [30].*





*If one considers taking Y*∗(*ω*) = *Y*<sup>∗</sup>(*ω*) *and ζ*(*µ*, *υ*) = *µ* − *υ, then, by combining inequalities (17) and (22), we acquire the classical H·H-Fejér inequality, see [33].*


**Example 5.** *We consider the IV-F Y* : [1, 1 + *ζ*(4, 1)] → K<sup>+</sup><sub>*C*</sub> *defined by Y*(*ω*) = [2, 4]*e*<sup>*ω*</sup>*. Since the end point functions Y*∗(*ω*), *Y*<sup>∗</sup>(*ω*) *are preinvex functions with respect to ζ*(*κ*, *ω*) = *κ* − *ω, Y*(*ω*) *is a left and right preinvex IV-F. If*

$$\mathcal{S}(\omega) = \begin{cases} \omega - 1, & \omega \in \left[1, \frac{5}{2}\right], \\ 4 - \omega, & \omega \in \left[\frac{5}{2}, 4\right]. \end{cases}$$

*Then, we have*

$$\begin{split}
\frac{1}{\zeta(4, 1)}\int_{1}^{1 + \zeta(4, 1)} Y_{*}(\omega)\mathcal{S}(\omega)d\omega &= \frac{1}{3}\int_{1}^{4} Y_{*}(\omega)\mathcal{S}(\omega)d\omega = \frac{1}{3}\int_{1}^{\frac{5}{2}} Y_{*}(\omega)\mathcal{S}(\omega)d\omega + \frac{1}{3}\int_{\frac{5}{2}}^{4} Y_{*}(\omega)\mathcal{S}(\omega)d\omega\\
&= \frac{2}{3}\int_{1}^{\frac{5}{2}} e^{\omega}(\omega - 1)d\omega + \frac{2}{3}\int_{\frac{5}{2}}^{4} e^{\omega}(4 - \omega)d\omega \approx 22,\\
\frac{1}{\zeta(4, 1)}\int_{1}^{1 + \zeta(4, 1)} Y^{*}(\omega)\mathcal{S}(\omega)d\omega &= \frac{1}{3}\int_{1}^{4} Y^{*}(\omega)\mathcal{S}(\omega)d\omega = \frac{1}{3}\int_{1}^{\frac{5}{2}} Y^{*}(\omega)\mathcal{S}(\omega)d\omega + \frac{1}{3}\int_{\frac{5}{2}}^{4} Y^{*}(\omega)\mathcal{S}(\omega)d\omega\\
&= \frac{4}{3}\int_{1}^{\frac{5}{2}} e^{\omega}(\omega - 1)d\omega + \frac{4}{3}\int_{\frac{5}{2}}^{4} e^{\omega}(4 - \omega)d\omega \approx 44.
\end{split} \tag{26}$$

and

$$\begin{split} &\left[Y_{*}(v) + Y_{*}(\mu)\right] \int_{0}^{1} t\,\mathcal{S}(v + t\zeta(\mu, v))\, dt = 2\left[e + e^{4}\right]\left[\int_{0}^{\frac{1}{2}} 3t^{2}\, dt + \int_{\frac{1}{2}}^{1} t(3 - 3t)\, dt\right] \approx 43, \\ &\left[Y^{*}(v) + Y^{*}(\mu)\right] \int_{0}^{1} t\,\mathcal{S}(v + t\zeta(\mu, v))\, dt = 4\left[e + e^{4}\right]\left[\int_{0}^{\frac{1}{2}} 3t^{2}\, dt + \int_{\frac{1}{2}}^{1} t(3 - 3t)\, dt\right] \approx 86. \end{split} \tag{27}$$

From (26) and (27), we have

$$[22, 44] \leq_{p} [43, 86].$$

Hence, Theorem 6 is verified. For Theorem 7, we have

$$\begin{aligned} Y_* \left( v + \frac{1}{2} \zeta(\mu, v) \right) &\approx \frac{122}{5}, \\ Y^* \left( v + \frac{1}{2} \zeta(\mu, v) \right) &\approx \frac{244}{5}, \end{aligned} \tag{28}$$

$$\int_{v}^{v + \zeta(\mu, v)} \mathcal{S}(\omega) d\omega = \int_{1}^{\frac{5}{2}} (\omega - 1) d\omega + \int_{\frac{5}{2}}^{4} (4 - \omega) d\omega = \frac{9}{4},$$


$$\begin{aligned} \frac{1}{\int_{v}^{v+\zeta(\mu,v)} \mathcal{S}(\omega)d\omega} \int_{1}^{4} Y_{*}(\omega)\mathcal{S}(\omega)d\omega &\approx \frac{146}{5}, \\ \frac{1}{\int_{v}^{v+\zeta(\mu,v)} \mathcal{S}(\omega)d\omega} \int_{1}^{4} Y^{*}(\omega)\mathcal{S}(\omega)d\omega &\approx \frac{293}{5}. \end{aligned} \tag{29}$$

From (28) and (29), we have

$$
\left[\frac{122}{5}, \frac{244}{5}\right] \le_p \left[\frac{146}{5}, \frac{293}{5}\right].
$$

Hence, Theorem 7 is verified.
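The numerical approximations in (26)–(29) can be reproduced with a few lines of quadrature. This is a sketch with our own helper names (`integrate`, `Y_lo`, `Y_hi`), not code from the paper:

```python
import math


def integrate(f, a, b, n=10_000):
    # Composite Simpson's rule (n must be even).
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3


# Endpoint functions of Y(w) = [2, 4]e^w and the weight S from Example 5.
Y_lo = lambda w: 2 * math.exp(w)
Y_hi = lambda w: 4 * math.exp(w)
S = lambda w: w - 1 if w <= 2.5 else 4 - w

# Left side of (26): (1/zeta(4,1)) * int_1^4 Y(w) S(w) dw, with zeta(4,1) = 3.
lo = integrate(lambda w: Y_lo(w) * S(w), 1, 4) / 3   # approx 22
hi = integrate(lambda w: Y_hi(w) * S(w), 1, 4) / 3   # approx 44

# Right side of (27): [Y(v) + Y(mu)] * int_0^1 t S(v + t*zeta(mu, v)) dt,
# with v = 1, mu = 4, zeta(mu, v) = 3.
wt = integrate(lambda t: t * S(1 + 3 * t), 0, 1)
rlo = (Y_lo(1) + Y_lo(4)) * wt   # approx 43
rhi = (Y_hi(1) + Y_hi(4)) * wt   # approx 86

print(round(lo), round(hi), round(rlo), round(rhi))
```

Rounding confirms the inclusion [22, 44] ≤<sub>p</sub> [43, 86] claimed above.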

#### **4. Conclusions and Prospective Results**

In this study, the notion of left and right preinvex functions in the interval-valued setting was presented. For left and right preinvex interval-valued functions, we constructed Hermite–Hadamard type inequalities, as well as inequalities for the product of two left and right preinvex interval-valued functions. We also established a Hermite–Hadamard–Fejér type inequality, discussed some special cases, and provided examples to prove the validity of our main results. In the future, we will seek to extend this concept using different fractional integral operators, such as Riemann–Liouville fractional operators, Katugampola fractional operators, and generalized K-fractional operators.

Finally, we think that our results may be relevant to other fractional calculus models having Mittag–Leffler functions in their kernels, such as the Atangana–Baleanu and Prabhakar fractional operators. This is posed as an open problem for academics interested in this topic; interested researchers may follow the steps outlined in references [54,55].

**Author Contributions:** Conceptualization, M.B.K.; methodology, M.B.K.; validation, S.T., M.S.S. and H.G.Z.; formal analysis, K.N.; investigation, M.S.S.; resources, S.T.; data curation, H.G.Z.; writing—original draft preparation, M.B.K., K.N. and H.G.Z.; writing—review and editing, M.B.K. and S.T.; visualization, H.G.Z.; supervision, M.B.K. and M.S.S.; project administration, M.B.K.; funding acquisition, K.N., M.S.S. and H.G.Z. All authors have read and agreed to the published version of the manuscript.

**Funding:** The authors would like to thank the Rector, COMSATS University Islamabad, Islamabad, Pakistan, for providing excellent research support. This work was funded by Taif University Researchers Supporting Project number (TURSP-2020/345), Taif University, Taif, Saudi Arabia. In addition, this research has received funding support from the National Science, Research and Innovation Fund (NSRF), Thailand.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Not applicable.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


## *Article* **Hermite-Hadamard-Type Fractional Inclusions for Interval-Valued Preinvex Functions**

**Kin Keung Lai <sup>1,</sup>\*, Jaya Bisht <sup>2</sup>, Nidhi Sharma <sup>2</sup> and Shashi Kant Mishra <sup>2</sup>**


**Abstract:** We introduce a new class of interval-valued preinvex functions termed harmonically *h*-preinvex interval-valued functions. We establish a new Hermite–Hadamard inclusion for harmonically *h*-preinvex interval-valued functions via interval-valued Riemann–Liouville fractional integrals. Further, we prove fractional Hermite–Hadamard-type inclusions for the product of two harmonically *h*-preinvex interval-valued functions. In this way, these findings include several well-known results and newly obtained results of the existing literature as special cases. Moreover, applications of the main results are demonstrated by presenting some examples.

**Keywords:** Hermite–Hadamard inequalities; harmonical convex functions; interval-valued functions; fractional integrals

#### **1. Introduction**

It is well known that extensive literature on the class of integral inequalities has been developed under various notions of convexity; see, for instance, [1–6]. Inspired by the importance of convexity in multiple fields of pure and applied sciences, researchers generalized and extended the notion of convexity in various settings. A useful generalization of convex functions, called invex functions, was introduced by Hanson [7]. In 1986, Ben-Israel and Mond [8] proposed the notion of preinvex functions and showed that every differentiable preinvex function is invex, but the converse may not be true. Yang and Li [9] provided two conditions that determine the preinvexity of a function via an intermediate-point preinvexity check under conditions of upper and lower semicontinuity, respectively.

On the other hand, interval analysis was introduced to handle interval uncertainty in many mathematical or computer models of some deterministic real-world phenomena. Moore [10] was the first to propose the concept of interval analysis and extend the arithmetic of intervals to the computer. Moore et al. [11] discussed an arithmetic for intervals, integration of interval functions, and interval Newton methods. Bhurjee and Panda [12] provided a methodology to determine the efficient solution of general multi-objective interval fractional programming problem. Lupulescu [13] gave a theory of the fractional calculus for interval-valued functions using gH-difference for closed intervals. Further, Li et al. [14] introduced the concept of invexity using gH-derivative of interval-valued functions and derived Kuhn–Tucker optimality conditions for an interval-valued objective function. Interval analysis has applications in various fields such as experimental and computational physics, error analysis, computer graphics, robotics, numerical integration, and many other fields (see [15–19]).

#### **2. Literature Survey**

I¸scan [20] proposed the concept of harmonically convex functions and presented some Hermite–Hadamard (H–H)-type inequalities for harmonically convex functions. Noor et al. [21] defined a new class of preinvex functions named h-preinvex functions and

**Citation:** Lai, K.K.; Bisht, J.; Sharma, N.; Mishra, S.K. Hermite-Hadamard-Type Fractional Inclusions for Interval-Valued Preinvex Functions. *Mathematics* **2022**, *10*, 264. https:// doi.org/10.3390/math10020264

Academic Editor: Savin Treanta

Received: 22 December 2021 Accepted: 13 January 2022 Published: 16 January 2022

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

<sup>1</sup> International Business School, Shaanxi Normal University, Xi'an 710119, China

established H–H-type inequalities for these preinvex functions under certain conditions. Further, Noor et al. [22] introduced harmonic *h*-preinvex functions and obtained Ostrowski type inequalities for harmonic *h*-preinvex functions using Riemann–Liouville (R–L) fractional integrals. In recent years, several integral inequalities for different types of preinvex functions have been investigated by many authors; see, for instance, [23–30].

Cano et al. [31] obtained some Ostrowski type inequalities for interval-valued functions using gH-derivative. Zhao et al. [32] investigated Riemann interval delta integrals for interval-valued functions on time scales and proved Jensen's, Hölder's, and Minkowski's inequalities using Riemann interval delta integrals. Budak et al. [33] defined right-sided R–L fractional integrals for interval-valued functions and obtained H–H-type inequalities for interval-valued R–L fractional integrals. Lou et al. [34] presented the notions of the Iq-integral and Iq-derivative and gave the Iq-H–H inequalities for interval-valued functions. Further, numerous concepts of quantum calculus for interval-valued functions have been investigated by [35–37].

Considering the importance of interval analysis, many researchers established relations between integral inequalities and different types of interval-valued functions. Zhao et al. [38] introduced the notion of harmonical *h*-convexity for interval-valued functions and proved some new H–H-type inequalities for the interval Riemann integral. Further, Zhao et al. [39,40] introduced the concept of interval-valued coordinated convexity and established H–H-type inequalities for newly defined interval-valued coordinated convex functions. Recently, Sharma et al. [41] introduced (*h*1, *h*2)-preinvex interval-valued functions and derived fractional H–H-type inequalities for this class of interval-valued preinvex functions. Zhou et al. [42] derived H–H-type inequalities for interval-valued exponential type preinvex functions for the R–L interval-valued fractional operator. For more inequalities for interval-valued functions, see references [43–49].

The work in this paper is mainly motivated by Zhao et al. [38] and Shi et al. [50]. We propose the concept of harmonically *h*-preinvex interval-valued functions, which includes harmonical *h*-convex interval-valued functions as a special case. We prove new fractional inclusions of H–H-type for harmonically *h*-preinvex interval-valued functions. We also present H–H-type inclusions for the product of two harmonically *h*-preinvex interval-valued functions for interval-valued R–L fractional integrals. Further, we discuss some special cases of our main results. The results obtained in this paper may be generalized for other kinds of interval-valued fractional integrals involving harmonically *h*-preinvex interval-valued functions. As future directions, we can investigate interval-valued preinvexity on coordinates and establish new inclusions of H–H-type for interval-valued coordinated preinvex functions.

The paper is organized as follows. In Section 3, we recall some basic definitions and notions of interval analysis, together with the related results required for this paper. In Section 4, we define harmonically *h*-preinvexity of interval-valued functions and prove fractional H–H-type inclusions for harmonically *h*-preinvex interval-valued functions; some special cases of these results are also discussed there. In Section 5, we discuss the results obtained in this paper. Finally, in Section 6, conclusions and future directions of this study are given.

#### **3. Preliminaries**

Let *X<sup>I</sup>* be the collection of all closed intervals of R and ∆ ∈ *X<sup>I</sup>* . Then, interval ∆ is defined by:

$$\Delta = [\underline{\Delta}, \overline{\Delta}] = \{ \mu \in \mathbb{R} \mid \underline{\Delta} \le \mu \le \overline{\Delta} \}, \quad \underline{\Delta}, \overline{\Delta} \in \mathbb{R}.$$

We say ∆ is positive if ∆̲ > 0 and negative if ∆̄ < 0. We denote the set of all positive closed intervals by *X*<sup>+</sup><sub>*I*</sub> and the set of all negative closed intervals by *X*<sup>−</sup><sub>*I*</sub>. The following binary operations for intervals ∆<sub>1</sub> = [∆̲<sub>1</sub>, ∆̄<sub>1</sub>] and ∆<sub>2</sub> = [∆̲<sub>2</sub>, ∆̄<sub>2</sub>] are given in [17].

$$
\Delta_1 + \Delta_2 = [\underline{\Delta}_1, \overline{\Delta}_1] + [\underline{\Delta}_2, \overline{\Delta}_2] = [\underline{\Delta}_1 + \underline{\Delta}_2, \overline{\Delta}_1 + \overline{\Delta}_2],
$$

$$
\Delta_1 - \Delta_2 = [\underline{\Delta}_1, \overline{\Delta}_1] - [\underline{\Delta}_2, \overline{\Delta}_2] = [\underline{\Delta}_1 - \overline{\Delta}_2, \overline{\Delta}_1 - \underline{\Delta}_2],
$$

$$
\Delta_1 \Delta_2 = [\min\{\underline{\Delta}_1 \underline{\Delta}_2, \underline{\Delta}_1 \overline{\Delta}_2, \overline{\Delta}_1 \underline{\Delta}_2, \overline{\Delta}_1 \overline{\Delta}_2\},\ \max\{\underline{\Delta}_1 \underline{\Delta}_2, \underline{\Delta}_1 \overline{\Delta}_2, \overline{\Delta}_1 \underline{\Delta}_2, \overline{\Delta}_1 \overline{\Delta}_2\}],
$$

$$
1/\Delta = \{1/u : 0 \ne u \in \Delta\} = [1/\overline{\Delta}, 1/\underline{\Delta}],
$$

$$
\Delta\_1/\Delta\_2 = \Delta\_1.(1/\Delta\_2) = \{u.(1/v) : u \in \Delta\_1, 0 \ne v \in \Delta\_2\},
$$

$$
\rho \Delta = \rho[\underline{\Delta}, \overline{\Delta}] = \begin{cases}
[\rho \underline{\Delta}, \rho \overline{\Delta}], & \text{if } \rho > 0, \\
\{0\}, & \text{if } \rho = 0, \\
[\rho \overline{\Delta}, \rho \underline{\Delta}], & \text{if } \rho < 0,
\end{cases}
$$

where *ρ* ∈ R.
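The operations above translate directly into code. The following is a minimal Python sketch (the class name `Interval` and its method names are ours, not from the paper):

```python
class Interval:
    """Closed interval [lo, hi] with the arithmetic defined above."""

    def __init__(self, lo, hi):
        assert lo <= hi, "lower endpoint must not exceed upper endpoint"
        self.lo, self.hi = lo, hi

    def __add__(self, o):
        # [a, b] + [c, d] = [a + c, b + d]
        return Interval(self.lo + o.lo, self.hi + o.hi)

    def __sub__(self, o):
        # [a, b] - [c, d] = [a - d, b - c]
        return Interval(self.lo - o.hi, self.hi - o.lo)

    def __mul__(self, o):
        # [a, b][c, d] = [min of products, max of products]
        p = (self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi)
        return Interval(min(p), max(p))

    def inverse(self):
        # 1/[a, b] = [1/b, 1/a], defined only when 0 is not in the interval
        assert self.lo > 0 or self.hi < 0
        return Interval(1 / self.hi, 1 / self.lo)

    def scale(self, rho):
        # rho * [a, b], with the endpoints swapped when rho < 0
        if rho == 0:
            return Interval(0, 0)
        lo, hi = rho * self.lo, rho * self.hi
        return Interval(min(lo, hi), max(lo, hi))

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"


print(Interval(1, 2) + Interval(3, 5))   # [4, 7]
print(Interval(1, 2) * Interval(-1, 3))  # [-2, 6]
print(Interval(1, 2).scale(-2))          # [-4, -2]
```

Division `Δ₁/Δ₂` then composes `__mul__` with `inverse`, exactly as in the definition above.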

**Definition 1** ([17])**.** *A function ψ is called an interval-valued function on* [*ω*1, *ω*2] *if it assigns a nonempty interval to each u* ∈ [*ω*1, *ω*2] *and*

$$\psi(u) = [\underline{\psi}(u), \overline{\psi}(u)],$$

*where ψ and ψ are real-valued functions.*

**Theorem 1** ([11])**.** *Let ψ* : [*ω*1, *ω*2] → *X<sup>I</sup> be an interval-valued function such that ψ*(*u*) = [*ψ*(*u*), *ψ*(*u*)]. *Then, ψ is interval Riemann integrable (IR*−*integrable) on* [*ω*1, *ω*2] *if and only if ψ*(*u*) *and ψ*(*u*) *are Riemann integrable (R*−*integrable) on* [*ω*1, *ω*2] *and*

$$(IR)\int\_{\omega\_1}^{\omega\_2} \psi(u) du = \left[ (R) \int\_{\omega\_1}^{\omega\_2} \underline{\psi}(u) du, (R) \int\_{\omega\_1}^{\omega\_2} \overline{\psi}(u) du \right].$$

The collection of all *R*-integrable and *IR*-integrable functions on [*ω*1, *ω*2] is denoted by *R*([*ω*<sup>1</sup>, *ω*<sup>2</sup>]) and *IR*([*ω*<sup>1</sup>, *ω*<sup>2</sup>]), respectively.
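Theorem 1 says the IR-integral is computed endpoint-wise, which makes it trivial to evaluate numerically. A minimal sketch (the helpers `simpson` and `ir_integral` are ours):

```python
def simpson(f, a, b, n=2000):
    # Composite Simpson's rule (n must be even).
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3


def ir_integral(psi_lo, psi_hi, a, b):
    # Theorem 1: the IR-integral is the interval of the two ordinary
    # Riemann integrals of the endpoint functions.
    return (simpson(psi_lo, a, b), simpson(psi_hi, a, b))


# psi(u) = [u^2, u] on [0, 1] (u^2 <= u there), so the IR-integral is [1/3, 1/2].
lo, hi = ir_integral(lambda u: u * u, lambda u: u, 0.0, 1.0)
print(lo, hi)
```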

**Definition 2** ([51])**.** *Let ψ* ∈ *L*1[*ω*1, *ω*2]*. The R–L fractional integrals J*<sup>*α*</sup><sub>*ω*1+</sub>*ψ and J*<sup>*α*</sup><sub>*ω*2−</sub>*ψ of order α* > 0 *with ω*<sup>1</sup> ≥ 0 *are defined by*

$$J\_{\omega\_1^+}^{\alpha}\psi(u) = \frac{1}{\Gamma(\alpha)}\int\_{\omega\_1}^{u} (u-\epsilon)^{(\alpha-1)}\psi(\epsilon)d\epsilon,\ \ u > \omega\_1$$

*and*

$$J_{\omega_2^-}^{\alpha} \psi(u) = \frac{1}{\Gamma(\alpha)} \int_u^{\omega_2} (\epsilon - u)^{(\alpha-1)} \psi(\epsilon) d\epsilon, \ u < \omega_2,$$

*respectively. Here,* Γ(.) *is the Gamma function defined by*

$$
\Gamma(\alpha) = \int_0^\infty e^{-\epsilon} \epsilon^{\alpha-1} d\epsilon.
$$
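Definition 2 is straightforward to evaluate numerically; the midpoint rule copes with the integrable singularity at *ε* = *u*. This is a sketch under our own naming (`rl_fractional_integral` is not from the paper), validated against the known closed form *J<sup>α</sup>* of *ε* ↦ *ε*:

```python
import math


def rl_fractional_integral(psi, alpha, a, u, n=100_000):
    # J^alpha_{a+} psi(u) = (1/Gamma(alpha)) * int_a^u (u - e)^(alpha-1) psi(e) de.
    # Midpoint rule: never evaluates the kernel exactly at e = u.
    h = (u - a) / n
    s = sum(
        (u - (a + (i + 0.5) * h)) ** (alpha - 1) * psi(a + (i + 0.5) * h)
        for i in range(n)
    )
    return s * h / math.gamma(alpha)


# Known closed form: J^alpha_{0+} applied to psi(e) = e gives
# u^(1 + alpha) / Gamma(2 + alpha).
alpha, u = 0.5, 1.0
approx = rl_fractional_integral(lambda e: e, alpha, 0.0, u)
exact = u ** (1 + alpha) / math.gamma(2 + alpha)
print(approx, exact)
```

The midpoint rule converges slowly near the singularity (error on the order of √h), so a large `n` is used; a quadrature tailored to the weight (u − ε)^(α−1) would be the production choice.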

**Definition 3** ([13,33])**.** *Let ψ* : [*ω*1, *ω*2] → *X<sup>I</sup> be an interval-valued function and ψ* ∈ *IR*([*ω*<sup>1</sup> ,*ω*2]). *The interval-valued R–L fractional integrals of function ψ are defined by*

$$J_{\omega_1^+}^{\alpha}\psi(u) = \frac{1}{\Gamma(\alpha)} (IR) \int_{\omega_1}^u (u - \epsilon)^{(\alpha-1)} \psi(\epsilon) d\epsilon, \ u > \omega_1, \alpha > 0$$

*and*

$$J_{\omega_2^-}^{\alpha}\psi(u) = \frac{1}{\Gamma(\alpha)} (IR) \int_u^{\omega_2} (\epsilon - u)^{(\alpha - 1)} \psi(\epsilon) d\epsilon, \ u < \omega_2, \alpha > 0,$$

*where* Γ(*α*) *is the Gamma function.*

**Corollary 1** ([33])**.** *If ψ* : [*ω*1, *ω*2] → *X<sup>I</sup> is an interval-valued function such that ψ*(*u*) = [*ψ*(*u*), *ψ*(*u*)] *with ψ*(*u*), *ψ*(*u*) ∈ *R*([*ω*<sup>1</sup> ,*ω*2]), *then we have*

$$J_{\omega_1^+}^{\alpha}\psi(u) = [J_{\omega_1^+}^{\alpha}\underline{\psi}(u), J_{\omega_1^+}^{\alpha}\overline{\psi}(u)]$$

*and*

$$J_{\omega_2^-}^{\alpha}\psi(u) = [J_{\omega_2^-}^{\alpha}\underline{\psi}(u), J_{\omega_2^-}^{\alpha}\overline{\psi}(u)].$$

**Definition 4** ([52])**.** *A set I* = [*ω*1, *ω*2] ⊆ R\{0} *is called a harmonic convex set if*

$$\frac{uv}{tu + (1-t)v} \in I, \quad \forall u, v \in I, \quad t \in [0, 1].$$

**Definition 5** ([20])**.** *A function ψ* : *I* = [*ω*1, *ω*2] ⊆ R\{0} → R *is called harmonic convex, if*

$$
\psi\left(\frac{uv}{tu+(1-t)v}\right) \le (1-t)\psi(u) + t\psi(v), \ \forall u,v \in I, \ t \in [0,1].
$$

Now we consider some concepts for harmonic preinvex functions. Let *ψ* : *I* ⊆ R\{0} → R and *η*(., .) : *I* × *I* → R be continuous functions.

**Definition 6** ([53])**.** *A set I* = [*ω*1, *ω*<sup>1</sup> + *η*(*ω*2, *ω*1)] ⊆ R\{0} *is called a harmonic invex with respect to η*(., .)*, if*

$$\frac{u(u+\eta(v,u))}{u+(1-t)\eta(v,u)} \in I, \quad \forall u,v \in I, \quad t \in [0, 1].$$

It is well known that every harmonic convex set is harmonic invex with respect to *η*(*v*, *u*) = *v* − *u* but not conversely.

**Definition 7** ([53])**.** *A function ψ* : *I* = [*ω*1, *ω*<sup>1</sup> + *η*(*ω*2, *ω*1)] ⊆ R\{0} → R *is said to be harmonic preinvex with respect to the bifunction η*(., .)*, if*

$$
\psi\left(\frac{u(u+\eta(v,u))}{u+(1-t)\eta(v,u)}\right) \le (1-t)\psi(u) + t\psi(v), \ \forall u,v \in I, \ t \in [0,1].
$$

*Condition C* [54]. Let *I* ⊆ R be an invex set with respect to *η*(., .). Then, *η* satisfies Condition C if, for any *t* ∈ [0, 1] and any *u*, *v* ∈ *I*,

$$
\eta(v, v + t\eta(u, v)) = -t\eta(u, v),
$$

$$
\eta(u, v + t\eta(u, v)) = (1 - t)\eta(u, v).
$$

Note that, for all *t*1, *t*<sup>2</sup> ∈ [0, 1] and *u*, *v* ∈ *I*, Condition C gives

$$
\eta(v + t\_2 \eta(u, v), v + t\_1 \eta(u, v)) = (t\_2 - t\_1)\eta(u, v).
$$
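Condition C is easy to check mechanically for a concrete bifunction; the following randomized sketch (entirely ours) verifies both identities for the trivial choice *η*(*u*, *v*) = *u* − *v*:

```python
import random

# Condition C for eta(u, v) = u - v: with w = v + t*eta(u, v),
#   eta(v, w) = -t * eta(u, v)  and  eta(u, w) = (1 - t) * eta(u, v).
eta = lambda u, v: u - v

random.seed(0)
for _ in range(100):
    u, v = random.uniform(1, 5), random.uniform(1, 5)
    t = random.random()
    w = v + t * eta(u, v)
    assert abs(eta(v, w) - (-t) * eta(u, v)) < 1e-12
    assert abs(eta(u, w) - (1 - t) * eta(u, v)) < 1e-12

print("Condition C verified for eta(u, v) = u - v")
```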

**Theorem 2** ([55])**.** *Let ψ* : *I* = [*ω*1, *ω*<sup>1</sup> + *η*(*ω*2, *ω*1)] ⊆ R → (0, ∞) *be a preinvex function on I and ω*1, *ω*<sup>2</sup> ∈ *I with ω*<sup>1</sup> < *ω*<sup>1</sup> + *η*(*ω*2, *ω*1)*. Then*

$$\psi\left(\frac{2\omega\_1 + \eta(\omega\_2, \omega\_1)}{2}\right) \le \frac{1}{\eta(\omega\_2, \omega\_1)} \int\_{\omega\_1}^{\omega\_1 + \eta(\omega\_2, \omega\_1)} \psi(u) du \le \frac{\psi(\omega\_1) + \psi(\omega\_2)}{2}.$$

*which is called the H–H-Noor inequality.*

**Definition 8** ([41])**.** *If I* ⊆ R *is an invex set with respect to η*(., .), *ψ*(*u*) = [*ψ*(*u*), *ψ*(*u*)] *is an interval-valued function on I*. *Then ψ is preinvex interval-valued function on I with respect to η*(., .) *if*

*ψ*(*v* + *tη*(*u*, *v*)) ⊇ *tψ*(*u*) + (1 − *t*)*ψ*(*v*), ∀ *t* ∈ [0, 1] *and* ∀ *u*, *v* ∈ *I*.

#### **4. Main Results**

In this section, first, we define harmonically *h*-preinvex interval-valued function and discuss some special cases of harmonically *h*-preinvex interval-valued function.

**Definition 9.** *Let h* : [0, 1] ⊆ *J* → R *be a non-negative function such that h* ≢ 0*, and let I* ⊆ R\{0} *be a harmonic invex set with respect to η*(., .)*. Let ψ* : *I* ⊆ R\{0} → *X*<sup>+</sup><sub>*I*</sub> *be an interval-valued function on the set I; then ψ is called a harmonically h-preinvex interval-valued function with respect to η*(., .) *if*

$$\psi\left(\frac{u(u+\eta(v,u))}{u+(1-t)\eta(v,u)}\right) \supseteq h(1-t)\psi(u) + h(t)\psi(v), \ \forall \ t \in [0,1] \ and \ \forall \ u,v \in I.$$

Now, we consider some special cases of harmonically *h*-preinvex interval-valued functions.

For *h*(*t*) = 1, the function *ψ* is called a harmonically *P*-preinvex interval-valued function.

For *h*(*t*) = *t*, the function *ψ* is called a harmonically preinvex interval-valued function.

If *h*(*t*) = *t<sup>s</sup>*, *s* ∈ (0, 1), then we obtain the definition of Breckner-type *s*-harmonically preinvex interval-valued functions.

If *h*(*t*) = *t*<sup>−*s*</sup>, *s* ∈ (0, 1), then we obtain the definition of Godunova–Levin-type *s*-harmonically preinvex interval-valued functions.

**Example 1.** *Let I* = [1, 2] ⊂ R\{0}, *ψ*(*u*) = [1 − 1/(2*u*<sup>2</sup>), 1 + 1/(2*u*)], *η*(*v*, *u*) = *v* − 2*u, and h*(*t*) = *t; then ψ is a harmonically h-preinvex interval-valued function on I.*

Now, we establish fractional inclusion of H–H for harmonically *h*-preinvex intervalvalued functions.

**Theorem 3.** *Let h* : [0, 1] → R *be a non-negative function such that h*(1/2) ≠ 0*. Let ψ* : *I* = [*ω*1, *ω*<sup>1</sup> + *η*(*ω*2, *ω*1)] ⊆ R\{0} → *X*<sup>+</sup><sub>*I*</sub> *be a harmonically h-preinvex interval-valued function such that ψ* = [*ψ*, *ψ*] *and ω*1, *ω*<sup>2</sup> ∈ *I with ω*<sup>1</sup> < *ω*<sup>1</sup> + *η*(*ω*2, *ω*1). *If ψ* ∈ *L*[*ω*1, *ω*<sup>1</sup> + *η*(*ω*2, *ω*1)]*, α* > 0 *and η satisfies Condition C, then*

$$\begin{split} &\frac{1}{\alpha h(\frac{1}{2})}\psi\left(\frac{2\omega_{1}(\omega_{1}+\eta(\omega_{2},\omega_{1}))}{2\omega_{1}+\eta(\omega_{2},\omega_{1})}\right) \\ &\supseteq \Gamma(\alpha) \left(\frac{\omega_{1}(\omega_{1}+\eta(\omega_{2},\omega_{1}))}{\eta(\omega_{2},\omega_{1})}\right)^{\alpha} \left[J^{\alpha}_{\left(\frac{1}{\omega_{1}+\eta(\omega_{2},\omega_{1})}\right)^{+}}(\psi\circ\Omega)\left(\frac{1}{\omega_{1}}\right)+J^{\alpha}_{\left(\frac{1}{\omega_{1}}\right)^{-}}(\psi\circ\Omega)\left(\frac{1}{\omega_{1}+\eta(\omega_{2},\omega_{1})}\right) \right] \\ &\supseteq \left[\psi(\omega_{1})+\psi(\omega_{1}+\eta(\omega_{2},\omega_{1}))\right] \int_{0}^{1}t^{\alpha-1}[h(t)+h(1-t)]dt, \end{split}$$

*where* Ω(*u*) = 1/*u and ψ*∘Ω *is defined by* (*ψ*∘Ω)(*u*) = *ψ*(Ω(*u*)), ∀ *u* ∈ [1/(*ω*<sup>1</sup>+*η*(*ω*2,*ω*<sup>1</sup>)), 1/*ω*<sup>1</sup>].

**Proof.** As *ψ* is harmonically *h*-preinvex interval-valued function on [*ω*1, *ω*<sup>1</sup> + *η*(*ω*2, *ω*1)], we have

$$\frac{1}{h(\frac{1}{2})}\psi\left(\frac{2u(u+\eta(v,u))}{2u+\eta(v,u)}\right) \supseteq \psi(u) + \psi(v), \; \forall \; u, v \in [\omega\_1, \omega\_1 + \eta(\omega\_2, \omega\_1)].\tag{1}$$

Let *u* = *ω*<sup>1</sup>(*ω*<sup>1</sup>+*η*(*ω*2,*ω*<sup>1</sup>))/(*ω*<sup>1</sup>+(1−*t*)*η*(*ω*2,*ω*<sup>1</sup>)) and *v* = *ω*<sup>1</sup>(*ω*<sup>1</sup>+*η*(*ω*2,*ω*<sup>1</sup>))/(*ω*<sup>1</sup>+*tη*(*ω*2,*ω*<sup>1</sup>)). Then, using Condition C in (1), we find

$$\frac{1}{h\left(\frac{1}{2}\right)}\psi\left(\frac{2\omega_{1}(\omega_{1}+\eta(\omega_{2},\omega_{1}))}{2\omega_{1}+\eta(\omega_{2},\omega_{1})}\right) \supseteq \psi\left(\frac{\omega_{1}(\omega_{1}+\eta(\omega_{2},\omega_{1}))}{\omega_{1}+(1-t)\eta(\omega_{2},\omega_{1})}\right) + \psi\left(\frac{\omega_{1}(\omega_{1}+\eta(\omega_{2},\omega_{1}))}{\omega_{1}+t\eta(\omega_{2},\omega_{1})}\right). \tag{2}$$

Multiplying (2) by *t*<sup>*α*−1</sup>, *α* > 0, and integrating over [0, 1] with respect to *t*, we have

$$\begin{split} \frac{1}{h(\frac{1}{2})} (IR) \int_{0}^{1} t^{\alpha-1} \psi \left( \frac{2\omega_{1} (\omega_{1} + \eta(\omega_{2}, \omega_{1}))}{2\omega_{1} + \eta(\omega_{2}, \omega_{1})} \right) dt &\supseteq (IR) \int_{0}^{1} t^{\alpha-1} \psi \left( \frac{\omega_{1} (\omega_{1} + \eta(\omega_{2}, \omega_{1}))}{\omega_{1} + (1 - t)\eta(\omega_{2}, \omega_{1})} \right) dt \\ &\qquad + (IR) \int_{0}^{1} t^{\alpha-1} \psi \left( \frac{\omega_{1} (\omega_{1} + \eta(\omega_{2}, \omega_{1}))}{\omega_{1} + t\eta(\omega_{2}, \omega_{1})} \right) dt. \end{split} \tag{3}$$

Applying Theorem 1 to the above relation, we find

$$\begin{split} & (IR) \int_{0}^{1} t^{\alpha-1} \psi \left( \frac{2 \omega_{1} (\omega_{1} + \eta(\omega_{2}, \omega_{1}))}{2 \omega_{1} + \eta(\omega_{2}, \omega_{1})} \right) dt \\ &= \left[ (R) \int_{0}^{1} t^{\alpha-1} \underline{\psi} \left( \frac{2 \omega_{1} (\omega_{1} + \eta(\omega_{2}, \omega_{1}))}{2 \omega_{1} + \eta(\omega_{2}, \omega_{1})} \right) dt, (R) \int_{0}^{1} t^{\alpha-1} \overline{\psi} \left( \frac{2 \omega_{1} (\omega_{1} + \eta(\omega_{2}, \omega_{1}))}{2 \omega_{1} + \eta(\omega_{2}, \omega_{1})} \right) dt \right] \\ &= \left[ \frac{1}{\alpha} \underline{\psi} \left( \frac{2 \omega_{1} (\omega_{1} + \eta(\omega_{2}, \omega_{1}))}{2 \omega_{1} + \eta(\omega_{2}, \omega_{1})} \right), \frac{1}{\alpha} \overline{\psi} \left( \frac{2 \omega_{1} (\omega_{1} + \eta(\omega_{2}, \omega_{1}))}{2 \omega_{1} + \eta(\omega_{2}, \omega_{1})} \right) \right] = \frac{1}{\alpha} \psi \left( \frac{2 \omega_{1} (\omega_{1} + \eta(\omega_{2}, \omega_{1}))}{2 \omega_{1} + \eta(\omega_{2}, \omega_{1})} \right), \end{split} \tag{4}$$

$$\begin{split} & (IR) \int_{0}^{1} t^{\alpha-1} \psi\left(\frac{\omega_{1}(\omega_{1}+\eta(\omega_{2},\omega_{1}))}{\omega_{1}+(1-t)\eta(\omega_{2},\omega_{1})}\right) dt \\ &= \left[ (R) \int_{0}^{1} t^{\alpha-1} \underline{\psi}\left(\frac{\omega_{1}(\omega_{1}+\eta(\omega_{2},\omega_{1}))}{\omega_{1}+(1-t)\eta(\omega_{2},\omega_{1})}\right) dt, (R) \int_{0}^{1} t^{\alpha-1} \overline{\psi}\left(\frac{\omega_{1}(\omega_{1}+\eta(\omega_{2},\omega_{1}))}{\omega_{1}+(1-t)\eta(\omega_{2},\omega_{1})}\right) dt \right] \\ &= \Gamma(\alpha) \left( \frac{\omega_{1}(\omega_{1}+\eta(\omega_{2},\omega_{1}))}{\eta(\omega_{2},\omega_{1})} \right)^{\alpha} \left[ J^{\alpha}_{\left(\frac{1}{\omega_{1}+\eta(\omega_{2},\omega_{1})}\right)^{+}} (\underline{\psi}\circ\Omega) \left(\frac{1}{\omega_{1}}\right), J^{\alpha}_{\left(\frac{1}{\omega_{1}+\eta(\omega_{2},\omega_{1})}\right)^{+}} (\overline{\psi}\circ\Omega) \left(\frac{1}{\omega_{1}}\right) \right] \\ &= \Gamma(\alpha) \left( \frac{\omega_{1}(\omega_{1}+\eta(\omega_{2},\omega_{1}))}{\eta(\omega_{2},\omega_{1})} \right)^{\alpha} J^{\alpha}_{\left(\frac{1}{\omega_{1}+\eta(\omega_{2},\omega_{1})}\right)^{+}} (\psi\circ\Omega) \left(\frac{1}{\omega_{1}}\right). \end{split} \tag{5}$$

Similarly,

$$(IR)\int_0^1 t^{\alpha-1} \psi\left(\frac{\omega_1(\omega_1 + \eta(\omega_2, \omega_1))}{\omega_1 + t\eta(\omega_2, \omega_1)}\right) dt = \Gamma(\alpha) \left(\frac{\omega_1(\omega_1 + \eta(\omega_2, \omega_1))}{\eta(\omega_2, \omega_1)}\right)^{\alpha} J^{\alpha}_{\left(\frac{1}{\omega_1}\right)^{-}}(\psi\circ\Omega) \left(\frac{1}{\omega_1 + \eta(\omega_2, \omega_1)}\right). \tag{6}$$

Using (4)–(6) in (3), we have

$$\begin{split} \frac{1}{\alpha h(\frac{1}{2})}\psi\left(\frac{2\omega_1(\omega_1+\eta(\omega_2,\omega_1))}{2\omega_1+\eta(\omega_2,\omega_1)}\right) \supseteq \Gamma(\alpha) \left(\frac{\omega_1(\omega_1+\eta(\omega_2,\omega_1))}{\eta(\omega_2,\omega_1)}\right)^{\alpha} \Big[ & J^{\alpha}_{\left(\frac{1}{\omega_1+\eta(\omega_2,\omega_1)}\right)^{+}}(\psi\circ\Omega)\left(\frac{1}{\omega_1}\right) \\ & + J^{\alpha}_{\left(\frac{1}{\omega_1}\right)^{-}}(\psi\circ\Omega)\left(\frac{1}{\omega_1+\eta(\omega_2,\omega_1)}\right)\Big]. \end{split} \tag{7}$$

As *ψ* is a harmonically *h*-preinvex interval-valued function on [*ω*1, *ω*<sup>1</sup> + *η*(*ω*2, *ω*1)], we have

$$\begin{split} \psi\left(\frac{\omega\_{1}(\omega\_{1}+\eta(\omega\_{2},\omega\_{1}))}{\omega\_{1}+(1-t)\eta(\omega\_{2},\omega\_{1})}\right) &= \psi\left(\frac{(\omega\_{1}+\eta(\omega\_{2},\omega\_{1}))(\omega\_{1}+\eta(\omega\_{2},\omega\_{1})+\eta(\omega\_{1},\omega\_{1}+\eta(\omega\_{2},\omega\_{1}))}{\omega\_{1}+\eta(\omega\_{2},\omega\_{1})+t\eta(\omega\_{1},\omega\_{1}+\eta(\omega\_{2},\omega\_{1}))}\right) \\ &\supseteq h(t)\psi(\omega\_{1}+\eta(\omega\_{2},\omega\_{1}))+h(1-t)\psi(\omega\_{1}) \end{split} \tag{8}$$

and

$$
\begin{split}
\psi\left(\frac{\omega\_1(\omega\_1+\eta(\omega\_2,\omega\_1))}{\omega\_1+t\eta(\omega\_2,\omega\_1)}\right) &= \psi\left(\frac{(\omega\_1+\eta(\omega\_2,\omega\_1))(\omega\_1+\eta(\omega\_2,\omega\_1)+\eta(\omega\_1,\omega\_1+\eta(\omega\_2,\omega\_1))}{\omega\_1+\eta(\omega\_2,\omega\_1)+(1-t)\eta(\omega\_1,\omega\_1+\eta(\omega\_2,\omega\_1))}\right) \\ &\supseteq h(1-t)\psi(\omega\_1+\eta(\omega\_2,\omega\_1))+h(t)\psi(\omega\_1).
\end{split}
\tag{9}
$$

Adding (8) and (9), we have

$$\psi\left(\frac{\omega\_1(\omega\_1+\eta(\omega\_2,\omega\_1))}{\omega\_1+(1-t)\eta(\omega\_2,\omega\_1)}\right) + \psi\left(\frac{\omega\_1(\omega\_1+\eta(\omega\_2,\omega\_1))}{\omega\_1+t\eta(\omega\_2,\omega\_1)}\right) \supseteq [h(t)+h(1-t)][\psi(\omega\_1)+\psi(\omega\_1+\eta(\omega\_2,\omega\_1))].\tag{10}$$

Multiplying (10) by *t*<sup>*α*−1</sup> and integrating over [0, 1] with respect to *t*, we have

$$\begin{split} & (IR) \int_{0}^{1} t^{\alpha-1} \psi \left( \frac{\omega_{1} (\omega_{1} + \eta(\omega_{2}, \omega_{1}))}{\omega_{1} + (1-t)\eta(\omega_{2}, \omega_{1})} \right) dt + (IR) \int_{0}^{1} t^{\alpha-1} \psi \left( \frac{\omega_{1} (\omega_{1} + \eta(\omega_{2}, \omega_{1}))}{\omega_{1} + t\eta(\omega_{2}, \omega_{1})} \right) dt \\ & \supseteq (IR) \int_{0}^{1} t^{\alpha-1} [h(t) + h(1-t)] [\psi(\omega_{1}) + \psi(\omega_{1} + \eta(\omega_{2}, \omega_{1}))] dt. \end{split}$$

This implies

$$\begin{split} \Gamma(\alpha) \left( \frac{\omega_1(\omega_1 + \eta(\omega_2, \omega_1))}{\eta(\omega_2, \omega_1)} \right)^{\alpha} \Big[ J^{\alpha}_{\left( \frac{1}{\omega_1 + \eta(\omega_2, \omega_1)} \right)^+} (\psi\circ\Omega) \left( \frac{1}{\omega_1} \right) + J^{\alpha}_{\left( \frac{1}{\omega_1} \right)^-} (\psi\circ\Omega) \left( \frac{1}{\omega_1 + \eta(\omega_2, \omega_1)} \right) \Big] \\ \supseteq \left[ \psi(\omega_1) + \psi(\omega_1 + \eta(\omega_2, \omega_1)) \right] \int_0^1 t^{\alpha-1} [h(t) + h(1-t)] dt. \end{split} \tag{11}$$

From (7) and (11), we find

$$\begin{split} &\frac{1}{\alpha h(\frac{1}{2})}\psi\left(\frac{2\omega_{1}(\omega_{1}+\eta(\omega_{2},\omega_{1}))}{2\omega_{1}+\eta(\omega_{2},\omega_{1})}\right) \\ &\supseteq \Gamma(\alpha) \left(\frac{\omega_{1}(\omega_{1}+\eta(\omega_{2},\omega_{1}))}{\eta(\omega_{2},\omega_{1})}\right)^{\alpha} \left[J^{\alpha}_{\left(\frac{1}{\omega_{1}+\eta(\omega_{2},\omega_{1})}\right)^{+}}(\psi\circ\Omega)\left(\frac{1}{\omega_{1}}\right)+J^{\alpha}_{\left(\frac{1}{\omega_{1}}\right)^{-}}(\psi\circ\Omega)\left(\frac{1}{\omega_{1}+\eta(\omega_{2},\omega_{1})}\right)\right] \\ &\supseteq \left[\psi(\omega_{1})+\psi(\omega_{1}+\eta(\omega_{2},\omega_{1}))\right] \int_{0}^{1}t^{\alpha-1}[h(t)+h(1-t)]dt. \end{split}$$

**Example 2.** *Let I* = [*ω*1, *ω*<sup>1</sup> + *η*(*ω*2, *ω*1)] = [1, 2], *η*(*ω*2, *ω*1) = *ω*<sup>2</sup> − 2*ω*1*. Let α* = 1 *and h*(*t*) = *t* ∀ *t* ∈ [0, 1]*, ψ* : *I* → *X* + *I be defined by*

$$
\psi(u) = \left[ -\frac{1}{u} + 2,\; \frac{1}{u} + 2 \right] \;\forall\, u \in I.
$$

*We find*

$$\frac{1}{\alpha h(\frac{1}{2})}\psi\left(\frac{2\omega_1(\omega_1+\eta(\omega_2,\omega_1))}{2\omega_1+\eta(\omega_2,\omega_1)}\right) = 2\psi\left(\frac{4}{3}\right) = \left[\frac{5}{2}, \frac{11}{2}\right],\tag{12}$$

$$\begin{split} &\Gamma(\alpha) \left( \frac{\omega_{1}(\omega_{1} + \eta(\omega_{2}, \omega_{1}))}{\eta(\omega_{2}, \omega_{1})} \right)^{\alpha} \left[ I^{\alpha}_{\left( \frac{1}{\omega_{1} + \eta(\omega_{2}, \omega_{1})} \right)^{+}} (\psi\circ\Omega) \left( \frac{1}{\omega_{1}} \right) + I^{\alpha}_{\left( \frac{1}{\omega_{1}} \right)^{-}} (\psi\circ\Omega) \left( \frac{1}{\omega_{1} + \eta(\omega_{2}, \omega_{1})} \right) \right] \\ &= \frac{2\omega_{1}(\omega_{1} + \eta(\omega_{2}, \omega_{1}))}{\eta(\omega_{2}, \omega_{1})} \int_{\omega_{1}}^{\omega_{1} + \eta(\omega_{2}, \omega_{1})} \frac{\psi(u)}{u^{2}} du = 4 \int_{1}^{2} \frac{1}{u^{2}} \left[ -\frac{1}{u} + 2, \frac{1}{u} + 2 \right] du = \left[ \frac{5}{2}, \frac{11}{2} \right] \end{split} \tag{13}$$

*and*

$$\left[\psi(\omega_1) + \psi(\omega_1 + \eta(\omega_2, \omega_1))\right] \int_0^1 t^{\alpha - 1} [h(t) + h(1 - t)] dt = \psi(1) + \psi(2) = \left[\frac{5}{2}, \frac{11}{2}\right]. \tag{14}$$

*From (12)–(14), we see Theorem 3 is verified.*
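The three members of the inclusion chain in (12)–(14) can be reproduced with exact rational arithmetic. The following sketch (an illustrative check, not part of the original paper) evaluates each member for ψ(u) = [−1/u + 2, 1/u + 2] on [1, 2] with α = 1 and h(t) = t:

```python
from fractions import Fraction

# psi(u) = [-1/u + 2, 1/u + 2] on I = [1, 2]
def psi(u):
    return [-1 / u + 2, 1 / u + 2]

# Left member: (1 / (alpha * h(1/2))) * psi(4/3) = 2 * psi(4/3)
left = [2 * c for c in psi(Fraction(4, 3))]

# Middle member: 4 * int_1^2 psi(u) / u^2 du, via exact antiderivatives
# of (1/u^2)(2 - 1/u) and (1/u^2)(2 + 1/u)
def F_lo(u): return -2 / u + Fraction(1, 2) / u**2
def F_hi(u): return -2 / u - Fraction(1, 2) / u**2
mid = [4 * (F_lo(Fraction(2)) - F_lo(Fraction(1))),
       4 * (F_hi(Fraction(2)) - F_hi(Fraction(1)))]

# Right member: psi(1) + psi(2), since int_0^1 [t + (1 - t)] dt = 1
right = [a + b for a, b in zip(psi(Fraction(1)), psi(Fraction(2)))]

print(left, mid, right)  # all three equal [5/2, 11/2]
```

All three interval values coincide at [5/2, 11/2], so for this particular ψ, α and h the chain of inclusions in Theorem 3 holds with equality.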

**Remark 1.** *If we put η*(*ω*2, *ω*1) = *ω*<sup>2</sup> − *ω*<sup>1</sup> *in the above theorem, we obtain Theorem 5 of [50].*

**Remark 2.** *If we put η*(*ω*2, *ω*1) = *ω*<sup>2</sup> − *ω*<sup>1</sup> *and α* = 1 *in the above theorem, we obtain Theorem 1 of [38].*

**Remark 3.** *If we put η*(*ω*2, *ω*1) = *ω*<sup>2</sup> − *ω*<sup>1</sup> *and h*(*t*) = *t in the above theorem, we obtain Theorem 3.6 of [56].*

Now we present some particular cases of Theorem 3.

**Corollary 2.** *If α* = 1, *then Theorem 3 gives the following result:*

$$\begin{split} \frac{1}{h(\frac{1}{2})} \psi\left(\frac{2\omega_{1}(\omega_{1} + \eta(\omega_{2}, \omega_{1}))}{2\omega_{1} + \eta(\omega_{2}, \omega_{1})}\right) &\supseteq \frac{2\omega_{1}(\omega_{1} + \eta(\omega_{2}, \omega_{1}))}{\eta(\omega_{2}, \omega_{1})} \int_{\omega_{1}}^{\omega_{1} + \eta(\omega_{2}, \omega_{1})} \frac{\psi(u)}{u^{2}} du \\ &\supseteq [\psi(\omega_{1}) + \psi(\omega_{1} + \eta(\omega_{2}, \omega_{1}))] \int_{0}^{1} [h(t) + h(1 - t)] dt. \end{split}$$

**Corollary 3.** *If h*(*t*) = *t*, *then Theorem 3 gives the following result:*

$$\begin{split} &\psi\left(\frac{2\omega_{1}(\omega_{1}+\eta(\omega_{2},\omega_{1}))}{2\omega_{1}+\eta(\omega_{2},\omega_{1})}\right) \\ &\supseteq \frac{\Gamma(\alpha+1)}{2} \left(\frac{\omega_{1}(\omega_{1}+\eta(\omega_{2},\omega_{1}))}{\eta(\omega_{2},\omega_{1})}\right)^{\alpha} \left[I^{\alpha}_{\left(\frac{1}{\omega_{1}+\eta(\omega_{2},\omega_{1})}\right)^{+}}(\psi\circ\Omega)\left(\frac{1}{\omega_{1}}\right)+I^{\alpha}_{\left(\frac{1}{\omega_{1}}\right)^{-}}(\psi\circ\Omega)\left(\frac{1}{\omega_{1}+\eta(\omega_{2},\omega_{1})}\right)\right] \\ &\supseteq \frac{\psi(\omega_{1})+\psi(\omega_{1}+\eta(\omega_{2},\omega_{1}))}{2}. \end{split}$$

Next, we prove fractional inclusions of H–H-type for the product of two harmonically *h*-preinvex interval-valued functions.

**Theorem 4.** *Let h*1, *h*<sup>2</sup> : [0, 1] → R *be non-negative functions with h*1, *h*<sup>2</sup> ≢ 0*. Let ψ*, *ϕ* : *I* = [*ω*1, *ω*<sup>1</sup> + *η*(*ω*2, *ω*1)] ⊆ R\{0} → *X* + *I be two harmonically h*1*- and h*2*-preinvex interval-valued functions, respectively, such that ψ* = [*ψ*, *ψ*]*, ϕ* = [*ϕ*, *ϕ*] *and ω*1, *ω*<sup>2</sup> ∈ *I with ω*<sup>1</sup> < *ω*<sup>1</sup> + *η*(*ω*2, *ω*1)*. If ψϕ* ∈ *L*[*ω*1, *ω*<sup>1</sup> + *η*(*ω*2, *ω*1)]*, α* > 0 *and η satisfies Condition C, then*

$$\begin{split} &\Gamma(\alpha) \left( \frac{\omega_{1}(\omega_{1} + \eta(\omega_{2}, \omega_{1}))}{\eta(\omega_{2}, \omega_{1})} \right)^{\alpha} \Bigg[ I^{\alpha}_{\left( \frac{1}{\omega_{1} + \eta(\omega_{2}, \omega_{1})} \right)^{+}} (\psi \circ \Omega) \left( \frac{1}{\omega_{1}} \right) (\varphi \circ \Omega) \left( \frac{1}{\omega_{1}} \right) \\ &\quad + I^{\alpha}_{\left( \frac{1}{\omega_{1}} \right)^{-}} (\psi \circ \Omega) \left( \frac{1}{\omega_{1} + \eta(\omega_{2}, \omega_{1})} \right) (\varphi \circ \Omega) \left( \frac{1}{\omega_{1} + \eta(\omega_{2}, \omega_{1})} \right) \Bigg] \\ &\supseteq F(\omega_{1}, \omega_{1} + \eta(\omega_{2}, \omega_{1})) \int_{0}^{1} [t^{\alpha-1} + (1-t)^{\alpha-1}] h_{1}(t) h_{2}(t) dt \\ &\quad + G(\omega_{1}, \omega_{1} + \eta(\omega_{2}, \omega_{1})) \int_{0}^{1} [t^{\alpha-1} + (1-t)^{\alpha-1}] h_{1}(1-t) h_{2}(t) dt, \end{split} \tag{15}$$

*where F*(*ω*1, *ω*<sup>1</sup> + *η*(*ω*2, *ω*1)) = *ψ*(*ω*1)*ϕ*(*ω*1) + *ψ*(*ω*<sup>1</sup> + *η*(*ω*2, *ω*1))*ϕ*(*ω*<sup>1</sup> + *η*(*ω*2, *ω*1)), *G*(*ω*1, *ω*<sup>1</sup> + *η*(*ω*2, *ω*1)) = *ψ*(*ω*1)*ϕ*(*ω*<sup>1</sup> + *η*(*ω*2, *ω*1)) + *ψ*(*ω*<sup>1</sup> + *η*(*ω*2, *ω*1))*ϕ*(*ω*1) *and* Ω(*u*) = 1/*u.*

**Proof.** Since *ψ* and *ϕ* are harmonically *h*1- and *h*2-preinvex interval-valued functions on [*ω*1, *ω*<sup>1</sup> + *η*(*ω*2, *ω*1)], respectively, we have

$$\begin{split} \psi\left(\frac{\omega_1(\omega_1+\eta(\omega_2,\omega_1))}{\omega_1+(1-t)\eta(\omega_2,\omega_1)}\right) &= \psi\left(\frac{(\omega_1+\eta(\omega_2,\omega_1))(\omega_1+\eta(\omega_2,\omega_1)+\eta(\omega_1,\omega_1+\eta(\omega_2,\omega_1)))}{\omega_1+\eta(\omega_2,\omega_1)+t\eta(\omega_1,\omega_1+\eta(\omega_2,\omega_1))}\right) \\ &\supseteq h_1(t)\psi(\omega_1+\eta(\omega_2,\omega_1))+h_1(1-t)\psi(\omega_1) \end{split} \tag{16}$$

and

$$\begin{split} \varphi\left(\frac{\omega_{1}(\omega_{1}+\eta(\omega_{2},\omega_{1}))}{\omega_{1}+(1-t)\eta(\omega_{2},\omega_{1})}\right) &= \varphi\left(\frac{(\omega_{1}+\eta(\omega_{2},\omega_{1}))(\omega_{1}+\eta(\omega_{2},\omega_{1})+\eta(\omega_{1},\omega_{1}+\eta(\omega_{2},\omega_{1})))}{\omega_{1}+\eta(\omega_{2},\omega_{1})+t\eta(\omega_{1},\omega_{1}+\eta(\omega_{2},\omega_{1}))}\right) \\ &\supseteq h_{2}(t)\varphi(\omega_{1}+\eta(\omega_{2},\omega_{1}))+h_{2}(1-t)\varphi(\omega_{1}). \end{split} \tag{17}$$

Since *ψ*(*u*), *ϕ*(*u*) ∈ *X* + *I* for all *u* ∈ [*ω*1, *ω*<sup>1</sup> + *η*(*ω*2, *ω*1)], from (16) and (17) we obtain

$$\begin{split} &\psi\left(\frac{\omega_{1}(\omega_{1}+\eta(\omega_{2},\omega_{1}))}{\omega_{1}+(1-t)\eta(\omega_{2},\omega_{1})}\right)\varphi\left(\frac{\omega_{1}(\omega_{1}+\eta(\omega_{2},\omega_{1}))}{\omega_{1}+(1-t)\eta(\omega_{2},\omega_{1})}\right) \\ &\supseteq h_{1}(t)h_{2}(t)\psi(\omega_{1}+\eta(\omega_{2},\omega_{1}))\varphi(\omega_{1}+\eta(\omega_{2},\omega_{1}))+h_{1}(1-t)h_{2}(1-t)\psi(\omega_{1})\varphi(\omega_{1}) \\ &\quad +h_{1}(t)h_{2}(1-t)\psi(\omega_{1}+\eta(\omega_{2},\omega_{1}))\varphi(\omega_{1})+h_{1}(1-t)h_{2}(t)\psi(\omega_{1})\varphi(\omega_{1}+\eta(\omega_{2},\omega_{1})). \end{split} \tag{18}$$

Similarly,

$$\begin{split} &\psi\left(\frac{\omega_{1}(\omega_{1}+\eta(\omega_{2},\omega_{1}))}{\omega_{1}+t\eta(\omega_{2},\omega_{1})}\right)\varphi\left(\frac{\omega_{1}(\omega_{1}+\eta(\omega_{2},\omega_{1}))}{\omega_{1}+t\eta(\omega_{2},\omega_{1})}\right) \\ &\supseteq h_{1}(1-t)h_{2}(1-t)\psi(\omega_{1}+\eta(\omega_{2},\omega_{1}))\varphi(\omega_{1}+\eta(\omega_{2},\omega_{1}))+h_{1}(t)h_{2}(t)\psi(\omega_{1})\varphi(\omega_{1}) \\ &\quad + h_{1}(1-t)h_{2}(t)\psi(\omega_{1}+\eta(\omega_{2},\omega_{1}))\varphi(\omega_{1})+h_{1}(t)h_{2}(1-t)\psi(\omega_{1})\varphi(\omega_{1}+\eta(\omega_{2},\omega_{1})). \end{split} \tag{19}$$

Adding (18) and (19), we have

$$\begin{split} &\psi\left(\frac{\omega_{1}(\omega_{1}+\eta(\omega_{2},\omega_{1}))}{\omega_{1}+(1-t)\eta(\omega_{2},\omega_{1})}\right)\varphi\left(\frac{\omega_{1}(\omega_{1}+\eta(\omega_{2},\omega_{1}))}{\omega_{1}+(1-t)\eta(\omega_{2},\omega_{1})}\right) \\ &\quad +\psi\left(\frac{\omega_{1}(\omega_{1}+\eta(\omega_{2},\omega_{1}))}{\omega_{1}+t\eta(\omega_{2},\omega_{1})}\right)\varphi\left(\frac{\omega_{1}(\omega_{1}+\eta(\omega_{2},\omega_{1}))}{\omega_{1}+t\eta(\omega_{2},\omega_{1})}\right) \\ &\supseteq [h_{1}(t)h_{2}(t)+h_{1}(1-t)h_{2}(1-t)][\psi(\omega_{1})\varphi(\omega_{1})+\psi(\omega_{1}+\eta(\omega_{2},\omega_{1}))\varphi(\omega_{1}+\eta(\omega_{2},\omega_{1}))] \\ &\quad +[h_{1}(t)h_{2}(1-t)+h_{1}(1-t)h_{2}(t)][\psi(\omega_{1}+\eta(\omega_{2},\omega_{1}))\varphi(\omega_{1})+\psi(\omega_{1})\varphi(\omega_{1}+\eta(\omega_{2},\omega_{1}))] \\ &= F(\omega_{1},\omega_{1}+\eta(\omega_{2},\omega_{1}))[h_{1}(t)h_{2}(t)+h_{1}(1-t)h_{2}(1-t)] \\ &\quad +G(\omega_{1},\omega_{1}+\eta(\omega_{2},\omega_{1}))[h_{1}(1-t)h_{2}(t)+h_{1}(t)h_{2}(1-t)]. \end{split} \tag{20}$$

Multiplying (20) by $t^{\alpha-1}$ and integrating over [0, 1] with respect to *t*, we have

$$\begin{split} &(IR) \int_{0}^{1} t^{\alpha-1} \psi \left( \frac{\omega_{1} (\omega_{1} + \eta(\omega_{2}, \omega_{1}))}{\omega_{1} + (1-t)\eta(\omega_{2}, \omega_{1})} \right) \varphi \left( \frac{\omega_{1} (\omega_{1} + \eta(\omega_{2}, \omega_{1}))}{\omega_{1} + (1-t)\eta(\omega_{2}, \omega_{1})} \right) dt \\ &\quad + (IR) \int_{0}^{1} t^{\alpha-1} \psi \left( \frac{\omega_{1} (\omega_{1} + \eta(\omega_{2}, \omega_{1}))}{\omega_{1} + t\eta(\omega_{2}, \omega_{1})} \right) \varphi \left( \frac{\omega_{1} (\omega_{1} + \eta(\omega_{2}, \omega_{1}))}{\omega_{1} + t\eta(\omega_{2}, \omega_{1})} \right) dt \\ &\supseteq (IR) \int_{0}^{1} t^{\alpha-1} F(\omega_{1}, \omega_{1} + \eta(\omega_{2}, \omega_{1})) [h_{1}(t) h_{2}(t) + h_{1}(1-t) h_{2}(1-t)] dt \\ &\quad + (IR) \int_{0}^{1} t^{\alpha-1} G(\omega_{1}, \omega_{1} + \eta(\omega_{2}, \omega_{1})) [h_{1}(1-t) h_{2}(t) + h_{1}(t) h_{2}(1-t)] dt. \end{split} \tag{21}$$

As

$$\begin{split} &(IR) \int_0^1 t^{\alpha - 1} \psi \left( \frac{\omega_1 (\omega_1 + \eta(\omega_2, \omega_1))}{\omega_1 + (1 - t)\eta(\omega_2, \omega_1)} \right) \varphi \left( \frac{\omega_1 (\omega_1 + \eta(\omega_2, \omega_1))}{\omega_1 + (1 - t)\eta(\omega_2, \omega_1)} \right) dt \\ &= \Gamma(\alpha) \left( \frac{\omega_1(\omega_1 + \eta(\omega_2, \omega_1))}{\eta(\omega_2, \omega_1)} \right)^{\alpha} I^{\alpha}_{\left(\frac{1}{\omega_1 + \eta(\omega_2, \omega_1)}\right)^+} (\psi \circ \Omega) \left( \frac{1}{\omega_1} \right) (\varphi \circ \Omega) \left( \frac{1}{\omega_1} \right), \end{split} \tag{22}$$

$$\begin{split} &(IR) \int_{0}^{1} t^{\alpha-1} \psi \left( \frac{\omega_{1} (\omega_{1} + \eta(\omega_{2}, \omega_{1}))}{\omega_{1} + t\eta(\omega_{2}, \omega_{1})} \right) \varphi \left( \frac{\omega_{1} (\omega_{1} + \eta(\omega_{2}, \omega_{1}))}{\omega_{1} + t\eta(\omega_{2}, \omega_{1})} \right) dt \\ &= \Gamma(\alpha) \left( \frac{\omega_{1} (\omega_{1} + \eta(\omega_{2}, \omega_{1}))}{\eta(\omega_{2}, \omega_{1})} \right)^{\alpha} I^{\alpha}_{\left( \frac{1}{\omega_{1}} \right)^{-}} (\psi \circ \Omega) \left( \frac{1}{\omega_{1} + \eta(\omega_{2}, \omega_{1})} \right) (\varphi \circ \Omega) \left( \frac{1}{\omega_{1} + \eta(\omega_{2}, \omega_{1})} \right), \end{split} \tag{23}$$

Using (22) and (23) in (21), we have

$$\begin{split} &\Gamma(\alpha)\left(\frac{\omega_{1}(\omega_{1}+\eta(\omega_{2},\omega_{1}))}{\eta(\omega_{2},\omega_{1})}\right)^{\alpha}\Big[I^{\alpha}_{\left(\frac{1}{\omega_{1}+\eta(\omega_{2},\omega_{1})}\right)^{+}}(\psi\circ\Omega)\left(\frac{1}{\omega_{1}}\right)(\varphi\circ\Omega)\left(\frac{1}{\omega_{1}}\right) \\ &\quad + I^{\alpha}_{\left(\frac{1}{\omega_{1}}\right)^{-}}(\psi\circ\Omega)\left(\frac{1}{\omega_{1}+\eta(\omega_{2},\omega_{1})}\right)(\varphi\circ\Omega)\left(\frac{1}{\omega_{1}+\eta(\omega_{2},\omega_{1})}\right)\Big] \\ &\supseteq F(\omega_{1},\omega_{1}+\eta(\omega_{2},\omega_{1}))\int_{0}^{1}[t^{\alpha-1}+(1-t)^{\alpha-1}]h_{1}(t)h_{2}(t)dt \\ &\quad + G(\omega_{1},\omega_{1}+\eta(\omega_{2},\omega_{1}))\int_{0}^{1}[t^{\alpha-1}+(1-t)^{\alpha-1}]h_{1}(1-t)h_{2}(t)dt. \end{split}$$
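The term-by-term multiplication used in (18) and (19) rests on two facts about intervals with positive endpoints (the class to which ψ(*u*) and ϕ(*u*) belong): the product rule [*a*₁, *a*₂]·[*c*₁, *c*₂] = [*a*₁*c*₁, *a*₂*c*₂], and the inclusion-monotonicity of that product (*X* ⊇ *A* and *Y* ⊇ *B* imply *XY* ⊇ *AB*). A randomized sanity check of the monotonicity (an illustrative sketch, not from the paper):

```python
import random

# Positive-interval product: [a1, a2] * [c1, c2] = [a1*c1, a2*c2].
def mul(X, Y):
    return (X[0] * Y[0], X[1] * Y[1])

# X contains A as a set of reals.
def contains(X, A):
    return X[0] <= A[0] and A[1] <= X[1]

random.seed(0)
for _ in range(1000):
    # random positive intervals A, B and enclosures X >= A, Y >= B
    a1 = random.uniform(0.1, 5.0); a2 = a1 + random.uniform(0.0, 5.0)
    c1 = random.uniform(0.1, 5.0); c2 = c1 + random.uniform(0.0, 5.0)
    A, B = (a1, a2), (c1, c2)
    X = (a1 * random.uniform(0.1, 1.0), a2 + random.uniform(0.0, 1.0))
    Y = (c1 * random.uniform(0.1, 1.0), c2 + random.uniform(0.0, 1.0))
    assert contains(X, A) and contains(Y, B)
    # inclusion monotonicity: X >= A and Y >= B imply XY >= AB
    assert contains(mul(X, Y), mul(A, B))
print("product inclusion verified on 1000 random positive intervals")
```

Monotonicity would fail for intervals straddling zero, which is why the theorems restrict ψ and ϕ to positive interval values.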

**Remark 4.** *If we put η*(*ω*2, *ω*1) = *ω*<sup>2</sup> − *ω*<sup>1</sup> *in the above theorem, we obtain Theorem 6 of [50].*

**Remark 5.** *If we put η*(*ω*2, *ω*1) = *ω*<sup>2</sup> − *ω*<sup>1</sup> *and α* = 1 *in the above theorem, we obtain Theorem 3 of [38].*

**Corollary 4.** *If α* = 1, *then Theorem 4 gives the following result:*

$$\begin{split} &\frac{\omega_1(\omega_1+\eta(\omega_2,\omega_1))}{\eta(\omega_2,\omega_1)}\int_{\omega_1}^{\omega_1+\eta(\omega_2,\omega_1)}\frac{\psi(u)\varphi(u)}{u^2}du \\ &\supseteq F(\omega_1,\omega_1+\eta(\omega_2,\omega_1))\int_0^1 h_1(t)h_2(t)dt + G(\omega_1,\omega_1+\eta(\omega_2,\omega_1))\int_0^1 h_1(1-t)h_2(t)dt. \end{split}$$

**Corollary 5.** *If h*1(*t*) = *h*2(*t*) = *t*, *then Theorem 4 gives the following result:*

$$\begin{split} &\Gamma(\alpha)\left(\frac{\omega_{1}(\omega_{1}+\eta(\omega_{2},\omega_{1}))}{\eta(\omega_{2},\omega_{1})}\right)^{\alpha}\Bigg[I^{\alpha}_{\left(\frac{1}{\omega_{1}+\eta(\omega_{2},\omega_{1})}\right)^{+}}(\psi\circ\Omega)\left(\frac{1}{\omega_{1}}\right)(\varphi\circ\Omega)\left(\frac{1}{\omega_{1}}\right) \\ &\quad +I^{\alpha}_{\left(\frac{1}{\omega_{1}}\right)^{-}}(\psi\circ\Omega)\left(\frac{1}{\omega_{1}+\eta(\omega_{2},\omega_{1})}\right)(\varphi\circ\Omega)\left(\frac{1}{\omega_{1}+\eta(\omega_{2},\omega_{1})}\right)\Bigg] \\ &\supseteq F(\omega_{1},\omega_{1}+\eta(\omega_{2},\omega_{1}))\int_{0}^{1}t^{2}[t^{\alpha-1}+(1-t)^{\alpha-1}]dt + G(\omega_{1},\omega_{1}+\eta(\omega_{2},\omega_{1}))\int_{0}^{1}t(1-t)[t^{\alpha-1}+(1-t)^{\alpha-1}]dt \\ &= \frac{\alpha^{2}+\alpha+2}{\alpha(\alpha+1)(\alpha+2)}F(\omega_{1},\omega_{1}+\eta(\omega_{2},\omega_{1})) + \frac{2}{(\alpha+1)(\alpha+2)}G(\omega_{1},\omega_{1}+\eta(\omega_{2},\omega_{1})) \\ &= \frac{(\alpha^{2}+\alpha+2)F(\omega_{1},\omega_{1}+\eta(\omega_{2},\omega_{1}))+2\alpha G(\omega_{1},\omega_{1}+\eta(\omega_{2},\omega_{1}))}{\alpha(\alpha+1)(\alpha+2)}. \end{split}$$
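The two Beta-type integrals behind the constants in Corollary 5, $\int_0^1 t^2[t^{\alpha-1}+(1-t)^{\alpha-1}]dt = \frac{\alpha^2+\alpha+2}{\alpha(\alpha+1)(\alpha+2)}$ and $\int_0^1 t(1-t)[t^{\alpha-1}+(1-t)^{\alpha-1}]dt = \frac{2}{(\alpha+1)(\alpha+2)}$, can be spot-checked numerically. The sketch below (illustrative only) uses a plain midpoint rule and assumes α ≥ 1 so that both integrands are bounded:

```python
# Composite midpoint rule on [0, 1]; adequate here since the
# integrands are bounded for alpha >= 1.
def midpoint(f, n=20000):
    h = 1.0 / n
    return h * sum(f((k + 0.5) * h) for k in range(n))

for a in (1.0, 2.0, 3.5):
    w = lambda t, a=a: t ** (a - 1) + (1 - t) ** (a - 1)
    i1 = midpoint(lambda t, w=w: t * t * w(t))        # weight t^2
    i2 = midpoint(lambda t, w=w: t * (1 - t) * w(t))  # weight t(1-t)
    assert abs(i1 - (a * a + a + 2) / (a * (a + 1) * (a + 2))) < 1e-6
    assert abs(i2 - 2 / ((a + 1) * (a + 2))) < 1e-6
print("Beta-type constants confirmed for alpha = 1, 2, 3.5")
```

At α = 1 both constants reduce to 2/3 and 1/3, which matches the weight integrals of Corollary 4 with h₁(t) = h₂(t) = t.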

**Theorem 5.** *Let h*1, *h*<sup>2</sup> : [0, 1] → R *be non-negative functions with* $h_1(\frac{1}{2})h_2(\frac{1}{2}) \neq 0$*. Let ψ*, *ϕ* : *I* = [*ω*1, *ω*<sup>1</sup> + *η*(*ω*2, *ω*1)] ⊆ R\{0} → *X* + *I be two harmonically h*1*- and h*2*-preinvex interval-valued functions, respectively, such that ψ* = [*ψ*, *ψ*]*, ϕ* = [*ϕ*, *ϕ*] *and ω*1, *ω*<sup>2</sup> ∈ *I with ω*<sup>1</sup> < *ω*<sup>1</sup> + *η*(*ω*2, *ω*1)*. If ψϕ* ∈ *L*[*ω*1, *ω*<sup>1</sup> + *η*(*ω*2, *ω*1)]*, α* > 0 *and η satisfies Condition C, then*

$$\begin{split} &\frac{1}{\alpha h_{1}(\frac{1}{2})h_{2}(\frac{1}{2})}\psi\left(\frac{2\omega_{1}(\omega_{1}+\eta(\omega_{2},\omega_{1}))}{2\omega_{1}+\eta(\omega_{2},\omega_{1})}\right)\varphi\left(\frac{2\omega_{1}(\omega_{1}+\eta(\omega_{2},\omega_{1}))}{2\omega_{1}+\eta(\omega_{2},\omega_{1})}\right) \\ &\supseteq\Gamma(\alpha)\left(\frac{\omega_{1}(\omega_{1}+\eta(\omega_{2},\omega_{1}))}{\eta(\omega_{2},\omega_{1})}\right)^{\alpha}\Bigg[I^{\alpha}_{\left(\frac{1}{\omega_{1}+\eta(\omega_{2},\omega_{1})}\right)^{+}}(\psi\circ\Omega)\left(\frac{1}{\omega_{1}}\right)(\varphi\circ\Omega)\left(\frac{1}{\omega_{1}}\right) \\ &\quad +I^{\alpha}_{\left(\frac{1}{\omega_{1}}\right)^{-}}(\psi\circ\Omega)\left(\frac{1}{\omega_{1}+\eta(\omega_{2},\omega_{1})}\right)(\varphi\circ\Omega)\left(\frac{1}{\omega_{1}+\eta(\omega_{2},\omega_{1})}\right)\Bigg] \\ &\quad +F(\omega_{1},\omega_{1}+\eta(\omega_{2},\omega_{1}))\int_{0}^{1}(t^{\alpha-1}+(1-t)^{\alpha-1})h_{1}(t)h_{2}(1-t)dt \\ &\quad +G(\omega_{1},\omega_{1}+\eta(\omega_{2},\omega_{1}))\int_{0}^{1}(t^{\alpha-1}+(1-t)^{\alpha-1})h_{1}(t)h_{2}(t)dt, \end{split}$$

*where F*(*ω*1, *ω*<sup>1</sup> + *η*(*ω*2, *ω*1)) *and G*(*ω*1, *ω*<sup>1</sup> + *η*(*ω*2, *ω*1)) *are defined as previously.*

**Proof.** Since *ψ* is a harmonically *h*1-preinvex interval-valued function on [*ω*1, *ω*<sup>1</sup> + *η*(*ω*2, *ω*1)], we have

$$\frac{1}{h\_1(\frac{1}{2})}\psi\left(\frac{2u(u+\eta(v,u))}{2u+\eta(v,u)}\right) \supseteq \psi(u) + \psi(v), \; \forall \; u, v \in [\omega\_1, \omega\_1 + \eta(\omega\_2, \omega\_1)].\tag{24}$$

Let $u = \frac{\omega_1(\omega_1+\eta(\omega_2,\omega_1))}{\omega_1+(1-t)\eta(\omega_2,\omega_1)}$ and $v = \frac{\omega_1(\omega_1+\eta(\omega_2,\omega_1))}{\omega_1+t\eta(\omega_2,\omega_1)}$. Then, using Condition *C* in (24), we find

$$\frac{1}{h_1(\frac{1}{2})}\psi\left(\frac{2\omega_1(\omega_1+\eta(\omega_2,\omega_1))}{2\omega_1+\eta(\omega_2,\omega_1)}\right) \supseteq \psi\left(\frac{\omega_1(\omega_1+\eta(\omega_2,\omega_1))}{\omega_1+(1-t)\eta(\omega_2,\omega_1)}\right) + \psi\left(\frac{\omega_1(\omega_1+\eta(\omega_2,\omega_1))}{\omega_1+t\eta(\omega_2,\omega_1)}\right). \tag{25}$$

Similarly,

$$\frac{1}{h_2(\frac{1}{2})}\varphi\left(\frac{2\omega_1(\omega_1+\eta(\omega_2,\omega_1))}{2\omega_1+\eta(\omega_2,\omega_1)}\right) \supseteq \varphi\left(\frac{\omega_1(\omega_1+\eta(\omega_2,\omega_1))}{\omega_1+(1-t)\eta(\omega_2,\omega_1)}\right) + \varphi\left(\frac{\omega_1(\omega_1+\eta(\omega_2,\omega_1))}{\omega_1+t\eta(\omega_2,\omega_1)}\right). \tag{26}$$

From (25) and (26), we find

$$\begin{split} &\frac{1}{h_{1}(\frac{1}{2})h_{2}(\frac{1}{2})}\psi\left(\frac{2\omega_{1}(\omega_{1}+\eta(\omega_{2},\omega_{1}))}{2\omega_{1}+\eta(\omega_{2},\omega_{1})}\right)\varphi\left(\frac{2\omega_{1}(\omega_{1}+\eta(\omega_{2},\omega_{1}))}{2\omega_{1}+\eta(\omega_{2},\omega_{1})}\right) \\ &\supseteq \left[\psi\left(\frac{\omega_{1}(\omega_{1}+\eta(\omega_{2},\omega_{1}))}{\omega_{1}+(1-t)\eta(\omega_{2},\omega_{1})}\right)+\psi\left(\frac{\omega_{1}(\omega_{1}+\eta(\omega_{2},\omega_{1}))}{\omega_{1}+t\eta(\omega_{2},\omega_{1})}\right)\right] \\ &\quad \times\left[\varphi\left(\frac{\omega_{1}(\omega_{1}+\eta(\omega_{2},\omega_{1}))}{\omega_{1}+(1-t)\eta(\omega_{2},\omega_{1})}\right)+\varphi\left(\frac{\omega_{1}(\omega_{1}+\eta(\omega_{2},\omega_{1}))}{\omega_{1}+t\eta(\omega_{2},\omega_{1})}\right)\right] \\ &= \psi\left(\frac{\omega_{1}(\omega_{1}+\eta(\omega_{2},\omega_{1}))}{\omega_{1}+(1-t)\eta(\omega_{2},\omega_{1})}\right)\varphi\left(\frac{\omega_{1}(\omega_{1}+\eta(\omega_{2},\omega_{1}))}{\omega_{1}+(1-t)\eta(\omega_{2},\omega_{1})}\right) \\ &\quad +\psi\left(\frac{\omega_{1}(\omega_{1}+\eta(\omega_{2},\omega_{1}))}{\omega_{1}+t\eta(\omega_{2},\omega_{1})}\right)\varphi\left(\frac{\omega_{1}(\omega_{1}+\eta(\omega_{2},\omega_{1}))}{\omega_{1}+t\eta(\omega_{2},\omega_{1})}\right) \\ &\quad +\psi\left(\frac{\omega_{1}(\omega_{1}+\eta(\omega_{2},\omega_{1}))}{\omega_{1}+(1-t)\eta(\omega_{2},\omega_{1})}\right)\varphi\left(\frac{\omega_{1}(\omega_{1}+\eta(\omega_{2},\omega_{1}))}{\omega_{1}+t\eta(\omega_{2},\omega_{1})}\right) \\ &\quad +\psi\left(\frac{\omega_{1}(\omega_{1}+\eta(\omega_{2},\omega_{1}))}{\omega_{1}+t\eta(\omega_{2},\omega_{1})}\right)\varphi\left(\frac{\omega_{1}(\omega_{1}+\eta(\omega_{2},\omega_{1}))}{\omega_{1}+(1-t)\eta(\omega_{2},\omega_{1})}\right). \end{split} \tag{27}$$

Since *ψ* and *ϕ* are harmonically *h*1- and *h*2-preinvex interval-valued functions with *ψ*(*u*), *ϕ*(*u*) ∈ *X* + *I* for all *u* ∈ [*ω*1, *ω*<sup>1</sup> + *η*(*ω*2, *ω*1)], we have

$$\begin{split} &\psi\left(\frac{\omega_{1}(\omega_{1}+\eta(\omega_{2},\omega_{1}))}{\omega_{1}+(1-t)\eta(\omega_{2},\omega_{1})}\right)\varphi\left(\frac{\omega_{1}(\omega_{1}+\eta(\omega_{2},\omega_{1}))}{\omega_{1}+t\eta(\omega_{2},\omega_{1})}\right) \\ &\supseteq h_{1}(t)h_{2}(t)\psi(\omega_{1}+\eta(\omega_{2},\omega_{1}))\varphi(\omega_{1})+h_{1}(1-t)h_{2}(1-t)\psi(\omega_{1})\varphi(\omega_{1}+\eta(\omega_{2},\omega_{1})) \\ &\quad + h_{1}(t)h_{2}(1-t)\psi(\omega_{1}+\eta(\omega_{2},\omega_{1}))\varphi(\omega_{1}+\eta(\omega_{2},\omega_{1}))+h_{1}(1-t)h_{2}(t)\psi(\omega_{1})\varphi(\omega_{1}). \end{split} \tag{28}$$

Similarly,

$$\begin{split} &\psi\left(\frac{\omega_{1}(\omega_{1}+\eta(\omega_{2},\omega_{1}))}{\omega_{1}+t\eta(\omega_{2},\omega_{1})}\right)\varphi\left(\frac{\omega_{1}(\omega_{1}+\eta(\omega_{2},\omega_{1}))}{\omega_{1}+(1-t)\eta(\omega_{2},\omega_{1})}\right) \\ &\supseteq h_{1}(t)h_{2}(t)\psi(\omega_{1})\varphi(\omega_{1}+\eta(\omega_{2},\omega_{1}))+h_{1}(1-t)h_{2}(1-t)\psi(\omega_{1}+\eta(\omega_{2},\omega_{1}))\varphi(\omega_{1}) \\ &\quad + h_{1}(t)h_{2}(1-t)\psi(\omega_{1})\varphi(\omega_{1})+h_{1}(1-t)h_{2}(t)\psi(\omega_{1}+\eta(\omega_{2},\omega_{1}))\varphi(\omega_{1}+\eta(\omega_{2},\omega_{1})). \end{split} \tag{29}$$

Adding (28) and (29), we obtain

$$\begin{split} &\psi\left(\frac{\omega_{1}(\omega_{1}+\eta(\omega_{2},\omega_{1}))}{\omega_{1}+(1-t)\eta(\omega_{2},\omega_{1})}\right)\varphi\left(\frac{\omega_{1}(\omega_{1}+\eta(\omega_{2},\omega_{1}))}{\omega_{1}+t\eta(\omega_{2},\omega_{1})}\right) \\ &\quad +\psi\left(\frac{\omega_{1}(\omega_{1}+\eta(\omega_{2},\omega_{1}))}{\omega_{1}+t\eta(\omega_{2},\omega_{1})}\right)\varphi\left(\frac{\omega_{1}(\omega_{1}+\eta(\omega_{2},\omega_{1}))}{\omega_{1}+(1-t)\eta(\omega_{2},\omega_{1})}\right) \\ &\supseteq G(\omega_{1},\omega_{1}+\eta(\omega_{2},\omega_{1}))[h_{1}(t)h_{2}(t)+h_{1}(1-t)h_{2}(1-t)] \\ &\quad +F(\omega_{1},\omega_{1}+\eta(\omega_{2},\omega_{1}))[h_{1}(1-t)h_{2}(t)+h_{1}(t)h_{2}(1-t)]. \end{split} \tag{30}$$

From (27) and (30), we have

$$\begin{split} &\frac{1}{h_{1}(\frac{1}{2})h_{2}(\frac{1}{2})}\psi\left(\frac{2\omega_{1}(\omega_{1}+\eta(\omega_{2},\omega_{1}))}{2\omega_{1}+\eta(\omega_{2},\omega_{1})}\right)\varphi\left(\frac{2\omega_{1}(\omega_{1}+\eta(\omega_{2},\omega_{1}))}{2\omega_{1}+\eta(\omega_{2},\omega_{1})}\right) \\ &\supseteq\psi\left(\frac{\omega_{1}(\omega_{1}+\eta(\omega_{2},\omega_{1}))}{\omega_{1}+(1-t)\eta(\omega_{2},\omega_{1})}\right)\varphi\left(\frac{\omega_{1}(\omega_{1}+\eta(\omega_{2},\omega_{1}))}{\omega_{1}+(1-t)\eta(\omega_{2},\omega_{1})}\right) \\ &\quad +\psi\left(\frac{\omega_{1}(\omega_{1}+\eta(\omega_{2},\omega_{1}))}{\omega_{1}+t\eta(\omega_{2},\omega_{1})}\right)\varphi\left(\frac{\omega_{1}(\omega_{1}+\eta(\omega_{2},\omega_{1}))}{\omega_{1}+t\eta(\omega_{2},\omega_{1})}\right) \\ &\quad +G(\omega_{1},\omega_{1}+\eta(\omega_{2},\omega_{1}))[h_{1}(t)h_{2}(t)+h_{1}(1-t)h_{2}(1-t)] \\ &\quad +F(\omega_{1},\omega_{1}+\eta(\omega_{2},\omega_{1}))[h_{1}(1-t)h_{2}(t)+h_{1}(t)h_{2}(1-t)]. \end{split} \tag{31}$$

Multiplying (31) by $t^{\alpha-1}$ and then integrating over [0, 1] with respect to *t*, we find

$$\begin{split} &\frac{1}{h_{1}(\frac{1}{2})h_{2}(\frac{1}{2})}(IR)\int_{0}^{1}t^{\alpha-1}\psi\left(\frac{2\omega_{1}(\omega_{1}+\eta(\omega_{2},\omega_{1}))}{2\omega_{1}+\eta(\omega_{2},\omega_{1})}\right)\varphi\left(\frac{2\omega_{1}(\omega_{1}+\eta(\omega_{2},\omega_{1}))}{2\omega_{1}+\eta(\omega_{2},\omega_{1})}\right)dt \\ &\supseteq (IR)\int_{0}^{1}t^{\alpha-1}\psi\left(\frac{\omega_{1}(\omega_{1}+\eta(\omega_{2},\omega_{1}))}{\omega_{1}+(1-t)\eta(\omega_{2},\omega_{1})}\right)\varphi\left(\frac{\omega_{1}(\omega_{1}+\eta(\omega_{2},\omega_{1}))}{\omega_{1}+(1-t)\eta(\omega_{2},\omega_{1})}\right)dt \\ &\quad +(IR)\int_{0}^{1}t^{\alpha-1}\psi\left(\frac{\omega_{1}(\omega_{1}+\eta(\omega_{2},\omega_{1}))}{\omega_{1}+t\eta(\omega_{2},\omega_{1})}\right)\varphi\left(\frac{\omega_{1}(\omega_{1}+\eta(\omega_{2},\omega_{1}))}{\omega_{1}+t\eta(\omega_{2},\omega_{1})}\right)dt \\ &\quad +G(\omega_{1},\omega_{1}+\eta(\omega_{2},\omega_{1}))\int_{0}^{1}t^{\alpha-1}[h_{1}(t)h_{2}(t)+h_{1}(1-t)h_{2}(1-t)]dt \\ &\quad +F(\omega_{1},\omega_{1}+\eta(\omega_{2},\omega_{1}))\int_{0}^{1}t^{\alpha-1}[h_{1}(1-t)h_{2}(t)+h_{1}(t)h_{2}(1-t)]dt. \end{split}$$

This implies

$$\begin{split} &\frac{1}{\alpha h_{1}(\frac{1}{2})h_{2}(\frac{1}{2})}\psi\left(\frac{2\omega_{1}(\omega_{1}+\eta(\omega_{2},\omega_{1}))}{2\omega_{1}+\eta(\omega_{2},\omega_{1})}\right)\varphi\left(\frac{2\omega_{1}(\omega_{1}+\eta(\omega_{2},\omega_{1}))}{2\omega_{1}+\eta(\omega_{2},\omega_{1})}\right) \\ &\supseteq \Gamma(\alpha)\left(\frac{\omega_{1}(\omega_{1}+\eta(\omega_{2},\omega_{1}))}{\eta(\omega_{2},\omega_{1})}\right)^{\alpha}\Big[I^{\alpha}_{\left(\frac{1}{\omega_{1}+\eta(\omega_{2},\omega_{1})}\right)^{+}}(\psi\circ\Omega)\left(\frac{1}{\omega_{1}}\right)(\varphi\circ\Omega)\left(\frac{1}{\omega_{1}}\right) \\ &\quad +I^{\alpha}_{\left(\frac{1}{\omega_{1}}\right)^{-}}(\psi\circ\Omega)\left(\frac{1}{\omega_{1}+\eta(\omega_{2},\omega_{1})}\right)(\varphi\circ\Omega)\left(\frac{1}{\omega_{1}+\eta(\omega_{2},\omega_{1})}\right)\Big] \\ &\quad +F(\omega_{1},\omega_{1}+\eta(\omega_{2},\omega_{1}))\int_{0}^{1}(t^{\alpha-1}+(1-t)^{\alpha-1})h_{1}(t)h_{2}(1-t)dt \\ &\quad +G(\omega_{1},\omega_{1}+\eta(\omega_{2},\omega_{1}))\int_{0}^{1}(t^{\alpha-1}+(1-t)^{\alpha-1})h_{1}(t)h_{2}(t)dt. \end{split}$$
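The substitution at the start of this proof works because *u* and *v* are harmonic-mean conjugates: 2*uv*/(*u* + *v*) collapses to 2*ω*1(*ω*<sup>1</sup> + *η*(*ω*2, *ω*1))/(2*ω*<sup>1</sup> + *η*(*ω*2, *ω*1)) for every *t* ∈ [0, 1]. A quick numerical check of this identity (an illustrative sketch, not from the paper):

```python
import random

random.seed(1)
for _ in range(1000):
    w1 = random.uniform(0.5, 5.0)   # omega_1 > 0
    e = random.uniform(0.1, 5.0)    # eta(omega_2, omega_1) > 0
    t = random.random()
    u = w1 * (w1 + e) / (w1 + (1 - t) * e)
    v = w1 * (w1 + e) / (w1 + t * e)
    # 1/u + 1/v = (2*w1 + e) / (w1 * (w1 + e)), independent of t
    assert abs(2 * u * v / (u + v) - 2 * w1 * (w1 + e) / (2 * w1 + e)) < 1e-9
print("harmonic mean of u and v is independent of t")
```

The identity is immediate from 1/*u* + 1/*v* = (2*ω*<sup>1</sup> + *η*(*ω*2, *ω*1))/(*ω*1(*ω*<sup>1</sup> + *η*(*ω*2, *ω*1))), in which *t* cancels.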

**Remark 6.** *If we put η*(*ω*2, *ω*1) = *ω*<sup>2</sup> − *ω*<sup>1</sup> *in the above theorem, we obtain Theorem 7 of [50].*

**Remark 7.** *If we put η*(*ω*2, *ω*1) = *ω*<sup>2</sup> − *ω*<sup>1</sup> *and α* = 1 *in the above theorem, we obtain Theorem 4 of [38].*

**Corollary 6.** *If α* = 1, *then Theorem 5 gives the following result:*

$$\begin{split} &\frac{1}{2h_{1}(\frac{1}{2})h_{2}(\frac{1}{2})}\psi\left(\frac{2\omega_{1}(\omega_{1}+\eta(\omega_{2},\omega_{1}))}{2\omega_{1}+\eta(\omega_{2},\omega_{1})}\right)\varphi\left(\frac{2\omega_{1}(\omega_{1}+\eta(\omega_{2},\omega_{1}))}{2\omega_{1}+\eta(\omega_{2},\omega_{1})}\right) \\ &\supseteq \frac{\omega_{1}(\omega_{1}+\eta(\omega_{2},\omega_{1}))}{\eta(\omega_{2},\omega_{1})}\int_{\omega_{1}}^{\omega_{1}+\eta(\omega_{2},\omega_{1})}\frac{\psi(u)\varphi(u)}{u^{2}}du + F(\omega_{1},\omega_{1}+\eta(\omega_{2},\omega_{1}))\int_{0}^{1}h_{1}(t)h_{2}(1-t)dt \\ &\quad + G(\omega_{1},\omega_{1}+\eta(\omega_{2},\omega_{1}))\int_{0}^{1}h_{1}(t)h_{2}(t)dt. \end{split}$$

**Corollary 7.** *If h*1(*t*) = *h*2(*t*) = *t*, *then Theorem 5 gives the following result:*

$$\begin{split} &\frac{4}{\alpha}\psi\left(\frac{2\omega_{1}(\omega_{1}+\eta(\omega_{2},\omega_{1}))}{2\omega_{1}+\eta(\omega_{2},\omega_{1})}\right)\varphi\left(\frac{2\omega_{1}(\omega_{1}+\eta(\omega_{2},\omega_{1}))}{2\omega_{1}+\eta(\omega_{2},\omega_{1})}\right) \\ &\supseteq \Gamma(\alpha)\left(\frac{\omega_{1}(\omega_{1}+\eta(\omega_{2},\omega_{1}))}{\eta(\omega_{2},\omega_{1})}\right)^{\alpha}\Big[I^{\alpha}_{\left(\frac{1}{\omega_{1}+\eta(\omega_{2},\omega_{1})}\right)^{+}}(\psi\circ\Omega)\left(\frac{1}{\omega_{1}}\right)(\varphi\circ\Omega)\left(\frac{1}{\omega_{1}}\right) \\ &\quad +I^{\alpha}_{\left(\frac{1}{\omega_{1}}\right)^{-}}(\psi\circ\Omega)\left(\frac{1}{\omega_{1}+\eta(\omega_{2},\omega_{1})}\right)(\varphi\circ\Omega)\left(\frac{1}{\omega_{1}+\eta(\omega_{2},\omega_{1})}\right)\Big] \\ &\quad +F(\omega_{1},\omega_{1}+\eta(\omega_{2},\omega_{1}))\int_{0}^{1}t(1-t)(t^{\alpha-1}+(1-t)^{\alpha-1})dt \\ &\quad +G(\omega_{1},\omega_{1}+\eta(\omega_{2},\omega_{1}))\int_{0}^{1}t^{2}(t^{\alpha-1}+(1-t)^{\alpha-1})dt \\ &= \Gamma(\alpha)\left(\frac{\omega_{1}(\omega_{1}+\eta(\omega_{2},\omega_{1}))}{\eta(\omega_{2},\omega_{1})}\right)^{\alpha}\Big[I^{\alpha}_{\left(\frac{1}{\omega_{1}+\eta(\omega_{2},\omega_{1})}\right)^{+}}(\psi\circ\Omega)\left(\frac{1}{\omega_{1}}\right)(\varphi\circ\Omega)\left(\frac{1}{\omega_{1}}\right) \\ &\quad +I^{\alpha}_{\left(\frac{1}{\omega_{1}}\right)^{-}}(\psi\circ\Omega)\left(\frac{1}{\omega_{1}+\eta(\omega_{2},\omega_{1})}\right)(\varphi\circ\Omega)\left(\frac{1}{\omega_{1}+\eta(\omega_{2},\omega_{1})}\right)\Big] \\ &\quad +\frac{2\alpha F(\omega_{1},\omega_{1}+\eta(\omega_{2},\omega_{1}))+(\alpha^{2}+\alpha+2)G(\omega_{1},\omega_{1}+\eta(\omega_{2},\omega_{1}))}{\alpha(\alpha+1)(\alpha+2)}. \end{split}$$

#### **5. Results and Discussions**

After illustrating the concept of interval-valued functions, this paper proposes a new definition of harmonically *h*-preinvex interval-valued functions. Further, with the help of the proposed harmonic *h*-preinvexity for interval-valued functions, we have proven H–H-type inclusions for interval-valued R–L fractional integrals. From the definition of a harmonically *h*-preinvex interval-valued function, we can see that every harmonically *h*-convex interval-valued function is a harmonically *h*-preinvex interval-valued function with respect to *η*(*v*, *u*) = *v* − *u*. The results obtained in this paper generalize the results of Zhao et al. [38] and Shi et al. [50]. Moreover, some particular cases of our main outcomes are considered.

#### **6. Conclusions and Future Directions**

In this paper, we have introduced harmonically *h*-preinvex interval-valued functions, which include harmonically *h*-convex and harmonically convex interval-valued functions as special cases. We have obtained H–H-type fractional inclusions for harmonically *h*-preinvex interval-valued functions, and we have then proven fractional H–H-type inclusions for the product of two harmonically *h*-preinvex interval-valued functions. The results obtained in this paper may be extended to other kinds of interval-valued fractional integrals involving harmonically *h*-preinvex interval-valued functions. In the future, we can investigate interval-valued preinvexity on coordinates and establish new inclusions of H–H-type for interval-valued coordinated preinvex functions. It is expected that the current work will motivate researchers working in fractional calculus, interval analysis, and other related areas.

**Author Contributions:** Formal analysis, K.K.L., J.B., N.S. and S.K.M.; funding acquisition, K.K.L.; investigation, S.K.M.; methodology, J.B., N.S. and S.K.M.; supervision, S.K.M.; validation, N.S.; writing—original draft, J.B.; writing—review and editing, K.K.L., J.B. and N.S. All authors have read and agreed to the published version of the manuscript.

**Funding:** The second author is financially supported by the Ministry of Science and Technology, Department of Science and Technology, New Delhi, India, through Registration No. DST/INSPIRE Fellowship/[IF190355], and the fourth author is financially supported by the "Research Grant for Faculty" (IoE Scheme) under Dev. Scheme No. 6031 and by the Department of Science and Technology, SERB, New Delhi, India, through grant no. MTR/2018/000121.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** No data were used to support this study.

**Acknowledgments:** The authors are indebted to the anonymous reviewers for their valuable comments and remarks that helped to improve the presentation and quality of the manuscript.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


## *Article* **Riemann–Liouville Fractional Integral Inequalities for Generalized Pre-Invex Functions of Interval-Valued Settings Based upon Pseudo Order Relation**

**Muhammad Bilal Khan <sup>1</sup>, Hatim Ghazi Zaini <sup>2</sup>, Savin Treanță <sup>3</sup>, Mohamed S. Soliman <sup>4</sup> and Kamsing Nonlaopon <sup>5,</sup>\***


**Abstract:** The concepts of convex and non-convex functions play a key role in the study of optimization, and many inequalities can be established with their help. Moreover, the principles of convexity and symmetry are inextricably linked; over the last two years, their considerable association has made them a new field of study. In this paper, we study a new version of interval-valued functions (*I*-*V*·*Fs*), known as left and right *χ*-pre-invex interval-valued functions (LR-*χ*-pre-invex *I*-*V*·*Fs*). For this class of non-convex *I*-*V*·*Fs*, we derive numerous new dynamic inequalities via interval Riemann–Liouville fractional integral operators. Applications of these results are considered in a unique way, and instructive examples are provided to support our conclusions. We also discuss a few specific cases that may be extrapolated from our primary findings.

**Keywords:** LR-*χ*-pre-invex interval-valued function; interval Riemann–Liouville fractional integral operator; Hermite–Hadamard inequality; Hermite–Hadamard Fejér inequality

#### **1. Introduction**

The Hermite–Hadamard inequality (see [1,2], p. 137) is a well-known inequality in convex function theory, with a geometric interpretation and a wide range of applications. The Hermite–Hadamard inequality (*H*-*H* inequality) develops the concept of convexity and follows logically from Jensen's inequality. In recent years, the *H*-*H* inequality for convex functions has attracted considerable attention, and several refinements and extensions have been investigated; see [3–14] and the references therein.

**Citation:** Khan, M.B.; Zaini, H.G.; Treanţă, S.; Soliman, M.S.; Nonlaopon, K. Riemann–Liouville Fractional Integral Inequalities for Generalized Pre-Invex Functions of Interval-Valued Settings Based upon Pseudo Order Relation. *Mathematics* **2022**, *10*, 204. https://doi.org/10.3390/math10020204

Academic Editor: Sergey Sitnik

Received: 7 December 2021; Accepted: 7 January 2022; Published: 10 January 2022

**Copyright:** © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

On the other hand, interval analysis is a branch of set-valued analysis concerned with the study of intervals in the context of mathematical analysis and topology. It was developed as a means of dealing with interval uncertainty, which appears in many mathematical or computational models of deterministic real-world systems. A historical example of an interval enclosure is Archimedes' method for bounding the circumference of a circle. In 1966, Moore [15] published the first book on interval analysis and is credited with the first use of intervals in computer mathematics. Following the release of his book, many scientists began to study the theory and applications of interval arithmetic. Because of its generality, interval analysis is currently a useful approach in a range of fields that deal with uncertain data, with applications in error analysis, computer graphics, experimental and computational physics, and many more.
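The enclosure idea behind interval arithmetic can be sketched in a few lines of code. The helpers below are a minimal illustrative implementation (the names `iadd` and `imul` are our own, not part of any interval-analysis library): every quantity is stored as a pair of endpoints, and each operation returns an interval guaranteed to contain all possible exact results.

```python
# Minimal interval arithmetic: each quantity is a pair (lo, hi) enclosing
# the true value, and operations propagate the enclosure.
def iadd(a, b):
    return (a[0] + b[0], a[1] + b[1])

def imul(a, b):
    # The product of two intervals is bounded by the extreme endpoint products.
    p = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(p), max(p))

# A measurement known only to lie in [1.9, 2.1]:
x = (1.9, 2.1)
area = imul(x, x)            # encloses x**2 for every admissible x
total = iadd(area, (0.5, 0.5))
print(area, total)
```

The output interval `area` necessarily contains the exact square of every admissible measurement, which is precisely the kind of guaranteed enclosure exploited throughout this paper.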

Numerous significant inequalities for *I*-*V*·*Fs* (Hermite–Hadamard, Ostrowski, etc.) have been investigated in recent years. In [16,17], Chalco-Cano et al. constructed Ostrowski-type inequalities for *I*-*V*·*Fs* using the Hukuhara derivative for *I*-*V*·*Fs*. Román-Flores et al. developed Minkowski and Beckenbach inequalities for *I*-*V*·*Fs* in [18]. For more information, see [18–22] and the references therein. Moreover, inequalities can be examined for more general set-valued mappings; for example, Sadowska [23] introduced the *H*-*H* inequality for general set-valued mappings. Similarly, for generalized inequalities, we refer to the articles [24,25] and the references therein. Recently, Khan et al. extended the interval *H*-*H* inequalities to fuzzy-interval *H*-*H* inequalities using fuzzy Riemann and fuzzy Riemann–Liouville fractional integral operators, as in [26]. Khan et al. also presented the new class of convex fuzzy mappings known as (*χ*1, *χ*2)-convex fuzzy-interval-valued functions ((*χ*1, *χ*2)-convex *F-I-V*·*Fs*) and obtained the corresponding version of the *H*-*H* inequalities. Moreover, Khan et al. introduced new notions of generalized convex *F-I-V*·*Fs* and derived new fractional *H*-*H*-type inequalities for convex *F-I-V*·*Fs* [27–32]. For more analysis and applications of *F-I-V*·*Fs*, see [33–50] and the references therein.

This study is organized as follows: Section 2 presents preliminary and new concepts and results in interval space and convex analysis. Section 3 obtains interval *H*-*H* inequalities and *H*-*H* Fejér inequalities for LR-*χ*-pre-invex *I*-*V*·*Fs* via interval Riemann–Liouville fractional integral operators; some illustrative examples are also given to verify the results. Section 4 gives conclusions and future plans.

#### **2. Preliminaries**

Let $\mathcal{K}_C$ stand for the collection of all closed and bounded intervals of $\mathbb{R}$, and let $\mathcal{K}_C^+$ represent the set of all positive intervals. The collections of all Riemann-integrable real-valued functions and Riemann-integrable *I-V*·*Fs* are denoted by $\mathcal{R}_{[\mu,\omega]}$ and $\mathcal{IR}_{[\mu,\omega]}$, respectively. For more concepts on *I*-*V*·*Fs*, see [36]. Moreover, we have:

**Remark 1.** [35] *(i) The relation* $\leq_p$ *defined on* $\mathcal{K}_C$ *by:*

$$[\mathfrak{U}_*, \mathfrak{U}^*] \leq_p [\mathcal{Z}_*, \mathcal{Z}^*] \ \text{if and only if} \ \mathfrak{U}_* \leq \mathcal{Z}_*,\ \mathfrak{U}^* \leq \mathcal{Z}^*, \tag{1}$$

*for all* $[\mathfrak{U}_*, \mathfrak{U}^*], [\mathcal{Z}_*, \mathcal{Z}^*] \in \mathcal{K}_C$, *is a pseudo-order relation. For given* $[\mathfrak{U}_*, \mathfrak{U}^*], [\mathcal{Z}_*, \mathcal{Z}^*] \in \mathcal{K}_C$, *we say that* $[\mathfrak{U}_*, \mathfrak{U}^*] \leq_p [\mathcal{Z}_*, \mathcal{Z}^*]$ *if and only if* $\mathfrak{U}_* \leq \mathcal{Z}_*$, $\mathfrak{U}^* \leq \mathcal{Z}^*$ *or* $\mathfrak{U}_* \leq \mathcal{Z}_*$, $\mathfrak{U}^* < \mathcal{Z}^*$*. The relation* $[\mathfrak{U}_*, \mathfrak{U}^*] \leq_p [\mathcal{Z}_*, \mathcal{Z}^*]$ *coincides with* $[\mathfrak{U}_*, \mathfrak{U}^*] \leq [\mathcal{Z}_*, \mathcal{Z}^*]$ *on* $\mathcal{K}_C$*.*

*(ii) It can easily be seen that* $\leq_p$ *behaves like "left and right" on the real line* $\mathbb{R}$*, so we call* $\leq_p$ *the "left and right" order (or "LR" order, in short).*
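In code, the LR order of Remark 1 is just a componentwise comparison of the endpoint pairs. The sketch below uses plain tuples for intervals (`leq_p` is an illustrative name of our own); note that, unlike a total order, two intervals may be incomparable in both directions.

```python
def leq_p(A, B):
    """[A_lo, A_hi] <=_p [B_lo, B_hi] iff A_lo <= B_lo and A_hi <= B_hi."""
    return A[0] <= B[0] and A[1] <= B[1]

# [1, 4] <=_p [2, 5], but [1, 4] and [2, 3] are incomparable in either direction:
print(leq_p((1, 4), (2, 5)), leq_p((1, 4), (2, 3)), leq_p((2, 3), (1, 4)))
```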

The concept of the Riemann integral for an *I-V*·*F*, first introduced by Moore [15], is defined as follows:

**Theorem 1.** [15] *If* $\mathfrak{S} : [\mu,\omega] \subset \mathbb{R} \to \mathcal{K}_C$ *is an I-V*·*F on* $[\mu,\omega]$ *such that* $\mathfrak{S}(x) = [\mathfrak{S}_*(x), \mathfrak{S}^*(x)]$*, then* $\mathfrak{S}$ *is Riemann integrable over* $[\mu,\omega]$ *if and only if* $\mathfrak{S}_*(x)$ *and* $\mathfrak{S}^*(x)$ *are both Riemann integrable over* $[\mu,\omega]$*, in which case:*

$$(IR)\int_{\mu}^{\omega}\mathfrak{S}(x)\,dx = \left[(R)\int_{\mu}^{\omega}\mathfrak{S}_*(x)\,dx,\ (R)\int_{\mu}^{\omega}\mathfrak{S}^*(x)\,dx\right]. \tag{2}$$
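Theorem 1 says the (IR)-integral is computed endpoint-wise. A quick numerical sketch (the quadrature helper and function names are our own illustrative choices): for $\mathfrak{S}(x) = [x^2, 2x^2]$ on $[0,1]$, the interval integral should be $[1/3, 2/3]$.

```python
def midpoint(f, a, b, n=10_000):
    """Composite midpoint rule for a real-valued Riemann integral."""
    h = (b - a) / n
    return h * sum(f(a + (k + 0.5) * h) for k in range(n))

def interval_integral(S, a, b, n=10_000):
    """(IR)-integral of an I-V.F given as S(x) = (lower(x), upper(x)),
    obtained by integrating the two end-point functions separately."""
    return (midpoint(lambda x: S(x)[0], a, b, n),
            midpoint(lambda x: S(x)[1], a, b, n))

S = lambda x: (x * x, 2 * x * x)          # S(x) = [x^2, 2x^2]
lo, hi = interval_integral(S, 0.0, 1.0)   # exact value: [1/3, 2/3]
```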

*Lupulescu and Budak et al. [36,37] introduced the following interval Riemann*–*Liouville fractional integral operators:*

*Let* $\alpha > 0$ *and let* $L\big([\mu,\omega], \mathcal{K}_C^+\big)$ *be the collection of all Lebesgue measurable I-V*·*Fs on* $[\mu,\omega]$*. Then the interval left and right Riemann*–*Liouville fractional integrals of* $\mathfrak{S} \in L\big([\mu,\omega], \mathcal{K}_C^+\big)$ *with order* $\alpha > 0$ *are defined by:*

$$\mathcal{I}^{\alpha}_{\mu^{+}}\,\mathfrak{S}(x) = \frac{1}{\Gamma(\alpha)}\int_{\mu}^{x}(x-\varsigma)^{\alpha-1}\,\mathfrak{S}(\varsigma)\,d\varsigma, \quad (x > \mu), \tag{3}$$

*and:*

$$\mathcal{I}^{\alpha}_{\omega^{-}}\,\mathfrak{S}(x) = \frac{1}{\Gamma(\alpha)}\int_{x}^{\omega}(\varsigma-x)^{\alpha-1}\,\mathfrak{S}(\varsigma)\,d\varsigma, \quad (x < \omega), \tag{4}$$

*respectively, where* $\Gamma(x) = \int_{0}^{\infty}\varsigma^{x-1}e^{-\varsigma}\,d\varsigma$ *is the Euler gamma function.*
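Definition (3) can be spot-checked numerically against a known closed form: for the constant function $f \equiv 1$, $\mathcal{I}^{\alpha}_{\mu^{+}} 1(x) = (x-\mu)^{\alpha}/\Gamma(\alpha+1)$. The sketch below (helper name `rl_left` is our own) takes $\alpha = 3/2$ so the integrand is nonsingular and a plain midpoint rule suffices.

```python
import math

def rl_left(f, mu, x, alpha, n=20_000):
    """Left Riemann-Liouville integral I^alpha_{mu+} f(x) via the midpoint rule."""
    h = (x - mu) / n
    s = sum((x - (mu + (k + 0.5) * h)) ** (alpha - 1) * f(mu + (k + 0.5) * h)
            for k in range(n))
    return h * s / math.gamma(alpha)

# For f = 1: I^alpha_{0+} 1(x) = x**alpha / Gamma(alpha + 1)
alpha, x = 1.5, 1.0
approx = rl_left(lambda t: 1.0, 0.0, x, alpha)
exact = x ** alpha / math.gamma(alpha + 1)
```

For an interval-valued $\mathfrak{S}$, the same operator is simply applied to each end-point function, exactly as in Theorem 1.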

**Definition 1.** [34] *A real-valued function* $\mathfrak{S} : [\mu,\omega] \to \mathbb{R}^+$ *is called a convex function if:*

$$\mathfrak{S}(\varsigma x + (1-\varsigma)z) \leq \varsigma\,\mathfrak{S}(x) + (1-\varsigma)\,\mathfrak{S}(z), \tag{5}$$

*for all* $x, z \in [\mu,\omega]$, $\varsigma \in [0,1]$. *If (5) is reversed, then* $\mathfrak{S}$ *is called concave.*

**Definition 2.** [40] *A real-valued function* $\mathfrak{S} : [\mu,\omega] \to \mathbb{R}^+$ *is called a pre-invex function if:*

$$\mathfrak{S}(x + (1-\varsigma)\,\phi(z,x)) \leq \varsigma\,\mathfrak{S}(x) + (1-\varsigma)\,\mathfrak{S}(z), \tag{6}$$

*for all* $x, z \in [\mu,\omega]$, $\varsigma \in [0,1]$*, where* $\phi : [\mu,\omega] \times [\mu,\omega] \to \mathbb{R}$*. If (6) is reversed, then* $\mathfrak{S}$ *is called pre-incave.*
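With the kernel $\phi(z,x) = z - x$, the argument in (6) becomes $\varsigma x + (1-\varsigma)z$, so pre-invexity reduces exactly to convexity (5). This can be confirmed on a grid for a sample convex function (a sketch; the grid and tolerance are arbitrary choices):

```python
f = abs                       # |x| is convex, hence pre-invex for this phi
phi = lambda z, x: z - x      # with this kernel, (6) is exactly (5)

ok = all(
    f(x + (1 - s) * phi(z, x)) <= s * f(x) + (1 - s) * f(z) + 1e-12
    for x in range(-5, 6)
    for z in range(-5, 6)
    for s in [k / 10 for k in range(11)]
)
```

Other choices of $\phi$ yield genuinely non-convex pre-invex functions, which is the point of the generalization.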

**Definition 3.** [35] *The I*-*V*·*F* $\mathfrak{S} : [\mu,\omega] \to \mathcal{K}_C^+$ *is called an LR-convex I-V*·*F on* $[\mu,\omega]$ *if:*

$$\mathfrak{S}(\varsigma x + (1-\varsigma)z) \leq_p \varsigma\,\mathfrak{S}(x) + (1-\varsigma)\,\mathfrak{S}(z), \tag{7}$$

*for all* $x, z \in [\mu,\omega]$, $\varsigma \in [0,1]$. *If (7) is reversed, then* $\mathfrak{S}$ *is called an LR-concave I-V*·*F on* $[\mu,\omega]$. $\mathfrak{S}$ *is affine if and only if it is both an LR-convex and an LR-concave I-V*·*F.*

**Definition 4.** [41] *The I*-*V*·*F* $\mathfrak{S} : [\mu,\omega] \to \mathcal{K}_C^+$ *is called an LR-pre-invex I-V*·*F on the invex interval* $[\mu,\omega]$ *if:*

$$\mathfrak{S}(x + (1-\varsigma)\,\phi(z,x)) \leq_p \varsigma\,\mathfrak{S}(x) + (1-\varsigma)\,\mathfrak{S}(z), \tag{8}$$

*for all* $x, z \in [\mu,\omega]$, $\varsigma \in [0,1]$*, where* $\phi : [\mu,\omega] \times [\mu,\omega] \to \mathbb{R}$*. If (8) is reversed, then* $\mathfrak{S}$ *is called an LR-pre-incave I-V*·*F on* $[\mu,\omega]$. $\mathfrak{S}$ *is LR-affine if and only if it is both an LR-pre-invex and an LR-pre-incave I-V*·*F.*

**Definition 5.** *Let* $\chi : [0,1] \subseteq [\mu,\omega] \to \mathbb{R}^+$ *be such that* $\chi \not\equiv 0$*. Then, the I-V*·*F* $\mathfrak{S} : [\mu,\omega] \to \mathcal{K}_C^+$ *is said to be an LR-χ-pre-invex I-V*·*F on* $[\mu,\omega]$ *if:*

$$\mathfrak{S}(x + (1-\varsigma)\,\phi(z,x)) \leq_p \chi(\varsigma)\,\mathfrak{S}(x) + \chi(1-\varsigma)\,\mathfrak{S}(z), \tag{9}$$

*for all* $x, z \in [\mu,\omega]$, $\varsigma \in [0,1]$*, where* $\phi : [\mu,\omega] \times [\mu,\omega] \to \mathbb{R}$*. If* $\mathfrak{S}$ *is LR-χ-pre-incave on* $[\mu,\omega]$*, then inequality (9) is reversed.*

**Remark 2.** *If* $\chi(\varsigma) = \varsigma$, *then an LR-χ-pre-invex I-V*·*F becomes an LR-pre-invex I-V*·*F. If* $\chi(\varsigma) \equiv 1$, *then an LR-χ-pre-invex I-V*·*F becomes an LR-P I*-*V*·*F, that is:*

$$\mathfrak{S}(x + (1-\varsigma)\,\phi(z,x)) \leq_p \mathfrak{S}(x) + \mathfrak{S}(z), \quad \forall\, x, z \in [\mu,\omega],\ \varsigma \in [0,1]. \tag{10}$$

**Theorem 2.** *Let* $\chi : [0,1] \subseteq [\mu,\omega] \to \mathbb{R}$ *be a non-negative real-valued function such that* $\chi \not\equiv 0$*, and let* $\mathfrak{S} : [\mu,\omega] \to \mathcal{K}_C^+$ *be an I-V*·*F such that:*

$$\mathfrak{S}(z) = [\mathfrak{S}_*(z), \mathfrak{S}^*(z)], \tag{11}$$

*for all* $z \in [\mu,\omega]$*. Then,* $\mathfrak{S}$ *is an LR-χ-pre-invex I-V*·*F on* $[\mu,\omega]$ *if and only if* $\mathfrak{S}_*(z)$ *and* $\mathfrak{S}^*(z)$ *are both χ-pre-invex.*

**Proof.** Assume that $\mathfrak{S}_*(x)$ and $\mathfrak{S}^*(x)$ are *χ*-pre-invex on $[\mu,\omega]$. Then, by *χ*-pre-invexity, we have:

$$\mathfrak{S}_*(x + (1-\varsigma)\,\phi(z,x)) \leq \chi(\varsigma)\,\mathfrak{S}_*(x) + \chi(1-\varsigma)\,\mathfrak{S}_*(z), \quad \forall\, x, z \in [\mu,\omega],\ \varsigma \in [0,1],$$

and:

$$\mathfrak{S}^*(x + (1-\varsigma)\,\phi(z,x)) \leq \chi(\varsigma)\,\mathfrak{S}^*(x) + \chi(1-\varsigma)\,\mathfrak{S}^*(z), \quad \forall\, x, z \in [\mu,\omega],\ \varsigma \in [0,1].$$

Then by (11), we obtain:

$$\begin{aligned} \mathfrak{S}(x + (1-\varsigma)\,\phi(z,x)) &= [\mathfrak{S}_*(x + (1-\varsigma)\,\phi(z,x)),\ \mathfrak{S}^*(x + (1-\varsigma)\,\phi(z,x))] \\ &\leq_p [\chi(\varsigma)\,\mathfrak{S}_*(x),\ \chi(\varsigma)\,\mathfrak{S}^*(x)] + [\chi(1-\varsigma)\,\mathfrak{S}_*(z),\ \chi(1-\varsigma)\,\mathfrak{S}^*(z)], \end{aligned}$$

that is:

$$\mathfrak{S}(x + (1-\varsigma)\,\phi(z,x)) \leq_p \chi(\varsigma)\,\mathfrak{S}(x) + \chi(1-\varsigma)\,\mathfrak{S}(z), \quad \forall\, x, z \in [\mu,\omega],\ \varsigma \in [0,1].$$

Hence, $\mathfrak{S}$ is an LR-*χ*-pre-invex *I*-*V*·*F* on $[\mu,\omega]$.

Conversely, let $\mathfrak{S}$ be an LR-*χ*-pre-invex *I*-*V*·*F* on $[\mu,\omega]$. Then, for all $x, z \in [\mu,\omega]$ and $\varsigma \in [0,1]$, we have:

$$\mathfrak{S}(x + (1-\varsigma)\,\phi(z,x)) \leq_p \chi(\varsigma)\,\mathfrak{S}(x) + \chi(1-\varsigma)\,\mathfrak{S}(z).$$

Therefore, from (11), we have:

$$\mathfrak{S}(x + (1-\varsigma)\,\phi(z,x)) = [\mathfrak{S}_*(x + (1-\varsigma)\,\phi(z,x)),\ \mathfrak{S}^*(x + (1-\varsigma)\,\phi(z,x))].$$

Again, from (11), we obtain:

$$\chi(\varsigma)\,\mathfrak{S}(x) + \chi(1-\varsigma)\,\mathfrak{S}(z) = [\chi(\varsigma)\,\mathfrak{S}_*(x),\ \chi(\varsigma)\,\mathfrak{S}^*(x)] + [\chi(1-\varsigma)\,\mathfrak{S}_*(z),\ \chi(1-\varsigma)\,\mathfrak{S}^*(z)],$$

for all $x, z \in [\mu,\omega]$ and $\varsigma \in [0,1]$. Then, by the LR-*χ*-pre-invexity of $\mathfrak{S}$, for all $x, z \in [\mu,\omega]$ and $\varsigma \in [0,1]$, we have:

$$\mathfrak{S}_*(x + (1-\varsigma)\,\phi(z,x)) \leq \chi(\varsigma)\,\mathfrak{S}_*(x) + \chi(1-\varsigma)\,\mathfrak{S}_*(z),$$

and:

$$\mathfrak{S}^*(x + (1-\varsigma)\,\phi(z,x)) \leq \chi(\varsigma)\,\mathfrak{S}^*(x) + \chi(1-\varsigma)\,\mathfrak{S}^*(z);$$

hence, the result follows.

**Example 1.** *Let* $\chi(\varsigma) = \varsigma$ *for* $\varsigma \in [0,1]$*, and consider the I-V*·*F* $\mathfrak{S} : [0,4] \to \mathcal{K}_C^+$ *defined by* $\mathfrak{S}(z) = \left[z,\ 2e^{z^2}\right]$*. The end-point functions* $\mathfrak{S}_*(z)$, $\mathfrak{S}^*(z)$ *are χ-pre-invex with respect to* $\phi(z,x) = z - x$*; hence,* $\mathfrak{S}(z)$ *is an LR-χ-pre-invex I-V*·*F.*
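The claim of Example 1 can be checked on a grid (a sketch, assuming the end-point functions $\mathfrak{S}_*(z) = z$ and $\mathfrak{S}^*(z) = 2e^{z^2}$ with $\chi(\varsigma)=\varsigma$ and $\phi(z,x)=z-x$; both end-point functions are convex, and for this $\phi$ the χ-pre-invexity inequality is exactly the convexity inequality):

```python
import math

chi = lambda s: s
phi = lambda z, x: z - x
lower = lambda z: z                       # S_*(z)
upper = lambda z: 2 * math.exp(z * z)     # S^*(z)

# Check chi-pre-invexity of both end-point functions on a grid over [0, 4]:
ok = all(
    f(x + (1 - s) * phi(z, x)) <= chi(s) * f(x) + chi(1 - s) * f(z) + 1e-6
    for f in (lower, upper)
    for x in [0, 1, 2, 3, 4]
    for z in [0, 1, 2, 3, 4]
    for s in [k / 10 for k in range(11)]
)
```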

**Remark 3.** *If* $\chi(\varsigma) \equiv \varsigma$ *and* $\mathfrak{S}_*(z) = \mathfrak{S}^*(z)$*, then from (9) we obtain inequality (6).*

If $\chi(\varsigma) \equiv \varsigma$, $\mathfrak{S}_*(z) = \mathfrak{S}^*(z)$, and $\phi(z,x) = z - x$, then from (9) we obtain inequality (5).

We will need the following assumption on the function $\phi : [\mu,\omega] \times [\mu,\omega] \to \mathbb{R}$, which is crucial for the main results.

**Condition C.** [40]

$$\phi(x,\ z + \varsigma\,\phi(x,z)) = (1-\varsigma)\,\phi(x,z),$$

$$\phi(z,\ z + \varsigma\,\phi(x,z)) = -\varsigma\,\phi(x,z).$$

Note that for all $z, x \in [\mu,\omega]$ and $\varsigma_1, \varsigma_2 \in [0,1]$, Condition C yields:

$$\phi(z + \varsigma_2\,\phi(x,z),\ z + \varsigma_1\,\phi(x,z)) = (\varsigma_2 - \varsigma_1)\,\phi(x,z).$$

Clearly, for $\varsigma = 0$, we have $\phi(x,z) = 0$ if and only if $x = z$, for all $z, x \in [\mu,\omega]$. For applications of Condition C, see [40,41].
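As a sanity check, the canonical kernel $\phi(a,b) = a - b$ satisfies both identities of Condition C, which can be verified numerically on a grid (a sketch; the grid and tolerance are arbitrary choices):

```python
import math

phi = lambda a, b: a - b   # candidate kernel: phi(a, b) = a - b

# Both Condition C identities, checked over a grid of x, z and varsigma:
ok = all(
    math.isclose(phi(x, z + s * phi(x, z)), (1 - s) * phi(x, z), abs_tol=1e-12)
    and math.isclose(phi(z, z + s * phi(x, z)), -s * phi(x, z), abs_tol=1e-12)
    for x in [k / 4 for k in range(-8, 9)]
    for z in [k / 4 for k in range(-8, 9)]
    for s in [k / 10 for k in range(11)]
)
```

Indeed, $\phi(x,\ z + \varsigma(x-z)) = x - z - \varsigma(x-z) = (1-\varsigma)\phi(x,z)$, and similarly for the second identity.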

#### **3. Interval Fractional Hermite–Hadamard Inequalities**

In this section, we prove some new *H*-*H*-type inequalities for LR-*χ*-pre-invex *I*-*V*·*Fs* via Riemann–Liouville fractional integral operators. In what follows, we denote by $L\big([\mu,\ \mu+\phi(\omega,\mu)], \mathcal{K}_C^+\big)$ the family of Lebesgue measurable *I*-*V*·*Fs*.

**Theorem 3.** *Let* $\mathfrak{S} : [\mu,\ \mu+\phi(\omega,\mu)] \to \mathcal{K}_C^+$ *be an LR-χ-pre-invex I-V*·*F on* $[\mu,\ \mu+\phi(\omega,\mu)]$ *such that* $\mathfrak{S}(z) = [\mathfrak{S}_*(z), \mathfrak{S}^*(z)]$ *for all* $z \in [\mu,\ \mu+\phi(\omega,\mu)]$*. If* $\phi$ *satisfies Condition C and* $\mathfrak{S} \in L\big([\mu,\ \mu+\phi(\omega,\mu)], \mathcal{K}_C^+\big)$*, then:*

$$\begin{aligned} \frac{1}{\alpha\,\chi\left(\frac{1}{2}\right)}\,\mathfrak{S}\!\left(\frac{2\mu+\phi(\omega,\mu)}{2}\right) &\leq_p \frac{\Gamma(\alpha)}{(\phi(\omega,\mu))^{\alpha}}\left[\mathcal{I}^{\alpha}_{\mu^{+}}\,\mathfrak{S}(\mu+\phi(\omega,\mu)) + \mathcal{I}^{\alpha}_{\mu+\phi(\omega,\mu)^{-}}\,\mathfrak{S}(\mu)\right] \\ &\leq_p \left(\mathfrak{S}(\mu) + \mathfrak{S}(\mu+\phi(\omega,\mu))\right)\int_{0}^{1}\varsigma^{\alpha-1}\left[\chi(\varsigma)+\chi(1-\varsigma)\right]d\varsigma \\ &\leq_p \left(\mathfrak{S}(\mu) + \mathfrak{S}(\omega)\right)\int_{0}^{1}\varsigma^{\alpha-1}\left[\chi(\varsigma)+\chi(1-\varsigma)\right]d\varsigma. \end{aligned}\tag{12}$$

*If* $\mathfrak{S}(z)$ *is LR-χ-pre-incave, then:*

$$\begin{aligned} \frac{1}{\alpha\,\chi\left(\frac{1}{2}\right)}\,\mathfrak{S}\!\left(\frac{2\mu+\phi(\omega,\mu)}{2}\right) &\geq_p \frac{\Gamma(\alpha)}{(\phi(\omega,\mu))^{\alpha}}\left[\mathcal{I}^{\alpha}_{\mu^{+}}\,\mathfrak{S}(\mu+\phi(\omega,\mu)) + \mathcal{I}^{\alpha}_{\mu+\phi(\omega,\mu)^{-}}\,\mathfrak{S}(\mu)\right] \\ &\geq_p \left(\mathfrak{S}(\mu) + \mathfrak{S}(\mu+\phi(\omega,\mu))\right)\int_{0}^{1}\varsigma^{\alpha-1}\left[\chi(\varsigma)+\chi(1-\varsigma)\right]d\varsigma \\ &\geq_p \left(\mathfrak{S}(\mu) + \mathfrak{S}(\omega)\right)\int_{0}^{1}\varsigma^{\alpha-1}\left[\chi(\varsigma)+\chi(1-\varsigma)\right]d\varsigma. \end{aligned}\tag{13}$$

**Proof.** Let $\mathfrak{S} : [\mu,\ \mu+\phi(\omega,\mu)] \to \mathcal{K}_C^+$ be an LR-*χ*-pre-invex *I*-*V*·*F*. If Condition C holds, then, by hypothesis, we have:

$$\frac{1}{\chi\left(\frac{1}{2}\right)}\,\mathfrak{S}\!\left(\frac{2\mu+\phi(\omega,\mu)}{2}\right) \leq_p \mathfrak{S}(\mu+(1-\varsigma)\,\phi(\omega,\mu)) + \mathfrak{S}(\mu+\varsigma\,\phi(\omega,\mu)).$$

Therefore, we have:

$$\begin{cases} \dfrac{1}{\chi\left(\frac{1}{2}\right)}\,\mathfrak{S}_*\!\left(\dfrac{2\mu+\phi(\omega,\mu)}{2}\right) \leq \mathfrak{S}_*(\mu+(1-\varsigma)\,\phi(\omega,\mu)) + \mathfrak{S}_*(\mu+\varsigma\,\phi(\omega,\mu)), \\[2ex] \dfrac{1}{\chi\left(\frac{1}{2}\right)}\,\mathfrak{S}^*\!\left(\dfrac{2\mu+\phi(\omega,\mu)}{2}\right) \leq \mathfrak{S}^*(\mu+(1-\varsigma)\,\phi(\omega,\mu)) + \mathfrak{S}^*(\mu+\varsigma\,\phi(\omega,\mu)). \end{cases}$$

Multiplying both sides by $\varsigma^{\alpha-1}$ and integrating the resulting inequalities with respect to $\varsigma$ over $(0,1)$, we obtain:

$$\frac{1}{\chi\left(\frac{1}{2}\right)}\int_{0}^{1}\varsigma^{\alpha-1}\,\mathfrak{S}_*\!\left(\frac{2\mu+\phi(\omega,\mu)}{2}\right)d\varsigma \leq \int_{0}^{1}\varsigma^{\alpha-1}\,\mathfrak{S}_*(\mu+(1-\varsigma)\,\phi(\omega,\mu))\,d\varsigma + \int_{0}^{1}\varsigma^{\alpha-1}\,\mathfrak{S}_*(\mu+\varsigma\,\phi(\omega,\mu))\,d\varsigma,$$

$$\frac{1}{\chi\left(\frac{1}{2}\right)}\int_{0}^{1}\varsigma^{\alpha-1}\,\mathfrak{S}^*\!\left(\frac{2\mu+\phi(\omega,\mu)}{2}\right)d\varsigma \leq \int_{0}^{1}\varsigma^{\alpha-1}\,\mathfrak{S}^*(\mu+(1-\varsigma)\,\phi(\omega,\mu))\,d\varsigma + \int_{0}^{1}\varsigma^{\alpha-1}\,\mathfrak{S}^*(\mu+\varsigma\,\phi(\omega,\mu))\,d\varsigma.$$

Let $x = \mu + (1-\varsigma)\,\phi(\omega,\mu)$ and $z = \mu + \varsigma\,\phi(\omega,\mu)$. Then, we have:

$$\begin{aligned} \frac{1}{\alpha\,\chi\left(\frac{1}{2}\right)}\,\mathfrak{S}_*\!\left(\frac{2\mu+\phi(\omega,\mu)}{2}\right) &\leq \frac{1}{(\phi(\omega,\mu))^{\alpha}}\int_{\mu}^{\mu+\phi(\omega,\mu)}(\mu+\phi(\omega,\mu)-x)^{\alpha-1}\,\mathfrak{S}_*(x)\,dx + \frac{1}{(\phi(\omega,\mu))^{\alpha}}\int_{\mu}^{\mu+\phi(\omega,\mu)}(z-\mu)^{\alpha-1}\,\mathfrak{S}_*(z)\,dz \\ &= \frac{\Gamma(\alpha)}{(\phi(\omega,\mu))^{\alpha}}\left[\mathcal{I}^{\alpha}_{\mu^{+}}\,\mathfrak{S}_*(\mu+\phi(\omega,\mu)) + \mathcal{I}^{\alpha}_{\mu+\phi(\omega,\mu)^{-}}\,\mathfrak{S}_*(\mu)\right], \\ \frac{1}{\alpha\,\chi\left(\frac{1}{2}\right)}\,\mathfrak{S}^*\!\left(\frac{2\mu+\phi(\omega,\mu)}{2}\right) &\leq \frac{1}{(\phi(\omega,\mu))^{\alpha}}\int_{\mu}^{\mu+\phi(\omega,\mu)}(\mu+\phi(\omega,\mu)-x)^{\alpha-1}\,\mathfrak{S}^*(x)\,dx + \frac{1}{(\phi(\omega,\mu))^{\alpha}}\int_{\mu}^{\mu+\phi(\omega,\mu)}(z-\mu)^{\alpha-1}\,\mathfrak{S}^*(z)\,dz \\ &= \frac{\Gamma(\alpha)}{(\phi(\omega,\mu))^{\alpha}}\left[\mathcal{I}^{\alpha}_{\mu^{+}}\,\mathfrak{S}^*(\mu+\phi(\omega,\mu)) + \mathcal{I}^{\alpha}_{\mu+\phi(\omega,\mu)^{-}}\,\mathfrak{S}^*(\mu)\right], \end{aligned}$$

that is:

$$\frac{1}{\alpha\,\chi\left(\frac{1}{2}\right)}\left[\mathfrak{S}_*\!\left(\frac{2\mu+\phi(\omega,\mu)}{2}\right),\ \mathfrak{S}^*\!\left(\frac{2\mu+\phi(\omega,\mu)}{2}\right)\right] \leq_p \frac{\Gamma(\alpha)}{(\phi(\omega,\mu))^{\alpha}}\left[\mathcal{I}^{\alpha}_{\mu^{+}}\,\mathfrak{S}_*(\mu+\phi(\omega,\mu)) + \mathcal{I}^{\alpha}_{\mu+\phi(\omega,\mu)^{-}}\,\mathfrak{S}_*(\mu),\ \mathcal{I}^{\alpha}_{\mu^{+}}\,\mathfrak{S}^*(\mu+\phi(\omega,\mu)) + \mathcal{I}^{\alpha}_{\mu+\phi(\omega,\mu)^{-}}\,\mathfrak{S}^*(\mu)\right],$$

thus,

$$\frac{1}{\alpha\,\chi\left(\frac{1}{2}\right)}\,\mathfrak{S}\!\left(\frac{2\mu+\phi(\omega,\mu)}{2}\right) \leq_p \frac{\Gamma(\alpha)}{(\phi(\omega,\mu))^{\alpha}}\left[\mathcal{I}^{\alpha}_{\mu^{+}}\,\mathfrak{S}(\mu+\phi(\omega,\mu)) + \mathcal{I}^{\alpha}_{\mu+\phi(\omega,\mu)^{-}}\,\mathfrak{S}(\mu)\right]. \tag{14}$$

In a similar way as above, we have:

$$\begin{aligned} &\frac{\Gamma(\alpha)}{(\phi(\omega,\mu))^{\alpha}}\left[\mathcal{I}^{\alpha}_{\mu^{+}}\,\mathfrak{S}(\mu+\phi(\omega,\mu)) + \mathcal{I}^{\alpha}_{\mu+\phi(\omega,\mu)^{-}}\,\mathfrak{S}(\mu)\right] \\ &\quad\leq_p \left[\mathfrak{S}(\mu)+\mathfrak{S}(\mu+\phi(\omega,\mu))\right]\int_{0}^{1}\varsigma^{\alpha-1}\left[\chi(\varsigma)+\chi(1-\varsigma)\right]d\varsigma. \end{aligned}\tag{15}$$

Combining (14) and (15), we have:

$$\begin{aligned} \frac{1}{\alpha\,\chi\left(\frac{1}{2}\right)}\,\mathfrak{S}\!\left(\frac{2\mu+\phi(\omega,\mu)}{2}\right) &\leq_p \frac{\Gamma(\alpha)}{(\phi(\omega,\mu))^{\alpha}}\left[\mathcal{I}^{\alpha}_{\mu^{+}}\,\mathfrak{S}(\mu+\phi(\omega,\mu)) + \mathcal{I}^{\alpha}_{\mu+\phi(\omega,\mu)^{-}}\,\mathfrak{S}(\mu)\right] \\ &\leq_p \left[\mathfrak{S}(\mu)+\mathfrak{S}(\mu+\phi(\omega,\mu))\right]\int_{0}^{1}\varsigma^{\alpha-1}\left[\chi(\varsigma)+\chi(1-\varsigma)\right]d\varsigma \\ &\leq_p \left[\mathfrak{S}(\mu)+\mathfrak{S}(\omega)\right]\int_{0}^{1}\varsigma^{\alpha-1}\left[\chi(\varsigma)+\chi(1-\varsigma)\right]d\varsigma, \end{aligned}$$

hence, the required result.

**Remark 4.** *From Theorem 3 we clearly see that:*

*If* $\phi(\omega,\mu) = \omega - \mu$ *and* $\chi(\varsigma) = \varsigma$*, then Theorem 3 yields the following result of fractional calculus, see* [42]*:*

$$\mathfrak{S}\!\left(\frac{\mu+\omega}{2}\right) \leq_p \frac{\Gamma(\alpha+1)}{2(\omega-\mu)^{\alpha}}\left[\mathcal{I}^{\alpha}_{\mu^{+}}\,\mathfrak{S}(\omega) + \mathcal{I}^{\alpha}_{\omega^{-}}\,\mathfrak{S}(\mu)\right] \leq_p \frac{\mathfrak{S}(\mu)+\mathfrak{S}(\omega)}{2}. \tag{16}$$

*If* $\alpha = 1$*, then Theorem 3 gives the following new result for LR-χ-pre-invex I-V*·*Fs:*

$$\begin{aligned} \frac{1}{2\chi\left(\frac{1}{2}\right)}\,\mathfrak{S}\!\left(\frac{2\mu+\phi(\omega,\mu)}{2}\right) &\leq_p \frac{1}{\phi(\omega,\mu)}\,(IR)\int_{\mu}^{\mu+\phi(\omega,\mu)}\mathfrak{S}(z)\,dz \\ &\leq_p \left[\mathfrak{S}(\mu)+\mathfrak{S}(\mu+\phi(\omega,\mu))\right]\int_{0}^{1}\chi(\varsigma)\,d\varsigma. \end{aligned}\tag{17}$$

*If* $\chi(\varsigma) = \varsigma$*, then Theorem 3 reduces to the result for LR-pre-invex I-V*·*Fs, see* [41]*:*

$$\mathfrak{S}\!\left(\frac{2\mu+\phi(\omega,\mu)}{2}\right) \leq_p \frac{\Gamma(\alpha+1)}{2(\phi(\omega,\mu))^{\alpha}}\left[\mathcal{I}^{\alpha}_{\mu^{+}}\,\mathfrak{S}(\mu+\phi(\omega,\mu)) + \mathcal{I}^{\alpha}_{\mu+\phi(\omega,\mu)^{-}}\,\mathfrak{S}(\mu)\right] \leq_p \frac{\mathfrak{S}(\mu)+\mathfrak{S}(\mu+\phi(\omega,\mu))}{2}. \tag{18}$$

*Let* $\alpha = 1$ *and* $\chi(\varsigma) = \varsigma$*. Then, Theorem 3 reduces to the result for LR-pre-invex I-V*·*Fs, see* [41]*:*

$$\mathfrak{S}\!\left(\frac{2\mu+\phi(\omega,\mu)}{2}\right) \leq_p \frac{1}{\phi(\omega,\mu)}\,(IR)\int_{\mu}^{\mu+\phi(\omega,\mu)}\mathfrak{S}(z)\,dz \leq_p \frac{\mathfrak{S}(\mu)+\mathfrak{S}(\omega)}{2}. \tag{19}$$
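The $\alpha = 1$, $\chi(\varsigma) = \varsigma$ case (19) can be spot-checked numerically for a sample LR-pre-invex *I*-*V*·*F*. The sketch below takes $\mathfrak{S}(z) = [2-\sqrt{z},\ 2(2-\sqrt{z})]$ on $[2,3]$ with $\phi(\omega,\mu) = \omega - \mu$ (illustrative choices of ours), and checks the chain endpoint-wise, as Theorem 1 allows:

```python
import math

def midpoint(f, a, b, n=10_000):
    h = (b - a) / n
    return h * sum(f(a + (k + 0.5) * h) for k in range(n))

mu, omega = 2.0, 3.0
lower = lambda z: 2 - math.sqrt(z)        # S_*(z), convex hence pre-invex here
upper = lambda z: 2 * (2 - math.sqrt(z))  # S^*(z)

for f in (lower, upper):
    left  = f((mu + omega) / 2)                     # S((2mu+phi)/2)
    mean  = midpoint(f, mu, omega) / (omega - mu)   # (1/phi) (IR)-integral, endpoint-wise
    bound = (f(mu) + f(omega)) / 2                  # (S(mu)+S(omega))/2
    assert left <= mean <= bound
```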

**Example 2.** *Let* $\alpha = \frac{1}{2}$, $\chi(\varsigma) = \varsigma$ *for all* $\varsigma \in [0,1]$*, and consider the I-V*·*F* $\mathfrak{S} : [\mu,\ \mu+\phi(\omega,\mu)] = [2,\ 2+\phi(3,2)] \to \mathcal{K}_C^+$ *defined by* $\mathfrak{S}(z) = [1,2]\cdot\left(2-z^{\frac{1}{2}}\right)$*. Since the left and right end-point functions* $\mathfrak{S}_*(z) = 2-z^{\frac{1}{2}}$ *and* $\mathfrak{S}^*(z) = 2\left(2-z^{\frac{1}{2}}\right)$ *are χ-pre-invex with respect to* $\phi(\omega,\mu) = \omega-\mu$*,* $\mathfrak{S}(z)$ *is an LR-χ-pre-invex I-V*·*F. We clearly see that* $\mathfrak{S} \in L\big([\mu,\ \mu+\phi(\omega,\mu)], \mathcal{K}_C^+\big)$ *and:*

$$\frac{1}{\alpha\,\chi\left(\frac{1}{2}\right)}\,\mathfrak{S}_*\!\left(\frac{2\mu+\phi(\omega,\mu)}{2}\right) = 4\,\mathfrak{S}_*\!\left(\frac{5}{2}\right) = 2\left(4-\sqrt{10}\right),$$

$$\frac{1}{\alpha\,\chi\left(\frac{1}{2}\right)}\,\mathfrak{S}^*\!\left(\frac{2\mu+\phi(\omega,\mu)}{2}\right) = 4\,\mathfrak{S}^*\!\left(\frac{5}{2}\right) = 4\left(4-\sqrt{10}\right),$$

$$\left(\mathfrak{S}_*(\mu)+\mathfrak{S}_*(\mu+\phi(\omega,\mu))\right)\int_{0}^{1}\varsigma^{\alpha-1}\left[\chi(\varsigma)+\chi(1-\varsigma)\right]d\varsigma = 2\left(4-\sqrt{2}-\sqrt{3}\right),$$

$$\left(\mathfrak{S}^*(\mu)+\mathfrak{S}^*(\mu+\phi(\omega,\mu))\right)\int_{0}^{1}\varsigma^{\alpha-1}\left[\chi(\varsigma)+\chi(1-\varsigma)\right]d\varsigma = 4\left(4-\sqrt{2}-\sqrt{3}\right).$$

*Note that:*

$$\begin{aligned} &\frac{\Gamma(\alpha)}{(\phi(\omega,\mu))^{\alpha}}\left[\mathcal{I}^{\alpha}_{\mu^{+}}\,\mathfrak{S}_*(\mu+\phi(\omega,\mu)) + \mathcal{I}^{\alpha}_{\mu+\phi(\omega,\mu)^{-}}\,\mathfrak{S}_*(\mu)\right] \\ &\quad= \int_{2}^{3}(3-z)^{\frac{-1}{2}}\left(2-z^{\frac{1}{2}}\right)dz + \int_{2}^{3}(z-2)^{\frac{-1}{2}}\left(2-z^{\frac{1}{2}}\right)dz \\ &\quad\approx \frac{7393}{10{,}000} + \frac{9501}{10{,}000} = \frac{8447}{5000}, \end{aligned}$$

$$\begin{aligned} &\frac{\Gamma(\alpha)}{(\phi(\omega,\mu))^{\alpha}}\left[\mathcal{I}^{\alpha}_{\mu^{+}}\,\mathfrak{S}^*(\mu+\phi(\omega,\mu)) + \mathcal{I}^{\alpha}_{\mu+\phi(\omega,\mu)^{-}}\,\mathfrak{S}^*(\mu)\right] \\ &\quad= 2\int_{2}^{3}(3-z)^{\frac{-1}{2}}\left(2-z^{\frac{1}{2}}\right)dz + 2\int_{2}^{3}(z-2)^{\frac{-1}{2}}\left(2-z^{\frac{1}{2}}\right)dz \approx \frac{8447}{2500}. \end{aligned}$$

*Therefore:*

$$\left[2\left(4-\sqrt{10}\right),\ 4\left(4-\sqrt{10}\right)\right] \leq_p \left[\frac{8447}{5000},\ \frac{8447}{2500}\right] \leq_p \left[2\left(4-\sqrt{2}-\sqrt{3}\right),\ 4\left(4-\sqrt{2}-\sqrt{3}\right)\right],$$

*and Theorem 3 is verified.*
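The singular fractional integrals in Example 2 can be checked numerically once the substitutions $u^2 = 3-z$ and $u^2 = z-2$ remove the endpoint singularities (a sketch with illustrative helper names; the grid size and tolerances are arbitrary choices). For the lower end-point function the chain of Theorem 3 reads $1.675\ldots \leq 1.690\ldots \leq 1.707\ldots$:

```python
import math

def midpoint(f, a, b, n=20_000):
    h = (b - a) / n
    return h * sum(f(a + (k + 0.5) * h) for k in range(n))

S_lower = lambda z: 2 - math.sqrt(z)     # S_*(z) on [2, 3]

# u^2 = 3 - z  =>  int_2^3 (3-z)^(-1/2) S_*(z) dz = 2 int_0^1 S_*(3 - u^2) du
left  = midpoint(lambda u: 2 * S_lower(3 - u * u), 0.0, 1.0)
# u^2 = z - 2  =>  int_2^3 (z-2)^(-1/2) S_*(z) dz = 2 int_0^1 S_*(2 + u^2) du
right = midpoint(lambda u: 2 * S_lower(2 + u * u), 0.0, 1.0)

lhs    = 4 * S_lower(2.5)                # (1/(alpha*chi(1/2))) S_*((2mu+phi)/2), alpha = 1/2
middle = left + right                    # Gamma(alpha)/phi^alpha [I + I], lower endpoint
rhs    = 2 * (S_lower(2) + S_lower(3))   # (S_*(mu)+S_*(omega)) * int s^(-1/2) ds
```

The upper end-point chain is exactly twice the lower one, since $\mathfrak{S}^* = 2\,\mathfrak{S}_*$ here.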

From Theorems 4 and 5, we obtain some interval fractional integral inequalities related to interval fractional *H*-*H* inequalities.

**Theorem 4.** *Let* $\mathfrak{S}, \mathcal{H} : [\mu,\ \mu+\phi(\omega,\mu)] \to \mathcal{K}_C^+$ *be LR-*$\chi_1$*-pre-invex and LR-*$\chi_2$*-pre-invex I-V*·*Fs on* $[\mu,\ \mu+\phi(\omega,\mu)]$*, respectively, such that* $\mathfrak{S}(z) = [\mathfrak{S}_*(z), \mathfrak{S}^*(z)]$ *and* $\mathcal{H}(z) = [\mathcal{H}_*(z), \mathcal{H}^*(z)]$ *for all* $z \in [\mu,\ \mu+\phi(\omega,\mu)]$*. If* $\phi$ *satisfies Condition C and* $\mathfrak{S}\times\mathcal{H} \in L\big([\mu,\ \mu+\phi(\omega,\mu)], \mathcal{K}_C^+\big)$*, then:*

$$\begin{aligned} &\frac{\Gamma(\alpha)}{(\phi(\omega,\mu))^{\alpha}}\left[\mathcal{I}^{\alpha}_{\mu^{+}}\,\mathfrak{S}(\mu+\phi(\omega,\mu))\times\mathcal{H}(\mu+\phi(\omega,\mu)) + \mathcal{I}^{\alpha}_{\mu+\phi(\omega,\mu)^{-}}\,\mathfrak{S}(\mu)\times\mathcal{H}(\mu)\right] \\ &\quad\leq_p \xi(\mu,\ \mu+\phi(\omega,\mu))\int_{0}^{1}\varsigma^{\alpha-1}\left[\chi_1(\varsigma)\chi_2(\varsigma)+\chi_1(1-\varsigma)\chi_2(1-\varsigma)\right]d\varsigma \\ &\quad\quad+ \partial(\mu,\ \mu+\phi(\omega,\mu))\int_{0}^{1}\varsigma^{\alpha-1}\left[\chi_1(\varsigma)\chi_2(1-\varsigma)+\chi_1(1-\varsigma)\chi_2(\varsigma)\right]d\varsigma, \end{aligned}\tag{20}$$

*where* $\xi(\mu,\ \mu+\phi(\omega,\mu)) = \mathfrak{S}(\mu)\times\mathcal{H}(\mu) + \mathfrak{S}(\mu+\phi(\omega,\mu))\times\mathcal{H}(\mu+\phi(\omega,\mu))$ *and* $\partial(\mu,\ \mu+\phi(\omega,\mu)) = \mathfrak{S}(\mu)\times\mathcal{H}(\mu+\phi(\omega,\mu)) + \mathfrak{S}(\mu+\phi(\omega,\mu))\times\mathcal{H}(\mu)$*, with* $\xi(\mu,\ \mu+\phi(\omega,\mu)) = [\xi_*(\mu,\ \mu+\phi(\omega,\mu)),\ \xi^*(\mu,\ \mu+\phi(\omega,\mu))]$ *and* $\partial(\mu,\ \mu+\phi(\omega,\mu)) = [\partial_*(\mu,\ \mu+\phi(\omega,\mu)),\ \partial^*(\mu,\ \mu+\phi(\omega,\mu))]$*.*

**Proof.** Since $\mathfrak{S}$ and $\mathcal{H}$ are LR-$\chi_1$-pre-invex and LR-$\chi_2$-pre-invex *I*-*V*·*Fs*, respectively, we have:

$$\begin{aligned} \mathfrak{S}_*(\mu+(1-\varsigma)\,\phi(\omega,\mu)) &= \mathfrak{S}_*(\mu+\phi(\omega,\mu)+\varsigma\,\phi(\mu,\ \mu+\phi(\omega,\mu))) \\ &\leq \chi_1(\varsigma)\,\mathfrak{S}_*(\mu) + \chi_1(1-\varsigma)\,\mathfrak{S}_*(\mu+\phi(\omega,\mu)), \\ \mathfrak{S}^*(\mu+(1-\varsigma)\,\phi(\omega,\mu)) &= \mathfrak{S}^*(\mu+\phi(\omega,\mu)+\varsigma\,\phi(\mu,\ \mu+\phi(\omega,\mu))) \\ &\leq \chi_1(\varsigma)\,\mathfrak{S}^*(\mu) + \chi_1(1-\varsigma)\,\mathfrak{S}^*(\mu+\phi(\omega,\mu)), \end{aligned}$$

and:

$$\begin{aligned} \mathcal{H}_*(\mu+(1-\varsigma)\,\phi(\omega,\mu)) &= \mathcal{H}_*(\mu+\phi(\omega,\mu)+\varsigma\,\phi(\mu,\ \mu+\phi(\omega,\mu))) \\ &\leq \chi_2(\varsigma)\,\mathcal{H}_*(\mu) + \chi_2(1-\varsigma)\,\mathcal{H}_*(\mu+\phi(\omega,\mu)), \\ \mathcal{H}^*(\mu+(1-\varsigma)\,\phi(\omega,\mu)) &= \mathcal{H}^*(\mu+\phi(\omega,\mu)+\varsigma\,\phi(\mu,\ \mu+\phi(\omega,\mu))) \\ &\leq \chi_2(\varsigma)\,\mathcal{H}^*(\mu) + \chi_2(1-\varsigma)\,\mathcal{H}^*(\mu+\phi(\omega,\mu)). \end{aligned}$$

From the definition of LR-$\chi_1$-pre-invex and LR-$\chi_2$-pre-invex $I$-$V{\cdot}F$s, we have:

$$\begin{split}
&\mathfrak{S}_{*}(\mu+(1-\varsigma)\phi(\omega,\mu))\times\mathcal{H}_{*}(\mu+(1-\varsigma)\phi(\omega,\mu))\\
&\leq\chi_{1}(\varsigma)\chi_{2}(\varsigma)\mathfrak{S}_{*}(\mu)\times\mathcal{H}_{*}(\mu)+\chi_{1}(1-\varsigma)\chi_{2}(1-\varsigma)\mathfrak{S}_{*}(\mu+\phi(\omega,\mu))\times\mathcal{H}_{*}(\mu+\phi(\omega,\mu))\\
&\quad+\chi_{1}(\varsigma)\chi_{2}(1-\varsigma)\mathfrak{S}_{*}(\mu)\times\mathcal{H}_{*}(\mu+\phi(\omega,\mu))+\chi_{1}(1-\varsigma)\chi_{2}(\varsigma)\mathfrak{S}_{*}(\mu+\phi(\omega,\mu))\times\mathcal{H}_{*}(\mu),\\
&\mathfrak{S}^{*}(\mu+(1-\varsigma)\phi(\omega,\mu))\times\mathcal{H}^{*}(\mu+(1-\varsigma)\phi(\omega,\mu))\\
&\leq\chi_{1}(\varsigma)\chi_{2}(\varsigma)\mathfrak{S}^{*}(\mu)\times\mathcal{H}^{*}(\mu)+\chi_{1}(1-\varsigma)\chi_{2}(1-\varsigma)\mathfrak{S}^{*}(\mu+\phi(\omega,\mu))\times\mathcal{H}^{*}(\mu+\phi(\omega,\mu))\\
&\quad+\chi_{1}(\varsigma)\chi_{2}(1-\varsigma)\mathfrak{S}^{*}(\mu)\times\mathcal{H}^{*}(\mu+\phi(\omega,\mu))+\chi_{1}(1-\varsigma)\chi_{2}(\varsigma)\mathfrak{S}^{*}(\mu+\phi(\omega,\mu))\times\mathcal{H}^{*}(\mu).
\end{split}\tag{21}$$

Analogously, we have:

$$\begin{split}
&\mathfrak{S}_{*}(\mu+\varsigma\phi(\omega,\mu))\times\mathcal{H}_{*}(\mu+\varsigma\phi(\omega,\mu))\\
&\leq\chi_{1}(1-\varsigma)\chi_{2}(1-\varsigma)\mathfrak{S}_{*}(\mu)\times\mathcal{H}_{*}(\mu)+\chi_{1}(\varsigma)\chi_{2}(\varsigma)\mathfrak{S}_{*}(\mu+\phi(\omega,\mu))\times\mathcal{H}_{*}(\mu+\phi(\omega,\mu))\\
&\quad+\chi_{1}(1-\varsigma)\chi_{2}(\varsigma)\mathfrak{S}_{*}(\mu)\times\mathcal{H}_{*}(\mu+\phi(\omega,\mu))+\chi_{1}(\varsigma)\chi_{2}(1-\varsigma)\mathfrak{S}_{*}(\mu+\phi(\omega,\mu))\times\mathcal{H}_{*}(\mu),\\
&\mathfrak{S}^{*}(\mu+\varsigma\phi(\omega,\mu))\times\mathcal{H}^{*}(\mu+\varsigma\phi(\omega,\mu))\\
&\leq\chi_{1}(1-\varsigma)\chi_{2}(1-\varsigma)\mathfrak{S}^{*}(\mu)\times\mathcal{H}^{*}(\mu)+\chi_{1}(\varsigma)\chi_{2}(\varsigma)\mathfrak{S}^{*}(\mu+\phi(\omega,\mu))\times\mathcal{H}^{*}(\mu+\phi(\omega,\mu))\\
&\quad+\chi_{1}(1-\varsigma)\chi_{2}(\varsigma)\mathfrak{S}^{*}(\mu)\times\mathcal{H}^{*}(\mu+\phi(\omega,\mu))+\chi_{1}(\varsigma)\chi_{2}(1-\varsigma)\mathfrak{S}^{*}(\mu+\phi(\omega,\mu))\times\mathcal{H}^{*}(\mu).
\end{split}\tag{22}$$

Adding (21) and (22), we have:

$$\begin{split}
&\mathfrak{S}_{*}(\mu+(1-\varsigma)\phi(\omega,\mu))\times\mathcal{H}_{*}(\mu+(1-\varsigma)\phi(\omega,\mu))+\mathfrak{S}_{*}(\mu+\varsigma\phi(\omega,\mu))\times\mathcal{H}_{*}(\mu+\varsigma\phi(\omega,\mu))\\
&\leq\{\chi_{1}(\varsigma)\chi_{2}(\varsigma)+\chi_{1}(1-\varsigma)\chi_{2}(1-\varsigma)\}\big[\mathfrak{S}_{*}(\mu)\times\mathcal{H}_{*}(\mu)+\mathfrak{S}_{*}(\mu+\phi(\omega,\mu))\times\mathcal{H}_{*}(\mu+\phi(\omega,\mu))\big]\\
&\quad+\{\chi_{1}(\varsigma)\chi_{2}(1-\varsigma)+\chi_{1}(1-\varsigma)\chi_{2}(\varsigma)\}\big[\mathfrak{S}_{*}(\mu+\phi(\omega,\mu))\times\mathcal{H}_{*}(\mu)+\mathfrak{S}_{*}(\mu)\times\mathcal{H}_{*}(\mu+\phi(\omega,\mu))\big],\\
&\mathfrak{S}^{*}(\mu+(1-\varsigma)\phi(\omega,\mu))\times\mathcal{H}^{*}(\mu+(1-\varsigma)\phi(\omega,\mu))+\mathfrak{S}^{*}(\mu+\varsigma\phi(\omega,\mu))\times\mathcal{H}^{*}(\mu+\varsigma\phi(\omega,\mu))\\
&\leq\{\chi_{1}(\varsigma)\chi_{2}(\varsigma)+\chi_{1}(1-\varsigma)\chi_{2}(1-\varsigma)\}\big[\mathfrak{S}^{*}(\mu)\times\mathcal{H}^{*}(\mu)+\mathfrak{S}^{*}(\mu+\phi(\omega,\mu))\times\mathcal{H}^{*}(\mu+\phi(\omega,\mu))\big]\\
&\quad+\{\chi_{1}(\varsigma)\chi_{2}(1-\varsigma)+\chi_{1}(1-\varsigma)\chi_{2}(\varsigma)\}\big[\mathfrak{S}^{*}(\mu+\phi(\omega,\mu))\times\mathcal{H}^{*}(\mu)+\mathfrak{S}^{*}(\mu)\times\mathcal{H}^{*}(\mu+\phi(\omega,\mu))\big].
\end{split}\tag{23}$$

Multiplying (23) by $\varsigma^{\alpha-1}$ and integrating the result with respect to $\varsigma$ over $(0,1)$, we have:

$$\begin{split}
&\int_{0}^{1}\varsigma^{\alpha-1}\mathfrak{S}_{*}(\mu+(1-\varsigma)\phi(\omega,\mu))\times\mathcal{H}_{*}(\mu+(1-\varsigma)\phi(\omega,\mu))\,d\varsigma+\int_{0}^{1}\varsigma^{\alpha-1}\mathfrak{S}_{*}(\mu+\varsigma\phi(\omega,\mu))\times\mathcal{H}_{*}(\mu+\varsigma\phi(\omega,\mu))\,d\varsigma\\
&\leq\xi_{*}(\mu,\mu+\phi(\omega,\mu))\int_{0}^{1}\varsigma^{\alpha-1}[\chi_{1}(\varsigma)\chi_{2}(\varsigma)+\chi_{1}(1-\varsigma)\chi_{2}(1-\varsigma)]\,d\varsigma\\
&\quad+\partial_{*}(\mu,\mu+\phi(\omega,\mu))\int_{0}^{1}\varsigma^{\alpha-1}[\chi_{1}(\varsigma)\chi_{2}(1-\varsigma)+\chi_{1}(1-\varsigma)\chi_{2}(\varsigma)]\,d\varsigma,\\
&\int_{0}^{1}\varsigma^{\alpha-1}\mathfrak{S}^{*}(\mu+(1-\varsigma)\phi(\omega,\mu))\times\mathcal{H}^{*}(\mu+(1-\varsigma)\phi(\omega,\mu))\,d\varsigma+\int_{0}^{1}\varsigma^{\alpha-1}\mathfrak{S}^{*}(\mu+\varsigma\phi(\omega,\mu))\times\mathcal{H}^{*}(\mu+\varsigma\phi(\omega,\mu))\,d\varsigma\\
&\leq\xi^{*}(\mu,\mu+\phi(\omega,\mu))\int_{0}^{1}\varsigma^{\alpha-1}[\chi_{1}(\varsigma)\chi_{2}(\varsigma)+\chi_{1}(1-\varsigma)\chi_{2}(1-\varsigma)]\,d\varsigma\\
&\quad+\partial^{*}(\mu,\mu+\phi(\omega,\mu))\int_{0}^{1}\varsigma^{\alpha-1}[\chi_{1}(\varsigma)\chi_{2}(1-\varsigma)+\chi_{1}(1-\varsigma)\chi_{2}(\varsigma)]\,d\varsigma.
\end{split}$$

It follows that:

$$\begin{split}
&\frac{\Gamma(\alpha)}{(\phi(\omega,\mu))^{\alpha}}\Big[\mathcal{I}^{\alpha}_{\mu^{+}}\,\mathfrak{S}_{*}(\mu+\phi(\omega,\mu))\times\mathcal{H}_{*}(\mu+\phi(\omega,\mu))+\mathcal{I}^{\alpha}_{\mu+\phi(\omega,\mu)^{-}}\,\mathfrak{S}_{*}(\mu)\times\mathcal{H}_{*}(\mu)\Big]\\
&\leq\xi_{*}(\mu,\mu+\phi(\omega,\mu))\int_{0}^{1}\varsigma^{\alpha-1}[\chi_{1}(\varsigma)\chi_{2}(\varsigma)+\chi_{1}(1-\varsigma)\chi_{2}(1-\varsigma)]\,d\varsigma\\
&\quad+\partial_{*}(\mu,\mu+\phi(\omega,\mu))\int_{0}^{1}\varsigma^{\alpha-1}[\chi_{1}(\varsigma)\chi_{2}(1-\varsigma)+\chi_{1}(1-\varsigma)\chi_{2}(\varsigma)]\,d\varsigma,\\
&\frac{\Gamma(\alpha)}{(\phi(\omega,\mu))^{\alpha}}\Big[\mathcal{I}^{\alpha}_{\mu^{+}}\,\mathfrak{S}^{*}(\mu+\phi(\omega,\mu))\times\mathcal{H}^{*}(\mu+\phi(\omega,\mu))+\mathcal{I}^{\alpha}_{\mu+\phi(\omega,\mu)^{-}}\,\mathfrak{S}^{*}(\mu)\times\mathcal{H}^{*}(\mu)\Big]\\
&\leq\xi^{*}(\mu,\mu+\phi(\omega,\mu))\int_{0}^{1}\varsigma^{\alpha-1}[\chi_{1}(\varsigma)\chi_{2}(\varsigma)+\chi_{1}(1-\varsigma)\chi_{2}(1-\varsigma)]\,d\varsigma\\
&\quad+\partial^{*}(\mu,\mu+\phi(\omega,\mu))\int_{0}^{1}\varsigma^{\alpha-1}[\chi_{1}(\varsigma)\chi_{2}(1-\varsigma)+\chi_{1}(1-\varsigma)\chi_{2}(\varsigma)]\,d\varsigma.
\end{split}$$

It results that:

$$\begin{split}
&\frac{\Gamma(\alpha)}{(\phi(\omega,\mu))^{\alpha}}\Big[\mathcal{I}^{\alpha}_{\mu^{+}}\,\mathfrak{S}_{*}(\mu+\phi(\omega,\mu))\times\mathcal{H}_{*}(\mu+\phi(\omega,\mu))+\mathcal{I}^{\alpha}_{\mu+\phi(\omega,\mu)^{-}}\,\mathfrak{S}_{*}(\mu)\times\mathcal{H}_{*}(\mu),\\
&\qquad\;\mathcal{I}^{\alpha}_{\mu^{+}}\,\mathfrak{S}^{*}(\mu+\phi(\omega,\mu))\times\mathcal{H}^{*}(\mu+\phi(\omega,\mu))+\mathcal{I}^{\alpha}_{\mu+\phi(\omega,\mu)^{-}}\,\mathfrak{S}^{*}(\mu)\times\mathcal{H}^{*}(\mu)\Big]\\
&\leq_{p}\big[\xi_{*}(\mu,\mu+\phi(\omega,\mu)),\,\xi^{*}(\mu,\mu+\phi(\omega,\mu))\big]\int_{0}^{1}\varsigma^{\alpha-1}[\chi_{1}(\varsigma)\chi_{2}(\varsigma)+\chi_{1}(1-\varsigma)\chi_{2}(1-\varsigma)]\,d\varsigma\\
&\quad+\big[\partial_{*}(\mu,\mu+\phi(\omega,\mu)),\,\partial^{*}(\mu,\mu+\phi(\omega,\mu))\big]\int_{0}^{1}\varsigma^{\alpha-1}[\chi_{1}(\varsigma)\chi_{2}(1-\varsigma)+\chi_{1}(1-\varsigma)\chi_{2}(\varsigma)]\,d\varsigma,
\end{split}$$

that is:

$$\begin{split}
&\frac{\Gamma(\alpha)}{(\phi(\omega,\mu))^{\alpha}}\Big[\mathcal{I}^{\alpha}_{\mu^{+}}\,\mathfrak{S}(\mu+\phi(\omega,\mu))\times\mathcal{H}(\mu+\phi(\omega,\mu))+\mathcal{I}^{\alpha}_{\mu+\phi(\omega,\mu)^{-}}\,\mathfrak{S}(\mu)\times\mathcal{H}(\mu)\Big]\\
&\leq_{p}\xi(\mu,\mu+\phi(\omega,\mu))\int_{0}^{1}\varsigma^{\alpha-1}[\chi_{1}(\varsigma)\chi_{2}(\varsigma)+\chi_{1}(1-\varsigma)\chi_{2}(1-\varsigma)]\,d\varsigma\\
&\quad+\partial(\mu,\mu+\phi(\omega,\mu))\int_{0}^{1}\varsigma^{\alpha-1}[\chi_{1}(\varsigma)\chi_{2}(1-\varsigma)+\chi_{1}(1-\varsigma)\chi_{2}(\varsigma)]\,d\varsigma,
\end{split}$$

and the theorem has been established.
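As a quick numerical sanity check of inequality (20), one can take the degenerate scalar case $\chi_1(\varsigma)=\chi_2(\varsigma)=\varsigma$, $\phi(\omega,\mu)=\omega-\mu$, $\alpha=1$ with the convex functions $\mathfrak{S}(z)=z^2$ and $\mathcal{H}(z)=z$ on $[0,1]$; all concrete choices and the function names below are illustrative assumptions, not part of the theorem:

```python
# Numerical sanity check of the product inequality (20) in the degenerate
# scalar case: chi1(s) = chi2(s) = s, phi(omega, mu) = omega - mu, alpha = 1,
# with the convex functions S(z) = z**2 and H(z) = z on [0, 1].
def check_inequality_20(mu=0.0, omega=1.0, n=100_000):
    S = lambda z: z * z   # convex, hence LR-chi1-pre-invex with chi1(s) = s
    H = lambda z: z       # convex, hence LR-chi2-pre-invex with chi2(s) = s
    h = (omega - mu) / n
    # Midpoint-rule approximation of int_mu^omega S(z)*H(z) dz.
    int_SH = sum(S(mu + (k + 0.5) * h) * H(mu + (k + 0.5) * h) for k in range(n)) * h
    # With alpha = 1 the left side of (20) reduces to (2/(omega - mu)) * int S*H.
    lhs = 2.0 / (omega - mu) * int_SH
    xi = S(mu) * H(mu) + S(omega) * H(omega)   # xi(mu, mu + phi(omega, mu))
    dd = S(mu) * H(omega) + S(omega) * H(mu)   # partial(mu, mu + phi(omega, mu))
    # int_0^1 [s^2 + (1-s)^2] ds = 2/3 and int_0^1 2*s*(1-s) ds = 1/3.
    rhs = xi * 2.0 / 3.0 + dd * 1.0 / 3.0
    return lhs, rhs

lhs, rhs = check_inequality_20()
assert lhs <= rhs   # 0.5 <= 2/3 for these choices
```

Here the left side equals $2\int_0^1 z^3\,dz=1/2$ and the right side equals $2/3$, so the inclusion holds with room to spare.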

**Theorem 5.** *Let* $\mathfrak{S},\mathcal{H}:[\mu,\mu+\phi(\omega,\mu)]\to\mathcal{K}_{C}^{+}$ *be two LR-$\chi_1$-pre-invex and LR-$\chi_2$-pre-invex $I$-$V{\cdot}F$s, respectively, such that* $\mathfrak{S}(z)=[\mathfrak{S}_{*}(z),\mathfrak{S}^{*}(z)]$ *and* $\mathcal{H}(z)=[\mathcal{H}_{*}(z),\mathcal{H}^{*}(z)]$ *for all* $z\in[\mu,\mu+\phi(\omega,\mu)]$*. If* $\phi$ *satisfies condition C and* $\mathfrak{S}\times\mathcal{H}\in L\big([\mu,\mu+\phi(\omega,\mu)],\mathcal{K}_{C}^{+}\big)$*, then:*

$$\begin{split}
&\frac{1}{\alpha\chi_{1}\left(\frac{1}{2}\right)\chi_{2}\left(\frac{1}{2}\right)}\,\mathfrak{S}\Big(\frac{2\mu+\phi(\omega,\mu)}{2}\Big)\times\mathcal{H}\Big(\frac{2\mu+\phi(\omega,\mu)}{2}\Big)\\
&\leq_{p}\frac{\Gamma(\alpha)}{(\phi(\omega,\mu))^{\alpha}}\Big[\mathcal{I}^{\alpha}_{\mu^{+}}\,\mathfrak{S}(\mu+\phi(\omega,\mu))\times\mathcal{H}(\mu+\phi(\omega,\mu))+\mathcal{I}^{\alpha}_{\mu+\phi(\omega,\mu)^{-}}\,\mathfrak{S}(\mu)\times\mathcal{H}(\mu)\Big]\\
&\quad+\partial(\mu,\mu+\phi(\omega,\mu))\int_{0}^{1}\big[\varsigma^{\alpha-1}+(1-\varsigma)^{\alpha-1}\big]\chi_{1}(\varsigma)\chi_{2}(1-\varsigma)\,d\varsigma\\
&\quad+\xi(\mu,\mu+\phi(\omega,\mu))\int_{0}^{1}\big[\varsigma^{\alpha-1}+(1-\varsigma)^{\alpha-1}\big]\chi_{1}(1-\varsigma)\chi_{2}(1-\varsigma)\,d\varsigma,
\end{split}\tag{24}$$

*where* $\xi(\mu,\mu+\phi(\omega,\mu))=\mathfrak{S}(\mu)\times\mathcal{H}(\mu)+\mathfrak{S}(\mu+\phi(\omega,\mu))\times\mathcal{H}(\mu+\phi(\omega,\mu))$, $\partial(\mu,\mu+\phi(\omega,\mu))=\mathfrak{S}(\mu)\times\mathcal{H}(\mu+\phi(\omega,\mu))+\mathfrak{S}(\mu+\phi(\omega,\mu))\times\mathcal{H}(\mu)$, *and* $\xi(\mu,\mu+\phi(\omega,\mu))=[\xi_{*}(\mu,\mu+\phi(\omega,\mu)),\,\xi^{*}(\mu,\mu+\phi(\omega,\mu))]$ *and* $\partial(\mu,\mu+\phi(\omega,\mu))=[\partial_{*}(\mu,\mu+\phi(\omega,\mu)),\,\partial^{*}(\mu,\mu+\phi(\omega,\mu))]$.

**Proof.** Consider $\mathfrak{S},\mathcal{H}:[\mu,\mu+\phi(\omega,\mu)]\to\mathcal{K}_{C}^{+}$, two LR-$\chi_1$-pre-invex and LR-$\chi_2$-pre-invex $I$-$V{\cdot}F$s, respectively. Then, by hypothesis, we have:

$$\begin{split}
&\mathfrak{S}_{*}\Big(\frac{2\mu+\phi(\omega,\mu)}{2}\Big)\times\mathcal{H}_{*}\Big(\frac{2\mu+\phi(\omega,\mu)}{2}\Big)\\
&\leq\chi_{1}\Big(\frac{1}{2}\Big)\chi_{2}\Big(\frac{1}{2}\Big)\big[\mathfrak{S}_{*}(\mu+(1-\varsigma)\phi(\omega,\mu))\times\mathcal{H}_{*}(\mu+(1-\varsigma)\phi(\omega,\mu))\\
&\qquad+\mathfrak{S}_{*}(\mu+(1-\varsigma)\phi(\omega,\mu))\times\mathcal{H}_{*}(\mu+\varsigma\phi(\omega,\mu))\big]\\
&\quad+\chi_{1}\Big(\frac{1}{2}\Big)\chi_{2}\Big(\frac{1}{2}\Big)\big[\mathfrak{S}_{*}(\mu+\varsigma\phi(\omega,\mu))\times\mathcal{H}_{*}(\mu+(1-\varsigma)\phi(\omega,\mu))\\
&\qquad+\mathfrak{S}_{*}(\mu+\varsigma\phi(\omega,\mu))\times\mathcal{H}_{*}(\mu+\varsigma\phi(\omega,\mu))\big]\\
&\leq\chi_{1}\Big(\frac{1}{2}\Big)\chi_{2}\Big(\frac{1}{2}\Big)\big[\mathfrak{S}_{*}(\mu+(1-\varsigma)\phi(\omega,\mu))\times\mathcal{H}_{*}(\mu+(1-\varsigma)\phi(\omega,\mu))+\mathfrak{S}_{*}(\mu+\varsigma\phi(\omega,\mu))\times\mathcal{H}_{*}(\mu+\varsigma\phi(\omega,\mu))\big]\\
&\quad+\chi_{1}\Big(\frac{1}{2}\Big)\chi_{2}\Big(\frac{1}{2}\Big)\big[(\chi_{1}(\varsigma)\mathfrak{S}_{*}(\mu)+\chi_{1}(1-\varsigma)\mathfrak{S}_{*}(\mu+\phi(\omega,\mu)))\times(\chi_{2}(1-\varsigma)\mathcal{H}_{*}(\mu)+\chi_{2}(\varsigma)\mathcal{H}_{*}(\mu+\phi(\omega,\mu)))\\
&\qquad+(\chi_{1}(1-\varsigma)\mathfrak{S}_{*}(\mu)+\chi_{1}(\varsigma)\mathfrak{S}_{*}(\mu+\phi(\omega,\mu)))\times(\chi_{2}(\varsigma)\mathcal{H}_{*}(\mu)+\chi_{2}(1-\varsigma)\mathcal{H}_{*}(\mu+\phi(\omega,\mu)))\big]\\
&=\chi_{1}\Big(\frac{1}{2}\Big)\chi_{2}\Big(\frac{1}{2}\Big)\big[\mathfrak{S}_{*}(\mu+(1-\varsigma)\phi(\omega,\mu))\times\mathcal{H}_{*}(\mu+(1-\varsigma)\phi(\omega,\mu))+\mathfrak{S}_{*}(\mu+\varsigma\phi(\omega,\mu))\times\mathcal{H}_{*}(\mu+\varsigma\phi(\omega,\mu))\big]\\
&\quad+\chi_{1}\Big(\frac{1}{2}\Big)\chi_{2}\Big(\frac{1}{2}\Big)\big[\{\chi_{1}(\varsigma)\chi_{2}(1-\varsigma)+\chi_{1}(1-\varsigma)\chi_{2}(\varsigma)\}\partial_{*}(\mu,\mu+\phi(\omega,\mu))\\
&\qquad+\{\chi_{1}(\varsigma)\chi_{2}(\varsigma)+\chi_{1}(1-\varsigma)\chi_{2}(1-\varsigma)\}\xi_{*}(\mu,\mu+\phi(\omega,\mu))\big],
\end{split}\tag{25}$$

and the identical chain of inequalities holds for the upper endpoint functions $\mathfrak{S}^{*}$ and $\mathcal{H}^{*}$.

Multiplying (25) by $\varsigma^{\alpha-1}$ and integrating over $(0,1)$, we get:

$$\begin{split}
&\frac{1}{\alpha\chi_{1}\left(\frac{1}{2}\right)\chi_{2}\left(\frac{1}{2}\right)}\,\mathfrak{S}_{*}\Big(\frac{2\mu+\phi(\omega,\mu)}{2}\Big)\times\mathcal{H}_{*}\Big(\frac{2\mu+\phi(\omega,\mu)}{2}\Big)\\
&\leq\frac{\Gamma(\alpha)}{(\phi(\omega,\mu))^{\alpha}}\Big[\mathcal{I}^{\alpha}_{\mu^{+}}\,\mathfrak{S}_{*}(\mu+\phi(\omega,\mu))\times\mathcal{H}_{*}(\mu+\phi(\omega,\mu))+\mathcal{I}^{\alpha}_{\mu+\phi(\omega,\mu)^{-}}\,\mathfrak{S}_{*}(\mu)\times\mathcal{H}_{*}(\mu)\Big]\\
&\quad+\partial_{*}(\mu,\mu+\phi(\omega,\mu))\int_{0}^{1}\big[\varsigma^{\alpha-1}+(1-\varsigma)^{\alpha-1}\big]\chi_{1}(\varsigma)\chi_{2}(1-\varsigma)\,d\varsigma\\
&\quad+\xi_{*}(\mu,\mu+\phi(\omega,\mu))\int_{0}^{1}\big[\varsigma^{\alpha-1}+(1-\varsigma)^{\alpha-1}\big]\chi_{1}(1-\varsigma)\chi_{2}(1-\varsigma)\,d\varsigma,\\
&\frac{1}{\alpha\chi_{1}\left(\frac{1}{2}\right)\chi_{2}\left(\frac{1}{2}\right)}\,\mathfrak{S}^{*}\Big(\frac{2\mu+\phi(\omega,\mu)}{2}\Big)\times\mathcal{H}^{*}\Big(\frac{2\mu+\phi(\omega,\mu)}{2}\Big)\\
&\leq\frac{\Gamma(\alpha)}{(\phi(\omega,\mu))^{\alpha}}\Big[\mathcal{I}^{\alpha}_{\mu^{+}}\,\mathfrak{S}^{*}(\mu+\phi(\omega,\mu))\times\mathcal{H}^{*}(\mu+\phi(\omega,\mu))+\mathcal{I}^{\alpha}_{\mu+\phi(\omega,\mu)^{-}}\,\mathfrak{S}^{*}(\mu)\times\mathcal{H}^{*}(\mu)\Big]\\
&\quad+\partial^{*}(\mu,\mu+\phi(\omega,\mu))\int_{0}^{1}\big[\varsigma^{\alpha-1}+(1-\varsigma)^{\alpha-1}\big]\chi_{1}(\varsigma)\chi_{2}(1-\varsigma)\,d\varsigma\\
&\quad+\xi^{*}(\mu,\mu+\phi(\omega,\mu))\int_{0}^{1}\big[\varsigma^{\alpha-1}+(1-\varsigma)^{\alpha-1}\big]\chi_{1}(1-\varsigma)\chi_{2}(1-\varsigma)\,d\varsigma.
\end{split}$$

It follows that:

$$\begin{split}
&\frac{1}{\alpha\chi_{1}\left(\frac{1}{2}\right)\chi_{2}\left(\frac{1}{2}\right)}\,\mathfrak{S}\Big(\frac{2\mu+\phi(\omega,\mu)}{2}\Big)\times\mathcal{H}\Big(\frac{2\mu+\phi(\omega,\mu)}{2}\Big)\\
&\leq_{p}\frac{\Gamma(\alpha)}{(\phi(\omega,\mu))^{\alpha}}\Big[\mathcal{I}^{\alpha}_{\mu^{+}}\,\mathfrak{S}(\mu+\phi(\omega,\mu))\times\mathcal{H}(\mu+\phi(\omega,\mu))+\mathcal{I}^{\alpha}_{\mu+\phi(\omega,\mu)^{-}}\,\mathfrak{S}(\mu)\times\mathcal{H}(\mu)\Big]\\
&\quad+\partial(\mu,\mu+\phi(\omega,\mu))\int_{0}^{1}\big[\varsigma^{\alpha-1}+(1-\varsigma)^{\alpha-1}\big]\chi_{1}(\varsigma)\chi_{2}(1-\varsigma)\,d\varsigma\\
&\quad+\xi(\mu,\mu+\phi(\omega,\mu))\int_{0}^{1}\big[\varsigma^{\alpha-1}+(1-\varsigma)^{\alpha-1}\big]\chi_{1}(1-\varsigma)\chi_{2}(1-\varsigma)\,d\varsigma,
\end{split}$$

Hence, the required result follows.

We now present a generalized version of the interval *H*-*H* Fejér inequality on an invex set for LR-$\chi$-pre-invex $I$-$V{\cdot}F$s via the interval Riemann–Liouville fractional integral.

**Theorem 6.** *(Second fractional $H$-$H$ Fejér inequality) Let* $\mathfrak{S}:[\mu,\mu+\phi(\omega,\mu)]\to\mathcal{K}_{C}^{+}$ *be an LR-$\chi$-pre-invex $I$-$V{\cdot}F$ with* $\mu<\mu+\phi(\omega,\mu)$*, such that* $\mathfrak{S}(z)=[\mathfrak{S}_{*}(z),\mathfrak{S}^{*}(z)]$ *for all* $z\in[\mu,\mu+\phi(\omega,\mu)]$*. If* $\mathfrak{S}\in L\big([\mu,\mu+\phi(\omega,\mu)],\mathcal{K}_{C}^{+}\big)$ *and* $\mathcal{D}:[\mu,\mu+\phi(\omega,\mu)]\to\mathbb{R}$*,* $\mathcal{D}(z)\geq 0$*, is symmetric with respect to* $\frac{2\mu+\phi(\omega,\mu)}{2}$*, then:*

$$\begin{split}
&\frac{\Gamma(\alpha)}{(\phi(\omega,\mu))^{\alpha}}\Big[\mathcal{I}^{\alpha}_{\mu^{+}}\,\mathfrak{S}\mathcal{D}(\mu+\phi(\omega,\mu))+\mathcal{I}^{\alpha}_{\mu+\phi(\omega,\mu)^{-}}\,\mathfrak{S}\mathcal{D}(\mu)\Big]\\
&\leq_{p}\big(\mathfrak{S}(\mu)+\mathfrak{S}(\mu+\phi(\omega,\mu))\big)\int_{0}^{1}\varsigma^{\alpha-1}[\chi(\varsigma)+\chi(1-\varsigma)]\mathcal{D}(\mu+\varsigma\phi(\omega,\mu))\,d\varsigma.
\end{split}\tag{26}$$

*If* $\mathfrak{S}$ *is a pre-incave $I$-$V{\cdot}F$, then inequality (26) is reversed.*

**Proof.** Let $\mathfrak{S}$ be an LR-$\chi$-pre-invex $I$-$V{\cdot}F$ and let $\varsigma^{\alpha-1}\mathcal{D}(\mu+(1-\varsigma)\phi(\omega,\mu))\geq 0$. Then, we have:

$$\begin{split}
&\varsigma^{\alpha-1}\mathfrak{S}_{*}(\mu+(1-\varsigma)\phi(\omega,\mu))\mathcal{D}(\mu+(1-\varsigma)\phi(\omega,\mu))\\
&\qquad\leq\varsigma^{\alpha-1}\big(\chi(\varsigma)\mathfrak{S}_{*}(\mu)+\chi(1-\varsigma)\mathfrak{S}_{*}(\mu+\phi(\omega,\mu))\big)\mathcal{D}(\mu+(1-\varsigma)\phi(\omega,\mu)),\\
&\varsigma^{\alpha-1}\mathfrak{S}^{*}(\mu+(1-\varsigma)\phi(\omega,\mu))\mathcal{D}(\mu+(1-\varsigma)\phi(\omega,\mu))\\
&\qquad\leq\varsigma^{\alpha-1}\big(\chi(\varsigma)\mathfrak{S}^{*}(\mu)+\chi(1-\varsigma)\mathfrak{S}^{*}(\mu+\phi(\omega,\mu))\big)\mathcal{D}(\mu+(1-\varsigma)\phi(\omega,\mu)),
\end{split}\tag{27}$$

and:

$$\begin{split}
&\varsigma^{\alpha-1}\mathfrak{S}_{*}(\mu+\varsigma\phi(\omega,\mu))\mathcal{D}(\mu+\varsigma\phi(\omega,\mu))\\
&\qquad\leq\varsigma^{\alpha-1}\big(\chi(1-\varsigma)\mathfrak{S}_{*}(\mu)+\chi(\varsigma)\mathfrak{S}_{*}(\mu+\phi(\omega,\mu))\big)\mathcal{D}(\mu+\varsigma\phi(\omega,\mu)),\\
&\varsigma^{\alpha-1}\mathfrak{S}^{*}(\mu+\varsigma\phi(\omega,\mu))\mathcal{D}(\mu+\varsigma\phi(\omega,\mu))\\
&\qquad\leq\varsigma^{\alpha-1}\big(\chi(1-\varsigma)\mathfrak{S}^{*}(\mu)+\chi(\varsigma)\mathfrak{S}^{*}(\mu+\phi(\omega,\mu))\big)\mathcal{D}(\mu+\varsigma\phi(\omega,\mu)).
\end{split}\tag{28}$$

Adding (27) and (28) and integrating with respect to $\varsigma$ over $[0,1]$, we get:

$$\begin{split}
&\int_{0}^{1}\varsigma^{\alpha-1}\mathfrak{S}_{*}(\mu+(1-\varsigma)\phi(\omega,\mu))\mathcal{D}(\mu+(1-\varsigma)\phi(\omega,\mu))\,d\varsigma+\int_{0}^{1}\varsigma^{\alpha-1}\mathfrak{S}_{*}(\mu+\varsigma\phi(\omega,\mu))\mathcal{D}(\mu+\varsigma\phi(\omega,\mu))\,d\varsigma\\
&\leq\int_{0}^{1}\big[\varsigma^{\alpha-1}\mathfrak{S}_{*}(\mu)\{\chi(\varsigma)\mathcal{D}(\mu+(1-\varsigma)\phi(\omega,\mu))+\chi(1-\varsigma)\mathcal{D}(\mu+\varsigma\phi(\omega,\mu))\}\\
&\qquad+\varsigma^{\alpha-1}\mathfrak{S}_{*}(\mu+\phi(\omega,\mu))\{\chi(1-\varsigma)\mathcal{D}(\mu+(1-\varsigma)\phi(\omega,\mu))+\chi(\varsigma)\mathcal{D}(\mu+\varsigma\phi(\omega,\mu))\}\big]\,d\varsigma\\
&=\mathfrak{S}_{*}(\mu)\int_{0}^{1}\varsigma^{\alpha-1}[\chi(\varsigma)+\chi(1-\varsigma)]\mathcal{D}(\mu+(1-\varsigma)\phi(\omega,\mu))\,d\varsigma\\
&\quad+\mathfrak{S}_{*}(\mu+\phi(\omega,\mu))\int_{0}^{1}\varsigma^{\alpha-1}[\chi(\varsigma)+\chi(1-\varsigma)]\mathcal{D}(\mu+\varsigma\phi(\omega,\mu))\,d\varsigma,\\
&\int_{0}^{1}\varsigma^{\alpha-1}\mathfrak{S}^{*}(\mu+(1-\varsigma)\phi(\omega,\mu))\mathcal{D}(\mu+(1-\varsigma)\phi(\omega,\mu))\,d\varsigma+\int_{0}^{1}\varsigma^{\alpha-1}\mathfrak{S}^{*}(\mu+\varsigma\phi(\omega,\mu))\mathcal{D}(\mu+\varsigma\phi(\omega,\mu))\,d\varsigma\\
&\leq\mathfrak{S}^{*}(\mu)\int_{0}^{1}\varsigma^{\alpha-1}[\chi(\varsigma)+\chi(1-\varsigma)]\mathcal{D}(\mu+(1-\varsigma)\phi(\omega,\mu))\,d\varsigma\\
&\quad+\mathfrak{S}^{*}(\mu+\phi(\omega,\mu))\int_{0}^{1}\varsigma^{\alpha-1}[\chi(\varsigma)+\chi(1-\varsigma)]\mathcal{D}(\mu+\varsigma\phi(\omega,\mu))\,d\varsigma.
\end{split}\tag{29}$$

Considering the left-hand side of inequality (29) and using the symmetry of $\mathcal{D}$, we have:

$$\begin{split}
&\int_{0}^{1}\varsigma^{\alpha-1}\mathfrak{S}_{*}(\mu+(1-\varsigma)\phi(\omega,\mu))\mathcal{D}(\mu+\varsigma\phi(\omega,\mu))\,d\varsigma+\int_{0}^{1}\varsigma^{\alpha-1}\mathfrak{S}_{*}(\mu+\varsigma\phi(\omega,\mu))\mathcal{D}(\mu+\varsigma\phi(\omega,\mu))\,d\varsigma\\
&=\frac{1}{(\phi(\omega,\mu))^{\alpha}}\int_{\mu}^{\mu+\phi(\omega,\mu)}(z-\mu)^{\alpha-1}\mathfrak{S}_{*}(2\mu+\phi(\omega,\mu)-z)\mathcal{D}(z)\,dz+\frac{1}{(\phi(\omega,\mu))^{\alpha}}\int_{\mu}^{\mu+\phi(\omega,\mu)}(z-\mu)^{\alpha-1}\mathfrak{S}_{*}(z)\mathcal{D}(z)\,dz\\
&=\frac{1}{(\phi(\omega,\mu))^{\alpha}}\int_{\mu}^{\mu+\phi(\omega,\mu)}(\mu+\phi(\omega,\mu)-z)^{\alpha-1}\mathfrak{S}_{*}(z)\mathcal{D}(2\mu+\phi(\omega,\mu)-z)\,dz+\frac{1}{(\phi(\omega,\mu))^{\alpha}}\int_{\mu}^{\mu+\phi(\omega,\mu)}(z-\mu)^{\alpha-1}\mathfrak{S}_{*}(z)\mathcal{D}(z)\,dz\\
&=\frac{\Gamma(\alpha)}{(\phi(\omega,\mu))^{\alpha}}\Big[\mathcal{I}^{\alpha}_{\mu^{+}}\,\mathfrak{S}_{*}\mathcal{D}(\mu+\phi(\omega,\mu))+\mathcal{I}^{\alpha}_{\mu+\phi(\omega,\mu)^{-}}\,\mathfrak{S}_{*}\mathcal{D}(\mu)\Big],\\
&\int_{0}^{1}\varsigma^{\alpha-1}\mathfrak{S}^{*}(\mu+(1-\varsigma)\phi(\omega,\mu))\mathcal{D}(\mu+\varsigma\phi(\omega,\mu))\,d\varsigma+\int_{0}^{1}\varsigma^{\alpha-1}\mathfrak{S}^{*}(\mu+\varsigma\phi(\omega,\mu))\mathcal{D}(\mu+\varsigma\phi(\omega,\mu))\,d\varsigma\\
&=\frac{\Gamma(\alpha)}{(\phi(\omega,\mu))^{\alpha}}\Big[\mathcal{I}^{\alpha}_{\mu^{+}}\,\mathfrak{S}^{*}\mathcal{D}(\mu+\phi(\omega,\mu))+\mathcal{I}^{\alpha}_{\mu+\phi(\omega,\mu)^{-}}\,\mathfrak{S}^{*}\mathcal{D}(\mu)\Big].
\end{split}\tag{30}$$

From (30), we have:

$$\begin{split}
&\frac{\Gamma(\alpha)}{(\phi(\omega,\mu))^{\alpha}}\Big[\mathcal{I}^{\alpha}_{\mu^{+}}\,\mathfrak{S}_{*}\mathcal{D}(\mu+\phi(\omega,\mu))+\mathcal{I}^{\alpha}_{\mu+\phi(\omega,\mu)^{-}}\,\mathfrak{S}_{*}\mathcal{D}(\mu)\Big]\\
&\leq\big(\mathfrak{S}_{*}(\mu)+\mathfrak{S}_{*}(\mu+\phi(\omega,\mu))\big)\int_{0}^{1}\varsigma^{\alpha-1}[\chi(\varsigma)+\chi(1-\varsigma)]\mathcal{D}(\mu+\varsigma\phi(\omega,\mu))\,d\varsigma,\\
&\frac{\Gamma(\alpha)}{(\phi(\omega,\mu))^{\alpha}}\Big[\mathcal{I}^{\alpha}_{\mu^{+}}\,\mathfrak{S}^{*}\mathcal{D}(\mu+\phi(\omega,\mu))+\mathcal{I}^{\alpha}_{\mu+\phi(\omega,\mu)^{-}}\,\mathfrak{S}^{*}\mathcal{D}(\mu)\Big]\\
&\leq\big(\mathfrak{S}^{*}(\mu)+\mathfrak{S}^{*}(\mu+\phi(\omega,\mu))\big)\int_{0}^{1}\varsigma^{\alpha-1}[\chi(\varsigma)+\chi(1-\varsigma)]\mathcal{D}(\mu+\varsigma\phi(\omega,\mu))\,d\varsigma,
\end{split}$$

that is:

$$\begin{split}
&\frac{\Gamma(\alpha)}{(\phi(\omega,\mu))^{\alpha}}\Big[\mathcal{I}^{\alpha}_{\mu^{+}}\,\mathfrak{S}_{*}\mathcal{D}(\mu+\phi(\omega,\mu))+\mathcal{I}^{\alpha}_{\mu+\phi(\omega,\mu)^{-}}\,\mathfrak{S}_{*}\mathcal{D}(\mu),\ \mathcal{I}^{\alpha}_{\mu^{+}}\,\mathfrak{S}^{*}\mathcal{D}(\mu+\phi(\omega,\mu))+\mathcal{I}^{\alpha}_{\mu+\phi(\omega,\mu)^{-}}\,\mathfrak{S}^{*}\mathcal{D}(\mu)\Big]\\
&\leq_{p}\big[\mathfrak{S}_{*}(\mu)+\mathfrak{S}_{*}(\mu+\phi(\omega,\mu)),\ \mathfrak{S}^{*}(\mu)+\mathfrak{S}^{*}(\mu+\phi(\omega,\mu))\big]\int_{0}^{1}\varsigma^{\alpha-1}[\chi(\varsigma)+\chi(1-\varsigma)]\mathcal{D}(\mu+\varsigma\phi(\omega,\mu))\,d\varsigma,
\end{split}$$

Hence,

$$\begin{split}
&\frac{\Gamma(\alpha)}{(\phi(\omega,\mu))^{\alpha}}\Big[\mathcal{I}^{\alpha}_{\mu^{+}}\,\mathfrak{S}\mathcal{D}(\mu+\phi(\omega,\mu))+\mathcal{I}^{\alpha}_{\mu+\phi(\omega,\mu)^{-}}\,\mathfrak{S}\mathcal{D}(\mu)\Big]\\
&\leq_{p}\big(\mathfrak{S}(\mu)+\mathfrak{S}(\mu+\phi(\omega,\mu))\big)\int_{0}^{1}\varsigma^{\alpha-1}[\chi(\varsigma)+\chi(1-\varsigma)]\mathcal{D}(\mu+\varsigma\phi(\omega,\mu))\,d\varsigma.
\end{split}$$

This completes the proof.

We now propose the first *H*-*H* Fejér inequality for LR-$\chi$-pre-invex $I$-$V{\cdot}F$s using the interval Riemann–Liouville fractional integral. Then, we will verify the validity of Theorems 6 and 7 with a nontrivial Example 3.

**Theorem 7.** *(First fractional $H$-$H$ Fejér inequality) Let* $\mathfrak{S}:[\mu,\mu+\phi(\omega,\mu)]\to\mathcal{K}_{C}^{+}$ *be an LR-$\chi$-pre-invex $I$-$V{\cdot}F$ such that* $\mathfrak{S}(z)=[\mathfrak{S}_{*}(z),\mathfrak{S}^{*}(z)]$ *for all* $z\in[\mu,\mu+\phi(\omega,\mu)]$*. Let* $\mathfrak{S}\in L\big([\mu,\mu+\phi(\omega,\mu)],\mathcal{K}_{C}^{+}\big)$ *and let* $\mathcal{D}:[\mu,\mu+\phi(\omega,\mu)]\to\mathbb{R}$*,* $\mathcal{D}(z)\geq 0$*, be symmetric with respect to* $\frac{2\mu+\phi(\omega,\mu)}{2}$*. If* $\phi$ *satisfies condition C, then:*

$$\begin{split}
&\frac{1}{2\chi\left(\frac{1}{2}\right)}\,\mathfrak{S}\Big(\frac{2\mu+\phi(\omega,\mu)}{2}\Big)\Big[\mathcal{I}^{\alpha}_{\mu^{+}}\,\mathcal{D}(\mu+\phi(\omega,\mu))+\mathcal{I}^{\alpha}_{\mu+\phi(\omega,\mu)^{-}}\,\mathcal{D}(\mu)\Big]\\
&\leq_{p}\Big[\mathcal{I}^{\alpha}_{\mu^{+}}\,\mathfrak{S}\mathcal{D}(\mu+\phi(\omega,\mu))+\mathcal{I}^{\alpha}_{\mu+\phi(\omega,\mu)^{-}}\,\mathfrak{S}\mathcal{D}(\mu)\Big].
\end{split}\tag{31}$$

*If* $\mathfrak{S}$ *is a pre-incave $I$-$V{\cdot}F$, then inequality (31) is reversed.*

**Proof.** Since $\mathfrak{S}$ is an LR-$\chi$-pre-invex $I$-$V{\cdot}F$, we have:

$$\begin{split}
\mathfrak{S}_{*}\left(\frac{2\mu+\phi(\omega,\mu)}{2}\right)&\leq\chi\left(\frac{1}{2}\right)\big(\mathfrak{S}_{*}(\mu+(1-\varsigma)\phi(\omega,\mu))+\mathfrak{S}_{*}(\mu+\varsigma\phi(\omega,\mu))\big),\\
\mathfrak{S}^{*}\left(\frac{2\mu+\phi(\omega,\mu)}{2}\right)&\leq\chi\left(\frac{1}{2}\right)\big(\mathfrak{S}^{*}(\mu+(1-\varsigma)\phi(\omega,\mu))+\mathfrak{S}^{*}(\mu+\varsigma\phi(\omega,\mu))\big).
\end{split}\tag{32}$$

Since $\mathcal{D}(\mu+(1-\varsigma)\phi(\omega,\mu))=\mathcal{D}(\mu+\varsigma\phi(\omega,\mu))$, multiplying (32) by $\varsigma^{\alpha-1}\mathcal{D}(\mu+\varsigma\phi(\omega,\mu))$ and integrating with respect to $\varsigma$ over $[0,1]$, we obtain:

$$\begin{split}
&\mathfrak{S}_{*}\left(\frac{2\mu+\phi(\omega,\mu)}{2}\right)\int_{0}^{1}\varsigma^{\alpha-1}\mathcal{D}(\mu+\varsigma\phi(\omega,\mu))\,d\varsigma\\
&\qquad\leq\chi\left(\frac{1}{2}\right)\left(\int_{0}^{1}\varsigma^{\alpha-1}\mathfrak{S}_{*}(\mu+(1-\varsigma)\phi(\omega,\mu))\mathcal{D}(\mu+\varsigma\phi(\omega,\mu))\,d\varsigma+\int_{0}^{1}\varsigma^{\alpha-1}\mathfrak{S}_{*}(\mu+\varsigma\phi(\omega,\mu))\mathcal{D}(\mu+\varsigma\phi(\omega,\mu))\,d\varsigma\right),\\
&\mathfrak{S}^{*}\left(\frac{2\mu+\phi(\omega,\mu)}{2}\right)\int_{0}^{1}\varsigma^{\alpha-1}\mathcal{D}(\mu+\varsigma\phi(\omega,\mu))\,d\varsigma\\
&\qquad\leq\chi\left(\frac{1}{2}\right)\left(\int_{0}^{1}\varsigma^{\alpha-1}\mathfrak{S}^{*}(\mu+(1-\varsigma)\phi(\omega,\mu))\mathcal{D}(\mu+\varsigma\phi(\omega,\mu))\,d\varsigma+\int_{0}^{1}\varsigma^{\alpha-1}\mathfrak{S}^{*}(\mu+\varsigma\phi(\omega,\mu))\mathcal{D}(\mu+\varsigma\phi(\omega,\mu))\,d\varsigma\right).
\end{split}\tag{33}$$

Let $z=\mu+\varsigma\phi(\omega,\mu)$. Then, on the right-hand side of inequality (33), we have:

$$\begin{split}
&\int_{0}^{1}\varsigma^{\alpha-1}\mathfrak{S}_{*}(\mu+(1-\varsigma)\phi(\omega,\mu))\mathcal{D}(\mu+\varsigma\phi(\omega,\mu))\,d\varsigma+\int_{0}^{1}\varsigma^{\alpha-1}\mathfrak{S}_{*}(\mu+\varsigma\phi(\omega,\mu))\mathcal{D}(\mu+\varsigma\phi(\omega,\mu))\,d\varsigma\\
&=\frac{1}{(\phi(\omega,\mu))^{\alpha}}\int_{\mu}^{\mu+\phi(\omega,\mu)}(z-\mu)^{\alpha-1}\mathfrak{S}_{*}(2\mu+\phi(\omega,\mu)-z)\mathcal{D}(z)\,dz+\frac{1}{(\phi(\omega,\mu))^{\alpha}}\int_{\mu}^{\mu+\phi(\omega,\mu)}(z-\mu)^{\alpha-1}\mathfrak{S}_{*}(z)\mathcal{D}(z)\,dz\\
&=\frac{1}{(\phi(\omega,\mu))^{\alpha}}\int_{\mu}^{\mu+\phi(\omega,\mu)}(\mu+\phi(\omega,\mu)-z)^{\alpha-1}\mathfrak{S}_{*}(z)\mathcal{D}(2\mu+\phi(\omega,\mu)-z)\,dz+\frac{1}{(\phi(\omega,\mu))^{\alpha}}\int_{\mu}^{\mu+\phi(\omega,\mu)}(z-\mu)^{\alpha-1}\mathfrak{S}_{*}(z)\mathcal{D}(z)\,dz\\
&=\frac{\Gamma(\alpha)}{(\phi(\omega,\mu))^{\alpha}}\Big[\mathcal{I}^{\alpha}_{\mu^{+}}\,\mathfrak{S}_{*}\mathcal{D}(\mu+\phi(\omega,\mu))+\mathcal{I}^{\alpha}_{\mu+\phi(\omega,\mu)^{-}}\,\mathfrak{S}_{*}\mathcal{D}(\mu)\Big],\\
&\int_{0}^{1}\varsigma^{\alpha-1}\mathfrak{S}^{*}(\mu+(1-\varsigma)\phi(\omega,\mu))\mathcal{D}(\mu+\varsigma\phi(\omega,\mu))\,d\varsigma+\int_{0}^{1}\varsigma^{\alpha-1}\mathfrak{S}^{*}(\mu+\varsigma\phi(\omega,\mu))\mathcal{D}(\mu+\varsigma\phi(\omega,\mu))\,d\varsigma\\
&=\frac{\Gamma(\alpha)}{(\phi(\omega,\mu))^{\alpha}}\Big[\mathcal{I}^{\alpha}_{\mu^{+}}\,\mathfrak{S}^{*}\mathcal{D}(\mu+\phi(\omega,\mu))+\mathcal{I}^{\alpha}_{\mu+\phi(\omega,\mu)^{-}}\,\mathfrak{S}^{*}\mathcal{D}(\mu)\Big].
\end{split}\tag{34}$$

Then from (34), we have:

$$\begin{split}
&\frac{1}{2\chi\left(\frac{1}{2}\right)}\,\mathfrak{S}_{*}\Big(\frac{2\mu+\phi(\omega,\mu)}{2}\Big)\Big[\mathcal{I}^{\alpha}_{\mu^{+}}\,\mathcal{D}(\mu+\phi(\omega,\mu))+\mathcal{I}^{\alpha}_{\mu+\phi(\omega,\mu)^{-}}\,\mathcal{D}(\mu)\Big]\\
&\qquad\leq\Big[\mathcal{I}^{\alpha}_{\mu^{+}}\,\mathfrak{S}_{*}\mathcal{D}(\mu+\phi(\omega,\mu))+\mathcal{I}^{\alpha}_{\mu+\phi(\omega,\mu)^{-}}\,\mathfrak{S}_{*}\mathcal{D}(\mu)\Big],\\
&\frac{1}{2\chi\left(\frac{1}{2}\right)}\,\mathfrak{S}^{*}\Big(\frac{2\mu+\phi(\omega,\mu)}{2}\Big)\Big[\mathcal{I}^{\alpha}_{\mu^{+}}\,\mathcal{D}(\mu+\phi(\omega,\mu))+\mathcal{I}^{\alpha}_{\mu+\phi(\omega,\mu)^{-}}\,\mathcal{D}(\mu)\Big]\\
&\qquad\leq\Big[\mathcal{I}^{\alpha}_{\mu^{+}}\,\mathfrak{S}^{*}\mathcal{D}(\mu+\phi(\omega,\mu))+\mathcal{I}^{\alpha}_{\mu+\phi(\omega,\mu)^{-}}\,\mathfrak{S}^{*}\mathcal{D}(\mu)\Big],
\end{split}$$

from which, we have:

$$\begin{split}
&\frac{1}{2\chi\left(\frac{1}{2}\right)}\left[\mathfrak{S}_{*}\Big(\frac{2\mu+\phi(\omega,\mu)}{2}\Big),\ \mathfrak{S}^{*}\Big(\frac{2\mu+\phi(\omega,\mu)}{2}\Big)\right]\Big[\mathcal{I}^{\alpha}_{\mu^{+}}\,\mathcal{D}(\mu+\phi(\omega,\mu))+\mathcal{I}^{\alpha}_{\mu+\phi(\omega,\mu)^{-}}\,\mathcal{D}(\mu)\Big]\\
&\leq_{p}\Big[\mathcal{I}^{\alpha}_{\mu^{+}}\,\mathfrak{S}_{*}\mathcal{D}(\mu+\phi(\omega,\mu))+\mathcal{I}^{\alpha}_{\mu+\phi(\omega,\mu)^{-}}\,\mathfrak{S}_{*}\mathcal{D}(\mu),\ \mathcal{I}^{\alpha}_{\mu^{+}}\,\mathfrak{S}^{*}\mathcal{D}(\mu+\phi(\omega,\mu))+\mathcal{I}^{\alpha}_{\mu+\phi(\omega,\mu)^{-}}\,\mathfrak{S}^{*}\mathcal{D}(\mu)\Big],
\end{split}$$

and it follows that:

$$\begin{split}
&\frac{1}{2\chi\left(\frac{1}{2}\right)}\,\mathfrak{S}\Big(\frac{2\mu+\phi(\omega,\mu)}{2}\Big)\Big[\mathcal{I}^{\alpha}_{\mu^{+}}\,\mathcal{D}(\mu+\phi(\omega,\mu))+\mathcal{I}^{\alpha}_{\mu+\phi(\omega,\mu)^{-}}\,\mathcal{D}(\mu)\Big]\\
&\leq_{p}\Big[\mathcal{I}^{\alpha}_{\mu^{+}}\,\mathfrak{S}\mathcal{D}(\mu+\phi(\omega,\mu))+\mathcal{I}^{\alpha}_{\mu+\phi(\omega,\mu)^{-}}\,\mathfrak{S}\mathcal{D}(\mu)\Big].
\end{split}$$

This completes the proof.
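Theorems 6 and 7 can also be checked numerically in the degenerate scalar case $\chi(\varsigma)=\varsigma$, $\phi(\omega,\mu)=\omega-\mu$, $\alpha=1$. The sketch below uses the illustrative (assumed) choices $\mathfrak{S}(z)=z^2$ (convex) and $\mathcal{D}(z)=z(1-z)$, which is nonnegative and symmetric about $1/2$ on $[0,1]$:

```python
# Numerical sanity check of the Fejér inequalities (26) and (31) in the
# degenerate scalar case: chi(s) = s, phi(omega, mu) = omega - mu, alpha = 1,
# S(z) = z**2 (convex) and D(z) = z*(1 - z), symmetric about 1/2 on [0, 1].
def midpoint_rule(f, a, b, n=100_000):
    h = (b - a) / n
    return sum(f(a + (k + 0.5) * h) for k in range(n)) * h

mu, omega = 0.0, 1.0
S = lambda z: z * z
D = lambda z: z * (1.0 - z)

int_D = midpoint_rule(D, mu, omega)                       # = 1/6
int_SD = midpoint_rule(lambda z: S(z) * D(z), mu, omega)  # = 1/20

# (31) with alpha = 1 and chi(1/2) = 1/2 reduces to S(mid) * int D <= int S*D.
lhs_31 = S((mu + omega) / 2) * int_D
rhs_31 = int_SD

# (26) with alpha = 1 reduces to
# (2/(omega-mu)) * int S*D <= (S(mu) + S(omega)) * int_0^1 D(mu + s*(omega-mu)) ds.
lhs_26 = 2.0 / (omega - mu) * int_SD
rhs_26 = (S(mu) + S(omega)) * midpoint_rule(lambda s: D(mu + s * (omega - mu)), 0.0, 1.0)

assert lhs_31 <= rhs_31 and lhs_26 <= rhs_26
```

For these choices $1/24\leq 1/20$ and $1/10\leq 1/6$, so both inclusions hold strictly.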

**Remark 5.** *If* $\mathcal{D}(z)=1$*, then from (26) and (31) we recover Theorem 3. If* $\chi(\varsigma)=\varsigma$*, then from (26) and (31) we obtain the following inequality; see* [42]*:*

$$\begin{split} & \mathfrak{S} \left( \frac{2\mu + \varrho(\omega,\mu)}{2} \right) \left[ \mathcal{I}^{\alpha}_{\mu^{+}} \mathcal{D}(\mu + \varrho(\omega,\mu)) + \mathcal{I}^{\alpha}_{\mu + \varrho(\omega,\mu)^{-}} \mathcal{D}(\mu) \right] \\ & \leq_{p} \left[ \mathcal{I}^{\alpha}_{\mu^{+}} \mathfrak{S}\mathcal{D}(\mu + \varrho(\omega,\mu)) + \mathcal{I}^{\alpha}_{\mu + \varrho(\omega,\mu)^{-}} \mathfrak{S}\mathcal{D}(\mu) \right] \\ & \leq_{p} \frac{\mathfrak{S}(\mu) + \mathfrak{S}(\mu + \varrho(\omega,\mu))}{2} \left[ \mathcal{I}^{\alpha}_{\mu^{+}} \mathcal{D}(\mu + \varrho(\omega,\mu)) + \mathcal{I}^{\alpha}_{\mu + \varrho(\omega,\mu)^{-}} \mathcal{D}(\mu) \right] \\ & \leq_{p} \frac{\mathfrak{S}(\mu) + \mathfrak{S}(\omega)}{2} \left[ \mathcal{I}^{\alpha}_{\mu^{+}} \mathcal{D}(\mu + \varrho(\omega,\mu)) + \mathcal{I}^{\alpha}_{\mu + \varrho(\omega,\mu)^{-}} \mathcal{D}(\mu) \right] \end{split} \tag{35}$$

*Let χ*(*ς*) = *ς and α* = 1*. Then, from (26) and (31), we obtain the following inequality:*

$$\mathfrak{S}\left(\frac{2\mu+\varrho(\omega,\mu)}{2}\right) \leq_{p} \frac{1}{\int_{\mu}^{\mu+\varrho(\omega,\mu)} \mathcal{D}(z)\,dz} \text{ (FR)} \int_{\mu}^{\mu+\varrho(\omega,\mu)} \mathfrak{S}(z)\mathcal{D}(z)\,dz \leq_{p} \frac{\mathfrak{S}(\mu)+\mathfrak{S}(\omega)}{2} \tag{36}$$

*If* S∗(*z*) = S<sup>∗</sup>(*z*) *and χ*(*ς*) = *ς, then from (26) and (31), we obtain the following inequality:*

$$\begin{split} \mathfrak{S} \left( \frac{2\mu + \varrho(\omega, \mu)}{2} \right) \left[ \mathcal{I}^{\alpha}_{\mu^{+}} \mathcal{D}(\mu + \varrho(\omega,\mu)) + \mathcal{I}^{\alpha}_{\mu + \varrho(\omega,\mu)^{-}} \mathcal{D}(\mu) \right] &\leq \left[ \mathcal{I}^{\alpha}_{\mu^{+}} \mathfrak{S}\mathcal{D}(\mu + \varrho(\omega, \mu)) + \mathcal{I}^{\alpha}_{\mu + \varrho(\omega, \mu)^{-}} \mathfrak{S}\mathcal{D}(\mu) \right] \\ &\leq \frac{\mathfrak{S}(\mu) + \mathfrak{S}(\mu + \varrho(\omega, \mu))}{2} \left[ \mathcal{I}^{\alpha}_{\mu^{+}} \mathcal{D}(\mu + \varrho(\omega, \mu)) + \mathcal{I}^{\alpha}_{\mu + \varrho(\omega, \mu)^{-}} \mathcal{D}(\mu) \right] \\ &\leq \frac{\mathfrak{S}(\mu) + \mathfrak{S}(\omega)}{2} \left[ \mathcal{I}^{\alpha}_{\mu^{+}} \mathcal{D}(\mu + \varrho(\omega, \mu)) + \mathcal{I}^{\alpha}_{\mu + \varrho(\omega, \mu)^{-}} \mathcal{D}(\mu) \right] \end{split} \tag{37}$$

*If* S∗(*z*) = S<sup>∗</sup>(*z*)*, α* = 1*, and χ*(*ς*) = *ς, then from (26) and (31), we obtain the classical H-H Fejér inequality.*

*If* S∗(*z*) = S<sup>∗</sup>(*z*)*,* D(*z*) = *α* = 1*, and χ*(*ς*) = *ς, then from (26) and (31), we obtain the classical H-H inequality.*
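The classical H-H inequality recovered in the last remark can be checked numerically; the following Python sketch (our own illustration, with the convex function f(z) = z² on [0, 2] chosen as an example) verifies the midpoint-integral mean-endpoint chain:

```python
# Classical Hermite-Hadamard inequality:
# f((a + b)/2) <= (1/(b - a)) * integral of f over [a, b] <= (f(a) + f(b))/2,
# checked for the illustrative convex function f(z) = z^2 on [0, 2].
f = lambda z: z * z
a, b = 0.0, 2.0

# Midpoint-rule approximation of the integral mean of f over [a, b].
n = 10_000
mean = sum(f(a + (k + 0.5) * (b - a) / n) for k in range(n)) / n

assert f((a + b) / 2) <= mean <= (f(a) + f(b)) / 2
```

Here the integral mean is 4/3, which indeed lies between f(1) = 1 and (f(0) + f(2))/2 = 2.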

**Example 3.** *We consider the I-V*·*F* $\mathfrak{S} : [0, \varrho(2, 0)] \to \mathcal{K}_{C}^{+}$ *defined by* $\mathfrak{S}(z) = [1, 2]\left(2 - \sqrt{z}\right)$*. Since the end-point functions* $\mathfrak{S}_{*}(z)$ *and* $\mathfrak{S}^{*}(z)$ *are LR-χ-pre-invex functions,* $\mathfrak{S}(z)$ *is an LR-χ-pre-invex I-V*·*F.*

If:

$$\mathcal{D}(z) = \begin{cases} \sqrt{z}, & z \in [0,1], \\ \sqrt{2-z}, & z \in (1,2], \end{cases}$$

then $\mathcal{D}(2 - z) = \mathcal{D}(z) \ge 0$ for all $z \in [0, 2]$, and $\mathfrak{S}_{*}(z) = 2 - \sqrt{z}$, $\mathfrak{S}^{*}(z) = 2\left(2 - \sqrt{z}\right)$. If $\chi(\varsigma) = \varsigma$ and $\alpha = \frac{1}{2}$, then we compute the following:

$$\begin{split} \frac{\mathfrak{S}_{*}(\mu)+\mathfrak{S}_{*}(\mu+\varrho(\omega,\mu))}{2}\int_{0}^{1}\xi^{\alpha-1}[\chi(\xi)+\chi(1-\xi)]\mathcal{D}(\mu+\xi\varrho(\omega,\mu))\,d\xi &= \frac{\pi}{\sqrt{2}}\left(\frac{4-\sqrt{2}}{2}\right), \\ \frac{\mathfrak{S}^{*}(\mu)+\mathfrak{S}^{*}(\mu+\varrho(\omega,\mu))}{2}\int_{0}^{1}\xi^{\alpha-1}[\chi(\xi)+\chi(1-\xi)]\mathcal{D}(\mu+\xi\varrho(\omega,\mu))\,d\xi &= \frac{\pi}{\sqrt{2}}\left(4-\sqrt{2}\right), \end{split} \tag{38}$$

$$\begin{split} \frac{\Gamma(\alpha)}{(\varrho(\omega,\mu))^{\alpha}}\left[\mathcal{I}^{\alpha}_{\mu^{+}} \mathfrak{S}_{*}\mathcal{D}(\mu+\varrho(\omega,\mu)) + \mathcal{I}^{\alpha}_{\mu+\varrho(\omega,\mu)^{-}} \mathfrak{S}_{*}\mathcal{D}(\mu)\right] &= \frac{1}{\sqrt{\pi}}\left(2\pi+\frac{4-8\sqrt{2}}{3}\right), \\ \frac{\Gamma(\alpha)}{(\varrho(\omega,\mu))^{\alpha}}\left[\mathcal{I}^{\alpha}_{\mu^{+}} \mathfrak{S}^{*}\mathcal{D}(\mu+\varrho(\omega,\mu)) + \mathcal{I}^{\alpha}_{\mu+\varrho(\omega,\mu)^{-}} \mathfrak{S}^{*}\mathcal{D}(\mu)\right] &= \frac{2}{\sqrt{\pi}}\left(2\pi+\frac{4-8\sqrt{2}}{3}\right). \end{split} \tag{39}$$

From (38) and (39), we have:

$$\frac{1}{\sqrt{\pi}}\left[\left(2\pi+\frac{4-8\sqrt{2}}{3}\right),\ 2\left(2\pi+\frac{4-8\sqrt{2}}{3}\right)\right] \leq_{p} \frac{\pi}{\sqrt{2}}\left[\frac{4-\sqrt{2}}{2},\ 4-\sqrt{2}\right].$$

Hence, Theorem 6 is verified. For Theorem 7, we have:

$$\begin{split} \mathcal{I}^{\alpha}_{\mu^{+}} \mathfrak{S}_{*}\mathcal{D}(\mu+\varrho(\omega,\mu)) + \mathcal{I}^{\alpha}_{\mu+\varrho(\omega,\mu)^{-}} \mathfrak{S}_{*}\mathcal{D}(\mu) &= \frac{1}{\sqrt{\pi}}\int_{0}^{2}(2-z)^{-\frac{1}{2}}\mathcal{D}(z)\left(2-\sqrt{z}\right)dz + \frac{1}{\sqrt{\pi}}\int_{0}^{2} z^{-\frac{1}{2}}\mathcal{D}(z)\left(2-\sqrt{z}\right)dz \\ &= \frac{1}{\sqrt{\pi}}\left(\pi+\frac{8-8\sqrt{2}}{3}\right) + \frac{1}{\sqrt{\pi}}\left(\pi-\frac{4}{3}\right) = \frac{1}{\sqrt{\pi}}\left(2\pi+\frac{4-8\sqrt{2}}{3}\right), \\ \mathcal{I}^{\alpha}_{\mu^{+}} \mathfrak{S}^{*}\mathcal{D}(\mu+\varrho(\omega,\mu)) + \mathcal{I}^{\alpha}_{\mu+\varrho(\omega,\mu)^{-}} \mathfrak{S}^{*}\mathcal{D}(\mu) &= \frac{2}{\sqrt{\pi}}\int_{0}^{2}(2-z)^{-\frac{1}{2}}\mathcal{D}(z)\left(2-\sqrt{z}\right)dz + \frac{2}{\sqrt{\pi}}\int_{0}^{2} z^{-\frac{1}{2}}\mathcal{D}(z)\left(2-\sqrt{z}\right)dz \\ &= \frac{2}{\sqrt{\pi}}\left(2\pi+\frac{4-8\sqrt{2}}{3}\right). \end{split} \tag{40}$$

$$\begin{split} \frac{1}{2\chi\left(\frac{1}{2}\right)} \, \mathfrak{S}_{*}\left(\frac{2\mu+\varrho(\omega,\mu)}{2}\right)\left[\mathcal{I}^{\alpha}_{\mu^{+}} \mathcal{D}(\mu+\varrho(\omega,\mu)) + \mathcal{I}^{\alpha}_{\mu+\varrho(\omega,\mu)^{-}} \mathcal{D}(\mu)\right] &= \sqrt{\pi}, \\ \frac{1}{2\chi\left(\frac{1}{2}\right)} \, \mathfrak{S}^{*}\left(\frac{2\mu+\varrho(\omega,\mu)}{2}\right)\left[\mathcal{I}^{\alpha}_{\mu^{+}} \mathcal{D}(\mu+\varrho(\omega,\mu)) + \mathcal{I}^{\alpha}_{\mu+\varrho(\omega,\mu)^{-}} \mathcal{D}(\mu)\right] &= 2\sqrt{\pi}. \end{split} \tag{41}$$

#### **4. Conclusions**

We have proposed the class of LR-*χ*-pre-invexity for *I*-*V*·*Fs*. By using this class, we have presented several interval *H*-*H* inequalities and interval *H*-*H* Fejér inequalities via interval Riemann–Liouville fractional integral operators. Useful examples that illustrate the applicability of the theory developed in this study are also presented. In the future, we intend to study generalized LR-*χ*-pre-invex *I*-*V*·*Fs*. We hope that this concept will prove useful to other researchers working in different fields of science.

**Author Contributions:** Conceptualization, M.B.K.; methodology, M.B.K.; validation, S.T., M.S.S. and H.G.Z.; formal analysis, K.N.; investigation, M.S.S.; resources, S.T.; data curation, H.G.Z.; writing original draft preparation, M.B.K., K.N. and H.G.Z.; writing—review and editing, M.B.K. and S.T.; visualization, H.G.Z.; supervision, M.B.K. and M.S.S.; project administration, M.B.K.; funding acquisition, K.N., M.S.S. and H.G.Z. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research received no external funding.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Acknowledgments:** The authors would like to thank the Rector, COMSATS University Islamabad, Islamabad, Pakistan, for providing excellent research facilities. This work was funded by Taif University Researchers Supporting Project number (TURSP-2020/345), Taif University, Taif, Saudi Arabia. Moreover, this research has also received funding support from the National Science, Research and Innovation Fund (NSRF), Thailand.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


## *Article* **Optimality Conditions and Duality for a Class of Generalized Convex Interval-Valued Optimization Problems**

**Yating Guo <sup>1</sup>, Guoju Ye <sup>1</sup>, Wei Liu <sup>1,</sup>**<sup>∗</sup>**, Dafang Zhao <sup>2</sup> and Savin Treanţă <sup>3</sup>**


**Abstract:** This paper is devoted to deriving optimality conditions and duality theorems for interval-valued optimization problems based on the gH-symmetrically derivative. Further, the concepts of symmetric pseudo-convexity and symmetric quasi-convexity for interval-valued functions are proposed to extend the above optimality conditions. Examples are also presented to illustrate the corresponding results.

**Keywords:** gH-symmetrically derivative; optimality conditions; Wolfe duality; symmetric pseudo-convexity; symmetric quasi-convexity

#### **1. Introduction**

Due to the complexity of the environment and the inherent ambiguity of human cognition, the data in real-world optimization problems are usually uncertain. Moreover, we cannot ignore the fact that small uncertainties in the data may render the usual optimal solutions completely meaningless from a practical viewpoint. Therefore, much attention has been paid to uncertain optimization problems; see [1–4].

There are various approaches to tackling optimization problems with uncertainty, such as stochastic processes [5], fuzzy set theory [6] and interval analysis [7]. Among them, the method of interval analysis expresses an uncertain variable as a real interval or an interval-valued function (IVF), and it has been applied to many fields, such as models involving inexact linear programming problems [8], data envelopment analysis [9], optimal control [10], goal programming [11], minimax regret solutions [12] and multiperiod portfolio selection problems [13]. Up to now, many works have addressed interval-valued optimization problems (IVOPs) (see [14,15]).

In classical optimization theory, the derivative is the most frequently used tool. It plays an important role in the study of optimality conditions and duality theorems for constrained optimization problems. To date, various notions of the derivative of an IVF have been proposed; see [16–23]. One famous concept is the H-derivative defined in [16]. However, the H-derivative is restrictive. In 2009, Stefanini and Bede presented the gH-derivative [23] to overcome the disadvantages of the H-derivative. Furthermore, in [24], Guo et al. proposed the gH-symmetrically derivative, which is more general than the gH-derivative. Researchers of optimization problems have largely used these derivatives of IVFs. For instance, Wu [25] considered the Karush–Kuhn–Tucker (KKT) conditions for nonlinear IVOPs using the H-derivative. In [26,27], Wolfe-type dual problems of IVOPs were investigated. Later, more general KKT optimality conditions were proposed by Chalco-Cano et al. [28,29] based on the gH-derivative. In addition, Jayswal et al. [30] extended optimality conditions and duality theorems for IVOPs with generalized convexity, and Antczak et al. [31] studied optimality conditions and duality results for nonsmooth vector optimization problems with multiple IVFs [32].

**Citation:** Guo, Y.; Ye, G.; Liu, W.; Zhao, D.; Treanţă, S. Optimality Conditions and Duality for a Class of Generalized Convex Interval-Valued Optimization Problems. *Mathematics* **2021**, *9*, 2979. https://doi.org/10.3390/math9222979

Academic Editor: Armin Fügenschuh

Received: 25 October 2021 Accepted: 20 November 2021 Published: 22 November 2021


**Copyright:** © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

In 2019, Ghosh [33] extended the KKT conditions for constrained IVOPs. In addition, Van [34] investigated duality results for interval-valued pseudoconvex optimization problems with equilibrium constraints.

Since the optimality conditions and duality of IVOPs have been extensively studied by many researchers in recent years, in this paper we continue to develop these results on optimality conditions and Wolfe duality for IVOPs on the basis of the gH-symmetrically derivative. In addition, we introduce more appropriate concepts of symmetric pseudo-convexity and symmetric quasi-convexity to weaken the convexity hypothesis.

The remainder of the paper is organized as follows: In Section 2, we give preliminaries and recall some main concepts. In Section 3, we propose the directional gH-symmetrically derivative and more appropriate concepts of generalized convexity. Section 4 establishes the necessary optimality conditions and Wolfe duality theorems. In Section 5, we apply the generalized convexities to extend the results of Section 4. Our results are properly wider than those in [28–30].

#### **2. Preliminaries**

**Theorem 1** ([35])**.** *Suppose that f* : *M* → R *is symmetrically differentiable on M and N is an open convex subset of M. Then f is convex on N if and only if*

$$f(t) - f(t^*) \ge f^s(t^*)^T (t - t^*), \text{ for all } t, t^* \in N.\tag{1}$$

**Theorem 2** ([36])**.** *Let A be an m* × *n real matrix and let c* ∈ R*<sup>n</sup> be a column vector. Then the implication*

$$At \le 0 \Rightarrow c^T t \le 0 \tag{2}$$

*holds for all t* <sup>∈</sup> <sup>R</sup>*<sup>n</sup> if and only if*

$$\exists u \ge 0 \; ; \; u^T A = c^T , \tag{3}$$

*where u* <sup>∈</sup> <sup>R</sup>*m.*

Let I be the set of all bounded and closed intervals in R, i.e.,

$$\mathbb{I} = \{a = [\underline{a}, \overline{a}] \, | \, \underline{a}, \overline{a} \in \mathbb{R} \text{ and } \underline{a} \le \overline{a}\}.$$

$$\text{For } a = [\underline{a}, \overline{a}], b = [\underline{b}, \overline{b}], c = [\underline{c}, \overline{c}] \in \mathbb{I} \text{ and } k \in \mathbb{R} \text{, we have}$$

$$a + b = [\underline{a}, \overline{a}] + [\underline{b}, \overline{b}] = [\underline{a} + \underline{b}, \overline{a} + \overline{b}].$$

$$k \cdot a = k \cdot [\underline{a}, \overline{a}] = \left\{ \begin{array}{ll} [k\underline{a}, k\overline{a}], & \text{if } k > 0; \\ [k\overline{a}, k\underline{a}], & \text{if } k \le 0. \end{array} \right.$$

In [23], Stefanini and Bede presented the gH-difference:

$$a \ominus\_{\mathcal{S}} b = c \Leftrightarrow \left\{ \begin{array}{c} a = b + c; \\ \text{or } b = a + (-1)c. \end{array} \right.$$

In addition, this difference between two intervals always exists, i.e.,

$$a \ominus\_{\mathcal{S}} b = \left[ \min \{ \underline{a} - \underline{b}, \overline{a} - \overline{b} \}, \; \max \{ \underline{a} - \underline{b}, \overline{a} - \overline{b} \} \right].$$

Furthermore, the partial order relation "*LU*" on I is determined as follows:

$$[\underline{a}, \overline{a}] \preceq_{LU} [\underline{b}, \overline{b}] \Leftrightarrow \underline{a} \le \underline{b} \text{ and } \overline{a} \le \overline{b},$$

$$[\underline{a}, \overline{a}] \prec\_{LU} [\underline{b}, \overline{b}] \Leftrightarrow [\underline{a}, \overline{a}] \preceq\_{LU} [\underline{b}, \overline{b}] \text{ and } [\underline{a}, \overline{a}] \ne [\underline{b}, \overline{b}].$$


*a* and *b* are said to be comparable if and only if $a \preceq_{LU} b$ or $a \succeq_{LU} b$.
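The interval arithmetic and order relations above can be sketched in Python as follows (an illustrative implementation of ours; the class name `Interval` and its method names are our own choices):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    lo: float  # lower endpoint
    hi: float  # upper endpoint

    def __add__(self, other):
        # [a, A] + [b, B] = [a + b, A + B]
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def scale(self, k):
        # k . [a, A] = [k a, k A] if k > 0, and [k A, k a] if k <= 0
        return Interval(k * self.lo, k * self.hi) if k > 0 else Interval(k * self.hi, k * self.lo)

    def gh_sub(self, other):
        # gH-difference: always exists, [min{a - b, A - B}, max{a - b, A - B}]
        d1, d2 = self.lo - other.lo, self.hi - other.hi
        return Interval(min(d1, d2), max(d1, d2))

    def le_LU(self, other):
        # [a, A] is LU-below [b, B] iff a <= b and A <= B
        return self.lo <= other.lo and self.hi <= other.hi

a, b = Interval(1, 3), Interval(0, 2)
assert a + b == Interval(1, 5)
assert a.scale(-2) == Interval(-6, -2)
assert a.gh_sub(b) == Interval(1, 1)   # indeed a = b + [1, 1]
assert b.le_LU(a)
```

The `frozen` dataclass makes intervals immutable and hashable, which matches their role as values rather than mutable state.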

Let R*<sup>n</sup>* be the n-dimensional Euclidean space, and let *T* ⊂ R*<sup>n</sup>* be an open set. We call the function *F* : *T* → I an IVF, i.e., *F*(*t*) is a closed interval in R for every *t* ∈ *T*. The IVF *F* can also be denoted as *F* = [*F*, *F*], where *F* and *F* are real functions and *F* ≤ *F* on *T*. Moreover, *F*, *F* are called the endpoint functions of *F*.

**Definition 1** ([24])**.** *Let F* : *T* → I*. Then F is said to be gH-symmetrically differentiable at t*<sup>0</sup> <sup>∈</sup> *T if there exists F<sup>s</sup>* (*t* 0 ) ∈ I *such that:*

$$\lim\_{||h||\to 0} \frac{F(t^0 + h) \ominus\_{\mathcal{S}} F(t^0 - h)}{||h||} = F^{\mathcal{S}}(t^0). \tag{4}$$

**Definition 2** ([24])**.** *Let F* : *T* → I *and* $t^0 \in T$*. If the IVF* $\varphi(t_i) = F(t^0_1, \dots, t^0_{i-1}, t_i, t^0_{i+1}, \dots, t^0_n)$ *is gH-symmetrically differentiable at* $t^0_i$*, then we say that F has the ith partial gH-symmetrically derivative* $(\frac{\partial^s F}{\partial t_i})_g(t^0)$ *at* $t^0$ *and*

$$(\frac{\partial^s F}{\partial t\_i})\_{\mathcal{S}}(t^0) = \varphi^s(t\_i^0).$$

**Definition 3** ([24])**.** *Let F* : *T* → I *be an IVF, and ∂ s ti F stands for the partial gH-symmetrically derivative with respect to the ith variable t<sup>i</sup> . If ∂ s ti F*(*t* 0 ) *(i* = 1, . . . , *n) exist on some neighborhoods of t* 0 *and are continuous at t* 0 *, then F is said to be gH-symmetrically differentiable at t* <sup>0</sup> <sup>∈</sup> *T. Moreover, we denote by*

$$\nabla^s F(t^0) = \left(\partial^s\_{t\_1} F(t^0), \dots, \partial^s\_{t\_n} F(t^0)\right)$$

*the symmetric gradient of F at t*<sup>0</sup> *.*

**Theorem 3** ([24])**.** *Let the IVF F* : *T* → I *be continuous in* (*t* <sup>0</sup> <sup>−</sup> *<sup>δ</sup>*, *<sup>t</sup>* <sup>0</sup> + *δ*) *for some δ* > 0*. Then F is gH-symmetrically differentiable at t* <sup>0</sup> <sup>∈</sup> *<sup>T</sup> if and only if <sup>F</sup> and <sup>F</sup> are symmetrically differentiable at t*<sup>0</sup> *.*

**Definition 4** ([28])**.** *Let F* = [*F*, *F*] *be an IVF defined on T. We say that F is LU-convex at t*∗ *if*

$$F(\theta t^\* + (1 - \theta)t) \preceq\_{LU} \theta F(t^\*) + (1 - \theta)F(t)$$

*for every θ* ∈ [0, 1] *and t* ∈ *T.*

Now, we introduce the following IVOP :

$$\begin{aligned} \min \quad & F(t) \\ \text{subject to} \quad & g\_i(t) \le 0, \quad i = 1, \dots, m \end{aligned} \tag{5}$$

where *F* : *M* → I, *g<sup>i</sup>* : *<sup>M</sup>* <sup>→</sup> <sup>R</sup> (*<sup>i</sup>* <sup>=</sup> 1, . . . , *<sup>m</sup>*), and *<sup>M</sup>* <sup>⊂</sup> <sup>R</sup>*<sup>n</sup>* is an open and convex set. Let

$$\mathcal{X} = \{ t \in \mathbb{R}^n : t \in M \text{ and } g_i(t) \le 0, \ i = 1, \dots, m \}$$

be the collection of feasible points of Problem (5), and the set of objective values of primal Problem (5) is indicated by:

$$O\_P(F, \mathcal{X}) = \{F(t) : t \in \mathcal{X}\}.\tag{6}$$

Moreover, we review the definition of non-dominated solution to the Problem (5):

**Definition 5** ([27])**.** *Let t* ∗ *be a feasible solution of Problem* (5)*, i.e., t* <sup>∗</sup> ∈ X *. Then t* ∗ *is said to be a non-dominated solution of Problem* (5) *if there exists no t* ∈ X \ {*t* <sup>∗</sup>} *such that: F*(*t*) ≺*LU F*(*t* ∗ )*.*

The KKT sufficient optimality conditions of Problem (5) have been obtained in [24]:

**Theorem 4** ([24], Sufficient optimality condition)**.** *Assume that F* : *M* → I *is LU-convex and gH-symmetrically differentiable at t* ∗ *, and g<sup>i</sup>* : *M* → R (*i* = 1, . . . , *m*) *are convex and symmetrically differentiable at t* ∗ *. If there exist (Lagrange) multipliers* 0 ≤ *µ<sup>i</sup>* ∈ R, *i* = 1, . . . , *m*, *such that*

$$\begin{aligned} \nabla^s \underline{\mathbf{F}}(t^\*) + \nabla^s \overline{\mathbf{F}}(t^\*) + \sum\_{i=1}^m \mu\_i \nabla^s g\_i(t^\*) &= \mathbf{0};\\ \sum\_{i=1}^m \mu\_i g\_i(t^\*) &= \mathbf{0}, \text{where } \boldsymbol{\mu} = (\mu\_1, \dots, \mu\_m)^T. \end{aligned} \tag{7}$$

*then t*∗ *is a non-dominated solution to Problem (5).*

**Example 1.** *Consider the IVOP as below:*

$$\begin{array}{ll}\min & F(t) \\ \text{subject to} & g\_1(t) \le 0, \\ & g\_2(t) \le 0, \end{array} \tag{8}$$

*where*

$$F(t) = \begin{cases} [4t^2 + 2t - 3, 3t^2 + 3t], & \text{if } t \in (-1, 0);\\ [3t - 3, 3t], & \text{if } t \in [0, 1). \end{cases}$$

*and*

$$g_1(t) = -t, \quad g_2(t) = t - 1.$$

*By simple calculation, F is LU-convex and gH-symmetrically differentiable at t* = 0 *and*

$$\nabla^s F(0) = \left[\frac{5}{2}, 3\right], \quad g_1^s(0) = -1, \quad g_2^s(0) = 1.$$

*The condition* (7) *in Theorem 4 is satisfied at t* = 0 *when µ*<sub>1</sub> = 11/2 *and µ*<sub>2</sub> = 0*.*

*On the other hand, it can be easily verified that t* = 0 *is a non-dominated solution of Problem (8). Hence, Theorem 4 is verified.*

*Note that F is not gH-differentiable at t* = 0*; hence, the sufficient conditions in [24] are properly wider than those in [28].*
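Example 1 can also be checked numerically. The sketch below (our own Python illustration) approximates the symmetric gradients of the endpoint functions at t = 0 by central differences, verifies condition (7), and scans the feasible set for dominating points:

```python
# Endpoint functions of F from Example 1.
def F_lo(t):
    return 4 * t**2 + 2 * t - 3 if t < 0 else 3 * t - 3

def F_hi(t):
    return 3 * t**2 + 3 * t if t < 0 else 3 * t

# Symmetric (central) difference quotients at t = 0 approach [5/2, 3],
# even though F is not gH-differentiable there.
h = 1e-6
d_lo = (F_lo(h) - F_lo(-h)) / (2 * h)
d_hi = (F_hi(h) - F_hi(-h)) / (2 * h)
assert abs(d_lo - 5 / 2) < 1e-4 and abs(d_hi - 3) < 1e-4

# KKT condition (7) with mu1 = 11/2 and mu2 = 0.
mu1, mu2 = 11 / 2, 0.0
assert abs(d_lo + d_hi + mu1 * (-1) + mu2 * 1) < 1e-3

# t = 0 is non-dominated: no feasible t in [0, 1) satisfies F(t) strictly LU-below F(0).
def lt_LU(a, b):
    return a[0] <= b[0] and a[1] <= b[1] and a != b

F0 = (F_lo(0), F_hi(0))  # = (-3, 0)
assert not any(lt_LU((F_lo(t), F_hi(t)), F0) for t in (k / 1000 for k in range(1000)))
```

The grid scan is only a sanity check on a sample of feasible points, not a proof; the proof is the analytic argument in the example.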

#### **3. Generalized Convexity of gH-Symmetrically Differentiable IVFs**

The LU-convexity assumption in [28] may be restrictive. For example, the IVF

$$F(t) = \begin{cases} [t, 2t], & \text{if } t \ge 0; \\ [2t, t], & \text{if } t < 0, \end{cases}$$

is not LU-convex at *t* = 0. Inspired by this, we introduce the directional gH-symmetrically derivative and the concepts of generalized convexity for IVFs which will be used in Section 4.

**Definition 6.** *Let <sup>F</sup>* : *<sup>T</sup>* <sup>→</sup> <sup>I</sup> *be an IVF and <sup>h</sup>* <sup>∈</sup> <sup>R</sup>*<sup>n</sup> . Then F is called directional gH-symmetrically differentiable at t*<sup>0</sup> *in the direction h if DsF*(*t* 0 : *h*) ∈ I *exists and*

$$D^s F(t^0 : h) = \lim_{\alpha \to 0^{+}} \frac{F(t^0 + \alpha h) \ominus_{g} F(t^0 - \alpha h)}{2\alpha}.\tag{9}$$

*If t* = (*t*<sub>1</sub>, . . . , *t<sub>n</sub>*)*<sup>T</sup> and e<sup>i</sup>* = (0, . . . , 1, . . . , 0) *is the vector with* 1 *in the ith position, then D<sup>s</sup>F*(*t* : *e<sup>i</sup>*) *is the partial gH-symmetrically derivative of F with respect to t<sup>i</sup> at t.*

**Theorem 5.** *If <sup>F</sup>* : *<sup>T</sup>* <sup>→</sup> <sup>I</sup> *is gH-symmetrically differentiable at <sup>t</sup>* <sup>∈</sup> *<sup>T</sup> and <sup>h</sup>* <sup>∈</sup> <sup>R</sup>*<sup>n</sup> , then the directional gH-symmetrically derivative exists and*

$$D^s F(t:h) = F^s(t)^T h.$$

**Proof.** Since, by hypothesis, *F* is gH-symmetrically differentiable at *t*, then there exists *F s* (*t*) ∈ I such that:

$$\lim\_{\alpha h \to 0} \frac{F(t + \alpha h) \ominus\_{\mathcal{S}} F(t - \alpha h)}{2\alpha h} = F^{\mathcal{S}}(t).$$

Then, we have:

$$\lim\_{\alpha \to 0} D\left(\frac{F(t + \alpha h) \ominus\_{\mathcal{S}} F(t - \alpha h)}{2\alpha}, F^s(t)h\right) = 0.$$

i.e.,

$$D^s F(t:h) = F^s(t)h.$$

Thus, we complete the proof.

**Definition 7.** *The IVF F* : *T* → I *is called symmetric pseudo-convex (SP-convex) at t* <sup>0</sup> <sup>∈</sup> *T, if <sup>F</sup> is gH-symmetrically differentiable at t*<sup>0</sup> *and*

$$F^s(t^0)(t - t^0) \succeq\_{LU} 0 \text{ implies } F(t) \succeq\_{LU} F(t^0).$$

*for all t* ∈ *T.*

*F* is said to be symmetric pseudo-concave (SP-concave) at *t* 0 if −*F* is SP-convex at *t* 0 .

**Definition 8.** *The IVF F* : *T* → I *is called symmetric quasi-convex (SQ-convex) at t* <sup>0</sup> <sup>∈</sup> *T, if <sup>F</sup> is gH-symmetrically differentiable at t*<sup>0</sup> *and*

$$F(t) \preceq_{LU} F(t^0) \text{ implies } F^s(t^0)(t - t^0) \preceq_{LU} 0,$$

*for all t* ∈ *T.*

*F* is said to be symmetric quasi-concave (SQ-concave) at *t* 0 if −*F* is SQ-convex at *t* 0 .

**Remark 1.** *When F* = *F, i.e., F degenerates to a real function, the concepts of SQ-convexity and SP-convexity will degenerate to s-quasiconvexity and s-pseudoconvexity in [35].*

#### **4. KKT Necessary Conditions**

The necessary optimality conditions are an important part of optimization theory, because they can be used to exclude feasible solutions that are not optimal, i.e., they narrow the search to candidate solutions of the problem. From this point of view, using the gH-symmetrically derivative, we establish a KKT necessary optimality condition which is more general than those in [28,29].

In order to obtain the necessary condition for Problem (5), we shall use Slater's constraint qualification [37]:

$$\exists t^0 \in \mathcal{X} \text{ such that } \mathcal{g}\_i(t^0) < 0, \ i = 1, \ldots, m. \tag{10}$$

**Theorem 6** (Necessary optimality condition)**.** *Assume that F* : *M* → I *is LU-convex and gH-symmetrically differentiable, g<sup>i</sup>* : *M* → R*(i* = 1, . . . , *m) are symmetrically differentiable and convex on M. Suppose H* = {*i* : *gi*(*t* ∗ ) = 0}*. If t* ∗ *is a non-dominated solution to Problem (5) and the following conditions are satisfied:*

*(A*1*) For every i* ∈ *H and for all y* ∈ R*<sup>n</sup>, there exists a positive real number ξ<sup>i</sup> such that, whenever* 0 < *ξ* < *ξ<sup>i</sup> and* ∇*<sup>s</sup>g<sup>i</sup>*(*t*<sup>∗</sup>)*<sup>T</sup>y* < 0*, we have:*

$$\nabla^s g\_i(t^\* + \xi y)^T y < 0;$$

*(A*2*) The set* X *satisfies Slater's constraint qualification. For i* ∈ *H and for all h* ∈ R*<sup>n</sup>*, $D^{+}\underline{F}(t^{*}:h) \ge 0$ *implies that* $D^{s}\underline{F}(t^{*}:h) \ge 0$*, or* $D^{+}\overline{F}(t^{*}:h) \ge 0$ *implies that* $D^{s}\overline{F}(t^{*}:h) \ge 0$*;*

*where* $D^{+}\underline{F}$ *and* $D^{-}\underline{F}$ *(*$D^{+}\overline{F}$ *and* $D^{-}\overline{F}$*) are the right-sided and left-sided directional derivatives of* $\underline{F}$ *(*$\overline{F}$*). Then, there exists u*<sup>∗</sup> ∈ R*<sup>m</sup>*<sub>+</sub> *such that condition* (7) *in Theorem 4 holds.*

**Proof.** Suppose the above conditions are satisfied. Assume there exists *<sup>w</sup>* <sup>∈</sup> <sup>R</sup>*<sup>n</sup>* such that:

$$\begin{aligned} &w^T \nabla^s g\_i(t^\*) \le 0, \\ &\text{and} \quad w^T \nabla^s \underline{F}(t^\*) < 0, \quad w^T \nabla^s \overline{F}(t^\*) < 0, \quad (\forall i \in H). \end{aligned} \tag{11}$$

Since X satisfies the Slater's constraint qualification, by Equation (10), there exists *t* <sup>0</sup> ∈ X such that *<sup>g</sup>i*(*<sup>t</sup>* 0 ) < 0 (*i* = 1, . . . , *m*). Then we have:

$$
g_i(t^0) - g_i(t^*) < 0, \quad (\forall i \in H),
$$

Combining Theorem 1 and the convexity of *g<sup>i</sup>* , we have

$$\nabla^s g_i(t^*)(t^0 - t^*) < 0, \quad (\forall i \in H).$$

Combining this with inequality (11), we get

$$\nabla^s g\_i(t^\*) [w + \rho(t^0 - t^\*)] < 0, \quad (\forall i \in H)$$

for all *ρ* > 0. By hypothesis in (*A*1), there exists *ξ<sup>i</sup>* > 0 such that

$$g_i(t^* + \xi[w + \rho(t^0 - t^*)]) < 0, \quad (\forall i \in H)$$

for 0 < *ξ* < *ξ<sup>i</sup>* . Therefore, we have: *t* ∗ + *ξ*[*w* + *ρ*(*t* <sup>0</sup> <sup>−</sup> *<sup>t</sup>* ∗ )] ∈ X .

Since *t* ∗ is a non-dominated solution to Problem (5), there exists no feasible solution *t* such that: *F*(*t*) ≺ *F*(*t* ∗ ), i.e.,

$$\begin{aligned} \underline{F}(t^\* + \mathfrak{f}[w + \rho(t^0 - t^\*)]) &\geq \underline{F}(t^\*). \\ \text{or} \quad \overline{F}(t^\* + \mathfrak{f}[w + \rho(t^0 - t^\*)]) &\geq \overline{F}(t^\*). \end{aligned}$$

By hypothesis (A2), we have

$$\begin{aligned} \left[w + \rho(t^0 - t^\*)\right] \nabla^s \underline{F}(t^\*) &\geq 0, \\ \text{or} \quad \left[w + \rho(t^0 - t^\*)\right] \nabla^s \overline{F}(t^\*) &\geq 0, \end{aligned}$$

for all *ρ* > 0. When *ρ* → 0 <sup>+</sup>, we obtain

$$w^T \nabla^s \underline{\mathbf{F}}(t^\*) \ge 0,\text{ or } w^T \nabla^s \overline{\mathbf{F}}(t^\*) \ge 0,\tag{12}$$

which contradicts inequality (11).

Thus, inequality (11) has no solution. By Theorem 2, there exists 0 ≤ *µ* ∗ *<sup>i</sup>* ∈ R such that

$$
\nabla^s \underline{F}(t^\*) + \nabla^s \overline{F}(t^\*) + \sum\_{i=1}^m \mu\_i^\* \nabla^s g\_i(t^\*) = \mathbf{0}.
$$

For *i* ∉ *H*, let *µ<sup>i</sup>* = 0; then we have $\sum_{i=1}^{m} \mu_i g_i(t^{*}) = 0$. The proof is complete.

**Example 2.** *Continued from Example 1, note that* $g_1(0) = 0$ *and* $g_1^s(t) \equiv -1$*. Moreover, M satisfies Slater's condition. For h* ∈ R*<sup>n</sup>, we have:*

$$D^{+}\underline{F}(0:h) = \lim\_{\alpha \to 0^{+}} \frac{\underline{F}(0+\alpha h) - \underline{F}(0)}{\alpha} = \begin{cases} 3h, \, h > 0;\\ 2h, \, h \le 0. \end{cases}$$

$$D^{-}\underline{F}(0:h) = \lim\_{\alpha \to 0^{-}} \frac{\underline{F}(0+\alpha h) - \underline{F}(0)}{\alpha} = 3h.$$

*Obviously, D*+*F*(*t* ∗ : *h*) ≥ 0 *implies that*

$$D^{+}\underline{F}(t^{\*}:h) + D^{-}\underline{F}(t^{\*}:h) \ge 0.$$

*Thus, the conditions in Theorem 6 hold at t* = 0*. On the other hand, we have:*

$$\nabla^s \underline{F}(\mathbf{0}) + \nabla^s \overline{F}(\mathbf{0}) + \sum\_{i \in H} \mu\_i^\* \nabla^s g\_i(\mathbf{0}) \tag{13}$$
 
$$= \frac{5}{2} + 3 + \mu\_1 \cdot (-1) + \mu\_2 \cdot 1 = 0$$

*when µ*<sub>1</sub> = 11/2 *and µ*<sub>2</sub> = 0*. Hence, Theorem 6 is verified.*
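The one-sided quotients used in Example 2 can likewise be approximated numerically (an illustrative Python sketch of ours; the helper `dq_plus` is our own name):

```python
# Lower endpoint function of F from Examples 1 and 2.
def F_lo(t):
    return 4 * t**2 + 2 * t - 3 if t < 0 else 3 * t - 3

# Right-sided directional difference quotient (F_lo(0 + a*h) - F_lo(0)) / a for small a > 0.
def dq_plus(h, a=1e-7):
    return (F_lo(a * h) - F_lo(0)) / a

assert abs(dq_plus(1.0) - 3.0) < 1e-4      # D+ F_lo(0 : h) = 3h for h > 0
assert abs(dq_plus(-1.0) - (-2.0)) < 1e-4  # D+ F_lo(0 : h) = 2h for h <= 0
```

This reproduces the piecewise formula for the right-sided derivative stated in the example.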

#### **5. Wolfe Type Duality**

In this section, we consider the Wolfe dual Problem (14) of Problem (5) as follows:

$$\max \quad F(t) + \sum\_{i=1}^{m} \mu\_i \mathbf{g}\_i(t) \tag{14}$$

$$\text{subject to} \quad \nabla^s \underline{F}(t) + \nabla^s \overline{F}(t) + \sum\_{i=1}^{m} \mu\_i \nabla^s \mathbf{g}\_i(t) = \mathbf{0},$$

$$\mu = (\mu\_1, \dots, \mu\_m) \ge 0.$$

For convenience, we write:

$$L(t, \mu) = F(t) + \sum_{i=1}^{m} \mu_i g_i(t). \tag{15}$$

We denote by

$$\mathcal{Y} = \left\{ (t, \mu) \in \mathbb{R}^n \times \mathbb{R}^m : \nabla^s \underline{F}(t) + \nabla^s \overline{F}(t) + \sum_{i=1}^m \mu_i \nabla^s g_i(t) = \mathbf{0} \right\} \tag{16}$$

the feasible set of dual Problem (14) and

$$O\_D(L, \mathcal{Y}) = \{ L(t, \mu) : (t, \mu) \in \mathcal{Y} \} \tag{17}$$

the set of all objective values of Problem (14).

**Definition 9.** *Let* (*t* ∗ , *µ* ∗ ) *be a feasible solution to Problem (14), i.e.,* (*t* ∗ , *µ* ∗ ) ∈ Y*.* (*t* ∗ , *µ* ∗ ) *is said to be a non-dominated solution to Problem (14), if there is no* (*t*, *µ*) ∈ Y *such that L*(*t* ∗ , *µ* ∗ ) ≺*LU L*(*t*, *µ*)*.*

Next, we discuss the solvability for Wolfe primal and dual problems.

**Lemma 1.** *Assume that F* : *M* → I *is LU-convex and gH-symmetrically differentiable, g<sup>i</sup>* : *M* → R*(i* = 1, . . . , *m) are symmetrically differentiable and convex on M. Furthermore, H* = {*i* : *gi*(*t* ∗ ) = <sup>0</sup>}*. If* <sup>ˆ</sup>*t,* (*t*, *<sup>µ</sup>*) *are feasible solutions to Problems* (5) *and* (14)*, respectively, then the following statements hold true:*

*(B*1*) If* $\underline{F}(t) \ge \underline{F}(\hat{t})$*, then* $\overline{F}(\hat{t}) \ge \overline{L}(t, \mu)$*;*
*(B*2*) If* $\overline{F}(t) \ge \overline{F}(\hat{t})$*, then* $\underline{F}(\hat{t}) \ge \underline{L}(t, \mu)$*.*

*Moreover, the statements still hold true under strict inequality.*

**Proof.** Suppose ˆ*t*, (*t*, *µ*) are feasible solutions to Problems (5) and (14), respectively. Since *F* is LU-convex, we have:

$$\begin{aligned} \overline{F}(\hat{t}) &\geq \overline{F}(t) + \nabla^s \overline{F}(t)(\hat{t} - t) \\ &= \overline{F}(t) - \nabla^s \underline{F}(t)(\hat{t} - t) - \sum_{i=1}^m \mu_i \nabla^s g_i(t)(\hat{t} - t) \\ &\geq \overline{F}(t) + \underline{F}(t) - \underline{F}(\hat{t}) + \sum_{i=1}^m \mu_i [g_i(t) - g_i(\hat{t})].\end{aligned}$$

If $\underline{F}(t) - \underline{F}(\hat{t}) \ge 0$, it follows that

$$
\overline{F}(\hat{t}) \ge \overline{F}(t) + \sum_{i=1}^{m} \mu_i g_i(t) = \overline{L}(t, \mu).
$$

Thus, the statement (*B*1) holds true. On the other hand, if $\underline{F}(t) - \underline{F}(\hat{t}) > 0$, then

$$
\overline{F}(\hat{t}) > \overline{F}(t) + \sum_{i=1}^{m} \mu_i g_i(t) = \overline{L}(t, \mu).
$$

The other statements can be proved by similar arguments.

**Lemma 2.** *Under the same assumptions as in Lemma 1, if $\hat{t}$ and $(t, \mu)$ are feasible solutions to Problems (5) and (14), respectively, then the following statements hold true:*

$$\text{(C1) If } \overline{F}(t) \le \overline{F}(\hat{t}), \text{ then } \overline{F}(\hat{t}) \ge \overline{L}(t, \mu);$$

*(C2) If $\underline{F}(t) \le \underline{F}(\hat{t})$, then $\underline{F}(\hat{t}) \ge \underline{L}(t, \mu)$.*

*Moreover, the statements still hold true under strict inequality.*

**Proof.** Suppose $\overline{F}(t) \le \overline{F}(\hat{t})$; then we have:

$$\begin{split} & \quad \overline{F}(\hat{t}) - \overline{L}(t, \mu) \\ &= \overline{F}(\hat{t}) - \overline{F}(t) - \sum\_{i=1}^{m} \mu\_{i} g\_{i}(t) \\ & \ge \nabla^s \overline{F}(t)(\hat{t} - t) + \left[ - \sum\_{i=1}^{m} \mu\_{i} g\_{i}(\hat{t}) + \sum\_{i=1}^{m} \mu\_{i} g\_{i}(\hat{t}) - \sum\_{i=1}^{m} \mu\_{i} g\_{i}(t) \right] \\ & \ge \nabla^s \overline{F}(t)(\hat{t} - t) + \left[ - \sum\_{i=1}^{m} \mu\_{i} g\_{i}(\hat{t}) + \sum\_{i=1}^{m} \mu\_{i} \nabla^s g\_{i}(t)(\hat{t} - t) \right] \\ &= \left[ \nabla^s \overline{F}(t) + \sum\_{i=1}^{m} \mu\_{i} \nabla^s g\_{i}(t) \right] (\hat{t} - t) - \sum\_{i=1}^{m} \mu\_{i} g\_{i}(\hat{t}) \\ & = -\nabla^s \underline{F}(t)(\hat{t} - t) - \sum\_{i=1}^{m} \mu\_{i} g\_{i}(\hat{t}) \\ & \ge \underline{F}(t) - \underline{F}(\hat{t}) - \sum\_{i=1}^{m} \mu\_{i} g\_{i}(\hat{t}) \\ & = \underline{F}(t) - \underline{L}(\hat{t}, \mu) \\ & \ge 0. \end{split}$$

Thus, the statement (C1) holds true. On the other hand, if $\overline{F}(t) < \overline{F}(\hat{t})$, then:

$$
\overline{F}(\hat{t}) > \overline{L}(t, \mu).
$$

The proof of (C2) is similar to that of (C1), so we omit it.

**Theorem 7.** *(Weak duality) Under the same assumptions as in Lemma 1, if $\hat{t}$ and $(t, \mu)$ are feasible solutions to Problems (5) and (14), respectively, then the following statements hold true: (D1) If $F(t)$ and $F(\hat{t})$ are comparable, then $F(\hat{t}) \succeq\_{LU} L(t, \mu)$. (D2) If $F(t)$ and $F(\hat{t})$ are not comparable, then $\underline{F}(\hat{t}) > \underline{L}(t, \mu)$ or $\overline{F}(\hat{t}) > \overline{L}(t, \mu)$.*

**Proof.** If $F(t)$ and $F(\hat{t})$ are comparable, then by Lemmas 1 and 2 we obtain statement (D1). If $F(t)$ and $F(\hat{t})$ are not comparable, then we have:

$$F(\hat{t}) \subset F(t), \text{ or } F(\hat{t}) \supset F(t).$$

By Lemmas 1 and 2, we obtain that:

$$
\underline{F}(\hat{t}) > \underline{L}(t,\mu), \text{ or } \overline{F}(\hat{t}) > \overline{L}(t,\mu).
$$

The proof is complete.

**Example 3.** *Consider the optimization problem in Example 1. The corresponding Wolfe dual problem is:*

$$\begin{aligned} \max \quad & F(t) + \mu\_1 g\_1(t) + \mu\_2 g\_2(t) \\ \text{subject to} \quad & \nabla^s \underline{F}(t) + \nabla^s \overline{F}(t) + \mu\_1 \nabla^s g\_1(t) + \mu\_2 \nabla^s g\_2(t) = 0, \\ & \mu = (\mu\_1, \mu\_2) \ge 0. \end{aligned} \tag{18}$$

*Clearly, $\hat{t} = 0$ is a feasible solution of Problem (8) and the objective value is $[-3, 0]$. Moreover, $(t, \mu\_1, \mu\_2) = (-\frac{1}{2}, 0, 2)$ is a feasible solution to Problem (18), and its objective value is $[-6, -\frac{15}{4}]$.*

*We observe that*

$$F(0) \succ L(-\frac{1}{2}, 0, 2). \tag{19}$$

*Hence, Theorem 7 is verified.*

**Theorem 8.** *(Solvability) Under the same assumptions as in Lemma 1, if $(t^\*, \mu^\*) \in Y$ and $L(t^\*, \mu^\*) \in O\_P(F, \mathcal{X})$, then $(t^\*, \mu^\*)$ solves Problem (14).*

**Proof.** Suppose $(t^\*, \mu^\*)$ is not a non-dominated solution to Problem (14); then there exists $(t, \mu) \in Y$ such that:

$$L(t^\*, \mu^\*) \prec L(t, \mu).$$

Since $L(t^\*, \mu^\*) \in O\_P(F, \mathcal{X})$, there exists $\hat{t} \in \mathcal{X}$ such that:

$$F(\hat{t}) = L(t^\*, \mu^\*) \prec L(t, \mu). \tag{20}$$

According to Theorem 7, if *F*(*t*), *F*(ˆ*t*) are comparable, then we have

$$F(\hat{t}) \succeq L(t, \mu).$$

If *F*(*t*), *F*(ˆ*t*) are not comparable, then:

$$
\underline{F}(\hat{t}) > \underline{L}(t,\mu), \text{ or } \overline{F}(\hat{t}) > \overline{L}(t,\mu).
$$

Both results contradict Equation (20). Thus, the proof is complete.

**Theorem 9.** *(Solvability) Under the same assumptions as in Lemma 1, if $\hat{t} \in \mathcal{X}$ is a feasible solution to Problem (5) and $F(\hat{t}) \in O\_D(L, Y)$, then $\hat{t}$ solves Problem (5).*

**Proof.** The proof is similar to Theorem 8, so we omit it.

**Corollary 1.** *Under the same assumptions as in Lemma 1, if $\hat{t}$ and $(t^\*, \mu^\*)$ are feasible solutions to Problems (5) and (14), respectively, and moreover $F(\hat{t}) = L(t^\*, \mu^\*)$, then $\hat{t}$ solves Problem (5) and $(t^\*, \mu^\*)$ solves Problem (14).*

**Proof.** The proof follows from Theorems 8 and 9.

**Theorem 10.** *(Strong duality) Under the same assumptions as in Lemma 1, if $F$ and $g\_i$ $(i = 1, \ldots, m)$ satisfy the conditions (A1) and (A2) at $t^\*$, then there exists $\mu^\* \in \mathbb{R}^m\_+$ such that $(t^\*, \mu^\*)$ is a solution of Problem (14) and*

$$L(t^\*, \mu^\*) = F(t^\*).$$

**Proof.** By Theorem 6, there exists $\mu^\* \in \mathbb{R}^m\_+$ such that:

$$
\nabla^s \underline{F}(t^\*) + \nabla^s \overline{F}(t^\*) + \sum\_{i=1}^m \mu\_i^\* \nabla^s g\_i(t^\*) = 0,\tag{21}
$$

and $\sum\_{i=1}^m \mu\_i^\* g\_i(t^\*) = 0$. It can be shown that $L(t^\*, \mu^\*) \in O\_D(L, Y)$ and

$$L(t^\*, \mu^\*) = F(t^\*).$$

By Corollary 1, $(t^\*, \mu^\*)$ is a solution to Problem (14). The proof is complete.

**Example 4.** *Continuing from Example 2: after calculation, the non-dominated solution to Problem (18) is $(0, \frac{11}{2}, 0)$ and the objective value is $[-6, 0]$; $t = 0$ is also a non-dominated solution to Problem (8) and the objective value is $[-6, 0]$. Then we have:*

$$L(0, \frac{7}{2}, 0) = F(0).$$

*On the other hand, the IVF F in Example 2 satisfies the conditions (A*1*) and (A*2*), which verifies Theorem 10.*

#### **6. The Optimality Conditions with Generalized Convexity**

In this section, we use the concepts of SP-convexity and SQ-convexity, which are less restrictive than LU-convexity, to obtain some generalized optimality theorems for Problem (5).

**Theorem 11.** *(Sufficient condition) Suppose $F$ is SP-convex and $g\_i$ is s-quasiconvex at $t^\*$ for $i \in H$. If $t^\* \in \mathcal{X}$, and condition (7) in Theorem 4 holds for some $\mu^\* \in \mathbb{R}^m\_+$, then $t^\*$ is a non-dominated solution to Problem (5).*

**Proof.** Assume that condition (7) in Theorem 4 holds for some $\mu^\* \ge 0$. We have $\sum\_{i=1}^m \mu\_i^\* g\_i(t^\*) = 0$, where $\mu\_i^\* = 0$ when $i \notin H$. Since $g\_i(t) \le g\_i(t^\*)$ and $g\_i$ is s-quasiconvex at $t^\*$ for $i \in H$, we obtain $g\_i^s(t^\*)(t - t^\*) \le 0$. Thus:

$$\sum\_{i=1}^{m} \mu\_i^\* g\_i^s(t^\*) (t - t^\*) \le 0, \quad \text{for all } t \in \mathcal{X},$$

which implies:

$$\nabla^s(\underline{F}(t^\*) + \overline{F}(t^\*))(t - t^\*) \ge 0 \text{ for all } t \in \mathcal{X}.$$

Thanks to the SP-convexity of *F*, we have:

$$
\underline{F}(t) + \overline{F}(t) \ge \underline{F}(t^\*) + \overline{F}(t^\*) \text{ for all } t \in \mathcal{X}. \tag{22}
$$

Then $t^\*$ is an optimal solution of the real-valued objective function $\underline{F} + \overline{F}$ subject to the same constraints as Problem (5). Suppose $t^\*$ is not a non-dominated solution of Problem (5); then there exists $t \in \mathcal{X}$ such that:

$$F(t) \prec F(t^\*)$$

which implies $\underline{F}(t) + \overline{F}(t) < \underline{F}(t^\*) + \overline{F}(t^\*)$, contradicting Equation (22). The proof is complete.

**Example 5.** *Consider the following optimization:*

$$\begin{array}{ll}\min & F(t) \\ \text{subject to} & g\_1(t) \le 0, \\ & g\_2(t) \le 0. \end{array} \tag{23}$$

*where:*

$$F(t) = \begin{cases} \ [t^3 + t, 2t^3 + t], & \text{if } t \ge 0; \\ \ [2t, 1.5t], & \text{if } t < 0. \end{cases}$$

*and $g\_1(t) = -t$, $g\_2(t) = t - 1$.*

*We observe that F is not gH-differentiable at t* = 0*, and F is not LU-convex at t* = 0 *with:*

$$F(0) \not\preceq\_{LU} \frac{2}{3}F(\frac{1}{4}) + \frac{1}{3}F(-\frac{1}{2}).$$

*However, F is SP-convex at t* = 0 *and g<sup>i</sup> is s-quasiconvex at t* = 0 *for i* ∈ *H. Furthermore, F is gH-symmetrically differentiable at t* = 0 *with*

$$F^s(0) = [\frac{5}{4}, \frac{3}{2}].$$

*Moreover, we have:*

$$\begin{aligned} \nabla^s \underline{F}(0) + \nabla^s \overline{F}(0) + \sum\_{i=1}^m \mu\_i \nabla^s g\_i(0) &= 0; \\ \sum\_{i=1}^m \mu\_i g\_i(0) &= 0, \quad \text{where } \mu = (\tfrac{11}{4}, 0)^T. \end{aligned} \tag{24}$$

*On the other hand, t* = 0 *is a non-dominated solution to Problem* (23)*, which verifies Theorem 11.*
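Since $F$, $g\_1$, $g\_2$ in Example 5 are given explicitly, the quantities above can be checked numerically. A sketch in plain endpoint-wise arithmetic (an illustration, not part of the original example):

```python
# Interval-valued F from Example (23), as (lower, upper) endpoint pairs.
def F(t):
    if t >= 0:
        return (t**3 + t, 2*t**3 + t)
    return (2*t, 1.5*t)

# Symmetric difference quotients of both endpoints approximate F^s(0).
h = 1e-6
lo = (F(h)[0] - F(-h)[0]) / (2*h)          # tends to 3/2
up = (F(h)[1] - F(-h)[1]) / (2*h)          # tends to 5/4
Fs0 = (min(lo, up), max(lo, up))           # about (5/4, 3/2)

# LU-convexity fails at 0: F(0) = (0, 0) is not <=_LU the convex mix.
mix = tuple(2/3*a + 1/3*b for a, b in zip(F(0.25), F(-0.5)))
lu_convex_here = (0 <= mix[0]) and (0 <= mix[1])     # False

# Stationarity (24) with mu = (11/4, 0): g1'(0) = -1, g2'(0) = 1.
mu1, mu2 = 11/4, 0.0
residual = lo + up + mu1*(-1) + mu2*1                # about 0
```

The symmetric quotients sidestep the kink at $t = 0$ that breaks ordinary gH-differentiability, which is exactly the point of the example.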

**Theorem 12.** *(Necessary condition) Suppose $F$ is SQ-concave at $t^\*$ and $g\_i$ is s-pseudoconcave at $t^\*$ for $i \in H$. If $t^\*$ is a non-dominated solution to Problem (5) and $g\_i$ is lower semicontinuous on $M$ for all $i \notin H$, then $(t^\*, \mu^\*)$ satisfies condition (7) in Theorem 4 for some $\mu^\* \ge 0$.*

**Proof.** Let $\mathcal{X}\_1 = \{t \in \mathcal{X} : g\_i(t) < 0 \text{ for all } i \notin H\}$. The set $\mathcal{X}\_1$ is relatively open since $g\_i$ is lower semicontinuous on $M$ for each $i \notin H$. Since $t^\* \in \mathcal{X}\_1$, there is some $\alpha\_0$ such that for any $y \in \mathbb{R}^n$, $t^\* + \alpha y \in \mathcal{X}\_1$ whenever $0 < \alpha < \alpha\_0$.

Suppose $0 < \alpha < \alpha\_0$ and $g\_i^s(t^\*)^T y \le 0$ for $i \in H$; then $g\_i^s(t^\*)^T \alpha y \le 0$ for $i \in H$. According to the s-pseudoconcavity of $g\_i$ at $t^\*$, we have:

$$
g\_i(t^\* + \alpha y) \le g\_i(t^\*).
$$

Since $t^\*$ solves Problem (5), we have $F(t^\*) \preceq\_{LU} F(t^\* + \alpha y)$. The SQ-concavity of $F$ at $t^\*$ implies that

$$(\nabla^s \underline{F}(t^\*) + \nabla^s \overline{F}(t^\*))(\alpha y) \ge 0.$$

Thus:

$$g\_i^s(t^\*)^T y \le 0,\ (\nabla^s \underline{F}(t^\*) + \nabla^s \overline{F}(t^\*)) y < 0$$

has no solution $y \in \mathbb{R}^n$. Hence, by Farkas' lemma, there exist $\mu\_i^\* \ge 0$ such that:

$$
\nabla^s \underline{F}(t^\*) + \nabla^s \overline{F}(t^\*) + \sum\_{i=1}^m \mu\_i^\* \nabla^s g\_i(t^\*) = 0.
$$

**Example 6.** *Note that in Example 5, $t = 0$ is a non-dominated solution, $F$ is SQ-concave at $t = 0$, $g\_1(t) = -t$ is s-pseudoconcave at $t = 0$, and $g\_2(t) = t - 1$ is lower semicontinuous on $\mathbb{R}$.*

*On the other hand, for $\mu = (\frac{11}{4}, 0)$, condition (7) is satisfied at $t = 0$, which verifies Theorem 12.*

**Theorem 13.** *(Weak duality) Suppose that, for each $\mu$ with $(t, \mu) \in R$, $L(\cdot, \mu)$ is SP-convex on $\mathcal{X}$. Then for all $\hat{t} \in \mathcal{X}$ and $(t, \mu) \in Y$, $L(t, \mu) \preceq\_{LU} F(\hat{t})$.*

**Proof.** Consider $\hat{t} \in \mathcal{X}$ and $(t, \mu) \in Y$. Then we have $L\_t^s(t, \mu) = 0$. Since $L(\cdot, \mu)$ is SP-convex on $\mathcal{X}$, we obtain $L(\hat{t}, \mu) \succeq L(t, \mu)$. Therefore,

$$F(\hat{t}) + \sum\_{i=1}^{m} \mu\_i g\_i(\hat{t}) \succeq L(t, \mu).$$

The proof is complete.

**Example 7.** *Continuing the problem of Example 5: $t = 0$ is a feasible solution to Problem (23) and the objective value is $F(0) = 0$.*

*Moreover, $(t, \mu) = (1, 11, 0)$ is a feasible solution to the Wolfe dual of Problem (23) and the objective value is $[-9, -8]$. Furthermore, we have*

$$F(0) \succ L(1,11,0),$$

*which verifies Theorem 13.*
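With $F$, $g\_1$, $g\_2$ taken from Example 5, the dual objective value quoted above can be reproduced directly. A quick sketch (function names are illustrative):

```python
def F(t):
    # Interval-valued objective from Example (23), as (lower, upper).
    if t >= 0:
        return (t**3 + t, 2*t**3 + t)
    return (2*t, 1.5*t)

g1 = lambda t: -t
g2 = lambda t: t - 1

def L(t, mu1, mu2):
    # Wolfe dual objective: both endpoints of F(t) shift by the real
    # number mu1*g1(t) + mu2*g2(t).
    lo, up = F(t)
    shift = mu1*g1(t) + mu2*g2(t)
    return (lo + shift, up + shift)

print(L(1, 11, 0))     # (-9, -8), matching the example
print(F(0))            # (0, 0), which strictly LU-dominates (-9, -8)
```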

**Theorem 14.** *(Strong duality) Suppose $F$, $g\_i$ $(i = 1, \ldots, m)$, and $t^\*$ satisfy the conditions of Theorem 12. Furthermore, suppose that for each $\mu$ with $(t, \mu) \in R$, $L(\cdot, \mu)$ is SP-convex on $\mathcal{X}$. Then there exists $\mu^\* \ge 0$ such that $(t^\*, \mu^\*)$ solves Problem (14) and $L(t^\*, \mu^\*) = F(t^\*)$.*

**Proof.** The proof is similar to the proof of Theorem 10.

**Example 8.** *Continuing from Example 5, the non-dominated solution to the Wolfe dual of Problem (23) is $(0, \frac{11}{4}, 0)$ and the objective value is $L(0, \frac{11}{4}, 0) = 0$.*

*Also, $t = 0$ is a non-dominated solution of Problem (23) and the objective value is $F(0) = 0$. Then we have:*

$$L(0, \frac{11}{4}, 0) = F(0).$$

*On the other hand, the IVF F in Example 5 satisfies the conditions of Theorem 14, which verifies Theorem 14.*

#### **7. Conclusions**

The IVOP is an interesting topic with many real-world applications, and its nondifferentiable counterpart is of interest as well. In this work, we investigate gH-symmetrically differentiable IVOPs and obtain KKT conditions and duality theorems that are strictly more general than those in [28]. Additionally, more appropriate concepts of generalized convexity are introduced to extend the optimality conditions in [24]. Some developments of the results presented in this paper, to be investigated in future work, concern the saddle-point optimality criteria for the considered class of IVOPs.

**Author Contributions:** Funding acquisition, G.Y., W.L. and D.Z.; writing—original draft, Y.G.; writing—review and editing, G.Y., W.L., D.Z. and S.T. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research was funded by the National Key Research and Development Program of China (2018YFC1508100), Natural Science Foundation of Jiangsu Province (BK20180500), Key Projects of Educational Commission of Hubei Province of China (D20192501), and Philosophy and Social Sciences of Educational Commission of Hubei Province of China (20Y109).

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Not applicable.

**Acknowledgments:** This work has been supported by the National Key Research and Development Program of China (2018YFC1508100), Natural Science Foundation of Jiangsu Province (BK20180500), Key Projects of Educational Commission of Hubei Province of China (D20192501), and Philosophy and Social Sciences of Educational Commission of Hubei Province of China (20Y109).

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


## *Article* **On Well-Posedness of Some Constrained Variational Problems**

**Savin Treanță**

Department of Applied Mathematics, University Politehnica of Bucharest, 060042 Bucharest, Romania; savin.treanta@upb.ro

**Abstract:** By considering the new forms of the notions of lower semicontinuity, pseudomonotonicity, hemicontinuity and monotonicity of the considered scalar multiple integral functional, in this paper we study the well-posedness of a new class of variational problems with variational inequality constraints. More specifically, by defining the set of approximating solutions for the class of variational problems under study, we establish several results on well-posedness.

**Keywords:** constrained variational problem; well-posedness; multiple integral functional

#### **1. Introduction**

The concept of well-posedness is a very useful mathematical tool in the study of optimization problems. Thus, beginning with the work of Tykhonov [1], many types of well-posedness associated with variational problems have been introduced (Levitin–Polyak well-posedness [2–5], *α*-well-posedness [6,7], extended well-posedness [8–16], *L*-well-posedness [17]). Additionally, this mathematical tool can be used to study some related problems: variational inequality problems [18–20], complementary problems [21], equilibrium problems [22,23], fixed point problems [24], hemivariational inequality problems [25], Nash equilibrium problems [26], and so on. The well-posedness of generalized variational inequalities and the corresponding optimization problems has been analyzed by Jayswal and Shalini [27]. Moreover, an interesting and important extension of the variational inequality problem is the multidimensional variational inequality problem and the associated multi-time optimization problems (see [28–33]). Recently, Treanță [30] investigated well-posed isoperimetric-type constrained variational control problems. For other different but connected ideas, the reader is directed to Dridi and Djebabla [34] and Jana [35].

In this paper, motivated and inspired by the above research papers, we study the well-posedness property for new constrained variational problems involving second-order multiple integral functionals and partial derivatives. In this regard, we formulate new forms of monotonicity, lower semicontinuity, hemicontinuity, and pseudomonotonicity for the considered multiple integral-type functional. Further, we introduce the set of approximating solutions for the constrained optimization problem under study and establish several theorems on well-posedness. The previous research works in this scientific area did not take into account the new forms of the notions mentioned above. In essence, the results derived here can be considered dynamic generalizations of the corresponding static results already existing in the literature. In this paper, the framework is based on infinite-dimensional function spaces and multiple integral-type functionals. This element is completely new for well-posed optimization problems.

The present paper is structured as follows: In Section 2, we formulate the problem under study and introduce the new forms of monotonicity, lower semicontinuity, hemicontinuity, and pseudomonotonicity for the considered multiple integral-type functional. Additionally, an auxiliary lemma is provided. In Section 3, we study the well-posedness of the considered constrained variational problem. More precisely, we prove that well-posedness is equivalent to the existence and uniqueness of a solution of the aforesaid problem. Finally, Section 4 concludes the paper and outlines further developments.

**Citation:** Treanță, S. On Well-Posedness of Some Constrained Variational Problems. *Mathematics* **2021**, *9*, 2478. https://doi.org/10.3390/math9192478

Academic Editor: Simeon Reich

Received: 20 September 2021 Accepted: 1 October 2021 Published: 4 October 2021

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2021 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

#### **2. Preliminaries and Problem Formulation**

In this paper, we consider the following notations and mathematical tools: denote by $K$ a compact domain in $\mathbb{R}^m$ and consider the point $\zeta = (\zeta^{\alpha}) \in K$, $\alpha = \overline{1, m}$; let $\mathbb{E}$ denote the space of *state* functions $s : K \to \mathbb{R}^n$ of class $C^4$, and let $s\_{\alpha} := \frac{\partial s}{\partial \zeta^{\alpha}}$ and $s\_{\beta\gamma} := \frac{\partial^2 s}{\partial \zeta^{\beta} \partial \zeta^{\gamma}}$ denote the *partial speed* and *partial acceleration*, respectively; consider $E \subseteq \mathbb{E}$ a nonempty, closed, and convex subset, with $s|\_{\partial K}$ given, equipped with the inner product

$$\langle s, z \rangle = \int\_K [s(\zeta) \cdot z(\zeta)] \, d\zeta = \int\_K \left[ \sum\_{i=1}^n s^i(\zeta) z^i(\zeta) \right] d\zeta, \quad \forall s, z \in \mathbb{E}$$

and the induced norm, where $d\zeta = d\zeta^1 \cdots d\zeta^m$ is the element of volume on $\mathbb{R}^m$.

Let $J^2(\mathbb{R}^m, \mathbb{R}^n)$ be the second-order jet bundle for $\mathbb{R}^m$ and $\mathbb{R}^n$. By using the real-valued continuously differentiable function $f : J^2(\mathbb{R}^m, \mathbb{R}^n) \to \mathbb{R}$, we define the multiple integral-type functional:

$$F: \mathbb{E} \to \mathbb{R}, \quad F(s) = \int\_K f\left(\zeta, s(\zeta), s\_{\alpha}(\zeta), s\_{\beta\gamma}(\zeta)\right) d\zeta.$$

By using the above mathematical framework, we formulate the *constrained variational problem* (in short, CVP), where $\pi\_s(\zeta) := (\zeta, s(\zeta), s\_{\alpha}(\zeta), s\_{\beta\gamma}(\zeta))$:

$$\begin{array}{cc} \text{(CVP)} & \text{Minimize } \int\_{K} f(\pi\_{s}(\zeta)) d\zeta\\ & \text{subject to } s \in \Omega, \end{array}$$

where Ω stands for the set of solutions for the *variational inequality problem* (in short, VIP): *find s* ∈ *E such that*

$$\begin{split} \text{(VIP)} \quad & \int\_{K} \left[ \frac{\partial f}{\partial s} (\pi\_{s}(\zeta)) (z(\zeta) - s(\zeta)) + \frac{\partial f}{\partial s\_{\alpha}} (\pi\_{s}(\zeta)) D\_{\alpha} (z(\zeta) - s(\zeta)) \right. \\ & \left. + \frac{1}{n(\beta, \gamma)} \frac{\partial f}{\partial s\_{\beta\gamma}} (\pi\_{s}(\zeta)) D\_{\beta\gamma}^{2} (z(\zeta) - s(\zeta)) \right] d\zeta \ge 0, \quad \forall z \in E, \end{split}$$

where $D\_{\beta\gamma}^2 := D\_{\beta}(D\_{\gamma})$, and $n(\beta, \gamma)$ represents the multi-index notation (Saunders [36], Treanță [33]).

More precisely, the set of all feasible solutions of (VIP) is defined as

$$\begin{split} \Omega = \Big\{ s \in E : \int\_{K} \Big[ (z(\zeta) - s(\zeta)) \frac{\partial f}{\partial s}(\pi\_{s}(\zeta)) + D\_{\alpha}(z(\zeta) - s(\zeta)) \frac{\partial f}{\partial s\_{\alpha}}(\pi\_{s}(\zeta)) \\ + \frac{1}{n(\beta, \gamma)} D\_{\beta\gamma}^{2}(z(\zeta) - s(\zeta)) \frac{\partial f}{\partial s\_{\beta\gamma}}(\pi\_{s}(\zeta)) \Big] d\zeta \ge 0, \; \forall z \in E \Big\}. \end{split}$$

**Definition 1.** *The functional $F(s) = \int\_K f(\pi\_s(\zeta)) d\zeta$ is monotone on $E$ if the following inequality holds:*

$$\begin{split} &\int\_{K} \left[ (s(\zeta) - z(\zeta)) \left( \frac{\partial f}{\partial s} (\pi\_{s}(\zeta)) - \frac{\partial f}{\partial s} (\pi\_{z}(\zeta)) \right) \right. \\ &+ D\_{\alpha} (s(\zeta) - z(\zeta)) \left( \frac{\partial f}{\partial s\_{\alpha}} (\pi\_{s}(\zeta)) - \frac{\partial f}{\partial s\_{\alpha}} (\pi\_{z}(\zeta)) \right) \\ &\left. + \frac{1}{n(\beta,\gamma)} D\_{\beta\gamma}^{2} (s(\zeta) - z(\zeta)) \left( \frac{\partial f}{\partial s\_{\beta\gamma}} (\pi\_{s}(\zeta)) - \frac{\partial f}{\partial s\_{\beta\gamma}} (\pi\_{z}(\zeta)) \right) \right] d\zeta \geq 0, \end{split}$$

*for all $s, z \in E$.*

**Definition 2.** *The functional $F(s) = \int\_K f(\pi\_s(\zeta)) d\zeta$ is pseudomonotone on $E$ if the following implication holds:*

$$\begin{split} \int\_{K} \left[ (s(\zeta) - z(\zeta)) \frac{\partial f}{\partial s} (\pi\_{z}(\zeta)) + D\_{\alpha} (s(\zeta) - z(\zeta)) \frac{\partial f}{\partial s\_{\alpha}} (\pi\_{z}(\zeta)) \right. \\ \left. + \frac{1}{n(\beta, \gamma)} D\_{\beta\gamma}^{2} (s(\zeta) - z(\zeta)) \frac{\partial f}{\partial s\_{\beta\gamma}} (\pi\_{z}(\zeta)) \right] d\zeta \geq 0 \\ \Rightarrow \quad \int\_{K} \left[ (s(\zeta) - z(\zeta)) \frac{\partial f}{\partial s} (\pi\_{s}(\zeta)) + D\_{\alpha} (s(\zeta) - z(\zeta)) \frac{\partial f}{\partial s\_{\alpha}} (\pi\_{s}(\zeta)) \right. \\ \left. + \frac{1}{n(\beta, \gamma)} D\_{\beta\gamma}^{2} (s(\zeta) - z(\zeta)) \frac{\partial f}{\partial s\_{\beta\gamma}} (\pi\_{s}(\zeta)) \right] d\zeta \geq 0, \end{split}$$

*for all $s, z \in E$.*

**Example 1.** *Consider $m = 2$, $n = 1$, and $K = [0, 3]^2$. Additionally, we define $f(\pi\_s(\zeta)) = 2 \sin s(\zeta) + s(\zeta) e^{s(\zeta)}$. The functional $F(s) = \int\_K f(\pi\_s(\zeta)) d\zeta$ is pseudomonotone on $E = C^4(K, [-1, 1])$: since $f$ depends only on $s$, the terms in $s\_{\alpha}$ and $s\_{\beta\gamma}$ vanish and the defining implication reduces to*

$$\int\_{K} (s(\zeta) - z(\zeta)) \Big( 2 \cos z(\zeta) + e^{z(\zeta)} + z(\zeta) e^{z(\zeta)} \Big) d\zeta \ge 0 \quad \forall s, z \in E$$

$$\Rightarrow \quad \int\_{K} (s(\zeta) - z(\zeta)) \Big( 2 \cos s(\zeta) + e^{s(\zeta)} + s(\zeta) e^{s(\zeta)} \Big) d\zeta \ge 0 \quad \forall s, z \in E.$$

*By direct computation, we obtain*

$$\begin{split} \int\_{K} & \left[ (s(\zeta) - z(\zeta)) \left( \frac{\partial f}{\partial s} (\pi\_{s}(\zeta)) - \frac{\partial f}{\partial s} (\pi\_{z}(\zeta)) \right) \right. \\ &+ D\_{\alpha}(s(\zeta) - z(\zeta)) \left( \frac{\partial f}{\partial s\_{\alpha}} (\pi\_{s}(\zeta)) - \frac{\partial f}{\partial s\_{\alpha}} (\pi\_{z}(\zeta)) \right) \\ &\left. + \frac{1}{n(\beta, \gamma)} D\_{\beta\gamma}^{2}(s(\zeta) - z(\zeta)) \left( \frac{\partial f}{\partial s\_{\beta\gamma}} (\pi\_{s}(\zeta)) - \frac{\partial f}{\partial s\_{\beta\gamma}} (\pi\_{z}(\zeta)) \right) \right] d\zeta \\ &= \int\_{K} (s(\zeta) - z(\zeta)) \Big[ 2(\cos s(\zeta) - \cos z(\zeta)) + s(\zeta) e^{s(\zeta)} + e^{s(\zeta)} - z(\zeta) e^{z(\zeta)} - e^{z(\zeta)} \Big] d\zeta \not\geq 0, \end{split}$$

*which implies that the functional $F(s) = \int\_K f(\pi\_s(\zeta)) d\zeta$ is not monotone on $E$ (in the sense of Definition 1).*
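A plausible way to probe the pseudomonotonicity claim numerically: on $E = C^4(K, [-1, 1])$ the states take values in $[-1, 1]$, where $\partial f/\partial s = 2\cos x + (1 + x)e^x$ stays strictly positive, so both integrands in the implication weight $s - z$ by a positive factor. A quick check (a sketch, not the paper's argument):

```python
import math

# df/ds for f = 2*sin(s) + s*e^s
dfds = lambda x: 2*math.cos(x) + (1 + x)*math.exp(x)

# Minimum over a fine grid of [-1, 1]; dfds is increasing there,
# so the minimum sits at x = -1 with value 2*cos(1) ~ 1.0806 > 0.
m = min(dfds(-1 + 2*i/10000) for i in range(10001))
print(round(m, 4))   # prints 1.0806
```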

*By considering the work of Usman and Khan [37], we provide the following definition.*

**Definition 3.** *The functional $F(s) = \int\_K f(\pi\_s(\zeta)) d\zeta$ is hemicontinuous on $E$ if the application*

$$
\lambda \to \left\langle s(\zeta) - z(\zeta), \frac{\delta F}{\delta s\_{\lambda}}(\zeta) \right\rangle, \quad 0 \le \lambda \le 1
$$

*is continuous at $0^+$, for all $s, z \in E$, where*

$$\frac{\delta F}{\delta s\_{\lambda}}(\zeta) := \frac{\partial f}{\partial s}(\pi\_{s\_{\lambda}}(\zeta)) - D\_{\alpha} \frac{\partial f}{\partial s\_{\alpha}}(\pi\_{s\_{\lambda}}(\zeta)) + \frac{1}{n(\beta, \gamma)} D\_{\beta\gamma}^2 \frac{\partial f}{\partial s\_{\beta\gamma}}(\pi\_{s\_{\lambda}}(\zeta)),$$

$$s\_{\lambda} := \lambda s + (1 - \lambda)z.$$
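For intuition, the map in Definition 3 can be evaluated numerically for the data of Example 1 ($f = 2\sin s + s e^s$, so $\frac{\delta F}{\delta s\_\lambda}$ reduces to $\partial f/\partial s$ at $s\_\lambda$). The sample states $s$, $z$ and the midpoint-rule grid below are illustrative assumptions:

```python
import math

dfds = lambda x: 2*math.cos(x) + (1 + x)*math.exp(x)   # delta F / delta s

N = 60                                        # midpoint rule on K = [0,3]^2
pts = [(3*(i + 0.5)/N, 3*(j + 0.5)/N) for i in range(N) for j in range(N)]
w = (3/N)**2                                  # cell area

s = lambda a, b: 0.5*math.sin(a + b)          # sample states valued in [-1, 1]
z = lambda a, b: -0.5*math.cos(a)

def phi(lam):
    """<s - z, dF/ds_lambda> with s_lambda = lam*s + (1 - lam)*z."""
    total = 0.0
    for (a, b) in pts:
        sl = lam*s(a, b) + (1 - lam)*z(a, b)
        total += (s(a, b) - z(a, b)) * dfds(sl) * w
    return total

# Hemicontinuity at 0+: phi(lam) approaches phi(0) as lam decreases to 0.
print(phi(1e-2) - phi(0.0), phi(1e-6) - phi(0.0))
```

Since `dfds` is smooth and the states are bounded, the difference shrinks linearly in `lam`, which is what continuity at $0^+$ asks for along this segment.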

**Lemma 1.** *Assume the functional $F(s) = \int\_K f(\pi\_s(\zeta)) d\zeta$ is hemicontinuous and pseudomonotone on $E$. Then, the function $s \in E$ solves (VIP) if and only if it solves the variational inequality*

$$\int\_{K} \left[ (z(\zeta) - s(\zeta)) \frac{\partial f}{\partial s}(\pi\_{z}(\zeta)) + D\_{\alpha}(z(\zeta) - s(\zeta)) \frac{\partial f}{\partial s\_{\alpha}}(\pi\_{z}(\zeta)) + \frac{1}{n(\beta, \gamma)} D\_{\beta\gamma}^{2}(z(\zeta) - s(\zeta)) \frac{\partial f}{\partial s\_{\beta\gamma}}(\pi\_{z}(\zeta)) \right] d\zeta \ge 0, \quad \forall z \in E.$$

**Proof.** Firstly, let us consider that the function *s* ∈ *E* solves (VIP). In consequence, it follows

$$\int\_{K} \left[ (z(\zeta) - s(\zeta)) \frac{\partial f}{\partial s}(\pi\_{s}(\zeta)) + D\_{\alpha}(z(\zeta) - s(\zeta)) \frac{\partial f}{\partial s\_{\alpha}}(\pi\_{s}(\zeta)) + \frac{1}{n(\beta, \gamma)} D\_{\beta\gamma}^{2}(z(\zeta) - s(\zeta)) \frac{\partial f}{\partial s\_{\beta\gamma}}(\pi\_{s}(\zeta)) \right] d\zeta \ge 0, \quad \forall z \in E.$$

By using the pseudomonotonicity property of $F(s) = \int\_K f(\pi\_s(\zeta)) d\zeta$, the previous inequality involves

$$\int\_{K} \left[ (z(\zeta) - s(\zeta)) \frac{\partial f}{\partial s}(\pi\_{z}(\zeta)) + D\_{\alpha}(z(\zeta) - s(\zeta)) \frac{\partial f}{\partial s\_{\alpha}}(\pi\_{z}(\zeta)) + \frac{1}{n(\beta, \gamma)} D\_{\beta\gamma}^{2}(z(\zeta) - s(\zeta)) \frac{\partial f}{\partial s\_{\beta\gamma}}(\pi\_{z}(\zeta)) \right] d\zeta \ge 0, \quad \forall z \in E.$$

Conversely, assume that

$$\int\_{K} \left[ (z(\zeta) - s(\zeta)) \frac{\partial f}{\partial s}(\pi\_{z}(\zeta)) + D\_{\alpha}(z(\zeta) - s(\zeta)) \frac{\partial f}{\partial s\_{\alpha}}(\pi\_{z}(\zeta)) + \frac{1}{n(\beta, \gamma)} D\_{\beta\gamma}^{2}(z(\zeta) - s(\zeta)) \frac{\partial f}{\partial s\_{\beta\gamma}}(\pi\_{z}(\zeta)) \right] d\zeta \ge 0, \quad \forall z \in E.$$

For *z* ∈ *E* and *λ* ∈ (0, 1], we define

$$z\_{\lambda} = (1 - \lambda)s + \lambda z \in E.$$

Therefore, the above inequality can be rewritten as follows

$$\int\_{K} \left[ (z\_{\lambda}(\zeta) - s(\zeta)) \frac{\partial f}{\partial s}(\pi\_{z\_{\lambda}}(\zeta)) + D\_{\alpha}(z\_{\lambda}(\zeta) - s(\zeta)) \frac{\partial f}{\partial s\_{\alpha}}(\pi\_{z\_{\lambda}}(\zeta)) + \frac{1}{n(\beta, \gamma)} D\_{\beta\gamma}^{2}(z\_{\lambda}(\zeta) - s(\zeta)) \frac{\partial f}{\partial s\_{\beta\gamma}}(\pi\_{z\_{\lambda}}(\zeta)) \right] d\zeta \ge 0, \quad z \in E.$$

By using *zλ*(*ζ*) − *s*(*ζ*) = *λ*(*z*(*ζ*) − *s*(*ζ*)), dividing by *λ* > 0, and letting *λ* → 0 (together with the hemicontinuity property of $F(s) = \int\_K f(\pi\_s(\zeta))d\zeta$), it follows that

$$\int\_{K} \left[ (z(\boldsymbol{\zeta}) - s(\boldsymbol{\zeta})) \frac{\partial f}{\partial s}(\pi\_{s}(\boldsymbol{\zeta})) + D\_{\boldsymbol{a}}(z(\boldsymbol{\zeta}) - s(\boldsymbol{\zeta})) \frac{\partial f}{\partial s\_{\boldsymbol{a}}}(\pi\_{s}(\boldsymbol{\zeta})) \right] $$

$$+ \frac{1}{n(\boldsymbol{\beta}, \boldsymbol{\gamma})} D\_{\boldsymbol{\beta}\boldsymbol{\gamma}}^{2}(z(\boldsymbol{\zeta}) - s(\boldsymbol{\zeta})) \frac{\partial f}{\partial s\_{\boldsymbol{\beta}\boldsymbol{\gamma}}}(\pi\_{s}(\boldsymbol{\zeta})) \Big] d\boldsymbol{\zeta} \ge 0, \quad \forall z \in E,$$

which shows that *s* is a solution of (VIP). The proof of this lemma is now complete.
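The λ → 0 device used above is a Minty-type linearization. As an illustration only (the operator *T* and the convex set *E* below are our generic stand-ins, not the paper's notation), the finite-dimensional analogue of the argument reads:

```latex
% Minty-type linearization (finite-dimensional sketch; T is a monotone,
% hemicontinuous operator and E a convex set).
% Starting from the "Minty" inequality evaluated at
% z_\lambda = (1-\lambda)s + \lambda z:
0 \le \langle T(z_\lambda),\, z_\lambda - s \rangle
  = \lambda\, \langle T(z_\lambda),\, z - s \rangle,
% dividing by \lambda > 0 and letting \lambda \to 0, hemicontinuity of T gives
\langle T(s),\, z - s \rangle \ge 0, \quad \forall z \in E.
```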

**Definition 4.** *The functional* $F(s) = \int\_K f(\pi\_s(\zeta))d\zeta$ *is lower semicontinuous at* $s\_0 \in E$ *if*

$$\int\_K f(\pi\_{s\_0}(\zeta))d\zeta \le \liminf\_{s \to s\_0} \int\_K f(\pi\_s(\zeta))d\zeta.$$
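As a scalar illustration of the inequality in Definition 4 (a toy example of ours, not the paper's integral functional), the following sketch compares F(s₀) with a crude numerical lim inf over a small punctured neighbourhood:

```python
# Toy illustration of lower semicontinuity (lsc): F is lsc at s0 when
# F(s0) <= liminf_{s -> s0} F(s).  The "liminf" below is a crude numerical
# stand-in: a minimum over a small punctured neighbourhood of s0.
def liminf_at(F, s0, eps=1e-6, n=1000):
    """Approximate liminf of F over a punctured eps-neighbourhood of s0."""
    pts = [s0 + eps * (k / n) for k in range(-n, n + 1) if k != 0]
    return min(F(s) for s in pts)

step_lsc = lambda s: 0.0 if s <= 0 else 1.0  # lsc at 0: F(0)=0 <= liminf = 0
step_not = lambda s: 1.0 if s >= 0 else 0.0  # not lsc at 0: F(0)=1 > liminf = 0

assert step_lsc(0.0) <= liminf_at(step_lsc, 0.0)  # Definition 4 holds
assert step_not(0.0) > liminf_at(step_not, 0.0)   # Definition 4 fails
print("ok")
```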

#### **3. Well-Posedness Associated with (CVP)**

In this section, we analyze the well-posedness property for the constrained variational problem (CVP). To this aim, we provide the following mathematical tools.

Let us denote by S the *set of all solutions* for (CVP), that is,

$$\begin{split} \mathcal{S} = \left\{ s \in E \mid \int\_{K} f(\pi\_{s}(\zeta)) d\zeta \le \inf\_{z \in \Omega} \int\_{K} f(\pi\_{z}(\zeta)) d\zeta \quad \text{and} \\ \int\_{K} \left[ (z(\zeta) - s(\zeta)) \frac{\partial f}{\partial s}(\pi\_{s}(\zeta)) + D\_{\mathfrak{a}}(z(\zeta) - s(\zeta)) \frac{\partial f}{\partial s\_{\mathfrak{a}}}(\pi\_{s}(\zeta)) \right] \\ + \frac{1}{n(\boldsymbol{\beta}, \boldsymbol{\gamma})} D\_{\boldsymbol{\beta}\boldsymbol{\gamma}}^{2}(z(\boldsymbol{\zeta}) - s(\boldsymbol{\zeta})) \frac{\partial f}{\partial s\_{\boldsymbol{\beta}\boldsymbol{\gamma}}}(\pi\_{s}(\boldsymbol{\zeta})) \Big] d\boldsymbol{\zeta} \ge \boldsymbol{0}, \; \forall z \in E \right\}. \end{split}$$

Additionally, for *θ*, *ϑ* ≥ 0, we define the *set of approximating solutions* for (CVP) as

$$\begin{split} \mathcal{S}(\theta,\vartheta) = \Big\{ s \in E \mid & \int\_{K} f(\pi\_{s}(\zeta)) d\zeta \le \inf\_{z \in \Omega} \int\_{K} f(\pi\_{z}(\zeta)) d\zeta + \theta \quad \text{and} \\ & \int\_{K} \Big[ (z(\zeta) - s(\zeta)) \frac{\partial f}{\partial s}(\pi\_{s}(\zeta)) + D\_{\alpha}(z(\zeta) - s(\zeta)) \frac{\partial f}{\partial s\_{\alpha}}(\pi\_{s}(\zeta)) \\ & + \frac{1}{n(\beta,\gamma)} D\_{\beta\gamma}^{2}(z(\zeta) - s(\zeta)) \frac{\partial f}{\partial s\_{\beta\gamma}}(\pi\_{s}(\zeta)) \Big] d\zeta + \vartheta \ge 0, \ \forall z \in E \Big\}. \end{split}$$

**Remark 1.** *For* (*θ*, *ϑ*) = (0, 0)*, we have* S = S(*θ*, *ϑ*) *and, for* (*θ*, *ϑ*) > (0, 0)*, we obtain* S ⊆ S(*θ*, *ϑ*)*.*

**Definition 5.** *If there exists a sequence of positive real numbers ϑ<sup>n</sup>* → 0 *as n* → ∞*, such that the inequalities*

$$\limsup\_{n \to \infty} \int\_K f(\pi\_{s\_n}(\zeta)) d\zeta \le \inf\_{z \in \Omega} \int\_K f(\pi\_z(\zeta)) d\zeta$$

*and*

$$\int\_{K} \Big[ (z(\zeta) - s\_{n}(\zeta)) \frac{\partial f}{\partial s}(\pi\_{s\_{n}}(\zeta)) + D\_{\alpha}(z(\zeta) - s\_{n}(\zeta)) \frac{\partial f}{\partial s\_{\alpha}}(\pi\_{s\_{n}}(\zeta)) $$

$$+ \frac{1}{n(\beta,\gamma)} D\_{\beta\gamma}^{2}(z(\zeta) - s\_{n}(\zeta)) \frac{\partial f}{\partial s\_{\beta\gamma}}(\pi\_{s\_{n}}(\zeta)) \Big] d\zeta + \vartheta\_{n} \ge 0, \quad \forall z \in E$$

*are fulfilled, then the sequence* {*sn*} *is called an approximating sequence of (CVP).*

**Definition 6.** *The problem (CVP) is called well-posed if: (i) it admits a unique solution s*¯ ∈ S*; (ii) every approximating sequence of (CVP) converges to s*¯*.*

In the following, for a set *B*, diam *B* denotes its diameter, defined as

$$\operatorname{diam} B = \sup\_{x, y \in B} \|x - y\|.$$
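For finite point sets, the diameter above is directly computable; a minimal sketch (the helper `diam` is ours, with finite samples standing in for the approximate-solution sets S(*θ*, *ϑ*)):

```python
import itertools
import math

def diam(points):
    """diam B = sup_{x, y in B} ||x - y||; here a max over a finite sample."""
    return max(
        (math.dist(x, y) for x, y in itertools.combinations(points, 2)),
        default=0.0,
    )

# Finite stand-ins for approximate-solution sets shrinking around the exact
# solution (0, 0): as the tolerance r -> 0, the diameter 2r -> 0 as well.
for r in (1.0, 0.1, 0.01):
    sample = [(r, 0.0), (-r, 0.0), (0.0, r), (0.0, -r)]
    print(r, diam(sample))  # diameter is 2*r
```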

**Theorem 1.** *Assume that the functional* $F(s) = \int\_K f(\pi\_s(\zeta))d\zeta$ *is lower semicontinuous, hemicontinuous and monotone on E. Then, the problem (CVP) is well-posed if and only if*

$$\mathcal{S}(\theta, \vartheta) \neq \emptyset, \ \forall \theta, \vartheta > 0 \quad \text{and} \quad \operatorname{diam} \mathcal{S}(\theta, \vartheta) \to 0 \ \text{as} \ (\theta, \vartheta) \to (0, 0).$$

**Proof.** First, let us consider the case in which (CVP) is well-posed. Therefore, it admits a unique solution *s*¯ ∈ S. Since S ⊆ S(*θ*, *ϑ*), ∀*θ*, *ϑ* > 0, we obtain S(*θ*, *ϑ*) ≠ ∅, ∀*θ*, *ϑ* > 0. Contrary to the result, let us suppose that diam S(*θ*, *ϑ*) does not tend to 0 as (*θ*, *ϑ*) → (0, 0). Then, there exist *r* > 0, a positive integer *m*, sequences *θn*, *ϑ<sup>n</sup>* > 0 with *θn*, *ϑ<sup>n</sup>* → 0, and *sn*, *s*′*<sup>n</sup>* ∈ S(*θn*, *ϑn*) such that

$$\|s\_n - s\_n'\| > r, \quad \forall n \ge m. \tag{1}$$

Since *sn*, *s*′*<sup>n</sup>* ∈ S(*θn*, *ϑn*), we obtain

$$\int\_{K} f(\pi\_{s\_{n}}(\zeta))d\zeta \le \inf\_{z \in \Omega} \int\_{K} f(\pi\_{z}(\zeta))d\zeta + \theta\_{n},$$

$$\int\_{K} \Big[ (z(\zeta) - s\_{n}(\zeta)) \frac{\partial f}{\partial s}(\pi\_{s\_{n}}(\zeta)) + D\_{\alpha}(z(\zeta) - s\_{n}(\zeta)) \frac{\partial f}{\partial s\_{\alpha}}(\pi\_{s\_{n}}(\zeta)) $$

$$+ \frac{1}{n(\beta, \gamma)} D\_{\beta\gamma}^{2}(z(\zeta) - s\_{n}(\zeta)) \frac{\partial f}{\partial s\_{\beta\gamma}}(\pi\_{s\_{n}}(\zeta)) \Big] d\zeta + \vartheta\_{n} \ge 0, \quad \forall z \in E$$

and

$$\int\_{K} f(\pi\_{s\_{n}'}(\zeta))d\zeta \le \inf\_{z \in \Omega} \int\_{K} f(\pi\_{z}(\zeta))d\zeta + \theta\_{n},$$

$$\int\_{K} \Big[ (z(\zeta) - s\_{n}'(\zeta)) \frac{\partial f}{\partial s}(\pi\_{s\_{n}'}(\zeta)) + D\_{\alpha}(z(\zeta) - s\_{n}'(\zeta)) \frac{\partial f}{\partial s\_{\alpha}}(\pi\_{s\_{n}'}(\zeta)) $$

$$+ \frac{1}{n(\beta, \gamma)} D\_{\beta\gamma}^{2}(z(\zeta) - s\_{n}'(\zeta)) \frac{\partial f}{\partial s\_{\beta\gamma}}(\pi\_{s\_{n}'}(\zeta)) \Big] d\zeta + \vartheta\_{n} \ge 0, \quad \forall z \in E.$$

It follows that {*sn*} and {*s*′*<sup>n</sup>*} are approximating sequences of (CVP); since (CVP) is well-posed by hypothesis, both tend to *s*¯. By direct computation, it follows that

$$\|s\_n - s\_n'\| = \|s\_n - \bar{s} + \bar{s} - s\_n'\| \le \|s\_n - \bar{s}\| + \|\bar{s} - s\_n'\| \le \vartheta,$$

which contradicts (1) for *ϑ* = *r*. In consequence, diam S(*θ*, *ϑ*) → 0 as (*θ*, *ϑ*) → (0, 0). Conversely, let us consider that {*sn*} is an approximating sequence of (CVP). Then, there exists a sequence of positive real numbers *ϑ<sup>n</sup>* → 0 as *n* → ∞ such that the inequalities

$$\limsup\_{n \to \infty} \int\_K f(\pi\_{s\_n}(\zeta)) d\zeta \le \inf\_{z \in \Omega} \int\_K f(\pi\_z(\zeta)) d\zeta \tag{2}$$

$$\int\_{K} \Big[ (z(\zeta) - s\_{n}(\zeta)) \frac{\partial f}{\partial s}(\pi\_{s\_{n}}(\zeta)) + D\_{\alpha}(z(\zeta) - s\_{n}(\zeta)) \frac{\partial f}{\partial s\_{\alpha}}(\pi\_{s\_{n}}(\zeta)) $$

$$+ \frac{1}{n(\beta, \gamma)} D\_{\beta\gamma}^{2}(z(\zeta) - s\_{n}(\zeta)) \frac{\partial f}{\partial s\_{\beta\gamma}}(\pi\_{s\_{n}}(\zeta)) \Big] d\zeta + \vartheta\_{n} \ge 0, \quad \forall z \in E \tag{3}$$

hold; in addition, *s<sup>n</sup>* ∈ S(*θn*, *ϑn*) for a sequence of positive real numbers *θ<sup>n</sup>* → 0 as *n* → ∞. Since diam S(*θn*, *ϑn*) → 0 as (*θn*, *ϑn*) → (0, 0), {*sn*} is a Cauchy sequence, which converges to some *s*¯ ∈ *E* because *E* is a closed set.

By hypothesis, the multiple integral functional $\int\_K f(\pi\_s(\zeta))d\zeta$ is monotone on *E*. Therefore, by Definition 1, for *s*¯, *z* ∈ *E*, we have

$$\int\_{K} \Big[ (\bar{s}(\zeta) - z(\zeta)) \left( \frac{\partial f}{\partial s}(\pi\_{\bar{s}}(\zeta)) - \frac{\partial f}{\partial s}(\pi\_{z}(\zeta)) \right) + D\_{\alpha}(\bar{s}(\zeta) - z(\zeta)) \left( \frac{\partial f}{\partial s\_{\alpha}}(\pi\_{\bar{s}}(\zeta)) - \frac{\partial f}{\partial s\_{\alpha}}(\pi\_{z}(\zeta)) \right) $$

$$+ \frac{1}{n(\beta, \gamma)} D\_{\beta\gamma}^{2}(\bar{s}(\zeta) - z(\zeta)) \left( \frac{\partial f}{\partial s\_{\beta\gamma}}(\pi\_{\bar{s}}(\zeta)) - \frac{\partial f}{\partial s\_{\beta\gamma}}(\pi\_{z}(\zeta)) \right) \Big] d\zeta \ge 0,$$

or, equivalently,

$$\int\_{K} \Big[ (\bar{s}(\zeta) - z(\zeta)) \frac{\partial f}{\partial s}(\pi\_{\bar{s}}(\zeta)) + D\_{\alpha}(\bar{s}(\zeta) - z(\zeta)) \frac{\partial f}{\partial s\_{\alpha}}(\pi\_{\bar{s}}(\zeta)) + \frac{1}{n(\beta, \gamma)} D\_{\beta\gamma}^{2}(\bar{s}(\zeta) - z(\zeta)) \frac{\partial f}{\partial s\_{\beta\gamma}}(\pi\_{\bar{s}}(\zeta)) \Big] d\zeta$$

$$\ge \int\_{K} \Big[ (\bar{s}(\zeta) - z(\zeta)) \frac{\partial f}{\partial s}(\pi\_{z}(\zeta)) + D\_{\alpha}(\bar{s}(\zeta) - z(\zeta)) \frac{\partial f}{\partial s\_{\alpha}}(\pi\_{z}(\zeta)) + \frac{1}{n(\beta, \gamma)} D\_{\beta\gamma}^{2}(\bar{s}(\zeta) - z(\zeta)) \frac{\partial f}{\partial s\_{\beta\gamma}}(\pi\_{z}(\zeta)) \Big] d\zeta. \tag{4}$$

Taking the limit as *n* → ∞ in inequality (3), we have

$$\int\_{K} \Big[ (\bar{s}(\zeta) - z(\zeta)) \frac{\partial f}{\partial s}(\pi\_{\bar{s}}(\zeta)) + D\_{\alpha}(\bar{s}(\zeta) - z(\zeta)) \frac{\partial f}{\partial s\_{\alpha}}(\pi\_{\bar{s}}(\zeta)) $$

$$+ \frac{1}{n(\beta, \gamma)} D\_{\beta\gamma}^{2}(\bar{s}(\zeta) - z(\zeta)) \frac{\partial f}{\partial s\_{\beta\gamma}}(\pi\_{\bar{s}}(\zeta)) \Big] d\zeta \le 0. \tag{5}$$

On combining (4) and (5), we obtain

$$\int\_{K} \Big[ (z(\zeta) - \bar{s}(\zeta)) \frac{\partial f}{\partial s}(\pi\_{z}(\zeta)) + D\_{\alpha}(z(\zeta) - \bar{s}(\zeta)) \frac{\partial f}{\partial s\_{\alpha}}(\pi\_{z}(\zeta)) $$

$$+ \frac{1}{n(\beta, \gamma)} D\_{\beta\gamma}^{2}(z(\zeta) - \bar{s}(\zeta)) \frac{\partial f}{\partial s\_{\beta\gamma}}(\pi\_{z}(\zeta)) \Big] d\zeta \ge 0.$$

Further, taking into account Lemma 1, it follows that

$$\int\_{K} \Big[ (z(\zeta) - \bar{s}(\zeta)) \frac{\partial f}{\partial s}(\pi\_{\bar{s}}(\zeta)) + D\_{\alpha}(z(\zeta) - \bar{s}(\zeta)) \frac{\partial f}{\partial s\_{\alpha}}(\pi\_{\bar{s}}(\zeta)) $$

$$+ \frac{1}{n(\beta, \gamma)} D\_{\beta\gamma}^{2}(z(\zeta) - \bar{s}(\zeta)) \frac{\partial f}{\partial s\_{\beta\gamma}}(\pi\_{\bar{s}}(\zeta)) \Big] d\zeta \ge 0, \tag{6}$$

which implies that *s*¯ ∈ Ω.

Since the functional $\int\_K f(\pi\_s(\zeta))d\zeta$ is lower semicontinuous, it results that

$$\int\_K f(\pi\_{\bar{s}}(\zeta))d\zeta \le \liminf\_{n \to \infty} \int\_K f(\pi\_{s\_n}(\zeta))d\zeta \le \limsup\_{n \to \infty} \int\_K f(\pi\_{s\_n}(\zeta))d\zeta.$$

By using (2), the above inequality reduces to

$$\int\_{K} f(\pi\_{\bar{s}}(\zeta))d\zeta \le \inf\_{z \in \Omega} \int\_{K} f(\pi\_{z}(\zeta))d\zeta. \tag{7}$$

Thus, from (6) and (7), we conclude that *s*¯ solves (CVP).

Now, let us prove that *s*¯ is the unique solution of (CVP). Suppose, to the contrary, that *s*1, *s*2 are two distinct solutions of (CVP). Then,

$$0 < \|s\_1 - s\_2\| \le \operatorname{diam} \mathcal{S}(\theta, \vartheta) \to 0 \ \text{as} \ (\theta, \vartheta) \to (0, 0),$$

which is a contradiction, and the proof is complete.

**Theorem 2.** *Assume that the functional* $F(s) = \int\_K f(\pi\_s(\zeta))d\zeta$ *is lower semicontinuous, hemicontinuous and monotone on E. Then, the problem (CVP) is well-posed if and only if it has a unique solution.*

**Proof.** If (CVP) is well-posed then, by Definition 6, it possesses a unique solution *s*0. Conversely, let us consider that (CVP) has a unique solution *s*0, that is,

$$\int\_{K} f(\pi\_{s\_0}(\zeta))d\zeta \le \inf\_{z \in \Omega} \int\_{K} f(\pi\_{z}(\zeta))d\zeta,$$

$$\int\_{K} \Big[ (z(\zeta) - s\_0(\zeta)) \frac{\partial f}{\partial s}(\pi\_{s\_0}(\zeta)) + D\_{\alpha}(z(\zeta) - s\_0(\zeta)) \frac{\partial f}{\partial s\_{\alpha}}(\pi\_{s\_0}(\zeta)) $$

$$+ \frac{1}{n(\beta, \gamma)} D\_{\beta\gamma}^{2}(z(\zeta) - s\_0(\zeta)) \frac{\partial f}{\partial s\_{\beta\gamma}}(\pi\_{s\_0}(\zeta)) \Big] d\zeta \ge 0, \quad \forall z \in E, \tag{8}$$

but it is not well-posed. Therefore, by Definition 6, there exists an approximating sequence {*sn*} of (CVP), which does not converge to *s*0, such that the following inequalities

$$\limsup\_{n \to \infty} \int\_K f(\pi\_{s\_n}(\zeta)) d\zeta \le \inf\_{z \in \Omega} \int\_K f(\pi\_z(\zeta)) d\zeta$$

and

$$\int\_{K} \Big[ (z(\zeta) - s\_{n}(\zeta)) \frac{\partial f}{\partial s}(\pi\_{s\_{n}}(\zeta)) + D\_{\alpha}(z(\zeta) - s\_{n}(\zeta)) \frac{\partial f}{\partial s\_{\alpha}}(\pi\_{s\_{n}}(\zeta)) $$

$$+ \frac{1}{n(\beta, \gamma)} D\_{\beta\gamma}^{2}(z(\zeta) - s\_{n}(\zeta)) \frac{\partial f}{\partial s\_{\beta\gamma}}(\pi\_{s\_{n}}(\zeta)) \Big] d\zeta + \vartheta\_{n} \ge 0, \quad \forall z \in E \tag{9}$$

are fulfilled. Further, we proceed by contradiction to prove the boundedness of {*sn*}. Contrary to the result, we suppose that {*sn*} is not bounded; consequently, ‖*sn*‖ → +∞ as *n* → +∞. We define *δ<sup>n</sup>* = 1/‖*s<sup>n</sup>* − *s*0‖ and 𝔰*<sup>n</sup>* = *s*<sup>0</sup> + *δn*[*s<sup>n</sup>* − *s*0]. We observe that {𝔰*n*} is bounded in *E*. Therefore, passing to a subsequence if necessary, we may consider that

$$\mathfrak{s}\_n \to \mathfrak{s} \quad \text{weakly in } E.$$

It is not difficult to see that 𝔰 ≠ *s*0, due to ‖*δn*[*s<sup>n</sup>* − *s*0]‖ = 1, for all *n* ∈ N. Since *s*<sup>0</sup> is a solution of (CVP), the inequalities (8) are verified. By using Lemma 1, it follows that

$$\int\_{K} f(\pi\_{s\_0}(\zeta))d\zeta \le \inf\_{z \in \Omega} \int\_{K} f(\pi\_z(\zeta))d\zeta.$$

$$\int\_{K} \left[ (z(\boldsymbol{\zeta}) - s\_0(\boldsymbol{\zeta})) \frac{\partial f}{\partial s}(\pi\_z(\boldsymbol{\zeta})) + D\_a(z(\boldsymbol{\zeta}) - s\_0(\boldsymbol{\zeta})) \frac{\partial f}{\partial s\_a}(\pi\_z(\boldsymbol{\zeta})) \right.$$

$$+ \frac{1}{n(\boldsymbol{\beta}, \boldsymbol{\gamma})} D\_{\boldsymbol{\beta}\boldsymbol{\gamma}}^2(z(\boldsymbol{\zeta}) - s\_0(\boldsymbol{\zeta})) \frac{\partial f}{\partial s\_{\boldsymbol{\beta}\boldsymbol{\gamma}}}(\pi\_z(\boldsymbol{\zeta})) \Big] d\boldsymbol{\zeta} \ge 0, \quad \forall z \in E. \tag{10}$$

By considering the monotonicity property of the functional $\int\_K f(\pi\_s(\zeta))d\zeta$, for *sn*, *z* ∈ *E*, we obtain

$$\begin{split} \int\_{\mathcal{K}} \left[ (s\_{\boldsymbol{n}}(\boldsymbol{\zeta}) - z(\boldsymbol{\zeta})) \left( \frac{\partial f}{\partial \boldsymbol{s}} (\pi\_{\boldsymbol{s}\_{\boldsymbol{n}}}(\boldsymbol{\zeta})) - \frac{\partial f}{\partial \boldsymbol{s}} (\pi\_{\boldsymbol{z}}(\boldsymbol{\zeta})) \right) \right. \\ \left. + D\_{\boldsymbol{a}} (s\_{\boldsymbol{n}}(\boldsymbol{\zeta}) - z(\boldsymbol{\zeta})) \left( \frac{\partial f}{\partial \boldsymbol{s}\_{\boldsymbol{a}}} (\pi\_{\boldsymbol{s}\_{\boldsymbol{n}}}(\boldsymbol{\zeta})) - \frac{\partial f}{\partial \boldsymbol{s}\_{\boldsymbol{a}}} (\pi\_{\boldsymbol{z}}(\boldsymbol{\zeta})) \right) \right. \\ \left. + \frac{1}{n(\boldsymbol{\beta}, \boldsymbol{\gamma})} D\_{\boldsymbol{\beta}\boldsymbol{\gamma}}^{2} (s\_{\boldsymbol{n}}(\boldsymbol{\zeta}) - z(\boldsymbol{\zeta})) \left( \frac{\partial f}{\partial \boldsymbol{s}\_{\boldsymbol{\beta}\boldsymbol{\gamma}}} (\pi\_{\boldsymbol{s}\_{\boldsymbol{n}}}(\boldsymbol{\zeta})) - \frac{\partial f}{\partial \boldsymbol{s}\_{\boldsymbol{\beta}\boldsymbol{\gamma}}} (\pi\_{\boldsymbol{z}}(\boldsymbol{\zeta})) \right) \right] d\boldsymbol{\zeta} \geq 0, \end{split}$$

or, equivalently,

$$\int\_{K} \Big[ (z(\zeta) - s\_{n}(\zeta)) \frac{\partial f}{\partial s}(\pi\_{s\_{n}}(\zeta)) + D\_{\alpha}(z(\zeta) - s\_{n}(\zeta)) \frac{\partial f}{\partial s\_{\alpha}}(\pi\_{s\_{n}}(\zeta)) $$

$$+ \frac{1}{n(\boldsymbol{\beta}, \boldsymbol{\gamma})} D\_{\boldsymbol{\beta}\gamma}^{2} \left( z(\boldsymbol{\zeta}) - s\_{n}(\boldsymbol{\zeta}) \right) \frac{\partial f}{\partial s\_{\boldsymbol{\beta}\gamma}} \left( \pi\_{\boldsymbol{s}\_{n}}(\boldsymbol{\zeta}) \right) \Big] d\boldsymbol{\zeta} $$

$$\leq \int\_{K} \left[ \left( z(\boldsymbol{\zeta}) - s\_{n}(\boldsymbol{\zeta}) \right) \frac{\partial f}{\partial \boldsymbol{s}} \left( \pi\_{\boldsymbol{z}}(\boldsymbol{\zeta}) \right) + D\_{\boldsymbol{a}} \left( z(\boldsymbol{\zeta}) - s\_{n}(\boldsymbol{\zeta}) \right) \frac{\partial f}{\partial \boldsymbol{s}\_{\boldsymbol{a}}} \left( \pi\_{\boldsymbol{z}}(\boldsymbol{\zeta}) \right) \right] $$

$$+ \frac{1}{n(\boldsymbol{\beta}, \boldsymbol{\gamma})} D\_{\boldsymbol{\beta}\gamma}^{2} \left( z(\boldsymbol{\zeta}) - s\_{n}(\boldsymbol{\zeta}) \right) \frac{\partial f}{\partial s\_{\boldsymbol{\beta}\gamma}} \left( \pi\_{\boldsymbol{z}}(\boldsymbol{\zeta}) \right) \Big] d\boldsymbol{\zeta}. \tag{11}$$

Combining (9) and (11), we have

$$\int\_{K} \Big[ (z(\zeta) - s\_{n}(\zeta)) \frac{\partial f}{\partial s}(\pi\_{z}(\zeta)) + D\_{\alpha}(z(\zeta) - s\_{n}(\zeta)) \frac{\partial f}{\partial s\_{\alpha}}(\pi\_{z}(\zeta)) $$

$$+ \frac{1}{n(\beta, \gamma)} D\_{\beta\gamma}^{2}(z(\zeta) - s\_{n}(\zeta)) \frac{\partial f}{\partial s\_{\beta\gamma}}(\pi\_{z}(\zeta)) \Big] d\zeta \ge -\vartheta\_{n}, \quad \forall z \in E.$$

Next, we take *n*<sup>0</sup> ∈ N large enough such that *δ<sup>n</sup>* < 1 for all *n* ≥ *n*<sup>0</sup> (because *δ<sup>n</sup>* → 0 as *n* → ∞). Multiplying the above inequality by *δ<sup>n</sup>* > 0 and (10) by 1 − *δ<sup>n</sup>* > 0, respectively, and adding the results, we obtain

$$\int\_{K} \Big[ (z(\zeta) - \mathfrak{s}\_{n}(\zeta)) \frac{\partial f}{\partial s}(\pi\_{z}(\zeta)) + D\_{\alpha}(z(\zeta) - \mathfrak{s}\_{n}(\zeta)) \frac{\partial f}{\partial s\_{\alpha}}(\pi\_{z}(\zeta)) $$

$$+ \frac{1}{n(\beta, \gamma)} D\_{\beta\gamma}^{2}(z(\zeta) - \mathfrak{s}\_{n}(\zeta)) \frac{\partial f}{\partial s\_{\beta\gamma}}(\pi\_{z}(\zeta)) \Big] d\zeta \ge -\vartheta\_{n}, \quad \forall z \in E, \ \forall n \ge n\_{0}.$$

By using 𝔰*<sup>n</sup>* → 𝔰 ≠ *s*<sup>0</sup> and 𝔰*<sup>n</sup>* = *s*<sup>0</sup> + *δn*[*s<sup>n</sup>* − *s*0], we obtain

$$\int\_{K} \Big[ (z(\zeta) - \mathfrak{s}(\zeta)) \frac{\partial f}{\partial s}(\pi\_{z}(\zeta)) + D\_{\alpha}(z(\zeta) - \mathfrak{s}(\zeta)) \frac{\partial f}{\partial s\_{\alpha}}(\pi\_{z}(\zeta)) + \frac{1}{n(\beta, \gamma)} D\_{\beta\gamma}^{2}(z(\zeta) - \mathfrak{s}(\zeta)) \frac{\partial f}{\partial s\_{\beta\gamma}}(\pi\_{z}(\zeta)) \Big] d\zeta$$

$$= \lim\_{n \to \infty} \int\_{K} \Big[ (z(\zeta) - \mathfrak{s}\_{n}(\zeta)) \frac{\partial f}{\partial s}(\pi\_{z}(\zeta)) + D\_{\alpha}(z(\zeta) - \mathfrak{s}\_{n}(\zeta)) \frac{\partial f}{\partial s\_{\alpha}}(\pi\_{z}(\zeta)) + \frac{1}{n(\beta, \gamma)} D\_{\beta\gamma}^{2}(z(\zeta) - \mathfrak{s}\_{n}(\zeta)) \frac{\partial f}{\partial s\_{\beta\gamma}}(\pi\_{z}(\zeta)) \Big] d\zeta$$

$$\ge -\lim\_{n \to \infty} \vartheta\_{n} = 0, \quad \forall z \in E.$$

Taking into account Lemma 1 and by using the lower semicontinuity property, we obtain

$$\int\_{K} f(\pi\_{\mathfrak{s}}(\zeta))d\zeta \le \inf\_{z \in \Omega} \int\_{K} f(\pi\_{z}(\zeta))d\zeta,$$

$$\int\_{K} \left[ (z(\zeta) - \mathfrak{s}(\zeta)) \frac{\partial f}{\partial \mathfrak{s}}(\pi\_{\mathfrak{s}}(\zeta)) + D\_{\mathfrak{a}}(z(\zeta) - \mathfrak{s}(\zeta)) \frac{\partial f}{\partial \mathfrak{s}\_{\mathfrak{a}}}(\pi\_{\mathfrak{s}}(\zeta)) \right.$$

$$+ \frac{1}{n(\beta, \gamma)} D\_{\beta \gamma}^{2}(z(\zeta) - \mathfrak{s}(\zeta)) \frac{\partial f}{\partial \mathfrak{s}\_{\beta \gamma}}(\pi\_{\mathfrak{s}}(\zeta)) \Big] d\zeta \ge 0, \quad \forall z \in E. \tag{12}$$

This implies that 𝔰 solves (CVP), which contradicts the uniqueness of *s*0. Therefore, {*sn*} is a bounded sequence, which has a convergent subsequence {*sn<sup>k</sup>* } converging to some *s*¯ ∈ *E* as *k* → ∞. Now, for *sn<sup>k</sup>* , *z* ∈ *E*, we obtain (see (11))

$$\int\_{K} \left[ \left( z(\boldsymbol{\zeta}) - s\_{\boldsymbol{n}\_{k}}(\boldsymbol{\zeta}) \right) \frac{\partial f}{\partial s} \left( \pi\_{s\_{\boldsymbol{n}\_{k}}}(\boldsymbol{\zeta}) \right) + D\_{\boldsymbol{a}} \left( z(\boldsymbol{\zeta}) - s\_{\boldsymbol{n}\_{k}}(\boldsymbol{\zeta}) \right) \frac{\partial f}{\partial s\_{\boldsymbol{a}}} \left( \pi\_{s\_{\boldsymbol{n}\_{k}}}(\boldsymbol{\zeta}) \right) \right] $$

$$+ \frac{1}{\boldsymbol{n}(\boldsymbol{\beta}, \boldsymbol{\gamma})} D\_{\boldsymbol{\beta}\boldsymbol{\gamma}}^{2} \left( z(\boldsymbol{\zeta}) - s\_{\boldsymbol{n}\_{k}}(\boldsymbol{\zeta}) \right) \frac{\partial f}{\partial s\_{\boldsymbol{\beta}\boldsymbol{\gamma}}} \left( \pi\_{s\_{\boldsymbol{n}\_{k}}}(\boldsymbol{\zeta}) \right) \Big] d\boldsymbol{\zeta} $$

$$\leq \int\_{K} \left[ \left( z(\boldsymbol{\zeta}) - s\_{\boldsymbol{n}\_{k}}(\boldsymbol{\zeta}) \right) \frac{\partial f}{\partial s} \left( \pi\_{z}(\boldsymbol{\zeta}) \right) + D\_{\boldsymbol{a}} \left( z(\boldsymbol{\zeta}) - s\_{\boldsymbol{n}\_{k}}(\boldsymbol{\zeta}) \right) \frac{\partial f}{\partial s\_{\boldsymbol{a}}} \left( \pi\_{z}(\boldsymbol{\zeta}) \right) \right] $$

$$+ \frac{1}{\boldsymbol{n}(\boldsymbol{\beta}, \boldsymbol{\gamma})} D\_{\boldsymbol{\beta}\boldsymbol{\gamma}}^{2} \left( z(\boldsymbol{\zeta}) - s\_{\boldsymbol{n}\_{k}}(\boldsymbol{\zeta}) \right) \frac{\partial f}{\partial s\_{\boldsymbol{\beta}\boldsymbol{\gamma}}} \left( \pi\_{z}(\boldsymbol{\zeta}) \right) \Big] d\boldsymbol{\zeta}. \tag{13}$$

Additionally, by (9), we can write

$$\lim\_{k \to \infty} \int\_{K} \Big[ (z(\zeta) - s\_{n\_{k}}(\zeta)) \frac{\partial f}{\partial s}(\pi\_{s\_{n\_{k}}}(\zeta)) + D\_{\alpha}(z(\zeta) - s\_{n\_{k}}(\zeta)) \frac{\partial f}{\partial s\_{\alpha}}(\pi\_{s\_{n\_{k}}}(\zeta)) $$

$$+ \frac{1}{n(\beta, \gamma)} D\_{\beta\gamma}^{2}(z(\zeta) - s\_{n\_{k}}(\zeta)) \frac{\partial f}{\partial s\_{\beta\gamma}}(\pi\_{s\_{n\_{k}}}(\zeta)) \Big] d\zeta \ge 0. \tag{14}$$

By (13) and (14), we have

$$\begin{split} \lim\_{k \to \infty} \int\_{K} \Big[ (z(\zeta) - s\_{n\_{k}}(\zeta)) \frac{\partial f}{\partial s}(\pi\_{z}(\zeta)) &+ D\_{\alpha}(z(\zeta) - s\_{n\_{k}}(\zeta)) \frac{\partial f}{\partial s\_{\alpha}}(\pi\_{z}(\zeta)) \\ &+ \frac{1}{n(\beta, \gamma)} D\_{\beta\gamma}^{2}(z(\zeta) - s\_{n\_{k}}(\zeta)) \frac{\partial f}{\partial s\_{\beta\gamma}}(\pi\_{z}(\zeta)) \Big] d\zeta \ge 0 \\ \Rightarrow \int\_{K} \Big[ (z(\zeta) - \bar{s}(\zeta)) \frac{\partial f}{\partial s}(\pi\_{z}(\zeta)) &+ D\_{\alpha}(z(\zeta) - \bar{s}(\zeta)) \frac{\partial f}{\partial s\_{\alpha}}(\pi\_{z}(\zeta)) \\ &+ \frac{1}{n(\beta, \gamma)} D\_{\beta\gamma}^{2}(z(\zeta) - \bar{s}(\zeta)) \frac{\partial f}{\partial s\_{\beta\gamma}}(\pi\_{z}(\zeta)) \Big] d\zeta \ge 0. \end{split}$$

Using Lemma 1 and the lower semicontinuity property of the considered functional, we obtain

$$\int\_{K} f(\pi\_{\bar{s}}(\zeta))d\zeta \le \inf\_{z \in \Omega} \int\_{K} f(\pi\_{z}(\zeta))d\zeta,$$

$$\int\_{K} \Big[ (z(\zeta) - \bar{s}(\zeta)) \frac{\partial f}{\partial s}(\pi\_{\bar{s}}(\zeta)) + D\_{\alpha}(z(\zeta) - \bar{s}(\zeta)) \frac{\partial f}{\partial s\_{\alpha}}(\pi\_{\bar{s}}(\zeta)) $$

$$+ \frac{1}{n(\beta, \gamma)} D\_{\beta\gamma}^{2}(z(\zeta) - \bar{s}(\zeta)) \frac{\partial f}{\partial s\_{\beta\gamma}}(\pi\_{\bar{s}}(\zeta)) \Big] d\zeta \ge 0,$$

which shows that *s*¯ is a solution of (CVP). Hence, by the uniqueness of the solution, *s*¯ = *s*0, so *sn<sup>k</sup>* → *s*<sup>0</sup> and, consequently, *s<sup>n</sup>* → *s*0. The proof is complete.

**Example 2.** *We consider n* = 1 *and K* = [0, 2]<sup>2</sup> = [0, 2] × [0, 2]*. Let us minimize the mass of K having the density (depending on the current point)* $f(\zeta, s(\zeta), s\_{\alpha}(\zeta), s\_{\beta\gamma}(\zeta)) = e^{s(\zeta)} - s(\zeta)$*, such that the following behavior (positivity property)*

$$\iint\_K (z(\zeta) - s(\zeta))(e^{s(\zeta)} - 1) d\zeta^1 d\zeta^2 \ge 0, \quad \forall z \in E = C^1(K, [-15, 15]), \ s|\_{\partial K} = 0,$$

*is satisfied.*

*To solve the previous practical problem, we consider the following constrained optimization problem:*

$$\begin{array}{rcl} (\text{CVP1}) & \quad \text{Minimize} & \iint\_{K} [e^{s(\zeta)} - s(\zeta)] d\zeta^{1} d\zeta^{2} \\ & \quad \dots & \quad \dots \end{array}$$

*where* Ω *is the solution set of the following inequality problem*

$$\iint\_K (z(\zeta) - s(\zeta))(e^{s(\zeta)} - 1) d\zeta^1 d\zeta^2 \ge 0, \quad \forall z \in E = C^1(K, [-15, 15]), \ s|\_{\partial K} = 0.$$

*Clearly,* S = {0} *and the functional* $\int\_K (e^{s(\zeta)} - s(\zeta))d\zeta$ *is hemicontinuous, monotone and lower semicontinuous on E. Thus, all the hypotheses of Theorem 2 hold and, in consequence, the problem (CVP1) is well-posed. Additionally,* S(*θ*, *ϑ*) = {0} *and, therefore,* S(*θ*, *ϑ*) ≠ ∅ *and diam* S(*θ*, *ϑ*) → 0 *as* (*θ*, *ϑ*) → (0, 0)*. In conclusion, by Theorem 1, the variational problem (CVP1) is well-posed.*
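A quick numerical sanity check of Example 2 (our sketch, not part of the paper): the density f(s) = e^s − s is minimized pointwise at s = 0, and the scalar gradient map s ↦ e^s − 1 appearing in the inequality constraint is monotone, which is the kind of hypothesis Theorem 2 requires.

```python
import math
import random

# Pointwise density from Example 2: f(s) = exp(s) - s, with f'(s) = exp(s) - 1.
# Since f'(0) = 0 and f''(s) = exp(s) > 0, s = 0 minimizes f pointwise, so the
# constant function s(zeta) = 0 minimizes the mass over K = [0, 2]^2.
f = lambda s: math.exp(s) - s

assert all(f(0.0) <= f(s) for s in (-2.0, -0.5, 0.3, 1.7))

# Scalar analogue of the monotonicity hypothesis for s -> exp(s) - 1:
# (a - b) * ((exp(a) - 1) - (exp(b) - 1)) >= 0 for all a, b.
random.seed(0)
for _ in range(1000):
    a, b = random.uniform(-15.0, 15.0), random.uniform(-15.0, 15.0)
    assert (a - b) * (math.exp(a) - math.exp(b)) >= 0.0

# Minimum mass: constant density f(0) = 1 over a square of area 4.
print("minimum mass =", f(0.0) * 4.0)  # prints 4.0
```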

#### **4. Conclusions**

In this paper, we have studied the well-posedness property of new constrained variational problems governed by second-order partial derivatives. More precisely, by using the concepts of lower semicontinuity, monotonicity, hemicontinuity and pseudomonotonicity of the considered multiple integral functional, we have proved that the well-posedness of the problem under study can be characterized in terms of the existence and uniqueness of its solution.

**Funding:** This research received no external funding.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Not applicable.

**Conflicts of Interest:** The author declares no conflict of interest.

#### **References**


## *Article* **A Remark on the Change of Variable Theorem for the Riemann Integral**

**Alexander Kuleshov 1,2**


**Abstract:** In 1961, Kestelman first proved the change of variable theorem for the Riemann integral in its modern form. In 1970, Preiss and Uher supplemented his result with the inverse statement. Later, in a number of papers (Sarkhel, Výborný, Puoso, Tandra, and Torchinsky), alternative proofs of these theorems were given within the same formulations. In this note, we show that one of the restrictions (namely, the boundedness of the function *f* on its entire domain) can be omitted while the change of variable formula still holds.

**Keywords:** real analysis; Riemann integral; change of variable

#### **1. Introduction**

Throughout this paper, we denote by [*a*, *b*] the closed interval connecting the points *a*, *b* ∈ R, and by *R*[*a*, *b*] the class of all Riemann-integrable real functions on [*a*, *b*]. In 1961, Kestelman (see [1]) first proved the following fundamental theorem for the Riemann integral.

**Theorem 1.** *Suppose that g* ∈ *R*[*α*, *β*]*, c* ∈ R*,*

$$G(t) := \int_{\alpha}^{t} g(y)\,dy + c \tag{1}$$

*and* $f \in R\big(G([\alpha, \beta])\big)$*. Then,* $(f \circ G)g \in R[\alpha, \beta]$ *and the following change of variable formula holds:*

$$\int_{G(\alpha)}^{G(\beta)} f(x)\,dx = \int_{\alpha}^{\beta} f\big(G(t)\big)\,g(t)\,dt. \tag{2}$$

In 1970, Preiss and Uher (see [2]) supplemented this result with the following statement.

**Theorem 2.** *Suppose that g* ∈ *R*[*α*, *β*]*, G is defined by* (1)*, f is bounded on* [*c*, *d*] := *G*([*α*, *β*]) *and* (*f* ◦ *G*)*g* ∈ *R*[*α*, *β*]*. Then f* ∈ *R*[*c*, *d*] ⊂ *R*[*G*(*α*), *G*(*β*)] *and the change of variable Formula* (2) *holds.*

Later, in a number of papers (see [3–6]), alternative proofs of Theorems 1 and 2 were given within the same formulations. The main goal of this note is to abandon the requirement of boundedness of the function *f* on [*c*, *d*] := *G*([*α*, *β*]) in Theorem 2. At the same time, the condition of boundedness of *f* on [*G*(*α*), *G*(*β*)] is essential for the existence of the integral on the left-hand side of (2) and does not follow from the other conditions of the theorem, as shown by the example at the end of [3]. Let us now proceed to formulating the main result.

**Citation:** Kuleshov, A. A Remark on the Change of Variable Theorem for the Riemann Integral. *Mathematics* **2021**, *9*, 1899. https://doi.org/ 10.3390/math9161899

Academic Editor: Denis N. Sidorov

Received: 25 July 2021 Accepted: 8 August 2021 Published: 10 August 2021

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

#### **2. The Main Result**

**Theorem 3.** *Suppose that g* ∈ *R*[*α*, *β*]*, G is defined by* (1)*, f is bounded on I* := [*G*(*α*), *G*(*β*)] *and* (*f* ◦ *G*)*g* ∈ *R*[*α*, *β*]*. Then, f* ∈ *R*(*I*) *and the change of variable Formula* (2) *holds.*

For the proof of Theorem 3, we need the following lemma.

**Lemma 1.** *If g*, *gh* ∈ *R*[*α*, *β*]*, then g*|*h*| ∈ *R*[*α*, *β*]*.*

**Proof.** By Lebesgue's criterion, the functions *g* and *gh* are both continuous a.e. on [*α*, *β*]. Let $x_0 \in [\alpha, \beta]$ be a point of their mutual continuity. If *h* is continuous at $x_0$, then *g*|*h*| is continuous at $x_0$. If *h* is discontinuous at $x_0$, then the equality $g(x_0) = 0$ must hold, because otherwise *h* would be continuous at $x_0$ as a quotient of the continuous functions *gh* and *g*. Then, we have the following:

$$g(x)h(x) \to g(x_0)h(x_0) = 0,$$

and therefore,

$$g(x)|h(x)| = g(x)h(x)\,\operatorname{sgn}\big(h(x)\big) \to 0 = g(x_0)|h(x_0)|$$

as *x* → *x*0, which means the continuity of *g*|*h*| at *x*0, and thus, its continuity a.e. on [*α*, *β*]. Thus, *g*|*h*| ∈ *R*[*α*, *β*] by Lebesgue's criterion.

**Proof of Theorem 3.** By the hypothesis of the theorem, there is $M_1 > 0$ such that $|f(x)| \le M_1$ for all $x \in I$. For all $n \in \mathbb{N}$, let $c_n := M_1 + n$ and define for all $x \in [c, d] := G([\alpha, \beta])$ the following function:

$$f_n(x) := \begin{cases} f(x), & \text{if } |f(x)| \le c_n;\\ c_n, & \text{if } f(x) > c_n;\\ -c_n, & \text{if } f(x) < -c_n. \end{cases}$$

From the given definition, for all $n \in \mathbb{N}$ we obtain the boundedness of $f_n$ as well as the following equality:

$$\left.f\_n\right|\_I = f|\_I. \tag{3}$$

Additionally, for every $n \in \mathbb{N}$ and all $x \in [c, d]$, we obtain the following:

$$|f_n(x)| \le |f(x)|, \tag{4}$$

and for all *x* ∈ [*c*, *d*], we have the following:

$$f_n(x) \to f(x) \tag{5}$$

as $n \to \infty$. Next, we show that $(f_n \circ G)g \in R[\alpha, \beta]$ for all $n \in \mathbb{N}$. For each $n \in \mathbb{N}$, we have the following explicit formula:

$$f_n = \min\{\max\{f, -c_n\}, c_n\} = \frac{1}{4}\Big(f - c_n - |f - c_n| + \big|\, 3c_n + f - |f - c_n| \,\big|\Big),$$

from which, for *h* := *f* ◦ *G*, we obtain the following equality:

$$(f_n \circ G)\,g = \frac{1}{4}\Big(h - c_n - |h - c_n| + \big|\, 3c_n + h - |h - c_n| \,\big|\Big)\, g. \tag{6}$$

Since, by the hypothesis of the theorem, $g, gh \in R[\alpha, \beta]$, we also have $g(h - c_n) = gh - c_n g \in R[\alpha, \beta]$, so Lemma 1 gives $g\,|h - c_n| \in R[\alpha, \beta]$, and hence $g\,\big|3c_n + h - |h - c_n|\big| \in R[\alpha, \beta]$ by the same lemma. Finally, (6) implies that $(f_n \circ G)g \in R[\alpha, \beta]$ for all $n \in \mathbb{N}$.
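The closed-form expression for the truncation above can be sanity-checked numerically; the following sketch (our own illustration, with arbitrary sample values) compares it against a direct clipping:

```python
import numpy as np

# Numerical sanity check (an added illustration) of the identity
#   f_n = min{max{f, -c_n}, c_n}
#       = (1/4) * (f - c_n - |f - c_n| + |3 c_n + f - |f - c_n||),
# verified on random sample values of f with c_n = 3.
rng = np.random.default_rng(0)
f = rng.uniform(-10.0, 10.0, size=10_000)        # sampled values of f
c = 3.0                                          # stands in for c_n > 0

clipped = np.minimum(np.maximum(f, -c), c)       # min{max{f, -c}, c}
closed_form = 0.25 * (f - c - np.abs(f - c)
                      + np.abs(3.0 * c + f - np.abs(f - c)))

print(bool(np.allclose(clipped, closed_form)))   # True
```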

Since the function $(f \circ G)g$ is integrable (and, thus, bounded), there exists $M_2 > 0$ such that for all $n \in \mathbb{N}$ and $t \in [\alpha, \beta]$ the following inequality holds:

$$\big|f_n(G(t))\,g(t)\big| \overset{(4)}{\le} \big|f(G(t))\,g(t)\big| \le M_2.$$

Additionally, for all *t* ∈ [*α*, *β*] as *n* → ∞, we have the following:

$$f_n\big(G(t)\big)\,g(t) \overset{(5)}{\to} f\big(G(t)\big)\,g(t).$$

By virtue of (3), using Theorem 2 and Arzelà's bounded convergence theorem for the Riemann integral (see [7]), as $n \to \infty$ we obtain the following:

$$\int_{G(\alpha)}^{G(\beta)} f(x)\,dx \overset{(3)}{=} \int_{G(\alpha)}^{G(\beta)} f_n(x)\,dx \overset{\text{Th. 2}}{=} \int_{\alpha}^{\beta} f_n\big(G(t)\big)\,g(t)\,dt \to \int_{\alpha}^{\beta} f\big(G(t)\big)\,g(t)\,dt,$$

which completes the verification of (2) and the proof of the theorem.

#### **3. Some Applications**

The following example illustrates Theorem 3 in use: let $\alpha := -1$, $\beta := 2$, $g(t) := 2t$, $G(t) := t^2$ and

$$f(x) := \begin{cases} \dfrac{1}{\sqrt{x}}, & \text{if } x > 0;\\ 0, & \text{if } x = 0. \end{cases}$$

Clearly, *f* is unbounded on *G*([−1, 2]) = [0, 4], but there exists

$$\int_{1}^{4} \frac{dx}{\sqrt{x}} = \int_{G(\alpha)}^{G(\beta)} f(x)\,dx \overset{\text{Th. 3}}{=} \int_{\alpha}^{\beta} f\big(G(t)\big)\,g(t)\,dt = \int_{-1}^{2} 2\,\operatorname{sgn}(t)\,dt = 2.$$
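This computation can be confirmed numerically; the following sketch (an added illustration using a plain trapezoidal rule; the grid sizes are arbitrary choices of ours) evaluates both sides of the change of variable formula:

```python
import numpy as np

# Numerical confirmation of the worked example: both sides of the change of
# variable formula evaluate to approximately 2.
def trapezoid(y, x):
    """Composite trapezoidal rule for samples y on the grid x."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def f(x):
    # f(x) = 1/sqrt(x) for x > 0 and f(0) = 0, as in the example
    return np.where(x > 0, 1.0 / np.sqrt(np.where(x > 0, x, 1.0)), 0.0)

def g(t):
    return 2.0 * t

def G(t):
    return t ** 2

x = np.linspace(1.0, 4.0, 300_001)        # [G(-1), G(2)] = [1, 4]
lhs = trapezoid(f(x), x)                  # left-hand side of (2)

t = np.linspace(-1.0, 2.0, 300_001)
rhs = trapezoid(f(G(t)) * g(t), t)        # right-hand side: 2 sgn(t)

print(round(lhs, 3), round(rhs, 3))       # both approximately 2.0
```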

To illustrate some other applications of our result, we obtain as a consequence a theorem on the change of variable in an improper integral (in one direction) under quite general conditions.

**Corollary 1** (of Theorem 3)**.** *Suppose that a* < *b, α* < *β, f is bounded on* [*a*, *c*] *for all c* ∈ (*a*, *b*)*, g* ∈ *R*[*α*, *γ*] *for all γ* ∈ (*α*, *β*)*,*

$$G(t) := \int_{\alpha}^{t} g(y)\,dy + a \xrightarrow[\ t \to \beta-\ ]{} b$$

*and*

$$\lim_{z \to \beta-} \int_{\alpha}^{z} f\big(G(t)\big)\,g(t)\,dt = I.$$

*Then, the following holds:*

$$\lim\_{x \to b-} \int\_{a}^{x} f(s)ds = I.$$
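As a numerical illustration of Corollary 1 (our own example, not from the paper): $f(s) = 1/\sqrt{1 - s}$ is bounded on $[0, c]$ for every $c < 1$ but blows up at $s = 1$; with $g(t) = 2(1 - t)$ and $G(t) = 2t - t^2$ (so $G(t) \to 1$ as $t \to 1-$), the substituted integrand $f(G(t))\,g(t) = 2$ is bounded, and the partial integrals converge to $I = 2$, the improper integral of $f$ over $[0, 1)$:

```python
import numpy as np

# Illustration of Corollary 1 (an added sketch): the improper integral of
# f(s) = 1/sqrt(1 - s) over [0, 1) is recovered through the substitution
# s = G(t) = 2t - t^2, g(t) = 2(1 - t), where f(G(t)) g(t) = 2 stays bounded.
def trapezoid(y, x):
    """Composite trapezoidal rule for samples y on the grid x."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def substituted_integrand(t):
    # f(G(t)) g(t) = (1 / |1 - t|) * 2 (1 - t) = 2 for t < 1
    return np.full_like(t, 2.0)

values = []
for z in (0.9, 0.99, 0.999):              # z -> beta- = 1-
    t = np.linspace(0.0, z, 100_001)
    values.append(trapezoid(substituted_integrand(t), t))

print([round(v, 3) for v in values])      # tends to I = 2
```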

**Funding:** This work was funded by a grant of the Government of the Russian Federation (project No. 161 14.W03.31.0031).

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** No new data were created or analyzed in this study. Data sharing is not applicable to this article.

**Conflicts of Interest:** The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

#### **References**


## *Article* **On Robust Saddle-Point Criterion in Optimization Problems with Curvilinear Integral Functionals**

**Savin Treanță 1,\* and Koushik Das <sup>2</sup>**


**Abstract:** In this paper, we introduce a new class of multi-dimensional robust optimization problems (named $(P)$) with mixed constraints implying second-order partial differential equations (PDEs) and inequations (PDIs). Moreover, we define an auxiliary (modified) class of robust control problems (named $(P)_{(\bar{b}, \bar{c})}$), which is much easier to study, and provide some characterization results of $(P)$ and $(P)_{(\bar{b}, \bar{c})}$ by using the notions of normal weak robust optimal solution and robust saddle-point associated with a Lagrange functional corresponding to $(P)_{(\bar{b}, \bar{c})}$. For this aim, we consider path-independent curvilinear integral cost functionals and the notion of convexity associated with a curvilinear integral functional generated by a controlled closed (completely integrable) Lagrange 1-form.

**Keywords:** Lagrange 1-form; second-order Lagrangian; normal weak robust optimal solution; modified objective function method; robust saddle-point

#### **1. Introduction**

As we all know, partial differential equations (PDEs) and partial differential inequations (PDIs) are essential in modeling and investigating many processes in engineering and science. In this respect, many researchers have taken a special interest in their study. We mention, for example, the research works of Mititelu [1], Treanță [2–4], Mititelu and Treanță [5], Olteanu and Treanță [6], Preeti et al. [7], and Jayswal et al. [8] on optimization problems with ODE, PDE, or isoperimetric constraints. In order to reduce the complexity of the considered optimization problems, some auxiliary optimization problems were formulated to investigate the initial problems more easily (Treanță [9–12]). Nevertheless, since real-life processes and phenomena often imply uncertainty in the initial data, many researchers have turned their attention to optimization problems governed by first- and second-order PDEs, isoperimetric restrictions, stochastic PDEs, uncertain data, or a combination thereof. In this context, we mention the following research papers: Wei et al. [13], Liu and Yuan [14], Jeyakumar et al. [15], Sun et al. [16], Preeti et al. [7], Lu et al. [17], and Treanță [18]. The structure of approximate solutions associated with some autonomous variational problems on large finite intervals was studied by Zaslavski [19]. Furthermore, Geldhauser and Valdinoci [20] investigated an optimization problem with SPDE constraints, with the peculiarity that the control parameter *s* is the *s*-th power of the diffusion operator in the state equation. In [21], Babamiyi et al. focused on identifying a distributed parameter in a saddle point problem with application to the elasticity imaging inverse problem. Very recently, Debnath and Qin [22] investigated the robust optimality and duality for minimax fractional programming problems with support functions.

Motivated and inspired by previous research works, in this paper, we introduce and study new classes of robust optimization problems. More exactly, by taking curvilinear integral objective functionals with mixed (equality and inequality) constraints implying data uncertainty and second-order partial derivatives, we introduce the robust control problems under study. Further, by using the concept of convexity associated with curvilinear integral

**Citation:** Trean¸t˘a, S.; Das, K. On Robust Saddle-Point Criterion in Optimization Problems with Curvilinear Integral Functionals. *Mathematics* **2021**, *9*, 1790. https:// doi.org/10.3390/math9151790

Academic Editor: Simeon Reich

Received: 6 July 2021 Accepted: 27 July 2021 Published: 28 July 2021



functionals and the notion of robust saddle-point associated with a Lagrange functional corresponding to the modified robust optimization problem, we formulate and prove some characterization results for the considered classes of control problems. The novelty elements included in the paper, in comparison with other research papers in this field, are provided by the presence of uncertain data both in the objective functional and in the constraint functionals and also by the presence of second-order partial derivatives. Moreover, the proofs associated with the main results are established in an innovative way. Furthermore, since the mathematical framework introduced here is appropriate for various scientific approaches and viewpoints on complex spatial behaviors, the current paper could be seen as a definitive research work for a large community of researchers in engineering and science.

The paper is structured as follows. Section 2 provides the preliminary and necessary mathematical tools, which will be used in the next sections. Section 3 includes the main results of this paper. Under a convexity assumption on the cost functional, the first main result establishes a connection between a robust saddle-point of the Lagrange functional associated with the modified problem $(P)_{(\bar{b}, \bar{c})}$ and a weak robust optimal solution of $(P)$. By assuming convexity hypotheses on the constraint functionals, the converse of the first main result is presented in the second main result. In Section 4, we formulate the conclusions and further developments.

#### **2. Preliminaries**

In this paper, we use the following working hypotheses and notations:


	- $b_{\sigma}(t) := \dfrac{\partial b}{\partial t^{\sigma}}(t)$ and $b_{\alpha\beta}(t) := \dfrac{\partial^{2} b}{\partial t^{\alpha} \partial t^{\beta}}(t)$ denote the *partial speed* and *partial acceleration*, respectively;

	- (*i*) $x < y \Leftrightarrow x^i < y^i, \ \forall i = \overline{1, n}$;
	- (*ii*) $x = y \Leftrightarrow x^i = y^i, \ \forall i = \overline{1, n}$;
	- (*iii*) $x \leqq y \Leftrightarrow x^i \le y^i, \ \forall i = \overline{1, n}$;
	- (*iv*) $x \le y \Leftrightarrow x^i \le y^i, \ \forall i = \overline{1, n}$, and $x^i < y^i$ for some $i$.

In the following, we consider that $g = (g_1, \ldots, g_m) = (g_l) : J^2(\Theta, \mathbb{R}^q) \times \mathcal{C} \times U_l \to \mathbb{R}^m$, $l = \overline{1, m}$, $f_{\kappa} : J^2(\Theta, \mathbb{R}^q) \times \mathcal{C} \times W_{\kappa} \to \mathbb{R}$, $\kappa = \overline{1, p}$, and $h = (h_1, \ldots, h_n) = (h_{\zeta}) : J^2(\Theta, \mathbb{R}^q) \times \mathcal{C} \times V_{\zeta} \to \mathbb{R}^n$, $\zeta = \overline{1, n}$, are $C^3$-class functionals. Furthermore, let us assume that $w = (w_{\kappa})$, $u = (u_l)$ and $v = (v_{\zeta})$ are the uncertain parameters belonging to some convex compact subsets $W = (W_{\kappa}) \subset \mathbb{R}^p$, $U = (U_l) \subset \mathbb{R}^m$ and $V = (V_{\zeta}) \subset \mathbb{R}^n$, respectively. Denote by $J^2(\Theta, \mathbb{R}^q)$ the second-order jet bundle associated with $\Theta$ and $\mathbb{R}^q$. Furthermore, assume that the multi-time-controlled second-order Lagrangians $f_{\kappa}$ determine a controlled closed (completely integrable) Lagrange 1-form (with summation over repeated indices, following the Einstein summation convention):

$$f_{\kappa}\big(t, b(t), b_{\sigma}(t), b_{\alpha\beta}(t), c(t), w\big)\, dt^{\kappa},$$

which generates the following controlled path-independent curvilinear integral functional:

$$\int_{\Gamma} f_{\kappa}\big(t, b(t), b_{\sigma}(t), b_{\alpha\beta}(t), c(t), w\big)\, dt^{\kappa}.$$

The second-order PDE and PDI constrained variational control problem with uncertainty in the objective and constraint functionals is defined as follows:

$$(P) \qquad \min_{(b(\cdot),\, c(\cdot))} \int_{\Gamma} f_{\kappa}\big(t, b(t), b_{\sigma}(t), b_{\alpha\beta}(t), c(t), w\big)\, dt^{\kappa}$$

subject to

$$g\big(t, b(t), b_{\sigma}(t), b_{\alpha\beta}(t), c(t), u\big) \le 0, \quad t \in \Theta$$

$$h\big(t, b(t), b_{\sigma}(t), b_{\alpha\beta}(t), c(t), v\big) = 0, \quad t \in \Theta$$

$$b(t_0) = b_0, \;\; b(t_1) = b_1, \;\; b_{\sigma}(t_0) = b_{\sigma 0}, \;\; b_{\sigma}(t_1) = b_{\sigma 1}.$$

The associated robust counterpart of the aforementioned variational control problem (*P*) is defined as:

$$(RP) \qquad \min_{(b(\cdot),\, c(\cdot))} \int_{\Gamma} \max_{w \in \mathcal{W}} f_{\kappa}\big(t, b(t), b_{\sigma}(t), b_{\alpha\beta}(t), c(t), w\big)\, dt^{\kappa}$$

subject to

$$g\big(t, b(t), b_{\sigma}(t), b_{\alpha\beta}(t), c(t), u\big) \le 0, \quad t \in \Theta, \ \forall u \in \mathcal{U}$$

$$h\big(t, b(t), b_{\sigma}(t), b_{\alpha\beta}(t), c(t), v\big) = 0, \quad t \in \Theta, \ \forall v \in \mathcal{V}$$

$$b(t_0) = b_0, \;\; b(t_1) = b_1, \;\; b_{\sigma}(t_0) = b_{\sigma 0}, \;\; b_{\sigma}(t_1) = b_{\sigma 1}.$$
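To make the worst-case reformulation concrete, the following toy sketch (our own static, finite-dimensional analogue; the paper's problem is variational, and the objective $f(b, w) = (b - w)^2$ is invented purely for illustration) minimizes the worst case of an uncertain objective over a discretized uncertainty set:

```python
import numpy as np

# Toy, static analogue of the robust counterpart (an illustration only):
# minimize over b the worst case of f(b, w) = (b - w)^2 over the
# uncertainty set W = [-1, 1], both discretized on grids.
def f(b, w):
    return (b - w) ** 2              # uncertain objective

b_grid = np.linspace(-2.0, 2.0, 401)
w_grid = np.linspace(-1.0, 1.0, 201)          # uncertainty set W

worst_case = np.array([max(f(b, w) for w in w_grid) for b in b_grid])
b_robust = b_grid[int(np.argmin(worst_case))]

print(b_robust)                  # 0.0: hedges against both extremes w = +/-1
print(worst_case.min())          # 1.0 = f(0, +/-1), the robust optimal value
```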

Further, denote by

$$\begin{aligned} \mathcal{D} = \Big\{ (b, c) \in \mathcal{B} \times \mathcal{C} \,:\, & \ g\big(t, b(t), b_{\sigma}(t), b_{\alpha\beta}(t), c(t), u\big) \le 0, \\ & \ h\big(t, b(t), b_{\sigma}(t), b_{\alpha\beta}(t), c(t), v\big) = 0, \ b(t_0) = b_0, \ b(t_1) = b_1, \\ & \ b_{\sigma}(t_0) = b_{\sigma 0}, \ b_{\sigma}(t_1) = b_{\sigma 1}, \ \forall t \in \Theta, \ \forall u \in \mathcal{U}, \ \forall v \in \mathcal{V} \Big\} \end{aligned}$$

the feasible solution set in (*RP*), and we call it the *robust feasible solution set* of (*P*).

To simplify the presentation, we use the following notation:

$$
\pi = (t, b(t), b\_{\sigma}(t), b\_{\alpha\beta}(t), c(t)).
$$

The associated first-order partial derivatives of $f_{\kappa}$, $\kappa = \overline{1, p}$, are defined as

$$\frac{\partial f_{\kappa}}{\partial b} = \left(\frac{\partial f_{\kappa}}{\partial b^{1}}, \ldots, \frac{\partial f_{\kappa}}{\partial b^{q}}\right), \quad \frac{\partial f_{\kappa}}{\partial c} = \left(\frac{\partial f_{\kappa}}{\partial c^{1}}, \ldots, \frac{\partial f_{\kappa}}{\partial c^{r}}\right).$$

In the same manner, we set $g_b := \dfrac{\partial g}{\partial b}$ and $g_c := \dfrac{\partial g}{\partial c}$ (matrices with $m$ rows), and $h_b := \dfrac{\partial h}{\partial b}$ and $h_c := \dfrac{\partial h}{\partial c}$ (matrices with $n$ rows).

Further, in accordance with Treanță [3], we define the notion of a weak robust optimal solution for the considered class of constrained variational control problems. This notion will be used to establish the associated robust necessary optimality conditions and the main results derived in the paper.

**Definition 1.** *A pair* $(\bar{b}, \bar{c}) \in \mathcal{D}$ *is said to be a weak robust optimal solution to* $(P)$ *if there does not exist another point* $(b, c) \in \mathcal{D}$ *such that*

$$\int_{\Gamma} \max_{w \in \mathcal{W}} f_{\kappa}(\pi, w)\, dt^{\kappa} < \int_{\Gamma} \max_{w \in \mathcal{W}} f_{\kappa}(\bar{\pi}, w)\, dt^{\kappa}.$$

Next, we shall use Saunders's multi-index notation (Saunders [23], Treanță [3,24]) to formulate the concept of convexity and the robust necessary optimality conditions for $(P)$.

**Definition 2.** *A curvilinear integral functional*

$$F(b, c, w) = \int_{\Gamma} f_{\kappa}\big(t, b(t), b_{\sigma}(t), b_{\alpha\beta}(t), c(t), w\big)\, dt^{\kappa} = \int_{\Gamma} f_{\kappa}(\pi, w)\, dt^{\kappa}$$

*is said to be convex at* $(\bar{b}, \bar{c}) \in \mathcal{B} \times \mathcal{C}$ *if the following inequality*

$$\begin{aligned} F(b, c, \bar{w}) - F(\bar{b}, \bar{c}, \bar{w}) \ge & \int_{\Gamma} \frac{\partial f_{\kappa}}{\partial b}(\bar{\pi}, \bar{w})\big[b(t) - \bar{b}(t)\big]\, dt^{\kappa} + \int_{\Gamma} \frac{\partial f_{\kappa}}{\partial b_{\sigma}}(\bar{\pi}, \bar{w})\big[b_{\sigma}(t) - \bar{b}_{\sigma}(t)\big]\, dt^{\kappa} \\ & + \frac{1}{n(\alpha, \beta)} \int_{\Gamma} \frac{\partial f_{\kappa}}{\partial b_{\alpha\beta}}(\bar{\pi}, \bar{w})\big[b_{\alpha\beta}(t) - \bar{b}_{\alpha\beta}(t)\big]\, dt^{\kappa} + \int_{\Gamma} \frac{\partial f_{\kappa}}{\partial c}(\bar{\pi}, \bar{w})\big[c(t) - \bar{c}(t)\big]\, dt^{\kappa} \end{aligned}$$

*holds for all* (*b*, *c*) ∈ B × C*.*

According to Trean¸t˘a [24], we formulate the robust necessary optimality conditions for (*P*).

**Theorem 1.** *If* $(\bar{b}, \bar{c}) \in \mathcal{D}$ *is a weak robust optimal solution to* $(P)$ *and* $\max_{w \in \mathcal{W}} f_{\kappa}(\bar{\pi}, w) = f_{\kappa}(\bar{\pi}, \bar{w})$, $\kappa = \overline{1, p}$*, then there exist a scalar* $\bar{\mu} \in \mathbb{R}$*, piecewise smooth functions* $\bar{\nu} = (\bar{\nu}_l(t)) \in \mathbb{R}^m_+$ *and* $\bar{\gamma} = (\bar{\gamma}^{\zeta}(t)) \in \mathbb{R}^n$*, and uncertainty parameters* $\bar{u} \in \mathcal{U}$ *and* $\bar{v} \in \mathcal{V}$ *such that the following conditions*

$$\begin{aligned} & \bar{\mu} \frac{\partial f_{\kappa}}{\partial b}(\bar{\pi}, \bar{w}) + \bar{\nu}^{T} g_{b}(\bar{\pi}, \bar{u}) + \bar{\gamma}^{T} h_{b}(\bar{\pi}, \bar{v}) - D_{\sigma}\left[\bar{\mu} \frac{\partial f_{\kappa}}{\partial b_{\sigma}}(\bar{\pi}, \bar{w}) + \bar{\nu}^{T} g_{b_{\sigma}}(\bar{\pi}, \bar{u}) + \bar{\gamma}^{T} h_{b_{\sigma}}(\bar{\pi}, \bar{v})\right] \\ & + \frac{1}{n(\alpha, \beta)} D^{2}_{\alpha\beta}\left[\bar{\mu} \frac{\partial f_{\kappa}}{\partial b_{\alpha\beta}}(\bar{\pi}, \bar{w}) + \bar{\nu}^{T} g_{b_{\alpha\beta}}(\bar{\pi}, \bar{u}) + \bar{\gamma}^{T} h_{b_{\alpha\beta}}(\bar{\pi}, \bar{v})\right] = 0, \quad \kappa = \overline{1, p} \end{aligned} \tag{1}$$

$$\bar{\mu} \frac{\partial f_{\kappa}}{\partial c}(\bar{\pi}, \bar{w}) + \bar{\nu}^{T} g_{c}(\bar{\pi}, \bar{u}) + \bar{\gamma}^{T} h_{c}(\bar{\pi}, \bar{v}) = 0, \quad \kappa = \overline{1, p} \tag{2}$$

$$\bar{\nu}^{T} g(\bar{\pi}, \bar{u}) = 0, \quad \bar{\nu} \geqq 0, \tag{3}$$

$$\bar{\mu} \ge 0 \tag{4}$$

*hold for all t* ∈ Θ, *except at discontinuities.*

**Remark 1.** *The robust necessary optimality conditions of* (*P*) *are given by the conditions (1)–(4).*

**Definition 3.** *A pair* $(\bar{b}, \bar{c}) \in \mathcal{D}$ *is said to be a normal weak robust optimal solution to* $(P)$ *if* $\bar{\mu} > 0$ *in Theorem 1. We can consider* $\bar{\mu} = 1$ *without loss of generality.*

Next, we use the modified objective function method to reduce the complexity of $(P)$. In this direction, let $(\bar{b}, \bar{c})$ be an arbitrary given robust feasible solution to $(P)$. The modified multi-dimensional variational control problem associated with the original optimization problem $(P)$ is defined as:

$$\begin{aligned} (P)_{(\bar{b}, \bar{c})} \qquad \min_{(b(\cdot),\, c(\cdot))} \int_{\Gamma} \Big\{ & \frac{\partial f_{\kappa}}{\partial b}(\bar{\pi}, w)\big(b(t) - \bar{b}(t)\big) + \frac{\partial f_{\kappa}}{\partial b_{\sigma}}(\bar{\pi}, w)\big(b_{\sigma}(t) - \bar{b}_{\sigma}(t)\big) \\ & + \frac{1}{n(\alpha, \beta)} \frac{\partial f_{\kappa}}{\partial b_{\alpha\beta}}(\bar{\pi}, w)\big(b_{\alpha\beta}(t) - \bar{b}_{\alpha\beta}(t)\big) + \frac{\partial f_{\kappa}}{\partial c}(\bar{\pi}, w)\big(c(t) - \bar{c}(t)\big) \Big\}\, dt^{\kappa} \end{aligned}$$

subject to

$$g(\pi, u) \le 0, \quad t \in \Theta$$

$$h(\pi, v) = 0, \quad t \in \Theta$$

$$b(t_0) = b_0, \;\; b(t_1) = b_1, \;\; b_{\sigma}(t_0) = b_{\sigma 0}, \;\; b_{\sigma}(t_1) = b_{\sigma 1},$$

where the functionals *g*, *f<sup>κ</sup>* and *h* are given as in (*P*).

The associated robust counterpart of the modified multi-dimensional variational control problem $(P)_{(\bar{b}, \bar{c})}$ is defined as:

$$\begin{aligned} (RP)_{(\bar{b}, \bar{c})} \qquad \min_{(b(\cdot),\, c(\cdot))} \int_{\Gamma} \max_{w \in \mathcal{W}} \Big\{ & \frac{\partial f_{\kappa}}{\partial b}(\bar{\pi}, w)\big(b(t) - \bar{b}(t)\big) + \frac{\partial f_{\kappa}}{\partial b_{\sigma}}(\bar{\pi}, w)\big(b_{\sigma}(t) - \bar{b}_{\sigma}(t)\big) \\ & + \frac{1}{n(\alpha, \beta)} \frac{\partial f_{\kappa}}{\partial b_{\alpha\beta}}(\bar{\pi}, w)\big(b_{\alpha\beta}(t) - \bar{b}_{\alpha\beta}(t)\big) + \frac{\partial f_{\kappa}}{\partial c}(\bar{\pi}, w)\big(c(t) - \bar{c}(t)\big) \Big\}\, dt^{\kappa} \end{aligned}$$

subject to

$$g(\pi, u) \le 0, \quad t \in \Theta, \forall u \in \mathcal{U}$$

$$h(\pi, v) = 0, \quad t \in \Theta, \ \forall v \in \mathcal{V}$$

$$b(t_0) = b_0, \;\; b(t_1) = b_1, \;\; b_{\sigma}(t_0) = b_{\sigma 0}, \;\; b_{\sigma}(t_1) = b_{\sigma 1}.$$

**Remark 2.** *The robust feasible solution set of the problem* $(P)_{(\bar{b}, \bar{c})}$ *is the same as that of* $(P)$*. Consequently, it is also denoted by* $\mathcal{D}$*.*

**Definition 4.** *A pair* $(\hat{b}, \hat{c}) \in \mathcal{D}$ *is said to be a weak robust optimal solution to* $(P)_{(\bar{b}, \bar{c})}$ *if there does not exist another point* $(b, c) \in \mathcal{D}$ *such that*

$$\begin{aligned} & \int_{\Gamma} \max_{w \in \mathcal{W}} \Big[ \frac{\partial f_{\kappa}}{\partial b}(\bar{\pi}, w)(b - \bar{b}) + \frac{\partial f_{\kappa}}{\partial b_{\sigma}}(\bar{\pi}, w)(b_{\sigma} - \bar{b}_{\sigma}) \\ & \qquad + \frac{1}{n(\alpha, \beta)} \frac{\partial f_{\kappa}}{\partial b_{\alpha\beta}}(\bar{\pi}, w)(b_{\alpha\beta} - \bar{b}_{\alpha\beta}) + \frac{\partial f_{\kappa}}{\partial c}(\bar{\pi}, w)(c - \bar{c}) \Big]\, dt^{\kappa} \\ & < \int_{\Gamma} \max_{w \in \mathcal{W}} \Big[ \frac{\partial f_{\kappa}}{\partial b}(\bar{\pi}, w)(\hat{b} - \bar{b}) + \frac{\partial f_{\kappa}}{\partial b_{\sigma}}(\bar{\pi}, w)(\hat{b}_{\sigma} - \bar{b}_{\sigma}) \\ & \qquad + \frac{1}{n(\alpha, \beta)} \frac{\partial f_{\kappa}}{\partial b_{\alpha\beta}}(\bar{\pi}, w)(\hat{b}_{\alpha\beta} - \bar{b}_{\alpha\beta}) + \frac{\partial f_{\kappa}}{\partial c}(\bar{\pi}, w)(\hat{c} - \bar{c}) \Big]\, dt^{\kappa}. \end{aligned}$$

#### **3. Saddle-Point Optimality Criterion**

In this section, under some convexity assumptions, we establish some connections between a weak robust optimal solution of $(P)$ and a *robust saddle-point* associated with a Lagrange functional (Lagrangian) corresponding to the modified multi-dimensional variational control problem $(P)_{(\bar{b}, \bar{c})}$. In this regard, in accordance with Treanță [9,11,12] and Preeti et al. [7], we formulate the next definitions.

**Definition 5.** *The Lagrange functional* $L((b, c), \nu, \gamma, w, u, v) : \mathcal{B} \times \mathcal{C} \times \mathbb{R}^m_+ \times \mathbb{R}^n \times \mathcal{W} \times \mathcal{U} \times \mathcal{V} \to \mathbb{R}$ *associated with the modified variational control problem* $(P)_{(\bar{b}, \bar{c})}$ *is defined as*

$$\begin{aligned} L((b, c), \nu, \gamma, w, u, v) = \int_{\Gamma} \bigg\{ \max_{w \in \mathcal{W}} \Big[ & \big(b(t) - \bar{b}(t)\big) \frac{\partial f_{\kappa}}{\partial b}(\bar{\pi}, w) + \big(b_{\sigma}(t) - \bar{b}_{\sigma}(t)\big) \frac{\partial f_{\kappa}}{\partial b_{\sigma}}(\bar{\pi}, w) \\ & + \frac{1}{n(\alpha, \beta)} \big(b_{\alpha\beta}(t) - \bar{b}_{\alpha\beta}(t)\big) \frac{\partial f_{\kappa}}{\partial b_{\alpha\beta}}(\bar{\pi}, w) + \big(c(t) - \bar{c}(t)\big) \frac{\partial f_{\kappa}}{\partial c}(\bar{\pi}, w) \Big] \\ & + \nu^{T}(t)\, g(\pi, u) + \gamma^{T}(t)\, h(\pi, v) \bigg\}\, dt^{\kappa}. \end{aligned}$$

**Definition 6.** *A point* $((\bar{b}, \bar{c}), \bar{\nu}, \bar{\gamma}, \bar{w}, \bar{u}, \bar{v}) \in \mathcal{D} \times \mathbb{R}^m_+ \times \mathbb{R}^n \times \mathcal{W} \times \mathcal{U} \times \mathcal{V}$ *is said to be a robust saddle-point for the Lagrange functional* $L((b, c), \nu, \gamma, w, u, v)$ *associated with the modified multi-dimensional variational control problem* $(P)_{(\bar{b}, \bar{c})}$ *if the following relations are fulfilled:*

*(i)* $L((\bar{b}, \bar{c}), \nu, \gamma, w, u, v) \le L((\bar{b}, \bar{c}), \bar{\nu}, \bar{\gamma}, w, \bar{u}, \bar{v})$, $\forall \nu \in \mathbb{R}^m_+$, $\forall \gamma \in \mathbb{R}^n$, $\forall (u, v) \in \mathcal{U} \times \mathcal{V}$;
*(ii)* $L((b, c), \bar{\nu}, \bar{\gamma}, w, \bar{u}, \bar{v}) \ge L((\bar{b}, \bar{c}), \bar{\nu}, \bar{\gamma}, w, \bar{u}, \bar{v})$, $\forall (b, c) \in \mathcal{B} \times \mathcal{C}$.
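The two saddle-point inequalities can be illustrated on a hypothetical finite-dimensional problem (our own toy example, not the variational setting of the paper): for minimizing $x^2$ subject to $1 - x \le 0$, the Lagrangian $L(x, \nu) = x^2 + \nu(1 - x)$ has the saddle point $(\bar{x}, \bar{\nu}) = (1, 2)$:

```python
import numpy as np

# Toy finite-dimensional saddle point (an illustration only), mirroring the
# structure of Definition 6:
#   L(x_bar, nu) <= L(x_bar, nu_bar) <= L(x, nu_bar)
# for all nu >= 0 and all x, with L(x, nu) = x^2 + nu (1 - x).
def L(x, nu):
    return x ** 2 + nu * (1.0 - x)

x_bar, nu_bar = 1.0, 2.0
xs = np.linspace(-3.0, 3.0, 601)
nus = np.linspace(0.0, 10.0, 501)

# (i): at x_bar the constraint is active, so L(x_bar, nu) = 1 for every nu.
assert all(L(x_bar, nu) <= L(x_bar, nu_bar) + 1e-12 for nu in nus)
# (ii): L(x, nu_bar) = (x - 1)^2 + 1 is minimized at x_bar.
assert all(L(x, nu_bar) >= L(x_bar, nu_bar) - 1e-12 for x in xs)

print(L(x_bar, nu_bar))   # 1.0, the optimal value of the toy problem
```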

Now, taking into account the above definitions, we establish the following two main results of this paper.

**Theorem 2.** *Let* $(\bar{b}, \bar{c})$ *be a robust feasible solution to* $(P)$*. Assume that* $\max_{w \in \mathcal{W}} f_{\kappa}(\bar{\pi}, w) = f_{\kappa}(\bar{\pi}, \bar{w})$, $\kappa = \overline{1, p}$*, and that the objective functional* $\int_{\Gamma} f_{\kappa}(\pi, \bar{w})\, dt^{\kappa}$ *is convex at* $(\bar{b}, \bar{c})$*. If the point* $((\bar{b}, \bar{c}), \bar{\nu}, \bar{\gamma}, \bar{w}, \bar{u}, \bar{v})$ *is a robust saddle-point for the Lagrange functional* $L((b, c), \nu, \gamma, w, u, v)$ *associated with the modified multi-dimensional variational control problem* $(P)_{(\bar{b}, \bar{c})}$*, then* $(\bar{b}, \bar{c})$ *is a weak robust optimal solution to* $(P)$*.*

**Proof.** By reductio ad absurdum, suppose that $(\bar{b},\bar{c})$ is not a weak robust optimal solution to $(P)$. Then, by the convexity of the objective functional $\int_{\Gamma}f_{\kappa}(\pi,\bar{w})\,dt^{\kappa}$, we get

$$\int_{\Gamma}\max_{w\in\mathcal{W}}\Big[(\tilde{b}-\bar{b})\frac{\partial f_{\kappa}}{\partial b}(\bar{\pi},w)+(\tilde{b}_{\sigma}-\bar{b}_{\sigma})\frac{\partial f_{\kappa}}{\partial b_{\sigma}}(\bar{\pi},w)\tag{5}$$

$$+\frac{1}{n(\alpha,\beta)}(\tilde{b}_{\alpha\beta}-\bar{b}_{\alpha\beta})\frac{\partial f_{\kappa}}{\partial b_{\alpha\beta}}(\bar{\pi},w)+(\tilde{c}-\bar{c})\frac{\partial f_{\kappa}}{\partial c}(\bar{\pi},w)\Big]\,dt^{\kappa}$$

$$<\int_{\Gamma}\max_{w\in\mathcal{W}}\Big[(\bar{b}-\bar{b})\frac{\partial f_{\kappa}}{\partial b}(\bar{\pi},w)+(\bar{b}_{\sigma}-\bar{b}_{\sigma})\frac{\partial f_{\kappa}}{\partial b_{\sigma}}(\bar{\pi},w)$$

$$+\frac{1}{n(\alpha,\beta)}(\bar{b}_{\alpha\beta}-\bar{b}_{\alpha\beta})\frac{\partial f_{\kappa}}{\partial b_{\alpha\beta}}(\bar{\pi},w)+(\bar{c}-\bar{c})\frac{\partial f_{\kappa}}{\partial c}(\bar{\pi},w)\Big]\,dt^{\kappa},$$

for some $(\tilde{b},\tilde{c})\in\mathcal{D}$.

From the feasibility of $(\tilde{b},\tilde{c})$ for the problem $(P)$ and $\bar{\nu}\in\mathbb{R}^{m}_{+}$, we get

$$\int_{\Gamma}\bar{\nu}^{T}g(\tilde{\pi},\bar{u})\,dt^{\kappa}\le 0.\tag{6}$$

On the other hand, since $((\bar{b},\bar{c}),\bar{\nu},\bar{\gamma},\bar{w},\bar{u},\bar{v})$ is a robust saddle-point for the Lagrange functional $L((b,c),\nu,\gamma,w,u,v)$ associated with the modified multi-dimensional variational control problem $(P)_{(\bar{b},\bar{c})}$, by using Definition 6 $(i)$ we have

$$L((\bar{b},\bar{c}),\nu,\gamma,w,u,v)\le L((\bar{b},\bar{c}),\bar{\nu},\bar{\gamma},w,\bar{u},\bar{v}),\quad\forall\nu\in\mathbb{R}^{m}_{+},\ \forall\gamma\in\mathbb{R}^{n},\ \forall(u,v)\in\mathcal{U}\times\mathcal{V},$$

which, using the definition of the Lagrange functional, can be rewritten as

$$\int_{\Gamma}\Big\{\max_{w\in\mathcal{W}}\Big[(\bar{b}(t)-\bar{b}(t))\frac{\partial f_{\kappa}}{\partial b}(\bar{\pi},w)+(\bar{b}_{\sigma}(t)-\bar{b}_{\sigma}(t))\frac{\partial f_{\kappa}}{\partial b_{\sigma}}(\bar{\pi},w)$$

$$+\frac{1}{n(\alpha,\beta)}(\bar{b}_{\alpha\beta}(t)-\bar{b}_{\alpha\beta}(t))\frac{\partial f_{\kappa}}{\partial b_{\alpha\beta}}(\bar{\pi},w)+(\bar{c}(t)-\bar{c}(t))\frac{\partial f_{\kappa}}{\partial c}(\bar{\pi},w)\Big]$$

$$+\nu^{T}(t)g(\bar{\pi},u)+\gamma^{T}(t)h(\bar{\pi},v)\Big\}\,dt^{\kappa}$$

$$\le\int_{\Gamma}\Big\{\max_{w\in\mathcal{W}}\Big[(\bar{b}(t)-\bar{b}(t))\frac{\partial f_{\kappa}}{\partial b}(\bar{\pi},w)+(\bar{b}_{\sigma}(t)-\bar{b}_{\sigma}(t))\frac{\partial f_{\kappa}}{\partial b_{\sigma}}(\bar{\pi},w)$$

$$+\frac{1}{n(\alpha,\beta)}(\bar{b}_{\alpha\beta}(t)-\bar{b}_{\alpha\beta}(t))\frac{\partial f_{\kappa}}{\partial b_{\alpha\beta}}(\bar{\pi},w)+(\bar{c}(t)-\bar{c}(t))\frac{\partial f_{\kappa}}{\partial c}(\bar{\pi},w)\Big]$$

$$+\bar{\nu}^{T}(t)g(\bar{\pi},\bar{u})+\bar{\gamma}^{T}(t)h(\bar{\pi},\bar{v})\Big\}\,dt^{\kappa}.$$

Since $\max_{w\in\mathcal{W}}f_{\kappa}(\pi,w)=f_{\kappa}(\pi,\bar{w})$, $\kappa=\overline{1,p}$, it follows that

$$\int_{\Gamma}\Big\{(\bar{b}(t)-\bar{b}(t))\frac{\partial f_{\kappa}}{\partial b}(\bar{\pi},\bar{w})+(\bar{b}_{\sigma}(t)-\bar{b}_{\sigma}(t))\frac{\partial f_{\kappa}}{\partial b_{\sigma}}(\bar{\pi},\bar{w})+\frac{1}{n(\alpha,\beta)}(\bar{b}_{\alpha\beta}(t)-\bar{b}_{\alpha\beta}(t))\frac{\partial f_{\kappa}}{\partial b_{\alpha\beta}}(\bar{\pi},\bar{w})$$

$$+(\bar{c}(t)-\bar{c}(t))\frac{\partial f_{\kappa}}{\partial c}(\bar{\pi},\bar{w})+\nu^{T}(t)g(\bar{\pi},u)+\gamma^{T}(t)h(\bar{\pi},v)\Big\}\,dt^{\kappa}$$

$$\le\int_{\Gamma}\Big\{(\bar{b}(t)-\bar{b}(t))\frac{\partial f_{\kappa}}{\partial b}(\bar{\pi},\bar{w})+(\bar{b}_{\sigma}(t)-\bar{b}_{\sigma}(t))\frac{\partial f_{\kappa}}{\partial b_{\sigma}}(\bar{\pi},\bar{w})+\frac{1}{n(\alpha,\beta)}(\bar{b}_{\alpha\beta}(t)-\bar{b}_{\alpha\beta}(t))\frac{\partial f_{\kappa}}{\partial b_{\alpha\beta}}(\bar{\pi},\bar{w})$$

$$+(\bar{c}(t)-\bar{c}(t))\frac{\partial f_{\kappa}}{\partial c}(\bar{\pi},\bar{w})+\bar{\nu}^{T}(t)g(\bar{\pi},\bar{u})+\bar{\gamma}^{T}(t)h(\bar{\pi},\bar{v})\Big\}\,dt^{\kappa}.$$

If we set *ν* = 0 and *γ* = 0 in the above inequality, we obtain

$$\int\_{\Gamma} \bar{\nu}^T g(\bar{\pi}, \bar{u}) dt^\kappa \ge 0. \tag{7}$$

From (6) and (7), it follows that

$$\int_{\Gamma}\bar{\nu}^{T}g(\tilde{\pi},\bar{u})\,dt^{\kappa}\le\int_{\Gamma}\bar{\nu}^{T}g(\bar{\pi},\bar{u})\,dt^{\kappa},$$

which, along with the inequality (5), gives

$$\int_{\Gamma}\Big\{\max_{w\in\mathcal{W}}\Big[(\tilde{b}-\bar{b})\frac{\partial f_{\kappa}}{\partial b}(\bar{\pi},w)+(\tilde{b}_{\sigma}-\bar{b}_{\sigma})\frac{\partial f_{\kappa}}{\partial b_{\sigma}}(\bar{\pi},w)$$

$$+\frac{1}{n(\alpha,\beta)}(\tilde{b}_{\alpha\beta}-\bar{b}_{\alpha\beta})\frac{\partial f_{\kappa}}{\partial b_{\alpha\beta}}(\bar{\pi},w)+(\tilde{c}-\bar{c})\frac{\partial f_{\kappa}}{\partial c}(\bar{\pi},w)\Big]+\bar{\nu}^{T}g(\tilde{\pi},\bar{u})\Big\}\,dt^{\kappa}$$

$$<\int_{\Gamma}\Big\{\max_{w\in\mathcal{W}}\Big[(\bar{b}-\bar{b})\frac{\partial f_{\kappa}}{\partial b}(\bar{\pi},w)+(\bar{b}_{\sigma}-\bar{b}_{\sigma})\frac{\partial f_{\kappa}}{\partial b_{\sigma}}(\bar{\pi},w)$$

$$+\frac{1}{n(\alpha,\beta)}(\bar{b}_{\alpha\beta}-\bar{b}_{\alpha\beta})\frac{\partial f_{\kappa}}{\partial b_{\alpha\beta}}(\bar{\pi},w)+(\bar{c}-\bar{c})\frac{\partial f_{\kappa}}{\partial c}(\bar{\pi},w)\Big]+\bar{\nu}^{T}g(\bar{\pi},\bar{u})\Big\}\,dt^{\kappa},$$

or, equivalently,

$$\int_{\Gamma}\Big\{\max_{w\in\mathcal{W}}\Big[(\tilde{b}-\bar{b})\frac{\partial f_{\kappa}}{\partial b}(\bar{\pi},w)+(\tilde{b}_{\sigma}-\bar{b}_{\sigma})\frac{\partial f_{\kappa}}{\partial b_{\sigma}}(\bar{\pi},w)$$

$$+\frac{1}{n(\alpha,\beta)}(\tilde{b}_{\alpha\beta}-\bar{b}_{\alpha\beta})\frac{\partial f_{\kappa}}{\partial b_{\alpha\beta}}(\bar{\pi},w)+(\tilde{c}-\bar{c})\frac{\partial f_{\kappa}}{\partial c}(\bar{\pi},w)\Big]+\bar{\nu}^{T}g(\tilde{\pi},\bar{u})+\bar{\gamma}^{T}h(\tilde{\pi},\bar{v})\Big\}\,dt^{\kappa}$$

$$<\int_{\Gamma}\Big\{\max_{w\in\mathcal{W}}\Big[(\bar{b}-\bar{b})\frac{\partial f_{\kappa}}{\partial b}(\bar{\pi},w)+(\bar{b}_{\sigma}-\bar{b}_{\sigma})\frac{\partial f_{\kappa}}{\partial b_{\sigma}}(\bar{\pi},w)$$

$$+\frac{1}{n(\alpha,\beta)}(\bar{b}_{\alpha\beta}-\bar{b}_{\alpha\beta})\frac{\partial f_{\kappa}}{\partial b_{\alpha\beta}}(\bar{\pi},w)+(\bar{c}-\bar{c})\frac{\partial f_{\kappa}}{\partial c}(\bar{\pi},w)\Big]+\bar{\nu}^{T}g(\bar{\pi},\bar{u})+\bar{\gamma}^{T}h(\bar{\pi},\bar{v})\Big\}\,dt^{\kappa},$$

or

$$L((\tilde{b},\tilde{c}),\bar{\nu},\bar{\gamma},w,\bar{u},\bar{v})<L((\bar{b},\bar{c}),\bar{\nu},\bar{\gamma},w,\bar{u},\bar{v}),\quad(\tilde{b},\tilde{c})\in\mathcal{B}\times\mathcal{C},$$

which contradicts Definition 6 $(ii)$, and the proof is complete.

**Theorem 3.** *Let* $(\bar{b},\bar{c})$ *be a normal weak robust optimal solution to* $(P)$*. Assume that* $\max_{w\in\mathcal{W}}f_{\kappa}(\pi,w)=f_{\kappa}(\pi,\bar{w})$, $\kappa=\overline{1,p}$, *and that the constraint functionals*

$$\int_{\Gamma}\bar{\nu}^{T}g(\pi,\bar{u})\,dt^{\kappa},\qquad\int_{\Gamma}\bar{\gamma}^{T}h(\pi,\bar{v})\,dt^{\kappa}$$

*are convex at* $(\bar{b},\bar{c})$*. Then,* $((\bar{b},\bar{c}),\bar{\nu},\bar{\gamma},\bar{w},\bar{u},\bar{v})$ *is a robust saddle-point for the Lagrange functional* $L((b,c),\nu,\gamma,w,u,v)$ *associated with the modified variational control problem* $(P)_{(\bar{b},\bar{c})}$*.*

**Proof.** Since the relations (1)–(4), with $\bar{\mu}=1$, are satisfied for all $t\in\Theta$, except at points of discontinuity, the conditions (1) and (2) yield

$$\int_{\Gamma}(b-\bar{b})\Big\{\frac{\partial f_{\kappa}}{\partial b}(\bar{\pi},\bar{w})+\bar{\nu}^{T}g_{b}(\bar{\pi},\bar{u})+\bar{\gamma}^{T}h_{b}(\bar{\pi},\bar{v})\tag{8}$$

$$-D_{\sigma}\Big[\frac{\partial f_{\kappa}}{\partial b_{\sigma}}(\bar{\pi},\bar{w})+\bar{\nu}^{T}g_{b_{\sigma}}(\bar{\pi},\bar{u})+\bar{\gamma}^{T}h_{b_{\sigma}}(\bar{\pi},\bar{v})\Big]$$

$$+\frac{1}{n(\alpha,\beta)}D^{2}_{\alpha\beta}\Big[\frac{\partial f_{\kappa}}{\partial b_{\alpha\beta}}(\bar{\pi},\bar{w})+\bar{\nu}^{T}g_{b_{\alpha\beta}}(\bar{\pi},\bar{u})+\bar{\gamma}^{T}h_{b_{\alpha\beta}}(\bar{\pi},\bar{v})\Big]\Big\}\,dt^{\kappa}$$

$$+\int_{\Gamma}(c-\bar{c})\Big\{\frac{\partial f_{\kappa}}{\partial c}(\bar{\pi},\bar{w})+\bar{\nu}^{T}g_{c}(\bar{\pi},\bar{u})+\bar{\gamma}^{T}h_{c}(\bar{\pi},\bar{v})\Big\}\,dt^{\kappa}$$

$$=\int_{\Gamma}\Big[(b-\bar{b})\Big\{\frac{\partial f_{\kappa}}{\partial b}(\bar{\pi},\bar{w})+\bar{\nu}^{T}g_{b}(\bar{\pi},\bar{u})+\bar{\gamma}^{T}h_{b}(\bar{\pi},\bar{v})\Big\}$$

$$+(b_{\sigma}-\bar{b}_{\sigma})\Big\{\frac{\partial f_{\kappa}}{\partial b_{\sigma}}(\bar{\pi},\bar{w})+\bar{\nu}^{T}g_{b_{\sigma}}(\bar{\pi},\bar{u})+\bar{\gamma}^{T}h_{b_{\sigma}}(\bar{\pi},\bar{v})\Big\}$$

$$+\frac{1}{n(\alpha,\beta)}(b_{\alpha\beta}-\bar{b}_{\alpha\beta})\Big\{\frac{\partial f_{\kappa}}{\partial b_{\alpha\beta}}(\bar{\pi},\bar{w})+\bar{\nu}^{T}g_{b_{\alpha\beta}}(\bar{\pi},\bar{u})+\bar{\gamma}^{T}h_{b_{\alpha\beta}}(\bar{\pi},\bar{v})\Big\}\Big]\,dt^{\kappa}$$

$$+\int_{\Gamma}(c-\bar{c})\Big\{\frac{\partial f_{\kappa}}{\partial c}(\bar{\pi},\bar{w})+\bar{\nu}^{T}g_{c}(\bar{\pi},\bar{u})+\bar{\gamma}^{T}h_{c}(\bar{\pi},\bar{v})\Big\}\,dt^{\kappa}=0,$$

where we used the formula of integration by parts, the result "*A total divergence is equal to a total derivative*" (see Treanţă [4]), and the boundary conditions formulated in the considered problem.

Further, taking into account the assumed convexity of the multiple integral functionals $\int_{\Gamma}\bar{\nu}^{T}g(\pi,\bar{u})\,dt^{\kappa}$ and $\int_{\Gamma}\bar{\gamma}^{T}h(\pi,\bar{v})\,dt^{\kappa}$ at $(\bar{b},\bar{c})$, we obtain

$$\int_{\Gamma}\big\{\bar{\nu}^{T}g(\pi,\bar{u})-\bar{\nu}^{T}g(\bar{\pi},\bar{u})\big\}\,dt^{\kappa}\ge\int_{\Gamma}(b-\bar{b})\,\bar{\nu}^{T}g_{b}(\bar{\pi},\bar{u})\,dt^{\kappa}$$

$$+\int_{\Gamma}(b_{\sigma}-\bar{b}_{\sigma})\,\bar{\nu}^{T}g_{b_{\sigma}}(\bar{\pi},\bar{u})\,dt^{\kappa}+\frac{1}{n(\alpha,\beta)}\int_{\Gamma}(b_{\alpha\beta}-\bar{b}_{\alpha\beta})\,\bar{\nu}^{T}g_{b_{\alpha\beta}}(\bar{\pi},\bar{u})\,dt^{\kappa}$$

$$+\int_{\Gamma}(c-\bar{c})\,\bar{\nu}^{T}g_{c}(\bar{\pi},\bar{u})\,dt^{\kappa},$$

$$\int_{\Gamma}\big\{\bar{\gamma}^{T}h(\pi,\bar{v})-\bar{\gamma}^{T}h(\bar{\pi},\bar{v})\big\}\,dt^{\kappa}\ge\int_{\Gamma}(b-\bar{b})\,\bar{\gamma}^{T}h_{b}(\bar{\pi},\bar{v})\,dt^{\kappa}$$

$$+\int_{\Gamma}(b_{\sigma}-\bar{b}_{\sigma})\,\bar{\gamma}^{T}h_{b_{\sigma}}(\bar{\pi},\bar{v})\,dt^{\kappa}+\frac{1}{n(\alpha,\beta)}\int_{\Gamma}(b_{\alpha\beta}-\bar{b}_{\alpha\beta})\,\bar{\gamma}^{T}h_{b_{\alpha\beta}}(\bar{\pi},\bar{v})\,dt^{\kappa}$$

$$+\int_{\Gamma}(c-\bar{c})\,\bar{\gamma}^{T}h_{c}(\bar{\pi},\bar{v})\,dt^{\kappa},$$

implying

$$\int_{\Gamma}\big\{\bar{\nu}^{T}g(\pi,\bar{u})+\bar{\gamma}^{T}h(\pi,\bar{v})\big\}\,dt^{\kappa}-\int_{\Gamma}\big\{\bar{\nu}^{T}g(\bar{\pi},\bar{u})+\bar{\gamma}^{T}h(\bar{\pi},\bar{v})\big\}\,dt^{\kappa}$$

$$\ge\int_{\Gamma}(b-\bar{b})\big[\bar{\nu}^{T}g_{b}(\bar{\pi},\bar{u})+\bar{\gamma}^{T}h_{b}(\bar{\pi},\bar{v})\big]\,dt^{\kappa}+\int_{\Gamma}(b_{\sigma}-\bar{b}_{\sigma})\big[\bar{\nu}^{T}g_{b_{\sigma}}(\bar{\pi},\bar{u})+\bar{\gamma}^{T}h_{b_{\sigma}}(\bar{\pi},\bar{v})\big]\,dt^{\kappa}$$

$$+\frac{1}{n(\alpha,\beta)}\int_{\Gamma}(b_{\alpha\beta}-\bar{b}_{\alpha\beta})\big[\bar{\nu}^{T}g_{b_{\alpha\beta}}(\bar{\pi},\bar{u})+\bar{\gamma}^{T}h_{b_{\alpha\beta}}(\bar{\pi},\bar{v})\big]\,dt^{\kappa}+\int_{\Gamma}(c-\bar{c})\big[\bar{\nu}^{T}g_{c}(\bar{\pi},\bar{u})+\bar{\gamma}^{T}h_{c}(\bar{\pi},\bar{v})\big]\,dt^{\kappa}$$

$$=-\int_{\Gamma}(b-\bar{b})\frac{\partial f_{\kappa}}{\partial b}(\bar{\pi},\bar{w})\,dt^{\kappa}-\int_{\Gamma}(b_{\sigma}-\bar{b}_{\sigma})\frac{\partial f_{\kappa}}{\partial b_{\sigma}}(\bar{\pi},\bar{w})\,dt^{\kappa}$$

$$-\frac{1}{n(\alpha,\beta)}\int_{\Gamma}(b_{\alpha\beta}-\bar{b}_{\alpha\beta})\frac{\partial f_{\kappa}}{\partial b_{\alpha\beta}}(\bar{\pi},\bar{w})\,dt^{\kappa}-\int_{\Gamma}(c-\bar{c})\frac{\partial f_{\kappa}}{\partial c}(\bar{\pi},\bar{w})\,dt^{\kappa},$$

by considering (8). The previous inequality can be rewritten as follows

$$\int_{\Gamma}(b-\bar{b})\frac{\partial f_{\kappa}}{\partial b}(\bar{\pi},\bar{w})\,dt^{\kappa}+\int_{\Gamma}(b_{\sigma}-\bar{b}_{\sigma})\frac{\partial f_{\kappa}}{\partial b_{\sigma}}(\bar{\pi},\bar{w})\,dt^{\kappa}$$

$$+\frac{1}{n(\alpha,\beta)}\int_{\Gamma}(b_{\alpha\beta}-\bar{b}_{\alpha\beta})\frac{\partial f_{\kappa}}{\partial b_{\alpha\beta}}(\bar{\pi},\bar{w})\,dt^{\kappa}+\int_{\Gamma}(c-\bar{c})\frac{\partial f_{\kappa}}{\partial c}(\bar{\pi},\bar{w})\,dt^{\kappa}+\int_{\Gamma}\big\{\bar{\nu}^{T}g(\pi,\bar{u})+\bar{\gamma}^{T}h(\pi,\bar{v})\big\}\,dt^{\kappa}$$

$$\ge\int_{\Gamma}(\bar{b}-\bar{b})\frac{\partial f_{\kappa}}{\partial b}(\bar{\pi},\bar{w})\,dt^{\kappa}+\int_{\Gamma}(\bar{b}_{\sigma}-\bar{b}_{\sigma})\frac{\partial f_{\kappa}}{\partial b_{\sigma}}(\bar{\pi},\bar{w})\,dt^{\kappa}$$

$$+\frac{1}{n(\alpha,\beta)}\int_{\Gamma}(\bar{b}_{\alpha\beta}-\bar{b}_{\alpha\beta})\frac{\partial f_{\kappa}}{\partial b_{\alpha\beta}}(\bar{\pi},\bar{w})\,dt^{\kappa}+\int_{\Gamma}(\bar{c}-\bar{c})\frac{\partial f_{\kappa}}{\partial c}(\bar{\pi},\bar{w})\,dt^{\kappa}+\int_{\Gamma}\big\{\bar{\nu}^{T}g(\bar{\pi},\bar{u})+\bar{\gamma}^{T}h(\bar{\pi},\bar{v})\big\}\,dt^{\kappa},$$

which yields the inequality

$$L((b,c),\bar{\nu},\bar{\gamma},w,\bar{u},\bar{v})\ge L((\bar{b},\bar{c}),\bar{\nu},\bar{\gamma},w,\bar{u},\bar{v}),\quad\forall(b,c)\in\mathcal{B}\times\mathcal{C}.\tag{9}$$

Furthermore, the following inequality is satisfied

$$\int_{\Gamma}\nu^{T}g(\bar{\pi},u)\,dt^{\kappa}+\int_{\Gamma}\gamma^{T}h(\bar{\pi},v)\,dt^{\kappa}\le 0$$

for all $(\nu,\gamma)\in\mathbb{R}^{m}_{+}\times\mathbb{R}^{n}$ and $(u,v)\in\mathcal{U}\times\mathcal{V}$ and, using the feasibility of $(\bar{b},\bar{c})$, we obtain

$$\int_{\Gamma}\nu^{T}g(\bar{\pi},u)\,dt^{\kappa}+\int_{\Gamma}\gamma^{T}h(\bar{\pi},v)\,dt^{\kappa}$$

$$\le\int_{\Gamma}\bar{\nu}^{T}g(\bar{\pi},\bar{u})\,dt^{\kappa}+\int_{\Gamma}\bar{\gamma}^{T}h(\bar{\pi},\bar{v})\,dt^{\kappa},$$

or, equivalently,

$$\int_{\Gamma}(\bar{b}-\bar{b})\frac{\partial f_{\kappa}}{\partial b}(\bar{\pi},\bar{w})\,dt^{\kappa}+\int_{\Gamma}(\bar{b}_{\sigma}-\bar{b}_{\sigma})\frac{\partial f_{\kappa}}{\partial b_{\sigma}}(\bar{\pi},\bar{w})\,dt^{\kappa}$$

$$+\frac{1}{n(\alpha,\beta)}\int_{\Gamma}(\bar{b}_{\alpha\beta}-\bar{b}_{\alpha\beta})\frac{\partial f_{\kappa}}{\partial b_{\alpha\beta}}(\bar{\pi},\bar{w})\,dt^{\kappa}+\int_{\Gamma}(\bar{c}-\bar{c})\frac{\partial f_{\kappa}}{\partial c}(\bar{\pi},\bar{w})\,dt^{\kappa}$$

$$+\int_{\Gamma}\nu^{T}g(\bar{\pi},u)\,dt^{\kappa}+\int_{\Gamma}\gamma^{T}h(\bar{\pi},v)\,dt^{\kappa}$$

$$\le\int_{\Gamma}(\bar{b}-\bar{b})\frac{\partial f_{\kappa}}{\partial b}(\bar{\pi},\bar{w})\,dt^{\kappa}+\int_{\Gamma}(\bar{b}_{\sigma}-\bar{b}_{\sigma})\frac{\partial f_{\kappa}}{\partial b_{\sigma}}(\bar{\pi},\bar{w})\,dt^{\kappa}$$

$$+\frac{1}{n(\alpha,\beta)}\int_{\Gamma}(\bar{b}_{\alpha\beta}-\bar{b}_{\alpha\beta})\frac{\partial f_{\kappa}}{\partial b_{\alpha\beta}}(\bar{\pi},\bar{w})\,dt^{\kappa}+\int_{\Gamma}(\bar{c}-\bar{c})\frac{\partial f_{\kappa}}{\partial c}(\bar{\pi},\bar{w})\,dt^{\kappa}$$

$$+\int_{\Gamma}\bar{\nu}^{T}g(\bar{\pi},\bar{u})\,dt^{\kappa}+\int_{\Gamma}\bar{\gamma}^{T}h(\bar{\pi},\bar{v})\,dt^{\kappa},$$

which gives

$$L((\bar{b},\bar{c}),\nu,\gamma,w,u,v)\le L((\bar{b},\bar{c}),\bar{\nu},\bar{\gamma},w,\bar{u},\bar{v}),\quad\forall\nu\in\mathbb{R}^{m}_{+},\ \forall\gamma\in\mathbb{R}^{n},\ \forall(u,v)\in\mathcal{U}\times\mathcal{V}.\tag{10}$$

Consequently, by (9) and (10), we conclude that $((\bar{b},\bar{c}),\bar{\nu},\bar{\gamma},\bar{w},\bar{u},\bar{v})$ is a robust saddle-point for the Lagrange functional $L((b,c),\nu,\gamma,w,u,v)$ associated with the modified multi-dimensional variational control problem $(P)_{(\bar{b},\bar{c})}$, and the proof is complete.

**Illustrative application.** Let us minimize the mechanical work performed by the variable force $\bar{F}\big(c^{2}(t)+w_{1},\,c^{2}(t)+w_{2}\big)$, involving the uncertain parameters $w_{\kappa}\in\mathcal{W}_{\kappa}=[0,1]$, $\kappa=1,2$, in moving its point of application along a piecewise smooth curve $\Gamma$, contained in $\Theta=[0,3]^{2}=[0,3]\times[0,3]$ and joining the points $t_{0}=(0,0)$ and $t_{1}=(3,3)$, such that the following constraints

$$u_{1}(b-2)(b+2)\le 0,\quad t=(t^{1},t^{2})\in\Theta$$

$$\frac{\partial b}{\partial t^{1}}=3-c+v_{1},\quad t=(t^{1},t^{2})\in\Theta$$

$$\frac{\partial b}{\partial t^{2}}=3-c+v_{2},\quad t=(t^{1},t^{2})\in\Theta$$

$$b(0,0)=1,\quad b(3,3)=2$$

are satisfied, where $v_{\zeta}\in\mathcal{V}_{\zeta}=[1,2]$, $\zeta=1,2$, and $u_{1}\in\mathcal{U}_{1}=\left[\frac{1}{2},1\right]$. To solve the previous problem, for $m=1$, $n=p=2$, we consider

$$f_{\kappa}(\pi,w)\,dt^{\kappa}=f_{1}(\pi,w)\,dt^{1}+f_{2}(\pi,w)\,dt^{2}=(c^{2}(t)+w_{1})\,dt^{1}+(c^{2}(t)+w_{2})\,dt^{2}$$

and the constrained robust control problem:

$$(P1)\qquad\min_{(b(\cdot),c(\cdot))}\int_{\Gamma}f_{\kappa}(\pi,w)\,dt^{\kappa}$$

subject to

$$u_{1}(b-2)(b+2)\le 0,\quad t=(t^{1},t^{2})\in\Theta\tag{11}$$

$$\frac{\partial b}{\partial t^{1}}=3-c+v_{1},\quad t=(t^{1},t^{2})\in\Theta\tag{12}$$

$$\frac{\partial b}{\partial t^{2}}=3-c+v_{2},\quad t=(t^{1},t^{2})\in\Theta\tag{13}$$

$$b(0,0)=1,\quad b(3,3)=2.\tag{14}$$

The robust counterpart of (*P*1) is formulated as follows:

$$(RP1)\qquad\min_{(b(\cdot),c(\cdot))}\int_{\Gamma}\max_{w\in\mathcal{W}}f_{\kappa}(\pi,w)\,dt^{\kappa}$$

subject to

$$u_{1}(b-2)(b+2)\le 0,\quad\forall u_{1}\in\mathcal{U}_{1},\ t=(t^{1},t^{2})\in\Theta$$

$$\frac{\partial b}{\partial t^{1}}=3-c+v_{1},\quad\forall v_{1}\in\mathcal{V}_{1},\ t=(t^{1},t^{2})\in\Theta$$

$$\frac{\partial b}{\partial t^{2}}=3-c+v_{2},\quad\forall v_{2}\in\mathcal{V}_{2},\ t=(t^{1},t^{2})\in\Theta$$

$$b(0,0)=1,\quad b(3,3)=2.$$

Clearly, the set of all feasible solutions in (*RP*1) is

$$\mathcal{D}=\Big\{(b,c)\in\mathcal{B}\times\mathcal{C}:\ -2\le b\le 2,\ \frac{\partial b}{\partial t^{1}}=\frac{\partial b}{\partial t^{2}},\ b(0,0)=1,\ b(3,3)=2,$$

$$t\in\Theta,\ u_{1}\in\mathcal{U}_{1},\ v_{\zeta}\in\mathcal{V}_{\zeta},\ \zeta=1,2\Big\}.$$

Now, we are interested in finding a weak robust optimal solution to the problem $(P1)$. This means that we must find the control function $\bar{c}:\Theta\to\mathbb{R}$ (which determines the state function $\bar{b}:\Theta\to\mathbb{R}$) satisfying the constraint (11) and the dynamical system (12) and (13), together with the boundary conditions (14). Additionally, we assume that the state function is affine.
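With the affine ansatz, the candidate pair follows from elementary algebra: write $b(t)=a(t^{1}+t^{2})+1$, use $b(3,3)=2$ to fix the slope $a$, then read the constant control $c$ off the dynamical system. The sketch below is our own illustration of this arithmetic (exact rationals via the standard library; the variable names are ours), taking $v_{1}=2$ as in the solution reported further on:

```python
from fractions import Fraction

# Affine ansatz b(t) = a*(t^1 + t^2) + 1: it satisfies b(0,0) = 1 by
# construction and makes db/dt^1 = db/dt^2 = a automatic.
# Boundary condition b(3,3) = 2:  6*a + 1 = 2  =>  a = 1/6.
a = Fraction(1, 6)
assert a * (3 + 3) + 1 == 2

# Dynamical system db/dt^1 = 3 - c + v_1; with v_1 = 2 this gives
#   1/6 = 5 - c  =>  c = 29/6.
v1 = 2
c = 3 + v1 - a
print(a, c)   # 1/6 29/6
```

Exact rational arithmetic is used deliberately here, so the values $\frac{1}{6}$ and $\frac{29}{6}$ come out without floating-point noise.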

Let $(\bar{b},\bar{c})\in\mathcal{D}$ be a weak robust optimal solution to the problem $(P1)$ and consider $\max_{w\in\mathcal{W}}f_{\kappa}(\pi,w)=f_{\kappa}(\pi,\bar{w})$, $\kappa=1,2$. Then, according to Theorem 1, there exist a scalar $\bar{\mu}\in\mathbb{R}$, piecewise smooth functions $\bar{\nu}=\bar{\nu}_{1}(t)\in\mathbb{R}_{+}$ and $\bar{\gamma}=(\bar{\gamma}_{1}(t),\bar{\gamma}_{2}(t))\in\mathbb{R}^{2}$, and uncertainty parameters $\bar{u}_{1}\in\mathcal{U}_{1}$ and $\bar{v}_{\zeta}\in\mathcal{V}_{\zeta}$, $\zeta=1,2$, such that the following conditions

$$2\bar{\nu}_{1}\bar{u}_{1}\bar{b}+\frac{\partial\bar{\gamma}_{1}}{\partial t^{1}}+\frac{\partial\bar{\gamma}_{2}}{\partial t^{2}}=0,\tag{15}$$

$$2\bar{\mu}\bar{c}-\bar{\gamma}_{1}-\bar{\gamma}_{2}=0,\tag{16}$$

$$\bar{\nu}_{1}\bar{u}_{1}(\bar{b}^{2}-4)=0,\quad\bar{\nu}_{1}\ge 0,\quad\bar{u}_{1}\ge 0\tag{17}$$

hold for all *t* ∈ Θ, except at discontinuities.

One can easily verify that the robust necessary optimality conditions (15)–(17) are satisfied at $(\bar{b},\bar{c})=\left(\frac{1}{6}(t^{1}+t^{2})+1,\ \frac{29}{6}\right)$, with the Lagrange multipliers $\bar{\mu}=1$, $\bar{\nu}_{1}=0$, $\bar{\gamma}_{1}+\bar{\gamma}_{2}=d_{1}+d_{2}$ (with $d_{1}+d_{2}=\bar{\mu}\left(\frac{17}{3}+\bar{v}_{1}+\bar{v}_{2}\right)$), and the uncertain parameters $\bar{w}_{1}=\bar{w}_{2}=\bar{u}_{1}=1$ and $\bar{v}_{1}=\bar{v}_{2}=2\in[1,2]$. Further, it can also be easily verified that the objective functional $\int_{\Gamma}f_{\kappa}(\pi,\bar{w})\,dt^{\kappa}$ is convex at $(\bar{b},\bar{c})$ and that $((\bar{b},\bar{c}),\bar{\nu},\bar{\gamma},\bar{w},\bar{u},\bar{v})$ is a robust saddle-point for the Lagrange functional $L((b,c),\nu,\gamma,w,u,v)$ associated with the modified multi-dimensional variational control problem

$$(P1)_{(\bar{b},\bar{c})}\qquad\min_{(b(\cdot),c(\cdot))}\int_{\Gamma}\left(\frac{29}{3}+w_{1}\right)\left(c-\frac{29}{6}\right)dt^{1}+\left(\frac{29}{3}+w_{2}\right)\left(c-\frac{29}{6}\right)dt^{2}$$

subject to

$$u_{1}(b-2)(b+2)\le 0,\quad t=(t^{1},t^{2})\in\Theta$$

$$\frac{\partial b}{\partial t^{1}}=3-c+v_{1},\quad t=(t^{1},t^{2})\in\Theta$$

$$\frac{\partial b}{\partial t^{2}}=3-c+v_{2},\quad t=(t^{1},t^{2})\in\Theta$$

$$b(0,0)=1,\quad b(3,3)=2.$$

Hence, all the conditions of Theorem 2 are satisfied, which ensures that $(\bar{b},\bar{c})=\left(\frac{1}{6}(t^{1}+t^{2})+1,\ \frac{29}{6}\right)$ is a weak robust optimal solution to the problem $(P1)$.
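As a sanity check (our own numeric sketch, not a computation from the paper), the multiplier values above can be substituted into (15)–(17), and the worst-case cost of the original integrand can be evaluated along $\Gamma$; since the integrand is constant at $(\bar{b},\bar{c})$, the curvilinear integral is path-independent and reduces to endpoint differences:

```python
from fractions import Fraction

c_bar = Fraction(29, 6)            # constant optimal control
mu, nu1, u1 = 1, 0, 1              # multipliers reported in the text
v1 = v2 = 2
gamma_sum = mu * (Fraction(17, 3) + v1 + v2)   # gamma_1 + gamma_2 = 29/3

def b_bar(t1, t2):                 # affine state function
    return Fraction(1, 6) * (t1 + t2) + 1

# (15): the gammas are constant, so their partial derivatives vanish.
assert 2 * nu1 * u1 * b_bar(1, 2) + 0 + 0 == 0
# (16): 2*mu*c - (gamma_1 + gamma_2) = 0, since 2*(29/6) = 29/3.
assert 2 * mu * c_bar - gamma_sum == 0
# (17): complementary slackness holds trivially because nu1 = 0.
assert nu1 * u1 * (b_bar(1, 2) ** 2 - 4) == 0 and nu1 >= 0

# Worst-case cost of (c^2 + w) over w in [0, 1] is attained at w = 1, and
# int_Gamma dt^1 = int_Gamma dt^2 = 3 for any curve from (0,0) to (3,3).
work = (c_bar**2 + 1) * 3 + (c_bar**2 + 1) * 3
print(work)   # 877/6
```

The printed value $\frac{877}{6}$ is only an illustrative evaluation of the worst-case cost at the candidate solution; the paper itself does not report this number.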

#### **4. Conclusions and Further Development**

In this paper, by considering path-independent curvilinear integral cost functionals with mixed (equality and inequality) constraints involving data uncertainty and second-order partial derivatives, we have introduced new classes of robust optimization problems. More precisely, by using the notion of convexity for curvilinear integral functionals, the concept of a normal weak robust optimal solution, and the robust saddle-point of the associated Lagrange functional, we have established several characterization results for the problems under study.

As an immediate subsequent development of the results presented in this paper, the author mentions the study of well-posedness for the considered classes of robust control problems.

**Author Contributions:** Conceptualization, S.T.; Methodology, K.D. Both authors have read and agreed to the published version of the manuscript.

**Funding:** This research received no external funding.

**Data Availability Statement:** Not applicable.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**

