**Advances in Optimization and Nonlinear Analysis**

Editor **Savin Treanță**

MDPI • Basel • Beijing • Wuhan • Barcelona • Belgrade • Manchester • Tokyo • Cluj • Tianjin

*Editor* Savin Treanță, University Politehnica of Bucharest, Romania

*Editorial Office* MDPI, St. Alban-Anlage 66, 4052 Basel, Switzerland

This is a reprint of articles from the Special Issue published online in the open access journal *Fractal and Fractional* (ISSN 2504-3110) (available at: https://www.mdpi.com/journal/fractalfract/special_issues/optimization_nonlinear).

For citation purposes, cite each article independently as indicated on the article page online and as indicated below:

LastName, A.A.; LastName, B.B.; LastName, C.C. Article Title. *Journal Name* **Year**, *Volume Number*, Page Range.

**ISBN 978-3-0365-4749-7 (Hbk) ISBN 978-3-0365-4750-3 (PDF)**

© 2022 by the authors. Articles in this book are Open Access and distributed under the Creative Commons Attribution (CC BY) license, which allows users to download, copy, and build upon published articles, as long as the author and publisher are properly credited. This ensures maximum dissemination and a wider impact of our publications.

The book as a whole is distributed by MDPI under the terms and conditions of the Creative Commons license CC BY-NC-ND.

## **Contents**


Reprinted from: *Fractal Fract.* **2022**, *6*, 26, doi:10.3390/fractalfract6010026


## **About the Editor**

#### **Savin Treanță**

Savin Treanță is currently based in the Department of Applied Mathematics, Faculty of Applied Sciences, University Politehnica of Bucharest, Romania. His research interests include multi-objective optimization, optimal control, mathematical modelling, geometric PDEs, and information theory. He has published more than 100 scientific papers on these topics in prestigious journals of mathematics and engineering.

### *Editorial* **Advances in Optimization and Nonlinear Analysis**

**Savin Treanță 1,2**


#### **1. Introduction**

Optimization and nonlinear analysis have many applications in various fields of basic science, engineering, and natural phenomena. In this regard, we organized the Special Issue "Advances in Optimization and Nonlinear Analysis" to cover new advances in these mathematical areas. In this Special Issue, we focused on publishing research on optimization and nonlinear analysis, investigating the well-posedness and optimal solutions of new classes of (multiobjective) variational (control) problems governed by multiple and/or path-independent curvilinear integral cost functionals and by mixed and/or isoperimetric constraints involving first- and second-order partial differential equations. Additionally, applications of fractional calculus and related subjects (variational inequalities, equilibrium problems, fixed point problems, evolutionary problems, and so on) were considered. In response to our invitation, we received 41 papers from 22 countries (Egypt, Saudi Arabia, Morocco, Pakistan, Mexico, Romania, China, Iran, Tunisia, South Africa, Yemen, Korea, Turkey, Bangladesh, Australia, Indonesia, Thailand, India, Ecuador, Albania, Spain, and Malaysia), of which 15 were published and 26 were rejected or withdrawn.

#### **2. Brief Overview of the Contributions**

In a review by Omar et al. [1], the spiral dynamics optimization (SDO) algorithm is comprehensively surveyed. The SDO algorithm is one of the most straightforward physics-based optimization algorithms and has been successfully applied in a broad range of fields. The review describes recent advances of the SDO algorithm, including its adaptive, improved, and hybrid variants. The growth of the SDO algorithm, its application in various areas, its theoretical analysis, and its comparison with preceding and other algorithms are also described in detail. A detailed account of different spiral paths, their characteristics, and the use of these spiral approaches in developing and improving other optimization algorithms is presented. The review concludes with the current work on the SDO algorithm, highlighting its shortcomings and suggesting possible directions for future research.

In [2], Treanță studies the well-posedness of a new class of optimization problems with variational inequality constraints involving second-order partial derivatives. More precisely, using the notions of lower semicontinuity, pseudomonotonicity, hemicontinuity, and monotonicity for a multiple integral functional, and by introducing the set of approximating solutions for the considered class of constrained optimization problems, he establishes several characterization results on well-posedness. Furthermore, some examples are presented to illustrate the theoretical developments of the paper.

Thakur et al.'s [3] study in this Special Issue investigates the existence of positive solutions for a class of fractional differential equations of arbitrary order *δ* > 2, subject to boundary conditions that include an integral operator of fractional type. This type of boundary condition makes it possible to account for heterogeneity in the dependence specified by the restriction added to the equation, a relevant issue for applications. An existence result is

**Citation:** Treanță, S. Advances in Optimization and Nonlinear Analysis. *Fractal Fract.* **2022**, *6*, 364. https://doi.org/10.3390/fractalfract6070364

Received: 28 June 2022 Accepted: 29 June 2022 Published: 30 June 2022

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2022 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

obtained for the sublinear and superlinear cases by using the Guo–Krasnosel'skii fixed point theorem, through the definition of adequate conical shells that localize the solution. As additional tools in the procedure, Thakur et al. obtain the explicit expression of the Green's function associated with an auxiliary linear fractional boundary value problem and study some of its properties, such as its sign and some useful upper and lower estimates. Finally, an example is given to illustrate the results.

A parametric intuitionistic fuzzy multi-objective fractional transportation problem (PIF-MOFTP) is analyzed in El Sayed et al. [4]. The PIF-MOFTP includes a single scalar parameter in the objective functions and intuitionistic fuzzy supply and demand. Based on the (*α*, *β*)-cut concept, a parametric (*α*, *β*)-MOFTP is proposed. Then, a fuzzy goal programming (FGP) approach is utilized to obtain an (*α*, *β*)-Pareto optimal solution. Moreover, the authors investigate the stability set of the first kind (SSFK) corresponding to the solution by extending the Kuhn–Tucker optimality conditions of multi-objective programming problems. An algorithm to determine the SSFK for the PIF-MOFTP is also presented.

Vivas-Cortez et al. [5] use integral inequalities involving many fractional integral operators in order to solve various fractional differential equations. More precisely, the authors generalize the Hermite–Jensen–Mercer-type inequalities for an *h*-convex function via a Caputo–Fabrizio fractional integral. They develop some novel Caputo–Fabrizio fractional integral inequalities. Also, they establish Caputo–Fabrizio fractional integral identities for differentiable mapping, and these will be used to give estimates for some fractional Hermite–Jensen–Mercer-type inequalities. Some familiar results are recaptured as special cases of these results.

In Lai et al. [6], the authors establish Fritz John stationary conditions for nonsmooth, nonlinear, semidefinite, multiobjective programs with vanishing constraints in terms of convexificators. They also introduce generalized Cottle-type and generalized Guignard-type constraint qualifications to derive strong S-stationary conditions from the Fritz John stationary conditions. Further, they establish strong S-stationary necessary and sufficient conditions independently of the Fritz John conditions. Some examples are provided to validate the established results.

The purpose of the next paper published in this Special Issue, by Khan et al. [7], is to introduce a new class of Hermite–Hadamard inequalities for LR-convex interval-valued functions by means of a pseudo-order relation defined on interval space. Moreover, the interval Hermite–Hadamard–Fejér inequality is also derived for LR-convex interval-valued functions. These inequalities generalize some new and known results. Useful examples that verify the applicability of the theory developed in this study are presented.

The Lieb concavity theorem, which successfully settled the Wigner–Yanase–Dyson conjecture, is an important application of matrix concave functions. Recently, the Thompson–Golden theorem, a corollary of the Lieb concavity theorem, was extended to deformed exponentials. Hence, it is worthwhile to study the Lieb concavity theorem for deformed exponentials. In Yang [8], the Pick function is used to obtain a generalization of the Lieb concavity theorem for deformed exponentials, and some corollaries associated with exterior algebra are obtained.

Nowadays, more and more consumers factor environmentally friendly products into their purchasing decisions. Companies need to adapt to these changes while paying attention to standard business mechanisms such as payment terms. The purpose of the study by Sultana et al. [9] is to optimize the entire profit function of a retailer and to find the optimal selling price and replenishment cycle when the demand rate depends on the price and the carbon emission reduction level. The study investigates an economic order quantity model whose demand function is positively influenced by the carbon emission reduction level as well as by the selling price. In this model, the supplier requests payment in advance on the purchase cost while offering a discount according to the payment-in-advance decision. Three payment-in-advance cases are considered: (1) payment in advance in equal instalments, (2) payment in advance in a single instalment, and (3) no payment in advance. Numerical examples and a sensitivity analysis illustrate the proposed model. The total profit increases in all three cases with higher values of the carbon emission reduction level. Further, the study finds that the profit is maximized in case 2, where the selling price and cycle length are minimized. The study thus provides a sustainable inventory model with payment-in-advance settings in which the demand rate depends on the price and the carbon emission reduction level.

Convexity is crucial in obtaining many forms of inequalities; as a result, there is a significant link between convexity and integral inequality. Due to the significance of these concepts, the purpose of Khan et al.'s [10] study is to introduce a new class of generalized convex interval-valued functions, called (*p*,*s*)-convex fuzzy interval-valued functions (for short, (*p*,*s*)-convex F-I-V-Fs) in the second sense, and to establish Hermite–Hadamard (for short, H–H) type inequalities for (*p*,*s*)-convex F-I-V-Fs using a fuzzy order relation. In addition, the authors demonstrate that the derived results include a large class of new and known inequalities for (*p*,*s*)-convex F-I-V-Fs and their variant forms as special instances. Furthermore, useful examples are given to demonstrate the usefulness of the theory produced in this study. These findings and diverse approaches may pave the way for future research in fuzzy optimization, modeling, and interval-valued functions.

In the paper by Sajjadmanesh et al. [11], the authors are interested in an inverse geometric problem for the three-dimensional Laplace equation: recovering an inner boundary of an annular domain. The work is based on the method of fundamental solutions (MFS), imposing the boundary Cauchy data in a least-squares sense and minimising the objective function. The approach can also be applied to noisy boundary Cauchy data. The simplicity and efficiency of the method are illustrated in several numerical examples.

Multiple attractors and their fractal basins of attraction can lead to the loss of global stability and integrity of Micro Electro Mechanical Systems (MEMS). In the paper of Zhu et al. [12], the multistability of a class of electrostatic bilateral capacitive micro-resonators is investigated in detail. First, the dynamical model is established and made dimensionless. Second, via the perturbation method and the numerical description of basins of attraction, the multiple periodic motions under primary resonance are discussed. It is found that varying the AC voltage can induce a safe jump of the micro-resonator. In addition, as the amplitude of the AC voltage increases, hidden attractors and chaos appear. The results may have some potential value in the design of MEMS devices.

The purpose of the study by Khan et al. [13] is to define a new class of harmonically convex functions, known as left and right harmonically convex interval-valued functions (for short, LR-H-convex IV-F), and to establish novel inclusions for a newly defined class of interval-valued functions (for short, IV-Fs) linked to Hermite–Hadamard (for short, H-H) and Hermite–Hadamard–Fejér (H-H-Fejér) type inequalities via interval-valued Riemann–Liouville fractional (for short, IV-RL-fractional) integrals. These findings enable the authors to identify a new class of inclusions that may be seen as significant generalizations. Some examples are included in the considered findings that may be used to determine the validity of the results.

The study by Daqaq et al. [14] describes a novel manta ray foraging optimization approach based on a non-dominated sorting strategy, namely NSMRFO, for solving multi-objective optimization problems (MOPs). The proposed optimizer can efficiently achieve good convergence and distribution in both the search and objective spaces. The NSMRFO algorithm follows the elitist non-dominated sorting mechanism; a crowding distance with a non-dominated ranking method is then integrated for archiving the Pareto front and improving the coverage of the optimal solutions. To judge the performance of NSMRFO, a set of test functions is considered, including classical unconstrained and constrained functions; a recent benchmark suite from the Congress on Evolutionary Computation 2020 (CEC2020) that contains twenty-four multimodal optimization problems (MMOPs); some engineering design problems; and a modified real-world problem, the IEEE 30-bus optimal power flow involving wind/solar/small-hydro power generation. Comparisons with multimodal multi-objective evolutionary algorithms (MMMOEAs) and other existing multi-objective approaches with respect to performance indicators reveal the ability of NSMRFO to balance coverage and convergence towards the true Pareto front (PF) and the Pareto optimal sets (PSs). The competing algorithms fail to provide better solutions, while the proposed NSMRFO optimizer attains almost all the Pareto optimal solutions.

The last paper published in this Special Issue (see Elkasem et al. [15]) presents an innovative strategy for load frequency control (LFC) using a combination of tilt-derivative and tilt-integral gains to form a TD-TI controller. Furthermore, a new improved optimization technique, namely the quantum chaos game optimizer (QCGO), is applied to tune the gains of the proposed TD-TI controller in two-area interconnected hybrid power systems. The effectiveness of the proposed QCGO is validated by comparing its performance with the traditional CGO and other optimizers on 23 benchmark functions. Correspondingly, the effectiveness of the proposed controller is validated by comparing its performance with other controllers, such as the proportional-integral-derivative (PID) controller based on different optimizers, the tilt-integral-derivative (TID) controller based on the CGO algorithm, and the TID controller based on the QCGO algorithm, where the effectiveness of the proposed TD-TI controller based on the QCGO algorithm is assessed using different load patterns (i.e., step load perturbation (SLP), series SLP, and random load variation (RLV)). Furthermore, the challenges of renewable energy penetration and communication time delay are considered to test the robustness of the proposed controller in achieving system stability. In addition, the integration of electric vehicles as dispersed energy storage units in both areas is considered to test their effectiveness in achieving power grid stability. The simulation results show that the proposed TD-TI controller based on the QCGO algorithm can achieve greater system stability under the different aforementioned challenges.

**Funding:** This research received no external funding.

**Acknowledgments:** I am thankful to the editors and reviewers of the *Fractal and Fractional* journal for their help and support.

**Conflicts of Interest:** The author declares no conflict of interest.

#### **References**


### *Review* **Recent Advances and Applications of Spiral Dynamics Optimization Algorithm: A Review**

**Madiah Binti Omar 1, Kishore Bingi 2,\*, B. Rajanarayan Prusty 2 and Rosdiazli Ibrahim 3**


**Abstract:** This paper comprehensively reviews the spiral dynamics optimization (SDO) algorithm and investigates its characteristics. The SDO algorithm is one of the most straightforward physics-based optimization algorithms and has been successfully applied in a broad range of fields. This paper describes recent advances of the SDO algorithm, including its adaptive, improved, and hybrid approaches. The growth of the SDO algorithm, its application in various areas, its theoretical analysis, and its comparison with preceding and other algorithms are also described in detail. A detailed description of different spiral paths, their characteristics, and the application of these spiral approaches in developing and improving other optimization algorithms is comprehensively presented. The review concludes with the current work on the SDO algorithm, highlighting its shortcomings and suggesting possible future research perspectives.

**Keywords:** advances of SDO; applications of SDO; metaheuristic optimization; nature-inspired algorithms; optimization problems; spiral dynamics optimization; spiral-inspired optimization algorithms; spiral paths

#### **1. Introduction**

In engineering applications, metaheuristic optimization algorithms are popular and widely used for computing optimal solutions [1]. This broad application is because:


A great variety of nature- and population-based metaheuristic optimization algorithms have been published in the literature [2]. As reported in [2], these algorithms are categorized into breeding-based, swarm intelligence-based, physics-based, chemistry-based, social human behavior-based, plant-based, and other types. Many of the metaheuristic optimization algorithms published in the literature are swarm intelligence-based. After swarm intelligence-based algorithms, physics-based algorithms are the most widely proposed and implemented in various applications [3,4]. As the name suggests, in swarm intelligence-based algorithms, some degree of intelligence is present in the search process while finding the optimal solution. In physics-based algorithms, by contrast, the process is based on specific physical laws or principles [3,5,6]. The main advantage of physics-based algorithms over the others is their straightforwardness: because the algorithm's strategy is based on fundamental physical principles, it can consistently and accurately represent the dynamics over the entire domain. Further, some physics-based algorithms also take advantage of a nature-inspired

**Citation:** Omar, M.B.; Bingi, K.; Prusty, B.R.; Ibrahim, R. Recent Advances and Applications of Spiral Dynamics Optimization Algorithm: A Review. *Fractal Fract.* **2022**, *6*, 27. https://doi.org/10.3390/fractalfract6010027

Academic Editor: Savin Treanță

Received: 16 December 2021 Accepted: 28 December 2021 Published: 2 January 2022



ratio, called the golden ratio, which helps the algorithm converge quickly and effectively to the optimal solution [7].

The most popular physics-based optimization algorithms are harmony search, the gravitational search algorithm (GSA), big bang big crunch, electromagnetic field optimization (EFO), galaxy-based search [8], ray optimization, magnetic optimization, spiral dynamics optimization [9], and water cycle optimization [10]. Spiral dynamics optimization (SDO) is one of the most straightforward physics-based algorithms, proposed by Tamura and Yasuda in 2011 and developed using the logarithmic spiral phenomenon in nature [9]. The algorithm is simple and has few control parameters. Moreover, it has fast computational speed, local searching capability, diversification in the early phase, and intensification in the later stage.

This review paper presents the origin and concept of the SDO algorithm for an *n*-dimensional system. The effect of varying the spiral parameters (radius and angle) for two- and three-dimensional systems is analyzed by generating the conventional and hypotrochoid spiral trajectories. Besides, recent advances in the SDO algorithm, including adaptive, improved, and hybrid versions, are highlighted. Current applications of SDO and its variants are also surveyed. Different types of spirals, their coordinates on the *xy*-plane, and their trajectories are generated to understand spiral behaviors. Further, the development of various novel optimization algorithms using these spirals is presented comprehensively. This review therefore helps guide researchers who are currently working, or willing to work, with SDO and its variants to solve various engineering problems. Moreover, it helps in developing new algorithms or improving existing ones using the spiral phenomenon.

The paper's remaining sections are organized as follows: the origin and concept of the SDO algorithm and the effect of the spiral parameter in developing search trajectories are presented in Section 2. Section 3 offers the recent adaptive, improved, and hybrid versions of the SDO algorithm. Section 4 gives the different types of spiral trajectories and a list of novel optimization algorithms created using these trajectories. The applications of SDO and its hybrid versions are presented in Section 5. Finally, the paper is concluded in Section 6.

#### **2. Spiral Dynamics Optimization Algorithm**

This section presents the origin and the concept of the SDO algorithm for two-dimensional and three-dimensional systems. A detailed analysis of the effect of varying the spiral parameters (radius and angle) is also presented.

#### *2.1. Origin*

Tamura and Yasuda developed the SDO algorithm in 2011 to mimic spiral phenomena in nature [9,11]. Many spirals occur in nature, for example in galaxies, aurorae, blackbuck horns, hurricanes, tornadoes, seashells, snails, ammonites, cabbage butterflies (Pieris brassicae), chameleon tails, seahorses, and fish vortices [12,13]. Spirals are also seen in ancient art created by humanity between 5000 BC and 1600 AD [12]. Over the years, several researchers have made efforts to understand spiral sequences and complexities and to develop equations and algorithms for spirals. Moreover, it is worth highlighting that the most frequently encountered spiral phenomenon in nature is the logarithmic spiral, which can be seen in galaxies, tropical cyclones, and nautilus shells [14]. The discrete process of generating a logarithmic spiral has been recognized as an effective search behavior in metaheuristics, which inspired the development of the spiral dynamics optimization algorithm.

#### *2.2. Concept*

In the SDO algorithm, the multipoint search function for an *n*-dimensional system is formulated as [15],

$$\mathbf{x}_{k+1} = rR^{(n)}(\theta)\mathbf{x}_k - \left(rR^{(n)}(\theta) - I_n\right)\mathbf{x}^*,\tag{1}$$

where *r* is the spiral radius, *R*<sup>(*n*)</sup>(*θ*) is the rotational matrix of order *n* × *n*, *θ* is the spiral rotation angle, *I<sub>n</sub>* is the identity matrix of order *n* × *n*, *x*\* is the spiral center, and *x<sub>k</sub>* and *x<sub>k+1</sub>* are the search-point positions at iterations *k* and *k* + 1, respectively.

The rotational matrix *R*<sup>(*n*)</sup>(*θ*) for an *n*-dimensional case on an arbitrary *x<sub>i</sub>x<sub>j</sub>*-plane is given as [9,16,17],

$$R^{(n)}(\theta) = \begin{bmatrix} 1 & 0 & 0 & \dots & 0 & 0 & 0 \\ 0 & 1 & 0 & \dots & 0 & 0 & 0 \\ 0 & 0 & \cos(\theta_{i,j}) & \dots & -\sin(\theta_{i,j}) & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots \\ 0 & 0 & \sin(\theta_{i,j}) & \dots & \cos(\theta_{i,j}) & 0 & 0 \\ 0 & 0 & 0 & \dots & 0 & 1 & 0 \\ 0 & 0 & 0 & \dots & 0 & 0 & 1 \end{bmatrix} \tag{2}$$

where *θ<sub>i,j</sub>* is the spiral rotation angle around the origin on the *x<sub>i</sub>x<sub>j</sub>*-plane.
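The plane-selective structure of (2) can be sketched in a few lines of NumPy; the function name and the 0-based index convention are our own choices, not from the original paper:

```python
import numpy as np

def rotation_matrix(n, i, j, theta):
    """Rotational matrix R^(n)(theta) acting in the x_i x_j-plane
    (0-based indices, i < j): a 2x2 rotation block embedded in the
    identity, as in Equation (2)."""
    R = np.eye(n)
    c, s = np.cos(theta), np.sin(theta)
    R[i, i], R[j, j] = c, c
    R[i, j], R[j, i] = -s, s
    return R

# The two-dimensional case reproduces Equation (3):
R2 = rotation_matrix(2, 0, 1, np.pi / 8)
```

For *n* = 3, choosing the planes (0, 1), (1, 2), and (0, 2) reproduces the three matrices in (4)–(6) below.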

From (2), the only possible rotational matrix *R*<sup>(2)</sup>(*θ*) for a two-dimensional system, on the *x*<sub>1</sub>*x*<sub>2</sub>-plane, is given as follows:

$$R^{(2)}(\theta) = \begin{bmatrix} \cos(\theta) & -\sin(\theta) \\ \sin(\theta) & \cos(\theta) \end{bmatrix} \tag{3}$$

On the other hand, the three possible rotational matrices *R*<sup>(3)</sup>(*θ*) for a three-dimensional system, on the *x*<sub>1</sub>*x*<sub>2</sub>-, *x*<sub>2</sub>*x*<sub>3</sub>-, and *x*<sub>1</sub>*x*<sub>3</sub>-planes, are respectively given as follows:

$$R_{1,2}^{(3)}(\theta) = \begin{bmatrix} \cos(\theta_{1,2}) & -\sin(\theta_{1,2}) & 0\\ \sin(\theta_{1,2}) & \cos(\theta_{1,2}) & 0\\ 0 & 0 & 1 \end{bmatrix} \tag{4}$$

$$R_{2,3}^{(3)}(\theta) = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos(\theta_{2,3}) & -\sin(\theta_{2,3}) \\ 0 & \sin(\theta_{2,3}) & \cos(\theta_{2,3}) \end{bmatrix} \text{, and} \tag{5}$$

$$R_{1,3}^{(3)}(\theta) = \begin{bmatrix} \cos(\theta_{1,3}) & 0 & -\sin(\theta_{1,3}) \\ 0 & 1 & 0 \\ \sin(\theta_{1,3}) & 0 & \cos(\theta_{1,3}) \end{bmatrix} . \tag{6}$$

From (1), it is to be noted that the model generates spiral trajectories around the center *x*\*, and these trajectories are classified into two types [18,19]:

• the conventional spiral, obtained for positive values of the radius *r*; and
• the hypotrochoid spiral, obtained for negative values of *r*.
From the above classification, the spiral's direction of rotation based on the sign of *θ* is classified as follows:

• anticlockwise rotation for positive values of *θ*; and
• clockwise rotation for negative values of *θ*.
The spiral trajectories for a two-dimensional system for various values of *r* ∈ [−1, 1] and *θ* = *π*/8 are shown in Figure 1. Similarly, the trajectories for various values of *θ* ∈ [−*π*/2, *π*/2] and *r* = 0.85 for the conventional spiral and *r* = −0.85 for the hypotrochoid spiral are shown in Figure 2. Further, the conventional and hypotrochoid spiral trajectories for both positive and negative values of *θ* are shown in Figure 3. In all these cases, the starting point used in the study is (25, 25).
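Trajectories like those in the figures can be reproduced by iterating update (1) with the 2-D rotational matrix of (3). The sketch below assumes the spiral center is at the origin (the text specifies only the starting point (25, 25)):

```python
import numpy as np

def spiral_trajectory(x0, center, r, theta, steps):
    """Iterate the SDO update x_{k+1} = r R x_k - (r R - I) x* of Eq. (1).
    Positive r traces a conventional spiral, negative r a hypotrochoid one."""
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    A = r * R
    x = np.asarray(x0, dtype=float)
    c = np.asarray(center, dtype=float)
    pts = []
    for _ in range(steps):
        x = A @ x - (A - np.eye(2)) @ c  # rotate by theta, shrink toward x*
        pts.append(x.copy())
    return np.array(pts)

traj = spiral_trajectory([25, 25], [0, 0], r=0.85, theta=np.pi / 8, steps=40)
```

With the center at the origin, each step rotates the point by *θ* and scales its distance to the center by |*r*|, so the radius decays geometrically, matching the behavior described for Figures 1–3.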

**Figure 1.** Spiral trajectories for a two-dimensional system for various values of *r* ∈ [−1, 1] and *θ* = *π*/8: (**a**) conventional spiral and (**b**) hypotrochoid spiral.

**Figure 2.** Spiral trajectories for a two-dimensional system for various values of *θ* ∈ [−*π*/2, *π*/2] and *r* = 0.85 for the conventional spiral in (**a**) and *r* = −0.85 for the hypotrochoid spiral in (**b**).

**Figure 3.** Spiral trajectories for a two-dimensional system for both positive and negative values of *θ*: (**a**) conventional spiral and (**b**) hypotrochoid spiral.

Observing the notations *k* = 0, *k* = 1, ..., *k* = 4 on the spiral trajectories in Figures 1–3, it can be noted that at each iteration, the spiral point moves from its previous position by an angle *θ* and then tends towards the center *x*\*. The net effect is thus a spiral movement of the initial point towards the center. The trajectories also show that the angle *θ* controls the spiral curve: a smoother curve is achieved for smaller values of *θ*, compared to the boxy curve obtained with larger values of *θ* (refer to Figure 2a). The spiral trajectories in Figure 3 show clockwise and anticlockwise spiral movement for negative and positive angles, respectively. On the other hand, the spiral radius *r* controls the spiral movement towards the center *x*\*: a quick movement towards the center is achieved for smaller values of *r*, compared to the slow movement with larger values of *r* (refer to Figures 1 and 2). The hypotrochoid spirals shown in Figures 1b, 2b and 3b are internal trajectories generated along a circle. The advantage of a hypotrochoid spiral over a conventional spiral is that it does not exceed the search space and can search most of the area within it.

Similarly, the conventional and hypotrochoid spiral trajectories for a three-dimensional system with *r* = 0.95 and *θ* = *π*/4 are shown in Figure 4. The trajectory in Figure 4a on the *x*<sub>1</sub>*x*<sub>2</sub>-plane is obtained using the rotational matrix in (4). Similarly, the trajectories in Figure 4b,c on the *x*<sub>2</sub>*x*<sub>3</sub>- and *x*<sub>1</sub>*x*<sub>3</sub>-planes are obtained using the rotational matrices in (5) and (6), respectively. The starting point used is (25, 25, 25) in all of these cases. The trajectories depict the conventional spiral for a positive *r* value and the hypotrochoid spiral for a negative *r* value. As the *θ* value is positive, all the spiral movements are anticlockwise. As mentioned earlier, the advantage of hypotrochoid spirals is that they can search most of the area in the search space, as shown in Figure 4. The search space of a conventional spiral lies only in the positive region, while a hypotrochoid spiral searches both the negative and positive regions. The trajectories in the figure thus confirm that hypotrochoid spirals can search most of the area in the search space.
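The plane-specific behavior of (4) can be checked numerically: with *R*<sub>1,2</sub><sup>(3)</sup>, update (1) spirals in the *x*<sub>1</sub>*x*<sub>2</sub>-plane while the identity block pulls *x*<sub>3</sub> geometrically toward the center. The starting point (25, 25, 25) follows the text; the center at the origin is our assumption:

```python
import numpy as np

def r3_12(theta):
    """Rotational matrix R_{1,2}^{(3)} of Equation (4): rotation in the
    x1x2-plane, identity on x3."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def sdo_step(x, center, r, theta):
    """One application of Equation (1) with the 3-D rotational matrix."""
    A = r * r3_12(theta)
    return A @ x - (A - np.eye(3)) @ center

x = np.array([25.0, 25.0, 25.0])
center = np.zeros(3)
for _ in range(5):
    x = sdo_step(x, center, r=0.95, theta=np.pi / 4)
# With the center at the origin, x3 shrinks by a factor r = 0.95 per step
# while (x1, x2) trace an anticlockwise spiral in their plane.
```

Swapping in the matrices of (5) or (6) moves the spiral to the *x*<sub>2</sub>*x*<sub>3</sub>- or *x*<sub>1</sub>*x*<sub>3</sub>-plane, as in Figure 4b,c.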

**Figure 4.** Conventional and hypotrochoid spiral trajectories for a three-dimensional system with *r* = 0.95 and *θ* = *π*/4: (**a**) on the *x*<sub>1</sub>*x*<sub>2</sub>-plane with *R*<sub>1,2</sub>; (**b**) on the *x*<sub>2</sub>*x*<sub>3</sub>-plane with *R*<sub>2,3</sub>; (**c**) on the *x*<sub>1</sub>*x*<sub>3</sub>-plane with *R*<sub>1,3</sub>.

#### **3. Advances of Spiral Dynamics Optimization Algorithm**

This section presents the recent adaptive, improved, and hybrid versions of the SDO algorithm.

#### *3.1. Adaptive Versions of Spiral Dynamics Optimization Algorithm*

Researchers have developed adaptive versions of the SDO algorithm by dynamically varying the spiral radius and angle based on the fitness value at each iteration. The four types of adaptive approaches proposed in the literature are linear, quadratic, exponential, and fuzzy [16,20,21]. The mathematical functions for the spiral radius and angle under these approaches are given in Figure 5.

In the figure, the notations are defined as follows:


• *Y*<sub>Fit</sub> is the difference between the fitness value at the current iteration, *f*(*x<sub>i</sub>*(*k*)), and the best fitness, min(*f*(*x<sub>i</sub>*(*k*))), defined as

$$Y_{\text{Fit}} = f(\mathbf{x}_i(k)) - \min(f(\mathbf{x}_i(k))).\tag{7}$$

In [17], using the linear adaptive approach in Figure 5, the authors proposed an adaptive hypotrochoid SDO algorithm, which performed best on various benchmark functions compared to conventional techniques. In [22], a self-adaptive approach is proposed for the SDO algorithm to update the spiral radius and angle during optimization; its advantage is that all search points are updated by randomly tuning the parameter values in each iteration. Similarly, the authors of [23] proposed an adaptive SDO incorporating three mechanisms: (i) a bi-considering update, (ii) a self-adaptive radius, and (iii) a punish mechanism. The proposed algorithm boosted the optimization efficiency and avoided being trapped in local minima.

**Figure 5.** Adaptive versions of the SDO algorithm.
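To make the adaptive idea concrete, the sketch below maps the fitness gap *Y*Fit of Eq. (7) linearly onto a spiral radius between assumed bounds `r_min` and `r_max`. The actual linear, quadratic, exponential, and fuzzy functions used in [16,20,21] are those of Figure 5; this is only an illustrative scheme, not the published one:

```python
def linear_adaptive_radius(f_current, f_best, r_min=0.5, r_max=0.99):
    """Illustrative linear adaptation of the spiral radius.

    A large fitness gap Y_Fit (point far from the best) yields a radius
    near r_max (exploration); a zero gap yields r_min (exploitation).
    The bounds r_min and r_max are assumed values, not from the papers.
    """
    y_fit = f_current - f_best          # Eq. (7)
    scale = y_fit / (y_fit + 1.0)       # squash the gap into [0, 1)
    return r_min + (r_max - r_min) * scale
```

A point sitting at the best fitness thus receives the smallest radius and spirals tightly around the current optimum, while distant points keep exploring.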

#### *3.2. Improved Versions of Spiral Dynamics Optimization Algorithm*

As mentioned earlier in Section 2.2, the conventional SDO algorithm settles into local optima at the end of the optimization process due to insufficient exploration of the search space. To avoid this problem, Nasir et al. proposed an improved SDO algorithm using the elimination–dispersal strategy of the bacterial foraging algorithm [24,25]. In this enhanced version, the algorithm structure is kept the same, but two new phases, namely elimination and dispersal, are introduced. Similarly, Hashim et al. proposed a chaotic SDO algorithm by incorporating logistic chaotic map patterns into the conventional SDO [26,27]. The chaotic map pattern distributes the initial population, which is generated randomly in the conventional SDO. Moreover, the search strategy of the artificial bee colony optimization algorithm is employed to improve the SDO's exploration capability. The same authors also proposed a greedy SDO algorithm by incorporating a greedy selection stage and a chaotic logistic map into the conventional SDO [28]. In this selection stage, the obtained solution is compared to the previous value before updating the spiral positions. The authors of [18,19] proposed the hypotrochoid SDO algorithm, in which the search points follow a hypotrochoid spiral rather than the conventional spiral. The hypotrochoid SDO can explore the search space more effectively and cover the whole neighborhood of the optimal center. Experimental validation on the optimal placement of triaxial accelerometers in the Shanghai Tower in China [19] and on the sizing and layout of truss structures [18] showed better performance of the hypotrochoid SDO than its predecessors.

The SDO algorithm in Section 2.2 is developed by exploiting a feature of the logarithmic spiral and is also known as a deterministic, or direct-solving, metaheuristic optimization algorithm. One of its significant drawbacks is slow convergence. Therefore, the authors of [29–31] proposed a stochastic SDO algorithm by adding random disturbances at each search point. Similarly, the authors of [32] introduced an iterative SDO algorithm for analyzing the information in blurred images; the model's output is fed back as input to the same model iteratively, so the optimization searches for the sharp image spirally, starting from the blurred vision at the initial stage. On the other hand, the authors of [33] proposed a distributed SDO algorithm to increase diversity in the search space. In the conventional SDO algorithm, the search points rotate spirally around the optimal center only, so the algorithm falls into a local minimum quickly. In the proposed distributed SDO algorithm, the population of search points is instead split into sub-populations to increase diversity and cover the whole search space. A summary of all these approaches is given in Figure 6.

**Figure 6.** Improved versions of the SDO algorithm.

#### *3.3. Hybrid Versions of Spiral Dynamics Optimization Algorithm*

From the literature review, the following points on the performance of the SDO algorithm are worth highlighting. SDO has the advantages of a simple structure, few control parameters, and early diversification and intensification strategies. However, its ability to search the whole search space is poor [20,34], its exploration mechanism needs improvement [35], and the algorithm gets trapped in local optima easily [33].

Thus, to improve the performance of SDO, researchers have proposed the hybridization of SDO with other algorithms. Further, various algorithms' performance has also been enhanced using SDO. The hybrid versions of the SDO algorithm presented in the literature used an artificial bee colony (ABC) [36,37], antlion optimization (ALO) [38], bacterial chemotaxis algorithm (BCA) [20,34,39], bacterial foraging algorithm (BFA) [35,40,41], biogeography-based optimization (BBO) [42], cuckoo search (CS) [43], genetic algorithm (GA) [44], particle swarm optimization (PSO) [45–48], sine-cosine algorithm (SCA) [49], and teaching–learning-based optimization (TLBO) [50], as shown in Figure 7. As shown in the figure, the excellent exploitation strategy of SDO is hybridized with the fast exploration strategy of another algorithm to balance both the exploitation and exploration phases.

**Figure 7.** Hybrid versions of the SDO algorithm.

Moreover, there are several other novel optimization algorithms in which spiral behavior or trajectory is used during the development of the algorithm. A detailed description of various spiral paths and a list of novel spiral path-inspired optimization algorithms are discussed in the following section.

#### **4. Spiral Path Inspired Optimization Algorithms**

The first part of this section presents the various spiral trajectories used to develop the optimization algorithms. Then, the list of different novel optimization algorithms created using these spirals is shown.

#### *4.1. Spiral Paths*

Patterns, the visible regularities found in nature, include trees, spirals, waves, etc. Such visual patterns are modeled using chaos theory, fractals, spirals, and so on. In some natural patterns, spirals and fractals are related. For instance, a variant of the logarithmic spiral, the Fibonacci spiral, is based on the golden ratio and Fibonacci numbers. As it is logarithmic, the curve appears the same at every scale and can be considered a fractal; Romanesco broccoli is an example of such fractal spirals. These patterns have inspired researchers to develop optimization algorithms. The different types of spiral trajectories used in the research include:


A detailed description of the five most widely used spirals, namely the Archimedean, logarithmic, rose, epitrochoid, and hypotrochoid spirals, is provided below. This description includes the coordinates on the *xy*-plane and trajectories showing the effect of each parameter on the *xy*-plane.

#### 4.1.1. Logarithmic Spiral

The logarithmic spiral often appears in nature. For instance, the nautilus shell cutaway, Iceland's low-pressure areas, galaxy arms, and tropical cyclones usually take a logarithmic spiral shape. The logarithmic spiral is also known as the equiangular or growth spiral because the spiral distance increases in geometric progression. The coordinates of a logarithmic spiral on the *xy*-plane are given as follows [13,38]:

$$x(\phi) = a \cdot e^{b\phi} \cdot \cos(\phi), \ y(\phi) = a \cdot e^{b\phi} \cdot \sin(\phi), \tag{8}$$

where *φ* is the angle, and *a* and *b* are arbitrary constants.

The logarithmic spiral for *a* = 0.18, *φ* from −4*π* to 4*π*, and various *b* values is shown in Figure 8. The spiral in Figure 8a is obtained for positive values of *b*, while that in Figure 8b is obtained for negative values. The trajectories in Figure 8 show that the parameter *b* controls the tightness and the direction of the spiral. The trajectories in Figure 8a also depict the logarithmic spiral's property that, for positive *b* values, the spiral evolves in an anticlockwise direction as *φ* tends to +∞ and in a clockwise direction as *φ* tends to −∞. For negative *b* values, the spiral twists in the opposite direction.

**Figure 8.** Logarithmic spiral with various values of *b*: (**a**) logarithmic spiral with positive *b* values and (**b**) logarithmic spiral with negative *b* values.
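Equation (8) can be evaluated directly; the short sketch below generates a point on the spiral and illustrates the geometric-progression property mentioned above, namely that one full turn multiplies the radius by a factor of *e*^(2*πb*):

```python
import math

def log_spiral_point(a, b, phi):
    """Point (x, y) on the logarithmic spiral of Eq. (8)."""
    r = a * math.exp(b * phi)
    return r * math.cos(phi), r * math.sin(phi)

# One full anticlockwise turn multiplies the radius by exp(2*pi*b):
r0 = math.hypot(*log_spiral_point(0.18, 0.1, 0.0))
r1 = math.hypot(*log_spiral_point(0.18, 0.1, 2 * math.pi))
ratio = r1 / r0   # equals exp(0.2 * pi)
```

The parameter values (`a = 0.18`, `b = 0.1`) match the Figure 8 setting; the ratio is independent of `a`.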

#### 4.1.2. Archimedean Spiral

The Archimedean spiral is another famous spiral with significant applications in engineering, biology, etc. It is also known as the arithmetic spiral and can be seen in nature in ferns, millipedes, and human fingerprints. The spiral trajectory is the locus of a point that moves away from a fixed point with constant speed along a line rotating with constant angular velocity. The coordinates of an Archimedean spiral on the *xy*-plane are given as follows [13,38]:

$$x(\psi) = (c + d \cdot \psi) \cdot \cos(\psi), \; y(\psi) = (c + d \cdot \psi) \cdot \sin(\psi), \tag{9}$$

where *c* and *d* are constants that define the spiral's initial radius and the spacing of successive turns, respectively.

The Archimedean spiral for *c* = 0.5, *ψ* from 0 to 7*π*, and various *d* values is shown in Figure 9. The trajectory in Figure 9a is obtained for positive values of *d*, while that in Figure 9b is obtained for negative values. As the initial radius is *c* = 0.5, all the spirals start at this value, as shown in Figure 9. The growth rate *d* controls the increment per revolution; thus, the distance between successive turns is constant and proportional to *d*. Moreover, the parameter *d* controls the direction of the spiral: for positive *d* values, the spiral in Figure 9a evolves in an anticlockwise direction, whereas for negative *d* values it evolves clockwise.

**Figure 9.** Archimedean spiral with various values of *d*: (**a**) Archimedean spiral with positive *d* values and (**b**) Archimedean spiral with negative *d* values.
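The constant-spacing property of Eq. (9) can be checked numerically: the radial distance gained over one full turn is 2*πd*, independent of *ψ*. A minimal sketch:

```python
import math

def archimedean_point(c, d, psi):
    """Point (x, y) on the Archimedean spiral of Eq. (9)."""
    r = c + d * psi
    return r * math.cos(psi), r * math.sin(psi)

# Radial distance from the origin at angle psi and one turn later:
r0 = math.hypot(*archimedean_point(0.5, 0.2, math.pi))
r1 = math.hypot(*archimedean_point(0.5, 0.2, 3 * math.pi))
spacing = r1 - r0   # equals 2 * pi * d = 0.4 * pi
```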

Observing the spirals in Figures 8 and 9 reveals a difference between the Archimedean and logarithmic spirals that is worth highlighting. In the Archimedean spiral, the intersection points of a ray from the origin with successive turnings have a constant separation distance. In a logarithmic spiral, by contrast, the distances of these intersection points from the origin form a geometric progression.

#### 4.1.3. Rose Spiral

As the name suggests, the rose spiral is often seen in the unfurling of rose petals and consists of symmetric, periodic arc curves. The coordinates of a rose spiral on the *xy*-plane are given as follows [13,38]:

$$x(\xi) = e \cdot \cos(n\xi) \cdot \cos(\xi), \ y(\xi) = e \cdot \cos(n\xi) \cdot \sin(\xi), \tag{10}$$

where *e* and *n* are constants that define the petal length and number, respectively.

The rose spiral with various values of *e* and *n* is shown in Figure 10. The spiral in Figure 10a is obtained for *n* = 2 and multiple values of *e*; similarly, the spiral in Figure 10b is obtained for *e* = 2 and various values of *n*. In both cases, *ξ* ranges from 0 to 2*π*. The spirals in Figure 10a depict that the parameter *e* controls the petal length: as the value of *e* increases, the petal length increases. The spirals in Figure 10b show that *n* controls the number and size of the petals. For an even value of *n*, the number of petals is 2*n*, whereas for odd values of *n*, the number of petals is *n*.

**Figure 10.** Rose spiral with various values of *e* and *n*: (**a**) rose spiral with constant *n* value and variable *e* and (**b**) rose spiral with constant *e* value and variable *n*.
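A point on the rose curve of Eq. (10) is straightforward to compute; at *ξ* = 0 the curve starts at distance *e* from the origin on the *x*-axis, which is the tip of one petal:

```python
import math

def rose_point(e, n, xi):
    """Point (x, y) on the rose curve of Eq. (10)."""
    r = e * math.cos(n * xi)          # petal-length modulation
    return r * math.cos(xi), r * math.sin(xi)
```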

4.1.4. Epitrochoid and Hypotrochoid Spirals

Epitrochoid and hypotrochoid spirals are a family of curves generated by a point attached to a rolling circle. The rolling circle rolls around the outside of a fixed circle to form an epitrochoid spiral; to create a hypotrochoid spiral, it rolls around the inside of the fixed circle. Let *ρ*1 and *ρ*2 be the radii of the rolling and fixed circles, respectively, and let *f* be the distance between the point and the rolling circle's center. The coordinates of an epitrochoid spiral on the *xy*-plane are given as [13,38],

$$\begin{aligned} x(\zeta) &= (\rho_1 + \rho_2) \cdot \cos(\zeta) - f \cdot \cos\left(\frac{\rho_1 + \rho_2}{\rho_1}\zeta\right), \text{ and} \\ y(\zeta) &= (\rho_1 + \rho_2) \cdot \sin(\zeta) - f \cdot \sin\left(\frac{\rho_1 + \rho_2}{\rho_1}\zeta\right). \end{aligned} \tag{11}$$

Similarly, the coordinates of a hypotrochoid spiral on the *xy*-plane are given as follows:

$$\begin{aligned} x(\zeta) &= (\rho_2 - \rho_1) \cdot \cos(\zeta) + f \cdot \cos\left(\frac{\rho_2 - \rho_1}{\rho_1}\zeta\right), \text{ and} \\ y(\zeta) &= (\rho_2 - \rho_1) \cdot \sin(\zeta) - f \cdot \sin\left(\frac{\rho_2 - \rho_1}{\rho_1}\zeta\right). \end{aligned} \tag{12}$$

The trajectories of the epitrochoid and hypotrochoid spirals for *ρ*1 = 0.8, *ρ*2 = 3, *f* = 2.5, and *ζ* ranging from 0 to 10*π* are shown in Figure 11a,b, respectively. In both spirals, it should be noted that *ζ* significantly affects the spiral's shape: if *ζ* ranges only from 0 to 2*π*, the rolling circle revolves only once around the fixed circle, and the whole pattern of the spiral cannot be obtained. These spirals can be drawn using Spirograph toys and often appear in nature; for instance, planetary orbits in a geocentric model and the combustion chambers of Wankel engines take these spiral shapes.

**Figure 11.** Epitrochoid and hypotrochoid spirals for *ρ*1 = 0.8, *ρ*2 = 3, and *f* = 2.5: (**a**) epitrochoid spiral and (**b**) hypotrochoid spiral.
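Equations (11) and (12) translate directly into code. At *ζ* = 0 the epitrochoid starts at *x* = *ρ*1 + *ρ*2 − *f* and the hypotrochoid at *x* = *ρ*2 − *ρ*1 + *f*, which the sketch below verifies for the Figure 11 parameters:

```python
import math

def epitrochoid_point(rho1, rho2, f, zeta):
    """Eq. (11): point traced while the circle rolls outside."""
    s = rho1 + rho2
    return (s * math.cos(zeta) - f * math.cos(s / rho1 * zeta),
            s * math.sin(zeta) - f * math.sin(s / rho1 * zeta))

def hypotrochoid_point(rho1, rho2, f, zeta):
    """Eq. (12): point traced while the circle rolls inside."""
    d = rho2 - rho1
    return (d * math.cos(zeta) + f * math.cos(d / rho1 * zeta),
            d * math.sin(zeta) - f * math.sin(d / rho1 * zeta))
```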

#### *4.2. Spiral Path-Based Optimization Algorithms*

Over the years, researchers have developed various novel optimization algorithms in which spiral motion is used to mimic a system's behavior. Further, improved versions of multiple algorithms have also been proposed using spiral trajectories to improve the performance of conventional techniques. Table 1 provides the list of spiral path-inspired optimization techniques, including the inspiration behind each algorithm, the type of spiral used, and the source code links.

*Fractal Fract.* **2022**, *6*, 27


A detailed description of four novel optimization algorithms whose development uses a spiral trajectory is given below. The chosen novel optimization algorithms are the moth–flame, whale, seagull, and Aquila algorithms. Further, a detailed description of four optimization algorithms improved using spiral trajectories is also given in this section: the water cycle, antlion, salp swarm, and sparrow search algorithms. Some of these algorithms have been widely used by researchers recently, and others have been developed newly; thus, they were selected for detailed explanation.

#### 4.2.1. Moth–Flame Optimization Algorithm

The moth–flame optimization algorithm was developed in 2015 by Seyedali Mirjalili based on the behavior of moths navigating around a light/flame in a spiral path [52,72,73]. The use of a logarithmic spiral in this algorithm to mimic the moths' transverse orientation around the flame is explained below. In the algorithm, the moths' positions are updated with respect to the flames using the logarithmic spiral as follows [52,74]:

$$m_{i,j} = \begin{cases} D_{i,j} \cdot e^{b\tau} \cdot \cos(2\pi\tau) + f_{i,j}, & \text{for } i \le F_N, \\ D_{i,j} \cdot e^{b\tau} \cdot \cos(2\pi\tau) + f_{F_N,j}, & \text{for } i > F_N, \end{cases} \tag{13}$$

where *mi*,*j*, *fi*,*j*, and *Di*,*j* are the position of the *j*th variable of the *i*th moth, the position of the corresponding flame, and the distance between the moth and its flame, respectively, and *FN* is the total number of flames. Further, *b* and *τ* are the parameters of the logarithmic spiral (refer to Section 4.1.1).
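The update of Eq. (13) can be sketched for the one-dimensional case as follows; the pairing of each moth with a flame (and the fallback to the last flame when the moth index exceeds the flame count) is an interpretation of the equation, not the authors' reference implementation:

```python
import math, random

def mfo_spiral_update(moths, flames, b=1.0):
    """Sketch of the moth position update of Eq. (13) for 1-D positions.

    Moth i spirals around flame i; once the moth index exceeds the
    number of flames F_N, it spirals around the last flame instead.
    """
    F_N = len(flames)
    updated = []
    for i, m in enumerate(moths):
        f = flames[i] if i < F_N else flames[-1]
        tau = random.uniform(-1.0, 1.0)   # random point on the spiral
        D = abs(f - m)                    # moth-to-flame distance D_{i,j}
        updated.append(D * math.exp(b * tau) * math.cos(2 * math.pi * tau) + f)
    return updated
```

Note that a moth sitting exactly on its flame (D = 0) stays there, which is the intended convergence behavior.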

The major drawback of this algorithm is premature convergence to local optima during the search process. Moreover, it cannot be applied to permutation problems, as it was developed for continuous search spaces [75]. As mentioned in Table 1, the source code of this optimization algorithm, written in MATLAB for both single- and multiobjective problems, has been made publicly available by the developer on his website at https://seyedalimirjalili.com/mfo (accessed on 1 December 2021). Links to source code for other platforms, such as Python, C++, and R, are also available on the same website.

#### 4.2.2. Whale Optimization Algorithm

The whale optimization algorithm is a novel metaheuristic algorithm developed in 2016 by Seyedali Mirjalili and Andrew Lewis to mimic the bubble-net hunting behavior of whales, which follows a spiral motion [53,76–79]. The algorithm models the whales' behavior while encircling, attacking, and searching for prey. During the encircling phase, all the whales' positions are updated to move towards the best whale position, which is nearest to the target, as follows:

$$\vec{X}(i+1) = \vec{X}^*(i) - \vec{A} \cdot |\vec{C} \cdot \vec{X}^*(i) - \vec{X}(i)|.\tag{14}$$

During the attacking phase, the whales move spirally using the bubble-net movement phenomenon. The position update of the whales along this logarithmic spiral motion is as follows:

$$\vec{X}(i+1) = |\vec{X}^*(i) - \vec{X}(i)| \cdot e^{bl} \cdot \cos(2\pi l) + \vec{X}^*(i). \tag{15}$$

Finally, while searching for prey, the whales choose either encircling or attacking, which is modeled as follows:

$$\vec{X}(i+1) = \begin{cases} \vec{X}^*(i) - \vec{A} \cdot |\vec{C} \cdot \vec{X}^*(i) - \vec{X}(i)|, & p < 0.5, \\ |\vec{X}^*(i) - \vec{X}(i)| \cdot e^{bl} \cdot \cos(2\pi l) + \vec{X}^*(i), & p \ge 0.5. \end{cases} \tag{16}$$

Therefore, the position updation of all the whales during all three phases is summarized as,

$$\vec{X}(i+1) = \begin{cases} \vec{X}^*(i) - \vec{A} \cdot |\vec{C} \cdot \vec{X}^*(i) - \vec{X}(i)|, & p < 0.5, \ |\vec{A}| < 1, \\ \vec{X}_r(i) - \vec{A} \cdot |\vec{C} \cdot \vec{X}_r(i) - \vec{X}(i)|, & p < 0.5, \ |\vec{A}| \ge 1, \\ |\vec{X}^*(i) - \vec{X}(i)| \cdot e^{bl} \cdot \cos(2\pi l) + \vec{X}^*(i), & p \ge 0.5, \end{cases} \tag{17}$$

where $\vec{X}^*(i)$ is the position of the whale closest to the prey, $\vec{X}(i)$ and $\vec{X}(i+1)$ are the whale's positions at the *i*th and (*i* + 1)th iterations, $\vec{A}$ and $\vec{C}$ are the coefficient vectors, and *b* and *l* are the parameters of the logarithmic spiral (refer to Section 4.1.1). Further, note that for $|\vec{A}| \ge 1$, the position update uses $\vec{X}_r(i)$, a random position vector at the *i*th iteration.
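The three branches of Eq. (17) can be sketched for scalar (one-dimensional) positions as follows; the default spiral parameters `b` and `l` are illustrative values, not prescribed by the algorithm:

```python
import math

def woa_update(x, x_best, x_rand, A, C, p, b=1.0, l=0.0):
    """Scalar (1-D) sketch of the whale position update in Eq. (17)."""
    if p < 0.5:
        if abs(A) < 1:
            # encircling: move towards the best whale found so far
            return x_best - A * abs(C * x_best - x)
        # exploration: move relative to a randomly chosen whale
        return x_rand - A * abs(C * x_rand - x)
    # bubble-net attack: logarithmic spiral around the best whale
    return abs(x_best - x) * math.exp(b * l) * math.cos(2 * math.pi * l) + x_best
```

In a full implementation, `A`, `C`, `p`, and `l` are redrawn at every iteration, with `A` shrinking over time so the search shifts from exploration to exploitation.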

The whale optimization algorithm has the drawbacks of lower accuracy, slow convergence, and being trapped in local optima, and it cannot solve higher-dimensional problems effectively [80]. As given in Table 1, the source codes of this optimization algorithm for single-objective problems using MATLAB, Python, C++, and R are publicly available at https://seyedalimirjalili.com/woa (accessed on 1 December 2021).

#### 4.2.3. Seagull Optimization Algorithm

Gaurav Dhiman et al. proposed the seagull optimization algorithm in 2019 to mimic the seagulls' migration and hunting behavior [56]. The algorithm is a mathematical model of seagulls' behavior in two stages, namely migration and attack. During the natural attacking stage, the seagulls maintain a spiral behavior in the air, whose coordinates in the *x*, *y*, and *z* directions are modeled as follows:

$$x = u \cdot e^{kv} \cdot \cos(k), \ y = u \cdot e^{kv} \cdot \sin(k), \ z = u \cdot e^{kv} \cdot k,\tag{18}$$

where *k* ∈ [0, 2*π*] is the spiral angle, and *u* and *v* are arbitrary constants.
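The three-dimensional attack spiral of Eq. (18) is a logarithmic spiral in the *xy*-plane whose *z* coordinate rises with the angle, so the path is a conical helix. A minimal sketch:

```python
import math

def seagull_spiral_point(u, v, k):
    """Point (x, y, z) on the 3-D attack spiral of Eq. (18)."""
    r = u * math.exp(k * v)           # radius grows with the angle k
    return r * math.cos(k), r * math.sin(k), r * k
```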

The seagull optimization algorithm has the significant drawback of weak population diversity during the search process [81]. The link to the MATLAB-based source code of this optimization algorithm is given in Table 1.

#### 4.2.4. Aquila Optimization Algorithm

The Aquila optimization algorithm was proposed in 2021 by Laith Abualigah et al. to mimic the Aquila's behavior while catching prey [68]. The algorithm comprises four stages: (i) expanded exploration, (ii) narrowed exploration, (iii) expanded exploitation, and (iv) narrowed exploitation. During the narrowed exploration stage, the Aquila circles over a target prey in preparation for a short glide attack. This behavior is modeled as follows:

$$X(t+1) = X_{\text{best}}(t) \cdot Levy() + X_r(t) + (y - x) \cdot rand(),\tag{19}$$

where *Xr*(*t*) and *Xbest*(*t*) are the random and best solutions at the *t*th iteration, *X*(*t* + 1) is the solution at the (*t* + 1)th iteration, *rand*() ∈ (0, 1] is a random number, and *Levy*() is the Lévy distribution. Further, *x* and *y* are the Cartesian coordinates of a spiral with radius *r* and angle *l*, given as follows:

$$x = r\sin(l), \ y = r\cos(l). \tag{20}$$

From the above, it should be highlighted that the effect of the Lévy flight is relatively weak; thus, the algorithm has insufficient local exploitation ability [82]. The MATLAB- and Java-based source code links of this optimization algorithm for single-objective problems are given in Table 1.

#### 4.2.5. Water Cycle Optimization Algorithm

The water cycle optimization algorithm was proposed in 2012 by Eskandar et al. to mimic the natural hydrological cycle [10,83,84]. The algorithm simulates stream and river flow, rainfall, and evaporation into the sea. In this algorithm, the position updates of (a) streams flowing to the rivers, (b) streams flowing to the sea, and (c) rivers flowing to the sea are respectively given as follows:

$$X_{st}(i+1) = X_{st}(i) + rand() \cdot C \cdot (X_r(i) - X_{st}(i)),\tag{21}$$

$$X_{st}(i+1) = X_{st}(i) + rand() \cdot C \cdot (X_{se}(i) - X_{st}(i)),\tag{22}$$

$$X_r(i+1) = X_r(i) + rand() \cdot C \cdot (X_{se}(i) - X_r(i)),\tag{23}$$

where *Xst*(*i*), *Xr*(*i*), and *Xse*(*i*) are the positions of the stream, river, and sea at the *i*th iteration, *Xst*(*i* + 1), *Xr*(*i* + 1), and *Xse*(*i* + 1) are the corresponding positions at the (*i* + 1)th iteration, *C* ∈ [1, 2] is a constant, and *rand*() ∈ (0, 1] is a random number.

The MATLAB-based source code of this conventional optimization algorithm for both constrained and unconstrained problems, including several improved versions and multiobjective problems, are made publicly available by the researcher on his website at https://ali-sadollah.com/water-cycle-algorithm-wca/ (accessed on 1 December 2021).

The algorithm has insufficient exploitation ability; thus, in [64], the authors integrated a hyperbolic spiral, which helps improve the exploitation ability of the algorithm. The modified position update equations using the hyperbolic spiral are given as follows:

$$X_{st}(i+1) = X_{st}(i) + |X_r(i) - X_{st}(i)| \cdot \cos(2\pi l) / l,\tag{24}$$

$$X_{st}(i+1) = X_{st}(i) + |X_{se}(i) - X_{st}(i)| \cdot \cos(2\pi l) / l,\tag{25}$$

$$X_r(i+1) = X_r(i) + |X_{se}(i) - X_r(i)| \cdot \cos(2\pi l) / l,\tag{26}$$

where *l* ∈ [−1, 1] is the parameter of the hyperbolic spiral, a uniformly distributed random number.
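All three updates of Eqs. (24)–(26) share the same form: the entity moves from its current position toward a target, scaled by the hyperbolic-spiral factor cos(2*πl*)/*l*. A minimal sketch (resampling *l* when it is exactly zero is an implementation detail added here to avoid division by zero, not part of the cited method):

```python
import math, random

def hyperbolic_spiral_step(x, x_target, l=None):
    """One modified update of Eqs. (24)-(26): move x toward x_target
    scaled by the hyperbolic-spiral factor cos(2*pi*l)/l,
    with l drawn uniformly from [-1, 1] when not supplied."""
    while l is None or l == 0.0:
        l = random.uniform(-1.0, 1.0)
    return x + abs(x_target - x) * math.cos(2 * math.pi * l) / l
```

Because the factor can be large for small |*l*| and negative for some angles, a single step may overshoot the target or move away from it, which is what gives the spiral its exploratory character.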

#### 4.2.6. Ant Lion Optimization Algorithm

Seyedali Mirjalili proposed the antlion optimization algorithm in 2015 to mimic the natural hunting phenomenon of antlions [85–88]. The algorithm models the following behaviors of ants and antlions: (i) the ants' random walks and their getting trapped in antlions' pits, and (ii) the antlions' hunting behaviors, which include building traps, sliding ants towards them, catching prey, rebuilding pits, and elitism. Through elitism, the algorithm retains the antlion with the best fitness value, called the elite antlion. Each ant then updates its position as the average of its random walks around the elite and a selected antlion:

$$\text{Ant}_l(t) = \frac{R_e(t) + R_a(t)}{2},\tag{27}$$

where *Re*(*t*) and *Ra*(*t*) are the random walks around the elite and selected antlions at the *t*th iteration.

The MATLAB, Python, and R software-based source codes of this conventional optimization algorithm for both single and multiobjective problems are made publicly available by Seyedali Mirjalili on his website at https://seyedalimirjalili.com/alo (accessed on 1 December 2021).

In [38], the authors proposed an improved version of this algorithm in which the walks around the elite and selected antlions follow eight spiral complex paths instead of being random, to improve the convergence speed and performance. These spiral trajectories include the Archimedes, cycloid, epitrochoid, hypotrochoid, logarithmic, rose, inverse, and overshoot spirals. As an example, the values of *Re*(*t*) and *Ra*(*t*) computed using the logarithmic spiral are

$$R_e(t) = D_1 \cdot e^{b_1 t_1} \cos(2\pi t_1), \ R_a(t) = D_1 \cdot e^{b_1 t_1} \sin(2\pi t_1),\tag{28}$$

where *D*1, *b*1, and *t*<sup>1</sup> are the parameters of logarithmic spiral (see Section 4.1.1).

Similarly, using the Archimedes spiral, the values of *Re*(*t*) and *Ra*(*t*) are computed as follows:

$$R_e(t) = (D_2 + b_2 \cdot t_2) \cdot \cos(2\pi t_2), \ R_a(t) = (D_2 + b_2 \cdot t_2) \cdot \sin(2\pi t_2), \tag{29}$$

where *D*2, *b*2, and *t*<sup>2</sup> are the parameters of Archimedes spiral (see Section 4.1.2).
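Combining Eqs. (27) and (28), the spiral-guided ant update can be sketched as follows; treating *Re* and *Ra* as the cosine and sine components of one logarithmic spiral follows the form of Eq. (28):

```python
import math

def log_spiral_walks(D1, b1, t1):
    """Eq. (28): walks of the elite and selected antlions on a
    logarithmic spiral with parameters D1, b1, t1."""
    g = D1 * math.exp(b1 * t1)
    return g * math.cos(2 * math.pi * t1), g * math.sin(2 * math.pi * t1)

def ant_position(R_e, R_a):
    """Eq. (27): the ant's new position averages the two walks."""
    return (R_e + R_a) / 2
```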

#### 4.2.7. Salp Swarm Optimization Algorithm

The salp swarm optimization algorithm was developed in 2017 by Seyedali Mirjalili et al. to mimic the behavior of salp chains searching for a food source [89–92]. In the salp chain, the first salp is the leader, and all the other salps follow it. In the algorithm, the update equations for the leader's and followers' positions during the search for the target food are as follows:

$$X_i^1 = \begin{cases} F_i + r_1((UB_i - LB_i) \cdot r_2 + LB_i), & \text{if } r_3 \ge 0, \\ F_i - r_1((UB_i - LB_i) \cdot r_2 + LB_i), & \text{if } r_3 < 0, \end{cases} \tag{30}$$

$$X_i^j = 0.5(X_i^j + X_i^{j-1}), \; j \ge 2,\tag{31}$$

where $X_i^1$ and $X_i^j$ are the positions of the leader and the *j*th follower, *Fi* is the target food source, LB*i* and UB*i* are the lower and upper bounds of the *i*th dimension, and *r*1, *r*2, and *r*3 are random numbers.

The MATLAB-based source code of this optimization algorithm for both single and multiobjective problems is made publicly available by the developer on his website at https://seyedalimirjalili.com/ssa (accessed on 1 December 2021). Further, the links for the source code using Python and R are also available on the same website.

However, in [65], it is stated that the conventional salp swarm optimization algorithm (SSOA) has slower convergence and gets trapped at local optima. Thus, the authors proposed an improved SSOA using a logarithmic spiral, in which the followers' positions are updated as follows:

$$X_i^j = 0.5(X_i^j + X_i^{j-1}) \cdot e^{b\theta} \cdot \cos(2\pi\theta), \; j \ge 2,\tag{32}$$

where *b* and *θ* are the parameters of logarithmic spiral (refer to Section 4.1.1).
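The improved follower update of Eq. (32) simply scales the midpoint rule of Eq. (31) by a logarithmic-spiral factor, which reduces to the plain midpoint when *θ* = 0:

```python
import math

def spiral_follower_update(x_j, x_prev, b, theta):
    """Eq. (32): improved salp follower update, scaling the midpoint
    of Eq. (31) by the spiral factor e^(b*theta) * cos(2*pi*theta)."""
    return 0.5 * (x_j + x_prev) * math.exp(b * theta) * math.cos(2 * math.pi * theta)
```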

#### 4.2.8. Sparrow Search Optimization Algorithm

Jiankai Xue and Bo Shen proposed the sparrow search optimization algorithm in 2020 to mimic sparrows' group-wisdom, antipredation, and foraging behaviors [93]. In this algorithm, the sparrow population is divided into two groups, discoverers and followers, in a 20:80 ratio. The discoverers have a broad search space in which to search for food and guide the followers towards the food source. The position update equation for the discoverer sparrows during the search for food is as follows:

$$X_{i,j}(t+1) = \begin{cases} X_{i,j}(t) \cdot \exp\left(-\frac{h}{\alpha \cdot M}\right), & \text{if } R_2 < ST, \\ X_{i,j}(t) + Q \cdot L, & \text{if } R_2 \ge ST, \end{cases} \tag{33}$$

where *Xi*,*j*(*t*) and *Xi*,*j*(*t* + 1) are the positions of the *i*th discoverer sparrow in the *j*th dimension at the *t*th and (*t* + 1)th iterations, *h* and *M* are the current and maximum numbers of iterations, *Q* is a uniformly distributed random number, *L* is a row matrix with all entries equal to one, *α* and *R*2 ∈ [0, 1] are random numbers, and *ST* ∈ [0.5, 1] is the safety threshold value.

The values of *R*2 and *ST* indicate the safety of the food source area. Based on these values, the type of environment around the food source area, the predator status, and the actions to be taken are classified as follows:

$$\text{Condition} = \begin{cases} \text{Safe: no predators around, the sparrows can search for food,} & \text{if } R_2 < ST, \\ \text{Unsafe: predators around, the sparrows fly to another safe area,} & \text{if } R_2 \ge ST. \end{cases} \tag{34}$$

As some of the followers closely follow the discoverers, they update their positions to move towards the discovered food source area. The position update equation for the follower sparrows towards the food source is as follows:

$$X_{i,j}(t+1) = \begin{cases} Q \cdot \exp\left(\frac{X_{\text{worst}}(t) - X_{i,j}(t)}{i^2}\right), & \text{if } i > n/2, \\ X_p(t+1) + |X_{i,j}(t) - X_p(t+1)| \cdot A^T (AA^T)^{-1} \cdot L, & \text{otherwise}, \end{cases} \tag{35}$$

where *Xworst*(*t*) is the group's worst position at the *t*th iteration, *Xp*(*t* + 1) is the discoverers' optimal position at the (*t* + 1)th iteration, and *A* is a row matrix whose entries are randomly assigned as 1 or −1. Further, *i* > *n*/2 indicates that the sparrow is in a dangerous position, so it performs antipredation behavior. The MATLAB-based source code for implementing this algorithm is available for registered users at https://www.mathworks.com/matlabcentral/fileexchange/88788 (accessed on 1 December 2021).

However, in [66,94], the authors proposed a variable spiral search technique that lets the followers update their positions more effectively. The position update equation of the followers using this search strategy is as follows:

$$X_{i,j}(t+1) = \begin{cases} e^{zl} \cdot \cos(2\pi l) \cdot Q \cdot \exp\left(\frac{X_{\text{worst}}(t) - X_{i,j}(t)}{i^2}\right), & \text{if } i > n/2, \\ X_p(t+1) + |X_{i,j}(t) - X_p(t+1)| \cdot A^T (AA^T)^{-1} \cdot L, & \text{otherwise,} \end{cases} \tag{36}$$

where *z* and *l* are the parameters of logarithmic spiral (refer to Section 4.1.1). Further, the value of *z* is varied at every iteration, making the proposed technique a variable spiral search approach.
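The first branch of Eq. (36) (the *i* > *n*/2 case) can be sketched for a scalar position as follows; only this branch is shown, since the second branch involves the matrix pseudo-inverse term unchanged from Eq. (35):

```python
import math

def variable_spiral_follower(x_worst, x_ij, i, Q, z, l):
    """Sketch of the first branch of Eq. (36): the follower in danger
    (i > n/2) flees toward a better area along a variable logarithmic
    spiral; in [66,94] the parameter z is varied at every iteration."""
    return (math.exp(z * l) * math.cos(2 * math.pi * l)
            * Q * math.exp((x_worst - x_ij) / i ** 2))
```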

#### **5. Application of Spiral Dynamics Optimization Algorithm**

The conventional SDO algorithm and its variants have been applied in various fields to find optimal solutions, as explained below.

#### *5.1. Modeling and Controller Tuning*

The application of SDO and its variants in the area of modeling and controller tuning is as follows:


Hassan et al. proposed using an SDO algorithm to tune the predictive proportional-integral (PI) controller for wireless networked control systems [95]. Similarly, the authors of [96] utilized SDO to tune a proportional-integral-derivative (PID) controller for robotic arm movement. Moreover, for both the modeling and control of flexible link manipulator systems, the authors of [14] used conventional SDO. For the same application, the authors of [20,34,35] proposed hybridizations of the SDO algorithm with BCA and BFA. Improved and adaptive versions of SDO have also been presented for both the modeling and control of flexible link manipulator systems [16,26,28]. In another application, the fuzzy control of a stair-descending wheelchair, an SDO algorithm is used to tune the controller parameters. In [99], a hybrid algorithm using PSO and SDO is proposed for tuning a fuzzy controller designed for the inverted pendulum. Nasir et al. proposed an improved SDO and a hybrid algorithm using SDO and BFA to model twin rotor systems [25,34]. The hybrid SDO and BFA algorithm has also been used for controlling two-wheeled robotic vehicles [39].

#### *5.2. Electrical Energy Optimization*

Similarly, the application of SDO and its variants in the area of optimizing electrical energy systems is as follows:


The economic and emission dispatch problems in power systems have been solved by various researchers using the SDO algorithm [14,101,102]. Similarly, an optimal strategy using the SDO algorithm is proposed for maximum power production in a wind farm [103]. A multiobjective SDO algorithm for a multigeneration energy system is presented for minimizing the total cost while maximizing energy efficiency [104]. In [105], a hybrid algorithm using SDO and BFA is proposed to simultaneously optimize decentralized generation placement. In another application, an optimal sizing strategy using the adaptive version of the SDO algorithm has been presented for hybrid electric air–ground vehicles [23]. The authors of [100] proposed using SDO for filter design; the algorithm achieved better performance in attaining the desired magnitude response in the multiobjective optimization task.

#### *5.3. Mechanical Systems Optimization*

Over the years, several mechanical systems have been optimized using the SDO algorithm. The list of applications is as follows:


Cruz et al. proposed generalized and stochastic SDO algorithms to solve microelectronic thermal management problems [29,30]. The authors of [19] proposed a hypotrochoid SDO algorithm to optimize sensor placement in the 632-meter-tall Shanghai Tower and compared its performance with seven optimization algorithms, including its predecessors. The authors of [18] also proposed the hypotrochoid SDO algorithm for finding the optimal setting parameters of 10-, 37-, 52-, 72-, and 200-bar planar and spatial truss structures. The use of the spiral equation in improving the TLBO and antlion optimization algorithms for pressure vessel design problems is presented in [38,50]. The improved TLBO algorithm using a logarithmic spiral trajectory is also applied to find the optimal setting parameters for welded beam design problems [50].

#### *5.4. Other Optimization Problems*

The applications of the SDO algorithm to other types of optimization problems are as follows:


The authors of [107] were the first to showcase the problems and scope of spiral dynamics optimization applied to polyhedral cages. Another work preceding the development of the conventional SDO algorithm is reported in [106]; there, a heuristic spiral mapping algorithm, the first type of SDO, is applied to 2D mesh network topologies. For clustering problems, a distributed SDO is proposed in which the population of the search space is split into sub-populations [33]. Hong-Chun Jia et al. proposed an efficient and intelligent algorithm using SDO for deep neural networks [108]; the network is used to find the optimal physical health and fitness level in sports. Recently, James McCaffrey from Microsoft Research developed the SDO algorithm in Python to train a neural network, finding the optimal weight and bias values [112]. Furthermore, the real-time implementation of a deterministic SDO algorithm using field-programmable gate arrays for sorting spot patterns in a Shack–Hartmann wavefront sensor is reported in [110].

As mentioned earlier, SDO and its variants have been applied in various applications. A summary of all applications is given in Table 2. The table provides the details of the application system, including the dimension, software tool, cost function, type of optimization problem, and comparison techniques. In the table, SO and MO denote single-objective and multiobjective optimization problems, respectively. The validation of SDO and its variants on various benchmark functions is also detailed. It is worth highlighting that the most widely used error-based cost functions are the mean squared error (MSE), root mean squared error (RMSE), and sum of squared errors (SSE). Similarly, the integral error functions used in the research are the integral squared error (ISE) and integral time absolute error (ITAE). The errors are computed as follows:

$$\text{MSE} = \frac{1}{n\_s} \sum\_{i=1}^{n\_s} (Y\_a - Y\_p)^2, \tag{37}$$

$$\text{RMSE} = \sqrt{\frac{1}{n\_s} \sum\_{i=1}^{n\_s} (Y\_a - Y\_p)^2}, \tag{38}$$

$$\text{ISE} = \int\_{t=0}^{\infty} e^2(t)\,dt, \tag{39}$$

$$\text{ITAE} = \int\_{t=0}^{\infty} t\,|e(t)|\,dt, \tag{40}$$

where *ns* is the total number of samples, *Ya* and *Yp* are the actual and predicted values, and *e*(*t*) is the error, i.e., the difference between the actual and reference values.
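The four cost functions above can be sketched in Python as follows. This is an illustrative sketch under our own assumptions: names are ours, and the improper integrals in Equations (39) and (40) are approximated by trapezoidal quadrature over the finite sampled horizon.

```python
import numpy as np

def _trapz(y, t):
    """Trapezoidal rule for samples y taken at time instants t."""
    y = np.asarray(y, dtype=float)
    t = np.asarray(t, dtype=float)
    return float(np.sum((y[1:] + y[:-1]) * np.diff(t)) / 2.0)

def mse(y_a, y_p):
    """Mean squared error between actual y_a and predicted y_p, Eq. (37)."""
    d = np.asarray(y_a, dtype=float) - np.asarray(y_p, dtype=float)
    return float(np.mean(d ** 2))

def rmse(y_a, y_p):
    """Root mean squared error, Eq. (38)."""
    return float(np.sqrt(mse(y_a, y_p)))

def ise(e, t):
    """Integral squared error, Eq. (39): integral of e^2(t) over the horizon."""
    return _trapz(np.asarray(e, dtype=float) ** 2, t)

def itae(e, t):
    """Integral time absolute error, Eq. (40): integral of t * |e(t)|."""
    return _trapz(np.asarray(t, dtype=float) * np.abs(np.asarray(e, dtype=float)), t)
```

In a controller-tuning loop, one of these functions would be evaluated on the simulated response at each candidate parameter set and returned to the optimizer as the fitness value.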

*Fractal Fract.* **2022**, *6*, 27










#### **6. Conclusions**

#### *6.1. Findings*

SDO is a promising and fascinating algorithm that has been widely appreciated in the literature. The SDO algorithm's advantages over other optimization algorithms lie in its simplicity, ease of implementation, small number of control parameters, and better diversification and intensification strategies. This comprehensive review summarizes the research outcomes published from 1997 until January 2022. The advances and variants of SDO, including adaptive, improved, and hybrid approaches for solving various optimization problems, are critically analyzed. Further, the application of SDO and its variants in multiple fields, including modeling, controller tuning, electrical energy systems, mechanical systems, etc., is comprehensively summarized. In addition, special attention is devoted to highlighting various nature-inspired optimization algorithms built on the concept of spiral paths. This review is expected to draw the attention of investigators, experts, and researchers to solving optimization problems using the SDO algorithm and its variants.

#### *6.2. Future Perspectives*

This comprehensive review has helped open up new research directions in the field of spiral-inspired optimization, which are highlighted below.


**Author Contributions:** Conceptualization, K.B.; proofreading, guidance, and regular feedback, B.R.P., M.B.O. and R.I.; writing—original draft preparation, M.B.O. and K.B.; writing—review and editing, B.R.P.; supervision, R.I.; project administration and funding acquisition, M.B.O. and R.I. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research was funded by Yayasan Universiti Teknologi PETRONAS Fundamental Research Grant (YUTP-FRG) number 015LC0-362.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** No new data were created or analyzed in this study. Data sharing is not applicable to this article.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


### *Article* **Well Posedness of New Optimization Problems with Variational Inequality Constraints**

**Savin Treanță**

Department of Applied Mathematics, University Politehnica of Bucharest, 060042 Bucharest, Romania; savin.treanta@upb.ro

**Abstract:** In this paper, we studied the well posedness for a new class of optimization problems with variational inequality constraints involving second-order partial derivatives. More precisely, by using the notions of lower semicontinuity, pseudomonotonicity, hemicontinuity and monotonicity for a multiple integral functional, and by introducing the set of approximating solutions for the considered class of constrained optimization problems, we established some characterization results on well posedness. Furthermore, to illustrate the theoretical developments included in this paper, we present some examples.

**Keywords:** well posedness; constrained variational control problem; monotonicity; pseudomonotonicity; hemicontinuity; multiple integral functional; lower semicontinuity

#### **1. Introduction**

The notion of well posedness represents a useful mathematical tool by ensuring the convergence of a sequence of approximate solutions to the exact solution of some optimization problems. Starting with the work of Tykhonov [1] for unconstrained optimization problems, various types of well posedness for variational problems have been considered (see, for instance, Levitin–Polyak well posedness [2–5], extended well posedness [6–14], *L*-well posedness [15], and *α*-well posedness [16,17]). Moreover, the concept of well posedness can be useful to study some related problems, such as variational inequality and fixed point problems [18–22], hemivariational inequality problems [23], complementary problems [24], equilibrium problems [25,26], Nash equilibrium problems [27] and variational inclusion problems [28]. Recently, the study of well posedness for vector variational inequalities and the associated optimization problems was formulated by Jayswal and Shalini [29]. On the other hand, an important and interesting extension of variational inequality problems is that of multidimensional variational inequality problems and the corresponding multi-time optimization problems (see [30–40]).

Motivated by the aforementioned research works, in this paper we analyze the well posedness of a new class of constrained optimization problems governed by multiple integral functionals involving second-order partial derivatives. To this aim, first we introduce new forms for the concepts of monotonicity, lower semicontinuity, pseudomonotonicity and hemicontinuity associated with a multiple integral functional. Furthermore, we define the set of approximating solutions for the considered optimization problem and establish some characterization theorems on well posedness. The main novelty elements of this paper are represented by the following: the mathematical framework is based on infinite-dimensional function spaces, multiple integral functionals, the presence of second-order partial derivatives, and innovative proofs of the main results. The aforementioned elements are completely new in the area of well-posed variational control problems. Most of the previous works in this field have been studied in classical finite-dimensional spaces, without taking into account the new notions mentioned above.

This paper is organized as follows. Section 2 provides the concepts of monotonicity, pseudomonotonicity, hemicontinuity and the lower semicontinuity of a multiple integral

**Citation:** Treanță, S. Well Posedness of New Optimization Problems with Variational Inequality Constraints. *Fractal Fract.* **2021**, *5*, 123. https://doi.org/10.3390/fractalfract5030123

Academic Editor: Hari Mohan Srivastava

Received: 18 August 2021 Accepted: 14 September 2021 Published: 15 September 2021

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

functional, and an auxiliary lemma. Section 3 investigates the well posedness for the considered constrained optimization problem. Concretely, we establish that well posedness and the existence and uniqueness of a solution are equivalent in the aforementioned problem. Furthermore, some examples are formulated throughout the paper to highlight the theoretical elements. Finally, in Section 4, we present the conclusions and provide further developments.

#### **2. Preliminaries**

Throughout this work, we consider the following mathematical tools and notations: let $\Omega$ be a compact set in $\mathbb{R}^m$ and $\zeta = (\zeta\_{\alpha})$, $\alpha = \overline{1,m}$, a point of $\Omega$; consider $\mathcal{A}$ as the space of $C^4$-class *state* functions $a : \Omega \to \mathbb{R}^n$, where $a\_{\alpha} := \dfrac{\partial a}{\partial \zeta\_{\alpha}}$ and $a\_{\beta\gamma} := \dfrac{\partial^2 a}{\partial \zeta\_{\beta} \partial \zeta\_{\gamma}}$ denote the *partial speed* and *partial acceleration*, respectively; also, consider $\mathcal{U}$ as the space of $C^1$-class *control* functions $u : \Omega \to \mathbb{R}^k$, and consider $A \times U$ as a closed, convex and non-empty subset of $\mathcal{A} \times \mathcal{U}$, with $(a,u)|\_{\partial \Omega}$ given, endowed with the scalar product:

$$\langle (a, u), (b, w) \rangle = \int\_{\Omega} \left[ a(\zeta) \cdot b(\zeta) + u(\zeta) \cdot w(\zeta) \right] d\zeta$$

$$= \int\_{\Omega} \left[ \sum\_{i=1}^{n} a^i(\zeta) b^i(\zeta) + \sum\_{j=1}^{k} u^j(\zeta) w^j(\zeta) \right] d\zeta, \quad \forall (a, u), (b, w) \in \mathcal{A} \times \mathcal{U}$$

and the induced norm, where $d\zeta = d\zeta^1 \cdots d\zeta^m$ denotes the volume element on $\mathbb{R}^m$.

Consider $J^2(\mathbb{R}^m, \mathbb{R}^n)$ as the second-order jet bundle associated with $\mathbb{R}^m$ and $\mathbb{R}^n$. Taking the scalar continuously differentiable function $f : J^2(\mathbb{R}^m, \mathbb{R}^n) \times \mathbb{R}^k \to \mathbb{R}$, we introduce the following multiple integral-type functional:

$$F: \mathcal{A} \times \mathcal{U} \to \mathbb{R}, \quad F(a, u) = \int\_{\Omega} f\left(\zeta, a(\zeta), a\_{\alpha}(\zeta), a\_{\beta \gamma}(\zeta), u(\zeta)\right) d\zeta.$$

At this moment, we are able to introduce the following *constrained variational control problem* (in short, CVCP), given as follows (we use the notation (*πa*,*u*(*ζ*)) := (*ζ*, *a*(*ζ*), *aα*(*ζ*), *aβγ*(*ζ*), *u*(*ζ*))):

$$\begin{array}{rcl} \text{(CVCP)} & \quad \text{Minimize}\_{(a,u)} & \displaystyle\int\_{\Omega} f(\pi\_{a,u}(\zeta)) d\zeta\\ & \text{subject to} & (a,u) \in \Theta, \end{array}$$

where Θ is the solution set of the *controlled variational inequality problem* (in short, CVIP): *to find a pair* (*a*, *u*) ∈ *A* × *U such that:*

$$\begin{split} \text{(CVIP)} \quad & \int\_{\Omega} \left[ \frac{\partial f}{\partial a}(\pi\_{a,u}(\zeta)) (b(\zeta) - a(\zeta)) + \frac{\partial f}{\partial a\_{\alpha}}(\pi\_{a,u}(\zeta)) D\_{\alpha} (b(\zeta) - a(\zeta)) \right. \\ & + \frac{1}{n(\beta,\gamma)} \frac{\partial f}{\partial a\_{\beta\gamma}}(\pi\_{a,u}(\zeta)) D\_{\beta\gamma}^{2} (b(\zeta) - a(\zeta)) \\ & \left. + \frac{\partial f}{\partial u}(\pi\_{a,u}(\zeta)) (w(\zeta) - u(\zeta)) \right] d\zeta \ge 0, \quad \forall (b, w) \in A \times \mathcal{U}, \end{split}$$

where $D\_{\beta\gamma}^{2} := D\_{\beta}(D\_{\gamma})$, and $n(\beta, \gamma)$ is Saunders's multi-index notation (see Saunders [41], Treanță [40]).

More precisely, the feasible solution set for (CVIP) is given by

$$\begin{split} \Theta = \Big\{ (a, u) \in A \times \mathcal{U} : \int\_{\Omega} \Big[ (b(\zeta) - a(\zeta)) \frac{\partial f}{\partial a}(\pi\_{a,u}(\zeta)) \\ + D\_{\alpha}(b(\zeta) - a(\zeta)) \frac{\partial f}{\partial a\_{\alpha}}(\pi\_{a,u}(\zeta)) + (w(\zeta) - u(\zeta)) \frac{\partial f}{\partial u}(\pi\_{a,u}(\zeta)) \\ + \frac{1}{n(\beta,\gamma)} D\_{\beta\gamma}^{2}(b(\zeta) - a(\zeta)) \frac{\partial f}{\partial a\_{\beta\gamma}}(\pi\_{a,u}(\zeta)) \Big] d\zeta \ge 0, \\ \forall (b, w) \in A \times \mathcal{U} \Big\}. \end{split}$$

Next, we define the notions of *monotonicity* and *pseudomonotonicity* for the aforementioned multiple integral functional.

**Definition 1.** *The multiple integral functional <sup>F</sup>*(*a*, *<sup>u</sup>*) = Ω *f*(*πa*,*u*(*ζ*))*dζ is called monotone on A* × *U if the following inequality holds:*

$$\begin{split} &\int\_{\Omega} \Big[ (a(\zeta) - b(\zeta)) \left( \frac{\partial f}{\partial a}(\pi\_{a,u}(\zeta)) - \frac{\partial f}{\partial a}(\pi\_{b,w}(\zeta)) \right) \\ &+ (u(\zeta) - w(\zeta)) \left( \frac{\partial f}{\partial u}(\pi\_{a,u}(\zeta)) - \frac{\partial f}{\partial u}(\pi\_{b,w}(\zeta)) \right) \\ &+ D\_{\alpha}(a(\zeta) - b(\zeta)) \left( \frac{\partial f}{\partial a\_{\alpha}}(\pi\_{a,u}(\zeta)) - \frac{\partial f}{\partial a\_{\alpha}}(\pi\_{b,w}(\zeta)) \right) \\ &+ \frac{1}{n(\beta,\gamma)} D\_{\beta\gamma}^{2}(a(\zeta) - b(\zeta)) \left( \frac{\partial f}{\partial a\_{\beta\gamma}}(\pi\_{a,u}(\zeta)) - \frac{\partial f}{\partial a\_{\beta\gamma}}(\pi\_{b,w}(\zeta)) \right) \Big] d\zeta \ge 0, \\ &\forall (a,u), (b,w) \in A \times \mathcal{U}. \end{split}$$

**Definition 2.** *The multiple integral functional <sup>F</sup>*(*a*, *<sup>u</sup>*) = Ω *f*(*πa*,*u*(*ζ*))*dζ is called pseudomonotone on A* × *U if the following implication holds:*

$$\begin{split} &\int\_{\Omega} \Big[ (a(\zeta) - b(\zeta)) \frac{\partial f}{\partial a}(\pi\_{b,w}(\zeta)) + (u(\zeta) - w(\zeta)) \frac{\partial f}{\partial u}(\pi\_{b,w}(\zeta)) \\ &+ D\_{\alpha}(a(\zeta) - b(\zeta)) \frac{\partial f}{\partial a\_{\alpha}}(\pi\_{b,w}(\zeta)) \\ &+ \frac{1}{n(\beta,\gamma)} D\_{\beta\gamma}^{2}(a(\zeta) - b(\zeta)) \frac{\partial f}{\partial a\_{\beta\gamma}}(\pi\_{b,w}(\zeta)) \Big] d\zeta \ge 0 \\ \Rightarrow \quad &\int\_{\Omega} \Big[ (a(\zeta) - b(\zeta)) \frac{\partial f}{\partial a}(\pi\_{a,u}(\zeta)) + (u(\zeta) - w(\zeta)) \frac{\partial f}{\partial u}(\pi\_{a,u}(\zeta)) \\ &+ D\_{\alpha}(a(\zeta) - b(\zeta)) \frac{\partial f}{\partial a\_{\alpha}}(\pi\_{a,u}(\zeta)) \\ &+ \frac{1}{n(\beta,\gamma)} D\_{\beta\gamma}^{2}(a(\zeta) - b(\zeta)) \frac{\partial f}{\partial a\_{\beta\gamma}}(\pi\_{a,u}(\zeta)) \Big] d\zeta \ge 0, \quad \forall (a,u), (b,w) \in A \times \mathcal{U}. \end{split}$$

Let us give an example of a multiple integral-type functional which is not monotone but is pseudomonotone.

**Example 1.** *Consider n* = *k* = 1, *m* = 2*, and* Ω = [0, 3] <sup>2</sup>*. We define:*

$$f(\pi\_{\mathfrak{a},\mathfrak{u}}(\zeta)) = 2\sin a(\zeta) + \mathfrak{u}(\zeta)e^{\mathfrak{u}(\zeta)}$$

*and show, in accordance with Definition 2, that the multiple integral functional* $F(a, u) = \int\_{\Omega} f(\pi\_{a,u}(\zeta)) d\zeta$ *is pseudomonotone on* $A \times U = C^4(\Omega, [-1, 1]) \times C^1(\Omega, [-1, 1])$*. Indeed, we have:*

$$\begin{split} &\int\_{\Omega} \Big[ (a(\zeta) - b(\zeta)) \frac{\partial f}{\partial a}(\pi\_{b,w}(\zeta)) + (u(\zeta) - w(\zeta)) \frac{\partial f}{\partial u}(\pi\_{b,w}(\zeta)) \\ &+ D\_{\alpha}(a(\zeta) - b(\zeta)) \frac{\partial f}{\partial a\_{\alpha}}(\pi\_{b,w}(\zeta)) + \frac{1}{n(\beta,\gamma)} D\_{\beta\gamma}^{2}(a(\zeta) - b(\zeta)) \frac{\partial f}{\partial a\_{\beta\gamma}}(\pi\_{b,w}(\zeta)) \Big] d\zeta \\ &= \int\_{\Omega} \Big[ 2(a(\zeta) - b(\zeta)) \cos b(\zeta) + (u(\zeta) - w(\zeta))(e^{w(\zeta)} + w(\zeta)e^{w(\zeta)}) \Big] d\zeta \ge 0, \\ &\qquad \forall (a, u), (b, w) \in A \times \mathcal{U} \\ \Rightarrow \quad &\int\_{\Omega} \Big[ (a(\zeta) - b(\zeta)) \frac{\partial f}{\partial a}(\pi\_{a,u}(\zeta)) + (u(\zeta) - w(\zeta)) \frac{\partial f}{\partial u}(\pi\_{a,u}(\zeta)) \\ &+ D\_{\alpha}(a(\zeta) - b(\zeta)) \frac{\partial f}{\partial a\_{\alpha}}(\pi\_{a,u}(\zeta)) + \frac{1}{n(\beta,\gamma)} D\_{\beta\gamma}^{2}(a(\zeta) - b(\zeta)) \frac{\partial f}{\partial a\_{\beta\gamma}}(\pi\_{a,u}(\zeta)) \Big] d\zeta \\ &= \int\_{\Omega} \Big[ 2(a(\zeta) - b(\zeta)) \cos a(\zeta) + (u(\zeta) - w(\zeta))(e^{u(\zeta)} + u(\zeta)e^{u(\zeta)}) \Big] d\zeta \ge 0. \end{split}$$

However, it is not monotone on *A* × *U* in the sense of Definition 1, because:

$$\begin{split} &\int\_{\Omega} \Big[ (a(\zeta) - b(\zeta)) \left( \frac{\partial f}{\partial a}(\pi\_{a,u}(\zeta)) - \frac{\partial f}{\partial a}(\pi\_{b,w}(\zeta)) \right) \\ &+ (u(\zeta) - w(\zeta)) \left( \frac{\partial f}{\partial u}(\pi\_{a,u}(\zeta)) - \frac{\partial f}{\partial u}(\pi\_{b,w}(\zeta)) \right) \\ &+ D\_{\alpha}(a(\zeta) - b(\zeta)) \left( \frac{\partial f}{\partial a\_{\alpha}}(\pi\_{a,u}(\zeta)) - \frac{\partial f}{\partial a\_{\alpha}}(\pi\_{b,w}(\zeta)) \right) \\ &+ \frac{1}{n(\beta,\gamma)} D\_{\beta\gamma}^{2}(a(\zeta) - b(\zeta)) \left( \frac{\partial f}{\partial a\_{\beta\gamma}}(\pi\_{a,u}(\zeta)) - \frac{\partial f}{\partial a\_{\beta\gamma}}(\pi\_{b,w}(\zeta)) \right) \Big] d\zeta \\ &= \int\_{\Omega} \Big[ 2(a(\zeta) - b(\zeta))(\cos a(\zeta) - \cos b(\zeta)) \\ &+ (u(\zeta) - w(\zeta))(u(\zeta)e^{u(\zeta)} + e^{u(\zeta)} - w(\zeta)e^{w(\zeta)} - e^{w(\zeta)}) \Big] d\zeta \not\ge 0 \\ &\text{for some } (a,u), (b,w) \in A \times \mathcal{U}. \end{split}$$
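To see the failure of monotonicity concretely, one may evaluate the left-hand side on constant pairs; the following choices are illustrative (they do not appear in the original text). Take $a(\zeta) \equiv 0$, $b(\zeta) \equiv 1$ and $u(\zeta) \equiv w(\zeta) \equiv 0$ on $\Omega = [0,3]^2$, so that the control term vanishes and:

$$\int\_{\Omega} 2(a(\zeta) - b(\zeta))(\cos a(\zeta) - \cos b(\zeta))\, d\zeta = 2(0 - 1)(1 - \cos 1)\,|\Omega| = -18(1 - \cos 1) \approx -8.27 < 0,$$

so the monotonicity inequality of Definition 1 indeed fails for this pair.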

Then, in accordance with Usman and Khan [42], we define the concept of *hemicontinuity* for the considered multiple integral-type functional.

**Definition 3.** *The functional <sup>F</sup>*(*a*, *<sup>u</sup>*) = Ω *f*(*πa*,*u*(*ζ*))*dζ is hemicontinuous on A* × *U if the application:*

$$\lambda \to \left\langle (a(\zeta), u(\zeta)) - (b(\zeta), w(\zeta)), \left( \frac{\delta F}{\delta a\_\lambda}(\zeta), \frac{\delta F}{\delta u\_\lambda}(\zeta) \right) \right\rangle, \quad 0 \le \lambda \le 1,$$

*is continuous at* $0^+$ *for all* (*a*, *u*),(*b*, *w*) ∈ *A* × *U, where:*

$$\frac{\delta F}{\delta a\_{\lambda}}(\zeta) := \frac{\partial f}{\partial a}(\pi\_{a\_{\lambda}, u\_{\lambda}}(\zeta)) - D\_{\alpha} \frac{\partial f}{\partial a\_{\alpha}}(\pi\_{a\_{\lambda}, u\_{\lambda}}(\zeta)) + \frac{1}{n(\beta,\gamma)} D\_{\beta\gamma}^{2} \frac{\partial f}{\partial a\_{\beta\gamma}}(\pi\_{a\_{\lambda}, u\_{\lambda}}(\zeta)) \in \mathcal{A},$$

$$\frac{\delta F}{\delta u\_{\lambda}}(\zeta) := \frac{\partial f}{\partial u}(\pi\_{a\_{\lambda}, u\_{\lambda}}(\zeta)) \in \mathcal{U},$$

$$a\_{\lambda} := \lambda a + (1 - \lambda)b, \quad u\_{\lambda} := \lambda u + (1 - \lambda)w.$$

The following lemma is an auxiliary result for proving the main results derived in the present paper.

**Lemma 1.** *Consider <sup>F</sup>*(*a*, *<sup>u</sup>*) = Ω *f*(*πa*,*u*(*ζ*))*dζ is pseudomonotone and hemicontinuous on A* × *U. The pair* (*a*, *u*) ∈ *A* × *U is a solution for (CVIP) if and only if* (*a*, *u*) *is a solution for the following variational inequality problem:*

$$\begin{split} &\int\_{\Omega} \Big[ (b(\zeta) - a(\zeta)) \frac{\partial f}{\partial a}(\pi\_{b,w}(\zeta)) + (w(\zeta) - u(\zeta)) \frac{\partial f}{\partial u}(\pi\_{b,w}(\zeta)) \\ &+ D\_{\alpha}(b(\zeta) - a(\zeta)) \frac{\partial f}{\partial a\_{\alpha}}(\pi\_{b,w}(\zeta)) \\ &+ \frac{1}{n(\beta,\gamma)} D\_{\beta\gamma}^{2}(b(\zeta) - a(\zeta)) \frac{\partial f}{\partial a\_{\beta\gamma}}(\pi\_{b,w}(\zeta)) \Big] d\zeta \ge 0, \quad \forall (b, w) \in A \times \mathcal{U}. \end{split}$$

**Proof.** Consider that the pair (*a*, *u*) ∈ *A* × *U* is the solution for (CVIP). As a consequence, it results that:

$$\begin{split} &\int\_{\Omega} \Big[ (b(\zeta) - a(\zeta)) \frac{\partial f}{\partial a}(\pi\_{a,u}(\zeta)) + (w(\zeta) - u(\zeta)) \frac{\partial f}{\partial u}(\pi\_{a,u}(\zeta)) \\ &+ D\_{\alpha}(b(\zeta) - a(\zeta)) \frac{\partial f}{\partial a\_{\alpha}}(\pi\_{a,u}(\zeta)) \\ &+ \frac{1}{n(\beta,\gamma)} D\_{\beta\gamma}^{2}(b(\zeta) - a(\zeta)) \frac{\partial f}{\partial a\_{\beta\gamma}}(\pi\_{a,u}(\zeta)) \Big] d\zeta \ge 0, \quad \forall (b, w) \in A \times \mathcal{U}. \end{split}$$

By using the pseudomonotonicity property of the considered multiple integral functional (see Definition 2), the previous inequality involves:

$$\begin{split} &\int\_{\Omega} \Big[ (b(\zeta) - a(\zeta)) \frac{\partial f}{\partial a}(\pi\_{b,w}(\zeta)) + (w(\zeta) - u(\zeta)) \frac{\partial f}{\partial u}(\pi\_{b,w}(\zeta)) \\ &+ D\_{\alpha}(b(\zeta) - a(\zeta)) \frac{\partial f}{\partial a\_{\alpha}}(\pi\_{b,w}(\zeta)) \\ &+ \frac{1}{n(\beta,\gamma)} D\_{\beta\gamma}^{2}(b(\zeta) - a(\zeta)) \frac{\partial f}{\partial a\_{\beta\gamma}}(\pi\_{b,w}(\zeta)) \Big] d\zeta \ge 0, \quad \forall (b, w) \in A \times \mathcal{U}. \end{split}$$

Conversely, assume that:

$$\begin{split} &\int\_{\Omega} \Big[ (b(\zeta) - a(\zeta)) \frac{\partial f}{\partial a}(\pi\_{b,w}(\zeta)) + (w(\zeta) - u(\zeta)) \frac{\partial f}{\partial u}(\pi\_{b,w}(\zeta)) \\ &+ D\_{\alpha}(b(\zeta) - a(\zeta)) \frac{\partial f}{\partial a\_{\alpha}}(\pi\_{b,w}(\zeta)) \\ &+ \frac{1}{n(\beta,\gamma)} D\_{\beta\gamma}^{2}(b(\zeta) - a(\zeta)) \frac{\partial f}{\partial a\_{\beta\gamma}}(\pi\_{b,w}(\zeta)) \Big] d\zeta \ge 0, \quad \forall (b, w) \in A \times \mathcal{U}. \end{split}$$

For *λ* ∈ (0, 1] and (*b*, *w*) ∈ *A* × *U*, we define:

$$(b\_{\lambda}, w\_{\lambda}) = ((1-\lambda)a + \lambda b, \ (1-\lambda)u + \lambda w) \in A \times \mathcal{U}.$$

Thus, the above inequality implies:

$$\begin{split} &\int\_{\Omega} \Big[ (b\_{\lambda}(\zeta) - a(\zeta)) \frac{\partial f}{\partial a}(\pi\_{b\_{\lambda},w\_{\lambda}}(\zeta)) + (w\_{\lambda}(\zeta) - u(\zeta)) \frac{\partial f}{\partial u}(\pi\_{b\_{\lambda},w\_{\lambda}}(\zeta)) \\ &+ D\_{\alpha}(b\_{\lambda}(\zeta) - a(\zeta)) \frac{\partial f}{\partial a\_{\alpha}}(\pi\_{b\_{\lambda},w\_{\lambda}}(\zeta)) \\ &+ \frac{1}{n(\beta,\gamma)} D\_{\beta\gamma}^{2}(b\_{\lambda}(\zeta) - a(\zeta)) \frac{\partial f}{\partial a\_{\beta\gamma}}(\pi\_{b\_{\lambda},w\_{\lambda}}(\zeta)) \Big] d\zeta \ge 0, \quad \forall (b, w) \in A \times \mathcal{U}. \end{split}$$

Since $b\_{\lambda}(\zeta) - a(\zeta) = \lambda(b(\zeta) - a(\zeta))$ and $w\_{\lambda}(\zeta) - u(\zeta) = \lambda(w(\zeta) - u(\zeta))$, we may divide by $\lambda > 0$; letting $\lambda \to 0^+$ and using the hemicontinuity of the functional, we obtain:

$$\begin{split} &\int\_{\Omega} \Big[ (b(\zeta) - a(\zeta)) \frac{\partial f}{\partial a}(\pi\_{a,u}(\zeta)) + (w(\zeta) - u(\zeta)) \frac{\partial f}{\partial u}(\pi\_{a,u}(\zeta)) \\ &+ D\_{\alpha}(b(\zeta) - a(\zeta)) \frac{\partial f}{\partial a\_{\alpha}}(\pi\_{a,u}(\zeta)) \\ &+ \frac{1}{n(\beta,\gamma)} D\_{\beta\gamma}^{2}(b(\zeta) - a(\zeta)) \frac{\partial f}{\partial a\_{\beta\gamma}}(\pi\_{a,u}(\zeta)) \Big] d\zeta \ge 0, \quad \forall (b, w) \in A \times \mathcal{U}, \end{split}$$

which proves that (*a*, *u*) solves (CVIP). This completes the proof of this lemma.

Now, we give the definition of *lower semicontinuity* for the multiple integral functional *<sup>F</sup>*(*a*, *<sup>u</sup>*) = Ω *f*(*πa*,*u*(*ζ*))*dζ*.

**Definition 4.** *The multiple integral functional <sup>F</sup>*(*a*, *<sup>u</sup>*) = Ω *f*(*πa*,*u*(*ζ*))*dζ is called lower semicontinuous at a point* (*a*0, *u*0) ∈ *A* × *U if:*

$$\int\_{\Omega} f(\pi\_{a\_0, u\_0}(\zeta)) d\zeta \le \liminf\_{(a,u) \to (a\_0, u\_0)} \int\_{\Omega} f(\pi\_{a, u}(\zeta)) d\zeta.$$

#### **3. Well Posedness Associated with (CVCP)**

In this section, by considering the notions introduced in Section 2, we study the well posedness for the considered class of constrained optimization problems (CVCPs). To this aim, we introduce the following definitions and notations.

Denote by S the *solution set* of (CVCP), namely:

$$\begin{split} \mathcal{S} = \Big\{ (a, u) \in A \times \mathcal{U} \; : \; &\int\_{\Omega} f(\pi\_{a,u}(\zeta)) d\zeta \le \inf\_{(b,w) \in \Theta} \int\_{\Omega} f(\pi\_{b,w}(\zeta)) d\zeta \quad \text{and} \\ &\int\_{\Omega} \Big[ (b(\zeta) - a(\zeta)) \frac{\partial f}{\partial a}(\pi\_{a,u}(\zeta)) + (w(\zeta) - u(\zeta)) \frac{\partial f}{\partial u}(\pi\_{a,u}(\zeta)) \\ &+ D\_{\alpha}(b(\zeta) - a(\zeta)) \frac{\partial f}{\partial a\_{\alpha}}(\pi\_{a,u}(\zeta)) \\ &+ \frac{1}{n(\beta,\gamma)} D\_{\beta\gamma}^{2}(b(\zeta) - a(\zeta)) \frac{\partial f}{\partial a\_{\beta\gamma}}(\pi\_{a,u}(\zeta)) \Big] d\zeta \ge 0, \; \forall (b, w) \in A \times \mathcal{U} \Big\}. \end{split}$$

Consider the *set of approximating solutions* of (CVCP), for *σ*, *ι* ≥ 0, as follows:

$$\begin{split} \mathcal{S}(\sigma,\iota) = \Big\{ (a, u) \in A \times \mathcal{U} \; : \; &\int\_{\Omega} f(\pi\_{a,u}(\zeta)) d\zeta \le \inf\_{(b,w) \in \Theta} \int\_{\Omega} f(\pi\_{b,w}(\zeta)) d\zeta + \sigma \quad \text{and} \\ &\int\_{\Omega} \Big[ (b(\zeta) - a(\zeta)) \frac{\partial f}{\partial a}(\pi\_{a,u}(\zeta)) + (w(\zeta) - u(\zeta)) \frac{\partial f}{\partial u}(\pi\_{a,u}(\zeta)) \\ &+ D\_{\alpha}(b(\zeta) - a(\zeta)) \frac{\partial f}{\partial a\_{\alpha}}(\pi\_{a,u}(\zeta)) \\ &+ \frac{1}{n(\beta,\gamma)} D\_{\beta\gamma}^{2}(b(\zeta) - a(\zeta)) \frac{\partial f}{\partial a\_{\beta\gamma}}(\pi\_{a,u}(\zeta)) \Big] d\zeta + \iota \ge 0, \; \forall (b, w) \in A \times \mathcal{U} \Big\}. \end{split}$$

**Remark 1.** *For* (*σ*, *ι*)=(0, 0)*, we obtain* S = S(*σ*, *ι*)*, and for* (*σ*, *ι*) > (0, 0)*, we obtain* S⊆S(*σ*, *ι*)*.*

**Definition 5.** *The sequence* $\{(a_n, u_n)\}$ *is an approximating sequence for (CVCP) if there exists a sequence of positive real numbers* $\iota_n \to 0$ *as* $n \to \infty$ *such that:*

$$\limsup_{n \to \infty} \int_{\Omega} f(\pi_{a_n,u_n}(\zeta))\,d\zeta \le \inf_{(b,w) \in \Theta} \int_{\Omega} f(\pi_{b,w}(\zeta))\,d\zeta$$

*and:*

$$\begin{split}
\int_{\Omega} \Big[ & (b(\zeta) - a_n(\zeta)) \frac{\partial f}{\partial a}(\pi_{a_n,u_n}(\zeta)) + (w(\zeta) - u_n(\zeta)) \frac{\partial f}{\partial u}(\pi_{a_n,u_n}(\zeta)) \\
& + D_{\alpha}(b(\zeta) - a_n(\zeta)) \frac{\partial f}{\partial a_{\alpha}}(\pi_{a_n,u_n}(\zeta)) \\
& + \frac{1}{n(\beta,\gamma)} D^{2}_{\beta\gamma}(b(\zeta) - a_n(\zeta)) \frac{\partial f}{\partial a_{\beta\gamma}}(\pi_{a_n,u_n}(\zeta)) \Big] d\zeta + \iota_n \ge 0, \quad \forall (b,w) \in A \times \mathcal{U}
\end{split}$$

*are fulfilled.*

**Definition 6.** *The constrained optimization problem (CVCP) is well posed if:*

*(i) it admits a single solution* $(a_0, u_0)$*; and (ii) each approximating sequence of (CVCP) converges to* $(a_0, u_0)$*.*

Furthermore, denote by "diam $B$" the *diameter* of the set $B$, defined as follows:

$$\operatorname{diam} B = \sup_{x,y \in B} \|x - y\|.$$
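For a finite set of points, the supremum above is attained as a maximum over pairs, so the diameter can be computed directly. A minimal sketch (the sample set and the choice of the Euclidean norm are illustrative assumptions, not part of the (CVCP) setting):

```python
import itertools
import math

def diam(B):
    """Diameter of a finite set B of points: the largest pairwise Euclidean distance."""
    return max(
        (math.dist(x, y) for x, y in itertools.combinations(B, 2)),
        default=0.0,  # convention: a singleton (or empty) set has diameter 0
    )

B = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
print(diam(B))  # largest pairwise distance, here sqrt(2)
```

The same quantity drives the well-posedness criterion below: what matters is whether diam $\mathcal{S}(\sigma,\iota)$ shrinks to zero as the tolerances vanish.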

The next theorem represents a first characterization result on the well posedness for (CVCP).

**Theorem 1.** *Consider that* $F(a,u) = \int_{\Omega} f(\pi_{a,u}(\zeta))\,d\zeta$ *is lower semicontinuous, monotone and hemicontinuous on* $A \times \mathcal{U}$*. The constrained optimization problem (CVCP) is well posed if and only if:*

$$\mathcal{S}(\sigma,\iota) \neq \emptyset, \ \forall \sigma,\iota > 0 \quad \text{and} \quad \operatorname{diam} \mathcal{S}(\sigma,\iota) \to 0 \ \text{as} \ (\sigma,\iota) \to (0,0).$$
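The mechanism behind this criterion can be seen on a toy scalar problem (an illustrative sketch under simplifying assumptions, not the (CVCP) setting): for minimizing $F(x) = x^2$ over $\mathbb{R}$, the $\sigma$-approximate solution set is $S(\sigma) = \{x : x^2 \le \sigma\} = [-\sqrt{\sigma}, \sqrt{\sigma}]$, which is nonempty for every $\sigma > 0$ and whose diameter $2\sqrt{\sigma}$ vanishes as $\sigma \to 0$:

```python
import math

# Toy scalar analogue: minimize F(x) = x^2 on R, so inf F = 0 and the
# sigma-approximate solution set is S(sigma) = [-sqrt(sigma), sqrt(sigma)].
def diam_S(sigma):
    # Diameter of the interval [-sqrt(sigma), sqrt(sigma)].
    return 2.0 * math.sqrt(sigma)

for sigma in [1.0, 1e-2, 1e-4, 1e-8]:
    print(f"sigma={sigma:g}  diam S(sigma)={diam_S(sigma):g}")
# The diameters shrink to 0, which is the shrinking-sets condition of the criterion.
```

In contrast, a problem such as minimizing a flat function (e.g., $F \equiv 0$) has $S(\sigma) = \mathbb{R}$ for all $\sigma$, so the diameters do not shrink and well posedness fails.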

**Proof.** Consider that (CVCP) is well posed. In consequence, it admits a single solution $(\bar a, \bar u) \in \mathcal{S}$. By using the inclusion $\mathcal{S} \subseteq \mathcal{S}(\sigma,\iota)$, $\forall \sigma, \iota > 0$, we obtain $\mathcal{S}(\sigma,\iota) \neq \emptyset$, $\forall \sigma, \iota > 0$. Now, contrary to the result, suppose that $\operatorname{diam} \mathcal{S}(\sigma,\iota) \not\to 0$ as $(\sigma,\iota) \to (0,0)$. Consequently, there exist $r > 0$, a positive integer $m$, $\sigma_n, \iota_n > 0$ with $\sigma_n, \iota_n \to 0$, and $(a_n, u_n), (a'_n, u'_n) \in \mathcal{S}(\sigma_n, \iota_n)$ such that:

$$\|(a_n, u_n) - (a'_n, u'_n)\| > r, \quad \forall n \ge m. \tag{1}$$

Since $(a_n, u_n), (a'_n, u'_n) \in \mathcal{S}(\sigma_n, \iota_n)$, we obtain:

$$\int_{\Omega} f(\pi_{a_n,u_n}(\zeta))\,d\zeta \le \inf_{(b,w) \in \Theta} \int_{\Omega} f(\pi_{b,w}(\zeta))\,d\zeta + \sigma_n,$$

$$\begin{split}
\int_{\Omega} \Big[ & (b(\zeta) - a_n(\zeta)) \frac{\partial f}{\partial a}(\pi_{a_n,u_n}(\zeta)) + (w(\zeta) - u_n(\zeta)) \frac{\partial f}{\partial u}(\pi_{a_n,u_n}(\zeta)) \\
& + D_{\alpha}(b(\zeta) - a_n(\zeta)) \frac{\partial f}{\partial a_{\alpha}}(\pi_{a_n,u_n}(\zeta)) \\
& + \frac{1}{n(\beta,\gamma)} D^{2}_{\beta\gamma}(b(\zeta) - a_n(\zeta)) \frac{\partial f}{\partial a_{\beta\gamma}}(\pi_{a_n,u_n}(\zeta)) \Big] d\zeta + \iota_n \ge 0, \quad \forall (b,w) \in A \times \mathcal{U}
\end{split}$$

and:

$$\int_{\Omega} f(\pi_{a'_n,u'_n}(\zeta))\,d\zeta \le \inf_{(b,w) \in \Theta} \int_{\Omega} f(\pi_{b,w}(\zeta))\,d\zeta + \sigma_n,$$

$$\begin{split}
\int_{\Omega} \Big[ & (b(\zeta) - a'_n(\zeta)) \frac{\partial f}{\partial a}(\pi_{a'_n,u'_n}(\zeta)) + (w(\zeta) - u'_n(\zeta)) \frac{\partial f}{\partial u}(\pi_{a'_n,u'_n}(\zeta)) \\
& + D_{\alpha}(b(\zeta) - a'_n(\zeta)) \frac{\partial f}{\partial a_{\alpha}}(\pi_{a'_n,u'_n}(\zeta)) \\
& + \frac{1}{n(\beta,\gamma)} D^{2}_{\beta\gamma}(b(\zeta) - a'_n(\zeta)) \frac{\partial f}{\partial a_{\beta\gamma}}(\pi_{a'_n,u'_n}(\zeta)) \Big] d\zeta + \iota_n \ge 0, \quad \forall (b,w) \in A \times \mathcal{U}.
\end{split}$$

Clearly, it follows that $\{(a_n, u_n)\}$ and $\{(a'_n, u'_n)\}$ are two approximating sequences for (CVCP), which converge to $(\bar a, \bar u)$ (by hypothesis, the problem (CVCP) is well posed). By computation, we obtain:

$$\begin{split}
\|(a_n, u_n) - (a'_n, u'_n)\| &= \|(a_n, u_n) - (\bar a, \bar u) + (\bar a, \bar u) - (a'_n, u'_n)\| \\
&\le \|(a_n, u_n) - (\bar a, \bar u)\| + \|(\bar a, \bar u) - (a'_n, u'_n)\| \to 0 \ \text{as} \ n \to \infty,
\end{split}$$

which contradicts (1). It follows that $\operatorname{diam} \mathcal{S}(\sigma,\iota) \to 0$ as $(\sigma,\iota) \to (0,0)$.

Conversely, let $\{(a_n, u_n)\}$ be an approximating sequence for (CVCP). Therefore, there exists a sequence of positive real numbers $\iota_n \to 0$ as $n \to \infty$ such that the inequalities:

$$\limsup_{n \to \infty} \int_{\Omega} f(\pi_{a_n,u_n}(\zeta))\,d\zeta \le \inf_{(b,w) \in \Theta} \int_{\Omega} f(\pi_{b,w}(\zeta))\,d\zeta, \tag{2}$$

$$\begin{split}
\int_{\Omega} \Big[ & (b(\zeta) - a_n(\zeta)) \frac{\partial f}{\partial a}(\pi_{a_n,u_n}(\zeta)) + (w(\zeta) - u_n(\zeta)) \frac{\partial f}{\partial u}(\pi_{a_n,u_n}(\zeta)) \\
& + D_{\alpha}(b(\zeta) - a_n(\zeta)) \frac{\partial f}{\partial a_{\alpha}}(\pi_{a_n,u_n}(\zeta)) \\
& + \frac{1}{n(\beta,\gamma)} D^{2}_{\beta\gamma}(b(\zeta) - a_n(\zeta)) \frac{\partial f}{\partial a_{\beta\gamma}}(\pi_{a_n,u_n}(\zeta)) \Big] d\zeta + \iota_n \ge 0, \quad \forall (b,w) \in A \times \mathcal{U}
\end{split} \tag{3}$$

hold, implying that $(a_n, u_n) \in \mathcal{S}(\sigma_n, \iota_n)$ (where $\sigma_n \to 0$ as $n \to \infty$ is a sequence of positive real numbers). By considering $\operatorname{diam} \mathcal{S}(\sigma_n, \iota_n) \to 0$ as $(\sigma_n, \iota_n) \to (0,0)$, we obtain that $\{(a_n, u_n)\}$ is a Cauchy sequence, which converges to some $(\bar a, \bar u) \in A \times \mathcal{U}$ since $A \times \mathcal{U}$ is a closed set.

By using the monotonicity property of $\int_{\Omega} f(\pi_{a,u}(\zeta))\,d\zeta$ on $A \times \mathcal{U}$, for $(\bar a, \bar u), (b, w) \in A \times \mathcal{U}$, we have:

$$\begin{split}
\int_{\Omega} \Big[ & (\bar a(\zeta) - b(\zeta)) \Big( \frac{\partial f}{\partial a}(\pi_{\bar a,\bar u}(\zeta)) - \frac{\partial f}{\partial a}(\pi_{b,w}(\zeta)) \Big) \\
& + (\bar u(\zeta) - w(\zeta)) \Big( \frac{\partial f}{\partial u}(\pi_{\bar a,\bar u}(\zeta)) - \frac{\partial f}{\partial u}(\pi_{b,w}(\zeta)) \Big) \\
& + D_{\alpha}(\bar a(\zeta) - b(\zeta)) \Big( \frac{\partial f}{\partial a_{\alpha}}(\pi_{\bar a,\bar u}(\zeta)) - \frac{\partial f}{\partial a_{\alpha}}(\pi_{b,w}(\zeta)) \Big) \\
& + \frac{1}{n(\beta,\gamma)} D^{2}_{\beta\gamma}(\bar a(\zeta) - b(\zeta)) \Big( \frac{\partial f}{\partial a_{\beta\gamma}}(\pi_{\bar a,\bar u}(\zeta)) - \frac{\partial f}{\partial a_{\beta\gamma}}(\pi_{b,w}(\zeta)) \Big) \Big] d\zeta \ge 0,
\end{split}$$

or, equivalently:

$$\begin{split}
\int_{\Omega} \Big[ & (\bar a(\zeta) - b(\zeta)) \frac{\partial f}{\partial a}(\pi_{\bar a,\bar u}(\zeta)) + (\bar u(\zeta) - w(\zeta)) \frac{\partial f}{\partial u}(\pi_{\bar a,\bar u}(\zeta)) \\
& + D_{\alpha}(\bar a(\zeta) - b(\zeta)) \frac{\partial f}{\partial a_{\alpha}}(\pi_{\bar a,\bar u}(\zeta)) + \frac{1}{n(\beta,\gamma)} D^{2}_{\beta\gamma}(\bar a(\zeta) - b(\zeta)) \frac{\partial f}{\partial a_{\beta\gamma}}(\pi_{\bar a,\bar u}(\zeta)) \Big] d\zeta \\
\ge \int_{\Omega} \Big[ & (\bar a(\zeta) - b(\zeta)) \frac{\partial f}{\partial a}(\pi_{b,w}(\zeta)) + (\bar u(\zeta) - w(\zeta)) \frac{\partial f}{\partial u}(\pi_{b,w}(\zeta)) \\
& + D_{\alpha}(\bar a(\zeta) - b(\zeta)) \frac{\partial f}{\partial a_{\alpha}}(\pi_{b,w}(\zeta)) + \frac{1}{n(\beta,\gamma)} D^{2}_{\beta\gamma}(\bar a(\zeta) - b(\zeta)) \frac{\partial f}{\partial a_{\beta\gamma}}(\pi_{b,w}(\zeta)) \Big] d\zeta.
\end{split} \tag{4}$$

By considering the limit in inequality (3), we obtain:

$$\begin{split}
\int_{\Omega} \Big[ & (\bar a(\zeta) - b(\zeta)) \frac{\partial f}{\partial a}(\pi_{\bar a,\bar u}(\zeta)) + (\bar u(\zeta) - w(\zeta)) \frac{\partial f}{\partial u}(\pi_{\bar a,\bar u}(\zeta)) \\
& + D_{\alpha}(\bar a(\zeta) - b(\zeta)) \frac{\partial f}{\partial a_{\alpha}}(\pi_{\bar a,\bar u}(\zeta)) \\
& + \frac{1}{n(\beta,\gamma)} D^{2}_{\beta\gamma}(\bar a(\zeta) - b(\zeta)) \frac{\partial f}{\partial a_{\beta\gamma}}(\pi_{\bar a,\bar u}(\zeta)) \Big] d\zeta \le 0.
\end{split} \tag{5}$$

By using (4) and (5), it results that:

$$\begin{split}
\int_{\Omega} \Big[ & (b(\zeta) - \bar a(\zeta)) \frac{\partial f}{\partial a}(\pi_{b,w}(\zeta)) + (w(\zeta) - \bar u(\zeta)) \frac{\partial f}{\partial u}(\pi_{b,w}(\zeta)) \\
& + D_{\alpha}(b(\zeta) - \bar a(\zeta)) \frac{\partial f}{\partial a_{\alpha}}(\pi_{b,w}(\zeta)) \\
& + \frac{1}{n(\beta,\gamma)} D^{2}_{\beta\gamma}(b(\zeta) - \bar a(\zeta)) \frac{\partial f}{\partial a_{\beta\gamma}}(\pi_{b,w}(\zeta)) \Big] d\zeta \ge 0.
\end{split}$$

Now, we use Lemma 1 to obtain:

$$\begin{split}
\int_{\Omega} \Big[ & (b(\zeta) - \bar a(\zeta)) \frac{\partial f}{\partial a}(\pi_{\bar a,\bar u}(\zeta)) + (w(\zeta) - \bar u(\zeta)) \frac{\partial f}{\partial u}(\pi_{\bar a,\bar u}(\zeta)) \\
& + D_{\alpha}(b(\zeta) - \bar a(\zeta)) \frac{\partial f}{\partial a_{\alpha}}(\pi_{\bar a,\bar u}(\zeta)) \\
& + \frac{1}{n(\beta,\gamma)} D^{2}_{\beta\gamma}(b(\zeta) - \bar a(\zeta)) \frac{\partial f}{\partial a_{\beta\gamma}}(\pi_{\bar a,\bar u}(\zeta)) \Big] d\zeta \ge 0, \quad \forall (b,w) \in A \times \mathcal{U},
\end{split} \tag{6}$$

which implies that (*a*¯, *u*¯) ∈ Θ.

Since the functional $\int_{\Omega} f(\pi_{a,u}(\zeta))\,d\zeta$ is lower semicontinuous, we conclude:

$$\int_{\Omega} f(\pi_{\bar a,\bar u}(\zeta))\,d\zeta \le \liminf_{n \to \infty} \int_{\Omega} f(\pi_{a_n,u_n}(\zeta))\,d\zeta \le \limsup_{n \to \infty} \int_{\Omega} f(\pi_{a_n,u_n}(\zeta))\,d\zeta.$$

By (2), the previous inequality can be written as

$$\int_{\Omega} f(\pi_{\bar a,\bar u}(\zeta))\,d\zeta \le \inf_{(b,w)\in\Theta} \int_{\Omega} f(\pi_{b,w}(\zeta))\,d\zeta. \tag{7}$$

As a consequence, by (6) and (7), we obtain that $(\bar a, \bar u)$ is a solution for (CVCP).

Let us prove that $(\bar a, \bar u)$ is the single solution for (CVCP). Suppose that $(a_1, u_1) \neq (a_2, u_2)$ are two solutions for (CVCP). Then:

$$0 < \|(a_1, u_1) - (a_2, u_2)\| \le \operatorname{diam} \mathcal{S}(\sigma,\iota) \to 0 \ \text{as} \ (\sigma,\iota) \to (0,0),$$

which is not possible. The proof is now complete.

The second main result of this paper is contained in the next theorem.

**Theorem 2.** *Consider that* $F(a,u) = \int_{\Omega} f(\pi_{a,u}(\zeta))\,d\zeta$ *is lower semicontinuous, monotone and hemicontinuous on* $A \times \mathcal{U}$*. The constrained optimization problem (CVCP) is well posed if and only if it admits a solution.*

**Proof.** Consider that (CVCP) is well posed. In consequence, it has a solution $(a_0, u_0)$. Conversely, consider that (CVCP) has a solution $(a_0, u_0)$, that is:

$$\int_{\Omega} f(\pi_{a_0,u_0}(\zeta))\,d\zeta \le \inf_{(b,w)\in\Theta} \int_{\Omega} f(\pi_{b,w}(\zeta))\,d\zeta,$$

$$\begin{split}
\int_{\Omega} \Big[ & (b(\zeta) - a_0(\zeta)) \frac{\partial f}{\partial a}(\pi_{a_0,u_0}(\zeta)) + (w(\zeta) - u_0(\zeta)) \frac{\partial f}{\partial u}(\pi_{a_0,u_0}(\zeta)) \\
& + D_{\alpha}(b(\zeta) - a_0(\zeta)) \frac{\partial f}{\partial a_{\alpha}}(\pi_{a_0,u_0}(\zeta)) \\
& + \frac{1}{n(\beta,\gamma)} D^{2}_{\beta\gamma}(b(\zeta) - a_0(\zeta)) \frac{\partial f}{\partial a_{\beta\gamma}}(\pi_{a_0,u_0}(\zeta)) \Big] d\zeta \ge 0, \quad \forall (b,w) \in A \times \mathcal{U},
\end{split} \tag{8}$$

but (CVCP) is not well posed. Therefore, by Definition 6, there exists an approximating sequence $\{(a_n, u_n)\}$ of (CVCP) which does not converge to $(a_0, u_0)$; that is, the following inequalities hold:

$$\limsup_{n \to \infty} \int_{\Omega} f(\pi_{a_n,u_n}(\zeta))\,d\zeta \le \inf_{(b,w) \in \Theta} \int_{\Omega} f(\pi_{b,w}(\zeta))\,d\zeta$$

and:

$$\begin{split}
\int_{\Omega} \Big[ & (b(\zeta) - a_n(\zeta)) \frac{\partial f}{\partial a}(\pi_{a_n,u_n}(\zeta)) + (w(\zeta) - u_n(\zeta)) \frac{\partial f}{\partial u}(\pi_{a_n,u_n}(\zeta)) \\
& + D_{\alpha}(b(\zeta) - a_n(\zeta)) \frac{\partial f}{\partial a_{\alpha}}(\pi_{a_n,u_n}(\zeta)) \\
& + \frac{1}{n(\beta,\gamma)} D^{2}_{\beta\gamma}(b(\zeta) - a_n(\zeta)) \frac{\partial f}{\partial a_{\beta\gamma}}(\pi_{a_n,u_n}(\zeta)) \Big] d\zeta + \iota_n \ge 0, \quad \forall (b,w) \in A \times \mathcal{U}.
\end{split} \tag{9}$$

Furthermore, to prove the boundedness of $\{(a_n, u_n)\}$, we proceed by contradiction. Suppose, in contrast to the result, that $\{(a_n, u_n)\}$ is not bounded, that is, $\|(a_n, u_n)\| \to +\infty$ as $n \to +\infty$. Let us consider $\delta_n = \dfrac{1}{\|(a_n, u_n) - (a_0, u_0)\|}$ and $(\mathfrak{a}_n, \mathfrak{u}_n) = (a_0, u_0) + \delta_n[(a_n, u_n) - (a_0, u_0)]$. We can see that $\{(\mathfrak{a}_n, \mathfrak{u}_n)\}$ is bounded in $A \times \mathcal{U}$. If necessary, passing to a subsequence, we may consider that:

$$(\mathfrak{a}_n, \mathfrak{u}_n) \to (\mathfrak{a}, \mathfrak{u}) \neq (a_0, u_0) \ \text{weakly in} \ A \times \mathcal{U}.$$

It is easy to check that $(\mathfrak{a}, \mathfrak{u}) \neq (a_0, u_0)$, thanks to $\|\delta_n[(a_n, u_n) - (a_0, u_0)]\| = 1$ for all $n \in \mathbb{N}$. Since $(a_0, u_0)$ is a solution of (CVCP), the inequalities in (8) are satisfied. By Lemma 1, we obtain:

$$\int_{\Omega} f(\pi_{a_0,u_0}(\zeta))\,d\zeta \le \inf_{(b,w)\in\Theta} \int_{\Omega} f(\pi_{b,w}(\zeta))\,d\zeta,$$

$$\begin{split}
\int_{\Omega} \Big[ & (b(\zeta) - a_0(\zeta)) \frac{\partial f}{\partial a}(\pi_{b,w}(\zeta)) + (w(\zeta) - u_0(\zeta)) \frac{\partial f}{\partial u}(\pi_{b,w}(\zeta)) \\
& + D_{\alpha}(b(\zeta) - a_0(\zeta)) \frac{\partial f}{\partial a_{\alpha}}(\pi_{b,w}(\zeta)) \\
& + \frac{1}{n(\beta,\gamma)} D^{2}_{\beta\gamma}(b(\zeta) - a_0(\zeta)) \frac{\partial f}{\partial a_{\beta\gamma}}(\pi_{b,w}(\zeta)) \Big] d\zeta \ge 0, \quad \forall (b,w) \in A \times \mathcal{U}.
\end{split} \tag{10}$$

By using the monotonicity property of the multiple integral functional $\int_{\Omega} f(\pi_{a,u}(\zeta))\,d\zeta$ on $A \times \mathcal{U}$, for $(a_n, u_n), (b, w) \in A \times \mathcal{U}$, we have:

$$\begin{split}
\int_{\Omega} \Big[ & (a_n(\zeta) - b(\zeta)) \Big( \frac{\partial f}{\partial a}(\pi_{a_n,u_n}(\zeta)) - \frac{\partial f}{\partial a}(\pi_{b,w}(\zeta)) \Big) \\
& + (u_n(\zeta) - w(\zeta)) \Big( \frac{\partial f}{\partial u}(\pi_{a_n,u_n}(\zeta)) - \frac{\partial f}{\partial u}(\pi_{b,w}(\zeta)) \Big) \\
& + D_{\alpha}(a_n(\zeta) - b(\zeta)) \Big( \frac{\partial f}{\partial a_{\alpha}}(\pi_{a_n,u_n}(\zeta)) - \frac{\partial f}{\partial a_{\alpha}}(\pi_{b,w}(\zeta)) \Big) \\
& + \frac{1}{n(\beta,\gamma)} D^{2}_{\beta\gamma}(a_n(\zeta) - b(\zeta)) \Big( \frac{\partial f}{\partial a_{\beta\gamma}}(\pi_{a_n,u_n}(\zeta)) - \frac{\partial f}{\partial a_{\beta\gamma}}(\pi_{b,w}(\zeta)) \Big) \Big] d\zeta \ge 0,
\end{split}$$

or, equivalently:

$$\begin{split}
\int_{\Omega} \Big[ & (b(\zeta) - a_n(\zeta)) \frac{\partial f}{\partial a}(\pi_{a_n,u_n}(\zeta)) + (w(\zeta) - u_n(\zeta)) \frac{\partial f}{\partial u}(\pi_{a_n,u_n}(\zeta)) \\
& + D_{\alpha}(b(\zeta) - a_n(\zeta)) \frac{\partial f}{\partial a_{\alpha}}(\pi_{a_n,u_n}(\zeta)) + \frac{1}{n(\beta,\gamma)} D^{2}_{\beta\gamma}(b(\zeta) - a_n(\zeta)) \frac{\partial f}{\partial a_{\beta\gamma}}(\pi_{a_n,u_n}(\zeta)) \Big] d\zeta \\
\le \int_{\Omega} \Big[ & (b(\zeta) - a_n(\zeta)) \frac{\partial f}{\partial a}(\pi_{b,w}(\zeta)) + (w(\zeta) - u_n(\zeta)) \frac{\partial f}{\partial u}(\pi_{b,w}(\zeta)) \\
& + D_{\alpha}(b(\zeta) - a_n(\zeta)) \frac{\partial f}{\partial a_{\alpha}}(\pi_{b,w}(\zeta)) + \frac{1}{n(\beta,\gamma)} D^{2}_{\beta\gamma}(b(\zeta) - a_n(\zeta)) \frac{\partial f}{\partial a_{\beta\gamma}}(\pi_{b,w}(\zeta)) \Big] d\zeta.
\end{split} \tag{11}$$

Combining (9) and (11), we have:

$$\begin{split}
\int_{\Omega} \Big[ & (b(\zeta) - a_n(\zeta)) \frac{\partial f}{\partial a}(\pi_{b,w}(\zeta)) + (w(\zeta) - u_n(\zeta)) \frac{\partial f}{\partial u}(\pi_{b,w}(\zeta)) \\
& + D_{\alpha}(b(\zeta) - a_n(\zeta)) \frac{\partial f}{\partial a_{\alpha}}(\pi_{b,w}(\zeta)) \\
& + \frac{1}{n(\beta,\gamma)} D^{2}_{\beta\gamma}(b(\zeta) - a_n(\zeta)) \frac{\partial f}{\partial a_{\beta\gamma}}(\pi_{b,w}(\zeta)) \Big] d\zeta \ge -\iota_n, \quad \forall (b,w) \in A \times \mathcal{U}.
\end{split}$$

Since $\delta_n \to 0$ as $n \to \infty$ (by the assumption that $\{(a_n, u_n)\}$ is not bounded), we can take $n_0 \in \mathbb{N}$ large enough such that $\delta_n < 1$ for all $n \ge n_0$. Then, by multiplying the previous inequality and (10) by $\delta_n > 0$ and $1 - \delta_n > 0$, respectively, and adding, we obtain:

$$\begin{split}
\int_{\Omega} \Big[ & (b(\zeta) - \mathfrak{a}_n(\zeta)) \frac{\partial f}{\partial a}(\pi_{b,w}(\zeta)) + (w(\zeta) - \mathfrak{u}_n(\zeta)) \frac{\partial f}{\partial u}(\pi_{b,w}(\zeta)) \\
& + D_{\alpha}(b(\zeta) - \mathfrak{a}_n(\zeta)) \frac{\partial f}{\partial a_{\alpha}}(\pi_{b,w}(\zeta)) \\
& + \frac{1}{n(\beta,\gamma)} D^{2}_{\beta\gamma}(b(\zeta) - \mathfrak{a}_n(\zeta)) \frac{\partial f}{\partial a_{\beta\gamma}}(\pi_{b,w}(\zeta)) \Big] d\zeta \ge -\iota_n, \quad \forall (b,w) \in A \times \mathcal{U}, \ \forall n \ge n_0.
\end{split}$$

Since $(\mathfrak{a}_n, \mathfrak{u}_n) \to (\mathfrak{a}, \mathfrak{u}) \neq (a_0, u_0)$ and $(\mathfrak{a}_n, \mathfrak{u}_n) = (a_0, u_0) + \delta_n[(a_n, u_n) - (a_0, u_0)]$, we have:

$$\begin{split}
\int_{\Omega} \Big[ & (b(\zeta) - \mathfrak{a}(\zeta)) \frac{\partial f}{\partial a}(\pi_{b,w}(\zeta)) + (w(\zeta) - \mathfrak{u}(\zeta)) \frac{\partial f}{\partial u}(\pi_{b,w}(\zeta)) \\
& + D_{\alpha}(b(\zeta) - \mathfrak{a}(\zeta)) \frac{\partial f}{\partial a_{\alpha}}(\pi_{b,w}(\zeta)) + \frac{1}{n(\beta,\gamma)} D^{2}_{\beta\gamma}(b(\zeta) - \mathfrak{a}(\zeta)) \frac{\partial f}{\partial a_{\beta\gamma}}(\pi_{b,w}(\zeta)) \Big] d\zeta \\
= \lim_{n \to \infty} \int_{\Omega} \Big[ & (b(\zeta) - \mathfrak{a}_n(\zeta)) \frac{\partial f}{\partial a}(\pi_{b,w}(\zeta)) + (w(\zeta) - \mathfrak{u}_n(\zeta)) \frac{\partial f}{\partial u}(\pi_{b,w}(\zeta)) \\
& + D_{\alpha}(b(\zeta) - \mathfrak{a}_n(\zeta)) \frac{\partial f}{\partial a_{\alpha}}(\pi_{b,w}(\zeta)) + \frac{1}{n(\beta,\gamma)} D^{2}_{\beta\gamma}(b(\zeta) - \mathfrak{a}_n(\zeta)) \frac{\partial f}{\partial a_{\beta\gamma}}(\pi_{b,w}(\zeta)) \Big] d\zeta \\
& \ge -\lim_{n \to \infty} \iota_n = 0, \quad \forall (b,w) \in A \times \mathcal{U}.
\end{split}$$

By considering the lower semicontinuity of the considered functional, and taking into account Lemma 1, we have:

$$\int_{\Omega} f(\pi_{\mathfrak{a},\mathfrak{u}}(\zeta))\,d\zeta \le \inf_{(b,w)\in\Theta} \int_{\Omega} f(\pi_{b,w}(\zeta))\,d\zeta,$$

$$\begin{split}
\int_{\Omega} \Big[ & (b(\zeta) - \mathfrak{a}(\zeta)) \frac{\partial f}{\partial a}(\pi_{\mathfrak{a},\mathfrak{u}}(\zeta)) + (w(\zeta) - \mathfrak{u}(\zeta)) \frac{\partial f}{\partial u}(\pi_{\mathfrak{a},\mathfrak{u}}(\zeta)) \\
& + D_{\alpha}(b(\zeta) - \mathfrak{a}(\zeta)) \frac{\partial f}{\partial a_{\alpha}}(\pi_{\mathfrak{a},\mathfrak{u}}(\zeta)) \\
& + \frac{1}{n(\beta,\gamma)} D^{2}_{\beta\gamma}(b(\zeta) - \mathfrak{a}(\zeta)) \frac{\partial f}{\partial a_{\beta\gamma}}(\pi_{\mathfrak{a},\mathfrak{u}}(\zeta)) \Big] d\zeta \ge 0, \quad \forall (b,w) \in A \times \mathcal{U}.
\end{split} \tag{12}$$

We obtain that $(\mathfrak{a}, \mathfrak{u})$ is a solution of (CVCP), which contradicts the uniqueness of $(a_0, u_0)$. In consequence, $\{(a_n, u_n)\}$ is a bounded sequence, having a convergent subsequence $\{(a_{n_k}, u_{n_k})\}$ which converges to $(\bar a, \bar u) \in A \times \mathcal{U}$ as $k \to \infty$. Now, by Definition 1, for $(a_{n_k}, u_{n_k}), (b, w) \in A \times \mathcal{U}$, we have (see (11)):

$$\begin{split}
\int_{\Omega} \Big[ & (b(\zeta) - a_{n_k}(\zeta)) \frac{\partial f}{\partial a}(\pi_{a_{n_k},u_{n_k}}(\zeta)) + (w(\zeta) - u_{n_k}(\zeta)) \frac{\partial f}{\partial u}(\pi_{a_{n_k},u_{n_k}}(\zeta)) \\
& + D_{\alpha}(b(\zeta) - a_{n_k}(\zeta)) \frac{\partial f}{\partial a_{\alpha}}(\pi_{a_{n_k},u_{n_k}}(\zeta)) \\
& + \frac{1}{n(\beta,\gamma)} D^{2}_{\beta\gamma}(b(\zeta) - a_{n_k}(\zeta)) \frac{\partial f}{\partial a_{\beta\gamma}}(\pi_{a_{n_k},u_{n_k}}(\zeta)) \Big] d\zeta \\
\le \int_{\Omega} \Big[ & (b(\zeta) - a_{n_k}(\zeta)) \frac{\partial f}{\partial a}(\pi_{b,w}(\zeta)) + (w(\zeta) - u_{n_k}(\zeta)) \frac{\partial f}{\partial u}(\pi_{b,w}(\zeta)) \\
& + D_{\alpha}(b(\zeta) - a_{n_k}(\zeta)) \frac{\partial f}{\partial a_{\alpha}}(\pi_{b,w}(\zeta)) \\
& + \frac{1}{n(\beta,\gamma)} D^{2}_{\beta\gamma}(b(\zeta) - a_{n_k}(\zeta)) \frac{\partial f}{\partial a_{\beta\gamma}}(\pi_{b,w}(\zeta)) \Big] d\zeta.
\end{split} \tag{13}$$

Furthermore, on account of (9), we can write:

$$\begin{split}
\lim_{k \to \infty} \int_{\Omega} \Big[ & (b(\zeta) - a_{n_k}(\zeta)) \frac{\partial f}{\partial a}(\pi_{a_{n_k},u_{n_k}}(\zeta)) + (w(\zeta) - u_{n_k}(\zeta)) \frac{\partial f}{\partial u}(\pi_{a_{n_k},u_{n_k}}(\zeta)) \\
& + D_{\alpha}(b(\zeta) - a_{n_k}(\zeta)) \frac{\partial f}{\partial a_{\alpha}}(\pi_{a_{n_k},u_{n_k}}(\zeta)) \\
& + \frac{1}{n(\beta,\gamma)} D^{2}_{\beta\gamma}(b(\zeta) - a_{n_k}(\zeta)) \frac{\partial f}{\partial a_{\beta\gamma}}(\pi_{a_{n_k},u_{n_k}}(\zeta)) \Big] d\zeta \ge 0.
\end{split} \tag{14}$$

Combining (13) and (14), we have:

$$\begin{split}
& \lim_{k \to \infty} \int_{\Omega} \Big[ (b(\zeta) - a_{n_k}(\zeta)) \frac{\partial f}{\partial a}(\pi_{b,w}(\zeta)) + (w(\zeta) - u_{n_k}(\zeta)) \frac{\partial f}{\partial u}(\pi_{b,w}(\zeta)) \\
& \qquad + D_{\alpha}(b(\zeta) - a_{n_k}(\zeta)) \frac{\partial f}{\partial a_{\alpha}}(\pi_{b,w}(\zeta)) + \frac{1}{n(\beta,\gamma)} D^{2}_{\beta\gamma}(b(\zeta) - a_{n_k}(\zeta)) \frac{\partial f}{\partial a_{\beta\gamma}}(\pi_{b,w}(\zeta)) \Big] d\zeta \ge 0 \\
\Rightarrow \ & \int_{\Omega} \Big[ (b(\zeta) - \bar a(\zeta)) \frac{\partial f}{\partial a}(\pi_{b,w}(\zeta)) + (w(\zeta) - \bar u(\zeta)) \frac{\partial f}{\partial u}(\pi_{b,w}(\zeta)) \\
& \qquad + D_{\alpha}(b(\zeta) - \bar a(\zeta)) \frac{\partial f}{\partial a_{\alpha}}(\pi_{b,w}(\zeta)) + \frac{1}{n(\beta,\gamma)} D^{2}_{\beta\gamma}(b(\zeta) - \bar a(\zeta)) \frac{\partial f}{\partial a_{\beta\gamma}}(\pi_{b,w}(\zeta)) \Big] d\zeta \ge 0.
\end{split}$$

By considering the lower semicontinuity of the considered functional, in accordance with Lemma 1, we have:

$$\int_{\Omega} f(\pi_{\bar a,\bar u}(\zeta))\,d\zeta \le \inf_{(b,w)\in\Theta} \int_{\Omega} f(\pi_{b,w}(\zeta))\,d\zeta,$$

$$\begin{split}
\int_{\Omega} \Big[ & (b(\zeta) - \bar a(\zeta)) \frac{\partial f}{\partial a}(\pi_{\bar a,\bar u}(\zeta)) + (w(\zeta) - \bar u(\zeta)) \frac{\partial f}{\partial u}(\pi_{\bar a,\bar u}(\zeta)) \\
& + D_{\alpha}(b(\zeta) - \bar a(\zeta)) \frac{\partial f}{\partial a_{\alpha}}(\pi_{\bar a,\bar u}(\zeta)) \\
& + \frac{1}{n(\beta,\gamma)} D^{2}_{\beta\gamma}(b(\zeta) - \bar a(\zeta)) \frac{\partial f}{\partial a_{\beta\gamma}}(\pi_{\bar a,\bar u}(\zeta)) \Big] d\zeta \ge 0, \quad \forall (b,w) \in A \times \mathcal{U},
\end{split}$$

implying that $(\bar a, \bar u)$ is a solution for (CVCP). Therefore, $(a_{n_k}, u_{n_k}) \to (\bar a, \bar u)$, that is, $(a_{n_k}, u_{n_k}) \to (a_0, u_0)$, which implies $(a_n, u_n) \to (a_0, u_0)$, and the proof is complete.

The following illustrative example presents an application of Theorems 1 and 2.

**Example 2.** *Let us consider* $n = k = 1$, $m = 2$, *and* $\Omega = [0,3]^2$. *We define:*

$$f(\pi_{a,u}(\zeta)) = 3u^2(\zeta) + e^{a(\zeta)} - a(\zeta)$$

*and consider the following constrained variational control problem:*

$$\text{(CVCP-1)} \qquad \underset{(a,u)}{\text{Minimize}} \ \int_{\Omega} f(\pi_{a,u}(\zeta))\,d\zeta$$

*subject to*

$$\int_{\Omega} \Big[ 6(w(\zeta) - u(\zeta))u(\zeta) + (b(\zeta) - a(\zeta))(e^{a(\zeta)} - 1) \Big] d\zeta \ge 0, \ \forall (b,w) \in A \times \mathcal{U},$$

$$(a, u)|\_{\partial\Omega} = 0.$$

We have $\mathcal{S} = \{(0,0)\}$. Moreover, the functional $\int_{\Omega} f(\pi_{a,u}(\zeta))\,d\zeta$ is monotone, hemicontinuous and lower semicontinuous on $A \times \mathcal{U} = C^{4}(\Omega, [-10,10]) \times C^{1}(\Omega, [-10,10])$. In consequence, all hypotheses of Theorem 2 hold; therefore, the optimization problem (CVCP-1) is well posed. Furthermore, since $\mathcal{S}(\sigma,\iota) = \{(0,0)\}$, we obtain $\mathcal{S}(\sigma,\iota) \neq \emptyset$ and $\operatorname{diam} \mathcal{S}(\sigma,\iota) \to 0$ as $(\sigma,\iota) \to (0,0)$. Consequently, by using Theorem 1, we obtain again that the constrained optimization problem (CVCP-1) is well posed.
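As a quick numerical sanity check for this example (a sketch, not part of the variational analysis above): taking the integrand's $a$-part as $e^{a} - a$, whose derivative $e^{a} - 1$ is exactly the factor appearing in the constraint, the pointwise objective $3u^2 + e^{a} - a$ attains its minimum at $(a, u) = (0, 0)$, in agreement with $\mathcal{S} = \{(0,0)\}$. The grid bounds below come from the example's range $[-10, 10]$:

```python
import math

def f(a, u):
    # Pointwise integrand of (CVCP-1); the a-part e^a - a is an assumption
    # consistent with the constraint factor e^a - 1 (its derivative in a).
    return 3 * u**2 + math.exp(a) - a

# Scan a grid over [-10, 10]^2 and locate the pointwise minimizer.
grid = [k / 10 for k in range(-100, 101)]
best = min((f(a, u), a, u) for a in grid for u in grid)
print(best)  # minimum value 1.0 attained at (a, u) = (0, 0)
```

Since the integrand is minimized pointwise at $(0,0)$, the constant pair $(a, u) \equiv (0, 0)$ minimizes the multiple integral as well, which is consistent with the solution set computed above.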

#### **4. Conclusions and Further Developments**

In this paper, we investigated the well posedness of a new class of constrained optimization problems governed by second-order partial derivatives. Concretely, by using the monotonicity, lower semicontinuity, pseudomonotonicity and hemicontinuity of the involved multiple integral functionals, we proved that the well posedness of the constrained optimization problem under study can be characterized in terms of the existence and uniqueness of its solution. Furthermore, the theoretical developments have been accompanied by illustrative examples.

Some further developments associated with the present paper are the following: to formulate the necessary and sufficient optimality conditions for the considered optimization problems, to establish some duality results, and to study the well posedness for similar classes of control problems by using fractional derivatives.

**Funding:** This research received no external funding.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Not applicable.

**Conflicts of Interest:** The author declares no conflict of interest.

#### **References**


### *Article* **Existence of Solutions to a Class of Nonlinear Arbitrary Order Differential Equations Subject to Integral Boundary Conditions**

**Ananta Thakur 1, Javid Ali <sup>1</sup> and Rosana Rodríguez-López 2,\***


**Abstract:** We investigate the existence of positive solutions for a class of fractional differential equations of arbitrary order *δ* > 2, subject to boundary conditions that include an integral operator of the fractional type. The consideration of this type of boundary conditions allows us to consider heterogeneity on the dependence specified by the restriction added to the equation as a relevant issue for applications. An existence result is obtained for the sublinear and superlinear case by using the Guo–Krasnosel'skii fixed point theorem through the definition of adequate conical shells that allow us to localize the solution. As additional tools in our procedure, we obtain the explicit expression of Green's function associated to an auxiliary linear fractional boundary value problem, and we study some of its properties, such as the sign and some useful upper and lower estimates. Finally, an example is given to illustrate the results.

**Keywords:** fractional differential equations; fractional derivative of Riemann–Liouville type; integral boundary value problems; Green's functions; Guo–Krasnosel'skii fixed point theorem in cones; sublinearity and superlinearity; Arzelà-Ascoli Theorem

**MSC:** Primary 26A33; Secondary 34B15

#### **1. Introduction**

Differential equations of non-integer order play an important role in describing physical phenomena more accurately than classical integer-order differential equations. The need for fractional-order differential equations stems in part from the fact that many phenomena cannot be modeled by differential equations with integer-order derivatives. Therefore, existence results for solutions to fractional differential equations have received considerable attention in recent years.

Some relevant monographs on fractional calculus and fractional differential equations are, for instance [1–3]. The work [4] gives some fundamental ideas on initial value problems for fractional differential equations from the point of view of Riemann–Liouville operators, discussing local and global existence, or extremal solutions, and the monograph [5] includes different theoretical results as well as developments related to applications in the field of fractional calculus.

There are several papers dealing with the existence and uniqueness of solutions to initial and boundary value problems for fractional order differential equations. For instance, in 2009, some impulsive problems for Caputo-type differential equations with *δ* ∈ (1, 2] and boundary conditions given by *x*(0) + *x*′(0) = 0, *x*(1) + *x*′(1) = 0, were studied (see [6]). Later, in 2010, initial value problems and periodic boundary value problems for linear fractional differential equations were analyzed in [7] by giving some comparison results. The authors of [8] studied the existence of positive solutions for fractional differential

**Citation:** Thakur, A.; Ali, J.; Rodríguez-López, R. Existence of Solutions to a Class of Nonlinear Arbitrary Order Differential Equations Subject to Integral Boundary Conditions. *Fractal Fract.* **2021**, *5*, 220. https://doi.org/ 10.3390/fractalfract5040220

Academic Editor: Savin Treanță

Received: 17 October 2021 Accepted: 8 November 2021 Published: 15 November 2021

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

equations of order *δ* ∈ (1, 2), whose nonlinearity depended on a fractional derivative of the unknown function, subject to Dirichlet boundary conditions.

They completed their study by calculating the associated Green's function and by applying the compressive version of the Guo–Krasnosel'skii fixed point theorem. Green's function, Banach contraction mapping and fixed point index theory are the main tools used in [9] for the analysis of a nonlocal problem for fractional differential equations. In [10], a result that guarantees the existence of a unique fixed point for a mixed monotone operator was used to provide the existence of a unique positive solution to an initial value problem for fractional differential equations of general order *n* − 1 < *δ* ≤ *n*, with *n* ≥ 2, whose nonlinearity depends on the classical derivatives of the unknown function up to order *n* − 2.

On the other hand, the development of the monotone iterative technique for periodic boundary value problems associated with impulsive fractional differential equations with Riemann–Liouville sequential derivatives was made in [11], and [12] was devoted to boundary value problems for fractional differential inclusions. We refer also to [13] for a monograph devoted to the positive solutions for differential, difference and integral equations.

Integral boundary value problems for differential equations with integer and noninteger order have been studied by several researchers [1,2,4,12,14,15]. To mention some related references, in [16], first-order problems were considered by using the method of upper and lower solutions, and, in [17], the Guo–Krasnosel'skii fixed point result was applied to study the existence of positive solutions to integral boundary value problems for classical second-order differential equations.

These kinds of problems were also considered in [18], where some results were derived as a consequence of the nonlinear alternative of Leray–Schauder type. On the other hand, the monotone iterative technique was developed in [19] for integral boundary value problems relative to first-order integro-differential equations with deviating arguments. See also [20] for a similar study on analogous differential systems. Very recently, the results in [21] were devoted to the study of first-order problems with multipoint and integral boundary conditions by applying Banach or Schaefer's fixed point theorem.

In the fractional case, some sufficient conditions were established in [22] for the existence of solutions to nonlocal boundary value problems associated to Caputo-type fractional differential equations by using Banach and Schaefer's fixed point theorems. A related problem with integral boundary conditions in the context of Banach spaces was analyzed in [23] by using Green's functions and the Kuratowski measure of noncompactness.

The authors of [24] studied fractional differential equations subject to a nonlocal strip condition of integral type that, in the limit, approaches the usual integral boundary condition, and some results were derived by applying fixed point results and the Leray–Schauder degree theory. In [25], the authors considered boundary value problems for a class of fractional differential equations of order *δ* ∈ (1, 2] with three-point fractional integral boundary conditions by means of Schaefer's fixed point theorem.

In [26], the contractive mapping principle and the monotone iterative technique were the basic tools and procedures used in the study of a class of Riemann–Liouville fractional differential equations with integral boundary conditions. On the other hand, in [27], Lyapunov-type results were used to study the nonexistence, the uniqueness and the existence and uniqueness of solutions to fractional boundary value problems.

More recently, in [28], a fractional problem subject to Stieltjes and generalized fractional integral boundary conditions was analyzed by applying the Banach contraction mapping principle. An analogous method was applied in [29], where the authors studied a Cauchy problem for Caputo–Fabrizio fractional differential equations in Banach spaces, imposing an initial condition that involves an integral operator, and they deduced the existence and uniqueness of solutions by applying the Banach fixed point theorem.

Some results for Hilfer fractional differential equations subject to boundary conditions involving Riemann–Liouville fractional integral operators were given in [30], and the study was completed by applying a nonlinear alternative of Leray–Schauder type and the Nadler theorem. Classical fixed point theory was also the tool used in [31] for the analysis of sequential *ψ*-Hilfer fractional boundary value problems. In particular, one of the results applied was the Krasnosel'skii fixed point theorem for the addition of a contractive mapping and a compact mapping.

Several other recent papers include, for instance, [32], where the type of derivative considered was Caputo fractional derivatives with respect to a fixed function, and, under this framework the authors studied an impulsive problem subject to integral boundary conditions based on the Riemann–Stieltjes fractional integral through Leray–Schauder's nonlinear alternative; or [33], where *ψ*-Caputo operators were considered in the differential equation and in the integral boundary conditions, and the method of upper and lower solutions coupled with the monotone iterative technique were the main tools used.

More specifically, in 2012, Cabada and Wang [15] considered the following boundary value problem for fractional order differential equations with classical integral boundary conditions:

$$\begin{cases} \,^cD^\delta u(t) + f(t, u(t)) = 0, \; 0 < t < 1, \\ u(0) = u''(0) = 0, \; u(1) = \lambda \int\_0^1 u(s)ds, \end{cases}$$

where 2 < *δ* < 3, 0 < *λ* < 2, *<sup>c</sup>D<sup>δ</sup>* is the Caputo fractional derivative and *f* : [0, 1] × [0, ∞) → [0, ∞) is a continuous function.

In 2014, Cabada and Hamdi [14] discussed, by defining a suitable cone on a Banach space and by applying Guo–Krasnosel'skii fixed point theorem, the existence of positive solutions for the following class of nonlinear fractional differential equations with integral boundary conditions:

$$\begin{cases} D^\delta u(t) + f(t, u(t)) = 0, \; 0 < t < 1, \\ u(0) = u'(0) = 0, \; u(1) = \lambda \int\_0^1 u(s) ds \end{cases}$$

where 2 < *δ* ≤ 3, 0 < *λ*, *λ* ≠ *δ*, *D<sup>δ</sup>* is the Riemann–Liouville fractional derivative of order *δ* and *f* is a continuous function.

The large collection of existing research on this topic shows the increasing interest that the study of integral boundary value problems for fractional differential equations has received in recent times. This interest is due to their applicability to the modeling of various processes in which hereditary or memory properties leave a footprint on the performance of the phenomena, and to the fact that, on many occasions, the restrictions of the real problem make it adequate to consider boundary conditions that reflect the influence that the state on a certain interval has on the evolution of the system.

It is worthwhile to devote efforts to studying the existence of positive solutions, since controlling the sign of the solutions is a relevant issue in many fields of application for which negative values are not admissible (populations, amounts of substances, etc.). In this sense, in comparison with the above-mentioned works, we are interested in the consequences, in terms of the properties of the solutions, that the application of the Guo–Krasnosel'skii fixed point theorem may present for a fractional problem with a boundary condition including a fractional operator.

Motivated by the above-mentioned work [14] and its approach, this paper deals with the existence of positive solutions for the following fractional differential equation of general order *δ* > 2 with fractional integral boundary conditions:

$$\begin{cases} D\_{0+}^{\delta}w(t) + f(t, w(t)) = 0, \; 0 < t < 1, \\ w(0) = w'(0) = w''(0) = \dots = w^{(n-2)}(0) = 0, \\ w(1) = \lambda I\_{0+}^{\gamma}w(\zeta), \; 0 < \zeta < 1, \; n - 1 < \delta \le n, \end{cases} \tag{1}$$

where *n* ∈ ℕ, *n* ≥ 3, *λ* > 0, *D*<sup>*δ*</sup><sub>0+</sub> denotes the Riemann–Liouville fractional derivative of order *δ*, *I*<sup>*γ*</sup><sub>0+</sub> is the Riemann–Liouville fractional integral operator of order *γ* > 0 and *f* : [0, 1] × [0, ∞) → [0, ∞) is a continuous function.

As original contributions of the paper, we mention the consideration of a boundary value problem that involves an integral operator of fractional type, which allows us to consider heterogeneity on the dependence specified by the restriction added to the equation and also the subsequent explicit calculation of the Green's function for this general problem, which is not easy to handle due to the high order of the equation and the introduction of fractional operators in the boundary conditions considered.

These novelties in the problem considered add more complexity to the study of the particular properties of the Green's function that are essential to build the mathematical constructs required for the application of the fixed point result, namely, the establishment of estimates, which allow us to define an appropriate cone that is mapped into itself through the integral operator corresponding to the boundary value problem.

To prove the existence of positive solutions to (1), we apply the Guo–Krasnosel'skii fixed point theorem in cones, used in [14] in the context of fractional problems with boundary conditions involving a classical integral term but different from the techniques followed in the discussed works dealing with boundary conditions involving integral operators of a fractional type. The main reason to use this fixed point result is its potential to provide a localization of the solution by handling conical shells whose boundary is defined by the boundaries of two sets, which can be, in this case, more general than open balls [34,35].

Then, it is not only possible to deduce the existence of a positive solution, but we can also give an upper bound for its maximum value and establish a certain positive number that is exceeded by the values of the solution at some points. Having at our disposal a contractive and an expansive version of the hypotheses, it is possible to deduce the existence of a positive solution under different types of restrictions on the function defining the equation, namely, the sublinear and the superlinear case.

The organization of the paper is as follows. In Section 2, we recall some basic notations and concepts concerning fractional calculus as well as the fixed point result that we apply as a fundamental tool in our procedure. In Section 3, we explicitly obtain the Green's function for a modified linear fractional boundary value problem, and we deduce some estimates for its expression.

The study of the sign of the Green's function is relevant too, as well as the comparison between its value at different points, which is also useful to our reasoning. Then, in Section 4, we present our main result, which allows us to derive the existence of a positive solution for the nonlinear problem (1) in the sublinear and superlinear cases. The proof of the main result provides details regarding the conical shells to which the mentioned solution belongs in each case. In Section 5, an example is included, and, finally, Section 6 shows our conclusions.

#### **2. Materials and Methods**

In this section, we recall some notations, definitions and results that are essential to prove our main result.

**Definition 1.** *The fractional derivative of Riemann–Liouville type and fractional order δ* > 0 *is defined for a function f as*

$$D\_{0+}^{\delta}f(t) = \frac{1}{\Gamma(n-\delta)} \left(\frac{d}{dt}\right)^n \int\_0^t (t-s)^{n-\delta-1} f(s)ds,$$

*where n* = [*δ*] + 1*, and* [*δ*] *is the integer part of δ, provided that the integral on the right-hand side converges pointwise on (0,*∞*).*

**Definition 2.** *The fractional integral of Riemann–Liouville type and fractional order δ* > 0 *is defined for a function f as*

$$I\_{0+}^{\delta}f(t) = \frac{1}{\Gamma(\delta)} \int\_0^t (t-s)^{\delta-1} f(s)ds,$$

*provided that the integral on the right-hand side converges pointwise on (0,*∞*).*
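Definition 2 lends itself to a direct numerical check. The following sketch (an illustrative Python snippet, not part of the original article; the quadrature scheme and parameter values are our own choices) approximates the Riemann–Liouville integral by a midpoint rule and compares it with the standard closed-form value of the fractional integral of a power function, *I*<sup>*δ*</sup><sub>0+</sub>*t*<sup>*μ*</sup> = Γ(*μ* + 1)/Γ(*μ* + 1 + *δ*) *t*<sup>*μ*+*δ*</sup>:

```python
from math import gamma

def rl_integral(f, t, delta, n=20000):
    """Midpoint-rule approximation of the Riemann-Liouville
    fractional integral I_{0+}^delta f(t) of Definition 2."""
    h = t / n
    return h / gamma(delta) * sum(
        (t - (i + 0.5) * h) ** (delta - 1) * f((i + 0.5) * h) for i in range(n)
    )

# Closed form for f(s) = s**mu: Gamma(mu+1)/Gamma(mu+1+delta) * t**(mu+delta)
delta, mu, t = 2.5, 1.0, 0.8
exact = gamma(mu + 1) / gamma(mu + 1 + delta) * t ** (mu + delta)
approx = rl_integral(lambda s: s ** mu, t, delta)
print(abs(approx - exact))  # small discretization error
```

The kernel (*t* − *s*)<sup>*δ*−1</sup> is bounded for *δ* > 1, so a simple midpoint rule already reproduces the closed form to high accuracy.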

**Lemma 1** ([1])**.** *Let δ* > 0*. Then, the solutions to D*<sup>*δ*</sup><sub>0+</sub>*w*(*t*) + *y*(*t*) = 0 *are given by*

$$w(t) = -\int\_0^t \frac{(t-s)^{\delta-1}}{\Gamma(\delta)} y(s)ds + c\_1 t^{\delta-1} + c\_2 t^{\delta-2} + \dots + c\_n t^{\delta-n}.$$

Without loss of generality, we assume in this and later results that the fractional derivatives are developed taking 0 as base point. For a discussion on other types of conditions, we refer to Kilbas et al. [1] and Samko et al. [3].

**Definition 3.** *Let E be a real Banach space. A nonempty closed and convex set K* ⊂ *E is called a cone if it satisfies the following two conditions:*

(i) *if x* ∈ *K and λ* ≥ 0*, then λx* ∈ *K;*
(ii) *if x* ∈ *K and* −*x* ∈ *K, then x* = 0*.*

**Theorem 1** ([34])**.** *Let E be a Banach space, and let K* ⊂ *E be a cone. Assume that* Ω<sub>1</sub>, Ω<sub>2</sub> *are open and bounded subsets of E with* 0 ∈ Ω<sub>1</sub> ⊂ Ω̄<sub>1</sub> ⊂ Ω<sub>2</sub>*, and let T* : *K* ∩ (Ω̄<sub>2</sub> \ Ω<sub>1</sub>) −→ *K be a completely continuous mapping such that one of the following conditions holds:*

(i) ||*Tu*|| ≤ ||*u*|| *for u* ∈ *K* ∩ *∂*Ω<sub>1</sub>*, and* ||*Tu*|| ≥ ||*u*|| *for u* ∈ *K* ∩ *∂*Ω<sub>2</sub>*;*
(ii) ||*Tu*|| ≥ ||*u*|| *for u* ∈ *K* ∩ *∂*Ω<sub>1</sub>*, and* ||*Tu*|| ≤ ||*u*|| *for u* ∈ *K* ∩ *∂*Ω<sub>2</sub>*.*

*Then, the mapping T has at least one fixed point in K* ∩ (Ω̄<sub>2</sub> \ Ω<sub>1</sub>)*.*

We define the mapping *T* : *C*[0, 1] → *C*[0, 1] as [*Tu*](*t*) = ∫<sub>0</sub><sup>1</sup> *G*(*t*,*s*)*f*(*s*, *u*(*s*))*ds*, with *G* a certain Green's function whose expression is given as indicated below (see (3)). This Green's function will be built in such a way that the fixed points of the mapping *T* coincide with the solutions to problem (1), and, hence, by Theorem 1, we will deduce the existence of positive solutions to problem (1).
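Although the existence argument is purely topological, the action of *T* can be illustrated numerically. The sketch below (a hypothetical Python experiment, not part of the paper's method; the parameter values, grid size, and the sublinear choice *f*(*t*, *u*) = 1 + √*u* are our own) iterates *u* ↦ *Tu* using the Green's function (3) and observes convergence to a positive fixed point:

```python
from math import gamma, sqrt

# Arbitrary illustrative parameters chosen so that P > 0 (here delta in (2, 3])
delta, ga, lam, zeta = 2.6, 0.8, 1.2, 0.4
P = 1 - lam * gamma(delta) / gamma(delta + ga) * zeta ** (delta + ga - 1)
Gd, Gdg = gamma(delta), gamma(delta + ga)

def G(t, s):
    # Green's function (3); the two conditional terms reproduce all four branches
    num = Gdg * (1 - s) ** (delta - 1) * t ** (delta - 1)
    if s <= zeta:
        num -= Gd * lam * (zeta - s) ** (delta + ga - 1) * t ** (delta - 1)
    if s <= t:
        num -= P * Gdg * (t - s) ** (delta - 1)
    return num / (P * Gd * Gdg)

n = 200
ts = [(i + 0.5) / n for i in range(n)]   # midpoint grid on (0, 1)
f = lambda t, u: 1.0 + sqrt(u)           # a sublinear nonlinearity

u = [0.0] * n
for _ in range(30):                      # Picard-type iteration u <- Tu
    u = [sum(G(t, s) * f(s, u[j]) for j, s in enumerate(ts)) / n for t in ts]

Tu = [sum(G(t, s) * f(s, u[j]) for j, s in enumerate(ts)) / n for t in ts]
residual = max(abs(a - b) for a, b in zip(u, Tu))
print(residual, min(u), max(u))  # tiny residual; the iterate stays positive
```

The iteration is only a heuristic: Theorem 1 guarantees existence of a fixed point without any convergence claim for successive approximations.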

#### **3. Some Auxiliary Results**

First, we prove the following lemma, relative to the expression of the explicit solution for a linear fractional problem subject to integral boundary conditions of fractional type.

**Lemma 2.** *Let δ* > 0*, n* − 1 < *δ* ≤ *n,* 0 < *ζ* < 1*, y* ∈ *C*[0, 1]*, and suppose that P* := 1 − *λ*Γ(*δ*)*ζ*<sup>*δ*+*γ*−1</sup>/Γ(*δ* + *γ*) ≠ 0*. Then, the problem*

$$\begin{cases} D\_{0+}^{\delta}w(t) + y(t) = 0, \; 0 < t < 1, \\ w(0) = w'(0) = w''(0) = \dots = w^{(n-2)}(0) = 0, \\ w(1) = \lambda I\_{0+}^{\gamma}w(\zeta), \; 0 < \zeta < 1, \; n - 1 < \delta \le n, \end{cases} \tag{2}$$

*has a unique solution w* ∈ *C*<sup>1</sup>[0, 1]*, given by w*(*t*) = ∫<sub>0</sub><sup>1</sup> *G*(*t*,*s*)*y*(*s*)*ds, where*

$$G(t,s) = \begin{cases} \frac{-P\Gamma(\delta+\gamma)(t-s)^{\delta-1} + \Gamma(\delta+\gamma)(1-s)^{\delta-1}t^{\delta-1} - \Gamma(\delta)\lambda(\zeta-s)^{\delta+\gamma-1}t^{\delta-1}}{P\Gamma(\delta)\Gamma(\delta+\gamma)}, & 0 \le s \le t \le 1, \; s \le \zeta, \\ \frac{\Gamma(\delta+\gamma)(1-s)^{\delta-1}t^{\delta-1} - \Gamma(\delta)\lambda(\zeta-s)^{\delta+\gamma-1}t^{\delta-1}}{P\Gamma(\delta)\Gamma(\delta+\gamma)}, & 0 \le t \le s \le \zeta \le 1, \\ \frac{-P\Gamma(\delta+\gamma)(t-s)^{\delta-1} + \Gamma(\delta+\gamma)(1-s)^{\delta-1}t^{\delta-1}}{P\Gamma(\delta)\Gamma(\delta+\gamma)}, & 0 \le \zeta \le s \le t \le 1, \\ \frac{\Gamma(\delta+\gamma)(1-s)^{\delta-1}t^{\delta-1}}{P\Gamma(\delta)\Gamma(\delta+\gamma)}, & 0 \le t \le s \le 1, \; s \ge \zeta. \end{cases} \tag{3}$$

*Here, G*(*t*,*s*) *is called the Green's function associated to the boundary value problem* (1)*. Note that G*(*t*,*s*) *is a continuous function on* [0, 1] × [0, 1]*.*

**Proof.** The first equation in problem (2) is equivalent to the following integral equation:

$$w(t) = -I\_{0+}^{\delta}y(t) + c\_1t^{\delta-1} + c\_2t^{\delta-2} + \dots + c\_nt^{\delta-n}.$$

By using

$$w(0) = w'(0) = \dots = w^{(n-2)}(0) = 0,$$

we obtain that

$$w(t) = -I\_{0+}^{\delta}y(t) + c\_1t^{\delta-1}.$$

It follows from

$$w(1) = \lambda I\_{0+}^{\gamma} w(\zeta),$$

combined with

$$w(1) = -I\_{0+}^{\delta}y(1) + c\_1$$

and

$$
\lambda I\_{0+}^{\gamma} w(\zeta) = -\lambda I\_{0+}^{\delta+\gamma} y(\zeta) + \lambda c\_1 \frac{\Gamma(\delta)}{\Gamma(\delta+\gamma)} \zeta^{\delta+\gamma-1}
$$

that

$$\begin{split} w(t) &= -\frac{1}{\Gamma(\delta)} \int\_0^t (t-s)^{\delta-1} y(s) ds + \frac{t^{\delta-1}}{P\Gamma(\delta)} \int\_0^1 (1-s)^{\delta-1} y(s) ds \\ &- \frac{\lambda t^{\delta-1}}{P\Gamma(\delta+\gamma)} \int\_0^\zeta (\zeta-s)^{\delta+\gamma-1} y(s) ds. \end{split}$$

For *t* ≤ *ζ*, we have

$$\begin{split} w(t) &= \frac{-1}{\Gamma(\delta)} \int\_0^t (t-s)^{\delta-1} y(s) ds + \frac{t^{\delta-1}}{P\Gamma(\delta)} \left\{ \int\_0^t + \int\_t^{\zeta} + \int\_{\zeta}^1 \right\} (1-s)^{\delta-1} y(s) ds \\ &- \frac{\lambda t^{\delta-1}}{P\Gamma(\delta+\gamma)} \left\{ \int\_0^t + \int\_t^{\zeta} \right\} (\zeta - s)^{\delta+\gamma-1} y(s) ds \\ &= \int\_0^t \frac{-P\Gamma(\delta+\gamma)(t-s)^{\delta-1} + \Gamma(\delta+\gamma)(1-s)^{\delta-1} t^{\delta-1} - \Gamma(\delta)\lambda(\zeta-s)^{\delta+\gamma-1} t^{\delta-1}}{P\Gamma(\delta)\Gamma(\delta+\gamma)} y(s) ds \\ &+ \int\_t^{\zeta} \frac{\Gamma(\delta+\gamma)(1-s)^{\delta-1} t^{\delta-1} - \Gamma(\delta)\lambda(\zeta-s)^{\delta+\gamma-1} t^{\delta-1}}{P\Gamma(\delta)\Gamma(\delta+\gamma)} y(s) ds \\ &+ \int\_{\zeta}^1 \frac{\Gamma(\delta+\gamma)(1-s)^{\delta-1} t^{\delta-1}}{P\Gamma(\delta)\Gamma(\delta+\gamma)} y(s) ds = \int\_0^1 G(t,s) y(s) ds. \end{split}$$

For *t* ≥ *ζ*, we deduce that

$$\begin{split} w(t) &= -\frac{1}{\Gamma(\delta)} \left\{ \int\_0^{\zeta} + \int\_{\zeta}^{t} \right\} (t-s)^{\delta-1} y(s) ds + \frac{t^{\delta-1}}{P\Gamma(\delta)} \left\{ \int\_0^{\zeta} + \int\_{\zeta}^{t} + \int\_t^1 \right\} (1-s)^{\delta-1} y(s) ds \\ &- \frac{\lambda t^{\delta-1}}{P\Gamma(\delta+\gamma)} \int\_0^{\zeta} (\zeta-s)^{\delta+\gamma-1} y(s) ds \\ &= \int\_0^{\zeta} \frac{-P\Gamma(\delta+\gamma)(t-s)^{\delta-1} + \Gamma(\delta+\gamma)(1-s)^{\delta-1} t^{\delta-1} - \Gamma(\delta)\lambda(\zeta-s)^{\delta+\gamma-1} t^{\delta-1}}{P\Gamma(\delta)\Gamma(\delta+\gamma)} y(s) ds \\ &+ \int\_{\zeta}^t \frac{-P\Gamma(\delta+\gamma)(t-s)^{\delta-1} + \Gamma(\delta+\gamma)(1-s)^{\delta-1} t^{\delta-1}}{P\Gamma(\delta)\Gamma(\delta+\gamma)} y(s) ds \\ &+ \int\_t^1 \frac{\Gamma(\delta+\gamma)(1-s)^{\delta-1} t^{\delta-1}}{P\Gamma(\delta)\Gamma(\delta+\gamma)} y(s) ds = \int\_0^1 G(t,s) y(s) ds. \end{split}$$
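As a sanity check on Lemma 2 (a hypothetical Python sketch with arbitrarily chosen parameters satisfying *P* > 0; not part of the original proof), one can take *y* ≡ 1, for which the representation above gives the explicit solution *w*(*t*) = −*t*<sup>*δ*</sup>/Γ(*δ* + 1) + *c*<sub>1</sub>*t*<sup>*δ*−1</sup> with *c*<sub>1</sub> = 1/(*P*Γ(*δ*)*δ*) − *λζ*<sup>*δ*+*γ*</sup>/(*P*Γ(*δ* + *γ*)(*δ* + *γ*)), and compare it with a numerical integration of *G*(*t*, ·):

```python
from math import gamma

# Arbitrary illustrative parameters with P > 0
delta, ga, lam, zeta = 3.5, 1.5, 1.0, 0.5
P = 1 - lam * gamma(delta) / gamma(delta + ga) * zeta ** (delta + ga - 1)
Gd, Gdg = gamma(delta), gamma(delta + ga)

def G(t, s):
    # Piecewise Green's function (3); two conditional terms cover all four branches
    num = Gdg * (1 - s) ** (delta - 1) * t ** (delta - 1)
    if s <= zeta:
        num -= Gd * lam * (zeta - s) ** (delta + ga - 1) * t ** (delta - 1)
    if s <= t:
        num -= P * Gdg * (t - s) ** (delta - 1)
    return num / (P * Gd * Gdg)

def w_green(t, n=4000):
    # w(t) = integral of G(t, s) over (0, 1) for y = 1, midpoint rule
    return sum(G(t, (i + 0.5) / n) for i in range(n)) / n

def w_exact(t):
    c1 = 1 / (P * Gd * delta) - lam * zeta ** (delta + ga) / (P * Gdg * (delta + ga))
    return -t ** delta / gamma(delta + 1) + c1 * t ** (delta - 1)

err = max(abs(w_green(t) - w_exact(t)) for t in (0.2, 0.5, 0.8))
print(err)  # agreement up to quadrature error
```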

A careful analysis of the Green's function *G* allows us to prove some of its properties that will be useful to our procedure, such as the nonnegativity or the establishment of upper and lower estimates.

**Lemma 3.** *Let G be the Green's function corresponding to the problem* (2)*, which is given in Lemma 2. Then, for all δ* ∈ (*n* − 1, *n*] *and λ* > 0 *with P* := 1 − *λ*Γ(*δ*)*ζ*<sup>*δ*+*γ*−1</sup>/Γ(*δ* + *γ*) > 0*, the following properties hold:*

$$\text{(I)}\quad G(t,s) \ge \frac{\lambda t^{\delta - 1} \zeta^{\delta + \gamma - 1}}{P \Gamma(\delta + \gamma)} \left[ (1 - s)^{\delta - 1} - (1 - s)^{\delta + \gamma - 1} \right] \text{ for all } t, s \in (0, 1).$$

$$\text{(II)}\quad G(t,s) \le \frac{(1 - s)^{\delta - 1} t^{\delta - 1}}{P \Gamma(\delta)} \text{ for all } t, s \in (0, 1).$$

$$\text{(III)}\quad G(t,s) > 0 \text{ for all } t, s \in (0, 1).$$

$$\text{(IV)}\quad G(1,s) > 0 \text{ for all } s \in (0, 1).$$

$$\text{(V)}\quad G(t,s) \ge 0 \text{ for all } t, s \in [0, 1].$$

**Proof.** We start by proving (I) and (II) simultaneously. First, assume that 0 ≤ *s* ≤ *t* ≤ 1, *s* ≤ *ζ*. Since 0 < *λ*Γ(*δ*)*ζ*<sup>*δ*+*γ*−1</sup>/Γ(*δ* + *γ*) < 1, we obtain

$$\begin{split} &P\Gamma(\delta)\Gamma(\delta+\gamma)G(t,s) \\ &= -P\Gamma(\delta+\gamma)(t-s)^{\delta-1} + \Gamma(\delta+\gamma)(1-s)^{\delta-1}t^{\delta-1} - \Gamma(\delta)\lambda(\zeta-s)^{\delta+\gamma-1}t^{\delta-1} \\ &= \lambda\Gamma(\delta)\zeta^{\delta+\gamma-1}(t-s)^{\delta-1} + [-\Gamma(\delta+\gamma)(t-s)^{\delta-1} + \Gamma(\delta+\gamma)(1-s)^{\delta-1}t^{\delta-1}] \\ &- \Gamma(\delta)\lambda(\zeta-s)^{\delta+\gamma-1}t^{\delta-1} \\ &\geq \lambda\Gamma(\delta)\zeta^{\delta+\gamma-1}(t-s)^{\delta-1} - \lambda\Gamma(\delta)\zeta^{\delta+\gamma-1}(t-s)^{\delta-1} \\ &+ \lambda\Gamma(\delta)\zeta^{\delta+\gamma-1}(1-s)^{\delta-1}t^{\delta-1} - \Gamma(\delta)\lambda(\zeta-s)^{\delta+\gamma-1}t^{\delta-1} \\ &= \lambda\Gamma(\delta)\zeta^{\delta+\gamma-1}(1-s)^{\delta-1}t^{\delta-1} - \Gamma(\delta)\lambda(\zeta-s)^{\delta+\gamma-1}t^{\delta-1} \\ &\geq \lambda\Gamma(\delta)\zeta^{\delta+\gamma-1}t^{\delta-1}[(1-s)^{\delta-1} - (1-s)^{\delta+\gamma-1}], \end{split}$$

and

$$\begin{split} &P\Gamma(\delta)\Gamma(\delta+\gamma)G(t,s) \\ &= -P\Gamma(\delta+\gamma)(t-s)^{\delta-1} + \Gamma(\delta+\gamma)(1-s)^{\delta-1}t^{\delta-1} - \Gamma(\delta)\lambda(\zeta-s)^{\delta+\gamma-1}t^{\delta-1} \\ &= \lambda\Gamma(\delta)\zeta^{\delta+\gamma-1}(t-s)^{\delta-1} - \Gamma(\delta+\gamma)(t-s)^{\delta-1} \\ &+ \Gamma(\delta+\gamma)(1-s)^{\delta-1}t^{\delta-1} - \Gamma(\delta)\lambda(\zeta-s)^{\delta+\gamma-1}t^{\delta-1} \\ &\leq \Gamma(\delta+\gamma)(1-s)^{\delta-1}t^{\delta-1} - \Gamma(\delta)\lambda(\zeta-s)^{\delta+\gamma-1}t^{\delta-1} \\ &\leq \Gamma(\delta+\gamma)(1-s)^{\delta-1}t^{\delta-1}. \end{split}$$

For 0 ≤ *t* ≤ *s* ≤ *ζ* ≤ 1, we have

$$\begin{aligned} &P\Gamma(\delta)\Gamma(\delta+\gamma)G(t,s) \\ &=\Gamma(\delta+\gamma)(1-s)^{\delta-1}t^{\delta-1} - \Gamma(\delta)\lambda(\zeta-s)^{\delta+\gamma-1}t^{\delta-1} \\ &\geq \lambda\Gamma(\delta)\zeta^{\delta+\gamma-1}(1-s)^{\delta-1}t^{\delta-1} - \Gamma(\delta)\lambda(\zeta-s)^{\delta+\gamma-1}t^{\delta-1} \\ &\geq \lambda\Gamma(\delta)\zeta^{\delta+\gamma-1}t^{\delta-1}[(1-s)^{\delta-1}-(1-s)^{\delta+\gamma-1}], \end{aligned}$$

and

$$\begin{aligned} &P\Gamma(\delta)\Gamma(\delta+\gamma)G(t,s) \\ &=\Gamma(\delta+\gamma)(1-s)^{\delta-1}t^{\delta-1}-\Gamma(\delta)\lambda(\zeta-s)^{\delta+\gamma-1}t^{\delta-1}, \\ &\leq \Gamma(\delta+\gamma)(1-s)^{\delta-1}t^{\delta-1}. \end{aligned}$$

For 0 ≤ *ζ* ≤ *s* ≤ *t* ≤ 1, we find

$$\begin{split} &P\Gamma(\delta)\Gamma(\delta+\gamma)G(t,s) \\ &= -P\Gamma(\delta+\gamma)(t-s)^{\delta-1} + \Gamma(\delta+\gamma)(1-s)^{\delta-1}t^{\delta-1} \\ &= \lambda\Gamma(\delta)\zeta^{\delta+\gamma-1}(t-s)^{\delta-1} - \Gamma(\delta+\gamma)(t-s)^{\delta-1} + \Gamma(\delta+\gamma)(1-s)^{\delta-1}t^{\delta-1} \\ &\geq \lambda\Gamma(\delta)\zeta^{\delta+\gamma-1}(t-s)^{\delta-1} - \lambda\Gamma(\delta)\zeta^{\delta+\gamma-1}(t-s)^{\delta-1} + \lambda\Gamma(\delta)\zeta^{\delta+\gamma-1}(1-s)^{\delta-1}t^{\delta-1} \\ &\geq \lambda\Gamma(\delta)\zeta^{\delta+\gamma-1}t^{\delta-1}[(1-s)^{\delta-1} - (1-s)^{\delta+\gamma-1}], \end{split}$$

and

$$\begin{aligned} &P\Gamma(\delta)\Gamma(\delta+\gamma)G(t,s) \\ &= -P\Gamma(\delta+\gamma)(t-s)^{\delta-1} + \Gamma(\delta+\gamma)(1-s)^{\delta-1}t^{\delta-1} \\ &= \lambda\Gamma(\delta)\zeta^{\delta+\gamma-1}(t-s)^{\delta-1} - \Gamma(\delta+\gamma)(t-s)^{\delta-1} + \Gamma(\delta+\gamma)(1-s)^{\delta-1}t^{\delta-1} \\ &\leq \Gamma(\delta+\gamma)(1-s)^{\delta-1}t^{\delta-1}. \end{aligned}$$

For 0 ≤ *t* ≤ *s* ≤ 1, *s* ≥ *ζ*, we have

$$\begin{aligned} &P\Gamma(\delta)\Gamma(\delta+\gamma)G(t,s) \\ &=\Gamma(\delta+\gamma)(1-s)^{\delta-1}t^{\delta-1} \\ &\geq \lambda\Gamma(\delta)\zeta^{\delta+\gamma-1}t^{\delta-1}[(1-s)^{\delta-1}-(1-s)^{\delta+\gamma-1}]. \end{aligned}$$

Property (III) is derived from (I). On the other hand, for the validity of (IV), we observe that

$$G(1,s) = \begin{cases} \frac{(1-P)\Gamma(\delta+\gamma)(1-s)^{\delta-1} - \Gamma(\delta)\lambda(\zeta-s)^{\delta+\gamma-1}}{P\Gamma(\delta)\Gamma(\delta+\gamma)} = \frac{\lambda[\zeta^{\delta+\gamma-1}(1-s)^{\delta-1} - (\zeta-s)^{\delta+\gamma-1}]}{P\Gamma(\delta+\gamma)}, & s \le \zeta, \\ \frac{(1-P)\Gamma(\delta+\gamma)(1-s)^{\delta-1}}{P\Gamma(\delta)\Gamma(\delta+\gamma)} = \frac{\lambda\zeta^{\delta+\gamma-1}(1-s)^{\delta-1}}{P\Gamma(\delta+\gamma)}, & \zeta \le s, \end{cases}$$

which is obviously positive for *s* ∈ (0, 1). Finally, (V) is trivially derived.

The previous result is consistent with those obtained in [14] for the problem with 2 < *δ* ≤ 3. In fact, for *γ* = 1, we have *P* = 1 − (*λ*/*δ*)*ζ*<sup>*δ*</sup>, and thus the assumption *λ* ∈ (0, *δ*) (as considered in [14]) guarantees that *P* > 0.
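Since Γ(*δ* + 1) = *δ*Γ(*δ*), this reduction can be verified directly; a quick check (hypothetical Python, arbitrary parameter values) is:

```python
from math import gamma

delta, lam, zeta = 2.7, 1.3, 0.6  # arbitrary test values with lam < delta
# general P evaluated at gamma = 1  vs  the reduced form 1 - (lam/delta) * zeta**delta
P_general = 1 - lam * gamma(delta) / gamma(delta + 1) * zeta ** (delta + 1 - 1)
P_reduced = 1 - lam / delta * zeta ** delta
print(P_general, P_reduced)  # equal up to rounding, and positive since lam < delta
```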

**Corollary 1.** *For all δ* ∈ (*n* − 1, *n*] *and λ* > 0 *with P* := 1 − *λ*Γ(*δ*)*ζ*<sup>*δ*+*γ*−1</sup>/Γ(*δ* + *γ*) > 0*, the Green's function G*(*t*,*s*) *satisfies*

$$t^{\delta -1} w\_1(s) \le G(t, s) \le t^{\delta - 1} w\_2(s), \quad \forall \ t, s \in (0, 1), \tag{4}$$

*where*

$$\begin{aligned} w\_1(s) &= \frac{\lambda \zeta^{\delta + \gamma - 1}}{P \Gamma(\delta + \gamma)} [(1 - s)^{\delta - 1} - (1 - s)^{\delta + \gamma - 1}], \\ w\_2(s) &= \frac{(1 - s)^{\delta - 1}}{P \Gamma(\delta)}. \end{aligned}$$
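The two-sided estimate (4) can be probed numerically on a grid. The sketch below (a hypothetical Python check with arbitrary parameters satisfying *P* > 0, not part of the paper; a small tolerance absorbs floating-point rounding in the equality cases) evaluates the Green's function (3) together with *w*<sub>1</sub> and *w*<sub>2</sub>:

```python
from math import gamma

# Arbitrary illustrative parameters with P > 0
delta, ga, lam, zeta = 3.5, 1.5, 1.0, 0.5
P = 1 - lam * gamma(delta) / gamma(delta + ga) * zeta ** (delta + ga - 1)
Gd, Gdg = gamma(delta), gamma(delta + ga)

def G(t, s):
    # Piecewise Green's function (3); two conditional terms cover all four branches
    num = Gdg * (1 - s) ** (delta - 1) * t ** (delta - 1)
    if s <= zeta:
        num -= Gd * lam * (zeta - s) ** (delta + ga - 1) * t ** (delta - 1)
    if s <= t:
        num -= P * Gdg * (t - s) ** (delta - 1)
    return num / (P * Gd * Gdg)

def w1(s):
    return lam * zeta ** (delta + ga - 1) / (P * Gdg) * (
        (1 - s) ** (delta - 1) - (1 - s) ** (delta + ga - 1))

def w2(s):
    return (1 - s) ** (delta - 1) / (P * Gd)

grid = [(i + 0.5) / 40 for i in range(40)]
tol = 1e-12  # rounding tolerance; the upper bound is attained for s >= max(t, zeta)
ok = all(
    t ** (delta - 1) * w1(s) - tol <= G(t, s) <= t ** (delta - 1) * w2(s) + tol
    for t in grid for s in grid
)
print(ok)
```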

Similarly to [14], we derive the following lemma, which expresses a correspondence between the values *G*(*t*,*s*) and *G*(1,*s*). This relation will be essential in the proof of the main result.

**Lemma 4.** *For all δ* ∈ (*n* − 1, *n*] *and λ* > 0 *with P* := 1 − *λ*Γ(*δ*)*ζ*<sup>*δ*+*γ*−1</sup>/Γ(*δ* + *γ*) > 0*, the Green's function G*(*t*,*s*) *also satisfies*

$$t^{\delta - 1}G(1, s) \le G(t, s) \le \frac{1}{1 - P}G(1, s) = \frac{\Gamma(\delta + \gamma)}{\lambda \Gamma(\delta) \zeta^{\delta + \gamma - 1}} G(1, s), \quad \forall \ t, s \in (0, 1). \tag{5}$$

**Proof.** By Lemma 3 (IV), the sought inequality is equivalent to proving that

$$t^{\delta - 1} \le \frac{G(t, s)}{G(1, s)} \le \frac{1}{1 - P} = \frac{\Gamma(\delta + \gamma)}{\lambda \Gamma(\delta) \zeta^{\delta + \gamma - 1}}, \quad \forall \ t, s \in (0, 1). \tag{6}$$

Note also that, under the hypotheses imposed, *G*(*t*,*s*) > 0 for all *t*, *s* ∈ (0, 1). First, we consider the case 0 < *s* ≤ *t* < 1, with *s* ≤ *ζ*, and then

$$\begin{split} \varphi(t,s) &:= \frac{G(t,s)}{G(1,s)} = \frac{-P\Gamma(\delta+\gamma)(t-s)^{\delta-1} + \Gamma(\delta+\gamma)(1-s)^{\delta-1}t^{\delta-1} - \Gamma(\delta)\lambda(\zeta-s)^{\delta+\gamma-1}t^{\delta-1}}{-P\Gamma(\delta+\gamma)(1-s)^{\delta-1} + \Gamma(\delta+\gamma)(1-s)^{\delta-1} - \Gamma(\delta)\lambda(\zeta-s)^{\delta+\gamma-1}} \\ &= t^{\delta-1}\,\frac{-P\Gamma(\delta+\gamma)\left(1-\frac{s}{t}\right)^{\delta-1} + \Gamma(\delta+\gamma)(1-s)^{\delta-1} - \Gamma(\delta)\lambda(\zeta-s)^{\delta+\gamma-1}}{-P\Gamma(\delta+\gamma)(1-s)^{\delta-1} + \Gamma(\delta+\gamma)(1-s)^{\delta-1} - \Gamma(\delta)\lambda(\zeta-s)^{\delta+\gamma-1}} \\ &= t^{\delta-1}\,\frac{-P\frac{\left(1-\frac{s}{t}\right)^{\delta-1}}{(1-s)^{\delta-1}} + 1 - \frac{\Gamma(\delta)\lambda(\zeta-s)^{\delta+\gamma-1}}{\Gamma(\delta+\gamma)(1-s)^{\delta-1}}}{-P + 1 - \frac{\Gamma(\delta)\lambda(\zeta-s)^{\delta+\gamma-1}}{\Gamma(\delta+\gamma)(1-s)^{\delta-1}}} \\ &\in \left[ t^{\delta-1},\; \frac{1 - \frac{\Gamma(\delta)\lambda(\zeta-s)^{\delta+\gamma-1}}{\Gamma(\delta+\gamma)(1-s)^{\delta-1}}}{1 - P - \frac{\Gamma(\delta)\lambda(\zeta-s)^{\delta+\gamma-1}}{\Gamma(\delta+\gamma)(1-s)^{\delta-1}}} \right] \subseteq \left[ t^{\delta-1}, \frac{1}{1-P} \right] = \left[ t^{\delta-1}, \frac{\Gamma(\delta+\gamma)}{\lambda\Gamma(\delta)\zeta^{\delta+\gamma-1}} \right]. \end{split}$$

For 0 < *t* ≤ *s* ≤ *ζ* < 1, we have

$$\begin{aligned} \varphi(t,s) &:= \frac{G(t,s)}{G(1,s)} \\ &= t^{\delta-1} \frac{\Gamma(\delta+\gamma)(1-s)^{\delta-1} - \Gamma(\delta)\lambda(\zeta-s)^{\delta+\gamma-1}}{-P\Gamma(\delta+\gamma)(1-s)^{\delta-1} + \Gamma(\delta+\gamma)(1-s)^{\delta-1} - \Gamma(\delta)\lambda(\zeta-s)^{\delta+\gamma-1}} \\ &\ge t^{\delta-1}. \end{aligned}$$

Next, we prove that $\varphi(t,s) \le \frac{1}{1-P}$ for $0 < t \le s \le \zeta < 1$. We study the behavior of the auxiliary one-variable function

$$\psi(s) := \frac{\Gamma(\delta + \gamma)(1 - s)^{\delta - 1} - \Gamma(\delta)\lambda(\zeta - s)^{\delta + \gamma - 1}}{(1 - P)\Gamma(\delta + \gamma)(1 - s)^{\delta - 1} - \Gamma(\delta)\lambda(\zeta - s)^{\delta + \gamma - 1}}$$

in the interval $[t, \zeta]$, with $t \in (0, \zeta]$ fixed. The sign of $\psi'(s)$ coincides with the sign of

$$\begin{split} \phi(s) &:= \left( -\Gamma(\delta+\gamma)(\delta-1)(1-s)^{\delta-2} + \Gamma(\delta)\lambda(\delta+\gamma-1)(\zeta-s)^{\delta+\gamma-2} \right) \\ &\times \left( (1-P)\Gamma(\delta+\gamma)(1-s)^{\delta-1} - \Gamma(\delta)\lambda(\zeta-s)^{\delta+\gamma-1} \right) \\ &\quad - \left( \Gamma(\delta+\gamma)(1-s)^{\delta-1} - \Gamma(\delta)\lambda(\zeta-s)^{\delta+\gamma-1} \right) \\ &\times \left( -(1-P)\Gamma(\delta+\gamma)(\delta-1)(1-s)^{\delta-2} + \Gamma(\delta)\lambda(\delta+\gamma-1)(\zeta-s)^{\delta+\gamma-2} \right) \\ &= \Gamma(\delta+\gamma)\Gamma(\delta)(1-s)^{\delta-2}\lambda(\zeta-s)^{\delta+\gamma-2}P\{ (\delta-1)(\zeta-1) - (1-s)\gamma \}. \end{split}$$

which is clearly nonpositive for $s \in [t, \zeta]$. Hence, $\psi(s) \le \psi(t)$ for $s \in [t, \zeta]$. Since $\varphi(t,s) = t^{\delta-1}\psi(s)$, this proves that, in the case $0 < t \le s \le \zeta < 1$, we have

$$\varphi(t,s) \le t^{\delta-1} \psi(t) = \frac{\Gamma(\delta+\gamma)t^{\delta-1}(1-t)^{\delta-1} - \Gamma(\delta)\lambda t^{\delta-1}(\zeta-t)^{\delta+\gamma-1}}{(1-P)\Gamma(\delta+\gamma)(1-t)^{\delta-1} - \Gamma(\delta)\lambda(\zeta-t)^{\delta+\gamma-1}} =: \mathcal{M}(t).$$

We now check that $\mathcal{M}(t) \le \frac{1}{1-P}$ for $t \in (0, \zeta]$, which is equivalent to

$$(1-P)\Gamma(\delta+\gamma)(1-t)^{\delta-1}\left(1-t^{\delta-1}\right) \ge \Gamma(\delta)\lambda(\zeta-t)^{\delta+\gamma-1}\left(1-(1-P)t^{\delta-1}\right), \quad t \in (0,\zeta].$$

By substituting the value of *P*, the previous condition is equivalent to the nonnegativity on the interval (0, *ζ*] of the function

$$R(t) := \zeta^{\delta+\gamma-1} (1-t)^{\delta-1} (1-t^{\delta-1}) - (\zeta -t)^{\delta+\gamma-1} \left( 1 - \frac{\Gamma(\delta) \lambda \zeta^{\delta+\gamma-1}}{\Gamma(\delta+\gamma)} t^{\delta-1} \right).$$

Indeed, $R(0) = \zeta^{\delta+\gamma-1} - \zeta^{\delta+\gamma-1} = 0$, $R(\zeta) = \zeta^{\delta+\gamma-1}(1-\zeta)^{\delta-1}\left(1-\zeta^{\delta-1}\right) > 0$, and

$$\begin{aligned} R'(t) &= \zeta^{\delta+\gamma-1} (\delta - 1)(1 - t)^{\delta - 2} \left( 1 - t^{\delta - 1} - (1 - t)t^{\delta - 2} \right) \\ &\quad + (\zeta - t)^{\delta + \gamma - 1} \left\{ (\delta + \gamma - 1) \left( 1 - \frac{\Gamma(\delta) \lambda \zeta^{\delta + \gamma - 1}}{\Gamma(\delta + \gamma)} t^{\delta - 1} \right) + \frac{\Gamma(\delta) \lambda \zeta^{\delta + \gamma - 1}}{\Gamma(\delta + \gamma)} (\delta - 1) t^{\delta - 2} \right\}, \end{aligned}$$

which is clearly positive on (0, *ζ*], since

$$\frac{\Gamma(\delta)\lambda\zeta^{\delta+\gamma-1}}{\Gamma(\delta+\gamma)}t^{\delta-1} < \frac{\Gamma(\delta)\lambda\zeta^{\delta+\gamma-1}}{\Gamma(\delta+\gamma)} < 1,$$

and $S(t) := 1 - t^{\delta-1} - (1-t)t^{\delta-2}$ satisfies $S(0) = 1$, $S(1) = 0$, and $S'(t) = t^{\delta-3}(2-\delta) < 0$ for $t \in (0,1)$; thus, $S > 0$ on $(0, \zeta]$. This proves that $R > 0$ on $(0, \zeta]$.
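As a sanity check, the nonnegativity of $R$ on $(0, \zeta]$ can be probed numerically. The following sketch uses illustrative parameter values of our own choosing ($\delta = 5/2$, $\gamma = 1/2$, $\lambda = 2$, $\zeta = 1/2$, which satisfy $\delta > 2$ and $P > 0$); it is not part of the original paper.

```python
import math

# Illustrative parameters (our choice, not from the paper); delta > 2 and P > 0.
delta, gamma_, lam, zeta = 2.5, 0.5, 2.0, 0.5

# c equals 1 - P = Gamma(delta) * lambda * zeta^(delta+gamma-1) / Gamma(delta+gamma)
c = math.gamma(delta) * lam * zeta ** (delta + gamma_ - 1.0) / math.gamma(delta + gamma_)
assert 0.0 < c < 1.0  # equivalent to 0 < P < 1

def R(t):
    # R(t) as defined above, with (1 - P) substituted for the Gamma quotient
    return (zeta ** (delta + gamma_ - 1.0) * (1.0 - t) ** (delta - 1.0)
            * (1.0 - t ** (delta - 1.0))
            - (zeta - t) ** (delta + gamma_ - 1.0)
            * (1.0 - c * t ** (delta - 1.0)))

# Evaluate R on a uniform grid of [0, zeta]
grid = [zeta * i / 200.0 for i in range(201)]
values = [R(t) for t in grid]
```

On this grid, `values[0]` reproduces $R(0) = 0$ and every other sample is strictly positive, in line with the argument above.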

For 0 < *ζ* ≤ *s* ≤ *t* < 1,

$$\begin{split} \varphi(t,s) &:= \frac{G(t,s)}{G(1,s)} = \frac{-P\Gamma(\delta+\gamma)(t-s)^{\delta-1} + \Gamma(\delta+\gamma)(1-s)^{\delta-1}t^{\delta-1}}{-P\Gamma(\delta+\gamma)(1-s)^{\delta-1} + \Gamma(\delta+\gamma)(1-s)^{\delta-1}} \\ &= t^{\delta-1}\, \frac{-P\Gamma(\delta+\gamma)\left(1-\frac{s}{t}\right)^{\delta-1} + \Gamma(\delta+\gamma)(1-s)^{\delta-1}}{-P\Gamma(\delta+\gamma)(1-s)^{\delta-1} + \Gamma(\delta+\gamma)(1-s)^{\delta-1}} \\ &= t^{\delta-1}\, \frac{-P\frac{\left(1-\frac{s}{t}\right)^{\delta-1}}{(1-s)^{\delta-1}} + 1}{-P+1} \\ &\in \left[t^{\delta-1}, \frac{1}{1-P}\right] = \left[t^{\delta-1}, \frac{\Gamma(\delta+\gamma)}{\lambda\Gamma(\delta)\zeta^{\delta+\gamma-1}}\right]. \end{split}$$

Finally, for $0 < t \le s < 1$, $s \ge \zeta$,

$$\begin{split} \varphi(t,s) &:= \frac{G(t,s)}{G(1,s)} \\ &= t^{\delta-1} \frac{\Gamma(\delta+\gamma)(1-s)^{\delta-1}}{-P\Gamma(\delta+\gamma)(1-s)^{\delta-1} + \Gamma(\delta+\gamma)(1-s)^{\delta-1}} \\ &= t^{\delta-1} \frac{1}{-P+1} \in \left[t^{\delta-1}, \frac{1}{1-P}\right] = \left[t^{\delta-1}, \frac{\Gamma(\delta+\gamma)}{\lambda\Gamma(\delta)\zeta^{\delta+\gamma-1}}\right]. \end{split}$$
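Collecting the four cases, Lemma 4 asserts that $\varphi(t,s) = G(t,s)/G(1,s) \in \left[t^{\delta-1}, \frac{1}{1-P}\right]$ for all $t, s \in (0,1)$. A small numerical sketch (our own check, not from the paper) can corroborate this; the Green's function below is read off from the integral expansion of $[Tu](t)$ used in the proof of Theorem 2, and the parameter values are illustrative choices of ours.

```python
import math

# Illustrative parameters (our choice): delta in (2,3], chosen so that P > 0.
delta, gamma_, lam, zeta = 2.5, 0.5, 2.0, 0.5
P = 1.0 - lam * math.gamma(delta) / math.gamma(delta + gamma_) * zeta ** (delta + gamma_ - 1.0)
assert P > 0.0

def G(t, s):
    # Green's function read off from the expansion of [Tu](t):
    #   G(t,s) = -(t-s)^{d-1}/Gamma(d) * 1_{s<=t}
    #            + t^{d-1}(1-s)^{d-1}/(P Gamma(d))
    #            - lam t^{d-1}(zeta-s)^{d+g-1}/(P Gamma(d+g)) * 1_{s<=zeta}
    val = t ** (delta - 1.0) * (1.0 - s) ** (delta - 1.0) / (P * math.gamma(delta))
    if s <= t:
        val -= (t - s) ** (delta - 1.0) / math.gamma(delta)
    if s <= zeta:
        val -= (lam * t ** (delta - 1.0) * (zeta - s) ** (delta + gamma_ - 1.0)
                / (P * math.gamma(delta + gamma_)))
    return val

# Check t^{delta-1} <= phi(t,s) <= 1/(1-P) on an interior grid
violations = 0
for i in range(1, 40):
    for j in range(1, 40):
        t, s = i / 40.0, j / 40.0
        phi = G(t, s) / G(1.0, s)
        if not (t ** (delta - 1.0) - 1e-9 <= phi <= 1.0 / (1.0 - P) + 1e-9):
            violations += 1
```

With these parameters, no grid point violates the two-sided bound of Lemma 4.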

#### **4. Main Results**

This section of the paper is focused on the study of the existence of at least one positive solution to the nonlinear boundary value problem specified in expression (1). The main tool used is the fixed point result by Guo and Krasnosel'skii [34], i.e., Theorem 1.

The base space of interest is $E = C[0,1]$, which is a Banach space when endowed with the usual supremum norm $\|\cdot\|$.

Next, similarly to [14], we consider the cone *K* ⊂ *E* defined in the following way:

$$K := \left\{ u \in E \,:\, u(t) \ge 0 \text{ for all } t \in [0,1],\ u(t) \ge t^{\delta-1}(1-P)\|u\| \text{ for all } t \in \left[\frac{1}{2}, 1\right] \right\}, \tag{7}$$

and develop, in the rest of the section, a procedure similar to that in the mentioned reference [14]. Hence, one of the assumptions that will be used is specified below:

(*a*) The function *f* : [0, 1] × [0, ∞) → [0, ∞) is continuous.

We take the following finite or infinite values:

$$f\_0 := \lim\_{h \to 0^+} \left\{ \min\_{t \in [\frac{1}{2}, 1]} \frac{f(t, h)}{h} \right\}, \quad f\_{\infty} := \lim\_{h \to \infty} \left\{ \min\_{t \in [\frac{1}{2}, 1]} \frac{f(t, h)}{h} \right\},$$

$$f^0 := \lim\_{h \to 0^+} \left\{ \max\_{t \in [0, 1]} \frac{f(t, h)}{h} \right\}, \text{ and } f^{\infty} := \lim\_{h \to \infty} \left\{ \max\_{t \in [0, 1]} \frac{f(t, h)}{h} \right\}.$$

Then, it is possible to extend Theorem 3.2 in [14] to the context of the general-order problem (1). This fact is the main conclusion of this paper.

**Theorem 2.** *Suppose that the hypothesis* (*a*) *is satisfied, and that one of the following assumptions also holds:*

*(i) $f_0 = \infty$ and $f^{\infty} = 0$ (sublinear case); or (ii) $f^0 = 0$ and $f_{\infty} = \infty$ (superlinear case).*
*Then, for all $\delta \in (n-1, n]$ and $\lambda > 0$ with $P := 1 - \frac{\lambda\Gamma(\delta)}{\Gamma(\delta+\gamma)}\zeta^{\delta+\gamma-1} > 0$, the problem* (1) *has a positive solution that belongs to the cone K given by* (7)*.*

**Proof.** We consider the mapping $T$ defined by $[Tu](t) := \int_0^1 G(t,s) f(s,u(s))\,ds$, where $G$ is the Green's function given in expression (3). In the first place, we check that $T : K \to K$ is a self-mapping and that $T$ is also completely continuous. Indeed, using the continuity and the nonnegative character of the functions $G$ and $f$ on $[0,1]\times[0,1]$ and $[0,1]\times[0,\infty)$, respectively, it is clear that, if $u \in K$, then $Tu$ is continuous and nonnegative on $[0,1]$.

To prove that $T$ maps $K$ into itself, let $u \in K$; then, by Lemma 4, we have

$$\begin{aligned} [Tu](t) &= \int\_0^1 G(t,s) f(s, u(s)) \, ds \\ &\ge t^{\delta - 1} \int\_0^1 G(1, s) f(s, u(s)) \, ds \\ &\ge t^{\delta - 1} (1 - P) \int\_0^1 \max\_{t \in [0, 1]} G(t, s) f(s, u(s)) \, ds \\ &\ge t^{\delta - 1} (1 - P) \max\_{t \in [0, 1]} \left\{ \int\_0^1 G(t, s) f(s, u(s)) \, ds \right\} \\ &= t^{\delta - 1} (1 - P) \|Tu\|. \end{aligned}$$

It is clear that the mapping $T : K \to K$ is continuous, since $G$ and $f$ are both continuous. Next, to check that $T$ is completely continuous, let $B \subset K$ be a bounded set, i.e., such that there exists a positive constant $N > 0$ with $\|u\| \le N$ for all $u \in B$. Consider the compact set $[0,1] \times [0,N]$, and take $L := \max_{(t,u)\in[0,1]\times[0,N]} |f(t,u)| + 1 > 0$.

Now we check that *T*(B) is a bounded set. Indeed, for an arbitrary *u* ∈ B, we have, by Corollary 1, that

$$|[Tu](t)| \le \max_{t\in[0,1]} \int_0^1 G(t,s)\,|f(s,u(s))|\,ds \le L \max_{t\in[0,1]} \int_0^1 t^{\delta-1} \frac{(1-s)^{\delta-1}}{P\Gamma(\delta)}\,ds \le \frac{L}{P\Gamma(\delta)},$$

for every *t* ∈ [0, 1], so that *T*(B) is a bounded subset of *E*.

On the other hand, we seek an estimate for the derivative of the functions in *T*(B). Given an arbitrary *u* ∈ B, we have, from the calculations in Lemma 2, that

$$\begin{aligned} [Tu](t) &= \int\_0^1 G(t,s) f(s, u(s)) \, ds \\ &= -\frac{1}{\Gamma(\delta)} \int\_0^t (t-s)^{\delta-1} f(s, u(s)) \, ds + \frac{t^{\delta-1}}{P\Gamma(\delta)} \int\_0^1 (1-s)^{\delta-1} f(s, u(s)) \, ds \\ &- \frac{\lambda t^{\delta-1}}{P\Gamma(\delta+\gamma)} \int\_0^\zeta (\zeta - s)^{\delta+\gamma-1} f(s, u(s)) \, ds, \end{aligned}$$

so that

$$\begin{split} |(Tu)'(t)| &= \left| -\frac{1}{\Gamma(\delta-1)} \int_0^t (t-s)^{\delta-2} f(s,u(s))\,ds + \frac{t^{\delta-2}}{P\Gamma(\delta-1)} \int_0^1 (1-s)^{\delta-1} f(s,u(s))\,ds \right. \\ &\qquad \left. - \frac{(\delta-1)\lambda t^{\delta-2}}{P\Gamma(\delta+\gamma)} \int_0^{\zeta} (\zeta-s)^{\delta+\gamma-1} f(s,u(s))\,ds \right| \\ &\le \frac{1}{\Gamma(\delta-1)} \int_0^t (t-s)^{\delta-2} |f(s,u(s))|\,ds + \frac{t^{\delta-2}}{|P|\Gamma(\delta-1)} \int_0^1 (1-s)^{\delta-1} |f(s,u(s))|\,ds \\ &\qquad + \frac{(\delta-1)\lambda t^{\delta-2}}{|P|\Gamma(\delta+\gamma)} \int_0^{\zeta} (\zeta-s)^{\delta+\gamma-1} |f(s,u(s))|\,ds \\ &\le \frac{L t^{\delta-1}}{\Gamma(\delta)} + \frac{L t^{\delta-2}}{|P|\Gamma(\delta-1)} + \frac{(\delta-1) L \lambda t^{\delta-2} \zeta^{\delta+\gamma}}{|P|\Gamma(\delta+\gamma)(\delta+\gamma)} \\ &\le \frac{L}{\Gamma(\delta)} + \frac{L}{|P|\Gamma(\delta-1)} + \frac{(\delta-1) L \lambda \zeta^{\delta+\gamma}}{|P|\Gamma(\delta+\gamma+1)} =: M. \end{split}$$

Therefore, for every *t*1, *t*<sup>2</sup> ∈ [0, 1] with *t*<sup>1</sup> < *t*2, we obtain

$$|[Tu](t_2) - [Tu](t_1)| \le M(t_2 - t_1),$$

and we deduce that *T*(B) is an equicontinuous set in *E*.

With these ingredients, the application of the Arzelà–Ascoli Theorem proves that *T*(B) is relatively compact. As a consequence, *T* : *K* → *K* is completely continuous.

Once we have proven some relevant properties of the mapping *T*, we distinguish two cases and complete the proof following the ideas in [14]. We include the explanations and adaptations here for completeness.

**Case (i):** ($f_0 = \infty$ and $f^{\infty} = 0$).

We choose $\tilde{\delta} > 0$ to be sufficiently large such that

$$\tilde{\delta}\,(1-P) \max_{t\in[0,1]} \left\{ \int_{\frac{1}{2}}^{1} s^{\delta-1} G(t,s)\,ds \right\} \ge 1. \tag{8}$$

Since $f_0 = \infty$, we can affirm the existence of a constant $\tilde{\rho} > 0$ such that $f(t,h) \ge \tilde{\delta} h$ for every $t \in \left[\frac{1}{2}, 1\right]$ and every $0 < h \le \tilde{\rho}$.

Then, for an arbitrary $u \in K$ with $\|u\| = \tilde{\rho}$, we have that $u(t) > 0$ for $t \in \left[\frac{1}{2}, 1\right]$ and, using the selection for $\tilde{\delta}$, we obtain that

$$\begin{aligned} \|Tu\| &= \max_{t \in [0,1]} \left\{ \int_0^1 G(t,s) f(s,u(s))\,ds \right\} \ge \tilde{\delta} \max_{t \in [0,1]} \left\{ \int_{\frac{1}{2}}^1 G(t,s)\,u(s)\,ds \right\} \\ &\ge \tilde{\delta}\,\|u\|\,(1-P) \max_{t \in [0,1]} \left\{ \int_{\frac{1}{2}}^1 s^{\delta-1} G(t,s)\,ds \right\} \ge \|u\|. \end{aligned}$$

By the continuity of *f*(*t*, ·) on the interval [0, ∞), we can consider the function:

$$\tilde{f}(t,h) := \max_{z \in [0,h]} f(t,z),$$

which is clearly a nondecreasing function of $h$ on $[0,\infty)$. By the hypothesis $f^{\infty} = 0$, it is deduced that

$$\lim_{h \to \infty} \left\{ \max_{t \in [0,1]} \frac{\tilde{f}(t,h)}{h} \right\} = 0.$$

Next, we select $\delta^* > 0$ small enough such that $\frac{\delta^*}{P\Gamma(\delta)} \le 1$.

By virtue of the previous limit, we can prove the existence of a constant $\rho^* > \tilde{\rho} > 0$ such that $\tilde{f}(t,h) \le \delta^* h$ for every $t \in [0,1]$ and all $h \ge \rho^*$.

If we take $u \in K$ such that $\|u\| = \rho^*$, then, using the nondecreasing character of $\tilde{f}$ and Lemma 3 (II) (or Corollary 1), the next inequalities are satisfied:

$$\begin{split} \|Tu\| = \max_{t\in[0,1]} \left\{ \int_0^1 G(t,s) f(s,u(s))\,ds \right\} &\le \max_{t\in[0,1]} \left\{ \int_0^1 G(t,s)\,\tilde{f}(s,\|u\|)\,ds \right\} \\ &\le \delta^* \|u\| \max_{t\in[0,1]} \left\{ \int_0^1 G(t,s)\,ds \right\} \le \frac{\delta^*}{P\Gamma(\delta)} \|u\| \le \|u\|. \end{split}$$

Therefore, by part (i) in Theorem 1, we can affirm that problem (1) has at least one positive solution $u$ with $\tilde{\rho} \le \|u\| \le \rho^*$.

**Case (ii):** ($f^0 = 0$ and $f_{\infty} = \infty$).

We take $\delta^* > 0$ with $\frac{\delta^*}{P\Gamma(\delta)} \le 1$.

Using $f^0 = 0$, it is possible to find a constant $r^* > 0$ such that $f(t,h) \le \delta^* h$ for every $t \in [0,1]$ and $0 < h \le r^*$. From $f^0 = 0$, it is clear that $\lim_{h\to 0^+} \frac{f(t,h)}{h} = 0$ for every $t \in [0,1]$; hence, $\lim_{h\to 0^+} f(t,h) = 0$, and thus, by the continuity of $f$, $f(t,0) = 0$ for every $t \in [0,1]$. This, together with the previous inequality, implies that $f(t,h) \le \delta^* h$ for every $t \in [0,1]$ and $0 \le h \le r^*$.

Then, for every $u \in K$ with $\|u\| = r^*$, we deduce that

$$\begin{aligned} \|Tu\| = \max_{t\in[0,1]} \left\{ \int_0^1 G(t,s) f(s,u(s))\,ds \right\} &\le \delta^* \|u\| \max_{t\in[0,1]} \left\{ \int_0^1 G(t,s)\,ds \right\} \\ &\le \frac{\delta^*}{P\Gamma(\delta)} \|u\| \le \|u\|. \end{aligned}$$

Finally, we select $\hat{\delta} > 0$ large enough such that

$$\frac{\hat{\delta}}{2^{\delta-1}}(1-P)\max_{t\in[0,1]}\left\{\int_{\frac{1}{2}}^{1}G(t,s)\,ds\right\} \ge 1.$$

Since $f_{\infty} = \infty$, we can affirm the existence of $\hat{r} > r^* > 0$, which can be taken to satisfy the additional condition $\hat{r}\, 2^{\delta-1} > r^*(1-P)$, such that $f(t,h) \ge \hat{\delta} h$ for all $t \in \left[\frac{1}{2}, 1\right]$ and all $h \ge \hat{r}$.

Next, we choose a convenient shell; in particular, we take an arbitrary $u \in K$ with $\|u\| = \frac{\hat{r}}{1-P}\, 2^{\delta-1}$. The definition of the cone $K$ implies that $u(t) \ge \hat{r}$ for every $t \in \left[\frac{1}{2}, 1\right]$.

In summary, in this case, we obtain that

$$\begin{split} \|Tu\| &= \max_{t\in[0,1]} \left\{ \int_{0}^{1} G(t,s) f(s,u(s))\,ds \right\} \ge \max_{t\in[0,1]} \left\{ \int_{\frac{1}{2}}^{1} G(t,s) f(s,u(s))\,ds \right\} \\ &\ge \hat{\delta} \max_{t\in[0,1]} \left\{ \int_{\frac{1}{2}}^{1} G(t,s)\,u(s)\,ds \right\} \ge \frac{\hat{\delta}}{2^{\delta-1}} (1-P) \|u\| \max_{t\in[0,1]} \left\{ \int_{\frac{1}{2}}^{1} G(t,s)\,ds \right\} \ge \|u\|. \end{split}$$

In consequence, by case (ii) in Theorem 1, we deduce that problem (1) has at least one positive solution such that $r^* \le \|u\| \le \frac{\hat{r}}{1-P}\, 2^{\delta-1}$.

#### **5. Example**

In this section, we discuss an example to show the applicability of our result.

**Example 1.** *Consider the following fractional integral boundary value problem on the interval* [0, 1]*:*

$$\begin{cases} D\_{0+}^{\frac{5}{2}} u(t) + f(t, u(t)) = 0 \\ u(0) = u'(0) = 0, \ u(1) = 2I\_{0+}^{\frac{1}{2}} u(\zeta), \end{cases} \tag{9}$$

*where* $f(t,u(t)) = u^{\frac{1}{3}}(t) + \log\left(1 + u^2(t)\right) + \sin^2\left(e^{u(t)}\right)$, $D_{0+}^{\frac{5}{2}}$ *denotes the Riemann–Liouville fractional derivative operator of order* $\delta = \frac{5}{2}$, $I_{0+}^{\frac{1}{2}}$ *is the Riemann–Liouville fractional integral operator of order* $\gamma = \frac{1}{2}$*, and* $0 < \zeta < 1$*. Here,* $f : [0,1] \times [0,\infty) \to [0,\infty)$ *is a continuous function. It is clear that* $f_0 = \infty$, $f^{\infty} = 0$*, and thus the function f is sublinear. Note that, since* $\frac{2\Gamma(\frac{5}{2})}{\Gamma(3)} > 1$*, the quantity* $P := 1 - \frac{2\Gamma(\frac{5}{2})}{\Gamma(3)}\zeta^2$ *vanishes at a certain* $\zeta \in (0,1)$*, exactly at* $\zeta^* := \sqrt{\frac{\Gamma(3)}{2\Gamma(\frac{5}{2})}}$*. Therefore, we must impose* $\zeta \in (0, \zeta^*)$ *in order to guarantee* $P > 0$*. Under this restriction, from case (i) in Theorem 2, the particular problem* (9) *has at least one positive solution.*
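The quantities in Example 1 are easy to verify numerically. The following sketch is our own check, not part of the paper: it locates $\zeta^* \approx 0.8674$ as the zero of $P$ and probes the sublinear behavior of $f$ near $0$ and at infinity (the sample points for $h$ are arbitrary choices of ours).

```python
import math

# Example 1 data: delta = 5/2, gamma = 1/2, lambda = 2.
delta, gamma_, lam = 2.5, 0.5, 2.0

def P(z):
    # P = 1 - lam * Gamma(delta)/Gamma(delta+gamma) * z^(delta+gamma-1)
    #   = 1 - 2 Gamma(5/2)/Gamma(3) * z^2 for these parameters
    return 1.0 - lam * math.gamma(delta) / math.gamma(delta + gamma_) * z ** (delta + gamma_ - 1.0)

# zeta* = sqrt(Gamma(3) / (2 Gamma(5/2))), the unique zero of P in (0, 1)
zeta_star = math.sqrt(math.gamma(3) / (lam * math.gamma(delta)))

def f(t, h):
    # nonlinearity of Example 1 (independent of t)
    return h ** (1.0 / 3.0) + math.log(1.0 + h ** 2) + math.sin(math.exp(h)) ** 2

# f(t,h)/h blows up as h -> 0+ (so f_0 = infinity) ...
small = [f(0.5, h) / h for h in (1e-2, 1e-4, 1e-6)]
# ... and tends to 0 as h -> infinity (so f^infinity = 0)
large = [f(0.5, h) / h for h in (50.0, 200.0, 700.0)]
```

The ratios in `small` grow without bound while those in `large` decay toward zero, consistent with the sublinearity claimed in the example.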

#### **6. Conclusions**

In this paper, we extended the results in [14] to general fractional problems of order greater than 2, dealing with the existence of positive solutions for differential equations of arbitrary order with fractional integral boundary conditions of the type (1). The introduction of a boundary condition that involves an integral operator of fractional type is interesting from the point of view of applications, since it allows for the mathematical expression of heterogeneity that may affect the dependence specified by the restriction added to the equation—a fact that is consistent with many physical problems.

The main tool used in the paper was Guo–Krasnosel'skii fixed point theorem in cones. In particular, in Lemma 2, we obtained, by imposing some adequate restrictions on the parameters, the integral expression of the solution to a modified linear fractional boundary value problem, which provides the Green's function of interest. Then, in Lemma 3, we studied some properties of the Green's function, including its positivity on (0, 1) × (0, 1) under some restrictions on the parameters, as well as some upper and lower estimates for its expression.

Another useful result is Lemma 4, which establishes the relation between the value of the Green's function at an arbitrary point and its value at the point with the same ordinate and abscissa 1. The explicit calculations for this general problem were developed in detail due to the high order of the equation and the difficulty generated by the introduction of fractional operators in the boundary conditions.

Theorem 2 provides the existence of a positive solution to (1) by assuming that the nonlinearity *f* is sublinear or superlinear. The proof, based on the Guo–Krasnosel'skii fixed point theorem, makes a selection of the conical shells that allow localization of the solution in each case. Then, we have not only deduced the existence of a positive solution but the details of the proof also provide the procedure to obtain an estimate for its maximum value and to determine positive numbers that are not upper bounds for the solution.

Since the fixed point theorem used has two contexts of application (a contractive and expansive case), it is possible to consider the problem under two types of hypotheses; that is, two types of restrictions on the function defining the equation. The consideration of other types of restrictions on the function *f* can be one of the possible future lines of research.

Finally, an example was presented.

**Author Contributions:** Conceptualization, A.T., J.A. and R.R.-L.; methodology, A.T., J.A. and R.R.-L.; formal analysis, A.T., J.A. and R.R.-L.; investigation, A.T., J.A. and R.R.-L.; writing—review and editing, A.T., J.A. and R.R.-L. All authors have read and agreed to the published version of the manuscript.

**Funding:** The research of R. Rodríguez-López was partially supported by AEI/FEDER, UE, grant numbers PID2020-113275GB-I00 and MTM2016-75140-P, and by GRC Xunta de Galicia grant number ED431C 2019/02.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Not applicable.

**Acknowledgments:** The authors are grateful to the anonymous Referees for their helpful comments and suggestions towards the improvement of the paper.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


### *Article* **Stability of Parametric Intuitionistic Fuzzy Multi-Objective Fractional Transportation Problem**

**Mohamed A. El Sayed 1,2,\*, Mohamed A. El-Shorbagy 3,4, Farahat A. Farahat 5, Aisha F. Fareed <sup>2</sup> and Mohamed A. Elsisy 2,6**


**Abstract:** In this study, a parametric intuitionistic fuzzy multi-objective fractional transportation problem (PIF-MOFTP) is proposed. The current PIF-MOFTP has a single scalar parameter in the objective functions and intuitionistic fuzzy supply and demand. Based on the (*α*, *β*)-cut concept, a parametric (*α*, *β*)-MOFTP is established. Then, a fuzzy goal programming (FGP) approach is utilized to obtain an (*α*, *β*)-Pareto optimal solution. We investigated the stability set of the first kind (SSFK) corresponding to that solution by extending the Kuhn-Tucker optimality conditions of multi-objective programming problems. An algorithm to determine the SSFK for the PIF-MOFTP, as well as an illustrative numerical example, is presented.

**Keywords:** multi-objective programming; fractional transportation problem; intuitionistic fuzzy set; parametric programming

#### **1. Introduction**

Transportation problems (TPs) have been studied in a variety of works [1–7]. These problems and their solution procedures play a worthy role in logistics and supply chain management for reducing costs, improving service quality, etc. [3,8]. Nonetheless, a TP is often described by multiple, incommensurable, and conflicting objective functions, in which case it is known as the multi-objective transportation problem (MO-TP). Accordingly, in MO-TP, the notion of an optimal solution gives way to the notion of the best compromise solution or the non-dominated solutions. Optimization of the ratio of two functions is called fractional programming (ratio optimization) [7,9]. Indeed, in such circumstances, one often optimizes a ratio of benefit/cost, stock/sales, staff/patients, and so on, subject to some constraints [7,9].

One of the significant issues faced by practitioners is that of knowing the exact values of the parameters [7]. This may require accounting for vagueness when specifying the fundamental parameters of the model, which are the coefficients of the objective functions and the constraints [4,8]. Accordingly, it is natural to capture the qualitative knowledge of experts and decision makers about these parameters in the form of fuzzy data [7,10]. Uncertainty may also arise from uncontrollable factors. In this study, the main hypotheses are that the transportation cost has a parametric nature, and that the supply and demand parameters are intuitionistic

**Citation:** El Sayed, M.A.; El-Shorbagy, M.A.; Farahat, F.A.; Fareed, A.F.; Elsisy, M.A. Stability of Parametric Intuitionistic Fuzzy Multi-Objective Fractional Transportation Problem. *Fractal Fract.* **2021**, *5*, 233. https://doi.org/ 10.3390/fractalfract5040233

Academic Editor: Savin Trean¸tă

Received: 14 October 2021 Accepted: 8 November 2021 Published: 19 November 2021

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

fuzzy numbers (IFNs). These hypotheses have not been presented together in the literature, and the basic question is how to obtain the SSFK for such a PIF-MOFTP.

#### **2. Literature Review**

Research on MO-TP has been enriched by incorporating diverse numerical models and procedures. James et al. [11] examined transportation service quality based on information fusion. A body of work dealing with transportation safety was developed by Ergun et al. [12] and Sheu and Chen [13]. Recently, MO-TP under different circumstances has been discussed by Roy et al. [14,15], Roy and Mahapatra [16], Roy [17], and Maity and Roy [18,19].

Although fuzzy set theory (FST) is a novel tool for handling uncertainties, it cannot tackle special kinds of uncertainties, as it is difficult to depict the membership degree using one specific value. To overcome the lack of knowledge of non-membership degrees, the intuitionistic fuzzy set (IFS) was presented in 1986 by Atanassov [20] as an extension of FST. In an IFS, each element of a set is attached with two grades, membership and non-membership, where the sum of these two grades is restricted to be less than or equal to one. Moreover, many authors have utilized IFSs for addressing various sorts of TPs [21,22]. The study of MO-TP with vague numbers was presented by Ammar and Youness [1]. A fuzzy programming strategy was introduced to tackle MO-TP with various non-linear membership functions [23]. IFSs have additionally been utilized by several researchers to tackle different types of TPs [10,24]. One more strategy for solving linear MO-TPs of a vague nature was suggested by Gupta and Kumar [25]. Recently, MO-TP under various types of uncertainty has been discussed by Roy and Mahapatra [16], Maity and Roy [26], and Ebrahimnejad and Verdegay [10]. Mahajan and Gupta [27] proposed a fully IF MO-TP utilizing various membership functions. The achievement stability set for parametric linear FGP problems was introduced by El Sayed and Farahat [28]. A neutrosophic goal programming approach for solving the multi-objective fractional transportation problem was introduced by Veeramani et al. [29]. Pramanik and Banerjee [30] proposed a chance-constrained capacitated MO-TP with two fuzzy goals, and a consensus solution was found. Edalatpanah [31] developed a nonlinear framework for neutrosophic linear programming. Furthermore, Rizk-Allah et al. [32] developed a compromise solution framework for the MO-TP based on the neutrosophic environment.
A fuzzy approach using the generalized Dinkelbach algorithm for the linear multi-objective fractional transportation problem (MOFTP) was presented by Cetin and Tiryaki [3]. A fuzzy mathematical programming approach for solving the fuzzy linear fractional programming problem was demonstrated by Veeramani and Sumathi [33]. El Sayed and Abo-Sinna [7] introduced the intuitionistic fuzzy multi-objective fractional transportation problem (IF-MOFTP).

Parametric programming examines the impact of predetermined continuous variations in the objective function coefficients and the right-hand side of the constraints on the optimal solution [34–36]. In parametric analysis, the objective function and the right-hand-side vectors are replaced with the parameterized functions *c*(*ϑ*) and *b*(*α*, *β*), where *ϑ*, *α*, and *β* are the parameters of variation. The general idea of parametric analysis is to start with the *α*-Pareto optimal solution at *ϑ* = *ϑ*∗, *α* = *α*∗, *β* = *β*∗. Then, by applying the KKT optimality conditions, the SSFK is determined [35,37]. The concept of the stability set of the first kind (SSFK) was introduced by Osman [35], and extended by Saad [38], Saad and Hughes [39], Osman et al. [36], and Saad et al. [40].

In prior examinations, the MO-TP was developed under the presumption that the supply, demand, and cost parameters were known exactly. Nonetheless, in applications, the parameters of the TP are generally not defined precisely; they may take IF values. Similar considerations apply to the supply and demand parameters of the TP in this paper. Keeping this perspective, the primary contributions concern two distinct aspects: one is to find an (*α*, *β*)-Pareto optimal solution for the PIF-MOFTP, and the other is to investigate the SSFK for the PIF-MOFTP. First, based on the (*α*, *β*)-cut methodology, a parametric (*α*, *β*)-MOFTP is established. Then, an FGP approach is used to

obtain an (*α*, *β*)-Pareto optimal solution. Finally, the KKT optimality conditions are applied to obtain the SSFK. An algorithm to clarify the developed SSFK for the PIF-MOFTP, as well as an illustrative numerical example, is given.

The rest of this study is organized as follows: after the introduction and literature review, Section 3 introduces some basic concepts. The modelling of the PIF-MOFTP is presented in Section 4. Section 5 demonstrates the FGP methodology for tackling the PIF-MOFTP. Section 6 investigates the SSFK and introduces an algorithm for obtaining it for the PIF-MOFTP. An illustrative example, a discussion, and limitations are given in Section 7. The paper ends with some concluding remarks.

#### **3. Preliminaries**

This part presents the concept of IFS [20,21,41,42].

**Definition 1.** *An IFS $\tilde{A}^I$ in X is a set of ordered triples $\tilde{A}^I = \left\{ \left\langle x, \mu_{\tilde{A}^I}(x), \nu_{\tilde{A}^I}(x) \right\rangle \mid x \in X \right\}$, where $\mu_{\tilde{A}^I}(x), \nu_{\tilde{A}^I}(x) : X \to [0,1]$ are functions such that $0 \le \mu_{\tilde{A}^I}(x) + \nu_{\tilde{A}^I}(x) \le 1$ for all $x \in X$. The value of $\mu_{\tilde{A}^I}(x)$ acts as the grade of membership and $\nu_{\tilde{A}^I}(x)$ acts as the grade of non-membership of the element $x \in X$ in $\tilde{A}^I$. $h(x) = 1 - \mu_{\tilde{A}^I}(x) - \nu_{\tilde{A}^I}(x)$ represents the grade of hesitation for the element x in $\tilde{A}^I$ [20,41].*

**Definition 2.** *An IFN of the form $\tilde{A}^I = \left\langle (a_1, a_2, a_3); (\bar{a}_1, a_2, \bar{a}_3) \right\rangle$ is said to be a triangular IFN (TIFN), with membership and non-membership functions defined as [41,43]:*

$$\mu_{\tilde{A}^I}(x) = \begin{cases} \frac{x-a_1}{a_2-a_1}, & a_1 \le x \le a_2, \\ \frac{a_3-x}{a_3-a_2}, & a_2 \le x \le a_3, \\ 0, & \text{otherwise}, \end{cases} \tag{1}$$

$$\nu_{\tilde{A}^I}(x) = \begin{cases} \frac{a_2-x}{a_2-\bar{a}_1}, & \bar{a}_1 \le x \le a_2, \\ \frac{x-a_2}{\bar{a}_3-a_2}, & a_2 \le x \le \bar{a}_3, \\ 1, & \text{otherwise}. \end{cases} \tag{2}$$

where $\frac{x - a_1}{a_2 - a_1}$ and $\frac{x - a_2}{\bar{a}_3 - a_2}$ are continuous monotone increasing functions, while $\frac{a_3 - x}{a_3 - a_2}$ and $\frac{a_2 - x}{a_2 - \bar{a}_1}$ are continuous monotone decreasing functions. Here $\frac{x - a_1}{a_2 - a_1}$ and $\frac{a_3 - x}{a_3 - a_2}$ are the left and right basis functions of the membership function, and $\frac{a_2 - x}{a_2 - \bar{a}_1}$ and $\frac{x - a_2}{\bar{a}_3 - a_2}$ are the left and right basis functions of the non-membership function (see Figure 1). Moreover, $\bar{a}_1 \le a_1 \le a_2 \le a_3 \le \bar{a}_3$ and $0 \le \mu_{\widetilde{A}^I}(x) + \nu_{\widetilde{A}^I}(x) \le 1$, $\forall x \in X$.

**Figure 1.** Triangular Intuitionistic Fuzzy number.
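The piecewise functions in Equations (1) and (2) are straightforward to evaluate numerically. The following sketch is not part of the original paper; it uses the supply TIFN $\widetilde{a}_1^I = (140, 160, 180; 130, 160, 200)$ that appears in the numerical example of Section 7:

```python
def tifn_membership(x, a1, a2, a3):
    # Membership function of a TIFN (Equation (1)).
    if a1 <= x <= a2:
        return (x - a1) / (a2 - a1)
    if a2 < x <= a3:
        return (a3 - x) / (a3 - a2)
    return 0.0

def tifn_nonmembership(x, ab1, a2, ab3):
    # Non-membership function of a TIFN (Equation (2));
    # ab1, ab3 play the roles of the outer points a̅1, a̅3.
    if ab1 <= x <= a2:
        return (a2 - x) / (a2 - ab1)
    if a2 < x <= ab3:
        return (x - a2) / (ab3 - a2)
    return 1.0

mu = tifn_membership(150, 140, 160, 180)     # 0.5
nu = tifn_nonmembership(150, 130, 160, 200)  # ≈ 0.333
```

For $x = 150$ this gives $\mu = 1/2$ and $\nu = 1/3$, so the hesitation degree is $h = 1 - \mu - \nu = 1/6 \ge 0$, as Definition 1 requires.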

**Definition 3.** *A TIFN $\widetilde{A}^I = \left(a_1, a_2, a_3; \bar{a}_1, a_2, \bar{a}_3\right)$ is assumed to be a non-negative TIFN iff $\bar{a}_1 \ge 0$ [41,43].*

**Definition 4.** *Two TIFNs $\widetilde{A}^I = \left(a_1, a_2, a_3; \bar{a}_1, a_2, \bar{a}_3\right)$ and $\widetilde{B}^I = \left(b_1, b_2, b_3; \bar{b}_1, b_2, \bar{b}_3\right)$ are equivalent, $\widetilde{A}^I = \widetilde{B}^I$, iff $a_i = b_i$ and $\bar{a}_i = \bar{b}_i$, $\forall i = 1, 2, 3$ [7,41,43].*

**Definition 5.** *The $(\alpha, \beta)$-cut of an IFS $\widetilde{A}^I$ is defined by $\widetilde{A}^I_{(\alpha,\beta)} = \{x : \mu_{\widetilde{A}^I}(x) \ge \alpha,\ \nu_{\widetilde{A}^I}(x) \le \beta,\ \alpha + \beta \le 1,\ x \in X\}$, where $\alpha, \beta \in (0, 1]$.*

**Definition 6.** *The $(\alpha, \beta)$-cut of a TIFN $\widetilde{A}^I = \left(a_1, a_2, a_3; \bar{a}_1, a_2, \bar{a}_3\right)$ is the set of all $x$ whose degree of membership is greater than or equal to $\alpha$ and whose degree of non-membership is less than or equal to $\beta$, i.e., $\widetilde{A}^I_{(\alpha,\beta)} = \{x : \mu_{\widetilde{A}^I}(x) \ge \alpha,\ \nu_{\widetilde{A}^I}(x) \le \beta,\ \alpha + \beta \le 1,\ x \in X\}$.*

The $(\alpha, \beta)$-cut of a TIFN, shown in Figure 2, is the crisp set of elements $x$ that belong to $\widetilde{A}^I$ at least to the degree $\alpha$ and whose non-membership in $\widetilde{A}^I$ is at most $\beta$.

**Figure 2.** The (*α*, *β*)-cut of a TIFN.

Now, $\mu_{\widetilde{A}^I}(x) \ge \alpha \Rightarrow \frac{x - a_1}{a_2 - a_1} \ge \alpha$ and $\frac{a_3 - x}{a_3 - a_2} \ge \alpha$, i.e., $x \ge a_1 + \alpha(a_2 - a_1)$ and $x \le a_3 - \alpha(a_3 - a_2)$. Again, $\nu_{\widetilde{A}^I}(x) \le \beta \Rightarrow \frac{a_2 - x}{a_2 - \bar{a}_1} \le \beta$ and $\frac{x - a_2}{\bar{a}_3 - a_2} \le \beta$, i.e., $x \ge a_2 - \beta(a_2 - \bar{a}_1)$ and $x \le a_2 + \beta(\bar{a}_3 - a_2)$ [43]. Thus, referring to Figure 2, $\widetilde{A}^I_{(\alpha,\beta)} = [A^L, A^U]$, where $A^L = \max\{a_1 + \alpha(a_2 - a_1),\ a_2 - \beta(a_2 - \bar{a}_1)\}$ and $A^U = \min\{a_3 - \alpha(a_3 - a_2),\ a_2 + \beta(\bar{a}_3 - a_2)\}$.
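These closed-form bounds make the cut easy to compute. A minimal sketch (not from the paper), with the TIFN passed as the tuple $(a_1, a_2, a_3; \bar{a}_1, \bar{a}_3)$:

```python
def tifn_cut(a1, a2, a3, ab1, ab3, alpha, beta):
    # (α, β)-cut [A^L, A^U] of a TIFN, using the max/min expressions above.
    lower = max(a1 + alpha * (a2 - a1), a2 - beta * (a2 - ab1))
    upper = min(a3 - alpha * (a3 - a2), a2 + beta * (ab3 - a2))
    return lower, upper

# Supply TIFN from the example of Section 7, with α = 0.6, β = 0.2:
lo, hi = tifn_cut(140, 160, 180, 130, 200, 0.6, 0.2)  # (154.0, 168.0)
```

The resulting interval $[154, 168]$ matches the first supply constraint obtained in Phase I of the numerical example.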

#### **4. Mathematical Formulation**

In a real-life TP, the transportation parameters are not precise during the modelling process, on account of insufficient information and the variability of the market situation. To deal quantitatively with such unclear information, we consider a parametric IF-MOFTP with a single scalar parameter $\vartheta \in \mathbb{R}$ in the objective functions and an intuitionistic fuzzy supply and demand. Suppose that there are $m$ sources and $n$ destinations. The parametric IF-MOFTP can then be modelled as [3,7,9]:

$$\mathbf{Max}\ Z_q(x, \vartheta) = \frac{\sum_{i=1}^{m} \sum_{j=1}^{n} \left(c_{ij} + \vartheta \omega_{ij}\right)^{(q)} x_{ij}^{(q)} + \delta^{(q)}}{\sum_{i=1}^{m} \sum_{j=1}^{n} d_{ij}^{(q)} x_{ij}^{(q)} + \rho^{(q)}}, \quad q = 1, 2, \ldots, Q, \tag{3}$$

**Subject to:**

$$\sum_{j=1}^{n} x_{ij} \le \widetilde{a}_i^I, \quad i = 1, 2, \dots, m, \tag{4}$$

$$\sum_{i=1}^{m} x_{ij} \ge \widetilde{b}_j^I, \quad j = 1, 2, \dots, n, \tag{5}$$

$$x_{ij} \ge 0, \quad i = 1, 2, \dots, m, \ j = 1, 2, \dots, n, \tag{6}$$

where $c_{ij}^{(q)} = \left(c_{ij} + \vartheta \omega_{ij}\right)^{(q)}$ denotes the parametric profit gained from shipment from the $i^{th}$ source to the $j^{th}$ destination. Also, $d_{ij}^{(q)}$ denotes the expense per unit of shipment from the $i^{th}$ source to the $j^{th}$ destination. $\delta^{(q)}$ and $\rho^{(q)}$ are some constant profit and cost, respectively. $x_{ij}^{(q)}$ is the quantity shipped from the $i^{th}$ source to the $j^{th}$ destination. $\widetilde{a}_i^I = \left(a_i^1, a_i^2, a_i^3; \bar{a}_i^1, a_i^2, \bar{a}_i^3\right)$ stands for the available intuitionistic fuzzy supply at the $i^{th}$ source, and $\widetilde{b}_j^I = \left(b_j^1, b_j^2, b_j^3; \bar{b}_j^1, b_j^2, \bar{b}_j^3\right)$ denotes the accessible intuitionistic fuzzy demand at the $j^{th}$ destination. Further, we postulate that $\sum_{i=1}^{m} \sum_{j=1}^{n} d_{ij}^{(q)} x_{ij}^{(q)} + \rho^{(q)} > 0$, $q = 1, 2, \ldots, Q$; $\widetilde{a}_i^I > 0^I$ $\forall i$; $\widetilde{b}_j^I > 0^I$ $\forall j$; $\left(c_{ij} + \vartheta \omega_{ij}\right)^{(q)} > 0$ and $\delta^{(q)}, \rho^{(q)} > 0$ for all $i$, $j$; and the gross supply is greater than or equal to the gross demand [3,7]:

$$\sum_{i=1}^{m} \left(\widetilde{a}_i^I\right)_{(\alpha,\beta)} \ge \sum_{j=1}^{n} \left(\widetilde{b}_j^I\right)_{(\alpha,\beta)}. \tag{7}$$

Inequality (7) is a necessary and sufficient condition for the existence of a feasible solution to the PIF-MOFTP.
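The fractional objective in Equation (3) can be evaluated directly once the data are fixed. A small sketch with hypothetical $2 \times 2$ data (all values below are illustrative, not taken from the paper):

```python
def z_q(x, c, omega, d, delta, rho, theta):
    # Parametric fractional objective of Equation (3):
    # (ΣΣ (c_ij + ϑ ω_ij) x_ij + δ) / (ΣΣ d_ij x_ij + ρ).
    m, n = len(x), len(x[0])
    num = sum((c[i][j] + theta * omega[i][j]) * x[i][j]
              for i in range(m) for j in range(n)) + delta
    den = sum(d[i][j] * x[i][j] for i in range(m) for j in range(n)) + rho
    assert den > 0  # postulated: the denominator is strictly positive
    return num / den

c = [[1.0, 2.0], [2.0, 1.0]]
omega = [[1.0, 0.0], [0.0, 1.0]]
d = [[1.0, 1.0], [1.0, 1.0]]
x = [[1.0, 1.0], [1.0, 1.0]]
value = z_q(x, c, omega, d, delta=1.0, rho=2.0, theta=2.0)  # 11/6
```

With these data the numerator is $10 + 1 = 11$ and the denominator $4 + 2 = 6$, so $Z_q = 11/6$.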

For a given $(\alpha, \beta)$-cut level, the PIF-MOFTP can be transformed into the parametric $(\alpha, \beta)$-MOFTP as:

$$\mathbf{Max}\ Z_q(x, \vartheta) = \frac{\sum_{i=1}^{m} \sum_{j=1}^{n} \left(c_{ij} + \vartheta \omega_{ij}\right)^{(q)} x_{ij}^{(q)} + \delta^{(q)}}{\sum_{i=1}^{m} \sum_{j=1}^{n} d_{ij}^{(q)} x_{ij}^{(q)} + \rho^{(q)}}, \quad q = 1, 2, \ldots, Q. \tag{8}$$

**Subject to:**

$$\sum_{j=1}^{n} x_{ij} \le (a_i)_{(\alpha,\beta)}, \quad i = 1, 2, \dots, m, \tag{9}$$

$$\sum\_{i=1}^{m} x\_{ij} \ge \left( b\_{\hat{j}} \right)\_{(a,\emptyset)} \quad j = 1, 2, \dots, n,\tag{10}$$

$$x_{ij} \ge 0, \quad i = 1, 2, \dots, m, \ j = 1, 2, \dots, n, \tag{11}$$

$$a_i^L \le (a_i)_{(\alpha,\beta)} \le a_i^U, \quad i = 1, 2, \dots, m, \tag{12}$$

$$b_j^L \le \left(b_j\right)_{(\alpha,\beta)} \le b_j^U, \quad j = 1, 2, \dots, n. \tag{13}$$

Based on the convex linear combination method proposed in [40], the parametric $(\alpha, \beta)$-MOFTP can be rewritten as:

$$\mathbf{Max}\ Z_q(x, \vartheta) = \frac{\sum_{i=1}^{m} \sum_{j=1}^{n} \left(c_{ij} + \vartheta \omega_{ij}\right)^{(q)} x_{ij}^{(q)} + \delta^{(q)}}{\sum_{i=1}^{m} \sum_{j=1}^{n} d_{ij}^{(q)} x_{ij}^{(q)} + \rho^{(q)}}, \quad q = 1, 2, \ldots, Q, \tag{14}$$

**Subject to:**

$$\sum_{j=1}^{n} x_{ij} \le \lambda\, a_i^L + (1 - \lambda) a_i^U, \quad i = 1, 2, \dots, m, \tag{15}$$

$$\sum_{i=1}^{m} x_{ij} \ge \lambda\, b_j^L + (1 - \lambda) b_j^U, \quad j = 1, 2, \dots, n, \tag{16}$$

$$x_{ij} \ge 0, \ \lambda \in [0, 1], \quad i = 1, 2, \dots, m, \ j = 1, 2, \dots, n. \tag{17}$$

Let $M_{(\alpha,\beta)}$ denote the set of constraints in Equations (15)–(17); the parametric $(\alpha, \beta)$-MOFTP has an $(\alpha, \beta)$-Pareto optimal solution $x_{ij}^*$ at $\vartheta^*$.

**Definition 7.** $(\alpha, \beta)$*-Pareto optimal solution. $x_{ij}^* \in M_{(\alpha,\beta)}$ is said to be an $(\alpha, \beta)$-Pareto optimal solution to the $(\alpha, \beta)$-MOFTP if and only if there does not exist another $x_{ij}^{\circ} \in M_{(\alpha,\beta)}$, $a_i \in (a_i)_{(\alpha,\beta)}$, $b_j \in (b_j)_{(\alpha,\beta)}$, such that $Z_q\big(x_{ij}^{\circ}, \vartheta^*\big) \ge Z_q\big(x_{ij}^*, \vartheta^*\big)$ with at least one strict inequality holding for some $q$ $(q = 1, 2, \ldots, Q)$.*

#### **5. FGP Methodology for PIF-MOFTP**

In this section the FGP approach is applied to obtain the compromise solution of the parametric $(\alpha, \beta)$-MOFTP. The objective functions are modelled as fuzzy goals characterized by their membership functions $\mu(z_q(x, \vartheta^*))$ [36,44–46]. The model formulation and solution process are carried out at $\vartheta = \vartheta^*$. The membership function of the $q^{th}$ fuzzy goal [36,44] is defined as:

$$\mu_{(z_q(x,\vartheta^*))} = \begin{cases} 1, & \text{if } Z_q(x, \vartheta^*) \ge u_q^*, \\ \dfrac{Z_q(x, \vartheta^*) - g_q^*}{u_q^* - g_q^*}, & \text{if } g_q^* \le Z_q(x, \vartheta^*) \le u_q^*, \\ 0, & \text{if } Z_q(x, \vartheta^*) \le g_q^*, \end{cases} \qquad q = 1, 2, \dots, Q, \tag{18}$$

where $u_q^* = \max Z_q(x, \vartheta^*)$ and $g_q^* = \min Z_q(x, \vartheta^*)$ denote the upper and lower tolerance limits for the membership function of the $q^{th}$ objective, respectively. In the FGP approach, the highest attainable membership level is unity. So, the membership goals with the aspired level unity follow as [44]:

$$
\mu_q\left(Z_q(x, \vartheta^*)\right) + d_q^- - d_q^+ = 1, \quad q = 1, 2, \dots, Q,\tag{19}
$$

where $d_q^-, d_q^+ \ge 0$, with $d_q^- \times d_q^+ = 0$, denote the under- and over-deviations, respectively, from the aspired levels [36,44]. The final FGP model of the parametric $(\alpha, \beta)$-MOFTP can be obtained as:

$$\mathbf{Min}\ AF = \sum_{q=1}^{Q} w_q^-\, d_q^-, \tag{20}$$

**Subject to:**

$$\frac{Z_q(x, \vartheta^*) - g_q^*}{u_q^* - g_q^*} + d_q^- - d_q^+ = 1, \quad q = 1, 2, \dots, Q, \tag{21}$$

$$\sum_{j=1}^{n} x_{ij} \le \lambda\, a_i^L + (1 - \lambda) a_i^U, \quad i = 1, 2, \dots, m, \tag{22}$$

$$\sum_{i=1}^{m} x_{ij} \ge \lambda\, b_j^L + (1 - \lambda) b_j^U, \quad j = 1, 2, \dots, n, \tag{23}$$

$$x_{ij} \ge 0, \ \lambda \in [0, 1], \quad i = 1, 2, \dots, m, \ j = 1, 2, \dots, n, \tag{24}$$

$$d\_q^- \times d\_q^+ = 0,\text{ and }d\_q^- \text{ , }d\_q^+ \ge 0,\text{ }q = 1,2,\dots,Q,\tag{25}$$

where $w_q^-$ represents the relative importance of achieving the aspired levels of the respective fuzzy goals, which is given by [44,47]:

$$w_q^- = \frac{1}{u_q^* - g_q^*}, \quad q = 1, 2, \dots, Q. \tag{26}$$
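Equations (18) and (26) can be sketched as follows. This is a simple illustration written for this reprint, not from the paper; the tolerance limits $u_q^*$ and $g_q^*$ below are hypothetical:

```python
def goal_membership(z, u, g):
    # Membership of the q-th fuzzy goal (Equation (18)):
    # 1 above u_q*, 0 below g_q*, linear in between.
    if z >= u:
        return 1.0
    if z <= g:
        return 0.0
    return (z - g) / (u - g)

def goal_weight(u, g):
    # Relative weight w_q^- = 1 / (u_q* - g_q*) (Equation (26)).
    return 1.0 / (u - g)

mu = goal_membership(2.0, u=3.0, g=1.0)  # 0.5
w = goal_weight(3.0, 1.0)                # 0.5
```

An objective value halfway between the tolerance limits attains membership $0.5$, and a narrower tolerance interval yields a larger weight $w_q^-$.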

#### *Extension of Pal's Method to Linearize the Membership Goals*

It can easily be seen that the parametric membership goals in Equation (19) are non-linear fractional in nature. To avoid this problem, the method of Pal et al. [45] is extended here to linearize the $q^{th}$ membership goal with the single scalar parameter $\vartheta = \vartheta^*$ as:

$$
\mu_q\left(Z_q(x, \vartheta^*)\right) + d_q^- - d_q^+ = 1, \quad q = 1, 2, \dots, Q,\tag{27}
$$

$$L_q\left(Z_q(x, \vartheta^*)\right) - L_q g_q^* + d_q^- - d_q^+ = 1; \quad L_q = \frac{1}{u_q^* - g_q^*}, \tag{28}$$

$$Z_q(x, \vartheta^*) = \frac{\sum_{i=1}^{m} \sum_{j=1}^{n} \left(c_{ij} + \vartheta^* \omega_{ij}\right)^{(q)} x_{ij}^{(q)} + \delta^{(q)}}{\sum_{i=1}^{m} \sum_{j=1}^{n} d_{ij}^{(q)} x_{ij}^{(q)} + \rho^{(q)}}, \quad q = 1, 2, \dots, Q. \tag{29}$$

Substituting Equation (29) into Equation (28), we obtain:

$$L_q \frac{\sum_{i=1}^{m} \sum_{j=1}^{n} \left(c_{ij} + \vartheta^* \omega_{ij}\right)^{(q)} x_{ij}^{(q)} + \delta^{(q)}}{\sum_{i=1}^{m} \sum_{j=1}^{n} d_{ij}^{(q)} x_{ij}^{(q)} + \rho^{(q)}} - L_q g_q^* + d_q^- - d_q^+ = 1, \tag{30}$$

$$\begin{split} L_q\left[\sum_{i=1}^{m} \sum_{j=1}^{n} \left(c_{ij} + \vartheta^* \omega_{ij}\right)^{(q)} x_{ij}^{(q)} + \delta^{(q)}\right] &- L_q g_q^*\left[\sum_{i=1}^{m} \sum_{j=1}^{n} d_{ij}^{(q)} x_{ij}^{(q)} + \rho^{(q)}\right] + d_q^-\left[\sum_{i=1}^{m} \sum_{j=1}^{n} d_{ij}^{(q)} x_{ij}^{(q)} + \rho^{(q)}\right] \\ &- d_q^+\left[\sum_{i=1}^{m} \sum_{j=1}^{n} d_{ij}^{(q)} x_{ij}^{(q)} + \rho^{(q)}\right] = \left[\sum_{i=1}^{m} \sum_{j=1}^{n} d_{ij}^{(q)} x_{ij}^{(q)} + \rho^{(q)}\right], \end{split} \tag{31}$$

$$\begin{pmatrix} L_q\left[\sum_{i=1}^{m} \sum_{j=1}^{n} \left(c_{ij} + \vartheta^* \omega_{ij}\right)^{(q)} x_{ij}^{(q)} + \delta^{(q)}\right] \\ +\, d_q^-\left[\sum_{i=1}^{m} \sum_{j=1}^{n} d_{ij}^{(q)} x_{ij}^{(q)} + \rho^{(q)}\right] \\ -\, d_q^+\left[\sum_{i=1}^{m} \sum_{j=1}^{n} d_{ij}^{(q)} x_{ij}^{(q)} + \rho^{(q)}\right] \end{pmatrix} = \left(1 + L_q g_q^*\right)\left[\sum_{i=1}^{m} \sum_{j=1}^{n} d_{ij}^{(q)} x_{ij}^{(q)} + \rho^{(q)}\right], \tag{32}$$

$$\begin{pmatrix} L_q\left[\sum_{i=1}^{m} \sum_{j=1}^{n} \left(c_{ij} + \vartheta^* \omega_{ij}\right)^{(q)} x_{ij}^{(q)} + \delta^{(q)}\right] \\ +\, d_q^-\left[\sum_{i=1}^{m} \sum_{j=1}^{n} d_{ij}^{(q)} x_{ij}^{(q)} + \rho^{(q)}\right] \\ -\, d_q^+\left[\sum_{i=1}^{m} \sum_{j=1}^{n} d_{ij}^{(q)} x_{ij}^{(q)} + \rho^{(q)}\right] \end{pmatrix} = L_q^0\left[\sum_{i=1}^{m} \sum_{j=1}^{n} d_{ij}^{(q)} x_{ij}^{(q)} + \rho^{(q)}\right]; \quad L_q^0 = \left(1 + L_q g_q^*\right), \tag{33}$$

$$\begin{split} \left[L_q \sum_{i=1}^{m} \sum_{j=1}^{n} \left(c_{ij} + \vartheta^* \omega_{ij}\right)^{(q)} - L_q^0 \sum_{i=1}^{m} \sum_{j=1}^{n} d_{ij}^{(q)}\right] x_{ij}^{(q)} &+ d_q^-\left[\sum_{i=1}^{m} \sum_{j=1}^{n} d_{ij}^{(q)} x_{ij}^{(q)} + \rho^{(q)}\right] - d_q^+\left[\sum_{i=1}^{m} \sum_{j=1}^{n} d_{ij}^{(q)} x_{ij}^{(q)} + \rho^{(q)}\right] \\ &= \left[L_q^0 \rho^{(q)} - L_q \delta^{(q)}\right], \end{split} \tag{34}$$

$$C_{ij}^{(q)} x_{ij}^{(q)} + d_q^-\left[\sum_{i=1}^{m} \sum_{j=1}^{n} d_{ij}^{(q)} x_{ij}^{(q)} + \rho^{(q)}\right] - d_q^+\left[\sum_{i=1}^{m} \sum_{j=1}^{n} d_{ij}^{(q)} x_{ij}^{(q)} + \rho^{(q)}\right] = G_q, \tag{35}$$

where

$$C_{ij}^{(q)} = \left[L_q \sum_{i=1}^{m} \sum_{j=1}^{n} \left(c_{ij} + \vartheta^* \omega_{ij}\right)^{(q)} - L_q^0 \sum_{i=1}^{m} \sum_{j=1}^{n} d_{ij}^{(q)}\right], \tag{36}$$

$$G_q = \left[L_q^0 \rho^{(q)} - L_q \delta^{(q)}\right]. \tag{37}$$

Following Pal et al. [45], the goal expression in Equation (35) can be linearized as follows. Letting $D_q^- = d_q^-\left[\sum_{i=1}^{m} \sum_{j=1}^{n} d_{ij}^{(q)} x_{ij}^{(q)} + \rho^{(q)}\right]$ and $D_q^+ = d_q^+\left[\sum_{i=1}^{m} \sum_{j=1}^{n} d_{ij}^{(q)} x_{ij}^{(q)} + \rho^{(q)}\right]$, the linear form of the expression in Equation (35) is obtained as:

$$C_{ij}^{(q)} x_{ij}^{(q)} + D_q^- - D_q^+ = G_q, \tag{38}$$

with $D_q^-, D_q^+ \ge 0$ and $D_q^- \times D_q^+ = 0$, since $d_q^-, d_q^+ \ge 0$ and $\sum_{i=1}^{m} \sum_{j=1}^{n} d_{ij}^{(q)} x_{ij}^{(q)} + \rho^{(q)} > 0$. So, minimization of $d_q^-$ means minimization of $D_q^- = d_q^-\left[\sum_{i=1}^{m} \sum_{j=1}^{n} d_{ij}^{(q)} x_{ij}^{(q)} + \rho^{(q)}\right]$, which is also non-linear. Hence, requiring $d_q^- \le 1$ in the solution leads to imposing the following constraint in the model:

$$\frac{D\_q^-}{\left[\sum\_{i=1}^m \sum\_{j=1}^n d\_{ij}^{(q)} x\_{ij}^{(q)} + \rho^{(q)}\right]} \le 1. \tag{39}$$
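The substitution can be checked numerically. The sketch below (written for this reprint, not part of the paper) uses a single route ($m = n = 1$) with hypothetical data and verifies that the linearized goal (38) holds exactly when $D_q^-$ and $D_q^+$ are built from the fractional membership goal:

```python
# Hypothetical single-route data: c_11, ω_11, d_11, δ, ρ, ϑ*, x_11, u_q*, g_q*.
c, omega, d = 2.0, 1.0, 1.0
delta, rho, theta = 1.0, 2.0, 1.0
x = 4.0
u, g = 3.0, 1.0

L = 1.0 / (u - g)            # L_q from Equation (28)
L0 = 1.0 + L * g             # L_q^0 from Equation (33)

Z = ((c + theta * omega) * x + delta) / (d * x + rho)
mu = L * (Z - g)             # attained membership level of the goal
dm, dp = 1.0 - mu, 0.0       # d_q^-, d_q^+ satisfying Equation (19)

den = d * x + rho
Dm, Dp = dm * den, dp * den  # the substitution D_q^- , D_q^+
C = L * (c + theta * omega) - L0 * d  # C_11^(q), Equation (36)
G = L0 * rho - L * delta              # G_q, Equation (37)

residual = C * x + Dm - Dp - G        # Equation (38): should be zero
```

Here $Z = 13/6$, $\mu = 7/12$, $D_q^- = 2.5$, and the residual of Equation (38) vanishes, confirming that (38) is an exact restatement of (35) under the substitution.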

Now, the final FGP model of the parametric (*α*, *β*)-MOFTP in model (20)–(25) becomes:

$$\mathbf{Min}\ AF = \sum_{q=1}^{Q} w_q^-\, D_q^-, \tag{40}$$

**Subject to:**

$$\left[L_q \sum_{i=1}^{m} \sum_{j=1}^{n} \left(c_{ij} + \vartheta^* \omega_{ij}\right)^{(q)} - L_q^0 \sum_{i=1}^{m} \sum_{j=1}^{n} d_{ij}^{(q)}\right] x_{ij}^{(q)} + D_q^- - D_q^+ = \left[L_q^0 \rho^{(q)} - L_q \delta^{(q)}\right], \tag{41}$$

$$\sum_{i=1}^{m} \sum_{j=1}^{n} -d_{ij}^{(q)} x_{ij}^{(q)} + D_q^- \le \rho^{(q)}, \quad q = 1, 2, \dots, Q, \tag{42}$$

$$\sum_{j=1}^{n} x_{ij} \le \lambda\, a_i^L + (1 - \lambda) a_i^U, \quad i = 1, 2, \dots, m, \tag{43}$$

$$\sum_{i=1}^{m} x_{ij} \ge \lambda\, b_j^L + (1 - \lambda) b_j^U, \quad j = 1, 2, \dots, n, \tag{44}$$

$$x_{ij} \ge 0, \ \lambda \in [0, 1], \quad i = 1, 2, \dots, m, \ j = 1, 2, \dots, n, \tag{45}$$

$$D\_q^- \times D\_q^+ = 0,\text{ and } D\_q^-, \ D\_q^+ \ge 0,\ q = 1,2,\dots,Q. \tag{46}$$

Thus, the above FGP model provides the satisfactory solution *x*∗ *ij* for the parametric (*α*, *β*)-MOFTP.

#### **6. The SSFK for the Parametric (*α*, *β*)-MOFTP**

The main area of inquiry is as follows: having solved the parametric (*α*, *β*)-MOFTP, to what extent can its data with respect to *α*, *β* and *ϑ* be changed without invalidating the efficiency of its (*α*, *β*)-Pareto optimal solution? The set of feasible parameters, the solvability set, and the SSFK for parametric (*α*, *β*)-MOFTP are defined as:

**Definition 8.** *The set of feasible parameters for the parametric* (*α*, *β*)*-MOFTP is defined by:*

$$\mathcal{F} = \left\{ (a, b) \in \mathbb{R}^m \times \mathbb{R}^n \ \middle| \ \begin{array}{l} a_i \in L_{\alpha,\beta}\big(\widetilde{a}_i^I\big),\ i = 1, 2, \dots, m;\ b_j \in L_{\alpha,\beta}\big(\widetilde{b}_j^I\big),\ j = 1, 2, \dots, n; \\ \alpha, \beta \in [0, 1];\ \text{and}\ M_{(\alpha,\beta)}(x_{ij}, a, b) \neq \emptyset \end{array} \right\}.$$

**Definition 9.** *The solvability set* M *of the parametric* (*α*, *β*)*-MOFTP is defined by:*

$$\mathcal{M} = \left\{ (\vartheta, a, b) \in \mathbb{R} \times \mathbb{R}^m \times \mathbb{R}^n \ \middle| \ \begin{array}{l} \text{the parametric } (\alpha, \beta)\text{-MOFTP has} \\ \text{an } (\alpha, \beta)\text{-Pareto optimal solution} \end{array} \right\}.$$

**Definition 10.** *Suppose that $x_{ij}^*$ is an $(\alpha, \beta)$-Pareto optimal solution of the parametric $(\alpha, \beta)$-MOFTP; then the SSFK $S_1\big(x_{ij}^*, \alpha, \beta\big)$ corresponding to $x_{ij}^*$ is defined by:*

$$S_1\big(x_{ij}^*, \alpha, \beta\big) = \left\{ (\vartheta, a, b) \in \mathbb{R} \times \mathbb{R}^m \times \mathbb{R}^n \ \middle| \ \begin{array}{l} x_{ij}^* \text{ is an } (\alpha, \beta)\text{-Pareto optimal solution of the} \\ \text{parametric } (\alpha, \beta)\text{-MOFTP} \end{array} \right\}.$$

The SSFK of the parametric (*α*, *β*)-MOFTP is the set of all parameters corresponding to one (*α*, *β*)-Pareto optimal solution [35,36]. It is easy to see that the stability of the parametric (*α*, *β*)-MOFTP model (14)–(17) implies the stability of the parametric FGP model which is defined as follows:

$$\mathbf{Min}\ AF = \sum_{q=1}^{Q} w_q^-\, D_q^-, \tag{47}$$

**Subject to:**

$$\left[L_q \sum_{i=1}^{m} \sum_{j=1}^{n} \left(c_{ij} + \vartheta \omega_{ij}\right)^{(q)} - L_q^0 \sum_{i=1}^{m} \sum_{j=1}^{n} d_{ij}^{(q)}\right] x_{ij}^{(q)} + D_q^- - D_q^+ = \left[L_q^0 \rho^{(q)} - L_q \delta^{(q)}\right], \tag{48}$$

$$\sum_{i=1}^{m} \sum_{j=1}^{n} -d_{ij}^{(q)} x_{ij}^{(q)} + D_q^- \le \rho^{(q)}, \quad q = 1, 2, \dots, Q, \tag{49}$$

$$\sum_{j=1}^{n} x_{ij} \le \lambda\, a_i^L + (1 - \lambda) a_i^U, \quad i = 1, 2, \dots, m, \tag{50}$$

$$\sum_{i=1}^{m} x_{ij} \ge \lambda\, b_j^L + (1 - \lambda) b_j^U, \quad j = 1, 2, \dots, n, \tag{51}$$

$$x_{ij} \ge 0, \ \lambda \in [0, 1], \ \vartheta \in \mathbb{R}, \quad i = 1, 2, \dots, m, \ j = 1, 2, \dots, n, \tag{52}$$

$$D\_q^- \times D\_q^+ = 0,\text{ and } D\_q^-, \ D\_q^+ \ge 0,\ q = 1,2,\dots,Q. \tag{53}$$

#### *6.1. KKT Optimality Conditions for Parametric FGP Model*

The Lagrangian function of parametric FGP model (47)–(53) follows as [36,37]:

$$\begin{split} L ={}& \sum_{q=1}^{Q} w_q^- D_q^- + \xi_q\left(\left[L_q \sum_{i=1}^{m} \sum_{j=1}^{n} \left(c_{ij} + \vartheta \omega_{ij}\right)^{(q)} - L_q^0 \sum_{i=1}^{m} \sum_{j=1}^{n} d_{ij}^{(q)}\right] x_{ij}^{(q)} + D_q^- - D_q^+ - \left[L_q^0 \rho^{(q)} - L_q \delta^{(q)}\right]\right) \\ &+ \upsilon_q\left(\sum_{i=1}^{m} \sum_{j=1}^{n} -d_{ij}^{(q)} x_{ij}^{(q)} + D_q^- - \rho^{(q)}\right) + \tau_i\left(\sum_{j=1}^{n} x_{ij} - \left[\lambda\, a_i^L + (1 - \lambda) a_i^U\right]\right) \\ &+ \eta_j\left(-\sum_{i=1}^{m} x_{ij} + \left[\lambda\, b_j^L + (1 - \lambda) b_j^U\right]\right) + \varphi_{ij}\left(-x_{ij}\right) + \psi_i\left(-a_i^L\right) + \phi_j\left(-b_j^L\right) + \sigma_i\left(-a_i^U\right) + \epsilon_j\left(-b_j^U\right) \\ &+ \zeta_q\left(-D_q^-\right) + \pi_q\left(-D_q^+\right), \end{split} \tag{54}$$

where $\xi, \upsilon, \tau, \eta, \varphi, \psi, \phi, \sigma, \epsilon, \zeta$, and $\pi$ are the Lagrange multipliers. Thus, the KKT optimality conditions [28,36,37,39] have the following form:

$$\frac{\partial L}{\partial x_{ij}} = \xi_q\left[L_q \sum_{i=1}^{m} \sum_{j=1}^{n} \left(c_{ij} + \vartheta \omega_{ij}\right)^{(q)} - L_q^0 \sum_{i=1}^{m} \sum_{j=1}^{n} d_{ij}^{(q)}\right] + \upsilon_q\left[\sum_{i=1}^{m} \sum_{j=1}^{n} -d_{ij}^{(q)}\right] + \tau_i - \eta_j - \varphi_{ij} = 0, \quad i = 1, 2, \dots, m, \ j = 1, 2, \dots, n, \tag{55}$$

$$\frac{\partial L}{\partial a\_i^L} = -\lambda \tau\_i - \psi\_i = 0, \ i = 1, 2, \dots, m,\tag{56}$$

$$\frac{\partial L}{\partial a_i^U} = -(1 - \lambda)\tau_i - \sigma_i = 0, \quad i = 1, 2, \dots, m, \tag{57}$$

$$\frac{\partial L}{\partial b_j^L} = \lambda \eta_j - \phi_j = 0, \quad j = 1, 2, \dots, n, \tag{58}$$

$$\frac{\partial L}{\partial b_j^U} = (1 - \lambda)\eta_j - \epsilon_j = 0, \quad j = 1, 2, \dots, n, \tag{59}$$

$$\frac{\partial L}{\partial D_q^-} = w_q^- + \xi_q + \upsilon_q - \zeta_q = 0, \quad q = 1, 2, \dots, Q, \tag{60}$$

$$\frac{\partial L}{\partial D\_q^+} = -\xi\_q - \pi\_q = 0, \quad q = 1, 2, \dots, Q,\tag{61}$$

$$\left[L_q \sum_{i=1}^{m} \sum_{j=1}^{n} \left(c_{ij} + \vartheta \omega_{ij}\right)^{(q)} - L_q^0 \sum_{i=1}^{m} \sum_{j=1}^{n} d_{ij}^{(q)}\right] x_{ij}^{(q)} + D_q^- - D_q^+ - \left[L_q^0 \rho^{(q)} - L_q \delta^{(q)}\right] = 0, \tag{62}$$

$$\sum\_{i=1}^{m} \sum\_{j=1}^{n} -d\_{ij}^{(q)} x\_{ij}^{(q)} + D\_q^- - \rho^{(q)} \le 0, \; q = 1, 2, \dots, Q, \; \forall i, j \tag{63}$$

$$\sum_{j=1}^{n} x_{ij} - \left[\lambda\, a_i^L + (1 - \lambda) a_i^U\right] \le 0, \quad i = 1, 2, \dots, m, \tag{64}$$

$$\left[\lambda\, b_j^L + (1 - \lambda) b_j^U\right] - \sum_{i=1}^{m} x_{ij} \le 0, \quad j = 1, 2, \dots, n, \tag{65}$$

$$\mathbf{x}\_{i\mathbf{j}} \ge \mathbf{0}, \ i = 1, 2, \dots, m, \ j = 1, 2, \dots, n,\tag{66}$$

$$D_q^-, \ D_q^+ \ge 0, \quad q = 1, 2, \dots, Q, \tag{67}$$

$$\upsilon\_q \left[ \sum\_{i=1}^{m} \sum\_{j=1}^{n} -d\_{ij}^{(q)} \mathbf{x}\_{ij}^{(q)} + D\_q^- - \rho^{(q)} \right] = 0, \ q = 1, 2, \dots, \mathbb{Q}, \ \forall i, j \tag{68}$$

$$\tau_i\left[\sum_{j=1}^{n} x_{ij} - \left(\lambda\, a_i^L + (1 - \lambda) a_i^U\right)\right] = 0, \quad i = 1, 2, \dots, m, \tag{69}$$

$$\eta_j\left[-\sum_{i=1}^{m} x_{ij} + \left(\lambda\, b_j^L + (1 - \lambda) b_j^U\right)\right] = 0, \quad j = 1, 2, \dots, n, \tag{70}$$

$$\varphi_{ij}\left[x_{ij}\right] = 0, \tag{71}$$

$$
\psi\_i \left[ a\_i^L \right] = 0,\tag{72}
$$

$$
\phi\_{\vec{l}} \begin{bmatrix} b\_{\vec{l}}^L \end{bmatrix} = 0,\tag{73}
$$

$$
\sigma\_i \begin{bmatrix} a\_i^{\mathcal{U}} \end{bmatrix} = 0,\tag{74}
$$

$$
\epsilon\_{\dot{\jmath}} \begin{bmatrix} b\_{\dot{\jmath}}^{\mathcal{U}} \end{bmatrix} = 0,\tag{75}
$$

$$\zeta_q\left[D_q^-\right] = 0, \tag{76}$$

$$
\pi\_q \left[ D\_q^+ \right] = 0,\tag{77}
$$

$$
\xi, \upsilon, \tau, \eta, \varphi, \psi, \phi, \sigma, \epsilon, \zeta, \pi \ge 0, \ \text{and } \vartheta \in \mathbb{R}, \tag{78}
$$

where the KKT conditions (55)–(78) are evaluated at $x_{ij}^*$. Solving the system of Equations (55)–(78), the SSFK $S_1\big(x_{ij}^*, \alpha, \beta\big)$ for the parametric IF-MOFTP is obtained.

*6.2. Algorithm for Determination of the SSFK $S_1\big(x_{ij}^*, \alpha, \beta\big)$*

Following the above discussion, the algorithm for obtaining the SSFK $S_1\big(x_{ij}^*, \alpha, \beta\big)$ for the parametric $(\alpha, \beta)$-MOFTP can be described as follows (Algorithms 1 and 2):


**Algorithm 2** Phase II: Determination of the SSFK $S_1\big(x_{ij}^*, \alpha, \beta\big)$


#### **7. Numerical Example**

To demonstrate the proposed algorithm for finding the SSFK, consider the following parametric IF-MOFTP:

$$\mathbf{Max}\left(\begin{array}{c} Z_1(x, \vartheta) = \dfrac{\vartheta x_{11} + (2 + \vartheta) x_{12} + (3 + 2\vartheta) x_{21} + 6 x_{22} + 4}{x_{11} + 3 x_{12} + x_{21} + 2 x_{22} + 2}, \\ Z_2(x, \vartheta) = \dfrac{2 x_{11} + (3 + \vartheta) x_{12} + (4 + 2\vartheta) x_{21} + (5 + \vartheta) x_{22} + 6}{x_{11} + 2 x_{12} + 3 x_{21} + x_{22} + 4} \end{array}\right),$$

**Subject to:**

Supply constraints:

$$
x_{11} + x_{12} \le \widetilde{a}_1^I, \\
x_{21} + x_{22} \le \widetilde{a}_2^I,
$$

Demand constraints:

$$x_{11} + x_{21} \ge \widetilde{b}_1^I, \\ x_{12} + x_{22} \ge \widetilde{b}_2^I,$$

where the membership functions $\mu_{\widetilde{a}_1^I}(x)$, $\mu_{\widetilde{a}_2^I}(x)$, $\mu_{\widetilde{b}_1^I}(x)$, $\mu_{\widetilde{b}_2^I}(x)$ and the non-membership functions $\gamma_{\widetilde{a}_1^I}(x)$, $\gamma_{\widetilde{a}_2^I}(x)$, $\gamma_{\widetilde{b}_1^I}(x)$, $\gamma_{\widetilde{b}_2^I}(x)$ of the supplies and demands are described as follows:

$$\mu_{\widetilde{a}_1^I}(x) = \begin{cases} \frac{x - 140}{20}, & 140 \le x \le 160, \\ \frac{180 - x}{20}, & 160 \le x \le 180, \\ 0, & \text{otherwise}, \end{cases} \qquad \gamma_{\widetilde{a}_1^I}(x) = \begin{cases} \frac{160 - x}{30}, & 130 \le x \le 160, \\ \frac{x - 160}{40}, & 160 \le x \le 200, \\ 1, & \text{otherwise}, \end{cases}$$

$$\mu_{\widetilde{a}_2^I}(x) = \begin{cases} \frac{x - 220}{20}, & 220 \le x \le 240, \\ \frac{250 - x}{10}, & 240 \le x \le 250, \\ 0, & \text{otherwise}, \end{cases} \qquad \gamma_{\widetilde{a}_2^I}(x) = \begin{cases} \frac{240 - x}{30}, & 210 \le x \le 240, \\ \frac{x - 240}{30}, & 240 \le x \le 270, \\ 1, & \text{otherwise}, \end{cases}$$

$$\mu_{\widetilde{b}_1^I}(x) = \begin{cases} \frac{x - 40}{10}, & 40 \le x \le 50, \\ \frac{60 - x}{10}, & 50 \le x \le 60, \\ 0, & \text{otherwise}, \end{cases} \qquad \gamma_{\widetilde{b}_1^I}(x) = \begin{cases} \frac{50 - x}{20}, & 30 \le x \le 50, \\ \frac{x - 50}{30}, & 50 \le x \le 80, \\ 1, & \text{otherwise}, \end{cases}$$

$$\mu_{\widetilde{b}_2^I}(x) = \begin{cases} \frac{x - 310}{10}, & 310 \le x \le 320, \\ \frac{350 - x}{30}, & 320 \le x \le 350, \\ 0, & \text{otherwise}, \end{cases} \qquad \gamma_{\widetilde{b}_2^I}(x) = \begin{cases} \frac{320 - x}{20}, & 300 \le x \le 320, \\ \frac{x - 320}{60}, & 320 \le x \le 380, \\ 1, & \text{otherwise}. \end{cases}$$

**Phase I:** Finding an $(\alpha, \beta)$-Pareto optimal solution of the parametric IF-MOFTP. For the desired values $\alpha = 0.6$ and $\beta = 0.2$, applying the concept of the $(\alpha, \beta)$-cut of an IFN, we formulate the $(\alpha, \beta)$-MOFTP at $\vartheta = \vartheta^* = 3$.

$$\mathbf{Max} \begin{pmatrix} Z\_1(\mathbf{x}) = \frac{3\mathbf{x}\_{11} + 5\mathbf{x}\_{12} + 9\mathbf{x}\_{21} + 6\mathbf{x}\_{22} + 8}{\mathbf{x}\_{11} + 3\mathbf{x}\_{12} + \mathbf{x}\_{21} + 2\mathbf{x}\_{22} + 2} \\ Z\_2(\mathbf{x}) = \frac{2\mathbf{x}\_{11} + 6\mathbf{x}\_{12} + 10\mathbf{x}\_{21} + 8\mathbf{x}\_{22} + 6}{\mathbf{x}\_{11} + 2\mathbf{x}\_{12} + 3\mathbf{x}\_{21} + \mathbf{x}\_{22} + 4} \end{pmatrix} \prime$$

**Subject to:** Supply constraints:

$$
x_{11} + x_{12} \le [154,\ 168], \\
x_{21} + x_{22} \le [234,\ 244].
$$

Demand constraints:

$$x_{11} + x_{21} \ge [46,\ 54], \\ x_{12} + x_{22} \ge [316,\ 332].$$
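These intervals follow from the $(\alpha, \beta)$-cut formula of Section 3 applied to the supply and demand TIFNs of this example. The short check below is not part of the paper; the TIFN tuples are read off the membership and non-membership functions above and should be treated as a reconstruction:

```python
def cut(a1, a2, a3, ab1, ab3, alpha=0.6, beta=0.2):
    # (α, β)-cut [A^L, A^U] of the TIFN (a1, a2, a3; a̅1, a2, a̅3).
    lower = max(a1 + alpha * (a2 - a1), a2 - beta * (a2 - ab1))
    upper = min(a3 - alpha * (a3 - a2), a2 + beta * (ab3 - a2))
    return lower, upper

supplies = [(140, 160, 180, 130, 200), (220, 240, 250, 210, 270)]
demands = [(40, 50, 60, 30, 80), (310, 320, 350, 300, 380)]

intervals = [cut(*t) for t in supplies + demands]
# intervals == [(154.0, 168.0), (234.0, 244.0), (46.0, 54.0), (316.0, 332.0)]
```

The four intervals coincide with the crisp supply and demand bounds stated above.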

Based on the concept of a convex linear combination applied to the constraints, we then obtain the MOFTP:

$$\mathbf{Max} \begin{pmatrix} Z\_1(\mathbf{x}) = \frac{3x\_{11} + 5x\_{12} + 9x\_{21} + 6x\_{22} + 8}{x\_{11} + 3x\_{12} + x\_{21} + 2x\_{22} + 2} \\ Z\_2(\mathbf{x}) = \frac{2x\_{11} + 6x\_{12} + 10x\_{21} + 8x\_{22} + 6}{x\_{11} + 2x\_{12} + 3x\_{21} + x\_{22} + 4} \end{pmatrix} \prime$$

**Subject to:**

$$\mathbf{x}\_{11} + \mathbf{x}\_{12} \le 165.2, \; \mathbf{x}\_{21} + \mathbf{x}\_{22} \le 240, \; \mathbf{x}\_{11} + \mathbf{x}\_{21} \ge 51.6, \; \mathbf{x}\_{12} + \mathbf{x}\_{22} \ge 328.8.$$
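As a worked check, these crisp right-hand sides are exactly the convex combinations of the interval endpoints, with the weights that reappear in the parametric constraints later in this example:

$$\begin{aligned} 0.2(154) + 0.8(168) &= 165.2, & 0.4(234) + 0.6(244) &= 240, \\ 0.3(46) + 0.7(54) &= 51.6, & 0.2(316) + 0.8(332) &= 328.8. \end{aligned}$$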

An FGP approach is utilized to solve the MOFTP according to the model of Equations (40)–(46). First, the coefficients of the linearized membership goals are obtained in Table 1.

**Table 1.** The coefficients of the linearized membership goals (*cij*)*<sup>T</sup>* and *Gij*.


$$\mathbf{Min} \ AF = 3.0665D\_1^- + 0.8386D\_2^-,$$

**Subject to:**

$$0.682\mathbf{x}\_{11} - 10.22\mathbf{x}\_{12} + 19.081\mathbf{x}\_{21} + 1.364\mathbf{x}\_{22} + D\_1^- - D\_1^+ = -7.497,$$

$$-2.8628\mathbf{x}\_{11} - 4.048\mathbf{x}\_{12} - 5.234\mathbf{x}\_{21} + 2.169\mathbf{x}\_{22} + D\_2^- - D\_2^+ = 13.128,$$

$$-\mathbf{x}\_{11} - 3\mathbf{x}\_{12} - \mathbf{x}\_{21} - 2\mathbf{x}\_{22} + D\_{1}^{-} \le 2,$$

$$-\mathbf{x}\_{11} - 2\mathbf{x}\_{12} - 3\mathbf{x}\_{21} - \mathbf{x}\_{22} + D\_{2}^{-} \le 4,$$

$$\mathbf{x}\_{11} + \mathbf{x}\_{12} \le 165.2,$$

$$\mathbf{x}\_{21} + \mathbf{x}\_{22} \le 240,$$

$$\mathbf{x}\_{11} + \mathbf{x}\_{21} \ge 51.6,$$

$$\mathbf{x}\_{12} + \mathbf{x}\_{22} \ge 328.8,$$

$$\mathbf{x}\_{11}, \mathbf{x}\_{12}, \mathbf{x}\_{21}, \mathbf{x}\_{22}, \ D\_{1}^{-}, \ D\_{1}^{+}, D\_{2}^{-}, \ D\_{2}^{+} \ge 0.$$

Using Lingo programming, the (*α*, *β*)-Pareto optimal solution of the parametric IF-MOFTP is obtained at $(x\_{11}^\*, x\_{12}^\*, x\_{21}^\*, x\_{22}^\*, D\_1^-, D\_1^+, D\_2^-, D\_2^+) = (0, 165.88, 76.39, 163.61, 0, 0, 726.78, 0)$.
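The Lingo solve can be reproduced with any LP solver. Below is a minimal sketch using SciPy's `linprog` (our own re-statement of the crisp goal program above; the variable order and the coefficient rounding are ours, so the last decimals may differ slightly from the reported point):

```python
# Sketch: re-solving the crisp FGP model with SciPy's linprog instead of Lingo.
# Variable order (our choice): x11, x12, x21, x22, D1-, D1+, D2-, D2+ (all >= 0).
import numpy as np
from scipy.optimize import linprog

c = [0, 0, 0, 0, 3.0665, 0, 0.8386, 0]           # Min 3.0665*D1- + 0.8386*D2-

A_eq = [[ 0.682,  -10.22, 19.081, 1.364, 1, -1, 0,  0],   # first membership goal
        [-2.8628, -4.048, -5.234, 2.169, 0,  0, 1, -1]]   # second membership goal
b_eq = [-7.497, 13.128]

A_ub = [[-1, -3, -1, -2, 1, 0, 0, 0],   # -x11 - 3x12 -  x21 - 2x22 + D1- <= 2
        [-1, -2, -3, -1, 0, 0, 1, 0],   # -x11 - 2x12 - 3x21 -  x22 + D2- <= 4
        [ 1,  1,  0,  0, 0, 0, 0, 0],   #  x11 + x12 <= 165.2
        [ 0,  0,  1,  1, 0, 0, 0, 0],   #  x21 + x22 <= 240
        [-1,  0, -1,  0, 0, 0, 0, 0],   #  x11 + x21 >= 51.6
        [ 0, -1,  0, -1, 0, 0, 0, 0]]   #  x12 + x22 >= 328.8
b_ub = [2, 4, 165.2, 240, -51.6, -328.8]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq)  # bounds default to x >= 0
print(res.status, np.round(res.x, 2))
```

With these coefficients the optimizer lands on the same basis as the reported solution (x11 = 0, x21 ≈ 76.39, x22 ≈ 163.61, D2⁻ ≈ 726.78), with D1⁻ = 0.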

**Phase II:** Determination of the SSFK *S*1(*x*∗, *α*, *β*).

To determine the SSFK *S*1(*x*∗, *α*, *β*) of the parametric IF-MOFTP, the coefficients of the linearized membership goals in the parametric form are recalculated in Table 2.


Therefore, the stability of parametric IF-MOFTP implies the stability of the parametric FGP model which is defined as:

$$\mathbf{Min} \ AF = 3.067D\_1^- + 0.839D\_2^-, \tag{2}$$

**Subject to:**

$$(-8.518 + 3.067\vartheta)x\_{11} + (-19.42 + 3.067\vartheta)x\_{12} + (0.682 + 6.133\vartheta)x\_{21} + 1.364x\_{22} + D\_1^- - D\_1^+ = -7.497,$$

$$-2.8628x\_{11} + (-6.564 + 0.839\vartheta)x\_{12} + (-10.266 + 1.677\vartheta)x\_{21} + (-0.347 + 0.839\vartheta)x\_{22} + D\_2^- - D\_2^+ = 13.128,$$

$$-x\_{11} - 3x\_{12} - x\_{21} - 2x\_{22} + D\_1^- \le 2,$$

$$-x\_{11} - 2x\_{12} - 3x\_{21} - x\_{22} + D\_2^- \le 4,$$

$$x\_{11} + x\_{12} \le 0.2a\_1^\mathcal{L} + 0.8a\_1^\mathcal{U},$$

$$x\_{21} + x\_{22} \le 0.4a\_2^\mathcal{L} + 0.6a\_2^\mathcal{U},$$

$$x\_{11} + x\_{21} \ge 0.3b\_1^\mathcal{L} + 0.7b\_1^\mathcal{U},$$

$$x\_{12} + x\_{22} \ge 0.2b\_2^\mathcal{L} + 0.8b\_2^\mathcal{U},$$

$$x\_{11}, x\_{12}, x\_{21}, x\_{22}, a\_1^\mathcal{L}, a\_1^\mathcal{U}, a\_2^\mathcal{L}, a\_2^\mathcal{U}, b\_1^\mathcal{L}, b\_1^\mathcal{U}, b\_2^\mathcal{L}, b\_2^\mathcal{U} \ge 0,$$

$$D\_1^-, D\_1^+, D\_2^-, D\_2^+ \ge 0; \ \vartheta \in R.$$

The Lagrangean function of the above parametric FGP model follows as:

$$
\begin{aligned}
L &= 3.067D\_1^- + 0.839D\_2^- \\
&\quad + \xi\_1\big[(-8.518 + 3.067\vartheta)x\_{11} + (-19.42 + 3.067\vartheta)x\_{12} + (0.682 + 6.133\vartheta)x\_{21} + 1.364x\_{22} + D\_1^- - D\_1^+ + 7.497\big] \\
&\quad + \xi\_2\big[-2.8628x\_{11} + (-6.564 + 0.839\vartheta)x\_{12} + (-10.266 + 1.677\vartheta)x\_{21} + (-0.347 + 0.839\vartheta)x\_{22} + D\_2^- - D\_2^+ - 13.128\big] \\
&\quad + \upsilon\_1\big[-x\_{11} - 3x\_{12} - x\_{21} - 2x\_{22} + D\_1^- - 2\big] + \upsilon\_2\big[-x\_{11} - 2x\_{12} - 3x\_{21} - x\_{22} + D\_2^- - 4\big] \\
&\quad + \tau\_1\big[x\_{11} + x\_{12} - 0.2a\_1^\mathcal{L} - 0.8a\_1^\mathcal{U}\big] + \tau\_2\big[x\_{21} + x\_{22} - 0.4a\_2^\mathcal{L} - 0.6a\_2^\mathcal{U}\big] \\
&\quad + \eta\_1\big[-x\_{11} - x\_{21} + 0.3b\_1^\mathcal{L} + 0.7b\_1^\mathcal{U}\big] + \eta\_2\big[-x\_{12} - x\_{22} + 0.2b\_2^\mathcal{L} + 0.8b\_2^\mathcal{U}\big] \\
&\quad + \phi\_1[-x\_{11}] + \phi\_2[-x\_{12}] + \phi\_3[-x\_{21}] + \phi\_4[-x\_{22}] \\
&\quad + \psi\_1\big[-a\_1^\mathcal{L}\big] + \psi\_2\big[-a\_2^\mathcal{L}\big] + \omega\_1\big[-a\_1^\mathcal{U}\big] + \omega\_2\big[-a\_2^\mathcal{U}\big] + \varphi\_1\big[-b\_1^\mathcal{L}\big] + \varphi\_2\big[-b\_2^\mathcal{L}\big] + \rho\_1\big[-b\_1^\mathcal{U}\big] + \rho\_2\big[-b\_2^\mathcal{U}\big] \\
&\quad + \zeta\_1\big[-D\_1^-\big] + \zeta\_2\big[-D\_2^-\big] + \pi\_1\big[-D\_1^+\big] + \pi\_2\big[-D\_2^+\big],
\end{aligned}
$$

where *ϑ*, *ξ*1, *ξ*2 ∈ *R*; *υ*1, *υ*2, *τ*1, *τ*2, *η*1, *η*2, *ϕ*1, *ϕ*2, *ϕ*3, *ϕ*4, *ψ*1, *ψ*2, *φ*1, *φ*2, *ω*1, *ω*2, *ρ*1, *ρ*2 ≥ 0; and *ζ*1, *ζ*2, *π*1, *π*2 ≥ 0 are the Lagrange multipliers. Therefore, the KKT optimality conditions follow as:

$$
\begin{aligned}
\frac{\partial L}{\partial x\_{11}} &= (-8.518 + 3.067\vartheta)\xi\_1 - 2.863\xi\_2 - \upsilon\_1 - \upsilon\_2 + \tau\_1 - \eta\_1 - \phi\_1 = 0, \\
\frac{\partial L}{\partial x\_{12}} &= (-19.42 + 3.067\vartheta)\xi\_1 + (-6.564 + 0.839\vartheta)\xi\_2 - 3\upsilon\_1 - 2\upsilon\_2 + \tau\_1 - \eta\_2 - \phi\_2 = 0, \\
\frac{\partial L}{\partial x\_{21}} &= (0.682 + 6.133\vartheta)\xi\_1 + (-10.266 + 1.677\vartheta)\xi\_2 - \upsilon\_1 - 3\upsilon\_2 + \tau\_2 - \eta\_1 - \phi\_3 = 0, \\
\frac{\partial L}{\partial x\_{22}} &= 1.364\xi\_1 + (-0.347 + 0.839\vartheta)\xi\_2 - 2\upsilon\_1 - \upsilon\_2 + \tau\_2 - \eta\_2 - \phi\_4 = 0, \\
\frac{\partial L}{\partial a\_1^\mathcal{L}} &= -0.2\tau\_1 - \psi\_1 = 0, \qquad \frac{\partial L}{\partial a\_1^\mathcal{U}} = -0.8\tau\_1 - \omega\_1 = 0, \\
\frac{\partial L}{\partial a\_2^\mathcal{L}} &= -0.4\tau\_2 - \psi\_2 = 0, \qquad \frac{\partial L}{\partial a\_2^\mathcal{U}} = -0.6\tau\_2 - \omega\_2 = 0, \\
\frac{\partial L}{\partial b\_1^\mathcal{L}} &= 0.3\eta\_1 - \varphi\_1 = 0, \qquad \frac{\partial L}{\partial b\_1^\mathcal{U}} = 0.7\eta\_1 - \rho\_1 = 0, \\
\frac{\partial L}{\partial b\_2^\mathcal{L}} &= 0.2\eta\_2 - \varphi\_2 = 0, \qquad \frac{\partial L}{\partial b\_2^\mathcal{U}} = 0.8\eta\_2 - \rho\_2 = 0, \\
\frac{\partial L}{\partial D\_1^-} &= 3.067 + \xi\_1 + \upsilon\_1 - \zeta\_1 = 0, \qquad \frac{\partial L}{\partial D\_1^+} = -\xi\_1 - \pi\_1 = 0, \\
\frac{\partial L}{\partial D\_2^-} &= 0.839 + \xi\_2 + \upsilon\_2 - \zeta\_2 = 0,
\end{aligned}
$$

$$\frac{\partial L}{\partial D\_2^+} = -\xi\_2 - \pi\_2 = 0,$$

together with the complementary slackness conditions

$$
\begin{aligned}
&\upsilon\_1\big[-x\_{11} - 3x\_{12} - x\_{21} - 2x\_{22} + D\_1^- - 2\big] = 0, \ \text{i.e., } \upsilon\_1 = 0, \\
&\upsilon\_2\big[-x\_{11} - 2x\_{12} - 3x\_{21} - x\_{22} + D\_2^- - 4\big] = 0, \ \text{i.e., } \upsilon\_2 = 0, \\
&\tau\_1\big[x\_{11} + x\_{12} - 0.2a\_1^\mathcal{L} - 0.8a\_1^\mathcal{U}\big] = 0, \ \text{i.e., } \tau\_1 = 0, \qquad \tau\_2\big[x\_{21} + x\_{22} - 0.4a\_2^\mathcal{L} - 0.6a\_2^\mathcal{U}\big] = 0, \ \text{i.e., } \tau\_2 \ge 0, \\
&\eta\_1\big[-x\_{11} - x\_{21} + 0.3b\_1^\mathcal{L} + 0.7b\_1^\mathcal{U}\big] = 0, \ \text{i.e., } \eta\_1 = 0, \qquad \eta\_2\big[-x\_{12} - x\_{22} + 0.2b\_2^\mathcal{L} + 0.8b\_2^\mathcal{U}\big] = 0, \ \text{i.e., } \eta\_2 = 0, \\
&\phi\_1[-x\_{11}] = 0, \ \text{i.e., } \phi\_1 \ge 0, \qquad \phi\_2[-x\_{12}] = 0, \ \text{i.e., } \phi\_2 = 0, \\
&\phi\_3[-x\_{21}] = 0, \ \text{i.e., } \phi\_3 = 0, \qquad \phi\_4[-x\_{22}] = 0, \ \text{i.e., } \phi\_4 = 0, \\
&\psi\_1\big[-a\_1^\mathcal{L}\big] = 0, \ \text{i.e., } \psi\_1 = 0, \qquad \psi\_2\big[-a\_2^\mathcal{L}\big] = 0, \ \text{i.e., } \psi\_2 = 0, \\
&\varphi\_1\big[-b\_1^\mathcal{L}\big] = 0, \ \text{i.e., } \varphi\_1 = 0, \qquad \varphi\_2\big[-b\_2^\mathcal{L}\big] = 0, \ \text{i.e., } \varphi\_2 = 0, \\
&\omega\_1\big[-a\_1^\mathcal{U}\big] = 0, \ \text{i.e., } \omega\_1 = 0, \qquad \omega\_2\big[-a\_2^\mathcal{U}\big] = 0, \ \text{i.e., } \omega\_2 = 0, \\
&\rho\_1\big[-b\_1^\mathcal{U}\big] = 0, \ \text{i.e., } \rho\_1 = 0, \qquad \rho\_2\big[-b\_2^\mathcal{U}\big] = 0, \ \text{i.e., } \rho\_2 = 0, \\
&\zeta\_1\big[-D\_1^-\big] = 0, \ \text{i.e., } \zeta\_1 \ge 0, \qquad \zeta\_2\big[-D\_2^-\big] = 0, \ \text{i.e., } \zeta\_2 = 0, \\
&\pi\_1\big[-D\_1^+\big] = 0, \ \text{i.e., } \pi\_1 \ge 0, \qquad \pi\_2\big[-D\_2^+\big] = 0, \ \text{i.e., } \pi\_2 \ge 0,
\end{aligned}
$$

and the primal feasibility conditions

$$
\begin{aligned}
&-x\_{11} - 3x\_{12} - x\_{21} - 2x\_{22} + D\_1^- \le 2, \qquad -x\_{11} - 2x\_{12} - 3x\_{21} - x\_{22} + D\_2^- \le 4, \\
&x\_{11} + x\_{12} \le 0.2a\_1^\mathcal{L} + 0.8a\_1^\mathcal{U}, \qquad x\_{21} + x\_{22} \le 0.4a\_2^\mathcal{L} + 0.6a\_2^\mathcal{U}, \\
&x\_{11} + x\_{21} \ge 0.3b\_1^\mathcal{L} + 0.7b\_1^\mathcal{U}, \qquad x\_{12} + x\_{22} \ge 0.2b\_2^\mathcal{L} + 0.8b\_2^\mathcal{U}, \\
&x\_{11}, x\_{12}, x\_{21}, x\_{22}, a\_1^\mathcal{L}, a\_1^\mathcal{U}, a\_2^\mathcal{L}, a\_2^\mathcal{U}, b\_1^\mathcal{L}, b\_1^\mathcal{U}, b\_2^\mathcal{L}, b\_2^\mathcal{U}, D\_1^-, D\_1^+, D\_2^-, D\_2^+ \ge 0; \qquad \vartheta \in R.
\end{aligned}
$$

Solving the above system of equations, we get *υ*1 = *υ*2 = *τ*1 = *τ*2 = *η*1 = *η*2 = *ϕ*2 = *ϕ*3 = *ϕ*4 = *ψ*1 = *ψ*2 = *φ*1 = *φ*2 = *ω*1 = *ω*2 = *ρ*1 = *ρ*2 = *ζ*2 = 0, and *ϕ*1, *ζ*1, *π*1, *π*2 ≥ 0.

Also, *ξ*2 = −*π*2 = −0.839 and *ξ*1 = −*π*1. The above system of equations reduces to the following:

$$(-8.518 + 3.067\vartheta)\xi\_1 - 2.863\xi\_2 - \phi\_1 = 0,$$

$$(-19.42 + 3.067\vartheta)\xi\_1 + (-6.564 + 0.839\vartheta)\xi\_2 = 0,$$

$$(0.682 + 6.133\vartheta)\xi\_1 + (-10.266 + 1.677\vartheta)\xi\_2 = 0,$$

$$1.364\xi\_1 + (-0.347 + 0.839\vartheta)\xi\_2 = 0.$$

Therefore, the SSFK for the parametric IF-MOFTP is given by:

$$S\_1(x^\*, \alpha, \beta) = \left\{ (\vartheta, \alpha, \beta) \ \middle| \ \begin{array}{l} x^\* = (0, 165.88, 76.39, 163.61, 0, 0, 726.78, 0), \\ 12.948\,\xi\_1 + (-1.41 + 6.133\,\xi\_1)\vartheta + 5.799 - \phi\_1 = 0, \\ \zeta\_1 = \xi\_1 + 3.067; \ \xi\_1 = -\pi\_1; \ \xi\_2 = -\pi\_2 = -0.839, \\ \zeta\_1, \phi\_1, \pi\_1, \pi\_2 \ge 0; \ \xi\_1, \xi\_2 \in R; \ \vartheta \in R; \ \alpha, \beta \in [0, 1] \end{array} \right\}$$

After applying the KKT optimality conditions, we obtain a large system of algebraic equations. By reducing and solving this algebraic system, the SSFK is obtained. The SSFK specifies the values of, and the relations among, the parameters that generate the same solution of the PIF-MOFTP, as indicated by the set *S*1. To test the obtained results of the SSFK, different values of *α*, *β* ∈ [0, 1] can be taken, and the solution remains the same.

#### **8. Conclusions**

The SSFK for the PIF-MOFTP was investigated in this study. We also characterized definitions of the set of feasible parameters and the solvability set for the PIF-MOFTP. First, the (*α*, *β*)-cut methodology was applied to obtain the parametric model. Moreover, the FGP approach was applied to find an (*α*, *β*)-Pareto optimal solution for the PIF-MOFTP, which had not been published in the literature to date. To obtain the SSFK for the novel PIF-MOFTP model, the KKT necessary optimality conditions were applied, yielding a large system of algebraic equations; by reducing and solving this system, the SSFK was obtained. A detailed procedure that determines the SSFK for the PIF-MOFTP was exhibited. A numerical example was given to ensure the applicability and efficiency of the proposed PIF-MOFTP.

The major limitation of the proposed PIF-MOFTP is that a specific (*α*, *β*)-level is adopted in the proposed methods to represent the DMs' confidence in the subjectively specified parameter values. For simplicity, the (*α*, *β*)-level is assumed to be the same for all supply and demand parameters in the solution process. This may be a limitation in practical applications: the (*α*, *β*)-levels for various DMs' subjective uncertainties could differ in the real world, owing to the DMs' different readings of the real transportation data. This will be addressed in future studies.

Several remaining areas of research in the topic of parametric MOFTP include the following:


**Author Contributions:** Conceptualization, M.A.E.S., M.A.E.-S. and F.A.F.; Methodology, M.A.E.S., F.A.F. and M.A.E.; Investigation, M.A.E.S., M.A.E.-S., A.F.F. and M.A.E.; writing—review and editing, M.A.E.S., M.A.E.-S., A.F.F., M.A.E. and F.A.F. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research received no external funding.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Not applicable.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


### *Article* **Hermite–Jensen–Mercer-Type Inequalities via Caputo–Fabrizio Fractional Integral for** *h***-Convex Function**

**Miguel Vivas-Cortez 1,\*,†, Muhammad Shoaib Saleem 2,†, Sana Sajid 2,†, Muhammad Sajid Zahoor 2,† and Artion Kashuri 3,†**


**Abstract:** Integral inequalities involving many fractional integral operators are used to solve various fractional differential equations. In the present paper, we generalize the Hermite–Jensen–Mercer-type inequalities for an *h*-convex function via the Caputo–Fabrizio fractional integral. We develop some novel Caputo–Fabrizio fractional integral inequalities. We also present Caputo–Fabrizio fractional integral identities for differentiable mappings, and these will be used to give estimates for some fractional Hermite–Jensen–Mercer-type inequalities. Some familiar results are recaptured as special cases of our results.

**Keywords:** convex function; *h*-convex function; Hermite–Hadamard inequality; Caputo–Fabrizio fractional integral; Jensen inequality; Jensen–Mercer inequality

#### **1. Introduction**

Fractional calculus has undergone rapid development in both applied and pure mathematics because of its wide use in image processing, physics, machine learning, networking, and other branches. For more on fractional calculus identities, see [1–3]. The fractional derivative has received considerable attention among experts from different branches of science. Most applied problems cannot be modeled by classical derivatives; the complications in real-world problems are addressed by fractional differential equations. Well-known fractional integrals include the Riemann–Liouville [4–6], Hadamard [6,7], Caputo–Fabrizio [8], and Katugampola [6] integrals.

In this paper, we restrict ourselves to the Caputo–Fabrizio fractional integral operator. In the current direction of fractional calculus, numerous analysts are characterizing new operators by various methods to cover a wider range of real-world problems. These operators generally differ from one another in the singularity and locality of their kernels. The main aspect that distinguishes the Caputo–Fabrizio operator from the others is its non-singular kernel, which makes it useful for finding exact solutions to various problems.

For convex functions, the Hermite–Hadamard inequality is a famous inequality that has been proved in many ways and has several extensions and generalizations in the literature (see [9–19]). The Hermite–Hadamard inequality for the convex function is defined as:

Let *ξ* : *I* ⊆ R → R be a convex function. Then

$$
\xi\left( \frac{\upsilon + \mu}{2} \right) \le \frac{1}{\mu - \upsilon} \int\_{\upsilon}^{\mu} \xi(\chi) d\chi \le \frac{\xi(\upsilon) + \xi(\mu)}{2}
$$

**Citation:** Vivas-Cortez, M.; Saleem, M.S.; Sajid, S.; Zahoor, M.S.; Kashuri, A. Hermite–Jensen–Mercer-Type Inequalities via Caputo–Fabrizio Fractional Integral for *h*-Convex Function. *Fractal Fract.* **2021**, *5*, 269. https://doi.org/10.3390/fractalfract 5040269

Academic Editors: Savin Treanţă and Carlo Cattani

Received: 3 September 2021 Accepted: 25 November 2021 Published: 10 December 2021

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

holds ∀ *υ*, *μ* ∈ *I* and *υ* < *μ*.
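A quick numeric sanity check of this chain, with the sample convex function ξ(x) = x² on [1, 4] (our choice):

```python
# Sanity check of the Hermite-Hadamard inequality for the convex
# function xi(x) = x**2 on [upsilon, mu] = [1, 4] (our sample choice).
upsilon, mu = 1.0, 4.0
xi = lambda x: x ** 2

# midpoint-rule approximation of the integral mean (1/(mu-v)) * int xi
n = 100000
step = (mu - upsilon) / n
mean_value = sum(xi(upsilon + (i + 0.5) * step) for i in range(n)) * step / (mu - upsilon)

left = xi((upsilon + mu) / 2)        # 6.25
right = (xi(upsilon) + xi(mu)) / 2   # 8.5
print(left, mean_value, right)       # the exact integral mean is 21/3 = 7
```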

The generalization of the Hermite–Hadamard inequality for *h*-convex functions is given as follows (see [20]):

Let *ξ* : *I* ⊆ R → R be an *h*-convex function. Then

$$\frac{1}{2h\left(\frac{1}{2}\right)}\xi\left(\frac{\upsilon+\mu}{2}\right) \le \frac{1}{\mu-\upsilon} \int\_{\upsilon}^{\mu} \xi(\chi)d\chi \le \left[\xi(\upsilon) + \xi(\mu)\right] \int\_{0}^{1} h(\sigma)d\sigma.$$

holds ∀ *υ*, *μ* ∈ *I* and *υ* < *μ*.
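For a numeric check, take h(σ) = √σ (our choice): since h(σ) ≥ σ on [0, 1], the nonnegative convex function ξ(x) = x² is *h*-convex for this *h*:

```python
# Numeric check of the h-convex Hermite-Hadamard inequality with
# h(sigma) = sqrt(sigma); xi(x) = x**2 is nonnegative convex, hence h-convex.
import math

upsilon, mu = 1.0, 4.0
xi = lambda x: x ** 2
h = lambda s: math.sqrt(s)

n = 100000
step = (mu - upsilon) / n
mean_value = sum(xi(upsilon + (i + 0.5) * step) for i in range(n)) * step / (mu - upsilon)
int_h = sum(h((i + 0.5) / n) for i in range(n)) / n    # int_0^1 sqrt(s) ds = 2/3

left = xi((upsilon + mu) / 2) / (2 * h(0.5))           # ~ 4.42
right = (xi(upsilon) + xi(mu)) * int_h                 # ~ 11.33
print(left, mean_value, right)
```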

In the literature, some more interesting extensions and refinements of the Hermite– Hadamard integral inequality with the help of *h*-convex functions have been widely studied (see [21–26]).

In the literature, several interesting studies of the Jensen inequality are available. In 2003, Mercer proved a variant of Jensen's inequality for convex functions [27]. Later, in 2006, Matković et al. presented the Jensen–Mercer inequality for operators, with applications (see [28]).

Vivas-Cortez et al. presented the following variant of the Jensen–Mercer inequality (see [29]).

**Theorem 1** ([29])**.** *Let ξ be an h-convex function defined on the interval* [*υ*, *μ*]*. Then*

$$\left(\xi\left(\upsilon+\mu-\sum\_{i=1}^{n}\chi\_{i}\mathbf{x}\_{i}\right)\right)\leq M[\xi(\upsilon)+\xi(\mu)]-\sum\_{i=1}^{n}h(\chi\_{i})\xi(\mathbf{x}\_{i}),\tag{1}$$

*holds* ∀ *x<sub>i</sub>* ∈ [*υ*, *μ*] *and χ<sub>i</sub>* ∈ [0, 1] *with* ∑<sup>*n*</sup><sub>*i*=1</sub> *χ<sub>i</sub>* = 1*, where M = sup* {*h*(*σ*) : *σ* ∈ (0, 1)}*.*
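A numeric check of (1) with h(χ) = χ, so that M = 1 and the bound reduces to the classical Mercer variant (the data values are our choices):

```python
# Check of inequality (1) with h(chi) = chi, hence M = sup h = 1:
# xi(v + mu - sum(chi_i x_i)) <= [xi(v) + xi(mu)] - sum(chi_i xi(x_i)).
upsilon, mu = 0.0, 10.0
xi = lambda x: x ** 2
xs = [2.0, 5.0, 7.0]
chis = [0.3, 0.4, 0.3]    # weights summing to 1

lhs = xi(upsilon + mu - sum(c * x for c, x in zip(chis, xs)))
rhs = 1 * (xi(upsilon) + xi(mu)) - sum(c * xi(x) for c, x in zip(chis, xs))
print(lhs, rhs)   # about 28.09 <= 74.1
```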

In 2019, the authors of [30] established Hermite–Hadamard–Mercer-like inequalities for fractional integrals. In 2020, Butt et al. presented Hermite–Jensen–Mercer-type inequalities for conformable fractional integrals [31]. Furthermore, they developed Hermite–Jensen–Mercer-like inequalities for *k*-fractional integrals, generalized fractional integrals, and *ψ*-Riemann–Liouville *k*-fractional integrals (see [32–34]). In 2020, several researchers presented Hermite–Jensen–Mercer-like inequalities in the settings of the *k*-Caputo and Caputo fractional derivatives (see [35,36]). In the same year, the authors of [37] developed weighted Hermite–Hadamard–Mercer-type inequalities for convex functions, and Chu et al. presented new fractional estimates for Hermite–Hadamard–Mercer inequalities (see [38]).

The present paper is organized as follows. First, we give definitions and preliminary material associated with the paper. In Section 2, we present Hermite–Jensen–Mercer-type inequalities for the Caputo–Fabrizio fractional integral operator with the help of an *h*-convex function. In Section 3, we develop new lemmas and then present some results for an *h*-convex function via the Caputo–Fabrizio fractional integral operator. In Section 4, some more integral inequalities for *h*-convex functions are established making use of the Hölder–İşcan integral inequality and an improved power-mean integral inequality. Finally, we give concluding remarks.

Throughout the paper, we need the following assumption:

Let *ξ* : *I* = [*υ*, *μ*] → R be a positive function, 0 ≤ *υ* < *μ* and *ξ* ∈ *L*1[*υ*, *μ*]. Furthermore, consider *h* : (0, 1) → R a non-negative function with *h* ≢ 0, and let *I* ⊆ R be an interval.

Now, we begin with definitions and preliminary results, which will be used in this work.

**Definition 1.** *(Convex function) [39] The function ξ* : [*υ*, *μ*] → R *is called convex if*

$$
\xi(\chi \mathbf{x}\_1 + (1 - \chi)\mathbf{x}\_2) \le \chi \xi(\mathbf{x}\_1) + (1 - \chi)\xi(\mathbf{x}\_2),
$$

*holds* ∀ *x*1, *x*<sup>2</sup> ∈ [*υ*, *μ*] *and χ* ∈ [0, 1]*.*

**Definition 2.** *(h-Convex function) [40] A function ξ* : [*υ*, *μ*] ⊆ R → R *is said to be h-convex if*

$$
\xi(\chi \mathbf{x}\_1 + (1 - \chi)\mathbf{x}\_2) \le h(\chi)\xi(\mathbf{x}\_1) + h(1 - \chi)\xi(\mathbf{x}\_2),
$$

*holds* ∀ *x*1, *x*<sup>2</sup> ∈ [*υ*, *μ*] *and χ* ∈ [0, 1]*.*

**Definition 3.** *(Superadditive function) A function h* : [*υ*, *μ*] ⊆ R → R *is called a superadditive function if*

$$h(\mathbf{x}\_1 + \mathbf{x}\_2) \ge h(\mathbf{x}\_1) + h(\mathbf{x}\_2),$$

*holds* ∀ *x*1, *x*<sup>2</sup> ∈ [*υ*, *μ*]*.*

**Definition 4** ([8,41,42])**.** *Let ξ* ∈ *H*1(*x*1, *x*2), *x*1 < *x*2, *θ* ∈ [0, 1]*. The left fractional derivative in the sense of Caputo and Fabrizio is defined as*

$$\left( {}^{CFC}\_{\;\;x\_1}D^{\theta}\xi \right)(t) = \frac{B(\theta)}{1-\theta} \int\_{x\_1}^{t} \xi'(z)\, e^{\frac{-\theta(t-z)}{1-\theta}}\, dz,$$

*and the associated fractional integral is*

$$\left( {}^{CF}\_{\;x\_1}I^{\theta}\xi \right)(t) = \frac{1-\theta}{B(\theta)} \xi(t) + \frac{\theta}{B(\theta)} \int\_{x\_1}^{t} \xi(z) dz,$$

*where B*(*θ*) > 0 *is a normalization function satisfying B*(0) = *B*(1) = 1*. The right fractional derivative is defined as*

$$\left( {}^{CFC}D\_{x\_2}^{\theta}\xi \right)(t) = \frac{-B(\theta)}{1-\theta} \int\_{t}^{x\_2} \xi'(z)\, e^{\frac{-\theta(z-t)}{1-\theta}}\, dz,$$

*and the associated fractional integral is*

$$\left( {}^{CF}I\_{x\_2}^{\theta}\xi \right)(t) = \frac{1-\theta}{B(\theta)} \xi(t) + \frac{\theta}{B(\theta)} \int\_{t}^{x\_2} \xi(z) dz.$$
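The integral operator in Definition 4 is easy to sanity-check numerically. The sketch below (our own midpoint-rule discretization, with the normalization B(θ) ≡ 1, which satisfies B(0) = B(1) = 1) verifies that at θ = 1 the left Caputo–Fabrizio integral reduces to the ordinary integral, and at θ = 0 it returns ξ(t) itself:

```python
# Midpoint-rule discretization of the left Caputo-Fabrizio fractional
# integral from Definition 4, with the normalization B(theta) = 1.
def cf_integral(xi, x1, t, theta, B=lambda th: 1.0, n=100000):
    step = (t - x1) / n
    integral = sum(xi(x1 + (i + 0.5) * step) for i in range(n)) * step
    return (1 - theta) / B(theta) * xi(t) + theta / B(theta) * integral

xi = lambda z: z ** 2
# at theta = 1 the operator is the ordinary integral: int_0^2 z^2 dz = 8/3
print(cf_integral(xi, 0.0, 2.0, 1.0))
# at theta = 0 it collapses to xi(t) = 4
print(cf_integral(xi, 0.0, 2.0, 0.0))
```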

*In [43,44], the Hölder–İşcan integral inequality and the improved power-mean integral inequality are explained as follows.*

**Theorem 2.** *(Hölder–İşcan integral inequality) [43] Let ξ*1 *and ξ*2 *be real functions defined on* [*x*1, *x*2] *such that* |*ξ*1|*<sup>p</sup> and* |*ξ*2|*<sup>q</sup> are integrable on* [*x*1, *x*2]*. If p* > 1 *and* 1/*p* + 1/*q* = 1*, then*

$$\begin{split} \int\_{\boldsymbol{x}\_{1}}^{\boldsymbol{\chi\_{2}}} |\tilde{\xi}\_{1}(\boldsymbol{z})\tilde{\xi}\_{2}(\boldsymbol{z})|d\boldsymbol{z} &\leq \frac{1}{\boldsymbol{x}\_{2} - \boldsymbol{x}\_{1}} \Bigg{(} \left( \int\_{\boldsymbol{x}\_{1}}^{\boldsymbol{x}\_{2}} (\boldsymbol{x}\_{2} - \boldsymbol{z}) |\tilde{\xi}\_{1}(\boldsymbol{z})|^{p} d\boldsymbol{z} \right)^{\frac{1}{p}} \left( \int\_{\boldsymbol{x}\_{1}}^{\boldsymbol{x}\_{2}} (\boldsymbol{x}\_{2} - \boldsymbol{z}) |\tilde{\xi}\_{2}(\boldsymbol{z})|^{q} d\boldsymbol{z} \right)^{\frac{1}{q}} \\ &+ \left( \int\_{\boldsymbol{x}\_{1}}^{\boldsymbol{x}\_{2}} (\boldsymbol{z} - \boldsymbol{x}\_{1}) |\tilde{\xi}\_{1}(\boldsymbol{z})|^{p} d\boldsymbol{z} \right)^{\frac{1}{p}} \left( \int\_{\boldsymbol{x}\_{1}}^{\boldsymbol{x}\_{2}} (\boldsymbol{z} - \boldsymbol{x}\_{1}) |\tilde{\xi}\_{2}(\boldsymbol{z})|^{q} d\boldsymbol{z} \right)^{\frac{1}{q}} \\ &\leq \left( \int\_{\boldsymbol{x}\_{1}}^{\boldsymbol{x}\_{2}} |\tilde{\xi}\_{1}(\boldsymbol{z})|^{p} d\boldsymbol{z} \right)^{\frac{1}{p}} \left( \int\_{\boldsymbol{x}\_{1}}^{\boldsymbol{x}\_{2}} |\tilde{\xi}\_{2}(\boldsymbol{z})|^{q} d\boldsymbol{z} \right)^{\frac{1}{q}}. \end{split}$$

**Theorem 3.** *(Improved power-mean integral inequality) [44] Let ξ*1 *and ξ*2 *be real functions defined on* [*x*1, *x*2] *such that* |*ξ*1| *and* |*ξ*1||*ξ*2|*<sup>q</sup> are integrable on* [*x*1, *x*2]*. If q* ≥ 1*, then*

$$\begin{split} \int\_{x\_1}^{x\_2} |\xi\_1(z)\xi\_2(z)|dz \leq \frac{1}{x\_2 - x\_1} \left\{ \left(\int\_{x\_1}^{x\_2} (x\_2 - z) |\xi\_1(z)|dz\right)^{1 - \frac{1}{q}} \left(\int\_{x\_1}^{x\_2} (x\_2 - z) |\xi\_1(z)| |\xi\_2(z)|^q dz\right)^{\frac{1}{q}} \right\} \\ &+ \left(\int\_{x\_1}^{x\_2} (z - x\_1) |\xi\_1(z)|dz\right)^{1 - \frac{1}{q}} \left(\int\_{x\_1}^{x\_2} (z - x\_1) |\xi\_1(z)| |\xi\_2(z)|^q dz\right)^{\frac{1}{q}} \right\} \\ &\leq \left(\int\_{x\_1}^{x\_2} |\xi\_1(z)| dz\right)^{1 - \frac{1}{q}} \left(\int\_{x\_1}^{x\_2} |\xi\_1(z)| |\xi\_2(z)|^q dz\right)^{\frac{1}{q}}. \end{split}$$
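A numeric sanity check of Theorem 3's chain, for the arbitrary sample choice ξ1(z) = ξ2(z) = z on [0, 1] with q = 2 (the `integrate` helper is ours):

```python
# Numeric check of the improved power-mean chain for xi1(z) = xi2(z) = z
# on [x1, x2] = [0, 1] with q = 2.
def integrate(f, a, b, n=20000):
    step = (b - a) / n
    return sum(f(a + (i + 0.5) * step) for i in range(n)) * step

x1, x2, q = 0.0, 1.0, 2.0
xi1 = lambda z: z
xi2 = lambda z: z

lhs = integrate(lambda z: abs(xi1(z) * xi2(z)), x1, x2)
mid = (1 / (x2 - x1)) * (
    integrate(lambda z: (x2 - z) * abs(xi1(z)), x1, x2) ** (1 - 1 / q)
    * integrate(lambda z: (x2 - z) * abs(xi1(z)) * abs(xi2(z)) ** q, x1, x2) ** (1 / q)
    + integrate(lambda z: (z - x1) * abs(xi1(z)), x1, x2) ** (1 - 1 / q)
    * integrate(lambda z: (z - x1) * abs(xi1(z)) * abs(xi2(z)) ** q, x1, x2) ** (1 / q)
)
upper = (integrate(lambda z: abs(xi1(z)), x1, x2) ** (1 - 1 / q)
         * integrate(lambda z: abs(xi1(z)) * abs(xi2(z)) ** q, x1, x2) ** (1 / q))
print(lhs, mid, upper)   # about 0.3333 <= 0.3495 <= 0.3536
```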

#### **2. Hermite–Jensen–Mercer-Type Inequalities via the Caputo–Fabrizio Fractional Operator**

**Theorem 4.** *Let ξ* : *I* = [*υ*, *μ*] → R *be an h-convex function and ξ* ∈ *L*1[*υ*, *μ*]*. If h is a superadditive function and θ* ∈ [0, 1]*, then*

$$\begin{split} \frac{1}{2h\left(\frac{1}{2}\right)}\xi\left(\upsilon+\mu-\frac{\mathbf{x}\_1+\mathbf{x}\_2}{2}\right) &\leq \frac{B(\theta)}{\theta(\mathbf{x}\_2-\mathbf{x}\_1)} \left[ \left( {}^{CF}\_{\;\upsilon+\mu-\mathbf{x}\_2}I^{\theta}\xi \right)(t) + \left( {}^{CF}I^{\theta}\_{\upsilon+\mu-\mathbf{x}\_1}\xi \right)(t) - \frac{2(1-\theta)}{B(\theta)}\xi(t) \right] \\ &\leq \int\_0^1 h(1)d\chi\left(M[\xi(\upsilon)+\xi(\mu)] - \frac{\xi(\mathbf{x}\_1)+\xi(\mathbf{x}\_2)}{2}\right), \end{split} \tag{2}$$

*holds for all x*1, *x*<sup>2</sup> ∈ [*υ*, *μ*]*, t* ∈ [*υ*, *μ*]*, B*(*θ*) > 0 *is a normalization function and M = sup* {*h*(*χ*) : *χ* ∈ (0, 1)}*.*
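A numeric illustration of the chain in (2), with the hypothetical sample choices ξ(x) = x², h(χ) = χ (so 2h(1/2) = M = h(1) = 1), B(θ) = 1, [υ, μ] = [0, 10], x1 = 2, x2 = 8, t = 5:

```python
# Numeric illustration of inequality (2): xi(x) = x**2 is convex, hence
# h-convex for h(chi) = chi; then 2h(1/2) = M = h(1) = 1 and B(theta) = 1.
def integrate(f, a, b, n=20000):
    step = (b - a) / n
    return sum(f(a + (i + 0.5) * step) for i in range(n)) * step

xi = lambda x: x ** 2
upsilon, mu, x1, x2 = 0.0, 10.0, 2.0, 8.0
theta, B, t = 0.5, 1.0, 5.0    # t lies in [upsilon+mu-x2, upsilon+mu-x1] = [2, 8]
h = lambda c: c

cf_left = (1 - theta) / B * xi(t) + theta / B * integrate(xi, upsilon + mu - x2, t)
cf_right = (1 - theta) / B * xi(t) + theta / B * integrate(xi, t, upsilon + mu - x1)

left = xi(upsilon + mu - (x1 + x2) / 2) / (2 * h(0.5))
middle = B / (theta * (x2 - x1)) * (cf_left + cf_right - 2 * (1 - theta) / B * xi(t))
right = h(1) * (1 * (xi(upsilon) + xi(mu)) - (xi(x1) + xi(x2)) / 2)
print(left, middle, right)   # 25.0 <= 28.0 <= 66.0
```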

**Proof.** The *h*-convexity of *ξ* on [*x*1, *x*2] yields that

$$\begin{aligned} \xi\left(\upsilon + \mu - \frac{\mathbf{x}\_1 + \mathbf{x}\_2}{2}\right) &= \xi\left(\frac{\upsilon + \mu - \mathbf{x}\_1 + \upsilon + \mu - \mathbf{x}\_2}{2}\right) \\ &\leq h\left(\frac{1}{2}\right) \big(\xi(\upsilon + \mu - \mathbf{x}\_1) + \xi(\upsilon + \mu - \mathbf{x}\_2)\big) \\ &= h\left(\frac{1}{2}\right) \big(\xi(\upsilon + \mu - (\chi\mathbf{x}\_1 + (1 - \chi)\mathbf{x}\_2)) + \xi(\upsilon + \mu - ((1 - \chi)\mathbf{x}\_1 + \chi\mathbf{x}\_2))\big), \end{aligned}$$

holds for all *x*1, *x*<sup>2</sup> ∈ [*υ*, *μ*].

Integrating the above inequality with respect to *χ* over [0, 1] and using the change-of-variable technique, we can deduce

$$\frac{1}{h\left(\frac{1}{2}\right)}\xi\left(\upsilon+\mu-\frac{\mathbf{x}\_1+\mathbf{x}\_2}{2}\right)\leq\frac{2}{\mathbf{x}\_2-\mathbf{x}\_1}\int\_{\upsilon+\mu-\mathbf{x}\_2}^{\upsilon+\mu-\mathbf{x}\_1}\xi(z)dz = \frac{2}{\mathbf{x}\_2-\mathbf{x}\_1}\left(\int\_{\upsilon+\mu-\mathbf{x}\_2}^{t}\xi(z)dz+\int\_{t}^{\upsilon+\mu-\mathbf{x}\_1}\xi(z)dz\right). \tag{3}$$

Multiplying both sides of (3) by *θ*(*x*2 − *x*1)/(2*B*(*θ*)) and adding 2(1 − *θ*)*ξ*(*t*)/*B*(*θ*), we have

$$\begin{split} &\frac{2(1-\theta)}{B(\theta)}\xi(t) + \frac{\theta(\mathbf{x}\_2-\mathbf{x}\_1)}{2h\left(\frac{1}{2}\right)B(\theta)}\xi\left(\upsilon+\mu-\frac{\mathbf{x}\_1+\mathbf{x}\_2}{2}\right) \\ &\leq \frac{2(1-\theta)}{B(\theta)}\xi(t) + \frac{\theta}{B(\theta)}\left(\int\_{\upsilon+\mu-\mathbf{x}\_2}^{t}\xi(z)dz + \int\_{t}^{\upsilon+\mu-\mathbf{x}\_1}\xi(z)dz\right) \\ &= \left(\frac{1-\theta}{B(\theta)}\xi(t) + \frac{\theta}{B(\theta)}\int\_{\upsilon+\mu-\mathbf{x}\_2}^{t}\xi(z)dz\right) + \left(\frac{1-\theta}{B(\theta)}\xi(t) + \frac{\theta}{B(\theta)}\int\_{t}^{\upsilon+\mu-\mathbf{x}\_1}\xi(z)dz\right) \\ &= \left( {}^{CF}\_{\;\upsilon+\mu-\mathbf{x}\_2}I^{\theta}\xi \right)(t) + \left( {}^{CF}I^{\theta}\_{\upsilon+\mu-\mathbf{x}\_1}\xi \right)(t). \end{split} \tag{4}$$

Suitable rearrangement of (4) yields the first inequality of (2). By using the *h*-convexity of *ξ*, we have

$$
\xi(\chi(\upsilon+\mu-\mathbf{x}\_1)+(1-\chi)(\upsilon+\mu-\mathbf{x}\_2)) \le h(\chi)\xi(\upsilon+\mu-\mathbf{x}\_1) + h(1-\chi)\xi(\upsilon+\mu-\mathbf{x}\_2),
$$

and

$$
\xi((1-\chi)(\upsilon+\mu-\mathbf{x}\_1)+\chi(\upsilon+\mu-\mathbf{x}\_2)) \le h(1-\chi)\xi(\upsilon+\mu-\mathbf{x}\_1) + h(\chi)\xi(\upsilon+\mu-\mathbf{x}\_2).
$$

Adding the above two inequalities and then using the superadditivity of *h* and the Jensen–Mercer inequality yields

$$
\begin{split} &\xi\big(\chi(\upsilon+\mu-\mathbf{x}\_1)+(1-\chi)(\upsilon+\mu-\mathbf{x}\_2)\big)+\xi\big((1-\chi)(\upsilon+\mu-\mathbf{x}\_1)+\chi(\upsilon+\mu-\mathbf{x}\_2)\big) \\ &\leq h(1)\big(\xi(\upsilon+\mu-\mathbf{x}\_1)+\xi(\upsilon+\mu-\mathbf{x}\_2)\big) \\ &\leq h(1)\big(2M[\xi(\upsilon)+\xi(\mu)]-(\xi(\mathbf{x}\_1)+\xi(\mathbf{x}\_2))\big). \end{split} \tag{5}
$$

Integrating the inequality (5) with respect to *χ* over [0, 1] and by the change of variable technique, we can write

$$\frac{2}{\mathbf{x}\_2 - \mathbf{x}\_1}\int\_{\upsilon+\mu-\mathbf{x}\_2}^{\upsilon+\mu-\mathbf{x}\_1}\xi(z)dz \leq \int\_0^1 h(1)d\chi\big(2M[\xi(\upsilon)+\xi(\mu)]-(\xi(\mathbf{x}\_1)+\xi(\mathbf{x}\_2))\big). \tag{6}$$

Applying the same operations used for (3) to (6), we have

$$\begin{split} &\left( {}^{CF}\_{\;\upsilon+\mu-\mathbf{x}\_2}I^{\theta}\xi \right)(t) + \left( {}^{CF}I^{\theta}\_{\upsilon+\mu-\mathbf{x}\_1}\xi \right)(t) \\ &\leq \frac{2(1-\theta)}{B(\theta)}\xi(t) + \frac{\theta(\mathbf{x}\_2-\mathbf{x}\_1)}{2B(\theta)}\left[\int\_0^1 h(1)d\chi\big(2M[\xi(\upsilon)+\xi(\mu)]-(\xi(\mathbf{x}\_1)+\xi(\mathbf{x}\_2))\big)\right]. \end{split} \tag{7}$$

By suitable rearrangement of (7), we obtain inequality (2).

**Remark 1.** *By putting h*(*χ*) = *χ, M = sup* {*h*(*χ*) : *χ* ∈ (0, 1)} = 1*, x*<sub>1</sub> = *υ and x*<sub>2</sub> = *μ in Theorem* 4*, we obtain Theorem 2 of [45].*

**Theorem 5.** *Assume that ξ* : *I* = [*υ*, *μ*] → R *is an h-convex function and ξ* ∈ *L*1[*υ*, *μ*]*. If θ* ∈ [0, 1]*, then*

$$\begin{split} &\frac{1}{h\left(\frac{1}{2}\right)}\xi\left(\upsilon+\mu-\frac{\mathbf{x}_{1}+\mathbf{x}_{2}}{2}\right)\int_{0}^{1}h(\chi)d\chi\\ &\leq \frac{1}{h\left(\frac{1}{2}\right)}M[\xi(\upsilon)+\xi(\mu)]\int_{0}^{1}h(\chi)d\chi-\frac{B(\theta)}{\theta(\mathbf{x}_{2}-\mathbf{x}_{1})}\left[\left({}^{CF}_{\mathbf{x}_{1}}I^{\theta}\xi\right)(t)+\left({}^{CF}I^{\theta}_{\mathbf{x}_{2}}\xi\right)(t)-\frac{2(1-\theta)}{B(\theta)}\xi(t)\right]\\ &\leq \frac{1}{h\left(\frac{1}{2}\right)}M[\xi(\upsilon)+\xi(\mu)]\int_{0}^{1}h(\chi)d\chi-\frac{1}{2h\left(\frac{1}{2}\right)}\xi\left(\frac{\mathbf{x}_{1}+\mathbf{x}_{2}}{2}\right), \end{split} \tag{8}$$

*holds for all x*<sub>1</sub>, *x*<sub>2</sub> ∈ [*υ*, *μ*] *and t* ∈ [*υ*, *μ*]*, where B*(*θ*) > 0 *is a normalization function and M = sup* {*h*(*χ*) : *χ* ∈ (0, 1)}*.*

**Proof.** By the Jensen–Mercer inequality, we have

$$\xi\left(\upsilon+\mu-\frac{\mathbf{x}_1+\mathbf{x}_2}{2}\right) \le M[\xi(\upsilon)+\xi(\mu)]-h\left(\frac{1}{2}\right)[\xi(\mathbf{x}_1)+\xi(\mathbf{x}_2)].$$

Multiplying both sides of the above inequality by *h*(*χ*) and integrating with respect to *χ* over [0, 1], we obtain

$$\begin{aligned} &\xi\left(\upsilon+\mu-\frac{\mathbf{x}_1+\mathbf{x}_2}{2}\right)\int_0^1 h(\chi)d\chi\\ &\leq M[\xi(\upsilon)+\xi(\mu)]\int_0^1 h(\chi)d\chi-h\left(\frac{1}{2}\right)[\xi(\mathbf{x}_1)+\xi(\mathbf{x}_2)]\int_0^1 h(\chi)d\chi, \end{aligned}$$

which implies that

$$\begin{aligned} &\frac{1}{h\left(\frac{1}{2}\right)}\xi\left(\upsilon+\mu-\frac{\mathbf{x}_{1}+\mathbf{x}_{2}}{2}\right)\int_{0}^{1}h(\chi)d\chi\\ &\leq \frac{1}{h\left(\frac{1}{2}\right)}M[\xi(\upsilon)+\xi(\mu)]\int_{0}^{1}h(\chi)d\chi-[\xi(\mathbf{x}_{1})+\xi(\mathbf{x}_{2})]\int_{0}^{1}h(\chi)d\chi. \end{aligned}$$

Now, we will use the right-hand side of the Hermite–Hadamard inequality for the *h*-convex function, and we obtain

$$\begin{split} &\frac{1}{h\left(\frac{1}{2}\right)}\xi\left(\upsilon+\mu-\frac{\mathbf{x}_{1}+\mathbf{x}_{2}}{2}\right)\int_{0}^{1}h(\chi)d\chi\\ &\leq \frac{1}{h\left(\frac{1}{2}\right)}M[\xi(\upsilon)+\xi(\mu)]\int_{0}^{1}h(\chi)d\chi-\frac{1}{\mathbf{x}_{2}-\mathbf{x}_{1}}\int_{\mathbf{x}_{1}}^{\mathbf{x}_{2}}\xi(z)dz\\ &=\frac{1}{h\left(\frac{1}{2}\right)}M[\xi(\upsilon)+\xi(\mu)]\int_{0}^{1}h(\chi)d\chi-\frac{1}{\mathbf{x}_{2}-\mathbf{x}_{1}}\left(\int_{\mathbf{x}_{1}}^{t}\xi(z)dz+\int_{t}^{\mathbf{x}_{2}}\xi(z)dz\right). \end{split} \tag{9}$$

Multiplying both sides of (9) by $\frac{\theta(\mathbf{x}_2-\mathbf{x}_1)}{B(\theta)}$ and subtracting $\frac{2(1-\theta)}{B(\theta)}\xi(t)$, we have

$$\begin{split}
&\frac{\theta(\mathbf{x}_2-\mathbf{x}_1)}{B(\theta)h\left(\frac{1}{2}\right)}\xi\left(\upsilon+\mu-\frac{\mathbf{x}_1+\mathbf{x}_2}{2}\right)\int_0^1 h(\chi)d\chi-\frac{2(1-\theta)}{B(\theta)}\xi(t)\\
&\leq \frac{\theta(\mathbf{x}_2-\mathbf{x}_1)}{B(\theta)h\left(\frac{1}{2}\right)}M[\xi(\upsilon)+\xi(\mu)]\int_0^1 h(\chi)d\chi-\frac{\theta}{B(\theta)}\left(\int_{\mathbf{x}_1}^{t}\xi(z)dz+\int_{t}^{\mathbf{x}_2}\xi(z)dz\right)-\frac{2(1-\theta)}{B(\theta)}\xi(t)\\
&=\frac{\theta(\mathbf{x}_2-\mathbf{x}_1)}{B(\theta)h\left(\frac{1}{2}\right)}M[\xi(\upsilon)+\xi(\mu)]\int_0^1 h(\chi)d\chi\\
&\quad-\left[\frac{\theta}{B(\theta)}\int_{\mathbf{x}_1}^{t}\xi(z)dz+\frac{(1-\theta)}{B(\theta)}\xi(t)+\frac{\theta}{B(\theta)}\int_{t}^{\mathbf{x}_2}\xi(z)dz+\frac{(1-\theta)}{B(\theta)}\xi(t)\right]\\
&=\frac{\theta(\mathbf{x}_2-\mathbf{x}_1)}{B(\theta)h\left(\frac{1}{2}\right)}M[\xi(\upsilon)+\xi(\mu)]\int_0^1 h(\chi)d\chi-\left[\left({}^{CF}_{\mathbf{x}_1}I^{\theta}\xi\right)(t)+\left({}^{CF}I^{\theta}_{\mathbf{x}_2}\xi\right)(t)\right].
\end{split}\tag{10}$$

After suitable rearrangement, (10) yields the first inequality of (8).

For the second inequality of (8), we use the left-hand side of the Hermite–Hadamard integral inequality for the *h*-convex function, which gives

$$-\frac{1}{\mathbf{x}_2-\mathbf{x}_1} \int_{\mathbf{x}_1}^{\mathbf{x}_2} \xi(z)dz \le -\frac{1}{2h\left(\frac{1}{2}\right)} \xi\left(\frac{\mathbf{x}_1+\mathbf{x}_2}{2}\right). \tag{11}$$

Applying the same operations used for (9) to (11), we have

$$-\frac{B(\theta)}{\theta(\mathbf{x}_{2}-\mathbf{x}_{1})}\left[\left({}^{CF}_{\mathbf{x}_{1}}I^{\theta}\xi\right)(t)+\left({}^{CF}I^{\theta}_{\mathbf{x}_{2}}\xi\right)(t)-\frac{2(1-\theta)}{B(\theta)}\xi(t)\right] \leq -\frac{1}{2h\left(\frac{1}{2}\right)}\xi\left(\frac{\mathbf{x}_{1}+\mathbf{x}_{2}}{2}\right). \tag{12}$$

Adding $\frac{1}{h\left(\frac{1}{2}\right)}M[\xi(\upsilon)+\xi(\mu)]\int_0^1 h(\chi)d\chi$ to both sides of (12), we have

$$\begin{split} &\frac{1}{h\left(\frac{1}{2}\right)}M[\xi(\upsilon)+\xi(\mu)]\int_{0}^{1}h(\chi)d\chi-\frac{B(\theta)}{\theta(\mathbf{x}_{2}-\mathbf{x}_{1})}\left[\left({}^{CF}_{\mathbf{x}_{1}}I^{\theta}\xi\right)(t)+\left({}^{CF}I^{\theta}_{\mathbf{x}_{2}}\xi\right)(t)-\frac{2(1-\theta)}{B(\theta)}\xi(t)\right]\\ &\leq \frac{1}{h\left(\frac{1}{2}\right)}M[\xi(\upsilon)+\xi(\mu)]\int_{0}^{1}h(\chi)d\chi-\frac{1}{2h\left(\frac{1}{2}\right)}\xi\left(\frac{\mathbf{x}_{1}+\mathbf{x}_{2}}{2}\right), \end{split}$$

which completes the proof.
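The chain of inequalities in Theorem 5 can be sanity-checked numerically in the classical convex case. The sketch below is an illustration, not part of the proof: it assumes *h*(*χ*) = *χ* (so *h*(1/2) = 1/2, *M* = 1, ∫₀¹*h* = 1/2) and the assumed test data *ξ*(*x*) = *x*², [*υ*, *μ*] = [0, 1], *x*₁ = 0.2, *x*₂ = 0.8. After the Caputo–Fabrizio terms are unfolded through their definition, the middle expression of (8) reduces to *ξ*(*υ*) + *ξ*(*μ*) minus the mean integral of *ξ* over [*x*₁, *x*₂].

```python
# Sanity check of the chain (8) for h(chi) = chi, where the chain reduces to
#   xi(u + m - (x1 + x2)/2)
#     <= xi(u) + xi(m) - (1/(x2 - x1)) * int_{x1}^{x2} xi(z) dz
#     <= xi(u) + xi(m) - xi((x1 + x2)/2).
# The test data (xi, its antiderivative Xi, and the sample points) are
# illustrative assumptions, not values taken from the paper.

def chain_terms(xi, Xi, u, m, x1, x2):
    """Return (left, middle, right) of the reduced chain; Xi is an antiderivative of xi."""
    mid = (x1 + x2) / 2
    mean_integral = (Xi(x2) - Xi(x1)) / (x2 - x1)
    left = xi(u + m - mid)
    middle = xi(u) + xi(m) - mean_integral
    right = xi(u) + xi(m) - xi(mid)
    return left, middle, right

left, middle, right = chain_terms(lambda x: x**2, lambda x: x**3 / 3, 0.0, 1.0, 0.2, 0.8)
# left = 0.25, middle = 0.72, right = 0.75: left <= middle <= right holds.
```

With these values the chain reads 0.25 ≤ 0.72 ≤ 0.75, consistent with (8).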

**Theorem 6.** *Let ξ*<sub>1</sub>, *ξ*<sub>2</sub> : *I* ⊆ R → R *be h-convex functions on I. If ξ*<sub>1</sub>*ξ*<sub>2</sub> ∈ *L*1[*υ*, *μ*]*, then*

$$\begin{split}
&\frac{2B(\theta)}{\theta(\mathbf{x}_2-\mathbf{x}_1)}\left[\left({}^{CF}_{\upsilon+\mu-\mathbf{x}_2}I^{\theta}\xi_1\xi_2\right)(t)+\left({}^{CF}I^{\theta}_{\upsilon+\mu-\mathbf{x}_1}\xi_1\xi_2\right)(t)-\frac{2(1-\theta)}{B(\theta)}\xi_1(t)\xi_2(t)\right]\\
&\leq 2M^2B_1(\upsilon,\mu)-2MB_2(\upsilon,\mu,\mathbf{x}_1)\int_0^1 h(1-\chi)d\chi-2MB_3(\upsilon,\mu,\mathbf{x}_2)\int_0^1 h(\chi)d\chi\\
&\quad+2B_4(\mathbf{x}_1,\mathbf{x}_2)\int_0^1 h(\chi)h(1-\chi)d\chi+2K_1(\mathbf{x}_1)\int_0^1(h(1-\chi))^2d\chi+2K_2(\mathbf{x}_2)\int_0^1(h(\chi))^2d\chi,
\end{split}\tag{13}$$

*where*

$$\begin{aligned}
B_1(\upsilon,\mu)&=\xi_1(\upsilon)\xi_2(\upsilon)+\xi_1(\upsilon)\xi_2(\mu)+\xi_1(\mu)\xi_2(\upsilon)+\xi_1(\mu)\xi_2(\mu),\\
B_2(\upsilon,\mu,\mathbf{x}_1)&=\xi_1(\upsilon)\xi_2(\mathbf{x}_1)+\xi_1(\mu)\xi_2(\mathbf{x}_1)+\xi_1(\mathbf{x}_1)\xi_2(\upsilon)+\xi_1(\mathbf{x}_1)\xi_2(\mu),\\
B_3(\upsilon,\mu,\mathbf{x}_2)&=\xi_1(\upsilon)\xi_2(\mathbf{x}_2)+\xi_1(\mu)\xi_2(\mathbf{x}_2)+\xi_1(\mathbf{x}_2)\xi_2(\upsilon)+\xi_1(\mathbf{x}_2)\xi_2(\mu),\\
B_4(\mathbf{x}_1,\mathbf{x}_2)&=\xi_1(\mathbf{x}_1)\xi_2(\mathbf{x}_2)+\xi_1(\mathbf{x}_2)\xi_2(\mathbf{x}_1),\\
K_1(\mathbf{x}_1)&=\xi_1(\mathbf{x}_1)\xi_2(\mathbf{x}_1),
\end{aligned}$$

*and*

$$K_2(\mathbf{x}_2)=\xi_1(\mathbf{x}_2)\xi_2(\mathbf{x}_2),$$

*holds for all x*<sub>1</sub>, *x*<sub>2</sub> ∈ [*υ*, *μ*] *and t* ∈ [*υ*, *μ*]*, where M = sup* {*h*(*χ*) : *χ* ∈ (0, 1)} *and B*(*θ*) > 0 *is a normalization function.*

**Proof.** Since *ξ*<sub>1</sub> and *ξ*<sub>2</sub> are *h*-convex functions on [*x*<sub>1</sub>, *x*<sub>2</sub>], making use of the Jensen–Mercer inequality we have

$$\begin{aligned} &\xi_1(\upsilon+\mu-((1-\chi)\mathbf{x}_1+\chi\mathbf{x}_2)) \\ &\leq M[\xi_1(\upsilon)+\xi_1(\mu)]-(h(1-\chi)\xi_1(\mathbf{x}_1)+h(\chi)\xi_1(\mathbf{x}_2)),\ \forall \chi\in[0,1],\ \mathbf{x}_1,\mathbf{x}_2\in I, \end{aligned}$$

and

$$\begin{aligned} &\xi_2(\upsilon+\mu-((1-\chi)\mathbf{x}_1+\chi\mathbf{x}_2)) \\ &\leq M[\xi_2(\upsilon)+\xi_2(\mu)]-(h(1-\chi)\xi_2(\mathbf{x}_1)+h(\chi)\xi_2(\mathbf{x}_2)),\ \forall \chi\in[0,1],\ \mathbf{x}_1,\mathbf{x}_2\in I. \end{aligned}$$

Multiplying both sides of the above inequalities, we can write

$$\begin{split}
&\xi_1(\upsilon+\mu-((1-\chi)\mathbf{x}_1+\chi\mathbf{x}_2))\,\xi_2(\upsilon+\mu-((1-\chi)\mathbf{x}_1+\chi\mathbf{x}_2))\\
&\leq M^2[\xi_1(\upsilon)\xi_2(\upsilon)+\xi_1(\upsilon)\xi_2(\mu)+\xi_1(\mu)\xi_2(\upsilon)+\xi_1(\mu)\xi_2(\mu)]\\
&\quad-Mh(1-\chi)[\xi_1(\upsilon)\xi_2(\mathbf{x}_1)+\xi_1(\mu)\xi_2(\mathbf{x}_1)+\xi_1(\mathbf{x}_1)\xi_2(\upsilon)+\xi_1(\mathbf{x}_1)\xi_2(\mu)]\\
&\quad-Mh(\chi)[\xi_1(\upsilon)\xi_2(\mathbf{x}_2)+\xi_1(\mu)\xi_2(\mathbf{x}_2)+\xi_1(\mathbf{x}_2)\xi_2(\upsilon)+\xi_1(\mathbf{x}_2)\xi_2(\mu)]\\
&\quad+h(\chi)h(1-\chi)[\xi_1(\mathbf{x}_1)\xi_2(\mathbf{x}_2)+\xi_1(\mathbf{x}_2)\xi_2(\mathbf{x}_1)]\\
&\quad+(h(1-\chi))^2[\xi_1(\mathbf{x}_1)\xi_2(\mathbf{x}_1)]+(h(\chi))^2[\xi_1(\mathbf{x}_2)\xi_2(\mathbf{x}_2)].
\end{split}$$

Integrating the above inequality with respect to *χ* over [0,1] and then by the change of variable technique, we obtain

$$\begin{split}
&\frac{1}{\mathbf{x}_2-\mathbf{x}_1}\int_{\upsilon+\mu-\mathbf{x}_2}^{\upsilon+\mu-\mathbf{x}_1}\xi_1(z)\xi_2(z)dz\\
&\leq M^2[\xi_1(\upsilon)\xi_2(\upsilon)+\xi_1(\upsilon)\xi_2(\mu)+\xi_1(\mu)\xi_2(\upsilon)+\xi_1(\mu)\xi_2(\mu)]\\
&\quad-M[\xi_1(\upsilon)\xi_2(\mathbf{x}_1)+\xi_1(\mu)\xi_2(\mathbf{x}_1)+\xi_1(\mathbf{x}_1)\xi_2(\upsilon)+\xi_1(\mathbf{x}_1)\xi_2(\mu)]\int_0^1 h(1-\chi)d\chi\\
&\quad-M[\xi_1(\upsilon)\xi_2(\mathbf{x}_2)+\xi_1(\mu)\xi_2(\mathbf{x}_2)+\xi_1(\mathbf{x}_2)\xi_2(\upsilon)+\xi_1(\mathbf{x}_2)\xi_2(\mu)]\int_0^1 h(\chi)d\chi\\
&\quad+[\xi_1(\mathbf{x}_1)\xi_2(\mathbf{x}_2)+\xi_1(\mathbf{x}_2)\xi_2(\mathbf{x}_1)]\int_0^1 h(\chi)h(1-\chi)d\chi\\
&\quad+[\xi_1(\mathbf{x}_1)\xi_2(\mathbf{x}_1)]\int_0^1(h(1-\chi))^2d\chi+[\xi_1(\mathbf{x}_2)\xi_2(\mathbf{x}_2)]\int_0^1(h(\chi))^2d\chi,
\end{split}$$

which implies

$$\begin{split}
&\frac{2}{\mathbf{x}_2-\mathbf{x}_1}\left[\int_{\upsilon+\mu-\mathbf{x}_2}^{t}\xi_1(z)\xi_2(z)dz+\int_{t}^{\upsilon+\mu-\mathbf{x}_1}\xi_1(z)\xi_2(z)dz\right]\\
&\leq 2M^2B_1(\upsilon,\mu)-2MB_2(\upsilon,\mu,\mathbf{x}_1)\int_0^1 h(1-\chi)d\chi-2MB_3(\upsilon,\mu,\mathbf{x}_2)\int_0^1 h(\chi)d\chi\\
&\quad+2B_4(\mathbf{x}_1,\mathbf{x}_2)\int_0^1 h(\chi)h(1-\chi)d\chi+2K_1(\mathbf{x}_1)\int_0^1(h(1-\chi))^2d\chi+2K_2(\mathbf{x}_2)\int_0^1(h(\chi))^2d\chi.
\end{split}$$

Multiplying the above inequality by $\frac{\theta(\mathbf{x}_2-\mathbf{x}_1)}{2B(\theta)}$ and adding $\frac{2(1-\theta)}{B(\theta)}\xi_1(t)\xi_2(t)$ to both sides, we have

$$\begin{split}
&\frac{\theta}{B(\theta)}\left[\int_{\upsilon+\mu-\mathbf{x}_2}^{t}\xi_1(z)\xi_2(z)dz+\int_{t}^{\upsilon+\mu-\mathbf{x}_1}\xi_1(z)\xi_2(z)dz\right]+\frac{2(1-\theta)}{B(\theta)}\xi_1(t)\xi_2(t)\\
&\leq\frac{\theta(\mathbf{x}_2-\mathbf{x}_1)}{2B(\theta)}\left[2M^2B_1(\upsilon,\mu)-2MB_2(\upsilon,\mu,\mathbf{x}_1)\int_0^1 h(1-\chi)d\chi-2MB_3(\upsilon,\mu,\mathbf{x}_2)\int_0^1 h(\chi)d\chi\right.\\
&\quad\left.+2B_4(\mathbf{x}_1,\mathbf{x}_2)\int_0^1 h(\chi)h(1-\chi)d\chi+2K_1(\mathbf{x}_1)\int_0^1(h(1-\chi))^2d\chi+2K_2(\mathbf{x}_2)\int_0^1(h(\chi))^2d\chi\right]\\
&\quad+\frac{2(1-\theta)}{B(\theta)}\xi_1(t)\xi_2(t).
\end{split}$$

Therefore,

$$\begin{split}
&\left[\frac{(1-\theta)}{B(\theta)}\xi_1(t)\xi_2(t)+\frac{\theta}{B(\theta)}\int_{\upsilon+\mu-\mathbf{x}_2}^{t}\xi_1(w)\xi_2(w)dw\right]+\left[\frac{(1-\theta)}{B(\theta)}\xi_1(t)\xi_2(t)+\frac{\theta}{B(\theta)}\int_{t}^{\upsilon+\mu-\mathbf{x}_1}\xi_1(w)\xi_2(w)dw\right]\\
&\leq\frac{\theta(\mathbf{x}_2-\mathbf{x}_1)}{2B(\theta)}\left[2M^2B_1(\upsilon,\mu)-2MB_2(\upsilon,\mu,\mathbf{x}_1)\int_0^1 h(1-\chi)d\chi-2MB_3(\upsilon,\mu,\mathbf{x}_2)\int_0^1 h(\chi)d\chi\right.\\
&\quad\left.+2B_4(\mathbf{x}_1,\mathbf{x}_2)\int_0^1 h(\chi)h(1-\chi)d\chi+2K_1(\mathbf{x}_1)\int_0^1(h(1-\chi))^2d\chi+2K_2(\mathbf{x}_2)\int_0^1(h(\chi))^2d\chi\right]\\
&\quad+\frac{2(1-\theta)}{B(\theta)}\xi_1(t)\xi_2(t).
\end{split}$$

Thus,

$$\begin{split}
&\left({}^{CF}_{\upsilon+\mu-\mathbf{x}_2}I^{\theta}\xi_1\xi_2\right)(t)+\left({}^{CF}I^{\theta}_{\upsilon+\mu-\mathbf{x}_1}\xi_1\xi_2\right)(t)\\
&\leq\frac{\theta(\mathbf{x}_2-\mathbf{x}_1)}{2B(\theta)}\left[2M^2B_1(\upsilon,\mu)-2MB_2(\upsilon,\mu,\mathbf{x}_1)\int_0^1 h(1-\chi)d\chi-2MB_3(\upsilon,\mu,\mathbf{x}_2)\int_0^1 h(\chi)d\chi\right.\\
&\quad\left.+2B_4(\mathbf{x}_1,\mathbf{x}_2)\int_0^1 h(\chi)h(1-\chi)d\chi+2K_1(\mathbf{x}_1)\int_0^1(h(1-\chi))^2d\chi+2K_2(\mathbf{x}_2)\int_0^1(h(\chi))^2d\chi\right]\\
&\quad+\frac{2(1-\theta)}{B(\theta)}\xi_1(t)\xi_2(t).
\end{split}\tag{14}$$

By suitable rearrangement, (14) yields the required inequality (13).

**Remark 2.** *By putting h*(*χ*) = *χ, M = sup* {*h*(*χ*) : *χ* ∈ (0, 1)} = 1*, x*<sub>1</sub> = *υ and x*<sub>2</sub> = *μ in Theorem* 6*, we obtain Theorem 3 of [45].*
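As a plausibility check of the product bound (13), the sketch below takes the classical case *h*(*χ*) = *χ* (so *M* = 1, ∫₀¹*h* = ∫₀¹*h*(1−*χ*) = 1/2, ∫₀¹*h*(*χ*)*h*(1−*χ*) = 1/6, ∫₀¹*h*² = 1/3) and the assumed illustrative data *ξ*₁(*x*) = *ξ*₂(*x*) = *x*² on [0, 1]; unfolding the Caputo–Fabrizio operators turns the left-hand side of (13) into the mean integral of *ξ*₁*ξ*₂ over [*υ*+*μ*−*x*₂, *υ*+*μ*−*x*₁].

```python
# Spot-check of (13) for h(chi) = chi.  The quadrature helper, the choice
# xi1 = xi2 = x**2, and the sample points are illustrative assumptions,
# not values from the paper.

def midpoint_quad(f, a, b, n=20000):
    """Midpoint-rule approximation of int_a^b f."""
    step = (b - a) / n
    return step * sum(f(a + (k + 0.5) * step) for k in range(n))

def theorem6_sides(f1, f2, u, m, x1, x2):
    # Left side with the Caputo-Fabrizio terms unfolded.
    lhs = 2 / (x2 - x1) * midpoint_quad(lambda z: f1(z) * f2(z), u + m - x2, u + m - x1)
    # The B and K coefficients exactly as defined after Theorem 6 (M = 1 here).
    B1 = f1(u)*f2(u) + f1(u)*f2(m) + f1(m)*f2(u) + f1(m)*f2(m)
    B2 = f1(u)*f2(x1) + f1(m)*f2(x1) + f1(x1)*f2(u) + f1(x1)*f2(m)
    B3 = f1(u)*f2(x2) + f1(m)*f2(x2) + f1(x2)*f2(u) + f1(x2)*f2(m)
    B4 = f1(x1)*f2(x2) + f1(x2)*f2(x1)
    K1, K2 = f1(x1)*f2(x1), f1(x2)*f2(x2)
    # Kernel integrals for h(chi) = chi: 1/2, 1/2, 1/6, 1/3, 1/3.
    rhs = 2*B1 - 2*B2*0.5 - 2*B3*0.5 + 2*B4/6 + 2*K1/3 + 2*K2/3
    return lhs, rhs

lhs, rhs = theorem6_sides(lambda x: x**2, lambda x: x**2, 0.0, 1.0, 0.2, 0.8)
# lhs ≈ 0.21824 and rhs ≈ 0.9312, so the inequality holds with room to spare.
```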

#### **3. Some Novel Results Related to the Caputo–Fabrizio Fractional Operator**

In this section, we present some new lemmas and then develop some novel results for *h*-convex functions with the help of the Caputo–Fabrizio fractional integral operator.

**Lemma 1.** *Let ξ* : *I* = [*υ*, *μ*] → R *be a differentiable mapping on I*◦*, where υ*, *μ* ∈ *I with υ* < *μ. If ξ* ∈ *L*1[*υ*, *μ*]*, then*

$$\begin{split} &\frac{\xi(\upsilon+\mu-\mathbf{x}_1)+\xi(\upsilon+\mu-\mathbf{x}_2)}{2}-\frac{1}{\mathbf{x}_2-\mathbf{x}_1}\int_{\upsilon+\mu-\mathbf{x}_2}^{\upsilon+\mu-\mathbf{x}_1}\xi(z)dz\\ &=\frac{\mathbf{x}_2-\mathbf{x}_1}{2}\int_0^1(1-2\chi)\xi'(\upsilon+\mu-((1-\chi)\mathbf{x}_1+\chi\mathbf{x}_2))d\chi, \end{split}\tag{15}$$

*holds for all x*<sub>1</sub>, *x*<sub>2</sub> ∈ [*υ*, *μ*]*.*

**Proof.** Note that

$$\begin{split} I &= \int_{0}^{1}(1-2\chi)\xi'(\upsilon+\mu-((1-\chi)\mathbf{x}_{1}+\chi\mathbf{x}_{2}))d\chi \\ &= \left.\frac{\xi(\upsilon+\mu-((1-\chi)\mathbf{x}_{1}+\chi\mathbf{x}_{2}))}{\mathbf{x}_{1}-\mathbf{x}_{2}}(1-2\chi)\right|_{0}^{1}+2\int_{0}^{1}\frac{\xi(\upsilon+\mu-((1-\chi)\mathbf{x}_{1}+\chi\mathbf{x}_{2}))}{\mathbf{x}_{1}-\mathbf{x}_{2}}d\chi \\ &= \frac{\xi(\upsilon+\mu-\mathbf{x}_{1})+\xi(\upsilon+\mu-\mathbf{x}_{2})}{\mathbf{x}_{2}-\mathbf{x}_{1}}-\frac{2}{\mathbf{x}_{2}-\mathbf{x}_{1}}\cdot\frac{1}{\mathbf{x}_{2}-\mathbf{x}_{1}}\int_{\upsilon+\mu-\mathbf{x}_{2}}^{\upsilon+\mu-\mathbf{x}_{1}}\xi(z)dz. \end{split}$$

After suitable rearrangements, we obtain the required identity (15).
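The identity (15) in Lemma 1 lends itself to a quick numerical check. The sketch below uses the illustrative assumptions *ξ*(*z*) = *z*² (so *ξ*′(*z*) = 2*z*), [*υ*, *μ*] = [0, 1], *x*₁ = 0.2, *x*₂ = 0.8; these concrete values are for the test only and are not part of the lemma.

```python
# Numerical check of identity (15): both sides are evaluated with a simple
# midpoint quadrature, so only the standard library is needed.

def midpoint_quad(f, a, b, n=20000):
    """Midpoint-rule approximation of int_a^b f."""
    step = (b - a) / n
    return step * sum(f(a + (k + 0.5) * step) for k in range(n))

def lemma1_sides(xi, dxi, u, m, x1, x2):
    """Return (lhs, rhs) of identity (15) for the given data."""
    lhs = (xi(u + m - x1) + xi(u + m - x2)) / 2 \
        - midpoint_quad(xi, u + m - x2, u + m - x1) / (x2 - x1)
    rhs = (x2 - x1) / 2 * midpoint_quad(
        lambda chi: (1 - 2 * chi) * dxi(u + m - ((1 - chi) * x1 + chi * x2)), 0.0, 1.0)
    return lhs, rhs

lhs, rhs = lemma1_sides(lambda z: z**2, lambda z: 2 * z, 0.0, 1.0, 0.2, 0.8)
# Both sides agree up to quadrature error (each equals 0.06 here).
```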

**Remark 3.** *For x*<sub>1</sub> = *υ and x*<sub>2</sub> = *μ in Lemma* 1*, we obtain Lemma 2.1 of [46].*

**Lemma 2.** *Suppose that ξ* : *I* = [*υ*, *μ*] → R *is a differentiable mapping on I*◦*, υ*, *μ* ∈ *I with υ* < *μ. If ξ* ∈ *L*1[*υ*, *μ*] *and θ* ∈ [0, 1]*, then*

$$\begin{split} &\frac{\mathbf{x}_{2}-\mathbf{x}_{1}}{2}\int_{0}^{1}(1-2\chi)\xi'\left(\upsilon+\mu-((1-\chi)\mathbf{x}_{1}+\chi\mathbf{x}_{2})\right)d\chi-\frac{2(1-\theta)}{\theta(\mathbf{x}_{2}-\mathbf{x}_{1})}\xi(t)\\ &=\frac{\xi(\upsilon+\mu-\mathbf{x}_{1})+\xi(\upsilon+\mu-\mathbf{x}_{2})}{2}-\frac{B(\theta)}{\theta(\mathbf{x}_{2}-\mathbf{x}_{1})}\left[\left({}^{CF}_{\upsilon+\mu-\mathbf{x}_{2}}I^{\theta}\xi\right)(t)+\left({}^{CF}I^{\theta}_{\upsilon+\mu-\mathbf{x}_{1}}\xi\right)(t)\right], \end{split}$$

*holds for all x*1, *x*<sup>2</sup> ∈ [*υ*, *μ*]*, where t* ∈ [*υ*, *μ*] *and B*(*θ*) > 0 *is a normalization function.*

**Proof.** It is easy to see that

$$\begin{split} &\int_{0}^{1}(1-2\chi)\xi'(\upsilon+\mu-((1-\chi)\mathbf{x}_{1}+\chi\mathbf{x}_{2}))d\chi \\ &=\frac{\xi(\upsilon+\mu-\mathbf{x}_{1})+\xi(\upsilon+\mu-\mathbf{x}_{2})}{\mathbf{x}_{2}-\mathbf{x}_{1}}-\frac{2}{(\mathbf{x}_{2}-\mathbf{x}_{1})^{2}}\left(\int_{\upsilon+\mu-\mathbf{x}_{2}}^{t}\xi(z)dz+\int_{t}^{\upsilon+\mu-\mathbf{x}_{1}}\xi(z)dz\right). \end{split}$$

Multiplying both sides of the above identity by $\frac{\theta(\mathbf{x}_2-\mathbf{x}_1)^2}{2B(\theta)}$ and subtracting $\frac{2(1-\theta)}{B(\theta)}\xi(t)$, we have

$$\begin{split}
&\frac{\theta(\mathbf{x}_2-\mathbf{x}_1)^2}{2B(\theta)}\int_0^1(1-2\chi)\xi'(\upsilon+\mu-((1-\chi)\mathbf{x}_1+\chi\mathbf{x}_2))d\chi-\frac{2(1-\theta)}{B(\theta)}\xi(t)\\
&=\frac{\theta(\mathbf{x}_2-\mathbf{x}_1)(\xi(\upsilon+\mu-\mathbf{x}_1)+\xi(\upsilon+\mu-\mathbf{x}_2))}{2B(\theta)}-\frac{2(1-\theta)}{B(\theta)}\xi(t)-\frac{\theta}{B(\theta)}\left(\int_{\upsilon+\mu-\mathbf{x}_2}^{t}\xi(z)dz+\int_{t}^{\upsilon+\mu-\mathbf{x}_1}\xi(z)dz\right)\\
&=\frac{\theta(\mathbf{x}_2-\mathbf{x}_1)(\xi(\upsilon+\mu-\mathbf{x}_1)+\xi(\upsilon+\mu-\mathbf{x}_2))}{2B(\theta)}\\
&\quad-\left[\frac{(1-\theta)}{B(\theta)}\xi(t)+\frac{\theta}{B(\theta)}\int_{\upsilon+\mu-\mathbf{x}_2}^{t}\xi(z)dz\right]-\left[\frac{(1-\theta)}{B(\theta)}\xi(t)+\frac{\theta}{B(\theta)}\int_{t}^{\upsilon+\mu-\mathbf{x}_1}\xi(z)dz\right]\\
&=\frac{\theta(\mathbf{x}_2-\mathbf{x}_1)(\xi(\upsilon+\mu-\mathbf{x}_1)+\xi(\upsilon+\mu-\mathbf{x}_2))}{2B(\theta)}-\left[\left({}^{CF}_{\upsilon+\mu-\mathbf{x}_2}I^{\theta}\xi\right)(t)+\left({}^{CF}I^{\theta}_{\upsilon+\mu-\mathbf{x}_1}\xi\right)(t)\right].
\end{split}$$

After suitable rearrangements, we obtain the desired result.

**Remark 4.** *For x*<sub>1</sub> = *υ and x*<sub>2</sub> = *μ in Lemma* 2*, we obtain Lemma 2 of [45].*

**Theorem 7.** *Let ξ* : *I* → R *be a positive differentiable function on I*◦*. If* |*ξ*′| *is an h-convex function on* [*υ*, *μ*]*, where υ*, *μ* ∈ *I with υ* < *μ, ξ* ∈ *L*1[*υ*, *μ*] *and θ* ∈ [0, 1]*, then*

$$\begin{split}
&\left|\frac{\xi(\upsilon+\mu-\mathbf{x}_1)+\xi(\upsilon+\mu-\mathbf{x}_2)}{2}-\frac{B(\theta)}{\theta(\mathbf{x}_2-\mathbf{x}_1)}\left[\left({}^{CF}_{\upsilon+\mu-\mathbf{x}_2}I^{\theta}\xi\right)(t)+\left({}^{CF}I^{\theta}_{\upsilon+\mu-\mathbf{x}_1}\xi\right)(t)\right]+\frac{2(1-\theta)}{\theta(\mathbf{x}_2-\mathbf{x}_1)}\xi(t)\right|\\
&\leq\frac{\mathbf{x}_2-\mathbf{x}_1}{2}\left[\frac{1}{2}M\left(\left|\xi'(\upsilon)\right|+\left|\xi'(\mu)\right|\right)-\left\{B_h(1-\chi)\left|\xi'(\mathbf{x}_1)\right|+B_h(\chi)\left|\xi'(\mathbf{x}_2)\right|\right\}\right],
\end{split}$$

*where*

$$\begin{aligned} B_h(1-\chi) &= \int_0^{\frac{1}{2}}(1-2\chi)h(1-\chi)d\chi+\int_{\frac{1}{2}}^1(2\chi-1)h(1-\chi)d\chi, \\ B_h(\chi) &= \int_0^{\frac{1}{2}}(1-2\chi)h(\chi)d\chi+\int_{\frac{1}{2}}^1(2\chi-1)h(\chi)d\chi, \end{aligned}$$

*holds for all x*<sub>1</sub>, *x*<sub>2</sub> ∈ [*υ*, *μ*] *and t* ∈ [*υ*, *μ*]*, where B*(*θ*) > 0 *is a normalization function and M = sup* {*h*(*χ*) : *χ* ∈ (0, 1)}*.*
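Once *h* is fixed, the two kernels *B*<sub>*h*</sub>(1−*χ*) and *B*<sub>*h*</sub>(*χ*) are ordinary definite integrals. As a small illustration (the quadrature helper and step count are implementation choices, not from the text), the sketch below evaluates them for the classical case *h*(*χ*) = *χ*, where both constants come out to 1/4, so the bound collapses to the familiar trapezoid-type estimate.

```python
# Evaluate the B_h kernels of Theorem 7 numerically for a given h.

def midpoint_quad(f, a, b, n=20000):
    """Midpoint-rule approximation of int_a^b f."""
    step = (b - a) / n
    return step * sum(f(a + (k + 0.5) * step) for k in range(n))

def B_h_kernels(h):
    """Return (B_h(1-chi), B_h(chi)) exactly as defined in Theorem 7."""
    b1 = midpoint_quad(lambda c: (1 - 2 * c) * h(1 - c), 0.0, 0.5) \
       + midpoint_quad(lambda c: (2 * c - 1) * h(1 - c), 0.5, 1.0)
    b2 = midpoint_quad(lambda c: (1 - 2 * c) * h(c), 0.0, 0.5) \
       + midpoint_quad(lambda c: (2 * c - 1) * h(c), 0.5, 1.0)
    return b1, b2

b1, b2 = B_h_kernels(lambda c: c)
# For h(chi) = chi, b1 and b2 are both 1/4 up to quadrature error.
```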

**Proof.** Making use of Lemma 2, the properties of the absolute value, the *h*-convexity of |*ξ*′| and the Jensen–Mercer inequality, we obtain

" " " " " *ξ*(*υ* + *μ* − *x*1) + *ξ*(*υ* + *μ* − *x*2) <sup>2</sup> <sup>−</sup> *<sup>B</sup>*(*θ*) *θ*(*x*<sup>2</sup> − *x*1) *CF <sup>υ</sup>*+*μ*−*x*<sup>2</sup> *I θ ξ* ! (*t*) + *CF I θ <sup>υ</sup>*+*μ*−*x*<sup>1</sup> *ξ* ! (*t*) <sup>+</sup> <sup>2</sup>(<sup>1</sup> <sup>−</sup> *<sup>θ</sup>*) *θ*(*x*<sup>2</sup> − *x*1) *ξ*(*t*) " " " " " <sup>≤</sup> *<sup>x</sup>*<sup>2</sup> <sup>−</sup> *<sup>x</sup>*<sup>1</sup> 2 <sup>1</sup> 0 |1 − 2*χ*| " " "*ξ* (*υ* + *μ* − ((1 − *χ*)*x*<sup>1</sup> + *χx*2)) " " "*dχ* <sup>≤</sup> *<sup>x</sup>*<sup>2</sup> <sup>−</sup> *<sup>x</sup>*<sup>1</sup> 2 <sup>1</sup> 0 |1 − 2*χ*| *M* " " "*ξ* (*υ*) " " " + " " "*ξ* (*μ*) " " " − *h*(1 − *χ*) " " "*ξ* (*x*1) " " " <sup>+</sup> *<sup>h</sup>*(*χ*) " " "*ξ* (*x*2) " " " ! *dχ* <sup>≤</sup> *<sup>x</sup>*<sup>2</sup> <sup>−</sup> *<sup>x</sup>*<sup>1</sup> 2 <sup>1</sup> 2 0 (1 − 2*χ*) *M* " " "*ξ* (*υ*) " " " + " " "*ξ* (*μ*) " " " − *h*(1 − *χ*) " " "*ξ* (*x*1) " " " <sup>+</sup> *<sup>h</sup>*(*χ*) " " "*ξ* (*x*2) " " " !!*dχ* + <sup>1</sup> 1 2 (2*χ* − 1) *M* " " "*ξ* (*υ*) " " " + " " "*ξ* (*μ*) " " " − *h*(1 − *χ*) " " "*ξ* (*x*1) " " " <sup>+</sup> *<sup>h</sup>*(*χ*) " " "*ξ* (*x*2) " " " !!*dχ* <sup>≤</sup> *<sup>x</sup>*<sup>2</sup> <sup>−</sup> *<sup>x</sup>*<sup>1</sup> 2 ' 1 2 *M* " " "*ξ* (*υ*) " " " + " " "*ξ* (*μ*) " " " ! − " " "*ξ* (*x*1) " " " <sup>1</sup> 2 0 (1 − 2*χ*)*h*(1 − *χ*)*dχ* + <sup>1</sup> 1 2 (2*χ* − 1)*h*(1 − *χ*)*dχ* + " " "*ξ* (*x*2) " " " <sup>1</sup> 2 0 (1 − 2*χ*)*h*(*χ*)*dχ* + <sup>1</sup> 1 2 (2*χ* − 1)*h*(*χ*)*dχ* /( <sup>≤</sup> *<sup>x</sup>*<sup>2</sup> <sup>−</sup> *<sup>x</sup>*<sup>1</sup> 2 1 2 *M* " " "*ξ* (*υ*) " " " + " " "*ξ* (*μ*) " " " ! − *Bh*(1 − *χ*) " " "*ξ* (*x*1) " " " <sup>+</sup> *Bh*(*χ*) " " "*ξ* (*x*2) " " " .

This completes the proof.

**Remark 5.** *By putting h*(*χ*) = *χ, M = sup* {*h*(*χ*) : *χ* ∈ (0, 1)} = 1*, x*<sub>1</sub> = *υ and x*<sub>2</sub> = *μ in Theorem* 7*, we obtain Theorem 5 of [45].*

**Theorem 8.** *Suppose that ξ* : *I* → R *is a positive differentiable function on I*◦ *and* |*ξ*′|*<sup>q</sup> is an h-convex function on* [*υ*, *μ*]*, where υ*, *μ* ∈ *I*◦ *with υ* < *μ, and p*, *q* > 1 *with* 1/*p* + 1/*q* = 1*. If ξ* ∈ *L*1[*υ*, *μ*] *and θ* ∈ [0, 1]*, then*

$$\begin{split}
&\left|\frac{\xi(\upsilon+\mu-\mathbf{x}_1)+\xi(\upsilon+\mu-\mathbf{x}_2)}{2}-\frac{B(\theta)}{\theta(\mathbf{x}_2-\mathbf{x}_1)}\left[\left({}^{CF}_{\upsilon+\mu-\mathbf{x}_2}I^{\theta}\xi\right)(t)+\left({}^{CF}I^{\theta}_{\upsilon+\mu-\mathbf{x}_1}\xi\right)(t)\right]+\frac{2(1-\theta)}{\theta(\mathbf{x}_2-\mathbf{x}_1)}\xi(t)\right|\\
&\leq\frac{\mathbf{x}_2-\mathbf{x}_1}{2}\left(\frac{1}{p+1}\right)^{\frac{1}{p}}\left(M\left[\left|\xi'(\upsilon)\right|^q+\left|\xi'(\mu)\right|^q\right]-\left(\left|\xi'(\mathbf{x}_1)\right|^q\int_0^1 h(1-\chi)d\chi+\left|\xi'(\mathbf{x}_2)\right|^q\int_0^1 h(\chi)d\chi\right)\right)^{\frac{1}{q}},
\end{split}\tag{17}$$

*holds for all x*<sub>1</sub>, *x*<sub>2</sub> ∈ [*υ*, *μ*] *and t* ∈ [*υ*, *μ*]*, where B*(*θ*) > 0 *is a normalization function and M = sup* {*h*(*χ*) : *χ* ∈ (0, 1)}*.*

**Proof.** From Lemma 2, Hölder's integral inequality, the *h*-convexity of |*ξ*′|*<sup>q</sup>* and the Jensen–Mercer inequality, it follows that

" " " " " *ξ*(*υ* + *μ* − *x*1) + *ξ*(*υ* + *μ* − *x*2) <sup>2</sup> <sup>−</sup> *<sup>B</sup>*(*θ*) *θ*(*x*<sup>2</sup> − *x*1) *CF <sup>υ</sup>*+*μ*−*x*<sup>2</sup> *I θ ξ* ! (*t*) + *CF I θ <sup>υ</sup>*+*μ*−*x*<sup>1</sup> *ξ* ! (*t*) + 2(1 − *θ*) *θ*(*x*<sup>2</sup> − *x*1) *ξ*(*t*) " " " " " <sup>≤</sup> *<sup>x</sup>*<sup>2</sup> <sup>−</sup> *<sup>x</sup>*<sup>1</sup> 2 <sup>1</sup> 0 |1 − 2*χ*| " " "*ξ* (*υ* + *μ* − ((1 − *χ*)*x*<sup>1</sup> + *χx*2)) " " " *dχ* <sup>≤</sup> *<sup>x</sup>*<sup>2</sup> <sup>−</sup> *<sup>x</sup>*<sup>1</sup> 2 <sup>1</sup> 0 |1 − 2*χ*| *p dχ* 1 *<sup>p</sup>* <sup>1</sup> 0 " " "*ξ* (*υ* + *μ* − ((1 − *χ*)*x*<sup>1</sup> + *χx*2)) " " " *q dχ* 1 *q* <sup>≤</sup> *<sup>x</sup>*<sup>2</sup> <sup>−</sup> *<sup>x</sup>*<sup>1</sup> 2 <sup>1</sup> 0 |1 − 2*χ*| *p dχ* 1 *<sup>p</sup>* <sup>1</sup> 0 *M* " " "*ξ* (*υ*) " " " *q* + " " "*ξ* (*μ*) " " " *q* − *h*(1 − *χ*) " " "*ξ* (*x*1) " " " *q* + *h*(*χ*) " " "*ξ* (*x*2) " " " *q*! *dχ* 1 *q* <sup>≤</sup> *<sup>x</sup>*<sup>2</sup> <sup>−</sup> *<sup>x</sup>*<sup>1</sup> 2 1 *p* + 1 1 *p M* " " "*ξ* (*υ*) " " " *q* + " " "*ξ* (*μ*) " " " *q* − " " "*ξ* (*x*1) " " " *<sup>q</sup>* <sup>1</sup> 0 *h*(1 − *χ*)*dχ* + " " "*ξ* (*x*2) " " " *<sup>q</sup>* <sup>1</sup> 0 *h*(*χ*)*dχ* <sup>1</sup> *q* .

This completes the proof.
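The Hölder bound of Theorem 8 can be spot-checked numerically. The sketch below assumes the classical case *h*(*χ*) = *χ* (*M* = 1, both kernel integrals equal 1/2), *p* = *q* = 2, and the illustrative data *ξ*(*x*) = *x*² on [0, 1] with *x*₁ = 0.2, *x*₂ = 0.8; with the Caputo–Fabrizio terms unfolded through their definition, the left-hand side of (17) becomes the trapezoid error, independent of *θ* and *t*.

```python
# Spot-check of (17) for h(chi) = chi, p = q = 2.  The test function, its
# derivative/antiderivative, and the sample points are assumed illustrative
# values, not data from the paper.

def theorem8_sides(xi, dxi, Xi, u, m, x1, x2, p, q):
    """Return (lhs, rhs) of the reduced form of (17); Xi is an antiderivative of xi."""
    # Left-hand side: trapezoid error over [u+m-x2, u+m-x1].
    lhs = abs((xi(u + m - x1) + xi(u + m - x2)) / 2
              - (Xi(u + m - x1) - Xi(u + m - x2)) / (x2 - x1))
    # For h(chi) = chi: int_0^1 h = int_0^1 h(1-chi) = 1/2 and M = 1.
    bracket = (abs(dxi(u))**q + abs(dxi(m))**q) \
            - 0.5 * (abs(dxi(x1))**q + abs(dxi(x2))**q)
    rhs = (x2 - x1) / 2 * (1 / (p + 1))**(1 / p) * bracket**(1 / q)
    return lhs, rhs

lhs, rhs = theorem8_sides(lambda x: x**2, lambda x: 2 * x, lambda x: x**3 / 3,
                          0.0, 1.0, 0.2, 0.8, 2, 2)
# lhs = 0.06 and rhs ≈ 0.2814, so the bound holds comfortably.
```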

**Remark 6.** *By putting h*(*χ*) = *χ, M = sup* {*h*(*χ*) : *χ* ∈ (0, 1)} = 1*, x*<sub>1</sub> = *υ and x*<sub>2</sub> = *μ in Theorem* 8*, we obtain Theorem 6 of [45].*

Next, we will prove the following theorems using the Hölder–İşcan integral inequality and the improved power mean integral inequality, respectively.

**Theorem 9.** *Assume that ξ* : *I* → R *is a positive differentiable mapping on I*◦ *and* |*ξ*′|*<sup>q</sup> is an h-convex function on* [*υ*, *μ*]*, where υ*, *μ* ∈ *I*◦ *with υ* < *μ and q* ≥ 1*. If ξ* ∈ *L*1[*υ*, *μ*] *and θ* ∈ [0, 1]*, then*

" " " " " *ξ*(*υ* + *μ* − *x*1) + *ξ*(*υ* + *μ* − *x*2) <sup>2</sup> <sup>−</sup> *<sup>B</sup>*(*θ*) *θ*(*x*<sup>2</sup> − *x*1) *CF <sup>υ</sup>*+*μ*−*x*<sup>2</sup> *I θ ξ* ! (*t*) + *CF I θ <sup>υ</sup>*+*μ*−*x*<sup>1</sup> *<sup>ξ</sup>* ! (*t*) + 2(1 − *θ*) *θ*(*x*<sup>2</sup> − *x*1) *ξ*(*t*) " " " " " <sup>≤</sup> *<sup>x</sup>*<sup>2</sup> <sup>−</sup> *<sup>x</sup>*<sup>1</sup> 2 1 2 <sup>1</sup><sup>−</sup> <sup>1</sup> *q* 1 2 *M* " " "*ξ* (*υ*) " " " *q* + " " "*ξ* (*μ*) " " " *q* − " " "*ξ* (*x*1) " " " *<sup>q</sup>* <sup>1</sup> <sup>0</sup> <sup>|</sup><sup>1</sup> <sup>−</sup> <sup>2</sup>*χ*|*h*(<sup>1</sup> <sup>−</sup> *<sup>χ</sup>*)*d<sup>χ</sup>* <sup>+</sup> " " "*ξ* (*x*2) " " " *<sup>q</sup>* <sup>1</sup> <sup>0</sup> <sup>|</sup><sup>1</sup> <sup>−</sup> <sup>2</sup>*χ*|*h*(*χ*)*d<sup>χ</sup>* <sup>1</sup> *q* , (18)

*holds for all x*<sub>1</sub>, *x*<sub>2</sub> ∈ [*υ*, *μ*] *and t* ∈ [*υ*, *μ*]*, where B*(*θ*) > 0 *is a normalization function and M = sup* {*h*(*χ*) : *χ* ∈ (0, 1)}*.*

**Proof.** For *q* > 1, by using Lemma 2, the power mean inequality, the *h*-convexity of |*ξ*′|*<sup>q</sup>* and the Jensen–Mercer inequality, we have

" " " " " *ξ*(*υ* + *μ* − *x*1) + *ξ*(*υ* + *μ* − *x*2) <sup>2</sup> <sup>−</sup> *<sup>B</sup>*(*θ*) *θ*(*x*<sup>2</sup> − *x*1) *CF <sup>υ</sup>*+*μ*−*x*<sup>2</sup> *I θ ξ* ! (*t*) + *CF I θ <sup>υ</sup>*+*μ*−*x*<sup>1</sup> *<sup>ξ</sup>* ! (*t*) + 2(1 − *θ*) *θ*(*x*<sup>2</sup> − *x*1) *ξ*(*t*) " " " " " <sup>≤</sup> *<sup>x</sup>*<sup>2</sup> <sup>−</sup> *<sup>x</sup>*<sup>1</sup> 2 <sup>1</sup> <sup>0</sup> <sup>|</sup><sup>1</sup> <sup>−</sup> <sup>2</sup>*χ*<sup>|</sup> " " "*ξ* (*υ* + *μ* − ((1 − *χ*)*x*<sup>1</sup> + *χx*2)) " " " *dχ* <sup>≤</sup> *<sup>x</sup>*<sup>2</sup> <sup>−</sup> *<sup>x</sup>*<sup>1</sup> 2 <sup>1</sup> <sup>0</sup> <sup>|</sup><sup>1</sup> <sup>−</sup> <sup>2</sup>*χ*|*d<sup>χ</sup>* <sup>1</sup><sup>−</sup> <sup>1</sup> *q* × <sup>1</sup> <sup>0</sup> <sup>|</sup><sup>1</sup> <sup>−</sup> <sup>2</sup>*χ*<sup>|</sup> " " "*ξ* (*υ* + *μ* − ((1 − *χ*)*x*<sup>1</sup> + *χx*2)) " " " *q dχ* 1 *q* <sup>≤</sup> *<sup>x</sup>*<sup>2</sup> <sup>−</sup> *<sup>x</sup>*<sup>1</sup> 2 1 2 <sup>1</sup><sup>−</sup> <sup>1</sup> *q* <sup>1</sup> <sup>0</sup> <sup>|</sup><sup>1</sup> <sup>−</sup> <sup>2</sup>*χ*<sup>|</sup> × *M* " " "*ξ* (*υ*) " " " *q* + " " "*ξ* (*μ*) " " " *q* − *h*(1 − *χ*) " " "*ξ* (*x*1) " " " *q* + *h*(*χ*) " " "*ξ* (*x*2) " " " *<sup>q</sup>*! *dχ* 1 *q* <sup>≤</sup> *<sup>x</sup>*<sup>2</sup> <sup>−</sup> *<sup>x</sup>*<sup>1</sup> 2 1 2 <sup>1</sup><sup>−</sup> <sup>1</sup> *q* 1 2 *M* " " "*ξ* (*υ*) " " " *q* + " " "*ξ* (*μ*) " " " *q* − " " "*ξ* (*x*1) " " " *<sup>q</sup>* <sup>1</sup> <sup>0</sup> <sup>|</sup><sup>1</sup> <sup>−</sup> <sup>2</sup>*χ*|*h*(<sup>1</sup> <sup>−</sup> *<sup>χ</sup>*)*d<sup>χ</sup>* <sup>+</sup> " " "*ξ* (*x*2) " " " *<sup>q</sup>* <sup>1</sup> <sup>0</sup> <sup>|</sup><sup>1</sup> <sup>−</sup> <sup>2</sup>*χ*|*h*(*χ*)*d<sup>χ</sup>* <sup>1</sup> *q* . (19)

This completes the proof.

#### **4. Some Results in Improved Hölder Setting**

In this section, we will present some results for the *h*-convex function in the setting of the Hölder–İşcan integral inequality and the improved power mean integral inequality via the Caputo–Fabrizio fractional integral operator.

**Theorem 10.** *Let ξ* : *I* → R *be a positive differentiable mapping on I*◦ *and* |*ξ*′|*<sup>q</sup> be an h-convex function on* [*υ*, *μ*]*, where υ*, *μ* ∈ *I*◦ *with υ* < *μ, and p*, *q* > 1 *with* 1/*p* + 1/*q* = 1*. If ξ* ∈ *L*1[*υ*, *μ*] *and θ* ∈ [0, 1]*, then*

" " " " " *ξ*(*υ* + *μ* − *x*1) + *ξ*(*υ* + *μ* − *x*2) <sup>2</sup> <sup>−</sup> *<sup>B</sup>*(*θ*) *θ*(*x*<sup>2</sup> − *x*1) *CF <sup>υ</sup>*+*μ*−*x*<sup>2</sup> *I θ ξ* ! (*t*) + *CF I θ <sup>υ</sup>*+*μ*−*x*<sup>1</sup> *<sup>ξ</sup>* ! (*t*) + 2(1 − *θ*) *θ*(*x*<sup>2</sup> − *x*1) *ξ*(*t*) " " " " " <sup>≤</sup> *<sup>x</sup>*<sup>2</sup> <sup>−</sup> *<sup>x</sup>*<sup>1</sup> 2 ' 1 2(*p* + 1) 1 *p* 1 2 *M* " " "*ξ* (*υ*) " " " *q* + " " "*ξ* (*μ*) " " " *q*! − " " "*ξ* (*x*1) " " " *<sup>q</sup>* <sup>1</sup> 0 (1 − *χ*)*h*(1 − *χ*)*dχ* + " " "*ξ* (*x*2) " " " *<sup>q</sup>* <sup>1</sup> 0 (1 − *χ*)*h*(*χ*)*dχ* <sup>1</sup> *q* + 1 2(*p* + 1) 1 *p* 1 2 *M* " " "*ξ* (*υ*) " " " *q* + " " "*ξ* (*μ*) " " " *q*! − " " "*ξ* (*x*1) " " " *<sup>q</sup>* <sup>1</sup> 0 *χh*(1 − *χ*)*dχ* + " " "*ξ* (*x*2) " " " *<sup>q</sup>* <sup>1</sup> 0 *χh*(*χ*)*dχ* <sup>1</sup> *q* ( , (20)

*holds for all $x\_1, x\_2 \in [\upsilon,\mu]$ and $t \in [\upsilon,\mu]$, where $B(\theta) > 0$ is a normalization function and $M = \sup\{h(\chi) : \chi \in (0,1)\}$.*

**Proof.** From Lemma 3, using the Hölder–İşcan integral inequality, the $h$-convexity of $|\xi'|^{q}$, and the Jensen–Mercer inequality, we obtain

" " " " " *ξ*(*υ* + *μ* − *x*1) + *ξ*(*υ* + *μ* − *x*2) <sup>2</sup> <sup>−</sup> *<sup>B</sup>*(*θ*) *θ*(*x*<sup>2</sup> − *x*1) *CF <sup>υ</sup>*+*μ*−*x*<sup>2</sup> *I θ ξ* ! (*t*) + *CF I θ <sup>υ</sup>*+*μ*−*x*<sup>1</sup> *ξ* ! (*t*) + 2(1 − *θ*) *θ*(*x*<sup>2</sup> − *x*1) *ξ*(*t*) " " " " " <sup>≤</sup> *<sup>x</sup>*<sup>2</sup> <sup>−</sup> *<sup>x</sup>*<sup>1</sup> 2 <sup>1</sup> 0 |1 − 2*χ*| " " "*ξ* (*υ* + *μ* − ((1 − *χ*)*x*<sup>1</sup> + *χx*2)) " " "*dχ* <sup>≤</sup> *<sup>x</sup>*<sup>2</sup> <sup>−</sup> *<sup>x</sup>*<sup>1</sup> 2 ' <sup>1</sup> 0 (1 − *χ*)|1 − 2*χ*| *p dχ* 1 *p* × <sup>1</sup> 0 (1 − *χ*) " " "*ξ* (*υ* + *μ* − ((1 − *χ*)*x*<sup>1</sup> + *χx*2)) " " " *q dχ* 1 *q* + <sup>1</sup> 0 *χ*|1 − 2*χ*| *p dχ* 1 *<sup>p</sup>* <sup>1</sup> 0 *χ* " " "*ξ* (*υ* + *μ* − ((1 − *χ*)*x*<sup>1</sup> + *χx*2)) " " " *q dχ* 1 *q* ( <sup>≤</sup> *<sup>x</sup>*<sup>2</sup> <sup>−</sup> *<sup>x</sup>*<sup>1</sup> 2 ' 1 2(*p* + 1) 1 *<sup>p</sup>* <sup>1</sup> 0 (1 − *χ*) " " "*ξ* (*υ* + *μ* − ((1 − *χ*)*x*<sup>1</sup> + *χx*2)) " " " *q dχ* 1 *q* + 1 2(*p* + 1) 1 *<sup>p</sup>* <sup>1</sup> 0 *χ* " " "*ξ* (*υ* + *μ* − ((1 − *χ*)*x*<sup>1</sup> + *χx*2)) " " " *q dχ* 1 *q* ( <sup>≤</sup> *<sup>x</sup>*<sup>2</sup> <sup>−</sup> *<sup>x</sup>*<sup>1</sup> 2 ' 1 2(*p* + 1) 1 *p* <sup>1</sup> 0 (1 − *χ*) × *M* " " "*ξ* (*υ*) " " " *q* + " " "*ξ* (*μ*) " " " *q* − *h*(1 − *χ*) " " "*ξ* (*x*1) " " " *q* + *h*(*χ*) " " "*ξ* (*x*2) " " " *<sup>q</sup>*! *dχ* 1 *q* + 1 2(*p* + 1) 1 *p* <sup>1</sup> 0 *χ* × *M* " " "*ξ* (*υ*) " " " *q* + " " "*ξ* (*μ*) " " " *q* − *h*(1 − *χ*) " " "*ξ* (*x*1) " " " *q* + *h*(*χ*) " " "*ξ* (*x*2) " " " *<sup>q</sup>*! *dχ* 1 *<sup>q</sup>* ( <sup>≤</sup> *<sup>x</sup>*<sup>2</sup> <sup>−</sup> *<sup>x</sup>*<sup>1</sup> 2 ' 1 2(*p* + 1) 1 *<sup>p</sup>* 1 2 *M* " " "*ξ* (*υ*) " " " *q* + " " "*ξ* (*μ*) " " " *q*! 
− " " "*ξ* (*x*1) " " " *<sup>q</sup>* <sup>1</sup> 0 (1 − *χ*)*h*(1 − *χ*)*dχ* + " " "*ξ* (*x*2) " " " *<sup>q</sup>* <sup>1</sup> 0 (1 − *χ*)*h*(*χ*)*dχ* <sup>1</sup> *q* + 1 2(*p* + 1) 1 *<sup>p</sup>* 1 2 *M* " " "*ξ* (*υ*) " " " *q* + " " "*ξ* (*μ*) " " " *q*! − " " "*ξ* (*x*1) " " " *<sup>q</sup>* <sup>1</sup> 0 *χh*(1 − *χ*)*dχ* + " " "*ξ* (*x*2) " " " *<sup>q</sup>* <sup>1</sup> 0 *χh*(*χ*)*dχ* <sup>1</sup> *q* ( .

This completes the proof.
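The Hölder–İşcan weight constants used in the proof above can also be checked numerically. This sketch (not from the paper) verifies that, by the symmetry $\chi \mapsto 1-\chi$, both weighted integrals equal $\frac{1}{2(p+1)}$ for a few sample values of $p$:

```python
# Numerical check of the Hölder-Işcan weight constants in (20):
#   int_0^1 (1-x)|1-2x|^p dx = int_0^1 x|1-2x|^p dx = 1/(2(p+1)),
# which follows from the symmetry x -> 1-x of |1-2x| on [0, 1].
def midpoint(f, n=100_000):
    """Composite midpoint rule on [0, 1]."""
    h = 1.0 / n
    return sum(f((k + 0.5) * h) for k in range(n)) * h

results = {}
for p in (1.5, 2.0, 3.0):
    left = midpoint(lambda x: (1 - x) * abs(1 - 2 * x) ** p)
    right = midpoint(lambda x: x * abs(1 - 2 * x) ** p)
    results[p] = (left, right)          # both should equal 1/(2(p+1))
print(results)
```
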

**Theorem 11.** *Let $\xi : I \to \mathbb{R}$ be a positive differentiable mapping on $I^{\circ}$ and let $|\xi'|^{q}$ be an $h$-convex function on $[\upsilon,\mu]$, $\upsilon,\mu \in I^{\circ}$ with $\upsilon < \mu$, for $q \ge 1$. If $\xi' \in L\_1[\upsilon,\mu]$ and $\theta \in [0,1]$, then*

$$\left|\frac{\xi(\upsilon+\mu-x\_1)+\xi(\upsilon+\mu-x\_2)}{2}-\frac{B(\theta)}{\theta(x\_2-x\_1)}\left(\left({}\_{\upsilon+\mu-x\_2}^{CF}I^{\theta}\xi\right)(t)+\left({}^{CF}I^{\theta}\_{\upsilon+\mu-x\_1}\xi\right)(t)\right)+\frac{2(1-\theta)}{\theta(x\_2-x\_1)}\xi(t)\right|$$

$$\begin{split}
&\le\frac{x\_2-x\_1}{2}\left[\left(\frac{1}{4}\right)^{1-\frac{1}{q}}\left(\frac{1}{4}M\left(\left|\xi'(\upsilon)\right|^{q}+\left|\xi'(\mu)\right|^{q}\right)\right.\right.\\
&\quad\left.-\left(\left|\xi'(x\_1)\right|^{q}\int\_{0}^{1}(1-\chi)|1-2\chi|h(1-\chi)\,d\chi+\left|\xi'(x\_2)\right|^{q}\int\_{0}^{1}(1-\chi)|1-2\chi|h(\chi)\,d\chi\right)\right)^{\frac{1}{q}}\\
&\quad+\left(\frac{1}{4}\right)^{1-\frac{1}{q}}\left(\frac{1}{4}M\left(\left|\xi'(\upsilon)\right|^{q}+\left|\xi'(\mu)\right|^{q}\right)\right.\\
&\quad\left.\left.-\left(\left|\xi'(x\_1)\right|^{q}\int\_{0}^{1}\chi|1-2\chi|h(1-\chi)\,d\chi+\left|\xi'(x\_2)\right|^{q}\int\_{0}^{1}\chi|1-2\chi|h(\chi)\,d\chi\right)\right)^{\frac{1}{q}}\right]
\end{split}\tag{21}$$

*holds for all $x\_1, x\_2 \in [\upsilon,\mu]$ and $t \in [\upsilon,\mu]$, where $B(\theta) > 0$ is a normalization function and $M = \sup\{h(\chi) : \chi \in (0,1)\}$.*

**Proof.** Take $q > 1$. From Lemma 3, using the improved power-mean integral inequality, the $h$-convexity of $|\xi'|^{q}$, and the Jensen–Mercer inequality, we have

" " " " " *ξ*(*υ* + *μ* − *x*1) + *ξ*(*υ* + *μ* − *x*2) <sup>2</sup> <sup>−</sup> *<sup>B</sup>*(*θ*) *θ*(*x*<sup>2</sup> − *x*1) *CF <sup>υ</sup>*+*μ*−*x*<sup>2</sup> *I θ ξ* ! (*t*) + *CF I θ <sup>υ</sup>*+*μ*−*x*<sup>1</sup> *ξ* ! (*t*) <sup>≤</sup> *<sup>x</sup>*<sup>2</sup> <sup>−</sup> *<sup>x</sup>*<sup>1</sup> 2 <sup>1</sup> 0 |1 − 2*χ*| " " "*ξ* (*υ* + *μ* − ((1 − *χ*)*x*<sup>1</sup> + *χx*2)) " " "*dχ* <sup>≤</sup> *<sup>x</sup>*<sup>2</sup> <sup>−</sup> *<sup>x</sup>*<sup>1</sup> 2 ' <sup>1</sup> 0 (1 − *χ*)|1 − 2*χ*|*dχ* <sup>1</sup><sup>−</sup> <sup>1</sup> *q* × <sup>1</sup> 0 (1 − *χ*)|1 − 2*χ*| " " "*ξ* (*υ* + *μ* − ((1 − *χ*)*x*<sup>1</sup> + *χx*2)) " " " *q dχ* 1 *q* + <sup>1</sup> 0 *χ*|1 − 2*χ*|*dχ* <sup>1</sup><sup>−</sup> <sup>1</sup> *q* × <sup>1</sup> 0 *χ*|1 − 2*χ*| " " "*ξ* (*υ* + *μ* − ((1 − *χ*)*x*<sup>1</sup> + *χx*2)) " " " *q dχ* 1 *q* ( <sup>≤</sup> *<sup>x</sup>*<sup>2</sup> <sup>−</sup> *<sup>x</sup>*<sup>1</sup> 2 ' 1 4 <sup>1</sup><sup>−</sup> <sup>1</sup> *<sup>q</sup>* <sup>1</sup> 0 (1 − *χ*)|1 − 2*χ*| " " "*ξ* (*υ* + *μ* − ((1 − *χ*)*x*<sup>1</sup> + *χx*2)) " " " *q dχ* 1 *q* + 1 4 <sup>1</sup><sup>−</sup> <sup>1</sup> *<sup>q</sup>* <sup>1</sup> 0 *χ*|1 − 2*χ*| " " "*ξ* (*υ* + *μ* − ((1 − *χ*)*x*<sup>1</sup> + *χx*2)) " " " *q dχ* 1 *q* ( <sup>≤</sup> *<sup>x</sup>*<sup>2</sup> <sup>−</sup> *<sup>x</sup>*<sup>1</sup> 2 ' 1 4 <sup>1</sup><sup>−</sup> <sup>1</sup> *q* <sup>1</sup> 0 (1 − *χ*)|1 − 2*χ*| × *M* " " "*ξ* (*υ*) " " " *q* + " " "*ξ* (*μ*) " " " *q* − *h*(1 − *χ*) " " "*ξ* (*x*1) " " " *q* + *h*(*χ*) " " "*ξ* (*x*2) " " " *q*! *dχ* 1 *q* + 1 4 <sup>1</sup><sup>−</sup> <sup>1</sup> *q* <sup>1</sup> 0 *χ*|1 − 2*χ*| × *M* " " "*ξ* (*υ*) " " " *q* + " " "*ξ* (*μ*) " " " *q* − *h*(1 − *χ*) " " "*ξ* (*x*1) " " " *q* + *h*(*χ*) " " "*ξ* (*x*2) " " " *q*! 
*dχ* 1 *<sup>q</sup>* ( <sup>≤</sup> *<sup>x</sup>*<sup>2</sup> <sup>−</sup> *<sup>x</sup>*<sup>1</sup> 2 ' 1 4 <sup>1</sup><sup>−</sup> <sup>1</sup> *<sup>q</sup>* 1 4 *M* " " "*ξ* (*υ*) " " " *q* + " " "*ξ* (*μ*) " " " *q*! − " " "*ξ* (*x*1) " " " *<sup>q</sup>* <sup>1</sup> 0 (1 − *χ*)|1 − 2*χ*|*h*(1 − *χ*)*dχ* + " " "*ξ* (*x*2) " " " *<sup>q</sup>* <sup>1</sup> 0 (1 − *χ*)|1 − 2*χ*|*h*(*χ*)*dχ* <sup>1</sup> *q* + 1 4 <sup>1</sup><sup>−</sup> <sup>1</sup> *<sup>q</sup>* 1 4 *M* " " "*ξ* (*υ*) " " " *q* + " " "*ξ* (*μ*) " " " *q*! − " " "*ξ* (*x*1) " " " *<sup>q</sup>* <sup>1</sup> 0 *χ*|1 − 2*χ*|*h*(1 − *χ*)*dχ* + " " "*ξ* (*x*2) " " " *<sup>q</sup>* <sup>1</sup> 0 *χ*|1 − 2*χ*|*h*(*χ*)*dχ* <sup>1</sup> *q* ( .

This completes the proof.
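All of the proofs in this section rely on the Jensen–Mercer inequality: for a convex $f$ and $x \in [\upsilon, \mu]$, $f(\upsilon+\mu-x) \le f(\upsilon)+f(\mu)-f(x)$. The following sketch (an illustration, not from the paper, using the convex test function $f(t)=t^2$) checks this on random sample points:

```python
# Illustration of the Jensen-Mercer inequality used throughout Section 4:
# for convex f and x in [v, m],  f(v + m - x) <= f(v) + f(m) - f(x).
# The test function f(t) = t^2 is an assumption chosen for this example.
import random

def f(t):
    return t * t  # a convex test function

random.seed(0)
v, m = 1.0, 3.0
for _ in range(1000):
    x = random.uniform(v, m)
    assert f(v + m - x) <= f(v) + f(m) - f(x) + 1e-12
print("Jensen-Mercer inequality holds on all samples")
```

Note that equality holds at the endpoints $x=\upsilon$ and $x=\mu$, so the bound is tight.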

#### **5. Conclusions**

In this note, we established Hermite–Jensen–Mercer-type inequalities for *h*-convex functions in the Caputo–Fabrizio setting, and various Caputo–Fabrizio fractional integral inequalities were provided as well. We expect this work to stimulate further research on fractional integral versions of the Hermite–Hadamard inequalities. The remarks following the results verify that they generalize earlier ones. These results are new and suggest several interesting directions. In future work, we intend to prove inequalities (2) and (8) by other methods.

**Author Contributions:** Conceptualization, M.V.-C., M.S.S., S.S., M.S.Z. and A.K.; Funding acquisition, M.V.-C.; Investigation, M.V.-C., M.S.S., S.S., M.S.Z. and A.K.; Methodology, M.V.-C. and M.S.Z.; Writing—original draft, M.V.-C., M.S.S., S.S., M.S.Z. and A.K.; Writing—review and editing, M.V.- C., M.S.S., S.S., M.S.Z. and A.K. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research received external funding from the Dirección de Investigación of the Pontifical Catholic University of Ecuador.

**Acknowledgments:** The authors thank the Pontifical Catholic University of Ecuador for the technical support given to this project.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


### *Article* **Semidefinite Multiobjective Mathematical Programming Problems with Vanishing Constraints Using Convexificators**

**Kin Keung Lai 1,\*,†, Mohd Hassan 2,†, Sanjeev Kumar Singh 2,†, Jitendra Kumar Maurya 3,† and Shashi Kant Mishra 2,†**


**Abstract:** In this paper, we establish Fritz John stationary conditions for nonsmooth, nonlinear, semidefinite, multiobjective programs with vanishing constraints in terms of convexificators and introduce generalized Cottle-type and generalized Guignard-type constraint qualifications to derive strong *S*-stationary conditions from the Fritz John stationary conditions. Further, we establish strong *S*-stationary necessary and sufficient conditions independently of the Fritz John conditions. The optimality results for the multiobjective semidefinite optimization problem in this paper are related to two recent articles by Treanţă (2021), who discussed duality theorems for a special class of quasiinvex multiobjective optimization problems with interval-valued components. The study in this article can also be extended to interval-valued optimization motivated by Treanţă (2021). Some examples are provided to validate the established results.

**Keywords:** multiobjective programs with vanishing constraints; semidefinite programming; convexificators; nonsmooth analysis; constraint qualifications

#### **1. Introduction**

Nonlinear semidefinite programming problems (*SDP*) include several classes of optimization problems, such as linear programming, quadratic programming, second-order cone programming [1], and semidefinite programming [2]. Nonlinear semidefinite programming has broad applications in system control [3], truss topology optimization [4], and several other fields, and it has been a focal point of optimization research for the last two decades. For instance, the library COMPleib [5] collects 168 test examples of nonlinear semidefinite programs from control system design, academia, and many real-life problems.

In this paper, we consider the following semidefinite multiobjective mathematical programs with vanishing constraints (*S* − *MMPVC*),

$$\min f(A)=(\mathfrak{f}\_1(A),\ldots,\mathfrak{f}\_p(A))\tag{1}$$

$$\text{subject to } A\in M=\{A\in\mathbb{M}^{n}\_{+}:\mathcal{H}\_i(A)\ge 0,\ \mathcal{G}\_i(A)\mathcal{H}\_i(A)\le 0,\ i=1,\ldots,m\},$$

where $\mathbb{M}^{n}\_{+}$ is the set of $n \times n$ positive semidefinite matrices, and $\mathfrak{f}\_i : \mathbb{M}^{n}\_{+} \to \mathbb{R}\cup\{+\infty\}$ $(i=1,\ldots,p)$ and $\mathcal{G}\_i, \mathcal{H}\_i : \mathbb{M}^{n}\_{+} \to \mathbb{R}\cup\{+\infty\}$ $(i=1,\ldots,m)$ are extended real-valued locally Lipschitz functions.
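To make the shape of the feasible set $M$ concrete, the toy sketch below (all constraint functions are hypothetical choices for illustration, not from the paper) checks feasibility of a $2\times 2$ symmetric matrix against positive semidefiniteness, $\mathcal{H}\_i(A) \ge 0$, and the vanishing constraint $\mathcal{G}\_i(A)\mathcal{H}\_i(A) \le 0$:

```python
# Toy feasibility check for the set M in (1); H and G below are
# illustrative placeholder constraint functions, not taken from the paper.
import numpy as np

def H(A):
    return [np.trace(A) - 1.0]   # example: trace at least 1

def G(A):
    return [A[0, 1]]             # example: off-diagonal entry

def feasible(A, tol=1e-12):
    if np.any(np.linalg.eigvalsh(A) < -tol):   # A must be psd
        return False
    return all(h >= -tol and g * h <= tol for h, g in zip(H(A), G(A)))

A = np.array([[1.0, 0.0], [0.0, 1.0]])
print(feasible(A))   # psd, H = [1] >= 0, and G*H = 0 <= 0
```
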

Nonlinear semidefinite programming problems consist of the nonlinear problems where vector variables are replaced by symmetric positive semidefinite matrices. Nonlinear SDPs have been studied extensively due to a wide range of applications; see, for instance, [6,7]. Shapiro [6] established first- and second-order necessary and sufficient optimality conditions under convexity assumptions. Forsgren [8] extended those results to nonconvex semidefinite programming. Further, Sun et al. [7] and Sun [9] discussed algorithmic approaches for solving nonlinear semidefinite programming problems. Yamashita and Yabe [10] introduced numerical methods to solve nonlinear SDPs and studied their algorithmic consequences. Recently, Golestani and Nobakhtian [11] proposed the generalized Abadie constraint qualification (*GACQ*) and established necessary and sufficient optimality conditions for nonlinear semidefinite programming problems using convexificators.

**Citation:** Lai, K.K.; Hassan, M.; Singh, S.K.; Maurya, J.K.; Mishra, S.K. Semidefinite Multiobjective Mathematical Programming Problems with Vanishing Constraints Using Convexificators. *Fractal Fract.* **2022**, *6*, 3. https://doi.org/10.3390/fractalfract6010003

Academic Editor: Savin Treanţă

Received: 17 November 2021; Accepted: 19 December 2021; Published: 22 December 2021

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

Mathematical programs with vanishing constraints (*MPVC*) have many applications in truss topology optimization [12], pathfinding problems with logic communication constraints in robot motion planning [13], mixed-integer nonlinear optimal control problems [14], scheduling problems with disjoint feasible regions in power generation dispatch [15], and many more fields of current research [16–18]. Mathematical programs with vanishing constraints were introduced by Achtziger and Kanzow in 2008. MPVC is closely related to the optimization problem known as mathematical programs with equilibrium constraints (MPEC); for more details on MPEC, we refer to [19–28].

Due to the constraints $\mathcal{G}\_i(z)\mathcal{H}\_i(z) \le 0$, the feasible set may fail to be convex and may even be disconnected, and most of the basic constraint qualifications, such as the linearly independent constraint qualification and the Mangasarian–Fromovitz constraint qualification, do not hold; therefore, the standard Karush–Kuhn–Tucker conditions are of no use in such cases. Several constraint qualifications and necessary optimality conditions were established in [12] for mathematical programs with vanishing constraints. First-order sufficient optimality conditions, as well as second-order necessary and sufficient optimality conditions, were discussed in [29] using generalized convexity. In [30], various stationary conditions were derived under weaker constraint qualifications. Further, Hoheisel and Kanzow [31] investigated necessary and sufficient optimality conditions through Abadie- and Guignard-type constraint qualifications for mathematical programs with vanishing constraints. For more details on MPVC, we refer to [16,32,33] and the references therein.

Multiobjective optimization problems (MOP) play a vital role in science, technology, business, economics, and many other fields of daily life, where optimal decisions must be taken among many conflicting objectives and all objective functions are to be optimized simultaneously. The conflict among objectives changes the nature of a solution of an (MOP) compared to the optimal solution of a single-objective problem; therefore, terms such as weak efficient point (weak Pareto optimal solution) and efficient point (Pareto optimal solution) are used for the solutions of an (MOP). The concept of Pareto optimal solutions was introduced by the Italian civil engineer and economist Vilfredo Pareto and was applied in studies of economic efficiency and income distribution. Basic concepts and literature on the solution of multiobjective optimization problems can be found in [34,35]. Maeda [36] studied strong KKT optimality conditions for differentiable functions. Preda and Chitescu [37] extended these results to semidifferentiable functions. Further, Li [38] discussed these results for the nonsmooth case. Recently, Lai et al. [39] proposed saddle point necessary and sufficient Pareto optimality conditions for multiobjective convex optimization problems. Treanţă [40] established a dual pair of multiobjective interval-valued variational control problems. Further, Treanţă [41] discussed duality theorems for a special class of quasiinvex multiobjective optimization problems with interval-valued components.

Since nonsmoothness in optimization arises naturally from the mathematical formulation of real-world problems, effective ways of solving such problems must be found. Even the solution of some smooth problems sometimes requires nonsmooth optimization techniques in order to simplify their form. Thus, nonsmooth optimization is an important branch of mathematical programming based on classical concepts of variational analysis and generalized derivatives. In recent years, research in nonsmooth analysis has focused on generalized subdifferentials that give sharp results and good calculus rules for nonsmooth functions. It is convexificators [42] that have been used to extend, unify, and sharpen results in various areas of optimization. Jeyakumar and Luc [43] provided a more sophisticated version by introducing the notion of convexificators that are closed sets but not necessarily bounded or convex. This version may consist of only finitely many points, which is advantageous from an application point of view. We use the convexificator of Jeyakumar and Luc [43] in our study.

Recently, Dorsch et al. [44] established a new result for nonlinear semidefinite programming (NLSDP) where almost all linear perturbations of a given NLSDP are shown to be nondegenerate. Semidefinite programming is a powerful framework from convex optimization that has striking potential for data science applications [45]. Sequential optimality conditions have played a vital role in unifying and extending global convergence results for several classes of algorithms for general nonlinear optimization, Andreani et al. [46] extended these concepts for nonlinear semidefinite programming. Andreani et al. [47] discussed simple extensions of constant rank-type constraint qualifications to semidefinite programming, which are based on the Approximate Karush–Kuhn–Tucker necessary optimality condition and on the application of the reduction approach.

Motivated by the above-mentioned work, we propose some new constraint qualifications to establish necessary and sufficient optimality conditions for nonsmooth, nonlinear, semidefinite, multiobjective mathematical programs with vanishing constraints. The organization of this article is as follows: In Section 2, we recall some needed preliminaries and fundamental results. In Section 3, we establish Fritz John necessary optimality conditions and propose generalized Cottle- and generalized Guignard-type constraint qualifications to establish strong Karush–Kuhn–Tucker necessary optimality conditions; sufficient optimality conditions are also established under generalized convexity. Section 4 presents the conclusion of the paper, as well as some possible directions for future work.

#### **2. Preliminaries**

This section recalls the notation, definitions, and preliminary results used throughout the paper. $\mathbb{M}^{n}$ denotes the space of $n \times n$ symmetric matrices. The notation $A \succeq 0$ ($A \succ 0$) means that $A$ is a positive semidefinite (positive definite) matrix, and $\mathbb{M}^{n}\_{+}$ ($\mathbb{M}^{n}\_{++}$) denotes the set of all positive semidefinite (positive definite) matrices. The inner product of symmetric matrices $P, Q \in \mathbb{M}^{n}$ is $\langle P, Q\rangle = tr(PQ)$, where $tr(\cdot)$ denotes the sum of the diagonal elements of a square matrix. The inner product of $x = (x\_1,\ldots,x\_n), y = (y\_1,\ldots,y\_n) \in \mathbb{R}^{n}$ is $x^{T}y = \sum\_{i=1}^{n} x\_i y\_i$. The norm associated with the matrix inner product

is called the Frobenius norm, $\|P\|\_{F} = tr(PP)^{\frac{1}{2}} = \big(\sum\_{i,j=1}^{n} a\_{ij}^{2}\big)^{\frac{1}{2}}$. The vector space $\mathbb{M}^{n}$ with this norm is a Hilbert space, and $\mathbb{M}^{n}\_{+}$ is a closed convex cone in $\mathbb{M}^{n}$. The interior of the set of positive semidefinite matrices is the set of positive definite matrices; for more basics on matrices, see [48,49]. For $y, z \in \mathbb{R}^{n}$,

$$\begin{aligned}
y \leqq z &\iff y\_i \le z\_i,\ i=1,\ldots,n,\\
y \le z &\iff y \leqq z,\ y \ne z,\\
y < z &\iff y\_i < z\_i,\ i=1,\ldots,n.
\end{aligned}$$
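The matrix inner product and Frobenius norm defined above can be illustrated directly; the sketch below (an illustration, not from the paper) computes $\langle P, Q\rangle = tr(PQ)$ and $\|P\|\_F = tr(PP)^{1/2}$ for a pair of symmetric matrices, and confirms $P \in \mathbb{M}^{2}\_{+}$ via its eigenvalues:

```python
import numpy as np

# The matrix inner product <P, Q> = tr(PQ) on symmetric matrices, and the
# induced Frobenius norm ||P||_F = tr(PP)^(1/2); P and Q are example data.
P = np.array([[2.0, 1.0], [1.0, 3.0]])
Q = np.array([[1.0, 0.5], [0.5, 2.0]])

inner = np.trace(P @ Q)         # <P, Q> = tr(PQ)
fro = np.sqrt(np.trace(P @ P))  # Frobenius norm of symmetric P

assert np.isclose(fro, np.linalg.norm(P, "fro"))   # matches NumPy's 'fro'
assert np.all(np.linalg.eigvalsh(P) >= 0)          # P lies in M^2_+
print(inner, fro)
```
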

Some index sets are as follows

$$\begin{aligned}
M &= \{A\in\mathbb{M}^{n}\_{+} : \mathcal{H}\_i(A)\ge 0,\ \mathcal{G}\_i(A)\mathcal{H}\_i(A)\le 0\}, \qquad \theta\_i(A)=\mathcal{G}\_i(A)\mathcal{H}\_i(A),\\
\mathfrak{I}\_{\mathfrak{f}} &= \{1,\ldots,p\}, \qquad \mathfrak{I}^{k}\_{\mathfrak{f}} = \{1,\ldots,p\}\setminus\{k\}, \qquad \mathfrak{I}\_{\mathcal{G}\mathcal{H}} := \{1,\ldots,m\},\\
Q &= \{A\in\mathbb{M}^{n}\_{+} : \mathfrak{f}\_i(A)\le\mathfrak{f}\_i(\bar{A})\ (i\in\mathfrak{I}\_{\mathfrak{f}}),\ \mathcal{H}\_i(A)\ge 0,\ \mathcal{G}\_i(A)\mathcal{H}\_i(A)\le 0\},\\
Q^{k} &= \{A\in\mathbb{M}^{n}\_{+} : \mathfrak{f}\_i(A)\le\mathfrak{f}\_i(\bar{A})\ (i\in\mathfrak{I}^{k}\_{\mathfrak{f}}),\ \mathcal{H}\_i(A)\ge 0,\ \mathcal{G}\_i(A)\mathcal{H}\_i(A)\le 0\}, \quad \text{where } \bar{A}\in M,\\
\mathbb{R}^{n}\_{+} &= \{x\in\mathbb{R}^{n} : x\ge 0\}, \qquad \mathbb{R}^{n}\_{++} = \{x\in\mathbb{R}^{n} : x>0\},
\end{aligned}$$

$$\begin{aligned}
\mathfrak{I}\_{0} &= \mathfrak{I}\_{0}(\bar{A}) := \{i\in\mathfrak{I}\_{\mathcal{G}\mathcal{H}} : \mathcal{H}\_i(\bar{A})=0\},\\
\mathfrak{I}\_{+} &= \mathfrak{I}\_{+}(\bar{A}) := \{i\in\mathfrak{I}\_{\mathcal{G}\mathcal{H}} : \mathcal{H}\_i(\bar{A})>0\},\\
\mathfrak{I}\_{0+} &= \mathfrak{I}\_{0+}(\bar{A}) := \{i\in\mathfrak{I}\_{\mathcal{G}\mathcal{H}} : \mathcal{H}\_i(\bar{A})=0,\ \mathcal{G}\_i(\bar{A})>0\},\\
\mathfrak{I}\_{00} &= \mathfrak{I}\_{00}(\bar{A}) := \{i\in\mathfrak{I}\_{\mathcal{G}\mathcal{H}} : \mathcal{H}\_i(\bar{A})=0,\ \mathcal{G}\_i(\bar{A})=0\},\\
\mathfrak{I}\_{0-} &= \mathfrak{I}\_{0-}(\bar{A}) := \{i\in\mathfrak{I}\_{\mathcal{G}\mathcal{H}} : \mathcal{H}\_i(\bar{A})=0,\ \mathcal{G}\_i(\bar{A})<0\},\\
\mathfrak{I}\_{+0} &= \mathfrak{I}\_{+0}(\bar{A}) := \{i\in\mathfrak{I}\_{\mathcal{G}\mathcal{H}} : \mathcal{H}\_i(\bar{A})>0,\ \mathcal{G}\_i(\bar{A})=0\},\\
\mathfrak{I}\_{+-} &= \mathfrak{I}\_{+-}(\bar{A}) := \{i\in\mathfrak{I}\_{\mathcal{G}\mathcal{H}} : \mathcal{H}\_i(\bar{A})>0,\ \mathcal{G}\_i(\bar{A})<0\}.
\end{aligned}$$
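The index-set partition above is purely combinatorial, so it can be sketched as a small helper. The function below is a hypothetical illustration (names and inputs are not from the paper): given the values $\mathcal{H}\_i(\bar{A})$ and $\mathcal{G}\_i(\bar{A})$ at a feasible point, it sorts each index into the sets just defined:

```python
# Hypothetical helper (illustration only) partitioning the constraint
# indices at a feasible point into I0, I+, I0+, I00, I0-, I+0, I+-,
# given the lists of values H_i(Abar) and G_i(Abar).
def partition_indices(H, G):
    sets = {k: [] for k in ("I0", "I+", "I0+", "I00", "I0-", "I+0", "I+-")}
    for i, (h, g) in enumerate(zip(H, G)):
        if h == 0:
            sets["I0"].append(i)
            sets["I0+" if g > 0 else "I00" if g == 0 else "I0-"].append(i)
        else:  # h > 0 at a feasible point
            sets["I+"].append(i)
            if g == 0:
                sets["I+0"].append(i)
            elif g < 0:
                sets["I+-"].append(i)
    return sets

sets = partition_indices([0.0, 0.0, 0.0, 2.0, 1.0], [1.0, 0.0, -3.0, 0.0, -1.0])
print(sets)
```

Note that $\mathfrak{I}\_{0}$ and $\mathfrak{I}\_{+}$ partition $\mathfrak{I}\_{\mathcal{G}\mathcal{H}}$, and each is further refined by the sign of $\mathcal{G}\_i(\bar{A})$.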

We discuss the solution concepts of *S* − *MMPVC* motivated by Miettinen [34].

**Definition 1.** *A feasible point $\bar{A}$ is said to be a weak efficient solution of $S-MMPVC$ if there is no $A \in M$ such that*

$$\mathfrak{f}\_i(A) < \mathfrak{f}\_i(\bar{A}),\ \forall\, i\in\mathfrak{I}\_{\mathfrak{f}}.$$

**Definition 2.** *A feasible point $\bar{A}$ is said to be a local weak efficient solution of $S-MMPVC$ if there exists a neighborhood $\mathcal{N}(\bar{A})$ of $\bar{A}$ such that there is no $A \in M \cap \mathcal{N}(\bar{A})$ for which*

$$\mathfrak{f}\_i(A) < \mathfrak{f}\_i(\bar{A}),\ \forall\, i\in\mathfrak{I}\_{\mathfrak{f}}$$

*holds.*

Given a nonempty subset *M* of M*n*, the closure, the convex hull and the convex cone (including the origin) generated by *M* are denoted by *clM*, *coM*, and *coneM*, respectively. The negative and the strictly negative polar cone of *M* are defined respectively by

$$M^{-} := \{V\in\mathbb{M}^{n} : \langle V, W\rangle \le 0,\ \forall\, W\in M\}, \qquad M^{s} := \{V\in\mathbb{M}^{n} : \langle V, W\rangle < 0,\ \forall\, W\in M\}.$$

The contingent cone $T(M, A)$ to $M$ at a point $A \in clM$ is defined by

$$T(M, A) := \{V\in\mathbb{M}^{n} : \exists\, t\_n \downarrow 0,\ V\_n \to V \text{ such that } A + t\_n V\_n \in M\ \forall\, n\}.$$

The notion of semi-regular convexificators [43] will be used here. It is observed that, for locally Lipschitz functions, many generalized subdifferentials, such as the Clarke subdifferential [50], the Michel–Penot subdifferential [51], the Mordukhovich subdifferential [52], and the Treiman subdifferential [53], are examples of upper semi-regular convexificators.

Let $\mathfrak{f} : \mathbb{M}^{n} \to \mathbb{R}\cup\{+\infty\}$ be an extended real-valued function and let $A \in \mathbb{M}^{n}$ be a point at which $\mathfrak{f}$ is finite. The lower and upper Dini derivatives of $\mathfrak{f}$ at $A$ in the direction $V \in \mathbb{M}^{n}$ are defined, respectively, by

$$\mathfrak{f}^-(A;V) := \liminf\_{t \downarrow 0} \frac{\mathfrak{f}(A+tV) - \mathfrak{f}(A)}{t},$$

$$\mathfrak{f}^+(A;V) := \limsup\_{t \downarrow 0} \frac{\mathfrak{f}(A+tV) - \mathfrak{f}(A)}{t}.$$
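The Dini derivatives can be approximated by difference quotients along a sequence $t \downarrow 0$. The rough sketch below (an illustration, not from the paper) uses $f(x) = |x|$ at $A = 0$, for which both Dini derivatives in direction $V$ equal $|V|$:

```python
# Crude finite-difference surrogates for the lower/upper Dini derivatives
# of f at A in direction V; for f(x) = |x| at A = 0 both equal |V|.
def dini(f, A, V, ts):
    quotients = [(f(A + t * V) - f(A)) / t for t in ts]
    return min(quotients), max(quotients)  # stand-ins for liminf / limsup

ts = [10.0 ** (-k) for k in range(1, 10)]
lower, upper = dini(abs, 0.0, -2.0, ts)
print(lower, upper)  # both approach |V| = 2
```
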

Now, we recall the definition of upper and lower semi-regular convexificators from [42,43].

**Definition 3.** *Let* <sup>f</sup> : <sup>M</sup>*<sup>n</sup>* <sup>→</sup> <sup>R</sup> ∪ {+∞} *be an extended real-valued function and let <sup>A</sup>* <sup>∈</sup> <sup>M</sup>*<sup>n</sup> at which* <sup>f</sup> *is finite. The function* <sup>f</sup> *is said to admit an upper semi-regular convexificator <sup>∂</sup>*∗f(*A*) <sup>⊂</sup> <sup>M</sup>*<sup>n</sup> at A if <sup>∂</sup>*∗f(*A*) *is closed and for each V* <sup>∈</sup> <sup>M</sup>*n*,

$$\mathfrak{f}^+(A;V) \le \sup\_{\xi \in \partial^\*\mathfrak{f}(A)} \langle \xi, V\rangle.$$

*The function* <sup>f</sup> *is said to admit a lower semi-regular convexificator <sup>∂</sup>*∗f(*A*) <sup>⊂</sup> <sup>M</sup>*<sup>n</sup> at <sup>A</sup> if <sup>∂</sup>*∗f(*A*) *is closed and for each V* <sup>∈</sup> <sup>M</sup>*<sup>n</sup>*

$$\mathfrak{f}^-(A;V) \ge \inf\_{\xi \in \partial^\*\mathfrak{f}(A)} \langle \xi, V\rangle.$$

**Definition 4.** *The set $\partial\mathfrak{f}(A)$ is said to be a semi-regular convexificator if it is both an upper semi-regular convexificator and a lower semi-regular convexificator.*

**Definition 5.** *Let* <sup>f</sup> : <sup>M</sup>*<sup>n</sup>* <sup>→</sup> <sup>R</sup> ∪ {+∞} *be an extended real-valued function. Suppose that <sup>A</sup>* <sup>∈</sup> <sup>M</sup>*n*, <sup>f</sup>(*A*) *is finite and admits a convexificator <sup>∂</sup>*∗f(*A*) *at A*.

• <sup>f</sup> *is said to be <sup>∂</sup>*∗−*convex at A if, and only if, for all B* <sup>∈</sup> <sup>M</sup>*n*,

$$\mathfrak{f}(B) - \mathfrak{f}(A) \ge \langle \xi, B - A\rangle,\ \forall\, \xi \in \partial^\*\mathfrak{f}(A).$$

• <sup>f</sup> *is said to be strictly <sup>∂</sup>*∗−*convex at A if, and only if, for all B* <sup>∈</sup> <sup>M</sup>*n*,

$$
\mathfrak{f}(B) - \mathfrak{f}(A) > \langle \xi, B - A\rangle,\ \forall\, \xi \in \partial^\*\mathfrak{f}(A).
$$

• <sup>f</sup> *is said to be <sup>∂</sup>*∗*-pseudoconvex at A if, and only if, for all B* <sup>∈</sup> <sup>M</sup>*n*,

$$\mathfrak{f}(B) < \mathfrak{f}(A) \implies \langle \xi, B - A\rangle < 0,\ \forall\, \xi \in \partial^\*\mathfrak{f}(A).$$

• <sup>f</sup> *is said to be strictly <sup>∂</sup>*∗*-pseudoconvex at A if, and only if, for all B*(<sup>=</sup> *<sup>A</sup>*) <sup>∈</sup> <sup>M</sup>*n*,

$$
\langle \xi, B - A\rangle \ge 0 \implies \mathfrak{f}(B) > \mathfrak{f}(A),\ \forall\, \xi \in \partial^\*\mathfrak{f}(A).
$$

• <sup>f</sup> *is said to be <sup>∂</sup>*∗−*quasiconvex at A if, and only if, for all B* <sup>∈</sup> <sup>M</sup>*n*,

$$\mathfrak{f}(B) \le \mathfrak{f}(A) \implies \langle \xi, B - A\rangle \le 0,\ \forall\, \xi \in \partial^\*\mathfrak{f}(A).$$
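For a smooth convex function, the singleton $\{\nabla\mathfrak{f}(A)\}$ is a convexificator and $\partial^\*$-convexity reduces to the usual gradient inequality. The sketch below (an illustration, not from the paper) checks this for $\mathfrak{f}(A) = tr(AA) = \|A\|\_F^2$ on symmetric matrices, whose gradient is $2A$:

```python
import numpy as np

# For the smooth convex f(A) = tr(A A) on symmetric matrices, the singleton
# {2A} acts as a convexificator, and the d*-convexity inequality
#   f(B) - f(A) >= <xi, B - A>  for all xi in d*f(A)
# is just the gradient inequality; it holds since it reduces to
#   tr((B - A)^2) = ||B - A||_F^2 >= 0.
rng = np.random.default_rng(0)

def f(A):
    return np.trace(A @ A)

def sym(n):
    X = rng.standard_normal((n, n))
    return (X + X.T) / 2

for _ in range(100):
    A, B = sym(3), sym(3)
    xi = 2 * A   # the single element of the convexificator at A
    assert f(B) - f(A) >= np.trace(xi @ (B - A)) - 1e-9
print("gradient inequality verified on all samples")
```
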

Now, we recall a generalized version of Farkas' lemma [54], which plays a vital role in the derivation of the main results of this paper.

**Lemma 1.** *(Farkas' lemma) Let $\mathfrak{h} : \mathbb{M}^{n} \to \mathbb{R}^{m}$ be a convex mapping. Then, the system*

$$\begin{cases} \mathfrak{h}(A) < 0,\\ A \in \mathbb{M}^{n}\_{++}, \end{cases}$$

*has no solution if, and only if, there exists $(\lambda, \mathcal{W}) \in \mathbb{R}^{m} \times \mathbb{M}^{n}$ with $\lambda \ge 0$, $\mathcal{W} \succeq 0$, and $(\lambda, \mathcal{W}) \ne (0,0)$, such that*

$$
\lambda^{T}\mathfrak{h}(A) + \langle \mathcal{W}, A\rangle \ge 0,\ \forall\, A \in \mathbb{M}^{n}.
$$

#### **3. Optimality Conditions**

In this section, we deal with the traditional Fritz John necessary optimality conditions and propose some constraint qualifications to establish strong Karush–Kuhn–Tucker necessary optimality conditions, as well as sufficient optimality conditions for semidefinite multiobjective mathematical programs with vanishing constraints in terms of convexificators.

**Theorem 1.** *(Fritz John necessary optimality conditions) Let $\bar{A}$ be a local weak efficient solution of $(S-MMPVC)$. Suppose that $\mathfrak{f}\_i$ $(i\in\mathfrak{I}\_{\mathfrak{f}})$, $\mathcal{H}\_i$ $(i\in\mathfrak{I}\_{0})$, and $\mathcal{G}\_i$ $(i\in\mathfrak{I}\_{+0})$ admit bounded upper semi-regular convexificators, and that each $\mathcal{H}\_i$ $(i\in\mathfrak{I}\_{+})$ and $\mathcal{G}\_i$ $(i\in\mathfrak{I}\_{0}\cup\mathfrak{I}\_{+-})$ is continuous. Then, there exist $\bar{\lambda}^{\mathfrak{f}}\_{i}\ge 0$ $(i\in\mathfrak{I}\_{\mathfrak{f}})$, $\bar{\lambda}^{\mathcal{H}}\_{i}\ge 0$ $(i\in\mathfrak{I}\_{0-}\cup\mathfrak{I}\_{00})$, $\bar{\lambda}^{\mathcal{H}}\_{i}$ free $(i\in\mathfrak{I}\_{0+})$, $\bar{\lambda}^{\mathcal{G}}\_{i}\ge 0$ $(i\in\mathfrak{I}\_{+0})$, $\bar{\lambda}^{\mathcal{G}}\_{i}=0$ $(i\in\mathfrak{I}\_{0}\cup\mathfrak{I}\_{+-})$, and $\bar{\mathcal{W}}\in\mathbb{M}^{n}\_{+}$, not all of which can be simultaneously zero, such that*

$$0 \in \sum\_{i=1}^{p}\bar{\lambda}^{\mathfrak{f}}\_{i}\,co\,\partial^\*\mathfrak{f}\_i(\bar{A}) + \sum\_{i=1}^{m}\left[\bar{\lambda}^{\mathcal{G}}\_{i}\,co\,\partial^\*\mathcal{G}\_i(\bar{A}) - \bar{\lambda}^{\mathcal{H}}\_{i}\,co\,\partial^\*\mathcal{H}\_i(\bar{A})\right] - \bar{\mathcal{W}}, \qquad \langle \bar{A}, \bar{\mathcal{W}}\rangle = 0.$$

**Proof.** We have to show that

$$\begin{split}
&\left(\left(\bigcup\_{i\in\mathfrak{I}\_{\mathfrak{f}}}\partial^\*\mathfrak{f}\_i(\bar{A})\right)^{s}+\bar{A}\right)\bigcap\left(\left(\bigcup\_{i\in\mathfrak{I}\_{0+}\cup\mathfrak{I}\_{00}\cup\mathfrak{I}\_{0-}}-\partial^\*\mathcal{H}\_i(\bar{A})\right)^{s}+\bar{A}\right)\\
&\bigcap\left(\left(\bigcup\_{i\in\mathfrak{I}\_{0+}\cup\mathfrak{I}\_{00}\cup\mathfrak{I}\_{0-}\cup\mathfrak{I}\_{+0}}\partial^\*\theta\_i(\bar{A})\right)^{s}+\bar{A}\right)\bigcap\mathbb{M}^{n}\_{++}=\emptyset.
\end{split}\tag{2}$$

Suppose, on the contrary,

$$\begin{split} A \in \left( \Big( \bigcup_{i \in \mathfrak{I}_{\mathfrak{f}}} \partial^{*}\mathfrak{f}_i(\bar{A}) \Big)^{s} + \bar{A} \right) \bigcap \left( \Big( \bigcup_{i \in \mathfrak{I}_{0+} \cup \mathfrak{I}_{00} \cup \mathfrak{I}_{0-}} -\partial^{*}\mathcal{H}_i(\bar{A}) \Big)^{s} + \bar{A} \right) \\ \bigcap \left( \Big( \bigcup_{i \in \mathfrak{I}_{0+} \cup \mathfrak{I}_{00} \cup \mathfrak{I}_{0-} \cup \mathfrak{I}_{+0}} \partial^{*}\theta_i(\bar{A}) \Big)^{s} + \bar{A} \right) \bigcap \mathcal{M}^n_{++}. \end{split} \tag{3}$$

As $\mathfrak{f}_i$ $(i \in \mathfrak{I}_{\mathfrak{f}})$, $\mathcal{H}_i$ $(i \in \mathfrak{I}_{0+} \cup \mathfrak{I}_{00} \cup \mathfrak{I}_{0-})$ and $\theta_i$ $(i \in \mathfrak{I}_{0+} \cup \mathfrak{I}_{00} \cup \mathfrak{I}_{0-} \cup \mathfrak{I}_{+0})$ admit bounded upper semi-regular convexificators, we deduce that

$$\begin{aligned} \mathfrak{f}^{+}_i(\bar{A}; A - \bar{A}) &< 0, \ i \in \mathfrak{I}_{\mathfrak{f}}, \\ -\mathcal{H}^{+}_i(\bar{A}; A - \bar{A}) &< 0, \ i \in \mathfrak{I}_{0+} \cup \mathfrak{I}_{00} \cup \mathfrak{I}_{0-}, \\ \theta^{+}_i(\bar{A}; A - \bar{A}) &< 0, \ i \in \mathfrak{I}_{0+} \cup \mathfrak{I}_{00} \cup \mathfrak{I}_{0-} \cup \mathfrak{I}_{+0}. \end{aligned}$$

Therefore, there exists $\tau > 0$ such that, for every $t \in (0, \tau)$,

$$\mathfrak{f}_i(\bar{A} + t(A - \bar{A})) < \mathfrak{f}_i(\bar{A}), \ i \in \mathfrak{I}_{\mathfrak{f}}, \tag{4}$$

$$-\mathcal{H}_i(\bar{A} + t(A - \bar{A})) < -\mathcal{H}_i(\bar{A}), \ i \in \mathfrak{I}_{0+} \cup \mathfrak{I}_{00} \cup \mathfrak{I}_{0-}, \tag{5}$$

$$\theta_i(\bar{A} + t(A - \bar{A})) < \theta_i(\bar{A}), \ i \in \mathfrak{I}_{0+} \cup \mathfrak{I}_{00} \cup \mathfrak{I}_{0-} \cup \mathfrak{I}_{+0}. \tag{6}$$

The continuity of $\mathcal{H}_i$ $(i \in \mathfrak{I}_{+-} \cup \mathfrak{I}_{+0})$ and $\theta_i$ $(i \in \mathfrak{I}_{+-})$ implies that there exists $\tau > 0$ such that, $\forall t \in (0, \tau)$,

$$-\mathcal{H}_i(\bar{A} + t(A - \bar{A})) < 0 \ (i \in \mathfrak{I}_{+-} \cup \mathfrak{I}_{+0}), \quad \theta_i(\bar{A} + t(A - \bar{A})) < 0 \ (i \in \mathfrak{I}_{+-}). \tag{7}$$

From (4)–(7) and the convexity of $\mathcal{M}^n_+$, we obtain a contradiction with $\bar{A}$ being a local weak efficient point, so (2) holds. Now consider

$$\begin{split} \varphi_i(A) &= \sup_{\xi_i \in \partial^{*}\mathfrak{f}_i(\bar{A})} \langle \xi_i, A - \bar{A} \rangle, \ i \in \mathfrak{I}_{\mathfrak{f}}, \\ \psi_i(A) &= \sup_{\eta_i \in -\partial^{*}\mathcal{H}_i(\bar{A})} \langle \eta_i, A - \bar{A} \rangle, \ i \in \mathfrak{I}_{0+} \cup \mathfrak{I}_{00} \cup \mathfrak{I}_{0-}, \\ \phi_i(A) &= \sup_{\zeta_i \in \partial^{*}\theta_i(\bar{A})} \langle \zeta_i, A - \bar{A} \rangle, \ i \in \mathfrak{I}_{0+} \cup \mathfrak{I}_{00} \cup \mathfrak{I}_{0-} \cup \mathfrak{I}_{+0}. \end{split}$$

It is easy to see that $\varphi_i(\cdot)$, $\psi_i(\cdot)$ and $\phi_i(\cdot)$ are convex functions. From (2), it follows that the following system has no solution:

$$K = \begin{cases} \varphi_i(A) < 0, & i \in \mathfrak{I}_{\mathfrak{f}}, \\ \psi_i(A) < 0, & i \in \mathfrak{I}_{0+} \cup \mathfrak{I}_{00} \cup \mathfrak{I}_{0-}, \\ \phi_i(A) < 0, & i \in \mathfrak{I}_{0+} \cup \mathfrak{I}_{00} \cup \mathfrak{I}_{0-} \cup \mathfrak{I}_{+0}, \\ A \in \mathcal{M}^n_{++}. \end{cases}$$

The Farkas-type Lemma 1 implies that there exist $\bar{\lambda}^{\mathfrak{f}}_i \geq 0$ $(i \in \mathfrak{I}_{\mathfrak{f}})$, $\lambda^{\mathcal{H}}_i \geq 0$ $(i \in \mathfrak{I}_{0+} \cup \mathfrak{I}_{00} \cup \mathfrak{I}_{0-})$, $\lambda^{\theta}_i \geq 0$ $(i \in \mathfrak{I}_{0+} \cup \mathfrak{I}_{00} \cup \mathfrak{I}_{0-} \cup \mathfrak{I}_{+0})$ and $\bar{\mathcal{W}} \in \mathcal{M}^n_+$, with not all multipliers together with $\bar{\mathcal{W}}$ simultaneously zero, such that

$$\sum_{i \in \mathfrak{I}_{\mathfrak{f}}} \bar{\lambda}^{\mathfrak{f}}_i \varphi_i(A) + \sum_{i \in \mathfrak{I}_{0+} \cup \mathfrak{I}_{00} \cup \mathfrak{I}_{0-}} \lambda^{\mathcal{H}}_i \psi_i(A) + \sum_{i \in \mathfrak{I}_{0+} \cup \mathfrak{I}_{00} \cup \mathfrak{I}_{0-} \cup \mathfrak{I}_{+0}} \lambda^{\theta}_i \phi_i(A) - \langle \bar{\mathcal{W}}, A \rangle \geq 0, \quad \forall A \in \mathcal{M}^n. \tag{8}$$

Taking $A = \bar{A}$ in (8) gives $\langle \bar{\mathcal{W}}, \bar{A} \rangle \leq 0$. On the other hand, $\bar{\mathcal{W}}$ and $\bar{A}$ are two elements of $\mathcal{M}^n_+$, hence $\langle \bar{\mathcal{W}}, \bar{A} \rangle = 0$. Therefore,

$$\nu(A) = \sum_{i \in \mathfrak{I}_{\mathfrak{f}}} \bar{\lambda}^{\mathfrak{f}}_i \varphi_i(A) + \sum_{i \in \mathfrak{I}_{0+} \cup \mathfrak{I}_{00} \cup \mathfrak{I}_{0-}} \lambda^{\mathcal{H}}_i \psi_i(A) + \sum_{i \in \mathfrak{I}_{0+} \cup \mathfrak{I}_{00} \cup \mathfrak{I}_{0-} \cup \mathfrak{I}_{+0}} \lambda^{\theta}_i \phi_i(A) - \langle \bar{\mathcal{W}}, A \rangle$$

is a convex function with $\nu(\bar{A}) = 0$; by (8), $\bar{A}$ minimizes $\nu$, which implies $0 \in \partial\nu(\bar{A})$, where $\partial\nu(\bar{A})$ is the subdifferential of $\nu$ at $\bar{A}$. Hence,

$$0 \in \sum_{i \in \mathfrak{I}_{\mathfrak{f}}} \bar{\lambda}^{\mathfrak{f}}_i \, \partial\varphi_i(\bar{A}) + \sum_{i \in \mathfrak{I}_{0+} \cup \mathfrak{I}_{00} \cup \mathfrak{I}_{0-}} \lambda^{\mathcal{H}}_i \, \partial\psi_i(\bar{A}) + \sum_{i \in \mathfrak{I}_{0+} \cup \mathfrak{I}_{00} \cup \mathfrak{I}_{0-} \cup \mathfrak{I}_{+0}} \lambda^{\theta}_i \, \partial\phi_i(\bar{A}) - \bar{\mathcal{W}}.$$

This implies, since the subdifferentials of the support functions are $\partial\varphi_i(\bar{A}) = co\,\partial^{*}\mathfrak{f}_i(\bar{A})$, $\partial\psi_i(\bar{A}) = -co\,\partial^{*}\mathcal{H}_i(\bar{A})$ and $\partial\phi_i(\bar{A}) = co\,\partial^{*}\theta_i(\bar{A})$,

$$0 \in \sum_{i \in \mathfrak{I}_{\mathfrak{f}}} \bar{\lambda}^{\mathfrak{f}}_i \, co\,\partial^{*}\mathfrak{f}_i(\bar{A}) - \sum_{i \in \mathfrak{I}_{0+} \cup \mathfrak{I}_{00} \cup \mathfrak{I}_{0-}} \lambda^{\mathcal{H}}_i \, co\,\partial^{*}\mathcal{H}_i(\bar{A}) + \sum_{i \in \mathfrak{I}_{0+} \cup \mathfrak{I}_{00} \cup \mathfrak{I}_{0-} \cup \mathfrak{I}_{+0}} \lambda^{\theta}_i \, co\,\partial^{*}\theta_i(\bar{A}) - \bar{\mathcal{W}},$$

and, expanding $\partial^{*}\theta_i = \partial^{*}(\mathcal{G}_i\mathcal{H}_i)$ by the product rule,

$$\begin{split} 0 \in \sum_{i \in \mathfrak{I}_{\mathfrak{f}}} \bar{\lambda}^{\mathfrak{f}}_i \, co\,\partial^{*}\mathfrak{f}_i(\bar{A}) - \sum_{i \in \mathfrak{I}_{0+} \cup \mathfrak{I}_{00} \cup \mathfrak{I}_{0-}} \lambda^{\mathcal{H}}_i \, co\,\partial^{*}\mathcal{H}_i(\bar{A}) \\ + \sum_{i \in \mathfrak{I}_{0+} \cup \mathfrak{I}_{00} \cup \mathfrak{I}_{0-} \cup \mathfrak{I}_{+0}} \lambda^{\theta}_i \left[ \mathcal{H}_i(\bar{A}) \, co\,\partial^{*}\mathcal{G}_i(\bar{A}) + \mathcal{G}_i(\bar{A}) \, co\,\partial^{*}\mathcal{H}_i(\bar{A}) \right] - \bar{\mathcal{W}}. \end{split} \tag{9}$$

Setting $\lambda^{\mathcal{H}}_i = 0$ $(i \in \mathfrak{I}_{+-} \cup \mathfrak{I}_{+0})$ and $\lambda^{\theta}_i = 0$ $(i \in \mathfrak{I}_{+-})$, we obtain from (9)

$$\begin{cases} 0 \in \sum\limits_{i \in \mathfrak{I}_{\mathfrak{f}}} \bar{\lambda}^{\mathfrak{f}}_i \, co\,\partial^{*}\mathfrak{f}_i(\bar{A}) + \sum\limits_{i=1}^{m} \left[ \bar{\lambda}^{\mathcal{G}}_i \, co\,\partial^{*}\mathcal{G}_i(\bar{A}) - \bar{\lambda}^{\mathcal{H}}_i \, co\,\partial^{*}\mathcal{H}_i(\bar{A}) \right] - \bar{\mathcal{W}}, \\ \text{where } \bar{\lambda}^{\mathcal{H}}_i = \lambda^{\mathcal{H}}_i - \lambda^{\theta}_i \mathcal{G}_i(\bar{A}) \ (i \in \mathfrak{I}_{0+} \cup \mathfrak{I}_{0-} \cup \mathfrak{I}_{00} \cup \mathfrak{I}_{+0}), \ \bar{\lambda}^{\mathcal{H}}_i = \lambda^{\mathcal{H}}_i = 0 \ (i \in \mathfrak{I}_{+-}), \\ \bar{\lambda}^{\mathcal{G}}_i = \lambda^{\theta}_i \mathcal{H}_i(\bar{A}) \ (i \in \mathfrak{I}_{0+} \cup \mathfrak{I}_{0-} \cup \mathfrak{I}_{00} \cup \mathfrak{I}_{+0}), \ \bar{\lambda}^{\mathcal{G}}_i = \lambda^{\theta}_i = 0 \ (i \in \mathfrak{I}_{+-}). \end{cases}$$

Thus, we have

$$\begin{split} 0 \in \sum_{i \in \mathfrak{I}_{\mathfrak{f}}} \bar{\lambda}^{\mathfrak{f}}_i \, co\,\partial^{*}\mathfrak{f}_i(\bar{A}) + \sum_{i=1}^{m} \left[ \bar{\lambda}^{\mathcal{G}}_i \, co\,\partial^{*}\mathcal{G}_i(\bar{A}) - \bar{\lambda}^{\mathcal{H}}_i \, co\,\partial^{*}\mathcal{H}_i(\bar{A}) \right] - \bar{\mathcal{W}}, \\ \bar{\lambda}^{\mathfrak{f}}_i \geq 0 \ (i \in \mathfrak{I}_{\mathfrak{f}}), \ \langle \bar{\mathcal{W}}, \bar{A} \rangle = 0, \ \bar{\lambda}^{\mathcal{H}}_i = 0 \ (i \in \mathfrak{I}_{+0} \cup \mathfrak{I}_{+-}), \ \bar{\lambda}^{\mathcal{H}}_i \geq 0 \ (i \in \mathfrak{I}_{0-} \cup \mathfrak{I}_{00}), \ \bar{\lambda}^{\mathcal{H}}_i \text{ free} \ (i \in \mathfrak{I}_{0+}), \\ \bar{\lambda}^{\mathcal{G}}_i = 0 \ (i \in \mathfrak{I}_{0+} \cup \mathfrak{I}_{0-} \cup \mathfrak{I}_{00} \cup \mathfrak{I}_{+-}), \ \bar{\lambda}^{\mathcal{G}}_i \geq 0 \ (i \in \mathfrak{I}_{+0}). \end{split}$$
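As a side illustration of the support-function step in the proof above, the following sketch uses assumed toy data (the function $\mathfrak{f}(A) = |x_1|$ on $2 \times 2$ symmetric matrices, whose upper semi-regular convexificator at $\bar{A} = 0$ is $\{\pm E_{11}\}$; none of this data is part of the theorem itself) to check numerically that the support function $\varphi(A) = \sup_{\xi \in \partial^{*}\mathfrak{f}(\bar{A})} \langle \xi, A - \bar{A} \rangle$ is convex and, for this instance, coincides with $|x_1|$.

```python
import random

# A 2x2 symmetric matrix is stored as (x1, x2, x3) = [[x1, x2], [x2, x3]];
# the trace inner product is <A, B> = a1*b1 + 2*a2*b2 + a3*b3.
def inner(a, b):
    return a[0] * b[0] + 2 * a[1] * b[1] + a[2] * b[2]

# Assumed toy convexificator {E11, -E11} of f(A) = |x1| at A_bar = 0.
xis = [(1.0, 0.0, 0.0), (-1.0, 0.0, 0.0)]

def phi(a):
    # support function of the convexificator: phi(A) = sup_xi <xi, A - A_bar>
    return max(inner(xi, a) for xi in xis)

random.seed(0)
ok = True
for _ in range(500):
    a1 = tuple(random.uniform(-1, 1) for _ in range(3))
    a2 = tuple(random.uniform(-1, 1) for _ in range(3))
    t = random.uniform(0, 1)
    mid = tuple(t * u + (1 - t) * v for u, v in zip(a1, a2))
    # convexity: phi(t*A1 + (1-t)*A2) <= t*phi(A1) + (1-t)*phi(A2)
    ok = ok and phi(mid) <= t * phi(a1) + (1 - t) * phi(a2) + 1e-12
    # for this instance phi(A) = max(x1, -x1) = |x1|
    ok = ok and abs(phi(a1) - abs(a1[0])) < 1e-12
print(ok)  # -> True
```

The check passes exactly because a pointwise maximum of linear functionals is always convex, which is the property the proof relies on.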

**Definition 6.** *The generalized Cottle constraint qualification (GCCQ) is said to hold at $\bar{A}$ if*

$$\begin{split} \left( \bigcup_{i \in \mathfrak{I}^{k}_{\mathfrak{f}}} co\,\partial^{*}\mathfrak{f}_i(\bar{A}) \right)^{s} \bigcap \Bigg( \bigcup_{i \in \mathfrak{I}_{0+}} co\,\partial^{*}\mathcal{H}_i(\bar{A}) \bigcup_{i \in \mathfrak{I}_{0+}} -co\,\partial^{*}\mathcal{H}_i(\bar{A}) \\ \bigcup_{i \in \mathfrak{I}_{0-} \cup \mathfrak{I}_{00}} -co\,\partial^{*}\mathcal{H}_i(\bar{A}) \bigcup_{i \in \mathfrak{I}_{+0}} co\,\partial^{*}\mathcal{G}_i(\bar{A}) \Bigg)^{s} \bigcap \mathcal{M}^n_+ \neq \emptyset, \quad \forall k \in \mathfrak{I}_{\mathfrak{f}}. \end{split} \tag{10}$$

**Theorem 2.** *Let $\bar{A}$ be a local weak efficient solution for* (*S* − *MMPVC*)*. Suppose that $\mathfrak{f}_i$ $(i \in \mathfrak{I}_{\mathfrak{f}})$, $\mathcal{H}_i$ $(i \in \mathfrak{I}_0)$ and $\mathcal{G}_i$ $(i \in \mathfrak{I}_{+0})$ admit bounded upper semi-regular convexificators and $\mathcal{H}_i$ $(i \in \mathfrak{I}_+)$, $\mathcal{G}_i$ $(i \in \mathfrak{I}_0 \cup \mathfrak{I}_{+-})$ are continuous. If (GCCQ) holds at $\bar{A}$, then there exist $\bar{\lambda}^{\mathfrak{f}}_i > 0$ $(i \in \mathfrak{I}_{\mathfrak{f}})$, $\bar{\lambda}^{\mathcal{H}}, \bar{\lambda}^{\mathcal{G}} \in \mathbb{R}^m$ and $\bar{\mathcal{W}} \in \mathcal{M}^n_+$, such that*

$$\begin{split} 0 \in \sum_{i \in \mathfrak{I}_{\mathfrak{f}}} \bar{\lambda}^{\mathfrak{f}}_i \, co\,\partial^{*}\mathfrak{f}_i(\bar{A}) + \sum_{i=1}^{m} \left[ \bar{\lambda}^{\mathcal{G}}_i \, co\,\partial^{*}\mathcal{G}_i(\bar{A}) - \bar{\lambda}^{\mathcal{H}}_i \, co\,\partial^{*}\mathcal{H}_i(\bar{A}) \right] - \bar{\mathcal{W}}, \\ \langle \bar{\mathcal{W}}, \bar{A} \rangle = 0, \ \bar{\lambda}^{\mathcal{H}}_i = 0 \ (i \in \mathfrak{I}_{+0} \cup \mathfrak{I}_{+-}), \ \bar{\lambda}^{\mathcal{H}}_i \geq 0 \ (i \in \mathfrak{I}_{0-} \cup \mathfrak{I}_{00}), \ \bar{\lambda}^{\mathcal{H}}_i \text{ free} \ (i \in \mathfrak{I}_{0+}), \\ \bar{\lambda}^{\mathcal{G}}_i = 0 \ (i \in \mathfrak{I}_{0+} \cup \mathfrak{I}_{0-} \cup \mathfrak{I}_{00} \cup \mathfrak{I}_{+-}), \ \bar{\lambda}^{\mathcal{G}}_i \geq 0 \ (i \in \mathfrak{I}_{+0}). \end{split}$$

**Proof.** Since $\bar{A}$ is a local weak efficient solution, Theorem 1 implies that there exist $\bar{\lambda}^{\mathfrak{f}}_i \geq 0$ $(i \in \mathfrak{I}_{\mathfrak{f}})$, $\bar{\lambda}^{\mathcal{H}}_i$, $\bar{\lambda}^{\mathcal{G}}_i$ and $\bar{\mathcal{W}} \in \mathcal{M}^n_+$, such that

$$\begin{split} 0 \in \sum_{i \in \mathfrak{I}_{\mathfrak{f}}} \bar{\lambda}^{\mathfrak{f}}_i \, co\,\partial^{*}\mathfrak{f}_i(\bar{A}) + \sum_{i=1}^{m} \left[ \bar{\lambda}^{\mathcal{G}}_i \, co\,\partial^{*}\mathcal{G}_i(\bar{A}) - \bar{\lambda}^{\mathcal{H}}_i \, co\,\partial^{*}\mathcal{H}_i(\bar{A}) \right] - \bar{\mathcal{W}}, \\ \langle \bar{\mathcal{W}}, \bar{A} \rangle = 0, \ \bar{\lambda}^{\mathcal{H}}_i = 0 \ (i \in \mathfrak{I}_{+0} \cup \mathfrak{I}_{+-}), \ \bar{\lambda}^{\mathcal{H}}_i \geq 0 \ (i \in \mathfrak{I}_{0-} \cup \mathfrak{I}_{00}), \ \bar{\lambda}^{\mathcal{H}}_i \text{ free} \ (i \in \mathfrak{I}_{0+}), \\ \bar{\lambda}^{\mathcal{G}}_i = 0 \ (i \in \mathfrak{I}_{0+} \cup \mathfrak{I}_{0-} \cup \mathfrak{I}_{00} \cup \mathfrak{I}_{+-}), \ \bar{\lambda}^{\mathcal{G}}_i \geq 0 \ (i \in \mathfrak{I}_{+0}). \end{split} \tag{11}$$

We claim that $\bar{\lambda}^{\mathfrak{f}}_i > 0$ for every $i \in \mathfrak{I}_{\mathfrak{f}}$. Without loss of generality, assume that $\bar{\lambda}^{\mathfrak{f}}_1 = 0$; then there exist $\xi_i \in co\,\partial^{*}\mathfrak{f}_i(\bar{A})$ $(i \in \mathfrak{I}^{1}_{\mathfrak{f}})$, $\eta_i \in co\,\partial^{*}\mathcal{H}_i(\bar{A})$ and $\zeta_i \in co\,\partial^{*}\mathcal{G}_i(\bar{A})$, such that Equation (11) becomes

$$0 = \sum_{i \in \mathfrak{I}^{1}_{\mathfrak{f}}} \bar{\lambda}^{\mathfrak{f}}_i \xi_i + \sum_{i=1}^{m} \left[ \bar{\lambda}^{\mathcal{G}}_i \zeta_i - \bar{\lambda}^{\mathcal{H}}_i \eta_i \right] - \bar{\mathcal{W}}.$$

It follows from (GCCQ) that there exists $A \in \mathcal{M}^n_+$ such that

$$\begin{split} 0 &> \sum_{i \in \mathfrak{I}^{1}_{\mathfrak{f}}} \bar{\lambda}^{\mathfrak{f}}_i \langle \xi_i, A \rangle + \sum_{i=1}^{m} \left[ \bar{\lambda}^{\mathcal{G}}_i \langle \zeta_i, A \rangle - \bar{\lambda}^{\mathcal{H}}_i \langle \eta_i, A \rangle \right] - \langle \bar{\mathcal{W}}, A \rangle \\ &= \left\langle \sum_{i \in \mathfrak{I}^{1}_{\mathfrak{f}}} \bar{\lambda}^{\mathfrak{f}}_i \xi_i + \sum_{i=1}^{m} \left[ \bar{\lambda}^{\mathcal{G}}_i \zeta_i - \bar{\lambda}^{\mathcal{H}}_i \eta_i \right] - \bar{\mathcal{W}}, A \right\rangle = 0. \end{split}$$

This contradiction shows that $\bar{\lambda}^{\mathfrak{f}}_1 > 0$. Repeating the above argument for each $k \in \mathfrak{I}_{\mathfrak{f}}$, we obtain the required result.

Now, we introduce a constraint qualification that is more relaxed than (GCCQ).

**Definition 7.** *The generalized Guignard constraint qualification (GGCQ) is said to hold at $\bar{A}$ if*

$$\begin{split} \mathcal{C} = \operatorname{cone} co \Bigg( \bigcup_{i \in \mathfrak{I}_{0+}} co\,\partial^{*}\mathcal{H}_i(\bar{A}) \bigcup_{i \in \mathfrak{I}_{0+}} -co\,\partial^{*}\mathcal{H}_i(\bar{A}) \bigcup_{i \in \mathfrak{I}_{0-} \cup \mathfrak{I}_{00}} -co\,\partial^{*}\mathcal{H}_i(\bar{A}) \bigcup_{i \in \mathfrak{I}_{+0}} co\,\partial^{*}\mathcal{G}_i(\bar{A}) \Bigg) - \mathcal{M}^n_+ \end{split}$$

*is a closed set and*

$$\begin{split} \left( \bigcup_{i \in \mathfrak{I}_{\mathfrak{f}}} co\,\partial^{*}\mathfrak{f}_i(\bar{A}) \right)^{-} \bigcap \Bigg( \bigcup_{i \in \mathfrak{I}_{0+}} co\,\partial^{*}\mathcal{H}_i(\bar{A}) \bigcup_{i \in \mathfrak{I}_{0+}} -co\,\partial^{*}\mathcal{H}_i(\bar{A}) \\ \bigcup_{i \in \mathfrak{I}_{0-} \cup \mathfrak{I}_{00}} -co\,\partial^{*}\mathcal{H}_i(\bar{A}) \bigcup_{i \in \mathfrak{I}_{+0}} co\,\partial^{*}\mathcal{G}_i(\bar{A}) \Bigg)^{-} \bigcap \mathcal{M}^n_+ \subset \bigcap_{i=1}^{p} co\,T(Q^i, \bar{A}). \end{split}$$

**Lemma 2.** *Let $\bar{A}$ be any feasible solution to problem* (*S* − *MMPVC*)*. Suppose that $\mathfrak{f}_i$ $(i \in \mathfrak{I}_{\mathfrak{f}})$, $\mathcal{H}_i$ $(i \in \mathfrak{I}_0)$ and $\mathcal{G}_i$ $(i \in \mathfrak{I}_{+0})$ admit bounded upper semi-regular convexificators and each $\mathcal{H}_i$ $(i \in \mathfrak{I}_+)$, $\mathcal{G}_i$ $(i \in \mathfrak{I}_0 \cup \mathfrak{I}_{+-})$ is continuous. If $\mathcal{C}$ is closed and (GCCQ) holds at $\bar{A}$, then (GGCQ) holds at $\bar{A}$.*

**Proof.** Without loss of generality, let $A$ belong to the set in (GCCQ) for $k = 1$, that is,

$$\begin{split} A \in \left( \bigcup_{i \in \mathfrak{I}^{1}_{\mathfrak{f}}} co\,\partial^{*}\mathfrak{f}_i(\bar{A}) \right)^{s} \bigcap \Bigg( \bigcup_{i \in \mathfrak{I}_{0+}} co\,\partial^{*}\mathcal{H}_i(\bar{A}) \bigcup_{i \in \mathfrak{I}_{0+}} -co\,\partial^{*}\mathcal{H}_i(\bar{A}) \\ \bigcup_{i \in \mathfrak{I}_{0-} \cup \mathfrak{I}_{00}} -co\,\partial^{*}\mathcal{H}_i(\bar{A}) \bigcup_{i \in \mathfrak{I}_{+0}} co\,\partial^{*}\mathcal{G}_i(\bar{A}) \Bigg)^{s} \bigcap \mathcal{M}^n_+. \end{split} \tag{12}$$

Since all $\mathfrak{f}_i$ $(i \in \mathfrak{I}_{\mathfrak{f}})$, $\mathcal{H}_i$ $(i \in \mathfrak{I}_0)$ and $\mathcal{G}_i$ $(i \in \mathfrak{I}_{+0})$ admit bounded upper semi-regular convexificators, we have

$$\begin{aligned} \mathfrak{f}^{+}_i(\bar{A}; A) &< 0, \ \forall i \in \mathfrak{I}^{1}_{\mathfrak{f}}, \\ -\mathcal{H}^{+}_i(\bar{A}; A) &< 0, \ \forall i \in \mathfrak{I}_0, \\ \mathcal{G}^{+}_i(\bar{A}; A) &< 0, \ \forall i \in \mathfrak{I}_{+0}. \end{aligned}$$

Since $\mathcal{M}^n_+$ is a convex cone, there exists $\tau > 0$, such that

$$\begin{cases} \mathfrak{f}_i(\bar{A} + tA) < \mathfrak{f}_i(\bar{A}) \ (i \in \mathfrak{I}^{1}_{\mathfrak{f}}), \ -\mathcal{H}_i(\bar{A} + tA) < 0 \ (i \in \mathfrak{I}_0), \ \mathcal{G}_i(\bar{A} + tA) < 0 \ (i \in \mathfrak{I}_{+0}), \\ \bar{A} + tA \in \mathcal{M}^n_+, \quad \forall t \in (0, \tau). \end{cases} \tag{13}$$

On the other hand, $\mathcal{H}_i$ $(i \in \mathfrak{I}_+)$ and $\mathcal{G}_i$ $(i \in \mathfrak{I}_0 \cup \mathfrak{I}_{+-})$ are continuous. Therefore, there exists $\tau > 0$, such that

$$-\mathcal{H}_i(\bar{A} + tA) < 0 \ (i \in \mathfrak{I}_+), \quad \mathcal{G}_i(\bar{A} + tA) < 0 \ (i \in \mathfrak{I}_0 \cup \mathfrak{I}_{+-}), \quad \bar{A} + tA \in \mathcal{M}^n_+, \ t \in (0, \tau).$$

Thus, $A \in T(Q^1, \bar{A})$. Therefore, writing $\mathcal{D} = \bigcup_{i \in \mathfrak{I}_{0+}} co\,\partial^{*}\mathcal{H}_i(\bar{A}) \cup \bigcup_{i \in \mathfrak{I}_{0+}} -co\,\partial^{*}\mathcal{H}_i(\bar{A}) \cup \bigcup_{i \in \mathfrak{I}_{0-} \cup \mathfrak{I}_{00}} -co\,\partial^{*}\mathcal{H}_i(\bar{A}) \cup \bigcup_{i \in \mathfrak{I}_{+0}} co\,\partial^{*}\mathcal{G}_i(\bar{A})$ for the union of the constraint convexificators, we have

$$\begin{split} \mathcal{A} &= \left( \bigcup_{i \in \mathfrak{I}_{\mathfrak{f}}} co\,\partial^{*}\mathfrak{f}_i(\bar{A}) \right)^{-} \bigcap \mathcal{D}^{-} \bigcap \mathcal{M}^n_+ = cl \left[ \left( \bigcup_{i \in \mathfrak{I}_{\mathfrak{f}}} co\,\partial^{*}\mathfrak{f}_i(\bar{A}) \right)^{s} \bigcap \mathcal{D}^{s} \bigcap \mathcal{M}^n_{++} \right] \\ &\subset cl \left[ \left( \bigcup_{i \in \mathfrak{I}^{1}_{\mathfrak{f}}} co\,\partial^{*}\mathfrak{f}_i(\bar{A}) \right)^{s} \bigcap \mathcal{D}^{s} \bigcap \mathcal{M}^n_{++} \right] \subset cl\,co\,T(Q^1, \bar{A}) = co\,T(Q^1, \bar{A}). \end{split}$$

Similarly, it can be proved that the same polar-intersection set is contained in $co\,T(Q^i, \bar{A})$, $\forall i \in \mathfrak{I}_{\mathfrak{f}}$. Therefore,

$$\begin{split} \left( \bigcup_{i \in \mathfrak{I}_{\mathfrak{f}}} co\,\partial^{*}\mathfrak{f}_i(\bar{A}) \right)^{-} \bigcap \Bigg( \bigcup_{i \in \mathfrak{I}_{0+}} co\,\partial^{*}\mathcal{H}_i(\bar{A}) \bigcup_{i \in \mathfrak{I}_{0+}} -co\,\partial^{*}\mathcal{H}_i(\bar{A}) \\ \bigcup_{i \in \mathfrak{I}_{0-} \cup \mathfrak{I}_{00}} -co\,\partial^{*}\mathcal{H}_i(\bar{A}) \bigcup_{i \in \mathfrak{I}_{+0}} co\,\partial^{*}\mathcal{G}_i(\bar{A}) \Bigg)^{-} \bigcap \mathcal{M}^n_+ \subset \bigcap_{i=1}^{p} co\,T(Q^i, \bar{A}). \end{split}$$

We present an example to show that the converse of Lemma 2 does not hold.

**Example 1.** *Consider the problem*

$$\min \left( \mathfrak{f}_1(A), \mathfrak{f}_2(A) \right), \ \text{subject to} \ \mathcal{H}(A) = x_1 \geq 0, \ \mathcal{G}(A)\mathcal{H}(A) = x_3 x_1 \leq 0,$$

$$A = \begin{bmatrix} x_1 & x_2 \\ x_2 & x_3 \end{bmatrix} \in \mathcal{M}^2_+, \ \text{where} \ \mathfrak{f}_1(A) = |x_1|, \ \mathfrak{f}_2(A) = |x_3|.$$

The feasible set is $M = \left\{ \begin{bmatrix} x_1 & x_2 \\ x_2 & x_3 \end{bmatrix} \in \mathcal{M}^2_+ : x_1 \geq 0, \ x_1 x_3 \leq 0 \right\}$. The point $\bar{A} = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}$ is a weak efficient solution for the considered problem. Now, we can find an upper semi-regular convexificator of each function at the point $\bar{A}$ as follows:

$$\partial^{*}\mathfrak{f}_1(\bar{A}) = \left\{ \begin{bmatrix} -1 & 0 \\ 0 & 0 \end{bmatrix}, \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} \right\}, \quad \partial^{*}\mathfrak{f}_2(\bar{A}) = \left\{ \begin{bmatrix} 0 & 0 \\ 0 & -1 \end{bmatrix}, \begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix} \right\},$$

$$\partial^{*}\mathcal{H}(\bar{A}) = \left\{ \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} \right\}, \quad \partial^{*}\mathcal{G}(\bar{A}) = \left\{ \begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix} \right\}.$$

$$Q^1 = \left\{ \begin{bmatrix} x_1 & x_2 \\ x_2 & x_3 \end{bmatrix} \in \mathcal{M}^2_+ : x_1 \geq 0, \ x_2 = 0, \ x_3 = 0 \right\},$$

$$Q^2 = \left\{ \begin{bmatrix} x_1 & x_2 \\ x_2 & x_3 \end{bmatrix} \in \mathcal{M}^2_+ : x_1 = 0, \ x_2 = 0, \ x_3 \in \mathbb{R} \right\}.$$

So, we conclude that

$$\begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix} \in \bigcap_{i=1}^{2} co\,T(Q^i, \bar{A}) \quad \text{and} \quad \bigcup_{i=1}^{2} co\,\partial^{*}\mathfrak{f}_i(\bar{A}) = \left\{ \begin{bmatrix} t & 0 \\ 0 & 0 \end{bmatrix}, \begin{bmatrix} 0 & 0 \\ 0 & s \end{bmatrix} : t, s \in [-1, 1] \right\},$$

thus, we have

$$\left( \bigcup_{i=1}^{2} co\,\partial^{*}\mathfrak{f}_i(\bar{A}) \right)^{-} = \left\{ \begin{bmatrix} 0 & x_2 \\ x_2 & 0 \end{bmatrix} : x_2 \in \mathbb{R} \right\}.$$

Since

$$co\,\partial^{*}\mathcal{H}(\bar{A}) = \left\{ \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} \right\}, \ \text{then} \ \left( -co\,\partial^{*}\mathcal{H}(\bar{A}) \right)^{-} = \left\{ \begin{bmatrix} x_1 & x_2 \\ x_2 & x_3 \end{bmatrix} : x_1 \geq 0 \right\}.$$

Consequently, we have

$$\left( \bigcup_{i=1}^{2} co\,\partial^{*}\mathfrak{f}_i(\bar{A}) \right)^{-} \bigcap \left( -co\,\partial^{*}\mathcal{H}(\bar{A}) \right)^{-} \bigcap \mathcal{M}^2_+ = \left\{ \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix} \right\} \subset \bigcap_{i=1}^{2} co\,T(Q^i, \bar{A}).$$

Obviously, $\mathcal{C} = \operatorname{cone} co \left( -co\,\partial^{*}\mathcal{H}(\bar{A}) \right) - \mathcal{M}^2_+$ is a closed set. Hence, (GGCQ) is satisfied at $\bar{A}$. Now,

$$\left( \bigcup_{i \in \mathfrak{I}^{1}_{\mathfrak{f}}} co\,\partial^{*}\mathfrak{f}_i(\bar{A}) \right)^{s} = \left( co\,\partial^{*}\mathfrak{f}_2(\bar{A}) \right)^{s} = \emptyset, \quad \left( \bigcup_{i \in \mathfrak{I}^{2}_{\mathfrak{f}}} co\,\partial^{*}\mathfrak{f}_i(\bar{A}) \right)^{s} = \left( co\,\partial^{*}\mathfrak{f}_1(\bar{A}) \right)^{s} = \emptyset,$$

which implies that

$$\begin{split} \left( \bigcup_{i \in \mathfrak{I}^{k}_{\mathfrak{f}}} co\,\partial^{*}\mathfrak{f}_i(\bar{A}) \right)^{s} \bigcap \Bigg( \bigcup_{i \in \mathfrak{I}_{0+}} co\,\partial^{*}\mathcal{H}_i(\bar{A}) \bigcup_{i \in \mathfrak{I}_{0+}} -co\,\partial^{*}\mathcal{H}_i(\bar{A}) \\ \bigcup_{i \in \mathfrak{I}_{0-} \cup \mathfrak{I}_{00}} -co\,\partial^{*}\mathcal{H}_i(\bar{A}) \bigcup_{i \in \mathfrak{I}_{+0}} co\,\partial^{*}\mathcal{G}_i(\bar{A}) \Bigg)^{s} \bigcap \mathcal{M}^n_+ = \emptyset, \quad \forall k \in \mathfrak{I}_{\mathfrak{f}}. \end{split}$$

Hence, (GCCQ) is not satisfied at $\bar{A}$.
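The failure of (GCCQ) in Example 1 can also be seen computationally: the union $co\,\partial^{*}\mathfrak{f}_1(\bar{A}) \cup co\,\partial^{*}\mathfrak{f}_2(\bar{A})$ contains matrices together with their negatives, so no direction can have a strictly negative inner product with every element, and the strict polar is empty. The following minimal sketch illustrates this by random sampling (illustration only; sampling cannot prove emptiness, which here follows from the $\pm\xi$ pairs):

```python
import random

# Symmetric 2x2 matrices stored as (x1, x2, x3); trace inner product.
def inner(a, b):
    return a[0] * b[0] + 2 * a[1] * b[1] + a[2] * b[2]

# Example 1 data: the union of convexificators contains the pairs +-E11, +-E22.
union = [(1.0, 0.0, 0.0), (-1.0, 0.0, 0.0), (0.0, 0.0, 1.0), (0.0, 0.0, -1.0)]

# A direction S lies in the strict polar iff <xi, S> < 0 for EVERY xi in the
# union; since the union contains both xi and -xi, this is impossible.
random.seed(0)
found_strict_direction = False
for _ in range(2000):
    s = tuple(random.uniform(-1, 1) for _ in range(3))
    if all(inner(xi, s) < 0 for xi in union):
        found_strict_direction = True
print(found_strict_direction)  # -> False: no strictly separating direction found
```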

Applying the generalized Guignard constraint qualification, we derive the Karush–Kuhn–Tucker-type necessary optimality conditions for (*S* − *MMPVC*).

**Theorem 3.** *Suppose $\bar{A}$ is a local weak efficient solution for* (*S* − *MMPVC*)*. Assume that $\mathfrak{f}_i$, $\mathcal{H}_i$, $\mathcal{G}_i$ admit bounded upper semi-regular convexificators $\partial^{*}\mathfrak{f}_i(\bar{A})$ $(i \in \mathfrak{I}_{\mathfrak{f}})$, $\partial^{*}\mathcal{H}_i(\bar{A})$ $(i \in \mathfrak{I}_0)$, $\partial^{*}\mathcal{G}_i(\bar{A})$ $(i \in \mathfrak{I}_{+0})$, respectively, at $\bar{A}$. If (GGCQ) holds at $\bar{A}$, then there exist $\bar{\lambda}^{\mathfrak{f}}_i > 0$ $(i \in \mathfrak{I}_{\mathfrak{f}})$, $\bar{\lambda}^{\mathcal{G}} \in \mathbb{R}^m$, $\bar{\lambda}^{\mathcal{H}} \in \mathbb{R}^m$ and $\bar{\mathcal{W}} \in \mathcal{M}^n_+$ such that*

$$\begin{split} 0 \in \sum_{i \in \mathfrak{I}_{\mathfrak{f}}} \bar{\lambda}^{\mathfrak{f}}_i \, co\,\partial^{*}\mathfrak{f}_i(\bar{A}) + \sum_{i=1}^{m} \left[ \bar{\lambda}^{\mathcal{G}}_i \, co\,\partial^{*}\mathcal{G}_i(\bar{A}) - \bar{\lambda}^{\mathcal{H}}_i \, co\,\partial^{*}\mathcal{H}_i(\bar{A}) \right] - \bar{\mathcal{W}}, \\ \langle \bar{\mathcal{W}}, \bar{A} \rangle = 0, \ \bar{\lambda}^{\mathcal{H}}_i = 0 \ (i \in \mathfrak{I}_{+0} \cup \mathfrak{I}_{+-}), \ \bar{\lambda}^{\mathcal{H}}_i \geq 0 \ (i \in \mathfrak{I}_{0-} \cup \mathfrak{I}_{00}), \ \bar{\lambda}^{\mathcal{H}}_i \text{ free} \ (i \in \mathfrak{I}_{0+}), \\ \bar{\lambda}^{\mathcal{G}}_i = 0 \ (i \in \mathfrak{I}_{0+} \cup \mathfrak{I}_{0-} \cup \mathfrak{I}_{00} \cup \mathfrak{I}_{+-}), \ \bar{\lambda}^{\mathcal{G}}_i \geq 0 \ (i \in \mathfrak{I}_{+0}). \end{split}$$

**Proof.** To prove the theorem, it suffices to show that

$$0 \in \sum_{i=1}^{p} \lambda^{\mathfrak{f}}_i \, co\,\partial^{*}\mathfrak{f}_i(\bar{A}) + \mathcal{C}, \quad \lambda^{\mathfrak{f}} > 0. \tag{14}$$

Suppose, on the contrary, that

$$0 \notin \sum_{i=1}^{p} \lambda^{\mathfrak{f}}_i \, co\,\partial^{*}\mathfrak{f}_i(\bar{A}) + \mathcal{C}, \quad \lambda^{\mathfrak{f}} > 0. \tag{15}$$

As $\mathfrak{f}_i$ $(i \in \mathfrak{I}_{\mathfrak{f}})$ admits an upper semi-regular convexificator, the set on the right-hand side of (14) is a closed convex set in $\mathcal{M}^n$. The classical separation theorem implies that there exists $A \in \mathcal{M}^n$ such that

$$\langle \pi, A \rangle < 0, \quad \forall \pi \in \sum_{i=1}^{p} \lambda^{\mathfrak{f}}_i \, co\,\partial^{*}\mathfrak{f}_i(\bar{A}) + \mathcal{C}, \ \lambda^{\mathfrak{f}} > 0. \tag{16}$$

Consequently,

$$\langle \xi_i, A \rangle < 0, \ \forall \xi_i \in co\,\partial^{*}\mathfrak{f}_i(\bar{A}) \ (i \in \mathfrak{I}_{\mathfrak{f}}), \tag{17}$$

$$-\langle \eta_i, A \rangle \leq 0, \ \forall \eta_i \in co\,\partial^{*}\mathcal{H}_i(\bar{A}) \ (i \in \mathfrak{I}_{0-} \cup \mathfrak{I}_{00}), \tag{18}$$

$$-\langle \eta_i, A \rangle \leq 0, \ \forall \eta_i \in co\,\partial^{*}\mathcal{H}_i(\bar{A}) \ (i \in \mathfrak{I}_{0+}), \tag{19}$$

$$\langle \eta_i, A \rangle \leq 0, \ \forall \eta_i \in co\,\partial^{*}\mathcal{H}_i(\bar{A}) \ (i \in \mathfrak{I}_{0+}), \tag{20}$$

$$\langle \zeta_i, A \rangle \leq 0, \ \forall \zeta_i \in co\,\partial^{*}\mathcal{G}_i(\bar{A}) \ (i \in \mathfrak{I}_{+0}), \tag{21}$$

$$-\langle \mathcal{W}, A \rangle \leq 0, \ \forall \mathcal{W} \in \mathcal{M}^n_+. \tag{22}$$

Inequalities (17)–(22) and (GGCQ) imply that

$$\begin{split} A \in \left( \bigcup_{i \in \mathfrak{I}_{\mathfrak{f}}} co\,\partial^{*}\mathfrak{f}_i(\bar{A}) \right)^{-} \bigcap \Bigg( \bigcup_{i \in \mathfrak{I}_{0+}} co\,\partial^{*}\mathcal{H}_i(\bar{A}) \bigcup_{i \in \mathfrak{I}_{0+}} -co\,\partial^{*}\mathcal{H}_i(\bar{A}) \\ \bigcup_{i \in \mathfrak{I}_{0-} \cup \mathfrak{I}_{00}} -co\,\partial^{*}\mathcal{H}_i(\bar{A}) \bigcup_{i \in \mathfrak{I}_{+0}} co\,\partial^{*}\mathcal{G}_i(\bar{A}) \Bigg)^{-} \bigcap \mathcal{M}^n_+ \subset \bigcap_{i=1}^{p} co\,T(Q^i, \bar{A}). \end{split}$$

Hence, $A \in \bigcap_{i=1}^{p} co\,T(Q^i, \bar{A})$, which implies that there exist $t_n \downarrow 0$ such that $\bar{A} + t_n A \in M$. Therefore, from (17), we obtain

$$\mathfrak{f}_i(\bar{A} + t_n A) < \mathfrak{f}_i(\bar{A}), \quad \forall i \in \mathfrak{I}_{\mathfrak{f}}.$$

This contradicts the fact that the feasible point $\bar{A}$ is a local weak efficient solution for (*S* − *MMPVC*). Hence the result.

Motivated by Achtziger and Kanzow [12] and Sadeghieh et al. [55], we define S-stationary points for (*S* − *MMPVC*).

**Definition 8.** *A feasible point $\bar{A}$ is said to be a weak S-stationary point for* (*S* − *MMPVC*) *if there exist $\lambda^{\mathfrak{f}} \in \mathbb{R}^p$, $\lambda^{\mathcal{H}} \in \mathbb{R}^m$, $\lambda^{\mathcal{G}} \in \mathbb{R}^m$ and $\mathcal{W} \in \mathcal{M}^n_+$, with not all multipliers together with $\mathcal{W}$ simultaneously zero, such that*

$$\begin{split} 0 \in \sum_{i \in \mathfrak{I}_{\mathfrak{f}}} \lambda^{\mathfrak{f}}_i \, co\,\partial^{*}\mathfrak{f}_i(\bar{A}) + \sum_{i=1}^{m} \left[ \lambda^{\mathcal{G}}_i \, co\,\partial^{*}\mathcal{G}_i(\bar{A}) - \lambda^{\mathcal{H}}_i \, co\,\partial^{*}\mathcal{H}_i(\bar{A}) \right] - \mathcal{W}, \\ \lambda^{\mathfrak{f}}_i \geq 0 \ (i \in \mathfrak{I}_{\mathfrak{f}}), \ \langle \mathcal{W}, \bar{A} \rangle = 0, \ \lambda^{\mathcal{H}}_i = 0 \ (i \in \mathfrak{I}_{+0} \cup \mathfrak{I}_{+-}), \ \lambda^{\mathcal{H}}_i \geq 0 \ (i \in \mathfrak{I}_{0-} \cup \mathfrak{I}_{00}), \\ \lambda^{\mathcal{H}}_i \text{ free} \ (i \in \mathfrak{I}_{0+}), \ \lambda^{\mathcal{G}}_i = 0 \ (i \in \mathfrak{I}_{0+} \cup \mathfrak{I}_{0-} \cup \mathfrak{I}_{00} \cup \mathfrak{I}_{+-}), \ \lambda^{\mathcal{G}}_i \geq 0 \ (i \in \mathfrak{I}_{+0}). \end{split}$$

**Definition 9.** *A feasible point $\bar{A}$ is said to be a strong S-stationary point for* (*S* − *MMPVC*) *if there exist $\lambda^{\mathfrak{f}} \in \mathbb{R}^p$, $\lambda^{\mathcal{H}} \in \mathbb{R}^m$, $\lambda^{\mathcal{G}} \in \mathbb{R}^m$ and $\mathcal{W} \in \mathcal{M}^n_+$, such that*

$$\begin{split} 0 \in \sum_{i \in \mathfrak{I}_{\mathfrak{f}}} \lambda^{\mathfrak{f}}_i \, co\,\partial^{*}\mathfrak{f}_i(\bar{A}) + \sum_{i=1}^{m} \left[ \lambda^{\mathcal{G}}_i \, co\,\partial^{*}\mathcal{G}_i(\bar{A}) - \lambda^{\mathcal{H}}_i \, co\,\partial^{*}\mathcal{H}_i(\bar{A}) \right] - \mathcal{W}, \\ \lambda^{\mathfrak{f}}_i > 0 \ (i \in \mathfrak{I}_{\mathfrak{f}}), \ \langle \mathcal{W}, \bar{A} \rangle = 0, \ \lambda^{\mathcal{H}}_i = 0 \ (i \in \mathfrak{I}_{+0} \cup \mathfrak{I}_{+-}), \ \lambda^{\mathcal{H}}_i \geq 0 \ (i \in \mathfrak{I}_{0-} \cup \mathfrak{I}_{00}), \\ \lambda^{\mathcal{H}}_i \text{ free} \ (i \in \mathfrak{I}_{0+}), \ \lambda^{\mathcal{G}}_i = 0 \ (i \in \mathfrak{I}_{0+} \cup \mathfrak{I}_{0-} \cup \mathfrak{I}_{00} \cup \mathfrak{I}_{+-}), \ \lambda^{\mathcal{G}}_i \geq 0 \ (i \in \mathfrak{I}_{+0}). \end{split}$$

*Note that if the multipliers associated with the gradients of the objective functions are strictly positive, then these are called strong $S$-stationary conditions.*

**Example 2.** *Consider the following optimization problem:*

$$\min\left(\mathfrak{f}_1(A), \mathfrak{f}_2(A)\right), \quad \text{subject to } \mathcal{H}(A) = x_1 \ge 0, \ \mathcal{G}(A)\mathcal{H}(A) = x_3 x_1 \le 0,$$

$$A = \begin{bmatrix} x_1 & x_2 \\ x_2 & x_3 \end{bmatrix} \in \mathbb{M}_{+}^2, \quad \text{where } \mathfrak{f}_1(A) = |x_1 - 1|, \ \mathfrak{f}_2(A) = |x_3|.$$

The feasible set is $M = \left\{ \begin{bmatrix} x_1 & x_2 \\ x_2 & x_3 \end{bmatrix} \in \mathbb{M}^2_+ : x_1 \ge 0, \ x_1 x_3 \le 0 \right\}$, and $\bar{A} = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}$ is a

weak efficient solution for the considered problem. Now, we can find the upper semi-regular convexificator of each function at the point $\bar{A}$ as follows:

$$
\partial^* \mathfrak{f}_1(\bar{A}) = \left\{ \begin{bmatrix} -1 & 0 \\ 0 & 0 \end{bmatrix}, \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} \right\}, \quad
\partial^* \mathfrak{f}_2(\bar{A}) = \left\{ \begin{bmatrix} 0 & 0 \\ 0 & -1 \end{bmatrix}, \begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix} \right\}, \quad
\partial^* \mathcal{H}(\bar{A}) = \left\{ \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} \right\},
$$

$$
Q^1 = \left\{ \begin{bmatrix} x_1 & x_2 \\ x_2 & x_3 \end{bmatrix} \in \mathbb{M}_+^2 : x_1 \ge 0, \; x_2 = 0, \; x_3 = 0 \right\}, \quad
Q^2 = \left\{ \begin{bmatrix} x_1 & x_2 \\ x_2 & x_3 \end{bmatrix} \in \mathbb{M}_+^2 : x_1 = 1, \; x_2 = 0, \; x_3 = 0 \right\}.
$$

So, we conclude that

$$\begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix} \in \bigcap_{i=1}^{2} co\, T(Q^i, \bar{A}) \quad \text{and} \quad \bigcup_{i=1}^{2} co\, \partial^* \mathfrak{f}_i(\bar{A}) = \left\{ \begin{bmatrix} t & 0 \\ 0 & 0 \end{bmatrix}, \begin{bmatrix} 0 & 0 \\ 0 & s \end{bmatrix} : t, s \in [-1, 1] \right\};$$

thus, we have

$$\left(\bigcup_{i=1}^{2} co\, \partial^* \mathfrak{f}_i(\bar{A})\right)^{-} = \left\{ \begin{bmatrix} x_1 & x_2 \\ x_2 & x_3 \end{bmatrix} : x_1 = 0, \ x_2 = 0, \ x_3 = 0 \right\}.$$

Since,

$$co\, \partial^* \mathcal{H}(\bar{A}) = \left\{ \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} \right\}, \ \text{then} \ \left( -co\, \partial^* \mathcal{H}(\bar{A}) \right)^{-} = \left\{ \begin{bmatrix} x_1 & x_2 \\ x_2 & x_3 \end{bmatrix} : x_1 \ge 0 \right\}.$$

Consequently, we have

$$\left(\bigcup_{i=1}^{2} co\, \partial^* \mathfrak{f}_i(\bar{A})\right)^{-} \bigcap \left(-co\, \partial^* \mathcal{H}(\bar{A})\right)^{-} \bigcap \mathbb{M}_+^2 = \left\{ \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix} \right\} \subset \bigcap_{i=1}^{2} co\, T(Q^i, \bar{A}).$$

Obviously, $C = cone\left(co\, \partial^* \mathcal{H}(\bar{A})\right) - \mathbb{M}^2_+$ is a closed set. Hence, (GGCQ) is satisfied at $\bar{A}$. Now, for $\lambda^{\mathfrak{f}}_1 = 1$, $\lambda^{\mathfrak{f}}_2 = 1$, $\lambda^{\mathcal{H}} = 0$, $\bar{\mathcal{W}} = \begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix}$, $\xi_1 = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix} \in co\, \partial^* \mathfrak{f}_1(\bar{A})$, $\xi_2 = \begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix} \in co\, \partial^* \mathfrak{f}_2(\bar{A})$, and $\eta = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} \in co\, \partial^* \mathcal{H}(\bar{A})$, we have

$$0 = \lambda^{\mathfrak{f}}_1 \xi_1 + \lambda^{\mathfrak{f}}_2 \xi_2 - \lambda^{\mathcal{H}} \eta - \bar{\mathcal{W}} = 1\begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix} + 1\begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix} - 0\begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} - \begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix} \in \lambda^{\mathfrak{f}}_1\, co\, \partial^* \mathfrak{f}_1(\bar{A}) + \lambda^{\mathfrak{f}}_2\, co\, \partial^* \mathfrak{f}_2(\bar{A}) - \lambda^{\mathcal{H}}\, co\, \partial^* \mathcal{H}(\bar{A}) - \bar{\mathcal{W}},$$

and $\left\langle \bar{A}, \bar{\mathcal{W}} \right\rangle = Tr\left( \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix} \right) = 0$. Hence, the strong $S$-stationary conditions are satisfied at the weak efficient point $\bar{A}$.
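The matrix arithmetic in this example is easy to check mechanically. The following sketch (plain Python with 2×2 symmetric matrices as nested lists; all helper names are ours, not from the paper) verifies the stationarity combination and the complementarity condition $\langle \bar{A}, \bar{\mathcal{W}} \rangle = 0$:

```python
# Verify the strong S-stationarity arithmetic of Example 2:
# 0 = l1*xi1 + l2*xi2 - lH*eta - W  and  Tr(A_bar W) = 0.

def add(X, Y):
    return [[X[i][j] + Y[i][j] for j in range(2)] for i in range(2)]

def scale(c, X):
    return [[c * X[i][j] for j in range(2)] for i in range(2)]

def trace_prod(X, Y):
    # <X, Y> = Tr(X Y) for symmetric 2x2 matrices
    return sum(X[i][j] * Y[j][i] for i in range(2) for j in range(2))

A_bar = [[1, 0], [0, 0]]
xi1 = [[0, 0], [0, 0]]        # element of co d*f1(A_bar)
xi2 = [[0, 0], [0, 1]]        # element of co d*f2(A_bar)
eta = [[1, 0], [0, 0]]        # element of co d*H(A_bar)
W = [[0, 0], [0, 1]]
l1, l2, lH = 1, 1, 0

combo = add(add(scale(l1, xi1), scale(l2, xi2)),
            add(scale(-lH, eta), scale(-1, W)))
print(combo)                   # [[0, 0], [0, 0]]
print(trace_prod(A_bar, W))    # 0
```

Both printed values are zero, confirming the displayed combination and the complementarity condition.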

**Corollary 1.** *Let $\bar{A}$ be a local weak efficient solution for $(S-MMPVC)$. Suppose that $\mathfrak{f}_i$, $\mathcal{H}_i$, $\mathcal{G}_i$ admit bounded upper semi-regular convexificators $\partial^* \mathfrak{f}_i(\bar{A})$ $(i \in \mathfrak{I}_{\mathfrak{f}})$, $\partial^* \mathcal{H}_i(\bar{A})$ $(i \in \mathfrak{I}_0)$, $\partial^* \mathcal{G}_i(\bar{A})$ $(i \in \mathfrak{I}_{+0})$, respectively, at $\bar{A}$. If (GGCQ) holds at $\bar{A}$, then there exist $\bar{\lambda}^{\mathfrak{f}}_i > 0$ $(i \in \mathfrak{I}_{\mathfrak{f}})$, $\bar{\lambda}^{\mathcal{G}} \in \mathbb{R}^m$, $\bar{\lambda}^{\mathcal{H}} \in \mathbb{R}^m$, and $\bar{\mathcal{W}} \in \mathbb{M}^n_+$, such that*

$$\begin{split} 0 \in \sum_{i \in \mathfrak{I}_{\mathfrak{f}}} \bar{\lambda}^{\mathfrak{f}}_i\, co\, \partial^* \mathfrak{f}_i(\bar{A}) + \sum_{i=1}^{m} [\bar{\lambda}^{\mathcal{G}}_i\, co\, \partial^* \mathcal{G}_i(\bar{A}) - \bar{\lambda}^{\mathcal{H}}_i\, co\, \partial^* \mathcal{H}_i(\bar{A})] - \bar{\mathcal{W}}, \\ \langle \bar{\mathcal{W}}, \bar{A} \rangle = 0, \ \bar{\lambda}^{\mathcal{H}}_i = 0 \ (i \in \mathfrak{I}_{+0} \cup \mathfrak{I}_{+-}), \ \bar{\lambda}^{\mathcal{H}}_i \ge 0 \ (i \in \mathfrak{I}_{0-} \cup \mathfrak{I}_{00}), \ \bar{\lambda}^{\mathcal{H}}_i \text{ free } (i \in \mathfrak{I}_{0+}), \\ \sum_{i=1}^{p} \bar{\lambda}^{\mathfrak{f}}_i = 1, \ \bar{\lambda}^{\mathcal{G}}_i = 0 \ (i \in \mathfrak{I}_{0+} \cup \mathfrak{I}_{0-} \cup \mathfrak{I}_{00} \cup \mathfrak{I}_{+-}), \ \bar{\lambda}^{\mathcal{G}}_i \ge 0 \ (i \in \mathfrak{I}_{+0}). \end{split}$$

**Proof.** Since all conditions of Theorem 3 are satisfied for some $\lambda^{\mathfrak{f}} > 0$, $\lambda^{\mathcal{H}}, \lambda^{\mathcal{G}} \in \mathbb{R}^m$, and $\mathcal{W}$, we have:

$$0 \in \sum_{i \in \mathfrak{I}_{\mathfrak{f}}} \lambda^{\mathfrak{f}}_i\, co\, \partial^* \mathfrak{f}_i(\bar{A}) + \sum_{i=1}^{m} [\lambda^{\mathcal{G}}_i\, co\, \partial^* \mathcal{G}_i(\bar{A}) - \lambda^{\mathcal{H}}_i\, co\, \partial^* \mathcal{H}_i(\bar{A})] - \mathcal{W}, \tag{23}$$

$$\langle \mathcal{W}, \bar{A} \rangle = 0, \ \lambda^{\mathcal{H}}_i = 0 \ (i \in \mathfrak{I}_{+0} \cup \mathfrak{I}_{+-}), \ \lambda^{\mathcal{H}}_i \ge 0 \ (i \in \mathfrak{I}_{0-} \cup \mathfrak{I}_{00}),$$

$$\lambda^{\mathcal{H}}_i \text{ free } (i \in \mathfrak{I}_{0+}), \ \lambda^{\mathcal{G}}_i = 0 \ (i \in \mathfrak{I}_{0+} \cup \mathfrak{I}_{0-} \cup \mathfrak{I}_{00} \cup \mathfrak{I}_{+-}), \ \lambda^{\mathcal{G}}_i \ge 0 \ (i \in \mathfrak{I}_{+0}).$$

Now, dividing (23) by $\sum_{i=1}^{p} \lambda^{\mathfrak{f}}_i$ and taking

$$\bar{\lambda}^{\mathfrak{f}}_i = \frac{\lambda^{\mathfrak{f}}_i}{\sum_{i=1}^{p} \lambda^{\mathfrak{f}}_i}, \quad \bar{\lambda}^{\mathcal{H}}_i = \frac{\lambda^{\mathcal{H}}_i}{\sum_{i=1}^{p} \lambda^{\mathfrak{f}}_i}, \quad \bar{\lambda}^{\mathcal{G}}_i = \frac{\lambda^{\mathcal{G}}_i}{\sum_{i=1}^{p} \lambda^{\mathfrak{f}}_i}, \quad \bar{\mathcal{W}} = \frac{\mathcal{W}}{\sum_{i=1}^{p} \lambda^{\mathfrak{f}}_i},$$

we obtain the required result.

Now, we propose some index sets in order to state sufficient optimality conditions for $(S-MMPVC)$:

$$\begin{split} \mathfrak{I}^{+}_{00} &:= \{ i \in \mathfrak{I}_{00} : \lambda^{\mathcal{H}}_i > 0 \}, \\ \mathfrak{I}^{0}_{00} &:= \{ i \in \mathfrak{I}_{00} : \lambda^{\mathcal{H}}_i = 0 \}, \\ \mathfrak{I}^{+}_{0-} &:= \{ i \in \mathfrak{I}_{0-} : \lambda^{\mathcal{H}}_i > 0 \}, \\ \mathfrak{I}^{0}_{0-} &:= \{ i \in \mathfrak{I}_{0-} : \lambda^{\mathcal{H}}_i = 0 \}, \\ \mathfrak{I}^{+}_{0+} &:= \{ i \in \mathfrak{I}_{0+} : \lambda^{\mathcal{H}}_i > 0 \}, \\ \mathfrak{I}^{-}_{0+} &:= \{ i \in \mathfrak{I}_{0+} : \lambda^{\mathcal{H}}_i < 0 \}, \\ \mathfrak{I}^{0}_{0+} &:= \{ i \in \mathfrak{I}_{0+} : \lambda^{\mathcal{H}}_i = 0 \}, \\ \mathfrak{I}^{0+}_{+0} &:= \{ i \in \mathfrak{I}_{+0} : \lambda^{\mathcal{H}}_i = 0, \ \lambda^{\mathcal{G}}_i > 0 \}, \\ \mathfrak{I}^{00}_{+0} &:= \{ i \in \mathfrak{I}_{+0} : \lambda^{\mathcal{H}}_i = 0, \ \lambda^{\mathcal{G}}_i = 0 \}. \end{split}$$

The following result is motivated by Sadeghieh et al. ([55], Theorem 9).

**Theorem 4.** *(Sufficient conditions) Suppose $\mathfrak{f}_i$ $(i \in \mathfrak{I}_{\mathfrak{f}})$, $\mathcal{H}_i$ $(i \in \mathfrak{I}_{0+} \cup \mathfrak{I}_{00} \cup \mathfrak{I}_{0-})$, and $\mathcal{G}_i$ $(i \in \mathfrak{I}_{+0})$ admit bounded upper semi-regular convexificators at $\bar{A}$. Assume that the feasible point $\bar{A}$ satisfies the weak $S$-stationary conditions for $(S-MMPVC)$ under a suitable choice of multipliers $\lambda^{\mathfrak{f}} \in \mathbb{R}^p$, $\lambda^{\mathcal{H}} \in \mathbb{R}^m$, $\lambda^{\mathcal{G}} \in \mathbb{R}^m$, $\bar{\mathcal{W}} \in \mathbb{M}^n_+$. If $\mathcal{H}_i$ $(i \in \mathfrak{I}^{-}_{0+})$, $-\mathcal{H}_i$ $(i \in \mathfrak{I}^{+}_{0+} \cup \mathfrak{I}^{+}_{00} \cup \mathfrak{I}^{+}_{0-})$, and $\mathcal{G}_i$ $(i \in \mathfrak{I}^{0+}_{+0})$ are $\partial^*$-quasiconvex, $\mathfrak{f}_i$ $(i \in \mathfrak{I}_{\mathfrak{f}})$ are $\partial^*$-pseudoconvex at $\bar{A}$, and at least one $\lambda^{\mathfrak{f}}_i > 0$, then:*

*(i) $\bar{A}$ is a local weak efficient solution for $(S-MMPVC)$;*

*(ii) if, in addition, $\mathfrak{I}^{-}_{0+} \cup \mathfrak{I}^{0+}_{+0} = \emptyset$, then $\bar{A}$ is a weak efficient solution for $(S-MMPVC)$.*

**Proof.** (*i*) From the continuity of $\mathcal{G}_i$ $(i \in \mathfrak{I}_{0+})$ and $\mathcal{H}_i$ $(i \in \mathfrak{I}_{+0})$, there exist neighborhoods $\mathcal{N}$ and $\mathcal{M}$ of $\bar{A}$, such that

$$\mathcal{H}_i(A) = 0, \ \mathcal{G}_i(A) > 0, \ \forall \ A \in M \cap \mathcal{N}, \ \forall \ i \in \mathfrak{I}_{0+}, \tag{24}$$

$$\mathcal{H}_i(A) > 0, \ \mathcal{G}_i(A) \le 0, \ \forall \ A \in M \cap \mathcal{M}, \ \forall \ i \in \mathfrak{I}_{+0}. \tag{25}$$

Since $\bar{A}$ is a weak $S$-stationary point, there exist $\lambda^{\mathfrak{f}} \in \mathbb{R}^p$, $\lambda^{\mathcal{H}} \in \mathbb{R}^m$, $\lambda^{\mathcal{G}} \in \mathbb{R}^m$, and $\bar{\mathcal{W}}$, not all simultaneously zero, satisfying the weak $S$-stationary conditions. Thus, there exist $\xi_i \in co\, \partial^* \mathfrak{f}_i(\bar{A})$ $(i \in \mathfrak{I}_{\mathfrak{f}})$, $\eta_i \in co\, \partial^* \mathcal{H}_i(\bar{A})$ $(i \in \mathfrak{I}_0)$, and $\zeta_i \in co\, \partial^* \mathcal{G}_i(\bar{A})$ $(i \in \mathfrak{I}_{+0})$, such that

$$\sum_{i \in \mathfrak{I}_{\mathfrak{f}}} \lambda^{\mathfrak{f}}_i \xi_i + \sum_{i \in \mathfrak{I}_{+0}} \lambda^{\mathcal{G}}_i \zeta_i - \sum_{i \in \mathfrak{I}_0} \lambda^{\mathcal{H}}_i \eta_i - \bar{\mathcal{W}} = 0. \tag{26}$$

Suppose, on the contrary, that $\bar{A}$ is not a local weak efficient solution for $(S-MMPVC)$. Then, there exists $B \in M \cap \mathcal{N} \cap \mathcal{M}$, such that

$$\mathfrak{f}_i(B) < \mathfrak{f}_i(\bar{A}), \ \forall \ i \in \mathfrak{I}_{\mathfrak{f}}. \tag{27}$$

By the $\partial^*$-pseudoconvexity of $\mathfrak{f}_i$ $(i \in \mathfrak{I}_{\mathfrak{f}})$ and (27), we obtain

$$\langle \xi_i, B - \bar{A} \rangle < 0, \ \forall \ i \in \mathfrak{I}_{\mathfrak{f}}. \tag{28}$$

By the $\partial^*$-quasiconvexity of the functions $\mathcal{G}_i$ $(i \in \mathfrak{I}^{0+}_{+0})$ and $\mathcal{H}_i$ $(i \in \mathfrak{I}^{-}_{0+})$, together with (24) and (25), we obtain

$$\mathcal{G}_i(B) \le 0 = \mathcal{G}_i(\bar{A}) \implies \langle \zeta_i, B - \bar{A} \rangle \le 0, \ \forall \ i \in \mathfrak{I}^{0+}_{+0}, \tag{29}$$

$$\mathcal{H}_i(B) = 0 = \mathcal{H}_i(\bar{A}) \implies \langle \eta_i, B - \bar{A} \rangle \le 0, \ \forall \ i \in \mathfrak{I}^{-}_{0+}. \tag{30}$$

On the other hand, $\forall \ i \in \mathfrak{I}^{+}_{0+} \cup \mathfrak{I}^{+}_{00} \cup \mathfrak{I}^{+}_{0-}$,

$$-\mathcal{H}_i(B) \le 0 = -\mathcal{H}_i(\bar{A}) \implies \langle -\eta_i, B - \bar{A} \rangle \le 0, \ \forall \ -\eta_i \in -co\, \partial^* \mathcal{H}_i(\bar{A}). \tag{31}$$

Since $\bar{\mathcal{W}}, B \in \mathbb{M}^n_+$, we have

$$-\langle \bar{\mathcal{W}}, B \rangle + \langle \bar{\mathcal{W}}, \bar{A} \rangle = -\langle \bar{\mathcal{W}}, B - \bar{A} \rangle \le 0. \tag{32}$$

Multiplying (28)–(32) by the corresponding multipliers and adding, we obtain a contradiction to (26). Hence the result.

(*ii*) We proceed as in (*i*); since $\mathfrak{I}^{-}_{0+} \cup \mathfrak{I}^{0+}_{+0} = \emptyset$, the neighborhoods $\mathcal{N}$ and $\mathcal{M}$ are not needed, and we obtain the required result.

To validate the sufficient optimality conditions, we present the following example.

**Example 3.** *Consider the following optimization problem:*

$$\begin{aligned} \min \left( \mathfrak{f}_1(A), \mathfrak{f}_2(A) \right), \ \text{subject to } \mathcal{H}_1(A) = -x_2 \ge 0, \ \mathcal{G}_1(A)\mathcal{H}_1(A) = -|x_3| x_2 \le 0, \\ A = \begin{bmatrix} x_1 & x_2 \\ x_2 & x_3 \end{bmatrix} \in \mathbb{M}_{+}^2, \ \text{where } \mathfrak{f}_1(A) = x_2, \ \mathfrak{f}_2(A) = x_3. \end{aligned}$$

*Feasible set,*

$$\begin{aligned} M &= \left\{ \begin{bmatrix} x_1 & x_2 \\ x_2 & x_3 \end{bmatrix} \in \mathbb{M}_+^2 : x_2 \le 0, \ |x_3| x_2 \ge 0 \right\} \\ &= \left\{ \begin{bmatrix} x_1 & x_2 \\ x_2 & x_3 \end{bmatrix} : x_1 \ge 0, \ x_1 x_3 - x_2^2 \ge 0, \ x_2 \le 0, \ |x_3| x_2 \ge 0 \right\}. \end{aligned}$$

*Consider the feasible point $\bar{A} = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}$. We observe that $\mathfrak{f}_1, \mathfrak{f}_2$ are $\partial^*$-pseudoconvex and $-\mathcal{H}_1$ is $\partial^*$-quasiconvex at $\bar{A}$, with $i = 1 \in \mathfrak{I}_{00}$ for $\mathcal{H}_i$ and $i = 1 \notin \mathfrak{I}_{+0}$ for $\mathcal{G}_i$; moreover, $\mathfrak{I}^{-}_{0+} \cup \mathfrak{I}^{0+}_{+0} = \emptyset$. Now, we can find the upper semi-regular convexificator of each function at the point $\bar{A}$ as follows:*

$$\partial^* \mathfrak{f}_1(\bar{A}) = \left\{ \begin{bmatrix} 0 & \frac{1}{2} \\ \frac{1}{2} & 0 \end{bmatrix} \right\}, \ \partial^* \mathfrak{f}_2(\bar{A}) = \left\{ \begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix} \right\}, \ \partial^* \mathcal{H}_1(\bar{A}) = \left\{ \begin{bmatrix} 0 & -\frac{1}{2} \\ -\frac{1}{2} & 0 \end{bmatrix} \right\}.$$

*Thus, for $\lambda^{\mathfrak{f}}_1 = 0$, $\lambda^{\mathfrak{f}}_2 > 0$, $\lambda^{\mathcal{H}}_1 = 0$, and $\bar{\mathcal{W}} = \begin{bmatrix} 0 & 0 \\ 0 & \lambda^{\mathfrak{f}}_2 \end{bmatrix}$, we have*

$$\lambda^{\mathfrak{f}}_1\, co\, \partial^* \mathfrak{f}_1(\bar{A}) + \lambda^{\mathfrak{f}}_2\, co\, \partial^* \mathfrak{f}_2(\bar{A}) - \lambda^{\mathcal{H}}_1\, co\, \partial^* \mathcal{H}_1(\bar{A}) - \bar{\mathcal{W}} = 0.$$

*That is, $\bar{A}$ satisfies the weak $S$-stationary conditions. Hence, $\bar{A}$ is a weak efficient solution, which can also be verified by direct inspection.*
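As with Example 2, the stationarity combination can be checked mechanically. The sketch below fixes the free multiplier $\lambda^{\mathfrak{f}}_2 = 1$ (an illustrative choice; any positive value works) and uses exact rational arithmetic:

```python
# Verify the weak S-stationarity combination of Example 3 with
# l1 = 0, l2 = 1 (> 0), lH = 0 and W = [[0, 0], [0, l2]].
from fractions import Fraction as F

def comb(coeffs_mats):
    # Linear combination of 2x2 matrices with exact rationals.
    out = [[F(0), F(0)], [F(0), F(0)]]
    for c, M in coeffs_mats:
        for i in range(2):
            for j in range(2):
                out[i][j] += c * F(M[i][j])
    return out

df1 = [[0, F(1, 2)], [F(1, 2), 0]]    # d*f1(A_bar)
df2 = [[0, 0], [0, 1]]                # d*f2(A_bar)
dH1 = [[0, F(-1, 2)], [F(-1, 2), 0]]  # d*H1(A_bar)
l1, l2, lH = 0, 1, 0
W = [[0, 0], [0, l2]]

result = comb([(l1, df1), (l2, df2), (-lH, dH1), (-1, W)])
print(result == [[0, 0], [0, 0]])     # True: the combination vanishes
```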

#### **4. Conclusions and Future Remarks**

Golestani and Nobakhtian [11] established optimality conditions for nonsmooth single-objective semidefinite optimization problems. We have established optimality conditions for a more interesting class of nonlinear optimization problems, namely mathematical programs with vanishing constraints (MPVC), which are widely applicable in topology optimization and many real-life problems, and we have further extended single-objective semidefinite optimization problems to the multiobjective setting. We established Fritz John stationary conditions for nonsmooth, nonlinear, semidefinite, multiobjective programs with vanishing constraints using convexificators, and introduced generalized Cottle-type and generalized Guignard-type constraint qualifications to obtain strong *S*-stationary conditions from the Fritz John stationary conditions. Sufficient conditions were also established under generalized convexity assumptions, and we validated the established results through an example. We used the constraint qualification technique motivated by Li [38] to provide some generalized constraint qualifications for semidefinite optimization problems, as well as the linearization technique inspired by Kanzow et al. [56]. Recently, Treanta [41] discussed duality theorems for a special class of quasiinvex multiobjective optimization problems with interval-valued components and established a dual pair of multiobjective interval-valued variational control problems. From the application point of view, the results on multiobjective semidefinite optimization problems can be extended to variational control problems and interval-valued optimization problems, motivated by [40,41,57–61].

**Author Contributions:** Writing-original draft preparation, K.K.L., M.H., S.K.S., J.K.M. and S.K.M.; writing-review and editing, K.K.L., M.H., S.K.S., J.K.M. and S.K.M.; funding acquisition, K.K.L. All authors have read and agreed to the published version of the manuscript.

**Funding:** The second author is financially supported by CSIR-UGC JRF, New Delhi, India, through Reference no.: 1009/(CSIR-UGC NET JUNE 2018). The third author is financially supported by CSIR-UGC JRF, New Delhi, India, through Reference no.: 1272/(CSIR-UGC NET DEC.2016). The fifth author is financially supported by "Research Grant for Faculty" (IoE Scheme) under Dev. Scheme NO. 6031 and Department of Science and Technology, SERB, New Delhi, India, through grant no.: MTR/2018/000121.

**Institutional Review Board Statement:** Not Applicable.

**Informed Consent Statement:** Not Applicable.

**Data Availability Statement:** No data were used to support this study.

**Acknowledgments:** The authors are indebted to the anonymous reviewers for their valuable comments and remarks that helped to improve the presentation and quality of the manuscript.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


### *Article* **Some Hadamard–Fejér Type Inequalities for LR-Convex Interval-Valued Functions**

**Muhammad Bilal Khan 1, Savin Treanță 2, Mohamed S. Soliman 3, Kamsing Nonlaopon 4,\* and Hatim Ghazi Zaini <sup>5</sup>**


**Abstract:** The purpose of this study is to introduce a new class of Hermite–Hadamard inequalities for LR-convex interval-valued functions, known as LR-interval Hermite–Hadamard inequalities, by means of the pseudo-order relation ( ≤*<sup>p</sup>* ). This order relation is defined on interval space. We prove that if an interval-valued function is LR-convex, then the inclusion relation " ⊆ " coincides with the pseudo-order relation " ≤*<sup>p</sup>* " under some suitable conditions. Moreover, the interval Hermite–Hadamard–Fejér inequality is also derived for LR-convex interval-valued functions. These inequalities generalize some new and known results. Useful examples that verify the applicability of the theory developed in this study are presented. The concepts and techniques of this paper may be a starting point for further research in this area.

**Keywords:** interval-valued function; Riemann integral; LR-convex interval-valued function; interval Hermite–Hadamard inequality; interval Hermite–Hadamard–Fejér inequality

#### **1. Introduction**

In the development of pure and applied mathematics [1,2], convexity has played a key role. Due to their resilience, convex sets and convex functions have been refined and expanded in many mathematical fields; see [3–8]. Convexity theory may be used to generate numerous inequalities in the literature. Integral inequalities [9] have uses in linear programming, combinatorics, orthogonal polynomials, quantum theory, number theory, optimization theory, dynamics, and the theory of relativity. Researchers have given this problem a lot of attention [10–14], and it is now regarded as an integrative topic involving economics, mathematics, physics, and statistics [15,16]. The Hermite–Hadamard inequality (*HH*-inequality) is, to the best of our knowledge, a well-known, fundamental, and broadly applied inequality. Other classical inequalities, such as the Olsen, Gagliardo–Nirenberg, Opial, Hardy, Young, Linger, Ostrowski, Levinson, arithmetic–geometric, Ky Fan, Minkowski, Beckenbach–Dresher, and Hölder inequalities, are closely linked to the classical *HH*-inequality [17–20], and it can be put in the following manner.

Let S : *K* → R be a convex function on a convex set *K* and t, *υ* ∈ *K* with t ≤ *υ* . Then,

$$\mathfrak{S}\left(\frac{\mathfrak{t}+\upsilon}{2}\right) \le \frac{1}{\upsilon - \mathfrak{t}} \int_{\mathfrak{t}}^{\upsilon} \mathfrak{S}(\omega)\, d\omega \le \frac{\mathfrak{S}(\mathfrak{t}) + \mathfrak{S}(\upsilon)}{2}. \tag{1}$$
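As a quick numerical illustration of (1), the sketch below compares the three quantities for the convex function S(*ω*) = *ω*² on [0, 2] (our illustrative choice, not from the paper), approximating the integral by a midpoint Riemann sum:

```python
# Check the Hermite-Hadamard inequality (1) for the convex
# function S(w) = w**2 on [t, v] = [0, 2] (illustrative choice).
t, v = 0.0, 2.0
S = lambda w: w * w

n = 100000                              # Riemann-sum resolution
avg = sum(S(t + (k + 0.5) * (v - t) / n) for k in range(n)) / n

mid = S((t + v) / 2)                    # S((t+v)/2) = 1
ends = (S(t) + S(v)) / 2                # (S(t)+S(v))/2 = 2
print(mid, avg, ends)                   # 1.0 <= ~1.3333 <= 2.0
assert mid <= avg <= ends
```

Here the mean value of S over [0, 2] is 4/3, strictly between the midpoint value 1 and the endpoint average 2, as (1) predicts.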

In [21], Fejér established a key extension of the *HH*-inequality, now called the Hermite–Hadamard–Fejér inequality (*HH*-Fejér inequality).

**Citation:** Khan, M.B.; Treant,a, S.; ˇ Soliman, M.S.; Nonlaopon, K.; Zaini, H.G. Some Hadamard–Fejér Type Inequalities for LR-Convex Interval-Valued Functions. *Fractal Fract.* **2022**, *6*, 6. https://doi.org/ 10.3390/fractalfract6010006

Academic Editor: Ricardo Almeida

Received: 5 December 2021 Accepted: 15 December 2021 Published: 23 December 2021

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

Let S : *K* → R be a convex function on a convex set *K*, let t, *υ* ∈ *K* with t ≤ *υ*, and let D : [t, *υ*] → R be a nonnegative, integrable function that is symmetric with respect to (t + *υ*)/2. Then,

$$\mathfrak{S}\left(\frac{\mathfrak{t}+\upsilon}{2}\right) \int_{\mathfrak{t}}^{\upsilon} \mathfrak{D}(\omega)\, d\omega \leq \int_{\mathfrak{t}}^{\upsilon} \mathfrak{S}(\omega)\mathfrak{D}(\omega)\, d\omega \leq \frac{\mathfrak{S}(\mathfrak{t}) + \mathfrak{S}(\upsilon)}{2} \int_{\mathfrak{t}}^{\upsilon} \mathfrak{D}(\omega)\, d\omega. \tag{2}$$

If D(*ω*) = 1, then we obtain (1) from (2). Many classical inequalities may be derived for specific convex functions with the help of inequality (1). Furthermore, in both pure and applied mathematics, these inequalities play a crucial role for convex functions. We encourage readers to explore the literature on generalized convex functions and *HH*-integral inequalities, particularly [22–29] and the references therein.
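The Fejér inequality (2) can be illustrated the same way; below we take S(*ω*) = *ω*² on [0, 2] with the weight D(*ω*) = *ω*(2 − *ω*), which is nonnegative and symmetric about the midpoint 1 (both choices are ours, for illustration only):

```python
# Check the Fejer inequality (2) for S(w) = w**2 on [0, 2] with the
# symmetric weight D(w) = w*(2 - w) (illustrative choices).
t, v = 0.0, 2.0
S = lambda w: w * w
D = lambda w: w * (2.0 - w)                # symmetric about (t+v)/2 = 1

n = 100000
h = (v - t) / n
xs = [t + (k + 0.5) * h for k in range(n)]
int_D = sum(D(w) for w in xs) * h          # exact value: 4/3
int_SD = sum(S(w) * D(w) for w in xs) * h  # exact value: 8/5

left = S((t + v) / 2) * int_D              # 1 * 4/3
right = (S(t) + S(v)) / 2 * int_D          # 2 * 4/3
print(left, int_SD, right)
assert left <= int_SD <= right
```

Numerically, 4/3 ≤ 8/5 ≤ 8/3, so both bounds in (2) hold for this weight.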

Interval analysis, on the other hand, was mostly forgotten for a long time due to a lack of applicability in other fields. Moore [30] and Kulisch and Miranker [31] introduced and researched the notion of interval analysis; it was first utilized in numerical analysis to calculate the error bounds of numerical solutions of a finite state machine. Since then, a number of analysts have focused on and studied interval analysis and interval-valued functions (*I.V-Fs*) in both mathematics and applications. As a result, various writers looked into the literature and applications of neural network output optimization, automatic error analysis, computational physics, robotics, computer graphics, and a variety of other well-known scientific and technology fields. We encourage readers to conduct more research into essential aspects and applications in the literature (see [32–40] and the references therein).

The theory of fuzzy sets and systems has progressed in a number of ways since its introduction five decades ago, as seen in [41]. As a result, it is useful in the study of a variety of issues in pure mathematics and applied sciences, such as operations research, computer science, management sciences, artificial intelligence, control engineering, and decision sciences. Convex analysis has contributed significantly to the advancement of several sectors of practical and pure research. Similarly, the concepts of convexity and non-convexity are important in fuzzy optimization: we obtain fuzzy variational inequalities when we characterize the optimality conditions of convexity, so variational inequality theory and fuzzy complementarity problem theory have established powerful mechanisms for mathematical problems and enjoy a close relationship. Costa [42], Costa and Roman-Flores [43], Flores-Franulic et al. [44], Roman-Flores et al. [45,46], and Chalco-Cano et al. [47,48] have recently generalized several classical discrete and integral inequalities not only to the environment of *I.V-Fs* and fuzzy *I.V-Fs*, but also, through Nikodem et al., to more general set-valued maps. Zhang et al. [49] used a pseudo-order relation to establish a novel version of Jensen's inequalities for set-valued and fuzzy set-valued functions, proving that these Jensen's inequalities are an expanded form of Costa's Jensen's inequalities [42]. Zhao et al. [50], inspired by the literature, introduced *h*-convex *I.V-Fs* and established the *HH*-inequality for *h*-convex *I.V-Fs*. An et al. [51] took a step forward by introducing the class of (*h*1, *h*2)-convex *I.V-Fs* and establishing the interval *HH*-inequality for (*h*1, *h*2)-convex *I.V-Fs*.

This research is structured as follows: preliminary and novel notions and results in interval space and interval-valued convex analysis are presented in Section 2. Section 3 uses LR-convex *I.V-Fs* to generate LR-interval *HH*-inequalities and *HH*-Fejér inequalities. In addition, several intriguing cases are provided to support our findings. Conclusions and future plans are presented in Section 4.

#### **2. Preliminaries**

Let K*C* be the collection of all closed and bounded intervals of R, that is, K*C* = {[Z∗, Z∗] : Z∗, Z<sup>∗</sup> ∈ R and Z∗ ≤ Z∗}. If Z∗ ≥ 0, then [Z∗, Z∗] is called a positive interval. The set of all positive intervals is denoted by K+*C* and defined as K+*C* = {[Z∗, Z∗] ∈ K*C* : Z∗ ≥ 0}.

If [A∗, A∗], [Z∗, Z∗] ∈ K*<sup>C</sup>* and s ∈ R , then arithmetic operations are defined by

$$[\mathfrak{A}\_{\ast}, \mathfrak{A}^{\ast}] + [\mathcal{Z}\_{\ast}, \mathcal{Z}^{\ast}] = [\mathfrak{A}\_{\ast} + \mathcal{Z}\_{\ast}, \mathfrak{A}^{\ast} + \mathcal{Z}^{\ast}]\_{\ast}$$

[A∗, A∗] × [Z∗, Z∗] = [min{A∗Z∗, A∗Z∗, A∗Z∗, A∗Z∗}, max{A∗Z∗, A∗Z∗, A∗Z∗, A∗Z∗}],

$$\mathfrak{s} \cdot [\mathfrak{A}_{*}, \mathfrak{A}^{*}] = \begin{cases} [\mathfrak{s}\mathfrak{A}_{*}, \mathfrak{s}\mathfrak{A}^{*}] & \text{if } \mathfrak{s} > 0, \\ \{0\} & \text{if } \mathfrak{s} = 0, \\ [\mathfrak{s}\mathfrak{A}^{*}, \mathfrak{s}\mathfrak{A}_{*}] & \text{if } \mathfrak{s} < 0. \end{cases}$$

For [A∗, A∗], [Z∗, Z∗] ∈ K*C*, the inclusion " ⊆ " is defined by

[A∗, A∗] ⊆ [Z∗, Z∗], if and only if Z∗ ≤ A∗, A<sup>∗</sup> ≤ Z∗.
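The interval operations above translate directly into code; the following minimal sketch (our own helper names, intervals represented as `(lo, hi)` tuples) implements addition, multiplication, scalar multiplication, and the inclusion test:

```python
# Basic interval arithmetic on closed bounded intervals (lo, hi).

def iadd(A, Z):
    return (A[0] + Z[0], A[1] + Z[1])

def imul(A, Z):
    # Product interval: min/max over all endpoint products.
    p = [A[0]*Z[0], A[0]*Z[1], A[1]*Z[0], A[1]*Z[1]]
    return (min(p), max(p))

def ismul(s, A):
    # Scalar multiple s.[A_lo, A_hi]; s = 0 gives the interval {0}.
    if s > 0:
        return (s * A[0], s * A[1])
    if s == 0:
        return (0, 0)
    return (s * A[1], s * A[0])

def subset(A, Z):
    # [A] is included in [Z] iff Z_lo <= A_lo and A_hi <= Z_hi.
    return Z[0] <= A[0] and A[1] <= Z[1]

print(iadd((1, 2), (3, 5)))      # (4, 7)
print(imul((-1, 2), (3, 4)))     # (-4, 8)
print(ismul(-2, (1, 3)))         # (-6, -2)
print(subset((1, 2), (0, 3)))    # True
```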

**Remark 1. [49].** *(i) The relation* " ≤*<sup>p</sup>* " *defined on* K*<sup>C</sup> by* [A∗, A∗] ≤*<sup>p</sup>* [Z∗, Z∗] *if and only if* A<sup>∗</sup> ≤ Z∗, A<sup>∗</sup> ≤ Z∗, *for all* [A∗, A∗], [Z∗, Z∗] ∈ K*C*, *is a pseudo-order relation. The relation* [A∗, A∗] ≤*<sup>p</sup>* [Z∗, Z∗] *coincides with* [A∗, A∗] ≤ [Z∗, Z∗] *on* K*C*.

*(ii) It can be easily seen that* " ≤*<sup>p</sup>* " *looks similar to "left and right" on the real line* R, *so we call* " ≤*<sup>p</sup>* " *is "left and right" (or "LR" order, in short).*

The concept of the Riemann integral for an *I.V-F*, first introduced by Moore [30], is defined as follows:

**Theorem 1. [30].** *If* S : [t, *υ*] ⊂ R → K*<sup>C</sup> is an I.V-F such that* S(*ω*) = [S∗(*ω*), S∗(*ω*)]*, then* S *is Riemann integrable over* [t, *υ*] *if and only if* S<sup>∗</sup> *and* S<sup>∗</sup> *are both Riemann integrable over* [t, *υ*]*, in which case*

$$(IR)\int_{\mathfrak{t}}^{\upsilon} \mathfrak{S}(\omega)\, d\omega = \left[(R)\int_{\mathfrak{t}}^{\upsilon} \mathfrak{S}_{*}(\omega)\, d\omega, \ (R)\int_{\mathfrak{t}}^{\upsilon} \mathfrak{S}^{*}(\omega)\, d\omega\right].$$

The collection of all Riemann integrable real valued functions and Riemann integrable *I.V-F* is denoted by R[t, *<sup>υ</sup>*] and IR[t, *<sup>υ</sup>*], respectively.
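Theorem 1 reduces interval integration to two real integrals over the endpoint functions; the sketch below illustrates this with midpoint Riemann sums for the I.V-F S(*ω*) = [2*ω*, 2*ω*²] on [1, 4] (the function of Example 1 below):

```python
# Interval Riemann integral via Theorem 1: integrate the endpoint
# functions separately. Illustrated with S(w) = [2w, 2w**2] on [1, 4].

def interval_integral(S_lo, S_hi, t, v, n=100000):
    h = (v - t) / n
    xs = [t + (k + 0.5) * h for k in range(n)]
    lo = sum(S_lo(w) for w in xs) * h
    hi = sum(S_hi(w) for w in xs) * h
    assert lo <= hi            # the result must be a valid interval
    return (lo, hi)

lo, hi = interval_integral(lambda w: 2*w, lambda w: 2*w*w, 1.0, 4.0)
print(round(lo, 4), round(hi, 4))   # 15.0 42.0
```

The exact values are 15 and 42, so the interval integral is [15, 42].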

**Definition 1.** *The real mapping* S : [t, *υ*] → R *is called a convex function if for all ω*, *y* ∈ [t, *υ*] *and ς* ∈ [0, 1]*, we have*

$$\mathfrak{S}(\varsigma\omega + (1-\varsigma)y) \le \varsigma\mathfrak{S}(\omega) + (1-\varsigma)\mathfrak{S}(y). \tag{3}$$

*If inequality (3) is reversed, then* S *is called concave on* [t, *υ*]*. A function* S *is called affine if* S *is both convex and concave. The sets of all convex, concave, and affine functions are denoted by*

$$SX([\mathfrak{t}, \upsilon], \mathbb{R}^{+}), \ SV([\mathfrak{t}, \upsilon], \mathbb{R}^{+}), \ SA([\mathfrak{t}, \upsilon], \mathbb{R}^{+}),$$

*respectively.*

**Definition 2. [50].** *The I.V-F* S : [t, *υ*] → K+*C is called a convex I.V-F if for all ω*, *y* ∈ [t, *υ*] *and ς* ∈ [0, 1]*, the inclusion*

$$\mathfrak{S}(\varsigma\omega + (1-\varsigma)y) \supseteq \varsigma\mathfrak{S}(\omega) + (1-\varsigma)\mathfrak{S}(y) \tag{4}$$

*is valid. If inclusion (4) is reversed, then* S *is called concave on* [t, *υ*]*. An I.V-F* S *is called affine if* S *is both a convex and concave I.V-F. The sets of all convex, concave, and affine I.V-Fs are denoted by*

$$SX([\mathfrak{t}, \upsilon], \mathcal{K}^{+}_{C}), \ SV([\mathfrak{t}, \upsilon], \mathcal{K}^{+}_{C}), \ SA([\mathfrak{t}, \upsilon], \mathcal{K}^{+}_{C}),$$

*respectively.*

**Definition 3. [49].** *The I.V-F* S : [t, *υ*] → K+*C is called an LR-convex I.V-F if for all ω*, *y* ∈ [t, *υ*] *and ς* ∈ [0, 1]*, the inequality*

$$\mathfrak{S}(\varsigma\omega + (1-\varsigma)y) \le_{p} \varsigma\mathfrak{S}(\omega) + (1-\varsigma)\mathfrak{S}(y) \tag{5}$$

*is valid. If inequality (5) is reversed, then* S *is called LR-concave on* [t, *υ*]*. An I.V-F* S *is called LR-affine if* S *is both an LR-convex and LR-concave I.V-F. The sets of all LR-convex, LR-concave, and LR-affine I.V-Fs are denoted by*

$$LRSX([\mathfrak{t}, \upsilon], \mathcal{K}^{+}_{C}), \ LRSV([\mathfrak{t}, \upsilon], \mathcal{K}^{+}_{C}), \ LRSA([\mathfrak{t}, \upsilon], \mathcal{K}^{+}_{C}),$$

*respectively.*

**Theorem 2. [49].** *Let* $\mathfrak{S} : [\mathfrak{t}, \upsilon] \to \mathcal{K}\_{\mathbb{C}}^{+}$ *be an I.V-F defined by* $\mathfrak{S}(\omega) = [\mathfrak{S}\_{\*}(\omega), \mathfrak{S}^{\*}(\omega)]$ *for all* $\omega \in [\mathfrak{t}, \upsilon]$*. Then* $\mathfrak{S} \in LRSX\big([\mathfrak{t}, \upsilon], \mathcal{K}\_{\mathbb{C}}^{+}\big)$ *if and only if* $\mathfrak{S}\_{\*}, \mathfrak{S}^{\*} \in SX([\mathfrak{t}, \upsilon])$.

**Example 1.** *We consider the I.V-F* $\mathfrak{S} : [1, 4] \to \mathcal{K}\_{\mathbb{C}}^{+}$ *defined by* $\mathfrak{S}(\omega) = \big[2\omega,\ 2\omega^{2}\big]$*. Since the end point functions* $\mathfrak{S}\_{\*}(\omega)$ *and* $\mathfrak{S}^{\*}(\omega)$ *are convex functions,* $\mathfrak{S}(\omega)$ *is an LR-convex I.V-F.*
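Theorem 2 reduces LR-convexity of an I.V-F to ordinary convexity of its two end point functions, so Example 1 can be cross-checked numerically. The following sketch is an illustrative addition (not part of the original text); the helper `is_convex` and the grid resolution are our own hypothetical choices, and the snippet merely samples the defining inequality for $\mathfrak{S}\_{\*}(\omega) = 2\omega$ and $\mathfrak{S}^{\*}(\omega) = 2\omega^{2}$ on $[1, 4]$.

```python
# Illustrative check of Example 1 via Theorem 2: an I.V-F is LR-convex
# iff both end point functions are convex.  We sample the convexity
# inequality f(s*x + (1-s)*y) <= s*f(x) + (1-s)*f(y) on a finite grid.
lower = lambda w: 2 * w        # S_*(w): affine, hence convex
upper = lambda w: 2 * w ** 2   # S^*(w): convex

def is_convex(f, a, b, n=20):
    """Return True if the convexity inequality holds on an n-point grid."""
    pts = [a + (b - a) * i / n for i in range(n + 1)]
    wts = [i / n for i in range(n + 1)]
    return all(
        f(s * x + (1 - s) * y) <= s * f(x) + (1 - s) * f(y) + 1e-9
        for x in pts for y in pts for s in wts
    )

print(is_convex(lower, 1, 4), is_convex(upper, 1, 4))  # True True
```

A grid check of course only suggests, rather than proves, convexity; the analytic argument in the example remains the actual justification.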

**Remark 2.** *By Definition 3 and Example 1, it can easily be observed that the set inclusion* "$\supseteq$" *coincides with the relation* "$\leq\_{p}$" *(and conversely) when one of the end point functions* $\mathfrak{S}\_{\*}$ *or* $\mathfrak{S}^{\*}$ *is an affine function, in the following sense: if* $\mathfrak{S} \in SX\big([\mathfrak{t}, \upsilon], \mathcal{K}\_{\mathbb{C}}^{+}\big)$*, then* $\mathfrak{S} \in LRSV\big([\mathfrak{t}, \upsilon], \mathcal{K}\_{\mathbb{C}}^{+}\big)$ *if and only if* $\mathfrak{S}^{\*} \in SA([\mathfrak{t}, \upsilon], \mathbb{R}^{+})$ *and* $\mathfrak{S}\_{\*} \in SX([\mathfrak{t}, \upsilon], \mathbb{R}^{+})$*. Similarly, if* $\mathfrak{S} \in SV\big([\mathfrak{t}, \upsilon], \mathcal{K}\_{\mathbb{C}}^{+}\big)$*, then* $\mathfrak{S} \in LRSX\big([\mathfrak{t}, \upsilon], \mathcal{K}\_{\mathbb{C}}^{+}\big)$ *if and only if* $\mathfrak{S}\_{\*} \in SV([\mathfrak{t}, \upsilon], \mathbb{R}^{+})$ *and* $\mathfrak{S}^{\*} \in SA([\mathfrak{t}, \upsilon], \mathbb{R}^{+})$*.*

**Remark 3.** *From Theorem 2 it is easily seen that if* $\mathfrak{S}\_{\*}(\omega) = \mathfrak{S}^{\*}(\omega)$*, then the LR-convex I.V-F becomes a classical convex function.*

**Example 2.** *We consider the I.V-F* $\mathfrak{S} : [1, 4] \to \mathcal{K}\_{\mathbb{C}}^{+}$ *defined by* $\mathfrak{S}(\omega) = \big[2\omega^{2},\ 2\omega^{2}\big]$*. Since the end point functions* $\mathfrak{S}\_{\*}(\omega)$ *and* $\mathfrak{S}^{\*}(\omega)$ *are equal and convex,* $\mathfrak{S}(\omega)$ *is a convex function.*

#### **3. Interval Inequalities**

In this section, we present two classes of *HH*-inequalities, discuss some related results, and verify them with the help of examples. First of all, we derive the *HH*-inequality for LR-convex *I.V-Fs*.

**Theorem 3.** *Let* $\mathfrak{S} : [\mathfrak{t}, \upsilon] \to \mathcal{K}\_{\mathbb{C}}^{+}$ *be an I.V-F such that* $\mathfrak{S}(\omega) = [\mathfrak{S}\_{\*}(\omega), \mathfrak{S}^{\*}(\omega)]$ *for all* $\omega \in [\mathfrak{t}, \upsilon]$ *and* $\mathfrak{S} \in \mathcal{IR}([\mathfrak{t}, \upsilon])$*. If* $\mathfrak{S} \in LRSX\big([\mathfrak{t}, \upsilon], \mathcal{K}\_{\mathbb{C}}^{+}\big)$*, then*

$$\mathfrak{S}\left(\frac{\mathfrak{t}+\upsilon}{2}\right) \leq\_p \frac{1}{\upsilon - \mathfrak{t}} \left(IR\right) \int\_{\mathfrak{t}}^{\upsilon} \mathfrak{S}(\omega) d\omega \leq\_p \frac{\mathfrak{S}(\mathfrak{t}) + \mathfrak{S}(\upsilon)}{2}.\tag{6}$$

*If* $\mathfrak{S} \in LRSV\big([\mathfrak{t}, \upsilon], \mathcal{K}\_{\mathbb{C}}^{+}\big)$*, then*

$$\mathfrak{S}\left(\frac{\mathbf{t}+\upsilon}{2}\right) \ge\_p \frac{1}{\upsilon - \mathbf{t}} \left(IR\right) \int\_{\mathbf{t}}^{\upsilon} \mathfrak{S}(\omega) d\omega \ge\_p \frac{\mathfrak{S}(\mathbf{t}) + \mathfrak{S}(\upsilon)}{2}.$$

**Proof.** Let $\mathfrak{S} \in LRSX\big([\mathfrak{t}, \upsilon], \mathcal{K}\_{\mathbb{C}}^{+}\big)$ be an LR-convex I.V-F. Then, by hypothesis, we have

$$\begin{cases} 2\mathfrak{S}\_{\*}\left(\frac{\mathfrak{t}+\upsilon}{2}\right) \leq \mathfrak{S}\_{\*}(\varsigma\mathfrak{t} + (1-\varsigma)\upsilon) + \mathfrak{S}\_{\*}((1-\varsigma)\mathfrak{t} + \varsigma\upsilon),\\ 2\mathfrak{S}^{\*}\left(\frac{\mathfrak{t}+\upsilon}{2}\right) \leq \mathfrak{S}^{\*}(\varsigma\mathfrak{t} + (1-\varsigma)\upsilon) + \mathfrak{S}^{\*}((1-\varsigma)\mathfrak{t} + \varsigma\upsilon). \end{cases}$$

Then

$$2\int\_0^1 \mathfrak{S}\_{\*}\left(\frac{\mathfrak{t}+\upsilon}{2}\right)d\varsigma \le \int\_0^1 \mathfrak{S}\_{\*}(\varsigma\mathfrak{t} + (1-\varsigma)\upsilon)\,d\varsigma + \int\_0^1 \mathfrak{S}\_{\*}((1-\varsigma)\mathfrak{t} + \varsigma\upsilon)\,d\varsigma,\\ 2\int\_0^1 \mathfrak{S}^{\*}\left(\frac{\mathfrak{t}+\upsilon}{2}\right)d\varsigma \le \int\_0^1 \mathfrak{S}^{\*}(\varsigma\mathfrak{t} + (1-\varsigma)\upsilon)\,d\varsigma + \int\_0^1 \mathfrak{S}^{\*}((1-\varsigma)\mathfrak{t} + \varsigma\upsilon)\,d\varsigma.$$

It follows that

$$\mathfrak{S}\_{\*}\left(\frac{\mathfrak{t}+\upsilon}{2}\right) \le \frac{1}{\upsilon-\mathfrak{t}}\int\_{\mathfrak{t}}^{\upsilon} \mathfrak{S}\_{\*}(\omega)\,d\omega, \qquad \mathfrak{S}^{\*}\left(\frac{\mathfrak{t}+\upsilon}{2}\right) \le \frac{1}{\upsilon-\mathfrak{t}}\int\_{\mathfrak{t}}^{\upsilon} \mathfrak{S}^{\*}(\omega)\,d\omega.$$

That is

$$\left[\mathfrak{S}\_{\*}\left(\frac{\mathfrak{t}+\upsilon}{2}\right),\ \mathfrak{S}^{\*}\left(\frac{\mathfrak{t}+\upsilon}{2}\right)\right] \leq\_{p} \frac{1}{\upsilon-\mathfrak{t}}\left[\int\_{\mathfrak{t}}^{\upsilon} \mathfrak{S}\_{\*}(\omega)\,d\omega,\ \int\_{\mathfrak{t}}^{\upsilon} \mathfrak{S}^{\*}(\omega)\,d\omega\right].$$

Thus,

$$\mathfrak{S}\left(\frac{\mathfrak{t}+\upsilon}{2}\right) \leq\_p \frac{1}{\upsilon-\mathsf{t}} \, (IR) \int\_{\mathsf{t}}^{\upsilon} \mathfrak{S}(\omega)d\omega. \tag{7}$$

In a similar way as above, we have

$$\frac{1}{\left(\upsilon - \mathbf{t}\right)} \left(IR\right) \int\_{\mathbf{t}}^{\upsilon} \mathfrak{S}(\omega) d\omega \leq\_{p} \frac{\mathfrak{S}(\mathbf{t}) + \mathfrak{S}(\upsilon)}{2}.\tag{8}$$

Combining (7) and (8), we have

$$\mathfrak{S}\left(\frac{\mathfrak{t}+\upsilon}{2}\right) \leq\_p \frac{1}{\upsilon-\mathfrak{t}} \left(IR\right) \int\_{\mathfrak{t}}^{\upsilon} \mathfrak{S}(\omega)d\omega \leq\_p \frac{\mathfrak{S}(\mathfrak{t})+\mathfrak{S}(\upsilon)}{2}.$$

Hence, the required result.

**Remark 4.** *If* $\mathfrak{S}\_{\*}(\omega) = \mathfrak{S}^{\*}(\omega)$*, then Theorem 3 reduces to the classical result for convex functions:*

$$\mathfrak{S}\left(\frac{\mathfrak{t}+\upsilon}{2}\right) \le \frac{1}{\upsilon - \mathfrak{t}} \left(\mathbb{R}\right) \int\_{\mathfrak{t}}^{\upsilon} \mathfrak{S}(\omega) d\omega \le \frac{\mathfrak{S}(\mathfrak{t}) + \mathfrak{S}(\upsilon)}{2}.$$

It is easy to see that, due to the convexity of the end point functions $\mathfrak{S}\_{\*}(\omega)$ and $\mathfrak{S}^{\*}(\omega)$, there are two possibilities to satisfy (1): either both are convex, or both are affine convex functions. However, in the case of interval inclusion, the end point functions $\mathfrak{S}\_{\*}(\omega)$ and $\mathfrak{S}^{\*}(\omega)$ have only one possibility to satisfy (1), namely that both are affine convex functions, because under interval inclusion $\mathfrak{S}\_{\*}(\omega)$ is convex and $\mathfrak{S}^{\*}(\omega)$ is concave; see [50].

**Example 3.** *We consider the function* $\mathfrak{S} : [\mathfrak{t}, \upsilon] = [0, 2] \to \mathcal{K}\_{\mathbb{C}}^{+}$ *defined by* $\mathfrak{S}(\omega) = \big[\omega^{2},\ 2\omega^{2}\big]$*. Since the end point functions* $\mathfrak{S}\_{\*}(\omega) = \omega^{2}$ *and* $\mathfrak{S}^{\*}(\omega) = 2\omega^{2}$ *are convex,* $\mathfrak{S}(\omega)$ *is an LR-convex I.V-F. We now compute the following:*

$$
\mathfrak{S}\_\*\left(\frac{\mathbf{t}+\upsilon}{2}\right) \le \frac{1}{\upsilon - \mathbf{t}} \int\_\mathbf{t}^\upsilon \mathfrak{S}\_\*(\omega) d\omega \le \frac{\mathfrak{S}\_\*(\mathbf{t}) + \mathfrak{S}\_\*(\upsilon)}{2}.
$$

$$
\mathfrak{S}\_\*\left(\frac{\mathbf{t}+\upsilon}{2}\right) = \mathfrak{S}\_\*(1) = 1,
$$

$$
\frac{1}{\upsilon - \mathbf{t}} \int\_\mathbf{t}^\upsilon \mathfrak{S}\_\*(\omega) d\omega = \frac{1}{2} \int\_0^2 \omega^2 d\omega = \frac{4}{3},
$$

$$
\frac{\mathfrak{S}\_\*(\mathbf{t}) + \mathfrak{S}\_\*(\upsilon)}{2} = 2.
$$

*That means*

$$1 \le \frac{4}{3} \le 2.$$

*Similarly, it can easily be shown that*

$$\mathfrak{S}^\*\left(\frac{\mathfrak{t}+\upsilon}{2}\right) \le \frac{1}{\upsilon - \mathfrak{t}} \int\_{\mathfrak{t}}^{\upsilon} \mathfrak{S}^\*(\omega) d\omega \le \frac{\mathfrak{S}^\*(\mathfrak{t}) + \mathfrak{S}^\*(\upsilon)}{2}.$$

*such that*

$$\mathfrak{S}^{\*}\left(\frac{\mathfrak{t}+\upsilon}{2}\right) = \mathfrak{S}^{\*}(1) = 2,$$

$$\frac{1}{\upsilon - \mathfrak{t}} \int\_{\mathfrak{t}}^{\upsilon} \mathfrak{S}^{\*}(\omega)\,d\omega = \frac{1}{2} \int\_0^2 2\omega^2\,d\omega = \frac{8}{3},$$

$$\frac{\mathfrak{S}^\*(\mathfrak{t}) + \mathfrak{S}^\*(\upsilon)}{2} = 4,$$

*from which, it follows that*

$$2 \le \frac{8}{3} \le 4,$$

*that is*

$$[1, 2] \leq\_{p} \left[\frac{4}{3},\ \frac{8}{3}\right] \leq\_{p} [2, 4].$$

*Hence,*

$$\mathfrak{S}\left(\frac{\mathfrak{t}+\upsilon}{2}\right) \leq\_p \frac{1}{\upsilon-\mathsf{t}} \left(IR\right) \int\_{\mathsf{t}}^{\upsilon} \mathfrak{S}(\omega)d\omega \leq\_p \frac{\mathfrak{S}(\mathsf{t})+\mathfrak{S}(\upsilon)}{2}.$$
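The numbers in Example 3 can be reproduced exactly in a few lines. The sketch below is an illustrative addition (not part of the original text) using Python's standard `fractions` module; the closed-form antiderivative of $\omega^{2}$ replaces numerical quadrature, and the variable names are our own.

```python
# Illustrative check of Example 3: S(w) = [w^2, 2w^2] on [t, v] = [0, 2].
# Each tuple holds the lower/upper end point values of the three members
# of the HH-inequality (6).
from fractions import Fraction

t, v = 0, 2
mid      = (Fraction(t + v, 2) ** 2, 2 * Fraction(t + v, 2) ** 2)    # S(1) = [1, 2]
integral = (Fraction(v**3 - t**3, 3 * (v - t)),
            2 * Fraction(v**3 - t**3, 3 * (v - t)))                  # [4/3, 8/3]
ends     = (Fraction(t**2 + v**2, 2),
            Fraction(2 * t**2 + 2 * v**2, 2))                        # [2, 4]

# The LR order <=_p holds componentwise: [1,2] <=_p [4/3,8/3] <=_p [2,4].
assert mid[0] <= integral[0] <= ends[0] and mid[1] <= integral[1] <= ends[1]
print(mid, integral, ends)
```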

**Theorem 4.** *Let* $\mathfrak{S} : [\mathfrak{t}, \upsilon] \to \mathcal{K}\_{\mathbb{C}}^{+}$ *be an I.V-F such that* $\mathfrak{S}(\omega) = [\mathfrak{S}\_{\*}(\omega), \mathfrak{S}^{\*}(\omega)]$ *for all* $\omega \in [\mathfrak{t}, \upsilon]$ *and* $\mathfrak{S} \in \mathcal{IR}([\mathfrak{t}, \upsilon])$*. If* $\mathfrak{S} \in LRSX\big([\mathfrak{t}, \upsilon], \mathcal{K}\_{\mathbb{C}}^{+}\big)$*, then*

$$\mathfrak{S}\left(\frac{\mathfrak{t}+\upsilon}{2}\right) \leq\_p \rhd\_2 \leq\_p \frac{1}{\upsilon-\mathsf{t}} \left(IR\right) \int\_{\mathsf{t}}^{\upsilon} \mathfrak{S}(\omega)d\omega \leq\_p \rhd\_1 \leq\_p \frac{\mathfrak{S}(\mathsf{t})+\mathfrak{S}(\upsilon)}{2},$$

*where*

$$\rhd\_1 = \frac{\frac{\mathfrak{S}(\mathfrak{t}) + \mathfrak{S}(\upsilon)}{2} + \mathfrak{S}\left(\frac{\mathfrak{t}+\upsilon}{2}\right)}{2}, \qquad \rhd\_2 = \frac{\mathfrak{S}\left(\frac{3\mathfrak{t}+\upsilon}{4}\right) + \mathfrak{S}\left(\frac{\mathfrak{t}+3\upsilon}{4}\right)}{2},$$

*and* $\rhd\_1 = [\rhd\_{1\*}, \rhd\_1^{\ \*}]$, $\rhd\_2 = [\rhd\_{2\*}, \rhd\_2^{\ \*}]$.

**Proof.** Taking $\left[\mathfrak{t}, \frac{\mathfrak{t}+\upsilon}{2}\right]$, we have

$$2\mathfrak{S}\left(\frac{\varsigma\mathfrak{t} + (1-\varsigma)\frac{\mathfrak{t}+\upsilon}{2}}{2} + \frac{(1-\varsigma)\mathfrak{t} + \varsigma\frac{\mathfrak{t}+\upsilon}{2}}{2}\right) \leq\_{p} \mathfrak{S}\left(\varsigma\mathfrak{t} + (1-\varsigma)\frac{\mathfrak{t}+\upsilon}{2}\right) + \mathfrak{S}\left((1-\varsigma)\mathfrak{t} + \varsigma\frac{\mathfrak{t}+\upsilon}{2}\right).$$

From which, we have

$$\begin{cases} 2\mathfrak{S}\_{\*}\left(\frac{\varsigma\mathfrak{t}+(1-\varsigma)\frac{\mathfrak{t}+\upsilon}{2}}{2}+\frac{(1-\varsigma)\mathfrak{t}+\varsigma\frac{\mathfrak{t}+\upsilon}{2}}{2}\right) \leq \mathfrak{S}\_{\*}\left(\varsigma\mathfrak{t}+(1-\varsigma)\frac{\mathfrak{t}+\upsilon}{2}\right)+\mathfrak{S}\_{\*}\left((1-\varsigma)\mathfrak{t}+\varsigma\frac{\mathfrak{t}+\upsilon}{2}\right),\\ 2\mathfrak{S}^{\*}\left(\frac{\varsigma\mathfrak{t}+(1-\varsigma)\frac{\mathfrak{t}+\upsilon}{2}}{2}+\frac{(1-\varsigma)\mathfrak{t}+\varsigma\frac{\mathfrak{t}+\upsilon}{2}}{2}\right) \leq \mathfrak{S}^{\*}\left(\varsigma\mathfrak{t}+(1-\varsigma)\frac{\mathfrak{t}+\upsilon}{2}\right)+\mathfrak{S}^{\*}\left((1-\varsigma)\mathfrak{t}+\varsigma\frac{\mathfrak{t}+\upsilon}{2}\right). \end{cases}$$

In consequence, we obtain

$$\frac{\mathfrak{S}\_{\*}\left(\frac{3\mathfrak{t}+\upsilon}{4}\right)}{2} \le \frac{1}{\upsilon - \mathfrak{t}} \int\_{\mathfrak{t}}^{\frac{\mathfrak{t}+\upsilon}{2}} \mathfrak{S}\_{\*}(\omega)\,d\omega, \qquad \frac{\mathfrak{S}^{\*}\left(\frac{3\mathfrak{t}+\upsilon}{4}\right)}{2} \le \frac{1}{\upsilon - \mathfrak{t}} \int\_{\mathfrak{t}}^{\frac{\mathfrak{t}+\upsilon}{2}} \mathfrak{S}^{\*}(\omega)\,d\omega.$$

That is

$$\frac{1}{2}\left[\mathfrak{S}\_{\*}\left(\frac{3\mathfrak{t}+\upsilon}{4}\right),\ \mathfrak{S}^{\*}\left(\frac{3\mathfrak{t}+\upsilon}{4}\right)\right] \leq\_{p} \frac{1}{\upsilon - \mathfrak{t}} \left[\int\_{\mathfrak{t}}^{\frac{\mathfrak{t}+\upsilon}{2}} \mathfrak{S}\_{\*}(\omega)\,d\omega,\ \int\_{\mathfrak{t}}^{\frac{\mathfrak{t}+\upsilon}{2}} \mathfrak{S}^{\*}(\omega)\,d\omega\right].$$

It follows that

$$\frac{\mathfrak{S}\left(\frac{3\mathfrak{t}+\upsilon}{4}\right)}{2} \leq\_p \frac{1}{\upsilon-\mathsf{t}} \,\,\, (IR) \int\_{\mathsf{t}}^{\frac{\mathsf{t}+\upsilon}{2}} \mathfrak{S}(\omega)d\omega. \tag{9}$$

In a similar way as above, we have

$$\frac{\mathfrak{S}\left(\frac{\mathfrak{t}+3\upsilon}{4}\right)}{2} \leq\_{p} \frac{1}{\upsilon - \mathfrak{t}}\ (IR) \int\_{\frac{\mathfrak{t}+\upsilon}{2}}^{\upsilon} \mathfrak{S}(\omega)\,d\omega. \tag{10}$$

Combining (9) and (10), we have

$$\frac{\mathfrak{S}\left(\frac{3\mathfrak{t}+\upsilon}{4}\right) + \mathfrak{S}\left(\frac{\mathfrak{t}+3\upsilon}{4}\right)}{2} \leq\_{p} \frac{1}{\upsilon - \mathfrak{t}}\ (IR) \int\_{\mathfrak{t}}^{\upsilon} \mathfrak{S}(\omega)\,d\omega.$$

By using Theorem 3, we have

$$
\mathfrak{S}\left(\frac{\mathbf{t}+\upsilon}{2}\right) = \mathfrak{S}\left(\frac{1}{2}.\frac{3\mathbf{t}+\upsilon}{4} + \frac{1}{2}.\frac{\mathbf{t}+3\upsilon}{4}\right).
$$

From which, we have

S∗ <sup>t</sup>+*<sup>υ</sup>* 2 = S<sup>∗</sup> 1 2 . 3t+*υ* <sup>4</sup> <sup>+</sup> <sup>1</sup> 2 . t+3*υ* 4 ! , S∗ <sup>t</sup>+*<sup>υ</sup>* 2 = S∗ 1 2 . 3t+*υ* <sup>4</sup> <sup>+</sup> <sup>1</sup> 2 . t+3*υ* 4 ! , ≤ 1 <sup>2</sup>S<sup>∗</sup> 3t+*<sup>υ</sup>* 4 + <sup>1</sup> <sup>2</sup>S<sup>∗</sup> <sup>t</sup>+3*<sup>υ</sup>* 4 , ≤ 1 2S∗ 3t+*<sup>υ</sup>* 4 + <sup>1</sup> 2S∗ <sup>t</sup>+3*<sup>υ</sup>* 4 , = -2∗, = -2 ∗, <sup>≤</sup> <sup>1</sup> *υ*−t *υ* <sup>t</sup> S∗(*ω*)*dω*, <sup>≤</sup> <sup>1</sup> *υ*−t *υ* <sup>t</sup> S∗(*ω*)*dω*, <sup>≤</sup> <sup>1</sup> 2 S∗(t)+S∗(*υ*) <sup>2</sup> + S<sup>∗</sup> <sup>t</sup>+*<sup>υ</sup>* 2 , <sup>≤</sup> <sup>1</sup> 2 S∗(t)+S∗(*υ*) <sup>2</sup> <sup>+</sup> <sup>S</sup>∗ <sup>t</sup>+*<sup>υ</sup>* 2 , = -1∗, = -1 ∗, <sup>≤</sup> <sup>1</sup> 2 S∗(t)+S∗(*υ*) <sup>2</sup> <sup>+</sup> <sup>S</sup>∗(t)+S∗(*υ*) 2 , <sup>≤</sup> <sup>1</sup> 2 S∗(t)+S∗(*υ*) <sup>2</sup> <sup>+</sup> <sup>S</sup>∗(t)+S∗(*υ*) 2 , = <sup>S</sup>∗(t)+S∗(*υ*) <sup>2</sup> , = <sup>S</sup>∗(t)+S∗(*υ*) <sup>2</sup> ,

that is

$$\mathfrak{S}\left(\frac{\mathfrak{t}+\upsilon}{2}\right) \leq\_{p} \rhd\_2 \leq\_{p} \frac{1}{\upsilon-\mathfrak{t}}\ (IR) \int\_{\mathfrak{t}}^{\upsilon} \mathfrak{S}(\omega)\,d\omega \leq\_{p} \rhd\_1 \leq\_{p} \frac{\mathfrak{S}(\mathfrak{t})+\mathfrak{S}(\upsilon)}{2},$$

and hence the result follows.

**Example 4.** *We consider the function* $\mathfrak{S} : [\mathfrak{t}, \upsilon] = [0, 2] \to \mathcal{K}\_{\mathbb{C}}^{+}$ *defined by* $\mathfrak{S}(\omega) = \big[\omega^{2},\ 2\omega^{2}\big]$*, as in Example 3; then* $\mathfrak{S}(\omega)$ *is an LR-convex I.V-F satisfying (10). We have* $\mathfrak{S}\_{\*}(\omega) = \omega^{2}$ *and* $\mathfrak{S}^{\*}(\omega) = 2\omega^{2}$*. We now compute the following:*

$$\begin{aligned} \frac{\mathfrak{S}\_{\*}(\mathfrak{t})+\mathfrak{S}\_{\*}(\upsilon)}{2} &= 2, & \frac{\mathfrak{S}^{\*}(\mathfrak{t})+\mathfrak{S}^{\*}(\upsilon)}{2} &= 4, \\ \rhd\_{1\*} = \frac{\frac{\mathfrak{S}\_{\*}(\mathfrak{t})+\mathfrak{S}\_{\*}(\upsilon)}{2}+\mathfrak{S}\_{\*}\left(\frac{\mathfrak{t}+\upsilon}{2}\right)}{2} &= \frac{3}{2}, & \rhd\_1^{\ \*} = \frac{\frac{\mathfrak{S}^{\*}(\mathfrak{t})+\mathfrak{S}^{\*}(\upsilon)}{2}+\mathfrak{S}^{\*}\left(\frac{\mathfrak{t}+\upsilon}{2}\right)}{2} &= 3, \\ \rhd\_{2\*} = \frac{\mathfrak{S}\_{\*}\left(\frac{3\mathfrak{t}+\upsilon}{4}\right)+\mathfrak{S}\_{\*}\left(\frac{\mathfrak{t}+3\upsilon}{4}\right)}{2} &= \frac{5}{4}, & \rhd\_2^{\ \*} = \frac{\mathfrak{S}^{\*}\left(\frac{3\mathfrak{t}+\upsilon}{4}\right)+\mathfrak{S}^{\*}\left(\frac{\mathfrak{t}+3\upsilon}{4}\right)}{2} &= \frac{5}{2}, \end{aligned}$$

*Then we obtain that*

$$\begin{aligned} 1 \le \frac{5}{4} \le \frac{4}{3} \le \frac{3}{2} \le 2, \\ 2 \le \frac{5}{2} \le \frac{8}{3} \le 3 \le 4, \end{aligned}$$

*Hence, Theorem 4 is verified.*
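As a numerical companion to Example 4 (an illustrative addition, not part of the original text), the sketch below evaluates all five members of the refined chain of Theorem 4 for both end point functions; the exact mean $c(\upsilon^{3}-\mathfrak{t}^{3})/(3(\upsilon-\mathfrak{t}))$ of $c\,\omega^{2}$ is used instead of quadrature, and the variable names are our own.

```python
# Illustrative check of Example 4: the chain of Theorem 4 for
# S_*(w) = w^2 and S^*(w) = 2w^2 on [0, 2], in exact rational arithmetic.
from fractions import Fraction as F

t, v = F(0), F(2)
rows = []
for c in (F(1), F(2)):                                  # coefficients of c*w^2
    S = lambda w, c=c: c * w ** 2
    mid  = S((t + v) / 2)                               # S((t+v)/2)
    tri2 = (S((3*t + v) / 4) + S((t + 3*v) / 4)) / 2    # |>_2 end point
    mean = c * (v**3 - t**3) / (3 * (v - t))            # integral mean of c*w^2
    tri1 = ((S(t) + S(v)) / 2 + mid) / 2                # |>_1 end point
    ends = (S(t) + S(v)) / 2                            # (S(t)+S(v))/2
    assert mid <= tri2 <= mean <= tri1 <= ends          # refined HH chain
    rows.append((mid, tri2, mean, tri1, ends))
print(rows)
```

The assertions reproduce exactly the chains $1 \le \frac{5}{4} \le \frac{4}{3} \le \frac{3}{2} \le 2$ and $2 \le \frac{5}{2} \le \frac{8}{3} \le 3 \le 4$ from the example.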

**Theorem 5.** *Let* $\mathfrak{S}, g : [\mathfrak{t}, \upsilon] \to \mathcal{K}\_{\mathbb{C}}^{+}$ *be two I.V-Fs such that* $\mathfrak{S}(\omega) = [\mathfrak{S}\_{\*}(\omega), \mathfrak{S}^{\*}(\omega)]$ *and* $g(\omega) = [g\_{\*}(\omega), g^{\*}(\omega)]$ *for all* $\omega \in [\mathfrak{t}, \upsilon]$*, and* $\mathfrak{S}g \in \mathcal{IR}([\mathfrak{t}, \upsilon])$*. If* $\mathfrak{S}, g \in LRSX\big([\mathfrak{t}, \upsilon], \mathcal{K}\_{\mathbb{C}}^{+}\big)$*, then*

$$\frac{1}{\upsilon - \mathfrak{t}}\ (IR) \int\_{\mathfrak{t}}^{\upsilon} \mathfrak{S}(\omega)g(\omega)\,d\omega \leq\_{p} \frac{\mathfrak{B}(\mathfrak{t}, \upsilon)}{3} + \frac{\mathfrak{C}(\mathfrak{t}, \upsilon)}{6},$$

*where* $\mathfrak{B}(\mathfrak{t}, \upsilon) = \mathfrak{S}(\mathfrak{t})g(\mathfrak{t}) + \mathfrak{S}(\upsilon)g(\upsilon)$ *and* $\mathfrak{C}(\mathfrak{t}, \upsilon) = \mathfrak{S}(\mathfrak{t})g(\upsilon) + \mathfrak{S}(\upsilon)g(\mathfrak{t})$*, with* $\mathfrak{B}(\mathfrak{t}, \upsilon) = [\mathfrak{B}\_{\*}((\mathfrak{t}, \upsilon)), \mathfrak{B}^{\*}((\mathfrak{t}, \upsilon))]$ *and* $\mathfrak{C}(\mathfrak{t}, \upsilon) = [\mathfrak{C}\_{\*}((\mathfrak{t}, \upsilon)), \mathfrak{C}^{\*}((\mathfrak{t}, \upsilon))]$.

**Proof.** Since $\mathfrak{S}, g \in LRSX\big([\mathfrak{t}, \upsilon], \mathcal{K}\_{\mathbb{C}}^{+}\big)$, we have

$$\begin{aligned} \mathfrak{S}\_{\*}(\varsigma\mathfrak{t} + (1 - \varsigma)\upsilon) &\leq \varsigma \mathfrak{S}\_{\*}(\mathfrak{t}) + (1 - \varsigma) \mathfrak{S}\_{\*}(\upsilon), \\ \mathfrak{S}^{\*}(\varsigma\mathfrak{t} + (1 - \varsigma)\upsilon) &\leq \varsigma \mathfrak{S}^{\*}(\mathfrak{t}) + (1 - \varsigma) \mathfrak{S}^{\*}(\upsilon). \end{aligned}$$

And

$$\begin{aligned} g\_{\*}(\varsigma\mathfrak{t} + (1 - \varsigma)\upsilon) &\leq \varsigma g\_{\*}(\mathfrak{t}) + (1 - \varsigma)g\_{\*}(\upsilon), \\ g^{\*}(\varsigma\mathfrak{t} + (1 - \varsigma)\upsilon) &\leq \varsigma g^{\*}(\mathfrak{t}) + (1 - \varsigma)g^{\*}(\upsilon). \end{aligned}$$

Since $\mathfrak{S}$ and $g$ take values in $\mathcal{K}\_{\mathbb{C}}^{+}$, we have $0 \leq\_{p} \mathfrak{S}(\omega)$ and $0 \leq\_{p} g(\omega)$; multiplying the above inequalities, we obtain

$$\begin{aligned} \mathfrak{S}\_{\*}(\varsigma\mathfrak{t} + (1-\varsigma)\upsilon)\,g\_{\*}(\varsigma\mathfrak{t} + (1-\varsigma)\upsilon) &\le \big(\varsigma\mathfrak{S}\_{\*}(\mathfrak{t}) + (1-\varsigma)\mathfrak{S}\_{\*}(\upsilon)\big)\big(\varsigma g\_{\*}(\mathfrak{t}) + (1-\varsigma)g\_{\*}(\upsilon)\big) \\ &= \mathfrak{S}\_{\*}(\mathfrak{t})g\_{\*}(\mathfrak{t})\varsigma^{2} + \mathfrak{S}\_{\*}(\upsilon)g\_{\*}(\upsilon)(1-\varsigma)^{2} \\ &\quad + \mathfrak{S}\_{\*}(\mathfrak{t})g\_{\*}(\upsilon)\varsigma(1-\varsigma) + \mathfrak{S}\_{\*}(\upsilon)g\_{\*}(\mathfrak{t})\varsigma(1-\varsigma), \\ \mathfrak{S}^{\*}(\varsigma\mathfrak{t} + (1-\varsigma)\upsilon)\,g^{\*}(\varsigma\mathfrak{t} + (1-\varsigma)\upsilon) &\le \big(\varsigma\mathfrak{S}^{\*}(\mathfrak{t}) + (1-\varsigma)\mathfrak{S}^{\*}(\upsilon)\big)\big(\varsigma g^{\*}(\mathfrak{t}) + (1-\varsigma)g^{\*}(\upsilon)\big) \\ &= \mathfrak{S}^{\*}(\mathfrak{t})g^{\*}(\mathfrak{t})\varsigma^{2} + \mathfrak{S}^{\*}(\upsilon)g^{\*}(\upsilon)(1-\varsigma)^{2} \\ &\quad + \mathfrak{S}^{\*}(\mathfrak{t})g^{\*}(\upsilon)\varsigma(1-\varsigma) + \mathfrak{S}^{\*}(\upsilon)g^{\*}(\mathfrak{t})\varsigma(1-\varsigma). \end{aligned}$$

Integrating both sides of the above inequalities over $[0, 1]$, we obtain

$$\begin{aligned} \int\_{0}^{1} \mathfrak{S}\_{\*}(\varsigma\mathfrak{t} + (1-\varsigma)\upsilon)g\_{\*}(\varsigma\mathfrak{t} + (1-\varsigma)\upsilon)\,d\varsigma &= \frac{1}{\upsilon-\mathfrak{t}} \int\_{\mathfrak{t}}^{\upsilon} \mathfrak{S}\_{\*}(\omega)g\_{\*}(\omega)\,d\omega \\ &\leq \big(\mathfrak{S}\_{\*}(\mathfrak{t})g\_{\*}(\mathfrak{t}) + \mathfrak{S}\_{\*}(\upsilon)g\_{\*}(\upsilon)\big) \int\_{0}^{1} \varsigma^{2}\,d\varsigma \\ &\quad + \big(\mathfrak{S}\_{\*}(\mathfrak{t})g\_{\*}(\upsilon) + \mathfrak{S}\_{\*}(\upsilon)g\_{\*}(\mathfrak{t})\big) \int\_{0}^{1} \varsigma(1-\varsigma)\,d\varsigma, \\ \int\_{0}^{1} \mathfrak{S}^{\*}(\varsigma\mathfrak{t} + (1-\varsigma)\upsilon)g^{\*}(\varsigma\mathfrak{t} + (1-\varsigma)\upsilon)\,d\varsigma &= \frac{1}{\upsilon-\mathfrak{t}} \int\_{\mathfrak{t}}^{\upsilon} \mathfrak{S}^{\*}(\omega)g^{\*}(\omega)\,d\omega \\ &\leq \big(\mathfrak{S}^{\*}(\mathfrak{t})g^{\*}(\mathfrak{t}) + \mathfrak{S}^{\*}(\upsilon)g^{\*}(\upsilon)\big) \int\_{0}^{1} \varsigma^{2}\,d\varsigma \\ &\quad + \big(\mathfrak{S}^{\*}(\mathfrak{t})g^{\*}(\upsilon) + \mathfrak{S}^{\*}(\upsilon)g^{\*}(\mathfrak{t})\big) \int\_{0}^{1} \varsigma(1-\varsigma)\,d\varsigma. \end{aligned}$$

It follows that,

$$\begin{aligned} \frac{1}{\upsilon-\mathfrak{t}} \int\_{\mathfrak{t}}^{\upsilon} \mathfrak{S}\_{\*}(\omega)g\_{\*}(\omega)\,d\omega &\le \mathfrak{B}\_{\*}((\mathfrak{t},\upsilon)) \int\_{0}^{1} \varsigma^{2}\,d\varsigma + \mathfrak{C}\_{\*}((\mathfrak{t},\upsilon)) \int\_{0}^{1} \varsigma(1-\varsigma)\,d\varsigma, \\ \frac{1}{\upsilon-\mathfrak{t}} \int\_{\mathfrak{t}}^{\upsilon} \mathfrak{S}^{\*}(\omega)g^{\*}(\omega)\,d\omega &\le \mathfrak{B}^{\*}((\mathfrak{t},\upsilon)) \int\_{0}^{1} \varsigma^{2}\,d\varsigma + \mathfrak{C}^{\*}((\mathfrak{t},\upsilon)) \int\_{0}^{1} \varsigma(1-\varsigma)\,d\varsigma, \end{aligned}$$

that is

$$\frac{1}{\upsilon - \mathfrak{t}} \left[\int\_{\mathfrak{t}}^{\upsilon} \mathfrak{S}\_{\*}(\omega)g\_{\*}(\omega)\,d\omega,\ \int\_{\mathfrak{t}}^{\upsilon} \mathfrak{S}^{\*}(\omega)g^{\*}(\omega)\,d\omega\right] \leq\_{p} \left[\frac{\mathfrak{B}\_{\*}((\mathfrak{t},\upsilon))}{3},\ \frac{\mathfrak{B}^{\*}((\mathfrak{t},\upsilon))}{3}\right] + \left[\frac{\mathfrak{C}\_{\*}((\mathfrak{t},\upsilon))}{6},\ \frac{\mathfrak{C}^{\*}((\mathfrak{t},\upsilon))}{6}\right].$$
 
Thus,

$$\frac{1}{\upsilon - \mathfrak{t}}\ (IR) \int\_{\mathfrak{t}}^{\upsilon} \mathfrak{S}(\omega)g(\omega)\,d\omega \leq\_{p} \frac{\mathfrak{B}(\mathfrak{t},\upsilon)}{3} + \frac{\mathfrak{C}(\mathfrak{t},\upsilon)}{6},$$

and the theorem has been established.

**Theorem 6.** *Let* $\mathfrak{S}, g : [\mathfrak{t}, \upsilon] \to \mathcal{K}\_{\mathbb{C}}^{+}$ *be two I.V-Fs such that* $\mathfrak{S}(\omega) = [\mathfrak{S}\_{\*}(\omega), \mathfrak{S}^{\*}(\omega)]$ *and* $g(\omega) = [g\_{\*}(\omega), g^{\*}(\omega)]$ *for all* $\omega \in [\mathfrak{t}, \upsilon]$*, and* $\mathfrak{S}g \in \mathcal{IR}([\mathfrak{t}, \upsilon])$*. If* $\mathfrak{S}, g \in LRSX\big([\mathfrak{t}, \upsilon], \mathcal{K}\_{\mathbb{C}}^{+}\big)$*, then*

$$2\mathfrak{S}\left(\frac{\mathfrak{t}+\upsilon}{2}\right)g\left(\frac{\mathfrak{t}+\upsilon}{2}\right) \leq\_{p} \frac{1}{\upsilon-\mathfrak{t}}\ (IR)\int\_{\mathfrak{t}}^{\upsilon}\mathfrak{S}(\omega)g(\omega)\,d\omega + \frac{\mathfrak{B}(\mathfrak{t},\upsilon)}{6} + \frac{\mathfrak{C}(\mathfrak{t},\upsilon)}{3},$$

*where* $\mathfrak{B}(\mathfrak{t}, \upsilon) = \mathfrak{S}(\mathfrak{t})g(\mathfrak{t}) + \mathfrak{S}(\upsilon)g(\upsilon)$ *and* $\mathfrak{C}(\mathfrak{t}, \upsilon) = \mathfrak{S}(\mathfrak{t})g(\upsilon) + \mathfrak{S}(\upsilon)g(\mathfrak{t})$*, with* $\mathfrak{B}(\mathfrak{t}, \upsilon) = [\mathfrak{B}\_{\*}((\mathfrak{t}, \upsilon)), \mathfrak{B}^{\*}((\mathfrak{t}, \upsilon))]$ *and* $\mathfrak{C}(\mathfrak{t}, \upsilon) = [\mathfrak{C}\_{\*}((\mathfrak{t}, \upsilon)), \mathfrak{C}^{\*}((\mathfrak{t}, \upsilon))]$.

**Proof.** By hypothesis, we have

S∗ <sup>t</sup>+*<sup>υ</sup>* 2 *g*∗ <sup>t</sup>+*<sup>υ</sup>* 2 S∗ <sup>t</sup>+*<sup>υ</sup>* 2 *g*∗ <sup>t</sup>+*<sup>υ</sup>* 2 <sup>≤</sup> <sup>1</sup> 4 ' <sup>S</sup>∗(*ς*<sup>t</sup> <sup>+</sup> (<sup>1</sup> <sup>−</sup> *<sup>ς</sup>*)*υ*)*g*∗(*ς*<sup>t</sup> <sup>+</sup> (<sup>1</sup> <sup>−</sup> *<sup>ς</sup>*)*υ*) +S∗(*ς*t + (1 − *ς*)*υ*)*g*∗((1 − *ς*)t + *ςυ*) ( +<sup>1</sup> 4 ' <sup>S</sup>∗((<sup>1</sup> <sup>−</sup> *<sup>ς</sup>*)<sup>t</sup> <sup>+</sup> *ςυ*)*g*∗(*ς*<sup>t</sup> <sup>+</sup> (<sup>1</sup> <sup>−</sup> *<sup>ς</sup>*)*υ*) +S∗((1 − *ς*)t + *ςυ*)*g*∗((1 − *ς*)t + *ςυ*) ( , <sup>≤</sup> <sup>1</sup> 4 ' <sup>S</sup>∗(*ς*<sup>t</sup> <sup>+</sup> (<sup>1</sup> <sup>−</sup> *<sup>ς</sup>*)*υ*)*g*∗(*ς*<sup>t</sup> <sup>+</sup> (<sup>1</sup> <sup>−</sup> *<sup>ς</sup>*)*υ*) +S∗(*ς*t + (1 − *ς*)*υ*)*g*∗((1 − *ς*)t + *ςυ*) ( +<sup>1</sup> 4 ' <sup>S</sup>∗((<sup>1</sup> <sup>−</sup> *<sup>ς</sup>*)<sup>t</sup> <sup>+</sup> *ςυ*)*g*∗(*ς*<sup>t</sup> <sup>+</sup> (<sup>1</sup> <sup>−</sup> *<sup>ς</sup>*)*υ*) +S∗((1 − *ς*)t + *ςυ*)*g*∗((1 − *ς*)t + *ςυ*) ( , <sup>≤</sup> <sup>1</sup> 4 ' <sup>S</sup>∗(*ς*<sup>t</sup> <sup>+</sup> (<sup>1</sup> <sup>−</sup> *<sup>ς</sup>*)*υ*)*g*∗(*ς*<sup>t</sup> <sup>+</sup> (<sup>1</sup> <sup>−</sup> *<sup>ς</sup>*)*υ*) +S∗((1 − *ς*)t + *ςυ*)*g*∗((1 − *ς*)t + *ςυ*) ( +<sup>1</sup> 4 ⎡ ⎢ ⎢ ⎢ ⎢ ⎣ (*ς*S∗(t) + (1 − *ς*)S∗(*υ*)) ((1 − *ς*)*g*∗(t) + *ςg*∗(*υ*)) +((1 − *ς*)S∗(t) + *ς*S∗(*υ*)) (*ςg*∗(t) + (1 − *ς*)*g*∗(*υ*)) ⎤ ⎥ ⎥ ⎥ ⎥ ⎦ , <sup>≤</sup> <sup>1</sup> 4 ' <sup>S</sup>∗(*ς*<sup>t</sup> <sup>+</sup> (<sup>1</sup> <sup>−</sup> *<sup>ς</sup>*)*υ*)*g*∗(*ς*<sup>t</sup> <sup>+</sup> (<sup>1</sup> <sup>−</sup> *<sup>ς</sup>*)*υ*) +S∗((1 − *ς*)t + *ςυ*)*g*∗((1 − *ς*)t + *ςυ*) ( +<sup>1</sup> 4 ⎡ ⎢ ⎢ ⎢ ⎢ ⎣ (*ς*S∗(t) + (1 − *ς*)S∗(*υ*)) ((1 − *ς*)*g*∗(t) + *ςg*∗(*υ*)) +((1 − *ς*)S∗(t) + *ς*S∗(*υ*)) (*ςg*∗(t) + (1 − *ς*)*g*∗(*υ*)) ⎤ ⎥ ⎥ ⎥ ⎥ ⎦ , = <sup>1</sup> 4 ' <sup>S</sup>∗(*ς*<sup>t</sup> <sup>+</sup> 
(<sup>1</sup> <sup>−</sup> *<sup>ς</sup>*)*υ*)*g*∗(*ς*<sup>t</sup> <sup>+</sup> (<sup>1</sup> <sup>−</sup> *<sup>ς</sup>*)*υ*) +S∗((1 − *ς*)t + *ςυ*)*g*∗((1 − *ς*)t + *ςυ*) ( +<sup>1</sup> 2 ⎡ ⎣ *<sup>ς</sup>*<sup>2</sup> <sup>+</sup> (<sup>1</sup> <sup>−</sup> *<sup>ς</sup>*) 2 C∗((t, *υ*)) +{*ς*(1 − *ς*) + (1 − *ς*)*ς*}B∗((t, *υ*)) ⎤ ⎦, = <sup>1</sup> 4 ' <sup>S</sup>∗(*ς*<sup>t</sup> <sup>+</sup> (<sup>1</sup> <sup>−</sup> *<sup>ς</sup>*)*υ*)*g*∗(*ς*<sup>t</sup> <sup>+</sup> (<sup>1</sup> <sup>−</sup> *<sup>ς</sup>*)*υ*) +S∗((1 − *ς*)t + *ςυ*)*g*∗((1 − *ς*)t + *ςυ*) ( +<sup>1</sup> 2 ⎡ ⎣ *<sup>ς</sup>*<sup>2</sup> <sup>+</sup> (<sup>1</sup> <sup>−</sup> *<sup>ς</sup>*) 2 C∗((t, *υ*)) +{*ς*(1 − *ς*) + (1 − *ς*)*ς*}B∗((t, *υ*)) ⎤ ⎦.

Integrating over $[0, 1]$, we have

$$\begin{aligned} 2\mathfrak{S}\_{\*}\left(\frac{\mathfrak{t}+\upsilon}{2}\right)g\_{\*}\left(\frac{\mathfrak{t}+\upsilon}{2}\right) &\le \frac{1}{\upsilon-\mathfrak{t}}\int\_{\mathfrak{t}}^{\upsilon}\mathfrak{S}\_{\*}(\omega)g\_{\*}(\omega)\,d\omega + \frac{\mathfrak{B}\_{\*}((\mathfrak{t},\upsilon))}{6} + \frac{\mathfrak{C}\_{\*}((\mathfrak{t},\upsilon))}{3}, \\ 2\mathfrak{S}^{\*}\left(\frac{\mathfrak{t}+\upsilon}{2}\right)g^{\*}\left(\frac{\mathfrak{t}+\upsilon}{2}\right) &\le \frac{1}{\upsilon-\mathfrak{t}}\int\_{\mathfrak{t}}^{\upsilon}\mathfrak{S}^{\*}(\omega)g^{\*}(\omega)\,d\omega + \frac{\mathfrak{B}^{\*}((\mathfrak{t},\upsilon))}{6} + \frac{\mathfrak{C}^{\*}((\mathfrak{t},\upsilon))}{3}, \end{aligned}$$

that is

$$2\mathfrak{S}\left(\frac{\mathfrak{t}+\upsilon}{2}\right)g\left(\frac{\mathfrak{t}+\upsilon}{2}\right) \leq\_{p} \frac{1}{\upsilon-\mathfrak{t}}\ (IR)\int\_{\mathfrak{t}}^{\upsilon}\mathfrak{S}(\omega)g(\omega)\,d\omega + \frac{\mathfrak{B}(\mathfrak{t},\upsilon)}{6} + \frac{\mathfrak{C}(\mathfrak{t},\upsilon)}{3}.$$

Hence, the required result.

**Example 5.** *We consider the I.V-Fs* $\mathfrak{S}, g : [\mathfrak{t}, \upsilon] = [0, 1] \to \mathcal{K}\_{\mathbb{C}}^{+}$ *defined by* $\mathfrak{S}(\omega) = \big[2\omega^{2},\ 4\omega^{2}\big]$ *and* $g(\omega) = [\omega,\ 2\omega]$*. Since the end point functions* $\mathfrak{S}\_{\*}(\omega) = 2\omega^{2}$*,* $\mathfrak{S}^{\*}(\omega) = 4\omega^{2}$*,* $g\_{\*}(\omega) = \omega$*, and* $g^{\*}(\omega) = 2\omega$ *are convex functions,* $\mathfrak{S}$ *and* $g$ *are both LR-convex I.V-Fs. We now compute the following:*

$$\begin{aligned} \frac{1}{\upsilon-\mathfrak{t}} \int\_{\mathfrak{t}}^{\upsilon} \mathfrak{S}\_{\*}(\omega)g\_{\*}(\omega)\,d\omega &= \frac{1}{2}, & \frac{1}{\upsilon-\mathfrak{t}} \int\_{\mathfrak{t}}^{\upsilon} \mathfrak{S}^{\*}(\omega)g^{\*}(\omega)\,d\omega &= 2, \\ \frac{\mathfrak{B}\_{\*}((\mathfrak{t},\upsilon))}{3} &= \frac{2}{3}, & \frac{\mathfrak{B}^{\*}((\mathfrak{t},\upsilon))}{3} &= \frac{8}{3}, \\ \frac{\mathfrak{C}\_{\*}((\mathfrak{t},\upsilon))}{6} &= 0, & \frac{\mathfrak{C}^{\*}((\mathfrak{t},\upsilon))}{6} &= 0, \end{aligned}$$

*that means*

$$\begin{aligned} \frac{1}{2} &\le \frac{2}{3} + 0 = \frac{2}{3}, \\ 2 &\le \frac{8}{3} + 0 = \frac{8}{3}, \end{aligned}$$

*Consequently, Theorem 5 is verified. For Theorem 6, we have*

$$\begin{aligned} 2\,\mathfrak{S}\_{\*}\left(\frac{\mathfrak{t}+\upsilon}{2}\right)g\_{\*}\left(\frac{\mathfrak{t}+\upsilon}{2}\right) &= \frac{1}{2}, & 2\,\mathfrak{S}^{\*}\left(\frac{\mathfrak{t}+\upsilon}{2}\right)g^{\*}\left(\frac{\mathfrak{t}+\upsilon}{2}\right) &= 2, \\ \frac{1}{\upsilon-\mathfrak{t}}\int\_{\mathfrak{t}}^{\upsilon}\mathfrak{S}\_{\*}(\omega)g\_{\*}(\omega)\,d\omega &= \frac{1}{2}, & \frac{1}{\upsilon-\mathfrak{t}}\int\_{\mathfrak{t}}^{\upsilon}\mathfrak{S}^{\*}(\omega)g^{\*}(\omega)\,d\omega &= 2, \\ \frac{\mathfrak{B}\_{\*}((\mathfrak{t},\upsilon))}{6} &= \frac{1}{3}, & \frac{\mathfrak{B}^{\*}((\mathfrak{t},\upsilon))}{6} &= \frac{4}{3}, \\ \frac{\mathfrak{C}\_{\*}((\mathfrak{t},\upsilon))}{3} &= 0, & \frac{\mathfrak{C}^{\*}((\mathfrak{t},\upsilon))}{3} &= 0. \end{aligned}$$

*From which, we have*

$$\frac{1}{2} \le \frac{1}{2} + 0 + \frac{1}{3} = \frac{5}{6}, \qquad 2 \le 2 + 0 + \frac{4}{3} = \frac{10}{3}.$$

*Consequently, Theorem 6 is demonstrated.*
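The verifications of Theorems 5 and 6 in Example 5 can likewise be checked mechanically. The following sketch is an illustrative addition (not part of the original text) using exact rationals; the integral means of $\mathfrak{S}\_{\*}g\_{\*}$ and $\mathfrak{S}^{\*}g^{\*}$ over $[0, 1]$ are entered in closed form ($\int\_0^1 c\,\omega^{3}\,d\omega = c/4$), and all identifier names are our own.

```python
# Illustrative check of Theorems 5 and 6 for Example 5:
# S(w) = [2w^2, 4w^2] and g(w) = [w, 2w] on [t, v] = [0, 1].
from fractions import Fraction as F

t, v = F(0), F(1)
Slo, Sup = (lambda w: 2 * w**2), (lambda w: 4 * w**2)   # end points of S
glo, gup = (lambda w: w), (lambda w: 2 * w)             # end points of g

mean_lo = F(2, 4)   # (1/(v-t)) * int_0^1 2w^3 dw = 1/2
mean_up = F(8, 4)   # (1/(v-t)) * int_0^1 8w^3 dw = 2
B_lo = Slo(t) * glo(t) + Slo(v) * glo(v)   # B_* = 2
B_up = Sup(t) * gup(t) + Sup(v) * gup(v)   # B^* = 8
C_lo = Slo(t) * glo(v) + Slo(v) * glo(t)   # C_* = 0
C_up = Sup(t) * gup(v) + Sup(v) * gup(t)   # C^* = 0

# Theorem 5: mean of S*g  <=_p  B/3 + C/6  (componentwise).
assert mean_lo <= B_lo / 3 + C_lo / 6 and mean_up <= B_up / 3 + C_up / 6
# Theorem 6: 2*S(m)*g(m)  <=_p  mean of S*g + B/6 + C/3, with m = (t+v)/2.
m = (t + v) / 2
assert 2 * Slo(m) * glo(m) <= mean_lo + B_lo / 6 + C_lo / 3   # 1/2 <= 5/6
assert 2 * Sup(m) * gup(m) <= mean_up + B_up / 6 + C_up / 3   # 2 <= 10/3
print(B_lo, B_up, C_lo, C_up)
```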

We now give *HH*-Fejér inequalities for LR-convex *I.V-Fs*. Firstly, we obtain the second *HH*-Fejér inequality for LR-convex *I.V-F*.



**Theorem 7.** *Let* $\mathfrak{S} : [\mathfrak{t}, \upsilon] \to \mathcal{K}\_{\mathbb{C}}^{+}$ *be an I.V-F with* $\mathfrak{t} < \upsilon$*, such that* $\mathfrak{S}(\omega) = [\mathfrak{S}\_{\*}(\omega), \mathfrak{S}^{\*}(\omega)]$ *for all* $\omega \in [\mathfrak{t}, \upsilon]$ *and* $\mathfrak{S} \in \mathcal{IR}([\mathfrak{t}, \upsilon])$*. If* $\mathfrak{S} \in LRSX\big([\mathfrak{t}, \upsilon], \mathcal{K}\_{\mathbb{C}}^{+}\big)$ *and* $D : [\mathfrak{t}, \upsilon] \to \mathbb{R}$ *with* $D(\omega) \ge 0$ *is symmetric with respect to* $\frac{\mathfrak{t}+\upsilon}{2}$*, then*

$$\frac{1}{\upsilon - \mathfrak{t}}\ (IR) \int\_{\mathfrak{t}}^{\upsilon} \mathfrak{S}(\omega)D(\omega)\,d\omega \leq\_{p} [\mathfrak{S}(\mathfrak{t}) + \mathfrak{S}(\upsilon)] \int\_{0}^{1} \varsigma D((1-\varsigma)\mathfrak{t} + \varsigma\upsilon)\,d\varsigma.\tag{11}$$

**Proof.** Let $\mathfrak{S} \in LRSX\big([\mathfrak{t}, \upsilon], \mathcal{K}\_{\mathbb{C}}^{+}\big)$. Then we have

$$\begin{aligned} \mathfrak{S}\_{\*}(\varsigma\mathfrak{t} + (1-\varsigma)\upsilon)D(\varsigma\mathfrak{t} + (1-\varsigma)\upsilon) &\le (\varsigma\mathfrak{S}\_{\*}(\mathfrak{t}) + (1-\varsigma)\mathfrak{S}\_{\*}(\upsilon))D(\varsigma\mathfrak{t} + (1-\varsigma)\upsilon), \\ \mathfrak{S}^{\*}(\varsigma\mathfrak{t} + (1-\varsigma)\upsilon)D(\varsigma\mathfrak{t} + (1-\varsigma)\upsilon) &\le (\varsigma\mathfrak{S}^{\*}(\mathfrak{t}) + (1-\varsigma)\mathfrak{S}^{\*}(\upsilon))D(\varsigma\mathfrak{t} + (1-\varsigma)\upsilon). \end{aligned}\tag{12}$$

And

$$\begin{aligned} \mathfrak{S}\_{\*}((1-\varsigma)\mathfrak{t} + \varsigma\upsilon)D((1-\varsigma)\mathfrak{t} + \varsigma\upsilon) &\le ((1-\varsigma)\mathfrak{S}\_{\*}(\mathfrak{t}) + \varsigma\mathfrak{S}\_{\*}(\upsilon))D((1-\varsigma)\mathfrak{t} + \varsigma\upsilon), \\ \mathfrak{S}^{\*}((1-\varsigma)\mathfrak{t} + \varsigma\upsilon)D((1-\varsigma)\mathfrak{t} + \varsigma\upsilon) &\le ((1-\varsigma)\mathfrak{S}^{\*}(\mathfrak{t}) + \varsigma\mathfrak{S}^{\*}(\upsilon))D((1-\varsigma)\mathfrak{t} + \varsigma\upsilon). \end{aligned}\tag{13}$$

After adding (12) and (13), and integrating over [0, 1], we obtain

$$\begin{aligned} &\int\_0^1 \mathfrak{S}\_{\*}(\varsigma\mathfrak{t}+(1-\varsigma)\upsilon)D(\varsigma\mathfrak{t}+(1-\varsigma)\upsilon)\,d\varsigma + \int\_0^1 \mathfrak{S}\_{\*}((1-\varsigma)\mathfrak{t}+\varsigma\upsilon)D((1-\varsigma)\mathfrak{t}+\varsigma\upsilon)\,d\varsigma \\ &\le \int\_0^1 \big[\mathfrak{S}\_{\*}(\mathfrak{t})\{\varsigma D(\varsigma\mathfrak{t}+(1-\varsigma)\upsilon) + (1-\varsigma)D((1-\varsigma)\mathfrak{t}+\varsigma\upsilon)\} \\ &\qquad + \mathfrak{S}\_{\*}(\upsilon)\{(1-\varsigma)D(\varsigma\mathfrak{t}+(1-\varsigma)\upsilon) + \varsigma D((1-\varsigma)\mathfrak{t}+\varsigma\upsilon)\}\big]\,d\varsigma \\ &= 2\mathfrak{S}\_{\*}(\mathfrak{t})\int\_0^1 \varsigma D(\varsigma\mathfrak{t}+(1-\varsigma)\upsilon)\,d\varsigma + 2\mathfrak{S}\_{\*}(\upsilon)\int\_0^1 \varsigma D((1-\varsigma)\mathfrak{t}+\varsigma\upsilon)\,d\varsigma, \\ &\int\_0^1 \mathfrak{S}^{\*}(\varsigma\mathfrak{t}+(1-\varsigma)\upsilon)D(\varsigma\mathfrak{t}+(1-\varsigma)\upsilon)\,d\varsigma + \int\_0^1 \mathfrak{S}^{\*}((1-\varsigma)\mathfrak{t}+\varsigma\upsilon)D((1-\varsigma)\mathfrak{t}+\varsigma\upsilon)\,d\varsigma \\ &\le \int\_0^1 \big[\mathfrak{S}^{\*}(\mathfrak{t})\{\varsigma D(\varsigma\mathfrak{t}+(1-\varsigma)\upsilon) + (1-\varsigma)D((1-\varsigma)\mathfrak{t}+\varsigma\upsilon)\} \\ &\qquad + \mathfrak{S}^{\*}(\upsilon)\{(1-\varsigma)D(\varsigma\mathfrak{t}+(1-\varsigma)\upsilon) + \varsigma D((1-\varsigma)\mathfrak{t}+\varsigma\upsilon)\}\big]\,d\varsigma \\ &= 2\mathfrak{S}^{\*}(\mathfrak{t})\int\_0^1 \varsigma D(\varsigma\mathfrak{t}+(1-\varsigma)\upsilon)\,d\varsigma + 2\mathfrak{S}^{\*}(\upsilon)\int\_0^1 \varsigma D((1-\varsigma)\mathfrak{t}+\varsigma\upsilon)\,d\varsigma. \end{aligned}$$

Since D is symmetric, then

$$\begin{aligned}
&=2\left[\mathfrak{S}_{*}(\mathfrak{t})+\mathfrak{S}_{*}(\upsilon)\right]\int_{0}^{1}\varsigma\,\mathfrak{D}((1-\varsigma)\mathfrak{t}+\varsigma\upsilon)\,d\varsigma,\\
&=2\left[\mathfrak{S}^{*}(\mathfrak{t})+\mathfrak{S}^{*}(\upsilon)\right]\int_{0}^{1}\varsigma\,\mathfrak{D}((1-\varsigma)\mathfrak{t}+\varsigma\upsilon)\,d\varsigma.
\end{aligned}\tag{14}$$

Since

$$\begin{aligned}
\int_{0}^{1}\mathfrak{S}_{*}(\varsigma\mathfrak{t}+(1-\varsigma)\upsilon)\mathfrak{D}(\varsigma\mathfrak{t}+(1-\varsigma)\upsilon)\,d\varsigma
&=\int_{0}^{1}\mathfrak{S}_{*}((1-\varsigma)\mathfrak{t}+\varsigma\upsilon)\mathfrak{D}((1-\varsigma)\mathfrak{t}+\varsigma\upsilon)\,d\varsigma\\
&=\frac{1}{\upsilon-\mathfrak{t}}\int_{\mathfrak{t}}^{\upsilon}\mathfrak{S}_{*}(\omega)\mathfrak{D}(\omega)\,d\omega,\\
\int_{0}^{1}\mathfrak{S}^{*}((1-\varsigma)\mathfrak{t}+\varsigma\upsilon)\mathfrak{D}((1-\varsigma)\mathfrak{t}+\varsigma\upsilon)\,d\varsigma
&=\int_{0}^{1}\mathfrak{S}^{*}(\varsigma\mathfrak{t}+(1-\varsigma)\upsilon)\mathfrak{D}(\varsigma\mathfrak{t}+(1-\varsigma)\upsilon)\,d\varsigma\\
&=\frac{1}{\upsilon-\mathfrak{t}}\int_{\mathfrak{t}}^{\upsilon}\mathfrak{S}^{*}(\omega)\mathfrak{D}(\omega)\,d\omega,
\end{aligned}\tag{15}$$

From (15), we have

$$\begin{aligned}
\frac{1}{\upsilon-\mathfrak{t}}\int_{\mathfrak{t}}^{\upsilon}\mathfrak{S}_{*}(\omega)\mathfrak{D}(\omega)\,d\omega &\le \left[\mathfrak{S}_{*}(\mathfrak{t})+\mathfrak{S}_{*}(\upsilon)\right]\int_{0}^{1}\varsigma\,\mathfrak{D}((1-\varsigma)\mathfrak{t}+\varsigma\upsilon)\,d\varsigma,\\
\frac{1}{\upsilon-\mathfrak{t}}\int_{\mathfrak{t}}^{\upsilon}\mathfrak{S}^{*}(\omega)\mathfrak{D}(\omega)\,d\omega &\le \left[\mathfrak{S}^{*}(\mathfrak{t})+\mathfrak{S}^{*}(\upsilon)\right]\int_{0}^{1}\varsigma\,\mathfrak{D}((1-\varsigma)\mathfrak{t}+\varsigma\upsilon)\,d\varsigma,
\end{aligned}$$

that is

$$\begin{aligned}
&\left[\frac{1}{\upsilon-\mathfrak{t}}\int_{\mathfrak{t}}^{\upsilon}\mathfrak{S}_{*}(\omega)\mathfrak{D}(\omega)\,d\omega,\ \frac{1}{\upsilon-\mathfrak{t}}\int_{\mathfrak{t}}^{\upsilon}\mathfrak{S}^{*}(\omega)\mathfrak{D}(\omega)\,d\omega\right]\\
&\quad\le_{p}\left[\mathfrak{S}_{*}(\mathfrak{t})+\mathfrak{S}_{*}(\upsilon),\ \mathfrak{S}^{*}(\mathfrak{t})+\mathfrak{S}^{*}(\upsilon)\right]\int_{0}^{1}\varsigma\,\mathfrak{D}((1-\varsigma)\mathfrak{t}+\varsigma\upsilon)\,d\varsigma,
\end{aligned}$$

hence

$$\frac{1}{\upsilon-\mathfrak{t}}\,(IR)\int_{\mathfrak{t}}^{\upsilon}\mathfrak{S}(\omega)\mathfrak{D}(\omega)\,d\omega\le_{p}\left[\mathfrak{S}(\mathfrak{t})+\mathfrak{S}(\upsilon)\right]\int_{0}^{1}\varsigma\,\mathfrak{D}((1-\varsigma)\mathfrak{t}+\varsigma\upsilon)\,d\varsigma.$$

This completes the proof.

Next, we establish the first *HH*-Fejér inequality for LR-convex *I.V-Fs*, which generalizes the first *HH*-Fejér inequality for convex functions, see [21].

**Theorem 8.** *Let* $\mathfrak{S} : [\mathfrak{t}, \upsilon] \to \mathcal{K}_C^+$ *be an I.V-F with* $\mathfrak{t} < \upsilon$*, such that* $\mathfrak{S}(\omega) = [\mathfrak{S}_{*}(\omega), \mathfrak{S}^{*}(\omega)]$ *for all* $\omega \in [\mathfrak{t}, \upsilon]$ *and* $\mathfrak{S} \in IR([\mathfrak{t}, \upsilon])$*. If* $\mathfrak{S} \in LRSX\big([\mathfrak{t}, \upsilon], \mathcal{K}_C^+\big)$ *and* $\mathfrak{D} : [\mathfrak{t}, \upsilon] \to \mathbb{R}$*,* $\mathfrak{D}(\omega) \ge 0$*, is symmetric with respect to* $\frac{\mathfrak{t}+\upsilon}{2}$ *with* $\int_{\mathfrak{t}}^{\upsilon} \mathfrak{D}(\omega)\,d\omega > 0$*, then*

$$\mathfrak{S}\left(\frac{\mathfrak{t}+\upsilon}{2}\right) \leq\_p \frac{1}{\int\_{\mathfrak{t}}^{\upsilon} \mathfrak{D}(\omega) d\omega} \ (IR) \int\_{\mathfrak{t}}^{\upsilon} \mathfrak{S}(\omega) \mathfrak{D}(\omega) d\omega. \tag{16}$$

**Proof.** Since $\mathfrak{S} \in LRSX\big([\mathfrak{t}, \upsilon], \mathcal{K}_C^+\big)$, we have

$$\begin{aligned}
\mathfrak{S}_{*}\!\left(\frac{\mathfrak{t}+\upsilon}{2}\right) &\le \frac{1}{2}\big(\mathfrak{S}_{*}(\varsigma\mathfrak{t}+(1-\varsigma)\upsilon)+\mathfrak{S}_{*}((1-\varsigma)\mathfrak{t}+\varsigma\upsilon)\big),\\
\mathfrak{S}^{*}\!\left(\frac{\mathfrak{t}+\upsilon}{2}\right) &\le \frac{1}{2}\big(\mathfrak{S}^{*}(\varsigma\mathfrak{t}+(1-\varsigma)\upsilon)+\mathfrak{S}^{*}((1-\varsigma)\mathfrak{t}+\varsigma\upsilon)\big).
\end{aligned}\tag{17}$$

Multiplying (17) by $\mathfrak{D}(\varsigma\mathfrak{t}+(1-\varsigma)\upsilon) = \mathfrak{D}((1-\varsigma)\mathfrak{t}+\varsigma\upsilon)$ and integrating with respect to $\varsigma$ over $[0, 1]$, we obtain

$$\begin{split} \mathfrak{S}\_{\*}\left(\frac{\mathbf{t}+\upsilon}{2}\right) \int\_{0}^{1} \mathfrak{D}((1-\zeta)\mathbf{t}+\zeta\upsilon)d\zeta \\ \leq & \frac{1}{2} \left( \begin{array}{l} \int\_{0}^{1} \mathfrak{S}\_{\*}\left(\zeta\mathbf{t}+(1-\zeta)\upsilon\right)\mathfrak{D}(\zeta\mathbf{t}+(1-\zeta)\upsilon)d\zeta \\ + \int\_{0}^{1} \mathfrak{S}\_{\*}\left((1-\zeta)\mathbf{t}+\zeta\upsilon\right)\mathfrak{D}((1-\zeta)\mathbf{t}+\zeta\upsilon)d\zeta \end{array} \right) , \\ \mathfrak{S}^{\*}\left(\frac{\mathbf{t}+\upsilon}{2}\right) \int\_{0}^{1} \mathfrak{D}((1-\zeta)\mathbf{t}+\zeta\upsilon)d\zeta \\ \leq & \frac{1}{2} \left( \begin{array}{l} \int\_{0}^{1} \mathfrak{S}^{\*}\left(\zeta\mathbf{t}+(1-\zeta)\upsilon\right)\mathfrak{D}(\zeta\mathbf{t}+(1-\zeta)\upsilon)d\zeta \\ + \int\_{0}^{1} \mathfrak{S}^{\*}\left((1-\zeta)\mathbf{t}+\zeta\upsilon\right)\mathfrak{D}((1-\zeta)\mathbf{t}+\zeta\upsilon)d\zeta \end{array} \right) , \end{split} \tag{18}$$

Since

$$\begin{aligned}
\int_{0}^{1}\mathfrak{S}_{*}(\varsigma\mathfrak{t}+(1-\varsigma)\upsilon)\mathfrak{D}(\varsigma\mathfrak{t}+(1-\varsigma)\upsilon)\,d\varsigma
&=\int_{0}^{1}\mathfrak{S}_{*}((1-\varsigma)\mathfrak{t}+\varsigma\upsilon)\mathfrak{D}((1-\varsigma)\mathfrak{t}+\varsigma\upsilon)\,d\varsigma\\
&=\frac{1}{\upsilon-\mathfrak{t}}\int_{\mathfrak{t}}^{\upsilon}\mathfrak{S}_{*}(\omega)\mathfrak{D}(\omega)\,d\omega,\\
\int_{0}^{1}\mathfrak{S}^{*}((1-\varsigma)\mathfrak{t}+\varsigma\upsilon)\mathfrak{D}((1-\varsigma)\mathfrak{t}+\varsigma\upsilon)\,d\varsigma
&=\int_{0}^{1}\mathfrak{S}^{*}(\varsigma\mathfrak{t}+(1-\varsigma)\upsilon)\mathfrak{D}(\varsigma\mathfrak{t}+(1-\varsigma)\upsilon)\,d\varsigma\\
&=\frac{1}{\upsilon-\mathfrak{t}}\int_{\mathfrak{t}}^{\upsilon}\mathfrak{S}^{*}(\omega)\mathfrak{D}(\omega)\,d\omega,
\end{aligned}\tag{19}$$

From (19), we have

$$\begin{aligned} \mathfrak{S}\_\*\left(\frac{\mathfrak{t}+\upsilon}{2}\right) &\leq \frac{1}{\int\_{\mathfrak{t}}^{\upsilon} \mathfrak{D}(\omega) d\omega} \int\_{\mathfrak{t}}^{\upsilon} \mathfrak{S}\_\*(\omega) \mathfrak{D}(\omega) d\omega, \\\mathfrak{S}^\*\left(\frac{\mathfrak{t}+\upsilon}{2}\right) &\leq \frac{1}{\int\_{\mathfrak{t}}^{\upsilon} \mathfrak{D}(\omega) d\omega} \int\_{\mathfrak{t}}^{\upsilon} \mathfrak{S}^\*(\omega) \mathfrak{D}(\omega) d\omega, \end{aligned}$$

From this, we have

$$\begin{aligned} \left[\mathfrak{S}\_{\*}\left(\frac{\mathfrak{t}+\upsilon}{2}\right), \ \mathfrak{S}^{\*}\left(\frac{\mathfrak{t}+\upsilon}{2}\right)\right] \\ \leq\_{p} \frac{1}{\int\_{\mathfrak{t}}^{\upsilon} \mathfrak{D}(\omega) d\omega} \left[\int\_{\mathfrak{t}}^{\upsilon} \mathfrak{S}\_{\*}(\omega) \mathfrak{D}(\omega) d\omega, \ \int\_{\mathfrak{t}}^{\upsilon} \mathfrak{S}^{\*}(\omega) \mathfrak{D}(\omega) d\omega\right] \end{aligned}$$

that is

$$\mathfrak{S}\left(\frac{\mathfrak{t}+\upsilon}{2}\right)\le_{p}\frac{1}{\int_{\mathfrak{t}}^{\upsilon}\mathfrak{D}(\omega)\,d\omega}\,(IR)\int_{\mathfrak{t}}^{\upsilon}\mathfrak{S}(\omega)\mathfrak{D}(\omega)\,d\omega.$$

This completes the proof.

**Remark 5.** *If* $\mathfrak{D}(\omega) = 1$*, then, combining Theorems 7 and 8, we obtain Theorem 3.*

*If* $\mathfrak{S}_{*}(\mathfrak{t}) = \mathfrak{S}^{*}(\mathfrak{t})$*, then Theorems 7 and 8 reduce to the classical first and second* HH*-Fejér inequalities for convex functions, see [21].*

*If* $\mathfrak{S}_{*}(\mathfrak{t}) = \mathfrak{S}^{*}(\mathfrak{t})$ *and* $\mathfrak{D}(\omega) = 1$*, then Theorems 7 and 8 reduce to the classical first and second* HH*-Fejér inequalities for convex functions, see [17,18].*

**Example 6.** *We consider the I.V-F* $\mathfrak{S} : [\mathfrak{t}, \upsilon] = \left[\frac{\pi}{4}, \frac{\pi}{2}\right] \to \mathcal{K}_C^+$ *defined by*

$$\mathfrak{S}(\omega) = \left[ \exp(\sin(\omega)) , 2 \exp(\sin(\omega)) \right]$$

*Since the endpoint functions* $\mathfrak{S}_{*}(\omega) = \exp(\sin(\omega))$ *and* $\mathfrak{S}^{*}(\omega) = 2\exp(\sin(\omega))$ *are convex functions, by Theorem 2,* $\mathfrak{S}(\omega)$ *is an LR-convex I.V-F. If*

$$\mathfrak{D}(\omega) = \begin{cases} \omega - \frac{\pi}{4}, & \omega \in \left[\frac{\pi}{4}, \frac{3\pi}{8}\right], \\ \frac{\pi}{2} - \omega, & \omega \in \left(\frac{3\pi}{8}, \frac{\pi}{2}\right], \end{cases}$$

*then, we have*

$$\begin{aligned}
\frac{1}{\upsilon-\mathfrak{t}}\int_{\mathfrak{t}}^{\upsilon}\mathfrak{S}_{*}(\omega)\mathfrak{D}(\omega)\,d\omega
&=\frac{4}{\pi}\int_{\frac{\pi}{4}}^{\frac{3\pi}{8}}\exp(\sin(\omega))\Big(\omega-\frac{\pi}{4}\Big)d\omega
+\frac{4}{\pi}\int_{\frac{3\pi}{8}}^{\frac{\pi}{2}}\exp(\sin(\omega))\Big(\frac{\pi}{2}-\omega\Big)d\omega\approx\frac{63}{100\pi},\\
\frac{1}{\upsilon-\mathfrak{t}}\int_{\mathfrak{t}}^{\upsilon}\mathfrak{S}^{*}(\omega)\mathfrak{D}(\omega)\,d\omega
&=\frac{8}{\pi}\int_{\frac{\pi}{4}}^{\frac{3\pi}{8}}\exp(\sin(\omega))\Big(\omega-\frac{\pi}{4}\Big)d\omega
+\frac{8}{\pi}\int_{\frac{3\pi}{8}}^{\frac{\pi}{2}}\exp(\sin(\omega))\Big(\frac{\pi}{2}-\omega\Big)d\omega\approx\frac{63}{50\pi},
\end{aligned}\tag{20}$$

*and*

$$\begin{aligned}
[\mathfrak{S}_{*}(\mathfrak{t})+\mathfrak{S}_{*}(\upsilon)]\int_{0}^{1}\varsigma\,\mathfrak{D}((1-\varsigma)\mathfrak{t}+\varsigma\upsilon)\,d\varsigma
&=\frac{\pi}{2}\left[\int_{0}^{\frac{1}{2}}\varsigma^{2}\,d\varsigma+\int_{\frac{1}{2}}^{1}\varsigma(1+\varsigma)\,d\varsigma\right]=\frac{17\pi}{48},\\
[\mathfrak{S}^{*}(\mathfrak{t})+\mathfrak{S}^{*}(\upsilon)]\int_{0}^{1}\varsigma\,\mathfrak{D}((1-\varsigma)\mathfrak{t}+\varsigma\upsilon)\,d\varsigma
&=\pi\left[\int_{0}^{\frac{1}{2}}\varsigma^{2}\,d\varsigma+\int_{\frac{1}{2}}^{1}\varsigma(1+\varsigma)\,d\varsigma\right]=\frac{17\pi}{24}.
\end{aligned}\tag{21}$$

*From (20) and (21), we have*

$$\left[\frac{63}{100\pi}, \frac{63}{50\pi}\right] \le_{p} \left[\frac{17\pi}{48}, \frac{17\pi}{24}\right].$$

*Hence, Theorem 7 is verified. For Theorem 8, we have*

$$\begin{aligned} \mathfrak{S}\_\*\left(\frac{\mathfrak{t}+\upsilon}{2}\right) &= \mathfrak{S}\_\*\left(\frac{3\pi}{8}\right) \approx 1 \\ \mathfrak{S}^\*\left(\frac{\mathfrak{t}+\upsilon}{2}\right) &= \mathfrak{S}^\*\left(\frac{3\pi}{8}\right) \approx 2 \end{aligned} \tag{22}$$

$$\int\_{t}^{v} \mathfrak{D}(\omega)d\omega = \int\_{\frac{\pi}{4}}^{\frac{3\pi}{8}} \left(\omega - \frac{\pi}{4}\right) d\omega + \int\_{\frac{3\pi}{8}}^{\frac{\pi}{2}} \left(\frac{\pi}{2} - \omega\right) d\omega \approx \frac{4}{25}.$$

$$\frac{1}{\int_{\mathfrak{t}}^{\upsilon} \mathfrak{D}(\omega)d\omega} \int_{\mathfrak{t}}^{\upsilon} \mathfrak{S}_{*}(\omega)\mathfrak{D}(\omega)d\omega \approx 1.1,$$

$$\frac{1}{\int_{\mathfrak{t}}^{\upsilon} \mathfrak{D}(\omega)d\omega} \int_{\mathfrak{t}}^{\upsilon} \mathfrak{S}^{*}(\omega)\mathfrak{D}(\omega)d\omega \approx 2.1. \tag{23}$$

*From (22) and (23), we have*

$$[1, 2] \le_{p} [1.1, 2.1].$$

*Hence, Theorem 8 is verified.*
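As a numerical sanity check of the scheme used in Theorems 7 and 8, the sketch below evaluates both sides of the two Fejér inequalities by quadrature. The endpoint functions $\omega^2$ and $2\omega^2$, the interval $[0,1]$, and the tent-shaped weight are illustrative assumptions chosen for simplicity; they are not the data of Example 6.

```python
import numpy as np

# Componentwise check of the two HH-Fejér inequalities for the LR-convex
# I.V-F S(w) = [w**2, 2*w**2] on [t, v] = [0, 1] (illustrative data).
def integral(y, x):
    """Trapezoidal rule on sampled values."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

t, v = 0.0, 1.0
S_low = lambda w: w ** 2              # lower endpoint function S_*
S_up = lambda w: 2 * w ** 2           # upper endpoint function S^*
D = lambda w: np.where(w <= (t + v) / 2, w - t, v - w)  # symmetric about (t+v)/2

w = np.linspace(t, v, 100001)
s = np.linspace(0.0, 1.0, 100001)     # the parameter varsigma

# Theorem 7 (second HH-Fejér inequality), checked componentwise:
weight = integral(s * D((1 - s) * t + s * v), s)
assert integral(S_low(w) * D(w), w) / (v - t) <= (S_low(t) + S_low(v)) * weight
assert integral(S_up(w) * D(w), w) / (v - t) <= (S_up(t) + S_up(v)) * weight

# Theorem 8 (first HH-Fejér inequality), checked componentwise:
mass = integral(D(w), w)
assert S_low((t + v) / 2) <= integral(S_low(w) * D(w), w) / mass
assert S_up((t + v) / 2) <= integral(S_up(w) * D(w), w) / mass
```

Both inequalities hold componentwise for the lower and upper endpoint functions, which is exactly the $\le_p$ order used above.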

#### **4. Results and Discussion**

For LR-convex I.V-Fs, we have established Hermite–Hadamard-type inequalities. Our findings not only improve on Zhao's work but also extend some of the results of Sarikaya et al. We have not investigated inequalities involving interval derivatives, since no notion of "interval derivative" with desirable properties is currently available.

#### **5. Conclusions**

In this paper, *HH*-inequalities have been investigated for the concept of LR-convex I.V-Fs. The most important contribution of this study is the proof that the concepts of LR-convex I.V-Fs and convex I.V-Fs coincide under some mild conditions imposed on the endpoint functions. As future research, we intend to explore this concept for generalized LR-convex I.V-Fs together with some applications in interval nonlinear programming. We also leave an open problem for the reader: whether the optimality conditions of LR-convex I.V-Fs can be obtained through variational inequalities. We hope that this concept will be helpful for other authors working in different fields of science. Moreover, in the future, we will explore this concept and its generalizations using different fractional integral operators.

**Author Contributions:** Conceptualization, M.B.K.; methodology, M.B.K.; validation, S.T., M.S.S. and H.G.Z.; formal analysis, K.N.; investigation, M.S.S.; resources, S.T.; data curation, H.G.Z.; writing original draft preparation, M.B.K., K.N. and H.G.Z.; writing—review and editing, M.B.K. and S.T.; visualization, H.G.Z.; supervision, M.B.K. and M.S.S.; project administration, M.B.K.; funding acquisition, K.N., M.S.S. and H.G.Z. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research received no external funding.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Not applicable.

**Acknowledgments:** The authors wish to thank the Rector, COMSATS University Islamabad, Islamabad, Pakistan, for providing excellent research and this work was funded by the Taif University Researchers Supporting Project (Number TURSP-2020/345), Taif University, Taif, Saudi Arabia. Moreover, this research has also received funding support from the National Science, Research and Innovation Fund (NSRF), Thailand.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


### *Article* **Application of the Pick Function in the Lieb Concavity Theorem for Deformed Exponentials**

**Guozeng Yang 1, Yonggang Li 2,\*, Jing Wang <sup>3</sup> and Huafei Sun 4,5**


**Abstract:** The Lieb concavity theorem, successfully solved in the Wigner–Yanase–Dyson conjecture, is an important application of matrix concave functions. Recently, the Thompson–Golden theorem, a corollary of the Lieb concavity theorem, was extended to deformed exponentials. Hence, it is worthwhile to study the Lieb concavity theorem for deformed exponentials. In this paper, the Pick function is used to obtain a generalization of the Lieb concavity theorem for deformed exponentials, and some corollaries associated with exterior algebra are obtained.

**Keywords:** Lieb concavity theorem; deformed exponential; Pick function; convexity of matrix

**MSC:** 15A42; 15A16; 47A56

#### **1. Introduction**

Matrix theory is widely used in statistics [1], physics [2], computer science [3] and so on. For convenience, $M(n, \mathbb{C})$ denotes the set of all $n \times n$ complex matrices ($\mathbb{C}$ is the set of complex numbers) [4]. $A$ is called a Hermitian matrix when $A \in M(n, \mathbb{C})$ satisfies $A^* = A$ ($A^*$ denotes the conjugate transpose of $A$). Hermitian matrices are frequently used in quadratic forms and their correlation theory [5]. Let $H_n$ denote the set of $n \times n$ Hermitian matrices and $H_n^+$ denote the set of $n \times n$ positive semidefinite Hermitian matrices ($\mathbb{C}^n$ is the $n$-dimensional complex Euclidean space).

Set *<sup>u</sup>*1, *<sup>u</sup>*2, ··· , *un* to be any orthonormal basis of <sup>C</sup>*n*, and then the trace operator Tr is defined as [4]

$$\operatorname{Tr}[A] = \sum_{i=1}^{n} (u_i, A u_i),$$

where (·, ·) is the inner product of <sup>C</sup>*n*. It is well known that for any *<sup>A</sup>* = (*aij*) <sup>∈</sup> *<sup>M</sup>*(*n*, <sup>C</sup>), the following equalities hold [6]

$$\operatorname{Tr}[A] = \sum_{i=1}^{n} \lambda_i = \sum_{i=1}^{n} a_{ii},$$

where *λ<sup>i</sup>* is the eigenvalue of *A*.
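As a quick illustration of these trace equalities, the sketch below checks them on a randomly generated Hermitian matrix (an illustrative choice, not data from the source):

```python
import numpy as np

# Check Tr[A] = sum (u_i, A u_i) = sum of diagonal entries = sum of eigenvalues
# for a randomly generated 4x4 Hermitian matrix.
rng = np.random.default_rng(0)
X = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = X + X.conj().T                        # Hermitian: A* = A

tr_def = sum(u.conj() @ A @ u for u in np.eye(4))  # inner-product definition
tr_diag = np.trace(A)                     # sum of diagonal entries a_ii
tr_eig = np.sum(np.linalg.eigvalsh(A))    # sum of (real) eigenvalues lambda_i
assert np.allclose(tr_def, tr_diag)
assert np.isclose(tr_diag.real, tr_eig)
```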

From the spectral theorem [5], *<sup>A</sup>* <sup>∈</sup> *<sup>H</sup>*<sup>+</sup> *<sup>n</sup>* can be decomposed as

$$A = P^* \Lambda_A P,$$

**Citation:** Yang, G.; Li, Y.; Wang, J.; Sun, H. Application of the Pick Function in the Lieb Concavity Theorem for Deformed Exponentials. *Fractal Fract.* **2022**, *6*, 20. https:// doi.org/10.3390/fractalfract6010020

Academic Editor: Savin Trean¸t ˘a

Received: 18 October 2021 Accepted: 9 December 2021 Published: 31 December 2021


**Copyright:** © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

where *P* is a unitary matrix and Λ*<sup>A</sup>* := diag{*λ*1, ..., *λn*} is a diagonal matrix with eigenvalues *λ*1, ..., *λn*. Then, matrix function *f*(*A*) is defined as

$$f(A) = P^* f(\Lambda_A) P = \sum_{i=1}^{n} f(\lambda_i) P_i, \tag{1}$$

where $f(\Lambda_A) := \mathrm{diag}\{f(\lambda_1), ..., f(\lambda_n)\}$ and $P_i^2 = P_i$.
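The spectral-calculus definition (1) can be illustrated with a short sketch; the choice $f(x) = \sqrt{x}$ and the random positive definite matrix are illustrative assumptions:

```python
import numpy as np

# Evaluate a matrix function f(A) through the spectral decomposition of a
# Hermitian matrix: f(A) = U diag(f(lam)) U*, then verify sqrt(A)^2 = A.
rng = np.random.default_rng(1)
X = rng.standard_normal((3, 3))
A = X @ X.T + np.eye(3)            # positive definite, hence Hermitian

lam, U = np.linalg.eigh(A)         # A = U diag(lam) U^T (real orthogonal U)
fA = U @ np.diag(np.sqrt(lam)) @ U.T   # f(A) via the eigendecomposition
assert np.allclose(fA @ fA, A)     # sqrt(A) squared recovers A
```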

Based on the above definition, in 1963, the Wigner–Yanase skew information

$$\mathcal{I}_{WY}(\rho) = -\frac{1}{2}\operatorname{Tr}\Big[\big[\sqrt{\rho}, H\big]^2\Big]$$

was introduced by Wigner and Yanase ([7]), where $\rho$ is a density matrix ($\rho \ge 0$, $\operatorname{Tr}\rho = 1$) and $H$ is a Hermitian matrix. This left open the problem of whether the quantity

$$\text{Tr}[A^sKA^{1-s}K^\*]\_\prime \tag{2}$$

is concave in the positive semidefinite matrix $A$.

In 1973, (2) was proven by Lieb for all $0 < s < 1$ [8], and a more general result was obtained from the following fact [9]:

$$\begin{aligned} \text{Tr}[A^s K B^{1-s} K^\*] &= \langle K, B^{1-s} K^\* A^s \rangle\_{\mathcal{L}(H)} \\ &= \langle K, \Psi^{-1} (B^{1-s} \otimes A^s) K^\* \rangle\_{\mathcal{L}(H)}. \end{aligned}$$

where $\Psi^{-1}(A) = \sum_j (Ae_j) \otimes e_j^*$. In fact, the Lieb concavity theorem is equivalent to the concavity of $B^{1-s} \otimes A^s$.

A more elegant proof of the Lieb concavity theorem appeared in [10] using

$$\operatorname{Tr}[K^* A^s K B^{1-s}] = \langle K, (A^s \otimes B^{1-s}) K \rangle_{\mathcal{L}(H)},$$

where

$$[(A \otimes B)K]_{i,j} = \sum_{k,l} A_{i,k} B_{j,l} K_{k,l}.$$

In 2009, Effros gave another proof of the Lieb concavity theorem based on the Hansen– Pedersen–Jensen inequality ([11]). Using

$$L_A(K) = AK, \qquad R_B(K) = KB,$$

then one obtains

$$\begin{aligned} \operatorname{Tr}[K^* A^s K B^{1-s}] &= \langle K, L_{A^s} R_{B^{1-s}}(K) \rangle_{\mathcal{L}(H)} \\ &= \langle K, R_B^{\frac{1}{2}} \big(R_B^{-\frac{1}{2}} L_A R_B^{-\frac{1}{2}}\big)^s R_B^{\frac{1}{2}}(K) \rangle_{\mathcal{L}(H)}. \end{aligned}$$

All of the above proofs of the Lieb concavity theorem reduce to the joint concavity of commuting operators. In addition, Epstein also obtained the Lieb concavity theorem using the theory of Herglotz functions [12].

Recently, Shi and Hansen [13] generalized the Thompson–Golden theorem to deformed exponentials:

$$\operatorname{Tr}[\exp_q(A+B)] \le \operatorname{Tr}[(\exp_q(A))^{2-q}(A(q-1) + \exp_q(B))].$$

As the Thompson–Golden theorem can be regarded as a special form of the Lieb concavity theorem, it is worthwhile to study the Lieb concavity theorem for deformed exponentials. In this paper, we use the theory of the Pick function to obtain a generalization of the Lieb concavity theorem and some other corollaries. The rest of the paper is organized as follows. In Section 2, some general definitions and important conclusions are introduced. With these preparations, we obtain some useful results, such as the Lieb concavity theorem for deformed exponentials, presented in the final Section 3.

#### **2. Preliminary**

In this section, some general definitions and some important properties are introduced.

#### *2.1. The q-Logarithm Function and q-Exponential Function*

It is well known that the q-logarithm function ln*q*(*x*) is defined as [13]

$$\ln_q(x) = \begin{cases} \dfrac{x^{q-1} - 1}{q - 1}, & q \neq 1, \\[4pt] \ln x, & q = 1, \end{cases}$$

for any *x* > 0. The deformed exponential function or the *q*−exponential exp*q*(*x*) is the inverse function of the *q*−logarithm and is defined as

$$\exp_q(x) = \begin{cases} [(q-1)x + 1]^{\frac{1}{q-1}}, & x > \frac{1}{1-q}, \; q > 1, \\[2pt] [(q-1)x + 1]^{\frac{1}{q-1}}, & x < \frac{1}{1-q}, \; q < 1, \\[2pt] \exp(x), & x \in \mathbb{R}, \; q = 1. \end{cases}$$
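A minimal sketch of the pair $\ln_q$, $\exp_q$ (assuming the arguments stay in the domain where $(q-1)x + 1 > 0$) confirms the inverse relationship and the $q \to 1$ limit; the sample points below are arbitrary illustrative choices:

```python
import math

# q-logarithm and q-exponential; exp_q inverts ln_q, and both tend to the
# ordinary log/exp as q -> 1.
def ln_q(x, q):
    return math.log(x) if q == 1 else (x ** (q - 1) - 1) / (q - 1)

def exp_q(x, q):
    return math.exp(x) if q == 1 else ((q - 1) * x + 1) ** (1 / (q - 1))

for q in (0.5, 1, 1.5, 2):
    for x in (0.3, 1.0, 2.7):
        assert abs(exp_q(ln_q(x, q), q) - x) < 1e-10  # inverse pair
# q -> 1 limit agrees with the ordinary logarithm
assert abs(ln_q(2.0, 1.000001) - math.log(2.0)) < 1e-5
```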

#### *2.2. Tensor Product and Exterior Algebra*

The tensor product, denoted by " ⊗ ", is also called the Kronecker product. It is a generalization of the outer product from vectors to matrices, and the tensor product of matrices is also referred to as the outer product in certain contexts ([9]). For an *m* × *n* matrix *A* and a *p* × *q* matrix *B*, the tensor product of *A* and *B* is defined by

$$A \otimes B := \begin{pmatrix} a_{11}B & \cdots & a_{1n}B \\ \vdots & \ddots & \vdots \\ a_{m1}B & \cdots & a_{mn}B \end{pmatrix},$$

where $A = (a_{ij})_{1 \le i \le m,\, 1 \le j \le n}$.

The tensor product is different from matrix multiplication, and one of the differences is commutativity

$$(I \otimes B)(A \otimes I) = (A \otimes I)(I \otimes B) = A \otimes B.$$

From the above equations, we obtain

$$\begin{aligned} AC \otimes BD &= (AC \otimes I)(I \otimes BD) \\ &= (A \otimes I)(C \otimes I)(I \otimes B)(I \otimes D) \\ &= (A \otimes I)(I \otimes B)(C \otimes I)(I \otimes D) \\ &= (A \otimes B)(C \otimes D). \end{aligned}$$
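These identities are easy to confirm numerically: NumPy's `np.kron` follows the block definition above, and the mixed-product property holds exactly. The random matrices are illustrative choices:

```python
import numpy as np

# Mixed-product property of the tensor (Kronecker) product:
# (A x I)(I x B) = (I x B)(A x I) = A x B, and AC x BD = (A x B)(C x D).
rng = np.random.default_rng(2)
A, C = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))
B, D = rng.standard_normal((2, 2)), rng.standard_normal((2, 2))
In, Ip = np.eye(3), np.eye(2)

assert np.allclose(np.kron(A, Ip) @ np.kron(In, B), np.kron(A, B))
assert np.allclose(np.kron(In, B) @ np.kron(A, Ip), np.kron(A, B))
assert np.allclose(np.kron(A @ C, B @ D), np.kron(A, B) @ np.kron(C, D))
```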

For convenience, we denote

$$\otimes_k A = \underbrace{A \otimes A \otimes \cdots \otimes A}_{k}.$$

In addition to the tensor product, there is another common product coming from exterior algebra [6]. The exterior product, denoted by "$\wedge$", is a binary operation for any $A_i \in H_n^+$, defined by

$$\begin{aligned} &(A_1 \wedge A_2 \wedge \cdots \wedge A_k)(\xi_{i_1} \wedge \xi_{i_2} \wedge \cdots \wedge \xi_{i_k})_{1 \le i_1 < \cdots < i_k \le n} \\ &\quad= (A_1 \xi_{i_1} \wedge A_2 \xi_{i_2} \wedge \cdots \wedge A_k \xi_{i_k})_{1 \le i_1 < \cdots < i_k \le n}, \end{aligned}$$

where {*ξj*}*<sup>n</sup> <sup>j</sup>*=<sup>1</sup> is an orthonormal basis of C*n*, and

$$\xi_{i_1} \wedge \xi_{i_2} \wedge \cdots \wedge \xi_{i_k} = \frac{1}{\sqrt{n!}} \sum_{\pi \in \sigma_n} (-1)^{\pi} \xi_{\pi(i_1)} \otimes \xi_{\pi(i_2)} \otimes \cdots \otimes \xi_{\pi(i_k)},$$

where $\sigma_n$ is the family of all permutations on $\{1, 2, \cdots, n\}$.

Let $\wedge^k \mathbb{C}^n$ be the span of $\{\xi_{i_1} \wedge \xi_{i_2} \wedge \cdots \wedge \xi_{i_k}\}_{1 \le i_1 < \cdots < i_k \le n}$; a simple calculation then shows that

$$\wedge_n A = (\underbrace{A \wedge A \wedge \cdots \wedge A}_{n}) = \det(A).$$

#### *2.3. Pick Function*

Let *z* = *x* + *iy* be a complex number where *i* is the imaginary unit and *f*(*z*) = *U*(*z*) + *iV*(*z*) is analytic where *U*(*z*), *V*(*z*) are all real functions. Re *z* = *x* denotes the real part of *z*, and Im *z* = *y* is the imaginary part of *z*. If Im *f*(*z*) > 0 for any Im *z* > 0, then we call the analytic function *f*(*z*) a Pick function [14]. It is equivalent that *f*(*z*) is analytic in the upper half-plane with the positive imaginary part.

The Pick functions evidently form a convex cone—for instance, if *α* and *β* are positive numbers and *f*(*z*) and *g*(*z*) are two Pick functions, then the function *α f*(*z*) + *βg*(*z*) is also a Pick function. A simple example is that tan(*z*) is a Pick function.

$$\begin{aligned} \tan(x+iy) &= \frac{\tan(x) + \tan(iy)}{1 - \tan(x)\tan(iy)} \\ &= \frac{\tan(x) + i\tanh(y)}{1 - i\tan(x)\tanh(y)}. \end{aligned}$$

Hence, $\operatorname{Im}\tan(z) = \frac{(1+\tan^2(x))\tanh(y)}{1+\tan^2(x)\tanh^2(y)}$, which implies that $\operatorname{Im}\tan(z) > 0$ when $y > 0$.
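This computation can be spot-checked numerically; the sample grid of points in the upper half-plane below is an arbitrary illustrative choice:

```python
import cmath

# Numerical check that tan(z) is a Pick function: Im tan(z) > 0 whenever
# Im z > 0, at a grid of sample points in the upper half-plane.
for x in (-1.2, -0.3, 0.0, 0.7, 1.4):
    for y in (0.1, 0.5, 2.0):
        assert cmath.tan(complex(x, y)).imag > 0
```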

It is well known that the Pick function has an integral representation, as in the following lemma [14].

**Lemma 1.** *Let f*(*z*) *be a Pick function. Then, f*(*z*) *has a unique canonical representation of the form*

$$f(z) = \alpha + \beta z + \int_{\mathbb{R}} \left( \frac{1}{\lambda - z} - \frac{\lambda}{1 + \lambda^2} \right) d\mu(\lambda),$$

*where* $\alpha$ *is real,* $\beta \ge 0$*, and* $d\mu(\lambda)$ *is a positive Borel measure on the real* $\lambda$*-axis such that* $\int_{\mathbb{R}} (1 + \lambda^2)^{-1} \, d\mu(\lambda)$ *is finite. Conversely, any function of this form is also a Pick function.*

Lemma 1 is frequently used for functions that are positive and harmonic in the half-plane.

*2.4. The Matrix-Monotone Function*

A matrix function *f* is said to be matrix-monotonic if it satisfies

$$f(A) \ge f(B) \text{ for all } A \ge B > 0, \tag{3}$$

where $A \ge B$ means that $A - B$ is a positive semidefinite Hermitian matrix.

Since the matrix-monotone function is a special kind of operator monotone function, we have the following general conclusions [14].

**Lemma 2.** *The following statements for a real valued continuous function f on* (0, +∞) *are equivalent:*


(3) *f admits an integral representation:*

$$f(\lambda) = \alpha + \beta \lambda + \int_{-\infty}^{0} (1 + \lambda t)(t - \lambda)^{-1} \, d\mu(t), \quad \text{for any } \lambda > 0, \tag{4}$$

*where α is a real number, β is non-negative and μ is a finite positive measure on* (−∞, 0)*.*

From Lemmas 1 and 2, we know that a Pick function must be a matrix-monotone function.

#### *2.5. Convexity of Matrix*

Suppose that *X* is a convex set in R*<sup>n</sup>* and *f* is a function defined on *X*. Then, we call *f* a convex function if

$$f(tx_1 + (1-t)x_2) \le tf(x_1) + (1-t)f(x_2),$$

for all $x_1, x_2 \in X$ and $t \in [0, 1]$.

A matrix function *f* is called convex if [15–17]

$$f(tA + (1 - t)B) \le tf(A) + (1 - t)f(B), \tag{5}$$

for any *<sup>A</sup>*, *<sup>B</sup>* <sup>∈</sup> *<sup>H</sup>*<sup>+</sup> *<sup>n</sup>* and any *t* ∈ [0, 1]. Replacing ≤ by < in (5), this gives the definition of a strictly matrix convex function. A matrix function *f* is called (strictly) concave if −*f* is (strictly) convex. More details can be found in [18].

A matrix convex function must be a convex function; however, the inverse claim is not always true. For instance, the function *<sup>f</sup>* : [0, <sup>+</sup>∞) <sup>→</sup> <sup>R</sup> given by *<sup>f</sup>*(*x*) = *<sup>x</sup>*<sup>3</sup> is a convex function. However, the matrix function *<sup>f</sup>*(*A*) = *<sup>A</sup>*<sup>3</sup> for any *<sup>A</sup>* <sup>∈</sup> *<sup>H</sup>*<sup>+</sup> *<sup>n</sup>* is not convex.
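A short numerical counterexample makes the last claim concrete. The two PSD matrices below are an illustrative choice (not taken from the source) for which midpoint matrix convexity of $f(A) = A^3$ fails:

```python
import numpy as np

# f(A) = A^3 is not matrix convex: for these PSD matrices, the midpoint
# "convexity gap" (A^3 + B^3)/2 - ((A+B)/2)^3 has a negative eigenvalue.
A = np.array([[2.0, 1.0], [1.0, 1.0]])   # positive definite
B = np.array([[1.0, 1.0], [1.0, 1.0]])   # positive semidefinite
M = (A + B) / 2

gap = (np.linalg.matrix_power(A, 3) + np.linalg.matrix_power(B, 3)) / 2 \
      - np.linalg.matrix_power(M, 3)
assert np.linalg.eigvalsh(gap).min() < 0  # midpoint matrix convexity fails
```

By contrast, the scalar function $x^3$ satisfies the midpoint inequality at every pair of eigenvalues, which is why the scalar convexity gives no contradiction.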

Let *<sup>f</sup>*(·, ·) be a bivariate function defined on *<sup>H</sup>*<sup>+</sup> *<sup>n</sup>* <sup>×</sup> *<sup>H</sup>*<sup>+</sup> *<sup>n</sup>* . We call *f*(·, ·) jointly convex if

$$f(tA_1 + (1-t)A_2,\; tB_1 + (1-t)B_2) \le tf(A_1, B_1) + (1-t)f(A_2, B_2),$$

for all *<sup>A</sup>*1, *<sup>A</sup>*2, *<sup>B</sup>*1, *<sup>B</sup>*<sup>2</sup> <sup>∈</sup> <sup>H</sup><sup>+</sup> *<sup>n</sup>* and all *t* ∈ [0, 1].

*2.6. Brunn–Minkowski Inequality*

Finally, let us review the Brunn–Minkowski inequality [19].

**Lemma 3.** *For any* $A, B > 0$*, we have*

$$\{\operatorname{Tr}[\wedge^k (A+B)]\}^{\frac{1}{k}} \ge \{\operatorname{Tr}[\wedge^k A]\}^{\frac{1}{k}} + \{\operatorname{Tr}\left[\wedge^k B\right]\}^{\frac{1}{k}}.$$

**Proof.** Let $\{\xi_i\}_{i=1}^{n}$ be the eigenvectors of $A + B$ with eigenvalues $\{\lambda_i\}_{i=1}^{n}$; then

$$\begin{aligned} \{\operatorname{Tr}[\wedge^k(A+B)]\}^{\frac{1}{k}} &= \Bigg[\sum_{1 \le i_1 < \cdots < i_k \le n} \lambda_{i_1} \cdots \lambda_{i_k}\Bigg]^{\frac{1}{k}} \\ &= \Bigg[\sum_{1 \le i_1 < \cdots < i_k \le n} \det\big(P_{i_1,\ldots,i_k}^{*}(A+B)P_{i_1,\ldots,i_k}\big)\Bigg]^{\frac{1}{k}} \\ &\ge \Bigg[\sum_{1 \le i_1 < \cdots < i_k \le n} \Big(\det\big(P_{i_1,\ldots,i_k}^{*} A P_{i_1,\ldots,i_k}\big) + \det\big(P_{i_1,\ldots,i_k}^{*} B P_{i_1,\ldots,i_k}\big)\Big)\Bigg]^{\frac{1}{k}}, \end{aligned}$$

where $P_{i_1,\ldots,i_k} = (\xi_{i_1}, \cdots, \xi_{i_k})$ and the inequality holds because $\det(A+B) \ge \det(A) + \det(B)$ for positive semidefinite matrices.

As $S_k = \Big(\sum_{1 \le i_1 < \cdots < i_k \le n} x_{i_1} \cdots x_{i_k}\Big)^{\frac{1}{k}}$ is concave [20], we have

$$\begin{aligned} \{\operatorname{Tr}[\wedge^k(A+B)]\}^{\frac{1}{k}} &\ge \Bigg[\sum_{1 \le i_1 < \cdots < i_k \le n} \det\big(P_{i_1,\ldots,i_k}^{*} A P_{i_1,\ldots,i_k}\big)\Bigg]^{\frac{1}{k}} + \Bigg[\sum_{1 \le i_1 < \cdots < i_k \le n} \det\big(P_{i_1,\ldots,i_k}^{*} B P_{i_1,\ldots,i_k}\big)\Bigg]^{\frac{1}{k}} \\ &= \Bigg[\sum_{1 \le i_1 < \cdots < i_k \le n} \big\langle \xi_{i_1} \wedge \cdots \wedge \xi_{i_k},\, A\xi_{i_1} \wedge \cdots \wedge A\xi_{i_k} \big\rangle\Bigg]^{\frac{1}{k}} + \Bigg[\sum_{1 \le i_1 < \cdots < i_k \le n} \big\langle \xi_{i_1} \wedge \cdots \wedge \xi_{i_k},\, B\xi_{i_1} \wedge \cdots \wedge B\xi_{i_k} \big\rangle\Bigg]^{\frac{1}{k}} \\ &= \{\operatorname{Tr}[\wedge^k A]\}^{\frac{1}{k}} + \{\operatorname{Tr}[\wedge^k B]\}^{\frac{1}{k}}. \end{aligned}$$
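Lemma 3 can also be checked numerically, using the identity from the first line of the proof that $\operatorname{Tr}[\wedge^k M]$ equals the $k$-th elementary symmetric polynomial of the eigenvalues of $M$. The random positive definite matrices below are illustrative choices:

```python
import numpy as np
from itertools import combinations
from math import prod

# Check {Tr[w^k(A+B)]}^(1/k) >= {Tr[w^k A]}^(1/k) + {Tr[w^k B]}^(1/k)
# via Tr[w^k M] = e_k(eigenvalues of M), the elementary symmetric polynomial.
def tr_wedge(M, k):
    lam = np.linalg.eigvalsh(M)
    return sum(prod(c) for c in combinations(lam, k))

rng = np.random.default_rng(3)
X, Y = rng.standard_normal((4, 4)), rng.standard_normal((4, 4))
A = X @ X.T + np.eye(4)            # A > 0
B = Y @ Y.T + np.eye(4)            # B > 0

for k in (1, 2, 3, 4):
    lhs = tr_wedge(A + B, k) ** (1 / k)
    rhs = tr_wedge(A, k) ** (1 / k) + tr_wedge(B, k) ** (1 / k)
    assert lhs >= rhs - 1e-9       # Brunn-Minkowski-type inequality
```

For $k = n$ this reduces to $\det(A+B)^{1/n} \ge \det(A)^{1/n} + \det(B)^{1/n}$, the classical Minkowski determinant inequality.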

#### **3. Lieb Concavity Theorem for Deformed Exponential**

In this section, we obtain some useful conclusions, and some simple and straightforward computations are omitted. Recently, by using the Young inequality,

$$\text{Tr}[\boldsymbol{Y}] = \max\_{\boldsymbol{X} \succeq 0} \{ \text{Tr}[\boldsymbol{X}] - \text{Tr}[\boldsymbol{X}^{2-q}(\ln\_q \boldsymbol{X} - \ln\_q \boldsymbol{Y})] \},$$

Shi and Hansen obtained that $F(A) = \operatorname{Tr}\big[\exp_q^{\frac{1}{p}}(K^* \ln_q(A^p) K)\big]$ is concave for any $1 \le q \le 2$, where $K^*K \le I$ ($I$ is the identity matrix of $M(n, \mathbb{C})$) [13]; namely, the following theorem.

**Theorem 1.** *For* 0 < *p* ≤ 1*,* 1 < *q* ≤ 2 *and K*∗*K* ≤ *I, the function*

$$F(A) = \operatorname{Tr}\Big[\exp_q^{\frac{1}{p}}\big(K^* \ln_q (A^p) K\big)\Big] \tag{6}$$

*is concave for the strictly positive A* <sup>∈</sup> *<sup>H</sup>*<sup>+</sup> *n .*

**Proof.** (The first proof of Theorem 1) Since [21]

$$\operatorname{D}f(A)(B) = \sum_{i} \sum_{j} \frac{f(\lambda_i) - f(\lambda_j)}{\lambda_i - \lambda_j} P_i B P_j,$$

we obtain

$$\begin{aligned} \frac{d\,\operatorname{Tr}[f(A+tB) - f(A)]}{dt} &= \operatorname{Tr}\Bigg[\sum_{i}\sum_{j}\frac{f(\lambda_i)-f(\lambda_j)}{\lambda_i-\lambda_j}P_i B P_j\Bigg] \\ &= \operatorname{Tr}\Bigg[\sum_{j}P_j\sum_{i}\frac{f(\lambda_i)-f(\lambda_j)}{\lambda_i-\lambda_j}P_i B\Bigg] \\ &= \operatorname{Tr}\Bigg[\sum_{i} f'(\lambda_i) P_i B\Bigg] = \operatorname{Tr}[f'(A)B], \end{aligned}$$

where *λ<sup>i</sup>* are eigenvalues of A. When *f*(*x*) is a convex function, we obtain

$$\operatorname{Tr}[f(A+tB) - f(A)] \ge \operatorname{Tr}[f'(A)tB],$$

for any *t*. This implies that

$$\text{Tr}[f(\mathcal{C})] = \max\{\text{Tr}[f(D) + f'(D)(\mathcal{C} - D)] : D > 0\}.$$

Therefore, we obtain

$$\begin{aligned} &\operatorname{Tr}[(K^\*A^{pq-p}K+I-K^\*K)^{\frac{1}{pq-p}}] \\ &= \max\Big\{\operatorname{Tr}\Big[D^{\frac{1}{pq-p}}+\frac{D^{\frac{1}{pq-p}-1}(K^\*A^{pq-p}K+I-K^\*K-D)}{pq-p}\Big]:D>0\Big\} \\ &= \max\Big\{\operatorname{Tr}\Big[C+\frac{C^{1-pq+p}(K^\*A^{pq-p}K+I-K^\*K-C^{pq-p})}{pq-p}\Big]:C=D^{\frac{1}{pq-p}}>0\Big\} \\ &= \max\Big\{\operatorname{Tr}\Big[C\Big(1-\frac{1}{pq-p}\Big)+\frac{C^{1-pq+p}K^\*A^{pq-p}K}{pq-p}+\frac{C^{1-pq+p}(I-K^\*K)}{pq-p}\Big]:C>0\Big\}. \end{aligned}$$

Thus, the concavity of $F(A)$ is equivalent to the joint concavity of $\operatorname{Tr}\Big[\frac{C^{1-pq+p}K^*A^{pq-p}K}{pq-p}\Big]$ in the strictly positive $A$ and $C$, which is the Lieb concavity theorem [22,23].
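The closed form used in this proof can be checked numerically. The sketch below is an illustration of ours; it assumes the deformed-function convention $\ln_q x = (x^{q-1}-1)/(q-1)$ and $\exp_q y = (1+(q-1)y)^{1/(q-1)}$, under which $\exp_q^{1/p}(K^*\ln_q(A^p)K) = (K^*A^{pq-p}K+I-K^*K)^{1/(pq-p)}$; all parameter values are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(1)

def mat_power(M, r):
    """Fractional power of a Hermitian positive definite matrix."""
    lam, U = np.linalg.eigh(M)
    return (U * lam**r) @ U.conj().T

def ln_q(X, q):
    # assumed convention: ln_q(x) = (x^{q-1} - 1) / (q - 1)
    return (mat_power(X, q - 1) - np.eye(X.shape[0])) / (q - 1)

def F(A, K, p, q):
    """Closed form Tr[(K* A^{pq-p} K + I - K*K)^{1/(pq-p)}]."""
    n = A.shape[0]
    M = K.conj().T @ mat_power(A, p*q - p) @ K + np.eye(n) - K.conj().T @ K
    return np.trace(mat_power(M, 1.0/(p*q - p))).real

def random_pd(n):
    X = rng.standard_normal((n, n)) + 1j*rng.standard_normal((n, n))
    return X @ X.conj().T + 0.1*np.eye(n)

n, p, q = 4, 0.7, 1.6
K = 0.5*np.eye(n)  # satisfies K*K <= I

# 1) the closed form agrees with Tr[exp_q^{1/p}(K* ln_q(A^p) K)]
A0 = random_pd(n)
Y = K.conj().T @ ln_q(mat_power(A0, p), q) @ K
direct = np.trace(mat_power(np.eye(n) + (q - 1)*Y, 1.0/(p*(q - 1)))).real
assert abs(direct - F(A0, K, p, q)) < 1e-8

# 2) midpoint concavity of F on random positive definite samples
for _ in range(200):
    A1, A2 = random_pd(n), random_pd(n)
    assert F(0.5*(A1 + A2), K, p, q) >= 0.5*(F(A1, K, p, q) + F(A2, K, p, q)) - 1e-8
print("closed form and midpoint concavity verified on random samples")
```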

Unfortunately, Theorem 1 cannot be obtained from Epstein's theorem, so we require a more general version of Epstein's theorem. First, for any $\operatorname{Im}(z) > 0$, we know that $A + zB$ is invertible and $x^*(A+zB)x$ is a Pick function for any $x \in \mathbb{C}^n$ [14]. For any $A \in M(n,\mathbb{C})$, $f(A)$ is defined as [12]

$$f(A) = \frac{1}{2\pi i} \oint\_C \frac{f(z)}{z - A} \,\mathrm{d}z,$$

where *f*(*z*) is a complex holomorphic function in an open set of the complex plane containing Sp(*A*) (the set of all eigenvalues of *A*). Then, we have the following lemma.

**Lemma 4.** *Let* $A, B \in H_n^+$ *and* $0 < \alpha \le 1$*; then*

$$x^\*(A + zB)^{\alpha} x$$

*is a Pick function for any* $x \in \mathbb{C}^n$*, and* $0 < \arg(x^*(A+zB)^{\alpha}x) < \alpha\pi$ *if* $0 < \arg(z) = \theta < \pi$*, so that* $\operatorname{Sp}((A+zB)^{\alpha}) \subseteq (\operatorname{Sp}(A+zB))^{\alpha}$*. More generally, we find that*

$$x^\* f(A + zB)x$$

*is a Pick function when f is a Pick function.*

**Proof.** Setting $z = \rho e^{i\theta}$, we have

$$\begin{aligned} (A+zB)^{\alpha} &= \int\_0^{+\infty} \Big(\frac{A+zB}{t+A+zB}\Big) \,\mathrm{d}\mu(t) \\ &= \int\_0^{+\infty} \Big(\frac{1}{\frac{t}{A+zB}+1}\Big) \,\mathrm{d}\mu(t), \end{aligned}$$

where $\mathrm{d}\mu(t) = \frac{\sin(\alpha\pi)}{\pi}\, t^{\alpha-1}\,\mathrm{d}t$. Since $\operatorname{Im} z > 0$, we see that $A + zB$ is invertible. Hence, we have

$$\begin{aligned} x^\*(A+zB)^{\alpha} x &= \int\_0^{+\infty} x^\*\Big(\frac{1}{\frac{t}{A+zB}+1}\Big)x \,\mathrm{d}\mu(t) \\ &= \int\_0^{+\infty} y^\*\Big(\frac{t}{A+z^\*B}+1\Big)y \,\mathrm{d}\mu(t), \quad y = \Big(\frac{t}{A+zB}+1\Big)^{-1}x \\ &= \int\_0^{+\infty} \big(y^\*y + t\,w^\*(A+zB)w\big) \,\mathrm{d}\mu(t), \quad w = (A+z^\*B)^{-1}y \\ &= \int\_0^{+\infty} \big(y^\*y + t\,w^\*Aw\big) \,\mathrm{d}\mu(t) + z \int\_0^{+\infty} t\,w^\*Bw \,\mathrm{d}\mu(t). \end{aligned}$$

This implies that

$$\operatorname{Im} x^\*(A + zB)^{\alpha} x = \operatorname{Im}(z) \cdot \int\_0^{+\infty} t\, w^\* B w \,\mathrm{d}\mu(t) > 0;$$

hence, $0 < \arg(x^*(A+zB)^{\alpha}x)$ when $0 < \arg(z) = \theta < \pi$.

In the same way, we can obtain

$$\operatorname{Im} w^\* \big[(-A - z^\* B)^{-\alpha}\big] w = \operatorname{Im} (e^{-i\alpha\pi} z^\*) \cdot \int\_0^{+\infty} t\, v^\* B v \,\mathrm{d}\mu(t) < 0, \quad v = (t(A + z^\* B) + 1)^{-1} w.$$

In particular, letting $w = (A + z^*B)^{\alpha}x$, we have

$$\operatorname{Im}\big(e^{-i\alpha\pi} x^\*(A+zB)^{\alpha} x\big) < 0.$$

This is equivalent to $\arg(x^*(A+zB)^{\alpha}x) < \alpha\pi$. To prove $\operatorname{Sp}((A+zB)^{\alpha}) \subseteq (\operatorname{Sp}(A+zB))^{\alpha}$, let $(A+zB)\xi = \lambda\xi$; we find

$$\lambda^{\alpha} = \big[\xi^\*(A+zB)\xi\big]^{\alpha} = \big[\xi^\*A\xi + z\,\xi^\*B\xi\big]^{\alpha} = \rho^{\alpha} e^{i\alpha\theta},$$

where $\xi$ is normalized and $\tan\theta = \dfrac{\xi^*B\xi\,\operatorname{Im}(z)}{\xi^*A\xi + \xi^*B\xi\,\operatorname{Re}(z)} \le \tan\arg(z)$.

When $f(z)$ is a Pick function, using the integral representation of $f(z)$ in a similar way, we can obtain that

$$x^\*f(A+zB)x$$

is a Pick function for any $x \in \mathbb{C}^n$.
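Lemma 4 can be probed numerically. The sketch below is an illustration of ours (all parameter choices are arbitrary): it draws random positive definite $A, B$, a point $z$ in the upper half-plane, and checks $0 < \arg(x^*(A+zB)^{\alpha}x) < \alpha\pi$ using the principal matrix power:

```python
import numpy as np

rng = np.random.default_rng(2)

def random_pd(n):
    X = rng.standard_normal((n, n)) + 1j*rng.standard_normal((n, n))
    return X @ X.conj().T + 0.1*np.eye(n)

def mat_power_c(M, a):
    """Principal fractional power of a (diagonalizable) complex matrix."""
    lam, V = np.linalg.eig(M)
    return V @ np.diag(lam**a) @ np.linalg.inv(V)

n, alpha = 4, 0.6
for _ in range(200):
    A, B = random_pd(n), random_pd(n)
    # z in the open upper half-plane, bounded away from the real axis
    z = rng.uniform(0.1, 5.0) * np.exp(1j*rng.uniform(0.05, np.pi - 0.05))
    x = rng.standard_normal(n) + 1j*rng.standard_normal(n)
    w = x.conj() @ mat_power_c(A + z*B, alpha) @ x
    theta = np.angle(w)
    assert 0.0 < theta < alpha*np.pi, (theta, alpha*np.pi)
print("0 < arg(x*(A+zB)^alpha x) < alpha*pi on all samples")
```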

Using Lemma 4, another proof of Theorem 1 can be obtained.

**Theorem 2.** *For* 0 < *p* ≤ 1*,* 1 < *q* ≤ 2 *and K*∗*K* ≤ *I, the function*

$$F(A) = \operatorname{Tr}\left[\exp\_q^{\frac{1}{p}} \left(K^\* \ln\_q(A^p) K\right)\right],$$

*is concave for strictly positive* $A \in H_n^+$.

**Proof.** (The second proof of Theorem 1)

First, set $f(z) = \operatorname{Tr}[(A(z)+iB(z))^{\frac{1}{pq-p}}]$, where $A(z) = \operatorname{Re}(K^*(A+zB)^{pq-p}K + I - K^*K)$ and $B(z) = \operatorname{Im}(K^*(A+zB)^{pq-p}K + I - K^*K) \in H_n^+$. As

$$\begin{aligned} \operatorname{Im}\left[\operatorname{Tr}\big[(A(z)+iB(z))^{\frac{1}{pq-p}}\big]\right] &= \operatorname{Im}\left[\operatorname{Tr}\Big[\int\_0^{+\infty}\frac{A(z)+iB(z)}{t+A(z)+iB(z)}\,\mathrm{d}\mu(t)\Big]\right] \\ &= \operatorname{Im}\left[\int\_0^{+\infty}\operatorname{Tr}\Big[\frac{\Lambda\_{A(z)+iB(z)}}{t+\Lambda\_{A(z)+iB(z)}}\Big]\,\mathrm{d}\mu(t)\right] \\ &= \operatorname{Im}\left[\int\_0^{+\infty}\sum\_{i=1}^{n}\frac{\lambda\_i(A(z)+iB(z))}{t+\lambda\_i(A(z)+iB(z))}\,\mathrm{d}\mu(t)\right] \\ &= \operatorname{Im}\left[\sum\_{i=1}^{n}\big[\lambda\_i(A(z)+iB(z))\big]^{\frac{1}{pq-p}}\right], \end{aligned}$$

when arg(*z*) ∈ (0, *π*) and *K*∗*K* ≤ *I*, then

$$\begin{aligned} &\operatorname{arg}(\lambda\_i(A(z) + iB(z))) \\ &= \operatorname{arg}(\mathbf{x}\_i^\*(A(z) + iB(z))\mathbf{x}\_i) \\ &= \operatorname{arg}(\mathbf{x}\_i^\* K^\* (A + zB)^{pq-p} K \mathbf{x}\_i + \mathbf{x}\_i^\*(I - K^\* K) \mathbf{x}\_i) \in (0, (pq - p)\pi), \end{aligned}$$

where $x_i \in \mathbb{C}^n$ are the eigenvectors of $K^*(A+zB)^{pq-p}K + I - K^*K$.

Hence,

$$\operatorname{Im}\left[\operatorname{Tr}[(A(z)+iB(z))^{\frac{1}{pq-p}}]\right] = \operatorname{Im}\left[\sum\_{i=1}^{n} z\_i\right],$$

where $z_i$ is the $i$-th eigenvalue of $(A(z)+iB(z))^{\frac{1}{pq-p}}$ and $\arg(z_i) \in (0,\pi)$.

Thus, $f(z) = \operatorname{Tr}[(A(z)+iB(z))^{\frac{1}{pq-p}}]$ is a Pick function, and this implies that $F(A)$ is concave.

Using a similar method, we can obtain the following corollary.

**Corollary 1.** *For* 0 < *p* ≤ 1 *and* 1 < *q* ≤ 2*, the function*

$$E(A) = \operatorname{Tr}\left[\exp\_q^{\frac{1}{p}} \big[B + \ln\_q(A^p)\big]\right] \tag{7}$$

*is concave for strictly positive* $A \in H_n^+$.

Since the Thompson–Golden theorem can be seen as a corollary of the Lieb concavity theorem, we now discuss the Lieb concavity theorem for the deformed exponential. Setting $\operatorname{Sp}(A) \subset \{z = \rho e^{i\theta} : 0 < \rho,\ 0 < \theta < \alpha\}$ and $\operatorname{Sp}(B) \subset \{z = \rho e^{i\theta} : 0 < \rho,\ 0 < \theta < \beta\}$, then for any $A_1, B_1 \in H_n$, $A_2, B_2 \in H_n^+$ with $A = A_1 + iA_2$ and $B = B_1 + iB_2$, we have [12]

$$\operatorname{SP}(AB) \subset \{ z = \rho e^{i\theta} : 0 < \rho,\ 0 < \theta < \alpha + \beta \}, \tag{8}$$

and then the following theorem can be obtained.

**Theorem 3.** *For* 0 < *p* ≤ 1*,* 1 < *q* ≤ 2 *and P*∗*P* ≤ *I, the following function*

$$L(A) = \text{Tr}[\exp\_q(P^\* \ln\_q(K^\* A^p K) P) \exp\_q(P^\* \ln\_q A^{1-p} P)]\tag{9}$$

*is concave for any* $A \in H_n^+$.

$$\text{Proof. Set } L\_{A,B}(z) = \text{Tr}[\exp\_q(P^\* \ln\_q(K^\*(A+zB)^p K)P) \exp\_q(P^\* \ln\_q(A+zB)^{1-p}P)].$$

When $x_i \in \mathbb{C}^n$ is an eigenvector of $P^*K^*(A+zB)^{pq-p}KP + I - P^*P$ and $P^*P \le I$,

$$\arg(\mathbf{x}\_i^\* P^\* K^\* (A + zB)^{pq-p} K P \mathbf{x}\_i + \mathbf{x}\_i^\* (I - P^\* P) \mathbf{x}\_i) \in (0, (pq - p)\pi),$$

if arg(*z*) ∈ (0, *π*). This implies

$$\operatorname{SP}(P^\*K^\*(A+zB)^{pq-p}KP+I-P^\*P)\subset \{z=\rho e^{i\theta}:0<\rho,\ 0<\theta<(pq-p)\pi\},$$

so that

$$\operatorname{SP}(\exp\_q(P^\* \ln\_q(K^\*(A + zB)^p K)P)) \subset \{z = \rho e^{i\theta} : 0 < \rho,\ 0 < \theta < p\pi\}.$$

Similarly, we can also obtain

$$\operatorname{SP}(\exp\_q(P^\*\ln\_q(A+zB)^{1-p}P)) \subset \{z = \rho e^{i\theta} : 0 < \rho,\ 0 < \theta < (1-p)\pi\}.$$

Hence, using (8), we see that

$$\begin{aligned} &\operatorname{SP}[\exp\_q(P^\*\ln\_q(K^\*(A+zB)^pK)P)\exp\_q(P^\*\ln\_q(A+zB)^{1-p}P)] \\ &\subset \{z=\rho e^{i\theta}: 0<\rho, 0<\theta<\pi\}. \end{aligned}$$

Thus, we know $\arg(L_{A,B}(z)) \in (0,\pi)$, which implies that $L_{A,B}(z)$ is a Pick function. Hence, $L(A)$ is concave.

In fact, Theorem 3 is a generalization of the Lieb concavity theorem, obtained by setting $P = I$, $K = \begin{pmatrix} 0 & 0 \\ H & 0 \end{pmatrix}$ and $A = \begin{pmatrix} Z & 0 \\ 0 & B \end{pmatrix}$. Moreover, we can obtain the following theorem.

**Theorem 4.** *For* 0 < *p*,*s* ≤ 1*,* 1 < *q* ≤ 2 *and P*∗*P* ≤ *I, the functions*

$$\operatorname{Tr}\left[\left(\exp\_q(P^\*\ln\_q A^{\frac{sp}{2}}P)\exp\_q(P^\*\ln\_q(K^\*A^{s-sp}K)P)\exp\_q(P^\*\ln\_q A^{\frac{sp}{2}}P)\right)^{\frac{1}{s}}\right] \tag{10}$$

*and*

$$\left[ \operatorname{Tr} \exp\_q(P^\* \ln\_q A^{\frac{sp}{2}} P) \exp\_q(P^\* \ln\_q (K^\* A^{s-sp} K) P) \exp\_q(P^\* \ln\_q A^{\frac{sp}{2}} P) \right]^{\frac{1}{s}} \tag{11}$$

*are jointly concave for any A* <sup>∈</sup> *<sup>H</sup>*<sup>+</sup> *n .*

The proof of Theorem 4 is similar to that of Theorem 3, so we do not repeat it here. In [19], Huang used exterior algebra to show that

$$\left[\text{Tr}\,\wedge^k [\exp(K^\* \ln(A)K)]\right]^{\frac{1}{k}}$$

is a concave function for any $A \in H_n^+$, $K^*K \le I$ and $k \le n$. Combined with Theorem 1, we can obtain the following generalization.

**Theorem 5.** *For* 0 < *p* ≤ 1*,* 1 < *q* ≤ 2 *and K*∗*K* ≤ *I, the function*

$$\left[ \operatorname{Tr} \wedge^k \left[ \exp\_q^{\frac{1}{p}} (K^\* \ln\_q (A^p) K) \right] \right]^{\frac{1}{k}} \tag{12}$$

*is concave for strictly positive* $A \in H_n^+$ *and* $k \le n$.

**Proof.** In fact, we can prove that

$$\left[ \text{Tr} \wedge^k \left[ \left( H^\* A^s H + B \right)^{\frac{1}{s}} \right] \right]^{\frac{1}{k}}$$

is a concave function for any $A \in H_n^+$, where $0 < s \le 1$ and $B \in H_n^+$. Using Theorem 1, we know that

$$\operatorname{Tr}\left[\left(H^\*A^pH+C\right)^{\frac{1}{p}}\right] \tag{13}$$

is a concave function for any $A \in H_n^+$, where $0 < p \le 1$ and $C \in H_n^+$. Then, for any $A_1, A_2 \in H_n^+$, we have

$$\begin{aligned} &\left[\operatorname{Tr}\wedge^k\Big[\big(H^\*(\tfrac{A\_1+A\_2}{2})^sH+B\big)^{\frac{1}{s}}\Big]\right]^{\frac{1}{k}} \\ &= \left[\operatorname{Tr}\Big[\big(\bar{H}^\*(\tfrac{A\_1\wedge^{k-1}I + A\_2\wedge^{k-1}I}{2})^s\bar{H}+\bar{B}\big)^{\frac{1}{s}}\Big]\right]^{\frac{1}{k}} \\ &\ge \left[\operatorname{Tr}\Big[\frac{\big(\bar{H}^\*(A\_1\wedge^{k-1}I)^s\bar{H}+\bar{B}\big)^{\frac{1}{s}} + \big(\bar{H}^\*(A\_2\wedge^{k-1}I)^s\bar{H}+\bar{B}\big)^{\frac{1}{s}}}{2}\Big]\right]^{\frac{1}{k}} \\ &= \left[\operatorname{Tr}\Big[\Big(\frac{(H^\*A\_1^sH+B)^{\frac{1}{s}} + (H^\*A\_2^sH+B)^{\frac{1}{s}}}{2}\Big)\wedge^{k-1}\big(H^\*(\tfrac{A\_1+A\_2}{2})^sH+B\big)^{\frac{1}{s}}\Big]\right]^{\frac{1}{k}}, \end{aligned}$$

where $\bar{H} = H \wedge^{k-1} \big(H^*(\frac{A_1+A_2}{2})^sH+B\big)^{\frac{1}{s}}$ and $\bar{B} = B \wedge^{k-1} \big(H^*(\frac{A_1+A_2}{2})^sH+B\big)^{\frac{1}{s}}$. Analogously, we can obtain

$$\begin{aligned} &\left[\text{Tr}\,\wedge^k \left[\left(H^\*(\frac{A\_1+A\_2}{2})^sH+B\right)^{\frac{1}{s}}\right]\right]^{\frac{1}{k}}\\ &\geq \left[\text{Tr}\left[\wedge^k \left(\frac{(H^\*A\_1^sH+B)^{\frac{1}{s}}+(H^\*A\_2^sH+B)^{\frac{1}{s}}}{2}\right)\right]\right]^{\frac{1}{k}}.\end{aligned}$$

Using Lemma 3, we obtain

$$\begin{aligned} &\left[\operatorname{Tr}\wedge^k\Big[\big(H^\*(\tfrac{A\_1+A\_2}{2})^sH+B\big)^{\frac{1}{s}}\Big]\right]^{\frac{1}{k}} \\ &\ge \frac{\left[\operatorname{Tr}\Big[\wedge^k\Big(\big(H^\*A\_1^sH+B\big)^{\frac{1}{s}}\Big)\Big]\right]^{\frac{1}{k}} + \left[\operatorname{Tr}\Big[\wedge^k\Big(\big(H^\*A\_2^sH+B\big)^{\frac{1}{s}}\Big)\Big]\right]^{\frac{1}{k}}}{2}. \end{aligned}$$

The proof of Theorem 5 rests on the application of exterior algebra and the Brunn–Minkowski inequality. Hence, other theorems, such as the Thompson–Golden theorem for the deformed exponential, can be generalized in the same way to a more general form, but we do not discuss this here.
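As a numerical sanity check of the key concavity claim in the proof of Theorem 5, the sketch below (an illustration of ours; $H$, $B$, and all parameters are arbitrary choices) tests the midpoint concavity of $\big[\operatorname{Tr}\wedge^k\big((H^*A^sH+B)^{1/s}\big)\big]^{1/k}$ on random positive definite matrices:

```python
import itertools
import numpy as np

rng = np.random.default_rng(3)

def e_k(M, k):
    """Tr[wedge^k M]: k-th elementary symmetric polynomial of eigenvalues."""
    lam = np.linalg.eigvalsh(M)
    return sum(np.prod(c) for c in itertools.combinations(lam, k))

def mat_power(M, r):
    """Fractional power of a real symmetric positive definite matrix."""
    lam, U = np.linalg.eigh(M)
    return (U * lam**r) @ U.T

def G(A, H, B, s, k):
    inner = mat_power(H.T @ mat_power(A, s) @ H + B, 1.0/s)
    return e_k(inner, k) ** (1.0/k)

def random_pd(n):
    X = rng.standard_normal((n, n))
    return X @ X.T + 0.1*np.eye(n)

n, s, k = 4, 0.5, 2
H = 0.6*np.eye(n)   # H*H <= I
B = random_pd(n)
for _ in range(100):
    A1, A2 = random_pd(n), random_pd(n)
    mid = G(0.5*(A1 + A2), H, B, s, k)
    avg = 0.5*(G(A1, H, B, s, k) + G(A2, H, B, s, k))
    assert mid >= avg - 1e-8, (mid, avg)
print("midpoint concavity of [Tr wedge^k (H*A^sH+B)^{1/s}]^{1/k} holds")
```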

#### **4. Conclusions**

In this paper, we used the Pick function to obtain a generalization of the Lieb concavity theorem and some corollaries. The advantage of using the Pick function is that it avoids discussing matrix commutativity and the variational method. In general, we obtain that the following two functions are concave for $0 < p, s \le 1$, $1 < q \le 2$ and $P^*P \le I$:

$$\left[ \operatorname{Tr} \wedge^k \left[ \exp\_q(P^\* \ln\_q A^{\frac{sp}{2}} P) \exp\_q(P^\* \ln\_q (K^\* A^{s-sp} K) P) \exp\_q(P^\* \ln\_q A^{\frac{sp}{2}} P) \right]^{\frac{1}{s}} \right]^{\frac{1}{k}} \tag{14}$$

and

$$\left[ \operatorname{Tr} \wedge^{k} \left[ \exp\_{q}(P^{\*} \ln\_{q} A^{\frac{sp}{2}} P) \exp\_{q}(P^{\*} \ln\_{q} (K^{\*} A^{s-sp} K) P) \exp\_{q}(P^{\*} \ln\_{q} A^{\frac{sp}{2}} P) \right] \right]^{\frac{1}{ks}}, \tag{15}$$

where $A \in H_n^+$ and $k \le n$; this provides a direction for future work.

**Author Contributions:** Conceptualization, H.S.; writing—original draft, G.Y. and Y.L.; writing—review and editing, J.W.; funding acquisition, J.W. All authors have read and agreed to the published version of the manuscript.

**Funding:** The present research was supported by the General Project of Science and Technology Plan of Beijing Municipal Education Commission (Grant No. KM202010037003).

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Not applicable.

**Acknowledgments:** The authors thank the referees for detailed reading and comments that were both helpful and insightful.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


### *Article* **The Role of the Discount Policy of Prepayment on Environmentally Friendly Inventory Management**

**Shirin Sultana 1, Abu Hashan Md Mashud 2, Yosef Daryanto 3, Sujan Miah 4, Adel Alrasheedi <sup>5</sup> and Ibrahim M. Hezam 5,\***


**Abstract:** Nowadays, more and more consumers consider environmentally friendly products in their purchasing decisions. Companies need to adapt to these changes while paying attention to standard business mechanisms such as payment terms. The purpose of this study is to optimize the entire profit function of a retailer and to find the optimal selling price and replenishment cycle when the demand rate depends on the price and the carbon emission reduction level. This study investigates an economic order quantity model whose demand function is positively affected by carbon emission reduction in addition to the selling price. In this model, the supplier requests payment in advance on the purchase cost while offering a discount according to the payment-in-advance decision. Three different types of payment-in-advance cases are applied: (1) payment in advance with an equal number of instalments, (2) payment in advance with a single instalment, and (3) the absence of payment in advance. Numerical examples and sensitivity analysis illustrate the proposed model. The total profit increases for all three cases with higher values of the carbon emission reduction level. Further, the study finds that the profit becomes maximum for case 2, whereas the selling price and cycle length become minimum. This study considers a sustainable inventory model with payment-in-advance settings when the demand rate depends on the price and the carbon emission reduction level; to the authors' knowledge, no previous study has examined this combination.

**Keywords:** low carbon inventory; discount; payment in advance; price-sensitive demand; emission reduction

**Citation:** Sultana, S.; Mashud, A.H.M.; Daryanto, Y.; Miah, S.; Alrasheedi, A.; Hezam, I.M. The Role of the Discount Policy of Prepayment on Environmentally Friendly Inventory Management. *Fractal Fract.* **2022**, *6*, 26. https://doi.org/10.3390/fractalfract6010026

Academic Editor: Savin Trean¸tă

Received: 25 November 2021; Accepted: 30 December 2021; Published: 2 January 2022

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

#### **1. Introduction**

Customer preferences have always been a concern in industries in terms of their effect on business growth. Customer preferences are affected by many factors and are reflected in the customers' willingness to buy. The level of consumer demand is usually sensitive to product prices. However, in today's setting, more and more consumers consider the environmental performance of the producer and the green level of the product in their purchasing decisions [1–3]. This trend is expanding globally along with the increasing consumer awareness of the importance of environmental conservation in the midst of climate change issues. Hence, many producers and retailers innovate green products and promote green operations to attract these customers [4,5]. Moreover, regulations become another driver for this eco-innovation. Companies try to reduce carbon emissions from production, logistics, and transportation activities and apply green technology to meet new regulations and pressure from these customers.

The inventory decisions on supply chain operations already incorporate environmental parameters with certain intentions such as reducing carbon emission levels. Previous supply chain studies also consider customers' awareness of low carbon emissions [6–8], the green quality of the product [9,10], and the amount of carbon emissions [11], which affect the demand level. Customer awareness and green quality level have positive impacts on the demand function of green products, while the amount of carbon emissions has the opposite effect. Green marketing becomes a powerful strategy for businesses through various green advertising, branding, and eco-labeling. This strategy has been adopted to promote many international and local brands and products in both developed and developing countries [12–16]. Xia et al. [7] incorporated the positive effect of emission reduction and the promotion of this environmental benefit into the demand function. Recently, Dong et al. [17] considered the manufacturer's reduction in carbon emission levels, which shows the company's initiative for greener operations. The positive effect of carbon emission reduction on customer demand was combined with the negative effect of the selling price. The study also examined the payment issue by analyzing the effect of trade credit and bank loans. Based on Dong et al. [17], we study a sustainable inventory model considering a prepayment mechanism, another common payment term in business, in order to consider a real situation.

The payment term in the transaction between a supplier and a buyer is an important issue in supply chain collaboration. The term should be agreed upon by both parties so that it is clear and beneficial for all. The classic economic order quantity (EOQ) model assumes a payment immediately after product delivery. However, in many cases, payment in advance is applied, in which the buyer should pay the purchase cost before the product delivery. The buyer may have to pay all the purchase cost in advance [18–20] or pay only a percentage of it [19,21]. Further, the prepayment can be done in several time intervals [22,23]. While the payment in advance will give the supplier an advantage by mitigating the risk of cancellation, the supplier can offer some discounts to the buyer so that they benefit as well. Our study considers discount offers similar to Mashud et al. [23].

The increase in customers' awareness of green issues, together with the trend in producers' concern on carbon emission reduction and the common practices of payment in advance, has motivated this study to contribute to the development of a sustainable inventory model. This paper presents a profit maximization study of a retailer inventory system to respond to customers' increasing low carbon preferences. When customer demand depends on the selling price and the retailer must pay the purchase cost in advance, the proposed model suggests the optimum replenishment time and selling price. The study aims at providing managerial insights by answering the following questions:


In this paper, we first study the retailer's optimal selling price and replenishment cycle when payment in advance is fulfilled with an equal number of instalments and a discount is offered by the supplier. Then, we address the same issue when the purchase cost is paid in a single instalment and, in return, the retailer gets a different discount rate. Next, we study the situation in which no payment in advance is considered, and hence no discount is offered.

#### **2. Literature Review**

Traditional inventory management focuses on the economic benefit from a business point of view. For example, the classic inventory model works under some basic assumptions, such as an infinite replenishment rate and planning horizon, that aim at optimizing financial profit. In recent years, considerable attention has shifted to joint economic and environmental aspects, with the emergence of the sustainable inventory model terminology. The focus has broadened to include minimizing environmental impacts through carbon emission reductions, energy efficiency, and the adoption of green technologies [24,25]. Most sustainable inventory models seek to reduce supply chain emission levels by considering emissions from production, transportation, and inventory storage activities. Carbon tax systems are widely used to include the cost of carbon emissions in the objective function [26–31]. Other carbon regulations such as carbon cap-and-trade and strict carbon limits are also used, depending on the regulations imposed by the government [32,33]. Datta [34] developed an inventory model with investment in emission reduction technology, focusing on emissions from production activities. The study considered carbon emission reduction under a carbon tax policy and optimization of the investment amount. Under a similar carbon tax policy, Mashud et al. [35] considered emission reduction from transportation activities. Simultaneous investments in emission reduction and deterioration rate control were studied by Mishra et al. [36]. Lou et al. [37] optimized the green technology investment considering a subsidy from the government. The optimum investment level was considered important because customer demand was assumed to be sensitive to the emission reduction level. The study recommended an active role for the government, such as providing technology investment subsidies and controlling the emission trading price.

In addition to government regulations, efforts to reduce carbon emission levels are also driven by the increasing number of consumers who consider environmental aspects in their purchasing decisions [9]. Hence, the environmental performance of the product was added to the demand function. Pang et al. [6] and Gao et al. [38] considered the customers' environmental awareness and set a linear demand function in addition to the effect of selling price on demand. Hovelaque and Bironneau [11] set a demand function that depends on price and the amount of carbon emissions. They found two order quantities, one that will maximize the total profit and another one that will minimize the emission level. Lou et al. [37] also set a demand function with a linear effect of price and emission reduction. The percentage of emission reduction per unit product was optimized together with the selling price. Xia et al. [7] incorporated a promotion strategy to support the emission reduction program to gain customer attention. Zhang et al. [3] analyzed a manufacturer's decision to introduce a new green product to customers with high environmental awareness. The study found a conflict if the manufacturer also sells an ordinary product because the products will compete with each other. Recently, Dong et al. [17] considered the positive effect of carbon emission reduction and the negative effect of the selling price on the customer demand rate. For a single-supplier, single-buyer supply chain model, they also studied the effect of financial facilities such as trade credit and bank loans on the manufacturer's decision regarding the level of emission reduction. Using a similar demand function, we study an economic order quantity (EOQ) model of a retailer when payment in advance is requested by the supplier. The retailer's optimal selling price and replenishment cycle are the decision variables.

The inventory model with payment in advance is another research stream in this paper. The practice of payment in advance may be introduced by a powerful supplier to prevent order cancellation, especially for customized and expensive products [19]. The payment may help the supplier to finish the product. Maiti et al. [21] are among the first researchers who incorporated payment in advance into an inventory model. Their model assumes that the buyer gets some price discount based on the amount of the payment in advance. Further, due to the payment in advance, the retailer may need cash aid from a financial institution, which means an additional cost of interest. The buyer may have to pay all the purchase cost in advance or only a percentage of it [18,19,39,40]. During the payment-in-advance period, multiple instalments may be applied to reduce the retailer's burden [22,23]. Hence, our study examines how three different payment-in-advance settings affect the optimum price and replenishment cycle that optimize the profit: (1) payment in advance with an equal number of instalments, (2) payment in advance with a single instalment, and (3) the absence of payment in advance.

#### **3. Mathematical Model Formulation for Inventory Model**

The proposed mathematical model is based on the following assumptions:


*ψ* is the market potential


*RC* is the manufacturer's carbon emission reduction level

In addition, the following nomenclatures (Table 1) are used.

**Table 1.** Notation description.


An inventory model is developed under consideration of the above assumptions. The model is divided into three cases considering the payment-in-advance cost. In Case I, payment in advance is fulfilled with an equal number of instalments, and a discount is offered to the retailer as a benefit according to the number of instalments. Case II considers that the payment is completed with a single instalment and, in return, the retailer will get a different discount from the supplier. Finally, in Case III, no payment-in-advance cost is considered, and hence no discount is offered to the retailer.

#### *3.1. Case I: With Advanced Payment and a Discount for Instalment Based Payment*

A retailer runs its business with an initial stock of $\Omega$ units. Depending on the demand function, the stock decreases and becomes zero at time $t = T_C$. Thus, one cycle ends, and the process repeats so that the business continues. To get this stock, the retailer pays a percentage ($\delta$) of the purchase cost in $n$ equal instalments before the products are delivered. The amount of each prepayment is $\frac{\delta\omega\Omega}{n}$. At the moment of delivery, the retailer pays for the remaining $(1-\delta)\Omega$ quantity. Figure 1 outlines these facts and the pattern of the inventory level.

**Figure 1.** Graphical presentation of the inventory system for Case I.

The inventory system is described by the following differential equation, considering the demand $D_L = \psi - \gamma p + \eta R_C$:

$$\frac{\mathrm{d}I\_S(t)}{\mathrm{d}t} = -D\_{L},\quad 0 \le t \le T\_{C} \tag{1}$$

With the help of the boundary conditions $I_S(0) = \Omega$ and $I_S(T_C) = 0$, solving Equation (1) gives

$$I\_S(t) = D\_L(T\_\mathbb{C} - t) \tag{2}$$

and

$$
\Omega = D\_L T\_{\mathbb{C}} \tag{3}
$$
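Equations (1)–(3) can be illustrated with a short computation; all parameter values below are hypothetical choices of ours, used only for illustration:

```python
# hypothetical parameter values for illustration only
psi, gamma, eta = 200.0, 2.5, 1.2   # market potential, price sensitivity, emission-reduction sensitivity
p, R_C, T_C = 40.0, 5.0, 0.5        # selling price, emission reduction level, cycle length

D_L = psi - gamma*p + eta*R_C       # demand rate (assumed positive for these values)
Omega = D_L * T_C                   # lot size, Eq. (3)

def I_S(t):
    """Inventory level at time t, Eq. (2): linear depletion from Omega to 0."""
    return D_L * (T_C - t)

# boundary conditions of Eq. (1) are satisfied
assert I_S(0) == Omega and I_S(T_C) == 0.0
```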


(a) The ordering cost per cycle:

$$OC = \mathbb{Q} \tag{4}$$

(b) The inventory holding cost per cycle:

$$HC = h \int\_0^{T\_{C}} I\_{S}(t) \, \mathrm{d}t = \frac{1}{2} h D\_L T\_{C}^{\,2} \tag{5}$$

(c) The purchase cost per cycle:

$$PC = \Omega \omega \tag{6}$$

(d) Transportation cost per cycle:

Three major costs are considered in estimating the transportation cost: the fixed cost ($F_C$), the variable cost, and the carbon emission cost. However, the variable cost and the carbon emission cost differ between an empty vehicle (truck) and a loaded vehicle. The total travel distance is $2\ell$, as the vehicle travels a distance $\ell$ to ship the goods and another $\ell$ to return with an empty load. For the vehicle alone, the variable cost is the total fuel consumption ($2\ell f_e$) times the fuel price ($\alpha$). An additional variable cost is estimated for the one-way distance based on the vehicle load, that is, the one-way distance ($\ell$) multiplied by the fuel consumption per ton of payload ($c_p$), the product weight ($w_p$), the order quantity per trip ($\frac{\Omega}{N_t}$), and the fuel price ($\alpha$). Similarly, the carbon emission cost for the vehicle is $2\ell$ times the cost of carbon emission per unit distance of delivery ($c_e$), and the load-based carbon emission cost is $\ell$ times the cost of carbon emission per unit item per unit distance of delivery ($c_k$) times $\frac{\Omega}{N_t}$. Thus, the total transportation cost per cycle is

$$TC = N_t \left[ F_C + \left( 2\ell f_c \alpha + \frac{\ell c_p w_p \Omega \alpha}{N_t} \right) + \left( 2\ell c_e + \frac{\ell c_k \Omega}{N_t} \right) \right]$$

$$TC = F_C N_t + 2\ell f_c \alpha N_t + \ell c_p w_p \Omega \alpha + 2\ell c_e N_t + \ell c_k \Omega \tag{7}$$
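The grouped per-trip form and the expanded form of Equation (7) agree, which can be verified with a short sketch (parameter values are illustrative only):

```python
# Equation (7) sanity check: the grouped per-trip form must equal the
# expanded form. Parameter values are illustrative only.

def transport_cost(N_t, F_C, ell, f_c, alpha, c_p, w_p, Omega, c_e, c_k):
    """Grouped form of Equation (7): N_t trips, each with a fixed cost,
    a fuel-based variable cost, and a carbon emission cost."""
    per_trip = (F_C
                + (2 * ell * f_c * alpha + ell * c_p * w_p * Omega * alpha / N_t)
                + (2 * ell * c_e + ell * c_k * Omega / N_t))
    return N_t * per_trip

par = dict(N_t=3, F_C=200.0, ell=100.0, f_c=1.0, alpha=0.3,
           c_p=1.5, w_p=0.5, Omega=320.0, c_e=0.03, c_k=0.02)
grouped = transport_cost(**par)
expanded = (par["F_C"] * par["N_t"]
            + 2 * par["ell"] * par["f_c"] * par["alpha"] * par["N_t"]
            + par["ell"] * par["c_p"] * par["w_p"] * par["Omega"] * par["alpha"]
            + 2 * par["ell"] * par["c_e"] * par["N_t"]
            + par["ell"] * par["c_k"] * par["Omega"])
assert abs(grouped - expanded) < 1e-6
```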

(e) Instalment capital cost:

The instalment capital cost is estimated following the procedure described by [35,43]:

$$\begin{aligned} IC &= \left(\frac{\phi\delta\omega\Omega}{n} \times n \times \frac{\kappa}{n}\right) + \left(\frac{\phi\delta\omega\Omega}{n} \times (n-1) \times \frac{\kappa}{n}\right) + \dots + \left(\frac{\phi\delta\omega\Omega}{n} \times \big(n - (n-1)\big) \times \frac{\kappa}{n}\right) \\ &= \left(\frac{\phi\delta\omega\Omega}{n} \times \frac{\kappa}{n}\right) \big(n + (n-1) + \dots + 2 + 1\big) = \left(\frac{\phi\delta\omega\Omega}{n} \times \frac{\kappa}{n}\right) \frac{n(n+1)}{2} \\ &= \frac{(n+1)}{2n} \phi\delta\omega\Omega\kappa \end{aligned} \tag{8}$$
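The collapse of the instalment-by-instalment sum into the closed form of Equation (8) can be verified directly (illustrative values):

```python
# Equation (8) check: the closed form equals the explicit instalment-by-
# instalment interest sum. Values are illustrative.

def instalment_capital_cost(phi, delta, omega, Omega, kappa, n):
    """Closed form of Equation (8)."""
    return (n + 1) / (2 * n) * phi * delta * omega * Omega * kappa

def instalment_capital_cost_sum(phi, delta, omega, Omega, kappa, n):
    """The k-th prepayment (k = 1..n) of size delta*omega*Omega/n accrues
    interest phi over a fraction (n - (k - 1))/n of the lead time kappa."""
    per_payment = delta * omega * Omega / n
    return sum(phi * per_payment * (n - k + 1) * kappa / n
               for k in range(1, n + 1))

args = dict(phi=0.01, delta=0.8, omega=150.0, Omega=300.0, kappa=0.5, n=10)
assert abs(instalment_capital_cost(**args)
           - instalment_capital_cost_sum(**args)) < 1e-9
```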

#### (f) Discount on purchase cost:

For the retailer's advance payment on the purchase cost, the supplier provides a *υ*% discount. The discount rate *ζ* depends on the number of instalments *n*; that is, the supplier offers a lower discount rate for more instalments, as follows:

$$\zeta = \frac{\upsilon}{n}, \quad 0 \le \upsilon \le 100. \tag{9}$$

Hence, the total discount is

$$DC = \Omega\omega\zeta = \frac{\Omega\omega\upsilon}{n}. \tag{10}$$

#### (g) Carbon emission reduction cost:

The effort of carbon emission reduction needs an investment, and a higher emission reduction requires an increasingly accelerated emission reduction cost. This cost is estimated according to Swami and Shah [42] in Equation (11).

$$RC = \chi R_C^2. \tag{11}$$

(h) Sales revenue per cycle:

$$SR = p \int\_{0}^{T\_{\mathbb{C}}} D\_{L} \, dt = p D\_{L} T\_{\mathbb{C}} \tag{12}$$

#### 3.1.2. Total Profit per Unit Time

Now, for the total profit per unit time, one can write:

$$\begin{aligned} \tau(p, T_C) &= \frac{1}{T_C}(SR - OC - HC - PC - TC - IC - RC + DC) \\ &= \frac{1}{T_C}\left( pD_L T_C - \xi - \frac{1}{2}h D_L T_C^{\,2} - \Omega\omega - \frac{(n+1)}{2n}\phi\delta\omega\Omega\kappa - F_C N_t - 2\ell f_c \alpha N_t - \ell c_p w_p \Omega \alpha - 2\ell c_e N_t - \ell c_k \Omega - \chi R_C^2 + \frac{\Omega\omega\upsilon}{n} \right) \\ &= \frac{1}{T_C}\left( p(\psi - \gamma p + \eta R_C)T_C - \frac{1}{2}h(\psi - \gamma p + \eta R_C)T_C^{\,2} - \lambda_1 \right) \end{aligned} \tag{13}$$

where

$$\lambda_1 = \xi + \Omega\omega + \frac{(n+1)}{2n}\phi\delta\omega\Omega\kappa + F_C N_t + 2\ell f_c \alpha N_t + \ell c_p w_p \Omega \alpha + 2\ell c_e N_t + \ell c_k \Omega + \chi R_C^2 - \frac{\Omega\omega\upsilon}{n} \tag{14}$$
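The compact form of Equation (13) with the constant λ₁ of Equation (14) can be checked against the component-wise profit. The sketch below uses illustrative parameter values, not the paper's data:

```python
# Consistency check of Equation (13) against Equation (14): the closed form
# with lambda_1 must equal the component-wise profit. All numbers are
# illustrative, not the paper's data.

def tau_closed(p, T_C, h, psi, gamma, eta, R_C, lam1):
    """Last line of Equation (13)."""
    D_L = psi - gamma * p + eta * R_C
    return (p * D_L * T_C - 0.5 * h * D_L * T_C ** 2 - lam1) / T_C

psi, gamma, eta, R_C = 220.0, 0.65, 2.0, 0.5
p, T_C, h = 260.0, 6.0, 2.0
xi, omega, chi = 1000.0, 150.0, 800.0
F_C, N_t, ell, f_c, alpha = 200.0, 3, 100.0, 1.0, 0.3
c_p, w_p, c_e, c_k = 1.5, 0.5, 0.03, 0.02
n, phi, delta, kappa, upsilon = 10, 0.01, 0.8, 0.5, 0.05

D_L = psi - gamma * p + eta * R_C
Omega = D_L * T_C                                   # Equation (3)
SR = p * D_L * T_C                                  # Equation (12)
HC = 0.5 * h * D_L * T_C ** 2                       # Equation (5)
PC = Omega * omega                                  # Equation (6)
TC = (F_C * N_t + 2 * ell * f_c * alpha * N_t       # Equation (7)
      + ell * c_p * w_p * Omega * alpha
      + 2 * ell * c_e * N_t + ell * c_k * Omega)
IC = (n + 1) / (2 * n) * phi * delta * omega * Omega * kappa  # Equation (8)
RC = chi * R_C ** 2                                 # Equation (11)
DC = Omega * omega * upsilon / n                    # Equation (10)

lam1 = xi + PC + IC + TC + RC - DC                  # Equation (14)
componentwise = (SR - xi - HC - PC - TC - IC - RC + DC) / T_C
assert abs(componentwise - tau_closed(p, T_C, h, psi, gamma, eta, R_C, lam1)) < 1e-6
```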

#### *3.2. Case II: With Advanced Payment and a Discount for Single Time Payment*

In this case, the retailer must pay the whole purchase cost in advance in a single payment. The scenario is described in Figure 2, which is a modified version of Figure 1.

**Figure 2.** Graphical presentation of the inventory system for Case II.

The supplier offers a *υ*% discount to the retailer as a benefit for a single-time prepayment. In this situation, the retailer may face a capital shortage during the time *κ*; in that case, a loan with some interest *φL*% from a financial institute or other funds can be a suitable option to manage the required capital.

The discounted purchase cost is

$$PC = (1 - \upsilon)\Omega\omega.\tag{15}$$

The associated cost of taking a loan is

$$LC = \phi_L \kappa (1 - \upsilon)\Omega\omega. \tag{16}$$

Hence, the total profit per unit time can be written as:

$$\begin{aligned} \tau_f(p, T_C) &= \frac{1}{T_C}(SR - OC - HC - PC - TC - RC - LC) \\ &= \frac{1}{T_C}\left( pD_L T_C - \xi - \frac{1}{2}h D_L T_C^{\,2} - (1 - \upsilon)\Omega\omega - F_C N_t - 2\ell f_c \alpha N_t - \ell c_p w_p \Omega \alpha - 2\ell c_e N_t - \ell c_k \Omega - \chi R_C^2 - \phi_L \kappa (1 - \upsilon)\Omega\omega \right) \\ &= \frac{1}{T_C}\left( p(\psi - \gamma p + \eta R_C)T_C - \frac{1}{2}h(\psi - \gamma p + \eta R_C)T_C^{\,2} - \lambda_2 \right) \end{aligned} \tag{17}$$

where

$$\lambda_2 = \xi + (1 + \phi_L \kappa)(1 - \upsilon)\Omega\omega + F_C N_t + 2\ell f_c \alpha N_t + \ell c_p w_p \Omega \alpha + 2\ell c_e N_t + \ell c_k \Omega + \chi R_C^2 \tag{18}$$
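The grouping of the discounted purchase cost of Equation (15) with the loan cost of Equation (16) into a single term of Equation (18) can be checked in a few lines (illustrative values only):

```python
# Grouping check for Equation (18): Equation (15) plus Equation (16) equals
# the term (1 + phi_L * kappa)(1 - upsilon) * Omega * omega. Values are
# illustrative, not the paper's data.
phi_L, kappa, upsilon, Omega, omega = 0.03, 0.5, 0.05, 320.0, 150.0
PC = (1 - upsilon) * Omega * omega                   # Equation (15)
LC = phi_L * kappa * (1 - upsilon) * Omega * omega   # Equation (16)
grouped = (1 + phi_L * kappa) * (1 - upsilon) * Omega * omega
assert abs(PC + LC - grouped) < 1e-9
```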

#### *3.3. Case III: Without Advanced Payment*

In this case, the retailer does not pay in advance. Without any advance payment, there is no instalment cost and no discount; thus, Figure 1 is modified into Figure 3. The retailer must make the full payment at the time of product shipment.

Then, the total profit per unit time can be written as:

$$\begin{aligned} \tau_N(p, T_C) &= \frac{1}{T_C}(SR - OC - HC - PC - TC - RC) \\ &= \frac{1}{T_C}\left( pD_L T_C - \xi - \frac{1}{2}h D_L T_C^{\,2} - \Omega\omega - F_C N_t - 2\ell f_c \alpha N_t - \ell c_p w_p \Omega \alpha - 2\ell c_e N_t - \ell c_k \Omega - \chi R_C^2 \right) \\ &= \frac{1}{T_C}\left( p(\psi - \gamma p + \eta R_C)T_C - \frac{1}{2}h(\psi - \gamma p + \eta R_C)T_C^{\,2} - \lambda_3 \right) \end{aligned} \tag{19}$$

where

$$\lambda_3 = \xi + \Omega\omega + F_C N_t + 2\ell f_c \alpha N_t + \ell c_p w_p \Omega \alpha + 2\ell c_e N_t + \ell c_k \Omega + \chi R_C^2 \tag{20}$$

#### **4. Theoretical Development**

Here, the concavity of the profit function is analyzed to show the existence of an optimal solution for each case.

#### *4.1. Case I (with Advanced Payment and a Discount for Instalment Based Payment)*

It is now important to investigate the concavity of the profit function *τ*(*p*, *TC*) in Equation (13) for Case I. For this purpose, one first determines the critical points by differentiating Equation (13) with respect to the two decision variables *p* and *TC* as follows:

$$\frac{\partial \tau}{\partial T_C} = -\frac{1}{T_C^2}\left[ \frac{1}{2}h(\psi - \gamma p + \eta R_C)T_C^{\,2} - \lambda_1 \right] \tag{21}$$

$$\frac{\partial \tau}{\partial p} = \psi - 2\gamma p + \eta R_C + \frac{1}{2}h\gamma T_C \tag{22}$$

The critical points can be determined by setting Equations (21) and (22) to zero and doing some manipulations.

$$T_C^* = \sqrt{\frac{2\lambda_1}{h(\psi - \gamma p + \eta R_C)}} \tag{23}$$

$$p^* = \frac{1}{2\gamma}\left( \psi + \eta R_C + \frac{1}{2}h\gamma T_C \right) \tag{24}$$
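Since Equations (23) and (24) each express one decision variable in terms of the other, a simple fixed-point iteration locates the critical point numerically. This is a sketch under illustrative parameter values (λ₁ treated as a constant, as in the derivation above):

```python
import math

def critical_point(psi, gamma, eta, R_C, h, lam1, p0=200.0, iters=200):
    """Alternate Equations (23) and (24) to locate the critical point
    (p*, T_C*). lam1 is treated as a constant, as in Equations (21)-(22);
    all numbers below are illustrative, not the paper's."""
    p = p0
    T_C = 0.0
    for _ in range(iters):
        T_C = math.sqrt(2 * lam1 / (h * (psi - gamma * p + eta * R_C)))  # Eq. (23)
        p = (psi + eta * R_C + 0.5 * h * gamma * T_C) / (2 * gamma)      # Eq. (24)
    return p, T_C

psi, gamma, eta, R_C, h, lam1 = 220.0, 0.65, 2.0, 0.5, 2.0, 2000.0
p_s, T_s = critical_point(psi, gamma, eta, R_C, h, lam1)

# Both first-order conditions (Equations (21) and (22)) vanish at the fixed point.
dtau_dp = psi - 2 * gamma * p_s + eta * R_C + 0.5 * h * gamma * T_s
dtau_dT = -(0.5 * h * (psi - gamma * p_s + eta * R_C) * T_s ** 2 - lam1) / T_s ** 2
assert abs(dtau_dp) < 1e-6
assert abs(dtau_dT) < 1e-6
```

The iteration converges quickly here because *T_C* depends only weakly on *p*.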

The concavity of the profit function is next discussed with some conditions.

**Proposition 1.** *The profit function τ(p, TC) in Equation (13) is concave with respect to the replenishment time TC if the selling price p remains fixed, and hence it provides a unique optimal* $T_C^*$.

**Proof.** One needs to determine the associated critical points as well as prove the sufficient condition to confirm the concavity of the profit function. The critical point is associated with Equation (23).

$$T_C^* = \sqrt{\frac{2\lambda_1}{h(\psi - \gamma p + \eta R_C)}}$$

Then, differentiating the profit function in Equation (13) twice with respect to *TC*, one finds:

$$\frac{\partial^2 \tau}{\partial T_C^2} = -\frac{2\lambda_1}{T_C^3} \tag{25}$$

Since $\lambda_1 > 0$ and the replenishment time *TC* must be positive, $\partial^2 \tau / \partial T_C^2 < 0$. Thus, the profit function is concave with respect to *TC*, and the critical point becomes the unique optimal point $T_C^*$.

**Proposition 2.** *The profit function τ(p, TC) in Equation (13) is concave with respect to the selling price p if the replenishment time TC remains fixed, and hence it provides a unique optimal p*∗.

**Proof.** The argument is akin to that of Proposition 1; thus, to avoid redundancy, the proof is omitted.

**Proposition 3.** *The profit function τ(p, TC) of the selling price p and replenishment time TC in Equation (13) is strictly pseudo-concave at a unique optimal solution* $(p^*, T_C^*)$.

**Proof.** The Hessian matrix of *τ*(*p*, *TC*) is of order 2 × 2.

$$\Delta = \begin{bmatrix} \dfrac{\partial^2 \tau(p, T_C)}{\partial T_C^2} & \dfrac{\partial^2 \tau(p, T_C)}{\partial T_C \, \partial p} \\[6pt] \dfrac{\partial^2 \tau(p, T_C)}{\partial p \, \partial T_C} & \dfrac{\partial^2 \tau(p, T_C)}{\partial p^2} \end{bmatrix} \tag{26}$$

To prove that *τ*(*p*, *TC*) is strictly pseudo-concave, it is essential to confirm that the Hessian matrix Δ is negative definite. Thus, it is necessary to show that the leading principal minors satisfy $(-1)^k \Delta_k > 0$ for $1 \le k \le 2$; that is, the first principal minor $\Delta_1$ is negative and the second principal minor $\Delta_2$ is positive.

$$\Delta_1 = \left| \frac{\partial^2 \tau(p, T_C)}{\partial T_C^2} \right| = \frac{\partial^2 \tau(p, T_C)}{\partial T_C^2} \tag{27}$$

and

$$\Delta_2 = \begin{vmatrix} \dfrac{\partial^2 \tau(p, T_C)}{\partial T_C^2} & \dfrac{\partial^2 \tau(p, T_C)}{\partial T_C \, \partial p} \\[6pt] \dfrac{\partial^2 \tau(p, T_C)}{\partial p \, \partial T_C} & \dfrac{\partial^2 \tau(p, T_C)}{\partial p^2} \end{vmatrix} = \frac{\partial^2 \tau(p, T_C)}{\partial T_C^2} \frac{\partial^2 \tau(p, T_C)}{\partial p^2} - \frac{\partial^2 \tau(p, T_C)}{\partial p \, \partial T_C} \frac{\partial^2 \tau(p, T_C)}{\partial T_C \, \partial p} \tag{28}$$

Taking the second order partial derivatives of the profit function *τ*(*p*, *TC*) in Equation (13) with respect to *p* and *TC*, one gets

$$\frac{\partial^2 \tau}{\partial T_C^2} = -\frac{2\lambda_1}{T_C^3} \tag{29}$$

$$\frac{\partial^2 \tau}{\partial p^2} = -2\gamma \tag{30}$$

$$\frac{\partial^2 \tau}{\partial T_C \, \partial p} = \frac{1}{2}h\gamma \tag{31}$$

Proposition 1 ensures that the first principal minor $\Delta_1$ is negative at the optimal point $p = p^*$ and $T_C = T_C^*$. It remains to prove that the second principal minor $\Delta_2$ is positive; after some manipulations, one can write:

$$\Delta_2 = \frac{4\gamma\lambda_1}{T_C^3} - \frac{1}{4}h^2\gamma^2. \tag{32}$$

At the optimal point $p = p^*$ and $T_C = T_C^*$,

$$\Delta_2 = \frac{4\gamma\lambda_1}{T_C^{*3}} - \frac{1}{4}h^2\gamma^2. \tag{33}$$

Later, Lemma 1 confirms that $\Delta_2 > 0$.

Thus, the proof of Proposition 3 is complete: *τ*(*p*, *TC*) is strictly pseudo-concave at the unique optimal solution $(p^*, T_C^*)$. Hence, the profit function attains its global maximum at $(p^*, T_C^*)$.

**Lemma 1.** *If the replenishment time satisfies* $T_C < \left(\frac{16\lambda_1}{h^2\gamma}\right)^{\frac{1}{3}}$, *then Equation (32) is positive, which consequently shows that Proposition 3 is valid*.

**Proof.** The replenishment time satisfies $T_C < \left(\frac{16\lambda_1}{h^2\gamma}\right)^{\frac{1}{3}}$, so

$$h^2\gamma < \frac{16\lambda_1}{T_C^3}$$

$$h^2\gamma \, \frac{\gamma}{4} < \frac{16\lambda_1}{T_C^3} \, \frac{\gamma}{4} \quad [\text{since } \gamma > 0]$$

$$\frac{1}{4}h^2\gamma^2 < \frac{4\gamma\lambda_1}{T_C^3}$$

$$\frac{4\gamma\lambda_1}{T_C^3} - \frac{1}{4}h^2\gamma^2 > 0$$

Thus, $\Delta_2 > 0$.
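The threshold in Lemma 1 can also be checked numerically; the sketch below uses illustrative values for γ, *h*, and λ₁, and verifies the sign change of Equation (32) at the threshold:

```python
# Numeric check of Lemma 1 with illustrative values: Delta_2 of Equation (32)
# is positive below the threshold (16*lam1/(h^2*gamma))^(1/3) and negative
# above it.

def delta2(T_C, gamma, h, lam1):
    """Equation (32): second principal minor of the Hessian."""
    return 4 * gamma * lam1 / T_C ** 3 - 0.25 * h ** 2 * gamma ** 2

gamma, h, lam1 = 0.65, 2.0, 2000.0
threshold = (16 * lam1 / (h ** 2 * gamma)) ** (1 / 3)
assert delta2(0.9 * threshold, gamma, h, lam1) > 0
assert delta2(1.1 * threshold, gamma, h, lam1) < 0
```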

#### *4.2. Case II: With Advanced Payment and a Discount for Single Time Payment*

The concavity test for Case II is similar to Case I, so the proof for Case II is not shown to avoid redundancy. From Equations (13) and (17) one has:

$$\tau(p, T_C) = \frac{1}{T_C}\left( p(\psi - \gamma p + \eta R_C)T_C - \frac{1}{2}h(\psi - \gamma p + \eta R_C)T_C^{\,2} - \lambda_1 \right)$$

$$\tau_f(p, T_C) = \frac{1}{T_C}\left( p(\psi - \gamma p + \eta R_C)T_C - \frac{1}{2}h(\psi - \gamma p + \eta R_C)T_C^{\,2} - \lambda_2 \right)$$

where

$$\lambda_1 = \xi + \Omega\omega + \frac{(n+1)}{2n}\phi\delta\omega\Omega\kappa + F_C N_t + 2\ell f_c \alpha N_t + \ell c_p w_p \Omega \alpha + 2\ell c_e N_t + \ell c_k \Omega + \chi R_C^2 - \frac{\Omega\omega\upsilon}{n}$$

$$\lambda_2 = \xi + (1 + \phi_L \kappa)(1 - \upsilon)\Omega\omega + F_C N_t + 2\ell f_c \alpha N_t + \ell c_p w_p \Omega \alpha + 2\ell c_e N_t + \ell c_k \Omega + \chi R_C^2$$

From *τ*(*p*, *TC*) and *τf*(*p*, *TC*), one can easily notice that the terms $\lambda_1$ and $\lambda_2$ are the only difference between these two profit functions. Moreover, these two terms are independent of the decision variables (*p*, *TC*). Thus, there is no change in the decision regarding the concavity of these profit functions. In the numerical examples, the concavity is also presented numerically.

#### *4.3. Case III: Without Advanced Payment*

From Equations (13) and (19), one has:

$$\tau(p, T_C) = \frac{1}{T_C}\left( p(\psi - \gamma p + \eta R_C)T_C - \frac{1}{2}h(\psi - \gamma p + \eta R_C)T_C^{\,2} - \lambda_1 \right)$$

$$\tau_N(p, T_C) = \frac{1}{T_C}\left( p(\psi - \gamma p + \eta R_C)T_C - \frac{1}{2}h(\psi - \gamma p + \eta R_C)T_C^{\,2} - \lambda_3 \right)$$

where

$$\lambda_1 = \xi + \Omega\omega + \frac{(n+1)}{2n}\phi\delta\omega\Omega\kappa + F_C N_t + 2\ell f_c \alpha N_t + \ell c_p w_p \Omega \alpha + 2\ell c_e N_t + \ell c_k \Omega + \chi R_C^2 - \frac{\Omega\omega\upsilon}{n}$$

$$\lambda_3 = \xi + \Omega\omega + F_C N_t + 2\ell f_c \alpha N_t + \ell c_p w_p \Omega \alpha + 2\ell c_e N_t + \ell c_k \Omega + \chi R_C^2$$

The whole scenario of this case is similar to the previous Case II. Therefore, there will be no change in decision-making as in Case II. However, the concavity of the profit function is presented in the numerical example section.

#### **5. Analysis and Discussion**

#### *5.1. Case Study*

The choice of eco-friendly products is a growing trend that is being adopted by millions of people. A new addition in this category is an eco-friendly microwave oven (Figure 4), which draws the attention of business owners and customers. The higher the eco-friendliness, the higher the demand; although the price is sometimes slightly elevated, it satisfies all purposes of customers. A retailer who does not have enough capital can pay part of the purchase cost to the supplier in advance to book the products. The supplier, in return, provides various discounts according to the retailer's payment. A retail shop was visited to fit a real case to our model. The proposed problem was discussed with the shop manager, who was asked to provide actual data accordingly. Those data are used in the later numerical sections to validate the model and to maintain a relationship with the data of a previously published article.

**Figure 4.** A retail shop of microwave oven. (Source: https://upload.wikimedia.org/wikipedia/ commons/7/7e/Microwave\_ovens%2C\_Media\_Markt%2C\_Svagertorp%2C\_Malmo.JPG, accessed on 26 December 2021).

#### *5.2. Numerical Illustration*

Here, we present three examples. We have collected secondary data from different published articles.

**Example 1. (Case III)** *In the first example, carbon emission costs are considered with no payment in advance. For numerical illustration, the following parameters are considered: ordering cost per order placement ξ = \$1000/cycle, market potential ψ = 220, price sensitivity coefficient γ = 0.65, low-carbon preference coefficient η = 2, manufacturer's carbon emission reduction level Rc = 0.5, and carbon emission reduction investment coefficient χ = \$800. The purchase cost per unit is ω = \$150 and the per-unit holding cost h = \$2. Further, the fixed cost per trip Fc = \$200/trip, number of trips Nt = 3, fuel price α = \$0.3/liter, empty-vehicle fuel consumption fc = 1 liter, travelled distance ℓ = 100 km, product weight wp = 0.5 kg, fuel consumption per ton of payload cp = 1.5 liter/ton, carbon emission cost per unit distance ce = \$0.03/km, and carbon emission cost per unit item per unit distance ck = \$0.02/unit/km*.

Using Lingo 19 software with an exact optimization approach, we obtain the optimal solutions: per-unit selling price *p\** = \$260.36, replenishment time $T_C^*$ = 6.21 months, order quantity Ω = 321.61 units, and total profit *τN* = \$3801.423.
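Given the reported optimum, Equation (3) provides a quick cross-check; the sketch below uses the parameters stated in this example and reproduces the reported order quantity to rounding accuracy:

```python
# Cross-check of Example 1's reported optimum against Equation (3), using the
# example's own parameters: the implied order quantity matches the reported
# 321.61 units to rounding accuracy.
psi, gamma, eta, R_C = 220.0, 0.65, 2.0, 0.5
p_star, T_star = 260.36, 6.21
D_L = psi - gamma * p_star + eta * R_C   # demand rate at the optimum
Omega = D_L * T_star                     # Equation (3)
assert abs(Omega - 321.61) < 1.0
```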

If we consider the manufacturer's carbon emission reduction level (*Rc*) and selling price (*p*) as decision variables and cycle time (*TC* = 6.21 months) as constant, then we obtain the optimal solutions: per-unit selling price *p\** = \$260.54, manufacturer's carbon emission reduction level $R_c^*$ = 0.62, order quantity Ω = 322.23 units, and total profit *τN* = \$3803.25.

Again, if we consider the manufacturer's carbon emission reduction level (*Rc*) and cycle time (*TC*) as decision variables and selling price (*p* = \$260.35) as constant, then we obtain the optimal solutions: manufacturer's carbon emission reduction level $R_c^*$ = 0.63, cycle time $T_C^*$ = 6.38 months, order quantity Ω = 332.11 units, and total profit *τN* = \$3803.438.

From Figure 5, one can easily observe that the total profit function is concave in the two decision variables, and the optimum profit is located at the blue dot.

**Figure 5.** Profit function (*τ*) with regard to: (**a**) the selling price (*p*) and cycle time (*TC*); (**b**) the manufacturer's carbon emission reduction level (*Rc*) and selling price (*p*); (**c**) the manufacturer's carbon emission reduction level (*Rc*) and cycle time (*TC*).

When the selling price (*p*), cycle time (*TC*), and manufacturer's carbon emission reduction level (*Rc*) are all decision variables, the optimal solutions are per-unit selling price *p\** = \$260.66, cycle time $T_C^*$ = 6.40 months, manufacturer's carbon emission reduction level $R_c^*$ = 0.638, order quantity Ω = 331.82 units, and total profit *τN* = \$3803.499.

Figure 6 shows that the profit function increases with the selling price (*p*), cycle time (*TC*), and manufacturer's carbon emission reduction level (*Rc*), and becomes optimal at the optimal selling price *p\** = \$260.66, optimal cycle time $T_C^*$ = 6.40 months, and manufacturer's carbon emission reduction level $R_c^*$ = 0.638. After the optimal point, indicated by the green star marker, the profit function decreases although the selling price and the cycle time increase. This behavior also confirms the concavity of the profit function.

**Example 2. (Case II)** *All the parameters are the same as in Example 1. For the single-payment model, the supplier offers υ = 5%. Further, the length of the prepayment period is κ = 0.5 years, and the retailer's interest rate on a loan from a financial institute is φL = 3%*.

We obtain the optimal solutions: per-unit selling price *p\** = \$257.62, replenishment time $T_C^*$ = 6.11 months, order quantity Ω = 327.08 units, and total profit *τf* = \$4083.795.

If we consider the manufacturer's carbon emission reduction level (*Rc*) and selling price (*p*) as decision variables and cycle time (*TC* = 6.12 months) as constant, then we obtain the optimal solutions: per-unit selling price *p\** = \$257.83, manufacturer's carbon emission reduction level $R_c^*$ = 0.63, order quantity Ω = 328.48 units, and total profit *τf* = \$4086.035.

Again, if we consider the manufacturer's carbon emission reduction level (*Rc*) and cycle time (*TC*) as decision variables and selling price (*p* = \$257.62) as constant, then we obtain the optimal solutions: manufacturer's carbon emission reduction level $R_c^*$ = 0.65, cycle time $T_C^*$ = 6.29 months, order quantity Ω = 338.84 units, and total profit *τf* = \$4086.235.

When the selling price (*p*), cycle time (*TC*), and manufacturer's carbon emission reduction level (*Rc*) are all decision variables, the optimal solutions are per-unit selling price *p\** = \$257.96, cycle time $T_C^*$ = 6.31 months, manufacturer's carbon emission reduction level $R_c^*$ = 0.65, order quantity Ω = 338.54 units, and total profit *τf* = \$4086.305.

For this example, Figure 7 confirms the concavity nature of the profit function with respect to the two decision variables.

Figure 8a shows that the profit declines with growing lead time. Figure 8b confirms that a higher discount rate produces a higher profit, and Figure 8c confirms that the total profit declines for a higher interest rate on the loan amount used to collect capital.

**Figure 6.** Profit function (*τ*) regarding (**a**) the selling price (*p*); (**b**) cycle time (*TC*); (**c**) manufacturer's carbon emission reduction level (*Rc*).

**Example 3. (Case I)** *All the parameters are the same as in Example 1. Some additional parameters are as follows: number of equal prepayments before receiving the order quantity n = 10, portion of the total purchase cost paid in advance δ = 0.8, interest rate of capital cost per year φ = 1%, length of time during which the prepayments are paid κ = 0.5 years, and discount rate for prepayment υ = 5%*.

We acquire the optimal solutions: per-unit selling price *p\** = \$276.86, replenishment time $T_C^*$ = 6.98 months, order quantity Ω = 286.35 units, and total profit *τ* = \$2304.672.

If we consider the manufacturer's carbon emission reduction level (*Rc*) and selling price (*p*) as decision variables and cycle time (*TC* = 6.21 months) as constant, then we obtain the optimal solutions: per-unit selling price *p\** = \$276.45, manufacturer's carbon emission reduction level $R_c^*$ = 0.49, order quantity Ω = 256.35 units, and total profit *τ* = \$2300.881.

Again, if we consider the manufacturer's carbon emission reduction level (*Rc*) and cycle time (*TC*) as decision variables and selling price (*p* = \$260.35) as constant, then we obtain the optimal solutions: manufacturer's carbon emission reduction level $R_c^*$ = 0.36, cycle time $T_C^*$ = 6.08 months, order quantity Ω = 312.98 units, and total profit *τ* = \$2134.147.

When the selling price (*p*), cycle time (*TC*), and manufacturer's carbon emission reduction level (*Rc*) are all decision variables, the optimal solutions are per-unit selling price *p\** = \$276.99, cycle time $T_C^*$ = 7.06 months, manufacturer's carbon emission reduction level $R_c^*$ = 0.56, order quantity Ω = 289.92 units, and total profit *τ* = \$2305.004.

For this example, Figure 9 confirms the concavity nature of the profit function with respect to the two decision variables.

**Figure 7.** Profit function (*τ*) with regard to: (**a**) the selling price (*p*) and cycle time (*TC*); (**b**) the manufacturer's carbon emission reduction level (*Rc*) and selling price (*p*); (**c**) the manufacturer's carbon emission reduction level (*Rc*) and cycle time (*TC*).

Figure 10a shows that the profit is higher for a smaller number of instalments and lower for a larger number. As the retailer has to pay interest for instalment-based payment, the cost becomes higher and the profit lower. From Figure 10b, one can confirm that a higher lead time forces a lower profit. Figure 10c confirms that the total profit declines for a higher portion of payment in advance, whereas Figure 10d shows that, for a higher discount rate for instalment payment in advance, the total profit increases.

#### *5.3. Sensitivity Analysis*

Table 2 shows the sensitivity analysis of the present work, from which one can easily observe the robustness of the parameters. For the three cases, the test has been performed for the parameters within the range of −20% to +20%. Some critical observations can be summarized based on the sensitivity table (Table 2):

a. The market potential (*ψ*) is positively correlated with the integrated profit. The selling price is correlated similarly, but the cycle length interacts negatively. One can detect a continuous rise in profit and selling price with growing market potential (*ψ*) for all three cases; the profit becomes maximum for Case II, whereas the selling price and cycle length become minimum.


**Figure 8.** Total profit profile associated with single payment-based payment in advance parameters. (**a**) describes the total profit against different lead times, (**b**) describes the total profit for different discount rates due to instalment-based payment in advance, and (**c**) shows the profit vs. interest rate on the loan amount.

**Figure 9.** Profit function (*τ*) with regard to: (**a**) the selling price (*p*) and cycle time (*TC*); (**b**) the manufacturer's carbon emission reduction level (*Rc*) and selling price (*p*); (**c**) the manufacturer's carbon emission reduction level (*Rc*) and cycle time (*TC*).

**Figure 10.** Total profit profile associated with instalment-based payment-in-advance parameters. (**a**) describes the total profit against the number of instalments, (**b**) shows the total profit for different lead times, (**c**) describes the profit vs. payment in advance portion, and (**d**) describes the total profit for different discount rates due to instalment based payment in advance.


**Table 2.** Sensitivity Analysis.



#### *5.4. Managerial Implications*

The managerial implications of this sustainable inventory management study in terms of pricing strategies, low carbon preferences, suitability of discount policy, and impact of payments in advance are vast:


#### **6. Conclusions**

This paper presents a low-carbon preference inventory model with selling price and carbon-emission-reduction-dependent demand. Some major issues solved through this model are:


Therefore, to maximize profit, this study recommends that retailers respond to the increasing customers' preferences for low carbon by promoting environmentally friendly products. Simultaneously, retailers should attract more customers by setting a lower price by minimizing the number of instalments to take advantage of the discounts offered.

However, this model has limitations in terms of exposition, choice of variables, incorporation of marketing strategies, etc. This model can easily be extended by incorporating trade-credit policy [40,44,45], including some carbon emission regulations [33,46] and taking more than one player, e.g., a vendor–buyer system [47,48]. This study also does not allow for shortages; hence, further research may consider shortages with a full or partial backlog. Moreover, the retailer can dynamically purchase the inventory from the outside supplier to reduce the financial risk and avail the full discount facilities.

**Author Contributions:** Conceptualization, S.S. and A.H.M.M.; methodology, S.S. and A.H.M.M.; software, S.S. and S.M.; validation, S.S., A.H.M.M., A.A., and I.M.H.; writing—original draft preparation, S.S., A.H.M.M., S.M., and Y.D.; writing—review and editing, S.S., A.H.M.M., Y.D., A.A., and I.M.H.; visualization, S.S., S.M., I.M.H. and A.H.M.M. All authors have read and agreed to the published version of the manuscript.

**Funding:** This work was supported by the Research Supporting Project No. (RSP-2021/389), King Saud University, Riyadh, Saudi Arabia.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** All data are given in the manuscript which is used to justify the proposed model.

**Acknowledgments:** We would like to thank the editors of the journal as well as the anonymous reviewers for their valuable suggestions that make the paper stronger and more consistent.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


### *Article* **Generalized** *p***-Convex Fuzzy-Interval-Valued Functions and Inequalities Based upon the Fuzzy-Order Relation**

**Muhammad Bilal Khan 1,\*, Savin Treant,aˇ 2,\* and Hüseyin Budak <sup>3</sup>**


**Abstract:** Convexity is crucial in obtaining many forms of inequalities. As a result, there is a significant link between convexity and integral inequality. Due to the significance of these concepts, the purpose of this study is to introduce a new class of generalized convex interval-valued functions called (*p*,*s*)-convex fuzzy interval-valued functions ((*p*,*s*)-convex *F-I-V-F*s) in the second sense and to establish Hermite–Hadamard (H–H) type inequalities for (*p*,*s*)-convex *F-I-V-F*s using the fuzzy order relation. In addition, we demonstrate that our results include a large class of new and known inequalities for (*p*,*s*)-convex *F-I-V-F*s and their variant forms as special instances. Furthermore, we give useful examples that demonstrate the usefulness of the theory produced in this study. These findings and diverse approaches may pave the way for future research in fuzzy optimization, modeling, and interval-valued functions.

**Keywords:** (*p*,*s*)-convex fuzzy-interval-valued function; fuzzy Riemann integral; Jensen type inequality; Schur type inequality; Hermite–Hadamard type inequality; Hermite–Hadamard–Fejér type inequality

#### **1. Introduction**

A convex function has a convex set as its epigraph; therefore, the theory of inequalities for convex functions falls under the umbrella of convexity. Nonetheless, it is a significant theory in and of itself, as it affects practically all fields of mathematics. Graphical analysis is most often the first topic that necessitates acquaintance with this theory. This is an opportunity to learn about the second derivative test of convexity, which is a useful tool for detecting convexity. The difficulty of identifying the extreme values of functions of many variables, as well as the application of the Hessian as a higher-dimensional generalization of the second derivative, follows. Hölder, Jensen, and Minkowski all made early contributions to convex analysis. The next step is to move on to optimization problems in infinite-dimensional spaces; despite the technical sophistication required to solve such problems, the fundamental concepts are quite similar to those underlying the one-variable situation. Although the relevance of convex analysis is well recognized in optimization theory [1–3], for many contemporary problems in economics and engineering the idea of convexity no longer suffices.

Over the years, remarkable varieties of convexity, such as harmonic convexity [4], quasi convexity [5], Schur convexity [6], strong convexity [7,8], *p*-convexity [9], fuzzy convexity [10,11], fuzzy preinvexity [12], generalized convexity [13], *p*-convexity [14], and so on, have been introduced as generalizations of convex sets and convex functions. A fascinating field of research is the combination of convexity with integral problems. Therefore, several authors have established a great number of equalities or inequalities as applications of convex functions. Representative results include the Gagliardo–Nirenberg-type inequality [15], Hardy-type inequality [16], Ostrowski-type inequality [17], Olsen-type inequality [18],

**Citation:** Khan, M.B.; Treanţă, S.; Budak, H. Generalized *p*-Convex Fuzzy-Interval-Valued Functions and Inequalities Based upon the Fuzzy-Order Relation. *Fractal Fract.* **2022**, *6*, 63. https://doi.org/10.3390/fractalfract6020063

Academic Editor: Ravi P. Agarwal

Received: 30 November 2021 Accepted: 24 January 2022 Published: 26 January 2022


**Copyright:** © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

and, most famously, the H–H inequality [19]. Similarly, many authors have devoted themselves to studying fractional integral inequalities for single-valued and interval-valued functions; see [20–28].

Since the introduction of fuzzy sets and systems in ref. [29], an enormous amount of research has been dedicated to their development in different fields, and they play an important role in the study of a wide class of problems arising in pure mathematics and the applied sciences, including operations research, computer science, management sciences, artificial intelligence, control engineering, and decision sciences. Recently, fuzzy interval analysis and fuzzy interval-valued differential equations have been put forward to deal with the ambiguity originating from insufficient data in some mathematical or computer models that describe real-world phenomena [30–40]. There are some integrals that deal with fuzzy-interval-valued functions (in short, *F-I-V-F*s), where the integrands are *F-I-V-F*s. For instance, Osuna-Gómez et al. [41] and Costa et al. [42] constructed Jensen's integral inequality for *F-I-V-F*s through a Kulisch–Miranker order relation; see [43]. Using the same approach, Costa and Román-Flores also presented Minkowski's and Beckenbach's inequalities, where the integrands are *F-I-V-F*s. This paper is motivated by [42–44], and especially by Costa et al. [45], because they established a relation between elements of fuzzy-interval space and interval space, and introduced a level-wise fuzzy order relation on fuzzy-interval space through a Kulisch–Miranker order relation defined on interval space. For more information related to fuzzy interval calculus and generalized convex *F-I-V-F*s, see [46–61].

Inspired by this ongoing research, we introduce a new class of generalized convex *F-I-V-F*s, known as (*p*,*s*)-convex *F-I-V-F*s. With the help of this class and the fuzzy Riemann integral operator, we establish Jensen, Schur, and fuzzy-interval H–H type inequalities via the fuzzy order relation. Moreover, we show that our results include a wide class of new and known inequalities for (*p*,*s*)-convex *F-I-V-F*s and their variant forms as special cases. Some useful examples are also presented to verify the validity of our main results.

#### **2. Definitions and Basic Results**

Let $\mathbb{K}_C$ and $\mathbb{F}_C(\mathbb{R})$ be the collections of all closed and bounded intervals of $\mathbb{R}$ and of all fuzzy intervals of $\mathbb{R}$, respectively. We use $\mathbb{K}_C^+$ to represent the set of all positive intervals. The collections of all Riemann integrable real-valued functions, Riemann integrable *I-V-F*s, and fuzzy Riemann integrable *F-I-V-F*s over $[t, s]$ are denoted by $\mathcal{R}_{[t,s]}$, $\mathcal{IR}_{[t,s]}$, and $\mathcal{FR}_{[t,s]}$, respectively. For more conceptions on interval-valued functions and fuzzy interval-valued functions, see [36,42–44]. Moreover, we have:

The inclusion "$\subseteq$" means that

$$\xi \subseteq \eta \text{ if and only if } [\xi_*,\ \xi^*] \subseteq [\eta_*,\ \eta^*], \text{ if and only if } \eta_* \le \xi_*,\ \xi^* \le \eta^*, \tag{1}$$

for all $[\xi_*, \xi^*], [\eta_*, \eta^*] \in \mathbb{K}_C$.

**Remark 1 ([43]).** The relation "$\le_I$" defined on $\mathbb{K}_C$ by

$$[\xi_*,\ \xi^*] \le_I [\eta_*,\ \eta^*] \text{ if and only if } \xi_* \le \eta_*,\ \xi^* \le \eta^*, \tag{2}$$

for all $[\xi_*, \xi^*], [\eta_*, \eta^*] \in \mathbb{K}_C$, is an order relation.

**Proposition 1 ([7]).** Let $\mathbb{F}_C(\mathbb{R})$ be the set of fuzzy numbers. If $\xi, \omega \in \mathbb{F}_C(\mathbb{R})$, then the relation "$\preccurlyeq$" defined on $\mathbb{F}_C(\mathbb{R})$ by

$$\xi \preccurlyeq \omega \text{ if and only if } [\xi]^{\phi} \le_I [\omega]^{\phi}, \text{ for all } \phi \in [0, 1], \tag{3}$$

is a partial order relation.

**Theorem 1 ([50]).** Let $U : [t, s] \subset \mathbb{R} \to \mathbb{F}_C(\mathbb{R})$ be an *F-I-V-F* whose $\phi$-levels define the family of *I-V-F*s $U_{\phi} : [t, s] \subset \mathbb{R} \to \mathbb{K}_C$ given by $U_{\phi}(\varkappa) = [U_*(\varkappa, \phi),\ U^*(\varkappa, \phi)]$ for all $\varkappa \in [t, s]$ and all $\phi \in (0, 1]$. Then, $U$ is fuzzy Riemann integrable over $[t, s]$ if and only if $U_*(\varkappa, \phi)$ and $U^*(\varkappa, \phi)$ are both Riemann integrable over $[t, s]$. Moreover, if $U$ is fuzzy Riemann integrable over $[t, s]$, then

$$\left[(FR)\int_{t}^{s} U(\varkappa)\,d\varkappa\right]^{\phi} = \left[(R)\int_{t}^{s} U_*(\varkappa, \phi)\,d\varkappa,\ (R)\int_{t}^{s} U^*(\varkappa, \phi)\,d\varkappa\right] = (IR)\int_{t}^{s} U_{\phi}(\varkappa)\,d\varkappa, \tag{4}$$

for all $\phi \in (0, 1]$.
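Theorem 1 says that the fuzzy Riemann integral can be computed level-wise by Riemann-integrating the two end point functions. As a minimal numerical sketch (not part of the paper; the triangular *F-I-V-F* with level sets $[2\phi\varkappa^p, (4-2\phi)\varkappa^p]$ and the exponent $p = 2$ are our own illustrative choices), the following Python snippet integrates both end points over $[0, 1]$ with a trapezoid rule:

```python
import numpy as np

P = 2.0  # assumed exponent for this illustration

def lower_end(x, phi):
    # assumed lower end point U_*(x, phi) of a triangular F-I-V-F
    return 2.0 * phi * x**P

def upper_end(x, phi):
    # assumed upper end point U^*(x, phi)
    return (4.0 - 2.0 * phi) * x**P

def fuzzy_riemann_level(t, s, phi, n=100_000):
    """Level-wise (FR)-integral of Theorem 1: Riemann-integrate both
    end points over [t, s] and return the resulting interval."""
    xs = np.linspace(t, s, n + 1)
    dx = (s - t) / n
    lo = np.sum((lower_end(xs[:-1], phi) + lower_end(xs[1:], phi)) / 2) * dx
    hi = np.sum((upper_end(xs[:-1], phi) + upper_end(xs[1:], phi)) / 2) * dx
    return lo, hi

lo, hi = fuzzy_riemann_level(0.0, 1.0, phi=0.5)
# With P = 2 the exact level set of the integral is [2*phi/3, (4 - 2*phi)/3].
```

The returned pair is exactly the $\phi$-level of the fuzzy integral, since $\int_0^1 \varkappa^2\,d\varkappa = 1/3$.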

**Definition 1 ([10]).** Let $K$ be a convex set. Then, an *F-I-V-F* $U : K \to \mathbb{F}_C(\mathbb{R})$ is called a convex *F-I-V-F* on $K$ if the inequality

$$U(\zeta\varkappa + (1-\zeta)y) \preccurlyeq \zeta U(\varkappa) \widetilde{+} (1-\zeta) U(y) \tag{5}$$

is valid for all $\varkappa, y \in K$, $\zeta \in [0, 1]$, where $U(\varkappa) \succcurlyeq \widetilde{0}$. If (5) is reversed, then $U$ is called concave on $K$. $U$ is affine if and only if it is both a convex and a concave function.

**Definition 2.** Let $K_p$ be a $p$-convex set and $s \in [0, 1]$. Then, an *F-I-V-F* $U : K_p \to \mathbb{F}_C(\mathbb{R})$ is called a $(p,s)$-convex *F-I-V-F* in the second sense on $K_p$ if

$$U\Big(\big[\zeta\varkappa^p + (1-\zeta)y^p\big]^{\frac{1}{p}}\Big) \preccurlyeq \zeta^s U(\varkappa) \widetilde{+} (1-\zeta)^s U(y), \tag{6}$$

for all $\varkappa, y \in K_p$, $\zeta \in [0, 1]$, where $U(\varkappa) \succcurlyeq \widetilde{0}$. If (6) is reversed, then $U$ is called a $(p,s)$-concave *F-I-V-F* in the second sense on $K_p$. $U$ is $(p,s)$-affine if and only if it is both a $(p,s)$-convex and a $(p,s)$-concave *F-I-V-F* in the second sense.

**Remark 2.** The (*p*,*s*)-convex *F-I-V-F*s in the second sense have some very nice properties similar to convex *F-I-V-F*:


We now discuss some new and known special cases of $(p,s)$-convex *F-I-V-F*s in the second sense:

If $s = 1$, then the $(p,s)$-convex *F-I-V-F* in the second sense reduces to the $p$-convex *F-I-V-F*:

$$U\Big(\big[\zeta\varkappa^p + (1-\zeta)y^p\big]^{\frac{1}{p}}\Big) \preccurlyeq \zeta U(\varkappa) \widetilde{+} (1-\zeta) U(y),\ \forall\ \varkappa, y \in K_p,\ \zeta \in [0, 1]. \tag{7}$$

If $p = 1$, then the $(p,s)$-convex *F-I-V-F* in the second sense reduces to the $s$-convex *F-I-V-F* in the second sense:

$$U(\zeta\varkappa + (1-\zeta)y) \preccurlyeq \zeta^s U(\varkappa) \widetilde{+} (1-\zeta)^s U(y),\ \forall\ \varkappa, y \in K,\ \zeta \in [0, 1],\ s \in [0, 1]. \tag{8}$$

If $p = 1$ and $s = 1$, then the $(p,s)$-convex *F-I-V-F* in the second sense reduces to the convex *F-I-V-F*:

$$U(\zeta\varkappa + (1-\zeta)y) \preccurlyeq \zeta U(\varkappa) \widetilde{+} (1-\zeta) U(y),\ \forall\ \varkappa, y \in K,\ \zeta \in [0, 1]. \tag{9}$$

**Theorem 2.** Let $K_p$ be a $p$-convex set, and let $U : K_p \to \mathbb{F}_C(\mathbb{R})$ be an *F-I-V-F* whose $\phi$-levels define the family of *I-V-F*s $U_{\phi} : K_p \subset \mathbb{R} \to \mathbb{K}_C^+ \subset \mathbb{K}_C$ given by

$$U_{\phi}(\varkappa) = [U_*(\varkappa, \phi),\ U^*(\varkappa, \phi)], \tag{10}$$

for all $\varkappa \in K_p$ and all $\phi \in [0, 1]$. Then, $U$ is a $(p,s)$-convex *F-I-V-F* in the second sense on $K_p$ if and only if, for all $\phi \in [0, 1]$, $U_*(\varkappa, \phi)$ and $U^*(\varkappa, \phi)$ are both $(p,s)$-convex functions in the second sense.

**Proof.** Assume that, for each $\phi \in [0, 1]$, $U_*(\varkappa, \phi)$ and $U^*(\varkappa, \phi)$ are $(p,s)$-convex functions in the second sense on $K_p$. Then, from Equation (6), we have

$$U_*\Big(\big[\zeta\varkappa^p + (1-\zeta)y^p\big]^{\frac{1}{p}}, \phi\Big) \le \zeta^s U_*(\varkappa, \phi) + (1-\zeta)^s U_*(y, \phi),\ \forall\ \varkappa, y \in K_p,\ \zeta \in [0, 1],$$

and

$$U^*\Big(\big[\zeta\varkappa^p + (1-\zeta)y^p\big]^{\frac{1}{p}}, \phi\Big) \le \zeta^s U^*(\varkappa, \phi) + (1-\zeta)^s U^*(y, \phi),\ \forall\ \varkappa, y \in K_p,\ \zeta \in [0, 1].$$

Then, by Equation (10), we obtain

$$\begin{split} U_{\phi}\Big(\big[\zeta\varkappa^p + (1-\zeta)y^p\big]^{\frac{1}{p}}\Big) &= \Big[U_*\Big(\big[\zeta\varkappa^p + (1-\zeta)y^p\big]^{\frac{1}{p}}, \phi\Big),\ U^*\Big(\big[\zeta\varkappa^p + (1-\zeta)y^p\big]^{\frac{1}{p}}, \phi\Big)\Big] \\ &\le_I [\zeta^s U_*(\varkappa, \phi),\ \zeta^s U^*(\varkappa, \phi)] + \big[(1-\zeta)^s U_*(y, \phi),\ (1-\zeta)^s U^*(y, \phi)\big], \end{split}$$

that is,

$$U\Big(\big[\zeta\varkappa^p + (1-\zeta)y^p\big]^{\frac{1}{p}}\Big) \preccurlyeq \zeta^s U(\varkappa) \widetilde{+} (1-\zeta)^s U(y),\ \forall\ \varkappa, y \in K_p,\ \zeta \in [0, 1].$$

Hence, $U$ is a $(p,s)$-convex *F-I-V-F* in the second sense on $K_p$.

Conversely, let $U$ be a $(p,s)$-convex *F-I-V-F* in the second sense on $K_p$. Then, for all $\varkappa, y \in K_p$ and $\zeta \in [0, 1]$, we have

$$U\Big(\big[\zeta\varkappa^p + (1-\zeta)y^p\big]^{\frac{1}{p}}\Big) \preccurlyeq \zeta^s U(\varkappa) \widetilde{+} (1-\zeta)^s U(y).$$

Therefore, from Equation (10), we have

$$U_{\phi}\Big(\big[\zeta\varkappa^p + (1-\zeta)y^p\big]^{\frac{1}{p}}\Big) = \Big[U_*\Big(\big[\zeta\varkappa^p + (1-\zeta)y^p\big]^{\frac{1}{p}}, \phi\Big),\ U^*\Big(\big[\zeta\varkappa^p + (1-\zeta)y^p\big]^{\frac{1}{p}}, \phi\Big)\Big].$$

Again, from Equation (10), we obtain

$$\zeta^s U_{\phi}(\varkappa) \widetilde{+} (1-\zeta)^s U_{\phi}(y) = [\zeta^s U_*(\varkappa, \phi),\ \zeta^s U^*(\varkappa, \phi)] + \big[(1-\zeta)^s U_*(y, \phi),\ (1-\zeta)^s U^*(y, \phi)\big].$$

Then, by the $(p,s)$-convexity of $U$ in the second sense, we have

$$U_*\Big(\big[\zeta\varkappa^p + (1-\zeta)y^p\big]^{\frac{1}{p}}, \phi\Big) \le \zeta^s U_*(\varkappa, \phi) + (1-\zeta)^s U_*(y, \phi),$$

and

$$U^*\Big(\big[\zeta\varkappa^p + (1-\zeta)y^p\big]^{\frac{1}{p}}, \phi\Big) \le \zeta^s U^*(\varkappa, \phi) + (1-\zeta)^s U^*(y, \phi),$$

for each $\phi \in [0, 1]$. Hence, the result follows.

**Remark 3.** On the basis of Theorem 2, we consider the following special situations:



**Example 1.** We consider the *F-I-V-F* $U : [0, 1] \to \mathbb{F}_C(\mathbb{R})$ defined by

$$U(\varkappa)(\sigma) = \begin{cases} \dfrac{\sigma}{2\varkappa^p} & \sigma \in [0,\ 2\varkappa^p], \\[4pt] \dfrac{4\varkappa^p - \sigma}{2\varkappa^p} & \sigma \in (2\varkappa^p,\ 4\varkappa^p], \\[4pt] 0 & \text{otherwise.} \end{cases} \tag{11}$$

Then, for each $\phi \in [0, 1]$, we have $U_{\phi}(\varkappa) = [2\phi\varkappa^p,\ (4 - 2\phi)\varkappa^p]$. The end point functions $U_*(\varkappa, \phi)$ and $U^*(\varkappa, \phi)$ are both $(p,s)$-convex functions in the second sense for each $\phi \in [0, 1]$ and $s \in [0, 1]$. Hence, $U(\varkappa)$ is a $(p,s)$-convex *F-I-V-F* in the second sense.

#### **3. Discrete Inequalities for** (*p*,*s*)**-Convex** *F-I-V-F* **in the Second Sense**

We first establish the following result:

**Theorem 3.** (Discrete Jensen type inequality for $(p,s)$-convex *F-I-V-F*s) Let $\omega_j \in \mathbb{R}^+$, $t_j \in [t, s]$ $(j = 1, 2, 3, \ldots, k,\ k \ge 2)$, and let $U : [t, s] \to \mathbb{F}_C(\mathbb{R})$ be a $(p,s)$-convex *F-I-V-F* whose $\phi$-levels define the family of *I-V-F*s $U_{\phi} : [t, s] \subset \mathbb{R} \to \mathbb{K}_C^+$ given by $U_{\phi}(\varkappa) = [U_*(\varkappa, \phi),\ U^*(\varkappa, \phi)]$ for all $\varkappa \in [t, s]$ and all $\phi \in [0, 1]$. Then,

$$U\left(\left[\frac{1}{W_k}\sum_{j=1}^{k} \omega_j t_j^{\,p}\right]^{\frac{1}{p}}\right) \preccurlyeq \sum_{j=1}^{k} \left(\frac{\omega_j}{W_k}\right)^s U(t_j), \tag{12}$$

where $W_k = \sum_{j=1}^{k} \omega_j$. If $U$ is a $(p,s)$-concave *F-I-V-F*, then inequality (12) is reversed.

**Proof.** When $k = 2$, inequality (12) is true. Suppose that inequality (12) is true for $k = n - 1$; then,

$$U\left(\left[\frac{1}{W_{n-1}}\sum_{j=1}^{n-1} \omega_j t_j^{\,p}\right]^{\frac{1}{p}}\right) \preccurlyeq \sum_{j=1}^{n-1} \left(\frac{\omega_j}{W_{n-1}}\right)^s U(t_j).$$

Now, let us prove that inequality (12) holds for $k = n$. We write

$$U\left(\left[\frac{1}{W_n}\sum_{j=1}^{n} \omega_j t_j^{\,p}\right]^{\frac{1}{p}}\right) = U\left(\left[\frac{W_{n-2}}{W_n}\frac{1}{W_{n-2}}\sum_{j=1}^{n-2} \omega_j t_j^{\,p} + \frac{\omega_{n-1}+\omega_n}{W_n}\left(\frac{\omega_{n-1}}{\omega_{n-1}+\omega_n} t_{n-1}^{\,p} + \frac{\omega_n}{\omega_{n-1}+\omega_n} t_n^{\,p}\right)\right]^{\frac{1}{p}}\right).$$

Therefore, for each *ϕ* ∈ [0, 1], we have

$$\begin{split} & U_*\left(\left[\frac{1}{W_n}\sum_{j=1}^{n} \omega_j t_j^{\,p}\right]^{\frac{1}{p}}, \phi\right) \\ &= U_*\left(\left[\frac{W_{n-2}}{W_n}\frac{1}{W_{n-2}}\sum_{j=1}^{n-2} \omega_j t_j^{\,p} + \frac{\omega_{n-1}+\omega_n}{W_n}\left(\frac{\omega_{n-1}}{\omega_{n-1}+\omega_n} t_{n-1}^{\,p} + \frac{\omega_n}{\omega_{n-1}+\omega_n} t_n^{\,p}\right)\right]^{\frac{1}{p}}, \phi\right) \\ &\le \sum_{j=1}^{n-2}\left(\frac{\omega_j}{W_n}\right)^s U_*(t_j, \phi) + \left(\frac{\omega_{n-1}+\omega_n}{W_n}\right)^s U_*\left(\left[\frac{\omega_{n-1}}{\omega_{n-1}+\omega_n} t_{n-1}^{\,p} + \frac{\omega_n}{\omega_{n-1}+\omega_n} t_n^{\,p}\right]^{\frac{1}{p}}, \phi\right) \\ &\le \sum_{j=1}^{n-2}\left(\frac{\omega_j}{W_n}\right)^s U_*(t_j, \phi) + \left(\frac{\omega_{n-1}+\omega_n}{W_n}\right)^s\left[\left(\frac{\omega_{n-1}}{\omega_{n-1}+\omega_n}\right)^s U_*(t_{n-1}, \phi) + \left(\frac{\omega_n}{\omega_{n-1}+\omega_n}\right)^s U_*(t_n, \phi)\right] \\ &= \sum_{j=1}^{n-2}\left(\frac{\omega_j}{W_n}\right)^s U_*(t_j, \phi) + \left(\frac{\omega_{n-1}}{W_n}\right)^s U_*(t_{n-1}, \phi) + \left(\frac{\omega_n}{W_n}\right)^s U_*(t_n, \phi) \\ &= \sum_{j=1}^{n}\left(\frac{\omega_j}{W_n}\right)^s U_*(t_j, \phi), \end{split}$$

and, analogously,

$$U^*\left(\left[\frac{1}{W_n}\sum_{j=1}^{n} \omega_j t_j^{\,p}\right]^{\frac{1}{p}}, \phi\right) \le \sum_{j=1}^{n}\left(\frac{\omega_j}{W_n}\right)^s U^*(t_j, \phi).$$

From this, we have

$$\left[U_*\left(\left[\frac{1}{W_n}\sum_{j=1}^{n} \omega_j t_j^{\,p}\right]^{\frac{1}{p}}, \phi\right),\ U^*\left(\left[\frac{1}{W_n}\sum_{j=1}^{n} \omega_j t_j^{\,p}\right]^{\frac{1}{p}}, \phi\right)\right] \le_I \left[\sum_{j=1}^{n}\left(\frac{\omega_j}{W_n}\right)^s U_*(t_j, \phi),\ \sum_{j=1}^{n}\left(\frac{\omega_j}{W_n}\right)^s U^*(t_j, \phi)\right],$$

that is,

$$U\left(\left[\frac{1}{W_n}\sum_{j=1}^{n} \omega_j t_j^{\,p}\right]^{\frac{1}{p}}\right) \preccurlyeq \sum_{j=1}^{n}\left(\frac{\omega_j}{W_n}\right)^s U(t_j),$$

and the result follows.
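Inequality (12) can be sanity-checked level-wise on random data. A minimal sketch, assuming the $(p,s)$-convex end point $U_*(\varkappa, 1) = 2\varkappa^p$ from Example 1 and our own sample values $p = 2$, $s = 0.5$:

```python
import numpy as np

P, S = 2.0, 0.5

def f(x):
    # sample (p, s)-convex end point function, U_*(x, 1) = 2*x^p
    return 2.0 * x**P

rng = np.random.default_rng(0)
jensen_holds = True
for _ in range(200):
    t = rng.uniform(0.01, 2.0, size=5)   # points t_j in the domain
    w = rng.uniform(0.1, 1.0, size=5)    # positive weights omega_j
    W = w.sum()
    # left side: f evaluated at the weighted p-power mean of the t_j
    lhs = f(((w * t**P).sum() / W) ** (1 / P))
    # right side: s-powered normalized weights against f(t_j)
    rhs = ((w / W)**S * f(t)).sum()
    jensen_holds = jensen_holds and (lhs <= rhs + 1e-9)
```

The inequality holds here because $(\omega_j/W)^s \ge \omega_j/W$ whenever $\omega_j/W \le 1$ and $s \in [0, 1]$.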

If *ω*<sup>1</sup> = *ω*<sup>2</sup> = *ω*<sup>3</sup> = ··· = *ω<sup>k</sup>* = 1, then Theorem 3 reduces to the following result:

**Corollary 1.** Let $s \in [0, 1]$, $t_j \in [t, s]$ $(j = 1, 2, 3, \ldots, k,\ k \ge 2)$, and let $U : [t, s] \to \mathbb{F}_C(\mathbb{R})$ be a $(p,s)$-convex *F-I-V-F* whose $\phi$-levels define the family of *I-V-F*s $U_{\phi} : [t, s] \subset \mathbb{R} \to \mathbb{K}_C^+$ given by $U_{\phi}(\varkappa) = [U_*(\varkappa, \phi),\ U^*(\varkappa, \phi)]$ for all $\varkappa \in [t, s]$ and all $\phi \in [0, 1]$. Then,

$$U\left(\left[\frac{1}{k}\sum_{j=1}^{k} t_j^{\,p}\right]^{\frac{1}{p}}\right) \preccurlyeq \sum_{j=1}^{k} \left(\frac{1}{k}\right)^s U(t_j). \tag{13}$$

If $U$ is a $(p,s)$-concave *F-I-V-F*, then inequality (13) is reversed.

The next Theorem 4 gives the Schur-type inequality for (*p*,*s*)-convex *F-I-V-F*s.

**Theorem 4.** (Discrete Schur type inequality for $(p,s)$-convex *F-I-V-F*s) Let $s \in [0, 1]$, and let $U : [t, s] \to \mathbb{F}_C(\mathbb{R})$ be a $(p,s)$-convex *F-I-V-F* whose $\phi$-levels define the family of *I-V-F*s $U_{\phi} : [t, s] \subset \mathbb{R} \to \mathbb{K}_C^+$ given by $U_{\phi}(\varkappa) = [U_*(\varkappa, \phi),\ U^*(\varkappa, \phi)]$ for all $\varkappa \in [t, s]$ and all $\phi \in [0, 1]$. If $t_1, t_2, t_3 \in [t, s]$ are such that $t_1 < t_2 < t_3$ and $t_3^{\,p} - t_1^{\,p},\ t_3^{\,p} - t_2^{\,p},\ t_2^{\,p} - t_1^{\,p} \in [0, 1]$, then

$$(t_3^{\,p} - t_1^{\,p})^s\, U(t_2) \preccurlyeq (t_3^{\,p} - t_2^{\,p})^s\, U(t_1) \widetilde{+} (t_2^{\,p} - t_1^{\,p})^s\, U(t_3). \tag{14}$$

If U is a (*p*,*s*)-concave *F-I-V-F*, then inequality Equation (14) is reversed.

**Proof.** Since $t_1 < t_2 < t_3$, we have $(t_3^{\,p} - t_1^{\,p})^s > 0$. Then, by hypothesis, we have

$$\left(\frac{t_3^{\,p} - t_2^{\,p}}{t_3^{\,p} - t_1^{\,p}}\right)^s = \frac{(t_3^{\,p} - t_2^{\,p})^s}{(t_3^{\,p} - t_1^{\,p})^s} \quad \text{and} \quad \left(\frac{t_2^{\,p} - t_1^{\,p}}{t_3^{\,p} - t_1^{\,p}}\right)^s = \frac{(t_2^{\,p} - t_1^{\,p})^s}{(t_3^{\,p} - t_1^{\,p})^s}.$$

Consider $\zeta = \frac{t_3^{\,p} - t_2^{\,p}}{t_3^{\,p} - t_1^{\,p}}$; then $t_2^{\,p} = \zeta t_1^{\,p} + (1-\zeta)t_3^{\,p}$. Since $U$ is a $(p,s)$-convex *F-I-V-F*, we have

$$U(t_2) \preccurlyeq \left(\frac{t_3^{\,p} - t_2^{\,p}}{t_3^{\,p} - t_1^{\,p}}\right)^s U(t_1) \widetilde{+} \left(\frac{t_2^{\,p} - t_1^{\,p}}{t_3^{\,p} - t_1^{\,p}}\right)^s U(t_3).$$

Therefore, for each *ϕ* ∈ [0, 1], we have

$$\begin{array}{l} U_*(t_2, \phi) \le \left(\dfrac{t_3^{\,p} - t_2^{\,p}}{t_3^{\,p} - t_1^{\,p}}\right)^s U_*(t_1, \phi) + \left(\dfrac{t_2^{\,p} - t_1^{\,p}}{t_3^{\,p} - t_1^{\,p}}\right)^s U_*(t_3, \phi), \\[8pt] U^*(t_2, \phi) \le \left(\dfrac{t_3^{\,p} - t_2^{\,p}}{t_3^{\,p} - t_1^{\,p}}\right)^s U^*(t_1, \phi) + \left(\dfrac{t_2^{\,p} - t_1^{\,p}}{t_3^{\,p} - t_1^{\,p}}\right)^s U^*(t_3, \phi), \end{array} \tag{15}$$

$$\begin{array}{l} U_*(t_2, \phi) \le \dfrac{(t_3^{\,p} - t_2^{\,p})^s}{(t_3^{\,p} - t_1^{\,p})^s}\, U_*(t_1, \phi) + \dfrac{(t_2^{\,p} - t_1^{\,p})^s}{(t_3^{\,p} - t_1^{\,p})^s}\, U_*(t_3, \phi), \\[8pt] U^*(t_2, \phi) \le \dfrac{(t_3^{\,p} - t_2^{\,p})^s}{(t_3^{\,p} - t_1^{\,p})^s}\, U^*(t_1, \phi) + \dfrac{(t_2^{\,p} - t_1^{\,p})^s}{(t_3^{\,p} - t_1^{\,p})^s}\, U^*(t_3, \phi). \end{array} \tag{16}$$

From Equation (16), we have

$$\begin{cases} (t_3^{\,p} - t_1^{\,p})^s\, U_*(t_2, \phi) \le (t_3^{\,p} - t_2^{\,p})^s\, U_*(t_1, \phi) + (t_2^{\,p} - t_1^{\,p})^s\, U_*(t_3, \phi), \\[4pt] (t_3^{\,p} - t_1^{\,p})^s\, U^*(t_2, \phi) \le (t_3^{\,p} - t_2^{\,p})^s\, U^*(t_1, \phi) + (t_2^{\,p} - t_1^{\,p})^s\, U^*(t_3, \phi), \end{cases}$$

that is

$$\begin{split} & \big[(t_3^{\,p} - t_1^{\,p})^s\, U_*(t_2, \phi),\ (t_3^{\,p} - t_1^{\,p})^s\, U^*(t_2, \phi)\big] \\ &\le_I \big[(t_3^{\,p} - t_2^{\,p})^s\, U_*(t_1, \phi) + (t_2^{\,p} - t_1^{\,p})^s\, U_*(t_3, \phi),\ (t_3^{\,p} - t_2^{\,p})^s\, U^*(t_1, \phi) + (t_2^{\,p} - t_1^{\,p})^s\, U^*(t_3, \phi)\big]. \end{split}$$

Hence,

$$(t_3^{\,p} - t_1^{\,p})^s\, U(t_2) \preccurlyeq (t_3^{\,p} - t_2^{\,p})^s\, U(t_1) \widetilde{+} (t_2^{\,p} - t_1^{\,p})^s\, U(t_3).$$
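The Schur type inequality (14) can also be checked level-wise on random ordered triples. A minimal sketch, again assuming the sample end point $2\varkappa^p$ (from Example 1 with $\phi = 1$) and our own values $p = 2$, $s = 0.5$; drawing the points from $[0, 1]$ keeps all the differences $t_3^{\,p} - t_1^{\,p}$, etc., inside $[0, 1]$ as the theorem requires:

```python
import numpy as np

P, S = 2.0, 0.5

def f(x):
    # sample (p, s)-convex end point function, as in Example 1 with phi = 1
    return 2.0 * x**P

rng = np.random.default_rng(1)
schur_holds = True
for _ in range(500):
    t1, t2, t3 = np.sort(rng.uniform(0.0, 1.0, size=3))
    if not (t1 < t2 < t3):
        continue  # the theorem assumes strictly ordered points
    lhs = (t3**P - t1**P)**S * f(t2)
    rhs = (t3**P - t2**P)**S * f(t1) + (t2**P - t1**P)**S * f(t3)
    schur_holds = schur_holds and (lhs <= rhs + 1e-9)
```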

A refinement of the Jensen type inequality for $(p,s)$-convex *F-I-V-F*s is given in the following theorem.

**Theorem 5.** Let $s \in [0, 1]$, $\omega_j \in \mathbb{R}^+$, $t_j \in [t, s]$ $(j = 1, 2, 3, \ldots, k,\ k \ge 2)$, and let $U : [t, s] \to \mathbb{F}_C(\mathbb{R})$ be a $(p,s)$-convex *F-I-V-F* whose $\phi$-levels define the family of *I-V-F*s $U_{\phi} : [t, s] \subset \mathbb{R} \to \mathbb{K}_C^+$ given by $U_{\phi}(\varkappa) = [U_*(\varkappa, \phi),\ U^*(\varkappa, \phi)]$ for all $\varkappa \in [t, s]$ and all $\phi \in [0, 1]$. If $[L, \mathcal{U}] \subseteq [t, s]$ and $t_j \in [L, \mathcal{U}]$, then

$$\sum_{j=1}^{k} \left(\frac{\omega_j}{W_k}\right)^s U(t_j) \preccurlyeq \sum_{j=1}^{k} \left(\left(\frac{\mathcal{U}^p - t_j^{\,p}}{\mathcal{U}^p - L^p}\right)^s \left(\frac{\omega_j}{W_k}\right)^s U(L) + \left(\frac{t_j^{\,p} - L^p}{\mathcal{U}^p - L^p}\right)^s \left(\frac{\omega_j}{W_k}\right)^s U(\mathcal{U})\right), \tag{17}$$

where $W_k = \sum_{j=1}^{k} \omega_j$. If $U$ is a $(p,s)$-concave *F-I-V-F*, then inequality (17) is reversed.

**Proof.** Consider $t_j$ such that $L < t_j < \mathcal{U}$ $(j = 1, 2, 3, \ldots, k)$. Then, by hypothesis and inequality (15), we have

$$U(t_j) \preccurlyeq \left(\frac{\mathcal{U}^p - t_j^{\,p}}{\mathcal{U}^p - L^p}\right)^s U(L) \widetilde{+} \left(\frac{t_j^{\,p} - L^p}{\mathcal{U}^p - L^p}\right)^s U(\mathcal{U}).$$

Therefore, for each *ϕ* ∈ [0, 1], we have

$$\begin{array}{l} U_*(t_j, \phi) \le \left(\dfrac{\mathcal{U}^p - t_j^{\,p}}{\mathcal{U}^p - L^p}\right)^s U_*(L, \phi) + \left(\dfrac{t_j^{\,p} - L^p}{\mathcal{U}^p - L^p}\right)^s U_*(\mathcal{U}, \phi), \\[8pt] U^*(t_j, \phi) \le \left(\dfrac{\mathcal{U}^p - t_j^{\,p}}{\mathcal{U}^p - L^p}\right)^s U^*(L, \phi) + \left(\dfrac{t_j^{\,p} - L^p}{\mathcal{U}^p - L^p}\right)^s U^*(\mathcal{U}, \phi). \end{array}$$

The above inequality can be written as

$$\begin{array}{l} \left(\dfrac{\omega_j}{W_k}\right)^s U_*(t_j, \phi) \le \left(\dfrac{\mathcal{U}^p - t_j^{\,p}}{\mathcal{U}^p - L^p}\right)^s \left(\dfrac{\omega_j}{W_k}\right)^s U_*(L, \phi) + \left(\dfrac{t_j^{\,p} - L^p}{\mathcal{U}^p - L^p}\right)^s \left(\dfrac{\omega_j}{W_k}\right)^s U_*(\mathcal{U}, \phi), \\[8pt] \left(\dfrac{\omega_j}{W_k}\right)^s U^*(t_j, \phi) \le \left(\dfrac{\mathcal{U}^p - t_j^{\,p}}{\mathcal{U}^p - L^p}\right)^s \left(\dfrac{\omega_j}{W_k}\right)^s U^*(L, \phi) + \left(\dfrac{t_j^{\,p} - L^p}{\mathcal{U}^p - L^p}\right)^s \left(\dfrac{\omega_j}{W_k}\right)^s U^*(\mathcal{U}, \phi). \end{array} \tag{18}$$

Taking the sum of all inequalities (18) for *j* = 1, 2, 3, . . . , *k*, we have

$$\begin{array}{l} \displaystyle\sum_{j=1}^{k}\left(\frac{\omega_j}{W_k}\right)^s U_*(t_j, \phi) \le \sum_{j=1}^{k}\left(\left(\frac{\mathcal{U}^p - t_j^{\,p}}{\mathcal{U}^p - L^p}\right)^s \left(\frac{\omega_j}{W_k}\right)^s U_*(L, \phi) + \left(\frac{t_j^{\,p} - L^p}{\mathcal{U}^p - L^p}\right)^s \left(\frac{\omega_j}{W_k}\right)^s U_*(\mathcal{U}, \phi)\right), \\[10pt] \displaystyle\sum_{j=1}^{k}\left(\frac{\omega_j}{W_k}\right)^s U^*(t_j, \phi) \le \sum_{j=1}^{k}\left(\left(\frac{\mathcal{U}^p - t_j^{\,p}}{\mathcal{U}^p - L^p}\right)^s \left(\frac{\omega_j}{W_k}\right)^s U^*(L, \phi) + \left(\frac{t_j^{\,p} - L^p}{\mathcal{U}^p - L^p}\right)^s \left(\frac{\omega_j}{W_k}\right)^s U^*(\mathcal{U}, \phi)\right), \end{array}$$

that is

$$\begin{aligned}
\sum_{j=1}^{k}\left(\frac{\omega_j}{\mathcal{W}_k}\right)^{s}\mathfrak{U}_{\varphi}\left(\mathrm{t}_j\right) &= \left[\sum_{j=1}^{k}\left(\frac{\omega_j}{\mathcal{W}_k}\right)^{s}\mathfrak{U}_{*}\left(\mathrm{t}_j, \varphi\right),\ \sum_{j=1}^{k}\left(\frac{\omega_j}{\mathcal{W}_k}\right)^{s}\mathfrak{U}^{*}\left(\mathrm{t}_j, \varphi\right)\right] \\
&\leq_{I} \sum_{j=1}^{k}\left(\frac{\mathcal{U}^p-\mathrm{t}_j^p}{\mathcal{U}^p-L^p}\right)^{s}\left(\frac{\omega_j}{\mathcal{W}_k}\right)^{s}\left[\mathfrak{U}_{*}(L, \varphi),\ \mathfrak{U}^{*}(L, \varphi)\right] + \sum_{j=1}^{k}\left(\frac{\mathrm{t}_j^p-L^p}{\mathcal{U}^p-L^p}\right)^{s}\left(\frac{\omega_j}{\mathcal{W}_k}\right)^{s}\left[\mathfrak{U}_{*}(\mathcal{U}, \varphi),\ \mathfrak{U}^{*}(\mathcal{U}, \varphi)\right] \\
&= \sum_{j=1}^{k}\left(\frac{\mathcal{U}^p-\mathrm{t}_j^p}{\mathcal{U}^p-L^p}\right)^{s}\left(\frac{\omega_j}{\mathcal{W}_k}\right)^{s}\mathfrak{U}_{\varphi}(L) + \sum_{j=1}^{k}\left(\frac{\mathrm{t}_j^p-L^p}{\mathcal{U}^p-L^p}\right)^{s}\left(\frac{\omega_j}{\mathcal{W}_k}\right)^{s}\mathfrak{U}_{\varphi}(\mathcal{U}).
\end{aligned}$$

Thus,

$$\sum_{j=1}^{k}\left(\frac{\omega_j}{\mathcal{W}_k}\right)^{s}\mathfrak{U}(\mathrm{t}_j) \preccurlyeq \sum_{j=1}^{k}\left(\left(\frac{\mathcal{U}^p-\mathrm{t}_j^p}{\mathcal{U}^p-L^p}\right)^{s}\left(\frac{\omega_j}{\mathcal{W}_k}\right)^{s}\mathfrak{U}(L) + \left(\frac{\mathrm{t}_j^p-L^p}{\mathcal{U}^p-L^p}\right)^{s}\left(\frac{\omega_j}{\mathcal{W}_k}\right)^{s}\mathfrak{U}(\mathcal{U})\right),$$

and this completes the proof.

We now consider some special cases of Theorems 3 and 5.

If $\mathfrak{U}_{*}(\varkappa, \varphi) = \mathfrak{U}^{*}(\varkappa, \varphi)$, then Theorems 3 and 5 reduce to the following results:

**Corollary 2 ([21]).** (Jensen inequality for (*p*,*s*)-convex functions) Let $s \in [0, 1]$, $\omega_j \in \mathbb{R}_{+}$, $\mathrm{t}_j \in [\mathrm{t}, \mathrm{s}]$ $(j = 1, 2, 3, \ldots, k,\ k \geq 2)$, and let $\mathfrak{U} : [\mathrm{t}, \mathrm{s}] \to \mathbb{R}_{+}$ be a non-negative real-valued function. If $\mathfrak{U}$ is a (*p*,*s*)-convex function, then

$$\mathfrak{U}\left(\left[\frac{1}{\mathcal{W}_k}\sum_{j=1}^k \omega_j \mathrm{t}_j^{p}\right]^{\frac{1}{p}}\right) \le \sum_{j=1}^k \left(\frac{\omega_j}{\mathcal{W}_k}\right)^s \mathfrak{U}(\mathrm{t}_j),\tag{19}$$

where $\mathcal{W}_k = \sum_{j=1}^{k}\omega_j$. If $\mathfrak{U}$ is a (*p*,*s*)-concave function, then inequality (19) is reversed.

**Corollary 3.** Let $s \in [0, 1]$, $\omega_j \in \mathbb{R}_{+}$, $\mathrm{t}_j \in [\mathrm{t}, \mathrm{s}]$ $(j = 1, 2, 3, \ldots, k,\ k \geq 2)$, and let $\mathfrak{U} : [\mathrm{t}, \mathrm{s}] \to \mathbb{R}_{+}$ be a non-negative real-valued function. If $\mathfrak{U}$ is a (*p*,*s*)-convex function and $\mathrm{t}_1, \mathrm{t}_2, \ldots, \mathrm{t}_k \in (L, \mathcal{U}) \subseteq [\mathrm{t}, \mathrm{s}]$, then

$$\sum_{j=1}^{k}\left(\frac{\omega_j}{\mathcal{W}_k}\right)^{s}\mathfrak{U}(\mathrm{t}_j) \leq \sum_{j=1}^{k}\left(\left(\frac{\mathcal{U}^p-\mathrm{t}_j^p}{\mathcal{U}^p-L^p}\right)^{s}\left(\frac{\omega_j}{\mathcal{W}_k}\right)^{s}\mathfrak{U}(L)+\left(\frac{\mathrm{t}_j^p-L^p}{\mathcal{U}^p-L^p}\right)^{s}\left(\frac{\omega_j}{\mathcal{W}_k}\right)^{s}\mathfrak{U}(\mathcal{U})\right),\tag{20}$$

where $\mathcal{W}_k = \sum_{j=1}^{k}\omega_j$. If $\mathfrak{U}$ is a (*p*,*s*)-concave function, then inequality (20) is reversed.
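Inequality (19) is easy to probe numerically. The sketch below is a minimal check for an illustrative choice (hypothetical, not taken from the paper): $\mathfrak{U}(x) = x^4$ with $p = 2$ and $s = 1$, which is $(2, 1)$-convex because $u \mapsto u^2$ is convex in $u = x^2$.

```python
# Minimal numerical check of the discrete Jensen inequality (19).
# Hypothetical choices (not from the paper): p = 2, s = 1 and
# U(x) = x**4, which is (2, 1)-convex since u -> u**2 is convex in u = x**2.
p, s = 2, 1
U = lambda x: x ** 4
w = [1.0, 2.0, 3.0, 4.0]   # positive weights omega_j
t = [2.1, 2.4, 2.7, 2.9]   # points t_j
Wk = sum(w)                # W_k = sum of the weights
lhs = U((sum(wj * tj ** p for wj, tj in zip(w, t)) / Wk) ** (1 / p))
rhs = sum((wj / Wk) ** s * U(tj) for wj, tj in zip(w, t))
print(lhs <= rhs)   # True
```

For a (*p*,*s*)-concave choice the comparison reverses, mirroring the remark after (19).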

#### **4. Hermite–Hadamard Type Inequalities for** (*p*,*s*)**-Convex** *F-I-V-F* **in the Second Sense**

In this section, we establish the H–H inequality for (*p*,*s*)-convex fuzzy-*I-V-F*s, as well as the fuzzy-interval H–H Fejér inequality for (*p*,*s*)-convex fuzzy-*I-V-F*s, using the fuzzy order relation. We begin with the following H–H inequality for (*p*,*s*)-convex fuzzy-*I-V-F*s:

**Theorem 6.** Let $\mathfrak{U} : [\mathrm{t}, \mathrm{s}] \to \mathbb{F}_C(\mathbb{R})$ be a (*p*,*s*)-convex *F-I-V-F*, whose *ϕ*-levels define the family of *I-V-F*s $\mathfrak{U}_{\varphi} : [\mathrm{t}, \mathrm{s}] \subset \mathbb{R} \to \mathcal{K}_C^{+}$ given by $\mathfrak{U}_{\varphi}(\varkappa) = [\mathfrak{U}_{*}(\varkappa, \varphi), \mathfrak{U}^{*}(\varkappa, \varphi)]$ for all $\varkappa \in [\mathrm{t}, \mathrm{s}]$ and for all *ϕ* ∈ [0, 1]. If $\mathfrak{U} \in \mathcal{FR}([\mathrm{t}, \mathrm{s}])$, then

$$2^{s-1}\,\mathfrak{U}\left(\left[\frac{\mathrm{t}^p+\mathrm{s}^p}{2}\right]^{\frac{1}{p}}\right) \preccurlyeq \frac{p}{\mathrm{s}^p-\mathrm{t}^p}\,(FR)\int_{\mathrm{t}}^{\mathrm{s}}\varkappa^{p-1}\,\mathfrak{U}(\varkappa)\,d\varkappa \preccurlyeq \frac{\mathfrak{U}(\mathrm{t})\,\widetilde{+}\,\mathfrak{U}(\mathrm{s})}{s+1}.\tag{21}$$

If U is a (*p*,*s*)-concave *F-I-V-F*, then

$$2^{s-1}\,\mathfrak{U}\left(\left[\frac{\mathrm{t}^p+\mathrm{s}^p}{2}\right]^{\frac{1}{p}}\right) \succcurlyeq \frac{p}{\mathrm{s}^p-\mathrm{t}^p}\,(FR)\int_{\mathrm{t}}^{\mathrm{s}}\varkappa^{p-1}\,\mathfrak{U}(\varkappa)\,d\varkappa \succcurlyeq \frac{\mathfrak{U}(\mathrm{t})\,\widetilde{+}\,\mathfrak{U}(\mathrm{s})}{s+1}.\tag{22}$$

**Proof.** Let U be a (*p*, *s*)-convex *F-I-V-F*. Then, by hypothesis, we have

$$2^s \mathfrak{U}\left(\left[\frac{\mathbf{t}^p + \mathbf{s}^p}{2}\right]^{\frac{1}{p}}\right) \preccurlyeq \mathfrak{U}\left(\left[\zeta \mathbf{t}^p + (\mathbf{1} - \zeta)\mathbf{s}^p\right]^{\frac{1}{p}}\right) \overset{\sim}{+} \mathfrak{U}\left(\left[(\mathbf{1} - \zeta)\mathbf{t}^p + \zeta \mathbf{s}^p\right]^{\frac{1}{p}}\right).$$

Therefore, for each *ϕ* ∈ [0, 1], we have

$$\begin{split} 2^{s}\,\mathfrak{U}_{*}\left(\left[\frac{\mathrm{t}^p+\mathrm{s}^p}{2}\right]^{\frac{1}{p}}, \varphi\right) &\leq \mathfrak{U}_{*}\left(\left[\zeta\mathrm{t}^p+(1-\zeta)\mathrm{s}^p\right]^{\frac{1}{p}}, \varphi\right) + \mathfrak{U}_{*}\left(\left[(1-\zeta)\mathrm{t}^p+\zeta\mathrm{s}^p\right]^{\frac{1}{p}}, \varphi\right), \\ 2^{s}\,\mathfrak{U}^{*}\left(\left[\frac{\mathrm{t}^p+\mathrm{s}^p}{2}\right]^{\frac{1}{p}}, \varphi\right) &\leq \mathfrak{U}^{*}\left(\left[\zeta\mathrm{t}^p+(1-\zeta)\mathrm{s}^p\right]^{\frac{1}{p}}, \varphi\right) + \mathfrak{U}^{*}\left(\left[(1-\zeta)\mathrm{t}^p+\zeta\mathrm{s}^p\right]^{\frac{1}{p}}, \varphi\right). \end{split}$$

Then,

$$\begin{split} 2^{s}\int_{0}^{1}\mathfrak{U}_{*}\left(\left[\frac{\mathrm{t}^p+\mathrm{s}^p}{2}\right]^{\frac{1}{p}}, \varphi\right)d\zeta &\leq \int_{0}^{1}\mathfrak{U}_{*}\left(\left[\zeta\mathrm{t}^p+(1-\zeta)\mathrm{s}^p\right]^{\frac{1}{p}}, \varphi\right)d\zeta + \int_{0}^{1}\mathfrak{U}_{*}\left(\left[(1-\zeta)\mathrm{t}^p+\zeta\mathrm{s}^p\right]^{\frac{1}{p}}, \varphi\right)d\zeta, \\ 2^{s}\int_{0}^{1}\mathfrak{U}^{*}\left(\left[\frac{\mathrm{t}^p+\mathrm{s}^p}{2}\right]^{\frac{1}{p}}, \varphi\right)d\zeta &\leq \int_{0}^{1}\mathfrak{U}^{*}\left(\left[\zeta\mathrm{t}^p+(1-\zeta)\mathrm{s}^p\right]^{\frac{1}{p}}, \varphi\right)d\zeta + \int_{0}^{1}\mathfrak{U}^{*}\left(\left[(1-\zeta)\mathrm{t}^p+\zeta\mathrm{s}^p\right]^{\frac{1}{p}}, \varphi\right)d\zeta. \end{split}$$

It follows that

$$\begin{split} 2^{s-1}\,\mathfrak{U}_{*}\left(\left[\frac{\mathrm{t}^p+\mathrm{s}^p}{2}\right]^{\frac{1}{p}}, \varphi\right) &\leq \frac{p}{\mathrm{s}^p-\mathrm{t}^p}\int_{\mathrm{t}}^{\mathrm{s}}\varkappa^{p-1}\,\mathfrak{U}_{*}(\varkappa, \varphi)\,d\varkappa, \\ 2^{s-1}\,\mathfrak{U}^{*}\left(\left[\frac{\mathrm{t}^p+\mathrm{s}^p}{2}\right]^{\frac{1}{p}}, \varphi\right) &\leq \frac{p}{\mathrm{s}^p-\mathrm{t}^p}\int_{\mathrm{t}}^{\mathrm{s}}\varkappa^{p-1}\,\mathfrak{U}^{*}(\varkappa, \varphi)\,d\varkappa. \end{split}$$

That is,

$$2^{s-1}\left[\mathfrak{U}_{*}\left(\left[\frac{\mathrm{t}^p+\mathrm{s}^p}{2}\right]^{\frac{1}{p}}, \varphi\right), \mathfrak{U}^{*}\left(\left[\frac{\mathrm{t}^p+\mathrm{s}^p}{2}\right]^{\frac{1}{p}}, \varphi\right)\right] \leq_{I} \frac{p}{\mathrm{s}^p-\mathrm{t}^p}\left[\int_{\mathrm{t}}^{\mathrm{s}}\varkappa^{p-1}\,\mathfrak{U}_{*}(\varkappa, \varphi)\,d\varkappa,\ \int_{\mathrm{t}}^{\mathrm{s}}\varkappa^{p-1}\,\mathfrak{U}^{*}(\varkappa, \varphi)\,d\varkappa\right].$$

Thus,

$$2^{s-1}\mathfrak{U}\left(\left[\frac{\mathbf{t}^p+\mathbf{s}^p}{2}\right]^{\frac{1}{p}}\right) \preccurlyeq \frac{p}{\mathbf{s}^p-\mathbf{t}^p} \text{ (FR)} \int\_{\mathbf{t}}^{\mathbf{s}} \varkappa^{p-1} \mathfrak{U}(\varkappa)d\varkappa. \tag{23}$$

In a similar way as above, we have

$$\frac{p}{\mathrm{s}^p-\mathrm{t}^p}\,(FR)\int_{\mathrm{t}}^{\mathrm{s}}\varkappa^{p-1}\,\mathfrak{U}(\varkappa)\,d\varkappa \preccurlyeq \frac{1}{s+1}\left[\mathfrak{U}(\mathrm{t})\,\widetilde{+}\,\mathfrak{U}(\mathrm{s})\right].\tag{24}$$

Combining Equations (23) and (24), we have

$$2^{s-1}\,\mathfrak{U}\left(\left[\frac{\mathrm{t}^p+\mathrm{s}^p}{2}\right]^{\frac{1}{p}}\right) \preccurlyeq \frac{p}{\mathrm{s}^p-\mathrm{t}^p}\,(FR)\int_{\mathrm{t}}^{\mathrm{s}}\varkappa^{p-1}\,\mathfrak{U}(\varkappa)\,d\varkappa \preccurlyeq \frac{1}{s+1}\left[\mathfrak{U}(\mathrm{t})\,\widetilde{+}\,\mathfrak{U}(\mathrm{s})\right].$$

Hence, we obtain the required result.

**Remark 4.** On the basis of Theorem 6, we note certain special cases below:


If *s* = 1, then Theorem 6 reduces to the result for *p*-convex *F-I-V-F*:

$$\mathfrak{U}\left(\left[\frac{\mathrm{t}^p+\mathrm{s}^p}{2}\right]^{\frac{1}{p}}\right) \preccurlyeq \frac{p}{\mathrm{s}^p-\mathrm{t}^p}\,(FR)\int_{\mathrm{t}}^{\mathrm{s}}\varkappa^{p-1}\,\mathfrak{U}(\varkappa)\,d\varkappa \preccurlyeq \frac{\mathfrak{U}(\mathrm{t})\,\widetilde{+}\,\mathfrak{U}(\mathrm{s})}{2};\tag{25}$$


If *p* = 1, then Theorem 6 reduces to the result for *s*-convex *F-I-V-F*:

$$\mathfrak{U}\left(\frac{\mathrm{t}+\mathrm{s}}{2}\right) \preccurlyeq \frac{1}{\mathrm{s}-\mathrm{t}}\,(FR)\int_{\mathrm{t}}^{\mathrm{s}}\mathfrak{U}(\varkappa)\,d\varkappa \preccurlyeq \frac{\mathfrak{U}(\mathrm{t})\,\widetilde{+}\,\mathfrak{U}(\mathrm{s})}{s+1};\tag{26}$$


If *p* = 1 and *s* = 1, then Theorem 6 reduces to the result for convex *F-I-V-F*:

$$\mathfrak{U}\left(\frac{\mathrm{t}+\mathrm{s}}{2}\right) \preccurlyeq \frac{1}{\mathrm{s}-\mathrm{t}}\,(FR)\int_{\mathrm{t}}^{\mathrm{s}}\mathfrak{U}(\varkappa)\,d\varkappa \preccurlyeq \frac{\mathfrak{U}(\mathrm{t})\,\widetilde{+}\,\mathfrak{U}(\mathrm{s})}{2};\tag{27}$$


If $\mathfrak{U}_{*}(\varkappa, \varphi) = \mathfrak{U}^{*}(\varkappa, \varphi)$ with *ϕ* = 1, then Theorem 6 reduces to the result for a (*p*,*s*)-convex function:

$$2^{s-1}\,\mathfrak{U}\left(\left[\frac{\mathrm{t}^p+\mathrm{s}^p}{2}\right]^{\frac{1}{p}}\right) \le \frac{p}{\mathrm{s}^p-\mathrm{t}^p}\,(R)\int_{\mathrm{t}}^{\mathrm{s}}\varkappa^{p-1}\,\mathfrak{U}(\varkappa)\,d\varkappa \le \frac{1}{s+1}\left[\mathfrak{U}(\mathrm{t})+\mathfrak{U}(\mathrm{s})\right];\tag{28}$$


If $\mathfrak{U}_{*}(\varkappa, \varphi) = \mathfrak{U}^{*}(\varkappa, \varphi)$ with *ϕ* = 1 and *s* = 1, then Theorem 6 reduces to the result for a *p*-convex function:

$$\mathfrak{U}\left(\left[\frac{\mathrm{t}^p+\mathrm{s}^p}{2}\right]^{\frac{1}{p}}\right) \le \frac{p}{\mathrm{s}^p-\mathrm{t}^p}\,(R)\int_{\mathrm{t}}^{\mathrm{s}}\varkappa^{p-1}\,\mathfrak{U}(\varkappa)\,d\varkappa \le \frac{\mathfrak{U}(\mathrm{t})+\mathfrak{U}(\mathrm{s})}{2};\tag{29}$$


If $\mathfrak{U}_{*}(\varkappa, \varphi) = \mathfrak{U}^{*}(\varkappa, \varphi)$ with *ϕ* = 1, *p* = 1, and *s* = 1, then Theorem 6 reduces to the classical H–H inequality:

$$
\mathfrak{U}\left(\frac{\mathrm{t}+\mathrm{s}}{2}\right) \le \frac{1}{\mathrm{s}-\mathrm{t}}\,(R)\int_{\mathrm{t}}^{\mathrm{s}}\mathfrak{U}(\varkappa)\,d\varkappa \le \frac{\mathfrak{U}(\mathrm{t})+\mathfrak{U}(\mathrm{s})}{2}.\tag{30}
$$
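The crisp endpoint case (30) can be confirmed with a few lines of arithmetic. The sketch below uses the illustrative convex function $\mathfrak{U}(x) = x^2$ on $[\mathrm{t}, \mathrm{s}] = [0, 1]$ (a hypothetical choice, not from the paper) and a midpoint-rule quadrature for the integral mean.

```python
# Quick numerical check of the classical H-H inequality (30) for the
# hypothetical convex function U(x) = x**2 on [t, s] = [0, 1].
t, s = 0.0, 1.0
U = lambda x: x * x
n = 10000
h = (s - t) / n
# midpoint-rule approximation of the integral mean (1/(s-t)) * int_t^s U
mean = sum(U(t + (i + 0.5) * h) for i in range(n)) * h / (s - t)
left = U((t + s) / 2)          # = 0.25
right = (U(t) + U(s)) / 2      # = 0.5
print(left <= mean <= right)   # True, since mean is close to 1/3
```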

**Example 2.** Let *p* be an odd number, *s* ∈ [0, 1], and let the *F-I-V-F* $\mathfrak{U} : [\mathrm{t}, \mathrm{s}] = [2, 3] \to \mathbb{F}_C(\mathbb{R})$ be defined by

$$\mathfrak{U}(\varkappa)(\sigma) = \begin{cases} \frac{\sigma}{\left(2-\varkappa^{\frac{p}{2}}\right)}, & \sigma \in \left[0, \ 2-\varkappa^{\frac{p}{2}}\right] \\\\ \frac{2\left(2-\varkappa^{\frac{p}{2}}\right)-\sigma}{\left(2-\varkappa^{\frac{p}{2}}\right)}, & \sigma \in \left(2-\varkappa^{\frac{p}{2}}, 2\left(2-\varkappa^{\frac{p}{2}}\right)\right] \\\\ 0, & \text{otherwise}. \end{cases} \tag{31}$$

Then, for each *ϕ* ∈ [0, 1], we have $\mathfrak{U}_{\varphi}(\varkappa) = \left[\varphi\left(2-\varkappa^{\frac{p}{2}}\right),\ (2-\varphi)\left(2-\varkappa^{\frac{p}{2}}\right)\right]$. Since the end point functions $\mathfrak{U}_{*}(\varkappa, \varphi) = \varphi\left(2-\varkappa^{\frac{p}{2}}\right)$ and $\mathfrak{U}^{*}(\varkappa, \varphi) = (2-\varphi)\left(2-\varkappa^{\frac{p}{2}}\right)$ are (*p*,*s*)-convex functions for each *ϕ* ∈ [0, 1], $\mathfrak{U}(\varkappa)$ is a (*p*,*s*)-convex *F-I-V-F*. We now compute the following:

$$\begin{aligned}
2^{s-1}\,\mathfrak{U}_{*}\left(\left[\frac{\mathrm{t}^p+\mathrm{s}^p}{2}\right]^{\frac{1}{p}}, \varphi\right) &= \frac{4-\sqrt{10}}{2}\,\varphi, &
2^{s-1}\,\mathfrak{U}^{*}\left(\left[\frac{\mathrm{t}^p+\mathrm{s}^p}{2}\right]^{\frac{1}{p}}, \varphi\right) &= \frac{4-\sqrt{10}}{2}\,(2-\varphi), \\
\frac{p}{\mathrm{s}^p-\mathrm{t}^p}\int_{\mathrm{t}}^{\mathrm{s}}\varkappa^{p-1}\,\mathfrak{U}_{*}(\varkappa, \varphi)\,d\varkappa &= \varphi\int_{2}^{3}\left(2-\varkappa^{\frac{p}{2}}\right)d\varkappa = \frac{21}{50}\,\varphi, &
\frac{p}{\mathrm{s}^p-\mathrm{t}^p}\int_{\mathrm{t}}^{\mathrm{s}}\varkappa^{p-1}\,\mathfrak{U}^{*}(\varkappa, \varphi)\,d\varkappa &= (2-\varphi)\int_{2}^{3}\left(2-\varkappa^{\frac{p}{2}}\right)d\varkappa = \frac{21}{50}\,(2-\varphi), \\
\frac{\mathfrak{U}_{*}(\mathrm{t}, \varphi)+\mathfrak{U}_{*}(\mathrm{s}, \varphi)}{s+1} &= \frac{4-\sqrt{2}-\sqrt{3}}{2}\,\varphi, &
\frac{\mathfrak{U}^{*}(\mathrm{t}, \varphi)+\mathfrak{U}^{*}(\mathrm{s}, \varphi)}{s+1} &= \frac{4-\sqrt{2}-\sqrt{3}}{2}\,(2-\varphi),
\end{aligned}$$

for all *ϕ* ∈ [0, 1]. That means

$$\left[\frac{4-\sqrt{10}}{2}\varphi,\frac{4-\sqrt{10}}{2}(2-\varphi)\right] \leq\_I \left[\frac{21}{50}\varphi,\frac{21}{50}(2-\varphi)\right] \leq\_I \left[\frac{4-\sqrt{2}-\sqrt{3}}{2}\varphi,\frac{4-\sqrt{2}-\sqrt{3}}{2}(2-\varphi)\right],\\ \text{for all } \varphi \in [0,1].$$

and Theorem 6 is thereby verified.
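The chain just verified can also be checked in floating point. The sketch below evaluates the lower endpoint function of Example 2 with *ϕ* = 1, taking *p* = 1 and *s* = 1 (the values under which the printed constants arise); the exact antiderivative of $2-\sqrt{\varkappa}$ on [2, 3] supplies the middle term, of which 21/50 is the rounded value.

```python
import math

# Floating-point check of the chain in Example 2 (lower endpoint,
# phi = 1, with p = 1 and s = 1, under which the printed constants arise).
lower  = (4 - math.sqrt(10)) / 2                      # 2^{s-1} U_* at the p-midpoint
middle = 2 - 2 * math.sqrt(3) + 4 * math.sqrt(2) / 3  # exact value of int_2^3 (2 - sqrt(k)) dk
upper  = (4 - math.sqrt(2) - math.sqrt(3)) / 2        # (U_*(t, 1) + U_*(s, 1)) / (s + 1)
print(lower <= middle <= upper)   # True
print(round(middle, 2))           # 0.42, the paper's 21/50
```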

**Theorem 7.** Let $\mathfrak{U} : [\mathrm{t}, \mathrm{s}] \to \mathbb{F}_C(\mathbb{R})$ be a (*p*,*s*)-convex *F-I-V-F*, whose *ϕ*-levels define the family of *I-V-F*s $\mathfrak{U}_{\varphi} : [\mathrm{t}, \mathrm{s}] \subset \mathbb{R} \to \mathcal{K}_C^{+}$ given by $\mathfrak{U}_{\varphi}(\varkappa) = [\mathfrak{U}_{*}(\varkappa, \varphi), \mathfrak{U}^{*}(\varkappa, \varphi)]$ for all $\varkappa \in [\mathrm{t}, \mathrm{s}]$ and for all *ϕ* ∈ [0, 1]. If $\mathfrak{U} \in \mathcal{FR}([\mathrm{t}, \mathrm{s}])$, then

$$4^{s-1}\,\mathfrak{U}\left(\left[\frac{\mathrm{t}^p+\mathrm{s}^p}{2}\right]^{\frac{1}{p}}\right) \preccurlyeq \rhd_{2} \preccurlyeq \frac{p}{\mathrm{s}^p-\mathrm{t}^p}\,(FR)\int_{\mathrm{t}}^{\mathrm{s}}\varkappa^{p-1}\,\mathfrak{U}(\varkappa)\,d\varkappa \preccurlyeq \rhd_{1} \preccurlyeq \frac{\mathfrak{U}(\mathrm{t})\,\widetilde{+}\,\mathfrak{U}(\mathrm{s})}{s+1}\left[\frac{1}{2}+\frac{1}{2^{s}}\right],\tag{32}$$

where

$$\rhd_{1} = \frac{\frac{\mathfrak{U}(\mathrm{t})\,\widetilde{+}\,\mathfrak{U}(\mathrm{s})}{2}\,\widetilde{+}\,\mathfrak{U}\left(\left[\frac{\mathrm{t}^p+\mathrm{s}^p}{2}\right]^{\frac{1}{p}}\right)}{s+1},\qquad \rhd_{2} = 2^{s-2}\left[\mathfrak{U}\left(\left[\frac{3\mathrm{t}^p+\mathrm{s}^p}{4}\right]^{\frac{1}{p}}\right)\widetilde{+}\,\mathfrak{U}\left(\left[\frac{\mathrm{t}^p+3\mathrm{s}^p}{4}\right]^{\frac{1}{p}}\right)\right],$$
and $\rhd_{1} = [\rhd_{1*},\ \rhd_{1}^{*}]$, $\rhd_{2} = [\rhd_{2*},\ \rhd_{2}^{*}]$.

**Proof.** Consider the interval $\left[\mathrm{t}^p, \frac{\mathrm{t}^p+\mathrm{s}^p}{2}\right]$; then we have

$$2^{s}\,\mathfrak{U}\left(\left[\frac{\zeta\mathrm{t}^p+(1-\zeta)\frac{\mathrm{t}^p+\mathrm{s}^p}{2}}{2}+\frac{(1-\zeta)\mathrm{t}^p+\zeta\frac{\mathrm{t}^p+\mathrm{s}^p}{2}}{2}\right]^{\frac{1}{p}}\right) \preccurlyeq \mathfrak{U}\left(\left[\zeta\mathrm{t}^p+(1-\zeta)\frac{\mathrm{t}^p+\mathrm{s}^p}{2}\right]^{\frac{1}{p}}\right)\widetilde{+}\,\mathfrak{U}\left(\left[(1-\zeta)\mathrm{t}^p+\zeta\frac{\mathrm{t}^p+\mathrm{s}^p}{2}\right]^{\frac{1}{p}}\right).$$

Therefore, for each *ϕ* ∈ [0, 1], we have

$$\begin{split} 2^{s}\,\mathfrak{U}_{*}\left(\left[\frac{\zeta\mathrm{t}^p+(1-\zeta)\frac{\mathrm{t}^p+\mathrm{s}^p}{2}}{2}+\frac{(1-\zeta)\mathrm{t}^p+\zeta\frac{\mathrm{t}^p+\mathrm{s}^p}{2}}{2}\right]^{\frac{1}{p}}, \varphi\right) &\leq \mathfrak{U}_{*}\left(\left[\zeta\mathrm{t}^p+(1-\zeta)\frac{\mathrm{t}^p+\mathrm{s}^p}{2}\right]^{\frac{1}{p}}, \varphi\right) + \mathfrak{U}_{*}\left(\left[(1-\zeta)\mathrm{t}^p+\zeta\frac{\mathrm{t}^p+\mathrm{s}^p}{2}\right]^{\frac{1}{p}}, \varphi\right), \\ 2^{s}\,\mathfrak{U}^{*}\left(\left[\frac{\zeta\mathrm{t}^p+(1-\zeta)\frac{\mathrm{t}^p+\mathrm{s}^p}{2}}{2}+\frac{(1-\zeta)\mathrm{t}^p+\zeta\frac{\mathrm{t}^p+\mathrm{s}^p}{2}}{2}\right]^{\frac{1}{p}}, \varphi\right) &\leq \mathfrak{U}^{*}\left(\left[\zeta\mathrm{t}^p+(1-\zeta)\frac{\mathrm{t}^p+\mathrm{s}^p}{2}\right]^{\frac{1}{p}}, \varphi\right) + \mathfrak{U}^{*}\left(\left[(1-\zeta)\mathrm{t}^p+\zeta\frac{\mathrm{t}^p+\mathrm{s}^p}{2}\right]^{\frac{1}{p}}, \varphi\right). \end{split}$$

Consequently, we obtain

$$\begin{split} 2^{s-2}\,\mathfrak{U}_{*}\left(\left[\frac{3\mathrm{t}^p+\mathrm{s}^p}{4}\right]^{\frac{1}{p}}, \varphi\right) &\leq \frac{p}{\mathrm{s}^p-\mathrm{t}^p}\int_{\mathrm{t}}^{\left[\frac{\mathrm{t}^p+\mathrm{s}^p}{2}\right]^{\frac{1}{p}}}\varkappa^{p-1}\,\mathfrak{U}_{*}(\varkappa, \varphi)\,d\varkappa, \\ 2^{s-2}\,\mathfrak{U}^{*}\left(\left[\frac{3\mathrm{t}^p+\mathrm{s}^p}{4}\right]^{\frac{1}{p}}, \varphi\right) &\leq \frac{p}{\mathrm{s}^p-\mathrm{t}^p}\int_{\mathrm{t}}^{\left[\frac{\mathrm{t}^p+\mathrm{s}^p}{2}\right]^{\frac{1}{p}}}\varkappa^{p-1}\,\mathfrak{U}^{*}(\varkappa, \varphi)\,d\varkappa. \end{split}$$

That is,

$$2^{s-2}\left[\mathfrak{U}_{*}\left(\left[\frac{3\mathrm{t}^p+\mathrm{s}^p}{4}\right]^{\frac{1}{p}}, \varphi\right),\ \mathfrak{U}^{*}\left(\left[\frac{3\mathrm{t}^p+\mathrm{s}^p}{4}\right]^{\frac{1}{p}}, \varphi\right)\right] \leq_{I} \frac{p}{\mathrm{s}^p-\mathrm{t}^p}\left[\int_{\mathrm{t}}^{\left[\frac{\mathrm{t}^p+\mathrm{s}^p}{2}\right]^{\frac{1}{p}}}\varkappa^{p-1}\,\mathfrak{U}_{*}(\varkappa, \varphi)\,d\varkappa,\ \int_{\mathrm{t}}^{\left[\frac{\mathrm{t}^p+\mathrm{s}^p}{2}\right]^{\frac{1}{p}}}\varkappa^{p-1}\,\mathfrak{U}^{*}(\varkappa, \varphi)\,d\varkappa\right].$$

It follows that

$$2^{s-2}\,\mathfrak{U}\left(\left[\frac{3\mathrm{t}^p+\mathrm{s}^p}{4}\right]^{\frac{1}{p}}\right) \preccurlyeq \frac{p}{\mathrm{s}^p-\mathrm{t}^p}\int_{\mathrm{t}}^{\left[\frac{\mathrm{t}^p+\mathrm{s}^p}{2}\right]^{\frac{1}{p}}}\varkappa^{p-1}\,\mathfrak{U}(\varkappa)\,d\varkappa.\tag{33}$$

In a similar way as above, we have

$$2^{s-2}\,\mathfrak{U}\left(\left[\frac{\mathrm{t}^p+3\mathrm{s}^p}{4}\right]^{\frac{1}{p}}\right) \preccurlyeq \frac{p}{\mathrm{s}^p-\mathrm{t}^p}\int_{\left[\frac{\mathrm{t}^p+\mathrm{s}^p}{2}\right]^{\frac{1}{p}}}^{\mathrm{s}}\varkappa^{p-1}\,\mathfrak{U}(\varkappa)\,d\varkappa.\tag{34}$$

Combining Equations (33) and (34), we have

$$2^{s-2}\left[\mathfrak{U}\left(\left[\frac{3\mathrm{t}^p+\mathrm{s}^p}{4}\right]^{\frac{1}{p}}\right)\widetilde{+}\,\mathfrak{U}\left(\left[\frac{\mathrm{t}^p+3\mathrm{s}^p}{4}\right]^{\frac{1}{p}}\right)\right] \preccurlyeq \frac{p}{\mathrm{s}^p-\mathrm{t}^p}\,(FR)\int_{\mathrm{t}}^{\mathrm{s}}\varkappa^{p-1}\,\mathfrak{U}(\varkappa)\,d\varkappa.$$

By using Theorem 6, we have

$$4^{s-1}\,\mathfrak{U}\left(\left[\frac{\mathrm{t}^p+\mathrm{s}^p}{2}\right]^{\frac{1}{p}}\right) = 4^{s-1}\,\mathfrak{U}\left(\left[\frac{1}{2}\cdot\frac{3\mathrm{t}^p+\mathrm{s}^p}{4}+\frac{1}{2}\cdot\frac{\mathrm{t}^p+3\mathrm{s}^p}{4}\right]^{\frac{1}{p}}\right).$$

Therefore, for each *ϕ* ∈ [0, 1], we have

$$\begin{aligned}
4^{s-1}\,\mathfrak{U}_{*}\left(\left[\frac{\mathrm{t}^p+\mathrm{s}^p}{2}\right]^{\frac{1}{p}}, \varphi\right) &= 4^{s-1}\,\mathfrak{U}_{*}\left(\left[\frac{1}{2}\cdot\frac{3\mathrm{t}^p+\mathrm{s}^p}{4}+\frac{1}{2}\cdot\frac{\mathrm{t}^p+3\mathrm{s}^p}{4}\right]^{\frac{1}{p}}, \varphi\right) \\
&\leq 2^{s-2}\left[\mathfrak{U}_{*}\left(\left[\frac{3\mathrm{t}^p+\mathrm{s}^p}{4}\right]^{\frac{1}{p}}, \varphi\right)+\mathfrak{U}_{*}\left(\left[\frac{\mathrm{t}^p+3\mathrm{s}^p}{4}\right]^{\frac{1}{p}}, \varphi\right)\right] = \rhd_{2*} \\
&\leq \frac{p}{\mathrm{s}^p-\mathrm{t}^p}\int_{\mathrm{t}}^{\mathrm{s}}\varkappa^{p-1}\,\mathfrak{U}_{*}(\varkappa, \varphi)\,d\varkappa \\
&\leq \frac{1}{s+1}\left[\frac{\mathfrak{U}_{*}(\mathrm{t}, \varphi)+\mathfrak{U}_{*}(\mathrm{s}, \varphi)}{2}+\mathfrak{U}_{*}\left(\left[\frac{\mathrm{t}^p+\mathrm{s}^p}{2}\right]^{\frac{1}{p}}, \varphi\right)\right] = \rhd_{1*} \\
&\leq \frac{1}{s+1}\left[\frac{\mathfrak{U}_{*}(\mathrm{t}, \varphi)+\mathfrak{U}_{*}(\mathrm{s}, \varphi)}{2}+\frac{1}{2^{s}}\big(\mathfrak{U}_{*}(\mathrm{t}, \varphi)+\mathfrak{U}_{*}(\mathrm{s}, \varphi)\big)\right] \\
&= \frac{1}{s+1}\left[\mathfrak{U}_{*}(\mathrm{t}, \varphi)+\mathfrak{U}_{*}(\mathrm{s}, \varphi)\right]\left[\frac{1}{2}+\frac{1}{2^{s}}\right],
\end{aligned}$$

with the analogous chain (through $\rhd_{2}^{*}$ and $\rhd_{1}^{*}$) holding for $\mathfrak{U}^{*}(\cdot, \varphi)$,

that is

$$4^{s-1}\,\mathfrak{U}\left(\left[\frac{\mathrm{t}^p+\mathrm{s}^p}{2}\right]^{\frac{1}{p}}\right) \preccurlyeq \rhd_{2} \preccurlyeq \frac{p}{\mathrm{s}^p-\mathrm{t}^p}\,(FR)\int_{\mathrm{t}}^{\mathrm{s}}\varkappa^{p-1}\,\mathfrak{U}(\varkappa)\,d\varkappa \preccurlyeq \rhd_{1} \preccurlyeq \frac{\mathfrak{U}(\mathrm{t})\,\widetilde{+}\,\mathfrak{U}(\mathrm{s})}{s+1}\left[\frac{1}{2}+\frac{1}{2^{s}}\right];$$

Hence, the result follows.

**Example 3.** Let *p* be an odd number and the *F-I-V-F* $\mathfrak{U} : [\mathrm{t}, \mathrm{s}] = [2, 3] \to \mathbb{F}_C(\mathbb{R})$ be defined by $\mathfrak{U}_{\varphi}(\varkappa) = \left[\varphi\left(2-\varkappa^{\frac{p}{2}}\right),\ (2-\varphi)\left(2-\varkappa^{\frac{p}{2}}\right)\right]$, as in Example 2; then $\mathfrak{U}(\varkappa)$ is a (*p*,*s*)-convex

*F-I-V-F* and satisfies Equation (21). We have

$$\mathfrak{U}_{*}(\varkappa, \varphi) = \varphi\left(2-\varkappa^{\frac{p}{2}}\right) \text{ and } \mathfrak{U}^{*}(\varkappa, \varphi) = (2-\varphi)\left(2-\varkappa^{\frac{p}{2}}\right).$$

We now compute the following:

$$\begin{aligned}
\frac{\mathfrak{U}_{*}(\mathrm{t}, \varphi)+\mathfrak{U}_{*}(\mathrm{s}, \varphi)}{s+1}\left[\frac{1}{2}+\frac{1}{2^{s}}\right] &= \frac{4-\sqrt{2}-\sqrt{3}}{2}\,\varphi, &
\frac{\mathfrak{U}^{*}(\mathrm{t}, \varphi)+\mathfrak{U}^{*}(\mathrm{s}, \varphi)}{s+1}\left[\frac{1}{2}+\frac{1}{2^{s}}\right] &= \frac{4-\sqrt{2}-\sqrt{3}}{2}\,(2-\varphi), \\
\rhd_{1*} = \frac{\frac{\mathfrak{U}_{*}(\mathrm{t}, \varphi)+\mathfrak{U}_{*}(\mathrm{s}, \varphi)}{2}+\mathfrak{U}_{*}\left(\left[\frac{\mathrm{t}^p+\mathrm{s}^p}{2}\right]^{\frac{1}{p}}, \varphi\right)}{s+1} &= \frac{8-\sqrt{2}-\sqrt{3}-\sqrt{10}}{4}\,\varphi, &
\rhd_{1}^{*} &= \frac{8-\sqrt{2}-\sqrt{3}-\sqrt{10}}{4}\,(2-\varphi), \\
\rhd_{2*} = 2^{s-2}\left[\mathfrak{U}_{*}\left(\left[\frac{3\mathrm{t}^p+\mathrm{s}^p}{4}\right]^{\frac{1}{p}}, \varphi\right)+\mathfrak{U}_{*}\left(\left[\frac{\mathrm{t}^p+3\mathrm{s}^p}{4}\right]^{\frac{1}{p}}, \varphi\right)\right] &= \frac{5-\sqrt{11}}{4}\,\varphi, &
\rhd_{2}^{*} &= \frac{5-\sqrt{11}}{4}\,(2-\varphi), \\
4^{s-1}\,\mathfrak{U}_{*}\left(\left[\frac{\mathrm{t}^p+\mathrm{s}^p}{2}\right]^{\frac{1}{p}}, \varphi\right) &= \frac{4-\sqrt{10}}{2}\,\varphi, &
4^{s-1}\,\mathfrak{U}^{*}\left(\left[\frac{\mathrm{t}^p+\mathrm{s}^p}{2}\right]^{\frac{1}{p}}, \varphi\right) &= \frac{4-\sqrt{10}}{2}\,(2-\varphi).
\end{aligned}$$

Then, we obtain that

$$\begin{array}{c} \frac{4-\sqrt{10}}{2}\,\varphi \leq \frac{5-\sqrt{11}}{4}\,\varphi \leq \frac{21}{50}\,\varphi \leq \frac{8-\sqrt{2}-\sqrt{3}-\sqrt{10}}{4}\,\varphi \leq \frac{4-\sqrt{2}-\sqrt{3}}{2}\,\varphi, \\ \frac{4-\sqrt{10}}{2}\,(2-\varphi) \leq \frac{5-\sqrt{11}}{4}\,(2-\varphi) \leq \frac{21}{50}\,(2-\varphi) \leq \frac{8-\sqrt{2}-\sqrt{3}-\sqrt{10}}{4}\,(2-\varphi) \leq \frac{4-\sqrt{2}-\sqrt{3}}{2}\,(2-\varphi). \end{array}$$

Hence, Theorem 7 is verified.
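The refined chain of Theorem 7 can be checked the same way; the sketch below treats the lower endpoint functions of Example 3 with *ϕ* = 1, again taking *p* = 1 and *s* = 1. The exact value of the integral mean (about 0.4216) is used for the middle term, since the rounded 21/50 is too coarse for this finer comparison.

```python
import math

# Floating-point check of the refined chain from Example 3
# (lower endpoint, phi = 1, with p = 1 and s = 1).
a   = (4 - math.sqrt(10)) / 2                       # 4^{s-1} U_* at the p-midpoint
b2  = (5 - math.sqrt(11)) / 4                       # the quantity |>_{2*}
mid = 2 - 2 * math.sqrt(3) + 4 * math.sqrt(2) / 3   # exact integral mean over [2, 3]
b1  = (8 - math.sqrt(2) - math.sqrt(3) - math.sqrt(10)) / 4   # the quantity |>_{1*}
c   = (4 - math.sqrt(2) - math.sqrt(3)) / 2         # right-most bound
print(a <= b2 <= mid <= b1 <= c)   # True
```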

The next Theorems 8 and 9 give the second H–H Fejér inequality and the first H–H Fejér inequality for (*p*,*s*)-convex *F-I-V-F*, respectively.

**Theorem 8.** (Second H–H Fejér inequality for (*p*,*s*)-convex *F-I-V-F*) Let $\mathfrak{U} : [\mathrm{t}, \mathrm{s}] \to \mathbb{F}_C(\mathbb{R})$ be a (*p*,*s*)-convex *F-I-V-F* with $\mathrm{t} < \mathrm{s}$, whose *ϕ*-levels define the family of *I-V-F*s $\mathfrak{U}_{\varphi} : [\mathrm{t}, \mathrm{s}] \subset \mathbb{R} \to \mathcal{K}_C^{+}$ given by $\mathfrak{U}_{\varphi}(\varkappa) = [\mathfrak{U}_{*}(\varkappa, \varphi), \mathfrak{U}^{*}(\varkappa, \varphi)]$ for all $\varkappa \in [\mathrm{t}, \mathrm{s}]$ and for all *ϕ* ∈ [0, 1]. If $\mathfrak{U} \in \mathcal{FR}([\mathrm{t}, \mathrm{s}])$ and $\Psi : [\mathrm{t}, \mathrm{s}] \to \mathbb{R}$, $\Psi(\varkappa) \geq 0$, is *p*-symmetric with respect to $\left[\frac{\mathrm{t}^p+\mathrm{s}^p}{2}\right]^{\frac{1}{p}}$, then

$$\frac{p}{\mathrm{s}^p-\mathrm{t}^p}\,(FR)\int_{\mathrm{t}}^{\mathrm{s}}\varkappa^{p-1}\,\mathfrak{U}(\varkappa)\Psi(\varkappa)\,d\varkappa \preccurlyeq \left[\mathfrak{U}(\mathrm{t})\,\widetilde{+}\,\mathfrak{U}(\mathrm{s})\right]\int_{0}^{1}\zeta^{s}\,\Psi\left(\left[(1-\zeta)\mathrm{t}^p+\zeta\mathrm{s}^p\right]^{\frac{1}{p}}\right)d\zeta.\tag{35}$$

If U is (*p*,*s*)-concave *F-I-V-F*, then Equation (35) is reversed.
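Before the proof, the crisp special case of (35) with *p* = 1 and *s* = 1 is easy to test numerically. The sketch below uses hypothetical choices (not from the paper): $\mathfrak{U}(x) = x^2$ and the symmetric weight $\Psi(x) = x(1-x)$ on $[\mathrm{t}, \mathrm{s}] = [0, 1]$; both sides then have the closed forms 1/20 and 1/12.

```python
# Numerical check of the crisp case of the second H-H Fejer inequality (35)
# with p = 1, s = 1. Hypothetical choices: U(x) = x**2 and the symmetric
# weight Psi(x) = x*(1 - x) on [t, s] = [0, 1].
t, s = 0.0, 1.0
U   = lambda x: x * x
Psi = lambda x: x * (1 - x)     # symmetric about (t + s) / 2
n = 10000
h = (s - t) / n
mids = [t + (i + 0.5) * h for i in range(n)]
# weighted integral mean of U: (1/(s-t)) * int_t^s U(x) Psi(x) dx  ~ 1/20
lhs = sum(U(x) * Psi(x) for x in mids) * h / (s - t)
# [U(t) + U(s)] * int_0^1 z * Psi((1-z)t + z s) dz                 ~ 1/12
rhs = (U(t) + U(s)) * sum(((i + 0.5) / n) * Psi(t + ((i + 0.5) / n) * (s - t)) / n
                          for i in range(n))
print(lhs <= rhs)   # True
```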

**Proof.** Let U be a (*p*, *s*)-convex *F-I-V-F*. Then, for each *ϕ* ∈ [0, 1], we have

$$\begin{split} \mathfrak{U}_{*}\left(\left[\zeta\mathrm{t}^p+(1-\zeta)\mathrm{s}^p\right]^{\frac{1}{p}}, \varphi\right)\Psi\left(\left[\zeta\mathrm{t}^p+(1-\zeta)\mathrm{s}^p\right]^{\frac{1}{p}}\right) &\leq \left(\zeta^{s}\,\mathfrak{U}_{*}(\mathrm{t}, \varphi)+(1-\zeta)^{s}\,\mathfrak{U}_{*}(\mathrm{s}, \varphi)\right)\Psi\left(\left[\zeta\mathrm{t}^p+(1-\zeta)\mathrm{s}^p\right]^{\frac{1}{p}}\right), \\ \mathfrak{U}^{*}\left(\left[\zeta\mathrm{t}^p+(1-\zeta)\mathrm{s}^p\right]^{\frac{1}{p}}, \varphi\right)\Psi\left(\left[\zeta\mathrm{t}^p+(1-\zeta)\mathrm{s}^p\right]^{\frac{1}{p}}\right) &\leq \left(\zeta^{s}\,\mathfrak{U}^{*}(\mathrm{t}, \varphi)+(1-\zeta)^{s}\,\mathfrak{U}^{*}(\mathrm{s}, \varphi)\right)\Psi\left(\left[\zeta\mathrm{t}^p+(1-\zeta)\mathrm{s}^p\right]^{\frac{1}{p}}\right), \end{split}\tag{36}$$

and

$$\begin{split} \mathfrak{U}_{*}\left(\left[(1-\zeta)\mathrm{t}^p+\zeta\mathrm{s}^p\right]^{\frac{1}{p}}, \varphi\right)\Psi\left(\left[(1-\zeta)\mathrm{t}^p+\zeta\mathrm{s}^p\right]^{\frac{1}{p}}\right) &\leq \left((1-\zeta)^{s}\,\mathfrak{U}_{*}(\mathrm{t}, \varphi)+\zeta^{s}\,\mathfrak{U}_{*}(\mathrm{s}, \varphi)\right)\Psi\left(\left[(1-\zeta)\mathrm{t}^p+\zeta\mathrm{s}^p\right]^{\frac{1}{p}}\right), \\ \mathfrak{U}^{*}\left(\left[(1-\zeta)\mathrm{t}^p+\zeta\mathrm{s}^p\right]^{\frac{1}{p}}, \varphi\right)\Psi\left(\left[(1-\zeta)\mathrm{t}^p+\zeta\mathrm{s}^p\right]^{\frac{1}{p}}\right) &\leq \left((1-\zeta)^{s}\,\mathfrak{U}^{*}(\mathrm{t}, \varphi)+\zeta^{s}\,\mathfrak{U}^{*}(\mathrm{s}, \varphi)\right)\Psi\left(\left[(1-\zeta)\mathrm{t}^p+\zeta\mathrm{s}^p\right]^{\frac{1}{p}}\right). \end{split}\tag{37}$$

After adding Equations (36) and (37), and integrating over [0, 1], we get

$$\begin{aligned}
&\int_{0}^{1}\mathfrak{U}_{*}\left(\left[\zeta\mathrm{t}^p+(1-\zeta)\mathrm{s}^p\right]^{\frac{1}{p}}, \varphi\right)\Psi\left(\left[\zeta\mathrm{t}^p+(1-\zeta)\mathrm{s}^p\right]^{\frac{1}{p}}\right)d\zeta + \int_{0}^{1}\mathfrak{U}_{*}\left(\left[(1-\zeta)\mathrm{t}^p+\zeta\mathrm{s}^p\right]^{\frac{1}{p}}, \varphi\right)\Psi\left(\left[(1-\zeta)\mathrm{t}^p+\zeta\mathrm{s}^p\right]^{\frac{1}{p}}\right)d\zeta \\
&\leq \int_{0}^{1}\left[\mathfrak{U}_{*}(\mathrm{t}, \varphi)\left\{\zeta^{s}\,\Psi\left(\left[\zeta\mathrm{t}^p+(1-\zeta)\mathrm{s}^p\right]^{\frac{1}{p}}\right)+(1-\zeta)^{s}\,\Psi\left(\left[(1-\zeta)\mathrm{t}^p+\zeta\mathrm{s}^p\right]^{\frac{1}{p}}\right)\right\}\right. \\
&\qquad\left.+\ \mathfrak{U}_{*}(\mathrm{s}, \varphi)\left\{(1-\zeta)^{s}\,\Psi\left(\left[\zeta\mathrm{t}^p+(1-\zeta)\mathrm{s}^p\right]^{\frac{1}{p}}\right)+\zeta^{s}\,\Psi\left(\left[(1-\zeta)\mathrm{t}^p+\zeta\mathrm{s}^p\right]^{\frac{1}{p}}\right)\right\}\right]d\zeta \\
&= 2\,\mathfrak{U}_{*}(\mathrm{t}, \varphi)\int_{0}^{1}\zeta^{s}\,\Psi\left(\left[\zeta\mathrm{t}^p+(1-\zeta)\mathrm{s}^p\right]^{\frac{1}{p}}\right)d\zeta + 2\,\mathfrak{U}_{*}(\mathrm{s}, \varphi)\int_{0}^{1}\zeta^{s}\,\Psi\left(\left[(1-\zeta)\mathrm{t}^p+\zeta\mathrm{s}^p\right]^{\frac{1}{p}}\right)d\zeta,
\end{aligned}$$

with the same chain holding for $\mathfrak{U}^{*}(\cdot, \varphi)$.

Since Ψ is *p*-symmetric, the right-hand sides reduce to

$$\begin{split}
&2\big[\mathfrak{U}_*(\mathfrak{t},\phi)+\mathfrak{U}_*(\mathfrak{s},\phi)\big]\int_0^1 \zeta^{s}\,\Psi\Big(\big[(1-\zeta)\mathfrak{t}^p+\zeta\mathfrak{s}^p\big]^{\frac{1}{p}}\Big)d\zeta,\\
&2\big[\mathfrak{U}^*(\mathfrak{t},\phi)+\mathfrak{U}^*(\mathfrak{s},\phi)\big]\int_0^1 \zeta^{s}\,\Psi\Big(\big[(1-\zeta)\mathfrak{t}^p+\zeta\mathfrak{s}^p\big]^{\frac{1}{p}}\Big)d\zeta.
\end{split}\tag{38}$$

Since

$$\begin{split}
\int_0^1 \mathfrak{U}_*\Big(\big[\zeta\mathfrak{t}^p+(1-\zeta)\mathfrak{s}^p\big]^{\frac{1}{p}},\phi\Big)\Psi\Big(\big[\zeta\mathfrak{t}^p+(1-\zeta)\mathfrak{s}^p\big]^{\frac{1}{p}}\Big)d\zeta
&=\int_0^1 \mathfrak{U}_*\Big(\big[(1-\zeta)\mathfrak{t}^p+\zeta\mathfrak{s}^p\big]^{\frac{1}{p}},\phi\Big)\Psi\Big(\big[(1-\zeta)\mathfrak{t}^p+\zeta\mathfrak{s}^p\big]^{\frac{1}{p}}\Big)d\zeta\\
&=\frac{p}{\mathfrak{s}^p-\mathfrak{t}^p}\int_{\mathfrak{t}}^{\mathfrak{s}} \varkappa^{p-1}\,\mathfrak{U}_*(\varkappa,\phi)\,\Psi(\varkappa)\,d\varkappa,\\
\int_0^1 \mathfrak{U}^*\Big(\big[\zeta\mathfrak{t}^p+(1-\zeta)\mathfrak{s}^p\big]^{\frac{1}{p}},\phi\Big)\Psi\Big(\big[\zeta\mathfrak{t}^p+(1-\zeta)\mathfrak{s}^p\big]^{\frac{1}{p}}\Big)d\zeta
&=\int_0^1 \mathfrak{U}^*\Big(\big[(1-\zeta)\mathfrak{t}^p+\zeta\mathfrak{s}^p\big]^{\frac{1}{p}},\phi\Big)\Psi\Big(\big[(1-\zeta)\mathfrak{t}^p+\zeta\mathfrak{s}^p\big]^{\frac{1}{p}}\Big)d\zeta\\
&=\frac{p}{\mathfrak{s}^p-\mathfrak{t}^p}\int_{\mathfrak{t}}^{\mathfrak{s}} \varkappa^{p-1}\,\mathfrak{U}^*(\varkappa,\phi)\,\Psi(\varkappa)\,d\varkappa,
\end{split}\tag{39}$$
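For readability, we record here the substitution that yields Equation (39): the second equality is the change of variable below (and the first follows by replacing *ζ* with 1 − *ζ*).

```latex
\varkappa^{p} = (1-\zeta)\,\mathfrak{t}^{p} + \zeta\,\mathfrak{s}^{p}
\quad\Longrightarrow\quad
p\,\varkappa^{p-1}\,d\varkappa = \left(\mathfrak{s}^{p}-\mathfrak{t}^{p}\right)d\zeta ,
\qquad
\varkappa\colon \mathfrak{t}\to\mathfrak{s}
\ \text{ as } \
\zeta\colon 0\to 1 .
```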

Then, from Equations (38) and (39), we have

$$\begin{cases}
\dfrac{p}{\mathfrak{s}^p-\mathfrak{t}^p}\displaystyle\int_{\mathfrak{t}}^{\mathfrak{s}} \varkappa^{p-1}\,\mathfrak{U}_*(\varkappa,\phi)\,\Psi(\varkappa)\,d\varkappa \le \big[\mathfrak{U}_*(\mathfrak{t},\phi)+\mathfrak{U}_*(\mathfrak{s},\phi)\big]\displaystyle\int_0^1 \zeta^{s}\,\Psi\Big(\big[(1-\zeta)\mathfrak{t}^p+\zeta\mathfrak{s}^p\big]^{\frac{1}{p}}\Big)d\zeta,\\[3mm]
\dfrac{p}{\mathfrak{s}^p-\mathfrak{t}^p}\displaystyle\int_{\mathfrak{t}}^{\mathfrak{s}} \varkappa^{p-1}\,\mathfrak{U}^*(\varkappa,\phi)\,\Psi(\varkappa)\,d\varkappa \le \big[\mathfrak{U}^*(\mathfrak{t},\phi)+\mathfrak{U}^*(\mathfrak{s},\phi)\big]\displaystyle\int_0^1 \zeta^{s}\,\Psi\Big(\big[(1-\zeta)\mathfrak{t}^p+\zeta\mathfrak{s}^p\big]^{\frac{1}{p}}\Big)d\zeta,
\end{cases}$$

that is,

$$\begin{split}
&\frac{p}{\mathfrak{s}^p-\mathfrak{t}^p}\left[\int_{\mathfrak{t}}^{\mathfrak{s}} \varkappa^{p-1}\,\mathfrak{U}_*(\varkappa,\phi)\,\Psi(\varkappa)\,d\varkappa,\ \int_{\mathfrak{t}}^{\mathfrak{s}} \varkappa^{p-1}\,\mathfrak{U}^*(\varkappa,\phi)\,\Psi(\varkappa)\,d\varkappa\right]\\
&\qquad\le_I \big[\mathfrak{U}_*(\mathfrak{t},\phi)+\mathfrak{U}_*(\mathfrak{s},\phi),\ \mathfrak{U}^*(\mathfrak{t},\phi)+\mathfrak{U}^*(\mathfrak{s},\phi)\big]\int_0^1 \zeta^{s}\,\Psi\Big(\big[(1-\zeta)\mathfrak{t}^p+\zeta\mathfrak{s}^p\big]^{\frac{1}{p}}\Big)d\zeta,
\end{split}$$

hence

$$\frac{p}{\mathfrak{s}^p-\mathfrak{t}^p}\,(FR)\int_{\mathfrak{t}}^{\mathfrak{s}} \varkappa^{p-1}\,\mathfrak{U}(\varkappa)\,\Psi(\varkappa)\,d\varkappa \preccurlyeq \big[\mathfrak{U}(\mathfrak{t})\,\widetilde{+}\,\mathfrak{U}(\mathfrak{s})\big]\int_0^1 \zeta^{s}\,\Psi\Big(\big[(1-\zeta)\mathfrak{t}^p+\zeta\mathfrak{s}^p\big]^{\frac{1}{p}}\Big)d\zeta.$$
 
$$\Box$$
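As a numerical sanity check of Theorem 8 (not part of the proof), the following Python sketch tests the scalar endpoint inequality for the sample choices *p* = 1 and *s* = 1 (ordinary convexity), the convex endpoint function e<sup>κ</sup> on [1, 4], and the *p*-symmetric weight that reappears in Example 4 below; all concrete choices here are ours.

```python
import numpy as np

# Sanity check of the second Fejér-type inequality for p = 1, s = 1,
# U_*(k) = e^k on [t, s] = [1, 4], and the symmetric weight
# Psi(k) = k - 1 on [1, 5/2], 4 - k on (5/2, 4].
def trap(y, x):
    """Composite trapezoidal rule for samples y on the grid x."""
    return float(np.sum((y[1:] + y[:-1]) / 2.0 * np.diff(x)))

t, s, p = 1.0, 4.0, 1                      # endpoints and exponent p (the s-exponent is 1)
U = np.exp                                 # convex lower endpoint function
Psi = lambda k: np.where(k <= 2.5, k - 1.0, 4.0 - k)

k = np.linspace(t, s, 100001)
z = np.linspace(0.0, 1.0, 100001)

lhs = p / (s**p - t**p) * trap(k**(p - 1) * U(k) * Psi(k), k)
rhs = (U(t) + U(s)) * trap(z * Psi(((1 - z) * t**p + z * s**p)**(1.0 / p)), z)
print(lhs < rhs)   # the bound of Theorem 8 holds for this sample
```

Here `lhs` ≈ 10.98 and `rhs` ≈ 21.49, so the inequality holds with room to spare.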

**Theorem 9.** (First H–H Fejér inequality for (*p*, *s*)-convex *F-I-V-F*) Let U : [t, s] → F*C*(R) be a (*p*, *s*)-convex *F-I-V-F* with t < s, whose *ϕ*-levels define the family of *I-V-F*s U*ϕ* : [t, s] ⊂ R → K*C*<sup>+</sup> given by U*ϕ*(κ) = [U∗(κ, *ϕ*), U<sup>∗</sup>(κ, *ϕ*)] for all κ ∈ [t, s] and all *ϕ* ∈ [0, 1]. If U ∈ FR([t, s]) and Ψ : [t, s] → R satisfies Ψ(κ) ≥ 0, is *p*-symmetric with respect to $\left[\frac{\mathfrak{t}^p+\mathfrak{s}^p}{2}\right]^{\frac{1}{p}}$, and $\int_{\mathfrak{t}}^{\mathfrak{s}}\Psi(\varkappa)\,d\varkappa>0$, then

$$2^{s-1}\,\mathfrak{U}\left(\left[\frac{\mathfrak{t}^p+\mathfrak{s}^p}{2}\right]^{\frac{1}{p}}\right) \preccurlyeq \frac{p}{\int_{\mathfrak{t}}^{\mathfrak{s}} \varkappa^{p-1}\Psi(\varkappa)\,d\varkappa}\,(FR)\int_{\mathfrak{t}}^{\mathfrak{s}} \varkappa^{p-1}\,\mathfrak{U}(\varkappa)\,\Psi(\varkappa)\,d\varkappa. \tag{40}$$

If U is a (*p*, *s*)-concave *F-I-V-F*, then inequality (40) is reversed.

**Proof.** Since U is a (*p*, *s*)-convex *F-I-V-F*, for each *ϕ* ∈ [0, 1] we have

$$\begin{split}
2^{s}\,\mathfrak{U}_*\left(\left[\frac{\mathfrak{t}^p+\mathfrak{s}^p}{2}\right]^{\frac{1}{p}},\phi\right) &\le \mathfrak{U}_*\Big(\big[\zeta\mathfrak{t}^p+(1-\zeta)\mathfrak{s}^p\big]^{\frac{1}{p}},\phi\Big)+\mathfrak{U}_*\Big(\big[(1-\zeta)\mathfrak{t}^p+\zeta\mathfrak{s}^p\big]^{\frac{1}{p}},\phi\Big),\\
2^{s}\,\mathfrak{U}^*\left(\left[\frac{\mathfrak{t}^p+\mathfrak{s}^p}{2}\right]^{\frac{1}{p}},\phi\right) &\le \mathfrak{U}^*\Big(\big[\zeta\mathfrak{t}^p+(1-\zeta)\mathfrak{s}^p\big]^{\frac{1}{p}},\phi\Big)+\mathfrak{U}^*\Big(\big[(1-\zeta)\mathfrak{t}^p+\zeta\mathfrak{s}^p\big]^{\frac{1}{p}},\phi\Big).
\end{split}\tag{41}$$

By multiplying Equation (41) by $\Psi\Big(\big[\zeta\mathfrak{t}^p+(1-\zeta)\mathfrak{s}^p\big]^{\frac{1}{p}}\Big)=\Psi\Big(\big[(1-\zeta)\mathfrak{t}^p+\zeta\mathfrak{s}^p\big]^{\frac{1}{p}}\Big)$ and integrating with respect to *ζ* over [0, 1], we obtain

$$\begin{split}
2^{s}\,\mathfrak{U}_*\left(\left[\frac{\mathfrak{t}^p+\mathfrak{s}^p}{2}\right]^{\frac{1}{p}},\phi\right)\int_0^1 \Psi\Big(\big[(1-\zeta)\mathfrak{t}^p+\zeta\mathfrak{s}^p\big]^{\frac{1}{p}}\Big)d\zeta
&\le \int_0^1 \mathfrak{U}_*\Big(\big[\zeta\mathfrak{t}^p+(1-\zeta)\mathfrak{s}^p\big]^{\frac{1}{p}},\phi\Big)\Psi\Big(\big[\zeta\mathfrak{t}^p+(1-\zeta)\mathfrak{s}^p\big]^{\frac{1}{p}}\Big)d\zeta\\
&\quad+\int_0^1 \mathfrak{U}_*\Big(\big[(1-\zeta)\mathfrak{t}^p+\zeta\mathfrak{s}^p\big]^{\frac{1}{p}},\phi\Big)\Psi\Big(\big[(1-\zeta)\mathfrak{t}^p+\zeta\mathfrak{s}^p\big]^{\frac{1}{p}}\Big)d\zeta,\\
2^{s}\,\mathfrak{U}^*\left(\left[\frac{\mathfrak{t}^p+\mathfrak{s}^p}{2}\right]^{\frac{1}{p}},\phi\right)\int_0^1 \Psi\Big(\big[(1-\zeta)\mathfrak{t}^p+\zeta\mathfrak{s}^p\big]^{\frac{1}{p}}\Big)d\zeta
&\le \int_0^1 \mathfrak{U}^*\Big(\big[\zeta\mathfrak{t}^p+(1-\zeta)\mathfrak{s}^p\big]^{\frac{1}{p}},\phi\Big)\Psi\Big(\big[\zeta\mathfrak{t}^p+(1-\zeta)\mathfrak{s}^p\big]^{\frac{1}{p}}\Big)d\zeta\\
&\quad+\int_0^1 \mathfrak{U}^*\Big(\big[(1-\zeta)\mathfrak{t}^p+\zeta\mathfrak{s}^p\big]^{\frac{1}{p}},\phi\Big)\Psi\Big(\big[(1-\zeta)\mathfrak{t}^p+\zeta\mathfrak{s}^p\big]^{\frac{1}{p}}\Big)d\zeta.
\end{split}\tag{42}$$

Since

$$\begin{split}
\int_0^1 \mathfrak{U}_*\Big(\big[\zeta\mathfrak{t}^p+(1-\zeta)\mathfrak{s}^p\big]^{\frac{1}{p}},\phi\Big)\Psi\Big(\big[\zeta\mathfrak{t}^p+(1-\zeta)\mathfrak{s}^p\big]^{\frac{1}{p}}\Big)d\zeta
&=\int_0^1 \mathfrak{U}_*\Big(\big[(1-\zeta)\mathfrak{t}^p+\zeta\mathfrak{s}^p\big]^{\frac{1}{p}},\phi\Big)\Psi\Big(\big[(1-\zeta)\mathfrak{t}^p+\zeta\mathfrak{s}^p\big]^{\frac{1}{p}}\Big)d\zeta\\
&=\frac{p}{\mathfrak{s}^p-\mathfrak{t}^p}\int_{\mathfrak{t}}^{\mathfrak{s}} \varkappa^{p-1}\,\mathfrak{U}_*(\varkappa,\phi)\,\Psi(\varkappa)\,d\varkappa,\\
\int_0^1 \mathfrak{U}^*\Big(\big[\zeta\mathfrak{t}^p+(1-\zeta)\mathfrak{s}^p\big]^{\frac{1}{p}},\phi\Big)\Psi\Big(\big[\zeta\mathfrak{t}^p+(1-\zeta)\mathfrak{s}^p\big]^{\frac{1}{p}}\Big)d\zeta
&=\int_0^1 \mathfrak{U}^*\Big(\big[(1-\zeta)\mathfrak{t}^p+\zeta\mathfrak{s}^p\big]^{\frac{1}{p}},\phi\Big)\Psi\Big(\big[(1-\zeta)\mathfrak{t}^p+\zeta\mathfrak{s}^p\big]^{\frac{1}{p}}\Big)d\zeta\\
&=\frac{p}{\mathfrak{s}^p-\mathfrak{t}^p}\int_{\mathfrak{t}}^{\mathfrak{s}} \varkappa^{p-1}\,\mathfrak{U}^*(\varkappa,\phi)\,\Psi(\varkappa)\,d\varkappa,
\end{split}\tag{43}$$

From Equations (42) and (43), we have

$$\begin{split}
2^{s-1}\,\mathfrak{U}_*\left(\left[\frac{\mathfrak{t}^p+\mathfrak{s}^p}{2}\right]^{\frac{1}{p}},\phi\right) &\le \frac{p}{\int_{\mathfrak{t}}^{\mathfrak{s}}\varkappa^{p-1}\Psi(\varkappa)\,d\varkappa}\int_{\mathfrak{t}}^{\mathfrak{s}} \varkappa^{p-1}\,\mathfrak{U}_*(\varkappa,\phi)\,\Psi(\varkappa)\,d\varkappa,\\
2^{s-1}\,\mathfrak{U}^*\left(\left[\frac{\mathfrak{t}^p+\mathfrak{s}^p}{2}\right]^{\frac{1}{p}},\phi\right) &\le \frac{p}{\int_{\mathfrak{t}}^{\mathfrak{s}}\varkappa^{p-1}\Psi(\varkappa)\,d\varkappa}\int_{\mathfrak{t}}^{\mathfrak{s}} \varkappa^{p-1}\,\mathfrak{U}^*(\varkappa,\phi)\,\Psi(\varkappa)\,d\varkappa.
\end{split}$$

From this, we have

$$\begin{split}
&2^{s-1}\left[\mathfrak{U}_*\left(\left[\frac{\mathfrak{t}^p+\mathfrak{s}^p}{2}\right]^{\frac{1}{p}},\phi\right),\ \mathfrak{U}^*\left(\left[\frac{\mathfrak{t}^p+\mathfrak{s}^p}{2}\right]^{\frac{1}{p}},\phi\right)\right]\\
&\qquad\le_I \frac{p}{\int_{\mathfrak{t}}^{\mathfrak{s}}\varkappa^{p-1}\Psi(\varkappa)\,d\varkappa}\left[\int_{\mathfrak{t}}^{\mathfrak{s}} \varkappa^{p-1}\,\mathfrak{U}_*(\varkappa,\phi)\,\Psi(\varkappa)\,d\varkappa,\ \int_{\mathfrak{t}}^{\mathfrak{s}} \varkappa^{p-1}\,\mathfrak{U}^*(\varkappa,\phi)\,\Psi(\varkappa)\,d\varkappa\right],
\end{split}$$

that is,

$$2^{s-1}\,\mathfrak{U}\left(\left[\frac{\mathfrak{t}^p+\mathfrak{s}^p}{2}\right]^{\frac{1}{p}}\right) \preccurlyeq \frac{p}{\int_{\mathfrak{t}}^{\mathfrak{s}}\varkappa^{p-1}\Psi(\varkappa)\,d\varkappa}\,(FR)\int_{\mathfrak{t}}^{\mathfrak{s}} \varkappa^{p-1}\,\mathfrak{U}(\varkappa)\,\Psi(\varkappa)\,d\varkappa,$$

and this completes the proof. $\Box$

**Remark 5.** If we take *s* = 1 in Theorems 8 and 9, then we obtain the corresponding theorems for *p*-convex *F-I-V-F*s; see [13].


**Example 4.** We consider the *F-I-V-F* U : [1, 4] → F*C*(R) defined by

$$\mathfrak{U}(\varkappa)(\sigma) = \begin{cases}
\dfrac{\sigma - e^{\varkappa^p}}{e^{\varkappa^p}}, & \sigma \in \big[e^{\varkappa^p},\, 2e^{\varkappa^p}\big],\\[2mm]
\dfrac{4e^{\varkappa^p} - \sigma}{2e^{\varkappa^p}}, & \sigma \in \big(2e^{\varkappa^p},\, 4e^{\varkappa^p}\big],\\[1mm]
0, & \text{otherwise.}
\end{cases}\tag{44}$$

Then, for each *ϕ* ∈ [0, 1], we have U*ϕ*(κ) = $\big[(1+\phi)\,e^{\varkappa^p},\ 2(2-\phi)\,e^{\varkappa^p}\big]$. Since the endpoint functions U∗(κ, *ϕ*), U<sup>∗</sup>(κ, *ϕ*) are (*p*, *s*)-convex for each *s*, *ϕ* ∈ [0, 1], U(κ) is a (*p*, *s*)-convex *F-I-V-F*. If

$$\Psi(\varkappa) = \begin{cases} \varkappa^p - 1, & \varkappa \in \left[1, \frac{5}{2}\right], \\ 4 - \varkappa^p, & \varkappa \in \left(\frac{5}{2}, 4\right], \end{cases} \tag{45}$$

where *p* = *s* = 1. Then, we have

$$\begin{split}
\frac{p}{\mathfrak{s}^p-\mathfrak{t}^p}\int_1^4 \varkappa^{p-1}\,\mathfrak{U}_*(\varkappa,\phi)\,\Psi(\varkappa)\,d\varkappa &= \frac{1}{3}\int_1^4 \mathfrak{U}_*(\varkappa,\phi)\,\Psi(\varkappa)\,d\varkappa\\
&=\frac{1}{3}\int_1^{\frac{5}{2}}\mathfrak{U}_*(\varkappa,\phi)\,\Psi(\varkappa)\,d\varkappa+\frac{1}{3}\int_{\frac{5}{2}}^{4}\mathfrak{U}_*(\varkappa,\phi)\,\Psi(\varkappa)\,d\varkappa\\
&=\frac{1}{3}(1+\phi)\left[\int_1^{\frac{5}{2}} e^{\varkappa}(\varkappa-1)\,d\varkappa+\int_{\frac{5}{2}}^{4} e^{\varkappa}(4-\varkappa)\,d\varkappa\right] \approx 11(1+\phi),\\
\frac{p}{\mathfrak{s}^p-\mathfrak{t}^p}\int_1^4 \varkappa^{p-1}\,\mathfrak{U}^*(\varkappa,\phi)\,\Psi(\varkappa)\,d\varkappa &= \frac{1}{3}\int_1^4 \mathfrak{U}^*(\varkappa,\phi)\,\Psi(\varkappa)\,d\varkappa\\
&=\frac{2}{3}(2-\phi)\left[\int_1^{\frac{5}{2}} e^{\varkappa}(\varkappa-1)\,d\varkappa+\int_{\frac{5}{2}}^{4} e^{\varkappa}(4-\varkappa)\,d\varkappa\right] \approx 22(2-\phi),
\end{split}\tag{46}$$

and

$$\begin{split}
\big[\mathfrak{U}_*(\mathfrak{t},\phi)+\mathfrak{U}_*(\mathfrak{s},\phi)\big]\int_0^1 \zeta^{s}\,\Psi\Big(\big[(1-\zeta)\mathfrak{t}^p+\zeta\mathfrak{s}^p\big]^{\frac{1}{p}}\Big)d\zeta
&=(1+\phi)\big(e+e^4\big)\left[\int_0^{\frac{1}{2}}3\zeta^2\,d\zeta+\int_{\frac{1}{2}}^{1}\zeta(3-3\zeta)\,d\zeta\right]\approx \frac{43}{2}(1+\phi),\\
\big[\mathfrak{U}^*(\mathfrak{t},\phi)+\mathfrak{U}^*(\mathfrak{s},\phi)\big]\int_0^1 \zeta^{s}\,\Psi\Big(\big[(1-\zeta)\mathfrak{t}^p+\zeta\mathfrak{s}^p\big]^{\frac{1}{p}}\Big)d\zeta
&=2(2-\phi)\big(e+e^4\big)\left[\int_0^{\frac{1}{2}}3\zeta^2\,d\zeta+\int_{\frac{1}{2}}^{1}\zeta(3-3\zeta)\,d\zeta\right]\approx 43(2-\phi).
\end{split}\tag{47}$$

From Equations (46) and (47), we have

$$\big[11(1+\phi),\ 22(2-\phi)\big] \le_I \left[\frac{43}{2}(1+\phi),\ 43(2-\phi)\right], \quad\text{for each } \phi \in [0, 1].$$

Hence, Theorem 8 is verified. For Theorem 9, we have

$$\begin{aligned} 2^{s-1}\,\mathfrak{U}_*\left(\left[\frac{\mathfrak{t}^p+\mathfrak{s}^p}{2}\right]^{\frac{1}{p}},\phi\right) &\approx \frac{61}{5}(1+\phi),\\ 2^{s-1}\,\mathfrak{U}^*\left(\left[\frac{\mathfrak{t}^p+\mathfrak{s}^p}{2}\right]^{\frac{1}{p}},\phi\right) &\approx \frac{122}{5}(2-\phi), \end{aligned}\tag{48}$$
 

$$\int_{\mathfrak{t}}^{\mathfrak{s}} \varkappa^{p-1}\,\Psi(\varkappa)\,d\varkappa = \int_1^{\frac{5}{2}}(\varkappa-1)\,d\varkappa + \int_{\frac{5}{2}}^{4}(4-\varkappa)\,d\varkappa = \frac{9}{4},$$

$$\begin{split}
\frac{p}{\int_{\mathfrak{t}}^{\mathfrak{s}}\varkappa^{p-1}\Psi(\varkappa)\,d\varkappa}\int_1^4 \varkappa^{p-1}\,\mathfrak{U}_*(\varkappa,\phi)\,\Psi(\varkappa)\,d\varkappa &\approx \frac{73}{5}(1+\phi),\\
\frac{p}{\int_{\mathfrak{t}}^{\mathfrak{s}}\varkappa^{p-1}\Psi(\varkappa)\,d\varkappa}\int_1^4 \varkappa^{p-1}\,\mathfrak{U}^*(\varkappa,\phi)\,\Psi(\varkappa)\,d\varkappa &\approx \frac{293}{10}(2-\phi).
\end{split}\tag{49}$$

From Equations (48) and (49), we have

$$\left[\frac{61}{5}(1+\phi),\ \frac{122}{5}(2-\phi)\right] \le_I \left[\frac{73}{5}(1+\phi),\ \frac{293}{10}(2-\phi)\right].$$

Hence, Theorem 9 is verified.
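The constants reported in Example 4 can be recomputed mechanically; the Python check below (ours, not part of the paper) evaluates the integrals in Equations (46)–(49) with *p* = *s* = 1, omitting the common factors (1 + *ϕ*) and 2(2 − *ϕ*).

```python
import numpy as np
from math import exp

# Recomputing Example 4's constants: [t, s] = [1, 4],
# phi-levels [(1+phi)e^k, 2(2-phi)e^k], factors in phi dropped.
def trap(f, a, b, n=100000):
    """Composite trapezoidal rule for f on [a, b]."""
    x = np.linspace(a, b, n + 1)
    y = f(x)
    return float(np.sum((y[1:] + y[:-1]) / 2.0) * (b - a) / n)

Psi = lambda k: np.where(k <= 2.5, k - 1.0, 4.0 - k)

I = trap(lambda k: np.exp(k) * Psi(k), 1.0, 4.0)  # \int_1^4 e^k Psi(k) dk
lhs46 = I / 3.0                                   # ~ 11     in Equation (46)
rhs47 = (exp(1) + exp(4)) * trap(lambda z: z * Psi(1.0 + 3.0 * z), 0.0, 1.0)  # ~ 43/2 in (47)
w = trap(Psi, 1.0, 4.0)                           # = 9/4, the weight integral
lhs48 = exp(2.5)                                  # ~ 61/5   in Equation (48)
rhs49 = I / w                                     # ~ 73/5   in Equation (49)
print(round(lhs46, 1), round(rhs47, 1), round(w, 2), round(lhs48, 1), round(rhs49, 1))
# prints: 11.0 21.5 2.25 12.2 14.6
```

Each printed value matches the corresponding constant in (46)–(49), confirming both inequality checks above.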

#### **5. Conclusions and Future Developments**

In this study, we have provided refined versions of several inequalities in the framework of fuzzy interval space, which offer better approximations than the corresponding interval integral inequalities.

Then, for mappings satisfying the property that the product of two (*p*, *s*)-convex *F-I-V-F*s is again a (*p*, *s*)-convex *F-I-V-F*, we established certain fuzzy interval integral inequalities of fuzzy interval H–H type. It is a fascinating topic to apply these fuzzy interval inequalities to *ϕ*-type special means, numerical integration, and probability density functions. With the methods and ideas provided in this article, interested readers are encouraged to investigate fuzzy interval inequalities further. In the future, we will try to extend this concept and its generalizations with the help of fuzzy fractional integral operators.

**Author Contributions:** Conceptualization, M.B.K.; validation, H.B. and S.T.; formal analysis, H.B. and S.T.; investigation, M.B.K. and S.T.; resources, M.B.K. and H.B.; writing—original draft, M.B.K. and H.B.; writing—review and editing, M.B.K. and S.T.; visualization, M.B.K., H.B. and S.T.; supervision, M.B.K. and S.T.; project administration, H.B., M.B.K. and S.T. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research received no external funding.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Not Applicable.

**Acknowledgments:** The authors would like to thank the Rector, COMSATS University Islamabad, Islamabad, Pakistan, for providing excellent research facilities.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **Abbreviations**


#### **References**


### *Article* **The Method of Fundamental Solutions for the 3D Laplace Inverse Geometric Problem on an Annular Domain**

**Mojtaba Sajjadmanesh 1, Hassen Aydi 2,3,4,***∗***, Eskandar Ameer <sup>5</sup> and Choonkil Park 6,***<sup>∗</sup>*


**Abstract:** In this paper, we are interested in an inverse geometric problem for the three-dimensional Laplace equation to recover an inner boundary of an annular domain. This work is based on the method of fundamental solutions (MFS), imposing the boundary Cauchy data in a least-squares sense and minimising an objective function. This approach can also be applied to noisy boundary Cauchy data. The simplicity and efficiency of this method are illustrated in several numerical examples.

**Keywords:** inverse geometric problem; Laplace equation; method of fundamental solutions; least-squares problem

#### **1. Introduction**

Inverse geometry problems, an important subclass of inverse problems, can be subdivided into two kinds depending on the location of the unknown boundary: in the first kind, a portion of the outer boundary of the solution domain is unknown, whilst in the second kind, the inner boundary is unknown.

There are many methods for solving the inverse geometry problems, such as the boundary element regularisation method by Lesnic et al. [1], the method of fundamental solutions and moving pseudo-boundary method by Karageorghis et al. [2–4], the boundary function method by Wang et al. [5], the conjugate gradient method (CGM) and the boundary element technique by Huang et al. [6,7].

Bin-Mohsin and Lesnic in 2012 applied the method of fundamental solutions (MFS) to the modified Helmholtz inverse geometry problem on an annular domain [8]. The purpose of this paper is to extend that approach to the three-dimensional Laplace equation, again based on the method of fundamental solutions. Finally, two examples are presented to show the simplicity and efficiency of this method.

#### **2. Formulation of the Inverse Geometric Problem**

Let *<sup>D</sup>* <sup>⊂</sup> <sup>R</sup><sup>3</sup> be a simply connected domain with an unknown boundary *<sup>∂</sup><sup>D</sup>* which is compactly contained in a simply connected domain <sup>Ω</sup> <sup>⊂</sup> <sup>R</sup><sup>3</sup> with the boundary *<sup>∂</sup>*Ω. Let us consider the following inverse problem:

$$
\Delta u = 0 \quad \text{in } \Omega \backslash \overline{D}, \tag{1}
$$

**Citation:** Sajjadmanesh, M.; Aydi, H.; Ameer, E.; Park, C. The Method of Fundamental Solutions for the 3D Laplace Inverse Geometric Problem on an Annular Domain. *Fractal Fract.* **2022**, *6*, 66. https://doi.org/ 10.3390/fractalfract6020066

Academic Editor: Savin Trean¸t ˘a

Received: 11 December 2021 Accepted: 9 January 2022 Published: 27 January 2022

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

subject to the boundary conditions,

$$
u = f \quad \text{on} \quad \partial \Omega, \tag{2}
$$

$$\frac{\partial u}{\partial n} = g \quad \text{on} \quad \partial \Omega,\tag{3}$$

$$
u = h \quad \text{on} \quad \partial D, \tag{4}
$$

where *f* ∈ *H*<sup>1/2</sup>(*∂*Ω) and *g* ∈ *H*<sup>−1/2</sup>(*∂*Ω) are given functions and *n* is the outward unit normal vector on *∂*Ω. Moreover, the function *h* ∈ *H*<sup>1/2</sup>(*∂D*) is given on the unknown boundary *∂D*. Without loss of generality, we can suppose that Ω is the unit ball *B*(0; 1); otherwise, we can conformally map the exterior of the simply connected domain Ω onto the exterior of the unit ball.

The unknown boundary *∂D* can be expressed in spherical coordinates as

$$\partial D = \Big\{\, r(\theta, \varphi)\,\big(\cos\theta\sin\varphi,\ \sin\theta\sin\varphi,\ \cos\varphi\big)\ ;\ \ \theta \in [0, 2\pi),\ \varphi \in [0, \pi] \,\Big\},$$

where *r*(*θ*, *ϕ*) is a 2*π*-periodic and *π*-periodic smooth function with respect to *θ* and *ϕ*, respectively, with values in the interval (0, 1).

The inverse problem we are concerned with is to determine geometrically the domain boundary *∂D* by utilising the method of fundamental solutions.

#### **3. The Least-Square Problem Based on the MFS**

In the classic MFS, the solution of a homogeneous linear partial differential equation (PDE) is approximated by a linear combination of the fundamental solutions with the set of sources located outside the problem domain and a set of points on the domain boundary. The linear combination coefficients are determined by collocation or, alternatively, with a least-squares fit of the boundary conditions.

Based on the MFS, one can approximate the solution of (1) by a linear combination of fundamental solutions of the Laplace equation, whose fundamental solution is given by [9]

$$\mathcal{U}(\mathbf{z}, \mathbf{s}) = \frac{1}{4\pi r}; \ r = \|\mathbf{z} - \mathbf{s}\|. \tag{5}$$

i.e.,

$$\mu(\mathbf{z}) = \sum\_{j=1}^{n\_s} c\_j \mathcal{U}(\mathbf{z}, \mathbf{s}\_j), \tag{6}$$

where the collocation points **z***<sup>i</sup>* and **z***i*+*<sup>M</sup>* are uniformly located on *∂*Ω and *∂D*, respectively, i.e.,

$$\mathbf{z}_{i} = (\cos\theta_i\sin\varphi_i,\ \sin\theta_i\sin\varphi_i,\ \cos\varphi_i), \quad i = \overline{1, M}, \tag{7}$$

$$\mathbf{z}_{i+M} = r_i\,(\cos\theta_i\sin\varphi_i,\ \sin\theta_i\sin\varphi_i,\ \cos\varphi_i), \quad i = \overline{1, N}. \tag{8}$$

Further, the *ns* := *M* + *N* source points **s***<sup>j</sup>* and **s***j*+*<sup>M</sup>* are uniformly located on the outside of Ω and the inside of *D*, respectively, i.e.,

$$\mathbf{s}_{j} = R_1\,(\cos\theta_j\sin\varphi_j,\ \sin\theta_j\sin\varphi_j,\ \cos\varphi_j), \quad j = \overline{1, M}, \tag{9}$$

$$\mathbf{s}_{j+M} = \frac{r_j}{R_2}\,(\cos\theta_j\sin\varphi_j,\ \sin\theta_j\sin\varphi_j,\ \cos\varphi_j), \quad j = \overline{1, N}, \tag{10}$$

where *R*1, *R*<sup>2</sup> > 1.
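As an illustration of how the approximation (5)–(6) behaves for the *direct* problem, the following Python sketch (our own; the point counts, the Fibonacci-lattice sampling, and the least-squares fit are illustrative assumptions, not the paper's exact setup) fits the coefficients *c<sub>j</sub>* to Dirichlet data of a known harmonic function on the unit sphere, with sources on a sphere of radius 2, and evaluates the result at an interior point.

```python
import numpy as np

# Direct-problem sketch of Equations (5)-(6): fit the MFS coefficients c_j by
# linear least squares to Dirichlet data of the harmonic function
# u = x1*x2 + x1*x3 + x2*x3 (the exact solution of Example 1) on the unit
# sphere, with source points on a sphere of radius R1 = 2.
def fibonacci_sphere(n):
    """Roughly uniform points on the unit sphere (Fibonacci lattice)."""
    i = np.arange(n) + 0.5
    ph = np.arccos(1.0 - 2.0 * i / n)            # polar angle
    th = np.pi * (1.0 + 5.0**0.5) * i            # azimuth
    return np.stack([np.cos(th) * np.sin(ph),
                     np.sin(th) * np.sin(ph),
                     np.cos(ph)], axis=1)

u_exact = lambda X: X[:, 0]*X[:, 1] + X[:, 0]*X[:, 2] + X[:, 1]*X[:, 2]
fund = lambda Z, S: 1.0 / (4.0 * np.pi *
                           np.linalg.norm(Z[:, None, :] - S[None, :, :], axis=2))

Z = fibonacci_sphere(400)                  # collocation points on the unit sphere
S = 2.0 * fibonacci_sphere(100)            # sources outside the domain (R1 = 2)
A = fund(Z, S)                             # A_ij = U(z_i, s_j), Equation (5)
c, *_ = np.linalg.lstsq(A, u_exact(Z), rcond=None)

X = np.array([[0.3, 0.2, 0.4]])            # interior test point
err = float(abs(fund(X, S) @ c - u_exact(X))[0])
print("interior error:", err)
```

The interior error is tiny because the data are smooth and the sources lie well away from the boundary; in the inverse problem below, the radii *r<sub>i</sub>* are unknown as well, which is what makes the least-squares system nonlinear.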

The coefficient vector $\mathbf{c} = (c_j)_{j=\overline{1,M+N}}$ in the linear combination (6), together with the radial vector $\mathbf{r} = (r_j)_{j=\overline{1,N}}$, can be determined by imposing the boundary conditions (2)–(4) in a least-squares sense, which recasts into minimising the objective function

$$T(\mathbf{c}, \mathbf{r}) = \|u - f\|^2_{L^2(\partial\Omega)} + \left\|\frac{\partial u}{\partial n} - g\right\|^2_{L^2(\partial\Omega)} + \|u - h\|^2_{L^2(\partial D)}. \tag{11}$$

Upon discretisation, Equation (11) yields

$$\begin{split}
T(\mathbf{c}, \mathbf{r}) &= \sum_{i=1}^{M}\left[\sum_{j=1}^{M+N} c_j\,\mathcal{U}(\mathbf{z}_i, \mathbf{s}_j) - f(\mathbf{z}_i)\right]^2 + \sum_{i=M+1}^{2M}\left[\sum_{j=1}^{M+N} c_j\,\frac{\partial\mathcal{U}}{\partial n}(\mathbf{z}_{i-M}, \mathbf{s}_j) - g(\mathbf{z}_{i-M})\right]^2\\
&\quad+\sum_{i=2M+1}^{2M+N}\left[\sum_{j=1}^{M+N} c_j\,\mathcal{U}(\mathbf{z}_{i-M}, \mathbf{s}_j) - h(\mathbf{z}_{i-M})\right]^2.
\end{split}\tag{12}$$

In general, the boundary data *F* ∈ { *f* , *g*, *h*} are measured noisy data satisfying

$$F_i^{\delta} = F_i + \delta\, rand(i)\, F_i, \tag{13}$$

where *δ* is the percentage noise and *rand*(*i*) is a random number drawn from the uniform distribution on the interval [−1, 1], generated by the MATLAB expression −1 + 2\**rand*(*i*).
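The noise model (13) can be sketched in a few lines of Python; `add_noise` is our illustrative helper name, and the generator plays the role of MATLAB's `rand`.

```python
import numpy as np

# Noise model of Equation (13): each measured boundary value F_i is perturbed
# multiplicatively by delta * rand(i), with rand(i) uniform on [-1, 1]
# (the Python analogue of MATLAB's -1 + 2*rand).
rng = np.random.default_rng(42)

def add_noise(F, delta):
    r = -1.0 + 2.0 * rng.random(F.shape)   # uniform on [-1, 1]
    return F + delta * r * F

F = np.linspace(1.0, 2.0, 5)
F_noisy = add_noise(F, delta=0.05)         # 5% noise, as in Section 5
print(np.all(np.abs(F_noisy - F) <= 0.05 * np.abs(F)))   # prints True
```

Because |*rand*(*i*)| ≤ 1, the perturbation of each datum is at most *δ* of its magnitude.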

Imposing noise on all measured data implies

$$\begin{split}
T^{\delta}(\mathbf{c}, \mathbf{r}) &= \sum_{i=1}^{M}\left[\sum_{j=1}^{M+N} c_j\,\mathcal{U}(\mathbf{z}_i, \mathbf{s}_j) - f^{\delta}(\mathbf{z}_i)\right]^2 + \sum_{i=M+1}^{2M}\left[\sum_{j=1}^{M+N} c_j\,\frac{\partial\mathcal{U}}{\partial n}(\mathbf{z}_{i-M}, \mathbf{s}_j) - g^{\delta}(\mathbf{z}_{i-M})\right]^2\\
&\quad+\sum_{i=2M+1}^{2M+N}\left[\sum_{j=1}^{M+N} c_j\,\mathcal{U}(\mathbf{z}_{i-M}, \mathbf{s}_j) - h^{\delta}(\mathbf{z}_{i-M})\right]^2.
\end{split}\tag{14}$$

The minimisation of (12) or (14) imposes 2*M* + *N* nonlinear equations on the *M* + 2*N* unknowns (**c**, **r**), and for a unique solution, it is necessary that *M* ≥ *N*.

#### **4. Error Analysis and the Regularisation**

The accuracy of the presented method is evaluated by the normalised relative root mean square error (RMSE) and *L*∞-error:

$$\text{RMSE} = \frac{\left\{\frac{1}{N}\sum_{i=1}^{N}\big|r_i^{(an)} - r_i^{(num)}\big|^2\right\}^{\frac{1}{2}}}{\max\limits_{1 \le i \le N}\big|r_i^{(an)}\big|}\ , \qquad L_{\infty}\text{-error} = \max\limits_{1 \le i \le N}\big|r_i^{(an)} - r_i^{(num)}\big|,$$

where $r_i^{(an)}$ and $r_i^{(num)}$ denote the analytical and numerical radial vectors, respectively, at the *i*th collocation point on the boundary *∂D*.
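The two error measures are straightforward to compute; in the Python fragment below the constant radius 0.7 matches Example 1, while the perturbed values are made-up numbers for illustration.

```python
import numpy as np

# Normalised relative RMSE and L-infinity error, as defined above.
def rmse(r_an, r_num):
    return float(np.sqrt(np.mean(np.abs(r_an - r_num)**2)) / np.max(np.abs(r_an)))

def linf(r_an, r_num):
    return float(np.max(np.abs(r_an - r_num)))

r_an = np.full(4, 0.7)                       # analytical radii (Example 1)
r_num = np.array([0.69, 0.71, 0.70, 0.68])   # illustrative numerical radii
print(round(rmse(r_an, r_num), 4), round(linf(r_an, r_num), 4))
# prints: 0.0175 0.02
```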

The numerical radial vectors obtained by the presented method are unstable, especially when noise is added to the boundary data, so regularisation is needed. To this end, we add the following standard zeroth- and first-order Tikhonov regularisation terms, with parameters *λ*1, *λ*2 ≥ 0, to the functional (14), i.e.,

$$\mathrm{Reg}(\mathbf{c}, \mathbf{r}) = \sum_{j=2M+N+1}^{3M+2N}\Big(\sqrt{\lambda_1}\, c_{j-2M-N}\Big)^2 + \sum_{j=3M+2N+1}^{3M+3N-1}\Big(\sqrt{\lambda_2}\,\big(r_{j-3M-2N+1} - r_{j-3M-2N}\big)\Big)^2. \tag{15}$$
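In a least-squares solver these two sums are most naturally supplied as extra residual entries appended to the data-misfit residuals; the Python sketch below (our illustration, with a made-up helper name and toy values) reproduces Reg(**c**, **r**) as the sum of squares of those appended entries.

```python
import numpy as np

# The Tikhonov terms of Equation (15) as appended least-squares residuals:
# zeroth order on the MFS coefficients c, first order on differences of
# consecutive radii r. A nonlinear least-squares routine then minimises the
# total sum of squares.
def regularisation_residuals(c, r, lam1, lam2):
    return np.concatenate([np.sqrt(lam1) * c,            # zeroth-order term
                           np.sqrt(lam2) * np.diff(r)])  # first-order term

c = np.array([1.0, -2.0, 0.5])             # toy MFS coefficients
r = np.array([0.70, 0.75, 0.65])           # toy radii
res = regularisation_residuals(c, r, lam1=1e-3, lam2=1e-1)
reg = float(np.sum(res**2))                # equals Reg(c, r) of Equation (15)
print(round(reg, 6))                       # prints: 0.0065
```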

#### **5. Numerical Examples**

In this section, we give some examples to check the effectiveness of the presented method. We consider a three-dimensional annular domain whose outer boundary is the unit sphere *∂*Ω = *B*(0; 1), with *R*1 = *R*2 = 2 and *M* = *N* ∈ {25, 50} in (7)–(10). Moreover, the percentage noise *δ* = 5% is added to all measured boundary data.

The minimisation of functional (12) or (14) is carried out using the MATLAB optimisation toolbox routine lsqnonlin, which solves nonlinear least-squares problems.

**Example 1.** *Consider a three-dimensional annular domain with an unknown inner boundary ∂D* = *B*(0; *r*<sup>(*an*)</sup>) *of radius r*<sup>(*an*)</sup> = 0.7*. The boundary data are given as follows:*

$$\begin{aligned}
u|_{\partial\Omega} &= f(\theta, \varphi) = \frac{1}{2}\big\{\sin 2\varphi\,(\cos\theta + \sin\theta) + \sin 2\theta\,\sin^2\varphi\big\},\\
\frac{\partial u}{\partial n}\Big|_{\partial\Omega} &= g(\theta, \varphi) = \sin 2\varphi\,(\cos\theta + \sin\theta) + \sin 2\theta\,\sin^2\varphi,\\
u|_{\partial D} &= h(\theta, \varphi) = \frac{49}{200}\big\{\sin 2\varphi\,(\cos\theta + \sin\theta) + \sin 2\theta\,\sin^2\varphi\big\}.
\end{aligned}$$

*The exact solution for these input boundary data is u*(*x*) = *x*1*x*<sup>2</sup> + *x*1*x*<sup>3</sup> + *x*2*x*3*.*
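Since *u* is a harmonic polynomial, homogeneous of degree 2, the data of Example 1 can be verified directly: on the unit sphere *∂u*/*∂n* = 2*u*, and on the inner sphere *u* scales by 0.7² = 49/100. A small Python check (our own, with an arbitrarily chosen direction (θ, φ)):

```python
import numpy as np

# Consistency check for Example 1: f, g, h are traces of
# u = x1*x2 + x1*x3 + x2*x3 on the unit sphere and on the sphere of radius 0.7.
theta, phi = 1.1, 0.6                        # an arbitrary direction
x = lambda r: r * np.array([np.cos(theta) * np.sin(phi),
                            np.sin(theta) * np.sin(phi),
                            np.cos(phi)])
u = lambda X: X[0]*X[1] + X[0]*X[2] + X[1]*X[2]

base = np.sin(2*phi)*(np.cos(theta) + np.sin(theta)) + np.sin(2*theta)*np.sin(phi)**2
assert abs(u(x(1.0)) - 0.5 * base) < 1e-12        # f on the unit sphere
assert abs(2.0 * u(x(1.0)) - base) < 1e-12        # g = du/dn on the unit sphere
assert abs(u(x(0.7)) - (49/200) * base) < 1e-12   # h on the inner sphere
print("Example 1 boundary data are consistent")
```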

*Table 1 gives the values of the objective functions and the corresponding errors obtained using the optimal initial guess r*<sup>0</sup> *and M* = *N* ∈ {25, 50} *without using regularisation parameters. It can be seen that the values of the corresponding errors increase with the number of collocation points and so regularisation is needed.*

**Table 1.** The values of the optimal initial guess, *r*<sup>0</sup> , objective functions and the corresponding errors with *M* = *N* ∈ {25, 50} and no regularisation parameters for Example 1.


*In Tables 2 and 3, we present the values of the objective functions and the corresponding errors with initial guess, <sup>r</sup>*0*, obtained using the regularisation parameters <sup>λ</sup>*1, *<sup>λ</sup>*<sup>2</sup> ∈ {0, 10−6, 10−3, 10−1} *with M* = *N* ∈ {25, 50} *and so, in Table 4, we give the minimal objective functions and the corresponding errors with initial guess r*0*.*

**Table 2.** The values of the optimal initial guess, *r*<sup>0</sup> , objective functions and the corresponding errors using the regularisation parameters *λ*1, *λ*<sup>2</sup> with *M* = *N* = 25 for Example 1.



**Table 3.** The values of the optimal initial guess, *r*<sup>0</sup> , objective functions and the corresponding errors using the regularisation parameters *λ*1, *λ*<sup>2</sup> with *M* = *N* = 50 for Example 1.

**Example 2.** *Consider a three-dimensional annular domain with an unknown inner boundary of radius* $r^{(an)} = \frac{1}{4}(1 + \cos\theta\sin 2\varphi)$*. The boundary data are given as follows:*

$$\begin{aligned}
u|_{\partial\Omega} &= f(\theta, \varphi) = 3\sin^2\varphi - 2,\\
\frac{\partial u}{\partial n}\Big|_{\partial\Omega} &= g(\theta, \varphi) = 6\sin^2\varphi - 4,\\
u|_{\partial D} &= h(\theta, \varphi) = \frac{1}{16}\,(3\sin^2\varphi - 2)\,(\sin 2\varphi\cos\theta + 1)^2.
\end{aligned}$$

*The exact solution for these input boundary data is* $u(x) = x_1^2 + x_2^2 - 2x_3^2$*.*

**Table 4.** The values of the minimal objective functions and the corresponding errors with initial guess *r*0, obtained with and without selecting the optimal regularisation parameters, with *M* = *N* ∈ {25, 50} for Example 1.


*Table 5 gives the values of the objective functions and the corresponding errors obtained using the optimal initial guess r*0 *and M* = *N* ∈ {25, 50} *without using regularisation parameters, whilst Tables 6 and 7 are obtained using the regularisation parameters λ*1*, λ*2*; accordingly, in Table 8, we give the minimal objective functions and the corresponding errors with initial guess r*0*.*

**Table 5.** The values of the optimal initial guess, *r*<sup>0</sup>, objective functions and the corresponding errors for Example 2 with *M* = *N* ∈ {25, 50} and no regularisation parameters.



**Table 6.** The values of the optimal initial guess, *r*<sup>0</sup> , objective functions and the corresponding errors using the regularisation parameters *<sup>λ</sup>*1, *<sup>λ</sup>*<sup>2</sup> ∈ {0, 10−6, 10−3, 10−1} with *<sup>M</sup>* <sup>=</sup> *<sup>N</sup>* <sup>=</sup> 25 for Example 2.

**Table 7.** The values of the optimal initial guess, *r*<sup>0</sup> , objective functions and the corresponding errors using the regularisation parameters *<sup>λ</sup>*1, *<sup>λ</sup>*<sup>2</sup> ∈ {0, 10−6, 10−3, 10−1} with M = N = 50 for Example 2.


**Table 8.** The values of the minimal objective functions and the corresponding errors with initial guess, *r*0, obtained with and without selecting the optimal regularisation parameters *λ*1, *λ*<sup>2</sup>, with *M* = *N* ∈ {25, 50} for Example 2.


#### **6. Conclusions**

In this paper, we extended the method presented in [8], based on the method of fundamental solutions, to solve numerically the three-dimensional inverse geometry problem on an annular domain. To obtain stable and accurate results, Tikhonov regularisation parameters were combined with the minimisation of an objective function. The examples show that the proposed method is effective and stable, even when the boundary data are contaminated with noise.
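The combination of least-squares minimisation with Tikhonov regularisation summarised above can be illustrated with a small self-contained sketch. This is a toy linear model only: the matrix `A`, the synthetic radii `r_true`, and the two penalty terms are hypothetical stand-ins for the paper's actual MFS discretisation and objective function.

```python
import numpy as np

def tikhonov_solve(A, b, lam1, lam2):
    """Minimise ||A r - b||^2 + lam1*||r||^2 + lam2*||D r||^2 via the
    normal equations, where D is the first-difference (smoothness) operator."""
    n = A.shape[1]
    D = np.diff(np.eye(n), axis=0)                      # (n-1) x n difference matrix
    M = A.T @ A + lam1 * np.eye(n) + lam2 * D.T @ D
    return np.linalg.solve(M, A.T @ b)

# Synthetic test problem: recover a smooth "radius" vector from noisy data.
rng = np.random.default_rng(0)
A = rng.normal(size=(40, 20))
r_true = 0.25 * (1 + 0.3 * np.sin(np.linspace(0.0, 2.0 * np.pi, 20)))
b = A @ r_true + 1e-3 * rng.normal(size=40)

r_hat = tikhonov_solve(A, b, lam1=1e-6, lam2=1e-6)
rel_err = np.linalg.norm(r_hat - r_true) / np.linalg.norm(r_true)
```

Sweeping `lam1` and `lam2` over a small set such as {0, 10⁻⁶, 10⁻³, 10⁻¹} and keeping the pair with the smallest error mimics the parameter selection reported in Tables 2–4 and 6–8.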

**Author Contributions:** Validation, H.A.; formal analysis, E.A., C.P.; investigation, M.S., H.A., E.A., C.P.; writing—original draft preparation, M.S.; writing—review and editing, H.A.; supervision, M.S.; funding acquisition, C.P. All authors have read and agreed to the published version of the manuscript.

**Funding:** This work did not receive any external funding.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Data sharing is not applicable to this article, as no data set was generated or analysed during the current study.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


### *Article* **Multistability of the Vibrating System of a Micro Resonator**

**Yijun Zhu and Huilin Shang \***

School of Mechanical Engineering, Shanghai Institute of Technology, Shanghai 201418, China; zyjmain@163.com **\*** Correspondence: suliner60@hotmail.com

**Abstract:** Multiple attractors and their fractal basins of attraction can lead to the loss of global stability and integrity of Micro Electro Mechanical Systems (MEMS). In this paper, the multistability of a class of electrostatic bilateral capacitive micro-resonators is investigated in detail. First, the dynamical model is established and made dimensionless. Second, via the perturbation method and the numerical depiction of basins of attraction, the multiple periodic motions under primary resonance are discussed. It is found that variation of the AC voltage can induce safe jumps of the micro resonator. In addition, with the increase of the amplitude of the AC voltage, hidden attractors and chaos appear. The results may have potential value in the design of MEMS devices.

**Keywords:** micro resonator; fractal; multistability; safe jump; hidden attractor; chaos; basin of attraction

#### **1. Introduction**

Multistability, i.e., the coexistence of multiple attractors, is a common dynamical phenomenon in MEMS/NEMS [1,2]. It underlies many applications, such as MEMS-based memory [3] and switches [4]. On the other hand, considering the loss of global stability that multistability may trigger, some devices should avoid the appearance of multiple attractors in their vibrating systems, such as filters [5], microvalves [6], and micro-relays [7]. As one of the fastest-developing MEMS products [8], electrostatic micro-resonators must ensure that the resonator undergoes periodic vibration whose amplitude varies continuously with the driving voltage. However, in practical applications of electrostatic micro-resonators [9], there are many complex dynamic behaviors, such as multistability [10,11], quasi-periodic motion [12], periodic-n motion [13], chaos [14,15], and pull-in instability.

It is of great significance to study multistability and the conditions that induce it, whether to avoid the phenomenon or to exploit it. Thus, the multistability of the vibrating systems of micro resonators has been studied experimentally and numerically over recent decades [16]. Via experiments, Mohammadreza investigated the dynamic response of an electrostatic micro-actuator in the vicinity of the primary resonance and the parametric one [17]. Siewe et al. [18] studied the vibration of a double-side MEMS resonator numerically and found that variation of the driven voltage could induce the coexistence of chaos and quasi-periodic motions. Shang et al. [19] found coexisting chaos and dynamical pull-in in the vibrating system of a single-side electrostatic micro sensor. Haghighi et al. [20] found coexisting periodic-n motion and chaotic motion of micromechanical resonators with electrostatic forces on both sides, and then discussed the global bifurcation of the vibrating system by approximately expressing its homoclinic orbits as those of a typical Duffing equation. When amplifying signals of a nanomechanical Duffing resonator, Almog et al. [21] found that multistability was an interesting dynamical phenomenon of nonlinear systems and could be explored for many applications. Gusso et al. [22] studied chaos of a typical micro/nanoelectromechanical beam resonator with two-sided electrodes experimentally and observed multiple attractors in a significant region of the relevant parameter space, involving periodic and chaotic attractors.

**Citation:** Zhu, Y.; Shang, H. Multistability of the Vibrating System of a Micro Resonator. *Fractal Fract.* **2022**, *6*, 141. https://doi.org/ 10.3390/fractalfract6030141

Academic Editor: Savin Trean¸tă

Received: 5 February 2022 Accepted: 1 March 2022 Published: 2 March 2022


By applying the cell-mapping method to depict the basins of attraction of all the attractors, they also found that the basin boundaries were fractal under certain excitation conditions, indicating that the attractors are strongly intermingled. Liu et al. [23] applied the method of multiple scales (MMS) to analyze the multiple periodic motions induced by local bifurcation, and used the Melnikov method to predict necessary conditions for chaos and its control. The corresponding numerical results were also presented via basins of attraction and spectrum diagrams. Angelo et al. [24] investigated the effect of the linear and nonlinear stiffness terms and damping coefficients on the dynamical behaviors of a microelectromechanical resonator and controlled its chaotic motion by forcing it onto an orbit obtained analytically via the harmonic balance method. However, most studies have concentrated on describing or observing the phenomenon itself rather than on its mechanism, which remains unclear.

To this end, we consider a typical electrostatic driven bilateral capacitive microresonator and study the possible multistability and its mechanism in its vibrating system. The paper is organized as follows. In Section 2, the dynamical model is constructed and made dimensionless. In Sections 3 and 4, two different cases for coexisting multiple periodic attractors, fractal basins of attraction, and other complex attractors of the systems are discussed both theoretically and numerically. In Section 5, the conclusions are presented.

#### **2. Dynamical Model**

We choose to study a class of bilateral micro resonator whose simplified diagram is shown in Figure 1. The driving forces on the resonator are the electrostatic ones between the moving electrode and the fixed electrodes [25]. The driving voltage in Figure 1 is a combination of alternating current (AC) and direct current (DC) actuation. In the figure, *x* is the vertical displacement of the moving electrode at moment *t*, *d* the initial gap width between the moving electrode and each fixed one, *Vb* the DC bias voltage, and *VAC* sin Ω*t* the AC voltage, where *VAC* is the amplitude and Ω the frequency. Suppose that the amplitude of the AC voltage is much lower than the DC bias voltage, i.e., *VAC* ≪ *Vb*. According to Newton's second law, the vibrating system of the moving electrode can be expressed as the following nonlinear system:

$$m\frac{d^2x}{dt^2} + c\frac{dx}{dt} + k\_1x + k\_2x^3 = \frac{C\_0}{2(d-x)^2}(V\_b + V\_{AC}\sin\Omega t)^2 - \frac{C\_0V\_b^2}{2(d+x)^2} \tag{1}$$

where *m* represents the effective lumped mass of the moving electrode, *k*<sup>1</sup> its linear mechanical stiffness, *k*<sup>2</sup> its cubic nonlinear stiffness, *c* the damping coefficient, *C*<sup>0</sup> the initial capacitance of the parallel-plate structure.

**Figure 1.** Simplified diagram of a bilateral MEMS resonator.

Introducing the following dimensionless variables

$$
\omega\_0 = \sqrt{\frac{k\_1}{m}},\quad \omega = \frac{\Omega}{\omega\_0},\quad \mu = \frac{c}{m\omega\_0},\quad \alpha = \frac{k\_2 d^2}{m\omega\_0^2},\quad \beta = \frac{C\_0 V\_b^2}{2k\_1 d^3},\quad \gamma = \frac{V\_{AC}}{V\_b},\quad T = \omega\_0 t,\quad u = \frac{x}{d},\quad \dot{u} = \frac{du}{dT} \tag{2}
$$

and substituting Equation (2) into Equation (1), one can obtain that

$$
\ddot{u} + \mu \dot{u} + u + au^3 = \frac{\beta}{\left(1 - u\right)^2} (1 + \gamma \sin \omega T)^2 - \frac{\beta}{\left(1 + u\right)^2} \tag{3}
$$

which is a dimensionless system. Since, in the original system (1), the viscous air-damping coefficient *c* is very small and *VAC* ≪ *Vb*, the parameters *μ* and *γ* in (3) are both small and can be treated as perturbation parameters. Thus, setting *μ* = 0 and *γ* = 0 in Equation (3), one obtains the unperturbed system:

$$
\dot{u} = v,\\
\dot{v} = -u - au^3 + \frac{\beta}{\left(1 - u\right)^2} - \frac{\beta}{\left(1 + u\right)^2}.\tag{4}
$$

Setting the right-hand side of Equation (4) to zero, one can determine the equilibria of the dimensionless system (3). Equation (4) is a Hamiltonian system with the Hamiltonian

$$H(u,v) = \frac{1}{2}v^2 + \frac{1}{2}u^2 + \frac{\alpha}{4}u^4 - \frac{\beta}{1-u} - \frac{\beta}{1+u} + 2\beta\tag{5}$$

and the function of potential energy (P.E.)

$$V(u) = \frac{1}{2}u^2 + \frac{\alpha}{4}u^4 - \frac{\beta}{1-u} - \frac{\beta}{1+u} + 2\beta. \tag{6}$$

Concerning Equation (4), the number of equilibria and the shapes and positions of the possible potential wells of the unperturbed system depend on the parameters *α* and *β*. As in [20], the values of the parameters in the system (1) are given by:

$$m = 5 \times 10^{-12}\ \text{kg},\quad c = 5 \times 10^{-8}\ \text{kg/s},\quad k\_1 = 5\ \mu\text{N/}\mu\text{m},\quad k\_2 = 15\ \mu\text{N/}\mu\text{m}^3,\quad d = 2\ \mu\text{m},\quad C\_0 = 1.875 \times 10^{-18}\ \text{F}\cdot\text{m}. \tag{7}$$

Accordingly, in system (4), *α* = 12.

Different equilibria and potential energy diagrams of the unperturbed system under different values of the parameter *β* can be seen in Figure 2. There are three P.E. extrema when *β* = 0.211, five when *β* increases to 0.338, and only one when *β* increases to 0.6. Under different values of *β*, the potential wells and unperturbed orbits are shown in Figure 3. When *β* = 0.211, there are three equilibria (the two non-trivial equilibria are saddles and the origin is a center) as well as one well surrounded by heteroclinic orbits (see Figure 3a). As *β* increases to 0.338, there are five equilibria, among which the two non-trivial equilibria S1 (−0.196339, 0) and S2 (0.196339, 0) are centers of the two wells surrounded by homoclinic orbits; the other three equilibria are unstable. When *β* = 0.6, no wells or non-trivial equilibria of the unperturbed system (4) exist. The P.E. extrema in Figure 2 correspond to the fixed points shown in Figure 3. Therefore, according to Equations (2) and (7), when the structural parameters are fixed, the number of centers depends on the value of the DC bias voltage *Vb*: when the DC bias voltage is very low, there is a single center of the system (4), and hence a stable point attractor of the system (3) without AC voltage. Under a higher DC bias voltage, there may be two centers of the system (4). As is well known, periodic vibration can often be attributed to the perturbation of centers. Since the number and location of the centers in Figure 3a,b are totally different, the mechanisms for the possible multiple periodic attractors of the vibrating system of the micro-resonator can be different as well. Therefore, in Sections 3 and 4, we discuss the mechanism of multistability for these two cases, i.e., the single center (the origin) and the two non-trivial centers, respectively.

**Figure 2.** Potential energy of the unperturbed system (4) under different values of parameter *β*.

**Figure 3.** Orbits of the unperturbed system (4) under different values of parameter *β*.
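The equilibrium counts quoted above can be reproduced numerically from Equations (2), (6), and (7). The sketch below is illustrative only: it counts sign changes of V′(u) on a grid, which suffices here because all roots are simple.

```python
import numpy as np

ALPHA = 12.0

def beta_from_Vb(Vb, C0=1.875e-18, k1=5.0, d=2e-6):
    """beta = C0*Vb^2 / (2*k1*d^3), Eq. (2); SI units, C0 in F*m."""
    return C0 * Vb**2 / (2.0 * k1 * d**3)

def count_equilibria(beta, n=200000):
    """Count sign changes of V'(u) = u + alpha*u^3 - 4*beta*u/(1-u^2)^2 on (-1, 1).
    (The two fractional terms of Eq. (6) combine into -4*beta*u/(1-u^2)^2.)
    An even grid size keeps u = 0 off the grid, so each simple root is
    counted exactly once."""
    u = np.linspace(-0.999, 0.999, n)
    dV = u + ALPHA * u**3 - 4.0 * beta * u / (1.0 - u**2) ** 2
    return int(np.sum(np.sign(dV[:-1]) != np.sign(dV[1:])))

# beta = 0.211 (V_b ~ 3 V): 3 equilibria; beta = 0.338 (V_b ~ 3.8 V): 5; beta = 0.6: 1
counts = {b: count_equilibria(b) for b in (0.211, 0.338, 0.6)}
```

This also confirms the correspondence between the DC bias voltages and the values of *β* used in the text, since `beta_from_Vb(3.0) ≈ 0.211` and `beta_from_Vb(3.8) ≈ 0.338`.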

#### **3. Multiple Periodic Attractors in the Neighborhood of the Origin**

Consider first the case where the DC bias voltage is low and the periodic vibration of the microstructure is induced by the perturbation of the single center (see Figure 3a, where *Vb* = 3 V). One may use the Method of Multiple Scales (MMS) to analyze the periodic solutions in the neighborhood of the origin. Expanding the fractional terms of the dimensionless system (3) as Taylor series in the neighborhood of *u* = 0, and neglecting the terms of order higher than three in *u*, one has:

$$
\ddot{u} + \mu \dot{u} + u + \alpha u^3 = 2\beta \gamma \sin \omega T + 4\beta u + 4u\beta \gamma \sin \omega T + 6u^2 \beta \gamma \sin \omega T + 8\beta u^3 + 8u^3 \beta \gamma \sin \omega T. \tag{8}
$$
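The coefficients of the cubic Taylor expansion in Eq. (8) can be checked numerically against the exact electrostatic terms of system (3). In the sketch below, the values of β, γ, and sin ωT are arbitrary small test values of our own choosing; the residual is of higher order, as expected.

```python
import numpy as np

BETA, GAMMA, S = 0.211, 1e-4, 0.7   # S plays the role of sin(omega*T)

def rhs_exact(u):
    """Electrostatic right-hand side of the dimensionless system (3)."""
    return BETA * (1 + GAMMA * S)**2 / (1 - u)**2 - BETA / (1 + u)**2

def rhs_taylor(u):
    """Cubic Taylor polynomial from Eq. (8) (terms in gamma^2 neglected)."""
    g = BETA * GAMMA * S
    return 2*g + 4*BETA*u + 4*g*u + 6*g*u**2 + 8*BETA*u**3 + 8*g*u**3

u = np.linspace(-0.05, 0.05, 11)
max_err = np.max(np.abs(rhs_exact(u) - rhs_taylor(u)))   # ~ O(u^5) + O(gamma^2)
```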

As mentioned in Section 2, the values of the parameters *μ* and *γ* in the above system are small; one can introduce a small parameter *ε* satisfying 0 < *ε* ≪ 1 and re-scale the two parameters in the system (8) as:

$$
\mu = \varepsilon^2 \widetilde{\mu}, \ \gamma = \varepsilon^2 \widetilde{\gamma}. \tag{9}
$$

Then Equation (8) becomes

$$
\ddot{u} + \widetilde{\omega}^2 u = -\varepsilon^2 \widetilde{\mu}\dot{u} - P\_1 u^3 + 2\varepsilon^2 \beta \widetilde{\gamma} \sin \omega T + 4u\varepsilon^2 \beta \widetilde{\gamma} \sin \omega T + 6u^2 \varepsilon^2 \beta \widetilde{\gamma} \sin \omega T + 8u^3 \varepsilon^2 \beta \widetilde{\gamma} \sin \omega T, \tag{10}
$$

where

$$
\widetilde{\omega}^2 = 1 - 4\beta,\ P\_1 = \alpha - 8\beta.\tag{11}
$$

To apply the MMS, one may rescale some quantities in the system (10) as

$$
\omega = \widetilde{\omega} + \varepsilon \sigma,\quad
u = \varepsilon u\_1 + \varepsilon^2 u\_2 + \dotsb,\quad
\sigma = O(1), \tag{12}
$$

and

$$T\_i = \varepsilon^i T,\quad D\_i = \frac{\partial}{\partial T\_i},\quad \frac{d}{dT} = \sum\_{i=0}^n \varepsilon^i D\_i \quad (i = 0, 1, 2, \dots).\tag{13}$$

Comparing the coefficients of *ε*<sup>1</sup>, *ε*<sup>2</sup>, and *ε*<sup>3</sup> in the system (10), respectively, one obtains:

$$\varepsilon^1: \; D\_0^{\;\;2}u\_1 + \omega^2 u\_1 = 0,\tag{14}$$

$$
\varepsilon^2:\ D\_0^{2} u\_2 + \omega^2 u\_2 = -2D\_1 D\_0 u\_1 + 2\omega \sigma u\_1 + 2\beta \widetilde{\gamma} \sin \omega T,\tag{15}
$$

and

$$\varepsilon^{3}:\ D\_{0}^{2}u\_{3}+\omega^{2}u\_{3}=-2D\_{1}D\_{0}u\_{2}-\widetilde{\mu}D\_{0}u\_{1}-D\_{1}^{2}u\_{1}+2\omega\sigma u\_{2}-\sigma^{2}u\_{1}-2D\_{2}D\_{0}u\_{1}-P\_{1}u\_{1}^{3}+4\beta\widetilde{\gamma}u\_{1}\sin\omega T.\tag{16}$$

To solve Equation (14), one can assume that

$$
u\_1 = A\_1(T\_1, T\_2)e^{i\omega T\_0} + \overline{A}\_1(T\_1, T\_2)e^{-i\omega T\_0},\tag{17}
$$

where

$$A\_1 = \frac{a\left(T\_1, T\_2\right)}{2} e^{i\theta\left(T\_1, T\_2\right)}.\tag{18}$$

Substituting Equations (17) and (18) into Equation (15), and eliminating the secular terms of Equation (15), one will have:

$$D\_1 A\_1 = -\frac{\beta \tilde{\gamma}}{2\omega} - i\sigma A\_1. \tag{19}$$

Solving Equation (15), one may assume:

$$
u\_2 = A\_2(T\_2)e^{i\omega T\_0} + \overline{A}\_2(T\_2)e^{-i\omega T\_0}.\tag{20}
$$

Substituting Equation (20) into Equation (16), and eliminating secular terms of Equation (16), one will obtain:

$$D\_2 A\_1 = -\frac{\tilde{\mu}}{2} A\_1 + \frac{\beta \tilde{\gamma}}{2\omega} - \frac{\sigma \beta \tilde{\gamma}}{4\omega^2} + \frac{3i P\_1 A\_1^2 \overline{A}\_1}{2\omega}. \tag{21}$$

Since

$$\dot{A}\_1 \approx D\_0 A\_1 + \varepsilon D\_1 A\_1 + \varepsilon^2 D\_2 A\_1,\tag{22}$$

substituting Equations (19) and (21) into Equation (22) and expressing the result in the original dimensionless parameters of Equation (3), one has:

$$\begin{aligned} \varepsilon \dot{a} &= -\frac{\mu}{2} (\varepsilon a) - P\_2 \cos \theta, \\ (\varepsilon a) \dot{\theta} &= -(\omega - \widetilde{\omega}) (\varepsilon a) + \frac{3P\_1 (\varepsilon a)^3}{8\omega} + P\_2 \sin \theta. \end{aligned} \tag{23}$$

where

$$P\_2 = \frac{(3\omega - \tilde{\omega})\beta\gamma}{2\omega^2}.\tag{24}$$

According to Equation (18), the amplitude *a* of the periodic solution is a function of the time scale *T*1, which is a first-order term in *ε*; thus, one can define the amplitude ã of the solution *u* of Equation (10) as:

$$
\varepsilon a = \tilde{a}.\tag{25}
$$

Letting ȧ = 0 and θ̇ = 0, one can obtain:

$$-\frac{\mu}{2}\widetilde{a} = P\_2 \cos \theta,\ (\omega - \widetilde{\omega})\widetilde{a} - \frac{3P\_1 \widetilde{a}^3}{8\omega} = P\_2 \sin \theta. \tag{26}$$

Eliminating the triangulation function of Equation (26), one can get:

$$
\frac{\mu^2}{4}\widetilde{a}^2 + \left(\omega - \widetilde{\omega} - \frac{3P\_1 \widetilde{a}^2}{8\omega}\right)^2 \widetilde{a}^2 = \frac{\left(3\omega - \widetilde{\omega}\right)^2}{4\omega^4} \beta^2 \gamma^2. \tag{27}
$$

According to Equations (17), (18) and (25), the periodic solution can be expressed as:

$$
u \approx \widetilde{a} \cos(\omega T + \theta). \tag{28}
$$

To determine the stability of the periodic solutions, one can derive the corresponding characteristic equation from Equation (23). It shows that the periodic solution loses its stability when its amplitude ã satisfies:

$$(8(\omega - \tilde{\omega}) - \frac{9\tilde{a}^2 P\_1}{\omega})(8(\omega - \tilde{\omega}) - \frac{3\tilde{a}^2 P\_1}{\omega}) \ge 16\mu^2. \tag{29}$$

Based on Equations (26)–(29), the variation of the amplitude of the periodic solutions of the system (3), and of their stability, with the AC voltage is shown in Figure 4, where the frequency and the amplitude of the AC voltage are taken as the control parameters in Figure 4a and Figure 4b, respectively. In Figure 4a, where *VAC* = 0.01 V, when *ω* is lower than 0.45 there is only one periodic attractor in the system (3), whose amplitude changes continuously with the increase in *ω*. Comparatively, when *ω* ranges from 0.46 to 0.69, the global dynamical behavior of the system (3) changes to bistable periodic attractors, which can be attributed to Hopf bifurcation. As *ω* continues to increase from 0.7, the periodic attractor with the higher amplitude disappears; only the periodic attractor with the lower amplitude exists, and its amplitude decreases continuously with the increase of *ω*. Similarly, the change in global dynamical behavior in Figure 4b also shows that in a certain range of *VAC* two periodic attractors coexist, which can be due to Hopf bifurcation of the system (3). The accuracy of the theoretical predictions in Figure 4 is verified by the numerical results.
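Equation (27) is a cubic in ã², so the number of coexisting periodic solutions at a given frequency can be read off from its positive real roots. The sketch below rebuilds the dimensionless parameters from Equations (2) and (7) for *Vb* = 3 V and *VAC* = 0.01 V; the sampled frequencies 0.40 and 0.60 are our own choice of test points inside the monostable and bistable windows.

```python
import numpy as np

# Dimensional parameters of Eq. (7) (SI units) and the voltages of Figure 4a
m, c, k1, k2, d, C0 = 5e-12, 5e-8, 5.0, 1.5e13, 2e-6, 1.875e-18
Vb, Vac = 3.0, 0.01

w0 = np.sqrt(k1 / m)
mu = c / (m * w0)                      # Eq. (2): mu ~ 0.01
alpha = k2 * d**2 / k1                 # Eq. (2): alpha = 12
beta = C0 * Vb**2 / (2 * k1 * d**3)    # Eq. (2): beta ~ 0.211
gamma = Vac / Vb
wt = np.sqrt(1 - 4 * beta)             # omega-tilde, Eq. (11)
P1 = alpha - 8 * beta                  # Eq. (11)

def amplitudes(w):
    """Positive real roots a~ of Eq. (27), treated as a cubic in X = a~^2."""
    D = w - wt
    c3 = 3 * P1 / (8 * w)
    P2 = (3 * w - wt) * beta * gamma / (2 * w**2)       # Eq. (24)
    # (mu^2/4) X + (D - c3 X)^2 X = P2^2
    roots = np.roots([c3**2, -2 * c3 * D, D**2 + mu**2 / 4, -P2**2])
    X = roots[np.abs(roots.imag) < 1e-8].real
    return np.sqrt(X[X > 0])

n_low = len(amplitudes(0.40))   # one periodic solution below the bistable window
n_bi = len(amplitudes(0.60))    # three solutions (two stable, one unstable) inside it
```

Inside the bistable window the cubic has three positive roots; the middle one is the unstable branch singled out by condition (29).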

**Figure 4.** Variation of the amplitude of the periodic solution with the change in AC voltage: (**a**) amplitude of the periodic solution vs. *ω* when *VAC* = 0.01 V; (**b**) amplitude of the periodic solution vs. *VAC* when *ω* = 0.6.

Figure 4 demonstrates that the parameters *ω* and *VAC* can induce bistability, meaning that under fixed parameter values of system (3), different initial conditions may lead to different periodic attractors. Accordingly, it is necessary to classify the basins of attraction of the two periodic attractors. Here, the 4th-order Runge–Kutta approach and the cell-mapping method are applied to depict the basins of attraction of system (3). The time step is taken as 1/10<sup>2</sup> of the period of excitation. To investigate the long-term dynamical behaviors, an initial condition is considered safe if the vibration starting from it satisfies |*u*(*T*)| < 1 within 10<sup>5</sup> excitation cycles; otherwise, the micro resonator undergoes pull-in [20]. The union of all initial conditions leading to the same periodic motion forms the basin of attraction of that attractor, which is marked in the same color in the initial plane. The basins of attraction of system (3) are drawn over sufficiently large ranges of the initial position and velocity of the proof mass, |*u*(0)| < 1 and |*u̇*(0)| < 1.5, by generating a 200 × 100 array of initial points. The change of the attractors and of the area and nature of their basins of attraction with the frequency *ω* is shown in Figure 5, where the amplitude of AC voltage *VAC* is fixed at 0.01 V.
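A stripped-down version of this procedure is sketched below. It integrates the dimensionless system (3) with a vectorised fourth-order Runge–Kutta scheme; the pull-in threshold |u| ≥ 0.99, the freezing of pulled-in trajectories, and the reduced cycle count are our own simplifications of the full cell-mapping computation (which uses 10⁵ cycles and a 200 × 100 grid).

```python
import numpy as np

ALPHA, BETA, MU = 12.0, 0.211, 0.01   # dimensionless parameters for V_b = 3 V

def accel(u, v, T, gamma, w):
    """Acceleration of the dimensionless system (3)."""
    return (-MU * v - u - ALPHA * u**3
            + BETA * (1 + gamma * np.sin(w * T))**2 / (1 - u)**2
            - BETA / (1 + u)**2)

def settle(u0, v0, gamma, w, n_cycles=200, steps=100):
    """Vectorised RK4 over a batch of initial conditions. The time step is
    1/100 of the excitation period; trajectories reaching |u| >= 0.99 are
    frozen and flagged as pull-in. Returns the maximum |u| over the last
    10 cycles (a crude attractor label) and the pull-in mask."""
    u = np.asarray(u0, float).copy()
    v = np.asarray(v0, float).copy()
    dt = 2 * np.pi / (w * steps)
    pulled = np.zeros(u.shape, bool)
    amp = np.zeros(u.shape)
    for n in range(n_cycles * steps):
        T = n * dt
        k1u, k1v = v, accel(u, v, T, gamma, w)
        k2u = v + 0.5 * dt * k1v
        k2v = accel(u + 0.5 * dt * k1u, k2u, T + 0.5 * dt, gamma, w)
        k3u = v + 0.5 * dt * k2v
        k3v = accel(u + 0.5 * dt * k2u, k3u, T + 0.5 * dt, gamma, w)
        k4u = v + dt * k3v
        k4v = accel(u + dt * k3u, k4u, T + dt, gamma, w)
        u = u + dt / 6 * (k1u + 2 * k2u + 2 * k3u + k4u)
        v = v + dt / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
        pulled |= np.abs(u) >= 0.99
        u = np.where(pulled, 0.0, u)      # freeze pulled-in trajectories
        v = np.where(pulled, 0.0, v)
        if n >= (n_cycles - 10) * steps:
            amp = np.maximum(amp, np.abs(u))
    return amp, pulled

# Unforced check (gamma = 0): a start inside the well settles onto the origin,
# while a start very close to an electrode pulls in.
amp, pulled = settle(np.array([0.2, 0.995]), np.array([0.0, 0.0]), gamma=0.0, w=0.6)
```

Evaluating `settle` over a grid of initial conditions in |u(0)| < 1, |u̇(0)| < 1.5 and binning the returned amplitudes (e.g. with `np.round(amp, 2)`) yields a coarse version of the basin portraits of Figure 5.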

**Figure 5.** Evolution of multiple attractors and their basins of attraction under different values of *ω*.

According to Figure 5, with the increase in the parameter *ω*, the number of attractors and the boundaries of the basins of attraction both change. When *ω* = 0.45 in system (3), there is only one periodic attractor, whose basin of attraction is comparatively big with a smooth boundary (see Figure 5a1,a2). However, with a small increase of *ω*, the global dynamics become totally different, as shown in Figure 5b1,b2, where two periodic attractors coexist whose basins of attraction intermingle and are both fractal. This means that the dynamical behavior of system (3) is highly sensitive to initial conditions; in other words, system (3) may undergo a safe jump. A similar phenomenon can be seen in Figure 5c1,c2 under a higher *ω*. As *ω* increases, the basin of attraction of the periodic attractor with the higher amplitude becomes smaller (see the red regions in Figure 5b2,c2,d2). Specifically, in Figure 5d2 the regions of attraction are almost entirely blue, and very little area is left for the basin of attraction of that periodic attractor. When *ω* = 0.70 (see Figure 5e1,e2), the periodic attractor with the higher amplitude disappears and only the other attractor remains, whose basin of attraction is almost the same as that in Figure 4b, showing that when the frequency *ω* increases enough, the periodic attractor with the lower amplitude replaces the initial one.

#### **4. Multistability in the Neighborhood of Non-Trivial Equilibria**

In this section, we consider the case where the DC voltage is higher and the periodic vibration of the microstructure is induced by the perturbation of the two non-trivial centers (see Figure 3b); thus, we set *β* = 0.338, i.e., *Vb* = 3.8 V. In addition, we consider the effect of the AC voltage on the global dynamics of the system (3). To begin with, setting

$$
\varepsilon\hat{u} = u \mp u\_c, \tag{30}
$$

where *uc* is the abscissa of the right center (see S1 in Figure 3). Rescaling the two parameters *μ* and *γ* in the system (3) by Equation (9), expanding the fractional terms of the dimensionless system (3) as a Taylor series in the neighborhood of the non-trivial equilibria, and ignoring the terms of order higher than cubic in *u*ˆ, the system (3) becomes

$$\ddot{\hat{u}} = -\varepsilon^2 \widetilde{\mu} \dot{\hat{u}} - \hat{\omega}^2 \hat{u} + \varepsilon Q\_1 \hat{u}^2 + \varepsilon^2 Q\_2 \hat{u}^3 + \frac{2\varepsilon \beta \widetilde{\gamma} \sin \omega T}{\left(1 \mp u\_c \right)^2} + \frac{4\varepsilon^2 \beta \widetilde{\gamma} \sin \omega T}{\left(1 \mp u\_c \right)^3} \hat{u} + \frac{6\varepsilon^3 \beta \widetilde{\gamma} \sin \omega T}{\left(1 \mp u\_c \right)^4} \hat{u}^2,\tag{31}$$

where

$$\hat{\omega}^2 = 1 + 3\alpha u\_{c}^2 - \frac{4\beta(1 + 3u\_{c}^2)}{\left(1 - u\_{c}^2\right)^3},\quad Q\_1 = \pm 3u\_{c}\left(-\alpha + \frac{8\beta(1 + u\_{c}^2)}{\left(1 - u\_{c}^2\right)^4}\right),\quad Q\_2 = -\alpha + \frac{8\beta(1 + 10u\_{c}^2 + 5u\_{c}^4)}{\left(1 - u\_{c}^2\right)^5}.\tag{32}$$
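The center abscissa and the coefficients of Eq. (32) are straightforward to evaluate numerically. The bisection sketch below recovers *uc* ≈ 0.196339, the value quoted in Section 2 for *β* = 0.338, and confirms that the linearised frequency satisfies ω̂² > 0 (so the non-trivial equilibria are indeed centers of the unperturbed system); the bracket [0.05, 0.4] is our own choice, made by inspecting the potential.

```python
import numpy as np

ALPHA, BETA = 12.0, 0.338   # V_b = 3.8 V

def g(u):
    """V'(u)/u = 1 + alpha*u^2 - 4*beta/(1-u^2)^2; its positive roots
    are the non-trivial equilibria of the unperturbed system (4)."""
    return 1 + ALPHA * u**2 - 4 * BETA / (1 - u**2)**2

def bisect(f, a, b, tol=1e-12):
    """Plain bisection on a sign-changing bracket [a, b]."""
    fa = f(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        if fa * f(m) <= 0:
            b = m
        else:
            a, fa = m, f(m)
    return 0.5 * (a + b)

uc = bisect(g, 0.05, 0.4)            # right centre

# Eq. (32), upper signs (expansion about the right centre)
w_hat2 = 1 + 3*ALPHA*uc**2 - 4*BETA*(1 + 3*uc**2) / (1 - uc**2)**3
Q1 = 3*uc*(-ALPHA + 8*BETA*(1 + uc**2) / (1 - uc**2)**4)
Q2 = -ALPHA + 8*BETA*(1 + 10*uc**2 + 5*uc**4) / (1 - uc**2)**5
```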

To apply the Method of Multiple Scales to Equation (31), one can assume that:

$$
\omega = \hat{\omega} + \varepsilon \sigma,\quad \hat{u} = \hat{u}\_0 + \varepsilon \hat{u}\_1 + \varepsilon^2 \hat{u}\_2 + \cdots,\quad \sigma = O(1). \tag{33}
$$

Comparing the coefficients of *ε*<sup>0</sup>, *ε*<sup>1</sup>, and *ε*<sup>2</sup>, one has:

$$
\varepsilon^0:\ D\_0^{2}\hat{u}\_0 + \omega^2\hat{u}\_0 = 0,\tag{34}
$$

$$\varepsilon^1:\ D\_0^{2}\hat{u}\_1 + \omega^2 \hat{u}\_1 = -2D\_1 D\_0 \hat{u}\_0 + 2\omega \sigma \hat{u}\_0 + \frac{2\beta \widetilde{\gamma} \sin \omega T}{\left(1 \mp u\_c\right)^2} + Q\_1 \hat{u}\_0^2,\tag{35}$$

and

$$\varepsilon^{2}:\ D\_{0}^{2}\hat{u}\_{2} + \omega^{2}\hat{u}\_{2} = -2D\_{1}D\_{0}\hat{u}\_{1} - \widetilde{\mu}D\_{0}\hat{u}\_{0} - D\_{1}^{2}\hat{u}\_{0} + 2\omega\sigma\hat{u}\_{1} - \sigma^{2}\hat{u}\_{0} - 2D\_{2}D\_{0}\hat{u}\_{0} + 2Q\_{1}\hat{u}\_{0}\hat{u}\_{1} + Q\_{2}\hat{u}\_{0}^{3} + \frac{4\beta\widetilde{\gamma}\sin\omega T}{\left(1 \mp u\_{c}\right)^{3}}\hat{u}\_{0}.\tag{36}$$

One can set the solution of Equation (34) as:

$$\hat{u}\_0 = B\_1(T\_1, T\_2)e^{i\omega T\_0} + \overline{B}\_1(T\_1, T\_2)e^{-i\omega T\_0}.\tag{37}$$

Substituting Equation (37) into Equation (35), and eliminating the secular terms of Equation (35), one can obtain that:

$$\begin{split} D\_1 B\_1 &= -\frac{\beta \widetilde{\gamma}}{2\omega \left(1 \mp u\_{c}\right)^2} - i\sigma B\_1, \\ \hat{u}\_1 &= -\frac{B\_1^2 Q\_1}{3\omega^2} e^{i2\omega T\_0} - \frac{\overline{B}\_1^2 Q\_1}{3\omega^2} e^{-i2\omega T\_0} + \frac{2B\_1 \overline{B}\_1 Q\_1}{\omega^2}. \end{split} \tag{38}$$

Substituting the equation above into Equation (36) and eliminating its secular terms, one can have:

$$D\_2 B\_1 = -\frac{\widetilde{\mu} B\_1}{2} - \frac{\sigma \beta \widetilde{\gamma}}{4\omega^2 \left(1 \mp u\_c\right)^2} - i \left(\frac{5Q\_1^2}{3\omega^3} + \frac{3Q\_2}{2\omega}\right) B\_1^2 \overline{B}\_1. \tag{39}$$

Now setting

$$B\_1 = \frac{1}{2} \varepsilon b(T\_1, T\_2) e^{i\varphi(T\_1, T\_2)},\tag{40}$$

considering

$$
\dot{B}\_1 \approx D\_0 B\_1 + \varepsilon D\_1 B\_1 + \varepsilon^2 D\_2 B\_1,\tag{41}
$$

and substituting Equations (38) and (39) into Equation (41), and expressing Equation (41) by the original dimensionless parameters of Equation (3), one has:

$$\begin{split} \dot{b} &= -\frac{(3\omega - \hat{\omega})\beta\gamma\cos\varphi}{2\omega^{2}\left(1 \mp u\_{c}\right)^{2}} - \frac{\mu b}{2},\\ b\dot{\varphi} &= \frac{(3\omega - \hat{\omega})\beta\gamma\sin\varphi}{2\omega^{2}\left(1 \mp u\_{c}\right)^{2}} - (\omega - \hat{\omega})b - \frac{5b^{3}Q\_{1}^{2}}{12\omega^{3}} - \frac{3b^{3}Q\_{2}}{8\omega}. \end{split} \tag{42}$$

The periodic solution of the system (3) satisfies ḃ = 0 and φ̇ = 0, i.e.,

$$-\frac{\mu b}{2} = \frac{(3\omega - \hat{\omega})\beta\gamma\cos\varphi}{2\omega^2(1 \mp u\_c)^2},\qquad (\omega - \hat{\omega})b + \left(\frac{5Q\_1^2}{12\omega^3} + \frac{3Q\_2}{8\omega}\right)b^3 = \frac{(3\omega - \hat{\omega})\beta\gamma\sin\varphi}{2\omega^2(1 \mp u\_c)^2}.\tag{43}$$

The periodic solution can be expressed analytically as:

$$
u = \pm u\_{c} + \frac{2b^2 Q\_1}{3\omega^2} + b\cos(\omega T + \varphi) - \frac{b^2 Q\_1}{3\omega^2} \cos^2(\omega T + \varphi). \tag{44}
$$

According to the characteristic equation associated with Equation (42), the theoretical periodic solution expressed by Equation (44) becomes unstable if:

$$\left(\omega - \hat{\omega} - \left(\frac{5Q\_1^2}{12\omega^3} + \frac{9Q\_2}{8\omega}\right)b^2\right)\left(\omega - \hat{\omega} + \left(\frac{5Q\_1^2}{12\omega^3} + \frac{3Q\_2}{8\omega}\right)b^2\right) \ge \frac{\mu^2}{4}.\tag{45}$$

Based on Equations (43)–(45), the evolution of the periodic solutions of the system (3) with the amplitude of the AC voltage when *ω* = 0.6 is shown in Figure 6. Obviously, when *VAC* increases from 0, the two non-trivial equilibria lose their stability; instead, two periodic attractors coexist. The amplitudes of the two periodic attractors increase with the amplitude of the AC voltage. The coexistence of multiple periodic attractors can be attributed to the disturbance of the bistable non-trivial equilibria of the system (3) at *VAC* = 0 V.

**Figure 6.** Variation of the periodic solutions with the amplitude of AC voltage when *ω* = 0.6.

In Figure 6, when *VAC* varies from 0 to 0.055 V, the numerical simulation is in good agreement with the theoretical solution. However, when *VAC* exceeds 0.056 V, the theoretical prediction of the periodic attractor in the neighborhood of the right non-trivial equilibrium is not that accurate, which may be due to the limitation of the Method of Multiple Scales. It is then essential to apply numerical simulation to investigate the evolution of the attractors with the change in AC voltage. The basic settings of the simulation, such as the time step and the initial plane, are the same as those in Section 3. The change of the attractors and of the area and nature of their basins of attraction with *VAC* is shown in Figure 7, where *ω* = 0.6. The evolution of the global dynamics of the system (3) with the increase in *VAC* can be separated into the following five stages.

Firstly, when *VAC* = 0 V, two point attractors coexist whose basins of attraction are fractal and intertwined with each other (see Figure 7a1,a2). According to Figure 7a2, in a small neighborhood of each point attractor, the attractor of system (3) is locally stable. Otherwise, a small disturbance of the initial conditions will lead to a different point attractor, meaning that a safe jump is easily induced.

Secondly, when *VAC* increases from 0 to 0.01 V (see Figure 7b1–d2), the number of periodic attractors increases with *VAC*. At *VAC* = 0.005 V, the two point attractors become two periodic attractors; apart from these two periodic attractors predicted theoretically, a new periodic attractor appears suddenly, marked by the yellow curve in Figure 7b1, and its basin of attraction is discrete (see the yellow regions in Figure 7b2). This shows that the new periodic attractor is a hidden attractor [26]. When *VAC* increases to 0.006 V, another hidden attractor appears, which is almost symmetric to the former one (see the blue curve of Figure 7c1 and the blue regions of Figure 7c2). When *VAC* = 0.01 V, five periodic attractors coexist, as shown in Figure 7d1. A new periodic attractor appears (see the green curve in Figure 7d1), whose amplitude is much larger than those of the others.

Thirdly, as *VAC* increases from 0.01 V to 0.116 V, the number of attractors decreases. Comparing Figure 7e1 with Figure 7d1, it is obvious that when *VAC* increases to 0.02 V, the yellow periodic attractor disappears, its basin of attraction being eroded by that of the green attractor; thus, the basin of attraction of the green attractor is much larger in Figure 7e2 than in Figure 7d2. As *VAC* continues to increase, the other three periodic attractors, i.e., the blue, red, and black ones, disappear successively (see Figure 7f1,h1,j1), their basins of attraction being invaded by the basin of attraction of the green attractor, as shown in Figure 7e1–j2. When *VAC* reaches 0.116 V, a single periodic attractor is left, whose basin of attraction is not fractal but has a smooth boundary (see Figure 7j1,j2).

Fourthly, when *VAC* increases to 0.128 V, a new complex attractor appears, coexisting with the former green periodic attractor. It is a period-3 attractor (see the purple curve in Figure 7k1) whose basin of attraction is fractal and intertwined with the basin of attraction of the periodic attractor (see Figure 7k2). It follows that a small change of the initial conditions can shift the dynamical behavior of system (3) from a periodic motion to a period-3 motion, which is another type of safe jump.

Finally, as *VAC* continues to increase, another type of complex dynamical behavior is induced. According to the phase map, Poincaré map, and frequency spectrum in Figure 8a–c, there is only a chaotic attractor when *VAC* = 0.28 V, and the boundary of its basin of attraction is not fractal (see Figure 8d).

(**c1**) Attractors and (**c2**) basins of attraction when *VAC* = 0.006 V; (**h1**) attractors and (**h2**) basins of attraction when *VAC* = 0.057 V.

**Figure 7.** Evolution of multiple attractors and their basins of attraction under different values of *VAC*.

**Figure 8.** Attractor and its basin of attraction when *VAC* = 0.28 V.

#### **5. Conclusions**

In this paper, a typical electrostatic bilateral micro-resonator is considered. The theory of local bifurcation and numerical approaches are applied to analyze the global dynamics of the vibrating system of the micro-resonator. The main conclusions are presented as follows:


Our results provide a theoretical reference for avoiding complex dynamics of micro-resonators and thus have potential value in the design of micro sensors. The hidden attractors are depicted numerically, but their mechanism is not yet fully clear; this will be discussed in our future study.

**Author Contributions:** Conceptualization, H.S.; methodology, H.S.; software, Y.Z.; validation, H.S.; formal analysis, H.S.; investigation, Y.Z.; writing—original draft preparation, Y.Z. and H.S.; writing—review and editing, H.S.; visualization, Y.Z.; supervision, H.S.; project administration, H.S.; funding acquisition, H.S. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research was funded by the National Natural Science Foundation of China, grant number 11472176.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Not applicable.

**Acknowledgments:** Huilin Shang acknowledges the support of the National Natural Science Foundation of China under grant number 11472176. The authors are grateful for the valuable comments of the reviewers.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


### *Article* **Hermite-Hadamard Inequalities in Fractional Calculus for Left and Right Harmonically Convex Functions via Interval-Valued Settings**

**Muhammad Bilal Khan <sup>1</sup>, Jorge E. Macías-Díaz <sup>2,3,</sup>\*, Savin Treanță <sup>4,</sup>\*, Mohammed S. Soliman <sup>5</sup> and Hatim Ghazi Zaini <sup>6</sup>**


**Abstract:** The purpose of this study is to define a new class of harmonically convex functions, known as left and right harmonically convex interval-valued functions (LR-ℋ-convex *IV-Fs*), and to establish novel inclusions for this newly defined class of interval-valued functions (*IV-Fs*) linked to Hermite–Hadamard (*H-H*) and Hermite–Hadamard–Fejér (*H-H*-Fejér) type inequalities via interval-valued Riemann–Liouville fractional integrals (*IV-RL*-fractional integrals). We also obtain some related inequalities for the product of two LR-ℋ-convex *IV-Fs*. These findings enable us to identify a new class of inclusions that may be seen as significant generalizations of results proved by İşcan and Chen. Some examples are included in our findings that may be used to determine the validity of the results. The findings in this work can be seen as a considerable advance over previously published findings.

**Keywords:** interval-valued function; LR-harmonic convexity; fractional integral operator; Hermite–Hadamard type inequalities

#### **1. Introduction**

The concept of convexity of functions is a useful instrument that is used to solve a wide range of pure and applied scientific issues. Many researchers have recently committed themselves to investigating the properties and inequalities of convexity in various directions, as evidenced by [1–6] and the references therein. The Hermite–Hadamard inequality (*H-H* inequality), which is also used frequently in many other parts of applied mathematics, notably in optimization and probability, is one of the most important mathematical inequalities relevant to convex maps. Let us state it as follows:

Suppose that the mapping A : [t, *υ*] → R. For all κ, *μ* ∈ [t, *υ*] and s ∈ [0, 1], if the successive inequality

$$\mathfrak{A}((1-\mathrm{s})\varkappa + \mathrm{s}\mu) \le (1-\mathrm{s})\mathfrak{A}(\varkappa) + \mathrm{s}\mathfrak{A}(\mu)\tag{1}$$

is valid, then A is named as a convex function on the interval [t, *υ*]. If (1) is reversed, then A is named as a concave function on [t, *υ*].

This famous inequality gives error bounds for the mean value of a continuous convex mapping A : [t, *υ*] → R and has received considerable attention from many authors. Many

**Citation:** Khan, M.B.; Macías-Díaz, J.E.; Treant,a, S.; Soliman, M.S.; Zaini, ˇ H.G. Hermite-Hadamard Inequalities in Fractional Calculus for Left and Right Harmonically Convex Functions via Interval-Valued Settings. *Fractal Fract.* **2022**, *6*, 178. https://doi.org/10.3390/ fractalfract6040178

Academic Editor: Carlo Cattani

Received: 3 February 2022 Accepted: 8 March 2022 Published: 23 March 2022



investigations have been conducted on the *H-H* type inequalities for additional forms of convex mappings. For example, s-convex mappings may be found in Kórus [7], N-quasi-convex mappings in Abramovich and Persson [8], h-convex mappings in Delavar and De La Sen [9], etc. Kadakal and Bekar [10], İşcan [11], Marinescu and Monea [12], Kadakal et al. [13], and the references therein provide new developments on this important issue.

Fractional calculus has proven to be an important cornerstone in mathematics and the applied sciences as a very valuable tool. As a result of this fruitful interaction of various approaches to fractional calculus, many authors have studied some prominent integral inequalities, including [14] in the study of the *H-H* inequality for Riemann–Liouville fractional integrals, [15] in the *H-H*-Fejér type inequality for Katugampola fractional integrals, and [16] in the extensions of trapezium inequalities for k-fractional integrals. We refer interested readers to [17,18] and the references therein for other significant conclusions relating to fractional integral operators.

Interval analysis is a special case of set-valued analysis. There is no denying that interval analysis is important in both pure and applied research. The error limits of numerical solutions of finite state machines were among the first applications of interval analysis. However, interval analysis, as one of the strategies for resolving interval uncertainty, has been a key component of mathematical and computer models for the past fifty years. Several applications in automated error analysis [19], computer graphics [20], and neural network output optimization [21] have been described. Furthermore, Refs. [22,23] present several optimization theory applications involving *IV-Fs*. The interested reader is referred to Zhao et al. [24] and Román-Flores et al. [25] and their references for current developments in the area of *IV-Fs*. We refer interested readers to [26–34] and the references therein for other significant conclusions relating to inequalities and fractional integral inequalities.

In response to the aforementioned tendency, and motivated by ongoing research activity in this fascinating topic, we structured the article in the following manner. To prove fractional integral inclusions, we first generalize the class of ℋ-convex functions to LR-ℋ-convex *IV-Fs*. Then, a class of *IV-RL*-fractional integral inequalities is presented to achieve this aim. Some inclusion relations for convex *IV-Fs* in connection with the renowned *H-H* and *H-H*-Fejér type inequalities are obtained in this paper utilizing the newly presented class of LR-ℋ-convex functions.

#### **2. Preliminaries**

Let us begin this section by outlining the theory of interval analysis, which is mostly due to [28]. The set of all closed intervals of R, the set of all negative closed intervals of R, and the set of all positive closed intervals of R are denoted by $\mathcal{K}_C$, $\mathcal{K}_C^-$, and $\mathcal{K}_C^+$, respectively. For more concepts on *IV-Fs*, see [24]. Moreover, we have:

**Remark 1** ([29])**.** *(i) The relation* " ≤<sup>p</sup> " *defined on* K<sup>C</sup> *by*

$$[\mathcal{Q}_*, \mathcal{Q}^*] \le_p [\mathcal{Z}_*, \mathcal{Z}^*] \text{ if and only if } \mathcal{Q}_* \le \mathcal{Z}_*,\ \mathcal{Q}^* \le \mathcal{Z}^*,\tag{2}$$

*for all* $[\mathcal{Q}_*, \mathcal{Q}^*], [\mathcal{Z}_*, \mathcal{Z}^*] \in \mathcal{K}_C$, *is a pseudo order relation. For given* $[\mathcal{Q}_*, \mathcal{Q}^*], [\mathcal{Z}_*, \mathcal{Z}^*] \in \mathcal{K}_C$, *we say that* $[\mathcal{Q}_*, \mathcal{Q}^*] <_p [\mathcal{Z}_*, \mathcal{Z}^*]$ *if and only if* $\mathcal{Q}_* < \mathcal{Z}_*,\ \mathcal{Q}^* \le \mathcal{Z}^*$ *or* $\mathcal{Q}_* \le \mathcal{Z}_*,\ \mathcal{Q}^* < \mathcal{Z}^*$. *The relation* $[\mathcal{Q}_*, \mathcal{Q}^*] \le_p [\mathcal{Z}_*, \mathcal{Z}^*]$ *coincides with* $[\mathcal{Q}_*, \mathcal{Q}^*] \le [\mathcal{Z}_*, \mathcal{Z}^*]$ *on* $\mathcal{K}_C$.

*(ii) It can be easily seen that* "$\le_p$" *looks like "left and right" on the real line* R, *so we call* "$\le_p$" *the "left and right" order (or "LR" order, in short).*
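The pseudo order of Remark 1 can be made concrete in a few lines. The sketch below is illustrative only (class and method names are ours, not from a library); it also shows why $\le_p$ is only a partial order:

```python
from dataclasses import dataclass

# A closed interval [lo, up] from K_C with the "left and right" pseudo order:
# [Q_*, Q^*] <=_p [Z_*, Z^*]  iff  Q_* <= Z_* and Q^* <= Z^*.
@dataclass(frozen=True)
class Interval:
    lo: float
    up: float

    def le_p(self, other: "Interval") -> bool:
        return self.lo <= other.lo and self.up <= other.up

    def __add__(self, other: "Interval") -> "Interval":
        # Minkowski sum of intervals, used in the convexity inequalities below.
        return Interval(self.lo + other.lo, self.up + other.up)

    def scale(self, c: float) -> "Interval":
        # Scalar multiple for c >= 0.
        return Interval(c * self.lo, c * self.up)

# Overlapping intervals need not be comparable under <=_p:
a, b = Interval(1, 4), Interval(2, 3)
print(a.le_p(b), b.le_p(a))  # False False
```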

**Theorem 1** ([28])**.** *If* A : [t, *υ*] ⊂ R → $\mathcal{K}_C$ *is an IV-F such that* A(κ) = $[\mathfrak{A}_*(\varkappa), \mathfrak{A}^*(\varkappa)]$*, then* A *is Riemann integrable over* [t, *υ*] *if and only if* $\mathfrak{A}_*$ *and* $\mathfrak{A}^*$ *are both Riemann integrable over* [t, *υ*] *such that*

$$(IR)\int_{\mathrm{t}}^{\upsilon}\mathfrak{A}(\varkappa)d\varkappa = \left[(R)\int_{\mathrm{t}}^{\upsilon}\mathfrak{A}_*(\varkappa)d\varkappa,\ (R)\int_{\mathrm{t}}^{\upsilon}\mathfrak{A}^*(\varkappa)d\varkappa\right].$$

*The following interval-valued Riemann–Liouville fractional integral (IV-RL-fractional integral) operators were presented by Budak et al.* [1]:

*Let β* > 0 *and let* $L([\mathrm{t}, \upsilon], \mathcal{K}_C^+)$ *be the collection of all Lebesgue measurable IV-Fs on* [t, *υ*]*. Then, the IV-RL-fractional integrals of* A ∈ $L([\mathrm{t}, \upsilon], \mathcal{K}_C^+)$ *with order β* > 0 *are defined by*

$$\mathfrak{T}_{\mathrm{t}^+}^{\beta}\mathfrak{A}(\varkappa) = \frac{1}{\Gamma(\beta)}\int_{\mathrm{t}}^{\varkappa}(\varkappa - \mathrm{s})^{\beta-1}\mathfrak{A}(\mathrm{s})d\mathrm{s},\ (\varkappa > \mathrm{t}),\tag{3}$$

*and*

$$\mathfrak{T}_{\upsilon^-}^{\beta}\mathfrak{A}(\varkappa) = \frac{1}{\Gamma(\beta)}\int_{\varkappa}^{\upsilon}(\mathrm{s} - \varkappa)^{\beta-1}\mathfrak{A}(\mathrm{s})d\mathrm{s},\ (\varkappa < \upsilon),\tag{4}$$

*respectively, where* $\Gamma(\beta) = \int_0^{\infty}\mathrm{s}^{\beta-1}e^{-\mathrm{s}}d\mathrm{s}$ *is the Euler gamma function.*
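By Theorem 1, the operators (3) and (4) act endpoint-wise on an interval-valued A. A minimal numerical sketch of the left operator (3) follows; the function names and the midpoint-rule discretization are ours, not from any library:

```python
import math

# Left IV-RL fractional integral, approximated endpoint-wise:
# T^beta_{t+} A(x) = 1/Gamma(beta) * integral_t^x (x - s)^(beta-1) A(s) ds.
def rl_left(f, t, x, beta, n=20000):
    """Midpoint-rule approximation of the scalar left Riemann-Liouville integral."""
    h = (x - t) / n
    total = 0.0
    for k in range(n):
        s = t + (k + 0.5) * h
        total += (x - s) ** (beta - 1) * f(s)
    return total * h / math.gamma(beta)

def rl_left_interval(f_lo, f_up, t, x, beta):
    # Componentwise action on A = [A_*, A^*] (Theorem 1).
    return rl_left(f_lo, t, x, beta), rl_left(f_up, t, x, beta)

# Sanity check against a closed form: for A(s) = [s, 2s], beta = 2, t = 0,
# the lower endpoint is 1/Gamma(2) * integral_0^x (x - s) s ds = x^3 / 6.
lo, up = rl_left_interval(lambda s: s, lambda s: 2 * s, 0.0, 1.0, 2.0)
```

For β = 2 and x = 1 this returns approximately [1/6, 1/3], matching the closed form endpoint-wise.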

**Definition 1** ([27])**.** *A set* $\mathcal{K} = [\mathrm{t}, \upsilon] \subset \mathbb{R}^+ = (0, \infty)$ *is said to be a harmonically convex set if, for all* κ, *μ* ∈ *K and* s ∈ [0, 1]*, we have:*

$$\frac{\varkappa\mu}{\mathrm{s}\varkappa + (1-\mathrm{s})\mu} \in \mathcal{K}.\tag{5}$$

**Definition 2** ([27])**.** *Suppose that the mapping* A : [t, *υ*] → R*. For every* κ, *μ* ∈ [t, *υ*] *and* s ∈ [0, 1]*, if the successive inequality*

$$\mathfrak{A}\left(\frac{\varkappa\mu}{\mathrm{s}\varkappa + (1-\mathrm{s})\mu}\right) \le (1-\mathrm{s})\mathfrak{A}(\varkappa) + \mathrm{s}\mathfrak{A}(\mu),\tag{6}$$

*is valid, then* A *is named as a harmonically convex function (*ℋ*-convex function) on the interval* [t, *υ*]*. If (6) is reversed, then* A *is named as an* ℋ*-concave function on* [t, *υ*].

**Definition 3** ([29])**.** *Suppose that the mapping* A : [t, *υ*] → $\mathcal{K}_C$*. For every* κ, *μ* ∈ [t, *υ*] *and* s ∈ [0, 1]*, if the successive inequality*

$$\mathfrak{A}((1-\mathrm{s})\varkappa + \mathrm{s}\mu) \le_p (1-\mathrm{s})\mathfrak{A}(\varkappa) + \mathrm{s}\mathfrak{A}(\mu),\tag{7}$$

*is valid, then* A *is named as an LR-convex IV-F on the convex interval* [t, *υ*]*. If (7) is reversed, then* A *is named as an LR-concave IV-F on* [t, *υ*].

**Definition 4.** *Suppose that the mapping* A : [t, *υ*] → K*<sup>C</sup> . For all* κ, *μ* ∈ [t, *υ*] *and* s ∈ [0, 1]*, if the successive inequality*

$$\mathfrak{A}\left(\frac{\varkappa\mu}{\mathrm{s}\varkappa + (1-\mathrm{s})\mu}\right) \le_p (1-\mathrm{s})\mathfrak{A}(\varkappa) + \mathrm{s}\mathfrak{A}(\mu),\tag{8}$$

*is valid, then* A *is named as an LR-harmonically convex IV-F (LR-*ℋ*-convex IV-F) on the interval* [t, *υ*]*. If (8) is reversed, then* A *is called an LR-*ℋ*-concave IV-F on* [t, *υ*]*. The set of all LR-*ℋ*-convex (respectively, LR-*ℋ*-concave) IV-Fs is denoted by*

$$LRHSX([\mathfrak{t},\upsilon],\,\mathcal{K}\_{\mathbb{C}})(LRHSV([\mathfrak{t},\upsilon],\,\mathcal{K}\_{\mathbb{C}})).$$

**Theorem 2.** *Let K be a harmonically convex set, and let* A : *K* → $\mathcal{K}_C$ *be an IV-F given by*

$$\mathfrak{A}(\varkappa) = [\mathfrak{A}_*(\varkappa), \mathfrak{A}^*(\varkappa)],\tag{9}$$

*for all* κ ∈ *K. Then,* A *is an LR-*ℋ*-convex function on K if and only if* $\mathfrak{A}_*(\varkappa)$ *and* $\mathfrak{A}^*(\varkappa)$ *are* ℋ*-convex functions*.

**Proof.** Assume that $\mathfrak{A}_*(\varkappa)$ and $\mathfrak{A}^*(\varkappa)$ are ℋ-convex on *K*. Then, from (6), we have

$$\mathfrak{A}\_{\ast} \left( \frac{\varkappa \mu}{\mathrm{s} \varkappa + (1 - \mathrm{s}) \mu} \right) \leq (1 - \mathrm{s}) \mathfrak{A}\_{\ast} (\varkappa) + \mathrm{s} \mathfrak{A}\_{\ast} (\mu),$$

and

$$\mathfrak{A}^\* \left( \frac{\varkappa \mu}{\mathsf{s} \varkappa + (1 - \mathsf{s}) \mu} \right) \le (1 - \mathsf{s}) \mathfrak{A}^\*(\varkappa) + \mathsf{s} \mathfrak{A}^\*(\mu).$$

Then, by (9), we obtain

$$\mathfrak{A}\left(\frac{\varkappa\mu}{\mathrm{s}\varkappa+(1-\mathrm{s})\mu}\right) = \left[\mathfrak{A}_*\left(\frac{\varkappa\mu}{\mathrm{s}\varkappa+(1-\mathrm{s})\mu}\right), \mathfrak{A}^*\left(\frac{\varkappa\mu}{\mathrm{s}\varkappa+(1-\mathrm{s})\mu}\right)\right] \le_p (1-\mathrm{s})[\mathfrak{A}_*(\varkappa), \mathfrak{A}^*(\varkappa)] + \mathrm{s}[\mathfrak{A}_*(\mu), \mathfrak{A}^*(\mu)],$$

that is

$$\mathfrak{A}\left(\frac{\varkappa\mu}{\mathrm{s}\varkappa+(1-\mathrm{s})\mu}\right) \le_p (1-\mathrm{s})\mathfrak{A}(\varkappa) + \mathrm{s}\mathfrak{A}(\mu),\ \forall\ \varkappa, \mu \in K,\ \mathrm{s} \in [0,1].$$

Hence, A is an LR-ℋ-convex *IV-F* on *K*.

Conversely, let A be an LR-ℋ-convex IV-F on *K*. Then, for all κ, *μ* ∈ *K* and s ∈ [0, 1], we have

$$
\mathfrak{A}\left(\frac{\varkappa\mu}{\mathrm{s}\varkappa + (1-\mathrm{s})\mu}\right) \le_p (1-\mathrm{s})\mathfrak{A}(\varkappa) + \mathrm{s}\mathfrak{A}(\mu).
$$

Therefore, applying (9) to the left-hand side of the above inequality, we have

$$\mathfrak{A}\left(\frac{\varkappa\mu}{\mathrm{s}\varkappa+(1-\mathrm{s})\mu}\right) = \left[\mathfrak{A}_*\left(\frac{\varkappa\mu}{\mathrm{s}\varkappa+(1-\mathrm{s})\mu}\right), \mathfrak{A}^*\left(\frac{\varkappa\mu}{\mathrm{s}\varkappa+(1-\mathrm{s})\mu}\right)\right].$$

Again, from (9), we obtain

$$(1-\mathrm{s})\mathfrak{A}(\varkappa) + \mathrm{s}\mathfrak{A}(\mu) = (1-\mathrm{s})[\mathfrak{A}_*(\varkappa), \mathfrak{A}^*(\varkappa)] + \mathrm{s}[\mathfrak{A}_*(\mu), \mathfrak{A}^*(\mu)],$$

for all κ, *μ* ∈ *K* and s ∈ [0, 1]. Then, by the LR-ℋ-convexity of A, we have, for all κ, *μ* ∈ *K* and s ∈ [0, 1],

$$\mathfrak{A}_*\left(\frac{\varkappa\mu}{\mathrm{s}\varkappa + (1-\mathrm{s})\mu}\right) \le (1-\mathrm{s})\mathfrak{A}_*(\varkappa) + \mathrm{s}\mathfrak{A}_*(\mu),$$

and

$$\mathfrak{A}^*\left(\frac{\varkappa\mu}{\mathrm{s}\varkappa + (1-\mathrm{s})\mu}\right) \le (1-\mathrm{s})\mathfrak{A}^*(\varkappa) + \mathrm{s}\mathfrak{A}^*(\mu).$$

This concludes the proof.

**Remark 2.** *If one attempts to take* $\mathfrak{A}_*(\varkappa) = \mathfrak{A}^*(\varkappa)$*, then, from Definition 4, we achieve Definition 2.*

**Example 1.** *We consider the IV-F* A : [1, 2] → $\mathcal{K}_C$ *defined by* $\mathfrak{A}(\varkappa) = \left[\ln(\varkappa),\, 2\sqrt{\varkappa}\right]$*. The endpoint functions* $\mathfrak{A}_*(\varkappa) = \ln(\varkappa)$ *and* $\mathfrak{A}^*(\varkappa) = 2\sqrt{\varkappa}$ *are* ℋ*-convex functions. Hence,* A(κ) *is an LR-*ℋ*-convex IV-F.*
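The claim in Example 1 can also be probed numerically (this is a sanity check, not a proof; the helper name and tolerance below are ours) by sampling the defining inequality (6) at random points of [1, 2]:

```python
import math, random

# Randomized check of inequality (6):
# f(x*m / (s*x + (1-s)*m)) <= (1-s)*f(x) + s*f(m)  for x, m in [lo, up], s in [0, 1].
def is_h_convex(f, lo, up, trials=2000, tol=1e-12):
    rng = random.Random(0)  # seeded for reproducibility
    for _ in range(trials):
        x = rng.uniform(lo, up)
        m = rng.uniform(lo, up)
        s = rng.random()
        harm = x * m / (s * x + (1 - s) * m)
        if f(harm) > (1 - s) * f(x) + s * f(m) + tol:
            return False
    return True

ok_lower = is_h_convex(math.log, 1.0, 2.0)                 # A_* = ln
ok_upper = is_h_convex(lambda k: 2 * math.sqrt(k), 1.0, 2.0)  # A^* = 2*sqrt
```

Both checks return `True`, consistent with the fact that $\mathfrak{A}_*(1/\varkappa)$ and $\mathfrak{A}^*(1/\varkappa)$ are convex (Theorem 3 below gives the general equivalence).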

In the next result, we establish a relation between LR-convex *IV-Fs* and LR-ℋ-convex *IV-Fs*.

**Theorem 3.** *Let* A : *K* → $\mathcal{K}_C$ *be an IV-F such that* A(κ) = $[\mathfrak{A}_*(\varkappa), \mathfrak{A}^*(\varkappa)]$ *for all* κ ∈ *K. Then,* A(κ) *is an LR-*ℋ*-convex IV-F on K if and only if* $\mathfrak{A}\left(\frac{1}{\varkappa}\right)$ *is an LR-convex IV-F on K*.

**Proof.** Since A(κ) is an LR-ℋ-convex *IV-F*, for κ, μ ∈ [t, υ] and s ∈ [0, 1], we have

$$\mathfrak{A}\left(\frac{\varkappa\mu}{\mathrm{s}\varkappa + (1-\mathrm{s})\mu}\right) \le_p (1-\mathrm{s})\mathfrak{A}(\varkappa) + \mathrm{s}\mathfrak{A}(\mu).$$

Therefore, we have

$$
\begin{split}
\mathfrak{A}_*\left(\frac{\varkappa\mu}{\mathrm{s}\varkappa + (1-\mathrm{s})\mu}\right) &\le (1-\mathrm{s})\mathfrak{A}_*(\varkappa) + \mathrm{s}\mathfrak{A}_*(\mu),\\
\mathfrak{A}^*\left(\frac{\varkappa\mu}{\mathrm{s}\varkappa + (1-\mathrm{s})\mu}\right) &\le (1-\mathrm{s})\mathfrak{A}^*(\varkappa) + \mathrm{s}\mathfrak{A}^*(\mu).
\end{split}\tag{10}
$$

Consider $\theta(\varkappa) = \mathfrak{A}\left(\frac{1}{\varkappa}\right)$. Replacing κ and μ by $\frac{1}{\varkappa}$ and $\frac{1}{\mu}$, respectively, and applying (10), we obtain

$$\begin{split}
\mathfrak{A}_*\left(\frac{\frac{1}{\varkappa\mu}}{\mathrm{s}\frac{1}{\varkappa} + (1-\mathrm{s})\frac{1}{\mu}}\right) &= \mathfrak{A}_*\left(\frac{1}{(1-\mathrm{s})\varkappa + \mathrm{s}\mu}\right) = \theta_*((1-\mathrm{s})\varkappa + \mathrm{s}\mu)\\
&\le \mathrm{s}\,\mathfrak{A}_*\left(\frac{1}{\mu}\right) + (1-\mathrm{s})\mathfrak{A}_*\left(\frac{1}{\varkappa}\right) = \mathrm{s}\,\theta_*(\mu) + (1-\mathrm{s})\theta_*(\varkappa),\\
\mathfrak{A}^*\left(\frac{\frac{1}{\varkappa\mu}}{\mathrm{s}\frac{1}{\varkappa} + (1-\mathrm{s})\frac{1}{\mu}}\right) &= \mathfrak{A}^*\left(\frac{1}{(1-\mathrm{s})\varkappa + \mathrm{s}\mu}\right) = \theta^*((1-\mathrm{s})\varkappa + \mathrm{s}\mu)\\
&\le \mathrm{s}\,\mathfrak{A}^*\left(\frac{1}{\mu}\right) + (1-\mathrm{s})\mathfrak{A}^*\left(\frac{1}{\varkappa}\right) = \mathrm{s}\,\theta^*(\mu) + (1-\mathrm{s})\theta^*(\varkappa).
\end{split}$$

It follows that

$$\left[\mathfrak{A}_*\left(\frac{\frac{1}{\varkappa\mu}}{\mathrm{s}\frac{1}{\varkappa}+(1-\mathrm{s})\frac{1}{\mu}}\right), \mathfrak{A}^*\left(\frac{\frac{1}{\varkappa\mu}}{\mathrm{s}\frac{1}{\varkappa}+(1-\mathrm{s})\frac{1}{\mu}}\right)\right] = [\theta_*((1-\mathrm{s})\varkappa+\mathrm{s}\mu),\ \theta^*((1-\mathrm{s})\varkappa+\mathrm{s}\mu)] \le_p \mathrm{s}[\theta_*(\mu), \theta^*(\mu)] + (1-\mathrm{s})[\theta_*(\varkappa), \theta^*(\varkappa)],$$

which implies that

$$
\theta((1-\mathrm{s})\varkappa + \mathrm{s}\mu) \le_p \mathrm{s}\theta(\mu) + (1-\mathrm{s})\theta(\varkappa).
$$

Hence, *θ*(κ) is an LR-convex IV-F.

Conversely, let *θ* be an LR-convex *IV-F* on K. Then, for all κ, μ ∈ K and s ∈ [0, 1], we have

$$
\theta(\mathsf{s}\varkappa + (1-\mathsf{s})\mu) \leq\_{\mathsf{P}} \mathsf{s}\theta(\varkappa) + (1-\mathsf{s})\theta(\mu).
$$

By using the same steps as above, we have

$$\begin{split}
\theta_*\left(\mathrm{s}\frac{1}{\varkappa} + (1-\mathrm{s})\frac{1}{\mu}\right) &= \mathfrak{A}_*\left(\frac{1}{\mathrm{s}\frac{1}{\varkappa} + (1-\mathrm{s})\frac{1}{\mu}}\right) = \mathfrak{A}_*\left(\frac{\varkappa\mu}{(1-\mathrm{s})\varkappa + \mathrm{s}\mu}\right)\\
&\le \mathrm{s}\,\theta_*\left(\frac{1}{\varkappa}\right) + (1-\mathrm{s})\theta_*\left(\frac{1}{\mu}\right) = \mathrm{s}\,\mathfrak{A}_*(\varkappa) + (1-\mathrm{s})\mathfrak{A}_*(\mu),\\
\theta^*\left(\mathrm{s}\frac{1}{\varkappa} + (1-\mathrm{s})\frac{1}{\mu}\right) &= \mathfrak{A}^*\left(\frac{1}{\mathrm{s}\frac{1}{\varkappa} + (1-\mathrm{s})\frac{1}{\mu}}\right) = \mathfrak{A}^*\left(\frac{\varkappa\mu}{(1-\mathrm{s})\varkappa + \mathrm{s}\mu}\right)\\
&\le \mathrm{s}\,\theta^*\left(\frac{1}{\varkappa}\right) + (1-\mathrm{s})\theta^*\left(\frac{1}{\mu}\right) = \mathrm{s}\,\mathfrak{A}^*(\varkappa) + (1-\mathrm{s})\mathfrak{A}^*(\mu).
\end{split}$$

It follows that

$$\mathfrak{A}\left(\frac{\varkappa\mu}{\mathrm{s}\varkappa + (1-\mathrm{s})\mu}\right) \le_p (1-\mathrm{s})\mathfrak{A}(\varkappa) + \mathrm{s}\mathfrak{A}(\mu).$$

This completes the proof.

**Remark 3.** *If one attempts to take* $\mathfrak{A}_*(\varkappa) = \mathfrak{A}^*(\varkappa)$*, then, from Theorem 3, we acquire Lemma 2.1 of* [30].

#### **3. Main Results**

Budak et al. [1] introduced the notion of *IV-RL*-fractional integrals. As may be seen, the definitions of fractional integrals and of *IV-RL*-fractional integrals have comparable configurations. As a result of this observation, we may state the *H-H* inequality for LR-harmonically convex *IV-Fs* using *IV-RL*-fractional integrals.

**Theorem 4.** *Let* A ∈ *LRHSX*$([\mathrm{t}, \upsilon], \mathcal{K}_C^+)$ *be defined on the interval* [t, *υ*] *such that* A(κ) = $[\mathfrak{A}_*(\varkappa), \mathfrak{A}^*(\varkappa)]$ *for all* κ ∈ [t, *υ*]*. If* A ∈ *L*$([\mathrm{t}, \upsilon], \mathcal{K}_C^+)$ *is fractional integrable over* [t, *υ*]*, then*

$$\mathfrak{A}\left(\frac{2\mathrm{t}\upsilon}{\mathrm{t}+\upsilon}\right) \le_p \frac{\Gamma(\beta+1)}{2}\left(\frac{\mathrm{t}\upsilon}{\upsilon-\mathrm{t}}\right)^{\beta}\left[\mathfrak{T}^{\beta}_{\left(\frac{1}{\mathrm{t}}\right)^-}(\mathfrak{A}\circ\Psi)\left(\frac{1}{\upsilon}\right) + \mathfrak{T}^{\beta}_{\left(\frac{1}{\upsilon}\right)^+}(\mathfrak{A}\circ\Psi)\left(\frac{1}{\mathrm{t}}\right)\right] \le_p \frac{\mathfrak{A}(\mathrm{t})+\mathfrak{A}(\upsilon)}{2}.\tag{11}$$

*If* A(κ) *is an LR-*ℋ*-concave IV-F, then*

$$\mathfrak{A}\left(\frac{2\mathrm{t}\upsilon}{\mathrm{t}+\upsilon}\right) \ge_p \frac{\Gamma(\beta+1)}{2}\left(\frac{\mathrm{t}\upsilon}{\upsilon-\mathrm{t}}\right)^{\beta}\left[\mathfrak{T}^{\beta}_{\left(\frac{1}{\mathrm{t}}\right)^-}(\mathfrak{A}\circ\Psi)\left(\frac{1}{\upsilon}\right) + \mathfrak{T}^{\beta}_{\left(\frac{1}{\upsilon}\right)^+}(\mathfrak{A}\circ\Psi)\left(\frac{1}{\mathrm{t}}\right)\right] \ge_p \frac{\mathfrak{A}(\mathrm{t})+\mathfrak{A}(\upsilon)}{2},\tag{12}$$

*where* $\Psi(\varkappa) = \frac{1}{\varkappa}$.

**Proof.** Let A ∈ *LRHSX*$([\mathrm{t}, \upsilon], \mathcal{K}_C^+)$. Then, by hypothesis, we have

$$2\mathfrak{A}\left(\frac{2\mathrm{t}\upsilon}{\mathrm{t}+\upsilon}\right) \le_p \mathfrak{A}\left(\frac{\mathrm{t}\upsilon}{\mathrm{st}+(1-\mathrm{s})\upsilon}\right) + \mathfrak{A}\left(\frac{\mathrm{t}\upsilon}{(1-\mathrm{s})\mathrm{t}+\mathrm{s}\upsilon}\right).$$

Therefore, we have

$$\begin{cases} 2\mathfrak{A}\_{\ast} \left( \frac{2\mathfrak{t}\upsilon}{\mathfrak{t} + \upsilon} \right) \leq \mathfrak{A}\_{\ast} \left( \frac{\mathfrak{t}\upsilon}{\mathfrak{s}\mathfrak{t} + (1 - \mathfrak{s})\upsilon} \right) + \mathfrak{A}\_{\ast} \left( \frac{\mathfrak{t}\upsilon}{(1 - \mathfrak{s})\mathfrak{t} + \mathfrak{s}\upsilon} \right),\\ 2\mathfrak{A}^{\ast} \left( \frac{2\mathfrak{t}\upsilon}{\mathfrak{t} + \upsilon} \right) \leq \mathfrak{A}^{\ast} \left( \frac{\mathfrak{t}\upsilon}{\mathfrak{s}\mathfrak{t} + (1 - \mathfrak{s})\upsilon} \right) + \mathfrak{A}^{\ast} \left( \frac{\mathfrak{t}\upsilon}{(1 - \mathfrak{s})\mathfrak{t} + \mathfrak{s}\upsilon} \right). \end{cases}$$

Consider $\theta(\varkappa) = \mathfrak{A}\left(\frac{1}{\varkappa}\right)$. By Theorem 3, *θ*(κ) is an LR-convex *IV-F*. Then, the above inequality can be written as

$$2\theta\_\*\left(\frac{\mathbf{t}+\upsilon}{2\mathbf{t}\upsilon}\right) \le \theta\_\*\left(\frac{\mathbf{st}+(\mathbf{1}-\mathbf{s})\upsilon}{\mathbf{t}\upsilon}\right) + \theta\_\*\left(\frac{(\mathbf{1}-\mathbf{s})\mathbf{t}+\mathbf{s}\upsilon}{\mathbf{t}\upsilon}\right).$$

Multiplying both sides by $\mathrm{s}^{\beta-1}$ and integrating the obtained result with respect to s over (0, 1), we have

$$2\int_0^1 \mathrm{s}^{\beta-1}\theta_*\left(\frac{\mathrm{t}+\upsilon}{2\mathrm{t}\upsilon}\right)d\mathrm{s} \le \int_0^1 \mathrm{s}^{\beta-1}\theta_*\left(\frac{\mathrm{st}+(1-\mathrm{s})\upsilon}{\mathrm{t}\upsilon}\right)d\mathrm{s} + \int_0^1 \mathrm{s}^{\beta-1}\theta_*\left(\frac{(1-\mathrm{s})\mathrm{t}+\mathrm{s}\upsilon}{\mathrm{t}\upsilon}\right)d\mathrm{s}.$$

Let $\varkappa = \frac{(1-\mathrm{s})\mathrm{t}+\mathrm{s}\upsilon}{\mathrm{t}\upsilon}$ and $\mu = \frac{\mathrm{st}+(1-\mathrm{s})\upsilon}{\mathrm{t}\upsilon}$. Then, we have

$$\frac{2}{\beta}\theta_*\left(\frac{\mathrm{t}+\upsilon}{2\mathrm{t}\upsilon}\right) \le \left(\frac{\mathrm{t}\upsilon}{\upsilon-\mathrm{t}}\right)^{\beta}\int_{\frac{1}{\upsilon}}^{\frac{1}{\mathrm{t}}}\left(\frac{1}{\mathrm{t}}-\mu\right)^{\beta-1}\theta_*(\mu)d\mu + \left(\frac{\mathrm{t}\upsilon}{\upsilon-\mathrm{t}}\right)^{\beta}\int_{\frac{1}{\upsilon}}^{\frac{1}{\mathrm{t}}}\left(\varkappa-\frac{1}{\upsilon}\right)^{\beta-1}\theta_*(\varkappa)d\varkappa = \Gamma(\beta)\left(\frac{\mathrm{t}\upsilon}{\upsilon-\mathrm{t}}\right)^{\beta}\left[\mathfrak{T}^{\beta}_{\left(\frac{1}{\mathrm{t}}\right)^-}\theta_*\left(\frac{1}{\upsilon}\right) + \mathfrak{T}^{\beta}_{\left(\frac{1}{\upsilon}\right)^+}\theta_*\left(\frac{1}{\mathrm{t}}\right)\right].$$
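For clarity, the change of variables behind this step can be verified directly; solving the substitution $\mu = \frac{\mathrm{st}+(1-\mathrm{s})\upsilon}{\mathrm{t}\upsilon}$ for s shows where the kernel and the factor $\left(\frac{\mathrm{t}\upsilon}{\upsilon-\mathrm{t}}\right)^{\beta}$ come from:

```latex
\mu=\frac{\upsilon+\mathrm{s}(\mathrm{t}-\upsilon)}{\mathrm{t}\upsilon}
\quad\Longrightarrow\quad
\mathrm{s}=\frac{\mathrm{t}\upsilon}{\upsilon-\mathrm{t}}\left(\frac{1}{\mathrm{t}}-\mu\right),
\qquad
d\mathrm{s}=-\frac{\mathrm{t}\upsilon}{\upsilon-\mathrm{t}}\,d\mu,
```

so that $\mathrm{s}^{\beta-1}d\mathrm{s} = -\left(\frac{\mathrm{t}\upsilon}{\upsilon-\mathrm{t}}\right)^{\beta}\left(\frac{1}{\mathrm{t}}-\mu\right)^{\beta-1}d\mu$, while s running from 0 to 1 carries μ from $\frac{1}{\mathrm{t}}$ to $\frac{1}{\upsilon}$; reversing the orientation of the integral removes the minus sign. The computation for κ is analogous.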

Similarly, for $\theta^*(\varkappa)$, we have

$$\frac{2}{\beta}\theta^*\left(\frac{\mathrm{t}+\upsilon}{2\mathrm{t}\upsilon}\right) \le \Gamma(\beta)\left(\frac{\mathrm{t}\upsilon}{\upsilon-\mathrm{t}}\right)^{\beta}\left[\mathfrak{T}^{\beta}_{\left(\frac{1}{\mathrm{t}}\right)^-}\theta^*\left(\frac{1}{\upsilon}\right) + \mathfrak{T}^{\beta}_{\left(\frac{1}{\upsilon}\right)^+}\theta^*\left(\frac{1}{\mathrm{t}}\right)\right].$$

It follows that

$$2\left[\theta_*\left(\frac{\mathrm{t}+\upsilon}{2\mathrm{t}\upsilon}\right), \theta^*\left(\frac{\mathrm{t}+\upsilon}{2\mathrm{t}\upsilon}\right)\right] \le_p \Gamma(\beta+1)\left(\frac{\mathrm{t}\upsilon}{\upsilon-\mathrm{t}}\right)^{\beta}\left[\mathfrak{T}^{\beta}_{\left(\frac{1}{\mathrm{t}}\right)^-}\theta_*\left(\frac{1}{\upsilon}\right) + \mathfrak{T}^{\beta}_{\left(\frac{1}{\upsilon}\right)^+}\theta_*\left(\frac{1}{\mathrm{t}}\right),\ \mathfrak{T}^{\beta}_{\left(\frac{1}{\mathrm{t}}\right)^-}\theta^*\left(\frac{1}{\upsilon}\right) + \mathfrak{T}^{\beta}_{\left(\frac{1}{\upsilon}\right)^+}\theta^*\left(\frac{1}{\mathrm{t}}\right)\right].$$

That is,

$$2\,\theta\left(\frac{\mathrm{t}+\upsilon}{2\mathrm{t}\upsilon}\right) \le_p \Gamma(\beta+1)\left(\frac{\mathrm{t}\upsilon}{\upsilon-\mathrm{t}}\right)^{\beta}\left[\mathfrak{T}^{\beta}_{\left(\frac{1}{\mathrm{t}}\right)^-}\theta\left(\frac{1}{\upsilon}\right) + \mathfrak{T}^{\beta}_{\left(\frac{1}{\upsilon}\right)^+}\theta\left(\frac{1}{\mathrm{t}}\right)\right].\tag{13}$$

In a similar way as above, we have

$$\Gamma(\beta)\left(\frac{\mathrm{t}\upsilon}{\upsilon-\mathrm{t}}\right)^{\beta}\left[\mathfrak{T}^{\beta}_{\left(\frac{1}{\mathrm{t}}\right)^-}\theta\left(\frac{1}{\upsilon}\right) + \mathfrak{T}^{\beta}_{\left(\frac{1}{\upsilon}\right)^+}\theta\left(\frac{1}{\mathrm{t}}\right)\right] \le_p \frac{\theta\left(\frac{1}{\mathrm{t}}\right) + \theta\left(\frac{1}{\upsilon}\right)}{\beta}.\tag{14}$$

Combining (13) and (14), we have

$$\theta\left(\frac{\mathrm{t}+\upsilon}{2\mathrm{t}\upsilon}\right) \le_p \frac{\Gamma(\beta+1)}{2}\left(\frac{\mathrm{t}\upsilon}{\upsilon-\mathrm{t}}\right)^{\beta}\left[\mathfrak{T}^{\beta}_{\left(\frac{1}{\mathrm{t}}\right)^-}\theta\left(\frac{1}{\upsilon}\right) + \mathfrak{T}^{\beta}_{\left(\frac{1}{\upsilon}\right)^+}\theta\left(\frac{1}{\mathrm{t}}\right)\right] \le_p \frac{\theta\left(\frac{1}{\mathrm{t}}\right) + \theta\left(\frac{1}{\upsilon}\right)}{2},$$

that is

$$\mathfrak{A}\left(\frac{2\mathrm{t}\upsilon}{\mathrm{t}+\upsilon}\right) \le_p \frac{\Gamma(\beta+1)}{2}\left(\frac{\mathrm{t}\upsilon}{\upsilon-\mathrm{t}}\right)^{\beta}\left[\mathfrak{T}^{\beta}_{\left(\frac{1}{\mathrm{t}}\right)^-}(\mathfrak{A}\circ\Psi)\left(\frac{1}{\upsilon}\right) + \mathfrak{T}^{\beta}_{\left(\frac{1}{\upsilon}\right)^+}(\mathfrak{A}\circ\Psi)\left(\frac{1}{\mathrm{t}}\right)\right] \le_p \frac{\mathfrak{A}(\mathrm{t})+\mathfrak{A}(\upsilon)}{2}.$$

Hence, the required result.
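As an illustrative numerical sanity check of the scalar core of the inequality just proved (our sketch, not part of the paper), one can test it on a concrete harmonically convex function. The choices f(x) = x², [t, υ] = [1, 2], and β = 0.5 below are hypothetical; the fractional integrals are approximated after a substitution that removes the endpoint singularity:

```python
# Sanity check (illustrative only): f(x) = x**2 is harmonically convex on [1, 2]
# because x -> f(1/x) = 1/x**2 is convex there.  All parameter choices are ours.
from math import gamma

def rl_lower(g, a, b, beta, n=4000):
    # J^beta_{b-} g(a) = (1/Gamma(beta)) * int_a^b (s - a)**(beta - 1) * g(s) ds,
    # computed via s = a + w**(1/beta), which removes the endpoint singularity.
    h = (b - a) ** beta / n
    return sum(g(a + ((i + 0.5) * h) ** (1 / beta)) for i in range(n)) * h / (beta * gamma(beta))

def rl_upper(g, a, b, beta, n=4000):
    # J^beta_{a+} g(b) = (1/Gamma(beta)) * int_a^b (b - s)**(beta - 1) * g(s) ds.
    h = (b - a) ** beta / n
    return sum(g(b - ((i + 0.5) * h) ** (1 / beta)) for i in range(n)) * h / (beta * gamma(beta))

f = lambda x: x ** 2
t, v, beta = 1.0, 2.0, 0.5
a, b = 1 / v, 1 / t              # the reciprocal interval [1/v, 1/t]
g = lambda s: f(1 / s)           # g = f o Psi with Psi(x) = 1/x

left = f(2 * t * v / (t + v))
middle = gamma(beta + 1) / 2 * (t * v / (v - t)) ** beta * (rl_upper(g, a, b, beta) + rl_lower(g, a, b, beta))
right = (f(t) + f(v)) / 2
assert left <= middle <= right   # 16/9 <= middle <= 5/2
```

For this test function the three quantities are strictly ordered, as the theorem predicts.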

**Remark 4.** *On the basis of the inequality in Theorem 4, we consider certain special cases below. If we take β* = 1*, then we obtain the following inequality, which is also new:*

$$
\mathfrak{A}\left(\frac{2\mathfrak{t}\upsilon}{\mathfrak{t}+\upsilon}\right) \leq\_{\mathbb{P}} \frac{\mathfrak{t}\upsilon}{\upsilon-\mathfrak{t}} \int\_{\mathfrak{t}}^{\upsilon} \frac{\mathfrak{A}(\varkappa)}{\varkappa^{2}} d\varkappa \leq\_{\mathbb{P}} \frac{\mathfrak{A}(\mathfrak{t})+\mathfrak{A}(\upsilon)}{2}.\tag{15}
$$

*If we take* A_*(κ) = A^*(κ)*, then we obtain the following inequality; see* [30]:

$$\mathfrak{A}\left(\frac{2t\upsilon}{t+\upsilon}\right) \le \frac{\Gamma(\beta+1)}{2}\left(\frac{t\upsilon}{\upsilon-t}\right)^{\beta}\left[\mathfrak{T}^{\beta}_{(\frac{1}{t})^{-}}(\mathfrak{A}\circ\Psi)\left(\frac{1}{\upsilon}\right)+\mathfrak{T}^{\beta}_{(\frac{1}{\upsilon})^{+}}(\mathfrak{A}\circ\Psi)\left(\frac{1}{t}\right)\right] \le \frac{\mathfrak{A}(t)+\mathfrak{A}(\upsilon)}{2}.\tag{16}$$

*If we take* A_*(κ) = A^*(κ) *with β* = 1*, then we recover the following inequality; see* [27]:

$$\mathfrak{A}\left(\frac{2t\upsilon}{t+\upsilon}\right) \le \frac{t\upsilon}{\upsilon-t} \int_{t}^{\upsilon} \frac{\mathfrak{A}(\varkappa)}{\varkappa^{2}}\,d\varkappa \le \frac{\mathfrak{A}(t)+\mathfrak{A}(\upsilon)}{2}.\tag{17}$$
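Inequality (17) can be checked numerically on a concrete harmonically convex function; the sketch below (our illustration, with the hypothetical choices f(x) = x² and [t, υ] = [1, 2]) uses a simple midpoint rule:

```python
# Illustrative check of inequality (17): f(x) = x**2 is harmonically convex on
# [1, 2] because x -> f(1/x) = 1/x**2 is convex there.
f = lambda x: x ** 2
t, v, n = 1.0, 2.0, 20000
h = (v - t) / n
integral = sum(f(t + (i + 0.5) * h) / (t + (i + 0.5) * h) ** 2 for i in range(n)) * h

left = f(2 * t * v / (t + v))        # f(4/3) = 16/9
middle = t * v / (v - t) * integral  # here f(x)/x**2 = 1, so middle = 2
right = (f(t) + f(v)) / 2            # 5/2
assert left <= middle <= right
```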

**Example 2.** *Consider the IV-F* A : [0, 2] → K_C^+ *defined by* A(κ) = [1, 2]√κ*, that is,* A_*(κ) = √κ *and* A^*(κ) = 2√κ*. Then, all assumptions mentioned in Theorem 4 are met. If β* = 1*, with* t = 0 *and* υ = 2*, we compute the following:*

$$\mathfrak{A}_*\left(\frac{2t\upsilon}{t+\upsilon}\right) \le \frac{t\upsilon}{\upsilon-t}\int_{t}^{\upsilon}\frac{\mathfrak{A}_*(\varkappa)}{\varkappa^{2}}\,d\varkappa \le \frac{\mathfrak{A}_*(t)+\mathfrak{A}_*(\upsilon)}{2},$$

$$\mathfrak{A}_*\left(\frac{2t\upsilon}{t+\upsilon}\right)=\mathfrak{A}_*(0)=0,\qquad \frac{t\upsilon}{\upsilon-t}\int_{t}^{\upsilon}\frac{\sqrt{\varkappa}}{\varkappa^{2}}\,d\varkappa=0,\qquad \frac{\mathfrak{A}_*(t)+\mathfrak{A}_*(\upsilon)}{2}=\frac{1}{\sqrt{2}}.$$

*That means*

$$0 \le 0 \le \frac{1}{\sqrt{2}}.$$

*Similarly, it can be easily shown that*

$$\mathfrak{A}^*\left(\frac{2t\upsilon}{t+\upsilon}\right) \le \frac{t\upsilon}{\upsilon-t}\int_{t}^{\upsilon}\frac{\mathfrak{A}^*(\varkappa)}{\varkappa^{2}}\,d\varkappa \le \frac{\mathfrak{A}^*(t)+\mathfrak{A}^*(\upsilon)}{2}.$$

*Now*

$$\mathfrak{A}^*\left(\frac{2t\upsilon}{t+\upsilon}\right) = \mathfrak{A}^*(0) = 0,$$

$$\frac{t\upsilon}{\upsilon-t}\int_{t}^{\upsilon}\frac{2\sqrt{\varkappa}}{\varkappa^{2}}\,d\varkappa = 0,$$

$$\frac{\mathfrak{A}^\*(\mathfrak{t}) + \mathfrak{A}^\*(\upsilon)}{2} = \sqrt{2}.$$

*From which, we have*

$$0 \le 0 \le \sqrt{2}$$

*that is*

$$[0,0] \leq\_{\mathbb{P}} [0,0] \leq\_{\mathbb{P}} \left[ \frac{1}{\sqrt{2}}, \sqrt{2} \right].$$

*Hence,*

$$\mathfrak{A}\left(\frac{2t\upsilon}{t+\upsilon}\right) \leq_{\mathbb{P}} \frac{\Gamma(\beta+1)}{2}\left(\frac{t\upsilon}{\upsilon-t}\right)^{\beta}\left[\mathfrak{T}^{\beta}_{(\frac{1}{t})^{-}}(\mathfrak{A}\circ\Psi)\left(\frac{1}{\upsilon}\right)+\mathfrak{T}^{\beta}_{(\frac{1}{\upsilon})^{+}}(\mathfrak{A}\circ\Psi)\left(\frac{1}{t}\right)\right] \leq_{\mathbb{P}} \frac{\mathfrak{A}(t)+\mathfrak{A}(\upsilon)}{2}.$$
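The endpoint values in Example 2 can be reproduced with a few lines of arithmetic (an illustrative script of ours; the helper names are hypothetical):

```python
# Arithmetic check of the bounds in Example 2: A(k) = [sqrt(k), 2*sqrt(k)]
# on [t, v] = [0, 2] with beta = 1.
from math import sqrt

lower = lambda k: sqrt(k)       # A_*(k)
upper = lambda k: 2 * sqrt(k)   # A^*(k)
t, v = 0.0, 2.0

mid = 2 * t * v / (t + v)       # harmonic mean of the endpoints = 0
assert lower(mid) == 0.0 and upper(mid) == 0.0
assert abs((lower(t) + lower(v)) / 2 - 1 / sqrt(2)) < 1e-12
assert abs((upper(t) + upper(v)) / 2 - sqrt(2)) < 1e-12
# hence [0, 0] <=_P [0, 0] <=_P [1/sqrt(2), sqrt(2)], as claimed
```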

*Based on the IV-RL-fractional integrals, our next main results, H-H type inequalities for the product of two LR-harmonically convex IV-Fs, are presented as follows.*

**Theorem 5.** *Let* A, *Ψ* ∈ *LRHSX*([t, *υ*], K_C^+) *be LR-harmonically convex IV-Fs on* [t, *υ*] *such that* A(κ) = [A_*(κ), A^*(κ)] *and Ψ*(κ) = [*Ψ*_*(κ), *Ψ*^*(κ)] *for all* κ ∈ [t, *υ*]*. If* A × *Ψ* ∈ L([t, *υ*], K_C^+) *is IV-RL-fractional integrable over* [t, *υ*]*, then*

$$\frac{\Gamma(\beta+1)}{2}\left(\frac{t\upsilon}{\upsilon-t}\right)^{\beta}\left[\mathfrak{T}^{\beta}_{(\frac{1}{\upsilon})^{+}}\,\mathfrak{A}\circ\Psi\left(\frac{1}{t}\right)\times\Psi\circ\Psi\left(\frac{1}{t}\right)+\mathfrak{T}^{\beta}_{(\frac{1}{t})^{-}}\,\mathfrak{A}\circ\Psi\left(\frac{1}{\upsilon}\right)\times\Psi\circ\Psi\left(\frac{1}{\upsilon}\right)\right] \leq_{\mathbb{P}} \left(\frac{1}{2}-\frac{\beta}{(\beta+1)(\beta+2)}\right)\mathfrak{D}(t,\upsilon)+\left(\frac{\beta}{(\beta+1)(\beta+2)}\right)\mathcal{Q}(t,\upsilon),$$

*where* D(t, *υ*) = A(t) × *Ψ*(t) + A(*υ*) × *Ψ*(*υ*), Q(t, *υ*) = A(t) × *Ψ*(*υ*) + A(*υ*) × *Ψ*(t), D(t, *υ*) = [D_*(t, *υ*), D^*(t, *υ*)] *and* Q(t, *υ*) = [Q_*(t, *υ*), Q^*(t, *υ*)].

**Proof.** Since A, *Ψ* ∈ *LRHSX*([t, *υ*], K_C^+), we have

$$\mathfrak{A}\_{\ast} \left( \frac{\mathfrak{t}\upsilon}{\mathfrak{st} + (1 - \mathfrak{s})\upsilon} \right) \le (1 - \mathfrak{s})\mathfrak{A}\_{\ast}(\mathfrak{t}) + \mathfrak{s}\mathfrak{A}\_{\ast}(\upsilon),$$

and

$$\Psi\_\*\left(\frac{tv}{st+(1-s)v}\right) \le (1-s)\Psi\_\*(t) + s\Psi\_\*(v) \ .$$

From the definition of LR-harmonically convex IV-Fs, it follows that 0 ≤_P A(κ) and 0 ≤_P *Ψ*(κ), so

$$\begin{aligned}
&\mathfrak{A}_*\left(\frac{t\upsilon}{st+(1-s)\upsilon}\right)\times\Psi_*\left(\frac{t\upsilon}{st+(1-s)\upsilon}\right)\\
&\le\big((1-s)\mathfrak{A}_*(t)+s\,\mathfrak{A}_*(\upsilon)\big)\big((1-s)\Psi_*(t)+s\,\Psi_*(\upsilon)\big)\\
&=(1-s)^{2}\,\mathfrak{A}_*(t)\times\Psi_*(t)+s^{2}\,\mathfrak{A}_*(\upsilon)\times\Psi_*(\upsilon)+s(1-s)\,\mathfrak{A}_*(t)\times\Psi_*(\upsilon)+s(1-s)\,\mathfrak{A}_*(\upsilon)\times\Psi_*(t).
\end{aligned}\tag{18}$$

Analogously, we have

$$\begin{aligned}
&\mathfrak{A}_*\left(\frac{t\upsilon}{(1-s)t+s\upsilon}\right)\times\Psi_*\left(\frac{t\upsilon}{(1-s)t+s\upsilon}\right)\\
&\le s^{2}\,\mathfrak{A}_*(t)\times\Psi_*(t)+(1-s)^{2}\,\mathfrak{A}_*(\upsilon)\times\Psi_*(\upsilon)+s(1-s)\,\mathfrak{A}_*(t)\times\Psi_*(\upsilon)+s(1-s)\,\mathfrak{A}_*(\upsilon)\times\Psi_*(t).
\end{aligned}\tag{19}$$


Adding (18) and (19), we have

$$\begin{aligned}
&\mathfrak{A}_*\left(\frac{t\upsilon}{st+(1-s)\upsilon}\right)\times\Psi_*\left(\frac{t\upsilon}{st+(1-s)\upsilon}\right)+\mathfrak{A}_*\left(\frac{t\upsilon}{(1-s)t+s\upsilon}\right)\times\Psi_*\left(\frac{t\upsilon}{(1-s)t+s\upsilon}\right)\\
&\le\left[s^{2}+(1-s)^{2}\right]\left[\mathfrak{A}_*(t)\times\Psi_*(t)+\mathfrak{A}_*(\upsilon)\times\Psi_*(\upsilon)\right]+2s(1-s)\left[\mathfrak{A}_*(\upsilon)\times\Psi_*(t)+\mathfrak{A}_*(t)\times\Psi_*(\upsilon)\right].
\end{aligned}\tag{20}$$

Multiplying (20) by s^{β−1} and integrating the obtained result with respect to s over (0, 1), we have

$$\begin{aligned}
&\int_{0}^{1}s^{\beta-1}\,\mathfrak{A}_*\left(\frac{t\upsilon}{st+(1-s)\upsilon}\right)\times\Psi_*\left(\frac{t\upsilon}{st+(1-s)\upsilon}\right)ds+\int_{0}^{1}s^{\beta-1}\,\mathfrak{A}_*\left(\frac{t\upsilon}{(1-s)t+s\upsilon}\right)\times\Psi_*\left(\frac{t\upsilon}{(1-s)t+s\upsilon}\right)ds\\
&\le\mathfrak{D}_*(t,\upsilon)\int_{0}^{1}s^{\beta-1}\left[s^{2}+(1-s)^{2}\right]ds+2\,\mathcal{Q}_*(t,\upsilon)\int_{0}^{1}s^{\beta-1}s(1-s)\,ds.
\end{aligned}$$

It follows that,

$$\begin{aligned}
&\Gamma(\beta)\left(\frac{t\upsilon}{\upsilon-t}\right)^{\beta}\left[\mathfrak{T}^{\beta}_{(\frac{1}{\upsilon})^{+}}\,\mathfrak{A}_*\circ\Psi\left(\frac{1}{t}\right)\times\Psi_*\circ\Psi\left(\frac{1}{t}\right)+\mathfrak{T}^{\beta}_{(\frac{1}{t})^{-}}\,\mathfrak{A}_*\circ\Psi\left(\frac{1}{\upsilon}\right)\times\Psi_*\circ\Psi\left(\frac{1}{\upsilon}\right)\right]\\
&\le\frac{2}{\beta}\left(\frac{1}{2}-\frac{\beta}{(\beta+1)(\beta+2)}\right)\mathfrak{D}_*(t,\upsilon)+\frac{2}{\beta}\left(\frac{\beta}{(\beta+1)(\beta+2)}\right)\mathcal{Q}_*(t,\upsilon).
\end{aligned}$$
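The two moment integrals evaluated in this step can be spot-checked numerically. The following sketch (ours, not the paper's) uses the substitution s = w^{1/β}, which removes the s^{β−1} singularity, and verifies both closed forms at a few values of β:

```python
# Numerical spot-check (illustrative) of
#   int_0^1 s**(beta-1)*(s**2 + (1-s)**2) ds = (2/beta)*(1/2 - beta/((beta+1)*(beta+2)))
#   int_0^1 s**(beta-1)*s*(1-s)           ds = 1/((beta+1)*(beta+2))
def weighted_int(phi, beta, n=20000):
    # int_0^1 s**(beta-1)*phi(s) ds via the substitution s = w**(1/beta)
    h = 1.0 / n
    return sum(phi(((i + 0.5) * h) ** (1 / beta)) for i in range(n)) * h / beta

for beta in (0.5, 1.0, 2.5):
    lhs1 = weighted_int(lambda s: s**2 + (1 - s)**2, beta)
    rhs1 = (2 / beta) * (0.5 - beta / ((beta + 1) * (beta + 2)))
    lhs2 = weighted_int(lambda s: s * (1 - s), beta)
    rhs2 = 1 / ((beta + 1) * (beta + 2))
    assert abs(lhs1 - rhs1) < 1e-6 and abs(lhs2 - rhs2) < 1e-6
```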

Similarly, for A^*(κ), we have

$$\begin{aligned}
&\Gamma(\beta)\left(\frac{t\upsilon}{\upsilon-t}\right)^{\beta}\left[\mathfrak{T}^{\beta}_{(\frac{1}{\upsilon})^{+}}\,\mathfrak{A}^*\circ\Psi\left(\frac{1}{t}\right)\times\Psi^*\circ\Psi\left(\frac{1}{t}\right)+\mathfrak{T}^{\beta}_{(\frac{1}{t})^{-}}\,\mathfrak{A}^*\circ\Psi\left(\frac{1}{\upsilon}\right)\times\Psi^*\circ\Psi\left(\frac{1}{\upsilon}\right)\right]\\
&\le\frac{2}{\beta}\left(\frac{1}{2}-\frac{\beta}{(\beta+1)(\beta+2)}\right)\mathfrak{D}^*(t,\upsilon)+\frac{2}{\beta}\left(\frac{\beta}{(\beta+1)(\beta+2)}\right)\mathcal{Q}^*(t,\upsilon),
\end{aligned}$$

that is

$$\begin{aligned}
&\Gamma(\beta)\left(\frac{t\upsilon}{\upsilon-t}\right)^{\beta}\left[\mathfrak{T}^{\beta}_{(\frac{1}{\upsilon})^{+}}\,\mathfrak{A}_*\circ\Psi\left(\tfrac{1}{t}\right)\times\Psi_*\circ\Psi\left(\tfrac{1}{t}\right)+\mathfrak{T}^{\beta}_{(\frac{1}{t})^{-}}\,\mathfrak{A}_*\circ\Psi\left(\tfrac{1}{\upsilon}\right)\times\Psi_*\circ\Psi\left(\tfrac{1}{\upsilon}\right),\right.\\
&\qquad\left.\mathfrak{T}^{\beta}_{(\frac{1}{\upsilon})^{+}}\,\mathfrak{A}^*\circ\Psi\left(\tfrac{1}{t}\right)\times\Psi^*\circ\Psi\left(\tfrac{1}{t}\right)+\mathfrak{T}^{\beta}_{(\frac{1}{t})^{-}}\,\mathfrak{A}^*\circ\Psi\left(\tfrac{1}{\upsilon}\right)\times\Psi^*\circ\Psi\left(\tfrac{1}{\upsilon}\right)\right]\\
&\leq_{\mathbb{P}}\frac{2}{\beta}\left(\frac{1}{2}-\frac{\beta}{(\beta+1)(\beta+2)}\right)\left[\mathfrak{D}_*(t,\upsilon),\ \mathfrak{D}^*(t,\upsilon)\right]+\frac{2}{\beta}\left(\frac{\beta}{(\beta+1)(\beta+2)}\right)\left[\mathcal{Q}_*(t,\upsilon),\ \mathcal{Q}^*(t,\upsilon)\right].
\end{aligned}$$

Thus,

$$\frac{\Gamma(\beta+1)}{2}\left(\frac{t\upsilon}{\upsilon-t}\right)^{\beta}\left[\mathfrak{T}^{\beta}_{(\frac{1}{\upsilon})^{+}}\,\mathfrak{A}\circ\Psi\left(\frac{1}{t}\right)\times\Psi\circ\Psi\left(\frac{1}{t}\right)+\mathfrak{T}^{\beta}_{(\frac{1}{t})^{-}}\,\mathfrak{A}\circ\Psi\left(\frac{1}{\upsilon}\right)\times\Psi\circ\Psi\left(\frac{1}{\upsilon}\right)\right] \leq_{\mathbb{P}} \left(\frac{1}{2}-\frac{\beta}{(\beta+1)(\beta+2)}\right)\mathfrak{D}(t,\upsilon)+\left(\frac{\beta}{(\beta+1)(\beta+2)}\right)\mathcal{Q}(t,\upsilon),$$

and the theorem has been established.
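As a toy scalar check of Theorem 5 (our own example, not from the paper), take β = 1, A(x) = x², and a second function B(x) = x on [1, 2]; both are harmonically convex, and at β = 1 each fractional integral reduces to a plain integral over the reciprocal interval:

```python
# Toy check of Theorem 5 at beta = 1 with A(x) = x**2 and B(x) = x on [1, 2].
A = lambda x: x ** 2
B = lambda x: x
t, v, beta = 1.0, 2.0, 1.0
a, b, n = 1 / v, 1 / t, 20000
h = (b - a) / n
# At beta = 1 both fractional integrals equal int_{1/v}^{1/t} A(1/s)*B(1/s) ds.
J = sum(A(1 / (a + (i + 0.5) * h)) * B(1 / (a + (i + 0.5) * h)) for i in range(n)) * h

left = 1.0 / 2 * (t * v / (v - t)) * (2 * J)   # Gamma(2)/2 * (tv/(v-t)) * [both integrals]
D = A(t) * B(t) + A(v) * B(v)
Q = A(t) * B(v) + A(v) * B(t)
c = beta / ((beta + 1) * (beta + 2))           # = 1/6 at beta = 1
right = (0.5 - c) * D + c * Q
assert left <= right                            # here: 3 <= 4
```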

**Theorem 6.** *Let* A, *Ψ* ∈ *LRHSX*([t, *υ*], K_C^+) *be LR-harmonically convex IV-Fs on* [t, *υ*] *such that* A(κ) = [A_*(κ), A^*(κ)] *and Ψ*(κ) = [*Ψ*_*(κ), *Ψ*^*(κ)] *for all* κ ∈ [t, *υ*]*. If* A × *Ψ* ∈ L([t, *υ*], K_C^+) *is IV-RL-fractional integrable over* [t, *υ*]*, then*

$$\begin{aligned}
\mathfrak{A}\left(\frac{2t\upsilon}{t+\upsilon}\right)\times\Psi\left(\frac{2t\upsilon}{t+\upsilon}\right)\leq_{\mathbb{P}}{}&\frac{\Gamma(\beta+1)}{4}\left(\frac{t\upsilon}{\upsilon-t}\right)^{\beta}\left[\mathfrak{T}^{\beta}_{(\frac{1}{\upsilon})^{+}}\,\mathfrak{A}\circ\Psi\left(\frac{1}{t}\right)\times\Psi\circ\Psi\left(\frac{1}{t}\right)+\mathfrak{T}^{\beta}_{(\frac{1}{t})^{-}}\,\mathfrak{A}\circ\Psi\left(\frac{1}{\upsilon}\right)\times\Psi\circ\Psi\left(\frac{1}{\upsilon}\right)\right]\\
&+\frac{1}{2}\left(\frac{1}{2}-\frac{\beta}{(\beta+1)(\beta+2)}\right)\mathcal{Q}(t,\upsilon)+\frac{1}{2}\left(\frac{\beta}{(\beta+1)(\beta+2)}\right)\mathfrak{D}(t,\upsilon),
\end{aligned}$$

*where* D(t, *υ*) = A(t) × *Ψ*(t) + A(*υ*) × *Ψ*(*υ*), Q(t, *υ*) = A(t) × *Ψ*(*υ*) + A(*υ*) × *Ψ*(t), D(t, *υ*) = [D_*(t, *υ*), D^*(t, *υ*)] *and* Q(t, *υ*) = [Q_*(t, *υ*), Q^*(t, *υ*)].

**Proof.** Consider A, *Ψ* ∈ *LRHSX*([t, *υ*], K_C^+). Then, by hypothesis, we have

$$\begin{aligned}
&\mathfrak{A}_*\left(\frac{2t\upsilon}{t+\upsilon}\right)\times\Psi_*\left(\frac{2t\upsilon}{t+\upsilon}\right)\\
&\le\frac{1}{4}\left[\mathfrak{A}_*\left(\frac{t\upsilon}{st+(1-s)\upsilon}\right)\times\Psi_*\left(\frac{t\upsilon}{st+(1-s)\upsilon}\right)+\mathfrak{A}_*\left(\frac{t\upsilon}{st+(1-s)\upsilon}\right)\times\Psi_*\left(\frac{t\upsilon}{(1-s)t+s\upsilon}\right)\right]\\
&\quad+\frac{1}{4}\left[\mathfrak{A}_*\left(\frac{t\upsilon}{(1-s)t+s\upsilon}\right)\times\Psi_*\left(\frac{t\upsilon}{st+(1-s)\upsilon}\right)+\mathfrak{A}_*\left(\frac{t\upsilon}{(1-s)t+s\upsilon}\right)\times\Psi_*\left(\frac{t\upsilon}{(1-s)t+s\upsilon}\right)\right]\\
&\le\frac{1}{4}\left[\mathfrak{A}_*\left(\frac{t\upsilon}{st+(1-s)\upsilon}\right)\times\Psi_*\left(\frac{t\upsilon}{st+(1-s)\upsilon}\right)+\mathfrak{A}_*\left(\frac{t\upsilon}{(1-s)t+s\upsilon}\right)\times\Psi_*\left(\frac{t\upsilon}{(1-s)t+s\upsilon}\right)\right]\\
&\quad+\frac{1}{4}\Big[\big(s\,\mathfrak{A}_*(t)+(1-s)\mathfrak{A}_*(\upsilon)\big)\times\big((1-s)\Psi_*(t)+s\,\Psi_*(\upsilon)\big)+\big((1-s)\mathfrak{A}_*(t)+s\,\mathfrak{A}_*(\upsilon)\big)\times\big(s\,\Psi_*(t)+(1-s)\Psi_*(\upsilon)\big)\Big]\\
&=\frac{1}{4}\left[\mathfrak{A}_*\left(\frac{t\upsilon}{st+(1-s)\upsilon}\right)\times\Psi_*\left(\frac{t\upsilon}{st+(1-s)\upsilon}\right)+\mathfrak{A}_*\left(\frac{t\upsilon}{(1-s)t+s\upsilon}\right)\times\Psi_*\left(\frac{t\upsilon}{(1-s)t+s\upsilon}\right)\right]\\
&\quad+\frac{1}{4}\Big[\left(s^{2}+(1-s)^{2}\right)\mathcal{Q}_*(t,\upsilon)+\big(s(1-s)+(1-s)s\big)\mathfrak{D}_*(t,\upsilon)\Big].
\end{aligned}\tag{21}$$

Multiplying inequality (21) by s^{β−1} and integrating the obtained result over (0, 1), we have

$$\begin{split} \mathbb{E} \leq & \frac{\mathbb{1}}{4} \Big[ \int\_{0}^{1} \mathsf{s}^{\delta-1} \mathfrak{A}\_{\ast} \Big( \frac{\mathsf{tr}\upsilon}{\mathsf{st}+(1-\mathsf{s})\upsilon} \right) \times \mathbb{1}\_{\ast} \Big( \frac{2\upsilon}{\mathsf{st}+\upsilon} \Big) \\ \leq & \frac{1}{4} \Big[ \Big. + \int\_{0}^{1} \mathsf{s}^{\delta-1} \mathfrak{A}\_{\ast} \Big( \frac{\mathsf{tr}\upsilon}{\mathsf{st}+(1-\mathsf{s})\upsilon} \Big) \times \mathbb{1}\_{\ast} \Big( \frac{\mathsf{tr}\upsilon}{(1-\mathsf{s})\mathsf{t}+\upsilon\mathsf{s}} \Big) \Big] \Big. \mathrm{d}s + \left[ \begin{array}{c} \frac{1}{4} \mathscr{Q}\_{\ast}(\mathsf{t},\mathsf{t}\upsilon) \int\_{0}^{1} \mathsf{s}^{\delta-1} \Big[ \mathsf{s}^{2} + (1-\mathsf{s})^{2} \Big] d\mathsf{s} \\ + 2\mathscr{Q}\_{\ast}(\mathsf{t},\upsilon) \int\_{0}^{1} \mathsf{s}^{\delta-1} \mathbf{s}(1-\mathsf{s}) d\mathsf{s} \end{array} \right]. \end{split}$$
 
$$\text{Taking } \mathsf{x} = \frac{\mathsf{tr}}{\mathsf{s}\mathsf{t}+(1-\mathsf{s})\upsilon} \text{ and } \mu = \frac{\mathsf{tr}}{(1-\mathsf{s})\mathsf{t}+\mathsf{s}\mathsf{t}}$$
 
$$\qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad$$

$$\begin{aligned}
\frac{1}{\beta}\,\mathfrak{A}_*\left(\frac{2t\upsilon}{t+\upsilon}\right)\times\Psi_*\left(\frac{2t\upsilon}{t+\upsilon}\right)\le{}&\frac{\Gamma(\beta)}{4}\left(\frac{t\upsilon}{\upsilon-t}\right)^{\beta}\left[\mathfrak{T}^{\beta}_{(\frac{1}{\upsilon})^{+}}\,\mathfrak{A}_*\circ\Psi\left(\frac{1}{t}\right)\times\Psi_*\circ\Psi\left(\frac{1}{t}\right)+\mathfrak{T}^{\beta}_{(\frac{1}{t})^{-}}\,\mathfrak{A}_*\circ\Psi\left(\frac{1}{\upsilon}\right)\times\Psi_*\circ\Psi\left(\frac{1}{\upsilon}\right)\right]\\
&+\frac{1}{2\beta}\left(\frac{1}{2}-\frac{\beta}{(\beta+1)(\beta+2)}\right)\mathcal{Q}_*(t,\upsilon)+\frac{1}{2\beta}\left(\frac{\beta}{(\beta+1)(\beta+2)}\right)\mathfrak{D}_*(t,\upsilon).
\end{aligned}$$

Similarly, for A^*(κ), we have

$$\begin{aligned}
\frac{1}{\beta}\,\mathfrak{A}^*\left(\frac{2t\upsilon}{t+\upsilon}\right)\times\Psi^*\left(\frac{2t\upsilon}{t+\upsilon}\right)\le{}&\frac{\Gamma(\beta)}{4}\left(\frac{t\upsilon}{\upsilon-t}\right)^{\beta}\left[\mathfrak{T}^{\beta}_{(\frac{1}{\upsilon})^{+}}\,\mathfrak{A}^*\circ\Psi\left(\frac{1}{t}\right)\times\Psi^*\circ\Psi\left(\frac{1}{t}\right)+\mathfrak{T}^{\beta}_{(\frac{1}{t})^{-}}\,\mathfrak{A}^*\circ\Psi\left(\frac{1}{\upsilon}\right)\times\Psi^*\circ\Psi\left(\frac{1}{\upsilon}\right)\right]\\
&+\frac{1}{2\beta}\left(\frac{1}{2}-\frac{\beta}{(\beta+1)(\beta+2)}\right)\mathcal{Q}^*(t,\upsilon)+\frac{1}{2\beta}\left(\frac{\beta}{(\beta+1)(\beta+2)}\right)\mathfrak{D}^*(t,\upsilon),
\end{aligned}$$

that is

$$\begin{aligned}
\mathfrak{A}\left(\frac{2t\upsilon}{t+\upsilon}\right)\times\Psi\left(\frac{2t\upsilon}{t+\upsilon}\right)\leq_{\mathbb{P}}{}&\frac{\Gamma(\beta+1)}{4}\left(\frac{t\upsilon}{\upsilon-t}\right)^{\beta}\left[\mathfrak{T}^{\beta}_{(\frac{1}{\upsilon})^{+}}\,\mathfrak{A}\circ\Psi\left(\frac{1}{t}\right)\times\Psi\circ\Psi\left(\frac{1}{t}\right)+\mathfrak{T}^{\beta}_{(\frac{1}{t})^{-}}\,\mathfrak{A}\circ\Psi\left(\frac{1}{\upsilon}\right)\times\Psi\circ\Psi\left(\frac{1}{\upsilon}\right)\right]\\
&+\frac{1}{2}\left(\frac{1}{2}-\frac{\beta}{(\beta+1)(\beta+2)}\right)\mathcal{Q}(t,\upsilon)+\frac{1}{2}\left(\frac{\beta}{(\beta+1)(\beta+2)}\right)\mathfrak{D}(t,\upsilon).
\end{aligned}$$

Hence, the required result.
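Theorem 6 can be exercised with the same toy data (β = 1, A(x) = x², B(x) = x on [1, 2]; an illustration of ours, not from the paper):

```python
# Toy check of Theorem 6 at beta = 1 with A(x) = x**2 and B(x) = x on [1, 2].
A = lambda x: x ** 2
B = lambda x: x
t, v, beta = 1.0, 2.0, 1.0
a, b, n = 1 / v, 1 / t, 20000
h = (b - a) / n
# At beta = 1 both fractional integrals equal int_{1/v}^{1/t} A(1/s)*B(1/s) ds.
J = sum(A(1 / (a + (i + 0.5) * h)) * B(1 / (a + (i + 0.5) * h)) for i in range(n)) * h

mid = 2 * t * v / (t + v)
left = A(mid) * B(mid)                         # (16/9)*(4/3) = 64/27
D = A(t) * B(t) + A(v) * B(v)
Q = A(t) * B(v) + A(v) * B(t)
c = beta / ((beta + 1) * (beta + 2))
right = 1.0 / 4 * (t * v / (v - t)) * (2 * J) + 0.5 * (0.5 - c) * Q + 0.5 * c * D
assert left <= right                           # here: 64/27 <= 3.25
```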

**Theorem 7.** *Let* A, *Ψ* ∈ *LRHSX*([t, *υ*], K_C^+) *be LR-harmonically convex IV-Fs on* [t, *υ*] *such that* A(κ) = [A_*(κ), A^*(κ)] *and Ψ*(κ) = [*Ψ*_*(κ), *Ψ*^*(κ)] *for all* κ ∈ [t, *υ*]*. If* A × *Ψ* ∈ L([t, *υ*], K_C^+) *is IV-RL-fractional integrable over* [t, *υ*]*, then*

$$\begin{aligned}
2\,\mathfrak{A}\left(\frac{2t\upsilon}{t+\upsilon}\right)\times\Psi\left(\frac{2t\upsilon}{t+\upsilon}\right)\leq_{\mathbb{P}}{}&\frac{\Gamma(\beta+1)}{2^{1-\beta}}\left(\frac{t\upsilon}{\upsilon-t}\right)^{\beta}\left[\mathfrak{T}^{\beta}_{(\frac{t+\upsilon}{2t\upsilon})^{+}}\,\mathfrak{A}\circ\Psi\left(\frac{1}{t}\right)\times\Psi\circ\Psi\left(\frac{1}{t}\right)+\mathfrak{T}^{\beta}_{(\frac{t+\upsilon}{2t\upsilon})^{-}}\,\mathfrak{A}\circ\Psi\left(\frac{1}{\upsilon}\right)\times\Psi\circ\Psi\left(\frac{1}{\upsilon}\right)\right]\\
&+\left(\frac{1}{2}-\frac{\beta^{2}+3\beta}{4(\beta+1)(\beta+2)}\right)\mathcal{Q}(t,\upsilon)+\frac{\beta^{2}+3\beta}{4(\beta+1)(\beta+2)}\,\mathfrak{D}(t,\upsilon),
\end{aligned}$$

*where* D(t, *υ*) = A(t) × *Ψ*(t) + A(*υ*) × *Ψ*(*υ*), Q(t, *υ*) = A(t) × *Ψ*(*υ*) + A(*υ*) × *Ψ*(t), D(t, *υ*) = [D_*(t, *υ*), D^*(t, *υ*)] *and* Q(t, *υ*) = [Q_*(t, *υ*), Q^*(t, *υ*)].

**Proof.** Consider A, *Ψ* ∈ *LRHSX*([t, *υ*], K_C^+). Then, by hypothesis, we have

Proceeding exactly as in (21),

$$\begin{aligned}
&\mathfrak{A}_*\left(\frac{2t\upsilon}{t+\upsilon}\right)\times\Psi_*\left(\frac{2t\upsilon}{t+\upsilon}\right)\\
&\le\frac{1}{4}\left[\mathfrak{A}_*\left(\frac{t\upsilon}{st+(1-s)\upsilon}\right)\times\Psi_*\left(\frac{t\upsilon}{st+(1-s)\upsilon}\right)+\mathfrak{A}_*\left(\frac{t\upsilon}{(1-s)t+s\upsilon}\right)\times\Psi_*\left(\frac{t\upsilon}{(1-s)t+s\upsilon}\right)\right]\\
&\quad+\frac{1}{4}\Big[\left(s^{2}+(1-s)^{2}\right)\mathcal{Q}_*(t,\upsilon)+2s(1-s)\,\mathfrak{D}_*(t,\upsilon)\Big].
\end{aligned}\tag{22}$$

Multiplying inequality (22) by 2^{1+β}βs^{β−1} and then integrating the obtained result over (0, 1/2), we have

$$\begin{aligned}
&2\,\mathfrak{A}_*\left(\frac{2t\upsilon}{t+\upsilon}\right)\times\Psi_*\left(\frac{2t\upsilon}{t+\upsilon}\right)\\
&\le\frac{1}{4}\int_{0}^{\frac{1}{2}}2^{1+\beta}\beta\,s^{\beta-1}\left[\mathfrak{A}_*\left(\frac{t\upsilon}{st+(1-s)\upsilon}\right)\times\Psi_*\left(\frac{t\upsilon}{st+(1-s)\upsilon}\right)+\mathfrak{A}_*\left(\frac{t\upsilon}{(1-s)t+s\upsilon}\right)\times\Psi_*\left(\frac{t\upsilon}{(1-s)t+s\upsilon}\right)\right]ds\\
&\quad+\frac{1}{4}\left[\mathcal{Q}_*(t,\upsilon)\int_{0}^{\frac{1}{2}}2^{1+\beta}\beta\,s^{\beta-1}\left[s^{2}+(1-s)^{2}\right]ds+2\,\mathfrak{D}_*(t,\upsilon)\int_{0}^{\frac{1}{2}}2^{1+\beta}\beta\,s^{\beta-1}s(1-s)\,ds\right].
\end{aligned}$$

Taking $\varkappa=\frac{t\upsilon}{st+(1-s)\upsilon}$ and $\mu=\frac{t\upsilon}{(1-s)t+s\upsilon}$, then, we get

$$\begin{aligned}
2\,\mathfrak{A}_*\left(\frac{2t\upsilon}{t+\upsilon}\right)\times\Psi_*\left(\frac{2t\upsilon}{t+\upsilon}\right)\le{}&\frac{\Gamma(\beta+1)}{2^{1-\beta}}\left(\frac{t\upsilon}{\upsilon-t}\right)^{\beta}\left[\mathfrak{T}^{\beta}_{(\frac{t+\upsilon}{2t\upsilon})^{+}}\,\mathfrak{A}_*\circ\Psi\left(\frac{1}{t}\right)\times\Psi_*\circ\Psi\left(\frac{1}{t}\right)+\mathfrak{T}^{\beta}_{(\frac{t+\upsilon}{2t\upsilon})^{-}}\,\mathfrak{A}_*\circ\Psi\left(\frac{1}{\upsilon}\right)\times\Psi_*\circ\Psi\left(\frac{1}{\upsilon}\right)\right]\\
&+\left(\frac{1}{2}-\frac{\beta^{2}+3\beta}{4(\beta+1)(\beta+2)}\right)\mathcal{Q}_*(t,\upsilon)+\frac{\beta^{2}+3\beta}{4(\beta+1)(\beta+2)}\,\mathfrak{D}_*(t,\upsilon).
\end{aligned}\tag{23}$$

Similarly, for A^*(κ), we have

$$\begin{aligned}
2\,\mathfrak{A}^*\left(\frac{2t\upsilon}{t+\upsilon}\right)\times\Psi^*\left(\frac{2t\upsilon}{t+\upsilon}\right)\le{}&\frac{\Gamma(\beta+1)}{2^{1-\beta}}\left(\frac{t\upsilon}{\upsilon-t}\right)^{\beta}\left[\mathfrak{T}^{\beta}_{(\frac{t+\upsilon}{2t\upsilon})^{+}}\,\mathfrak{A}^*\circ\Psi\left(\frac{1}{t}\right)\times\Psi^*\circ\Psi\left(\frac{1}{t}\right)+\mathfrak{T}^{\beta}_{(\frac{t+\upsilon}{2t\upsilon})^{-}}\,\mathfrak{A}^*\circ\Psi\left(\frac{1}{\upsilon}\right)\times\Psi^*\circ\Psi\left(\frac{1}{\upsilon}\right)\right]\\
&+\left(\frac{1}{2}-\frac{\beta^{2}+3\beta}{4(\beta+1)(\beta+2)}\right)\mathcal{Q}^*(t,\upsilon)+\frac{\beta^{2}+3\beta}{4(\beta+1)(\beta+2)}\,\mathfrak{D}^*(t,\upsilon).
\end{aligned}\tag{24}$$

From (23) and (24), we have

$$\begin{aligned}
2\,\mathfrak{A}\left(\frac{2t\upsilon}{t+\upsilon}\right)\times\Psi\left(\frac{2t\upsilon}{t+\upsilon}\right)\leq_{\mathbb{P}}{}&\frac{\Gamma(\beta+1)}{2^{1-\beta}}\left(\frac{t\upsilon}{\upsilon-t}\right)^{\beta}\left[\mathfrak{T}^{\beta}_{(\frac{t+\upsilon}{2t\upsilon})^{+}}\,\mathfrak{A}\circ\Psi\left(\frac{1}{t}\right)\times\Psi\circ\Psi\left(\frac{1}{t}\right)+\mathfrak{T}^{\beta}_{(\frac{t+\upsilon}{2t\upsilon})^{-}}\,\mathfrak{A}\circ\Psi\left(\frac{1}{\upsilon}\right)\times\Psi\circ\Psi\left(\frac{1}{\upsilon}\right)\right]\\
&+\left(\frac{1}{2}-\frac{\beta^{2}+3\beta}{4(\beta+1)(\beta+2)}\right)\mathcal{Q}(t,\upsilon)+\frac{\beta^{2}+3\beta}{4(\beta+1)(\beta+2)}\,\mathfrak{D}(t,\upsilon).
\end{aligned}$$

Hence, the required result.
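The half-interval moment integrals behind the (β² + 3β)/(4(β + 1)(β + 2)) coefficients can likewise be spot-checked numerically (our sketch; the substitution s = ½w^{1/β} removes the s^{β−1} singularity):

```python
# Numerical spot-check (illustrative) of the half-interval moments used above:
#   int_0^{1/2} 2**(1+b)*b * s**(b-1)*(s**2+(1-s)**2) ds = 2 - (b**2+3b)/((b+1)(b+2))
#   int_0^{1/2} 2**(1+b)*b * s**(b-1)*s*(1-s)         ds = (b**2+3b)/(2(b+1)(b+2))
def half_weighted_int(phi, beta, n=20000):
    # int_0^{1/2} s**(beta-1)*phi(s) ds via s = 0.5 * w**(1/beta)
    h = 1.0 / n
    return sum(phi(0.5 * ((i + 0.5) * h) ** (1 / beta)) for i in range(n)) * h * 0.5 ** beta / beta

for beta in (0.5, 1.0, 3.0):
    k = 2 ** (1 + beta) * beta
    lhs1 = k * half_weighted_int(lambda s: s**2 + (1 - s)**2, beta)
    lhs2 = k * half_weighted_int(lambda s: s * (1 - s), beta)
    target = (beta**2 + 3 * beta) / ((beta + 1) * (beta + 2))
    assert abs(lhs1 - (2 - target)) < 1e-6
    assert abs(lhs2 - target / 2) < 1e-6
```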

Now, we present the IV-RL-fractional integral H-H Fejér inequality for LR-harmonically convex IV-Fs.

**Theorem 8.** *Let* A ∈ *LRHSX*([t, *υ*], K_C^+) *be an LR-harmonically convex IV-F on* [t, *υ*] *such that* A(κ) = [A_*(κ), A^*(κ)] *for all* κ ∈ [t, *υ*]*, and let* A ∈ L([t, *υ*], K_C^+) *be IV-RL-fractional integrable over* [t, *υ*]*. If* D : [t, *υ*] → R *satisfies* D(1/(1/t + 1/*υ* − 1/κ)) = D(κ) ≥ 0*, then*

$$\begin{aligned}
\mathfrak{A}\left(\frac{2t\upsilon}{t+\upsilon}\right)\left[\mathfrak{T}^{\beta}_{(\frac{1}{\upsilon})^{+}}(\mathfrak{D}\circ\Psi)\left(\frac{1}{t}\right)+\mathfrak{T}^{\beta}_{(\frac{1}{t})^{-}}(\mathfrak{D}\circ\Psi)\left(\frac{1}{\upsilon}\right)\right]&\leq_{\mathbb{P}}\left[\mathfrak{T}^{\beta}_{(\frac{1}{\upsilon})^{+}}(\mathfrak{A}\mathfrak{D}\circ\Psi)\left(\frac{1}{t}\right)+\mathfrak{T}^{\beta}_{(\frac{1}{t})^{-}}(\mathfrak{A}\mathfrak{D}\circ\Psi)\left(\frac{1}{\upsilon}\right)\right]\\
&\leq_{\mathbb{P}}\frac{\mathfrak{A}(t)+\mathfrak{A}(\upsilon)}{2}\left[\mathfrak{T}^{\beta}_{(\frac{1}{\upsilon})^{+}}(\mathfrak{D}\circ\Psi)\left(\frac{1}{t}\right)+\mathfrak{T}^{\beta}_{(\frac{1}{t})^{-}}(\mathfrak{D}\circ\Psi)\left(\frac{1}{\upsilon}\right)\right].
\end{aligned}\tag{25}$$

*If* A *is an LR-harmonically concave IV-F, then inequality (25) is reversed.*

**Proof.** Since A ∈ *LRHSX*([t, *υ*], K_C^+), we have

$$\mathfrak{A}_*\left(\frac{2t\upsilon}{t+\upsilon}\right) \le \frac{1}{2}\left(\mathfrak{A}_*\left(\frac{t\upsilon}{st+(1-s)\upsilon}\right)+\mathfrak{A}_*\left(\frac{t\upsilon}{(1-s)t+s\upsilon}\right)\right).\tag{26}$$

Multiplying both sides of (26) by s^{β−1}D(tυ/((1−s)t+sυ)) and then integrating the resultant with respect to s over [0, 1], we obtain

$$\mathfrak{A}_*\left(\frac{2t\upsilon}{t+\upsilon}\right)\int_{0}^{1}s^{\beta-1}\mathfrak{D}\left(\frac{t\upsilon}{(1-s)t+s\upsilon}\right)ds \le \frac{1}{2}\left(\int_{0}^{1}s^{\beta-1}\mathfrak{A}_*\left(\frac{t\upsilon}{st+(1-s)\upsilon}\right)\mathfrak{D}\left(\frac{t\upsilon}{(1-s)t+s\upsilon}\right)ds+\int_{0}^{1}s^{\beta-1}\mathfrak{A}_*\left(\frac{t\upsilon}{(1-s)t+s\upsilon}\right)\mathfrak{D}\left(\frac{t\upsilon}{(1-s)t+s\upsilon}\right)ds\right).$$

Let κ = <sup>t</sup>*<sup>υ</sup>* st+(1−s)*<sup>υ</sup>* . Then, we have

$$\begin{aligned}
&2\left(\frac{t\upsilon}{\upsilon-t}\right)^{\beta}\mathfrak{A}_*\left(\frac{2t\upsilon}{t+\upsilon}\right)\int_{\frac{1}{\upsilon}}^{\frac{1}{t}}\left(\varkappa-\frac{1}{\upsilon}\right)^{\beta-1}\mathfrak{D}\left(\frac{1}{\varkappa}\right)d\varkappa\\
&\le\left(\frac{t\upsilon}{\upsilon-t}\right)^{\beta}\int_{\frac{1}{\upsilon}}^{\frac{1}{t}}\left(\varkappa-\frac{1}{\upsilon}\right)^{\beta-1}\mathfrak{A}_*\left(\frac{1}{\frac{1}{t}+\frac{1}{\upsilon}-\varkappa}\right)\mathfrak{D}\left(\frac{1}{\varkappa}\right)d\varkappa+\left(\frac{t\upsilon}{\upsilon-t}\right)^{\beta}\int_{\frac{1}{\upsilon}}^{\frac{1}{t}}\left(\varkappa-\frac{1}{\upsilon}\right)^{\beta-1}\mathfrak{A}_*\left(\frac{1}{\varkappa}\right)\mathfrak{D}\left(\frac{1}{\varkappa}\right)d\varkappa\\
&=\left(\frac{t\upsilon}{\upsilon-t}\right)^{\beta}\int_{\frac{1}{\upsilon}}^{\frac{1}{t}}\left(\frac{1}{t}-\varkappa\right)^{\beta-1}\mathfrak{A}_*\left(\frac{1}{\varkappa}\right)\mathfrak{D}\left(\frac{1}{\varkappa}\right)d\varkappa+\left(\frac{t\upsilon}{\upsilon-t}\right)^{\beta}\int_{\frac{1}{\upsilon}}^{\frac{1}{t}}\left(\varkappa-\frac{1}{\upsilon}\right)^{\beta-1}\mathfrak{A}_*\left(\frac{1}{\varkappa}\right)\mathfrak{D}\left(\frac{1}{\varkappa}\right)d\varkappa\\
&=\Gamma(\beta)\left(\frac{t\upsilon}{\upsilon-t}\right)^{\beta}\left[\mathfrak{T}^{\beta}_{(\frac{1}{\upsilon})^{+}}(\mathfrak{A}_*\mathfrak{D}\circ\Psi)\left(\frac{1}{t}\right)+\mathfrak{T}^{\beta}_{(\frac{1}{t})^{-}}(\mathfrak{A}_*\mathfrak{D}\circ\Psi)\left(\frac{1}{\upsilon}\right)\right],
\end{aligned}\tag{28}$$

where the change of variable κ → 1/t + 1/υ − κ and the symmetry of D have been used in the first integral.

Similarly, for $\mathfrak{A}^{*}(\varkappa)$, we have

$$2\left(\frac{t\upsilon}{\upsilon-t}\right)^{\beta}\mathfrak{A}^{*}\left(\frac{2t\upsilon}{t+\upsilon}\right)\int_{\frac{1}{\upsilon}}^{\frac{1}{t}}\left(\varkappa-\frac{1}{\upsilon}\right)^{\beta-1}\mathfrak{D}\left(\frac{1}{\varkappa}\right)d\varkappa \leq \Gamma(\beta)\left(\frac{t\upsilon}{\upsilon-t}\right)^{\beta}\left[\mathfrak{T}^{\beta}_{\left(\frac{1}{\upsilon}\right)^{+}}\mathfrak{A}^{*}\mathfrak{D}\left(\frac{1}{t}\right)+\mathfrak{T}^{\beta}_{\left(\frac{1}{t}\right)^{-}}\mathfrak{A}^{*}\mathfrak{D}\left(\frac{1}{\upsilon}\right)\right]. \tag{29}$$
From (28) and (29), we have

$$\Gamma(\beta)\left(\frac{t\upsilon}{\upsilon-t}\right)^{\beta}\left[\mathfrak{A}_{*}\left(\frac{2t\upsilon}{t+\upsilon}\right),\ \mathfrak{A}^{*}\left(\frac{2t\upsilon}{t+\upsilon}\right)\right]\left[\mathfrak{T}^{\beta}_{\left(\frac{1}{\upsilon}\right)^{+}}\mathfrak{D}\left(\frac{1}{t}\right)+\mathfrak{T}^{\beta}_{\left(\frac{1}{t}\right)^{-}}\mathfrak{D}\left(\frac{1}{\upsilon}\right)\right]$$
$$\leq_{p}\Gamma(\beta)\left(\frac{t\upsilon}{\upsilon-t}\right)^{\beta}\left[\mathfrak{T}^{\beta}_{\left(\frac{1}{\upsilon}\right)^{+}}\mathfrak{A}_{*}\mathfrak{D}\left(\frac{1}{t}\right)+\mathfrak{T}^{\beta}_{\left(\frac{1}{t}\right)^{-}}\mathfrak{A}_{*}\mathfrak{D}\left(\frac{1}{\upsilon}\right),\ \mathfrak{T}^{\beta}_{\left(\frac{1}{\upsilon}\right)^{+}}\mathfrak{A}^{*}\mathfrak{D}\left(\frac{1}{t}\right)+\mathfrak{T}^{\beta}_{\left(\frac{1}{t}\right)^{-}}\mathfrak{A}^{*}\mathfrak{D}\left(\frac{1}{\upsilon}\right)\right],$$

that is,

$$\mathfrak{A}\left(\frac{2t\upsilon}{t+\upsilon}\right)\left[\mathfrak{T}^{\beta}_{\left(\frac{1}{\upsilon}\right)^{+}}(\mathfrak{D}\circ\Psi)\left(\frac{1}{t}\right)+\mathfrak{T}^{\beta}_{\left(\frac{1}{t}\right)^{-}}(\mathfrak{D}\circ\Psi)\left(\frac{1}{\upsilon}\right)\right] \leq_{p} \left[\mathfrak{T}^{\beta}_{\left(\frac{1}{\upsilon}\right)^{+}}(\mathfrak{A}\mathfrak{D}\circ\Psi)\left(\frac{1}{t}\right)+\mathfrak{T}^{\beta}_{\left(\frac{1}{t}\right)^{-}}(\mathfrak{A}\mathfrak{D}\circ\Psi)\left(\frac{1}{\upsilon}\right)\right].\tag{30}$$

Similarly, if $\mathfrak{A}$ is an LR-ऒ-convex *IV-F* and $\mathfrak{s}^{\beta-1}\mathfrak{D}\left(\frac{t\upsilon}{\mathfrak{s}t+(1-\mathfrak{s})\upsilon}\right) \geq 0$, then we have

$$\mathfrak{s}^{\beta-1}\mathfrak{A}_{*}\left(\frac{t\upsilon}{\mathfrak{s}t+(1-\mathfrak{s})\upsilon}\right)\mathfrak{D}\left(\frac{t\upsilon}{\mathfrak{s}t+(1-\mathfrak{s})\upsilon}\right) \leq \mathfrak{s}^{\beta-1}\big((1-\mathfrak{s})\mathfrak{A}_{*}(t)+\mathfrak{s}\mathfrak{A}_{*}(\upsilon)\big)\mathfrak{D}\left(\frac{t\upsilon}{\mathfrak{s}t+(1-\mathfrak{s})\upsilon}\right) \tag{31}$$

and

$$\mathfrak{s}^{\beta-1}\mathfrak{A}_{*}\left(\frac{t\upsilon}{(1-\mathfrak{s})t+\mathfrak{s}\upsilon}\right)\mathfrak{D}\left(\frac{t\upsilon}{\mathfrak{s}t+(1-\mathfrak{s})\upsilon}\right)\leq\mathfrak{s}^{\beta-1}\big(\mathfrak{s}\mathfrak{A}_{*}(t)+(1-\mathfrak{s})\mathfrak{A}_{*}(\upsilon)\big)\mathfrak{D}\left(\frac{t\upsilon}{\mathfrak{s}t+(1-\mathfrak{s})\upsilon}\right).\tag{32}$$

Adding (31) and (32) and integrating the result over [0, 1], we obtain

$$\begin{aligned}
&\int_{0}^{1}\mathfrak{s}^{\beta-1}\mathfrak{A}_{*}\left(\frac{t\upsilon}{\mathfrak{s}t+(1-\mathfrak{s})\upsilon}\right)\mathfrak{D}\left(\frac{t\upsilon}{\mathfrak{s}t+(1-\mathfrak{s})\upsilon}\right)d\mathfrak{s}+\int_{0}^{1}\mathfrak{s}^{\beta-1}\mathfrak{A}_{*}\left(\frac{t\upsilon}{(1-\mathfrak{s})t+\mathfrak{s}\upsilon}\right)\mathfrak{D}\left(\frac{t\upsilon}{\mathfrak{s}t+(1-\mathfrak{s})\upsilon}\right)d\mathfrak{s}\\
&\leq\int_{0}^{1}\left[\mathfrak{s}^{\beta-1}\mathfrak{A}_{*}(t)\{\mathfrak{s}+(1-\mathfrak{s})\}\mathfrak{D}\left(\frac{t\upsilon}{\mathfrak{s}t+(1-\mathfrak{s})\upsilon}\right)+\mathfrak{s}^{\beta-1}\mathfrak{A}_{*}(\upsilon)\{(1-\mathfrak{s})+\mathfrak{s}\}\mathfrak{D}\left(\frac{t\upsilon}{\mathfrak{s}t+(1-\mathfrak{s})\upsilon}\right)\right]d\mathfrak{s}\\
&=\mathfrak{A}_{*}(t)\int_{0}^{1}\mathfrak{s}^{\beta-1}\mathfrak{D}\left(\frac{t\upsilon}{\mathfrak{s}t+(1-\mathfrak{s})\upsilon}\right)d\mathfrak{s}+\mathfrak{A}_{*}(\upsilon)\int_{0}^{1}\mathfrak{s}^{\beta-1}\mathfrak{D}\left(\frac{t\upsilon}{\mathfrak{s}t+(1-\mathfrak{s})\upsilon}\right)d\mathfrak{s}.
\end{aligned}$$

Similarly, for $\mathfrak{A}^{*}(\varkappa)$, we have

$$\begin{aligned}
&\int_{0}^{1}\mathfrak{s}^{\beta-1}\mathfrak{A}^{*}\left(\frac{t\upsilon}{\mathfrak{s}t+(1-\mathfrak{s})\upsilon}\right)\mathfrak{D}\left(\frac{t\upsilon}{\mathfrak{s}t+(1-\mathfrak{s})\upsilon}\right)d\mathfrak{s}+\int_{0}^{1}\mathfrak{s}^{\beta-1}\mathfrak{A}^{*}\left(\frac{t\upsilon}{(1-\mathfrak{s})t+\mathfrak{s}\upsilon}\right)\mathfrak{D}\left(\frac{t\upsilon}{\mathfrak{s}t+(1-\mathfrak{s})\upsilon}\right)d\mathfrak{s}\\
&\leq\mathfrak{A}^{*}(t)\int_{0}^{1}\mathfrak{s}^{\beta-1}\mathfrak{D}\left(\frac{t\upsilon}{\mathfrak{s}t+(1-\mathfrak{s})\upsilon}\right)d\mathfrak{s}+\mathfrak{A}^{*}(\upsilon)\int_{0}^{1}\mathfrak{s}^{\beta-1}\mathfrak{D}\left(\frac{t\upsilon}{\mathfrak{s}t+(1-\mathfrak{s})\upsilon}\right)d\mathfrak{s}.
\end{aligned}$$

From this, we have

$$\Gamma(\beta)\left(\frac{t\upsilon}{\upsilon-t}\right)^{\beta}\left[\mathfrak{T}^{\beta}_{\left(\frac{1}{\upsilon}\right)^{+}}(\mathfrak{A}\mathfrak{D}\circ\Psi)\left(\frac{1}{t}\right)+\mathfrak{T}^{\beta}_{\left(\frac{1}{t}\right)^{-}}(\mathfrak{A}\mathfrak{D}\circ\Psi)\left(\frac{1}{\upsilon}\right)\right] \leq_{p} \Gamma(\beta)\left(\frac{t\upsilon}{\upsilon-t}\right)^{\beta}\frac{\mathfrak{A}(t)+\mathfrak{A}(\upsilon)}{2}\left[\mathfrak{T}^{\beta}_{\left(\frac{1}{\upsilon}\right)^{+}}(\mathfrak{D}\circ\Psi)\left(\frac{1}{t}\right)+\mathfrak{T}^{\beta}_{\left(\frac{1}{t}\right)^{-}}(\mathfrak{D}\circ\Psi)\left(\frac{1}{\upsilon}\right)\right],$$

that is

$$\left[\mathfrak{T}^{\beta}_{\left(\frac{1}{\upsilon}\right)^{+}}(\mathfrak{A}\mathfrak{D}\circ\Psi)\left(\frac{1}{t}\right)+\mathfrak{T}^{\beta}_{\left(\frac{1}{t}\right)^{-}}(\mathfrak{A}\mathfrak{D}\circ\Psi)\left(\frac{1}{\upsilon}\right)\right] \leq_{p} \frac{\mathfrak{A}(t)+\mathfrak{A}(\upsilon)}{2}\left[\mathfrak{T}^{\beta}_{\left(\frac{1}{\upsilon}\right)^{+}}(\mathfrak{D}\circ\Psi)\left(\frac{1}{t}\right)+\mathfrak{T}^{\beta}_{\left(\frac{1}{t}\right)^{-}}(\mathfrak{D}\circ\Psi)\left(\frac{1}{\upsilon}\right)\right].\tag{33}$$
 

By combining (30) and (33), we obtain the required inequality (25). □

**Remark 5.** *If we take β* = 1*, then, from (25), we acquire the following inequality, which is also a new one:*

$$\mathfrak{A}\left(\frac{2tv}{t+v}\right)\int\_{t}^{v}\frac{\mathfrak{D}(\varkappa)}{\varkappa^{2}}d\varkappa \leq\_{\mathbb{P}}\int\_{t}^{v}\frac{\mathfrak{A}(\varkappa)}{\varkappa^{2}}\mathfrak{D}(\varkappa)d\varkappa \leq\_{\mathbb{P}}\frac{\mathfrak{A}(t)+\mathfrak{A}(v)}{2}\int\_{t}^{v}\frac{\mathfrak{D}(\varkappa)}{\varkappa^{2}}d\varkappa$$

*If we take* D(κ) = 1*, then, from (25), we obtain inequality (11).*

*If we take* D(κ) = 1 *and β* = 1*, then, from (25), we obtain the H-H inequality for LR-*ऒ*-convex IV-Fs:*

$$
\mathfrak{A}\left(\frac{2tv}{t+v}\right) \leq\_{\mathbb{P}} \frac{tv}{v-t} \int\_{t}^{v} \frac{\mathfrak{A}(\varkappa)}{\varkappa^{2}} d\varkappa \leq\_{\mathbb{P}} \frac{\mathfrak{A}(\mathfrak{t}) + \mathfrak{A}(v)}{2}.
$$

*If one attempts to take* A∗(κ) = A∗(κ)*, then, from (40), we acquire the fractional H*-*H Fejér inequality, see* [31].

*If we take* $\mathfrak{A}_{*}(\varkappa) = \mathfrak{A}^{*}(\varkappa)$ *with β* = 1*, then, from (25), we achieve the following inequality; see* [3].

$$\mathfrak{A}\left(\frac{2tv}{t+v}\right)\int\_{t}^{v}\frac{\mathfrak{D}(\varkappa)}{\varkappa^{2}}d\varkappa \leq \int\_{t}^{v}\frac{\mathfrak{A}(\varkappa)}{\varkappa^{2}}\mathfrak{D}(\varkappa)d\varkappa \leq \frac{\mathfrak{A}(t)+\mathfrak{A}(v)}{2}\int\_{t}^{v}\frac{\mathfrak{D}(\varkappa)}{\varkappa^{2}}d\varkappa.$$

*If we take* $\mathfrak{A}_{*}(\varkappa) = \mathfrak{A}^{*}(\varkappa)$ *with* D(κ) = 1*, then, from (25), we acquire the following fractional inequality for* ऒ*-convex functions:*

$$\mathfrak{A}\left(\frac{2t\upsilon}{t+\upsilon}\right) \leq \frac{\Gamma(\beta+1)}{2}\left(\frac{t\upsilon}{\upsilon-t}\right)^{\beta}\left[\mathfrak{T}^{\beta}_{\left(\frac{1}{t}\right)^{-}}(\mathfrak{A}\circ\Psi)\left(\frac{1}{\upsilon}\right) + \mathfrak{T}^{\beta}_{\left(\frac{1}{\upsilon}\right)^{+}}(\mathfrak{A}\circ\Psi)\left(\frac{1}{t}\right)\right] \leq \frac{\mathfrak{A}(t)+\mathfrak{A}(\upsilon)}{2}.$$

*If we take* $\mathfrak{A}_{*}(\varkappa) = \mathfrak{A}^{*}(\varkappa)$ *and* D(κ) = *β* = 1*, then, from (25), we acquire the following classical inequality for* ऒ*-convex functions:*

$$\mathfrak{A}\left(\frac{2\mathfrak{t}v}{\mathfrak{t}+\upsilon}\right) \le \frac{\mathfrak{t}v}{\upsilon-\mathfrak{t}} \int\_{\mathfrak{t}}^{\upsilon} \frac{\mathfrak{A}(\varkappa)}{\varkappa^{2}} d\varkappa \le \frac{\mathfrak{A}(\mathfrak{t})+\mathfrak{A}(\upsilon)}{2}.$$
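This last β = 1, 𝔇(κ) = 1 case is easy to verify numerically. The sketch below is illustrative and not from the paper: the test function f(κ) = κ² (convex and nondecreasing on [1, 2], hence harmonically convex there), the interval [1, 2], and the helper name `harmonic_hh_check` are our own choices. It evaluates the three members of the inequality, approximating the integral with a midpoint rule.

```python
import math

def harmonic_hh_check(f, t, v, n=200000):
    """Return the three members of the harmonic H-H inequality
    f(2tv/(t+v)) <= (tv/(v-t)) * integral_t^v f(k)/k^2 dk <= (f(t)+f(v))/2,
    with the integral evaluated by the midpoint rule on n subintervals."""
    h = (v - t) / n
    integral = sum(f(t + (i + 0.5) * h) / (t + (i + 0.5) * h) ** 2 for i in range(n)) * h
    left = f(2 * t * v / (t + v))            # value at the harmonic mean
    middle = t * v / (v - t) * integral       # the integral mean term
    right = (f(t) + f(v)) / 2                 # the endpoint average
    return left, middle, right

left, middle, right = harmonic_hh_check(lambda k: k * k, 1.0, 2.0)
assert left <= middle <= right                # 16/9 <= 2 <= 5/2
```

For f(κ) = κ² on [1, 2], the three members come out as 16/9 ≈ 1.778, 2, and 2.5, so the chain of inequalities holds strictly.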

#### **4. Conclusions**

In this paper, we used *IV-RL*-fractional integral operators to derive various inclusions of the *H-H* and *H-H*-Fejér type, along with some related inequalities. We related the examined results to previously published ones to demonstrate their generality. In addition, some nontrivial examples were given to demonstrate the accuracy of the results derived in the study. The point we wish to make here is that interval-valued analysis is commonly used in applied mathematics, particularly in the field of optimality analysis (see [22,23]). This important subject in interval-valued analysis using fractional integral operators deserves to be explored further.

In our final view, we believe that our work can be generalized to other models of fractional calculus, such as the Atangana–Baleanu and Prabhakar fractional operators with Mittag–Leffler functions in their kernels. We leave this consideration as an open problem for researchers who are interested in this field; they can proceed as done in references [15,16].

**Author Contributions:** Conceptualization, M.B.K.; methodology, M.B.K.; validation, J.E.M.-D., M.S.S., S.T. and H.G.Z.; formal analysis, M.B.K.; investigation, M.S.S.; resources, J.E.M.-D.; data curation, H.G.Z.; writing—original draft preparation, M.B.K. and H.G.Z.; writing—review and editing, M.B.K. and J.E.M.-D.; visualization, H.G.Z.; supervision, M.B.K. and M.S.S.; project administration, M.B.K.; funding acquisition, S.T.; M.S.S. and H.G.Z. All authors have read and agreed to the published version of the manuscript.

**Funding:** This work was funded by Taif University Researchers Supporting Project number (TURSP-2020/345), Taif University, Taif, Saudi Arabia and this work was also supported by the National Council of Science and Technology of Mexico (CONACYT) through grant A1-S-45928.

**Data Availability Statement:** Not Applicable.

**Acknowledgments:** The authors would like to thank the Rector of COMSATS University Islamabad, Islamabad, Pakistan, for providing excellent research facilities.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


### *Article* **Non-Dominated Sorting Manta Ray Foraging Optimization for Multi-Objective Optimal Power Flow with Wind/Solar/Small-Hydro Energy Sources**

**Fatima Daqaq 1,2,\*, Salah Kamel 3, Mohammed Ouassaid 2, Rachid Ellaia 1,2 and Ahmed M. Agwa 4,5,\***


**Abstract:** This study describes a novel manta ray foraging optimization approach based on a non-dominated sorting strategy, namely NSMRFO, for solving multi-objective optimization problems (MOPs). The proposed optimizer can efficiently achieve good convergence and distribution in both the search and objective spaces. The NSMRFO algorithm follows the elitist non-dominated sorting mechanism. Afterwards, a crowding distance with a non-dominated ranking method is integrated for the purpose of archiving the Pareto front and improving the coverage of the optimal solutions. To judge the NSMRFO performance, a bunch of test functions are carried out, including classical unconstrained and constrained functions; a recent benchmark suite from the Congress on Evolutionary Computation 2020 (CEC2020) that contains twenty-four multimodal optimization problems (MMOPs); some engineering design problems; and the modified real-world issue known as the IEEE 30-bus optimal power flow involving wind/solar/small-hydro power generation. Comparison findings with multimodal multi-objective evolutionary algorithms (MMMOEAs) and other existing multi-objective approaches with respect to performance indicators reveal the ability of NSMRFO to balance coverage and convergence towards the true Pareto front (PF) and Pareto optimal sets (PSs). Thus, while the competing algorithms fail to provide better solutions, the proposed NSMRFO optimizer is able to attain almost all of the Pareto optimal solutions.

**Keywords:** multimodal multi-objective optimization; manta ray foraging optimizer; non-dominated solution; crowding distance; engineering design problem; optimal power flow; renewable energy sources

#### **1. Introduction**

Nowadays, meta-heuristics have become popular in different research areas for resolving challenging optimization issues. These stochastic approaches are among the best and most effective strategies for finding optimal solutions, in contrast to classical (deterministic) optimization approaches, which are devalued due to drawbacks such as local optima stagnation [1]. In spite of the benefits of intelligence algorithms, they require some improvement to satisfy the diverse characteristics of complex real-world applications. Features most often faced in real issues are uncertainty [2], dynamicity [3], combinatorial nature, multiple objectives [4,5], constraints, etc. Along these lines, it is obvious that no single approach is qualified to resolve every kind of optimization problem. In that regard,

**Citation:** Daqaq, F.; Kamel, S.; Ouassaid, M.; Ellaia, R.; Agwa, A.M. Non-Dominated Sorting Manta Ray Foraging Optimization for Multi-Objective Optimal Power Flow with Wind/Solar/Small-Hydro Energy Sources. *Fractal Fract.* **2022**, *6*, 194. https://doi.org/10.3390/ fractalfract6040194

Academic Editor: Savin Trean¸t ˘a

Received: 17 February 2022 Accepted: 22 March 2022 Published: 31 March 2022

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

the No-Free Lunch (NFL) theorem [6] validates this and opens the way for developers to create the newest approaches and enhances the quality of the existing ones.

Some well-regarded meta-heuristic algorithms are: the genetic algorithm (GA), the first stochastic algorithm, introduced by John Holland in the 1960s [7], followed by simulated annealing (SA) in 1983 [8], particle swarm optimization (PSO) in 1995 by Kennedy [9], and more approaches developed later, such as the artificial bee colony (ABC) [10], the arithmetic optimization algorithm (AOA) [11], Harris hawks optimization (HHO) [12], the sine cosine algorithm (SCA) [13], black widow optimization (BWO) [14], dynamic differential annealed optimization (DDAO) [15], Lévy flight distribution (LFD) [16], the Salp swarm algorithm (SSA) [17], Henry gas solubility optimization (HGSO) [18], manta ray foraging optimization (MRFO) [19], and so on. All these stochastic algorithms are typically single-objective; therefore, researchers improve them according to the nature and complexity of their problems. Hence, in line with the aforementioned nature of applications, we have extended the recent bio-inspired approach called manta ray foraging optimization (MRFO) [19] to cope with multi-objective problems (MOPs), which are the main focus of this paper.

During the last two years, several studies have confirmed the superiority and efficiency of the MRFO algorithm in solving global optimization problems. Fahd et al. [20] applied the standard MRFO to perform the dynamic operation of connecting PV into the grid system. The authors in [21] examined the global maximum power point of a partially shaded MJSC photovoltaic (PV) array applying the MRFO algorithm. In the work of Selem et al. [22], MRFO was applied to identify the unknown electrical parameters of proton exchange membrane fuel cell stacks, which is considered a constrained optimization problem. In addition, El-Hameed et al. [23] used MRFO to solve the parameter identification of a three-diode equivalent model of PV solar modules. In an attempt to ameliorate the performance of this suggested approach, Dalia et al. [24] introduced a modified MRFO by using fractional-order optimization algorithms in order to enhance its exploitation ability. Referring to [25], a binary version of MRFO has been proposed using four S-shaped and four V-shaped transfer functions for the feature selection problem. In the bio-medical area, Karrupusamy utilized a hybrid MRFO for brain tumor identification, using a convolutional neural network as a classifier that classifies the features and supplies optimal classification results [26]. The authors in [27] used the multi-objective manta ray foraging optimization (MOMRFO) based on a weighted sum to handle the optimal power flow (OPF) problem for hybrid AC and multi-terminal direct-current power grids. In Ref. [28], the authors applied the IMOMRFO to solve the IEEE-30 and IEEE-57 OPF issues.

In accordance with the literature review, multi-objective algorithms (MOAs) are divided into two techniques: a priori versus a posteriori [29]. The first class converts the multi-objective problem into a single-objective one by aggregating all objectives into one function using a set of weights chosen by an expert in the problem domain (the decision maker). The drawback of this method is that generating the Pareto optimal set requires running the algorithm multiple times. Alternatively, the second class is the a posteriori technique, which does not require any additional weights. In this method, the multi-objective formulation is maintained and the Pareto optimal set is obtained in just one run; the decision-making then occurs after the optimization. In addition, the Pareto front of all kinds of problems can be determined utilizing this a posteriori technique, which is the focus of this work, in which a new multi-objective version of MRFO based on a non-dominated sorting approach, named NSMRFO, was developed.

Different shapes of fronts exist in multi-objective problems: linear, convex, concave, separated, and so on. Therefore, to obtain an accurate approximation of the Pareto optimal front for any multi-objective optimization issue, three fundamental challenges should be addressed: distribution of solutions (coverage), accuracy (convergence), and local fronts [30]. Thus, an efficient algorithm is one that balances between them: it avoids premature convergence and extracts a uniformly distributed front covering the entire true Pareto optimal front. Some of the most popular multi-objective algorithms are: the non-dominated sorting genetic algorithm (NSGA) [31,32], the strength Pareto evolutionary algorithm (SPEA) [33,34], multi-objective particle swarm optimization (MOPSO) [35], and the multi-objective evolutionary algorithm based on decomposition (MOEA/D) [36].

Since multi-objective optimization problems appeared, the non-dominated sorting strategy with crowding distance and non-dominated ranking has been known as one of the most efficient and significant mechanisms for solving multi-objective problems. The significant advantages of NSGA-II and the MOAs borrowed from it motivated us to suggest a novel multi-objective variant of the MRFO approach, based on the outstanding NSGA-II operators. The search mechanism of MRFO is kept the same in the NSMRFO optimizer. Furthermore, in order to assess the success of NSMRFO, various MMMOEAs and other MOEAs were investigated for comparisons with respect to diverse indicator metrics in the search and objective spaces. In accordance with the statistical outcomes, the proposed NSMRFO outperformed its competitors and even the existing MOMRFOs for different kinds of problems.

The main contributions of this paper are as follows:


The remainder of this paper is arranged in four sections as follows: Section 2 summarizes the basic definitions of the multi-objective problems, and then describes the proposed algorithm MRFO and the structure of its multi-objective version NSMRFO. Simulation results, analyses, and competing algorithms are discussed in Section 3. As a final point, Section 4 concludes this work and proposes some future research directions.

#### **2. Multi-Objective Optimization**

As mentioned before, multi-objective optimization is the subject of handling problems that require optimizing more than one objective simultaneously, where the objectives are mostly in conflict. The basic mathematical formulation of a multi-objective minimization problem can be defined as:

$$\begin{aligned} \text{Minimize:} \quad & F(\vec{x}) = \left\{ f_1(\vec{x}), f_2(\vec{x}), \dots, f_{N_{obj}}(\vec{x}) \right\} \\ \text{Subject to:} \quad & g_i(\vec{x}) \ge 0, \qquad i = 1, 2, \dots, m \\ & h_i(\vec{x}) = 0, \qquad i = 1, 2, \dots, p \\ & L_i \le x_i \le U_i, \quad i = 1, 2, \dots, n \end{aligned} \tag{1}$$

where *F*(*x*) is the objective function to be optimized; *hi*(*x*) are the equality constraints; *gi*(*x*) are the inequality constraints; *Nobj*, *m*, *p*, and *n* are the numbers of objective functions, inequality constraints, equality constraints, and variables, respectively; and *Li* and *Ui* are the lower and upper bounds of the *i*th variable.

The arithmetic relational operators cannot be effective in multi-objective optimization to compare the search space of different solutions. Alternatively, the Pareto optimal dominance concept is utilized to determine which solution is better than another. The essential definitions of dominance relation are defined as follows [37,38]:

Let us take two vectors $\vec{x} = (x_1, x_2, \dots, x_n)$ and $\vec{y} = (y_1, y_2, \dots, y_n)$.

**Definition 1** (Pareto Dominance)**.** $\vec{x}$ *is said to dominate* $\vec{y}$ *if and only if* $\vec{x}$ *is partially less than* $\vec{y}$ *(i.e., x* ≤ *y):*

$$\forall i \in \{1, 2, \ldots, N\_{obj}\}: f\_i(\vec{x}) \le f\_i(\vec{y}) \land \exists i \in \{1, 2, \ldots, N\_{obj}\}: f\_i(\vec{x}) < f\_i(\vec{y})\tag{2}$$
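Definition 1 translates directly into code. The minimal sketch below assumes minimization, as in Equation (1); the function name `dominates` is ours, not from the paper.

```python
def dominates(fx, fy):
    """Pareto dominance (Equation (2)): fx dominates fy iff fx is no worse in
    every objective and strictly better in at least one (minimization)."""
    return all(a <= b for a, b in zip(fx, fy)) and any(a < b for a, b in zip(fx, fy))

print(dominates([1, 2], [2, 3]))   # True: better in both objectives
print(dominates([1, 3], [2, 2]))   # False: an incomparable, non-dominated pair
```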

**Definition 2** (Pareto Optimality)**.** *x* ∈ *X is called a Pareto-optimal solution iff:*

$$\nexists\, \vec{y} \in X \mid F(\vec{y}) < F(\vec{x}) \tag{3}$$

**Definition 3** (Pareto Optimal Set)**.** *The Pareto optimal set is a set that comprises all Pareto optimal solutions (neither x dominates y nor y dominates x):*

$$P_s := \{\vec{x}, \vec{y} \in X \mid \nexists\, F(\vec{y}) < F(\vec{x}) \,\wedge\, \nexists\, F(\vec{x}) < F(\vec{y})\}\tag{4}$$

**Definition 4** (Pareto Optimal Front)**.** *The Pareto optimal front is defined as:*

$$P\_f := \{ F(\vec{x}) \mid \vec{x} \in P\_s \} \tag{5}$$

In a multi-objective optimization problem, the solution is a set of best non-dominated solutions. The projections of the Pareto optimal solutions in the objective space are kept in a set called the Pareto optimal front, as illustrated in Figure 1. The solutions in both spaces clearly reveal that the green shapes are better than the others, since they dominate all other colors.
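Definitions 3 and 4 suggest a simple, if quadratic, way to extract the non-dominated set from a finite sample of objective vectors. The helper below is a sketch (the name `pareto_front` is ours): it keeps exactly the points that no other point dominates.

```python
def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors
    (Definitions 3 and 4, minimization assumed)."""
    def dominates(a, b):
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))
    return [p for p in points if not any(dominates(q, p) for q in points)]

pts = [(1, 4), (2, 2), (4, 1), (3, 3)]
print(pareto_front(pts))   # [(1, 4), (2, 2), (4, 1)] — (3, 3) is dominated by (2, 2)
```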

**Figure 1.** Parameter and objective spaces.

The concept of the MRFO standard version is explained briefly in the following section.

#### *2.1. Manta Ray Foraging Optimizer (MRFO)*

MRFO is among the most recent algorithms, proposed in 2020 and inspired by the giant sea creatures known as manta rays [19]. Figure 2 depicts the shape of a manta ray. To establish this algorithm, the authors mimic three feeding behaviors of manta rays: chain, somersault, and cyclone feeding. Furthermore, the manta rays are treated as search agents which explore the planktons' locations and proceed towards them; the plankton at the most significant concentration represents the best solution. The source code of MRFO is available at https://www.mathworks.com/matlabcentral/fileexchange/73130-manta-ray-foraging-optimization-mrfo (accessed on 24 May 2021).

Following population-based optimization algorithms, MRFO starts from a randomly initialized population, as illustrated below:

$$\mathbf{x}\_{i} = Lb\_{i} + rand \times (\mathbf{U}b\_{i} - Lb\_{i}), \quad i = 1, \ldots, N \tag{6}$$

where *Ub* and *Lb* are the upper and lower bounds of the variables in the search space, and *rand* is a random number with *rand* ∈ [0, 1].
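Equation (6) is a standard uniform initialization. A minimal Python sketch follows (the authors' reference implementation is in MATLAB, so the name `init_population` and the list-based layout are our own choices):

```python
import random

def init_population(N, Lb, Ub):
    """Equation (6): random initial positions x_i = Lb + rand * (Ub - Lb)."""
    return [[Lb[j] + random.random() * (Ub[j] - Lb[j]) for j in range(len(Lb))]
            for _ in range(N)]

pop = init_population(30, Lb=[-5.0, 0.0], Ub=[5.0, 1.0])
assert len(pop) == 30
assert all(-5.0 <= x[0] <= 5.0 and 0.0 <= x[1] <= 1.0 for x in pop)
```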

The three main operators are mathematically clarified in the next subsections.

**Figure 2.** Manta ray body form. (**a**) Manta ray in the ocean; (**b**) parts of a manta ray, dorsal, and ventral.

#### 2.1.1. Chain Foraging

In this foraging strategy, about 50 mantas line up head to tail, forming an orderly line. The chain swims towards the position of the most intense plankton concentration with fully open mouths. Plankton missed by the leader (the manta at the head of the chain) is devoured by the followers. During the foraging process, the position of each follower is updated towards the best plankton source and the individual in front of it. This foraging phase is depicted in Figure 3. The mathematical updating formulas are presented as follows:

$$x_{i,j}^{t+1} = \begin{cases} x_{i,j}^{t} + r_1 \cdot \left(x_{best,j}^{t} - x_{i,j}^{t}\right) + \alpha \cdot \left(x_{best,j}^{t} - x_{i,j}^{t}\right), & i = 1 \\ x_{i,j}^{t} + r_2 \cdot \left(x_{i-1,j}^{t} - x_{i,j}^{t}\right) + \alpha \cdot \left(x_{best,j}^{t} - x_{i,j}^{t}\right), & i = 2, \dots, N \end{cases} \tag{7}$$

where $x_{i,j}$ is the position of the *i*th manta ray in the *j*th dimension, $r_1$ and $r_2$ are random vectors in the range [0, 1], $x_{best,j}^{t}$ is the position with the best plankton concentration, and *α* is a weight coefficient that is expressed as:

$$\alpha = 2 \cdot r\_3 \cdot \sqrt{|\log(r\_4)|} \tag{8}$$

where *r*<sup>3</sup> and *r*<sup>4</sup> introduce the random vector in range [0, 1].
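Equations (7) and (8) can be sketched as follows. This is our own Python reading of the update (the paper's reference code is MATLAB): 0-based indexing is used, so index 0 plays the role of i = 1, the function name `chain_foraging` is ours, and a tiny constant is added inside the logarithm only to guard against log 0.

```python
import math
import random

def chain_foraging(pop, best, i):
    """Equations (7)-(8): chain-foraging update of the i-th manta ray,
    drawing fresh random numbers for every dimension."""
    new = []
    for j in range(len(best)):
        r = random.random()
        r3, r4 = random.random(), random.random()
        alpha = 2 * r3 * math.sqrt(abs(math.log(r4 + 1e-300)))  # Equation (8)
        leader = best[j] if i == 0 else pop[i - 1][j]  # follow best plankton or predecessor
        new.append(pop[i][j] + r * (leader - pop[i][j]) + alpha * (best[j] - pop[i][j]))
    return new
```

A follower (i ≥ 2) is pulled both towards the manta ahead of it and towards the best plankton found so far, exactly as in Equation (7).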

**Figure 3.** Simulation model of chain foraging behavior.

#### 2.1.2. Cyclone Foraging

Cyclone foraging phase follows the feeding strategy in WOA [39] in terms of spiral movement. After discovering a significant amount of plankton in the profundity of the ocean, the mantas move one behind another towards plankton making a spiral shape. This foraging phase is illustrated in Figure 4. The manta updates its position based on its best previous position and the manta in front of it.

The spiral-shaped movement is mathematically modeled as:

$$x_{i,j}^{t+1} = \begin{cases} x_{best,j} + r_5 \cdot \left(x_{best,j}^{t} - x_{i,j}^{t}\right) + \beta \cdot \left(x_{best,j}^{t} - x_{i,j}^{t}\right), & i = 1 \\ x_{best,j} + r_6 \cdot \left(x_{i-1,j}^{t} - x_{i,j}^{t}\right) + \beta \cdot \left(x_{best,j}^{t} - x_{i,j}^{t}\right), & i = 2, \dots, N \end{cases} \tag{9}$$

where *r*<sup>5</sup> and *r*<sup>6</sup> present the random value in [0, 1]; *β* is the weight coefficient that is formulated as:

$$\beta = 2e^{r_7\frac{T-t+1}{T}} \cdot \sin(2\pi r_7) \tag{10}$$

where *r*<sup>7</sup> denotes the random vector in range [0, 1], and *T* and *t* are the maximum and current iteration, respectively.

**Figure 4.** Simulation model of cyclone foraging behavior.

The cyclone foraging can be considered as the main phase in MRFO, in which it performs the intensification (exploitation) and diversification (exploration) mechanisms. The exploitation improvement is achieved based on considering the best plankton found so far as a reference point. On the other hand, the exploration phase incites MRFO to reach the overall optimal solution in accordance with the mathematical equations described below:

$$\mathbf{x}\_{rand,j} = \mathbf{L}\mathbf{b}\_{j} + r\_{8} \cdot \left(\mathbf{U}\mathbf{b}\_{j} - \mathbf{L}\mathbf{b}\_{j}\right) \tag{11}$$

$$x_{i,j}^{t+1} = \begin{cases} x_{rand,j} + r_9\left(x_{rand,j} - x_{i,j}^{t}\right) + \beta\left(x_{rand,j} - x_{i,j}^{t}\right), & i = 1 \\ x_{rand,j} + r_{10}\left(x_{i-1,j}^{t} - x_{i,j}^{t}\right) + \beta\left(x_{rand,j} - x_{i,j}^{t}\right), & i = 2, \dots, N \end{cases} \tag{12}$$

where *xrand*,*<sup>j</sup>* is the random position generated inside the search space.
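Equations (9)-(12) together give the cyclone update. The sketch below is our own Python reading (names are ours, not the authors'), assuming, as in the original MRFO description, that the exploration branch is taken when t/T is below a uniform random number; the spiral's reference point switches between the best position and a random one accordingly.

```python
import math
import random

def cyclone_foraging(pop, best, i, t, T, Lb, Ub):
    """Equations (9)-(12): spiral update around the best position (exploitation)
    or around a random position (exploration, when t/T < rand)."""
    dim = len(best)
    if t / T < random.random():
        # Equation (11): random reference position inside the search space
        ref = [Lb[j] + random.random() * (Ub[j] - Lb[j]) for j in range(dim)]
    else:
        ref = best
    new = []
    for j in range(dim):
        r = random.random()
        r7 = random.random()
        beta = 2 * math.exp(r7 * (T - t + 1) / T) * math.sin(2 * math.pi * r7)  # Eq. (10)
        prev = ref[j] if i == 0 else pop[i - 1][j]
        new.append(ref[j] + r * (prev - pop[i][j]) + beta * (ref[j] - pop[i][j]))
    return new
```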

#### 2.1.3. Somersault Foraging

The last phase in MRFO is somersault feeding, wherein the manta ray swims to and fro around a pivot point and somersaults around itself to a new position. Figure 5 illustrates this feeding behavior. The manta updates its position using the following mathematical model:

$$\mathbf{x}\_{i,j}^{t+1} = \mathbf{x}\_{i,j}^{t} + \mathbf{S} \cdot \left(r\_{11} \cdot \mathbf{x}\_{\text{best},j} - r\_{12} \cdot \mathbf{x}\_{i,j}^{t}\right), \quad i = 1, \ldots, N \tag{13}$$

where *r*11, *r*<sup>12</sup> depict the random values between 0 and 1. *S* is the somersault factor, *S* = 2.
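Equation (13) is a one-line update per dimension. A minimal Python sketch, under the same naming assumptions as the previous snippets:

```python
import random

def somersault_foraging(pop, best, S=2.0):
    """Equation (13): every manta ray somersaults around the best position
    found so far, with somersault factor S = 2."""
    return [[x[j] + S * (random.random() * best[j] - random.random() * x[j])
             for j in range(len(best))] for x in pop]

out = somersault_foraging([[0.2, 0.8], [0.5, 0.5]], [0.9, 0.1])
assert len(out) == 2 and len(out[0]) == 2
```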

**Figure 5.** Simulation model of somersault foraging behavior.

MRFO's diversification and intensification phases are balanced using the ratio *t*/*T*, which gradually increases. When *t*/*T* < *rand*, the exploration stage is carried out; otherwise, the exploitation process is adopted. The main steps followed in MRFO are demonstrated in Figure 6.

**Figure 6.** MRFO flowchart for the minimization problem.

#### *2.2. Proposed (NSMRFO)*

As MRFO is relevant for single-objective issues, we have developed a multi-objective version of MRFO to handle problems with many fitness functions by applying the Pareto dominance strategy. This variant is inspired by the non-dominated sorting genetic algorithm (NSGA-II) approach, which is the most popular and efficient algorithm in the area of multi-objective optimization in the literature. The non-dominated sorting (NDS) technique employs the crowding distance to define an ordering among individuals and to preserve diversity and the elitist mechanism. To rank all non-dominated solutions, a process called non-dominated ranking (NDR) is applied, in which the front that is not dominated by any solution is assigned rank 1, rank 2 corresponds to the front dominated by at least one solution, and so on; the ranking scheme is described in Figure 7a. The crowding distance value of a particular solution is the average distance of its two neighboring solutions, as illustrated in Figure 7b. Therefore, a smaller crowding distance value denotes a comparatively more crowded region, and conversely. The crowding distance mechanism is formulated as below:

$$CD_i(j) = \frac{f_i(j+1) - f_i(j-1)}{f_i^{\max} - f_i^{\min}} \tag{14}$$

where $f_i^{\min}$ and $f_i^{\max}$ are the minimum and maximum values of the *i*th objective function.
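A direct implementation of the crowding distance of Equation (14), accumulated over all objectives, can be sketched as follows (illustrative Python; the NSGA-II convention of assigning infinite distance to boundary solutions is an assumption carried over from that algorithm):

```python
import numpy as np

def crowding_distance(F):
    """Crowding distance of Eq. (14) for one front.
    F: (n, m) array of objective values of the solutions in the front."""
    n, m = F.shape
    cd = np.zeros(n)
    for i in range(m):
        order = np.argsort(F[:, i])
        f = F[order, i]
        span = f[-1] - f[0]                  # f_i^max - f_i^min
        cd[order[0]] = cd[order[-1]] = np.inf  # keep boundary solutions
        if span > 0:
            # interior solutions: (f_i(j+1) - f_i(j-1)) / span, Eq. (14)
            cd[order[1:-1]] += (f[2:] - f[:-2]) / span
    return cd
```

On an evenly spaced bi-objective front, every interior solution receives the same distance, which matches the intuition that none of them is more crowded than another.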

**Figure 7.** Non-dominated ranking fonts (**a**); crowding-distance calculation (**b**).

It is worth noting here that the NDS also gives dominated solutions a chance of being selected, which enhances the diversification of the suggested algorithm. The pseudo-code of NSMRFO is depicted in Algorithm 1. The computational space complexity of NSMRFO is of the same order as NSGA-II, *O*(*MN*<sup>2</sup>), where *M* is the number of objectives and *N* is the number of manta rays; this is considerably better than that of approaches such as SPEA and NSGA, which are of order *O*(*MN*<sup>3</sup>).
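The non-dominated ranking described above can be sketched as repeatedly peeling off the current non-dominated front (a deliberately simple illustration for clarity, not the faster bookkeeping variant used inside NSGA-II; minimization and the function name are our assumptions):

```python
def non_dominated_ranking(F):
    """Assign NDR ranks (rank 1 = non-dominated front) to a list of
    objective vectors F; minimization of every objective is assumed."""
    def dominates(a, b):
        # a dominates b: no objective worse, at least one strictly better
        return all(x <= y for x, y in zip(a, b)) and \
               any(x < y for x, y in zip(a, b))

    n = len(F)
    ranks = [0] * n
    remaining = set(range(n))
    r = 1
    while remaining:
        # current front: members not dominated by any remaining solution
        front = {i for i in remaining
                 if not any(dominates(F[j], F[i])
                            for j in remaining if j != i)}
        for i in front:
            ranks[i] = r
        remaining -= front
        r += 1
    return ranks
```

For example, with objective vectors (1, 1), (2, 2), and (0, 3), the first and third are mutually non-dominated (rank 1), while (2, 2) is dominated by (1, 1) and falls into rank 2.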

#### **Algorithm 1** Non-Dominated Sorting Manta Ray Foraging Optimization


#### *2.3. Evaluation Criteria*

This section describes the employed performance metrics. Performance indicators measure the potential of a multi-objective algorithm in terms of diversity and coverage. In this work, several metrics are used: generational distance (GD) [40], inverted generational distance (IGD) [41] in the search [42] and objective [42] spaces, spacing (SP) [43], the reciprocal of Pareto set proximity (rPSP) [44], and the reciprocal of hypervolume (rHV) [45], which are formulated as follows:

• Generational Distance (GD) [40]:

$$GD = \frac{\sqrt{\sum_{i=1}^{n_{pf}} d_i^2}}{n_{pf}} \tag{15}$$

where $n_{pf}$ is the number of obtained Pareto optimal solutions, and $d_i$ indicates the Euclidean distance between the *i*th obtained Pareto optimal solution and the closest true Pareto optimal solution in the reference set;

• Inverted Generational Distance (IGD) [41]:

$$IGD = \frac{\sqrt{\sum_{i=1}^{n_{tpf}} \left(d_i'\right)^2}}{n_{tpf}} \tag{16}$$

where $n_{tpf}$ is the number of true Pareto optimal solutions and $d_i'$ indicates the Euclidean distance between the *i*th true Pareto optimal solution and the closest obtained Pareto optimal solution in the reference set.

*IGDX* is the IGD in search space. *IGDF* is the IGD in objective space:

• Spacing (SP) [43]:

$$SP = \sqrt{\frac{1}{n_{pf} - 1} \sum_{i=1}^{n_{pf}} \left(\bar{d} - d_i\right)^2} \tag{17}$$

where $n_{pf}$ is the number of obtained Pareto optimal solutions, $d_i$ indicates the Euclidean distance between the *i*th obtained Pareto optimal solution and the closest true Pareto optimal solution in the reference set, and $\bar{d}$ is the average of all $d_i$.

• Reciprocal of Pareto sets proximity (rPSP) [44]:

$$rPSP = \frac{IGDX}{CR} \tag{18}$$

$$CR = \left(\prod_{i=1}^{m} \delta_i\right)^{1/(2D)} \tag{19}$$

$$\delta_i = \left(\frac{\min\left(PF_i^{\max}, PFt_i^{\max}\right) - \max\left(PF_i^{\min}, PFt_i^{\min}\right)}{PFt_i^{\max} - PFt_i^{\min}}\right)^2 \tag{20}$$

where *CR* is the cover rate, *m* is the number of objective functions, *D* is the number of decision variables, and *PFt* and *PF* are the true and obtained Pareto fronts, respectively;

• Reciprocal of hypervolume (rHV) [45]:

$$rHV(S, w) = \frac{1}{HV(S, w)} \tag{21}$$

$$HV(S, w) = \lambda_D \left( \bigcup_{z \in S} [z; w] \right) \tag{22}$$

where $\lambda_D$ is the *D*-dimensional Lebesgue measure.
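To make the metric definitions concrete, the following Python sketch implements Equations (15)-(22) for small fronts. It is only an illustration: the function and variable names are ours, the cover rate is taken over the *D* decision variables (an assumption reconciling the product index and the exponent in Equation (19)), and the hypervolume sweep is valid only for bi-objective minimization fronts.

```python
import numpy as np

def gd(pf, true_pf):
    """Generational distance, Eq. (15): distances from each obtained
    point to its closest true Pareto point."""
    d = np.min(np.linalg.norm(pf[:, None] - true_pf[None], axis=2), axis=1)
    return np.sqrt(np.sum(d**2)) / len(pf)

def igd(pf, true_pf):
    """Inverted generational distance, Eq. (16): the roles of the two
    sets are swapped relative to GD."""
    d = np.min(np.linalg.norm(true_pf[:, None] - pf[None], axis=2), axis=1)
    return np.sqrt(np.sum(d**2)) / len(true_pf)

def spacing(pf, true_pf):
    """Spacing, Eq. (17): spread of the per-point distances around
    their mean."""
    d = np.min(np.linalg.norm(pf[:, None] - true_pf[None], axis=2), axis=1)
    return np.sqrt(np.sum((d.mean() - d)**2) / (len(pf) - 1))

def rpsp(igdx, ps, true_ps):
    """Reciprocal of Pareto set proximity, Eqs. (18)-(20), with the
    cover rate computed over the D decision variables (assumption)."""
    D = ps.shape[1]
    lo, hi = ps.min(0), ps.max(0)
    tlo, thi = true_ps.min(0), true_ps.max(0)
    delta = ((np.minimum(hi, thi) - np.maximum(lo, tlo)) / (thi - tlo)) ** 2
    return igdx / float(np.prod(delta) ** (1.0 / (2 * D)))

def rhv(S, w):
    """Reciprocal of hypervolume, Eqs. (21)-(22), for a 2-D minimization
    front S and reference (worst) point w, via a left-to-right sweep."""
    area, prev_y = 0.0, w[1]
    for x, y in sorted(tuple(z) for z in S if z[0] < w[0] and z[1] < w[1]):
        if y < prev_y:                # dominated points add no new area
            area += (w[0] - x) * (prev_y - y)
            prev_y = y
    return 1.0 / area
```

When the obtained front coincides with the true front, GD, IGD, and SP are all zero and the cover rate is one, which is a convenient sanity check for any implementation of these indicators.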

#### **3. Experimental Results and Analysis**

In this section, the effectiveness of the proposed multi-objective approach is assessed using 18 different unconstrained benchmark functions, the CEC2020 benchmark test that contains 24 functions, four constrained problems, four engineering design problems, and the IEEE 30-bus optimal power flow problem incorporating wind/solar/small-hydro power. These test suites have fronts of different shapes (linear, convex, concave, connected, disconnected, etc.), as indicated in Table 1. Five analyses are investigated to prove the robustness of the developed NSMRFO algorithm: the first aims to assess convergence using the generational distance (GD) metric; the second evaluates diversity by computing the spacing (SP) metric; the inverted generational distance (IGD) in the search and objective spaces is intended to affirm NSMRFO's efficacy in balancing convergence and diversity; and the reciprocal of Pareto set proximity (rPSP) and the reciprocal of hypervolume (rHV) complete the set. Moreover, for evaluating the NSMRFO approach, 11 significant multi-objective optimization algorithms are re-implemented, namely: the multi-objective slime mould algorithm (MOSMA) [46], multi-objective bonobo optimizer based on decomposition (MOBO/D) [47], multi-objective multi-verse optimization (MOMVO) [48], multi-objective water cycle algorithm (MOWCA) [49], non-dominated sorting grey wolf optimizer (NSGWO) [50], multi-objective manta ray foraging optimizer (MOMRFO) [51], improved multi-objective manta ray foraging optimizer (IMOMRFO) [28], non-dominated sorting genetic algorithm II (NSGA-II) [32], double-niched non-dominated sorting genetic algorithm II (DN-NSGA) [52], omni-optimizer (OMNI) [53], and a multi-objective particle swarm optimizer using ring topology (MO\_Ring\_PSO\_SCD) [44]; their characteristics are shown in Table 2.
The MATLAB codes for these algorithms were downloaded from: https://aliasgharheidari.com/SMA.html (SMA), https://www.mathworks.com/matlabcentral/fileexchange/79843-multi-objective-bonobo-optimizer-with-decomposition-method (BO), https://seyedalimirjalili.com/mvo (MVO), https://ali-sadollah.com/water-cycle-algorithm-wca/ (WCA), https://www.mathworks.com/matlabcentral/fileexchange/75259-multi-objective-non-sorted-grey-wolf-mogwo-nsgwo?s\_tid=srchtitle\_nsgwo\_1 (GWO), https://www.mathworks.com/matlabcentral/fileexchange/103530-momrfo-multi-objective-manta-ray-foraging-optimizer?s\_tid=srchtitle\_MOMRFO\_1 (MRFO), and https://www.mathworks.com/matlabcentral/fileexchange/103895-improved-multi-objective-manta-ray-foraging-optimization?s\_tid=srchtitle\_MOMRFO\_2 (IMRFO); the codes of the other algorithms and the CEC2020 test suite can be found at https://www.mathworks.com/matlabcentral/fileexchange/103895-improved-multi-objective-manta-ray-foraging-optimization?s\_tid=srchtitle\_MOMRFO\_2 (all accessed on 7 August 2021). Each approach is executed on a personal computer (Windows 8.1, 64-bit, Core i5 @ 1.8 GHz, 4 GB RAM) using MATLAB R2020a. All benchmark functions are executed 20 times with 1000 iterations and a population of 100, except the CEC2020 problems, which are executed 21 times with a population *pop* = 200 and a maximum number of function evaluations equal to 10,000 ∗ *pop*. In addition, the OPF problem was repeated 20 times with *pop* = 100 and 200 iterations. Note that the best-performing algorithm is assessed based on the mean and standard deviation of the outcomes. The quantitative and qualitative performance outcomes are illustrated in Tables 3–15 and Figures 8–16, respectively. The outcomes of each set of benchmark functions are outlined and discussed in the following sections.


**Table 1.** Descriptions of the unconstrained, constrained, and CEC2020 benchmark functions.


**Table 2.** Parameter settings of the tested algorithms.

#### *3.1. Evaluation on Unconstrained Benchmark Functions*

As mentioned above, the proposed approach is tested first on the classical unconstrained test problems with two and three objectives. The mean and STD values over 20 runs of each performance metric for NSMRFO and the other approaches are presented in Table 3, with the best results highlighted. It is worth noting here that the better algorithm is the one with the lower metric value. The suggested NSMRFO managed to significantly outperform the MOSMA [46], MOBO/D [47], MOMVO [48], MOWCA [49], MOMRFO [51], IMOMRFO [28], and NSGA-II [32] optimizers on eight of the 18 cases for GD, 14 of 18 for IGD, and 14 of 18 for SP. By comparison, MOSMA is better on SCH2 for all metrics; MOWCA is best on FON2 for GD, on POL for IGD, and on POL and VNT3 for SP; MOMVO is best only on SCH1 for SP; IMOMRFO is best on SCH1 for GD and on VNT2 for IGD; and NSGA-II and MOMRFO offered good solutions on four and eight functions, respectively. By contrast, the MOBO/D optimizer provides the worst results. It may therefore be observed from this table that the NSMRFO approach outperforms all competitors in most cases. Furthermore, it is also evident from Figures 8–10 that NSMRFO converges better toward the true Pareto front with different features from diverse perspectives. In addition, the Pareto optimal solutions are well distributed over the true PF on the classical functions.

**Table 3.** GD, IGD, and SP metrics comparison based on unconstrained test suites.






Underlined values indicate the best results.

#### *3.2. Evaluation on Constrained Benchmark Functions*

To evaluate the accuracy of the developed NSMRFO approach, four constrained test functions with different Pareto optimal fronts and three analysis metrics were investigated. Inspecting the obtained Pareto fronts in Figure 11 and the outcomes in Table 4, it is clearly seen that the suggested NSMRFO yields higher convergence and coverage toward the true PF on all constrained benchmark functions. Numerically, NSMRFO ranks first for most functions, ranking second only on BNH and CONSTR for GD, on BNH and SRN for IGD, and on OZY for SP compared with the aforementioned well-known competitive techniques. Note that the optimal findings are marked in boldface and underlined.

**Figure 8.** Obtained Pareto front of NSMRFO on classical test suites.

**Figure 9.** Obtained Pareto front of NSMRFO on ZDT test suites.

**Figure 10.** Obtained Pareto front of NSMRFO on TYD test suites.

**Figure 11.** Obtained Pareto front of NSMRFO on constrained test suites.




**Table 4.** *Cont.*

Underlined values indicate the best results.

#### *3.3. Evaluation on CEC2020 Benchmark Functions*

This subsection presents the performance of the suggested NSMRFO technique on the CEC2020 multimodal multi-objective optimization (MMO) problems using four indicator metrics: *rPSP* and *IGDX*, which reflect the quality of the Pareto set in the search space, and *rHV* and *IGDF*, which reflect the quality of the Pareto front in the objective space. The MMO problems cover different geometries: linear and nonlinear, concave and convex functions. To illustrate the effectiveness of the multi-objective MRFO version, six well-known competitors are adopted for comparison: NSGA-II [32], DN-NSGAII [52], OMNI-OPT [53], MO\_Ring\_PSO\_SCD [44], MOMRFO [51], and IMOMRFO [28]. The numerical statistical results of the obtained indicators in the search and objective spaces for each approach are summarized in Table 5. It is worth noting that the optimal result of each indicator is the lowest value; the underlined bold entries indicate the algorithms' optimum results. Additionally, the last five rows of this table present the score of each approach, in which NSMRFO ranks first by providing 47 optimal solutions out of the 96-entry benchmark suite (twenty-four functions times four indicators). By contrast, the MOMRFO, OMNI-OPT, and DN-NSGAII competitors show the worst scores. It can be clearly observed from the search space values that the crowding distance mechanism efficiently increases the PS convergence and diversity of the optimization algorithms. However, in spite of the same strategy being used in NSGA-II, DN-NSGAII, and the proposed optimizer, they offered significantly different performance, which means that the NSMRFO diversity and convergence are improved. The IMOMRFO was the closest approach to NSMRFO, ranking second with 20 best values out of 96; it offered good search space results but poor objective space results.
NSGA-II performed a little better in the objective space, especially on the *rHV* metric. The CEC2020 box plots of the four metrics rPSP, rHV, IGDX, and IGDF are depicted in Figure 12. According to this figure, NSMRFO achieved the minimum indicator values on most MMFs, such as MMF1, MMF4, MMF5, MMF7, MMF8, MMF11, MMF12, MMF13, MMF14, MMF15, MMF1\_e, MMF14\_a, MMF15\_a, and MMF10\_l, as well as rHV on MMF11\_l, rPSP and rHV on MMF12\_l, rHV on MMF13\_l, and rPSP and rHV on MMF15\_a\_l. In addition, NSMRFO is more stable than its competitor approaches. To sum up, the suggested optimizer achieves the best rank in terms of all indicator metrics compared to its competing algorithms and shows significant stability.

*Fractal Fract.* **2022**, *6*, 194


**Table 5.** rPSP, rHV, IGDX, and IGDF indicator metrics comparison based on CEC2020 test suites.










**Figure 12.** Box plots for the four metrics rPSP, rHV, IGDX, and IGDF on CEC2020 problems.

#### *3.4. Evaluation on Engineering Design Problems*

Engineering design problems are very useful for examining the applicability of an algorithm. In this subsection, four engineering functions are considered in order to assess the capability of NSMRFO in dealing with real-world problems. The first is the 4-bar truss, which aims to optimize the volume and deflection with four dimensions. The disk brake problem consists of minimizing the stopping time and weight of a brake system with four dimensions. The third engineering problem is the welded beam, which seeks to decrease the vertical deflection and fabrication cost with four dimensions. The speed reducer, as the last function, attempts to reduce its stress and total weight with seven dimensions. For results verification, seven well-known approaches are applied. The statistical results are summarized in Table 6; it is evident that NSMRFO outranks the other algorithms on most problems, ranking first in 8 out of 12 test suites, with the exceptions of the 4-bar truss for the GD metric, the welded beam for the GD and SP metrics, and the disk brake for the SP metric. Accordingly, NSGA-II and MOMRFO are the closest competitors, providing good estimations on two of the 12 functions, while the other algorithms, MOSMA, MOBO/D, MOMVO, and IMOMRFO, yield the lowest results. As illustrated in Figure 13, the NSMRFO Pareto front shows higher approximation toward the true PFs in terms of coverage and convergence.

**Figure 13.** Obtained Pareto front of NSMRFO on engineering design test suites. (**a**) 4-bar truss; (**b**) disk brake; (**c**) welded beam; (**d**) speed reducer.



Underlined values indicate the best results.

#### *3.5. Evaluation on OPF Incorporating Wind/Solar/Small-Hydro Energy*

#### 3.5.1. Problem Methodology

#### **Wind power**

The wind cost can be expressed as below:

$$C_{Tw} = C_{dw} + C_{uew} + C_{oew} \tag{23}$$

with

$$C_{dw} = d_w P_{ws} \tag{24}$$

$$C_{uew} = K_{uew}\left(P_{wav} - P_{ws}\right) = K_{uew} \int_{P_{ws}}^{P_{wr}} \left(p_w - P_{ws}\right) f_w(p_w)\, dp_w \tag{25}$$

$$C_{oew} = K_{oew}\left(P_{ws} - P_{wav}\right) = K_{oew} \int_0^{P_{ws}} \left(P_{ws} - p_w\right) f_w(p_w)\, dp_w \tag{26}$$

where $d_w$, $K_{oew}$, and $K_{uew}$ are the coefficients of direct, over-, and underestimation cost, respectively; $P_{ws}$ and $P_{wav}$ are the scheduled and actual available wind power, respectively; $P_{wr}$ is the rated power output of the plant; and $f_w(p_w)$ is the probability density function of the wind power.
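As a concrete illustration of Equations (23)-(26), the two expectation integrals can be evaluated numerically once a density $f_w$ is fixed (a Python sketch; the Weibull form of the power density and every numeric default below are illustrative assumptions, not the paper's data):

```python
import numpy as np

def wind_cost(P_ws, P_wr=75.0, d_w=1.6, K_uew=1.5, K_oew=3.0, k=2.0, c=9.0):
    """Total scheduled-wind cost C_Tw = C_dw + C_uew + C_oew, Eqs. (23)-(26),
    with the integrals over f_w(p_w) approximated on a uniform grid."""
    p = np.linspace(0.0, P_wr, 2001)
    # assumed Weibull-shaped power density with shape k and scale c
    f = (k / c) * (p / c) ** (k - 1) * np.exp(-((p / c) ** k))
    dp = p[1] - p[0]
    C_dw = d_w * P_ws                                           # Eq. (24)
    C_uew = K_uew * np.sum(np.maximum(p - P_ws, 0.0) * f) * dp  # Eq. (25)
    C_oew = K_oew * np.sum(np.maximum(P_ws - p, 0.0) * f) * dp  # Eq. (26)
    return C_dw + C_uew + C_oew
```

Note that scheduling zero wind power still incurs a positive cost through the underestimation term, since spilled available power is penalized; the solar and small-hydro costs below follow the same direct/under/over decomposition with closed-form expectations instead of integrals.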

#### **Solar power**

The total solar cost can be formulated as follows:

$$C_{Ts} = C_{ds} + C_{ues} + C_{oes} \tag{27}$$

with

$$C_{ds} = d_s P_{ss} \tag{28}$$

$$C_{ues} = K_{ues}\left(P_{sav} - P_{ss}\right) = K_{ues} \cdot f_s\left(P_{sav} > P_{ss}\right) \cdot \left[E\left(P_{sav} > P_{ss}\right) - P_{ss}\right] \tag{29}$$

$$C_{oes} = K_{oes}\left(P_{ss} - P_{sav}\right) = K_{oes} \cdot f_s\left(P_{sav} < P_{ss}\right) \cdot \left[P_{ss} - E\left(P_{sav} < P_{ss}\right)\right] \tag{30}$$

where $d_s$, $K_{oes}$, and $K_{ues}$ are the coefficients of direct, over-, and underestimation cost of the solar power generator; $P_{ss}$ and $P_{sav}$ are the scheduled and actual available power, respectively; $f_s(P_{sav} < P_{ss})$ is the probability of a solar power shortage; and $E(P_{sav} > P_{ss})$ and $E(P_{sav} < P_{ss})$ are the expectations of solar power above and below $P_{ss}$.

#### **Small-hydro power**

The small-hydro power cost is defined as follows:

$$C_{Tsh} = C_{dsh} + C_{uesh} + C_{oesh} \tag{31}$$

with

$$C_{dsh} = d_s P_{ss} + d_h P_{hs} \tag{32}$$

$$C_{uesh} = K_{uesh}\left(P_{shav} - P_{shs}\right) = K_{uesh} \cdot f_{sh}\left(P_{shav} > P_{shs}\right) \cdot \left[E\left(P_{shav} > P_{shs}\right) - P_{shs}\right] \tag{33}$$

$$C_{oesh} = K_{oesh}\left(P_{shs} - P_{shav}\right) = K_{oesh} \cdot f_{sh}\left(P_{shav} < P_{shs}\right) \cdot \left[P_{shs} - E\left(P_{shav} < P_{shs}\right)\right] \tag{34}$$

where $d_h$ is the small-hydro direct cost coefficient; $K_{oesh}$ and $K_{uesh}$ are the over- and underestimation cost coefficients of the combined solar and small-hydro power generator; $P_{shs}$ and $P_{shav}$ are the scheduled and actual available power, respectively; and $E(P_{shav} > P_{shs})$ and $E(P_{shav} < P_{shs})$ are the expectations of the combined system power above and below $P_{shs}$.

#### **Objective functions**

• Total cost

The network total cost including the thermal/wind/solar/small-hydro generators is modeled as follows:

$$F_1 = \min\left\{F_T + C_{Tw} + C_{Ts} + C_{Tsh}\right\} \tag{35}$$

where

$$F_T = \sum_{i=1}^{Ng} a_i + b_i P_{tgi} + c_i P_{tgi}^2 + \left|d_i \cdot \sin\left(e_i \cdot \left(P_{tgi}^{min} - P_{tgi}\right)\right)\right| \tag{36}$$

$a_i$, $b_i$, and $c_i$ are the conventional generators' cost coefficients; $d_i$ and $e_i$ are the coefficients of the valve-point loading effect.

• Emission

The emission function is formulated using an exponential function as shown below [54]:

$$F_2 = E = \min\left\{\sum_{i=1}^{Ng} 10^{-2}\left(\alpha_i + \beta_i P_{gi} + \gamma_i P_{gi}^2\right) + \xi_i \exp\left(\lambda_i P_{gi}\right)\right\} \tag{37}$$

where $\alpha_i$, $\beta_i$, $\gamma_i$, $\xi_i$, and $\lambda_i$ are the emission coefficients of the power plant.

• Voltage deviation

The voltage deviation is calculated by:

$$F_3 = VD = \min\left\{\sum_{i=1}^{Npq} |V_{Li} - 1.0|\right\} \tag{38}$$

• Power loss

The Power loss is calculated by:

$$F_4 = P_{loss} = \min\left\{\sum_{l=1}^{Nl} G_{l(i,j)}\left(V_i^2 + V_j^2 - 2V_i V_j \cos(\delta_{ij})\right)\right\} \tag{39}$$

where $G_{l(i,j)}$ represents the conductance of line *l*, and $\delta_{ij} = \delta_i - \delta_j$ represents the voltage-angle difference between bus *i* and bus *j*.
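The four objectives of Equations (35)-(39) reduce to straightforward evaluations once the network state is known. The following Python sketch illustrates them (all function and argument names are ours; actual coefficient values would come from the paper's Table 8, and the thermal sum here covers only the conventional units of Equation (36)):

```python
import numpy as np

def thermal_cost(P, a, b, c, d, e, P_min):
    """Quadratic fuel cost with the rectified-sine valve-point term, Eq. (36)."""
    P, a, b, c, d, e, P_min = map(np.asarray, (P, a, b, c, d, e, P_min))
    return float(np.sum(a + b * P + c * P**2
                        + np.abs(d * np.sin(e * (P_min - P)))))

def emission(P, alpha, beta, gamma, xi, lam):
    """Exponential emission function of Eq. (37), summed over generators."""
    P, alpha, beta, gamma, xi, lam = map(
        np.asarray, (P, alpha, beta, gamma, xi, lam))
    return float(np.sum(1e-2 * (alpha + beta * P + gamma * P**2)
                        + xi * np.exp(lam * P)))

def voltage_deviation(V_load):
    """Eq. (38): cumulative deviation of load-bus voltages from 1.0 p.u."""
    return float(np.sum(np.abs(np.asarray(V_load) - 1.0)))

def power_loss(lines, V, delta):
    """Eq. (39). lines: iterable of (i, j, G_l) with terminal bus indices
    and line conductance; V, delta: bus voltage magnitudes and angles (rad)."""
    return float(sum(g * (V[i]**2 + V[j]**2
                          - 2.0 * V[i] * V[j] * np.cos(delta[i] - delta[j]))
                     for i, j, g in lines))
```

A line joining two buses at equal voltage magnitude and angle contributes no loss, which is a quick consistency check of the cosine term in Equation (39).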

#### **Constraints**

• Equality constraints

The power flow equations are assumed as equality constraints that are represented by:

$$\begin{cases} P_{gi} - P_{di} - |V_i| \sum_{j=1}^{Nb} |V_j| \left[G_{ij}\cos(\theta_{ij}) + B_{ij}\sin(\theta_{ij})\right] = 0 \\ Q_{gi} - Q_{di} - |V_i| \sum_{j=1}^{Nb} |V_j| \left[G_{ij}\sin(\theta_{ij}) - B_{ij}\cos(\theta_{ij})\right] = 0 \end{cases} \tag{40}$$

where *Nb* is the number of buses; $Q_{gi}$ and $P_{gi}$ are the generated reactive and active power, respectively; $Q_{di}$ and $P_{di}$ are the reactive and active power demands, respectively; and $G_{ij}$ and $B_{ij}$ are the components of the admittance matrix $Y_{ij} = G_{ij} + jB_{ij}$, namely the conductance and susceptance.

• Inequality constraints

The inequality constraints are given as below:

− Generator constraints:

$$V_{gi}^{min} \le V_{gi} \le V_{gi}^{max} \qquad i = 1, \ldots, Ng \tag{41}$$

$$P_{tgi}^{min} \le P_{tgi} \le P_{tgi}^{max} \qquad i = 1, \ldots, Ntg \tag{42}$$

$$P_{ws,i}^{min} \le P_{ws,i} \le P_{ws,i}^{max} \qquad i = 1, \ldots, Nwg \tag{43}$$

$$P_{ss,i}^{min} \le P_{ss,i} \le P_{ss,i}^{max} \qquad i = 1, \ldots, Nsg \tag{44}$$

$$P_{shs,i}^{min} \le P_{shs,i} \le P_{shs,i}^{max} \qquad i = 1, \ldots, Nshg \tag{45}$$

$$Q_{tgi}^{min} \le Q_{tgi} \le Q_{tgi}^{max} \qquad i = 1, \ldots, Ntg \tag{46}$$

$$Q_{ws,i}^{min} \le Q_{ws,i} \le Q_{ws,i}^{max} \qquad i = 1, \ldots, Nwg \tag{47}$$

$$Q_{ss,i}^{min} \le Q_{ss,i} \le Q_{ss,i}^{max} \qquad i = 1, \ldots, Nsg \tag{48}$$

$$Q_{shs,i}^{min} \le Q_{shs,i} \le Q_{shs,i}^{max} \qquad i = 1, \ldots, Nshg \tag{49}$$

− Security constraints:

$$V_{Li}^{min} \le V_{Li} \le V_{Li}^{max} \qquad i = 1, \ldots, Npq \tag{50}$$

$$S_{li} \le S_{li}^{max} \qquad i = 1, \ldots, Nl \tag{51}$$

where *Nl* is the number of transmission lines, and $S_{li}$ and $S_{li}^{max}$ denote the apparent power flow of line *i* and its maximum limit, respectively.
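Checking feasibility against Equations (40)-(51) amounts to computing the power-balance residuals plus simple limit tests, as in the following sketch (illustrative Python; the names and the matrix convention `theta[i, j]` = angle difference between buses *i* and *j* are our assumptions):

```python
import numpy as np

def power_mismatch(Pg, Pd, Qg, Qd, V, theta, G, B):
    """Active/reactive power-balance residuals of Eq. (40); a feasible
    operating point drives both returned vectors to zero."""
    inj = V[:, None] * V[None, :]   # |V_i||V_j| products for all bus pairs
    dP = Pg - Pd - np.sum(inj * (G * np.cos(theta) + B * np.sin(theta)), axis=1)
    dQ = Qg - Qd - np.sum(inj * (G * np.sin(theta) - B * np.cos(theta)), axis=1)
    return dP, dQ

def within_limits(x, x_min, x_max):
    """Box constraints of Eqs. (41)-(50); Eq. (51) is the one-sided case,
    an upper bound on the line flow magnitude."""
    x = np.asarray(x)
    return bool(np.all(x_min <= x) and np.all(x <= x_max))
```

An isolated bus with zero admittance and matched generation and demand gives exactly zero mismatch, which is a convenient smoke test before wiring in real network data.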

#### 3.5.2. Results of the OPF Problem

To assess the performance of the suggested NSMRFO algorithm against other approaches, several cases related to the modified IEEE 30-bus optimal power flow problem integrating wind/solar/small-hydro power are investigated. This test system comprises 41 branches and 6 generating units: 3 thermal generators at buses 1, 2, and 8; wind and solar plants at buses 5 and 11, respectively; and combined solar and small-hydro generators connected at bus 13, as summarized in Table 7. The detailed input data for the considered IEEE 30-bus system are given in [54]. The thermal generators' coefficients are provided in Table 8. Solar irradiance, wind distribution, and small-hydro river flow rate are modeled using Lognormal, Weibull, and Gumbel probability density functions (PDFs), respectively [54]. These PDF parameters are listed in Table 9. Additionally, in terms of the optimization problem, the system has 11 control variables, with various constraints and objective functions, for total active and reactive power demands of 283.4 MW and 126.2 MVAR, respectively.



**Table 8.** Cost and emission coefficients of thermal generators [54].


To validate the suggested approach, five well-known stochastic algorithms are employed as competitors, namely MOMVO [48], MOWCA [49], NSGWO [50], MOMRFO [51], and IMOMRFO [28]. The test system under study is examined via three case studies defined as follows:


• Minimizing the total cost and power loss.


**Table 9.** Characteristic details of wind/solar/small-hydro generators [54].

The optimal settings of the control variables, their allowable ranges, the numerical best outcomes for each objective, and the best compromise solutions (BCS) are depicted in Tables 10–15. Furthermore, the corresponding optimal Pareto fronts are illustrated in Figures 14a–16a. It is worth noting that all findings are generated from twenty independent runs with a population size of 100 and 200 iterations. According to the aforementioned tables, it is clearly seen that the NSMRFO results are remarkably better than those of the competitor approaches in all cases, notably in the best compromise solution tables. In addition, it is clearly observed from the figures that the suggested NSMRFO generates superior Pareto non-dominated solutions with good distribution and a well-diversified front in comparison to the other algorithms. As shown in Figures 14b–16b, the BCS voltage profiles of the PQ load buses do not exceed their limits and remain within the minimum and maximum bounds.

**Figure 14.** Optimal Pareto fronts of all the algorithms for case 1. (**a**) Pareto front of optimal solutions; (**b**) voltage profile of PQ buses.

**Figure 15.** Optimal Pareto fronts of all the algorithms for case 2. (**a**) Pareto front of optimal solutions; (**b**) voltage profile of PQ buses.



**Table 11.** Findings of best compromise solutions for case 1.






**Table 13.** Findings of best compromise solutions for case 2.





**Table 15.** Findings of best compromise solutions for case 3.


**Figure 16.** Optimal Pareto fronts of all the algorithms for case 3. (**a**) Pareto front of optimal solutions; (**b**) voltage profile of PQ buses.

#### *3.6. Discussion*

As previously stated, the main difference between the suggested approach and its competitors is its better diversity and accuracy on the majority of the problems, in which NSMRFO ranks first, followed by MOMRFO on the unconstrained problems, NSGA-II on the constrained test suite and the engineering problems, and IMOMRFO on the CEC2020 benchmark MMO functions, while MOWCA achieves a slightly better score in a few cases. By contrast, MOSMA, MOMVO, and MOBO/D obtain the worst ranks. On the other hand, NSMRFO generates very challenging and competitive solutions on most benchmark test suites. In summary, all quantitative and qualitative outcomes and analyses reveal the higher accuracy and significant diversity of NSMRFO in dealing with different unconstrained, constrained, CEC2020 multimodal multi-objective, and engineering benchmark functions. This stems from the strong exploitation and exploration ability of MRFO, since NSMRFO employs mechanisms similar to those of its single-objective counterpart and inherits its high convergence. In addition, the crowding distance and archive selection methodologies also contribute to the high coverage and convergence of NSMRFO.

#### **4. Conclusions**

In this work, the ability of the suggested multi-objective manta ray foraging optimizer, known as NSMRFO, to handle problems with different characteristics has been tested. The NSMRFO optimizer has been developed on the basis of NSGA-II operators, namely the crowding distance, elitist non-dominated sorting, and an archive mechanism. A set of test functions has been employed to benchmark the performance of the NSMRFO approach from different perspectives: seven classical functions, the ZDT and TYD suites, four constrained problems, twenty-four CEC2020 functions, four engineering design problems, and the IEEE 30-bus OPF with renewable wind/solar/small-hydro power sources. Additionally, to qualitatively affirm the achieved solutions, the obtained fronts have been compared to the original true Pareto fronts. For performance assessment, various performance metrics in the search and objective spaces have been utilized, such as the generational distance (GD), inverted generational distance (IGDX and IGDF), spacing metric, reciprocal of Pareto set proximity (rPSP), and reciprocal of hypervolume (rHV). NSMRFO provides a comparatively accurate estimation of the front shape with a closer distance to the true PF than the multimodal multi-objective evolutionary approaches and some recent competitive algorithms. This impressive performance motivates applying NSMRFO to challenging real-world problems in various engineering fields in future work.

**Author Contributions:** Conceptualization, F.D. and M.O.; methodology, F.D.; software, F.D.; validation, R.E., M.O. and S.K.; formal analysis, F.D.; investigation, S.K.; resources, M.O. and A.M.A; data curation, F.D.; writing—original draft preparation, F.D. and S.K.; writing—review and editing, F.D. and A.M.A.; visualization, R.E. and M.O.; supervision, R.E. and M.O.; project administration, R.E. and S.K.; funding acquisition, A.M.A. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research was funded by the Deputyship for Research & Innovation, Ministry of Education in Saudi Arabia through the project number "IF\_2020\_NBU\_408".

**Institutional Review Board Statement:** Not Applicable.

**Informed Consent Statement:** Not Applicable.

**Data Availability Statement:** Not Applicable.

**Acknowledgments:** The authors extend their appreciation to the Deputyship for Research & Innovation, Ministry of Education in Saudi Arabia for funding this research work through the project number "IF\_2020\_NBU\_408". The authors gratefully thank the Prince Faisal bin Khalid bin Sultan Research Chair in Renewable Energy Studies and Applications (PFCRE) at Northern Border University for their support and assistance.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


### *Article* **Optimal Design of TD-TI Controller for LFC Considering Renewables Penetration by an Improved Chaos Game Optimizer**

**Ahmed H. A. Elkasem 1, Mohamed Khamies 1, Mohamed H. Hassan 1, Ahmed M. Agwa 2,3,\* and Salah Kamel <sup>1</sup>**


**Abstract:** This study presents an innovative strategy for load frequency control (LFC) using a combination structure of tilt-derivative and tilt-integral gains to form a TD-TI controller. Furthermore, a new improved optimization technique, namely the quantum chaos game optimizer (QCGO) is applied to tune the gains of the proposed combination TD-TI controller in two-area interconnected hybrid power systems, while the effectiveness of the proposed QCGO is validated via a comparison of its performance with the traditional CGO and other optimizers when considering 23 bench functions. Correspondingly, the effectiveness of the proposed controller is validated by comparing its performance with other controllers, such as the proportional-integral-derivative (PID) controller based on different optimizers, the tilt-integral-derivative (TID) controller based on a CGO algorithm, and the TID controller based on a QCGO algorithm, where the effectiveness of the proposed TD-TI controller based on the QCGO algorithm is ensured using different load patterns (i.e., step load perturbation (SLP), series SLP, and random load variation (RLV)). Furthermore, the challenges of renewable energy penetration and communication time delay are considered to test the robustness of the proposed controller in achieving more system stability. In addition, the integration of electric vehicles as dispersed energy storage units in both areas has been considered to test their effectiveness in achieving power grid stability. The simulation results elucidate that the proposed TD-TI controller based on the QCGO controller can achieve more system stability under the different aforementioned challenges.

**Keywords:** improved chaos game optimization; TD-TI controller; load frequency control; renewable energy sources; electrical vehicles

#### **1. Introduction**

Recently, the world has become voracious in its use of electrical power due to the growth of industrial and residential loads. Therefore, it has become necessary to establish new electrical power grids to accommodate the load demands. As a result, energy planners have moved to integrate renewable energy sources (RESs) alongside the traditional power grids in the electrical power system to reduce the demerits of these traditional units. In addition, the penetration of RESs into newly established power systems is economically attractive, since it saves the oil, coal, and gas that fuel traditional power plants, whose combustion releases carbon dioxide, enlarging the ozone hole and intensifying the global warming phenomenon [1]. Although the presence of RESs in electrical power grids reduces the severity of the resulting pollution from the

**Citation:** Elkasem, A.H.A.; Khamies, M.; Hassan, M.H.; Agwa, A.M.; Kamel, S. Optimal Design of TD-TI Controller for LFC Considering Renewables Penetration by an Improved Chaos Game Optimizer. *Fractal Fract.* **2022**, *6*, 220. https://doi.org/10.3390/ fractalfract6040220

Academic Editor: Savin Treanță

Received: 8 February 2022 Accepted: 6 April 2022 Published: 13 April 2022

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

traditional units, these renewable sources suffer from a lack of system inertia. The reduction in power system inertia caused by renewable sources affects the stability and security of the system, producing larger fluctuations in system frequency [2,3]. Moreover, several other factors lead to frequency fluctuations, such as mismatches between generated and demanded power, system parameter variations, and various kinds of load variation. These fluctuations in system frequency can be tackled by the LFC [4]. Researchers have therefore developed numerous control techniques for achieving power system reliability by keeping the system frequency and tie-line power flow within tolerable limits.

Researchers have devoted considerable attention to the LFC issue in different power system structures: the single-area power system [5,6], the multi-area interconnected power system [7–10], and the deregulated power system [11,12]. In addition, several control techniques have been implemented to suppress system frequency fluctuations, including intelligent control techniques (i.e., fuzzy logic controllers [13], artificial neural networks [14], and adaptive neuro-fuzzy controllers [15]). Moreover, robust control techniques such as the H-infinity technique [16] and μ-synthesis [17] have been utilized to enhance power system performance. Furthermore, optimal control techniques, such as the linear quadratic Gaussian [18] and the linear quadratic regulator [19], have been implemented to keep the frequency within tolerable limits. In industry, the majority of control loops use the proportional-integral-derivative (PID) controller, owing to its well-known merits (i.e., simple construction, applicability, functionality, convenience, and low cost) [20]. Even so, selecting its parameters by trial and error is a cumbersome, complicated process. Thus, researchers have strived to design the optimal PID controller, using different optimization techniques to obtain the optimal controller parameters. Such an optimally designed PID controller ensures more reliable system performance than the conventionally tuned PID controller when facing uncertainties in the studied power grid. Accordingly, several optimization techniques have been utilized to fine-tune the PID controller parameters, including the grasshopper optimization algorithm [21], the ant colony optimization technique [22], the Jaya algorithm [23], and the class topper optimization algorithm [24].

On the other side, fractional-order controllers (FOCs) have become distinguished candidates for power system stabilization due to their merits (i.e., flexibility in configuration and a higher degree of freedom). FOCs introduce additional pole types, such as hyper-damped poles, that need to be fine-tuned; this expands the stable region, giving more flexibility in the controller design process [25]. Several types of controllers belong to the FOC family; the fractional-order proportional-integral-derivative (FOPID) controller is one member that has been presented in [26,27] and utilized in several electrical power systems [28,29]. Moreover, the TID controller is another FOC; its construction is identical to that of the PID controller except that the proportional term is tilted with a (1/s^(1/n)) transfer function. This additional transfer function provides the optimization process with better feedback and good tracking performance. Lately, the TID controller has been applied to the LFC problem thanks to its merits: it can reshape the closed-loop system parameters, it has a tremendous ability to reject disturbances, and it offers greater reliability and robustness [30,31]. There is no doubt that fractional calculus gives researchers many options for creativity and diversity in controller design. As a result, different engineering problems have been solved by combining the FOPID and TID properties into a hybrid controller [32]. In addition, researchers have pursued another control design strategy, the cascaded controller (CC), in which one controller is followed by another; CCs have more tuning knobs, which yields better results than non-cascaded controllers. Thus, many scientific studies have used different CCs to solve the LFC problem [33,34]. Another construction has been applied while designing different

controllers for studying the LFC issue, which combines two different controllers to capture the benefits of both. Examples of such combinations from the literature include the model predictive control (MPC) controller combined with the linear quadratic Gaussian controller [35] and an adaptive MPC combined with a recursive polynomial model estimator [36]. Furthermore, a new controller structure, labeled a feed-forward/feed-backward controller, has been presented to reduce the disadvantages of the PID and TID controllers under system uncertainties that affect the control input signal, and many studies have demonstrated the robustness of this structure in achieving system stability. The integral-proportional-derivative (I-PD) controller and the integral-tilt-derivative (I-TD) controller have been proposed to cope with the LFC problem, achieving more system stability than the PID and TID controllers, respectively [37,38].

System stability does not depend on the controller design alone; the optimization technique is equally critical and must be selected carefully to attain the optimal controller parameters. Previously, traditional optimization methods such as the tracking approach [39] and the aggregation methods [40] were applied to regulate the system frequency. However, traditional optimization methods suffer from several drawbacks, such as stagnation, entrapment in local minima, the need for many iterations, and dependence on initial conditions to attain the optimal solution. Meta-heuristic optimization techniques such as the artificial bee colony [41], the salp swarm algorithm (SSA) [42], and the whale optimization algorithm (WOA) [43] have therefore been proposed to overcome these drawbacks. Although meta-heuristic algorithms are not guaranteed to find the global optimum, they can often find a sufficiently good solution in a reasonable time, making them an alternative to exhaustive search, which would take exponential time. Nevertheless, these techniques have their own demerits, such as a slow convergence rate, poor local search capability, and convergence to local optima. Algorithmic researchers have improved these techniques to diminish such drawbacks; examples of improved algorithms used to achieve system stability include the improved stochastic fractal search algorithm [44] and the sine augmented scaled sine cosine algorithm [45]. In this regard, the authors of this work propose an improved algorithm, known as QCGO, to select the parameters of the suggested combining TD-TI controller and attain optimal performance of the studied power grid.

Referring to the aforementioned LFC literature, several control strategies that depend on designer experience, such as MPC, H-infinity techniques, and fuzzy logic control, can attain the desired performance, but their parameter-selection procedures are time-consuming. In addition, the conventional PID controller has difficulty coping with system uncertainties. Moreover, several studies have relied on conventional algorithms and meta-heuristic optimization techniques that have many demerits compared with improved techniques, which enhance the search process and obtain the global solution with few search agents. Furthermore, several previous studies did not consider the various challenges facing power systems (i.e., different types of load variation such as series SLP and RLV, high penetration of RESs, and communication time delay). Motivated by these observations, this study proposes a new control construction, labeled a combining TD-TI controller, derived from the TID controller form to enhance the stability of the studied system. The parameters of the proposed combining TD-TI controller are selected using the improved QCGO algorithm while considering the challenges of high RES penetration, different load perturbation types, and communication time delay.

This work is presented to overcome the limitations of previously published works in the literature. Table 1 elucidates the differences between this work and other published works related to the LFC issue.



The main contributions of this work can be elucidated in detail as follows:


vii. The consideration of the integration of electrical vehicles (EVs) in both areas to support the proposed controller in overcoming the system frequency excursions during high renewables penetration.

The remainder of this article is organized into several sections that are clarified as follows: the studied system topology which considers the high penetration of RESs and EVs is illustrated in Section 2. Section 3 discusses the proposed control approach and the formulation of the studied problem. Then, the procedure of the improved QCGO technique is given in Section 4. Moreover, the simulation results according to the different scenarios are clarified in Section 5. Finally, Section 6 summarizes the conclusions of the current work.

#### **2. The Studied System Topology**

#### *2.1. Two-Area Interconnected Hybrid Power Grid Configuration*

In this article, the LFC issue of electrical power grids is addressed through a study of a two-area interconnected hybrid power system. The studied power grid encompasses two interconnected areas, each containing several conventional generation power plants: a thermal unit, a hydropower unit, and a gas unit. Each area of the studied power grid, with its three traditional units (i.e., thermal, hydro, and gas), has a rated capacity of 2000 MW [48]; the largest share of electrical power is provided by the thermal power plant, which contributes 1087 MW, followed by the hydropower plant with 653 MW and the gas turbine with 262 MW. The investigated power grid is presented as the simplified model shown in Figure 1.

**Figure 1.** The studied power grid schematic diagram.

Figure 2 shows the block diagram of the studied two-area interconnected hybrid power grid, and the transfer functions in the studied power grid are listed in Table 2. The combining TD-TI controller is proposed for each generation unit in both areas to minimize the oscillations in the frequencies of both areas and in the tie-line power flow between them. The input signal of the proposed combining TD-TI controller is the *ACE*, while its output signal is the secondary/supplementary control action on each generation power plant, which provides extra active power to enhance power grid performance. Table 3 elucidates all the parameters included in the studied power grid with their nominal values. The *ACEs* in both areas can be obtained from Equations (1) and (2) [47]:

$$ACE_1 = \Delta P_{\text{tie}1-2} + B_1 \Delta f_1 \tag{1}$$

$$ACE_2 = \Delta P_{\text{tie}2-1} + B_2 \Delta f_2 \tag{2}$$
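As a minimal illustration of Equations (1) and (2), the ACE of each area is the sum of its tie-line power deviation and its frequency deviation weighted by the bias coefficient. The sketch below assumes per-unit signals, and the bias value used is illustrative rather than the value listed in Table 3.

```python
def area_control_error(delta_p_tie, bias, delta_f):
    """ACE of Eqs. (1)-(2): ACE = dP_tie + B * df."""
    return delta_p_tie + bias * delta_f

# Illustrative per-unit values (the bias B1 here is hypothetical):
ace1 = area_control_error(delta_p_tie=0.01, bias=0.425, delta_f=-0.02)
print(round(ace1, 4))  # 0.0015
```

A positive ACE indicates surplus generation in the area, so the supplementary controller acts to reduce its output, and vice versa.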

**Figure 2.** The transfer function model of the studied power grid.


**Table 2.** The transfer functions that are presented in the studied power grid.



**Table 3.** The standard parameter values of the two interconnected identical areas [47].


#### *2.2. The Installation of Wind Farm Model*

This work considers the high penetration of RESs, including wind power, in the investigated hybrid power grid. The MATLAB/SIMULINK program (R2015a) (The MathWorks, Inc., Natick, MA, USA) is used to implement a simplified model of wind power sharing its energy in the first area of the studied power grid. The wind power model generates power that mimics the real behavior of the power generated by actual wind farms. This is achieved using a white-noise block that produces a random signal, which is multiplied by the wind speed, as shown in Figure 3 [47]. The captured output power from the wind model can be formulated in the following equations [47]:

$$P_{\rm wt} = \frac{1}{2} \rho A_{\rm T} v_{\rm W}^3\, C_{\rm P}(\lambda, \beta) \tag{3}$$

$$C_{\rm P}(\lambda,\beta) = C_1 \left( \frac{C_2}{\lambda_i} - C_3\beta - C_4\beta^{2} - C_5 \right) e^{\frac{-C_6}{\lambda_i}} + C_7\lambda_{\rm T} \tag{4}$$

$$\lambda_{\rm T} = \lambda_{\rm T}^{opt} = \frac{\omega_{\rm T}\, r_{\rm T}}{V_{\rm W}} \tag{5}$$

$$\frac{1}{\lambda_{i}} = \frac{1}{\lambda_{\rm T} + 0.08\beta} - \frac{0.035}{\beta^3 + 1} \tag{6}$$
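Equations (3)–(6) can be evaluated numerically as in the sketch below. The coefficients C1–C7 are the common textbook values for this Cp model (with C4 = 0 recovering the usual six-coefficient form), not necessarily those used in [47], and the operating point is illustrative.

```python
import math

def inv_lambda_i(lam_t, beta):
    """1/lambda_i from Eq. (6)."""
    return 1.0 / (lam_t + 0.08 * beta) - 0.035 / (beta ** 3 + 1.0)

def power_coefficient(lam_t, beta,
                      c=(0.5176, 116.0, 0.4, 0.0, 5.0, 21.0, 0.0068)):
    """Cp(lambda, beta) per Eq. (4); C1..C7 are common textbook values
    (C4 = 0 gives the usual six-coefficient form), not necessarily [47]'s."""
    c1, c2, c3, c4, c5, c6, c7 = c
    li = inv_lambda_i(lam_t, beta)
    return (c1 * (c2 * li - c3 * beta - c4 * beta ** 2 - c5)
            * math.exp(-c6 * li) + c7 * lam_t)

def turbine_power(rho, area, v_wind, lam_t, beta):
    """Captured mechanical power of one turbine, Eq. (3)."""
    return 0.5 * rho * area * v_wind ** 3 * power_coefficient(lam_t, beta)

# Illustrative operating point: lambda_T = 8.1, beta = 0 gives Cp near 0.48.
print(round(power_coefficient(8.1, 0.0), 2))  # ≈ 0.48
```

Multiplying the resulting power by the fleet size (257 turbines here) then yields the farm output discussed below.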

**Figure 3.** The implemented model of wind power using MATLAB/Simulink program (R2015a).

All of the parameter values for the utilized wind farm are presented in [47]. Figure 4 shows the random output power of 257 wind turbine units of 750 kW each. The total generated power from the studied wind farm is about 192 MW.

**Figure 4.** The output power of the wind model.

#### *2.3. The Installation of the PV Model*

The photovoltaic (PV) model can be built using the MATLAB/SIMULINK program (R2015a), as described in Figure 5. The generated output power from the model is similar to the real output power of an actual PV plant. The output energy of the PV model, about 116 MW, is injected into the second area of the studied power grid. Here, the white-noise block in the MATLAB program (R2015a) is used to obtain random output oscillations that are multiplied by the standard output power generated by a real PV plant. The energy generated by the presented PV model can be obtained as formulated in Equation (7) [6]. Figure 6 clarifies the random output power generated by the PV model.

**Figure 5.** The implemented model of the solar power plant using MATLAB/Simulink (R2015a) program.

**Figure 6.** The output power of the photovoltaic model.

#### *2.4. The Installation of EV Model*

EVs can participate effectively in frequency regulation by receiving the LFC order and passing this signal to the EV to control its power during the charging and discharging process. The response to the LFC signal is limited by the number of controllable EVs available in the studied power grid and by the state of charge relative to their capacity. The EV model is similar to the model of a battery energy storage system, since its batteries supply extra energy to the power grid during fluctuations to regulate the frequency excursions. However, EV batteries may not be fully charged, owing to the mobile, load-like nature of EVs, which affects the amount of extra energy available to tackle the LFC problem. Thus, it is important to check the EV charging level to ensure greater system enhancement under different system fluctuations. The output power of an EV is modeled by a first-order transfer function with an electric vehicle time constant *TEV* of 0.28 s in series with an electric vehicle controller gain *KEV* of 1, where *KEV* represents the ratio of the change in charging power of the EV batteries to the change in system frequency. The transfer function representing the EV model is formulated in Equation (8) [49]. Figure 7 describes the EV model built in the MATLAB/SIMULINK program (R2015a).

$$\frac{\text{K}\_{EV}}{\text{1} + \text{s} \; T\_{EV}} \tag{8}$$

**Figure 7.** The implemented model of the electrical vehicle using MATLAB/Simulink (R2015a).
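Since Equation (8) is a first-order lag, the EV power response to an LFC command can be sketched with a simple forward-Euler simulation. The input profile and step size below are illustrative, while K_EV = 1 and T_EV = 0.28 s follow the text.

```python
def ev_power_response(u, dt, k_ev=1.0, t_ev=0.28):
    """Forward-Euler simulation of the first-order EV model of Eq. (8):
    T_EV * dP/dt + P = K_EV * u."""
    p, out = 0.0, []
    for u_k in u:
        p += dt * (k_ev * u_k - p) / t_ev
        out.append(p)
    return out

# Unit-step LFC command: the EV power settles toward K_EV within a few T_EV.
resp = ev_power_response([1.0] * 2000, dt=0.001)   # simulate 2 s
print(round(resp[-1], 3))  # ≈ 0.999
```

With T_EV = 0.28 s, the EV contribution reaches about 95% of the commanded change within roughly three time constants (~0.84 s), which is why EVs are effective fast-acting reserves for LFC.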

#### **3. Control Methodology and Problem Formulation**

Due to the high RES penetration, communication time delay, and various types of load perturbation, it is essential to implement a robust controller to enhance system performance during abnormal conditions. Hence, this study proposes a newly developed controller construction, known as the combining TD-TI controller, to overcome fluctuations resulting from the aforementioned challenges. Moreover, the proposed controller parameters are selected by an improved algorithm labeled QCGO.

#### *3.1. The Proposed Control Strategy*

This paper presents an efficient controller, labeled the combining TD-TI controller, which is an improved, modified structure of the TID controller shown in Figure 8. The TID controller is a sort of fractional-order controller (FOC) whose design relies on fractional-order calculus. Its construction is similar to that of the PID controller except that the proportional term is tilted with a (1/s^(1/n)) transfer function. In this regard, this paper proposes a combining TD-TI controller derived from the TID controller, owing to the TID's merits, such as easy tuning, superior rejection of fluctuations, and low sensitivity to variations of the system parameters [50]. The proposed combining TD-TI controller is utilized to enhance the studied power grid performance by damping frequency oscillations in both areas and suppressing fluctuations in the tie-line power flow. Furthermore, the proposed combining TD-TI controller parameters are selected using the improved QCGO algorithm. In general, the transfer function of the combining TD-TI controller is formulated as follows [50]:

$$G_{i1,TD}(s) = \frac{Kt_i}{s^{\frac{1}{n}}} + Kd_i\, s \tag{9}$$

$$G_{i2,TI}(s) = \frac{Kt_i}{s^{\frac{1}{n}}} + \frac{Ki_i}{s} \tag{10}$$

$$G_{i,\mathrm{total}}(s) = G_{i1,TD}(s) + G_{i2,TI}(s) \tag{11}$$

where *i* refers to the proposed controller of the specified (thermal, hydro, or gas) turbine; thus, *i* = 1, 2, 3. The gain values (*Kti*, *Kii*, and *Kdi*) are selected within the range [0, 10], and n is tuned within the range [1, 10]. The control signal of the *i*-th area can be expressed as follows [38]:

$$U_{i}(s) = G_{i,\mathrm{total}}(s) \times ACE_{i}(s) \tag{12}$$

**Figure 8.** The construction of the proposed combining tilt-derivative and tilt-integral controller.
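A minimal numerical sketch of Equations (9)–(11): evaluating the combining TD-TI transfer function at s = jω shows how the tilt, integral, and derivative terms add up at a given frequency. The gains and the tilt order n below are placeholders within the stated search ranges, not the optimized values.

```python
def tdti_response(omega, kt, ki, kd, n):
    """Combining TD-TI controller of Eqs. (9)-(11) evaluated at s = j*omega.
    Python's complex power uses the principal branch for s**(1/n)."""
    s = 1j * omega
    g_td = kt / s ** (1.0 / n) + kd * s   # Eq. (9): tilt-derivative part
    g_ti = kt / s ** (1.0 / n) + ki / s   # Eq. (10): tilt-integral part
    return g_td + g_ti                    # Eq. (11)

# Placeholder gains within the stated search ranges (not optimized values).
g = tdti_response(omega=1.0, kt=2.0, ki=1.5, kd=0.8, n=3.0)
print(round(abs(g), 3))  # ≈ 4.392
```

The doubled tilt term reflects the structure of Figure 8, in which each branch of the combined controller carries its own tilted proportional action.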

In the controller design process, several performance criteria are available, such as the integral of time-weighted absolute error (*ITAE*), the integral of squared error (*ISE*), the integral of time-weighted squared error (*ITSE*), and the integral of absolute error (*IAE*). The *ITAE* and *ISE* criteria are often preferred in the literature for minimizing the objective function because of their merits over *ITSE* and *IAE*. The *ISE* criterion integrates the square of the error signal over the simulation time; since the square of a large error is much larger than that of a small one, *ISE* effectively penalizes large errors while tolerating persistent small errors throughout the simulation. The authors therefore adopt the *ITAE* criterion for minimizing the objective function, since it multiplies the integral of the absolute error by time; this time weighting speeds up the optimization process and achieves more system stability than the *ISE* criterion [51]. The *ITAE* criterion can be formulated as follows [47]:

$$J = ITAE = \int_0^{T_{sim}} t\left(\left|\Delta f_1\right| + \left|\Delta f_2\right| + \left|\Delta P_{tie}\right|\right)dt \tag{13}$$

where *dt* is the time interval at which the error signals are sampled over the simulation.
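Equation (13) can be approximated from sampled simulation signals by a simple Riemann sum, as sketched below with toy decaying-error signals (the deviations and sampling step are illustrative).

```python
import math

def itae(t, df1, df2, dp_tie):
    """Riemann-sum approximation of the ITAE criterion of Eq. (13)
    over uniformly sampled error signals."""
    dt = t[1] - t[0]
    return sum(ti * (abs(a) + abs(b) + abs(c)) * dt
               for ti, a, b, c in zip(t, df1, df2, dp_tie))

# Toy example: a decaying frequency deviation in area 1 only.
t = [0.01 * k for k in range(1000)]          # 10 s at dt = 0.01 s
df1 = [0.02 * math.exp(-ti) for ti in t]
zero = [0.0] * len(t)
print(round(itae(t, df1, zero, zero), 3))  # ≈ 0.02
```

Because of the factor t inside the integral, errors that persist late in the simulation are weighted more heavily, which is what drives the optimizer toward fast-settling responses.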

#### *3.2. The Proposed Optimization Technique*

In this subsection, the CGO method is briefly described; then, the process of the QCGO technique is presented.

#### 3.2.1. Chaos Game Optimization (CGO) Algorithm

This algorithm is based on certain rules of chaos theory, in which fractals are arranged by the chaos game idea. First, an initialization procedure determines the initial positions of the solution candidates from the following equations [52]:

$$\mathbf{X} = \begin{bmatrix} \mathbf{X}_1 \\ \mathbf{X}_2 \\ \vdots \\ \mathbf{X}_i \\ \vdots \\ \mathbf{X}_m \end{bmatrix} = \begin{bmatrix} x_1^1 & x_1^2 & \cdots & x_1^j & \cdots & x_1^d \\ x_2^1 & x_2^2 & \cdots & x_2^j & \cdots & x_2^d \\ \vdots & \vdots & & \vdots & & \vdots \\ x_i^1 & x_i^2 & \cdots & x_i^j & \cdots & x_i^d \\ \vdots & \vdots & & \vdots & & \vdots \\ x_m^1 & x_m^2 & \cdots & x_m^j & \cdots & x_m^d \end{bmatrix}, \quad \begin{cases} i = 1, 2, \ldots, m \\ j = 1, 2, \ldots, d \end{cases} \tag{14}$$

$$x_i^j(0) = x_{i,\min}^j + \text{rand} \cdot (x_{i,\max}^j - x_{i,\min}^j), \quad \begin{cases} i = 1, 2, \ldots, m \\ j = 1, 2, \ldots, d \end{cases} \tag{15}$$

where d denotes the dimension of the problem and m refers to the total number of initialized candidates inside the search space; $x_{i,\min}^j$ and $x_{i,\max}^j$ are the lower and upper bounds of the decision variables. The position-updating process for the temporary triangles is presented in Figure 9. The mathematical representation of seed$_i^1$, as shown in Figure 9a, is as follows [52]:

$$\text{seed}_{i}^{1} = \mathbf{X}_{i} + \alpha_{i} \times (\beta_{i} \times \text{GB} - \gamma_{i} \times \text{MG}_{i}), \ i = 1, 2, \ldots, m \tag{16}$$

where GB is the global best, $\alpha_i$ represents the movement limitation factor, and $\beta_i$ and $\gamma_i$ denote random vectors with components in the range [0, 1]. $\text{MG}_i$ is the mean group. From Figure 9b, seed$_i^2$ can be calculated as follows [52]:

$$\text{seed}_{i}^{2} = \text{GB} + \alpha_{i} \times (\beta_{i} \times \mathbf{X}_{i} - \gamma_{i} \times \text{MG}_{i}), \ i = 1, 2, \ldots, m \tag{17}$$

Meanwhile, seed$_i^3$, displayed in Figure 9c, is computed as follows [52]:

$$\text{seed}_{i}^{3} = \text{MG}_{i} + \alpha_{i} \times (\beta_{i} \times \mathbf{X}_{i} - \gamma_{i} \times \text{GB}), \ i = 1, 2, \ldots, m \tag{18}$$

Finally, seed$_i^4$, shown in Figure 9d, can be mathematically represented as follows [52]:

$$\text{seed}_{i}^{4} = \mathbf{X}_{i}\left(x_{i}^{k} = x_{i}^{k} + R\right), \ k = 1, 2, \ldots, d \tag{19}$$

where R refers to a vector with random numbers in the range of [0, 1].

**Figure 9.** Position updating process for the temporary triangles [53].
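The four seed equations (16)–(19) can be sketched per candidate as below. This is an interpretation under the assumption that α_i is a scalar random movement factor, β_i and γ_i are random vectors in [0, 1], and Equation (19) perturbs randomly selected components of the candidate.

```python
import random

def cgo_seeds(x_i, gb, mg_i):
    """Four candidate seeds per Eqs. (16)-(19) of the CGO (a sketch:
    alpha_i as a scalar random factor; beta_i, gamma_i as random
    vectors in [0, 1]; Eq. (19) perturbs random components)."""
    d = len(x_i)
    alpha = random.random()
    beta = [random.random() for _ in range(d)]
    gamma = [random.random() for _ in range(d)]
    seed1 = [x + alpha * (b * g - c * m)                         # Eq. (16)
             for x, b, g, c, m in zip(x_i, beta, gb, gamma, mg_i)]
    seed2 = [g + alpha * (b * x - c * m)                         # Eq. (17)
             for x, b, g, c, m in zip(x_i, beta, gb, gamma, mg_i)]
    seed3 = [m + alpha * (b * x - c * g)                         # Eq. (18)
             for x, b, g, c, m in zip(x_i, beta, gb, gamma, mg_i)]
    seed4 = [x + random.random() if random.random() < 0.5 else x  # Eq. (19)
             for x in x_i]
    return seed1, seed2, seed3, seed4

random.seed(0)
s1, s2, s3, s4 = cgo_seeds([0.0, 0.0], [1.0, 1.0], [0.5, 0.5])
print(len(s1), len(s4))  # 2 2
```

In the full algorithm, each seed is evaluated against the objective function and the population is updated with the best candidates, which is what the main CGO loop iterates.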

#### 3.2.2. The Proposed Quantum Chaos Game Optimization (QCGO) Algorithm

In this subsection, quantum mechanics is used to develop the original CGO algorithm; the resulting quantum model of the CGO algorithm is here called the QCGO algorithm. Quantum mechanics was previously employed to develop the PSO in [54]. In the quantum model, by employing the Monte Carlo method, the new solution $x_{\text{new}1}$ is calculated from the following equations [54]:

If $h \geq 0.5$:

$$x_{\text{new}1} = p + \alpha \cdot \left|\text{Mbest}_{i} - \mathbf{X}_{i}\right| \cdot \ln(1/u) \tag{20}$$

otherwise:

$$x_{\text{new}1} = p - \alpha \cdot \left|\text{Mbest}_{i} - \mathbf{X}_{i}\right| \cdot \ln(1/u) \tag{21}$$

where α refers to a design parameter, u and h denote uniformly distributed random numbers in the range [0, 1], and Mbest is the mean best of the population, defined as the mean of the best positions. It can be calculated as follows [54]:

$$\text{Mbest} = \frac{1}{N} \sum_{l=1}^{N} p_{g,l}(i) \tag{22}$$

where g is the index of the best solution among all the solutions.
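The quantum update of Equations (20)–(21) can be sketched coordinate-wise as follows; p denotes the local attractor, and the value of α used here is an illustrative contraction-expansion coefficient, not the paper's design parameter.

```python
import math
import random

def quantum_update(p, mbest, x, alpha=0.75):
    """Coordinate-wise quantum position update of Eqs. (20)-(21).
    p is the local attractor; alpha = 0.75 is an illustrative choice."""
    new = []
    for p_j, m_j, x_j in zip(p, mbest, x):
        u = 1.0 - random.random()    # uniform in (0, 1], keeps ln finite
        h = random.random()          # decides the sign, Eq. (20) vs (21)
        step = alpha * abs(m_j - x_j) * math.log(1.0 / u)
        new.append(p_j + step if h >= 0.5 else p_j - step)
    return new

# When Mbest coincides with X the spread vanishes and the update returns p.
print(quantum_update([1.0], [2.0], [2.0]))  # [1.0]
```

The spread term |Mbest − X| shrinks as the population clusters around the best solutions, so the search naturally transitions from exploration to exploitation.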

#### **4. The Procedure of the Improved QCGO Algorithm**

*The Performance of QCGO*

The competency and performance of the proposed QCGO algorithm are evaluated on numerous benchmark functions using statistical measurements, such as the best, mean, median, and worst values and the standard deviation (STD), of the best solutions achieved by the proposed technique and other well-known algorithms. The results attained by the QCGO technique are compared with three recent meta-heuristic techniques, namely SDO [55], WOA [56], and BOA [57], in addition to the conventional CGO. All of the mentioned techniques were executed for a maximum of 200 iterations with a population size of 50 over 20 independent runs, using Matlab R2016a on Windows 8.1, 64 bit (Microsoft, Albuquerque, NM, USA). All computations were performed on a 2.40 GHz Core i5-4210U CPU (Intel Corporation, Santa Clara, CA, USA) with 8 GB of RAM. Figure 10 shows the qualitative metrics on F1, F2, F3, F5, F6, F8, F10, F12, F15, F18, and F22, with 2D views of the functions, convergence curves, average fitness histories, and search histories.
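The statistical summary used in the comparison (best, mean, median, worst, and STD over independent runs) can be reproduced as in the following sketch; the fitness values shown are illustrative, not results from the paper.

```python
import statistics

def run_statistics(final_fitness):
    """Best/mean/median/worst/STD summary of the final fitness values
    collected from independent optimization runs."""
    return {
        "best": min(final_fitness),
        "mean": statistics.mean(final_fitness),
        "median": statistics.median(final_fitness),
        "worst": max(final_fitness),
        "std": statistics.stdev(final_fitness),
    }

# Illustrative ITAE values from hypothetical runs (not the paper's results):
summary = run_statistics([0.0729, 0.0741, 0.0735, 0.0752, 0.0730])
print(summary["best"], summary["worst"])  # 0.0729 0.0752
```

A small STD alongside a low mean indicates that an algorithm reaches good solutions consistently, which is what the narrow boxplots of the QCGO reflect.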


**Figure 10.** Qualitative metrics of nine benchmark functions using the proposed quantum chaos game optimizer algorithm: 2D views of the functions, search history, average fitness history, and convergence curve.

Tables 4–6 show the statistical results of the proposed QCGO technique and the other algorithms when applied to the three types of benchmark functions (unimodal, multimodal, and composite, respectively). The best values obtained by the QCGO, CGO, SDO, WOA, and BOA algorithms are displayed in bold. It is clearly seen that the QCGO algorithm achieves the optimal solution for most of these benchmark functions. The convergence curves of these techniques for those functions are illustrated in Figure 11, and the boxplots for each algorithm are displayed in Figure 12. These figures show that the QCGO technique reaches a stable point for all functions, and the boxplots of the proposed QCGO technique are very narrow and stable for most functions compared to the other techniques.


**Table 4.** Results of unimodal benchmark functions.

The best values obtained are in bold.

**Table 5.** Results of multimodal benchmark functions.




The best values obtained are in bold.

**Table 6.** Results of composite benchmark functions.




The best values obtained are in bold.


**Figure 11.** The convergence curves of the proposed QCGO algorithm and four other algorithms for 23 benchmark functions.


**Figure 12.** Boxplots of the proposed QCGO algorithm and four other algorithms for 23 benchmark functions.

#### **5. Simulation Results and Discussions**

In this study, the proposed control strategy is implemented in the secondary control loop with high integration of RESs, considering different load variation types, to restore the studied system frequency to its pre-defined value. The presented control strategy relies on the combining TD-TI controller, optimally designed by the improved QCGO algorithm to minimize the frequency fluctuations of the studied power grid. Moreover, the performance of the suggested control strategy is compared with other control strategies (i.e., TID and PID). All simulation results for the studied two-area, multi-unit power grid are implemented using the MATLAB/SIMULINK® program (R2015a) to confirm the efficacy of the proposed controller in enhancing system performance. The proposed QCGO algorithm is coded in an m-file linked to the studied model for the optimization process. The simulations are performed on a PC with an Intel Core i5 at 2.60 GHz and 4.00 GB of RAM. Frequency stability is assessed by applying different operating conditions through the following scenarios.


The studied power grid performance is evaluated by measuring the best objective function value, represented by the ITAE value, over the iterations. Foremost, several initial settings must be specified when optimizing the proposed TD-TI controller with the improved QCGO algorithm: the number of search agents is 30 and the total number of iterations is 100. The convergence curve shown in Figure 13 compares the performance of the proposed combining TD-TI controller based on QCGO with that of the combining TD-TI controller based on CGO and SSA, and with the TID controller based on QCGO and CGO. The demonstrated convergence curve is obtained considering a 1% SLP at 10 s in the first area of the studied power grid, without any RES penetration in either area. Clearly, the proposed combining TD-TI controller based on QCGO attains the lowest objective function value among the mentioned controllers relying on various optimization techniques; the convergence curve thus elucidates the effectiveness of the proposed QCGO algorithm. The curve of the proposed TD-TI controller based on QCGO starts at an objective function value of 0.1098 and drops over the iterations to a final value of 0.0729, reaching the best objective function value more quickly than the other controllers tuned by different techniques. Moreover, the remaining curves stay far from the optimum achieved by the suggested controller using QCGO, demonstrating its robustness in damping the oscillations effectively.

**Scenario A:** evaluation of the studied system performance considering different load variation types (i.e., SLP, series SLP, and random load).

This scenario includes a fair comparison between the proposed combining TD-TI controller tuned by the QCGO algorithm and previously published controllers, such as the PID controller based on TLBO and AOA. Moreover, the proposed combining TD-TI controller based on the improved QCGO technique is compared with the other mentioned controllers, i.e., the TID controller based on QCGO and CGO and the combining TD-TI controller based on CGO and SSA, to test the stability of the studied power grid performance.

**Figure 13.** The convergence curve characteristics of QCGO, CGO, and salp swarm algorithm.

**Case A.1:** An SLP was applied in the first area of the studied power grid as a challenge to test the efficacy of the proposed combining TD-TI controller in enhancing the system performance. The applied SLP occurs at 10 s with a 1% value. In practice, an SLP can arise in electrical power grids when some generators are disconnected from the generation stations, which may lead to blackouts if all of the stations' generators shut down. An SLP may also represent an unexpected switching of connected electrical loads, which can destabilize the system performance and increase the wear and tear on the generators in the power grid.

**Case A.1.1:** This case compares the performance of the proposed combining TD-TI controller with the published performances of the PID controller, to prove the efficacy of the proposed controller in attaining the main target (damping the frequency oscillations). Table 7 lists all of the aforementioned controller parameters used to diminish the fluctuations in the system frequency and the tie-line power flow. In addition, Figure 14 compares the different studied dynamic system responses (i.e., Δ*f*<sub>1</sub>, Δ*f*<sub>2</sub>, and Δ*P*<sub>tie</sub>) of the proposed combining TD-TI controller based on QCGO and the PID controller based on TLBO and AOA, considering a 1% SLP in the first area.

Table 8 lists the transient specifications of the system performance, namely the overshoot (*Osh*), undershoot (*Ush*), and objective function values related to the fluctuations in both area frequencies and the tie-line power flow; it confirms the superiority of the proposed combining TD-TI controller based on the improved QCGO algorithm in stabilizing the studied power grid. For clarity, Table 9 gives the percentage improvements in *Ush* and *Osh* for combining TD-TI/QCGO and PID/AOA relative to the PID/TLBO baseline.
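The *Osh* and *Ush* entries of Tables 8–23 can be extracted directly from a simulated deviation trace. A minimal sketch follows; since the regulated quantities are deviations (Δ*f*, Δ*P*<sub>tie</sub>) whose setpoint is zero, overshoot is the largest positive excursion and undershoot the largest negative one. The sample trace is illustrative, not the paper's data:

```python
def transient_specs(response):
    """Overshoot (Osh) and undershoot (Ush) of a deviation signal.
    For a regulation problem the setpoint is 0, so Osh is the largest
    positive excursion and Ush the largest negative excursion
    (each clamped to 0 if the signal never crosses that side)."""
    osh = max(max(response), 0.0)
    ush = min(min(response), 0.0)
    return osh, ush

# Illustrative per-unit frequency-deviation samples after a step load.
trace = [0.0, -0.020, -0.012, 0.004, 0.002, -0.001, 0.0]
print(transient_specs(trace))  # → (0.004, -0.02)
```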


**Table 7.** The optimum parameters of the different controllers.

**Table 8.** The transient response specifications of the presented system for case A.1.1.


**Table 9.** Percentage improvement in *Ush* and *Osh* values for combining TD-TI/QCGO and PID/AOA based on PID controller via TLBO for scenario A.1.1.


**Figure 14.** Dynamic power grid responses in case A.1.1: (**a**) Δ*f*<sub>1</sub>; (**b**) Δ*f*<sub>2</sub>; (**c**) Δ*P*<sub>tie</sub>.

As can be seen, the improved QCGO algorithm used to fine-tune the proposed combining TD-TI controller obtains the optimal controller parameters, attaining the optimal solution with an objective function value of 0.075. This value is the best compared with those attained by the published PID controller based on TLBO and AOA, which equal 0.402 and 0.189, respectively. The proposed combining TD-TI controller based on QCGO also achieves the largest percentage improvement across all system dynamic responses. For example, the percentage improvements in *Ush* and *Osh* of Δ*f*<sub>1</sub> for combining TD-TI/QCGO are 60.01% and 52.43%, respectively, whereas those for PID/AOA are 42.11% and 32.70%, respectively.
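The percentage improvements quoted throughout Tables 9–23 follow the usual definition relative to the baseline controller's peak deviation. A sketch with hypothetical peak values (not the paper's data) is:

```python
def pct_improvement(baseline, value):
    """Percentage improvement of a peak deviation (Osh or Ush) relative
    to a baseline controller: 100 * (|baseline| - |value|) / |baseline|."""
    return 100.0 * (abs(baseline) - abs(value)) / abs(baseline)

# Hypothetical values: if the baseline undershoot were -0.025 p.u. and
# the tuned controller's undershoot -0.010 p.u., the improvement is 60%.
print(round(pct_improvement(-0.025, -0.010), 2))  # → 60.0
```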

**Case A.1.2:** In this case, the TID controller based on CGO and QCGO is compared with the proposed combining TD-TI controller based on QCGO to test the robustness of the latter in regulating the studied system frequency. All of the previously mentioned controller parameters are given in Table 7. Moreover, Figure 15 presents a fair comparison between all of the dynamic system responses of the proposed combining TD-TI controller based on QCGO and those of the TID controller based on CGO and QCGO.

Table 10 lists the different specifications of the system performance, namely *Osh*, *Ush*, and the objective function values related to the excursions in both area frequencies and the tie-line power flow; it confirms the superiority of the proposed combining TD-TI controller based on the improved QCGO algorithm in achieving system reliability. In addition, Table 11 gives the percentage improvements in *Ush* and *Osh* for combining TD-TI/QCGO and TID/(CGO, QCGO) relative to the PID/TLBO baseline.

**Table 10.** The transient response specifications of the presented system for case A.1.2.


The optimum values are bolded.

**Table 11.** Percentage improvement in *Ush* and *Osh* values for combining TD-TI/QCGO and TID/(CGO, QCGO) based on the PID controller via TLBO for scenario A.1.2.


**Figure 15.** Dynamic power grid responses in case A.1.2: (**a**) Δ*f*<sub>1</sub>; (**b**) Δ*f*<sub>2</sub>; (**c**) Δ*P*<sub>tie</sub>.

Table 10 shows that the objective function value obtained by the proposed controller using the improved QCGO algorithm, 0.075, is the best compared with those attained by the TID controller based on CGO and QCGO, which equal 0.1381 and 0.1351, respectively. Moreover, Table 11 shows that the proposed combining TD-TI controller based on QCGO achieves the largest percentage improvement across all system dynamic responses. For example, the percentage improvements in *Ush* and *Osh* of Δ*f*<sub>2</sub> for combining TD-TI/QCGO are 86.4% and 99.36%, respectively, whereas those for TID/QCGO are 73.04% and 25.35%, respectively.

**Case A.1.3:** This case employs the SSA, a meta-heuristic optimization technique, to tune the proposed combining TD-TI controller and compares it with the CGO and QCGO techniques in selecting the optimal controller parameters, in order to prove that the improved QCGO algorithm achieves more system stability than the other mentioned algorithms. Table 7 lists the aforementioned controller parameters used to overcome the LFC problem in the studied power grid. Moreover, Figure 16 presents a fair comparison between all of the dynamic system responses of the proposed combining TD-TI controller based on QCGO and those of the combining TD-TI controller based on SSA and CGO.

Table 12 lists the different specifications of the system performance, namely *Osh*, *Ush*, and the objective function values related to the oscillations in both area frequencies and the tie-line power flow; it confirms the superiority of the proposed combining TD-TI controller based on the improved QCGO algorithm in achieving system reliability. In addition, Table 13 gives the percentage improvements in *Ush* and *Osh* for combining TD-TI/QCGO and combining TD-TI/(CGO, SSA) relative to the PID/TLBO baseline.

**Table 12.** The transient response specifications of the presented system for case A.1.3.


**Table 13.** Percentage improvement in *Ush* and *Osh* values for combining TD-TI/QCGO and combining TD-TI/(CGO, SSA) based on the PID controller via TLBO for scenario A.1.3.


**Figure 16.** Dynamic power grid responses in case A.1.3: (**a**) Δ*f*<sub>1</sub>; (**b**) Δ*f*<sub>2</sub>; (**c**) Δ*P*<sub>tie</sub>.

Table 12 shows that the objective function value obtained by the suggested controller using the improved QCGO algorithm, 0.075, is the best compared with those attained by the combining TD-TI controller based on CGO and SSA, which equal 0.078 and 0.087, respectively. Moreover, Table 13 shows that the proposed combining TD-TI controller based on QCGO achieves the largest percentage improvement across all system dynamic responses. For example, the percentage improvements in *Ush* and *Osh* of Δ*P*<sub>tie</sub> for combining TD-TI/QCGO are 82.6% and 99.12%, respectively, whereas those for combining TD-TI/SSA are 76.85% and 92.76%, respectively.

**Case A.2:** In this case, the performance of the proposed combining TD-TI controller optimized with the improved QCGO algorithm is tested and assessed by applying a series SLP in the first area of the studied power grid. The series SLP emulates successive changes in realistic connected loads; it can be regarded as a series of forced generator switchings or successive interruptions of the connected loads. Figure 17 shows the applied form of the series SLP. In addition, the different dynamic system responses are given in Figure 18 to elucidate the superiority of the suggested combining TD-TI controller based on QCGO over the other controllers optimized with different algorithms (i.e., combining TD-TI based on CGO and SSA) in the presence of the series SLP in the first area.
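A series SLP of the kind just described is simply a sum of step changes applied at scheduled times. The sketch below builds such a signal; the step times and magnitudes are illustrative placeholders, not the schedule of Figure 17:

```python
def series_slp(t, steps):
    """Series step load perturbation at time t: the sum of all step
    changes that have already occurred. `steps` is a list of
    (time_s, magnitude_pu) pairs."""
    return sum(mag for at, mag in steps if t >= at)

# Hypothetical schedule (NOT the one in Figure 17): +1% load at 10 s,
# -0.5% at 40 s, +1.5% at 70 s.
schedule = [(10.0, 0.01), (40.0, -0.005), (70.0, 0.015)]
print([series_slp(t, schedule) for t in (5, 20, 50, 80)])
```

Feeding such a piecewise-constant demand signal into the area model forces the controller to re-settle after every step, which is why the series SLP is a harder test than the single step of case A.1.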

**Figure 17.** The form of the applied series step load perturbation.


**Figure 18.** Dynamic power grid responses in case A.2: (**a**) Δ*f*<sub>1</sub>; (**b**) Δ*f*<sub>2</sub>; (**c**) Δ*P*<sub>tie</sub>.

Table 14 lists the *Osh* and *Ush* values of the different system dynamic responses (i.e., Δ*f*<sub>1</sub>, Δ*f*<sub>2</sub>, and Δ*P*<sub>tie</sub>) according to the oscillations in both area frequencies and the tie-line power flow; it confirms the superiority of the proposed combining TD-TI controller based on the improved QCGO algorithm in achieving system stability. In addition, Table 15 gives the percentage improvements in *Ush* and *Osh* for combining TD-TI/QCGO and combining TD-TI/CGO relative to the combining TD-TI/SSA baseline.

The obtained *Osh* and *Ush* values in Table 14 show that the suggested controller using the improved QCGO algorithm achieves more system stability. Moreover, Table 15 shows that the proposed combining TD-TI controller based on QCGO achieves the largest percentage improvement across all system dynamic responses. For example, the percentage improvements in *Ush* and *Osh* of Δ*f*<sub>1</sub> for combining TD-TI/QCGO are 26.13% and 25.71%, respectively, whereas those for combining TD-TI/CGO are 15.81% and 14.29%, respectively.


**Table 14.** The transient response specifications of the presented system for case A.2.

**Table 15.** Percentage improvement in *Ush* and *Osh* values for combining TD-TI/QCGO and combining TD-TI/CGO based on combining TD-TI/SSA for scenario A.2.


The optimum values are bolded.

**Case A.3:** In this case, the studied power grid is subjected to RLVs in the first area. The RLVs are a diverse combination of successive perturbations in the industrial loads connected to the grid, which cause the same effects on the grid (i.e., imbalance in the electrical power grid and the occurrence of blackouts). The applied RLV is shown in Figure 19. In addition, Figure 20 presents the different dynamic power system responses, demonstrating the efficacy of the proposed combining TD-TI controller based on QCGO in achieving a greater reduction in the system frequency fluctuations and the tie-line power flow deviations than the other controllers.
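An RLV test signal of this kind is typically generated as a piecewise-constant random demand pattern. The sketch below shows one way to build such a signal; the dwell time and amplitude bound are illustrative assumptions, not the parameters of Figure 19:

```python
import random

def random_load_variation(duration, dwell, max_step, seed=0):
    """Piecewise-constant random load pattern: every `dwell` seconds the
    demand jumps to a new uniform random value in [-max_step, max_step]
    p.u. Returns a function of time t (seconds). Illustrative only; the
    actual RLV profile used in the paper is the one shown in Figure 19."""
    rng = random.Random(seed)
    levels = [rng.uniform(-max_step, max_step)
              for _ in range(int(duration // dwell) + 1)]
    return lambda t: levels[int(t // dwell)]

# Hypothetical parameters: 300 s horizon, a new level every 25 s,
# load jumps bounded by ±2% of rated power.
rlv = random_load_variation(duration=300, dwell=25, max_step=0.02)
print([round(rlv(t), 4) for t in (0, 30, 120, 290)])
```

Seeding the generator makes the disturbance reproducible, so every controller under comparison is exercised by the identical random load trace.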

**Figure 19.** The form of the applied random load variation.

**Figure 20.** Dynamic power grid responses in case A.3: (**a**) Δ*f*<sub>1</sub>; (**b**) Δ*f*<sub>2</sub>; (**c**) Δ*P*<sub>tie</sub>.

Table 16 lists the *Osh* and *Ush* values of the different system dynamic responses (i.e., Δ*f*<sub>1</sub>, Δ*f*<sub>2</sub>, and Δ*P*<sub>tie</sub>) according to the oscillations in both area frequencies and the tie-line power flow; it demonstrates the robustness of the proposed combining TD-TI controller based on the improved QCGO algorithm in achieving system stability. In addition, Table 17 gives the percentage improvements in *Ush* and *Osh* for combining TD-TI/QCGO and combining TD-TI/CGO relative to the combining TD-TI/SSA baseline.


**Table 16.** The transient response specifications of the presented system for case A.3.

**Table 17.** Percentage improvement in *Ush* and *Osh* values for combining TD-TI/QCGO and combining TD-TI/CGO based on combining TD-TI/SSA for scenario A.3.


The optimum values are bolded.

The obtained *Osh* and *Ush* values in Table 16 show that the proposed controller via the improved QCGO algorithm achieves more system stability. Additionally, Table 17 shows that the proposed combining TD-TI controller based on QCGO achieves the largest percentage improvement across all system dynamic responses. For example, the percentage improvements in *Ush* and *Osh* of Δ*f*<sub>1</sub> for combining TD-TI/QCGO are 20.67% and 26.00%, respectively, whereas those for combining TD-TI/CGO are 10.00% and 16.00%, respectively.

**Scenario B:** evaluation of the studied system performance considering high penetration of RESs in both areas with series SLP and RLV.

Another challenge, the high penetration of RESs (i.e., wind energy in the first area and PV energy in the second area), is addressed in this study to test the robustness of the proposed combining TD-TI controller in reducing the studied system fluctuations. The series SLP and RLV are applied in the first area together with the integration of the RESs into the power grid. The penetration of RESs burdens the studied power grid because of their demerits (i.e., lack of system inertia).

**Case B.1:** robustness test for the proposed combining TD-TI controller optimized by improved QCGO considering high RES penetration as well as series SLP challenge.

This section clarifies the dynamic performance of the investigated power grid, taking into consideration a series SLP, high penetration of wind energy at t = 100 s in the first area, and PV energy at t = 200 s in the second area. These challenges are presented to verify the reliability and effectiveness of the proposed combining TD-TI controller based on the improved QCGO algorithm in enhancing the studied power grid performance. Figure 21 shows the applied series SLP form in the first area. Moreover, all of the dynamic power grid responses, represented by Δ*f*<sub>1</sub>, Δ*f*<sub>2</sub>, and Δ*P*<sub>tie</sub>, are shown in Figure 22.

Table 18 lists the *Osh* and *Ush* values of the aforementioned system dynamic responses due to the deviations in both area frequencies and the tie-line power flow; it demonstrates the robustness of the proposed combining TD-TI controller based on the improved QCGO algorithm in achieving system reliability. In addition, Table 19 gives the percentage improvements in *Ush* and *Osh* for combining TD-TI/QCGO and combining TD-TI/CGO relative to the combining TD-TI/SSA baseline.

**Table 18.** The transient response specifications of the presented system for case B.1.


**Table 19.** Percentage improvement in *Ush* and *Osh* values for combining TD-TI/QCGO and combining TD-TI/CGO based on combining TD-TI/SSA for scenario B.1.


**Figure 22.** Dynamic power grid responses in case B.1: (**a**) Δ*f*<sub>1</sub>; (**b**) Δ*f*<sub>2</sub>; (**c**) Δ*P*<sub>tie</sub>.

In summary, the obtained *Osh* and *Ush* values in Table 18 show that the proposed controller and algorithm achieve more system stability. In this regard, Table 19 shows that the proposed combining TD-TI controller based on QCGO achieves the largest percentage improvement across all system dynamic responses. For example, the percentage improvements in *Ush* and *Osh* of Δ*P*<sub>tie</sub> for combining TD-TI/QCGO are 30.41% and 26.15%, respectively, whereas those for combining TD-TI/CGO are 7.22% and 6.15%, respectively.

**Case B.2:** robustness test for the proposed combining TD-TI controller optimized by improved QCGO considering high RES penetration as well as RLV.

This section presents a robustness test in which RESs penetrate both areas of the studied power grid while the RLV is applied in the first area. This test confirms the superiority of the proposed combining TD-TI controller based on the improved QCGO algorithm in overcoming the frequency excursions of the studied power grid. The applied RLV is shown in Figure 23. Moreover, the behavior of both area frequencies and the tie-line power flow is shown in Figure 24.

Table 20 lists the *Osh* and *Ush* values of all the mentioned system dynamic responses due to the deviations in both area frequencies and the tie-line power flow; it proves the robustness of the proposed controller and algorithm in achieving system reliability. In addition, Table 21 gives the percentage improvements in *Ush* and *Osh* for combining TD-TI/QCGO and combining TD-TI/CGO relative to the combining TD-TI/SSA baseline.

**Table 20.** The transient response specifications of the presented system for case B.2.



**Table 21.** Percentage improvement in *Ush* and *Osh* values for combining TD-TI/QCGO and combining TD-TI/CGO based on combining TD-TI/SSA for scenario B.2.

The optimum values are bolded.

The obtained *Osh* and *Ush* values in Table 20 show that the proposed controller and algorithm achieve more system stability. In this regard, Table 21 shows that they achieve the largest percentage improvement across all system dynamic responses. For example, the percentage improvements in *Ush* and *Osh* of Δ*f*<sub>1</sub> for combining TD-TI/QCGO are 37.50% and 22.58%, respectively, whereas those for combining TD-TI/CGO are 25.00% and 12.9%, respectively.


**Figure 24.** Dynamic power grid responses in case B.2: (**a**) Δ*f*<sub>1</sub>; (**b**) Δ*f*<sub>2</sub>; (**c**) Δ*P*<sub>tie</sub>.

**Scenario C:** evaluation of the studied system performance considering communication time delay, high penetration of RESs in both areas, and RLV.

This scenario introduces the communication time delay challenge, applied before and after the control action with a delay of 0.01 s, and also considers the applied random load together with high RES penetration, to test the robustness of the suggested combining TD-TI controller in stabilizing the system. The RLV behavior is described in Figure 25. Moreover, the different dynamic system responses, represented by Δ*f*<sub>1</sub>, Δ*f*<sub>2</sub>, and Δ*P*<sub>tie</sub>, are shown in Figure 26.
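In discrete-time simulation, a fixed communication delay like the 0.01 s applied here amounts to a FIFO buffer of delay/dt past samples on the signal path. A minimal sketch (the 0.005 s sample time below is an assumption for illustration, not a value stated in the paper):

```python
from collections import deque

class TransportDelay:
    """Fixed communication delay of `delay` seconds on a sampled signal
    (e.g. on the error signal before the controller and on the control
    action after it). Implemented as a FIFO holding delay/dt samples."""
    def __init__(self, delay, dt, initial=0.0):
        self.buf = deque([initial] * max(1, round(delay / dt)))

    def step(self, u):
        self.buf.append(u)         # newest sample in
        return self.buf.popleft()  # sample from `delay` seconds ago out

# A 0.01 s delay at an assumed 0.005 s sample time => a 2-sample shift.
d = TransportDelay(delay=0.01, dt=0.005)
print([d.step(u) for u in (1.0, 2.0, 3.0, 4.0)])  # → [0.0, 0.0, 1.0, 2.0]
```

The delay shifts the controller's view of the grid by two samples, which erodes phase margin; the scenario checks that the tuned controller still damps the oscillations despite this lag.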

**Figure 25.** The form of the applied RLV.

Figure 26 elucidates the effectiveness of the proposed controller via the proposed technique in achieving system stability and reliability after testing the effect of the time delay on the controller action and on receiving the error signal. The proposed QCGO/combining TD-TI scheme shows excellent results in overcoming all the challenges and gaining more system stability.

**Figure 26.** Dynamic power grid responses in case C: (**a**) Δ*f*<sub>1</sub>; (**b**) Δ*f*<sub>2</sub>; (**c**) Δ*P*<sub>tie</sub>.

**Scenario D:** evaluation of the studied system performance, considering the effect of EV integration, high penetration of RESs in both areas, and RLV.

This scenario presents the integration of EVs into both areas of the studied power grid to test the effectiveness of EVs in regulating the studied system frequency and the power flow between the two areas. Figure 27 shows the applied RLV in the first area. Figure 28 shows the charging/discharging power of the EVs integrated into both areas of the studied power grid. Moreover, the various dynamic system responses, represented by Δ*f*<sub>1</sub>, Δ*f*<sub>2</sub>, and Δ*P*<sub>tie</sub>, are described in Figure 29.

Table 22 lists the *Osh* and *Ush* values of all the different mentioned system dynamic responses due to the deviations in both area frequencies and the tie-line power flow; it proves that the proposed controller and algorithm, considering EV penetration in the studied system, achieve more system stability than without these EVs. In addition, Table 23 gives the percentage improvements in *Ush* and *Osh* for combining TD-TI/QCGO with and without penetration of the EVs relative to the combining TD-TI/SSA baseline.

**Figure 27.** The form of the applied RLV.

**Figure 28.** The charging/discharging power of the applicable EVs in both areas.

**Figure 29.** Dynamic power grid responses in case D: (**a**) Δ*f*<sub>1</sub>; (**b**) Δ*f*<sub>2</sub>; (**c**) Δ*P*<sub>tie</sub>.


**Table 22.** The transient response specifications of the presented system for case D.

**Table 23.** Percentage improvement in *Ush* and *Osh* values for combining TD-TI/QCGO and combining TD-TI/CGO based on combining TD-TI/SSA for scenario D.


The optimum values are bolded.

The obtained *Osh* and *Ush* values in Table 22 show that the proposed controller and algorithm achieve more system stability. In this regard, Table 23 shows that they achieve the largest percentage improvement across all system dynamic responses: the percentage improvements in *Ush* and *Osh* of Δ*f*<sub>1</sub> for combining TD-TI/QCGO with EV penetration are 47.50% and 33.23%, respectively, whereas those for combining TD-TI/CGO without EV penetration are 37.50% and 22.58%, respectively. In brief, the integration of EVs into the studied power grid can help damp the frequency fluctuations, because their stored energy feeds the system with extra power under abnormal conditions and keeps all the system dynamic responses within the tolerable limits.

#### **6. Conclusions**

The main points of this paper are summarized below:


**Author Contributions:** Conceptualization, A.H.A.E., M.K., M.H.H. and S.K.; data curation, A.M.A.; formal analysis, A.H.A.E., M.K. and M.H.H.; funding acquisition, A.M.A. and S.K.; investigation, A.H.A.E., M.K. and M.H.H.; methodology, A.M.A. and S.K.; project administration, A.H.A.E., M.K. and M.H.H.; resources, A.M.A. and S.K.; supervision, S.K. and A.M.A.; validation, A.H.A.E., M.K. and M.H.H.; visualization, A.H.A.E., M.K. and M.H.H.; writing-original draft, A.H.A.E., M.K. and M.H.H.; writing-review and editing, A.M.A. and S.K. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research was funded by the Deputyship for Research & Innovation, Ministry of Education in Saudi Arabia through the project number "IF\_2020\_NBU\_416".

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Not applicable.

**Acknowledgments:** The authors extend their appreciation to the Deputyship for Research & Innovation, Ministry of Education in Saudi Arabia for funding this research work through the project number "IF\_2020\_NBU\_416". The authors gratefully thank the Prince Faisal bin Khalid bin Sultan Research Chair in Renewable Energy Studies and Applications (PFCRE) at Northern Border University for their support and assistance.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **Nomenclature**




#### **References**



*Fractal and Fractional* Editorial Office E-mail: fractalfract@mdpi.com www.mdpi.com/journal/fractalfract

