**Modified Viscosity Subgradient Extragradient-Like Algorithms for Solving Monotone Variational Inequalities Problems**

### **Nopparat Wairojjana <sup>1</sup>, Mudasir Younis <sup>2</sup>, Habib ur Rehman <sup>3</sup>, Nuttapol Pakkaranang <sup>3</sup> and Nattawut Pholasa <sup>4,</sup>\***


Received: 19 August 2020; Accepted: 7 October 2020; Published: 15 October 2020

**Abstract:** Variational inequality theory is an effective tool for engineering, economics, transport and mathematical optimization. Many of the approaches used to solve variational inequalities involve iterative techniques. In this article, we introduce a new modified viscosity-type extragradient method to solve monotone variational inequality problems in real Hilbert space. Strong convergence of the method is established without knowledge of the operator's Lipschitz constant. Detailed numerical studies compare our newly designed method with the current state of the art on several practical test problems.

**Keywords:** projection methods; strong convergence; extragradient method; monotone mapping; variational inequalities

### **1. Introduction**

Assume that C is a nonempty, closed and convex subset of a real Hilbert space H, and that R and N denote the sets of real numbers and natural numbers, respectively. In this paper, we consider the classical variational inequality problem [1,2] (in short, *VI*(*F*, C)); the solution set of the variational inequality problem is denoted by *SVI*(*F*, C). For an operator *F* : H → H, the variational inequality problem is defined in the following way:

$$\text{Find } u^* \in \mathcal{C} \text{ such that } \langle F(u^*), y - u^* \rangle \ge 0, \ \forall y \in \mathcal{C}. \tag{1}$$

Problem (1) is well defined and equivalent to the following fixed point problem:

$$\text{Find a point } u^* \in \mathcal{C} \text{ such that } u^* = P_{\mathcal{C}}[u^* - \zeta F(u^*)],$$

for some 0 < *ζ* < 1/*L*, where *L* is the Lipschitz constant of the operator *F*. We assume that the following conditions are satisfied:

(b1) The solution set, denoted by *SVI*(*F*, C), is nonempty;

(b2) The operator *F* : H → H is monotone, i.e.,

$$\langle F(u_1) - F(u_2), u_1 - u_2 \rangle \ge 0, \ \forall u_1, u_2 \in \mathcal{C};$$

(b3) *F* is Lipschitz continuous, i.e., there exists *L* > 0 such that

$$\|F(u_1) - F(u_2)\| \le L \|u_1 - u_2\|, \ \forall u_1, u_2 \in \mathcal{C}.$$
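The fixed point characterization above can be checked numerically. The following sketch uses a hypothetical one-dimensional example of our own choosing, C = [0, 1] and *F*(*u*) = *u* − 2, which is monotone and 1-Lipschitz, so any 0 < *ζ* < 1 is admissible:

```python
def proj_interval(x, lo, hi):
    # metric projection of x onto the interval C = [lo, hi]
    return min(max(x, lo), hi)

F = lambda u: u - 2.0   # monotone, 1-Lipschitz on R
zeta = 0.5              # any 0 < zeta < 1/L = 1 works

# u* = 1 solves the VI on C = [0, 1]: <F(1), y - 1> = -(y - 1) >= 0 for all y in C,
# and it is exactly the fixed point of u -> P_C[u - zeta*F(u)]:
print(proj_interval(1.0 - zeta * F(1.0), 0.0, 1.0))  # -> 1.0
```

By contrast, a non-solution such as *u* = 0.5 is moved by the same map (it projects to 1.0), which is what an iterative projection method exploits.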

Variational inequality theory is a useful technique for investigating a large number of problems in physics, economics, engineering and optimization theory. It was first introduced by Stampacchia [1] in 1964, and it is well established that problem (1) is an important problem in nonlinear analysis. It is an advantageous mathematical model that brings together several topics of applied mathematics, such as network equilibrium problems, necessary optimality conditions, systems of non-linear equations and complementarity problems [3–7].

The projection method and its modified versions are crucial for finding numerical solutions of variational inequality problems. Many studies have proposed and analyzed different types of projection methods for solving the variational inequality problem (see [8–18] for more details) and others, as in [19–28]. The simplest is the gradient method, for which only one projection onto the feasible set is required. Convergence of this method, however, requires strong monotonicity of *F*. To avoid the strong monotonicity hypothesis, Korpelevich [8] and Antipin [29] introduced the following extragradient method:

$$\begin{cases} u_0 \in \mathcal{C}, \\ v_n = P_{\mathcal{C}}[u_n - \zeta F(u_n)], \\ u_{n+1} = P_{\mathcal{C}}[u_n - \zeta F(v_n)], \end{cases}$$

for some 0 < *ζ* < 1/*L*. The subgradient extragradient algorithm was recently developed by Censor et al. [10] to solve problem (1) in real Hilbert space. Their method has the form

$$\begin{cases} u_0 \in \mathcal{C}, \\ v_n = P_{\mathcal{C}}[u_n - \zeta F(u_n)], \\ u_{n+1} = P_{\mathbb{H}_n}[u_n - \zeta F(v_n)], \end{cases} \tag{2}$$

where 0 < *ζ* < 1/*L* and H*n* = {*z* ∈ H : ⟨*un* − *ζF*(*un*) − *vn*, *z* − *vn*⟩ ≤ 0}.
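The practical appeal of (2) is that the second projection is onto the half-space H*n* rather than onto C, and projection onto a half-space has a closed form. A minimal sketch (the function name is ours, not from the cited works):

```python
import numpy as np

def project_halfspace(z, a, b):
    """Closed-form projection of z onto H = {x : <a, x> <= b}."""
    violation = a @ z - b
    if violation <= 0:      # z already lies in H
        return z.copy()
    return z - (violation / (a @ a)) * a  # step back along the normal a
```

For the half-space H*n* above, one would take a = *un* − *ζF*(*un*) − *vn* and b = ⟨a, *vn*⟩, so the second projection in (2) costs only one inner product and one vector update.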

In this article, motivated by the methods in [10,30,31] and the viscosity method [14], we introduce a new viscosity subgradient extragradient algorithm to solve variational inequality problems involving monotone operators in Hilbert space. It is important to note that our proposed algorithm often operates more effectively than existing ones; in particular, compared with the results of Yang et al. [30], our algorithm is efficient in most situations. As in Yang et al. [30], the proof of convergence of Algorithm 1 does not require knowledge of the Lipschitz constant of the operator *F*. The proposed algorithm can be seen as a modification of the methods found in [8,10,30,31]. Under mild conditions, a strong convergence theorem is proven for the proposed method. Numerical experiments show the new method to be more effective than the current ones in [30].

The rest of the article is arranged in the following way: Section 2 provides a few definitions and basic results that are used throughout the paper. Section 3 contains the main algorithm and convergence theorem. Section 4 includes the numerical results that illustrate the algorithmic efficacy of the introduced method.

### **Algorithm 1** An Explicit Method for Monotone Variational Inequality Problems

**Step 0:** Let *u*0 ∈ C, *μ* ∈ (0, 1), *ζ*0 > 0, a contraction *f* : H → H with constant *ρ* ∈ [0, 1), and a sequence {*βn*} ⊂ (0, 1) with *βn* → 0 and ∑<sub>*n*=1</sub><sup>∞</sup> *βn* = +∞.

**Step 1:** Assume that {*un*} is given and compute

$$v_n = P_{\mathcal{C}}[u_n - \zeta_n F(u_n)].$$

If *un* = *vn*, then STOP. Otherwise, move to **Step 2**.

**Step 2:** Construct the half-space

$$\mathbb{H}_n = \{ z \in \mathbb{H} : \langle u_n - \zeta_n F(u_n) - v_n, z - v_n \rangle \le 0 \}.$$

**Step 3:** Compute

$$u_{n+1} = \beta_n f(u_n) + (1 - \beta_n) z_n,$$

where *zn* = *P*H*n*[*un* − *ζnF*(*vn*)].

**Step 4:** Compute

$$\zeta_{n+1} = \begin{cases} \min\left\{ \zeta_n, \ \dfrac{\mu \|u_n - v_n\|^2 + \mu \|z_n - v_n\|^2}{2 \langle F(u_n) - F(v_n), z_n - v_n \rangle} \right\} & \text{if } \langle F(u_n) - F(v_n), z_n - v_n \rangle > 0, \\ \zeta_n & \text{otherwise.} \end{cases}$$

Set *n* := *n* + 1 and return to **Step 1**.
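To see how Steps 0–4 fit together, here is a minimal numerical sketch of Algorithm 1 under illustrative assumptions of our own (not from the paper): the feasible set C is a box, so *P*C is a coordinate-wise clip; *F*(*u*) = *u* is monotone and 1-Lipschitz; *f*(*u*) = *u*/2 is the contraction; and *βn* = 1/(*n* + 1) satisfies the conditions of Step 0. All function names are hypothetical.

```python
import numpy as np

def proj_box(x, lo, hi):
    # metric projection onto the box C = [lo, hi]^d
    return np.clip(x, lo, hi)

def proj_halfspace(z, a, b):
    # closed-form projection onto {x : <a, x> <= b}
    viol = a @ z - b
    return z if viol <= 0 else z - (viol / (a @ a)) * a

def algorithm1(F, f, u0, lo, hi, mu=0.5, zeta0=1.0, iters=200):
    u, zeta = u0.astype(float), zeta0               # Step 0
    for n in range(1, iters + 1):
        beta = 1.0 / (n + 1)                        # beta_n -> 0, sum beta_n = infinity
        v = proj_box(u - zeta * F(u), lo, hi)       # Step 1: v_n = P_C[u_n - zeta_n F(u_n)]
        a = u - zeta * F(u) - v                     # Step 2: normal vector of H_n
        z = proj_halfspace(u - zeta * F(v), a, a @ v)   # z_n = P_{H_n}[u_n - zeta_n F(v_n)]
        u_next = beta * f(u) + (1 - beta) * z       # Step 3: viscosity step
        s = (F(u) - F(v)) @ (z - v)                 # Step 4: stepsize update
        if s > 0:
            zeta = min(zeta, mu * ((u - v) @ (u - v) + (z - v) @ (z - v)) / (2 * s))
        u = u_next
    return u
```

For this choice of *F* on C = [−1, 1]², the unique solution of (1) is *u*∗ = 0, and the iterates shrink toward it geometrically; note that no Lipschitz constant is passed to the routine, only *μ* and *ζ*0.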

### **2. Background**

The metric projection *P*C(*u*1) of *u*1 ∈ H onto a closed and convex subset C of H is defined by

$$P_{\mathcal{C}}(u_1) = \arg\min \{ \|u_2 - u_1\| : u_2 \in \mathcal{C} \}.$$
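For simple sets this arg min has a closed form. As an illustrative case of our own choosing (not one used in the paper), projection onto a closed Euclidean ball just rescales the offset:

```python
import numpy as np

def proj_ball(u1, c, r):
    """Metric projection of u1 onto the ball C = {x : ||x - c|| <= r}."""
    d = u1 - c
    dist = np.linalg.norm(d)
    if dist <= r:               # u1 already in C: P_C(u1) = u1
        return u1.copy()
    return c + (r / dist) * d   # nearest point on the boundary
```

For instance, the point (3, 4) projects onto the unit ball at the origin as (0.6, 0.8), the boundary point on the same ray.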

**Lemma 1** ([32], page 31)**.** *For u*, *v* ∈ H *and a* ∈ R, *the following relations hold.*

$$(i). \quad ||au + (1-a)v||^2 = a||u||^2 + (1-a)||v||^2 - a(1-a)||u-v||^2.$$

$$(ii). \quad \|u + v\|^2 \le \|u\|^2 + 2\langle v, u + v \rangle.$$

**Lemma 2** ([32,33])**.** *Let* C *be a nonempty, closed and convex subset of a real Hilbert space* H *and let P*C : H → C *be the metric projection from* H *onto* C*. Then:*

*(i). For u*1 ∈ C *and u*2 ∈ H*,*

$$\|u_1 - P_{\mathcal{C}}(u_2)\|^2 + \|P_{\mathcal{C}}(u_2) - u_2\|^2 \le \|u_1 - u_2\|^2.$$

*(ii). u*3 = *P*C(*u*1) *if and only if*

$$\langle u_1 - u_3, u_2 - u_3 \rangle \le 0, \ \forall u_2 \in \mathcal{C}.$$

*(iii). For u*2 ∈ C *and u*1 ∈ H*,*

$$\|u_1 - P_{\mathcal{C}}(u_1)\| \le \|u_1 - u_2\|.$$

**Lemma 3** ([34])**.** *Assume that* {*χn*} *is a sequence of non-negative real numbers such that*

$$\chi_{n+1} \le (1 - \alpha_n)\chi_n + \alpha_n \delta_n, \ \forall n \in \mathbb{N},$$

*where* {*αn*} ⊂ (0, 1) *and* {*δn*} ⊂ R *satisfy the following criteria:*

$$\lim_{n \to \infty} \alpha_n = 0, \quad \sum_{n=1}^{\infty} \alpha_n = \infty, \quad \text{and} \quad \limsup_{n \to \infty} \delta_n \le 0.$$

*Axioms* **2020**, *9*, 118

*Then,* lim*n*→<sup>∞</sup> *χ<sup>n</sup>* = 0.

**Lemma 4** ([35])**.** *Assume that* {*χn*} *is a sequence of real numbers such that there is a subsequence* {*ni*} *of* {*n*} *with χni* < *χni*+1 *for all i* ∈ N. *Then, there is a nondecreasing sequence* {*mk*} ⊂ N *such that mk* → ∞ *as k* → ∞, *and the following conditions are fulfilled by all (sufficiently large) numbers k* ∈ N*:*

$$\chi_{m_k} \le \chi_{m_k+1} \ \text{ and } \ \chi_k \le \chi_{m_k+1}.$$

*In fact, mk* = max{*j* ≤ *k* : *χ<sup>j</sup>* ≤ *χj*+1}.

**Lemma 5** ([36])**.** *Assume that* <sup>C</sup> *is a nonempty closed convex set in* <sup>H</sup> *and an operator <sup>F</sup>* : C → <sup>H</sup> *is monotone and continuous. Then, u*∗ *is a solution of the problem* (1) *if and only if u*∗ *is a solution of the following problem:*

*Find x* ∈ C *such that* ⟨*F*(*y*), *y* − *x*⟩ ≥ 0, ∀ *y* ∈ C.

### **3. Algorithm and Corresponding Strong Convergence Theorem**

We provide a method consisting of two convex minimization problems, combined with a viscosity term and an explicit stepsize formula, which are used to enhance the convergence rate of the iterative sequence and to make the method independent of the Lipschitz constant *L*. The detailed method is given in Algorithm 1.

**Remark 1.** H*<sup>n</sup> is a half-space and so* H*<sup>n</sup> is a closed and convex set in* H.

**Lemma 6.** *The sequence* {*ζn*} *is monotonically nonincreasing with lower bound* min{*μ*/*L*, *ζ*0} *and converges to some ζ* > 0.

**Proof.** From the definition of {*ζn*}, the sequence is monotone and nonincreasing. It is given that *F* is Lipschitz continuous with constant *L* > 0. In the case ⟨*F*(*un*) − *F*(*vn*), *zn* − *vn*⟩ > 0, we have

$$\begin{split} \frac{\mu\left(\|u_n - v_n\|^2 + \|z_n - v_n\|^2\right)}{2\langle F(u_n) - F(v_n), z_n - v_n\rangle} &\ge \frac{2\mu\|u_n - v_n\|\|z_n - v_n\|}{2\|F(u_n) - F(v_n)\|\|z_n - v_n\|} \\ &\ge \frac{2\mu\|u_n - v_n\|\|z_n - v_n\|}{2L\|u_n - v_n\|\|z_n - v_n\|} \\ &\ge \frac{\mu}{L}. \end{split} \tag{3}$$

The above discussion implies that the sequence {*ζn*} has lower bound min{*μ*/*L*, *ζ*0}. Moreover, there exists a number *ζ* > 0 such that lim*n*→∞ *ζn* = *ζ*.

**Lemma 7.** *Assume that the operator F* : C → H *satisfies the conditions* (b1)*–*(b3)*. For each u*∗ ∈ *SVI*(*F*, C) ≠ ∅, *we have*

$$\|z_n - u^*\|^2 \le \|u_n - u^*\|^2 - \left(1 - \frac{\mu \zeta_n}{\zeta_{n+1}}\right) \|u_n - v_n\|^2 - \left(1 - \frac{\mu \zeta_n}{\zeta_{n+1}}\right) \|z_n - v_n\|^2.$$

**Proof.** Consider the following:

$$\begin{split} \|z_n - u^*\|^2 &= \|P_{\mathbb{H}_n}[u_n - \zeta_n F(v_n)] - u^*\|^2 \\ &= \|P_{\mathbb{H}_n}[u_n - \zeta_n F(v_n)] + [u_n - \zeta_n F(v_n)] - [u_n - \zeta_n F(v_n)] - u^*\|^2 \\ &= \|[u_n - \zeta_n F(v_n)] - u^*\|^2 + \|P_{\mathbb{H}_n}[u_n - \zeta_n F(v_n)] - [u_n - \zeta_n F(v_n)]\|^2 \\ &\quad + 2\langle P_{\mathbb{H}_n}[u_n - \zeta_n F(v_n)] - [u_n - \zeta_n F(v_n)], [u_n - \zeta_n F(v_n)] - u^* \rangle. \end{split} \tag{4}$$

From the assumption that *u*∗ ∈ *SVI*(*F*, C) ⊂ C ⊂ H*n*, together with Lemma 2 (ii), we have

$$\begin{aligned} &\|P_{\mathbb{H}_n}[u_n - \zeta_n F(v_n)] - [u_n - \zeta_n F(v_n)]\|^2 \\ &\quad + \langle P_{\mathbb{H}_n}[u_n - \zeta_n F(v_n)] - [u_n - \zeta_n F(v_n)], [u_n - \zeta_n F(v_n)] - u^* \rangle \\ &= \langle [u_n - \zeta_n F(v_n)] - P_{\mathbb{H}_n}[u_n - \zeta_n F(v_n)], u^* - P_{\mathbb{H}_n}[u_n - \zeta_n F(v_n)] \rangle \le 0, \end{aligned} \tag{5}$$

which implies that

$$\begin{aligned} &\langle P_{\mathbb{H}_n}[u_n - \zeta_n F(v_n)] - [u_n - \zeta_n F(v_n)], [u_n - \zeta_n F(v_n)] - u^* \rangle \\ &\le -\|P_{\mathbb{H}_n}[u_n - \zeta_n F(v_n)] - [u_n - \zeta_n F(v_n)]\|^2. \end{aligned} \tag{6}$$

Now, substituting (6) into (4) implies that

$$\begin{split} \|z_n - u^*\|^2 &\le \|u_n - \zeta_n F(v_n) - u^*\|^2 - \|P_{\mathbb{H}_n}[u_n - \zeta_n F(v_n)] - [u_n - \zeta_n F(v_n)]\|^2 \\ &\le \|u_n - u^*\|^2 - \|u_n - z_n\|^2 + 2\zeta_n \langle F(v_n), u^* - z_n \rangle. \end{split} \tag{7}$$

Given that *u*<sup>∗</sup> is a solution of *V I*(*F*, C), we get

$$
\langle F(u^\*), y - u^\* \rangle \ge 0, \,\forall \, y \in \mathcal{C}. \tag{8}
$$

Due to the monotonicity of *F* on C, we can obtain

$$
\langle F(v_n) - F(u^*), v_n - u^* \rangle \ge 0. \tag{9}
$$

Since *vn* ∈ C, combining (8) and (9) yields

$$\langle F(v_n), v_n - u^* \rangle \ge 0. \tag{10}$$

Thus, we have

$$
\langle F(v_n), u^* - z_n \rangle = \langle F(v_n), u^* - v_n \rangle + \langle F(v_n), v_n - z_n \rangle \le \langle F(v_n), v_n - z_n \rangle. \tag{11}
$$

From (7) and (11), we get

$$\begin{split} \|z_n - u^*\|^2 &\le \|u_n - u^*\|^2 - \|u_n - z_n\|^2 + 2\zeta_n \langle F(v_n), v_n - z_n \rangle \\ &= \|u_n - u^*\|^2 - \|u_n - v_n + v_n - z_n\|^2 + 2\zeta_n \langle F(v_n), v_n - z_n \rangle \\ &\le \|u_n - u^*\|^2 - \|u_n - v_n\|^2 - \|v_n - z_n\|^2 + 2\langle u_n - \zeta_n F(v_n) - v_n, z_n - v_n \rangle. \end{split} \tag{12}$$

Note that *zn* = *P*H*n*[*un* − *ζnF*(*vn*)] ∈ H*n* and, by the definition of *ζn*+1, we have

$$\begin{split} 2\langle u_n - \zeta_n F(v_n) - v_n, z_n - v_n \rangle &= 2\langle u_n - \zeta_n F(u_n) - v_n, z_n - v_n \rangle + 2\zeta_n \langle F(u_n) - F(v_n), z_n - v_n \rangle \\ &\le \frac{2\zeta_n}{\zeta_{n+1}} \zeta_{n+1} \langle F(u_n) - F(v_n), z_n - v_n \rangle \\ &\le \frac{\zeta_n}{\zeta_{n+1}} \left[ \mu \|u_n - v_n\|^2 + \mu \|z_n - v_n\|^2 \right]. \end{split} \tag{13}$$

From expressions (12) and (13), we obtain

$$\begin{split} \|z_n - u^*\|^2 &\le \|u_n - u^*\|^2 - \|u_n - v_n\|^2 - \|v_n - z_n\|^2 + \frac{\zeta_n}{\zeta_{n+1}} \left[\mu \|u_n - v_n\|^2 + \mu \|z_n - v_n\|^2\right] \\ &\le \|u_n - u^*\|^2 - \left(1 - \frac{\mu \zeta_n}{\zeta_{n+1}}\right) \|u_n - v_n\|^2 - \left(1 - \frac{\mu \zeta_n}{\zeta_{n+1}}\right) \|z_n - v_n\|^2. \end{split} \tag{14}$$

**Theorem 1.** *Assume that an operator <sup>F</sup>* : C → <sup>H</sup> *satisfies the conditions* (b1)*–*(b3) *and <sup>u</sup>*<sup>∗</sup> *belongs to solution set SV I*(*F*, C). *Then, the sequences* {*un*}, {*vn*} *and* {*zn*} *generated by Algorithm 1 strongly converge to u*∗*.*

**Proof. Claim 1:** The sequence {*un*} is bounded in <sup>H</sup>.

From Lemma 7, we have

$$\|z_n - u^*\|^2 \le \|u_n - u^*\|^2 - \left(1 - \frac{\mu \zeta_n}{\zeta_{n+1}}\right) \|u_n - v_n\|^2 - \left(1 - \frac{\mu \zeta_n}{\zeta_{n+1}}\right) \|z_n - v_n\|^2. \tag{15}$$

Since *ζn* → *ζ*, there exists a fixed number *ε* ∈ (0, 1 − *μ*) such that

$$\lim_{n \to \infty} \left( 1 - \frac{\mu \zeta_n}{\zeta_{n+1}} \right) = 1 - \mu > \epsilon > 0.$$

Thus, there is a finite number *<sup>N</sup>*<sup>1</sup> <sup>∈</sup> <sup>N</sup> such that

$$\left(1 - \frac{\mu \zeta_n}{\zeta_{n+1}}\right) > \epsilon > 0, \ \forall n \ge N_1. \tag{16}$$

Thus, from (15), we obtain

$$\|z_n - u^*\|^2 \le \|u_n - u^*\|^2, \ \forall n \ge N_1. \tag{17}$$

Let *u*∗ ∈ *SVI*(*F*, C). By the definition of the sequence {*un*+1} and since *f* is a contraction with constant *ρ* ∈ [0, 1), for *n* ≥ *N*1 we obtain

$$\begin{split} \|u_{n+1} - u^*\| &= \|\beta_n f(u_n) + (1 - \beta_n)z_n - u^*\| \\ &= \|\beta_n [f(u_n) - u^*] + (1 - \beta_n)[z_n - u^*]\| \\ &= \|\beta_n [f(u_n) - f(u^*) + f(u^*) - u^*] + (1 - \beta_n)[z_n - u^*]\| \\ &\le \beta_n \|f(u_n) - f(u^*)\| + \beta_n \|f(u^*) - u^*\| + (1 - \beta_n)\|z_n - u^*\| \\ &\le \beta_n \rho \|u_n - u^*\| + \beta_n \|f(u^*) - u^*\| + (1 - \beta_n)\|z_n - u^*\|. \end{split} \tag{18}$$

Combining expressions (17) and (18) with {*βn*} ⊂ (0, 1), we have

$$\begin{split} \|u_{n+1} - u^*\| &\le \beta_n \rho \|u_n - u^*\| + \beta_n \|f(u^*) - u^*\| + (1 - \beta_n)\|u_n - u^*\| \\ &= \left[1 - \beta_n + \rho \beta_n\right] \|u_n - u^*\| + \beta_n (1 - \rho)\frac{\|f(u^*) - u^*\|}{1 - \rho} \\ &\le \max\left\{\|u_n - u^*\|, \frac{\|f(u^*) - u^*\|}{1 - \rho}\right\} \\ &\le \max\left\{\|u_{N_1} - u^*\|, \frac{\|f(u^*) - u^*\|}{1 - \rho}\right\}. \end{split} \tag{19}$$

Finally, we deduce that the sequence {*un*} is bounded.

**Claim 2:** If lim*n*→∞ ‖*un* − *vn*‖ = 0, then there is a subsequence {*unk*} of {*un*} such that *unk* ⇀ *u*∗ ∈ *SVI*(*F*, C) as *k* → ∞.

The reflexivity of H and the boundedness of {*un*} imply that there exists a subsequence {*unk*} such that *unk* ⇀ *u*∗ ∈ H as *k* → ∞. It suffices to prove that *u*∗ ∈ *SVI*(*F*, C). Since lim*n*→∞ ‖*un* − *vn*‖ = 0, we also have *vnk* ⇀ *u*∗ as *k* → ∞. In addition, we have

$$v_{n_k} = P_{\mathcal{C}}[u_{n_k} - \zeta_{n_k} F(u_{n_k})],$$

which, by Lemma 2 (ii), is equivalent to

$$\langle u_{n_k} - \zeta_{n_k} F(u_{n_k}) - v_{n_k}, y - v_{n_k} \rangle \le 0, \ \forall y \in \mathcal{C}.$$

That is, we have

$$
\langle u_{n_k} - v_{n_k}, y - v_{n_k} \rangle \le \zeta_{n_k} \langle F(u_{n_k}), y - v_{n_k} \rangle, \ \forall y \in \mathcal{C}. \tag{20}
$$

From the monotonicity condition on *F*, we have

$$\langle F(u_{n_k}) - F(y), u_{n_k} - y \rangle \ge 0, \ \forall y \in \mathcal{C},$$

that is

$$
\langle F(y), y - u_{n_k} \rangle \ge \langle F(u_{n_k}), y - u_{n_k} \rangle, \ \forall y \in \mathcal{C}. \tag{21}
$$

Combining expressions (20) and (21), we obtain

$$\begin{split} 0 &\le \langle v_{n_k} - u_{n_k}, y - v_{n_k} \rangle + \zeta_{n_k} \langle F(u_{n_k}), y - v_{n_k} \rangle \\ &= \langle v_{n_k} - u_{n_k}, y - v_{n_k} \rangle + \zeta_{n_k} \langle F(u_{n_k}), y - u_{n_k} \rangle + \zeta_{n_k} \langle F(u_{n_k}), u_{n_k} - v_{n_k} \rangle \\ &\le \langle v_{n_k} - u_{n_k}, y - v_{n_k} \rangle + \zeta_{n_k} \langle F(y), y - u_{n_k} \rangle + \zeta_{n_k} \langle F(u_{n_k}), u_{n_k} - v_{n_k} \rangle, \end{split} \tag{22}$$

for all *y* ∈ C, since lim*k*→∞ *ζnk* = *ζ* > 0 (see Lemma 6) and the sequence {*un*} is bounded in H. Since lim*n*→∞ ‖*un* − *vn*‖ = 0, passing the limit in (22) as *k* → ∞ gives

$$
\langle F(y), y - u^\* \rangle \ge 0, \,\forall y \in \mathcal{C}. \tag{23}
$$

Applying the well-known Minty lemma (Lemma 5), we infer that *u*∗ ∈ *SVI*(*F*, C).

**Claim 3:** The sequence {*un*} converges strongly in H.

We now show the strong convergence of the sequence {*un*}. The continuity and monotonicity of the operator *F* and the Minty lemma give that *SVI*(*F*, C) is a closed and convex set (see [37,38] for more details). Since the mapping *f* is a contraction, so is *PSVI*(*F*,C) ◦ *f*. The Banach contraction principle then guarantees that there exists a unique element *u*∗ ∈ *SVI*(*F*, C) such that

$$
u^* = P_{SVI(F,\mathcal{C})} (f(u^*)).
$$

Hence, we have

$$
\langle f(u^*) - u^*, y - u^* \rangle \ge 0, \ \forall y \in SVI(F, \mathcal{C}). \tag{24}
$$

Now, considering *un*+<sup>1</sup> = *β<sup>n</sup> f*(*un*)+(1 − *βn*)*zn*, and using Lemma 1 (i) and Lemma 7, we have

$$\begin{split} \|u_{n+1} - u^*\|^2 &= \|\beta_n f(u_n) + (1 - \beta_n)z_n - u^*\|^2 \\ &= \|\beta_n [f(u_n) - u^*] + (1 - \beta_n)[z_n - u^*]\|^2 \\ &= \beta_n \|f(u_n) - u^*\|^2 + (1 - \beta_n)\|z_n - u^*\|^2 - \beta_n(1 - \beta_n)\|f(u_n) - z_n\|^2 \\ &\le \beta_n \|f(u_n) - u^*\|^2 + (1 - \beta_n)\Big[\|u_n - u^*\|^2 - \Big(1 - \frac{\mu \zeta_n}{\zeta_{n+1}}\Big)\|u_n - v_n\|^2 \\ &\quad - \Big(1 - \frac{\mu \zeta_n}{\zeta_{n+1}}\Big)\|z_n - v_n\|^2\Big] - \beta_n(1 - \beta_n)\|f(u_n) - z_n\|^2 \\ &\le \beta_n \|f(u_n) - u^*\|^2 + \|u_n - u^*\|^2 - (1 - \beta_n)\Big(1 - \frac{\mu \zeta_n}{\zeta_{n+1}}\Big)\Big[\|z_n - v_n\|^2 + \|u_n - v_n\|^2\Big]. \end{split} \tag{25}$$

The remainder of the proof can be divided into two cases:

**Case 1:** Assume that there is a fixed number *<sup>N</sup>*<sup>2</sup> <sup>∈</sup> <sup>N</sup> (*N*<sup>2</sup> <sup>≥</sup> *<sup>N</sup>*1) such that

$$\|u_{n+1} - u^*\| \le \|u_n - u^*\|, \ \forall n \ge N_2. \tag{26}$$

Thus, lim*n*→∞ ‖*un* − *u*∗‖ exists; let lim*n*→∞ ‖*un* − *u*∗‖ = *l*. By using expression (25), we have

$$\begin{split} &(1 - \beta_n) \left( 1 - \frac{\mu \zeta_n}{\zeta_{n+1}} \right) \left[ \|z_n - v_n\|^2 + \|u_n - v_n\|^2 \right] \\ &\le \beta_n \|f(u_n) - u^*\|^2 + \|u_n - u^*\|^2 - \|u_{n+1} - u^*\|^2. \end{split} \tag{27}$$

Due to the existence of lim*n*→∞ ‖*un* − *u*∗‖ = *l* and *βn* → 0, we obtain

$$\lim\_{n \to \infty} \|u\_n - v\_n\| = \lim\_{n \to \infty} \|z\_n - v\_n\| = 0. \tag{28}$$

It follows that

$$\lim_{n \to \infty} \|u_n - z_n\| \le \lim_{n \to \infty} \|u_n - v_n\| + \lim_{n \to \infty} \|v_n - z_n\| = 0. \tag{29}$$

Hence, we obtain

$$\begin{aligned} \|u_{n+1} - u_n\| &= \|\beta_n f(u_n) + (1 - \beta_n)z_n - u_n\| \\ &= \|\beta_n [f(u_n) - u_n] + (1 - \beta_n)[z_n - u_n]\| \\ &\le \beta_n \|f(u_n) - u_n\| + (1 - \beta_n)\|z_n - u_n\| \to 0. \end{aligned} \tag{30}$$

The boundedness of the sequence {*un*} implies that the sequences {*vn*} and {*zn*} are also bounded. Thus, we can take a subsequence {*unk*} of {*un*} such that {*unk*} converges weakly to some *u*ˆ ∈ C and

$$\begin{split} \limsup\_{n \to \infty} \langle f(u^\*) - u^\*, u\_n - u^\* \rangle \\ = \limsup\_{k \to \infty} \langle f(u^\*) - u^\*, u\_{n\_k} - u^\* \rangle = \langle f(u^\*) - u^\*, \hat{u} - u^\* \rangle \le 0. \end{split} \tag{31}$$

We have lim*n*→∞ ‖*un*+1 − *un*‖ = 0. It follows that

$$\begin{split} &\limsup_{n \to \infty} \langle f(u^*) - u^*, u_{n+1} - u^* \rangle \\ &\le \limsup_{n \to \infty} \langle f(u^*) - u^*, u_{n+1} - u_n \rangle + \limsup_{n \to \infty} \langle f(u^*) - u^*, u_n - u^* \rangle \le 0. \end{split} \tag{32}$$

From Lemma 7 and Lemma 1 (ii) (∀ *n* ≥ *N*2), we obtain

$$\begin{split} \|u_{n+1} - u^*\|^2 &= \|\beta_n f(u_n) + (1 - \beta_n)z_n - u^*\|^2 \\ &= \|\beta_n [f(u_n) - u^*] + (1 - \beta_n)[z_n - u^*]\|^2 \\ &\le (1 - \beta_n)^2 \|z_n - u^*\|^2 + 2\beta_n \langle f(u_n) - u^*, (1 - \beta_n)[z_n - u^*] + \beta_n [f(u_n) - u^*] \rangle \\ &= (1 - \beta_n)^2 \|z_n - u^*\|^2 + 2\beta_n \langle f(u_n) - f(u^*) + f(u^*) - u^*, u_{n+1} - u^* \rangle \\ &= (1 - \beta_n)^2 \|z_n - u^*\|^2 + 2\beta_n \langle f(u_n) - f(u^*), u_{n+1} - u^* \rangle + 2\beta_n \langle f(u^*) - u^*, u_{n+1} - u^* \rangle \\ &\le (1 - \beta_n)^2 \|z_n - u^*\|^2 + 2\beta_n \rho \|u_n - u^*\| \|u_{n+1} - u^*\| + 2\beta_n \langle f(u^*) - u^*, u_{n+1} - u^* \rangle \\ &\le (1 + \beta_n^2 - 2\beta_n) \|u_n - u^*\|^2 + 2\beta_n \rho \|u_n - u^*\|^2 + 2\beta_n \langle f(u^*) - u^*, u_{n+1} - u^* \rangle \\ &= [1 - 2\beta_n(1 - \rho)] \|u_n - u^*\|^2 + \beta_n^2 \|u_n - u^*\|^2 + 2\beta_n \langle f(u^*) - u^*, u_{n+1} - u^* \rangle \\ &= [1 - 2\beta_n(1 - \rho)] \|u_n - u^*\|^2 + 2\beta_n(1 - \rho) \left[ \frac{\beta_n \|u_n - u^*\|^2}{2(1 - \rho)} + \frac{\langle f(u^*) - u^*, u_{n+1} - u^* \rangle}{1 - \rho} \right]. \end{split} \tag{33}$$

It follows from (32) that

$$\limsup\_{n \to \infty} \left[ \frac{\beta\_n ||u\_n - u^\*||^2}{2(1 - \rho)} + \frac{\langle f(u^\*) - u^\*, u\_{n+1} - u^\* \rangle}{1 - \rho} \right] \le 0. \tag{34}$$

Choose $n \ge N_3 \in \mathbb{N}$ (with $N_3 \ge N_2$) large enough that $2\beta_n(1-\rho) < 1$. Now, by using expressions (33) and (34) and applying Lemma 3, we conclude that $\|u_n - u^*\| \to 0$ as $n \to \infty$.

**Case 2:** Assume that there is a subsequence {*ni*} of {*n*} such that

$$\|u_{n_i} - u^*\| \le \|u_{n_{i+1}} - u^*\|, \;\forall i \in \mathbb{N}.$$

Thus, by Lemma 4, there exists a sequence $\{m_k\} \subset \mathbb{N}$ with $m_k \to \infty$ such that

$$\|u_{m_k} - u^*\| \le \|u_{m_k+1} - u^*\| \;\text{ and }\; \|u_k - u^*\| \le \|u_{m_k+1} - u^*\|, \;\forall k \in \mathbb{N}.\tag{35}$$

Similarly to Case 1, from (25) we obtain

$$(1 - \beta_{m_k})\left(1 - \frac{\mu \zeta_{m_k}}{\zeta_{m_k+1}}\right)\left[\|z_{m_k} - v_{m_k}\|^2 + \|u_{m_k} - v_{m_k}\|^2\right]$$

$$\le \beta_{m_k}\|f(u_{m_k}) - u^*\|^2 + \|u_{m_k} - u^*\|^2 - \|u_{m_k+1} - u^*\|^2. \tag{36}$$

Due to $\beta_{m_k} \to 0$ and $1 - \frac{\mu\zeta_{m_k}}{\zeta_{m_k+1}} \to 1 - \mu$, we deduce the following:

$$\lim_{k \to \infty}\|u_{m_k} - v_{m_k}\| = \lim_{k \to \infty}\|z_{m_k} - v_{m_k}\| = 0. \tag{37}$$

It follows that

$$\lim\_{k \to \infty} \left\| u\_{\mathfrak{m}\_k} - z\_{\mathfrak{m}\_k} \right\| \le \lim\_{k \to \infty} \left\| u\_{\mathfrak{m}\_k} - v\_{\mathfrak{m}\_k} \right\| + \lim\_{k \to \infty} \left\| v\_{\mathfrak{m}\_k} - z\_{\mathfrak{m}\_k} \right\| = 0. \tag{38}$$

Similar to case 1, we can easily obtain that

$$\lim\_{k \to \infty} \|u\_{m\_{k+1}} - u\_{m\_k}\| = 0, \quad \text{and} \quad \limsup\_{k \to \infty} \langle f(u^\*) - u^\*, u\_{m\_k + 1} - u^\* \rangle \le 0. \tag{39}$$

By using (35) and the same argument as in (33), we have

$$\begin{split}
\left\|u_{m_k+1} - u^*\right\|^2 &\le \big[1 - 2\beta_{m_k}(1-\rho)\big]\left\|u_{m_k} - u^*\right\|^2 + 2\beta_{m_k}(1-\rho)\left[\frac{\beta_{m_k}\left\|u_{m_k} - u^*\right\|^2}{2(1-\rho)} + \frac{\langle f(u^*) - u^*, u_{m_k+1} - u^*\rangle}{1-\rho}\right] \\
&\le \big[1 - 2\beta_{m_k}(1-\rho)\big]\left\|u_{m_k+1} - u^*\right\|^2 + 2\beta_{m_k}(1-\rho)\left[\frac{\beta_{m_k}\left\|u_{m_k} - u^*\right\|^2}{2(1-\rho)} + \frac{\langle f(u^*) - u^*, u_{m_k+1} - u^*\rangle}{1-\rho}\right]. \end{split}\tag{40}$$

It follows that

$$\left\|u\_{m\_k+1} - u^\*\right\|^2 \le \frac{\beta\_{m\_k} \left\|u\_{m\_k} - u^\*\right\|^2}{2(1-\rho)} + \frac{\langle f(u^\*) - u^\*, u\_{m\_k+1} - u^\* \rangle}{1-\rho}.\tag{41}$$

Due to $\beta_{m_k} \to 0$ as $k \to \infty$, and $\limsup_{k \to \infty}\langle f(u^*) - u^*, u_{m_k+1} - u^*\rangle \le 0$, we obtain

$$\|u_{m_k+1} - u^*\|^2 \to 0, \text{ as } k \to \infty. \tag{42}$$

Finally, by (35) and (42), we obtain the inequality

$$\lim_{k \to \infty}\left\|u_k - u^*\right\|^2 \le \lim_{k \to \infty}\left\|u_{m_k+1} - u^*\right\|^2 \le 0. \tag{43}$$

Consequently, *un* → *u*∗. This completes the proof of the theorem.

### **4. Numerical Illustrations**

The experimental results discussed in this section illustrate the efficacy of our proposed Algorithm 1 (m-EgA3) compared with Algorithm 1 (m-EgA1) and Algorithm 2 (m-EgA2) in [30].

**Example 1.** *Consider the HpHard problem which is taken from [39] and considered by many authors for numerical tests (see [40–42]), where <sup>F</sup>* : <sup>R</sup>*<sup>m</sup>* <sup>→</sup> <sup>R</sup>*<sup>m</sup> is an operator defined by <sup>F</sup>*(*u*) = *Mu* <sup>+</sup> *<sup>q</sup> with <sup>q</sup>* <sup>∈</sup> <sup>R</sup>*<sup>m</sup> and*

$$M = NN^T + B + D,$$

*where N is an m* × *m matrix, B is an m* × *m skew–symmetric matrix and D is an m* × *m positive definite diagonal matrix. The feasible set is defined by*

$$\mathcal{C} = \{u \in \mathbb{R}^m : Qu \le b\},$$

*where $Q$ is a $100 \times m$ matrix and $b$ is a nonnegative vector in $\mathbb{R}^{100}$. It is clear that $F$ is monotone and Lipschitz continuous with $L = \|M\|$. For $q = 0$, the solution set of the corresponding variational inequality is $VI(\mathcal{C}, F) = \{0\}$. In this experiment, we take the initial point $u_0 = (1, 1, \cdots, 1)$ and the stopping criterion $D_n = \|u_n - v_n\| \le TOL = 10^{-3}$. Moreover, the control parameters are $\zeta_0 = \frac{0.7}{L}$ and $\mu = 0.9$ for Algorithm 1 (m-EgA1) in [30]; $\zeta_0 = \frac{0.7}{L}$, $\mu = 0.9$ and $\beta_n = \frac{1}{30(n+2)}$ for Algorithm 2 (m-EgA2) in [30]; $\zeta_0 = \frac{0.7}{L}$, $\mu = 0.9$, $\beta_n = \frac{1}{n+4}$ and $f(u) = \frac{u}{2}$ for Algorithm 1 (m-EgA3). The numerical results of all methods are reported in Figures 1–8 and Table 1.*
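The construction of Example 1 can be sketched in code. The helper below is a hypothetical setup (the function name `make_hphard` and the random entry ranges are our assumptions, not taken from the paper); it only builds the operator and its Lipschitz constant, not the algorithms themselves.

```python
import numpy as np

def make_hphard(m, seed=0):
    """Sketch of the operator F(u) = Mu + q of Example 1 with
    M = N N^T + B + D; the random entry ranges are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    N = rng.uniform(-5.0, 5.0, (m, m))        # arbitrary m x m matrix
    S = rng.uniform(-5.0, 5.0, (m, m))
    B = S - S.T                               # skew-symmetric matrix
    D = np.diag(rng.uniform(0.1, 5.0, m))     # positive definite diagonal matrix
    M = N @ N.T + B + D
    q = np.zeros(m)                           # q = 0, so VI(C, F) = {0}
    L = np.linalg.norm(M, 2)                  # Lipschitz constant L = ||M||
    return (lambda u: M @ u + q), L

F, L = make_hphard(5)
u0 = np.ones(5)                               # the initial point (1, 1, ..., 1)
# F is monotone: <F(u) - F(v), u - v> = (u-v)^T M (u-v) >= 0, because the
# skew-symmetric part B contributes nothing and N N^T + D is positive definite.
```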

**Figure 1.** Numerical behaviour of Algorithm 1 compared to Algorithm 1 in [30] and Algorithm 2 in [30] for Example 1, when *m* = 5.

**Figure 2.** Numerical behaviour of Algorithm 1 compared to Algorithm 1 in [30] and Algorithm 2 in [30] for Example 1, when *m* = 5.

**Figure 3.** Numerical behaviour of Algorithm 1 compared to Algorithm 1 in [30] and Algorithm 2 in [30] for Example 1, when *m* = 10.

**Figure 4.** Numerical behaviour of Algorithm 1 compared to Algorithm 1 in [30] and Algorithm 2 in [30] for Example 1, when *m* = 10.

**Figure 5.** Numerical behaviour of Algorithm 1 compared to Algorithm 1 in [30] and Algorithm 2 in [30] for Example 1, when *m* = 20.

**Figure 6.** Numerical behaviour of Algorithm 1 compared to Algorithm 1 in [30] and Algorithm 2 in [30] for Example 1, when *m* = 20.

**Figure 7.** Numerical behaviour of Algorithm 1 compared to Algorithm 1 in [30] and Algorithm 2 in [30] for Example 1, when *m* = 50.

**Figure 8.** Numerical behaviour of Algorithm 1 compared to Algorithm 1 in [30] and Algorithm 2 in [30] for Example 1, when *m* = 50.


**Table 1.** Numerical results for Figures 1–8.

**Example 2.** *Assume that* H = *L*2([0, 1]) *is a Hilbert space with an inner product*

$$\langle u, v \rangle = \int_0^1 u(t)v(t)\,dt, \;\forall\, u, v \in \mathbb{H},$$

*and the induced norm is*

$$||u|| = \sqrt{\int\_0^1 |u(t)|^2 dt}.$$

*Let $\mathcal{C} := \{u \in L^2([0,1]) : \|u\| \le 1\}$ be the unit ball and let $F : \mathcal{C} \to \mathbb{H}$ be defined by*

$$F(u)(t) = \int\_0^1 \left( u(t) - H(t,s)f(u(s)) \right) ds + g(t),$$

*where*

$$H(t,s) = \frac{2t s e^{(t+s)}}{e \sqrt{e^2 - 1}}, \quad f(u) = \cos(u), \quad g(t) = \frac{2t e^t}{e \sqrt{e^2 - 1}}.$$

*We can see in [41] that $F$ is Lipschitz-continuous with Lipschitz constant $L = 2$ and monotone. In this experiment, we take different initial points $u_0$ and the stopping criterion $D_n = \|u_n - v_n\| \le TOL = 10^{-3}$. Moreover, the control parameters are $\zeta_0 = \frac{0.6}{L}$ and $\mu = 0.45$ for Algorithm 1 (m-EgA1) in [30]; $\zeta_0 = \frac{0.6}{L}$, $\mu = 0.45$ and $\beta_n = \frac{1}{100(n+2)}$ for Algorithm 2 (m-EgA2) in [30]; $\zeta_0 = \frac{0.6}{L}$, $\mu = 0.45$, $\beta_n = \frac{1}{n+2}$ and $f(u) = \frac{u}{3}$ for Algorithm 1 (m-EgA3). The numerical results are reported in Figures 9–11 and Table 2.*
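For intuition, the integral operator $F$ of Example 2 can be evaluated numerically on a grid. The sketch below is our own discretization (the grid size and trapezoidal quadrature are assumptions); it exploits the fact that $H(t,s)$ and $g(t)$ share the common factor $\frac{2te^t}{e\sqrt{e^2-1}}$.

```python
import numpy as np

def F(u, t):
    """Discretized F(u)(t) = int_0^1 (u(t) - H(t,s) cos(u(s))) ds + g(t),
    using trapezoidal quadrature on the grid t (an illustrative choice)."""
    n = len(t)
    w = np.full(n, 1.0 / (n - 1))            # trapezoidal quadrature weights
    w[0] = w[-1] = 0.5 / (n - 1)             # (the weights sum to 1)
    c = 2.0 / (np.e * np.sqrt(np.e**2 - 1.0))
    phi = t * np.exp(t)                      # H(t,s) = c*phi(t)*phi(s), g = c*phi
    H = c * np.outer(phi, phi)
    g = c * phi
    # int_0^1 u(t) ds = u(t), so only the kernel term needs quadrature:
    return u - (H * np.cos(u)[None, :]) @ w + g

t = np.linspace(0.0, 1.0, 201)
val = F(np.sin(t), t)                        # e.g. at the initial point u0 = sin(t)
```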

**Figure 9.** Numerical behaviour of Algorithm 1 compared to Algorithm 1 in [30] and Algorithm 2 in [30] for Example 2, when *u*<sup>0</sup> = *t*.

**Figure 10.** Numerical behaviour of Algorithm 1 compared to Algorithm 1 in [30] and Algorithm 2 in [30] for Example 2, when *u*<sup>0</sup> = sin(*t*).

**Figure 11.** Numerical behaviour of Algorithm 1 compared to Algorithm 1 in [30] and Algorithm 2 in [30] for Example 2, when *u*<sup>0</sup> = cos(*t*).


**Table 2.** Numerical comparison values for Figures 9–11.

**Example 3.** *Let $F : \mathbb{R}^2 \to \mathbb{R}^2$ be defined by*

$$F\begin{pmatrix}u_1\\u_2\end{pmatrix} = \begin{pmatrix}u_1+u_2+\sin(u_1)\\-u_1+u_2+\sin(u_2)\end{pmatrix}, \quad \forall \begin{pmatrix}u_1\\u_2\end{pmatrix} \in \mathbb{R}^2$$

*and* C *is taken as*

$$\mathcal{C} = \{u = (u_1, u_2)^T \in \mathbb{R}^2 : 0 \le u_i \le 10,\; i = 1, 2\}.$$

*This problem was proposed in [43], where $F$ is $L$-Lipschitz continuous with Lipschitz constant $L = \sqrt{10}$ and monotone. In this experiment, we take different initial points $u_0$ and the stopping criterion $D_n = \|u_n - v_n\| \le TOL$. Moreover, the control parameters are $\zeta_0 = \frac{0.7}{L}$ and $\mu = 0.50$ for Algorithm 1 (m-EgA1) in [30]; $\zeta_0 = \frac{0.7}{L}$, $\mu = 0.50$ and $\beta_n = \frac{1}{100(n+2)}$ for Algorithm 2 (m-EgA2) in [30]; $\zeta_0 = \frac{0.7}{L}$, $\mu = 0.50$, $\beta_n = \frac{1}{100(n+2)}$ and $f(u) = \frac{u}{4}$ for Algorithm 1 (m-EgA3). Table 3 reports the numerical results for different tolerances and initial points.*
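As a point of reference, Example 3 can be solved by a plain fixed-step extragradient iteration. The sketch below is only a baseline: it is NOT the paper's Algorithm 1, whose adaptive step size and viscosity term are defined elsewhere; the iteration count and initial point are our choices.

```python
import numpy as np

def F(u):
    """The operator of Example 3."""
    return np.array([u[0] + u[1] + np.sin(u[0]),
                     -u[0] + u[1] + np.sin(u[1])])

def proj(u):
    """Projection onto the box C = [0, 10]^2."""
    return np.clip(u, 0.0, 10.0)

L = np.sqrt(10.0)
zeta = 0.7 / L                               # fixed step with zeta * L < 1
u = np.array([1.0, 2.0])                     # an illustrative initial point
for _ in range(2000):
    v = proj(u - zeta * F(u))                # v_n = P_C(u_n - zeta * F(u_n))
    u = proj(u - zeta * F(v))                # extragradient correction step
# u approaches the solution u* = (0, 0), where F(u*) = 0 and u* lies in C
```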


**Table 3.** Numerical behaviour of Algorithm 1 compared to Algorithm 1 in [30] and Algorithm 2 in [30] for Example 3 by using different initial points *u*0.

**Author Contributions:** Data curation, N.W.; formal analysis, M.Y.; funding acquisition, N.P. (Nuttapol Pakkaranang) and N.P. (Nattawut Pholasa); investigation, N.W., N.P. (Nuttapol Pakkaranang) and H.u.R.; methodology, H.u.R.; project administration, H.u.R., N.P. (Nattawut Pholasa) and M.Y.; resources, N.P. (Nattawut Pholasa); software, H.u.R.; supervision, H.u.R. and N.P. (Nuttapol Pakkaranang); Writing—original draft, N.W. and H.u.R.; Writing—review and editing, N.P. (Nuttapol Pakkaranang). All authors have read and agreed to the published version of the manuscript.

**Funding:** This research was funded by School of Science, University of Phayao, Phayao, Thailand (Grant No. UoE 63002).

**Acknowledgments:** We are very grateful to the editor and the anonymous referees for their valuable and useful comments, which helped improve the quality of this work. N. Wairojjana would like to thank Valaya Alongkorn Rajabhat University under the Royal Patronage (VRU). N. Pholasa was partially supported by University of Phayao.

**Conflicts of Interest:** The authors declare no conflict of interest.

### **References**


**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

### *Article* **Reconstruction of Piecewise Smooth Multivariate Functions from Fourier Data**

### **David Levin**

School of Mathematical Sciences, Tel-Aviv University, Tel Aviv 6997801, Israel; levin@tauex.tau.ac.il

Received: 24 June 2020; Accepted: 22 July 2020; Published: 24 July 2020

**Abstract:** In some applications, one is interested in reconstructing a function *f* from its Fourier series coefficients. The problem is that the Fourier series is slowly convergent if the function is non-periodic or non-smooth. In this paper, we suggest a method for deriving a high order approximation to *f* using a Padé-like method. Namely, we do this by fitting some Fourier coefficients of the approximant to the given Fourier coefficients of *f*. Given the Fourier series coefficients of a function on a rectangular domain in $\mathbb{R}^d$, assuming the function is piecewise smooth, we approximate the function by piecewise high order spline functions. First, the singularity structure of the function is identified; for example, in the 2D case, we find a high accuracy approximation to the curves separating the smooth segments of *f*. Second, we simultaneously find the approximations of all the different segments of *f*. We start by developing and demonstrating a high accuracy algorithm for the 1D case, and we use this algorithm to step up to the multidimensional case.

**Keywords:** Fourier data; reconstruction; multivariate approximation; piecewise smooth

### **1. Introduction**

Fourier series expansion is a useful tool for representing and approximating functions, with applications in many areas of applied mathematics. The quality of the approximation depends on the smoothness of the approximated function and on whether or not it is periodic. For functions that are not periodic, the convergence rate is slow near the boundaries and the approximation by partial sums exhibits the Gibbs phenomenon. Several approaches have been used to improve the convergence rate, mostly for the one-dimensional case. One approach is to filter out the oscillations, as discussed in several papers [1,2]. Another useful approach is to transform the Fourier series into an expansion in a different basis. For the univariate case this approach is shown to be very efficient, as shown in [1] using Gegenbauer polynomials with suitably chosen parameters. Further improvement of this approach is presented in [3] using Freud polynomials, achieving very good results for univariate functions with singularities.

An algebraic approach for reconstructing a piecewise smooth univariate function from its first *N* Fourier coefficients has been realized by Eckhoff in a series of papers [4–6]. There, the "jumps" are determined by a corresponding system of linear equations. A full analysis of this approach is presented by Batenkov [7]. Nersessian and Poghosyan [8] have used a rational Padé type approximation strategy for approximating univariate non-periodic smooth functions. For multiple Fourier series of smooth non-periodic functions, a convergence acceleration approach was suggested by Levin and Sidi [9]. More challenging is the case of multivariate functions with discontinuities, i.e., functions that are piecewise smooth. Here again, the convergence rate is slow, and near the discontinuities, the approximation exhibits the Gibbs phenomenon. In this paper, we present a Padé-like approach consisting of finding a piecewise-defined spline whose Fourier coefficients match the given Fourier coefficients.

The main contribution of this paper is demonstrating that this approach can be successfully applied to the multivariate case. Namely, we present a strategy for approximating both non-periodic

and non-smooth multivariate functions. We derive the numerical procedures involved and provide some interesting numerical results. We start by developing and demonstrating a high accuracy algorithm for the 1D case, and use this algorithm to step up to the multidimensional case.

### **2. The 1D Case**

In this section, we present the main tools for function approximation using its Fourier series coefficients. We define the basis functions and describe the fitting strategy and develop the computation algorithm. After dealing with the smooth case we move on to approximate a piecewise smooth function with a jump singularity.

### *2.1. Reconstructing Smooth Non-Periodic Functions*

Let $f \in C^m[0,1]$, and assume we know the Fourier series expansion of $f$:

$$f(\mathbf{x}) = \sum\_{n \in \mathbb{Z}} \hat{f}\_n e^{2\pi inx}. \tag{1}$$

The series converges pointwise for any $x \in [0,1]$; however, if $f$ is not periodic, the convergence may be slow, and if $f(1) \neq f(0)$ the convergence is not uniform and the Gibbs phenomenon occurs near 0 and near 1. As discussed in [9,10], one can apply convergence acceleration techniques for improving the convergence rate of the series. Another convergence acceleration approach was suggested by Gottlieb and Shu [1] using Gegenbauer polynomials. Yet, in both approaches, the convergence rate is not much improved near 0 and near 1. We suggest an approach in the spirit of Padé approximation. A Padé approximant is a rational function whose power series agrees as much as possible with the given power series of $f$. Here we look for approximations to $f$ whose Fourier coefficients agree with a subset of the given Fourier coefficients of $f$. The approximation space can be any favorable linear approximation space, such as polynomials or trigonometric functions.

We choose to build the approximation using *k*th order spline functions, represented in the B-spline basis:

$$S\_d^{[k]}(\mathbf{x}) = \sum\_{j=1}^{N\_d} a\_j B\_d^{[k]}(\mathbf{x} - jd). \tag{2}$$

$B_d^{[k]}(x)$ is the B-spline of order $k$ with equidistant knots $\{-kd, \ldots, -2d, -d, 0\}$, and $N_d = 1/d + k - 1$ is the number of B-splines whose shifts do not vanish in $[0,1]$. The advantage of using spline functions is threefold:


The B-spline basis functions used in the 1D case are shown in Figure 1. We denote by $\mathcal{S} \equiv \mathcal{S}_d^{[k]}|_{[0,1]}$ the restriction of $\mathcal{S}_d^{[k]}$ to the interval $[0,1]$. We find the coefficients $\{a_i\}_{i=1}^{N_d}$ by least-squares fitting, matching the first $M+1$ Fourier coefficients of $\mathcal{S}$ to the corresponding $M+1$ Fourier coefficients of $f$. That is,

$$\{a_i\}_{i=1}^{N_d} = \arg\min \sum_{n=0}^{M} |\hat{f}_n - \hat{\mathcal{S}}_n|^2. \tag{3}$$

Notice that it is enough to consider the Fourier coefficients with non-negative indices.

**Figure 1.** The B-splines used in Example 1.

We denote by $B_i \equiv B_d^{[k]}(\cdot - id)|_{[0,1]}$ the restriction of $B_d^{[k]}(\cdot - id)$ to the interval $[0,1]$, and by $\{\hat{B}_{i,n}\}$ its Fourier coefficients. The normal equations for the least squares problem (3) induce the linear system $Aa = b$ for $a = \{a_i\}_{i=1}^{N_d}$, where

$$A_{i,j} = \sum_{n=0}^{M}\left[\operatorname{Re}(\hat{B}_{i,n})\operatorname{Re}(\hat{B}_{j,n}) + \operatorname{Im}(\hat{B}_{i,n})\operatorname{Im}(\hat{B}_{j,n})\right], \; 1 \le i, j \le N_d. \tag{4}$$

and

$$b_{i} = \sum_{n=0}^{M}\left[\operatorname{Re}(\hat{B}_{i,n})\operatorname{Re}(\hat{f}_{n}) + \operatorname{Im}(\hat{B}_{i,n})\operatorname{Im}(\hat{f}_{n})\right], \; 1 \le i \le N_d. \tag{5}$$
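In matrix form, (4) and (5) are one line each. The sketch below assumes the B-spline Fourier coefficients are stored as a complex array `Bhat` with `Bhat[i, n]` holding $\hat{B}_{i,n}$ for $n = 0, \ldots, M$ (the array layout is our assumption).

```python
import numpy as np

def normal_equations(Bhat, fhat):
    """Assemble A of (4) and b of (5).  Since
    Re(x)Re(y) + Im(x)Im(y) = Re(x * conj(y)),
    both sums collapse into real parts of complex matrix products."""
    A = np.real(Bhat @ Bhat.conj().T)   # A[i,j] = sum_n Re*Re + Im*Im
    b = np.real(Bhat @ fhat.conj())     # b[i]   = sum_n Re*Re + Im*Im
    return A, b

# The spline coefficients then solve A a = b, e.g. a = np.linalg.pinv(A) @ b.
```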

Numerical Example—The Smooth 1D Case

We consider the test function $f(x) = x\exp(x) + \sin(8x)$, assuming only its Fourier coefficients are given. We have used only the 20 Fourier coefficients $\{\hat{f}_n\}_{n=0}^{19}$, and computed an approximation using 12th degree splines with equidistant knots' distance $d = 0.1$. For this case, the matrix $A$ is of size $19 \times 19$, and $\mathrm{cond}(A) = 5.75 \times 10^{20}$. We have employed an iterative refinement algorithm described below to obtain a high precision solution. The results are shown in the following two figures. In Figure 2 we see the test function on the left and the approximation error on the right. Figure 3 presents the graph of $\log_{10}(|\hat{f}_n|)$ in blue and the graph of $\log_{10}(|\hat{f}_n - \hat{\mathcal{S}}_n|)$ in red, showing eight orders of magnitude reduction in the Fourier coefficients. Notice the matching of the first Fourier coefficients reflected in the beginning of the red graph.

**Remark 1.** *The powerful iterative refinement method described in [11,12] is as follows:*

*For solving a system $Ax = b$, we use some solver, e.g., the Matlab* `pinv` *function. We obtain the solution $x^{(0)} = \mathrm{pinv}(A)b$. Next we compute the residual $r^{(0)} = b - Ax^{(0)}$. In case $\mathrm{cond}(A)$ is very large, the residual will be large. Now we solve again the system with $r^{(0)}$ at the right-hand side, and use the solution to correct $x^{(0)}$, to obtain*

$$x^{(1)} = x^{(0)} + \mathrm{pinv}(A)r^{(0)}.$$

*We repeat this correction step a few times, i.e., $r^{(k)} = b - Ax^{(k)}$, and*

$$x^{(k+1)} = x^{(k)} + \mathrm{pinv}(A)r^{(k)},$$

*until the resulting residual $r^{(k)}$ is small enough.*
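A compact version of this refinement loop can be sketched as follows (the iteration cap and tolerance are our choices, not taken from the remark):

```python
import numpy as np

def refine(A, b, iters=10, tol=1e-14):
    """Iterative refinement of Remark 1 with the pseudo-inverse:
    x <- x + pinv(A) r with r = b - A x, until r is small enough."""
    Ainv = np.linalg.pinv(A)
    x = Ainv @ b                         # x^(0) = pinv(A) b
    for _ in range(iters):
        r = b - A @ x                    # current residual
        if np.linalg.norm(r) <= tol:
            break
        x = x + Ainv @ r                 # correction step
    return x
```

For the ill-conditioned systems arising above, a few such corrections typically reduce the residual by several orders of magnitude.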

**Figure 2.** The test function (**left**) and the spline approximation error (**right**).

**Figure 3.** log10 of the given Fourier coefficients (blue), and of the Fourier coefficients of the approximation error (red).

### *2.2. Reconstructing Non-Smooth Univariate Functions*

Let $f$ be a piecewise smooth function on $[0,1]$, composed of two pieces $f_1 \in C^m[0, s^*]$ and $f_2 \in C^m(s^*, 1]$, and assume that $f_2$ can be continuously extended to $[s^*, 1]$:

$$f(\mathbf{x}) = \begin{cases} f\_1(\mathbf{x}) & \mathbf{x} \ge \mathbf{s}^\*, \\ f\_2(\mathbf{x}) & \mathbf{x} < \mathbf{s}^\*. \end{cases} \tag{6}$$

Here again, we assume that all we know about *f* is its Fourier series expansion. In particular, we do not know the position *s*<sup>∗</sup> ∈ [0, 1] of the singularity of *f* . As in the case of a non-periodic function, the existence of a singularity in [0, 1] significantly influences the Fourier series coefficients and implies their slow decay. As we demonstrate below, good matching of the Fourier coefficients requires a good approximation of the singularity location. The approach we suggest here involves finding approximations to *f*<sup>1</sup> and *f*<sup>2</sup> simultaneously with a high precision identification of *s*∗.

Let *s* be an approximation of the singularity location *s*∗, and let us follow the algorithm suggested above for the smooth case. The difference here is that now we look for two separate spline approximations:

$$\mathcal{S}\_1 \equiv \mathcal{S}\_d^{[k]}|\_{[0,s]}(\mathbf{x}) = \sum\_{i=1}^{N\_d} a\_{1i} B\_d^{[k]}(\mathbf{x} - id)|\_{[0,s]} \sim f\_{1\prime} \tag{7}$$

*Axioms* **2020**, *9*, 88

and

$$\mathcal{S}\_2 \equiv \mathcal{S}\_d^{[k]}|\_{\left(s,1\right]}\left(\mathbf{x}\right) = \sum\_{i=1}^{N\_d} a\_{2i} B\_d^{[k]}\left(\mathbf{x} - id\right)|\_{\left(s,1\right]} \sim f\_2. \tag{8}$$

The combination *S* of *S*<sup>1</sup> and *S*<sup>2</sup> constitutes the approximation to *f* . Here again we aim at matching the first *M* + 1 Fourier coefficients of *f* and of *S*. Here *S* depends on the *Nd* coefficients {*a*1*i*} of *S*1, the *Nd* coefficients {*a*2*i*} of *S*<sup>2</sup> and on *s*. Therefore, the minimization process solves for all these unknowns:

$$\left[\{a_{1i}\}_{i=1}^{N_d}, \{a_{2i}\}_{i=1}^{N_d}, s\right] = \arg\min \sum_{n=0}^{M} |\hat{f}_n - \hat{\mathcal{S}}_n|^2. \tag{9}$$

The minimization is non-linear with respect to $s$, and linear with respect to the other unknowns. Therefore, the minimization problem is actually a one-parameter non-linear minimization problem, in the parameter $s$. Using the approximation power of $k$th order splines ($k \le m$), and considering the value of the objective cost function for $s = s^*$, we can deduce that the minimal value of $\sum_{n=0}^{M} |\hat{f}_n - \hat{\mathcal{S}}_n|^2$ is $O(d^{2k})$. We also observe that an $\epsilon$ deviation from $s^*$ implies a bounded deviation of the minimizing Fourier coefficients:

$$\max\_{n \in \mathbb{Z}} |\hat{f}\_n - \hat{S}\_n| \le c\_1 \epsilon + c\_2 d^k. \tag{10}$$

As shown below, these observations can be used for finding a good approximation to *s*∗.

We denote by $B_{1i} \equiv B_d^{[k]}(\cdot - id)|_{[0,s]}$ the restriction of $B_d^{[k]}(\cdot - id)$ to the interval $[0,s]$, and by $B_{2i} \equiv B_d^{[k]}(\cdot - id)|_{(s,1]}$ the restriction of $B_d^{[k]}(\cdot - id)$ to the interval $(s,1]$. We concatenate these two sequences of basis functions, $\{B_{1i}\}$ and $\{B_{2i}\}$, into one sequence $\{B_i\}_{i=1}^{2N_d}$, and denote their Fourier coefficients by $\{\hat{B}_{i,n}\}_{n\in\mathbb{Z}}$. For a given $s$, the normal equations for the least squares problem (9) induce the linear system $Aa = b$ for the splines' coefficients $a = (\{a_{1i}\}_{i=1}^{N_d}, \{a_{2i}\}_{i=1}^{N_d})$, where:

$$A_{i,j} = \sum_{n=0}^{M}\left[\operatorname{Re}(\hat{B}_{i,n})\operatorname{Re}(\hat{B}_{j,n}) + \operatorname{Im}(\hat{B}_{i,n})\operatorname{Im}(\hat{B}_{j,n})\right], \; 1 \le i, j \le 2N_d. \tag{11}$$

and

$$b_{i} = \sum_{n=0}^{M}\left[\operatorname{Re}(\hat{B}_{i,n})\operatorname{Re}(\hat{f}_{n}) + \operatorname{Im}(\hat{B}_{i,n})\operatorname{Im}(\hat{f}_{n})\right], \; 1 \le i \le 2N_d. \tag{12}$$

**Remark 2.** *Due to the locality of the B-splines, some of the basis functions* {*B*1*i*} *and* {*B*2*i*} *may be identically* 0*. It thus seems better to use only the non-zero basis functions. From our experience, since we use the generalized inverse approach for solving the system of equations, using all the basis functions gives the same solution.*

*The generalized inverse approach computes the least-squares solution to a system of linear equations that lacks a unique solution. It is also called the Moore–Penrose inverse, and is computed by Matlab pinv function.*

The above construction can be carried over to the case of several singular points.

### 2.2.1. Finding *s*∗

We present the strategy for finding *s*∗ together with a specific numerical example. We consider a test function on [0, 1] with a jump discontinuity at *s*∗ = 0.5:

$$f(\mathbf{x}) = \begin{cases} f\_1(\mathbf{x}) = \sin(5\mathbf{x}) & \mathbf{x} \ge \mathbf{s}^\*, \\ f\_2(\mathbf{x}) = \frac{1}{(x - 0.5)^2 + 0.5} & \mathbf{x} < \mathbf{s}^\*. \end{cases} \tag{13}$$

As expected, the Fourier series of $f$ is slowly convergent, and it exhibits the Gibbs phenomenon near the ends of $[0,1]$ and near $s^*$. In Figure 4, on the left, we present the sum of the first 200 terms of the Fourier series, computed at 20,000 points in $[0,1]$. This sum is not acceptable as an approximation to $f$, and yet we can use it to obtain a good initial approximation $s_0 \sim s^*$. On the right graph, we plot the first differences of the values in the left graph. The maximal difference is achieved at a distance of order $10^{-4}$ from $s^*$.

**Figure 4.** A partial Fourier sum (**left**) and its first differences (**right**).

Having a good approximation *s*<sup>0</sup> ∼ *s*<sup>∗</sup> is not enough for achieving a good approximation to *f* . However, *s*<sup>0</sup> can be used as a starting point for an iterative method leading to a high precision approximation to *s*∗. To support this assertion we present the graph in Figure 5, depicting the maximum norm of the difference between 1000 of the given Fourier coefficients and the corresponding Fourier coefficients of the approximation *S*, as a function of *s*, near *s*∗ = 0.5. This function is almost linear on each side of *s*∗, and simple quasi-Newton iterations converge very fast to *s*∗. After obtaining a high accuracy approximation to *s*∗, we use it for deriving the piecewise spline approximation to *f* .

**Figure 5.** The graph of the error $\|\hat{f} - \hat{\mathcal{S}}\|$ as a function of $s$ near $s^* = 0.5$.
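Because the mismatch is almost linear on each side of $s^*$ (Figure 5), a derivative-free bracketing search locates the minimum as reliably as the quasi-Newton iterations used above. The sketch below is a hypothetical stand-in (the function `E`, the starting guess `s0`, and the bracket width `h` are our assumptions) that minimizes a V-shaped error function.

```python
def locate_singularity(E, s0, h=1e-2, tol=1e-12):
    """Golden-section search for the minimizer of the Fourier mismatch E(s),
    started from the rough estimate s0 obtained from the partial-sum jumps."""
    a, b = s0 - h, s0 + h                 # bracket around the rough estimate
    g = (5.0 ** 0.5 - 1.0) / 2.0          # golden-ratio shrink factor
    while b - a > tol:
        c, d = b - g * (b - a), a + g * (b - a)
        if E(c) < E(d):                   # minimum lies in [a, d]
            b = d
        else:                             # minimum lies in [c, b]
            a = c
    return 0.5 * (a + b)
```

On a function that is V-shaped inside the bracket, each pass shrinks the interval by the golden ratio, so roughly 50 evaluations suffice for a $10^{-12}$ bracket of width $10^{-2}$.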

In the following, we present the numerical results obtained for the test function defined in (13). We have used only 20 Fourier coefficients of *f* , and the two approximating functions *S*<sup>1</sup> and *S*<sup>2</sup> are

splines of order eight, with knots' distance $d = 0.1$. Figure 6 depicts the approximation error, showing that $\|f - \mathcal{S}\|_\infty = 5.3 \times 10^{-8}$, and that the Gibbs phenomenon is completely removed. Figure 7 shows $\log_{10}$ of the absolute values of the given Fourier coefficients of $f$ (in blue), and the corresponding values for the Fourier coefficients of $f - \mathcal{S}$ (in red). The graph shows a reduction of ∼7 orders of magnitude. These results clearly demonstrate the high effectiveness of the proposed approach.

**Figure 6.** The approximation error for the 1D non-smooth case.

**Figure 7.** log10 of the given Fourier coefficients (blue), and of the Fourier coefficients of the approximation error (red).

### 2.2.2. The 1D Approximation Procedure

Let us sum up the suggested approximation procedure:


$$M + 1 \geq 2 \dim(\Pi). \tag{14}$$


### **3. The 2D Case—Non-Periodic and Non-Smooth**

### *3.1. The Smooth 2D Case*

Let $f \in C^m([0,1]^2)$, and assume we know its Fourier series expansion

$$f(\mathbf{x}, y) = \sum\_{m \in \mathbb{Z}} \sum\_{n \in \mathbb{Z}} \hat{f}\_{mn} e^{2\pi imx} e^{2\pi iny}. \tag{15}$$

Such series are obtained when solving PDEs using spectral methods. However, if the function is not periodic, or, as in the case of hyperbolic equations, the function has a jump discontinuity along some curve in $[0,1]^2$, the convergence of the Fourier series is slow. Furthermore, the approximation of $f$ by its partial sums suffers from the Gibbs phenomenon near the boundaries and near the singularity curve.

We deal with the case of smooth non-periodic 2D functions in the same manner as we did for the univariate case. We look for a bivariate spline function *S* whose Fourier coefficients match the Fourier coefficients of *f* . As in the univariate case, it is enough to match the coefficients of low frequency terms in the Fourier series. The technical difference in the 2D case is that we look for a tensor product spline approximation, using tensor product *k*th order B-spline basis functions.

$$S\_d^{[k]}(x, y) = \sum\_{i=1}^{N\_d} \sum\_{j=1}^{N\_d} a\_{ij} B\_d^{[k]}(x - id) B\_d^{[k]}(y - jd). \tag{16}$$

The system of equations for the B-spline coefficients is the same as the system defined by (4) and (5) in the univariate case, only here we reshape the unknowns as a vector of *N<sub>d</sub>*<sup>2</sup> unknowns.
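The structure of this least-squares system can be sketched in a few lines. This is a hedged illustration, not the paper's code, and the names are hypothetical: `Bhat[i, n]` holds the *n*th Fourier coefficient of the *i*th (reshaped) basis function, and the real normal equations below mirror the real/imaginary splitting used throughout the paper.

```python
import numpy as np

# Hedged sketch (names hypothetical): given Bhat[i, n] = n-th Fourier
# coefficient of the i-th basis function, and fhat[n] = given Fourier
# coefficients of f, form the real normal equations A a = b for the
# real spline coefficients a minimizing sum_n |fhat_n - Shat_n|^2.
def normal_equations(Bhat, fhat):
    A = Bhat.real @ Bhat.real.T + Bhat.imag @ Bhat.imag.T
    b = Bhat.real @ fhat.real + Bhat.imag @ fhat.imag
    return A, b
```

For tensor-product splines, the rows of `Bhat` correspond to the coefficient grid reshaped into a vector of length *N<sub>d</sub>*<sup>2</sup>.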

Numerical Example—The Smooth 2D Case

We consider the test function

$$f(x, y) = \frac{10}{1 + 10(x^2 + (y - 1)^2)} + \sin(10(x - y)),$$

assuming only its Fourier coefficients are given. We have used only 160 Fourier coefficients, and constructed an approximation using 10th degree tensor product splines with equidistant knots' distance *d* = 0.1 in each direction. For this case, the matrix *A* is of size 361 × 361, and *cond*(*A*) = 6.2 × 10<sup>30</sup>. Again, we have employed the iterative refinement algorithm to obtain a high precision solution (relative error 10<sup>−15</sup>). Computation time ∼18 s.
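The iterative refinement mentioned above can be sketched as follows. This is a generic textbook version, assuming only that residual correction with the same matrix is repeated a few times; the paper's exact variant (e.g., extended-precision residual accumulation) is not spelled out here.

```python
import numpy as np

# Generic iterative refinement sketch for an ill-conditioned system
# A x = b: repeatedly solve for the residual and correct the solution.
def iterative_refinement(A, b, iters=5):
    x = np.linalg.solve(A, b)
    for _ in range(iters):
        r = b - A @ x                    # current residual
        x = x + np.linalg.solve(A, r)    # correction step
    return x
```

In practice the residual is computed in higher precision than the solves, which is what allows the method to recover digits lost to the large condition number.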

In Figure 8 we plot the test function on [0, 1]<sup>2</sup>. Note that it has high derivatives near (0, 1).

The approximation error *f* − *S*<sup>[10]</sup><sub>0.1</sub> is shown in Figure 9. To demonstrate the convergence acceleration of the Fourier series achieved by subtracting the approximation from *f*, we present in Figure 10 log10 of the absolute values of the Fourier coefficients of *f* (in green) and of the Fourier coefficients of *f* − *S*<sup>[10]</sup><sub>0.1</sub> (in blue), for frequencies 0 ≤ *m*, *n* ≤ 200. The magnitude of the Fourier coefficients is reduced by a factor of 10<sup>5</sup>, and even more so for the low frequencies, due to the matching strategy used to derive the spline approximation.

**Figure 8.** The test function for the smooth 2D case.

**Figure 9.** The approximation error *f* − *S*<sup>[10]</sup><sub>0.1</sub>.

**Figure 10.** log10 of the Fourier coefficients before (green), and after (blue).

### *3.2. The Non-Smooth 2D Case*

Let Ω<sub>1</sub>, Ω<sub>2</sub> ⊂ [0, 1]<sup>2</sup> be open, simply connected domains with the properties

$$\Omega\_1 \cap \Omega\_2 = \emptyset, \quad \bar{\Omega}\_1 \cup \bar{\Omega}\_2 = [0,1]^2.$$

Let Γ<sup>∗</sup> be the curve separating the two domains,

$$\Gamma^\* = \bar{\Omega}\_1 \cap \bar{\Omega}\_2,$$

and assume Γ<sup>∗</sup> is a *C<sup>m</sup>*-smooth curve.

Let *f* be a piecewise smooth function on [0, 1]<sup>2</sup>, composed of two pieces *f*<sub>1</sub> ∈ *C<sup>m</sup>*(Ω<sub>1</sub>) and *f*<sub>2</sub> ∈ *C<sup>m</sup>*(Ω<sub>2</sub>), and assume that each *f*<sub>*j*</sub> can be continuously extended to a function in *C<sup>m</sup>*(Ω̄<sub>*j*</sub>), *j* = 1, 2. Here again, we assume that all we know about *f* is its Fourier expansion. In particular, we do not know the position of the dividing curve Γ<sup>∗</sup> separating Ω<sub>1</sub> and Ω<sub>2</sub>. As in the case of a non-periodic function, the existence of a singularity curve in [0, 1]<sup>2</sup> significantly influences the Fourier series coefficients and implies their slow decay. In case of discontinuity of *f* across Γ<sup>∗</sup>, partial sums of the Fourier series exhibit the Gibbs phenomenon near Γ<sup>∗</sup>. As demonstrated below, a good matching of the Fourier coefficients requires a good approximation of the singularity location. As in the univariate non-smooth case, the computation algorithm involves finding approximations to *f*<sub>1</sub> and *f*<sub>2</sub> simultaneously with a high precision identification of Γ<sup>∗</sup>.

Evidently, finding a high precision approximation of the singularity curve Γ<sup>∗</sup> is more involved than finding a high precision approximation to the singularity point *s*<sup>∗</sup> in the univariate case. Let *D*<sub>Γ<sup>∗</sup></sub>(*x*, *y*) be the signed-distance function corresponding to the curve Γ<sup>∗</sup>:

$$D\_{\Gamma^\*}(x,y) = \begin{cases} \operatorname{dist}((x,y), \Gamma^\*) & (x,y) \in \Omega\_1, \\ -\operatorname{dist}((x,y), \Gamma^\*) & (x,y) \in \Omega\_2. \end{cases} \tag{17}$$

In looking for an approximation to Γ<sup>∗</sup>, we look for an approximation to *D*<sub>Γ<sup>∗</sup></sub>. Here again we use tensor product spline approximants, the same set of spline functions described in the previous section. Since the curve is *C<sup>m</sup>*, it can be shown that one can construct a spline function *D̃* of order *k* ≤ *m*, with knots' distance *h*, which approximates *D*<sub>Γ<sup>∗</sup></sub> near Γ<sup>∗</sup> so that the Hausdorff distance between the zero level set of *D̃* and Γ<sup>∗</sup> is *O*(*h<sup>k</sup>*).
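As a concrete illustration, for the quarter-circle curve *x*<sup>2</sup> + *y*<sup>2</sup> = 0.5 used in the numerical example of Section 3.2.1, the signed distance (17) has a simple closed form (the sign convention below is illustrative: positive on one side of Γ<sup>∗</sup>, negative on the other):

```python
import numpy as np

# Signed distance to the circle x^2 + y^2 = r^2 with r = sqrt(0.5):
# positive outside the circle, negative inside, zero on Gamma*.
def signed_distance_quarter_circle(x, y):
    return np.hypot(x, y) - np.sqrt(0.5)
```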

Let *D̄*<sub>*b*</sub> be a spline approximation to *D*<sub>Γ<sup>∗</sup></sub>, with spline coefficients *b̄* = {*b*<sub>*ij*</sub>}, *i*, *j* = 1, …, *N<sub>h</sub>*:

$$D\_{\bar{b}}(x, y) = \sum\_{i=1}^{N\_h} \sum\_{j=1}^{N\_h} b\_{ij} B\_h^{[k]}(x - ih) B\_h^{[k]}(y - jh). \tag{18}$$

For a given *D̄*<sub>*b*</sub> we define the approximation to *f*, similar to the construction in the univariate case by Equations (7)–(9). We look here for an approximation *S* to *f* which is a combination of two bivariate spline components:

$$S(x, y) = \sum\_{i=1}^{N\_d} \sum\_{j=1}^{N\_d} a\_{1ij} B\_d^{[k]}(x - id) B\_d^{[k]}(y - jd), \quad D\_{\bar{b}}(x, y) \ge 0,\tag{19}$$

$$S(x, y) = \sum\_{i=1}^{N\_d} \sum\_{j=1}^{N\_d} a\_{2ij} B\_d^{[k]}(x - id) B\_d^{[k]}(y - jd), \quad D\_{\bar{b}}(x, y) < 0,\tag{20}$$

such that (2*M* + 1)<sup>2</sup> Fourier coefficients of *f* and *S* are matched in the least-squares sense:

*Axioms* **2020**, *9*, 88

$$\left(\{a\_{1ij}\}\_{i,j=1}^{N\_d},\{a\_{2ij}\}\_{i,j=1}^{N\_d},\{b\_{ij}\}\_{i,j=1}^{N\_h}\right) = \arg\min\left(\sum\_{m,n=-M}^{M} |\hat{f}\_{mn} - \hat{S}\_{mn}|^{2}\right). \tag{21}$$

We denote by *B*<sub>1*ij*</sub>(*x*, *y*) the restriction of *B*<sub>*d*</sub><sup>[*k*]</sup>(*x* − *id*)*B*<sub>*d*</sub><sup>[*k*]</sup>(*y* − *jd*) to the domain defined by *D̄*<sub>*b*</sub>(*x*, *y*) ≥ 0, and by *B*<sub>2*ij*</sub>(*x*, *y*) the restriction of *B*<sub>*d*</sub><sup>[*k*]</sup>(*x* − *id*)*B*<sub>*d*</sub><sup>[*k*]</sup>(*y* − *jd*) to the domain defined by *D̄*<sub>*b*</sub>(*x*, *y*) < 0. We concatenate these two sequences of basis functions, {*B*<sub>1*ij*</sub>} and {*B*<sub>2*ij*</sub>}, into one sequence {*B*<sub>*ij*</sub>}, denoting their Fourier coefficients by {*B̂*<sub>*ij*,*n*</sub>}<sub>*n*∈Z</sub>, and rearranging them (for each *n*) in vectors of length 2*N<sub>d</sub>*<sup>2</sup>, {*B̂*<sub>*i*,*n*</sub>}, *i* = 1, …, 2*N<sub>d</sub>*<sup>2</sup>. For a given *D̄*<sub>*b*</sub>, the normal equations for the least-squares problem (21) induce the linear system *Aa* = *b* for the splines' coefficients *a* = ({*a*<sub>1*ij*</sub>}, {*a*<sub>2*ij*</sub>}), where:

$$A\_{ij} = \sum\_{m,n=-M}^{M} \left[ \operatorname{Re}(\hat{B}\_{i,n}) \operatorname{Re}(\hat{B}\_{j,n}) + \operatorname{Im}(\hat{B}\_{i,n}) \operatorname{Im}(\hat{B}\_{j,n}) \right], \quad 1 \le i, j \le 2N\_d^2, \tag{22}$$

and

$$b\_i = \sum\_{m,n=-M}^{M} \left[ \operatorname{Re}(\hat{B}\_{i,n}) \operatorname{Re}(\hat{f}\_n) + \operatorname{Im}(\hat{B}\_{i,n}) \operatorname{Im}(\hat{f}\_n) \right], \quad 1 \le i \le 2N\_d^2. \tag{23}$$

For a given choice of *b̄* = {*b*<sub>*ij*</sub>}, the coefficients {*a*<sub>1*ij*</sub>} and {*a*<sub>2*ij*</sub>} are obtained by solving a linear system of equations and properly rearranging the solution. However, finding the optimal *b̄* is a non-linear problem that requires an iterative process and is much more expensive.

**Remark 3.** *Representing the singularity curve of the approximation S as the zero level set of the bivariate spline function D̄<sub>b</sub> is the way to achieve smooth control over the approximation. As a result, the objective function in (21) varies smoothly with respect to the spline coefficients* {*b*<sub>*ij*</sub>}*.*

**Remark 4.** *In principle, the above framework is applicable to cases where f is composed of k functions defined on k disjoint subdomains of* [0, 1]<sup>2</sup>*. The implementation, however, is more involved. The main challenge is to find a good first approximation to the curves separating the subdomains. In this context, for our case of two subdomains, we further assume for simplicity that the separating curve* Γ<sup>∗</sup> *is bijective.*

Here again we choose to demonstrate the whole approximation procedure alongside a specific numerical example.

### 3.2.1. The Approximation Procedure—A Numerical Example

Consider a piecewise smooth function on [0, 1]<sup>2</sup> with a jump singularity across the curve Γ<sup>∗</sup>, which is the quarter circle defined by *x*<sup>2</sup> + *y*<sup>2</sup> = 0.5. The test function is shown in Figure 11 and is defined as

$$f(x, y) = \begin{cases} (x^2 + y^2 - 0.5)\sin(10(x + y)) & x^2 + y^2 \ge 0.5, \\ (x^2 + y^2 - 0.5)\sin(10(x + y)) + 2x & x^2 + y^2 < 0.5. \end{cases} \tag{24}$$

In the univariate case, in Section 2.2.1, we use the Gibbs phenomenon in order to find an initial approximation *s*<sub>0</sub> to the singularity location *s*<sup>∗</sup>. The same idea, with some modifications for the 2D case, is applied here. The truncated Fourier sum

$$f\_{50}(x,y) = \sum\_{m,n=-50}^{50} \hat{f}\_{mn} e^{2\pi imx} e^{2\pi iny}\tag{25}$$

gives an approximation to *f*, but the approximation suffers from the Gibbs phenomenon near the boundaries of the domain and near the singularity curve Γ<sup>∗</sup>. We evaluated *f*<sub>50</sub> on a 400 × 400 mesh on [0, 1]<sup>2</sup>, and enhanced the Gibbs effect by applying first order differences along the *x*-direction. The results are depicted in Figure 12. The locations of large *x*-direction differences and of large *y*-direction differences within [0, 1]<sup>2</sup> indicate the location of Γ<sup>∗</sup>.
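The Gibbs-based localization step can be sketched in one dimension (a hedged analog of the 2D procedure; the function name and grid size are illustrative): evaluate the truncated Fourier sum on a fine grid and return the location of the maximal first-order difference.

```python
import numpy as np

# 1D analog of the edge-location step: evaluate the truncated Fourier
# sum f_M on a uniform grid and locate the maximal first-order
# difference, which concentrates near a jump of f.
def locate_jump(fhat, M, grid_n=400):
    x = np.arange(grid_n) / grid_n
    m = np.arange(-M, M + 1)
    fM = (fhat[:, None] * np.exp(2j * np.pi * np.outer(m, x))).sum(axis=0).real
    d = np.abs(np.diff(fM))       # first-order differences
    return x[np.argmax(d)]        # grid point of maximal difference
```

In 2D the same scan is run along horizontal and vertical mesh lines, producing the candidate points collected below into the set *P*<sub>0</sub>.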

**Figure 11.** The test function for the 2D non-smooth case.

**Figure 12.** First order *x*-direction differences of a truncated Fourier sum—notice the relatively high values at the boundary and near the singularity curve.

#### **Building the initial approximation** *D̄*<sub>*b*<sub>0</sub></sub>

Searching along 50 horizontal lines (*x*-direction) for maximal *x*-direction differences, and along 50 vertical lines (*y*-direction) for maximal *y*-direction differences, we have found 72 such maximum points, which we denote by *P*<sub>0</sub>. We display these points (in red) in Figure 13, on top of the curve Γ<sup>∗</sup> (in blue). Now we use these points to construct the spline *D̄*<sub>*b*<sub>0</sub></sub>, whose zero level curve is taken as the initial approximation to Γ<sup>∗</sup>. To construct *D̄*<sub>*b*<sub>0</sub></sub> we first overlay on [0, 1]<sup>2</sup> a net of 11 × 11 points, *Q*<sub>0</sub>. These are the green points displayed in Figure 14.

**Figure 13.** The singularity curve Γ<sup>∗</sup> (blue) and points of maximal first differences of *f*50.

**Figure 14.** The singularity curve Γ<sup>∗</sup> (blue), the net of points *Q*<sub>0</sub> (green) and the initial approximation Γ<sub>0</sub> (yellow).

To each point in *Q*<sub>0</sub> we assign the value of its distance from the set *P*<sub>0</sub>, with a plus sign for points which are on the right of or above *P*<sub>0</sub>, and a minus sign for the other points. To each point in *P*<sub>0</sub> we assign the value zero. The spline function *D̄*<sub>*b*<sub>0</sub></sub> is now defined by the least-squares approximation to the values at all the points *P*<sub>0</sub> ∪ *Q*<sub>0</sub>. We have used here tensor product splines of order 10, on a uniform mesh with knots' distance 0.1. We denote the zero level curve of the resulting *D̄*<sub>*b*<sub>0</sub></sub> as Γ<sub>0</sub>; this curve is depicted in yellow in Figure 14. It seems that Γ<sub>0</sub> is already a good approximation to Γ<sup>∗</sup> (in blue), and thus it is a good starting point for achieving the minimization target (21).

### **Improving the approximation to** Γ∗**, and building the two approximants**

Starting from *D̄*<sub>*b*<sub>0</sub></sub> we use a quasi-Newton method for iterative improvement of the approximation to Γ<sup>∗</sup>. The expensive ingredient in the computation procedure is the need to recompute the Fourier coefficients of the *B*-splines for any new set of coefficients *b̄* of *D̄*<sub>*b*</sub>. We recall that we need (2*M* + 1)<sup>2</sup> of these coefficients for each *B*-spline, and we have 2*N<sub>d</sub>*<sup>2</sup> *B*-splines. In the numerical example we have used *M* = 40 and *N<sub>d</sub>* = 19. To illustrate the issue we present in Figure 15 one of those *B*-splines whose support intersects the singularity curve. When the singularity curve is updated, the Fourier coefficients of this *B*-spline must be recalculated.

**Remark 5. Calculating Fourier coefficients of the B-splines.** *Calculating the Fourier coefficients of the B-splines is the most costly step in the approximation procedure. For the univariate case the Fourier coefficients of the B-splines can be computed analytically. For a smooth d-variate function f* : [0, 1]<sup>*d*</sup> → R*, with no singularity within the unit cube* [0, 1]<sup>*d*</sup>*, piecewise Gauss quadrature may be used to compute the Fourier coefficients with high precision. The non-smooth multivariate case is more difficult, and more expensive. However, we noticed that low precision approximations for the Fourier coefficients of the B-splines suffice. For example, in the above example, we have employed a simple numerical quadrature combined with the fast Fourier transform, and we obtained the Fourier coefficients with a relative error* ∼10<sup>−5</sup>*. Yet the resulting approximation error is small,* ‖*f* − *S*‖<sub>∞</sub> < 5 × 10<sup>−6</sup>*, as seen in Figure 18.*
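The quadrature-plus-FFT idea in Remark 5 can be sketched as follows (a plain rectangle-rule version, accurate only to the extent the remark describes; the function name is illustrative):

```python
import numpy as np

# Approximate Fourier coefficients ghat_m = int_0^1 g(x) e^{-2 pi i m x} dx
# from N uniform samples g(j/N), via the FFT (rectangle-rule quadrature).
def fourier_coeffs_fft(values):
    return np.fft.fft(values) / len(values)
```

Entry *m* of the result (in `np.fft.fftfreq` ordering) approximates ĝ<sub>*m*</sub>; in the multivariate case the same idea applies with a multidimensional FFT.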

**Figure 15.** One of the tensor product B-splines used for the approximation of *f* , chopped off by the singularity curve.

Using one quasi-Newton step we obtained new spline coefficients *b̄*<sub>1</sub> and an improved approximation Γ<sub>1</sub> to Γ<sup>∗</sup> as the zero level set of *D̄*<sub>*b*<sub>1</sub></sub>. Stopping the procedure at this point yields approximation results as shown in the figures below. Figure 16 shows the approximation error *f* − *S* on [0, 1]<sup>2</sup> \ *U*, where *U* is a small neighborhood of Γ<sup>∗</sup>. Figure 17 shows, in green, log10 of the magnitude of the given Fourier coefficients *f̂*<sub>*mn*</sub> and, in blue, log10 of the Fourier coefficients of the difference *f* − *S*. We observe a reduction of three orders of magnitude between the two.

**Figure 16.** The approximation error with *D̄*<sub>*b*<sub>1</sub></sub>.

**Figure 17.** The magnitude reduction of the Fourier coefficients with *D̄*<sub>*b*<sub>1</sub></sub>.

Applying four quasi-Newton iterations took ∼24 min execution time. The approximation of Γ<sup>∗</sup> by the zero level set of *D̄*<sub>*b*<sub>4</sub></sub> now has an error of 10<sup>−9</sup>. The consequent approximation error to *f* is reduced as shown in Figure 18, and the Fourier coefficients of the error are reduced by 5 orders of magnitude, as shown in Figure 19.

**Figure 18.** The approximation error with *D̄*<sub>*b*<sub>4</sub></sub>.

**Figure 19.** The magnitude reduction of the Fourier coefficients with *D̄*<sub>*b*<sub>4</sub></sub>.

### 3.2.2. The 2D Approximation Procedure

Let us sum up the suggested approximation procedure:


$$(2M+1)^2 \ge 2\dim(\Pi\_1) + \dim(\Pi\_2).\tag{26}$$

	- (a) Compute a partial Fourier sum and locate maximal first order differences along horizontal and vertical lines to find points *P*<sub>0</sub> near Γ<sup>∗</sup>, with assigned values 0.
	- (b) Overlay a net of points *Q*<sub>0</sub> as in Figure 14, with assigned signed-distance values.
	- (c) Compute the least-squares approximation from Π<sub>2</sub> to the values at *P*<sub>0</sub> ∪ *Q*<sub>0</sub>, and denote it *D̄*<sub>*b*<sub>0</sub></sub>.

### 3.2.3. Lower Order Singularities

Let us assume that *f*(*x*, *y*) is a continuous function, and that *f*<sub>*x*</sub>(*x*, *y*) is discontinuous across the singularity curve Γ<sup>∗</sup>. In this case we cannot use the Gibbs phenomenon effect to approximate the singularity curve. However, the Fourier coefficients

$$\hat{g}\_{mn} = 2\pi i m \hat{f}\_{mn},$$

represent a function *g* that has discontinuity across Γ∗, and the above procedure for approximating Γ∗ can be applied.
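Term-by-term differentiation of the series (15) in *x* multiplies each coefficient by 2π*im*; a minimal sketch of this coefficient map:

```python
import numpy as np

# Coefficients of g = df/dx from the coefficients of f in the
# expansion f = sum_{m,n} fhat_mn e^{2 pi i m x} e^{2 pi i n y}:
# ghat_mn = 2 pi i m * fhat_mn, applied along the m-axis.
def x_derivative_coeffs(fhat, m_values):
    return 2j * np.pi * m_values[:, None] * fhat
```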

### *3.3. Error Analysis*

We consider the non-smooth bivariate case, where *f* is a combination of two smooth parts, *f*<sub>1</sub> on Ω<sub>1</sub> and *f*<sub>2</sub> on Ω<sub>2</sub>, separated by a smooth curve Γ<sup>∗</sup>. Throughout the paper we approximate *f* using spline functions. In this section we consider approximations by general approximation spaces. Let Π<sub>1</sub> be the approximation space for approximating the smooth pieces constituting *f*, and let Π<sub>2</sub> be the approximation space used for approximating the singularity curve. The following assumptions characterize and quantify the requirements on the function *f* and its singularity curve Γ<sup>∗</sup> in terms of the ability to approximate them using the approximation spaces Π<sub>1</sub>, Π<sub>2</sub>.

**Assumption 1.** *We assume that* Π<sup>1</sup> *and* Π<sup>2</sup> *are finite dimensional spaces of dimensions N*<sup>1</sup> *and N*<sup>2</sup> *respectively.*

**Assumption 2.** *We assume that f<sub>1</sub> and f<sub>2</sub> are smoothly extendable to* [0, 1]<sup>2</sup> *and dist*<sub>[0,1]<sup>2</sup></sub>(*f*<sub>1</sub>, Π<sub>1</sub>) ≤ *ε*, *dist*<sub>[0,1]<sup>2</sup></sub>(*f*<sub>2</sub>, Π<sub>1</sub>) ≤ *ε*.

**Assumption 3.** *For p* ∈ Π<sub>2</sub>*, let us denote by* Γ<sub>0</sub>(*p*) *the zero level curve of p within* [0, 1]<sup>2</sup>*. We assume there exists p* ∈ Π<sub>2</sub> *such that*

$$d\_H(\Gamma^\*, \Gamma\_0(p)) \le \delta,$$

*where d<sub>H</sub> denotes the Hausdorff distance.*

We look for an approximation *S* to *f* which is a combination of two components, *p*<sub>1</sub> ∈ Π<sub>1</sub> in Ω̃<sub>1</sub> and *p*<sub>2</sub> ∈ Π<sub>1</sub> in Ω̃<sub>2</sub>, separated by Γ<sub>0</sub>(*p*), *p* ∈ Π<sub>2</sub>, such that (2*M* + 1)<sup>2</sup> Fourier coefficients of *f* and *S* are matched in the least-squares sense:

$$F(p\_1, p\_2, p) = \sum\_{m,n=-M}^{M} |\hat{f}\_{mn} - \hat{S}\_{mn}|^2 \to \min. \tag{27}$$

**Assumption 4.** *Consider the above function S constructed by a triple* (*p*1, *p*2, Γ0(*p*))*, p*1, *p*<sup>2</sup> ∈ Π1*, p* ∈ Π2*. We assume that there is a Lipschitz continuous inverse mapping from the* (2*M* + 1)<sup>2</sup> *Fourier coefficients of S to the triple* (*p*1, *p*2, Γ0(*p*))*:*

$$\{\hat{S}\_{mn}\}\_{m,n=-M}^{M} \to (p\_1, p\_2, \Gamma\_0(p)).\tag{28}$$

**Remark 6.** *To enable the above property we choose M so that*

$$(2M+1)^2 > 2N\_1 + N\_2.\tag{29}$$

*The topology in the space of triples* (*p*1, *p*2, Γ0(*p*)) *is in terms of the maximum norm for the first two components and the Hausdorff distance for the third component.*

**Proposition 1.** *Let f<sub>1</sub>, f<sub>2</sub>,* Γ<sup>∗</sup>*,* Π<sub>1</sub> *and* Π<sub>2</sub> *satisfy Assumptions 1, 2, 3 and 4. Then the triple* (*p*<sub>1</sub><sup>∗</sup>, *p*<sub>2</sub><sup>∗</sup>, *p*<sup>∗</sup>) *minimizing (27) provides the following approximation bounds:*

$$\|f\_1 - p\_1^\*\|\_{\infty,\Omega\_1^\*} \le C\_1 M \varepsilon + C\_2 M \delta,\tag{30}$$

$$\|f\_2 - p\_2^\*\|\_{\infty,\Omega\_2^\*} \le C\_1 M \varepsilon + C\_2 M \delta,\tag{31}$$

*and*

$$d\_H(\Gamma^\*, \Gamma\_0(p^\*)) \le C\_3 M \varepsilon + C\_4 M \delta,\tag{32}$$

*where* Ω<sub>1</sub><sup>∗</sup> *and* Ω<sub>2</sub><sup>∗</sup> *are separated by* Γ<sub>0</sub>(*p*<sup>∗</sup>)*.*

**Proof.** By Assumptions 2 and 3 it follows that there exists an approximation *S̄* defined as above by a triple (*p̄*<sub>1</sub>, *p̄*<sub>2</sub>, *p̄*), such that

$$\|f\_1 - \bar{p}\_1\|\_{\infty, [0, 1]^2} \le \varepsilon,\tag{33}$$

$$\|f\_2 - \bar{p}\_2\|\_{\infty, [0, 1]^2} \le \varepsilon, \tag{34}$$

and

$$d\_H(\Gamma^\*, \Gamma\_0(\bar{p})) \le \delta. \tag{35}$$

For the approximation *S̄* to *f* defined by the triple (*p̄*<sub>1</sub>, *p̄*<sub>2</sub>, *p̄*), we can estimate its Fourier coefficients using the above bounds, and it follows that

$$|\hat{f}\_{mn} - \hat{\bar{S}}\_{mn}| \le \varepsilon + L\delta, \quad -M \le m, n \le M. \tag{36}$$

Therefore,

$$\min\{F(p\_1, p\_2, p)\} \le (2M + 1)^2 (\varepsilon + L\delta)^2. \tag{37}$$

Let

$$(p\_1^\*, p\_2^\*, p^\*) = \arg\min \left\{ \sum\_{m,n=-M}^M |\hat{f}\_{mn} - \hat{S}\_{mn}|^2 \right\}.\tag{38}$$

The approximation *S*<sup>∗</sup> to *f* is the combination of the two components, *p*<sub>1</sub><sup>∗</sup> in Ω<sub>1</sub><sup>∗</sup> and *p*<sub>2</sub><sup>∗</sup> in Ω<sub>2</sub><sup>∗</sup>, where Ω<sub>1</sub><sup>∗</sup> and Ω<sub>2</sub><sup>∗</sup> are separated by Γ<sub>0</sub>(*p*<sup>∗</sup>).

Using the bound in (37) it follows that

$$|\hat{f}\_{mn} - \hat{S}\_{mn}^\*| \le (2M + 1)(\varepsilon + L\delta), \quad -M \le m, n \le M. \tag{39}$$

In view of (36) and (39) it follows that

$$|\hat{\bar{S}}\_{mn} - \hat{S}\_{mn}^\*| \le (2M + 2)(\varepsilon + L\delta), \quad -M \le m, n \le M. \tag{40}$$

Using Assumption 4, the bound (40) implies

$$\|p\_1^\* - \bar{p}\_1\|\_{\infty,\Omega\_1^\*} \le C(2M+2)(\varepsilon + L\delta),\tag{41}$$

$$\|p\_2^\* - \bar{p}\_2\|\_{\infty,\Omega\_2^\*} \le C(2M+2)(\varepsilon + L\delta),\tag{42}$$

and

$$d\_H(\Gamma\_0(p^\*), \Gamma\_0(\bar{p})) \le C(2M+2)(\varepsilon + L\delta). \tag{43}$$

The approximation result now follows by considering the inequalities (41)–(43), together with the inequalities (33)–(35), and applying the triangle inequality.

### Validity of the Approximation Assumptions

Let us check the validity of Assumptions 1, 2, 3 and 4 for the approximation tools suggested in Section 3.2 and used in the above numerical tests.

We assume that *f*<sub>1</sub>, *f*<sub>2</sub> ∈ *C<sup>m</sup>*[0, 1]<sup>2</sup>, and that Γ<sup>∗</sup> is a *C<sup>m</sup>* curve. To construct the approximation to *f*<sub>1</sub> and *f*<sub>2</sub> we use the space Π<sub>1</sub> of *k*th degree tensor-product splines with equidistant knots' distance *d* in each direction, *k* ≤ *m*. The approximation to Γ<sup>∗</sup> is obtained using the approximation space Π<sub>2</sub> of ℓth degree tensor-product splines with equidistant knots' distance *h* in each direction, ℓ ≤ *m*. Here dim(Π<sub>1</sub>) = (1/*d* + *k* − 1)<sup>2</sup> ≡ *N<sub>d</sub>*<sup>2</sup> and dim(Π<sub>2</sub>) = (1/*h* + ℓ − 1)<sup>2</sup> ≡ *N<sub>h</sub>*<sup>2</sup>, and for both spaces we use the *B*-spline basis functions. Assumptions 2 and 3 are fulfilled with *ε* = *C*<sub>1</sub>*d<sup>k</sup>* and *δ* = *C*<sub>2</sub>*h*<sup>ℓ</sup>.
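As a consistency check on these dimension counts (a small illustrative helper, not from the paper): with *d* = 0.1 and *k* = 10 the formula gives *N<sub>d</sub>* = 19, matching the *N<sub>d</sub>* = 19 and the 361 × 361 system reported in the numerical examples.

```python
# Number of B-spline basis functions per direction for degree k splines
# on [0, 1] with equidistant knots' distance d: N = 1/d + k - 1.
def spline_dim_per_direction(d, k):
    return round(1.0 / d) + k - 1
```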

Assumption 4 is more challenging. To define the mapping

$$\{\hat{S}\_{mn}\}\_{m,n=-M}^{M} \to (p\_1, p\_2, \Gamma\_0(p)),$$

we use the same procedure as in Section 3.2.2 for defining the approximation to *f*:

We represent *p* and *p*<sub>1</sub>, *p*<sub>2</sub> using the *B*-spline basis functions as in (18), (19) and (20), respectively. Each triple (*p*<sub>1</sub>, *p*<sub>2</sub>, *p*) defines a piecewise spline approximation *T*(*x*, *y*), and we look for the approximation *T*(*x*, *y*) such that (2*M* + 1)<sup>2</sup> Fourier coefficients of *T* match the Fourier coefficients {*Ŝ*<sub>*mn*</sub>}<sup>*M*</sup><sub>*m*,*n*=−*M*</sub> in the least-squares sense:

$$\left(\{a\_{1ij}\}\_{i,j=1}^{N\_d}, \{a\_{2ij}\}\_{i,j=1}^{N\_d}, \{b\_{ij}\}\_{i,j=1}^{N\_h}\right) = \arg\min\left(\sum\_{m,n=-M}^{M} |\hat{S}\_{mn} - \hat{T}\_{mn}|^{2}\right). \tag{45}$$

Out of all the possible solutions of the above problem we look for the one with minimal coefficients' norm, i.e., minimizing

$$\sum\_{i,j=1}^{N\_d} a\_{1ij}^2 + \sum\_{i,j=1}^{N\_d} a\_{2ij}^2. \tag{46}$$

Following the procedure of Section 3.2.2, we observe that every step in the procedure is smooth with respect to its input. Possible non-uniqueness in solving the linear system of equations in step (5) is resolved by using the generalized inverse. Therefore, the composition of all the steps is also a smooth function of the input, which implies the validity of Assumption 4.
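The minimal-norm selection in (46) corresponds to applying the Moore–Penrose generalized inverse, which depends smoothly on the data wherever the rank is constant; a minimal numpy sketch (illustrative only):

```python
import numpy as np

# Among all least-squares solutions of C a ~= y (possibly rank
# deficient), the pseudoinverse picks the one of minimal norm ||a||_2.
def min_norm_solution(C, y):
    return np.linalg.pinv(C) @ y
```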

### **4. The 3D Case**

*Numerical Example—The Smooth 3D Case*

We consider the test function

$$f(x, y, z) = (x^2 + y^2 + z^2 - 0.5)\sin(4(x + y - z)),$$

assuming only its Fourier coefficients are given. We have used only 10<sup>3</sup> Fourier coefficients and constructed an approximation using 5th-degree tensor product splines with equidistant knots' distance *d* = 0.1 in each direction. For this case, the matrix *A* is of size 15<sup>3</sup> × 15<sup>3</sup>, and *cond*(*A*) = 1.2 × 10<sup>22</sup>. Again, we have employed the iterative refinement algorithm to obtain a high precision solution. The test function is shown in Figure 20. The error in the resulting approximation is displayed in Figure 21.

**Figure 20.** The 3D test function reshaped into 2D.

**Figure 21.** The approximation error graph, reshaped into 2D.

### **5. Concluding Remarks**

The basic crucial assumption behind the presented Fourier acceleration strategy is that the underlying function is piecewise 'nice'. That is, on each piece, the function can be well approximated using a suitable finite set of basis functions. The Fourier series of the function may be given to us as a result of the computational method dictated by the structure of the mathematical problem at hand. In itself, the Fourier series may not be the best tool for approximating the desired solution, and yet it contains all the information about the requested function. Utilizing this information we can derive high accuracy piecewise approximations to that function. The simple idea is to make the approximation match the coefficients of the given Fourier series. The suggested method is simple to implement for the approximation of smooth non-periodic functions in any dimension. The case of non-smooth functions is more challenging, and a special strategy is suggested and demonstrated for the univariate and bivariate cases. The paper contains a descriptive graphical presentation of the approximation procedure, together with a fundamental error analysis.

**Funding:** This research received no external funding.

**Conflicts of Interest:** The authors declare no conflicts of interest.

### **References**


© 2020 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

### *Article* **A Discussion on the Existence of Best Proximity Points That Belong to the Zero Set**

**Erdal Karapınar 1,2,\*,†, Mujahid Abbas 3,4,\*,† and Sadia Farooq 5,†**


Received: 13 December 2019; Accepted: 6 February 2020; Published: 11 February 2020

**Abstract:** In this paper, we investigate the existence of best proximity points that belong to the zero set for the *αp*-admissible weak (*F*, *ϕ*)-proximal contraction in the setting of *M*-metric spaces. For this purpose, we establish *ϕ*-best proximity point results for such mappings in the setting of a complete *M*-metric space. Some examples are also presented to support the concepts and results proved herein. Our results extend, improve and generalize several comparable results on the topic in the related literature.

**Keywords:** *m*-metric space; proximal *αp*-admissible; *αp*-admissible weak (*F*, *ϕ*)-proximal contraction; *G*−proximal graphic contraction; *ϕ*-best proximity point

**MSC:** 47H10; 54H25; 46J10

### **1. Introduction and Preliminaries**

Several real-world problems can be reformulated as fixed point problems. In other words, the solution of the real-world problem reduces to the solution of a fixed point problem. In some cases, getting a fixed point for a certain mapping is impossible. In this case, instead of the exact solution, it is natural to consider an approximate solution. Roughly speaking, the equation *F*(*ξ*) = 0, where *F*(*ξ*) = *T*(*ξ*) − *ξ* and *T* is an operator defined on a certain distance space, may have no exact solution. In 1969, Ky Fan [1] suggested an answer to the question of what happens if a given mapping does not possess a fixed point. More precisely, he proved that if *A* is a compact, convex and nonempty subset of a Banach space *S* and *T* is a continuous mapping from *A* to *S*, then there exists a point *ξ*<sup>∗</sup> ∈ *A* such that

$$d(\xi^\*, T\xi^\*) = d(T\xi^\*, A) = \inf\left\{ d(\xi, T\xi^\*) : \xi \in A \right\}.$$

This result is known as the best approximation theorem. In the above statement, the point *ξ*<sup>∗</sup> ∈ *A* is called an approximate fixed point of *T*, or an approximate solution of the fixed point equation *Tξ* = *ξ*. In general, if *A*, *B* are nonempty subsets of a Banach space *S* and *T* : *A* → *B*, then *ξ*<sup>∗</sup> ∈ *A* is called a best proximity point of *T* if it satisfies

$$d(\xi^{\ast}, T\xi^{\ast}) = d(A, B) = \inf \left\{ d(a, b) : a \in A, \ b \in B \right\}.$$

Note that *ξ*<sup>∗</sup> ∈ *A* turns out to be a fixed point of *T* if the sets *A*, *B* have non-empty intersection. Indeed, if *A* ∩ *B* ≠ ∅ or *A* = *B*, then *d*(*A*, *B*) = 0 and hence the best proximity point *ξ*<sup>∗</sup> ∈ *A* becomes the solution of the fixed point equation *Tξ* = *ξ*. Consequently, best proximity point results are natural generalizations of metric fixed point results. For further discussion in this direction, we refer to [2–8].

We underline the fact that a best proximity point *ξ*<sup>∗</sup> ∈ *A*, indeed solves the following optimization problem:

$$\min\_{\xi \in A} d(\xi, T\xi).$$
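On finite sets this optimization is easy to make concrete; a toy illustration (not from the paper), with the usual distance on the real line:

```python
# Toy example: for finite A and a mapping T on A, a best proximity
# point minimizes d(xi, T(xi)). With A = {-2, -1} and T(xi) = -xi
# (so T maps A into B = {1, 2}), we have d(A, B) = 2, and the
# minimum of d(xi, T(xi)) is attained at xi* = -1.
def best_proximity_point(A, T):
    return min(A, key=lambda xi: abs(xi - T(xi)))
```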

On the other hand, fixed point theory has been extended in several directions. For instance, the metric space structure has been replaced by new abstract spaces more general than the standard set-up. One of the significant examples of this trend was given by Matthews [9]. He defined the notion of a partial metric space and characterized the Banach contraction principle in that space. Roughly speaking, in contrast to the metric case, in a partial metric space the self-distance may not be zero. This notion provides some simplification in computer science, in particular in domain theory. A number of authors have been involved in this trend with interesting results; see e.g., [10–18] and the references therein. For the sake of completeness, we recall the concept of a partial metric space as follows:

**Definition 1** ([9])**.** *A distance function p* : *S* × *S* → [0, ∞) *, on a non-empty set S, is called partial metric if the followings are fulfilled:*

**(p1)** *p*(*ξ*, *ξ*) = *p*(*η*, *η*) = *p*(*ξ*, *η*) ⇔ *ξ* = *η*,
**(p2)** *p*(*ξ*, *ξ*) ≤ *p*(*ξ*, *η*),
**(p3)** *p*(*ξ*, *η*) = *p*(*η*, *ξ*),
**(p4)** *p*(*ξ*, *η*) ≤ *p*(*ξ*, *ζ*) + *p*(*ζ*, *η*) − *p*(*ζ*, *ζ*)

*for all ξ*, *η*, *ζ* ∈ *S. A corresponding pair* (*S*, *p*) *is called a partial metric space.*

It is evident that *p*(*ξ*, *η*) = 0 yields *ξ* = *η*. The converse of the statement is false.

Asadi et al. [19] introduced the notion of an *M*-metric space and obtained fixed point results in the setup of *M*-metric spaces. It was indicated that *M*-metric space is a real generalization of a partial metric space and they supported their claim by providing some constructive examples. For more results in this direction see e.g., [20,21].

The following notations are useful in the sequel.

**(1)** *m<sub>ξη</sub>* = min{*ρ*(*ξ*, *ξ*), *ρ*(*η*, *η*)},
**(2)** *M<sub>ξη</sub>* = max{*ρ*(*ξ*, *ξ*), *ρ*(*η*, *η*)}.

**Definition 2** ([19])**.** *A distance function ρ* : *S* × *S* → [0, ∞)*, on a non-empty set S, is called an M-metric if the following are fulfilled for all ξ*, *η*, *ζ* ∈ *S:*

**(m1)** *ρ*(*ξ*, *ξ*) = *ρ*(*η*, *η*) = *ρ*(*ξ*, *η*) ⇔ *ξ* = *η*,
**(m2)** *mξη* ≤ *ρ*(*ξ*, *η*),
**(m3)** *ρ*(*ξ*, *η*) = *ρ*(*η*, *ξ*),
**(m4)** *ρ*(*ξ*, *η*) − *mξη* ≤ (*ρ*(*ξ*, *ζ*) − *mξζ*) + (*ρ*(*ζ*, *η*) − *mζη*).

*The corresponding pair* (*S*, *ρ*) *is called an M-metric space.*

**Lemma 1** ([19])**.** *Each partial metric forms an M-metric. The converse is false.*

**Example 1.** *Let S* = {*ξ*, *η*, *ζ*}. *Define*

$$\begin{array}{lll}
\rho(\xi,\xi) = 1, & \rho(\eta,\eta) = 9, & \rho(\zeta,\zeta) = 5,\\
\rho(\xi,\eta) = \rho(\eta,\xi) = 10, & \rho(\xi,\zeta) = \rho(\zeta,\xi) = 7, & \rho(\zeta,\eta) = \rho(\eta,\zeta) = 7.
\end{array}$$

*It is clear that ρ is an M-metric. Notice that ρ does not form a partial metric.*

**Definition 3** ([19])**.** *Let* (*S*, *ρ*) *be an M-metric space and ξ* ∈ *S*. *A sequence* {*ξn*} *in S is called:*

**(1)** *M*−*convergent to ξ* ∈ *S if and only if*

$$\lim_{n \to \infty} \big(\rho(\xi_n, \xi) - m_{\xi_n, \xi}\big) = 0,$$

**(2)** *M*−*Cauchy sequence if and only if*

$$\lim_{n,m \to \infty} \big(\rho(\xi_n, \xi_m) - m_{\xi_n, \xi_m}\big) \ \text{ and } \ \lim_{n,m \to \infty} \big(M_{\xi_n, \xi_m} - m_{\xi_n, \xi_m}\big)$$

*exist (and are finite).*

**Definition 4** ([19])**.** *An M-metric space is said to be M*−*complete if every M*−*Cauchy sequence* {*ξn*} *in S converges, with respect to τ<sup>m</sup> (the topology induced by the M-metric), to a point ξ* ∈ *S such that*

$$\lim_{n \to \infty} \big(\rho(\xi_n, \xi) - m_{\xi_n, \xi}\big) = 0 \ \text{ and } \ \lim_{n \to \infty} \big(M_{\xi_n, \xi} - m_{\xi_n, \xi}\big) = 0.$$

**Remark 1** ([19])**.** *Let* (*S*, *ρ*) *be an M-metric space and for every ξ*, *η* ∈ (*S*, *ρ*), *we have*

(*r*1) 0 ≤ *Mξη* + *mξη* = *ρ*(*ξ*, *ξ*) + *ρ*(*η*, *η*),
(*r*2) 0 ≤ *Mξη* − *mξη* = |*ρ*(*ξ*, *ξ*) − *ρ*(*η*, *η*)|,
(*r*3) *Mξη* − *mξη* ≤ (*Mξζ* − *mξζ*) + (*Mζη* − *mζη*).

The set {*ξ*<sup>∗</sup> ∈ *A* : *ϕ*(*ξ*∗) = 0} of all zeros of the function *ϕ* : *A* → [0, ∞) is denoted by *Zϕ*. Using this notion, Jleli et al. [22] introduced the notion of a *ϕ*-fixed point as follows: if *S* is a nonempty set, *T* : *S* → *S* and *ϕ* : *S* → [0, ∞) is a given function, then *ξ*<sup>∗</sup> ∈ *S* is said to be a *ϕ*-fixed point of *T* if and only if *T*(*ξ*∗) = *ξ*<sup>∗</sup> and *ϕ*(*ξ*∗) = 0. We denote the set of all *ϕ*-fixed points of *T* by *ϕF*(*S*), that is,

$$\varphi_F(S) = \{\xi^* \in S : T(\xi^*) = \xi^* \text{ and } \varphi(\xi^*) = 0\}.$$

In [22], the authors also considered the concept of a control function *<sup>F</sup>* : [0, <sup>∞</sup>)<sup>3</sup> <sup>→</sup> [0, <sup>∞</sup>) satisfying the following conditions (as used throughout the proofs below):

**(F1)** max{*a*, *b*} ≤ *F*(*a*, *b*, *c*) for all *a*, *b*, *c* ∈ [0, ∞),
**(F2)** *F*(0, 0, 0) = 0,
**(F3)** *F* is continuous.

The set of such control functions is denoted by F. Immediate examples of control functions are collected below:

**Example 2** ([22])**.** *Let i* <sup>∈</sup> {1, 2, 3}. *Define Fi* : [0, <sup>∞</sup>)<sup>3</sup> <sup>→</sup> [0, <sup>∞</sup>) *as follows:*

$$F\_1(a,b,c) = a+b+c,\ F\_2(a,b,c) = \max\left\{a,b\right\}+c \text{ and } F\_3(a,b,c) = a+a^2+b+c.$$

*for all a*, *b*, *c* ∈ [0, ∞). *Note that F*1, *F*2, *F*<sup>3</sup> ∈ F*.*

In [22], the notion of an (*F*, *ϕ*)-contraction mapping was defined, and the existence of a fixed point for such mappings was established.

**Definition 5** ([22])**.** *Let* (*S*, *d*) *be a complete metric space and ϕ* : *S* → [0, ∞)*. A mapping T* : *S* → *S is said to be an* (*F*, *ϕ*)*-contraction mapping if there exist F* ∈ F *and k* ∈ [0, 1) *such that*

$$F(d(T\xi, T\eta), \varphi(T\xi), \varphi(T\eta)) \le k F(d(\xi, \eta), \varphi(\xi), \varphi(\eta)), \text{ for all } \xi, \eta \in \mathcal{S}.$$

This result has since been extended by several authors; see, e.g., [23–26].

Let Ψ denote the set of nondecreasing functions *ψ* : [0, ∞) → [0, ∞) such that $\sum_{n=1}^{+\infty} \psi^n(t) < \infty$ for all *t* > 0, where *ψ<sup>n</sup>* is the *n*-th iterate of *ψ*. A function *ψ* is called a (*c*)−comparison function if *ψ* ∈ Ψ. Note that if *ψ* ∈ Ψ, then *ψ*(0) = 0 and *ψ*(*t*) < *t* for all *t* > 0 [27].

**Remark 2** ([27])**.** *Note that* $\sum_{n=1}^{+\infty} \psi^n(t) < \infty$ *implies* $\lim_{n \to \infty} \psi^n(t) = 0$ *for all t* ∈ (0, ∞).

In what follows we introduce the notion of "*ϕ*-best proximity point".

**Definition 6.** *Let* (*S*, *ρ*) *be an M-metric space, A*, *B are two subsets of S*. *An element ξ*<sup>∗</sup> ∈ *Z<sup>ϕ</sup> is said to be a ϕ-best proximity point of the operator T* : *A* → *B if and only if ρ*(*ξ*∗, *Tξ*∗) = *ρ*(*A*, *B*), *where ρ*(*A*, *B*) = inf {*ρ*(*a*, *b*) : *a* ∈ *A*, *b* ∈ *B*} *and ϕ*(*ξ*∗) = 0.

We denote the set of all *ϕ*-best proximity points of *T* by *ϕT*(*A*), that is,

*ϕT*(*A*) = {*ξ*<sup>∗</sup> ∈ *A* : *ρ*(*ξ*∗, *Tξ*∗) = *ρ*(*A*, *B*) and *ϕ*(*ξ*∗) = 0} .

The following definitions are also needed in the sequel. Before stating them, we underline the following assumption: throughout the paper, all sets and subsets are supposed to be non-empty. We characterize the following sets (which play a crucial role in best proximity theory) in the setting of *M*-metric spaces.

**Definition 7.** *Let* (*S*, *ρ*) *be an M-metric space, and A*, *B be two subsets of S*. *Define*

$$\begin{aligned} A\_0 &= \left\{ \xi \in A : \rho(\xi, \eta) = \rho(A, B) , \text{ for some } \eta \in B \right\} \text{ and} \\ B\_0 &= \left\{ \xi \in B : \rho(\xi, \eta) = \rho(A, B) , \text{ for some } \eta \in A \right\} \text{ .} \end{aligned}$$

**Definition 8.** *Let* (*S*, *ρ*) *be an M-metric space, and let A*, *B be two subsets of S*. *If α* : *A* × *A* → [−∞, ∞)*, then a mapping T* : *A* → *B is said to be proximal αp*−*admissible if*

$$\left.\begin{array}{r}
\alpha(\xi,\eta) \ge 0 \\
\rho(u, T\xi) = \rho(A,B) \\
\rho(v, T\eta) = \rho(A,B)
\end{array}\right\} \Longrightarrow \alpha(u,v) \ge 0,$$

*for all ξ*, *η*, *u*, *v* ∈ *A*.

**Definition 9.** *Let* (*S*, *ρ*) *be an M-metric space, A a subset of S, and α* : *A* × *A* → [−∞, ∞)*. Then A is said to be α*−*regular if, whenever* {*ξn*} *is a sequence in A such that α*(*ξn*, *ξn*+1) ≥ 0 *for all n and ξ<sup>n</sup>* → *ξ* ∈ *A as n* → ∞*, we have α*(*ξn*, *ξ*) ≥ 0 *for all n* ∈ *N.*

In this paper, we introduce the notion of *ϕ*-best proximity point and prove the *ϕ*-best proximity point result in the setting of *M*-metric space. We also present an example to support our result.

### **2. Main Results**

We start the section by introducing the notion of *αp*-admissible weak (*F*, *ϕ*)-proximal contraction mappings as follows.

**Definition 10.** *Let A*, *B be two subsets of M-metric space* (*S*, *ρ*) *and F* ∈ F*. An αp-admissible mapping T* : *A* → *B is called an αp-admissible weak* (*F*, *ϕ*)*-proximal contraction, if there exists a lower semi-continuous function ϕ* : *A* → [0, ∞) *such that*

$$\left.\begin{array}{r}
\alpha(\xi,\eta) \ge 0 \\
\rho(u, T\xi) = \rho(A,B) \\
\rho(v, T\eta) = \rho(A,B)
\end{array}\right\} \Longrightarrow \ \alpha(\xi,\eta) + F(\rho(u,v), \varphi(u), \varphi(v)) \le \psi\big(F(\rho(\xi,\eta), \varphi(\xi), \varphi(\eta))\big),$$

*for all ξ*, *η*, *u*, *v* ∈ *A and ψ* ∈ Ψ.

By taking *α*(*ξ*, *η*) = 0, we shall get the following definition:

**Definition 11.** *Let A*, *B be two subsets of M-metric space* (*S*, *ρ*) *and F* ∈ F*. A mapping T* : *A* → *B is said to be a weak* (*F*, *ϕ*)*-proximal contraction, if there exist two functions ϕ* : *A* → [0, ∞) *and ψ* ∈ Ψ *such that*

$$\left.\begin{array}{r}
\rho(u, T\xi) = \rho(A,B) \\
\rho(v, T\eta) = \rho(A,B)
\end{array}\right\} \Longrightarrow \ F(\rho(u,v), \varphi(u), \varphi(v)) \le \psi\big(F(\rho(\xi,\eta), \varphi(\xi), \varphi(\eta))\big),$$

*for all ξ*, *η*, *u*, *v* ∈ *A and ψ* ∈ Ψ*.*

The main result of the article is below.

**Theorem 1.** *Let A*, *B be two subsets of an M-complete M-metric space* (*S*, *ρ*) *and F* ∈ F*. Suppose that a mapping T* : *A* → *B is an αp-admissible weak* (*F*, *ϕ*)*-proximal contraction. If T*(*A*0) ⊆ *B*<sup>0</sup> *and A*<sup>0</sup> *is α*−*regular closed set in S, then there exists a ϕ-best proximity point of T provided that there exist ξ*0, *ξ*<sup>1</sup> ∈ *A*<sup>0</sup> *such that*

$$\rho(\xi_1, T\xi_0) = \rho(A, B) \ \text{ and } \ \alpha(\xi_0, \xi_1) \ge 0.$$

*Moreover, if α*(*ξ*, *η*) ≥ 0 *for all ξ*, *η* ∈ *ϕT*(*A*)*, then T has a unique ϕ-best proximity point*.

**Proof.** Let *ξ*0, *ξ*<sup>1</sup> ∈ *A*<sup>0</sup> be such that *ρ*(*ξ*1, *Tξ*0) = *ρ*(*A*, *B*) and *α*(*ξ*0, *ξ*1) ≥ 0. As *Tξ*<sup>0</sup> ∈ *T*(*A*0) ⊆ *B*0, there exists *ξ*<sup>2</sup> in *A*<sup>0</sup> such that *ρ*(*ξ*2, *Tξ*1) = *ρ*(*A*, *B*). Since *T* is proximal *αp*−admissible, we have *α*(*ξ*1, *ξ*2) ≥ 0. Similarly, by *T*(*A*0) ⊆ *B*0, we obtain a point *ξ*<sup>3</sup> ∈ *A*<sup>0</sup> such that *ρ*(*ξ*3, *Tξ*2) = *ρ*(*A*, *B*) which further implies that *α*(*ξ*2, *ξ*3) ≥ 0. Continuing this way, we can obtain a sequence {*ξn*} in *A*<sup>0</sup> such that

$$\begin{array}{l}
\rho(\xi_n, T\xi_{n-1}) = \rho(A, B),\\
\rho(\xi_{n+1}, T\xi_n) = \rho(A, B), \quad \alpha(\xi_n, \xi_{n+1}) \ge 0, \ \text{for all } n \in \mathbb{N} \cup \{0\}.
\end{array} \tag{1}$$

Since *T* is *αp*-admissible weak (*F*, *ϕ*)-proximal contraction, we have

$$\alpha(\xi_{n-1}, \xi_n) + F(\rho(\xi_n, \xi_{n+1}), \varphi(\xi_n), \varphi(\xi_{n+1})) \le \psi\big(F(\rho(\xi_{n-1}, \xi_n), \varphi(\xi_{n-1}), \varphi(\xi_n))\big).$$

Since *α*(*ξ*<sub>*n*−1</sub>, *ξ<sup>n</sup>*) ≥ 0, we obtain that

$$F(\rho(\xi_n, \xi_{n+1}), \varphi(\xi_n), \varphi(\xi_{n+1})) \le \psi\big(F(\rho(\xi_{n-1}, \xi_n), \varphi(\xi_{n-1}), \varphi(\xi_n))\big).$$

By induction, we get

$$F(\rho(\xi_n, \xi_{n+1}), \varphi(\xi_n), \varphi(\xi_{n+1})) \le \psi^n\big(F(\rho(\xi_0, \xi_1), \varphi(\xi_0), \varphi(\xi_1))\big).$$

It follows from the condition **(F1)** that

$$\max\{\rho(\xi_n, \xi_{n+1}), \varphi(\xi_n)\} \le \psi^n\big(F(\rho(\xi_0, \xi_1), \varphi(\xi_0), \varphi(\xi_1))\big). \tag{2}$$

By (2), we obtain that

$$\rho(\xi_n, \xi_{n+1}) \le \psi^n\big(F(\rho(\xi_0, \xi_1), \varphi(\xi_0), \varphi(\xi_1))\big). \tag{3}$$

On the other hand, since *ψ<sup>n</sup>*(*t*) → 0 as *n* → ∞ (Remark 2), we get

$$\lim\_{n \to \infty} \rho(\xi\_{n}, \xi\_{n+1}) = 0. \tag{4}$$

Using (4) and the condition **(m2)**, we have

$$\begin{aligned}
\lim_{n \to \infty} \rho(\xi_n, \xi_n) &= \lim_{n \to \infty} \min\{\rho(\xi_n, \xi_n), \rho(\xi_{n+1}, \xi_{n+1})\} \\
&= \lim_{n \to \infty} m_{\xi_n, \xi_{n+1}} \\
&\le \lim_{n \to \infty} \rho(\xi_n, \xi_{n+1}) = 0.
\end{aligned}$$

Since lim*n*→<sup>∞</sup> *ρ*(*ξn*, *ξn*) = 0, we have

$$\lim_{n,m \to \infty} m_{\xi_n, \xi_m} = 0. \tag{5}$$

We shall indicate that {*ξn*} is an *<sup>M</sup>*-Cauchy sequence. Consider *<sup>m</sup>*, *<sup>n</sup>* <sup>∈</sup> <sup>N</sup> such that *<sup>m</sup>* <sup>&</sup>gt; *<sup>n</sup>*. On using (3) and the condition **(m4)**, we have

$$\begin{aligned}
\rho(\xi_n, \xi_m) - m_{\xi_n, \xi_m} &\le \rho(\xi_n, \xi_{n+1}) - m_{\xi_n, \xi_{n+1}} + \rho(\xi_{n+1}, \xi_m) - m_{\xi_{n+1}, \xi_m} \\
&\le \rho(\xi_n, \xi_{n+1}) - m_{\xi_n, \xi_{n+1}} + \rho(\xi_{n+1}, \xi_{n+2}) - m_{\xi_{n+1}, \xi_{n+2}} + \rho(\xi_{n+2}, \xi_m) - m_{\xi_{n+2}, \xi_m} \\
&\le \rho(\xi_n, \xi_{n+1}) - m_{\xi_n, \xi_{n+1}} + \rho(\xi_{n+1}, \xi_{n+2}) - m_{\xi_{n+1}, \xi_{n+2}} + \dots + \rho(\xi_{m-1}, \xi_m) - m_{\xi_{m-1}, \xi_m} \\
&\le \rho(\xi_n, \xi_{n+1}) + \rho(\xi_{n+1}, \xi_{n+2}) + \dots + \rho(\xi_{m-1}, \xi_m) \\
&\le \psi^n\big(F(\rho(\xi_0, \xi_1), \varphi(\xi_0), \varphi(\xi_1))\big) + \psi^{n+1}\big(F(\rho(\xi_0, \xi_1), \varphi(\xi_0), \varphi(\xi_1))\big) + \dots + \psi^{m-1}\big(F(\rho(\xi_0, \xi_1), \varphi(\xi_0), \varphi(\xi_1))\big) \\
&= \sum_{i=n}^{m-1} \psi^i\big(F(\rho(\xi_0, \xi_1), \varphi(\xi_0), \varphi(\xi_1))\big). \end{aligned} \tag{6}$$

It follows from Remark 2 and (6) that *ρ*(*ξn*, *ξm*) − *mξn*,*ξ<sup>m</sup>* → 0 as *n*, *m* → ∞. On the other hand, by (5), we obtain that

$$\lim_{n,m \to \infty} \big(M_{\xi_n, \xi_m} - m_{\xi_n, \xi_m}\big) = 0.$$

Thus {*ξn*} is an *M*-Cauchy sequence in *A*<sup>0</sup> ⊆ *A* ⊂ *S*. By the completeness of *S* and closeness of *A*0, there exists *ξ*<sup>∗</sup> ∈ *A*<sup>0</sup> such that

$$\lim_{n \to \infty} \big(\rho(\xi_n, \xi^*) - m_{\xi_n, \xi^*}\big) = 0 \ \text{ and } \ \lim_{n \to \infty} \big(M_{\xi_n, \xi^*} - m_{\xi_n, \xi^*}\big) = 0.$$

Since lim*n*→<sup>∞</sup> *ρ*(*ξn*, *ξn*) = 0, we have

$$\lim_{n \to \infty} \rho(\xi_n, \xi^*) = 0 \ \text{ and } \ \lim_{n \to \infty} M_{\xi_n, \xi^*} = 0. \tag{7}$$

Thus by Remark 1, we get that

$$\rho(\xi^*, \xi^*) = \lim_{n \to \infty} \big[M_{\xi_n, \xi^*} + m_{\xi_n, \xi^*} - \rho(\xi_n, \xi_n)\big] = 0.$$

This implies that

$$\rho(\xi^*, \xi^*) = 0.$$

*Axioms* **2020**, *9*, 19

Now we need to show that *ϕ*(*ξ*∗) = 0. Using (2), we have

$$\varphi(\xi_n) \le \psi^n\big(F(\rho(\xi_0, \xi_1), \varphi(\xi_0), \varphi(\xi_1))\big).$$

Letting *n* → ∞ on the inequality above, we obtain

$$\lim_{n \to \infty} \varphi(\xi_n) = 0. \tag{8}$$

Since *ϕ* is lower semi-continuous, it follows from (7) and (8) that

$$0 \le \varphi(\xi^*) \le \liminf_{n \to \infty} \varphi(\xi_n) = 0.$$

Hence *ϕ*(*ξ*∗) = 0. Since *A*<sup>0</sup> is *α*−regular, *α*(*ξn*, *ξ*∗) ≥ 0. As *ξ*<sup>∗</sup> ∈ *A*<sup>0</sup> and *T*(*A*0) ⊆ *B*0, we have *Tξ*<sup>∗</sup> ∈ *B*0, so we may choose a point *z* ∈ *A*<sup>0</sup> such that

$$\rho(z, T\xi^*) = \rho(A, B). \tag{9}$$

We shall prove that *z* = *ξ*∗. On the contrary, suppose that *z* ≠ *ξ*∗. Since *T* is an *αp*-admissible weak (*F*, *ϕ*)-proximal contraction, by using (1) and (9) we have

$$\begin{aligned}
\rho(\xi_{n+1}, z) &\le \max\{\rho(\xi_{n+1}, z), \varphi(\xi_{n+1})\} \\
&\le F(\rho(\xi_{n+1}, z), \varphi(\xi_{n+1}), \varphi(z)) \\
&\le \alpha(\xi_n, \xi^*) + F(\rho(\xi_{n+1}, z), \varphi(\xi_{n+1}), \varphi(z)) \\
&\le \psi\big(F(\rho(\xi_n, \xi^*), \varphi(\xi_n), \varphi(\xi^*))\big) \\
&< F(\rho(\xi_n, \xi^*), \varphi(\xi_n), \varphi(\xi^*)) \\
&= F(\rho(\xi_n, \xi^*), \varphi(\xi_n), 0).
\end{aligned}$$

Letting *n* → ∞ on the inequality above, we have

$$\lim_{n \to \infty} \rho(\xi_{n+1}, z) \le \lim_{n \to \infty} F(\rho(\xi_n, \xi^*), \varphi(\xi_n), 0) = F(0, 0, 0) = 0,$$

which implies that

$$\lim_{n \to \infty} \rho(\xi_{n+1}, z) = 0.$$

By using the condition **(m4)**, we have

$$\begin{aligned}
\rho(\xi^*, z) - m_{\xi^*, z} &\le \rho(\xi^*, \xi_{n+1}) - m_{\xi^*, \xi_{n+1}} + \rho(\xi_{n+1}, z) - m_{\xi_{n+1}, z} \\
&\le \rho(\xi^*, \xi_{n+1}) + \rho(\xi_{n+1}, z),
\end{aligned}$$

so that, letting $n \to \infty$,

$$\rho(\xi^*, z) - m_{\xi^*, z} \le \lim_{n \to \infty} \rho(\xi^*, \xi_{n+1}) + \lim_{n \to \infty} \rho(\xi_{n+1}, z) \le 0.$$

Since *ρ*(*ξ*∗, *ξ*∗) = 0, it follows that *ξ*∗ = *z*, a contradiction. Consequently, *z* = *ξ*∗ and we have

$$\rho(\xi^*, T\xi^*) = \rho(A, B).$$

**Uniqueness:** Let *α*(*ξ*, *η*) ≥ 0 for all *ξ*, *η* ∈ *ϕT*(*A*). Suppose that *ξ*<sup>∗</sup> and *w* are two *ϕ*-best proximity points of *T* with *ξ*<sup>∗</sup> ≠ *w*. Hence

$$\rho(w, Tw) = \rho(A, B),$$

and

$$\varphi(\xi^*) = \varphi(w) = 0.$$

Since *T* is *αp*-admissible weak (*F*, *ϕ*)-proximal contraction, we have

$$\begin{aligned}
F(\rho(\xi^*, w), 0, 0) &\le \alpha(\xi^*, w) + F(\rho(\xi^*, w), \varphi(\xi^*), \varphi(w)) \\
&\le \psi\big(F(\rho(\xi^*, w), \varphi(\xi^*), \varphi(w))\big) \\
&< F(\rho(\xi^*, w), 0, 0),
\end{aligned}$$

a contradiction. Consequently, *ξ*∗ is the unique *ϕ*-best proximity point of *T*.

**Corollary 1.** *Let A*, *B be two subsets of an M-complete M-metric space* (*S*, *ρ*) *and F* ∈ F*. Suppose that a mapping T* : *A* → *B is a weak* (*F*, *ϕ*)*-proximal contraction. If T*(*A*0) ⊆ *B*<sup>0</sup> *and A*<sup>0</sup> *is a closed set in S, then there exists a unique ϕ-best proximity point of T provided that there exist ξ*0, *ξ*<sup>1</sup> ∈ *A*<sup>0</sup> *such that*

$$\rho(\xi_1, T\xi_0) = \rho(A, B).$$

**Proof.** It is derived from Theorem 1 by choosing *α*(*ξ*, *η*) = 0.

Since every partial metric space is an *M*-metric space, from Theorem 1 we immediately deduce the following result. Note that in the following result we consider the notions of Definitions 10 and 11 in the setting of partial metric spaces.

**Corollary 2.** *Let A*, *B be two subsets of a complete partial metric space* (*S*, *p*) *and F* ∈ F*. Suppose that a mapping T* : *A* → *B is an αp-admissible weak* (*F*, *ϕ*)*-proximal contraction. If T*(*A*0) ⊆ *B*<sup>0</sup> *and A*<sup>0</sup> *is α*−*regular closed set in S, then there exists a ϕ-best proximity point of T provided that there exist ξ*0, *ξ*<sup>1</sup> ∈ *A*<sup>0</sup> *such that*

$$p(\xi_1, T\xi_0) = p(A, B) \ \text{ and } \ \alpha(\xi_0, \xi_1) \ge 0,$$

*where p*(*A*, *B*) = inf{*p*(*a*, *b*) : *a* ∈ *A*, *b* ∈ *B*}. *Moreover, if α*(*ξ*, *η*) ≥ 0 *for all ξ*, *η* ∈ *ϕT*(*A*)*, then T has a unique ϕ-best proximity point*.

**Proof.** Since an *M*-metric space is a generalization of partial metric space, from Theorem 1 we deduce the result.

**Corollary 3.** *Let A*, *B be two subsets of a complete partial metric space* (*S*, *p*) *and F* ∈ F*. Suppose that a mapping T* : *A* → *B is a weak* (*F*, *ϕ*)*-proximal contraction. If T*(*A*0) ⊆ *B*<sup>0</sup> *and A*<sup>0</sup> *is a closed set in S, then there exists a unique ϕ-best proximity point of T provided that there exist ξ*0, *ξ*<sup>1</sup> ∈ *A*<sup>0</sup> *such that*

$$p(\xi\_1, T\xi\_0) = p(A, B).$$

**Proof.** It is deduced from Corollary 2 by choosing *α*(*ξ*, *η*) = 0.

To support Corollary 1, we provide the following example.

**Example 3.** *Let S* = [0, 1] *and let ρ* : *S* × *S* → [0, ∞) *be defined by*

$$\rho(\xi, \eta) = |\xi - \eta|.$$

*Then* (*S*, *ρ*) *is an M-metric space. Suppose that A* = {0, 0.4, 0.6, 0.9} *and B* = {0.1, 0.3, 0.7, 1}. *Note that ρ*(*A*, *B*) = 0.1, *A* = *A*<sup>0</sup> *and B* = *B*0. *Define a mapping T* : *A* → *B as:*

 $T(0) = 0.1$ ,  $T(0.4) = 0.1$ ,  $T(0.6) = 0.1$ ,  $T(0.9) = 0.3$ .

*Note that T*(*A*0) <sup>⊆</sup> *<sup>B</sup>*0. *Define functions <sup>ψ</sup>* : [0, <sup>∞</sup>) <sup>→</sup> [0, <sup>∞</sup>), *<sup>F</sup>* : [0, <sup>∞</sup>)<sup>3</sup> <sup>→</sup> [0, <sup>∞</sup>) *and <sup>ϕ</sup>* : *<sup>A</sup>* <sup>→</sup> [0, <sup>∞</sup>) *by*

$$\begin{aligned}
\psi(t) &= \tfrac{2t}{3},\\
F(a,b,c) &= \max\{a,b\} + c, \text{ for all } a,b,c \in [0,\infty),\\
\varphi(\xi) &= \xi, \text{ for all } \xi \in A.
\end{aligned}$$

*If we take ξ* = 0.6, *η* = 0.9, *u* = 0 *and v* = 0.4, *then we have*

$$\rho(u, T\xi) = \rho(v, T\eta) = 0.1 = \rho(A, B),$$

*which implies that*

$$F(\rho(u, v), \varphi(u), \varphi(v)) = 0.8 \le 1 = \psi\big(F(\rho(\xi, \eta), \varphi(\xi), \varphi(\eta))\big).$$

*Hence T forms a weak* (*F*, *ϕ*)*-proximal contraction. Thus, all the conditions of Corollary 1 are satisfied. Moreover, ξ*∗ = 0 *is a unique ϕ-best proximity point.*

To support Corollary 3, we provide the following example.

**Example 4.** *Let S* = [0, 1] ∪ [2, 3]*. Define the mapping p* : *S* × *S* → [0, ∞) *by*

$$p(\xi,\eta) = \begin{cases} \max\{\xi,\eta\}, & \{\xi,\eta\} \cap [2,3] \neq \emptyset, \\ |\xi-\eta|, & \{\xi,\eta\} \subseteq [0,1]. \end{cases}$$

*Then* (*S*, *p*) *is a partial metric space. Suppose that A* = {0, 0.4, 0.6, 0.9} *and B* = {0.1, 0.3, 0.7, 1} . *Note that p*(*A*, *B*) = 0.1, *A* = *A*<sup>0</sup> *and B* = *B*0. *Define a mapping T* : *A* → *B as:*

 $T(0) = 0.1$ ,  $T(0.4) = 0.1$ ,  $T(0.6) = 0.1$ ,  $T(0.9) = 0.3$ .

*Note that T*(*A*0) <sup>⊆</sup> *<sup>B</sup>*0. *Define mappings <sup>ψ</sup>* : [0, <sup>∞</sup>) <sup>→</sup> [0, <sup>∞</sup>), *<sup>F</sup>* : [0, <sup>∞</sup>)<sup>3</sup> <sup>→</sup> [0, <sup>∞</sup>) *and <sup>ϕ</sup>* : *<sup>A</sup>* <sup>→</sup> [0, <sup>∞</sup>) *by*

$$\begin{aligned}
\psi(t) &= \tfrac{t}{2},\\
F(a,b,c) &= a + b + c, \text{ for all } a,b,c \in [0,\infty),\\
\varphi(\xi) &= \xi, \text{ for all } \xi \in A.
\end{aligned}$$

*If we take ξ* = 0.6, *η* = 0.9, *u* = 0 *and v* = 0.4, *then we have*

$$p(u, T\xi) = p(v, T\eta) = 0.1 = p(A, B),$$

*which implies that*

$$F(p(u, v), \varphi(u), \varphi(v)) = 0.8 \le 0.9 = \psi\big(F(p(\xi, \eta), \varphi(\xi), \varphi(\eta))\big).$$

*Hence, T forms a weak* (*F*, *ϕ*)*-proximal contraction. Thus all the conditions of Corollary 3 are satisfied. Moreover ξ*∗ = 0 *is a unique ϕ-best proximity point.*

### **3. Application to Fixed Point Theory**

Let us take *A* = *B* = *S*, and suppose that *T* is a proximal *αp*−admissible mapping. Since *ρ*(*A*, *B*) = 0, the conditions

$$\alpha(\xi, \eta) \ge 0, \quad \rho(u, T\xi) = 0 \ \text{ and } \ \rho(v, T\eta) = 0$$

imply that

$$\alpha(T\xi, T\eta) = \alpha(u, v) \ge 0.$$

Hence *T* is an *αp*−admissible mapping.

**Remark 3.** *If α* : *S* × *S* → [−∞, ∞)*, ϕ* : *S* → [0, ∞)*, and a self-mapping T on S is an αp-admissible weak* (*F*, *ϕ*)*-contraction, then α*(*ξ*, *η*) ≥ 0 *implies that*

$$\alpha(\xi, \eta) + F(\rho(T\xi, T\eta), \varphi(T\xi), \varphi(T\eta)) \le \psi\big(F(\rho(\xi, \eta), \varphi(\xi), \varphi(\eta))\big), \tag{10}$$

*where F* ∈ F*, and ψ* ∈ Ψ, *for all ξ*, *η* ∈ *S*. *In other words, we consider the notions in Definitions 10 and 11 in the setting of standard metric spaces.*

**Definition 12.** *A self mapping T* : *S* → *S satisfying the above implication is called αp-admissible weak* (*F*, *ϕ*)*-contraction.*

**Corollary 4.** *Let* (*S*, *ρ*) *be an M*−*complete M-metric space, F* ∈ F*, and let a self-mapping T be an αp-admissible weak* (*F*, *ϕ*)*-contraction such that, whenever* {*ξn*} *is a sequence in S with α*(*ξn*, *ξn*+1) ≥ 0 *and* lim*n*→<sup>∞</sup> *ξ<sup>n</sup>* = *ξ* ∈ *S*, *we have α*(*ξn*, *ξ*) ≥ 0 *for all n* ∈ *N*. *Then there exists a ϕ-fixed point of T provided that there exists ξ*<sup>0</sup> ∈ *S such that α*(*ξ*0, *Tξ*0) ≥ 0. *Moreover, if α*(*ξ*, *η*) ≥ 0 *for all ξ*, *η* ∈ *ϕF*(*S*)*, then T has a unique ϕ-fixed point*.

**Proof.** Let us take *A* = *B* = *S* in Theorem 1. We shall show that *T* is an *αp*-admissible weak (*F*, *ϕ*)-contraction. Suppose that *ξ*, *η*, *u*, *v* ∈ *S* satisfy the following:

$$\alpha(\xi, \eta) \ge 0, \quad \rho(u, T\xi) = \rho(A, B), \quad \rho(v, T\eta) = \rho(A, B).$$

As *ρ*(*A*, *B*) = 0, we have *u* = *Tξ* and *v* = *Tη*. Since *T* satisfies the condition (10), so

$$\alpha(\xi, \eta) + F(\rho(T\xi, T\eta), \varphi(T\xi), \varphi(T\eta)) \le \psi\big(F(\rho(\xi, \eta), \varphi(\xi), \varphi(\eta))\big),$$

that is,

$$\alpha(\xi, \eta) + F(\rho(u, v), \varphi(u), \varphi(v)) \le \psi\big(F(\rho(\xi, \eta), \varphi(\xi), \varphi(\eta))\big),$$

which implies that *T* is an *αp*-admissible weak (*F*, *ϕ*)-contraction. Let *ξ*<sup>0</sup> be an arbitrary point in *S*. Define a sequence {*ξn*} in *S* by

$$\xi_n = T\xi_{n-1}, \text{ for all } n \in \mathbb{N}.$$

Since *T* is an *αp*−admissible mapping, we have

$$\alpha(\xi_0, \xi_1) = \alpha(\xi_0, T\xi_0) \ge 0 \ \text{ implies that } \ \alpha(T\xi_0, T\xi_1) = \alpha(\xi_1, \xi_2) \ge 0.$$

By induction, we get that

$$\alpha(\xi_n, \xi_{n+1}) = \alpha(\xi_n, T\xi_n) \ge 0, \text{ for all } n \in \mathbb{N}. \tag{11}$$

Using (11) and the fact that *T* is an *αp*-admissible weak (*F*, *ϕ*)-contraction, we obtain

$$\begin{aligned}
F(\rho(\xi_n, \xi_{n+1}), \varphi(\xi_n), \varphi(\xi_{n+1})) &= F(\rho(T\xi_{n-1}, T\xi_n), \varphi(T\xi_{n-1}), \varphi(T\xi_n)) \\
&\le \alpha(\xi_{n-1}, \xi_n) + F(\rho(T\xi_{n-1}, T\xi_n), \varphi(T\xi_{n-1}), \varphi(T\xi_n)) \\
&\le \psi\big(F(\rho(\xi_{n-1}, \xi_n), \varphi(\xi_{n-1}), \varphi(\xi_n))\big), \text{ for all } n \in \mathbb{N}.
\end{aligned}$$

Using arguments similar to those given in the proof of Theorem 1, we obtain that {*ξn*}*n*∈<sup>N</sup> is an *M*-Cauchy sequence in *S*. Since (*S*, *ρ*) is an *M*-complete *M*-metric space, there exists *ξ*<sup>∗</sup> ∈ *S* such that

$$\lim_{n \to \infty} \rho(\xi_n, \xi^*) = 0 \ \text{ and } \ \lim_{n \to \infty} M_{\xi_n, \xi^*} = 0. \tag{12}$$

We now show that *ϕ*(*ξ*∗) = 0. From (2), we conclude that

$$\varphi(\xi_n) \le \psi^n\big(F(\rho(\xi_0, \xi_1), \varphi(\xi_0), \varphi(\xi_1))\big).$$

Again, by using arguments similar to those given in the proof of Theorem 1, we obtain that *ϕ*(*ξ*∗) = 0. In view of (11) and (12) we have *α*(*ξn*, *ξ*∗) ≥ 0, for all *n* ∈ ℕ. By taking *ξ* = *ξ<sup>n</sup>* and *η* = *ξ*∗ in the condition (10), we have

$$\begin{aligned}
\rho(\xi_{n+1}, T\xi^*) &= \rho(T\xi_n, T\xi^*) \\
&\le \max\{\rho(T\xi_n, T\xi^*), \varphi(T\xi_n)\} \\
&\le F(\rho(T\xi_n, T\xi^*), \varphi(T\xi_n), \varphi(T\xi^*)) \\
&\le \alpha(\xi_n, \xi^*) + F(\rho(T\xi_n, T\xi^*), \varphi(T\xi_n), \varphi(T\xi^*)) \\
&\le \psi\big(F(\rho(\xi_n, \xi^*), \varphi(\xi_n), \varphi(\xi^*))\big) \\
&< F(\rho(\xi_n, \xi^*), \varphi(\xi_n), \varphi(\xi^*)) \\
&= F(\rho(\xi_n, \xi^*), \varphi(\xi_n), 0).
\end{aligned}$$

On taking limit as *n* → ∞ on the both sides of the above inequality, we have

$$\lim_{n \to \infty} \rho(\xi_{n+1}, T\xi^*) \le \lim_{n \to \infty} F(\rho(\xi_n, \xi^*), \varphi(\xi_n), 0) = F(0, 0, 0) = 0,$$

which implies that

$$\lim_{n \to \infty} \rho(\xi_{n+1}, T\xi^*) = 0.$$

By using the condition **(m4)**, we have

$$\begin{aligned}
\rho(\xi^*, T\xi^*) - m_{\xi^*, T\xi^*} &\le \rho(\xi^*, \xi_{n+1}) - m_{\xi^*, \xi_{n+1}} + \rho(\xi_{n+1}, T\xi^*) - m_{\xi_{n+1}, T\xi^*} \\
&\le \rho(\xi^*, \xi_{n+1}) + \rho(\xi_{n+1}, T\xi^*).
\end{aligned}$$

Letting *n* → ∞ in the inequality above, we deduce that

$$\rho(\xi^*, T\xi^*) - m_{\xi^*, T\xi^*} \le \lim_{n \to \infty} \rho(\xi^*, \xi_{n+1}) + \lim_{n \to \infty} \rho(\xi_{n+1}, T\xi^*) \le 0.$$

Since *ρ*(*ξ*∗, *ξ*∗) = 0, we obtain

$$\rho(\xi^*, T\xi^*) = 0,$$

which gives *Tξ*∗ = *ξ*∗; together with *ϕ*(*ξ*∗) = 0, this shows that *ξ*∗ is a *ϕ*-fixed point of *T*.

**Uniqueness:** Let *α*(*ξ*, *η*) ≥ 0 for all *ξ*, *η* ∈ *ϕF*(*S*). Suppose that *ξ*<sup>∗</sup> and *w* are two *ϕ*−fixed points of *T* with *ξ*<sup>∗</sup> ≠ *w*. Hence

$$\rho(w, Tw) = 0,$$

and

$$\varphi(\xi^*) = \varphi(w) = 0.$$

Since *T* is *αp*-admissible weak (*F*, *ϕ*)-contraction, we have

$$\begin{aligned}
F(\rho(\xi^*, w), 0, 0) &= F(\rho(T\xi^*, Tw), \varphi(T\xi^*), \varphi(Tw)) \\
&\le \alpha(\xi^*, w) + F(\rho(T\xi^*, Tw), \varphi(T\xi^*), \varphi(Tw)) \\
&\le \psi\big(F(\rho(\xi^*, w), \varphi(\xi^*), \varphi(w))\big) \\
&< F(\rho(\xi^*, w), 0, 0),
\end{aligned}$$

a contradiction. Consequently, *ξ*∗ is the unique *ϕ*-fixed point of *T*.

### **4. Application to Graph Theory**

Let *S* be a set and let Δ denote the diagonal of *S* × *S*. A graph *G* is a pair (*V*, *E*), where the set *V* = *V*(*G*) of its vertices coincides with *S* and the set *E* = *E*(*G*) of its edges contains all loops, that is, Δ ⊆ *E*(*G*). Furthermore, we assume that *G* has no parallel edges. By reversing the direction of the edges of *G*, we obtain the graph *G*−<sup>1</sup>, whose sets of edges and vertices are defined as follows:

$$E(G^{-1}) = \{ (\xi, \eta) \in S \times S : (\eta, \xi) \in E(G) \} \text{ and } V(G^{-1}) = V(G).$$

We denote by $\bar{G}$ the undirected graph obtained from *G* by ignoring the direction of its edges.

It is more convenient to treat $\bar{G}$ as a directed graph whose set of edges is symmetric; under this convention, we have

$$E(\bar{G}) = E(G) \cup E(G^{-1}).$$

	- *2. Let ξ and η be two vertices of a graph G. A path from ξ to η of length n* (*where n* ∈ ℕ ∪ {0}) *in a graph G is a sequence* {*ξ<sub>i</sub>* : *i* = 0, 1, 2, ..., *n*} *of n* + 1 *distinct vertices such that ξ*<sub>0</sub> = *ξ*, *ξ<sub>n</sub>* = *η and* (*ξ<sub>i</sub>*, *ξ<sub>i+1</sub>*) ∈ *E*(*G*) *for i* = 0, 1, . . . , *n* − 1.
	- *3. A graph G is called connected if there exists a path between any two vertices of G; G is said to be weakly connected if the undirected graph $\bar{G}$ is connected.*
	- *4. A path is called elementary if no vertex appears more than once in it.*
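For readers who prefer something concrete, the graph notions above can be illustrated on a small toy example (the set *S* = {1, 2, 3} and the edges below are our own hypothetical choices, not taken from the paper):

```python
# Toy illustration of E(G), E(G^{-1}) and E(G-bar) on S = {1, 2, 3}.
S = {1, 2, 3}
loops = {(v, v) for v in S}           # Delta ⊆ E(G): every loop is an edge
E = loops | {(1, 2), (2, 3)}          # a directed graph G without parallel edges

E_inv = {(y, x) for (x, y) in E}      # E(G^{-1}): all edges reversed
E_bar = E | E_inv                     # E(G-bar) = E(G) ∪ E(G^{-1})

print(sorted(E_inv - loops))          # [(2, 1), (3, 2)]: reversed non-loop edges
print((2, 1) in E_bar)                # True: direction is ignored in G-bar
```

Note that (1, 2, 3) is a path of length 2 from 1 to 3 in this graph, and it is elementary since no vertex repeats.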

Throughout this section, we suppose that (*S*, *ρ*) is an *M*-metric space endowed with a directed graph *G* that has no parallel edges.

We now introduce the notion of a *G*−proximal graphic contraction.

**Definition 14.** *Let A*, *B be two subsets of an M-complete M-metric space* (*S*, *ρ*), *ϕ* : *S* → [0, ∞), *ψ* ∈ Ψ, *F* ∈ F *and G be a graph without parallel edges such that V*(*G*) = *S. A mapping T* : *A* → *B is said to be a G*−*proximal graphic contraction if for all ξ*, *η*, *u*, *v* ∈ *A with ξ* ≠ *η and* (*ξ*, *η*) ∈ *E*(*G*)*, we have*

$$\begin{cases} \rho(u, T\xi) = \rho(A, B) \\ \rho(v, T\eta) = \rho(A, B) \end{cases} \Longrightarrow F(\rho(u, v), \varphi(u), \varphi(v)) \leq \psi(F(\rho(\xi, \eta), \varphi(\xi), \varphi(\eta))),$$

*and*

$$(u,v)\in E(G).$$

**Theorem 2.** *Let ϕ* : *A* → [0, ∞) *be a lower semi-continuous function and T* : *A* → *B a G*−*proximal graphic contraction. Suppose that T*(*A*<sub>0</sub>) ⊆ *B*<sub>0</sub>*, A*<sub>0</sub> *is closed in S, and there exists a path* (*η<sub>i</sub>*)<sup>*N*</sup><sub>*i*=0</sub> ⊆ *A*<sub>0</sub> *in G between any two elements ξ and η. Then T has a unique ϕ*−*best proximity point, provided that there exist ξ*<sub>0</sub>, *ξ*<sub>1</sub> ∈ *A*<sub>0</sub> *joined by an elementary path in A*<sub>0</sub> *and satisfying*

$$
\rho(\xi_1, T\xi_0) = \rho(A, B).
$$

**Proof.** Let $\xi_0, \xi_1 \in A_0$ be such that $\rho(\xi_1, T\xi_0) = \rho(A, B)$. A path $(s_0^0, s_0^1, s_0^2, \ldots, s_0^N)$ in *G* is a sequence containing points of $A_0$; consequently, $s_0^0 = \xi_0$, $s_0^N = \xi_1$ and $(s_0^i, s_0^{i+1}) \in E(G)$ for all $0 \leq i \leq N-1$. Given that $s_0^1 \in A_0$, by $T(A_0) \subseteq B_0$ and the definition of $A_0$, there exists $s_1^1 \in A_0$ such that $\rho(s_1^1, Ts_0^1) = \rho(A, B)$. Similarly, for each $i = 2, \ldots, N$, there exists $s_1^i \in A_0$ such that $\rho(s_1^i, Ts_0^i) = \rho(A, B)$. As $(s_0^0, s_0^1, \ldots, s_0^N)$ is a path in *G*, $(s_0^0, s_0^1) = (\xi_0, s_0^1) \in E(G)$. From the above argument, we have $\rho(\xi_1, T\xi_0) = \rho(A, B)$ and $\rho(s_1^1, Ts_0^1) = \rho(A, B)$. Since *T* is a *G*−proximal graphic contraction, it follows that $(\xi_1, s_1^1) \in E(G)$. In a similar manner, we obtain the following:

$$(s_1^{i-1}, s_1^i) \in E(G), \text{ for all } 1 \le i \le N.$$

If $\xi_2 = s_1^N$, then $(s_1^0, s_1^1, s_1^2, \ldots, s_1^N)$ is a path from $\xi_1 = s_1^0$ to $\xi_2 = s_1^N$. As $s_1^i \in A_0$ and $Ts_1^i \in T(A_0) \subseteq B_0$ for each $i = 1, 2, 3, \ldots, N$, by the definition of $B_0$, there exists $s_2^i \in A_0$ such that $\rho(s_2^i, Ts_1^i) = \rho(A, B)$. In addition, we have $\rho(\xi_2, T\xi_1) = \rho(A, B)$. As mentioned above, we have

$$(\xi_2, s_2^1) \in E(G) \text{ and } (s_2^{i-1}, s_2^i) \in E(G), \text{ for all } 1 \le i \le N.$$

Similarly, by $T(A_0) \subseteq B_0$, there exists a point $\xi_3 \in A_0$ with $\xi_3 = s_2^N$. Then $(s_2^i)_{i=0}^N$ is a path from $s_2^0 = \xi_2$ to $s_2^N = \xi_3$. Continuing in this manner for all $n \in \mathbb{N}$, we obtain a sequence $\{\xi_n\}_{n \in \mathbb{N}}$ with $\xi_{n+1} \in [\xi_n]_G^N$ and $\rho(\xi_{n+1}, T\xi_n) = \rho(A, B)$, by producing a path $(s_n^0, s_n^1, s_n^2, \ldots, s_n^N)$ from $\xi_n = s_n^0$ to $\xi_{n+1} = s_n^N$ in such a way that

$$\rho(s_{n+1}^i, Ts_n^i) = \rho(A, B),$$

for all 1 <sup>≤</sup> *<sup>i</sup>* <sup>≤</sup> *<sup>N</sup>*, *<sup>n</sup>* <sup>∈</sup> <sup>N</sup>. Thus we have

$$\rho(s_n^{i-1}, Ts_{n-1}^{i-1}) = \rho(A, B) = \rho(s_n^i, Ts_{n-1}^i), \text{ for all } 1 \le i \le N, \ n \in \mathbb{N}. \tag{13}$$

Now for any positive integer *n*

$$\begin{split} \rho(\xi_n, \xi_{n+1}) &= \rho(s_n^0, s_n^N) \\ &\leq \rho(s_n^0, s_n^1) - m_{s_n^0, s_n^1} + \rho(s_n^1, s_n^2) - m_{s_n^1, s_n^2} + \dots + \rho(s_n^{N-1}, s_n^N) - m_{s_n^{N-1}, s_n^N} \\ &\leq \rho(s_n^0, s_n^1) + \rho(s_n^1, s_n^2) + \dots + \rho(s_n^{N-1}, s_n^N) \\ &= \sum_{i=1}^{N} \rho(s_n^{i-1}, s_n^i), \end{split} \tag{14}$$

for all $1 \le i \le N$ and $n \in \mathbb{N}$. Note that $(s_{n-1}^{i-1}, s_{n-1}^i) \in E(G)$ and *T* is a *G*−proximal graphic contraction. It follows from (13) that

$$F(\rho(s_n^{i-1}, s_n^i), \varphi(s_n^{i-1}), \varphi(s_n^i)) \le \psi(F(\rho(s_{n-1}^{i-1}, s_{n-1}^i), \varphi(s_{n-1}^{i-1}), \varphi(s_{n-1}^i))), \text{ for all } 1 \le i \le N, \ n \in \mathbb{N}.$$

Again by using the arguments similar to those given in the proof of Theorem 1, we obtain that

$$\rho(s_n^{i-1}, s_n^i) \le \psi^n(F(\rho(s_0^{i-1}, s_0^i), \varphi(s_0^{i-1}), \varphi(s_0^i))). \tag{15}$$

From (14) and (15), we have

$$
\rho(\xi_n, \xi_{n+1}) \le \psi^n(M), \text{ for all } n \in \mathbb{N},
$$

where $M = \sum_{i=1}^{N} F(\rho(s_0^{i-1}, s_0^i), \varphi(s_0^{i-1}), \varphi(s_0^i))$. Again, by using arguments similar to those given in the proof of Theorem 1, we obtain

$$
\varphi(\xi^*) = 0 \text{ and } \rho(\xi^*, T\xi^*) = \rho(A, B).
$$

Hence, *ξ*∗ is the unique *ϕ*-best proximity point of *T*.

### **5. Conclusions**

In this paper, we defined the *ϕ*-best proximity point and the *αp*-admissible weak (*F*, *ϕ*)-contraction, and we proved some *ϕ*-best proximity point results in the setting of *M*-metric spaces. As an application, we derived *ϕ*-fixed point results for some self-mappings. We also introduced the notion of a *G*−proximal graphic contraction and provided an application to graph theory in the setting of *M*-complete *M*-metric spaces. Some examples were also presented to illustrate the novelty of the results proved herein.

**Author Contributions:** Writing—original draft, S.F.; Writing—review and editing, E.K. and M.A. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research received no external funding.

**Acknowledgments:** Authors are thankful to the reviewers for their suggestions to improve the presentation of this paper.

**Conflicts of Interest:** The authors declare no conflict of interest.

© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## *Article* **Admissible Hybrid** *Z***-Contractions in** *b***-Metric Spaces**

### **Ioan Cristian Chifu <sup>1</sup> and Erdal Karapınar 2,\***


Received: 25 November 2019; Accepted: 19 December 2019; Published: 21 December 2019

**Abstract:** In this manuscript, we introduce a new notion, the admissible hybrid Z-contraction, that unifies several nonlinear and linear contractions in the set-up of a *b*-metric space. In our main theorem, we discuss the existence and uniqueness of fixed points of such mappings in the context of a complete *b*-metric space. The given result not only unifies several existing results in the literature, but also extends and improves them. We express some consequences of our main theorem by using variant examples of simulation functions. As applications, the well-posedness and the Ulam–Hyers stability of the fixed point problem are also studied.

**Keywords:** admissible spaces; hybrid contraction; interpolative contraction; *b*-metric spaces; simulation function

**MSC:** 47H10; 54H25

### **1. Introduction**

Metric fixed point theory lies at the intersection of two disciplines: (nonlinear) functional analysis and topology. From the fixed point researchers' perspective, the first application of metric fixed point theory was to the solution of differential equations; from the point of view of researchers in applied mathematics, it is a tool for solving a first-order ordinary differential equation with an initial value. Indeed, fixed point theory appears first in the paper of Liouville in 1837 and later in the paper of Picard in 1890, where the method of successive approximations was used to investigate the existence of the solution. In 1922, Banach reported the first metric fixed point result, in the setting of a complete normed space that would later be called a Banach space. Examined carefully, Banach's theorem is an abstraction of the method of successive approximations. The formulation of Banach's celebrated fixed point theorem in the setting of a complete metric space was reported by Caccioppoli in 1931; this can be regarded as the first generalization of Banach's theorem. Since then, a huge number of papers on generalizations and extensions of Banach's fixed point theorem have been published.

Extensions and generalizations of Banach's theorem proceed along two lines: changing the structure (the abstract space) or changing the conditions on the considered mappings. Immediate examples of such new structures are partial metric spaces, quasi-metric spaces, semi-metric spaces, *b*-metric spaces, etc. Among all of these, we shall consider the *b*-metric, which is one of the most interesting and most general notions of distance. The notion of *b*-metric was discovered independently by several authors, such as Bourbaki [1], Bakhtin [2], and Czerwik [3], in different periods of time. Roughly speaking, a *b*-metric space is derived from a metric space by relaxing the triangle inequality.

As mentioned before, the theory has been advanced by reporting several new fixed point results obtained by changing the conditions on the given mappings. As a result, the literature now contains so many different types of metric fixed point results that it has become cluttered and disordered. To overcome this problem, one needs to consider new theorems that cover several different results. One of the successful results in this direction was given in [4], where admissible mappings were introduced to combine different structures. Other interesting results were given in [5], in which the notion of the simulation function was defined to combine many distinct contractions. The notion of hybrid contractions can also be considered a result of this trend: in two recent papers [6,7], the authors introduced a new type of contraction, namely the *admissible hybrid contraction*, in order to unify several linear, nonlinear and interpolative contractions in the set-up of complete metric and *b*-metric spaces.

One of the main aims of this paper is to unify several existing results in the literature by combining these interesting notions: admissible mappings, simulation functions, and hybrid contractions. Besides unifying the results, we express them in the most general form: in the setting of a complete *b*-metric space. Next, we consider applications of our obtained results; in particular, we consider the well-posedness and the Ulam–Hyers stability of the fixed point problem. We give some consequences and indicate how one can derive further consequences from the main theorem of the paper. In the next section, we give some basic notions and results to provide a self-contained, easily readable paper.

### **2. Preliminaries**

In this section, we shall collect the necessary notations, notions, and results for the sake of the completeness of the paper. We first express the definition of the *b*-metric, as follows.

**Definition 1** (See, e.g., Bourbaki [1], Bakhtin [2], and Czerwik [3])**.** *Let X be a nonempty set and let s* ≥ 1 *be a given real number. A functional d* : *X* × *X* → [0, ∞) *is said to be a b-metric with constant s, if*

*(b1) d*(*x*, *y*) = 0 *if and only if x* = *y;*
*(b2) d*(*x*, *y*) = *d*(*y*, *x*) *for all x*, *y* ∈ *X;*
*(b3)*

$$d(x, z) \le s \left[ d(x, y) + d(y, z) \right], \text{ for all } x, y, z \in X.$$

*In this case, the triple* (*X*, *d*,*s*) *is called a b-metric space with constant s.*

It is evident that the notions of *b*-metric and standard metric coincide in case of *s* = 1. For more details on *b*-metric spaces, see, e.g., [8–11] and corresponding references therein.

In what follows, we express the following immediate interesting examples of *b*-metric space to indicate the richness of this abstract space.

**Example 1.** *Let S be any set that has more than three elements. Suppose that S*<sub>1</sub>, *S*<sub>2</sub> *are subsets of S such that S*<sub>1</sub> ∩ *S*<sub>2</sub> = ∅ *and S* = *S*<sub>1</sub> ∪ *S*<sub>2</sub>*. Let s* ≥ 1 *be arbitrary. Consider the functional d* : *S* × *S* → [0, ∞)*, which is defined by:*

$$d(a,b) := \begin{cases} 0, & a = b, \\ 2s, & a, b \in S_1, \\ 1, & \text{otherwise}. \end{cases}$$

*It is obvious that* (*S*, *d*, *s*) *forms a b-metric space.*

Another simple, but interesting example is the following:

*Axioms* **2020**, *9*, 2

**Example 2.** *Let X* <sup>=</sup> <sup>R</sup>*. The function d* : <sup>R</sup> <sup>×</sup> <sup>R</sup> <sup>→</sup> [0, <sup>∞</sup>)*, defined as*

$$d(x, y) = |x - y|^2, \tag{1}$$

*is a b-metric on* R *with s* = 2*. Clearly, the first two conditions are satisfied. For the third condition, we have*

$$\begin{array}{rcl} |x - y|^2 &=& |x - z + z - y|^2 \le (|x - z| + |z - y|)^2 = |x - z|^2 + 2|x - z||z - y| + |z - y|^2 \\ &\le & 2[ |x - z|^2 + |z - y|^2], \end{array}$$

*since*

$$2|x - z||z - y| \le |x - z|^2 + |z - y|^2.$$

*Thus,* (*X*, *d*, 2) *is a b-metric space.*
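A quick numerical sanity check of Example 2 (the sampling scheme is our own and does not replace the proof above) confirms both that the ordinary triangle inequality fails for *d*(*x*, *y*) = |*x* − *y*|<sup>2</sup> and that the relaxed inequality with *s* = 2 holds on random samples:

```python
import random

def d(x, y):
    # the b-metric of Example 2 on the reals
    return abs(x - y) ** 2

# the ordinary triangle inequality already fails for x = 0, z = 1, y = 2
assert d(0, 2) > d(0, 1) + d(1, 2)    # 4 > 1 + 1

# the s = 2 relaxed inequality holds on every random sample
random.seed(0)
ok = all(
    d(x, y) <= 2 * (d(x, z) + d(z, y))
    for _ in range(10_000)
    for x, y, z in [tuple(random.uniform(-100, 100) for _ in range(3))]
)
print(ok)  # True
```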

**Example 3.** *Let X* = {*a*, *b*, *c*} *and d* : *X* × *X* → ℝ<sup>+</sup><sub>0</sub> *such that*

$$\begin{array}{l} d(a,b) = d(b,a) = d(a,c) = d(c,a) = 1, \\ d(b,c) = d(c,b) = \alpha \ge 2, \\ d(a,a) = d(b,b) = d(c,c) = 0. \end{array}$$

*Then,*

$$d(x, y) \le \frac{\alpha}{2} \left[ d(x, z) + d(z, y) \right], \text{ for all } x, y, z \in X.$$

*Hence,* (*X*, *d*, $\frac{\alpha}{2}$) *is a b-metric space.*
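Since Example 3 lives on a three-point set, the *b*-metric inequality can be checked exhaustively; the value α = 3 below is a sample choice of ours satisfying α ≥ 2:

```python
from itertools import product

alpha = 3.0  # any alpha >= 2; a sample value for illustration

def d(x, y):
    # the distance of Example 3 on X = {"a", "b", "c"}
    if x == y:
        return 0.0
    if {x, y} == {"b", "c"}:
        return alpha
    return 1.0

s = alpha / 2
ok = all(
    d(x, y) <= s * (d(x, z) + d(z, y))
    for x, y, z in product("abc", repeat=3)
)
print(ok)                                       # True: the constant alpha/2 works
print(d("b", "c") > d("b", "a") + d("a", "c"))  # True: the plain triangle fails
```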

**Example 4** ([8])**.** *Let B be a Banach space with the zero vector* 0<sub>*B*</sub>*. Suppose that P is a cone in B whose interior is non-empty, and that* ⪯ *is the partial order induced by P.*

*For a non-empty set S, we consider a functional δ* : *S* × *S* → *B that fulfills:*

(*M*1) 0<sub>*B*</sub> ⪯ *δ*(*a*, *b*)*;*
(*M*2) *δ*(*a*, *b*) = 0<sub>*B*</sub> *if and only if a* = *b;*
(*M*3) *δ*(*a*, *b*) ⪯ *δ*(*a*, *c*) + *δ*(*c*, *b*)*;*
(*M*4) *δ*(*a*, *b*) = *δ*(*b*, *a*)*,*

*for all a*, *b*, *c* ∈ *S. Then, δ is said to be a cone metric (or Banach-valued metric), and the pair* (*S*, *δ*) *is called a cone metric space (or Banach-valued metric space).*

*Let E be a Banach space and P a normal cone in E with coefficient of normality K. Let D* : *X* × *X* → [0, ∞) *be defined by D*(*x*, *y*) = ||*d*(*x*, *y*)||*, where d* : *X* × *X* → *E is a cone metric. Then,* (*X*, *D*, *K*) *forms a b-metric space.*

**Example 5** (See, e.g., [1])**.** *Let X* = *L<sup>p</sup>*[0, 1] *be the collection of all real functions x*(*t*)*, t* ∈ [0, 1]*, such that* $\int_0^1 |x(t)|^p dt < \infty$*, where* 0 < *p* < 1*. For the function d* : *X* × *X* → ℝ<sup>+</sup><sub>0</sub> *defined by*

$$d(x, y) := \left( \int_0^1 |x(t) - y(t)|^p dt \right)^{1/p}, \text{ for each } x, y \in L^p[0, 1],$$

*the ordered triple* (*X*, *d*, 2<sup>1/*p*</sup>) *forms a b-metric space.*

**Example 6** (See, e.g., [1])**.** *Let p* ∈ (0, 1) *and let*

$$X = l\_p(\mathbb{R}) = \left\{ \mathbf{x} = \{ \mathbf{x}\_n \} \subset \mathbb{R} \text{ such that } \sum\_{n=1}^{\infty} |\mathbf{x}\_n|^p < \infty \right\}.$$

*Define d*(*x*, *y*) : *X* × *X* → [0, ∞) *by*

$$d(x, y) = \left(\sum_{n=1}^{\infty} |x_n - y_n|^p\right)^{1/p}.$$

*Then,* (*X*, *d*, 2<sup>1/*p*</sup>) *is a b-metric space.*
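The constant *s* = 2<sup>1/*p*</sup> of Example 6 can likewise be probed numerically on truncated sequences (the choice *p* = 1/2, the truncation length, and the random vectors are our own illustrative assumptions):

```python
import random

p = 0.5            # a sample exponent in (0, 1)
s = 2 ** (1 / p)   # the claimed b-metric constant; here s = 4

def d(x, y):
    # the l_p distance of Example 6, evaluated on finite (truncated) sequences
    return sum(abs(a - b) ** p for a, b in zip(x, y)) ** (1 / p)

random.seed(1)
ok = all(
    d(x, y) <= s * (d(x, z) + d(z, y))
    for _ in range(2000)
    for x, y, z in [[[random.uniform(-5, 5) for _ in range(8)] for _ in range(3)]]
)
print(ok)  # True
```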

**Definition 2** ([12])**.** *A mapping ϕ* : [0, ∞) → [0, ∞) *is called a comparison function if it is increasing and <sup>ϕ</sup>n*(*t*) <sup>→</sup> <sup>0</sup>*, as n* <sup>→</sup> <sup>∞</sup>*, for any t* <sup>∈</sup> [0, <sup>∞</sup>)*.*

**Example 7.** *Let γ* : [0, ∞) → [0, ∞) *be a function such that*

*γ*(*t*) = *ct* for all *t* ∈ [0, ∞) where *c* ∈ (0, 1).

*Then, γ forms a comparison function.*

**Example 8.** *Let β* : [0, ∞) → [0, ∞) *be a function such that*

$$
\beta(t) = \frac{t}{1+t} \text{ for all } t \in [0, \infty).
$$

*Then, β forms a comparison function.*
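For completeness, a short induction (ours, under the definition above) makes the claim concrete:

$$\beta^{2}(t) = \frac{\beta(t)}{1+\beta(t)} = \frac{t/(1+t)}{1 + t/(1+t)} = \frac{t}{1+2t}, \qquad \beta^{n}(t) = \frac{t}{1+nt} \xrightarrow[n \to \infty]{} 0 \quad \text{for every } t \in [0, \infty),$$

and *β* is clearly increasing, so it is a comparison function. On the other hand, $\sum_{n} \beta^{n}(t) = \sum_{n} \frac{t}{1+nt}$ diverges for *t* > 0, which is consistent with Remark 1 below.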

**Lemma 1** ([10])**.** *If ϕ* : [0, ∞) → [0, ∞) *is a comparison function, then:*


**Definition 3** ([12])**.** *A function ϕ* : [0, ∞) → [0, ∞) *is said to be a c-comparison function if*


**Remark 1.** *Note that γ in Example 7 is also c-comparison function. On the other hand, β in Example 8 is not a c-comparison function.*

It is evident that the *c*-comparison function is not well suited to the setting of *b*-metric spaces, because of the third axiom, the *s*-weighted triangle inequality: in a *b*-metric space, the *b*-metric constant "*s*" should be involved in the analysis. That is why the *b*-comparison function was suggested by Berinde [10]. The idea is simple: in order to investigate fixed point results in the class of *b*-metric spaces, the notion of the *c*-comparison function is extended to that of the *b*-comparison function by involving the *b*-metric constant "*s*".

In what follows, we remind readers about the formal definition of the *b*-comparison function:

**Definition 4** ([10])**.** *Let s* ≥ 1 *be a real number. A mapping ϕ* : [0, ∞) → [0, ∞) *is called a b-comparison function if the following conditions are fulfilled:*


**Example 9.** *Let s* ≥ 1 *be a real number and γ* : [0, ∞) → [0, ∞) *be a function such that*

$$
\gamma(t) = ct \text{ for all } t \in [0, \infty) \text{ where } c \in (0, \frac{1}{s}).
$$

*Then, γ forms a b-comparison function.*
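In this case, one can see directly (a one-line check of ours) why the weighted series required of a *b*-comparison function converges: the condition *c* ∈ (0, 1/*s*) forces *sc* < 1, so

$$\sum_{j=0}^{\infty} s^{j}\gamma^{j}(t) = \sum_{j=0}^{\infty} s^{j} c^{j} t = \frac{t}{1-sc} < \infty.$$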

The following lemma is very important in the proof of our results.

**Lemma 2** ([10])**.** *If ϕ* : [0, ∞) → [0, ∞) *is a b*−*comparison function, then we have the following conclusions:*


**Remark 2.** *Due to Lemma 2, any b-comparison function is a comparison function.*

Let *α* : *X* × *X* → [0, ∞) be a function. We say that a mapping *f* : *X* → *X* is *α*-orbital admissible ([13]) if

$$
\alpha(x, fx) \ge 1 \Rightarrow \alpha(fx, f^2x) \ge 1.
$$

An *α*-orbital admissible mapping *f* is called triangular *α*-orbital admissible ([13]) if

$$
\alpha(x, y) \ge 1 \text{ and } \alpha(y, fy) \ge 1 \Rightarrow \alpha(x, fy) \ge 1, \text{ for every } x, y \in X.
$$

**Lemma 3.** *Let* (*X*, *d*) *be a b-metric space with constant s* ≥ 1*, and let f* : *X* → *X be triangular α-orbital admissible mapping having the property that there exists x*<sup>0</sup> ∈ *X such that α*(*x*0, *f* (*x*0)) ≥ 1. *Then,*

$$
\alpha(x_n, x_m) \ge 1, \quad \text{for all } n, m \in \mathbb{N},
$$

*where the sequence* (*xn*)*n*∈<sup>N</sup> *is defined by xn*+<sup>1</sup> <sup>=</sup> *<sup>f</sup>* (*xn*)*, n* <sup>∈</sup> <sup>N</sup>.

Very recently, an interesting auxiliary function to unify different types of contractions was defined by Khojasteh [5] under the name of *simulation function*.

**Definition 5** ([5])**.** *<sup>A</sup>* simulation function *is a mapping <sup>ζ</sup>* : [0, <sup>∞</sup>) <sup>×</sup> [0, <sup>∞</sup>) <sup>→</sup> <sup>R</sup> *satisfying the following conditions:*

(*ζ*1) *ζ*(*t*, *s*) < *s* − *t for all t*, *s* > 0*;*
(*ζ*2) *if* (*t<sub>n</sub>*)<sub>*n*∈ℕ</sub>, (*s<sub>n</sub>*)<sub>*n*∈ℕ</sub> *are sequences in* (0, ∞) *such that* lim<sub>*n*→∞</sub> *t<sub>n</sub>* = lim<sub>*n*→∞</sub> *s<sub>n</sub>* > 0*, then*

$$\limsup_{n \to \infty} \zeta(t_n, s_n) < 0. \tag{2}$$

In the original definition given in [5], there was an additional but superfluous condition *ζ*(0, 0) = 0. We underline that the function *ζ*(*t*, *s*) := *ks* − *t*, where *k* ∈ [0, 1), for all *s*, *t* ∈ [0, ∞), is an immediate example of a simulation function. For further and more interesting examples, we refer, e.g., to [5,14–18] and the related references therein.
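That *ζ*(*t*, *s*) := *ks* − *t* is indeed a simulation function can be verified directly (the check is ours): for (*ζ*1), since *k* < 1,

$$\zeta(t, s) = ks - t < s - t \quad \text{for all } t, s > 0,$$

and for (*ζ*2), if $t_n \to \ell$ and $s_n \to \ell$ with $\ell > 0$, then

$$\limsup_{n \to \infty} \zeta(t_n, s_n) = k\ell - \ell = (k-1)\ell < 0.$$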

A self-mapping *f* , defined on a metric space (*X*, *d*), is called a Z*-contraction* with respect to *ζ* ∈ Z [5], if

$$\zeta(d(fx, fy), d(x, y)) \ge 0 \qquad \text{for all } x, y \in X. \tag{3}$$

The following is the main result of [5]:


**Theorem 1.** *Every* Z*-contraction on a complete metric space has a unique fixed point.*

As it is mentioned above, the immediate example *ζ*(*t*,*s*) := *ks* − *t* implies the outstanding Banach contraction mapping principle.
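Indeed, with this choice of *ζ*, condition (3) unwinds to the Banach contraction condition:

$$\zeta\big(d(fx, fy), \, d(x, y)\big) = k\, d(x, y) - d(fx, fy) \ge 0 \iff d(fx, fy) \le k\, d(x, y), \qquad k \in [0, 1).$$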

**Definition 6** (cf. [7])**.** *Let* (*X*, *d*) *be a b-metric space with constant s* ≥ 1*. A self-mapping f is called an admissible hybrid contraction if there exist a b-comparison function ϕ* : [0, ∞) → [0, ∞) *and a function α* : *X* × *X* → [0, ∞) *such that*

$$\alpha(x, y) d(fx, fy) \le \varphi \left(\mathcal{R}_f^q(x, y)\right), \tag{4}$$


*where q* ≥ 0 *and λ<sub>i</sub>* ≥ 0*, i* = 1, 2, 3, 4, 5*, such that* $\sum_{i=1}^{5} \lambda_i = 1$*, and*

$$\mathcal{R}_f^q(x, y) = \begin{cases} \left[ N(x, y) \right]^{1/q}, & \text{for } q > 0, \ x, y \in X, \\ P(x, y), & \text{for } q = 0, \ x, y \in X, \end{cases} \tag{5}$$

*where*

$$N(x,y) := \lambda_1 d^q(x,y) + \lambda_2 d^q(x, fx) + \lambda_3 d^q(y, fy) + \lambda_4 \left(\frac{d(y, fy)(1 + d(x, fx))}{1 + d(x, y)}\right)^q + \lambda_5 \left(\frac{d(y, fx)(1 + d(x, fy))}{1 + d(x, y)}\right)^q$$

*and*

$$P(x,y) := d^{\lambda_1}(x,y) \cdot d^{\lambda_2}(x, fx) \cdot d^{\lambda_3}(y, fy) \cdot \left(\frac{d(y, fy)(1 + d(x, fx))}{1 + d(x, y)}\right)^{\lambda_4} \cdot \left(\frac{d(x, fy) + d(y, fx)}{2s}\right)^{\lambda_5}.$$

**Definition 7.** *Let* (*X*, *d*) *be a b-metric space with constant s* ≥ 1*. A mapping f* : *X* → *X is called an admissible hybrid* Z*-contraction if there exist a b-comparison function ϕ* : [0, ∞) → [0, ∞)*, a function α* : *X* × *X* → [0, ∞) *and ζ* ∈ Z *such that*

$$\zeta\left(\alpha(x, y) d(fx, fy), \, \varphi\left(\mathcal{R}^{q}_{f}(x, y)\right)\right) \geq 0, \text{ for all } x, y \in X, \tag{6}$$

*where* $\mathcal{R}_f^q(x, y)$ *is as above.*

### **3. Existence and Uniqueness Results**

**Theorem 2.** *Let* (*X*, *d*) *be a complete b-metric space with constant s* ≥ 1 *and let f* : *X* → *X be an admissible hybrid* Z*-contraction. Suppose also that:*


*Then, f has a fixed point.*

**Proof.** Let *x*<sub>0</sub> ∈ *X* be an arbitrary point. Starting from here, we recursively construct the sequence (*x<sub>n</sub>*)<sub>*n*∈ℕ</sub>, as *x<sub>n</sub>* = *f*<sup>*n*</sup>(*x*<sub>0</sub>) for all *n* ∈ ℕ. Supposing that there exists some *m* ∈ ℕ such that *f x<sub>m</sub>* = *x<sub>m+1</sub>* = *x<sub>m</sub>*, we find that *x<sub>m</sub>* is a fixed point of *f* and the proof is finished. Thus, we can presume, from now on, that *x<sub>n</sub>* ≠ *x<sub>n−1</sub>* for any *n* ∈ ℕ. Under assumption (*i*), *f* is an admissible hybrid Z-contraction; considering *x* = *x<sub>n−1</sub>* and *y* = *x<sub>n</sub>* in (6), we get

$$\begin{aligned} 0 &\leq \zeta(\alpha(x_{n-1}, x_n) d(f(x_{n-1}), f(x_n)), \varphi(\mathcal{R}_f^q(x_{n-1}, x_n))) \\ &< \varphi(\mathcal{R}_f^q(x_{n-1}, x_n)) - \alpha(x_{n-1}, x_n) d(f(x_{n-1}), f(x_n)), \end{aligned}$$


which is equivalent to

$$\alpha(x_{n-1}, x_n) d(f(x_{n-1}), f(x_n)) \le \varphi(\mathcal{R}_f^q(x_{n-1}, x_n)). \tag{7}$$

Taking into account that *f* is triangular *α*-orbital admissible, from (*ii*) and Lemma 3, we have *α*(*x<sub>n−1</sub>*, *x<sub>n</sub>*) ≥ 1. In this way, the above inequality becomes

$$d(x_n, x_{n+1}) \le \alpha(x_{n-1}, x_n) d(f(x_{n-1}), f(x_n)) < \varphi\left(\mathcal{R}_f^q(x_{n-1}, x_n)\right). \tag{8}$$

**Case 1.** For the case *q* > 0, we have

$$\begin{split} \mathcal{R}_f^q(x_{n-1}, x_n) &= \Big[\lambda_1 d^q(x_{n-1}, x_n) + \lambda_2 d^q(x_{n-1}, f(x_{n-1})) + \lambda_3 d^q(x_n, f(x_n)) \\ &\quad + \lambda_4 \Big(\frac{d(x_n, f(x_n))(1 + d(x_{n-1}, f(x_{n-1})))}{1 + d(x_{n-1}, x_n)}\Big)^q + \lambda_5 \Big(\frac{d(x_n, f(x_{n-1}))(1 + d(x_{n-1}, f(x_n)))}{1 + d(x_{n-1}, x_n)}\Big)^q\Big]^{1/q} \\ &= \Big[\lambda_1 d^q(x_{n-1}, x_n) + \lambda_2 d^q(x_{n-1}, x_n) + \lambda_3 d^q(x_n, x_{n+1}) \\ &\quad + \lambda_4 \Big(\frac{d(x_n, x_{n+1})(1 + d(x_{n-1}, x_n))}{1 + d(x_{n-1}, x_n)}\Big)^q + \lambda_5 \Big(\frac{d(x_n, x_n)(1 + d(x_{n-1}, x_{n+1}))}{1 + d(x_{n-1}, x_n)}\Big)^q\Big]^{1/q} \\ &= \big[\lambda_1 d^q(x_{n-1}, x_n) + \lambda_2 d^q(x_{n-1}, x_n) + \lambda_3 d^q(x_n, x_{n+1}) + \lambda_4 (d(x_n, x_{n+1}))^q\big]^{1/q} \\ &= \big[(\lambda_1 + \lambda_2) d^q(x_{n-1}, x_n) + (\lambda_3 + \lambda_4) d^q(x_n, x_{n+1})\big]^{1/q}, \end{split}$$

and from (8) we get

$$\begin{split} d(x_n, x_{n+1}) &\le \alpha(x_{n-1}, x_n) d(f(x_{n-1}), f(x_n)) < \varphi(\mathcal{R}_f^q(x_{n-1}, x_n)) \\ &= \varphi([(\lambda_1 + \lambda_2) d^q(x_{n-1}, x_n) + (\lambda_3 + \lambda_4) d^q(x_n, x_{n+1})]^{1/q}). \end{split} \tag{9}$$

Suppose that *d*(*xn*−1, *xn*) ≤ *d*(*xn*, *xn*+1). Since *ϕ* is a nondecreasing function, Equation (9) can be estimated as follows:

$$\begin{split} d(x_n, x_{n+1}) &\le \alpha(x_{n-1}, x_n) d(f(x_{n-1}), f(x_n)) \\ &< \varphi\big(\big[(\lambda_1 + \lambda_2) d^q(x_{n-1}, x_n) + (\lambda_3 + \lambda_4) d^q(x_n, x_{n+1})\big]^{1/q}\big) \\ &\quad \text{(due to the assumption } d(x_{n-1}, x_n) \le d(x_n, x_{n+1}) \text{ and the monotonicity of } \varphi) \\ &\le \varphi\big(\big[(\lambda_1 + \lambda_2 + \lambda_3 + \lambda_4) d^q(x_n, x_{n+1})\big]^{1/q}\big) \\ &= \varphi\big((\lambda_1 + \lambda_2 + \lambda_3 + \lambda_4)^{1/q} d(x_n, x_{n+1})\big) \\ &\quad \text{(on account of the fact that } \varphi(t) < t \text{ for } t > 0) \\ &< (\lambda_1 + \lambda_2 + \lambda_3 + \lambda_4)^{1/q} d(x_n, x_{n+1}) \\ &\quad \text{(since } \lambda_1 + \lambda_2 + \lambda_3 + \lambda_4 \le 1) \\ &\le d(x_n, x_{n+1}), \end{split}$$

which is a contradiction. Therefore, for every *<sup>n</sup>* <sup>∈</sup> <sup>N</sup>, we have

$$d(x_n, x_{n+1}) < d(x_{n-1}, x_n),$$

in which case the inequality (8) yields

$$\begin{split} d(\mathbf{x}\_{\mathsf{n}}, \mathbf{x}\_{\mathsf{n}+1}) &\leq \varrho \left( [(\lambda\_1 + \lambda\_2) d^{\mathsf{q}}(\mathbf{x}\_{\mathsf{n}-1}, \mathbf{x}\_{\mathsf{n}}) + (\lambda\_3 + \lambda\_4) d^{\mathsf{q}}(\mathbf{x}\_{\mathsf{n}}, \mathbf{x}\_{\mathsf{n}+1})]^{1/q} \right) \\ &< \varrho \left( (\lambda\_1 + \lambda\_2 + \lambda\_3 + \lambda\_4)^{1/q} d(\mathbf{x}\_{\mathsf{n}-1}, \mathbf{x}\_{\mathsf{n}}) \right) \\ &\leq \varrho \left( d(\mathbf{x}\_{\mathsf{n}-1}, \mathbf{x}\_{\mathsf{n}}) \right) \leq \varrho^2 \left( d(\mathbf{x}\_{\mathsf{n}-2}, \mathbf{x}\_{\mathsf{n}-1}) \right) \leq \dots \leq \varrho^n \left( d(\mathbf{x}\_{\mathsf{0}}, \mathbf{x}\_1) \right). \end{split} \tag{10}$$

Now let *<sup>m</sup>*, *<sup>p</sup>* <sup>∈</sup> <sup>N</sup> such that *<sup>p</sup>* <sup>&</sup>gt; *<sup>m</sup>*. Using the triangle inequality and (10), we have

$$\begin{split} d(x\_m, x\_p) &\le s\, d(x\_m, x\_{m+1}) + s^2 d(x\_{m+1}, x\_{m+2}) + \dots + s^{p-m} d(x\_{p-1}, x\_p) \\ &\le s\, \varrho^m(d(x\_0, x\_1)) + s^2 \varrho^{m+1}(d(x\_0, x\_1)) + \dots + s^{p-m} \varrho^{p-1}(d(x\_0, x\_1)) \\ &= \frac{1}{s^{m-1}} \Big( s^m \varrho^m(d(x\_0, x\_1)) + s^{m+1} \varrho^{m+1}(d(x\_0, x\_1)) + \dots + s^{p-1} \varrho^{p-1}(d(x\_0, x\_1)) \Big) \\ &= \frac{1}{s^{m-1}} \sum\_{j=m}^{p-1} s^j \varrho^j(d(x\_0, x\_1)). \end{split}$$

Since *ϱ* is a *b*-comparison function, the series ∑<sup>∞</sup><sub>*j*=0</sub> *s<sup>j</sup>ϱ<sup>j</sup>*(*d*(*x*0, *x*1)) is convergent. Denoting by S*<sup>n</sup>* = ∑<sup>*n*</sup><sub>*j*=0</sub> *s<sup>j</sup>ϱ<sup>j</sup>*(*d*(*x*0, *x*1)), the above inequality becomes

$$d(x\_m, x\_p) \le \frac{1}{s^{m-1}} \left( \mathcal{S}\_{p-1} - \mathcal{S}\_{m-1} \right),$$

and as *m*, *p* → ∞ we get

$$d(x\_m, x\_p) \to 0, \tag{11}$$

which tells us that (*xn*)*n*∈<sup>N</sup> is a Cauchy sequence on a complete *<sup>b</sup>*-metric space, so there exists *<sup>x</sup>*<sup>∗</sup> <sup>∈</sup> *<sup>X</sup>* such that

$$\lim\_{n \to \infty} d(x\_n, x^\*) = 0. \tag{12}$$
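As a numerical aside (not part of the proof), the tail estimate behind (10) and (11) can be observed for a concrete choice of constants; the values *s* = 2, *d*(*x*0, *x*1) = 1, and the *b*-comparison function *ϱ*(*t*) = *t*/(2*s*) below are illustrative assumptions, not taken from the theorem:

```python
# Illustrative sketch: b-metric constant s = 2 and b-comparison function
# rho(t) = t / (2 s), so that s^j * rho^j(t) = t / 2^j is summable.
s = 2.0
d0 = 1.0  # assumed initial gap d(x0, x1)

def rho(t, k=1):
    """k-fold composition rho^k(t)."""
    for _ in range(k):
        t = t / (2.0 * s)
    return t

def S(n):
    """Partial sum S_n = sum_{j=0}^{n} s^j rho^j(d0)."""
    return sum(s**j * rho(d0, j) for j in range(n + 1))

def tail(m, p):
    """Cauchy tail bound (S_{p-1} - S_{m-1}) / s^{m-1} from the proof."""
    return (S(p - 1) - S(m - 1)) / s**(m - 1)

print(S(50))        # partial sums approach the geometric limit 2.0
print(tail(5, 50))  # the bound on d(x_m, x_p) is already small ...
print(tail(20, 50)) # ... and shrinks rapidly as m grows, so (x_n) is Cauchy
```

Any *ϱ* with ∑ *s<sup>j</sup>ϱ<sup>j</sup>*(*t*) < ∞ would serve equally well; the geometric choice above just makes the limit explicit.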

We shall prove that *x*∗ is a fixed point of *f*. If *f* is continuous (due to assumption (*iii*)), then

$$d(x^\*, f(x^\*)) = \lim\_{n \to \infty} d(x\_n, f(x\_n)) = \lim\_{n \to \infty} d(x\_n, x\_{n+1}) = 0,$$

so we get that *f* (*x*∗) = *x*∗, that is, *x*∗ is a fixed point of *f* .

Suppose now that *f*<sup>2</sup> is continuous. It follows that *f*<sup>2</sup>(*x*∗) = lim*n*→∞ *f*<sup>2</sup>(*xn*) = *x*∗. We shall prove that *f*(*x*∗) = *x*∗. Supposing, on the contrary, that *f*(*x*∗) ≠ *x*∗, we have from (6)

$$\begin{aligned} 0 &\le \zeta\big(\alpha(f(x^\*), x^\*)\, d(f^2(x^\*), f(x^\*)),\; \varrho\big(\mathcal{R}\_f^q(f(x^\*), x^\*)\big)\big) \\ &< \varrho\big(\mathcal{R}\_f^q(f(x^\*), x^\*)\big) - \alpha(f(x^\*), x^\*)\, d(f^2(x^\*), f(x^\*)), \end{aligned}$$

*Axioms* **2020**, *9*, 2

which implies

$$\begin{split} d(x^\*, f(x^\*)) &= d(f^2(x^\*), f(x^\*)) \le \alpha(f(x^\*), x^\*)\, d(f^2(x^\*), f(x^\*)) \\ &\le \varrho\big(\mathcal{R}\_f^q(f(x^\*), x^\*)\big) < \mathcal{R}\_f^q(f(x^\*), x^\*) \quad \text{(since } \varrho(t) < t\text{)} \\ &= \Big[\lambda\_1 d^q(f(x^\*), x^\*) + \lambda\_2 d^q(x^\*, f(x^\*)) + \lambda\_3 d^q(f(x^\*), f^2(x^\*)) \\ &\quad + \lambda\_4 \Big(\frac{d(x^\*, f(x^\*))(1 + d(f(x^\*), f^2(x^\*)))}{1 + d(x^\*, f(x^\*))}\Big)^q + \lambda\_5 \Big(\frac{d(f(x^\*), f(x^\*))(1 + d(x^\*, f^2(x^\*)))}{1 + d(x^\*, f(x^\*))}\Big)^q\Big]^{\frac{1}{q}} \\ &= \Big[\lambda\_1 d^q(f(x^\*), x^\*) + \lambda\_2 d^q(x^\*, f(x^\*)) + \lambda\_3 d^q(f(x^\*), x^\*) \\ &\quad + \lambda\_4 \Big(\frac{d(x^\*, f(x^\*))(1 + d(f(x^\*), x^\*))}{1 + d(x^\*, f(x^\*))}\Big)^q\Big]^{\frac{1}{q}} \quad \text{(using } f^2(x^\*) = x^\*\text{)} \\ &= \big[(\lambda\_1 + \lambda\_2 + \lambda\_3 + \lambda\_4)\, d^q(x^\*, f(x^\*))\big]^{\frac{1}{q}} \\ &= (\lambda\_1 + \lambda\_2 + \lambda\_3 + \lambda\_4)^{\frac{1}{q}}\, d(x^\*, f(x^\*)) \le d(x^\*, f(x^\*)). \end{split}$$

This is a contradiction, so that *f*(*x*∗) = *x*∗.

**Case 2.** For the case *q* = 0, if we consider *x* = *xn*−1 and *y* = *xn*, we have

$$\begin{split} \mathcal{R}\_f^q(x\_{n-1}, x\_n) &= d^{\lambda\_1}(x\_{n-1}, x\_n) \cdot d^{\lambda\_2}(x\_{n-1}, f(x\_{n-1})) \cdot d^{\lambda\_3}(x\_n, f(x\_n)) \\ &\quad \cdot \Big[\frac{d(x\_n, f(x\_n))(1 + d(x\_{n-1}, f(x\_{n-1})))}{1 + d(x\_{n-1}, x\_n)}\Big]^{\lambda\_4} \cdot \Big[\frac{d(x\_{n-1}, f(x\_n)) + d(x\_n, f(x\_{n-1}))}{2s}\Big]^{\lambda\_5} \\ &= d^{\lambda\_1}(x\_{n-1}, x\_n) \cdot d^{\lambda\_2}(x\_{n-1}, x\_n) \cdot d^{\lambda\_3}(x\_n, x\_{n+1}) \\ &\quad \cdot \Big[\frac{d(x\_n, x\_{n+1})(1 + d(x\_{n-1}, x\_n))}{1 + d(x\_{n-1}, x\_n)}\Big]^{\lambda\_4} \cdot \Big[\frac{d(x\_{n-1}, x\_{n+1}) + d(x\_n, x\_n)}{2s}\Big]^{\lambda\_5} \\ &= d^{\lambda\_1}(x\_{n-1}, x\_n) \cdot d^{\lambda\_2}(x\_{n-1}, x\_n) \cdot d^{\lambda\_3}(x\_n, x\_{n+1}) \cdot d^{\lambda\_4}(x\_n, x\_{n+1}) \cdot \Big[\frac{d(x\_{n-1}, x\_{n+1})}{2s}\Big]^{\lambda\_5}. \end{split}$$

Employing the triangle inequality, we have

$$\begin{split} \mathcal{R}\_f^q(x\_{n-1}, x\_n) &\le d^{\lambda\_1}(x\_{n-1}, x\_n) \cdot d^{\lambda\_2}(x\_{n-1}, x\_n) \cdot d^{\lambda\_3}(x\_n, x\_{n+1}) \cdot d^{\lambda\_4}(x\_n, x\_{n+1}) \\ &\quad \cdot \left[ \frac{d(x\_{n-1}, x\_n) + d(x\_n, x\_{n+1})}{2} \right]^{\lambda\_5}. \end{split} \tag{13}$$

Using the following inequality

$$\left(\frac{a+b}{2}\right)^k \le \frac{a^k + b^k}{2}, \text{ for all } a, b > 0 \text{ and } k \ge 1,$$

(13) becomes

$$\begin{array}{rcl} \mathcal{R}\_f^q(\mathbf{x}\_{n-1}, \mathbf{x}\_n) & \leq d^{\lambda\_1}(\mathbf{x}\_{n-1}, \mathbf{x}\_n) \cdot d^{\lambda\_2}(\mathbf{x}\_{n-1}, \mathbf{x}\_n) \cdot d^{\lambda\_3}(\mathbf{x}\_n, \mathbf{x}\_{n+1}) \\ & \cdot d^{\lambda\_4}(\mathbf{x}\_n, \mathbf{x}\_{n+1}) \cdot \frac{d^{\lambda\_5}(\mathbf{x}\_{n-1}, \mathbf{x}\_n) + d^{\lambda\_5}(\mathbf{x}\_n, \mathbf{x}\_{n+1})}{2}, \end{array}$$

and, from (6),

$$\begin{aligned} 0 &\le \zeta\big(\alpha(x\_{n-1}, x\_n)\, d(f(x\_{n-1}), f(x\_n)),\; \varrho\big(\mathcal{R}\_f^q(x\_{n-1}, x\_n)\big)\big) \\ &< \varrho\big(\mathcal{R}\_f^q(x\_{n-1}, x\_n)\big) - \alpha(x\_{n-1}, x\_n)\, d(f(x\_{n-1}), f(x\_n)), \end{aligned}$$

which yields that

$$d(x\_n, x\_{n+1}) \le \alpha(x\_{n-1}, x\_n)\, d(f(x\_{n-1}), f(x\_n)) \le \varrho\big(\mathcal{R}\_f^q(x\_{n-1}, x\_n)\big). \tag{14}$$

Supposing that *d*(*xn*−1, *xn*) ≤ *d*(*xn*, *xn*+1), since *ϱ* is a nondecreasing function, we have

$$d(x\_n, x\_{n+1}) < d^{\lambda\_1 + \lambda\_2 + \lambda\_3 + \lambda\_4 + \lambda\_5}(x\_n, x\_{n+1}) = d(x\_n, x\_{n+1}),$$

which is a contradiction. Then, from (14), inductively, we obtain

$$d(x\_n, x\_{n+1}) \le \varrho\big(\mathcal{R}\_f^q(x\_{n-1}, x\_n)\big) \le \varrho\big(d(x\_{n-1}, x\_n)\big) \le \dots \le \varrho^n(d(x\_0, x\_1)). \tag{15}$$

By using the same arguments as in the case *q* > 0, we easily obtain that (*xn*)*n*∈N is a Cauchy sequence in a complete *b*-metric space, and thus there exists *x*∗ such that lim*n*→∞ *xn* = *x*∗.
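The convexity inequality applied after (13) holds for exponents *k* ≥ 1 because *t* ↦ *t<sup>k</sup>* is convex; a quick numerical sanity check (the sample values below are arbitrary) is:

```python
import random

def lhs(a, b, k):
    return ((a + b) / 2) ** k

def rhs(a, b, k):
    return (a ** k + b ** k) / 2

# ((a+b)/2)^k <= (a^k + b^k)/2 for a, b > 0 and k >= 1 (convexity of t^k).
random.seed(0)
ok = all(
    lhs(a, b, k) <= rhs(a, b, k) * (1 + 1e-9) + 1e-12
    for a, b, k in (
        (random.uniform(0.01, 10), random.uniform(0.01, 10), random.uniform(1, 5))
        for _ in range(10_000)
    )
)
print(ok)  # True

# For 0 < k < 1 the inequality reverses (t^k is then concave):
print(lhs(1, 4, 0.5), rhs(1, 4, 0.5))  # about 1.58 vs 1.5
```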

We claim that *x*∗ is a fixed point of *f* .

Under the assumption that *f* is continuous, we have

$$d(x^\*, f(x^\*)) = \lim\_{n \to \infty} d(x\_n, f(x\_n)) = \lim\_{n \to \infty} d(x\_n, x\_{n+1}) = 0,$$

and together with the uniqueness of the limit, *f*(*x*∗) = *x*∗. In addition, if *f*<sup>2</sup> is continuous, then, as in **Case 1**, we have that *f*<sup>2</sup>(*x*∗) = *x*∗; suppose that *f*(*x*∗) ≠ *x*∗. Then, we get

$$\begin{aligned} 0 &\le \zeta\big(\alpha(f(x^\*), x^\*)\, d(f^2(x^\*), f(x^\*)),\; \varrho\big(\mathcal{R}\_f^q(f^2(x^\*), f(x^\*))\big)\big) \\ &< \varrho\big(\mathcal{R}\_f^q(f^2(x^\*), f(x^\*))\big) - \alpha(f(x^\*), x^\*)\, d(f^2(x^\*), f(x^\*)), \end{aligned}$$

which implies

$$\begin{aligned} d(x^\*, f(x^\*)) &= d(f^2(x^\*), f(x^\*)) \\ &\le \alpha(f(x^\*), x^\*)\, d(f^2(x^\*), f(x^\*)) \\ &\le \varrho\big(\mathcal{R}\_f^q(f^2(x^\*), f(x^\*))\big) = \varrho\big(\mathcal{R}\_f^q(x^\*, f(x^\*))\big), \end{aligned}$$

where

$$\begin{split} \mathcal{R}\_f^q(x^\*, f(x^\*)) &= d^{\lambda\_1 + \lambda\_2 + \lambda\_3}(x^\*, f(x^\*)) \cdot \left[ \frac{d(x^\*, f(x^\*))(1 + d(x^\*, f(x^\*)))}{1 + d(x^\*, f(x^\*))} \right]^{\lambda\_4} \cdot \left[ \frac{d(x^\*, x^\*) + d(f(x^\*), f(x^\*))}{2s} \right]^{\lambda\_5} \\ &= d^{\lambda\_1 + \lambda\_2 + \lambda\_3 + \lambda\_4}(x^\*, f(x^\*)) < d(x^\*, f(x^\*)). \end{split}$$

Hence, we have

$$d(x^\*, f(x^\*)) \le \varrho\big(\mathcal{R}\_f^q(x^\*, f(x^\*))\big) \le \varrho\big(d(x^\*, f(x^\*))\big) < d(x^\*, f(x^\*)),$$

which is a contradiction.

**Theorem 3.** *In the hypothesis of Theorem 2, if we assume supplementary that*

$$
\alpha(x^\*, y^\*) \ge 1,
$$

*for any x*∗, *y*<sup>∗</sup> ∈ *Fixf*(*X*)*, then the fixed point of f is unique.*

**Proof.** Let *y*∗ ∈ *X* be another fixed point of *f*, and suppose that *x*∗ ≠ *y*∗. In the case that *q* > 0, using (6), we have:

$$\begin{aligned} 0 &\le \zeta\big(\alpha(x^\*, y^\*)\, d(f(x^\*), f(y^\*)),\; \varrho\big(\mathcal{R}\_f^q(x^\*, y^\*)\big)\big) \\ &< \varrho\big(\mathcal{R}\_f^q(x^\*, y^\*)\big) - \alpha(x^\*, y^\*)\, d(f(x^\*), f(y^\*)), \end{aligned}$$


which yields that

$$\begin{split} d(x^\*, y^\*) &\le \alpha(x^\*, y^\*)\, d(f(x^\*), f(y^\*)) \le \varrho\big(\mathcal{R}\_f^q(x^\*, y^\*)\big) < \mathcal{R}\_f^q(x^\*, y^\*) \\ &= \Big[\lambda\_1 d^q(x^\*, y^\*) + \lambda\_2 d^q(x^\*, f(x^\*)) + \lambda\_3 d^q(y^\*, f(y^\*)) + \lambda\_4 \Big(\frac{d(y^\*, f(y^\*))(1 + d(x^\*, f(x^\*)))}{1 + d(x^\*, y^\*)}\Big)^q \\ &\quad + \lambda\_5 \Big(\frac{d(y^\*, f(x^\*))(1 + d(x^\*, f(y^\*)))}{1 + d(x^\*, y^\*)}\Big)^q\Big]^{\frac{1}{q}} \\ &= \big[(\lambda\_1 + \lambda\_5)\, d^q(x^\*, y^\*)\big]^{\frac{1}{q}} \quad \text{(since } f(x^\*) = x^\* \text{ and } f(y^\*) = y^\*\text{)} \\ &= (\lambda\_1 + \lambda\_5)^{\frac{1}{q}}\, d(x^\*, y^\*) < d(x^\*, y^\*), \end{split}$$

which is a contradiction.

In the case that *q* = 0, if we suppose that *x*∗ ≠ *y*∗, then we obtain that 0 < *d*(*x*∗, *y*∗) < 0, which is a contradiction.

Thus, *x*∗ = *y*∗, so that *f* possesses exactly one fixed point.

**Example 10.** *Let X* = [0, 2]*, d* : *X* × *X* → [0, ∞)*, d*(*x*, *y*) = |*x* − *y*|<sup>2</sup> *for all x*, *y* ∈ *X. Consider the mapping f* : *X* → *X defined by*

$$f(x) = \begin{cases} 1/2, & \text{if } x \in [0, 1] \\ x/2, & \text{if } x \in (1, 2], \end{cases}$$

*together with a function α* : *X* × *X* → [0, ∞)*. Then:*

*1. f*<sup>2</sup>(*x*) = 1/2 *is continuous, and for x* = 1/2 ∈ *Fixf*<sup>2</sup>(*X*)*, we have α*(*f*(1/2), 1/2) = *α*(1/2, 1/2) = 2 > 1*;*

*2.*

$$
\zeta \left( \alpha(x, y) d(fx, fy), \varrho \left( \mathcal{R}\_f^q(x, y) \right) \right) \ge 0.
$$

*If x*, *y* ∈ [0, 1]*, then f x* = *f y* = 1/2 *and hence d*(*f x*, *f y*) = 0*. We have*

$$\zeta\left(0, \varrho\left(\mathcal{R}\_f^q(x, y)\right)\right) = \frac{1}{2}\varrho\left(\mathcal{R}\_f^q(x, y)\right) \ge 0, \text{ for all } x, y \in [0, 1],$$

*and hence*

$$\zeta \left( \alpha(x, y) d(fx, fy), \varrho \left( \mathcal{R}\_f^q(x, y) \right) \right) \ge 0, \text{ for all } x, y \in [0, 1].$$

*If x* = 0 *and y* = 2*, then, considering q* = 2 *and λ*1 = *λ*2 = *λ*3 = *λ*4 = *λ*5 = 1/5*, we have*

$$\begin{split} \zeta\big(\alpha(0, 2) d(f(0), f(2)), \varrho\big(\mathcal{R}\_f^q(0, 2)\big)\big) &= \frac{1}{2}\varrho\big(\mathcal{R}\_f^q(0, 2)\big) - \alpha(0, 2) d(f(0), f(2)) \\ &= \frac{1}{4}\Big[\frac{1}{5}d^2(0, 2) + \frac{1}{5}d^2(0, f(0)) + \frac{1}{5}d^2(2, f(2)) \\ &\quad + \frac{1}{5}\Big(\frac{d(2, f(2))(1 + d(0, f(0)))}{1 + d(0, 2)}\Big)^2 + \frac{1}{5}\Big(\frac{d(2, f(0))(1 + d(0, f(2)))}{1 + d(0, 2)}\Big)^2\Big]^{\frac{1}{2}} \\ &\quad - \alpha(0, 2)\, d\Big(\frac{1}{2}, 1\Big) \\ &= \frac{1}{4}\Big[\frac{1}{5}\Big(16 + \frac{1}{16} + 1 + \frac{1}{16} + \frac{81}{100}\Big)\Big]^{\frac{1}{2}} - \frac{1}{4} \\ &= \frac{1}{4}\Big(\frac{3587}{1000}\Big)^{\frac{1}{2}} - \frac{1}{4} \ge 0. \end{split}$$

*Hence,*

$$\zeta \left( \alpha(0, 2) d(f(0), f(2)), \varrho \left( \mathcal{R}\_f^q(0, 2) \right) \right) \ge 0.$$
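The arithmetic of this example can be verified numerically. The specific choices ζ(*t*, *r*) = *r*/2 − *t*, *ϱ*(*t*) = *t*/2, and *α*(0, 2) = 1 in the sketch below are assumptions inferred from the displayed computation, not stated explicitly in the text:

```python
# Numerical check of Example 10 with q = 2 and lambda_1 = ... = lambda_5 = 1/5.
# zeta(t, r) = r/2 - t, rho(t) = t/2, and alpha(0, 2) = 1 are assumed values
# inferred from the computation in the text.
def d(x, y):
    return abs(x - y) ** 2  # the b-metric of the example

def f(x):
    return 0.5 if x <= 1 else x / 2

def R(x, y, q=2, lam=0.2):
    t = (lam * d(x, y) ** q
         + lam * d(x, f(x)) ** q
         + lam * d(y, f(y)) ** q
         + lam * (d(y, f(y)) * (1 + d(x, f(x))) / (1 + d(x, y))) ** q
         + lam * (d(y, f(x)) * (1 + d(x, f(y))) / (1 + d(x, y))) ** q)
    return t ** (1 / q)

print(R(0, 2) ** 2)  # close to 3.587 = (1/5)(16 + 1/16 + 1 + 1/16 + 81/100)

alpha_02 = 1.0  # assumed value of alpha(0, 2)
zeta_val = 0.5 * (R(0, 2) / 2) - alpha_02 * d(f(0), f(2))
print(zeta_val >= 0)  # True: the contraction condition holds at (0, 2)
```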


*In all other cases, α*(*x*, *y*) = 0 *and*

$$\zeta\left(0, \varrho\left(\mathcal{R}\_f^q(x, y)\right)\right) = \frac{1}{2}\varrho\left(\mathcal{R}\_f^q(x, y)\right) \ge 0.$$

*Thus, we obtain that f is an admissible hybrid* Z*-contraction which satisfies the assumptions of Theorem 2, and then x* = 1/2 *is the fixed point of f.*
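As a further illustration, the fixed point of Example 10 can also be reached by direct Picard iteration of *f* from any starting point (a numerical sketch; the mapping below is the *f* of the example):

```python
def f(x):
    # the mapping of Example 10
    return 0.5 if x <= 1 else x / 2

x = 2.0
for _ in range(10):
    x = f(x)  # 2.0 -> 1.0 -> 0.5 -> 0.5 -> ...
print(x)  # 0.5
```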

**Remark 3.** *If, in the above example, we consider the mapping*

$$f(x) = \begin{cases} 1/3, & \text{if } x \in [0, 1] \\ x/2, & \text{if } x \in (1, 2], \end{cases}$$

*then f is not continuous, but f*<sup>2</sup>(*x*) = 1/3 *is, and for x* = 1/3 ∈ *Fixf*<sup>2</sup>(*X*)*, we have α*(*f*(1/3), 1/3) = *α*(1/3, 1/3) = 2 > 1*.*

Let Φ be the collection of all auxiliary functions *φ* : [0, ∞) →[0, ∞) which are continuous and *φ*(*t*) = 0 if and only if *t* = 0.

**Theorem 4.** *Let* (*X*, *d*) *be a complete b-metric space with constant s* ≥ 1*, f* : *X* → *X and α* : *X* × *X* → [0, ∞)*. Suppose that there exist two functions φ*1, *φ*<sup>2</sup> ∈ Φ, *with φ*1(*t*) < *t* ≤ *φ*2(*t*), *for all t* > 0*, such that*

$$
\varphi\_2\left(\alpha(x, y)\, d(fx, fy)\right) \le \varphi\_1\left(\mathcal{R}\_f^q(x, y)\right). \tag{16}
$$

*Furthermore, we suppose that:*


*Then, f has a unique fixed point.*

**Proof.** Let *ζ* (*t*,*s*) = *φ*<sup>1</sup> (*s*) − *φ*<sup>2</sup> (*t*). According to Example 10, if *φ*1, *φ*<sup>2</sup> ∈ Φ have the property *φ*1(*t*) < *t* ≤ *φ*2(*t*) for all *t* > 0, then *ζ* ∈ Z. Thus, the desired results follow from Theorems 2 and 3.

**Theorem 5.** *Let* (*X*, *d*) *be a complete b-metric space with constant s* ≥ 1 *, f* : *X* → *X and α* : *X* × *X* → [0, ∞)*. Suppose that there exists a function φ* ∈ Φ*, such that*

$$\alpha(x, y)\, d(fx, fy) \le \mathcal{R}\_f^q(x, y) - \varphi\left(\mathcal{R}\_f^q(x, y)\right). \tag{17}$$

*Furthermore, we suppose that*


*Then, f has a unique fixed point.*

**Proof.** Let *ζ*(*t*, *s*) = *s* − *φ*(*s*) − *t*. According to Example 10, *ζ* ∈ Z. Thus, the desired results follow from Theorems 2 and 3.

**Theorem 6.** *Let* (*X*, *d*) *be a complete b-metric space with constant s* ≥ 1*, f* : *X* → *X and α* : *X* × *X* → [0, ∞)*. Suppose that there exists a function μ* : [0, ∞) → [0, ∞) *such that* $\int\_0^{\varepsilon} \mu(u) du$ *exists and* $\int\_0^{\varepsilon} \mu(u) du > \varepsilon$ *for each ε* > 0*, with the property that*

$$
\alpha(x, y)\, d(fx, fy) \le \int\_0^{\mathcal{R}^q\_f(x, y)} \mu(u) du. \tag{18}
$$

*Furthermore, we suppose that*


*Then, f has a unique fixed point.*

**Proof.** Let $\zeta(t, s) = s - \int\_0^t \mu(u) du$. According to Example 10, *ζ* ∈ Z. Thus, the desired results follow from Theorems 2 and 3.

Let Φ be the class of auxiliary functions *φ* : [0, ∞) → [0, ∞) that are continuous and satisfy *φ*(*t*) = 0 if and only if *t* = 0.

The following example is derived from [5,14,15].

**Example 11.** *(See, e.g., [5,14,15]) Let φi* ∈ Φ *for i* = 1, 2, 3 *and σj* : [0, ∞) × [0, ∞) → R *for j* = 1, 2, 3, 4, 5, 6*. Each of the functions defined below is an example of a simulation function:*


**Remark 4.** *By using the examples above, we may derive more consequences of our results.*

### **4. Well Posedness and Ulam–Hyers Stability**

Considered as a type of data dependence, the notion of Ulam stability was initiated by Ulam [19,20] and developed by Hyers [21], Rassias [22], and others. In this section, we investigate general Ulam-type stability in the sense of a fixed point problem, as well as the well-posedness of the fixed point problem.

Suppose that *f* : *X* → *X* is a self-mapping on a *b*-metric space (*X*, *d*) with the constant *s* > 1 and let us consider the following fixed point problem:

$$
\mathfrak{x} = f(\mathfrak{x}).\tag{19}
$$

**Definition 8.** *The fixed point problem (19) is well-posed if*


**Theorem 7.** *Let* (*X*, *d*) *be a complete b-metric space with constant s* > 1*. Suppose that all the hypotheses of Theorem 3 hold, and q* > 0*. Additionally, we suppose that for any sequence* (*xn*)*n*∈N *with d*(*xn*, *f*(*xn*)) → 0 *as n* → ∞*, we have α*(*xn*, *x*∗) ≥ 1 *for all n* ∈ N*, where x*∗ ∈ *Fixf*(*X*)*. If λ*1 + *λ*5 < 1/*γ*2(*q*)*, where γ*(*q*) = max{1, 2<sup>*q*−1</sup>*s<sup>q</sup>*}*, then the fixed point problem (19) is well-posed.*

**Proof.** Taking into account the supplementary condition, since *Fixf*(*X*) = {*x*∗}, using (6), we have

$$\begin{aligned} 0 &\leq \zeta \left( \alpha \left( \mathbf{x}\_{\boldsymbol{n}}, \mathbf{x}^\* \right) d \left( f \left( \mathbf{x}\_{\boldsymbol{n}} \right), f \left( \mathbf{x}^\* \right) \right), \varrho \left( \mathcal{R}\_f^q \left( \mathbf{x}\_{\boldsymbol{n}}, \mathbf{x}^\* \right) \right) \right) \\ &< \varrho \left( \mathcal{R}\_f^q \left( \mathbf{x}\_{\boldsymbol{n}}, \mathbf{x}^\* \right) \right) - \alpha \left( \mathbf{x}\_{\boldsymbol{n}}, \mathbf{x}^\* \right) d \left( f \left( \mathbf{x}\_{\boldsymbol{n}} \right), f \left( \mathbf{x}^\* \right) \right) . \end{aligned}$$

We have

$$\begin{split} d(x\_n, x^\*) &\le s\, d(x\_n, f(x\_n)) + s\, d(f(x\_n), f(x^\*)) \\ &\le s\, d(x\_n, f(x\_n)) + s\, \alpha(x\_n, x^\*)\, d(f(x\_n), f(x^\*)) \\ &\le s\, d(x\_n, f(x\_n)) + s\, \varrho\big(\mathcal{R}\_f^q(x\_n, x^\*)\big) < s\, d(x\_n, f(x\_n)) + s\, \mathcal{R}\_f^q(x\_n, x^\*) \\ &\le s\Big[\lambda\_1 d^q(x\_n, x^\*) + \lambda\_2 d^q(x\_n, f(x\_n)) + \lambda\_3 d^q(x^\*, f(x^\*)) + \lambda\_4 \Big(\frac{d(x^\*, f(x^\*))(1 + d(x\_n, f(x\_n)))}{1 + d(x\_n, x^\*)}\Big)^q \\ &\quad + \lambda\_5 \Big(\frac{d(x^\*, f(x\_n))(1 + d(x\_n, f(x^\*)))}{1 + d(x\_n, x^\*)}\Big)^q\Big]^{\frac{1}{q}} + s\, d(x\_n, f(x\_n)) \\ &= s\big[\lambda\_1 d^q(x\_n, x^\*) + \lambda\_2 d^q(x\_n, f(x\_n)) + \lambda\_5 d^q(x^\*, f(x\_n))\big]^{\frac{1}{q}} + s\, d(x\_n, f(x\_n)) \\ &\le s\big[\lambda\_1 d^q(x\_n, x^\*) + \lambda\_2 d^q(x\_n, f(x\_n)) + s^q \lambda\_5 \big(d(x^\*, x\_n) + d(x\_n, f(x\_n))\big)^q\big]^{\frac{1}{q}} + s\, d(x\_n, f(x\_n)) \\ &\le s\big[\lambda\_1 d^q(x\_n, x^\*) + \lambda\_2 d^q(x\_n, f(x\_n)) + 2^{q-1} s^q \lambda\_5 d^q(x^\*, x\_n) + 2^{q-1} s^q \lambda\_5 d^q(x\_n, f(x\_n))\big]^{\frac{1}{q}} \\ &\quad + s\, d(x\_n, f(x\_n)). \end{split}$$

In this way, we obtain

$$\begin{array}{rcl}d^{q}(\mathbf{x}\_{n},\mathbf{x}^{\*}) & \leq 2^{q-1}\mathbf{s}^{q}\lambda\_{1}d^{q}(\mathbf{x}\_{n},\mathbf{x}^{\*}) + 2^{q-1}\mathbf{s}^{q}\lambda\_{2}d^{q}(\mathbf{x}\_{n},f(\mathbf{x}\_{n})) + 2^{2q-2}\mathbf{s}^{2q}\lambda\_{5}d^{q}(\mathbf{x}^{\*},\mathbf{x}\_{n}) + \\ & + 2^{2q-2}\mathbf{s}^{2q}\lambda\_{5}d^{q}(\mathbf{x}\_{n},f(\mathbf{x}\_{n})) + 2^{q-1}\mathbf{s}^{q}d^{q}(\mathbf{x}\_{n},f(\mathbf{x}\_{n})) \end{array}$$

or

$$\left(1 - 2^{q-1}s^q\lambda\_1 - 2^{2q-2}s^{2q}\lambda\_5\right)d^q(x\_n, x^\*) \le 2^{q-1}s^q\left(1 + \lambda\_2 + 2^{q-1}s^q\lambda\_5\right)d^q(x\_n, f(x\_n)).$$

From here, we obtain

$$d^q(x\_n, x^\*) \le \frac{(1 + \lambda\_2 + \gamma(q)\lambda\_5)\gamma(q)}{1 - \gamma(q)\lambda\_1 - \gamma^2(q)\lambda\_5}\, d^q(x\_n, f(x\_n)).$$

Letting *<sup>n</sup>* <sup>→</sup> <sup>∞</sup> in the above inequality and keeping in mind that lim*n*→<sup>∞</sup> *<sup>d</sup>*(*xn*, *<sup>f</sup>* (*xn*)) = 0, we obtain

$$\lim\_{n \to \infty} d(x\_n, x^\*) = 0,$$

that is, the fixed point Equation (19) is well-posed.

**Definition 9.** *The fixed point problem (19) is called generalized Ulam–Hyers stable if and only if there exists an increasing function ρ* : [0, ∞) → [0, ∞)*, continuous at* 0 *with ρ*(0) = 0*, such that for each ε* > 0 *and for each y*∗ ∈ *X satisfying the inequality*

$$d(y^\*, f(y^\*)) \le \varepsilon, \tag{20}$$

*there exists a solution x*∗ *of the fixed point problem (19) such that*

$$d(y^\*, \mathbf{x}^\*) \le \rho(\varepsilon).$$


*If there exists <sup>c</sup>* <sup>&</sup>gt; <sup>0</sup> *such that <sup>ρ</sup>*(*t*) :<sup>=</sup> *<sup>c</sup>* · *t, for each <sup>t</sup>* <sup>∈</sup> <sup>R</sup>+*, then the fixed point problem (19) is said to be Ulam–Hyers stable.*

Before stating our theorem, we underline that Ulam–Hyers stability is potentially applicable to the study of switched dynamics; see, e.g., [23] and the related references therein.

**Theorem 8.** *Let* (*X*, *d*) *be a complete b-metric space with constant s* > 1*. Suppose that all the hypotheses of Theorem 3 hold, and q* > 0*. Additionally, we suppose that α*(*y*∗, *x*∗) ≥ 1 *for all y*∗ ∈ *X verifying (20) and x*∗ ∈ *Fixf*(*X*)*. If λ*1 + *λ*5 < 1/*γ*2(*q*)*, where γ*(*q*) = max{1, 2<sup>*q*−1</sup>*s<sup>q</sup>*}*, then the fixed point problem (19) is Ulam–Hyers stable.*

**Proof.** Using (6),

$$\begin{aligned} 0 &\le \zeta\big(\alpha(y^\*, x^\*)\, d(f(y^\*), f(x^\*)),\; \varrho\big(\mathcal{R}\_f^q(y^\*, x^\*)\big)\big) \\ &< \varrho\big(\mathcal{R}\_f^q(y^\*, x^\*)\big) - \alpha(y^\*, x^\*)\, d(f(y^\*), f(x^\*)). \end{aligned}$$

Then

$$\begin{split} d(y^\*, x^\*) &= d(y^\*, f(x^\*)) \le s\, d(y^\*, f(y^\*)) + s\, d(f(y^\*), f(x^\*)) \\ &\le s\, \alpha(y^\*, x^\*)\, d(f(y^\*), f(x^\*)) + s\, d(y^\*, f(y^\*)) \\ &\le s\, \varrho\big(\mathcal{R}\_f^q(y^\*, x^\*)\big) + s\varepsilon < s\, \mathcal{R}\_f^q(y^\*, x^\*) + s\varepsilon \\ &\le s\Big[\lambda\_1 d^q(y^\*, x^\*) + \lambda\_2 d^q(y^\*, f(y^\*)) + \lambda\_3 d^q(x^\*, f(x^\*)) + \lambda\_4 \Big(\frac{d(x^\*, f(x^\*))(1 + d(y^\*, f(y^\*)))}{1 + d(y^\*, x^\*)}\Big)^q \\ &\quad + \lambda\_5 \Big(\frac{d(x^\*, f(y^\*))(1 + d(y^\*, f(x^\*)))}{1 + d(y^\*, x^\*)}\Big)^q\Big]^{\frac{1}{q}} + s\varepsilon \\ &\le s\big[\lambda\_1 d^q(y^\*, x^\*) + \lambda\_2 \varepsilon^q + \lambda\_5 d^q(x^\*, f(y^\*))\big]^{\frac{1}{q}} + s\varepsilon \\ &\le s\big[\lambda\_1 d^q(y^\*, x^\*) + \lambda\_2 \varepsilon^q + s^q \lambda\_5 \big(d(y^\*, x^\*) + d(y^\*, f(y^\*))\big)^q\big]^{\frac{1}{q}} + s\varepsilon \\ &\le s\big[\lambda\_1 d^q(y^\*, x^\*) + \lambda\_2 \varepsilon^q + 2^{q-1} s^q \lambda\_5 d^q(y^\*, x^\*) + 2^{q-1} s^q \lambda\_5 \varepsilon^q\big]^{\frac{1}{q}} + s\varepsilon. \end{split}$$

In this way, we obtain

$$d^q(y^\*, x^\*) \le 2^{q-1}s^q\lambda\_1 d^q(y^\*, x^\*) + 2^{q-1}s^q\lambda\_2 \varepsilon^q + 2^{2q-2}s^{2q}\lambda\_5 d^q(y^\*, x^\*) + 2^{2q-2}s^{2q}\lambda\_5 \varepsilon^q + 2^{q-1}s^q \varepsilon^q,$$

or

$$\left(1 - 2^{q-1}s^q\lambda\_1 - 2^{2q-2}s^{2q}\lambda\_5\right)d^q(y^\*, x^\*) \le 2^{q-1}s^q\left(1 + \lambda\_2 + 2^{q-1}s^q\lambda\_5\right)\varepsilon^q.$$

From here, we obtain

$$d^q(y^\*, \mathbf{x}^\*) \le \frac{(1 + \lambda\_2 + \gamma(q)\lambda\_5)\gamma(q)}{1 - \gamma(q)\lambda\_1 - \gamma^2(q)\lambda\_5} \varepsilon^q.$$

Hence,

$$d^q(y^\*, x^\*) \le c\, \varepsilon^q,$$

$$\text{where } c = \frac{(1 + \lambda\_2 + \gamma(q)\lambda\_5)\gamma(q)}{1 - \gamma(q)\lambda\_1 - \gamma^2(q)\lambda\_5}, \text{ for any } q > 0 \text{ and } \lambda\_1, \lambda\_5 \in [0, 1) \text{ such that } \lambda\_1 + \lambda\_5 < \frac{1}{\gamma^2(q)}. \quad \square$$
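For concreteness, the Ulam–Hyers constant of Theorem 8 can be evaluated numerically; the values of *q*, *s*, and the *λ* parameters in the sketch below are illustrative assumptions, chosen only so that the hypothesis *λ*1 + *λ*5 < 1/*γ*2(*q*) holds:

```python
# Sketch: evaluating the Ulam-Hyers constant c of Theorem 8.
# All parameter values below are illustrative assumptions.
def gamma(q, s):
    return max(1.0, 2 ** (q - 1) * s ** q)

def uh_constant(q, s, lam1, lam2, lam5):
    g = gamma(q, s)
    if lam1 + lam5 >= 1 / g ** 2:
        raise ValueError("hypothesis lambda_1 + lambda_5 < 1/gamma^2(q) violated")
    return (1 + lam2 + g * lam5) * g / (1 - g * lam1 - g ** 2 * lam5)

q, s = 2.0, 1.5
lam1, lam2, lam5 = 0.01, 0.1, 0.02  # lam1 + lam5 = 0.03 < 1/gamma^2(q) ~ 0.049
c = uh_constant(q, s, lam1, lam2, lam5)
print(gamma(q, s))  # 4.5
print(c)            # a positive constant, giving d(y*, x*) <= c^(1/q) * eps
```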

### **5. Conclusions**

In this paper, we unify, extend, and improve several existing fixed point theorems by introducing the notion of an admissible hybrid Z-contraction in the setting of complete *b*-metric spaces. Consequently, all of the presented results remain valid in the setting of a complete metric space by letting *s* = 1. On the other hand, in unifying several existing results in the literature, we have used admissible mappings, simulation functions, and hybrid contractions. We underline the fact that, by setting the admissible function *α* in a proper way, one can obtain several new consequences of the existence results in the setting of (i) a standard metric space, (ii) a metric space endowed with a partial order, and (iii) cyclic contractions. One can easily get these consequences by using the techniques in [4]. Furthermore, for the

different examples of simulation functions (as we showed in Theorems 5 and 6), one can get more new corollaries. Lastly, by regarding hybrid contraction approaches, one can get several more consequences by following the techniques in [21,24–26].

Besides expressing a more generalized result in the setting of a complete *b*-metric space, we also present some applications for our obtained results. In particular, we shall consider the well-posedness and the Ulam–Hyers stability of the fixed point problem. We note that the word 'hybrid' has been used in different ways, in particular, in applicable nonlinear fields, see, e.g., [27,28].

**Author Contributions:** Writing—original draft, I.C.C.; Writing—review and editing, E.K. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research received no external funding.

**Acknowledgments:** The authors thank the anonymous referees for their remarkable comments, suggestions, and ideas that help to improve this paper.

**Conflicts of Interest:** The authors declare no conflicts of interest.

### **References**


© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

