*Article* **Global Bounds for the Generalized Jensen Functional with Applications**

**Slavko Simić 1,\* and Bandar Bin-Mohsin 2**


**Abstract:** In this article we give sharp global bounds for the generalized Jensen functional $J_n(g, h; \mathbf{p}, \mathbf{x})$. In particular, exact bounds are determined for the generalized power mean in terms of the class of Stolarsky means. As a consequence, we obtain the best possible global converses of quotients and differences of the generalized arithmetic, geometric and harmonic means.

**Keywords:** Jensen functional; A-G-H inequalities; global bounds; power means; convex functions

**MSC:** 26D07(26D15)

## **1. Introduction**

Recall that the Jensen functional $J_n(h; \mathbf{p}, \mathbf{x})$ is defined on an interval $I \subseteq \mathbb{R}$ by

$$J_n(h; \mathbf{p}, \mathbf{x}) := \sum_1^n p_i h(x_i) - h\Big(\sum_1^n p_i x_i\Big),$$

where $h : I \to \mathbb{R}$, $\mathbf{x} = (x_1, x_2, \cdots, x_n) \in I^n$ and $\mathbf{p} = \{p_i\}_1^n$ is a positive weight sequence. If $h$ is a convex function on $I$ then the inequality

$$0 \le J_n(h; \mathbf{p}, \mathbf{x})$$

holds for each $\mathbf{x} \in I^n$ and any positive weight sequence $\mathbf{p}$.

If *h* is a concave function on *I* then the above inequality is reversed. These inequalities play a fundamental role in many parts of mathematical analysis and applications. For example, the well-known A−G−H inequality, Hölder's inequality, the Ky Fan inequality, etc., are proven with the help of Jensen's inequality (cf. [1–6]).
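As a quick numerical illustration (ours, not from the paper), the functional $J_n$ is non-negative for the convex choice $h(x) = x^2$; the helper name `jensen_functional` and the sample data are our own.

```python
# Illustrative sketch (not from the paper): J_n(h; p, x) for positive weights
# summing to 1, and a check that it is non-negative for convex h(x) = x^2.
def jensen_functional(h, p, x):
    mean = sum(pi * xi for pi, xi in zip(p, x))
    return sum(pi * h(xi) for pi, xi in zip(p, x)) - h(mean)

p = [0.2, 0.5, 0.3]          # positive weights summing to 1
x = [1.0, 2.0, 4.0]
J = jensen_functional(lambda t: t * t, p, x)
print(J)  # 1.24 up to rounding: non-negative, as Jensen's inequality asserts
```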

Our aim in this paper is to find the simplest constant *C* such that

$$0 \le J_n(h; \mathbf{p}, \mathbf{x}) \le C,$$

for any choice of $\mathbf{p}$, $\mathbf{x}$, and thus make this inequality symmetrical.

This will be done by assuming that $\mathbf{x} \in [a, b]^n \subset I^n$, and we shall find some *global bounds* for the generalized Jensen functional

$$J_n(g, h; \mathbf{p}, \mathbf{x}) := g\Big(\sum_1^n p_i h(x_i)\Big) - g\Big(h\Big(\sum_1^n p_i x_i\Big)\Big),$$

that is, bounds not depending on **p** or **x** but only on $a$, $b$ and the functions $g$ and $h$. In this sense, a typical result is given by the first part of Theorem 1 (below).

**Citation:** Simić, S.; Bin-Mohsin, B. Global Bounds for the Generalized Jensen Functional with Applications. *Symmetry* **2021**, *13*, 2105. https://doi.org/10.3390/sym13112105

Academic Editor: Nicusor Minculete

Received: 17 October 2021 Accepted: 30 October 2021 Published: 6 November 2021

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

For $\mathbf{x} \in [a, b]^n \subset I^n$, let $h : I \to J$ be convex and $g : J \to \mathbb{R}$ be an increasing function. Then

$$0 \le J_n(g, h; \mathbf{p}, \mathbf{x}) \le \max_p \big[g(ph(a) + (1 - p)h(b)) - g(h(pa + (1 - p)b))\big].$$

Our global bounds will be presented entirely in terms of elementary means. Recall that a *mean* is a map $M : \mathbb{R}^+ \times \mathbb{R}^+ \to \mathbb{R}^+$ with the property

$$
\min(x, y) \le M(x, y) \le \max(x, y),
$$

for each $x, y \in \mathbb{R}^+$.

In the sequel we shall use the class of so-called Stolarsky (or extended) two-parametric mean values, defined for positive $x$, $y$ with $x \neq y$ by the following:

$$E_{r,s}(x,y) = \begin{cases} \left(\dfrac{r(x^s - y^s)}{s(x^r - y^r)}\right)^{1/(s-r)}, & rs(r-s) \neq 0,\\[2pt] \exp\left(-\dfrac{1}{s} + \dfrac{x^s \log x - y^s \log y}{x^s - y^s}\right), & r = s \neq 0,\\[2pt] \left(\dfrac{x^s - y^s}{s(\log x - \log y)}\right)^{1/s}, & s \neq 0,\ r = 0,\\[2pt] \sqrt{xy}, & r = s = 0,\\[2pt] x, & x = y > 0. \end{cases}$$

In this form it was introduced by Kenneth Stolarsky in [7]. Most of the classical two-variable means are special cases of the class *E*. For example,

$$A(x, y) = E_{1,2}(x, y) = \frac{x + y}{2}$$

is the arithmetic mean;

$$G(x,y) = E_{0,0}(x,y) = E_{-r,r}(x,y) = \sqrt{xy}$$

is the geometric mean;

$$L(x, y) = E\_{0, 1}(x, y) = \frac{x - y}{\log x - \log y}$$

is the logarithmic mean;

$$I(x, y) = E_{1,1}(x, y) = (x^x / y^y)^{\frac{1}{x - y}} / e$$

is the identric mean, etc.

More generally, the *r*-th power mean

$$A\_r(x,y) = \left(\frac{x^r + y^r}{2}\right)^{1/r}$$

is equal to *Er*,2*r*(*x*, *y*).
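The case analysis above can be sketched in code; the following is an illustrative implementation (ours, not from the paper), with the helper name `stolarsky` and the tolerances being our choices. It recovers the special values $A$, $G$, $L$, $I$ listed above.

```python
import math

# Illustrative sketch of the Stolarsky mean E_{r,s}(x, y), following the
# case analysis above. Parameter symmetry E_{r,s} = E_{s,r} handles s = 0.
def stolarsky(r, s, x, y):
    if x == y:
        return x
    if r == s == 0:
        return math.sqrt(x * y)
    if r == 0:  # s != 0
        return ((x**s - y**s) / (s * (math.log(x) - math.log(y)))) ** (1.0 / s)
    if s == 0:  # use symmetry in the parameters
        return stolarsky(0, r, x, y)
    if r == s:
        return math.exp(-1.0 / s + (x**s * math.log(x) - y**s * math.log(y)) / (x**s - y**s))
    return ((r * (x**s - y**s)) / (s * (x**r - y**r))) ** (1.0 / (s - r))

a, b = 2.0, 5.0
A = (a + b) / 2
G = math.sqrt(a * b)
L = (a - b) / (math.log(a) - math.log(b))
I = (a**a / b**b) ** (1 / (a - b)) / math.e
print(abs(stolarsky(1, 2, a, b) - A) < 1e-9)   # arithmetic mean
print(abs(stolarsky(0, 0, a, b) - G) < 1e-9)   # geometric mean
print(abs(stolarsky(0, 1, a, b) - L) < 1e-9)   # logarithmic mean
print(abs(stolarsky(1, 1, a, b) - I) < 1e-9)   # identric mean
```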

Using the class of Stolarsky means enables our results to be presented in a condensed and applicable way. For example, we give some results regarding A−G−H inequalities, where

$$\mathcal{A}(\mathbf{p}, \mathbf{x}) := \sum\_{1}^{n} p\_{i} \mathbf{x}\_{i};$$

$$\mathcal{G}(\mathbf{p}, \mathbf{x}) := \prod\_{1}^{n} \mathbf{x}\_{i}^{p\_{i}};$$

$$\mathcal{H}(\mathbf{p}, \mathbf{x}) := \Big(\sum_1^n p_i / x_i\Big)^{-1}$$

are the generalized arithmetic, geometric and harmonic means, respectively. Let $\mathbf{x} \in [a, b]^n$, $0 < a < b$. Then

$$0 \le \mathcal{A}(\mathbf{p}, \mathbf{x}) - \mathcal{H}(\mathbf{p}, \mathbf{x}) \le 2(A(a, b) - G(a, b));$$

$$0 \le \mathcal{A}(\mathbf{p}, \mathbf{x}) - \mathcal{G}(\mathbf{p}, \mathbf{x}) \le A(a, b) - L(a, b) + L(a, b) \log \frac{L(a, b)}{G(a, b)};$$

$$1 \le \frac{\mathcal{A}(\mathbf{p}, \mathbf{x})}{\mathcal{H}(\mathbf{p}, \mathbf{x})} \le \left(\frac{A(a, b)}{G(a, b)}\right)^2;$$

$$1 \le \frac{\mathcal{G}(\mathbf{p}, \mathbf{x})}{\mathcal{H}(\mathbf{p}, \mathbf{x})} \le \frac{I(a, b)L(a, b)}{G^2(a, b)};$$

$$1 \le \frac{\mathcal{A}(\mathbf{p}, \mathbf{x})}{\mathcal{G}(\mathbf{p}, \mathbf{x})} \le \frac{I(a, b)L(a, b)}{G^2(a, b)},$$

where $A$, $G$, $H$, $L$, $I$ stand for the arithmetic, geometric, harmonic, logarithmic and identric means of the positive numbers $a$ and $b$, respectively.

All bounds above are the best possible.
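These bounds can be probed numerically. The sketch below (ours, not from the paper) checks the first inequality, $0 \le \mathcal{A}(\mathbf{p},\mathbf{x}) - \mathcal{H}(\mathbf{p},\mathbf{x}) \le 2(A(a,b) - G(a,b))$, on random weights and points; the interval $[1, 4]$ and the sample sizes are arbitrary choices.

```python
import math, random

# Illustrative check (ours): A(p,x) - H(p,x) <= 2(A(a,b) - G(a,b))
# for random normalized weights p and points x in [a, b] = [1, 4].
random.seed(0)
a, b = 1.0, 4.0
bound = 2 * ((a + b) / 2 - math.sqrt(a * b))   # = 1.0 here
for _ in range(1000):
    n = random.randint(2, 6)
    w = [random.random() for _ in range(n)]
    p = [wi / sum(w) for wi in w]
    x = [random.uniform(a, b) for _ in range(n)]
    A_px = sum(pi * xi for pi, xi in zip(p, x))
    H_px = 1.0 / sum(pi / xi for pi, xi in zip(p, x))
    assert 0 <= A_px - H_px <= bound + 1e-12
print("bound holds on 1000 random samples")
```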

#### **2. Results and Proofs**

Our results concerning global bounds for the generalized Jensen functional are given in the following two assertions.

**Theorem 1.** *1. For continuous functions $g$, $h$, let $h : I \to J$ be convex and $g : J \to \mathbb{R}$ be an increasing function, or $h : I \to J$ be concave and $g : J \to \mathbb{R}$ be a decreasing function. Then*

$$0 \le J_n(g, h; \mathbf{p}, \mathbf{x}) \le \max_p \big[g(ph(a) + (1 - p)h(b)) - g(h(pa + (1 - p)b))\big].$$

*2. Let $h : I \to J$ be convex and $g : J \to \mathbb{R}$ be a decreasing function, or $h : I \to J$ be concave and $g : J \to \mathbb{R}$ be an increasing function. Then*

$$0 \le -J_n(g, h; \mathbf{p}, \mathbf{x}) \le \max_p \big[g(h(pa + (1 - p)b)) - g(ph(a) + (1 - p)h(b))\big].$$

**Proof.** We shall prove only part 1; the proof of part 2 is analogous.

If $h$ is a convex function on $I$, we have $\sum_1^n p_i h(x_i) \ge h(\sum_1^n p_i x_i)$. Since $g$ is an increasing function, it follows that

$$J_n(g, h; \mathbf{p}, \mathbf{x}) = g\Big(\sum_1^n p_i h(x_i)\Big) - g\Big(h\Big(\sum_1^n p_i x_i\Big)\Big) \ge 0.$$

Similarly, if $h$ is a concave function on $I$, we have $\sum_1^n p_i h(x_i) \le h(\sum_1^n p_i x_i)$. Since $g$ is a decreasing function, it follows again that

$$J_n(g, h; \mathbf{p}, \mathbf{x}) \ge 0.$$

On the other hand, since $a \le x_i \le b$, there exist non-negative numbers $\lambda_i, \mu_i$ with $\lambda_i + \mu_i = 1$ such that $x_i = \lambda_i a + \mu_i b$, $i = 1, 2, \ldots, n$.

Hence,

$$\begin{aligned} J_n(g, h; \mathbf{p}, \mathbf{x}) &= g\Big(\sum_1^n p_i h(x_i)\Big) - g\Big(h\Big(\sum_1^n p_i x_i\Big)\Big) = g\Big(\sum_1^n p_i h(\lambda_i a + \mu_i b)\Big) - g\Big(h\Big(\sum_1^n p_i (\lambda_i a + \mu_i b)\Big)\Big)\\ &\le g\Big(\sum_1^n p_i (\lambda_i h(a) + \mu_i h(b))\Big) - g\Big(h\Big(a \sum_1^n p_i \lambda_i + b \sum_1^n p_i \mu_i\Big)\Big)\\ &= g(ph(a) + (1-p)h(b)) - g(h(pa + (1-p)b)) := F(p; a, b) \le \max_p F(p; a, b), \end{aligned}$$

where we denoted $\sum_1^n p_i \lambda_i := p \in [0, 1]$.

The second case with concave *h* and decreasing *g* leads to the same result.

Note that the function *F*(*p*; *a*, *b*) is continuous in *p* and non-negative with *F*(0; *a*, *b*) = *F*(1; *a*, *b*) = 0. Therefore, max*<sup>p</sup> F*(*p*; *a*, *b*) exists. Another and sometimes difficult problem is to evaluate its exact value (see Open Problem below).

For this reason, we give an estimate of $J_n(g, h; \mathbf{p}, \mathbf{x})$ with a unique maximum which can be easily calculated. This method can be applied to the second part of Theorem 1 as well.
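For concreteness, $\max_p F(p; a, b)$ can be approximated by a simple grid search. In the sketch below (ours, not from the paper), the choices $g(x) = \log x$, $h(x) = x^2$ on $[a, b] = [1, 2]$ are ours, picked to satisfy part 1 of Theorem 1.

```python
import math

# Grid-search sketch (ours) for max_p F(p; a, b) with the illustrative
# choices g(x) = log(x) (increasing) and h(x) = x**2 (convex) on [1, 2].
a, b = 1.0, 2.0
g = math.log
h = lambda t: t * t

def F(p):
    return g(p * h(a) + (1 - p) * h(b)) - g(h(p * a + (1 - p) * b))

best_p, best_F = max(((k / 10000, F(k / 10000)) for k in range(10001)),
                     key=lambda t: t[1])
print(F(0.0), F(1.0))   # both 0: F vanishes at the endpoints, as noted below
print(best_F >= 0)      # True; maximum here is attained near p = 2/3
```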

**Theorem 2.** *1. Under the conditions of the first part of Theorem 1, assume further that $g$ is a convex function on $J$. Then*

$$0 \le J_n(g, h; \mathbf{p}, \mathbf{x}) \le \max_p \big[pf(a) + (1 - p)f(b) - f(pa + (1 - p)b)\big],$$

*where f* := *g* ◦ *h.*

*2. Assuming that f* = *g* ◦ *h is a concave function, we have*

$$0 \le J_n(g, h; \mathbf{p}, \mathbf{x}) \le \max_p \big[g(ph(a) + (1 - p)h(b)) - (pf(a) + (1 - p)f(b))\big].$$

*Now, both maxima can be easily determined by standard techniques.*

**Proof.** By the first part of Theorem 1, there exists $p \in [0, 1]$ such that

$$J_n(g, h; \mathbf{p}, \mathbf{x}) \le g(ph(a) + (1 - p)h(b)) - g(h(pa + (1 - p)b)).$$

If additionally *g* is convex on *J*, then

$$g(ph(a) + (1-p)h(b)) \le p(g \circ h)(a) + (1-p)(g \circ h)(b).$$

Hence,

$$J_n(g, h; \mathbf{p}, \mathbf{x}) \le p(g \circ h)(a) + (1 - p)(g \circ h)(b) - (g \circ h)(pa + (1 - p)b)$$

$$= pf(a) + (1 - p)f(b) - f(pa + (1 - p)b) \le \max_p \big[pf(a) + (1 - p)f(b) - f(pa + (1 - p)b)\big].$$

Similarly, if $g \circ h$ is a concave function on $I$, we have

$$g(h(pa + (1-p)b)) = (g \circ h)(pa + (1-p)b) \ge p(g \circ h)(a) + (1-p)(g \circ h)(b),$$

and

$$J_n(g, h; \mathbf{p}, \mathbf{x}) \le \max_p \big[g(ph(a) + (1 - p)h(b)) - (pf(a) + (1 - p)f(b))\big].$$

## **3. Applications**

The results above are the source of a number of interesting inequalities. For instance, taking $g(x) = \log x$ in Theorem 1, we are able to determine converses of the quotient

$$\frac{\sum p_i h(x_i)}{h\left(\sum p_i x_i\right)}.$$

Or, taking $g(x) = h^{-1}(x)$, we can estimate the difference

$$\mathcal{A}_h(\mathbf{p}, \mathbf{x}) - \mathcal{A}(\mathbf{p}, \mathbf{x}),$$

where

$$\mathcal{A}\_h(\mathbf{p}, \mathbf{x}) := h^{-1}(\sum p\_i h(\mathbf{x}\_i)),$$

is the quasi-arithmetic mean and

$$\mathcal{A}_x(\mathbf{p}, \mathbf{x}) = \mathcal{A}(\mathbf{p}, \mathbf{x}) = \sum p_i x_i,$$

is the generalized arithmetic mean.
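As an illustration (ours, not from the paper), the quasi-arithmetic mean with $h(x) = \log x$ recovers the geometric mean $\mathcal{G}(\mathbf{p}, \mathbf{x})$; the helper name `quasi_arithmetic` is our own.

```python
import math

# Illustrative sketch (ours): the quasi-arithmetic mean A_h(p, x) with
# h = log and h^{-1} = exp equals the generalized geometric mean G(p, x).
def quasi_arithmetic(h, h_inv, p, x):
    return h_inv(sum(pi * h(xi) for pi, xi in zip(p, x)))

p = [0.25, 0.25, 0.5]
x = [1.0, 2.0, 8.0]
G = math.prod(xi**pi for pi, xi in zip(p, x))   # = 2**1.75 here
print(abs(quasi_arithmetic(math.log, math.exp, p, x) - G) < 1e-9)  # True
```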

We shall specialize this argument to the class of generalized power means $\mathcal{B}_s(\mathbf{p}, \mathbf{x})$ of order $s \in \mathbb{R}$, where

$$\mathcal{B}\_{\mathbf{s}}(\mathbf{p}, \mathbf{x}) := (\sum p\_i \mathbf{x}\_i^{\mathbf{s}})^{1/s}.$$

Some important particular cases are

$$\mathcal{B}\_{-1}(\mathbf{p}, \mathbf{x}) = \left(\sum\_{1}^{n} p\_i / \mathbf{x}\_i\right)^{-1} := \mathcal{H}(\mathbf{p}, \mathbf{x});$$

$$\mathcal{B}\_0(\mathbf{p}, \mathbf{x}) = \lim\_{s \to 0} \mathcal{B}\_s(\mathbf{p}, \mathbf{x}) = \prod\_{1}^{n} \mathbf{x}\_i^{p\_i} := \mathcal{G}(\mathbf{p}, \mathbf{x});$$

$$\mathcal{B}\_1(\mathbf{p}, \mathbf{x}) = \sum\_{1}^{n} p\_i \mathbf{x}\_i := \mathcal{A}(\mathbf{p}, \mathbf{x}),$$

that is, the generalized harmonic, geometric and arithmetic means, respectively.

It is well known that $\mathcal{B}_s(\mathbf{p}, \mathbf{x})$ is monotone increasing in $s \in \mathbb{R}$ (cf. [4]).

Therefore,

$$\mathcal{H}(\mathbf{p}, \mathbf{x}) \le \mathcal{G}(\mathbf{p}, \mathbf{x}) \le \mathcal{A}(\mathbf{p}, \mathbf{x}),$$

represents the famous A−G−H inequality.
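This monotonicity, and hence the A−G−H chain, is easy to observe numerically; the sketch below is ours (not from the paper), with the helper name `power_mean` and the sample data as our choices.

```python
import math

# Illustrative sketch (ours): B_s(p, x) with the s = 0 limit handled
# as the geometric mean G(p, x); checks monotonicity in s.
def power_mean(s, p, x):
    if s == 0:
        return math.exp(sum(pi * math.log(xi) for pi, xi in zip(p, x)))
    return sum(pi * xi**s for pi, xi in zip(p, x)) ** (1.0 / s)

p = [0.3, 0.7]
x = [1.0, 3.0]
values = [power_mean(s, p, x) for s in (-2, -1, 0, 1, 2)]  # H at s=-1, G at 0, A at 1
print(all(u <= v for u, v in zip(values, values[1:])))     # True: increasing in s
```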

As an application of Theorem 1, we shall estimate the difference B*s*(**p**, *x*) − A(**p**, *x*).

**Theorem 3.** *Let $\mathbf{x} \in [a, b]^n \subset I^n$, $0 < a < b$. Then*

$$0 \le \mathcal{B}_s(\mathbf{p}, \mathbf{x}) - \mathcal{A}(\mathbf{p}, \mathbf{x}) \le \frac{s-1}{s}\big(E_{s,1}(a, b) - E_{s,s-1}^{-1}(1/a, 1/b)\big), \quad s > 1;$$

$$0 \le \mathcal{A}(\mathbf{p}, \mathbf{x}) - \mathcal{B}_s(\mathbf{p}, \mathbf{x}) \le \frac{1-s}{s}\big(E_{1,s}(a, b) - E_{1-s,-s}(a, b)\big), \quad 0 < s < 1;$$

$$0 \le \mathcal{A}(\mathbf{p}, \mathbf{x}) - \mathcal{B}_s(\mathbf{p}, \mathbf{x}) \le \frac{s-1}{s}\big(E_{1-s,-s}(a, b) - E_{1,s}(a, b)\big), \quad s < 0.$$

**Proof.** Let $h(x) = x^s$, $g(x) = x^{1/s}$, $s \in \mathbb{R} \setminus \{0\}$.

If *s* > 1, then *h* is a convex function and *g* is monotone increasing on (0, ∞). Hence, by the first part of Theorem 1, we obtain

$$0 \le \mathcal{B}_s(\mathbf{p}, \mathbf{x}) - \mathcal{A}(\mathbf{p}, \mathbf{x}) \le \max_p \big[(pa^s + (1-p)b^s)^{1/s} - (pa + (1-p)b)\big] := M_s(p_0; a, b).$$

This maximum is easy to calculate: the maximizing point $p_0$ satisfies

$$p\_0 a^s + (1 - p\_0) b^s = \left(\frac{b^s - a^s}{s(b - a)}\right)^{s/(s - 1)} = E\_{s, 1}^s(a, b).$$

Therefore,

$$p\_0 = \frac{b^s - E\_{s,1}^s(a,b)}{b^s - a^s}; \; 1 - p\_0 = \frac{E\_{s,1}^s(a,b) - a^s}{b^s - a^s};$$

and

$$p_0 a + (1 - p_0)b = \frac{ab^s - ba^s}{b^s - a^s} + \frac{b - a}{b^s - a^s} E_{s,1}^s(a, b).$$

Since,

$$E\_{s,1}^s(a,b) = \frac{b^s - a^s}{s(b-a)} E\_{s,1}(a,b),$$

we obtain

$$\begin{aligned} M_s(p_0; a, b) &= (p_0 a^s + (1 - p_0) b^s)^{1/s} - (p_0 a + (1 - p_0) b)\\ &= E_{s,1}(a, b) - \left(\frac{(1/a)^{s-1} - (1/b)^{s-1}}{(1/a)^s - (1/b)^s} + \frac{1}{s} E_{s,1}(a, b)\right)\\ &= \frac{s-1}{s}\big(E_{s,1}(a, b) - E_{s,s-1}^{-1}(1/a, 1/b)\big). \end{aligned}$$

In cases 0 < *s* < 1 and *s* < 0 one should apply the second part of Theorem 1, since then *h* is concave and *g* is increasing in the first case and *h* is convex and *g* is decreasing in the second case. Proceeding as above, the result follows.
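A numerical sanity check of the $s > 1$ case is easy to run; the sketch below is ours (not from the paper), taking $s = 3$ on $[1, 2]$, with the helper `E` implementing only the generic Stolarsky case needed here.

```python
import math, random

# Illustrative check (ours) of Theorem 3 for s = 3 on [a, b] = [1, 2]:
# B_s(p,x) - A(p,x) <= ((s-1)/s) * (E_{s,1}(a,b) - 1/E_{s,s-1}(1/a,1/b)).
def E(r, s, x, y):  # generic Stolarsky case, rs(r - s) != 0 or s = 1
    return ((r * (x**s - y**s)) / (s * (x**r - y**r))) ** (1.0 / (s - r))

s, a, b = 3.0, 1.0, 2.0
rhs = (s - 1) / s * (E(s, 1, a, b) - 1.0 / E(s, s - 1, 1 / a, 1 / b))
random.seed(1)
for _ in range(500):
    w = [random.random() for _ in range(4)]
    p = [wi / sum(w) for wi in w]
    x = [random.uniform(a, b) for _ in range(4)]
    Bs = sum(pi * xi**s for pi, xi in zip(p, x)) ** (1 / s)
    A = sum(pi * xi for pi, xi in zip(p, x))
    assert 0 <= Bs - A <= rhs + 1e-12
print("Theorem 3 bound verified for s = 3 on 500 random samples")
```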

As a consequence, we obtain some converses of the A(**p**, *x*) − G(**p**, *x*) − H(**p**, *x*) inequality.

**Corollary 1.** *Let $\mathbf{x} \in [a, b]^n \subset I^n$, $b > a > 0$. Then*

$$0 \le \mathcal{A}(\mathbf{p}, \mathbf{x}) - \mathcal{H}(\mathbf{p}, \mathbf{x}) \le 2(A(a, b) - G(a, b)).$$

**Proof.** Putting $s = -1$ in Theorem 3, we obtain

$$0 \le \mathcal{A}(\mathbf{p}, \mathbf{x}) - \mathcal{B}_{-1}(\mathbf{p}, \mathbf{x}) = \mathcal{A}(\mathbf{p}, \mathbf{x}) - \mathcal{H}(\mathbf{p}, \mathbf{x}) \le 2(E_{2,1}(a, b) - E_{1,-1}(a, b)) = 2(A(a, b) - G(a, b)).$$

**Corollary 2.** *Let $\mathbf{x} \in [a, b]^n \subset I^n$, $b > a > 0$. Then*

$$0 \le \mathcal{A}(\mathbf{p}, \mathbf{x}) - \mathcal{G}(\mathbf{p}, \mathbf{x}) \le L(a, b) \log \frac{L(a, b) I(a, b)}{G^2(a, b)}.$$

**Proof.** Letting *s* → 0, we have

$$\mathcal{A}(\mathbf{p}, \mathbf{x}) - \mathcal{G}(\mathbf{p}, \mathbf{x}) = \lim_{s \to 0}(\mathcal{A}(\mathbf{p}, \mathbf{x}) - \mathcal{B}_s(\mathbf{p}, \mathbf{x})) \le \lim_{s \to 0} \frac{1-s}{s}\big(E_{1,s}(a, b) - E_{1-s,-s}(a, b)\big).$$

After a somewhat laborious calculation using Taylor series, the result follows.

**Remark 1.** *Estimating the Jensen functional*

$$J_n(e^x; \mathbf{p}, \mathbf{x}) = \sum_1^n p_i e^{x_i} - e^{\sum_1^n p_i x_i}$$

*for $\mathbf{x} \in [a, b]^n \subset \mathbb{R}^n$, and then changing variables $x_i \to \log x_i$; $a \to \log a$, $b \to \log b$, we obtain the same result.*

**Open problem.** *Find the exact upper global bound for*

$$\mathcal{G}(\mathbf{p}, \mathbf{x}) - \mathcal{H}(\mathbf{p}, \mathbf{x}).$$

The next proposition gives global bounds for the quotient of two power means.

**Theorem 4.** *For $s > t$ and $\mathbf{x} \in [a, b]^n \subset \mathbb{R}_+^n$, we have*

$$1 \le \frac{\mathcal{B}_s(\mathbf{p}, \mathbf{x})}{\mathcal{B}_t(\mathbf{p}, \mathbf{x})} \le \frac{E_{s,s-t}(a, b)}{E_{t,t-s}(a, b)}.$$

*Both bounds are the best possible.*

**Proof.** Applying the method from the proof of Theorem 1, we obtain

$$x_i^t = \lambda_i a^t + \mu_i b^t, \quad \lambda_i + \mu_i = 1, \quad i = 1, 2, \ldots, n.$$

In the cases $s > t > 0$ or $s > 0$, $t < 0$, the function $x^{s/t}$ is convex. Hence,

$$x_i^s = (\lambda_i a^t + \mu_i b^t)^{s/t} \le \lambda_i (a^t)^{s/t} + \mu_i (b^t)^{s/t} = \lambda_i a^s + \mu_i b^s,$$

and

$$\frac{\mathcal{B}_s(\mathbf{p},\mathbf{x})}{\mathcal{B}_t(\mathbf{p},\mathbf{x})} = \frac{(\sum_1^n p_i x_i^s)^{1/s}}{(\sum_1^n p_i x_i^t)^{1/t}} \le \frac{(a^s \sum_1^n p_i \lambda_i + b^s \sum_1^n p_i \mu_i)^{1/s}}{(a^t \sum_1^n p_i \lambda_i + b^t \sum_1^n p_i \mu_i)^{1/t}} = \frac{(pa^s + qb^s)^{1/s}}{(pa^t + qb^t)^{1/t}},$$

where we put

$$\sum_1^n p_i \lambda_i := p, \quad \sum_1^n p_i \mu_i := q; \quad p + q = 1.$$

Therefore, it follows that

$$\frac{\mathcal{B}\_s(\mathbf{p}, \mathbf{x})}{\mathcal{B}\_t(\mathbf{p}, \mathbf{x})} \le \max\_p \frac{(pa^s + qb^s)^{1/s}}{(pa^t + qb^t)^{1/t}} = \frac{(p\_0 a^s + q\_0 b^s)^{1/s}}{(p\_0 a^t + q\_0 b^t)^{1/t}}.$$

By standard means we obtain that the maximizing point $p_0$, with $q_0 = 1 - p_0$, satisfies the equation

$$\frac{s\left(p_0 a^s + q_0 b^s\right)}{a^s - b^s} = \frac{t\left(p_0 a^t + q_0 b^t\right)}{a^t - b^t},$$

that is,

$$p\_0 = \frac{1}{s - t} \left( \frac{sb^s}{b^s - a^s} - \frac{tb^t}{b^t - a^t} \right); \ q\_0 = \frac{1}{s - t} \left( \frac{ta^t}{b^t - a^t} - \frac{sa^s}{b^s - a^s} \right).$$

Consequently,

$$p_0 a^t + q_0 b^t = \frac{s}{s-t} \cdot \frac{a^t b^s - a^s b^t}{b^s - a^s} = \frac{s}{s-t} \cdot \frac{(ab)^t(b^{s-t} - a^{s-t})}{b^s - a^s},$$

and

$$p\_0 a^s + q\_0 b^s = \frac{t}{s - t} \frac{a^t b^s - a^s b^t}{b^t - a^t} = \frac{t}{t - s} \frac{(ab)^s (b^{t - s} - a^{t - s})}{b^t - a^t}.$$

Hence,

$$(p\_0 a^t + q\_0 b^t)^{1/t} = G^2(a, b) / E\_{s, s - t}(a, b);$$

$$(p\_0 a^s + q\_0 b^s)^{1/s} = G^2(a, b) / E\_{t, t - s}(a, b),$$

and we finally obtain

$$\max_p \frac{(pa^s + qb^s)^{1/s}}{(pa^t + qb^t)^{1/t}} = \frac{(p_0 a^s + q_0 b^s)^{1/s}}{(p_0 a^t + q_0 b^t)^{1/t}} = \frac{E_{s,s-t}(a, b)}{E_{t,t-s}(a, b)}.$$

In the third case, when both orders are negative, write them as $-t > -s$ with $s > t > 0$; we then have

$$1 \le \frac{\mathcal{B}_{-t}(\mathbf{p}, \mathbf{x})}{\mathcal{B}_{-s}(\mathbf{p}, \mathbf{x})} = \frac{\mathcal{B}_s(\mathbf{p}, 1/\mathbf{x})}{\mathcal{B}_t(\mathbf{p}, 1/\mathbf{x})} \le \frac{E_{s,s-t}(1/a, 1/b)}{E_{t,t-s}(1/a, 1/b)} = \frac{E_{s,s-t}(a, b)}{E_{t,t-s}(a, b)},$$

since

$$E_{u,v}(1/a, 1/b) = E_{u,v}(a, b) / G^2(a, b).$$

It is obvious that 1 is the best possible lower global bound. To prove that $M_{s,t}(a, b) := E_{s,s-t}(a, b)/E_{t,t-s}(a, b)$ is also the best possible upper global bound, denote by $N_{s,t}(a, b)$ an arbitrary upper bound. Then the relation

$$\frac{\mathcal{B}_s(\mathbf{p}, \mathbf{x})}{\mathcal{B}_t(\mathbf{p}, \mathbf{x})} \le N_{s,t}(a, b)$$

holds for any **p** and **x**.

Putting $x_1 = x_2 = \ldots = x_{n-1} = a$, $x_n = b$, $p_n = q_0$, we obtain

$$M\_{s,t}(a,b) = \frac{(p\_0 a^s + q\_0 b^s)^{1/s}}{(p\_0 a^t + q\_0 b^t)^{1/t}} = \frac{\mathcal{B}\_s(\mathbf{p}, \mathbf{x})}{\mathcal{B}\_t(\mathbf{p}, \mathbf{x})} \le N\_{s,t}(a,b),$$

and the proof is complete.
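Theorem 4 can likewise be checked numerically. For $s = 2$, $t = 1$ the bound specializes (since $E_{2,1} = A$ and $E_{1,-1} = G$) to $\mathcal{B}_2/\mathcal{A} \le A(a,b)/G(a,b)$; the sketch below is ours (not from the paper), with the interval $[1, 3]$ and sample sizes as arbitrary choices.

```python
import math, random

# Illustrative check (ours) of Theorem 4 with s = 2, t = 1 on [1, 3]:
# here E_{2,1} = A and E_{1,-1} = G, so B_2/B_1 <= A(a,b)/G(a,b).
random.seed(2)
a, b = 1.0, 3.0
bound = ((a + b) / 2) / math.sqrt(a * b)
for _ in range(500):
    w = [random.random() for _ in range(5)]
    p = [wi / sum(w) for wi in w]
    x = [random.uniform(a, b) for _ in range(5)]
    B2 = math.sqrt(sum(pi * xi**2 for pi, xi in zip(p, x)))
    B1 = sum(pi * xi for pi, xi in zip(p, x))
    assert 1 <= B2 / B1 <= bound + 1e-12
print("quotient bound verified for s = 2, t = 1 on 500 random samples")
```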

Some important consequences of this theorem are given in the following corollaries.

**Corollary 3.** *For s* > 1*, we have*

$$\mathcal{A}(p,x) \le \mathcal{B}\_s(p,x) \le \frac{E\_{s,s-1}(a,b)}{E\_{1,1-s}(a,b)} \mathcal{A}(p,x).$$

**Corollary 4.** *For s* > 0*, we have*

$$
\mathcal{G}(p, x) \le \mathcal{B}\_s(p, x) \le \frac{E\_{s, s}(a, b) E\_{s, 0}(a, b)}{G^2(a, b)} \mathcal{G}(p, x).
$$

**Corollary 5.** *For s* > −1*, we have*

$$\mathcal{H}(p,x) \le \mathcal{B}\_s(p,x) \le \frac{E\_{s+1,s}(a,b)E\_{s+1,1}(a,b)}{G^2(a,b)}\mathcal{H}(p,x).$$

In the last two corollaries we used the identity

$$E_{-u,-v}(a, b) E_{u,v}(a, b) = G^2(a, b).$$

Finally, putting $s = 1$ in Corollary 4 and $s = 0$, $s = 1$ in Corollary 5, since $E_{2,1}(a, b) = A(a, b)$, $E_{1,0}(a, b) = L(a, b)$, $E_{1,1}(a, b) = I(a, b)$, we obtain global converses of the A−G−H inequality.

**Corollary 6.**

$$\mathcal{G}(p,x) \le \mathcal{A}(p,x) \le \frac{L(a,b)I(a,b)}{G^2(a,b)}\mathcal{G}(p,x);$$

$$\mathcal{H}(p,x) \le \mathcal{A}(p,x) \le \left(\frac{A(a,b)}{G(a,b)}\right)^2 \mathcal{H}(p,x);$$

$$\mathcal{H}(p,x) \le \mathcal{G}(p,x) \le \frac{L(a,b)I(a,b)}{G^2(a,b)}\mathcal{H}(p,x).$$

Therefore, a sort of tight symmetry is established for these inequalities.

## **4. Conclusions**

We give a method for two-sided estimation of the generalized Jensen functional $J_n(g, h; \mathbf{p}, \mathbf{x})$, with applications to general means. In particular, sharp converses of the famous A−G−H inequality are obtained. Further investigations can be undertaken in more general settings, i.e., $J_n(f, g, h; \mathbf{p}, \mathbf{x}) := f(\sum_1^n p_i h(x_i)) - g(h(\sum_1^n p_i x_i))$ or even $F(\sum_1^n p_i h(x_i), h(\sum_1^n p_i x_i))$, with properly chosen functions $f$, $g$, $h$ and $F(x, y)$.

**Author Contributions:** Theoretical part, S.S.; numerical part with examples, B.B.-M. All authors have read and agreed to the published version of the manuscript.

**Funding:** Bandar Bin-Mohsin is supported by Researchers Supporting Project number (RSP-2021/158), King Saud University, Riyadh, Saudi Arabia.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Not applicable.

**Acknowledgments:** The authors are grateful to the referees for their valuable comments.

**Conflicts of Interest:** The authors declare no conflict of interest.

## **References**

