A Themed Issue on Mathematical Inequalities, Analytic Combinatorics and Related Topics in Honor of Professor Feng Qi

Edited by Wei-Shih Du, Ravi P. Agarwal, Erdal Karapinar, Marko Kostić and Jian Cao

mdpi.com/journal/axioms

## **A Themed Issue on Mathematical Inequalities, Analytic Combinatorics and Related Topics in Honor of Professor Feng Qi**


Editors

**Wei-Shih Du, Ravi P. Agarwal, Erdal Karapinar, Marko Kostić, Jian Cao**


*Editorial Office*, MDPI, St. Alban-Anlage 66, 4052 Basel, Switzerland

This is a reprint of articles from the Special Issue published online in the open access journal *Axioms* (ISSN 2075-1680) (available at: www.mdpi.com/journal/axioms/special_issues/math_inequalities).

For citation purposes, cite each article independently as indicated on the article page online and as indicated below:

Lastname, A.A.; Lastname, B.B. Article Title. *Journal Name* **Year**, *Volume Number*, Page Range.

**ISBN 978-3-0365-9001-1 (Hbk) ISBN 978-3-0365-9000-4 (PDF) doi.org/10.3390/books978-3-0365-9000-4**

© 2023 by the authors. Articles in this book are Open Access and distributed under the Creative Commons Attribution (CC BY) license. The book as a whole is distributed by MDPI under the terms and conditions of the Creative Commons Attribution-NonCommercial-NoDerivs (CC BY-NC-ND) license.

## **Contents**




### **About the Editors**

#### **Wei-Shih Du**

Wei-Shih Du is a Full Professor of Mathematics in the Department of Mathematics, National Kaohsiung Normal University, Kaohsiung 82444, Taiwan. His main research interests include nonlinear analysis and its applications, fixed point theory and its applications, variational principles and inequalities, iterative methods for nonlinear mappings, optimization theory, equilibrium problems, and fractional calculus theory.

#### **Ravi P. Agarwal**

Ravi Prakash Agarwal is a Full Professor of Mathematics in the Department of Mathematics, Texas A&M University—Kingsville, Kingsville, TX 78363-8202, USA. His main research interests include nonlinear analysis, differential and difference equations, fixed point theory, and general inequalities.

#### **Erdal Karapinar**

Erdal Karapinar is a Full Professor of Mathematics in the Department of Mathematics, Çankaya University, Etimesgut, Ankara 06790, Turkey. His main research interests include functional analysis, operator theory, linear topological invariants, fixed point theory, and best proximity.

#### **Marko Kostić**

Marko Kostić is a Full Professor of Mathematics in the Faculty of Technical Sciences, University of Novi Sad, Trg D. Obradovića 6, 21125 Novi Sad, Serbia. His main research interests include abstract Volterra integro-differential equations, abstract fractional differential equations, topological dynamics of linear operators, and abstract PDEs.

#### **Jian Cao**

Jian Cao is a Full Professor of Mathematics in the School of Mathematics, Hangzhou Normal University, Hangzhou 311121, China. His main research interests include mathematical inequalities and means, analytic combinatorics, *q*-series, *q*-difference equations, generating functions, and fractional *q*-calculus.

### **Preface**

This Special Issue of *Axioms* pays tribute to Professor Feng Qi's significant contributions and presents some important recent advances in mathematics. It comprises original, creative, and high-quality research papers that inspire advances in mathematical inequalities, mathematical means, the theory of special functions, analytic combinatorics, analytic number theory, optimization, the convex analysis of functions, matrix theory, and their applications.

We, the five Guest Editors, have exerted our best efforts to ensure the success of this Special Issue, and we believe these efforts will be rewarded. We organized a comprehensive review process for each submission based on the journal's policy, instructions, and guidelines. We received 35 submissions and, after a comprehensive peer-review process, only 12 high-quality articles were accepted for publication (an acceptance rate of about 34%). The accepted papers can be classified according to the following seven schemes:


We hope that interested researchers and practitioners will be inspired by this Special Issue and find it valuable to their own research. This Special Issue has highlighted important issues and raised several new problems in these research areas. We would like to heartily thank the Editorial team and the reviewers of *Axioms*, particularly the Editor-in-Chief, Professor Humberto Bustince, and the Assistant Editor, Luna Shen, for their invaluable support and kind help throughout the editing process.

### **Wei-Shih Du, Ravi P. Agarwal, Erdal Karapinar, Marko Kostić, and Jian Cao** *Editors*

### *Editorial* **Preface to the Special Issue "A Themed Issue on Mathematical Inequalities, Analytic Combinatorics and Related Topics in Honor of Professor Feng Qi"**

**Wei-Shih Du <sup>1,\*,†</sup>, Ravi Prakash Agarwal <sup>2</sup>, Erdal Karapinar <sup>3,4</sup>, Marko Kostić <sup>5</sup> and Jian Cao <sup>6</sup>**


This Special Issue of the journal *Axioms* pays tribute to Professor Feng Qi's significant contributions and presents some important recent advances in mathematics. It comprises original, creative, and high-quality research papers that inspire advances in mathematical inequalities, mathematical means, the theory of special functions, analytic combinatorics, analytic number theory, optimization, the convex analysis of functions, matrix theory, and their applications. For more detailed information, please visit the website https://www.mdpi.com/journal/axioms/special\_issues/math\_inequalities (accessed on 22 April 2022).

Professor Feng Qi

Professor Feng Qi earned his Ph.D. degree, supervised by Professor Sen-Lin Xu (born in December 1941, passed away on 2 October 2022 in Beijing), from the University of Science and Technology of China in 1999. He received his master's degree, supervised by Professor Yi-Pei Chen (deceased), from Xiamen University in 1989. He graduated with his bachelor's degree from Henan University in 1986.

He is now a full-time professor at Henan Polytechnic University and Tiangong University, as well as an adjunct professor at Hulunbuir University, China. Additionally, he was an adjunct professor at Henan Normal University, Henan University, and Inner Mongolia University for Nationalities in China, and at Victoria University in Australia. In 2005, he was promoted to Specially Appointed Professor of the Education Committee of Henan Province, China.

**Citation:** Du, W.-S.; Agarwal, R.P.; Karapinar, E.; Kostić, M.; Cao, J. Preface to the Special Issue "A Themed Issue on Mathematical Inequalities, Analytic Combinatorics and Related Topics in Honor of Professor Feng Qi". *Axioms* **2023**, *12*, 846. https://doi.org/10.3390/axioms12090846

Received: 14 August 2023 Accepted: 23 August 2023 Published: 30 August 2023

**Copyright:** © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

Professor Feng Qi won the Award of Science and Technology from the Inner Mongolia Autonomous Region in 2017 and received the Certificate of High-level Talent in Henan Province in 2020. He also won several other academic awards and scientific funds from Henan Province, Inner Mongolia Autonomous Region, and Shaanxi Province in China.

He was the first former Head of the Department of Applied Mathematics and Informatics (current School of Mathematics and Informatics) at Henan Polytechnic University.

He has published over 691 research papers (accessed on 9 August 2023), affiliated with eight universities (accessed on 9 August 2023) (Henan Polytechnic University, Hulunbuir University, Tiangong University (357), Inner Mongolia Minzu University (199), Henan University (13), Henan Normal University (11), University of Science and Technology of China (20), and University of Electronic Science and Technology of China (8)), in over 225 journals, conference proceedings, Special Issues, books, and collections. Since 2012, Professor Feng Qi and many teachers and graduates at Inner Mongolia Minzu University have jointly published 127 papers (accessed on 9 August 2023), of which 72 were abstracted and indexed by the Web of Science Core Collection and five by the Engineering Village. His Erdös number is 3 (accessed on 9 August 2023).

From 2014 to 2023, he was named as one of the Most (Highly) Cited Chinese Researchers by the Elsevier-Shanghai Ranking for nine consecutive years. Since 2002, there have been over 91 papers or preprints (accessed on 8 August 2023) with titles that contain one of the following names: "Feng Qi", "F. Qi", or "Qi". His name was mentioned by Mourad Ismail in [1]. Since 2006, over 40 papers or preprints (accessed on 8 August 2023) have been published by other mathematicians around the world, with titles containing the notions of "logarithmically completely (absolutely) monotonic function" or its analogs. These notions were created or invented by Professor Feng Qi and his coauthors in 2004 and 2009, respectively. Many works published by Professor Feng Qi and his coauthors have been collected in the famous databases "The On-Line Encyclopedia of Integer Sequences" (accessed on 9 August 2023) and "Wikipedia, The Free Encyclopedia" (accessed on 9 August 2023), in monographs and handbooks [2–9], and in a Chinese textbook published by Beijing Normal University Press in 2017 for middle school students.

Professor Feng Qi has delivered invited, keynote, and plenary speeches at international academic conferences and was financially supported to attend them. He was invited to serve on the scientific, international advisory, and organizing committees of international academic conferences, and was invited and financially supported to be a visiting professor in Australia, Denmark, Hong Kong, India, North Macedonia, Pakistan, Romania, South Korea, Taiwan, Turkey, and the USA over 25 times. He also attended domestic academic conferences and was invited and financially supported to deliver speeches over 30 times.

He serves on the editorial boards of over 21 international mathematical journals (accessed on 8 August 2023) and has, in total, been appointed to the editorial boards of over 43 international mathematical journals (accessed on 8 August 2023). He twice received top-reviewer recognition in Mathematics from Publons (accessed on 8 August 2023): Top Peer Reviewer 2019 and Certified Sentinel of Science Award Recipient 2016 (the top 10 per cent of reviewers).

Since 1990, he has taught many courses for hundreds of thousands of undergraduate and graduate students, some of which include the following:


Along with teaching, he has also supervised two graduates (accessed on 8 August 2023) (Jian Cao and Da-Wei Niu) at Henan Polytechnic University and six graduates (accessed on 8 August 2023) (Xiao-Jing Zhang, Wen-Hui Li, Miao-Miao Zheng, Fang-Fang Liu, Xiao-Ting Shi, and Jing-Lin Wang) at Tiangong University.

Professor Feng Qi's main academic research interests (accessed on 9 August 2023) included, but were not limited to, the following:


We, the five Guest Editors, have done our best to ensure the success of this Special Issue, and we believe our efforts will be rewarded. We organized a comprehensive review process for each submission based on the journal's policy, instructions, and guidelines. We received 35 submissions and, after a comprehensive peer-review process, only 12 high-quality articles were accepted for publication (an acceptance rate of about 34%). The list of published contributions is as follows:


The 30 authors of these 12 papers are as follows:


The accepted papers can be classified according to the following seven schemes:


As of 9 August 2023, two of these twelve papers have been cited, as shown in the following table.


We hope that interested researchers and practitioners will be inspired by this Special Issue and find it valuable to their own research. This Special Issue has highlighted important issues and raised several new problems in these research areas. We would like to heartily thank the editorial team and the reviewers of the journal *Axioms*, particularly the Editor-in-Chief, Professor Humberto Bustince, and the Assistant Editor, Luna Shen, for their invaluable support and kind help throughout the editing process.

**Author Contributions:** Conceptualization, W.-S.D., R.P.A., E.K., M.K. and J.C.; methodology, W.-S.D., R.P.A., E.K., M.K. and J.C.; software, W.-S.D.; validation, W.-S.D., R.P.A., E.K., M.K. and J.C.; formal analysis, W.-S.D., R.P.A., E.K., M.K. and J.C.; investigation, W.-S.D., R.P.A., E.K., M.K. and J.C.; writing—original draft preparation, W.-S.D.; writing—review and editing, W.-S.D., R.P.A., E.K., M.K. and J.C.; visualization, W.-S.D., R.P.A., E.K., M.K. and J.C.; supervision, W.-S.D., R.P.A., E.K., M.K. and J.C.; project administration, W.-S.D., R.P.A., E.K., M.K. and J.C. All authors have read and agreed to the published version of the manuscript.

**Funding:** Wei-Shih Du is partially supported by Grant No. NSTC 112-2115-M-017-002 of the National Science and Technology Council of the Republic of China. Marko Kostić is partially supported by grant 451-03-68/2020/14/200156 of the Ministry of Science and Technological Development, Republic of Serbia. Jian Cao is partially supported by the Zhejiang Provincial Natural Science Foundation of China (Grant No. LY21A010019).

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Not applicable.

**Acknowledgments:** The authors wish to express their hearty thanks to the family of Professor Feng Qi for supplying his photograph and giving us permission to use it in this manuscript.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


**Disclaimer/Publisher's Note:** The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

#### **Ravi Prakash Agarwal <sup>1,\*</sup>, Erdal Karapinar <sup>2,3,\*</sup>, Marko Kostić <sup>4</sup>, Jian Cao <sup>5</sup> and Wei-Shih Du <sup>6</sup>**


**Abstract:** In the paper, the authors present a brief overview and survey of the scientific work by Chinese mathematician Feng Qi and his coauthors.

**Keywords:** overview; survey; inequality; series expansion; partial Bell polynomial; convex function; special function; mathematical mean; Bernoulli number; matrix; completely monotonic degree; logarithmically completely monotonic function; gamma function; polygamma function; Bell number; Wallis ratio; additivity; complete elliptic integral; Pólya inequality; statistics

**MSC:** 00-02; 01-02; 05-02; 11-02; 12-02; 15-02; 26-02; 33-02; 40-02; 41-02; 44-02; 53-02

#### **1. Introduction**

Professor Feng Qi, whose ORCID profile is at https://orcid.org/0000-0001-6239-2968, received his PhD degree from the University of Science and Technology of China in 1999 and is currently a full Professor at Tiangong University and Henan Polytechnic University, China. On 17 May 2022, he moved to Dallas as an independent researcher in mathematics.

December 2017 in Dallas

Among other institutions and universities, he has visited Victoria University in Australia and the University of Hong Kong twice, the University of Copenhagen, the Antalya IC Hotel to attend a conference, several universities in South Korea, Sun Yat-sen University, Kaohsiung Normal University, and so on. He is, or was, the editor-in-chief, an associate editor, or a member of the editorial board of over 40 reputable international journals. In 1993, Qi published his first academic paper in China; in 1996, he published his first academic paper abroad. To date, he has published over 670 papers in 220 journals, collections, or proceedings. Currently, his academic interests and research fields mainly include the theory of special functions, classical analysis, mathematical inequalities and their applications, mathematical means and their applications, analytic combinatorics, analytic number theory, the convex theory of functions, and so on.

**Citation:** Agarwal, R.P.; Karapinar, E.; Kostić, M.; Cao, J.; Du, W.-S. A Brief Overview and Survey of the Scientific Work by Feng Qi. *Axioms* **2022**, *11*, 385. https://doi.org/10.3390/axioms11080385

Received: 11 July 2022 Accepted: 30 July 2022 Published: 5 August 2022

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

Now, let us briefly present an overview and survey of some research results obtained by Professor Feng Qi and his coauthors.

#### **2. Concrete Contributions**

#### *2.1. Bell Numbers and Inequalities*

From 2013 on, Qi began to consider problems related to combinatorial number theory and applied logarithmically complete monotonicity to this area.

December 2017 in Dallas

In the research article [1], Qi presented the derivatives of the generating functions for the Bell numbers, both by induction and by the well-known Faà di Bruno formula. Using this approach, he recovered an explicit formula in terms of the Stirling numbers of the second kind, established the logarithmically absolute and complete monotonicity of the generating functions, and deduced some inequalities for the Bell numbers. The logarithmic convexity of the sequence of the Bell numbers was then shown.

As is well known, the Bell number $B_n$ is defined as the number of all equivalence relations on the set $\mathbb{N}_n = \{1, 2, \dots, n\}$ for $n \in \mathbb{N}$. These numbers were already known in medieval Japan, but they are named after Eric Temple Bell, who analyzed them systematically in the 1930s.

Let us recall that

$$B_1 = 1, \quad B_2 = 2, \quad B_3 = 5, \quad B_4 = 15, \quad B_5 = 52.$$

Since

$$\mathrm{e}^{\mathrm{e}^{x}} = \mathrm{e} \sum_{k=0}^{\infty} B_k \frac{x^k}{k!} \quad\text{and}\quad \mathrm{e}^{\mathrm{e}^{-x}} = \mathrm{e} \sum_{k=0}^{\infty} (-1)^k B_k \frac{x^k}{k!},$$

the functions $\mathrm{e}^{\mathrm{e}^{\pm x}}$ are called the generating functions for the Bell numbers $B_k$. The Bell numbers are also called exponential numbers.

It is known that, for every $n \in \mathbb{N}$, we have

$$\frac{\mathrm{d}^n \mathrm{e}^{\mathrm{e}^{x}}}{\mathrm{d}x^n} = \mathrm{e}^{\mathrm{e}^{x}} \sum_{k=1}^{n} S(n,k)\,\mathrm{e}^{kx} \quad\text{and}\quad \frac{\mathrm{d}^n \mathrm{e}^{\mathrm{e}^{-x}}}{\mathrm{d}x^n} = (-1)^n\, \mathrm{e}^{\mathrm{e}^{-x}} \sum_{k=1}^{n} S(n,k)\,\mathrm{e}^{-kx},$$

where *S*(*n*, *k*) is the Stirling number of the second kind, which can be computed by

$$S(n,k) = \frac{1}{k!} \sum_{\ell=1}^{k} (-1)^{k-\ell} \binom{k}{\ell} \ell^n.$$

The Stirling numbers of the second kind satisfy the recurrence relation

$$S(n+1,k+1) = S(n,k) + (k+1)\,S(n,k+1), \quad 1 \le k \le n-1.$$

From the above, we have

$$B_n = \frac{1}{\mathrm{e}} \lim_{x \to 0} \frac{\mathrm{d}^n \mathrm{e}^{\mathrm{e}^{x}}}{\mathrm{d}x^n}$$

and therefore

$$B_n = \sum_{k=1}^{n} S(n,k).$$
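
The explicit formula for $S(n,k)$, the recurrence, and this row-sum identity are easy to check computationally. A minimal sketch in Python (the function names are ours):

```python
from math import comb, factorial

def stirling2(n, k):
    """Stirling number of the second kind S(n, k) via the explicit alternating sum."""
    return sum((-1) ** (k - l) * comb(k, l) * l ** n for l in range(1, k + 1)) // factorial(k)

def bell(n):
    """Bell number B_n as the row sum of the Stirling numbers of the second kind."""
    return sum(stirling2(n, k) for k in range(1, n + 1))

# The recurrence S(n+1, k+1) = S(n, k) + (k+1) S(n, k+1) for 1 <= k <= n-1.
for n in range(2, 12):
    for k in range(1, n):
        assert stirling2(n + 1, k + 1) == stirling2(n, k) + (k + 1) * stirling2(n, k + 1)

print([bell(n) for n in range(1, 6)])  # [1, 2, 5, 15, 52]
```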

Several inequalities for the Bell numbers $B_n$ have been proven, including the following ones:

1. Let $a = (a_1, a_2, \dots, a_n)$ and $b = (b_1, b_2, \dots, b_n)$ be two non-increasing tuples of nonnegative integers such that $\sum_{i=1}^{k} a_i \ge \sum_{i=1}^{k} b_i$ for $1 \le k \le n-1$ and $\sum_{i=1}^{n} a_i = \sum_{i=1}^{n} b_i$. Then

$$B_{a_1} B_{a_2} \cdots B_{a_n} \ge B_{b_1} B_{b_2} \cdots B_{b_n}.$$

2. If $\ell \ge 0$ and $n \ge k \ge 0$, then we have

$$B_{n+\ell}^{k} B_{\ell}^{n-k} \ge B_{k+\ell}^{n}.$$

3. If $\ell \ge 0$, $n \ge k \ge m$, $2k \ge n$, and $2m \ge n$, then we have

$$B_{k+\ell} B_{n-k+\ell} \ge B_{m+\ell} B_{n-m+\ell}.$$

4. If $k \ge 0$ and $n \in \mathbb{N}$, then we have

$$\left(\prod_{\ell=0}^{n} B_{k+2\ell}\right)^{1/(n+1)} \ge \left(\prod_{\ell=0}^{n-1} B_{k+2\ell+1}\right)^{1/n}.$$

These results have been extended and generalized in [2–5] by Qi and his coauthors.
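
The four inequalities above can also be probed numerically over small parameter ranges. A sketch (all names are ours; the geometric-mean inequality is checked by comparing integer powers, so no floating-point roots are involved):

```python
from math import comb, factorial

def stirling2(n, k):
    return sum((-1) ** (k - l) * comb(k, l) * l ** n for l in range(1, k + 1)) // factorial(k)

def bell(n):
    return 1 if n == 0 else sum(stirling2(n, k) for k in range(1, n + 1))

B = [bell(n) for n in range(30)]

# Inequality 1 on a small majorization chain: (4,0) majorizes (3,1) majorizes (2,2).
assert B[4] * B[0] >= B[3] * B[1] >= B[2] * B[2]

# Inequality 2: B_{n+l}^k B_l^{n-k} >= B_{k+l}^n for l >= 0, n >= k >= 0.
for l in range(5):
    for n in range(8):
        for k in range(n + 1):
            assert B[n + l] ** k * B[l] ** (n - k) >= B[k + l] ** n

# Inequality 3: B_{k+l} B_{n-k+l} >= B_{m+l} B_{n-m+l} when n >= k >= m, 2k >= n, 2m >= n.
for l in range(5):
    for n in range(8):
        for k in range(n + 1):
            for m in range(k + 1):
                if 2 * k >= n and 2 * m >= n:
                    assert B[k + l] * B[n - k + l] >= B[m + l] * B[n - m + l]

# Inequality 4, raised to the power n(n+1): (prod even-indexed)^n >= (prod odd-indexed)^(n+1).
for k in range(5):
    for n in range(1, 6):
        even = odd = 1
        for l in range(n + 1):
            even *= B[k + 2 * l]
        for l in range(n):
            odd *= B[k + 2 * l + 1]
        assert even ** n >= odd ** (n + 1)

print("all Bell-number inequalities verified on the sampled ranges")
```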

#### *2.2. Partial Bell Polynomials*

Partial Bell polynomials are also called the Bell polynomials of the second kind. They are usually denoted by $B_{n,k}(x_1, x_2, \dots, x_{n-k+1})$ and are closely connected with the famous Faà di Bruno formula in combinatorics. In recent years, Qi and his coauthors creatively considered some special values of $B_{n,k}$ for special sequences $x_1, x_2, \dots, x_{n-k+1}$ and successfully applied them to various mathematical problems.

The survey article [6] is worth mentioning. Here we introduce only the newest results obtained by Qi and his coauthors.

1. (a) For $m \in \mathbb{N}$ and $|t| < 1$, the function $\left(\frac{\arcsin t}{t}\right)^m$, whose value at $t = 0$ is defined to be 1, has the Maclaurin series expansion

$$\left(\frac{\arcsin t}{t}\right)^m = 1 + \sum_{k=1}^{\infty} (-1)^k \frac{Q(m, 2k; 2)}{\binom{m+2k}{m}} \frac{(2t)^{2k}}{(2k)!} \tag{1}$$

where

$$Q(m,k;\alpha) = \sum_{\ell=0}^{k} \binom{m+\ell-1}{m-1} s(m+k-1, m+\ell-1) \left(\frac{m+k-\alpha}{2}\right)^{\ell} \tag{2}$$

for $m, k \in \mathbb{N}$ and a constant $\alpha \in \mathbb{R}$ such that $m + k \neq \alpha$, and the Stirling numbers of the first kind $s(m+k-1, m+\ell-1)$ are generated by

$$\frac{[\ln(1+x)]^k}{k!} = \sum_{n=k}^{\infty} s(n,k) \frac{x^n}{n!}, \quad |x| < 1.$$

(b) For $k, n \ge 0$ and $x_m \in \mathbb{C}$ with $m \in \mathbb{N}$, we have

$$B_{2n+1,k}\left(0, x_2, 0, x_4, \dots, \frac{1+(-1)^k}{2}\, x_{2n-k+2}\right) = 0. \tag{3}$$

For $k, n \in \mathbb{N}$ such that $2n \ge k$, we have

$$\begin{aligned} B_{2n,k}&\left(0, \frac{1}{3}, 0, \frac{9}{5}, 0, \frac{225}{7}, \dots, \frac{1+(-1)^{k+1}}{2} \frac{[(2n-k)!!]^2}{2n-k+2}\right) \\ &= (-1)^{n+k} \frac{(4n)!!}{(2n+k)!} \sum_{q=1}^{k} (-1)^q \binom{2n+k}{k-q} Q(q, 2n; 2), \end{aligned} \tag{4}$$

where *Q*(*q*, 2*n*; 2) is given by (2).

Maclaurin's series expansion (1) was recovered in (Section 6 [9]) and was generalized in (Section 4 [10]) as

$$\left(\frac{\arcsin t}{t}\right)^{\alpha} = 1 + \sum_{n=1}^{\infty} (-1)^n \left[\sum_{k=1}^{2n} \frac{(-\alpha)_k}{(2n+k)!} \sum_{q=1}^{k} (-1)^q \binom{2n+k}{k-q} Q(q, 2n; 2)\right] (2t)^{2n} \tag{5}$$

for *α* ∈ R and |*t*| < 1 by rediscovering a special case of (3) and the closed-form Formula (4), where *Q*(*q*, 2*n*; 2) is given by (2) and the rising factorial of a complex number *α* ∈ C is defined by

$$(\alpha)_m = \prod_{k=0}^{m-1} (\alpha + k) = \begin{cases} \alpha(\alpha+1)\cdots(\alpha+m-1), & m \in \mathbb{N}; \\ 1, & m = 0. \end{cases} \tag{6}$$
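
Expansion (1), with $Q$ given by (2), can be checked against direct evaluation of $(\arcsin t / t)^m$. A sketch using exact rational arithmetic for the Stirling numbers of the first kind and for $Q$ (all function names are ours):

```python
from fractions import Fraction
from math import asin, comb, factorial

def stirling1_table(N):
    """Signed Stirling numbers of the first kind via s(n,k) = s(n-1,k-1) - (n-1) s(n-1,k)."""
    s = [[0] * (N + 1) for _ in range(N + 1)]
    s[0][0] = 1
    for n in range(1, N + 1):
        for k in range(1, n + 1):
            s[n][k] = s[n - 1][k - 1] - (n - 1) * s[n - 1][k]
    return s

S1 = stirling1_table(60)

def Q(m, k, a=2):
    """Q(m, k; a) from Equation (2), computed exactly as a rational number."""
    return sum(comb(m + l - 1, m - 1) * S1[m + k - 1][m + l - 1]
               * Fraction(m + k - a, 2) ** l for l in range(k + 1))

def arcsin_ratio_power(m, t, terms=15):
    """Partial sum of expansion (1) for (arcsin(t)/t)^m at a rational t."""
    total = Fraction(1)
    for k in range(1, terms + 1):
        total += ((-1) ** k * Q(m, 2 * k) / comb(m + 2 * k, m)
                  * (2 * t) ** (2 * k) / factorial(2 * k))
    return float(total)

# Compare the truncated expansion with direct evaluation at t = 3/10.
for m in (1, 2, 3):
    assert abs(arcsin_ratio_power(m, Fraction(3, 10)) - (asin(0.3) / 0.3) ** m) < 1e-12
```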

2. In [9], among other things, by establishing the Taylor series expansion

$$\left[\frac{(\arccos x)^2}{2(1-x)}\right]^k = 1 + (2k)! \sum_{n=1}^{\infty} \frac{Q(2k, 2n; 2)}{(2k+2n)!} [2(x-1)]^n \tag{7}$$

for *k* ∈ N and |*x*| < 1, Qi derived the specific value

$$\begin{aligned} B_{m,k}&\left(-\frac{1}{12}, \frac{2}{45}, -\frac{3}{70}, \frac{32}{525}, -\frac{80}{693}, \dots, \frac{(2m-2k+2)!!}{(2m-2k+4)!} Q(2, 2m-2k+2; 2)\right) \\ &= (-1)^k [2(m-k)]!! \binom{m}{k} \sum_{j=1}^{k} (-1)^j (2j)! \binom{k}{j} \frac{Q(2j, 2m; 2)}{(2j+2m)!} \end{aligned}$$

for *m* ≥ *k* ∈ N and then generalized the series expansion (7) to

$$\left[\frac{(\arccos x)^2}{2(1-x)}\right]^{\alpha} = 1 + \sum_{n=1}^{\infty} \left[\sum_{j=1}^{n} \frac{(-\alpha)_j}{j!} \sum_{\ell=1}^{j} (-1)^{\ell} (2\ell)! \binom{j}{\ell} \frac{Q(2\ell, 2n; 2)}{(2\ell+2n)!}\right] [2(x-1)]^n$$

for *α* ∈ R, where *Q*(2*j*, 2*m*; 2) is defined by (2).

3. In [10], among other things, by establishing the specific values

$$B_{2r+k,k}\left(1, 0, 1, 0, 9, 0, 225, 0, \dots, [(2r-3)!!]^2, 0, [(2r-1)!!]^2\right) = (-1)^r 2^{2r} Q(k, 2r; 2)$$

and

$$B_{2r+k-1,k}\left(1, 0, 1, 0, 9, 0, 225, 0, \dots, [(2r-3)!!]^2, 0\right) = 0$$

for *r*, *k* ∈ N, Qi concluded

$$\begin{split} \left(\frac{2\arccos t}{\pi}\right)^{\alpha} &= 1 + \sum_{r=1}^{\infty} (-1)^r \left[\sum_{\ell=1}^{r} (-1)^{\ell} \frac{(-\alpha)_{2\ell-1}}{\pi^{2\ell-1}} Q(2\ell-1, 2r-2\ell; 2)\right] \frac{(2t)^{2r-1}}{(2r-1)!} \\ &\quad + \frac{(-\alpha)_2}{\pi^2} \frac{(2t)^2}{2!} + \sum_{r=2}^{\infty} (-1)^r \left[\sum_{\ell=1}^{r} (-1)^{\ell} \frac{(-\alpha)_{2\ell}}{\pi^{2\ell}} Q(2\ell, 2r-2\ell; 2)\right] \frac{(2t)^{2r}}{(2r)!} \end{split}$$

for $\alpha \in \mathbb{R}$ and $|t| < 1$, where $(\alpha)_r$ for $\alpha \in \mathbb{R}$ and $r \in \mathbb{N}$ is defined by (6) and $Q(k, 2r; 2)$ is given by (2).

4. In [11], among other things, by establishing a special case of (3) and the explicit formula

$$\begin{aligned} B_{2m,k}&\left(0, -\frac{1}{3}, 0, \frac{1}{5}, \dots, \frac{(-1)^m}{2m-k+2} \sin\frac{k\pi}{2}\right) \\ &= (-1)^{m+k} \frac{2^{2m}}{k!} \sum_{j=1}^{k} (-1)^j \binom{k}{j} \frac{T(2m+j, j)}{\binom{2m+j}{j}}, \quad 2m \ge k \ge 1, \end{aligned}$$

Qi showed that

(a) when $\alpha \ge 0$, the series expansion

$$\operatorname{sinc}^{\alpha} z = 1 + \sum_{q=1}^{\infty} (-1)^q \left[\sum_{k=1}^{2q} \frac{(-\alpha)_k}{k!} \sum_{j=1}^{k} (-1)^j \binom{k}{j} \frac{T(2q+j, j)}{\binom{2q+j}{j}}\right] \frac{(2z)^{2q}}{(2q)!} \tag{8}$$

converges for every $z \in \mathbb{C}$;

(b) when $\alpha < 0$, the series expansion (8) converges for $|z| < \pi$, where

$$\operatorname{sinc} z = \begin{cases} \frac{\sin z}{z}, & z \neq 0 \\ 1, & z = 0 \end{cases}$$

is called the sinc function,

$$T(n,\ell) = \begin{cases} 1, & (n,\ell) = (0,0) \\ 0, & n \in \mathbb{N},\ \ell = 0 \\ \frac{1}{\ell!} \sum_{j=0}^{\ell} (-1)^j \binom{\ell}{j} \left(\frac{\ell}{2} - j\right)^n, & n, \ell \in \mathbb{N} \end{cases}$$

for $n \ge \ell \in \mathbb{N}_0 = \{0, 1, 2, \dots\}$ are called the central factorial numbers of the second kind [12,13], and the rising factorial $(\alpha)_k$ is defined by (6).

For new results and applications of special values of partial Bell polynomials $B_{n,k}$ obtained in recent years by Qi and his coauthors, please refer to [14–25] and the closely related references therein.
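
Expansion (8), with $T(n,\ell)$ as defined above, can likewise be compared with direct evaluation of $\operatorname{sinc}^{\alpha} z$ for small integer $\alpha \ge 0$. A sketch using exact rationals for the coefficients (names are ours):

```python
from fractions import Fraction
from math import comb, factorial, sin

def T(n, l):
    """Central factorial numbers of the second kind T(n, l), exact rationals."""
    if l == 0:
        return Fraction(1 if n == 0 else 0)
    return sum((-1) ** j * comb(l, j) * Fraction(l - 2 * j, 2) ** n
               for j in range(l + 1)) / factorial(l)

def rising(a, k):
    """Rising factorial (a)_k = a (a+1) ... (a+k-1), with (a)_0 = 1."""
    r = Fraction(1)
    for i in range(k):
        r *= a + i
    return r

def sinc_power(alpha, z, terms=12):
    """Partial sum of expansion (8) for sinc(z)^alpha (alpha an integer >= 0 here)."""
    total = Fraction(1)
    for q in range(1, terms + 1):
        inner = Fraction(0)
        for k in range(1, 2 * q + 1):
            c = rising(-alpha, k)
            if c == 0:
                break  # (-alpha)_k vanishes for all larger k when alpha is a nonnegative integer
            inner += c / factorial(k) * sum(
                (-1) ** j * comb(k, j) * T(2 * q + j, j) / comb(2 * q + j, j)
                for j in range(1, k + 1))
        total += (-1) ** q * inner * (2 * z) ** (2 * q) / factorial(2 * q)
    return float(total)

# Compare with direct evaluation of (sin z / z)^alpha at z = 1/2.
for a in (1, 2, 3):
    assert abs(sinc_power(a, Fraction(1, 2)) - (sin(0.5) / 0.5) ** a) < 1e-12
```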

#### *2.3. Wallis Ratio*

Starting in 1999, Qi became interested in special functions and their applications. Through this work, he introduced mathematical notions such as the logarithmically completely monotonic function and the completely monotonic degree.

New approximation formulas and inequalities for the Wallis ratio

$$\mathcal{W}_n = \frac{(2n-1)!!}{(2n)!!}, \quad n \in \mathbb{N}$$

have been examined in a joint research article [26] with C. Mortici. In (Theorems 4.1 and 4.2, [26]), the authors proved the asymptotic formula

$$\mathcal{W}_n \sim \sqrt{\frac{\mathrm{e}}{\pi}} \left(1 - \frac{1}{2n}\right)^{n} \frac{1}{\sqrt{n}} \exp\left(\frac{1}{24n^2} + \frac{1}{48n^3} + \frac{1}{160n^4} + \frac{1}{960n^5} + \dotsb\right), \quad n \to \infty$$

and the inequality

$$\mathcal{W}_n > \sqrt{\frac{\mathrm{e}}{\pi}} \left(1 - \frac{1}{2n}\right)^{n} \frac{1}{\sqrt{n}} \exp\left(\frac{1}{24n^2} + \frac{1}{48n^3} + \frac{1}{160n^4} + \frac{1}{960n^5}\right), \quad n \ge 1,$$

respectively. In (Theorem 5.2 [26]), the double inequality

$$
\sqrt{\frac{\mathrm{e}}{\pi}} \left[1 - \frac{1}{2(n + 1/3)}\right]^{n+1/3} \frac{1}{\sqrt{n}} < \mathcal{W}_n < \sqrt{\frac{\mathrm{e}}{\pi}} \left[1 - \frac{1}{2(n + 1/3)}\right]^{n+1/3} \frac{\mathrm{e}^{1/(14n^3)}}{\sqrt{n}}
$$

has been proved for each integer *n* ≥ 1.
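
Both the truncated asymptotic expansion (as a lower bound) and the double inequality of (Theorem 5.2, [26]) are easy to probe numerically for moderate $n$; a sketch (names are ours):

```python
from math import e, exp, pi, sqrt

def wallis(n):
    """Wallis ratio W_n = (2n-1)!! / (2n)!! as a float."""
    w = 1.0
    for k in range(1, n + 1):
        w *= (2 * k - 1) / (2 * k)
    return w

def lower_expansion(n):
    """sqrt(e/pi) (1 - 1/(2n))^n n^(-1/2) exp(truncated correction series)."""
    corr = 1 / (24 * n**2) + 1 / (48 * n**3) + 1 / (160 * n**4) + 1 / (960 * n**5)
    return sqrt(e / pi) * (1 - 1 / (2 * n)) ** n / sqrt(n) * exp(corr)

for n in range(1, 21):
    # Truncating the correction series gives a strict lower bound for W_n.
    assert lower_expansion(n) < wallis(n)
    # The two-sided bound with exponent n + 1/3 (Theorem 5.2 of [26]).
    base = sqrt(e / pi) * (1 - 1 / (2 * (n + 1 / 3))) ** (n + 1 / 3) / sqrt(n)
    assert base < wallis(n) < base * exp(1 / (14 * n**3))
```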

In the area of the Wallis ratio and related inequalities, Qi and his coauthors also published the papers [27–33] and applied some results from [31] to the derivation of the series expansion (8).

#### *2.4. Additivity of Polygamma Functions*

The classical Euler gamma function Γ(*x*) is defined for *x* > 0 by

$$
\Gamma(x) = \int_0^\infty \mathrm{e}^{-t}\, t^{x-1} \,\mathrm{d}t.
$$

The function $\psi(x) = \frac{\Gamma'(x)}{\Gamma(x)}$ is usually called the psi or digamma function, while the functions $\psi^{(k)}(x)$ for $k \in \mathbb{N}$ are called the polygamma functions.

August 2008 in Sydney

The properties of the gamma function, the digamma function, and the polygamma functions have been investigated in many research papers by now. In a joint work [34] with B.-N. Guo and Q.-M. Luo, F. Qi proved that, for each positive integer $i \in \mathbb{N}$, the function $|\psi^{(i)}(\mathrm{e}^x)|$ is subadditive on $(\ln\theta_i, \infty)$ and superadditive on $(-\infty, \ln\theta_i)$, where $\theta_i \in (0,1)$ is the unique root of the equation $2|\psi^{(i)}(\theta)| = |\psi^{(i)}(\theta^2)|$.

An earlier paper similar to [34] is [35], in which the convexity and concavity of the functions $\psi^{(k)}(\mathrm{e}^x)$ and $\psi^{(k)}(x^c)$ for $x \in \mathbb{R}$ and $c \neq 0$ were considered by Qi and his two coauthors.

#### *2.5. Bounds for Mathematical Means in Terms of Mathematical Means*

In [36], a joint work with X.-T. Shi, F.-F. Liu, and Z.-H. Yang, Qi examined a double inequality for an integral mean in terms of the exponential and logarithmic means. Among many other results, it has been proved that, for any two distinct positive real numbers $a$ and $b$, we have

$$L(a,b) < \frac{2}{\pi} \int_0^{\pi/2} a^{\cos^2\theta}\, b^{\sin^2\theta} \,\mathrm{d}\theta < I(a,b),$$

where

$$L(a,b) = \frac{b-a}{\ln b - \ln a} \quad\text{and}\quad I(a,b) = \frac{1}{\mathrm{e}} \left(\frac{b^b}{a^a}\right)^{1/(b-a)}$$

are called [37] the logarithmic and exponential means, respectively.
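
This double inequality can be illustrated numerically with a simple quadrature; a sketch (the function names and the midpoint rule are our choices):

```python
from math import cos, exp, log, pi, sin

def integral_mean(a, b, steps=20000):
    """(2/pi) * integral over [0, pi/2] of a^(cos^2 t) b^(sin^2 t) dt, midpoint rule."""
    h = (pi / 2) / steps
    return (2 / pi) * h * sum(
        a ** cos((i + 0.5) * h) ** 2 * b ** sin((i + 0.5) * h) ** 2
        for i in range(steps))

def L(a, b):
    """Logarithmic mean."""
    return (b - a) / (log(b) - log(a))

def I(a, b):
    """Exponential (identric) mean."""
    return (b ** b / a ** a) ** (1 / (b - a)) / exp(1)

# L(a,b) < integral mean < I(a,b) for several pairs of distinct positive numbers.
for a, b in [(1.0, 2.0), (0.5, 3.0), (2.0, 7.5)]:
    m = integral_mean(a, b)
    assert L(a, b) < m < I(a, b)
```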

The paper [36] was the starting point of [38,39] and of many other papers, such as [40–48], by other mathematicians.

In a joint research article [49] with W.-D. Jiang, F. Qi proved a double inequality for the combination of the Toader mean and the arithmetic mean in terms of the contraharmonic mean. Qi and his coauthors also published many other papers such as [50–56] in which some special means are bounded in terms of elementary and simple mathematical means.

#### *2.6. Complete Elliptic Integrals*

Needless to say, the theory of complete elliptic integrals has attracted F. Qi and his coauthors, who have provided many significant contributions in this field. Some new bounds for the complete elliptic integrals of the first and second kinds and their generalizations were given, for example, in [57–62].

#### *2.7. Matrices*

In [63], Qi and his two coauthors analytically discovered the inverse of the interesting matrix

$$A_n = (a_{i,j})_{n \times n} = \begin{pmatrix} \binom{1}{0} & 0 & 0 & 0 & \cdots & 0 & 0 & 0 & 0 \\ \binom{1}{1} & \binom{2}{0} & 0 & 0 & \cdots & 0 & 0 & 0 & 0 \\ 0 & \binom{2}{1} & \binom{3}{0} & 0 & \cdots & 0 & 0 & 0 & 0 \\ 0 & \binom{2}{2} & \binom{3}{1} & \binom{4}{0} & \cdots & 0 & 0 & 0 & 0 \\ 0 & 0 & \binom{3}{2} & \binom{4}{1} & \cdots & 0 & 0 & 0 & 0 \\ 0 & 0 & \binom{3}{3} & \binom{4}{2} & \cdots & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & \binom{4}{3} & \cdots & 0 & 0 & 0 & 0 \\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots & \vdots \\ 0 & 0 & 0 & 0 & \cdots & \binom{n-3}{0} & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & \cdots & \binom{n-3}{1} & \binom{n-2}{0} & 0 & 0 \\ 0 & 0 & 0 & 0 & \cdots & \binom{n-3}{2} & \binom{n-2}{1} & \binom{n-1}{0} & 0 \\ 0 & 0 & 0 & 0 & \cdots & \binom{n-3}{3} & \binom{n-2}{2} & \binom{n-1}{1} & \binom{n}{0} \end{pmatrix} \tag{9}$$

for *n* ∈ N, where

$$a\_{i,j} = \begin{cases} 0, & i < j \\ \binom{j}{i-j}, & j \le i \le 2j \\ 0, & i > 2j \end{cases}$$

for $1 \le i, j \le n$. Based on this result, they presented an inversion theorem which states that

$$\frac{s_n}{n!} = \sum_{k=1}^n (-1)^k \binom{k}{n-k} S_k \quad \text{if and only if} \quad n S_n = \sum_{k=1}^n \frac{(-1)^k}{(k-1)!} \binom{2n-k-1}{n-1} s_k,$$

where $s_k$ and $S_k$ are two sequences independent of $n$ such that $n \ge k \ge 1$. Moreover, they deduced several identities, including

$$\sum_{\ell=0}^{\lfloor (j-1)/2 \rfloor} (-1)^{\ell} \binom{j-\ell-1}{\ell} C_{i-\ell-1} = \frac{j}{i} \binom{2i-j-1}{i-1}, \quad i \ge j \ge 1$$

and

$$\frac{\sum_{\ell=0}^{m-1} (-1)^{\ell} \binom{2m-\ell-1}{\ell} \frac{n+2\ell+1}{n-\ell+1} C_{n-\ell-1}}{\sum_{\ell=0}^{m-1} (-1)^{\ell} \binom{2m-\ell-2}{\ell} \frac{1}{2m-2\ell-1} C_{n-\ell-1}} = m(2m-1), \quad n \ge 2m \ge 2,\tag{10}$$

relating to the Catalan numbers $C_n = \frac{1}{n+1}\binom{2n}{n}$, where $\lfloor x \rfloor$ denotes the floor function, whose value is the largest integer less than or equal to $x$.
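The first of these identities is easy to check numerically. The following sketch is our own verification code (not from [63]); it multiplies the identity through by $i$ so everything stays in integer arithmetic:

```python
from math import comb

def catalan(n):
    # Catalan numbers C_n = binom(2n, n)/(n + 1); the division is always exact
    return comb(2 * n, n) // (n + 1)

def lhs(i, j):
    # sum over 0 <= l <= floor((j-1)/2) of (-1)^l binom(j-l-1, l) C_{i-l-1}
    return sum((-1)**l * comb(j - l - 1, l) * catalan(i - l - 1)
               for l in range((j - 1) // 2 + 1))

# identity cleared of the factor j/i: i * lhs(i, j) == j * binom(2i-j-1, i-1)
for i in range(1, 10):
    for j in range(1, i + 1):
        assert i * lhs(i, j) == j * comb(2*i - j - 1, i - 1)
```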

We remark that the inverse of the matrix $A_n$ defined in (9) was also studied combinatorially in ([64], p. 8), while the identity in (10) was also discussed combinatorially at the end of ([65], p. 3162). We emphasize that the approaches and methods used in [64,65] are quite different from those in [63]. This means that the approaches and methods used by Qi and his coauthors in [63] are novel and innovative.


By the way, as for the Catalan numbers $C_n$, we recommend the new papers [66–69] by Qi and his coauthors. In these papers, the Catalan numbers $C_n$ were generalized, and some new properties of $C_n$ were discovered by considering the logarithmically complete monotonicity of their generating functions; integral representations of $C_n$ were surveyed in [70] and applied in [63].

In [71], Hong and Qi established several new inequalities for generalized eigenvalues of perturbation problems on Hermitian matrices. If $A \in \mathbb{C}^{n \times n}$ is a Hermitian complex matrix, then $A$ has a purely real spectrum. Let us denote its eigenvalues by $\lambda_1(A), \lambda_2(A), \dots, \lambda_n(A)$ and assume that

$$
\lambda\_1(A) \ge \lambda\_2(A) \ge \cdots \ge \lambda\_n(A).
$$

By $\|\cdot\|_2$ we denote the spectral norm of a matrix. If $E \in \mathbb{C}^{n \times n}$ is also a Hermitian complex matrix, then the famous Weyl theorem states that

$$\max_{1 \le i \le n} \left| \lambda_i(A) - \lambda_i(A + E) \right| \le \|E\|_2.$$

Besides this result, we know that the following inequalities hold: if $A, B \in \mathbb{C}^{n \times n}$ are Hermitian complex matrices and $i, j, k, \ell, m \in \mathbb{N}$ satisfy $j + k - 1 \le i \le \ell + m - n$, then we have

$$
\lambda\_\ell(A) + \lambda\_m(B) \le \lambda\_i(A+B) \le \lambda\_j(A) + \lambda\_k(B).
$$

In particular,

$$
\lambda_i(A) + \lambda_n(B) \le \lambda_i(A+B) \le \lambda_i(A) + \lambda_1(B).
$$
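For $2 \times 2$ real symmetric matrices the eigenvalues are available in closed form, so this special case can be sanity-checked without any linear-algebra library. This is our own illustration (the helper names are ours, not from [71]):

```python
import math

def eig2(a, b, c):
    # eigenvalues, in descending order, of the symmetric matrix [[a, b], [b, c]]
    mean = (a + c) / 2
    dev = math.sqrt(((a - c) / 2)**2 + b * b)
    return (mean + dev, mean - dev)

def add(A, B):
    # entrywise sum of two symmetric matrices stored as triples (a, b, c)
    return tuple(x + y for x, y in zip(A, B))

A = (3.0, 1.0, 0.0)   # [[3, 1], [1, 0]]
B = (2.0, -1.0, 1.0)  # [[2, -1], [-1, 1]]
lamA, lamB, lamS = eig2(*A), eig2(*B), eig2(*add(A, B))
for i in range(2):
    # lambda_i(A) + lambda_n(B) <= lambda_i(A+B) <= lambda_i(A) + lambda_1(B), n = 2
    assert lamA[i] + lamB[1] <= lamS[i] + 1e-12
    assert lamS[i] <= lamA[i] + lamB[0] + 1e-12
```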

Accurately, Hong and Qi proved in [71] the following results:

1. Suppose that $A, B, H, E \in \mathbb{C}^{n \times n}$ are Hermitian complex matrices, that $B$ is positive definite, that $\nu = \|E\|_2/\lambda_n(B) < 1$, and that the positive integers $i, j, k, \ell, m \in \mathbb{N}$ satisfy $j + k - 1 \le i \le \ell + m - n - 1$.

(a) If $\lambda_i(A+H) \ge 0$, then

$$\frac{\lambda_\ell(AB^{-1}) + \lambda_m(HB^{-1})}{1+\nu} \le \lambda_i \left( (A+H)(B+H)^{-1} \right) \le \frac{\lambda_j(AB^{-1}) + \lambda_k(HB^{-1})}{1-\nu}.$$

(b) If $\lambda_i(A+H) \le 0$, then

$$\frac{\lambda_j(AB^{-1}) + \lambda_k(HB^{-1})}{1-\nu} \le \lambda_i\left((A+H)(B+H)^{-1}\right) \le \frac{\lambda_\ell(AB^{-1}) + \lambda_m(HB^{-1})}{1+\nu}.$$

2. Suppose that $A, B, H, E \in \mathbb{C}^{n \times n}$ are Hermitian complex matrices, that $B$ is positive definite, and that $\nu = \|E\|_2/\lambda_n(B) < 1$. Then we have

$$\beta_i(A)\lambda_i(AB^{-1}) + \beta_n(H)\lambda_n(HB^{-1}) \le \lambda_i\left((A+H)(B+H)^{-1}\right) \le \alpha_i(A)\lambda_i(AB^{-1}) + \alpha_1(H)\lambda_1(HB^{-1}).$$

For more information on this topic, see also the joint papers [72,73] with Y. Hong, in which the authors considered determinantal inequalities of the Hua–Marcus–Zhang type for quaternion matrices and refined two determinantal inequalities for positive semidefinite matrices.

#### *2.8. Bounds for Ratio of Bernoulli Numbers*

One of the most influential results of F. Qi was presented in [74], in which Qi established a double inequality for the ratio of two non-zero neighboring Bernoulli numbers. This result has been cited almost one hundred times in recent years.

It is well known that the Bernoulli numbers *B<sup>n</sup>* can be generated by

$$\frac{z}{\mathbf{e}^z - 1} = 1 - \frac{z}{2} + \sum\_{k=1}^{\infty} B\_{2k} \frac{z^{2k}}{(2k)!}, \quad |z| < 2\pi.$$

Since the function $\frac{x}{\mathrm{e}^x - 1} - 1 + \frac{x}{2}$ is even on $\mathbb{R}$, all the Bernoulli numbers $B_{2n+1}$ for $n \in \mathbb{N}$ are equal to 0. By virtue of ([74], Theorem 1.1), we have

$$\frac{2^{2k-1}-1}{2^{2k+1}-1} \frac{(2k+1)(2k+2)}{\pi^2} < \frac{\left|B\_{2k+2}\right|}{\left|B\_{2k}\right|} < \frac{2^{2k}-1}{2^{2k+2}-1} \frac{(2k+1)(2k+2)}{\pi^2}, \quad k \in \mathbb{N}.\tag{11}$$

This double inequality immediately implies

$$\lim\_{k \to \infty} \frac{\left| B\_{2k+2} \right|}{k^2 \left| B\_{2k} \right|} = \frac{1}{\pi^2}.$$

In order to achieve his aims, Qi used the well-known identity

$$B_{2k} = 2 \frac{(-1)^{k+1} (2k)!}{(2\pi)^{2k}} \zeta(2k), \quad k \in \mathbb{N},$$

where *ζ*(·) is the Riemann zeta function.
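The double inequality (11) can be checked exactly for small indices. The sketch below is our own verification code (not from [74]); it computes exact Bernoulli numbers from the standard recurrence $\sum_{j=0}^{k}\binom{k+1}{j}B_j = 0$ and compares the ratio against the two bounds:

```python
import math
from fractions import Fraction

def bernoulli(m):
    # exact B_0..B_m: binom(k+1, k) B_k = -(sum_{j<k} binom(k+1, j) B_j)
    B = [Fraction(1)]
    for k in range(1, m + 1):
        s = sum(math.comb(k + 1, j) * B[j] for j in range(k))
        B.append(-s / (k + 1))
    return B

B = bernoulli(20)
for k in range(1, 9):
    ratio = abs(B[2*k + 2] / B[2*k])
    lower = (2**(2*k - 1) - 1) / (2**(2*k + 1) - 1) * (2*k + 1) * (2*k + 2) / math.pi**2
    upper = (2**(2*k) - 1) / (2**(2*k + 2) - 1) * (2*k + 1) * (2*k + 2) / math.pi**2
    assert lower < ratio < upper  # inequality (11)
```

For instance, $|B_4/B_2| = (1/30)/(1/6) = 1/5$ indeed lies between the $k=1$ bounds $\approx 0.174$ and $\approx 0.243$.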

The double inequality (11) and related results in [75,76] have been extended, refined, generalized, improved, and applied, without self-citation, in over 50 preprints and papers, such as [77–92], by many mathematicians, combinatorialists, and physicists around the world.

#### *2.9. Special Polynomials*

The Boole polynomials $Bl_n(x;\alpha)$ are defined by

$$\frac{(1+t)^{x}}{1+(1+t)^{\alpha}} = \sum_{n=0}^{\infty} Bl_n(x;\alpha) \frac{t^n}{n!}.$$
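A generating function like this can be expanded mechanically with truncated power-series arithmetic. The sketch below is our own illustration (all helper names are ours); it recovers $Bl_0(x;\alpha) = 1/2$ and $Bl_1(x;\alpha) = x/2 - \alpha/4$, which follow directly from expanding the quotient:

```python
import math

def binom_series(alpha, N):
    # coefficients of (1+t)^alpha up to t^N (generalized binomial series)
    c = [1.0]
    for k in range(1, N + 1):
        c.append(c[-1] * (alpha - k + 1) / k)
    return c

def series_div(num, den, N):
    # power-series quotient num/den modulo t^{N+1}
    q = []
    for n in range(N + 1):
        q.append((num[n] - sum(den[n - k] * q[k] for k in range(n))) / den[0])
    return q

def boole(N, x, alpha):
    # Bl_n(x; alpha) for n = 0..N, read off from the generating function
    num = binom_series(x, N)
    pa = binom_series(alpha, N)
    den = [1.0 + pa[0]] + pa[1:]          # 1 + (1+t)^alpha
    q = series_div(num, den, N)
    return [q[n] * math.factorial(n) for n in range(N + 1)]

vals = boole(2, 3.0, 1.0)
assert abs(vals[0] - 0.5) < 1e-12            # Bl_0 = 1/2
assert abs(vals[1] - (3/2 - 1/4)) < 1e-12    # Bl_1(3; 1) = x/2 - alpha/4
```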

The Peters polynomials (or higher-order Boole polynomials) $s_n(x;\alpha,\nu)$, defined by

$$\frac{(1+t)^{x}}{[1+(1+t)^{\alpha}]^{\nu}} = \sum_{n=0}^{\infty} s_n(x;\alpha,\nu) \frac{t^n}{n!},$$

clearly generalize the Boole polynomials. It is also known that the Peters polynomials can be further generalized. For example, the degenerate Peters polynomials $s_n(x;\alpha,\nu;\lambda)$, which are defined by

$$\frac{\mathrm{e}^{x[(1+t)^{\lambda}-1]/\lambda}}{\left(1+\mathrm{e}^{\alpha[(1+t)^{\lambda}-1]/\lambda}\right)^{\nu}} = \sum_{n=0}^{\infty} s_n(x;\alpha,\nu;\lambda) \frac{t^n}{n!},$$

generalize the Peters polynomials.

In a joint research article [93] with Y.-W. Li and M. C. Dağlı, F. Qi showed that

$$\begin{split} s_n(x;\alpha,\nu;\lambda) = (n-1)! \sum_{k=1}^{n} &\left[ \frac{(-1)^{k}}{\lambda^{k-1} k!} \sum_{\ell=1}^{k} (-1)^{\ell} \ell \binom{k}{\ell} \binom{\lambda \ell - 1}{n-1} \right] \\ &\times \left[ \sum_{\ell=1}^{k} \frac{\langle -\nu \rangle_{\ell}}{2^{\nu + \ell}} \sum_{\substack{r+s=\ell\\ i+j=k}} \binom{k}{i} \left(\frac{x}{\nu}\right)^{i} \left(\alpha - \frac{x}{\nu}\right)^{j} S(i,r)\, S(j,s) \right], \end{split}$$

where the falling factorial $\langle z \rangle_n$ is defined for $z \in \mathbb{C}$ by

$$\langle z \rangle\_n = \prod\_{k=0}^{n-1} (z - k) = \begin{cases} z(z - 1) \cdots (z - n + 1), & n \ge 1; \\ 1, & n = 0. \end{cases}$$
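In code, the falling factorial is a one-line product. This small helper (ours, for illustration) mirrors the definition above:

```python
def falling(z, n):
    # <z>_n = z (z-1) ... (z-n+1), with the empty product <z>_0 = 1
    result = 1
    for k in range(n):
        result *= z - k
    return result

assert falling(5, 3) == 5 * 4 * 3      # 60
assert falling(-2, 2) == (-2) * (-3)   # 6
assert falling(7, 0) == 1              # empty product
```

The same code works for complex or floating-point `z`, since only subtraction and multiplication are used.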

Setting $x = 0$ in this formula, we obtain the special result stated in ([93], Theorem 4.1).

In addition to the paper [93], Feng Qi and his coauthors have done further work in this branch, for example in the papers [94–114]. Many of these papers are related to the partial Bell polynomials $B_{n,k}$ mentioned above.

#### *2.10. Complete Monotonicity Properties Related to Polygamma Functions*

In [115], Qi employed the convolution theorem for the Laplace transform, Bernstein's theorem for completely monotonic functions, and some other analytic techniques to establish necessary and sufficient conditions for two functions, defined via two derivatives of a function involving the trigamma function, to be completely monotonic or monotonic. See also a joint paper [116] with R. P. Agarwal, where the authors analyzed the complete monotonicity of several classes of functions related to ratios of gamma functions, and a joint paper [117] with D. Lim, where the authors investigated a ratio of finitely many gamma functions and its monotonicity properties. We note that the papers [115,117] are companions of the papers [118–126]. This series of articles originates from the paper [127] and its preprints.


#### *2.11. Convex Functions and Inequalities*

From 2012 on, F. Qi collaborated with Professor Bo-Yan Xi and his academic group at Inner Mongolia University for Nationalities and paid much attention to generalizations of convex functions and to the establishment of integral inequalities of the Hermite–Hadamard type.

The theory of convex functions is extremely significant in many areas of pure and applied sciences. The Jensen inequality and the Hermite–Hadamard type inequalities are still very attractive fields of research within the theory of convex functions. Concerning the scientific work of Professor Feng Qi in this area, we would like to mention the research articles [128–140] and references cited therein.

Here we briefly describe only the results obtained by Professor Feng Qi in collaboration with Y. Wang and M.-M. Zheng in [133]. Suppose that $\alpha \in (0, 1]$ and $m \in (0, 1]$. Recall that a function $f : [0, b] \to \mathbb{R}$, where $0 < b < \infty$, is said to be $(\alpha, m)$-convex if and only if

$$f(tx + m(1 - t)y) \le t^{\alpha} f(x) + m(1 - t^{\alpha})f(y)$$

for $x, y \in [0, b]$ and $t \in [0, 1]$. If $\alpha = 1$, then an $(\alpha, m)$-convex function $f : [0, b] \to \mathbb{R}$ is also said to be $m$-convex. Further on, a non-empty set $S \subseteq \mathbb{R}^n$ is said to be invex with respect to the map $\nu : S \times S \to \mathbb{R}^n$ if and only if $x + t\nu(x, y) \in S$ for all $t \in [0, 1]$ and $x, y \in S$. If this is the case, a function $f : S \to \mathbb{R}$ is said to be preinvex with respect to $\nu$ if and only if

$$f(y + t\nu(x, y)) \le tf(x) + (1 - t)f(y), \quad x, y \in S, \quad t \in [0, 1].$$
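A definition like $(\alpha, m)$-convexity can be probed numerically on a grid of test points. The checker below is our own illustrative sketch (the function name, grid size, and tolerance are ours, not from [133]):

```python
def is_alpha_m_convex(f, b, alpha, m, grid=30):
    # grid test of f(t*x + m*(1-t)*y) <= t^alpha f(x) + m (1 - t^alpha) f(y)
    # over x, y in [0, b] and t in [0, 1]; a small tolerance absorbs rounding
    pts = [b * i / grid for i in range(grid + 1)]
    ts = [i / grid for i in range(grid + 1)]
    return all(
        f(t*x + m*(1 - t)*y) <= t**alpha * f(x) + m*(1 - t**alpha)*f(y) + 1e-12
        for x in pts for y in pts for t in ts
    )

assert is_alpha_m_convex(lambda u: u*u, 1.0, 1.0, 1.0)       # x^2 is (1,1)-convex
assert not is_alpha_m_convex(lambda u: -u*u, 1.0, 1.0, 1.0)  # -x^2 is not
```

A grid test can only refute convexity, never prove it, but it is a handy sanity check when experimenting with candidate functions.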

We know the following conclusions:

1. If $-\infty < c < a < b < d < \infty$, the function $f : [c, d] \to \mathbb{R}$ is differentiable, and the derivative $|f'|$ is convex on $[a, b]$, then we have

$$\left| \frac{f(a) + f(b)}{2} - \frac{1}{b - a} \int_{a}^{b} f(x) \,\mathrm{d}x \right| \le \frac{b - a}{8} \bigl(|f'(a)| + |f'(b)|\bigr).$$

2. For $0 \le a < b < \infty$, if the function $f : [0, b] \to \mathbb{R}$ is $m$-convex for $m \in (0, 1]$ and Lebesgue integrable, then we have

$$\left| \frac{1}{b-a} \int_{a}^{b} f(x) \,\mathrm{d}x \right| \le \min \left\{ \frac{f(a) + mf(b/m)}{2}, \frac{f(b) + mf(a/m)}{2} \right\}.$$

3. For $0 \le a < b < \infty$ and $\alpha, m \in (0, 1]$, if the function $f : [0, b] \to \mathbb{R}$ is $(\alpha, m)$-convex and differentiable and its first derivative is Lebesgue integrable, then we have

$$\begin{aligned} \left| \frac{f(a) + f(b)}{2} - \frac{1}{b - a} \int_a^b f(x) \,\mathrm{d}x \right| &\le \frac{b - a}{2} \frac{1}{2^{1 - 1/q}} \\ &\quad\times \min \left\{ \left[ v_1 |f'(a)|^q + v_2 m |f'(b)|^q \right]^{1/q}, \left[ v_1 |f'(b)|^q + v_2 m |f'(a)|^q \right]^{1/q} \right\}, \end{aligned}$$

provided that the function $|f'|^q$ is $(\alpha, m)$-convex for some real number $q \ge 1$, where

$$v_1 = \frac{\alpha + 1/2^{\alpha}}{(\alpha + 1)(\alpha + 2)} \quad \text{and} \quad v_2 = \frac{1}{(\alpha + 1)(\alpha + 2)} \left(\frac{\alpha^2 + \alpha + 2}{2} - \frac{1}{2^{\alpha}}\right).$$
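Conclusion 1 above (the trapezoid-type bound) is easy to sanity-check numerically. The following is our own verification sketch for $f = \exp$ on $[0, 1]$, using a midpoint-rule integral:

```python
import math

def trapezoid_gap(f, a, b, n=10000):
    # |(f(a)+f(b))/2 - (1/(b-a)) * integral_a^b f| via the midpoint rule
    h = (b - a) / n
    integral = sum(f(a + h * (k + 0.5)) for k in range(n)) * h
    return abs((f(a) + f(b)) / 2 - integral / (b - a))

a, b = 0.0, 1.0
gap = trapezoid_gap(math.exp, a, b)
# (b-a)/8 * (|f'(a)| + |f'(b)|) with f = f' = exp
bound = (b - a) / 8 * (abs(math.exp(a)) + abs(math.exp(b)))
assert gap <= bound
```

For this example the gap equals $(3 - \mathrm{e})/2 \approx 0.141$, comfortably below the bound $(1 + \mathrm{e})/8 \approx 0.465$.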


In ([133], Definition 7), the authors introduced the following notion: suppose that a non-empty set $S \subseteq \mathbb{R}^n$ is invex with respect to $\nu$ and let $\alpha \in (0, 1]$. A function $f : S \to \mathbb{R}$ is said to be $\alpha$-preinvex with respect to $\nu$ if and only if

$$f(y + t\nu(x, y)) \le t^{\alpha} f(x) + (1 - t^{\alpha}) f(y)$$

for $x, y \in S$ and $t \in [0, 1]$. The main results are the Hermite–Hadamard type inequalities in ([133], Theorems 5 to 9), where the authors mainly use the assumption that the function $|f'|^q$ is $\alpha$-preinvex for some real numbers $\alpha \in (0, 1]$ and $q \ge 1$. To date, Qi and Xi's academic group have jointly published over 120 papers in peer-reviewed journals. Owing to their work on generalizing convex functions and on establishing Hermite–Hadamard type inequalities, Qi and Xi's group acquired financial support from the National Natural Science Foundation of China under Grant No. 11361038 between 2014 and 2017.

#### *2.12. Fractional Derivatives and Integrals*

Let us note that Professor F. Qi analyzed, in three joint works [141–143] with W.-S. Du, A. Ghaffar, C.-J. Huang, S. M. Hussain, K. S. Nisar, and G. Rahman, the Čebyšev and Grüss type inequalities for conformable $k$-fractional integral operators; in these works the authors also investigated the Hermite–Hadamard type inequalities for $k$-fractional conformable integrals.

Concerning integral inequalities, it is also worth noting that F. Qi and his coauthors generalized, in [144–147], the Young integral inequality using Taylor's theorem in terms of higher-order derivatives and their norms; the authors applied their results to estimate several concrete definite integrals.

#### *2.13. Differential Geometry*

From September 1982 to July 1986, F. Qi majored in mathematical education as a bachelor's student at the Department of Mathematics, Henan University, China. From September 1986 to June 1989, he majored in differential geometry for his master's research, supervised by Professor Yi-Pei Chen at the Department of Mathematics, Xiamen University, China. From March 1996 to January 1999, he majored in analysis and topology for his doctorate, supervised by Professor Sen-Lin Xu at the Department of Mathematics, University of Science and Technology of China. In this period, he jointly published over 10 papers, including [148–152], in differential geometry.

#### *2.14. Pólya Type Integral Inequalities*

Starting in 1993, Qi's research extended to mathematical inequalities and their applications, including generalizations of the Pólya integral inequality [153]. His first paper on the Pólya type integral inequalities is [154] and his most recent is [155]; on this topic, he also published the papers [156–163]. He then surveyed the Pólya type integral inequalities from their origin to date in [164]. Some of these results have been applied to refine the famous Young integral inequality in the papers [145–147].


#### *2.15. Properties of Special Mathematical Means*

Starting in 1997, Qi's research was further extended to mathematical means and their applications. He started out by publishing [165,166]. His newest creative papers in this area include [38,167–178], for example. In these papers, he discovered the logarithmic convexity and Schur convexity of the extended mean values (also known as Stolarsky means), considered the logarithmically complete monotonicity of special means, and established integral and Lévy–Khintchine representations of some special means and their reciprocals. Concretely speaking, Qi and his coauthors obtained the following results:

1. Let $n \in \mathbb{N}$ with $n \ge 2$ and let $a = (a_1, a_2, \dots, a_n)$ be a positive sequence, that is, $a_k > 0$ for $1 \le k \le n$. The arithmetic and geometric means $A_n(a)$ and $G_n(a)$ of the positive sequence $a$ are defined, respectively, as

$$A_n(a) = \frac{1}{n} \sum_{k=1}^{n} a_k \quad \text{and} \quad G_n(a) = \left(\prod_{k=1}^{n} a_k\right)^{1/n}.$$

For $z \in \mathbb{C} \setminus (-\infty, -\min\{a_k, 1 \le k \le n\}]$ and $n \ge 2$, let $e = (\overbrace{1, 1, \dots, 1}^{n})$ and

$$G_n(a+ze) = \left[\prod_{k=1}^n (a_k+z)\right]^{1/n}.$$

In ([176], Theorem 1.1), by virtue of the Cauchy integral formula in the theory of complex functions, the following integral representation was established.

Let $\sigma$ be a permutation of $\{1, 2, \dots, n\}$ such that $\sigma(a) = (a_{\sigma(1)}, a_{\sigma(2)}, \dots, a_{\sigma(n)})$ is a rearrangement of $a$ in ascending order $a_{\sigma(1)} \le a_{\sigma(2)} \le \cdots \le a_{\sigma(n)}$. Then the principal branch of the geometric mean $G_n(a + ze)$ has the integral representation

$$G_n(a + ze) = A_n(a) + z - \frac{1}{\pi} \sum_{\ell=1}^{n-1} \sin \frac{\ell \pi}{n} \int_{a_{\sigma(\ell)}}^{a_{\sigma(\ell+1)}} \left| \prod_{k=1}^{n} (a_k - t) \right|^{1/n} \frac{\mathrm{d}t}{t + z} \tag{12}$$

for $z \in \mathbb{C} \setminus (-\infty, -\min\{a_k, 1 \le k \le n\}]$.

Taking *z* = 0 in the integral representation (12) yields the fundamental inequality

$$G_n(a) = A_n(a) - \frac{1}{\pi} \sum_{\ell=1}^{n-1} \sin \frac{\ell \pi}{n} \int_{a_{\sigma(\ell)}}^{a_{\sigma(\ell+1)}} \left[ \prod_{k=1}^{n} |a_k - t| \right]^{1/n} \frac{\mathrm{d}t}{t} \le A_n(a). \tag{13}$$

For $0 < a_1 \le a_2 \le a_3$, taking $n = 2, 3$ in (13) gives

$$
\frac{a\_1 + a\_2}{2} - \sqrt{a\_1 a\_2} = \frac{1}{\pi} \int\_{a\_1}^{a\_2} \sqrt{\left(1 - \frac{a\_1}{t}\right) \left(\frac{a\_2}{t} - 1\right)} \,\mathrm{d}\, t \ge 0
$$

and

$$\frac{a\_1 + a\_2 + a\_3}{3} - \sqrt[3]{a\_1 a\_2 a\_3} = \frac{\sqrt{3}}{2\pi} \int\_{a\_1}^{a\_3} \sqrt[3]{\left| \left( 1 - \frac{a\_1}{t} \right) \left( 1 - \frac{a\_2}{t} \right) \left( 1 - \frac{a\_3}{t} \right) \right|} \,\mathrm{d}t \ge 0.$$
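As a numerical illustration (our own code, not from [176]), the two-term identity can be checked with a midpoint-rule quadrature:

```python
import math

def gap_integral(a1, a2, n=50000):
    # midpoint rule for (1/pi) * integral_{a1}^{a2} sqrt((1 - a1/t)(a2/t - 1)) dt
    h = (a2 - a1) / n
    s = sum(math.sqrt(max((1 - a1/t) * (a2/t - 1), 0.0))
            for t in (a1 + h * (k + 0.5) for k in range(n)))
    return s * h / math.pi

a1, a2 = 1.0, 4.0
amgm_gap = (a1 + a2) / 2 - math.sqrt(a1 * a2)  # arithmetic minus geometric mean
assert abs(amgm_gap - gap_integral(a1, a2)) < 1e-3
```

Here the arithmetic-geometric gap is $2.5 - 2 = 0.5$, and the quadrature reproduces it to well within the stated tolerance.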

These texts are excerpted from the site https://math.stackexchange.com/a/4256320/945479, accessed on 10 July 2022.

2. The weighted version of the integral representation (12) can be found in ([175], Theorem 3.1). We restate the weighted version as follows.

For $n \ge 2$, $a = (a_1, a_2, \dots, a_n)$, and $w = (w_1, w_2, \dots, w_n)$ with $a_k, w_k > 0$ and $\sum_{k=1}^{n} w_k = 1$, the weighted arithmetic and geometric means $A_{w,n}(a)$ and $G_{w,n}(a)$ of $a$ with the positive weight $w$ are defined, respectively, as

$$A_{w,n}(a) = \sum_{k=1}^n w_k a_k \quad \text{and} \quad G_{w,n}(a) = \prod_{k=1}^n a_k^{w_k}.$$

Let us denote $\alpha = \min\{a_k, 1 \le k \le n\}$. For a complex variable $z \in \mathbb{C} \setminus (-\infty, -\alpha]$, we introduce the complex function

$$G\_{w,n}(a+z) = \prod\_{k=1}^{n} (a\_k + z)^{w\_k}.$$

With the aid of the Cauchy integral formula in the theory of complex functions, the following integral representation was established in ([175], Theorem 3.1).

Let $0 < a_k \le a_{k+1}$ for $1 \le k \le n-1$ and $z \in \mathbb{C} \setminus (-\infty, -a_1]$. Then the principal branch of the weighted geometric mean $G_{w,n}(a + z)$ with a positive weight $w = (w_1, w_2, \dots, w_n)$ has the integral representation

$$G_{w,n}(a+z) - A_{w,n}(a) = -\frac{1}{\pi} \sum_{\ell=1}^{n-1} \sin\left[ \left(\sum_{k=1}^{\ell} w_k \right) \pi \right] \int_{a_\ell}^{a_{\ell+1}} \prod_{k=1}^n |a_k - t|^{w_k} \frac{\mathrm{d}t}{t+z}. \tag{14}$$

Letting *z* = 0 in the integral representation (14) gives the fundamental inequality

$$G_{w,n}(a) = A_{w,n}(a) - \frac{1}{\pi} \sum_{\ell=1}^{n-1} \sin \left[ \left( \sum_{k=1}^{\ell} w_k \right) \pi \right] \int_{a_\ell}^{a_{\ell+1}} \prod_{k=1}^n |a_k - t|^{w_k} \frac{\mathrm{d}t}{t} \le A_{w,n}(a). \tag{15}$$

Setting *n* = 2 in (15) leads to

$$\begin{split} a\_1^{w\_1} a\_2^{w\_2} &= w\_1 a\_1 + w\_2 a\_2 - \frac{\sin(w\_1 \pi)}{\pi} \int\_{a\_1}^{a\_2} \left( 1 - \frac{a\_1}{t} \right)^{w\_1} \left( \frac{a\_2}{t} - 1 \right)^{w\_2} \mathrm{d}t \\ &\leq w\_1 a\_1 + w\_2 a\_2 \end{split} \tag{16}$$

for $w_1, w_2 > 0$ such that $w_1 + w_2 = 1$.
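Similarly, identity (16) can be probed numerically. This is our own sketch; `weighted_gap_integral` is a hypothetical helper name, not from [175]:

```python
import math

def weighted_gap_integral(a1, a2, w1, n=50000):
    # midpoint rule for
    # (sin(w1*pi)/pi) * integral_{a1}^{a2} (1 - a1/t)^{w1} (a2/t - 1)^{w2} dt
    w2 = 1.0 - w1
    h = (a2 - a1) / n
    s = sum((1 - a1/t)**w1 * (a2/t - 1)**w2
            for t in (a1 + h * (k + 0.5) for k in range(n)))
    return math.sin(w1 * math.pi) / math.pi * s * h

a1, a2, w1 = 1.0, 4.0, 0.3
gap = w1*a1 + (1 - w1)*a2 - a1**w1 * a2**(1 - w1)  # weighted AM minus GM
assert abs(gap - weighted_gap_integral(a1, a2, w1)) < 1e-3
```

With $w_1 = w_2 = 1/2$ this reduces to the unweighted two-term identity, since $\sin(\pi/2) = 1$.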

3. For $a_k < a_{k+1}$ and $w_k > 0$ with $\sum_{k=1}^{n} w_k = 1$ and $n \ge 2$, the principal branch of the reciprocal $H_{a,w,n}(z)$ of the weighted geometric mean $G_{w,n}(a + z)$ can be represented by

$$H_{a,w,n}(z) = \frac{1}{\prod_{k=1}^{n} (z + a_k)^{w_k}} = \frac{1}{\pi} \sum_{\ell=1}^{n-1} \sin \left( \pi \sum_{k=1}^{\ell} w_k \right) \int_{a_\ell}^{a_{\ell+1}} \frac{1}{\prod_{k=1}^{n} |t - a_k|^{w_k}} \frac{\mathrm{d}t}{t + z}, \tag{17}$$

where $z \in \mathbb{C} \setminus [-a_n, -a_1]$. Consequently, the reciprocal $H_{a,w,n}(t - a_1)$ of the weighted geometric mean $G_{w,n}(a + t - a_1)$ is a Stieltjes function and a logarithmically completely monotonic function; see ([172], Theorem 2.1).

#### *2.16. Invited Visits and Promotions*

Owing to his work in mathematical inequalities and their applications, F. Qi and his academic groups obtained support from the National Natural Science Foundation of China under Grant No. 10001016 between 2001 and 2003. Consequently, Qi received an invitation and support from Professor Sever S. Dragomir to visit Victoria University (Melbourne, Australia) for collaboration between November 2001 and January 2002; this was his first visit abroad. Supported by the China Scholarship Council, he visited Victoria University again to collaborate with Professors Pietro Cerone and Sever S. Dragomir between March 2008 and February 2009.


Owing to his invention of the notion of logarithmically completely monotonic functions and his work on special functions, Qi received an invitation and support from Professor Christian Berg at the University of Copenhagen to attend the Workshop on Integral Transforms, Positivity and Applications from 1 to 3 September 2010.

Feng Qi was also invited and supported by Professors Ahmet Ocak Akdemir, Wing-Sum Cheung, Yeol Je Cho, Junesang Choi, Wei-Shih Du, Taekyun Kim, and Jen-Chih Yao to visit the University of Hong Kong twice in 2004; to visit Dongguk University at Gyeongju, Gyeongsang National University, Kwangwoon University, Kyungpook National University, and several other universities in South Korea from 2012 to 2015; to visit Antalya, Turkey, in 2016; and to visit Sun Yat-sen University and Kaohsiung Normal University in Taiwan in 2018, for academic collaborations and international conferences, including taking part in the International Congress of Mathematicians 2014.

Owing to his excellent work in university mathematics education, administration, and academic research, Qi was rapidly promoted from lecturer to associate professor, to full professor, and to Specially Appointed Professor for Universities of Henan Province at Henan Polytechnic University in November 1995, October 1999, and November 2005, respectively.

#### *2.17. Editorial and Refereeing Appointments*

Currently, Dr. Qi serves as editor-in-chief, associate editor, editor, or editorial board member for over 25 internationally reputed, peer-reviewed journals, such as the Journal of Inequalities and Applications, which is indexed by the Science Citation Index-Expanded and Scopus.

The first academic journal specializing in mathematical inequalities, the Journal of Inequalities and Applications, was founded by Professor Ravi Prakash Agarwal in 1997. This history was recounted in Qi's survey article [164]. In addition, the following seven journals also specialize in mathematical inequalities:


It is also worth mentioning the Monographs in Inequalities: Series in Inequalities at the site http://books.ele-math.com/, accessed on 10 July 2022.

Professor Qi was a recipient of the Top Peer Reviewer award powered by Publons in 2016 and 2019. See the certificates in Figure 1.

**Figure 1.** Qi's Certificates for Top Peer Reviewer powered by Publons in 2016 and 2019.

#### **3. Statistics of Qi's Contributions**

Since 1993, Qi has published over 670 peer-reviewed articles, including over 42 papers in Chinese, in over 220 journals, book chapters, collections, and conference proceedings in mathematics; see Table 1.


**Table 1.** The year distribution of Qi's papers formally published since 1993.

In Feng Qi's Google Scholar profile, dated 2 August 2022, over 850 of his papers, preprints, and other works were indexed and were cited 16,858 times in total. See the screenshot in Figure 2.

**Figure 2.** Statistics from Qi's Google Scholar profile dated on 2 August 2022.

In Feng Qi's Scopus profile, dated 2 August 2022, 417 of his articles were indexed and were cited 6,590 times by 2,007 documents. See the screenshot in Figure 3.

**Figure 3.** Statistics from Qi's Scopus profile dated on 2 August 2022.

In Qi's Publons profile, dated 2 August 2022, 412 of his papers were indexed by the Web of Science Core Collection and were cited 5,915 times. See the screenshots in Figure 4.


**Figure 4.** Statistics from Qi's Publons profile dated on 2 August 2022.

From 2014 to 2021, Qi was consecutively ranked among the Most Cited Chinese Researchers in Mathematics. These rankings were carried out jointly by Elsevier and ShanghaiRanking Consultancy. See Figure 5.

In Stanford University's 2021 list of the World's Top 2% Scientists, Qi ranked 61,510/190,064 in the Single Year Impact Data (2020) and 96,040/186,178 in the Career-long Data (1960–2020). For more data, please visit https://doi.org/10.17632/btchxktzyw.3, accessed on 10 July 2022.

In the 2022 edition of the World's Top Mathematics Scientists by Research.com, dated 2 August 2022, Qi ranked 330th worldwide; 580 of his papers were indexed and were cited 11,291 times. See Figure 6 or visit https://research.com/u/feng-qi, accessed on 8 July 2022.


**Figure 6.** Statistics from Qi's Research.com profile dated on 2 August 2022.

Since 1992, Qi has taken charge of and participated in two national research projects supported by the National Natural Science Foundation of China, several provincial scientific projects supported by Henan Province, and several university scientific projects supported by Henan Polytechnic University and Tianjin Polytechnic University. In total, he has acquired about 1.5 million CNY in funding support.

Since 2002, the names "Feng Qi", "F. Qi", and "Qi" have appeared in the titles of over 89 papers or preprints published or announced by hundreds of mathematicians around the globe. See, for example, the papers [40–48,88,179–185].

To date, 49 of Qi's papers or preprints have been cited at the Wikipedia site https://en.wikipedia.org/wiki/Euler\_numbers, accessed on 10 July 2022, and in eight monographs or handbooks [37,186–192].

After the notion of a "logarithmically completely monotonic function" was explicitly defined in the preprints [193,194] and the papers [195,196] (an important paper on logarithmically completely monotonic functions is [197]), the notion has gradually become a standard term in the mathematical community. To date, in addition to over 60 preprints and papers by Qi and his coauthors, there are over 40 papers and preprints by other mathematicians whose titles contain the phrases "logarithmically completely monotonic function", "logarithmically complete monotonicity", or "logarithmically completely monotone". See, for example, the monographs [191,192] and the papers [198–201]. Qi has pointed out several times that the terminology of the logarithmically completely monotonic function was first used, without an explicit definition, in the paper [202].

By the Web of Science Core Collection, Feng Qi's papers have been cited at least in the following 50 research areas: mathematics, science technology other topics, computer science, plant sciences, mathematical computational biology, engineering, business economics, physics, mechanics, communication, telecommunications, biochemistry molecular biology, agriculture, operations research management science, food science technology, genetics heredity, life sciences biomedicine other topics, materials science, nutrition dietetics, anatomy morphology, chemistry, pharmacology pharmacy, biodiversity conservation, cell biology, environmental sciences ecology, instruments instrumentation, mathematical methods in social sciences, physiology, psychology, thermodynamics, transportation, acoustics, astronomy astrophysics, behavioral sciences, biophysics, biotechnology applied microbiology, cardiovascular system cardiology, developmental biology, energy fuels, government law, health care sciences services, infectious diseases, pathology, polymer science, psychiatry, public administration, public environmental occupational health, social issues, sociology, toxicology, and the like.

#### **4. Conclusions**

Recommended by Professor Ravi Prakash Agarwal, Professor Feng Qi is currently an editor of the *Journal of Inequalities and Applications*, the first academic journal in history specializing in mathematical inequalities, founded by Professor Ravi Prakash Agarwal in 1997, as mentioned in Section 2.17. As one of the first two master's students supervised by Professor Feng Qi between September 2004 and June 2007, Professor Jian Cao published the papers [27,35,66,203–218] jointly with Qi. As an academic friend, Professor Wei-Shih Du published the papers [95,101,110,142] jointly with Qi. Currently, Professor Feng Qi is an editor of the journal *Results in Nonlinear Analysis*, founded by Professor Erdal Karapinar. As an international colleague, Professor Marko Kostić and his coauthors published the papers [77–79,82,84], in which Qi's results mentioned in Section 2.8 were cited and applied many times.

There have been more mathematical studies by Professor Feng Qi and his coauthors than those summarized in this paper. From the review articles [116,164,219–224], for example, one can see further systematic contributions by F. Qi and his coauthors to mathematics. We have summarized only a small part of the works and ideas created by Professor Feng Qi. This manuscript is a survey of the scientific work of Feng Qi and his coauthors, but not a complete survey of all the topics they have worked on; an overall survey of Qi's work would fill a book of more than 500 pages.

**Author Contributions:** Writing—original draft, R.P.A., J.C., W.-S.D., E.K., and M.K. All authors contributed equally to the manuscript and read and approved the final manuscript.

**Funding:** Marko Kostić is partially supported by Grant No. 451-03-68/2020/14/200156 of the Ministry of Science and Technological Development, Republic of Serbia. Jian Cao is partially supported by Grant No. LY21A010019 of the Zhejiang Provincial Natural Science Foundation of China. Wei-Shih Du is partially supported by Grant No. MOST 111-2115-M-017-002 of the Ministry of Science and Technology of the Republic of China.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** The study did not report any data.

**Acknowledgments:** The authors thank anonymous referees for their careful corrections to and valuable comments on the original version of this paper.

**Conflicts of Interest:** The authors declare no conflicts of interest.

#### **References**


### *Article* **Bounds for the Neuman–Sándor Mean in Terms of the Arithmetic and Contra-Harmonic Means**

**Wen-Hui Li <sup>1</sup> , Peng Miao <sup>1</sup> and Bai-Ni Guo 2,\***


**Abstract:** In this paper, the authors provide several sharp upper and lower bounds for the Neuman–Sándor mean in terms of the arithmetic and contra-harmonic means, and present some new sharp inequalities involving the hyperbolic sine and hyperbolic cosine functions.

**Keywords:** Neuman–Sándor mean; arithmetic mean; contra-harmonic mean; bound; inequality; hyperbolic sine function; hyperbolic cosine function

**MSC:** Primary 26E60; Secondary 26D07; 33B10; 41A30

#### **1. Introduction**

In the literature, the quantities

$$\begin{aligned} A(s,t) &= \frac{s+t}{2}, & G(s,t) &= \sqrt{st}, & H(s,t) &= \frac{2st}{s+t},\\ \overline{C}(s,t) &= \frac{2(s^2+st+t^2)}{3(s+t)}, & C(s,t) &= \frac{s^2+t^2}{s+t},\\ S(s,t) &= \sqrt{\frac{s^2+t^2}{2}}, & M\_p(s,t) &= \begin{cases} \left(\frac{s^p+t^p}{2}\right)^{1/p}, & p \neq 0;\\ \sqrt{st}, & p = 0 \end{cases} \end{aligned}$$

are called in [1–3], for example, the arithmetic mean, geometric mean, harmonic mean, centroidal mean, contra-harmonic mean, root-square mean, and the power mean of order *p* of two positive numbers *s* and *t*, respectively.
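These definitions translate directly into code. The following minimal Python sketch (the function names are ours, mirroring the paper's notation) evaluates each mean for positive arguments; nothing beyond the displayed formulas is assumed.

```python
import math

def A(s, t): return (s + t) / 2                                  # arithmetic mean
def G(s, t): return math.sqrt(s * t)                             # geometric mean
def H(s, t): return 2 * s * t / (s + t)                          # harmonic mean
def Cbar(s, t): return 2 * (s**2 + s*t + t**2) / (3 * (s + t))   # centroidal mean
def C(s, t): return (s**2 + t**2) / (s + t)                      # contra-harmonic mean
def S(s, t): return math.sqrt((s**2 + t**2) / 2)                 # root-square mean

def M_p(s, t, p):
    """Power mean of order p; the p = 0 case is the geometric mean."""
    if p == 0:
        return math.sqrt(s * t)
    return ((s**p + t**p) / 2) ** (1 / p)

# The familiar ordering H < G < A < C holds for distinct positive arguments:
s, t = 2.0, 5.0
assert H(s, t) < G(s, t) < A(s, t) < C(s, t)
assert abs(M_p(s, t, 1) - A(s, t)) < 1e-12  # M_1 is the arithmetic mean
```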

For *s*, *t* > 0 with *s* ≠ *t*, the first Seiffert mean *P*(*s*, *t*), the second Seiffert mean *T*(*s*, *t*), and the Neuman–Sándor mean *M*(*s*, *t*) are, respectively, defined [4–6] by

$$P(s,t) = \frac{s-t}{4\arctan\left(\sqrt{\frac{s}{t}}\right) - \pi}, \quad T(s,t) = \frac{s-t}{2\arctan\frac{s-t}{s+t}}, \quad M(s,t) = \frac{s-t}{2\operatorname{arsinh}\frac{s-t}{s+t}},$$

where arsinh *v* = ln(*v* + √(*v*² + 1)) is the inverse hyperbolic sine function. The first Seiffert mean *P*(*s*, *t*) can be rewritten [6] (Equation (2.4)) as

$$P(s,t) = \frac{s-t}{2\arcsin\frac{s-t}{s+t}}.$$

A chain of inequalities

$$G(s, t) < L\_{-1}(s, t) < P(s, t) < A(s, t) < M(s, t) < T(s, t) < Q(s, t)$$

**Citation:** Li, W.-H.; Miao, P.; Guo, B.-N. Bounds for the Neuman–Sándor Mean in Terms of the Arithmetic and Contra-Harmonic Means. *Axioms* **2022**, *11*, 236. https://doi.org/ 10.3390/axioms11050236

Academic Editor: Wei-Shih Du

Received: 28 April 2022 Accepted: 16 May 2022 Published: 19 May 2022

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

were given in [6], where

$$L\_p(s,t) = \begin{cases} \left[\frac{t^{p+1} - s^{p+1}}{(p+1)(t-s)}\right]^{1/p}, & p \neq -1, 0; \\ \frac{1}{\mathrm{e}} \left(\frac{t^t}{s^s}\right)^{1/(t-s)}, & p = 0; \\ \frac{t-s}{\ln t - \ln s}, & p = -1 \end{cases}$$

is the *p*-th generalized logarithmic mean of *s* and *t* with *s* ≠ *t*. In [6,7], the three double inequalities

$$A(s, t) < M(s, t) < T(s, t), \quad P(s, t) < M(s, t) < T^2(s, t),$$

and

$$A(s,t)T(s,t) < M^2(s,t) < \frac{A^2(s,t) + T^2(s,t)}{2}$$

were established for *s*, *t* > 0 with *s* ≠ *t*.

For 0 < *s*, *t* < 1/2 with *s* ≠ *t*, the inequalities

$$\begin{aligned} \frac{G(s,t)}{G(1-s,1-t)} &< \frac{L\_{-1}(s,t)}{L\_{-1}(1-s,1-t)} < \frac{P(s,t)}{P(1-s,1-t)}\\ &< \frac{A(s,t)}{A(1-s,1-t)} < \frac{M(s,t)}{M(1-s,1-t)} < \frac{T(s,t)}{T(1-s,1-t)} \end{aligned}$$

of Ky Fan type were presented in [6] (Proposition 2.2).
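As a quick numerical sanity check (ours, not part of the paper), the chain of inequalities above can be evaluated in Python for a sample pair; here we assume *Q*(*s*, *t*) denotes the root-square mean *S*(*s*, *t*) defined earlier.

```python
import math

def chain(s, t):
    """Return [G, L_{-1}, P, A, M, T, Q] for distinct s, t > 0."""
    u = (s - t) / (s + t)
    G = math.sqrt(s * t)                         # geometric mean
    L = (t - s) / (math.log(t) - math.log(s))    # logarithmic mean L_{-1}
    P = (s - t) / (2 * math.asin(u))             # first Seiffert mean
    A = (s + t) / 2                              # arithmetic mean
    M = (s - t) / (2 * math.asinh(u))            # Neuman–Sándor mean
    T = (s - t) / (2 * math.atan(u))             # second Seiffert mean
    Q = math.sqrt((s**2 + t**2) / 2)             # root-square mean
    return [G, L, P, A, M, T, Q]

vals = chain(7.0, 2.0)
# The chain G < L_{-1} < P < A < M < T < Q should be strictly increasing:
assert all(x < y for x, y in zip(vals, vals[1:]))
```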

In [8], Li and two coauthors showed that the double inequality

$$L\_{p\_0}(s, t) < M(s, t) < L\_2(s, t)$$

holds for all *s*, *t* > 0 with *s* ≠ *t*, where *p*₀ = 1.843 . . . is the unique solution of the equation (*p* + 1)^{1/p} = 2 ln(1 + √2).

In [9], Neuman proved that the double inequalities

$$
\alpha Q(s, t) + (1 - \alpha) A(s, t) < M(s, t) \\
< \beta Q(s, t) + (1 - \beta) A(s, t)
$$

and

$$
\lambda C(s, t) + (1 - \lambda) A(s, t) < M(s, t) < \mu C(s, t) + (1 - \mu) A(s, t)
$$

hold for all *s*, *t* > 0 with *s* ≠ *t* if and only if

$$\alpha \le \frac{1 - \ln\left(1 + \sqrt{2}\right)}{\left(\sqrt{2} - 1\right)\ln\left(1 + \sqrt{2}\right)} = 0.3249\dots, \quad \beta \ge \frac{1}{3}$$

and

$$
\lambda \le \frac{1 - \ln\left(1 + \sqrt{2}\right)}{\ln\left(1 + \sqrt{2}\right)} = 0.1345\ldots, \quad \mu \ge \frac{1}{6}.
$$

In [10] (Theorems 1.1 to 1.3), it was found that the double inequalities

$$\begin{aligned} \alpha\_1 H(s, t) + (1 - \alpha\_1) Q(s, t) &< M(s, t) < \beta\_1 H(s, t) + (1 - \beta\_1) Q(s, t), \\ \alpha\_2 G(s, t) + (1 - \alpha\_2) Q(s, t) &< M(s, t) < \beta\_2 G(s, t) + (1 - \beta\_2) Q(s, t), \end{aligned}$$

and

$$
\alpha\_3 H(s, t) + (1 - \alpha\_3) C(s, t) < M(s, t) \\
< \beta\_3 H(s, t) + (1 - \beta\_3) C(s, t)
$$

hold for all *s*, *t* > 0 with *s* ≠ *t* if and only if

$$\begin{aligned} \alpha\_1 \ge \frac{2}{9} = 0.2222\dots, \quad \beta\_1 \le 1 - \frac{1}{\sqrt{2}\ln\left(1 + \sqrt{2}\right)} = 0.1977\dots, \\ \alpha\_2 \ge \frac{1}{3} = 0.3333\dots, \quad \beta\_2 \le 1 - \frac{1}{\sqrt{2}\ln\left(1 + \sqrt{2}\right)} = 0.1977\dots, \end{aligned}$$

and

$$\alpha\_3 \ge 1 - \frac{1}{2\ln\left(1 + \sqrt{2}\right)} = 0.4327\dots, \quad \beta\_3 \le \frac{5}{12} = 0.4166\dots$$

In 2017, Chen and two coauthors [11] established bounds for the Neuman–Sándor mean *M*(*s*, *t*) in terms of the convex combination of the logarithmic mean and the second Seiffert mean *T*(*s*, *t*). In 2022, Wang and Yin [12] obtained bounds for the reciprocal of the Neuman–Sándor mean *M*(*s*, *t*).

In [13], it was shown that the double inequality

$$\frac{\alpha}{A(s,t)} + \frac{1-\alpha}{\overline{C}(s,t)} < \frac{1}{TD(s,t)} < \frac{\beta}{A(s,t)} + \frac{1-\beta}{\overline{C}(s,t)}\tag{1}$$

holds for all *s*, *t* > 0 with *s* ≠ *t* if and only if *α* ≤ *π* − 3 and *β* ≥ 1/4, where *TD*(*s*, *t*) is the Toader mean introduced in [14] by

$$TD(s,t) = \frac{2}{\pi} \int\_0^{\pi/2} \sqrt{s^2 \cos^2 \phi + t^2 \sin^2 \phi} \,\mathrm{d}\,\phi\,.$$

In this paper, motivated by the double inequality (1), we aim to find the largest values *α*₁, *α*₂, *α*₃ and the smallest values *β*₁, *β*₂, *β*₃ such that the double inequalities

$$\frac{\alpha\_1}{C(s,t)} + \frac{1-\alpha\_1}{A(s,t)} < \frac{1}{M(s,t)} < \frac{\beta\_1}{C(s,t)} + \frac{1-\beta\_1}{A(s,t)},\tag{2}$$

$$\frac{\alpha\_2}{C^2(s,t)} + \frac{1-\alpha\_2}{A^2(s,t)} < \frac{1}{M^2(s,t)} < \frac{\beta\_2}{C^2(s,t)} + \frac{1-\beta\_2}{A^2(s,t)},\tag{3}$$

and

$$\alpha\_3 C^2(s, t) + (1 - \alpha\_3)A^2(s, t) < M^2(s, t) < \beta\_3 C^2(s, t) + (1 - \beta\_3)A^2(s, t) \tag{4}$$

hold for all positive real numbers *s* and *t* with *s* ≠ *t*.
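Before turning to the proofs, the target inequalities (2)–(4) can be spot-checked numerically. The following Python sketch (ours) uses the sharp constants that are established later in Theorems 1–3 of this paper.

```python
import math

def A(s, t): return (s + t) / 2                                  # arithmetic mean
def C(s, t): return (s**2 + t**2) / (s + t)                      # contra-harmonic mean
def M(s, t): return (s - t) / (2 * math.asinh((s - t) / (s + t)))  # Neuman–Sándor mean

ln12 = math.log(1 + math.sqrt(2))
a1, b1 = 2 * (1 - ln12), 1 / 6                  # sharp constants of Theorem 1
a2, b2 = 4 / 3 * (1 - ln12**2), 1 / 6           # sharp constants of Theorem 2
a3, b3 = (1 - ln12**2) / (3 * ln12**2), 1 / 6   # sharp constants of Theorem 3

for s, t in [(3.0, 1.0), (10.0, 9.0), (5.0, 0.5)]:
    c, a, m = C(s, t), A(s, t), M(s, t)
    assert a1 / c + (1 - a1) / a < 1 / m < b1 / c + (1 - b1) / a              # (2)
    assert a2 / c**2 + (1 - a2) / a**2 < 1 / m**2 < b2 / c**2 + (1 - b2) / a**2  # (3)
    assert a3 * c**2 + (1 - a3) * a**2 < m**2 < b3 * c**2 + (1 - b3) * a**2      # (4)
```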

#### **2. Lemmas**

To attain our main purposes, we need the following lemmas.

**Lemma 1** ([15] (Theorem 1.25))**.** *For* −∞ < *s* < *t* < ∞*, let f* , *g* : [*s*, *t*] → R *be continuous on* [*s*, *t*]*, differentiable on* (*s*, *t*)*, and satisfy g*′(*v*) ≠ 0 *on* (*s*, *t*)*. If f*′(*v*)/*g*′(*v*) *is* (*strictly*) *increasing* (*or* (*strictly*) *decreasing, respectively*) *on* (*s*, *t*)*, so are the functions*

$$\frac{f(v) - f(s)}{g(v) - g(s)} \quad \text{and} \quad \frac{f(v) - f(t)}{g(v) - g(t)}.$$

**Lemma 2** ([16] (Lemma 1.1))**.** *Suppose that the power series f*(*v*) = ∑_{ℓ=0}^∞ *u*_ℓ *v*^ℓ *and g*(*v*) = ∑_{ℓ=0}^∞ *w*_ℓ *v*^ℓ *have the radius of convergence r* > 0 *and that w*_ℓ > 0 *for all* ℓ ∈ N = {0, 1, 2, . . . }*. Let h*(*v*) = *f*(*v*)/*g*(*v*)*. Then the following statements are true.*

*1. If the sequence* {*u*_ℓ/*w*_ℓ} *is* (*strictly*) *increasing* (*or decreasing, respectively*)*, then h*(*v*) *is also* (*strictly*) *increasing* (*or decreasing, respectively*) *on* (0, *r*)*.*

*2. If the sequence* {*u*_ℓ/*w*_ℓ} *is* (*strictly*) *increasing* (*or decreasing, respectively*) *for* 0 < ℓ ≤ ℓ₀ *and* (*strictly*) *decreasing* (*or increasing, respectively*) *for* ℓ > ℓ₀*, then there exists x*₀ ∈ (0, *r*) *such that h*(*v*) *is* (*strictly*) *increasing* (*or decreasing, respectively*) *on* (0, *x*₀) *and* (*strictly*) *decreasing* (*or increasing, respectively*) *on* (*x*₀, *r*)*.*

**Lemma 3.** *Let*

$$h\_1(v) = \frac{2v\sinh v + \cosh v - 1}{3\sinh^2 v}.$$

*Then h*₁(*v*) *is strictly decreasing on* (0, ∞)*, with* lim_{v→0⁺} *h*₁(*v*) = 5/6 *and* lim_{v→∞} *h*₁(*v*) = 0*.*

#### **Proof.** Let

$$f\_1(v) = 2v \sinh v + \cosh v - 1 \quad \text{and} \quad f\_2(v) = 3 \sinh^2 v = \frac{3}{2} [\cosh(2v) - 1].$$

Using the power series

$$\sinh v = \sum\_{\ell=0}^{\infty} \frac{v^{2\ell+1}}{(2\ell+1)!} \quad \text{and} \quad \cosh v = \sum\_{\ell=0}^{\infty} \frac{v^{2\ell}}{(2\ell)!} \tag{5}$$

we can express the functions *f*1(*v*) and *f*2(*v*) as

$$f\_1(v) = \sum\_{\ell=0}^{\infty} \frac{2(2\ell+2)! + (2\ell+1)!}{(2\ell+1)!(2\ell+2)!} v^{2\ell+2} \quad \text{and} \quad f\_2(v) = \frac{3}{2} \sum\_{\ell=0}^{\infty} \frac{2^{2\ell+2} v^{2\ell+2}}{(2\ell+2)!}.$$

Hence, we have

$$h\_1(v) = \frac{\sum\_{\ell=0}^{\infty} u\_\ell v^{2\ell+2}}{\sum\_{\ell=0}^{\infty} w\_\ell v^{2\ell+2}}\,\tag{6}$$

where *u*_ℓ = [2(2ℓ + 2)! + (2ℓ + 1)!]/[(2ℓ + 1)!(2ℓ + 2)!] and *w*_ℓ = 3 × 2^{2ℓ+1}/(2ℓ + 2)!. Let *c*_ℓ = *u*_ℓ/*w*_ℓ. Then

$$c\_{\ell} = \frac{2(2\ell+2)! + (2\ell+1)!}{3(2\ell+1)!2^{2\ell+1}} \quad \text{and} \quad c\_{\ell+1} - c\_{\ell} = -\frac{4(3\ell+4)(2\ell+2)! + 3(2\ell+3)!}{3(2\ell+3)!2^{2\ell+3}} < 0.$$


As a result, by Lemma 2, it follows that the function *h*₁(*v*) is strictly decreasing on (0, ∞). From (6), it is easy to see that lim_{v→0⁺} *h*₁(*v*) = *u*₀/*w*₀ = 5/6.

Using the L'Hospital rule leads to lim_{v→∞} *h*₁(*v*) = 0 immediately. The proof of Lemma 3 is complete.

#### **Lemma 4.** *Let*

$$h\_2(v) = \frac{\left(\sinh^2 v - v^2\right)\cosh^4 v}{\left(\cosh^2 v + 1\right)\sinh^4 v}.$$

*Then h*₂(*v*) *is strictly increasing on* (0, ∞)*, with the limits* lim_{v→0⁺} *h*₂(*v*) = 1/6 *and* lim_{v→∞} *h*₂(*v*) = 1*.*

**Proof.** Let

$$f\_3(v) = \left(\sinh^2 v - v^2\right)\cosh^4 v \quad \text{and} \quad f\_4(v) = \left(\cosh^2 v + 1\right)\sinh^4 v.$$

Since

$$f\_3'(v) = 2\left(\sinh v + 3\sinh^3 v - v\cosh v - 2v^2\sinh v\right)\cosh^3 v$$

and

$$f\_4'(v) = 2\left(3\cosh^2 v + 1\right)\sinh^3 v \cosh v,$$

we obtain

$$\begin{aligned} \frac{f\_3'(v)}{f\_4'(v)} &= \frac{(\sinh v + 3\sinh^3 v - v\cosh v - 2v^2 \sinh v)\cosh^2 v}{(3\cosh^2 v + 1)\sinh^3 v} \\ &= \frac{\cosh^2 v}{3\cosh^2 v + 1} \frac{\sinh v + 3\sinh^3 v - v\cosh v - 2v^2 \sinh v}{\sinh^3 v} \\ &= \frac{1}{3 + \frac{1}{\cosh^2 v}} \left( 3 + \frac{\sinh v - v\cosh v - 2v^2 \sinh v}{\sinh^3 v} \right) \\ &= \frac{1}{3 + \frac{1}{\cosh^2 v}} [3 + g(v)], \end{aligned}$$

where

$$g(v) = \frac{\sinh v - v \cosh v - 2v^2 \sinh v}{\sinh^3 v}.$$

By using the identity that sinh(3*v*) = 3 sinh *v* + 4 sinh<sup>3</sup> *v*, we arrive at

$$g(v) = 4 \frac{\sinh v - v \cosh v - 2v^2 \sinh v}{\sinh(3v) - 3 \sinh v} \stackrel{\triangle}{=} 4 \frac{g\_1(v)}{g\_2(v)},$$

where *g*₁(*v*) = sinh *v* − *v* cosh *v* − 2*v*² sinh *v* and *g*₂(*v*) = sinh(3*v*) − 3 sinh *v*. Straightforward computation gives

$$\begin{aligned} g\_1'(v) &= -\left( 5v \sinh v + 2v^2 \cosh v \right), & g\_2'(v) &= 3[\cosh(3v) - \cosh v], \\ g\_1''(v) &= -\left( 5\sinh v + 9v \cosh v + 2v^2 \sinh v \right), & g\_2''(v) &= 3[3\sinh(3v) - \sinh v], \end{aligned}$$

and

$$g\_1(0^+) = g\_2(0^+) = g\_1'(0^+) = g\_2'(0^+) = g\_1''(0^+) = g\_2''(0^+) = 0.$$

Consequently, we obtain

$$\frac{g\_1''(v)}{g\_2''(v)} = -\frac{5\sinh v + 9v\cosh v + 2v^2\sinh v}{3[3\sinh(3v) - \sinh v]} \stackrel{\triangle}{=} -\frac{1}{3}\frac{g\_3(v)}{g\_4(v)}.$$

Using the power series of sinh *v* and cosh *v*, we deduce

$$\begin{split} g\_{3}(v) &= 5 \sum\_{\ell=0}^{\infty} \frac{v^{2\ell+1}}{(2\ell+1)!} + 9 \sum\_{\ell=0}^{\infty} \frac{v^{2\ell+1}}{(2\ell)!} + 2 \sum\_{\ell=0}^{\infty} \frac{v^{2\ell+3}}{(2\ell+1)!} \\ &= 14v + \sum\_{\ell=1}^{\infty} \left[ \frac{5}{(2\ell+1)!} + \frac{9}{(2\ell)!} + \frac{2}{(2\ell-1)!} \right] v^{2\ell+1} \\ &= 14v + \sum\_{\ell=1}^{\infty} \left[ \frac{(4\ell+7)(2\ell)! + 9(2\ell+1)!}{(2\ell)!(2\ell+1)!} \right] v^{2\ell+1} \end{split}$$

and

$$g\_4(v) = 3\sum\_{\ell=0}^{\infty} \frac{(3v)^{2\ell+1}}{(2\ell+1)!} - \sum\_{\ell=0}^{\infty} \frac{v^{2\ell+1}}{(2\ell+1)!} = \sum\_{\ell=0}^{\infty} \left[\frac{3^{2\ell+2}-1}{(2\ell+1)!}\right] v^{2\ell+1}.$$

Therefore, we find

$$\frac{g\_3(v)}{g\_4(v)} = \frac{\sum\_{\ell=0}^\infty u\_\ell v^{2\ell+1}}{\sum\_{\ell=0}^\infty w\_\ell v^{2\ell+1}},$$

where

$$u\_{\ell} = \begin{cases} 14, & \ell = 0; \\ \frac{(4\ell+7)(2\ell)! + 9(2\ell+1)!}{(2\ell)!(2\ell+1)!}, & \ell \ge 1 \end{cases} \quad \text{and} \quad w\_{\ell} = \frac{3^{2\ell+2} - 1}{(2\ell+1)!} > 0.$$

Let *c*_ℓ = *u*_ℓ/*w*_ℓ. Then

$$c\_{\ell} = \begin{cases} \frac{7}{4}, & \ell = 0; \\ \frac{(4\ell+7)(2\ell)! + 9(2\ell+1)!}{(3^{2\ell+2}-1)(2\ell)!}, & \ell \ge 1. \end{cases}$$

When ℓ = 0, we have *c*₁ − *c*₀ = −51/40 < 0. When ℓ ≥ 1, it follows that

$$\begin{split} c\_{\ell+1} - c\_{\ell} &= \frac{(4\ell+11)(2\ell+2)! + 9(2\ell+3)!}{(3^{2\ell+4}-1)(2\ell+2)!} - \frac{(4\ell+7)(2\ell)! + 9(2\ell+1)!}{(3^{2\ell+2}-1)(2\ell)!} \\ &= \frac{22\ell+38}{3^{2\ell+4}-1} - \frac{22\ell+16}{3^{2\ell+2}-1} \\ &= \frac{(22\ell+38)\left(3^{2\ell+2}-1\right) - (22\ell+16)\left(3^{2\ell+4}-1\right)}{\left(3^{2\ell+2}-1\right)\left(3^{2\ell+4}-1\right)} \\ &= -\frac{2\left[(88\ell+53)3^{2\ell+2} + 11\right]}{\left(3^{2\ell+2}-1\right)\left(3^{2\ell+4}-1\right)} \\ &< 0. \end{split}$$

By Lemma 2, it follows that the function *g*₃(*v*)/*g*₄(*v*) is strictly decreasing on (0, ∞), so the function *g*₁″(*v*)/*g*₂″(*v*) is strictly increasing on (0, ∞). Applying Lemma 1, it follows that the function *g*(*v*) is strictly increasing on (0, ∞). By the L'Hospital rule, we have

$$\lim\_{v \to 0^+} g(v) = -\frac{7}{3} \quad \text{and} \quad \lim\_{v \to \infty} g(v) = 0.$$

It is common knowledge that the function cosh *v* is strictly increasing on (0, ∞). Hence, the function 1/(3 + 1/cosh² *v*) is strictly increasing on (0, ∞). Therefore, the function *h*₂(*v*) is strictly increasing on (0, ∞) with the limits

$$\lim\_{v \to 0} h\_2(v) = \frac{1}{6} \quad \text{and} \quad \lim\_{v \to \infty} h\_2(v) = 1.$$

The proof of Lemma 4 is complete.

**Lemma 5.** *Let*

$$h\_3(v) = \frac{2v\cosh^2 v}{\sinh v}.$$

*Then h*₃(*v*) *is strictly increasing on* (0, ∞) *and has the limit* lim_{v→0⁺} *h*₃(*v*) = 2*.*

**Proof.** Let *k*1(*v*) = 2*v* cosh<sup>2</sup> *v* = *v* cosh(2*v*) + *v* and *k*2(*v*) = sinh *v*. By Equation (5), we have

$$k\_1(v) = 2v + \sum\_{\ell=1}^{\infty} \frac{2^{2\ell}}{(2\ell)!} v^{2\ell+1} \quad \text{and} \quad k\_2(v) = \sum\_{\ell=0}^{\infty} \frac{v^{2\ell+1}}{(2\ell+1)!}.$$

Hence,

$$h\_3(v) = \frac{2v + \sum\_{\ell=1}^{\infty} u\_\ell v^{2\ell+1}}{\sum\_{\ell=0}^{\infty} w\_\ell v^{2\ell+1}},\tag{7}$$

where

$$u\_{\ell} = \begin{cases} 2, & \ell = 0; \\ \frac{2^{2\ell}}{(2\ell)!}, & \ell \ge 1 \end{cases} \quad \text{and} \quad w\_{\ell} = \frac{1}{(2\ell + 1)!}.$$

Let *c*_ℓ = *u*_ℓ/*w*_ℓ. Then

$$c\_{\ell} = \begin{cases} 2, & \ell = 0; \\ (2\ell + 1)2^{2\ell}, & \ell \ge 1 \end{cases} \quad \text{and} \quad c\_{\ell + 1} - c\_{\ell} = \begin{cases} 10, & \ell = 0; \\ (6\ell + 11)2^{2\ell} > 0, & \ell \ge 1. \end{cases}$$

Thus, by Lemma 2, it follows that the function *h*₃(*v*) is strictly increasing on (0, ∞). From (7), it is easy to see that lim_{v→0⁺} *h*₃(*v*) = *u*₀/*w*₀ = 2. The proof of Lemma 5 is complete.
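The monotonicity and limits claimed in Lemmas 3–5 can be sanity-checked numerically. The following Python sketch (ours, not part of the paper) evaluates *h*₁, *h*₂, *h*₃ on a grid.

```python
import math

def h1(v):  # Lemma 3: strictly decreasing, 5/6 at 0+, 0 at infinity
    return (2*v*math.sinh(v) + math.cosh(v) - 1) / (3*math.sinh(v)**2)

def h2(v):  # Lemma 4: strictly increasing, 1/6 at 0+, 1 at infinity
    s, c = math.sinh(v), math.cosh(v)
    return (s*s - v*v) * c**4 / ((c*c + 1) * s**4)

def h3(v):  # Lemma 5: strictly increasing, 2 at 0+
    return 2*v*math.cosh(v)**2 / math.sinh(v)

grid = [0.1 * k for k in range(1, 60)]
assert all(h1(a) > h1(b) for a, b in zip(grid, grid[1:]))  # h1 decreasing
assert all(h2(a) < h2(b) for a, b in zip(grid, grid[1:]))  # h2 increasing
assert all(h3(a) < h3(b) for a, b in zip(grid, grid[1:]))  # h3 increasing
assert abs(h1(1e-4) - 5/6) < 1e-6 and abs(h2(1e-4) - 1/6) < 1e-6
assert abs(h3(1e-4) - 2) < 1e-6
```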

#### **3. Bounds for Neuman–Sándor Mean**

Now we are in a position to state and prove our main results.

**Theorem 1.** *For s*, *t* > 0 *with s* ≠ *t, the double inequality* (2) *holds if and only if*

$$
\alpha\_1 \ge 2\left[1 - \ln\left(1 + \sqrt{2}\right)\right] = 0.237253\dots \quad \text{and} \quad \beta\_1 \le \frac{1}{6}.
$$

**Proof.** Without loss of generality, we assume that *s* > *t* > 0. Let *q* = (*s* − *t*)/(*s* + *t*). Then *q* ∈ (0, 1) and

$$\frac{\frac{1}{M(s,t)} - \frac{1}{A(s,t)}}{\frac{1}{\mathbb{C}(s,t)} - \frac{1}{A(s,t)}} = \frac{\frac{\text{arsinh }q}{q} - 1}{\frac{1}{1+q^2} - 1}.$$

Let *q* = sinh *φ*. Then *φ* ∈ (0, ln(1 + √2)) and

$$\frac{\frac{1}{M(s,t)} - \frac{1}{A(s,t)}}{\frac{1}{C(s,t)} - \frac{1}{A(s,t)}} = \frac{\frac{\phi}{\sinh\phi} - 1}{\frac{1}{\cosh^2\phi} - 1} = \frac{(\sinh\phi - \phi)\cosh^2\phi}{\sinh^3\phi} \triangleq F(\phi) = \frac{k\_1(\phi)}{k\_2(\phi)}.$$

Let

$$k\_1(\phi) = (\sinh \phi - \phi) \cosh^2 \phi \quad \text{and} \quad k\_2(\phi) = \sinh^3 \phi.$$

Then elaborated computations lead to *k*₁(0⁺) = *k*₂(0⁺) = 0 and

$$\frac{k\_1'(\phi)}{k\_2'(\phi)} = \frac{2(\sinh\phi - \phi)\sinh\phi + (\cosh\phi - 1)\cosh\phi}{3\sinh^2\phi} = 1 - \frac{2\phi\sinh\phi + \cosh\phi - 1}{3\sinh^2\phi}.$$

Combining this with Lemmas 1 and 3 reveals that the function *F*(*φ*) is strictly increasing on (0, ln(1 + √2)). Moreover, it is easy to compute the limits

$$\lim\_{\phi \to 0^+} F(\phi) = \frac{1}{6} \quad \text{and} \quad \lim\_{\phi \to \ln(1+\sqrt{2})^-} F(\phi) = 2 - 2\ln\left(1 + \sqrt{2}\right).$$

The proof of Theorem 1 is thus complete.

**Corollary 1.** *For all φ* ∈ (0, ln(1 + √2))*, the double inequality*

$$1 - \beta\_1 \left( 1 - \frac{1}{\cosh^2 \phi} \right) < \frac{\phi}{\sinh \phi} < 1 - \alpha\_1 \left( 1 - \frac{1}{\cosh^2 \phi} \right) \tag{8}$$

*holds if and only if*

$$
\alpha\_1 \le \frac{1}{6} \quad \text{and} \quad \beta\_1 \ge 2\left[1 - \ln\left(1 + \sqrt{2}\right)\right] = 0.237253\dots.
$$

**Theorem 2.** *For s*, *t* > 0 *with s* ≠ *t, the double inequality* (3) *holds if and only if*

$$
\alpha\_2 \ge \frac{4}{3}\left[1 - \ln^2\left(1 + \sqrt{2}\right)\right] = 0.297574\dots \quad \text{and} \quad \beta\_2 \le \frac{1}{6}.
$$

**Proof.** Without loss of generality, we assume that *s* > *t* > 0. Let *q* = (*s* − *t*)/(*s* + *t*). Then *q* ∈ (0, 1) and

$$\frac{\frac{1}{M^2(s,t)} - \frac{1}{A^2(s,t)}}{\frac{1}{C^2(s,t)} - \frac{1}{A^2(s,t)}} = \frac{\frac{\operatorname{arsinh}^2 q}{q^2} - 1}{\frac{1}{(1+q^2)^2} - 1}.$$

Let *q* = sinh *φ*. Then *φ* ∈ (0, ln(1 + √2)) and

$$\frac{\frac{1}{M^2(s,t)} - \frac{1}{A^2(s,t)}}{\frac{1}{C^2(s,t)} - \frac{1}{A^2(s,t)}} = \frac{\frac{\phi^2}{\sinh^2 \phi} - 1}{\frac{1}{\cosh^4 \phi} - 1} = \frac{(\sinh^2 \phi - \phi^2)\cosh^4 \phi}{(\cosh^2 \phi + 1)\sinh^4 \phi} \stackrel{\triangle}{=} H(\phi).$$

By Lemma 4, it is easy to show that *H*(*φ*) is strictly increasing on (0, ln(1 + √2)). Moreover, the limits

$$\lim\_{\phi \to 0^+} H(\phi) = \frac{1}{6} \quad \text{and} \quad \lim\_{\phi \to \ln(1 + \sqrt{2})^-} H(\phi) = \frac{4}{3} \left[ 1 - \ln^2\left(1 + \sqrt{2}\right) \right]$$

can be computed readily. The double inequality (3) is thus proved.

**Corollary 2.** *For all φ* ∈ (0, ln(1 + √2))*, the double inequality*

$$1 - \beta\_2 \left( 1 - \frac{1}{\cosh^4 \phi} \right) < \left( \frac{\phi}{\sinh \phi} \right)^2 < 1 - \alpha\_2 \left( 1 - \frac{1}{\cosh^4 \phi} \right) \tag{9}$$

*holds if and only if*

$$\alpha\_2 \le \frac{1}{6} \quad \text{and} \quad \beta\_2 \ge \frac{4}{3} \left[ 1 - \ln^2 \left( 1 + \sqrt{2} \right) \right] = 0.297574\dots$$

**Theorem 3.** *For s*, *t* > 0 *with s* ≠ *t, the double inequality* (4) *holds if and only if*

$$\alpha\_3 \le \frac{1 - \ln^2(1 + \sqrt{2})}{3\ln^2(1 + \sqrt{2})} = 0.095767\dots \quad \text{and} \quad \beta\_3 \ge \frac{1}{6}.$$

**Proof.** Without loss of generality, we assume that *s* > *t* > 0. Let *q* = (*s* − *t*)/(*s* + *t*). Then *q* ∈ (0, 1) and

$$\frac{M^2(s,t) - A^2(s,t)}{C^2(s,t) - A^2(s,t)} = \frac{\frac{q^2}{\text{arsinh}^2 q} - 1}{(1+q^2)^2 - 1}.$$

Let *q* = sinh *φ*. Then *φ* ∈ (0, ln(1 + √2)) and

$$\frac{M^2(s,t) - A^2(s,t)}{\mathbb{C}^2(s,t) - A^2(s,t)} = \frac{\frac{\sinh^2 \phi}{\phi^2} - 1}{\cosh^4 \phi - 1} \stackrel{\triangle}{=} G(\phi) = \frac{k\_1(\phi)}{k\_2(\phi)},$$

where

$$k\_1(\phi) = \frac{\sinh^2 \phi}{\phi^2} - 1 \quad \text{and} \quad k\_2(\phi) = \cosh^4 \phi - 1.$$

Then *k*₁(0⁺) = *k*₂(0⁺) = 0 and

$$\frac{k\_1'(\phi)}{k\_2'(\phi)} = \frac{\phi \cosh \phi - \sinh \phi}{2\phi^3 \cosh^3 \phi}.$$

Denote *k*₃(*φ*) = *φ* cosh *φ* − sinh *φ* and *k*₄(*φ*) = 2*φ*³ cosh³ *φ*. It is easy to obtain *k*₃(0⁺) = *k*₄(0⁺) = 0 and

$$\frac{k\_4'(\phi)}{k\_3'(\phi)} = \frac{6\phi\cosh^3\phi}{\sinh\phi} + 6\phi^2\cosh^2\phi. \tag{10}$$

Since the functions cosh *v* and *v*² cosh² *v* are strictly increasing on (0, ∞), by Lemma 5, we see that the ratio in (10) is strictly increasing and *k*₃′(*φ*)/*k*₄′(*φ*) is strictly decreasing on (0, ln(1 + √2)). Consequently, from Lemma 1, it follows that *G*(*φ*) is strictly decreasing on (0, ln(1 + √2)). The limits

$$\lim\_{\phi \to 0^+} G(\phi) = \frac{1}{6} \quad \text{and} \quad \lim\_{\phi \to \ln(1+\sqrt{2})^-} G(\phi) = \frac{1-\ln^2(1+\sqrt{2})}{3\ln^2(1+\sqrt{2})}$$

can be computed easily. The proof of Theorem 3 is thus complete.

**Corollary 3.** *For all φ* ∈ (0, ln(1 + √2))*, the double inequality*

$$1 + \alpha\_3\left(\cosh^4 \phi - 1\right) < \left(\frac{\sinh \phi}{\phi}\right)^2 < 1 + \beta\_3\left(\cosh^4 \phi - 1\right) \tag{11}$$

*holds if and only if*

$$\alpha\_3 \le \frac{1 - \ln^2(1 + \sqrt{2})}{3\ln^2(1 + \sqrt{2})} = 0.095767\dots \quad \text{and} \quad \beta\_3 \ge \frac{1}{6}.$$

#### **4. A Double Inequality**

From Lemma 5, we can deduce

$$\frac{\sinh v}{v} < \cosh^2 v \quad \text{and} \quad \frac{\sinh v}{v} > \frac{\tanh^2 v}{v^2} \tag{12}$$

for *v* ∈ (0, ∞). The inequality

$$\left(\frac{\sinh v}{v}\right)^3 > \cosh v \tag{13}$$

for *v* ∈ (0, ∞) can be found and has been applied in [17] (p. 65), [18] (p. 300), [19] (pp. 279, 3.6.9), and [20] (p. 260). In [21] (Lemma 3), Zhu recovered the fact stated in [19] (pp. 279, 3.6.9) that the exponent 3 in the inequality (13) is the least possible; that is, the inequality

$$\left(\frac{\sinh v}{v}\right)^p > \cosh v \tag{14}$$

for *v* > 0 holds if and only if *p* ≤ 3.

Inspired by (12) and (14), we establish the following double inequality.

**Theorem 4.** *The inequality*

$$
\cosh^\alpha v < \frac{\sinh v}{v} < \cosh^\beta v \tag{15}
$$


*holds for v* ≠ 0 *if and only if α* ≤ 1/3 *and β* ≥ 1*.*

**Proof.** Let

$$h(v) = \frac{\ln \sinh v - \ln v}{\ln \cosh v} \stackrel{\triangle}{=} \frac{f\_1(v)}{f\_2(v)}.$$

Direct calculation yields

$$\frac{f\_1'(v)}{f\_2'(v)} = \frac{v \cosh^2 v - \sinh v \cosh v}{v \sinh^2 v} = \frac{v \cosh(2v) + v - \sinh(2v)}{v \cosh(2v) - v} \stackrel{\triangle}{=} \frac{f\_3(v)}{f\_4(v)}.$$

Using the power series of sinh *v* and cosh *v*, we obtain

$$\begin{split} f\_3(v) &= v + v \sum\_{\ell=0}^{\infty} \frac{(2v)^{2\ell}}{(2\ell)!} - \sum\_{\ell=0}^{\infty} \frac{(2v)^{2\ell+1}}{(2\ell+1)!} = \sum\_{\ell=1}^{\infty} \left[ \frac{2^{2\ell}}{(2\ell)!} - \frac{2^{2\ell+1}}{(2\ell+1)!} \right] v^{2\ell+1} \\ &= \sum\_{\ell=0}^{\infty} \frac{(2\ell+1)2^{2\ell+2}}{(2\ell+3)!} v^{2\ell+3} \triangleq \sum\_{\ell=0}^{\infty} u\_\ell v^{2\ell+3} \end{split}$$

and

$$f\_4(v) = v \sum\_{\ell=0}^{\infty} \frac{(2v)^{2\ell}}{(2\ell)!} - v = \sum\_{\ell=1}^{\infty} \frac{2^{2\ell}}{(2\ell)!} v^{2\ell+1} = \sum\_{\ell=0}^{\infty} \frac{2^{2\ell+2}}{(2\ell+2)!} v^{2\ell+3} \triangleq \sum\_{\ell=0}^{\infty} w\_\ell v^{2\ell+3},$$

where

$$u\_{\ell} = \frac{(2\ell+1)2^{2\ell+2}}{(2\ell+3)!} \quad \text{and} \quad w\_{\ell} = \frac{2^{2\ell+2}}{(2\ell+2)!}.$$

When setting *c*_ℓ = *u*_ℓ/*w*_ℓ, we obtain that

$$c\_{\ell} = \frac{2\ell + 1}{2\ell + 3} = 1 - \frac{2}{2\ell + 3}$$

is increasing in ℓ ∈ N. Therefore, by Lemma 2, the ratio *f*₃(*v*)/*f*₄(*v*) is increasing on (0, ∞). Using Lemma 1, we obtain that

$$h(v) = \frac{f\_1(v)}{f\_2(v)} = \frac{f\_1(v) - f\_1(0^+)}{f\_2(v) - f\_2(0^+)}$$

is increasing on (0, ∞).

Moreover, the limits lim_{v→0⁺} *h*(*v*) = 1/3 and lim_{v→∞} *h*(*v*) = 1 are obvious. Since cosh^*α* *v* < sinh *v*/*v* is equivalent to *α* < *h*(*v*) for *v* > 0, since sinh *v*/*v* < cosh^*β* *v* is equivalent to *h*(*v*) < *β*, and since both sides of (15) are even in *v*, the double inequality (15) holds if and only if *α* ≤ 1/3 and *β* ≥ 1. The proof of Theorem 4 is thus complete.
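Theorem 4 can be probed numerically. This Python sketch (ours, not part of the paper) checks the double inequality (15) at the sharp exponents *α* = 1/3 and *β* = 1 for several values of *v*, and evaluates the function *h* from the proof near both ends of its range.

```python
import math

def ratio(v):
    """sinh(v)/v, the middle term of (15)."""
    return math.sinh(v) / v

# Sharp form of (15): cosh^(1/3) v < sinh(v)/v < cosh v for v != 0.
for v in [0.01, 0.5, 1.0, 3.0, 10.0, -2.0]:
    assert math.cosh(v) ** (1/3) < ratio(v) < math.cosh(v)

# h(v) = ln(sinh v / v) / ln cosh v increases from 1/3 toward 1, which is
# why no exponent above 1/3 works on the left and none below 1 on the right.
h = lambda v: math.log(ratio(v)) / math.log(math.cosh(v))
assert abs(h(1e-3) - 1/3) < 1e-3   # near the limit 1/3 at 0+
assert h(200.0) > 0.95             # slowly approaching the limit 1
```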

#### **5. A Remark**

For *v*,*r* ∈ R, we have

$$\left(\frac{\sinh v}{v}\right)^r = 1 + \sum\_{m=1}^{\infty} \left[ \sum\_{k=1}^{2m} \frac{(-r)\_k}{k!} \sum\_{j=1}^k (-1)^j \binom{k}{j} \frac{T(2m+j,j)}{\binom{2m+j}{j}} \right] \frac{(2v)^{2m}}{(2m)!},\tag{16}$$

where the rising factorial (*r*)*<sup>k</sup>* is defined by

$$(r)\_k = \prod\_{\ell=0}^{k-1} (r+\ell) = \begin{cases} r(r+1)\cdots(r+k-1), & k \ge 1\\ 1, & k=0 \end{cases}$$

and *T*(2*m* + *j*, *j*) denotes the central factorial numbers of the second kind, which can be computed by

$$T(n,\ell) = \frac{1}{\ell!} \sum\_{j=0}^{\ell} (-1)^j \binom{\ell}{j} \left(\frac{\ell}{2} - j\right)^n$$

for *n* ≥ ` ≥ 0.

The series expansion (16) was recently derived in [22] (Corollary 4.1). Can one find bounds for the function (sinh *v*/*v*)^*r* for *v*, *r* ∈ R \ {0}?
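The expansion (16) can be checked numerically. The following Python sketch (ours) implements *T*(*n*, ℓ) by the displayed formula together with the rising factorial, then compares a truncation of the series against a direct evaluation of (sinh *v*/*v*)^*r*; the truncation depth is our choice.

```python
import math

def T(n, l):
    """Central factorial numbers of the second kind, per the displayed formula."""
    return sum((-1)**j * math.comb(l, j) * (l/2 - j)**n
               for j in range(l + 1)) / math.factorial(l)

def rising(r, k):
    """Rising factorial (r)_k = r (r+1) ... (r+k-1)."""
    out = 1.0
    for i in range(k):
        out *= r + i
    return out

def sinhc_pow(v, r, terms=10):
    """Truncation of the series (16) for (sinh v / v)^r."""
    total = 1.0
    for m in range(1, terms + 1):
        inner = sum(
            rising(-r, k) / math.factorial(k)
            * sum((-1)**j * math.comb(k, j) * T(2*m + j, j) / math.comb(2*m + j, j)
                  for j in range(1, k + 1))
            for k in range(1, 2*m + 1)
        )
        total += inner * (2*v)**(2*m) / math.factorial(2*m)
    return total

v, r = 0.3, 2.5
assert abs(sinhc_pow(v, r) - (math.sinh(v)/v)**r) < 1e-8
```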

#### **6. Conclusions**

In this paper, we found the largest values *α*₁, *α*₂, *α*₃ and the smallest values *β*₁, *β*₂, *β*₃ such that the double inequalities (2), (3), and (4) hold for all positive real numbers *s* and *t* with *s* ≠ *t*. Moreover, we presented some new sharp inequalities (8), (9), (11), and (15) involving the hyperbolic sine function sinh *φ* and the hyperbolic cosine function cosh *φ*.

**Author Contributions:** Writing—original draft, W.-H.L., P.M. and B.-N.G. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research received no external funding.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Data sharing is not applicable to this article as no new data were created or analyzed in this study.

**Acknowledgments:** The authors thank anonymous referees for their careful corrections to and valuable comments on the original version of this paper.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


### *Article* **New Inequalities and Generalizations for Symmetric Means Induced by Majorization Theory**

**Huan-Nan Shi <sup>1</sup> and Wei-Shih Du 2,\***


**Abstract:** In this paper, the authors study new inequalities and generalizations for symmetric means and give new proofs for some known results by applying majorization theory.

**Keywords:** majorization; inequality; log-concave sequence; symmetric function; symmetric mean

**MSC:** Primary 05E05; Secondary 26A09; 26A51; 26D15

#### **1. Introduction and Preliminaries**

Convex analysis has wide applications to many areas of mathematics and science. In the past nearly 80 years, convex analysis has reached a high level of maturity, and an increasing number of connections have been identified between mathematics, physics, economics and finance, automatic control systems, estimation and signal processing, communications and networks and so forth. Several authors have studied a large number of new concepts of generalized convexity and concavity; see, for example, [1–7] and the references therein. Majorization theory has contributed greatly to many branches of pure and applied mathematics, especially in the field of inequalities; for more details, one can refer to [4–6,8–13] and the references therein.

**Definition 1** (see [5] (p. 4))**.** *A finite sequence* {*x*_k}_{k=1}^n *or an infinite sequence* {*x*_k}_{k=1}^∞ *of nonnegative real numbers is said to be*

*(i) logarithmically convex (abbreviated as log-convex) if*

$$
\mathfrak{x}\_k^2 \le \mathfrak{x}\_{k-1} \mathfrak{x}\_{k+1}
$$

*for all k* = 2, . . . , *n* − 1 *or for all k* ≥ 2*; and (ii) logarithmically concave (abbreviated as log-concave) if*

$$\mathbf{x}\_k^2 \ge \mathbf{x}\_{k-1} \mathbf{x}\_{k+1}$$

*for all k* = 2, . . . , *n* − 1 *or for all k* ≥ 2*.*

The following characterizations of logarithmic convexity are crucial to our proofs.

**Lemma 1** (see [5] (p. 4))**.** *Let*

$$\mathbb{N}\_0^n = \underbrace{\{0, 1, 2, \dots\} \times \{0, 1, 2, \dots\} \times \dots \times \{0, 1, 2, \dots\}}\_{n}.$$

**Citation:** Shi, H.-N.; Du, W.-S. New Inequalities and Generalizations for Symmetric Means Induced by Majorization Theory. *Axioms* **2022**, *11*, 279. https://doi.org/10.3390/ axioms11060279

Academic Editor: Mircea Merca

Received: 11 May 2022 Accepted: 8 June 2022 Published: 9 June 2022


*The necessary and sufficient condition for a non-negative sequence* {*a*_k} *to be log-convex is that, for any p* = (*p*₁, *p*₂, . . . , *p*_n)*, q* = (*q*₁, *q*₂, . . . , *q*_n) ∈ N₀ⁿ *with p* ≺ *q, we have*

$$\prod\_{i=1}^{n} a\_{p\_i} \le \prod\_{i=1}^{n} a\_{q\_i}.$$

**Corollary 1.** *Let* {*a*_k} *be a positive sequence. If* {*a*_k} *is log-concave, then, for any p* = (*p*₁, *p*₂, . . . , *p*_n)*, q* = (*q*₁, *q*₂, . . . , *q*_n) ∈ N₀ⁿ *with p* ≺ *q, we have*

$$\prod\_{i=1}^{n} a\_{p\_i} \ge \prod\_{i=1}^{n} a\_{q\_i}.$$

**Proof.** Since {*ak*} is a positive log-concave sequence, { 1 *ak* } is a positive log-convex sequence. According to Lemma 1, we have ∏ *n i*=1 1 *aq i* ≤ ∏ *n i*=1 1 *api* , this is ∏ *n i*=1 *ap<sup>i</sup>* ≥ ∏ *n i*=1 *aqi* , so that and Corollary 1 holds.

**Definition 2** (see [10,12])**.** *Let x* = (*x*1, *x*2, . . . , *xn*) *and y* = (*y*1, *y*2, . . . , *yn*) ∈ R<sup>n</sup>*. A vector x is said to be majorized by y, denoted by x* ≺ *y, if*

$$\sum_{i=1}^k x_{[i]} \le \sum_{i=1}^k y_{[i]} \quad \text{for } 1 \le k \le n - 1$$

*and*

$$\sum_{i=1}^{n} x_i = \sum_{i=1}^{n} y_i,$$

*where x*[1] ≥ · · · ≥ *x*[*n*] *and y*[1] ≥ · · · ≥ *y*[*n*] *are rearrangements of x and y in a descending order.*
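The relation *x* ≺ *y* can be checked mechanically from Definition 2. Below is a Python sketch (the function name is our own) that sorts both vectors in descending order and compares partial sums:

```python
def majorizes(y, x):
    """Return True if x is majorized by y (x ≺ y in the notation of Definition 2)."""
    xs, ys = sorted(x, reverse=True), sorted(y, reverse=True)
    if len(xs) != len(ys) or sum(xs) != sum(ys):
        return False
    px = py = 0
    for a, b in zip(xs[:-1], ys[:-1]):
        px, py = px + a, py + b
        if px > py:  # a partial-sum inequality fails
            return False
    return True

print(majorizes((3, 0, 0), (1, 1, 1)))  # (1,1,1) ≺ (3,0,0): True
print(majorizes((2, 1, 0), (3, 0, 0)))  # (3,0,0) is not majorized by (2,1,0): False
```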

We now recall the concepts of symmetric function and symmetric mean as follows.

**Definition 3** (see, e.g., [9,11])**.** *Let x* = (*x*1, *x*2, . . . , *xn*) ∈ R<sup>n</sup>*.*

*(i) The kth symmetric function s<sup>k</sup>*(*x*) *for* 1 ≤ *k* ≤ *n is defined by*

$$s_k(x) = s_k(x_1, x_2, \dots, x_n) = \sum_{1 \le i_1 < i_2 < \dots < i_k \le n} \prod_{j=1}^k x_{i_j}.$$

*In particular, sn*(*x*) = *x*1*x*2 · · · *xn and s*1(*x*) = *x*1 + *x*2 + · · · + *xn. We assume that s*0(*x*) = 1 *and sk*(*x*) = 0 *for k* < 0 *or k* > *n.*

*(ii) The kth symmetric mean is defined by*

$$B_k(x) = \frac{s_k(x)}{\binom{n}{k}} \quad \text{for } k = 0, 1, \dots, n.$$

The following lemma is important and will be used for proving our main results.

**Lemma 2** (see [9] (p. 458) or [11] (p. 95))**.** *Let x* = (*x*1, *x*2, . . . , *xn*) ∈ R<sup>n</sup> *with x<sup>i</sup>* ≥ 0 *for i* = 1, 2, . . . , *n. Then,*

$$B_{k+1}(x) B_{k-1}(x) \le B_k^2(x)$$

*for all* 1 ≤ *k* ≤ *n. Equivalently speaking, the sequence* {*B<sup>k</sup>*(*x*)} *is log-concave.*
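Lemma 2 can be confirmed numerically for any concrete positive vector. The following Python sketch (helper name ours) computes the symmetric means B<sub>k</sub>(x) with exact rational arithmetic and checks Newton's inequality:

```python
from itertools import combinations
from math import comb
from fractions import Fraction

def symmetric_mean(x, k):
    """B_k(x) = s_k(x) / C(n, k), with the convention B_0(x) = 1."""
    n = len(x)
    if k == 0:
        return Fraction(1)
    s_k = Fraction(0)
    for c in combinations(x, k):
        term = Fraction(1)
        for v in c:
            term *= v
        s_k += term
    return s_k / comb(n, k)

x = [Fraction(v) for v in (1, 2, 3, 4, 5)]
B = [symmetric_mean(x, k) for k in range(len(x) + 1)]

# Newton's inequality: B_{k+1} B_{k-1} <= B_k^2, i.e. {B_k} is log-concave
assert all(B[k + 1] * B[k - 1] <= B[k] ** 2 for k in range(1, len(x)))
print(B)  # B_0 = 1, B_1 = 3, ..., B_5 = 120
```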

**Remark 1.** *(i) In particular, for n* ≥ 2 *and x<sup>i</sup>* > 0 *with i* = 1, 2, . . . , *n, we have*

$$B_1(x) = \frac{s_1(x)}{\binom{n}{1}} = A_n(x) = \frac{x_1 + x_2 + \dots + x_n}{n},$$

$$\sqrt[n]{B_n(x)} = \sqrt[n]{\frac{s_n(x)}{\binom{n}{n}}} = G_n(x) = \sqrt[n]{x_1 x_2 \cdots x_n},$$

*and*

$$\frac{B_n(x)}{B_{n-1}(x)} = H_n(x) = \frac{n}{\frac{1}{x_1} + \frac{1}{x_2} + \dots + \frac{1}{x_n}},$$

*where An*(*x*)*, Gn*(*x*) *and Hn*(*x*) *denote the arithmetic mean, geometric mean and harmonic mean of the n positive numbers x<sup>i</sup>* > 0 *for i* = 1, 2, . . . , *n, respectively. See the famous monograph [8].*

*(ii) Let x<sup>i</sup>* > 0 *for i* = 1, 2, . . . , *n. When n* ≥ 2*, the double inequality between the arithmetic, geometric and harmonic means reads that*

$$A_n(x) \ge G_n(x) \ge H_n(x). \tag{1}$$

*The double inequality* (1) *is fundamental and important in all areas of mathematical sciences. There have been over one hundred proofs for the double inequality* (1)*. See the related texts and references in the paper [2], for example.*

Historically, many important generalizations in inequality theory have grown out of simple inequalities with wide applications. In 1995, by virtue of the Lagrange multiplier method, Zhu [13] proved the following interesting inequality

$$n^{n-2}(x_1 x_2 \cdots x_{n-1} + x_2 x_3 \cdots x_n + \cdots + x_n x_1 \cdots x_{n-2}) \le (x_1 + x_2 + \cdots + x_n)^{n-1},$$

which is equivalent to

$$B_{n-1}(x) \le B_1^{n-1}(x), \tag{2}$$

where *x* = (*x*1, *x*2, . . . , *xn*) ∈ R<sup>n</sup> with *x<sup>i</sup>* ≥ 0 for *i* = 1, 2, . . . , *n* and *n* ≥ 3. In 2022, Hu [14] established the following inequality by mathematical induction:

$$(s_{n-1}(x))^{n-1} \ge n^{n-2} (s_n(x))^{n-2} s_1(x), \tag{3}$$

where *x* = (*x*1, *x*2, . . . , *xn*) ∈ R<sup>n</sup> with *x<sup>i</sup>* > 0 for *i* = 1, 2, . . . , *n*. It is easy to see that inequality (3) is equivalent to

$$B_{n-1}^{n-1}(x) \ge B_n^{n-2}(x) B_1(x).$$

Motivated by the works mentioned above, in this paper, we investigate new inequalities and generalizations for symmetric means by applying majorization theory. We also give new proofs for some known results that were previously proven by intricate elementary or analytical methods. The new proofs given in this paper are concise and novel.

#### **2. Main Results**

In this section, we establish the following new inequalities for symmetric means.

**Theorem 1.** *Let x* = (*x*1, *x*2, . . . , *xn*) ∈ R<sup>n</sup> *with x<sup>i</sup>* > 0 *for i* = 1, 2, . . . , *n. Then, the following hold:*

*(i) for* 2 ≤ 2*k* ≤ *n*,

$$B_{n-k}^{n-k}(x) \ge B_n^{n-2k}(x) B_k^k(x);$$

*(ii) for* 1 ≤ *k* ≤ *n*,

$$B_{n-k}^{n-k+1}(x) \ge B_{n-k+1}^{n-k}(x);$$

*(iii) for* 2 ≤ 2*k* ≤ *n*,

$$B_{n-k}^{n-k}(x) B_n^{2k}(x) \ge B_k^k(x) B_n^n(x);$$

*(iv) for* 1 ≤ *k*1 < *k*2 ≤ *n*,

$$B_{k_1}^{1/k_1}(x) \ge B_{k_2}^{1/k_2}(x).$$

**Proof.** (i) Let 2 ≤ 2*k* ≤ *n*. It is easy to verify that

$$
\left(\underbrace{n-k,n-k,\dots,n-k}_{n-k}\right) \prec \left(\underbrace{n,n,\dots,n}_{n-2k},\underbrace{k,k,\dots,k}_{k}\right).
$$

According to Lemma 2, the sequence {*B<sup>k</sup>*(*x*)} is log-concave, and by Corollary 1, it follows that

$$B_{n-k}^{n-k}(x) \ge B_n^{n-2k}(x) B_k^k(x).$$

(ii) Let 1 ≤ *k* ≤ *n*. Since

$$\left(\underbrace{n-k, \dots, n-k}_{n-k+1}, \underbrace{0, \dots, 0}_{k-1}\right) \prec \left(\underbrace{n-k+1, \dots, n-k+1}_{n-k}, \underbrace{0, \dots, 0}_{k}\right),$$

by the logarithmic concavity of the sequence {*B<sup>k</sup>*(*x*)}, from Corollary 1, we have

$$B_{n-k}^{n-k+1}(x) = B_{n-k}^{n-k+1}(x) B_0^{k-1}(x) \ge B_{n-k+1}^{n-k}(x) B_0^k(x) = B_{n-k+1}^{n-k}(x).$$

(iii) Let 2 ≤ 2*k* ≤ *n*. Since

$$\left(\underbrace{n-k,\dots,n-k}_{n-k},\underbrace{n,\dots,n}_{2k}\right) \prec \left(\underbrace{k,\dots,k}_{k},\underbrace{n,\dots,n}_{n}\right),$$

by the logarithmic concavity of the sequence {*B<sup>k</sup>*(*x*)}, we obtain

$$B_{n-k}^{n-k}(x) B_n^{2k}(x) \ge B_k^k(x) B_n^n(x).$$

(iv) Let 1 ≤ *k*1 < *k*2 ≤ *n*. Since

$$\left(\underbrace{k_1, \dots, k_1}_{k_2}\right) \prec \left(\underbrace{k_2, \dots, k_2}_{k_1}, \underbrace{0, \dots, 0}_{k_2 - k_1}\right),$$

from the logarithmic concavity of the sequence {*B<sup>k</sup>*(*x*)}, we obtain

$$B_{k_1}^{k_2}(x) = \left[\frac{s_{k_1}(x)}{\binom{n}{k_1}}\right]^{k_2} \ge B_{k_2}^{k_1}(x) B_0^{k_2-k_1}(x) = \left[\frac{s_{k_2}(x)}{\binom{n}{k_2}}\right]^{k_1},$$

which implies

$$B_{k_1}^{1/k_1}(x) \ge B_{k_2}^{1/k_2}(x).$$

The proof is completed.

**Remark 2.** *When k*1 = *k* − 1 *and k*2 = *k in (iv) of Theorem 1, the inequality*

$$B_{k_1}^{1/k_1}(x) \ge B_{k_2}^{1/k_2}(x)$$

*becomes the famous Maclaurin's inequality*

$$
\left[\frac{s_k(x)}{\binom{n}{k}}\right]^{1/k} \le \left[\frac{s_{k-1}(x)}{\binom{n}{k-1}}\right]^{1/(k-1)}.
$$
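Maclaurin's chain B<sub>1</sub>(x) ≥ B<sub>2</sub><sup>1/2</sup>(x) ≥ · · · ≥ B<sub>n</sub><sup>1/n</sup>(x) is easy to confirm in floating point (a Python sketch; the test vector is an arbitrary choice of ours):

```python
from itertools import combinations
from math import comb, prod

def B(x, k):
    # k-th symmetric mean of the tuple x
    return sum(prod(c) for c in combinations(x, k)) / comb(len(x), k)

x = (0.5, 1.0, 2.5, 4.0)
chain = [B(x, k) ** (1.0 / k) for k in range(1, len(x) + 1)]

# Maclaurin: B_1 >= B_2^(1/2) >= B_3^(1/3) >= B_4^(1/4)
assert all(chain[i] >= chain[i + 1] for i in range(len(chain) - 1))
print(chain)
```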

In terms of the symmetric means *B*1(*x*), *Bn*−1(*x*) and *Bn*(*x*), the double inequality (1) can be reformulated as follows.

**Theorem 2.** *Let n* ≥ 2 *and x* = (*x*1, *x*2, . . . , *xn*) ∈ R<sup>n</sup> *with x<sup>i</sup>* > 0 *for i* = 1, 2, . . . , *n. Then,*

$$B_1^n(x) \ge B_n(x) \tag{4}$$

*and*

$$B_{n-1}^n(x) \ge B_n^{n-1}(x). \tag{5}$$

**Proof.** According to Lemma 2, the sequence {*B<sup>k</sup>*(*x*)} is log-concave. Since the majorization relation

$$\left(\underbrace{1,1,1,\dots,1}_{n}\right) \prec \left(n,\underbrace{0,0,\dots,0}_{n-1}\right)$$

is valid, by Corollary 1, we acquire

$$B_1^n(x) \ge B_n(x) B_0^{n-1}(x) = B_n(x).$$

Next, we verify the inequality (5). It is not difficult to see that

$$\left(\underbrace{n-1,n-1,\dots,n-1}_{n}\right) \prec \left(\underbrace{n,n,\dots,n}_{n-1},0\right).$$

From the logarithmic concavity of the sequence {*B<sup>k</sup>*(*x*)}, we have

$$B_{n-1}^n(x) \ge B_n^{n-1}(x) B_0(x) = B_n^{n-1}(x).$$

The proof is completed.

The following result is a generalization of inequality (2).

**Theorem 3.** *Let x* = (*x*1, *x*2, . . . , *xn*) ∈ R<sup>n</sup> *with x<sup>i</sup>* > 0 *for i* = 1, 2, . . . , *n. Then, for n* ≥ 2*k* ≥ 2*,*

$$B_k^{n-k}(x) \ge B_{n-k}^k(x). \tag{6}$$

**Proof.** According to Lemma 2, the sequence {*B<sup>k</sup>*(*x*)} is log-concave. Note that, for *n* ≥ 2*k*, that is, for *k* ≤ *n* − *k*,

$$
\left(\underbrace{k,k,\dots,k}_{n-k}\right) \prec \left(\underbrace{n-k,n-k,\dots,n-k}_{k},\underbrace{0,0,\dots,0}_{n-2k}\right).
$$

By Corollary 1, we obtain

$$B_k^{n-k}(x) = \left[\frac{s_k(x)}{\binom{n}{k}}\right]^{n-k} \ge B_{n-k}^k(x) B_0^{n-2k}(x) = \left[\frac{s_{n-k}(x)}{\binom{n}{n-k}}\right]^k \left[\frac{s_0(x)}{\binom{n}{0}}\right]^{n-2k}.$$

The inequality (6) is thus proved.

**Theorem 4.** *Let x* = (*x*1, *x*2, . . . , *xn*) ∈ R<sup>n</sup> *with x<sup>i</sup>* > 0 *for i* = 1, 2, . . . , *n. Then, we have*

$$\sqrt[n]{\prod_{k=1}^{n} B_{2k}(x)} \ge \sqrt[n+1]{\prod_{k=0}^{n} B_{2k+1}(x)}. \tag{7}$$

**Proof.** The majorization relation

$$\left(\underbrace{2,\dots,2}_{n+1}, \underbrace{4,\dots,4}_{n+1}, \dots, \underbrace{2n,\dots,2n}_{n+1}\right) \prec \left(\underbrace{1,\dots,1}_{n}, \underbrace{3,\dots,3}_{n}, \dots, \underbrace{2n+1,\dots,2n+1}_{n}\right)$$

is shown in [4] (p. 40). By the logarithmic concavity of the sequence {*B<sup>k</sup>*(*x*)}, we obtain

$$\prod_{k=1}^{n} B_{2k}^{n+1}(x) \ge \prod_{k=0}^{n} B_{2k+1}^{n}(x),$$

from which (7) follows.

The arithmetic mean *An*(*x*), geometric mean *Gn*(*x*) and harmonic mean *Hn*(*x*) also satisfy Sierpinski's inequality [9] (p. 62) below. In this paper, we give a new proof for Sierpinski's inequality via majorization.

**Theorem 5** (Sierpinski's inequality [9] (p. 62))**.** *Let x* = (*x*1, *x*2, . . . , *xn*) ∈ R<sup>n</sup> *with x<sup>i</sup>* > 0 *for i* = 1, 2, . . . , *n and n* ≥ 2*. Then,*

$$A_n(x) H_n^{n-1}(x) \le G_n^n(x) \le A_n^{n-1}(x) H_n(x). \tag{8}$$

**Proof.** In terms of the symmetric means *B*1(*x*), *Bn*−1(*x*) and *Bn*(*x*), the left and right inequalities in the double inequality (8) can be reformulated as

$$B_1(x) B_n^{n-1}(x) \le B_n(x) B_{n-1}^{n-1}(x)$$

and

$$B_n(x) B_{n-1}(x) \le B_1^{n-1}(x) B_n(x).$$

By the logarithmic concavity of the sequence {*B<sup>k</sup>*(*x*)}, the above two inequalities can be obtained from the two majorization relations

$$\left(n, \underbrace{n-1, n-1, \dots, n-1}_{n-1}, 0\right) \prec \left(\underbrace{n, n, \dots, n}_{n-1}, 1\right)$$

and

$$\left(n, \underbrace{1, 1, \dots, 1}_{n-1}\right) \prec \left(n, n-1, \underbrace{0, 0, \dots, 0}_{n-2}\right),$$

respectively. This proves the inequality (8).
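Sierpinski's double inequality (8) can likewise be spot-checked numerically (a Python sketch with an arbitrary positive test vector of our own):

```python
from math import prod

x = (1.0, 2.0, 3.0, 7.0)
n = len(x)
A = sum(x) / n                    # arithmetic mean A_n(x)
G = prod(x) ** (1.0 / n)          # geometric mean G_n(x)
H = n / sum(1.0 / v for v in x)   # harmonic mean H_n(x)

# Sierpinski: A * H^(n-1) <= G^n <= A^(n-1) * H
assert A * H ** (n - 1) <= G ** n <= A ** (n - 1) * H
print(A * H ** (n - 1), G ** n, A ** (n - 1) * H)
```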

**Theorem 6** ([9] (p. 260))**.** *Let x* = (*x*1, *x*2, . . . , *xn*) ∈ R<sup>n</sup> *with x<sup>i</sup>* > 0 *for i* = 1, 2, . . . , *n. Then,*

$$s_n(x) \le \frac{s_1(x) s_{n-1}(x)}{n^2} \le \frac{s_1^n(x)}{n^n}. \tag{9}$$

**Proof.** It is clear that (*n* − 1, 1) ≺ (*n*, 0) and

$$
\left(\underbrace{1,1,1,\dots,1}_{n}\right) \prec \left(n-1, 1, \underbrace{0,\dots,0}_{n-2}\right).
$$

From the logarithmic concavity of the sequence {*B<sup>k</sup>*(*x*)}, it follows that

$$B_{n-1}(x) B_1(x) = \frac{s_{n-1}(x)}{\binom{n}{n-1}} \frac{s_1(x)}{\binom{n}{1}} \ge B_n(x) B_0(x) = \frac{s_n(x)}{\binom{n}{n}} \frac{s_0(x)}{\binom{n}{0}}$$

and

$$B_1^n(x) = \left[\frac{s_1(x)}{\binom{n}{1}}\right]^n \ge B_{n-1}(x) B_1(x) B_0^{n-2}(x) = \frac{s_{n-1}(x)}{\binom{n}{n-1}} \frac{s_1(x)}{\binom{n}{1}} \left[\frac{s_0(x)}{\binom{n}{0}}\right]^{n-2}.$$

This proves the double inequality (9).

**Theorem 7.** *Let x* = (*x*1, *x*2, . . . , *xn*) ∈ R<sup>n</sup> *with x<sup>i</sup>* > 0 *for i* = 1, 2, . . . , *n. Then,*

$$s_k(x) s_{n-k}(x) \ge \binom{n}{k}^2 s_n(x) \tag{10}$$

*and*

$$(s_k(x))^n \ge \binom{n}{k}^n (s_n(x))^k \tag{11}$$

*for k* = 1, 2, . . . , *n* − 1*.*

**Proof.** It is clear that (*n* − *k*, *k*) ≺ (*n*, 0). From the logarithmic concavity of the sequence {*B<sup>k</sup>*(*x*)}, we find

$$\frac{s_k(x)}{\binom{n}{k}} \frac{s_{n-k}(x)}{\binom{n}{n-k}} = B_k(x) B_{n-k}(x) \ge B_n(x) B_0(x) = \frac{s_n(x)}{\binom{n}{n}} \frac{s_0(x)}{\binom{n}{0}},$$

which shows inequality (10). From the majorization relation

$$\left(\underbrace{k,k,\dots,k}_{n},0\right) \prec \left(\underbrace{n,n,\dots,n}_{k},\underbrace{0,0,\dots,0}_{n-k+1}\right),$$

it follows that inequality (11) holds.
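Both inequalities of Theorem 7 can be verified exactly with integer arithmetic (a Python sketch; the test vector is our own choice):

```python
from itertools import combinations
from math import comb, prod

def s(x, k):
    # k-th elementary symmetric function s_k(x)
    return sum(prod(c) for c in combinations(x, k))

x = (1, 2, 3, 4, 6)
n = len(x)
for k in range(1, n):
    assert s(x, k) * s(x, n - k) >= comb(n, k) ** 2 * s(x, n)  # inequality (10)
    assert s(x, k) ** n >= comb(n, k) ** n * s(x, n) ** k       # inequality (11)
print("inequalities (10) and (11) hold for k = 1, ...,", n - 1)
```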

#### **3. Conclusions**

As a discrete form of logarithmically convex (concave) functions, logarithmically convex (concave) sequences play an important role in mathematical analysis and inequality theory. Lemma 1 and Corollary 1 are important conclusions about logarithmically convex (concave) sequences in majorization theory. In this paper, in view of the logarithmic concavity of symmetric mean sequences, we use Corollary 1 and various majorization relations to establish new inequalities and generalizations for symmetric means and to give concise and novel proofs for some known results.

**Author Contributions:** Writing original draft, H.-N.S. and W.-S.D. All authors have read and agreed to the published version of the manuscript.

**Funding:** The second author is partially supported by Grant No. MOST 110-2115-M-017-001 of the Ministry of Science and Technology of the Republic of China.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Not applicable.

**Acknowledgments:** The authors wish to express their hearty thanks to Feng Qi for his valuable suggestions and comments.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


### *Article* **Context-Free Grammars for Several Triangular Arrays**

**Roberta Rui Zhou 1,\*, Jean Yeh 2,\* and Fuquan Ren <sup>3</sup>**


**Abstract:** In this paper, we present a unified grammatical interpretation of the numbers that satisfy a kind of four-term recurrence relation, including the Bell triangle, the coefficients of modified Hermite polynomials, and the Bessel polynomials. Additionally, as an application, a criterion for real zeros of row-generating polynomials is also presented.

**Keywords:** recurrence relations; grammars; real zeros; Bell triangular array

**MSC:** 05A05; 05A15

#### **1. Introduction**

Let *A* denote an alphabet, the letters of which are considered as independent commutative indeterminates. Then, the context-free grammar *G* over *A* is defined as a set of replacement rules that substitute the letters in *A* with formal functions on *A*. The formal derivative *D* is a linear operator, which is defined relative to a context-free grammar *G* (see [1]). For example, if *A* = {*u*, *v*} and *G* = {*u* → *uv*, *v* → *v*}, then *D*(*u*) = *uv*, *D*<sup>2</sup>(*u*) = *u*(*v* + *v*<sup>2</sup>), and, in general, *D<sup>n</sup>*(*u*) = *u* ∑<sup>*n*</sup><sub>*k*=1</sub> *S*(*n*, *k*)*v<sup>k</sup>*, where *S*(*n*, *k*) is the Stirling number of the second kind, i.e., the number of ways to partition [*n*] into *k* blocks.
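The formal derivative of the example grammar G = {u → uv, v → v} can be iterated by bookkeeping the coefficients of u v<sup>k</sup>; since D(u v<sup>k</sup>) = u v<sup>k+1</sup> + k u v<sup>k</sup>, the coefficients obey the Stirling recurrence. A Python sketch (the function name is our own):

```python
def grammar_derivative_coeffs(n):
    """Coefficients c[k] of u * v^k in D^n(u) for G = {u -> u v, v -> v}."""
    c = {0: 1}  # D^0(u) = u = u * v^0
    for _ in range(n):
        nc = {}
        for k, coef in c.items():
            # D(u v^k) = u v^{k+1} + k * u v^k
            nc[k + 1] = nc.get(k + 1, 0) + coef
            nc[k] = nc.get(k, 0) + k * coef
        c = {k: v for k, v in nc.items() if v}
    return c

# D^4(u) = u (S(4,1) v + S(4,2) v^2 + S(4,3) v^3 + S(4,4) v^4), with S(4,.) = 1, 7, 6, 1
assert grammar_derivative_coeffs(4) == {1: 1, 2: 7, 3: 6, 4: 1}
print(grammar_derivative_coeffs(4))
```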

In [2], Hao, Wang, and Yang presented a grammatical interpretation of the numbers *T*(*n*, *k*) that satisfy the following three-term recurrence relation:

$$T(n,k) = (a_1 n + a_2 k + a_3) T(n-1,k) + (b_1 n + b_2 k + b_3) T(n-1,k-1).$$

Recently, a large literature has been devoted to the numbers *t*(*n*, *k*) that satisfy the following four-term recurrence relation (see [3–7]):

$$t_{n,k} = (a_1 n + a_2 k + a_3) t_{n-1,k} + (b_1 n + b_2 k + b_3) t_{n-1,k-1} + (c_1 n + c_2 k + c_3) t_{n-1,k-2} \tag{1}$$

with *t*0,0 = 1 and *tn*,*k* = 0 unless 0 ≤ *k* ≤ *n*. For example, Ma [8] showed that if *G* = {*x* → *xy*, *y* → *yz*, *z* → *y*<sup>2</sup>}, then *D<sup>n</sup>*(*x*<sup>2</sup>) = *x*<sup>2</sup> ∑<sup>*n*</sup><sub>*k*=0</sub> *R*(*n* + 1, *k*)*y<sup>k</sup>z*<sup>*n*−*k*</sup>, where *R*(*n*, *k*) is the number of permutations in *S<sup>n</sup>* with *k* alternating runs, and it satisfies the recurrence relation

$$R(n,k) = kR(n-1,k) + 2R(n-1,k-1) + (n-k)R(n-1,k-2)$$

with the initial conditions *R*(1, 0) = 1 and *R*(1, *k*) = 0 for *k* ≥ 1.

Let

$$a(n,k) = \sum_{i=0}^{n} S(n,i) \binom{i}{k}$$

for 0 ≤ *k* ≤ *n*. Clearly, *a*(*n*, *k*) is the number of set partitions of {1, 2, . . . , *n*} in which exactly *k* of the blocks have been distinguished. The numbers *a*(*n*, *k*) satisfy the recurrence relation

$$a(n+1,k) = a(n,k-1) + (k+1)a(n,k) + (k+1)a(n,k+1), \tag{2}$$

**Citation:** Zhou, R.R.; Yeh, J.; Ren, F. Context-Free Grammars for Several Triangular Arrays . *Axioms* **2022**, *11*, 297. https://doi.org/10.3390/ axioms11060297

Academic Editor: Wei-Shih Du

Received: 26 April 2022 Accepted: 10 June 2022 Published: 20 June 2022



with *a*(0, 0) = 1 and *a*(0, *k*) = 0 for *k* ≠ 0 (see [9,10]). The triangular array {*a*(*n*, *k*)}*n*,*k* is known as the classical Bell triangle and is given as follows:

$$
\begin{pmatrix}
1\\1 & 1\\2 & 3 & 1\\5 & 10 & 6 & 1\\15 & 37 & 31 & 10 & 1\\\vdots & & & & & \ddots
\end{pmatrix}.
$$

Note that *a*(*n*, 0) = ∑<sup>*n*</sup><sub>*i*=0</sub> *S*(*n*, *i*) = *Bn*, which implies that the first column of the triangular array is made up of the Bell numbers *Bn*. A natural question is whether there exists a grammatical interpretation of the numbers *a*(*n*, *k*).
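Recurrence (2) reproduces the displayed triangle directly; a Python sketch (the function name is our own):

```python
def bell_triangle(rows):
    """Rows of a(n, k) via a(n+1,k) = a(n,k-1) + (k+1) a(n,k) + (k+1) a(n,k+1)."""
    tri = [[1]]  # a(0, 0) = 1
    for n in range(rows - 1):
        prev = tri[-1]
        def get(k):
            return prev[k] if 0 <= k < len(prev) else 0
        tri.append([get(k - 1) + (k + 1) * get(k) + (k + 1) * get(k + 1)
                    for k in range(n + 2)])
    return tri

tri = bell_triangle(5)
print(tri)  # [[1], [1, 1], [2, 3, 1], [5, 10, 6, 1], [15, 37, 31, 10, 1]]
```

The first column 1, 1, 2, 5, 15 is indeed the sequence of Bell numbers.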

This paper is motivated by exploring the grammatical interpretation of the triangular array {*B*(*n*, *<sup>k</sup>*)}0≤*k*≤*<sup>n</sup>* that satisfies the following four-term recurrence relation

$$B(n+1,k) = (a_1 n + a_2 k + a_3) B(n,k-1) + (b_1 n + b_2 k + b_3) B(n,k) + (k+1)c B(n,k+1), \tag{3}$$

where *a<sup>i</sup>*, *b<sup>i</sup>*, and *c* are integers for 1 ≤ *i* ≤ 3 with *B*(0, 0) = 1 and *B*(0, *k*) = 0 if *k* ≠ 0. In Section 2, we present grammatical interpretations of the triangular array {*B*(*n*, *k*)}. In Section 3, we present grammatical interpretations of several combinatorial sequences, including the Bell triangle, the modified Hermite polynomials, the Bessel polynomials, and so on. In Section 4, we show the result of the real-rootedness of row-generating functions for {*B*(*n*, *k*)} and apply the proposed criteria to the Bell triangular array as an example.

#### **2. Grammatical Interpretations of the Triangular Array** *B*(*n***,** *k*)

We now present the first main result of this paper.

**Theorem 1.** *Suppose that a<sup>i</sup> , b<sup>i</sup> , and c are integers for* 1 ≤ *i* ≤ 3*. Let*

$$G = \{ I \rightarrow (a_2 + a_3)IX + b_3 IY;\; X \rightarrow (a_1 + a_2)X^2 + (b_1 + b_2)XY + cY^2;\; Y \rightarrow a_1 XY + b_1 Y^2 \}.$$

*Then, we have*

$$D^n(I) = I \sum_{k \ge 0} B(n,k) X^k Y^{n-k}, \tag{4}$$

*where the coefficients B*(*n*, *k*) *satisfy the recurrence relation* (3)*.*

**Proof.** Note that *D*(*I*) = (*a*<sup>2</sup> + *a*3)*IX* + *b*<sup>3</sup> *IY*. Suppose that (4) holds for *n*. Then, by induction, we obtain

$$\begin{aligned} D^{n+1}(I) &= D\{D^n(I)\} = \sum_{k\geq 0} B(n,k) D(I) X^k Y^{n-k} \\ &+ \sum_{k\geq 0} B(n,k) I D(X^k) Y^{n-k} + \sum_{k\geq 0} B(n,k) I X^k D(Y^{n-k}). \end{aligned}$$

Applying the rules of *G*, we can derive

$$\begin{aligned} \sum_{k\geq 0} B(n,k) I (a_2+a_3) X^{k+1} Y^{n-k} + \sum_{k\geq 0} B(n,k) I b_3 X^k Y^{n+1-k} \\ + \sum_{k\geq 0} B(n,k) k I X^{k-1} Y^{n-k} \{(a_1+a_2)X^2 + (b_1+b_2)XY + cY^2\} \\ + \sum_{k\geq 0} B(n,k)(n-k) I X^k Y^{n-k} \{a_1 X + b_1 Y\}. \end{aligned}$$

Collecting and merging similar terms, we obtain

$$\begin{aligned} \sum_{k\geq 0} B(n,k)\big(a_2+a_3+k(a_1+a_2)+(n-k)a_1\big) I X^{k+1} Y^{n-k} \\ + \sum_{k\geq 0} B(n,k)\big((n-k)b_1+k(b_1+b_2)+b_3\big) I X^k Y^{n+1-k} \\ + \sum_{k\geq 0} B(n,k) k c I X^{k-1} Y^{n-k+2}. \end{aligned}$$

Extracting the coefficient of *IX<sup>k</sup>Y*<sup>*n*+1−*k*</sup>, we obtain (3). This completes the proof.

Along the same lines of the proof of Theorem 1, one can easily derive the following result.

#### **Proposition 1.** *Let*

$$G = \{ I \rightarrow (a_2 + a_3)IX + b_3 IY;\; X \rightarrow (a_1 + a_2)X^2 + (b_1 + b_2)XY + cY^2;\; Y \rightarrow dX^2 + a_1 XY + b_1 Y^2 \}.$$

*Then, we have*

$$D^n(I) = I \sum_{k \ge 0} M(n,k) X^k Y^{n-k},$$

*where the numbers M*(*n*, *k*) *satisfy the following five-term recurrence relation:*

$$M(n+1,k) = (n-k+2)d M(n,k-2) + (a_1 n + a_2 k + a_3) M(n,k-1) + (b_1 n + b_2 k + b_3) M(n,k) + (k+1)c M(n,k+1), \tag{5}$$

*where a<sup>i</sup>*, *b<sup>i</sup>*, *c, and d are integers for* 1 ≤ *i* ≤ 3*.*

When *d* = 0, the recurrence relation (5) degenerates into (3).

#### **3. Applications**

#### *3.1. The Bell Triangle*

The Bell triangle was proposed by Aigner [9] to provide a characterization of the sequence of Bell numbers by means of the determinants of Hankel matrices. As a special case of Theorem 1, we now present a grammatical interpretation of the Bell triangle.

**Proposition 2.** *Let G* = {*I* → *IX* + *IY*; *X* → *XY* + *Y*<sup>2</sup>; *Y* → 0}*. Then, we have*

$$D^n(I) = I \sum_{k \ge 0} a(n,k) X^k Y^{n-k} = I Y^n a_n\left(\frac{X}{Y}\right).$$

Note that *D<sup>n</sup>*(*X*) = *XY<sup>n</sup>* + *Y*<sup>*n*+1</sup> and *D<sup>n</sup>*(*Y*) = 0 for *n* ≥ 1. From Leibniz's formula, we obtain the following corollary:

**Corollary 1.** *For n* ≥ 0*, we have*

$$a_{n+1}(x) = (x+1) \sum_{k=0}^{n} \binom{n}{k} a_k(x).$$

Let *D<sup>n</sup>*(*IX*) = *I* ∑<sup>*n*+1</sup><sub>*k*=0</sub> *b*(*n* + 1, *k*)*X<sup>k</sup>Y*<sup>*n*+1−*k*</sup>. It is routine to verify that

$$b(n+2,k) = b(n+1,k-1) + (k+1)b(n+1,k) + (k+1)b(n+1,k+1),$$

with *b*(1, 1) = 1 and *b*(1, *k*) = 0 when *k* ≠ 1. Since *D*<sup>*n*+1</sup>(*I*) = *D<sup>n</sup>*(*IX*) + *D<sup>n</sup>*(*IY*), it follows that *a*(*n* + 1, *k*) = *b*(*n* + 1, *k*) + *a*(*n*, *k*).

Note that *D<sup>n</sup>*(*X*) = *Y<sup>n</sup>*(*X* + *Y*). Then,

$$b(n+1,k) = \sum_{i=0}^{n+1-k} \binom{n}{i+k-1} a(i+k-1,k-1) + \sum_{i=0}^{n-k} \binom{n}{i+k} a(i+k,k).$$

#### *3.2. On the Coefficients of Modified Hermite Polynomials*

The modified Hermite polynomials have the following form:

$$\begin{aligned} h(0,x) &= 1, \\ h(1,x) &= x, \\ h(2,x) &= x^2 + 1, \\ h(3,x) &= x^3 + 3x, \\ h(4,x) &= x^4 + 6x^2 + 3, \\ h(5,x) &= x^5 + 10x^3 + 15x, \\ h(6,x) &= x^6 + 15x^4 + 45x^2 + 15. \end{aligned}$$

If *n* − *k* ≥ 0 is even, let

$$T(n,k) = \frac{n!}{2^{\frac{n-k}{2}} \left(\frac{n-k}{2}\right)! \, k!}.$$

Otherwise, set *T*(*n*, *k*) = 0. It should be noted that the numbers *T*(*n*, *k*) are the coefficients of the modified Hermite polynomials (see A099174 [11]) and

$$T(n+1,k) = T(n,k-1) + (k+1)T(n,k+1).$$

Using Theorem 1, we obtain the following proposition.

**Proposition 3.** *Let G* = {*I* → *IX*; *X* → *Y*<sup>2</sup>; *Y* → 0}*. Then, we have*

$$D^n(I) = I \sum_{k \ge 0} T(n,k) X^k Y^{n-k} = I Y^n h\left(n, \frac{X}{Y}\right).$$

Note that *D<sup>n</sup>*(*X*) = 0 for *n* ≥ 2. From Leibniz's formula, we obtain the following corollaries:

**Corollary 2.** *For n* ≥ 0*, we have*

$$h(n+1, x) = x h(n, x) + n h(n-1, x).$$

**Corollary 3.** *For n* ≥ *k* ≥ 1*, we have*

$$T(n+1,k) = T(n,k-1) + nT(n-1,k).$$
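The closed form for T(n, k) and its recurrences can be cross-checked in a few lines (a Python sketch; the helper name is our own):

```python
from math import factorial

def T(n, k):
    """Coefficients of the modified Hermite polynomials (A099174)."""
    if k < 0 or k > n or (n - k) % 2:
        return 0
    m = (n - k) // 2
    return factorial(n) // (2 ** m * factorial(m) * factorial(k))

# h(6, x) = x^6 + 15 x^4 + 45 x^2 + 15
assert [T(6, k) for k in (0, 2, 4, 6)] == [15, 45, 15, 1]
# recurrence T(n+1, k) = T(n, k-1) + (k+1) T(n, k+1)
for n in range(12):
    for k in range(n + 2):
        assert T(n + 1, k) == T(n, k - 1) + (k + 1) * T(n, k + 1)
print("closed form satisfies the recurrence")
```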

#### *3.3. The Bessel Polynomials*

As a well-known orthogonal sequence of polynomials, the Bessel polynomials *yn*(*x*) were introduced by Krall and Frink in [12], which can be defined as the polynomial solutions of the second-order differential equation

$$x^2 \frac{d^2 y_n(x)}{dx^2} + 2(x+1) \frac{dy_n(x)}{dx} = n(n+1) y_n(x).$$

After that, the Bessel polynomials have been extensively studied and applied (see [13–15]). Moreover, the polynomials *yn*(*x*) can be generated by using the Rodrigues formula (see [11] [A001498]):

$$y_n(x) = \frac{1}{2^n} e^{2/x} \frac{d^n}{dx^n} \left(x^{2n} e^{-2/x}\right).$$

Explicitly, we can obtain

$$y_n(x) = \sum_{k=0}^n \frac{(n+k)!}{(n-k)! \, k!} \left(\frac{x}{2}\right)^k.$$

Let

$$H(n,k) = \frac{(n+k)!}{2^k (n-k)! \, k!}.$$

Then,

$$y_n(x) = \sum_{k=0}^n H(n,k) x^k.$$

It is easy to verify that

$$H(n+1,k) = H(n,k) + (n+k) H(n,k-1).$$

The polynomials *yn*(*x*) satisfy the recurrence relation

$$y_{n+1}(x) = (2n+1) x y_n(x) + y_{n-1}(x) \quad \text{for } n \ge 0,$$

with initial conditions *y*−1(*x*) = *y*0(*x*) = 1. The first three Bessel polynomials are expressed as

$$\begin{aligned} y_1(x) &= 1 + x, \\ y_2(x) &= 1 + 3x + 3x^2, \\ y_3(x) &= 1 + 6x + 15x^2 + 15x^3. \end{aligned}$$
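The coefficients H(n, k) and the stated recurrences are easy to sanity-check (a Python sketch; the helper name is our own):

```python
from math import factorial

def H(n, k):
    """Coefficients of the Bessel polynomial y_n(x) = sum_k H(n,k) x^k."""
    if k < 0 or k > n:
        return 0
    return factorial(n + k) // (2 ** k * factorial(n - k) * factorial(k))

# y_3(x) = 1 + 6x + 15x^2 + 15x^3
assert [H(3, k) for k in range(4)] == [1, 6, 15, 15]
# H(n+1, k) = H(n, k) + (n + k) H(n, k-1)
for n in range(10):
    for k in range(n + 2):
        assert H(n + 1, k) == H(n, k) + (n + k) * H(n, k - 1)
print("closed form satisfies the recurrence")
```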

We present here a grammatical characterization of the Bessel polynomials *yn*(*x*).

**Proposition 4.** *Let G* = {*I* → *IX* + *IY*; *X* → 2*X*<sup>2</sup>; *Y* → *XY*}*. Then, we have*

$$D^n(I) = I \sum_{k \ge 0} H(n,k) X^k Y^{n-k} = I Y^n y_n(X/Y).$$

Note that *D<sup>n</sup>*(*X*) = *n*! 2<sup>*n*</sup> *X*<sup>*n*+1</sup> and *D<sup>n</sup>*(*Y*) = (2*n* − 1)!! *X<sup>n</sup>Y*. From Leibniz's formula, we obtain the following corollary:

**Corollary 4.** *For n* ≥ 0*, we have*

$$y_{n+1}(x) = \sum_{k=0}^{n} \binom{n}{k} (2n - 2k - 1)!! \, y_k(x) x^{n-k} + \sum_{k=0}^{n} \frac{n! \, 2^{n-k}}{k!} y_k(x) x^{n-k+1}.$$

*3.4. The Exponential Riordan Array* [exp (*x*/(1 − *x*)), *x*/(1 − *x*)]

**Definition 1** (see [16])**.** *The exponential Riordan group G is a set of infinite lower-triangular integer matrices, and each matrix in G is defined by a pair of generating functions g*(*x*) = *g*<sup>0</sup> + *g*<sup>1</sup>*x* + *g*<sup>2</sup>*x*<sup>2</sup> + · · · *and f*(*x*) = *f*<sup>1</sup>*x* + *f*<sup>2</sup>*x*<sup>2</sup> + · · ·*, with g*<sup>0</sup> ≠ 0 *and f*<sup>1</sup> ≠ 0*. The associated matrix is the matrix whose i-th column has exponential generating function g*(*x*)*f*(*x*)*<sup>i</sup>*/*i*! *(columns marked from 0). The matrix corresponding to the pair f*, *g is denoted by* [*g*, *f*]*.*

Let *R*(*n*, *k*) be the (*n*, *k*)-th element in the matrix [exp (*x*/(1 − *x*)), *x*/(1 − *x*)]. The associated Riordan array is given as follows:

$$
\begin{pmatrix}
1 \\
1 & 1 \\
3 & 4 & 1 \\
13 & 21 & 9 & 1 \\
73 & 136 & 78 & 16 & 1 \\
\vdots & & & & & \ddots
\end{pmatrix} \tag{6}
$$

From A059110 [11], we see that

$$R(n,k) = \sum_{i=0}^{n} L'(n,i) \binom{i}{k}$$

for 0 ≤ *k* ≤ *n*, where

$$L'(n,i) = \frac{n!}{i!} \binom{n-1}{i-1}$$

are the unsigned Lah numbers. It is routine to verify that

$$R(n+1,k) = R(n,k-1) + (n+k+1) R(n,k) + (k+1) R(n,k+1).$$

Hence, by Theorem 1, we obtain the following Proposition.

#### **Proposition 5.** *Let*

$$G = \{ I \rightarrow IX + IY;\; X \rightarrow 2XY + Y^2;\; Y \rightarrow Y^2 \}.$$

*Then, we have*

$$D^n(I) = I \sum_{k \ge 0} R(n,k) X^k Y^{n-k} := I Y^n r_n\left(\frac{X}{Y}\right).$$

Note that *D<sup>n</sup>*(*X*) = (*n* + 1)! *XY<sup>n</sup>* + *n* · *n*! *Y*<sup>*n*+1</sup> and *D<sup>n</sup>*(*Y*) = *n*! *Y*<sup>*n*+1</sup>. From Leibniz's formula, we obtain the following corollary:

**Corollary 5.** *For n* ≥ 0*, we have*

$$r_{n+1}(x) = (x+1) \sum_{k=0}^{n} \binom{n}{k} (n-k+1)! \, r_k(x).$$
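The recurrence for R(n, k) regenerates the array (6); a Python sketch (the function name is our own):

```python
def riordan_rows(rows):
    """Rows of R(n,k) via R(n+1,k) = R(n,k-1) + (n+k+1) R(n,k) + (k+1) R(n,k+1)."""
    tri = [[1]]  # R(0, 0) = 1
    for n in range(rows - 1):
        prev = tri[-1]
        def get(k):
            return prev[k] if 0 <= k < len(prev) else 0
        tri.append([get(k - 1) + (n + k + 1) * get(k) + (k + 1) * get(k + 1)
                    for k in range(n + 2)])
    return tri

print(riordan_rows(5))
# [[1], [1, 1], [3, 4, 1], [13, 21, 9, 1], [73, 136, 78, 16, 1]]
```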

In Table 1, we list some combinatorial sequences that satisfy (3). More examples can be found in similar tables in [17–19]. By using Theorem 1, we give the grammatical interpretation of the corresponding sequences, so that we can obtain more convolution formulas.

#### **4. Real Rootedness**

In this section, as an application, we focus on the real-rootedness of the row-generating functions of the array {*B*(*n*, *k*)}0≤*k*≤*n* in (3). Before proving our results, we recall some known facts.

Let {*Pn*(*x*)} denote a Sturm sequence, that is, a sequence of polynomials with deg *P<sup>n</sup>* = *n* such that *Pn*−1(*r*)*Pn*+1(*r*) < 0 whenever *Pn*(*r*) = 0 and *n* ≥ 1. Let RZ represent the set of polynomials with only real roots. {*Pn*(*x*)} is known as a generalized Sturm sequence (GSS) if *P<sup>n</sup>* ∈ RZ and the zeros of *Pn*(*x*) are separated by those of *Pn*−1(*x*) for *n* ≥ 1. As a special case of Corollary 2.4 in Liu and Wang [20] (see also Zhu, Yeh, and Lu [7]), the following result provides a unified approach to many polynomials with only real zeros.


**Table 1.** Some combinatorial sequences satisfying formula (3).

**Lemma 1.** *Let* {*Pn*(*x*)} *be a sequence of polynomials with nonnegative coefficients and* 0 ≤ deg *P<sup>n</sup>* − deg *Pn*−<sup>1</sup> ≤ 1*. Suppose that*

$$P_n(x) = (a_n x + b_n) P_{n-1}(x) + x (c_n x + d_n) P_{n-1}'(x),$$

*where an, b<sup>n</sup>* ∈ *R, and c<sup>n</sup>* ≤ 0*, d<sup>n</sup>* ≥ 0*. Then,* {*Pn*(*x*)}*n*≥<sup>0</sup> *is a generalized Sturm sequence.*

For the array $B(n,k)$ satisfying the recurrence relation (3) to be nonnegative, it is sufficient to assume that, for $n \ge 1$,

$$\begin{cases} a\_1 n + a\_2 k + a\_3 - a\_1 \ge 0 & \text{for } 1 \le k \le n,\\ b\_1 n + b\_2 k + b\_3 - b\_1 \ge 0 & \text{for } 0 \le k \le n - 1,\\ c(k+1) \ge 0 & \text{for } 0 \le k \le n - 2, \end{cases}$$

which is equivalent to

$$\begin{cases} a\_1 \ge 0, \quad a\_1 + a\_2 \ge 0, \quad a\_2 + a\_3 \ge 0,\\ b\_1 \ge 0, \quad b\_1 + b\_2 \ge 0, \quad b\_3 \ge 0,\\ c \ge 0. \end{cases}$$

Define $B\_n(x) = \sum\_{k=0}^{n} B(n,k)x^k$ for $n \ge 0$ as the row-generating functions of $B(n,k)$. Thus, $B\_0(x) = 1$ and

$$B\_1(\mathfrak{x}) = b\_3 + (a\_2 + a\_3)\mathfrak{x}.$$

Moreover, it follows from the recurrence relation (3) that $B\_n(x)$ satisfies

$$B\_n(x) = \big[b\_1 n + b\_3 - b\_1 + (a\_1 n + a\_2 + a\_3 - a\_1)x\big] B\_{n-1}(x) + \big(c + b\_2 x + a\_2 x^2\big) B\_{n-1}'(x),$$

which implies that

$$
\deg(B\_n(\mathfrak{x})) - \deg(B\_{n-1}(\mathfrak{x})) \le 1
$$

for each *n*.

**Theorem 2.** *Let $\{B(n,k)\}\_{n,k \ge 0}$ be the array defined in (3). Assume that $b\_2 = a\_2 + c$. Then, we have the following results:*

*(i) There exist polynomials An*(*x*) *for n* ≥ 0 *such that*

$$B\_n(\mathfrak{x}) = a^n (1+\mathfrak{x})^n A\_n(\frac{d}{1+\mathfrak{x}}),$$

*where $A\_n(x)$ satisfies the recurrence relation*

$$\begin{aligned} A\_n(x) &= \frac{1}{a}\Big\{(a\_1 + a\_2)n + a\_3 - a\_1 + \frac{(b\_1 + c - a\_1 - a\_2)n - c + b\_3 - b\_1 - a\_3 + a\_1}{d}\,x\Big\} A\_{n-1}(x)\\ &\quad + \frac{x}{a}\Big\{\frac{(a\_2 - c)x}{d} - a\_2\Big\} A'\_{n-1}(x) \end{aligned} \tag{7}$$

*with A*0(*x*) = 1*, a* > 0 *and d* > 0*.*

*(ii) Assume $b\_1 \ge a\_1$ and $b\_3 \ge a\_2 + a\_3$. If $a\_2 \le 0$, then $\{B\_n(x)\}\_{n \ge 0}$ is a generalized Sturm sequence.*

**Proof.** (*i*) Since $b\_2 = a\_2 + c$, it is clear that

$$B\_n(x) = \big[b\_1 n + b\_3 - b\_1 + (a\_1 n + a\_2 + a\_3 - a\_1)x\big]B\_{n-1}(x) + (c + a\_2 x)(1+x)B'\_{n-1}(x).$$

We prove (*i*) by induction on $n$. For $n = 1$, we obtain

$$A\_1(x) = \frac{1}{a}\Big\{a\_2 + a\_3 + \frac{b\_3 - a\_2 - a\_3}{d}\,x\Big\}, \qquad B\_1(x) = b\_3 + (a\_2 + a\_3)x.$$

Thus, we have

$$B\_1(\mathfrak{x}) = a(1+\mathfrak{x})A\_1(\frac{d}{1+\mathfrak{x}}).$$

By the induction hypothesis, it now turns out that

$$\begin{aligned} B'\_{n-1}(x) &= a^{n-1}(n-1)(x+1)^{n-2} A\_{n-1}\Big(\frac{d}{1+x}\Big) - a^{n-1}(x+1)^{n-1} A'\_{n-1}\Big(\frac{d}{1+x}\Big)\frac{d}{(1+x)^2}\\ &= \frac{(n-1)B\_{n-1}(x)}{1+x} - d\,a^{n-1}(x+1)^{n-3} A'\_{n-1}\Big(\frac{d}{1+x}\Big). \end{aligned}$$

It follows from the recurrence relation (7) that, for $n \ge 2$,

$$\begin{aligned} &a^n(1+x)^n A\_n\Big(\frac{d}{1+x}\Big)\\ &= \big\{\big((a\_1+a\_2)n + a\_3 - a\_1\big)(1+x) + (b\_1+c-a\_1-a\_2)n - c + b\_3 - b\_1 - a\_3 + a\_1\big\} B\_{n-1}(x)\\ &\quad - (c + a\_2 x)(n-1)B\_{n-1}(x) + (c + a\_2 x)(1+x)B'\_{n-1}(x) = B\_n(x). \end{aligned}$$

Thus, for $n \ge 1$, we have

$$B\_n(x) = a^n(1+x)^n A\_n\Big(\frac{d}{1+x}\Big).$$

(*ii*) In light of (*i*), $B\_n(x)$ forms a generalized Sturm sequence if and only if $A\_n(x)$ does. We first consider the nonnegativity of the coefficients of $A\_n(x)$. Let $A\_n(x) = \sum\_{k=0}^{n} A(n,k)x^k$ for $n \ge 0$. Then, according to the recurrence relation (7), we obtain

$$\begin{aligned} A(n,k) &= \frac{(a\_1+a\_2)n - a\_2k + a\_3 - a\_1}{a} A(n-1,k) \\ &+ \frac{(b\_1+c-a\_1-a\_2)n - (c-a\_2)k + b\_3 - b\_1 + a\_1 - a\_2 - a\_3}{ad} A(n-1,k-1) \end{aligned}$$

for $n \ge 1$. From the nonnegativity of $\{B(n,k)\}\_{n,k \ge 0}$, it follows that

$$a\_1 + a\_2 \ge 0, \quad a\_1 \ge 0, \quad a\_2 + a\_3 \ge 0.$$

Furthermore, by the hypotheses of (*ii*), we obtain

$$\begin{cases} b\_1 + c - a\_1 - a\_2 \ge c - a\_2 \ge 0, \\ (b\_1 + c - a\_1 - a\_2) - (c - a\_2) = b\_1 - a\_1 \ge 0, \\ (b\_1 + c - a\_1 - a\_2) - (c - a\_2) + b\_3 - b\_1 + a\_1 - a\_2 - a\_3 \ge 0. \end{cases}$$

Thus, $\{A(n,k)\}\_{n,k \ge 0}$ is a nonnegative array. According to the recurrence relation (7) and Lemma 1, the polynomials $A\_n(x)$ form a generalized Sturm sequence if $a\_2 \le 0$. Consequently, the polynomials $B\_n(x)$ also form a generalized Sturm sequence.

For example, the row-generating function of the Bell triangle $a(n,k)$ in Section 3 is $a\_n(x) = \sum\_{k=0}^{n} a(n,k)x^k$. These polynomials satisfy

$$a\_n(x) = (1+x)a\_{n-1}(x) + (1+x)a'\_{n-1}(x)$$

with *a*0(*x*) = 1. Using Theorem 2 (*i*), there exists an array *A*(*n*, *k*) such that

$$a\_n(\mathbf{x}) = \sum\_{k=0}^n a(n,k)\mathbf{x}^k = (1+\mathbf{x})^n A\_n(\frac{1}{1+\mathbf{x}})$$

where *An*(*x*) for *n* ≥ 1 satisfies the recurrence relation

$$A\_n(\mathbf{x}) = [(n-1)\mathbf{x} + 1]A\_{n-1}(\mathbf{x}) - \mathbf{x}^2 A\_{n-1}'(\mathbf{x})$$

with $A\_0(x) = 1$ and $A\_1(x) = 1$. Obviously, $A(n,k) = S(n, n-k)$ for $n \ge 1$. Applying Theorem 2 (*ii*), it follows that $\{a\_n(x)\}\_{n \ge 0}$ is a generalized Sturm sequence.
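As a quick computational check of this example, one can iterate the recurrence $a\_n(x) = (1+x)a\_{n-1}(x) + (1+x)a'\_{n-1}(x)$ symbolically and verify both real-rootedness and the interlacing of zeros numerically. This is only a sketch: it assumes `sympy` and `numpy` are available, and the depth 7 is an arbitrary choice.

```python
import numpy as np
import sympy as sp

x = sp.symbols('x')

# Iterate a_n(x) = (1+x) a_{n-1}(x) + (1+x) a'_{n-1}(x) with a_0(x) = 1.
polys = [sp.Integer(1)]
for n in range(1, 8):
    prev = polys[-1]
    polys.append(sp.expand((1 + x) * (prev + sp.diff(prev, x))))

# Each a_n (n >= 1) should have only real zeros, and the zeros of a_{n-1}
# should lie between consecutive zeros of a_n (generalized Sturm property).
all_roots = []
for p in polys[1:]:
    coeffs = [float(c) for c in sp.Poly(p, x).all_coeffs()]
    r = np.roots(coeffs)
    assert np.allclose(r.imag, 0.0, atol=1e-6), "non-real zero found"
    all_roots.append(np.sort(r.real))

eps = 1e-6  # tolerance for shared zeros such as x = -1
for prev_r, next_r in zip(all_roots, all_roots[1:]):
    assert all(next_r[i] - eps <= prev_r[i] <= next_r[i + 1] + eps
               for i in range(len(prev_r)))
```

The same loop, with the recurrence changed, can be used to experiment with other arrays satisfying (3).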

**Author Contributions:** Methodology, R.R.Z. and J.Y.; validation, F.R.; writing, R.R.Z. All authors have read and agreed to the published version of the manuscript.

**Funding:** The first author was supported by the National Natural Science Foundation of China (NSFC No. 11501090) and the Natural Science Foundation of Hebei Province (A2019501024). The second author was supported by MOST 110-2115-M-017-002-MY2; The third author's research was partially supported by the National Natural Science Foundation of China (NSFC No. 61807029).

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Not applicable.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


### *Article* **Several Double Inequalities for Integer Powers of the Sinc and Sinhc Functions with Applications to the Neuman–Sándor Mean and the First Seiffert Mean**

**Wen-Hui Li <sup>1</sup> , Qi-Xia Shen <sup>1</sup> and Bai-Ni Guo 2,3,\***


**Abstract:** In the paper, the authors establish a general inequality for the hyperbolic functions, extend the newly-established inequality to trigonometric functions, obtain some new inequalities involving the inverse sine and inverse hyperbolic sine functions, and apply these inequalities to the Neuman– Sándor mean and the first Seiffert mean.

**Keywords:** Neuman–Sándor mean; Seiffert mean; inequality; sinc function; sinhc function; inverse hyperbolic function; trigonometric function; necessary and sufficient condition

**MSC:** 26D07; 26E60; 41A30

**Citation:** Li, W.-H.; Shen, Q.-X.; Guo, B.-N. Several Double Inequalities for Integer Powers of the Sinc and Sinhc Functions with Applications to the Neuman–Sándor Mean and the First Seiffert Mean. *Axioms* **2022**, *11*, 304. https://doi.org/10.3390/axioms11070304

Academic Editor: Hari Mohan Srivastava

Received: 27 May 2022; Accepted: 21 June 2022; Published: 23 June 2022

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

#### **1. Introduction**

For $s, t > 0$ with $s \ne t$, the Neuman–Sándor mean $M(s,t)$, the first Seiffert mean $P(s,t)$, and the second Seiffert mean $T(s,t)$ are, respectively, defined in [1–3] by

$$M(s,t) = \frac{s-t}{2\operatorname{arcsinh}\frac{s-t}{s+t}}, \quad P(s,t) = \frac{s-t}{4\arctan\sqrt{\frac{s}{t}}-\pi}, \quad T(s,t) = \frac{s-t}{2\arctan\frac{s-t}{s+t}}.$$

where $\operatorname{arcsinh} x = \ln\big(x + \sqrt{x^2 + 1}\,\big)$ denotes the inverse hyperbolic sine function. The first Seiffert mean $P(s,t)$ can be rewritten ([1], Equation (2.4)) as

$$P(s,t) = \frac{s-t}{2\arcsin\frac{s-t}{s+t}}$$

Recently, these bivariate mean values have been the subject of intensive research. In particular, many remarkable inequalities and properties for the means *M*(*s*, *t*), *P*(*s*, *t*), and *T*(*s*, *t*) can be found in the literature [4–20].
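For numerical experiments with these means, the defining formulas translate directly into code. The sketch below (plain Python; the function names are ours, not the paper's) also confirms that the arctan and arcsin forms of $P(s,t)$ agree:

```python
import math

def M(s, t):
    """Neuman-Sandor mean."""
    return (s - t) / (2 * math.asinh((s - t) / (s + t)))

def P(s, t):
    """First Seiffert mean, arctan form."""
    return (s - t) / (4 * math.atan(math.sqrt(s / t)) - math.pi)

def P_arcsin(s, t):
    """First Seiffert mean, rewritten with arcsin."""
    return (s - t) / (2 * math.asin((s - t) / (s + t)))

def T(s, t):
    """Second Seiffert mean."""
    return (s - t) / (2 * math.atan((s - t) / (s + t)))

s, t = 3.0, 1.0
assert math.isclose(P(s, t), P_arcsin(s, t), rel_tol=1e-12)
assert P(s, t) < (s + t) / 2 < M(s, t) < T(s, t)  # sample ordering P < A < M < T
```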

Let $A(s,t) = \frac{s+t}{2}$, $H(s,t) = \frac{2st}{s+t}$, and $C(s,t) = \frac{s^2+t^2}{s+t}$ be the arithmetic, harmonic, and contra-harmonic means of two positive numbers $s$ and $t$. The inequalities

$$H(\mathbf{s}, t) < P(\mathbf{s}, t) < A(\mathbf{s}, t) < T(\mathbf{s}, t) < \mathcal{C}(\mathbf{s}, t) \tag{1}$$


hold for all $s, t > 0$ with $s \ne t$. In [1,21], it was established that

$$P(s,t) < M(s,t) < T^2(s,t), \quad A(s,t) < M(s,t) < T(s,t), \tag{2}$$

$$A(s,t)T(s,t) < M^2(s,t) < \frac{A^2(s,t) + T^2(s,t)}{2}$$

for $s, t > 0$ with $s \ne t$. For $z \in \mathbb{C}$, the functions

$$\operatorname{sinc} z = \begin{cases} \dfrac{\sin z}{z}, & z \ne 0,\\ 1, & z = 0, \end{cases} \quad\text{and}\quad \operatorname{sinhc} z = \begin{cases} \dfrac{\sinh z}{z}, & z \ne 0,\\ 1, & z = 0 \end{cases}$$

are called the sinc function and hyperbolic sinc function, respectively. The function sinc *z* is also called the sine cardinal or sampling function, and the function sinhc *z* is also called the hyperbolic sine cardinal; see [22]. The sinc function sinc *z* arises frequently in signal processing, the theory of Fourier transforms, and other areas in mathematics, physics, and engineering. It is easy to see that these two functions sinc *z* and sinhc *z* are analytic on C, that is, they are entire functions.
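A numerically careful implementation of these two functions avoids the $0/0$ form near the origin by switching to a short Taylor expansion; the threshold and truncation order below are illustrative choices, not taken from the paper:

```python
import math

def sinc(z: float) -> float:
    # Near 0, use sin(z)/z = 1 - z^2/6 + z^4/120 - ... to avoid cancellation.
    if abs(z) < 1e-4:
        return 1.0 - z * z / 6.0 + z**4 / 120.0
    return math.sin(z) / z

def sinhc(z: float) -> float:
    # Near 0, use sinh(z)/z = 1 + z^2/6 + z^4/120 + ...
    if abs(z) < 1e-4:
        return 1.0 + z * z / 6.0 + z**4 / 120.0
    return math.sinh(z) / z

assert sinc(0.0) == 1.0 and sinhc(0.0) == 1.0
assert abs(sinc(math.pi)) < 1e-15            # sin(pi)/pi vanishes
assert sinhc(1.0) > 1.0 > sinc(1.0)          # sinhc x > 1 > sinc x for x != 0
```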

In [23], the authors obtained double inequalities for the Neuman–Sándor mean in terms of the arithmetic and contra-harmonic means, and they deduced that the inequalities

$$\begin{split} 1 - \beta\_1\Big(1 - \frac{1}{\cosh^2\theta}\Big) &< \frac{1}{\operatorname{sinhc}\theta} < 1 - \alpha\_1\Big(1 - \frac{1}{\cosh^2\theta}\Big),\\ 1 - \beta\_2\Big(1 - \frac{1}{\cosh^4\theta}\Big) &< \frac{1}{\operatorname{sinhc}^2\theta} < 1 - \alpha\_2\Big(1 - \frac{1}{\cosh^4\theta}\Big),\\ 1 + \alpha\_3\big(\cosh^4\theta - 1\big) &< \operatorname{sinhc}^2\theta < 1 + \beta\_3\big(\cosh^4\theta - 1\big) \end{split} \tag{3}$$

hold for $\theta \in \big(0, \ln\big(1 + \sqrt{2}\,\big)\big)$ if and only if

$$\begin{aligned} \alpha\_1 &\le \frac{1}{6} \quad\text{and}\quad \beta\_1 \ge 2\Big[1 - \ln\big(1 + \sqrt{2}\,\big)\Big] = 0.237253\ldots,\\ \alpha\_2 &\le \frac{1}{6} \quad\text{and}\quad \beta\_2 \ge \frac{4}{3}\Big[1 - \ln^2\big(1 + \sqrt{2}\,\big)\Big] = 0.297574\ldots,\\ \alpha\_3 &\le \frac{1 - \ln^2\big(1 + \sqrt{2}\,\big)}{3\ln^2\big(1 + \sqrt{2}\,\big)} = 0.095767\ldots \quad\text{and}\quad \beta\_3 \ge \frac{1}{6}, \end{aligned}$$

respectively.

In this paper, motivated by the double inequalities in (3), we will obtain necessary and sufficient conditions on $\alpha$ and $\beta$ such that the double inequalities

$$1 - \alpha + \alpha\cosh^{2r} x < \operatorname{sinhc}^r x < 1 - \beta + \beta\cosh^{2r} x \tag{4}$$

and

$$1 - \alpha + \alpha\cos^{2r} x < \operatorname{sinc}^r x < 1 - \beta + \beta\cos^{2r} x \tag{5}$$

are valid on $(-\infty, \infty)$ for certain ranges of $r \in \mathbb{R}$. Subsequently, applying the double inequalities (4) and (5) to the Neuman–Sándor mean $M(s,t)$ and the first Seiffert mean $P(s,t)$, we will derive generalizations of some inequalities for these two means.

#### **2. Lemmas**

To achieve our main purposes, we need the following lemmas.

**Lemma 1** ([24], Theorem 1.25)**.** *For $-\infty < s < t < \infty$, let $f, g$ be continuous on $[s,t]$, differentiable on $(s,t)$, and let $g'(x) \ne 0$ on $(s,t)$. If the ratio $\frac{f'(x)}{g'(x)}$ is increasing on $(s,t)$, then so are the functions $\frac{f(x)-f(s)}{g(x)-g(s)}$ and $\frac{f(x)-f(t)}{g(x)-g(t)}$.*

**Lemma 2** ([25], Lemma 1.1)**.** *Suppose that the power series $f(x) = \sum\_{n=0}^{\infty} a\_n x^n$ and $g(x) = \sum\_{n=0}^{\infty} b\_n x^n$ have the radius of convergence $r > 0$ and that $b\_n > 0$ for all $n \in \mathbb{N}\_0 = \{0, 1, 2, \ldots\}$. Let $h(x) = \frac{f(x)}{g(x)}$. Then the following statements are true.*


The classical Bernoulli numbers *B<sup>n</sup>* for *n* ≥ 0 are generated in ([26], p. 3) by

$$\frac{z}{e^z - 1} = \sum\_{n=0}^{\infty} B\_n \frac{z^n}{n!} = 1 - \frac{z}{2} + \sum\_{n=1}^{\infty} B\_{2n} \frac{z^{2n}}{(2n)!}, \quad |z| < 2\pi.$$

In the recent papers [27–29], some novel results for the even-indexed Bernoulli numbers *B*2*<sup>n</sup>* were discovered.

**Lemma 3** ([30])**.** *Let $B\_{2n}$ be the even-indexed Bernoulli numbers. Then*

$$\frac{\mathbf{x}}{\sin \mathbf{x}} = 1 + \sum\_{n=1}^{\infty} \frac{2^{2n} - 2}{(2n)!} |B\_{2n}| \mathbf{x}^{2n}, \quad 0 < |\mathbf{x}| < \pi. \tag{6}$$

**Lemma 4** ([30–32])**.** *Let $B\_{2n}$ be the even-indexed Bernoulli numbers. Then*

$$\cot x = \frac{1}{x} - \sum\_{n=1}^{\infty} \frac{2^{2n}}{(2n)!} |B\_{2n}| x^{2n-1}$$

*and*

$$\frac{1}{\sin^2 \mathbf{x}} = \csc^2 \mathbf{x} = \frac{1}{\mathbf{x}^2} + \sum\_{n=1}^{\infty} \frac{2^{2n} (2n - 1)}{(2n)!} |B\_{2n}| \mathbf{x}^{2n - 2} \tag{7}$$

*for* 0 < |*x*| < *π.*
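The expansions (6) and (7) are easy to confirm numerically. This sketch assumes `sympy` is available for the Bernoulli numbers and truncates each series at 30 terms (an arbitrary cutoff; convergence inside $|x| < \pi$ is fast):

```python
import math
from sympy import bernoulli

x = 0.7   # any sample point with 0 < |x| < pi
N = 30    # truncation order (illustrative)

# Series (6): x/sin(x) = 1 + sum_{n>=1} (2^{2n} - 2)/(2n)! |B_{2n}| x^{2n}
lhs6 = x / math.sin(x)
rhs6 = 1 + sum((2**(2*n) - 2) / math.factorial(2*n)
               * abs(float(bernoulli(2*n))) * x**(2*n) for n in range(1, N))
assert abs(lhs6 - rhs6) < 1e-10

# Series (7): 1/sin^2(x) = 1/x^2 + sum_{n>=1} 2^{2n}(2n-1)/(2n)! |B_{2n}| x^{2n-2}
lhs7 = 1 / math.sin(x)**2
rhs7 = 1 / x**2 + sum(2**(2*n) * (2*n - 1) / math.factorial(2*n)
                      * abs(float(bernoulli(2*n))) * x**(2*n - 2) for n in range(1, N))
assert abs(lhs7 - rhs7) < 1e-10
```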

**Lemma 5.** *The function*

$$h\_1(x) = \frac{2\sinh^2 x\cosh x - x\sinh x - x^2\cosh^3 x}{(x - \sinh x\cosh x - x\sinh^2 x)(x\cosh x - \sinh x)}$$

*is increasing on* (0, ∞) *and has the limits*

$$\lim\_{x\to 0^{+}} h\_1(x) = \frac{17}{25} \quad\text{and}\quad \lim\_{x\to\infty} h\_1(x) = 1. \tag{8}$$

**Proof.** Let

$$A(\mathfrak{x}) = 2\sinh^2\mathfrak{x}\cosh\mathfrak{x} - \mathfrak{x}\sinh\mathfrak{x} - \mathfrak{x}^2\cosh^3\mathfrak{x}$$

and

$$B(\mathbf{x}) = (\mathbf{x} - \sinh \mathbf{x} \cosh \mathbf{x} - \mathbf{x} \sinh^2 \mathbf{x})(\mathbf{x} \cosh \mathbf{x} - \sinh \mathbf{x}).$$

Straightforward computation gives

$$\begin{split} A(x) &= 2\cosh^3 x - 2\cosh x - x\sinh x - x^2\cosh^3 x\\ &= \frac{\cosh 3x}{2} - \frac{\cosh x}{2} - \frac{x^2\cosh 3x}{4} - \frac{3x^2\cosh x}{4} - x\sinh x\\ &= \frac{1}{2}\sum\_{n=0}^{\infty}\frac{(3x)^{2n}}{(2n)!} - \frac{1}{2}\sum\_{n=0}^{\infty}\frac{x^{2n}}{(2n)!} - \frac{x^2}{4}\sum\_{n=0}^{\infty}\frac{(3x)^{2n}}{(2n)!} - \frac{3x^2}{4}\sum\_{n=0}^{\infty}\frac{x^{2n}}{(2n)!} - x\sum\_{n=0}^{\infty}\frac{x^{2n+1}}{(2n+1)!}\\ &= \frac{1}{2}\sum\_{n=0}^{\infty}\frac{(3x)^{2n+2}}{(2n+2)!} - \frac{1}{2}\sum\_{n=0}^{\infty}\frac{x^{2n+2}}{(2n+2)!} - \frac{1}{4}\sum\_{n=0}^{\infty}\frac{3^{2n}x^{2n+2}}{(2n)!} - \frac{3}{4}\sum\_{n=0}^{\infty}\frac{x^{2n+2}}{(2n)!} - \sum\_{n=0}^{\infty}\frac{x^{2n+2}}{(2n+1)!}\\ &= \frac{1}{2}\sum\_{n=2}^{\infty}\frac{3^{2n}\big({-2n^2} - 3n + 8\big) - 6n^2 - 13n - 8}{(2n+2)!}\,x^{2n+2} \end{split}$$

and

$$\begin{split} B(\mathbf{x}) &= \mathbf{x}^2 \cosh \mathbf{x} - 2\mathbf{x} \sinh \mathbf{x} - \mathbf{x}^2 \sinh^2 \mathbf{x} \cosh \mathbf{x} + \sinh^2 \mathbf{x} \cosh \mathbf{x} \\ &= \mathbf{x}^2 \cosh \mathbf{x} - 2\mathbf{x} \sinh \mathbf{x} - \frac{\mathbf{x}^2 \cosh 3\mathbf{x}}{4} + \frac{\mathbf{x}^2 \cosh \mathbf{x}}{4} + \frac{\cosh 3\mathbf{x}}{4} - \frac{\cosh \mathbf{x}}{4} \\ &= \frac{5}{4} \sum\_{n=0}^{\infty} \frac{\mathbf{x}^{2n+2}}{(2n)!} - 2 \sum\_{n=0}^{\infty} \frac{\mathbf{x}^{2n+2}}{(2n+1)!} - \frac{1}{4} \sum\_{n=0}^{\infty} \frac{3^{2n} \mathbf{x}^{2n+2}}{(2n)!} + \frac{1}{4} \sum\_{n=0}^{\infty} \frac{(3\mathbf{x})^{2n}}{(2n)!} - \frac{1}{4} \sum\_{n=0}^{\infty} \frac{\mathbf{x}^{2n}}{(2n)!} \\ &= \frac{1}{4} \sum\_{n=2}^{\infty} \frac{3^{2n} (-4n^2 - 6n + 7) + 20n^2 + 14n - 7}{(2n+2)!} \mathbf{x}^{2n+2}. \end{split}$$

Let

$$a\_n = \frac{3^{2n}(-2n^2 - 3n + 8) - 6n^2 - 13n - 8}{2(2n + 2)!}$$

and

$$b\_n = \frac{3^{2n}\big({-4n^2} - 6n + 7\big) + 20n^2 + 14n - 7}{4(2n + 2)!}.$$

Simple computation leads to

$$\begin{split} a\_n = \frac{3^{2n}\big({-2n^2} - 3n + 8\big) - 6n^2 - 13n - 8}{2(2n + 2)!} &\le \frac{3^4\big({-2n^2} - 3n + 8\big) - 6n^2 - 13n - 8}{2(2n + 2)!}\\ &= \frac{-168n^2 - 256n + 640}{2(2n + 2)!} \le -\frac{272}{(2n + 2)!} < 0 \end{split}$$

for all $n \in \mathbb{N}$ with $n \ge 2$, whereas, again for all $n \in \mathbb{N}$ with $n \ge 2$,

$$\begin{split} b\_n &= \frac{3^{2n}(-4n^2 - 6n + 7) + 20n^2 + 14n - 7}{4(2n + 2)!} \leq \frac{3^4(-4n^2 - 6n + 7) + 20n^2 + 14n - 7}{4(2n + 2)!} \\ &= \frac{-304n^2 - 472n + 560}{4(2n + 2)!} \leq -\frac{400}{(2n + 2)!} < 0. \end{split} \tag{9}$$

Consequently, we obtain

$$\begin{aligned} c\_n &= \frac{-a\_n}{-b\_n} = 2 \times \frac{3^{2n}(2n^2 + 3n - 8) + 6n^2 + 13n + 8}{3^{2n}(4n^2 + 6n - 7) - 20n^2 - 14n + 7} \\ &= \frac{9^n(4n^2 + 6n - 16) + 12n^2 + 26n + 16}{9^n(4n^2 + 6n - 7) - 20n^2 - 14n + 7} \\ &= 1 + \frac{-9^{n+1} + 32n^2 + 40n + 9}{9^n(4n^2 + 6n - 7) - 20n^2 - 14n + 7} \\ &\overset{\Delta}{=} 1 + k(n) \end{aligned} \tag{10}$$

for *n* ∈ N and *n* ≥ 2. Let

$$k(\mathbf{x}) = \frac{-9^{\mathbf{x}+1} + 32\mathbf{x}^2 + 40\mathbf{x} + 9}{9^{\mathbf{x}}(4\mathbf{x}^2 + 6\mathbf{x} - 7) - 20\mathbf{x}^2 - 14\mathbf{x} + 7}$$

for *x* ∈ [2, ∞). Then

$$k'(x) = \frac{\ell(x)}{\big[9^x\big(4x^2 + 6x - 7\big) - 20x^2 - 14x + 7\big]^2},$$

where

$$\begin{split} \ell(x) &= \big({-9^{x+1}}\ln 9 + 64x + 40\big)\big[9^x\big(4x^2 + 6x - 7\big) - 20x^2 - 14x + 7\big]\\ &\quad - \big({-9^{x+1}} + 32x^2 + 40x + 9\big)\big[9^x\big(4x^2 + 6x - 7\big)\ln 9 + 9^x(8x + 6) - 40x - 14\big]\\ &= 9^{2x+1}(8x + 6) + 9^x\big[9\big(20x^2 + 14x - 7\big) - \big(4x^2 + 6x - 7\big)\big(32x^2 + 40x + 9\big)\big]\ln 9\\ &\quad + 9^x\big[(64x + 40)\big(4x^2 + 6x - 7\big) - 9(40x + 14) - (8x + 6)\big(32x^2 + 40x + 9\big)\big]\\ &\quad - (64x + 40)\big(20x^2 + 14x - 7\big) + (40x + 14)\big(32x^2 + 40x + 9\big)\\ &= 9^{2x+1}(8x + 6) + 9^x\big(352x + 128x^2 - 352x^3 - 128x^4\big)\ln 9\\ &\quad + 9^x \times 4\big({-115} - 220x + 8x^2\big) + 406 + 808x + 352x^2\\ &= 2 \times 9^x\big[9^{x+1}(3 + 4x) + \big(176x + 64x^2 - 176x^3 - 64x^4\big)\ln 9 - 230 - 440x + 16x^2\big]\\ &\quad + 406 + 808x + 352x^2. \end{split}$$

Let

$$m(x) = 9^{x+1}(3 + 4x) + \big(176x + 64x^2 - 176x^3 - 64x^4\big)\ln 9 - 230 - 440x + 16x^2.$$

Then

$$\begin{aligned} m'(x) &= 9^{x+1}\ln 9\,(3 + 4x) + 4 \times 9^{x+1} + \big(176 + 128x - 528x^2 - 256x^3\big)\ln 9 - 440 + 32x,\\ m'(2) &= 4291\ln 9 + 2540 > 0,\\ m''(x) &= \ln^2 9 \times 9^{x+1}(3 + 4x) + 8\ln 9 \times 9^{x+1} + \big(128 - 1056x - 768x^2\big)\ln 9 + 32,\\ m''(2) &= 8019\ln^2 9 + 776\ln 9 + 32 > 0,\\ m^{(3)}(x) &= \ln^3 9 \times 9^{x+1}(3 + 4x) + 12\ln^2 9 \times 9^{x+1} - (1056 + 1536x)\ln 9,\\ m^{(3)}(2) &= 8019\ln^3 9 + 8748\ln^2 9 - 4128\ln 9 > 0,\\ m^{(4)}(x) &= \ln^4 9 \times 9^{x+1}(3 + 4x) + 16\ln^3 9 \times 9^{x+1} - 1536\ln 9\\ &> \ln^4 9 \times 9^{x+1}(3 + 4x) + 11664\ln^3 9 - 1536\ln 9 > 0 \end{aligned}$$

on [2, ∞). Therefore, the function *m*(*x*) is increasing on [2, ∞) and

$$m(2) = 6973 - 1824\ln 9 > 6973 - 1824 \times 3 = 1501 > 0.$$

Hence, it follows that $\ell(x) > 0$, and so the function $k(x)$ is increasing on $[2, \infty)$.

According to (10), we can observe that $c\_n$ is increasing for $n \in \mathbb{N}$ with $n \ge 2$. Thus, based on Lemma 2, the function $h\_1(x) = \frac{A(x)}{B(x)}$ is increasing on $(0, \infty)$.

The limits in (8) are straightforward. The proof of Lemma 5 is complete.
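Although the proof above is entirely analytic, the monotonicity and the limits (8) can be spot-checked numerically; the grid points and tolerances in this sketch are arbitrary choices:

```python
import math

def h1(x: float) -> float:
    # The function of Lemma 5, evaluated directly from its definition.
    s, c = math.sinh(x), math.cosh(x)
    num = 2 * s * s * c - x * s - x * x * c**3
    den = (x - s * c - x * s * s) * (x * c - s)
    return num / den

vals = [h1(x) for x in (0.3, 0.5, 1.0, 2.0, 5.0, 12.0)]
assert all(a < b for a, b in zip(vals, vals[1:]))   # increasing on the grid
assert abs(h1(0.05) - 17 / 25) < 1e-2               # limit 17/25 at 0+
assert abs(h1(12.0) - 1.0) < 1e-2                   # limit 1 at infinity
```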

#### **3. Necessary and Sufficient Conditions**

Now we are in a position to state and prove our main results.

**Theorem 1.** *Let x*,*r* ∈ R*.*


**Proof.** Let

$$F(x) = \frac{\operatorname{sinhc}^r x - 1}{\cosh^{2r} x - 1} \stackrel{\Delta}{=} \frac{f\_1(x)}{f\_2(x)},$$

where $f\_1(x) = \operatorname{sinhc}^r x - 1$ and $f\_2(x) = \cosh^{2r} x - 1$. Then

$$\frac{f\_1'(x)}{f\_2'(x)} = \frac{\sinh^{r-2} x\,(x\cosh x - \sinh x)}{2x^{r+1}\cosh^{2r-1} x}$$

and

$$\begin{split} \left[\frac{f\_1'(x)}{f\_2'(x)}\right]' &= \frac{r-1}{2}\left(\frac{\sinh x}{x\cosh^2 x}\right)^{r-2}\frac{x - \sinh x\cosh x - x\sinh^2 x}{x^2\cosh^3 x}\,\frac{x\cosh x - \sinh x}{x^2\sinh x\cosh x}\\ &\quad + \frac{1}{2}\left(\frac{\sinh x}{x\cosh^2 x}\right)^{r-1}\frac{2\sinh^2 x\cosh x - x^2\cosh^3 x - x\sinh x}{x^3\sinh^2 x\cosh^2 x}\\ &= \frac{1}{2}\left(\frac{\sinh x}{x\cosh^2 x}\right)^{r-2}\frac{1}{x^4\sinh x\cosh^4 x}\Big[(r-1)\big(x - \sinh x\cosh x - x\sinh^2 x\big)\\ &\quad\times (x\cosh x - \sinh x) + \big(2\sinh^2 x\cosh x - x\sinh x - x^2\cosh^3 x\big)\Big]\\ &= \frac{1}{2}\left(\frac{\sinh x}{x\cosh^2 x}\right)^{r-2}\frac{(x - \sinh x\cosh x - x\sinh^2 x)(x\cosh x - \sinh x)}{x^4\sinh x\cosh^4 x}\\ &\quad\times\left[r - 1 + \frac{2\sinh^2 x\cosh x - x\sinh x - x^2\cosh^3 x}{(x - \sinh x\cosh x - x\sinh^2 x)(x\cosh x - \sinh x)}\right]\\ &= \frac{1}{2}\left(\frac{\sinh x}{x\cosh^2 x}\right)^{r-2}\frac{B(x)}{x^4\sinh x\cosh^4 x}\,[r - 1 + h\_1(x)]. \end{split}$$

Based on the estimate (9) in the proof of Lemma 5, we observe that $B(x) < 0$ on $(0, \infty)$.

When $r \ge \frac{8}{25}$ and $x \in (0, \infty)$, we have $r - 1 + h\_1(x) > 0$, and then $\frac{f\_1'(x)}{f\_2'(x)}$ is decreasing on $(0, \infty)$. Accordingly, by Lemma 1, the function $F(x) = \frac{f\_1(x)}{f\_2(x)} = \frac{f\_1(x) - f\_1(0^+)}{f\_2(x) - f\_2(0^+)}$ is decreasing on $(0, \infty)$.

When $r < 0$ and $x \in (0, \infty)$, we have $r - 1 + h\_1(x) < 0$, and then $\frac{f\_1'(x)}{f\_2'(x)}$ is increasing on $(0, \infty)$. Accordingly, by Lemma 1, the function $F(x) = \frac{f\_1(x)}{f\_2(x)} = \frac{f\_1(x) - f\_1(0^+)}{f\_2(x) - f\_2(0^+)}$ is increasing on $(0, \infty)$.

It is straightforward that $\lim\_{x\to 0^+} F(x) = \frac{1}{6}$. The proof of Theorem 1 is thus complete.

**Corollary 1.** *Let r* > 0 *and x* ∈ R*. Then the inequality*

$$\frac{1}{\operatorname{sinhc}^r x} < 1 - \alpha + \alpha\left(\frac{1}{\cosh x}\right)^{2r}$$

*holds if and only if $\alpha \le \frac{1}{6}$.*

**Corollary 2.** *Let $x \in \mathbb{R}$ with $x \ne 0$. Then*

$$\frac{1}{\cosh^2 x} < \frac{1}{\operatorname{sinhc} x} < \frac{5}{6} + \frac{1}{6\cosh^2 x} < 1 < \operatorname{sinhc} x < \frac{5}{6} + \frac{\cosh^2 x}{6} < \cosh^2 x.$$

**Corollary 3.** *Let $t \ne 0$. Then*

$$\frac{1}{1+t^2} < \frac{\operatorname{arcsinh} t}{t} < \frac{5}{6} + \frac{1}{6(1+t^2)} < 1 < \frac{t}{\operatorname{arcsinh} t} < \frac{5}{6} + \frac{1+t^2}{6} < 1+t^2$$
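Since arcsinh is an odd function, positive sample points suffice; the following sketch spot-checks the whole chain of Corollary 3 at a few arbitrary values of $t$:

```python
import math

# Corollary 3: bounds for arcsinh(t)/t and its reciprocal.
for t in (0.1, 0.5, 1.0, 3.0):
    r = math.asinh(t) / t
    assert 1 / (1 + t*t) < r < 5/6 + 1 / (6*(1 + t*t)) < 1
    assert 1 < 1/r < 5/6 + (1 + t*t)/6 < 1 + t*t
```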


**Theorem 2.** *Let $r \in \mathbb{R}$. For $x \in \big(0, \frac{\pi}{2}\big)$,*


**Proof.** Let

$$G(x) = \frac{\operatorname{sinc}^r x - 1}{\cos^{2r} x - 1} \stackrel{\Delta}{=} \frac{g\_1(x)}{g\_2(x)},$$

where $g\_1(x) = \operatorname{sinc}^r x - 1$ and $g\_2(x) = \cos^{2r} x - 1$. Then

$$\frac{g\_1'(\mathbf{x})}{g\_2'(\mathbf{x})} = -\frac{1}{2} \left( \frac{\sin \mathbf{x}}{\mathbf{x} \cos^2 \mathbf{x}} \right)^{r-1} \frac{\mathbf{x} \cos \mathbf{x} - \sin \mathbf{x}}{\mathbf{x}^2 \sin \mathbf{x} \cos \mathbf{x}}$$

and

$$\begin{split} \left[\frac{g\_1'(x)}{g\_2'(x)}\right]' &= \frac{r-1}{2}\left(\frac{\sin x}{x\cos^2 x}\right)^{r-2}\frac{x - \sin x\cos x + x\sin^2 x}{x^2\cos^3 x}\,\frac{\sin x - x\cos x}{x^2\sin x\cos x}\\ &\quad + \frac{1}{2}\left(\frac{\sin x}{x\cos^2 x}\right)^{r-1}\frac{x^2\cos^3 x + x\sin x - 2\sin^2 x\cos x}{x^3\sin^2 x\cos^2 x}\\ &= \frac{1}{2}\left(\frac{\sin x}{x\cos^2 x}\right)^{r-2}\frac{1}{x^4\sin x\cos^4 x}\Big[(r-1)\big(x - \sin x\cos x + x\sin^2 x\big)\\ &\quad\times (\sin x - x\cos x) + \big(x^2\cos^3 x + x\sin x - 2\sin^2 x\cos x\big)\Big]\\ &= \frac{1}{2}\left(\frac{\sin x}{x\cos^2 x}\right)^{r-2}\frac{2x\sin x - \sin^2 x\cos x - x^2\cos x - x^2\sin^2 x\cos x}{x^4\sin x\cos^4 x}\\ &\quad\times\left(r + \frac{2x^2\cos x - x\sin x - \sin^2 x\cos x}{2x\sin x - \sin^2 x\cos x - x^2\cos x - x^2\sin^2 x\cos x}\right)\\ &= \frac{1}{2}\left(\frac{\sin x}{x\cos^2 x}\right)^{r-2}\frac{2x\sin x - \sin^2 x\cos x - x^2\cos x - x^2\sin^2 x\cos x}{x^4\sin x\cos^4 x}\,[r + u(x)], \end{split}$$

where

$$\begin{split} u(\mathbf{x}) &= \frac{2\mathbf{x}^2 \cos \mathbf{x} - \mathbf{x} \sin \mathbf{x} - \sin^2 \mathbf{x} \cos \mathbf{x}}{2\mathbf{x} \sin \mathbf{x} - \sin^2 \mathbf{x} \cos \mathbf{x} - \mathbf{x}^2 \cos \mathbf{x} - \mathbf{x}^2 \sin^2 \mathbf{x} \cos \mathbf{x}} \\ &= \frac{\frac{2\mathbf{x}^2}{\sin^2 \mathbf{x}} - \frac{2\mathbf{x}}{\sin 2\mathbf{x}} - 1}{\frac{4\mathbf{x}}{\sin 2\mathbf{x}} - 1 - \frac{\mathbf{x}^2}{\sin^2 \mathbf{x}} - \mathbf{x}^2} \overset{\triangle}{=} \frac{D(\mathbf{x})}{E(\mathbf{x})} \end{split}$$

with

$$D(\mathbf{x}) = \frac{2\mathbf{x}^2}{\sin^2 \mathbf{x}} - \frac{2\mathbf{x}}{\sin 2\mathbf{x}} - 1 \quad \text{and} \quad E(\mathbf{x}) = \frac{4\mathbf{x}}{\sin 2\mathbf{x}} - 1 - \frac{\mathbf{x}^2}{\sin^2 \mathbf{x}} - \mathbf{x}^2.$$

By virtue of (6) and (7), we have

$$\begin{split} D(\mathbf{x}) &= 2\mathbf{x}^2 \Big[ \frac{1}{\mathbf{x}^2} + \sum\_{n=1}^{\infty} \frac{2^{2n}(2n-1)}{(2n)!} |B\_{2n}| \mathbf{x}^{2n-2} \Big] - \left[ 1 + \sum\_{n=1}^{\infty} \frac{2^{2n} - 2}{(2n)!} |B\_{2n}| (2\mathbf{x})^{2n} \right] - 1 \\ &= \sum\_{n=1}^{\infty} \frac{2^{2n+1}(2n-1)}{(2n)!} |B\_{2n}| \mathbf{x}^{2n} - \sum\_{n=1}^{\infty} \frac{2^{2n} - 2}{(2n)!} |B\_{2n}| (2\mathbf{x})^{2n} \end{split}$$

$$= \sum\_{n=2}^{\infty} \frac{2^{2n}\big(4n - 2^{2n}\big)}{(2n)!} |B\_{2n}|\, x^{2n} \stackrel{\Delta}{=} \sum\_{n=2}^{\infty} d\_n x^{2n}$$

and

$$\begin{split} E(\mathbf{x}) &= 2 \left[ 1 + \sum\_{n=1}^{\infty} \frac{2^{2n} - 2}{(2n)!} |B\_{2n}| (2\mathbf{x})^{2n} \right] - \mathbf{x}^2 \left[ \frac{1}{\mathbf{x}^2} + \sum\_{n=1}^{\infty} \frac{2^{2n} (2n - 1)}{(2n)!} |B\_{2n}| \mathbf{x}^{2n - 2} \right] - \mathbf{x}^2 - 1 \\ &= \sum\_{n=1}^{\infty} \frac{\left( 2^{2n + 1} - 2n - 3 \right) 2^{2n}}{(2n)!} |B\_{2n}| \mathbf{x}^{2n} - \mathbf{x}^2 \\ &= \sum\_{n=2}^{\infty} \frac{(2^{2n + 1} - 2n - 3) 2^{2n}}{(2n)!} |B\_{2n}| \mathbf{x}^{2n} \triangleq \sum\_{n=2}^{\infty} e\_n \mathbf{x}^{2n}, \end{split}$$

where

$$d\_n = \frac{2^{2n} (4n - 2^{2n})}{(2n)!} |B\_{2n}| \quad \text{and} \quad e\_n = \frac{(2^{2n+1} - 2n - 3)2^{2n}}{(2n)!} |B\_{2n}| > 0.$$

Since the sequence $c\_n = \frac{d\_n}{e\_n} = \frac{4n - 2^{2n}}{2^{2n+1} - 2n - 3}$ for $n = 2, 3, \ldots$ is decreasing, according to Lemma 2, the function $u(x) = \frac{D(x)}{E(x)}$ is decreasing from $\big(0, \frac{\pi}{2}\big)$ onto $\big({-\frac{1}{2}}, {-\frac{8}{25}}\big)$. When $r \ge \frac{1}{2}$, the function $\frac{g\_1'(x)}{g\_2'(x)}$ is increasing on $\big(0, \frac{\pi}{2}\big)$, and based on Lemma 1, the function $G(x) = \frac{g\_1(x)}{g\_2(x)} = \frac{g\_1(x) - g\_1(0^+)}{g\_2(x) - g\_2(0^+)}$ is increasing on $\big(0, \frac{\pi}{2}\big)$. When $r \le \frac{8}{25}$, the function $\frac{g\_1'(x)}{g\_2'(x)}$ is decreasing on $\big(0, \frac{\pi}{2}\big)$, and according to Lemma 1, the function $G(x)$ is decreasing on $\big(0, \frac{\pi}{2}\big)$.

It is straightforward that $\lim\_{x\to 0^+} G(x) = \frac{1}{6}$. The proof of Theorem 2 is thus complete.

**Corollary 4.** *Let r* > 0 *and* |*x*| < *π*/2*. Then the inequality*

$$\frac{1}{\operatorname{sinc}^r x} < 1 - \alpha + \alpha\left(\frac{1}{\cos x}\right)^{2r}$$

*holds if and only if $\alpha \ge \frac{1}{6}$.*

**Corollary 5.** *Let $0 < |x| < \frac{\pi}{2}$. Then*

$$\cos^2 x < \cos x < \operatorname{sinc} x < \frac{5}{6} + \frac{\cos^2 x}{6} < 1 < \frac{1}{\operatorname{sinc} x} < \frac{5}{6} + \frac{1}{6\cos^2 x} < \frac{1}{\cos^2 x}.$$

**Corollary 6.** *Let t* ∈ (0, 1)*. Then*

$$1 - t^2 < \frac{t}{\arcsin t} < \frac{5}{6} + \frac{1 - t^2}{6} < 1 < \frac{\arcsin t}{t} < \frac{5}{6} + \frac{1}{6(1 - t^2)} < \frac{1}{1 - t^2}.$$
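Corollary 6 can be spot-checked the same way on $(0, 1)$ (sample points are arbitrary):

```python
import math

# Corollary 6: bounds for t/arcsin(t) and its reciprocal on (0, 1).
for t in (0.2, 0.5, 0.9):
    r = t / math.asin(t)
    assert 1 - t*t < r < 5/6 + (1 - t*t)/6 < 1
    assert 1 < 1/r < 5/6 + 1 / (6*(1 - t*t)) < 1 / (1 - t*t)
```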

#### **4. Applications of Necessary and Sufficient Conditions**

In this section, using Theorems 1 and 2, we obtain the following inequalities.

**Theorem 3.** *Let $s, t > 0$ with $s \ne t$. When $r \ge \frac{8}{25}$, the double inequality*

$$\alpha C^r(s,t) + (1-\alpha)A^r(s,t) < M^r(s,t) < \beta C^r(s,t) + (1-\beta)A^r(s,t) \tag{11}$$

*holds if and only if $\alpha \le \frac{1}{2^r - 1}\,\frac{1 - \ln^r\big(1 + \sqrt{2}\,\big)}{\ln^r\big(1 + \sqrt{2}\,\big)}$ and $\beta \ge \frac{1}{6}$; when $r < 0$, the inequality (11) holds if and only if $\alpha \ge \frac{1}{2^r - 1}\,\frac{1 - \ln^r\big(1 + \sqrt{2}\,\big)}{\ln^r\big(1 + \sqrt{2}\,\big)}$ and $\beta \le \frac{1}{6}$.*

**Proof.** Without loss of generality, we assume that $s > t > 0$. Let $u = \frac{s-t}{s+t}$. Then $u \in (0,1)$ and

$$\frac{M^r(s,t) - A^r(s,t)}{C^r(s,t) - A^r(s,t)} = \frac{\frac{u^r}{\operatorname{arcsinh}^r u} - 1}{(1 + u^2)^r - 1}.$$

Let $u = \sinh\theta$. Then $\theta \in \left(0, \ln\left(1 + \sqrt{2}\right)\right)$ and

$$\frac{M^r(s,t) - A^r(s,t)}{C^r(s,t) - A^r(s,t)} = \frac{\frac{\sinh^r \theta}{\theta^r} - 1}{\cosh^{2r}\theta - 1} \stackrel{\triangle}{=} F(\theta).$$

Using Theorem 1, we can observe that, when $r \ge \frac{8}{25}$, the function $F(\theta)$ is decreasing on the interval $\left(0, \ln\left(1 + \sqrt{2}\right)\right)$, whereas $F(\theta)$ is increasing on $\left(0, \ln\left(1 + \sqrt{2}\right)\right)$ for $r < 0$.

According to L'Hospital's rule, we have

$$\lim\_{\theta \to 0^+} F(\theta) = \frac{1}{6} \quad \text{and} \quad \lim\_{\theta \to \ln(1+\sqrt{2})^-} F(\theta) = \frac{1}{2^r - 1} \frac{1 - \ln^r(1+\sqrt{2})}{\ln^r(1+\sqrt{2})}.$$

The proof of Theorem 3 is thus complete.
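The ratio identity displayed in the proof pins down the means involved, so Theorem 3 can be spot-checked numerically. Assuming the standard definitions $A(s,t) = \frac{s+t}{2}$, the contraharmonic mean $C(s,t) = \frac{s^2 + t^2}{s+t}$ (so that $C/A = 1 + u^2$), and the Neuman–Sándor mean $M(s,t) = \frac{s-t}{2\operatorname{arcsinh}\frac{s-t}{s+t}}$ (so that $M/A = u/\operatorname{arcsinh} u$), the case $r = 1$ of inequality (11) with the sharp constants reads:

```python
import math

def A(s, t): return (s + t) / 2                 # arithmetic mean
def C(s, t): return (s*s + t*t) / (s + t)       # contraharmonic mean
def M(s, t):                                    # Neuman-Sandor mean
    return (s - t) / (2 * math.asinh((s - t) / (s + t)))

r = 1.0
lam = math.log(1 + math.sqrt(2))
alpha = (1/(2**r - 1)) * (1 - lam**r) / lam**r  # sharp alpha for r >= 8/25
beta = 1/6                                      # sharp beta

for s, t in [(2.0, 1.0), (5.0, 1.0), (10.0, 9.0), (100.0, 1.0)]:
    lhs = alpha*C(s, t)**r + (1 - alpha)*A(s, t)**r
    rhs = beta*C(s, t)**r + (1 - beta)*A(s, t)**r
    assert lhs < M(s, t)**r < rhs, (s, t)
print("inequality (11) verified at the sampled points for r = 1")
```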

**Theorem 4.** *Let s*, *t* > 0 *with* $s \ne t$*. Then the double inequality*

$$\alpha H^r(s,t) + (1-\alpha)A^r(s,t) < P^r(s,t) < \beta H^r(s,t) + (1-\beta)A^r(s,t)$$

*holds if and only if*

$$\begin{cases} \text{for } r \ge \frac{1}{2}, & \quad \alpha \ge 1 - \left(\frac{2}{\pi}\right)^r \text{ and } \beta \le \frac{1}{6}; \\\\ \text{for } 0 < r \le \frac{8}{25}, & \quad \alpha \ge \frac{1}{6} \text{ and } \beta \le 1 - \left(\frac{2}{\pi}\right)^r; \\\\ \text{for } r < 0, & \quad \alpha \le 0 \text{ and } \beta \ge \frac{1}{6}. \end{cases}$$

**Proof.** Without loss of generality, we assume that *s* > *t* > 0. Let $v = \frac{s-t}{s+t}$. Then $v \in (0, 1)$ and

$$\frac{P^r(s,t) - A^r(s,t)}{H^r(s,t) - A^r(s,t)} = \frac{\frac{v^r}{\arcsin^r v} - 1}{(1 - v^2)^r - 1}.$$

Let $v = \sin\theta$. Then $\theta \in \left(0, \frac{\pi}{2}\right)$ and

$$\frac{P^r(s,t) - A^r(s,t)}{H^r(s,t) - A^r(s,t)} = \frac{\frac{\sin^r \theta}{\theta^r} - 1}{\cos^{2r}\theta - 1} \stackrel{\triangle}{=} G(\theta).$$

By virtue of Theorem 2, we can observe that, when $r \in (-\infty, 0) \cup \left(0, \frac{8}{25}\right]$, the function $G(\theta)$ is decreasing on $\left(0, \frac{\pi}{2}\right)$, whereas $G(\theta)$ is increasing on $\left(0, \frac{\pi}{2}\right)$ for $r \ge \frac{1}{2}$.

Using L'Hospital's rule, we obtain the limits $\lim_{\theta \to 0^+} G(\theta) = \frac{1}{6}$ and

$$\lim\_{\theta \to (\pi/2)^{-}} G(\theta) = \begin{cases} 1 - \left(\frac{2}{\pi}\right)^{r}, & r > 0; \\ 0, & r < 0. \end{cases}$$

The proof of Theorem 4 is thus complete.

**Corollary 7.** *For all s*, *t* > 0 *with* $s \ne t$*:*

*1. The double inequality*

$$\frac{\alpha_1}{H(s,t)} + \frac{1-\alpha_1}{A(s,t)} < \frac{1}{P(s,t)} < \frac{\beta_1}{H(s,t)} + \frac{1-\beta_1}{A(s,t)}$$

*holds if and only if*

$$\alpha_1 \le 2\left[1 - \ln\left(1 + \sqrt{2}\right)\right] = 0.237253\dots \quad \text{and} \quad \beta_1 \ge \frac{1}{6};$$

*2. The double inequality*

$$\frac{\alpha\_2}{H^2(s,t)} + \frac{1-\alpha\_2}{A^2(s,t)} < \frac{1}{P^2(s,t)} < \frac{\beta\_2}{H^2(s,t)} + \frac{1-\beta\_2}{A^2(s,t)}$$

*holds if and only if* $\alpha_2 \le 0$ *and* $\beta_2 \ge \frac{1}{6}$*;*

*3. The double inequality*

$$\alpha_3 H(s,t) + (1-\alpha_3)A(s,t) < P(s,t) < \beta_3 H(s,t) + (1-\beta_3)A(s,t),$$

*holds if and only if*

$$
\alpha\_3 \ge 1 - \frac{2}{\pi} = 0.36338\dots, \quad \text{and} \quad \beta\_3 \le \frac{1}{6};
$$

*4. The double inequality*

$$\alpha_4 H^2(s,t) + (1-\alpha_4)A^2(s,t) < P^2(s,t) < \beta_4 H^2(s,t) + (1-\beta_4)A^2(s,t)$$

*holds if and only if*

$$
\alpha\_4 \ge 1 - \left(\frac{2}{\pi}\right)^2 = 0.594715\dots \quad \text{and} \quad \beta\_4 \le \frac{1}{6}.
$$

**Corollary 8.** *For all s*, *t* > 0 *with* $s \ne t$*,*

$$\begin{split} H(s,t) &< \left(1 - \frac{2}{\pi}\right) H(s,t) + \frac{2}{\pi} A(s,t) < P(s,t) < \frac{1}{6} H(s,t) + \frac{5}{6} A(s,t) \\ &< A(s,t) < \frac{1 - \ln\left(1 + \sqrt{2}\right)}{\ln\left(1 + \sqrt{2}\right)} C(s,t) + \frac{2\ln\left(1 + \sqrt{2}\right) - 1}{\ln\left(1 + \sqrt{2}\right)} A(s,t) \\ &< M(s,t) < \frac{1}{6} C(s,t) + \frac{5}{6} A(s,t) < C(s,t). \end{split} \tag{12}$$
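Assuming the standard definitions of the five means involved (harmonic $H$, arithmetic $A$, contraharmonic $C(s,t) = \frac{s^2+t^2}{s+t}$, the first Seiffert mean $P(s,t) = \frac{s-t}{2\arcsin\frac{s-t}{s+t}}$, and the Neuman–Sándor mean $M(s,t) = \frac{s-t}{2\operatorname{arcsinh}\frac{s-t}{s+t}}$, all consistent with the ratio identities used in the proofs of Theorems 3 and 4), the whole chain (12) can be checked numerically at a few sample points:

```python
import math

def H(s, t): return 2*s*t / (s + t)                            # harmonic mean
def A(s, t): return (s + t) / 2                                # arithmetic mean
def C(s, t): return (s*s + t*t) / (s + t)                      # contraharmonic mean
def P(s, t): return (s - t) / (2*math.asin((s - t)/(s + t)))   # Seiffert mean
def M(s, t): return (s - t) / (2*math.asinh((s - t)/(s + t)))  # Neuman-Sandor mean

lam = math.log(1 + math.sqrt(2))

def chain12(s, t):
    # The nine members of the inequality chain (12), in order.
    return [
        H(s, t),
        (1 - 2/math.pi)*H(s, t) + (2/math.pi)*A(s, t),
        P(s, t),
        H(s, t)/6 + 5*A(s, t)/6,
        A(s, t),
        ((1 - lam)/lam)*C(s, t) + ((2*lam - 1)/lam)*A(s, t),
        M(s, t),
        C(s, t)/6 + 5*A(s, t)/6,
        C(s, t),
    ]

for s, t in [(2.0, 1.0), (7.0, 3.0), (1.1, 1.0)]:
    v = chain12(s, t)
    assert all(v[i] < v[i + 1] for i in range(len(v) - 1)), (s, t, v)
print("inequality chain (12) verified at the sampled points")
```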

#### **5. Remarks**

**Remark 1.** *When taking r* = −2, −1, 1, 2 *in Theorem 1, we can obtain the results reported in [13,23].*

**Remark 2.** *The inequality chain* (12) *improves the left-hand sides of inequalities* (1) *and* (2)*.*

**Remark 3.** *From* $\sinh(z\mathrm{i}) = \mathrm{i}\sin z$*, it follows that* $\operatorname{sinhc}(z\mathrm{i}) = \operatorname{sinc} z$*. This relation could possibly be used to simplify the proofs of the main results in this paper.*

**Remark 4.** *In [33–36], series expansions of the functions*

$$\left(\frac{\arcsin t}{t}\right)^r, \quad \left(\frac{\operatorname{arcsinh} t}{t}\right)^r, \quad \left[\frac{(\arccos x)^2}{2(1-x)}\right]^r, \quad \left[\frac{(\operatorname{arccosh} x)^2}{2(1-x)}\right]^r, \quad (\arccos t)^r, \quad (\operatorname{arccosh} t)^r$$

*for r* ∈ R *were established. These series expansions could possibly be used to prove the main results presented in this paper.*

#### **6. Conclusions**

In this paper, we have established some inequalities for trigonometric and hyperbolic functions. These results may stimulate further investigation of inequalities involving trigonometric and hyperbolic functions. The techniques used in this paper are also suitable for proving and establishing many other inequalities involving the Neuman–Sándor mean, the Seiffert mean, the Toader mean, and so on.

**Author Contributions:** Writing—original draft, W.-H.L., Q.-X.S. and B.-N.G. All authors contributed equally to the manuscript. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research received no external funding.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Not applicable.

**Acknowledgments:** The authors thank anonymous referees for their careful corrections to and valuable comments on the original version of this paper.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


### *Article* **Hermite–Hadamard's Integral Inequalities of** (*α***,** *s*)**-GA- and** (*α***,** *s***,** *m*)**-GA-Convex Functions**

**Jing-Yu Wang <sup>1</sup> , Hong-Ping Yin <sup>1</sup> , Wen-Long Sun <sup>2</sup> and Bai-Ni Guo 3,4,\***


**Abstract:** In this paper, the authors propose the notions of (*α*,*s*)-geometric-arithmetically convex functions and (*α*,*s*, *m*)-geometric-arithmetically convex functions, and establish some new integral inequalities of the Hermite–Hadamard type for both classes of functions.

**Keywords:** Hermite–Hadamard type integral inequality; (*α*,*s*)-geometric-arithmetically convex function; (*α*,*s*, *m*)-geometric-arithmetically convex function

**MSC:** Primary 26A51; Secondary 26D15; 41A55

#### **1. Introduction**

In this paper, we denote a nonempty and open interval by *I* ⊆ R. We first review some definitions of various convex functions and list some Hermite–Hadamard-type integral inequalities.

It is well known that a function *f* : *I* ⊆ R → R is said to be convex if

$$f(tx + (1-t)y) \le tf(x) + (1-t)f(y)$$

for all *x*, *y* ∈ *I* and *t* ∈ [0, 1]. One can find a lot of classical conclusions for convex functions in monographs [1,2].

In [3], Xi and his co-authors defined (*α*,*s*)-convex functions and (*α*,*s*, *m*)-convex functions and established some Hermite–Hadamard-type integral inequalities.

**Definition 1** ([3])**.** *For some s* ∈ [−1, 1] *and α* ∈ (0, 1]*, a function f* : *I* ⊆ R → R *is said to be* (*α*,*s*)*-convex if*

$$f(tx + (1-t)y) \le t^{\alpha s} f(x) + (1-t^{\alpha})^s f(y)$$

*holds for all x*, *y* ∈ *I and t* ∈ (0, 1)*.*

**Definition 2** ([3])**.** *For some s* ∈ [−1, 1] *and* (*α*, *m*) ∈ (0, 1] × (0, 1]*, a function f* : [0, *b*] → R *is said to be* (*α*,*s*, *m*)*-convex if*

$$f(tx + m(1-t)y) \le t^{\alpha s} f(x) + m(1-t^{\alpha})^s f(y)$$

*holds for all x*, *y* ∈ [0, *b*] *and t* ∈ (0, 1)*.*

**Definition 3** ([4,5])**.** *The function f* : *I* ⊆ R<sup>+</sup> = (0, ∞) → R *is said to be geometric-arithmetically convex, that is, GA-convex, on I if*

$$f\left(x^{t}y^{1-t}\right) \le tf(x) + (1-t)f(y)$$

**Citation:** Wang, J.-Y.; Yin, H.-P.; Sun, W.-L.; Guo, B.-N. Hermite– Hadamard's Integral Inequalities of (*α*,*s*)-GA- and (*α*,*s*, *m*)-GA-Convex Functions. *Axioms* **2022**, *11*, 616. https://doi.org/10.3390/ axioms11110616

Academic Editors: Wei-Shih Du, Ravi P. Agarwal, Erdal Karapinar, Marko Kosti´c, Jian Cao and Behzad Djafari-Rouhani

Received: 28 September 2022 Accepted: 1 November 2022 Published: 6 November 2022


**Copyright:** © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

*holds for all x*, *y* ∈ *I and t* ∈ [0, 1]*.*

In [6], Shuang and her co-authors, including the second author of this paper, introduced the notion of the geometric-arithmetically *s*-convex function and established some inequalities of the Hermite–Hadamard type for geometric-arithmetically *s*-convex functions.

**Definition 4** ([6])**.** *Let f* : *I* ⊆ R<sup>+</sup> → R<sup>0</sup> = [0, ∞) *and s* ∈ (0, 1]*. A function f*(*x*) *is said to be geometric-arithmetically s-convex on I if*

$$f\left(x^t y^{1-t}\right) \le t^s f(x) + (1-t)^s f(y)$$

*holds for all x*, *y* ∈ *I and t* ∈ (0, 1]*.*

**Remark 1.** *When s* = 1*, a geometric-arithmetically s-convex function becomes the GA-convex function defined in [4,5].*

**Remark 2.** *The integral estimates and applications of geometric-arithmetically convex functions have received renewed attention in recent years. A remarkable variety of refinements and generalizations have been found in, for example, [3–6]. In this paper, we will generalize the results of the above-mentioned literature and study the application problems.*

Let *f* : *I* ⊆ R → R be a convex function on *I*. Then, the Hermite–Hadamard integral inequality reads that

$$f\left(\frac{x+y}{2}\right) \le \frac{1}{y-x}\int_x^y f(u)\,\mathrm{d}u \le \frac{f(x) + f(y)}{2}, \quad x, y \in I, \; x < y.$$

One can find a lot of classical conclusions for the Hermite–Hadamard integral inequality in the monograph [7].
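As a concrete illustration, a midpoint-rule approximation of the integral mean for the convex function $f(x) = \mathrm{e}^x$ on $[0, 1]$ lands strictly between the two Hermite–Hadamard bounds:

```python
import math

f = math.exp          # a convex function
x, y = 0.0, 1.0
n = 100_000

# midpoint rule for the integral of f over [x, y]
integral = (y - x)/n * sum(f(x + (y - x)*(k + 0.5)/n) for k in range(n))
mean_value = integral / (y - x)

assert f((x + y)/2) < mean_value < (f(x) + f(y))/2
print(f((x + y)/2), mean_value, (f(x) + f(y))/2)
```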

Hermite–Hadamard-type integral inequalities are a very active research topic [8]. We now recall some known results below.

**Theorem 1** ([9], Theorem 2.2)**.** *Let f* : *I* ◦ ⊆ R → R *be a differentiable mapping on I* ◦ *, and let a*, *b* ∈ *I* ◦ *with a* < *b. If* $|f'|$ *is convex on* [*a*, *b*]*, then*

$$\left| \frac{f(a) + f(b)}{2} - \frac{1}{b-a}\int_a^b f(x)\,\mathrm{d}x \right| \le \frac{(b-a)\left(|f'(a)| + |f'(b)|\right)}{8}.$$

**Theorem 2** ([10], Theorems 1 and 2)**.** *Let f* : *I* ⊆ R → R *be differentiable on I* ◦ *, and let a*, *b* ∈ *I with a* < *b. If* $|f'|^q$ *is convex on* [*a*, *b*] *for q* ≥ 1*, then*

$$\left| \frac{f(a) + f(b)}{2} - \frac{1}{b - a} \int\_{a}^{b} f(x) \, \mathrm{d}x \right| \le \frac{b - a}{4} \left( \frac{|f'(a)|^q + |f'(b)|^q}{2} \right)^{1/q}$$

*and*

$$\left| f\left(\frac{a+b}{2}\right) - \frac{1}{b-a}\int_a^b f(x)\,\mathrm{d}x \right| \le \frac{b-a}{4}\left(\frac{|f'(a)|^q + |f'(b)|^q}{2}\right)^{1/q}.$$

**Theorem 3** ([11])**.** *Let f* : R<sup>0</sup> → R *be m-convex with m* ∈ (0, 1]*. If* $f \in L_1([a,b])$ *for* 0 ≤ *a* < *b* < ∞*, then*

$$\frac{1}{b-a} \int\_{a}^{b} f(x) \, \mathrm{d}x \le \min \left\{ \frac{f(a) + mf(b/m)}{2}, \frac{mf(a/m) + f(b)}{2} \right\}.$$

**Theorem 4** ([12])**.** *Let f* : *I* ⊆ R<sup>0</sup> → R *be differentiable on I* ◦ *, let a*, *b* ∈ *I with a* < *b, and let* $f' \in L_1([a,b])$*. If* $|f'|^q$ *is s-convex on* [*a*, *b*] *for some fixed s* ∈ (0, 1] *and q* ≥ 1*, then*

$$\left| \frac{f(a) + f(b)}{2} - \frac{1}{b-a}\int_a^b f(x)\,\mathrm{d}x \right| \le \frac{b-a}{2}\left(\frac{1}{2}\right)^{1-1/q}\left[\frac{2 + 1/2^s}{(s+1)(s+2)}\right]^{1/q}\left[|f'(a)|^q + |f'(b)|^q\right]^{1/q}.$$

**Theorem 5** ([13])**.** *Let f* : *I* ⊆ R<sup>0</sup> → R *be differentiable on I* ◦ *, let a*, *b* ∈ *I with a* < *b, and let* $f' \in L_1([a,b])$*. If* $|f'|^q$ *is s-convex on* [*a*, *b*] *for some fixed s* ∈ (0, 1] *and q* > 1*, then*

$$\begin{aligned} \left| f\left(\frac{a+b}{2}\right) - \frac{1}{b-a}\int_a^b f(x)\,\mathrm{d}x \right| &\le \frac{b-a}{4}\left[\frac{1}{(s+1)(s+2)}\right]^{1/q}\left(\frac{1}{2}\right)^{1/p}\left\{\left[|f'(a)|^q + (s+1)\left|f'\left(\frac{a+b}{2}\right)\right|^q\right]^{1/q} \right. \\ &\quad \left. + \left[|f'(b)|^q + (s+1)\left|f'\left(\frac{a+b}{2}\right)\right|^q\right]^{1/q}\right\}, \end{aligned}$$

*where* $\frac{1}{p} + \frac{1}{q} = 1$*.*

**Theorem 6** ([14])**.** *Let f* : *I* ⊆ R<sup>0</sup> → R *be differentiable on I* ◦ *, let a*, *b* ∈ *I with a* < *b, and let* $f' \in L_1([a,b])$*. If* $|f'|$ *is s-convex on* [*a*, *b*] *for some s* ∈ (0, 1]*, then*

$$\begin{aligned} \left| \frac{1}{6}\Big[ f(a) + 4f\left(\frac{a+b}{2}\right) + f(b) \Big] - \frac{1}{b-a}\int_a^b f(x)\,\mathrm{d}x \right| \le \frac{(s-4)6^{s+1} + 2\times 5^{s+2} - 2\times 3^{s+2} + 2}{6^{s+2}(s+1)(s+2)}(b-a)\big(|f'(a)| + |f'(b)|\big). \end{aligned}$$

Motivated by the studies above, we will introduce the notions of "(*α*,*s*)-geometric-arithmetically convex functions" and "(*α*,*s*, *m*)-geometric-arithmetically convex functions", and we will establish some new inequalities of the Hermite–Hadamard type for (*α*,*s*)-geometric-arithmetically convex functions and for (*α*,*s*, *m*)-geometric-arithmetically convex functions.

#### **2. Definitions**

We now introduce the notions of "(*α*,*s*)-geometric-arithmetically convex functions" and "(*α*,*s*, *m*)-geometric-arithmetically convex functions".

**Definition 5.** *For some s* ∈ [−1, 1] *and α* ∈ (0, 1]*, a function f* : *I* ⊆ R<sup>+</sup> → R *is said to be* (*α*,*s*)*-geometric-arithmetically convex, or simply speaking,* (*α*,*s*)*-GA-convex if*

$$f(x^t y^{1-t}) \le t^{\alpha s} f(x) + (1 - t^{\alpha})^s f(y)$$

*holds for all x*, *y* ∈ *I and t* ∈ (0, 1)*.*

**Remark 3.** *By Definition 5, we can see that,*

*1. If α* = 1*, then f*(*x*) *is an s-GA-convex function on I, see [6];*

*2. If α* = *s* = 1*, then f*(*x*) *is a GA-convex function on I, see [4,5].*

**Definition 6.** *For some s* ∈ [−1, 1] *and* (*α*, *m*) ∈ (0, 1] × (0, 1]*, a function f* : (0, *b*] ⊆ R<sup>+</sup> → R *is said to be* (*α*,*s*, *m*)*-geometric-arithmetically convex, or simply speaking,* (*α*,*s*, *m*)*-GA-convex if*

$$f\left(x^t y^{m(1-t)}\right) \le t^{\alpha s} f(x) + m(1-t^{\alpha})^s f(y)$$

*holds for all x*, *y* ∈ (0, *b*] *and t* ∈ (0, 1)*.*

**Remark 4.** *By Definition 6, we can see that:*


It is obvious that:


**Proposition 1.** *Let α* ∈ (0, 1] *and s* ∈ [−1, 0)*. Then, the function f*(*x*) = *x r for r* ∈ (0, 1) *is* (*α*,*s*)*-geometric-arithmetically convex with respect to x* ∈ R+*.*

**Proof.** We only need to verify the inequality

$$f\left(x^t y^{1-t}\right) = (x^r)^t (y^r)^{1-t} \le t^{\alpha s} f(x) + (1-t^{\alpha})^s f(y) = t^{\alpha s} x^r + (1-t^{\alpha})^s y^r$$

for all *x*, *y* ∈ R<sup>+</sup> and *t* ∈ (0, 1).

For all *x*, *y* ∈ R<sup>+</sup> and *t* ∈ (0, 1):

1. When $x^r \le y^r$, let $u = \frac{x^r}{y^r}$; then $0 < u \le 1$ and $(1-t^{\alpha})^s > 1$, and thus

$$u^t \le 1 < (1-t^{\alpha})^s < t^{\alpha s} u + (1-t^{\alpha})^s,$$

that is,

$$f\left(x^t y^{1-t}\right) \le t^{\alpha s} f(x) + (1-t^{\alpha})^s f(y);$$

2. When $x^r \ge y^r$, we have

$$f\left(x^t y^{1-t}\right) = (x^r)^t (y^r)^{1-t} \le x^r < t^{\alpha s} f(x) < t^{\alpha s} f(x) + (1-t^{\alpha})^s f(y).$$

The proof of Proposition 1 is complete.
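Proposition 1 can also be probed numerically; the sketch below checks the $(\alpha,s)$-GA-convexity condition of $f(x) = x^r$ on a grid (the parameter values $r = 0.5$, $\alpha = 0.7$, $s = -0.5$ are illustrative choices, not taken from the paper):

```python
# Grid check of the (alpha, s)-GA-convexity of f(x) = x**r
r, alpha, s = 0.5, 0.7, -0.5
f = lambda x: x**r

grid = [0.1, 0.5, 1.0, 3.0, 10.0]
for x in grid:
    for y in grid:
        for k in range(1, 20):
            t = k / 20
            lhs = f(x**t * y**(1 - t))
            rhs = t**(alpha*s)*f(x) + (1 - t**alpha)**s*f(y)
            assert lhs <= rhs, (x, y, t)
print("(alpha, s)-GA-convexity of x**r verified on the grid")
```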

#### **3. Lemmas**

The following lemmas are necessary for us.

**Lemma 1** ([15])**.** *Let f* : *I* ⊆ R → R *be differentiable on I* ◦ *and let a*, *b* ∈ *I with a* < *b. If* $f' \in L_1([a,b])$*, then for x* ∈ [*a*, *b*]*, we have*

$$\begin{aligned} &\frac{(b-x)f(b) + (x-a)f(a)}{b-a} - \frac{1}{b-a}\int_a^b f(u)\,\mathrm{d}u \\ &= \frac{(x-a)^2}{b-a}\int_0^1 (t-1)f'(tx + (1-t)a)\,\mathrm{d}t + \frac{(b-x)^2}{b-a}\int_0^1 (1-t)f'(tx + (1-t)b)\,\mathrm{d}t. \end{aligned}$$

**Lemma 2.** *Let α* ∈ (0, 1)*. Then,*

$$R\_{-1}(\alpha) \stackrel{\Delta}{=} \int\_0^1 \frac{1-t}{t^{\alpha}} \,\mathrm{d}\, t = \frac{1}{(1-\alpha)(2-\alpha)}$$

*and*

$$T_{-1}(\alpha) \stackrel{\triangle}{=} \int_0^1 \frac{1-t}{1-t^{\alpha}}\,\mathrm{d}t = \frac{1}{\alpha}\left[\psi\left(\frac{2}{\alpha}\right) - \psi\left(\frac{1}{\alpha}\right)\right],$$

*where* $\psi(z) = \frac{\mathrm{d}\ln\Gamma(z)}{\mathrm{d}z}$*, and*

$$\Gamma(z) = \int_0^{\infty} t^{z-1}\mathrm{e}^{-t}\,\mathrm{d}t, \quad \Re(z) > 0$$

*denotes the classical Euler gamma function.*

**Proof.** By letting *u* = *t α* for *t* ∈ (0, 1) and using the formulas

$$
\psi(z) + \gamma = \int\_0^1 \frac{1 - t^{z - 1}}{1 - t} \,\mathrm{d}\, t
$$

and

$$\gamma = \int_0^{\infty}\left(\frac{1}{1+t} - \mathrm{e}^{-t}\right)\frac{\mathrm{d}t}{t}$$

in [16] (p. 259, 6.3.22), it is easy to show that

$$\int_0^1 \frac{1-t}{1-t^{\alpha}}\,\mathrm{d}t = \frac{1}{\alpha}\int_0^1 \frac{u^{1/\alpha - 1} - u^{2/\alpha - 1}}{1-u}\,\mathrm{d}u = \frac{1}{\alpha}\left[\psi\left(\frac{2}{\alpha}\right) - \psi\left(\frac{1}{\alpha}\right)\right].$$

The proof of Lemma 2 is complete.
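Both identities in Lemma 2 can be verified numerically. The Python standard library has no digamma function, so the sketch below implements $\psi$ via the recurrence $\psi(x) = \psi(x+1) - 1/x$ together with the standard asymptotic expansion (an illustrative helper, accurate enough for this check), and compares midpoint-rule values of the two integrals with the closed forms at $\alpha = \frac{1}{2}$:

```python
import math

def digamma(x):
    # psi(x) via the recurrence psi(x) = psi(x + 1) - 1/x and the
    # asymptotic series psi(x) ~ ln x - 1/(2x) - 1/(12x^2) + ...
    acc = 0.0
    while x < 10:
        acc -= 1/x
        x += 1
    inv2 = 1/(x*x)
    return acc + math.log(x) - 1/(2*x) - inv2*(1/12 - inv2*(1/120 - inv2/252))

def midpoint(g, n=200_000):
    # midpoint rule for the integral of g over (0, 1)
    return sum(g((k + 0.5)/n) for k in range(n)) / n

alpha = 0.5
R = midpoint(lambda t: (1 - t)/t**alpha)
T = midpoint(lambda t: (1 - t)/(1 - t**alpha))

assert abs(R - 1/((1 - alpha)*(2 - alpha))) < 1e-2
assert abs(T - (digamma(2/alpha) - digamma(1/alpha))/alpha) < 1e-6
print("Lemma 2 identities verified numerically for alpha = 1/2")
```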

#### **4. Hermite–Hadamard-Type Integral Inequalities**

In this section, we turn our attention to the establishment of integral inequalities of the Hermite–Hadamard type for (*α*,*s*)-GA-convex and (*α*,*s*, *m*)-GA-convex functions.

**Theorem 7.** *For some s* ∈ [−1, 1] *and α* ∈ (0, 1]*, let f* : *I* ⊆ R<sup>+</sup> → R *be a differentiable function on I* ◦ *, let a*, *b* ∈ *I* ◦ *with a* < *b and x* ∈ [*a*, *b*]*, and let* $f' \in L_1([a,b])$ *and* $|f'|$ *be decreasing on* [*a*, *b*]*. If* $|f'|^q$ *is* (*α*,*s*)*-*GA*-convex on* [*a*, *b*] *for q* ≥ 1*, then the following conclusions are valid:*

*1. When s* ∈ (−1, 1] *and α* ∈ (0, 1]*, we have*

$$\begin{aligned} \left| \frac{(b-x)f(b) + (x-a)f(a)}{b-a} - \frac{1}{b-a}\int_a^b f(u)\,\mathrm{d}u \right| &\le \left(\frac{1}{2}\right)^{1-1/q}\left\{ \frac{(x-a)^2}{b-a}\left[ R(\alpha,s)|f'(x)|^q + T(\alpha,s)|f'(a)|^q \right]^{1/q} \right. \\ &\quad \left. + \frac{(b-x)^2}{b-a}\left[ R(\alpha,s)|f'(x)|^q + T(\alpha,s)|f'(b)|^q \right]^{1/q} \right\}, \end{aligned} \tag{1}$$

*where R*(*α*,*s*) *and T*(*α*,*s*) *are defined by*

$$R(\alpha, s) \stackrel{\Delta}{=} \frac{1}{(\alpha s + 1)(\alpha s + 2)}$$

*and*

$$T(\alpha, s) \stackrel{\Delta}{=} \frac{1}{\alpha}\left[B\left(s+1, \frac{1}{\alpha}\right) - B\left(s+1, \frac{2}{\alpha}\right)\right]$$

*for s* ∈ (−1, 1]*;*

*2. When s* = −1 *and α* ∈ (0, 1)*, we have*

$$\begin{aligned} \left| \frac{(b-x)f(b) + (x-a)f(a)}{b-a} - \frac{1}{b-a}\int_a^b f(u)\,\mathrm{d}u \right| &\le \left(\frac{1}{2}\right)^{1-1/q}\left\{ \frac{(x-a)^2}{b-a}\left[ R_{-1}(\alpha)|f'(x)|^q + T_{-1}(\alpha)|f'(a)|^q \right]^{1/q} \right. \\ &\quad \left. + \frac{(b-x)^2}{b-a}\left[ R_{-1}(\alpha)|f'(x)|^q + T_{-1}(\alpha)|f'(b)|^q \right]^{1/q} \right\}, \end{aligned}$$

*where* $R_{-1}(\alpha)$ *and* $T_{-1}(\alpha)$ *are defined in Lemma 2 and*

$$B(x,y) = \int\_0^1 t^{x-1} (1-t)^{y-1} \, \text{d} \, t, \quad \Re(x), \Re(y) > 0 \tag{2}$$

*denotes the classical beta function.*

**Proof.** For *s* ∈ (−1, 1] and *α* ∈ (0, 1], since $|f'|$ is decreasing on [*a*, *b*], by Lemma 1 and the Hölder integral inequality, we have

$$\begin{split} & \left| \frac{(b-x)f(b) + (x-a)f(a)}{b-a} - \frac{1}{b-a}\int_a^b f(u)\,\mathrm{d}u \right| \\ & \le \frac{(x-a)^2}{b-a}\int_0^1 (1-t)|f'(tx + (1-t)a)|\,\mathrm{d}t + \frac{(b-x)^2}{b-a}\int_0^1 (1-t)|f'(tx + (1-t)b)|\,\mathrm{d}t \\ & \le \frac{(x-a)^2}{b-a}\int_0^1 (1-t)|f'(x^t a^{1-t})|\,\mathrm{d}t + \frac{(b-x)^2}{b-a}\int_0^1 (1-t)|f'(x^t b^{1-t})|\,\mathrm{d}t \\ & \le \frac{(x-a)^2}{b-a}\left[\int_0^1 (1-t)\,\mathrm{d}t\right]^{1-1/q}\left[\int_0^1 (1-t)|f'(x^t a^{1-t})|^q\,\mathrm{d}t\right]^{1/q} \\ & \quad + \frac{(b-x)^2}{b-a}\left[\int_0^1 (1-t)\,\mathrm{d}t\right]^{1-1/q}\left[\int_0^1 (1-t)|f'(x^t b^{1-t})|^q\,\mathrm{d}t\right]^{1/q}. \end{split} \tag{3}$$

Making use of the (*α*,*s*)-GA-convexity of $|f'|^q$, we have

$$\begin{aligned} \int_0^1 (1-t)\left|f'\left(x^t a^{1-t}\right)\right|^q \mathrm{d}t &\le \int_0^1 (1-t)\left[ t^{\alpha s}|f'(x)|^q + (1-t^{\alpha})^s|f'(a)|^q \right]\mathrm{d}t \\ &= R(\alpha,s)|f'(x)|^q + T(\alpha,s)|f'(a)|^q \end{aligned}$$

and

$$\begin{aligned} \int_0^1 (1-t)\left|f'\left(x^t b^{1-t}\right)\right|^q \mathrm{d}t &\le \int_0^1 (1-t)\left[ t^{\alpha s}|f'(x)|^q + (1-t^{\alpha})^s|f'(b)|^q \right]\mathrm{d}t \\ &= R(\alpha,s)|f'(x)|^q + T(\alpha,s)|f'(b)|^q. \end{aligned} \tag{4}$$

Combining the inequalities (3) and (4) and simplifying, we obtain the required inequality (1).

When *s* = −1 and *α* ∈ (0, 1), by inequality (3), the same argument as in (4), and Lemma 2, we have

$$\begin{split} & \left| \frac{(b-x)f(b) + (x-a)f(a)}{b-a} - \frac{1}{b-a}\int_a^b f(u)\,\mathrm{d}u \right| \\ & \le \frac{(x-a)^2}{b-a}\left(\frac{1}{2}\right)^{1-1/q}\left[\int_0^1 (1-t)\left[ t^{-\alpha}|f'(x)|^q + (1-t^{\alpha})^{-1}|f'(a)|^q \right]\mathrm{d}t\right]^{1/q} \\ & \quad + \frac{(b-x)^2}{b-a}\left(\frac{1}{2}\right)^{1-1/q}\left[\int_0^1 (1-t)\left[ t^{-\alpha}|f'(x)|^q + (1-t^{\alpha})^{-1}|f'(b)|^q \right]\mathrm{d}t\right]^{1/q} \\ & = \left(\frac{1}{2}\right)^{1-1/q}\left\{ \frac{(x-a)^2}{b-a}\left[ R_{-1}(\alpha)|f'(x)|^q + T_{-1}(\alpha)|f'(a)|^q \right]^{1/q} \right. \\ & \quad \left. + \frac{(b-x)^2}{b-a}\left[ R_{-1}(\alpha)|f'(x)|^q + T_{-1}(\alpha)|f'(b)|^q \right]^{1/q} \right\}. \end{split}$$

The proof of Theorem 7 is complete.

In Theorem 7, when taking *α* = 1 and *s* ∈ (0, 1], we derive the same result as in [6].

**Corollary 1.** *Under the conditions of Theorem 7, with α* = 1 *and s* ∈ (−1, 1]*, we have*

$$\begin{aligned} \left| \frac{(b-x)f(b) + (x-a)f(a)}{b-a} - \frac{1}{b-a}\int_a^b f(u)\,\mathrm{d}u \right| &\le \frac{(x-a)^2}{b-a}\left(\frac{1}{2}\right)^{1-1/q}\left[\frac{|f'(x)|^q + (s+1)|f'(a)|^q}{(s+1)(s+2)}\right]^{1/q} \\ &\quad + \frac{(b-x)^2}{b-a}\left(\frac{1}{2}\right)^{1-1/q}\left[\frac{|f'(x)|^q + (s+1)|f'(b)|^q}{(s+1)(s+2)}\right]^{1/q}. \end{aligned}$$

In Theorem 7, when setting *α* = *s* = 1, we deduce the following integral inequalities of the Hermite–Hadamard type for the GA-convex function.

**Corollary 2.** *Under the conditions of Theorem 7, with α* = *s* = 1*, we have*

$$\begin{split} \left| \frac{(b-x)f(b) + (x-a)f(a)}{b-a} - \frac{1}{b-a} \int\_{a}^{b} f(u) \, \mathbf{d} \, u \right| \\ \leq \left( \frac{1}{2} \right)^{1-1/q} \left\{ \frac{(x-a)^2}{b-a} \left[ \frac{|f'(x)|^q + 2|f'(a)|^q}{6} \right]^{1/q} \\ &+ \frac{(b-x)^2}{b-a} \left[ \frac{|f'(x)|^q + 2|f'(b)|^q}{6} \right]^{1/q} \right\}. \end{split}$$

**Corollary 3.** *Under the conditions of Theorem 7, with q* = 1 *and s* ∈ (−1, 1]*, we obtain*

$$\begin{aligned} \left| \frac{(b-x)f(b) + (x-a)f(a)}{b-a} - \frac{1}{b-a}\int_a^b f(u)\,\mathrm{d}u \right| &\le \frac{(x-a)^2}{b-a}\left[ R(\alpha,s)|f'(x)| + T(\alpha,s)|f'(a)| \right] \\ &\quad + \frac{(b-x)^2}{b-a}\left[ R(\alpha,s)|f'(x)| + T(\alpha,s)|f'(b)| \right]. \end{aligned}$$
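To see Theorem 7 in action, take the illustrative choice $f(x) = 1/x$ on $[a, b] = [1, 2]$ with $\alpha = 1$, $s = -\frac{1}{2}$ and $q = 1$: here $|f'(x)| = x^{-2}$ is decreasing, and it is $(1, -\frac{1}{2})$-GA-convex because $x^{-2}$ is GA-convex and, for $s < 0$, the weights $t^{\alpha s}$ and $(1-t^{\alpha})^s$ only enlarge the right-hand side of the defining inequality. Computing $B$ from `math.gamma`, the bound of Corollary 3 can then be checked directly:

```python
import math

def beta_fn(p, q):
    # Euler beta function via math.gamma
    return math.gamma(p)*math.gamma(q)/math.gamma(p + q)

alpha, s = 1.0, -0.5
R = 1/((alpha*s + 1)*(alpha*s + 2))
T = (beta_fn(s + 1, 1/alpha) - beta_fn(s + 1, 2/alpha))/alpha

f = lambda u: 1/u
fp = lambda u: 1/u**2          # |f'(u)|
a, b, x = 1.0, 2.0, 1.5

# left-hand side; log(b/a)/(b - a) is the exact integral mean of f on [a, b]
lhs = ((b - x)*f(b) + (x - a)*f(a))/(b - a) - math.log(b/a)/(b - a)
rhs = ((x - a)**2/(b - a))*(R*fp(x) + T*fp(a)) \
    + ((b - x)**2/(b - a))*(R*fp(x) + T*fp(b))
assert 0 < lhs < rhs
print(f"lhs = {lhs:.6f} <= rhs = {rhs:.6f}")
```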

By making use of the same method as that in the proof of Theorem 7, we obtain the following integral inequalities for (*α*,*s*, *m*)-GA-convex functions.

**Theorem 8.** *For some fixed* (*α*, *m*) ∈ (0, 1] × (0, 1] *and s* ∈ (−1, 1]*, let a*, *b* ∈ R<sup>+</sup> *with b* > *a and x* ∈ [*a*, *b*]*, let f* : $(0, \max\{b, b^{1/m}\}] \to \mathbb{R}$ *be a differentiable function, let* $f' \in L_1\left(\left[a, \max\{b, b^{1/m}\}\right]\right)$*, and let* $|f'|$ *be decreasing on* [*a*, *b*]*. If* $|f'|^q$ *is* (*α*,*s*, *m*)*-*GA*-convex on* $(0, \max\{b, b^{1/m}\}]$ *for q* ≥ 1*, then:*

*1. When s* ∈ (−1, 1] *and α* ∈ (0, 1]*, we have*

$$\begin{split} & \left| \frac{(b-x)f(b) + (x-a)f(a)}{b-a} - \frac{1}{b-a}\int_a^b f(u)\,\mathrm{d}u \right| \\ & \le \left(\frac{1}{2}\right)^{1-1/q}\left\{ \frac{(x-a)^2}{b-a}\left[ R(\alpha,s)|f'(x)|^q + mT(\alpha,s)\left|f'\left(a^{1/m}\right)\right|^q \right]^{1/q} \right. \\ & \quad \left. + \frac{(b-x)^2}{b-a}\left[ R(\alpha,s)|f'(x)|^q + mT(\alpha,s)\left|f'\left(b^{1/m}\right)\right|^q \right]^{1/q} \right\}; \end{split} \tag{5}$$

*2. When s* = −1 *and α* ∈ (0, 1)*, we have*

$$\begin{split} & \left| \frac{(b-x)f(b) + (x-a)f(a)}{b-a} - \frac{1}{b-a}\int_a^b f(u)\,\mathrm{d}u \right| \\ & \le \left(\frac{1}{2}\right)^{1-1/q}\left\{ \frac{(x-a)^2}{b-a}\left[ R_{-1}(\alpha)|f'(x)|^q + mT_{-1}(\alpha)\left|f'\left(a^{1/m}\right)\right|^q \right]^{1/q} \right. \\ & \quad \left. + \frac{(b-x)^2}{b-a}\left[ R_{-1}(\alpha)|f'(x)|^q + mT_{-1}(\alpha)\left|f'\left(b^{1/m}\right)\right|^q \right]^{1/q} \right\}, \end{split} \tag{6}$$

*where* $R(\alpha,s)$*,* $T(\alpha,s)$*,* $R_{-1}(\alpha)$*, and* $T_{-1}(\alpha)$ *are defined in Theorem 7 and Lemma 2, respectively.*

**Proof.** Using (3), we have

$$\begin{split} & \left| \frac{(b-x)f(b) + (x-a)f(a)}{b-a} - \frac{1}{b-a}\int_a^b f(u)\,\mathrm{d}u \right| \\ & \le \frac{(x-a)^2}{b-a}\left[\int_0^1 (1-t)\,\mathrm{d}t\right]^{1-1/q}\left[\int_0^1 (1-t)|f'(x^t a^{1-t})|^q\,\mathrm{d}t\right]^{1/q} \\ & \quad + \frac{(b-x)^2}{b-a}\left[\int_0^1 (1-t)\,\mathrm{d}t\right]^{1-1/q}\left[\int_0^1 (1-t)|f'(x^t b^{1-t})|^q\,\mathrm{d}t\right]^{1/q}. \end{split} \tag{7}$$

Making use of the (*α*,*s*, *m*)-GA-convexity of $|f'|^q$ on $(0, \max\{b, b^{1/m}\}]$ once again yields

$$\begin{aligned} \int_0^1 (1-t)\left|f'\left(x^t a^{1-t}\right)\right|^q \mathrm{d}t &= \int_0^1 (1-t)\left|f'\left(x^t a^{m(1-t)/m}\right)\right|^q \mathrm{d}t \\ &\le \int_0^1 \left[ (1-t)t^{\alpha s}|f'(x)|^q + m(1-t)(1-t^{\alpha})^s\left|f'\left(a^{1/m}\right)\right|^q \right]\mathrm{d}t \\ &= R(\alpha,s)|f'(x)|^q + mT(\alpha,s)\left|f'\left(a^{1/m}\right)\right|^q \end{aligned}$$

and

$$\int_0^1 (1-t)\left|f'\left(x^t b^{1-t}\right)\right|^q \mathrm{d}t \le R(\alpha,s)|f'(x)|^q + mT(\alpha,s)\left|f'\left(b^{1/m}\right)\right|^q.$$

We then substitute the two inequalities above into (7) and simplify to obtain the required inequality (5).

Similarly, we can prove inequality (6). The proof of Theorem 8 is complete.

**Corollary 4.** *In Theorem 8, if q* = 1 *and s* ∈ (−1, 1]*, then*

$$\begin{aligned} \left| \frac{(b-x)f(b) + (x-a)f(a)}{b-a} - \frac{1}{b-a}\int_a^b f(u)\,\mathrm{d}u \right| &\le \frac{(x-a)^2}{b-a}\left[ R(\alpha,s)|f'(x)| + mT(\alpha,s)\left|f'\left(a^{1/m}\right)\right| \right] \\ &\quad + \frac{(b-x)^2}{b-a}\left[ R(\alpha,s)|f'(x)| + mT(\alpha,s)\left|f'\left(b^{1/m}\right)\right| \right]. \end{aligned}$$

**Theorem 9.** *For some fixed* (*α*, *m*) ∈ (0, 1] × (0, 1] *and s* ∈ (−1, 1]*, let a*, *b* ∈ R<sup>+</sup> *with b* > *a and x* ∈ [*a*, *b*]*, let f* : $(0, \max\{b, b^{1/m}\}] \to \mathbb{R}$ *be a differentiable function, and let* $f' \in L_1\left(\left[a, \max\{b, b^{1/m}\}\right]\right)$ *and* $|f'|$ *be decreasing on* [*a*, *b*]*. If* $|f'|^q$ *is* (*α*,*s*, *m*)*-*GA*-convex on* $(0, \max\{b, b^{1/m}\}]$ *for q* > 1*, then*

$$\begin{split} & \left| \frac{(b-x)f(b) + (x-a)f(a)}{b-a} - \frac{1}{b-a}\int_a^b f(u)\,\mathrm{d}u \right| \\ & \le \left(\frac{q-1}{2q-1}\right)^{1-1/q}\left\{ \frac{(x-a)^2}{b-a}\left[\frac{\alpha|f'(x)|^q + m(\alpha s+1)B\left(\frac{1}{\alpha}, s+1\right)\left|f'\left(a^{1/m}\right)\right|^q}{\alpha(\alpha s+1)}\right]^{1/q} \right. \\ & \quad \left. + \frac{(b-x)^2}{b-a}\left[\frac{\alpha|f'(x)|^q + m(\alpha s+1)B\left(\frac{1}{\alpha}, s+1\right)\left|f'\left(b^{1/m}\right)\right|^q}{\alpha(\alpha s+1)}\right]^{1/q} \right\}, \end{split} \tag{8}$$

*where B*(*x*, *y*) *is defined by* (2) *in Theorem 7.*

**Proof.** Since $|f'|$ is decreasing on [*a*, *b*], by Lemma 1 and the Hölder integral inequality, we obtain

$$\begin{split} & \left| \frac{(b-x)f(b) + (x-a)f(a)}{b-a} - \frac{1}{b-a}\int_a^b f(u)\,\mathrm{d}u \right| \\ & \le \frac{(x-a)^2}{b-a}\left[\int_0^1 (1-t)^{q/(q-1)}\,\mathrm{d}t\right]^{1-1/q}\left[\int_0^1 |f'(x^t a^{1-t})|^q\,\mathrm{d}t\right]^{1/q} \\ & \quad + \frac{(b-x)^2}{b-a}\left[\int_0^1 (1-t)^{q/(q-1)}\,\mathrm{d}t\right]^{1-1/q}\left[\int_0^1 |f'(x^t b^{1-t})|^q\,\mathrm{d}t\right]^{1/q}, \end{split} \tag{9}$$

where

$$\begin{aligned} \int_0^1 (1-t)^{q/(q-1)}\,\mathrm{d}t &= \frac{q-1}{2q-1}, \\ \int_0^1 |f'(x^t a^{1-t})|^q\,\mathrm{d}t &\le \int_0^1 \left[ t^{\alpha s}|f'(x)|^q + m(1-t^{\alpha})^s\left|f'\left(a^{1/m}\right)\right|^q \right]\mathrm{d}t \\ &= \frac{\alpha|f'(x)|^q + m(\alpha s+1)B\left(\frac{1}{\alpha}, s+1\right)\left|f'\left(a^{1/m}\right)\right|^q}{\alpha(\alpha s+1)} \end{aligned}$$

and

$$\int_0^1 |f'(x^t b^{1-t})|^q\,\mathrm{d}t \le \frac{\alpha|f'(x)|^q + m(\alpha s+1)B\left(\frac{1}{\alpha}, s+1\right)\left|f'\left(b^{1/m}\right)\right|^q}{\alpha(\alpha s+1)}.$$

Note that in the above arguments, we used the fact that the function $|f'|^q$ is (*α*,*s*, *m*)-GA-convex on $(0, \max\{b, b^{1/m}\}]$. Applying the above equality and inequalities to (9) and simplifying leads to the required inequality (8). The proof of Theorem 9 is complete.

Using the same method as that in the proof of Theorem 9, we obtain the following inequalities for (*α*,*s*)-GA-convex functions.

**Theorem 10.** *For some s* ∈ (−1, 1] *and α* ∈ (0, 1]*, let f* : *I* ⊆ R<sup>+</sup> → R *be a differentiable function on I* ◦ *, let a*, *b* ∈ *I* ◦ *with a* < *b and x* ∈ [*a*, *b*]*, and let* $f' \in L_1([a,b])$ *and* $|f'|$ *be decreasing on* [*a*, *b*]*. If* $|f'|^q$ *is* (*α*,*s*)*-*GA*-convex on* [*a*, *b*] *for q* > 1*, then*

$$\begin{split} & \left| \frac{(b-x)f(b) + (x-a)f(a)}{b-a} - \frac{1}{b-a}\int_a^b f(u)\,\mathrm{d}u \right| \\ & \le \left(\frac{q-1}{2q-1}\right)^{1-1/q}\left\{ \frac{(x-a)^2}{b-a}\left[\frac{\alpha|f'(x)|^q + (\alpha s+1)B\left(\frac{1}{\alpha}, s+1\right)|f'(a)|^q}{\alpha(\alpha s+1)}\right]^{1/q} \right. \\ & \quad \left. + \frac{(b-x)^2}{b-a}\left[\frac{\alpha|f'(x)|^q + (\alpha s+1)B\left(\frac{1}{\alpha}, s+1\right)|f'(b)|^q}{\alpha(\alpha s+1)}\right]^{1/q} \right\}, \end{split}$$

*where B*(*x*, *y*) *is defined by* (2) *in Theorem 7.*

In Theorem 10, when *α* = 1, the Hermite–Hadamard-type integral inequality is the same as the result in [6].

**Corollary 5** ([6])**.** *Under the conditions of Theorem 10, if we take α* = 1*, then*

$$\begin{aligned} \left| \frac{(b-x)f(b) + (x-a)f(a)}{b-a} - \frac{1}{b-a} \int\_{a}^{b} f(u) \, \mathrm{d} u \right| &\leq \left( \frac{q-1}{2q-1} \right)^{1-1/q} \\ \times \left[ \frac{(x-a)^{2}}{b-a} \left( \frac{|f'(x)|^{q} + |f'(a)|^{q}}{s+1} \right)^{1/q} + \frac{(b-x)^{2}}{b-a} \left( \frac{|f'(x)|^{q} + |f'(b)|^{q}}{s+1} \right)^{1/q} \right]. \end{aligned}$$

#### **5. Applications to Special Means**

For two positive numbers *a*, *b* ∈ R<sup>+</sup> with *b* > *a*, define

$$A(a,b) = \frac{a+b}{2}, \quad H(a,b) = \frac{2ab}{a+b}, \quad L(a,b) = \frac{b-a}{\ln b - \ln a}$$

and

$$L\_r(a,b) = \begin{cases} \left[\frac{b^{r+1} - a^{r+1}}{(r+1)(b-a)}\right]^{1/r}, & r \neq 0, -1;\\ L(a,b), & r = -1;\\ \frac{1}{e} \left(\frac{b^b}{a^a}\right)^{1/(b-a)}, & r = 0. \end{cases}$$

These means are respectively called the arithmetic, harmonic, logarithmic, and generalized logarithmic means of *a*, *b* ∈ R+.
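The means above can be implemented directly from the definitions; the helper code below (sample values assumed, not from the paper) also illustrates the classical ordering $H \le L \le A$ and the facts that $L\_{-1} = L$ and $L\_1 = A$.

```python
# Illustrative implementation (assumed helper code, not from the paper)
# of the arithmetic, harmonic, logarithmic, and generalized logarithmic means.
from math import log, e

def A(a, b):  # arithmetic mean
    return (a + b) / 2

def H(a, b):  # harmonic mean
    return 2 * a * b / (a + b)

def L(a, b):  # logarithmic mean (a != b)
    return (b - a) / (log(b) - log(a))

def L_r(a, b, r):  # generalized logarithmic mean
    if r == -1:
        return L(a, b)
    if r == 0:
        return (1 / e) * (b ** b / a ** a) ** (1 / (b - a))
    return ((b ** (r + 1) - a ** (r + 1)) / ((r + 1) * (b - a))) ** (1 / r)

a, b = 2.0, 5.0
```

For $a = 2$, $b = 5$ one gets $H \approx 2.857 < L \approx 3.274 < A = 3.5$.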

**Theorem 11.** *Let $a, b \in \mathbb{R}^+$ with $a < b$, let $0 \neq r \leq 1$, and let $q \geq 1$.*

*1. If $r \neq -1$, we have*

$$\begin{split} \left| A(a^r, b^r) - L\_r^r(a, b) \right| \\ \leq \frac{(b-a)|r|}{2} \left( \frac{1}{2} \right)^{2(1-1/q)} \left[ \left( \frac{2A^{(r-1)q}(a, b) + a^{(r-1)q}}{3} \right)^{1/q} \\ + \left( \frac{2A^{(r-1)q}(a, b) + b^{(r-1)q}}{3} \right)^{1/q} \right]. \end{split}$$

*2. If r* = −1*, we have*

$$\begin{aligned} \left| \frac{1}{H(a,b)} - \frac{1}{L(a,b)} \right| \\ &\leq \frac{b-a}{2} \left( \frac{1}{2} \right)^{2(1-1/q)} \left[ \left( \frac{2A^{-2q}(a,b) + a^{-2q}}{3} \right)^{1/q} \\ &\qquad + \left( \frac{2A^{-2q}(a,b) + b^{-2q}}{3} \right)^{1/q} \right]. \end{aligned}$$

**Proof.** In Corollary 1, let $x = \frac{a+b}{2}$ and $s = -\frac{1}{2}$. If $r \leq 1$ and $q \geq 1$, then $|f'(x)| = |r| x^{r-1}$ is decreasing on $[a, b]$. By Proposition 1, we can derive the inequalities in Theorem 11.

**Corollary 6.** *Under the conditions of Theorem 11, with $q = 1$:*

*1. If $r \neq -1$, we have*

$$|A(a^r, b^r) - L\_r^r(a, b)| \le (b - a)|r| \left[ \frac{a^{r-1} + 4[A(a, b)]^{r-1} + b^{r-1}}{6} \right].$$

*2. If r* = −1*, we have*

$$\left| \frac{1}{H(a,b)} - \frac{1}{L(a,b)} \right| \le (b-a)|r| \left[ \frac{a^{-2} + 4[A(a,b)]^{-2} + b^{-2}}{6} \right].$$
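Both bounds in Corollary 6 can be spot-checked numerically; the values $a = 1$, $b = 4$, $r = 1/2$ below are assumed for illustration and are not from the paper.

```python
# Numerical spot-check (illustrative values assumed, not from the paper)
# of the two bounds in Corollary 6.
from math import log

def A(a, b):
    return (a + b) / 2

def H(a, b):
    return 2 * a * b / (a + b)

def L(a, b):
    return (b - a) / (log(b) - log(a))

def corollary6_part1(a, b, r):
    # returns (|A(a^r, b^r) - L_r^r(a, b)|, right-hand bound)
    lhs = abs(A(a ** r, b ** r)
              - (b ** (r + 1) - a ** (r + 1)) / ((r + 1) * (b - a)))
    rhs = (b - a) * abs(r) * (a ** (r - 1)
                              + 4 * A(a, b) ** (r - 1) + b ** (r - 1)) / 6
    return lhs, rhs

def corollary6_part2(a, b):
    lhs = abs(1 / H(a, b) - 1 / L(a, b))
    rhs = (b - a) * (a ** -2 + 4 * A(a, b) ** -2 + b ** -2) / 6
    return lhs, rhs

lhs1_c, rhs1_c = corollary6_part1(1.0, 4.0, 0.5)
lhs2_c, rhs2_c = corollary6_part2(1.0, 4.0)
```

For these values, part 1 gives roughly $0.056 \le 1.008$ and part 2 gives roughly $0.163 \le 0.851$, so both bounds hold with a comfortable margin.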

#### **6. Conclusions**

Integral inequalities are important for predicting upper and lower bounds in various areas of the applied sciences, such as probability theory, functional inequalities, and information theory.

In this paper, after recalling some notions of convexity and the Hermite–Hadamard-type integral inequalities, we introduced the notions of $(\alpha, s)$-geometric-arithmetically convex functions and $(\alpha, s, m)$-geometric-arithmetically convex functions, established several integral inequalities of the Hermite–Hadamard type for $(\alpha, s)$-GA-convex and $(\alpha, s, m)$-GA-convex functions, and applied several of these results to construct inequalities involving special means.

**Author Contributions:** Writing—original draft, J.-Y.W., H.-P.Y., W.-L.S. and B.-N.G. All authors contributed equally to the writing of the manuscript and read and approved the final version of the manuscript.

**Funding:** This work was partially supported by the Research Program of Science and Technology at Universities of Inner Mongolia Autonomous Region (Grant No. NJZY20119), China.

**Data Availability Statement:** The study did not report any data.

**Acknowledgments:** The authors thank the anonymous referees for their careful corrections and valuable comments on the original version of this paper.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


### *Article* **Schur-Convexity of the Mean of Convex Functions for Two Variables**

**Huan-Nan Shi <sup>1</sup> , Dong-Sheng Wang <sup>2</sup> and Chun-Ru Fu 3,\***


**Abstract:** The results of Schur convexity established by Elezovic and Pecaric for the average of convex functions are generalized relative to the case of the means for two-variable convex functions. As an application, some binary mean inequalities are given.

**Keywords:** inequality; Schur-convex function; Hadamard's inequality; convex functions of two variables; mean

**MSC:** 26A51; 26D15; 26B25

#### **1. Introduction**

Let $\mathbb{R}$ be the set of real numbers, let $g$ be a convex function defined on an interval $I \subseteq \mathbb{R}$, and let $c, d \in I$ with $c < d$. Then

$$g\left(\frac{d+c}{2}\right) \le \frac{1}{d-c} \int\_c^d g(t) \,\mathrm{d}t \le \frac{g(d) + g(c)}{2}.\tag{1}$$

This is the famous Hadamard's inequality for convex functions.
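Inequality (1) is easy to verify numerically. The sketch below (the choice $g = \exp$ on $[0, 1]$ is an assumption for illustration, not from the paper) compares the three quantities in (1).

```python
# Numerical illustration (assumed example, not from the paper) of
# Hadamard's inequality (1) for the convex function g(t) = exp(t) on [0, 1].
from math import exp

def midpoint_integral(g, c, d, n=100000):
    # midpoint rule on [c, d]
    h = (d - c) / n
    return sum(g(c + (k + 0.5) * h) for k in range(n)) * h

g, c, d = exp, 0.0, 1.0
left = g((c + d) / 2)                          # g at the midpoint
middle = midpoint_integral(g, c, d) / (d - c)  # integral mean of g
right = (g(c) + g(d)) / 2                      # mean of the endpoint values
```

Here $e^{1/2} \approx 1.649 \le e - 1 \approx 1.718 \le (1 + e)/2 \approx 1.859$, as (1) predicts.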

In 2000, utilizing Hadamard's inequality, Elezovic and Pecaric [1] studied the Schur-convexity, with respect to the lower and upper limits of the integral, of the mean of convex functions and obtained the following important and profound theorem.

**Theorem 1** ([1])**.** *Let I be an interval with nonempty interior on* R *and g be a continuous function on I. Then,*

$$\Phi(c,d) = \begin{cases} \frac{1}{d-c} \int\_{c}^{d} g(s) \,\mathrm{d}s, & c,d \in I, \ d \neq c, \\ g(c), & d = c \end{cases}$$

*is Schur convex* (*Schur concave, resp.*) *on I* × *I iff g is convex* (*concave, resp.*) *on I.*

In recent years, this result attracted the attention of many scholars (see references [2–12] and Chapter II of the monograph [13] and its references).

In this paper, the result of Theorem 1 is generalized to the case of bivariate convex functions, and some bivariate mean inequalities are established.

**Theorem 2.** *Let I be an interval with non-empty interior on* R *and g*(*s*, *t*) *be a continuous function on I* × *I. If g is convex* (*or concave resp.*) *on I* × *I, then*

$$G(u,v) = \begin{cases} \frac{1}{(v-u)^2} \int\_u^v \int\_u^v g(s,t) \, \mathrm{d}s \, \mathrm{d}t, & (u,v) \in I \times I, \, u \neq v, \\ g(u,u), & (u,v) \in I \times I, \, u = v \end{cases} \tag{2}$$

*is Schur convex* (*or Schur concave, resp.*) *on I* × *I.*

**Citation:** Shi, H.-N.; Wang, D.-S.; Fu, C.-R. Schur-Convexity of the Mean of Convex Functions for Two Variables. *Axioms* **2022**, *11*, 681. https:// doi.org/10.3390/axioms11120681

Academic Editor: Delfim F. M. Torres

Received: 17 September 2022 Accepted: 21 November 2022 Published: 29 November 2022

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

#### **2. Definitions and Lemmas**

To prove Theorem 2, we provide the following lemmas and definitions.

**Definition 1.** *Let $(x\_1, x\_2)$ and $(y\_1, y\_2) \in \mathbb{R} \times \mathbb{R}$.*

*(1) A set $\Omega \subset \mathbb{R} \times \mathbb{R}$ is said to be convex if $(x\_1, x\_2), (y\_1, y\_2) \in \Omega$ and $0 \leq \beta \leq 1$ implies*

$$(\beta x\_1 + (1 - \beta)y\_1, \beta x\_2 + (1 - \beta)y\_2) \in \Omega.$$

*(2) Let $\Omega \subset \mathbb{R} \times \mathbb{R}$ be a convex set. A function $\psi: \Omega \to \mathbb{R}$ is said to be a convex function on $\Omega$ if, for all $\beta \in [0, 1]$ and all $(x\_1, x\_2), (y\_1, y\_2) \in \Omega$, the inequality*

$$
\psi(\beta x\_1 + (1 - \beta)y\_1, \beta x\_2 + (1 - \beta)y\_2) \le \beta \psi(x\_1, x\_2) + (1 - \beta)\psi(y\_1, y\_2) \tag{3}
$$

*holds. If, for all $\beta \in (0, 1)$ and all distinct $(x\_1, x\_2), (y\_1, y\_2) \in \Omega$, the strict inequality in* (3) *holds, then $\psi$ is said to be strictly convex. $\psi$ is called concave* (*or strictly concave, resp.*) *iff $-\psi$ is convex* (*or strictly convex, resp.*)*.*

**Definition 2** ([14,15])**.** *Let* Ω ⊆ R × R,(*x*1, *x*2) *and* (*y*1, *y*2) ∈ Ω*, and let ϕ* : Ω → R*:*


**Lemma 1** ([14] (p. 5))**.** *Let* (*x*1, *x*2) ∈ R × R*. Then*

$$
\left(\frac{x\_1+x\_2}{2}, \frac{x\_1+x\_2}{2}\right) \prec (x\_1, x\_2).
$$

**Lemma 2** ([14] (p. 5))**.** *Let* Ω ⊆ R × R *be symmetric set with a nonempty interior* Ω◦ *. ψ* : Ω → R *is continuous on* Ω *and differentiable in* Ω◦ *. Then, function ψ is Schur convex (or Schur concave, resp.) iff ψ is symmetric on* Ω *and*

$$(x\_1 - x\_2) \left( \frac{\partial \psi}{\partial x\_1} - \frac{\partial \psi}{\partial x\_2} \right) \ge 0 \quad (\text{or} \le 0, \ \text{resp.})$$

*holds for any* (*x*1, *x*2) ∈ Ω◦ *.*

**Lemma 3** ([16])**.** *Let $\varphi(x, w)$ and $\frac{\partial \varphi(x, w)}{\partial w}$ be continuous on $D = \{(x, w) : a \le x \le b, \ c \le w \le d\}$, and let $a(w)$, $b(w)$ and their derivatives be continuous on $[c, d]$, with $w \in [c, d]$ implying $a(w), b(w) \in [a, b]$. Then,*

$$\frac{\mathrm{d}}{\mathrm{d}w} \int\_{a(w)}^{b(w)} \varphi(x, w) \, \mathrm{d}x = \int\_{a(w)}^{b(w)} \frac{\partial \varphi(x, w)}{\partial w} \, \mathrm{d}x + \varphi(b(w), w)b'(w) - \varphi(a(w), w)a'(w). \tag{4}$$

**Lemma 4.** *Let $g(s, t)$ be continuous on the rectangle $[a, p; a, q]$ and let $G(c, d) = \int\_c^d \int\_c^d g(s, t) \, \mathrm{d}s \, \mathrm{d}t$. If $c = c(b)$ and $d = d(b)$ are differentiable with respect to $b$, with $a \le c(b) \le p$ and $a \le d(b) \le q$, then*

$$\frac{\partial G}{\partial b} = \int\_{c}^{d} g(s, d) d'(b) \, \mathrm{d}s - \int\_{c}^{d} g(s, c) c'(b) \, \mathrm{d}s$$

$$+ d'(b) \int\_{c}^{d} g(d, t) \, \mathrm{d}t - c'(b) \int\_{c}^{d} g(c, t) \, \mathrm{d}t. \tag{5}$$

**Proof.** Let $\varphi(s, b) = \int\_c^d g(s, t) \, \mathrm{d}t$. Then,

$$
\frac{\partial \varphi(s,b)}{\partial b} = g(s,d)d'(b) - g(s,c)c'(b).
$$

By Lemma 3, we have

$$\begin{split} \frac{\partial G}{\partial b} &= \frac{\mathrm{d}}{\mathrm{d}b} \int\_{c}^{d} \varphi(s,b) \, \mathrm{d}s \\ &= \int\_{c}^{d} \frac{\partial \varphi(s,b)}{\partial b} \, \mathrm{d}s + \varphi(d,b)d'(b) - \varphi(c,b)c'(b) \\ &= \int\_{c}^{d} g(s,d)d'(b) \, \mathrm{d}s - \int\_{c}^{d} g(s,c)c'(b) \, \mathrm{d}s \\ &\quad + d'(b) \int\_{c}^{d} g(d,t) \, \mathrm{d}t - c'(b) \int\_{c}^{d} g(c,t) \, \mathrm{d}t. \end{split}$$

**Remark 1.** *In passing, it is pointed out that (9) in Lemma 5 of reference [2] is incorrect and should be replaced by (4) of this paper.*

**Lemma 5.** *Let $I$ be an interval with nonempty interior on $\mathbb{R}$ and let $g(s, t)$ be a continuous function on $I \times I$. For $(u, v) \in I \times I$ with $u \neq v$, let $G(u, v) = \int\_u^v \int\_u^v g(s, t) \, \mathrm{d}s \, \mathrm{d}t$. Then,*

$$\frac{\partial G}{\partial v} = \int\_{u}^{v} g(s, v) \, \mathrm{d}s + \int\_{u}^{v} g(v, t) \, \mathrm{d}t,\tag{6}$$

$$\frac{\partial G}{\partial u} = -\left(\int\_{u}^{v} g(s, u) \, \mathrm{d}s + \int\_{u}^{v} g(u, t) \, \mathrm{d}t\right). \tag{7}$$

**Proof.** By taking $c(b) = u$ and $d(b) = b$ in Lemma 4, we have $c'(b) = 0$ and $d'(b) = 1$. By (5) in Lemma 4, we obtain (6).

Notice that $G(u, v) = \int\_v^u \int\_v^u g(s, t) \, \mathrm{d}s \, \mathrm{d}t$; from (5), we have

$$\frac{\partial G}{\partial u} = \int\_{v}^{u} g(s, u) \, \mathrm{d}s + \int\_{v}^{u} g(u, t) \, \mathrm{d}t = -\left(\int\_{u}^{v} g(s, u) \, \mathrm{d}s + \int\_{u}^{v} g(u, t) \, \mathrm{d}t\right).$$

$$\square$$

**Lemma 6** ([14] (p. 38, Proposition 4.3) and [15] (p. 644, B.3.d))**.** *Let* Ω ⊂ R × R *be an open convex set and let ψ*(*x*, *y*) : Ω → R *be twice differentiable. Then, ψ is convex on* Ω *iff the Hessian matrix*

$$H(\mathbf{x}, y) = \begin{pmatrix} \frac{\partial^2 \psi}{\partial x \partial x} & \frac{\partial^2 \psi}{\partial x \partial y} \\ \frac{\partial^2 \psi}{\partial y \partial x} & \frac{\partial^2 \psi}{\partial y \partial y} \end{pmatrix}$$

*is non-negative definite on* Ω*. If H*(*x*, *y*) *is positive definite on* Ω*, then ψ is strictly convex on* Ω*.*

#### **3. Proofs of Main Results**

**Proof of Theorem 2.** Let $g(s, t)$ be convex on $I \times I$. $G(u, v)$ is evidently symmetric. By Lemma 5, we have

$$\frac{\partial G(u,v)}{\partial v} = \frac{-2}{(v-u)^3} \int\_u^v \int\_u^v g(s,t) \, \mathrm{ds} \, \mathrm{d}t + \frac{1}{(v-u)^2} \left( \int\_u^v g(s,v) \, \mathrm{ds} + \int\_u^v g(v,t) \, \mathrm{d}t \right).$$

$$\frac{\partial G(u,v)}{\partial u} = \frac{2}{(v-u)^3} \int\_u^v \int\_u^v g(s,t) \, \mathrm{ds} \, \mathrm{d}t - \frac{1}{(v-u)^2} \left( \int\_u^v g(s,u) \, \mathrm{ds} + \int\_u^v g(u,t) \, \mathrm{d}t \right).$$

$$\begin{split} \Delta := & (v - u) \left( \frac{\partial G(\boldsymbol{u}, \boldsymbol{v})}{\partial \boldsymbol{v}} - \frac{\partial G(\boldsymbol{u}, \boldsymbol{v})}{\partial \boldsymbol{u}} \right) = - \frac{4}{(v - u)^{2}} \int\_{\boldsymbol{u}}^{v} \int\_{\boldsymbol{u}}^{v} g(\boldsymbol{s}, t) \, \operatorname{ds} \, \mathrm{d}t \\ & + \frac{1}{v - u} \int\_{\boldsymbol{u}}^{v} (g(\boldsymbol{s}, v) + g(\boldsymbol{s}, \boldsymbol{u})) \, \operatorname{ds} + \frac{1}{v - u} \int\_{\boldsymbol{u}}^{v} (g(\boldsymbol{u}, t) + g(\boldsymbol{v}, t)) \, \operatorname{d}t \end{split}$$

By Hadamard's inequality, we have

$$\begin{aligned} &\frac{2}{(v-u)^2} \int\_u^v \int\_u^v g(s,t) \, \mathrm{d}s \, \mathrm{d}t = \frac{2}{v-u} \int\_u^v \left( \frac{1}{v-u} \int\_u^v g(s,t) \, \mathrm{d}s \right) \mathrm{d}t \\ &\leq \frac{2}{v-u} \int\_u^v \frac{g(u,t) + g(v,t)}{2} \, \mathrm{d}t = \frac{1}{v-u} \int\_u^v (g(u,t) + g(v,t)) \, \mathrm{d}t \end{aligned}$$

and

$$\begin{split} &\frac{2}{(v-u)^{2}} \int\_{u}^{v} \int\_{u}^{v} g(s,t) \, \mathrm{d}s \, \mathrm{d}t = \frac{2}{v-u} \int\_{u}^{v} \left( \frac{1}{v-u} \int\_{u}^{v} g(s,t) \, \mathrm{d}t \right) \mathrm{d}s \\ &\leq \frac{2}{v-u} \int\_{u}^{v} \frac{g(s,u) + g(s,v)}{2} \, \mathrm{d}s = \frac{1}{v-u} \int\_{u}^{v} (g(s,u) + g(s,v)) \, \mathrm{d}s. \end{split}$$

Moreover, we have

$$\begin{aligned} &\frac{4}{(v-u)^2} \int\_u^v \int\_u^v g(s,t) \, \mathrm{d}s \, \mathrm{d}t \\ &\leq \frac{1}{v-u} \int\_u^v (g(s,v) + g(s,u)) \, \mathrm{d}s + \frac{1}{v-u} \int\_u^v (g(u,t) + g(v,t)) \, \mathrm{d}t. \end{aligned}$$

Therefore, ∆ ≥ 0, so *G*(*u*, *v*) is Schur-convex on *I* × *I*.

When *g*(*s*, *t*) is a concave function on *I* × *I*, it can be proved with similar methods.
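Theorem 2 can be illustrated numerically: for a convex $g$, the mean $G$ of (2) should increase along a majorization chain. The sketch below (the choice $g(s,t) = e^{s+t}$ and the chain $(0.5, 0.5) \prec (0.4, 0.6) \prec (0, 1)$ are assumptions for illustration, not from the paper) checks this with a double midpoint rule.

```python
# Numerical illustration (assumed example, not from the paper) of Theorem 2:
# for convex g(s, t) = exp(s + t), the mean G(u, v) of (2) increases along
# the majorization chain (0.5, 0.5) < (0.4, 0.6) < (0, 1).
from math import exp

def G(u, v, g, n=400):
    # double midpoint rule for (2); falls back to g(u, u) on the diagonal
    if u == v:
        return g(u, u)
    h = (v - u) / n
    total = sum(g(u + (i + 0.5) * h, u + (j + 0.5) * h)
                for i in range(n) for j in range(n))
    return total * h * h / (v - u) ** 2

g = lambda s, t: exp(s + t)
g_mid = G(0.5, 0.5, g)   # = e
g_in = G(0.4, 0.6, g)
g_out = G(0.0, 1.0, g)   # = (e - 1)^2
```

The computed values are roughly $2.718 \le 2.727 \le 2.952$, matching the Schur-convexity claim.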

#### **4. Application on Binary Mean**

**Theorem 3.** *Let $c > 0$ and $d > 0$. If $c \neq d$ and $0 < s < 1$, then*

$$A(d,c) \ge S\_{s+1}^s(d,c)S\_s^{s-1}(d,c) \ge \frac{(c+d)^{2s-1}}{s(s+1)},\tag{8}$$

*where $A(d, c) = \frac{c+d}{2}$ and $S\_s(d, c) = \left[\frac{d^s - c^s}{s(d-c)}\right]^{\frac{1}{s-1}}$ are the arithmetic mean and the $s$-order Stolarsky mean of positive numbers $c$ and $d$, respectively.*

**Proof.** Let $x > 0$, $y > 0$ and $0 < s < 1$. From Theorem 4 in the reference [17], we know that $g(x, y) = x^s y^{1-s}$ is concave on $(0, +\infty) \times (0, +\infty)$. For $c \neq d$, by Theorem 2, from $\left(\frac{d+c}{2}, \frac{d+c}{2}\right) \prec (c, d) \prec (d + c, 0)$, it follows that

$$\begin{split} G(d+c,0) &= \frac{1}{(d+c-0)^2} \int\_0^{d+c} \int\_0^{d+c} x^s y^{1-s} \, \mathrm{d}x \, \mathrm{d}y \\ &= \frac{1}{(d+c)^2} \int\_0^{d+c} x^s \, \mathrm{d}x \int\_0^{d+c} y^{1-s} \, \mathrm{d}y \\ &= \frac{1}{(d+c)^2} \frac{(c+d)^{s+1}}{s+1} \frac{(c+d)^s}{s} = \frac{(c+d)^{2s-1}}{s(s+1)} \\ &\leq G(c,d) = \frac{1}{(d-c)^2} \int\_c^d \int\_c^d x^s y^{1-s} \, \mathrm{d}x \, \mathrm{d}y \\ &= \frac{1}{(d-c)^2} \int\_c^d x^s \, \mathrm{d}x \int\_c^d y^{1-s} \, \mathrm{d}y \\ &= \frac{1}{(d-c)^2} \frac{d^{s+1} - c^{s+1}}{s+1} \frac{d^s - c^s}{s} \\ &\leq G\left(\frac{d+c}{2}, \frac{d+c}{2}\right) = \frac{d+c}{2}. \end{split}$$

That is, we obtain the following.

$$\frac{(c+d)^{2s-1}}{s(s+1)} \le S\_{s+1}^s(d,c)S\_s^{s-1}(d,c) = \frac{d^{s+1}-c^{s+1}}{(s+1)(d-c)} \cdot \frac{d^s-c^s}{s(d-c)} \le \frac{d+c}{2} = A(d,c).$$

**Theorem 4.** *Let c* > 0, *d* > 0*. Then,*

$$\log\left(\frac{A(d,c)}{B(d,c)}\right)^2 \ge \left(\frac{c-d}{d+c}\right)^2,\tag{9}$$

*where $B(d, c) = \sqrt{dc}$ is the geometric mean of positive numbers $c$ and $d$.*

**Proof.** From reference [17], we know that the function $g(x, y) = \frac{1}{(x+y)^2}$ is convex on $(0, +\infty) \times (0, +\infty)$. For $c > 0$, $d > 0$ and $d \neq c$, by Theorem 2, from $\left(\frac{d+c}{2}, \frac{d+c}{2}\right) \prec (d, c)$, it follows that

$$\begin{aligned} \mathbf{G}(c,d) &= \frac{1}{(d-c)^2} \int\_c^d \int\_c^d \frac{1}{(x+y)^2} \, \mathrm{d}x \, \mathrm{d}y \\ &= \frac{1}{(d-c)^2} \int\_c^d \left( \frac{1}{c+y} - \frac{1}{d+y} \right) \mathrm{d}y \\ &= \frac{1}{(d-c)^2} [ (\log(d+c) - \log(2c)) - (\log(2d) - \log(d+c)) ] \\ &\ge G\left(\frac{d+c}{2}, \frac{d+c}{2}\right) = \frac{1}{(d+c)^2} \end{aligned}$$

That is, we obtain the following.

$$\log\left(\frac{A(d,c)}{B(d,c)}\right)^2 = \log\frac{(d+c)^2}{4dc} \ge \left(\frac{c-d}{d+c}\right)^2.$$
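Inequality (9) is easy to test on concrete pairs. The sketch below (the sample pairs are assumed for illustration, not from the paper) compares both sides for several choices of $c$ and $d$.

```python
# Spot-check (sample pairs assumed, not from the paper) of inequality (9):
# log((A/B)^2) >= ((c - d)/(d + c))^2 for positive pairs (c, d).
from math import log, sqrt

def lhs(c, d):
    A = (c + d) / 2          # arithmetic mean
    B = sqrt(c * d)          # geometric mean
    return log((A / B) ** 2)

def rhs(c, d):
    return ((c - d) / (d + c)) ** 2

pairs = [(1.0, 2.0), (1.0, 10.0), (3.0, 7.0)]
checks = [(lhs(c, d), rhs(c, d)) for c, d in pairs]
```

For $(c, d) = (1, 2)$, for instance, the left side is about $0.118$ and the right side is $(1/3)^2 \approx 0.111$.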

**Theorem 5.** *Let c* > 0, *d* > 0*. Then,*

$$H\_e(c^2, d^2) \ge A^2(c, d),\tag{10}$$

*where $H\_e(c, d) = \frac{c + \sqrt{cd} + d}{3}$ is the Heronian mean of positive numbers $c$ and $d$.*

**Proof.** From reference [18], we know that the function of two variables

$$
\psi(x, y) = \frac{x^2}{2r^2} + \frac{y^2}{2s^2}
$$

is a convex function on $(0, +\infty) \times (0, +\infty)$, where $s > 0$ and $r > 0$. For $d > 0$, $c > 0$, and $c \neq d$, by Theorem 2, from $\left(\frac{d+c}{2}, \frac{d+c}{2}\right) \prec (d, c)$, it follows that

$$\begin{split} G(c,d) &= \frac{1}{(d-c)^2} \int\_c^d \int\_c^d \left(\frac{x^2}{2r^2} + \frac{y^2}{2s^2}\right) \mathrm{d}x \, \mathrm{d}y \\ &= \frac{1}{(d-c)^2} \int\_c^d \left(\frac{d^3 - c^3}{6r^2} + \frac{y^2(d-c)}{2s^2}\right) \mathrm{d}y \\ &= \frac{1}{(d-c)^2} \left(\frac{(d^3 - c^3)(d-c)}{6r^2} + \frac{(d^3 - c^3)(d-c)}{6s^2}\right) \\ &= \frac{1}{(d-c)^2} \cdot \frac{(d^3 - c^3)(d-c)}{6} \left(\frac{1}{r^2} + \frac{1}{s^2}\right) \\ &\ge G\left(\frac{d+c}{2}, \frac{d+c}{2}\right) = \frac{(c+d)^2}{8} \left(\frac{1}{r^2} + \frac{1}{s^2}\right) \end{split}$$

namely

$$H\_e(c^2, d^2) = \frac{c^2 + cd + d^2}{3} = \frac{d^3 - c^3}{3(d - c)} \ge \frac{(d + c)^2}{4} = A^2(d, c).$$

**Theorem 6.** *Let c* > 0, *d* > 0*. We have*

$$H\_{\mathfrak{e}}(\mathfrak{c}^2, d^2) \ge L(d, \mathfrak{c}) A(d, \mathfrak{c}), \tag{11}$$

*where $L(d, c) = \frac{d - c}{\log d - \log c}$ is the logarithmic mean of positive numbers $c$ and $d$.*

**Proof.** Let $g(x, y) = y^2 x^{-1}$, $x > 0$, $y > 0$. Then,

$$g\_{xx} = 2x^{-3}y^2, \quad g\_{xy} = -2x^{-2}y = g\_{yx}, \quad g\_{yy} = 2x^{-1}.$$

The Hessian matrix of $g(x, y)$ is

$$H = \begin{pmatrix} 2x^{-3}y^2 & -2x^{-2}y \\ -2x^{-2}y & 2x^{-1} \end{pmatrix}.$$

$$\det(H - \lambda I) = \det\begin{pmatrix} 2x^{-3}y^2 - \lambda & -2x^{-2}y \\ -2x^{-2}y & 2x^{-1} - \lambda \end{pmatrix} = 0$$

$$\Rightarrow \lambda(\lambda - 2x^{-3}y^2 - 2x^{-1}) = 0 \Rightarrow \lambda\_1 = 0, \lambda\_2 = 2x^{-3}y^2 + 2x^{-1} > 0.$$
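The eigenvalue computation above can be confirmed numerically. The sketch below (the sample point $x = 1.5$, $y = 2$ is an assumption for illustration, not from the paper) computes the eigenvalues of the symmetric $2 \times 2$ Hessian directly from its trace and determinant.

```python
# Numerical confirmation (sample point assumed, not from the paper) that the
# Hessian of g(x, y) = y^2 x^(-1) has eigenvalues 0 and 2x^(-3)y^2 + 2x^(-1).
def hessian_eigs(x, y):
    a = 2 * x ** -3 * y ** 2   # g_xx
    b = -2 * x ** -2 * y       # g_xy = g_yx
    c = 2 * x ** -1            # g_yy
    # eigenvalues of the symmetric 2x2 matrix [[a, b], [b, c]]
    tr, det = a + c, a * c - b * b
    disc = (tr * tr - 4 * det) ** 0.5
    return (tr - disc) / 2, (tr + disc) / 2

x0, y0 = 1.5, 2.0
lam1, lam2 = hessian_eigs(x0, y0)
```

Since the determinant $g\_{xx} g\_{yy} - g\_{xy}^2$ vanishes identically, one eigenvalue is $0$ and the other equals the trace, which is positive.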

Therefore, matrix $H$ is positive semidefinite, so $g(x, y)$ is a convex function on $(0, +\infty) \times (0, +\infty)$. For $d > 0$, $c > 0$ and $d \neq c$, by Theorem 2, from $\left(\frac{d+c}{2}, \frac{d+c}{2}\right) \prec (d, c)$, it follows that

$$\begin{aligned} G(c,d) &= \frac{1}{(d-c)^2} \int\_c^d \int\_c^d y^2 x^{-1} \, \mathrm{d}x \, \mathrm{d}y \\ &= \frac{\log d - \log c}{d-c} \cdot \frac{d^2 + cd + c^2}{3} \ge \frac{\left(\frac{d+c}{2}\right)^2}{\frac{d+c}{2}} = \frac{d+c}{2}, \end{aligned}$$

which is

$$H\_{\mathfrak{e}}(c^2, d^2) \ge L(d, c)A(d, c).$$
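Both Heronian-mean inequalities, (10) and (11), can be spot-checked on sample values; the pair $c = 1$, $d = 2$ below is an assumption for illustration, not from the paper.

```python
# Spot-check (sample values assumed, not from the paper) of inequalities
# (10) and (11): He(c^2, d^2) >= A^2(c, d) and He(c^2, d^2) >= L(d, c)A(d, c).
from math import sqrt, log

def He(c, d):   # Heronian mean
    return (c + sqrt(c * d) + d) / 3

def A(c, d):    # arithmetic mean
    return (c + d) / 2

def L(d, c):    # logarithmic mean (d != c)
    return (d - c) / (log(d) - log(c))

c, d = 1.0, 2.0
he = He(c ** 2, d ** 2)   # = (c^2 + cd + d^2)/3
```

Here $H\_e(1, 4) = 7/3 \approx 2.333$, while $A^2(1, 2) = 2.25$ and $L(2, 1)A(2, 1) \approx 2.164$.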

**Theorem 7.** *Let $d > 0$, $c > 0$, $d \neq c$. Then*

$$
\widetilde{E}(d,c) \le A(d,c)e^{(d+c)} \left(\frac{d-c}{e^d - e^c}\right)^2 \le A(d,c), \tag{12}
$$

*where*

$$\widetilde{E}(d,c) = \begin{cases} \frac{ce^d - de^c}{e^d - e^c} + 1, & d \neq c, \\ c, & d = c \end{cases}$$

*is the exponential-type mean of positive numbers $c$ and $d$* (*see [13] (p. 134)*)*.*

**Proof.** Let $g(x, y) = x e^{-(x+y)}$, $y > 0$, $x > 0$. From reference [19], we know that the function $g(x, y)$ is convex on $\mathbb{R} \times \mathbb{R}$. For $d > 0$, $c > 0$, and $d \neq c$, by Theorem 2, from $\left(\frac{d+c}{2}, \frac{d+c}{2}\right) \prec (d, c)$, it follows that

$$\begin{split} G(c,d) &= \frac{1}{(c-d)^2} \int\_c^d \int\_c^d \mathbf{x} e^{-\mathbf{x}-y} \, \mathbf{d} \mathbf{x} \, \mathrm{d}y \\ &= \frac{1}{(c-d)^2} \int\_c^d \mathbf{x} e^{-\mathbf{x}} \, \mathbf{d} \mathbf{x} \int\_c^d e^{-y} \, \mathbf{d}y \\ &= \frac{1}{(c-d)^2} \left( \frac{c+1}{e^c} - \frac{d+1}{e^d} \right) \cdot \left( \frac{1}{e^c} - \frac{1}{e^d} \right) \\ &= \frac{1}{(d-c)^2} \frac{(ce^d - de^c) + (e^d - e^c)}{e^{(c+d)}} \cdot \frac{e^d - e^c}{e^{(c+d)}} \\ &\leq G\left(\frac{d+c}{2}, \frac{d+c}{2}\right) = \frac{c+d}{2} \frac{1}{e^{(d+c)}}, \end{split}$$

which is

$$\frac{ce^d - de^c}{e^d - e^c} + 1 \le \frac{d+c}{2} e^{(d+c)} \left(\frac{d-c}{e^d - e^c}\right)^2.$$

For the rest, we only need to prove that

$$e^{\left(c+d\right)} \left(\frac{d-c}{e^d - e^c}\right)^2 \le 1. \tag{13}$$

We write $e^d = u$ and $e^c = v$; then, the above inequality is equivalent to the well-known logarithmic-geometric mean inequality

$$L(v, u) = \frac{v - u}{\log v - \log u} \ge \sqrt{vu} = B(v, u).$$
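The logarithmic-geometric mean inequality closing the proof can be checked on concrete pairs, including pairs of the form $(e^d, e^c)$ arising from the substitution above; the sample values are assumptions for illustration, not from the paper.

```python
# Quick numerical check (sample pairs assumed, not from the paper) of the
# logarithmic-geometric mean inequality L(v, u) >= B(v, u).
from math import log, sqrt, exp

def L(v, u):
    return (v - u) / (log(v) - log(u))

def B(v, u):
    return sqrt(v * u)

# pairs arising as (e^d, e^c) in the substitution above, plus one generic pair
pairs = [(exp(2.0), exp(1.0)), (exp(0.3), exp(0.1)), (5.0, 0.5)]
checks = [(L(v, u), B(v, u)) for v, u in pairs]
```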

**Author Contributions:** Conceptualization, H.-N.S., D.-S.W. and C.-R.F.; Methodology, H.-N.S.; Validation, C.-R.F.; Formal analysis, H.-N.S. and D.-S.W.; Investigation, D.-S.W.; Resources, C.-R.F.; Writing—original draft, D.-S.W.; Funding acquisition, C.-R.F. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research received no external funding.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Not applicable.

**Acknowledgments:** The authors sincerely thank Chen Dirong and Chen Jihang for their valuable opinions and suggestions.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


### *Article* **Inequalities and Reverse Inequalities for the Joint** *A***-Numerical Radius of Operators**

**Najla Altwaijry 1,\* ,† , Silvestru Sever Dragomir 2,† and Kais Feki 3,4,†**


**Abstract:** In this paper, we aim to establish several estimates concerning the generalized Euclidean operator radius of *d*-tuples of *A*-bounded linear operators acting on a complex Hilbert space H , which leads to the special case of the well-known *A*-numerical radius for *d* = 1. Here, *A* is a positive operator on H . Some inequalities related to the Euclidean operator *A*-seminorm of *d*-tuples of *A*-bounded operators are proved. In addition, under appropriate conditions, several reverse bounds for the *A*-numerical radius in single and multivariable settings are also stated.

**Keywords:** positive operator; joint *A*-numerical radius; Euclidean operator *A*-seminorm; joint operator *A*-seminorm

**MSC:** 47B65; 47A12; 47A13; 47A30

#### **1. Introduction**

The theory of inequalities remains a very attractive area of research in the last few decades. In particular, the investigation of numerical radius inequalities in Hilbert and semi-Hilbert spaces has occupied an important and central role in the theory of operator inequalities. For further details, interested readers are referred to the very recent book by Bhunia et al. [1].

Throughout the present article, $\mathcal{H}$ stands for a non-trivial complex Hilbert space with inner product $\langle \cdot, \cdot \rangle$ and the corresponding norm $\|\cdot\|$. By $\mathbb{B}(\mathcal{H})$, we denote the $C^\*$-algebra of all bounded linear operators acting on $\mathcal{H}$. The identity operator on $\mathcal{H}$ will be simply written as $I$. Let $T \in \mathbb{B}(\mathcal{H})$. The range and the adjoint of $T$ will be denoted by $\mathcal{R}(T)$ and $T^\*$, respectively. An operator $T \in \mathbb{B}(\mathcal{H})$ is called positive, and we write $T \geq 0$, if $\langle Tx, x \rangle \geq 0$ for all $x \in \mathcal{H}$. If $T \geq 0$, then $T^{1/2}$ denotes the square root of $T$.

If $S$ is a subspace of $\mathcal{H}$, then we denote by $\overline{S}$ the closure of $S$ in the norm topology of $\mathcal{H}$. Let $\mathcal{C}$ be a closed subspace of $\mathcal{H}$. We denote by $P\_{\mathcal{C}}$ the orthogonal projection onto $\mathcal{C}$.

For the rest of this work, by an operator, we mean a bounded linear operator acting on $\mathcal{H}$. We also assume that $A \in \mathbb{B}(\mathcal{H})$ is a non-zero, positive operator. Such an $A$ defines the following semi-inner product on $\mathcal{H}$:

$$
\langle x, y \rangle\_A = \langle Ax, y \rangle = \langle A^{1/2}x, A^{1/2}y \rangle
$$

for all $x, y \in \mathcal{H}$. The seminorm on $\mathcal{H}$ induced by $\langle \cdot, \cdot \rangle\_A$ is given by $\|x\|\_A = \|A^{1/2}x\|$ for every $x \in \mathcal{H}$. Hence, we see that the above seminorm is a norm on $\mathcal{H}$ if and only if $A$ is a

**Citation:** Altwaijry, N.; Dragomir, S.S.; Feki, K. Inequalities and Reverse Inequalities for the Joint *A*-Numerical Radius of Operators. *Axioms* **2023**, *12*, 316. https:// doi.org/10.3390/axioms12030316

Academic Editors: Wei-Shih Du, Ravi P. Agarwal, Erdal Karapinar, Marko Kosti´c and Jian Cao

Received: 25 February 2023 Revised: 17 March 2023 Accepted: 18 March 2023 Published: 22 March 2023

**Copyright:** © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

one-to-one operator. Furthermore, one can prove that the semi-Hilbert space $(\mathcal{H}, \|\cdot\|\_A)$ is a complete space if and only if $\mathcal{R}(A) = \overline{\mathcal{R}(A)}$. The $A$-unit sphere of $\mathcal{H}$ is defined as

$$\mathbb{S}\_A^1 = \{ y \in \mathcal{H} \; ; \; \|y\|\_A = 1 \}.$$

We refer the reader to the following list of recent works on the theory of semi-Hilbert spaces [1–6].
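The semi-inner product and seminorm defined above are easy to realize in finite dimensions. The sketch below (the matrix $A$ and the vectors are assumed sample data, not from the paper) checks Hermitian symmetry and the Cauchy-Schwarz inequality for $\langle \cdot, \cdot \rangle\_A$.

```python
# A minimal numerical sketch (assumed example, not from the paper) of the
# semi-inner product <x, y>_A = <Ax, y> and the seminorm ||x||_A induced by
# a positive definite matrix A on C^2.
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 2.0]])   # positive definite, so ||.||_A is a norm

def inner_A(x, y):
    # <x, y>_A = <Ax, y>; np.vdot conjugates its first argument,
    # so <u, v> (linear in the first slot) is np.vdot(v, u)
    return np.vdot(y, A @ x)

def norm_A(x):
    return np.sqrt(inner_A(x, x).real)

x = np.array([1.0 + 1j, -1.0])
y = np.array([0.5, 2.0 - 1j])
```

For a merely positive semidefinite $A$, the same code gives a genuine seminorm that may vanish on non-zero vectors.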

Let *T* ∈ B(H ). We recall from [7] that an operator *R* ∈ B(H ) is called an *A*-adjoint of *T* if the equality

$$
\langle Ty, z \rangle\_A = \langle y, Rz \rangle\_A
$$

holds for all $y, z \in \mathcal{H}$, that is, $AR = T^\*A$. In general, the existence and the uniqueness of an $A$-adjoint of an arbitrary bounded operator $T$ are not guaranteed. By using a famous theorem due to Douglas [8], we see that the sets of all operators that admit $A$-adjoint and $A^{1/2}$-adjoint operators are, respectively, given by

$$\mathbb{B}\_A(\mathcal{H}) = \{ S \in \mathbb{B}(\mathcal{H}) \; ; \; \mathcal{R}(S^\*A) \subseteq \mathcal{R}(A) \}$$

and

$$\mathbb{B}\_{A^{1/2}}(\mathcal{H}) = \left\{ S \in \mathbb{B}(\mathcal{H}) \; ; \; \exists\, c > 0 \; \text{ such that } \; \|Sx\|\_{A} \le c \|x\|\_{A}, \; \forall x \in \mathcal{H} \right\}.$$

When an operator *S* belongs to B*A*1/2 (H ), we say that *S* is *A*-bounded. It is not difficult to check that B*A*(H ) and B*A*1/2 (H ) represent two subalgebras of B(H ). Moreover, the following inclusions

$$\mathbb{B}\_A(\mathcal{H}) \subseteq \mathbb{B}\_{A^{1/2}}(\mathcal{H}) \subseteq \mathbb{B}(\mathcal{H})$$

hold and are, in general, proper. For more details, we refer to [7,9–11] and the references therein. We recall now that an operator $S \in \mathbb{B}(\mathcal{H})$ is called $A$-self-adjoint if $AS$ is self-adjoint. Clearly, the fact that $S$ is $A$-self-adjoint implies that $S \in \mathbb{B}\_A(\mathcal{H})$. Furthermore, we say that an operator $S$ is $A$-positive (and we write $S \geq\_A 0$) if $AS \geq 0$. Obviously, $A$-positive operators are $A$-self-adjoint. For $S \in \mathbb{B}\_{A^{1/2}}(\mathcal{H})$, the operator $A$-seminorm and the $A$-numerical radius of $S$ are given, respectively, by

$$\|\mathbb{S}\|\_{A} = \sup\_{\mathbf{x} \in \mathbb{S}\_{A}^{1}} \|\mathbb{S}\mathbf{x}\|\_{A} \quad \text{and} \quad \omega\_{A}(\mathbf{S}) = \sup\_{\mathbf{x} \in \mathbb{S}\_{A}^{1}} |\langle \mathbb{S}\mathbf{x}, \mathbf{x} \rangle\_{A}|. \tag{1}$$

The quantities in (1) are also intensively studied when *A* = *I*, and the reader is referred to [12–22] as a recent list of references treating the numerical radius and operator norm of operators on complex Hilbert spaces.
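In the classical case $A = I$, both quantities in (1) are computable for matrices: the operator norm is the largest singular value, and the numerical radius admits the standard characterization $\omega(T) = \max\_{\theta} \lambda\_{\max}\big((e^{i\theta}T + e^{-i\theta}T^\*)/2\big)$. The sketch below (the test matrix is an assumed example, not from the paper) checks the well-known bounds $\|T\|/2 \le \omega(T) \le \|T\|$.

```python
# Numerical sketch (assumed example, not from the paper) of the numerical
# radius for A = I, via w(T) = max over theta of the largest eigenvalue of
# Re(e^{i theta} T) = (e^{i theta} T + e^{-i theta} T^*)/2.
import numpy as np

def numerical_radius(T, n_angles=720):
    w = 0.0
    for theta in np.linspace(0.0, 2 * np.pi, n_angles, endpoint=False):
        R = (np.exp(1j * theta) * T + np.exp(-1j * theta) * T.conj().T) / 2
        w = max(w, np.linalg.eigvalsh(R).max())
    return w

T = np.array([[0.0, 1.0], [0.0, 0.0]])  # nilpotent shift: w(T) = 1/2, ||T|| = 1
op_norm = np.linalg.norm(T, 2)
w = numerical_radius(T)
```

The nilpotent shift attains the extreme case $\omega(T) = \|T\|/2$ of the lower bound.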

If $S \in \mathbb{B}\_A(\mathcal{H})$, then by the Douglas theorem [8], there exists a unique solution, denoted by $S^{\dagger\_A}$, of the problem: $AX = S^\*A$ and $\mathcal{R}(X) \subseteq \overline{\mathcal{R}(A)}$. We emphasize here that if $S \in \mathbb{B}\_A(\mathcal{H})$, then $S^{\dagger\_A} \in \mathbb{B}\_A(\mathcal{H})$ and $(S^{\dagger\_A})^{\dagger\_A} = P\_{\overline{\mathcal{R}(A)}} S P\_{\overline{\mathcal{R}(A)}}$.

Now, let $\mathcal{T} = (T\_1, \dots, T\_d) \in \mathbb{B}\_{A^{1/2}}(\mathcal{H})^d$ be a $d$-tuple of operators. According to [23], the following two quantities

$$\omega\_A(\mathcal{T}) := \sup\_{y \in \mathbb{S}^1\_A} \sqrt{\sum\_{k=1}^d |\langle T\_k y, y \rangle\_A|^2}$$

and

$$\|\mathcal{T}\|\_{A} := \sup\_{y \in \mathbb{S}^1\_A} \sqrt{\sum\_{k=1}^d \|T\_k y\|\_{A}^2}$$

generalize the notions in (1) and define two equivalent norms on B*A*1/2 (H ) *d* . Namely, we have

$$\frac{1}{2\sqrt{d}}\|\mathcal{T}\|\_A \le \omega\_A(\mathcal{T}) \le \|\mathcal{T}\|\_{A}, \tag{2}$$

for every operator tuple $\mathcal{T} = (T\_1, \dots, T\_d) \in \mathbb{B}\_{A^{1/2}}(\mathcal{H})^d$. Note that $\omega\_A(\mathcal{T})$ and $\|\mathcal{T}\|\_A$ are called the joint $A$-numerical radius and joint operator $A$-seminorm of $\mathcal{T}$, respectively. The above two quantities have been investigated by several authors when $A = I$ (see for instance [24–27]). Another joint $A$-seminorm of $A$-bounded operators has been recently introduced [28]. Namely, the Euclidean $A$-seminorm of an operator tuple $\mathcal{T} = (T\_1, \dots, T\_d) \in \mathbb{B}\_{A^{1/2}}(\mathcal{H})^d$ is given by

$$\|\mathcal{T}\|\_{e,A} = \sup\_{(\nu\_1,\dots,\nu\_d)\in\overline{\mathbb{B}}\_d} \|\nu\_1 T\_1 + \cdots + \nu\_d T\_d\|\_{A}, \tag{3}$$

where $\overline{\mathbb{B}}\_d$ denotes the closed unit ball of $\mathbb{C}^d$, i.e.,

$$\overline{\mathbb{B}}\_d := \left\{ \nu = (\nu\_1, \dots, \nu\_d) \in \mathbb{C}^d \; ; \; \|\nu\|\_2^2 := \sum\_{k=1}^d |\nu\_k|^2 \le 1 \right\},$$

where C denotes the set of all complex numbers. It is important to note that the following inequalities,

$$\frac{1}{\sqrt{d}} \|\mathcal{T}\|\_{A} \le \|\mathcal{T}\|\_{e,A} \le \|\mathcal{T}\|\_{A},$$

hold for any $d$-tuple of operators $\mathcal{T} = (T\_1, \dots, T\_d) \in \mathbb{B}\_{A^{1/2}}(\mathcal{H})^d$ (see [28]).

Our aim in the present article is to establish several estimates involving the quantities $\omega\_A(\mathcal{T})$, $\|\mathcal{T}\|\_A$ and $\|\mathcal{T}\|\_{e,A}$, where $\mathcal{T} = (T\_1, \dots, T\_d)$ is a $d$-tuple of $A$-bounded operators. Some inequalities connecting the $A$-numerical radius and operator $A$-seminorm for $A$-bounded operators are established. One main target of this work is to derive, under appropriate conditions, several reverse bounds for $\omega\_A(\mathcal{T})$ in both single and multivariable settings. In particular, for $T \in \mathbb{B}\_{A^{1/2}}(\mathcal{H})$, $\nu \in \mathbb{C}$ and $r > 0$, we will demonstrate under appropriate conditions on $T$, $\nu$ and $r$ that

$$\left\|T\right\|\_{A}^{2} \leq \omega\_{A}^{2}(T) + \frac{2r^{2}}{\left|\nu\right| + \sqrt{\left|\nu\right|^{2} - r^{2}}}\omega\_{A}(T).$$

#### **2. Results**

This section is devoted to presenting our contributions. By $\Re e\, z$, we will denote the real part of any complex number $z \in \mathbb{C}$. In the next theorem, we state our first result.

**Theorem 1.** *Let $T \in \mathbb{B}_{A^{1/2}}(\mathcal{H})$ and $\rho, \sigma \in \mathbb{C}$ with $\rho \ne \sigma$. If*

$$\Re e\langle \rho x - Tx, Tx - \sigma x \rangle_A \ge 0 \quad \text{for any} \quad x \in \mathbb{S}_A^1, \tag{4}$$

*or, equivalently,*

$$\left\|Tx - \frac{\rho + \sigma}{2}x\right\|_A \le \frac{1}{2}|\rho - \sigma| \quad \text{for any} \quad x \in \mathbb{S}_A^1, \tag{5}$$

*then*

$$\|T\|\_{A}^{2} \le \omega\_{A}^{2}(T) + \frac{1}{4}|\rho - \sigma|^{2}. \tag{6}$$

**Proof.** Notice first that the following assertions,

(i) $\Re e\langle u - y, y - z \rangle_A \ge 0$,
(ii) $\left\|y - \frac{z+u}{2}\right\|_A \le \frac{1}{2}\|u - z\|_A$,

are equivalent for every $y, z, u \in \mathcal{H}$. Indeed, one can see that

$$\begin{split} \frac{1}{4}\|u - z\|_A^2 - \left\|y - \frac{z + u}{2}\right\|_A^2 &= \frac{1}{4}\|u - y + y - z\|_A^2 - \frac{1}{4}\|y - z + y - u\|_A^2 \\ &= \frac{1}{4}\left(\|u - y\|_A^2 + 2\Re e\langle u - y, y - z\rangle_A + \|y - z\|_A^2\right) \\ &\quad - \frac{1}{4}\left(\|y - z\|_A^2 + 2\Re e\langle y - z, y - u\rangle_A + \|u - y\|_A^2\right) \\ &= \frac{1}{2}\left(\Re e\langle u - y, y - z\rangle_A - \Re e\langle y - z, y - u\rangle_A\right) \\ &= \frac{1}{2}\left(\Re e\langle u - y, y - z\rangle_A - \Re e\overline{\langle y - u, y - z\rangle_A}\right) \\ &= \Re e\langle u - y, y - z\rangle_A. \end{split}$$

Hence, the equivalence is proved.

By taking *u* = *ρx*, *z* = *σx* and *y* = *Tx* in the statements (i) and (ii), we deduce that (4) and (5) are equivalent.

Now, for $x \in \mathbb{S}_A^1$, we define

$$I_1 := \Re e\left[\left(\rho - \langle Tx, x\rangle_A\right)\left(\overline{\langle Tx, x\rangle_A} - \overline{\sigma}\right)\right]$$

and

$$I\_2 := \mathfrak{Re}\langle \rho \mathfrak{x} - T\mathfrak{x}, T\mathfrak{x} - \sigma \mathfrak{x} \rangle\_A.$$

Then,

$$I_1 = \Re e\left[\rho\overline{\langle Tx, x\rangle_A} + \overline{\sigma}\langle Tx, x\rangle_A\right] - \left|\langle Tx, x\rangle_A\right|^2 - \Re e(\rho\overline{\sigma})$$

and

$$I\_2 = \Re e \left[ \rho \overline{\langle T\mathfrak{x}, \mathfrak{x} \rangle\_A} + \overline{\sigma} \langle T\mathfrak{x}, \mathfrak{x} \rangle\_A \right] - \left\| T\mathfrak{x} \right\|\_A^2 - \Re e(\rho \overline{\sigma}).$$

This gives

$$I_1 - I_2 = \|Tx\|_A^2 - \left|\langle Tx, x\rangle_A\right|^2$$

for any $x \in \mathbb{S}_A^1$ and $\sigma, \rho \in \mathbb{C}$. This is an interesting identity in itself as well. If (4) holds, then $I_2 \ge 0$ and thus

$$\|Tx\|_A^2 - \left|\langle Tx, x\rangle_A\right|^2 \le \Re e\left[\left(\rho - \langle Tx, x\rangle_A\right)\left(\overline{\langle Tx, x\rangle_A} - \overline{\sigma}\right)\right]. \tag{7}$$

Furthermore, it can be checked that for every *u*, *v* ∈ C, we have

$$\Re e(u\overline{v}) \le \frac{1}{4}|u+v|^2.$$

By letting

$$u := \rho - \langle Tx, x\rangle_A, \quad v := \langle Tx, x\rangle_A - \sigma$$

in the above elementary inequality, we obtain

$$\Re e\left[\left(\rho - \langle Tx, x\rangle_A\right)\left(\overline{\langle Tx, x\rangle_A} - \overline{\sigma}\right)\right] \le \frac{1}{4}|\rho - \sigma|^2. \tag{8}$$

Making use of the inequalities (7) and (8), we deduce that

$$\|Tx\|\_A^2 \le \left| \langle Tx, x \rangle\_A \right|^2 + \frac{1}{4} |\rho - \sigma|^2 \tag{9}$$

and by taking the supremum over all $x \in \mathbb{S}_A^1$ in (9), we obtain the required result (6).

**Remark 1.** *Let $S \in \mathbb{B}(\mathcal{H})$. We say that $S$ is an A-accretive operator if*

$$\Re e\langle Sx, x\rangle_A \ge 0 \quad \text{for all } x \in \mathcal{H}.$$

*Now, let $T \in \mathbb{B}_A(\mathcal{H})$. If $\theta \ge \mu > 0$ are such that either $\left(T^{\dagger_A} - \mu I\right)(\theta I - T)$ is A-accretive or $\left(T^{\dagger_A} - \mu I\right)(\theta I - T) \ge_A 0$, then by* (6)*, we obtain*

$$\|T\|_A^2 \le \omega_A^2(T) + \frac{1}{4}(\theta - \mu)^2,$$

*which gives*

$$\|T\|\_{A} \le \sqrt{\omega\_A^2(T) + \frac{1}{4}(\theta - \mu)^2}.$$

As an application of Theorem 1, we state the following result.

**Corollary 1.** *Let $\mathcal{T} = (T_1, \dots, T_d) \in \mathbb{B}_{A^{1/2}}(\mathcal{H})^d$ and $\rho, \sigma \in \mathbb{C}$ be such that $\rho \ne \sigma$ and*

$$\left\|T_i x - \frac{\rho + \sigma}{2}x\right\|_A \le \frac{1}{2}|\rho - \sigma|$$

*for any $x \in \mathbb{S}_A^1$ and every $i \in \{1, \dots, d\}$. Then,*

$$\|\mathcal{T}\|_{e,A}^2 \le d\left(\max_{k \in \{1, \dots, d\}} \omega_A^2(T_k) + \frac{1}{4}|\rho - \sigma|^2\right). \tag{10}$$

**Proof.** Let $(\nu_1, \dots, \nu_d) \in \overline{\mathbb{B}}_d$. From Theorem 1, we have

$$\|T\_i\|\_A^2 \le \omega\_A^2(T\_i) + \frac{1}{4}|\rho - \sigma|^2$$

for *i* ∈ {1, . . . , *d*}. This gives,

$$\sum\_{i=1}^{d} |\nu\_i|^2 \|T\_i\|\_A^2 \le \sum\_{i=1}^{d} |\nu\_i|^2 \omega\_A^2(T\_i) + \frac{1}{4} |\rho - \sigma|^2 \sum\_{i=1}^{d} |\nu\_i|^2. \tag{11}$$

By using the triangle and Cauchy–Schwarz inequalities, we have

$$\frac{1}{d} \left\| \sum\_{i=1}^{d} \nu\_i T\_i \right\|\_A^2 \le \frac{1}{d} \left( \sum\_{i=1}^{d} ||\nu\_i T\_i||\_A \right)^2 \le \sum\_{i=1}^{d} |\nu\_i|^2 \|T\_i\|\_A^2. \tag{12}$$

Moreover, since

$$\sum_{i=1}^d |\nu_i|^2 \omega_A^2(T_i) \le \max_{k \in \{1, \dots, d\}} \omega_A^2(T_k) \sum_{i=1}^d |\nu_i|^2,$$

then, by applying (11) and (12), we obtain

$$\frac{1}{d} \left\| \sum\_{i=1}^d \nu\_i T\_i \right\|\_A^2 \le \max\_{k \in \{1, \dots, d\}} \omega\_A^2(T\_k) \sum\_{i=1}^d \left| \nu\_i \right|^2 + \frac{1}{4} \left| \rho - \sigma \right|^2 \sum\_{i=1}^d \left| \nu\_i \right|^2$$

for all $(\nu_1, \dots, \nu_d) \in \overline{\mathbb{B}}_d$.

By taking the supremum over all $(\nu_1, \dots, \nu_d) \in \overline{\mathbb{B}}_d$ in the last inequality and then using the identity in (3), we reach (10) as desired.

An important application of the inequality (9) can be stated as follows.

**Corollary 2.** *Let $\mathcal{T} = (T_1, \dots, T_d) \in \mathbb{B}_{A^{1/2}}(\mathcal{H})^d$ and $\rho_i, \sigma_i \in \mathbb{C}$ with $\rho_i \ne \sigma_i$ for $i \in \{1, \dots, d\}$. Assume that for every $x \in \mathbb{S}_A^1$, we have*

$$\left\|T_i x - \frac{\rho_i + \sigma_i}{2}x\right\|_A \le \frac{1}{2}|\rho_i - \sigma_i|, \quad \forall\, i \in \{1, \dots, d\}. \tag{13}$$

*Then,*

$$\|\mathcal{T}\|_A \le \sqrt{\omega_A^2(\mathcal{T}) + \frac{1}{4}\sum_{i=1}^d |\rho_i - \sigma_i|^2}. \tag{14}$$

**Proof.** Let $x \in \mathbb{S}_A^1$. By applying (9), we obtain

$$\|T_i x\|_A^2 \le \left|\langle T_i x, x\rangle_A\right|^2 + \frac{1}{4}|\rho_i - \sigma_i|^2$$

for *i* ∈ {1, . . . , *d*}.

By summing over *i* = 1, . . . , *d*, we obtain

$$\sum_{i=1}^d \|T_i x\|_A^2 \le \sum_{i=1}^d \left|\langle T_i x, x\rangle_A\right|^2 + \frac{1}{4}\sum_{i=1}^d |\rho_i - \sigma_i|^2.$$

Finally, by taking the supremum over $x \in \mathbb{S}_A^1$, we obtain

$$\|\mathcal{T}\|\_{A}^{2} \le \omega\_{A}^{2}(\mathcal{T}) + \frac{1}{4} \sum\_{i=1}^{d} |\rho\_{i} - \sigma\_{i}|^{2}.$$

This establishes (14).

The following lemma is needed for the sequel.

**Lemma 1** ([29] p. 9)**.** *Let $\sigma, \rho \in \mathbb{C}$ and $\zeta_j \in \mathbb{C}$ be such that*

$$\left|\zeta\_j - \frac{\sigma + \rho}{2}\right| \le \frac{1}{2}|\rho - \sigma|$$

*for all j* ∈ {1, . . . , *d*}*. Then,*

$$d\sum\_{j=1}^{d} \left| \zeta\_j \right|^2 - \left| \sum\_{j=1}^{d} \zeta\_j \right|^2 \le \frac{1}{4} d^2 |\rho - \sigma|^2. \tag{15}$$
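Since Lemma 1 is a purely scalar statement, it lends itself to a quick numerical sanity check. The following Python snippet (an editorial illustration, not part of the original paper) samples points $\zeta_j$ in the disk of centre $\frac{\sigma+\rho}{2}$ and radius $\frac{1}{2}|\rho-\sigma|$ and verifies (15); the sample values of $\rho$, $\sigma$ and $d$ are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
rho, sigma, d = 2.0 + 1.0j, -0.5 + 0.2j, 7
center, radius = (sigma + rho) / 2, abs(rho - sigma) / 2

# Sample zeta_1, ..., zeta_d uniformly in the closed disk |z - center| <= radius,
# i.e. the hypothesis of Lemma 1.
r_samples = radius * np.sqrt(rng.random(d))
angles = 2 * np.pi * rng.random(d)
zeta = center + r_samples * np.exp(1j * angles)

# Inequality (15): d * sum |zeta_j|^2 - |sum zeta_j|^2 <= (1/4) d^2 |rho - sigma|^2.
lhs = d * np.sum(np.abs(zeta) ** 2) - np.abs(zeta.sum()) ** 2
rhs = 0.25 * d**2 * abs(rho - sigma) ** 2
print(lhs <= rhs + 1e-12)  # True
```

Writing $\zeta_j = \frac{\sigma+\rho}{2} + w_j$ with $|w_j| \le \frac{1}{2}|\rho-\sigma|$ reduces the left-hand side to $d\sum_j|w_j|^2 - |\sum_j w_j|^2$, which makes the bound transparent.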

We can now prove the next proposition.

**Proposition 1.** *Let $\mathcal{T} = (T_1, \dots, T_d) \in \mathbb{B}_{A^{1/2}}(\mathcal{H})^d$ and $\rho, \sigma \in \mathbb{C}$ with $\rho \ne \sigma$. Assume that*

$$
\omega\_A \left( T\_j - \frac{\sigma + \rho}{2} I \right) \le \frac{1}{2} |\rho - \sigma| \text{ for any } j \in \{1, \dots, d\}. \tag{16}
$$

*Then,*

$$
\omega\_A^2(\mathcal{T}) \le \frac{1}{d} \omega\_A^2 \left(\sum\_{j=1}^d T\_j\right) + \frac{1}{4} d|\rho - \sigma|^2. \tag{17}
$$

**Proof.** Assume that (16) is valid. Let $x \in \mathbb{S}_A^1$ and take $\zeta_j = \langle T_j x, x\rangle_A$ for all $j \in \{1, \dots, d\}$. Then, we see that

$$\begin{aligned} \left| \zeta\_j - \frac{\sigma + \rho}{2} \right| &= \left| \langle T\_j \mathbf{x}, \mathbf{x} \rangle\_A - \frac{\sigma + \rho}{2} \langle \mathbf{x}, \mathbf{x} \rangle\_A \right| \\ &= \left| \langle \left( T\_j - \frac{\sigma + \rho}{2} I \right) \mathbf{x}, \mathbf{x} \rangle\_A \right| \\ &\leq \sup\_{\mathbf{x} \in \mathbb{S}\_A^1} \left| \langle \left( T\_j - \frac{\sigma + \rho}{2} I \right) \mathbf{x}, \mathbf{x} \rangle\_A \right| \\ &= \omega\_A \left( T\_j - \frac{\sigma + \rho}{2} \right) \leq \frac{1}{2} |\rho - \sigma|. \end{aligned}$$

for any *j* ∈ {1, . . . , *d*}.

By using (15), we obtain

$$\sum\_{j=1}^d \left| \left< T\_j \mathbf{x}, \mathbf{x} \right>\_A \right|^2 \le \frac{1}{d} \left| \left< \sum\_{j=1}^d T\_j \mathbf{x}, \mathbf{x} \right>\_A \right|^2 + \frac{1}{4} d |\rho - \sigma|^2$$

So, by taking the supremum over all $x \in \mathbb{S}_A^1$, we obtain (17) as desired.

We now have the following result.

**Theorem 2.** *Let $T \in \mathbb{B}_{A^{1/2}}(\mathcal{H})$. If $\nu \in \mathbb{C}\setminus\{0\}$ and $r > 0$ are such that*

$$\|T - \nu I\|_A \le r, \tag{18}$$

*then*

$$\|T\|\_{A} \le \omega\_A(T) + \frac{1}{2} \cdot \frac{r^2}{|\nu|}.$$

**Proof.** Let $x \in \mathbb{S}_A^1$. It follows from (18) that

$$\|T\mathfrak{x} - \nu\mathfrak{x}\|\_A \le \|T - \nu I\|\_A \le r.$$

This implies that

$$\|Tx\|_A^2 + |\nu|^2 \le 2\Re e\left[\overline{\nu}\langle Tx, x\rangle_A\right] + r^2 \le 2|\nu|\left|\langle Tx, x\rangle_A\right| + r^2.$$

Taking the supremum over $x \in \mathbb{S}_A^1$ in the last inequality, we obtain

$$\left\|T\right\|\_{A}^{2} + \left|\nu\right|^{2} \le 2\omega\_{A}(T)|\nu| + r^{2}.\tag{19}$$

Moreover, it is clear that

$$2\|T\|\_A|\nu| \le \|T\|\_A^2 + |\nu|^2,\tag{20}$$

thus, by applying (19) and (20), we infer that

$$2\|T\|\_A|\nu| \le 2\omega\_A(T)|\nu| + r^2.$$

So, we immediately obtain the desired result.
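As an editorial illustration (not part of the original paper), Theorem 2 can be checked numerically in the classical case $A = I$, where $\|\cdot\|_A$ is the spectral norm and $\omega_A$ the usual numerical radius; the latter is computed via the standard formula $\omega(T) = \max_\theta \lambda_{\max}\!\big(\tfrac{1}{2}(e^{i\theta}T + (e^{i\theta}T)^*)\big)$. The values of $\nu$, $r$ and the matrix size are arbitrary test choices:

```python
import numpy as np

rng = np.random.default_rng(1)

def op_norm(T):
    # Operator (spectral) norm; this is ||.||_A for A = I.
    return np.linalg.norm(T, 2)

def num_radius(T, grid=2000):
    # omega(T) = max over theta of lambda_max((e^{i theta} T + (e^{i theta} T)^*) / 2).
    w = 0.0
    for t in np.linspace(0.0, 2 * np.pi, grid, endpoint=False):
        M = np.exp(1j * t) * T
        w = max(w, np.linalg.eigvalsh((M + M.conj().T) / 2)[-1])
    return w

# Build T = nu*I + E with ||E|| = r, so that hypothesis (18), ||T - nu I|| <= r, holds.
nu, r, n = 3.0 + 1.0j, 1.5, 4
E = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
E *= r / op_norm(E)
T = nu * np.eye(n) + E

# Conclusion of Theorem 2: ||T|| <= omega(T) + r^2 / (2 |nu|).
lhs = op_norm(T)
rhs = num_radius(T) + r**2 / (2 * abs(nu))
print(lhs <= rhs + 1e-3)  # True
```

The small tolerance only absorbs the discretisation of the $\theta$-grid; the inequality itself is guaranteed by the theorem whenever the hypothesis holds.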

The following corollary is now in order.

**Corollary 3.** *Let $T \in \mathbb{B}_{A^{1/2}}(\mathcal{H})$ and $\alpha, \beta \in \mathbb{C}$ with $\alpha \notin \{-\beta, \beta\}$. Assume that*

$$\Re e\langle \alpha x - Tx, Tx - \beta x\rangle_A \ge 0 \quad \forall\, x \in \mathbb{S}_A^1. \tag{21}$$

*Then,*

$$\|T\|\_{A} \le \omega\_A(T) + \frac{1}{4} \frac{|\alpha - \beta|^2}{|\alpha + \beta|}. \tag{22}$$

**Proof.** According to the proof of Theorem 1, we observe that (21) is equivalent to

$$\left\|Tx - \frac{\alpha + \beta}{2}x\right\|_A \le \frac{1}{2}|\alpha - \beta| \quad \text{for any} \quad x \in \mathbb{S}_A^1, \tag{23}$$

which is, in turn, equivalent to the following operator norm inequality:

$$\left\| T - \frac{\alpha + \beta}{2} I \right\|\_{A} \le \frac{1}{2} |\alpha - \beta|.$$

Now, applying Theorem 2 with $\nu = \frac{\alpha+\beta}{2}$ and $r = \frac{1}{2}|\alpha - \beta|$, we deduce the desired result.

Another sufficient condition under which the inequality (22) holds is presented in terms of $A$-positive operators and reads as follows.

**Corollary 4.** *Let $\alpha, \beta \in \mathbb{C}$ with $\alpha \notin \{-\beta, \beta\}$ and $T \in \mathbb{B}_A(\mathcal{H})$. If*

$$\left(T^{\dagger_A} - \overline{\beta} I\right)\left(\alpha I - T\right) \ge_A 0,$$

*then*

$$\|T\|\_{A} \le \omega\_A(T) + \frac{1}{4} \frac{|\alpha - \beta|^2}{|\alpha + \beta|}.$$

**Corollary 5.** *Suppose that T*, *ν and r are as in Theorem 2. If, in addition,*

$$\left||\nu| - \omega_A(T)\right| \ge \rho, \tag{24}$$

*for some ρ* > 0, *then*

$$(0 \le) \|T\|\_A^2 - \omega\_A^2(T) \le r^2 - \rho^2.$$

**Proof.** From the inequality (19), we see that

$$\begin{aligned} \left\| T \right\|\_{A}^{2} - \omega\_{A}^{2}(T) &\leq r^{2} - \omega\_{A}^{2}(T) + 2\omega\_{A}(T)|\nu| - |\nu|^{2} \\ &= r^{2} - (|\nu| - \omega\_{A}(T))^{2}. \end{aligned}$$

Hence, an application of (24) leads to the desired inequality.

**Remark 2.** *If, in particular, $\|T - \nu I\|_A \le r$ with $|\nu| = \omega_A(T)$, $\nu \in \mathbb{C}$, then $(0 \le)\, \|T\|_A^2 - \omega_A^2(T) \le r^2$.*

Our next result reads as follows.

**Theorem 3.** *Let $\mathcal{T} = (T_1, \dots, T_d) \in \mathbb{B}_{A^{1/2}}(\mathcal{H})^d$ and $\alpha_i, \beta_i \in \mathbb{C}$ with $\alpha_i \notin \{-\beta_i, \beta_i\}$ for $i \in \{1, \dots, d\}$. If*

$$\left\|T_i - \frac{\alpha_i + \beta_i}{2} I\right\|_A \le \frac{1}{2}|\alpha_i - \beta_i| \tag{25}$$

*for i* ∈ {1, . . . , *d*}, *then*

$$\|\mathcal{T}\|_{e,A} \le \left(\sum_{i=1}^d \omega_A^2(T_i)\right)^{\frac{1}{2}} + \frac{1}{4}\left(\sum_{i=1}^d \frac{|\alpha_i - \beta_i|^4}{|\alpha_i + \beta_i|^2}\right)^{\frac{1}{2}} \tag{26}$$

*and*

$$\|\mathcal{T}\|_A \le \omega_A(\mathcal{T}) + \frac{1}{4}\frac{\sum_{i=1}^d |\alpha_i - \beta_i|^2}{\left(\sum_{i=1}^d |\alpha_i + \beta_i|^2\right)^{\frac{1}{2}}}. \tag{27}$$

**Proof.** Using Corollary 3, we have

$$\|T\_i\|\_A \le \omega\_A(T\_i) + \frac{1}{4} \frac{|\alpha\_i - \beta\_i|^2}{|\alpha\_i + \beta\_i|}.$$

for *i* ∈ {1, . . . , *d*}.

Let $(\nu_1, \dots, \nu_d) \in \overline{\mathbb{B}}_d$; multiplying by $|\nu_i|$ and summing, we obtain

$$\sum_{i=1}^d \|\nu_i T_i\|_A \le \sum_{i=1}^d |\nu_i|\,\omega_A(T_i) + \frac{1}{4}\sum_{i=1}^d |\nu_i|\frac{|\alpha_i - \beta_i|^2}{|\alpha_i + \beta_i|}. \tag{28}$$

By the triangle inequality, we have

$$\left\| \sum\_{i=1}^d \nu\_i T\_i \right\|\_A \le \sum\_{i=1}^d \left\| \nu\_i T\_i \right\|\_A$$

while by the Cauchy–Schwarz inequality, we obtain

$$\sum_{i=1}^d |\nu_i|\,\omega_A(T_i) \le \left(\sum_{i=1}^d |\nu_i|^2\right)^{\frac{1}{2}}\left(\sum_{i=1}^d \omega_A^2(T_i)\right)^{\frac{1}{2}} \le \left(\sum_{i=1}^d \omega_A^2(T_i)\right)^{\frac{1}{2}}$$

and

$$\begin{split} \sum\_{i=1}^{d} |\nu\_{i}| \frac{|\alpha\_{i} - \beta\_{i}|^{2}}{|\alpha\_{i} + \beta\_{i}|} &\leq \left(\sum\_{i=1}^{d} |\nu\_{i}|^{2}\right)^{\frac{1}{2}} \left(\sum\_{i=1}^{d} \frac{|\alpha\_{i} - \beta\_{i}|^{4}}{|\alpha\_{i} + \beta\_{i}|^{2}}\right)^{\frac{1}{2}} \\ &\leq \left(\sum\_{i=1}^{d} \frac{|\alpha\_{i} - \beta\_{i}|^{4}}{|\alpha\_{i} + \beta\_{i}|^{2}}\right)^{\frac{1}{2}} .\end{split}$$

From (28), we then obtain

$$\left\| \sum\_{i=1}^d \nu\_i T\_i \right\|\_A \le \left( \sum\_{i=1}^d \omega\_A^2(T\_i) \right)^{\frac{1}{2}} + \frac{1}{4} \left( \sum\_{i=1}^d \frac{|\alpha\_i - \beta\_i|^4}{|\alpha\_i + \beta\_i|^2} \right)^{\frac{1}{2}}$$

for all $(\nu_1, \dots, \nu_d) \in \overline{\mathbb{B}}_d$.

By taking the supremum over $(\nu_1, \dots, \nu_d) \in \overline{\mathbb{B}}_d$ and using the representation (3), we obtain (26).

The inequality (25) is equivalent, for $x \in \mathbb{S}_A^1$, to

$$\|T_i x\|_A^2 - 2\Re e\left[\frac{\overline{\alpha_i + \beta_i}}{2}\langle T_i x, x\rangle_A\right] + \frac{1}{4}|\alpha_i + \beta_i|^2 \le \frac{1}{4}|\alpha_i - \beta_i|^2$$

for *i* ∈ {1, . . . , *d*}. Therefore,

$$\|T_i x\|_A^2 + \frac{1}{4}|\alpha_i + \beta_i|^2 \le \frac{1}{4}|\alpha_i - \beta_i|^2 + 2\Re e\left[\frac{\overline{\alpha_i + \beta_i}}{2}\langle T_i x, x\rangle_A\right] \tag{29}$$

$$\le \frac{1}{4}|\alpha_i - \beta_i|^2 + |\alpha_i + \beta_i|\left|\langle T_i x, x\rangle_A\right|$$

for *i* ∈ {1, . . . , *d*}.

If we sum and apply the Cauchy–Schwarz inequality, we then obtain

$$\begin{split} &\sum\_{i=1}^{d} \|T\_i \mathbf{x}\|\_{A}^{2} + \frac{1}{4} \sum\_{i=1}^{d} |\alpha\_{i} + \beta\_{i}|^{2} \\ &\leq \frac{1}{4} \sum\_{i=1}^{d} |\alpha\_{i} - \beta\_{i}|^{2} + \sum\_{i=1}^{d} |\alpha\_{i} + \beta\_{i}| \left| \left< T\_{i} \mathbf{x}, \mathbf{x} \right>\_{A} \right| \\ &\leq \frac{1}{4} \sum\_{i=1}^{d} |\alpha\_{i} - \beta\_{i}|^{2} + \left( \sum\_{i=1}^{d} |\alpha\_{i} + \beta\_{i}|^{2} \right)^{\frac{1}{2}} \left( \sum\_{i=1}^{d} \left| \left< T\_{i} \mathbf{x}, \mathbf{x} \right>\_{A} \right|^{2} \right)^{\frac{1}{2}}. \end{split}$$

On the other hand, an application of the arithmetic-geometric mean inequality shows that

$$\left(\sum_{i=1}^d \|T_i x\|_A^2\right)^{\frac{1}{2}}\left(\sum_{i=1}^d |\alpha_i + \beta_i|^2\right)^{\frac{1}{2}} \le \sum_{i=1}^d \|T_i x\|_A^2 + \frac{1}{4}\sum_{i=1}^d |\alpha_i + \beta_i|^2.$$

Therefore, we deduce that

$$\begin{aligned} &\left(\sum\_{i=1}^d \|\boldsymbol{T}\_i \boldsymbol{\mathfrak{x}}\|\_A^2\right)^{\frac{1}{2}} \left(\sum\_{i=1}^d |\boldsymbol{\alpha}\_i + \boldsymbol{\beta}\_i|^2\right)^{\frac{1}{2}} \\ &\leq \frac{1}{4} \sum\_{i=1}^d |\boldsymbol{\alpha}\_i - \boldsymbol{\beta}\_i|^2 + \left(\sum\_{i=1}^d |\boldsymbol{\alpha}\_i + \boldsymbol{\beta}\_i|^2\right)^{\frac{1}{2}} \left(\sum\_{i=1}^d \left|\langle \boldsymbol{T}\_i \boldsymbol{\mathfrak{x}}, \boldsymbol{\mathfrak{x}} \rangle\_A\right|^2\right)^{\frac{1}{2}}.\end{aligned}$$

If we take the supremum over all $x \in \mathbb{S}_A^1$, we obtain

$$\|\mathcal{T}\|_A\left(\sum_{i=1}^d |\alpha_i + \beta_i|^2\right)^{\frac{1}{2}} \le \frac{1}{4}\sum_{i=1}^d |\alpha_i - \beta_i|^2 + \left(\sum_{i=1}^d |\alpha_i + \beta_i|^2\right)^{\frac{1}{2}}\omega_A(\mathcal{T}),$$

which gives (27). Hence, the proof is complete.

An immediate application of Theorem 3 is derived in the next corollary.

**Corollary 6.** *Let $\mathcal{T} = (T_1, \dots, T_d) \in \mathbb{B}_{A^{1/2}}(\mathcal{H})^d$ and $\sigma, \rho \in \mathbb{C}$ with $\rho \ne \pm\sigma$. Assume that*

$$\left\|T_i - \frac{\sigma + \rho}{2} I\right\|_A \le \frac{1}{2}|\rho - \sigma| \tag{30}$$

*for i* ∈ {1, . . . , *d*}*. Then,*

$$\|\mathcal{T}\|_{e,A} \le \left(\sum_{i=1}^d \omega_A^2(T_i)\right)^{\frac{1}{2}} + \frac{1}{4}\sqrt{d}\,\frac{|\rho - \sigma|^2}{|\sigma + \rho|}$$

*and*

$$\|\mathcal{T}\|_A \le \omega_A(\mathcal{T}) + \frac{1}{4}\sqrt{d}\,\frac{|\rho - \sigma|^2}{|\sigma + \rho|}.$$

Now, we state in the next lemma a reverse of the Cauchy–Schwarz inequality (see for instance ([29] p. 32) for a more general result).

**Lemma 2.** *Under the same assumptions of Lemma 1, we have*

$$\left(\sum\_{j=1}^{d} \left|\zeta\_{j}\right|^{2}\right)^{\frac{1}{2}} \le \frac{1}{\sqrt{d}} \left(\left|\sum\_{j=1}^{d} \zeta\_{j}\right| + \frac{1}{4}d\frac{\left|\rho-\sigma\right|^{2}}{\left|\rho+\sigma\right|}\right). \tag{31}$$

We state our next result as follows.

**Theorem 4.** *Let $\mathcal{T} = (T_1, \dots, T_d) \in \mathbb{B}_{A^{1/2}}(\mathcal{H})^d$ and $\sigma, \rho \in \mathbb{C}$ with $\rho \ne \pm\sigma$. Assume that*

$$
\omega\_A \left( T\_j - \frac{\sigma + \rho}{2} I \right) \le \frac{1}{2} |\rho - \sigma| \text{ for any } j \in \{1, \dots, d\}. \tag{32}
$$

*Then,*

$$
\omega\_A(\mathcal{T}) \le \frac{1}{\sqrt{d}} \omega\_A\left(\sum\_{j=1}^d T\_j\right) + \frac{1}{4} \sqrt{d} \frac{|\rho - \sigma|^2}{|\rho + \sigma|}.
$$

**Proof.** Let $x \in \mathbb{S}_A^1$ and $\mathcal{T} = (T_1, \dots, T_d) \in \mathbb{B}_{A^{1/2}}(\mathcal{H})^d$ with the property (32). By letting $\zeta_j = \langle T_j x, x\rangle_A$ and then proceeding as in the proof of Proposition 1, we see that

$$\left|\zeta_j - \frac{\sigma + \rho}{2}\right| \le \omega_A\left(T_j - \frac{\sigma + \rho}{2}I\right) \le \frac{1}{2}|\rho - \sigma|,$$

for any *j* ∈ {1, . . . , *d*}. So, by employing (31), we obtain

$$\begin{aligned} \left(\sum\_{j=1}^{d} \left| \left< T\_j \mathbf{x}, \mathbf{x} \right>\_A \right|^2 \right)^{\frac{1}{2}} &\leq \frac{1}{\sqrt{d}} \left( \left| \sum\_{j=1}^{d} \left< T\_j \mathbf{x}, \mathbf{x} \right>\_A \right| + \frac{1}{4} d \frac{\left| \rho - \sigma \right|^2}{\left| \rho + \sigma \right|} \right) \\ &= \frac{1}{\sqrt{d}} \left( \left| \left< \sum\_{j=1}^{d} T\_j \mathbf{x}, \mathbf{x} \right>\_A \right| + \frac{1}{4} d \frac{\left| \rho - \sigma \right|^2}{\left| \rho + \sigma \right|} \right) \end{aligned}$$

for every $x \in \mathbb{S}_A^1$. By taking the supremum over all $x \in \mathbb{S}_A^1$ in the last inequality, we reach the desired result.

**Remark 3.** *Since $\omega_A(T) \le \|T\|_A$, condition* (30) *implies* (32)*.*

Now, we aim to establish several reverse inequalities for the *A*-numerical radius of operators acting on semi-Hilbert spaces in both single and multivariable settings under some boundedness conditions for the operators. Our first new result in this context may be stated as follows.

**Theorem 5.** *Let $T \in \mathbb{B}_{A^{1/2}}(\mathcal{H})$ be such that $AT \ne 0$. If $\nu \in \mathbb{C}\setminus\{0\}$ and $r > 0$ are such that $|\nu| > r$ and*

$$\|T - \nu I\|_A \le r,$$

*then*

$$\sqrt{1 - \frac{r^2}{|\nu|^2}} \le \frac{\omega_A(T)}{\|T\|_A} \quad (\le 1). \tag{33}$$

**Proof.** By (19), we have

$$\|T\|\_A^2 + |\nu|^2 - r^2 \le 2|\nu|\omega\_A(T).$$

Dividing by $\sqrt{|\nu|^2 - r^2} > 0$, we obtain

$$\frac{\|T\|_A^2}{\sqrt{|\nu|^2 - r^2}} + \sqrt{|\nu|^2 - r^2} \le \frac{2|\nu|\,\omega_A(T)}{\sqrt{|\nu|^2 - r^2}}. \tag{34}$$

Further, it is easy to verify that

$$2\|T\|\_{A} \le \frac{\|T\|\_{A}^{2}}{\sqrt{|\nu|^{2} - r^{2}}} + \sqrt{|\nu|^{2} - r^{2}}.$$

So, by using (34), we deduce

$$\|T\|_A \le \frac{|\nu|\,\omega_A(T)}{\sqrt{|\nu|^2 - r^2}},$$

which is immediately equivalent to (33).

**Remark 4.** *(1) Squaring the inequality* (33)*, we obtain the following inequality:*

$$(0 \le) \|T\|\_A^2 - \omega\_A^2(T) \le \frac{r^2}{\left|\nu\right|^2} \|T\|\_A^2.$$

*(2) For every operator $T \in \mathbb{B}_{A^{1/2}}(\mathcal{H})$, we have the relation $\omega_A(T) \ge \frac{1}{2}\|T\|_A$ (see [23]). Inequality* (33) *would produce an improvement of this classical fact only in the case when*

$$\frac{1}{2} \le \left(1 - \frac{r^2}{|\nu|^2}\right)^{\frac{1}{2}},$$

*which is, in turn, equivalent to $\frac{r}{|\nu|} \le \frac{\sqrt{3}}{2}$.*

The next corollary holds.

**Corollary 7.** *Let $\alpha, \beta \in \mathbb{C}$ with $\Re e(\alpha\overline{\beta}) > 0$. Additionally, let $T \in \mathbb{B}_{A^{1/2}}(\mathcal{H})$ be such that $AT \ne 0$. Assume that either* (21) *or* (23) *holds. Then, we have*

$$\frac{2\sqrt{\Re e(\alpha\overline{\beta})}}{|\alpha + \beta|} \le \frac{\omega_A(T)}{\|T\|_A}\ (\le 1) \tag{35}$$

*and*

$$(0 \le) \|T\|\_A^2 - \omega\_A^2(T) \le \left|\frac{\alpha - \beta}{\alpha + \beta}\right|^2 \|T\|\_A^2$$

**Proof.** If we consider $\nu = \frac{\alpha+\beta}{2}$ and $r = \frac{1}{2}|\alpha - \beta|$, then

$$|\nu|^2 - r^2 = \left|\frac{\alpha + \beta}{2}\right|^2 - \left|\frac{\alpha - \beta}{2}\right|^2 = \Re e(\alpha\overline{\beta}) > 0.$$

Now, by applying Theorem 5, we deduce the desired result.

**Remark 5.** *If $|\alpha - \beta| \le \frac{\sqrt{3}}{2}|\alpha + \beta|$ and $\Re e(\alpha\overline{\beta}) > 0$, then* (35) *is a refinement of the inequality $\omega_A(T) \ge \frac{1}{2}\|T\|_A$.*

**Corollary 8.** *Let $\alpha, \beta \in \mathbb{C}$ with $\Re e(\alpha\overline{\beta}) > 0$. Additionally, let $\mathcal{T} = (T_1, \dots, T_d) \in \mathbb{B}_{A^{1/2}}(\mathcal{H})^d$ be such that the condition*

$$\left\|T_i - \frac{\alpha + \beta}{2} I\right\|_A \le \frac{1}{2}|\alpha - \beta| \tag{36}$$

*is true for i* ∈ {1, . . . , *d*}*. Then,*

$$\|\mathcal{T}\|_{e,A} \le \frac{|\alpha + \beta|}{2\sqrt{\Re e(\alpha\overline{\beta})}}\left(\sum_{i=1}^d \omega_A^2(T_i)\right)^{\frac{1}{2}}. \tag{37}$$

**Proof.** Notice, first, that since (36) holds, then we infer that

$$\left\|T_i x - \frac{\alpha + \beta}{2}x\right\|_A \le \frac{1}{2}|\alpha - \beta|$$

for any $x \in \mathbb{S}_A^1$ and all $i \in \{1, \dots, d\}$. Therefore, it follows from (35) that

$$\|T_i\|_A \le \frac{|\alpha + \beta|}{2\sqrt{\Re e(\alpha\overline{\beta})}}\,\omega_A(T_i)$$

for *i* ∈ {1, . . . , *d*}.

Let $(\nu_1, \dots, \nu_d) \in \overline{\mathbb{B}}_d$; multiplying by $|\nu_i|$ and summing, we obtain

$$\sum\_{i=1}^d \|\boldsymbol{\nu}\_i T\_i\|\_A \le \frac{|\boldsymbol{\alpha} + \boldsymbol{\beta}|}{2\sqrt{\mathfrak{Re}(\boldsymbol{\alpha}\bar{\boldsymbol{\beta}})}} \sum\_{i=1}^d |\boldsymbol{\nu}\_i| \boldsymbol{\omega}\_A(T\_i).$$

Therefore, we see that

$$\begin{aligned} \left\|\sum_{i=1}^d \nu_i T_i\right\|_A &\le \sum_{i=1}^d \|\nu_i T_i\|_A \\ &\le \frac{|\alpha + \beta|}{2\sqrt{\Re e(\alpha\overline{\beta})}}\sum_{i=1}^d |\nu_i|\,\omega_A(T_i) \\ &\le \frac{|\alpha + \beta|}{2\sqrt{\Re e(\alpha\overline{\beta})}}\left(\sum_{i=1}^d |\nu_i|^2\right)^{\frac{1}{2}}\left(\sum_{i=1}^d \omega_A^2(T_i)\right)^{\frac{1}{2}}. \end{aligned}$$

By taking the supremum over $(\nu_1, \dots, \nu_d) \in \overline{\mathbb{B}}_d$ and using the representation (3), we obtain (37).

In the next result, we prove under appropriate conditions a new relation connecting the joint operator $A$-seminorm $\|\cdot\|_A$ and the joint $A$-numerical radius $\omega_A(\cdot)$.

**Proposition 2.** *Let $\alpha_i, \beta_i \in \mathbb{C}$ with $\Re e(\alpha_i\overline{\beta}_i) > 0$ for all $i \in \{1, \dots, d\}$. Additionally, let $\mathcal{T} = (T_1, \dots, T_d) \in \mathbb{B}_{A^{1/2}}(\mathcal{H})^d$ be such that* (25) *is valid for $i \in \{1, \dots, d\}$. Then,*

$$\|\mathcal{T}\|_A \le \frac{1}{2}\frac{\left(\sum_{i=1}^d |\alpha_i + \beta_i|^2\right)^{\frac{1}{2}}}{\left(\sum_{i=1}^d \Re e(\alpha_i\overline{\beta}_i)\right)^{\frac{1}{2}}}\,\omega_A(\mathcal{T}). \tag{38}$$

**Proof.** From (29), we obtain

$$\|T_i x\|_A^2 + \frac{1}{4}|\alpha_i + \beta_i|^2 - \frac{1}{4}|\alpha_i - \beta_i|^2 \le |\alpha_i + \beta_i|\left|\langle T_i x, x\rangle_A\right|$$

for *i* ∈ {1, . . . , *d*}. This is equivalent to

$$\|T_i x\|_A^2 + \Re e(\alpha_i\overline{\beta}_i) \le |\alpha_i + \beta_i|\left|\langle T_i x, x\rangle_A\right|$$

for *i* ∈ {1, . . . , *d*}.

If we sum and then apply the Cauchy–Schwarz inequality, we then obtain

$$\begin{split} \sum_{i=1}^d \|T_i x\|_A^2 + \sum_{i=1}^d \Re e(\alpha_i\overline{\beta}_i) &\le \sum_{i=1}^d |\alpha_i + \beta_i|\left|\langle T_i x, x\rangle_A\right| \\ &\le \left(\sum_{i=1}^d |\alpha_i + \beta_i|^2\right)^{\frac{1}{2}}\left(\sum_{i=1}^d \left|\langle T_i x, x\rangle_A\right|^2\right)^{\frac{1}{2}}. \end{split}$$

By applying the famous arithmetic–geometric mean inequality, we observe that

$$2\left(\sum_{i=1}^d \|T_i x\|_A^2\right)^{\frac{1}{2}}\left(\sum_{i=1}^d \Re e(\alpha_i\overline{\beta}_i)\right)^{\frac{1}{2}} \le \sum_{i=1}^d \|T_i x\|_A^2 + \sum_{i=1}^d \Re e(\alpha_i\overline{\beta}_i).$$

Therefore,

$$\left(\sum\_{i=1}^d \|T\_i \mathbf{x}\|\_A^2\right)^{\frac{1}{2}} \le \frac{1}{2} \frac{\left(\sum\_{i=1}^d |\alpha\_i + \beta\_i|^2\right)^{\frac{1}{2}}}{\left(\sum\_{i=1}^d \Re e\left(\alpha\_i \overline{\beta}\_i\right)\right)^{\frac{1}{2}}} \left(\sum\_{i=1}^d \left|\langle T\_i \mathbf{x}, \mathbf{x} \rangle\_A\right|^2\right)^{\frac{1}{2}}.$$

and by taking the supremum over $x \in \mathbb{S}_A^1$, we obtain (38).

**Remark 6.** *With the assumptions of Corollary 8, we can prove that*

$$\|\mathcal{T}\|\_{A} \leq \frac{1}{2} \frac{|\alpha + \beta|}{\sqrt{\mathfrak{Re}\left(\alpha \overline{\beta}\right)}} \omega\_A(\mathcal{T}).$$

The following lemma plays a fundamental role in the proof of our next proposition.

**Lemma 3** ([29] p. 26)**.** *If $\sigma, \rho \in \mathbb{C}$ and $\zeta_j \in \mathbb{C}$, $j \in \{1, \dots, d\}$, with the property that $\Re e(\rho\overline{\sigma}) > 0$ and*

$$\left|\zeta_j - \frac{\sigma + \rho}{2}\right| \le \frac{1}{2}|\rho - \sigma|$$

*for each j* ∈ {1, . . . , *d*}, *then*

$$\sum_{j=1}^d |\zeta_j|^2 \le \frac{1}{4d}\frac{|\rho + \sigma|^2}{\Re e(\rho\overline{\sigma})}\left|\sum_{j=1}^d \zeta_j\right|^2. \tag{39}$$
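Like Lemma 1, this reverse Cauchy–Schwarz inequality is a scalar statement that can be sanity-checked numerically. The following snippet (an editorial illustration, not part of the original paper, with arbitrary test values of $\rho$, $\sigma$ and $d$) samples $\zeta_j$ in the required disk and verifies (39):

```python
import numpy as np

rng = np.random.default_rng(2)
rho, sigma, d = 3.0, 1.0, 5          # Re(rho * conj(sigma)) = 3 > 0, as Lemma 3 requires
center, radius = (sigma + rho) / 2, abs(rho - sigma) / 2

# zeta_1, ..., zeta_d sampled in the closed disk |z - center| <= radius.
r_samples = radius * np.sqrt(rng.random(d))
angles = 2 * np.pi * rng.random(d)
zeta = center + r_samples * np.exp(1j * angles)

# Inequality (39): sum |zeta_j|^2 <= (1/(4d)) |rho+sigma|^2 / Re(rho conj(sigma)) * |sum zeta_j|^2.
lhs = np.sum(np.abs(zeta) ** 2)
rhs = abs(rho + sigma) ** 2 / (4 * d * (rho * np.conj(sigma)).real) * np.abs(zeta.sum()) ** 2
print(lhs <= rhs + 1e-9)  # True
```

The disk condition rewrites as $|\zeta_j|^2 + \Re e(\rho\overline{\sigma}) \le \Re e\big((\overline{\sigma}+\overline{\rho})\zeta_j\big)$; summing and applying the arithmetic–geometric mean inequality yields (39).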

By proceeding as in the proof of Theorem 4 and using Lemma 3, we state without proof the following result.

**Proposition 3.** *Let $\mathcal{T} = (T_1, \dots, T_d) \in \mathbb{B}_{A^{1/2}}(\mathcal{H})^d$ and $\sigma, \rho \in \mathbb{C}$ with $\Re e(\rho\overline{\sigma}) > 0$. Suppose that* (32) *is satisfied. Then,*

$$\omega_A(\mathcal{T}) \le \frac{|\rho + \sigma|}{2\sqrt{d}\,\sqrt{\Re e(\rho\overline{\sigma})}}\,\omega_A\left(\sum_{j=1}^d T_j\right).$$

The following result also holds.

**Theorem 6.** *Let $T \in \mathbb{B}_{A^{1/2}}(\mathcal{H})$ and $\nu \in \mathbb{C}\setminus\{0\}$, $r > 0$ with $|\nu| > r$. If*

$$\|T - \nu I\|_A \le r, \tag{40}$$

*then*

$$\|T\|\_{A}^{2} \le \omega\_{A}^{2}(T) + \frac{2r^{2}}{\left|\nu\right| + \sqrt{\left|\nu\right|^{2} - r^{2}}} \omega\_{A}(T). \tag{41}$$

**Proof.** Let $x \in \mathbb{S}_A^1$. It follows from (40) that

$$\|Tx - \nu x\|_A \le \|T - \nu I\|_A \le r,$$

which yields that

$$\|Tx\|_A^2 + |\nu|^2 \le 2\Re e\left[\overline{\nu}\langle Tx, x\rangle_A\right] + r^2. \tag{42}$$

By using (42) together with $|\nu| > r$, it can be seen that $|\nu|\left|\langle Tx, x\rangle_A\right| \ne 0$. So, by taking (42) into account and dividing by $|\nu|\left|\langle Tx, x\rangle_A\right|$, we obtain

$$\frac{\|Tx\|_A^2}{|\nu|\left|\langle Tx, x\rangle_A\right|} \le \frac{2\Re e\left[\overline{\nu}\langle Tx, x\rangle_A\right]}{|\nu|\left|\langle Tx, x\rangle_A\right|} + \frac{r^2}{|\nu|\left|\langle Tx, x\rangle_A\right|} - \frac{|\nu|}{\left|\langle Tx, x\rangle_A\right|}.$$

Moreover, we see that

$$\begin{split} &\frac{\|Tx\|_A^2}{|\nu|\left|\langle Tx, x\rangle_A\right|} - \frac{\left|\langle Tx, x\rangle_A\right|}{|\nu|} \\ &\le \frac{2\Re e\left[\overline{\nu}\langle Tx, x\rangle_A\right]}{|\nu|\left|\langle Tx, x\rangle_A\right|} + \frac{r^2}{|\nu|\left|\langle Tx, x\rangle_A\right|} - \frac{\left|\langle Tx, x\rangle_A\right|}{|\nu|} - \frac{|\nu|}{\left|\langle Tx, x\rangle_A\right|} \\ &= \frac{2\Re e\left[\overline{\nu}\langle Tx, x\rangle_A\right]}{|\nu|\left|\langle Tx, x\rangle_A\right|} - \frac{|\nu|^2 - r^2}{|\nu|\left|\langle Tx, x\rangle_A\right|} - \frac{\left|\langle Tx, x\rangle_A\right|}{|\nu|} \\ &= \frac{2\Re e\left[\overline{\nu}\langle Tx, x\rangle_A\right]}{|\nu|\left|\langle Tx, x\rangle_A\right|} - \left(\frac{\sqrt{|\nu|^2 - r^2}}{\sqrt{|\nu|\left|\langle Tx, x\rangle_A\right|}} - \frac{\sqrt{\left|\langle Tx, x\rangle_A\right|}}{\sqrt{|\nu|}}\right)^2 - \frac{2\sqrt{|\nu|^2 - r^2}}{|\nu|}. \end{split}$$

Since

$$\mathfrak{Re}\left[\overline{\nu}\langle T\mathbf{x},\mathbf{x}\rangle_A\right] \le |\nu|\left|\langle T\mathbf{x},\mathbf{x}\rangle_A\right|$$

and

$$\left(\frac{\sqrt{|\nu|^2 - r^2}}{\sqrt{|\nu|\left|\langle T\mathbf{x},\mathbf{x}\rangle_A\right|}} - \frac{\sqrt{\left|\langle T\mathbf{x},\mathbf{x}\rangle_A\right|}}{\sqrt{|\nu|}}\right)^2 \ge 0,$$

we deduce that

$$\frac{\|T\mathbf{x}\|\_A^2}{|\nu|\left|\langle T\mathbf{x},\mathbf{x}\rangle\_A\right|} - \frac{\left|\langle T\mathbf{x},\mathbf{x}\rangle\_A\right|}{|\nu|} \le \frac{2\left(|\nu| - \sqrt{|\nu|^2 - r^2}\right)}{|\nu|}$$

which gives the inequality

$$\|Tx\|\_A^2 \le \left| \left< Tx, x \right>\_A \right|^2 + 2 \left| \left< Tx, x \right>\_A \right| \left( \left| \nu \right| - \sqrt{\left| \nu \right|^2 - r^2} \right). \tag{43}$$
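Inequality (43) lends itself to a quick numerical sanity check. The sketch below takes $A = I$, so that the $A$-inner product reduces to the ordinary one, and samples operators satisfying $\|T - \nu I\| \le r$; the values of $\nu$ and $r$ and the helper name `check_43` are our own illustrative assumptions, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def check_43(dim=4, trials=200):
    """Sample T with ||T - nu*I|| <= r and unit vectors x, then test (43)."""
    nu, r = 1.5 + 0.5j, 1.0          # illustrative center/radius with |nu| > r
    for _ in range(trials):
        # random perturbation rescaled to spectral norm exactly r
        P = rng.standard_normal((dim, dim)) + 1j * rng.standard_normal((dim, dim))
        P *= r / np.linalg.norm(P, 2)
        T = nu * np.eye(dim) + P      # guarantees ||T - nu*I|| <= r
        x = rng.standard_normal(dim) + 1j * rng.standard_normal(dim)
        x /= np.linalg.norm(x)
        lhs = np.linalg.norm(T @ x) ** 2
        w = abs(np.vdot(x, T @ x))    # |<Tx, x>|
        rhs = w ** 2 + 2 * w * (abs(nu) - np.sqrt(abs(nu) ** 2 - r ** 2))
        if lhs > rhs + 1e-10:
            return False
    return True

print(check_43())
```

Every sampled pair satisfies (43), as the derivation above predicts.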


By taking the supremum over $x \in S_A^1$ in (43), we obtain

$$\|T\|_A^2 \leq \omega_A^2(T) + 2\omega_A(T)\left(|\nu| - \sqrt{|\nu|^2 - r^2}\right).\tag{44}$$

So, we immediately obtain (41).

Making use of inequalities (43) and (44), we now establish the next two corollaries as applications of the previous result.

**Corollary 9.** *Let $\rho, \sigma \in \mathbb{C}$ be such that $\rho \neq \sigma$ and $\mathfrak{Re}(\rho\overline{\sigma}) \ge 0$. Additionally, let $T \in \mathbb{B}_{A^{1/2}}(\mathcal{H})$ be such that either (4) or (5) holds. Then:*

$$\left\|\left|T\right|\right\|\_{A}^{2}\leq\omega\_{A}^{2}(T)+\left[\left|\rho+\sigma\right|-2\sqrt{\Re(\rho\overline{\sigma})}\right]\omega\_{A}(T).\tag{45}$$

**Proof.** Set $\nu := \frac{\rho+\sigma}{2}$ and $r := \frac{|\rho-\sigma|}{2}$. Clearly, $|\nu| > r$. Moreover, since (5) holds, so does (40). Thus, the desired result follows by applying (44) and then observing that

$$\left|\nu\right|^2 - r^2 = \left|\frac{\rho + \sigma}{2}\right|^2 - \left|\frac{\rho - \sigma}{2}\right|^2 = \Re(\rho \overline{\sigma}).\tag{46}$$
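The polarization-type identity (46) is easy to confirm numerically for random complex pairs; the sampling below is our own illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Check |nu|^2 - r^2 = Re(rho * conj(sigma)) with nu = (rho+sigma)/2, r = |rho-sigma|/2.
for _ in range(1000):
    rho, sigma = rng.standard_normal(2) + 1j * rng.standard_normal(2)
    nu, r = (rho + sigma) / 2, abs(rho - sigma) / 2
    assert np.isclose(abs(nu) ** 2 - r ** 2, (rho * np.conj(sigma)).real)
print("identity (46) verified")
```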

**Remark 7.** *Assume that $T \in \mathbb{B}_A(\mathcal{H})$. If $\theta \ge \mu > 0$ are such that either $\left(T^{\dagger_A} - \mu I\right)(\theta I - T)$ is A-accretive or*

$$\left(T^{\dagger\_A} - \mu I\right) (\theta I - T) \ge\_A \mathbf{0}$$

*then, by applying* (45)*, we infer that*

$$\|T\|_A^2 \leq \omega_A^2(T) + \left(\sqrt{\theta} - \sqrt{\mu}\right)^2 \omega_A(T).$$

**Corollary 10.** *Let $\mathcal{T} = (T_1, \ldots, T_d) \in \mathbb{B}_{A^{1/2}}(\mathcal{H})^d$ and $\rho_i, \sigma_i \in \mathbb{C}$ with $\rho_i \neq \sigma_i$, $\mathfrak{Re}(\rho_i\overline{\sigma_i}) \ge 0$ for $i \in \{1, \ldots, d\}$. Assume that*

$$\left\| T\_i - \frac{\rho\_i + \sigma\_i}{2} I \right\|\_A \le \frac{1}{2} |\rho\_i - \sigma\_i| \,\tag{47}$$

*for all i* ∈ {1, . . . , *d*}*. Then,*

$$\|\mathcal{T}\|_A^2 \leq \omega_A^2(\mathcal{T}) + \left[\sum_{i=1}^{d}\left(|\rho_i+\sigma_i| - 2\sqrt{\mathfrak{Re}(\rho_i\overline{\sigma_i})}\right)^2\right]^{\frac{1}{2}}\omega_A(\mathcal{T}).$$

**Proof.** Let $x \in S_A^1$. Set $\nu_i := \frac{\rho_i+\sigma_i}{2}$ and $r_i := \frac{|\rho_i-\sigma_i|}{2}$ for all $i \in \{1, \ldots, d\}$. Clearly, we have $|\nu_i| > r_i$ and $\|T_i - \nu_i I\|_A \le r_i$ for every $i$. Thus, an application of (43) shows that

$$\|T_i\mathbf{x}\|_A^2 \le \left|\langle T_i\mathbf{x},\mathbf{x}\rangle_A\right|^2 + 2\left|\langle T_i\mathbf{x},\mathbf{x}\rangle_A\right|\left(|\nu_i| - \sqrt{|\nu_i|^2 - r_i^2}\right).$$

This yields, through (46), that

$$\|T_i\mathbf{x}\|_A^2 \le \left|\langle T_i\mathbf{x},\mathbf{x}\rangle_A\right|^2 + \left(|\rho_i+\sigma_i| - 2\sqrt{\mathfrak{Re}(\rho_i\overline{\sigma_i})}\right)\left|\langle T_i\mathbf{x},\mathbf{x}\rangle_A\right|$$

for *i* ∈ {1, . . . , *d*}.

Summing over $i$ and then applying the Cauchy–Schwarz inequality, we obtain

$$\begin{split} &\sum\_{i=1}^{d} \|T\_{i}\mathbf{x}\|\_{A}^{2} \\ &\leq \sum\_{i=1}^{d} \left| \left< T\_{i}\mathbf{x}, \mathbf{x} \right>\_{A} \right|^{2} + \sum\_{i=1}^{d} \left( |\rho\_{i} + \sigma\_{i}| - 2\sqrt{\mathfrak{Re}(\rho\_{i}\overline{\sigma\_{i}})} \right) \left| \left< T\_{i}\mathbf{x}, \mathbf{x} \right>\_{A} \right| \\ &\leq \sum\_{i=1}^{d} \left| \left< T\_{i}\mathbf{x}, \mathbf{x} \right>\_{A} \right|^{2} + \left[ \sum\_{i=1}^{d} \left( |\rho\_{i} + \sigma\_{i}| - 2\sqrt{\mathfrak{Re}(\rho\_{i}\overline{\sigma\_{i}})} \right)^{2} \right]^{\frac{1}{2}} \left( \sum\_{i=1}^{d} \left| \left< T\_{i}\mathbf{x}, \mathbf{x} \right>\_{A} \right|^{2} \right)^{\frac{1}{2}}. \end{split}$$

By taking the supremum over $x \in S_A^1$ in this inequality, we derive the desired result.

Another application of the inequality (45) provides an upper bound for the Euclidean operator $A$-seminorm of $d$-tuples of operators in $\mathbb{B}_{A^{1/2}}(\mathcal{H})^d$, as stated in the next proposition.

**Proposition 4.** *Let $\mathcal{T} = (T_1, \ldots, T_d) \in \mathbb{B}_{A^{1/2}}(\mathcal{H})^d$. Let also $\rho, \sigma \in \mathbb{C}$ with $\rho \neq \sigma$ and $\mathfrak{Re}(\rho\overline{\sigma}) \ge 0$. Suppose that*

$$\left\| T\_i \mathbf{x} - \frac{\rho + \sigma}{2} \mathbf{x} \right\|\_A \le \frac{1}{2} |\rho - \sigma| \,\tag{48}$$

*for any $x \in S_A^1$ and all $i \in \{1, \ldots, d\}$. Then,*

$$\|\|\mathcal{T}\|\|\_{\varepsilon,A}^2 \le d \max\_{k \in \{1, \dots, d\}} \omega\_A(T\_k) \left\{ \max\_{k \in \{1, \dots, d\}} \omega\_A(T\_k) + \left[ |\rho + \sigma| - 2\sqrt{\Re e(\rho \overline{\sigma})} \right] \right\}.$$

**Proof.** From (45), we see that

$$\|T_i\|_A^2 \leq \omega_A^2(T_i) + \left[|\rho + \sigma| - 2\sqrt{\mathfrak{Re}(\rho\overline{\sigma})}\right]\omega_A(T_i)$$

for *i* ∈ {1, . . . , *d*}.

Let $(\nu_1, \ldots, \nu_d) \in \mathbb{B}_d$. Multiplying by $|\nu_i|^2$ and summing over $i$, we obtain

$$\begin{split} \sum\_{i=1}^{d} |\nu\_{i}|^{2} \|T\_{i}\|\_{A}^{2} &\leq \sum\_{i=1}^{d} |\nu\_{i}|^{2} \omega\_{A}^{2}(T\_{i}) + \left[|\rho + \sigma| - 2\sqrt{\mathfrak{Re}(\rho\overline{\sigma})}\right] \sum\_{i=1}^{d} |\nu\_{i}|^{2} \omega\_{A}(T\_{i}) \\ &\leq \left(\sum\_{i=1}^{d} |\nu\_{i}|^{2}\right) \max\_{k \in \{1, \ldots, d\}} \omega\_{A}^{2}(T\_{k}) \\ &+ \left(\sum\_{i=1}^{d} |\nu\_{i}|^{2}\right) \max\_{k \in \{1, \ldots, d\}} \omega\_{A}(T\_{k}) \left[|\rho + \sigma| - 2\sqrt{\mathfrak{Re}(\rho\overline{\sigma})}\right] \\ &\leq \max\_{k \in \{1, \ldots, d\}} \omega\_{A}^{2}(T\_{k}) + \left[|\rho + \sigma| - 2\sqrt{\mathfrak{Re}(\rho\overline{\sigma})}\right] \max\_{k \in \{1, \ldots, d\}} \omega\_{A}(T\_{k}). \end{split}$$

Moreover, since

$$\frac{1}{d}\left\|\sum_{i=1}^d \nu_i T_i\right\|_A^2 \le \sum_{i=1}^d |\nu_i|^2 \|T_i\|_A^2,$$

it follows that

$$\frac{1}{d}\left\|\sum_{i=1}^d \nu_i T_i\right\|_A^2 \le \max_{k \in \{1, \ldots, d\}}\omega_A^2(T_k) + \left[|\rho + \sigma| - 2\sqrt{\mathfrak{Re}(\rho\overline{\sigma})}\right]\max_{k \in \{1, \ldots, d\}}\omega_A(T_k).$$

By taking the supremum over $(\nu_1, \ldots, \nu_d) \in \mathbb{B}_d$ and using the representation (3), we obtain the desired result.

The next lemma plays a crucial role in establishing our final result in this paper.

**Lemma 4** ([30])**.** *If $\sigma, \rho, \zeta_j \in \mathbb{C}$ are such that $\mathfrak{Re}(\rho\overline{\sigma}) > 0$ and*

$$\left|\zeta\_j - \frac{\sigma + \rho}{2}\right| \le \frac{1}{2}|\rho - \sigma|$$

*for each j* ∈ {1, . . . , *d*}, *then we have*

$$\sum\_{j=1}^d \left| \zeta\_j \right|^2 \le \left( \frac{1}{d} \left| \sum\_{j=1}^d \zeta\_j \right| + |\rho + \sigma| - 2\sqrt{\Re e(\rho \bar{\sigma})} \right) \left| \sum\_{j=1}^d \zeta\_j \right|.$$
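Since Lemma 4 is a purely scalar statement, it can be probed numerically by sampling points $\zeta_j$ in the prescribed disc; the specific $\rho$, $\sigma$ below (with $\mathfrak{Re}(\rho\overline{\sigma}) > 0$) and the sampling scheme are illustrative assumptions of ours.

```python
import numpy as np

rng = np.random.default_rng(2)

def check_lemma4(d=5, trials=500):
    rho, sigma = 3.0 + 1.0j, 1.0 + 0.5j        # Re(rho*conj(sigma)) = 3.5 > 0
    c, r = (rho + sigma) / 2, abs(rho - sigma) / 2
    gap = abs(rho + sigma) - 2 * np.sqrt((rho * np.conj(sigma)).real)
    for _ in range(trials):
        # zeta_j sampled uniformly in the closed disc |z - c| <= r
        zeta = c + r * np.sqrt(rng.random(d)) * np.exp(2j * np.pi * rng.random(d))
        s = abs(zeta.sum())
        if (np.abs(zeta) ** 2).sum() > (s / d + gap) * s + 1e-9:
            return False
    return True

print(check_lemma4())
```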

Now, we are ready to state our final proposition.

**Proposition 5.** *Let $\mathcal{T} = (T_1, \ldots, T_d) \in \mathbb{B}_{A^{1/2}}(\mathcal{H})^d$ and let $\rho, \sigma \in \mathbb{C}$ be such that $\rho \neq \sigma$ and $\mathfrak{Re}(\rho\overline{\sigma}) > 0$. Assume that condition (16) is valid. Then,*

$$
\omega\_A^2(\mathcal{T}) \le \left[ \frac{1}{d} \omega\_A \left( \sum\_{j=1}^d T\_j \right) + |\rho + \sigma| - 2\sqrt{\Re(\rho \bar{\sigma})} \right] \omega\_A \left( \sum\_{j=1}^d T\_j \right).
$$

**Proof.** The proof follows by proceeding as in the proof of Proposition 2 and then taking Lemma 4 into consideration.

#### **3. Conclusions**

In this paper, we established several inequalities involving the generalized Euclidean operator radius of *d*-tuples of *A*-bounded linear operators acting on a complex Hilbert space H. The obtained bounds cover, as a special case, the classical *A*-numerical radius of semi-Hilbert space operators. We also proved some estimates related to the Euclidean operator *A*-seminorm of *d*-tuples of *A*-bounded operators. In addition, we stated, under appropriate conditions, several reverse inequalities for the *A*-numerical radius in the single- and multivariable settings.

These inequalities can be further utilized to provide reverse triangle inequalities for the operator *A*-seminorm and *A*-numerical radius of semi-Hilbert space operators that play an important role in the geometrical structure of the *A*-inner product space under consideration.

Additionally, the techniques and ideas of this article can be useful for future investigations in this area of research. In future papers, we aim to investigate the connections between the joint *A*-numerical radius and the joint operator *A*-seminorm of some special classes of multivariable operators, such as the class of jointly *A*-hyponormal operators in semi-Hilbert spaces.

**Author Contributions:** The work presented here was carried out in collaboration between all authors. All authors contributed equally and significantly to writing this article. All authors have read and agreed to the published version of the manuscript.

**Funding:** Distinguished Scientist Fellowship Program, King Saud University, Riyadh, Saudi Arabia, Researchers Supporting Project number (RSP2023R187).

**Data Availability Statement:** Data is contained within the article.

**Acknowledgments:** The authors wish to express their deepest gratitude to the editor and the anonymous referees for their useful comments. The first author extends her appreciation to the Distinguished Scientist Fellowship Program at King Saud University, Riyadh, Saudi Arabia, for funding this work through Researchers Supporting Project number (RSP2023R187).

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


**Disclaimer/Publisher's Note:** The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

### *Article* **Reinsurance Policy under Interest Force and Bankruptcy Prohibition**

**Yangmin Zhong <sup>1</sup> and Huaping Huang 2,\***


**Abstract:** In this paper, we solve an optimal reinsurance problem in mathematical finance. We assume that the surplus process of the insurance company follows a controlled diffusion process and that a constant interest rate is involved in the financial model. During the whole optimization period, the company can buy a reinsurance contract and choose the reinsurance retention level. Meanwhile, bankruptcy at the terminal time is not allowed. The aim of the optimization problem is to minimize the distance between the terminal wealth and a given goal by controlling the reinsurance proportion. Using stochastic control theory, we derive the Hamilton-Jacobi-Bellman equation for the optimization problem. By adopting a change of variable as well as the dual transformation, we obtain an explicit solution of the value function and the optimal policy. Finally, several numerical examples are presented, from which we identify the main factors that affect the optimal reinsurance policy.

**Keywords:** Hamilton-Jacobi-Bellman equation; stochastic optimal control; dynamic programming principle; dual transformation

**MSC:** 93E20; 91G30

**Citation:** Zhong, Y.; Huang, H. Reinsurance Policy under Interest Force and Bankruptcy Prohibition. *Axioms* **2023**, *12*, 378. https:// doi.org/10.3390/axioms12040378

Academic Editor: Behzad Djafari-Rouhani

Received: 22 March 2023 Revised: 12 April 2023 Accepted: 13 April 2023 Published: 16 April 2023

**Copyright:** © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

**1. Introduction**

The optimal reinsurance problem has a long history in actuarial science. An insurance company has the option of transferring part of its premiums to a reinsurance company to reduce the payment of large claims. In the academic field, regarding the reinsurance problem, Ref. [1] studied the optimal dividend payout problem of the insurer by controlling the dividend as well as the risk exposure. Ref. [2] explored the optimal controlled reinsurance proportion and investment to maximize the expected utility at the terminal time, in which the surplus is modelled by a perturbed classical risk process. Ref. [3] dealt with non-proportional reinsurance schemes to minimize the ruin probability when the surplus follows a continuous diffusion model. For more past developments in reinsurance optimization, we refer interested readers to the excellent books [4,5].

In our model, we consider an insurance company that aims to reach a given goal at the terminal time. During the whole time period, the company has the choice to buy a reinsurance contract and decide the reinsurance retention level. Ref. [6] explored the optimal reinsurance problem while aiming to minimize the distance between the terminal wealth and a given goal. Unlike [6], besides a given goal, we also set up a bankruptcy prohibition for the insurance company, which means that the terminal wealth is not allowed to drop below 0. There are several works that concern ruin prohibition and control optimization in the financial modelling area. As an example, Ref. [7] studied a mean-variance portfolio selection problem where the surplus process is not allowed to drop below 0 at any time. Ref. [8] studied optimal reinsurance and investment with bankruptcy prohibition under the mean-variance criterion. Ref. [9] solved the optimal mean-risk portfolio problem aiming to minimize the expected payoff in a complete market.

There is an important element in the financial market, namely the interest rate. The government uses the interest rate as an instrument to steer the economy. In general, the interest rate will usually decrease if the central bank finds that the current economic situation is weak. The capital market is very sensitive to the interest rate: money will gradually flow out of banks toward products with high investment returns or toward consumption, such as houses, cars, and restaurants. Conversely, when there is too much money in the market, which causes inflation, the central bank will raise the interest rate, and money from the stock market, funds, or real estate will slowly flow back to banks. In our model, we assume that the interest rate is a constant; in other words, during the whole optimization phase the economy is steady. There is a rich literature on constant interest rates in actuarial science. As an example, Ref. [10] studied the ruin probability of the compound Poisson model in a finite time horizon under constant interest force. Ref. [11] studied the optimal dividend problem of an insurance company under constant interest force. One can also see [12–15] for more studies on the effect of the interest rate in actuarial science. In our paper, although the interest rate is a constant, mathematical difficulty remains an issue: affected by the interest rate, the target and the ruin prohibition are mathematically expressed as two curved boundaries, which cause the main difficulties in the calculation.

We use stochastic optimal control theory to solve the optimization problem. By applying stochastic control theory, the Hamilton-Jacobi-Bellman (for short, HJB) equation can be derived. By finding an explicit classical solution of the HJB equation, the corresponding optimal strategy and the optimal value function of the optimization problem can also be obtained. As mentioned above, in our model, due to the bankruptcy prohibition and the target at the terminal time, there are three boundary conditions (including two curved boundaries) in the HJB equation, which cause the main difficulty in solving the equation. We adopt a change-of-variable technique to simplify the curved boundary conditions. After the change of variable, the new HJB equation is a fully nonlinear partial differential equation (for short, PDE). To solve such a PDE, the dual transformation technique is used to convert the fully nonlinear PDE into a semilinear PDE. After calculating an explicit solution to the semilinear PDE, we can derive an explicit expression for the optimal policy.

The rest of the paper is organized as follows. Section 2 introduces the surplus model and the optimization problem of the insurance company, and then presents the HJB equation of the optimization problem. Section 3 applies the change-of-variable technique to simplify the original problem; we derive a new optimization problem and the corresponding HJB equation. In Section 4, the dual transformation is used and an explicit solution of the HJB equation is given. A verification theorem is presented to prove that the solution to the HJB equation is indeed the value function of the optimization problem. Section 5 presents several numerical examples to depict the impacts of different parameters on the optimal strategy.

#### **2. The Model**

Denote by $(\Omega, \mathcal{F}, P)$ a complete probability space with filtration $\{\mathcal{F}_t\}_{t \ge 0}$. In reality, the insurance company receives premiums from individuals and then undertakes possible losses for the insured. Following the financial mathematical model of [16], we assume that the aggregate cumulative claims up to time $t$ are written as follows:

$$C_t = mt - nB_t,$$

where $m > 0$ represents the expected loss in a unit time; $n > 0$ is the diffusion volatility rate; and $B_t$ is a standard Brownian motion adapted to the filtration $\{\mathcal{F}_t\}$. We assume that the insurance company sets the premium rate as $(1 + \xi)m$, where $\xi > 0$ is a constant representing the safety loading of the insurance contract. Denote by $i$ the interest rate of the financial market, where $i > 0$ is a positive constant. Then, the dynamics of the surplus of the insurance company can be mathematically expressed as follows:

$$\mathbf{d}\mathbf{Y}\_t = i\mathbf{Y}\_t\mathbf{d}t + (1+\xi)m\mathbf{d}t - (m\mathbf{d}t - n\mathbf{d}B\_t).$$

Now, we add the feature of reinsurance to our model. We assume that the insurance company transfers a proportion of claims to the reinsurance company; at the same time, part of the premium is also transferred to the reinsurance company. Mathematically speaking, at time $t$, the retention level of the insurance company is denoted by $q_t$, where $q_t \ge 0$; the remaining proportion $1 - q_t$ of claims is paid by the reinsurance company. Meanwhile, the premium at rate $(1 + \varrho)(1 - q_t)m$ is transferred to the reinsurance company, where $\varrho > 0$ is the safety loading of the reinsurance company. We assume that $\varrho > \xi$, which means that the reinsurance is non-cheap. Denote by $Y(s; t, y, q(\cdot))$ the surplus process of the insurance company with initial data $(t, y)$ and strategy $q(\cdot)$.

In what follows, denote $Y_s^q := Y(s; t, y, q(\cdot))$ for simplicity when there is no confusion. Then, the surplus process of the insurance company can be rewritten as

$$dY_t^q = iY_t^q\,dt + (\xi - \varrho + \varrho q_t)m\,dt + q_t n\,dB_t. \tag{1}$$
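The controlled surplus dynamics (1) can be simulated directly by the Euler–Maruyama scheme. Note that retaining the proportion $q$ of the claims $C_t = mt - nB_t$ makes the diffusion coefficient $q_t n$. All parameter values below ($i$, $\xi$, $\varrho$, $m$, $n$, and a constant retention level $q$) are illustrative assumptions of ours, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_surplus(y0, q, i=0.05, xi=0.2, vrho=0.4, m=1.0, n=0.5,
                     T=1.0, steps=1000):
    """Euler-Maruyama for dY = i*Y dt + (xi - vrho + vrho*q)*m dt + q*n dB."""
    dt = T / steps
    y = np.empty(steps + 1)
    y[0] = y0
    dB = np.sqrt(dt) * rng.standard_normal(steps)
    for k in range(steps):
        drift = i * y[k] + (xi - vrho + vrho * q) * m
        y[k + 1] = y[k] + drift * dt + q * n * dB[k]
    return y

path = simulate_surplus(y0=2.0, q=0.5)
print(path[0], path[-1])
```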

Let $T > 0$ be a finite time horizon. We assume that there is a non-bankruptcy constraint at the terminal time $T$ for the insurance company. In other words, for any reinsurance strategy $q$, $Y_T^q$ should be non-negative. To satisfy such a condition, at time $t \in [0, T]$, if the surplus is

$$Y_t^q = \frac{(\xi - \varrho)m}{i}\left(e^{i(t-T)} - 1\right),$$

then for any time $s \in [t, T]$, the null strategy $q_s = 0$ should be invoked to make sure that $Y_T^q = 0$. Actually, when $Y_t^q = \frac{(\xi - \varrho)m}{i}(e^{i(t-T)} - 1)$, if there exists a time $s \in [t, T]$ such that $q_s \neq 0$, then there is always a positive probability that $Y_T^q < 0$ due to the Brownian motion in Equation (1).

On the other hand, if there exists a time *t* ∈ [0, *T*] such that the wealth

$$Y_t^q < \frac{(\xi - \varrho)m}{i}\left(e^{i(t-T)} - 1\right),$$

then no matter which strategy is chosen, there is always a positive probability that the terminal wealth $Y_T^q < 0$. Consequently, the restriction of non-bankruptcy means that for any time $t \in [0, T]$, the surplus should satisfy

$$Y_t^q \ge \frac{(\xi - \varrho)m}{i}\left(e^{i(t-T)} - 1\right). \tag{2}$$

Now, we give a formal definition of the set of admissible strategies. For the initial time $t \in [0, T)$ and the initial wealth $y \in \left(\frac{(\xi - \varrho)m}{i}\left(e^{i(t-T)} - 1\right), +\infty\right)$, the set of admissible strategies is denoted by

$$\hat{D}_{t,y} := \left\{ q(\cdot) \in L^2(\Omega \times [t, T]) \,\middle|\, q(\cdot) \text{ is progressively measurable, } q(\cdot) \ge 0,\ \forall s \in [t, T],\ Y(s; t, y, q(\cdot)) \ge \frac{(\xi - \varrho)m}{i}\left(e^{i(s-T)} - 1\right) \right\}. \tag{3}$$

In the model presented in this paper, we assume that the insurance company with a certain scale aims to achieve a given goal *G* for the surplus at the terminal time *T*, where *G* > 0 is a constant. We define the loss function to measure the expected discounted distance between the final wealth and the goal:

$$\tilde{L}(t, y; q(\cdot)) = \mathbb{E}\left[e^{-\varepsilon T}\left(Y_T^q - G\right)^2\right], \tag{4}$$

where ε > 0 represents a discount factor to reflect the time value.

For any initial time $t \in [0, T]$ and initial wealth $y \ge \frac{(\xi - \varrho)m}{i}(e^{i(t-T)} - 1)$, the insurance company aims to minimize the loss function by choosing the optimal reinsurance policy. Now, we analyze the constraints on the surplus in more detail. If the initial wealth is

$$y = Ge^{i(t-T)} + \frac{(\xi - \varrho)m}{i}\left(e^{i(t-T)} - 1\right),$$

where $t$ is the initial time, then the null strategy $q \equiv 0$ will be invoked so that $Y_T^q = G$ and the loss function is minimized with value 0. If the initial wealth

$$y > Ge^{i(t-T)} + \frac{(\xi - \varrho)m}{i}\left(e^{i(t-T)} - 1\right),$$

then this situation is not considered, since it is meaningless to pursue the goal $G$ when the initial value is already large enough. Consequently, combining with Equation (2), we can narrow the domain of the surplus down to

$$\left[ \frac{(\xi - \varrho)m}{i} (\mathbf{e}^{i(t-T)} - \mathbf{1}), \mathbf{G} \mathbf{e}^{i(t-T)} + \frac{(\xi - \varrho)m}{i} (\mathbf{e}^{i(t-T)} - \mathbf{1}) \right].$$

Accordingly, the set of all admissible strategies $\hat{D}_{t,y}$ in (3) can be replaced by

$$\tilde{D}_{t,y} := \left\{ q(\cdot) \in L^2(\Omega \times [t, T]) \,\middle|\, q(\cdot) \text{ is progressively measurable, } q(\cdot) \ge 0,\ \forall s \in [t, T],\ \frac{(\xi - \varrho)m}{i}\left(e^{i(s-T)} - 1\right) \le Y(s; t, y, q(\cdot)) \le Ge^{i(s-T)} + \frac{(\xi - \varrho)m}{i}\left(e^{i(s-T)} - 1\right) \right\}.$$

Now, we define the value function as follows:

$$\tilde{S}(t, y) = \inf\_{q \in \tilde{\mathcal{D}}\_{t, y}} \tilde{L}(t, y; q(\cdot)). \tag{5}$$

In what follows, for simplicity, denote

$$g_0(t) := \frac{(\xi - \varrho)m}{i}\left(e^{i(t-T)} - 1\right), \quad g_1(t) := Ge^{i(t-T)} + \frac{(\xi - \varrho)m}{i}\left(e^{i(t-T)} - 1\right), \quad t \in [0, T].$$
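For concreteness, the two boundaries can be evaluated numerically. The parameter values below are illustrative assumptions of ours, chosen with $\xi < \varrho$ (non-cheap reinsurance); at $t = T$ the corridor $[g_0(t), g_1(t)]$ collapses to $[0, G]$, matching the terminal condition.

```python
import numpy as np

# illustrative parameters with xi < vrho
i, xi, vrho, m, G, T = 0.05, 0.2, 0.4, 1.0, 3.0, 1.0

def g0(t):
    return (xi - vrho) * m / i * (np.exp(i * (t - T)) - 1)

def g1(t):
    return G * np.exp(i * (t - T)) + g0(t)

assert abs(g0(T)) < 1e-12 and abs(g1(T) - G) < 1e-12   # corridor ends at [0, G]
assert g0(0.0) > 0                                     # lower boundary starts above 0
print(g0(0.0), g1(0.0))
```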

By using the dynamic programming principle, the HJB equation of the optimization problem (5) is

$$\inf_{q \ge 0}\left\{ \tilde{s}_t + \tilde{s}_y\big(iy + (\xi - \varrho + \varrho q)m\big) + \frac{1}{2}\tilde{s}_{yy}n^2q^2 \right\} = 0, \tag{6}$$

with the following boundary conditions:

$$\begin{cases} \tilde{s}(T, y) = e^{-\varepsilon T}(G - y)^2, & y \in [0, G], \\ \tilde{s}(t, g_1(t)) = 0, & t \in [0, T], \\ \tilde{s}(t, g_0(t)) = e^{-\varepsilon T}G^2, & t \in [0, T]. \end{cases} \tag{7}$$

From the theory of the dynamic programming principle, as long as we find a continuously differentiable solution of (6) and (7), such a solution $\tilde{s}$ equals the value function $\tilde{S}$ defined in (5). One can refer to [17] for the standard proof of this conclusion. Unfortunately, there are several complex boundaries in (7), and solving such an equation can be quite difficult. Thus, in the next section we resort to the change of variable that was used in [18] to simplify the boundary conditions.

#### **3. Changing of Variable**

Define the diffeomorphism $Q : [0, T] \times [0, G] \to \Psi$, where $\Psi := \{(t, y) \mid t \in [0, T],\ g_0(t) \le y \le g_1(t)\}$ and

$$(t, z) \mapsto (t, y) = Q(t, z) = (t, Q_1(t, z)) := \left(t,\ ze^{-i(T-t)} + \frac{(\xi - \varrho)m}{i}\left(e^{-i(T-t)} - 1\right)\right). \tag{8}$$

For any strategy $q(\cdot) \in \tilde{D}_{t,y}$, define $Z(s; t, z, q(\cdot)) := [Q_1(s, \cdot)]^{-1}(Y(s; t, y, q(\cdot)))$, in which $z = [Q_1(t, \cdot)]^{-1}(y)$. We also denote $Z_s^q := Z(s; t, z, q(\cdot))$ for simplicity when there is no confusion. We can obtain that

$$Z_t^q := [Q_1(t, \cdot)]^{-1}(Y_t^q), \quad t \in [0, T],$$

which leads to

$$Z\_t^q = \mathbf{e}^{i(T-t)} Y\_t^q + \frac{(\xi - \varrho)m}{i} (\mathbf{e}^{i(T-t)} - 1).$$
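The displayed formula for $Z_t^q$ is exactly the inverse of the map $Q_1$ from (8); this can be verified symbolically (the symbol names below are ours):

```python
import sympy as sp

t, T, i, xi, vrho, m, z, y = sp.symbols('t T i xi vrho m z y')

c = (xi - vrho) * m / i
Q1 = z * sp.exp(-i * (T - t)) + c * (sp.exp(-i * (T - t)) - 1)   # y = Q1(t, z), cf. (8)
Z = sp.exp(i * (T - t)) * y + c * (sp.exp(i * (T - t)) - 1)      # claimed inverse

# composing the two maps returns z identically
assert sp.simplify(Z.subs(y, Q1) - z) == 0
print("change of variable verified")
```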

By some simple calculations, we see that

$$dZ_t^q = e^{i(T-t)}\left(\varrho q_t m\,dt + q_t n\,dB_t\right).$$

Moreover, for any given $s \in [t, T]$, if $Y_s^q = g_0(s)$, then $Z_s^q = 0$; if $Y_s^q = g_1(s)$, then $Z_s^q = G$. Regarding the new dynamics of $Z_s^q$, the set of all admissible strategies can be written as

$$D_{t,z} := \left\{ q(\cdot) \in L^2(\Omega \times [t, T]) \,\middle|\, q(\cdot) \text{ is progressively measurable, } q(\cdot) \ge 0,\ \forall s \in [t, T],\ 0 \le Z(s; t, z, q(\cdot)) \le G \right\}.$$

For any (*t*, *z*) ∈ [0, *T*] × [0, *G*], in terms of *Z*(·;*t*, *z*, *q*(·)), the original loss function (4) can be transformed to

$$L(t, z; q(\cdot)) = \mathbb{E}(\mathbf{e}^{-\varepsilon T} (Z\_T^q - G)^2).$$

The new value function is defined as

$$S(t, z) := \inf\_{q(\cdot) \in D\_{t, z}} L(t, z; q(\cdot)). \tag{9}$$

Now, we turn to solving the optimization problem (9). Again, by using the dynamic programming principle, the new version of the HJB equation is written as

$$\inf_{q \ge 0}\left\{ s_t + e^{i(T-t)}\varrho q m\,s_z + \frac{1}{2}e^{2i(T-t)}q^2n^2 s_{zz} \right\} = 0, \qquad \text{for all } (t, z) \in [0, T) \times (0, G), \tag{10}$$

with the boundary conditions:

$$\begin{cases} s(T, z) = e^{-\varepsilon T}(G - z)^2, & z \in [0, G], \\ s(t, G) = 0, & t \in [0, T], \\ s(t, 0) = e^{-\varepsilon T}G^2, & t \in [0, T]. \end{cases} \tag{11}$$

As stated in Section 2, a continuously differentiable solution for (10) and (11) equals the value function defined in (9). Before solving Equations (10) and (11), we explore some properties of the value function.

**Proposition 1.** *The value function S defined in* (9) *is a decreasing function with regard to the variable z.*

We omit the proof since the conclusion is obvious.

**Proposition 2.** *The value function defined in* (9) *is convex on the variable z.*

**Proof.** For any $\beta > 0$, let $q^{\beta,z_1}$, $q^{\beta,z_2}$ be $\beta$-optimal policies with initial data $(t, z_1)$ and $(t, z_2)$, respectively, i.e.,

$$\begin{aligned} L(t, z\_1; q^{\beta, z\_1}(\cdot)) &\leq S(t, z\_1) + \beta, \\ L(t, z\_2; q^{\beta, z\_2}(\cdot)) &\leq S(t, z\_2) + \beta. \end{aligned}$$

Notice that

$$dZ_t^q = e^{i(T-t)}\left(\varrho q_t m\,dt + q_t n\,dB_t\right).$$

Denote $Z(s; t, z_1, q^{\beta,z_1}) =: Z_{1s}$ and $Z(s; t, z_2, q^{\beta,z_2}) =: Z_{2s}$ for simplicity. For any fixed $\lambda \in (0, 1)$, let $R_s := \lambda Z_{1s} + (1 - \lambda)Z_{2s}$, and let the corresponding reinsurance strategy of the surplus $R_s$ be $q^{\beta,r} := \lambda q^{\beta,z_1} + (1 - \lambda)q^{\beta,z_2}$, where $r = \lambda z_1 + (1 - \lambda)z_2$. Then, we can obtain that

$$\begin{split} \lambda \mathbb{S}(t, z\_1) + (1 - \lambda)\mathbb{S}(t, z\_2) &\geq \lambda L(t, z\_1; q^{\beta, z\_1}) + (1 - \lambda)L(t, z\_2; q^{\beta, z\_2}) - \beta \\ &= \lambda \mathbb{E}\Big(\mathbf{e}^{-\varepsilon T}(Z\_{1T} - \mathbf{G})^2\Big) + (1 - \lambda)\mathbb{E}\Big(\mathbf{e}^{-\varepsilon T}(Z\_{2T} - \mathbf{G})^2\Big) - \beta \\ &\geq \mathbb{E}\Big(\mathbf{e}^{-\varepsilon T}(R\_T - \mathbf{G})^2\Big) - \beta, \end{split} \tag{12}$$

where the last inequality is due to the convexity of the function $x \mapsto (x - G)^2$. Combining (12) with the fact that

$$\mathbb{E}\left[e^{-\varepsilon T}(R_T - G)^2\right] \ge S(t, r),$$

we obtain that

$$\lambda S(t, z_1) + (1 - \lambda)S(t, z_2) \ge S(t, \lambda z_1 + (1 - \lambda)z_2) - \beta.$$

Since β > 0 is arbitrary, the convexity of the value function on the variable *z* is proved.

**Remark 1.** *By the definitions of $\tilde{S}$ and $S$, i.e., Equations (5) and (9), for any $(t, z) \in [0, T] \times [0, G]$, we have $S(t, z) = \tilde{S}(t, Q_1(t, z))$, where $Q_1$ is defined in (8). For any fixed time $t \in [0, T]$, the mapping $z \mapsto Q_1(t, z)$ is affine, so the convexity of $S(t, z)$ in $z$ is equivalent to the convexity of $\tilde{S}(t, y)$ in the variable $y$. Proposition 2 thus implies that the value function $\tilde{S}(t, y)$ is also convex in $y$.*

In what follows, we attempt to find a continuously differentiable convex solution of the HJB Equations (10) and (11).

#### **4. Solving the HJB Equation**

If there exists a continuously differentiable solution *s* for (10), then the minimizer of (10) is

$$q^\* = -\frac{\varrho m s\_z}{\mathbf{e}^{i(T-t)} n^2 s\_{zz}}.\tag{13}$$

Substituting (13) into (10) gives

$$s_t = \frac{\varrho^2 m^2 s_z^2}{2n^2 s_{zz}}. \tag{14}$$

Differentiating (14) with respect to *z* leads to

$$s_{tz} = \frac{2\varrho^2 m^2 s_z s_{zz}^2 - \varrho^2 m^2 s_z^2 s_{zzz}}{2n^2 s_{zz}^2}. \tag{15}$$

In this section, the dual transformation is used to transfer the above fully nonlinear PDE to a semilinear PDE. For each (*t*, *l*) ∈ [0, *T*) × (0, +∞), define the mapping by

$$[0, G] \to \mathbb{R}^+, \quad z \mapsto s(t, z) + zl,$$

where R<sup>+</sup> denotes the set of positive real numbers. Assume that for any given (*t*, *l*), τ(*t*, *l*) ∈ (0, *G*) is the unique minimizer of *s*(*t*, *z*) + *zl*. If the function *s* is smooth enough, then the minimizer satisfies

$$s_z(t, \tau(t, l)) = -l.\tag{16}$$

Differentiating (16) with respect to *t* and *l* gives

$$s_{tz}(t,\tau(t,l)) + s_{zz}(t,\tau(t,l))\tau_t(t,l) = 0,\tag{17}$$

$$s_{zz}(t,\tau(t,l))\tau_l(t,l) = -1,\tag{18}$$

$$s_{zzz}(t, \tau(t, l))\tau_l^2(t, l) + s_{zz}(t, \tau(t, l))\tau_{ll}(t, l) = 0. \tag{19}$$

Substituting (16)–(19) into (15), we have

$$\tau_t(t, l) + hl\tau_l(t, l) + \frac{h}{2}l^2\tau_{ll}(t, l) = 0,\tag{20}$$

where *h* := ϱ<sup>2</sup>*m*<sup>2</sup>/*n*<sup>2</sup> is a positive constant. Combining with the boundary condition *s*(*T*, *z*) = e<sup>−ε*T*</sup>(*G* − *z*)<sup>2</sup> of (11), we have

$$\tau(T, l) = \Big(-\frac{l}{2}e^{\varepsilon T} + G\Big) \vee 0.\tag{21}$$

Following an analysis similar to that of [19], we can obtain the other two boundary conditions:

$$\tau(t, 0) = G, \qquad \lim_{l \to +\infty} \tau(t, l) = 0.$$

Equation (20) admits the Kolmogorov probabilistic representation

$$\tau(t,l) = \mathbb{E}[\tau(T, \Lambda(T;t,l))],\tag{22}$$

where Λ(·; *t*, *l*) satisfies the following stochastic differential equation:

$$\begin{cases} \mathrm{d}\Lambda(s) = h\Lambda(s)\mathrm{d}s + \sqrt{h}\,\Lambda(s)\mathrm{d}\tilde{B}_s, & s \in (t, T],\\ \Lambda(t) = l, \end{cases}$$

in which *B̃<sub>s</sub>* is a standard Brownian motion. It is easy to see that

$$\Lambda(s; t, l) = l \exp\left\{ \frac{h}{2}(s - t) + \sqrt{h}\big(\tilde{B}_s - \tilde{B}_t\big) \right\}, \quad s \ge t. \tag{23}$$

Combining (22) and (23) with (21) leads to

$$\tau(t,l) = \mathbb{E}\left[ \left( G - \frac{l \exp\left\{ \frac{h}{2}(T-t) + \sqrt{h}\big(\tilde{B}_T - \tilde{B}_t\big) + \varepsilon T \right\}}{2} \right) \vee 0 \right].$$

Using the fact that *B̃<sub>T</sub>* − *B̃<sub>t</sub>* follows a normal distribution, we can directly calculate that

$$\begin{aligned} \tau(t,l) ={}& G\Phi\Big(\frac{\ln\big(\frac{2G}{l}\big) - \frac{h(T-t)}{2} - \varepsilon T}{\sqrt{h(T-t)}}\Big) \\ & - \frac{l \exp\{\varepsilon T + h(T-t)\}}{2} \Phi\Big(\frac{\ln\big(\frac{2G}{l}\big) - \frac{h(T-t)}{2} - \varepsilon T}{\sqrt{h(T-t)}} - \sqrt{h(T-t)}\Big), \quad t \in [0,T), \end{aligned} \tag{24}$$

where Φ is the distribution function of the standard normal distribution. We are now ready to give an explicit expression for the solution to the HJB Equations (10) and (11).
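As a numerical sanity check (a sketch of ours, not part of the paper: the function names `tau`, `tau_mc` and the constants are our own choices, using the parameter values of Example 1 below), the closed form (24) can be compared against a Monte Carlo evaluation of the representation (22) with the explicit solution (23) and terminal value (21):

```python
import math
import random

# Parameters of Example 1 below (G, epsilon, varrho, m, n, T); h = varrho^2 m^2 / n^2 as in (20).
G, EPS, RHO, M, N, T = 10.0, 0.2, 0.4, 1.0, 0.5, 5.0
H = (RHO * M / N) ** 2  # = 0.64

def Phi(x):
    """Standard normal distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def tau(t, l):
    """Closed-form tau(t, l) of Equation (24), for t in [0, T)."""
    sd = math.sqrt(H * (T - t))
    d = (math.log(2.0 * G / l) - H * (T - t) / 2.0 - EPS * T) / sd
    return G * Phi(d) - 0.5 * l * math.exp(EPS * T + H * (T - t)) * Phi(d - sd)

def tau_mc(t, l, n_paths=200_000, seed=1):
    """Monte Carlo estimate of tau via (22): average the terminal value (21)
    over draws of Lambda(T; t, l) given by the explicit solution (23)."""
    rng = random.Random(seed)
    sd = math.sqrt(H * (T - t))
    drift = H * (T - t) / 2.0
    total = 0.0
    for _ in range(n_paths):
        lam = l * math.exp(drift + sd * rng.gauss(0.0, 1.0))
        total += max(G - 0.5 * lam * math.exp(EPS * T), 0.0)
    return total / n_paths
```

For *t* = 1, *l* = 1 the two values agree to Monte Carlo accuracy; moreover, τ is decreasing in *l* and tends to *G* as *l* → 0<sup>+</sup>, matching the boundary conditions above.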

**Proposition 3.** *Let* τ *be the function defined in* (24)*, and define*

$$\begin{cases} s(t,z) = e^{-\varepsilon T} G^2 - \int_0^z [\tau(t,\cdot)]^{-1}(\nu)\, \mathrm{d}\nu, & (t,z) \in [0,T) \times [0,G],\\ s(T,z) = e^{-\varepsilon T} (G - z)^2, \end{cases} \tag{25}$$

*where* [τ(*t*, ·)]<sup>−1</sup> *denotes the inverse function of* τ(*t*, ·)*. Then, s*(*t*, *z*) *is a classical solution of* (10) *and* (11)*.*

This conclusion follows from direct calculations. Next, we show that the solution defined in Proposition 3 equals the value function of the optimization problem (9); this result is known as the verification theorem.
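To make Proposition 3 concrete, the following sketch (our own illustration; the bisection bracket, grid sizes and the cutoff `nu0` are ad hoc choices) recovers *s*(*t*, *z*) from (25) by inverting the strictly decreasing map *l* ↦ τ(*t*, *l*) and integrating numerically, with τ the closed form (24) under the parameters of Example 1:

```python
import math

# Numerical sketch of Proposition 3: build s(t, z) from Equation (25).
G, EPS, RHO, M, N, T = 10.0, 0.2, 0.4, 1.0, 0.5, 5.0
H = (RHO * M / N) ** 2

def Phi(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def tau(t, l):
    """Closed form (24)."""
    sd = math.sqrt(H * (T - t))
    d = (math.log(2.0 * G / l) - H * (T - t) / 2.0 - EPS * T) / sd
    return G * Phi(d) - 0.5 * l * math.exp(EPS * T + H * (T - t)) * Phi(d - sd)

def tau_inv(t, z, lo=1e-12, hi=1e12):
    """Solve tau(t, l) = z for l by bisection on a logarithmic scale."""
    for _ in range(80):
        mid = math.sqrt(lo * hi)
        if tau(t, mid) > z:
            lo = mid       # tau is decreasing: the root lies to the right
        else:
            hi = mid
    return math.sqrt(lo * hi)

def s_value(t, z, n_steps=300, nu0=1e-6):
    """s(t, z) = e^{-eps T} G^2 - int_0^z [tau(t,.)]^{-1}(nu) d nu (trapezoid rule)."""
    if z <= nu0:
        return math.exp(-EPS * T) * G ** 2
    grid = [nu0 + k * (z - nu0) / n_steps for k in range(n_steps + 1)]
    vals = [tau_inv(t, nu) for nu in grid]
    integral = sum(0.5 * (vals[k] + vals[k + 1]) * (grid[k + 1] - grid[k])
                   for k in range(n_steps))
    return math.exp(-EPS * T) * G ** 2 - integral
```

The recovered *s*(1, ·) is decreasing and convex and satisfies *s*(1, 0) = e<sup>−ε*T*</sup>*G*<sup>2</sup>, consistent with Propositions 1 and 2.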

**Theorem 1.** *For any* (*t*, *z*) ∈ [0, *T*) × [0, *G*]*, s*(*t*, *z*) = *S*(*t*, *z*)*, where s*(*t*, *z*) *is defined in* (25)*. Furthermore, the optimal strategy of the optimization problem* (9) *is as follows:*

$$q^*(t, z) = \begin{cases} -\dfrac{\varrho m s_z}{e^{i(T-t)} n^2 s_{zz}}, & (t, z) \in [0, T) \times (0, G), \\ 0, & (t, z) \in [0, T) \times \{0, G\}. \end{cases} \tag{26}$$

**Proof.** We only prove the case of (*t*, *z*) ∈ [0, *T*) × (0, *G*) since the case of [0, *T*) × {0, *G*} is trivial.

For any admissible strategy *q* ∈ *D<sub>t,z</sub>* and initial state (*t*, *z*), denote by *Z<sup>q</sup><sub>s</sub>* the corresponding surplus process under the strategy *q*. Define the stopping time

$$\gamma := T \wedge \gamma_0 \wedge \gamma_G,$$

where γ<sub>0</sub> := inf{*s* | *Z<sup>q</sup><sub>s</sub>* = 0, *s* ∈ [*t*, *T*]} and γ<sub>*G*</sub> := inf{*s* | *Z<sup>q</sup><sub>s</sub>* = *G*, *s* ∈ [*t*, *T*]}. Applying the Itô formula to *s*(γ, *Z<sup>q</sup><sub>γ</sub>*) and taking expectations on both sides, we arrive at

$$\begin{split} & \mathbb{E}\{s(\gamma, Z^{q}_{\gamma})\} \\ =\ & s(t, z) + \mathbb{E}\left[\int_{t}^{\gamma} \left( \frac{\partial s}{\partial t}(s, Z^{q}_{s}) + e^{i(T-s)} \varrho q_{s} m \frac{\partial s}{\partial z}(s, Z^{q}_{s}) + \frac{1}{2} e^{2i(T-s)} q_{s}^{2} n^{2} \frac{\partial^{2} s}{\partial z^{2}}(s, Z^{q}_{s}) \right) \mathrm{d}s \right]. \end{split} \tag{27}$$

Since the function *s* solves (10), we obtain that

$$\mathbb{E}\left[\int_{t}^{\gamma} \left(\frac{\partial s}{\partial t}(s, Z_{s}^{q}) + e^{i(T-s)}\varrho q_{s}m\frac{\partial s}{\partial z}(s, Z_{s}^{q}) + \frac{1}{2}e^{2i(T-s)}q_{s}^{2}n^{2}\frac{\partial^{2}s}{\partial z^{2}}(s, Z_{s}^{q})\right) \mathrm{d}s\right] \geq 0. \tag{28}$$

Combining (27) with (28) gives

$$\mathbb{E}\{s(\gamma, Z^{q}_{\gamma})\} \geq s(t, z).\tag{29}$$

Combining (29) with the boundary conditions (11), we obtain that

$$s(t, z) \le \mathbb{E}\left(e^{-\varepsilon T} (Z_T^q - G)^2\right) = L(t, z; q(\cdot)).$$

Taking the infimum over the set *D<sub>t,z</sub>*, we obtain *s*(*t*, *z*) ≤ *S*(*t*, *z*).

On the other hand, using the standard verification arguments and combining the admissibility of *q*<sup>∗</sup> with the fact that *s* solves the HJB Equations (10) and (11), we can show that *L*(*t*, *z*; *q*<sup>∗</sup>(·)) = *s*(*t*, *z*), which implies that *q*<sup>∗</sup> is optimal. For more details on verification arguments, one can refer to [17].

We have thus completely solved for the optimal value function and the optimal policy of the optimization problem (9). In the following remark, we give the optimal policy for the original optimization problem (5) via Equation (8).

**Remark 2.** *For each* (*t*, *y*) ∈ [0, *T*) × [*g*0(*t*), *g*1(*t*)]*, the policy defined by*

$$q^* = \begin{cases} -\dfrac{\varrho m s_z(t, Q_1^{-1}(t, y))}{e^{i(T - t)} n^2 s_{zz}(t, Q_1^{-1}(t, y))}, & (t, y) \in [0, T) \times (g_0(t), g_1(t)), \\ 0, & (t, y) \in [0, T) \times \{g_0(t), g_1(t)\}, \end{cases}$$

*is the optimal policy of the initial optimization problem* (5)*.*

#### **5. Numerical Example**

We now present several examples to illustrate the optimal policy and the value function.

**Example 1.** *We assume the following parameters: the terminal-time goal G* = 10*; the interest rate i* = 0.15*; the discount factor* ε = 0.2*; the safety loading parameters* ϱ = 0.4*,* ξ = 0.2*; the expected loss in unit time m* = 1*; and the diffusion volatility rate n* = 0.5*. The terminal time T is assumed to be* 5*.*

*Figure 1 presents the value function s*(1, *z*)*. Apparently, Figure 1 shows that the value function is decreasing and convex in the variable z, which verifies Propositions 1 and 2. Figure 2 shows the optimal policy for different initial values z at time* 1. *As we can see, the reinsurance retention proportion first increases and then decreases with respect to the wealth. This can be explained as follows: when the wealth is close to 0 or close to the target, the insurance company prefers to transfer all of the risky claims to the reinsurance company and invest money in the risk-less asset.*

**Figure 1.** The optimal value function *s* with respect to *z* at time *t* = 1.

**Figure 2.** The optimal reinsurance policy with respect to *z* at time *t* = 1.
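The hump shape in Figure 2 can be reproduced without solving for *s* directly (a sketch of ours; the finite-difference step and bisection bracket are ad hoc choices): by (16) and (18), *s<sub>z</sub>* = −*l* and *s<sub>zz</sub>* = −1/τ<sub>*l*</sub> along *z* = τ(*t*, *l*), so (26) becomes *q*<sup>∗</sup> = −ϱ*ml*τ<sub>*l*</sub>(*t*, *l*)e<sup>−*i*(*T*−*t*)</sup>/*n*<sup>2</sup>:

```python
import math

# Optimal retention q*(t, z) of (26) computed through the dual variable l,
# using s_z = -l (16) and s_zz = -1/tau_l (18) along z = tau(t, l).
# Parameters are those of Example 1.
G, I_RATE, EPS, RHO, M, N, T = 10.0, 0.15, 0.2, 0.4, 1.0, 0.5, 5.0
H = (RHO * M / N) ** 2

def Phi(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def tau(t, l):
    """Closed form (24)."""
    sd = math.sqrt(H * (T - t))
    d = (math.log(2.0 * G / l) - H * (T - t) / 2.0 - EPS * T) / sd
    return G * Phi(d) - 0.5 * l * math.exp(EPS * T + H * (T - t)) * Phi(d - sd)

def tau_inv(t, z, lo=1e-12, hi=1e12):
    """Invert the decreasing map l -> tau(t, l) by log-scale bisection."""
    for _ in range(80):
        mid = math.sqrt(lo * hi)
        lo, hi = (mid, hi) if tau(t, mid) > z else (lo, mid)
    return math.sqrt(lo * hi)

def q_star(t, z, rel_step=1e-5):
    """Optimal reinsurance retention q*(t, z) for 0 < z < G."""
    l = tau_inv(t, z)
    tau_l = (tau(t, l * (1 + rel_step)) - tau(t, l * (1 - rel_step))) / (2 * l * rel_step)
    return -RHO * M * l * tau_l / (math.exp(I_RATE * (T - t)) * N ** 2)
```

The checks below confirm the qualitative behavior reported in Figures 2 and 3: *q*<sup>∗</sup> first increases and then decreases in *z*, and it grows as *t* approaches the deadline.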

**Example 2.** *In this example, we use the same parameters as in Example 1, except that we set the time t* = 1, 2, 3*, respectively, to see the effect of the time variable on the optimal policy. Figure 3 shows the optimal reinsurance policy with respect to the variable z at the different times t* = 1, 2, 3. *As we can see, as time passes, the reinsurance retention proportion increases, which means that the insurance company would like to undertake more risk when the time is close to the deadline.*

**Figure 3.** The optimal reinsurance policy with respect to *z* at time *t* = 1, 2, 3.

**Example 3.** *In this example, we use the same parameters as in Example 1, except that we change the interest rate i* = 0.05, 0.1, 0.15*, respectively. Figure 4 shows the effect of different interest rates on the optimal policy. As we can see, as the interest rate increases, the reinsurance retention proportion decreases, which means that the insurance company prefers to invest more in the risk-less asset when the interest rate increases. This phenomenon is consistent with common sense, because when interest rates rise, investors are more inclined to keep their money in the bank.*

**Figure 4.** The optimal reinsurance policy with respect to *z* under different interest rates *i* = 0.05, 0.1, 0.15.

**Example 4.** *In this example, we use the same parameters as in Example 1, except that we change the diffusion volatility rate n. As n increases, the risk of large claims also increases. As shown in Figure 5, as n increases, the reinsurance retention level decreases. In other words, if the claim risk is too high, the insurance company prefers to transfer risks to the reinsurance company instead of keeping premiums.*

**Figure 5.** The optimal reinsurance policy with respect to *z* under different volatility rates *n* = 0.5, 1, 1.5.

**Example 5.** *In this example, we still use the same parameters as in Example 1, except for the reinsurance safety loading* ϱ*. Figure 6 shows the optimal reinsurance retention level for different reinsurance safety loadings. An increase in the safety loading means that the reinsurance contract is more expensive. Thus, the optimal choice is to increase the reinsurance retention level so that the insurer can keep more premiums in the insurance company.*

**Figure 6.** The optimal reinsurance policy with respect to *z* with different reinsurance safety loadings ϱ = 0.4, 0.5, 0.6.

**Example 6.** *In this example, we still use the same parameters as in Example 1, except we change the expected loss in each unit time m* = 1, 1.5, 2*, respectively. Figure 7 shows that when m increases, the reinsurance retention level will also increase. This can be explained by the fact that when the parameter m increases, the insurance company obtains more premiums so that the optimal choice for the insurance company is to pull up the insurance retention level.*

**Figure 7.** The optimal reinsurance policy under different expected losses in unit time *m* = 1, 1.5, 2.

#### **6. Conclusions**

As an application of probability, this paper explores a reinsurance optimization problem that has multiple curved boundaries. To simplify the optimization problem, the technique of changing variables is used. After changing variables, we adopt the dual transformation to solve the new HJB equation. Eventually, an explicit expression of the value function as well as the optimal policy is shown. With some numerical experiments, we list several important influential factors that affect the reinsurance retention level in Table 1. For simplicity, the notation ↑ means "increases" and ↓ means "decreases". Table 1 shows that the current time, the interest rate, the diffusion volatility rate, the reinsurance safety loading, and the expected loss in unit time will simultaneously affect the optimal reinsurance policy.


**Table 1.** Factors that affect reinsurance policy.

**Author Contributions:** Y.Z. designed the research and wrote the paper. H.H. gave the methodology and the support of funding acquisition. All authors have read and agreed to the published version of the manuscript.

**Funding:** The work was sponsored by the Natural Science Foundation of Chongqing (cstc2020jcyjmsxmX0762, CSTB2022NSCQ-MSX0290) and the Talent Initial Funding for Scientific Research of Chongqing Three Gorges University (20190020).

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** The data presented in this study are available upon request from the corresponding author.

**Acknowledgments:** The authors thank the editor and the referees for their valuable comments and suggestions, which greatly improved the quality of this paper.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


**Disclaimer**/**Publisher's Note:** The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

### *Article* **Self-Improving Properties of Continuous and Discrete Muckenhoupt Weights: A Unified Approach**

**Maryam M. Abuelwafa <sup>1</sup> , Ravi P. Agarwal <sup>2</sup> , Safi S. Rabie <sup>1</sup> and Samir H. Saker 1,\***


**Abstract:** In this paper, we develop a new technique on a time scale T to prove that the self-improving properties of the Muckenhoupt weights hold. The results contain the properties of the weights when T = R and when T = N, and can also be extended to cover different spaces such as T = *h*N, T = *q*<sup>N<sub>0</sub></sup>, etc. The results will be proved by employing some new refinements of Hardy's type dynamic inequalities with negative powers, proven and designed for this purpose. The results give the exact value of the limit exponent as well as the new constants of the new classes.

**Keywords:** dynamic Hardy's type inequality; Muckenhoupt weights; self-improving properties; time scales

**MSC:** 26D07; 42B25; 42C10

#### **1. Introduction**

A weight *u* is a non-negative locally integrable function defined on a bounded interval Ĵ<sub>0</sub> ⊂ R<sup>+</sup> = [0, ∞). We consider subintervals Ĵ of Ĵ<sub>0</sub> of the form [0, *t*], for 0 < *t* < ∞, and denote by |Ĵ| the Lebesgue measure of Ĵ. A weight *u* which satisfies

$$\frac{1}{|\hat{J}|} \int_{\hat{J}} u(t)dt \le \mathcal{C}\, \underset{t \in \hat{J}}{\operatorname{ess\,inf}}\, u(t), \text{ for all } t \in \hat{J}, \tag{1}$$

is called an A<sub>1</sub>(C)−Muckenhoupt weight, where C > 1. In [1], the author proved that if *u* is a monotonic weight that satisfies the condition (1), then there exists *p* ∈ [1, C/(C − 1)] such that

$$\frac{1}{|\hat{J}|} \int_{\hat{J}} u^p(t)dt \le \frac{\mathcal{C}}{\mathcal{C} - p(\mathcal{C} - 1)} \left( \frac{1}{|\hat{J}|} \int_{\hat{J}} u(t)dt \right)^p,\tag{2}$$

which is the reverse of Hölder's inequality. In [2], the authors improved the Muckenhoupt inequality (2) by establishing the best constant for any weight *u*, not necessarily monotonic. Their proof was obtained by using the rearrangement *u*<sup>∗</sup> of the function *u* over the interval Ĵ<sub>0</sub>. In particular, they proved that if *u* satisfies (1) with C > 1, then

$$\frac{1}{|\hat{J}|} \int_{\hat{J}} u^p(t)dt \le \frac{\mathcal{C}^{1-p}}{\mathcal{C} - p(\mathcal{C}-1)} \left(\frac{1}{|\hat{J}|} \int_{\hat{J}} u(t)dt\right)^p,\tag{3}$$

for *p* < C/(C − 1). A non-negative measurable weight *u* is called an A<sub>*p*</sub>(C)−Muckenhoupt weight for *p* > 1 if there exists a constant C > 1 such that the inequality

**Citation:** Abuelwafa, M.M.; Agarwal, R.P.; Rabie, S.S.; Saker, S.H. Self-Improving Properties of Continuous and Discrete Muckenhoupt Weights: A Unified Approach. *Axioms* **2023**, *12*, 505. https://doi.org/10.3390/ axioms12060505

Academic Editor: Feliz Manuel Minhós

Received: 15 March 2023 Revised: 5 May 2023 Accepted: 6 May 2023 Published: 23 May 2023

**Copyright:** © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

$$\left(\frac{1}{|\hat{J}|}\int_{\hat{J}} u(t)dt\right)\left(\frac{1}{|\hat{J}|}\int_{\hat{J}} u^{-\frac{1}{p-1}}(t)dt\right)^{p-1} \le \mathcal{C}, \tag{4}$$

holds for every subinterval Ĵ ⊂ Ĵ<sub>0</sub>. The smallest constant C satisfying (1) or (4) is called the A<sub>*p*</sub>−norm of the weight *u* and is denoted by [A<sub>*p*</sub>(*u*)]. For a given fixed constant C > 1, if the weight *u* ∈ A<sub>*p*</sub>, then [A<sub>*p*</sub>(*u*)] ≤ C. In 1972, Muckenhoupt [1] introduced the full characterization of A<sub>*p*</sub>−weights in connection with the boundedness of the Hardy–Littlewood maximal operator in the space *L*<sup>*p*</sup><sub>*u*</sub>(R<sup>+</sup>). In [3], the authors proved that if *u* ∈ *L*<sup>*p*</sup>(R<sup>+</sup>) and satisfies (4), then

$$\left(\frac{1}{|\hat{J}|} \int_{\hat{J}} u(t)dt\right) \left(\frac{1}{|\hat{J}|} \int_{\hat{J}} u^{-\frac{1}{q-1}}(t)dt\right)^{q-1} \le \mathcal{C}_1, \tag{5}$$

for all *q* < *p*, where the constant C<sub>1</sub> = C<sub>1</sub>(*q*, C). In other words, Muckenhoupt's result for the *self-improving* property states that *u* ∈ A<sub>*p*</sub>(C) implies that there exists ε > 0 such that *u* ∈ A<sub>*p*−ε</sub>(C<sub>1</sub>), and then

$$\mathcal{A}^p(\mathcal{C}) \subset \mathcal{A}^{p-\varepsilon}(\mathcal{C}_1). \tag{6}$$

The properties of the Muckenhoupt classes have been deeply investigated, especially in one dimension, and the following aspects have been considered extensively:

(*h*1). *Finding the exact value of the limit exponent q for which the self-improving property holds;*

(*h*2). *Finding the best constants C<sub>1</sub> for which the improved* A<sub>*q*</sub>−*condition is satisfied.*

A great deal of work on the problem of finding the exact bounds of exponents for the embedding (6) has been carried out in many papers; see, for example, [1,2,4–11]. Since it is impossible to give an exhaustive account of the results related to the problems under consideration, we shall dwell only on some of them, concerned with the sharp results for the self-improving property given by Korenovskii [12]. In particular, Korenovskii found the sharp lower bound of the exponent (*self-improving property*) for which (6) holds and proved that the optimal integrability exponent *q* is the positive root of the equation

$$\frac{1}{q} \left( \frac{p-1}{p-q} \right)^{p-1} = \mathcal{C}, \tag{7}$$

and also found the explicit value of the constant of the new class. One of the most significant characteristics of the A<sub>*p*</sub> Muckenhoupt weights is the extrapolation theorem that was announced, and for which a detailed proof was given, by Rubio de Francia in [13,14]. Many results related to this topic have been studied by several authors (see [15–22]).
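To see condition (4) in action, consider the power weight *u*(*t*) = *t*<sup>α</sup> on intervals Ĵ = [0, *b*] (an illustrative choice of ours, not taken from the papers cited): a direct computation gives the A<sub>*p*</sub> ratio (1/(1 + α))((*p* − 1)/(*p* − 1 − α))<sup>*p*−1</sup>, independent of *b*, which a midpoint-rule quadrature confirms:

```python
import math  # imported for consistency with the surrounding sketches

def ap_ratio(alpha, p, b, n=200_000):
    """Left-hand side of (4) for u(t) = t**alpha on [0, b], by the midpoint
    rule; the midpoint grid avoids the integrable singularity of
    u^{-1/(p-1)} at t = 0."""
    h = b / n
    s1 = s2 = 0.0
    for k in range(n):
        t = h * (k + 0.5)
        s1 += t ** alpha                    # integral of u
        s2 += t ** (-alpha / (p - 1.0))     # integral of u^{-1/(p-1)}
    return (s1 * h / b) * (s2 * h / b) ** (p - 1.0)

def ap_ratio_exact(alpha, p):
    """Closed-form A_p ratio of the power weight, independent of b."""
    return (1.0 / (1.0 + alpha)) * ((p - 1.0) / (p - 1.0 - alpha)) ** (p - 1.0)
```

With α = 1 and *p* = 3 the ratio equals 2 on every interval [0, *b*], so this weight satisfies (4) with any C ≥ 2.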

Over the past few years, interest in the area of discrete harmonic analysis has been renewed, and it has become an active field of research. This renewed interest began with an observation of M. Riesz in his 1928 work on the Hilbert transform, where he proved that the Hilbert discrete operator

$$Hf(n) := \sum_{m \in \mathbb{Z}_+} \frac{f(n-m)}{m}$$

is bounded on ℓ<sup>*p*</sup>−spaces if the operator

$$Hf(x) := \mathrm{p.v.}\int_{\mathbb{R}} \frac{f(x - t)}{t}\, dt$$

is bounded on *L*<sup>*p*</sup>−spaces. In 1952, Alberto Calderón and Antoni Zygmund [23] extended the results to a more general class of singular integral operators with kernels. It is worth mentioning that the progress of recent years regarding discrete analogues of operators in harmonic analysis is related to Calderón–Zygmund analogues, discrete maximal operators and related problems in number theory, translation-invariant fractional integral operators, translation-invariant singular Radon transforms, quasi-translation-invariant operators, spherical averages and discrete rough maximal functions; we refer the reader to the paper [24] and the references cited therein.

As shown by Hughes (see [25] and the references therein), discrete operators are closely connected to important problems in number theory. For example, Waring's problem asks whether each natural number *k* is associated with a positive integer *s* such that every natural number is the sum of at most *s* natural numbers raised to the power *k*. This problem has been extended to finding the function *G*(*k*), defined as the smallest positive integer *s* such that every sufficiently large integer (i.e., every integer greater than some constant) can be represented as a sum of no more than *s* positive integers raised to the power *k*. Throughout the paper, we assume that 1 < *p* < ∞ and that the discrete weights are positive sequences defined on *J* ⊂ Z<sup>+</sup> = {1, 2, 3, . . .}, where *J* is of the form {1, 2, . . . , *N*}. The notation *X*<sup>*d*</sup> denotes the set of all nonincreasing non-negative sequences of *X*. The discrete weight *v* is said to be in the discrete Muckenhoupt A<sub>*p*</sub> class for *p* > 1 if there exists a constant *A* > 1 satisfying the inequality

$$\left(\frac{1}{n}\sum_{k=1}^{n}v(k)\right)\left(\frac{1}{n}\sum_{k=1}^{n}v^{\frac{-1}{p-1}}(k)\right)^{p-1} \le A,\text{ for all } n \in J.\tag{8}$$

The discrete weight *v* is said to be in the discrete Ariño and Muckenhoupt B<sub>*p*</sub> class for *p* > 0 if there exists a constant *A* > 1 such that the inequality

$$\sum_{k=n}^{\infty} \frac{v(k)}{k^p} \le \frac{A}{n^p} \sum_{k=1}^n v(k), \text{ for all } n \in J. \tag{9}$$
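A small numerical probe of (9) (our own illustration; the weight *v*(*k*) = 1/*k* and the truncation level are arbitrary choices): for *v*(*k*) = 1/*k* and *p* = 2, the ratio of the left side of (9) to *n*<sup>−*p*</sup> ∑<sub>*k*≤*n*</sub> *v*(*k*) stays bounded, so this weight lies in B<sub>2</sub> with, e.g., *A* = 1.3:

```python
def bp_ratio(v, p, n, tail_cut=100_000):
    """Ratio (sum_{k>=n} v(k)/k^p) / (n^{-p} * sum_{k=1}^{n} v(k)); the B_p
    condition (9) holds with constant A iff this ratio is <= A for all n.
    The infinite tail is truncated at tail_cut."""
    tail = sum(v(k) / k ** p for k in range(n, tail_cut))
    head = sum(v(k) for k in range(1, n + 1))
    return tail * n ** p / head

w = lambda k: 1.0 / k   # sample weight v(k) = 1/k
```

For this weight the ratio is largest at *n* = 1 (where it equals ∑ *k*<sup>−3</sup> = ζ(3) ≈ 1.202) and decreases in *n*.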

The necessary and sufficient conditions for the boundedness of a series of discrete classical operators (the Hardy–Littlewood maximal operator, Hardy's operator) on the weighted spaces ℓ<sup>*p*</sup>(*v*) are the A<sub>*p*</sub>−Muckenhoupt condition and the B<sub>*p*</sub>−condition on the weight *v*. In [26], the authors proved that the discrete Hardy–Littlewood maximal operator M : ℓ<sup>*p*</sup>(*v*)<sup>*d*</sup> → ℓ<sup>*p*</sup>(*v*), which is defined by

$$\mathcal{M}f(n) = \sup_{n \in J} \frac{1}{n} \sum_{k=1}^{n} f(k), \text{ for all } n \in J,$$

is bounded on ℓ<sup>*p*</sup>(*v*)<sup>*d*</sup> for *p* > 1 if and only if *v* ∈ A<sub>*p*</sub>. In [27], Heinig and Kufner proved that the Hardy operator H : ℓ<sup>*p*</sup>(*v*)<sup>*d*</sup> → ℓ<sup>*p*</sup>(*v*), which is defined by

$$\mathcal{H}f(n) = \frac{1}{n} \sum_{k=1}^{n} f(k), \text{ for all } n \in J,$$

is bounded on ℓ<sup>*p*</sup>(*v*)<sup>*d*</sup> for 1 < *p* < ∞ if and only if *v* ∈ B<sub>*p*</sub>, lim<sub>*n*→∞</sub>(*v*(*n* + 1)/*v*(*n*)) = *c* for some constant *c* > 0, and ∑<sub>*n*=1</sub><sup>∞</sup> *v*(*n*) = ∞. In [28], Bennett and Grosse-Erdmann improved the result of Heinig and Kufner by excluding the conditions on *v*. In [29], the authors proved that the discrete Hardy operator is bounded on ℓ<sup>*p*</sup>(*v*)<sup>*d*</sup> for *p* > 1 if and only if *v* ∈ A<sub>*p*</sub>. The discrete weight *v* is said to belong to the discrete Muckenhoupt A<sub>1</sub>−class if there exists a constant *A* > 0 such that the inequality H*u*(*n*) ≤ *A* inf<sub>*n*∈*J*</sub> *u*(*n*), or equivalently M*u*(*n*) ≤ *Au*(*n*), holds for all *n* ∈ *J*. In [29], the authors proved the self-improving property of the weighted discrete Muckenhoupt classes. They also established the exact values of the limit exponents as well as the new constants of the new classes. These values correspond to the sharp values of the continuous case obtained by Nikolidakis (see [7,8]). For more details on discrete results, we refer the reader to the papers [30–34].
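The discrete condition (8) is also easy to probe numerically (a sketch of ours; the weights below are illustrative, not from the cited papers). A constant weight has A<sub>*p*</sub> ratio exactly 1 for every *n*, while *v*(*k*) = *k*<sup>−1/2</sup> with *p* = 2 has ratio increasing toward 4/3, so it satisfies (8) with *A* = 4/3:

```python
def ap_discrete(v, p, n_max):
    """Maximum over n <= n_max of the left-hand side of the discrete
    Muckenhoupt condition (8), using running partial sums."""
    best, s1, s2 = 0.0, 0.0, 0.0
    for n in range(1, n_max + 1):
        s1 += v(n)                          # partial sum of v(k)
        s2 += v(n) ** (-1.0 / (p - 1.0))    # partial sum of v^{-1/(p-1)}(k)
        best = max(best, (s1 / n) * (s2 / n) ** (p - 1.0))
    return best
```

By Hölder's inequality the computed quantity is always at least 1, with equality exactly for constant weights.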

In [28], the authors remarked that the study of discrete inequalities is not a simple task and is in fact more complicated to analyze than its integral counterpart. They discovered that the conditions do not coincide, in any natural way, with those obtained by discretization of the results for functions, whereas the reverse is true. In other words, the results satisfied for sums hold, with the obvious modifications, for integrals, which in fact proves the first part of the basic principle of Hardy, Littlewood and Pólya [35]. Moreover, proofs in the discrete form transfer instantly, and are much simpler, when applied to integrals.

The natural questions which arise now are as follows:

(*Q*1). *Is it possible to find a new approach to unify the proofs of the self-improving properties of continuous and discrete Muckenhoupt weights*?

(*Q*2). *Is it possible to prove the self-improving properties of Ariño and Muckenhoupt* B<sub>*p*</sub> *weights?*

Our aim in this paper is to give an answer to the first question on time scales, which has received much attention and become a major field in pure and applied mathematics today. The second question will be considered later.

The general idea on time scales is to prove a result for a dynamic inequality or a dynamic equation where the domain of the unknown function is a so-called time scale T, which is an arbitrary nonempty closed subset of the real numbers R. This idea was created by Hilger [36] to unify the study of continuous and discrete results, and he initiated the study of dynamic equations on time scales. The three most popular examples of calculus on time scales are differential calculus, difference calculus and quantum calculus, i.e., when T = R, T = N, T = *h*N for *h* > 0, and T = *q*<sup>N<sub>0</sub></sup> = {*q*<sup>*t*</sup> : *t* ∈ N<sub>0</sub>}, where *q* > 1. The cases when the time scale is equal to the reals or to the integers represent the classical theories of integral and of discrete inequalities. In more precise terms, the study of dynamic inequalities or dynamic equations on time scales helps avoid proving results twice: once for a differential inequality and once again for a difference inequality. For more details we refer to the books [37,38] and the references cited therein. Very recently, the authors in [39–43] proved the time scale versions of the Muckenhoupt and Gehring inequalities and used them to prove some higher integrability results on time scales. This also motivated us to develop a new technique on time scales to prove some new results on inequalities with weights and to use the new inequalities to formulate some conditions for the boundedness of the Hardy operator with negative powers on time scales, showing the applications of the obtained results.

The paper is organized as follows: In Section 2, we prove some Hardy's type inequalities and new refinements of these inequalities with negative powers. In Section 3, we employ some of these inequalities to prove the self-improving properties of the Muckenhoupt class on a time scale T for non-negative and nondecreasing weights. The main results give a solution on time scales to the problem of finding the exact value of the limit exponent *q* < *p* for which the self-improving property holds, and also to the problem of finding the best constants C<sub>1</sub> for which the improved *A*<sub>*q*</sub>−condition is satisfied, i.e., (*h*1) and (*h*2) above.

#### **2. Hardy's Type Inequalities with Negative Powers**

In this section, we prove some Hardy's type inequalities and the new refinements of these inequalities with negative powers. First, we recall the following concepts related to the notion of time scales; for more details, we refer to the two books [44,45], which summarize and organize much of the time scale calculus. A function *f* : T → R is called right-dense continuous (rd-continuous) if *f* is continuous at right-dense points in T and its left-sided limits exist and are finite at left-dense points. The set *C<sub>rd</sub>*(T) = *C<sub>rd</sub>*(T, R) denotes the set of all rd-continuous functions *f* : T → R. The derivatives of the product *f g* and the quotient *f*/*g* (where *gg*<sup>*σ*</sup> ≠ 0, here *g*<sup>*σ*</sup> = *g* ◦ *σ*) of two differentiable functions *f* and *g* are given by

$$(fg)^{\Delta} = fg^{\Delta} + f^{\Delta}g^{\sigma} = f^{\Delta}g + f^{\sigma}g^{\Delta}, \quad \text{and} \quad \left(\frac{f}{g}\right)^{\Delta} = \frac{f^{\Delta}g - fg^{\Delta}}{g g^{\sigma}},$$

where *σ* = *σ*(*t*) := inf{*s* ∈ T : *s* > *t*} is the forward jump operator on a time scale. Let *f* : R → R be continuously differentiable and suppose that *g* : T → R is delta differentiable. Then *f* ◦ *g* : T → R is delta differentiable and the two chain rules that we will use in this paper are given in the next two formulas.

$$f^{\Delta}(g(t)) = f'(g(\xi))g^{\Delta}(t), \quad \text{for some} \quad \xi \in [t, \sigma(t)], \tag{10}$$

and

$$(f \circ g)^{\Delta}(t) = \left\{ \int_0^1 f'\left(g(t) + h\mu(t)g^{\Delta}(t)\right) dh \right\} g^{\Delta}(t). \tag{11}$$

A special case of (11) is

$$\left[u^{\lambda}(t)\right]^{\Delta} = \lambda \int_{0}^{1} \left[hu^{\sigma} + (1 - h)u\right]^{\lambda - 1} u^{\Delta}(t)\, dh. \tag{12}$$
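For T = Z one has σ(*t*) = *t* + 1, μ ≡ 1, and *f*<sup>∆</sup>(*t*) = *f*(*t* + 1) − *f*(*t*), so the product rule above and the chain rule (12) can be checked directly (a toy check of ours; the functions and values are arbitrary):

```python
# T = Z: sigma(t) = t + 1, mu = 1, and the delta derivative is the forward difference.
def delta(F, t):
    return F(t + 1) - F(t)

f = lambda t: t * t          # f(t) = t^2
g = lambda t: 2 * t + 3      # g(t) = 2t + 3
t0 = 4

# Product rule: (fg)^Delta = f g^Delta + f^Delta g^sigma = f^Delta g + f^sigma g^Delta.
lhs = delta(lambda s: f(s) * g(s), t0)
assert lhs == f(t0) * delta(g, t0) + delta(f, t0) * g(t0 + 1)
assert lhs == delta(f, t0) * g(t0) + f(t0 + 1) * delta(g, t0)

# Chain rule (12) for u^lambda with u > 0; the h-integral is evaluated with
# a composite Simpson rule, so equality holds up to quadrature error.
def simpson(func, a, b, n=1000):
    h = (b - a) / n
    s = func(a) + func(b)
    s += 4 * sum(func(a + (2 * k - 1) * h) for k in range(1, n // 2 + 1))
    s += 2 * sum(func(a + 2 * k * h) for k in range(1, n // 2))
    return s * h / 3

u, u_sig, lam = 2.0, 5.0, 3.5          # u(t0), u(t0 + 1) = u^sigma, exponent
lhs12 = u_sig ** lam - u ** lam        # [u^lambda]^Delta at t0 (mu = 1)
rhs12 = lam * simpson(lambda h: (h * u_sig + (1 - h) * u) ** (lam - 1), 0.0, 1.0) * (u_sig - u)
```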

In this paper, we will refer to the (delta) integral, which can be defined as follows: if *G*<sup>∆</sup>(*t*) = *g*(*t*), then the Cauchy (delta) integral of *g* is defined by $\int_a^t g(x)\Delta x := G(t) - G(a)$. If *g* ∈ *C<sub>rd</sub>*(T), then the Cauchy integral $G(t) := \int_{t_0}^t g(x)\Delta x$ exists, *t*<sub>0</sub> ∈ T, and satisfies *G*<sup>∆</sup>(*t*) = *g*(*t*), *t* ∈ T. An infinite integral is defined as $\int_a^\infty f(x)\Delta x := \lim_{b\to\infty}\int_a^b f(x)\Delta x$. The integration on discrete time scales is defined by

$$\int_{a}^{b} g(t) \Delta t = \sum_{t \in [a,b)} \mu(t) g(t).$$
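As an illustration (ours), on the quantum time scale T = *q*<sup>N<sub>0</sub></sup> the graininess is μ(*t*) = (*q* − 1)*t*, and the displayed sum reproduces the closed form $\int_1^{q^N} t\,\Delta t = (q^{2N}-1)/(q+1)$; on T = Z (μ ≡ 1) it reduces to an ordinary sum:

```python
def delta_integral_q(g, q, n_steps):
    """int_1^{q^N} g(t) Delta t on T = q^{N_0}: sum of mu(t) g(t) = (q-1) t g(t)
    over the grid points t = q^0, ..., q^(N-1)."""
    return sum((q - 1) * q ** k * g(q ** k) for k in range(n_steps))

q, n_steps = 2.0, 6
numeric = delta_integral_q(lambda t: t, q, n_steps)
closed = (q ** (2 * n_steps) - 1) / (q + 1)      # geometric-sum closed form

# T = Z (mu = 1): int_0^5 t Delta t = 0 + 1 + 2 + 3 + 4
int_z = sum(1 * t for t in range(0, 5))
```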

The integration by parts formula on time scale is given by

$$\int_{a}^{\infty} u^{\Delta}(t) v^{\sigma}(t)\, \Delta t = u(t)v(t)\Big|_{a}^{\infty} - \int_{a}^{\infty} u(t) v^{\Delta}(t)\Delta t. \tag{13}$$

The Hölder inequality on the time scale is given by

$$\int_{a}^{\infty} f(t)g(t)\Delta t \le \left(\int_{a}^{\infty} f^{\gamma}(t)\Delta t\right)^{\frac{1}{\gamma}} \left(\int_{a}^{\infty} g^{\nu}(t)\Delta t\right)^{\frac{1}{\nu}},\tag{14}$$

where *γ* > 1, 1/*γ* + 1/*ν* = 1 and *f*, *g* ∈ *C<sub>rd</sub>*([*a*, ∞)<sub>T</sub>, R<sup>+</sup>). The inequality (14) is reversed for 0 < *γ* < 1. In the following, we will assume that 0 ∈ T and I = [0, ∞)<sub>T</sub>. Throughout this paper, we will assume that the functions in the statements of the theorems are rd-continuous and that the integrals considered exist and are finite. In addition, in our proofs, we will use the conventions 0 · ∞ = 0 and 0/0 = 0. Throughout the paper, we assume that 1 < *p* < ∞ and I is a fixed finite interval from [0, ∞)<sub>T</sub>. We define the time scale interval [*a*, *b*]<sub>T</sub> by [*a*, *b*]<sub>T</sub> := [*a*, *b*] ∩ T. A weight *ω* defined on T is a ∆−integrable function taking non-negative real values. We consider the norm on *L*<sup>*p*</sup>(T) of the form

$$\|\omega\|\_{L^p(\mathbb{T})} := \left(\int\_0^\infty |\omega(s)|^p \Delta s\right)^{1/p} < \infty.$$

A non-negative $\Delta$-integrable function $\omega$ belongs to the Muckenhoupt class $\mathcal{A}_1(\mathcal{C})$ on the fixed interval $\mathbb{I} = [0,\infty)_{\mathbb{T}}$ if there exists a constant $\mathcal{C} > 1$ such that the inequality

$$\frac{1}{|\hat{J}|} \int_{\hat{J}} \omega(x)\,\Delta x \le \mathcal{C} \operatorname*{ess\,inf}_{x \in \hat{J}} \omega(x) \tag{15}$$

holds for every subinterval $\hat{J} \subset \mathbb{I}$. A non-negative $\Delta$-integrable function $\omega$ belongs to the Muckenhoupt class $\mathcal{A}_p(\mathcal{C})$ for $p > 1$ if there exists a constant $\mathcal{C} > 1$ such that the inequality

$$\left(\frac{1}{|\hat{J}|} \int_{\hat{J}} \omega(x)\,\Delta x\right) \left(\frac{1}{|\hat{J}|} \int_{\hat{J}} \omega^{\frac{-1}{p-1}}(x)\,\Delta x\right)^{p-1} \le \mathcal{C} \tag{16}$$

holds for every subinterval $\hat{J} \subset \mathbb{I}$. For a given exponent $p > 1$, we define the $\mathcal{A}_p$-norm of a non-negative $\Delta$-integrable weight $\omega$ by the following quantity:

$$[\mathcal{A}_p(\omega)] := \sup_{\hat{J} \subset \mathbb{I}} \left(\frac{1}{|\hat{J}|} \int_{\hat{J}} \omega(x)\,\Delta x\right) \left(\frac{1}{|\hat{J}|} \int_{\hat{J}} \omega^{\frac{-1}{p-1}}(x)\,\Delta x\right)^{p-1},$$

where the supremum is taken over all intervals $\hat{J} \subset \mathbb{I}$. Note that, by Hölder's inequality, $[\mathcal{A}_p(\omega)] \ge 1$ for all $1 < p < \infty$, and the following inclusion is true:

> if $1 < p \le q < \infty$, then $\mathcal{A}_1 \subset \mathcal{A}_p \subset \mathcal{A}_q$ and $[\mathcal{A}_q(\omega)] \le [\mathcal{A}_p(\omega)]$.

For any non-negative function $f : \mathbb{I} \to \mathbb{R}^{+}$, we define the operator $\mathcal{H}f : [0,\infty)_{\mathbb{T}} \to \mathbb{R}^{+}$ by

$$\mathcal{H}f(t) = \frac{1}{t} \int_0^t f(s)\,\Delta s, \text{ for all } t \in \mathbb{I}. \tag{17}$$


From the definition of H, we see that if *f* is nondecreasing, then

$$
\mathcal{H}f(t) = \frac{1}{t} \int\_0^t f(s) \Delta s \le \frac{1}{t} \int\_0^t f(t) \Delta s = f(t).
$$

Additionally, we have determined by using the above inequality that

$$(\mathcal{H}f(t))^\Delta = \frac{1}{\sigma(t)}[f(t) - \mathcal{H}f(t)] \ge 0, \quad \text{for } t \in \mathbb{I}.$$

Furthermore, if *f* is nonincreasing, we have that

$$\mathcal{H}f(t) = \frac{1}{t} \int_0^t f(s)\,\Delta s \ge \frac{1}{t} \int_0^t f(t)\,\Delta s = f(t)$$

and

$$(\mathcal{H}f(t))^\Delta = \frac{1}{\sigma(t)}[f(t) - \mathcal{H}f(t)] \le 0, \quad \text{for } t \in \mathbb{I}.$$
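These properties of $\mathcal{H}f$ can be observed directly on $\mathbb{T} = \mathbb{Z}$, where $\mathcal{H}f(t) = \frac{1}{t}\sum_{s=0}^{t-1} f(s)$ and $\sigma(t) = t+1$. The sketch below is an illustrative aside; the sample monotone functions are assumed choices:

```python
# Properties of Hf(t) = (1/t) * sum_{s=0}^{t-1} f(s) on T = Z:
# Hf <= f for nondecreasing f, Hf >= f for nonincreasing f, and the
# forward difference of Hf equals [f(t) - Hf(t)] / sigma(t).
def H(f, t):
    return sum(f(s) for s in range(t)) / t        # defined for t >= 1

f_up   = lambda s: s + 1          # nondecreasing sample
f_down = lambda s: 1 / (s + 1)    # nonincreasing sample

ok = all(H(f_up, t) <= f_up(t) and H(f_down, t) >= f_down(t)
         and abs((H(f_up, t + 1) - H(f_up, t))
                 - (f_up(t) - H(f_up, t)) / (t + 1)) < 1e-12
         for t in range(1, 50))
print(ok)   # True
```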

From these facts, we have the following properties of H *f* .

#### **Lemma 1.**

*If f is non-negative and nondecreasing on $\mathbb{I}$, then $\mathcal{H}f(t) \le f(t)$ for all $t \in \mathbb{I}$ and $\mathcal{H}f$ is nondecreasing on $\mathbb{I}$.*

#### **Lemma 2.**

*If f is non-negative and nonincreasing on $\mathbb{I}$, then $\mathcal{H}f(t) \ge f(t)$ for all $t \in \mathbb{I}$ and $\mathcal{H}f$ is nonincreasing on $\mathbb{I}$.*

**Remark 1.** *As a consequence of Lemma 1, we notice that if f is non-negative and nondecreasing, then $\mathcal{H}f^q \le f^q$. We also notice from Lemma 1 that if f is non-negative and nondecreasing, then $\mathcal{H}f^q$ is also non-negative and nondecreasing for $q > 1$.*

**Remark 2.** *As a consequence of Lemma 2, we notice that if f is non-negative and nonincreasing, then $\mathcal{H}f^q \ge f^q$. We also notice from Lemma 2 that if f is non-negative and nonincreasing, then $\mathcal{H}f^q$ is also non-negative and nonincreasing for $q > 1$.*

In what follows, we will define $f^\sigma$, $\mathcal{H}^\sigma f$ and $\mathcal{H}[\mathcal{H}^\sigma f]^p$, where $\sigma$ is the forward jump operator, by $f^\sigma(t) = (f \circ \sigma)(t)$,

$$\mathcal{H}^{\sigma}f(t) = \frac{1}{\sigma(t)} \int_{0}^{\sigma(t)} f(x)\,\Delta x, \text{ for } t \in \mathbb{I},$$

and

$$\mathcal{H}[\mathcal{H}^{\sigma}f]^p(t) = \frac{1}{t} \int_0^t \left(\frac{1}{\sigma(s)} \int_0^{\sigma(s)} f(x)\,\Delta x\right)^p \Delta s, \text{ for } t \in \mathbb{T}.$$

**Theorem 1.** *Assume that f is non-negative and nondecreasing on* I*. If s* ≥ *r* > 0, *then*

$$\int_0^{\sigma(t)} [f(x)]^{r/s} [\mathcal{H}^\sigma f(x)]^{-s-\frac{r}{s}} \Delta x \le \left(\frac{s+1}{s}\right)^{r/s} \int_0^{\sigma(t)} [\mathcal{H}^\sigma f(x)]^{-s} \Delta x \tag{18}$$

*for any t* ∈ I*.*
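Before turning to the proof, inequality (18) can be sanity-checked numerically on $\mathbb{T} = \mathbb{Z}$, where $\mathcal{H}^\sigma f(t) = \frac{1}{t+1}\sum_{x=0}^{t} f(x)$ and $\int_0^{\sigma(t)}$ is a sum over $x = 0, \ldots, t$. The sketch below is an illustrative aside; the nondecreasing sample $f(x) = x+1$ and the parameter grid are assumed choices:

```python
# Numerical check of inequality (18) on T = Z for a sample nondecreasing f.
def Hs(f, t):                      # H^sigma f(t) on T = Z
    return sum(f(x) for x in range(t + 1)) / (t + 1)

def check_thm1(f, t, s, r):        # requires s >= r > 0
    lhs = sum(f(x)**(r/s) * Hs(f, x)**(-s - r/s) for x in range(t + 1))
    rhs = ((s + 1)/s)**(r/s) * sum(Hs(f, x)**(-s) for x in range(t + 1))
    return lhs <= rhs

f = lambda x: x + 1                # illustrative nondecreasing sample
print(all(check_thm1(f, t, s, r) for t in range(10)
          for (s, r) in [(1, 1), (2, 1), (3, 0.5), (2.5, 2.5)]))
```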

**Proof.** First, we consider the case when *s* = *r* and prove that

$$\int\_0^{\sigma(t)} f(\mathbf{x}) [\mathcal{H}^\sigma f(\mathbf{x})]^{-s-1} \Delta \mathbf{x} \le \left(\frac{s+1}{s}\right) \int\_0^{\sigma(t)} [\mathcal{H}^\sigma f(\mathbf{x})]^{-s} \Delta \mathbf{x}.$$

For brevity, we write $F = \mathcal{H}f$. By employing the integration by parts formula (13) with $u(t) = \sigma(t)$ and $v(t) = F^{-s}(t)$, we obtain

$$\int_0^{\sigma(t)} (F^{\sigma}(x))^{-s} \Delta x = \left. \sigma(x) F^{-s}(x) \right|_0^{\sigma(t)} - \int_0^{\sigma(t)} \sigma(x) (F^{-s}(x))^{\Delta} \Delta x$$

$$= \sigma^{\sigma}(t) (F^{\sigma}(t))^{-s} - \int_0^{\sigma(t)} \sigma(x) (F^{-s}(x))^{\Delta} \Delta x \tag{19}$$

$$\geq \ -\int_{0}^{\sigma(t)} \sigma(x) (F^{-s}(x))^{\Delta} \Delta x. \tag{20}$$

By the chain rule (12), we see that

$$\begin{aligned} \left(F^{-s}\right)^{\Delta} &= \quad -sF^{\Delta} \int\_0^1 \frac{dh}{(hF^{\sigma} + (1-h)F)^{s+1}} \\ &\leq \quad -sF^{\Delta} \int\_0^1 (hF^{\sigma} + (1-h)F^{\sigma})^{-s-1} dh = \quad -sF^{\Delta}(F^{\sigma})^{-s-1}. \end{aligned}$$

Substituting the last inequality into (20), we obtain

$$\begin{split} \int_{0}^{\sigma(t)} (F^{\sigma}(x))^{-s} \Delta x &\geq \quad s \int_{0}^{\sigma(t)} \sigma(x) F^{\Delta}(x) (F^{\sigma}(x))^{-s-1} \Delta x \\ &\geq \quad s \int_{0}^{\sigma(t)} x F^{\Delta}(x) (F^{\sigma}(x))^{-s-1} \Delta x. \end{split} \tag{21}$$

Moreover, since

$$tF(t) = \int_0^t f(x)\,\Delta x,$$

the product rule gives

$$tF^{\Delta}(t) + F^{\sigma}(t) = f(t). \tag{22}$$

Substituting (22) into (21), we obtain

$$\int\_0^{\sigma(t)} (F^{\sigma}(\mathbf{x}))^{-s} \Delta \mathbf{x} \ge s \int\_0^{\sigma(t)} f(\mathbf{x}) (F^{\sigma}(\mathbf{x}))^{-s-1} \Delta \mathbf{x} - s \int\_0^{\sigma(t)} (F^{\sigma}(\mathbf{x}))^{-s} \Delta \mathbf{x} \,.$$

By combining like terms, we obtain

$$\int_0^{\sigma(t)} f(x) (F^{\sigma}(x))^{-s-1} \Delta x \le \left(\frac{s+1}{s}\right) \int_0^{\sigma(t)} (F^{\sigma}(x))^{-s} \Delta x,$$

which proves the inequality (18) when $s = r$. Now, consider the case when $s \ne r$ and fix $r \in (0, s)$. Then, by applying Hölder's inequality (14) with $s/r$ and $s/(s-r)$, we obtain

$$\begin{split} &\quad \int\_{0}^{\sigma(t)} [f(\mathbf{x})]^{r/s} (F^{\sigma}(\mathbf{x}))^{-r-\frac{r}{s}} (F^{\sigma}(\mathbf{x}))^{-s+r} \Delta \mathbf{x} \\ &\leq \quad \left[ \int\_{0}^{\sigma(t)} f(\mathbf{x}) (F^{\sigma}(\mathbf{x}))^{-s-1} \Delta \mathbf{x} \right]^{r/s} \left[ \int\_{0}^{\sigma(t)} (F^{\sigma}(\mathbf{x}))^{-s} \Delta \mathbf{x} \right]^{1-\frac{r}{s}} \\ &\leq \quad \left( \frac{s+1}{s} \right)^{r/s} \left[ \int\_{0}^{\sigma(t)} (F^{\sigma}(\mathbf{x}))^{-s} \Delta \mathbf{x} \right]^{r/s} \left[ \int\_{0}^{\sigma(t)} (F^{\sigma}(\mathbf{x}))^{-s} \Delta \mathbf{x} \right]^{1-\frac{r}{s}} \\ &= \quad \left( \frac{s+1}{s} \right)^{r/s} \int\_{0}^{\sigma(t)} (F^{\sigma}(\mathbf{x}))^{-s} \Delta \mathbf{x} . \end{split}$$

which is the desired inequality (18). The proof is complete.

**Theorem 2.** *Assume that f is non-negative and nondecreasing on* I*. If s* ≥ *r* > 0, *then*

$$\int\_0^{\sigma(t)} [\mathcal{H}^\sigma f(\mathbf{x})]^{-s} \Delta \mathbf{x} \le \left(\frac{s+1}{s}\right)^r \int\_0^{\sigma(t)} f^{-r}(\mathbf{x}) [\mathcal{H}^\sigma f(\mathbf{x})]^{-s+r} \Delta \mathbf{x}.\tag{23}$$
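Inequality (23) admits the same kind of numerical sanity check on $\mathbb{T} = \mathbb{Z}$. The sketch below is an illustrative aside; the sample $f(x) = x+1$ and the $(s, r)$ grid are assumed choices:

```python
# Numerical check of inequality (23) on T = Z, with
# H^sigma f(t) = (1/(t+1)) * sum_{x=0}^{t} f(x).
def Hs(f, t):
    return sum(f(x) for x in range(t + 1)) / (t + 1)

def check_thm2(f, t, s, r):        # requires s >= r > 0
    lhs = sum(Hs(f, x)**(-s) for x in range(t + 1))
    rhs = ((s + 1)/s)**r * sum(f(x)**(-r) * Hs(f, x)**(-s + r)
                               for x in range(t + 1))
    return lhs <= rhs

f = lambda x: x + 1                # illustrative nondecreasing sample
print(all(check_thm2(f, t, s, r) for t in range(10)
          for (s, r) in [(1, 1), (2, 1), (3, 0.5)]))
```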

**Proof.** From the elementary inequality (see Elliott [46])

$$sy^{s+1} - (s+1)y^s \ge -1,\tag{24}$$

for every *y* ≥ 0 and *s* > 0, we deduce by using *y* = *y*1/*y*2, where *y*1, *y*<sup>2</sup> > 0, that

$$y\_1^{-s} + sy\_1y\_2^{-s-1} - (s+1)y\_2^{-s} \ge 0. \tag{25}$$
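Both (24) and its two-variable consequence (25) are easy to probe on a grid. The sketch below is an illustrative aside (the grid values are arbitrary; a small relative tolerance absorbs floating-point rounding near the equality case $y_1 = y_2$):

```python
# Grid check of Elliott's inequality (24) and its consequence (25).
def elliott_ok(y, s):              # (24): s*y^(s+1) - (s+1)*y^s >= -1
    return s * y**(s + 1) - (s + 1) * y**s >= -1 - 1e-12

def two_var_ok(y1, y2, s):         # (25), with a relative tolerance
    a, b, c = y1**(-s), s * y1 * y2**(-s - 1), (s + 1) * y2**(-s)
    return a + b - c >= -1e-9 * (a + b + c)

ys = [0.0, 0.1, 0.5, 1.0, 2.0, 7.5]
ss = [0.5, 1.0, 2.0, 3.5]
print(all(elliott_ok(y, s) for y in ys for s in ss) and
      all(two_var_ok(y1, y2, s)
          for y1 in ys[1:] for y2 in ys[1:] for s in ss))   # True
```

Equality in (24) occurs exactly at $y = 1$, which is why the minimum value $-1$ is attained on the grid.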

Now, by defining

$$y_1 := \left(\frac{s}{s+1}\right)^{1+\frac{r}{s}} f^{r/s}(t) \left[\mathcal{H}^\sigma f(t)\right]^{1-\frac{r}{s}}, \text{ and } y_2 := \left(\frac{s}{s+1}\right) \mathcal{H}^\sigma f(t),$$

we obtain

$$y_1^{-s} := \left(\frac{s}{s+1}\right)^{-s-r} f^{-r}(t) [\mathcal{H}^\sigma f(t)]^{-s+r}, \text{ and } y_2^{-s} := \left(\frac{s}{s+1}\right)^{-s} [\mathcal{H}^\sigma f(t)]^{-s},$$

and then

$$y_1 y_2^{-s-1} := \left(\frac{s}{s+1}\right)^{-s+\frac{r}{s}} f^{r/s}(t) [\mathcal{H}^\sigma f(t)]^{-s-\frac{r}{s}}.$$

By using these values in (25), we have

$$\begin{aligned} &\left(\frac{s}{s+1}\right)^{-s-r} f^{-r}(t)[\mathcal{H}^{\sigma}f(t)]^{-s+r} + s\left(\frac{s}{s+1}\right)^{-s+\frac{r}{s}} f^{r/s}(t)[\mathcal{H}^{\sigma}f(t)]^{-s-\frac{r}{s}} \\ &\geq \quad (s+1)\left(\frac{s}{s+1}\right)^{-s}[\mathcal{H}^{\sigma}f(t)]^{-s}. \end{aligned} \tag{26}$$

By integrating (26) from 0 to *σ*(*t*), we obtain

$$\begin{aligned} & \left(\frac{s+1}{s}\right)^r \int\_0^{\sigma(t)} f^{-r}(\mathbf{x}) [\mathcal{H}^\sigma f(\mathbf{x})]^{-s+r} \Delta \mathbf{x} \\ &+ s \left(\frac{s}{s+1}\right)^{\frac{r}{s}} \int\_0^{\sigma(t)} f^{r/s}(\mathbf{x}) [\mathcal{H}^\sigma f(\mathbf{x})]^{-s-\frac{r}{s}} \Delta \mathbf{x} \\ &\geq \quad (s+1) \int\_0^{\sigma(t)} [\mathcal{H}^\sigma f(\mathbf{x})]^{-s} \Delta \mathbf{x} . \end{aligned} \tag{27}$$

Now, by applying Theorem 1 on the term

$$\int_0^{\sigma(t)} f^{r/s}(x) [\mathcal{H}^\sigma f(x)]^{-s-\frac{r}{s}} \Delta x,$$

we obtain

$$\int\_0^{\sigma(t)} f^{r/s}(\mathbf{x}) [\mathcal{H}^\sigma f(\mathbf{x})]^{-s-\frac{r}{s}} \Delta \mathbf{x} \le \left(\frac{s+1}{s}\right)^{r/s} \int\_0^{\sigma(t)} [\mathcal{H}^\sigma f(\mathbf{x})]^{-s} \Delta \mathbf{x}.\tag{28}$$

Comparing (27) and (28), we have

$$\begin{split} & \left( \frac{s+1}{s} \right)^{r} \int\_{0}^{\sigma(t)} f^{-r}(\mathbf{x}) [\mathcal{H}^{\sigma} f(\mathbf{x})]^{-s+r} \Delta \mathbf{x} \\ & \quad + s \left( \frac{s}{s+1} \right)^{r/s} \left( \frac{s+1}{s} \right)^{r/s} \int\_{0}^{\sigma(t)} [\mathcal{H}^{\sigma} f(\mathbf{x})]^{-s} \Delta \mathbf{x} \\ \geq & \quad (s+1) \int\_{0}^{\sigma(t)} [\mathcal{H}^{\sigma} f(\mathbf{x})]^{-s} \Delta \mathbf{x}. \end{split} \tag{29}$$

By combining like terms in the last inequality, we conclude that

$$\int\_0^{\sigma(t)} [\mathcal{H}^\sigma f(\mathbf{x})]^{-s} \Delta \mathbf{x} \le \left(\frac{s+1}{s}\right)^r \int\_0^{\sigma(t)} f^{-r}(\mathbf{x}) [\mathcal{H}^\sigma f(\mathbf{x})]^{-s+r} \Delta \mathbf{x},\tag{30}$$

which is the desired inequality (23). The proof is complete.

**Theorem 3.** *Assume that f is non-negative and nondecreasing on* I. *If* 0 < *r*<sup>1</sup> < *r*<sup>2</sup> < *s*, *then*

$$\int\_0^{\sigma(t)} f^{-r\_1}(\mathbf{x}) [\mathcal{H}^\sigma f(\mathbf{x})]^{-s+r\_1} \Delta \mathbf{x} \le \left(\frac{s+1}{s}\right)^{r\_2-r\_1} \int\_0^{\sigma(t)} f^{-r\_2}(\mathbf{x}) [\mathcal{H}^\sigma f(\mathbf{x})]^{-s+r\_2} \Delta \mathbf{x}.\tag{31}$$
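Inequality (31) can also be probed numerically on $\mathbb{T} = \mathbb{Z}$ before the proof. The sketch below is an illustrative aside; the sample $f(x) = x+1$ and the $(s, r_1, r_2)$ triples are assumed choices satisfying $0 < r_1 < r_2 < s$:

```python
# Numerical check of inequality (31) on T = Z.
def Hs(f, t):                      # H^sigma f(t) on T = Z
    return sum(f(x) for x in range(t + 1)) / (t + 1)

def check_thm3(f, t, s, r1, r2):   # requires 0 < r1 < r2 < s
    lhs = sum(f(x)**(-r1) * Hs(f, x)**(-s + r1) for x in range(t + 1))
    rhs = ((s + 1)/s)**(r2 - r1) * sum(f(x)**(-r2) * Hs(f, x)**(-s + r2)
                                       for x in range(t + 1))
    return lhs <= rhs

f = lambda x: x + 1                # illustrative nondecreasing sample
print(all(check_thm3(f, t, s, r1, r2) for t in range(10)
          for (s, r1, r2) in [(2, 0.5, 1), (3, 1, 2), (1.5, 0.25, 1.0)]))
```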

**Proof.** By applying Hölder's inequality (14) with *r*2/*r*<sup>1</sup> and *r*2/(*r*<sup>2</sup> − *r*1) on the left-hand side of (31), we obtain

$$\begin{split} \int\_{0}^{\sigma(t)} f^{-r\_{1}}(\mathbf{x}) [\mathcal{H}^{\sigma} f(\mathbf{x})]^{-s+r\_{1}} \Delta \mathbf{x} &\leq \left( \int\_{0}^{\sigma(t)} f^{-r\_{2}}(\mathbf{x}) [\mathcal{H}^{\sigma} f(\mathbf{x})]^{-s+r\_{2}} \Delta \mathbf{x} \right)^{\frac{r\_{1}}{r\_{2}}} \\ &\times \left( \int\_{0}^{\sigma(t)} [\mathcal{H}^{\sigma} f(\mathbf{x})]^{-s} \Delta \mathbf{x} \right)^{1-\frac{r\_{1}}{r\_{2}}}. \end{split} \tag{32}$$

Now, by replacing *r* with *r*<sup>2</sup> in (30), we obtain

$$\int_0^{\sigma(t)} [\mathcal{H}^\sigma f(x)]^{-s} \Delta x \le \left(\frac{s+1}{s}\right)^{r_2} \int_0^{\sigma(t)} f^{-r_2}(x) [\mathcal{H}^\sigma f(x)]^{-s+r_2} \Delta x.\tag{33}$$

By combining (32) and (33), we see that

$$\begin{aligned} &\int\_0^{\sigma(t)} f^{-r\_1}(\mathbf{x}) [\mathcal{H}^\sigma f(\mathbf{x})]^{-s+r\_1} \Delta \mathbf{x} \\ &\le \quad \left(\frac{s+1}{s}\right)^{r\_2-r\_1} \int\_0^{\sigma(t)} f^{-r\_2}(\mathbf{x}) [\mathcal{H}^\sigma f(\mathbf{x})]^{-s+r\_2} \Delta \mathbf{x} \end{aligned}$$

which is the desired inequality (31). The proof is complete.

**Theorem 4.** *Assume that f is non-negative and nondecreasing on* I. *If s* ≥ *r* > 0, *then*

$$\begin{split} \frac{1}{\sigma(t)} \int\_{0}^{\sigma(t)} f^{r/s}(\mathbf{x}) [\mathcal{H}^{\sigma} f(\mathbf{x})]^{-s-\frac{r}{s}} \Delta \mathbf{x} &\leq \ \left(\frac{s+1}{s}\right)^{r/s} \frac{1}{\sigma(t)} \int\_{0}^{\sigma(t)} [\mathcal{H}^{\sigma} f(\mathbf{x})]^{-s} \Delta \mathbf{x} \\ &- \frac{r}{s^{2}} \left(\frac{s+1}{s}\right)^{r/s-1} [\mathcal{H}^{\sigma} f(t)]^{-s} .\end{split} \tag{34}$$
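The refined inequality (34) can likewise be checked numerically on $\mathbb{T} = \mathbb{Z}$. The sketch below is an illustrative aside; the sample $f(x) = x+1$ and the parameter grid are assumed choices, and a small tolerance absorbs floating-point rounding at the equality case $s = r$, $t = 0$:

```python
# Numerical check of inequality (34) on T = Z, where sigma(t) = t + 1.
def Hs(f, t):
    return sum(f(x) for x in range(t + 1)) / (t + 1)

def check_thm4(f, t, s, r):        # requires s >= r > 0
    n = t + 1                      # sigma(t) on T = Z
    lhs = sum(f(x)**(r/s) * Hs(f, x)**(-s - r/s) for x in range(n)) / n
    rhs = (((s + 1)/s)**(r/s) * sum(Hs(f, x)**(-s) for x in range(n)) / n
           - (r / s**2) * ((s + 1)/s)**(r/s - 1) * Hs(f, t)**(-s))
    return lhs <= rhs + 1e-12      # tolerance for the equality case

f = lambda x: x + 1                # illustrative nondecreasing sample
print(all(check_thm4(f, t, s, r) for t in range(10)
          for (s, r) in [(1, 1), (2, 1), (2.5, 2.5)]))
```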

**Proof.** We proceed as in the proof of Theorem 1 (without removing the term $\sigma^{\sigma}(t)(F^{\sigma}(t))^{-s}$) to obtain

$$\begin{split} &\quad \int\_{0}^{\sigma(t)} (F^{\sigma}(\mathbf{x}))^{-s} \Delta \mathbf{x} = \sigma^{\sigma}(t) (F^{\sigma}(t))^{-s} - \int\_{0}^{\sigma(t)} \sigma(\mathbf{x}) (F^{-s}(\mathbf{x}))^{\Delta} \Delta \mathbf{x} \\ &\geq \quad \sigma(t) (F^{\sigma}(t))^{-s} - \int\_{0}^{\sigma(t)} \sigma(\mathbf{x}) (F^{-s}(\mathbf{x}))^{\Delta} \Delta \mathbf{x} \\ &\geq \quad \sigma(t) (F^{\sigma}(t))^{-s} + s \int\_{0}^{\sigma(t)} \sigma(\mathbf{x}) F^{\Delta}(\mathbf{x}) (F^{\sigma}(\mathbf{x}))^{-s-1} \Delta \mathbf{x} \\ &\geq \quad \sigma(t) (F^{\sigma}(t))^{-s} + s \int\_{0}^{\sigma(t)} x F^{\Delta}(\mathbf{x}) (F^{\sigma}(\mathbf{x}))^{-s-1} \Delta \mathbf{x} \\ &\geq \quad \sigma(t) (F^{\sigma}(t))^{-s} + s \int\_{0}^{\sigma(t)} f(\mathbf{x}) (F^{\sigma}(\mathbf{x}))^{-s-1} \Delta \mathbf{x} - s \int\_{0}^{\sigma(t)} (F^{\sigma}(\mathbf{x}))^{-s} \Delta \mathbf{x}. \end{split}$$

By combining like terms, we obtain

$$\int\_0^{\sigma(t)} f(\mathbf{x}) (F^{\sigma}(\mathbf{x}))^{-s-1} \Delta \mathbf{x} \le \left(\frac{s+1}{s}\right) \int\_0^{\sigma(t)} (F^{\sigma}(\mathbf{x}))^{-s} \Delta \mathbf{x} - \frac{1}{s} \sigma(t) (F^{\sigma}(t))^{-s}.\tag{35}$$

If we fix *r* ∈ (0,*s*) then by applying Hölder's inequality with *s*/*r* and *s*/(*s* − *r*), we obtain

$$\begin{split} &\quad \int\_{0}^{\sigma(t)} f^{r/s}(\mathbf{x}) (F^{\sigma}(\mathbf{x}))^{-r-r/s} (F^{\sigma}(\mathbf{x}))^{-s+r} \Delta \mathbf{x} \\ &\quad \le \quad \left[ \int\_{0}^{\sigma(t)} f(\mathbf{x}) (F^{\sigma}(\mathbf{x}))^{-s-1} \Delta \mathbf{x} \right]^{r/s} \left[ \int\_{0}^{\sigma(t)} (F^{\sigma}(\mathbf{x}))^{-s} \Delta \mathbf{x} \right]^{1-\frac{r}{s}} \\ &\quad \le \quad \left[ \left( \frac{s+1}{s} \right) \int\_{0}^{\sigma(t)} (F^{\sigma}(\mathbf{x}))^{-s} \Delta \mathbf{x} - \frac{1}{s} \sigma(t) (F^{\sigma}(t))^{-s} \right]^{r/s} \\ &\quad \times \left[ \int\_{0}^{\sigma(t)} (F^{\sigma}(\mathbf{x}))^{-s} \Delta \mathbf{x} \right]^{1-\frac{r}{s}} . \end{split} \tag{36}$$

Now, in order to complete the proof, we shall utilize the inequality

$$(u+v)^{\gamma} \le u^{\gamma} + \gamma u^{\gamma - 1}v,\text{ where } 0 < \gamma < 1, \tag{37}$$

which is a variant of the well-known Bernoulli inequality. This inequality is valid for all $u \ge 0$ and $u + v \ge 0$, or $u > 0$ and $u + v > 0$, and equality holds if and only if $v = 0$. Now, by employing (37) with $\gamma = r/s < 1$,

$$u := \left(\frac{s+1}{s}\right) \int_0^{\sigma(t)} (F^{\sigma}(x))^{-s} \Delta x, \text{ and } v := -\frac{1}{s} \sigma(t) (F^{\sigma}(t))^{-s},$$

and noting that

$$\left(\frac{s+1}{s}\right) \int\_0^{\sigma(t)} (F^{\sigma}(\mathbf{x}))^{-s} \Delta \mathbf{x} - \frac{1}{s} \sigma(t) (F^{\sigma}(t))^{-s} > 0,$$

we obtain

$$\begin{split} & \quad \left[ \left( \frac{s+1}{s} \right) \int\_{0}^{\sigma(t)} (F^{\sigma}(\mathbf{x}))^{-s} \Delta \mathbf{x} - \frac{1}{s} \sigma(t) (F^{\sigma}(t))^{-s} \right]^{r/s} \\ & \leq \quad \left( \frac{s+1}{s} \right)^{r/s} \left[ \int\_{0}^{\sigma(t)} (F^{\sigma}(\mathbf{x}))^{-s} \Delta \mathbf{x} \right]^{r/s} \\ & \quad \quad - \frac{r}{s} \left( \frac{s+1}{s} \right)^{r/s-1} \times \frac{1}{s} \sigma(t) (F^{\sigma}(t))^{-s} \left[ \int\_{0}^{\sigma(t)} (F^{\sigma}(\mathbf{x}))^{-s} \Delta \mathbf{x} \right]^{r/s-1} \\ & = \quad \left( \frac{s+1}{s} \right)^{r/s} \left[ \int\_{0}^{\sigma(t)} (F^{\sigma}(\mathbf{x}))^{-s} \Delta \mathbf{x} \right]^{r/s} \\ & \quad \quad - \frac{r}{s^{2}} \left( \frac{s+1}{s} \right)^{r/s-1} \left[ \int\_{0}^{\sigma(t)} (F^{\sigma}(\mathbf{x}))^{-s} \Delta \mathbf{x} \right]^{r/s-1} \sigma(t) (F^{\sigma}(t))^{-s} . \end{split}$$

Substituting the last inequality into (36), we obtain

$$\begin{split} &\quad \int_{0}^{\sigma(t)} f^{r/s}(x) (F^{\sigma}(x))^{-s-r/s} \Delta x \\ &\leq \quad \left( \int_{0}^{\sigma(t)} (F^{\sigma}(x))^{-s} \Delta x \right)^{1-\frac{r}{s}} \left[ \left( \frac{s+1}{s} \right)^{r/s} \left( \int_{0}^{\sigma(t)} (F^{\sigma}(x))^{-s} \Delta x \right)^{r/s} \right. \\ &\left. \quad - \frac{r}{s^{2}} \left( \frac{s+1}{s} \right)^{r/s-1} \left[ \int_{0}^{\sigma(t)} (F^{\sigma}(x))^{-s} \Delta x \right]^{r/s-1} \sigma(t) (F^{\sigma}(t))^{-s} \right] \\ &= \quad \left( \frac{s+1}{s} \right)^{r/s} \int_{0}^{\sigma(t)} (F^{\sigma}(x))^{-s} \Delta x - \frac{r}{s^{2}} \left( \frac{s+1}{s} \right)^{r/s-1} \sigma(t) (F^{\sigma}(t))^{-s}, \end{split}$$

and dividing both sides by $\sigma(t)$ gives the desired inequality (34). The proof is complete.

**Theorem 5.** *Assume that f is non-negative and nondecreasing on* I*. If s* ≥ *r* > 0, *then*

$$\begin{split} &\int\_{0}^{\sigma(t)} [\mathcal{H}^{\sigma}f(\mathbf{x})]^{-s} \Delta \mathbf{x} \\ &\leq \quad \left(\frac{s+1}{s}\right)^{r} \int\_{0}^{\sigma(t)} f^{-r}(\mathbf{x}) [\mathcal{H}^{\sigma}f(\mathbf{x})]^{-s+r} \Delta \mathbf{x} - \frac{r}{s+1} \sigma(t) [\mathcal{H}^{\sigma}f(t)]^{-s} .\end{split} \tag{38}$$
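Inequality (38) can be probed numerically on $\mathbb{T} = \mathbb{Z}$ in the same way. The sketch below is an illustrative aside; the sample $f(x) = x+1$ and the $(s, r)$ grid are assumed choices:

```python
# Numerical check of inequality (38) on T = Z, where sigma(t) = t + 1.
def Hs(f, t):
    return sum(f(x) for x in range(t + 1)) / (t + 1)

def check_thm5(f, t, s, r):        # requires s >= r > 0
    lhs = sum(Hs(f, x)**(-s) for x in range(t + 1))
    rhs = (((s + 1)/s)**r * sum(f(x)**(-r) * Hs(f, x)**(-s + r)
                                for x in range(t + 1))
           - (r / (s + 1)) * (t + 1) * Hs(f, t)**(-s))
    return lhs <= rhs + 1e-12      # tolerance for floating-point rounding

f = lambda x: x + 1                # illustrative nondecreasing sample
print(all(check_thm5(f, t, s, r) for t in range(10)
          for (s, r) in [(1, 1), (2, 1), (3, 0.5)]))
```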

**Proof.** We proceed as in the proof of Theorem 2, so we have from (27) that

$$\begin{aligned} &\left(\frac{s+1}{s}\right)^r \int\_0^{\sigma(t)} f^{-r}(\mathbf{x}) [\mathcal{H}^\sigma f(\mathbf{x})]^{-s+r} \Delta \mathbf{x} \\ &+ s \left(\frac{s}{s+1}\right)^{\frac{r}{s}} \int\_0^{\sigma(t)} f^{r/s}(\mathbf{x}) [\mathcal{H}^\sigma f(\mathbf{x})]^{-s-\frac{r}{s}} \Delta \mathbf{x} \\ \geq & \quad (s+1) \int\_0^{\sigma(t)} [\mathcal{H}^\sigma f(\mathbf{x})]^{-s} \Delta \mathbf{x}. \end{aligned}$$

By applying Theorem 4, we obtain

$$\begin{aligned} \int\_0^{\sigma(t)} f^{r/s}(\mathbf{x}) [\mathcal{H}^\sigma f(\mathbf{x})]^{-s-\frac{r}{s}} \Delta \mathbf{x} &\leq \quad \left(\frac{s+1}{s}\right)^{r/s} \int\_0^{\sigma(t)} [\mathcal{H}^\sigma f(\mathbf{x})]^{-s} \Delta \mathbf{x} \\ &- \frac{r}{s^2} \left(\frac{s+1}{s}\right)^{r/s-1} \sigma(t) [\mathcal{H}^\sigma f(t)]^{-s} .\end{aligned}$$

and then

$$\begin{split} & \quad \left(\frac{s+1}{s}\right)^r \int\_0^{\sigma(t)} f^{-r}(\mathbf{x}) [\mathcal{H}^\sigma f(\mathbf{x})]^{-s+r} \Delta \mathbf{x} \\ & \quad + s \left(\frac{s}{s+1}\right)^{\frac{r}{s}} \left(\frac{s+1}{s}\right)^{r/s} \int\_0^{\sigma(t)} [\mathcal{H}^\sigma f(\mathbf{x})]^{-s} \Delta \mathbf{x} \\ & \quad - s \left(\frac{s}{s+1}\right)^{\frac{r}{s}} \frac{r}{s^2} \left(\frac{s+1}{s}\right)^{r/s-1} \sigma(t) [\mathcal{H}^\sigma f(t)]^{-s} \\ & \ge \quad (s+1) \int\_0^{\sigma(t)} [\mathcal{H}^\sigma f(\mathbf{x})]^{-s} \Delta \mathbf{x} . \end{split}$$

By combining like terms, we obtain

$$\begin{aligned} & \quad \left(\frac{s+1}{s}\right)^r \int\_0^{\sigma(t)} f^{-r}(\mathbf{x}) [\mathcal{H}^\sigma f(\mathbf{x})]^{-s+r} \Delta \mathbf{x} - \frac{r}{(s+1)} \sigma(t) [\mathcal{H}^\sigma f(t)]^{-s} \\ & \ge \quad \int\_0^{\sigma(t)} [\mathcal{H}^\sigma f(\mathbf{x})]^{-s} \Delta \mathbf{x}, \end{aligned} \tag{39}$$

which is the desired inequality (38). The proof is complete.

**Theorem 6.** *Assume that f is non-negative and nondecreasing on* I*. If* 0 < *r*<sup>1</sup> < *r*<sup>2</sup> < *s*, *then*

$$\begin{split} &\int_{0}^{\sigma(t)} f^{-r_1}(x) [\mathcal{H}^\sigma f(x)]^{-s+r_1} \Delta x + \frac{(r_2 - r_1)s^{r_1}}{(s+1)^{1+r_1}} \sigma(t) [\mathcal{H}^\sigma f(t)]^{-s} \\ &\leq \quad \left(\frac{s+1}{s}\right)^{r_2 - r_1} \int_{0}^{\sigma(t)} f^{-r_2}(x) [\mathcal{H}^\sigma f(x)]^{-s+r_2} \Delta x. \end{split} \tag{40}$$
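Inequality (40) can also be checked numerically on $\mathbb{T} = \mathbb{Z}$. The sketch below is an illustrative aside; the sample $f(x) = x+1$ and the $(s, r_1, r_2)$ triples are assumed choices with $0 < r_1 < r_2 < s$:

```python
# Numerical check of inequality (40) on T = Z, where sigma(t) = t + 1.
def Hs(f, t):
    return sum(f(x) for x in range(t + 1)) / (t + 1)

def check_thm6(f, t, s, r1, r2):   # requires 0 < r1 < r2 < s
    lhs = (sum(f(x)**(-r1) * Hs(f, x)**(-s + r1) for x in range(t + 1))
           + (r2 - r1) * s**r1 / (s + 1)**(1 + r1)
             * (t + 1) * Hs(f, t)**(-s))
    rhs = ((s + 1)/s)**(r2 - r1) * sum(f(x)**(-r2) * Hs(f, x)**(-s + r2)
                                       for x in range(t + 1))
    return lhs <= rhs + 1e-12      # tolerance for floating-point rounding

f = lambda x: x + 1                # illustrative nondecreasing sample
print(all(check_thm6(f, t, s, r1, r2) for t in range(10)
          for (s, r1, r2) in [(2, 0.5, 1), (3, 1, 2)]))
```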

**Proof.** By applying Hölder's inequality with *r*2/*r*<sup>1</sup> and *r*2/(*r*<sup>2</sup> − *r*1) on the left hand side of (40), we obtain

$$\begin{split} \int\_{0}^{\sigma(t)} f^{-r\_{1}}(\mathbf{x}) [\mathcal{H}^{\sigma} f(\mathbf{x})]^{-s+r\_{1}} \Delta \mathbf{x} &\leq \left( \int\_{0}^{\sigma(t)} f^{-r\_{2}}(\mathbf{x}) [\mathcal{H}^{\sigma} f(\mathbf{x})]^{-s+r\_{2}} \Delta \mathbf{x} \right)^{\frac{r\_{1}}{r\_{2}}} \\ &\times \left( \int\_{0}^{\sigma(t)} [\mathcal{H}^{\sigma} f(\mathbf{x})]^{-s} \Delta \mathbf{x} \right)^{1-\frac{r\_{1}}{r\_{2}}}. \end{split} \tag{41}$$

Now, by replacing *r* with *r*<sup>2</sup> in (39), we obtain

$$\begin{split} \int\_{0}^{\sigma(t)} [\mathcal{H}^{\sigma} f(\mathbf{x})]^{-s} \Delta \mathbf{x} &\leq \ \ \left( \frac{s+1}{s} \right)^{r\_{2}} \int\_{0}^{\sigma(t)} f^{-r\_{2}}(\mathbf{x}) [\mathcal{H}^{\sigma} f(\mathbf{x})]^{-s+r\_{2}} \Delta \mathbf{x} \\ &\quad - \frac{r\_{2}}{(s+1)} \sigma(t) [\mathcal{H}^{\sigma} f(t)]^{-s} . \end{split} \tag{42}$$

By combining (41) and (42), we obtain

$$\begin{split} \int\_{0}^{\sigma(t)} f^{-r\_{1}}(\mathbf{x}) [\mathcal{H}^{\sigma} f(\mathbf{x})]^{-s+r\_{1}} \Delta \mathbf{x} &\leq \ \left( \int\_{0}^{\sigma(t)} f^{-r\_{2}}(\mathbf{x}) [\mathcal{H}^{\sigma} f(\mathbf{x})]^{-s+r\_{2}} \Delta \mathbf{x} \right)^{\frac{r\_{1}}{r\_{2}}} \times \\ &\quad \left[ \left( \frac{s+1}{s} \right)^{r\_{2}} \int\_{0}^{\sigma(t)} f^{-r\_{2}}(\mathbf{x}) [\mathcal{H}^{\sigma} f(\mathbf{x})]^{-s+r\_{2}} \Delta \mathbf{x} \right. \\ &\quad \left. - \frac{r\_{2}}{(s+1)} \sigma(t) [\mathcal{H}^{\sigma} f(t)]^{-s} \right]^{1-\frac{r\_{1}}{r\_{2}}} . \end{split} \tag{43}$$

Now, by employing (37), with *γ* = 1 − (*r*1/*r*2) < 1,

$$u = \left(\frac{s+1}{s}\right)^{r_2} \int_0^{\sigma(t)} f^{-r_2}(x) [\mathcal{H}^\sigma f(x)]^{-s+r_2} \Delta x, \text{ and } v = -\frac{r_2}{(s+1)} \sigma(t) [\mathcal{H}^\sigma f(t)]^{-s},$$

we obtain

*s* + 1 *s r*<sup>2</sup> Z *<sup>σ</sup>*(*t*) 0 *f* <sup>−</sup>*r*<sup>2</sup> (*x*)[H*<sup>σ</sup> f*(*x*)] <sup>−</sup>*s*+*r*<sup>2</sup>∆*<sup>x</sup>* <sup>−</sup> *r*2 (*s* + 1) *<sup>σ</sup>*(*t*)[H*<sup>σ</sup> f*(*t*)] −*s* 1<sup>−</sup> *r*1 *r*2 ≤ *s* + 1 *s r*2−*r*<sup>1</sup> Z *σ*(*t*) 0 *f* <sup>−</sup>*r*<sup>2</sup> (*x*)[H*<sup>σ</sup> f*(*x*)] <sup>−</sup>*s*+*r*<sup>2</sup>∆*x* 1<sup>−</sup> *r*1 *r*2 − *r*<sup>2</sup> − *r*<sup>1</sup> *r*2 × *s* + 1 *s* −*r*<sup>1</sup> Z *σ*(*t*) 0 *f* <sup>−</sup>*r*<sup>2</sup> (*x*)[H*<sup>σ</sup> f*(*x*)] <sup>−</sup>*s*+*r*<sup>2</sup>∆*x* − *r*1 *<sup>r</sup>*<sup>2</sup> *r*<sup>2</sup> (*s* + 1) *<sup>σ</sup>*(*t*)[H*<sup>σ</sup> f*(*t*)] −*s* = *s* + 1 *s r*2−*r*<sup>1</sup> Z *σ*(*t*) 0 *f* <sup>−</sup>*r*<sup>2</sup> (*x*)[H*<sup>σ</sup> f*(*x*)] <sup>−</sup>*s*+*r*<sup>2</sup>∆*x* 1<sup>−</sup> *r*1 *r*2 − (*r*<sup>2</sup> − *r*1)*s r*1 (*s* + 1) 1+*r*<sup>1</sup> Z *σ*(*t*) 0 *f* <sup>−</sup>*r*<sup>2</sup> (*x*)[H*<sup>σ</sup> f*(*x*)] <sup>−</sup>*s*+*r*<sup>2</sup>∆*x* − *r*1 *r*2 *<sup>σ</sup>*(*t*)[H*<sup>σ</sup> f*(*t*)] −*s* .

Substituting the last inequality into (43), we obtain

$$\begin{split} &\quad \int\_{0}^{\sigma(t)} f^{-r\_{1}}(\mathbf{x}) [\mathcal{H}^{\sigma} f(\mathbf{x})]^{-s+r\_{1}} \Delta \mathbf{x} \\ &\leq \quad \left( \int\_{0}^{\sigma(t)} f^{-r\_{2}}(\mathbf{x}) [\mathcal{H}^{\sigma} f(\mathbf{x})]^{-s+r\_{2}} \Delta \mathbf{x} \right)^{\frac{r\_{1}}{r\_{2}}} \\ &\quad \times \left[ \left( \frac{s+1}{s} \right)^{r\_{2}-r\_{1}} \left[ \int\_{0}^{\sigma(t)} f^{-r\_{2}}(\mathbf{x}) [\mathcal{H}^{\sigma} f(\mathbf{x})]^{-s+r\_{2}} \Delta \mathbf{x} \right]^{1-\frac{r\_{1}}{r\_{2}}} \right. \\ &\left. \quad - \frac{(r\_{2}-r\_{1})s^{r\_{1}}}{(s+1)^{1+r\_{1}}} \left[ \int\_{0}^{\sigma(t)} f^{-r\_{2}}(\mathbf{x}) [\mathcal{H}^{\sigma} f(\mathbf{x})]^{-s+r\_{2}} \Delta \mathbf{x} \right]^{-\frac{r\_{1}}{r\_{2}}} \sigma(t) [\mathcal{H}^{\sigma} f(t)]^{-s} \right] \\ &= \quad \left( \frac{s+1}{s} \right)^{r\_{2}-r\_{1}} \left( \int\_{0}^{\sigma(t)} f^{-r\_{2}}(\mathbf{x}) [\mathcal{H}^{\sigma} f(\mathbf{x})]^{-s+r\_{2}} \Delta \mathbf{x} \right) \\ &\quad \quad - \frac{(r\_{2}-r\_{1})s^{r\_{1}}}{(s+1)^{1+r\_{1}}} \sigma(t) [\mathcal{H}^{\sigma} f(t)]^{-s} . \end{split}$$

which is the desired inequality (40). The proof is complete.

**Theorem 7.** *Assume that ω is non-negative and nondecreasing and q* > 1. *Then we have for every t* ∈ I *that*

$$\begin{split} &\frac{1}{\sigma(t)} \int\_{0}^{\sigma(t)} \left[ (\omega^{\sigma}(\mathbf{x}))^{\frac{-1}{q-1}} \left[ \mathcal{H}^{\sigma} \omega^{\frac{-1}{q-1}}(\mathbf{x}) \right]^{\gamma-1} - \frac{(\gamma-1)}{\gamma} \left[ \mathcal{H}^{\sigma} \omega^{\frac{-1}{q-1}}(\mathbf{x}) \right]^{\gamma} \right] \Delta \mathbf{x} \\ &\leq \quad \frac{1}{\gamma} \left[ \mathcal{H}^{\sigma} \omega^{\frac{-1}{q-1}}(t) \right]^{\gamma} \end{split} \tag{44}$$

*for any γ* ≥ 1.
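Inequality (44) can be probed numerically on $\mathbb{T} = \mathbb{Z}$ before the proof. The sketch below is an illustrative aside; the nondecreasing sample $\omega(x) = x+1$ and the $(q, \gamma)$ grid are assumed choices, and $\omega^{\sigma}(x) = \omega(x+1)$:

```python
# Numerical check of inequality (44) on T = Z, where sigma(t) = t + 1
# and H^sigma applies to v = omega^(-1/(q-1)).
def check_thm7(w, t, q, g):        # requires w nondecreasing, q > 1, g >= 1
    v = lambda x: w(x)**(-1 / (q - 1))
    Hv = lambda u: sum(v(x) for x in range(u + 1)) / (u + 1)   # H^sigma v
    lhs = sum(w(x + 1)**(-1 / (q - 1)) * Hv(x)**(g - 1)
              - (g - 1) / g * Hv(x)**g
              for x in range(t + 1)) / (t + 1)
    rhs = Hv(t)**g / g
    return lhs <= rhs + 1e-12      # tolerance for floating-point rounding

w = lambda x: x + 1                # illustrative nondecreasing weight
print(all(check_thm7(w, t, q, g) for t in range(8)
          for (q, g) in [(2, 1), (2, 2), (3, 2.5)]))
```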

**Proof.** Let $x \in \mathbb{I}$. Since $\omega^{\frac{-1}{q-1}}(x) = \left[x\mathcal{H}\omega^{\frac{-1}{q-1}}(x)\right]^{\Delta}$, it follows that

$$\begin{aligned} & \quad \left(\omega^{\sigma}(x)\right)^{\frac{-1}{q-1}} \left[\mathcal{H}^{\sigma} \omega^{\frac{-1}{q-1}}(x)\right]^{\gamma-1} - \frac{(\gamma-1)}{\gamma} \left[\mathcal{H}^{\sigma} \omega^{\frac{-1}{q-1}}(x)\right]^{\gamma} \\ & \quad \le \omega^{\frac{-1}{q-1}}(x) \left[\mathcal{H}^{\sigma} \omega^{\frac{-1}{q-1}}(x)\right]^{\gamma-1} - \frac{(\gamma-1)}{\gamma} \left[\mathcal{H}^{\sigma} \omega^{\frac{-1}{q-1}}(x)\right]^{\gamma} \\ & \quad = \left[x \mathcal{H} \omega^{\frac{-1}{q-1}}(x)\right]^{\Delta} \left[\mathcal{H}^{\sigma} \omega^{\frac{-1}{q-1}}(x)\right]^{\gamma-1} - \frac{(\gamma-1)}{\gamma} \left[\mathcal{H}^{\sigma} \omega^{\frac{-1}{q-1}}(x)\right]^{\gamma}. \end{aligned} \tag{45}$$

Moreover, utilizing the well-known product rule

$$(fg)^{\Delta} = f g^{\Delta} + f^{\Delta} g^{\sigma},$$

for $f = x\mathcal{H}\omega^{\frac{-1}{q-1}}$ and $g^{\sigma} = \left[\mathcal{H}^{\sigma}\omega^{\frac{-1}{q-1}}\right]^{\gamma-1}$, we have that

$$\begin{split} &\left[x\mathcal{H}\omega^{\frac{-1}{q-1}}(x)\right]^{\Delta}\left[\mathcal{H}^{\sigma}\omega^{\frac{-1}{q-1}}(x)\right]^{\gamma-1} \\ &= \left[x\left(\mathcal{H}\omega^{\frac{-1}{q-1}}(x)\right)^{\gamma}\right]^{\Delta} - \left[x\mathcal{H}\omega^{\frac{-1}{q-1}}(x)\right]\left[\left(\mathcal{H}\omega^{\frac{-1}{q-1}}(x)\right)^{\gamma-1}\right]^{\Delta}, \end{split} \tag{46}$$

and for $f = x$ and $g^{\sigma} = \left[\mathcal{H}^{\sigma}\omega^{\frac{-1}{q-1}}\right]^{\gamma}$, we have that

$$\left[\mathcal{H}^{\sigma}\omega^{\frac{-1}{q-1}}(\mathbf{x})\right]^{\gamma} = \left[\mathbf{x}\left(\mathcal{H}\omega^{\frac{-1}{q-1}}(\mathbf{x})\right)^{\gamma}\right]^{\Delta} - \mathbf{x}\left[\left(\mathcal{H}\omega^{\frac{-1}{q-1}}(\mathbf{x})\right)^{\gamma}\right]^{\Delta}.\tag{47}$$


By comparing (46) and (47) with (45), we obtain

$$\begin{split} & \quad \left(\omega^{\sigma}(x)\right)^{\frac{-1}{q-1}} \left[\mathcal{H}^{\sigma}\omega^{\frac{-1}{q-1}}(x)\right]^{\gamma-1} - \frac{(\gamma-1)}{\gamma} \left[\mathcal{H}^{\sigma}\omega^{\frac{-1}{q-1}}(x)\right]^{\gamma} \\ & \leq \quad \left[x\left(\mathcal{H}\omega^{\frac{-1}{q-1}}(x)\right)^{\gamma}\right]^{\Delta} - \left[x\mathcal{H}\omega^{\frac{-1}{q-1}}(x)\right] \left[\left(\mathcal{H}\omega^{\frac{-1}{q-1}}(x)\right)^{\gamma-1}\right]^{\Delta} \\ & \quad - \frac{(\gamma-1)}{\gamma} \left[x\left(\mathcal{H}\omega^{\frac{-1}{q-1}}(x)\right)^{\gamma}\right]^{\Delta} + \frac{(\gamma-1)}{\gamma} x\left[\left(\mathcal{H}\omega^{\frac{-1}{q-1}}(x)\right)^{\gamma}\right]^{\Delta} \\ & = \quad \frac{1}{\gamma} \left[x\left(\mathcal{H}\omega^{\frac{-1}{q-1}}(x)\right)^{\gamma}\right]^{\Delta} - x\mathcal{H}\omega^{\frac{-1}{q-1}}(x) \left[\left(\mathcal{H}\omega^{\frac{-1}{q-1}}(x)\right)^{\gamma-1}\right]^{\Delta} + \frac{(\gamma-1)}{\gamma} x\left[\left(\mathcal{H}\omega^{\frac{-1}{q-1}}(x)\right)^{\gamma}\right]^{\Delta}. \end{split} \tag{48}$$

On the other hand, since $\omega^{\frac{-1}{q-1}}$ is nonincreasing, so is $\mathcal{H}\omega^{\frac{-1}{q-1}}$; equivalently, $\left[\mathcal{H}\omega^{\frac{-1}{q-1}}(x)\right]^{\Delta} \le 0$. Hence, we have

$$\begin{split} & -x\mathcal{H}\omega^{\frac{-1}{q-1}}(x) \left[ \left( \mathcal{H}\omega^{\frac{-1}{q-1}}(x) \right)^{\gamma-1} \right]^{\Delta} + \frac{(\gamma-1)}{\gamma} x \left[ \left( \mathcal{H}\omega^{\frac{-1}{q-1}}(x) \right)^{\gamma} \right]^{\Delta} \\ & \leq \ -x\mathcal{H}\omega^{\frac{-1}{q-1}}(x) \left[ \left( \mathcal{H}\omega^{\frac{-1}{q-1}}(x) \right)^{\gamma-1} \right]^{\Delta} + x \left[ \left( \mathcal{H}\omega^{\frac{-1}{q-1}}(x) \right)^{\gamma} \right]^{\Delta}. \end{split} \tag{49}$$

Consequently, yet another application of the product rule, with *f* = H*ω* −1 *<sup>q</sup>*−<sup>1</sup> (*x*) and *g* = H*ω* −1 *<sup>q</sup>*−<sup>1</sup> (*x*) *γ*−<sup>1</sup> , yields that

$$\begin{aligned} &\left[\left(\mathcal{H}\omega^{\frac{-1}{q-1}}(\mathbf{x})\right)^{\gamma}\right]^\Delta-\mathcal{H}\omega^{\frac{-1}{q-1}}(\mathbf{x})\left[\left(\mathcal{H}\omega^{\frac{-1}{q-1}}(\mathbf{x})\right)^{\gamma-1}\right]^\Delta\\ &=\quad\left(\mathcal{H}^\sigma\omega^{\frac{-1}{q-1}}(\mathbf{x})\right)^{\gamma-1}\left[\mathcal{H}\omega^{\frac{-1}{q-1}}(\mathbf{x})\right]^\Delta\end{aligned}$$

By substituting the last equation into (49), we have

$$\begin{split} & -x\mathcal{H}\omega^{\frac{-1}{q-1}}(x)\left[\left(\mathcal{H}\omega^{\frac{-1}{q-1}}(x)\right)^{\gamma-1}\right]^{\Delta}+\frac{(\gamma-1)}{\gamma}x\left[\left(\mathcal{H}\omega^{\frac{-1}{q-1}}(x)\right)^{\gamma}\right]^{\Delta} \\ & \leq \quad x\left(\mathcal{H}^{\sigma}\omega^{\frac{-1}{q-1}}(x)\right)^{\gamma-1}\left[\mathcal{H}\omega^{\frac{-1}{q-1}}(x)\right]^{\Delta} \leq 0. \end{split} \tag{50}$$

Now, taking into account relations (48) and (50), we have that

$$\begin{aligned} & \left(\omega^{\sigma}(x)\right)^{\frac{-1}{q-1}} \left[\mathcal{H}^{\sigma}\omega^{\frac{-1}{q-1}}(x)\right]^{\gamma-1} - \frac{(\gamma-1)}{\gamma} \left[\mathcal{H}^{\sigma}\omega^{\frac{-1}{q-1}}(x)\right]^{\gamma} \\ & \leq \quad \frac{1}{\gamma} \left[x\left(\mathcal{H}\omega^{\frac{-1}{q-1}}(x)\right)^{\gamma}\right]^{\Delta}. \end{aligned}$$

Finally, integrating the last inequality from 0 to *σ*(*t*) and dividing by *σ*(*t*), we obtain

$$\begin{aligned} &\frac{1}{\sigma(t)}\int_0^{\sigma(t)} \left[ (\omega^{\sigma}(x))^{\frac{-1}{q-1}} \left[ \mathcal{H}^{\sigma} \omega^{\frac{-1}{q-1}}(x) \right]^{\gamma-1} - \frac{(\gamma-1)}{\gamma} \left[ \mathcal{H}^{\sigma} \omega^{\frac{-1}{q-1}}(x) \right]^{\gamma} \right] \Delta x \\ &\leq \quad \frac{1}{\gamma} \left[ \mathcal{H}^{\sigma} \omega^{\frac{-1}{q-1}}(t) \right]^{\gamma} .\end{aligned}$$

The proof is complete.

#### **3. Self-Improving Properties of Muckenhoupt's Weights**

In this section, we will prove the self-improving properties of the Muckenhoupt class on a time scale T for non-negative and nondecreasing weights.

**Theorem 8.** *Assume that ω is non-negative and nondecreasing on* I *and q* > 1 *such that ω* ∈ A*<sup>q</sup>*(C)*. Then, for any η* ≥ 1 *satisfying ω<sup>σ</sup>*(*t*) ≤ *ηω*(*t*)*, we have ω* ∈ A*<sup>p</sup>*(C<sup>1</sup>) *for any p* ∈ (*p*<sup>0</sup>, *q*]*, where p*<sup>0</sup> *is the unique root of the equation*

$$\frac{q - p\_0}{q - 1} (\mathcal{C} \eta p\_0)^{\frac{1}{q - 1}} = 1. \tag{51}$$

*Furthermore, the constant* C<sup>1</sup> *is given by*

$$\mathcal{C}\_1 := \left(\frac{p-1}{q-1} \frac{\mathcal{C}^{\frac{1}{q-1}}}{\Psi^{q,p}(\mathcal{C})}\right)^{p-1}, \text{ where } \Psi^{q,p}(\mathcal{C}) := \left(1 - \frac{q-p}{q-1} (\mathcal{C}\eta p)^{\frac{1}{q-1}}\right) \eta^{\frac{-1}{p-1}} > 0.$$
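For orientation, the constant can be evaluated numerically. The Python sketch below is our own illustration; the values *q* = 2, C = 2, *η* = 1 and *p* = 1.9 are hypothetical choices, not taken from the paper. For these values *p*<sup>0</sup> = 1 + √2/2 ≈ 1.707, so *p* = 1.9 lies in (*p*<sup>0</sup>, *q*] and Ψ^{*q*,*p*}(C) is positive.

```python
def psi(q, p, C, eta):
    # Psi^{q,p}(C) from Theorem 8; it is positive exactly when p > p0
    return (1 - (q - p) / (q - 1) * (C * eta * p) ** (1 / (q - 1))) * eta ** (-1 / (p - 1))

def c1(q, p, C, eta):
    # the A_p constant C1 of Theorem 8
    return ((p - 1) / (q - 1) * C ** (1 / (q - 1)) / psi(q, p, C, eta)) ** (p - 1)

# illustrative values: here p0 = 1 + sqrt(2)/2 ≈ 1.707, so p = 1.9 lies in (p0, q]
q, C, eta, p = 2.0, 2.0, 1.0, 1.9
print(psi(q, p, C, eta))  # ≈ 0.62
print(c1(q, p, C, eta))   # ≈ 2.61
```

As *p* decreases toward *p*<sup>0</sup>, Ψ^{*q*,*p*}(C) tends to 0 and C<sup>1</sup> blows up, which is consistent with the restriction *p* ∈ (*p*<sup>0</sup>, *q*].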

**Proof.** By Lemma 7 with *γ* = (*q* − 1)/(*p* − 1) > 1 for *q* > *p* > 1, we obtain

$$\begin{split} & \frac{q-1}{p-1} \int\_{0}^{\sigma(t)} (\omega^{\sigma}(\mathbf{x}))^{\frac{-1}{q-1}} \left[ \mathcal{H}^{\sigma} \omega^{\frac{-1}{q-1}}(\mathbf{x}) \right]^{\frac{q-p}{p-1}} \Delta \mathbf{x} - \frac{q-p}{p-1} \int\_{0}^{\sigma(t)} \left[ \mathcal{H}^{\sigma} \omega^{\frac{-1}{q-1}}(\mathbf{x}) \right]^{\frac{q-1}{p-1}} \Delta \mathbf{x} \\ & \leq \quad \sigma(t) \left[ \mathcal{H}^{\sigma} \omega^{\frac{-1}{q-1}}(t) \right]^{\frac{q-1}{p-1}} . \end{split} \tag{52}$$

Since *ω* ∈ A*<sup>q</sup>*(C), we see that

$$\mathcal{H}^{\sigma}\omega(t)\left[\mathcal{H}^{\sigma}\omega^{\frac{-1}{q-1}}(t)\right]^{q-1} \le \mathcal{C},\text{ for }\mathcal{C} > 1. \tag{53}$$

Substituting the last inequality into (52), we obtain

$$\begin{split} &\frac{q-1}{q-p} \int\_{0}^{\sigma(t)} (\omega^{\sigma}(\mathbf{x}))^{\frac{-1}{q-1}} \left[ \mathcal{H}^{\sigma} \omega^{\frac{-1}{q-1}}(\mathbf{x}) \right]^{\frac{q-p}{p-1}} \Delta \mathbf{x} - \int\_{0}^{\sigma(t)} \left[ \mathcal{H}^{\sigma} \omega^{\frac{-1}{q-1}}(\mathbf{x}) \right]^{\frac{q-1}{p-1}} \Delta \mathbf{x} \\ &\leq \quad \frac{p-1}{q-p} \mathcal{C}^{\frac{1}{p-1}} \sigma(t) [\mathcal{H}^{\sigma} \omega(t)]^{\frac{-1}{p-1}} . \end{split} \tag{54}$$

Define

$$g\_{\xi}(\rho) = \frac{q-1}{q-p}\,\xi\,\rho^{\frac{q-p}{p-1}} - \rho^{\frac{q-1}{p-1}},$$

with

$$
\rho = \mathcal{H}^{\sigma} \omega^{\frac{-1}{q-1}} \text{ and } \xi = (\omega^{\sigma})^{\frac{-1}{q-1}}.
$$

Since *ω<sup>σ</sup>* is nondecreasing, (*ω<sup>σ</sup>*)^{−1/(*q*−1)} is nonincreasing; then, by Lemma 2, we have (*ω<sup>σ</sup>*)^{−1/(*q*−1)} ≤ H*<sup>σ</sup>ω*^{−1/(*q*−1)}, that is, *ξ* < *ρ*. From the definition of *g<sup>ξ</sup>*(*ρ*), we see that

$$\frac{d}{d\rho}g\_{\xi}(\rho) = \frac{q-1}{p-1}\,\xi\,\rho^{\frac{q-p}{p-1}-1} - \frac{q-1}{p-1}\rho^{\frac{q-p}{p-1}} = \frac{q-1}{p-1}\rho^{\frac{q-p}{p-1}-1}[\xi-\rho] < 0,$$

and so we see that *g<sup>ξ</sup>*(*ρ*) is nonincreasing. By defining

$$\zeta = \mathcal{C}^{\frac{1}{q-1}} [\mathcal{H}^{\sigma} \omega]^{\frac{-1}{q-1}}$$

and using *ρ* ≤ *ζ*, we have that

$$g\_{\xi}(\rho) \ge g\_{\xi}(\zeta),$$

and then we obtain

$$\begin{split} &\quad \frac{q-1}{q-p} \int\_{0}^{\sigma(t)} (\omega^{\sigma}(\mathbf{x}))^{\frac{-1}{q-1}} \Big[ \mathcal{H}^{\sigma} \omega^{\frac{-1}{q-1}}(\mathbf{x}) \Big]^{\frac{q-p}{p-1}} \Delta \mathbf{x} - \int\_{0}^{\sigma(t)} \Big[ \mathcal{H}^{\sigma} \omega^{\frac{-1}{q-1}}(\mathbf{x}) \Big]^{\frac{q-1}{p-1}} \Delta \mathbf{x} \\ &\geq \quad \frac{q-1}{q-p} \mathcal{C}^{\frac{q-p}{(p-1)(q-1)}} \int\_{0}^{\sigma(t)} (\omega^{\sigma}(\mathbf{x}))^{\frac{-1}{q-1}} [\mathcal{H}^{\sigma} \omega(\mathbf{x})]^{\frac{1}{q-1} - \frac{1}{p-1}} \Delta \mathbf{x} \\ &\quad - \mathcal{C}^{\frac{1}{p-1}} \int\_{0}^{\sigma(t)} [\mathcal{H}^{\sigma} \omega]^{\frac{-1}{p-1}} \Delta \mathbf{x} .\end{split}$$

Comparing the last inequality with (54), we obtain

$$\begin{aligned} &\frac{q-1}{q-p} \mathcal{C}^{\frac{q-p}{(p-1)(q-1)}} \int\_0^{\sigma(t)} (\omega^{\sigma}(\mathbf{x}))^{\frac{-1}{q-1}} [\mathcal{H}^{\sigma}\omega(\mathbf{x})]^{\frac{1}{q-1} - \frac{1}{p-1}} \Delta \mathbf{x} \\ &\leq \quad \frac{p-1}{q-p} \mathcal{C}^{\frac{1}{p-1}} \sigma(t) [\mathcal{H}^{\sigma}\omega(t)]^{\frac{-1}{p-1}} + \mathcal{C}^{\frac{1}{p-1}} \int\_0^{\sigma(t)} [\mathcal{H}^{\sigma}\omega]^{\frac{-1}{p-1}} \Delta \mathbf{x}. \end{aligned}$$

Canceling a suitable power of C, we obtain

$$\frac{q-1}{q-p}\int\_{0}^{\sigma(t)} (\omega^{\sigma}(\mathbf{x}))^{\frac{-1}{q-1}} [\mathcal{H}^{\sigma}\omega(\mathbf{x})]^{\frac{1}{q-1}-\frac{1}{p-1}} \Delta \mathbf{x}$$

$$\leq \quad \frac{p-1}{q-p} \mathcal{C}^{\frac{1}{q-1}} \sigma(t) [\mathcal{H}^{\sigma}\omega(t)]^{\frac{-1}{p-1}} + \mathcal{C}^{\frac{1}{q-1}} \int\_{0}^{\sigma(t)} [\mathcal{H}^{\sigma}\omega]^{\frac{-1}{p-1}} \Delta \mathbf{x}.\tag{55}$$

Replacing *s* and *r* with 1/(*p* − 1) and 1/(*q* − 1), respectively, in inequality (23), we obtain

$$\int\_0^{\sigma(t)} \left[\mathcal{H}^{\sigma}\omega(\mathbf{x})\right]^{-\frac{1}{p-1}} \Delta \mathbf{x} \le p^{\frac{1}{q-1}} \int\_0^{\sigma(t)} \omega^{\frac{-1}{q-1}}(\mathbf{x}) [\mathcal{H}^{\sigma}\omega(\mathbf{x})]^{-\frac{1}{p-1} + \frac{1}{q-1}} \Delta \mathbf{x}.\tag{56}$$

By combining (55) and (56), we see immediately that

$$\frac{q-1}{q-p} \int\_0^{\sigma(t)} (\omega^{\sigma}(\mathbf{x}))^{\frac{-1}{q-1}} [\mathcal{H}^{\sigma}\omega(\mathbf{x})]^{\frac{1}{q-1} - \frac{1}{p-1}} \Delta \mathbf{x}$$

$$- (\mathcal{C}p)^{\frac{1}{q-1}} \int\_0^{\sigma(t)} \omega^{\frac{-1}{q-1}}(\mathbf{x}) [\mathcal{H}^{\sigma}\omega(\mathbf{x})]^{-\frac{1}{p-1} + \frac{1}{q-1}} \Delta \mathbf{x}$$

$$\leq \quad \frac{p-1}{q-p} \mathcal{C}^{\frac{1}{q-1}} \sigma(t) [\mathcal{H}^{\sigma}\omega(t)]^{\frac{-1}{p-1}}.\tag{57}$$

Since *ω<sup>σ</sup>*(*t*) ≤ *ηω*(*t*), we see that

$$
\omega^{\frac{-1}{q-1}}(t) \le \eta^{\frac{1}{q-1}} (\omega^{\sigma}(t))^{\frac{-1}{q-1}}.
$$

Substituting the last inequality into (57), we see that

$$\begin{aligned} &\frac{q-1}{q-p}\int\_0^{\sigma(t)} (\omega^{\sigma}(\mathbf{x}))^{\frac{-1}{q-1}} [\mathcal{H}^{\sigma}\omega(\mathbf{x})]^{\frac{1}{q-1}-\frac{1}{p-1}} \Delta \mathbf{x} \\ &- (\mathcal{C}\eta p)^{\frac{1}{q-1}} \int\_0^{\sigma(t)} (\omega^{\sigma}(\mathbf{x}))^{\frac{-1}{q-1}} [\mathcal{H}^{\sigma}\omega(\mathbf{x})]^{-\frac{1}{p-1}+\frac{1}{q-1}} \Delta \mathbf{x} \\ &\leq \quad \frac{p-1}{q-p} \mathcal{C}^{\frac{1}{q-1}} \sigma(t) [\mathcal{H}^{\sigma}\omega(t)]^{\frac{-1}{p-1}} \end{aligned}$$

which gives us that

$$\begin{split} & \left[ 1 - \frac{q - p}{q - 1} (\mathcal{C} \eta p)^{\frac{1}{q - 1}} \right] \left( \frac{1}{\sigma(t)} \int\_0^{\sigma(t)} (\omega^{\sigma}(\mathbf{x}))^{\frac{-1}{q - 1}} [\mathcal{H}^{\sigma} \omega(\mathbf{x})]^{\frac{1}{q - 1} - \frac{1}{p - 1}} \Delta \mathbf{x} \right) \\ & \leq \quad \frac{p - 1}{q - 1} \mathcal{C}^{\frac{1}{q - 1}} [\mathcal{H}^{\sigma} \omega(t)]^{\frac{-1}{p - 1}} . \end{split} \tag{58}$$

The constant

$$K := 1 - \frac{q - p}{q - 1} (\mathcal{C} \eta p)^{\frac{1}{q - 1}}$$

is positive for every *p* ∈ (*p*0, *q*], where *p*<sup>0</sup> is the unique positive root of the equation

$$\frac{q - p\_0}{q - 1} (\mathcal{C} \eta p\_0)^{\frac{1}{q - 1}} = 1.$$

Since *ω* is nondecreasing, we obtain from Lemma 1 that

$$
\mathcal{H}^{\sigma}\omega(\mathbf{x}) \le \omega^{\sigma}(\mathbf{x}).
$$

This implies, since *p* − 1 < *q* − 1, that

$$[\mathcal{H}^{\sigma}\omega(\mathbf{x})]^{\frac{1}{q-1}-\frac{1}{p-1}} \ge (\omega^{\sigma})^{\frac{1}{q-1}-\frac{1}{p-1}}(\mathbf{x}),\tag{59}$$

which gives us

$$\begin{aligned} & \left[ 1 - \frac{q-p}{q-1} (\mathcal{C}\eta \, p)^{\frac{1}{q-1}} \right] \left( \frac{1}{\sigma(t)} \int\_0^{\sigma(t)} (\omega^{\sigma})^{\frac{-1}{p-1}}(\mathbf{x}) \Delta \mathbf{x} \right) \\ & \le \quad \frac{p-1}{q-1} \mathcal{C}^{\frac{1}{q-1}} [\mathcal{H}^{\sigma} \omega(t)]^{\frac{-1}{p-1}}. \end{aligned} \tag{60}$$

Since *ω<sup>σ</sup>*(*t*) ≤ *ηω*(*t*), we see that

$$(\omega^{\sigma}(t))^{\frac{-1}{p-1}} \ge (\eta \omega(t))^{\frac{-1}{p-1}}.$$

Substituting the last inequality into (60), we obtain

$$\begin{aligned} &\left[1-\frac{q-p}{q-1}(\mathcal{C}\eta p)^{\frac{1}{q-1}}\right]\eta^{\frac{-1}{p-1}}\left(\frac{1}{\sigma(t)}\int\_0^{\sigma(t)}\omega^{\frac{-1}{p-1}}(x)\Delta x\right) \\ &\leq \quad \frac{p-1}{q-1}\mathcal{C}^{\frac{1}{q-1}}[\mathcal{H}^{\sigma}\omega(t)]^{\frac{-1}{p-1}},\end{aligned}$$

which implies that

$$(\mathcal{H}^\sigma \omega(t)) \left(\mathcal{H}^\sigma \omega^{\frac{-1}{p-1}}(t)\right)^{p-1} \le \mathcal{C}\_1,$$

where C<sup>1</sup> = C<sup>1</sup>(*p*, *q*, C, *η*) is a positive constant. The proof is complete.

Now, we refine the result above by improving the constant that appears, as follows.

**Theorem 9.** *Assume that ω is non-negative and nondecreasing on* I *and q* > 1 *such that ω* ∈ A*<sup>q</sup>*(C)*. Then, for any η* ≥ 1 *satisfying ω<sup>σ</sup>*(*t*) ≤ *ηω*(*t*)*, we have ω* ∈ A*<sup>p</sup>*(C̄<sup>1</sup>) *for any p* ∈ (*p*<sup>0</sup>, *q*]*, where p*<sup>0</sup> *is the unique root of the equation*

$$\frac{q - p\_0}{q - 1} (\mathcal{C} \eta p\_0)^{\frac{1}{q - 1}} = 1. \tag{61}$$

*Furthermore, the constant* C̄<sup>1</sup> *is given by*

$$\begin{aligned} \bar{\mathcal{C}}\_1 &:= \left( \frac{q}{p} \left( \frac{p-1}{q-1} \right)^2 \frac{\mathcal{C}^{\frac{1}{q-1}}}{\Psi^{q,p}(\mathcal{C})} \right)^{p-1}, \text{ where} \\ \Psi^{q,p}(\mathcal{C}) &:= \left( 1 - \frac{q-p}{q-1} (\mathcal{C}\eta p)^{\frac{1}{q-1}} \right) \eta^{\frac{-1}{p-1}} > 0. \end{aligned}$$

**Proof.** We apply the same technique used in Theorem 8, but replace *s* and *r* with 1/(*p* − 1) and 1/(*q* − 1), respectively, in (39), to obtain

$$\begin{split} &\int\_{0}^{\sigma(t)} \left[\mathcal{H}^{\sigma}\omega(\mathbf{x})\right]^{-\frac{1}{p-1}} \Delta \mathbf{x} \\ \leq & \quad p^{\frac{1}{q-1}} \int\_{0}^{\sigma(t)} \omega^{\frac{-1}{q-1}}(\mathbf{x}) [\mathcal{H}^{\sigma}\omega(\mathbf{x})]^{-\frac{1}{p-1} + \frac{1}{q-1}} \Delta \mathbf{x} - \frac{p-1}{p(q-1)} \sigma(t) [\mathcal{H}^{\sigma}\omega(t)]^{-\frac{1}{p-1}} .\end{split} \tag{62}$$

Now, combining (55) and (62), we see immediately that

$$\begin{split} &\frac{q-1}{q-p} \int\_{0}^{\sigma(t)} (\omega^{\sigma}(\mathbf{x}))^{\frac{-1}{q-1}} [\mathcal{H}^{\sigma}\omega(\mathbf{x})]^{\frac{1}{q-1}-\frac{1}{p-1}} \Delta \mathbf{x} \\ & - (\mathcal{C}p)^{\frac{1}{q-1}} \int\_{0}^{\sigma(t)} \omega^{\frac{-1}{q-1}}(\mathbf{x}) [\mathcal{H}^{\sigma}\omega(\mathbf{x})]^{-\frac{1}{p-1}+\frac{1}{q-1}} \Delta \mathbf{x} \\ & \leq \quad \frac{p-1}{q-p} \mathcal{C}^{\frac{1}{q-1}} \sigma(t) [\mathcal{H}^{\sigma}\omega(t)]^{\frac{-1}{p-1}} - \mathcal{C}^{\frac{1}{q-1}} \frac{p-1}{p(q-1)} \sigma(t) [\mathcal{H}^{\sigma}\omega(t)]^{-\frac{1}{p-1}} .\end{split} \tag{63}$$

Since *ω<sup>σ</sup>*(*t*) ≤ *ηω*(*t*), we see that

$$
\omega^{\frac{-1}{q-1}}(t) \le \eta^{\frac{1}{q-1}} (\omega^{\sigma}(t))^{\frac{-1}{q-1}}.
$$

Substituting the last inequality into (63), we see that

$$\begin{split} &\frac{q-1}{q-p}\int\_{0}^{\sigma(t)} (\omega^{\sigma}(\mathbf{x}))^{\frac{-1}{q-1}} [\mathcal{H}^{\sigma}\omega(\mathbf{x})]^{\frac{1}{q-1}-\frac{1}{p-1}} \Delta \mathbf{x} \\ & \qquad - (\mathcal{C}\eta p)^{\frac{1}{q-1}} \int\_{0}^{\sigma(t)} (\omega^{\sigma}(\mathbf{x}))^{\frac{-1}{q-1}} [\mathcal{H}^{\sigma}\omega(\mathbf{x})]^{-\frac{1}{p-1}+\frac{1}{q-1}} \Delta \mathbf{x} \\ & \qquad \leq \quad \frac{p-1}{q-p} \mathcal{C}^{\frac{1}{q-1}} \sigma(t) [\mathcal{H}^{\sigma}\omega(t)]^{\frac{-1}{p-1}} - \mathcal{C}^{\frac{1}{q-1}} \frac{p-1}{p(q-1)} \sigma(t) [\mathcal{H}^{\sigma}\omega(t)]^{-\frac{1}{p-1}} , \end{split}$$

which gives us

$$\begin{split} & \left[ 1 - \frac{q-p}{q-1} (\mathcal{C} \eta \boldsymbol{p})^{\frac{1}{q-1}} \right] \left( \frac{1}{\sigma(t)} \int\_0^{\sigma(t)} (\omega^{\sigma}(\mathbf{x}))^{\frac{-1}{q-1}} [\mathcal{H}^{\sigma} \omega(\mathbf{x})]^{\frac{-1}{p-1} + \frac{1}{q-1}} \Delta \mathbf{x} \right) \\ & \leq \quad \left[ \frac{p-1}{q-1} - \frac{(q-p)(p-1)}{p(q-1)^2} \right] \mathcal{C}^{\frac{1}{q-1}} [\mathcal{H}^{\sigma} \omega(t)]^{\frac{-1}{p-1}} \\ & = \quad \frac{q}{p} \left( \frac{p-1}{q-1} \right)^2 \mathcal{C}^{\frac{1}{q-1}} [\mathcal{H}^{\sigma} \omega(t)]^{\frac{-1}{p-1}}. \end{split} \tag{64}$$

Since *ω* is nondecreasing, we obtain from Lemma 1 that

$$
\mathcal{H}^{\sigma}\omega(\mathbf{x}) \le \omega^{\sigma}(\mathbf{x}).
$$

This implies, since *p* − 1 < *q* − 1, that

$$[\mathcal{H}^{\sigma}\omega(\mathfrak{x})]^{\frac{1}{q-1} - \frac{1}{p-1}} \ge (\omega^{\sigma})^{\frac{1}{q-1} - \frac{1}{p-1}}(\mathfrak{x}),$$

then, we obtain

$$\left[1-\frac{q-p}{q-1}(\mathcal{C}\eta p)^{\frac{1}{q-1}}\right]\frac{1}{\sigma(t)}\int\_{0}^{\sigma(t)}\left(\omega^{\sigma}(\mathbf{x})\right)^{-\frac{1}{p-1}}\Delta\mathbf{x} \leq \frac{q}{p}\left(\frac{p-1}{q-1}\right)^{2}\mathcal{C}^{\frac{1}{q-1}}[\mathcal{H}^{\sigma}\omega(t)]^{\frac{-1}{p-1}}.\tag{65}$$

Since *ω<sup>σ</sup>*(*x*) ≤ *ηω*(*x*), we see that

$$[\omega^{\sigma}(\mathbf{x})]^{\frac{-1}{p-1}} \ge [\eta \omega(\mathbf{x})]^{\frac{-1}{p-1}}.$$

Substituting the last inequality into (65), we obtain

$$\left[1 - \frac{q - p}{q - 1} (\mathcal{C} \eta \, p)^{\frac{1}{q - 1}}\right] \eta^{\frac{-1}{p - 1}} \left(\frac{1}{\sigma(t)} \int\_0^{\sigma(t)} \omega^{-\frac{1}{p - 1}}(\mathbf{x}) \Delta \mathbf{x}\right) \leq \frac{q}{p} \left(\frac{p - 1}{q - 1}\right)^2 \mathcal{C}^{\frac{1}{q - 1}} [\mathcal{H}^{\sigma} \omega(t)]^{\frac{-1}{p - 1}},$$

which implies that

$$(\mathcal{H}^\sigma \omega(t)) \left(\mathcal{H}^\sigma \omega^{\frac{-1}{p-1}}(t)\right)^{p-1} \le \bar{\mathcal{C}}\_1,$$

where C̄<sup>1</sup> = C̄<sup>1</sup>(*q*, *p*, C, *η*) is a positive constant, which proves that *ω* ∈ A*<sup>p</sup>*(C̄<sup>1</sup>). The proof is complete.

**Remark 3.** *We note that Equation (61) can be written as*

$$\frac{1}{p\_0} \left( \frac{q - 1}{q - p\_0} \right)^{q - 1} = \mathcal{C} \eta. \tag{66}$$

*When* T = R*, we see that η* = 1*, and then (66) becomes Equation (7), which is given by*

$$\frac{1}{p\_0} \left( \frac{q - 1}{q - p\_0} \right)^{q - 1} = \mathcal{C}. \tag{67}$$

*When* T = N*, we can choose η* = 2 *and then (66) becomes*

$$\frac{1}{p\_0} \left( \frac{q - 1}{q - p\_0} \right)^{q - 1} = 2\mathcal{C}, \tag{68}$$

*for the discrete weights.*
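The root *p*<sup>0</sup> in (66) can also be located numerically. Since the left-hand side of (66) is strictly increasing in *p*<sup>0</sup> on (1, *q*) (its logarithmic derivative is −1/*p*<sup>0</sup> + (*q* − 1)/(*q* − *p*<sup>0</sup>) > 0 there) and runs from 1 to ∞, a unique root exists whenever C*η* > 1. The following Python bisection sketch uses illustrative values of our own choosing (*q* = 2, C = 2, *η* = 1), for which *p*<sup>0</sup> = 1 + √2/2 exactly:

```python
def lhs(p, q):
    # left-hand side of (66): (1/p)((q-1)/(q-p))^(q-1), strictly increasing on (1, q)
    return (1.0 / p) * ((q - 1.0) / (q - p)) ** (q - 1.0)

def find_p0(q, C, eta, tol=1e-12):
    # bisection: lhs -> 1 as p -> 1+ and -> infinity as p -> q-,
    # so for C*eta > 1 there is exactly one root p0 in (1, q)
    lo, hi = 1.0 + 1e-12, q - 1e-12
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if lhs(mid, q) < C * eta:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# illustrative values: q = 2, C = 2, eta = 1 gives p0 = 1 + sqrt(2)/2
print(find_p0(2.0, 2.0, 1.0))  # ≈ 1.7071067811865...
```

The same routine with *η* = 2 checks the discrete-case Equation (68) for chosen values of C.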

#### **4. Conclusions**

In this paper, we proved some Hardy-type inequalities on time scales, together with the new refinements of these inequalities with negative powers that are needed to prove the main results. Next, we used these inequalities, by means of the Bernoulli inequality, to design and prove some additional new inequalities that are also needed in the proof of the main results. These main results are self-improving properties of the Muckenhoupt weights on time scales. Self-improving properties are used in harmonic analysis to prove one of its important theorems, the extrapolation theorem, and we expect that the new theory on time scales will play the same role in proving an extrapolation theorem on time scales via the A*<sup>q</sup>*(C)-Muckenhoupt weights. As special cases, the results contain the classical results obtained for integrals and the results obtained for discrete weights. The technique that we have applied in this paper gives a unified approach for proving general results, avoiding separate proofs for integrals and again for sums. The results that we have derived in the discrete case contain an additional constant, which differs from the integral case; see (67) and (68). We have checked the results with some values and concluded that these equations have unique positive roots.

**Author Contributions:** All authors contributed equally to the manuscript. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research received no external funding.

**Data Availability Statement:** Not applicable.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


**Disclaimer/Publisher's Note:** The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

### *Article* **Inequalities for the Windowed Linear Canonical Transform of Complex Functions**

**Zhen-Wei Li 1,† and Wen-Biao Gao 2,\* ,†**


**Abstract:** In this paper, we generalize the N-dimensional Heisenberg's inequalities for the windowed linear canonical transform (WLCT) of a complex function. Firstly, the definition for N-dimensional WLCT of a complex function is given. In addition, the N-dimensional Heisenberg's inequality for the linear canonical transform (LCT) is derived. It shows that the lower bound is related to the covariance and can be achieved by a complex chirp function with a Gaussian function. Finally, the N-dimensional Heisenberg's inequality for the WLCT is exploited. In special cases, its corollary can be obtained.

**Keywords:** Fourier transform; linear canonical transform; inequality; complex function

#### **1. Introduction**

Inequalities for the Fourier transform (FT) are widely used in mathematics, physics and engineering [1–6]. The classical N-dimensional Heisenberg's inequality of the FT is given by the following formula [7]:

$$\int\_{\mathbb{R}^N} (\mathbf{t} - \mathbf{t}^\mathbf{f})^2 \mid f(\mathbf{t}) \mid^2 \mathrm{d}\mathbf{t} \int\_{\mathbb{R}^N} (\mathbf{u} - \mathbf{u}^\mathbf{f})^2 \mid \widehat{f}(\mathbf{u}) \mid^2 \mathrm{d}\mathbf{u} \ge \eth \left\| f \right\|\_{L^2(\mathbb{R}^N)}^4, \tag{1}$$

where ð = (*N*/(4*π*))², **t** = (*t*1, *t*2, · · · , *tN*), **u** = (*u*1, *u*2, · · · , *uN*), and *f̂*(**u**) is the FT of any function *f* ∈ *L*²(R*N*):

$$\widehat{f}(\mathbf{u}) = F\{f(\mathbf{t})\}(\mathbf{u}) = \frac{1}{\sqrt{2\pi}} \int\_{\mathbb{R}^N} f(\mathbf{t}) e^{-i\mathbf{t}\mathbf{u}} d\mathbf{t},\tag{2}$$

$$\mathbf{t}^{\mathbf{f}} = \int\_{\mathbb{R}^{N}} \mathbf{t} \mid f(\mathbf{t}) \mid^{2} \,\mathrm{d}\mathbf{t},\tag{3}$$

$$\mathbf{u}^{\mathbf{f}} = \int\_{\mathbb{R}^{N}} \mathbf{u} \mid \widehat{f}(\mathbf{u}) \mid^{2} \,\mathrm{d}\mathbf{u},\tag{4}$$

$$\left\|f\right\|\_{L^{2}(\mathbb{R}^{N})}^{2} = \left\|f\right\|^{2} = \int\_{\mathbb{R}^{N}} |f(\mathbf{t})|^{2} \mathrm{d}\mathbf{t}.\tag{5}$$

Based on Formula (1), Zhang obtained the N-dimensional Heisenberg's inequality of the fractional Fourier transform (FRFT) [8].
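As a one-dimensional numerical sanity check of (1), the sketch below (our own illustration; the grid sizes and the Gaussian test function are arbitrary choices, not from the paper) estimates both variances for a Gaussian, which attains the bound ð = (1/(4*π*))² with equality. Note that the check uses the ordinary-frequency convention *f̂*(*u*) = ∫ *f*(*t*)*e*^{−2*πitu*} d*t*, which is the convention under which the constant (*N*/(4*π*))² is sharp:

```python
import numpy as np

# illustrative grid; the Gaussian exp(-pi t^2) is a minimizer of the uncertainty product
n = 4096
t = np.linspace(-20.0, 20.0, n, endpoint=False)
dt = t[1] - t[0]
f = np.exp(-np.pi * t ** 2)
f = f / np.sqrt(np.sum(np.abs(f) ** 2) * dt)  # normalize so that ||f||^2 = 1

# discrete approximation of the ordinary-frequency FT, fhat(u) = ∫ f(t) e^{-2πi t u} dt
fhat = np.fft.fftshift(np.fft.fft(np.fft.ifftshift(f))) * dt
u = np.fft.fftshift(np.fft.fftfreq(n, d=dt))
du = u[1] - u[0]

t_mean = np.sum(t * np.abs(f) ** 2) * dt
u_mean = np.sum(u * np.abs(fhat) ** 2) * du
var_t = np.sum((t - t_mean) ** 2 * np.abs(f) ** 2) * dt
var_u = np.sum((u - u_mean) ** 2 * np.abs(fhat) ** 2) * du

product = var_t * var_u
bound = (1.0 / (4.0 * np.pi)) ** 2  # ð for N = 1
print(product, bound)
```

For non-Gaussian windows the computed product strictly exceeds the bound, in line with the equality condition discussed below (28).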

The windowed linear canonical transform (WLCT) [9–11] is a generalized integral transform of the FT [12] and the FRFT [13]. In recent years, inequalities for the WLCT have become a hot topic. Many scholars [14–17] have studied different types of inequalities for the WLCT.

**Citation:** Li, Z.-W.; Gao, W.-B. Inequalities for the Windowed Linear Canonical Transform of Complex Functions. *Axioms* **2023**, *12*, 554. https://doi.org/10.3390/axioms12060554

Academic Editors: Wei-Shih Du, Ravi P. Agarwal, Erdal Karapinar, Marko Kostić and Jian Cao

Received: 26 April 2023; Revised: 29 May 2023; Accepted: 29 May 2023; Published: 4 June 2023

**Copyright:** © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

The purpose of this paper is to obtain various kinds of N-dimensional inequalities associated with the WLCT.

#### **2. Preliminary**

Let *f*(**t**) = *f*1(**t**)*e*^{*iφ*(**t**)} ∈ *L*²(R*N*) be any function and let 0 ≠ *g*(**t**) = *g*1(**t**)*e*^{*iϕ*(**t**)} ∈ *L*²(R*N*) be a window function.

**Definition 1** ([18])**.** *Let* $A = \begin{bmatrix} a & b \\ c & d \end{bmatrix}$ *be a matrix parameter satisfying a*, *b*, *c*, *d* ∈ R *and ad* − *bc* = 1*. For any function f*(**t**)*, the linear canonical transform (LCT) of f*(**t**) *is defined as*

$$L\_A^f(\mathbf{u}) = L\_A[f(\mathbf{t})](\mathbf{u}) = \begin{cases} \int\_{\mathbb{R}^N} f(\mathbf{t}) K\_A(\mathbf{t}, \mathbf{u}) \mathrm{d}\mathbf{t}, & b \neq 0 \\ \sqrt{d}\, e^{i\frac{cd}{2}\mathbf{u}^2} f(d\mathbf{u}), & b = 0 \end{cases} \tag{6}$$

*where*

$$K\_{A}(\mathbf{t}, \mathbf{u}) = \frac{1}{\sqrt{i2\pi b}} e^{i\frac{a}{2b}\mathbf{t}^{2} - i\frac{1}{b}\mathbf{t}\mathbf{u} + i\frac{d}{2b}\mathbf{u}^{2}}.\tag{7}$$

Additionally, the paper [19] presented the following properties:

$$K\_A^\*(\mathbf{t}, \mathbf{u}) = K\_{A^{-1}}(\mathbf{u}, \mathbf{t}), \tag{8}$$

$$2\pi\delta(\mathbf{x}) = \int\_{\mathbb{R}^N} e^{\pm i \mathbf{u} \mathbf{x}} \mathbf{d} \mathbf{u},\tag{9}$$

where $A^{-1} = \begin{bmatrix} d & -b \\ -c & a \end{bmatrix}$, **x** = (*x*1, *x*2, · · · , *xN*).

If *b* = 0, then the LCT becomes a kind of scaling and chirp multiplication operation [20]. In this paper, we only consider *b* ≠ 0.
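For *b* ≠ 0, the LCT (6) can be approximated by direct quadrature of the kernel (7). The Python sketch below is our own illustration (the grid and the Gaussian test function are arbitrary choices); it evaluates a one-dimensional LCT and checks the special case *A* = (0, 1; −1, 0), in which the LCT coincides with the FT of convention (2) up to the constant factor 1/√*i*:

```python
import numpy as np

def lct_1d(f_vals, t, u, a, b, d):
    # direct quadrature of (6) for b != 0, using the kernel (7) in one dimension;
    # a plain Riemann sum, accurate for rapidly decaying f on a wide grid
    dt = t[1] - t[0]
    kernel = np.exp(1j * (a * t ** 2 / (2 * b) - t * u / b + d * u ** 2 / (2 * b)))
    kernel = kernel / np.sqrt(1j * 2 * np.pi * b)
    return np.sum(f_vals * kernel) * dt

# illustrative check: for A = [[0, 1], [-1, 0]] the LCT is the FT up to 1/sqrt(i),
# and the FT of exp(-t^2/2) under convention (2) is exp(-u^2/2)
t = np.linspace(-30.0, 30.0, 20001)
f = np.exp(-t ** 2 / 2)
val = lct_1d(f, t, u=1.0, a=0.0, b=1.0, d=0.0)
print(abs(val))  # ≈ exp(-0.5) ≈ 0.6065
```

Because the factor 1/√*i* has unit modulus, only the magnitude is compared here; phases depend on the chosen branch of the square root.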

The inverse formula of the LCT is given by [19]

$$f(\mathbf{t}) = \int\_{\mathbb{R}^N} L\_A^f(\mathbf{u}) K\_{A^{-1}}(\mathbf{u}, \mathbf{t}) d\mathbf{u}.\tag{10}$$

**Definition 2** ([9])**.** *Let* $A = \begin{bmatrix} a & b \\ c & d \end{bmatrix}$ *be a matrix parameter satisfying a*, *b*, *c*, *d* ∈ R *and ad* − *bc* = 1*. The WLCT of a function f with respect to g is defined by*

$$\begin{split} \mathcal{W}\_{g}^{A} f(\mathbf{t}, \mathbf{u}) &= \int\_{\mathbb{R}^{N}} f(\mathbf{y}) g^{\*} (\mathbf{y} - \mathbf{t}) K\_{A}(\mathbf{y}, \mathbf{u}) \mathrm{d}\mathbf{y} \\ &= \int\_{\mathbb{R}^{N}} f\_{\mathbf{t}}(\mathbf{y}) K\_{A}(\mathbf{y}, \mathbf{u}) \mathrm{d}\mathbf{y}, \end{split} \tag{11}$$

*where* **y** = (*y*1, *y*2, · · · , *yN*) *and f***t**(**y**) = *f*(**y**)*g* ∗ (**y** − **t**) = *f*1(**y**)*g* ∗ 1 (**y** − **t**)*e i*(*φ*(**y**)−*ϕ*(**y**−**t**)) *.*

Next, we will give a lemma.

**Lemma 1.** *For f* ∈ *L*²(R*N*) *and g* ∈ *L*²(R*N*)*, we have*

$$\mathcal{W}\_{g}^{A}f(\mathbf{t},\mathbf{u}) = \int\_{\mathbb{R}^{N}} L\_{A}^{f}(\mathbf{k}) Q^{\*}(\mathbf{k}|\mathbf{u},\mathbf{t}) \mathrm{d}\mathbf{k},\tag{12}$$

where $A\_{1} = \begin{bmatrix} 0 & b' \\ -\frac{1}{b'} & d' \end{bmatrix}$, $0 \neq b' = b \in \mathbb{R}$, and
$$Q^\*(\mathbf{k}|\mathbf{u}, \mathbf{t}) = \sqrt{-i2\pi b}\, e^{i\frac{d'}{2b}(\mathbf{k}-\mathbf{u})^2} L\_{A\_1}^{g}(\mathbf{k}-\mathbf{u})^\* K\_A(\mathbf{t}, \mathbf{u}) K\_A^\*(\mathbf{t}, \mathbf{k}).$$

**Proof.** According to Definition 2 and Formula (10), we obtain

$$\begin{split} \mathcal{W}\_{g}^{A}f(\mathbf{t},\mathbf{u}) &= \int\_{\mathbb{R}^{N}} f(\mathbf{y}) \overline{g(\mathbf{y}-\mathbf{t})} K\_{A}(\mathbf{y},\mathbf{u}) \mathrm{d}\mathbf{y} \\ &= \int\_{\mathbb{R}^{N}} L\_{A}^{f}(\mathbf{k}) \int\_{\mathbb{R}^{N}} K\_{A^{-1}}(\mathbf{k},\mathbf{y}) \overline{g(\mathbf{y}-\mathbf{t})} K\_{A}(\mathbf{y},\mathbf{u}) \mathrm{d}\mathbf{y} \mathrm{d}\mathbf{k}. \end{split} \tag{13}$$

Assume that $Q^\*(\mathbf{k}|\mathbf{u}, \mathbf{t}) = \int\_{\mathbb{R}^N} K\_{A^{-1}}(\mathbf{k}, \mathbf{y})\overline{g(\mathbf{y}-\mathbf{t})}K\_A(\mathbf{y}, \mathbf{u})\mathrm{d}\mathbf{y}$ and let **y** − **t** = **p**; then

$$\begin{aligned} Q^\*(\mathbf{k}|\mathbf{u}, \mathbf{t}) &= \int\_{\mathbb{R}^N} K\_{A^{-1}}(\mathbf{k}, \mathbf{y})\overline{g(\mathbf{y}-\mathbf{t})}K\_A(\mathbf{y}, \mathbf{u})\mathrm{d}\mathbf{y} \\ &= \int\_{\mathbb{R}^N} \overline{g(\mathbf{p})}\,\frac{1}{\sqrt{-i2\pi b}}\frac{1}{\sqrt{i2\pi b}}\, e^{-i\frac{(\mathbf{u}-\mathbf{k})}{b}(\mathbf{p}+\mathbf{t}) + i\frac{d}{2b}(\mathbf{u}^2-\mathbf{k}^2)}\mathrm{d}\mathbf{p} \\ &= \frac{1}{\sqrt{i2\pi b}}\,\overline{\int\_{\mathbb{R}^N} \frac{1}{\sqrt{i2\pi b}}\, g(\mathbf{p})\, e^{-i\frac{(\mathbf{k}-\mathbf{u})}{b}\mathbf{p} + i\frac{d'}{2b}(\mathbf{k}-\mathbf{u})^2}\mathrm{d}\mathbf{p}}\; e^{i\frac{d'}{2b}(\mathbf{k}-\mathbf{u})^2 + i\frac{d}{2b}(\mathbf{u}^2-\mathbf{k}^2) - i\frac{(\mathbf{u}-\mathbf{k})}{b}\mathbf{t}} \\ &= \frac{1}{\sqrt{i2\pi b}}\, e^{i\frac{d'}{2b}(\mathbf{k}-\mathbf{u})^2} L\_{A\_1}^{g}(\mathbf{k}-\mathbf{u})^\*\, e^{-i\frac{\mathbf{u}\mathbf{t}}{b} + i\frac{d}{2b}\mathbf{u}^2}\, e^{i\frac{\mathbf{k}\mathbf{t}}{b} - i\frac{d}{2b}\mathbf{k}^2} \\ &= \sqrt{-i2\pi b}\, e^{i\frac{d'}{2b}(\mathbf{k}-\mathbf{u})^2} L\_{A\_1}^{g}(\mathbf{k}-\mathbf{u})^\* K\_A(\mathbf{t}, \mathbf{u}) K\_A^\*(\mathbf{t}, \mathbf{k}). \end{aligned} \tag{14}$$

Hence, Formula (13) becomes (12).

#### **3. Inequalities Associated with the WLCT**

The aim of this section is to obtain new inequalities for the WLCT through precise mathematical formulations.

**Definition 3.** *Let f* ∈ *L*²(R*N*)*; then we can define [21]:*

$$\mathbf{t}^{\mathbf{f}} = \frac{1}{E} \int\_{\mathbb{R}^{N}} \mathbf{t} \mid f(\mathbf{t}) \mid^{2} \,\mathrm{d}\mathbf{t},\tag{15}$$

$$\mathbf{u}^{\mathbf{f}} = \frac{1}{E} \int\_{\mathbb{R}^{\mathbb{N}}} \mathbf{u} \, |\, \widehat{f}(\mathbf{u})\, |^{2} \, \mathbf{d} \mathbf{u},\tag{16}$$

$$\mathbf{u}\_{\mathbf{f}}^{\mathbf{A}} = \frac{1}{E} \int\_{\mathbb{R}^{\mathbb{N}}} \mathbf{u} \mid L\_{A}^{f}(\mathbf{u}) \mid^{2} \,\mathrm{d}\mathbf{u}.\tag{17}$$

$$
\Delta\_f^2 = \frac{1}{E} \int\_{\mathbb{R}^N} (\mathbf{t} - \mathbf{t}^\mathbf{f})^2 \mid f(\mathbf{t}) \mid^2 \,\mathrm{d}\mathbf{t} \,\tag{18}
$$

$$
\Lambda\_f^2 = \frac{1}{E} \int\_{\mathbb{R}^N} (\mathbf{u} - \mathbf{u}^\mathbf{f})^2 \ |\widehat{f}(\mathbf{u})\ |^2 \,\mathrm{d}\mathbf{u}\,\mathrm{\,}\tag{19}
$$

$$
\Lambda\_{A,f}^2 = \frac{1}{E} \int\_{\mathbb{R}^N} (\mathbf{u} - \mathbf{u}\_\mathbf{f}^\mathbf{A})^2 \mid L\_A^f(\mathbf{u}) \mid^2 \, \mathbf{d} \mathbf{u} \,\tag{20}
$$

*where*

$$E = \int\_{\mathbb{R}^N} |f(\mathbf{t})|^2 \, \mathbf{d}\mathbf{t} = \int\_{\mathbb{R}^N} |\mathbf{L}\_{\mathbf{A}}^f(\mathbf{u})|^2 \, \mathbf{d}\mathbf{u} = \int\_{\mathbb{R}^N} |\widehat{f}(\mathbf{u})|^2 \, \mathbf{d}\mathbf{u} \tag{21}$$

$$\mathbf{t}^{\mathbf{f}} = (t\_1^f, t\_2^f, \dots, t\_N^f), \tag{22}$$

$$t\_k^f = \frac{1}{E} \int\_{\mathbb{R}^N} t\_k \mid f(\mathbf{t}) \mid^2 \,\mathrm{d}\mathbf{t},\tag{23}$$

$$\mathbf{u}^{\mathbf{f}} = (u\_1^f, u\_2^f, \dots, u\_N^f), \tag{24}$$

$$\boldsymbol{u}\_{k}^{f} = \frac{1}{E} \int\_{\mathbb{R}^{N}} \boldsymbol{u}\_{k} \ |\widehat{f}(\mathbf{u})\ |^{2} \,\mathrm{d}\mathbf{u}.\tag{25}$$

Zhang [8] has generalized the N-dimensional Heisenberg inequality of the FT to complex functions. It can be restated as follows:

**Lemma 2.** *Let f*(**t**) = *f*1(**t**)*e*^{*iφ*(**t**)} ∈ *L*²(R*N*)*, and suppose that, for any* 1 ≤ *ε* ≤ *N, the classical partial derivatives ∂f*/*∂t<sup>ε</sup>, ∂f*1/*∂t<sup>ε</sup>, and ∂φ*/*∂t<sup>ε</sup> exist at any point* **t** ∈ R*N. Then the following inequality for the N-dimensional FT holds:*

$$
\Delta\_f^2 \Lambda\_f^2 \ge \frac{N^2}{16\pi^2} \|f\|^2 + \mathrm{COV}\_{f}^2, \tag{26}
$$

*where*

$$\mathrm{COV}\_f = \int\_{\mathbb{R}^N} |\mathbf{t} - \mathbf{t}^\mathbf{f}|\, |\boldsymbol{v}\_{\mathbf{t}} \phi - \mathbf{u}^\mathbf{f}|\, f\_1^2(\mathbf{t}) \mathrm{d}\mathbf{t},\tag{27}$$

*and v***t***φ* = (*∂φ*/*∂t*1, *∂φ*/*∂t*2, · · · , *∂φ*/*∂tN*)*. If v***t***φ is continuous and f*1 ≠ 0*, then equality holds if and only if f*(**t**) *is a chirp function, namely*

$$f(\mathbf{t}) = e^{-\frac{|\mathbf{t} - \mathbf{t}^f|^2}{2\varepsilon}} + \iota e^{2\pi i \left[\frac{1}{2\mathcal{F}}\sum\_{k=1}^N \varrho(t\_k)|t\_k - t\_k^f|^2 + \mathbf{t}\mathbf{u}^\mathbf{f} + \iota^\Pi\_{\mathcal{F}=1}^N \varrho(t\_\mathcal{F})\right]} \tag{28}$$

*where e*, *ϑ* > 0 *and ι*, *ι*<sup>∏</sup> *N σ*=1 *\$*(*tσ*) <sup>∈</sup> <sup>R</sup>*,*

$$\varrho(t\_{\sigma}) = \begin{cases} 1, & \sigma \in \mathbf{z}\_{1\tau} \\ -1, & \sigma \in \mathbf{z}\_{2\tau} \\ \operatorname{sgn}(t\_{\sigma} - t\_{\sigma}^{f}), & \sigma \in \mathbf{z}\_{3\tau}^{'} \\ -\operatorname{sgn}(t\_{\sigma} - t\_{\sigma}^{f}), & \sigma \in \mathbf{z}\_{4\tau} \end{cases} \tag{29}$$

$$\mathbf{z}\_{1\tau} = \{z\_{11}, z\_{12}, \dots, z\_{1\tau}\} = \left\{1 \le s \le N \mid \frac{\partial \phi}{\partial t\_s} = \frac{1}{\theta} (t\_\mathbf{s} - t\_\mathbf{s}^f) + u\_\mathbf{s}^f\right\},\tag{30}$$

$$\mathbf{z}\_{2\tau} = \{z\_{21}, z\_{22}, \dots, z\_{2\tau}\} = \left\{1 \le s \le N \mid \frac{\partial \phi}{\partial t\_s} = -\frac{1}{\theta}(t\_s - t\_s^f) + u\_s^f\right\},\tag{31}$$

$$\mathbf{z}\_{3\tau} = \{z\_{31}, z\_{32}, \dots, z\_{3\tau}\} = \left\{1 \le s \le N \mid \frac{\partial \phi}{\partial t\_s} = \begin{cases} \frac{1}{\theta} (t\_s - t\_s^f) + u\_s^f & t\_s \ge t\_s^f\\ -\frac{1}{\theta} (t\_s - t\_s^f) + u\_s^f & t\_s < t\_s^f \end{cases} \right\},\tag{32}$$

$$\mathbf{z}\_{4\tau} = \{z\_{41}, z\_{42}, \dots, z\_{4\tau}\} = \left\{1 \le s \le N \mid \frac{\partial \phi}{\partial t\_s} = \begin{cases} -\frac{1}{\theta} (t\_s - t\_s^f) + u\_s^f, & t\_s \ge t\_s^f \\ \frac{1}{\theta} (t\_s - t\_s^f) + u\_s^f, & t\_s < t\_s^f \end{cases} \right\}, \tag{33}$$

$$\text{and } \bigcup\_{\rho=1}^{4} \mathbf{z}\_{\rho \tau} = \{1, 2, \dots, N\}, \quad \mathbf{z}\_{\rho'\tau} \cap \mathbf{z}\_{\rho \tau} = \varnothing \text{ for } \rho \neq \rho'.$$

**Theorem 1.** *Let f*(**t**) = *f*1(**t**)*e*^{*iφ*(**t**)} ∈ *L*²(R*N*) *with* **t***f*(**t**) ∈ *L*²(R*N*)*, suppose that, for any* 1 ≤ *ε* ≤ *N, the classical partial derivatives ∂f*/*∂t<sup>ε</sup>, ∂f*1/*∂t<sup>ε</sup>, and ∂φ*/*∂t<sup>ε</sup> exist at any point* **t** ∈ R*N, and let E* = 1*. Then the following inequality for the N-dimensional LCT holds:*

$$
\Delta\_f^2 \Lambda\_{A,f}^2 \ge \frac{(bN)^2}{16\pi^2} \|f\|^2 + \mathrm{COV}\_{f,A}^2, \tag{34}
$$

*where*

$$\mathrm{COV}\_{f,A} = \int\_{\mathbb{R}^N} |\mathbf{t} - \mathbf{t}^\mathbf{f}|\, |\boldsymbol{v}\_{\mathbf{t}} \phi' - \mathbf{u}\_\mathbf{f}^\mathbf{A}|\, f\_1^2(\mathbf{t}) \mathrm{d}\mathbf{t},\tag{35}$$

*φ*′(**t**) = *φ*(**t**) + (*a*/(2*b*))**t**² *and v***t***φ*′ = (*∂φ*′/*∂t*1, *∂φ*′/*∂t*2, · · · , *∂φ*′/*∂tN*)*. If v***t***φ*′ *is continuous and f*1 ≠ 0*, then equality holds if and only if f*(**t**) *is a chirp function (28).*

**Proof.** According to the Formulas (2) and (6), we have

$$L\_A[f(\mathbf{t})](\mathbf{u}) = \frac{1}{\sqrt{ib}} F\{f(\mathbf{t})e^{i\frac{a}{2b}\mathbf{t}^2}\} \left(\frac{\mathbf{u}}{b}\right) e^{i\frac{d}{2b}\mathbf{u}^2} \tag{36}$$

Let **u**′ = **u**/*b* and *f*′(**t**) = *f*(**t**)*e*^{*i*(*a*/(2*b*))**t**²}; then

$$\begin{split} \Delta\_{f}^{2}\Lambda\_{A,f}^{2} &= \int\_{\mathbb{R}^{N}} (\mathbf{t} - \mathbf{t}^{\mathbf{f}})^{2} \mid f(\mathbf{t}) \mid^{2} \mathrm{d}\mathbf{t} \int\_{\mathbb{R}^{N}} (\mathbf{u} - \mathbf{u}\_{\mathbf{f}}^{\mathbf{A}})^{2} \mid L\_{A}^{f}(\mathbf{u}) \mid^{2} \mathrm{d}\mathbf{u} \\ &= \frac{1}{|b|}\int\_{\mathbb{R}^{N}} (\mathbf{t} - \mathbf{t}^{\mathbf{f}})^{2} \mid f'(\mathbf{t}) \mid^{2} \mathrm{d}\mathbf{t} \int\_{\mathbb{R}^{N}} (\mathbf{u} - \mathbf{u}\_{\mathbf{f}}^{\mathbf{A}})^{2} \left| F\{f'(\mathbf{t})\}\!\left(\frac{\mathbf{u}}{b}\right) \right|^{2} \mathrm{d}\mathbf{u} \\ &= b^{2}\int\_{\mathbb{R}^{N}} (\mathbf{t} - \mathbf{t}^{\mathbf{f}})^{2} \mid f'(\mathbf{t}) \mid^{2} \mathrm{d}\mathbf{t} \int\_{\mathbb{R}^{N}} (\mathbf{u}' - \mathbf{u}\_{\mathbf{f}}^{\mathbf{A}})^{2} \mid F\{f'(\mathbf{t})\}(\mathbf{u}') \mid^{2} \mathrm{d}\mathbf{u}'. \end{split} \tag{37}$$

By the Formula (26), we have

$$
\Delta\_f^2 \Lambda\_{A,f}^2 \ge \frac{(bN)^2}{i16\pi^2} \|f\|^2 + \text{COV}\_{f,A}^2. \tag{38}
$$

**Corollary 1.** *When* $A = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}$*, the above theorem reduces to Lemma 2.*

**Corollary 2.** *When* $A = \begin{pmatrix} \cos\alpha & \sin\alpha \\ -\sin\alpha & \cos\alpha \end{pmatrix}$*, the above theorem reduces to the N-dimensional Heisenberg's inequality of the FRFT for the complex function [8].*

**Definition 4.** *Let* $f, g \in L^2(\mathbb{R}^N)$*; then we can give the definition [11]*

$$\mathbf{t}\_A^W = \frac{1}{\|W\_g^A f(\mathbf{t}, \mathbf{u})\|^2} \int\_{\mathbb{R}^N} \int\_{\mathbb{R}^N} \mathbf{t} \, | W\_g^A f(\mathbf{t}, \mathbf{u}) |^2 \, \mathrm{d}\mathbf{t}\mathrm{d}\mathbf{u},\tag{39}$$

$$\mathbf{u}\_A^W = \frac{1}{\|W\_g^A f(\mathbf{t}, \mathbf{u})\|^2} \int\_{\mathbb{R}^N} \int\_{\mathbb{R}^N} \mathbf{u} \, | W\_g^A f(\mathbf{t}, \mathbf{u}) |^2 \, \mathrm{d}\mathbf{t}\mathrm{d}\mathbf{u},\tag{40}$$

$$\Phi\_{A,W}^2 = \frac{1}{\|W\_g^A f(\mathbf{t}, \mathbf{u})\|^2} \int\_{\mathbb{R}^N} \int\_{\mathbb{R}^N} (\mathbf{t} - \mathbf{t}\_A^W)^2 \, | W\_g^A f(\mathbf{t}, \mathbf{u}) |^2 \, \mathrm{d}\mathbf{t}\mathrm{d}\mathbf{u},\tag{41}$$

$$\Psi\_{A,W}^2 = \frac{1}{\|W\_g^A f(\mathbf{t}, \mathbf{u})\|^2} \int\_{\mathbb{R}^N} \int\_{\mathbb{R}^N} (\mathbf{u} - \mathbf{u}\_A^W)^2 \, | W\_g^A f(\mathbf{t}, \mathbf{u}) |^2 \, \mathrm{d}\mathbf{t}\mathrm{d}\mathbf{u}.\tag{42}$$

Next, the N-dimensional Heisenberg's inequality of the WLCT will be obtained.

**Theorem 2.** *Let* $A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$ *be a matrix parameter satisfying* $a, b, c, d \in \mathbb{R}$ *and* $ad - bc = 1$*. For* $f(\mathbf{t}) = f\_1(\mathbf{t})e^{i\varphi(\mathbf{t})} \in L^2(\mathbb{R}^N)$*,* $g(\mathbf{t}) = g\_1(\mathbf{t})e^{i\phi(\mathbf{t})} \in L^2(\mathbb{R}^N)$*, and* $\mathbf{t}f(\mathbf{t}) \in L^2(\mathbb{R}^N)$*, we have*

$$\Phi\_{A,W}^2 \Psi\_{A,W}^2 \ge \frac{(bN)^2}{i16\pi^2} \|f\|^2 + \text{COV}\_{f,A}^2 + \frac{(bN)^2}{i16\pi^2} \|g\|^2 + \text{COV}\_{g,A\_1}^2 \tag{43}$$

$$+2\left(\left(\frac{(bN)^2}{i16\pi^2} \|f\|^2 + \text{COV}\_{f,A}^2\right) \left(\frac{(bN)^2}{i16\pi^2} \|g\|^2 + \text{COV}\_{g,A\_1}^2\right)\right)^{\frac{1}{2}},\tag{44}$$

*where* $A\_1 = \begin{pmatrix} 0 & b' \\ -\frac{1}{b'} & d' \end{pmatrix}$*,* $0 \neq b' = b \in \mathbb{R}$*; the equality holds if and only if* $f(\mathbf{t})$ *is a chirp function (28).*

**Proof.** On the one hand, according to Lemma 1 and the Formula (9), we obtain

$$\begin{split} \|W\_{g}^{A}f(\mathbf{t},\mathbf{u})\|^{2} &= \int\_{\mathbb{R}^{N}}\int\_{\mathbb{R}^{N}}|W\_{g}^{A}f(\mathbf{t},\mathbf{u})|^{2}\,\mathrm{d}\mathbf{t}\mathrm{d}\mathbf{u} \\ &= \int\_{\mathbb{R}^{N}}\int\_{\mathbb{R}^{N}}\left[\int\_{\mathbb{R}^{N}}L\_{A}^{f}(\mathbf{m})\sqrt{-i2\pi b}\,e^{i\frac{d}{2b}(\mathbf{m}-\mathbf{u})^{2}} L\_{A\_{1}}^{g}(\mathbf{m}-\mathbf{u})^{\*}K\_{A}(\mathbf{t},\mathbf{u})K\_{A}^{\*}(\mathbf{t},\mathbf{m})\,\mathrm{d}\mathbf{m} \right] \\ &\quad\times \left[\int\_{\mathbb{R}^{N}}L\_{A}^{f}(\mathbf{n})\sqrt{-i2\pi b}\,e^{i\frac{d}{2b}(\mathbf{n}-\mathbf{u})^{2}} L\_{A\_{1}}^{g}(\mathbf{n}-\mathbf{u})^{\*}K\_{A}(\mathbf{t},\mathbf{u})K\_{A}^{\*}(\mathbf{t},\mathbf{n})\,\mathrm{d}\mathbf{n} \right]^{\*}\mathrm{d}\mathbf{t}\mathrm{d}\mathbf{u} \\ &= \int\_{\mathbb{R}^{N}}\int\_{\mathbb{R}^{N}}|L\_{A}^{f}(\mathbf{m})|^{2}|L\_{A\_{1}}^{g}(\mathbf{m}-\mathbf{u})|^{2}\,\mathrm{d}\mathbf{m}\mathrm{d}\mathbf{u}. \end{split} \tag{45}$$

Let **m** − **u** = **v**, then

$$\begin{split} \|W\_{g}^{A}f(\mathbf{t},\mathbf{u})\|^{2} &= \int\_{\mathbb{R}^{N}} \int\_{\mathbb{R}^{N}} |L\_{A}^{f}(\mathbf{m})|^{2} |L\_{A\_{1}}^{g}(\mathbf{v})|^{2} \,\mathrm{d}\mathbf{m}\mathrm{d}\mathbf{v} \\ &= \|L\_{A}^{f}(\mathbf{m})\|^{2} \,\|L\_{A\_{1}}^{g}(\mathbf{v})\|^{2}. \end{split} \tag{46}$$

Moreover, we obtain

$$\begin{split} \mathbf{t}\_{A}^{W} &= \frac{1}{\|W\_{g}^{A}f(\mathbf{t},\mathbf{u})\|^{2}} \int\_{\mathbb{R}^{N}} \int\_{\mathbb{R}^{N}} \mathbf{t} \, |W\_{g}^{A} f(\mathbf{t},\mathbf{u})|^{2} \, \mathrm{d}\mathbf{t}\mathrm{d}\mathbf{u} \\ &= \frac{1}{\|L\_{A}^{f}(\mathbf{m})\|^{2} \|L\_{A\_{1}}^{g}(\mathbf{v})\|^{2}} \int\_{\mathbb{R}^{N}} \int\_{\mathbb{R}^{N}} \mathbf{t} \left[\int\_{\mathbb{R}^{N}} f(\mathbf{m}') \overline{g(\mathbf{m}'-\mathbf{t})} K\_{A}(\mathbf{m}',\mathbf{u})\,\mathrm{d}\mathbf{m}' \right] \\ &\quad \times \left[ \int\_{\mathbb{R}^{N}} f(\mathbf{n}') \overline{g(\mathbf{n}'-\mathbf{t})} K\_{A}(\mathbf{n}',\mathbf{u})\,\mathrm{d}\mathbf{n}' \right]^{\*} \mathrm{d}\mathbf{t}\mathrm{d}\mathbf{u} \\ &= \frac{1}{\|L\_{A}^{f}(\mathbf{m})\|^{2} \|L\_{A\_{1}}^{g}(\mathbf{v})\|^{2}} \int\_{\mathbb{R}^{N}} \int\_{\mathbb{R}^{N}} \mathbf{t}\,|f(\mathbf{m}')|^{2} |g(\mathbf{m}'-\mathbf{t})|^{2}\,\mathrm{d}\mathbf{m}'\mathrm{d}\mathbf{t}. \end{split} \tag{47}$$

Let $\mathbf{m}' - \mathbf{t} = \mathbf{r}$; then

$$\begin{split} \mathbf{t}\_{A}^{W} &= \frac{1}{\|L\_{A}^{f}(\mathbf{m})\|^{2} \|L\_{A\_{1}}^{g}(\mathbf{v})\|^{2}} \int\_{\mathbb{R}^{N}} \int\_{\mathbb{R}^{N}} (\mathbf{m}'-\mathbf{r}) |f(\mathbf{m}')|^{2} |g(\mathbf{r})|^{2}\,\mathrm{d}\mathbf{m}'\mathrm{d}\mathbf{r} \\ &= \frac{1}{\|L\_{A}^{f}(\mathbf{m})\|^{2}} \int\_{\mathbb{R}^{N}} \mathbf{m}' |f(\mathbf{m}')|^{2}\,\mathrm{d}\mathbf{m}' - \frac{1}{\|L\_{A\_{1}}^{g}(\mathbf{v})\|^{2}} \int\_{\mathbb{R}^{N}} \mathbf{r} |g(\mathbf{r})|^{2}\,\mathrm{d}\mathbf{r} \\ &= \mathbf{t}^{f} - \mathbf{t}^{g}. \end{split} \tag{48}$$

Using the same method, we can obtain

$$\mathbf{u}\_A^W = \mathbf{u}\_f^A - \mathbf{u}\_g^{A\_1}.\tag{49}$$

From the Formula (46), we have

$$\begin{split} \Psi\_{A,W}^{2} &= \frac{1}{\|W\_{g}^{A}f(\mathbf{t},\mathbf{u})\|^{2}} \int\_{\mathbb{R}^{N}} \int\_{\mathbb{R}^{N}} (\mathbf{u} - \mathbf{u}\_{A}^{W})^{2} \, |W\_{g}^{A} f(\mathbf{t},\mathbf{u})|^{2} \, \mathrm{d}\mathbf{t}\mathrm{d}\mathbf{u} \\ &= \frac{1}{\|L\_{A}^{f}(\mathbf{m})\|^{2}} \int\_{\mathbb{R}^{N}} (\mathbf{m}' - \mathbf{u}\_{f}^{A})^{2} |L\_{A}^{f}(\mathbf{m}')|^{2}\,\mathrm{d}\mathbf{m}' + \frac{1}{\|L\_{A\_{1}}^{g}(\mathbf{v})\|^{2}} \int\_{\mathbb{R}^{N}} (\mathbf{v}' - \mathbf{u}\_{g}^{A\_{1}})^{2} |L\_{A\_{1}}^{g}(\mathbf{v}')|^{2}\,\mathrm{d}\mathbf{v}' \\ &\quad - 2\,\frac{1}{\|L\_{A}^{f}(\mathbf{m})\|^{2}} \int\_{\mathbb{R}^{N}} (\mathbf{m}' - \mathbf{u}\_{f}^{A}) |L\_{A}^{f}(\mathbf{m}')|^{2}\,\mathrm{d}\mathbf{m}' \,\frac{1}{\|L\_{A\_{1}}^{g}(\mathbf{v})\|^{2}} \int\_{\mathbb{R}^{N}} (\mathbf{v}' - \mathbf{u}\_{g}^{A\_{1}}) |L\_{A\_{1}}^{g}(\mathbf{v}')|^{2}\,\mathrm{d}\mathbf{v}' \\ &= \Lambda\_{A,f}^{2} + \Lambda\_{A\_{1},g}^{2}. \end{split} \tag{50}$$

By the same method, we can obtain

$$\Phi\_{A,W}^2 = \Delta\_f^2 + \Delta\_g^2. \tag{51}$$

On the other hand, using the Formulas (48)–(51), we can obtain

$$\begin{split} \Phi\_{A,W}^{2} \Psi\_{A,W}^{2} &= (\Delta\_{f}^{2} + \Delta\_{g}^{2})(\Lambda\_{A,f}^{2} + \Lambda\_{A\_{1},g}^{2}) \\ &= \Delta\_{f}^{2} \Lambda\_{A,f}^{2} + \Delta\_{g}^{2} \Lambda\_{A\_{1},g}^{2} + \Delta\_{f}^{2} \Lambda\_{A\_{1},g}^{2} + \Delta\_{g}^{2} \Lambda\_{A,f}^{2}. \end{split} \tag{52}$$

According to the fact that $n^2 + m^2 \ge 2nm$ for all $n, m \in \mathbb{R}$, we have

$$\begin{split} \Phi\_{A,W}^{2} \Psi\_{A,W}^{2} &= \Delta\_{f}^{2} \Lambda\_{A,f}^{2} + \Delta\_{g}^{2} \Lambda\_{A\_{1},g}^{2} + \Delta\_{f}^{2} \Lambda\_{A\_{1},g}^{2} + \Delta\_{g}^{2} \Lambda\_{A,f}^{2} \\ &\geq \Delta\_{f}^{2} \Lambda\_{A,f}^{2} + \Delta\_{g}^{2} \Lambda\_{A\_{1},g}^{2} + 2\sqrt{\Delta\_{f}^{2} \Lambda\_{A,f}^{2} \Delta\_{g}^{2} \Lambda\_{A\_{1},g}^{2}}. \end{split} \tag{53}$$

From the Formula (34), we can obtain the result.

**Corollary 3.** *When* $A = \begin{pmatrix} \cos\alpha & \sin\alpha \\ -\sin\alpha & \cos\alpha \end{pmatrix}$*, the N-dimensional Heisenberg's inequality of the windowed fractional Fourier transform (WFRFT) [22] for the complex function can be obtained:*

$$\Phi\_{\alpha,W}^{2}\Psi\_{\alpha,W}^{2} \geq \frac{(N\sin\alpha)^{2}}{i16\pi^{2}}\|f\|^{2} + \text{COV}\_{f,\alpha}^{2} + \frac{(N\sin\alpha)^{2}}{i16\pi^{2}}\|g\|^{2} + \text{COV}\_{g,\alpha\_{1}}^{2} \tag{54}$$

$$+2\left(\left(\frac{(N\sin\alpha)^2}{i16\pi^2} \|f\|^2 + \text{COV}\_{f,\alpha}^2\right) \left(\frac{(N\sin\alpha)^2}{i16\pi^2} \|g\|^2 + \text{COV}\_{g,\alpha\_1}^2\right)\right)^{\frac{1}{2}},\tag{55}$$

*where*

$$\Phi\_{\alpha,W}^{2} = \frac{1}{\|W\_{g}^{\alpha}f(\mathbf{t},\mathbf{u})\|^{2}} \int\_{\mathbb{R}^N} \int\_{\mathbb{R}^N} (\mathbf{t} - \mathbf{t}\_{\alpha}^{W})^{2} \, | W\_{g}^{\alpha} f(\mathbf{t},\mathbf{u}) |^{2} \, \mathrm{d}\mathbf{t}\mathrm{d}\mathbf{u},\tag{56}$$

$$\Psi\_{\alpha,W}^2 = \frac{1}{\|W\_{g}^{\alpha}f(\mathbf{t},\mathbf{u})\|^2} \int\_{\mathbb{R}^N} \int\_{\mathbb{R}^N} (\mathbf{u} - \mathbf{u}\_{\alpha}^{W})^2 \, | W\_{g}^{\alpha} f(\mathbf{t},\mathbf{u}) |^2 \, \mathrm{d}\mathbf{t}\mathrm{d}\mathbf{u},\tag{57}$$

$$\mathbf{t}\_{\alpha}^{W} = \frac{1}{\|W\_{g}^{\alpha}f(\mathbf{t},\mathbf{u})\|^{2}} \int\_{\mathbb{R}^N} \int\_{\mathbb{R}^N} \mathbf{t} \, | W\_{g}^{\alpha} f(\mathbf{t},\mathbf{u}) |^{2} \, \mathrm{d}\mathbf{t}\mathrm{d}\mathbf{u},\tag{58}$$

$$\mathbf{u}\_{\alpha}^{W} = \frac{1}{\|W\_{g}^{\alpha}f(\mathbf{t},\mathbf{u})\|^{2}} \int\_{\mathbb{R}^N} \int\_{\mathbb{R}^N} \mathbf{u} \, | W\_{g}^{\alpha} f(\mathbf{t},\mathbf{u}) |^{2} \, \mathrm{d}\mathbf{t}\mathrm{d}\mathbf{u},\tag{59}$$

$$\text{COV}\_{f,\alpha} = \int\_{\mathbb{R}^N} |\mathbf{t} - \mathbf{t}^f| \, |\nabla\_{\mathbf{t}}\varphi' - \mathbf{u}\_{f,W}^{\alpha}| \, f\_1^2(\mathbf{t}) \, \mathrm{d}\mathbf{t},\tag{60}$$

$$\text{COV}\_{g,\alpha\_1} = \int\_{\mathbb{R}^N} |\mathbf{t} - \mathbf{t}^{g}| \, |\nabla\_{\mathbf{t}}\phi' - \mathbf{u}\_{g,W}^{\alpha\_1}| \, g\_1^2(\mathbf{t}) \, \mathrm{d}\mathbf{t},\tag{61}$$

$$\mathbf{u}\_{f,W}^{\alpha} = \frac{1}{E} \int\_{\mathbb{R}^N} \mathbf{u} \, | W\_{g}^{\alpha} f(\mathbf{t},\mathbf{u}) |^{2} \, \mathrm{d}\mathbf{u},\tag{62}$$

$$\mathbf{u}\_{g,W}^{\alpha\_1} = \frac{1}{E} \int\_{\mathbb{R}^N} \mathbf{u} \, | W\_{g}^{\alpha} f(\mathbf{t},\mathbf{u}) |^{2} \, \mathrm{d}\mathbf{u},\tag{63}$$

*and* $W\_g^{\alpha} f(\mathbf{t},\mathbf{u})$ *is the WFRFT of the complex function*

$$W\_{g}^{\alpha}f(\mathbf{t},\mathbf{u}) = \begin{cases} \int\_{\mathbb{R}^{N}} f(\mathbf{y})\, g^{\*}(\mathbf{y} - \mathbf{t})\, K\_{\alpha}(\mathbf{y},\mathbf{u})\,\mathrm{d}\mathbf{y}, & \alpha \neq n\pi, \\ f(\mathbf{u}), & \alpha = 2n\pi, \\ -f(\mathbf{u}), & \alpha = (2n+1)\pi, \end{cases} \tag{64}$$

*and* $K\_{\alpha}(\mathbf{y},\mathbf{u}) = (1 - i\cot\alpha)^{\frac{N}{2}}\, e^{\pi i(|\mathbf{y}|^2 + |\mathbf{u}|^2)\cot\alpha - 2\pi i\mathbf{y}\mathbf{u}\csc\alpha}$*.*

**Corollary 4.** *When* $A = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}$*, the N-dimensional Heisenberg's inequality of the windowed Fourier transform (WFT) [23] for the complex function can be obtained:*

$$
\Phi\_W^2 \Psi\_W^2 \ge \frac{N^2}{i16\pi^2} (\|f\|^2 + \|g\|^2) + \text{COV}\_f^2 + \text{COV}\_g^2 \tag{65}
$$

$$+2\left(\left(\frac{N^2}{i16\pi^2}\|f\|^2 + \text{COV}\_f^2\right)\left(\frac{N^2}{i16\pi^2}\|g\|^2 + \text{COV}\_g^2\right)\right)^{\frac{1}{2}},\tag{66}$$

*where*

$$\Phi\_{W}^{2} = \frac{1}{\|W\_{g}f(\mathbf{t},\mathbf{u})\|^{2}} \int\_{\mathbb{R}^N} \int\_{\mathbb{R}^N} (\mathbf{t} - \mathbf{t}^{W})^{2} \, | W\_{g}f(\mathbf{t},\mathbf{u}) |^{2} \, \mathrm{d}\mathbf{t}\mathrm{d}\mathbf{u},\tag{67}$$

$$\Psi\_W^2 = \frac{1}{\|W\_g f(\mathbf{t},\mathbf{u})\|^2} \int\_{\mathbb{R}^N} \int\_{\mathbb{R}^N} (\mathbf{u} - \mathbf{u}^W)^2 \, | W\_g f(\mathbf{t},\mathbf{u}) |^2 \, \mathrm{d}\mathbf{t}\mathrm{d}\mathbf{u},\tag{68}$$

$$\mathbf{t}^{W} = \frac{1}{\|W\_g f(\mathbf{t},\mathbf{u})\|^{2}} \int\_{\mathbb{R}^N} \int\_{\mathbb{R}^N} \mathbf{t} \, | W\_g f(\mathbf{t},\mathbf{u}) |^{2} \, \mathrm{d}\mathbf{t}\mathrm{d}\mathbf{u},\tag{69}$$

$$\mathbf{u}^{W} = \frac{1}{\|W\_g f(\mathbf{t},\mathbf{u})\|^{2}} \int\_{\mathbb{R}^N} \int\_{\mathbb{R}^N} \mathbf{u} \, | W\_g f(\mathbf{t},\mathbf{u}) |^{2} \, \mathrm{d}\mathbf{t}\mathrm{d}\mathbf{u},\tag{70}$$

$$\text{COV}\_f = \int\_{\mathbb{R}^N} |\mathbf{t} - \mathbf{t}^f| \, |\nabla\_{\mathbf{t}}\varphi' - \mathbf{u}\_{f,W}| \, f\_1^2(\mathbf{t}) \, \mathrm{d}\mathbf{t},\tag{71}$$

$$\text{COV}\_g = \int\_{\mathbb{R}^N} |\mathbf{t} - \mathbf{t}^g| \, |\nabla\_{\mathbf{t}}\phi' - \mathbf{u}\_{g,W}| \, g\_1^2(\mathbf{t}) \, \mathrm{d}\mathbf{t},\tag{72}$$

$$\mathbf{u}\_{f,W} = \frac{1}{E} \int\_{\mathbb{R}^N} \mathbf{u} \, | W\_g f(\mathbf{t},\mathbf{u}) |^{2} \, \mathrm{d}\mathbf{u},\tag{73}$$

$$\mathbf{u}\_{g,W} = \frac{1}{E} \int\_{\mathbb{R}^N} \mathbf{u} \, | W\_g f(\mathbf{t},\mathbf{u}) |^{2} \, \mathrm{d}\mathbf{u},\tag{74}$$

*and* $W\_g f(\mathbf{t},\mathbf{u})$ *is the WFT of the complex function*

$$W\_{g}f(\mathbf{t},\mathbf{u}) = \int\_{\mathbb{R}^N} f(\mathbf{y})\, g^{\*}(\mathbf{y}-\mathbf{t})\, e^{-i\mathbf{y}\mathbf{u}}\,\mathrm{d}\mathbf{y}.\tag{75}$$
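As a quick sanity check of inequalities of this type, the classical one-dimensional ($N = 1$) Heisenberg bound for the FT with kernel $e^{-2\pi i t u}$, namely $\Delta\_t^2 \Delta\_u^2 \ge \frac{1}{16\pi^2}$ for a unit-energy signal, can be verified numerically. The sketch below (not from the paper; the grid parameters are chosen here) confirms that a Gaussian attains this bound:

```python
import numpy as np

# 1-D spot-check: for a unit-energy signal, var_t * var_u >= 1/(16*pi^2)
# with the Fourier kernel e^{-2*pi*i*t*u}; a Gaussian attains equality.
t = np.linspace(-20, 20, 2**14, endpoint=False)
dt = t[1] - t[0]
f = np.exp(-np.pi * t**2)                    # Gaussian, invariant under this FT
f /= np.sqrt(np.sum(np.abs(f)**2) * dt)      # normalize energy to 1

# Discrete approximation of the continuous Fourier transform
F = np.fft.fftshift(np.fft.fft(np.fft.ifftshift(f))) * dt
u = np.fft.fftshift(np.fft.fftfreq(t.size, d=dt))
du = u[1] - u[0]

var_t = np.sum(t**2 * np.abs(f)**2) * dt     # second-order moment in time
var_u = np.sum(u**2 * np.abs(F)**2) * du     # second-order moment in frequency

product = var_t * var_u
bound = 1.0 / (16 * np.pi**2)
print(product, bound)                        # the Gaussian attains the bound
assert product >= bound - 1e-6
assert abs(product - bound) < 1e-4
```

The same computation with a chirp-free window illustrates why only chirp-modulated Gaussians can saturate the LCT-domain bounds.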

#### **4. Conclusions**

In this paper, starting from the N-dimensional Heisenberg's inequality for the FT, N-dimensional Heisenberg's inequalities for the WLCT of a complex function are established. Firstly, the definition of the N-dimensional WLCT of a complex function is given. In addition, according to the second-order moment of the LCT, the N-dimensional Heisenberg's inequality for the linear canonical transform (LCT) is derived. It shows that the lower bound is related to the covariance and can be achieved by a complex chirp function with a Gaussian envelope. Finally, the second-order moment of the WLCT is given, the relationship between the LCT and the WLCT is obtained, and the N-dimensional Heisenberg's inequality for the WLCT is established. In special cases, its corollaries can be obtained.

**Author Contributions:** Writing-original draft, Z.-W.L. and W.-B.G. All authors contributed equally to the writing of the manuscript and read and approved the final version of the manuscript.

**Funding:** This research received no external funding.

**Data Availability Statement:** Data are contained within the article.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


**Disclaimer/Publisher's Note:** The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

### *Article* **Some New Estimates of Hermite–Hadamard Inequality with Application**

**Tao Zhang 1,\* and Alatancang Chen <sup>2</sup>**


**Abstract:** This paper establishes several new inequalities of Hermite–Hadamard type for $|f'|^q$ being convex for some fixed $q \in (0, 1]$. As an application, some error estimates on special means of real numbers are given.

**Keywords:** Hermite–Hadamard inequality; convex function; integral inequalities; special means

**MSC:** 26A51; 26D15

#### **1. Introduction**

For simplicity, in this paper we let *I* ⊆ R = (−∞, +∞) be an interval.

**Definition 1.** *A function f* : *I* → R *is convex if*

$$f[t\alpha + (1 - t)\beta] \le tf(\alpha) + (1 - t)f(\beta) \tag{1}$$

*is true for any α*, *β* ∈ *I and* 0 ≤ *t* ≤ 1*. The inequality* (1) *is reversed if f is concave on I.*

Suppose that the function *f* : *I* → R is convex on *I*, *α*, *β* ∈ *I* with *α* < *β*, then

$$f\left(\frac{\alpha+\beta}{2}\right) \le \frac{1}{\beta-\alpha} \int\_{\alpha}^{\beta} f(x)dx \le \frac{f(\alpha)+f(\beta)}{2}.\tag{2}$$

It is well known in the literature as the Hermite–Hadamard inequality.
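As a quick numerical illustration (the example function and interval are chosen here, not taken from the paper), both sides of (2) can be checked for the convex function $f(x) = e^x$ on $[0, 1]$:

```python
import numpy as np

# Hermite-Hadamard check for f(x) = exp(x) on [alpha, beta] = [0, 1]:
# f((a+b)/2) <= (1/(b-a)) * integral of f <= (f(a)+f(b))/2.
f = np.exp
alpha, beta = 0.0, 1.0
x = np.linspace(alpha, beta, 100001)
mean_value = np.trapz(f(x), x) / (beta - alpha)   # integral mean of f

left = f((alpha + beta) / 2)                      # midpoint value
right = (f(alpha) + f(beta)) / 2                  # endpoint average
assert left <= mean_value <= right
print(left, mean_value, right)
```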

In [1], Dragomir and Agarwal obtained the following inequalities for the right part of (2).

**Theorem 1** (Theorem 2.2 in [1])**.** *Suppose that* $\alpha, \beta \in I^{\circ}$ *and* $\alpha < \beta$*, the function* $f : I^{\circ} \to \mathbb{R}$ *is differentiable, and* $|f'|$ *is convex on* $[\alpha, \beta]$*; then*

$$\left| \frac{f(\alpha) + f(\beta)}{2} - \frac{1}{\beta - \alpha} \int\_{\alpha}^{\beta} f(x)dx \right| \le \frac{(\beta - \alpha)(|f'(\alpha)| + |f'(\beta)|)}{8}. \tag{3}$$

**Theorem 2** (Theorem 2.3 in [1])**.** *Suppose that* $\alpha, \beta \in I^{\circ}$*,* $\alpha < \beta$*, and* $p > 1$*, the function* $f : I^{\circ} \to \mathbb{R}$ *is differentiable, and* $|f'|^{\frac{p}{p-1}}$ *is convex on* $[\alpha, \beta]$*; then*

$$\left| \frac{f(\alpha) + f(\beta)}{2} - \frac{1}{\beta - \alpha} \int\_{\alpha}^{\beta} f(x)dx \right| \leq \frac{\beta - \alpha}{2(p+1)^{\frac{1}{p}}} \left( \frac{|f'(\alpha)|^{\frac{p}{p-1}} + |f'(\beta)|^{\frac{p}{p-1}}}{2} \right)^{\frac{p-1}{p}}. \tag{4}$$

**Citation:** Zhang, T.; Chen, A. Some New Estimates of Hermite– Hadamard Inequality with Application. *Axioms* **2023**, *12*, 688. https://doi.org/10.3390/ axioms12070688

Academic Editors: Wei-Shih Du, Ravi P. Agarwal, Erdal Karapinar, Marko Kostić and Jian Cao

Received: 5 June 2023 Revised: 8 July 2023 Accepted: 12 July 2023 Published: 14 July 2023

**Copyright:** © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:// creativecommons.org/licenses/by/ 4.0/).

In the literature, the extensions of the arithmetic, geometric, identric, logarithmic, and generalized logarithmic mean from two positive real numbers are, respectively, defined by

$$\begin{aligned} A(s,t) &= \frac{s+t}{2}, & s,t \in \mathbb{R}, \\ G(s,t) &= \sqrt{st}, & s,t > 0, \\ I(s,t) &= \frac{1}{e} \left(\frac{t^{t}}{s^{s}}\right)^{\frac{1}{t-s}}, & s,t > 0, \\ L(s,t) &= \frac{t-s}{\ln|t| - \ln|s|}, & |s| \neq |t|, \quad st \neq 0, \\ L\_{n}(s,t) &= \left[\frac{t^{n+1} - s^{n+1}}{(n+1)(t-s)}\right]^{\frac{1}{n}}, & n \in \mathbb{Z} \setminus \{-1, 0\}, \quad s,t \in \mathbb{R}, \quad s \neq t. \end{aligned}$$

It is well known that $G(s,t) < L(s,t) < I(s,t) < A(s,t)$ for $s, t > 0$ with $s \neq t$; see, for example, [2].
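For a quick numerical illustration, the means above and the chain $G < L < I < A$ can be checked with a short script (the helper names and sample values below are chosen here, not from the paper):

```python
import math

# Minimal implementations of the means A, G, I, L defined above,
# plus a spot-check of the chain G < L < I < A for s != t.
def A(s, t): return (s + t) / 2
def G(s, t): return math.sqrt(s * t)
def I(s, t): return (1 / math.e) * (t**t / s**s) ** (1 / (t - s))
def L(s, t): return (t - s) / (math.log(abs(t)) - math.log(abs(s)))

s, t = 2.0, 5.0
assert G(s, t) < L(s, t) < I(s, t) < A(s, t)
print(G(s, t), L(s, t), I(s, t), A(s, t))
```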

Dragomir and Agarwal used Theorem 1 and Theorem 2 to establish the following error estimates on special means:

**Theorem 3** (Propositions 3.1–3.4 in [1])**.** *Suppose that s*, *t* ∈ *I* ◦ *, s* < *t, n* ∈ Z*, then*

$$|A(s^n, t^n) - L\_n(s, t)^n| \le \frac{n(t - s)}{4} A\left(|s|^{n - 1}, |t|^{n - 1}\right), \quad \qquad n \ge 2,\tag{5}$$

$$|A(s^n, t^n) - L\_n(s, t)^n| \le \frac{n(t - s)}{2(p + 1)^{\frac{1}{p}}} \left[ A\left( |s|^{\frac{(n - 1)p}{p - 1}}, |t|^{\frac{(n - 1)p}{p - 1}} \right) \right]^{\frac{p - 1}{p}}, \quad n \ge 2, \ p > 1,\tag{6}$$

$$|A\left(s^{-1}, t^{-1}\right) - L(s, t)^{-1}| \le \frac{t - s}{4} A\left(|s|^{-2}, |t|^{-2}\right), \quad \qquad 0 \notin [s, t], \tag{7}$$

$$|A\left(s^{-1}, t^{-1}\right) - L(s, t)^{-1}| \le \frac{t - s}{2(p + 1)^{\frac{1}{p}}} \left[ A\left(|s|^{\frac{-2p}{p - 1}}, |t|^{\frac{-2p}{p - 1}}\right) \right]^{\frac{p - 1}{p}}, \quad 0 \notin [s, t], \ p > 1. \tag{8}$$
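As a numerical illustration (the sample endpoints below are chosen here, not taken from [1]), the estimate (7) can be spot-checked directly:

```python
import math

# Spot-check of (7): |A(1/s, 1/t) - 1/L(s,t)| <= ((t-s)/4) * A(s^-2, t^-2)
# for 0 not in [s, t].
def A(s, t): return (s + t) / 2
def L(s, t): return (t - s) / (math.log(abs(t)) - math.log(abs(s)))

s, t = 1.0, 3.0
lhs = abs(A(1 / s, 1 / t) - 1 / L(s, t))
rhs = ((t - s) / 4) * A(s**-2, t**-2)
assert lhs <= rhs
print(lhs, rhs)
```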

In [3], Pearce and Pečarić obtained a better upper bound for the inequality (4). Moreover, they obtained a similar inequality for the left part of (2).

**Theorem 4** (Theorems 1 and 2 in [3])**.** *Suppose that* $\alpha, \beta \in I^{\circ}$*,* $\alpha < \beta$*, and* $q \ge 1$*, the function* $f : I^{\circ} \to \mathbb{R}$ *is differentiable, and* $|f'|^q$ *is convex on* $[\alpha, \beta]$*; then*

$$\left| \frac{f(\alpha) + f(\beta)}{2} - \frac{1}{\beta - \alpha} \int\_{\alpha}^{\beta} f(x)dx \right| \le \frac{\beta - \alpha}{4} \left( \frac{|f'(\alpha)|^{q} + |f'(\beta)|^{q}}{2} \right)^{\frac{1}{q}} \tag{9}$$

*and*

$$\left| f\left(\frac{\alpha + \beta}{2}\right) - \frac{1}{\beta - \alpha} \int\_{\alpha}^{\beta} f(x)dx \right| \le \frac{\beta - \alpha}{4} \left( \frac{|f'(\alpha)|^q + |f'(\beta)|^q}{2} \right)^{\frac{1}{q}}.\tag{10}$$

By using Theorem 4, Pearce and Pečarić generalized and improved the error estimates (5)–(8) and obtained the following error estimates on special means:

**Theorem 5** (Propositions 1 and 2 in [3])**.** *Suppose that* $s, t \in \mathbb{R}$*,* $s < t$*,* $0 \notin [s, t]$*,* $n \in \mathbb{Z}$*,* $|n| \ge 2$*,* $q \ge 1$*; then*

$$|A(s^n, t^n) - L\_n(s, t)^n| \le \frac{|n|(t-s)}{4} \left[ A\left( |s|^{(n-1)q}, |t|^{(n-1)q} \right) \right]^{\frac{1}{q}},\tag{11}$$

$$|A(s, t)^n - L\_n(s, t)^n| \le \frac{|n|(t-s)}{4} \left[ A\left( |s|^{(n-1)q}, |t|^{(n-1)q} \right) \right]^{\frac{1}{q}},\tag{12}$$

$$|A\left(s^{-1}, t^{-1}\right) - L(s, t)^{-1}| \le \frac{t-s}{4} \left[A\left(|s|^{-2q}, |t|^{-2q}\right)\right]^{\frac{1}{q}},\tag{13}$$

$$|A(s,t)^{-1} - L(s,t)^{-1}| \le \frac{t-s}{4} \left[ A\left( |s|^{-2q}, |t|^{-2q} \right) \right]^{\frac{1}{q}}.\tag{14}$$

However, their method cannot yield the corresponding estimates for $q < 1$. In this paper, supposing that $|f'|^q$ is convex for some fixed $0 < q \le 1$, we obtain some estimates of (2). Moreover, if $q = 1$, our results coincide with (9) and (10), respectively. As an application, some error estimates on special means are given, and the inequalities (11)–(14) are thereby improved.

#### **2. Main Results**

**Theorem 6.** *Suppose that* $\alpha, \beta \in I^{\circ}$*,* $\alpha < \beta$*, and* $0 < q \le 1$*, the function* $f : I^{\circ} \to \mathbb{R}$ *is differentiable, and* $|f'|^q$ *is convex on* $[\alpha, \beta]$*.*

(i) *If* $0 < q \le \frac{1}{2}$*, then*

$$\begin{split} \left|\frac{f(\alpha) + f(\beta)}{2} - \frac{1}{\beta - \alpha} \int\_{\alpha}^{\beta} f(x)dx\right| \le & \frac{q(\beta - \alpha)}{2(2q + 1)} \left[ \frac{q\left(q + 2^{\frac{1}{q}}\right)}{2(q + 1)} (|f'(\alpha)| + |f'(\beta)|) \right. \\ & \left. + (1 - q) \sqrt{|f'(\alpha)f'(\beta)|} \right]. \end{split} \tag{15}$$

(ii) *If* $\frac{1}{2} < q \le 1$*, then*

$$\begin{split} \left| \frac{f(\alpha) + f(\beta)}{2} - \frac{1}{\beta - \alpha} \int\_{\alpha}^{\beta} f(x)dx \right| \leq & \frac{q(\beta - \alpha)}{2(2q + 1)} \left[ \frac{1 + q \cdot 2^{-\frac{1}{q}}}{q + 1} (|f'(\alpha)| + |f'(\beta)|) \right. \\ & \left. + \left( 1 - 2^{1 - \frac{1}{q}} \right) \sqrt{|f'(\alpha)f'(\beta)|} \right]. \end{split} \tag{16}$$

Clearly, if *q* = 1, then (16) is the same as (9).

**Corollary 1.** *Suppose that* $\alpha, \beta \in I^{\circ}$ *and* $\alpha < \beta$*, the function* $f : I^{\circ} \to \mathbb{R}$ *is differentiable, and* $|f'|^{\frac{1}{2}}$ *is convex on* $[\alpha, \beta]$*; then for any* $q \ge 1$*, we have*

$$\begin{split} \left| \frac{f(\alpha) + f(\beta)}{2} - \frac{1}{\beta - \alpha} \int\_{\alpha}^{\beta} f(x)dx \right| &\leq \frac{\beta - \alpha}{4} \left[ \frac{3}{4} A(|f'(\alpha)|, |f'(\beta)|) + \frac{1}{4} G(|f'(\alpha)|, |f'(\beta)|) \right] \\ &\leq \frac{\beta - \alpha}{4} A(|f'(\alpha)|, |f'(\beta)|) \\ &\leq \frac{\beta - \alpha}{4} \left[ A(|f'(\alpha)|^{q}, |f'(\beta)|^{q}) \right]^{\frac{1}{q}}. \end{split} \tag{17}$$

**Proof.** Let $q = \frac{1}{2}$ in the inequality (15), and we have the first inequality. Note that $\sqrt{|f'(\alpha)f'(\beta)|} \le \frac{|f'(\alpha)| + |f'(\beta)|}{2}$, so the second inequality holds. By the power–mean inequality, we obtain the last inequality.
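Corollary 1 can also be illustrated numerically. The sketch below (the example $f(x) = x^3$ on $[1, 2]$ is chosen here; there $|f'|^{\frac{1}{2}} = \sqrt{3}\,x$ is convex) checks the first bound in (17):

```python
import numpy as np

# Spot-check of the first bound in (17) for f(x) = x^3 on [1, 2].
alpha, beta = 1.0, 2.0
f = lambda x: x**3
df = lambda x: 3 * x**2          # f'(x)

x = np.linspace(alpha, beta, 100001)
lhs = abs((f(alpha) + f(beta)) / 2 - np.trapz(f(x), x) / (beta - alpha))

A = (abs(df(alpha)) + abs(df(beta))) / 2          # arithmetic mean of |f'|
G = np.sqrt(abs(df(alpha) * df(beta)))            # geometric mean of |f'|
rhs = (beta - alpha) / 4 * (0.75 * A + 0.25 * G)
assert lhs <= rhs
print(lhs, rhs)   # roughly 0.75 vs 1.78
```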

**Theorem 7.** *Suppose that* $\alpha, \beta \in I^{\circ}$*,* $\alpha < \beta$*, and* $0 < q \le 1$*, the function* $f : I^{\circ} \to \mathbb{R}$ *is differentiable, and* $|f'|^q$ *is convex on* $[\alpha, \beta]$*.*

(i) *If* $0 < q \le \frac{1}{2}$*, then*

$$\begin{split} \left| f\left(\frac{\alpha+\beta}{2}\right) - \frac{1}{\beta-\alpha} \int\_{\alpha}^{\beta} f(x)dx \right| \le & \frac{q(\beta-\alpha)}{2(2q+1)} \left[ \frac{q^2 \left(2^{\frac{1}{q}+1} - 1\right)}{2(q+1)} (|f'(\alpha)| + |f'(\beta)|) \right. \\ & \left. + (1-q) \left( \left(\frac{1}{3} + \frac{1}{6q}\right) 2^{\frac{1}{q}} - 1 \right) \sqrt{|f'(\alpha)f'(\beta)|} \right]. \end{split} \tag{18}$$

(ii) *If* $\frac{1}{2} < q \le 1$*, then*

$$\begin{split} \left| f\left(\frac{\alpha+\beta}{2}\right) - \frac{1}{\beta-\alpha} \int\_{\alpha}^{\beta} f(x)dx \right| \le & \frac{q(\beta-\alpha)}{2(2q+1)} \left[ \frac{q\left(2 - 2^{-\frac{1}{q}}\right)}{q+1} (|f'(\alpha)| + |f'(\beta)|) \right. \\ & \left. + \left(2^{\frac{1}{q}} - 2\right) \left(\frac{1}{3} + \frac{1}{6q} - 2^{-\frac{1}{q}}\right) \sqrt{|f'(\alpha)f'(\beta)|} \right]. \end{split} \tag{19}$$

Clearly, if $q = 1$, then (19) is the same as (10). If we let $q = \frac{1}{2}$ in the inequality (18), then we have the following.

**Corollary 2.** *Suppose that* $\alpha, \beta \in I^{\circ}$ *and* $\alpha < \beta$*, the function* $f : I^{\circ} \to \mathbb{R}$ *is differentiable, and* $|f'|^{\frac{1}{2}}$ *is convex on* $[\alpha, \beta]$*; then for any* $q \ge 1$*, we have*

$$\begin{split} \left| f\left(\frac{\alpha+\beta}{2}\right) - \frac{1}{\beta-\alpha} \int\_{\alpha}^{\beta} f(x)dx \right| &\leq \frac{\beta-\alpha}{4} \left[ \frac{7}{12} A\left( |f'(\alpha)|, |f'(\beta)| \right) + \frac{5}{12} G\left( |f'(\alpha)|, |f'(\beta)| \right) \right] \\ &\leq \frac{\beta-\alpha}{4} A\left( |f'(\alpha)|, |f'(\beta)| \right) \\ &\leq \frac{\beta-\alpha}{4} \left[ A\left( |f'(\alpha)|^{q}, |f'(\beta)|^{q} \right) \right]^{\frac{1}{q}}. \end{split} \tag{20}$$

#### **3. Lemmas**

**Lemma 1** (Lemma 2.1 in [1])**.** *Suppose that* $\alpha, \beta \in I^{\circ}$ *and* $\alpha < \beta$*, the function* $f : I^{\circ} \to \mathbb{R}$ *is differentiable, and* $f' \in L[\alpha, \beta]$*; then*

$$\frac{f(\alpha) + f(\beta)}{2} - \frac{1}{\beta - \alpha} \int\_{\alpha}^{\beta} f(x)dx = \frac{\beta - \alpha}{2} \int\_{0}^{1} (1 - 2t) f'[t\alpha + (1 - t)\beta]dt.\tag{21}$$

**Lemma 2** (Lemma 2.1 in [4])**.** *Suppose that* $\alpha, \beta \in I^{\circ}$ *and* $\alpha < \beta$*, the function* $f : I^{\circ} \to \mathbb{R}$ *is differentiable, and* $f' \in L[\alpha, \beta]$*; then*

$$f\left(\frac{\alpha+\beta}{2}\right) - \frac{1}{\beta-\alpha} \int\_{\alpha}^{\beta} f(x)dx = (\beta-\alpha) \int\_0^1 M(t)f'[t\alpha+(1-t)\beta]dt,\tag{22}$$

*where*

$$M(t) = \begin{cases} -t, & t \in \left[0, \frac{1}{2}\right), \\\\ 1 - t, & t \in \left[\frac{1}{2}, 1\right]. \end{cases}$$

The following result can be found in [5]. For the convenience of readers, we provide the proof below.

**Lemma 3** (Lemma 2.1 in [5])**.** *Let x*, *y* > 0,*r* ∈ R \ {0}*.*

(i) *If* $r \ge 2$ *or* $0 < r \le 1$*, then*

$$x^r + (2^r - 2)x^{\frac{r}{2}}y^{\frac{r}{2}} + y^r \le (x+y)^r \le \frac{2^r}{2r}\left(x^r + (2r-2)x^{\frac{r}{2}}y^{\frac{r}{2}} + y^r\right).\tag{23}$$

(ii) *If* $1 \le r \le 2$ *or* $r < 0$*, then*

$$x^r + (2^r - 2)x^{\frac{r}{2}}y^{\frac{r}{2}} + y^r \ge (x+y)^r \ge \frac{2^r}{2r}\left(x^r + (2r-2)x^{\frac{r}{2}}y^{\frac{r}{2}} + y^r\right).\tag{24}$$

*Each equality is true if and only if r* = 1, 2 *or x* = *y.*

**Proof.** It is easy to see that every equality in (23) and (24) holds when $r = 1, 2$ or $x = y$, so we suppose that $r \neq 1, 2$, $y = 1$, and $x > 1$ in the following.

First, we prove that the left parts of the inequalities (23) and (24) hold, respectively. Let

$$\begin{aligned} f(x) &= (x + 1)^r - x^r - (2^r - 2)x^{\frac{r}{2}} - 1, &\quad x > 1, \\ g(t) &= (1 + t)^{r-1} - 1 - (2^{r-1} - 1)t^{\frac{r}{2}}, &\quad 0 < t < 1. \end{aligned}$$

Then

$$\begin{aligned} f'(\mathbf{x}) &= r \mathbf{x}^{r-1} \mathbf{g} \left( \frac{1}{\mathbf{x}} \right), & \mathbf{x} &> 1, \\ g'(t) &= (r-1)(1+t)^{r-2} - \frac{r}{2}(2^{r-1}-1)t^{\frac{r}{2}-1}, & 0 &< t < 1. \end{aligned}$$

The following proof is divided into four cases.

(1) If $r < 0$, then

$$g'(t) = -\left[ (1 - r)(1 + t)^{r - 2} - \frac{r}{2}(1 - 2^{r - 1})t^{\frac{r}{2} - 1} \right] < 0, \quad 0 < t < 1.$$

Note that $g(1) = 0$, so $g(t) > 0$ $(0 < t < 1)$. It follows that $f'(x) < 0$ $(x > 1)$. Since $f(1) = 0$, we have $f(x) < 0$.

(2) If $0 < r < 1$, let

$$h\_1(t) = -\left[\ln(1-r) + (r-2)\ln(1+t) - \ln\frac{r}{2} - \ln(1-2^{r-1}) - \left(\frac{r}{2}-1\right)\ln t\right], \quad 0 < t < 1;$$

then $h\_1'(t) = -\left(\frac{r-2}{1+t} - \frac{r-2}{2t}\right) < 0$. It means that $h\_1(t)$ is strictly decreasing on $(0, 1)$. Note that

$$\begin{aligned} h\_1(0^+) &= \ln \frac{r(1 - 2^{r-1})}{2(1 - r)} + \frac{r - 2}{2} \lim\_{t \to 0^+} \ln t > 0, \\ h\_1(1^-) &= \ln \frac{r}{1 - r} - \ln \frac{2^{r-1}}{1 - 2^{r-1}} < 0; \end{aligned}$$

there exists $\xi\_1 \in (0, 1)$ such that $h\_1(t) > 0$ $(0 < t < \xi\_1)$ and $h\_1(t) < 0$ $(\xi\_1 < t < 1)$. Since $g'(t)$ and $h\_1(t)$ have the same sign, we obtain $g'(t) > 0$ $(0 < t < \xi\_1)$ and $g'(t) < 0$ $(\xi\_1 < t < 1)$. Note that $g(1) = g(0) = 0$, and we have $g(t) > 0$ $(0 < t < 1)$. It follows that $f'(x) > 0$ $(x > 1)$. Because $f(1) = 0$, we have $f(x) > 0$ $(x > 1)$.

(3) If $1 < r < 2$, let

$$h\_2(t) = \ln(r - 1) + (r - 2)\ln(1 + t) - \ln\frac{r}{2} - \ln(2^{r - 1} - 1) - \left(\frac{r}{2} - 1\right)\ln t;$$

then $h\_2'(t) = \frac{r-2}{1+t} - \frac{r-2}{2t} > 0$. It means that $h\_2(t)$ is strictly increasing on $(0, 1)$. Note that

$$\begin{aligned} h\_2(0^+) &= \ln \frac{2(r-1)}{r(2^{r-1}-1)} - \frac{r-2}{2} \lim\_{t \to 0^+} \ln t < 0, \\ h\_2(1^-) &= \ln \frac{r-1}{r} - \ln \frac{2^{r-1}-1}{2^{r-1}} > 0; \end{aligned}$$

there exists $\xi\_2 \in (0, 1)$ such that $h\_2(t) < 0$ $(0 < t < \xi\_2)$ and $h\_2(t) > 0$ $(\xi\_2 < t < 1)$. Since $g'(t)$ and $h\_2(t)$ have the same sign and $g(1) = g(0) = 0$, we have $g(t) < 0$ $(0 < t < 1)$. It follows that $f'(x) < 0$ $(x > 1)$. Note that $f(1) = 0$, and we have $f(x) < 0$ $(x > 1)$.

(4) If $r > 2$, then $h\_2'(t) = \frac{r-2}{1+t} - \frac{r-2}{2t} < 0$. It means that $h\_2(t)$ is strictly decreasing on $(0, 1)$. Note that $h\_2(0^+) > 0$ and $h\_2(1^-) < 0$, so there exists $\xi\_3 \in (0, 1)$ such that $h\_2(t) > 0$ $(0 < t < \xi\_3)$ and $h\_2(t) < 0$ $(\xi\_3 < t < 1)$. Since $g'(t)$ and $h\_2(t)$ have the same sign and $g(1) = g(0) = 0$, we have $g(t) > 0$ $(0 < t < 1)$. It follows that $f'(x) > 0$ $(x > 1)$. Then, by $f(1) = 0$, we have $f(x) > 0$ $(x > 1)$.

Next, we show that the right parts of the inequalities (23) and (24) hold. Let

$$\begin{aligned} L(x) &= (x + 1)^r - \frac{2^r}{2r} \left(x^r + (2r - 2)x^{\frac{r}{2}} + 1\right), & x > 1, \\ l(t) &= r(t + 1)^{r - 1} - 2^{r - 1} - 2^{r - 1}(r - 1)t^{\frac{r}{2}}, & 0 < t < 1. \end{aligned}$$

Then

$$\begin{aligned} L'(x) &= x^{r-1}\, l\left(\frac{1}{x}\right), & x > 1, \\ l'(t) &= 2^{r-2} r (r-1)\, t^{\frac{r}{2}-1} \left[ \left(\frac{t+1}{2\sqrt{t}}\right)^{r-2} - 1 \right], & 0 < t < 1. \end{aligned}$$

If $0 < r < 1$ or $r > 2$, then $l'(t) > 0$. Since $l(1) = 0$, we have $l(t) < 0$ $(0 < t < 1)$ and $L'(x) < 0$ $(x > 1)$. Then, by $L(1) = 0$, we have $L(x) < 0$ $(x > 1)$, so the right part of the inequalities (23) holds.

If $r < 0$ or $1 < r < 2$, then $l'(t) < 0$. Since $l(1) = 0$, we have $l(t) > 0$ $(0 < t < 1)$ and $L'(x) > 0$ $(x > 1)$. Then, by $L(1) = 0$, we have $L(x) > 0$ $(x > 1)$, so the right part of the inequalities (24) holds.

The proof is complete.
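The sign pattern of $L(x)$ established above can be spot-checked numerically. The sketch below is an illustrative check, not part of the original proof; the helper `L` is a direct transcription of the definition of $L(x)$:

```python
def L(x: float, r: float) -> float:
    # L(x) = (x+1)^r - (2^r / (2r)) * (x^r + (2r-2) x^{r/2} + 1), for x > 1
    return (x + 1) ** r - 2 ** r / (2 * r) * (x ** r + (2 * r - 2) * x ** (r / 2) + 1)

# 0 < r < 1 or r > 2  gives  L(x) < 0;  r < 0 or 1 < r < 2  gives  L(x) > 0.
for x in [1.5, 4.0, 10.0]:
    assert L(x, 0.5) < 0 and L(x, 3.0) < 0    # right part of (23)
    assert L(x, -1.0) > 0 and L(x, 1.5) > 0   # right part of (24)
print("sign pattern of L(x) confirmed on sample points")
```

The margins are small near $x = 1$, consistent with the equality case $L(1) = 0$.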

**Lemma 4.** *Suppose that $\alpha, \beta \in I^{\circ}$, $\alpha < \beta$, $0 < q \le 1$, $0 < t < 1$, the function $f : I \to [0, +\infty)$ is positive, and $f^q$ is convex on $[\alpha, \beta]$.*

(i) *If* $0 < q \le \frac{1}{2}$*, then*

$$f[t\alpha + (1 - t)\beta] \le q2^{\frac{1}{q} - 1} \left[ t^{\frac{1}{q}} f(\alpha) + (1 - t)^{\frac{1}{q}} f(\beta) + \left( \frac{2}{q} - 2 \right) (t(1 - t))^{\frac{1}{2q}} \sqrt{f(\alpha)f(\beta)} \right]. \tag{25}$$

(ii) *If* $\frac{1}{2} \le q \le 1$*, then*

$$f[t\alpha + (1 - t)\beta] \le t^{\frac{1}{q}} f(\alpha) + (1 - t)^{\frac{1}{q}} f(\beta) + (2^{\frac{1}{q}} - 2)(t(1 - t))^{\frac{1}{2q}} \sqrt{f(\alpha)f(\beta)}.\tag{26}$$

**Proof.** Since $f^q$ is convex and $q > 0$, we have

$$f[t\alpha + (1 - t)\beta] \le \left[tf^{q}(\alpha) + (1 - t)f^{q}(\beta)\right]^{\frac{1}{q}}.$$

If $0 < q \le \frac{1}{2}$, then $\frac{1}{q} \ge 2$; by the right-hand side of the inequalities (23), we obtain

$$\left[tf^q(\alpha) + (1-t)f^q(\beta)\right]^{\frac{1}{q}} \le q2^{\frac{1}{q}-1}\left[t^{\frac{1}{q}}f(\alpha) + (1-t)^{\frac{1}{q}}f(\beta) + \left(\frac{2}{q}-2\right)(t(1-t))^{\frac{1}{2q}}\sqrt{f(\alpha)f(\beta)}\right].$$

Thus, the inequality (25) is valid.

If $\frac{1}{2} \le q \le 1$, then $1 \le \frac{1}{q} \le 2$; by the left-hand side of the inequalities (24), we have

$$\left[tf^{q}(\alpha)+(1-t)f^{q}(\beta)\right]^{\frac{1}{q}} \leq t^{\frac{1}{q}}f(\alpha)+(1-t)^{\frac{1}{q}}f(\beta)+(2^{\frac{1}{q}}-2)(t(1-t))^{\frac{1}{2q}}\sqrt{f(\alpha)f(\beta)}.$$

Hence, the inequality (26) is valid.
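Lemma 4 can be illustrated numerically. The sketch below (an illustrative check under chosen test functions, not part of the proof) takes $f(x) = x^4$ with $q = \frac{1}{2}$ for (25) and $f(x) = x^2$ with $q = 1$ for (26); in both cases $f^q = x^2$ is convex, as the lemma requires:

```python
import math

def rhs25(t, fa, fb, q):
    # right-hand side of inequality (25)
    return q * 2 ** (1 / q - 1) * (t ** (1 / q) * fa + (1 - t) ** (1 / q) * fb
            + (2 / q - 2) * (t * (1 - t)) ** (1 / (2 * q)) * math.sqrt(fa * fb))

def rhs26(t, fa, fb, q):
    # right-hand side of inequality (26)
    return (t ** (1 / q) * fa + (1 - t) ** (1 / q) * fb
            + (2 ** (1 / q) - 2) * (t * (1 - t)) ** (1 / (2 * q)) * math.sqrt(fa * fb))

alpha, beta, t = 1.0, 2.0, 0.3
f = lambda x: x ** 4   # case (i): q = 1/2, f^q = x^2 convex
assert f(t * alpha + (1 - t) * beta) <= rhs25(t, f(alpha), f(beta), 0.5)
g = lambda x: x ** 2   # case (ii): q = 1, f^q = x^2 convex
assert g(t * alpha + (1 - t) * beta) <= rhs26(t, g(alpha), g(beta), 1.0)
print("inequalities (25) and (26) hold at the sample point")
```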

#### **4. Derivation of Theorems 6 and 7**

**The derivation of Theorem 6:** (i) If $0 < q \le \frac{1}{2}$, then by the inequalities (21) and (25) we can derive that

$$\begin{aligned} &\left|\frac{f(\alpha)+f(\beta)}{2}-\frac{1}{\beta-\alpha}\int\_{\alpha}^{\beta}f(x)\mathrm{d}x\right| \\ \leq{}&\frac{\beta-\alpha}{2}\int\_{0}^{1}|1-2t||f'[t\alpha+(1-t)\beta]|\mathrm{d}t \\ \leq{}&\frac{(\beta-\alpha)q2^{\frac{1}{q}-1}}{2}\int\_{0}^{1}|1-2t|\left[t^{\frac{1}{q}}|f'(\alpha)|+(1-t)^{\frac{1}{q}}|f'(\beta)|+\left(\frac{2}{q}-2\right)(t(1-t))^{\frac{1}{2q}}\sqrt{|f'(\alpha)f'(\beta)|}\right]\mathrm{d}t. \end{aligned}$$

Note that

$$\begin{aligned} &\int\_0^1 |1 - 2t|(1 - t)^{\frac{1}{q}} \mathrm{d}t \\ &= \int\_0^1 |1 - 2t| t^{\frac{1}{q}} \mathrm{d}t \\ &= \int\_0^{\frac{1}{2}} (1 - 2t) t^{\frac{1}{q}} \mathrm{d}t - \int\_{\frac{1}{2}}^1 (1 - 2t) t^{\frac{1}{q}} \mathrm{d}t \\ &= \frac{q\left(1 + q \cdot 2^{-\frac{1}{q}}\right)}{(q + 1)(2q + 1)}, \end{aligned}$$

and

$$\begin{aligned} &\int\_{0}^{1} |1 - 2t| (t(1 - t))^{\frac{1}{2q}} \mathrm{d}t \\ &= \int\_{0}^{\frac{1}{2}} (1 - 2t) (t(1 - t))^{\frac{1}{2q}} \mathrm{d}t - \int\_{\frac{1}{2}}^{1} (1 - 2t) (t(1 - t))^{\frac{1}{2q}} \mathrm{d}t \\ &= \frac{2q}{2q + 1} (t(1 - t))^{\frac{1}{2q} + 1} \bigg|\_{0}^{\frac{1}{2}} - \frac{2q}{2q + 1} (t(1 - t))^{\frac{1}{2q} + 1} \bigg|\_{\frac{1}{2}}^{1} \\ &= \frac{q \cdot 2^{-\frac{1}{q}}}{2q + 1}, \end{aligned}$$

so (15) is valid.
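Both closed-form moment integrals above can be confirmed by numerical quadrature. The sketch below uses a plain midpoint rule (an illustrative check, not a proof):

```python
def midpoint(f, a, b, n=200_000):
    # simple midpoint rule; accurate here because the kink of |1-2t| at
    # t = 1/2 falls on a subinterval boundary when n is even
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

for q in [0.25, 0.5]:
    lhs1 = midpoint(lambda t: abs(1 - 2 * t) * t ** (1 / q), 0.0, 1.0)
    rhs1 = q * (1 + q * 2 ** (-1 / q)) / ((q + 1) * (2 * q + 1))
    assert abs(lhs1 - rhs1) < 1e-8
    lhs2 = midpoint(lambda t: abs(1 - 2 * t) * (t * (1 - t)) ** (1 / (2 * q)), 0.0, 1.0)
    rhs2 = q * 2 ** (-1 / q) / (2 * q + 1)
    assert abs(lhs2 - rhs2) < 1e-8
print("closed forms of both moment integrals confirmed")
```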

(ii) If $\frac{1}{2} < q \le 1$, then by (21) and (26), we have

$$\begin{aligned} &\left| \frac{f(\alpha) + f(\beta)}{2} - \frac{1}{\beta - \alpha} \int\_{\alpha}^{\beta} f(x) \mathrm{d}x \right| \\ \leq{}& \frac{\beta - \alpha}{2} \int\_{0}^{1} |1 - 2t||f'[t\alpha + (1 - t)\beta]| \mathrm{d}t \\ \leq{}& \frac{\beta - \alpha}{2} \int\_{0}^{1} |1 - 2t| \left[ t^{\frac{1}{q}} |f'(\alpha)| + (1 - t)^{\frac{1}{q}} |f'(\beta)| + (2^{\frac{1}{q}} - 2)(t(1 - t))^{\frac{1}{2q}} \sqrt{|f'(\alpha)f'(\beta)|} \right] \mathrm{d}t \\ ={}& \frac{q(\beta - \alpha)}{2(2q + 1)} \left[ \frac{1 + q \cdot 2^{-\frac{1}{q}}}{q + 1} (|f'(\alpha)| + |f'(\beta)|) + \left( 1 - 2^{1 - \frac{1}{q}} \right) \sqrt{|f'(\alpha)f'(\beta)|} \right], \end{aligned}$$

so (16) is valid.

**The derivation of Theorem 7:** (i) If $0 < q \le \frac{1}{2}$, then by (22) and (25) we can derive that

$$\begin{aligned} &\left| f\left(\frac{\alpha+\beta}{2}\right) - \frac{1}{\beta-\alpha} \int\_{\alpha}^{\beta} f(x) \mathrm{d}x \right| \\ \leq{}& (\beta-\alpha) \int\_{0}^{1} |M(t)| |f'[t\alpha + (1-t)\beta]| \mathrm{d}t \\ \leq{}& (\beta-\alpha) q 2^{\frac{1}{q}-1} \int\_{0}^{1} |M(t)| \left[ t^{\frac{1}{q}} |f'(\alpha)| + (1-t)^{\frac{1}{q}} |f'(\beta)| + \left( \frac{2}{q} - 2 \right) (t(1-t))^{\frac{1}{2q}} \sqrt{|f'(\alpha)f'(\beta)|} \right] \mathrm{d}t. \end{aligned}$$

Note that

$$\begin{aligned} &\int\_0^1 |M(t)|(1-t)^{\frac{1}{q}} \mathrm{d}t \\ &= \int\_0^1 |M(t)| t^{\frac{1}{q}} \mathrm{d}t \\ &= \int\_0^{\frac{1}{2}} t^{\frac{1}{q}+1} \mathrm{d}t + \int\_{\frac{1}{2}}^1 (1-t) t^{\frac{1}{q}} \mathrm{d}t \\ &= \frac{q^2 (1 - 2^{-\frac{1}{q}-1})}{(q+1)(2q+1)}, \end{aligned}$$

and

$$\begin{aligned} &\int\_0^1 |M(t)| (t(1-t))^{\frac{1}{2q}} \mathrm{d}t \\ &= \int\_0^{\frac{1}{2}} t^{\frac{1}{2q}+1} (1-t)^{\frac{1}{2q}} \mathrm{d}t + \int\_{\frac{1}{2}}^1 t^{\frac{1}{2q}} (1-t)^{\frac{1}{2q}+1} \mathrm{d}t \\ &= -\frac{q \cdot 2^{-\frac{1}{q}-1}}{2q+1} + B\left(\frac{1}{2q} + 1, \frac{1}{2q} + 2\right), \end{aligned}$$

where the beta function is

$$B(x, y) = \int\_0^1 t^{x - 1} (1 - t)^{y - 1} \mathrm{d}t, \quad x > 0, \ y > 0.$$

Clearly, $B(x, x)$ is decreasing on $(0, +\infty)$, and since $\frac{1}{2q} \ge 1$, we have

$$B\left(\frac{1}{2q} + 1, \frac{1}{2q} + 2\right) = \frac{1}{2}B\left(\frac{1}{2q} + 1, \frac{1}{2q} + 1\right) \le \frac{1}{2}B(2, 2) = \frac{1}{12}.$$

Thus, (18) is valid.
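The identity $B\left(\frac{1}{2q}+1, \frac{1}{2q}+2\right) = \frac{1}{2}B\left(\frac{1}{2q}+1, \frac{1}{2q}+1\right)$ and the resulting $\frac{1}{12}$ bound can be spot-checked with the gamma function (an illustrative sketch for several $q \le \frac{1}{2}$):

```python
import math

def beta(x: float, y: float) -> float:
    # Euler beta function expressed via the gamma function
    return math.gamma(x) * math.gamma(y) / math.gamma(x + y)

for q in [0.1, 0.25, 0.5]:
    a = 1 / (2 * q) + 1            # here 1/(2q) >= 1, as in case (i)
    assert abs(beta(a, a + 1) - 0.5 * beta(a, a)) < 1e-12
    assert beta(a, a + 1) <= 1 / 12 + 1e-12   # the bound used in the proof
print("beta identity and the 1/12 bound confirmed")
```

Note that equality in the bound is attained at $q = \frac{1}{2}$, where $B(2, 3) = \frac{1}{12}$.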

(ii) If $\frac{1}{2} < q \le 1$, then by (22) and (26), we can derive that

$$\begin{aligned} &\left| f\left(\frac{\alpha+\beta}{2}\right) - \frac{1}{\beta-\alpha} \int\_{\alpha}^{\beta} f(x) \mathrm{d}x \right| \\ \leq{}& (\beta-\alpha) \int\_{0}^{1} |M(t)| |f'[t\alpha + (1-t)\beta]| \mathrm{d}t \\ \leq{}& (\beta-\alpha) \int\_{0}^{1} |M(t)| \left[ t^{\frac{1}{q}} |f'(\alpha)| + (1-t)^{\frac{1}{q}} |f'(\beta)| + (2^{\frac{1}{q}} - 2)(t(1-t))^{\frac{1}{2q}} \sqrt{|f'(\alpha)f'(\beta)|} \right] \mathrm{d}t \\ \leq{}& \frac{q(\beta-\alpha)}{2(2q+1)} \left[ \frac{q\left(2-2^{-\frac{1}{q}}\right)}{q+1} (|f'(\alpha)| + |f'(\beta)|) + \left(2^{\frac{1}{q}} - 2\right) \left(\frac{1}{3} + \frac{1}{6q} - 2^{-\frac{1}{q}}\right) \sqrt{|f'(\alpha)f'(\beta)|} \right]. \end{aligned}$$

Thus, (19) is valid.

#### **5. Applications**

In this section, we use Corollaries 1 and 2 to establish error estimates for special means, thereby improving the inequalities (11)–(14).

**Proposition 1.** *Suppose that $s, t \in \mathbb{R}$, $s < t$, $0 \notin [s, t]$, $n \in \mathbb{Z}$, and $n \ge 3$ or $n \le -2$; then*

$$|A(s^n, t^n) - L\_n(s, t)^n| \le \frac{|n|(t-s)}{4} \left[ \frac{3}{4} A\left(|s|^{n-1}, |t|^{n-1}\right) + \frac{1}{4} G\left(|s|^{n-1}, |t|^{n-1}\right) \right],\tag{27}$$

$$|A(\mathbf{s}, t)^{n} - L\_{n}(\mathbf{s}, t)^{n}| \le \frac{|n|(t - \mathbf{s})}{4} \left[ \frac{7}{12} A\left( |\mathbf{s}|^{n - 1}, |t|^{n - 1} \right) + \frac{5}{12} G\left( |\mathbf{s}|^{n - 1}, |t|^{n - 1} \right) \right]. \tag{28}$$

**Proof.** Let $f(x) = x^n$, $x \in [s, t]$, $n \in \mathbb{Z}$, $n \ge 3$ or $n \le -2$; then

$$\left( |f'(x)|^{\frac{1}{2}} \right)'' = \frac{\sqrt{|n|}(n-1)(n-3)}{4} |x|^{\frac{n-5}{2}} \ge 0.$$

Thus, $|f'(x)|^{\frac{1}{2}}$ is convex on $[s, t]$. It follows that (27) and (28) hold by using Corollary 1 and Corollary 2, respectively.
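Inequalities (27) and (28) can be spot-checked numerically. The sketch below assumes the standard definitions of the arithmetic mean $A$, the geometric mean $G$, and the generalized logarithmic mean $L\_n$ (these means are defined earlier in the paper, so the formulas used here are an assumption):

```python
import math

A = lambda a, b: (a + b) / 2        # arithmetic mean
G = lambda a, b: math.sqrt(a * b)   # geometric mean

def Ln(s, t, n):
    # generalized logarithmic mean (assumed standard definition)
    return ((t ** (n + 1) - s ** (n + 1)) / ((n + 1) * (t - s))) ** (1 / n)

s, t, n = 1.0, 2.0, 3
An1, Gn1 = A(abs(s) ** (n - 1), abs(t) ** (n - 1)), G(abs(s) ** (n - 1), abs(t) ** (n - 1))
lhs27 = abs(A(s ** n, t ** n) - Ln(s, t, n) ** n)
assert lhs27 <= abs(n) * (t - s) / 4 * (0.75 * An1 + 0.25 * Gn1)       # (27)
lhs28 = abs(A(s, t) ** n - Ln(s, t, n) ** n)
assert lhs28 <= abs(n) * (t - s) / 4 * (7 / 12 * An1 + 5 / 12 * Gn1)   # (28)
print("inequalities (27) and (28) hold for s=1, t=2, n=3")
```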

**Remark 1.** *For any q* ≥ 1*, by the inequalities* (17) *and* (20)*, we have*

$$\frac{|n|(t-s)}{4} \left[ \frac{3}{4} A\left( |s|^{n-1}, |t|^{n-1} \right) + \frac{1}{4} G\left( |s|^{n-1}, |t|^{n-1} \right) \right] \le \frac{|n|(t-s)}{4} \left[ A\left( |s|^{(n-1)q}, |t|^{(n-1)q} \right) \right]^{\frac{1}{q}} \le \frac{|n|(t-s)}{4} \left[ A\left( |s|^{(n-1)q}, |t|^{(n-1)q} \right) \right]^{\frac{1}{q}} \le \frac{|n|(t-s)}{4} \left[ A\left( |s|^{(n-1)q}, |t|^{(n-1)q} \right) \right]^{\frac{1}{q}} \le \frac{|n|(t-s)}{4} \left[ A\left( |s|^{(n-1)q}, |t|^{(n-1)q} \right) \right]^{\frac{1}{q}} \le \frac{|n|(t-s)}{4} \left[ A\left( |s|^{(n-1)q}, |t|^{(n-1)q} \right) \right]^{\frac{1}{q}} \le \frac{|n|(t-s)}{4} \left[ A\left( |s|^{(n-1)q}, |t|^{(n-1)q} \right) \right]^{\frac{1}{q}} \le \frac{|n|(t-s)}{4} \left[ A\left( |s|^{(n-1)q}, |t|^{(n-1)q} \right) \right]^{\frac{1}{q}} \left[ A\left( |s|^{(n-1)q}, |t|^{(n-1)q} \right) \right]^{\frac{1}{q}}.$$

*and*

$$\frac{|n|(t-s)}{4} \left[ \frac{7}{12} A\left( |s|^{n-1}, |t|^{n-1} \right) + \frac{5}{12} G\left( |s|^{n-1}, |t|^{n-1} \right) \right] \le \frac{|n|(t-s)}{4} \left[ A\left( |s|^{(n-1)q}, |t|^{(n-1)q} \right) \right]^{\frac{1}{q}}.$$

*Thus, for n* ≥ 3 *or n* ≤ −2*, we obtain an improvement of the inequalities (11) and (12), which is an improvement of the inequalities (5) and (6).*

**Proposition 2.** *Suppose that $s, t \in \mathbb{R}$, $s < t$, and $0 \notin [s, t]$; then*

$$|A\left(s^{-1},t^{-1}\right) - L(s,t)^{-1}| \le \frac{t-s}{4} \left[\frac{3}{4}A\left(s^{-2},t^{-2}\right) + \frac{1}{4}G\left(s^{-2},t^{-2}\right)\right],\tag{29}$$

$$|A(s,t)^{-1} - L(s,t)^{-1}| \le \frac{t-s}{4} \left[ \frac{7}{12} A\left(s^{-2}, t^{-2}\right) + \frac{5}{12} G\left(s^{-2}, t^{-2}\right) \right].\tag{30}$$

**Proof.** Let $f(x) = \frac{1}{x}$, $x \in [s, t]$; then

$$\left( |f'(x)|^{\frac{1}{2}} \right)'' = \frac{2}{|x|^3} \ge 0.$$

Thus, $|f'(x)|^{\frac{1}{2}}$ is convex on $[s, t]$. It follows that (29) and (30) hold by using Corollary 1 and Corollary 2, respectively.
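A similar spot check applies to (29) and (30), assuming the standard logarithmic mean $L(s, t) = \frac{t - s}{\ln t - \ln s}$ (an assumption, since $L$ is defined earlier in the paper):

```python
import math

A = lambda a, b: (a + b) / 2
G = lambda a, b: math.sqrt(a * b)
Lmean = lambda s, t: (t - s) / (math.log(t) - math.log(s))  # logarithmic mean (assumed)

s, t = 1.0, 2.0
rhs29 = (t - s) / 4 * (0.75 * A(s ** -2, t ** -2) + 0.25 * G(s ** -2, t ** -2))
assert abs(A(1 / s, 1 / t) - 1 / Lmean(s, t)) <= rhs29                  # (29)
rhs30 = (t - s) / 4 * (7 / 12 * A(s ** -2, t ** -2) + 5 / 12 * G(s ** -2, t ** -2))
assert abs(1 / A(s, t) - 1 / Lmean(s, t)) <= rhs30                      # (30)
print("inequalities (29) and (30) hold for s=1, t=2")
```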

**Remark 2.** *For any q* ≥ 1*, by the inequalities* (17) *and* (20)*, we have*

$$\frac{t-s}{4}\left[\frac{3}{4}A\left(s^{-2},t^{-2}\right)+\frac{1}{4}G\left(s^{-2},t^{-2}\right)\right] \le \frac{t-s}{4}\left[A\left(|s|^{-2q},|t|^{-2q}\right)\right]^{\frac{1}{q}}$$

*and*

$$\frac{t-s}{4}\left[\frac{7}{12}A\left(s^{-2},t^{-2}\right)+\frac{5}{12}G\left(s^{-2},t^{-2}\right)\right] \le \frac{t-s}{4}\left[A\left(|s|^{-2q},|t|^{-2q}\right)\right]^{\frac{1}{q}}.$$

*Thus, we obtain an improvement of the inequalities (13) and (14), which is an improvement of the inequalities (7) and (8).*

**Proposition 3.** *Suppose that t* > *s* > 0*, then*

$$\ln I(s,t) - \ln G(s,t) \le \frac{t-s}{4} \left[ \frac{3}{4} A\left(s^{-1}, t^{-1}\right) + \frac{1}{4} G\left(s^{-1}, t^{-1}\right) \right],\tag{31}$$

$$\ln A(s, t) - \ln I(s, t) \le \frac{t - s}{4} \left[ \frac{7}{12} A\left(s^{-1}, t^{-1}\right) + \frac{5}{12} G\left(s^{-1}, t^{-1}\right) \right]. \tag{32}$$

**Proof.** Let $f(x) = \ln x$, $x \in [s, t]$; then

$$\left( |f'(x)|^{\frac{1}{2}} \right)'' = \frac{3}{4} |x|^{-\frac{5}{2}} \ge 0.$$

Thus, $|f'(x)|^{\frac{1}{2}}$ is convex on $[s, t]$. It follows that (31) and (32) hold by using Corollary 1 and Corollary 2, respectively.
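A final spot check for (31) and (32), assuming the standard identric mean $I(s, t)$ (its formula below is an assumption, since the definition appears earlier in the paper):

```python
import math

A = lambda a, b: (a + b) / 2
G = lambda a, b: math.sqrt(a * b)

def I(s, t):
    # identric mean (assumed standard definition): (1/e)(t^t / s^s)^{1/(t-s)}
    return math.exp((t * math.log(t) - s * math.log(s)) / (t - s) - 1)

s, t = 1.0, 2.0
rhs31 = (t - s) / 4 * (0.75 * A(1 / s, 1 / t) + 0.25 * G(1 / s, 1 / t))
assert math.log(I(s, t)) - math.log(G(s, t)) <= rhs31                   # (31)
rhs32 = (t - s) / 4 * (7 / 12 * A(1 / s, 1 / t) + 5 / 12 * G(1 / s, 1 / t))
assert math.log(A(s, t)) - math.log(I(s, t)) <= rhs32                   # (32)
print("inequalities (31) and (32) hold for s=1, t=2")
```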

**Author Contributions:** Conceptualization, T.Z. and A.C.; methodology, T.Z.; validation, T.Z.; formal analysis, T.Z. and A.C.; investigation, T.Z.; resources, T.Z.; writing—original draft preparation, T.Z.; funding acquisition, T.Z. and A.C. All authors have read and agreed to the published version of the manuscript.

**Funding:** This work was supported by the National Natural Science Foundation of China (No. 11761029, No. 62161044) and the Natural Science Foundation of Inner Mongolia (Grant No. 2021LHMS01008).

**Data Availability Statement:** Not applicable.

**Conflicts of Interest:** The authors declare no conflict of interest.

**Disclaimer/Publisher's Note:** The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
