Article

A Refined Jensen Inequality Connected to an Arbitrary Positive Finite Sequence

by Shanhe Wu 1,*, Muhammad Adil Khan 2, Tareq Saeed 3 and Zaid Mohammed Mohammed Mahdi Sayed 2,4
1 Institute of Applied Mathematics, Longyan University, Longyan 364012, China
2 Department of Mathematics, University of Peshawar, Peshawar 25000, Pakistan
3 Department of Mathematics, King Abdulaziz University, Jeddah 21589, Saudi Arabia
4 Department of Mathematics, University of Sáadah, Sáadah 1872, Yemen
* Author to whom correspondence should be addressed.
Mathematics 2022, 10(24), 4817; https://doi.org/10.3390/math10244817
Submission received: 12 November 2022 / Revised: 13 December 2022 / Accepted: 14 December 2022 / Published: 18 December 2022
(This article belongs to the Special Issue Mathematical Inequalities, Models and Applications)

Abstract: The prime purpose of this paper is to provide a refinement of Jensen’s inequality in connection with a positive finite sequence. We deal with the refinement for particular cases and point out the relation between the new result and earlier results on Jensen’s inequality. As consequences, we obtain refinements of the quasi-arithmetic and power mean inequalities. Finally, several results in information theory are obtained with the help of the main results.

1. Introduction

Mathematical inequalities are of great importance in almost every field of science, such as information theory [1], engineering [2], the qualitative theory of integral equations, mathematical statistics [3], differential equations [4], and economics [5]. Several mathematicians have taken a great interest in refining, proving, and generalizing numerous mathematical inequalities; owing to these rapid developments, mathematical inequalities have come to be regarded as an independent field of modern applied analysis. Convexity plays a key role in developments in the field of mathematical inequalities [6,7]. The importance of convexity in the theory of inequalities is well known, because several very useful inequalities originate from this concept, such as the majorization inequality and Jensen’s, Slater’s, and Sherman’s inequalities [8]. Among these inequalities, the Jensen inequality has a special significance because it produces many other important classical inequalities, such as the Hölder, Ky Fan, and AM-GM inequalities, and it has a great number of applications in several areas of mathematics. The Jensen inequality is expressed as follows [9]:
$$f\left(\frac{\sum_{i=1}^{n} p_i x_i}{P_n}\right) \le \frac{1}{P_n}\sum_{i=1}^{n} p_i f(x_i), \tag{1}$$
if $f: J \to \mathbb{R}$ is a convex function defined on the interval $J$, $x_i \in J$, and $p_i > 0$ for $i = 1, 2, \ldots, n$, with $P_n = \sum_{i=1}^{n} p_i$. This inequality has been widely applied in many branches of science. In [10], Azar applied some versions of Jensen’s inequality in finance and examined the statistical importance of different Jensen-type inequalities by simulating random normal variables. He also showed that Jensen’s inequality guarantees that the expected utility paradigm is not simply a theoretical or mathematical problem, but one with statistical support. Jensen’s inequality is pertinent to every area of biomathematics that involves nonlinear processes, and it provides a mechanism for predicting some direct outcomes of environmental variation in biological systems [11]. In information theory, the non-negativity of the Kullback–Leibler divergence, as well as bounds for the Shannon entropy, Hellinger distance, Bhattacharyya coefficient, total variation distance, and Jeffrey distance, can be obtained by using this inequality [12]. By using the majorization concept, a refinement of the Jensen inequality was given in [13], while some interesting bounds for the Jensen difference with several applications were provided in [14].
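For readers who want a quick sanity check, the weighted Jensen inequality (1) is easy to verify numerically. The following Python snippet is a minimal sketch and is not part of the original paper; the helper name `jensen_sides` and the choice $f(x) = x^2$ are ours.

```python
import random

def jensen_sides(f, x, p):
    """Return (f(weighted mean of x), weighted mean of f(x)) for positive weights p."""
    P = sum(p)
    mean_x = sum(pi * xi for pi, xi in zip(p, x)) / P
    mean_f = sum(pi * f(xi) for pi, xi in zip(p, x)) / P
    return f(mean_x), mean_f

random.seed(0)
x = [random.uniform(0.1, 5.0) for _ in range(10)]
p = [random.uniform(0.1, 2.0) for _ in range(10)]

lhs, rhs = jensen_sides(lambda t: t * t, x, p)  # f(x) = x^2 is convex on (0, oo)
assert lhs <= rhs + 1e-12                       # inequality (1)
print(f"f(mean) = {lhs:.6f} <= mean of f = {rhs:.6f}")
```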
Due to the widespread use of this inequality, many mathematicians have taken a keen interest in studying its various aspects. In [15], Steffensen proved the same inequality (1) under relaxed conditions on the weights, namely $p_1 > 0$, $p_1 + p_2 + \cdots + p_k \ge 0$ for $k = 1, 2, \ldots, n-1$, and $P_n > 0$, while imposing a monotonicity condition on the tuple $(x_1, x_2, \ldots, x_n)$. In 1981, Slater derived a very important inequality for increasing convex functions that is related to the Jensen inequality [16], while in 1985, Pečarić [17] proved the same inequality for a convex function without the monotonicity condition on the function; moreover, a multidimensional version of that inequality was established in [18]. In 2003, Mercer brought to the literature a variant of the Jensen inequality [19], and many results are now devoted to this Jensen–Mercer inequality. Niezgoda used the concepts of majorization and separable sequences to derive a generalization of the Jensen–Mercer inequality [20]. In [21], Dragomir considered some indexing subsets of $\{1, 2, \ldots, n\}$ and constructed functionals associated with these index sets and convex functions. With the aid of these functionals, a refinement of Jensen’s inequality was derived, which implies the earlier refinement obtained in [22].
This manuscript is organized as follows: In Section 2, a refinement of Jensen’s inequality that is associated with an arbitrary positive sequence and some indexing sets is provided (Theorem 1). The result is elaborated for particular indexing sets (Corollary 1). In Remarks 1 and 2, it is explained that the obtained results can generate the earlier results. In Section 3, the main results will be used to deduce refinements of the quasi-arithmetic and power mean inequalities. In Section 4, we illustrate several applications of the obtained results in information theory.

2. Refinement of Jensen’s Inequality

Before describing the main results, we first give some notations that will be used throughout the paper.
If $S$ is a subset of $\{1, 2, \ldots, n\}$ and $t_i \in \mathbb{R}$ for $i = 1, 2, \ldots, n$, then $S^{c} = \{1, 2, \ldots, n\} \setminus S$, $T_{S} = \sum_{i \in S} t_i$, and $T_n = \sum_{i=1}^{n} t_i$.
Our first main result is as follows.
Theorem 1.
Assume that $f: J \to \mathbb{R}$ is a convex function defined on the interval $J$, and let $w_i, p_i > 0$ and $x_i \in J$ for $i = 1, 2, \ldots, n$. Then, for arbitrary non-empty proper subsets $K, L, M$ of $\{1, 2, \ldots, n\}$, we have
$$f\left(\frac{\sum_{i=1}^{n} p_i x_i}{P_n}\right) \le \frac{1}{P_n W_n}\left[W_L P_K f\left(\frac{\sum_{i \in K} p_i x_i}{P_K}\right) + W_L P_{K^c} f\left(\frac{\sum_{i \in K^c} p_i x_i}{P_{K^c}}\right) + W_{L^c} P_M f\left(\frac{\sum_{i \in M} p_i x_i}{P_M}\right) + W_{L^c} P_{M^c} f\left(\frac{\sum_{i \in M^c} p_i x_i}{P_{M^c}}\right)\right] \le \frac{1}{P_n}\sum_{i=1}^{n} p_i f(x_i). \tag{2}$$
The above inequalities hold in the reverse direction if the function f is concave.
Proof. 
We begin by expressing the left-hand side of Jensen’s inequality as:
$$\begin{aligned} f\left(\frac{\sum_{i=1}^{n} p_i x_i}{P_n}\right) &= f\left(\sum_{i=1}^{n} w_i \, \frac{\sum_{i=1}^{n} p_i x_i}{P_n W_n}\right) = f\left(\Bigl(\sum_{i \in L} w_i + \sum_{i \in L^c} w_i\Bigr) \frac{\sum_{i=1}^{n} p_i x_i}{P_n W_n}\right) \\ &= f\left(\frac{1}{P_n W_n}\left[\sum_{i \in L} w_i \Bigl(\sum_{i \in K} p_i x_i + \sum_{i \in K^c} p_i x_i\Bigr) + \sum_{i \in L^c} w_i \Bigl(\sum_{i \in M} p_i x_i + \sum_{i \in M^c} p_i x_i\Bigr)\right]\right) \\ &= f\left(\frac{1}{P_n W_n}\left[\sum_{i \in L} w_i \sum_{i \in K} p_i x_i + \sum_{i \in L} w_i \sum_{i \in K^c} p_i x_i + \sum_{i \in L^c} w_i \sum_{i \in M} p_i x_i + \sum_{i \in L^c} w_i \sum_{i \in M^c} p_i x_i\right]\right) \\ &= f\left(\frac{1}{P_n W_n}\left[\sum_{i \in L} w_i \sum_{i \in K} p_i \, \frac{\sum_{i \in K} p_i x_i}{\sum_{i \in K} p_i} + \sum_{i \in L} w_i \sum_{i \in K^c} p_i \, \frac{\sum_{i \in K^c} p_i x_i}{\sum_{i \in K^c} p_i} + \sum_{i \in L^c} w_i \sum_{i \in M} p_i \, \frac{\sum_{i \in M} p_i x_i}{\sum_{i \in M} p_i} + \sum_{i \in L^c} w_i \sum_{i \in M^c} p_i \, \frac{\sum_{i \in M^c} p_i x_i}{\sum_{i \in M^c} p_i}\right]\right). \end{aligned} \tag{3}$$
Note that
$$\frac{1}{P_n W_n}\left[\sum_{i \in L} w_i \sum_{i \in K} p_i + \sum_{i \in L} w_i \sum_{i \in K^c} p_i + \sum_{i \in L^c} w_i \sum_{i \in M} p_i + \sum_{i \in L^c} w_i \sum_{i \in M^c} p_i\right] = \sum_{i \in L} w_i \, \frac{1}{P_n W_n}\left[\sum_{i \in K} p_i + \sum_{i \in K^c} p_i\right] + \sum_{i \in L^c} w_i \, \frac{1}{P_n W_n}\left[\sum_{i \in M} p_i + \sum_{i \in M^c} p_i\right] = \frac{1}{W_n}\left[\sum_{i \in L} w_i + \sum_{i \in L^c} w_i\right] = 1. \tag{4}$$
Hence, by making use of Jensen’s inequality in (3) together with (4), we obtain
$$\begin{aligned} f\left(\frac{1}{P_n}\sum_{i=1}^{n} p_i x_i\right) &\le \frac{1}{P_n W_n}\left[\sum_{i \in L} w_i \sum_{i \in K} p_i \, f\left(\frac{\sum_{i \in K} p_i x_i}{\sum_{i \in K} p_i}\right) + \sum_{i \in L} w_i \sum_{i \in K^c} p_i \, f\left(\frac{\sum_{i \in K^c} p_i x_i}{\sum_{i \in K^c} p_i}\right) + \sum_{i \in L^c} w_i \sum_{i \in M} p_i \, f\left(\frac{\sum_{i \in M} p_i x_i}{\sum_{i \in M} p_i}\right) + \sum_{i \in L^c} w_i \sum_{i \in M^c} p_i \, f\left(\frac{\sum_{i \in M^c} p_i x_i}{\sum_{i \in M^c} p_i}\right)\right] \\ &\le \frac{1}{P_n W_n}\left[\sum_{i \in L} w_i \sum_{i \in K} p_i \, \frac{\sum_{i \in K} p_i f(x_i)}{\sum_{i \in K} p_i} + \sum_{i \in L} w_i \sum_{i \in K^c} p_i \, \frac{\sum_{i \in K^c} p_i f(x_i)}{\sum_{i \in K^c} p_i} + \sum_{i \in L^c} w_i \sum_{i \in M} p_i \, \frac{\sum_{i \in M} p_i f(x_i)}{\sum_{i \in M} p_i} + \sum_{i \in L^c} w_i \sum_{i \in M^c} p_i \, \frac{\sum_{i \in M^c} p_i f(x_i)}{\sum_{i \in M^c} p_i}\right] \\ &= \frac{1}{P_n W_n}\left[\sum_{i \in L} w_i \sum_{i \in K} p_i f(x_i) + \sum_{i \in L} w_i \sum_{i \in K^c} p_i f(x_i) + \sum_{i \in L^c} w_i \sum_{i \in M} p_i f(x_i) + \sum_{i \in L^c} w_i \sum_{i \in M^c} p_i f(x_i)\right] = \frac{1}{P_n}\sum_{i=1}^{n} p_i f(x_i), \end{aligned} \tag{5}$$
which gives (2). □
Remark 1.
It is important to note that Theorem 1 implies the refinement of Jensen’s inequality given in [21]; that refinement can be derived from (2) by choosing $K = M$.
In the upcoming result, we obtain a refined Jensen inequality for particular index sets, which gives the inequality established in [22].
Corollary 1.
Assume that $f: J \to \mathbb{R}$ is a convex function defined on the interval $J$, and let $w_i, p_i > 0$ and $x_i \in J$ for $i = 1, 2, \ldots, n$. Then, for any $k, l, m \in \{1, 2, \ldots, n\}$, we have
$$f\left(\frac{1}{P_n}\sum_{i=1}^{n} p_i x_i\right) \le \frac{1}{P_n W_n}\left[w_l p_k f(x_k) + w_l (P_n - p_k) f\left(\frac{\sum_{i=1}^{n} p_i x_i - p_k x_k}{P_n - p_k}\right) + (W_n - w_l) p_m f(x_m) + (W_n - w_l)(P_n - p_m) f\left(\frac{\sum_{i=1}^{n} p_i x_i - p_m x_m}{P_n - p_m}\right)\right] \le \frac{1}{P_n}\sum_{i=1}^{n} p_i f(x_i). \tag{6}$$
The inequalities in (6) are reversed if the function f is concave.
Proof. 
Taking $K = \{k\}$, $L = \{l\}$, and $M = \{m\}$ in (2), we get (6). □
Remark 2.
In Corollary 1, if $k = m$, then (6) becomes the inequality (2.1) given in [22].
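As a numerical illustration (ours, not part of the original paper), the three quantities in (2) can be compared for random data and arbitrarily chosen index sets. The sketch below assumes the convex function $f(x) = x^2$; the helper names `subset_term` and `refinement_chain` are ours.

```python
import random

def subset_term(f, x, p, S):
    """P_S * f( (sum_{i in S} p_i x_i) / P_S )."""
    PS = sum(p[i] for i in S)
    return PS * f(sum(p[i] * x[i] for i in S) / PS)

def refinement_chain(f, x, p, w, K, L, M):
    """Return the left, middle, and right quantities of inequality (2)."""
    idx = set(range(len(x)))
    Pn, Wn = sum(p), sum(w)
    WL = sum(w[i] for i in L)
    left = f(sum(pi * xi for pi, xi in zip(p, x)) / Pn)
    middle = (WL * (subset_term(f, x, p, K) + subset_term(f, x, p, idx - K))
              + (Wn - WL) * (subset_term(f, x, p, M) + subset_term(f, x, p, idx - M))) / (Pn * Wn)
    right = sum(pi * f(xi) for pi, xi in zip(p, x)) / Pn
    return left, middle, right

random.seed(1)
n = 12
x = [random.uniform(0.1, 4.0) for _ in range(n)]
p = [random.uniform(0.1, 2.0) for _ in range(n)]
w = [random.uniform(0.1, 2.0) for _ in range(n)]
K, L, M = {0, 3, 7}, {1, 2}, {4, 5, 9}   # arbitrary non-empty proper subsets of {0, ..., 11}

left, middle, right = refinement_chain(lambda t: t * t, x, p, w, K, L, M)
assert left <= middle + 1e-12 <= right + 1e-12
print(f"{left:.6f} <= {middle:.6f} <= {right:.6f}")
```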

3. Applications for Mean Inequalities

Let $p = (p_1, \ldots, p_n)$ and $x = (x_1, \ldots, x_n)$ be positive $n$-tuples and $I \subseteq \{1, 2, \ldots, n\}$. If $r \in \mathbb{R}$, then the power mean $PM(x; p; r, I)$ is defined by:
$$PM(x; p; r, I) = \begin{cases} \left(\dfrac{1}{P_I}\displaystyle\sum_{i \in I} p_i x_i^{r}\right)^{\frac{1}{r}}, & \text{if } r \neq 0, \\[2mm] \left(\displaystyle\prod_{i \in I} x_i^{p_i}\right)^{\frac{1}{P_I}}, & \text{if } r = 0. \end{cases} \tag{7}$$
In particular, if $I = \{1, 2, \ldots, n\}$, then we denote the power mean by $PM(x; p; r)$.
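A direct transcription of definition (7) into code may help fix the notation; the following Python sketch is ours (the function name `power_mean` is an assumption), and it handles the case $r = 0$ through logarithms.

```python
import math

def power_mean(x, p, r, I=None):
    """Power mean PM(x; p; r, I) as in (7); I defaults to all indices."""
    I = range(len(x)) if I is None else I
    PI = sum(p[i] for i in I)
    if r != 0:
        return (sum(p[i] * x[i] ** r for i in I) / PI) ** (1.0 / r)
    # r = 0: weighted geometric mean, computed via logarithms
    return math.exp(sum(p[i] * math.log(x[i]) for i in I) / PI)

x = [1.0, 2.0, 3.0, 4.0]
p = [0.5, 1.0, 1.5, 2.0]
# The power mean is non-decreasing in r (harmonic <= geometric <= arithmetic),
# which is the monotonicity refined by Corollary 2 below.
print(power_mean(x, p, -1), power_mean(x, p, 0), power_mean(x, p, 1))
```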
As applications of the main result, we deduce refinements of the power mean inequality.
Corollary 2.
Let $p_i, x_i, w_i \in \mathbb{R}_+$ $(i = 1, 2, \ldots, n)$. If $s, t \in \mathbb{R}$ with $s \le t$ and $K, L, M$ are non-empty proper subsets of $\{1, 2, \ldots, n\}$, then
$$PM(x; p; s) \le \left[\frac{1}{P_n W_n}\Bigl(W_L P_K \, PM^{t}(x; p; s, K) + W_L P_{K^c} \, PM^{t}(x; p; s, K^c) + W_{L^c} P_M \, PM^{t}(x; p; s, M) + W_{L^c} P_{M^c} \, PM^{t}(x; p; s, M^c)\Bigr)\right]^{\frac{1}{t}} \le PM(x; p; t), \quad t \neq 0, \tag{8}$$
$$PM(x; p; s) \le \exp\left[\frac{1}{P_n W_n}\Bigl(W_L P_K \log PM(x; p; s, K) + W_L P_{K^c} \log PM(x; p; s, K^c) + W_{L^c} P_M \log PM(x; p; s, M) + W_{L^c} P_{M^c} \log PM(x; p; s, M^c)\Bigr)\right] \le PM(x; p; t), \quad t = 0, \tag{9}$$
$$PM(x; p; s) \le \left[\frac{1}{P_n W_n}\Bigl(W_L P_K \, PM^{s}(x; p; t, K) + W_L P_{K^c} \, PM^{s}(x; p; t, K^c) + W_{L^c} P_M \, PM^{s}(x; p; t, M) + W_{L^c} P_{M^c} \, PM^{s}(x; p; t, M^c)\Bigr)\right]^{\frac{1}{s}} \le PM(x; p; t), \quad s, t \neq 0, \tag{10}$$
$$PM(x; p; s) \le \exp\left[\frac{1}{P_n W_n}\Bigl(W_L P_K \log PM(x; p; t, K) + W_L P_{K^c} \log PM(x; p; t, K^c) + W_{L^c} P_M \log PM(x; p; t, M) + W_{L^c} P_{M^c} \log PM(x; p; t, M^c)\Bigr)\right] \le PM(x; p; t), \quad s = 0. \tag{11}$$
Proof. 
Let $s, t \neq 0$ and $\phi(x) = x^{\frac{t}{s}}$, $x > 0$. Clearly, $\phi''(x) = \frac{t}{s}\left(\frac{t}{s} - 1\right)x^{\frac{t}{s} - 2}$. The function $\phi$ is convex if $\frac{t}{s} \ge 1$ or $\frac{t}{s} < 0$. If $\frac{t}{s} \ge 1$, then $s, t > 0$ since $s \le t$; if $\frac{t}{s} < 0$, then $t > 0$ and $s < 0$ since $s \le t$. In both cases, by using the function $\phi(x) = x^{\frac{t}{s}}$ in (2), substituting $x_i$ with $x_i^{s}$, and then taking the power $\frac{1}{t}$, we obtain (8). Similarly, in the case $0 < \frac{t}{s} \le 1$, $\phi$ is concave and $t < 0$; therefore, by using Theorem 1 for the concave function $\phi(x) = x^{\frac{t}{s}}$, substituting $x_i$ with $x_i^{s}$, and then taking the power $\frac{1}{t}$, we again deduce (8).
Similarly to the above procedure, we can prove (10) by using the function $\phi(x) = x^{\frac{s}{t}}$, $x > 0$, and substituting $x_i$ with $x_i^{t}$ in Theorem 1.
The inequalities (9) and (11) can easily be proven by letting $t \to 0$ and $s \to 0$ in (8) and (10), respectively. □
Let $x = (x_1, \ldots, x_n)$ and $p = (p_1, \ldots, p_n)$ be positive $n$-tuples and $I \subseteq \{1, 2, \ldots, n\}$. If $h$ is a strictly monotone and continuous function, then the quasi-arithmetic mean is defined by:
$$QM(p; x; h; I) = h^{-1}\left(\frac{1}{P_I}\sum_{i \in I} p_i h(x_i)\right). \tag{12}$$
In particular, if $I = \{1, 2, \ldots, n\}$, then we denote the quasi-arithmetic mean by $QM(p; x; h)$.
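For concreteness, here is a small Python sketch of definition (12) (ours, with the inverse function passed explicitly as `h_inv`); choosing $h(t) = \log t$ recovers the weighted geometric mean, and $h(t) = t$ the weighted arithmetic mean.

```python
import math

def quasi_arithmetic_mean(p, x, h, h_inv, I=None):
    """Quasi-arithmetic mean QM(p; x; h; I) as in (12)."""
    I = range(len(x)) if I is None else I
    PI = sum(p[i] for i in I)
    return h_inv(sum(p[i] * h(x[i]) for i in I) / PI)

x = [1.0, 2.0, 3.0, 4.0]
p = [0.5, 1.0, 1.5, 2.0]
print(quasi_arithmetic_mean(p, x, math.log, math.exp))       # weighted geometric mean
print(quasi_arithmetic_mean(p, x, lambda t: t, lambda t: t))  # weighted arithmetic mean
```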
Corollary 3.
Let $p_i, w_i, x_i \in (0, \infty)$ $(i = 1, 2, \ldots, n)$. If $K, L, M$ are non-empty proper subsets of $\{1, 2, \ldots, n\}$ and $g \circ h^{-1}$ is a convex function, then
$$g\bigl(QM(p; x; h)\bigr) \le \frac{1}{P_n W_n}\left[W_L P_K \, g\bigl(QM(p; x; h; K)\bigr) + W_L P_{K^c} \, g\bigl(QM(p; x; h; K^c)\bigr) + W_{L^c} P_M \, g\bigl(QM(p; x; h; M)\bigr) + W_{L^c} P_{M^c} \, g\bigl(QM(p; x; h; M^c)\bigr)\right] \le \frac{1}{P_n}\sum_{i=1}^{n} p_i g(x_i). \tag{13}$$
The above inequalities hold in the opposite direction if the function $g \circ h^{-1}$ is concave.
Proof. 
Using (2) with $h(x_i)$ in place of $x_i$ and $g \circ h^{-1}$ in place of $f$, we obtain the required inequality. □

4. Applications in Information Theory

In order to illustrate applications of the new result in information theory, first, we recall some necessary concepts.
Assume that $f: \mathbb{R}_+ \to \mathbb{R}$ is a convex function and that $a = (a_1, a_2, \ldots, a_n)$ and $b = (b_1, b_2, \ldots, b_n)$ are $n$-tuples with $a_i, b_i \in \mathbb{R}_+$; then, the Csiszár $f$-divergence functional [23] is defined by
$$C_f(a, b) = \sum_{i=1}^{n} b_i f\left(\frac{a_i}{b_i}\right).$$
We recall the following important notions in information theory [14,24]. Let $a = (a_1, a_2, \ldots, a_n)$ and $b = (b_1, b_2, \ldots, b_n)$ be positive $n$-tuples such that $\sum_{i=1}^{n} a_i = \sum_{i=1}^{n} b_i = 1$.
Kullback–Leibler divergence: $K_d(a, b) = \sum_{i=1}^{n} a_i \log \frac{a_i}{b_i}$.
Shannon entropy: $S(a) = -\sum_{i=1}^{n} a_i \log a_i$.
Total variation distance: $V_d(a, b) = \sum_{i=1}^{n} |a_i - b_i|$.
Jeffrey distance: $J_d(a, b) = \sum_{i=1}^{n} (a_i - b_i) \log \frac{a_i}{b_i}$.
Bhattacharyya coefficient: $B_d(a, b) = \sum_{i=1}^{n} \sqrt{a_i b_i}$.
Hellinger distance: $H_d(a, b) = \sum_{i=1}^{n} \bigl(\sqrt{a_i} - \sqrt{b_i}\bigr)^{2}$.
Triangular discrimination: $T_d(a, b) = \sum_{i=1}^{n} \frac{(a_i - b_i)^{2}}{a_i + b_i}$.
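Most of the distances above are instances of the Csiszár functional $C_f$ for a suitable convex $f$ (the Bhattacharyya coefficient and Shannon entropy are computed directly). The Python sketch below is ours and only illustrates these definitions; the function name `csiszar_f_divergence` is an assumption.

```python
import math

def csiszar_f_divergence(f, a, b):
    """Csiszar f-divergence C_f(a, b) = sum_i b_i * f(a_i / b_i)."""
    return sum(bi * f(ai / bi) for ai, bi in zip(a, b))

a = [0.1, 0.2, 0.3, 0.4]
b = [0.25, 0.25, 0.25, 0.25]

kl   = csiszar_f_divergence(lambda t: t * math.log(t), a, b)          # Kullback-Leibler divergence
tv   = csiszar_f_divergence(lambda t: abs(t - 1), a, b)               # total variation distance
jeff = csiszar_f_divergence(lambda t: (t - 1) * math.log(t), a, b)    # Jeffrey distance
hell = csiszar_f_divergence(lambda t: (math.sqrt(t) - 1) ** 2, a, b)  # Hellinger distance
tri  = csiszar_f_divergence(lambda t: (t - 1) ** 2 / (t + 1), a, b)   # triangular discrimination
bhat = sum(math.sqrt(ai * bi) for ai, bi in zip(a, b))                # Bhattacharyya coefficient
shan = -sum(ai * math.log(ai) for ai in a)                            # Shannon entropy

print(kl, tv, jeff, hell, tri, bhat, shan)
```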
Theorem 2.
Let $f$ be a convex function defined on $\mathbb{R}_+$ and let $a = (a_1, a_2, \ldots, a_n)$ and $b = (b_1, b_2, \ldots, b_n)$ be two positive $n$-tuples. Then, for arbitrary non-empty proper subsets $K, L, M$ of $\{1, 2, \ldots, n\}$, we have
$$f\left(\frac{\sum_{i=1}^{n} a_i}{\sum_{i=1}^{n} b_i}\right)\sum_{i=1}^{n} b_i \le \frac{1}{W_n}\left[W_L \sum_{i \in K} b_i \, f\left(\frac{\sum_{i \in K} a_i}{\sum_{i \in K} b_i}\right) + W_L \sum_{i \in K^c} b_i \, f\left(\frac{\sum_{i \in K^c} a_i}{\sum_{i \in K^c} b_i}\right) + W_{L^c} \sum_{i \in M} b_i \, f\left(\frac{\sum_{i \in M} a_i}{\sum_{i \in M} b_i}\right) + W_{L^c} \sum_{i \in M^c} b_i \, f\left(\frac{\sum_{i \in M^c} a_i}{\sum_{i \in M^c} b_i}\right)\right] \le C_f(a, b). \tag{14}$$
Proof. 
Using Theorem 1 with $x_i = \frac{a_i}{b_i}$ and $p_i = b_i$ for $i \in \{1, 2, \ldots, n\}$, we obtain (14). □
Corollary 4.
Let $b = (b_1, b_2, \ldots, b_n)$ be a positive $n$-tuple such that $\sum_{i=1}^{n} b_i = 1$. Then, for arbitrary non-empty proper subsets $K, L, M$ of $\{1, 2, \ldots, n\}$, we have
$$S(b) \le \frac{1}{W_n}\left[W_L \sum_{i \in K} b_i \log \frac{|K|}{\sum_{i \in K} b_i} + W_L \sum_{i \in K^c} b_i \log \frac{|K^c|}{\sum_{i \in K^c} b_i} + W_{L^c} \sum_{i \in M} b_i \log \frac{|M|}{\sum_{i \in M} b_i} + W_{L^c} \sum_{i \in M^c} b_i \log \frac{|M^c|}{\sum_{i \in M^c} b_i}\right] \le \log n, \tag{15}$$
where $|T|$ denotes the number of elements of the set $T$.
Proof. 
Taking $f(x) = -\log x$, $x \in (0, \infty)$, and $a_i = 1$ for each $i \in \{1, 2, \ldots, n\}$ in (14), we obtain (15). □
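A quick numerical check of (15) (ours, not part of the original paper) for a random probability distribution, random positive weights $w_i$, and arbitrarily chosen index sets; the helper name `term` is an assumption.

```python
import math, random

def term(b, S):
    """sum_{i in S} b_i * log(|S| / sum_{i in S} b_i)."""
    bS = sum(b[i] for i in S)
    return bS * math.log(len(S) / bS)

random.seed(2)
n = 10
raw = [random.uniform(0.1, 1.0) for _ in range(n)]
b = [v / sum(raw) for v in raw]                  # probability distribution
w = [random.uniform(0.1, 2.0) for _ in range(n)]
K, L, M = {0, 1, 2}, {3, 4}, {5, 6}
idx = set(range(n))
Wn, WL = sum(w), sum(w[i] for i in L)

entropy = -sum(bi * math.log(bi) for bi in b)
middle = (WL * (term(b, K) + term(b, idx - K))
          + (Wn - WL) * (term(b, M) + term(b, idx - M))) / Wn
assert entropy <= middle + 1e-12 <= math.log(n) + 1e-12   # inequality (15)
print(f"{entropy:.6f} <= {middle:.6f} <= {math.log(n):.6f}")
```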
Corollary 5.
Let $a = (a_1, a_2, \ldots, a_n)$ and $b = (b_1, b_2, \ldots, b_n)$ be positive $n$-tuples such that $\sum_{i=1}^{n} a_i = \sum_{i=1}^{n} b_i = 1$. Then, for arbitrary non-empty proper subsets $K, L, M$ of $\{1, 2, \ldots, n\}$, we have
$$0 \le \frac{1}{W_n}\left[W_L \sum_{i \in K} a_i \log \frac{\sum_{i \in K} a_i}{\sum_{i \in K} b_i} + W_L \sum_{i \in K^c} a_i \log \frac{\sum_{i \in K^c} a_i}{\sum_{i \in K^c} b_i} + W_{L^c} \sum_{i \in M} a_i \log \frac{\sum_{i \in M} a_i}{\sum_{i \in M} b_i} + W_{L^c} \sum_{i \in M^c} a_i \log \frac{\sum_{i \in M^c} a_i}{\sum_{i \in M^c} b_i}\right] \le K_d(a, b). \tag{16}$$
Proof. 
If we apply the inequality (14) to $f(x) = x \log x$, $x \in \mathbb{R}_+$, then we obtain (16). □
Corollary 6.
If all of the conditions of Corollary 5 hold, then
$$V_d(a, b) \ge \frac{1}{W_n}\left[W_L \left|\sum_{i \in K} a_i - \sum_{i \in K} b_i\right| + W_L \left|\sum_{i \in K^c} a_i - \sum_{i \in K^c} b_i\right| + W_{L^c} \left|\sum_{i \in M} a_i - \sum_{i \in M} b_i\right| + W_{L^c} \left|\sum_{i \in M^c} a_i - \sum_{i \in M^c} b_i\right|\right]. \tag{17}$$
Proof. 
By applying the function $f(x) = |x - 1|$, $x \in \mathbb{R}_+$, in (14), we obtain (17). □
Corollary 7.
If all of the conditions of Corollary 5 hold, then
$$J_d(a, b) \ge \frac{1}{W_n}\left[W_L \left(\sum_{i \in K} a_i - \sum_{i \in K} b_i\right) \log \frac{\sum_{i \in K} a_i}{\sum_{i \in K} b_i} + W_L \left(\sum_{i \in K^c} a_i - \sum_{i \in K^c} b_i\right) \log \frac{\sum_{i \in K^c} a_i}{\sum_{i \in K^c} b_i} + W_{L^c} \left(\sum_{i \in M} a_i - \sum_{i \in M} b_i\right) \log \frac{\sum_{i \in M} a_i}{\sum_{i \in M} b_i} + W_{L^c} \left(\sum_{i \in M^c} a_i - \sum_{i \in M^c} b_i\right) \log \frac{\sum_{i \in M^c} a_i}{\sum_{i \in M^c} b_i}\right] \ge 0. \tag{18}$$
Proof. 
By choosing the function $f(x) = (x - 1)\log x$, $x \in (0, \infty)$, in (14), we obtain (18). □
Corollary 8.
If all of the assumptions of Corollary 5 hold, then
$$B_d(a, b) \le \frac{1}{W_n}\left[W_L \sqrt{\sum_{i \in K} a_i \sum_{i \in K} b_i} + W_L \sqrt{\sum_{i \in K^c} a_i \sum_{i \in K^c} b_i} + W_{L^c} \sqrt{\sum_{i \in M} a_i \sum_{i \in M} b_i} + W_{L^c} \sqrt{\sum_{i \in M^c} a_i \sum_{i \in M^c} b_i}\right]. \tag{19}$$
Proof. 
By utilizing the convex function $f(x) = -\sqrt{x}$, $x \in \mathbb{R}_+$, in (14), we obtain (19). □
Corollary 9.
Under the assumptions of Corollary 5, the following inequality holds:
$$H_d(a, b) \ge \frac{1}{W_n}\left[W_L \left(\sqrt{\sum_{i \in K} a_i} - \sqrt{\sum_{i \in K} b_i}\right)^{2} + W_L \left(\sqrt{\sum_{i \in K^c} a_i} - \sqrt{\sum_{i \in K^c} b_i}\right)^{2} + W_{L^c} \left(\sqrt{\sum_{i \in M} a_i} - \sqrt{\sum_{i \in M} b_i}\right)^{2} + W_{L^c} \left(\sqrt{\sum_{i \in M^c} a_i} - \sqrt{\sum_{i \in M^c} b_i}\right)^{2}\right] \ge 0. \tag{20}$$
Proof. 
Considering the function $f(x) = (\sqrt{x} - 1)^{2}$, $x \in (0, \infty)$, in (14), we deduce (20). □
Corollary 10.
Under the assumptions of Corollary 5, the following inequalities hold:
$$T_d(a, b) \ge \frac{1}{W_n}\left[W_L \frac{\left(\sum_{i \in K} a_i - \sum_{i \in K} b_i\right)^{2}}{\sum_{i \in K} a_i + \sum_{i \in K} b_i} + W_L \frac{\left(\sum_{i \in K^c} a_i - \sum_{i \in K^c} b_i\right)^{2}}{\sum_{i \in K^c} a_i + \sum_{i \in K^c} b_i} + W_{L^c} \frac{\left(\sum_{i \in M} a_i - \sum_{i \in M} b_i\right)^{2}}{\sum_{i \in M} a_i + \sum_{i \in M} b_i} + W_{L^c} \frac{\left(\sum_{i \in M^c} a_i - \sum_{i \in M^c} b_i\right)^{2}}{\sum_{i \in M^c} a_i + \sum_{i \in M^c} b_i}\right] \ge 0. \tag{21}$$
Proof. 
Since $f(x) = \frac{(x - 1)^{2}}{x + 1}$, $x \in (0, \infty)$, is convex, by using this function in (14), we obtain (21). □
Remark 3.
If we substitute $K = M$ in all of the results presented in this section, we may obtain the results derived in [21]. Furthermore, if we take $K = \{k\}$, $L = \{l\}$, and $M = \{m\}$ with $k = m$, then we may deduce the estimates for the information-theoretic notions obtained in [22].
In the remainder of this article, we present applications of the main result to the Zipf–Mandelbrot entropy. Before getting started, we recall some basic information about it.
Zipf’s law was generalized by Mandelbrot in 1966 [25]; this generalized law is called the Zipf–Mandelbrot law in the literature. The Zipf–Mandelbrot law was developed to better account for the low-rank words in a corpus, where $i < 100$ [26]: $l(i) = \frac{c}{(i + t)^{s}}$; in the particular case $t = 0$, this law reduces to Zipf’s law. There are numerous interesting applications of this generalized law in different fields, such as ecological field studies [27], information sciences [28], and linguistics [26,29].
The Zipf–Mandelbrot entropy $ZME(H, t, s)$ is given by
$$ZME(H, t, s) = \frac{s}{H_{n,t,s}}\sum_{j=1}^{n} \frac{\log(j + t)}{(j + t)^{s}} + \log H_{n,t,s}, \tag{22}$$
where $n \in \mathbb{N}$, $s > 0$, $t \ge 0$, $H_{n,t,s} = \sum_{i=1}^{n} \frac{1}{(i + t)^{s}}$, and the Zipf–Mandelbrot law is given by $ZML(i, n, t, s) = \frac{1}{(i + t)^{s} H_{n,t,s}}$.
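The following Python sketch (ours) implements the Zipf–Mandelbrot law and entropy as defined above and checks that (22) coincides with the Shannon entropy of the law, which is the identity used in the proofs of Corollaries 11 and 12; the function names are assumptions.

```python
import math

def zm_law(n, t, s):
    """Zipf-Mandelbrot probabilities ZML(i, n, t, s) for i = 1, ..., n."""
    H = sum(1.0 / (i + t) ** s for i in range(1, n + 1))
    return [1.0 / ((i + t) ** s * H) for i in range(1, n + 1)]

def zm_entropy(n, t, s):
    """Zipf-Mandelbrot entropy ZME(H, t, s) as in (22)."""
    H = sum(1.0 / (i + t) ** s for i in range(1, n + 1))
    return (s / H) * sum(math.log(j + t) / (j + t) ** s for j in range(1, n + 1)) + math.log(H)

n, t, s = 100, 1.5, 1.2
probs = zm_law(n, t, s)
shannon = -sum(p * math.log(p) for p in probs)
print(abs(zm_entropy(n, t, s) - shannon) < 1e-10)  # (22) equals -sum_i ZML * log(ZML)
```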
In the following corollaries, we demonstrate applications of the new result that provide estimates for the Zipf–Mandelbrot entropy.
Corollary 11.
Let $t \ge 0$ and $s, b_i > 0$ with $\sum_{i=1}^{n} b_i = 1$. Then, for arbitrary non-empty proper subsets $K, L, M$ of $\{1, 2, \ldots, n\}$, the following inequalities hold:
$$-ZME(H, t, s) - \sum_{i=1}^{n} \frac{\log b_i}{(i + t)^{s} H_{n,t,s}} \ge \frac{1}{W_n}\left[W_L \sum_{i \in K} \frac{1}{(i + t)^{s} H_{n,t,s}} \log \frac{\sum_{i \in K} \frac{1}{(i + t)^{s} H_{n,t,s}}}{\sum_{i \in K} b_i} + W_L \sum_{i \in K^c} \frac{1}{(i + t)^{s} H_{n,t,s}} \log \frac{\sum_{i \in K^c} \frac{1}{(i + t)^{s} H_{n,t,s}}}{\sum_{i \in K^c} b_i} + W_{L^c} \sum_{i \in M} \frac{1}{(i + t)^{s} H_{n,t,s}} \log \frac{\sum_{i \in M} \frac{1}{(i + t)^{s} H_{n,t,s}}}{\sum_{i \in M} b_i} + W_{L^c} \sum_{i \in M^c} \frac{1}{(i + t)^{s} H_{n,t,s}} \log \frac{\sum_{i \in M^c} \frac{1}{(i + t)^{s} H_{n,t,s}}}{\sum_{i \in M^c} b_i}\right] \ge 0. \tag{23}$$
Proof. 
Let $a_i = \frac{1}{(i + t)^{s} H_{n,t,s}}$, $i \in \{1, 2, \ldots, n\}$; then,
$$\begin{aligned} \sum_{i=1}^{n} a_i \log a_i &= \sum_{i=1}^{n} \frac{1}{(i + t)^{s} H_{n,t,s}} \log \frac{1}{(i + t)^{s} H_{n,t,s}} = -\sum_{i=1}^{n} \frac{1}{(i + t)^{s} H_{n,t,s}} \log\bigl((i + t)^{s} H_{n,t,s}\bigr) \\ &= -\sum_{i=1}^{n} \frac{s \log(i + t)}{(i + t)^{s} H_{n,t,s}} - \sum_{i=1}^{n} \frac{\log H_{n,t,s}}{(i + t)^{s} H_{n,t,s}} = -\frac{s}{H_{n,t,s}} \sum_{i=1}^{n} \frac{\log(i + t)}{(i + t)^{s}} - \frac{\log H_{n,t,s}}{H_{n,t,s}} \sum_{i=1}^{n} \frac{1}{(i + t)^{s}} = -ZME(H, t, s). \end{aligned}$$
Since $H_{n,t,s} = \sum_{i=1}^{n} \frac{1}{(i + t)^{s}}$, we have $\sum_{i=1}^{n} \frac{1}{(i + t)^{s} H_{n,t,s}} = 1$. Hence, by using (16) for $a_i = \frac{1}{(i + t)^{s} H_{n,t,s}}$, $i = 1, 2, \ldots, n$, we obtain (23). □
Corollary 12.
If $\alpha_1, \alpha_2 \ge 0$ and $\beta_1, \beta_2 > 0$, then, for arbitrary non-empty proper subsets $K, L, M$ of $\{1, 2, \ldots, n\}$, the following inequalities hold:
$$-ZME(H, \alpha_1, \beta_1) + \sum_{i=1}^{n} \frac{\log\bigl((i + \alpha_2)^{\beta_2} H_{n,\alpha_2,\beta_2}\bigr)}{(i + \alpha_1)^{\beta_1} H_{n,\alpha_1,\beta_1}} \ge \frac{1}{W_n}\left[W_L \sum_{i \in K} \frac{1}{(i + \alpha_1)^{\beta_1} H_{n,\alpha_1,\beta_1}} \log \frac{\sum_{i \in K} \frac{1}{(i + \alpha_1)^{\beta_1} H_{n,\alpha_1,\beta_1}}}{\sum_{i \in K} \frac{1}{(i + \alpha_2)^{\beta_2} H_{n,\alpha_2,\beta_2}}} + W_L \sum_{i \in K^c} \frac{1}{(i + \alpha_1)^{\beta_1} H_{n,\alpha_1,\beta_1}} \log \frac{\sum_{i \in K^c} \frac{1}{(i + \alpha_1)^{\beta_1} H_{n,\alpha_1,\beta_1}}}{\sum_{i \in K^c} \frac{1}{(i + \alpha_2)^{\beta_2} H_{n,\alpha_2,\beta_2}}} + W_{L^c} \sum_{i \in M} \frac{1}{(i + \alpha_1)^{\beta_1} H_{n,\alpha_1,\beta_1}} \log \frac{\sum_{i \in M} \frac{1}{(i + \alpha_1)^{\beta_1} H_{n,\alpha_1,\beta_1}}}{\sum_{i \in M} \frac{1}{(i + \alpha_2)^{\beta_2} H_{n,\alpha_2,\beta_2}}} + W_{L^c} \sum_{i \in M^c} \frac{1}{(i + \alpha_1)^{\beta_1} H_{n,\alpha_1,\beta_1}} \log \frac{\sum_{i \in M^c} \frac{1}{(i + \alpha_1)^{\beta_1} H_{n,\alpha_1,\beta_1}}}{\sum_{i \in M^c} \frac{1}{(i + \alpha_2)^{\beta_2} H_{n,\alpha_2,\beta_2}}}\right] \ge 0. \tag{24}$$
Proof. 
Let $a_i = \frac{1}{(i + \alpha_1)^{\beta_1} H_{n,\alpha_1,\beta_1}}$ and $b_i = \frac{1}{(i + \alpha_2)^{\beta_2} H_{n,\alpha_2,\beta_2}}$, $i = 1, 2, \ldots, n$; then, as in the proof of Corollary 11, we have
$$\sum_{i=1}^{n} a_i \log a_i = \sum_{i=1}^{n} \frac{1}{(i + \alpha_1)^{\beta_1} H_{n,\alpha_1,\beta_1}} \log \frac{1}{(i + \alpha_1)^{\beta_1} H_{n,\alpha_1,\beta_1}} = -ZME(H, \alpha_1, \beta_1),$$
$$\sum_{i=1}^{n} a_i \log b_i = \sum_{i=1}^{n} \frac{1}{(i + \alpha_1)^{\beta_1} H_{n,\alpha_1,\beta_1}} \log \frac{1}{(i + \alpha_2)^{\beta_2} H_{n,\alpha_2,\beta_2}} = -\sum_{i=1}^{n} \frac{\log\bigl((i + \alpha_2)^{\beta_2} H_{n,\alpha_2,\beta_2}\bigr)}{(i + \alpha_1)^{\beta_1} H_{n,\alpha_1,\beta_1}}.$$
Also, $\sum_{i=1}^{n} a_i = \sum_{i=1}^{n} \frac{1}{(i + \alpha_1)^{\beta_1} H_{n,\alpha_1,\beta_1}} = 1$ and $\sum_{i=1}^{n} b_i = \sum_{i=1}^{n} \frac{1}{(i + \alpha_2)^{\beta_2} H_{n,\alpha_2,\beta_2}} = 1$.
Therefore, by using (16) for $a_i = \frac{1}{(i + \alpha_1)^{\beta_1} H_{n,\alpha_1,\beta_1}}$ and $b_i = \frac{1}{(i + \alpha_2)^{\beta_2} H_{n,\alpha_2,\beta_2}}$, $i = 1, 2, \ldots, n$, we obtain (24). □

5. Conclusions

Jensen’s inequality is important in almost every field of science because it has very fruitful applications. Due to these applications, numerous mathematicians have paid considerable attention to refinements, extensions, and generalizations of this inequality. In this work, we considered a positive finite sequence and obtained an interesting refinement of Jensen’s inequality pertaining to some index sets. We also discussed the refinement for particular index sets. Interestingly, the obtained results can generate the earlier refinements of Jensen’s inequality. Moreover, we gave several applications of the main result in information theory and deduced refinements of inequalities for some special means.

Author Contributions

S.W. provided the main idea of the main results. M.A.K. and S.W. worked on Section 1 and Section 2. T.S. and Z.M.M.M.S. worked on Section 3 and Section 4. M.A.K. wrote Section 5. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Natural Science Foundation of Fujian Province of China under Grant No. 2020J01365 and by the Institutional Fund Projects under Grant No. IFPIP:0323-130-1443. The authors gratefully acknowledge the financial support provided by the Ministry of Education and King Abdulaziz University, DSR, Jeddah, Saudi Arabia.

Data Availability Statement

No data were used to support this study.

Acknowledgments

The authors are grateful to the anonymous reviewers for their valuable comments and suggestions on improving the quality of the manuscript.

Conflicts of Interest

The authors declare that they have no competing interests.

References

  1. Horváth, L.; Khan, K.A.; Pečarić, J. Cyclic refinements of the discrete and integral form of Jensen’s inequality with applications. Analysis 2016, 36, 253–263.
  2. Cloud, M.J.; Drachman, B.C.; Lebedev, L.P. Inequalities with Applications to Engineering; Springer: Cham, Switzerland; Heidelberg, Germany; New York, NY, USA; Dordrecht, The Netherlands; London, UK, 2014.
  3. Liao, J.G.; Berg, A. Sharpening Jensen’s Inequality. Am. Stat. 2018, 4, 1–4.
  4. Lakshmikantham, V.; Vatsala, A.S. Theory of Differential and Integral Inequalities with Initial Time Difference and Applications; Springer: Berlin, Germany, 1999.
  5. Lin, Q. Jensen inequality for superlinear expectations. Stat. Probabil. Lett. 2019, 151, 79–83.
  6. Zhao, T.-H.; Wang, M.-K.; Chu, Y.-M. Concavity and bounds involving generalized elliptic integral of the first kind. J. Math. Inequal. 2021, 15, 701–724.
  7. Zhao, T.-H.; Wang, M.-K.; Chu, Y.-M. Monotonicity and convexity involving generalized elliptic integral of the first kind. Rev. R. Acad. Cienc. Exactas Fís. Nat. Ser. A Mat. 2021, 115, 46.
  8. Pečarić, J.; Proschan, F.; Tong, Y.L. Convex Functions, Partial Orderings and Statistical Applications; Academic Press, Inc.: Cambridge, MA, USA, 1992.
  9. Jensen, J.L.W.V. Sur les fonctions convexes et les inégalités entre les valeurs moyennes. Acta Math. 1906, 30, 175–193.
  10. Azar, S.A. Jensen’s inequality in finance. Int. Adv. Econ. Res. 2008, 14, 433–440.
  11. Ruel, J.J.; Ayres, M.P. Jensen’s inequality predicts effects of environmental variation. Trends Ecol. Evol. 1999, 14, 361–366.
  12. Khan, S.; Khan, M.A.; Chu, Y.-M. Converses of the Jensen inequality derived from the Green functions with applications in information theory. Math. Method. Appl. Sci. 2020, 43, 2577–2587.
  13. Saeed, T.; Khan, M.A.; Ullah, H. Refinements of Jensen’s inequality and applications. AIMS Math. 2022, 7, 5328–5346.
  14. Ullah, H.; Khan, M.A.; Saeed, T. Determination of bounds for the Jensen gap and its applications. Mathematics 2021, 9, 3132.
  15. Steffensen, J.F. On certain inequalities and methods of approximation. J. Inst. Actuar. 1919, 51, 274–297.
  16. Slater, M.L. A companion inequality to Jensen’s inequality. J. Approx. Theory 1981, 32, 160–166.
  17. Pečarić, J.E. A companion to Jensen–Steffensen’s inequality. J. Approx. Theory 1985, 44, 289–291.
  18. Pečarić, J.E. A multidimensional generalization of Slater’s inequality. J. Approx. Theory 1985, 44, 292–294.
  19. Mercer, A.M. A variant of Jensen’s inequality. JIPAM 2003, 4, 73.
  20. Niezgoda, M. A generalization of Mercer’s result on convex functions. Nonlinear Anal. 2009, 71, 2771–2779.
  21. Dragomir, S.S. A new refinement of Jensen’s inequality in linear spaces with applications. Math. Comput. Model. 2010, 52, 1497–1505.
  22. Dragomir, S.S. A refinement of Jensen’s inequality with applications for f-divergence measures. Taiwan. J. Math. 2010, 14, 153–164.
  23. Csiszár, I. Information-type measures of differences of probability distributions and indirect observations. Stud. Sci. Math. Hung. 1967, 2, 299–318.
  24. Pečarić, D.; Pečarić, J.; Rodić, M. About the sharpness of the Jensen inequality. J. Inequal. Appl. 2018, 2018, 337.
  25. Mandelbrot, B. Information Theory and Psycholinguistics: A Theory of Word Frequencies. In Readings in Mathematical Social Science; Lazarsfeld, P., Henry, N., Eds.; MIT Press: Cambridge, MA, USA, 1966.
  26. Montemurro, M.A. Beyond the Zipf-Mandelbrot law in quantitative linguistics. arXiv 2001, arXiv:cond-mat/0104066v2.
  27. Mouillot, D.; Lepretre, A. Introduction of relative abundance distribution (RAD) indices, estimated from the rank-frequency diagrams (RFD), to assess changes in community diversity. Environ. Monit. Assess. 2000, 63, 279–295.
  28. Silagadze, Z.K. Citations and the Zipf-Mandelbrot Law. Complex Syst. 1997, 11, 487–499.
  29. Manin, D. Mandelbrot’s Model for Zipf’s Law: Can Mandelbrot’s Model Explain Zipf’s Law for Language? J. Quant. Linguist. 2009, 16, 274–285.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
