Article

Jensen–Jessen Inequality for Convex Maps

by
Zdzisław Otachel
Department of Applied Mathematics and Computer Science, University of Life Sciences in Lublin, Głȩboka 28, 20-950 Lublin, Poland
Symmetry 2025, 17(4), 601; https://doi.org/10.3390/sym17040601
Submission received: 24 March 2025 / Revised: 11 April 2025 / Accepted: 14 April 2025 / Published: 16 April 2025
(This article belongs to the Special Issue Advance in Functional Equations, Second Edition)

Abstract

In this paper, some vector inequalities for convex maps are proved. The obtained results refer to the famous Jensen inequality and generalize further classical inequalities of Jessen and McShane. In addition, the Hahn–Banach theorems with sublinear and convex maps are considered and used to prove the theorem on the support of certain convex maps.

1. Introduction and Motivation

Given a non-empty set E, let L be a real linear space of real-valued functions x defined on E; among others, the constant function I, identically equal to 1, belongs to L. Moreover, let A be a linear mean on L, i.e., any positive linear functional on L with $A(I) = 1$.
For any convex function $\phi$ defined on a bounded interval I, inequalities of the form
$\phi(A(x)) \le A(\phi \circ x),$   (1)
where $x, \phi \circ x \in L$, are classical.
The fundamental examples of linear means are $A(x) = \sum_{k=1}^{n} w_k x_k$ or $A(x) = \int_0^1 w(t)\, x(t)\, dt$, where $w_k$ ($k = 1, \dots, n$) are positive weights summing to 1 and $w(t)$, $t \in [0,1]$, is a fixed non-negative function integrating to 1, while $x(k) = x_k \in I$ or, respectively, $x(t) \in I$ is integrable. For these functionals, inequalities (1) were established in 1906 by Jensen [1], one of the founders of the theory of convex functions. Presently, they are simply called Jensen's inequalities; moreover, a convex function itself can be defined by (1) in the case of the mentioned discrete functional.
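For the discrete functional, inequality (1) can be checked numerically. Below is a minimal sketch (assuming NumPy is available; the weights and the convex function $\phi(t) = t^2$ are illustrative choices, not taken from the paper):

```python
import numpy as np

# Discrete linear mean A(x) = sum_k w_k * x_k with positive weights summing to 1.
def linear_mean(w, x):
    return float(np.dot(w, x))

# An illustrative convex function on a bounded interval.
phi = lambda t: t ** 2

rng = np.random.default_rng(0)
w = rng.random(5)
w /= w.sum()                      # positive weights summing to 1
x = rng.uniform(-1.0, 1.0, 5)     # values of x in the interval I = [-1, 1]

lhs = phi(linear_mean(w, x))      # phi(A(x))
rhs = linear_mean(w, phi(x))      # A(phi o x)
assert lhs <= rhs + 1e-12         # Jensen's inequality (1)
print(lhs, rhs)
```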
Twenty-five years later, Jessen [2] proved (1) in a general case for abstract linear means. The case of a multi-variable convex function $\phi$ was developed by McShane [3]. The inequality remains true for a wider class of functionals than linear means, namely sublinear isotonic functionals A preserving constants, i.e., $A(\alpha I) = \alpha$, $\alpha \in \mathbb{R}$. Results of that type come from Pečarić and Raşa [4] and Dragomir, Pearce, and Pečarić [5] (see also [6] (Th. 12.18)). The latest results on this topic can be found in [7,8,9,10,11,12]. For example, in [12], Otachel obtained (1) under weaker assumptions on the functionals than those made in [4] or [5] and generalized Jessen's and McShane's inequalities, which complements the results obtained in [4,5].
In this article, we prove a vectorial version of inequality (1), in which convex functions are replaced by convex maps $\Phi$ with values in ordered linear spaces and the linear means A are replaced by vectorial means. This is the content of Theorem 5 in Section 5, where the notion of a vectorial mean is introduced as well. Moreover, we present there a counterpart of McShane's result (Theorem 6) for our vectorial means.
The key tool for proving inequalities of type (1) is the support of convex functions. In Section 4, in Theorem 4, we show that certain convex maps admit an analogous characterization.
These supports are consequences of versions of the Hahn–Banach theorem on extensions of linear or affine maps dominated by sublinear or convex maps taking values in ordered linear spaces. These theorems are presented in Section 3 (Theorems 2 and 3).
Section 2 includes the main settings of the paper, used notations, and the basic notions relevant to our considerations, along with illustrative examples and elementary properties.

2. Notation and Preliminaries

A binary relation $\preceq$ on a set Z that is reflexive ($x \preceq x$ for every x) and transitive (if $x \preceq y$ and $y \preceq z$, then $x \preceq z$) is called a preordering on Z. If, additionally, it is antisymmetric (the two inequalities $x \preceq y$ and $y \preceq x$ together imply $x = y$), we speak of a (partial) ordering.
The Kuratowski–Zorn lemma states that a partially ordered set containing an upper bound for every chain (that is, for every totally ordered subset, in which any two elements are comparable) necessarily contains at least one maximal element (that is, an element with no strictly greater element in the set).
In the following, we will be particularly interested in partial orderings of real linear spaces. A subset C of a real linear space is a convex cone if $rC + sC \subseteq C$ for all non-negative scalars $r, s \in \mathbb{R}$. Then, $x \preceq y \iff y - x \in C$ defines a preordering in the space. If, additionally, C is a pointed cone ($C \cap (-C) = \{0\}$), then the relation is antisymmetric, so $\preceq$ is a partial ordering, called a cone ordering, an ordering generated by the convex cone, or a cone-generated ordering.
In this case, one can add the inequalities side by side and multiply them by positive scalars.
The componentwise ordering in $\mathbb{R}^n$ ($x = (x_1, \dots, x_n) \preceq y = (y_1, \dots, y_n) \iff x_i \le y_i$, $i = 1, \dots, n$) and the natural ordering of real functions defined on a set E (that is, $f \preceq g \iff f(x) \le g(x)$, $x \in E$) are basic examples of cone orderings. The cones generating these orderings are the convex cone of vectors with non-negative components and the convex cone of non-negative functions, respectively. Both of them are pointed. Example 3 provides a more sophisticated instance.
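A cone-generated ordering is easy to test computationally: one checks whether the difference of two vectors belongs to the generating cone. A small sketch for the componentwise order on $\mathbb{R}^n$ (NumPy assumed; the helper names are hypothetical):

```python
import numpy as np

# Cone-generated ordering: x ⪯ y iff y - x lies in the cone C of vectors
# with non-negative components (the cone generating the componentwise order).
def in_cone(z):
    return bool(np.all(np.asarray(z) >= 0))

def preceq(x, y):
    return in_cone(np.asarray(y) - np.asarray(x))

x = np.array([1.0, 2.0, 3.0])
y = np.array([1.5, 2.0, 4.0])
print(preceq(x, y))   # True: x ⪯ y componentwise
print(preceq(y, x))   # False; the cone C is pointed, so ⪯ is a partial order
```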
Throughout the paper, every ordering on a linear real space is a cone ordering.
Let $(L_i, \preceq_i)$, $i = 1, 2$, be ordered real linear spaces. The notions of convexity, sublinearity, isotonicity, positivity, and negativity assigned to real functionals can be naturally extended to maps defined on and taking values in ordered real linear spaces.
A map $A: L_1 \to L_2$ is said to be
  • convex if $A(rx + sy) \preceq_2 rA(x) + sA(y)$ for all $x, y \in L_1$ and all $r, s \ge 0$ with $r + s = 1$;
  • sublinear if $A(rx + sy) \preceq_2 rA(x) + sA(y)$ for all $x, y \in L_1$ and all $r, s \ge 0$;
  • isotonic if $x \preceq_1 y \Rightarrow A(x) \preceq_2 A(y)$, $x, y \in L_1$;
  • positive if $0 \preceq_1 x \Rightarrow 0 \preceq_2 A(x)$, $x \in L_1$;
  • negative if $x \preceq_1 0 \Rightarrow A(x) \preceq_2 0$, $x \in L_1$.
The above maps can also be defined on dedicated subsets of the space, e.g., convex maps on convex sets, sublinear maps on convex cones, or isotonic maps on increasing subsets of the space. This sometimes complicates the assumptions, and, when it is not essential for the considerations, we limit ourselves to the situation in which the domain of the map is the entire space. For sublinear or convex (concave) maps, no ordering of the space $L_1$ is needed.
Clearly, every sublinear map is convex. The epigraph of a map $F: L_1 \to L_2$ is denoted and defined as $\operatorname{epi} F := \{(x, z) \in L_1 \times L_2 : F(x) \preceq_2 z\}$.
Theorem 1.
Let $F: L_1 \to L_2$, and let $C_2$ be the convex cone that induces the partial order $\preceq_2$ in $L_2$.
(i) 
F is convex $\iff$ $\operatorname{epi} F$ is a convex subset of $L_1 \times L_2$.
(ii) 
F is sublinear $\iff$ $\operatorname{epi} F$ is a convex cone not containing $(0, -c)$, $0 \ne c \in C_2$.
Proof. 
(ii). Let $F: L_1 \to L_2$ be a sublinear map and $(x_i, z_i) \in \operatorname{epi} F$, that is, $x_i \in L_1$ and $F(x_i) \preceq_2 z_i$, $i = 1, 2$. Let $r, s \ge 0$ be arbitrary. Since F is sublinear, $F(rx_1 + sx_2) \preceq_2 rF(x_1) + sF(x_2) \preceq_2 rz_1 + sz_2$, where the last inequality is implied by our assumptions. Thus, $r(x_1, z_1) + s(x_2, z_2) = (rx_1 + sx_2, rz_1 + sz_2) \in \operatorname{epi} F$; i.e., $\operatorname{epi} F$ is a convex cone.
Moreover, if $(0, -c) \in \operatorname{epi} F$ for some $0 \ne c \in C_2$, then $F(0) \preceq_2 -c$. It is known that the sublinearity of F ensures $F(0) = 0$, so $0 \preceq_2 -c$; hence $c, -c \in C_2$, which contradicts the basic setting that $C_2 \cap (-C_2) = \{0\}$.
Conversely, for arbitrary $x_i \in L_1$, $(x_i, F(x_i)) \in \operatorname{epi} F$, $i = 1, 2$. If $\operatorname{epi} F$ is a convex cone, then $r(x_1, F(x_1)) + s(x_2, F(x_2)) = (rx_1 + sx_2, rF(x_1) + sF(x_2)) \in \operatorname{epi} F$; equivalently, $F(rx_1 + sx_2) \preceq_2 rF(x_1) + sF(x_2)$, where $r, s \ge 0$ are arbitrary. Therefore, F is a sublinear map.
(i). The proof is analogous. □
It is known that, in the case of a real-valued function f, f is convex if and only if $\operatorname{epi} f$ is convex, and f is a sublinear functional if and only if $\operatorname{epi} f$ is a convex cone with $(0, -1) \notin \operatorname{epi} f$ (see [13] (Th. 2.1.1, Prop. 2.1.2)).
The following further properties of any sublinear map A are easy to verify:
$A(0) = 0,$   (2)
$0 \preceq_2 A(x) + A(-x), \quad x \in L_1,$   (3)
$A(x) - A(y) \preceq_2 A(x - y), \quad x, y \in L_1.$   (4)
Sublinearity of a map A is equivalent to the conjunction of non-negative homogeneity, $A(rx) = rA(x)$ for all real $r \ge 0$ and all $x \in L_1$, and subadditivity, $A(x + y) \preceq_2 A(x) + A(y)$ for all $x, y \in L_1$. Indeed, if A is sublinear, then, for $r > 0$ and $x \in L_1$, $A(rx) \preceq_2 rA(x)$. On the other hand, $A(x) = A(\frac{1}{r} \cdot rx) \preceq_2 \frac{1}{r} A(rx)$; hence, $rA(x) \preceq_2 A(rx)$ and, finally, $rA(x) = A(rx)$, $r > 0$. The remaining details of the mentioned equivalence are obvious.
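This equivalence can be illustrated numerically. The sketch below uses the componentwise positive-part map on $\mathbb{R}^n$ under the componentwise order as an example of a sublinear map (an illustrative choice; NumPy assumed):

```python
import numpy as np

# Componentwise positive part x ↦ x⁺ on R^n: an illustrative sublinear map
# under the componentwise order.
pos = lambda x: np.maximum(x, 0.0)

rng = np.random.default_rng(1)
x, y = rng.normal(size=4), rng.normal(size=4)
r, s = rng.uniform(0, 5, size=2)

# Sublinearity: A(rx + sy) ⪯ r A(x) + s A(y) ...
assert np.all(pos(r * x + s * y) <= r * pos(x) + s * pos(y) + 1e-12)
# ... equivalently, non-negative homogeneity plus subadditivity:
assert np.allclose(pos(r * x), r * pos(x))             # A(rx) = r A(x), r ≥ 0
assert np.all(pos(x + y) <= pos(x) + pos(y) + 1e-12)   # A(x+y) ⪯ A(x) + A(y)
```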
It is evident from (4) that any negative sublinear map is isotonic and therefore positive, but not conversely. For linear maps, isotonicity is equivalent to positivity. Note that the properties of isotonicity and positivity are independent of sublinearity or linearity. In the case of maps with values in $(L_2, \preceq_2) = (\mathbb{R}, \le)$, we rather speak of functionals. A sublinear functional is also called a Banach functional. Examples of such functionals can be found in [12].
It might seem that sublinearity could be required with arbitrary real scalars. The following example shows that maps sublinear in this stronger sense are simply linear.
Example 1.
Let $A: L_1 \to L_2$ be a map fulfilling the following sublinearity condition: $A(rx + sy) \preceq_2 rA(x) + sA(y)$ for all $x, y \in L_1$ and all $r, s \in \mathbb{R}$ (sic!).
Then, $A(x) = A(x + \varepsilon y - \varepsilon y) \preceq_2 A(x + \varepsilon y) - \varepsilon A(y)$. Hence, $A(x) + \varepsilon A(y) \preceq_2 A(x + \varepsilon y)$ and, on the other hand, $A(x + \varepsilon y) \preceq_2 A(x) + \varepsilon A(y)$. Thus, $A(x + \varepsilon y) = A(x) + \varepsilon A(y)$. Substituting $\varepsilon = -1, +1$, we obtain $A(x - y) = A(x) - A(y)$ and $A(x + y) = A(x) + A(y)$. The first identity (for $y = x$) yields $A(0) = 0$. Combining it with the second one for $y = -x$ leads to $A(-x) = -A(x)$.
Now, according to the assumed sublinearity, for any $r \in \mathbb{R}$, we have $A(rx) \preceq_2 rA(x)$ and also $A(-rx) \preceq_2 -rA(x)$; the latter, by $A(-rx) = -A(rx)$, gives $rA(x) \preceq_2 A(rx)$. Thus, $A(rx) = rA(x)$.
Therefore, a map A with $A(rx + sy) \preceq_2 rA(x) + sA(y)$ for all $x, y \in L_1$ and all $r, s \in \mathbb{R}$ is necessarily linear.
In [14,15], one can find suggestions that the condition of sublinearity, in the above sense, is apparently weaker than linearity in the case of functionals.
Example 2.
Here, we additionally assume that, for all $x, y \in L_1$, there exist $x \vee y = \sup\{x, y\}$ and $x \wedge y = \inf\{x, y\}$; i.e., $L_1$ is a vector lattice. Set $x^+ = x \vee 0$, $x^- = -(x \wedge 0)$, and $|x| = x^+ + x^-$, where $x \in L_1$. It is known that $0 \preceq_1 x^+$ and $0 \preceq_1 x^-$; moreover, $x = x^+ - x^-$, $(x + y)^+ \preceq_1 x^+ + y^+$, $(x + y)^- \preceq_1 x^- + y^-$, and the maps $x \mapsto x^+$, $x \mapsto x^-$, $x \mapsto |x|$ are sublinear.
For arbitrary $x, y \in L_1$, the following inequalities are obviously equivalent:
$x + y \preceq_1 x + y$
$(x + y)^+ - (x + y)^- \preceq_1 x^+ - x^- + y^+ - y^-$
$x^- + y^- - (x + y)^- \preceq_1 x^+ + y^+ - (x + y)^+$
Now, let $\Phi_1, \Phi_2: L_1 \to L_2$ be linear positive maps with $0 \preceq_2 \Phi_1(x) \preceq_2 \Phi_2(x)$ for $0 \preceq_1 x$. Then, since $0 \preceq_1 x^- + y^- - (x + y)^- \preceq_1 x^+ + y^+ - (x + y)^+$,
$\Phi_1(x^- + y^- - (x + y)^-) \preceq_2 \Phi_2(x^+ + y^+ - (x + y)^+)$
or, equivalently,
$\Phi_2((x + y)^+) - \Phi_1((x + y)^-) \preceq_2 \Phi_2(x^+) - \Phi_1(x^-) + \Phi_2(y^+) - \Phi_1(y^-).$
This proves the subadditivity of the map $A(x) = \Phi_2(x^+) - \Phi_1(x^-)$. Negativity, positivity, and non-negative homogeneity of A are obvious. Thus, A is sublinear and isotonic.
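A finite-dimensional sketch of Example 2, with $L_1 = L_2 = \mathbb{R}^n$ under the componentwise order and entrywise non-negative matrices playing the role of $\Phi_1 \preceq \Phi_2$ (the particular random matrices and data are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4

# Linear positive maps Phi1, Phi2 on R^n (entrywise non-negative matrices)
# with 0 ⪯ Phi1(x) ⪯ Phi2(x) whenever 0 ⪯ x: take Phi2 = Phi1 + (non-negative).
P1 = rng.uniform(0, 1, (n, n))
P2 = P1 + rng.uniform(0, 1, (n, n))

pos = lambda x: np.maximum(x, 0.0)   # x⁺
neg = lambda x: np.maximum(-x, 0.0)  # x⁻

def A(x):
    # A(x) = Phi2(x⁺) - Phi1(x⁻), as in Example 2
    return P2 @ pos(x) - P1 @ neg(x)

x, y = rng.normal(size=n), rng.normal(size=n)
# Subadditivity (componentwise), proved in Example 2:
assert np.all(A(x + y) <= A(x) + A(y) + 1e-10)
# Isotonicity: y ⪯ z implies A(y) ⪯ A(z); check on a pair with y ⪯ z.
z = y + np.abs(rng.normal(size=n))
assert np.all(A(y) <= A(z) + 1e-10)
```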
Example 3.
Let $L_1$ be the space of all extended real-valued measurable functions that are integrable on the unit interval equipped with Lebesgue measure $\mu$. For each $x \in L_1$, let $x^* \in L_1$ be its decreasing rearrangement, defined as the right-continuous inverse of the distribution function $m(s) := \mu\{x > s\}$; i.e., $x^*(t) := \sup\{s : m(s) > t\}$. Now, we are in a position to recall a continuous version of the well-known order introduced by Hardy, Littlewood, and Pólya in [16,17].
For $x, y \in L_1$, we will write $y \preceq x$ whenever
$\int_0^1 y^* \, d\mu = \int_0^1 x^* \, d\mu, \qquad \int_0^s y^* \, d\mu \le \int_0^s x^* \, d\mu, \quad 0 \le s \le 1.$
It is easy to see that the operation $x \mapsto x^*$ is non-negatively homogeneous; i.e., $(rx)^* = r x^*$, $r \ge 0$. Moreover, in light of [18] (Th. 8.1), it is subadditive in the following sense: $(x + y)^* \preceq x^* + y^*$. Thus, the operation of taking the decreasing rearrangement of a function from $L_1$ is sublinear with respect to the order defined above.
The subset $C = \{x \in L_1 : \int_0^1 x \, d\mu = 0, \ \int_0^s x \, d\mu \ge 0, \ 0 \le s \le 1\}$ is a convex cone. Moreover, if $x \in C$ and $-x \in C$, then $\int_s^t x \, d\mu = 0$ for $0 \le s \le t \le 1$. Hence, by the Lebesgue differentiation theorem, $x = 0$ a.e.
Note that the HLP order restricted to the convex cone of all ($\mu$-a.e.) decreasing functions in $L_1$ is a cone order generated by C. Details on the properties of HLP orders can be found in [19]; cf. also [20].
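For step functions on $[0,1]$ with pieces of equal length, the decreasing rearrangement reduces to sorting, and the HLP comparison reduces to comparing partial sums. A discrete sketch of the sublinearity observed above (NumPy assumed; the helper names are hypothetical):

```python
import numpy as np

# Step functions on [0,1] with n equal pieces, encoded as vectors.
# The decreasing rearrangement x* is then just sorting in decreasing order.
def decreasing_rearrangement(x):
    return np.sort(x)[::-1]

def hlp_leq(y, x):
    # y ⪯ x in the HLP sense: equal totals and dominated partial sums of y*, x*.
    ys = np.cumsum(decreasing_rearrangement(y))
    xs = np.cumsum(decreasing_rearrangement(x))
    return np.isclose(ys[-1], xs[-1]) and np.all(ys <= xs + 1e-12)

rng = np.random.default_rng(3)
x, y = rng.normal(size=6), rng.normal(size=6)

# Subadditivity of the rearrangement: (x+y)* ⪯ x* + y* in the HLP order.
print(hlp_leq(decreasing_rearrangement(x + y),
              decreasing_rearrangement(x) + decreasing_rearrangement(y)))  # True
```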

3. Versions of the Hahn–Banach Theorem with Sublinear or Convex Maps

Let $F(x)$ and $S(x)$, $x \in L$, be certain maps defined on the vector space L with values in the ordered real vector space $(M, \preceq)$, where the partial order $\preceq$ is generated by the pointed convex cone C. In this case, we do not need any ordering in L.
Additionally, it will be assumed that, for any subset $A \subseteq M$ bounded from above (i.e., $x \preceq y$ for every $x \in A$ and some fixed $y \in M$), there exists $\sup A$, and, for any subset $B \subseteq M$ bounded from below (i.e., $x \preceq y$ for every $y \in B$ and some fixed $x \in M$), there exists $\inf B$.
In particular, if $x \preceq y$ for every $x \in A$ and every $y \in B$, then $\sup A$ and $\inf B$ exist and, moreover, $\sup A \preceq \inf B$.
We say the map F is dominated by the map S if $F(x) \preceq S(x)$, $x \in L$.
In the literature (see e.g., [21] (Chap. VI, §3, Th. 1)), one can find the following version of Hahn–Banach theorem on extension of linear maps dominated by sublinear maps.
Theorem 2.
Let $F_0$ be a linear map defined on a subspace $L_0 \subseteq L$ and dominated by a sublinear map S defined on the space L, both with values in the ordered space M. Then, there exists at least one linear extension of $F_0$ to the whole of L dominated by S; i.e., there exists a linear map $F: L \to M$ such that $F(x) = F_0(x)$, $x \in L_0$, and $F(x) \preceq S(x)$, $x \in L$.
Below, we show a similar fact on the extension of affine maps dominated by convex ones. A particular case of the result for real convex functions can be found in [22] (Th. 43A).
Theorem 3.
Let $A_0$ be an affine map defined on a subspace $L_0 \subseteq L$ and dominated by a convex map $\Phi$ defined on the space L, both with values in the ordered space M. Then, there exists at least one affine extension of $A_0$ to the whole of L dominated by $\Phi$; i.e., there exists an affine map $A: L \to M$ such that $A(x) = A_0(x)$, $x \in L_0$, and $A(x) \preceq \Phi(x)$, $x \in L$.
Proof. 
We will adopt a modified version of the proof of the Hahn–Banach theorem based on the Kuratowski–Zorn lemma, which uses the preliminary version of the theorem obtained by E. Helly (1912). More on this can be found in [23].
Let $\mathcal{R}$ be the collection of all pairs $(B, X_B)$, where $X_B$ is a linear subspace of L containing $L_0$ and B is an affine map on $X_B$ that is an extension of $A_0$ dominated by the convex map $\Phi$. The set $\mathcal{R}$ is non-empty because $(A_0, L_0) \in \mathcal{R}$, and we define a partial order relation $\prec$ on $\mathcal{R}$ as follows:
$(B_1, X_{B_1}) \prec (B_2, X_{B_2}) \iff X_{B_1} \subseteq X_{B_2}$ and $B_1(x) = B_2(x)$, $x \in X_{B_1}$.
For any totally ordered subset $\mathcal{R}_1$ of $\mathcal{R}$, let $X_1$ be the union of those $X_B$ with $(B, X_B) \in \mathcal{R}_1$, and, for $x \in X_1$, define $B_1(x)$ to be equal to $B(x)$ whenever $x \in X_B$. Clearly, $X_1$ is a subspace of L containing $L_0$, and $B_1$ is an affine map on $X_1$ that is an extension of $A_0$ dominated by $\Phi$. Thus, $(B_1, X_1) \in \mathcal{R}$ and $(B, X_B) \prec (B_1, X_1)$ for any $(B, X_B) \in \mathcal{R}_1$; i.e., $(B_1, X_1)$ is an upper bound for $\mathcal{R}_1$. By the Kuratowski–Zorn lemma, there is a maximal element in $\mathcal{R}$, say $(\tilde{B}, \tilde{X})$.
If we show that $\tilde{X} = L$, then $A = \tilde{B}$ will be the required affine extension of $A_0$ dominated by the convex map $\Phi$. To accomplish this, assume, on the contrary, that $\tilde{X} \ne L$, so there exists $x_0 \in L \setminus \tilde{X}$. Then, for any $x_1, x_2 \in \tilde{X}$ and $s, t > 0$, we have
$sA(x_1) + tA(x_2) = (s + t)\, A\!\left(\frac{s}{s+t} x_1 + \frac{t}{s+t} x_2\right) \preceq (s + t)\, \Phi\!\left(\frac{s}{s+t} x_1 + \frac{t}{s+t} x_2\right)$
$= (s + t)\, \Phi\!\left(\frac{s}{s+t} (x_1 - t x_0) + \frac{t}{s+t} (x_2 + s x_0)\right) \preceq s\, \Phi(x_1 - t x_0) + t\, \Phi(x_2 + s x_0).$
Hence,
$\frac{1}{t}\left[A(x_1) - \Phi(x_1 - t x_0)\right] \preceq \frac{1}{s}\left[\Phi(x_2 + s x_0) - A(x_2)\right].$
Define subsets of the space M as follows
$P = \left\{\frac{1}{t}\left[A(x_1) - \Phi(x_1 - t x_0)\right] \in M : x_1 \in \tilde{X},\ t > 0\right\},$
$T = \left\{\frac{1}{s}\left[\Phi(x_2 + s x_0) - A(x_2)\right] \in M : x_2 \in \tilde{X},\ s > 0\right\}.$
Under our settings, $\sup P$ and $\inf T$ exist and $\sup P \preceq \inf T$; selecting $z = \frac{1}{2}(\sup P + \inf T)$ leads us to the estimates
$A(x_1) - t z \preceq \Phi(x_1 - t x_0) \quad \text{and} \quad A(x_2) + s z \preceq \Phi(x_2 + s x_0).$   (6)
Let Y be the subspace of L defined as $Y = \{\tilde{x} + \alpha x_0 : \tilde{x} \in \tilde{X}, \alpha \in \mathbb{R}\}$. Obviously, $\tilde{X} \subseteq Y$, but $\tilde{X} \ne Y$. Since $x_0 \notin \tilde{X}$, the representation $Y \ni y = \tilde{x} + \alpha x_0$ is unique, so we can define an affine map B by
$Y \ni y = \tilde{x} + \alpha x_0 \ \longmapsto\ B(\tilde{x} + \alpha x_0) = A(\tilde{x}) + \alpha z \in M.$
Note that B is an extension of A and also of $A_0$. Now, we shall show that B is dominated by $\Phi$; i.e., $B(\tilde{x} + \alpha x_0) \preceq \Phi(\tilde{x} + \alpha x_0)$, $\tilde{x} \in \tilde{X}$, $\alpha \in \mathbb{R}$. The case $\alpha = 0$ is evident. If $\alpha > 0$, utilizing the second estimate in (6) with $x_2 = \tilde{x}$ and $s = \alpha$, we obtain
$B(\tilde{x} + \alpha x_0) = A(\tilde{x}) + \alpha z \preceq \Phi(\tilde{x} + \alpha x_0).$
In the case $\alpha < 0$, we use the first estimate in (6) with $x_1 = \tilde{x}$ and $t = -\alpha > 0$:
$B(\tilde{x} + \alpha x_0) = A(\tilde{x}) + \alpha z = A(\tilde{x}) - t z \preceq \Phi(\tilde{x} - t x_0) = \Phi(\tilde{x} + \alpha x_0).$
Finally, the pair $(B, Y) \in \mathcal{R}$ satisfies $(\tilde{B}, \tilde{X}) \prec (B, Y)$ and $(\tilde{B}, \tilde{X}) \ne (B, Y)$, which contradicts the maximality of $(\tilde{B}, \tilde{X})$ in $\mathcal{R}$. □

4. The Support of Convex Maps

As before, L and M are real linear spaces, where M is endowed with the partial order $\preceq$ defined by the pointed convex cone C. We still assume that any subset of M bounded from above has a supremum and any subset of M bounded from below has an infimum.
We say a map $\Phi: L \to M$ has support at $x_0 \in L$ if there exists an affine map $A: L \to M$ with the properties $A(x_0) = \Phi(x_0)$ and $A(x) \preceq \Phi(x)$, $x \in L$. Clearly, such an A has the form $A(x) = \Phi(x_0) + T(x - x_0)$, where $T: L \to M$ is linear.
Theorem 4
(cf. [22] (Th. 43B)). The map $\Phi: L \to M$ is convex if and only if $\Phi$ has an affine support at each point of L.
Proof. 
Let $x_0 = \alpha x_1 + \beta x_2$, $x_1, x_2 \in L$, $\alpha, \beta \ge 0$, $\alpha + \beta = 1$, and let A be an affine support of $\Phi$ at $x_0$. Then,
$\Phi(\alpha x_1 + \beta x_2) = \Phi(x_0) = A(x_0) = \alpha A(x_1) + \beta A(x_2) \preceq \alpha \Phi(x_1) + \beta \Phi(x_2),$
which proves the convexity of $\Phi$.
Conversely, let us assume $\Phi$ is a convex map. For a fixed $x_0 \in L$, choose $v \in L$, a nonzero vector parallel to $x_0$; in the case $x_0 = 0$, any nonzero vector may be taken. Define the subspace $L_0 = \{x_0 + t v : t \in \mathbb{R}\}$ and the convex map $\phi(t) = \Phi(x_0 + t v)$. If $r < s < t$, then $s = \frac{t-s}{t-r} r + \frac{s-r}{t-r} t$. Hence, $\phi(s) \preceq \frac{t-s}{t-r} \phi(r) + \frac{s-r}{t-r} \phi(t)$ by the convexity of $\phi$ or, equivalently,
$\frac{\phi(s) - \phi(r)}{s - r} \preceq \frac{\phi(t) - \phi(s)}{t - s}.$
Put $s = 0$ and define
$A = \left\{\frac{\phi(0) - \phi(r)}{-r} \in M : r < 0\right\} \quad \text{and} \quad B = \left\{\frac{\phi(t) - \phi(0)}{t} \in M : 0 < t\right\}.$
Since $\sup A$ and $\inf B$ exist with $\sup A \preceq \inf B$, choosing $m = \frac{1}{2}(\sup A + \inf B)$ provides the estimates
$\frac{\phi(0) - \phi(r)}{-r} \preceq m \preceq \frac{\phi(t) - \phi(0)}{t}, \quad r < 0 < t,$
which, after transformation, take the following concise form
$\phi(0) + t m \preceq \phi(t), \quad t \in \mathbb{R}.$
Therefore, the affine map $A_0$ defined on $L_0$ by $A_0(x_0 + t v) = \phi(0) + t m$ supports $\phi(t) = \Phi(x_0 + t v)$ at $x_0$; i.e., $A_0(x_0) = \phi(0) = \Phi(x_0)$ and $A_0$ is dominated by $\Phi$ on $L_0$. Thus, the special case of the theorem in which the domain of the maps is one-dimensional has been proven.
The general case now follows from Theorem 3, which ensures the existence of an affine extension A of $A_0$ from $L_0$ to the whole space L, dominated by $\Phi$. Clearly, $A(x_0) = A_0(x_0) = \phi(0) = \Phi(x_0)$, so A is the required support of $\Phi$ at $x_0$. □
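For a real convex function, the one-dimensional step of the proof can be imitated numerically: a slope m chosen between the one-sided difference quotients at $x_0$ (mirroring $m = \frac{1}{2}(\sup A + \inf B)$) yields a support line. A sketch with an illustrative kinked function, for which the midpoint choice is safely inside the subdifferential:

```python
import numpy as np

# Support line of a convex function phi at x0: phi(x0) + m*(x - x0) ≤ phi(x),
# with m chosen between the left and right difference quotients at x0
# (mirroring the choice m = (sup A + inf B)/2 in the proof).
phi = lambda t: np.abs(t) + t ** 2
x0 = 0.0

h = np.linspace(1e-4, 1.0, 200)
sup_A = np.max((phi(x0) - phi(x0 - h)) / h)   # left difference quotients
inf_B = np.min((phi(x0 + h) - phi(x0)) / h)   # right difference quotients
m = 0.5 * (sup_A + inf_B)

t = np.linspace(-2, 2, 1001)
assert np.all(phi(x0) + m * (t - x0) <= phi(t) + 1e-9)  # support inequality
print(sup_A, inf_B, m)
```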

5. A Vectorial Version of Jensen–Jessen Inequality

Given a non-empty set E and an ordered real vector space $(M, \preceq)$, where the partial order $\preceq$ is defined by the pointed convex cone C, we consider here, as the real linear space L, a function space consisting of some vector-valued functions $g: E \to M$ and meeting the following requirements:
(L1):
the constant functions $g(x) = m$, $x \in E$, where $m \in M$ is fixed, belong to L;
(L2):
$T \circ g \in L$ for any linear map $T: M \to M$ and any $g \in L$.
Moreover, we assume that L is endowed with the preorder defined by the pointed convex cone of all positive functions in L, that is, by $\{p \in L : 0 \preceq p(x), x \in E\}$; there is no need for an explicit symbol to denote it.
We are interested in linear maps $A: L \to M$ having the following properties:
(A1):
$A(m) = m$, $m \in M$, meaning that $A(g) = m$ for every constant function $g \equiv m$ in L;
(A2):
$g \in L$, $0 \preceq g(x)$, $x \in E$ $\Rightarrow$ $0 \preceq A(g)$; i.e., A is positive or, equivalently, isotonic;
(A3):
$T(A(g)) \preceq A(T \circ g)$, $g \in L$, for any linear map $T: M \to M$.
Then, we say that A is a vectorial mean defined on L with values in M. Let us recall that, in the case of $M = \mathbb{R}$, we speak of linear means [24] (p. 47).
It is evident that the collection of vectorial means defined on L with values in M constitutes a convex subset of the space of linear maps from L into M, and examples of such means exist.
Example 4.
According to our previous assumptions and notation, set $M = \mathbb{R}^n$ with the componentwise order $(x_1, \dots, x_n) \preceq (y_1, \dots, y_n) \iff x_i \le y_i$, $i = 1, \dots, n$, and let L be a linear space of maps $E \to \mathbb{R}^n$ meeting (L1)–(L2). Then, any $g \in L$ has the form $g = (g_1, g_2, \dots, g_n)$, where $g_i: E \to \mathbb{R}$.
Let a be a fixed linear mean acting on the linear space $L_1$ consisting of all real functions $g_1$, where $g = (g_1, g_2, \dots, g_n) \in L$; i.e., a is a positive linear functional preserving real constants. We define $A(g) = (a(g_1), a(g_2), \dots, a(g_n))$.
The reader will easily notice that A is linear and fulfils (A1)–(A2). Since any linear map $T: \mathbb{R}^n \to \mathbb{R}^n$ is represented by a matrix $[t_{ij}]$, $i, j = 1, \dots, n$, by the linearity of a, we have
$A(T \circ g) = \left(a\big(\sum_j t_{1j} g_j\big), \dots, a\big(\sum_j t_{nj} g_j\big)\right) = \left(\sum_j t_{1j} a(g_j), \dots, \sum_j t_{nj} a(g_j)\right) = T(A(g)).$
Thus, (A3) also holds, so A is a vectorial mean on L with values in $\mathbb{R}^n$.
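A sketch of Example 4 for a finite set E, with a realized as a discrete weighted mean (the weights, dimensions, and data are illustrative); properties (A1) and (A3) can be checked directly:

```python
import numpy as np

rng = np.random.default_rng(4)
m, n = 7, 3                       # |E| = m sample points, values in R^n

w = rng.random(m); w /= w.sum()   # a: discrete linear mean with weights w
a = lambda h: float(np.dot(w, h)) # h: real function on E (vector of length m)

def A(g):
    # g: E -> R^n stored as an (m, n) array; A(g) = (a(g_1), ..., a(g_n))
    return np.array([a(g[:, i]) for i in range(n)])

g = rng.normal(size=(m, n))
T = rng.normal(size=(n, n))       # an arbitrary linear map on R^n

# (A1): constant functions are preserved; (A3) holds here with equality.
c = np.array([2.0, -1.0, 0.5])
assert np.allclose(A(np.tile(c, (m, 1))), c)
assert np.allclose(T @ A(g), A(g @ T.T))     # T(A(g)) = A(T∘g)
```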
In the theorem below, we extend Jessen's inequality [24] (Th. 2.4) from linear means to vectorial ones.
Theorem 5.
Let L satisfy properties (L1)–(L2) on a non-empty set E and let A be a vectorial mean defined on L with values in M.
If $\Phi: M \to M$ is a convex map, then, for all $g \in L$ such that $\Phi \circ g \in L$, the following inequality holds:
$\Phi(A(g)) \preceq A(\Phi \circ g).$   (7)
Proof. 
Fix $g \in L$ such that $\Phi \circ g \in L$ and denote $x_0 = A(g) \in M$. According to Theorem 4, let $S(x) = m + T(x - x_0)$ be a support map of $\Phi$ at $x_0$, where $m = \Phi(x_0)$ and $T: M \to M$ is linear. Thus, $S(x) \preceq \Phi(x)$, $x \in M$, and $S(x_0) = \Phi(x_0)$. Setting $x = g(t)$, we obtain $S(g(t)) \preceq \Phi(g(t))$, $t \in E$. Note that $S \circ g \in L$ by (L1)–(L2). The assumption (A2) of isotonicity for A together with (A1) and (A3) provides the estimates
$A(\Phi \circ g) \succeq A(S \circ g)$
$= A(m) + A(T \circ g) - A(T(x_0)) \succeq m + T(A(g)) - T(x_0) = S(A(g))$
$= S(x_0) = \Phi(x_0) = \Phi(A(g)),$
which finishes the proof. □
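A quick numerical check of inequality (7) for the componentwise vectorial mean of Example 4; the convex map $\Phi$ (componentwise squaring) and the data are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(5)
m, n = 50, 3
w = rng.random(m); w /= w.sum()

A = lambda g: w @ g               # componentwise vectorial mean (Example 4)
Phi = lambda x: x ** 2            # a convex map R^n -> R^n (componentwise squares)

g = rng.normal(size=(m, n))       # g: E -> R^n, |E| = m
lhs = Phi(A(g))                   # Φ(A(g))
rhs = A(Phi(g))                   # A(Φ∘g)
assert np.all(lhs <= rhs + 1e-12) # inequality (7), componentwise
print(lhs, rhs)
```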
The next theorem provides a counterpart of McShane’s result [24] (Th. 2.5) for vectorial means.
Theorem 6.
Let M be, in addition, a locally convex topological vector space, and let $K \subseteq M$ be a closed and convex subset.
If A is a vectorial mean on L with values in M, then, for any $g \in L$,
$g(x) \in K, \ x \in E \ \Longrightarrow \ A(g) \in K.$
Proof. 
Assume $x_0 := A(g) \notin K$. Since M is locally convex and K is closed and convex, there exist a continuous linear functional $f: M \to \mathbb{R}$ and a real constant $\alpha$ separating $x_0$ from K; i.e., $f(z) \le \alpha$, $z \in K$, and $f(x_0) > \alpha$. Fix $0 \preceq c$, $c \ne 0$, and define $T(x) = f(x)\, c$, $x \in M$. Observe that T is linear and $T(z) \preceq \alpha c$, $z \in K$. By our hypotheses, $T(g(x)) \preceq \alpha c$, $x \in E$. By (A1)–(A3), $\alpha c = A(\alpha c) \succeq A(T \circ g) \succeq T(A(g)) = T(x_0) = f(x_0)\, c$; hence, $(f(x_0) - \alpha)\, c \preceq 0$. Since $f(x_0) - \alpha > 0$, $0 \preceq c$, and the ordering cone is pointed, this forces $c = 0$, which contradicts the choice of c and finishes the proof. □
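The geometric content of Theorem 6 can also be illustrated numerically: averaging values lying in a closed convex set K keeps the mean in K. A sketch with K the closed unit disk in $\mathbb{R}^2$ (an illustrative choice; weights and data are random):

```python
import numpy as np

rng = np.random.default_rng(6)
m = 100
w = rng.random(m); w /= w.sum()
A = lambda g: w @ g                        # componentwise vectorial mean on R^2

# g takes values in the closed unit disk K of R^2 (a closed convex set).
theta = rng.uniform(0, 2 * np.pi, m)
rad = rng.uniform(0, 1, m)
g = np.column_stack([rad * np.cos(theta), rad * np.sin(theta)])

assert np.all(np.linalg.norm(g, axis=1) <= 1 + 1e-12)   # g(x) ∈ K for all x
assert np.linalg.norm(A(g)) <= 1 + 1e-12                # A(g) ∈ K (Theorem 6)
print(A(g))
```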
Example 5.
Let the spaces $M$, $L$, $L_1$ and the vectorial mean A be as in Example 4. Fix any convex function $\phi: \mathbb{R}^n \to \mathbb{R}$ and let $\Phi(x) = (\phi(x), \dots, \phi(x)) \in \mathbb{R}^n$, $x \in \mathbb{R}^n$. Note that $\Phi$ is a convex map with respect to the componentwise order on $\mathbb{R}^n$.
Now, for $g = (g_1, \dots, g_n): E \to \mathbb{R}^n$ such that $\Phi \circ g \in L$, we obtain (7) according to Theorem 5. It is equivalent to the inequality
$\phi(a(g_1), \dots, a(g_n)) \le a(\phi(g_1, \dots, g_n)),$
where a is a linear mean on $L_1$ and $\phi(g_1, \dots, g_n) \in L_1$. This is McShane's result; see [24] (Th. 2.6).
If $K \subseteq \mathbb{R}^n$ is closed and convex and $g(x) = (g_1(x), \dots, g_n(x)) \in K$, $x \in E$, then
$A(g) = (a(g_1), \dots, a(g_n)) \in K,$
by Theorem 6. This is McShane's result as well; see [24] (Th. 2.5).

6. Summary

Research in applied sciences, such as natural sciences, health sciences, economics, physics, and engineering, is based on experiments. Let us agree that the goal of an experiment is to obtain a measurement or a series of measurements, depending on whether we observe one or many features. Knowledge about phenomena is obtained from multiple, often planned, repetitions of such experiments. The numerical data obtained in this way are elements of a certain vector space (the sample space), composed of numerical sequences, matrices, or, more generally, real functions.
The aim of the research is to develop and adapt a mathematical model to the data. To achieve this, it is necessary to obtain some information about the location, spatial extent, symmetry, and other geometric properties of the data set in that space. The researcher accomplishes this using various characteristics and measures. The basic such characteristics are means, which index the location of a data set in a space, indicating one specific point in that space.
Such means meet various requirements. A mean calculated from generally larger data is larger with respect to some fixed ordering (isotonicity). If we transform the data (by scaling or shifting), the mean is transformed in the same way (linearity). For data that exhibit no variability and take a common constant value, the mean also takes that constant value. Roughly speaking, the mean is a point contained in the convex hull of all the measurements. The mathematical abstraction of such characteristics is the linear mean in the case of a single feature and the vectorial mean in the case of many features.
The formal definition of vectorial mean and the main results of the article are included in Section 5 in the following theorems: Theorem 5, which presents a version of inequality (1) for vectorial means and generalizes classic inequalities by Jensen [1] and Jessen [2]; and Theorem 6, which includes a counterpart of McShane’s result [24] (Th. 2.5) for vectorial means and provides their important geometric property, stating that the average operation provides a point in the convex hull of all the averaged points.
There are other important results in the article without which proving the above theorems would not be possible: Theorem 3 on the extension of affine maps dominated by convex ones, being a multidimensional version of the Hahn–Banach theorem; and Theorem 4 on supports of convex maps. This last one is a key tool for proving inequality (1) for vectorial means.
Ultimately, the author hopes that further research on vectorial means and Jensen–Jessen inequalities can be used to construct new characteristics of numerical data that will also describe other geometric aspects of the data set in the sample space.

Funding

This research received no external funding.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The author declares no conflicts of interest.

References

  1. Jensen, J.L.W.V. Sur les fonctions convexes et les inégalités entre les valeurs moyennes. Acta Math. 1906, 30, 175–193. [Google Scholar] [CrossRef]
  2. Jessen, B. Bemaerkinger om konvekse Funktioner og Uligheder imellem Middelvaerdier I. Mat. Tidsskrift B 1931, 17–28. [Google Scholar]
  3. McShane, E.J. Jensen’s inequality. Bull. Am. Math. Soc. 1937, 43, 521–527. [Google Scholar] [CrossRef]
  4. Pečarić, J.E.; Raşa, I. On Jessen’s inequality. Acta Sci. Math. 1992, 56, 305–309. [Google Scholar]
  5. Dragomir, S.S.; Pearce, C.E.M.; Pečarić, J.E. On Jessen’s and related inequalities for isotonic sublinear functionals. Acta Sci. Math. 1995, 61, 373–382. [Google Scholar]
  6. Dragomir, S.S. A Survey on Jessen’s Type Inequalities for Positive Functionals. In Nonlinear Analysis. Springer Optimization and Its Applications; Pardalos, P., Georgiev, P., Srivastava, H., Eds.; Springer: New York, NY, USA, 2012; Volume 68. [Google Scholar] [CrossRef]
  7. Cheung, W.S.; Matković, A.; Pečarić, J.E. A Variant of Jessen’s Inequality and Generalized Means. J. Inequal. Pure Appl. Math. 2006, 7, 10. [Google Scholar]
  8. Guessab, A.; Schmeisser, G. Necessary and sufficient conditions for the validity of Jensen’s inequality. Arch. Math. 2013, 100, 561–570. [Google Scholar] [CrossRef]
  9. Čuljak, V.; Ivanković, B.; Pečarić, J.E. On Jensen–McShane's inequality. Period. Math. Hungar. 2019, 58, 139–154. [Google Scholar] [CrossRef]
  10. Aslam, G.I.H.; Ali, A.; Mehrez, K. A Note of Jessen’s Inequality and Their Applications to Mean-Operators. Mathematics 2022, 10, 879. [Google Scholar] [CrossRef]
  11. Khan, M.A.; Pečarić, J.; Chu, Y.-M. Refinements of Jensen's and McShane's inequalities with applications. AIMS Math. 2020, 5, 4931–4945. [Google Scholar] [CrossRef]
  12. Otachel, Z. Inequalities for Convex Functions and Isotonic Sublinear Functionals. Results Math. 2024, 79, 76. [Google Scholar] [CrossRef]
  13. Zǎlinescu, C. Convex Analysis in General Vector Spaces; World Scientific: Singapore, 2002. [Google Scholar]
  14. Toader, G. Fujiwara’s inequality for functionals. Facta Univ. Ser. Math. Infor. 1992, 7, 43–48. [Google Scholar]
  15. Toader, G. On inequality of Seitz, 1995. Period. Math. Hung. 1995, 30, 165–170. [Google Scholar] [CrossRef]
  16. Hardy, G.H.; Littlewood, J.E.; Pólya, G. Some simple inequalities satisfied by convex functions. Messenger Math. 1929, 58, 145–152. [Google Scholar]
  17. Hardy, G.H.; Littlewood, J.E.; Pólya, G. Inequalities; Cambridge University Press: Cambridge, UK, 1934. [Google Scholar]
  18. Chong, K.M.; Rice, N.M. Equimeasurable Rearrangements of Functions; Queen's Papers in Pure and Applied Mathematics 28; Queen's University: Kingston, ON, Canada, 1971. [Google Scholar]
  19. Otachel, Z. Spectral Orders and Isotone Functionals. Linear Algebra Appl. 1997, 252, 159–172. [Google Scholar] [CrossRef]
  20. Otachel, Z. Functions preserving a cone preorder and their duals. Ann. Soc. Math. Pol. Ser. I Comment. Math. 2005, 45, 171–189. [Google Scholar]
  21. Day, M.M. Normed Linear Spaces, 3rd ed.; Springer: Berlin/Heidelberg, Germany; New York, NY, USA, 1973. [Google Scholar]
  22. Roberts, A.W.; Varberg, D.E. Convex Functions; Academic Press: New York, NY, USA; London, UK, 1973. [Google Scholar]
  23. Narici, L. On the Hahn-Banach Theorem; World Scientific: Singapore, 2007; pp. 87–122. [Google Scholar] [CrossRef]
  24. Pečarić, J.E.; Proschan, F.; Tong, Y.L. Convex Functions, Partial Orderings, and Statistical Applications; Mathematics in Science and Engineering; Academic Press, Inc.: Cambridge, MA, USA, 1992; Volume 187. [Google Scholar]