Article

Types of Statistical Indicators Characterized by 2-Pre-Hilbert Spaces

by
Nicuşor Minculete
Department of Mathematics and Computer Science, Transilvania University of Braşov, Iuliu Maniu Street, No. 50, 500091 Braşov, Romania
Symmetry 2020, 12(9), 1501; https://doi.org/10.3390/sym12091501
Submission received: 1 August 2020 / Revised: 31 August 2020 / Accepted: 10 September 2020 / Published: 12 September 2020
(This article belongs to the Section Mathematics)

Abstract

In this article, we establish new results related to 2-pre-Hilbert spaces; among these results we mention the Cauchy-Schwarz inequality. We show several applications to statistical indicators such as the mean, variance, standard deviation and correlation coefficient, using the standard 2-inner product and some of its properties. We also present a brief characterization of a linear regression model for discrete random variables.
MSC:
Primary 46C05; secondary 26D10; 26D15

1. Introduction

In Reference [1], Gähler introduced the definitions of a linear 2-normed space and of a 2-metric space. In References [2,3], Diminnie, Gähler and White studied the properties of 2-inner product spaces.
Several results related to the theory of 2-inner product spaces can be found in Reference [4]. In Reference [5], Dragomir et al. proved the corresponding version of the Boas-Bellman inequality in 2-inner product spaces, and in Reference [6] the superadditivity and monotonicity of 2-norms generated by inner products were studied.
We consider a linear space $X$ of dimension greater than 1 over the field $\mathbb{K}$, where $\mathbb{K}$ is the field of real or complex numbers. Suppose that $(\cdot,\cdot \mid \cdot)$ is a $\mathbb{K}$-valued function defined on $X \times X \times X$ satisfying the following conditions:
(a) $(u,u \mid w) \ge 0$, and $(u,u \mid w) = 0$ if and only if $u$ and $w$ are linearly dependent;
(b) $(u,u \mid w) = (w,w \mid u)$;
(c) $(u,v \mid w) = \overline{(v,u \mid w)}$;
(d) $(\alpha u, v \mid w) = \alpha (u,v \mid w)$, for any scalar $\alpha \in \mathbb{K}$;
(e) $(u_1 + u_2, v \mid w) = (u_1, v \mid w) + (u_2, v \mid w)$.
The function $(\cdot,\cdot \mid \cdot)$ is called a 2-inner product on $X$, and $(X, (\cdot,\cdot \mid \cdot))$ is called a 2-inner product space (or 2-pre-Hilbert space).
A series of consequences of these requirements can be deduced (see e.g., References [2,4,7]):
$(0,v \mid w) = (u,0 \mid w) = (u,v \mid 0) = 0, \quad (w,v \mid w) = (v,w \mid w) = 0,$
$\mathrm{Re}(u,v \mid w) = \frac{1}{4}\big[(w,w \mid u+v) - (w,w \mid u-v)\big],$
$(u, \alpha v \mid w) = \bar{\alpha}\,(u,v \mid w), \quad (u,v \mid \alpha w) = |\alpha|^2 (u,v \mid w),$
for all $u,v,w \in X$ and $\alpha \in \mathbb{K}$.
The standard 2-inner product $(\cdot,\cdot \mid \cdot)$ is defined on the inner product space $X = (X, \langle\cdot,\cdot\rangle)$ by:
$(u,v \mid w) := \langle u,v\rangle\langle w,w\rangle - \langle u,w\rangle\langle w,v\rangle,$
for all $u,v,w \in X$.
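For readers who want to experiment, here is a minimal numerical sketch of the standard 2-inner product (assuming NumPy; the helper names ip2 and norm2 are ours, not from the literature), checked against conditions (b)–(d) on random complex vectors:

import numpy as np

def ip2(u, v, w):
    # Standard 2-inner product (u, v | w) = <u,v><w,w> - <u,w><w,v>,
    # with <.,.> linear in the first argument, conjugate-linear in the second.
    return np.vdot(v, u) * np.vdot(w, w) - np.vdot(w, u) * np.vdot(v, w)

def norm2(u, w):
    # 2-norm generated by the 2-inner product: ||u | w|| = sqrt((u, u | w)).
    return np.sqrt(ip2(u, u, w).real)

rng = np.random.default_rng(0)
u, v, w = rng.standard_normal((3, 5)) + 1j * rng.standard_normal((3, 5))
assert np.isclose(ip2(u, u, w), ip2(w, w, u))            # condition (b)
assert np.isclose(ip2(u, v, w), np.conj(ip2(v, u, w)))   # condition (c)
assert np.isclose(ip2(2j * u, v, w), 2j * ip2(u, v, w))  # condition (d)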
Let $(X, (\cdot,\cdot \mid \cdot))$ be a 2-inner product space. We can define a function $\|\cdot \mid \cdot\|$ on $X \times X$ by
$\|u \mid w\| = \sqrt{(u,u \mid w)},$
for all $u,w \in X$. This function satisfies the following conditions:
(a) $\|u \mid w\| \ge 0$, and $\|u \mid w\| = 0$ if and only if $w = \alpha u$ for some scalar $\alpha$;
(b) $\|u \mid w\| = \|w \mid u\|$;
(c) $\|\alpha u \mid w\| = |\alpha|\,\|u \mid w\|$, for any scalar $\alpha \in \mathbb{K}$;
(d) $\|u_1 + u_2 \mid w\| \le \|u_1 \mid w\| + \|u_2 \mid w\|$, for all $u_1, u_2, w \in X$.
A function $\|\cdot \mid \cdot\|$ defined on $X \times X$ and satisfying the above conditions is called a 2-norm on $X$, and $(X, \|\cdot \mid \cdot\|)$ is called a linear 2-normed space.
It is easy to see that if $X = (X, (\cdot,\cdot \mid \cdot))$ is a 2-inner product space over the field of real numbers $\mathbb{R}$ or the field of complex numbers $\mathbb{C}$, then $(X, \|\cdot \mid \cdot\|)$ is a linear 2-normed space whose 2-norm $\|\cdot \mid \cdot\|$ is generated by the 2-inner product $(\cdot,\cdot \mid \cdot)$.
Two consequences of the above properties are the following: the parallelogram law [4],
$\|u+v \mid w\|^2 + \|u-v \mid w\|^2 = 2\|u \mid w\|^2 + 2\|v \mid w\|^2,$
for all $u,v,w \in X$, and the Cauchy-Schwarz inequality (see e.g., References [4,7]),
$|(u,v \mid w)| \le \|u \mid w\|\,\|v \mid w\|,$
for all $u,v,w \in X$. Equality in (3) holds if and only if $u$, $v$ and $w$ are linearly dependent.
If $X = (X, \langle\cdot,\cdot\rangle)$ is an inner product space, inequality (3) becomes ([6,8]):
$\big|\langle u,v\rangle\|w\|^2 - \langle u,w\rangle\langle w,v\rangle\big| \le \sqrt{\|u\|^2\|w\|^2 - |\langle u,w\rangle|^2}\,\sqrt{\|v\|^2\|w\|^2 - |\langle v,w\rangle|^2}.$
A reverse of the Cauchy-Schwarz inequality in 2-inner product spaces can be found in Reference [5]: if $u,v,w \in X$ and $a, A \in \mathbb{K}$ are such that $\mathrm{Re}(Av - u, u - av \mid w) \ge 0$, or equivalently $\left\|u - \frac{a+A}{2}v \,\middle|\, w\right\| \le \frac{1}{2}|A-a|\,\|v \mid w\|$, holds, then
$0 \le \|u \mid w\|^2\|v \mid w\|^2 - |(u,v \mid w)|^2 \le \frac{1}{4}|A-a|^2\,\|v \mid w\|^4.$
The constant $\frac{1}{4}$ is best possible.
Another important inequality in a 2-inner product space $X$ is the triangle inequality [4],
$\|u+v \mid w\| \le \|u \mid w\| + \|v \mid w\|,$
for all $u,v,w \in X$.
The Cauchy-Schwarz inequality in the real case, $|\langle u,v\rangle| \le \|u\|\cdot\|v\|$ (see e.g., References [9,10]), can be obtained from the following identity, as in Reference [11],
$\langle u,v\rangle = \|u\|\,\|v\|\left(1 - \frac{1}{2}\left\|\frac{u}{\|u\|} - \frac{v}{\|v\|}\right\|^2\right),$
for all $u,v \in X$, $u,v \neq 0$. An inequality which improves the Cauchy-Schwarz inequality is the Ostrowski inequality. In Reference [12], we find some refinements of Ostrowski's inequality and an extension to a 2-inner product space.
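A short numerical check of identity (7) on random vectors (a sketch, assuming NumPy):

import numpy as np

# Check <u, v> = ||u|| ||v|| (1 - (1/2) || u/||u|| - v/||v|| ||^2).
rng = np.random.default_rng(1)
u, v = rng.standard_normal((2, 4))
nu, nv = np.linalg.norm(u), np.linalg.norm(v)
rhs = nu * nv * (1.0 - 0.5 * np.linalg.norm(u / nu - v / nv) ** 2)
assert np.isclose(u @ v, rhs)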
The purpose of this paper is to study some identities in a 2-pre-Hilbert space and to prove new results related to several inequalities in a 2-pre-Hilbert space, among which we mention the Cauchy-Schwarz inequality. The novelty of this article is the introduction, for the first time, of the concepts of mean, variance, covariance, standard deviation and correlation coefficient for vectors, using the standard 2-inner product and some of its properties. We also present a brief characterization of a linear regression model for discrete random variables.

2. Inequalities in a 2-Pre-Hilbert Space

In this section, we obtain some characterizations of the Cauchy-Schwarz inequality for a 2-pre-Hilbert space. First, we use an identity given by the following result:
Lemma 1.
If $X = (X, (\cdot,\cdot \mid \cdot))$ is a 2-inner product space over the field of complex numbers $\mathbb{C}$, and the 2-norm $\|\cdot \mid \cdot\|$ is generated by the 2-inner product $(\cdot,\cdot \mid \cdot)$, then we have
$\|ax - by \mid z\|^2 = |a|^2\|x \mid z\|^2 - 2\,\mathrm{Re}\big(a\bar{b}\,(x,y \mid z)\big) + |b|^2\|y \mid z\|^2,$
for vectors $x, y$ and $z$ in $X$ and $a, b \in \mathbb{C}$.
Proof. 
By making simple calculations, for all $x,y,z \in X$ and $a,b \in \mathbb{C}$, we have that
$\|ax-by \mid z\|^2 = (ax-by, ax-by \mid z) = a\overline{(ax-by, x \mid z)} - b\overline{(ax-by, y \mid z)}$
$= |a|^2\|x \mid z\|^2 - a\bar{b}\,(x,y \mid z) - \bar{a}b\,\overline{(x,y \mid z)} + |b|^2\|y \mid z\|^2$
$= |a|^2\|x \mid z\|^2 - 2\,\mathrm{Re}\big(a\bar{b}\,(x,y \mid z)\big) + |b|^2\|y \mid z\|^2,$
which proves the statement. □
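A numerical sanity check of relation (8) with complex scalars (a sketch, assuming NumPy; ip2 and norm2 are the hypothetical helpers introduced after the definition of the standard 2-inner product, repeated here so the snippet is self-contained):

import numpy as np

def ip2(u, v, w):
    return np.vdot(v, u) * np.vdot(w, w) - np.vdot(w, u) * np.vdot(v, w)

def norm2(u, w):
    return np.sqrt(ip2(u, u, w).real)

rng = np.random.default_rng(2)
x, y, z = rng.standard_normal((3, 6)) + 1j * rng.standard_normal((3, 6))
a, b = 1.3 - 0.7j, -0.4 + 2.1j
lhs = norm2(a * x - b * y, z) ** 2
rhs = (abs(a) ** 2 * norm2(x, z) ** 2
       - 2 * (a * np.conj(b) * ip2(x, y, z)).real
       + abs(b) ** 2 * norm2(y, z) ** 2)
assert np.isclose(lhs, rhs)  # relation (8)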
Remark 1.
If in relation (8) we take $a = \frac{1}{\|x \mid z\|}$ and $b = \frac{1}{\|y \mid z\|}$, then we obtain
$\mathrm{Re}(x,y \mid z) = \|x \mid z\|\,\|y \mid z\|\left(1 - \frac{1}{2}\left\|\frac{x}{\|x \mid z\|} - \frac{y}{\|y \mid z\|} \,\middle|\, z\right\|^2\right),$
for all nonzero vectors $x$ and $y$ in $X$ such that the pairs of vectors $(x,z)$ and $(y,z)$ are linearly independent. If $(x,y \mid z) = \overline{(x,y \mid z)}$, then we obtain the following relation:
$(x,y \mid z) = \|x \mid z\|\,\|y \mid z\|\left(1 - \frac{1}{2}\left\|\frac{x}{\|x \mid z\|} - \frac{y}{\|y \mid z\|} \,\middle|\, z\right\|^2\right).$
The above equality is the extension of equality (7) to a 2-inner product space.
Using Lemma 1 in two conveniently chosen ways, we obtain an important equality:
Theorem 1.
With the above assumptions in a 2-pre-Hilbert space, the following equality holds:
$\|ax - by \mid z\|^2 + \|bx + ay \mid z\|^2 = \big(|a|^2 + |b|^2\big)\big(\|x \mid z\|^2 + \|y \mid z\|^2\big),$
for all vectors $x, y$ and $z$ in $X$ and $a,b \in \mathbb{C}$ with $a\bar{b} \in \mathbb{R}$.
Proof. 
In relation (8), if we replace $a$ by $b$, and $b$ by $-a$, then we deduce the relation
$\|bx + ay \mid z\|^2 = |b|^2\|x \mid z\|^2 + 2\,\mathrm{Re}\big(b\bar{a}\,(x,y \mid z)\big) + |a|^2\|y \mid z\|^2.$
Adding the above relation to relation (8), and taking into account that $a\bar{b} \in \mathbb{R}$ (so $b\bar{a} = a\bar{b}$ and the mixed terms cancel), we obtain the relation of the statement. □
Another equality in 2-inner product spaces is given by the following:
Theorem 2.
If $X = (X, (\cdot,\cdot \mid \cdot))$ is a 2-inner product space over the field of complex numbers $\mathbb{C}$, and the 2-norm $\|\cdot \mid \cdot\|$ is generated by the 2-inner product $(\cdot,\cdot \mid \cdot)$, then we have
$\mathrm{Re}\Big((\bar{b}-\bar{a})\big(a\|x \mid z\|^2 - b\|y \mid z\|^2\big)\Big) + \frac{1}{2}\big(\|ax - by \mid z\|^2 + \|\bar{a}x - \bar{b}y \mid z\|^2\big) = \mathrm{Re}(a\bar{b})\,\|x-y \mid z\|^2,$
for all vectors $x,y,z \in X$ and $a,b \in \mathbb{C}$.
Proof. 
If we make the substitutions $a \to \bar{a}$ and $b \to \bar{b}$ in the equality from Lemma 1, then we find the relation
$\|\bar{a}x - \bar{b}y \mid z\|^2 = |a|^2\|x \mid z\|^2 - 2\,\mathrm{Re}\big(\bar{a}b\,(x,y \mid z)\big) + |b|^2\|y \mid z\|^2.$
We can rewrite relation (12) as:
$(b-a)\big(\bar{a}\|x \mid z\|^2 - \bar{b}\|y \mid z\|^2\big) + (\bar{b}-\bar{a})\big(a\|x \mid z\|^2 - b\|y \mid z\|^2\big) + \|ax-by \mid z\|^2 + \|\bar{a}x-\bar{b}y \mid z\|^2 = \big(a\bar{b}+\bar{a}b\big)\|x-y \mid z\|^2.$
If we apply Lemma 1, then we obtain the relation
$(\bar{b}-\bar{a})\big(a\|x \mid z\|^2 - b\|y \mid z\|^2\big) + \|ax-by \mid z\|^2 = a\bar{b}\,\|x \mid z\|^2 + \bar{a}b\,\|y \mid z\|^2 - a\bar{b}\,(x,y \mid z) - \bar{a}b\,\overline{(x,y \mid z)}.$
Similarly, we deduce the identity
$(b-a)\big(\bar{a}\|x \mid z\|^2 - \bar{b}\|y \mid z\|^2\big) + \|\bar{a}x-\bar{b}y \mid z\|^2 = \bar{a}b\,\|x \mid z\|^2 + a\bar{b}\,\|y \mid z\|^2 - \bar{a}b\,(x,y \mid z) - a\bar{b}\,\overline{(x,y \mid z)}.$
By these relations and using the identity
$\big(a\bar{b}+\bar{a}b\big)\|x-y \mid z\|^2 = \big(a\bar{b}+\bar{a}b\big)\big(\|x \mid z\|^2 - (x,y \mid z) - \overline{(x,y \mid z)} + \|y \mid z\|^2\big),$
we deduce relation (14). Therefore, the relation of the statement is true. □
Remark 2.
It is easy to see that relation (14) becomes
$\big(|b|^2 - |a|^2\big)\big(\|x \mid z\|^2 - \|y \mid z\|^2\big) - |a-b|^2\big(\|x \mid z\|^2 + \|y \mid z\|^2\big) + \|ax-by \mid z\|^2 + \|\bar{a}x - \bar{b}y \mid z\|^2 = 2\,\mathrm{Re}(a\bar{b})\,\|x-y \mid z\|^2.$
Corollary 1.
With the above assumptions in a 2-pre-Hilbert space, the following identity holds:
$\frac{1}{2}\left(\left\|a\frac{x}{\|x \mid z\|} - b\frac{y}{\|y \mid z\|} \,\middle|\, z\right\|^2 + \left\|\bar{a}\frac{x}{\|x \mid z\|} - \bar{b}\frac{y}{\|y \mid z\|} \,\middle|\, z\right\|^2\right) = |a-b|^2 + \mathrm{Re}(a\bar{b})\left\|\frac{x}{\|x \mid z\|} - \frac{y}{\|y \mid z\|} \,\middle|\, z\right\|^2,$
for all nonzero vectors $x, y$ and $z$ in $X$ such that the pairs of vectors $(x,z)$ and $(y,z)$ are linearly independent, and $a,b \in \mathbb{C}$.
Proof. 
If we make the substitutions $x \to \frac{x}{\|x \mid z\|}$ and $y \to \frac{y}{\|y \mid z\|}$ in relation (12), then we deduce equality (16). □
Corollary 2.
With the above assumptions in a 2-pre-Hilbert space, the following equalities hold:
$(b-a)\big(a\|x \mid z\|^2 - b\|y \mid z\|^2\big) + \|ax - by \mid z\|^2 = ab\,\|x-y \mid z\|^2,$
for vectors $x$ and $y$ in $X$ and $a,b \in \mathbb{R}$, and
$\left\|\frac{x}{\|x \mid z\|} - \frac{y}{\|y \mid z\|} \,\middle|\, z\right\|^2 = \frac{\|x-y \mid z\|^2 - \big(\|x \mid z\| - \|y \mid z\|\big)^2}{\|x \mid z\|\cdot\|y \mid z\|},$
for nonzero vectors $x, y$ and $z$ in $X$ such that the pairs of vectors $(x,z)$ and $(y,z)$ are linearly independent.
Proof. 
In relation (12), if we take $a,b \in \mathbb{R}$, then $\bar{a} = a$ and $\bar{b} = b$, so we obtain relation (17). For $a = \frac{1}{\|x \mid z\|}$ and $b = \frac{1}{\|y \mid z\|}$ in relation (17), we deduce equality (18). □
Remark 3.
We can rearrange the expression from relation (18) as follows:
$\left\|\frac{x}{\|x \mid z\|} - \frac{y}{\|y \mid z\|} \,\middle|\, z\right\|^2 = \frac{\big(\|x-y \mid z\| - \big|\|x \mid z\| - \|y \mid z\|\big|\big)\big(\|x-y \mid z\| + \big|\|x \mid z\| - \|y \mid z\|\big|\big)}{\min\{\|x \mid z\|, \|y \mid z\|\}\,\max\{\|x \mid z\|, \|y \mid z\|\}}.$
But, using the inequality $\min\{a,b\} \le \sqrt{ab} \le \max\{a,b\}$ for positive real numbers $a$ and $b$, we deduce the following inequality:
$\frac{\|x-y \mid z\| - \big|\|x \mid z\| - \|y \mid z\|\big|}{\min\{\|x \mid z\|, \|y \mid z\|\}} \le \left\|\frac{x}{\|x \mid z\|} - \frac{y}{\|y \mid z\|} \,\middle|\, z\right\| \le \frac{\|x-y \mid z\| + \big|\|x \mid z\| - \|y \mid z\|\big|}{\max\{\|x \mid z\|, \|y \mid z\|\}},$
for nonzero vectors $x, y$ and $z$ in $X$ such that the pairs of vectors $(x,z)$ and $(y,z)$ are linearly independent. This inequality is an extension of Maligranda's inequality from Reference [13] to a 2-inner product space over the field of real numbers.
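These Maligranda-type bounds can be spot-checked numerically on random real vectors (a sketch, assuming NumPy; ip2 and norm2 are our hypothetical helpers, here in their real-valued form):

import numpy as np

def ip2(u, v, w):
    return (u @ v) * (w @ w) - (u @ w) * (w @ v)

def norm2(u, w):
    return np.sqrt(ip2(u, u, w))

rng = np.random.default_rng(3)
x, y, z = rng.standard_normal((3, 5))   # generic vectors: (x,z), (y,z) independent
X, Y = norm2(x, z), norm2(y, z)
D = norm2(x / X - y / Y, z)             # left-hand side of (19)
A, B = norm2(x - y, z), abs(X - Y)
assert (A - B) / min(X, Y) <= D + 1e-12
assert D <= (A + B) / max(X, Y) + 1e-12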
Next, we give an evaluation of the sum of the squares of the norms of two vectors, in a 2-inner product space:
Theorem 3.
If $a,b \in \mathbb{C}$ with $\mathrm{Re}(a\bar{b}) > 0$, and $X = (X, (\cdot,\cdot \mid \cdot))$ is a 2-inner product space over the field of complex numbers $\mathbb{C}$ whose 2-norm $\|\cdot \mid \cdot\|$ is generated by the 2-inner product $(\cdot,\cdot \mid \cdot)$, then we have
$|a-b|^2 + 2\,\mathrm{Re}(a\bar{b})\left(2 - \left\|\frac{x}{\|x \mid z\|} + \frac{y}{\|y \mid z\|} \,\middle|\, z\right\|\right) \le \frac{1}{2}\left(\left\|a\frac{x}{\|x \mid z\|} - b\frac{y}{\|y \mid z\|} \,\middle|\, z\right\|^2 + \left\|\bar{a}\frac{x}{\|x \mid z\|} - \bar{b}\frac{y}{\|y \mid z\|} \,\middle|\, z\right\|^2\right) \le |a-b|^2 + 4\,\mathrm{Re}(a\bar{b})\left(2 - \left\|\frac{x}{\|x \mid z\|} + \frac{y}{\|y \mid z\|} \,\middle|\, z\right\|\right),$
for all nonzero vectors $x, y$ and $z$ in $X$ such that the pairs of vectors $(x,z)$ and $(y,z)$ are linearly independent.
Proof. 
From relation (16) and from the parallelogram identity,
$\left\|\frac{x}{\|x \mid z\|} - \frac{y}{\|y \mid z\|} \,\middle|\, z\right\|^2 + \left\|\frac{x}{\|x \mid z\|} + \frac{y}{\|y \mid z\|} \,\middle|\, z\right\|^2 = 2\left(\left\|\frac{x}{\|x \mid z\|} \,\middle|\, z\right\|^2 + \left\|\frac{y}{\|y \mid z\|} \,\middle|\, z\right\|^2\right) = 4,$
we find the equality
$\frac{1}{2}\left(\left\|a\frac{x}{\|x \mid z\|} - b\frac{y}{\|y \mid z\|} \,\middle|\, z\right\|^2 + \left\|\bar{a}\frac{x}{\|x \mid z\|} - \bar{b}\frac{y}{\|y \mid z\|} \,\middle|\, z\right\|^2\right) = |a-b|^2 + \mathrm{Re}(a\bar{b})\left(4 - \left\|\frac{x}{\|x \mid z\|} + \frac{y}{\|y \mid z\|} \,\middle|\, z\right\|^2\right).$
Using the factorization
$4 - \left\|\frac{x}{\|x \mid z\|} + \frac{y}{\|y \mid z\|} \,\middle|\, z\right\|^2 = \left(2 - \left\|\frac{x}{\|x \mid z\|} + \frac{y}{\|y \mid z\|} \,\middle|\, z\right\|\right)\left(2 + \left\|\frac{x}{\|x \mid z\|} + \frac{y}{\|y \mid z\|} \,\middle|\, z\right\|\right),$
the bound $0 \le \left\|\frac{x}{\|x \mid z\|} + \frac{y}{\|y \mid z\|} \,\middle|\, z\right\| \le 2$, which follows from the triangle inequality, and taking into account that $\mathrm{Re}(a\bar{b}) > 0$, we deduce inequality (20). □
Below, we obtain a refinement of the Cauchy-Schwarz inequality and a reverse of the Cauchy-Schwarz inequality in a 2-pre-Hilbert space.
Corollary 3.
With the above assumptions in a 2-pre-Hilbert space, the following inequality holds:
$\left(2 - \left\|\frac{x}{\|x \mid z\|} + \frac{y}{\|y \mid z\|} \,\middle|\, z\right\|\right)\|x \mid z\|\cdot\|y \mid z\| \le \|x \mid z\|\cdot\|y \mid z\| - \mathrm{Re}(x,y \mid z) \le 2\left(2 - \left\|\frac{x}{\|x \mid z\|} + \frac{y}{\|y \mid z\|} \,\middle|\, z\right\|\right)\|x \mid z\|\cdot\|y \mid z\|,$
for all nonzero vectors $x, y$ and $z$ in $X$ such that the pairs of vectors $(x,z)$ and $(y,z)$ are linearly independent.
Proof. 
If we take $a = b \neq 0$, $a,b \in \mathbb{R}$, in inequality (20), then we find the following inequality:
$2\left(2 - \left\|\frac{x}{\|x \mid z\|} + \frac{y}{\|y \mid z\|} \,\middle|\, z\right\|\right) \le \left\|\frac{x}{\|x \mid z\|} - \frac{y}{\|y \mid z\|} \,\middle|\, z\right\|^2 \le 4\left(2 - \left\|\frac{x}{\|x \mid z\|} + \frac{y}{\|y \mid z\|} \,\middle|\, z\right\|\right).$
From relation (9), we deduce the equality
$\frac{1}{2}\left\|\frac{x}{\|x \mid z\|} - \frac{y}{\|y \mid z\|} \,\middle|\, z\right\|^2 = \frac{\|x \mid z\|\cdot\|y \mid z\| - \mathrm{Re}(x,y \mid z)}{\|x \mid z\|\cdot\|y \mid z\|},$
and combining this with inequality (23), we find the inequality of the statement. □
Remark 4.
If we take $\|x \mid z\| = \|y \mid z\| = 1$ in inequality (22), we obtain the following inequality:
$2\|x+y \mid z\| - 3 \le \mathrm{Re}(x,y \mid z) \le \|x+y \mid z\| - 1,$
for all $x,y \in X$, where $z \in X$ is a given nonzero vector.
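A numerical check of these bounds after normalizing to unit 2-norm (a sketch, assuming NumPy; ip2 and norm2 are our hypothetical real-valued helpers):

import numpy as np

def ip2(u, v, w):
    return (u @ v) * (w @ w) - (u @ w) * (w @ v)

def norm2(u, w):
    return np.sqrt(ip2(u, u, w))

rng = np.random.default_rng(4)
x0, y0, z = rng.standard_normal((3, 5))
x, y = x0 / norm2(x0, z), y0 / norm2(y0, z)  # now ||x|z|| = ||y|z|| = 1
s = norm2(x + y, z)
p = ip2(x, y, z)                             # real case: equals Re(x, y | z)
assert 2 * s - 3 <= p + 1e-12 and p <= s - 1 + 1e-12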
Next, we will show an estimate of the triangle inequality in a linear 2-normed space.
Theorem 4.
If $X = (X, \|\cdot \mid \cdot\|)$ is a linear 2-normed space over the field of real numbers $\mathbb{R}$, then the following inequality holds:
$\min\{a,b\}\big(\|x \mid z\| + \|y \mid z\| - \|x+y \mid z\|\big) \le a\|x \mid z\| + b\|y \mid z\| - \|ax+by \mid z\| \le \max\{a,b\}\big(\|x \mid z\| + \|y \mid z\| - \|x+y \mid z\|\big),$
for all vectors $x,y,z$ in $X$ and $a,b \in \mathbb{R}_+$.
Proof. 
Without loss of generality, we may assume that $0 \le a \le b$. Then we have
$a\|x \mid z\| + b\|y \mid z\| - \min\{a,b\}\big(\|x \mid z\| + \|y \mid z\| - \|x+y \mid z\|\big) = (b-a)\|y \mid z\| + a\|x+y \mid z\| = (b-a)\|y \mid z\| + \|a(x+y) \mid z\| \ge \|ax+by \mid z\|.$
Similarly, we make the following calculations:
$a\|x \mid z\| + b\|y \mid z\| - \max\{a,b\}\big(\|x \mid z\| + \|y \mid z\| - \|x+y \mid z\|\big) = b\|x+y \mid z\| - (b-a)\|x \mid z\| \le \|ax+by \mid z\|.$
In the case $0 \le b \le a$, we deduce the same results. Therefore, the inequalities of the statement are true. □
Remark 5.
If we replace $x$ by $\frac{x}{\|x \mid z\|}$ and $y$ by $\frac{y}{\|y \mid z\|}$ in relation (25), we obtain the following inequality:
$\min\{a,b\}\left(2 - \left\|\frac{x}{\|x \mid z\|} + \frac{y}{\|y \mid z\|} \,\middle|\, z\right\|\right) \le a + b - \left\|\frac{ax}{\|x \mid z\|} + \frac{by}{\|y \mid z\|} \,\middle|\, z\right\| \le \max\{a,b\}\left(2 - \left\|\frac{x}{\|x \mid z\|} + \frac{y}{\|y \mid z\|} \,\middle|\, z\right\|\right),$
for nonzero vectors $x,y,z$ in $X$ such that the sets of vectors $\{x,z\}$ and $\{y,z\}$ are linearly independent.
Corollary 4.
If $X = (X, \|\cdot \mid \cdot\|)$ is a linear 2-normed space over the field of real numbers $\mathbb{R}$, then we have
$\min\{\|x \mid z\|, \|y \mid z\|\}\left(2 - \left\|\frac{x}{\|x \mid z\|} + \frac{y}{\|y \mid z\|} \,\middle|\, z\right\|\right) \le \|x \mid z\| + \|y \mid z\| - \|x+y \mid z\| \le \max\{\|x \mid z\|, \|y \mid z\|\}\left(2 - \left\|\frac{x}{\|x \mid z\|} + \frac{y}{\|y \mid z\|} \,\middle|\, z\right\|\right),$
for nonzero vectors $x,y,z$ in $X$ such that the sets of vectors $\{x,z\}$ and $\{y,z\}$ are linearly independent.
Proof. 
For nonzero vectors $x,y,z$ in $X$ with $\{x,z\}$ and $\{y,z\}$ linearly independent, we make in Theorem 4 the substitutions $a = \frac{1}{\|x \mid z\|}$, $b = \frac{1}{\|y \mid z\|}$; then we obtain
$\frac{\|x \mid z\| + \|y \mid z\| - \|x+y \mid z\|}{\max\{\|x \mid z\|, \|y \mid z\|\}} \le 2 - \left\|\frac{x}{\|x \mid z\|} + \frac{y}{\|y \mid z\|} \,\middle|\, z\right\| \le \frac{\|x \mid z\| + \|y \mid z\| - \|x+y \mid z\|}{\min\{\|x \mid z\|, \|y \mid z\|\}},$
which implies the inequalities from (27). □
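The two-sided estimate of Theorem 4 is easy to test numerically (a sketch, assuming NumPy; ip2 and norm2 are our hypothetical real-valued helpers):

import numpy as np

def ip2(u, v, w):
    return (u @ v) * (w @ w) - (u @ w) * (w @ v)

def norm2(u, w):
    return np.sqrt(ip2(u, u, w))

rng = np.random.default_rng(5)
x, y, z = rng.standard_normal((3, 5))
a, b = 0.7, 2.3                                  # a, b in R_+
gap = a * norm2(x, z) + b * norm2(y, z) - norm2(a * x + b * y, z)
base = norm2(x, z) + norm2(y, z) - norm2(x + y, z)
assert min(a, b) * base - 1e-12 <= gap <= max(a, b) * base + 1e-12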

3. Applications of the Standard 2-Inner Product

If $X = (X, \langle\cdot,\cdot\rangle)$ is an inner product space, then the standard 2-inner product $(\cdot,\cdot \mid \cdot)$ is defined on $X$ by:
$(x,y \mid z) = \langle x,y\rangle\langle z,z\rangle - \langle x,z\rangle\langle z,y\rangle,$
for all $x,y,z \in X$.
Then $(X, \|\cdot \mid \cdot\|)$ becomes a linear 2-normed space, with the 2-norm given by:
$\|x \mid z\|^2 = (x,x \mid z) = \|x\|^2\|z\|^2 - |\langle x,z\rangle|^2,$
for all $x,z \in X$.
(a) We consider the vector space $(\mathbb{R}^n, \langle\cdot,\cdot\rangle)$. For $x = (x_1, x_2, \ldots, x_n)$, $y = (y_1, y_2, \ldots, y_n)$, $z = (z_1, z_2, \ldots, z_n)$, we have $\langle x,y\rangle = x_1y_1 + x_2y_2 + \cdots + x_ny_n$, $\|x\| = \sqrt{x_1^2 + x_2^2 + \cdots + x_n^2}$,
$(x,y \mid z) = \sum_{i=1}^n x_iy_i \sum_{i=1}^n z_i^2 - \sum_{i=1}^n x_iz_i \sum_{i=1}^n z_iy_i$
and $\|x \mid z\| = \sqrt{\sum_{i=1}^n x_i^2 \sum_{i=1}^n z_i^2 - \left(\sum_{i=1}^n x_iz_i\right)^2}$.
If we apply inequality (22) in the real vector space $(\mathbb{R}^n, \langle\cdot,\cdot\rangle)$, then we have
$0 \le A \le \sqrt{\sum_{i=1}^n x_i^2 \cdot \sum_{i=1}^n y_i^2} - \sum_{i=1}^n x_iy_i \le 2A,$
where
$A = 2\sqrt{\sum_{i=1}^n x_i^2 \cdot \sum_{i=1}^n y_i^2} - \sqrt{\sum_{i=1}^n\left(x_i\sqrt{\sum_{j=1}^n y_j^2} + y_i\sqrt{\sum_{j=1}^n x_j^2}\right)^2}.$
(b) In the vector space $(C^0[a,b], \langle\cdot,\cdot\rangle)$, for $f,g,h \in C^0[a,b]$ we have
$\langle f,g\rangle = \int_a^b f(x)g(x)\,dx, \quad \|f\| = \sqrt{\int_a^b f^2(x)\,dx},$
$(f,g \mid h) = \int_a^b f(x)g(x)\,dx\int_a^b h^2(x)\,dx - \int_a^b f(x)h(x)\,dx\int_a^b h(x)g(x)\,dx$
and
$\|f \mid h\| = \sqrt{\int_a^b f^2(x)\,dx\int_a^b h^2(x)\,dx - \left(\int_a^b f(x)h(x)\,dx\right)^2}.$
Now, applying inequality (22) in the real vector space $(C^0[a,b], \langle\cdot,\cdot\rangle)$, we have
$0 \le B \le \sqrt{\int_a^b f^2(x)\,dx \cdot \int_a^b g^2(x)\,dx} - \int_a^b f(x)g(x)\,dx \le 2B,$
where
$B = 2\sqrt{\int_a^b f^2(x)\,dx \cdot \int_a^b g^2(x)\,dx} - \sqrt{\int_a^b\left(f(x)\sqrt{\int_a^b g^2(t)\,dt} + g(x)\sqrt{\int_a^b f^2(t)\,dt}\right)^2 dx}.$
These inequalities are improvements of the Cauchy-Schwarz inequality in its discrete and integral versions, respectively.
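The discrete improvement can be verified numerically (a sketch, assuming NumPy), confirming that $A$ is a nonnegative lower bound and $2A$ an upper bound for the Cauchy-Schwarz gap:

import numpy as np

rng = np.random.default_rng(6)
x, y = rng.standard_normal((2, 8))
nx, ny = np.linalg.norm(x), np.linalg.norm(y)
A = 2 * nx * ny - np.linalg.norm(ny * x + nx * y)   # the quantity A above
gap = nx * ny - x @ y                               # Cauchy-Schwarz gap
assert -1e-12 <= A <= gap + 1e-12 <= 2 * A + 1e-12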
(c) Let $X$ be a real linear space with the inner product $\langle\cdot,\cdot\rangle$. The Chebyshev functional [14] is defined by
$T_z(x,y) = \|z\|^2\langle x,y\rangle - \langle x,z\rangle\langle y,z\rangle,$
for all $x,y \in X$, where $z \in X$ is a given nonzero vector.
It is easy to see that we have $T_z(x,y) = (x,y \mid z)$ and $T_z(x,x) \ge 0$, for all $x,y,z \in X$.
If we replace $x$ and $y$ by $x - \frac{\langle x,z\rangle}{\|z\|^2}z$ and $y - \frac{\langle y,z\rangle}{\|z\|^2}z$, $z \neq 0$, in the Cauchy-Schwarz inequality, then we find the Cauchy-Schwarz inequality in terms of the Chebyshev functional, given by:
$|T_z(x,y)|^2 \le T_z(x,x)\,T_z(y,y).$
Let $X$ be a real linear space with the inner product $\langle\cdot,\cdot\rangle$. Equality (17) can be written in terms of the Chebyshev functional as
$(b-a)\big(a\,T_z(x,x) - b\,T_z(y,y)\big) + T_z(ax-by, ax-by) = ab\,T_z(x-y, x-y),$
for all vectors $x,y$ in $X$, where $z \in X$ is a given nonzero vector and $a,b \in \mathbb{R}$. If $T_z(x,x) = T_z(y,y)$, then
$T_z(ax-by, ax-by) \ge ab\,T_z(x-y, x-y),$
for all vectors $x,y$ in $X$, where $z \in X$, $z \neq 0$.
(d) For every subspace $U \subseteq X$, we have the decomposition $X = U \oplus U^{\perp}$. Every $x \in X$ can be uniquely written as $x = x_1 + u$, where $x_1 \in U$ and $u \in U^{\perp}$. We define the orthogonal projection $P_U : X \to X$ by $P_U x = x_1$. It is easy to see that $\langle u, x_1\rangle = 0$ for every $u \in U^{\perp}$, so we have $\langle P_U x, u\rangle = 0$, which implies the equality $\langle x,u\rangle = \langle u,u\rangle = \|u\|^2$, where the norm $\|\cdot\|$ is generated by the inner product $\langle\cdot,\cdot\rangle$.
From relation (17), if $\|x \mid z\| = \|y \mid z\|$, then we have
$\|ax - by \mid z\|^2 \ge ab\,\|x-y \mid z\|^2,$
for vectors $x$ and $y$ in $X$ and $a,b \in \mathbb{R}_+$.
For a subspace $U$ of an inner product space $X$, with $x,y \in X$, $z \in U$ and $(x,y \mid z)$ the standard 2-inner product on $X$, we deduce the identity:
$\|ax - by \mid z\|^2 = \|aP_Ux - bP_Uy \mid z\|^2 + \|au - bv \mid z\|^2,$
where we have the decompositions $x = P_Ux + u$, $y = P_Uy + v$. Using the equality from (17) and the above identity, we prove the following equality:
$\|aP_Ux - bP_Uy \mid z\|^2 + \|au - bv \mid z\|^2 = ab\,\|x-y \mid z\|^2 + (a-b)^2\|x \mid z\|^2,$
for $\|x \mid z\| = \|y \mid z\|$.

4. Applications of the Standard 2-Inner Product to Certain Statistical Indicators

A variety of ways to present data, probability, and statistical estimation are mainly characterized by the following statistical indicators: the mean (average), variance and standard deviation, as well as the covariance and the Pearson correlation coefficient [15].
Taking the mean as the center of a random variable’s probability distribution, the variance is a measure of how much the probability mass is spread out around this center.
If $V$ is a random variable with mean $E[V] = \mu_V$, then the formal definition of the variance is the following: $Var(V) = E[(V - \mu_V)^2]$. The expression for the variance can thus be expanded: $Var(V) = E[V^2] - E^2[V]$. The standard deviation $\sigma$ of $V$ is defined by $\sigma = \sqrt{Var(V)}$.
The covariance is a measure of how much two random variables $V$ and $W$ change together at the same time and is defined as $Cov(V,W) = E[(V - E[V])(W - E[W])]$, which is equivalent to the form $Cov(V,W) = E[VW] - E[V]E[W]$. We find the Cauchy-Schwarz inequality for discrete random variables given by
$|Cov(V,W)| \le \sqrt{Var(V)}\sqrt{Var(W)}.$
The correlation between sets of data is a measure of how well they are related. A correlation coefficient is a numerical measure of some type of correlation, meaning a statistical relationship between two variables.
The Pearson correlation coefficient $r(V,W)$ is a measure of the strength and direction of the linear relationship between two variables $V$ and $W$, defined as the covariance of the variables divided by the product of their standard deviations:
$r(V,W) = \frac{Cov(V,W)}{\sqrt{Var(V)}\sqrt{Var(W)}}.$
Using the Cauchy-Schwarz inequality, we deduce that $-1 \le r(V,W) \le 1$. The variance of a discrete random variable $V = (x_i, p_i)_{1 \le i \le n}$ with probabilities $P(V = x_i) = p_i = \frac{1}{n}$ for any $i = \overline{1,n}$ is its second central moment, the expected value of the squared deviation from the mean $\mu_V = E[V] = \frac{1}{n}\sum_{i=1}^n x_i$; thus $Var(V) = \frac{1}{n}\sum_{i=1}^n (x_i - \mu_V)^2$.
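These definitions translate directly into code (a minimal sketch, assuming NumPy; population versions, i.e., division by n, matching the formulas above):

import numpy as np

x = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])
y = np.array([1.0, 3.0, 2.0, 5.0, 4.0, 6.0, 6.0, 8.0])
mu_x, mu_y = x.mean(), y.mean()
var_x = ((x - mu_x) ** 2).mean()                       # Var(V) = E[(V - mu_V)^2]
assert np.isclose(var_x, (x ** 2).mean() - mu_x ** 2)  # = E[V^2] - E^2[V]
var_y = ((y - mu_y) ** 2).mean()
cov_xy = ((x - mu_x) * (y - mu_y)).mean()              # Cov(V, W)
r = cov_xy / np.sqrt(var_x * var_y)                    # Pearson coefficient
assert abs(cov_xy) <= np.sqrt(var_x * var_y) + 1e-12   # Cauchy-Schwarz
assert -1.0 <= r <= 1.0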
Let $x_1, x_2, \ldots, x_n$ be real numbers, assume $\gamma_1 \le x_i \le \Gamma_1$ for all $i = \overline{1,n}$, and let $\mu_V = \frac{1}{n}\sum_{i=1}^n x_i$ be the average. In 1935, Popoviciu (see e.g., References [16,17]) proved the following inequality:
$Var(V) \le \frac{1}{4}(\Gamma_1 - \gamma_1)^2.$
The discrete version of the Grüss inequality has the following form (see e.g., References [18,19]):
$\left|\frac{1}{n}\sum_{i=1}^n x_iy_i - \frac{1}{n}\sum_{i=1}^n x_i \cdot \frac{1}{n}\sum_{i=1}^n y_i\right| \le \frac{1}{4}(\Gamma_1 - \gamma_1)(\Gamma_2 - \gamma_2),$
where $x_i, y_i$ are real numbers such that $\gamma_1 \le x_i \le \Gamma_1$ and $\gamma_2 \le y_i \le \Gamma_2$ for all $i = \overline{1,n}$.
From the relation
$Cov(V,W) = E[VW] - E[V]E[W] = \frac{1}{n}\sum_{i=1}^n x_iy_i - \frac{1}{n}\sum_{i=1}^n x_i \cdot \frac{1}{n}\sum_{i=1}^n y_i$
and using the Cauchy-Schwarz inequality for discrete random variables, $|Cov(V,W)| \le \sqrt{Var(V)}\sqrt{Var(W)}$, together with inequality (32), we obtain a proof of Grüss's inequality.
Bhatia and Davis proved in Reference [16] the following inequality:
$Var(V) \le (\Gamma_1 - \mu_V)(\mu_V - \gamma_1).$
The inequality of Bhatia and Davis represents an improvement of Popoviciu's inequality, because $\frac{(\Gamma_1 - \gamma_1)^2}{4} \ge (\Gamma_1 - \mu_V)(\mu_V - \gamma_1)$. Therefore, we obtain an improvement of Grüss's inequality, given by the following relation:
$\left|\frac{1}{n}\sum_{i=1}^n x_iy_i - \frac{1}{n}\sum_{i=1}^n x_i \cdot \frac{1}{n}\sum_{i=1}^n y_i\right| \le \sqrt{(\Gamma_1 - \mu_V)(\mu_V - \gamma_1)}\sqrt{(\Gamma_2 - \mu_W)(\mu_W - \gamma_2)} \le \frac{1}{4}(\Gamma_1 - \gamma_1)(\Gamma_2 - \gamma_2).$
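The improved Grüss bound is easy to test on random data (a sketch, assuming NumPy; here γ and Γ are taken as the sample minima and maxima):

import numpy as np

rng = np.random.default_rng(7)
x, y = rng.uniform(0.0, 10.0, (2, 50))
g1, G1, g2, G2 = x.min(), x.max(), y.min(), y.max()
mu_x, mu_y = x.mean(), y.mean()
lhs = abs((x * y).mean() - mu_x * mu_y)
mid = np.sqrt((G1 - mu_x) * (mu_x - g1)) * np.sqrt((G2 - mu_y) * (mu_y - g2))
assert lhs <= mid + 1e-12 <= 0.25 * (G1 - g1) * (G2 - g2) + 1e-12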
In Reference [18], we find some research on refining the Grüss inequality.
The Pearson correlation coefficient is given by
$r(V,W) = \frac{\frac{1}{n}\sum_{i=1}^n x_iy_i - \frac{1}{n}\sum_{i=1}^n x_i \cdot \frac{1}{n}\sum_{i=1}^n y_i}{\sqrt{\frac{1}{n}\sum_{i=1}^n x_i^2 - \left(\frac{1}{n}\sum_{i=1}^n x_i\right)^2}\sqrt{\frac{1}{n}\sum_{i=1}^n y_i^2 - \left(\frac{1}{n}\sum_{i=1}^n y_i\right)^2}}.$
Florea and Niculescu, in Reference [20], treated the problem of estimating the deviation of the values of a function from its mean value; this estimation is characterized below.
We denote by R ( [ a , b ] ) the space of Riemann-integrable functions on the interval [ a , b ] , and by C 0 ( [ a , b ] ) the space of real-valued continuous functions on the interval [ a , b ] .
The integral arithmetic mean of a Riemann-integrable function $f : [a,b] \to \mathbb{R}$ is the number
$M_1[f] = \frac{1}{b-a}\int_a^b f(x)\,dx.$
If $f$ and $h$ are two integrable functions on $[a,b]$ and $\int_a^b h(x)\,dx > 0$, then a generalization of the integral arithmetic mean is the number $M_h[f] = \frac{\int_a^b f(x)h(x)\,dx}{\int_a^b h(x)\,dx}$, called the $h$-integral arithmetic mean of a Riemann-integrable function $f$. If $f$ is a Riemann-integrable function, we denote by
$var(f) = M_1\big[(f - M_1(f))^2\big]$
the variance of $f$. The expression for the variance of $f$ can be expanded in this way: $var(f) = \frac{1}{b-a}\int_a^b\left(f(x) - \frac{1}{b-a}\int_a^b f(t)\,dt\right)^2 dx$. In the same way, we define the $h$-variance of a Riemann-integrable function $f$ by $var_h(f) = M_h\big[(f - M_h(f))^2\big]$. The expression for the $h$-variance can thus be expanded: $var_h(f) = \frac{1}{\int_a^b h(x)\,dx}\int_a^b\left(f(x) - \frac{\int_a^b f(t)h(t)\,dt}{\int_a^b h(t)\,dt}\right)^2 h(x)\,dx$.
It is easy to see another form of the $h$-variance, given by the following: $var_h(f) = M_h[f^2] - M_h^2[f]$. In Reference [21], Aldaz showed a refinement of the AM-GM inequality and used it to prove that $1 - \frac{\int_a^b f^{1/2}(x)\,dx}{\left(\int_a^b f(x)\,dx\right)^{1/2}}$ is a measure of the dispersion of $f^{1/2}$ about its mean value, which is, in fact, comparable to the variance.
The covariance is a measure of how much two Riemann-integrable functions change together at the same time and is defined as $cov(f,g) = M_1\big[(f - M_1[f])(g - M_1[g])\big]$, which is equivalent to the form
$cov(f,g) = M_1[fg] - M_1[f]M_1[g] = \frac{1}{b-a}\int_a^b f(x)g(x)\,dx - \frac{1}{b-a}\int_a^b f(x)\,dx \cdot \frac{1}{b-a}\int_a^b g(x)\,dx.$
In fact, the covariance is the Chebyshev functional attached to functions f and g. In Reference [22] it is written as T ( f , g ) . The properties of the Chebyshev functional have been studied by Elezović, Marangunić and Pečarić in Reference [19].
The $h$-covariance is a measure of how much two random variables change together and is defined as $cov_h(f,g) = M_h\big[(f - M_h[f])(g - M_h[g])\big]$, which is equivalent to the form
$cov_h(f,g) = M_h[fg] - M_h[f]M_h[g] = \frac{\int_a^b f(x)g(x)h(x)\,dx}{\int_a^b h(x)\,dx} - \frac{\int_a^b f(x)h(x)\,dx}{\int_a^b h(x)\,dx} \cdot \frac{\int_a^b g(x)h(x)\,dx}{\int_a^b h(x)\,dx}.$
In Reference [23], Pečarić generalized the Chebyshev functional attached to functions $f$ and $g$ to the Chebyshev $h$-functional $T(f,g;h)$, and showed some generalizations of the Grüss inequality using it. It is easy to see that, in terms of covariance, this can be written as $T(f,g;h) = cov_h(f,g)$.
In terms of covariance, the Grüss inequality becomes
$|cov(f,g)| \le \frac{1}{4}(\Gamma_1 - \gamma_1)(\Gamma_2 - \gamma_2).$
In terms of the Chebyshev functional, the Grüss inequality becomes
$|T(f,g)| \le \frac{1}{4}(\Gamma_1 - \gamma_1)(\Gamma_2 - \gamma_2).$
Next, using the notion of the standard 2-inner product, we extend the above concepts to vectors of $\mathbb{R}^n$. If $X = (X, \langle\cdot,\cdot\rangle)$ is an inner product space, then the standard 2-inner product $(\cdot,\cdot \mid \cdot)$ is defined on $X$ by:
$(x,y \mid z) := \langle x,y\rangle\langle z,z\rangle - \langle x,z\rangle\langle z,y\rangle,$
for all $x,y,z \in X$. Then $(X, \|\cdot \mid \cdot\|)$ becomes a linear 2-normed space, with the 2-norm given by:
$\|x \mid z\|^2 = (x,x \mid z) = \|x\|^2\|z\|^2 - \langle x,z\rangle^2,$
for all $x,z \in X$.
Now, we take the vector space $(\mathbb{R}^n, \langle\cdot,\cdot\rangle)$. For $x = (x_1, \ldots, x_n)$, $y = (y_1, \ldots, y_n)$, $z = (z_1, \ldots, z_n)$, we have
$\langle x,y\rangle = x_1y_1 + \cdots + x_ny_n, \quad \|x\| = \sqrt{x_1^2 + \cdots + x_n^2},$
$(x,y \mid z) = \langle x,y\rangle\langle z,z\rangle - \langle x,z\rangle\langle z,y\rangle = \sum_{i=1}^n x_iy_i \sum_{i=1}^n z_i^2 - \sum_{i=1}^n x_iz_i \sum_{i=1}^n z_iy_i$
and
$\|x \mid z\| = \sqrt{\sum_{i=1}^n x_i^2 \sum_{i=1}^n z_i^2 - \left(\sum_{i=1}^n x_iz_i\right)^2}.$
In Reference [14], Niezgoda studied certain orthoprojectors. The operator $P_z : X \to X$ defined by
$P_z(x) = \frac{\langle x,z\rangle}{\langle z,z\rangle}z, \quad x \in X,\ z \neq 0,$
is the orthoprojector from $X$ onto $\mathrm{span}\{z\}$. If $e = \frac{u}{\|u\|}$, where $u = (1,1,\ldots,1) \in \mathbb{R}^n$, then the average of the vector $x$ is $\mu_x = \left\langle \frac{x}{\|u\|}, e\right\rangle = \frac{1}{n}\sum_{i=1}^n x_i$, and we have
$\left\|\frac{x}{\|u\|} \,\middle|\, e\right\| = \sqrt{\frac{1}{n}\sum_{i=1}^n x_i^2 - \left(\frac{1}{n}\sum_{i=1}^n x_i\right)^2}.$
Therefore, in $(\mathbb{R}^n, \langle\cdot,\cdot\rangle)$, we define the variance of a vector $x$ by
$var(x) := \left\|\frac{x}{\|u\|} \,\middle|\, e\right\|^2 = \frac{1}{\|u\|^2}\big(\|x\|^2 - \|P_u(x)\|^2\big).$
The standard deviation $\sigma(x)$ of $x \in \mathbb{R}^n$ is defined by $\sigma(x) := \sqrt{var(x)}$, so we deduce that $\sigma(x) = \left\|\frac{x}{\|u\|} \,\middle|\, e\right\| = \frac{1}{\|u\|}\|x - P_u(x)\|$. Since, using the standard 2-inner product, we have
$\left(\frac{x}{\|u\|}, \frac{y}{\|u\|} \,\middle|\, e\right) = \frac{1}{n}\sum_{i=1}^n x_iy_i - \frac{1}{n}\sum_{i=1}^n x_i \cdot \frac{1}{n}\sum_{i=1}^n y_i,$
it is easy to define the covariance of two vectors $x$ and $y$ by
$cov(x,y) := \left(\frac{x}{\|u\|}, \frac{y}{\|u\|} \,\middle|\, e\right).$
The correlation coefficient $r(x,y)$ of two vectors $x$ and $y$ can be defined by:
$r(x,y) := \frac{cov(x,y)}{\sqrt{var(x)\,var(y)}} = \frac{\left(\frac{x}{\|u\|}, \frac{y}{\|u\|} \,\middle|\, e\right)}{\left\|\frac{x}{\|u\|} \,\middle|\, e\right\| \cdot \left\|\frac{y}{\|u\|} \,\middle|\, e\right\|}.$
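The point of these definitions is that the classical indicators fall out of the standard 2-inner product with $u = (1,\ldots,1)$ and $e = u/\|u\|$; a numerical sketch (assuming NumPy; ip2 is our hypothetical real-valued helper):

import numpy as np

def ip2(u, v, w):
    return (u @ v) * (w @ w) - (u @ w) * (w @ v)

x = np.array([2.0, 4.0, 4.0, 5.0, 7.0])
y = np.array([1.0, 3.0, 2.0, 6.0, 8.0])
n = len(x)
u = np.ones(n)
e = u / np.linalg.norm(u)
var_x = ip2(x / np.linalg.norm(u), x / np.linalg.norm(u), e)
var_y = ip2(y / np.linalg.norm(u), y / np.linalg.norm(u), e)
cov_xy = ip2(x / np.linalg.norm(u), y / np.linalg.norm(u), e)
assert np.isclose(var_x, x.var())                    # classical population variance
assert np.isclose(cov_xy, ((x - x.mean()) * (y - y.mean())).mean())
r = cov_xy / np.sqrt(var_x * var_y)
assert np.isclose(r, np.corrcoef(x, y)[0, 1])        # classical Pearson coefficient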
Another definition of variance and covariance for vectors from R n can be made using projection. Vector projection is an important operation in the Gram-Schmidt orthonormalization of vector space bases.
The projection of a vector $x$ onto a vector $y$ is given by $P_y(x) = \frac{\langle x,y\rangle}{\langle y,y\rangle}y = \frac{\langle x,y\rangle}{\|y\|^2}y$.
If in $(\mathbb{R}^n, \langle\cdot,\cdot\rangle)$ we have the vector $u = (1,1,\ldots,1)$, then
$P_u(x) = \frac{\langle x,u\rangle}{\|u\|^2}u = \left(\frac{1}{n}\sum_{i=1}^n x_i, \ldots, \frac{1}{n}\sum_{i=1}^n x_i\right).$
We remark that the variance of a vector $x$ is given by $var(x) = \frac{1}{\|u\|^2}\|x - P_ux\|^2$ and the covariance of two vectors $x$ and $y$ is given by $cov(x,y) = \frac{1}{\|u\|^2}\langle x - P_u(x), y - P_u(y)\rangle$.
Next, we can write some equalities and inequalities related to the variance, covariance and standard deviation of vectors $x,y \in X$, using several results from Section 2. Therefore, from relations (8), (10), (11), (15), (18)–(20), (22) and (25), we obtain the following relations:
$var(ax - by) = a^2\,var(x) - 2ab\,cov(x,y) + b^2\,var(y),$
$cov(x,y) = \sigma(x)\sigma(y)\left(1 - \frac{1}{2}var\left(\frac{x}{\sigma(x)} - \frac{y}{\sigma(y)}\right)\right),$
$(b-a)\big(a\,var(x) - b\,var(y)\big) + var(ax - by) = ab\,var(x-y),$
$(b^2 - a^2)\big(var(x) - var(y)\big) - (a-b)^2\big(var(x) + var(y)\big) + 2\,var(ax - by) = 2ab\,var(x-y),$
$var\left(\frac{x}{\sigma(x)} - \frac{y}{\sigma(y)}\right) = \frac{var(x-y) - \big(\sigma(x) - \sigma(y)\big)^2}{\sigma(x)\sigma(y)},$
$\frac{\sigma(x-y) - |\sigma(x) - \sigma(y)|}{\min\{\sigma(x), \sigma(y)\}} \le \sigma\left(\frac{x}{\sigma(x)} - \frac{y}{\sigma(y)}\right) \le \frac{\sigma(x-y) + |\sigma(x) - \sigma(y)|}{\max\{\sigma(x), \sigma(y)\}},$
$(a-b)^2 + 2ab\left(2 - \sigma\left(\frac{x}{\sigma(x)} + \frac{y}{\sigma(y)}\right)\right) \le var\left(a\frac{x}{\sigma(x)} - b\frac{y}{\sigma(y)}\right) \le (a-b)^2 + 4ab\left(2 - \sigma\left(\frac{x}{\sigma(x)} + \frac{y}{\sigma(y)}\right)\right),$
$\left(2 - \sigma\left(\frac{x}{\sigma(x)} + \frac{y}{\sigma(y)}\right)\right)\sigma(x)\sigma(y) \le \sigma(x)\sigma(y) - cov(x,y) \le 2\left(2 - \sigma\left(\frac{x}{\sigma(x)} + \frac{y}{\sigma(y)}\right)\right)\sigma(x)\sigma(y),$
for all $x,y \in X$ with $\sigma(x), \sigma(y) \neq 0$ and $a,b \in \mathbb{R}$ (with $ab > 0$ in the relation obtained from (20)), and
$\min\{a,b\}\big(\sigma(x) + \sigma(y) - \sigma(x+y)\big) \le a\sigma(x) + b\sigma(y) - \sigma(ax+by) \le \max\{a,b\}\big(\sigma(x) + \sigma(y) - \sigma(x+y)\big),$
for all $x,y \in X$ and $a,b \in \mathbb{R}_+$.
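The first and third of the relations above, (38) and (40), can be spot-checked numerically (a sketch, assuming NumPy; population variances, i.e., division by n):

import numpy as np

rng = np.random.default_rng(8)
x, y = rng.standard_normal((2, 12))
a, b = 1.7, -0.6
var = lambda v: v.var()                                   # population variance
cov = ((x - x.mean()) * (y - y.mean())).mean()
# (38): var(ax - by) = a^2 var(x) - 2ab cov(x,y) + b^2 var(y)
assert np.isclose(var(a * x - b * y),
                  a ** 2 * var(x) - 2 * a * b * cov + b ** 2 * var(y))
# (40): (b - a)(a var(x) - b var(y)) + var(ax - by) = ab var(x - y)
assert np.isclose((b - a) * (a * var(x) - b * var(y)) + var(a * x - b * y),
                  a * b * var(x - y))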
If we take the vector space $(C^0[a,b], \langle\cdot,\cdot\rangle)$, then for $f,g,h \in C^0[a,b]$ we have
$\langle f,g\rangle = \int_a^b f(x)g(x)\,dx, \quad \|f\| = \sqrt{\int_a^b f^2(x)\,dx},$
$(f,g \mid h) = \frac{1}{b-a}\int_a^b f(x)g(x)\,dx \cdot \frac{1}{b-a}\int_a^b h^2(x)\,dx - \frac{1}{b-a}\int_a^b f(x)h(x)\,dx \cdot \frac{1}{b-a}\int_a^b g(x)h(x)\,dx$
and
$\|f \mid h\| = \sqrt{(f,f \mid h)} = \sqrt{\frac{1}{b-a}\int_a^b f^2(x)\,dx \cdot \frac{1}{b-a}\int_a^b h^2(x)\,dx - \left(\frac{1}{b-a}\int_a^b f(x)h(x)\,dx\right)^2}.$
If $h = 1$ and $e = \frac{h}{\|h\|}$, then $e = \frac{1}{\sqrt{b-a}}$ and
$\left\|\frac{f}{\|h\|} \,\middle|\, e\right\| = \sqrt{\frac{1}{b-a}\int_a^b f^2(x)\,dx - \left(\frac{1}{b-a}\int_a^b f(x)\,dx\right)^2}.$
Therefore, in $(C^0[a,b], \langle\cdot,\cdot\rangle)$, we define the variance of a function $f$ by
$var(f) := \left\|\frac{f}{\|h\|} \,\middle|\, e\right\|^2,$
the standard deviation $\sigma(f)$ of $f \in C^0[a,b]$ by $\sigma(f) = \sqrt{var(f)} = \left\|\frac{f}{\|h\|} \,\middle|\, e\right\|$, and the covariance of two functions $f$ and $g$ by $cov(f,g) := \left(\frac{f}{\|h\|}, \frac{g}{\|h\|} \,\middle|\, e\right)$.
The definitions of the variance of a function $f$ and of the covariance of two functions $f$ and $g$ in terms of the projection are given below.
The projection of a vector $f$ onto a vector $g$ is given by $P_g f = \frac{\langle f,g\rangle}{\|g\|^2}g$. If in $(C^0[a,b], \|\cdot\|)$ we take $h(x) = 1$, we have
$P_h f = \frac{\langle f,h\rangle}{\|h\|^2}h = \frac{1}{b-a}\int_a^b f(x)\,dx.$
Thus, in $(C^0[a,b], \|\cdot\|)$, we define the variance of a function $f$ by $var(f) = \frac{1}{\|h\|^2}\|f - P_hf\|^2$ and the covariance of $f$ and $g$ by $cov(f,g) = \frac{1}{\|h\|^2}\langle f - P_hf, g - P_hg\rangle$.
Relations (38)–(46) can be written in terms of the elements of $C^0[a,b]$. We mention two of them:
$\left(2 - \sigma\left(\frac{f}{\sigma(f)} + \frac{g}{\sigma(g)}\right)\right)\sigma(f)\sigma(g) \le \sigma(f)\sigma(g) - cov(f,g) \le 2\left(2 - \sigma\left(\frac{f}{\sigma(f)} + \frac{g}{\sigma(g)}\right)\right)\sigma(f)\sigma(g),$
for $f,g \neq 0$, and
$\min\{a,b\}\big(\sigma(f) + \sigma(g) - \sigma(f+g)\big) \le a\sigma(f) + b\sigma(g) - \sigma(af+bg) \le \max\{a,b\}\big(\sigma(f) + \sigma(g) - \sigma(f+g)\big),$
for all $f,g \in C^0[a,b]$ and $a,b \in \mathbb{R}_+$.
Let $e, x, w$ be vectors in an inner product space $X$ over the field of real numbers, with $\|e\| = 1$ and the vectors $\{e,x\}$ linearly independent, such that
$ax + be = w,$
where $a,b \in \mathbb{R}$. Using the inner product and its properties, we deduce that $a\langle x,x\rangle + b\langle e,x\rangle = \langle w,x\rangle$ and $a\langle x,e\rangle + b = \langle w,e\rangle$. Therefore, we have to solve this system of two equations with two unknowns $a,b \in \mathbb{R}$. We also have the 2-inner product $(x,y \mid e) = \langle x,y\rangle - \langle x,e\rangle\langle e,y\rangle$, for all $x,y,e \in X$ with $\|e\| = 1$.
If $A$ is the matrix of the system, then $\det A = (x,x \mid e) = \|x \mid e\|^2$. Because the vectors $\{e,x\}$ are linearly independent, we have $\det A = (x,x \mid e) > 0$. Using Cramer's rule to solve the system, we find that $a = \frac{(w,x \mid e)}{(x,x \mid e)} = \frac{(w,x \mid e)}{\|x \mid e\|^2}$ and $b = \frac{(w,e \mid x)}{(x,x \mid e)} = \frac{(w,e \mid x)}{\|x \mid e\|^2}$. Let $x, u, w$ be vectors in an inner product space $X$ over the field of real numbers, with $u \neq 0$ and the vectors $\{x,u\}$ linearly independent, such that
$ax + bu = w,$
where $a,b \in \mathbb{R}$. Dividing by $\|u\| \neq 0$, we deduce the relation $a\frac{x}{\|u\|} + be = \frac{w}{\|u\|}$, where $e = \frac{u}{\|u\|}$, so $\|e\| = 1$. Therefore, we obtain $a = \frac{\left(\frac{w}{\|u\|}, \frac{x}{\|u\|} \,\middle|\, e\right)}{\left\|\frac{x}{\|u\|} \,\middle|\, e\right\|^2}$ and $b = \left\langle\frac{w}{\|u\|}, e\right\rangle - a\left\langle\frac{x}{\|u\|}, e\right\rangle$. If $X = \mathbb{R}^n$, then $a = \frac{cov(w,x)}{var(x)}$ and $b = \mu_w - a\mu_x$.
In statistics, linear regression is a linear approach to modelling the relationship between a dependent variable and one or more independent variables. The case of one independent variable is called simple linear regression.
We consider two random variables $V = \left(x_i, \frac{1}{n}\right)_{1 \le i \le n}$ and $W = \left(y_i, \frac{1}{n}\right)_{1 \le i \le n}$, with probabilities $P(V = x_i) = \frac{1}{n}$ and $P(W = y_i) = \frac{1}{n}$, for any $i = \overline{1,n}$.
A linear regression model assumes that the relationship between the dependent variable $W$ and the independent variable $V$ is linear. Thus, the general linear model for one independent variable may be written as $W = aV + b$. We can describe the underlying relationship between $y_i$ and $x_i$ involving the error term $\epsilon_i$ by $\epsilon_i = y_i - ax_i - b$.
If we set $S(a,b) = \sum_{i=1}^n \epsilon_i^2 = \sum_{i=1}^n (y_i - ax_i - b)^2$, then we seek $\min_{a,b \in \mathbb{R}} S(a,b)$. Setting the partial derivatives of $S$ with respect to $a$ and $b$ equal to zero (the least squares method), we obtain $a\sum_{i=1}^n x_i + nb = \sum_{i=1}^n y_i$ and $a\sum_{i=1}^n x_i^2 + b\sum_{i=1}^n x_i = \sum_{i=1}^n x_iy_i$. By simple calculations, we deduce $a = \frac{Cov(V,W)}{Var(V)}$ and $b = E[W] - aE[V]$, so we obtain the same coefficients as above.
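A numerical sketch of this least-squares computation (assuming NumPy; the data values are made up for illustration), confirming that the normal equations give the coefficients above:

import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 2.9, 4.2, 4.8, 6.1])
n = len(x)
# Normal equations: a*sum(x) + n*b = sum(y);  a*sum(x^2) + b*sum(x) = sum(x*y)
M = np.array([[x.sum(), n], [(x ** 2).sum(), x.sum()]])
rhs = np.array([y.sum(), (x * y).sum()])
a, b = np.linalg.solve(M, rhs)
cov = ((x - x.mean()) * (y - y.mean())).mean()
assert np.isclose(a, cov / x.var())              # a = Cov(V, W) / Var(V)
assert np.isclose(b, y.mean() - a * x.mean())    # b = E[W] - a E[V]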

Funding

This research received no external funding.

Acknowledgments

The author would like to thank the reviewers for their constructive comments and suggestions which led to a substantial improvement of this article.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Gähler, S. Lineare 2-normierte Räume. Math. Nachr. 1965, 28, 1–43.
  2. Diminnie, C.; Gähler, S.; White, A. 2-inner Product Spaces. Demonstr. Math. 1973, 6, 525–536.
  3. Diminnie, C.; Gähler, S.; White, A. 2-inner product spaces II. Demonstr. Math. 1977, 10, 169–188.
  4. Cho, Y.J.; Lin, P.C.S.; Kim, S.S.; Misiak, A. Theory of 2-inner Product Spaces; Nova Science Publishers, Inc.: New York, NY, USA, 2001; 330p.
  5. Dragomir, S.S.; Cho, Y.J.; Kim, S.S.; Sofo, A. Some Boas-Bellman type inequalities in 2-inner product spaces. J. Inequal. Pure Appl. Math. 2005, 6, 1–13.
  6. Dragomir, S.S.; Cho, Y.J.; Kim, S.S. Superadditivity and monotonicity of 2-norms generated by inner products and related results. Soochow J. Math. 1998, 24, 13–32.
  7. Cho, Y.J.; Matić, M.; Pečarić, J. On Gram's determinant in 2-inner product spaces. J. Korean Math. Soc. 2001, 38, 1125–1156.
  8. Dragomir, S.S.; Sándor, J. Some inequalities in pre-Hilbertian spaces. Stud. Univ. Babes-Bolyai Math. 1987, 32, 71–78.
  9. Dragomir, S.S. Improving Schwarz Inequality in Inner Product Spaces. Linear Multilinear Algebra 2019, 67, 337–347.
  10. Mitrinović, D.S.; Pečarić, J.; Fink, A.M. Classical and New Inequalities in Analysis; Kluwer Academic: Dordrecht, The Netherlands, 1992; 740p.
  11. Niculescu, C.P.; Persson, L.E. Convex Functions and Their Applications: A Contemporary Approach, 2nd ed.; Springer: Berlin/Heidelberg, Germany, 2018; 256p.
  12. Minculete, N. Some Refinements of Ostrowski's Inequality and an Extension to a 2-Inner Product Space. Symmetry 2019, 11, 707.
  13. Maligranda, L. Some remarks on the triangle inequality for norms. Banach J. Math. Anal. 2008, 2, 31–41.
  14. Niezgoda, M. On the Chebyshev functional. Math. Inequal. Appl. 2007, 10, 535–546.
  15. Evans, J.R. Statistics, Data Analysis and Decision Modeling; Pearson Prentice Hall: Upper Saddle River, NJ, USA, 2007.
  16. Bhatia, R.; Davis, C. A better bound on the variance. Am. Math. Mon. 2000, 107, 353–357.
  17. Furuichi, S. A note on a parametrically extended entanglement-measure due to Tsallis relative entropy. Information 2006, 9, 837–844.
  18. Minculete, N.; Ciurdariu, L. A generalized form of Grüss type inequality and other integral inequalities. J. Inequal. Appl. 2014, 2014, 119.
  19. Elezović, N.; Marangunić, L.; Pečarić, J. Some improvements of Grüss type inequality. J. Math. Inequal. 2007, 1, 425–436.
  20. Florea, A.; Niculescu, C.P. A note on Ostrowski's inequality. J. Inequal. Appl. 2005, 2005, 459–468.
  21. Aldaz, J.M. A refinement of the inequality between arithmetic and geometric means. J. Math. Inequal. 2008, 2, 473–477.
  22. Kechriniotis, A.; Delibasis, K. On generalizations of Grüss inequality in inner product spaces and applications. J. Inequal. Appl. 2010, 2010, 167091.
  23. Pečarić, J. On the Ostrowski Generalization of Čebyšev's Inequality. J. Math. Anal. Appl. 1984, 102, 479–487.
