1. Introduction
In Reference [1], Gähler introduced the definitions of a linear 2-normed space and a 2-metric space. In References [2,3], Diminnie, Gähler and White studied the properties of a 2-inner product space. Several results related to the theory of 2-inner product spaces can be found in Reference [4]. In Reference [5], Dragomir et al. proved the corresponding version of the Boas-Bellman inequality in 2-inner product spaces, and in Reference [6] the superadditivity and monotonicity of 2-norms generated by inner products were studied.
We consider $X$ a linear space of dimension greater than 1 over the field $\mathbb{K}$, where $\mathbb{K}$ is the set of the real or the complex numbers. Suppose that $(\cdot,\cdot|\cdot)$ is a $\mathbb{K}$-valued function defined on $X \times X \times X$ satisfying the following conditions:
- (a) $(u,u|w) \geq 0$ and $(u,u|w) = 0$ if and only if $u$ and $w$ are linearly dependent;
- (b) $(u,u|w) = (w,w|u)$;
- (c) $(u,v|w) = \overline{(v,u|w)}$;
- (d) $(\alpha u,v|w) = \alpha\,(u,v|w)$, for any scalar $\alpha \in \mathbb{K}$;
- (e) $(u + u',v|w) = (u,v|w) + (u',v|w)$.
Function $(\cdot,\cdot|\cdot)$ is called a 2-inner product on $X$ and $(X,(\cdot,\cdot|\cdot))$ is called a 2-inner product space (or 2-pre-Hilbert space).
A series of consequences of these requirements can be deduced (see e.g., References [2,4,7]): $(u, \alpha v|w) = \overline{\alpha}\,(u,v|w)$ for all $u,v,w \in X$ and $\alpha \in \mathbb{K}$, and $(u,v|\alpha w) = |\alpha|^{2}(u,v|w)$ for all $u,v,w \in X$ and $\alpha \in \mathbb{K}$.
The standard 2-inner product $(\cdot,\cdot|\cdot)$ is defined on the inner product space $(X, \langle\cdot,\cdot\rangle)$ by:
$$(u,v|w) = \langle u,v\rangle \langle w,w\rangle - \langle u,w\rangle \langle w,v\rangle,$$
for all $u,v,w \in X$.
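To make the definition concrete, here is a minimal Python sketch (our own illustration, not part of the original paper) that evaluates the standard 2-inner product and the induced 2-norm on $\mathbb{R}^{n}$:

```python
import numpy as np

def two_inner(u, v, w):
    """Standard 2-inner product (u, v | w) = <u,v><w,w> - <u,w><w,v>."""
    return np.dot(u, v) * np.dot(w, w) - np.dot(u, w) * np.dot(w, v)

def two_norm(u, w):
    """2-norm ||u, w|| = sqrt((u, u | w)) generated by the 2-inner product."""
    return np.sqrt(two_inner(u, u, w))

u = np.array([1.0, 2.0, 3.0])
v = np.array([0.0, 1.0, 1.0])
w = np.array([1.0, 0.0, 2.0])
print(two_inner(u, v, w))   # a real number here, since the field is R
print(two_norm(u, w))       # equals 0 iff u and w are linearly dependent
```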
Let $(X,(\cdot,\cdot|\cdot))$ be a 2-inner product space. We can define a function $\|\cdot,\cdot\|$ on $X \times X$ by $\|u,w\| = \sqrt{(u,u|w)}$ for all $u,w \in X$. This function satisfies the following conditions:
- (a) $\|u,w\| \geq 0$ and $\|u,w\| = 0$ if and only if $u$ and $w$ are linearly dependent;
- (b) $\|u,w\| = \|w,u\|$;
- (c) $\|\alpha u,w\| = |\alpha|\,\|u,w\|$, for any scalar $\alpha \in \mathbb{K}$;
- (d) $\|u + v,w\| \leq \|u,w\| + \|v,w\|$, for all $u,v,w \in X$.
A function $\|\cdot,\cdot\|$ defined on $X \times X$ and satisfying the above conditions is called a 2-norm on $X$, and $(X,\|\cdot,\cdot\|)$ is called a linear 2-normed space.
It is easy to see that if $(X,(\cdot,\cdot|\cdot))$ is a 2-inner product space over the field of real numbers $\mathbb{R}$ or the field of complex numbers $\mathbb{C}$, then $(X,\|\cdot,\cdot\|)$ is a linear 2-normed space and the 2-norm $\|u,w\| = \sqrt{(u,u|w)}$ is generated by the 2-inner product $(\cdot,\cdot|\cdot)$.
Two consequences of the above properties are given by the following: the parallelogram law [4],
$$\|u+v,w\|^{2} + \|u-v,w\|^{2} = 2\|u,w\|^{2} + 2\|v,w\|^{2},$$
for all $u,v,w \in X$, and the Cauchy-Schwarz inequality (see e.g., References [4,7]),
$$|(u,v|w)|^{2} \leq \|u,w\|^{2}\,\|v,w\|^{2}, \tag{3}$$
for all $u,v,w \in X$. The equality in (3) holds if and only if $u$, $v$ and $w$ are linearly dependent.
If $(X, \langle\cdot,\cdot\rangle)$ is an inner product space, inequality (3) becomes ([6,8]):
$$\left|\langle u,v\rangle\,\|w\|^{2} - \langle u,w\rangle\langle w,v\rangle\right|^{2} \leq \left(\|u\|^{2}\|w\|^{2} - |\langle u,w\rangle|^{2}\right)\left(\|v\|^{2}\|w\|^{2} - |\langle v,w\rangle|^{2}\right).$$
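As a quick numerical sanity check (our illustration, not from the paper), inequality (3) can be tested on random real vectors:

```python
import numpy as np

def two_inner(u, v, w):
    # (u, v | w) = <u,v><w,w> - <u,w><w,v>
    return (u @ v) * (w @ w) - (u @ w) * (w @ v)

rng = np.random.default_rng(0)
for _ in range(1000):
    u, v, w = rng.standard_normal((3, 5))
    lhs = two_inner(u, v, w) ** 2                   # |(u, v | w)|^2
    rhs = two_inner(u, u, w) * two_inner(v, v, w)   # ||u,w||^2 ||v,w||^2
    assert lhs <= rhs + 1e-9 * max(1.0, rhs)
print("inequality (3) holds on all random samples")
```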
A reverse of the Cauchy-Schwarz inequality in 2-inner product spaces can be found in Reference [5]: if $u, v, w \in X$ and the scalars $a, A \in \mathbb{K}$ are such that
$$\operatorname{Re}\,(Av - u,\, u - av \,|\, w) \geq 0,$$
or equivalently
$$\left\| u - \frac{a+A}{2}\,v,\, w \right\| \leq \frac{1}{2}\,|A - a|\,\|v,w\|,$$
hold, then
$$0 \leq \|u,w\|^{2}\,\|v,w\|^{2} - |(u,v|w)|^{2} \leq \frac{1}{4}\,|A - a|^{2}\,\|v,w\|^{4}.$$
Constant $\frac{1}{4}$ is the best possible.
Another important inequality in a 2-inner product space $X$ is the triangle inequality [4],
$$\|u + v, w\| \leq \|u,w\| + \|v,w\|,$$
for all $u,v,w \in X$.
The Cauchy-Schwarz inequality in the real case, $\langle u,v\rangle^{2} \leq \|u\|^{2}\,\|v\|^{2}$ (see e.g., References [9,10]), can be obtained by the following identity, as in Reference [11],
$$\|u\|^{2}\,\|v\|^{2} - \langle u,v\rangle^{2} = \|u\|^{2}\left\| v - \frac{\langle u,v\rangle}{\|u\|^{2}}\,u \right\|^{2},$$
for all $u,v \in X$ with $u \neq 0$. An inequality which is an improvement of the Cauchy-Schwarz inequality is the Ostrowski inequality. In Reference [12], we find some refinements of Ostrowski's inequality and an extension to a 2-inner product space.
The purpose of this paper is to study some identities in a 2-pre-Hilbert space and to prove new results related to several inequalities in a 2-pre-Hilbert space, among them the Cauchy-Schwarz inequality. The novelty of this article is the introduction, for the first time, of the concepts of average, variance, covariance, standard deviation and correlation coefficient for vectors, using the standard 2-inner product and some of its properties. We also present a brief characterization of a linear regression model for random variables in the discrete case.
3. Applications of the Standard 2-Inner Product
If $(X, \langle\cdot,\cdot\rangle)$ is an inner product space, then the standard 2-inner product $(\cdot,\cdot|\cdot)$ is defined on $X$ by:
$$(x,y|z) = \langle x,y\rangle \langle z,z\rangle - \langle x,z\rangle \langle z,y\rangle,$$
for all $x,y,z \in X$.
Thus, $(X,\|\cdot,\cdot\|)$ becomes a linear 2-normed space, with the 2-norm given by the following:
$$\|x,z\|^{2} = \|x\|^{2}\,\|z\|^{2} - |\langle x,z\rangle|^{2},$$
for all $x,z \in X$.
(a) We consider the vector space $\mathbb{R}^{n}$ with the inner product $\langle x,y\rangle = \sum_{i=1}^{n} x_{i}y_{i}$. For $x = (x_{1},\ldots,x_{n})$, $y = (y_{1},\ldots,y_{n})$, $z = (z_{1},\ldots,z_{n})$, we have
$$\langle x,y\rangle = \sum_{i=1}^{n} x_{i}y_{i}, \qquad \|x,z\|^{2} = \sum_{i=1}^{n} x_{i}^{2} \sum_{i=1}^{n} z_{i}^{2} - \left(\sum_{i=1}^{n} x_{i}z_{i}\right)^{2},$$
and
$$(x,y|z) = \sum_{i=1}^{n} x_{i}y_{i} \sum_{i=1}^{n} z_{i}^{2} - \sum_{i=1}^{n} x_{i}z_{i} \sum_{i=1}^{n} y_{i}z_{i}.$$
If we apply inequality (22) for the real vector space $(\mathbb{R}^{n}, \langle\cdot,\cdot\rangle)$, then we obtain the corresponding refinement of the Cauchy-Schwarz inequality in the discrete version.
(b) In the vector space $C([a,b])$ of real-valued continuous functions, with the inner product $\langle f,g\rangle = \int_{a}^{b} f(t)g(t)\,dt$, for $f,g,h \in C([a,b])$ we have
$$\|f\|^{2} = \int_{a}^{b} f^{2}(t)\,dt$$
and
$$(f,g|h) = \int_{a}^{b} f(t)g(t)\,dt \int_{a}^{b} h^{2}(t)\,dt - \int_{a}^{b} f(t)h(t)\,dt \int_{a}^{b} g(t)h(t)\,dt.$$
Now, applying inequality (22) for the real vector space $(C([a,b]), \langle\cdot,\cdot\rangle)$, we obtain the corresponding refinement of the Cauchy-Schwarz inequality in the integral version.
These inequalities are improvements of the Cauchy-Schwarz inequality in the discrete version and in the integral version.
(c) Let $X$ be a real linear space with the inner product $\langle\cdot,\cdot\rangle$. The Chebyshev functional [14] is defined by
$$T_{e}(x,y) = \langle x,y\rangle \langle e,e\rangle - \langle x,e\rangle \langle e,y\rangle,$$
for all $x,y \in X$, where $e \in X$ is a given nonzero vector.
It is easy to see that we have $T_{e}(x,y) = (x,y|e)$ and $T_{e}(x,x) = \|x,e\|^{2} \geq 0$ for all $x,y \in X$.
If we replace $x$ and $y$ by $x - \frac{\langle x,e\rangle}{\|e\|^{2}}\,e$ and $y - \frac{\langle y,e\rangle}{\|e\|^{2}}\,e$, respectively, in the Cauchy-Schwarz inequality, then we find the Cauchy-Schwarz inequality in terms of the Chebyshev functional, given by:
$$T_{e}(x,y)^{2} \leq T_{e}(x,x)\,T_{e}(y,y).$$
Let $X$ be a real linear space with the inner product $\langle\cdot,\cdot\rangle$. Equality (17) can be written in terms of the Chebyshev functional by
$$T_{e}(x,x)\,T_{e}(y,y) - T_{e}(x,y)^{2} = T_{e}(x,x)\,T_{e}(u,u),$$
for all vectors $x, y$ in $X$, where $e \in X$ is a given nonzero vector, $T_{e}(x,x) \neq 0$ and $u = y - \frac{T_{e}(x,y)}{T_{e}(x,x)}\,x$. Since $T_{e}(u,u) \geq 0$, we obtain again the Cauchy-Schwarz inequality in terms of the Chebyshev functional for all vectors $x, y$ in $X$, where $T_{e}(x,x) = \|x,e\|^{2}$.
(d) For every subspace $U \subseteq X$, we have the decomposition $X = U \oplus U^{\perp}$. Every $x \in X$ can be uniquely written as $x = u + v$, where $u \in U$ and $v \in U^{\perp}$. We define the orthogonal projection $P_{U} : X \to X$ by $P_{U}(x) = u$. It is easy to see that $x - P_{U}(x) \in U^{\perp}$, for every $x \in X$, so we have $\langle P_{U}(x),\, x - P_{U}(x)\rangle = 0$, which implies the equality $\|x\|^{2} = \|P_{U}(x)\|^{2} + \|x - P_{U}(x)\|^{2}$, where the norm is generated by the inner product $\langle\cdot,\cdot\rangle$.
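A short numerical illustration of this decomposition (our own sketch; the subspace $U$ and the dimensions are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)

# Orthonormal basis of a random 2-dimensional subspace U of R^5.
B, _ = np.linalg.qr(rng.standard_normal((5, 2)))

def P_U(x):
    """Orthogonal projection of x onto U = span(columns of B)."""
    return B @ (B.T @ x)

x = rng.standard_normal(5)
u, v = P_U(x), x - P_U(x)                    # x = u + v, u in U, v in U-perp
assert abs(np.dot(u, v)) < 1e-12             # <P_U(x), x - P_U(x)> = 0
assert abs(x @ x - (u @ u + v @ v)) < 1e-12  # Pythagorean identity
```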
From relation (17), if $z \neq 0$, then we have the corresponding equality for vectors $x$ and $y$ in $X$ and $U = \operatorname{span}\{z\}$.
For a subspace $U = \operatorname{span}\{z\}$, $z \neq 0$, of an inner product space $X$, where $(\cdot,\cdot|\cdot)$ is the standard 2-inner product on $X$, we deduce the identity:
$$(x,y|z) = \|z\|^{2}\,\langle x_{2}, y_{2}\rangle,$$
where we have the decompositions $x = x_{1} + x_{2}$, $y = y_{1} + y_{2}$, with $x_{1}, y_{1} \in U$ and $x_{2}, y_{2} \in U^{\perp}$. Using the equality from (17) and the above identity, we proved the following equality:
$$\|x,z\|^{2}\,\|y,z\|^{2} - (x,y|z)^{2} = \|z\|^{4}\left(\|x_{2}\|^{2}\,\|y_{2}\|^{2} - \langle x_{2}, y_{2}\rangle^{2}\right),$$
for $x, y \in X$ and $z \neq 0$.
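The identity $(x,y|z) = \|z\|^{2}\langle x_{2}, y_{2}\rangle$ is easy to verify numerically; the following sketch (ours) projects onto $U = \operatorname{span}\{z\}$ and compares both sides:

```python
import numpy as np

def two_inner(u, v, w):
    return (u @ v) * (w @ w) - (u @ w) * (w @ v)

rng = np.random.default_rng(2)
x, y, z = rng.standard_normal((3, 6))

P = lambda t: (t @ z) / (z @ z) * z           # projection onto U = span{z}
lhs = two_inner(x, y, z)                      # (x, y | z)
rhs = (z @ z) * np.dot(x - P(x), y - P(y))    # ||z||^2 <x_2, y_2>
assert abs(lhs - rhs) < 1e-9
```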
4. Applications of the Standard 2-Inner Product to Certain Statistical Indicators
A variety of ways to present data, probability, and statistical estimation are mainly characterized by the following statistical indicators: the mean (average), the variance and the standard deviation, as well as the covariance and the Pearson correlation coefficient [15].
Taking the mean as the center of a random variable’s probability distribution, the variance is a measure of how much the probability mass is spread out around this center.
If $V$ is a random variable with mean $E(V)$, then the formal definition of the variance is the following: $Var(V) = E\big[(V - E(V))^{2}\big]$. The expression for the variance can thus be expanded: $Var(V) = E(V^{2}) - [E(V)]^{2}$. The standard deviation of $V$ is defined by $\sigma(V) = \sqrt{Var(V)}$.
The covariance is a measure of how much two random variables $V$ and $W$ change together at the same time and is defined as
$$Cov(V,W) = E\big[(V - E(V))(W - E(W))\big],$$
and is equivalent to the form
$$Cov(V,W) = E(VW) - E(V)\,E(W).$$
We find the inequality of Cauchy-Schwarz for discrete random variables given by
$$Cov^{2}(V,W) \leq Var(V)\,Var(W).$$
The correlation between sets of data is a measure of how well they are related. A correlation coefficient is a numerical measure of some type of correlation, meaning a statistical relationship between two variables.
The Pearson correlation coefficient is a measure of the strength and direction of the linear relationship between two variables $V$ and $W$ that is defined as the covariance of the variables divided by the product of their standard deviations:
$$\rho(V,W) = \frac{Cov(V,W)}{\sigma(V)\,\sigma(W)}.$$
Using the inequality of Cauchy-Schwarz, we deduce that $|\rho(V,W)| \leq 1$. The variance of a discrete random variable $V$ taking the values $v_{1}, \ldots, v_{n}$ with probabilities $p_{1}, \ldots, p_{n}$, where $p_{i} \geq 0$ for any $i \in \{1,\ldots,n\}$ and $\sum_{i=1}^{n} p_{i} = 1$, is its second central moment, the expected value of the squared deviation from the mean $E(V) = \sum_{i=1}^{n} p_{i}v_{i}$, thus:
$$Var(V) = \sum_{i=1}^{n} p_{i}\big(v_{i} - E(V)\big)^{2}.$$
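For a discrete random variable given by values and probabilities, all of these indicators are one-liners; the sketch below (our illustration, with made-up data) also confirms $Cov^{2}(V,W) \leq Var(V)\,Var(W)$ and $|\rho(V,W)| \leq 1$:

```python
import numpy as np

v = np.array([1.0, 2.0, 5.0, 7.0])     # values of V
w = np.array([0.5, 1.5, 4.0, 9.0])     # values of W on the same sample space
p = np.array([0.1, 0.2, 0.3, 0.4])     # probabilities, summing to 1

E = lambda t: float(np.sum(p * t))     # expectation E(T)
var = lambda t: E(t**2) - E(t)**2      # Var(T) = E(T^2) - E(T)^2
cov = E(v * w) - E(v) * E(w)           # Cov(V, W) = E(VW) - E(V)E(W)
rho = cov / np.sqrt(var(v) * var(w))   # Pearson correlation coefficient

assert cov**2 <= var(v) * var(w) + 1e-12   # Cauchy-Schwarz for random variables
assert abs(rho) <= 1 + 1e-12
print(E(v), var(v), cov, rho)
```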
Let $a_{1}, \ldots, a_{n}$ be real numbers, assume $m \leq a_{i} \leq M$ for all $i \in \{1,\ldots,n\}$ and consider the average $\bar{a} = \frac{1}{n}\sum_{i=1}^{n} a_{i}$. In 1935, Popoviciu (see e.g., References [16,17]) proved the following inequality:
$$\frac{1}{n}\sum_{i=1}^{n} a_{i}^{2} - \left(\frac{1}{n}\sum_{i=1}^{n} a_{i}\right)^{2} \leq \frac{1}{4}(M - m)^{2}.$$
The discrete version of the Grüss inequality has the following form (see e.g., References [18,19]):
$$\left|\frac{1}{n}\sum_{i=1}^{n} a_{i}b_{i} - \frac{1}{n}\sum_{i=1}^{n} a_{i} \cdot \frac{1}{n}\sum_{i=1}^{n} b_{i}\right| \leq \frac{1}{4}(M_{1} - m_{1})(M_{2} - m_{2}),$$
where $a_{i}, b_{i}$ are real numbers so that $m_{1} \leq a_{i} \leq M_{1}$ and $m_{2} \leq b_{i} \leq M_{2}$ for all $i \in \{1,\ldots,n\}$.
From the relation above, using the inequality of Cauchy-Schwarz for discrete random variables, $Cov^{2}(V,W) \leq Var(V)\,Var(W)$, and inequality (32), we obtain a proof of Grüss's inequality.
Bhatia and Davis show in Reference [16] the following inequality:
$$Var(V) \leq \big(M - E(V)\big)\big(E(V) - m\big),$$
where $m \leq V \leq M$. The inequality of Bhatia and Davis represents an improvement of Popoviciu's inequality, because $\big(M - E(V)\big)\big(E(V) - m\big) \leq \frac{1}{4}(M - m)^{2}$. Therefore, we will first have an improvement of Grüss's inequality given by the following relation:
$$|Cov(V,W)| \leq \sqrt{\big(M_{1} - E(V)\big)\big(E(V) - m_{1}\big)\big(M_{2} - E(W)\big)\big(E(W) - m_{2}\big)}.$$
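The chain variance $\leq$ Bhatia-Davis bound $\leq$ Popoviciu bound, and the resulting improved Grüss-type estimate, can be checked numerically; a sketch (ours, with uniform weights $p_{i} = 1/n$ and made-up bounds):

```python
import numpy as np

rng = np.random.default_rng(3)
m1, M1, m2, M2 = 2.0, 5.0, -1.0, 3.0
a = rng.uniform(m1, M1, size=100)
b = rng.uniform(m2, M2, size=100)

var = lambda t: float(np.mean(t**2) - np.mean(t)**2)
cov = float(np.mean(a * b) - np.mean(a) * np.mean(b))

bd_a = (M1 - a.mean()) * (a.mean() - m1)   # Bhatia-Davis bound for Var(a)
bd_b = (M2 - b.mean()) * (b.mean() - m2)
assert var(a) <= bd_a <= (M1 - m1)**2 / 4  # Bhatia-Davis improves Popoviciu
# Improved Gruss-type bound via Bhatia-Davis:
assert abs(cov) <= np.sqrt(bd_a * bd_b) <= (M1 - m1) * (M2 - m2) / 4
```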
In Reference [18], we find some research on refining the Grüss inequality.
The Pearson correlation coefficient is given by
$$\rho(V,W) = \frac{Cov(V,W)}{\sqrt{Var(V)\,Var(W)}}.$$
Florea and Niculescu in Reference [20] treated the problem of estimating the deviation of the values of a function from its mean value; this estimation is characterized below.
We denote by $R[a,b]$ the space of Riemann-integrable functions on the interval $[a,b]$, and by $C[a,b]$ the space of real-valued continuous functions on the interval $[a,b]$.
The integral arithmetic mean for a Riemann-integrable function $f : [a,b] \to \mathbb{R}$ is the number
$$\bar{f} = \frac{1}{b-a}\int_{a}^{b} f(x)\,dx.$$
If $f$ and $h$ are two integrable functions on $[a,b]$ and $\int_{a}^{b} h(x)\,dx \neq 0$, then a generalization of the integral arithmetic mean is the number
$$\bar{f}_{h} = \frac{\int_{a}^{b} f(x)h(x)\,dx}{\int_{a}^{b} h(x)\,dx},$$
called the $h$-integral arithmetic mean for a Riemann-integrable function $f$. If function $f$ is a Riemann-integrable function, we denote by
$$Var(f) = \frac{1}{b-a}\int_{a}^{b} \big(f(x) - \bar{f}\big)^{2}\,dx$$
the variance of $f$. The expression for the variance of $f$ can be expanded in this way:
$$Var(f) = \frac{1}{b-a}\int_{a}^{b} f^{2}(x)\,dx - \left(\frac{1}{b-a}\int_{a}^{b} f(x)\,dx\right)^{2}.$$
In the same way, we define the $h$-variance of a Riemann-integrable function $f$ by
$$Var_{h}(f) = \frac{\int_{a}^{b} h(x)\big(f(x) - \bar{f}_{h}\big)^{2}\,dx}{\int_{a}^{b} h(x)\,dx}.$$
The expression for the $h$-variance can be thus expanded:
$$Var_{h}(f) = \frac{\int_{a}^{b} h(x)f^{2}(x)\,dx}{\int_{a}^{b} h(x)\,dx} - \left(\frac{\int_{a}^{b} f(x)h(x)\,dx}{\int_{a}^{b} h(x)\,dx}\right)^{2}.$$
It is easy to see another form of the $h$-variance, given by the following:
$$Var_{h}(f) = \frac{\int_{a}^{b} h(x)\,dx \int_{a}^{b} h(x)f^{2}(x)\,dx - \left(\int_{a}^{b} f(x)h(x)\,dx\right)^{2}}{\left(\int_{a}^{b} h(x)\,dx\right)^{2}}.$$
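The $h$-mean and $h$-variance are straightforward to approximate by Riemann sums; the sketch below (our illustration, with the arbitrary choices $f(x) = x^{2}$, $h(x) = 1 + x$ on $[0,1]$) also checks the expanded form:

```python
import numpy as np

a, b, N = 0.0, 1.0, 100_000
dx = (b - a) / N
x = a + dx * np.arange(N)              # left endpoints for a Riemann sum
f, h = x**2, 1.0 + x                   # arbitrary choices of f and h

I = lambda t: float(np.sum(t) * dx)    # approximation of the integral over [a, b]
mean_h = I(f * h) / I(h)               # h-integral arithmetic mean of f
var_h = I(h * (f - mean_h)**2) / I(h)  # h-variance of f
expanded = I(h * f**2) / I(h) - (I(f * h) / I(h))**2

assert abs(var_h - expanded) < 1e-10
print(mean_h, var_h)
```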
In Reference [21], Aldaz showed a refinement of the AM-GM inequality and used it in the proof that $\bar{f} - \big(\overline{\sqrt{f}}\big)^{2}$ is a measure of the dispersion of $\sqrt{f}$ about its mean value, which is, in fact, comparable to the variance.
The covariance is a measure of how much two Riemann-integrable functions change together at the same time and is defined as
$$Cov(f,g) = \frac{1}{b-a}\int_{a}^{b} \big(f(x) - \bar{f}\big)\big(g(x) - \bar{g}\big)\,dx,$$
and is equivalent to the form
$$Cov(f,g) = \frac{1}{b-a}\int_{a}^{b} f(x)g(x)\,dx - \frac{1}{b-a}\int_{a}^{b} f(x)\,dx \cdot \frac{1}{b-a}\int_{a}^{b} g(x)\,dx.$$
In fact, the covariance is the Chebyshev functional attached to functions $f$ and $g$. In Reference [22] it is written as $T(f,g)$. The properties of the Chebyshev functional have been studied by Elezović, Marangunić and Pečarić in Reference [19].
The $h$-covariance is a measure of how much two random variables change together and is defined as
$$Cov_{h}(f,g) = \frac{\int_{a}^{b} h(x)\big(f(x) - \bar{f}_{h}\big)\big(g(x) - \bar{g}_{h}\big)\,dx}{\int_{a}^{b} h(x)\,dx},$$
and is equivalent to the form
$$Cov_{h}(f,g) = \frac{\int_{a}^{b} h(x)f(x)g(x)\,dx}{\int_{a}^{b} h(x)\,dx} - \frac{\int_{a}^{b} f(x)h(x)\,dx}{\int_{a}^{b} h(x)\,dx} \cdot \frac{\int_{a}^{b} g(x)h(x)\,dx}{\int_{a}^{b} h(x)\,dx}.$$
In Reference [23], Pečarić used the generalization of the Chebyshev functional attached to functions $f$ and $g$ to the Chebyshev $h$-functional attached to functions $f$ and $g$, denoted by $T(f,g;h)$. There, Pečarić showed some generalizations of the inequality of Grüss by the Chebyshev $h$-functional. It is easy to see that, in terms of covariance, this can be written as $T(f,g;h) = Cov_{h}(f,g)$.
In terms of covariance, the inequality of Grüss becomes
$$|Cov(f,g)| \leq \frac{1}{4}(M_{1} - m_{1})(M_{2} - m_{2}),$$
where $m_{1} \leq f \leq M_{1}$ and $m_{2} \leq g \leq M_{2}$. In terms of the Chebyshev functional, the inequality of Grüss becomes
$$|T(f,g)| \leq \frac{1}{4}(M_{1} - m_{1})(M_{2} - m_{2}).$$
Next, using the notion of the standard 2-inner product, we extend the above concepts to vectors of $\mathbb{R}^{n}$. If $(X, \langle\cdot,\cdot\rangle)$ is an inner product space, then the standard 2-inner product $(\cdot,\cdot|\cdot)$ is defined on $X$ by:
$$(x,y|z) = \langle x,y\rangle \langle z,z\rangle - \langle x,z\rangle \langle z,y\rangle,$$
for all $x,y,z \in X$.
Thus, $(X,\|\cdot,\cdot\|)$ becomes a linear 2-normed space, with the 2-norm given by the following:
$$\|x,z\|^{2} = \|x\|^{2}\,\|z\|^{2} - \langle x,z\rangle^{2},$$
for all $x,z \in X$.
Now, we take the vector space $\mathbb{R}^{n}$ with the inner product $\langle x,y\rangle = \sum_{i=1}^{n} x_{i}y_{i}$. For $x = (x_{1},\ldots,x_{n})$, $y = (y_{1},\ldots,y_{n})$, $z = (z_{1},\ldots,z_{n})$ we have
$$\|x,z\|^{2} = \sum_{i=1}^{n} x_{i}^{2} \sum_{i=1}^{n} z_{i}^{2} - \left(\sum_{i=1}^{n} x_{i}z_{i}\right)^{2}$$
and
$$(x,y|z) = \sum_{i=1}^{n} x_{i}y_{i} \sum_{i=1}^{n} z_{i}^{2} - \sum_{i=1}^{n} x_{i}z_{i} \sum_{i=1}^{n} y_{i}z_{i}.$$
In Reference [14], Niezgoda studied certain orthoprojectors. The operator $P : X \to X$ defined by
$$P(x) = \frac{\langle x,e\rangle}{\langle e,e\rangle}\,e$$
is the orthoprojector from $X$ onto $\operatorname{span}\{e\}$. If $e = (1,1,\ldots,1) \in \mathbb{R}^{n}$, where $x = (x_{1},\ldots,x_{n}) \in \mathbb{R}^{n}$, then the average of vector $x$ is $\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_{i}$, and we have $P(x) = \bar{x}\,e$.
Therefore, in $\mathbb{R}^{n}$, we define the variance of a vector $x$ by
$$Var(x) = \frac{1}{n^{2}}\,\|x,e\|^{2} = \frac{1}{n}\sum_{i=1}^{n} (x_{i} - \bar{x})^{2}.$$
The standard deviation of $x \in \mathbb{R}^{n}$ is defined by $\sigma(x) = \sqrt{Var(x)}$, so we deduce that $\sigma(x) = \frac{1}{n}\,\|x,e\|$.
Since, using the standard 2-inner product, we have
$$(x,y|e) = n\sum_{i=1}^{n} x_{i}y_{i} - \sum_{i=1}^{n} x_{i} \sum_{i=1}^{n} y_{i},$$
it is easy to define the covariance of two vectors $x$ and $y$ by
$$Cov(x,y) = \frac{1}{n^{2}}\,(x,y|e) = \frac{1}{n}\sum_{i=1}^{n} (x_{i} - \bar{x})(y_{i} - \bar{y}).$$
The correlation coefficient of two vectors $x$ and $y$ can be defined by:
$$\rho(x,y) = \frac{Cov(x,y)}{\sigma(x)\,\sigma(y)} = \frac{(x,y|e)}{\|x,e\|\,\|y,e\|}.$$
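These definitions agree with the usual componentwise formulas, which can be verified directly; a minimal sketch (ours) with $e = (1,\ldots,1)$:

```python
import numpy as np

def two_inner(u, v, w):
    return (u @ v) * (w @ w) - (u @ w) * (w @ v)

x = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])
y = np.array([1.0, 3.0, 2.0, 5.0, 4.0, 6.0, 6.0, 8.0])
n = len(x)
e = np.ones(n)

var_x = two_inner(x, x, e) / n**2    # Var(x) = ||x, e||^2 / n^2
cov_xy = two_inner(x, y, e) / n**2   # Cov(x, y) = (x, y | e) / n^2
rho = two_inner(x, y, e) / np.sqrt(two_inner(x, x, e) * two_inner(y, y, e))

# Agreement with the componentwise formulas:
assert abs(var_x - np.mean((x - x.mean())**2)) < 1e-12
assert abs(cov_xy - np.mean((x - x.mean()) * (y - y.mean()))) < 1e-12
print(var_x, cov_xy, rho)
```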
Another definition of the variance and covariance for vectors from $\mathbb{R}^{n}$ can be made using the projection. Vector projection is an important operation in the Gram-Schmidt orthonormalization of vector space bases.
The projection of a vector $x$ onto a vector $y$ is given by
$$\operatorname{proj}_{y} x = \frac{\langle x,y\rangle}{\langle y,y\rangle}\,y.$$
If in $\mathbb{R}^{n}$ we have the vector $e = (1,1,\ldots,1)$, then
$$\operatorname{proj}_{e} x = \frac{\langle x,e\rangle}{\langle e,e\rangle}\,e = \bar{x}\,e.$$
We remark that the variance of a vector $x$ is given by $Var(x) = \frac{1}{n}\,\|x - \operatorname{proj}_{e} x\|^{2}$ and the covariance of two vectors $x$ and $y$ is given by $Cov(x,y) = \frac{1}{n}\,\langle x - \operatorname{proj}_{e} x,\; y - \operatorname{proj}_{e} y\rangle$.
Next, we can write some equalities and inequalities, using several results from Section 2, related to the variance, the covariance and the standard deviation of vectors $x, y \in \mathbb{R}^{n}$. Therefore, from relations (8), (10), (11), (15), (18)–(20), (22) and (25), we obtain the corresponding relations for all $x, y \in \mathbb{R}^{n}$ and $e = (1,1,\ldots,1)$.
If we take the vector space $C[a,b]$ with the inner product $\langle f,g\rangle = \int_{a}^{b} f(x)g(x)\,dx$, then for $f, g \in C[a,b]$ we have
$$\|f,g\|^{2} = \int_{a}^{b} f^{2}(x)\,dx \int_{a}^{b} g^{2}(x)\,dx - \left(\int_{a}^{b} f(x)g(x)\,dx\right)^{2}$$
and
$$(f,g|h) = \int_{a}^{b} f(x)g(x)\,dx \int_{a}^{b} h^{2}(x)\,dx - \int_{a}^{b} f(x)h(x)\,dx \int_{a}^{b} g(x)h(x)\,dx.$$
If $e \in C[a,b]$ and $e(x) = 1$ for all $x \in [a,b]$, then
$$\|f,e\|^{2} = (b-a)\int_{a}^{b} f^{2}(x)\,dx - \left(\int_{a}^{b} f(x)\,dx\right)^{2}$$
and
$$(f,g|e) = (b-a)\int_{a}^{b} f(x)g(x)\,dx - \int_{a}^{b} f(x)\,dx \int_{a}^{b} g(x)\,dx.$$
Therefore, in $C[a,b]$, we define the variance of a function $f$ by
$$Var(f) = \frac{1}{(b-a)^{2}}\,\|f,e\|^{2};$$
the standard deviation of $f \in C[a,b]$ is defined by $\sigma(f) = \sqrt{Var(f)}$, so we deduce that $\sigma(f) = \frac{1}{b-a}\,\|f,e\|$, and we define the covariance of two functions $f$ and $g$ by
$$Cov(f,g) = \frac{1}{(b-a)^{2}}\,(f,g|e).$$
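Approximating the integral inner product by Riemann sums, the 2-inner-product forms of the variance and covariance match the direct integral forms; a sketch (ours, with the arbitrary choices $f(x) = x$, $g(x) = x^{2}$ on $[0,1]$):

```python
import numpy as np

a, b, N = 0.0, 1.0, 200_000
dx = (b - a) / N
t = a + dx * np.arange(N)
f, g, e = t, t**2, np.ones(N)          # arbitrary choices of f and g

ip = lambda u, v: float(np.sum(u * v) * dx)                     # <u, v> on C[a, b]
ti = lambda u, v, w: ip(u, v) * ip(w, w) - ip(u, w) * ip(w, v)  # (u, v | w)

var_f = ti(f, f, e) / (b - a)**2       # Var(f) = ||f, e||^2 / (b - a)^2
cov_fg = ti(f, g, e) / (b - a)**2      # Cov(f, g) = (f, g | e) / (b - a)^2
direct_var = ip(f * f, e) / (b - a) - (ip(f, e) / (b - a))**2
direct_cov = ip(f * g, e) / (b - a) - ip(f, e) * ip(g, e) / (b - a)**2

assert abs(var_f - direct_var) < 1e-9 and abs(cov_fg - direct_cov) < 1e-9
print(var_f, cov_fg)                   # both ~ 1/12 for these f and g
```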
The definition of the variance of a function $f$ and of the covariance of two functions $f$ and $g$ in terms of the projection is given below.
The projection of a vector $f$ onto a vector $g$ is given by
$$\operatorname{proj}_{g} f = \frac{\langle f,g\rangle}{\langle g,g\rangle}\,g.$$
If in $C[a,b]$ we take $e(x) = 1$ for all $x \in [a,b]$, we have
$$\operatorname{proj}_{e} f = \frac{\langle f,e\rangle}{\langle e,e\rangle}\,e = \bar{f}\,e.$$
Thus, in $C[a,b]$ we define the variance of a function $f$ by $Var(f) = \frac{1}{b-a}\,\|f - \operatorname{proj}_{e} f\|^{2}$ and the covariance of the functions $f$ and $g$ by $Cov(f,g) = \frac{1}{b-a}\,\langle f - \operatorname{proj}_{e} f,\; g - \operatorname{proj}_{e} g\rangle$.
Relations (38)–(46) can be written in terms of the elements from $C[a,b]$; they hold for all $f, g \in C[a,b]$ and $e(x) = 1$, $x \in [a,b]$.
Let $x, y, e$ be vectors in the inner product space $(X, \langle\cdot,\cdot\rangle)$ over the field of real numbers, with $e \neq 0$ and vectors $x$ and $e$ being linearly independent, such that
$$y = a\,x + b\,e,$$
where $a, b \in \mathbb{R}$. Using the inner product and its properties, we deduce that
$$a\,\langle x,x\rangle + b\,\langle e,x\rangle = \langle y,x\rangle$$
and
$$a\,\langle x,e\rangle + b\,\langle e,e\rangle = \langle y,e\rangle.$$
Therefore, we have to solve this system with two equations and two unknowns $a$ and $b$. But we have the 2-inner product
$$(x,y|e) = \langle x,y\rangle \langle e,e\rangle - \langle x,e\rangle \langle e,y\rangle,$$
for all $x, y, e \in X$, with $\|x,e\|^{2} = (x,x|e)$.
If $A$ is the matrix of the system, then we obtain $\det A = \langle x,x\rangle \langle e,e\rangle - \langle x,e\rangle^{2} = \|x,e\|^{2}$. Because vectors $x$ and $e$ are linearly independent, we have $\det A \neq 0$. Using the Cramer method to solve the system, we find that
$$a = \frac{(x,y|e)}{\|x,e\|^{2}}$$
and
$$b = \frac{(y,e|x)}{\|x,e\|^{2}}.$$
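The Cramer solution can be checked numerically: if $y$ is built as $a\,x + b\,e$, the formulas recover $a$ and $b$ exactly (our sketch; the data are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.standard_normal(10)
e = np.ones(10)
y = 2.5 * x + 1.0 * e                  # y = a x + b e with a = 2.5, b = 1.0

ip = np.dot
two_inner = lambda u, v, w: ip(u, v) * ip(w, w) - ip(u, w) * ip(w, v)

det_A = two_inner(x, x, e)             # det A = ||x, e||^2 > 0 (x, e independent)
a = two_inner(x, y, e) / det_A         # Cramer: a = (x, y | e) / ||x, e||^2
b = two_inner(y, e, x) / det_A         # Cramer: b = (y, e | x) / ||x, e||^2

assert abs(a - 2.5) < 1e-9 and abs(b - 1.0) < 1e-9
print(a, b)
```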
Let $x, y, e = (1,1,\ldots,1)$ be vectors in the inner product space $\mathbb{R}^{n}$ over the field of real numbers, with vectors $x$ and $e$ being linearly independent, such that $y = a\,x + b\,e$, where $a, b \in \mathbb{R}$. By dividing the equality $a\,\langle x,e\rangle + b\,\langle e,e\rangle = \langle y,e\rangle$ by $n$, we deduce the relation $a\,\bar{x} + b = \bar{y}$, where $\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_{i}$ and $\bar{y} = \frac{1}{n}\sum_{i=1}^{n} y_{i}$, so $b = \bar{y} - a\,\bar{x}$. Therefore, we obtain
$$a = \frac{(x,y|e)}{\|x,e\|^{2}} = \frac{Cov(x,y)}{Var(x)}$$
and
$$b = \bar{y} - \frac{Cov(x,y)}{Var(x)}\,\bar{x}.$$
If $\sigma(x), \sigma(y) \neq 0$, then
$$a = \rho(x,y)\,\frac{\sigma(y)}{\sigma(x)}$$
and
$$b = \bar{y} - \rho(x,y)\,\frac{\sigma(y)}{\sigma(x)}\,\bar{x}.$$
In statistics, linear regression is a linear approach to modelling the relationship between a dependent variable and one or more independent variables. The case of one independent variable is called simple linear regression.
We consider two random variables: $V$, taking the values $v_{1}, \ldots, v_{n}$, and $W$, taking the values $w_{1}, \ldots, w_{n}$, with probabilities $p_{i}$, for any $i \in \{1,\ldots,n\}$.
A linear regression model assumes that the relationship between the dependent variable $W$ and the independent variable $V$ is linear. Thus, the general linear model for one independent variable may be written as $W = a\,V + b + \varepsilon$. We can describe the underlying relationship between $w_{i}$ and $v_{i}$ involving this error term $\varepsilon_{i}$ by $w_{i} = a\,v_{i} + b + \varepsilon_{i}$.
If we have to minimize $E(\varepsilon^{2}) = \sum_{i=1}^{n} p_{i}\,(w_{i} - a\,v_{i} - b)^{2}$ under the constraint $E(\varepsilon) = 0$, then we find the optimal coefficients. Using the Lagrange method of multipliers, we obtain $a = \frac{Cov(V,W)}{Var(V)}$ and $b = E(W) - a\,E(V)$. By simple calculations, we deduce $a = \rho(V,W)\,\frac{\sigma(W)}{\sigma(V)}$ and $b = E(W) - \rho(V,W)\,\frac{\sigma(W)}{\sigma(V)}\,E(V)$, so we obtain the same coefficients as above.
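These coefficients coincide with the usual least-squares estimates; a short sketch (ours, with uniform probabilities $p_{i} = 1/n$ and made-up data) compares them with numpy's polyfit:

```python
import numpy as np

v = np.array([1.0, 2.0, 3.0, 4.0, 5.0])     # observed values of V
w = np.array([2.1, 3.9, 6.2, 7.8, 10.1])    # observed values of W

cov = np.mean(v * w) - v.mean() * w.mean()  # Cov(V, W), uniform probabilities
var = np.mean(v**2) - v.mean()**2           # Var(V)
a = cov / var                               # slope: a = Cov(V, W) / Var(V)
b = w.mean() - a * v.mean()                 # intercept: b = E(W) - a E(V)

a_np, b_np = np.polyfit(v, w, 1)            # ordinary least squares, degree 1
assert abs(a - a_np) < 1e-9 and abs(b - b_np) < 1e-9
print(a, b)
```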