1. Introduction
Throughout this paper, 'an operator' means 'a bounded linear operator between Hilbert spaces'. $\mathscr{H}$, $\mathscr{K}$ and $\mathscr{L}$ denote arbitrary Hilbert spaces. $\mathcal{B}(\mathscr{H},\mathscr{K})$ denotes the set of all bounded linear operators from $\mathscr{H}$ to $\mathscr{K}$, and $\mathcal{B}(\mathscr{H})=\mathcal{B}(\mathscr{H},\mathscr{H})$. $I$ denotes the identity operator and $O$ denotes the zero operator. For an operator $T\in\mathcal{B}(\mathscr{H},\mathscr{K})$, $T^{*}$, $\mathcal{R}(T)$ and $\mathcal{N}(T)$ denote the adjoint operator, the range and the null space of $T$, respectively.
Recall that an operator $X\in\mathcal{B}(\mathscr{K},\mathscr{H})$ is called the Moore–Penrose inverse of $T\in\mathcal{B}(\mathscr{H},\mathscr{K})$ if $X$ satisfies the following four operator equations [1]:
$$ (1)\; TXT=T, \qquad (2)\; XTX=X, \qquad (3)\; (TX)^{*}=TX, \qquad (4)\; (XT)^{*}=XT. \qquad (1) $$
If such an operator $X$ exists, then it is unique and is denoted by $T^{\dagger}$. It is well known that the Moore–Penrose inverse of $T$ exists if, and only if, $\mathcal{R}(T)$ is closed, see [2,3].
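As a finite-dimensional illustration (matrices are bounded operators between finite-dimensional Hilbert spaces, where $\mathcal{R}(T)$ is automatically closed), the four Penrose equations can be checked numerically; the matrix below is illustrative only:

```python
import numpy as np

# A rank-deficient 3x2 matrix, viewed as an operator between
# finite-dimensional Hilbert spaces.
T = np.array([[1.0, 2.0],
              [2.0, 4.0],
              [0.0, 1.0]])

X = np.linalg.pinv(T)  # the Moore-Penrose inverse T^+

# The four Penrose equations of Formula (1):
assert np.allclose(T @ X @ T, T)              # (1) TXT = T
assert np.allclose(X @ T @ X, X)              # (2) XTX = X
assert np.allclose((T @ X).conj().T, T @ X)   # (3) (TX)* = TX
assert np.allclose((X @ T).conj().T, X @ T)   # (4) (XT)* = XT
```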
For any operator $T\in\mathcal{B}(\mathscr{H},\mathscr{K})$ and any $\{i,j,\ldots,k\}\subseteq\{1,2,3,4\}$, let $T\{i,j,\ldots,k\}$ denote the set of operators $X\in\mathcal{B}(\mathscr{K},\mathscr{H})$ which satisfy Equations $(i),(j),\ldots,(k)$ from among Equations (1)–(4) of Formula (1). An operator in $T\{i,j,\ldots,k\}$ is called an $\{i,j,\ldots,k\}$-inverse of $T$ and is denoted by $T^{(i,j,\ldots,k)}$. For example, an operator $X$ of the set $T\{1\}$ is called a $\{1\}$-inverse or a g-inverse of $T$ and is denoted by $T^{(1)}$. The seven well-known common types of generalized inverse of $T$ obtained in this way are, respectively, the $\{1\}$-inverse, $\{1,2\}$-inverse, $\{1,3\}$-inverse, $\{1,4\}$-inverse, $\{1,2,3\}$-inverse, $\{1,2,4\}$-inverse and $\{1,2,3,4\}$-inverse, the last being the unique Moore–Penrose inverse. In particular, when $T$ is nonsingular, it is easily seen that $T\{1\}=\{T^{-1}\}=\{T^{\dagger}\}$. We refer the reader to [2,3,4] for basic results on generalized inverses.
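For a singular $T$ the set $T\{1\}$ contains infinitely many operators. A minimal finite-dimensional sketch: operators of the form $X=T^{\dagger}+(I-T^{\dagger}T)V$, with $V$ arbitrary, always satisfy $TXT=T$ (the matrix and the choice of $V$ below are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)
T = np.array([[1.0, 2.0],
              [2.0, 4.0]])          # singular, rank 1

Tp = np.linalg.pinv(T)              # the Moore-Penrose inverse
V = rng.standard_normal((2, 2))     # an arbitrary operator

# X = T^+ + (I - T^+ T) V satisfies TXT = T, since T(I - T^+ T) = O,
# so X is a {1}-inverse (g-inverse) of T, generally different from T^+.
X = Tp + (np.eye(2) - Tp @ T) @ V

assert np.allclose(T @ X @ T, T)    # Penrose equation (1): X is in T{1}
assert not np.allclose(X, Tp)       # but X differs from the MP inverse
```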
The concept of the generalized inverse has proved very useful in various applied mathematical settings, for example in singular differential and difference equations, Markov chains, cryptography, iterative methods and multibody system dynamics; see [2,3,5,6,7,8]. In these applied settings, large-scale scientific computing problems eventually reduce to least squares problems. Using generalized inverses to construct fast and effective iterative algorithms for these least squares problems has attracted considerable attention, and many interesting results have been obtained, see [2,3,9,10,11].
Suppose $T\in\mathcal{B}(\mathscr{H},\mathscr{K})$, $x\in\mathscr{H}$ and $b\in\mathscr{K}$. The least squares problem (LS), which arises in many practical scientific problems, is to find an $x$ that minimizes the norm
$$ \|Tx-b\|. \qquad (2) $$
Any solution $x$ of the above LS can be expressed as $x=T^{(1,3)}b$. If $Tx=b$ is consistent, the minimum norm solution has the form $x=T^{(1,4)}b$. The unique minimal norm least squares solution of the above LS is $x=T^{\dagger}b$. When $T=T_{1}T_{2}T_{3}$ is a product of operators, one of the problems concerning the above LS is under what conditions the reverse order law
$$ (T_{1}T_{2}T_{3})^{\dagger}=T_{3}^{\dagger}T_{2}^{\dagger}T_{1}^{\dagger} \qquad (3) $$
holds.
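In finite dimensions these solution formulas can be checked directly. A small sketch comparing the minimal norm least squares solution $T^{\dagger}b$ with another least squares solution of the form $T^{\dagger}b+(I-T^{\dagger}T)z$ (the matrix and data are illustrative only):

```python
import numpy as np

T = np.array([[1.0, 0.0],
              [1.0, 0.0],
              [0.0, 0.0]])          # rank 1, so the LS (2) has many solutions
b = np.array([1.0, 3.0, 5.0])

Tp = np.linalg.pinv(T)
x_min = Tp @ b                      # unique minimal-norm least squares solution

# Any x = T^+ b + (I - T^+ T) z is also a least squares solution:
z = np.array([7.0, -7.0])
x_other = x_min + (np.eye(2) - Tp @ T) @ z

r_min = np.linalg.norm(T @ x_min - b)
r_other = np.linalg.norm(T @ x_other - b)
assert np.isclose(r_min, r_other)                        # same residual norm
assert np.linalg.norm(x_min) < np.linalg.norm(x_other)   # but minimal norm
```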
If Formula (3) is true, then, according to the reverse order law Formula (3) and iterative algorithm theory, we can naturally construct suitable iterative sequences and then design fast and effective iterative algorithms to solve Formula (2). If Formula (3) is not necessarily true, can we find necessary and sufficient conditions for Formula (3)? Applying the reverse order law to design fast and effective iterative algorithms for Formula (2) avoids repeated decompositions of the relevant matrices and preserves the structure of the iterative sequence in each iteration; this reduces the amount of machine storage, maintains the convergence and stability of the algorithm, and improves the operation speed, see [2,3,9,11,12,13].
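As one concrete instance of such an iteration (a standard scheme, not one taken from this paper), the Landweber sequence $x_{k+1}=x_{k}+\alpha T^{*}(b-Tx_{k})$ with $x_{0}=0$ and $0<\alpha<2/\|T\|^{2}$ converges to the minimal norm least squares solution $T^{\dagger}b$, and each step reuses only $T$ and $T^{*}$, with no matrix decomposition:

```python
import numpy as np

def landweber_ls(T, b, n_iter=5000):
    """Iterate x <- x + a * T^T (b - T x) from x = 0.

    For step size 0 < a < 2/||T||^2 this converges to the minimal-norm
    least squares solution T^+ b, without decomposing T at any step.
    """
    a = 1.0 / np.linalg.norm(T, 2) ** 2   # a = 1/sigma_max^2 < 2/||T||^2
    x = np.zeros(T.shape[1])
    for _ in range(n_iter):
        x = x + a * (T.T @ (b - T @ x))
    return x

T = np.array([[1.0, 2.0], [2.0, 4.0], [0.0, 1.0]])
b = np.array([1.0, 1.0, 1.0])

# The iterate agrees with the minimal-norm LS solution T^+ b:
assert np.allclose(landweber_ls(T, b), np.linalg.pinv(T) @ b, atol=1e-6)
```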
The reverse order law for generalized inverses of operator (or matrix) products yields a class of interesting problems that are fundamental in the theory of generalized inverses, see [2,3]. These problems have attracted considerable attention since the mid-1960s, and many interesting results have been obtained, see [14,15,16,17,18,19,20,21,22].
For the generalized inverses of a matrix product, Greville [7] first gave a necessary and sufficient condition for $(AB)^{\dagger}=B^{\dagger}A^{\dagger}$. Since then, the reverse order law for generalized inverses of matrix products has been studied widely. Hartwig [8] derived necessary and sufficient conditions for the Moore–Penrose inverse of the product of three matrices, and Y. Tian [14] obtained reverse order laws for the Moore–Penrose inverse of products of multiple matrices. M. Wei [15] and De Pierro [16], respectively, derived necessary and sufficient conditions for further classes of reverse order laws by applying the product singular value decomposition (PSVD). M. Wei [17] then deduced necessary and sufficient conditions for reverse order laws for g-inverses of multiple matrix products. For mixed-type reverse order laws, Xiong and Zheng [18] presented equivalent conditions using extremal ranks of the generalized Schur complement.
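A quick numerical illustration of why such conditions are needed (the matrices are illustrative only): the two-factor law $(AB)^{\dagger}=B^{\dagger}A^{\dagger}$ fails for generic factors, yet holds, for instance, for $B=A^{*}$, since $(AA^{*})^{\dagger}=(A^{*})^{\dagger}A^{\dagger}$:

```python
import numpy as np

pinv = np.linalg.pinv
rng = np.random.default_rng(1)

# The reverse order law (AB)^+ = B^+ A^+ fails in general:
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[1.0, 0.0], [1.0, 0.0]])
assert not np.allclose(pinv(A @ B), pinv(B) @ pinv(A))

# ... but it holds, e.g., for B = A*: (A A*)^+ = (A*)^+ A^+.
C = rng.standard_normal((3, 2))
assert np.allclose(pinv(C @ C.T), pinv(C.T) @ pinv(C))
```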
For the generalized inverses of an operator product, Bouldin [5] and Izumino [19] extended the results of Greville [7] to bounded linear operators on Hilbert spaces, using the gaps between subspaces. For operators $A$ and $B$ such that the product $AB$ is meaningful, using the technique of the matrix form of bounded linear operators, D. S. Djordjević [20] showed that the reverse order law $(AB)^{\dagger}=B^{\dagger}A^{\dagger}$ holds if, and only if, $\mathcal{R}(A^{*}AB)\subseteq\mathcal{R}(B)$ and $\mathcal{R}(BB^{*}A^{*})\subseteq\mathcal{R}(A^{*})$. J. J. Koliha et al. [21] obtained necessary and sufficient conditions for the reverse order law of the Moore–Penrose inverse in rings with involution. In [22], D. S. Cvetković-Ilić et al. studied this reverse order law of the Moore–Penrose inverse in $C^{*}$-algebras. The reader can find more results on the reverse order law for the Moore–Penrose inverse of operator products in [23,24,25,26,27].
Recently, Xiong and Qin [28,29] studied the reverse order laws for two further classes of generalized inverses of operator products, using the technique of the matrix form of bounded linear operators [30], and some equivalent conditions were derived for these reverse order laws. Along the same lines as [28,29], in this paper we study the reverse order law for the g-inverse of an operator product $T_{1}T_{2}T_{3}$. In particular, some necessary and sufficient conditions for the reverse order law
$$ T_{3}\{1\}\,T_{2}\{1\}\,T_{1}\{1\}\subseteq (T_{1}T_{2}T_{3})\{1\} \qquad (4) $$
are presented. Moreover, some finite dimensional results are extended to infinite dimensional settings.
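A finite-dimensional sanity check of the object studied here (matrices as operators; the example is ours, not taken from the cited works): choosing the Moore–Penrose inverses as particular $\{1\}$-inverses, the product $T_{3}^{\dagger}T_{2}^{\dagger}T_{1}^{\dagger}$ need not lie in $(T_{1}T_{2}T_{3})\{1\}$, so the inclusion $T_{3}\{1\}T_{2}\{1\}T_{1}\{1\}\subseteq(T_{1}T_{2}T_{3})\{1\}$ is genuinely restrictive:

```python
import numpy as np

pinv = np.linalg.pinv

T1 = np.array([[1.0, 0.0], [0.0, 0.0]])   # singular
T2 = np.array([[1.0, 1.0], [1.0, 1.0]])   # singular
T3 = np.array([[0.0, 1.0], [1.0, 0.0]])   # a permutation, invertible

T = T1 @ T2 @ T3
# A particular element of T3{1} T2{1} T1{1}:
X = pinv(T3) @ pinv(T2) @ pinv(T1)

# TXT != T here, so this X is NOT a {1}-inverse of T1 T2 T3:
assert not np.allclose(T @ X @ T, T)
```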
3. Main Results
Let $T_{1}$, $T_{2}$ and $T_{3}$ be given operators, where $T_{1}$, $T_{2}$ and $T_{3}$ are regular operators. From Lemma 1, we know that the operators $T_{1}$, $T_{2}$ and $T_{3}$ have the matrix forms (5)–(10) with respect to the orthogonal sums of subspaces, in which the leading entry of each matrix form is invertible on the corresponding subspace. According to Formulas (5)–(10), we have the following theorem.
Theorem 1. Let , and where , and are regular operators. Then
(1)
(2)
(3)
and
Proof. By Lemma 3, we know that
and
Combining Formulas (5)–(10) with Formulas (11)–(13), we have
and
From (14)–(16), we have Theorem 1. □
From Lemma 1, we know that the operator
has the following matrix forms with respect to the orthogonal sum of subspaces:
and
Combining (5)–(10) with the results in Lemma 2, it follows that the $\{1\}$-inverses of the three operators have the following representations:
where
and
are arbitrary bounded linear operators on appropriate spaces.
where
and
are arbitrary bounded linear operators on appropriate spaces.
where
and
are arbitrary bounded linear operators on appropriate spaces.
Furthermore, by Lemma 2, we have
Combining Formulas (17)–(21) with the results in Theorem 1, we obtain the main result of this paper.
Theorem 2. Let , and , where , , and are regular operators. Then the following statements are equivalent:
(1) ;
(2) , and
Proof. From Formula (1) in Lemma 1, we know that the reverse order law (4) holds, i.e., condition (1) in Theorem 2 holds, if, and only if,
holds for any
,
.
By (17) and (19)–(21), we have
where
In the rest of this section, we prove that condition (1) in Theorem 2 is equivalent to condition (2) in Theorem 2, i.e., that Formula (22) is equivalent to Formulas (14)–(16) in Theorem 1.
(2) $\Rightarrow$ (1): Combining (23) with (14)–(16), we have
and
That is, from (14)–(16), we have (22), i.e., we have proved that (2) $\Rightarrow$ (1).
(1) $\Rightarrow$ (2): Suppose the reverse order law (4) holds. From Lemma 2, we know that Equation (22) holds for any choice of the arbitrary operators.
Firstly, let
where
where
Then, we have
and
Since
(
) are arbitrary, letting
, we have
Combining (29) with (30), we have
Let
, then from (31), we have
From (31) and (32), we get
According to Equation (33), we have
Since
,
and
,
are arbitrary, from Lemma 4 and (34) we have
From (35) and (16), we get that if the reverse order law
holds, then
.
Secondly, let
where
Then, we have
and
Since
(
) are arbitrary, letting
, we have
Combining (40) with (41), we have
Let
, then from (42), we have
From (42) and (43), we get
According to Equation (44), we have
Since
,
and
,
are arbitrary, from Lemma 4 and (45) we have
From (15) and (46), we get that if the reverse order law
holds, then
.
Thirdly, let
where
Then we have
and
Since
(
) are arbitrary, letting
, we have
Combining (51) with (52), we have
Let
, then from (53) we have
From (53) and (54), we get
According to Formula (55), we have
Since
,
and
,
are arbitrary, from Lemma 4 and (56) we have
From (14) and (57), we get that if the reverse order law
holds, then
.
Finally, combining (35) and (46) with (57), we have proved that if the reverse order law Formula (4) holds, then the equalities (14)–(16) also hold. That is, (1) $\Rightarrow$ (2). □
Corollary 1. Let and , where , and are regular operators. Then the following statements are equivalent:
(1) ;
(2) .