Article

The Existence and Representation of the Solutions to the System of Operator Equations AiXBi + CiYDi + EiZFi = Gi (i = 1, 2)

1 School of Mathematical Sciences, Inner Mongolia University, Hohhot 010021, China
2 College of Science, Inner Mongolia Agricultural University, Hohhot 010018, China
* Author to whom correspondence should be addressed.
Axioms 2024, 13(7), 435; https://doi.org/10.3390/axioms13070435
Submission received: 21 April 2024 / Revised: 16 June 2024 / Accepted: 24 June 2024 / Published: 27 June 2024

Abstract: In this paper, we give the necessary and sufficient conditions for the existence of general solutions, self-adjoint solutions, and positive solutions to the system $A_iXB_i + C_iYD_i + E_iZF_i = G_i$ $(i = 1, 2)$ under additional conditions. In addition, we derive the representation of the general solutions to the system $A_iXB_i + C_iYD_i + E_iZF_i = G_i$ $(i = 1, 2)$, and provide the matrix representation of the self-adjoint solutions and the positive solutions in the sense of the star order.

1. Introduction

Linear matrix equations are one of the main research topics in matrix theory and its applications. Many articles on matrix equations give different methods for establishing the existence and the representation of solutions [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16]. Various problems in engineering, mathematics, and other fields can be converted into certain linear matrix equations. For example, Roth [17] gave a necessary and sufficient condition for the solvability of the generalized Sylvester matrix equation
$AX - YB = C.$
Lancaster [18] systematically investigated the equation
A X + X B = C .
After this, many researchers have studied properties of the generalized Sylvester matrix equation [19,20,21,22]. In reference [23], the author considered the solvability of the equation
A X B + C Y D = E ,
and this equation has many applications in the growth curve model in statistics.
Matrices are operators on finite-dimensional spaces; can the relevant conclusions about matrix equations be generalized to operator equations on infinite-dimensional spaces?
Douglas [24] formulated the renowned “Douglas Range Inclusion Theorem” and provided some corresponding equivalent conditions for the existence of solutions to the operator equation AX = B. Subsequently, numerous academics studied the solvability of operator equations evolving from AX = B [25,26,27,28,29,30].
Recently, Vodough and Moslehian [31] considered A X B = B = B X A under the condition of the star order and obtained representations of the solutions to the operator equations. Zhang and Ji [32] extended these results and gave the necessary and sufficient conditions for the existence of solutions to the operator equations A X B = B = B X A . Cvetković-Ilić [33] provided sufficient and necessary conditions for the solvability of the operator equations A X C = C = C X A , which is a generalization of [31,32]. Hranislav [28] considered the solvability of the operator equation A i X B i = C i , i = 1 , 2 ¯ , and further generalized the problem in [32].
Inspired by the results of [25,28,31,32,33], in this paper, we study the existence of solutions and the representation of general, self-adjoint, and positive solutions of the operator equation
A i X B i + C i Y D i + E i Z F i = G i ( i = 1 , 2 )
Moreover, as a corollary, we derive the corresponding conclusion of the operator equation
A i X B i + C i Y D i = G i ( i = 1 , 2 ) .
The structure of this article is as follows:
In Section 2, we provide a review of the definitions of generalized inverses, self-adjoint operator, positive operator, star order, and Lemmas 1–6.
In Section 3, we investigate the operator Equation (1) and give the necessary and sufficient conditions for the existence of solutions under some assumptions. As a corollary, equivalent conditions for the solutions of the operator Equation (2) are provided under certain conditions. Furthermore, we give the necessary and sufficient conditions for the existence of self-adjoint and positive solutions to the operator systems (1) and (2) under a star-order condition.
In Section 4, we study the general solutions, self-adjoint solutions, and positive solutions of the operator Equations (1) and (2) and obtain explicit representations of these solutions. The general solutions of Equations (1) and (2) are not unique. Indeed, an inner inverse (for $T \in B(H, K)$, an operator $X \in B(K, H)$ satisfying $TXT = T$ is called an inner inverse of T; see the Preliminaries) is not unique in general, and we express the solutions through inner inverses. On the other hand, the representation of the solutions contains a free parameter S, which is arbitrary. Consequently, the general solutions derived in Theorem 4 and Corollary 9 are not unique. Additionally, if $E_i = 0$ or $F_i = 0$ in Theorem 4, the solutions (16) of Equation (1) coincide with the solutions (24) of Equation (2). Furthermore, it should be noted that the self-adjoint and positive solutions presented in Theorems 5 and 6, respectively, are also non-unique. In the operator matrix representation of the self-adjoint and positive solutions, the anti-diagonal entries and the lower right entry of the main diagonal are arbitrary.

2. Preliminaries

Considering H and K as Hilbert spaces, we denote by $B(H, K)$ the set of all bounded linear operators from H to K. For brevity, we write $B(H) = B(H, H)$. If $T \in B(H, K)$, the notations $R(T)$, $N(T)$, and $T^{*}$ denote the range, the null space, and the adjoint of T, respectively. The operator $T \in B(H)$ is called self-adjoint if $T = T^{*}$ and positive if $\langle Tx, x\rangle \geq 0$ for all $x \in H$. $\overline{T}$ denotes a bounded extension of T.
For any $T \in B(H, K)$, if there exists an operator $X : D(X) \subseteq K \to H$ such that $R(T) \subseteq D(X)$ and
$TXT = T,$
then X is called an inner inverse of T, which we denote by $T^{-}$, and T is called regular. Thus, an inner inverse need not belong to $B(K, H)$ in general. An operator $T \in B(H, K)$ has an inner inverse $X \in B(K, H)$ if and only if $R(T)$ is closed [34]. If X satisfies the equation $XTX = X$, then X is called a reflexive inverse of T. Furthermore, there exists a unique bounded operator S that also verifies
$ST = P_{\overline{R(T^{*})}}$ and $TS = P_{\overline{R(T)}}\big|_{R(T) \oplus R(T)^{\perp}}.$
Such an operator S is called the Moore–Penrose generalized inverse of T and will be expressed as $S = T^{\dagger}$.
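In finite dimensions the Moore–Penrose inverse is available numerically, which makes the defining identities easy to verify. The following NumPy sketch (all matrices are arbitrary illustrative data) checks $TT^{\dagger}T = T$, $T^{\dagger}TT^{\dagger} = T^{\dagger}$, and that $T^{\dagger}T$ and $TT^{\dagger}$ are the orthogonal projections onto $R(T^{*})$ and $R(T)$:

```python
import numpy as np

rng = np.random.default_rng(0)

# A rank-deficient matrix T stands in for a closed-range operator.
T = rng.standard_normal((5, 2)) @ rng.standard_normal((2, 4))
S = np.linalg.pinv(T)  # Moore-Penrose inverse T^dagger

def range_projection(M, tol=1e-10):
    # Orthogonal projection onto the column space (range) of M, via the SVD.
    U, s, _ = np.linalg.svd(M, full_matrices=False)
    r = int((s > tol * s.max()).sum())
    return U[:, :r] @ U[:, :r].T

print(np.allclose(T @ S @ T, T), np.allclose(S @ T @ S, S))  # inner and reflexive inverse
print(np.allclose(S @ T, range_projection(T.T)))             # S T = P_{R(T*)}
print(np.allclose(T @ S, range_projection(T)))               # T S = P_{R(T)}
```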
In 1978, Drazin introduced a partial order on $B(H)$, which is called the star order. For $A, B \in B(H)$, the star order $A \stackrel{*}{\leq} B$ means $AA^{*} = BA^{*}$ and $A^{*}A = A^{*}B$; the left star order $A \,{*\leq}\, B$ means $A^{*}A = A^{*}B$ and $R(A) \subseteq R(B)$; the right star order $A \,{\leq*}\, B$ means $AA^{*} = BA^{*}$ and $R(A^{*}) \subseteq R(B^{*})$ [35,36,37,38,39,40,41]. In reference [38], the star order, the minus order, and the diamond order were mainly studied. In particular, the star order generalizes the order of orthogonal projections: if A and B are orthogonal projection operators, then $A \stackrel{*}{\leq} B$ is equivalent to $A \leq B$.
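The star order is straightforward to test numerically from its definition. A short sketch (the matrices P and Q are illustrative orthogonal projections, chosen so that $P \leq Q$):

```python
import numpy as np

def star_leq(A, B, tol=1e-10):
    # A is below B in the star order iff A A* = B A* and A* A = A* B.
    return (np.allclose(A @ A.conj().T, B @ A.conj().T, atol=tol)
            and np.allclose(A.conj().T @ A, A.conj().T @ B, atol=tol))

# For orthogonal projections the star order reduces to the usual order P <= Q.
P = np.diag([1.0, 0.0, 0.0])
Q = np.diag([1.0, 1.0, 0.0])
print(star_leq(P, Q))  # True
print(star_leq(Q, P))  # False
```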
Lemma 1
([35,41], Lemmas 1.1 and 2.1). Let $A, B \in B(H)$. The subsequent assertions hold.
(i) 
$A \stackrel{*}{\leq} B \Rightarrow R(A) \subseteq R(B)$, $R(A^{*}) \subseteq R(B^{*})$.
(ii) 
$A \stackrel{*}{\leq} B \Rightarrow A = P_{\overline{R(A)}}B = BP_{\overline{R(A^{*})}}$.
(iii) 
$A \,{*\leq}\, B \Rightarrow A = P_{\overline{R(A)}}B$.
$A \,{\leq*}\, B \Rightarrow A = BP_{\overline{R(A^{*})}}$.
(iv) 
$A \stackrel{*}{\leq} B \Leftrightarrow A \,{*\leq}\, B$ and $A \,{\leq*}\, B$.
Lemma 2.
Let $A \in B(H, K)$, $B \in B(F, G)$ be closed-range operators and $C \in B(F, K)$; then the subsequent assertions are equivalent, as follows:
(i) 
Operator equation
$AXB = C$
has a solution.
(ii) 
For some inner inverses $A^{-}, B^{-}$, we have $AA^{-}CB^{-}B = C$.
(iii) 
$R(C) \subseteq R(A)$ and $R(C^{*}) \subseteq R(B^{*})$.
A representation of the general solution is
$X = A^{-}CB^{-} + U - A^{-}AUBB^{-},$
where $U \in B(G, H)$ is an arbitrary operator.
Theorem 2 of [42] treats the case where A, B, and C are matrices, giving a solvability criterion for $AXB = C$ and the form of the general solution. When A and B are closed-range operators, the solvability condition and the general solution of the operator equation $AXB = C$ are analogous to those of the matrix equation.
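A finite-dimensional illustration of Lemma 2 is given below; the Moore–Penrose inverse is used as one concrete choice of inner inverse, and the matrix sizes are arbitrary illustrative choices. The sketch checks the solvability test (ii) and that the stated formula produces a solution for an arbitrary U:

```python
import numpy as np

rng = np.random.default_rng(1)

# Rank-deficient A, B so that the generalized inverses are non-trivial.
A = rng.standard_normal((4, 2)) @ rng.standard_normal((2, 3))   # A : H -> K
B = rng.standard_normal((5, 2)) @ rng.standard_normal((2, 6))   # B : F -> G
X0 = rng.standard_normal((3, 5))                                # some X0 : G -> H
C = A @ X0 @ B                                                  # guarantees solvability

Ai, Bi = np.linalg.pinv(A), np.linalg.pinv(B)   # Moore-Penrose inverses as inner inverses

# Solvability test of Lemma 2(ii): A A^- C B^- B = C.
print(np.allclose(A @ Ai @ C @ Bi @ B, C))

# General solution X = A^- C B^- + U - A^- A U B B^- for an arbitrary U.
U = rng.standard_normal((3, 5))
X = Ai @ C @ Bi + U - Ai @ A @ U @ B @ Bi
print(np.allclose(A @ X @ B, C))
```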
Lemma 3
([24], Douglas Lemma). Let $A, B \in B(H)$. The subsequent assertions are equivalent, as follows:
(i) 
$R(A) \subseteq R(B)$;
(ii) 
$AA^{*} \leq \lambda^{2}BB^{*}$ for some constant $\lambda > 0$;
(iii) 
$A = BC$, for some $C \in B(H)$.
If the equivalent conditions (i)–(iii) hold, then there is a unique operator C such that
(a) 
$\|C\|^{2} = \inf\{\lambda^{2} \mid AA^{*} \leq \lambda^{2}BB^{*}\}$;
(b) 
$N(A) = N(C)$;
(c) 
$R(C) \subseteq \overline{R(B^{*})}$.
This solution will be called the Douglas reduced solution and can be expressed as $B^{\dagger}A$.
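Numerically, the Douglas reduced solution can be produced with the Moore–Penrose inverse. The following sketch (illustrative data, with $R(A) \subseteq R(B)$ arranged by construction) computes $C = B^{\dagger}A$ and checks $A = BC$ together with the range condition (c):

```python
import numpy as np

rng = np.random.default_rng(2)

# Arrange R(A) ⊆ R(B) by construction: A = B C0 for some C0.
B = rng.standard_normal((5, 3)) @ rng.standard_normal((3, 5))   # rank 3
A = B @ rng.standard_normal((5, 5))

C = np.linalg.pinv(B) @ A          # Douglas reduced solution B^dagger A
print(np.allclose(B @ C, A))       # A = B C

# R(C) ⊆ R(B*): adjoining the columns of C to those of B* does not increase the rank.
print(np.linalg.matrix_rank(np.hstack([B.T, C])) == np.linalg.matrix_rank(B.T))
```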
Lemma 4
([28], Theorem 2.1). Let $A_i \in B(H, K)$, $B_i \in B(F, G)$ and $E_i \in B(F, K)$, $i = \overline{1,2}$. Assume that $D(B_1^{\dagger}) = D(B_2^{\dagger})$ and $R(E_iB_i^{\dagger}) \subseteq D(A_i^{\dagger})$, $i = \overline{1,2}$. If $A_1^{\dagger}E_1B_1^{\dagger} = A_2^{\dagger}E_2B_2^{\dagger}$, then the subsequent assertions are equivalent:
(i) 
The operator equation system
A 1 X B 1 = E 1
A 2 X B 2 = E 2
is solvable;
(ii) 
$R(E_i) \subseteq R(A_i)$ and $R((A_i^{\dagger}E_i)^{*}) \subseteq R(B_i^{*})$, $i = \overline{1,2}$.
Lemma 5
([28], Theorem 2.2). Let $A_i \in B(H, K)$, $B_i \in B(F, G)$, and $E_i \in B(F, K)$, $i = \overline{1,2}$. If $R(B_1) \subseteq R(B_2)$ and $R(A_2^{*}) \subseteq \overline{R(A_1^{*})}$, then the subsequent assertions are equivalent, as follows:
(i) 
The system of operator equations
A 1 X B 1 = E 1
A 2 X B 2 = E 2
is solvable;
(ii) 
$R(E_1) \subseteq R(A_1)$, $R(E_2^{*}) \subseteq R(B_2^{*})$ and the system
$XB_1 = A_1^{\dagger}E_1$
$A_2X = \overline{E_2B_2^{\dagger}}$
is solvable.
Given a closed subspace S of H and $T \in B(H)$, the operator matrix representation of T induced by S is
$$T = \begin{pmatrix} T_{11} & T_{12} \\ T_{21} & T_{22} \end{pmatrix} : \begin{matrix} S \\ \oplus \\ S^{\perp} \end{matrix} \to \begin{matrix} S \\ \oplus \\ S^{\perp} \end{matrix}, \qquad (3)$$
where $T_{11} = P_STP_S|_S \in B(S)$, $T_{12} = P_ST(I - P_S)|_{S^{\perp}} \in B(S^{\perp}, S)$, $T_{21} = (I - P_S)TP_S|_S \in B(S, S^{\perp})$, $T_{22} = (I - P_S)T(I - P_S)|_{S^{\perp}} \in B(S^{\perp})$. The next well-known result characterizes positive operators in terms of their matrix decomposition.
Lemma 6
([43], Theorem 4.2). Let S be a closed subspace of H and $T \in B(H)$ have the operator matrix representation induced by S provided in (3). Then $T \in B(H)$ is a positive operator if and only if
(i) 
$T_{12} = T_{21}^{*}$;
(ii) 
$T_{11} \geq 0$;
(iii) 
$R(T_{12}) \subseteq R(T_{11}^{1/2})$;
(iv) 
$T_{22} = ((T_{11}^{1/2})^{\dagger}T_{12})^{*}(T_{11}^{1/2})^{\dagger}T_{12} + F$, where $F \geq 0$.
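The characterization in Lemma 6 can be used constructively: starting from $T_{11} \geq 0$, any $T_{12}$ with range inside $R(T_{11}^{1/2})$, and any positive F, conditions (i)–(iv) assemble a positive block operator. A NumPy sketch with illustrative sizes:

```python
import numpy as np

rng = np.random.default_rng(3)

# T11 >= 0 with non-trivial kernel, and T12 with R(T12) ⊆ R(T11) = R(T11^{1/2}).
M = rng.standard_normal((3, 2))
T11 = M @ M.T
T12 = T11 @ rng.standard_normal((3, 2))

# T11^{1/2} and its Moore-Penrose inverse via the spectral decomposition.
w, V = np.linalg.eigh(T11)
w = np.clip(w, 0.0, None)
sqrt_T11 = V @ np.diag(np.sqrt(w)) @ V.T
sqrt_T11_pinv = np.linalg.pinv(sqrt_T11)

F = np.eye(2)                                        # any positive operator F
T22 = (sqrt_T11_pinv @ T12).T @ (sqrt_T11_pinv @ T12) + F

T = np.block([[T11, T12], [T12.T, T22]])
print(np.linalg.eigvalsh(T).min() >= -1e-10)         # T is positive (semidefinite)
```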

3. Existence of Solutions

3.1. The Existence of General Solutions

In the subsequent theorem, we present several equivalent conditions for the existence of the solution to the operator equations A i X B i + C i Y D i + E i Z F i = G i ( i = 1 , 2 ) .
For convenience, we define the notation as follows ( i = 1 , 2 ) :
$\hat{A} = \begin{pmatrix} A_1 & 0 \\ 0 & A_2 \end{pmatrix} : H \oplus H \to K \oplus K$, $\hat{B} = \begin{pmatrix} B_2 & 0 \\ 0 & B_1 \end{pmatrix} : F \oplus F \to G \oplus G$, $\hat{C} = \begin{pmatrix} C_1 & 0 \\ 0 & C_2 \end{pmatrix} : H \oplus H \to K \oplus K$, $\hat{D} = \begin{pmatrix} D_2 & 0 \\ 0 & D_1 \end{pmatrix} : F \oplus F \to G \oplus G$, $\hat{E} = \begin{pmatrix} E_1 & 0 \\ 0 & E_2 \end{pmatrix} : H \oplus H \to K \oplus K$, $\hat{F} = \begin{pmatrix} F_2 & 0 \\ 0 & F_1 \end{pmatrix} : F \oplus F \to G \oplus G$, $\hat{G} = \begin{pmatrix} 0 & G_1 \\ G_2 & 0 \end{pmatrix} : F \oplus F \to K \oplus K$, $\hat{X} = \begin{pmatrix} 0 & X \\ X & 0 \end{pmatrix} : G \oplus G \to H \oplus H$, $\hat{Y} = \begin{pmatrix} 0 & Y \\ Y & 0 \end{pmatrix} : G \oplus G \to H \oplus H$, $\hat{Z} = \begin{pmatrix} 0 & Z \\ Z & 0 \end{pmatrix} : G \oplus G \to H \oplus H$.
A S i = ( I K A i A i ) C i , B S i = D i ( I F B i B i ) , M 1 = ( I K K A ^ A ^ ) C ^ , A T i = ( I K A i A i ) G i , B T i = G i ( I F B i B i ) , N 1 = ( I K K A ^ A ^ ) E ^ , A K i = ( I K A i A i ) E i , B K i = F i ( I F B i B i ) , W 1 = ( I K K A ^ A ^ ) G ^ , C L i = I K C i C i , D L i = I F D i D i , M 2 = D ^ ( I F F B ^ B ^ ) , C S i = ( I K C i C i ) E i , D S i = F i ( I F D i D i ) , N 2 = F ^ ( I F F B ^ B ^ ) , B S K i = B K i ( I F B S i B S i ) , A S K i = ( I K A S i A S i ) A K i , W 2 = G ^ ( I F F B ^ B ^ ) , B S T i = B T i ( I F B S i B S i ) , A S T i = ( I K A S i A S i ) A T i ,
W ^ = 0 W 1 W 2 0 : ( F F ) ( F F ) ( K K ) ( K K ) , P 1 = M 1 0 0 C ^ : ( H H ) ( H H ) ( K K ) ( K K ) , P 2 = N 1 0 0 E ^ : ( H H ) ( H H ) ( K K ) ( K K ) , Q 1 = M 2 0 0 D ^ : ( F F ) ( F F ) ( G G ) ( G G ) , Q 2 = N 2 0 0 F ^ : ( F F ) ( F F ) ( G G ) ( G G ) , Y = 0 Y ^ Y ^ 0 : ( G G ) ( G G ) ( H H ) ( H H ) , Z = 0 Z ^ Z ^ 0 : ( G G ) ( G G ) ( H H ) ( H H ) , P 11 = ( I ( K K ) ( K K ) P 1 P 1 ) P 2 , W 11 = ( I ( K K ) ( K K ) P 1 P 1 ) W ^ , P 22 = Q 2 ( I ( F F ) ( F F ) Q 1 Q 1 ) , W 22 = W ^ ( I ( F F ) ( F F ) Q 1 Q 1 ) .
Theorem 1.
Let A i , C i , E i B ( H , K ) , B i , D i , F i B ( F , G ) , ( i = 1 , 2 ) be closed-range operators, and G i B ( F , K ) , ( i = 1 , 2 ) . Suppose that
D ( B K i ) = D ( B S K i ) , D ( F i ) = D ( D S i ) ,
R ( A S T i F i ) D ( A S K i ) , R ( C L i B T i B K i ) D ( C S i ) , ( i = 1 , 2 ) .
If R ( B K i ) R ( B S K i ) , R ( F i ) R ( D S i ) , R ( ( A K i ) * ) R ( ( A S K i ) * ) ¯ , R ( ( E i ) * ) R ( ( C S i ) * ) ¯ and
A S K i A S T i F i = A K i A T i D L i D S i ,
C S i C L i B T i B K i = E i B S T i B S K i , ( i = 1 , 2 ) ;
then the subsequent assertions are equivalent, as follows:
(i) 
The operator equation system
A 1 X B 1 + C 1 Y D 1 + E 1 Z F 1 = G 1
A 2 X B 2 + C 2 Y D 2 + E 2 Z F 2 = G 2
has a solution.
(ii) 
R ( B S T i ) R ( E i ) , R ( C L i B T i ) R ( C S i ) , R ( A S T i ) R ( A S K i ) , R ( A T i D L i ) R ( A K i ) , R ( ( A S K i A S T i ) * ) R ( ( F i ) * ) , R ( ( C S i C L i B T i ) * ) R ( ( B K i ) * ) , R ( ( E i B S T i ) * ) R ( ( B S K i ) * ) , R ( ( A K i A T i D L i ) * ) R ( ( D S i ) * ) ,
where i = 1,2.
(iii) 
R ( A S T i ) R ( A S K i ) , R ( C L i B T i ) R ( C S i ) , R ( ( B S T i ) * ) R ( ( B S K i ) * ) , R ( ( A T i D L i ) * ) R ( ( D S i ) * ) , and the operator equation system
Z F i = A S K i A S T i , Z B K i = C S i C L i B T i , A K i Z = A T i D L i D S i ¯ , E i Z = B S T i B S K i ¯ ,
( i = 1 , 2 ) has a solution.
Proof. 
Now, we are going to prove it in four parts.
Claim 1 
The operator equation system (1) has a solution if and only if the operator equation system
M 1 Y ^ D ^ + N 1 Z ^ F ^ = W 1
C ^ Y ^ M 2 + E ^ Z ^ N 2 = W 2
has a solution.
Indeed, Equation (1) is equivalent to
A ^ X ^ B ^ + C ^ Y ^ D ^ + E ^ Z ^ F ^ = G ^ .
According to Lemma 2, Equation (5) has a solution equivalent to
R ( G ^ C ^ Y ^ D ^ E ^ Z ^ F ^ ) R ( A ^ ) , R ( ( G ^ C ^ Y ^ D ^ E ^ Z ^ F ^ ) * ) R ( B ^ * ) .
Then, R ( G ^ C ^ Y ^ D ^ E ^ Z ^ F ^ ) R ( A ^ ) if and only if
A ^ A ^ ( G ^ C ^ Y ^ D ^ E ^ Z ^ F ^ ) = G ^ C ^ Y ^ D ^ E ^ Z ^ F ^ .
A simple calculation then gives
( I K K A ^ A ^ ) G ^ = ( I K K A ^ A ^ ) C ^ Y ^ D ^ + ( I K K A ^ A ^ ) E ^ Z ^ F ^ .
That is,
M 1 Y ^ D ^ + N 1 Z ^ F ^ = W 1 .
In a similar way, R ( ( G ^ C ^ Y ^ D ^ E ^ Z ^ F ^ ) * ) R ( B ^ * ) if and only if
B ^ * ( B ^ * ) ( ( G ^ C ^ Y ^ D ^ E ^ Z ^ F ^ ) * ) = ( G ^ C ^ Y ^ D ^ E ^ Z ^ F ^ ) * .
Then we have
G ^ ( I F F B ^ B ^ ) = C ^ Y ^ D ^ ( I F F B ^ B ^ ) + E ^ Z ^ F ^ ( I F F B ^ B ^ ) .
Namely,
C ^ Y ^ M 2 + E ^ Z ^ N 2 = W 2 .
Therefore, the existence of a solution for the system (1) is equivalent to the existence of a solution for the system (4).
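The reduction in Claim 1 is a block-operator bookkeeping device: placing X, Y, Z on the antidiagonal and the coefficients on suitable diagonals packs both equations of the system (1) into the single equation (5). A small numerical sketch of this identity (all sizes and data are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 3
A1, A2, C1, C2, E1, E2 = (rng.standard_normal((n, n)) for _ in range(6))
B1, B2, D1, D2, F1, F2 = (rng.standard_normal((n, n)) for _ in range(6))
X, Y, Z = (rng.standard_normal((n, n)) for _ in range(3))
Zero = np.zeros((n, n))

def diag2(P, Q):
    return np.block([[P, Zero], [Zero, Q]])

def anti2(P, Q):
    return np.block([[Zero, P], [Q, Zero]])

# Hat operators as in the definitions preceding Theorem 1.
Ah, Bh = diag2(A1, A2), diag2(B2, B1)
Ch, Dh = diag2(C1, C2), diag2(D2, D1)
Eh, Fh = diag2(E1, E2), diag2(F2, F1)
Xh, Yh, Zh = anti2(X, X), anti2(Y, Y), anti2(Z, Z)

lhs = Ah @ Xh @ Bh + Ch @ Yh @ Dh + Eh @ Zh @ Fh
G1 = A1 @ X @ B1 + C1 @ Y @ D1 + E1 @ Z @ F1
G2 = A2 @ X @ B2 + C2 @ Y @ D2 + E2 @ Z @ F2
print(np.allclose(lhs, anti2(G1, G2)))   # True: the block equation encodes both equations
```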
Claim 2 
The system (4) has a solution if and only if
P 11 Z Q 2 = W 11
P 2 Z P 22 = W 22
has a solution.
In fact, the system (4) is equivalent to
P 1 Y Q 1 + P 2 Z Q 2 = W ^ .
Using Lemma 2 again, Equation (7) has a solution equivalent to
R ( W ^ P 2 Z Q 2 ) R ( P 1 ) , R ( ( W ^ P 2 Z Q 2 ) * ) R ( Q 1 * ) .
Then, R ( W ^ P 2 Z Q 2 ) R ( P 1 ) if and only if
P 1 P 1 ( W ^ P 2 Z Q 2 ) = ( W ^ P 2 Z Q 2 ) .
By a straightforward computation, we have
( I ( K K ) ( K K ) P 1 P 1 ) W ^ = ( I ( K K ) ( K K ) P 1 P 1 ) P 2 Z Q 2 .
That is,
P 11 Z Q 2 = W 11 .
In a similar way, R ( ( W ^ P 2 Z Q 2 ) * ) R ( Q 1 * ) if and only if
Q 1 * ( Q 1 * ) ( ( W ^ P 2 Z Q 2 ) * ) = ( W ^ P 2 Z Q 2 ) * .
After a further calculation, we obtain
W ^ ( I ( F F ) ( F F ) Q 1 Q 1 ) = P 2 Z Q 2 ( I ( F F ) ( F F ) Q 1 Q 1 ) .
Namely,
P 2 Z P 22 = W 22 .
Therefore, the system (4) has a solution if and only if the system (6) has a solution.
Claim 3 
The system (6) has a solution equivalent to
R ( W 11 ) R ( P 11 ) , R ( W 22 ) R ( P 2 ) ,
R ( ( P 11 W 11 ) * ) R ( Q 2 * ) , R ( ( P 2 W 22 ) * ) R ( P 22 * ) ,
if and only if
R ( B S T i ) R ( E i ) , R ( C L i B T i ) R ( C S i ) , R ( A S T i ) R ( A S K i ) , R ( A T i D L i ) R ( A K i ) , R ( ( A S K i A S T i ) * ) R ( ( F i ) * ) , R ( ( C S i C L i B T i ) * ) R ( ( B K i ) * ) , R ( ( E i B S T i ) * ) R ( ( B S K i ) * ) , R ( ( A K i A T i D L i ) * ) R ( ( D S i ) * ) ,
where i = 1, 2. The equivalence of Claim 3 is obtained by Lemma 4.
Claim 4 
The system (6) has a solution equivalent to
R ( W 11 ) R ( P 11 ) , R ( ( W 22 ) * ) R ( ( P 22 ) * )
and the operator equation system
Z Q 2 = P 11 W 11 ,
P 2 Z = W 22 P 22 ¯
has a solution.
Actually, the system (6) has a solution if and only if the following conditions hold:
(a)
R ( A S T i ) R ( A S K i ) , R ( C L i B T i ) R ( C S i ) , R ( ( B S T i ) * ) R ( ( B S K i ) * ) ,
R ( ( A T i D L i ) * ) R ( ( D S i ) * ) , ( i = 1 , 2 ) .
(b)
The system
Z F i = A S K i A S T i , Z B K i = C S i C L i B T i , A K i Z = A T i D L i D S i ¯ , E i Z = B S T i B S K i ¯ ,
( i = 1 , 2 ) has a solution.
The equivalence of Claim 4 is obtained by Lemma 5.
Therefore, the equivalence of ( i ) and ( i i ) in Theorem 1 is obtained from Claim 1, Claim 2, and Claim 3; the equivalence of ( i ) and ( i i i ) in Theorem 1 is obtained from Claim 1, Claim 2, and Claim 4. □
In the following corollary, we give some equivalent conditions for the existence of the solutions to the system A i X B i + C i Y D i = G i ( i = 1 , 2 ) . For convenience, we still use the notation defined earlier in Theorem 1. In particular, M 1 , M 2 , N 1 , and N 2 are of the following form:
M 1 = ( I K A ^ A ^ ) C ^ , N 1 = ( I K A ^ A ^ ) E ^ , M 2 = D ^ ( I F B ^ B ^ ) , N 2 = F ^ ( I F B ^ B ^ ) .
Corollary 1.
Let A i B ( H , K ) , B i B ( F , G ) be closed-range operators and C i B ( H , K ) , D i B ( F , G ) , G i B ( F , K ) , S i = I K A i A i , T i = I F B i B i , ( i = 1 , 2 ) . Suppose that
D ( D i ) = D ( ( D i T i ) ) , R ( S i G i D i ) D ( ( S i C i ) )
and
R ( G i T i ( D i T i ) ) D ( C i ) , ( i = 1 , 2 ) .
If R ( D i T i ) R ( D i ) , R ( C i * S i ) R ( C i * ) ¯ and
( S i C i ) S i G i D i = C i G i T i ( D i T i ) , ( i = 1 , 2 ) ,
then the subsequent assertions are equivalent as follows:
(i) 
The operator equation system
A 1 X B 1 + C 1 Y D 1 = G 1
A 2 X B 2 + C 2 Y D 2 = G 2
has a solution.
(ii) 
R ( G i T i ) R ( C i ) , R ( ( C 1 * ( S 1 ) ) R ( D 2 * ) , R ( ( C 2 * ( S 2 ) ) R ( D 1 * ) , R ( S i G i ) R ( S i C i ) , R ( T i G i * ( C i ) * ) R ( T i D i * ) ,
where i = 1, 2.
(iii) 
R ( G i * S i ) R ( D i * ) , R ( G i T i ) R ( C i ) , and the system
S i * C i Y P R ( D i ) ¯ = S i * G i D i ¯ ,
P R ( C i * ) ¯ Y D i T i = C i G i T i
( i = 1 , 2 ) has a solution.
Proof. 
Now, we are going to prove it in three parts.
Claim 1 
The system (2) has a solution if and only if
M 1 Y ^ D ^ = N 1
C ^ Y ^ M 2 = N 2
has a solution.
Indeed, the system (2) is equivalent to
A ^ X ^ B ^ + C ^ Y ^ D ^ = G ^ .
In view of Lemma 2, Equation (10) has a solution equivalent to
R ( G ^ C ^ Y ^ D ^ ) R ( A ^ ) , R ( ( G ^ C ^ Y ^ D ^ ) * ) R ( B ^ * ) .
Then, R ( G ^ C ^ Y ^ D ^ ) R ( A ^ ) if and only if
A ^ A ^ ( G ^ C ^ Y ^ D ^ ) = G ^ C ^ Y ^ D ^ .
A simple calculation then gives
( I K A ^ A ^ ) G ^ = ( I K A ^ A ^ ) C ^ Y ^ D ^ .
That is
M 1 Y ^ D ^ = N 1 .
In a similar way, R ( ( G ^ C ^ Y ^ D ^ ) * ) R ( B ^ * ) if and only if
B ^ * ( B ^ * ) ( ( G ^ C ^ Y ^ D ^ ) * ) = ( G ^ C ^ Y ^ D ^ ) * .
Therefore, G ^ ( I F B ^ B ^ ) = C ^ Y ^ D ^ ( I F B ^ B ^ ) . Namely,
C ^ Y ^ M 2 = N 2 .
Therefore, the system (2) has a solution if and only if the system (9) has a solution.
Claim 2 
The system (9) has a solution equivalent to
R ( N 1 ) R ( M 1 ) , R ( N 2 ) R ( C ^ ) , R ( ( M 1 N 1 ) * ) R ( D ^ * ) , R ( ( C ^ N 2 ) * ) R ( M 2 * ) ,
if and only if
R ( S i E i ) R ( S i C i ) , R ( E i T i ) R ( C i ) , R ( ( C 1 * ( S 1 ) ) R ( D 2 * ) , R ( ( C 2 * ( S 2 ) ) R ( D 1 * ) , R ( T i E i * ( C i ) * ) R ( T i D i * ) ,
where i = 1, 2.
The equivalence of Claim 2 is obtained by Lemma 4.
Claim 3 
The system (9) has a solution equivalent to the system M 1 Y = N 1 D ^ ¯ ,   Y M 2 = C ^ N 2 , where Y = C ^ C ^ Y ^ D ^ D ^ ¯ and R ( N 1 * ) R ( D ^ * ) , R ( N 2 ) R ( C ^ ) .
In fact, the system (9) has a solution if and only if the following conditions hold:
(a)
R ( E i * S i ) R ( D i * ) , R ( E i T i ) R ( C i ) , ( i = 1 , 2 ) .
(b)
The system
S i * C i Y P R ( D i ) ¯ = S i * E i D i ¯ , P R ( C i * ) ¯ Y D i T i = C i E i T i
(i = 1, 2) has a solution.
The equivalence of Claim 3 is obtained by Lemma 5.
Therefore, the equivalence of ( i ) and ( i i ) in Corollary 1 is obtained from Claim 1 and Claim 2; the equivalence of ( i i ) and ( i i i ) in Corollary 1 is obtained from Claim 1 and Claim 3. □
In Theorem 1, take A 1 = B 2 = A , B 1 = A 2 = B , C i = D i = E i = F i = 0 , G i = G , ( i = 1 , 2 ) . Then the following results are obtained:
Corollary 2
([32], Proposition 2.1). Let A , B , G B ( H ) . If A and B have a closed range, A G B = B G A , then the subsequent assertions are equivalent, as follows:
(1) The system A X B = G = B X A is solvable;
(2) A A G A A = B B G B B = G ;
(3) R ( G ) R ( A ) , R ( G ) R ( B ) , R ( G * ) R ( A * ) and R ( G * ) R ( B * ) .
Remark 1.
In Theorem 1, the system (6) can be expressed as follows:
A S K i Z F i = A S T i , C S i Z B K i = C L i B T i , A K i Z D S i = A T i D L i , E i Z B S K i = B S T i ,
where i = 1,2. Then the system (11) has a solution if and only if there exist some U 1 , V 1 B ( F , K ) and U 2 , V 2 B ( F , K ) such that the system
A S K i C S i Z F i B K i = A S T i U 1 V 1 C L i B T i , A K i E i Z D S i B S K i = A T i D L i U 2 V 2 B S T i
( i = 1 , 2 ) has a solution. Let
A 1 i = A S K i C S i , B 1 i = F i B K i , C 1 i = A S T i U 1 V 1 C L i B T i , A 2 i = A K i E i , B 2 i = D S i B S K i , C 2 i = A T i D L i U 2 V 2 B S T i .
Hence, Equation (12) can be expressed as
A 1 i Z B 1 i = C 1 i , A 2 i Z B 2 i = C 2 i ,
where i = 1, 2.
Remark 2.
Under the assumptions of Theorem 1, it can be further concluded that a solution of
Z B 1 i = A 1 i C 1 i , A 2 i Z = C 2 i B 2 i ¯ ,
( i = 1 , 2 ) is also a solution of the system (12). Additionally, if Z B ( G , H ) satisfies the system (12), we obtain the bounded linear extension
Z 0 = P R A 1 i * ¯ Z P R B 2 i ¯ ¯ , i = 1 , 2
which are solutions of the system (13).
In fact, suppose that Z satisfies the system (12). Then, clearly,
R A S T i U 1 V 1 C L i B T i R A S K i C S i ,
R ( ( A T i D L i U 2 V 2 B S T i ) * ) R ( ( D S i B S K i ) * ) ,
i.e.,
R ( A S T i ) R ( A S K i ) , R ( C L i B T i ) R ( C S i ) , R ( ( B S T i ) * ) R ( ( B S K i ) * ) , R ( ( A T i D L i ) * ) R ( ( D S i ) * ) .
Additionally, from
R ( B K i ) R ( B S K i ) , R ( F i ) R ( D S i ) , R ( ( A K i ) * ) R ( ( A S K i ) * ) ¯ , R ( ( E i ) * ) R ( ( C S i ) * ) ¯ ,
we obtain that
F i B K i = D S i B S K i D S i B S K i F i B K i , A K i E i = A K i E i A S K i C S i A S K i C S i , ( i = 1 , 2 ) ,
i.e.,
B 1 i = B 2 i B 2 i B 1 i , A 2 i = A 2 i A 1 i A 1 i .
Let
Z ˜ : = A S K i C S i A S K i C S i Z D S i B S K i D S i B S K i = A 1 i A 1 i Z B 2 i B 2 i B ( D ( B 2 i ) , H ) .
Then
Z ˜ B 1 i = A 1 i A 1 i Z B 2 i B 2 i B 1 i = A 1 i A 1 i Z B 1 i = A 1 i C 1 i ,
A 2 i Z ˜ = A 2 i A 1 i A 1 i Z B 2 i B 2 i = A 2 i Z B 2 i B 2 i = C 2 i B 2 i .
We denote the continuous bounded linear extension of Z ˜ as Z 0 B ( G , H ) . By R ( B 1 i ) R ( B 2 i ) D ( B 2 i ) = D ( Z ˜ ) , we obtain Z 0 B 1 i = Z ˜ B 1 i = A 1 i C 1 i . From R ( C 2 i * ) R ( B 2 i * ) , combined with Douglas’ Lemma 3, there exists Z ˚ B ( G , H ) such that
C 2 i = Z ˚ B 2 i , ( i = 1 , 2 ) .
It follows that
C 2 i B 2 i = Z ˚ B 2 i B 2 i B ( D ( B 2 i ) , H ) .
For any z G , z n D ( B 2 i ) such that z n z ( n ) . We have
A 2 i Z 0 z = A 2 i ( lim n Z ˜ z n ) = lim n A 2 i Z ˜ z n = lim n C 2 i B 2 i z n = C 2 i B 2 i ¯ z .
Then Z 0 is a solution of the system (13).
On the other hand, in view of
R ( A S T i ) R ( A S K i ) , R ( C L i B T i ) R ( C S i ) , R ( ( B S T i ) * ) R ( ( B S K i ) * ) , R ( ( A T i D L i ) * ) R ( ( D S i ) * ) ,
we obtain
C 1 i = A 1 i A 1 i C 1 i ,
C 2 i = C 2 i B 2 i B 2 i , ( i = 1 , 2 ) .
Now suppose that Z ˜ B ( G , H ) satisfies the system (13). Due to
R ( B 2 i ) D ( B 2 i ) = D ( C 2 i B 2 i ) ,
we have that
A 1 i Z ˜ B 1 i = A 1 i A 1 i C 1 i = C 1 i ,
A 2 i Z ˜ B 2 i = C 2 i B 2 i ¯ B 2 i = C 2 i B 2 i B 2 i = C 2 i ,
where i = 1, 2. Therefore, the system (12) has a solution.

3.2. The Existence of Self-Adjoint Solutions

In this section, we present several equivalent conditions for the existence of self-adjoint solutions to the system A i X B i + C i Y D i + E i Z F i = G i ( i = 1 , 2 ) in the sense of the star order.
For convenience, we define the notation as follows (i = 1, 2):
A S i = ( I K A i A i ) C i , A T i = ( I K A i A i ) G i , A K i = ( I K A i A i ) E i , B S i = D i ( I F B i B i ) , B T i = G i ( I F B i B i ) , B K i = F i ( I F B i B i ) , C L i = I K C i C i , C S i = ( I K C i C i ) E i , D L i = I F D i D i , D S i = F i ( I F D i D i ) , B S K i = B K i ( I F B S i B S i ) , A S K i = ( I K A S i A S i ) A K i , B S T i = B T i ( I F B S i B S i ) , A S T i = ( I K A S i A S i ) A T i ,
A 1 i = A S K i C S i , B 1 i = F i B K i , C 1 i = A S T i U 1 V 1 C L i B T i , A 2 i = A K i E i , B 2 i = D S i B S K i , C 2 i = A T i D L i U 2 V 2 B S T i ,
for some U 1 , V 1 , U 2 , V 2 B ( F , K ) .
A ^ = A 1 0 0 A 2 : H H K K , B ^ = B 2 0 0 B 1 : F F G G , C ^ = C 1 0 0 C 2 : H H K K , D ^ = D 2 0 0 D 1 : F F G G , E ^ = E 1 0 0 E 2 : H H K K , F ^ = F 2 0 0 F 1 : F F G G , G ^ = 0 G 1 G 2 0 : F F K K , N 1 = ( I K K A ^ A ^ ) E ^ , N 2 = F ^ ( I F F B ^ B ^ ) , P 2 = N 1 0 0 E ^ : ( H H ) ( H H ) ( K K ) ( K K ) , Q 2 = N 2 0 0 F ^ : ( F F ) ( F F ) ( G G ) ( G G ) , W 1 = ( I K K A ^ A ^ ) G ^ , W 2 = G ^ ( I F F B ^ B ^ ) , W ^ = 0 W 1 W 2 0 : ( F F ) ( F F ) ( K K ) ( K K ) , P 11 = ( I ( K K ) ( K K ) P 2 P 2 ) P 1 , P 22 = Q 2 ( I ( F F ) ( F F ) Q 2 Q 2 ) , W 11 = ( I ( K K ) ( K K ) P 2 P 2 ) W ^ , W 22 = W ^ ( I ( F F ) ( F F ) Q 2 Q 2 ) , M 1 = ( I K K A ^ A ^ ) C ^ , M 2 = D ^ ( I F F B ^ B ^ ) , P 1 = M 1 0 0 C ^ : ( H H ) ( H H ) ( K K ) ( K K ) , Q 1 = M 2 0 0 D ^ : ( F F ) ( F F ) ( G G ) ( G G ) .
Theorem 2.
Let A i , C i , E i B ( H , K ) , B i , D i , F i B ( F , G ) , ( i = 1 , 2 ) be closed-range operators, and G i B ( F , K ) , ( i = 1 , 2 ) . Suppose that
C 1 i * A 2 i * A 1 i , C 2 i * B 1 i * B 2 i , W 11 * P 2 * P 11 , W 22 * Q 1 * P 22 ,
and
A 1 i A 1 i | D ( B 2 i ) = B 2 i B 2 i , ( i = 1 , 2 ) , P 11 P 11 | D ( P 22 ) = P 22 P 22 .
If X = A i ( G i C i Y D i E i Z F i ) B i is a self-adjoint operator, and the system of operator equations A i X B i + C i Y D i + E i Z F i = G i , ( i = 1 , 2 ) has a solution, then the subsequent assertions are equivalent, as follows:
(i) 
The system
A 1 X B 1 + C 1 Y D 1 + E 1 Z F 1 = G 1
A 2 X B 2 + C 2 Y D 2 + E 2 Z F 2 = G 2
has a self-adjoint solution.
(ii) 
The system
Z B 1 i = A 1 i C 1 i , A 2 i Z = C 2 i B 2 i ¯ ,
(i = 1,2) has a self-adjoint solution.
Proof. 
( i ) ( i i ) Suppose that ( i ) holds. Analogous to the demonstration of Theorem 1, the system (1) has a solution equivalent to the system (6) having a solution. Additionally, according to Remark 1, the system (6) is equivalent to the system (12) having a solution.
Suppose that, in the system (12), there exists a self-adjoint solution Z. Then, from Remark 2,
Z 0 = P R ( ( A S K i C S i ) * ) ¯ Z P R ( D S i B S K i ) ¯ ¯ = P R ( ( A 1 i ) * ) ¯ Z P R ( B 2 i ) ¯ ¯ B ( H ) , i = 1 , 2 .
is a solution of the system (13).
In addition, for an arbitrary z H and z n i D ( B 2 i ) , such that z n i z , ( i = 1 , 2 ) . Since
A 1 i A 1 i = B 2 i B 2 i , ( i = 1 , 2 ) ,
on D ( B 2 i ) , we obtain
Z 0 z , z = lim n Z 0 z n i , z n i = lim n A 1 i A 1 i Z B 2 i B 2 i z n i , z n i = lim n A 1 i A 1 i Z A 1 i A 1 i z n i , z n i = lim n z n i , A 1 i A 1 i Z * A 1 i A 1 i z n i = lim n z n i , A 1 i A 1 i Z B 2 i B 2 i z n i = lim n z n i , Z 0 z n i = z , Z 0 z , ( i = 1 , 2 ) .
Therefore, Z 0 is a self-adjoint solution of the system (13).
( i i ) ( i ) If Z is a self-adjoint solution of the system (13), from condition
C 1 i * A 2 i * A 1 i , C 2 i * B 1 i * B 2 i ,
we can deduce
R ( ( C 2 i ) * ) R ( ( B 2 i ) * ) , R ( C 1 i ) R ( A 1 i ) .
Further, we obtain
C 1 i = A 1 i A 1 i C 1 i , C 2 i = C 2 i B 2 i B 2 i .
Therefore,
A 1 i Z B 1 i = A 1 i A 1 i C 1 i = C 1 i , A 2 i Z B 2 i = C 2 i B 2 i ¯ B 2 i = C 2 i B 2 i B 2 i = C 2 i , ( i = 1 , 2 ) .
It follows that Z is a self-adjoint solution of the system (12).
By the equivalence established above, a self-adjoint solution to the system (12) yields a self-adjoint solution to the system (6). Consequently, the system (1) possesses a self-adjoint solution Z.
Next, we show that Y is also a self-adjoint solution of the system (1).
In Claim 2 of Theorem 1, we also know that the system (4) has a solution equivalent to the system (1) having a solution. As a matter of fact, the system (4) has a solution if and only if the system
P 1 Y Q 1 + P 2 Z Q 2 = W ^
has a solution.
From Lemma 2, Equation (7) is solvable equivalent to the following two inclusion relations holding simultaneously:
R ( W ^ P 1 Y Q 1 ) R ( P 2 ) , R ( ( W ^ P 1 Y Q 1 ) * ) R ( Q 2 * ) .
Similar to the previous proof, we can show that the system (4) has a solution if and only if the system
P 11 Y Q 1 = W 11 , P 2 Y P 22 = W 22 ,
has a solution. Combined with
W 11 * P 2 * P 11 , W 22 * Q 1 * P 22 P 11 P 11 = P 22 P 22 ,
we obtain that a self-adjoint solution to the system (14) is equivalent to a self-adjoint solution to (1).
Additionally, since X = A i ( G i C i Y D i E i Z F i ) B i is a self-adjoint operator, the system (1) has a triple of self-adjoint solutions ( X , Y , Z ) . □
The following results can be obtained by taking E i = F i = 0 , ( i = 1 , 2 ) from Theorem 2.
Corollary 3.
Let A i , B i B ( H ) be closed-range operators and C i , D i , G i B ( H ) , S i = I H A i A i , T i = I H B i B i , ( i = 1 , 2 ) such that
G i T i * S i * C i * C i , S i * G i * D i T i * D i
and
C i C i | D ( D i ) = D i D i , ( i = 1 , 2 ) .
If A i ( G i C i Y D i ) B i is a self-adjoint operator, and the system A i X B i + C i Y D i = G i ( i = 1 , 2 ) has a solution, then the subsequent assertions are equivalent, as follows:
(i) 
The system
A 1 X B 1 + C 1 Y D 1 = G 1
A 2 X B 2 + C 2 Y D 2 = G 2
has a self-adjoint solution.
(ii) 
The system
S i C i Y P R ( D i ) ¯ = S i G i D i ¯ ,
P R ( C i * ) ¯ Y D i T i = C i G i T i
( i = 1 , 2 ) has a self-adjoint solution.
Proof. 
( i ) ( i i ) Suppose that ( i ) holds. Similar to the proof of Corollary 1, the system (2) has a solution equivalent to the system (9) having a solution. Additionally, the system (9) is equivalent to
S i * C i Y D i = S i * G i P R ( D i * ) ¯ , C i Y D i T i = P R ( C i ) ¯ G i T i , ( i = 1 , 2 ) .
Suppose that, in the system (15), there exists a self-adjoint solution Y. Then, from the proof of Corollary 1,
Y i = C i C i Y D i D i ¯ B ( H ) , ( i = 1 , 2 )
is a solution of the operator system
S i * C i Y P R ( D i ) ¯ = S i * G i D i ¯ ,
P R ( C i * ) ¯ Y D i T i = C i G i T i .
In addition, for an arbitrary y H and y n i D ( D i ) such that y n i y , ( i = 1 , 2 ) . Since C i C i | D ( D i ) = D i D i , ( i = 1 , 2 ) , we obtain
Y i y , y = lim n Y i y n i , y n i = lim n C i C i Y D i D i y n i , y n i = lim n C i C i Y C i C i y n i , y n i = lim n y n i , C i C i Y * C i C i y n i = lim n y n i , C i C i Y C i C i y n i = lim n y n i , C i C i Y D i D i y n i = y , Y i y , ( i = 1 , 2 ) .
Therefore, Y i = C i C i Y D i D i ¯ B ( H ) , ( i = 1 , 2 ) is a self-adjoint solution of
S i * C i Y P R ( D i ) ¯ = S i * G i D i ¯ , P R ( C i * ) ¯ Y D i T i = C i G i T i .
( i i ) ( i ) Assume that Y is a self-adjoint solution of S i * C i Y P R ( D i ) ¯ = S i * G i D i ¯ , P R ( C i * ) ¯ Y D i T i = C i G i T i , ( i = 1 , 2 ) . From the condition G i T i * S i * C i * C i ,   S i * G i * D i T i * D i , we can deduce R ( G i * S i ) R ( D i * ) , R ( G i T i ) R ( C i ) ; further, we obtain G i T i = C i C i G i T i , S i * G i = S i * E i D i D i . Therefore,
S i * C i Y D i = S i * G i D i ¯ D i = S i * G i D i D i = S i * G i P R ( D i * ) ¯ , C i Y D i T i = C i C i G i T i = P R ( C i ) ¯ G i T i , ( i = 1 , 2 ) .
We know that a self-adjoint solution to the system (15) yields a self-adjoint solution to the system (9). Therefore, since the solvability of the system (9) is equivalent to that of the system (2), the system (2) has a self-adjoint solution Y. Additionally, since X = A i ( G i C i Y D i ) B i is a self-adjoint operator, the system (2) has a pair of self-adjoint solutions ( X , Y ) . □
The following results can be obtained by taking C i = D i = E i = F i = 0 , ( i = 1 , 2 ) from Theorem 2:
Corollary 4
([28], Theorem 2.3). Let A i , B i , G i B ( H ) , i = 1 , 2 ¯ such that G 1 * A 2 * A 1 , G 2 * B 1 * B 2 and A 1 A 1 | D ( B 2 ) = B 2 B 2 . If the system A i X B i = G i , i = 1 , 2 ¯ is solvable, then the subsequent assertions are equivalent, as follows:
(i) 
The operator equation system
A 1 X B 1 = G 1
A 2 X B 2 = G 2
has a self-adjoint solution;
(ii) 
The operator equation system
X B 1 = A 1 G 1 ,
A 2 X = G 2 B 2 ¯
has a self-adjoint solution.
The following results can be obtained by taking A 1 = B 2 = A , B 1 = A 2 = B , C i = D i = E i = F i = 0 , G i = G , ( i = 1 , 2 ) from Theorem 2.
Corollary 5
([32], Corollary 2.7). Let A , B , G B ( H ) . If A has a closed range with A A = A A and G * B * A , then the subsequent assertions are equivalent as follows:
(i) There exists a self-adjoint operator X ˜ B ( H ) such that A X ˜ B = G = B X ˜ A ;
(ii) There exists a self-adjoint operator Y ˜ B ( H ) such that B Y ˜ = G A and Y ˜ B = A G .

3.3. The Existence of Positive Solutions

In the subsequent theorem, we present several equivalent conditions for the existence of positive solutions to the system A i X B i + C i Y D i + E i Z F i = G i ( i = 1 , 2 ) .
For convenience, we can refer to the symbols defined in Theorem 2.
Theorem 3.
Let A i , C i , E i B ( H , K ) , B i , D i , F i B ( F , G ) , ( i = 1 , 2 ) be closed-range operators, and G i B ( F , K ) , ( i = 1 , 2 ) . Suppose that
C 1 i * A 2 i * A 1 i , C 2 i * B 1 i * B 2 i , W 11 * P 2 * P 11 , W 22 * Q 1 * P 22 ,
and
A 1 i A 1 i | D ( B 2 i ) = B 2 i B 2 i , ( i = 1 , 2 ) , P 11 P 11 | D ( P 22 ) = P 22 P 22 .
If X = A i ( G i C i Y D i E i Z F i ) B i is a positive operator, and the system A i X B i + C i Y D i + E i Z F i = G i , ( i = 1 , 2 ) has a solution, then the subsequent assertions are equivalent, as follows:
(i) 
The operator equation system
A 1 X B 1 + C 1 Y D 1 + E 1 Z F 1 = G 1
A 2 X B 2 + C 2 Y D 2 + E 2 Z F 2 = G 2
has a positive solution.
(ii) 
The operator equation system
Z B 1 i = A 1 i C 1 i , A 2 i Z = C 2 i B 2 i ¯ ,
( i = 1 , 2 ) has a positive solution.
Proof. 
The proof is similar to that of Theorem 2 and is therefore omitted. □
The following results can be obtained by taking E i = F i = 0 , ( i = 1 , 2 ) from Theorem 3.
Corollary 6.
Let A i , B i B ( H ) be closed-range operators and C i , D i , G i B ( H ) , S i = I H A i A i , T i = I H B i B i , ( i = 1 , 2 ) such that
G i T i * S i * C i * C i , S i * G i * D i T i * D i
and
C i C i | D ( D i ) = D i D i , ( i = 1 , 2 ) .
If A i ( G i C i Y D i ) B i | D ( B i ) is a positive operator, and the system A i X B i + C i Y D i = G i ( i = 1 , 2 ) has a solution, then the subsequent assertions hold equivalently, as follows:
(i) 
The operator equation system
A 1 X B 1 + C 1 Y D 1 = G 1
A 2 X B 2 + C 2 Y D 2 = G 2
has a positive solution.
(ii) 
The operator equation system
S i C i Y P R ( D i ) ¯ = S i G i D i ¯ ,
P R ( C i * ) ¯ Y D i T i = C i G i T i
( i = 1 , 2 ) has a positive solution.
Proof. 
The proof is similar to that of Corollary 3 and is omitted. □
The following results can be obtained by taking C i = D i = E i = F i = 0 , ( i = 1 , 2 ) from Theorem 3.
Corollary 7
([28], Theorem 2.4). Let A i , B i , G i B ( H ) , i = 1 , 2 ¯ such that G 1 * A 2 * A 1 , G 2 * B 1 * B 2 and A 1 A 1 | D ( B 2 ) = B 2 B 2 . If the system A i X B i = G i , i = 1 , 2 ¯ is solvable, then the subsequent assertions hold equivalently, as follows:
(i) 
The operator equation system
A 1 X B 1 = G 1
A 2 X B 2 = G 2
has a positive solution;
(ii) 
The operator equation system
X B 1 = A 1 G 1 ,
A 2 X = G 2 B 2 ¯
has a positive solution.
The following results can be obtained by taking A 1 = B 2 = A , B 1 = A 2 = B , C i = D i = E i = F i = 0 , G i = G , ( i = 1 , 2 ) from Theorem 3.
Corollary 8
([32], Corollary 2.8). Let A , B , G B ( H ) . If A has a closed range with A A = A A and G * B * A , then the subsequent assertions hold equivalently, as follows:
(1) There exists a positive operator X ˜ B ( H ) such that A X ˜ B = G = B X ˜ A ;
(2) There exists a positive operator Y ˜ B ( H ) such that B Y ˜ = G A and Y ˜ B = A G .

4. Representation of Solutions

For convenience, we define the notation as follows ( i = 1 , 2 ) :
A = A 1 A 2 : H K K , B = B 1 B 2 : F F G , C = C 1 C 2 : H K K , D = D 1 D 2 : F F G , E = E 1 E 2 : H K K , F = F 1 F 2 : F F G , G = G 1 U 3 V 3 G 2 , f o r s o m e U 3 , V 3 B ( F , K ) , M 1 = ( I K K A A ) C : H K K , M 1 = D ( I F F B B ) : F F G , M 2 = ( I K K A A ) E : H K K , M 2 = F ( I F F B B ) : F F G , N 1 = ( I K K A A ) G : F F K K , N 2 = G ( I F F B B ) : F F K K , M E = M 2 E : H ( K K ) ( K K ) , M C = M 1 C : H ( K K ) ( K K ) , M F = F M 2 : ( F F ) ( F F ) G , M D = D M 1 : ( F F ) ( F F ) G , N N = N 1 U 4 V 4 N 2 , f o r s o m e U 4 , V 4 B ( F F , K K ) , M M E = ( I K K K K M E M E ) M C : H ( K K ) ( K K ) , M M F = M D ( I F F F F M F M F ) : ( F F ) ( F F ) G , N M E = ( I K K K K M E M E ) N N : ( F F ) ( F F ) ( K K ) ( K K ) , N M F = N N ( I F F F F M F M F ) : ( F F ) ( F F ) ( K K ) ( K K ) , M E = ( I K K K K M C M C ) M E : H ( K K ) ( K K ) , N N = ( I K K K K M C M C ) N N : ( F F ) ( F F ) ( K K ) ( K K ) , M F = M F ( I F F F F M D M D ) : ( F F ) ( F F ) G , N N = N N ( I F F F F M D M D ) : ( F F ) ( F F ) ( K K ) ( K K ) .

4.1. The Representation of General Solutions

In the subsequent theorem, we derive the representation of the solutions of the equation system A i X B i + C i Y D i + E i Z F i = G i ( i = 1 , 2 ) .
Theorem 4.
Let A i , C i , E i B ( H , K ) , B i , D i , F i B ( F , G ) , G i B ( F , K ) , ( i = 1 , 2 ) be closed-range operators. If
N M E * M M F * M D , N M F * M M E * M C ,
then the general solutions of the operator system A i X B i + C i Y D i + E i Z F i = G i ( i = 1 , 2 ) are
Y = M M E N M E M D + L M M E M M E L M D M D , Z = M E N N M F M E M C M M E N M E M D M D M F M E M C L M D M F + M E M M E L M D M F , X = A G B A C M M E N M E M D D B + A C M M E M M E L M D M D D B A C L D B + A E M E M C M M E N M E M D M D M F F B A E M E N N M F F B + A E M E M C L M D M F F B A E M M E L M D M F F B ,
where
L = ( M C M M E ) ( N M F M C M M E N M E M D M M F ) M M F + S ( M C M M E ) ( M C M M E ) S M M F M M F .
In the above equation, S B ( G , K ) is an arbitrary operator.
Proof. 
The system (1) has a solution if and only if there exists U 3 , V 3 B ( F , K ) such that the equation
A 1 A 2 X B 1 B 2 + C 1 C 2 Y D 1 D 2 + E 1 E 2 Z F 1 F 2 = G 1 U 3 V 3 G 2
has a solution; i.e.,
A X B = G C Y D E Z F
has a solution. By Lemma 2, Equation (18) has a solution if and only if
R ( G C Y D E Z F ) R ( A ) , R ( ( G C Y D E Z F ) * ) R ( B * ) .
Then, R ( G C Y D E Z F ) R ( A ) if and only if
( I K K A A ) G = ( I K K A A ) C Y D + ( I K K A A ) E Z F .
That is,
M 1 Y D + M 2 Z F = N 1 .
In a similar way, R ( ( G C Y D E Z F ) * ) R ( B * ) if and only if
G ( I F F B B ) = C Y D ( I F F B B ) + E Z F ( I F F B B ) .
Namely,
C Y M 1 + E Z M 2 = N 2 .
Equation (1) has a solution if and only if the systems (19) and (20) have a solution. The systems (19) and (20) have a solution if and only if there exists U 4 , V 4 B ( F F , K K ) such that the equation
M 1 C Y D M 1 + M 2 E Z F M 2 = N 1 U 4 V 4 N 2
has a solution; i.e.,
M C Y M D + M E Z M F = N N
has a solution. According to Lemma 2, Equation (21) has a solution if and only if
R ( N N M C Y M D ) R ( M E ) , R ( ( N N M C Y M D ) * ) R ( M F * ) .
Then, R ( N N M C Y M D ) R ( M E ) if and only if
( I K K K K M E M E ) N N = ( I K K K K M E M E ) M C Y M D .
That is,
M M E Y M D = N M E .
In a similar way, R ( ( N N M C Y M D ) * ) R ( M F * ) if and only if
N N ( I F F F F M F M F ) = M C Y M D ( I F F F F M F M F ) .
Namely,
M C Y M M F = N M F .
Equation (21) has a solution if and only if the systems (22) and (23) have a solution.
In view of Lemma 2, the general solutions of Equation (22) are
Y = M M E N M E M D + L M M E M M E L M D M D ,
where L B ( G , H ) is an arbitrary operator. If Y satisfies Equation (23), then
M C ( M M E N M E M D + L M M E M M E L M D M D ) M M F = N M F ,
and then
M C M M E N M E M D M M F + M C L M M F M C M M E M M E L M D M D M M F = N M F .
By Lemma 1 and from
M M E * M C M M E = M C P R ( ( M M E ) * ) ¯ = M C M M E M M E ,
M M F * M D R ( M M F ) R ( M D ) M M F = M D M D M M F ,
it implies that
M C L M M F M M E L M M F = N M F M C M M E N M E M D M M F .
It is evident that L is a solution of the equation
( M C M M E ) L M M F = N M F M C M M E N M E M D M M F .
It follows from Lemma 2 that
L = ( M C M M E ) ( N M F M C M M E N M E M D M M F ) M M F + S ( M C M M E ) ( M C M M E ) S M M F M M F ,
where S B ( G , K ) is an arbitrary operator. Substituting the representation for Y into (21) and by M M E = M C M M E M M E yields (16) for Z. Using the representation of Y and Z, we obtain (16) for X. □
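Independently of the closed-form representation above, the existence of solutions can be cross-checked in finite dimensions by vectorization, using $\operatorname{vec}(AXB) = (B^{T} \otimes A)\operatorname{vec}(X)$ to turn the pair of equations into one linear system. A sketch with illustrative sizes and a right-hand side built to be consistent:

```python
import numpy as np

rng = np.random.default_rng(5)

def vec(M):
    # Column-major vectorization, matching vec(AXB) = (B^T ⊗ A) vec(X).
    return M.reshape(-1, order="F")

h, k, f, g = 3, 3, 3, 3   # illustrative dimensions of H, K, F, G
A = [rng.standard_normal((k, h)) for _ in range(2)]
B = [rng.standard_normal((g, f)) for _ in range(2)]
C = [rng.standard_normal((k, h)) for _ in range(2)]
D = [rng.standard_normal((g, f)) for _ in range(2)]
E = [rng.standard_normal((k, h)) for _ in range(2)]
F = [rng.standard_normal((g, f)) for _ in range(2)]

# Build a consistent right-hand side from known X0, Y0, Z0 so the system is solvable.
X0, Y0, Z0 = (rng.standard_normal((h, g)) for _ in range(3))
G_rhs = [A[i] @ X0 @ B[i] + C[i] @ Y0 @ D[i] + E[i] @ Z0 @ F[i] for i in range(2)]

# Stack the two equations: [B_i^T⊗A_i, D_i^T⊗C_i, F_i^T⊗E_i][vec X; vec Y; vec Z] = vec G_i.
rows = [np.hstack([np.kron(B[i].T, A[i]), np.kron(D[i].T, C[i]), np.kron(F[i].T, E[i])])
        for i in range(2)]
M = np.vstack(rows)
b = np.concatenate([vec(G_rhs[i]) for i in range(2)])

sol, *_ = np.linalg.lstsq(M, b, rcond=None)
X, Y, Z = (sol[j * h * g:(j + 1) * h * g].reshape((h, g), order="F") for j in range(3))

for i in range(2):
    resid = A[i] @ X @ B[i] + C[i] @ Y @ D[i] + E[i] @ Z @ F[i] - G_rhs[i]
    print(np.linalg.norm(resid))  # ~0 confirms a solution exists and was recovered
```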
For convenience, we define the notation as follows ( i = 1 , 2 ) :
A = A 1 A 2 : H K K , B = B 1 B 2 : F F G , C = C 1 C 2 : H K K , D = D 1 D 2 : F F G , G = G 1 U V G 2 , for some U , V B ( F , K ) . M 1 = ( I K K A A ) C : H K K , N 1 = ( I K K A A ) G : F F K K , M 2 = D ( I F F B B ) : F F G , N 2 = G ( I F F B B ) : F F K K .
The following results can be obtained by taking E i = F i = 0 , ( i = 1 , 2 ) from Theorem 4.
Corollary 9.
Let A i , C i B ( H , K ) , B i , D i B ( F , G ) , G i B ( F , K ) , ( i = 1 , 2 ) be closed-range operators. If
N 1 * M 2 * D , N 2 * M 1 * C ,
then the general solutions of A i X B i + C i Y D i = G i ( i = 1 , 2 ) are
Y = M 1 N 1 D + L M 1 M 1 L D D , X = A G B A C M 1 N 1 D D B A C L D B + A M 1 L D B ,
where
L = ( C M 1 ) ( N 2 C M 1 N 1 D M 2 ) M 2 + S ( C M 1 ) ( C M 1 ) S M 2 M 2 .
In the above equation, S B ( G , K ) is an arbitrary operator.
Proof. 
The system (2) has a solution if and only if U , V B ( F , K ) exists such that the equation
A 1 A 2 X B 1 B 2 + C 1 C 2 Y D 1 D 2 = G 1 U V G 2
has a solution; i.e., the equation
A X B = G C Y D
has a solution. According to Lemma 2, Equation (25) has a solution if and only if
R ( G C Y D ) R ( A ) , R ( ( G C Y D ) * ) R ( B * ) .
Then, R ( G C Y D ) R ( A ) if and only if ( I A A ) G = ( I A A ) C Y D . That is,
M 1 Y D = N 1 .
In a similar way, R ( ( G C Y D ) * ) R ( B * ) if and only if G ( I B B ) = C Y D ( I B B ) . Namely,
C Y M 2 = N 2 .
Equation (25) has a solution if and only if the systems (26) and (27) have a solution; the solution of the systems (26) and (27) is the solution of system (2). According to Lemma 2, the general solutions of the equation M 1 Y D = N 1 are
Y = M 1 N 1 D + L M 1 M 1 L D D ,
where L B ( G , H ) is an arbitrary operator. If Y satisfies the equation C Y M 2 = N 2 , then
C ( M 1 N 1 D + L M 1 M 1 L D D ) M 2 = N 2 ,
and hence,
C M 1 N 1 D M 2 + C L M 2 C M 1 M 1 L D D M 2 = N 2 .
By Lemma 1 and from
M 1 * C M 1 = C P R ( M 1 * ) ¯ = C M 1 M 1 ,
M 2 * D R ( M 2 ) R ( D ) M 2 = D D M 2 ,
it implies that
C L M 2 M 1 L M 2 = N 2 C M 1 N 1 D M 2 .
It is evident that L is a solution of the equation
( C M 1 ) L M 2 = N 2 C M 1 N 1 D M 2 .
Using Lemma 2 again, we obtain
L = ( C M 1 ) ( N 2 C M 1 N 1 D M 2 ) M 2 + S ( C M 1 ) ( C M 1 ) S M 2 M 2 ,
where S B ( G , K ) is an arbitrary operator. Substituting the representation for Y into (25) yields (24) for X. □
In [28], the author studies the existence of solutions to the system A 1 X B 1 = G 1 , A 2 X B 2 = G 2 , but does not give a representation of the solutions. Next, we give the representation of the solutions of A 1 X B 1 = G 1 , A 2 X B 2 = G 2 .
The general solutions of A 1 X B 1 = G 1 , A 2 X B 2 = G 2 can be obtained by taking C i = D i = E i = F i = 0 , ( i = 1 , 2 ) from Theorem 4.
Corollary 10.
Let A i B ( H , K ) , B i B ( F , G ) , G i B ( F , K ) , ( i = 1 , 2 ) be closed-range operators. If
G 1 * A 2 * A 1 , G 2 * B 1 * B 2 ,
then the general solutions of the system A i X B i = G i ( i = 1 , 2 ) are
X = A 2 G 2 B 2 + ( A 1 A 2 ) ( G 1 A 1 A 2 G 2 B 2 B 1 ) B 1 + S A 2 A 2 ( A 1 A 2 ) ( G 1 A 1 A 2 G 2 B 2 B 1 ) B 1 B 2 B 2 ( A 1 A 2 ) ( A 1 A 2 ) S B 1 B 1 A 2 A 2 S B 2 B 2 + A 2 A 2 ( A 1 A 2 ) ( A 1 A 2 ) S B 1 B 1 B 2 B 2 ,
where S B ( G , K ) is an arbitrary operator.
The following results can be obtained by taking A 2 = B 1 = I from Corollary 10.
Corollary 11.
Let A 1 B ( H , K ) , B 2 B ( F , G ) , G i B ( F , K ) , ( i = 1 , 2 ) be closed-range operators. If
G 1 * A 1 , G 2 * B 2 ,
then the general solutions of the system A 1 X = G 1 , X B 2 = G 2 are
X = G 2 B 2 + ( A 1 I ) ( G 1 A 1 G 2 B 2 ) + S ( A 1 I ) ( A 1 I ) S ( A 1 I ) ( G 1 A 1 G 2 B 2 ) B 2 B 2 S B 2 B 2 + ( A 1 I ) ( A 1 I ) S B 2 B 2 ,
where S B ( G , K ) is an arbitrary operator.
The solution obtained in Corollary 11 coincides with the result in reference [44].
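As a numerical check of Corollary 11, the sketch below uses the classical particular solution $X = A_1^{\dagger}G_1 + G_2B_2^{\dagger} - A_1^{\dagger}A_1G_2B_2^{\dagger}$ of a consistent pair $A_1X = G_1$, $XB_2 = G_2$; this expression is equivalent to, though not literally, the representation above (the free term in S is omitted), and the data are illustrative:

```python
import numpy as np

rng = np.random.default_rng(6)

# Construct a consistent pair A1 X = G1, X B2 = G2 from a known X0.
A1 = rng.standard_normal((4, 2)) @ rng.standard_normal((2, 3))   # rank deficient
B2 = rng.standard_normal((5, 2)) @ rng.standard_normal((2, 6))
X0 = rng.standard_normal((3, 5))
G1, G2 = A1 @ X0, X0 @ B2

A1p, B2p = np.linalg.pinv(A1), np.linalg.pinv(B2)

# Classical common-solution formula (a particular solution of the pair of equations).
X = A1p @ G1 + G2 @ B2p - A1p @ A1 @ G2 @ B2p
print(np.allclose(A1 @ X, G1), np.allclose(X @ B2, G2))
```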
The following results can be obtained by taking A 1 = B 2 , B 1 = A 2 , G 1 = G 2 = G from Corollary 10.
Corollary 12
([32], Theorem 3.1). Let A , B , G B ( H ) . If A has a closed range and
G * B * A ,
then the general solutions of A X B = G = B X A are
X = B ( G B A G B A ) ( A B ) B B S ( A B ) ( A B ) A A B ( G B A G B A ) ( A B ) B B + A G B + S + A A B B S ( A B ) ( A B ) B B A A S B B ,
where S H is arbitrary.

4.2. The Representation of Self-Adjoint Solutions

In the following theorem, combined with the notation defined in Theorem 4, we give the operator matrix representation of self-adjoint solutions for the system A i X B i + C i Y D i + E i Z F i = G i ( i = 1 , 2 ) .
Theorem 5.
Let A i , C i , E i B ( H , K ) , B i , D i , F i B ( F , G ) , G i B ( F , K ) , ( i = 1 , 2 ) be closed-range operators. If A ( G C Y 0 D E Z 0 F ) B | D ( B ) is a self-adjoint operator, N M E * M M F * M D , N M F * M M E * M C , N N * M F * M F , N N * M E * M E , M C = M E , and
M C M C | D ( M C ) = M C M C = M D M D | D ( M D ) = M D M D = A A | D ( A ) = A A = B B | D ( B ) = B B = C C | D ( C ) = C C = D D | D ( D ) = D D = E E | D ( E ) = E E = F F | D ( F ) = F F = G G | D ( G ) = G G = ( M E ) M E | D ( ( M E ) ) = M E ( M E ) = ( M F ) M F | D ( ( M F ) ) = M F ( M F ) .
Then A i X B i + C i Y D i + E i Z F i = G i ( i = 1 , 2 ) has self-adjoint solutions. Under space decomposition R ( ( M C ) * ) ¯ N ( M C ) R ( ( M C ) * ) ¯ N ( M C ) , the self-adjoint solutions have a matrix representation
Y = Y 11 Y 12 Y 12 * Y 22
Z = Z 11 Z 12 Z 12 * Z 22
X = X 11 X 12 X 12 * X 22
where
Y 11 = P R ( M C * ) ¯ Y 0 P R ( M D ) ¯ , Z 11 = P R ( ( M E ) * ) ¯ Z 0 P R ( M F ) ¯
are self-adjoint operators, satisfying that
Y 0 = M C M C Y M D M D ¯ , Z 0 = ( M E ) M E Z 0 M F M F ¯
are self-adjoint solutions of the equation system
M M E Y = N M E M D ¯ , Y M M F = M C N M F ,
and
M E Z = N N M F ¯ , Z M F = M E N N ,
respectively; Y 12 , Z 12 , and X 12 are arbitrary operators; and Y 22 , Z 22 , and X 22 are arbitrary self-adjoint operators.
Proof. 
Suppose that $Y \in B(G, H)$ has the matrix representation (29). According to Theorem 4, the systems (22) and (23) have a solution if and only if the system (1) has a solution.
According to Lemma 1, since M M F * M D , M M E * M C and (28) holds, then under space decomposition R ( ( M C ) * ) ¯ N ( M C ) R ( ( M C ) * ) ¯ N ( M C ) , M M E , M C , M D , M M F , A , B , C , D , E , F , G have the matrix representations
A = A 11 0 0 0 , B = B 11 0 0 0 , C = C 11 0 0 0 , D = D 11 0 0 0 ,
E = E 11 0 0 0 , F = F 11 0 0 0 , G = G 11 0 0 0 , M C = M 11 0 0 0 ,
M D = M 21 0 0 0 , M M E = M 31 0 0 0 , M M F = M 41 0 0 0 .
Substituting these into the systems (22) and (23), we obtain
M M E Y M D = M 31 Y 11 M 21 0 0 0 = M M E P R ( ( M C ) * ) ¯ Y 0 P R ( M D ) ¯ M D ,
M C Y M M F = M 11 Y 11 M 41 0 0 0 = M C P R ( ( M C ) * ) ¯ Y 0 P R ( M D ) ¯ M M F ,
where P R ( ( M C ) * ) ¯ Y 0 P R ( M D ) ¯ = M C M C Y 0 M D M D B ( D ( M D ) , H ) . Combined with
R ( N M F ) R ( M C ) , R ( M M F ) R ( M D ) , R ( ( N M E ) * ) R ( ( M D ) * ) , R ( ( M M E ) * ) R ( ( M C ) * ) ,
we obtain
M M E Y M D = M M E P R ( ( M C ) * ) ¯ Y 0 P R ( M D ) ¯ M D = M M E M C M C Y 0 M D = M M E Y 0 M D = N M E M D ¯ M D = N M E M D M D = N M E , M C Y M M F = M C P R ( ( M C ) * ) ¯ Y 0 P R ( M D ) ¯ M M F = M C Y 0 P R ( M D ) ¯ M M F = M C Y 0 M D M D M M F = M C Y 0 M M F = M C M C N M F = N M F .
Therefore, P R ( ( M C ) * ) ¯ Y 0 P R ( M D ) ¯ is a solution of the systems (22) and (23) and the system (1).
In addition, for an arbitrary y R ( ( M C ) * ) ¯ = R ( M D ) ¯ ,
Y 11 y , y = P R ( ( M C ) * ) ¯ Y 0 y , y = Y 0 y , P R ( ( M C ) * ) ¯ y = y , Y 0 * y = P R ( ( M C ) * ) ¯ y , Y 0 y = y , P R ( ( M C ) * ) ¯ Y 0 y = y , Y 11 y .
Therefore, Y 0 is a self-adjoint solution of the system of the operator Equation (1).
Assume that Z B ( G , H ) has matrix representation (30). In the proof of Theorem 4, we also know that the systems (19) and (20) have a solution equivalent to the system (1) having a solution. Additionally, Equations (19) and (20) have a solution if and only if the equation
M C Y M D + M E Z M F = N N
has a solution.
Using Lemma 2 again, Equation (21) has a solution equivalent to
R ( N N M E Z M F ) R ( M C ) , R ( ( N N M E Z M F ) * ) R ( ( M D ) * ) .
According to Lemma 3 and a simple calculation, we get that the systems (19) and (20) have a solution that is equivalent to the system
M E Z M F = N N , M E Z M F = N N ,
having a solution. Combined with
N N * M F * M F , N N * M E * M E , M E M E = M F M F ,
we get that a self-adjoint solution Z of the system (32) is also a self-adjoint solution of (1).
In fact, similar to the proof above, under space decomposition R ( M F ) ¯ N ( M F ) R ( M F ) ¯ N ( M F ) , M E , M F , M E , M F have the matrix representations
M E = M 11 0 0 0 , M F = M 21 0 0 0 , M E = M 31 0 0 0 , M F = M 41 0 0 0 .
Substituting these into the system (32), we obtain
M E Z M F = M 11 Z 11 M 21 0 0 0 = M E P R ( ( M E ) * ) ¯ Z 0 P R ( M F ) ¯ M F ,
M E Z M F = M 31 Z 11 M 41 0 0 0 = M E P R ( ( M E ) * ) ¯ Z 0 P R ( M F ) ¯ M M F ,
where P R ( ( M E ) * ) ¯ Z 0 P R ( M F ) ¯ = ( M E ) M E Z 0 M F M F B ( D ( M F ) , H ) . Combined with
R ( N N ) R ( M E ) , R ( M F ) R ( M F ) , R ( ( N N ) * ) R ( ( M F ) * ) , R ( ( M E ) * ) R ( ( M E ) * ) ,
we obtain
M E Z M F = M E P R ( ( M E ) * ) ¯ Z 0 P R ( M F ) ¯ M F = M E ( M E ) M E Z 0 M F = M E Z 0 M F = N N M F ¯ M F = N N M F M F = N N , M E Z M F = M E P R ( ( M E ) * ) ¯ Z 0 P R ( M F ) ¯ M F = M E Z 0 P R ( M F ) ¯ M F = M E Z 0 M F ( M F ) M F = M E Z 0 M F = M E M E N N = N N .
Therefore, P R ( ( M E ) * ) ¯ Z 0 P R ( M F ) ¯ is a solution of the system (32) and the system of the operator Equation (1).
In addition, for an arbitrary z R ( ( M E ) * ) ¯ = R ( M F ) ¯ ,
Z 11 z , z = P R ( ( M E ) * ) ¯ Z 0 z , z = Z 0 z , P R ( ( M E ) * ) ¯ z = z , Z 0 * z = P R ( ( M E ) * ) ¯ z , Z 0 z = z , P R ( ( M E ) * ) ¯ Z 0 z = z , Z 11 z .
We can deduce that Z 0 is a self-adjoint solution of the system (1).
Under the same assumptions, suppose that $X \in B(G, H)$ has the matrix representation (31). Substituting the matrix representations of Y , Z , and X into (18), we obtain
A 11 X 11 B 11 + C 11 Y 11 D 11 + E 11 Z F 11 = G 11 ,
which is equivalent to
A X B = G C P R ( ( M C ) * ) ¯ Y 0 P R ( M D ) ¯ D E P R ( ( M E ) * ) ¯ Z 0 P R ( M F ) ¯ F ,
then
X = A ( G C P R ( ( M C ) * ) ¯ Y 0 P R ( M D ) ¯ D E P R ( ( M E ) * ) ¯ Z 0 P R ( M F ) ¯ F ) B ¯ = A ( G C Y 0 D E Z 0 F ) B ¯ B ( G , H ) .
In addition, for an arbitrary x G , combined with the self-adjointness of A ( G C Y 0 D E Z 0 F ) B | D ( B ) ,
X 11 x , x = A ( G C Y 0 D E Z 0 F ) B ¯ x , x = x , ( B ) * ( G C Y 0 D E Z 0 F ) * ( A ) * ¯ x = x , A ( G C Y 0 D E Z 0 F ) B ¯ x = x , X 11 x .
We have X 11 * = X 11 . Therefore, X is also a self-adjoint solution of the system (1).
Conversely, if Y B ( G , H ) is a self-adjoint solution to Equation (18), then Y is also the self-adjoint solution of the systems (22) and (23). Under the space decomposition R ( ( M C ) * ) ¯ N ( M C ) R ( ( M C ) * ) ¯ N ( M C ) , set
Y = Y 11 Y 12 Y 21 Y 22 .
We denote the continuous bounded linear extension of M C M C Y M D M D as
Y 0 = M C M C Y M D M D ¯ B ( G , H ) .
By R ( M M F ) R ( M D ) D ( M D ) = D ( Y ) , we obtain Y 0 M M F = Y M M F = M C N M F . From R ( ( N M E ) * ) R ( ( M D ) * ) , combined with Lemma 3, there exists Y ˜ B ( G , H ) such that N M E = Y ˜ M D . It follows that N M E M D = Y ˜ M D M D B ( D ( M D ) , H ) . In that way, for any y G , y n D ( M D ) such that y n y . Therefore,
M M E Y 0 y = M M E ( lim n Y 0 y n ) = lim n M M E Y 0 y n = lim n N M E M D y n = N M E M D ¯ y .
Then Y 0 is a solution of the system
M M E Y 0 = N M E M D ¯ , Y 0 M M F = M C N M F .
By a simple calculation, we obtain
P R ( ( M C ) * ) ¯ Y 0 P R ( M D ) ¯ = P R ( ( M C ) * ) ¯ P R ( ( M C ) * ) ¯ Y P R ( M D ) ¯ | D ( M D ) ¯ P R ( M D ) ¯ = P R ( ( M C ) * ) ¯ Y P R ( M D ) ¯ B ( G , H ) .
It follows that Y 11 = P R ( ( M C ) * ) ¯ Y 0 P R ( M D ) ¯ . Since Y is a self-adjoint operator, we have that Y 12 is an arbitrary operator and satisfies Y 21 = Y 12 * , and Y 22 is an arbitrary self-adjoint operator. Therefore, Y has the matrix representation of (29).
Similar to the proof above, we have Z 11 = P R ( ( M E ) * ) ¯ Z 0 P R ( M F ) ¯ . Since Z is a self-adjoint operator, we have that Z 12 is an arbitrary operator and satisfies Z 21 = Z 12 * , and Z 22 is an arbitrary self-adjoint operator. Therefore, Z has the matrix representation of (30).
Suppose that X B ( G , H ) is a self-adjoint solution to the system (18). Under the space decomposition R ( ( M C ) * ) ¯ N ( M C ) R ( ( M C ) * ) ¯ N ( M C ) , set
X = X 11 X 12 X 21 X 22 .
Substituting X into equation A X B + C Y D + E Z F = G , we obtain
X = A 11 ( G 11 C 11 Y 11 D 11 E 11 Z F 11 ) B 11 0 0 0 = A ( G C Y D E Z F ) B ¯ B ( G , H ) ;
by assumption, A ( G C Y D E Z F ) B | D ( B ) is a self-adjoint operator, so we obtain that
X 11 = A ( G C P R ( ( M C ) * ) ¯ Y P R ( M D ) ¯ D E P R ( ( M C ) * ) ¯ Y P R ( M D ) ¯ F ) B
is a self-adjoint operator. X 12 is an arbitrary operator and satisfies X 21 = X 12 * , and X 22 is an arbitrary self-adjoint operator. Therefore, the matrix representation of X is (31). □
For convenience, we define the notation as follows ( i = 1 , 2 ) :
A = A 1 A 2 : H K K , B = B 1 B 2 : F F G , C = C 1 C 2 : H K K , D = D 1 D 2 : F F G , G = G 1 U V G 2 , for some U , V B ( F , K ) . M 1 = ( I K K A A ) C : H K K , N 1 = ( I K K A A ) G : F F K K , M 2 = D ( I F F B B ) : F F G , N 2 = G ( I F F B B ) : F F K K .
The following results can be obtained by taking E i = F i = 0 , ( i = 1 , 2 ) from Theorem 5.
Corollary 13.
Let A i , C i B ( H , K ) , B i , D i B ( F , G ) , G i B ( F , K ) , ( i = 1 , 2 ) . If A ( G C Y D ) B | D ( B ) is a self-adjoint operator, N 1 * M 2 * D , N 2 * M 1 * C and
C C | D ( C ) = C C = D D | D ( D ) = D D = A A | D ( A ) = A A = B B | D ( B ) = B B = G G | D ( G ) = G G
Then A i X B i + C i Y D i = G i ( i = 1 , 2 ) has a pair of self-adjoint solutions. Under the space decomposition R ( C * ) ¯ N ( C ) R ( C * ) ¯ N ( C ) , the self-adjoint solutions have the matrix representation
Y = Y 11 Y 12 Y 12 * Y 22
X = X 11 X 12 X 12 * X 22
where
Y 11 = P R ( C * ) ¯ Y 0 P R ( D ) ¯
is a self-adjoint operator, satisfying that Y 0 = C C Y D D ¯ is a self-adjoint solution of
M 1 Y = N 1 D ¯ , Y M 2 = C N 2 ,
Y 12 and X 12 are arbitrary operators, Y 22 and X 22 are arbitrary self-adjoint operators.
Proof. 
The proof is similar to that of Theorem 5 and is omitted here. □
The self-adjoint solutions of A 1 X B 1 = G 1 , A 2 X B 2 = G 2 can be obtained by taking C i = D i = 0 , ( i = 1 , 2 ) from Corollary 13.
Corollary 14
([28], Theorem 2.5). Let A i , B i , G i B ( H ) ,   i = 1 , 2 ¯ such that
G 1 * A 2 * A 1 , G 2 * B 1 * B 2
and
A A | D ( A ) = A A = B B | D ( B ) = B B .
If the system A i X B i = G i , i = 1 , 2 ¯ has a self-adjoint solution, then the general self-adjoint solution has the matrix representation
X = X 11 X 12 X 12 * X 22
in terms of H = R ( A 1 * ) ¯ N ( A 1 ) , where X 22 is a self-adjoint operator and X 11 = P R ( A 1 * ) ¯ Y ˜ P R ( B 2 ) ¯ satisfying that Y ˜ is a self-adjoint solution of
A 2 X = G 2 B 2 ¯ , X B 1 = A 1 G 1 .
The following results can be obtained by taking A 1 = B 2 , B 1 = A 2 , G 1 = G 2 = G from Corollary 14.
Corollary 15
([32], Theorem 3.3). Let A , B , G B ( H ) such that A has a closed range with A A = A A and G * B * A . If the system A X B = G = B X A has a self-adjoint solution, then the general self-adjoint solutions have the matrix representation
X = X 11 X 12 X 12 * X 22
in terms of H = R ( A * ) N ( A ) , where X 22 is self-adjoint and X 11 = P R ( A * ) Y ˜ P R ( A * ) satisfying that Y ˜ is a self-adjoint solution of B X = G A , X B = A G .
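As a finite-dimensional sanity check of this representation, the following NumPy sketch treats only the special case B = A with a singular symmetric A (the sizes, the seed, and the names X0, Ytilde, X11 are ours): it builds G from a chosen self-adjoint X0 and verifies that Ytilde = A† G A† solves the reduced pair B X = G A†, X B = A† G, and that X11 = P_{R(A*)} Ytilde P_{R(A*)} is a self-adjoint solution of A X A = G.

import numpy as np

rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.standard_normal((5, 5)))
A = Q @ np.diag([3.0, 1.5, 0.7, 0.0, 0.0]) @ Q.T         # symmetric (hence normal), singular, closed range
X0 = rng.standard_normal((5, 5)); X0 = (X0 + X0.T) / 2   # a self-adjoint "true" solution
G = A @ X0 @ A
Ap = np.linalg.pinv(A)
P = A @ Ap                                               # orthogonal projection onto R(A) = R(A*)
Ytilde = Ap @ G @ Ap                                     # candidate solution of the reduced pair
X11 = P @ Ytilde @ P                                     # the (1,1) block from Corollary 15
assert np.allclose(A @ Ytilde, G @ Ap) and np.allclose(Ytilde @ A, Ap @ G)
assert np.allclose(X11, X11.T) and np.allclose(A @ X11 @ A, G)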

4.3. The Representation of Positive Solutions

In the following theorem, using the notation defined in Theorem 4, we give the operator matrix representation of positive solutions of the system A i X B i + C i Y D i + E i Z F i = G i ( i = 1 , 2 ) .
Theorem 6.
Let A i , C i , E i B ( H , K ) , B i , D i , F i B ( F , G ) , G i B ( F , K ) , ( i = 1 , 2 ) be closed-range operators. If A ( G C Y 0 D E Z 0 F ) B | D ( B ) is a positive operator, N M E * M M F * M D , N M F * M M E * M C , N N * M F * M F , N N * M E * M E , M C = M E , and
M C M C | D ( M C ) = M C M C = M D M D | D ( M D ) = M D M D = A A | D ( A ) = A A = B B | D ( B ) = B B = C C | D ( C ) = C C = D D | D ( D ) = D D = E E | D ( E ) = E E = F F | D ( F ) = F F = G G | D ( G ) = G G = ( M E ) M E | D ( ( M E ) ) = M E ( M E ) = ( M F ) M F | D ( ( M F ) ) = M F ( M F ) .
Then A i X B i + C i Y D i + E i Z F i = G i ( i = 1 , 2 ) has positive solutions. Under the space decomposition R ( ( M C ) * ) ¯ N ( M C ) R ( ( M C ) * ) ¯ N ( M C ) , the positive solutions have the matrix representation
Y = Y 11 Y 12 Y 12 * ( ( Y 11 1 2 ) Y 12 ) * ( Y 11 1 2 ) Y 12 + f
Z = Z 11 Z 12 Z 12 * ( ( Z 11 1 2 ) Z 12 ) * ( Z 11 1 2 ) Z 12 + f
X = X 11 X 12 X 12 * ( ( X 11 1 2 ) X 12 ) * ( X 11 1 2 ) X 12 + f
where
Y 11 = P R ( M C * ) ¯ Y 0 P R ( M D ) ¯ , Z 11 = P R ( ( M E ) * ) ¯ Z 0 P R ( M F ) ¯
are positive operators, satisfying that
Y 0 = M C M C Y M D M D ¯ , Z 0 = ( M E ) M E Z 0 M F M F ¯
are positive solutions of the equation system
M M E Y = N M E M D ¯ , Y M M F = M C N M F ,
and
M E Z = N N M F ¯ , Z M F = M E N N ,
respectively, and Y 12 , Z 12 , and X 12 are arbitrary operators and satisfy
R ( Y 12 ) R ( Y 11 1 2 ) , Y 21 = Y 12 * , R ( Z 12 ) R ( Z 11 1 2 ) , Z 21 = Z 12 * , R ( X 12 ) R ( X 11 1 2 ) , X 21 = X 12 * ,
Y 22 = ( ( Y 11 1 2 ) Y 12 ) * ( Y 11 1 2 ) Y 12 + f , Z 22 = ( ( Z 11 1 2 ) Z 12 ) * ( Z 11 1 2 ) Z 12 + f , X 22 = ( ( X 11 1 2 ) X 12 ) * ( X 11 1 2 ) X 12 + f ,
and f is positive.
Proof. 
Suppose that Y B ( G , H ) has the matrix representation (36). According to Theorem 4, the systems (22) and (23) have a solution if and only if the system (1) has a solution.
According to Lemma 1, since M M F * M D , M M E * M C , and (28) hold, under the space decomposition R ( ( M C ) * ) ¯ N ( M C ) R ( ( M C ) * ) ¯ N ( M C ) , the operators M M E , M C , M D , M M F , A , B , C , D , E , F , G have the matrix representations
A = A 11 0 0 0 , B = B 11 0 0 0 , C = C 11 0 0 0 , D = D 11 0 0 0 ,
E = E 11 0 0 0 , F = F 11 0 0 0 , G = G 11 0 0 0 , M C = M 11 0 0 0 ,
M D = M 21 0 0 0 , M M E = M 31 0 0 0 , M M F = M 41 0 0 0 .
After substituting them into the systems (22) and (23), we obtain
M M E Y M D = M 31 Y 11 M 21 0 0 0 = M M E P R ( ( M C ) * ) ¯ Y 0 P R ( M D ) ¯ M D ,
M C Y M M F = M 11 Y 11 M 41 0 0 0 = M C P R ( ( M C ) * ) ¯ Y 0 P R ( M D ) ¯ M M F ,
where P R ( ( M C ) * ) ¯ Y 0 P R ( M D ) ¯ = M C M C Y 0 M D M D B ( D ( M D ) , H ) . Combined with
R ( N M F ) R ( M C ) , R ( M M F ) R ( M D ) , R ( ( N M E ) * ) R ( ( M D ) * ) , R ( ( M M E ) * ) R ( ( M C ) * ) ,
we obtain
M M E Y M D = M M E P R ( ( M C ) * ) ¯ Y 0 P R ( M D ) ¯ M D = M M E M C M C Y 0 M D = M M E Y 0 M D = N M E M D ¯ M D = N M E M D M D = N M E , M C Y M M F = M C P R ( ( M C ) * ) ¯ Y 0 P R ( M D ) ¯ M M F = M C Y 0 P R ( M D ) ¯ M M F = M C Y 0 M D M D M M F = M C Y 0 M M F = M C M C N M F = N M F .
Therefore, P R ( ( M C ) * ) ¯ Y 0 P R ( M D ) ¯ is a solution of the systems (22) and (23), as well as the system (1).
In addition, for an arbitrary y R ( ( M C ) * ) ¯ = R ( M D ) ¯ ,
Y 11 y , y = P R ( ( M C ) * ) ¯ Y 0 y , y = Y 0 y , P R ( ( M C ) * ) ¯ y = Y P R ( ( M C ) * ) ¯ y , P R ( ( M C ) * ) ¯ y 0 .
Therefore, Y 0 is a positive solution of the system (1).
Assume that Z B ( G , H ) has the matrix representation (37). From the proof of Theorem 4, we also know that the systems (19) and (20) have a solution if and only if the system (1) has a solution. Additionally, Equations (19) and (20) have a solution if and only if the system
M C Y M D + M E Z M F = N N
has a solution.
Using Lemma 2 again, Equation (21) has a solution if and only if
R ( N N M E Z M F ) R ( M C ) , R ( ( N N M E Z M F ) * ) R ( ( M D ) * ) .
According to Lemma 3 and a simple calculation, we get that the systems (19) and (20) have a solution if and only if the system
M E Z M F = N N , M E Z M F = N N ,
has a solution. Combined with
N N * M F * M F , N N * M E * M E , M E M E = M F M F ,
we get that a positive solution Z of the system (39) is also a positive solution of (1).
Indeed, similar to the proof above, under the space decomposition R ( M F ) ¯ N ( M F ) R ( M F ) ¯ N ( M F ) , M E , M F , M E , M F have the matrix representations
M E = M 11 0 0 0 , M F = M 21 0 0 0 , M E = M 31 0 0 0 , M F = M 41 0 0 0 .
After substituting them into the system (39), we obtain
M E Z M F = M 11 Z 11 M 21 0 0 0 = M E P R ( ( M E ) * ) ¯ Z 0 P R ( M F ) ¯ M F ,
M E Z M F = M 31 Z 11 M 41 0 0 0 = M E P R ( ( M E ) * ) ¯ Z 0 P R ( M F ) ¯ M M F ,
where P R ( ( M E ) * ) ¯ Z 0 P R ( M F ) ¯ = ( M E ) M E Z 0 M F M F B ( D ( M F ) , H ) . Combined with
R ( N N ) R ( M E ) , R ( M F ) R ( M F ) , R ( ( N N ) * ) R ( ( M F ) * ) , R ( ( M E ) * ) R ( ( M E ) * ) ,
we obtain
M E Z M F = M E P R ( ( M E ) * ) ¯ Z 0 P R ( M F ) ¯ M F = M E ( M E ) M E Z 0 M F = M E Z 0 M F = N N M F ¯ M F = N N M F M F = N N , M E Z M F = M E P R ( ( M E ) * ) ¯ Z 0 P R ( M F ) ¯ M F = M E Z 0 P R ( M F ) ¯ M F = M E Z 0 M F ( M F ) M F = M E Z 0 M F = M E M E N N = N N .
Therefore, P R ( ( M E ) * ) ¯ Z 0 P R ( M F ) ¯ is a solution of the system (39), as well as the system (1).
In addition, for an arbitrary z R ( ( M E ) * ) ¯ = R ( M F ) ¯ ,
Z 11 z , z = P R ( ( M E ) * ) ¯ Z 0 z , z = Z 0 z , P R ( ( M E ) * ) ¯ z = Z 0 P R ( ( M E ) * ) ¯ z , P R ( ( M E ) * ) ¯ z 0 .
Therefore, Z 0 is a positive solution of the system (1).
Under the same assumptions, X B ( G , H ) has the matrix representation (38). Substituting the matrix representations of Y , Z , and X into (18), we obtain
A 11 X 11 B 11 + C 11 Y 11 D 11 + E 11 Z F 11 = G 11 ,
which is equivalent to
A X B = G C P R ( ( M C ) * ) ¯ Y 0 P R ( M D ) ¯ D E P R ( ( M E ) * ) ¯ Z 0 P R ( M F ) ¯ F ,
then
X = A ( G C P R ( ( M C ) * ) ¯ Y 0 P R ( M D ) ¯ D E P R ( ( M E ) * ) ¯ Z 0 P R ( M F ) ¯ F ) B ¯ = A ( G C Y 0 D E Z 0 F ) B ¯ B ( G , H ) .
In addition, for an arbitrary x G , combined with the positivity of A ( G C Y 0 D E Z 0 F ) B | D ( B ) ,
X 11 x , x = A ( G C Y 0 D E Z 0 F ) B ¯ x , x 0 .
We have X 11 * = X 11 . Therefore, X is also a positive solution of the system (1).
Conversely, assume that Y B ( G , H ) is a positive solution to Equation (18); then Y is also a positive solution of the systems (22) and (23). Under the space decomposition R ( ( M C ) * ) ¯ N ( M C ) R ( ( M C ) * ) ¯ N ( M C ) , set
Y = Y 11 Y 12 Y 21 Y 22 .
We denote the continuous bounded linear extension of M C M C Y M D M D as Y 0 = M C M C Y M D M D ¯   B ( G , H ) . By R ( M M F ) R ( M D ) D ( M D ) = D ( Y ) , we obtain Y 0 M M F = Y M M F = M C N M F . From R ( ( N M E ) * ) R ( ( M D ) * ) , combined with Lemma 3, there exists Y ˜ B ( G , H ) such that N M E = Y ˜ M D . It follows that N M E M D = Y ˜ M D M D B ( D ( M D ) , H ) . In that way, for any y G , there exists a sequence y n D ( M D ) such that y n y . Therefore,
M M E Y 0 y = M M E ( lim n Y 0 y n ) = lim n M M E Y 0 y n = lim n N M E M D y n = N M E M D ¯ y .
Then Y 0 is a solution of the system M M E Y 0 = N M E M D ¯ , Y 0 M M F = M C N M F .
By a simple calculation, we obtain
Y 0 y , y = lim n Y 0 y n , y n = lim n M C M C Y M D M D y n , y n = lim n Y M D M D y n , M D M D y n 0 .
It follows that Y 11 = P R ( ( M C ) * ) ¯ Y 0 P R ( M D ) ¯ . According to Lemma 6, since Y is a positive operator, Y 12 is an arbitrary operator satisfying R ( Y 12 ) R ( Y 11 1 2 ) , Y 21 = Y 12 * , and Y 22 = ( ( Y 11 1 2 ) Y 12 ) * ( Y 11 1 2 ) Y 12 + f . Therefore, Y has the matrix representation (36).
Similar to the proof above, we have Z 11 = P R ( ( M E ) * ) ¯ Z 0 P R ( M F ) ¯ . Since Z is a positive operator, Z 12 is an arbitrary operator satisfying R ( Z 12 ) R ( Z 11 1 2 ) , Z 21 = Z 12 * , and Z 22 = ( ( Z 11 1 2 ) Z 12 ) * ( Z 11 1 2 ) Z 12 + f . Therefore, Z has the matrix representation (37).
Suppose that X B ( G , H ) is a positive solution to the system (18); we now find the operator matrix representation of X. Under the space decomposition R ( ( M C ) * ) ¯ N ( M C ) R ( ( M C ) * ) ¯ N ( M C ) , set
X = X 11 X 12 X 21 X 22 .
Substituting X into the equation A X B + C Y D + E Z F = G , we obtain
X = A 11 ( G 11 C 11 Y 11 D 11 E 11 Z F 11 ) B 11 0 0 0 = A ( G C Y D E Z F ) B ¯ B ( G , H ) ,
according to the known assumptions, A ( G C Y D E Z F ) B | D ( B ) is a positive operator, and we get that
X 11 = A ( G C P R ( ( M C ) * ) ¯ Y P R ( M D ) ¯ D E P R ( ( M E ) * ) ¯ Z P R ( M F ) ¯ F ) B
is a positive operator. X 12 is an arbitrary operator and satisfies R ( X 12 ) R ( X 11 1 2 ) , X 21 = X 12 * , X 22 = ( ( X 11 1 2 ) X 12 ) * ( X 11 1 2 ) X 12 + f . Therefore, the matrix representation of X is (38). □
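The corner formula for Y 22 , Z 22 , and X 22 in (36)–(38) is the Schur-complement description of positive 2 × 2 operator matrices used in Lemma 6. The following NumPy sketch (sizes, seed, and variable names are our own) builds a block of exactly this form with a singular Y11 and confirms numerically that the resulting block operator is positive.

import numpy as np

rng = np.random.default_rng(2)
p, q = 4, 3
W = rng.standard_normal((p, 2))
Y11 = W @ W.T                                            # positive and singular
evals, evecs = np.linalg.eigh(Y11)
sqrtY11 = evecs @ np.diag(np.sqrt(np.clip(evals, 0, None))) @ evecs.T
Y12 = sqrtY11 @ rng.standard_normal((p, q))              # guarantees R(Y12) lies in R(Y11^(1/2))
F0 = rng.standard_normal((q, q)); f = F0 @ F0.T          # an arbitrary positive f
S = np.linalg.pinv(sqrtY11) @ Y12
Y22 = S.T @ S + f                                        # Y22 = ((Y11^(1/2))^+ Y12)^* (Y11^(1/2))^+ Y12 + f
Yblock = np.block([[Y11, Y12], [Y12.T, Y22]])
assert np.linalg.eigvalsh(Yblock).min() > -1e-10         # the whole block is a positive operator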
For convenience, we recall the notation defined above ( i = 1 , 2 ) :
A = A 1 A 2 : H K K , B = B 1 B 2 : F F G , C = C 1 C 2 : H K K , D = D 1 D 2 : F F G , G = G 1 U V G 2 , for some U , V B ( F , K ) . M 1 = ( I K K A A ) C : H K K , N 1 = ( I K K A A ) G : F F K K , M 2 = D ( I F F B B ) : F F G , N 2 = G ( I F F B B ) : F F K K .
Corollary 16.
Let A i , C i B ( H , K ) , B i , D i B ( F , G ) , G i B ( F , K ) , ( i = 1 , 2 ) . If A ( G C Y D ) B | D ( B ) is a positive operator, N 1 * M 2 * D , N 2 * M 1 * C and
C C | D ( C ) = C C = D D | D ( D ) = D D = A A | D ( A ) = A A = B B | D ( B ) = B B = G G | D ( G ) = G G .
Then A i X B i + C i Y D i = G i ( i = 1 , 2 ) has a pair of positive solutions. Under the space decomposition R ( C * ) ¯ N ( C ) R ( C * ) ¯ N ( C ) , the positive solutions have the matrix representation
Y = Y 11 Y 12 Y 12 * ( ( Y 11 1 2 ) Y 12 ) * ( Y 11 1 2 ) Y 12 + f
X = X 11 X 12 X 12 * ( ( X 11 1 2 ) X 12 ) * ( X 11 1 2 ) X 12 + f
where Y 11 = P R ( C * ) ¯ Y P R ( D ) ¯ and X 11 = A ( G C P R ( C * ) ¯ Y P R ( D ) ¯ D ) B are positive operators, satisfying that Y 0 = C C Y D D ¯ is a positive solution to the system
M 1 Y = N 1 D ¯ , Y M 2 = C N 2 ,
Y 12 and X 12 are arbitrary operators satisfying R ( Y 12 ) R ( Y 11 1 2 ) and R ( X 12 ) R ( X 11 1 2 ) , and f is positive.
Proof. 
Assume that Y B ( G , H ) has the matrix representation (40). Since M 2 * D , M 1 * C , and (33) hold, under the space decomposition R ( C * ) ¯ N ( C ) R ( C * ) ¯ N ( C ) , the operators M 1 , C , D , M 2 , A , B , G have the matrix representations
M 1 = M 11 0 0 0 , C = C 11 0 0 0 , D = D 11 0 0 0 ,
M 2 = M 21 0 0 0 , A = A 11 0 0 0 , B = B 11 0 0 0 , G = G 11 0 0 0 .
After substituting them into the systems (26) and (27), we obtain
M 1 Y D = M 11 Y 11 D 11 0 0 0 = M 1 P R ( C * ) ¯ Y P R ( D ) ¯ D ,
C Y M 2 = C 11 Y 11 M 21 0 0 0 = C P R ( C * ) ¯ Y P R ( D ) ¯ M 2 ,
where P R ( C * ) ¯ Y P R ( D ) ¯ = C C Y D D B ( D ( D ) , H ) . Combined with
R ( N 2 ) R ( C ) , R ( M 2 ) R ( D ) , R ( N 1 * ) R ( D * ) , R ( M 1 * ) R ( C * ) ,
we obtain
M 1 Y D = M 1 P R ( C * ) ¯ Y P R ( D ) ¯ D = M 1 C C Y D = M 1 Y D = N 1 D D = N 1 ,
C Y M 2 = C P R ( C * ) ¯ Y P R ( D ) ¯ M 2 = C Y D D M 2 = C Y M 2 = C C N 2 = N 2 .
Therefore, Y is a solution of the systems (26) and (27), as well as the system (2).
In addition, for an arbitrary y R ( C * ) ¯ = R ( D ) ¯ ,
Y 11 y , y = P R ( C * ) ¯ Y y , y = Y y , P R ( C * ) ¯ y = Y P R ( C * ) ¯ y , P R ( C * ) ¯ y 0 .
Therefore, Y is a positive solution of the system (2).
Under the same assumptions, X B ( G , H ) has the matrix representation (41). Substituting the matrix forms of Y and X into (25), we obtain
A 11 X 11 B 11 + C 11 Y 11 D 11 = G 11 ,
which is equivalent to A X B = G C P R ( C * ) ¯ Y P R ( D ) ¯ D , then
X = A ( G C P R ( C * ) ¯ Y P R ( D ) ¯ D ) B ¯ = A ( G C Y D ) B ¯ B ( G , H ) .
In addition, for an arbitrary x G , combined with the positivity of A ( G C Y D ) B | D ( B ) ,
X 11 x , x = A ( G C Y D ) B ¯ x , x 0 .
Therefore, X is also a positive solution of the system (2).
Conversely, assume that Y B ( G , H ) is a positive solution to the system (25); then Y is also a positive solution of the systems (26) and (27). Under the space decomposition R ( C * ) ¯ N ( C ) R ( C * ) ¯ N ( C ) , set
Y = Y 11 Y 12 Y 21 Y 22 .
We denote the continuous bounded linear extension of C C Y D D as Y 0 = C C Y D D ¯   B ( G , H ) . By R ( M 2 ) R ( D ) D ( D ) = D ( Y ) , we obtain Y 0 M 2 = Y M 2 = C N 2 . From R ( N 1 * ) R ( D * ) , combined with Lemma 3, there exists Z ˜ B ( G , H ) such that N 1 = Z ˜ D . It follows that N 1 D = Z ˜ D D B ( D ( D ) , H ) . In that way, for any y G , there exists a sequence y n D ( D ) such that y n y . Therefore,
M 1 Y 0 y = M 1 ( lim n Y 0 y n ) = lim n M 1 Y 0 y n = lim n N 1 D y n = N 1 D ¯ y .
Then Y 0 is a solution of the system M 1 Y = N 1 D ¯ , Y M 2 = C N 2 . Additionally, we have
Y 0 y , y = lim n Y 0 y n , y n = lim n C C Y D D y n , y n = lim n Y D D y n , D D y n 0 .
Therefore, Y 0 B ( G , H ) is a positive solution of the system M 1 Y = N 1 D ¯ , Y M 2 = C N 2 . After a simple calculation, we obtain
P R ( C * ) ¯ Y 0 P R ( D ) ¯ = P R ( C * ) ¯ P R ( C * ) ¯ Y P R ( D ) ¯ | D ( D ) ¯ P R ( D ) ¯ = P R ( C * ) ¯ Y P R ( D ) ¯ B ( G , H ) .
Therefore, it follows that Y 11 = P R ( C * ) ¯ Y 0 P R ( D ) ¯ = P R ( C * ) ¯ Y P R ( D ) ¯ . According to Lemma 6, since Y is positive, R ( Y 12 ) R ( Y 11 1 2 ) , Y 21 = Y 12 * , and Y 22 = ( ( Y 11 1 2 ) Y 12 ) * ( Y 11 1 2 ) Y 12 + f . Now suppose that X B ( G , H ) is a positive solution to the system (25); we find the operator matrix representation of X. Under the space decomposition R ( C * ) ¯ N ( C ) R ( C * ) ¯ N ( C ) , set
X = X 11 X 12 X 21 X 22 .
Substituting X into the equation A X B + C Y D = G , we obtain
X = A 11 ( G 11 C 11 Y 11 D 11 ) B 11 0 0 0 = A ( G C Y D ) B ¯ B ( G , H ) ,
using Lemma 6 again, since X is a positive operator and, by the known assumptions, A ( G C Y D ) B | D ( B ) is a positive operator, we get that X 11 = A ( G C P R ( C * ) ¯ Y P R ( D ) ¯ D ) B is a positive operator. X 12 is an arbitrary operator satisfying R ( X 12 ) R ( X 11 1 2 ) , X 21 = X 12 * , and X 22 = ( ( X 11 1 2 ) X 12 ) * ( X 11 1 2 ) X 12 + f . Therefore, the matrix representation of X is (41). □
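As a concrete finite-dimensional instance of Corollary 16, one can take the congruence-type special case B_i = A_i*, D_i = C_i*, so that the system becomes A_i X A_i* + C_i Y C_i* = G_i (compare [25]). The NumPy sketch below (dimensions, seed, and the names X0, Y0, Xp are ours) builds consistent data from chosen positive X0 and Y0 with a rank-deficient A and checks that Xp = A†(G − C Y0 D)B†, which in this special case reduces to P_{R(A*)} X0 P_{R(A*)}, is a positive solution of both equations.

import numpy as np

rng = np.random.default_rng(3)
n = 4
S = rng.standard_normal((n - 1, n))                      # common right factor makes A = [A1; A2] rank deficient
A1 = rng.standard_normal((n, n - 1)) @ S
A2 = rng.standard_normal((n, n - 1)) @ S
C1, C2 = rng.standard_normal((n, n)), rng.standard_normal((n, n))
psd = lambda M: M @ M.T
X0, Y0 = psd(rng.standard_normal((n, n))), psd(rng.standard_normal((n, n)))   # chosen positive solutions
G1 = A1 @ X0 @ A1.T + C1 @ Y0 @ C1.T
G2 = A2 @ X0 @ A2.T + C2 @ Y0 @ C2.T
A = np.vstack([A1, A2]); C = np.vstack([C1, C2])
B, D = A.T, C.T                                          # the special case B_i = A_i^*, D_i = C_i^*
U = A1 @ X0 @ A2.T + C1 @ Y0 @ C2.T                      # admissible coupling block
Gbig = np.block([[G1, U], [U.T, G2]])
Ap = np.linalg.pinv(A)
P = Ap @ A                                               # orthogonal projection onto R(A^*)
Xp = Ap @ (Gbig - C @ Y0 @ D) @ np.linalg.pinv(B)
assert np.allclose(Xp, P @ X0 @ P)                       # Xp = P X0 P, hence positive
assert np.linalg.eigvalsh((Xp + Xp.T) / 2).min() > -1e-10
for Ai, Ci, Gi in [(A1, C1, G1), (A2, C2, G2)]:
    assert np.allclose(Ai @ Xp @ Ai.T + Ci @ Y0 @ Ci.T, Gi)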
The positive solutions of A 1 X B 1 = G 1 , A 2 X B 2 = G 2 can be obtained by taking C i = D i = 0 , ( i = 1 , 2 ) from Corollary 16.
Corollary 17
([28], Theorem 2.7). Let A i , B i , G i B ( H ) , i = 1 , 2 ¯ such that
G 1 * A 2 * A 1 , G 2 * B 1 * B 2
and
A A | D ( A ) = A A = B B | D ( B ) = B B .
If the system A i X B i = G i , i = 1 , 2 ¯ has a positive solution, then the general positive solution has the matrix representation
X = X 11 X 12 X 12 * ( ( X 11 1 2 ) X 12 ) * ( X 11 1 2 ) X 12 + f
in terms of H = R ( A 1 * ) ¯ N ( A 1 ) , where f is positive, R ( X 12 ) R ( X 11 1 2 ) and X 11 = P R ( A 1 * ) ¯ Y ˜ P R ( B 2 ) ¯ , satisfying that Y ˜ is a positive solution of
A 2 X = G 2 B 2 ¯ , X B 1 = A 1 G 1 .
The following result can be obtained by taking A 1 = B 2 , B 1 = A 2 , G 1 = G 2 = G from Corollary 17.
Corollary 18
([32], Theorem 3.4). Let A , B , G B ( H ) such that A has a closed range with A A = A A and G * B * A . If the system A X B = G = B X A has a positive solution, then the general positive solutions have the matrix representation
X = X 11 X 12 X 12 * ( ( X 11 1 2 ) X 12 ) * ( X 11 1 2 ) X 12 + f
with respect to H = R ( A * ) N ( A ) , where f is positive, R ( X 12 ) R ( X 11 1 2 ) and X 11 = P R ( A * ) Y ˜ P R ( A * ) satisfying that Y ˜ is a positive solution of
B X = G A , X B = A G .
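For illustration (this two-dimensional example is ours and is not taken from [32]), let H = C 2 , A = B = diag ( 1 , 0 ) and G = diag ( 2 , 0 ) . Then A is normal with closed range and G = A X 0 A for X 0 = diag ( 2 , 1 ) , so the equation A X A = G is consistent, and H = R ( A * ) ⊕ N ( A ) is the decomposition along the two coordinate axes. A direct computation shows that Y ˜ = A † G A † = diag ( 2 , 0 ) is a positive solution of B X = G A † , X B = A † G , and that X 11 = P R ( A * ) Y ˜ P R ( A * ) equals 2 on R ( A * ) . The positive solutions of A X A = G are exactly the positive matrices whose ( 1 , 1 ) entry equals 2 , that is, X = [ 2 , x 12 ; x 12 ¯ , | x 12 | 2 / 2 + f ] (rows separated by a semicolon) with x 12 ∈ C and f ≥ 0 , which is precisely the family described by the representation above.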

5. Conclusions

The purpose of this paper is to study the necessary and sufficient conditions for the existence of general, self-adjoint, and positive solutions of the operator system A i X B i + C i Y D i + E i Z F i = G i ( i = 1 , 2 ) under additional assumptions. Furthermore, we discuss the representation of general solutions of the operator system A i X B i + C i Y D i + E i Z F i = G i ( i = 1 , 2 ) using star-order conditions. Additionally, we provide the operator matrix representation for self-adjoint and positive solutions with respect to the space decomposition R ( ( M C ) * ) ¯ N ( M C ) R ( ( M C ) * ) ¯ N ( M C ) .
Moreover, by means of space decompositions, generalized inverses, and the star order, we can establish some equivalent conditions for the solvability and the representation of general, self-adjoint, and positive solutions to the systems
A 1 X B 1 + C 1 Y D 1 + E 1 Z F 1 = G 1 , A 2 X B 2 + C 2 Y D 2 + E 2 Z F 2 = G 2 , H X = G 3 , X J = G 4 .
A 1 X B 1 + C 1 Y D 1 + E 1 Z F 1 = G 1 , A 2 X B 2 + C 2 Y D 2 + E 2 Z F 2 = G 2 , H X = G 3 , Y J = G 4 ,
and so on [45,46,47]. We hope that such equations, like their classical prototypes, will find applications in control theory, information theory, linear system theory, sampling, and other areas.

Author Contributions

Writing—original draft and writing—review and editing, G.C., G.H., J.M., and X.C. All authors have read and agreed to the published version of the manuscript.

Funding

This work was completed with the support of the NNSFs of China (Grant Nos. 11761052, 11862019), NSF of Inner Mongolia (Grant No. 2020ZD01), Key Laboratory of Infinite-Dimensional Hamilton System and Its Algorithm Application (Inner Mongolia Normal University), Ministry of Education (Grant No. 2023KFZD01), and Inner Mongolia Autonomous Region University Innovation Team Development Plan (Grant No. NMGIRT2317), Inner Mongolia Agricultural University Basic Discipline Research Launch Fund Project (Grant No. JC2019003).

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author.

Acknowledgments

The authors are grateful to the reviewers for their careful reading and valuable comments. The authors also thank the editors.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Özgüler, A.B. The matrix equation AXB + CYD = E over a principal ideal domain. SIAM J. Matrix Anal. Appl. 1991, 12, 581–591. [Google Scholar] [CrossRef]
  2. Liao, A.P.; Lei, Y. Optimal approximate solution of the matrix equation AXB = C over symmetric matrices. J. Comput. Math. 2007, 25, 543–552. [Google Scholar]
  3. Groß, J. A note on the general Hermitian solution to AXA* = B. Bull. Malays. Math. Soc. 1998, 21, 57–62. [Google Scholar]
  4. Huang, L.P.; Zeng, Q. The solvability of matrix equation AXB + CYD = E over a simple Artinian ring. Linear Multilinear Algebra 1995, 38, 225–232. [Google Scholar]
  5. Dehghan, M.; Hajarian, M. Finite iterative algorithms for the reflexive and anti-reflexive solutions of the matrix equation A1XB1 + A2XB2 = C. Math. Comput. Model. 2009, 49, 1937–1959. [Google Scholar] [CrossRef]
  6. Dehghan, M.; Hajarian, M. The general coupled matrix equations over generalized bisymmetric matrices. Linear Algebra Appl. 2010, 432, 1531–1552. [Google Scholar] [CrossRef]
  7. Dehghan, M.; Hajarian, M. Two algorithms for finding the Hermitian reflexive and skew-Hermitian solutions of Sylvester matrix equations. Appl. Math. Lett. 2011, 24, 444–449. [Google Scholar] [CrossRef]
  8. Wang, Q.W.; van der Woude, J.W.; Chang, H.X. A system of real quaternion matrix equations with applications. Linear Algebra Appl. 2009, 431, 2291–2303. [Google Scholar] [CrossRef]
  9. Xu, Q.X. Common Hermitian and positive solutions to the adjointable operator equations AX = C, XB = D. Linear Algebra Appl. 2008, 429, 1–11. [Google Scholar] [CrossRef]
  10. Yuan, S.F.; Liao, A.P.; Wei, L. On solution of quaternion matrix equation AXB + CYD = E. Far East J. Appl. Math. 2008, 33, 369–385. [Google Scholar]
  11. Yuan, S.F.; Liao, A.P.; Lei, Y. Least squares Hermitian solution of the matrix equation (AXB, CXD) = (E, F) with the least norm over the skew field of quaternions. Math. Comput. Model. 2008, 48, 91–100. [Google Scholar] [CrossRef]
  12. Mitra, S.K. Common solutions to a pair of linear matrix equations A1XB1 = C1, A2XB2 = C2. Math. Proc. Camb. Philos. Soc. 1973, 74, 213–216. [Google Scholar] [CrossRef]
  13. Wang, Q.W.; He, Z.H. Some matrix equations with applications. Linear Multilinear Algebra 2012, 60, 1327–1353. [Google Scholar] [CrossRef]
  14. Duan, X.F.; Liao, A.P. On the existence of Hermitian positive definite solutions of the matrix equation Xs + A*X−tA = Q. Linear Algebra Appl. 2008, 429, 673–687. [Google Scholar] [CrossRef]
  15. Duan, X.F.; Peng, Z.Y.; Duan, F.J. Positive definite solution of two kinds of nonlinear matrix equations. Surv. Math. Appl. 2009, 4, 179–190. [Google Scholar]
  16. Tian, Y. The solvability of two linear matrix equations. Linear Multilinear Algebra 2000, 48, 123–147. [Google Scholar] [CrossRef]
  17. Roth, W.E. The equations AX − YB = C and AX − XB = C in matrices. Proc. Am. Math. Soc. 1952, 3, 392–396. [Google Scholar] [CrossRef]
  18. Lancaster, P.; Tismenetsky, M. The Theory of Matrices: With Applications; Academic Press: New York, NY, USA, 1985. [Google Scholar]
  19. Dmytryshyn, A.; Kågström, B. Coupled Sylvester-type matrix equations and block diagonalization. SIAM J. Matrix Anal. Appl. 2015, 36, 580–593. [Google Scholar] [CrossRef]
  20. Xu, G.P.; Wei, M.S.; Zheng, D.S. On solutions of matrix equation AXB + CYD = F. Linear Algebra Appl. 1998, 279, 93–109. [Google Scholar] [CrossRef]
  21. Baksalary, J.K.; Kala, R. The matrix equation AXB + CYD = E. Linear Algebra Appl. 1980, 30, 141–147. [Google Scholar] [CrossRef]
  22. Konstantinov, M.M.; Gu, D.W.; Mehrmann, V.; Petkov, P.H. Perturbation Theory for Matrix Equations; Gulf Professional Publishing: Oxford, UK, 2003; Volume 9. [Google Scholar]
  23. Tian, Y.G.; Takane, Y. On consistency, natural restrictions and estimability under classical and extended growth curve models. J. Stat. Plan. Inference 2009, 139, 2445–2458. [Google Scholar] [CrossRef]
  24. Douglas, R.G. On majorization, factorization and range inclusion of operators in Hilbert space. Proc. Am. Math. Soc. 1966, 17, 413–415. [Google Scholar] [CrossRef]
  25. Boussaid, A.; Lombarkia, F. Hermitian solutions to the equation AXA* + BYB* = C, for Hilbert space operators. Ser. Math. Inform. 2021, 36, 1–14. [Google Scholar]
  26. Deng, C. On the solutions of operator equation CAX = C = XAC. J. Math. Anal. Appl. 2013, 398, 664–670. [Google Scholar] [CrossRef]
  27. Djordjević, D.S. Explicit solution of the operator equation A*X + X*A = B. J. Comput. Appl. Math. 2007, 200, 701–704. [Google Scholar] [CrossRef]
  28. Stanković, H. Solvability of AiXBi = Ci, i = 1 , 2 ¯ , with applications to inequality C * AXB. Doc. Math. 2023, 4, 275–283. [Google Scholar]
  29. Xu, Q.; Sheng, L.; Gu, Y. The solutions to some operator equation. Linear Algebra Appl. 2008, 429, 1997–2024. [Google Scholar] [CrossRef]
  30. Cvetković-Ilić, D.S.; Wang, Q.W.; Xu, Q.X. Douglas’ + Sebestyén’s lemmas = a tool for solving an operator equation problem. J. Math. Anal. Appl. 2019, 482, 123599. [Google Scholar] [CrossRef]
  31. Vosough, M.; Moslehian, M.S. Solutions of the system of operator equations AXB = B = BXA via ★−order. Electron. J. Linear Algebra 2017, 32, 172–183. [Google Scholar] [CrossRef]
  32. Zhang, X.; Ji, G. Solutions to the system of operator equations AXB = C = BXA. Acta Math. Sci. 2018, 38, 1143–1150. [Google Scholar] [CrossRef]
  33. Cvetković-Ilić, D.S. Note on the assumptions in working with generalized inverses. Appl. Math. Comput. 2022, 432, 127359. [Google Scholar] [CrossRef]
  34. Nashed, M.Z. Inner, outer, and generalized inverses in Banach and Hilbert spaces. Numer. Funct. Anal. Optim. 1987, 9, 261–325. [Google Scholar] [CrossRef]
  35. Wang, H.; Huang, J.J.; Li, M.R. On the solutions of the operator equation XAX = BX. Linear Multilinear Algebra 2022, 70, 7753–7761. [Google Scholar] [CrossRef]
  36. Antezana, J.; Cano, C.; Mosconi, I.; Stojanoff, D. A note on the star order in Hilbert spaces. Linear Multilinear Algebra 2010, 58, 1037–1051. [Google Scholar] [CrossRef]
  37. Baksalary, J.K.; Baksalary, O.M.; Liu, X.J. Further properties of the star, left-star, right-star, and minus partial orderings. Linear Algebra Appl. 2003, 375, 83–94. [Google Scholar] [CrossRef]
  38. Arias, M.L.; Maestripieri, A. On partial orders of operators. Ann. Funct. Anal. 2023, 14, 1–19. [Google Scholar] [CrossRef]
  39. Drazin, M.P. Natural structures on semigroups with involution. Bull. Am. Math. Soc. 1978, 84, 139–141. [Google Scholar] [CrossRef]
  40. Hartwig, R.E.; Drazin, M.P. Lattice properties of the *-order for complex matrices. J. Math. Anal. Appl. 1982, 86, 359–378. [Google Scholar] [CrossRef]
  41. Xu, X.M.; Du, H.K.; Fang, X.C.; Li, Y. The supremum of linear operators for the ★-order. Linear Algebra Appl. 2010, 433, 2198–2207. [Google Scholar] [CrossRef]
  42. Penrose, R. A generalized inverse for matrices. Math. Proc. Camb. Philos. Soc. 1955, 51, 406–413. [Google Scholar] [CrossRef]
  43. Arias, M.L.; Gonzalez, M.C. Positive solutions to operator equations AXB = C. Linear Algebra Appl. 2010, 433, 1194–1202. [Google Scholar] [CrossRef]
  44. Radenković, J.N.; Cvetković-Ilić, D.; Xu, Q.X. Solvability of the system of operator equations AX = C, XB = D in Hilbert C*-modules. Acta Math. Sci. 2021, 12, 32. [Google Scholar] [CrossRef]
  45. Rehman, A.; Wang, Q.W.; Ali, I.; Akram, M.; Ahmad, M.O. A constraint system of generalized Sylvester quaternion matrix equations. Adv. Appl. Clifford Algebr. 2017, 27, 3183–3196. [Google Scholar] [CrossRef]
  46. Mehany, M.S.; Wang, Q.W. Three Symmetrical Systems of Coupled Sylvester-like Quaternion Matrix Equations. Symmetry 2022, 14, 550. [Google Scholar] [CrossRef]
  47. He, Z.H.; Wang, M. A quaternion matrix equations with two different restrictions. Adv. Appl. Clifford Algebr. 2021, 31, 1–30. [Google Scholar] [CrossRef]