Article

Algebraic Characterizations of Relationships between Different Linear Matrix Functions

Shanghai Business School, College of Business and Economics, Shanghai 201400, China
*
Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Mathematics 2023, 11(3), 756; https://doi.org/10.3390/math11030756
Submission received: 9 October 2022 / Revised: 18 January 2023 / Accepted: 28 January 2023 / Published: 2 February 2023
(This article belongs to the Special Issue Matrix Equations and Their Algorithms Analysis)

Abstract

Let f(X_1, X_2, …, X_k) be a matrix function over the field of complex numbers, where X_1, X_2, …, X_k are a family of matrices with variable entries. The purpose of this paper is to propose and investigate the relationships between certain linear matrix functions that regularly appear in matrix theory and its applications. We derive a series of meaningful necessary and sufficient conditions for the collections of values of two given matrix functions to be equal through the cogent use of some highly selective formulas and facts regarding ranks, ranges, and generalized inverses of block matrix operations. As applications, we discuss some concrete topics concerning the algebraic connections between the general solutions of a given linear matrix equation and its reduced equations.

1. Introduction

Throughout this paper, we adopt the following notation: C^{m×n} denotes the collection of all m × n matrices over the field of complex numbers; A^T and A^* denote the transpose and the conjugate transpose of A ∈ C^{m×n}, respectively; r(A) denotes the rank of a matrix A ∈ C^{m×n}; R(A) = { Ax | x ∈ C^n } denotes the range of a matrix A ∈ C^{m×n}; I_m denotes the identity matrix of order m; [A, B] denotes a partitioned matrix consisting of two submatrices A and B; the Moore–Penrose generalized inverse of a matrix A ∈ C^{m×n}, denoted by A^†, is defined as the unique matrix X ∈ C^{n×m} that satisfies the following four Penrose equations:
(1) AXA = A,  (2) XAX = X,  (3) (AX)^* = AX,  (4) (XA)^* = XA.
In addition, we denote by P_A = I_m − AA^† and Q_A = I_n − A^†A the two orthogonal projectors (Hermitian idempotent matrices) induced from A. For more detailed information regarding the generalized inverses of matrices, we refer the reader to [1,2,3,4]. The Kronecker product of any two matrices A ∈ C^{m×n} and B ∈ C^{p×q} is defined as A ⊗ B = (a_{ij}B). The vectorization operator applied to a matrix A = [a_1, …, a_n] is defined as vec(A) = [a_1^T, …, a_n^T]^T, where a_i denotes the i-th column of A, i = 1, 2, …, n. A well-known property of the vec operator for a triple matrix product is vec(AXB) = (B^T ⊗ A) vec(X); see, e.g., [5,6].
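The four Penrose equations and the vec identity above are easy to test numerically. The following is a small sanity-check sketch, not part of the original paper; it assumes NumPy, whose np.linalg.pinv computes the Moore–Penrose inverse A^†:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3)) + 1j * rng.standard_normal((4, 3))
X = np.linalg.pinv(A)  # Moore–Penrose inverse A†

# The four Penrose equations
assert np.allclose(A @ X @ A, A)              # (1) AXA = A
assert np.allclose(X @ A @ X, X)              # (2) XAX = X
assert np.allclose((A @ X).conj().T, A @ X)   # (3) (AX)* = AX
assert np.allclose((X @ A).conj().T, X @ A)   # (4) (XA)* = XA

# vec(AXB) = (B^T ⊗ A) vec(X), with vec stacking columns
B = rng.standard_normal((3, 5))
Xv = rng.standard_normal((3, 3))
vec = lambda M: M.reshape(-1, 1, order="F")   # column-major stacking
lhs = vec(A @ Xv @ B)
rhs = np.kron(B.T, A) @ vec(Xv)
assert np.allclose(lhs, rhs)
```

Note that the vec identity uses the plain transpose B^T (not the conjugate transpose), even over the complex field.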
Consider a matrix function
$Z = f(X_1, X_2, \ldots, X_k),$ (2)
where X_1, X_2, …, X_k are matrices of appropriate sizes with variable entries from the field of complex numbers, and Z denotes the value of the matrix function at the k variable matrices. Further, we denote the collection of all possible values of the function corresponding to the k variable matrices by
$D_f = \{ Z \mid Z = f(X_1, X_2, \ldots, X_k) \},$ (3)
and call it the domain of the matrix function. Given a matrix function as such, algebraists would like to know its algebraic properties and behavior and then employ them when solving problems related to matrix functions in theoretical and computational mathematics.
Given such a matrix function, there are some fundamental questions that we may necessarily ask:
(I)
When is the matrix value of (2) unique with respect to all the variable matrices X_1, X_2, …, X_k?
(II)
What is the solvability condition of the matrix equation f(X_1, X_2, …, X_k) = 0, and what is the general solution of f(X_1, X_2, …, X_k) = 0 when it is solvable?
(III)
Given two matrix functions f(·) and g(·) of the same size with the domains D_f and D_g, respectively, what are the necessary and sufficient conditions for the four statements
$D_f \cap D_g \neq \varnothing, \quad D_f \subseteq D_g, \quad D_f \supseteq D_g, \quad D_f = D_g$ (4)
to hold, respectively?
Theoretically speaking, concrete matrix functions can be arbitrarily constructed according to various ordinary algebraic operations of matrices, while algebraists can propose or encounter numerous specified matrix functions when dealing with theoretical and computational problems in matrix analysis and its applications. In comparison, linear matrix functions (LMFs) are a class of simple forms of all matrix functions, and they can be routinely defined according to the additions and multiplications of matrices. Let us just mention here a typical example of LMFs:
$f(X_1, X_2, \ldots, X_k) = A + B_1 X_1 C_1 + B_2 X_2 C_2 + \cdots + B_k X_k C_k,$ (5)
where A ∈ C^{m×n}, B_i ∈ C^{m×p_i}, and C_i ∈ C^{q_i×n} are given, and X_i ∈ C^{p_i×q_i} are variable matrices, i = 1, 2, …, k. Correspondingly, the domain of the LMF is denoted by
$D_f = \{ Z = A + B_1 X_1 C_1 + B_2 X_2 C_2 + \cdots + B_k X_k C_k \mid X_i \in C^{p_i \times q_i},\ i = 1, 2, \ldots, k \}.$ (6)
The LMF in (5) includes many simple and well-known matrix expressions of this kind with variable entries as its special cases, such as A + BX, A + BXC, and A + BX + YC, as well as various partially specified matrices, such as
$\begin{bmatrix} A & B \\ C & * \end{bmatrix}, \quad \begin{bmatrix} A & * \\ * & D \end{bmatrix}, \quad \begin{bmatrix} A & * \\ * & * \end{bmatrix},$
where the symbol * denotes an unspecified submatrix (cf. [7,8,9,10,11]).
Nowadays, we have some powerful algebraic tools and techniques for characterizing relationships between different matrix functions and matrix equations. Among them are the simple but surprisingly effective matrix rank method, as well as the matrix range method and the block matrix representation method. The purpose of this paper is to propose and study some fundamental problems related to the domains of certain specified cases of (5) through the organized employment of some known formulas and facts for ranges and ranks of matrices. As applications, we also discuss the connections among general solutions of some linear matrix equations and their reduced linear matrix equations. This paper is organized as follows. In Section 2, we give some preliminary results and facts about generalized inverses, rank formulas, and matrix equations. In Section 3, we first present some known and new results and facts about the relationships between two matrix sets generated from (5), and then we discuss the relationships between general solutions of some well-known linear matrix equations that occur in linear algebra, matrix analysis, and their applications. Section 4 discusses the relationships between the general solutions of some basic linear matrix equations and their reduced equations.

2. Notation and Some Preliminary Results

As we know from linear algebra and matrix theory, a matrix is null if and only if its rank is zero. As a direct consequence of this elementary fact, two given matrices A and B of the same size are equal if and only if r(A − B) = 0. In view of this fact, if certain nontrivial algebraic formulas for calculating the rank of the difference A − B are obtained, we can utilize them to describe essential links between the two matrices and, especially, to characterize the matrix equality A = B in a convenient manner. A solid underpinning of this proposed method is that we are really able to determine or compute the rank of a matrix through various elementary operations of matrices and to obtain analytical formulas for expressing the ranks of matrices in many cases. Recall, in addition, that block matrices and matrix equations are two types of basic conceptual objects in linear algebra. Correspondingly, the matrix rank method (for short, MRM), the block matrix representation method (for short, BMRM), and the matrix equation method (for short, MEM) are three basic and traditional analytic tools and techniques that are extensively employed in matrix theory and its applications because they give algebraists the capacity to construct and analyze various complicated matrix expressions and matrix equalities in a subtle and computationally tractable way. On the other hand, it has been realized since the 1960s that generalized inverses of matrices can be adopted to derive numerous exact and analytical expansion formulas for calculating the ranks of block matrices. In the following, we present a series of seminal equalities and facts about the ranks of matrices and matrix equations.
Lemma 1 
([12]). Let A ∈ C^{m×n}, B ∈ C^{m×k}, and C ∈ C^{l×n}. Then,
$r[A, B] = r(A) + r(P_A B) = r(B) + r(P_B A),$ (7)
$r\begin{bmatrix} A \\ C \end{bmatrix} = r(A) + r(C Q_A) = r(C) + r(A Q_C),$ (8)
$r\begin{bmatrix} A & B \\ C & 0 \end{bmatrix} = r(B) + r(C) + r(P_B A Q_C).$ (9)
In particular, the following results hold.
(a)
$r[A, B] = r(A) \Leftrightarrow R(A) \supseteq R(B) \Leftrightarrow A A^{\dagger} B = B \Leftrightarrow P_A B = 0.$
(b)
$r\begin{bmatrix} A \\ C \end{bmatrix} = r(A) \Leftrightarrow R(C^*) \subseteq R(A^*) \Leftrightarrow C A^{\dagger} A = C \Leftrightarrow C Q_A = 0.$
(c)
$r\begin{bmatrix} A & B \\ C & 0 \end{bmatrix} = r(B) + r(C) \Leftrightarrow P_B A Q_C = 0.$
Lemma 2 
([13]). Let $A_i \in C^{m \times n_i}$, and denote $\widehat{A}_i = [A_1, \ldots, A_{i-1}, A_{i+1}, \ldots, A_k]$, $i = 1, 2, \ldots, k$. Then,
$(k-1)\, r[A_1, A_2, \ldots, A_k] + \dim\!\left( R(\widehat{A}_1) \cap R(\widehat{A}_2) \cap \cdots \cap R(\widehat{A}_k) \right) = r(\widehat{A}_1) + r(\widehat{A}_2) + \cdots + r(\widehat{A}_k).$ (10)
In particular, the following three statements are equivalent:
(a)
$r[A_1, A_2, \ldots, A_k] = r(A_1) + r(A_2) + \cdots + r(A_k)$.
(b)
$(k-1)\, r[A_1, A_2, \ldots, A_k] = r(\widehat{A}_1) + r(\widehat{A}_2) + \cdots + r(\widehat{A}_k)$.
(c)
$R(\widehat{A}_1) \cap R(\widehat{A}_2) \cap \cdots \cap R(\widehat{A}_k) = \{0\}$.
Lemma 3 
([14]). Let
$A X = B$ (11)
be a given linear matrix equation, where A ∈ C^{m×n} and B ∈ C^{m×p} are known matrices, and X ∈ C^{n×p} is an unknown matrix. Then, the following four statements are equivalent:
(a)
Equation (11) is solvable for X .
(b)
$R(A) \supseteq R(B)$.
(c)
r [ A , B ] = r ( A ) .
(d)
$A A^{\dagger} B = B$.
In this case, the general solution of the equation in (11) can be written in the parametric form
$X = A^{\dagger} B + Q_A U,$ (12)
where $U \in C^{n \times p}$ is an arbitrary matrix. In particular, (11) holds for all matrices $X \in C^{n \times p}$ if and only if both $A = 0$ and $B = 0$, or equivalently, $[A, B] = 0$.
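Lemma 3's solvability test and parametric general solution can be illustrated as follows (a hedged NumPy sketch, not from the paper; B is constructed from a known solution so that (11) is solvable):

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, p = 4, 6, 3
A = rng.standard_normal((m, n))
X0 = rng.standard_normal((n, p))
B = A @ X0                     # guarantees AX = B is solvable

Ad = np.linalg.pinv(A)         # A†
Q_A = np.eye(n) - Ad @ A       # Q_A = I_n - A†A

# solvability criterion (d): A A† B = B
assert np.allclose(A @ Ad @ B, B)

# every choice of U in (12) gives a solution X = A†B + Q_A U,
# because A Q_A = A - A A†A = 0
for _ in range(5):
    U = rng.standard_normal((n, p))
    X = Ad @ B + Q_A @ U
    assert np.allclose(A @ X, B)
```

The key design point is that Q_A projects onto the null space of A, so the term Q_A U sweeps out exactly the homogeneous solutions.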
Lemma 4 
([14]). Let
$A X B = C$ (13)
be a two-sided linear matrix equation, where A ∈ C^{m×n}, B ∈ C^{p×q}, and C ∈ C^{m×q} are given, and X ∈ C^{n×p} is an unknown matrix.
Then, the following four statements are equivalent:
(a)
Equation (13) is solvable for X .
(b)
Both R ( A ) R ( C ) and R ( B * ) R ( C * ) .
(c)
Both $r[A, C] = r(A)$ and $r\begin{bmatrix} B \\ C \end{bmatrix} = r(B)$.
(d)
$A A^{\dagger} C B^{\dagger} B = C$.
In this case, the general solution X of (13) can be written in the parametric form $X = A^{\dagger} C B^{\dagger} + Q_A U + V P_B$, where $U, V \in C^{n \times p}$ are two arbitrary matrices. In particular, (13) holds for all matrices $X \in C^{n \times p}$ if and only if
$[A, C] = 0 \quad or \quad \begin{bmatrix} B \\ C \end{bmatrix} = 0.$ (14)
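Analogously, Lemma 4's criterion and general solution for the two-sided equation can be sketched numerically (NumPy assumed; C is built from a known solution so that (13) is solvable by construction):

```python
import numpy as np

rng = np.random.default_rng(3)
m, n, p, q = 4, 3, 5, 4
A = rng.standard_normal((m, n))
B = rng.standard_normal((p, q))
X0 = rng.standard_normal((n, p))
C = A @ X0 @ B                 # AXB = C is solvable by construction

Ad, Bd = np.linalg.pinv(A), np.linalg.pinv(B)

# solvability criterion (d): A A† C B† B = C
assert np.allclose(A @ Ad @ C @ Bd @ B, C)

# general solution X = A†CB† + Q_A U + V P_B
Q_A = np.eye(n) - Ad @ A       # kills the U-term: A Q_A = 0
P_B = np.eye(p) - B @ Bd       # kills the V-term: P_B B = 0
U, V = rng.standard_normal((n, p)), rng.standard_normal((n, p))
X = Ad @ C @ Bd + Q_A @ U + V @ P_B
assert np.allclose(A @ X @ B, C)
```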
Lemma 5 
([15]). The linear matrix equation
$A_1 X_1 + X_2 B_2 = C$ (15)
is solvable for the two unknown matrices X 1 and X 2 of appropriate sizes if and only if
$r\begin{bmatrix} C & A_1 \\ B_2 & 0 \end{bmatrix} = r(A_1) + r(B_2)$ (16)
holds, or equivalently,
$P_{A_1} C Q_{B_2} = 0$ (17)
holds. In particular, (15) holds for all matrices $X_1$ and $X_2$ if and only if $\begin{bmatrix} C & A_1 \\ B_2 & 0 \end{bmatrix} = 0$.
Lemma 6 
([16]). The linear matrix equation
$A_1 X_1 B_1 + A_2 X_2 B_2 = C$ (18)
is solvable for the two unknown matrices $X_1$ and $X_2$ of appropriate sizes if and only if the following four matrix rank equalities
$r[C, A_1, A_2] = r[A_1, A_2], \quad r\begin{bmatrix} C & A_1 \\ B_2 & 0 \end{bmatrix} = r(A_1) + r(B_2),$ (19)
$r\begin{bmatrix} C & A_2 \\ B_1 & 0 \end{bmatrix} = r(A_2) + r(B_1), \quad r\begin{bmatrix} C \\ B_1 \\ B_2 \end{bmatrix} = r\begin{bmatrix} B_1 \\ B_2 \end{bmatrix}$ (20)
hold, or equivalently, the following four matrix equalities
$P_A C = 0, \quad P_{A_1} C Q_{B_2} = 0, \quad P_{A_2} C Q_{B_1} = 0, \quad C Q_B = 0$ (21)
hold, where $A = [A_1, A_2]$ and $B = \begin{bmatrix} B_1 \\ B_2 \end{bmatrix}$.
Lemma 7 
([17,18]). Equation (18) holds for all matrices X 1 and X 2 of appropriate sizes if and only if any one of the following four block matrix equalities
$[C, A_1, A_2] = 0, \quad \begin{bmatrix} C & A_1 \\ B_2 & 0 \end{bmatrix} = 0, \quad \begin{bmatrix} C & A_2 \\ B_1 & 0 \end{bmatrix} = 0, \quad \begin{bmatrix} C \\ B_1 \\ B_2 \end{bmatrix} = 0$ (22)
holds.
Lemma 8 
([19]). The linear matrix equation
$A_1 X_1 + X_2 B_2 + A_3 X_3 B_3 + A_4 X_4 B_4 = C$ (23)
is solvable for the four unknown matrices X 1 , X 2 , X 3 , and X 4 of appropriate sizes if and only if the following four matrix rank equalities hold:
$r\begin{bmatrix} C & A_1 & A_3 & A_4 \\ B_2 & 0 & 0 & 0 \end{bmatrix} = r[A_1, A_3, A_4] + r(B_2),$ (24)
$r\begin{bmatrix} C & A_1 & A_3 \\ B_2 & 0 & 0 \\ B_4 & 0 & 0 \end{bmatrix} = r\begin{bmatrix} B_2 \\ B_4 \end{bmatrix} + r[A_1, A_3],$ (25)
$r\begin{bmatrix} C & A_1 & A_4 \\ B_2 & 0 & 0 \\ B_3 & 0 & 0 \end{bmatrix} = r\begin{bmatrix} B_2 \\ B_3 \end{bmatrix} + r[A_1, A_4],$ (26)
$r\begin{bmatrix} C & A_1 \\ B_2 & 0 \\ B_3 & 0 \\ B_4 & 0 \end{bmatrix} = r\begin{bmatrix} B_2 \\ B_3 \\ B_4 \end{bmatrix} + r(A_1).$ (27)

3. Main Results

We start by presenting two groups of fundamental results and facts regarding the relationships between two matrix sets generated from the two simplest cases in (5).
Lemma 9 
([20]). Assume that two LMFs and their domains are given by
$D_1 = \{ A_1 + B_1 X_1 \mid X_1 \in C^{p_1 \times n} \}$ and $D_2 = \{ A_2 + B_2 X_2 \mid X_2 \in C^{p_2 \times n} \},$ (28)
where A_1, A_2 ∈ C^{m×n}, B_1 ∈ C^{m×p_1}, and B_2 ∈ C^{m×p_2} are known matrices, and X_1 ∈ C^{p_1×n} and X_2 ∈ C^{p_2×n} are variable matrices. Then, we have the following results.
(a)
$D_1 \cap D_2 \neq \varnothing$, i.e., there exist $X_1$ and $X_2$ such that $A_1 + B_1 X_1 = A_2 + B_2 X_2$, if and only if $R(A_1 - A_2) \subseteq R[B_1, B_2]$.
(b)
$D_1 \subseteq D_2$ if and only if $R[A_1 - A_2, B_1] \subseteq R(B_2)$.
(c)
$D_1 = D_2$ if and only if $R(A_1 - A_2) \subseteq R(B_1) = R(B_2)$.
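Lemma 9(a) can be illustrated by forcing a common value of the two LMFs and then checking the range condition as a rank equality (a NumPy sketch on a constructed instance, not part of the paper):

```python
import numpy as np

rng = np.random.default_rng(4)
m, n, p1, p2 = 5, 4, 2, 2
A1 = rng.standard_normal((m, n))
B1 = rng.standard_normal((m, p1))
B2 = rng.standard_normal((m, p2))
X1 = rng.standard_normal((p1, n))
X2 = rng.standard_normal((p2, n))
# force a common value: A2 := A1 + B1 X1 - B2 X2, so that
# A1 + B1 X1 = A2 + B2 X2 and hence D1 ∩ D2 ≠ ∅
A2 = A1 + B1 @ X1 - B2 @ X2

r = np.linalg.matrix_rank
# R(A1 - A2) ⊆ R[B1, B2]  ⇔  r[B1, B2, A1 - A2] = r[B1, B2]
lhs = r(np.hstack([B1, B2, A1 - A2]))
rhs = r(np.hstack([B1, B2]))
assert lhs == rhs
```

Here A1 − A2 = −B1 X1 + B2 X2 lies in R[B1, B2] by construction, which is exactly what the rank equality detects.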
Lemma 10 
([20]). Assume that two LMFs and their domains are given by
$D_1 = \{ A_1 + B_1 X_1 C_1 \mid X_1 \in C^{p_1 \times q_1} \}$ and $D_2 = \{ A_2 + B_2 X_2 C_2 \mid X_2 \in C^{p_2 \times q_2} \},$ (29)
where A_i ∈ C^{m×n}, B_i ∈ C^{m×p_i}, and C_i ∈ C^{q_i×n} are known matrices, and X_i ∈ C^{p_i×q_i} are variable matrices, i = 1, 2. Then, we have the following results.
(a)
D 1 D 2 if and only if the following four conditions hold:
$R(A_1 - A_2) \subseteq R[B_1, B_2], \quad R(A_1^* - A_2^*) \subseteq R[C_1^*, C_2^*], \quad r\begin{bmatrix} A_1 - A_2 & B_1 \\ C_2 & 0 \end{bmatrix} = r(B_1) + r(C_2), \quad r\begin{bmatrix} A_1 - A_2 & B_2 \\ C_1 & 0 \end{bmatrix} = r(B_2) + r(C_1).$
(b)
D 1 D 2 if and only if any one of the following three conditions holds:
(i)
$R[A_1 - A_2, B_1] \subseteq R(B_2)$ and $R[A_1^* - A_2^*, C_1^*] \subseteq R(C_2^*)$.
(ii)
$B_1 = 0$, $R(A_1 - A_2) \subseteq R(B_2)$, and $R(A_1^* - A_2^*) \subseteq R(C_2^*)$.
(iii)
$C_1 = 0$, $R(A_1 - A_2) \subseteq R(B_2)$, and $R(A_1^* - A_2^*) \subseteq R(C_2^*)$.
(c)
D 1 = D 2 if and only if any one of the following five conditions holds:
(i)
$R(A_1 - A_2) \subseteq R(B_1) = R(B_2)$ and $R(A_1^* - A_2^*) \subseteq R(C_1^*) = R(C_2^*)$.
(ii)
A 1 = A 2 , B 1 = 0 , and B 2 = 0 .
(iii)
A 1 = A 2 , B 1 = 0 , and C 2 = 0 .
(iv)
A 1 = A 2 , B 2 = 0 , and C 1 = 0 .
(v)
A 1 = A 2 , C 1 = 0 , and C 2 = 0 .
As an extension of these known facts, we proceed to derive the following results and facts about relationships between the domains of two general matrix functions, which we shall use in the latter part of the article.
Theorem 1.
Assume that two LMFs and their domains are given by
$D_1 = \{ A_1 + B_1 X_1 + Y_1 C_1 \mid X_1 \in C^{p_1 \times n},\ Y_1 \in C^{m \times q_1} \},$ (30)
$D_2 = \{ A_2 + B_2 X_2 C_2 + D_2 Y_2 E_2 \mid X_2 \in C^{s_2 \times t_2},\ Y_2 \in C^{u_2 \times v_2} \},$ (31)
where A_1 ∈ C^{m×n}, B_1 ∈ C^{m×p_1}, C_1 ∈ C^{q_1×n}, A_2 ∈ C^{m×n}, B_2 ∈ C^{m×s_2}, C_2 ∈ C^{t_2×n}, D_2 ∈ C^{m×u_2}, and E_2 ∈ C^{v_2×n} are known matrices. Then, we have the following results.
(a)
$D_1 \cap D_2 \neq \varnothing$ if and only if the following four conditions hold:
$r\begin{bmatrix} A_2 - A_1 & B_1 & B_2 & D_2 \\ C_1 & 0 & 0 & 0 \end{bmatrix} = r[B_1, B_2, D_2] + r(C_1),$ (32)
$r\begin{bmatrix} A_2 - A_1 & B_1 & B_2 \\ C_1 & 0 & 0 \\ E_2 & 0 & 0 \end{bmatrix} = r\begin{bmatrix} C_1 \\ E_2 \end{bmatrix} + r[B_1, B_2],$ (33)
$r\begin{bmatrix} A_2 - A_1 & B_1 & D_2 \\ C_1 & 0 & 0 \\ C_2 & 0 & 0 \end{bmatrix} = r\begin{bmatrix} C_1 \\ C_2 \end{bmatrix} + r[B_1, D_2],$ (34)
$r\begin{bmatrix} A_2 - A_1 & B_1 \\ C_1 & 0 \\ C_2 & 0 \\ E_2 & 0 \end{bmatrix} = r\begin{bmatrix} C_1 \\ C_2 \\ E_2 \end{bmatrix} + r(B_1).$ (35)
(b)
$D_1 \supseteq D_2$ if and only if any one of the following four conditions holds:
$r\begin{bmatrix} A_2 - A_1 & B_1 & B_2 & D_2 \\ C_1 & 0 & 0 & 0 \end{bmatrix} = r(B_1) + r(C_1),$ (36)
$r\begin{bmatrix} A_2 - A_1 & B_1 & B_2 \\ C_1 & 0 & 0 \\ E_2 & 0 & 0 \end{bmatrix} = r(B_1) + r(C_1),$ (37)
$r\begin{bmatrix} A_2 - A_1 & B_1 & D_2 \\ C_1 & 0 & 0 \\ C_2 & 0 & 0 \end{bmatrix} = r(B_1) + r(C_1),$ (38)
$r\begin{bmatrix} A_2 - A_1 & B_1 \\ C_1 & 0 \\ C_2 & 0 \\ E_2 & 0 \end{bmatrix} = r(B_1) + r(C_1).$ (39)
(c)
$D_1 \subseteq D_2$ if and only if the following four groups of conditions hold:
$r[B_2, D_2] = m \quad or \quad r\begin{bmatrix} A_1 - A_2 & B_1 & B_2 & D_2 \\ C_1 & 0 & 0 & 0 \end{bmatrix} = r[B_2, D_2],$ (40)
$r(B_2) = m \quad or \quad r(E_2) = n \quad or \quad r\begin{bmatrix} A_1 - A_2 & B_1 & B_2 \\ C_1 & 0 & 0 \\ E_2 & 0 & 0 \end{bmatrix} = r(B_2) + r(E_2),$ (41)
$r(C_2) = n \quad or \quad r(D_2) = m \quad or \quad r\begin{bmatrix} A_1 - A_2 & B_1 & D_2 \\ C_1 & 0 & 0 \\ C_2 & 0 & 0 \end{bmatrix} = r(C_2) + r(D_2),$ (42)
$r\begin{bmatrix} C_2 \\ E_2 \end{bmatrix} = n \quad or \quad r\begin{bmatrix} A_1 - A_2 & B_1 \\ C_1 & 0 \\ C_2 & 0 \\ E_2 & 0 \end{bmatrix} = r\begin{bmatrix} C_2 \\ E_2 \end{bmatrix}.$ (43)
(d)
D 1 = D 2 if and only if both (b) and (c) hold.
Proof. 
The condition $D_1 \cap D_2 \neq \varnothing$ is obviously equivalent to $A_1 + B_1 X_1 + Y_1 C_1 = A_2 + B_2 X_2 C_2 + D_2 Y_2 E_2$ for some variable matrices $X_1$, $Y_1$, $X_2$, and $Y_2$. We rewrite this equation as
$B_1 X_1 + Y_1 C_1 - B_2 X_2 C_2 - D_2 Y_2 E_2 = A_2 - A_1.$ (44)
In this case, applying Lemma 8 to (44) leads to (a).
By (15)–(17) and (44), the condition $D_1 \supseteq D_2$ is equivalent to the fact that
$P_{B_1}(A_2 - A_1) Q_{C_1} + P_{B_1} B_2 X_2 C_2 Q_{C_1} + P_{B_1} D_2 Y_2 E_2 Q_{C_1} = 0$ (45)
holds for all variable matrices $X_2$ and $Y_2$. Further, by Lemma 7, the matrix equality in (45) holds for all variable matrices $X_2$ and $Y_2$ if and only if any one of the following four matrix equalities,
$[P_{B_1}(A_2 - A_1) Q_{C_1},\ P_{B_1} B_2,\ P_{B_1} D_2] = 0,$ (46)
$\begin{bmatrix} P_{B_1}(A_2 - A_1) Q_{C_1} & P_{B_1} B_2 \\ E_2 Q_{C_1} & 0 \end{bmatrix} = 0,$ (47)
$\begin{bmatrix} P_{B_1}(A_2 - A_1) Q_{C_1} & P_{B_1} D_2 \\ C_2 Q_{C_1} & 0 \end{bmatrix} = 0,$ (48)
$\begin{bmatrix} P_{B_1}(A_2 - A_1) Q_{C_1} \\ C_2 Q_{C_1} \\ E_2 Q_{C_1} \end{bmatrix} = 0,$ (49)
holds, where
$r[P_{B_1}(A_2 - A_1) Q_{C_1},\ P_{B_1} B_2,\ P_{B_1} D_2] = r\begin{bmatrix} A_2 - A_1 & B_1 & B_2 & D_2 \\ C_1 & 0 & 0 & 0 \end{bmatrix} - r(B_1) - r(C_1),$
$r\begin{bmatrix} P_{B_1}(A_2 - A_1) Q_{C_1} & P_{B_1} B_2 \\ E_2 Q_{C_1} & 0 \end{bmatrix} = r\begin{bmatrix} A_2 - A_1 & B_1 & B_2 \\ C_1 & 0 & 0 \\ E_2 & 0 & 0 \end{bmatrix} - r(B_1) - r(C_1),$
$r\begin{bmatrix} P_{B_1}(A_2 - A_1) Q_{C_1} & P_{B_1} D_2 \\ C_2 Q_{C_1} & 0 \end{bmatrix} = r\begin{bmatrix} A_2 - A_1 & B_1 & D_2 \\ C_1 & 0 & 0 \\ C_2 & 0 & 0 \end{bmatrix} - r(B_1) - r(C_1),$
$r\begin{bmatrix} P_{B_1}(A_2 - A_1) Q_{C_1} \\ C_2 Q_{C_1} \\ E_2 Q_{C_1} \end{bmatrix} = r\begin{bmatrix} A_2 - A_1 & B_1 \\ C_1 & 0 \\ C_2 & 0 \\ E_2 & 0 \end{bmatrix} - r(B_1) - r(C_1)$
hold by Lemma 1(c). Substituting these four matrix rank equalities into (46)–(49) leads to the equivalences of (36)–(39) and (46)–(49), respectively.
Applying (19)–(21) to (44), we see that the condition $D_1 \subseteq D_2$ holds if and only if each of the following four equations,
$P_G(A_1 - A_2) + P_G B_1 X_1 + P_G Y_1 C_1 = 0,$ (50)
$P_{B_2}(A_1 - A_2) Q_{E_2} + P_{B_2} B_1 X_1 Q_{E_2} + P_{B_2} Y_1 C_1 Q_{E_2} = 0,$ (51)
$P_{D_2}(A_1 - A_2) Q_{C_2} + P_{D_2} B_1 X_1 Q_{C_2} + P_{D_2} Y_1 C_1 Q_{C_2} = 0,$ (52)
$(A_1 - A_2) Q_H + B_1 X_1 Q_H + Y_1 C_1 Q_H = 0,$ (53)
holds for all matrices $X_1$ and $Y_1$, where $G = [B_2, D_2]$ and $H = \begin{bmatrix} C_2 \\ E_2 \end{bmatrix}$. By Lemma 7, the matrix equality in (50) holds for all matrices $X_1$ and $Y_1$ if and only if any one of the following two conditions holds:
$P_G = 0$ or $r\begin{bmatrix} P_G(A_1 - A_2) & P_G B_1 \\ C_1 & 0 \end{bmatrix} = 0,$
which are further equivalent to
$r[B_2, D_2] = m$ or $r\begin{bmatrix} A_1 - A_2 & B_1 & B_2 & D_2 \\ C_1 & 0 & 0 & 0 \end{bmatrix} = r[B_2, D_2]$
by (7), (9), and Lemma 1(a) and (c), as is required for (40); the matrix equality in (51) holds for all matrices $X_1$ and $Y_1$ if and only if any one of the following three conditions holds:
$P_{B_2} = 0$ or $r\begin{bmatrix} P_{B_2}(A_1 - A_2) Q_{E_2} & P_{B_2} B_1 \\ C_1 Q_{E_2} & 0 \end{bmatrix} = 0$ or $Q_{E_2} = 0,$
which are further equivalent to
$r(B_2) = m$ or $r\begin{bmatrix} A_1 - A_2 & B_1 & B_2 \\ C_1 & 0 & 0 \\ E_2 & 0 & 0 \end{bmatrix} = r(B_2) + r(E_2)$ or $r(E_2) = n$
by (7), (9), and Lemma 1(a) and (c), thus establishing (41); the matrix equality in (52) holds for all matrices $X_1$ and $Y_1$ if and only if any one of the following three conditions holds:
$P_{D_2} = 0$ or $r\begin{bmatrix} P_{D_2}(A_1 - A_2) Q_{C_2} & P_{D_2} B_1 \\ C_1 Q_{C_2} & 0 \end{bmatrix} = 0$ or $Q_{C_2} = 0.$
These three equalities are further equivalent to
$r(D_2) = m$ or $r\begin{bmatrix} A_1 - A_2 & B_1 & D_2 \\ C_1 & 0 & 0 \\ C_2 & 0 & 0 \end{bmatrix} = r(C_2) + r(D_2)$ or $r(C_2) = n$
by (7), (9), and Lemma 1(a) and (c), as is required for (42); the matrix equality in (53) holds for all matrices $X_1$ and $Y_1$ if and only if any one of the following two conditions holds:
$Q_H = 0$ or $r\begin{bmatrix} (A_1 - A_2) Q_H & B_1 \\ C_1 Q_H & 0 \end{bmatrix} = 0,$
which are further equivalent to
$r\begin{bmatrix} C_2 \\ E_2 \end{bmatrix} = n$ or $r\begin{bmatrix} A_1 - A_2 & B_1 \\ C_1 & 0 \\ C_2 & 0 \\ E_2 & 0 \end{bmatrix} = r\begin{bmatrix} C_2 \\ E_2 \end{bmatrix}$
by (8), (9), and Lemma 1(b) and (c), thus establishing (43). □
It has been well known since Penrose [14] that general solutions of linear matrix equations can be derived and represented by certain algebraic linear matrix expressions that are composed of the given matrices in the matrix equations and their generalized inverses. In view of this basic fact, we are able to employ the preceding formulas, results, and facts to describe and characterize various possible relationships between solutions of different linear matrix equations.
We remark that there exist many types of linear matrix equations for which we can represent their general solutions in certain explicit linear matrix functions, as given in (54). In this section, we present a selection of results and facts on the relationships between certain linear transformations of the solutions of some fundamental linear matrix equations.
Theorem 2.
Assume that the two linear matrix equations
$A_1 X_1 = B_1$ and $A_2 X_2 = B_2$ (54)
are solvable for $X_1$ and $X_2$, respectively, where A_i ∈ C^{m_i×n_i} and B_i ∈ C^{m_i×p} are given, i = 1, 2. We also denote
$D_1 = \{ S_1 X_1 + T_1 \mid A_1 X_1 = B_1 \}$ and $D_2 = \{ S_2 X_2 + T_2 \mid A_2 X_2 = B_2 \},$ (55)
where S_i ∈ C^{s×n_i} and T_i ∈ C^{s×p} are given, i = 1, 2. Then, we have the following results.
(a)
$D_1 \cap D_2 \neq \varnothing$ if and only if $r\begin{bmatrix} S_1 & S_2 & T_1 - T_2 \\ A_1 & 0 & -B_1 \\ 0 & A_2 & B_2 \end{bmatrix} = r\begin{bmatrix} S_1 & S_2 \\ A_1 & 0 \\ 0 & A_2 \end{bmatrix}$.
(b)
$D_1 \subseteq D_2$ if and only if $r\begin{bmatrix} S_1 & S_2 & T_1 - T_2 \\ A_1 & 0 & -B_1 \\ 0 & A_2 & B_2 \end{bmatrix} = r\begin{bmatrix} S_2 \\ A_2 \end{bmatrix} + r(A_1)$.
(c)
$D_1 = D_2$ if and only if $r\begin{bmatrix} S_1 & S_2 & T_1 - T_2 \\ A_1 & 0 & -B_1 \\ 0 & A_2 & B_2 \end{bmatrix} = r\begin{bmatrix} S_1 \\ A_1 \end{bmatrix} + r(A_2) = r\begin{bmatrix} S_2 \\ A_2 \end{bmatrix} + r(A_1)$.
Proof. 
By Lemma 3, the general solutions of the two linear matrix equations in (54) can be expressed as
$X_1 = A_1^{\dagger} B_1 + Q_{A_1} U_1, \quad X_2 = A_2^{\dagger} B_2 + Q_{A_2} U_2,$ (56)
where $U_1 \in C^{n_1 \times p}$ and $U_2 \in C^{n_2 \times p}$ are arbitrary matrices. Then, the two sets in (55) can be represented as
$D_1 = \{ S_1 A_1^{\dagger} B_1 + S_1 Q_{A_1} U_1 + T_1 \}$ and $D_2 = \{ S_2 A_2^{\dagger} B_2 + S_2 Q_{A_2} U_2 + T_2 \}.$ (57)
Applying Lemma 9(a) to (57), we obtain that $D_1 \cap D_2 \neq \varnothing$ if and only if
$r[S_1 Q_{A_1},\ S_2 Q_{A_2},\ S_1 A_1^{\dagger} B_1 - S_2 A_2^{\dagger} B_2 + T_1 - T_2] = r[S_1 Q_{A_1},\ S_2 Q_{A_2}],$ (58)
where by (8),
$r[S_1 Q_{A_1},\ S_2 Q_{A_2},\ S_1 A_1^{\dagger} B_1 - S_2 A_2^{\dagger} B_2 + T_1 - T_2] = r\begin{bmatrix} S_1 & S_2 & S_1 A_1^{\dagger} B_1 - S_2 A_2^{\dagger} B_2 + T_1 - T_2 \\ A_1 & 0 & 0 \\ 0 & A_2 & 0 \end{bmatrix} - r(A_1) - r(A_2) = r\begin{bmatrix} S_1 & S_2 & T_1 - T_2 \\ A_1 & 0 & -B_1 \\ 0 & A_2 & B_2 \end{bmatrix} - r(A_1) - r(A_2),$ (59)
$r[S_1 Q_{A_1},\ S_2 Q_{A_2}] = r\begin{bmatrix} S_1 & S_2 \\ A_1 & 0 \\ 0 & A_2 \end{bmatrix} - r(A_1) - r(A_2).$ (60)
Substitution of (59) and (60) into (58) yields
$r\begin{bmatrix} S_1 & S_2 & T_1 - T_2 \\ A_1 & 0 & -B_1 \\ 0 & A_2 & B_2 \end{bmatrix} = r\begin{bmatrix} S_1 & S_2 \\ A_1 & 0 \\ 0 & A_2 \end{bmatrix},$
thus establishing (a).
Applying Lemma 9(b) to (57), we obtain that $D_1 \subseteq D_2$ if and only if
$r[S_1 Q_{A_1},\ S_2 Q_{A_2},\ S_1 A_1^{\dagger} B_1 - S_2 A_2^{\dagger} B_2 + T_1 - T_2] = r(S_2 Q_{A_2}),$ (61)
where by (8), the following rank equality
$r(S_2 Q_{A_2}) = r\begin{bmatrix} S_2 \\ A_2 \end{bmatrix} - r(A_2)$ (62)
holds. Substitution of (59) and (62) into (61) yields Result (b). With a similar approach, we obtain that $D_1 \supseteq D_2$ if and only if
$r\begin{bmatrix} S_1 & S_2 & T_1 - T_2 \\ A_1 & 0 & -B_1 \\ 0 & A_2 & B_2 \end{bmatrix} = r\begin{bmatrix} S_1 \\ A_1 \end{bmatrix} + r(A_2).$ (63)
Combining this matrix rank equality with (b) leads to (c). □
Corollary 1.
Assume that A 1 X 1 = B 1 and A 2 X 2 = B 2 in (54) are solvable for X 1 and X 2 , respectively, and denote
$D_1 = \{ X_1 \mid A_1 X_1 = B_1 \}$ and $D_2 = \{ X_2 \mid A_2 X_2 = B_2 \}.$
Then, we have the following results.
(a)
The two equations in (54) have a common solution if and only if $r\begin{bmatrix} A_1 & B_1 \\ A_2 & B_2 \end{bmatrix} = r\begin{bmatrix} A_1 \\ A_2 \end{bmatrix}$, i.e., $R\begin{bmatrix} B_1 \\ B_2 \end{bmatrix} \subseteq R\begin{bmatrix} A_1 \\ A_2 \end{bmatrix}$.
(b)
$D_1 \subseteq D_2$ if and only if $r\begin{bmatrix} A_1 & B_1 \\ A_2 & B_2 \end{bmatrix} = r(A_1)$, i.e., $R\begin{bmatrix} B_1 \\ B_2 \end{bmatrix} \subseteq R\begin{bmatrix} A_1 \\ A_2 \end{bmatrix}$ and $R(A_2^*) \subseteq R(A_1^*)$.
(c)
$D_1 = D_2$ if and only if $r\begin{bmatrix} A_1 & B_1 \\ A_2 & B_2 \end{bmatrix} = r(A_1) = r(A_2)$, i.e., $R\begin{bmatrix} B_1 \\ B_2 \end{bmatrix} \subseteq R\begin{bmatrix} A_1 \\ A_2 \end{bmatrix}$ and $R(A_2^*) = R(A_1^*)$.
Corollary 2.
Let A ∈ C^{m×n} and B ∈ C^{m×p} be given, and suppose that A X = B is solvable for X ∈ C^{n×p}. In addition, we denote
$D_1 = \{ S X \mid A X = B \}$ and $D_2 = \{ S X \mid M A X = M B \},$ (64)
where M ∈ C^{t×m} and S ∈ C^{s×n} are two given matrices. Then, the following results hold.
(a)
$D_1 \subseteq D_2$ always holds.
(b)
$D_1 = D_2$ if and only if $r\begin{bmatrix} M A \\ S \end{bmatrix} = r\begin{bmatrix} A \\ S \end{bmatrix} + r(M A) - r(A)$.
Corollary 3.
Let A ∈ C^{m×n} and B ∈ C^{m×p} be given, and suppose that A X = B is solvable for X ∈ C^{n×p}. In addition, we denote
$D_1 = \{ X \mid A X = B \}$ and $D_2 = \{ X \mid M A X = M B \},$ (65)
where M ∈ C^{s×m}. Then, we have the following results.
(a)
$D_1 \subseteq D_2$ always holds.
(b)
D 1 = D 2 if and only if r ( M A ) = r ( A ) .
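Corollary 3(b) can be checked numerically: when r(MA) = r(A), the two solution sets coincide, which is visible in the projectors Q_A and Q_{MA} that parametrize the general solutions. The following is a NumPy sketch with a generic tall M (an assumption of the example, chosen so that the rank condition holds):

```python
import numpy as np

rng = np.random.default_rng(5)
m, n, p, s = 4, 3, 2, 6
A = rng.standard_normal((m, n))
X0 = rng.standard_normal((n, p))
B = A @ X0                       # AX = B is solvable
M = rng.standard_normal((s, m))  # tall generic M: r(MA) = r(A)

r = np.linalg.matrix_rank
assert r(M @ A) == r(A)          # condition of Corollary 3(b)

# r(MA) = r(A) forces N(MA) = N(A), so the projectors onto the
# null spaces coincide, and so do the minimum-norm solutions
QA  = np.eye(n) - np.linalg.pinv(A) @ A
QMA = np.eye(n) - np.linalg.pinv(M @ A) @ (M @ A)
assert np.allclose(QA, QMA)
assert np.allclose(np.linalg.pinv(A) @ B, np.linalg.pinv(M @ A) @ (M @ B))
```

Equal null-space projectors and equal particular solutions together mean the two parametric solution sets of Lemma 3 describe the same set.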

4. Relationships between Solutions of Some Linear Matrix Equations and Their Reduced Equations

Let us partition the matrix equation in (11) as
$A X = A_1 X_1 + A_2 X_2 + \cdots + A_k X_k = B,$ (66)
where $A_i \in C^{m \times n_i}$, $A = [A_1, A_2, \ldots, A_k]$, and $X_i \in C^{n_i \times p}$ are unknown matrices with $X = [X_1^T, X_2^T, \ldots, X_k^T]^T$ and $n = n_1 + n_2 + \cdots + n_k$, $i = 1, 2, \ldots, k$. In this case, pre-multiplying (66) by $P_{Y_i}$ yields the following reduced linear matrix equations:
$P_{Y_i} A X = P_{Y_i} A_i X_i = P_{Y_i} B, \quad i = 1, 2, \ldots, k,$ (67)
where $Y_i = [A_1, \ldots, A_{i-1}, 0, A_{i+1}, \ldots, A_k]$, $i = 1, 2, \ldots, k$. Now, assume that (66) is solvable for X. Then, the equations in (67) are solvable for $X_i$. Correspondingly, we denote
$D_i = \{ X_i \mid A_1 X_1 + A_2 X_2 + \cdots + A_k X_k = B \}$ and $H_i = \{ X_i \mid P_{Y_i} A_i X_i = P_{Y_i} B \}$ (68)
for the matrix sets composed of the partial solutions $X_i$ of (66) and (67), respectively, $i = 1, 2, \ldots, k$, and denote
$D = \{ X \mid A X = B \}$ and $H = \{ [X_1^T, X_2^T, \ldots, X_k^T]^T \mid P_{Y_i} A_i X_i = P_{Y_i} B,\ i = 1, 2, \ldots, k \}.$ (69)
In this section, we first discuss the relationships between D i and H i in (68), i = 1 , 2 , , k , as well as the two sets in (69).
Theorem 3.
Assume that the matrix equation in (66) is solvable for $X$, and let $D_i$ and $H_i$ be as given in (68), $i = 1, 2, \ldots, k$. Then, the matrix set equalities
$D_i = H_i$ (70)
always hold, $i = 1, 2, \ldots, k$.
Proof. 
Set $S = [0, \ldots, 0, I_{n_i}, 0, \ldots, 0]$ (with $I_{n_i}$ in the i-th block position) and $M = P_{Y_i}$ in (64), $i = 1, 2, \ldots, k$. Then, by (11) and simplifications, we obtain that
$r\begin{bmatrix} P_{Y_i} A \\ S \end{bmatrix} - r\begin{bmatrix} A \\ S \end{bmatrix} - r(P_{Y_i} A_i) + r(A) = r\begin{bmatrix} A & Y_i \\ S & 0 \end{bmatrix} - r\begin{bmatrix} A \\ S \end{bmatrix} - r[Y_i, A_i] + r(A) = r\begin{bmatrix} 0 & Y_i \\ S & 0 \end{bmatrix} - r\begin{bmatrix} Y_i \\ S \end{bmatrix} - r(A) + r(A) = 0$
for $i = 1, 2, \ldots, k$. Thus, (70) holds by Corollary 2. □
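Theorem 3 can be illustrated for k = 2: the X_1-block of any solution of (66) solves the reduced equation (67), and conversely every solution of the reduced equation extends to a full solution. The following NumPy sketch (not part of the paper) checks both directions on a constructed solvable instance:

```python
import numpy as np

rng = np.random.default_rng(6)
m, n1, n2, p = 6, 2, 3, 2
A1 = rng.standard_normal((m, n1))
A2 = rng.standard_normal((m, n2))
A = np.hstack([A1, A2])
X0 = rng.standard_normal((n1 + n2, p))
B = A @ X0                                   # (66) is solvable

# reduced equation for X1: P_{Y1} A1 X1 = P_{Y1} B with Y1 = [0, A2],
# so that R(Y1) = R(A2) and P_{Y1} = I - A2 A2†
P_Y1 = np.eye(m) - A2 @ np.linalg.pinv(A2)
C1 = P_Y1 @ A1

# direction 1: the X1-block of a solution of (66) solves the reduced equation
X = X0
assert np.allclose(C1 @ X[:n1], P_Y1 @ B)

# direction 2: a solution X1 of the reduced equation extends to a full
# solution, i.e., A2 X2 = B - A1 X1 is solvable for X2
X1 = np.linalg.pinv(C1) @ (P_Y1 @ B)         # a particular reduced solution
R = B - A1 @ X1
assert np.allclose(A2 @ np.linalg.pinv(A2) @ R, R)
```

Direction 2 works because P_{Y1}(B − A1 X1) = 0 says exactly that the residual B − A1 X1 lies in R(A2).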
Theorem 4.
Assume that the matrix equation in (66) is solvable for $X$, and let $Y_i$, $D$, and $H$ be as given in (67) and (69), respectively, $i = 1, 2, \ldots, k$. Then, we have the following results.
(a)
$D \subseteq H$ always holds.
(b)
The following four statements are equivalent:
(i)
D = H .
(ii)
$(k-1)\, r(A) = r(Y_1) + r(Y_2) + \cdots + r(Y_k)$.
(iii)
$r(A) = r(A_1) + r(A_2) + \cdots + r(A_k)$.
(iv)
$R(Y_1) \cap R(Y_2) \cap \cdots \cap R(Y_k) = \{0\}$.
Proof. 
By Lemma 3, the general solutions of (67) are given by
$X_i = (P_{Y_i} A_i)^{\dagger} P_{Y_i} B + (I_{n_i} - (P_{Y_i} A_i)^{\dagger} (P_{Y_i} A_i)) U_i,$ (71)
where $U_i \in C^{n_i \times p}$ are arbitrary, $i = 1, 2, \ldots, k$. Substitution of (12) and (71) into (69) yields
$D = \{ A^{\dagger} B + Q_A U \},$ (72)
$H = \left\{ \begin{bmatrix} (P_{Y_1} A_1)^{\dagger} P_{Y_1} B \\ \vdots \\ (P_{Y_k} A_k)^{\dagger} P_{Y_k} B \end{bmatrix} + \begin{bmatrix} I_{n_1} - (P_{Y_1} A_1)^{\dagger} (P_{Y_1} A_1) & & 0 \\ & \ddots & \\ 0 & & I_{n_k} - (P_{Y_k} A_k)^{\dagger} (P_{Y_k} A_k) \end{bmatrix} \begin{bmatrix} U_1 \\ \vdots \\ U_k \end{bmatrix} \right\}.$ (73)
Applying Lemma 9(b) to (72) and (73), we see that $D \subseteq H$ if and only if
$r\left[ A^{\dagger} B - \begin{bmatrix} (P_{Y_1} A_1)^{\dagger} P_{Y_1} B \\ \vdots \\ (P_{Y_k} A_k)^{\dagger} P_{Y_k} B \end{bmatrix},\ Q_A,\ G \right] = r(G),$ (74)
where G denotes the block diagonal matrix in (73). By (8) and elementary block matrix operations, we obtain
$r\left[ A^{\dagger} B - \begin{bmatrix} (P_{Y_1} A_1)^{\dagger} P_{Y_1} B \\ \vdots \\ (P_{Y_k} A_k)^{\dagger} P_{Y_k} B \end{bmatrix},\ Q_A,\ G \right] = n - r(P_{Y_1} A_1) - \cdots - r(P_{Y_k} A_k)$ (75)
and
$r(G) = n - r(P_{Y_1} A_1) - \cdots - r(P_{Y_k} A_k).$ (76)
Both (75) and (76) mean that (74) is an identity, thus establishing (a). Substitution of (71) into (66) gives
$(A_1 - A_1 (P_{Y_1} A_1)^{\dagger} (P_{Y_1} A_1)) U_1 + \cdots + (A_k - A_k (P_{Y_k} A_k)^{\dagger} (P_{Y_k} A_k)) U_k = B - A_1 (P_{Y_1} A_1)^{\dagger} P_{Y_1} B - \cdots - A_k (P_{Y_k} A_k)^{\dagger} P_{Y_k} B.$ (77)
It is obvious that $D \supseteq H$ holds if and only if the matrix equation in (77) holds for all $U_1, \ldots, U_k$, which, by Lemma 3, is equivalent to
$[\, B - A_1 (P_{Y_1} A_1)^{\dagger} P_{Y_1} B - \cdots - A_k (P_{Y_k} A_k)^{\dagger} P_{Y_k} B,\ A_1 - A_1 (P_{Y_1} A_1)^{\dagger} (P_{Y_1} A_1),\ \ldots,\ A_k - A_k (P_{Y_k} A_k)^{\dagger} (P_{Y_k} A_k) \,] = 0,$ (78)
where by (8), we obtain
$r[\, B - A_1 (P_{Y_1} A_1)^{\dagger} P_{Y_1} B - \cdots - A_k (P_{Y_k} A_k)^{\dagger} P_{Y_k} B,\ A_1 - A_1 (P_{Y_1} A_1)^{\dagger} (P_{Y_1} A_1),\ \ldots,\ A_k - A_k (P_{Y_k} A_k)^{\dagger} (P_{Y_k} A_k) \,] = r\begin{bmatrix} B & A_1 & \cdots & A_k \\ P_{Y_1} B & P_{Y_1} A_1 & & 0 \\ \vdots & & \ddots & \\ P_{Y_k} B & 0 & & P_{Y_k} A_k \end{bmatrix} - r(P_{Y_1} A_1) - \cdots - r(P_{Y_k} A_k) = r(Y_1) + \cdots + r(Y_k) - (k-1)\, r(A).$
Substituting this into (78), we see that (78) is equivalent to the rank equality $(k-1)\, r(A) = r(Y_1) + \cdots + r(Y_k)$. Combining this fact with (a) leads to the equivalence of (i) and (ii) in Result (b). The equivalences of (ii), (iii), and (iv) in Result (b) follow from Lemma 2. □
Equation (18) is well known in matrix theory and its applications, while the solvability condition and the general solution of this equation were precisely established by implementing certain calculations of the ranks, ranges, and generalized inverses of the given matrices in this matrix equation; see, e.g., [15,16,18,21,22] and the relevant literature quoted there.
It is easy to see that we can construct from (18) some smaller or transformed linear matrix equations. For instance, pre- and post-multiplying (18) by $P_{A_i}$ and $Q_{B_i}$, respectively, yields the following four reduced matrix equations:
$P_{A_2}(A_1 X_1 B_1 + A_2 X_2 B_2) = P_{A_2} A_1 X_1 B_1 = P_{A_2} C,$ (79)
$P_{A_1}(A_1 X_1 B_1 + A_2 X_2 B_2) = P_{A_1} A_2 X_2 B_2 = P_{A_1} C,$ (80)
$(A_1 X_1 B_1 + A_2 X_2 B_2) Q_{B_2} = A_1 X_1 B_1 Q_{B_2} = C Q_{B_2},$ (81)
$(A_1 X_1 B_1 + A_2 X_2 B_2) Q_{B_1} = A_2 X_2 B_2 Q_{B_1} = C Q_{B_1},$ (82)
respectively. Each of (79)–(82) is consistent as well if the matrix equation in (18) is consistent. Concerning the relationships among the solutions of (18) and (79)–(82), we have the following results.
Theorem 5.
Assume that the matrix equation in (18) is solvable for X 1 and X 2 , and we denote by
$D = \{ (X_1, X_2) \mid A_1 X_1 B_1 + A_2 X_2 B_2 = C \},$ (83)
$H_1 = \{ (X_1, X_2) \mid P_{A_2} A_1 X_1 B_1 = P_{A_2} C \ and \ P_{A_1} A_2 X_2 B_2 = P_{A_1} C \},$ (84)
$H_2 = \{ (X_1, X_2) \mid P_{A_2} A_1 X_1 B_1 = P_{A_2} C \ and \ A_2 X_2 B_2 Q_{B_1} = C Q_{B_1} \},$ (85)
$H_3 = \{ (X_1, X_2) \mid A_1 X_1 B_1 Q_{B_2} = C Q_{B_2} \ and \ P_{A_1} A_2 X_2 B_2 = P_{A_1} C \},$ (86)
$H_4 = \{ (X_1, X_2) \mid A_1 X_1 B_1 Q_{B_2} = C Q_{B_2} \ and \ A_2 X_2 B_2 Q_{B_1} = C Q_{B_1} \},$ (87)
the collections of all pairs of solutions of (18) and (79)–(82), respectively. Then, we have the following results.
(a)
$D \subseteq H_i$ always holds, $i = 1, 2, 3, 4$.
(b)
$D = H_1$ if and only if $R(A_1) \cap R(A_2) = \{0\}$ or $[B_1^*, B_2^*] = 0$.
(c)
$D = H_2$ if and only if $A_2 = 0$, or $B_1 = 0$, or $R(A_1) \cap R(A_2) = \{0\}$ and $R(B_1^*) \cap R(B_2^*) = \{0\}$.
(d)
$D = H_3$ if and only if $A_1 = 0$, or $B_2 = 0$, or $R(A_1) \cap R(A_2) = \{0\}$ and $R(B_1^*) \cap R(B_2^*) = \{0\}$.
(e)
$D = H_4$ if and only if $[A_1, A_2] = 0$ or $R(B_1^*) \cap R(B_2^*) = \{0\}$.
Proof. 
Result (a) follows directly from (79)–(82). By Lemma 4, the general solutions of (79)–(82) are given by
$X_1 = (P_{A_2} A_1)^{\dagger} P_{A_2} C B_1^{\dagger} + (I_{p_1} - (P_{A_2} A_1)^{\dagger} (P_{A_2} A_1)) U_1 + V_1 (I_{q_1} - B_1 B_1^{\dagger}),$ (88)
$X_2 = (P_{A_1} A_2)^{\dagger} P_{A_1} C B_2^{\dagger} + (I_{p_2} - (P_{A_1} A_2)^{\dagger} (P_{A_1} A_2)) U_2 + V_2 (I_{q_2} - B_2 B_2^{\dagger}),$ (89)
$X_1 = A_1^{\dagger} C Q_{B_2} (B_1 Q_{B_2})^{\dagger} + (I_{p_1} - A_1^{\dagger} A_1) U_3 + V_3 (I_{q_1} - (B_1 Q_{B_2})(B_1 Q_{B_2})^{\dagger}),$ (90)
$X_2 = A_2^{\dagger} C Q_{B_1} (B_2 Q_{B_1})^{\dagger} + (I_{p_2} - A_2^{\dagger} A_2) U_4 + V_4 (I_{q_2} - (B_2 Q_{B_1})(B_2 Q_{B_1})^{\dagger}),$ (91)
respectively, where $U_i$ and $V_i$ are arbitrary matrices, $i = 1, 2, 3, 4$. Substitution of (88)–(91) into (18) gives the following four matrix equations:
$A_1 (I_{p_1} - (P_{A_2} A_1)^{\dagger} (P_{A_2} A_1)) U_1 B_1 + A_2 (I_{p_2} - (P_{A_1} A_2)^{\dagger} (P_{A_1} A_2)) U_2 B_2 = C - A_1 (P_{A_2} A_1)^{\dagger} P_{A_2} C - A_2 (P_{A_1} A_2)^{\dagger} P_{A_1} C,$ (92)
$A_1 (I_{p_1} - (P_{A_2} A_1)^{\dagger} (P_{A_2} A_1)) U_1 B_1 + A_2 V_4 (I_{q_2} - (B_2 Q_{B_1})(B_2 Q_{B_1})^{\dagger}) B_2 = C - A_1 (P_{A_2} A_1)^{\dagger} P_{A_2} C - C Q_{B_1} (B_2 Q_{B_1})^{\dagger} B_2,$ (93)
$A_1 V_3 (I_{q_1} - (B_1 Q_{B_2})(B_1 Q_{B_2})^{\dagger}) B_1 + A_2 (I_{p_2} - (P_{A_1} A_2)^{\dagger} (P_{A_1} A_2)) U_2 B_2 = C - C Q_{B_2} (B_1 Q_{B_2})^{\dagger} B_1 - A_2 (P_{A_1} A_2)^{\dagger} P_{A_1} C,$ (94)
$A_1 V_3 (I_{q_1} - (B_1 Q_{B_2})(B_1 Q_{B_2})^{\dagger}) B_1 + A_2 V_4 (I_{q_2} - (B_2 Q_{B_1})(B_2 Q_{B_1})^{\dagger}) B_2 = C - C Q_{B_2} (B_1 Q_{B_2})^{\dagger} B_1 - C Q_{B_1} (B_2 Q_{B_1})^{\dagger} B_2,$ (95)
respectively. By Lemma 7, the equality in (92) holds for all $U_1$ and $U_2$ if and only if any one of the following four equalities holds:
$$[\,A_1(I_{p_1} - (P_{A_2}A_1)^{\dagger}(P_{A_2}A_1)),\ A_2(I_{p_2} - (P_{A_1}A_2)^{\dagger}(P_{A_1}A_2)),\ C - A_1(P_{A_2}A_1)^{\dagger}P_{A_2}C - A_2(P_{A_1}A_2)^{\dagger}P_{A_1}C\,] = 0, \quad (96)$$
$$\begin{bmatrix} C - A_1(P_{A_2}A_1)^{\dagger}P_{A_2}C - A_2(P_{A_1}A_2)^{\dagger}P_{A_1}C & A_1(I_{p_1} - (P_{A_2}A_1)^{\dagger}(P_{A_2}A_1)) \\ B_2 & 0 \end{bmatrix} = 0, \quad (97)$$
$$\begin{bmatrix} C - A_1(P_{A_2}A_1)^{\dagger}P_{A_2}C - A_2(P_{A_1}A_2)^{\dagger}P_{A_1}C & A_2(I_{p_2} - (P_{A_1}A_2)^{\dagger}(P_{A_1}A_2)) \\ B_1 & 0 \end{bmatrix} = 0, \quad (98)$$
$$\begin{bmatrix} C - A_1(P_{A_2}A_1)^{\dagger}P_{A_2}C - A_2(P_{A_1}A_2)^{\dagger}P_{A_1}C \\ B_1 \\ B_2 \end{bmatrix} = 0. \quad (99)$$
In addition, it is easy to verify that the ranks of the left-hand sides of (96)–(99) are given by
$$r[\,A_1(I_{p_1} - (P_{A_2}A_1)^{\dagger}(P_{A_2}A_1)),\ A_2(I_{p_2} - (P_{A_1}A_2)^{\dagger}(P_{A_1}A_2)),\ C - A_1(P_{A_2}A_1)^{\dagger}P_{A_2}C - A_2(P_{A_1}A_2)^{\dagger}P_{A_1}C\,] = r(A_1) + r(A_2) - r[A_1, A_2], \quad (101)$$
$$r\begin{bmatrix} C - A_1(P_{A_2}A_1)^{\dagger}P_{A_2}C - A_2(P_{A_1}A_2)^{\dagger}P_{A_1}C & A_1(I_{p_1} - (P_{A_2}A_1)^{\dagger}(P_{A_2}A_1)) \\ B_2 & 0 \end{bmatrix} = r(A_1) + r(A_2) - r[A_1, A_2] + r(B_2), \quad (102)$$
$$r\begin{bmatrix} C - A_1(P_{A_2}A_1)^{\dagger}P_{A_2}C - A_2(P_{A_1}A_2)^{\dagger}P_{A_1}C & A_2(I_{p_2} - (P_{A_1}A_2)^{\dagger}(P_{A_1}A_2)) \\ B_1 & 0 \end{bmatrix} = r(A_1) + r(A_2) - r[A_1, A_2] + r(B_1), \quad (103)$$
$$r\begin{bmatrix} C - A_1(P_{A_2}A_1)^{\dagger}P_{A_2}C - A_2(P_{A_1}A_2)^{\dagger}P_{A_1}C \\ B_1 \\ B_2 \end{bmatrix} = r\begin{bmatrix} B_1 \\ B_2 \end{bmatrix}. \quad (104)$$
Combining (96)–(99) with (101)–(104) leads to the equivalence in (b).
By Lemma 7, the equality in (93) holds for all $U_1$ and $V_4$ if and only if any one of the following four equalities holds:
$$[\,C - A_1(P_{A_2}A_1)^{\dagger}P_{A_2}C - A_2(P_{A_1}A_2)^{\dagger}P_{A_1}C,\ A_1(I_{p_1} - (P_{A_2}A_1)^{\dagger}(P_{A_2}A_1)),\ A_2\,] = 0, \quad (105)$$
$$\begin{bmatrix} C - A_1(P_{A_2}A_1)^{\dagger}P_{A_2}CB_1^{\dagger}B_1 - A_2A_2^{\dagger}CQ_{B_1}(B_2Q_{B_1})^{\dagger}B_2 & A_1(I_{p_1} - (P_{A_2}A_1)^{\dagger}(P_{A_2}A_1)) \\ (I_{q_2} - (B_2Q_{B_1})(B_2Q_{B_1})^{\dagger})B_2 & 0 \end{bmatrix} = 0, \quad (106)$$
$$\begin{bmatrix} C - A_1(P_{A_2}A_1)^{\dagger}P_{A_2}CB_1^{\dagger}B_1 - A_2A_2^{\dagger}CQ_{B_1}(B_2Q_{B_1})^{\dagger}B_2 & A_1 \\ B_2 & 0 \end{bmatrix} = 0, \quad (107)$$
$$\begin{bmatrix} C - A_1(P_{A_2}A_1)^{\dagger}P_{A_2}CB_1^{\dagger}B_1 - A_2A_2^{\dagger}CQ_{B_1}(B_2Q_{B_1})^{\dagger}B_2 \\ B_1 \\ (I_{q_2} - (B_2Q_{B_1})(B_2Q_{B_1})^{\dagger})B_2 \end{bmatrix} = 0. \quad (108)$$
In addition, it is easy to verify that the ranks of the left-hand sides of (105)–(108) are given by
$$r[\,C - A_1(P_{A_2}A_1)^{\dagger}P_{A_2}C - A_2(P_{A_1}A_2)^{\dagger}P_{A_1}C,\ A_1(I_{p_1} - (P_{A_2}A_1)^{\dagger}(P_{A_2}A_1)),\ A_2\,] = r(A_2), \quad (109)$$
$$r\begin{bmatrix} C - A_1(P_{A_2}A_1)^{\dagger}P_{A_2}CB_1^{\dagger}B_1 - A_2A_2^{\dagger}CQ_{B_1}(B_2Q_{B_1})^{\dagger}B_2 & A_1(I_{p_1} - (P_{A_2}A_1)^{\dagger}(P_{A_2}A_1)) \\ (I_{q_2} - (B_2Q_{B_1})(B_2Q_{B_1})^{\dagger})B_2 & 0 \end{bmatrix} = r(A_1) + r(A_2) + r(B_1) + r(B_2) - r[A_1, A_2] - r\begin{bmatrix} B_1 \\ B_2 \end{bmatrix}, \quad (110)$$
$$r\begin{bmatrix} C - A_1(P_{A_2}A_1)^{\dagger}P_{A_2}CB_1^{\dagger}B_1 - A_2A_2^{\dagger}CQ_{B_1}(B_2Q_{B_1})^{\dagger}B_2 & A_1 \\ B_2 & 0 \end{bmatrix} = r(A_1) + r(B_2), \quad (111)$$
$$r\begin{bmatrix} C - A_1(P_{A_2}A_1)^{\dagger}P_{A_2}CB_1^{\dagger}B_1 - A_2A_2^{\dagger}CQ_{B_1}(B_2Q_{B_1})^{\dagger}B_2 \\ B_1 \\ (I_{q_2} - (B_2Q_{B_1})(B_2Q_{B_1})^{\dagger})B_2 \end{bmatrix} = r(B_1). \quad (112)$$
Combining (105)–(108) with (109)–(112) leads to the equivalence in (c). Results (d) and (e) can be established with a similar approach. □
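The inclusions in result (a) rest only on the annihilation identities $P_{A_2}A_2 = 0$ and $B_1Q_{B_1} = 0$, and they are easy to illustrate numerically. The following NumPy sketch is our own illustration (names and dimensions are arbitrary choices): it checks that a solution pair of (18) satisfies the two reduced equations defining $H_2$.

```python
import numpy as np

rng = np.random.default_rng(2)
pinv = np.linalg.pinv

A1, A2 = rng.standard_normal((4, 3)), rng.standard_normal((4, 3))
B1, B2 = rng.standard_normal((3, 5)), rng.standard_normal((3, 5))
X1, X2 = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))
C = A1 @ X1 @ B1 + A2 @ X2 @ B2          # (X1, X2) is a pair in D

P_A2 = np.eye(4) - A2 @ pinv(A2)         # annihilates A2 from the left
Q_B1 = np.eye(5) - pinv(B1) @ B1         # annihilates B1 from the right

# (X1, X2) then lies in H_2: both reduced equations hold.
err1 = np.linalg.norm(P_A2 @ A1 @ X1 @ B1 - P_A2 @ C)
err2 = np.linalg.norm(A2 @ X2 @ B2 @ Q_B1 - C @ Q_B1)
print(err1, err2)                        # both are zero up to rounding
```

The converse inclusion is exactly what fails in general, which is why the equivalences in (b)–(e) require the extra range or nullity conditions.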
Theorem 6.
Assume that the matrix equation in (18) is solvable for X 1 and X 2 , and let
$$D_1 = \{X_1 \,|\, A_1X_1B_1 + A_2X_2B_2 = C\},$$
$$D_2 = \{X_2 \,|\, A_1X_1B_1 + A_2X_2B_2 = C\},$$
$$H_1 = \{X_1 \,|\, A_1X_1B_1 - A_2A_2^{\dagger}A_1X_1B_1B_2^{\dagger}B_2 = C - A_2A_2^{\dagger}CB_2^{\dagger}B_2\},$$
$$H_2 = \{X_2 \,|\, A_2X_2B_2 - A_1A_1^{\dagger}A_2X_2B_2B_1^{\dagger}B_1 = C - A_1A_1^{\dagger}CB_1^{\dagger}B_1\},$$
$$D = \{(X_1, X_2) \,|\, A_1X_1B_1 + A_2X_2B_2 = C\},$$
$$H = \{(X_1, X_2) \,|\, X_1 \in H_1 \ \mathrm{and} \ X_2 \in H_2\}.$$
Then, we have the following results.
(a)
The matrix set equalities $D_i = H_i$ always hold, $i = 1, 2$.
(b)
$D \subseteq H$ always holds.
(c)
$D = H$ if and only if $\mathscr{R}(B_1^{T} \otimes A_1) \cap \mathscr{R}(B_2^{T} \otimes A_2) = \{0\}$.
Proof. 
By the vec operation of matrices, (18) can be equivalently expressed as
$$(B_1^{T} \otimes A_1)\mathrm{vec}(X_1) + (B_2^{T} \otimes A_2)\mathrm{vec}(X_2) = \mathrm{vec}(C), \quad (119)$$
which is a special case of (66). Pre-multiplying (119) with $P_{B_i^{T} \otimes A_i}$, $i = 1, 2$, yields the following two reduced linear matrix equations:
$$P_{B_2^{T} \otimes A_2}(B_1^{T} \otimes A_1)\mathrm{vec}(X_1) = P_{B_2^{T} \otimes A_2}\mathrm{vec}(C), \quad P_{B_1^{T} \otimes A_1}(B_2^{T} \otimes A_2)\mathrm{vec}(X_2) = P_{B_1^{T} \otimes A_1}\mathrm{vec}(C). \quad (120)$$
Now, we denote
$$\widehat{D}_i = \{X_i \,|\, (B_1^{T} \otimes A_1)\mathrm{vec}(X_1) + (B_2^{T} \otimes A_2)\mathrm{vec}(X_2) = \mathrm{vec}(C)\}, \quad i = 1, 2, \quad (121)$$
$$\widehat{H}_1 = \{X_1 \,|\, P_{B_2^{T} \otimes A_2}(B_1^{T} \otimes A_1)\mathrm{vec}(X_1) = P_{B_2^{T} \otimes A_2}\mathrm{vec}(C)\}, \quad (122)$$
$$\widehat{H}_2 = \{X_2 \,|\, P_{B_1^{T} \otimes A_1}(B_2^{T} \otimes A_2)\mathrm{vec}(X_2) = P_{B_1^{T} \otimes A_1}\mathrm{vec}(C)\}. \quad (123)$$
Then, we obtain from Theorem 3 that
$$\widehat{D}_i = \widehat{H}_i, \quad i = 1, 2 \quad (124)$$
always hold. On the other hand, it is easy to verify that
$$P_{B_i^{T} \otimes A_i} = I_{mn} - (B_i^{T} \otimes A_i)(B_i^{T} \otimes A_i)^{\dagger} = I_{mn} - (B_i^{T} \otimes A_i)((B_i^{T})^{\dagger} \otimes A_i^{\dagger}) = I_{mn} - (B_i^{\dagger}B_i)^{T} \otimes A_iA_i^{\dagger}, \quad i = 1, 2,$$
and
$$P_{B_2^{T} \otimes A_2}(B_1^{T} \otimes A_1) = B_1^{T} \otimes A_1 - ((B_2^{\dagger}B_2)^{T} \otimes A_2A_2^{\dagger})(B_1^{T} \otimes A_1) = B_1^{T} \otimes A_1 - (B_1B_2^{\dagger}B_2)^{T} \otimes A_2A_2^{\dagger}A_1,$$
$$P_{B_1^{T} \otimes A_1}(B_2^{T} \otimes A_2) = B_2^{T} \otimes A_2 - ((B_1^{\dagger}B_1)^{T} \otimes A_1A_1^{\dagger})(B_2^{T} \otimes A_2) = B_2^{T} \otimes A_2 - (B_2B_1^{\dagger}B_1)^{T} \otimes A_1A_1^{\dagger}A_2$$
hold. Thus, rewritten in terms of the original matrices, the two equations in (120) are equivalent to
$$A_1X_1B_1 - A_2A_2^{\dagger}A_1X_1B_1B_2^{\dagger}B_2 = C - A_2A_2^{\dagger}CB_2^{\dagger}B_2, \quad A_2X_2B_2 - A_1A_1^{\dagger}A_2X_2B_2B_1^{\dagger}B_1 = C - A_1A_1^{\dagger}CB_1^{\dagger}B_1,$$
respectively. Hence, the two set equalities in (124) are equivalent to the set equalities in (a). Results (b) and (c) follow from applying Theorem 4 to (119). □
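The proof above leans on two standard Kronecker-product facts, $\mathrm{vec}(AXB) = (B^{T} \otimes A)\,\mathrm{vec}(X)$ and $(S \otimes T)^{\dagger} = S^{\dagger} \otimes T^{\dagger}$, which together give the factored form of $P_{B_i^{T} \otimes A_i}$ used in (120). A NumPy sketch of both identities (our own illustration; column-major vectorization, names ours):

```python
import numpy as np

rng = np.random.default_rng(3)
pinv, kron = np.linalg.pinv, np.kron

A = rng.standard_normal((4, 3))
B = rng.standard_normal((5, 6))
X = rng.standard_normal((3, 5))

vec = lambda M: M.flatten(order="F")       # column-stacking vec

# vec(A X B) = (B^T ⊗ A) vec(X)
e1 = np.linalg.norm(kron(B.T, A) @ vec(X) - vec(A @ X @ B))

# P_{B^T ⊗ A} = I - (B^† B)^T ⊗ A A^†, the factorization behind (120)
K = kron(B.T, A)                           # 24 x 15
P_direct = np.eye(K.shape[0]) - K @ pinv(K)
P_factored = np.eye(K.shape[0]) - kron((pinv(B) @ B).T, A @ pinv(A))
e2 = np.linalg.norm(P_direct - P_factored)

print(e1, e2)                              # both are zero up to rounding
```

The factored form is what allows the projected Kronecker equations in (120) to be devectorized back into the two matrix equations defining $H_1$ and $H_2$.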

5. Conclusions

In the preceding sections, we described and studied relationships between two different linear matrix functions and presented solutions to several concrete problems in this subject area. The work covers the principal cases that are often encountered in the theory of matrix functions and their applications, and the results obtained reveal intrinsic properties of some basic linear matrix functions and the connections between them. The derivations of the main results rest on precise algebraic calculations of the ranks of certain block matrices associated with the matrix equalities and matrix set inclusions under consideration, which avoid many of the complicated matrix operations that would otherwise arise in matrix expressions and matrix equalities. They thus demonstrate that the matrix rank method and the block matrix representation method are effective tools for solving matrix equality and matrix set inclusion problems; indeed, these two fundamental algebraic methods have long been recognized as reliable and efficient techniques in the investigation of matrix problems in theoretical and computational mathematics.
Finally, we remark that the results in this article offer insights into intrinsic links among different matrix functions that were not previously visible, and we expect this study to stimulate further exploration of the connections among domains of general matrix functions under various specified assumptions.

Author Contributions

Methodology, Y.T.; Validation, R.Y.; Formal analysis, Y.T. and R.Y.; Investigation, Y.T. and R.Y.; Writing—original draft, Y.T.; writing—review and editing, Y.T. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The two authors would like to thank the four referees for their helpful comments and suggestions on an earlier version of this article.

Conflicts of Interest

The authors declare no conflict of interest.

