Review

A Comprehensive Review on Solving the System of Equations AX = C and XB = D

1 Department of Mathematics, Shanghai University, Shanghai 200444, China
2 Collaborative Innovation Center for the Marine Artificial Intelligence, Shanghai 200444, China
* Author to whom correspondence should be addressed.
Symmetry 2025, 17(4), 625; https://doi.org/10.3390/sym17040625
Submission received: 31 March 2025 / Revised: 18 April 2025 / Accepted: 19 April 2025 / Published: 21 April 2025
(This article belongs to the Special Issue Mathematics: Feature Papers 2025)

Abstract: This survey reviews the theoretical research on the classic system of matrix equations A X = C and X B = D, which has wide-ranging applications across fields such as control theory, optimization, image processing, and robotics. The paper discusses various solution methods for the system, focusing on specialized approaches, including generalized inverse methods, matrix decomposition techniques, and solutions in the forms of Hermitian, extreme rank, reflexive, and conjugate solutions. Additionally, specialized solving methods for specific algebraic structures, such as Hilbert spaces, Hilbert C*-modules, and quaternions, are presented. The paper explores the existence conditions and explicit expressions for these solutions, along with examples of their application to color images.
MSC:
15A03; 15A09; 15A24; 15B33; 15B57; 65F10; 65F45

1. Introduction

Systems of equations, particularly
A X = C, X B = D, (1)
are essential tools in linear algebra and have widespread applications in diverse scientific and engineering disciplines. These equations often appear in various domains, such as control theory, optimization, image processing, system identification, and robotics [1,2,3,4,5,6,7]. Specifically, the matrix system A X = C , X B = D can represent the state-space model of a dynamic system, where A and B correspond to system transformations, X represents the system state, and C and D are the output matrices [8]. Solving these equations provides the system’s state at a given time. In signal processing, particularly in filter design and signal reconstruction, the filter matrix X transforms input signals A to output C, ensuring the transformed signal interacts correctly with B to produce output D [9]. This concept extends to image processing, where matrices A and B represent operations (e.g., encryption), and C and D are the original and transformed images. Solving for X gives the required transformation. In robotics and computer vision, this matrix system arises in rigid body transformations. Dual quaternions represent 3D transformations, where A and B may represent rotation and translation, and X is the transformation matrix [10]. This system is vital for solving inverse kinematics problems, such as determining the joint parameters of a robotic arm.
Given the wide range of applications of the system (1), it has been extensively studied, and a wealth of results has been obtained. This paper aims to summarize the theoretical results related to the matrix equation system (1), focusing mainly on the conditions for the existence of general solutions, least squares solutions, and minimum norm solutions, together with the corresponding expressions. The paper also highlights generalized inverse methods and matrix decomposition methods over the real and complex fields, as well as special solving methods for certain algebraic structures, such as Hilbert C*-modules, Hilbert spaces, rings, dual numbers, quaternions, split quaternions, and dual quaternions. The special solutions of the system (1) introduced in this article across various algebraic structures, together with their corresponding solution methods, are shown in Figure 1.
The most widely used and earliest approach for solving system (1) is based on generalized inverses or inner inverses. For special forms of solutions, such as Hermitian, non-negative definite, maximal and minimal rank solutions, and generalized (anti-)reflexive solutions, this class of method provides a rich theoretical framework. On the other hand, matrix decomposition is a powerful tool for solving more complex special forms of solutions. Due to the different forms resulting from various matrix decompositions, these special forms can be used to construct corresponding special solutions. Related research covers symmetric, mirror-symmetric, bi-(skew-)symmetric, and orthogonal solutions over the real numbers, as well as unitary, (semi-)positive definite, generalized reflexive, generalized conjugate, and Hamiltonian solutions over the complex numbers.
For certain special algebraic structures, there are specialized solving methods. For example, in Hilbert C*-modules, Hilbert spaces, and rings, inner inverses are widely used. For quaternions, dual numbers, and dual quaternions, generalized inverses can also be applied to solve the system (1). In the case of split quaternions, however, matrix representations are the more widely used approach for solving the system. Additionally, some researchers have discussed the use of determinants to express the form of solutions for quaternion systems. This paper also provides examples of applying the system (1) to dual quaternion matrices and dual split quaternion tensors for the encryption and decryption of color images and videos.
The remainder of the paper is organized as follows. Section 2 introduces generalized inverse methods for solving the general solution, Hermitian and non-negative definite solutions, maximal and minimal rank solutions, and generalized reflexive solutions. The study of system (1) in Hilbert C*-modules, Hilbert spaces, and rings is presented in Section 3. Section 4 discusses the eigenvalue decomposition, singular value decomposition, and generalized singular value decomposition of matrices, along with research conclusions for some special solutions of system (1). Section 5 and Section 6 focus on the studies of dual numbers and quaternions, respectively. Section 7 introduces examples of using system (1) in the encryption and decryption of color images and videos. Finally, Section 8 summarizes the content of the paper.
For convenience in the narration of this paper, the following notations are used uniformly. The symbols R, C, R^{m×n}, C^n, and C^{m×n} represent the real number field, the complex number field, the set of m × n matrices over the real numbers, the set of complex vectors with n elements, and the set of m × n matrices over the complex numbers, respectively. O and I denote appropriately sized zero and identity matrices. For an arbitrary matrix A, the symbols A̅, A^T, and A^* represent the conjugate, transpose, and conjugate transpose of A, respectively. For an m × n matrix A over the real numbers, complex numbers, or quaternions, rank(A) denotes the rank of A and R(A) denotes the range (column space) of A. A complex square matrix A is positive (semi-)definite if and only if v^* A v > 0 (≥ 0) for every nonzero v ∈ C^n. For two complex square matrices A and B of the same size, we write A > B (A ≥ B) in the Löwner partial ordering if A − B is positive (semi-)definite. The symbols i_+(A) and i_−(A) denote the numbers of positive and negative eigenvalues of a Hermitian complex matrix A, counted with multiplicities. For a Hermitian non-negative definite matrix A, A^{1/2} is the matrix satisfying A^{1/2} A^{1/2} = A. The norm ‖·‖ used below is the Frobenius norm of a matrix.
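As a quick illustration of the notation, the inertia pair (i_+(A), i_−(A)) can be computed numerically from the eigenvalues of a Hermitian matrix. The sketch below is not part of the survey; the helper name and the tolerance parameter are assumptions for illustration only.

```python
import numpy as np

# Illustrative helper (not from the survey): compute the inertia pair
# (i+(A), i-(A)) of a Hermitian matrix from its eigenvalues; `tol` is an
# assumed numerical cutoff for treating eigenvalues as zero.
def inertia(A, tol=1e-10):
    w = np.linalg.eigvalsh(A)  # real eigenvalues of a Hermitian matrix
    return int(np.sum(w > tol)), int(np.sum(w < -tol))

A = np.diag([3.0, 1.0, 0.0, -2.0])
print(inertia(A))  # (2, 1): two positive and one negative eigenvalue
```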

2. The Generalized Inverse Methods for Solving (1)

Since 1954, when Penrose characterized a generalization of the inverse of non-singular matrices as the unique solution of a system of four matrix equations [11], this area has attracted considerable attention.
For A ∈ C^{m×n}, there exists a unique matrix A^† satisfying the following system:
A A^† A = A, A^† A A^† = A^†, (A A^†)^* = A A^†, (A^† A)^* = A^† A,
where A^† is called the generalized inverse or the Moore–Penrose inverse of A. In the following discussion, we denote L_A = I − A^† A and R_A = I − A A^†.
Penrose established the necessary and sufficient conditions for the solvability of the matrix system A X = C, X B = D, along with an expression for its solutions in terms of the Moore–Penrose inverse.
Theorem 1
(General solutions using the Moore–Penrose inverse for (1) over C. [11]). Let A ∈ C^{p×m}, B ∈ C^{n×q}, C ∈ C^{p×n}, D ∈ C^{m×q}. The matrix system (1) is solvable if and only if the equations A X = C and X B = D are each consistent and the condition A D = C B holds, or equivalently,
A A^† C = C, D B^† B = D, A D = C B.
Under these conditions, a solution is given by
X = A^† C + D B^† − A^† A D B^†.
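The solvability test and solution formula of Theorem 1 are easy to exercise numerically. The following sketch is an illustration under assumed random data, not code from the paper: it builds a consistent system from a known X0, verifies the three conditions, and recovers a solution with NumPy's pinv.

```python
import numpy as np

rng = np.random.default_rng(0)

# Numerical sketch of Theorem 1 (illustrative random data): build a
# consistent system AX = C, XB = D, check the solvability conditions, and
# recover a solution via X = A^+ C + D B^+ - A^+ A D B^+.
p, m, n, q = 3, 5, 4, 2
A = rng.standard_normal((p, m))
B = rng.standard_normal((n, q))
X0 = rng.standard_normal((m, n))
C, D = A @ X0, X0 @ B  # consistency holds by construction

Ap, Bp = np.linalg.pinv(A), np.linalg.pinv(B)
assert np.allclose(A @ Ap @ C, C)   # A A^+ C = C
assert np.allclose(D @ Bp @ B, D)   # D B^+ B = D
assert np.allclose(A @ D, C @ B)    # A D = C B

X = Ap @ C + D @ Bp - Ap @ A @ D @ Bp
assert np.allclose(A @ X, C) and np.allclose(X @ B, D)
```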
Remark 1.
The concept of the generalized inverse and Theorem 1 can also be extended to von Neumann regular rings, in particular to the quaternion algebra [12].
Later, the concept of the g-inverse of a complex matrix was introduced by Rao and Mitra [13]. For A ∈ C^{m×n}, if a matrix A^− satisfies
A A^− A = A,
then A^− is called a g-inverse of A.
Remark 2.
The g-inverse of a complex matrix is not necessarily unique.
The g-inverse can also be used to represent the solution of linear matrix systems. Theorem 1 can be restated in terms of the g-inverse as follows.
Theorem 2
(General solutions using the g-inverse for (1) over C. [14]). Let A ∈ C^{p×m}, B ∈ C^{n×q}, C ∈ C^{p×n}, D ∈ C^{m×q}. The matrix system (1) is solvable if and only if
A A^− C = C, D B^− B = D, A D = C B.
When (1) is consistent, the general solution is expressed as
X = A^− C + D B^− − A^− A D B^− + (I − A^− A) V (I − B B^−), (2)
where V ∈ C^{m×n} is arbitrary.
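A minimal numerical sketch of the free parameter V in Theorem 2, under the assumption that NumPy's pinv serves as one admissible g-inverse: every choice of V below yields another exact solution of the system.

```python
import numpy as np

rng = np.random.default_rng(1)

# Sketch (assumption: the Moore-Penrose inverse is used as one particular
# g-inverse). The term (I - A^-A) V (I - BB^-) sweeps out further solutions.
p, m, n, q = 3, 5, 4, 2
A = rng.standard_normal((p, m))
B = rng.standard_normal((n, q))
X0 = rng.standard_normal((m, n))
C, D = A @ X0, X0 @ B  # consistent by construction

Ag, Bg = np.linalg.pinv(A), np.linalg.pinv(B)
for _ in range(3):
    V = rng.standard_normal((m, n))
    X = (Ag @ C + D @ Bg - Ag @ A @ D @ Bg
         + (np.eye(m) - Ag @ A) @ V @ (np.eye(n) - B @ Bg))
    assert np.allclose(A @ X, C) and np.allclose(X @ B, D)
```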
Subsequently, researchers have explored a range of special solutions to system (1) using generalized inverses, including Hermitian solutions, non-negative definite solutions, maximal and minimal rank solutions, and generalized (anti-)reflexive solutions, as well as real non-negative and real positive solutions, among others [14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30].
The earliest research on the Hermitian and non-negative definite solutions of system (1) was conducted by Mitra et al. [15], followed by their subsequent work on the possible minimal rank of the solutions [16]. Mitra’s focus on matrix equation systems continued, and in 1990 he extended the study to a more general form of the system A 1 X B 1 = C 1 ,   A 2 X B 2 = C 2 [14]. After 2000, research on system (1) became more in-depth: Peng and other scholars investigated the (anti-)reflexive solutions of the system [17,18,19,20,21,22,23], while Liu et al. focused on the least squares solutions and the rank of the solutions, exploring the ranks of matrix blocks and the corresponding conditions using block matrix formulations [24,25]. Due to the unique properties of Hermitian matrices, Wang et al. examined the existence conditions and expressions for Hermitian solutions to system (1) that satisfy various inequality constraints, as well as the ranks and inertia indices of these solutions [26,27,28]. Additionally, some scholars have focused on bi-(skew-)symmetric solutions and reducible solutions [29,30].

2.1. Hermitian and Non-Negative Definite Solutions

Hermitian and non-negative definite matrices are among the most widely applied special types of matrices, and their properties have been thoroughly studied. A complex square matrix A ∈ C^{n×n} is called Hermitian if A = A^*.
In 1976, Khatri and Mitra considered the necessary and sufficient conditions for the existence of Hermitian and non-negative definite solutions to system (1) and provided expressions for the solutions when they exist. The main results are stated in Theorem 3.
Theorem 3
(Hermitian and non-negative solutions for (1) over C. [15]). Let A, C ∈ C^{p×n} and B, D ∈ C^{n×q} be such that the system (1) is solvable. Define P = [A; B^*], Q = [C; D^*], and
M = Q P^* = [[C A^*, C B], [D^* A^*, D^* B]].
(a) The system (1) has Hermitian solutions if and only if M is Hermitian. Under this condition, the general Hermitian solution is given by
X = P^† Q + Q^* (P^†)^* − P^† M (P^†)^* + (I − P^† P) U (I − P^† P),
where U ∈ C^{n×n} is an arbitrary Hermitian matrix.
(b) The system (1) has non-negative definite solutions if and only if M is non-negative definite and rank(M) = rank(Q). Under this condition, the general non-negative definite solutions have the form
X = Q^* M^† Q + (I − P^† P) U (I − P^† P),
where U ∈ C^{n×n} is an arbitrary non-negative definite matrix.
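A small numerical check of the criterion in Theorem 3(a), under the assumed stacked formulation P X = Q with P = [A; B^T] for real data: starting from a symmetric X0, the matrix M = Q P^T indeed comes out symmetric.

```python
import numpy as np

rng = np.random.default_rng(2)

# Sketch of Theorem 3(a) for real data (assumed stacked formulation:
# P = [A; B^T], Q = [C; D^T], M = Q P^T). A symmetric X0 forces M symmetric.
n, p, q = 4, 3, 2
A = rng.standard_normal((p, n))
B = rng.standard_normal((n, q))
H = rng.standard_normal((n, n))
X0 = H + H.T  # Hermitian (here: real symmetric)
C, D = A @ X0, X0 @ B

P = np.vstack([A, B.T])
Q = np.vstack([C, D.T])
M = Q @ P.T
assert np.allclose(M, M.T)  # criterion of Theorem 3(a)
```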
Based on Theorem 3, the following theorem gives the solvability conditions and explicit expressions for the Hermitian solutions to the system (1) under the inequality constraints:
A X = C, X B = D subject to M X M^* ≥ N (3)
and
A X = C, X B = D subject to M X M^* ≥ N, X ≥ O, (4)
for given M ∈ C^{m×n} and Hermitian N ∈ C^{m×m}.
Theorem 4
(Hermitian solutions for (3) and (4) over C. [28]). Given A, C ∈ C^{p×n}, B, D ∈ C^{n×q}, M ∈ C^{m×n}, and Hermitian N ∈ C^{m×m}, assume
P = [A; B^*], Q = [C; D^*], T = [[C A^*, C B], [D^* A^*, D^* B]], M̂ = M L_P, P̂ = L_P M̂^†, N̂ = N − M (P^† Q + Q^* (P^†)^* − P^† T (P^†)^*) M^*, Ñ = N − M Q^* T^† Q M^*.
( a ) The system (3) has Hermitian solutions if and only if P P Q = Q , T is Hermitian, and
R M ^ N ^ R M ^ O , rank ( R M ^ N ^ R M ^ ) = rank ( R M ^ N ^ ) .
At this point, the Hermitian solution X C n × n can be expressed as
X = X 0 + P ^ W 1 ( P ^ ) + L P L M ^ V 1 L P + L P V 1 L M ^ L P ,
where
X 0 = P Q + Q P P T P + P ^ N ^ I R M ^ N ^ R M ^ N ^ P ^ ,
W 1 C n × n is an arbitrary non-negative definite Hermitian matrix, V 1 C n × n is arbitrary.
( b ) The system (4) has Hermitian non-negative definite solutions if and only if T is a non-negative Hermitian matrix, and
rank ( T ) = rank ( Q ) , M ^ M ^ N ˜ = N ˜ .
At this point, the Hermitian non-negative definite solution X C m × n can be described as
X = Q T Q + L P M ˙ N ˙ M ˙ + L M ^ U L M ^ L P ,
where
N ˙ = N ^ + M ^ M ^ W 1 M ^ M ^ , M ˙ = M ^ + L M ^ W N ˙ 1 2 ,
with arbitrary U , W , and non-negative definite Hermitian W 1 C n × n .
Remark 3.
In Theorem 4, selecting M = I and N = O yields the Hermitian non-negative definite solutions to (1).
Remark 4.
Theorems 3 and 4 are derived by converting the system of equations into a single matrix equation, making the form more concise. However, this approach increases the size of the matrix and requires the computation of the generalized inverse of block matrices.
In [28], the authors consider the maximal rank and inertia of the Hermitian solution to (3) using matrix decomposition methods, which will be introduced in the next section.
Additionally, Ke and Ma [31] have supplemented the results for the symmetric solutions to system (1) over R .
Theorem 5
(Symmetric solutions for (1) over R . [31]). Given A , C R p × n and B , D R n × q . Denote K = C T L A , N = R C A T , Q 1 = D T C T A B K D C , and Q = B T A B A T L A D C A T L A K Q 1 N . The system (1) has a symmetric solution X R n × n if and only if the system of matrix equations
A Y = B, Y C = D, Y A^T = B^T, C^T Y = D^T
has a solution Y ∈ R^{n×n}. In this case, the symmetric solution to (1) is given by
X = (1/2)(Y + Y^T).
Or in an equivalent way, equations
K K Q 1 R C = Q 1 , Q L N = O , R L A L K Q = O , A D = B C , A A B = B , D C C = D
hold. At this point, the symmetric solution of (1) can be expressed as
X = 1 2 ( A B + L A D C + L A K Q 1 R C + Q N R C + L A L K Z R N R C ) + 1 2 ( B T A T + C T D T L A + R C Q 1 T K T L A + R C N T + R C R N Z T L K L A ) ,
where Z R n × n is an arbitrary matrix.

2.2. Maximal and Minimal Rank Solutions with Inequality Constraints

Through the expressions of the solutions to the system (1) given by generalized inverses, the possible ranks of the solutions can be studied further.
In 1984, Mitra obtained the minimal possible rank solutions to the system (1).
Theorem 6
(Minimal possible rank solutions for (1) over C. [16]). Let A ∈ C^{p×m}, B ∈ C^{n×q}, C ∈ C^{p×n}, D ∈ C^{m×q} be such that (1) is consistent. Assume without loss of generality that rank(C) ≤ rank(D). Let X be a solution of the matrix system (1). Then,
rank(X) ≥ max{rank(C), rank(D)} = rank(D).
Additionally, there is a solution with rank(X) = rank(D) if and only if rank(C B) = rank(C).
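The rank inequality of Theorem 6 follows from C = AX and D = XB, since the rank of a product never exceeds the rank of either factor. A quick numerical sanity check under illustrative random data:

```python
import numpy as np

rng = np.random.default_rng(3)

# Sanity check of Theorem 6's lower bound: since C = AX and D = XB,
# rank(X) >= max(rank(C), rank(D)).
p, m, n, q = 3, 5, 4, 2
A = rng.standard_normal((p, m))
B = rng.standard_normal((n, q))
X = rng.standard_normal((m, n))
C, D = A @ X, X @ B

r = np.linalg.matrix_rank
assert r(X) >= max(r(C), r(D))
```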
Decades later, Liu extended Theorem 6, considering the maximal and minimal ranks of the general solutions and the least squares solutions for the system (1).
Theorem 7
(Maximal and minimal rank solutions for (1) over C. [24]). For A ∈ C^{p×m}, B ∈ C^{n×q}, C ∈ C^{p×n}, D ∈ C^{m×q}, suppose the system (1) is consistent with a general solution X ∈ C^{m×n}. Then, the maximal and minimal ranks of X are given by
max rank(X) = min{m + rank(C) − rank(A), n + rank(D) − rank(B)},
min rank(X) = rank(C) + rank(D) − rank(C B).
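A sketch checking that the rank of the particular solution from Theorem 1 respects Theorem 7's bounds on a random consistent instance (illustrative dimensions only, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(4)

# Sketch: the particular solution X0 = A^+ C + D B^+ - A^+ A D B^+ must have
# a rank between Theorem 7's minimal and maximal rank bounds.
p, m, n, q = 3, 6, 5, 2
A = rng.standard_normal((p, m))
B = rng.standard_normal((n, q))
Xt = rng.standard_normal((m, n))
C, D = A @ Xt, Xt @ B  # consistent by construction

Ap, Bp = np.linalg.pinv(A), np.linalg.pinv(B)
X0 = Ap @ C + D @ Bp - Ap @ A @ D @ Bp

r = np.linalg.matrix_rank
lo = r(C) + r(D) - r(C @ B)
hi = min(m + r(C) - r(A), n + r(D) - r(B))
assert lo <= r(X0) <= hi
```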
The least squares solution for the system (1) can be expressed as shown in [24], which also provides the conditions for the uniqueness of the least squares solution and the expression for the solution when it is unique.
Theorem 8
(Least squares solutions for (1) over C. [24]). Assume that A ∈ C^{p×m}, C ∈ C^{p×n}, B ∈ C^{n×q}, D ∈ C^{m×q}.
(a) The necessary and sufficient condition for (1) to have a least squares solution is
A^* A D B^* = A^* C B B^*. (5)
(b) If (5) holds, then the general least squares solution of (1) is expressed as
X = (A^* A)^† A^* C + D B^* (B B^*)^† − (A^* A)^† A^* A D B^* (B B^*)^† + L_{A^* A} W R_{B B^*},
where W ∈ C^{m×n} is arbitrary.
(c) The least squares solution of (1) is unique if and only if
rank(A) = m or rank(B) = n.
In this case, the unique least squares solution is
X = (A^* A)^† A^* C or X = D B^* (B B^*)^†, respectively.
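For the full-column-rank case of Theorem 8(c), the closed form (A^* A)^{−1} A^* C coincides with the ordinary least squares minimizer of ‖A X − C‖ computed column by column. A brief check under illustrative data (not from the paper):

```python
import numpy as np

rng = np.random.default_rng(5)

# Sketch of the full-column-rank case: when rank(A) = m, the closed form
# X = (A*A)^{-1} A*C equals the column-by-column least squares minimizer.
p, m, n = 6, 3, 4
A = rng.standard_normal((p, m))  # rank(A) = m generically
C = rng.standard_normal((p, n))

X = np.linalg.solve(A.T @ A, A.T @ C)       # (A*A)^{-1} A* C
Xls, *_ = np.linalg.lstsq(A, C, rcond=None)  # numerical least squares
assert np.allclose(X, Xls)
```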
Additionally, the maximal and minimal ranks of the least squares solutions for the system (1) are considered based on Theorem 8.
Theorem 9
(Maximal and minimal ranks of the least squares solutions for (1) over C. [24]). For A ∈ C^{p×m}, B ∈ C^{n×q}, C ∈ C^{p×n}, D ∈ C^{m×q}, if the system (1) has a general least squares solution X ∈ C^{m×n}, then the maximal and minimal ranks of X are given by
max rank(X) = min{m + rank(A^* C) − rank(A), n + rank(D B^*) − rank(B)},
min rank(X) = rank(A^* C) + rank(D B^*) − rank(A^* C B).
Liu also presented a set of formulas for the maximal and minimal ranks of the submatrices in a general solution X to the system (1) in [25].
Suppose that X is a general solution of the system (1), and let X be partitioned into the 2 × 2 block form
X = [X_1, X_2; X_3, X_4].
In this case, the system (1) can be rewritten as
[A_1, A_2] [X_1, X_2; X_3, X_4] = [C_1, C_2], [X_1, X_2; X_3, X_4] [B_1; B_2] = [D_1; D_2], (6)
where A_1 ∈ C^{p×m_1}, A_2 ∈ C^{p×m_2}, C_1 ∈ C^{p×n_1}, C_2 ∈ C^{p×n_2}, B_1 ∈ C^{n_1×q}, B_2 ∈ C^{n_2×q}, D_1 ∈ C^{m_1×q}, D_2 ∈ C^{m_2×q}, and X_1 ∈ C^{m_1×n_1}, X_2 ∈ C^{m_1×n_2}, X_3 ∈ C^{m_2×n_1}, X_4 ∈ C^{m_2×n_2}, with m_1 + m_2 = m, n_1 + n_2 = n. Adopt the following notation for the collections of submatrices X_1, X_2, X_3, and X_4:
S_i = {X_i | [A_1, A_2] [X_1, X_2; X_3, X_4] = [C_1, C_2], [X_1, X_2; X_3, X_4] [B_1; B_2] = [D_1; D_2]}, i = 1, 2, 3, 4. (7)
The submatrices X_i can be written as
X_1 = P_1 X Q_1, X_2 = P_1 X Q_2, X_3 = P_2 X Q_1, X_4 = P_2 X Q_2,
where P_1 = [I, O], P_2 = [O, I], Q_1 = [I; O], and Q_2 = [O; I] are selector matrices of conformable sizes. Substituting the general solution (2) gives the general expressions for the X_i, as follows:
X_1 = P_1 X_0 Q_1 + P_1 F_A V E_B Q_1, X_2 = P_1 X_0 Q_2 + P_1 F_A V E_B Q_2, X_3 = P_2 X_0 Q_1 + P_2 F_A V E_B Q_1, X_4 = P_2 X_0 Q_2 + P_2 F_A V E_B Q_2,
where F_A = I − A^† A, E_B = I − B B^†, and X_0 = A^† C + D B^† − A^† A D B^†.
Liu [25] summarized the possible range of ranks for the solution to (1) as follows:
Theorem 10
(Maximal and minimal rank solutions using block matrices for (1) over C . [25]). Let A C p × m ,   B C n × q ,   C C p × n ,   D C m × q , such that the matrix system (1) has a general solution. Then,
( a ) max X 1 S 1 rank ( X 1 ) = min m 1 + rank [ C 1 , A 2 ] rank ( A ) , n 1 + rank D 1 B 2 rank ( B ) , min X 1 S 1 rank X 1 = rank C 1 , A 2 + rank D 1 B 2 rank C 1 B 2 A 2 B 2 0 .
( b ) max X 2 S 2 rank ( X 2 ) = min m 1 + rank [ C 2 , A 2 ] rank ( A ) , n 2 + rank D 1 B 1 rank ( B ) , min X 2 S 2 rank X 2 = rank [ C 2 , A 2 ] + rank D 1 B 1 rank C 2 B 2 A 2 B 1 O .
( c ) max X 3 S 3 rank ( X 3 ) = min m 2 + rank [ C 1 , A 1 ] rank ( A ) , n 1 + rank D 2 B 2 rank ( B ) min X 3 S 3 rank ( X 3 ) = rank [ C 1 , A 1 ] + rank D 2 B 2 rank C 1 B 1 A 1 B 2 0 .
( d ) max X 4 S 4 rank X 4 = min m 2 + rank C 2 , A 1 rank ( A ) , n 2 + rank D 2 B 1 rank ( B ) min X 4 S 4 rank X 4 = rank C 2 , A 1 + rank D 2 B 1 rank C 2 B 2 A 1 B 1 O .
In addition, by using the ranks of matrix blocks, the necessary and sufficient conditions for the uniqueness of the solution to system (1) are given in block matrix form.
Theorem 11
(Unique conditions of general solutions for (1) over C . [25]). Suppose that matrix system (6) has a solution. Then, the following statements hold.
(a) The block X_1 in a general solution to (6) is unique if and only if
rank(A_1) = n_1 and R(A_1) ∩ R(A_2) = {O}, or rank(B_1) = p_1 and R(B_1) ∩ R(B_2) = {O}.
(b) The block X_2 in a general solution to (6) is unique if and only if
rank(A_1) = n_1 and R(A_1) ∩ R(A_2) = {O}, or rank(B_2) = p_2 and R(B_1) ∩ R(B_2) = {O}.
(c) The block X_3 in a general solution to (6) is unique if and only if
rank(A_2) = n_2 and R(A_1) ∩ R(A_2) = {O}, or rank(B_1) = p_1 and R(B_1) ∩ R(B_2) = {O}.
(d) The block X_4 in a general solution to (6) is unique if and only if
rank(A_2) = n_2 and R(A_1) ∩ R(A_2) = {O}, or rank(B_2) = p_2 and R(B_1) ∩ R(B_2) = {O}.
Theorem 12
(General solutions using block matrices for (1) over C . [25]). Let A C p × m ,   B C n × q ,   C C p × n , D C m × q , such that the system (6) has a solution.
( a ) Consider S 1 , , S 4 in (7) as four independent matrix sets. Then,
max X i S i C 1 , C 2 A 1 , A 2 X 1 X 2 X 3 X 4 = min rank A 1 + rank A 2 rank ( A ) , p 1 + p 2 + rank B 1 + rank B 2 2 rank ( B ) , max X i S i D 1 D 2 X 1 X 2 X 3 X 4 B 1 B 2 = min rank B 1 + rank B 2 rank ( B ) , n 1 + n 2 + rank A 1 + rank A 2 2 rank ( A ) .
(b) The four submatrices X_1, …, X_4 in (7) are independent. Specifically, for any choice of X_i ∈ S_i (i = 1, …, 4), the corresponding matrix X = [X_1, X_2; X_3, X_4] is a solution of (6) if and only if
R(A_1) ∩ R(A_2) = {O}, R(B_1) ∩ R(B_2) = {O}.
Furthermore, Wang et al. first considered the extremal inertias and ranks of X^* X − P and X X^* − Q, where P and Q are Hermitian and X is a solution of (1). They also derived the necessary and sufficient conditions for special cases such as unitary solvability, contraction solvability, and the left and right minimal solutions to the system (1) [26].
For A ∈ C^{n×n}, A is called a unitary matrix if and only if A^* A = A A^* = I. Let H be a given set consisting of some matrices in C^{n×n}; we say that A ∈ H is minimal (maximal) if A ≤ W (A ≥ W) for every W ∈ H. Denote
L = {X^* X | A X = C, X B = D}, R = {X X^* | A X = C, X B = D}.
A solution X is called left (right) minimal or maximal if X^* X (X X^*) is the minimal or maximal matrix of the set L (R). When X^* X ≤ I, X is called a contraction matrix. Furthermore, if X^* X < I, X is called a strict contraction matrix.
The main findings of [26] on the extremal inertias and ranks of X X P and X X Q are summarized below:
Theorem 13
(Extreme rank and inertia of X^* X − P for X satisfying (1) over C. [26]). Let A ∈ C^{p×m}, B ∈ C^{n×q}, C ∈ C^{p×n}, D ∈ C^{m×q}, and let P ∈ C^{n×n} be Hermitian. Suppose that (1) has a solution, and denote the set of all solutions to (1) by S. Then,
(a) max_{X ∈ S} rank(X^* X − P) = min{r_1, r_2, r_3}, where
r 1 = m + rank D B A P C rank ( B ) rank ( A ) , r 2 = 2 m + rank C C A P A rank ( A ) , r 3 = rank D B B P D + n 2 rank ( B ) .
(b) min_{X ∈ S} rank(X^* X − P) = max{t_1, t_2, t_3, t_4}, where
t 1 = 2 rank D B A P C 2 rank D A B A P A C + rank ( C C A P A ) , t 2 = 2 rank D B A P C + rank D B B P D 2 rank B B B B D C B A P n , t 3 = 2 rank D B A P C rank B B B B D C B A P + i + B B D D P rank D A B A P A C + i + C C A P A n , t 4 = 2 rank D B A P C rank B B B B D C B A P + i B B D D P rank D A B A P A C + i C C A P A .
(c)
max_{X ∈ S} i_+(X^* X − P) = min{i_1, i_2}, max_{X ∈ S} i_−(X^* X − P) = min{i_3, i_4},
where
i 1 = m + i + C C A P A rank ( A ) , i 2 = i B B D D P + n rank ( B ) , i 3 = m + i C C A P A rank ( A ) , i 4 = i + B B D D P rank ( B ) .
(d)
min_{X ∈ S} i_+(X^* X − P) = max{p_1, p_2}, min_{X ∈ S} i_−(X^* X − P) = max{p_3, p_4},
where
p 1 = rank D B A P C rank D A B A P A C + i + C C A P A , p 2 = i B B D D P + rank D B A P C rank B B B B D C B A P , p 3 = rank D B A P C rank D A B A P A C + i C C A P A , p 4 = i + B B D D P rank D B A P C + rank B B B B D C B A P n .
In Theorem 13, selecting P as the identity matrix yields the necessary and sufficient conditions for (1) to have certain special solutions, which are presented in the following corollary.
Corollary 1
(Unitary and (strict) contraction solutions for (1) over C . [26]). Let A C p × m ,   B C n × q ,   C C p × n , and D C m × q . Suppose that the system (1) has general solutions.
( a ) Then, (1) has unitary matrix solutions if and only if
n m , C C = A A , B B D D , i + B B D D n m , rank D B A P C = rank D A B A P A C = rank B B B B D C B A P = rank B C .
(b) The system (1) has strict contraction matrix solutions if and only if
i_−(C C^* − A A^*) = rank(A), i_+(B^* B − D^* D) = rank(B).
( c ) The system (1) has contraction matrix solutions when C C A A and
rank D B A C = rank B C B A A C rank B B B B D C B A i B B D D .
The left (right) minimal and maximal solutions to (1) are discussed as follows.
Theorem 14
(Left (right) minimal and maximal solutions for (1) over C. [26]). Let A ∈ C^{p×m}, B ∈ C^{n×q}, C ∈ C^{p×n}, and D ∈ C^{m×q}, with rank(A) < m and rank(B) < n.
(a) There exists a solution X to (1) that is the left minimal solution if and only if
rank([C^*, B]) = rank(B).
Under this circumstance, the left minimal solution is X = D B^†.
(b) There exists a solution X to (1) that is the right minimal solution if and only if
rank([D, A^*]) = rank(A).
Under this circumstance, the right minimal solution is X = A^† C.
( c ) There does not exist a right or left maximal solution X to (1).
In a similar manner, Yao derived the maximal and minimal ranks and inertias of Q − X^* P X, where X satisfies the system (1) and P and Q are Hermitian.
Theorem 15
(Extreme rank and inertia of Q − X^* P X for X satisfying (1) over C. [27]). For A ∈ C^{p×m}, B ∈ C^{n×q}, C ∈ C^{p×n}, D ∈ C^{m×q}, and Hermitian P, Q ∈ C^{n×n}, assume that (1) has a solution. Denote the set of all solutions to (1) by S. Then,
( a )
max X S rank Q X P X = min n + rank O P P A Q C P O D O B rank ( A ) rank ( B ) rank ( P ) , 2 n + rank A Q A C P C 2 rank ( A ) , rank Q O D O P B D B ± O 2 rank ( B ) .
( b )
min X S rank Q X P X = 2 n + 2 rank O P P A Q C P O D O B 2 rank ( A ) 2 rank ( B ) rank ( P ) + max s + + s , t + + t , s + + t , s + t + .
( c )
max X S i ± Q X P X = min n + i ± A Q A C P C rank ( A ) , i ± Q O D O P B D B O rank ( B ) .
( d )
min X S i ± Q X P X = n + rank O P P A Q C P O D O B rank ( A ) rank ( B ) i ± ( P ) + max s ± , t ± .
In which,
s ± = n + r ( A ) i ( P ) + i ± A Q A C P C rank C P A Q A B D A , t ± = n + rank ( A ) i ( P ) + i ± Q O D O P B D B O rank O P B A Q C P O D B O .
From Theorem 15, conditions for the definiteness of Q − X^* P X can be derived, as stated in the following corollary.
Corollary 2
(Positive definiteness of Q − X^* P X for X satisfying (1) over C. [27]). Let A ∈ C^{p×m}, B ∈ C^{n×q}, C ∈ C^{p×n}, D ∈ C^{m×q}, and Hermitian P, Q ∈ C^{n×n}. Assume that the system (1) is consistent. Let s_± and t_± be as defined in Theorem 15. Then, we have the following statements.
(a) The system (1) has a solution such that Q − X^* P X ≥ O if and only if
n + rank O P P A Q C P O D O B rank ( A ) rank ( B ) i ( P ) + s 0 , n + rank O P P A Q C P O D O B rank ( A ) rank ( B ) i ( P ) + t 0 .
(b) The system (1) has a solution such that Q − X^* P X ≤ O if and only if
n + rank O P P A Q C P O D O B rank ( A ) rank ( B ) i + ( P ) + s + 0 , n + rank O P P A Q C P O D O B rank ( A ) rank ( B ) i + ( P ) + t + 0 .
(c) The system (1) has a solution such that Q − X^* P X > O precisely when
i + ( A Q A C P C ) rank ( A ) 0 , i + Q O D O P B D + B + O rank ( B ) n .
(d) The system (1) has a solution such that Q − X^* P X < O precisely when
i A Q A C P C rank ( A ) 0 , i Q O D O P B D B + O rank ( B ) n .
(e) All general solutions of (1) satisfy Q − X^* P X ≥ O if and only if
n + i A Q A C P C rank ( A ) = 0 or i Q O D O P B D B O rank ( B ) = 0 .
(f) All general solutions of (1) satisfy Q − X^* P X ≤ O if and only if
n + i + A Q A C P C rank ( A ) = 0 or i + Q O D O P B D B O rank ( B ) = 0 .
(g) All general solutions of (1) satisfy Q − X^* P X > O precisely when
n + rank O P P A Q C P O D O B rank ( A ) rank ( B ) i + ( P ) + s + = n
or
n + rank ( B ) O P P A Q C P O D O B rank ( A ) rank ( B ) i + ( P ) + t + = n .
(h) All general solutions of (1) satisfy Q − X^* P X < O precisely when
n + rank O P P A Q C P O D O B rank ( A ) rank ( B ) i ( P ) + s = n
or
n + rank O P P A Q C P O D O B rank ( A ) rank ( B ) i ( P ) + t = n .
(i) The system (1) has a solution such that Q − X^* P X = O if and only if
2 n + 2 rank O P P A Q C P O D O B 2 rank ( A ) 2 rank ( B ) rank ( P ) + s + + s 0 , 2 n + 2 rank O P P A Q C P O D O B 2 rank ( A ) 2 rank ( B ) rank ( P ) + t + + t 0 , 2 n + 2 rank O P P A Q C P O D O B 2 rank ( A ) 2 rank ( B ) rank ( P ) + s + + t 0 , 2 n + 2 rank O P P A Q C P O D O B 2 rank ( A ) 2 rank ( B ) rank ( P ) + s + t + 0 .
At the end of this section, based on Theorem 4, we present the maximal rank and inertia of the Hermitian solutions to (3), which carries the additional inequality constraint M X M^* ≥ N, where M ∈ C^{m×n} and Hermitian N ∈ C^{m×m} are given matrices.
Theorem 16
(Maximal rank and inertia of the Hermitian solutions of (3) over C. [28]). Let A, C ∈ C^{p×n}, B, D ∈ C^{n×q}, M ∈ C^{m×n}, and Hermitian N ∈ C^{m×m}. Assume that
P = [A; B^*], Q = [C; D^*], T = [[C A^*, C B], [D^* A^*, D^* B]], M̂ = M L_P, P̂ = L_P M̂^†, N̂ = N − M (P^† Q + Q^* (P^†)^* − P^† T (P^†)^*) M^*, X_0 = P^† Q + Q^* (P^†)^* − P^† T (P^†)^* + P̂ N̂ (I − (R_{M̂} N̂ R_{M̂})^† R_{M̂} N̂) P̂^*.
Denote the set of all Hermitian solutions to the system (3) by S.
(a) The maximal rank of X ∈ S is
max_{X ∈ S} rank(X) = min{s_1, s_2, s_3, s_4, s_5},
where
s 1 = n + rank ( Q ) rank ( P ) , s 2 = n rank ( P ) + rank ( P ^ Q ) , s 3 = 2 n rank M P rank ( P ) + rank M ^ M ^ 0 Q M Q P , s 4 = 2 n rank M P + rank Q M Q P rank ( P ) , s 5 = 2 n + rank ( Q P ) 2 rank ( P ) .
( b ) The maximal inertia of X S is
max X S i ± ( X ) = min n rank M P + i ± 0 M ^ M ^ 0 M ^ M ^ M X 0 M M Q 0 Q M Q P , n + i ± Q P rank ( P ) .

2.3. Generalized (Anti-)Reflexive Solutions

Generalized (anti-)reflexive matrices have wide applications in fields such as engineering and science [32]. A matrix X ∈ C^{n×n} is called (anti-)reflexive with respect to a nontrivial generalized reflection matrix P if X = (−)P X P, or equivalently P X = s X P with s = (−)1, where a nontrivial generalized reflection matrix satisfies P = P^* = P^{−1} ≠ I. A matrix X ∈ C^{n×n} is called generalized (anti-)reflexive with respect to a pair of nontrivial generalized reflection matrices P and Q if X = (−)P X Q.
Qiu et al. considered the (anti-)reflexive solutions to system (1), presenting the related results for the (anti-)reflexive solutions and the generalized (anti-)reflexive solutions, along with the corresponding least norm solutions.
Theorem 17
((Anti-)reflexive solutions for (1) over C . [20]). For given A , C C p × n , B , D C n × q , and the nontrivial generalized reflection matrix P C m × n , let
A 1 = A ( I + P ) , A 2 = A ( I P ) , B 1 = ( I + s P ) B , B 2 = ( I s P ) B , C 1 = C ( I + s P ) , C 2 = C ( I s P ) , D 1 = ( I + P ) D , D 2 = ( I P ) D .
( a ) The system A X = C , X B = D has an (anti-)reflexive solution if and only if
C B = A D , C P B = s A P D , C i = A i A i + C i , D i = D i B i + B i , i = 1 , 2
for s = ( − ) 1 .
( b ) In the meantime, the (anti-)reflexive solution is given by
X = ∑ i = 1 2 ( A i + C i + L A i D i B i + ) + ( L A 1 + L A 2 − I ) F ( R B 1 + R B 2 − I ) ,
with (anti-)reflexive F ∈ C n × n .
( c ) The least norm (anti-)reflexive solution is expressed as
X = ∑ i = 1 2 ( A i + C i + L A i D i B i + ) .
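The solvability conditions in part (a) can be checked numerically. The sketch below (our own illustration: an arbitrarily chosen reflection matrix P and a consistent system built from a known reflexive X0) verifies them for the reflexive case s = 1:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, q = 4, 3, 3
P = np.fliplr(np.eye(n))              # generalized reflection matrix
X0 = rng.standard_normal((n, n))
X0 = (X0 + P @ X0 @ P) / 2            # make X0 reflexive (s = 1)

A = rng.standard_normal((p, n))
B = rng.standard_normal((n, q))
C, D = A @ X0, X0 @ B                 # consistent right-hand sides

I = np.eye(n)
A1, A2 = A @ (I + P), A @ (I - P)
B1, B2 = (I + P) @ B, (I - P) @ B
C1, C2 = C @ (I + P), C @ (I - P)
D1, D2 = (I + P) @ D, (I - P) @ D

# Solvability conditions of part (a), with s = 1:
assert np.allclose(C @ B, A @ D)
assert np.allclose(C @ P @ B, A @ P @ D)
for Ai, Ci in [(A1, C1), (A2, C2)]:
    assert np.allclose(Ai @ np.linalg.pinv(Ai) @ Ci, Ci)   # C_i = A_i A_i^+ C_i
for Bi, Di in [(B1, D1), (B2, D2)]:
    assert np.allclose(Di @ np.linalg.pinv(Bi) @ Bi, Di)   # D_i = D_i B_i^+ B_i
```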
The relevant conclusions for generalized (anti-)reflexive solutions are presented below.
Theorem 18
(Generalized (anti-)reflexive solutions for (1) over C . [20,21,22,23]). For given A , C ∈ C p × n , B , D ∈ C n × q , and nontrivial generalized reflection matrices P , Q ∈ C n × n , let
A 1 = A ( I + P ) , A 2 = A ( I P ) , B 1 = ( I + s Q ) B , B 2 = ( I s Q ) B , C 1 = C ( I + s Q ) , C 2 = C ( I s Q ) , D 1 = ( I + P ) D , D 2 = ( I P ) D .
( a ) The system (1) has a generalized (anti-)reflexive solution if and only if
C B = A D , C Q B = s A P D , C i = A i A i + C i , D i = D i B i + B i , i = 1 , 2
for s = ( ) 1 .
( b ) In the meantime, the generalized (anti-)reflexive solution is given by
X = ∑ i = 1 2 ( A i + C i + L A i D i B i + ) + ( L A 1 + L A 2 − I ) F ( R B 1 + R B 2 − I ) ,
where F C n × n is generalized (anti-)reflexive.
( c ) The least norm generalized (anti-)reflexive solution is expressed as
X = ∑ i = 1 2 ( A i + C i + L A i D i B i + ) .
( d ) Let symbols Φ r and Φ a represent the set of all generalized reflexive and anti-reflexive solutions of the system (1), respectively. For given E C n × n , when we select
F = 1 2 ( E + P E Q )
in (8), then (8) is the unique solution of approximation problem min X Φ r | | X E | | . If
F = 1 2 ( E P E Q ) ,
then (8) is the unique solution of min X Φ a | | X E | | .
Theorems 17 and 18 present the (anti-)reflexive and generalized (anti-)reflexive solutions of (1). For these and other, more complex structured solutions, the theoretical results are extensive. Moreover, matrix decomposition is a more widely used method for computing such structured solutions; it will be introduced in Section 4.

2.4. Re-nnd, Re-pd Solutions

For a matrix A ∈ C n × n , the real part of A is defined as H ( A ) = 1 2 ( A + A ∗ ) . A matrix A is referred to as real non-negative definite (Re-nnd) if H ( A ) is positive semi-definite, and A is called real positive definite (Re-pd) if H ( A ) is positive definite.
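A small numpy helper illustrating these definitions (the function names are our own):

```python
import numpy as np

def hermitian_part(A):
    """H(A) = (A + A*) / 2."""
    return (A + A.conj().T) / 2

def is_re_nnd(A, tol=1e-12):
    return np.all(np.linalg.eigvalsh(hermitian_part(A)) >= -tol)

def is_re_pd(A, tol=1e-12):
    return np.all(np.linalg.eigvalsh(hermitian_part(A)) > tol)

# I + S with S skew-symmetric has H(A) = I, so it is Re-pd.
S = np.array([[0.0, 2.0], [-2.0, 0.0]])
A = np.eye(2) + S
assert is_re_pd(A) and is_re_nnd(A)
# A skew-symmetric matrix alone is Re-nnd but not Re-pd: H(S) = 0.
assert is_re_nnd(S) and not is_re_pd(S)
```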
Between 2011 and 2014, Xiong, Qin, and Liu explored the Re-nnd and Re-pd solutions to the system (1) [29,30].
Theorem 19
(Re-nnd solutions for (1) over C . [30]). For A , C ∈ C p × n , B , D ∈ C n × q , suppose that each equation in (1) has a Re-nnd solution. If the system (1) has a solution, then
( a ) there exists a Re-nnd solution if and only if
rank A C B D = rank A C A B D A = rank A C B B D B .
( b ) all the solutions are Re-nnd solutions if and only if rank ( A ) = n or rank ( B ) = n .
Theorem 20
(Re-pd solutions for (1) over C . [30]). For A , C ∈ C p × n , B , D ∈ C n × q , assume that each equation in (1) has a Re-pd solution. If the system (1) has a solution, then
( a ) there exists a Re-pd solution.
( b ) all the solutions are Re-pd solutions if and only if
rank A C B D min rank A C A B D A rank ( A ) , rank A C B B D B rank ( B ) = n .
Remark 5
([29]). When the system (1) has a Re-nnd (Re-pd) solution, one of the Re-nnd (Re-pd) solutions is given by
X = A C ( A C ) + A A C ( A ) + ( I n A A ) Y ( I n A A ) ,
for some Re-nnd (Re-pd) matrix Y C n × n .
In this section, we introduced the generalized inverse method for solving the system (1), along with the conclusions regarding special solutions, such as Hermitian, non-negative, generalized (anti-)reflexive, Re-nnd, and Re-pd solutions. Additionally, we focused on the maximal and minimal ranks and inertias of the solutions to system (1), the conditions under which these extremal values are achieved, and the expression of solutions that satisfy inequality constraints.
The generalized inverse methods were first proposed for solving the system (1) and are currently the most widely used approach. These methods often provide a more concise expression for the solutions. However, there are many special types of solution that cannot be represented solely using generalized inverses. In Section 4, a deeper exploration of these special solutions will be presented using matrix decomposition techniques.

3. The System (1) over Hilbert Spaces, Hilbert C ∗ -Modules, and Rings

The research mentioned above on matrices can be extended to more general cases, such as Hilbert spaces, Hilbert C ∗ -modules, and rings. However, there are several limitations when extending to these cases, leading to fewer studies compared to those on matrices. These studies primarily focus on the Hermitian and positive cases, with some scholars also investigating reducible solutions [33,34,35,36,37,38,39].
Dajić and Koliha were the first to study the system (1) for bounded linear operators between Hilbert spaces with the restriction that A and B have closed ranges. They provided conditions for the existence of general, Hermitian, and positive solutions and obtained formulas for the general form of these solutions [33]. Later, they extended these results from rings to rectangular matrices and operators between complex Hilbert spaces via embedding [34]. In 2008, Xu considered conditions under Hilbert C -modules [35]. In 2016, (anti-)reflexive solutions over rings were examined using the inner inverse [36]. In 2021, Radenković et al. reconsidered the system (1) in the context of Hilbert C -modules using orthogonally complemented projections, providing alternative expressions [37]. Subsequently, Zhang et al. discussed positive and real positive solutions to (1) over Hilbert spaces using the reduced solution to the system A X = C and B X = D , where the ranges of A and C may not be closed [38,39].
A Hilbert space is a complete inner-product space. A Hilbert C ∗ -module is a natural generalization of a Hilbert space, obtained by replacing the field of scalars C with a C ∗ -algebra. Since finite-dimensional spaces, Hilbert spaces, and C ∗ -algebras can all be regarded as Hilbert C ∗ -modules, matrix equations can be studied in a unified manner within the framework of Hilbert C ∗ -modules. The scope of rings is even broader. In the following statements, “ring” refers to an associative ring R with a unit element 1 ≠ 0 .
Hereinafter, we introduce the notations and definitions used in this section:
Let H, K, and L denote complex Hilbert spaces, B ( H , K ) represent the set of all bounded linear operators from H to K, and B ( H ) be the set of all bounded linear operators on H. For A ∈ B ( H , K ) , let R ( A ) , N ( A ) , and R ( A ) ¯ represent the range, the null space, and the closure of the range of the operator A, respectively. An operator A ∈ B ( H , K ) is said to be regular if there exists an operator A − ∈ B ( K , H ) such that A A − A = A ; A − is referred to as an inner inverse of A. It is well known that A is regular if and only if A has a closed range. For a closed subspace M of H, P M denotes the orthogonal projection onto M.
The Hilbert C ∗ -module is analogous to a Hilbert space, except that its inner product is not scalar-valued, but takes values in a C ∗ -algebra. Therefore, we continue to use the same notation for Hilbert modules as for Hilbert spaces. On Hilbert C ∗ -modules, a closed submodule M of H is said to be orthogonally complemented in H if H = M ⊕ M ⊥ , where M ⊥ = { x ∈ H ∣ ⟨ x , y ⟩ = 0 for any y ∈ M } . In this case, the projection from H onto M is denoted by P M .
For an arbitrary ring R with involution x ↦ x ∗ , an element a ∈ R is Hermitian if a ∗ = a . If there exists b ∈ R such that a b a = a , then a is said to be regular (or inner invertible), and b is called an inner inverse of a, denoted by a − .
Initially, the general solution of (1) over rings is presented.
Theorem 21
(General solutions for (1) over a ring. [34,36]). Let a , b , c , d R such that a and b are regular elements. Then, the following statements are equivalent.
( a ) There exists a solution x ∈ R of the system of equations a x = c , x b = d .
( b )   c b = a d , c = a a − c , and d = d b − b .
Moreover, if ( a ) or ( b ) is satisfied, then any solution of a x = c , x b = d can be expressed as
x = a − c + ( 1 − a − a ) d b − + ( 1 − a − a ) f ( 1 − b b − )
for any f R .
Remark 6.
In [34], the authors further extended the results from elements in the ring to matrices over R, as well as to bounded linear operators between complex Banach or Hilbert spaces by constructing an embedding.
Next, the expression for the general solution in Hilbert C -modules is provided through orthogonal complements.
Theorem 22
(General solutions for (1) over Hilbert C -modules. [37]). Let H , K , L be Hilbert C -modules, A , C B ( H , K ) , B , D B ( L , H ) such that R ( A ) ¯ and R ( B ) ¯ are orthogonally complemented. Then, the system (1) has a general solution if and only if
R ( C ) ⊆ R ( A ) , R ( D ∗ ) ⊆ R ( B ∗ ) , A † A D = A † C B .
In such a case, the general solution has the form of
X = A † C + ( I − A † A ) [ ( B ∗ ) † D ∗ ( I − A † A ) ] ∗ + ( I − A † A ) Z [ I − ( B ∗ ) † B ∗ ] ,
where Z B ( H ) is arbitrary.
Remark 7.
Actually, R ( A ) ¯ and R ( B ) ¯ being orthogonally complemented implies that A and B are regular and have closed ranges. Additionally, Theorem 22 uses the Moore–Penrose inverse in place of the inner inverse. The definition of the Moore–Penrose inverse is similar to that in matrices, and therefore will not be elaborated further.
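For matrices, the Moore–Penrose inverse A⁺ is the unique matrix satisfying the four Penrose equations, and numpy's pinv computes it; a quick check:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((5, 3)) @ rng.standard_normal((3, 4))  # rank <= 3
X = np.linalg.pinv(A)

assert np.allclose(A @ X @ A, A)              # (1) A X A = A
assert np.allclose(X @ A @ X, X)              # (2) X A X = X
assert np.allclose((A @ X).conj().T, A @ X)   # (3) (A X)* = A X
assert np.allclose((X @ A).conj().T, X @ A)   # (4) (X A)* = X A
```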
In 2023, Zhang et al. extended this result to infinite-dimensional Hilbert spaces, without the requirement that the corresponding operators A and B have closed ranges, by using reduced solutions.
Theorem 23
(General solutions for (1) over a Hilbert space. [38]). Let A , B , C , D ∈ B ( H ) , and P = P R ( A ) ¯ . Then, the system (1) has a solution if and only if R ( C ) ⊆ R ( A ) , R ( D ∗ ) ⊆ R ( B ∗ ) , and A D = C B . In this case, the general solution can be represented by
X = F ( I P ) H + ( I P ) Z ( I P R ( B ) ) ,
where F is the reduced solution of A X = C , H is the reduced solution of B X = D , and Z B ( H ) is arbitrary. Specifically, if R ( A ) and R ( B ) are closed, the general solution can be represented by
X = A † C + D B † − A † A D B † + ( I − A † A ) Z ( I − B B † ) .
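The closed-range formula above is easy to verify numerically in the matrix case. A minimal numpy sketch (our own illustration, with consistent data built from a known solution X0 and an arbitrary parameter matrix Z):

```python
import numpy as np

rng = np.random.default_rng(3)
m, n, p, q = 5, 4, 3, 3
A = rng.standard_normal((p, m))
B = rng.standard_normal((n, q))
X0 = rng.standard_normal((m, n))
C, D = A @ X0, X0 @ B            # consistent data: R(C) in R(A), etc.

Ap, Bp = np.linalg.pinv(A), np.linalg.pinv(B)
Z = rng.standard_normal((m, n))  # arbitrary parameter
# X = A^+ C + D B^+ - A^+ A D B^+ + (I - A^+ A) Z (I - B B^+)
X = Ap @ C + D @ Bp - Ap @ A @ D @ Bp \
    + (np.eye(m) - Ap @ A) @ Z @ (np.eye(n) - B @ Bp)

assert np.allclose(A @ X, C) and np.allclose(X @ B, D)
```

Since p < m here, A has a nontrivial null space and the Z term actually contributes.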
The situations of Hermitian solutions, positive solutions, real positive solutions, and reflexive solutions will be introduced in the following sections.

3.1. Hermitian Solutions

The Hermitian solution of (1) over Hilbert space was first studied by Dajić and Koliha.
Theorem 24
(Hermitian solutions for (1) over a Hilbert space. [33,34]). Let A , C B ( H , K ) , B , D B ( L , H ) , and the operators A and B have closed ranges. Assume M = B ( I A A ) have a closed range, T = D B A C . Let A , B , and M represent the inner inverses of A, B, and M, respectively. Then, the system (1) has a Hermitian solution X B ( H ) if and only if
A A C = C , A D = C B , ( I M M ) D = ( I M M ) B A C ,
and A C and B D are Hermitian. The general Hermitian solution is given by
X = A C + ( I A A ) M T + ( I A A ) ( I M M ) [ A C + ( I A A ) M T ] + ( I A A ) ( I M M ) U ( I M M ) ( I A A ) ,
where U B ( H ) is Hermitian.
Remark 8.
In [34], the authors presented an alternative form of (9) and (10):
N ( B ) N ( D ) , A D = C B , ( I N N ) C = ( I N N ) A ( B ) D
and
X = B D + ( I B B ) N Q + ( I B B ) ( I N N ) [ B D + ( I B B ) N Q ] + ( I B B ) ( I N N ) V ( I N N ) ( I B B ) ,
where N = A ( I B B ) , Q = C A B D , and V B ( H ) is Hermitian.
Remark 9.
Theorem 24 holds true in C -Hilbert modules and rings with involution as well [34,35].
By using the projection operator, Theorem 24 can be restated in another form over Hilbert C -modules.
Theorem 25
(Hermitian solutions for (1) over Hilbert C -modules. [37]). Let H , K , L be Hilbert C -modules, and A , C B ( H , K ) , B , D B ( L , H ) such that R ( A ) ¯ and R ( B ) ¯ are orthogonally complemented. Let M = B ( I A A ) , and assume that R ( M ) ¯ is orthogonally complemented. Then, the system (1) has a Hermitian solution if and only if
R ( C ) R ( A ) , R ( S ) R ( M ) , A A D = A C B ,
and C A and S M are Hermitian, where S = ( D B A C ) ( I A A ) . In this case, the Hermitian solution has the form
X = A C + ( I A A ) ( A C ) + ( I A A ) ( D ) ( I A A ) + ( I A A ) D ( I M M ) ( I A A ) + ( I A A ) ( I M M ) Z ( I M M ) ( I A A ) ,
where D ′ = M − S and Z ∈ B ( H ) is an arbitrary Hermitian operator.

3.2. Positive Solutions

For the cases where the system (1) has a positive solution over Hilbert space, two different descriptions are presented as follows:
Theorem 26
(Positive solutions for (1) over a Hilbert space. [33]). Let A , C B ( H , K ) , B , D B ( L , H ) ,
A ^ = A B , C ^ = C D , Q = C A C B ( A D ) D B .
Assume that A ^ and Q are regular. The system (1) has a positive solution X B ( H ) if and only if Q is positive and R ( C ^ ) R ( Q ) . The general positive solution is given by
X = C ^ Q C ^ + ( I A ^ A ^ ) T ( I A ^ A ^ ) ,
where T ∈ B ( H ) is an arbitrary positive operator.
Theorem 27
(Positive solutions for (1) over a Hilbert space. [33,35]). If A , C B ( H , K ) , B , D B ( L , H ) , M = B ( I A A ) , and G = D ( C B ) ( C A ) C , such that A, B, M, C A and G B are regular, R ( C B ) R ( C A ) . Then, (1) has a positive solution if and only if
C A = A C , D B = B D , C B = A D , R ( C ) = R ( C A ) , R ( G ) = R ( G B ) ,
and C A and F are positive. The general positive solution is given by
X = C ( C A ) C + G F G + ( I A A ) ( I M M ) T ( I M M ) ( I A A ) ,
where T B ( H ) is positive.
Remark 10.
Dajić and Koliha extended Theorems 26 and 27 to strongly-reducing rings with involution, which is an extension of the C -Hilbert modules [34].
Zhang et al. presented the existence and the general form of the positive solutions of (1) without the restriction on the closed range over Hilbert space.
Theorem 28
(Positive solutions using projection operators for (1) over a Hilbert space. [38]). Let A , B , C , and D be operators in B ( H ) . Let P = P R ( A ) and Q = P R ( ( I P ) B ) . The system (1) has positive solutions if and only if the following conditions hold.
( a )   R ( D ) R ( B ) , R ( C ) R ( A ) , and A D = C B .
( b )   R ( G ) R ( ( G P ) 1 2 ) and C A 0 .
( c )   ( D B G B H H ) ( I P ) C 0 , R ( ( D B G ) ( I P ) ) R ( C ( I P ) ) , and R ( K ) R ( ( K Q ) 1 2 ) .
Here, G, H, and K are the reduced solutions of A X = C , ( G P ) 1 2 X = G ( I P ) , and B ( I P ) X = ( D B G B H H ) ( I P ) , respectively. Then, the positive solution is given by
X = G + ( I P ) G + ( I P ) H H ( I P ) + ( I P ) K ( I P ) + ( I P ) K ( I P Q ) + ( I P Q ) L L ( I P Q ) + ( I P Q ) Z ( I P Q ) ,
for any positive operator Z B ( H ) , where L is the reduced solution of ( K Q ) 1 2 X = K ( I Q ) .
The conclusions for the Hilbert C -modules can be directly obtained using the projection operator.
Theorem 29
(Positive solutions using projection operators for (1) over Hilbert C - modules. [37]). Let H , K , L be Hilbert C -modules, A , C B ( H , K ) , B , D B ( L , H ) such that R ( A ) ¯ and R ( B ) ¯ are orthogonally complemented. Denote P = A A , M = B ( I P ) , D = A C , S = [ D B D M ( D ) ( D P ) D ] ( I P ) and D = M S . Assume that R ( M ) ¯ is orthogonally complemented, R ( C ) R ( A ) , R ( D ) = R ( D P ) , along with C A L ( H ) is positive. Then, the system (1) has a positive solution if and only if
P D = D B , R ( S ) R ( M ) , R ( D ) = R ( D M M ) ,
and S M L ( H ) is positive. In such a case, the general solution has the form
X = X 0 + ( I P ) ( I M M ) Z ( I M M ) ( I P ) ,
where
X 0 = D + ( I P ) ( D ) + ( I P ) ( D ) ( D P ) D ( I P ) + ( I P ) Z 0 ( I P ) , Z 0 = D + ( I M M ) ( D ) + ( I M M ) ( D ) ( D M M ) D ( I M M ) ,
where Z ∈ B ( H ) is an arbitrary positive operator.

3.3. Re-pd Solutions

The general form of the real positive solutions of (1) without the restriction on the closed range over Hilbert spaces was also provided by Zhang et al.
Theorem 30
(Re-pd solutions for (1) over a Hilbert space. [39]). Let A , B , C , D B ( H ) , P = P R ( A ) ¯ , and Q = P R ( ( I P ) B ) . The system (1) has a real positive solution if the following conditions are satisfied.
( a )   R ( C ) R ( A ) , R ( D ) R ( B ) , and A D = C B .
( b )   C A , ( D + B F ) ( I P ) B are real positive operators.
( c )   R ( ( D + B F ) ( I P ) ) R ( B ( I P ) ) , where F is the reduced solution of A X = C .
In this case, one of the real positive solutions can be represented as
X = F ( I P ) F + ( I P ) H ( I P ) ( I P ) H ( I P Q ) + ( I P Q ) Z ( I P Q ) ,
where H is the reduced solution of B ( I P ) X = ( D + B F ) ( I P ) , and Z ∈ B ( H ) is an arbitrary Re-pd operator.

3.4. (Anti-)Reflexive Solutions

Načevska used algebraic methods in rings with involution to generalize the (anti-)reflexive solutions from complex matrices to rings [36].
An element w ∈ R is said to be a generalized reflection element if w ∗ = w and w 2 = 1 . For generalized reflection elements w , v ∈ R , an element x ∈ R is called a generalized reflexive element (with respect to w and v) if x = w x v , the set of which is denoted by R r ( w , v ) , and x is called a generalized anti-reflexive element (with respect to w and v) if x = − w x v , the set of which is denoted by R a r ( w , v ) .
For a , b , c , d R , these elements can be decomposed using projections as follows [36]:
a = a 1 + a 2 R r ( 1 , w ) R a r ( 1 , w ) , b = b 1 + b 2 R r ( v , 1 ) R a r ( v , 1 ) , c = c 1 + c 2 R r ( 1 , v ) R a r ( 1 , v ) , d = d 1 + d 2 R r ( w , 1 ) R a r ( w , 1 ) .
Define
a 1 ^ = 1 2 a 1 + w a 1 , a 2 ^ = 1 2 a 2 w a 2 , b 1 ^ = 1 2 b 1 + b 1 v , b 2 ^ = 1 2 b 2 b 2 v .
Theorem 31
(Reflexive solutions for (1) over a ring. [36]). Let a , b , c , d ∈ R and (11) hold, such that a 1 , a 2 , b 1 and b 2 are regular elements. Then, the following statements are equivalent.
( a ) There exists a solution x R r ( w , v ) of the system (1).
( b )   c 1 b 1 = a 1 d 1 , c 1 = a 1 a 1 ^ c 1 , d 1 = d 1 b 1 ^ b 1 , c 2 b 2 = a 2 d 2 , c 2 = a 2 a 2 ^ c 2 , and d 2 = d 2 b 2 ^ b 2 .
Moreover, if ( a ) or ( b ) is satisfied, then any generalized reflexive solution of (1) can be expressed as
x = x 0 + e f g ,
where
x 0 = a 1 ^ c 1 + 1 a 1 ^ a 1 d 1 b 1 ^ + a 2 ^ c 2 + 1 b 2 ^ b 2 d 2 b 2 ^ , e = 1 a 1 ^ a 1 a 2 ^ a 2 , g = 1 b 1 b 1 ^ c 2 b 2 ^ ,
for a 1 ^ , a 2 ^ , b 1 ^ , b 2 ^ given by (12) and arbitrary f ∈ R r ( w , v ) .
Theorem 32
(Anti-reflexive solutions for (1) over a ring. [36]). Let a , b , c , d R and (11) hold such that a 1 , a 2 , b 1 , b 2 are regular elements. Then, the following statements are equivalent.
( a ) There exists a solution x R a r ( w , v ) of the system (1).
( b )   c 2 b 2 = a 1 d 1 , c 2 = a 1 a ^ 1 c 2 , d 2 = d 2 b ^ 1 b 1 , c 1 b 1 = a 2 d 2 , c 1 = a 2 a ^ 2 c 1 and d 1 = d 1 b ^ 2 c 2 .
Moreover, if ( a ) or ( b ) is satisfied, then any generalized anti-reflexive solution of (1) can be expressed as
x = x 0 + e f g ,
where
x 0 = a ^ 1 c 2 + 1 a ^ 1 a 1 d 1 b ^ 2 + a ^ 2 c 1 + 1 a ^ 2 a 2 d 2 b ^ 1 , e = 1 a ^ 1 a 1 a ^ 2 a 2 , g = 1 b 1 b ^ 1 b 2 b ^ 2 ,
for a ^ 1 , a ^ 2 , b ^ 1 , b ^ 2 given by (12) and arbitrary f R a r ( w , v ) .
This section mainly introduced the system (1) over Hilbert spaces, Hilbert C ∗ -modules, and rings, focusing on tools such as inner inverses, projection operators, orthogonal complements, and reduced solutions.

4. Matrix Decomposition Methods for Solving (1)

Matrix decomposition techniques, such as eigenvalue decomposition (EVD), singular value decomposition (SVD), QR decomposition, LU decomposition, and others, play a crucial role in solving matrix equations. They are especially important in finding special solutions for system (1), such as mirror-symmetric, skew-symmetric, orthogonal symmetric, unitary, { P , Q , k } -reflexive, (Hermitian) R-conjugate, ( R , S ) -conjugate, and Hamiltonian solutions, as well as the corresponding least squares solutions. These types of solution, which are difficult to calculate using generalized inverses alone, can be efficiently computed using matrix decompositions. This section will introduce the related methods and results.
We introduce the eigenvalue decomposition (EVD). Let A be an n × n matrix with n linearly independent eigenvectors q i for i = 1 , , n . Then, A can be factored as
A = Q Λ Q − 1 ,
where Q is the square n × n matrix whose i-th column is the eigenvector q i of A, and Λ is the diagonal matrix whose diagonal elements are the corresponding eigenvalues, with Λ i i = λ i . However, it is important to note that not all square matrices are diagonalizable. In such cases, a more practical form of decomposition is the singular value decomposition.
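For instance, a real symmetric matrix is always diagonalizable with an orthogonal Q, and the factorization can be checked with numpy:

```python
import numpy as np

rng = np.random.default_rng(4)
M = rng.standard_normal((4, 4))
A = M + M.T                        # real symmetric, hence diagonalizable

lam, Q = np.linalg.eigh(A)         # columns of Q are eigenvectors of A
# A = Q diag(lam) Q^{-1}; here Q is orthogonal, so Q^{-1} = Q^T.
assert np.allclose(A, Q @ np.diag(lam) @ np.linalg.inv(Q))
```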
The singular value decomposition (SVD) of a given m × n matrix A of rank k is
A = U [ Σ O ; O O ] V ∗ ,
where U and V are unitary or orthogonal matrices and Σ = diag ( σ 1 , σ 2 , , σ k ) , with σ 1 σ 2 σ k > 0 .
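A numpy illustration of the full SVD of a rank-2 matrix (the sizes are chosen arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(5)
# Build a 5x4 matrix of rank k = 2.
A = rng.standard_normal((5, 2)) @ rng.standard_normal((2, 4))

U, s, Vh = np.linalg.svd(A)        # full SVD: U is 5x5, Vh is 4x4
k = np.sum(s > 1e-10)              # numerical rank from the singular values
assert k == 2

# A = U [diag(sigma_1..k) O; O O] V*
Sigma = np.zeros((5, 4))
Sigma[:k, :k] = np.diag(s[:k])
assert np.allclose(A, U @ Sigma @ Vh)
```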
In 1981, Paige and Saunders extended the B-singular value decomposition in [40] to the generalized singular value decomposition (GSVD) for two matrices with the same number of columns. This is a powerful tool for solving equations. The GSVD can be expressed as the following lemma.
Lemma 1
(The generalized singular value decomposition over C . [41]). Let A C m × n and B C p × n be two matrices with the same number of columns. Denote k = rank A T , B T . There exist unitary matrices U and V, and a nonsingular matrix Q, such that
A = U Σ A Q , B = V Σ B Q ,
where
Σ A = I S A O O , Σ B = O S B O I ,
with S A = diag ( α 1 , α 2 , , α s ) , S B = diag ( β 1 , β 2 , , β s ) , 1 α 1 α 2 α s > 0 , 1 β 1 β 2 β s > 0 , and α i 2 + β i 2 = 1 for i = 1 , 2 , , s .
Remark 11.
It is important to note that the GSVD has various representations, which cannot be exhaustively listed. In the subsequent discussion, alternative forms of the GSVD will be presented.
Next, we present the necessary and sufficient conditions for the system (1) and the expression of the general solutions through the SVD.
Theorem 33
(General solutions using the SVD for (1) over C ). For A ∈ C p × m , B ∈ C n × q , C ∈ C p × n , D ∈ C m × q , rank ( A ) = r 1 and rank ( B ) = r 2 , let the SVDs of A and B be expressed as
A = U 1 [ Σ A O ; O O ] V 1 ∗ , B = U 2 [ Σ B O ; O O ] V 2 ∗ ,
where Σ A = diag ( σ 1 , σ 2 , , σ r 1 ) and Σ B = diag ( σ 1 , σ 2 , , σ r 2 ) . Denote
C ^ = U 1 ∗ C U 2 = [ C 11 C 12 ; C 21 C 22 ] and D ^ = V 1 ∗ D V 2 = [ D 11 D 12 ; D 21 D 22 ] ,
where the row partitions of C ^ and D ^ have sizes r 1 , p − r 1 and r 1 , m − r 1 , respectively, and the column partitions have sizes r 2 , n − r 2 and r 2 , q − r 2 , respectively.
Then, the system (1) is consistent if and only if C 21 , C 22 , D 12 , D 22 are zero matrices and Σ A − 1 C 11 = D 11 Σ B − 1 . The general solution can be expressed as
X = V 1 [ Σ A − 1 C 11 Σ A − 1 C 12 ; D 21 Σ B − 1 Z ] U 2 ∗ ,
where Z ∈ C ( m − r 1 ) × ( n − r 2 ) is arbitrary.
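Theorem 33 translates directly into an algorithm. The following numpy sketch (the function name is ours; real matrices; the free block Z is set to zero, which yields the least norm solution) checks consistency and assembles X:

```python
import numpy as np

def solve_via_svd(A, B, C, D, tol=1e-10):
    """Solve AX = C, XB = D (X is m x n) via the SVDs of A and B."""
    U1, sa, V1h = np.linalg.svd(A)            # A = U1 Sigma_A V1*
    U2, sb, V2h = np.linalg.svd(B)            # B = U2 Sigma_B V2*
    r1, r2 = np.sum(sa > tol), np.sum(sb > tol)
    Chat = U1.conj().T @ C @ U2               # C^ = U1* C U2
    Dhat = V1h @ D @ V2h.conj().T             # D^ = V1* D V2
    # Consistency: the blocks C21, C22, D12, D22 must vanish, and
    # Sigma_A^{-1} C11 = D11 Sigma_B^{-1}.
    assert np.allclose(Chat[r1:, :], 0) and np.allclose(Dhat[:, r2:], 0)
    assert np.allclose(Chat[:r1, :r2] / sa[:r1, None],
                       Dhat[:r1, :r2] / sb[None, :r2])
    m, n = A.shape[1], B.shape[0]
    Xt = np.zeros((m, n))
    Xt[:r1, :] = Chat[:r1, :] / sa[:r1, None]      # Sigma_A^{-1} [C11 C12]
    Xt[r1:, :r2] = Dhat[r1:, :r2] / sb[None, :r2]  # D21 Sigma_B^{-1}
    # Xt[r1:, r2:] is the free block Z; Z = 0 gives the least norm solution.
    return V1h.conj().T @ Xt @ U2.conj().T         # X = V1 Xt U2*

rng = np.random.default_rng(6)
A = rng.standard_normal((3, 5))      # p = 3 < m = 5: A has a null space
B = rng.standard_normal((4, 3))
X0 = rng.standard_normal((5, 4))
C, D = A @ X0, X0 @ B                # consistent right-hand sides
X = solve_via_svd(A, B, C, D)
assert np.allclose(A @ X, C) and np.allclose(X @ B, D)
```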
Remark 12.
Theorem 33 is a special form of the system A X B = E , F X G = H , which is considered by the GSVD in [42].

4.1. Various Symmetric Solutions

In this section, we consider the least squares forms of various symmetric solutions to the system (1) over R , including ( k , p ) -mirror(skew-)symmetric solutions, symmetric solutions, and bi-(anti-)symmetric solutions. The existence conditions and expressions for symmetric solutions in subspaces are also discussed.
In 2006, Li et al. considered the least squares ( k , p ) -mirror-symmetric solutions through the SVD of matrices over R [43].
A ( k , p ) -mirror matrix W ( k , p ) is defined by
W ( k , p ) = O O J k O I p O J k O O ,
where J k is the k-square backward identity matrix with ones along the secondary diagonal and zeros elsewhere. A matrix A R ( 2 k + p ) × ( 2 k + p ) is called a ( k , p ) -mirror-symmetric matrix if and only if
A = W ( k , p ) A W ( k , p ) .
We denote the set of all ( k , p ) -mirror(skew-)symmetric matrices by M S ( k , p ) .
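A short numpy illustration of W(k, p) and of the mirror-symmetrization of an arbitrary matrix (the helper name is ours):

```python
import numpy as np

def W(k, p):
    """The (k,p)-mirror matrix of order 2k + p."""
    n = 2 * k + p
    Wm = np.zeros((n, n))
    J = np.fliplr(np.eye(k))          # backward identity J_k
    Wm[:k, n - k:] = J                # top-right block J_k
    Wm[k:k + p, k:k + p] = np.eye(p)  # middle block I_p
    Wm[n - k:, :k] = J                # bottom-left block J_k
    return Wm

k, p = 2, 1
Wkp = W(k, p)
assert np.allclose(Wkp @ Wkp, np.eye(2 * k + p))   # W(k,p) is involutory

# (k,p)-mirror-symmetric part of an arbitrary matrix:
rng = np.random.default_rng(7)
X = rng.standard_normal((2 * k + p, 2 * k + p))
Xm = (X + Wkp @ X @ Wkp) / 2
assert np.allclose(Xm, Wkp @ Xm @ Wkp)
```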
The results about least squares ( k , p ) -mirror(skew-)symmetric solutions are as follows.
Theorem 34
(Least squares ( k , p ) -mirror(skew-)symmetric solutions for (1) over R . [43]). Let A , C R h × ( 2 k + p ) , B , D R ( 2 k + p ) × l and
K = 1 2 I k O J k O 2 I p O J k O I k .
Denote
A K = [ A 1 , A 2 ] , C K = [ C 1 , C 2 ] , K T B = B 1 B 2 , K T D = D 1 D 2 ,
where A 1 , C 1 R h × ( k + p ) , B 1 , D 1 R ( k + p ) × l , A 2 , C 2 R h × k , B 2 , D 2 R k × l . Denote the SVDs of A 1 , A 2 , B 1 , B 2 as
A 1 = U 1 Σ 1 O O O V 1 T = [ U 11 , U 12 ] Σ 1 O O O V 11 T V 12 T = U 11 Σ 1 V 11 T , A 2 = U 2 Σ 2 O O O V 2 T = [ U 21 , U 22 ] Σ 2 O O O V 21 T , V 22 T = U 21 Σ 2 V 21 T , B 1 = P 1 τ 1 O O O Q 1 T = [ P 11 , P 12 ] τ 1 O O O Q 11 T , Q 12 T = P 11 τ 1 Q 11 T , B 2 = P 2 τ 2 O O O Q 2 T = [ P 21 , P 22 ] τ 2 O O O Q 21 T Q 22 T = P 21 τ 2 Q 21 T ,
with Σ 1 = diag ( σ 1 , , σ r 1 ) , Σ 2 = diag ( δ 1 , , δ r 2 ) , τ 1 = diag ( α 1 , , α s 1 ) , τ 2 = diag ( β 1 , , β s 2 ) , r 1 = rank A 1 , r 2 = rank A 2 , s 1 = rank B 1 , s 2 = rank B 2 .
( a ) Then, the general solution X ∈ R ( 2 k + p ) × ( 2 k + p ) for the problem ‖ A 1 X − C 1 ‖ 2 + ‖ X B 1 − D 1 ‖ 2 = min can be expressed as
X = V 1 Φ Σ 1 U 11 T C 1 P 11 + V 11 T D 1 Q 11 τ 1 Σ 1 1 U 11 T C 1 P 12 V 12 T D 1 Q 11 τ 1 1 X 22 P 1 T ,
where Φ = φ i j , φ i j = 1 σ i 2 + α j 2 , 1 i r 1 , 1 j s 1 , and X 22 R 2 k + p r 1 × 2 k + p s 1 is arbitrary.
( b ) Then, ‖ A X − C ‖ 2 + ‖ X B − D ‖ 2 = min has a solution X ∈ M S ( k , p ) . Moreover, the general solution can be given by
X = K X 1 O O X 2 K T ,
where
X 1 = V 1 Φ Σ 1 U 11 T B 1 P 11 + V 11 T D 1 Q 11 τ 1 Σ 1 1 U 11 T B 1 P 12 V 12 T D 1 Q 11 τ 1 1 X 22 P 1 T ,
X 2 = V 2 Φ Σ 2 U 21 T B 21 T B 2 P 21 + V 21 T D 2 Q 21 τ 2 Σ 2 1 U 21 T B 2 P 22 V 22 T D 2 Q 21 τ 2 1 X 22 P 2 T ,
where Φ = ( φ i j ) , φ i j = 1 σ i 2 + α j 2 , 1 i r 1 , 1 j s 1 , Φ = ( φ i j ) , φ i j = 1 δ i 2 + η j 2 , 1 i r 2 , 1 j s 2 , X 22 R ( k r 2 ) × ( k s 2 ) and X 22 R k r 2 × k s 2 are arbitrary.
( c ) Let S E represent the solution set of ‖ A X − C ‖ 2 + ‖ X B − D ‖ 2 = min with X ∈ M S ( k , p ) . For a given E ∈ R ( 2 k + p ) × ( 2 k + p ) , ‖ X − E ‖ = min has a unique solution, X ˜ ∈ S E . Moreover, X ˜ can be expressed as
X ˜ = K X ˜ 1 O O X ˜ 2 K T ,
where
X ˜ 1 = V 1 Φ Σ 1 U 11 T B 1 P 11 + V 11 T D 1 Q 11 τ 1 Σ 1 1 U 11 T B 1 P 12 V 12 T D 1 Q 11 τ 1 1 V 12 T X 11 P 12 P 1 T , X ˜ 2 = V 2 Φ Σ 2 U 21 T B 2 P 21 + V 21 T D 2 Q 21 τ 2 Σ 2 1 U 21 T B 2 P 22 V 22 T D 2 Q 21 τ 2 1 V 22 T E 22 P 22 P 2 T , X 11 = 1 2 I k O J k O 2 I p O X 1 I k O O 2 I p J k O , X 1 = 1 2 X + W ( k , p ) X W ( k , p ) , E 22 = 1 2 [ J k , O , I k ] X 1 J k O I k .
with Φ and Φ being given in ( b ) .
In 2010, Yuan considered the least squares solutions of the system (1) in a form different from that of Theorem 34 ( a ) , together with the least squares symmetric solutions [44].
Theorem 35
(Least squares symmetric solutions for (1) over R . [44]). Assume that A R p × m ,   B R n × q , C R p × n , and D R m × q . Let the SVDs of the matrices A , B be given by
A = U Σ O O O V T , B = P Ω O O O Q T ,
where U = U 1 , U 2 , V = V 1 , V 2 , P = P 1 , P 2 , Q = Q 1 , Q 2 are all orthogonal matrices and the partitions are compatible with the sizes of Σ = diag ( σ 1 , , σ r 1 ) and Ω = diag ( ω 1 , , ω r 2 ) , r 1 = rank ( A ) , r 2 = rank ( B ) .
( a ) The least squares solution set of (1) can be expressed as
S 1 = X X = V Φ V 1 T A T C + D B T P 1 Σ 1 2 V 1 T A T C + D B T P 2 V 2 T A T C + D B T P 1 Ω 1 2 X 22 P T ,
where X 22 R ( n r 1 ) × ( n r 2 ) is an arbitrary matrix. The unique least norm least squares solution can be expressed as
X ^ = V Φ V 1 T A T C + D B T P 1 Σ 1 2 V 1 T A T C + D B T P 2 V 2 T A T C + D B T P 1 Ω 1 2 O P T .
( b ) Consider the condition m = n . Let the EVD of the matrix A T A + B B T be given by
A T A + B B T = W Γ O O O W T ,
where W = [ W 1 , W 2 ] is an orthogonal matrix and the partition is compatible with the size of Γ = diag ( γ 1 , , γ l ) , l = rank ( A T A + B B T ) . Then, the least squares symmetric solution set of (1) can be expressed as
S 2 = X R n × n X = W Ψ W 1 T H W 1 Γ 1 W 1 T H W 2 W 2 T H W 1 Γ 1 X 22 W T ,
where H = C T A + A T C + D B T + B D T and X 22 R ( n l ) × ( n l ) is an arbitrary symmetric matrix. The unique least norm least squares symmetric solution can be expressed as
X ^ = W Ψ W 1 T H W 1 Γ 1 W 1 T H W 2 W 2 T H W 1 Γ 1 O W T .
In 2014, Ke and Ma derived the generalized bi-(skew-)symmetric solutions of the system (1), together with the corresponding least squares solutions [31].
For a symmetric orthogonal matrix P ∈ R n × n , a matrix X ∈ R n × n is called a generalized bi-(skew-)symmetric matrix if and only if ( − ) X = X T = P X P .
Theorem 36
(Least squares bi-(skew-)symmetric solutions for (1) over R . [31]). Assume that A , C R p × n , B , D R n × q , and P R n × n is a symmetric orthogonal matrix. Let P be decomposed as
P = U [ I r O ; O − I n − r ] U T ,
where U is a symmetric orthogonal matrix. Let the partitions of A U , C U , B T U , and D T U be
A U = [ A 1 , A 2 ] , C U = [ C 1 , C 2 ] , B T U = [ B 1 T , B 2 T ] , D T U = [ D 1 T , D 2 T ] ,
with A 1 , C 1 ∈ R p × r , A 2 , C 2 ∈ R p × ( n − r ) , B 1 , D 1 ∈ R r × q , B 2 , D 2 ∈ R ( n − r ) × q , respectively.
( a ) Denote that K 1 = B 1 T L A 1 , N 1 = R B 1 A 1 T , Q 1 = D 1 T B 1 T A 1 C 1 K 1 D 1 B 1 , Q ¯ = C 1 T A 1 C 1 A 1 T L A 1 D 1 B 1 A 1 T L A 1 K 1 Q 1 N 1 , K 2 = B 2 T L A 2 , N 2 = R B 2 A 2 T , Q 2 = D 2 T B 2 T A 2 C 2 K 2 D 2 B 2 , Q ˜ = C 2 A 2 C 2 A 2 T L A 2 D 2 B 2 A 2 T L A 2 K 2 Q 2 N 2 . Then, (1) has bi-symmetric solutions if and only if equations
K 1 K 1 Q 1 R B 1 = Q 1 , Q ¯ L N 1 = O , R L A 1 L K 1 Q ¯ = O , K 2 K 2 Q 2 R B 2 = Q 2 , Q ˜ L N 2 = O , R L A 2 L K 2 Q ˜ = O , A 1 D 1 = C 1 B 1 , A 1 A 1 C 1 = C 1 , D 1 B 1 B 1 = D 1 , A 2 D 2 = C 2 B 2 , A 2 A 2 C 2 = C 2 , D 2 B 2 B 2 = D 2
hold. Under such circumstance, the bi-symmetric solutions can be expressed as
X = U X 11 O O X 22 U T ,
where
X 11 = 1 2 ( A 1 C 1 + L A 1 D 1 C 1 + L A 1 K 1 Q 1 R B 1 + Q ¯ N 1 R B 1 + L A 1 L K 1 Z 1 R N 1 R B 1 ) + 1 2 ( C 1 T A 1 T + B 1 T D 1 T L A 1 + R B 1 Q 1 T K 1 T L A 1 + R B 1 N 1 T Q ¯ T + R B 1 R N 1 Z 1 T L K 1 L A 1 ) , X 22 = 1 2 ( A 2 C 2 + L A 2 D 2 B 2 + L A 2 K 2 Q 2 R B 2 + Q ˜ N 2 R B 2 + L A 2 L K 2 Z 2 R N 2 R C 2 ) .
( b ) Let K = B 1 T L A 1 , N = L A 2 B 2 , Q 1 = D 2 T B 1 T A 1 C 1 + K C 1 T A 2 T , Q = D 1 A 1 C 2 B 2 + L A 1 C 1 T A 2 T B 2 L A 1 K 1 Q 1 N . Then, (1) has bi-skew-symmetric solutions if and only if
K K Q 1 R A 2 T = Q 1 , Q L N = 0 , R L A 1 L K Q = 0 , A 1 C 1 T = C 2 A 2 T , A 1 A 1 C 2 = C 2 , D 2 B 1 B 1 = D 2 , A 2 A 2 C 1 = C 1 , D 1 B 2 B 2 = D 1
hold. Under such circumstance, the bi-skew-symmetric solutions can be expressed as
X = U [ O X 12 ; − X 12 T O ] U T ,
where
X 12 = A 1 C 2 L A 1 C 1 T A 2 T + L A 1 K Q 1 R A 2 T + Q N R A 2 T + L A 1 L K Z R N R A 2 T .
Additionally, let the SVDs of [ A 1 T , B 1 ] and [ A 2 T , B 2 ] be
[ A 1 T , B 1 ] = V Σ 0 0 0 W T , [ A 2 T , B 2 ] = T Ω 0 0 0 Q T ,
where V = [ V 1 , V 2 ] , W = [ W 1 , W 2 ] , T = [ T 1 , T 2 ] , and Q = [ Q 1 , Q 2 ] are all orthogonal matrices and the partitions are compatible with the sizes of Σ = diag ( σ 1 , , σ s ) , Ω = diag ( ω 1 , , ω t ) , s = rank [ A 1 T , B 1 ] , t = rank [ A 2 T , B 2 ] .
( c ) The least squares bi-symmetric solutions of (1) can be expressed as
X = U X 11 O O X 22 U T ,
where
X 11 = V Φ 1 [ V 1 T ( C 1 T D 1 ) W 1 Σ + Σ W 1 T ( C 1 T D 1 ) T V 1 ] Σ 1 W 1 T ( C 1 T D 1 ) T V 2 V 2 T ( C 1 T D 1 ) W 1 Σ 1 G 1 V T , X 22 = T Φ 2 [ T 1 T ( C 2 T D 2 ) Q 1 Ω + Ω Q 1 T ( C 2 T D 2 ) T T 1 ] Ω 1 Q 1 T ( C 2 T D 2 ) T T 2 T 2 T ( C 2 T D 2 ) Q 1 Ω 1 G 2 T T ,
where Φ 1 = ( φ i j ) R s × s , φ i j = 1 σ i 2 + σ j 2 , 1 i , j s , Φ 2 = ( φ i j ) R t × t , φ i j = 1 ω i 2 + ω j 2 , 1 i , j t , G 1 R ( r s ) × ( r s ) and G 2 R ( n r t ) × ( n r t ) are arbitrary symmetric matrices.
( d ) The least squares bi-skew-symmetric solutions of (1) can be expressed as
X = U [ O X 12 ; − X 12 T O ] U T ,
where
X 12 = V Φ ( V 1 T X 0 T 1 ) ( Σ 1 ) 2 V 1 T X 0 T 2 V 2 T X 0 T 1 ( Ω 1 ) 2 G T T
with X 0 = A 1 T C 2 C 1 T A 2 B 1 D 2 T + D 1 B 2 T , Φ = ( φ i j ) R s × t , φ i j = 1 σ i 2 + ω j 2 , 1 i s , 1 j t , and G R ( r s ) × ( n r t ) being an arbitrary matrix.
Hu and Yuan have considered the symmetric solutions of (1) on a subspace Ω [45]. Let S R Ω n × n be the set of all n × n symmetric matrices on subspace Ω , where
Ω = { z R n G z = O , G R n × n } .
The necessary and sufficient conditions for the system (1) to have a solution in S R Ω n × n and also an expression for the solution X are obtained. Additionally, the associated optimal approximation problem to a given matrix E R n × n is discussed, and the optimal solution is elucidated.
Theorem 37
( S R Ω solutions for (1) over R . [45]). Given A , C R p × n and B , D R n × q . Assume that the SVD of G is given by
G = U 0 Σ O O O V 0 T ,
where Σ = diag ( γ 1 , , γ s ) > 0 , s = rank ( G ) , U 0 = [ U 01 , U 02 ] , V 0 = [ V 01 , V 02 ] are orthogonal matrices with U 01 , V 01 R n × s and U 02 , V 02 R n × ( n s ) . Let
A V 0 = A 1 , C V 0 = B 1 , V 0 T B = B 1 , V 0 T D = D 1 , P 1 = [ I s , O ] , P 2 = [ O , I n s ] , Q 1 = I s O , Q 2 = O I n s , P 1 L A 1 = A 2 , R B 1 Q 1 = B 2 , P 2 L A 1 = A 3 , R B 1 Q 2 = B 3 , G = P 2 A 1 C 1 Q 2 + A 3 D 1 B 1 Q 2 ( P 2 A 1 C 1 Q 2 + A 3 D 1 C 1 Q 2 ) T , L = L B 3 A 3 A 3 , P = 1 2 G ( I n s A 3 A 3 ) 1 2 G ( Q + Q T ) A 3 A 3 , Q = 2 L L B 3 G + ( I n s L L B 3 ) G L L .
( a ) The system (1) is solvable over S R Ω n × n if and only if
A 1 A 1 C 1 = C 1 , D 1 B 1 B 1 = D 1 , A 1 D 1 = C 1 B 1 , R A 3 G R A 3 = 0 , L B 3 G L B 3 = 0 , [ A 3 , B 3 ] [ A 3 , B 3 ] T A 3 = A 3 .
In this case, the solution set S E can be expressed as
S E = X R n × n | X = V 0 X 11 X 12 X 21 X 22 V 0 T ,
where
X 11 = P 1 A 1 C 1 Q 1 + A 2 D 1 B 1 Q 1 + A 2 A 3 P B 3 B 2 + A 2 A 3 L L W L L T A 3 A 3 B 3 B 2 + A 2 Z B 2 A 2 A 3 A 3 Z B 3 B 2 , X 12 = P 1 A 1 C 1 Q 2 + A 2 D 1 B 1 Q 2 + A 2 A 3 P B 3 B 3 + A 2 A 3 L L W L L T A 3 A 3 B 3 B 3 + A 2 Z B 3 A 2 A 3 A 3 Z B 3 , X 21 = P 2 A 1 C 1 Q 1 + A 3 D 1 B 1 Q 1 + A 3 A 3 P B 3 C 2 + A 3 A 3 L L W L L T A 3 A 3 B 3 B 2 + A 3 Z B 2 A 3 A 3 A 3 Z B 3 B 2 , X 22 = P 2 A 1 C 1 Q 2 + A 3 D 1 B 1 Q 2 + A 3 A 3 P B 3 B 3 + A 3 A 3 L L W L L T A 3 A 3 B 3 B 3 ,
W , Z are arbitrary matrices with W = W T .
( b ) For E R n × n , let
V 0 T E V 0 = E 11 E 12 E 21 E 22 ,
where E 11 R s × s , E 22 R ( n s ) × ( n s ) . The optimal approximation problem | | E X | | = min X S E has the unique solution X ^ given by
X ^ = V 0 X 11 X 12 X 21 X 22 V 0 T ,
where X 11 , X 12 , X 21 , and X 22 are given by (13), with W = W T , and Z determined as the unique solution of ( 44 ) in [45].
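Before turning to the constrained cases, it is worth recalling the unconstrained baseline used throughout: when (1) is consistent, X 0 = A † C + L A D B † is a particular solution (this X 0 reappears in Theorem 45), and consistency is characterized by A A † C = C , D B † B = D , A D = C B . A minimal NumPy sketch, with hypothetical random data built to be consistent, checks these conditions and the particular solution:

```python
import numpy as np

rng = np.random.default_rng(0)

# Build a consistent system AX = C, XB = D from a known X_true (hypothetical data).
n = 6
X_true = rng.standard_normal((n, n))
A = rng.standard_normal((4, n))
B = rng.standard_normal((n, 3))
C = A @ X_true
D = X_true @ B

Ap = np.linalg.pinv(A)   # Moore-Penrose inverse A^+
Bp = np.linalg.pinv(B)
LA = np.eye(n) - Ap @ A  # L_A = I - A^+ A

# Classical solvability conditions: A A^+ C = C, D B^+ B = D, A D = C B.
assert np.allclose(A @ Ap @ C, C)
assert np.allclose(D @ Bp @ B, D)
assert np.allclose(A @ D, C @ B)

# Particular solution X0 = A^+ C + L_A D B^+ solves both equations.
X0 = Ap @ C + LA @ D @ Bp
assert np.allclose(A @ X0, C)
assert np.allclose(X0 @ B, D)
```

The general solution is then obtained by adding L A Z R B for arbitrary Z , which is exactly the degree of freedom that the structured solution classes below constrain.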

4.2. Various Orthogonal Solutions

This section introduces various orthogonal solutions to the system (1) over R . Wang et al. extended the conditions for various symmetric solutions to the equations A X = C or X B = D to the system (1), providing a series of conclusions. Qiu et al. constructed special matrices and used their EVDs to derive the corresponding conclusions [46,47].
Wang et al. have considered the orthogonal, (skew-)symmetric orthogonal, and least squares (skew-)symmetric orthogonal solutions, together with the necessary and sufficient conditions for (1) to have such solutions and the corresponding expressions [46].
Theorem 38
(Orthogonal solutions for (1) over R . [46]). Given A , C R n × m and B , D R m × n .
( a ) Suppose the GSVD of A and C is
A = U Σ O O O V T , C = U Σ O O O Q T ,
where Σ = diag ( δ 1 , , δ l ) , l = rank ( A ) = rank ( C ) , U = [ U 1 , U 2 ] R n × n ,   V = [ V 1 , V 2 ] , Q = [ Q 1 , Q 2 ] R m × m are orthogonal, U 1 R n × l , V 1 , Q 1 R m × l ,   U 2 R n × ( n l ) , V 2 , Q 2 R m × ( m l ) . Denote
Q T B = B 1 B 2 , V T D = D 1 D 2 ,
where B 1 , D 1 R l × n and B 2 , D 2 R ( m l ) × n . Let the GSVD of B 2 and D 2 be
B 2 = U ˜ Π O O O V ˜ T , D 2 = W ˜ Π O O O V ˜ T ,
where U ˜ , W ˜ R ( m l ) × ( m l ) , V ˜ R n × n are orthogonal, Π R k × k is diagonal. Then, the system (1) has orthogonal solutions if and only if
A A T = C C T , B 1 = D 1 , D 2 T D 2 = B 2 T B 2 .
In this case, the orthogonal solutions can be expressed as
X = V ^ I k + l O O G Q ^ T ,
where
V ^ = V I l O O W ˜ R m × m , Q ^ = Q I l O O U ˜ R m × m
are orthogonal, and G R ( m k l ) × ( m k l ) is an arbitrary orthogonal matrix.
( b ) Let the GSVD of B and D be
B = U Π O O O V T , D = W Π O O O V T ,
where Π = diag ( σ 1 , , σ k ) , k = rank ( B ) = rank ( D ) , U = [ U 1 , U 2 ] , W = [ W 1 , W 2 ] R m × m and V = [ V 1 , V 2 ] R n × n are orthogonal, U 1 , W 1 R m × k , V 1 R n × k , U 2 , W 2 R m × ( m k ) , V 2 R n × ( n k ) . Partition
A W = [ A 1 , A 2 ] , C U = [ C 1 , C 2 ] ,
where A 1 , C 1 R n × k , A 2 , C 2 R n × ( m k ) . Assume the GSVD of A 2 and C 2 is
A 2 = U ˜ Σ O O O V ˜ T , C 2 = U ˜ Σ O O O Q ˜ T ,
where V ˜ , Q ˜ R ( m k ) × ( m k ) , U ˜ R n × n are orthogonal and Σ R l × l is diagonal. Then, the system (1) has orthogonal solutions if and only if
D T D = B T B , A 1 = C 1 , A 2 A 2 T = C 2 C 2 T .
In this case, the orthogonal solutions can be expressed as
X = W ^ I k + l O O H U ^ T ,
where
W ^ = W I k O O W ˜ R m × m , U ^ = U I k O O U ˜ R m × m ,
with arbitrary orthogonal H R ( m k l ) × ( m k l ) .
Remark 13.
Some researchers have also investigated the correction of the coefficient matrices when the system (1) is inconsistent under orthogonal constraints [48], specifically focusing on the optimization problem
min X , E 1 , E 2 ( | | E 1 | | 2 + | | E 2 | | 2 )
subject to
( A + E 1 ) X = C , X ( B + E 2 ) = D , X X T = I .
Theorem 39
(Symmetric orthogonal solutions for (1) over R . [46]). Given A , C R n × m ,   B , D R m × n . The notations Q , D 2 , R 2 , U ˜ , and W ˜ are as defined in Theorem 38.
( a ) Let the symmetric orthogonal solutions of the matrix equation A X = C be described as in
X = V ˜ I O O G Q ˜ T ,
where Q ˜ R m × m and V ˜ R m × m are orthogonal and G R ( m 2 l + r ) × ( m 2 l + r ) is an arbitrary symmetric orthogonal matrix. Partition
Q ˜ T B = B 1 B 2 , V ˜ T D = D 1 D 2 ,
where B 1 , D 1 R ( 2 l r ) × n , B 2 , D 2 R ( m 2 l + r ) × n . Then, the system (1) has symmetric orthogonal solutions if and only if
A A T = C C T , A C T = C A T , B 1 = D 1 , D 2 T D 2 = B 2 T B 2 , D 2 T B 2 = B 2 T D 2 .
In this case, the solutions can be expressed as
X = V ^ I O O G Q ^ T ,
where
V ^ = V I O O W ˜ , Q ^ = Q I O O U ˜ ,
and G R ( m 2 l + r 2 l + r ) × ( m 2 l + r 2 l + r ) is arbitrary symmetric orthogonal.
( b ) Let the symmetric orthogonal solutions of the matrix equation X B = D be described as
X = M ˜ I O O G N ˜ T ,
where M ˜ , N ˜ R m × m are orthogonal and G R ( m 2 k + r ) × ( m 2 k + r ) is symmetric orthogonal. Partition
A M ˜ = [ M 1 , M 2 ] , C N ˜ = [ N 1 , N 2 ] ,
where M 1 , N 1 R n × ( 2 k r ) , M 2 , N 2 R n × ( m 2 k + r ) . Then, the system (1) has symmetric orthogonal solutions if and only if
D T D = B T B , D T B = B T D , M 1 = N 1 , M 2 M 2 T = N 2 N 2 T , M 2 N 2 T = N 2 M 2 T .
In this case, the solutions can be expressed as
X = M ^ I O O H N ^ T ,
where
M ^ = M ˜ I O O W ˜ 1 , N ^ = N ˜ I O O U ˜ 1 ,
and H R ( m 2 k + r 2 k + r ) × ( m 2 k + r 2 k + r ) is an arbitrary symmetric orthogonal matrix.
Theorem 40
(Skew-symmetric orthogonal solutions for (1) over R . [46]). Given A , C R n × 2 m ,   B , D R 2 m × n .
( a ) Suppose the matrix equation A X = C has skew-symmetric orthogonal solutions with the form
X = V ˜ 1 I O O H Q ˜ 1 T ,
where H R 2 r × 2 r is arbitrary skew-symmetric orthogonal, V ˜ 1 , Q ˜ 1 R 2 m × 2 m is orthogonal. Partition
Q ˜ 1 T B = Q 1 Q 2 , V ˜ 1 T D = V 1 V 2 ,
where Q 1 , V 1 R ( 2 m 2 r ) × n , Q 2 , V 2 R 2 r × n . Then, the system (1) has skew-symmetric orthogonal solutions if and only if
A A T = C C T , A C T = C A T , Q 1 = V 1 , Q 2 T Q 2 = V 2 T V 2 , Q 2 T V 2 = V 2 T Q 2 .
In this case, the solutions can be expressed as
X = V ^ 1 I O O J Q ^ 1 T ,
where
V ^ = V ˜ 1 I O O W ˜ , Q ^ = Q ˜ 1 I O O U ˜ ,
and J R 2 k × 2 k is an arbitrary skew-symmetric orthogonal matrix.
( b ) Suppose the matrix equation X B = D has skew-symmetric orthogonal solutions with the form
X = W ˜ I O O K U ˜ T ,
where K R 2 p × 2 p is arbitrary skew-symmetric orthogonal, W ˜ , U ˜ R 2 m × 2 m are orthogonal. Partition
A W ˜ = [ W 1 , W 2 ] , C U ˜ = [ U 1 , U 2 ] ,
where W 1 , U 1 R n × ( 2 m 2 p ) , W 2 , U 2 R n × 2 p . Then, the system (1) has skew-symmetric orthogonal solutions if and only if
D T D = B T B , D T B = B T D , W 1 = U 1 , W 2 W 2 T = U 2 U 2 T , U 2 W 2 T = W 2 U 2 T .
In this case, the solutions can be expressed as
X = W ^ 1 I O O J U ^ 1 T ,
where
W ^ 1 = W ˜ I O O W ˜ 1 , U ^ 1 = I O O U ˜ 1 T U ˜ ,
and J R 2 q × 2 q is an arbitrary skew-symmetric orthogonal matrix.
Theorem 41
(Least squares (skew-)symmetric orthogonality solutions for (1) over R . [46]). Given A , C R n × 2 m and B , D R 2 m × n , denote
A T C + D B T = K , 1 2 ( K T K ) = T , 1 2 ( K T + K ) = N ,
Let the EVDs of T and N be
T = E Λ O O O E T , N = M Σ O O O M T ,
with Λ = diag ( Λ 1 , , Λ l ) , Λ i = 0 α i α i 0 , α i > 0 , i = 1 , , l , 2 l = rank ( T ) , Σ = diag ( Σ ′ , Σ ″ ) , Σ ′ = diag ( λ 1 , , λ s ) , λ 1 ≥ … ≥ λ s > 0 , Σ ″ = diag ( λ s + 1 , , λ t ) , λ t ≤ … ≤ λ s + 1 < 0 , t = rank ( N ) .
( a ) Then, the least squares symmetric orthogonal solutions of the system (1) can be expressed as
X = M I ¯ O O L M T ,
with I ¯ = I s O O I t s and L R ( n t ) × ( n t ) being an arbitrary symmetric orthogonal matrix.
( b ) Then, the least squares skew-symmetric orthogonal solutions of the system (1) can be expressed as
X = E I ^ O O G E T ,
with I ^ = diag ( I ^ 1 , , I ^ l ) , I ^ i = 0 1 1 0 , for i = 1 , , l and G R ( 2 m 2 l ) × ( 2 m 2 l ) is an arbitrary skew-symmetric orthogonal matrix.
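The recipe in Theorem 41(a) is easy to exercise numerically: form K = A T C + D B T , take its symmetric part N , and replace each nonzero eigenvalue of N by its sign; the kernel of N carries an arbitrary symmetric orthogonal block, chosen here as the identity. A minimal NumPy sketch with hypothetical random data (generically N is nonsingular, so the arbitrary block is empty):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
A = rng.standard_normal((3, n))
C = rng.standard_normal((3, n))
B = rng.standard_normal((n, 3))
D = rng.standard_normal((n, 3))

# K = A^T C + D B^T; N = (K + K^T)/2 is its symmetric part.
K = A.T @ C + D @ B.T
N = 0.5 * (K + K.T)

# EVD of N; replace each nonzero eigenvalue by its sign (identity on the kernel).
lam, M = np.linalg.eigh(N)
s = np.where(np.abs(lam) < 1e-12, 1.0, np.sign(lam))
X = M @ np.diag(s) @ M.T

assert np.allclose(X, X.T)                 # symmetric
assert np.allclose(X @ X.T, np.eye(n))     # orthogonal
```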
Remark 14.
Theorems 38–41 treat the system A X = C , X B = D as an extension of the single equation A X = C or X B = D : each proof starts from the assumption that one of the two equations already admits a solution of the required type.
Qiu et al. consider the least squares orthogonal, symmetric orthogonal, and symmetric idempotent solutions of (1), together with their P-commuting counterparts [47].
Theorem 42
(Least squares orthogonal, symmetric orthogonal, and symmetric idempotence solutions for (1) over R . [47]). Assume that A , C R p × n , B , D R n × q .
( a ) Let W 1 = A T C + D B T and let the SVD of W 1 be
W 1 = U Σ r O O O V T ,
where U , V R n × n are orthogonal matrices, Σ r = diag ( σ 1 , , σ r ) , r = rank ( W 1 ) . Then, the least squares orthogonal solutions to (1) satisfy
X = U I r O O G V T ,
where G R ( n r ) × ( n r ) is arbitrary orthogonal.
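Part (a) is an orthogonal-Procrustes-type computation: taking G = I, the solution collapses to X = U V^T from the SVD of W 1 = A T C + D B T . A minimal NumPy sketch (the random data are hypothetical) verifies orthogonality and checks optimality against a random orthogonal competitor, using the fact that for orthogonal X both ‖AX‖ and ‖XB‖ are constant, so the objective is minimized by maximizing tr ( X T W 1 ) :

```python
import numpy as np

rng = np.random.default_rng(2)
n, p, q = 5, 3, 4
A = rng.standard_normal((p, n))
C = rng.standard_normal((p, n))
B = rng.standard_normal((n, q))
D = rng.standard_normal((n, q))

# SVD of W1 = A^T C + D B^T; with G = I the solution is the Procrustes choice U V^T.
W1 = A.T @ C + D @ B.T
U, S, Vt = np.linalg.svd(W1)
X = U @ Vt
assert np.allclose(X @ X.T, np.eye(n))  # orthogonal

def f(Y):
    # Least squares objective ||AY - C||^2 + ||YB - D||^2.
    return np.linalg.norm(A @ Y - C) ** 2 + np.linalg.norm(Y @ B - D) ** 2

Q, _ = np.linalg.qr(rng.standard_normal((n, n)))  # random orthogonal competitor
assert f(X) <= f(Q) + 1e-9
```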
( b ) Denote W 1 = A T C + D B T and W 2 = W 1 + W 1 T , where W 2 is an orthogonal matrix. Let the EVD of W 2 be given by
W 2 = U diag ( λ 1 I l 1 , , λ t I l t ) U T ,
where U R n × n is an orthogonal matrix, j = 1 t l j = n . Then, the least squares symmetric orthogonal solutions to (1) are expressed as
X = U I n s l s G l s I n n s U T ,
where G l s R l s × l s is arbitrary symmetric orthogonal.
( c ) Denote W 3 = A T A + B B T 2 ( A T C + D B T ) and W 4 = W 3 + W 3 T . Let the EVD of W 4 be given by
W 4 = U diag ( λ 1 I l 1 , , λ t I l t ) U T ,
where U R n × n , j = 1 t l j = n . Then, the least squares symmetric idempotent solutions to (1) are expressed as
X = U I n s l s G l s I n n s U T ,
with arbitrary symmetric idempotent G l s R l s × l s .
They further generalize these results to the corresponding P-commuting constraints, where P R n × n is a given symmetric matrix. Let the EVD of P be
P = H diag ( λ ˜ 1 I k 1 , , λ ˜ p I k p ) H T ,
where k 1 + + k p = n and H R n × n is an orthogonal matrix. A matrix X commutes with P (i.e., P X = X P ) if and only if
X = H diag X 1 , , X p H T ,
where X i R k i × k i for i = 1 , , p .
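The block-diagonal characterization (14) is straightforward to verify numerically: in the eigenbasis H of P , any matrix that is block diagonal with blocks sized by the eigenvalue multiplicities k i commutes with P , while a generic matrix does not. A small NumPy sketch with hypothetical data, multiplicities ( k 1 , k 2 ) = ( 2 , 3 ) :

```python
import numpy as np

rng = np.random.default_rng(3)

# Symmetric P = H diag(l1*I_2, l2*I_3) H^T for an orthogonal H (hypothetical data).
H, _ = np.linalg.qr(rng.standard_normal((5, 5)))
P = H @ np.diag([2.0, 2.0, -1.0, -1.0, -1.0]) @ H.T

# X = H diag(X1, X2) H^T with X1 of size 2x2 and X2 of size 3x3 commutes with P.
X1 = rng.standard_normal((2, 2))
X2 = rng.standard_normal((3, 3))
Xblk = np.zeros((5, 5))
Xblk[:2, :2] = X1
Xblk[2:, 2:] = X2
X = H @ Xblk @ H.T
assert np.allclose(P @ X, X @ P)

# A generic matrix (not block diagonal in H's basis) fails to commute.
Y = rng.standard_normal((5, 5))
assert not np.allclose(P @ Y, Y @ P)
```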
Theorem 43
(Least squares orthogonality, symmetric orthogonality, and symmetric idempotence solutions commuting with P for (1) over R . [47]). Assume that A , C R p × n , B , D R n × q .
( a ) Suppose that W ¯ = H T W 1 H with W 1 = A T C + D B T . Partition the matrix W ¯ = W i j conforming to (14), where W i j R k i × k j . Let the SVD of the matrix W i i be
W i i = U i diag ( Σ r i ( i ) , O k i r i ) V i T ,
where U i R k i × k i , V i R k i × k i , Σ r i ( i ) = diag ( σ 1 ( i ) , , σ r i ( i ) ) , r i = rank ( W i i ) . Then, the least squares orthogonal solutions commuting with P to (1) are
X = H diag X 1 , , X p H T ,
where X i satisfies
X i = U i I r i O O G k i r i V i T ,
with arbitrary orthogonal G k i r i R ( k i r i ) × ( k i r i ) .
( b ) Denote by W ¯ = H T W 1 H with W 1 = A T C + D B T , and partition the matrix W ¯ = W i j conforming to (14), where W i j R k i × k j . Let the EVD of the matrix W i i + W i i T be
W i i + W i i T = U i diag ( λ 1 ( i ) I l 1 ( i ) , , λ t i ( i ) I l t i ( i ) ) U i T ,
where U i R k i × k i is orthogonal, k i = j = 1 t i l j ( i ) . The least squares symmetric orthogonal solutions commuting with P to (1) are
X = H diag X 1 , , X p H T ,
where X i satisfies
X i = U i I n s i 1 ( i ) G l s i ( i ) I k i n s i ( i ) U i T ,
where G l s i ( i ) is an arbitrary symmetric orthogonal matrix with order l s i ( i ) .
( c ) Denote by W ˜ = H T W 3 H with W 3 = A T A + B B T 2 ( A T C + D B T ) , and partition the matrix W ˜ = W i j conforming to (14), where W i j R k i × k j . Let the EVD of the matrix W i i + W i i T be
W i i + W i i T = U i diag ( λ 1 ( i ) I l 1 ( i ) , , λ t i ( i ) I l t i ( i ) ) U i T ,
where U i R k i × k i is an orthogonal matrix, k i = j = 1 t i l j ( i ) . The least squares symmetric idempotent solutions commuting with P to (1) are
X = H diag X 1 , , X p H T ,
where X i satisfies
X i = U i I n s i 1 ( i ) G l s i ( i ) O k i n s i ( i ) U i T ,
with arbitrary symmetric idempotent G l s i ( i ) R l s i ( i ) × l s i ( i ) .

4.3. Unitary Solutions

This section presents the solvability conditions for the system (1) with the constraint X X = I p . These conditions are derived by applying the EVD and SVD of matrices. The general solutions to these matrix equations are also provided. Furthermore, the associated optimal approximation problems for the given matrices are discussed, and the optimal approximate solutions are derived [49].
Theorem 44
(Unitary solutions for (1) over C . [49]). Suppose that A C p × m , B C n × q , C C p × n , D C m × q with m n , and the SVD of D B is given by
D B = U 1 Σ 1 O O O P 1 ,
where Σ 1 = diag ( σ 1 ( 1 ) , , σ r 1 ( 1 ) ) , r 1 = rank ( D B ) , U 1 = [ U 11 , U 12 ] C m × m , P 1 = [ P 11 , P 12 ] C n × n with U 11 C m × r 1 , P 11 C n × r 1 . Let the matrices A i and C i , i = 1 , 2 , be given by A U 1 = [ A 1 , A 2 ] , C P 1 = [ C 1 , C 2 ] , and the SVD of A 2 be
A 2 = P 2 Σ 2 0 0 0 U 2 = P 21 Σ 2 U 21 ,
where Σ 2 = diag ( σ 1 ( 2 ) , , σ r 2 ( 2 ) ) , r 2 = rank ( A 2 ) , P 2 = [ P 21 , P 22 ] C p × p , U 2 = [ U 21 , U 22 ] C ( m r 1 ) × ( m r 1 ) with P 21 C p × r 2 and U 21 C ( m r 1 ) × r 2 .
( a ) Then, (1) is solvable for unitary matrices if and only if
B B = D D , A 1 = C 1 , A 2 A 2 C 2 = C 2 , I n r 1 C 2 ( A 2 A 2 ) C 2 O , rank I n r 1 C 2 ( A 2 A 2 ) C 2 n rank ( D B ) r 2 .
In this case, let the EVD of I p r 1 C 2 ( A 2 A 2 ) C 2 be given by
I n r 1 C 2 ( A 2 A 2 ) C 2 = Q 2 Λ 2 O O O Q 2 = Q 21 Λ 2 Q 21 ,
where Λ 2 = diag ( λ 1 ( 2 ) , , λ s 2 ( 2 ) ) , s 2 = rank ( I n r 1 C 2 ( A 2 A 2 ) C 2 ) , Q 2 = [ Q 21 , Q 22 ] C ( n r 1 ) × ( n r 1 ) , Q 21 C ( n r 1 ) × s 2 . The unitary solution set of (1) is
S 2 = X = U 1 I r 1 O O A 2 C 2 + U 22 K 2 Λ 2 1 2 Q 21 P 1 | K 2 C ( m r 1 r 2 ) × s 2 is unitary .
( b ) Let E C m × n and partition U 1 E P 1 as in
U 1 E P 1 = U 11 E P 11 U 11 E P 12 U 12 E P 11 U 12 E P 12 = N 11 N 12 N 21 N 22 .
Let the SVD of U 22 ( N 22 A 2 C 2 ) Q 21 Λ 2 1 2 be
U 22 ( N 22 A 2 C 2 ) Q 21 Λ 2 1 2 = U 3 Σ 3 O O O P 3 ,
where Σ 3 = diag ( σ 1 ( 3 ) , , σ r 3 ( 3 ) ) , r 3 = rank ( U 22 ( N 22 A 2 C 2 ) Q 21 Λ 2 1 2 ) , U 3 = [ U 31 , U 32 ] C ( m r 1 r 2 ) × ( m r 1 r 2 ) , P 3 = [ P 31 , P 32 ] C s 2 × s 2 with U 31 C ( m r 1 r 2 ) × r 3 and P 31 C s 2 × r 3 . If the conditions (15) are satisfied, then the solution of | | E X | | = min X S 2 is given by
X ^ = U 1 I r 1 O O A 2 C 2 + U 22 K 2 Λ 2 1 2 Q 21 P 1 ,
where
K 2 = U 3 I r 3 O O H K 2 P 3 ,
with arbitrary unitary H K 2 C ( m r 1 r 2 r 3 ) × ( s 2 r 3 ) .
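Two of the solvability conditions in part (a) are easy to sanity-check numerically in the square case m = n : if X is unitary and C = A X , D = X B , then C C ∗ = A A ∗ and D ∗ D = B ∗ B . A minimal NumPy sketch with hypothetical random complex data:

```python
import numpy as np

rng = np.random.default_rng(4)
m = 4

# Random unitary X via QR of a complex Gaussian matrix (hypothetical data).
Z = rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))
X, _ = np.linalg.qr(Z)
assert np.allclose(X.conj().T @ X, np.eye(m))

A = rng.standard_normal((3, m)) + 1j * rng.standard_normal((3, m))
B = rng.standard_normal((m, 2)) + 1j * rng.standard_normal((m, 2))
C = A @ X
D = X @ B

# Necessary conditions: C C* = A A* and D* D = B* B, since X X* = X* X = I.
assert np.allclose(C @ C.conj().T, A @ A.conj().T)
assert np.allclose(D.conj().T @ D, B.conj().T @ B)
```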

4.4. Re-nnd and Re-pd Solutions and Inequality Constraints

This section introduces the relevant conclusions regarding the solutions of the system (1) with inequality constraints, as well as the Re-pd and Re-nnd solutions, using the GSVD [50,51].
Recently, Liao et al. considered the system (1) with the inequality constraint G X + X G ≥ ( > ) H [50].
Theorem 45
(General solutions with G X + X G ( > ) H constraint for (1) over C . [50]). Given matrices A C p × m , B C n × q , C C p × n , D C m × q , G C m × n , and H C n × n . Let X 0 = A C + L A D B , H ¯ = G X 0 + X 0 G H , G ¯ = G L A , V = H ¯ K 0 and P S = [ G ¯ , R B ] [ G ¯ , R B ] . The EVDs of I n P S , P S R B and P S G ¯ G ¯ can be given by
I n P S = U I g O O O U = U 1 U 1 , P S R B = E I a O O O E = E 1 E 1 , P S G ¯ G ¯ = F I b O O O F = F 1 F 1 ,
where g = rank ( I n P S ) , a = rank ( P S R B ) , b = rank ( P S G ¯ G ¯ ) and U 1 C n × g , E 1 C n × a , F 1 C n × b are unitary matrices. The GSVD of E 1 and F 1 is
E 1 = J Σ 1 P , F 1 = J Σ 2 Q ,
where J C n × n is a nonsingular matrix, P C a × a , Q C b × b are unitary matrices, and
Σ 1 = I O O Λ O O O O a s s k a p k , Σ 2 = O O Γ O O I O O , a s s k a p k a s s s b s
k = rank [ E 1 , F 1 ] = a + b s , and Λ = diag ( λ 1 , , λ s ) , Γ = diag ( γ 1 , , γ s ) with 1 > λ 1 λ s > 0 , 0 < γ 1 γ s < 1 , λ i 2 + γ i 2 = 1 , i = 1 , , s . Partition J V J into
J V J = V 11 V 12 V 13 V 14 V 12 V 22 V 23 V 24 V 13 V 23 V 33 V 34 V 14 V 24 V 34 V 44 . a s s k a p k a s s k a p k
( a ) The matrix inequality G X + X G ≥ H subject to (1) has a general solution if and only if the conditions
A A C = C , D B B = D , A D = C B , ( I n P S ) H ¯ ( I n P S ) 0 , R ( ( I n P S ) H ¯ ) = R ( ( I n P S ) H ¯ ( I n P S ) ) , V 11 V 12 V 12 V 22 O , V 22 V 23 V 23 V 33 O
hold. In this case, the general solution of G X + X G ≥ H subject to (1) can be expressed as
X = X 0 + L A W R B ,
where W is
W = G ¯ ( Φ + L L S W L L G ¯ G ¯ ) R B + M G ¯ G ¯ M R B , Φ = 1 2 ( K H ¯ ) ( 2 I p G ¯ ) + 1 2 ( Ψ Ψ ) G ¯ G ¯ , Ψ = 2 L B B ( K H ¯ ) + ( I n L B B ) ( K H ¯ ) L L , K = K 0 + P S H ^ P S , K 0 = H ¯ ( I n P S ) [ ( I n P S ) H ^ ( I n P S ) ] ( I n P S ) H ¯ , H ^ = ( J ) 1 S ( H ^ 13 ) S ( H ^ 13 ) N 1 N 1 S ( H ^ 13 ) N 2 + N 1 S ( H ^ 13 ) N 1 J 1 , S ( H ^ 13 ) = V 11 V 12 H ^ 13 V 12 V 22 V 23 H ^ 13 V 23 V 33 , H ^ 13 = V 12 V 22 V 23 + ( V 11 V 12 V 22 V 12 ) 1 2 N ( V 33 V 23 V 22 V 23 ) 1 2 ,
with L = B B G ˜ , arbitrary M C m × n and N 1 C k × ( n k ) , arbitrary anti-Hermitian S W C n × n , arbitrary Hermitian non-negative definite N 2 C ( n k ) × ( n k ) .
( b ) The matrix inequality G X + X G > H subject to (1) has a general solution if and only if the conditions
A A C = C , D B B = D , A D = C B , U 1 H ¯ U 1 > O , V 11 V 12 V 12 V 22 > O , V 22 V 23 V 23 V 33 > O
hold. In this case, the general solution of G X + X G > H subject to (1) can be expressed as
X = X 0 + L A W R B ,
where
W = G ¯ ( Φ + L L S W L L G ¯ G ¯ ) R B + M G ¯ G ¯ M R B , Φ = 1 2 ( K H ¯ ) ( 2 I p G ¯ ) + 1 2 ( Ψ Ψ ) G ¯ G ¯ , Ψ = 2 L B B ( K H ¯ ) + ( I n L B B ) ( K H ¯ ) L L , K = K 0 + P S H ^ P S , K 0 = H ¯ ( I n P S ) [ ( I n P S ) H ^ ( I n P S ) ] ( I n P S ) H ¯ , H ^ = ( J ) 1 S ( H ^ 13 ) N 1 N 1 N 2 + N 1 S ( H ^ 13 ) 1 N 1 J 1 , S ( H ^ 13 ) = V 11 V 12 H ^ 13 V 12 V 22 V 23 H ^ 13 V 23 V 33 , H ^ 13 = V 12 V 22 1 V 23 + ( V 11 V 12 V 22 1 V 12 ) 1 2 N ( V 33 V 23 V 22 1 V 23 ) 1 2 ,
with L = B B G ˜ , arbitrary M C m × n and N 1 C k × ( n k ) , arbitrary anti-Hermitian S W C n × n , arbitrary Hermitian non-negative definite N 2 C ( n k ) × ( n k ) .
Yuan et al. expanded upon the above research by deriving necessary and sufficient conditions for the system (1) to have Re-nnd and Re-pd solutions [51]. Additionally, explicit representations of the general Re-nnd and Re-pd solutions are provided when the stated conditions are satisfied.
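Recall that a matrix X is Re-nnd (Re-pd) when its Hermitian part ( X + X ∗ ) / 2 is positive semidefinite (positive definite). A small NumPy helper, with hypothetical test matrices, makes the definition concrete; note that any sum of a positive semidefinite Hermitian matrix and an anti-Hermitian matrix is Re-nnd:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 4

def re_part(X):
    # Hermitian ("real") part of X.
    return 0.5 * (X + X.conj().T)

def is_re_nnd(X, tol=1e-12):
    # X is Re-nnd when its Hermitian part is positive semidefinite.
    return bool(np.all(np.linalg.eigvalsh(re_part(X)) >= -tol))

# H + S with H Hermitian positive semidefinite and S anti-Hermitian is Re-nnd.
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
H = M @ M.conj().T                    # Hermitian positive semidefinite
S = M - M.conj().T                    # anti-Hermitian
assert is_re_nnd(H + S)
assert not is_re_nnd(-H - np.eye(n))  # Hermitian part is negative definite
```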
Theorem 46
(Re-pd and Re-nnd solutions for (1) over C . [51]). For given matrices A C p × n , B C n × q , C C p × n and D C n × q , denote A C + L A D B , B B L A , R L B B and X 0 + X 0 K 0 by X 0 , L, G and J, respectively. Suppose that the EVDs of G, A A G and B B G can be given by
G = U I g O O O U = U 1 U 1 , A A G = P I a O O O P = P 1 P 1 , B B G = Q I b O O O Q = Q 1 Q 1 ,
where g = rank ( G ) , a = rank ( A A G ) , b = rank ( B B G ) , U 1 C n × g , P 1 C n × a and Q 1 C n × b . The GSVD of the matrices P 1 and Q 1 is
P 1 = Υ Σ 1 E ˜ , Q 1 = Υ Σ 2 F ˜ ,
where Υ C n × n is a nonsingular matrix and E ˜ C a × a , F ˜ C b × b are unitary matrices, and
Σ 1 = I O O Λ O O O O a s s k a n k , Σ 2 = O O Δ O O I O O a s s k a n k , a s s s b s
with k = rank [ P 1 , Q 1 ] = a + b s , Λ = diag ( λ 1 , , λ s ) , Δ = diag ( δ 1 , , δ s ) , 1 > λ 1 λ 2 λ s > 0 , 0 < δ 1 δ 2 δ s < 1 , λ i 2 + δ i 2 = 1 , i = 1 , , s , and the partition of the matrix Υ J Υ is the form of
Υ Υ = J 11 J 12 J 13 J 14 J 12 J 22 J 23 J 24 J 13 J 23 J 33 J 34 J 14 J 24 J 34 J 44 a s s k a n k . a s s k a n k
( a ) The system (1) has a Re-nnd solution if and only if
A A C = C , D B B = D , A D = C B , G ( X 0 + X 0 ) G 0 , H ( G ( X 0 + X 0 ) ) = H ( G ( X 0 + X 0 ) G ) , J 11 J 12 J 12 J 22 O , J 22 J 23 J 23 J 33 O .
In this case, the general Re-nnd solution of (1) can be expressed as
X = X 0 + L A ( Θ + L L S K L L L A ) R B ,
where K 0 , Θ, Ψ, H, M ( H 13 ) and H 13 are given by
K 0 = ( X 0 + X 0 ) G [ G ( X 0 + X 0 ) G ] G ( X 0 + X 0 ) , Θ = 1 2 [ ( I n G ) H ( I n G ) J ] ( 2 I n L A ) + 1 2 ( Ψ Ψ ) L A , Ψ = 2 L B B [ ( I n G ) H ( I n G ) J ] + ( I n L B B ) [ ( I n G ) H ( I n G ) J ] L L , H = ( Υ ) 1 M ( H 13 ) M ( H 13 ) S S M ( H 13 ) T + S M ( H 13 ) S Υ 1 , M ( H 13 ) = J 11 J 12 H 13 J 12 J 22 J 23 H 13 J 23 J 33 , H 13 = J 12 J 22 J 23 + ( J 11 J 12 J 22 J 12 ) 1 2 N ( J 33 J 23 J 22 J 23 ) 1 2 ,
with arbitrary S C k × ( n k ) , S K C n × n satisfying S K = S K , arbitrary Hermitian non-negative definite T C ( n k ) × ( n k ) and arbitrary contraction N C ( a s ) × ( k a ) .
( b ) The system (1) has a Re-pd solution if and only if
A A C = C , D B B = D , A D = C B , U 1 ( X 0 + X 0 ) U 1 > O , J 11 J 12 J 12 J 22 > O , J 22 J 23 J 23 J 33 > O .
In this case, the general Re-pd solution of (1) can be expressed as
X = X 0 + L A ( Θ + L L S K L L L A ) R B ,
where K 0 , Θ, Ψ, H, M ( H 13 ) , and H 13 are, respectively, given by
K 0 = ( X 0 + X 0 ) G [ G ( X 0 + X 0 ) G ] G ( X 0 + X 0 ) , Θ = 1 2 [ ( I n G ) H ( I n G ) J ] ( 2 I n L A ) + 1 2 ( Ψ Ψ ) L A , Ψ = 2 L B B [ ( I n G ) H ( I n G ) J ] + ( I n L B B ) [ ( I n G ) H ( I n G ) J ] L L , H = ( Υ ) 1 M ( H 13 ) S S T + S ( M ( H 13 ) ) 1 S Υ 1 , M ( H 13 ) = J 11 J 12 H 13 J 12 J 22 J 23 H 13 J 23 J 33 , H 13 = J 12 J 22 1 J 23 + ( J 11 J 12 J 22 1 J 12 ) 1 2 N ( J 33 J 23 J 22 1 J 23 ) 1 2 ,
with arbitrary S C k × ( n k ) , S K C n × n satisfying S K = S K , arbitrary Hermitian positive definite T C ( n k ) × ( n k ) and arbitrary strict contraction N C ( a s ) × ( k a ) .

4.5. Different Types of Reflexive Solutions

Some scholars have considered the (anti-)reflexive solutions of the system (1) and presented the following theorem [17,18,19].
Any nontrivial generalized reflection matrix P C n × n can be expressed in the form
P = U I r O O I n r U ,
where U = [ U 1 , U 2 ] , with U 1 U 2 = O .
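This spectral form makes the (anti-)reflexive structure in Theorem 47 transparent: in the basis U , reflexive matrices are block diagonal ( P X P = X ) and anti-reflexive matrices are block antidiagonal ( P X P = − X ). A minimal NumPy sketch with hypothetical data:

```python
import numpy as np

rng = np.random.default_rng(6)
n, r = 5, 2

# Nontrivial generalized reflection matrix P = U diag(I_r, -I_{n-r}) U*.
Z = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
U, _ = np.linalg.qr(Z)
J = np.diag(np.concatenate([np.ones(r), -np.ones(n - r)]))
P = U @ J @ U.conj().T
assert np.allclose(P, P.conj().T) and np.allclose(P @ P, np.eye(n))

# Block-diagonal X = U diag(H, K) U* is reflexive: P X P = X.
Xr = np.zeros((n, n), dtype=complex)
Xr[:r, :r] = rng.standard_normal((r, r))
Xr[r:, r:] = rng.standard_normal((n - r, n - r))
Xr = U @ Xr @ U.conj().T
assert np.allclose(P @ Xr @ P, Xr)

# Block-antidiagonal X is anti-reflexive: P X P = -X.
Xa = np.zeros((n, n), dtype=complex)
Xa[:r, r:] = rng.standard_normal((r, n - r))
Xa[r:, :r] = rng.standard_normal((n - r, r))
Xa = U @ Xa @ U.conj().T
assert np.allclose(P @ Xa @ P, -Xa)
```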
Theorem 47
((Anti-)reflexive solutions for (1) over C . [17,18,19]). Let A , C C p × n , B , D C n × q and the nontrivial generalized reflection matrix P C n × n be known and X C n × n be unknown. The EVD of P is given by (16). Let
A U = [ A 1 , A 2 ] , C U = [ B 1 , B 2 ] , U B = C 1 C 2 , U D = D 1 D 2 .
( a ) The system (1) has an (anti-)reflexive solution with respect to a nontrivial generalized reflection matrix P if and only if
R A i C i = O , D i L B i = O , A i D i = C i B i , i = 1 , 2 .
In this case, the reflexive solution X with respect to P can be expressed as
X = U H 0 0 K U ,
and the anti-reflexive solution X with respect to P can be expressed as
X = U 0 H K 0 U ,
where
H = A 1 B 1 + L A 1 D 1 C 1 + L A 1 Y 1 R C 1 , K = A 2 B 2 + L A 2 D 2 C 2 + L A 2 Y 2 R C 2 ,
where Y 1 C r × r and Y 2 C ( n r ) × ( n r ) are arbitrary.
( b ) For a given matrix E C n × n , let
U E U = E 11 E 12 E 21 E 22 .
Symbols Φ r and Φ a denote the sets of all reflexive and anti-reflexive solutions of the system A X = C , X B = D , respectively. Then, the approximation problem min X Φ r | | X E | | has a unique solution,
X ^ = U Z 1 O O Z 2 U ,
where
Z 1 = A 1 B 1 + L A 1 D 1 C 1 + L A 1 ( E 11 A 1 B 1 L A 1 D 1 C 1 ) R C 1 , Z 2 = A 2 B 2 + L A 2 D 2 C 2 + L A 2 ( E 22 A 2 B 2 L A 2 D 2 C 2 ) R C 2 .
The approximation problem min X Φ a | | X E | | has a unique solution
X ^ = U O Z 3 Z 4 O U ,
where
Z 3 = A 1 B 1 + L A 1 D 1 C 1 + L A 1 ( E 12 A 1 B 1 L A 1 D 1 C 1 ) R C 1 , Z 4 = A 2 B 2 + L A 2 D 2 C 2 + L A 2 ( E 21 A 2 B 2 L A 2 D 2 C 2 ) R C 2 .
Zhou and Yang considered the existence conditions for the (anti-)Hermitian reflexive solutions, which add the (anti-)Hermitian constraint to the reflexive solutions [52].
Theorem 48
((Anti-)Hermitian reflexive solutions for (1) over C . [52]). Let A , C C p × n , B , D C n × q and the nontrivial generalized reflection matrix P C n × n be known. The EVD of P is given by (16). Denote
A U = [ A 1 , A 2 ] , C U = [ C 1 , C 2 ] , U B = B 1 B 2 , U D = D 1 D 2 ,
where A 1 , C 1 C p × r , A 2 , C 2 C p × ( n r ) , B 1 , D 1 C r × q , B 2 , D 2 C ( n r ) × q . Let
M = ( I r A 1 A 1 ) C 1 , T 1 = A 1 C 1 ( I s M M ) , K 1 = ( I m + T 1 T 1 ) 1 , N = ( I n r A 2 A 2 ) C 2 , T 2 = A 2 C 2 ( I s N N ) , K 2 = ( I m + T 2 T 2 ) 1 , P = ( A 1 C 1 ) ( A 1 C 1 ) = A 1 W 1 + C 1 W 2 , Q = ( A 2 C 2 ) ( A 2 C 2 ) = A 2 W 3 + C 2 W 4 , W 1 = K 1 A 1 ( I r C 1 M ) , W 2 = T 1 K 1 A 1 ( I r C 1 M ) + M , W 3 = K 2 A 2 ( I n r C 2 N ) , W 4 = T 2 K 2 A 2 ( I n r C 2 N ) + N .
( a ) Then, the system (1) has Hermitian reflexive solutions in C n × n if and only if
B 1 A 1 B 1 C 1 D 1 A 1 D 1 C 1 = A 1 B 1 A 1 D 1 C 1 B 1 C 1 D 1 , B 2 A 2 B 2 C 2 D 2 A 2 D 2 C 2 = A 2 B 2 A 2 D 2 C 2 B 2 C 2 D 2 , B 1 , D 1 = B 1 W 1 A 1 + D 1 W 2 A 2 , B 1 W 1 C 1 + D 1 W 2 C 1 , B 2 , D 2 = B 2 W 3 A 2 + D 2 W 4 A 2 , B 2 W 3 C 2 + D 2 W 4 C 2 .
Moreover, the general Hermitian reflexive solution can be expressed as
X = U X 11 O O X 22 U ,
where X 11 C r × r , X 22 C ( n r ) × ( n r ) are
X 11 = X 10 + ( I r P ) G 1 ( I r P ) , X 22 = X 20 + ( I n r Q ) G 2 ( I n r Q ) ,
and
X 10 = B 1 W 1 + D 1 W 2 + ( W 1 B 1 + W 2 D 1 ) ( I r P ) , X 20 = B 2 W 3 + D 2 W 4 + ( W 3 B 2 + W 4 D 2 ) ( I n r Q ) ,
with arbitrary Hermitian G 1 C r × r and G 2 C ( n r ) × ( n r ) .
( b ) Then, the system (1) has anti-Hermitian reflexive solutions in C n × n if and only if
B 1 A 1 B 1 C 1 D 1 A 1 D 1 C 1 = A 1 B 1 A 1 D 1 C 1 B 1 C 1 D 1 , B 2 A 2 B 2 C 2 D 2 A 2 D 2 C 2 = A 2 B 2 A 2 D 2 C 2 B 2 C 2 D 2 , B 1 , D 1 = B 1 W 1 A 1 + D 1 W 2 A 2 , B 1 W 1 C 1 + D 1 W 2 C 1 , B 2 , D 2 = B 2 W 3 A 2 + D 2 W 4 A 2 , B 2 W 3 C 2 + D 2 W 4 C 2 .
Moreover, the general anti-Hermitian reflexive solution can be expressed as
X = U X 11 O O X 22 U ,
where X 11 C r × r , X 22 C ( n r ) × ( n r ) are
X 11 = X 10 + ( I r P ) G 1 ( I r P ) , X 22 = X 20 + ( I n r Q ) G 2 ( I n r Q ) ,
and
X 10 = B 1 W 1 + D 1 W 2 ( W 1 B 1 + W 2 D 1 ) ( I r P ) , X 20 = B 2 W 3 + D 2 W 4 ( W 3 B 2 + W 4 D 2 ) ( I n r Q ) ,
with arbitrary anti-Hermitian G 1 C r × r and G 2 C ( n r ) × ( n r ) .
( c ) Given matrix E C n × n . Let
E = U E 11 E 12 E 21 E 22 U ,
where E 11 C r × r , E 22 C ( n r ) × ( n r ) . If (1) has Hermitian reflexive solutions, then the optimization problem | | X E | | = min has a unique Hermitian reflexive solution X ^ of (1), which can be represented as
X ^ = U X ^ 11 O O X ^ 22 U ,
where
X ^ 11 = X 10 + ( I r P ) G ^ 1 ( I r P ) , X ^ 22 = X 20 + ( I n r Q ) G ^ 2 ( I n r Q ) , G ^ 1 = 1 2 ( E 11 X 10 + ( E 11 X 10 ) ) , G ^ 2 = 1 2 ( E 22 X 20 + ( E 22 X 20 ) ) ,
with X 10 , X 20 given by (17).
( d ) For given matrix E C n × n , U E U is partitioned as (19). If (1) has anti-Hermitian reflexive solutions, then the optimization problem | | X E | | = min has a unique anti-Hermitian reflexive solution X ^ of (1), which can be represented as
X ^ = U X ^ 11 0 0 X ^ 22 U ,
where
X ^ 11 = X 10 + ( I r P ) G ^ 1 ( I r P ) , X ^ 22 = X 20 + ( I n r Q ) G ^ 2 ( I n r Q ) , G ^ 1 = 1 2 ( E 11 X 10 ( E 11 X 10 ) ) , G ^ 2 = 1 2 ( E 22 X 20 ( E 22 X 20 ) ) ,
with X 10 , X 20 given by (18).
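The correction terms G ^ 1 , G ^ 2 in parts (c) and (d) are Frobenius-nearest (anti-)Hermitian approximations: the Hermitian part ½ ( F + F ∗ ) of a matrix F is its closest Hermitian matrix, because F − ½ ( F + F ∗ ) is anti-Hermitian and hence orthogonal (in the trace inner product) to every Hermitian direction. A small NumPy sketch with hypothetical data checks this optimality:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 4
F = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

# The Frobenius-nearest Hermitian matrix to F is its Hermitian part.
G_hat = 0.5 * (F + F.conj().T)
assert np.allclose(G_hat, G_hat.conj().T)

# Optimality: the residual F - G_hat is anti-Hermitian, so for any Hermitian G,
# ||F - G||^2 = ||G_hat - G||^2 + ||F - G_hat||^2 >= ||F - G_hat||^2.
R = F - G_hat
assert np.allclose(R, -R.conj().T)
for _ in range(5):
    W = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    G = 0.5 * (W + W.conj().T)  # a random Hermitian competitor
    assert np.linalg.norm(F - G_hat) <= np.linalg.norm(F - G) + 1e-12
```

The anti-Hermitian part ½ ( F − F ∗ ) plays the same role in part (d).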
Zhou et al. also considered the least squares (anti)-Hermitian reflexive solutions [53].
Theorem 49
(Least squares (anti-)Hermitian reflexive solutions for (1) over C . [53]). Let A , C C p × n , B , D C n × q and the nontrivial generalized reflection matrix P C n × n be known. The EVD of P is given by (16). Denote
A U = [ A 1 , A 2 ] , C U = [ C 1 , C 2 ] , U B = B 1 B 2 , U D = D 1 D 2 .
Σ i ( i = 1 , , 6 ) , P i j , Q i j ( i = 1 , 2 , j = 1 , 2 ) , S i j , T i j ( i = 1 , , 4 , j = 1 , 2 ) , and α i , β j , i = 1 , , r 1 , j = 1 , , r 2 , are given by the SVDs of [ A 1 , B 1 ] , [ A 2 , B 2 ] , [ C 1 , D 1 ] , [ C 2 , D 2 ] , [ C 1 , D 1 ] , [ C 2 , D 2 ] :
A 1 , B 1 = P 1 Σ 1 O O O Q 1 = P 11 , P 12 Σ 1 O O O Q 11 Q 12 = P 11 Σ 1 Q 11 , A 2 , B 2 = P 2 Σ 2 O O O Q 2 = P 21 , P 22 Σ 2 O O O Q 21 Q 22 = P 21 Σ 2 Q 21 , C 1 , D 1 = S 1 Σ 1 O O O T 1 = S 11 , S 12 Σ s O O O T 11 T 12 = S 11 Σ 3 T 11 , C 2 , D 2 = S 2 Σ 4 O O O T 2 = S 21 , S 22 Σ 4 O O O T 21 T 22 = S 21 Σ 4 T 21 , C 1 , D 1 = S 3 Σ 5 O O O T 3 = S 31 , S 32 Σ 5 O O O T 31 T 32 = S 31 Σ 5 T 31 , C 2 , D 2 = S 4 Σ 6 O O O T 4 = S 41 , S 42 Σ 6 O O O T 41 T 42 = S 41 Σ 6 T 41 ,
where P 1 = [ P 11 , P 12 ] , S 1 = [ S 11 , S 12 ] , S 3 = [ S 31 , S 32 ] C r × r , Q 1 = [ Q 11 , Q 12 ] , T 1 = [ T 11 , T 12 ] , T 3 = [ T 31 , T 32 ] C ( m + x ) × ( m + x ) , Σ 1 = diag α 1 , α 2 , , α r 1 , r 1 = rank [ A 1 , B 1 ] ,   Σ 2 = diag β 1 , β 2 , , β r 2 , r 2 = rank [ A 2 , B 2 ] , Σ 3 = diag ( γ 1 , γ 2 , , γ r 3 ) ,   r 3 = rank [ C 1 , D 1 ] , Σ 4 = diag η 1 , η 2 , , η r 4 , r 4 = rank [ C 2 , D 2 ] ,   Σ 5 = diag ζ 1 , ζ 2 , , ζ r 5 ,   r 5 = rank [ C 1 , D 1 ] , Σ 6 = diag ξ 1 , ξ 2 , , ξ r 6 , r 6 = rank [ C 2 , D 2 ] .
( a ) The least squares Hermitian reflexive solutions of the system A X = C , X B = D with respect to P can be expressed as
X = U X 11 O O X 22 U ,
with X 11 C r × r , X 22 C ( n r ) × ( n r ) being Hermitian, given by
X 11 = P 1 Φ 1 P 11 S 11 Σ 3 T 11 Q 11 Σ 1 + Σ 1 Q 11 T 11 Σ 3 S 11 P 11 Σ 1 1 Q 11 T 11 Σ 3 S 11 P 12 P 12 S 11 Σ 3 T 11 Q 11 Σ 1 1 G 1 P 1 , X 22 = P 2 Φ 2 P 21 S 21 Σ 4 T 21 Q 21 Σ 2 + Σ 2 Q 21 T 21 Σ 4 S 21 P 21 Σ 2 1 Q 21 T 21 Σ 4 S 21 P 22 P 22 S 21 Σ 4 T 21 Q 21 Σ 2 1 G 2 P 2 ,
where G 1 C r r 1 × r r 1 and G 2 C n r r 2 × n r r 2 are arbitrary Hermitian matrices, Φ 1 = ϕ i j C r 1 × r 1 , ϕ i j = 1 α i 2 + α j 2 , i , j = 1 , , r 1 , Φ 2 = ϕ i j C r 2 × r 2 , ϕ i j = 1 β i 2 + β j 2 ,   i , j = 1 , , r 2 .
( b ) The least squares anti-Hermitian reflexive solutions of the system (1) with respect to P can be expressed as
X = U X 11 O O X 22 U ,
with X 11 C r × r , X 22 C ( n r ) × ( n r ) being anti-Hermitian, given by
X 11 = P 1 Φ 1 P 11 S 31 Σ 5 T 31 Q 11 Σ 1 Σ 1 Q 11 T 31 Σ 5 S 31 P 11 Σ 1 1 Q 11 T 31 Σ 5 S 31 P 12 P 12 S 31 Σ 5 T 31 Q 11 Σ 1 1 G 1 P 1 , X 22 = P 2 Φ 2 P 21 S 41 Σ 6 T 41 Q 21 Σ 2 Σ 2 Q 21 T 41 Σ 6 S 41 P 21 Σ 2 1 Q 21 T 41 Σ 6 S 41 P 22 P 22 S 41 Σ 6 T 41 Q 21 Σ 2 1 G 2 P 2 ,
where G 1 C r r 1 × r r 1 and G 2 C n r r 2 × n r r 2 are arbitrary anti-Hermitian matrices.
Given E C n × n . Let
U E U = E 11 E 12 E 21 E 22 ,
where E 11 C r × r , E 22 C ( n r ) × ( n r ) and denote
E 11 = X ˜ 111 X ˜ 112 X ˜ 121 X ˜ 122 , E 22 = X ˜ 211 X ˜ 212 X ˜ 221 X ˜ 222 ,
where X ˜ 111 C r 1 × r 1 , X ˜ 211 C r 2 × r 2 .
( c ) The optimization problem | | X E | | = min has a unique solution X ^ , which is the least squares Hermitian reflexive solution of (1) and can be represented as
X ^ = U X ^ 11 O O X ^ 22 U ,
where X ^ 11 C r × r , X ^ 22 C ( n r ) × ( n r ) are Hermitian, with
X ^ 11 = P 1 Φ 1 P 11 S 11 Σ 3 T 11 Q 11 Σ 1 + Σ 1 Q 11 T 11 Σ 3 S 11 P 11 Σ 1 1 Q 11 T 11 Σ 3 S 11 P 12 P 12 S 11 Σ 3 T 11 Q 11 Σ 1 1 1 2 ( X ¯ 122 + X ¯ 122 ) P 1 , X ^ 22 = P 2 Φ 2 P 21 S 21 Σ 4 T 21 Q 21 Σ 2 + Σ 2 Q 21 T 21 Σ 4 S 21 P 21 Σ 2 1 Q 21 T 21 Σ 4 S 21 P 22 P 22 S 21 Σ 4 T 21 Q 21 Σ 2 1 1 2 ( X 222 + X ¯ 222 ) P 2 .
( d ) The optimization problem | | X E | | = min has a unique solution X ^ , which is the least squares anti-Hermitian reflexive solution of (1) and can be represented as
X ^ = U X ^ 11 O O X ^ 22 U ,
where X ^ 11 C r × r , X ^ 22 C ( n r ) × ( n r ) are anti-Hermitian, with
X ^ 11 = P 1 Φ 1 P 11 S 31 Σ 5 T 31 Q 11 Σ 1 Σ 1 Q 11 T 31 Σ 5 S 31 P 11 Σ 1 1 Q 11 T 31 Σ 5 S 31 P 12 P 12 S 31 Σ 5 T 31 Q 11 Σ 1 1 1 2 ( X ¯ 122 X ¯ 122 ) P 1 , X ^ 22 = P 2 Φ 2 P 21 S 41 Σ 6 T 41 Q 21 Σ 2 Σ 2 Q 21 T 41 Σ 6 S 41 P 21 Σ 2 1 Q 21 T 41 Σ 6 S 41 P 22 P 22 S 41 Σ 6 T 41 Q 21 Σ 2 1 1 2 ( X 222 X 222 ) P 2 .
Dong and Wang have studied the system of matrix equations (1) subject to { P , Q , k + 1 } -reflexive and anti-reflexive constraints by reducing it to two simpler cases: k = 1 and k = 2 . They provided the solvability conditions, the general solution to this system, and the least squares solution when (1) is inconsistent [54].
Let P C m × m and Q C n × n be Hermitian and { k + 1 } -potent matrices, that is, P k + 1 = P = P and Q k + 1 = Q = Q . A matrix X C m × n is called { P , Q , k + 1 } -(anti-)reflexive if P X Q = X (or P X Q = X ). For Hermitian P C m × m and Q C n × n , { k + 1 } -potency holds if and only if P and Q are idempotent (i.e., P 2 = P , Q 2 = Q ) when k is odd, or tripotent (i.e., P 3 = P , Q 3 = Q ) when k is even. Moreover, there exist U U m × m and V U n × n such that
P = U I p O U , Q = V I q O V ,
if k is odd, and
P = U I r I p r O U , Q = V I s I q s O V ,
if k is even, where p = rank ( P ) and q = rank ( Q ) .
Theorem 50
( { P , Q , 2 } -(anti-)reflexive solutions for (1) over C . [54]). Given A C l × m , C C l × n , B C n × k , D C m × k . Let P C m × m and Q C n × n be Hermitian and { k + 1 } -potent with k = 1 . For U and V given in (20), let A 1 , C 1 , B 1 , D 1 be defined by
A U = [ A 1 , A 2 ] , C V = [ C 1 , C 2 ] , V B = B 1 B 2 , U D = D 1 D 2 ,
where A 1 C l × p , C 1 C l × q , B 1 C q × k , and D 1 C p × k . Then, we have the following results.
( a ) The system (1) is consistent for { P , Q , 2 } -reflexive X if and only if
A 1 A 1 C 1 = C 1 , D 1 B 1 B 1 = D 1 , A 1 D 1 = C 1 B 1 .
In this case, the general solution is
X = U A 1 C 1 + D 1 B 1 A 1 A 1 D 1 B 1 + ( I A 1 A 1 ) U 1 ( I B 1 B 1 ) O O O V ,
where U 1 C p × q is arbitrary.
( b ) Let S X be the set of all { P , Q , 2 } -reflexive solutions to (1) and E be a given matrix in C m × n . Partition
U E V = E 11 E 12 E 21 E 22 ,
with E 11 C p × q . Then,
X ^ E = min X S X X E
has a unique solution X ^ , which can be expressed as
X ^ = U A 1 C 1 + D 1 B 1 A 1 A 1 D 1 B 1 + ( I A 1 A 1 ) E 11 ( I B 1 B 1 ) O O O V .
( c ) Assume that the SVDs of A 1 and B 1 are expressed as
A 1 = W M 1 O O O Z , B 1 = P N 1 O O O Q ,
where W = [ W 1 , W 2 ] C l × l , Z = [ Z 1 , Z 2 ] C p × p , P = [ P 1 , P 2 ] C q × q , and Q = [ Q 1 , Q 2 ] C k × k are unitary matrices, M 1 = diag ( σ 1 , , σ r 1 ) , r 1 = rank ( A 1 ) , W 1 C l × r 1 , Z 1 C p × r 1 , N 1 = diag ( ρ 1 , , ρ r 2 ) , r 2 = rank ( B 1 ) , P 1 C q × r 2 , Q 1 C k × r 2 . Then, the least norm least squares solution X S L can be expressed as
X = U Z Θ ( M 1 W 1 C 1 P 1 + Z 1 D 1 Q 1 N 1 ) M 1 1 W 1 C 1 P 2 Z 2 D 1 Q 1 N 1 1 Y 4 P O O O V ,
where Θ = ( θ i j ) R r 1 × r 2 , θ i j = 1 σ i 2 + ρ j 2 , and Y 4 C ( p r 1 ) × ( q r 2 ) is an arbitrary matrix.
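The Moore–Penrose structure in Theorem 50(a) is easy to check numerically. The following NumPy sketch (our illustration, not code from [54]) builds a consistent instance, verifies the three solvability conditions, and confirms that the general solution formula satisfies A 1 X = C 1 and X B 1 = D 1 for a random free block U 1 :

```python
import numpy as np

rng = np.random.default_rng(0)

# Build a consistent instance: fix a solution X0 first, then define
# C1 = A1 X0 and D1 = X0 B1, so the conditions of Theorem 50(a) hold.
A1 = rng.standard_normal((4, 3))   # plays A1 in C^{l x p}
B1 = rng.standard_normal((5, 4))   # plays B1 in C^{q x k}
X0 = rng.standard_normal((3, 5))   # a p x q solution
C1, D1 = A1 @ X0, X0 @ B1

Ap, Bp = np.linalg.pinv(A1), np.linalg.pinv(B1)

# Solvability conditions: A1 A1^+ C1 = C1, D1 B1^+ B1 = D1, A1 D1 = C1 B1.
assert np.allclose(A1 @ Ap @ C1, C1)
assert np.allclose(D1 @ Bp @ B1, D1)
assert np.allclose(A1 @ D1, C1 @ B1)

# General solution with an arbitrary free block U1.
U1 = rng.standard_normal((3, 5))
X = Ap @ C1 + D1 @ Bp - Ap @ A1 @ D1 @ Bp \
    + (np.eye(3) - Ap @ A1) @ U1 @ (np.eye(5) - B1 @ Bp)

assert np.allclose(A1 @ X, C1) and np.allclose(X @ B1, D1)
```

The free term vanishes whenever A 1 has full column rank and B 1 full row rank, in which case the solution is unique.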
Theorem 51
( { P , Q , 3 } -(anti-)reflexive solutions for (1) over C . [54]). Given A C l × m , C C l × n , B C n × k , D C m × k , let P C m × m and Q C n × n be Hermitian and { k + 1 } -potent with k = 2 . For U and V given in (20), let A 1 , A 2 , C 1 , C 2 , B 1 , B 2 , D 1 , D 2 be defined by
A U = [ A 1 , A 2 , A 3 ] , C V = [ C 1 , C 2 , C 3 ] , V B = B 1 B 2 B 3 , U D = D 1 D 2 D 3 ,
where A 1 C l × r , A 2 C l × ( p r ) , C 1 C l × s , C 2 C l × ( q s ) , B 1 C s × k , B 2 C ( q s ) × k , D 1 C r × k , and D 2 C ( p r ) × k .
( a ) Then, the system (1) is consistent for { P , Q , 3 } -reflexive X if and only if
A 1 A 1 C 1 = C 1 , D 1 B 1 B 1 = D 1 , A 1 D 1 = C 1 B 1 , A 2 A 2 C 2 = C 2 , D 2 B 2 B 2 = D 2 , A 2 D 2 = C 2 B 2 .
In this case, the general solution is
X = U X 1 O O O X 2 O O O O V ,
where X 1 = A 1 C 1 + D 1 B 1 A 1 A 1 D 1 B 1 + ( I A 1 A 1 ) U 11 ( I B 1 B 1 ) , X 2 = A 2 C 2 + D 2 B 2 A 2 A 2 D 2 B 2 + ( I A 2 A 2 ) U 22 ( I B 2 B 2 ) , and U 11 , U 22 are arbitrary with suitable orders.
( b ) Let S X be the set of all { P , Q , 3 } -reflexive solutions to (1) and let E be a given matrix in C m × n . Partition
U E V = E 11 E 12 E 13 E 21 E 22 E 23 E 31 E 32 E 33
with E 11 C r × s , E 22 C ( p r ) × ( q s ) . Then, X ^ E = min X S X X E has a unique solution X ^ , which can be expressed as
X ^ = U X 1 0 O O X 2 O O O O V ,
where X 1 = A 1 C 1 + D 1 B 1 A 1 A 1 D 1 B 1 + ( I A 1 A 1 ) E 11 ( I B 1 B 1 ) and X 2 = A 2 C 2 + D 2 B 2 A 2 A 2 D 2 B 2 + ( I A 2 A 2 ) E 22 ( I B 2 B 2 ) .
Remark 15.
The { P , Q , 3 } -reflexive least squares problem can be reduced similarly to Theorem 50(c); hence, the conclusion is omitted.

4.6. Different Types of Conjugate Solutions

Chang et al. have presented the ( R , S ) -conjugate solution to the linear equation system (1) [55]. A matrix A C n × n is called an R-conjugate matrix if it satisfies A ¯ = R A R , where R is a nontrivial involution (i.e., R 2 = I , R I ). A matrix A C m × n is called an ( R , S ) -conjugate matrix if it satisfies A ¯ = R A S , where R and S are nontrivial involutions. The sets of R-conjugate and ( R , S ) -conjugate matrices are denoted by C C n × n ( R ) and C C m × n ( R , S ) , respectively. For nontrivial involution matrices R C m × m and S C n × n , there exist decompositions
R = [ P , Q ] I r O O I m r P T Q T , S = [ U , V ] I s O O I n s U T V T .
Denote Γ = [ P , i Q ] , F = [ U , i V ] . The results for the solutions in C C n × n ( R ) and C C m × n ( R , S ) to the system (1) are presented below.
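As a small illustration of the definition above (ours, not from [55]): for real involutions R and S, the matrix A = 1 2 ( M + R M ¯ S ) is ( R , S ) -conjugate for any complex M, since conjugating it reproduces R A S . A minimal NumPy check, with the particular involutions chosen as assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Nontrivial real involutions: R^2 = I with R != I.
R = np.array([[0.0, 1.0], [1.0, 0.0]])   # 2x2 swap matrix
S = np.diag([1.0, -1.0, -1.0])           # 3x3 signature matrix
assert np.allclose(R @ R, np.eye(2)) and not np.allclose(R, np.eye(2))

# Symmetrize an arbitrary complex matrix into the (R,S)-conjugate set:
# A = (M + R conj(M) S)/2 satisfies conj(A) = R A S when R, S are real.
M = rng.standard_normal((2, 3)) + 1j * rng.standard_normal((2, 3))
A = (M + R @ M.conj() @ S) / 2
assert np.allclose(A.conj(), R @ A @ S)
```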
Theorem 52
( ( R , S ) -conjugate solutions for (1) over C . [55]). Given A C p × m ,   B C n × q ,   C C p × n , D C m × q , nontrivial involutions R and S. Suppose that A = A 1 + i A 2 C p × m and B = B 1 + i B 2 C n × q , where
A 1 = 1 2 ( A + A ¯ R ) , A 2 = 1 2 i ( A A ¯ R ) , B 1 = 1 2 ( B + S B ¯ ) , B 2 = 1 2 i ( B S B ¯ ) .
Let
C F = C 1 + i C 2 , Γ D = D 1 + i D 2 ,
where C 1 , C 2 R p × n , D 1 , D 2 R m × q . Denote
F = A 1 Γ A 2 Γ , G = C 1 C 2 , K = F B 1 , F B 2 , L = D 1 , D 2 .
Assume that the SVDs of F R 2 p × m and K R n × 2 q are
F = W D r 1 O O O V ˜ T = W 1 D r 1 V 1 T , K = M D r 2 O O O U ˜ T = M 1 D r 2 U 1 T ,
where
W = W 1 , W 2 , V ˜ = V 1 , V 2 , M = M 1 , M 2 , U ˜ = U 1 , U 2 ,
with W 1 R 2 p × r 1 , V 1 R m × r 1 , M 1 R n × r 2 , U 1 R 2 q × r 2 , D r 1 = diag ( λ 1 , , λ r 1 ) and D r 2 = diag ( μ 1 , , μ r 2 ) .
( a ) Then, the system (1) has a solution in C C m × n ( R , S ) if and only if
F F G = G , L K K = L , F L = G K .
In which case, the general ( R , S ) -conjugate solution to (1) can be represented as
X = Γ F G + V 2 V 2 T L K + V 2 N M 2 T F ,
where N R m r 1 × n r 2 is arbitrary.
( b ) For a given E C m × n , let E 1 = 1 2 ( E + R E ¯ S ) . Denote by S X the ( R , S ) -conjugate solution set of (1). If S X is nonempty, then the approximation problem | | X E | | = min X S X has a unique solution X ^ of the form
X ^ = Γ F G + V 2 V 2 T L K M 1 M 1 T + V 2 V 2 T Γ E 1 F M 2 M 2 T F .
( c ) When (1) does not have an ( R , S ) -conjugate solution, then the least squares solution of (1) can be expressed as
X = Γ V ˜ Φ D r 1 W 1 T G M 1 + V 1 T L U D r 2 D r 1 1 W 1 T G M 2 V 2 T L U 1 D r 2 1 Y 22 M T F ,
where Φ = ( ϕ i j ) R r 1 × r 2 , ϕ i j = 1 λ i 2 + μ j 2 , and Y 22 R m r 1 × n r 2 is an arbitrary matrix. The unique least squares least norm solution of (1) is
X ˜ = Γ V ˜ Φ D r 1 W 1 T G M 1 + V 1 T L U D r 2 D r 1 1 W 1 T G M 2 V 2 T L U 1 D r 2 1 O M T F .
Two years later, Chang et al. extended the results in [55] to consider the Hermitian R-conjugate solutions. They provided the necessary and sufficient conditions for the existence of the Hermitian R-conjugate solution to the system of complex matrix equations A X = C and X B = D and presented an expression for the Hermitian R-conjugate solution to this system when the solvability conditions are satisfied. In addition, the solution to an optimal approximation problem was obtained. Furthermore, the least squares Hermitian R-conjugate solution with the least norm for this system was also considered [56].
Theorem 53
(Hermitian R-conjugate solutions for (1) over C . [56]). For given A R p × n , C R p × n , B R n × q , and D R n × q , let R R n × n be a nontrivial symmetric involution matrix, which can be expressed as
R = [ P , Q ] I r O O I n r P T Q T ,
where P R n × r and Q R n × ( n r ) satisfying P T P = I r , Q T Q = I n r , P T Q = O , Q T P = O . Denote Γ = [ P , i Q ] . Let A Γ = A 1 + i A 2 , C Γ = C 1 + i C 2 , Γ B = B 1 + i B 2 , and Γ D = D 1 + i D 2 , where
A 1 = 1 2 ( A Γ + A Γ ¯ ) , A 2 = 1 2 i ( A Γ A Γ ¯ ) , C 1 = 1 2 ( C Γ + C Γ ¯ ) , C 2 = 1 2 i ( C Γ C Γ ¯ ) , B 1 = 1 2 ( Γ B + Γ B ¯ ) , B 2 = 1 2 i ( Γ B Γ B ¯ ) , D 1 = 1 2 ( Γ D + Γ D ¯ ) , D 2 = 1 2 i ( Γ D Γ D ¯ ) .
Denote
F = A 1 A 2 , G = C 1 C 2 , K = [ B 1 , B 2 ] , L = [ D 1 , D 2 ] , M = F K T , N = G L T .
Assume that the SVD of M R ( 2 p + 2 q ) × n is
M = U M 1 O O O V T ,
where U = U 1 , U 2 R ( 2 p + 2 q ) × ( 2 p + 2 q ) and V = V 1 , V 2 R n × n are orthogonal matrices, M 1 =   diag σ 1 , , σ r , r = rank ( M ) , U 1 R ( 2 p + 2 q ) × r , V 1 R n × r .
( a ) The system (1) has a Hermitian R-conjugate solution in C n × n if and only if
M N T = N M T , U 2 T N = O .
In that case, (1) has the general Hermitian R-conjugate solution
X = Γ V 1 M 1 1 U 1 T N + V 2 V 2 T N T U 1 M 1 1 V 1 T + V 2 G V 2 T Γ ,
where G is an arbitrary ( n r ) × ( n r ) symmetric matrix.
( b ) For E R n × n , let E 1 = 1 2 ( Γ E Γ + Γ E Γ ¯ ) . If the system (1) has Hermitian R-conjugate solutions, then the optimal approximation problem | | X E | | = min has a unique Hermitian R-conjugate solution of (1), given by
X ^ = Γ ( V 1 M 1 1 U 1 T N + V 2 V 2 T N T U 1 M 1 1 V 1 T + V 2 V 2 T E 1 V 2 V 2 T ) Γ .
( c ) The least squares Hermitian R-conjugate solution of (1) can be expressed as
X = Γ V M 1 1 U 1 T N V 1 M 1 1 U 1 T N V 2 V 2 T N T U 1 M 1 1 Y 22 V T Γ ,
where Y 22 R ( n r ) × ( n r ) is an arbitrary symmetric matrix.
( d ) The least norm least squares Hermitian R-conjugate solution of (1) can be expressed as
X = Γ V M 1 1 U 1 T N V 1 M 1 1 U 1 T N V 2 V 2 T N T U 1 M 1 1 O V T Γ .

4.7. Conjugate Class Solutions

Recall that matrices X , Y C n × n are in the same *congruence class if there exists a nonsingular matrix P C n × n such that X = P Y P .
Zheng first considered the *congruence class of the solutions of (1) [57].
Theorem 54
(*congruence class solutions for (1) over C . [57]). Let A , C C p × n , B , D C n × q . The GSVD of the matrices A and B is given by
A = U A S 1 T , B = T S 2 V B ,
where U A C p × p , V B C q × q are unitary matrices, T C n × n is a nonsingular matrix, and
S 1 = I O O O , S 2 = Γ 1 O O O O O O Γ 2 O O O O
are block matrices with positive diagonal matrices Γ 1 and Γ 2 . The row and column blocks of S 1 have sizes r 1 , m − r 1 and r 1 , n − r 1 , and those of S 2 have sizes r 2 1 , r 2 2 , k − r 2 and r 2 1 , r 1 − r 2 1 , r 2 2 , n − r 1 − r 2 2 , respectively. Here, r 1 = rank ( A ) and r 2 = r 2 1 + r 2 2 = rank ( B ) . Partition U A C T and T D V B into blocks of suitable sizes as
C = C 11 C 12 C 13 C 14 C 21 C 22 C 23 C 24 , D = D 11 D 12 D 13 D 21 D 22 D 23 .
The system (1) is solvable if and only if
C 2 j = O , j = 1 , 2 , 3 , 4 , D i 3 = O , i = 1 , 2 , C 11 = D 11 Γ 1 1 , C 13 = D 12 Γ 2 1 .
The general form of a solution of (1) is
X = T C 11 C 12 C 13 C 14 D 21 Γ 1 1 X 22 D 22 Γ 2 1 X 24 T 1 ,
where X 22 C ( n r 1 ) × ( r 1 r 2 1 ) and X 24 C ( n r 1 ) × ( n r 1 r 2 2 ) are arbitrary. Obviously, X is *congruent to
C 11 C 12 C 13 C 14 D 21 Γ 1 1 X 22 D 22 Γ 2 1 X 24 .
Later, Zhang presented the following result.
Theorem 55
(*congruence class solutions for (1) over C . [58]). Let A , C C p × n and B , D C n × q . Assume that the GSVD of A and B can be expressed as
A = U Σ A P , B = V Σ B P ,
where U C m × m and V C p × p are unitary matrices, P C n × n is a nonsingular matrix, Σ A C m × n , Σ B C p × n , r = rank [ A , B ] ,
Σ A = I A O O O O S A O O O O O O , t s r s t n r
Σ B = O O O O O S B O O O O I B O , t s r s t n r
where S A = diag ( α 1 , , α s ) , S B = diag ( β 1 , , β s ) with 1 > α 1 α s > 0 , 0 < β 1 β s < 1 , and α i 2 + β i 2 = 1 , i = 1 , , s . Denote
U C P = C 11 C 12 C 13 C 14 C 21 C 22 C 23 C 24 C 31 C 32 C 33 C 34 , t s r s t n r
P D V = D 11 D 12 D 13 D 21 D 22 D 23 D 31 D 32 D 33 D 41 D 42 D 43 . p r t s r s t
( a ) The system (1) has a solution in C n × n if and only if
C 3 i = O , D i 1 = O , i = 1 , 2 , 3 , 4 , C 12 = D 12 S B 1 , C 13 = D 13 , S A 1 C 22 = D 22 S B 1 , S A 1 C 23 = D 23 .
In that case, the general solutions of (1) are
X = P 1 C 11 C 12 C 13 C 14 S A 1 C 21 S A 1 C 22 D 23 S A 1 C 24 X 31 D 32 S B 1 D 33 X 34 X 41 D 42 S B 1 D 43 X 44 ( P 1 ) ,
where X 31 , X 41 , X 34 , and X 44 are arbitrary.
( b ) For arbitrary X 31 , X 41 , X 34 , and X 44 , there exists a solution in C n × n of (1), which is *congruent to
Y = C 11 C 12 C 13 C 14 S A 1 C 21 S A 1 C 22 D 23 S A 1 C 24 X 31 D 32 S B 1 D 33 X 34 X 41 D 42 S B 1 D 43 X 44 .
( c ) There exists a minimum norm solution in C n × n of (1), which is *congruent to
Y = C 11 C 12 C 13 C 14 S A 1 C 21 S A 1 C 22 D 23 S A 1 C 24 O D 32 S B 1 D 33 O O D 42 S B 1 D 43 O .
Remark 16.
Theorem 55 differs from Theorem 54 in both the approach to decomposition and the way solutions are expressed. Specifically, Theorem 55 provides an extended formulation that generalizes the results in Theorem 54, offering a more comprehensive method for decomposing the matrix equations and presenting solutions in a broader context.
The following theorem shows the corresponding least squares and least squares least norm solutions through GSVD.
Theorem 56
(Least squares *congruence class solutions for (1) over C . [58]). Let A , C C p × n , B , D C n × q , and rank ( A ) = rank ( B ) = k . There exists a unitary matrix U C n × n and nonsingular matrices R A C p × p and R B C q × q , such that the GSVD of matrix pair [ A , B ] is given as
A = U ( Σ A , O ) R A 1 , B = U ( Σ B , O ) R B 1 ,
where Σ A , Σ B C n × k are
Σ A = I r O O O G O O O O O O O O S O O O I t , Σ B = I r O O O I s O O O I q r s O O O O O O O O O ,
with p = r + s + t , G = diag ( g r + 1 , , g r + s ) , 1 > g r + 1 g r + s > 0 , S = diag ( w r + 1 , , w r + s ) , 0 < w r + 1 w r + s < 1 , G 2 + S 2 = I s . Denote the suitable block matrices with the forms
( R A ) C U = C 11 C 12 C 13 C 14 C 15 C 16 C 21 C 22 C 23 C 24 C 25 C 26 C 31 C 32 C 33 C 34 C 35 C 36 C 41 C 42 C 43 C 44 C 45 C 46 , U D R B = D 11 D 12 D 13 D 14 D 21 D 22 D 23 D 24 D 31 D 32 D 33 D 34 D 41 D 42 D 43 D 44 D 51 D 52 D 53 D 54 D 61 D 62 D 63 D 64 .
( a ) The least squares solutions to (1) are
X = U C 11 + D 11 C 12 + D 12 C 13 + D 13 C 14 C 15 C 16 D 21 D 22 D 23 0 0 0 D 31 D 32 D 33 X 34 X 35 X 36 D 41 D 42 D 43 X 44 X 45 X 46 Y 51 Y 52 Y 53 S 1 C 24 S 1 C 25 S 1 C 26 D 31 + D 61 D 32 + D 62 D 33 + D 63 C 34 C 35 C 36 U ,
where X 34 , X 35 , X 36 , X 44 , X 45 , and X 46 are arbitrary, Y 5 i = Φ ( S ( G D 2 i C 2 i ) + D 5 i ) , i = 1 , 2 , 3 , Φ = ( φ j k ) C s × s , φ j j = 1 w r + j 2 + 1 , φ j k = 0 , j k .
( b ) For arbitrary X 34 , X 35 , X 36 , X 44 , X 45 , and X 46 , there exists a least squares solution in C n × n of (1), which is *congruent to
Y = C 11 + D 11 C 12 + D 12 C 13 + D 13 C 14 C 15 C 16 D 21 D 22 D 23 O O O D 31 D 32 D 33 X 34 X 35 X 36 D 41 D 42 D 43 X 44 X 45 X 46 Y 51 Y 52 Y 53 S 1 C 24 S 1 C 25 S 1 C 26 D 31 + D 61 D 32 + D 62 D 33 + D 63 C 34 C 35 C 36 ,
where Y 5 i = Φ ( S ( G D 2 i C 2 i ) + D 5 i ) , i = 1 , 2 , 3 , Φ = ( φ j k ) C s × s , φ j j = 1 w r + j 2 + 1 ,   φ j k = 0 , j k .
( c ) There exists a minimum norm least squares solution in C n × n of (1), which is *congruent to
Y = C 11 + D 11 C 12 + D 12 C 13 + D 13 C 14 C 15 C 16 D 21 D 22 D 23 O O O D 31 D 32 D 33 O O O D 41 D 42 D 43 O O O Y 51 Y 52 Y 53 S 1 C 24 S 1 C 25 S 1 C 26 D 31 + D 61 D 32 + D 62 D 33 + D 63 C 34 C 35 C 36 ,
where Y 5 i = Φ ( S ( G D 2 i C 2 i ) + D 5 i ) , i = 1 , 2 , 3 , Φ = ( φ j k ) C s × s , φ j j = 1 w r + j 2 + 1 ,   φ j k = 0 , j k .

4.8. (Anti-)Hermitian (Anti-)Hamiltonian Solutions

Hamiltonian matrices play a crucial role in various engineering applications, particularly in solving Riccati equations. Yu et al. studied four extended Hamiltonian solutions of the system (1) [59].
Table 1 outlines the definitions of anti-symmetric orthogonal matrices and (anti-)Hermitian generalized (anti-)Hamiltonian matrices, with J R 2 n × 2 n representing a non-trivial anti-symmetric orthogonal matrix and X C 2 n × 2 n denoting the (anti-)Hermitian generalized (anti-)Hamiltonian matrix.
For J A S O R 2 n × 2 n , the EVD of J can be expressed as
J = P i I n O O i I n P ,
where P C 2 n × 2 n is a unitary matrix [59].
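A minimal numerical check of these notions, using the canonical anti-symmetric orthogonal matrix J = [ O , I ; − I , O ] as an assumed example (our illustration, not from [59]):

```python
import numpy as np

n = 2
O, I = np.zeros((n, n)), np.eye(n)
# Canonical nontrivial anti-symmetric orthogonal matrix of order 2n.
J = np.block([[O, I], [-I, O]])

assert np.allclose(J.T, -J)                 # anti-symmetric
assert np.allclose(J.T @ J, np.eye(2 * n))  # orthogonal

# Eigenvalues are +i and -i, each with multiplicity n, matching the
# EVD J = P diag(i I_n, -i I_n) P^* stated above.
w = np.linalg.eigvals(J)
assert np.allclose(w.real, 0)
assert np.allclose(np.sort(w.imag), [-1.0] * n + [1.0] * n)
```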
Next, we present the necessary and sufficient conditions for the (anti-)Hermitian generalized (anti-)Hamiltonian solutions to the system (1) along with corresponding expressions. Additionally, for a given E C 2 n × 2 n , we consider the optimization problem | | X E | | = min , where X satisfies (1).
Theorem 57
( H H C solutions for (1) over C . [59]). Given A , C C p × 2 n , B , D C 2 n × q , let the decomposition of J A S O R 2 n × 2 n be (22). Partition
A P = [ A 1 , A 2 ] , C P = [ C 1 , C 2 ] , P B = B 1 B 2 , P D = D 1 D 2 ,
where A 1 , A 2 , C 1 , C 2 C p × n and B 1 , B 2 , D 1 , D 2 C n × q . Denote
A = A 1 B 1 , C = C 2 D 2 , B = [ A 2 , B 2 ] , D = [ C 1 , D 1 ] .
( a ) Then, the system (1) has a solution X H H C 2 n × 2 n if and only if
A ( A ) C = C , D ( B ) B = D , A D = C B ,
in which case the Hermitian generalized Hamiltonian solution to (1) can be expressed as
X = P O X 12 X 12 O P ,
where
X 12 = ( A ) C + D ( B ) ( A ) A D ( B ) + L A W R B
and W C n × n is arbitrary.
( b ) For a given E C 2 n × 2 n , let
P E P = E 11 E 12 E 21 E 22 , E 11 C n × n , E 22 C n × n .
Assume that the system (1) has a solution X H H C 2 n × 2 n . Then, the optimization problem | | X E | | = min has a unique solution X H H C 2 n × 2 n of (1) if and only if
L A 1 2 ( E 12 + ( E 21 ) ) X 0 R B = 1 2 ( E 12 + ( E 21 ) ) X 0 ,
in which case the unique solution X can be expressed as
X = P O X 0 ¯ ( X 0 ¯ ) O P ,
where
X 0 ¯ = 1 2 ( E 12 + ( E 21 ) ) , X 0 = ( A ) C + D ( B ) ( A ) A D ( B ) .
( c ) Denote
A = [ A 1 , B 1 ] , C = [ C 2 , D 2 ] .
Let the SVDs of A and C be given by
A = P 1 Γ O O O Q 1 , C = U 1 Λ O O O V 1 ,
where P 1 = [ P 11 , P 12 ] , Q 1 = [ Q 11 , Q 12 ] , U 1 = [ U 11 , U 12 ] , V 1 = [ V 11 , V 12 ] , Γ = diag ( δ 1 , δ 2 , , δ t 1 ) , t 1 = rank ( A ) , Λ = diag ( γ 1 , γ 2 , , γ t 2 ) , t 2 = rank ( C ) . Then, the least squares Hermitian generalized Hamiltonian solution to (1) can be described as
X = P O X 12 X 12 O P ,
where
X 12 = P 1 ϕ P 11 D V 11 Λ + Γ Q 11 ( C ) U 12 Γ 1 Q 11 ( C ) U 12 P 12 D V 11 Λ 1 X 22 U 1 ,
with ϕ = ( ϕ i j ) C t 1 × t 2 , ϕ i j = 1 δ i 2 + γ j 2 , and arbitrary X 22 C ( n t 1 ) × ( n t 2 ) .
Theorem 58
( H A H C solutions for (1) over C . [59]). Given A , C C p × 2 n , B , D C 2 n × q , let the decomposition of J A S O R 2 n × 2 n be (22). The matrices A P , C P , P B , and P D respectively have the partitions as in (23). Denote
A ¯ = A 1 B 1 , C ¯ = C 1 D 1 , B ¯ = A 2 B 2 , D ¯ = C 2 D 2 .
Let the SVDs of A ¯ and B ¯ be
A ¯ = U Σ O O O V , B ¯ = Q Π O O O R ,
where U , Q C ( m + q ) × k , V , R C n × n , Σ = diag ( α 1 , , α r ) , r = rank ( A ¯ ) , Π = diag ( β 1 , , β s ) , s = rank ( B ¯ ) . Set
U C ¯ V = C ¯ 11 C ¯ 12 C ¯ 21 C ¯ 22 , Q D ¯ R = D ¯ 11 D ¯ 12 D ¯ 21 D ¯ 22 ,
where C ¯ 11 C r × r , D ¯ 11 C s × s , C ¯ 22 C ( m + q r ) × ( k r ) , D ¯ 22 C ( m + q s ) × ( k s ) .
( a ) Then, (1) has a solution X H A H C 2 n × 2 n if and only if
A ¯ A ¯ C ¯ = C ¯ , A ¯ ( C ¯ ) = C ¯ ( A ¯ ) , C ¯ 21 = O , C ¯ 22 = O , B ¯ B ¯ D ¯ = D ¯ , B ¯ ( D ¯ ) = D ¯ ( B ¯ ) , D ¯ 21 = O , D ¯ 22 = O ,
in which case the Hermitian generalized anti-Hamiltonian solution to (1) can be described as
X = P X 11 O O X 22 P ,
where
X 11 = V Σ 1 C ¯ 11 Σ 1 C ¯ 12 ( C ¯ 12 ) Σ 1 X ¯ 22 V , X 22 = R Π 1 D ¯ 11 Π 1 D ¯ 12 ( D ¯ 12 ) Π 1 X ^ 22 R ,
X ¯ 22 C ( k r ) × ( k r ) and X ^ 22 C ( k s ) × ( k s ) are arbitrary Hermitian matrices.
( b ) For a given E C 2 n × 2 n , assuming that the system (1) has a solution X H A H C 2 n × 2 n , let
1 2 ( P E + E P ) = E 11 E 12 ( E 12 ) E 22 , V E 11 V = X ^ 11 0 X ^ 12 0 ( X ^ 12 0 ) X ^ 22 0 , R E 22 R = X ^ 11 X ^ 12 ( X ^ 12 ) X ^ 22 ,
where E 11 C n × n , E 22 C n × n , X ^ 11 0 C r × r , X ^ 22 0 C ( k r ) × ( k r ) , X ^ 11 C s × s ,   X ^ 22 C ( k s ) × ( k s ) are Hermitian. Then, the optimization problem | | X E | | = min has a unique solution X 0 H A H C 2 n × 2 n of (1) as
X 0 = P X 11 0 O O X 22 0 P ,
where
X 11 0 = V Σ 1 C ¯ 11 Σ 1 C ¯ 12 ( C ¯ 12 ) Σ 1 X ^ 22 0 V , X 22 0 = R Π 1 D ¯ 11 Π 1 D ¯ 12 ( D ¯ 12 ) Π 1 X ^ 22 R .
Theorem 59
( A H H C solutions for (1) over C . [59]). Given A , C C p × 2 n , B , D C 2 n × q , let the decomposition of J A S O R 2 n × 2 n be (22). The matrices A P , C P , P B , and P D respectively have the partitions as in (23). Denote
A ^ = A 1 B 1 , C ^ = C 1 D 1 , B ^ = A 2 B 2 , D ^ = C 2 D 2 .
Let the SVDs of A ^ and B ^ be, respectively,
A ^ = U Σ O O O V , B ^ = Q Π O O O R ,
where U , Q C ( m + q ) × k , V , R C n × n are unitary and Σ = diag ( α 1 , , α r ) ,   r = rank ( A ^ ) , Π = diag ( β 1 , , β s ) , s = rank ( B ^ ) . Set
U C ^ V = C ^ 11 C ^ 12 C ^ 21 C ^ 22 , Q D ^ R = D ^ 11 D ^ 12 D ^ 21 D ^ 22 ,
where C ^ 11 C r × r , D ^ 11 C s × s , C ^ 22 C ( m + q r ) × ( k r ) , D ^ 22 C ( m + q s ) × ( k s ) .
( a ) Then, (1) has a solution X A H H C 2 n × 2 n if and only if
A ^ A ^ C ^ = C ^ , A ^ C ^ = C ^ A ^ , C ^ 21 = O , C ^ 22 = O , B ^ B ^ D ^ = D ^ , B ^ D ^ = D ^ C ^ , D ^ 21 = O , D ^ 22 = O ,
in which case the anti-Hermitian generalized Hamiltonian solution to (1) can be described as
X = P X 11 O O X 22 P ,
where
X 11 = V Σ 1 C ^ 11 Σ 1 C ^ 12 C ^ 12 Σ 1 X ¯ 22 V , X 22 = R Π 1 D ^ 11 Π 1 D ^ 12 D ^ 12 Π 1 X ^ 22 R ,
and X ¯ 22 C ( k r ) × ( k r ) , X ^ 22 C ( k s ) × ( k s ) are arbitrary anti-Hermitian.
( b ) For a given E C 2 n × 2 n , let
1 2 ( P E E P ) = E 11 E 12 ( E 12 ) E 22 , V E 11 V = X ^ 11 0 X ^ 12 0 ( X ^ 12 0 ) X ^ 22 0 , R E 22 R = X ^ 11 X ^ 12 ( X ^ 12 ) X ^ 22 ,
where E 11 C n × n , E 22 C n × n , X ^ 11 0 C r × r , X ^ 22 0 C ( k r ) × ( k r ) , X ^ 11 C s × s ,   X ^ 22 C ( k s ) × ( k s ) are anti-Hermitian. Assume that the system (1) has a solution X A H H C 2 n × 2 n . Then, the optimization problem | | X E | | = min has a unique solution X ˜ A H H C 2 n × 2 n of (1), satisfying
X ˜ = P X 11 0 O O X 22 0 P ,
where
X 11 0 = V Σ 1 C ^ 11 Σ 1 C ^ 12 C ^ 12 Σ 1 X ^ 22 0 V , X 22 0 = R Π 1 D ^ 11 Π 1 D ^ 12 D ^ 12 Π 1 X ^ 22 R .
Theorem 60
( A H A H C solutions for (1) over C . [59]). Given A , C C p × 2 n , B , D C 2 n × q , let the decomposition of J A S O R 2 n × 2 n be (22). The matrices A P , C P , P B , and P D respectively have the partitions as in (23). Denote
A ˜ = A 1 B 1 , C ˜ = C 2 D 2 , B ˜ = [ A 2 , B 2 ] , D ˜ = [ C 1 , D 1 ] .
( a ) Then, (1) has a solution X A H A H C 2 n × 2 n if and only if
A ˜ A ˜ C ˜ = C ˜ , D ˜ B ˜ B ˜ = D ˜ , A ˜ D ˜ = C ˜ B ˜ ,
in which case the anti-Hermitian generalized anti-Hamiltonian solution to (1) can be expressed as
X = P O X 12 X 12 O P ,
where
X 12 = A ˜ C ˜ + D ˜ B ˜ A ˜ A ˜ D ˜ B ˜ + L A ˜ Z R B ˜
and Z C n × n is arbitrary.
( b ) For a given E C 2 n × 2 n , let
P E P = E 11 E 12 E 21 E 22 , E 11 C n × n , E 22 C n × n .
If the system (1) has a solution X A H A H C 2 n × 2 n , the optimization problem | | X E | | = min has a unique solution X ^ A H A H C 2 n × 2 n of (1) if and only if
L A 1 2 ( E 12 ( E 21 ) ) X 0 R C = 1 2 ( E 12 ( E 21 ) ) X 0 ,
in which case the unique solution X ^ can be expressed as
X ^ = P O X 0 ¯ ( X 0 ¯ ) O P ,
where
X 0 ¯ = 1 2 ( E 12 ( E 21 ) ) , X 0 = ( A ˜ ) C ˜ + D ˜ ( B ˜ ) ( A ˜ ) A ˜ D ˜ ( B ˜ ) .
( c ) Denote
A = [ A 1 , B 1 ] , C = [ C 2 , D 2 ] , B = [ A 2 , B 2 ] , D = [ C 1 , D 1 ] .
Let the SVDs of A and B be as given in
A = P 1 Γ O O O Q 1 , B = U 1 Λ O O O V 1 ,
where P 1 = [ P 11 , P 12 ] , Q 1 = [ Q 11 , Q 12 ] , U 1 = [ U 11 , U 12 ] , V 1 = [ V 11 , V 12 ] , Γ = diag ( δ 1 , , δ t 1 ) , t 1 = rank ( A ) , Λ = diag ( γ 1 , , γ t 2 ) , t 2 = rank ( B ) . Then, the least squares anti-Hermitian generalized anti-Hamiltonian solution to (1) can be described as
X = P O X 12 X 12 O P ,
where
X 12 = P 1 Φ P 11 D V 11 Λ + Γ Q 11 ( C ) U 12 Γ 1 Q 11 ( C ) U 12 P 12 D V 11 Λ 1 X 22 U 1 ,
with Φ = ( ϕ i j ) , ϕ i j = 1 δ i 2 + γ j 2 , 1 i t 1 , 1 j t 2 and X 22 C ( n t 1 ) × ( n t 2 ) is arbitrary.
This section introduced matrix decomposition methods for obtaining special solutions of system (1), including various symmetric solutions, orthogonal solutions over the real field, unitary solutions over the complex field, inequality-constrained solutions, real-positive definite and real-semi-positive definite solutions, reflexive solutions, various conjugate solutions, and Hamiltonian-type solutions.

5. The System (1) over Dual Numbers

In 1873, Clifford introduced dual numbers for studying non-Euclidean geometry [60]. The set of dual numbers is typically denoted by
D = { a = a 1 + ϵ a 2 a 1 , a 2 R , ϵ 0 , ϵ 2 = 0 } .
For two dual numbers a = a 1 + ϵ a 2 and b = b 1 + ϵ b 2 , the arithmetic operations for dual numbers are defined as follows:
( a ) Equality: a = b a 1 = b 1 , a 2 = b 2 .
( b ) Addition: a + b = ( a 1 + b 1 ) + ϵ ( a 2 + b 2 ) .
( c ) Multiplication: a b = a 1 b 1 + ϵ ( a 1 b 2 + a 2 b 1 ) .
A matrix whose elements are dual numbers is called a dual matrix. Specifically, the set of all m × n real dual matrices is given by
D m × n = { A = A 1 + ϵ A 2 A 1 , A 2 R m × n } .
The operational rules for dual matrices follow those of dual numbers. Dual matrices have significant applications in kinematic analysis and robotics. The solution of systems of linear dual equations is a crucial task in various fields, such as synthesis problems and sensor calibration [61].
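The arithmetic rules ( a )–( c ) above translate directly into code. The following Python sketch is a minimal illustration (the class name and layout are ours, not from the literature):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Dual:
    """A dual number a1 + eps*a2 with eps^2 = 0 (illustrative sketch)."""
    a1: float  # real part
    a2: float  # dual (infinitesimal) part

    def __add__(self, other):
        # (a) equality is the field-wise comparison generated by dataclass;
        # (b) addition is componentwise.
        return Dual(self.a1 + other.a1, self.a2 + other.a2)

    def __mul__(self, other):
        # (c) (a1 + eps a2)(b1 + eps b2) = a1 b1 + eps (a1 b2 + a2 b1),
        # since the eps^2 term vanishes.
        return Dual(self.a1 * other.a1,
                    self.a1 * other.a2 + self.a2 * other.a1)

# Example: (1 + 2eps)(3 + 4eps) = 3 + eps(1*4 + 2*3) = 3 + 10eps.
p = Dual(1.0, 2.0) * Dual(3.0, 4.0)
```

A dual matrix A 1 + ϵ A 2 behaves the same way entrywise: the real parts multiply as usual, and the dual part of a product is A 1 B 2 + A 2 B 1 .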
Recently, the existence of general solutions and corresponding expressions, along with minimal norm solutions for (1) over dual numbers has been investigated. We present the following theorem.
Theorem 61
(General solutions for (1) over D . [62]). Assume that dual matrices A = A 1 + ϵ A 2 , B = B 1 + ϵ B 2 , C = C 1 + ϵ C 2 and D = D 1 + ϵ D 2 , where A i R p × m , B i R n × q , C i R p × n , D i R m × q ( i = 1 , 2 ). Suppose that the SVDs of the matrices A 1 and B 1 are
A 1 = P Σ O O O Q T , B 1 = U Ω O O O V T ,
where Σ = diag ( γ 1 , , γ r 1 ) , r 1 = rank ( A 1 ) , P = [ P 1 , P 2 ] R p × p , Q = [ Q 1 , Q 2 ] R m × m , Ω = diag ( β 1 , , β r 2 ) , r 2 = rank ( B 1 ) , U = [ U 1 , U 2 ] R n × n , V = [ V 1 , V 2 ] R q × q with P 1 R p × r 1 , Q 1 R m × r 1 , U 1 R n × r 2 and V 1 R q × r 2 . Let the partitions of the matrices U T B 2 V , P T A 2 Q , P T C 1 U , P T C 2 U , Q T D 1 V and Q T D 2 V be given by
U T B 2 V = B 21 B 22 B 23 B 24 , P T A 2 Q = A 21 A 22 A 23 A 24 , P T C 1 U = C 11 C 12 C 13 C 14 , P T C 2 U = C 21 C 22 C 23 C 24 , Q T D 1 V = D ˜ 11 D ˜ 12 D ˜ 13 D ˜ 14 , Q T D 2 V = D ˜ 21 D ˜ 22 D ˜ 23 D ˜ 24 .
( a ) Then, (1) is solvable over dual numbers if and only if
R A 24 ( C 24 A 23 Σ 1 C 12 ) = O , R A 1 ( C 2 B 2 A 2 D 2 ) L B 1 = O , R A 1 ( C 2 B 1 A 2 D 1 ) = O , R A 1 C 1 = O , D 1 L B 1 = O , ( C 1 B 2 A 1 D 2 ) L B 1 = O , C 2 B 1 A 2 D 1 = A 1 D 2 C 1 B 2 , ( D ˜ 24 D ˜ 13 Ω 1 B 22 ) L B 24 = O , C 1 B 1 = A 1 D 1 .
In this case, the solution set of (1) over dual numbers can be expressed as
X 1 = Q Σ 1 C 11 Σ 1 C 12 D ˜ 13 Ω 1 J 4 + L A 24 W 3 R B 24 U T , X 2 = Q Σ 1 ( C 21 A 21 Σ 1 C 11 A 22 D ˜ 13 Ω 1 ) J 5 Σ 1 A 22 L A 24 W 3 R B 24 J 6 L A 24 W 3 R B 24 B 23 Ω 1 Z 4 U T ,
where
J 4 = A 24 ( C 24 A 23 Σ 1 C 12 ) + L A 24 ( D ˜ 24 D ˜ 13 Ω 1 B 22 ) B 24 , J 5 = Σ 1 C 22 Σ 1 A 21 Σ 1 C 12 Σ 1 A 22 J 4 , J 6 = D ˜ 23 Ω 1 D ˜ 13 Ω 1 B 21 Ω 1 J 4 B 23 Ω 1 .
and W 3 , Z 4 are arbitrary matrices.
( b ) If the conditions (24) are satisfied, then the solution of | | X ^ 1 | | 2 + | | X ^ 2 | | 2 = min with X ^ being the dual number solution of (1) is given by X ^ = X ^ 1 + ϵ X ^ 2 , where
X ^ 1 = Q Σ 1 C 11 Σ 1 C 12 D ˜ 13 Ω 1 J 4 + L A 24 W 3 R B 24 U T , X ^ 2 = Q Σ 1 J 4 J 5 Σ 1 A 22 L A 24 W 3 R B 24 J 6 L A 24 W 3 R B 24 B 23 Ω 1 O U T .
where W 3 satisfies
Π ˜ vec ( W 3 ) = vec ( J 7 ) ,
where
J 7 = 2 L A 24 A 22 T Σ 1 J 5 R B 24 2 L A 24 J 4 R B 24 + 2 L A 24 J 6 Ω 1 B 23 T R B 24 . Π ˜ = R B 24 ( L A 24 + 2 L 2 T L 2 ) + ( R B 24 + 2 L 1 T L 1 ) L A 24 .
Remark 17.
In 2024, Fan presented an alternative form of Theorem 61 using the Moore–Penrose inverse instead of block matrices, which also requires the SVD form of A 1 and C 1 [63]. In fact, Theorem 61, when applied with the SVD, provides the specific form of the Moore–Penrose inverse, which may be more efficient in practical computations.
In this section, we introduced the solution of the system (1) over dual numbers. A more general form involving dual quaternions will be discussed in the next section.

6. The System (1) over Quaternions

Since Hamilton’s discovery of quaternions in 1843 [64], they have become a widely used tool for representing concepts across algebra, analysis, topology, and physics. Additionally, quaternion matrices have garnered significant attention in fields such as computer science, quantum physics, signal processing, and color image processing [65,66].
In 1849, Cockle introduced split quaternions [67]. The algebra of split quaternions is a four-dimensional Clifford algebra that is associative and noncommutative, but it has zero divisors, nilpotent elements, and nontrivial idempotents. As a result, the algebraic structure of split quaternions, denoted as H s , is more complex than that of real quaternions, H . Despite this complexity, the unique algebraic properties of split quaternions make them a valuable tool in quantum mechanics and geometry [68,69].
In 1873, Clifford extended the concepts of dual numbers and dual quaternions [60]. Dual quaternions have since become widely used in robot kinematics and unmanned aerial vehicle formation control due to their ability to represent the motion of rigid bodies in 3D space [70,71,72,73]. Similarly, the dual split quaternion can also be defined.
The system (1) over quaternion algebra, split quaternion algebra, and dual quaternion algebra has also been the focus of several scholars. Comparatively, since the algebraic structure of quaternions is well-understood, and the definitions of generalized inverses and rank have been extended to quaternion matrices, the system (1) has been thoroughly studied over quaternions. Since 2005, when Wang first proposed the general solution to the extended form of the system
A 1 X = C 1 , A 2 X = C 2 , A 3 X B 3 = C 3 , A 4 X B 4 = C 4
of the system (1), relevant results for the bi-symmetric, centro-symmetric, symmetric and skew-antisymmetric, ( P , Q ) -reflexive solutions and reducible solutions have been successively presented [74,75,76,77]. The study of the split quaternion matrix equation typically relies on the real or complex representation of the split quaternion, or vectorization operators. However, recent work by Jiang on split quaternion matrix SVD and generalized inverses has enabled the consideration of more diverse approaches [78,79]. Dual quaternions are more intricate, and only Xie has explored the system (1) over dual quaternions [80]. Yang et al. also considered the results of the system (1) over dual split quaternion tensors [81]. The following details are introduced.

6.1. The System (1) over Quaternions

Denote the set of all real quaternions by
H = { a = a 0 + a 1 i + a 2 j + a 3 k i 2 = j 2 = k 2 = i j k = − 1 , a 0 , a 1 , a 2 , a 3 R } ,
where i, j, and k are the quaternion units. For a H , a ¯ = a 0 − a 1 i − a 2 j − a 3 k is the conjugate of a. The set of all m × n quaternion matrices is denoted by H m × n . For a quaternion matrix A = ( a i j ) H m × n , the conjugate transpose of A is expressed as A = ( a ¯ j i ) . The Moore–Penrose inverse of A is denoted by A , satisfying the same four Penrose equations as in the complex case. The two orthogonal projectors L A and R A are defined as
L A = I − A A , R A = I − A A .
The rank of A, denoted by rank ( A ) , is defined as the dimension of R ( A ) , where R ( A ) is the right column space of A [82,83].
Using the results of the system (25) and the properties of the rank of quaternion matrix equations, the general solution of system (1) over quaternions is presented below.
Theorem 62
(General solutions for (1) over H . [74]). Let A ∈ H^{p×m} , C ∈ H^{p×n} , B ∈ H^{n×q} , and D ∈ H^{m×q} be given. Then the following three conditions are equivalent, and each is necessary and sufficient for (1) to be consistent over H :
( a )   R A C = O , D L B = O , A D = C B ,
( b )   A A C = C , D B B = D , A D = C B ,
( c )   rank [ A , C ] = rank ( A ) , rank B D = rank ( B ) , A D = C B .
In this case, the general solution of (1) can be in the form of
X = A C + L A D B + L A Y R B ,
where Y is an arbitrary matrix over H with appropriate order.
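The solution formula of Theorem 62 mirrors the classical Penrose result, so it can be checked numerically. The following sketch (our own illustration, not from the cited works) works over the complex field, since NumPy has no native quaternion type; the formula X = A^†C + L_A D B^† + L_A Y R_B takes exactly the same form there.

```python
import numpy as np

rng = np.random.default_rng(0)
p, m, n, q = 3, 5, 4, 3

# Build a consistent pair A X = C, X B = D over the complex field.
A = rng.standard_normal((p, m)) + 1j * rng.standard_normal((p, m))
B = rng.standard_normal((n, q)) + 1j * rng.standard_normal((n, q))
X_true = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
C, D = A @ X_true, X_true @ B

Ap, Bp = np.linalg.pinv(A), np.linalg.pinv(B)
R_A = np.eye(p) - A @ Ap        # R_A = I - A A^+
L_A = np.eye(m) - Ap @ A        # L_A = I - A^+ A
R_B = np.eye(n) - B @ Bp        # R_B = I - B B^+
L_B = np.eye(q) - Bp @ B        # L_B = I - B^+ B

# Solvability conditions (a) of Theorem 62: R_A C = 0, D L_B = 0, A D = C B.
assert np.allclose(R_A @ C, 0) and np.allclose(D @ L_B, 0)
assert np.allclose(A @ D, C @ B)

# General solution: X = A^+ C + L_A D B^+ + L_A Y R_B for arbitrary Y.
Y = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
X = Ap @ C + L_A @ D @ Bp + L_A @ Y @ R_B
assert np.allclose(A @ X, C) and np.allclose(X @ B, D)
```

Different choices of Y sweep out the whole solution set; Y = 0 gives the particular solution A^†C + L_A D B^†.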
Based on Theorem 62, Kyrchei investigated the row–column determinant expression of the solution of system (1) over quaternions [84].
Let S n denote the symmetric group on { 1 , 2 , , n } . For a quaternion matrix A H n × n , the row and column determinants are defined as follows:
( a ) Row determinant: The i-th row determinant of A = ( a i j ) H n × n for all i = 1 , , n is defined as
rdet i A = σ S n ( 1 ) n r ( a i k 1 a i k 1 + 1 a i k 1 + l 1 ) ( a i k r a i k r + 1 a i k r + l k r ) ,
where σ = ( i k 1 i k 1 + 1 i k 1 + l 1 ) ( i k 2 i k 2 + 1 i k 2 + l 2 ) ( i k r i k r + 1 i k r + l k r ) ,   i k 2 < i k 3 < < i k r and i k s < i k s + 1 for all t = 2 , , r and s = 1 , , l t .
( b ) Column determinant: The j-th column determinant of A = ( a i j ) H n × n for all j = 1 , … , n is defined as
cdet j A = τ S n ( 1 ) n r ( a j k r + l r a j k r + 1 a j k r ) ( a j k 1 + l 1 a j k 1 + 1 a j k 1 a j k 1 a j k 1 ) ,
where τ = ( j k r + l r j k r ) ( j k 2 + l 2 j k 2 + 1 j k 2 ) ( j k 1 + l 1 j k 1 + 1 j k 1 j k 1 j k 1 ) ,   j k 2 < j k 3 < < j k r and j k s < j k s + 1 for all t = 2 , , r and s = 1 , , l t .
For 1 k n , let α = { α 1 , , α k } { 1 , , m } and β = { β 1 , , β k } { 1 , , n } with 1 k min { m , n } . The collection of strictly increasing sequences of k integers chosen from { 1 , , n } is denoted by L k , n = α = ( α 1 , , α k ) 1 α 1 < < α k n . For a fixed i α and j β , let I r , m { i } = { α L r , m i α } , J r , n { j } = { β L r , n j β } . Assume A = ( a i j ) H n × n . Let A α α be a principal submatrix of A whose rows and columns are indexed by α . If A H n × n is Hermitian, then | A | α α denotes the corresponding principal minor of det A . Let a · j be the j-th column and a i · be the i-th row of A. Suppose A · j ( b ) denotes the matrix obtained from A by replacing its j-th column with the column b, and A i · ( b ) denotes the matrix obtained from A by replacing its i-th row with the row b.
Theorem 63
(General solutions using row and column determinants for (1) over H . [84]). Let A = ( a i j ) H p × m , B = ( b i j ) H n × q , C = ( c i j ) H p × n , D = ( d i j ) H m × q , A † = ( a i j † ) H n × m , B † = ( b i j † ) H s × r , L A = I − A † A = ( l i j ) H n × n . Denote A † C = C ^ = ( c ^ i j ) H n × r and L A D B † = D ^ = ( d ^ i j ) H n × r . Assume that p < m and q < n . The quaternion matrix X 0 = ( x i j 0 ) H n × s , as a solution of (1), admits the following determinantal representation.
( a ) If rank ( A ) = k p < m and rank ( B ) = t q < n , then
x i j 0 = β J k , m { i } cdet i ( ( A A ) · i ( c ^ · j ) ) β β β J k , m | A A | β β + α I t , n { j } rdet j ( ( B B ) j · ( d ^ i · ) ) α α α I t , n | B B | α α .
( b ) If rank ( A ) = m and rank ( B ) = n , then
x i j 0 = cdet i ( A A ) · i ( c ^ · j ) det ( A A ) + rdet j ( B B ) j · ( d ^ i · ) det ( B B ) .
( c ) If rank ( A ) = k p < m and rank ( B ) = n , then
x i j 0 = β J k , m { i } cdet i ( ( A A ) · i ( c ^ · j ) ) β β β J k , m | A A | β β + rdet j ( B B ) j · ( d ^ i · ) det ( B B ) .
( d ) If rank ( A ) = m and rank ( B ) = t q < n , then
x i j 0 = cdet i ( A A ) · i ( c ^ · j ) det ( A A ) + α I t , n { j } rdet j ( ( B B ) j · ( d ^ i · ) ) α α α I t , n | B B | α α .
The following presents some special forms of symmetric solutions over quaternions and related results.
Table 2 outlines the definitions of several kinds of symmetric matrices, where A = ( a_{ij} ) ∈ H^{m×n} , A^* = ( a̅_{ji} ) ∈ H^{n×m} , A^# = ( a_{m−i+1 , n−j+1} ) ∈ H^{m×n} , and a̅_{ji} is the conjugate of the quaternion a_{ji} .
It is worth noting that centrosymmetric, symmetric and skew-antisymmetric, and ( P , Q ) -(skew)symmetric matrices do not necessarily need to be square.
Next, we will sequentially present the conclusions regarding the above special solutions of the system (1) over quaternions.
Theorem 64
(Bisymmetric solutions for (1) over H . [74]). Let A , C H p × n , B , D H n × q , and V = ( v i j ) R k × k , where v i j = 1 when i + j = k + 1 and v i j = 0 otherwise. Denote
U = I k V k V k I k ,
when n = 2 k , or
U = I k O V k O 1 O V k O I k ,
when n = 2 k + 1 . Partition the block matrices
A U − 1 = [ A 1 , A 2 ] , C U = [ C 1 , C 2 ] , U B = B 1 B 2 , U D = D 1 D 2 ,
where A 1 , A 2 , C 1 , C 2 H p × k and B 1 , B 2 , D 1 , D 2 H k × q when n = 2 k ; A 1 , C 1 H p × k , A 2 , C 2 H p × ( k + 1 ) , B 1 , D 1 H k × q and B 2 , D 2 H ( k + 1 ) × q when n = 2 k + 1 . Let i = 1 , 2 ,
S i = B i L A i , G i = R S i B i , T i = L A i L S i , N i = R B i A i , Ψ i = [ D i B i A i C i L A i S i B i ( ( D i B i ) A i C i ) ] B i , Φ i = S i B i [ ( D i B i ) A i C i ] + L S i T i Ψ i B i , Q i = C i A i C i A i L A i Φ i A i .
Then, the system (1) has a bisymmetric solution over quaternions if and only if
T i T i Ψ i = Ψ i , R T i Q i = O , D i B i B i = D i , A i A i C i = C i , G i [ ( D i B i ) A i C i ] = O ,
in which case, the general bisymmetric solution can be expressed as
X = U 1 X 1 O O X 2 U ,
where
X i = 1 2 ( Y i + Y i ) ,
Y i = A i C i + L A i S i B i [ ( D i B i ) A i C i ] + Ψ i B i + Q i N i R B i + T i W i R N i R B i ,
where W i is an arbitrary matrix over H with compatible dimension.
Remark 18.
In 2015, Yuan et al. considered the least squares η-bi-Hermitian solution for another linear system ( A X B , C X D ) = ( E , F ) [85].
Theorem 65
(Centrosymmetric solutions for (1) over H . [74]). For A , C H p × n and B , D H n × q , denote
S = A # L A , T = L A L S , G = R S A # , N = R B B # , P = R L A L S L T L A L S , Ψ = [ D B A C L A S ( A A C ) # + L A S ( A # A C ) ] B , Φ = S A # [ ( A C ) # ( A C ) ] + L S T Ψ B , Q = D # A C B # M Φ B # .
Then, the system (1) has a centrosymmetric solution if and only if
T T Ψ = Ψ , R P R L A L S L T Q = O , R L A L S L T Q L N = O , D B B = D , A A C = C , G [ ( A C ) # ( A C ) ] = O .
In that case, the centrosymmetric solution can be expressed as
X = 1 2 ( X 1 + X 1 # ) ,
where
X 1 = A C + L A S A # [ ( A C ) # ( A C ) ] + L A L S T Ψ B + L A L S ( L A L S L T ) Q B # + L A L S P R L A L S L T Q N R B L A L S T ( L A L S L T ) L A L S P R L A L S L T Q B # + L A L S L T Z L A L S T ( L A L S L T ) L A L S L T Z ( B B ) # + L A 1 L S W R B L A L S T ( L A L S L T ) M L S P W N B # L A L S P P W N N R B ,
with arbitrary W , Z .
Theorem 66
(Symmetric and skew-antisymmetric solutions for (1) over H . [75]). Let A , C H p × n , B , D H n × q , V = ( v i j ) R k × k , where v i j = 1 when i + j = k + 1 and v i j = 0 otherwise. Denote
U = 1 2 I k I k V k V k
when n = 2 k , or
U = 1 2 I k O I k O 2 O V k O V k
when n = 2 k + 1 . Partition the block matrices
A U − 1 = [ A 1 , A 2 ] , C U = [ C 1 , C 2 ] , U B = B 1 B 2 , U D = D 1 D 2 ,
where A 1 , A 2 , C 1 , C 2 H p × k and B 1 , B 2 , D 1 , D 2 H k × q when n = 2 k ; A 1 , C 1 H p × k , A 2 , C 2 H p × ( k + 1 ) , B 1 , D 1 H k × q and B 2 , D 2 H ( k + 1 ) × q when n = 2 k + 1 . Let
S = B 2 L A 2 , G = R S B 2 , T = L A 2 L S , N = R B 1 A 1 , ψ = [ D 2 B 1 A 2 C 1 L A 2 S B 2 ( ( D 1 B 2 ) A 2 C 1 ) ] B 1 , ϕ = S B 2 [ ( D 1 B 2 ) A 2 C 1 ] + L S T ψ B 1 , Q = C 2 A 2 C 1 A 1 L A 2 ϕ A 1 .
Then, the system (1) has symmetric and skew-antisymmetric solutions if and only if
T T ψ = ψ , R T Q = O , D 2 B 1 B 1 = D 2 , C 2 ( A 1 ) A 1 = C 2 , A 2 A 2 C 1 = C 1 , B 2 ( B 2 ) D 1 = D 1 , G ( ( D 1 B 2 ) A 2 C 1 ) = O .
In that case, the general symmetric and skew-antisymmetric solution can be expressed as
X = U O Y Y O U ,
where
Y = A 2 C 1 + L A 2 S B 2 ( ( D 1 B 2 ) A 2 C 1 ) + ψ B 1 + Q N R B 1 + T W R N R B 1
where W is an arbitrary matrix over H with compatible dimension.
Theorem 67
( ( P , Q ) -(skew-)symmetric solution for (1) over H . [76]). Let A H p × m , C H p × n , B H n × q , D H m × q , and P H m × m , Q H n × n satisfy P 2 = I and Q 2 = I . The EVDs of P and Q can be written in the form
P = U^{−1} diag ( I_{r_1} , − I_{m−r_1} ) U , Q = V^{−1} diag ( I_{r_2} , − I_{n−r_2} ) V ,
where U and V are invertible. Denote
A U 1 = [ A 1 , A 2 ] , C V 1 = [ C 1 , C 2 ] , V B = B 1 B 2 , U D = D 1 D 2 ,
where A 1 , C 1 H p × r 1 , B 1 , D 1 H r 2 × q . Then, the system (1) has a ( P , Q ) -(skew-)symmetric solution if and only if
R A i C i = O , D i L B i = O , A i D i = C i B i , i = 1 , 2 ,
or equivalently
A i D i = C i B i , rank [ A i , C i ] = rank ( A i ) , rank C i D i = rank ( C i ) , i = 1 , 2 .
( a ) The general ( P , Q ) -symmetric solution of (1) can be expressed as
X = U 1 A 1 C 1 + L A 1 D 1 B 1 + L A 1 Y 1 R B 1 O O A 2 C 2 + L A 2 D 2 B 2 + L A 2 Y 2 R B 2 V ,
where Y 1 , Y 2 are arbitrary matrices over H with appropriate sizes.
( b ) The general ( P , Q ) -skew-symmetric solution of (1) can be expressed as
X = U 1 O A 1 C 1 + L A 1 D 1 B 1 + L A 1 Y 1 R B 1 A 2 C 2 + L A 2 D 2 B 2 + L A 2 Y 2 R B 2 O V ,
where Y 1 , Y 2 are arbitrary matrices over H with appropriate sizes.
Subsequently, we introduce the extreme rank ( P , Q ) -(skew-)symmetric solutions of the system (1) over quaternions.
Theorem 68
(Extreme rank ( P , Q ) -(skew-)symmetric solutions for (1) over H . [76]). Suppose that the system (1) has a ( P , Q ) -(skew-)symmetric solution X, and let Ω ( Ω ^ ) denote the set of all ( P , Q ) -(skew-)symmetric solutions of (1). Denote
T 1 = R L A 1 ( A 1 C 1 + L A 1 D 1 B 1 ) , S 1 = ( A 1 C 1 + L A 1 D 1 B 1 ) L R B 1 , T 2 = R L A 2 ( A 2 C 2 + L A 2 D 2 B 2 ) , S 2 = ( A 2 C 2 + L A 2 D 2 B 2 ) L R B 2 .
( a ) The maximal rank of X Ω is
max X Ω r ( X ) = min { r 1 + rank ( C 1 ) rank ( A 1 ) , r 2 + rank ( D 1 ) rank ( B 1 ) } + min { m r 1 + rank ( C 2 ) rank ( A 2 ) , n r 2 + rank ( D 2 ) rank ( B 2 ) } .
The corresponding general expression of X is
X = U 1 X 1 O O X 2 V ,
where
X i = ( R S i L A i ) ( A i C i + L A i D i B i ) ( R B i L T i ) K i , i = 1 , 2 ,
and K i is chosen such that rank ( R S i L A i K i R B i L T i ) = min { rank ( R S i L A i ) , rank ( R B i L T i ) } .
( b ) The minimal rank of X Ω is
min X Ω rank ( X ) = rank ( C 1 ) + rank ( C 2 ) + rank ( D 1 ) + rank ( D 2 ) rank ( C 1 B 1 ) rank ( C 2 B 2 ) .
The corresponding general expression of X is
X = U 1 X 1 O O X 2 V ,
where
X i = ( R S i L A i ) ( A i C i + L A i D i B i ) ( R B i L T i ) + W i + ( R S i L A i ) L A i W i R B i ( R B i L T i ) ,
for i = 1 , 2 and W i is an arbitrary quaternion matrix with appropriate sizes.
( c ) The maximal rank of the ( P , Q ) -skewsymmetric solution of (1) is
max X Ω ^ rank ( X ) = min { m r 1 + rank ( C 1 ) rank ( A 1 ) , n r 2 + rank ( D 1 ) rank ( B 1 ) } + min { r 1 + rank ( C 2 ) rank ( A 2 ) , r 2 + rank ( D 2 ) rank ( B 2 ) } .
The general expression of X attaining the maximal rank can be expressed as
X = U 1 O X 1 X 2 O V ,
where
X i = ( R S i L A i ) ( A i C i + L A i D i B i ) ( R B i L T i ) K i , i = 1 , 2 ,
and K i is chosen such that rank ( R S i L A i K i R B i L T i ) = min { rank ( R S i L A i ) , rank ( R B i L T i ) } .
( d ) The minimal rank of the ( P , Q ) -skewsymmetric solution of (1) is
min X Ω ^ rank ( X ) = rank ( C 1 ) + rank ( C 2 ) + rank ( D 1 ) + rank ( D 2 ) rank ( A 1 D 1 ) rank ( A 2 D 2 ) ,
or
min X Ω ^ rank ( X ) = rank ( C 1 ) + rank ( C 2 ) + rank ( D 1 ) + rank ( D 2 ) rank ( C 1 B 1 ) rank ( C 2 B 2 ) .
The general expression of X attaining the minimal rank can be expressed as
X = U 1 O X 1 X 2 O V ,
where
X i = ( R S i L A i ) ( A i C i + L A i D i B i ) ( R B i L T i ) + K i + ( R S i L A i ) L A i K i R B i ( R B i L T i )
for i = 1 , 2 , and K i is an arbitrary quaternion matrix with appropriate size.
At the end of this section, we introduce the reducible solution of the system (1) over quaternions.
A matrix A H n × n is called reducible if there exists a permutation matrix K such that
A = K A 1 A 2 O A 3 K 1 ,
where A 1 and A 3 are square matrices of order at least 1 over H . Moreover, if the order of A 3 is k ( 1 ≤ k < n ), we say that A is k-reducible with respect to the permutation matrix K.
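Since the reducibility structure is purely combinatorial, it can be illustrated with real matrices standing in for quaternion entries. The following sketch (our own illustration) builds a k-reducible matrix and recovers its block form:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 5, 2

# Core block upper-triangular form: A1 (order n-k), A3 (order k),
# and a zero block in the bottom-left corner.
A1 = rng.standard_normal((n - k, n - k))
A2 = rng.standard_normal((n - k, k))
A3 = rng.standard_normal((k, k))
core = np.block([[A1, A2],
                 [np.zeros((k, n - k)), A3]])

# Conjugating by a permutation matrix K hides the structure: A = K core K^{-1}.
K = np.eye(n)[rng.permutation(n)]
A = K @ core @ K.T            # K^{-1} = K^T for a permutation matrix

# A is k-reducible w.r.t. K: K^{-1} A K recovers the zero bottom-left block.
recovered = K.T @ A @ K
assert np.allclose(recovered, core)
assert np.allclose(recovered[n - k:, :n - k], 0)
```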
Theorem 69
(Reducible solutions for (1) over H . [77]). Let A , C H p × n , B , D H n × q be known, X H n × n unknown, K H n × n be a permutation matrix, 1 k < n . Denote
A K = [ A 1 A 3 ] , C K = [ C 1 C 2 ] , K 1 B = B 1 B 2 , K 1 D = C 3 C 4 ,
where A 1 , C 1 ∈ H^{p×(n−k)} , A 3 , C 2 ∈ H^{p×k} , B 1 , C 3 ∈ H^{(n−k)×q} , B 2 , C 4 ∈ H^{k×q} . Define E 2 , F 1 , M , N , P , Q , E , F , G , S , and T as
E 2 = R A 3 A 1 , F 1 = B 2 L B 1 , M = A 1 N , N = A 1 L E 2 , P = R F 1 B 2 , Q = P B 2 , E = A 1 A 1 C 3 A 1 C 1 B 1 A 1 A 1 E 2 C 2 B 2 A 1 N C 3 F 1 B 2 , F = C 2 B 2 B 2 A 3 C 4 B 2 A 1 E 2 C 2 B 2 B 2 N C 3 F 1 B 2 B 2 , G = N F Q M E P , S = L M + L N , T = R P + R Q .
Then, the system (1) has a k-reducible solution X H n × n with the permutation matrix K if and only if one of the following two statements holds.
( a )
R A 1 C 1 = O , C 4 L B 2 = O , R ( A 1 A 1 + A 3 A 3 ) C 2 = O , C 3 L ( B 1 B 1 + B 2 B 2 ) = O , R A 3 ( A 1 C 3 C 2 B 2 ) L B 1 = O , A 1 C 3 C 2 B 2 C 1 B 1 + A 3 C 4 = O , R A 3 ( A 1 C 3 C 2 B 2 C 1 B 1 ) = O , ( C 1 B 1 A 1 C 3 ) L B 2 = O , ( A 1 C 3 C 2 B 2 + A 3 C 4 ) L B 1 = O , R A 1 ( A 3 C 4 C 2 B 2 ) = O ,
( b )
A 1 C 3 C 2 B 2 C 1 B 1 + A 3 C 4 = O , rank [ A 1 , C 1 ] = rank ( A 1 ) , rank [ A 1 , A 3 , C 2 ] = rank [ A 1 , A 3 ] , rank B 2 C 4 = rank ( B 2 ) , rank B 1 B 2 C 3 = rank B 1 B 2 , rank A 1 C 3 C 2 B 2 A 3 B 1 O = rank ( A 3 ) + rank ( B 1 ) , rank [ A 3 , A 1 C 3 C 2 B 2 C 1 B 1 ] = rank ( A 3 ) , rank [ A 1 , A 3 C 4 C 2 B 2 ] = rank ( A 1 ) , rank C 1 B 1 A 1 C 3 B 2 = rank ( B 2 ) , rank B 1 A 1 C 3 C 2 B 2 + A 3 C 4 = rank ( B 1 ) .
In that case, the k-reducible solution X of system (1) with respect to K can be expressed as
X = K X 1 X 2 O X 3 K 1 ,
where
X 1 = A 1 C 1 + L A 1 ( C 3 X 2 B 2 ) B 1 + L A 1 V 1 R B 1 , X 2 = E 2 C 2 + L E 2 C 3 F 1 + L E 2 D R F 1 L E 2 L M U ^ 2 R Q R F 1 + L E 2 L N U ^ 3 R P R F 1 + L E 2 L M ( I S ) L M V ^ 1 R F 1 + L E 2 L M S L N V ^ 2 R F 1 + L E 2 W ^ 1 R P ( I T ) R P R F 1 + L E 2 W ^ 2 R Q T R P R F 1 , X 3 = C 4 B 2 + A 3 ( C 2 A 1 X 2 ) R B 2 + L A 3 U 2 R B 2 .
with U 1 , U 2 , U ^ 2 , U ^ 3 , V ^ 1 , V ^ 2 , W ^ 1 , and W ^ 2 being arbitrary matrices over H with appropriate sizes.
Remark 19.
Due to the definition of reducible matrices, considering the reducible solution of the system (1) is actually equivalent to considering the general solution of a more complex system
A 1 X 1 = C 1 , A X 1 B 1 + X 2 B 2 = C 3 , A 1 X 2 + A 3 X 3 B = C 2 , X 3 B 3 = C 4 .
The proof process of Theorem 69 follows this approach as well.
Remark 20.
In fact, solving the reducible solution over quaternions is closely related in form to solving the general solution over dual quaternions.

6.2. The System (1) over Split Quaternions

The real quaternions form a noncommutative division algebra. In 1849, Cockle introduced the split quaternions:
H s = { q = q 0 + q 1 i + q 2 j + q 3 k : q 0 , q 1 , q 2 , q 3 R } ,
where
i^2 = − 1 , j^2 = k^2 = 1 , i j = − j i = k , j k = − k j = − i , k i = − i k = j .
Split quaternions possess zero divisors, which gives H s a more complex algebraic structure than H . Solving split quaternion matrix equations mainly relies on real and complex representations, with the real representation having better structure-preserving properties and performing better in numerical examples.
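The multiplication rules above, and the existence of zero divisors, are easy to verify directly. Below is a minimal pure-Python sketch (our own illustration; the 4-tuple component convention (w, x, y, z) for w + x i + y j + z k is ours):

```python
def smul(p, q):
    """Product of split quaternions stored as (w, x, y, z) = w + x i + y j + z k,
    using i^2 = -1, j^2 = k^2 = 1, ij = -ji = k, jk = -kj = -i, ki = -ik = j."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return (pw*qw - px*qx + py*qy + pz*qz,
            pw*qx + px*qw - py*qz + pz*qy,
            pw*qy - px*qz + py*qw + pz*qx,
            pw*qz + px*qy - py*qx + pz*qw)

i, j, k = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)
assert smul(i, i) == (-1, 0, 0, 0) and smul(j, j) == (1, 0, 0, 0)
assert smul(i, j) == k and smul(j, i) == (0, 0, 0, -1)     # ij = k, ji = -k
assert smul(j, k) == (0, -1, 0, 0)                          # jk = -i

# Zero divisors: (1 + j)(1 - j) = 1 - j^2 = 0, so H_s is not a division algebra.
assert smul((1, 0, 1, 0), (1, 0, -1, 0)) == (0, 0, 0, 0)
```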
Si et al. designed several real representations of the split quaternion matrix to establish sufficient and necessary conditions for the existence of the general, η -(anti-)conjugate, and η -(anti-)Hermitian solutions. Further, they derived expressions of the corresponding solutions when the system is solvable [86].
For any matrix A H s m × n , it can be represented uniquely as A = A 1 + A 2 i + A 3 j + A 4 k , where A 1 , A 2 , A 3 , A 4 R m × n . The three corresponding η -conjugates ( η { i , j , k } ) are defined as
A^i = i^{−1} A i = A_1 + A_2 i − A_3 j − A_4 k , A^j = j^{−1} A j = A_1 − A_2 i + A_3 j − A_4 k , A^k = k^{−1} A k = A_1 − A_2 i − A_3 j + A_4 k .
Let A^* = A_1^T − A_2^T i − A_3^T j − A_4^T k be the usual conjugate transpose of A. Then, the three other η-conjugate transposes ( η ∈ { i , j , k } ) of A are defined as follows:
A^{i*} = i^{−1} A^* i = A_1^T − A_2^T i + A_3^T j + A_4^T k , A^{j*} = j^{−1} A^* j = A_1^T + A_2^T i − A_3^T j + A_4^T k , A^{k*} = k^{−1} A^* k = A_1^T + A_2^T i + A_3^T j − A_4^T k .
For A ∈ H_s^{n×n} and η ∈ { i , j , k } , A is called η-(anti-)Hermitian if A^{η*} = ( − ) A [87].
Given A H s m × n , A = A 1 + A 2 i + A 3 j + A 4 k , A 1 , A 2 , A 3 , A 4 R m × n , we define the following four real representations of A:
( a )
A σ = A 1 A 2 A 3 A 4 A 2 A 1 A 4 A 3 A 3 A 4 A 1 A 2 A 4 A 3 A 2 A 1 ,
( b )
A σ i = U m A σ = A 1 A 2 A 3 A 4 A 2 A 1 A 4 A 3 A 3 A 4 A 1 A 2 A 4 A 3 A 2 A 1 , U m = I m O O O O I m O O O O I m O O O O I m ,
( c )
A σ j = V m A σ = A 3 A 4 A 1 A 2 A 4 A 3 A 2 A 1 A 1 A 2 A 3 A 4 A 2 A 1 A 4 A 3 , V m = O O I m O O O O I m I m O O O O I m O O ,
( d )
A σ k = W m A σ = A 1 A 2 A 3 A 4 A 2 A 1 A 4 A 3 A 3 A 4 A 1 A 2 A 4 A 3 A 2 A 1 , W m = I m O O O O I m O O O O I m O O O O I m .
Using the real representations above, reference [86] presents the following results.
Theorem 70
(General solutions for (1) over H s . [86]). Consider matrices A H s p × m , C H s p × n , B H s n × q , and D H s m × q . Then the following two statements are each equivalent to the system (1) having a solution X H s m × n :
( a ) The system of real matrix equations
( A σ Y , Y B σ ) = ( C σ , D σ )
has a solution Y R 4 m × 4 n .
( b ) C σ B σ = A σ D σ , C σ = A σ ( A σ ) C σ , D σ = D σ ( B σ ) B σ .
If the system (28) is consistent, then
X = 1 8 ( I n I n i I n j I n k ) ( Y + P n Y P r T + Q n Y Q r T + R n Y R r T ) I r I r i I r j I r k ,
where
Y = ( A σ ) C σ + L A σ D σ ( B σ ) + L A σ F R B σ ,
for arbitrary F R 4 m × 4 n .
Theorem 71
( η -conjugate solutions for (1) over H s . [86]). Let A H s p × m , C H s p × n , B H s n × q , and D H s m × q . Then, the following statements are equivalent:
( a ) The system of split quaternion matrix equations (1) has a solution X = ( − ) X^η ∈ H_s^{m×n} , η ∈ { i , j , k } .
( b ) The system of real matrix equations (28) has a generalized (anti-)reflexive solution Y ∈ R^{4m×4n} with respect to ( U_n , U_r ) when η = i , ( V_n , V_r ) when η = j , or ( W_n , W_r ) when η = k .
( c )
C 1 σ B 1 σ = A 1 σ D 1 σ , C 1 σ = A 1 σ ( A 1 σ ) C 1 σ , D 1 σ = D 1 σ ( B 1 σ ) B 1 σ , C 2 σ B 2 σ = A 2 σ D 2 σ , C 2 σ = A 2 σ ( A 2 σ ) C 2 σ , D 2 σ = D 2 σ ( B 2 σ ) B 2 σ
or C 2 σ B 2 σ = A 1 σ D 1 σ , C 1 σ = A 2 σ ( A 2 σ ) C 1 σ , D 1 σ = D 1 σ ( B 2 σ ) B 2 σ , C 1 σ B 1 σ = A 2 σ D 2 σ , C 2 σ = A 1 σ ( A 1 σ ) C 2 σ , D 2 σ = D 2 σ ( B 1 σ ) B 1 σ
hold, where
A σ = A 1 σ + A 2 σ , U n ( A 1 σ ) = ( A 1 σ ) , U n ( A 2 σ ) = ( A 2 σ ) , ( A 1 σ ( A 2 σ ) = O , C σ = C 1 σ + C 2 σ , U r ( C 1 σ ) = ( C 1 σ ) , U r ( C 2 σ ) = ( C 2 σ ) , C 1 σ ( C 2 σ ) = O , B σ = B 1 σ + B 2 σ , U r B 1 σ = B 1 σ , U r B 2 σ = B 2 σ , ( B 2 σ ) B 1 σ = O , D σ = D 1 σ + D 2 σ , U n D 1 σ = D 1 σ , U n D 2 σ = D 2 σ , ( D 2 σ ) D 1 σ = O .
when η = i ;
A σ = A 1 σ + A 2 σ , V n ( A 1 σ ) = ( A 1 σ ) , V n ( A 2 σ ) = ( A 2 σ ) , A 1 σ ( A 2 σ ) = O , C σ = C 1 σ + C 2 σ , V r ( C 1 σ ) = ( C 1 σ ) , V r ( C 2 σ ) = ( C 2 σ ) , C 1 σ ( C 2 σ ) = O , B σ = B 1 σ + B 2 σ , V r B 1 σ = B 1 σ , V r B 2 σ = B 2 σ , ( B 2 σ ) B 1 σ = O , D σ = D 1 σ + D 2 σ , V n ( D 1 σ ) = D 1 σ , V n D 2 σ = D 2 σ , ( D 2 σ ) D 1 σ = O .
when η = j ;
A σ = A 1 σ + A 2 σ , W n ( A 1 σ ) = ( A 1 σ ) , W n ( A 2 σ ) = ( A 2 σ ) , A 1 σ ( A 2 σ ) = O , C σ = C 1 σ + C 2 σ , W r ( C 1 σ ) = ( C 1 σ ) , W r ( C 2 σ ) = ( C 2 σ ) , C 1 σ ( C 2 σ ) = O , B σ = B 1 σ + B 2 σ , W r B 1 σ = B 1 σ , W r B 2 σ = B 2 σ , ( B 2 σ ) B 1 σ = O , D σ = D 1 σ + D 2 σ , W n ( D 1 σ ) = D 1 σ , W n D 2 σ = D 2 σ , ( D 2 σ ) D 1 σ = O .
when η = k .
If the system (1) is consistent, then
X = 1 8 ( I n , I n i , I n j , I n k ) ( Y + P n Y P r T + Q n Y Q r T + R n Y R r T ) I r I r i I r j I r k ,
where Y = Y 0 + E F G ,
Y 0 = ( A 1 σ ) ( C 1 ) σ + L A 1 σ D 1 σ ( B 1 σ ) + ( A 2 σ ) C 2 σ + L A 2 σ D 2 σ ( B 2 σ ) , ( or Y 0 = ( A 1 σ ) ( C 2 ) σ + L A 2 σ D 2 σ ( B 1 σ ) + ( A 2 σ ) C 1 σ + L A 1 σ D 1 σ ( B 1 σ ) , ) E = I 4 n ( A 1 σ ) A 1 σ ( A 2 σ ) A 2 σ , G = I 4 r B 1 σ ( B 1 σ ) B 2 σ ( B 2 σ ) .
where F ∈ R^{4m×4n} is an arbitrary generalized (anti-)reflexive matrix with respect to ( U_n , U_r ) when η = i , ( V_n , V_r ) when η = j , or ( W_n , W_r ) when η = k .
Theorem 72
( η -Hermitian solutions for (1) over H s . [86]). If A , C ∈ H_s^{p×n} and B , D ∈ H_s^{n×q} , the following statements are equivalent:
( a ) The system of split quaternion matrix equations (1) has a solution X = ( − ) X^{η*} ∈ H_s^{n×n} , η ∈ { i , j , k } .
( b ) The system of real matrix equation
( A σ Y , Y B σ ) = ( C σ , D σ ) , η = i , ( A σ k W n Y , Y W n B σ k ) = ( C σ k , D σ k ) , η = j , ( A σ j V n Y , Y V n B σ j ) = ( C σ j , D σ j ) , η = k ,
has a (skew-)symmetric solution Y R 4 n × 4 n .
( c )
C σ B σ = A σ D σ , C σ = A σ ( A σ ) C σ , D σ = D σ ( B σ ) B σ , η = i , C σ k W n B σ k = A σ k W n D σ k , C σ k = A σ k W n ( A σ k W n ) C σ k , D σ k = D σ k ( W n B σ k ) W n B σ k , η = j , C σ j V n B σ j = A σ j V n D σ j , C σ j = A σ j V n ( A σ j V n ) C σ j , D σ j = D σ j ( V n B σ j ) V n B σ j , η = k ,
and M η is a symmetric matrix, where
M η = C σ ( A σ ) T C σ B σ ( D σ ) T ( A σ ) T ( D σ ) T B σ , η = i , C σ k ( A σ k W n ) T C σ k W n B σ k ( D σ k ) T ( A σ k W n ) T ( D σ k ) T W n B σ k , η = j , C σ j ( A σ j V n ) T C σ j V n B σ j ( D σ j ) T ( A σ j V n ) T ( D σ j ) T V n B σ j , η = k .
In this case, the general η-(anti-)Hermitian solution to the system (1) can be expressed as
X = 1 8 ( I n , I n i , I n j , I n k ) ( Y + P n Y P n T + Q n Y Q n T + R n Y R n T ) I n I n i I n j I n k ,
where
Y = E η F η ± F η T ( E η ) T E η M η ( E η ) T + L E η U ( L E η ) T ,
and U = ± U T R n × n is an arbitrary matrix,
E η = A σ ( B σ ) T , η = i , A σ k W n ( W n B σ k ) T , η = j , A σ j V n ( V n B σ j ) T , η = k , F η = C σ ( D σ ) T , η = i , C σ k ( D σ k ) T , η = j , C σ j ( D σ j ) T , η = k .
Remark 21.
More complex linear matrix equations, and even tensor equations, can be solved by means of complex representations or semi-tensor products of split quaternions, as detailed in references [88,89,90,91].

6.3. The System (1) over Dual Quaternions

The collection of dual quaternions is expressed as
DQ = { c = c 0 + c 1 ϵ : c 0 , c 1 H , ϵ 2 = 0 } ,
where c 0 and c 1 represent the standard part and the infinitesimal part of c, respectively [92]. We denote DQ m × n as the set of all m × n matrices over DQ .
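The defining relation ε² = 0 (with ε ≠ 0) means that the infinitesimal parts combine by the product rule. A minimal sketch (our own illustration; the (standard, infinitesimal) pair representation and component convention are ours):

```python
import numpy as np

def qmul(p, q):
    """Hamilton product of real quaternions stored as arrays (w, x, y, z)."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return np.array([pw*qw - px*qx - py*qy - pz*qz,
                     pw*qx + px*qw + py*qz - pz*qy,
                     pw*qy - px*qz + py*qw + pz*qx,
                     pw*qz + px*qy - py*qx + pz*qw])

def dqmul(a, b):
    """Dual quaternion product: (a0 + a1 e)(b0 + b1 e) = a0 b0 + (a0 b1 + a1 b0) e,
    since e^2 = 0 kills the a1 b1 term."""
    a0, a1 = a
    b0, b1 = b
    return (qmul(a0, b0), qmul(a0, b1) + qmul(a1, b0))

zero = np.zeros(4)
one = np.array([1.0, 0, 0, 0])
eps = (zero, one)            # the dual unit: standard part 0, infinitesimal part 1

# eps is nonzero, yet eps^2 = 0:
s, t = dqmul(eps, eps)
assert np.allclose(s, 0) and np.allclose(t, 0)
```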
For the general solution of the system (1) over dual quaternions, Xie et al. recently presented the following theorem.
Theorem 73
(General solutions for (1) over DQ . [80]). Let A = A 0 + A 1 ϵ D Q p × m , B = B 0 + B 1 ϵ D Q n × q , C = C 0 + C 1 ϵ D Q p × n and D = D 0 + D 1 ϵ D Q m × q be given. Set
C 11 = C 1 A 1 ( A 0 C 0 + L A 0 D 0 B 0 ) , D 11 = D 1 ( A 0 C 0 + L A 0 D 0 B 0 ) B 1 , A 11 = A 1 L A 0 , B 11 = R B 0 B 1 , A 2 = R A 0 A 11 , C 2 = R B 0 , B 2 = R A 0 C 11 , A 3 = L A 0 , C 3 = B 11 L B 0 , B 3 = D 11 L B 0 , A 00 = A 3 L A 2 , C 00 = R C 2 C 3 , B 00 = B 3 A 3 A 2 B 2 C 2 C 3 , D 00 = R A 00 A 3 , Φ = A 2 B 2 C 2 + L A 2 A 00 B 00 C 3 L A 2 A 0 A 3 D 00 R A 00 B 00 C 3 + D 00 R A 00 B 00 C 0 R C 2 .
Then, the system (1) is consistent if and only if
R A 0 C 0 = 0 , D 0 L B 0 = 0 , R A 2 B 2 = 0 , B 2 L C 2 = 0 , R A 3 B 3 = 0 , B 3 L C 3 = 0 , R A 00 B 00 L C 00 = 0 ,
or equivalently,
rank A 0 , C 0 = rank ( A 0 ) , rank B 0 D 0 = rank ( B 0 ) , rank A 0 C 1 A 1 0 C 0 A 0 = rank A 0 A 1 0 A 0 , rank A 0 , A 1 D 0 C 1 B 0 = rank ( A 0 ) , rank B 0 C 0 B 1 A 0 D 1 = rank ( B 0 ) , rank D 1 D 0 B 1 B 0 = rank B 0 O B 1 B 0 , rank C 1 B 1 A 1 D 1 A 0 C 1 B 0 A 1 D 0 B 0 O O C 0 B 1 A 0 D 1 O O = rank ( A 0 ) + rank ( B 0 ) .
and the equations A 0 D 0 = C 0 B 0 and A 0 D 1 − C 0 B 1 = C 1 B 0 − A 1 D 0 hold. In such circumstances, the general solution of the system (1) can be expressed as X = X 0 + X 1 ϵ , where
X 0 = A 0 C 0 + L A 0 D 0 B 0 + L A 0 U R B 0 , X 1 = A 1 ( C 11 A 11 U R B 0 ) + L A 1 ( D 11 L A 0 U B 11 ) B 0 + L A 0 U 1 R B 0 , U = Φ + L A 2 L A 00 + U 3 R C 00 R C 2 + L A 4 U 5 R C 2 + L A 3 U 2 R C 2 ,
and U i ( i = 1 , … , 5 ) are arbitrary matrices of appropriate sizes.

6.4. The System (1) over Dual Split Quaternions

Yang et al. studied the system (1) over the dual split quaternion tensor and provided the general solution as well as the existence conditions and expressions for the η -Hermitian solution [81].
For a multidimensional array (tensor) A = ( a_{i_1 ⋯ i_M} ) with 1 ≤ i_k ≤ I_k ( k = 1 , … , M ) and I_1 × ⋯ × I_M entries, the generalized inverse of A can also be extended from the generalized inverse of a matrix [93], denoted by A^† . Let H_s^{I_1×⋯×I_M} represent the set of order M tensors with I_1 × ⋯ × I_M dimensions over the split quaternion algebra H_s . The identity tensor I = ( d_{t_1 ⋯ t_M t_1 ⋯ t_M} ) ∈ H_s^{T_1×⋯×T_M×T_1×⋯×T_M} has all zero entries, except for the elements d_{t_1 ⋯ t_M t_1 ⋯ t_M} = 1 . O denotes the zero tensor, whose elements are all zero. Define L_A = I − A^† A and R_A = I − A A^† .
The sets of dual split quaternion and dual split quaternion tensors are represented as follows [94]:
D Q s = { q = q 0 + q 1 ϵ : q 0 , q 1 H s , ϵ 0 , ϵ 2 = 0 } , D Q s I 1 × × I M × J 1 × × J N = { Q = Q 0 + Q 1 ϵ : Q 0 , Q 1 H s I 1 × × I M × J 1 × × J N , ϵ 0 , ϵ 2 = 0 } .
Let X ∈ DQ_s^{I_1×⋯×I_M×L_1×⋯×L_N} and Y ∈ DQ_s^{L_1×⋯×L_N×K_1×⋯×K_S} . Then, the Einstein product of the tensors X and Y through the operation ∗_N is defined as
( X ∗_N Y )_{i_1 ⋯ i_M k_1 ⋯ k_S} = ∑_{l_1 ⋯ l_N} x_{i_1 ⋯ i_M l_1 ⋯ l_N} y_{l_1 ⋯ l_N k_1 ⋯ k_S} ∈ DQ_s^{I_1×⋯×I_M×K_1×⋯×K_S} .
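For real-valued tensors, the Einstein product is a plain index contraction, so it can be sketched with `np.einsum` (our own illustration; real tensors stand in for the quaternion components, and all shapes are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
# The Einstein product *_N contracts the last N indices of X with the
# first N indices of Y. Here M = N = 2 and S = 1.
X = rng.standard_normal((2, 3, 4, 5))   # I1 x I2 x L1 x L2
Y = rng.standard_normal((4, 5, 6))      # L1 x L2 x K1

# (X *_2 Y)_{a b c} = sum_{l n} x_{a b l n} y_{l n c}
Z = np.einsum('abln,lnc->abc', X, Y)
assert Z.shape == (2, 3, 6)

# Matricization check: reshaping turns *_2 into an ordinary matrix product.
Zm = X.reshape(6, 20) @ Y.reshape(20, 6)
assert np.allclose(Z, Zm.reshape(2, 3, 6))
```

The matricization check is why many tensor results in this setting reduce to matrix arguments: *_N behaves exactly like matrix multiplication after grouping indices.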
The real representations of the split quaternion tensor are of the same form as (27), with the only difference being the use of tensor notation.
For the system
A ∗_N X = C , X ∗_S B = D ,
the following conclusions hold.
Theorem 74
(General solutions for (29) over DQ s . [81]). Suppose that A = A 00 + A 01 ϵ DQ s I 1 × × I M × I 1 × × I N , B = B 00 + B 01 ϵ DQ s K 1 × × K S × L 1 × × L T , C = C 00 + C 01 ϵ DQ s I 1 × × I M × K 1 × × K S , and D = D 00 + D 01 ϵ DQ s I 1 × × I N × L 1 × × L T . Denote
A 0 = A 00 σ , A 1 = A 01 σ , B 0 = B 00 σ , B 1 = B 01 σ , C 0 = C 00 σ , C 1 = C 01 σ , D 0 = D 00 σ , D 1 = D 01 σ , A 11 = A 1 N L A 0 , B 11 = R B 0 S B 1 , C 11 = C 1 A 1 N ( A 0 M B 0 + L A 0 N D 0 T C 0 ) , D 11 = D 1 ( A 0 M C 0 + L A 0 N D 0 T B 0 ) S B 1 , A 2 = R A 0 M A 11 , C 2 = R C 0 , B 2 = R A 0 M C 11 , A 3 = L A 0 , C 3 = B 11 T L B 0 , B 3 = D 11 T L B 0 , A 4 = A 3 N L A 2 , C 4 = R C 2 S C 3 , B 4 = B 3 A 3 N A 2 M B 2 S C 2 S C 3 , D 4 = R A 4 N A 3 , F = L A 2 N A 4 N B 4 T C 3 L A 2 N A 4 N A 3 N D 4 N R A 4 N B 4 T C 3 + D 4 N R A 4 N B 4 T C 4 S R C 2 + A 2 M B 2 S C 2 .
Then, the following descriptions are equivalent:
( a ) The system of dual split quaternion tensor equation (29) is solvable.
( b ) The system of tensor equations:
A 0 N X 0 = C 0 , X 0 S B 0 = D 0 , A 0 N X 1 + A 1 N X 0 = C 1 , X 0 S B 1 + X 1 S B 0 = D 1
is consistent.
( c )
R A 0 M C 0 = O , D 0 T L B 0 = O , A 0 N D 0 = C 0 S B 0 , A 0 N D 1 C 0 S B 1 = C 1 S B 0 A 1 N D 0 , R A 2 M B 2 = 0 , B 2 S L B 2 = O , R A 3 N B 3 = 0 , B 3 T L C 3 = O , R A 4 N B 4 T L B 4 = O .
Based on these circumstances, the general solution of the system (29) can be represented as X = X 00 + X 01 ϵ , where
X 00 = 1 8 I N , i I N , j I N , k I N N X 0 + R N N X 0 S R S T + S N N X 0 S S S T + T N N X 0 S T S T S I S i I S j I s k I S , X 01 = 1 8 I N , i I N , j I N , k I N N X 1 + R N N X 1 s R S T + S N N X 1 S S S T + T N N X 1 S T S T S I S i I S j I S k I S , X 0 = A 0 M B 0 + L A 0 N D 0 T C 0 + L A 0 N W s R C 0 , X 1 = A 0 M B 11 A 11 N W s R C 0 + L A 0 N D 11 L A 0 N W S C 11 T C 0 + L A 0 N W 1 S R C 0 , W = F + L A 2 N L A 4 N W 2 + W 3 5 R B 4 S R B 2 + L A 2 N W 4 S R B 3 + L A 3 N W 5 S R B 2 ,
where W i ( i = 1 , … , 5 ) are arbitrary tensors of appropriate sizes.
Remark 22.
The (η-)Hermitian solutions for the system (29) can be derived by selecting different types of real representations of split quaternion tensors [81].
This section has presented the general solution of the system (1) over the quaternions, including a determinantal expression for the general solution, as well as bisymmetric, centrosymmetric, symmetric and skew-antisymmetric, ( P , Q ) -(skew-)symmetric, extreme rank ( P , Q ) -(skew-)symmetric, and reducible solutions. The general solution over split quaternions, together with the η-(anti-)conjugate and η-(anti-)Hermitian solutions, has also been discussed, as have the general solutions over dual quaternion matrices and dual split quaternion tensors. The existence conditions and corresponding expressions for these solutions have been provided.

7. Applications

The system (1) has broad applications across various fields. This section focuses on its use in encrypting and decrypting color images and videos.
In image processing, the system (1) can be applied to various tasks, such as image transformation, filtering, and reconstruction. Matrix equations are used to model the transformation or processing of an image, where A and B represent certain image transformations, X is the unknown matrix to be solved, and C and D represent the image before and after processing, or certain features of the image.
In color images, a pure imaginary quaternion can represent the three color channels (red, green, and blue) through its i, j, and k components, thus effectively representing a pixel. By utilizing quaternion matrices or dual quaternion matrices, which can carry even more information, color images can be processed in a highly efficient manner, with multiple color channels handled simultaneously.
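As a concrete sketch of this encoding (our own illustration; the array shapes and component layout are hypothetical), a pixel's RGB triple can be stored as the imaginary part of a quaternion-component array:

```python
import numpy as np

rng = np.random.default_rng(0)
# A tiny 4 x 4 RGB "image" (values in [0, 255]); each pixel (r, g, b) is
# encoded as the pure imaginary quaternion r i + g j + b k.
img = rng.integers(0, 256, size=(4, 4, 3)).astype(float)

# Store one real array per quaternion component: real part 0, then i, j, k.
Q = np.concatenate([np.zeros((4, 4, 1)), img], axis=-1)

# All three channels travel together in one quaternion-valued array, and the
# per-pixel modulus sqrt(r^2 + g^2 + b^2) gives a simple intensity map.
intensity = np.linalg.norm(Q, axis=-1)
assert np.allclose(Q[..., 1:], img)                        # channels recoverable
assert np.allclose(intensity, np.linalg.norm(img, axis=-1))
```

Operating on Q with quaternion matrix equations of the form (1) then transforms all three channels at once, which is the basis of the encryption examples below.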
We present two examples of using the system (1). The first example involves using dual quaternions for encrypting and decrypting images, as shown in Figure 2 [80]. The original, encrypted, and decrypted images are displayed in Figure 3.
The other example demonstrates the application of the dual split quaternion tensor equations in color video processing [81]. The basic framework is the same as in image processing, but the tensor, as a high-dimensional matrix, can directly represent video. The results for several frames are shown here, as illustrated in Figure 4.
These two examples demonstrate that by using the solution method of the system (1), color images and videos can be effectively encrypted and decrypted, ensuring the security and accuracy of the communication process.

8. Conclusions

This paper presents a comprehensive review of the system (1), emphasizing its essential role in a wide range of applications. The discussion includes generalized inverse methods for obtaining both general and specialized solutions, such as Hermitian solutions, non-negative definite solutions, and maximal and minimal rank solutions. The theory is further extended to more advanced algebraic structures, including Hilbert spaces, Hilbert C*-modules, and general rings, where specialized solving techniques can be applied. Matrix decomposition methods, such as eigenvalue decomposition, singular value decomposition, and generalized singular value decomposition, are explored for their effectiveness in solving such linear matrix equation systems. Additionally, the paper addresses solutions within specialized algebraic structures such as dual numbers and various quaternion algebras. Finally, examples of applications of the system (1) in color image and video processing are presented. This review aims to summarize comprehensively the research on various solutions to the system (1) across different algebraic structures. The differing research perspectives and the vast amount of literature may have resulted in some references being overlooked; nonetheless, this does not detract from the primary value of the survey.
Future research may focus on the computational challenges associated with large-scale matrix systems, since generalized inverses and matrix decomposition techniques can be computationally intensive; finding efficient numerical solutions to the system (1) is therefore an important research direction. Inspired by [95], leveraging neural networks and related methods to compute these solutions could be a promising approach. Moreover, although tensors are widely used in many fields owing to their high-dimensional structure, exploration of the system (1) within the tensor framework has so far been limited, so continuing to study the system (1) in the tensor setting presents an exciting opportunity for future research. Lastly, given the extensive applications of dual quaternions, studying various special solutions of the system (1) over dual quaternions, dual generalized commutative quaternions, and dual split quaternions (such as minimum-norm solutions, Hermitian solutions, and reflexive solutions) is an important direction that warrants future attention. These developments are expected to further promote the in-depth application of the system (1) in areas such as control theory, optimization, image processing, system identification, and robotics.

Author Contributions

Methodology, Q.-W.W. and Z.-H.G.; software, Z.-H.G.; investigation, Q.-W.W., Z.-H.G. and J.-L.G.; writing—original draft preparation, Q.-W.W. and Z.-H.G.; writing—review and editing, Q.-W.W. and Z.-H.G.; supervision, Q.-W.W.; project administration, Q.-W.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research is supported by the National Natural Science Foundation of China (No. 12371023).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors would like to thank the National Natural Science Foundation of China for its support under grant No. 12371023.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Paula, A.; Acioli, G.; Barros, P. Frequency-based multivariable control design with stability margin constraints: A linear matrix inequality approach. J. Process Contr. 2023, 132, 103115. [Google Scholar] [CrossRef]
  2. van der Woude, J.W. Almost non-interacting control by measurement feedback. Syst. Control Lett. 1987, 9, 7–16. [Google Scholar] [CrossRef]
  3. Sanches, J.M.; Marques, J.S. Image denoising using the Lyapunov equation from non-uniform samples. In Image Analysis and Recognition (ICIAR 2006), Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2006; Volume 4141. [Google Scholar] [CrossRef]
  4. Elhami, M.; Dashti, I. A new approach to the solution of robot kinematics based on relative transformation matrices. Int. J. Robot. Autom. 2016, 5, 213–222. [Google Scholar] [CrossRef]
  5. Tokala, S.; Enduri, M.K.; Lakshmi, T.J.; Sharma, H. Community-based matrix factorization (CBMF) approach for enhancing quality of recommendations. Entropy 2023, 25, 1360. [Google Scholar] [CrossRef]
  6. Simoncini, V. Computational methods for linear matrix equations. SIAM Rev. 2016, 58, 3. [Google Scholar] [CrossRef]
  7. Wang, Q.W.; Xie, L.M.; Gao, Z.H. A survey on solving the matrix equation AXB = C with applications. Mathematics 2025, 13, 450. [Google Scholar] [CrossRef]
  8. Zhou, K.; Doyle, J.C.; Glover, K. Robust and Optimal Control; Prentice Hall: Chicago, IL, USA, 1996. [Google Scholar]
  9. Vaidyanathan, P.P. Multirate Systems and Filter Banks; Prentice Hall: Chicago, IL, USA, 1993. [Google Scholar]
  10. Murray, R.M.; Li, Z.; Sastry, S.S. A Mathematical Introduction to Robotic Manipulation; CRC Press: Boca Raton, FL, USA, 1994. [Google Scholar]
  11. Penrose, R. A generalized inverse for matrices. Math. Proc. Camb. Philos. Soc. 1955, 51, 406–413. [Google Scholar] [CrossRef]
  12. Mayne, A.J. Generalized inverse of matrices and its applications. J. Oper. Res. Soc. 1972, 23, 598. [Google Scholar]
  13. Rao, C.R.; Mitra, S.K. Generalized inverse of a matrix and its applications. In Proceedings of the Sixth Berkeley Symposium on Mathematical Statistics and Probability; University of California Press: Oakland, CA, USA, 1971; pp. 601–610. [Google Scholar]
  14. Mitra, S.K. A pair of simultaneous linear matrix equations. Linear Algebra Appl. 1990, 131, 107–123. [Google Scholar] [CrossRef]
  15. Khatri, C.G.; Mitra, S.K. Hermitian and nonnegative definite solutions of linear matrix equations. SIAM J. Appl. Math. 1976, 31, 579–595. [Google Scholar] [CrossRef]
  16. Mitra, S.K. The matrix equations AX = C, XB = D: Common solutions of minimum rank. Linear Algebra Appl. 1984, 59, 171–181. [Google Scholar] [CrossRef]
  17. Peng, Z.Y.; Hu, X.Y. The reflexive and anti-reflexive solutions of the matrix equation AX = B. Linear Algebra Appl. 2003, 375, 147–155. [Google Scholar] [CrossRef]
  18. Chang, H.X. Reflection solutions to a system of matrix equations. Linear Algebra Appl. 2007, 428, 1059–1074. [Google Scholar] [CrossRef]
  19. Li, F.L.; Hu, X.Y.; Zhang, L. The reflexive solutions for a pair of simultaneous linear matrix equations. Linear Algebra Appl. 2007, 420, 246–259. [Google Scholar] [CrossRef]
  20. Qiu, Y.; Zhang, Z.; Lu, J. The matrix equations AX = B, XC = D with PX = sXP constraint. Appl. Math. Comput. 2007, 189, 1428–1434. [Google Scholar] [CrossRef]
  21. Li, F.L.; Hu, X.Y.; Zhang, L. The generalized anti-reflexive solutions for a class of matrix equations (BX = C, XD = E). Comp. Appl. Math. 2008, 27, 31–46. [Google Scholar] [CrossRef]
  22. Li, F.L.; Hu, X.Y.; Zhang, L. The generalized reflexive solutions for a class of matrix equations (AX, XC) = (B, D). Acta Math. 2008, 28, 185–193. [Google Scholar] [CrossRef]
  23. Liu, X.F. The common (P, Q) generalized reflexive and anti-reflexive solutions to AX = B and XC = D. Calcolo 2016, 53, 227–234. [Google Scholar] [CrossRef]
  24. Liu, Y.H. The common least-square solutions to the matrix equations AX = C, XB = D. Math. Appl. 2007, 20, 248–252. [Google Scholar]
  25. Liu, Y.H. Some properties of submatrices in a solution to the matrix equations AX = C, XB = D. Appl. Math. Comput. 2009, 31, 71–80. [Google Scholar] [CrossRef]
  26. Wang, Q.W.; Zhang, X.; He, Z.H. On the Hermitian structures of the solution to a pair of matrix equations. Linear Multilinear Algebra 2013, 61, 73–90. [Google Scholar] [CrossRef]
  27. Yao, Y. The optimization on ranks and inertias of a quadratic Hermitian matrix function and its applications. J. Appl. Math. 2013, 961568. [Google Scholar] [CrossRef]
  28. Yu, J.; Shen, S.Q. Solvability of systems of linear matrix equations subject to a matrix inequality. Linear Multilinear Algebra 2016, 64, 2446–2462. [Google Scholar] [CrossRef]
  29. Xiong, Z.; Qin, Y. The common Re-nnd and Re-pd solutions to the matrix equations AX = C and XB = D. Appl. Math. Comput. 2011, 218, 3330–3337. [Google Scholar] [CrossRef]
  30. Liu, X.F. Comments on “The common Re-nnd and Re-pd solutions to the matrix equations AX = C and XB = D”. Appl. Math. Comput. 2014, 236, 663–668. [Google Scholar] [CrossRef]
  31. Ke, Y.; Ma, C. The generalized bisymmetric (bi-skew-symmetric) solutions of a class of matrix equations and its least squares problem. In Abstract and Applied Analysis; Hindawi Publishing Corporation: London, UK, 2014; p. 239465. [Google Scholar] [CrossRef]
  32. Chen, H.C. Generalized reflexive matrices: Special properties and applications. SIAM J. Matrix Anal. Appl. 1998, 19, 140–153. [Google Scholar] [CrossRef]
  33. Dajić, A.; Koliha, J.J. Positive solutions to the equations AX = C and XB = D for Hilbert space operators. J. Math. Anal. Appl. 2007, 333, 567–576. [Google Scholar] [CrossRef]
  34. Dajić, A.; Koliha, J.J. Equations ax = c and xb = d in rings and rings with involution with applications to Hilbert space operators. Linear Algebra Appl. 2008, 429, 1779–1809. [Google Scholar] [CrossRef]
  35. Xu, Q. Common Hermitian and positive solutions to the adjointable operator equations AX = C, XB = D. Linear Algebra Appl. 2008, 429, 1–11. [Google Scholar] [CrossRef]
  36. Nacevska, B. Generalized reflexive and anti-reflexive solution for a system of equations. Filomat 2016, 30, 55–64. [Google Scholar] [CrossRef]
  37. Radenković, J.N.; Cvetković-Ilić, D.; Xu, Q. Solvability of the system of operator equations AX = C, XB = D in Hilbert C*-modules. Ann. Funct. Anal. 2021, 12, 32. [Google Scholar] [CrossRef]
  38. Zhang, H.; Dou, Y.; Yu, W. Positive solutions of operator equations AX = B, XC = D. Axioms 2023, 12, 818. [Google Scholar] [CrossRef]
  39. Zhang, H.; Dou, Y.; Yu, W. Real positive solutions of operator equations AX = C and XB = D. AIMS Math. 2023, 8, 15214–15231. [Google Scholar] [CrossRef]
  40. Van Loan, C.F. Generalizing the singular value decomposition. SIAM J. Numer. Anal. 1976, 13, 76–83. [Google Scholar] [CrossRef]
  41. Paige, C.C.; Saunders, M.A. Towards a generalized singular value decomposition. SIAM J. Numer. Anal. 1981, 18, 398–405. [Google Scholar] [CrossRef]
  42. Chu, K.W.E. Singular value and generalized singular value decompositions and the solution of linear matrix equations. Linear Algebra Appl. 1987, 88, 83–98. [Google Scholar] [CrossRef]
  43. Li, F.; Liang, L. Least-squares mirrorsymmetric solution for matrix equations. Comput. Math. Appl. 2006, 51, 1009–1017. [Google Scholar]
  44. Yuan, Y.X. Least-squares solutions to the matrix equations AX = B and XC = D. Appl. Math. Comput. 2010, 216, 3120–3125. [Google Scholar] [CrossRef]
  45. Hu, S.; Yuan, Y. Common solutions to the matrix equations AX = B and XC = D on a subspace. J. Optimiz. Theory App. 2023, 198, 372–386. [Google Scholar] [CrossRef]
  46. Wang, Q.W.; Yu, J. Constrained solutions of a system of matrix equations. J. Appl. Math. 2012, 1, 471573. [Google Scholar] [CrossRef]
  47. Qiu, Y.; Wang, A. Least squares solutions to the equations AX = B, XC = D with some constraints. Appl. Math. Comput. 2008, 204, 872–880. [Google Scholar] [CrossRef]
  48. Ketabchi, S.; Samadi, E.; Aminikhah, H. On the optimal correction of inconsistent matrix equations AX = B and XC = D with orthogonal constraint. J. Math. Model. 2015, 2, 132–142. [Google Scholar]
  49. Zhang, H.; Liu, L.; Liu, H.; Yuan, Y. The solution of the matrix equation AXB = D and the system of matrix equations AX = C, XB = D with X*X = Ip. Appl. Math. Comput. 2022, 418, 126789. [Google Scholar] [CrossRef]
  50. Liao, X.; Yuan, Y. Solutions of three classes of constrained matrix inequalities. Comput. Appl. Math. 2024, 43, 217. [Google Scholar] [CrossRef]
  51. Yuan, Y.; Zhang, H.; Liu, L. The Re-nnd and Re-pd solutions to the matrix equations AX = C, XB = D. Linear Multilinear Algebra 2022, 70, 3543–3552. [Google Scholar] [CrossRef]
  52. Zhou, S.; Yang, S.T. The Hermitian reflexive solutions and the anti-Hermitian reflexive solutions for a class of matrix equations (AX = B, XC = D). Energy Procedia 2012, 17, 1591–1597. [Google Scholar] [CrossRef]
  53. Zhou, S.; Yang, S.T.; Wang, W. Least-squares solutions of matrix equations (AX = B, XC = D) for Hermitian reflexive (anti-Hermitian reflexive) matrices and its approximation. J. Math. Res. Expo. 2011, 31, 1108–1116. [Google Scholar] [CrossRef]
  54. Dong, C.Z.; Wang, Q.W. The {P, Q, k + 1}-reflexive solution to system of matrix equations AX = C, XB = D. Math. Probl. Eng. 2015, 1, 464385. [Google Scholar] [CrossRef]
  55. Chang, H.X.; Wang, Q.W.; Song, G.J. (R, S)-conjugate solution to a pair of linear matrix equations. Appl. Math. Comput. 2010, 217, 73–82. [Google Scholar] [CrossRef]
  56. Dong, C.Z.; Wang, Q.W.; Zhang, Y.P. On the Hermitian R-conjugate solution of a system of matrix equations. J. Appl. Math. 2012, 1, 398085. [Google Scholar] [CrossRef]
  57. Zheng, B.; Ye, L.; Cvetkovic-Ilic, D.S. The *congruence class of the solutions of some matrix equations. Comput. Math. Appl. 2009, 57, 540–549. [Google Scholar] [CrossRef]
  58. Zhang, Y.P.; Dong, C.Z. The *congruence class of the solutions to a system of matrix equations. J. Appl. Math. 2014, 1, 703529. [Google Scholar] [CrossRef]
  59. Yu, J.; Wang, Q.W.; Dong, C.Z. Anti-Hermitian generalized anti-Hamiltonian solution to a system of matrix equations. Math. Probl. Eng. 2014, 1, 539215. [Google Scholar] [CrossRef]
  60. Clifford, W.K. Preliminary sketch of bi-quaternions. Proc. Lond. Math. Soc. 1871, s1-4, 381–395. [Google Scholar] [CrossRef]
  61. Wang, X.K.; Han, D.P.; Yu, C.B.; Zheng, Z.Q. The geometric structure of unit dual quaternion with application in kinematic control. J. Math. Anal. Appl. 2012, 389, 1352–1364. [Google Scholar] [CrossRef]
  62. Chen, Y.; Zeng, M.; Fan, R.; Yuan, Y. The solutions of two classes of dual matrix equations. AIMS Math. 2023, 8, 23016–23031. [Google Scholar] [CrossRef]
  63. Fan, R.; Zeng, M.; Yuan, Y. The solutions to some dual matrix equations. Miskolc Math. Notes 2024, 25, 673–684. [Google Scholar] [CrossRef]
  64. Hamilton, W.R. Lectures on Quaternions; Hodges and Smith: Dublin, Ireland, 1853. [Google Scholar]
  65. McCarthy, J.M. Quaternions in kinematics. Mech. Mach. Theory 2025, 209, 105949. [Google Scholar] [CrossRef]
  66. Yang, L.Q.; Miao, J.F.; Jiang, T.X.; Zhang, Y.L.; Kou, K.I. Randomized quaternion tensor UTV decompositions for color image and color video processing. Pattern Recogn. 2025, 14, 111580. [Google Scholar] [CrossRef]
  67. Cockle, J. On systems of algebra involving more than one imaginary and on equations of the fifth degree. Philos. Mag. 1849, 35, 434–437. [Google Scholar] [CrossRef]
  68. Jiang, T.S.; Wang, G.; Guo, G.W.; Zhang, D. Algebraic algorithms for a class of Schrödinger equations in split quaternionic mechanics. Math. Meth. Appl. Sci. 2024, 47, 6205–6215. [Google Scholar] [CrossRef]
  69. Jiang, T.S.; Guo, Z.W.; Zhang, D.; Vasil’ev, V.I. A fast algorithm for the Schrödinger equation in quaternionic quantum mechanics. Appl. Math. Lett. 2024, 150, 108975. [Google Scholar] [CrossRef]
  70. Özkaldi, S.; Gündoǧan, H. Dual split quaternions and screw motion in 3-dimensional Lorentzian space. Adv. Appl. Clifford Algebras 2011, 21, 193–202. [Google Scholar] [CrossRef]
  71. Daniilidis, K. Hand-eye calibration using dual quaternions. Int. J. Robot. Res. 1999, 18, 286–298. [Google Scholar] [CrossRef]
  72. Wang, X.; Yu, C.; Lin, Z. A dual quaternion solution to attitude and position control for rigid body coordination. IEEE Trans. Rob. 2012, 28, 1162–1170. [Google Scholar] [CrossRef]
  73. Kula, L.; Yayli, Y. Dual split quaternions and screw motion in Minkowski 3-space. Iran. J. Sci. Technol. Trans. 2006, 30, 245–258. [Google Scholar] [CrossRef]
  74. Wang, Q.W. The general solution to a system of real quaternion matrix equations. Comput. Math. Appl. 2007, 49, 665–675. [Google Scholar] [CrossRef]
  75. Li, Y.; Tang, Y. Symmetric and skew-antisymmetric solutions to systems of real quaternion matrix equations. Comput. Math. Appl. 2008, 55, 1142–1147. [Google Scholar] [CrossRef]
  76. Zhang, Q.; Wang, Q.W. The (P, Q)-(skew)symmetric extremal rank solutions to a system of quaternion matrix equations. Appl. Math. Comput. 2011, 217, 9286–9296. [Google Scholar] [CrossRef]
  77. Nie, X.R.; Wang, Q.W.; Zhang, Y. A system of matrix equations over the quaternion algebra with applications. Algebra Colloq. 2017, 24, 233–253. [Google Scholar] [CrossRef]
  78. Jiang, T.S.; Zhang, Z.Z.; Jiang, Z.W. Algebraic techniques for Schrödinger equations in split quaternionic mechanics. Comp. Math. Appl. 2018, 75, 2217–2222. [Google Scholar] [CrossRef]
  79. Wang, G.; Jiang, T.S.; Vasil’ev, V.I.; Guo, Z.W. On singular value decomposition for split quaternion matrices and applications in split quaternionic mechanics. J. Comp. Appl. Math. 2024, 436, 115447. [Google Scholar] [CrossRef]
  80. Xie, L.M.; Wang, Q.W. A system of dual quaternion matrix equations with its applications. Filomat 2025, 39, 1477–1490. [Google Scholar] [CrossRef]
  81. Yang, L.; Wang, Q.W.; Kou, Z. A system of tensor equations over the dual split quaternion algebra with an application. Mathematics 2024, 12, 3571. [Google Scholar] [CrossRef]
  82. Rodman, L. Topics in Quaternion Linear Algebra; Princeton University Press: Princeton, NJ, USA, 2014. [Google Scholar]
  83. Wei, M.S.; Li, Y.; Zhang, F.X. Quaternion Matrix Computations; Nova Science Publishers: Beijing, China, 2018. [Google Scholar]
  84. Kyrchei, I. Determinantal representations of solutions to systems of quaternion matrix equations. Adv. Appl. Clifford Algebras 2018, 28, 23. [Google Scholar] [CrossRef]
  85. Yuan, S.F.; Liao, A.P.; Wang, P. Least squares η-bi-Hermitian problems of the quaternion matrix equation (AXB, CXD) = (E, F). Linear Multilinear Algebra 2015, 63, 1849–1863. [Google Scholar] [CrossRef]
  86. Si, K.; Wang, Q.W.; Xie, L.M. A classical system of matrix equations over the split quaternion algebra. Adv. Appl. Clifford Algebras 2024, 34, 51. [Google Scholar] [CrossRef]
  87. Liu, X. The η-anti-Hermitian solution to some classic matrix equations. Appl. Math. Comput. 2018, 320, 264–270. [Google Scholar] [CrossRef]
  88. Gao, Z.H.; Wang, Q.W.; Xie, L.M. The (anti-)η-Hermitian solution to a novel system of matrix equations over the split quaternion algebra. Math. Meth. Appl. Sci. 2024, 47, 13896–13913. [Google Scholar] [CrossRef]
  89. Zhang, F.X.; Li, Y.; Zhao, J.L. The semi-tensor product method for special least squares solutions of the complex generalized Sylvester matrix equation. AIMS Math. 2022, 8, 5200–5215. [Google Scholar] [CrossRef]
  90. Zhang, W.H.; Chen, B.S. H -Representation and applications to generalized Lyapunov equations and linear stochastic systems. IEEE Trans. Autom. Control. 2012, 57, 3009–3022. [Google Scholar] [CrossRef]
  91. Li, J.F.; Li, W.; Chen, Y.M.; Huang, R. Solvability of matrix equations under semi-tensor product. Linear Multilinear Algebra 2017, 65, 1705–1733. [Google Scholar] [CrossRef]
  92. Qi, L.Q.; Ling, C.; Yan, H. Dual quaternions and dual quaternion vectors. Commun. Appl. Math. Comput. 2022, 4, 1494–1508. [Google Scholar] [CrossRef]
  93. Sun, L.; Zheng, B.; Bu, C.; Wei, Y. Moore–Penrose inverse of tensors via Einstein product. Linear Multilinear Algebra 2015, 64, 686–698. [Google Scholar] [CrossRef]
  94. Einstein, A. The formal foundation of the general theory of relativity. Sitzungsber. Preuss. Akad. Wiss. Berl. (Math. Phys.) 1914, 1914, 1030–1085. Available online: https://inspirehep.net/literature/42607 (accessed on 7 November 2024).
  95. Dakić, J.; Petković, M.D. Gradient neural network model for the system of two linear matrix equations and applications. Appl. Math. Comput. 2024, 481, 128930. [Google Scholar] [CrossRef]
Figure 1. Research framework of system (1).
Figure 2. Scheme.
Figure 3. The original, encrypted, and decrypted color images.
Figure 4. The original, encrypted, and decrypted images of randomly selected slices from color videos.
Table 1. Definition of (anti-) Hermitian generalized (anti-)Hamiltonian matrices.
Symbol | Type of matrix | Definition
ASO_R^{2n×2n} | non-trivial anti-symmetric orthogonal matrix J | J^T = −J = J^{−1} ≠ ±I
HHC^{2n×2n} | Hermitian generalized Hamiltonian | X* = X and JXJ = X
HAHC^{2n×2n} | Hermitian generalized anti-Hamiltonian | X* = X and JXJ = −X
AHHC^{2n×2n} | anti-Hermitian generalized Hamiltonian | X* = −X and JXJ = −X
AHAHC^{2n×2n} | anti-Hermitian generalized anti-Hamiltonian | X* = −X and JXJ = X
Table 2. Definition of several kinds of symmetric matrices.
Type of matrix | Definition
symmetric | A^T = A
bisymmetric | a_{ij} = a_{n−i+1, n−j+1} = ā_{ji}
centrosymmetric | A = A^# (a^#_{ij} = a_{n−i+1, n−j+1})
symmetric and skew-antisymmetric | A^T = A = −A^#
(P, Q)-(skew)symmetric | A = ±PAQ with unitary P and Q