Review

A Survey on Solving the Matrix Equation AXB = C with Applications

1 Department of Mathematics and Newtouch Center for Mathematics, Shanghai University, Shanghai 200444, China
2 Collaborative Innovation Center for the Marine Artificial Intelligence, Shanghai 200444, China
* Author to whom correspondence should be addressed.
Mathematics 2025, 13(3), 450; https://doi.org/10.3390/math13030450
Submission received: 13 January 2025 / Revised: 26 January 2025 / Accepted: 27 January 2025 / Published: 28 January 2025
(This article belongs to the Section A: Algebra and Logic)

Abstract: This survey provides a comprehensive overview of the solutions to the matrix equation A X B = C over real numbers, complex numbers, quaternions, dual quaternions, dual split quaternions, and dual generalized commutative quaternions, including various special solutions. Additionally, we summarize the numerical algorithms for these special solutions. This matrix equation plays an important role in solving linear systems and control theory. We specifically explore the application of this matrix equation in color image processing, highlighting its unique value in this field. Taking the dual quaternion matrix equation A X B = C as an example, we design a scheme for simultaneously encrypting and decrypting two color images. The experimental results demonstrate that this scheme is highly feasible.
MSC:
15A03; 15A09; 15A24; 15B33; 15B57; 65F10; 65F45

1. Introduction

Matrix equations play a crucial role in solving linear systems, eigenvalue problems, and system control. For instance, the classic matrix equation
A X B = C  (1)
encompasses both the equations A X = C and X B = C , and it is also essential in system control applications [1]. In 1955, Penrose [2] defined the Moore–Penrose inverse using four matrix equations and provided necessary and sufficient conditions for the solvability of the matrix Equation (1) over complex numbers, along with expressions for its general solution. The introduction of the Moore–Penrose inverse has greatly facilitated the solution of matrix equations, drawing the attention of many scholars and promoting further research into the matrix Equation (1).
The purpose of this survey is to provide an overview of the research on various solutions to matrix Equation (1), along with the corresponding numerical algorithms, and to discuss the applications of this matrix equation in color image processing. Since Penrose first applied the Moore–Penrose inverse to study matrix Equation (1), research on this matrix equation has remained active. Furthermore, matrices with special properties play crucial roles in specific fields. For example, Hermitian matrices, η -Hermitian matrices, nonnegative definite matrices, reflexive matrices, anti-reflexive matrices, reducible matrices, orthogonal matrices, bisymmetric matrices, and others have important applications in areas such as the estimation of covariance components in statistical models, load-flow analysis, short-circuit studies in power systems, engineering and scientific computations, statistical signal processing, Markov chains, compartmental analysis, continuous-time positive systems, covariance assignment, data matching in multivariate analysis, and other fields (see references [3,4,5,6,7,8,9,10,11,12]). Consequently, many scholars have explored various special solutions to this matrix equation [13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31]. In addition, numerous researchers have developed corresponding numerical algorithms for these special solutions (see references [32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63]).
Since most research on matrix Equation (1) has been conducted over the fields of real or complex numbers, many scholars have extended their studies to dual quaternions, dual split quaternions, and dual generalized commutative quaternions, thereby further exploring the solutions to this matrix equation (see References [64,65,66]). Additionally, some researchers have studied the matrix equation within the framework of the semi-tensor product [67], which removes the dimensional constraints typically imposed on matrix multiplication. Other work has elevated the study of matrices to the operator level, focusing on operator equations of the form A X B = C [68,69]. Moreover, the study of the rank of solutions to matrix Equation (1) has attracted considerable attention from scholars [70,71,72,73]. Currently, applications of this matrix equation in image processing are rare. Therefore, this survey uses the dual quaternion matrix equation A X B = C as an example to explore its application in the encryption and decryption of color images.
The remainder of this survey is organized as follows. Section 2 introduces some of the notation used in this survey and provides definitions and properties of several special types of matrices. In Section 3, we discuss various solutions to matrix Equation (1). Section 4 presents numerical algorithms for solving the special solutions of matrix Equation (1). In Section 5, we use the dual quaternion matrix equation A X B = C as an example to demonstrate its application in color image processing, supported by experimental verification. Finally, Section 6 offers a conclusion to the survey.

2. Preliminaries

In this section, we present some commonly used symbols, definitions related to matrices, and their relevant properties.
Let R be the field of real numbers, C be the field of complex numbers, H be the quaternions, H_s be the split quaternions, H_g be the generalized commutative quaternions, DQ be the dual quaternions, DH_s be the dual split quaternions, and DH_g be the dual generalized commutative quaternions. The rank of a matrix A is denoted by r(A).

2.1. Real Matrix

A real matrix A ∈ R^{n×n} is (anti-)symmetric if it obeys
A = (−)A^T,
where A^T denotes the transpose of A. We use the symbol SR^{n×n} (ASR^{n×n}) to represent the set of all symmetric (anti-symmetric) matrices. For A ∈ SR^{n×n}, if A A^T = I, where I represents the identity matrix, then A is referred to as a symmetric orthogonal matrix. We use the notation SOR^{n×n} to denote the set of all symmetric orthogonal matrices.
Definition 1
([74]). A matrix A = (a_{ij}) ∈ R^{n×n} is defined as follows: centro-symmetric, denoted by A ∈ CSR^{n×n}, if a_{ij} = a_{n+1−i,n+1−j} (i, j = 1, 2, …, n); anti-centro-symmetric, denoted by A ∈ ACSR^{n×n}, if a_{ij} = −a_{n+1−i,n+1−j} (i, j = 1, 2, …, n).
Remark 1.
If A ∈ ACSR^{n×n}, then A = −V_n A V_n, where V_n = (e_n, …, e_1) and e_i denotes the i-th column of the identity matrix I_n.
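As a quick illustration, the following sketch (assuming NumPy; the construction is ours) checks the equivalence between the entrywise condition of Definition 1 and the matrix identity of Remark 1.

```python
import numpy as np

# V_n = (e_n, ..., e_1) is the exchange matrix; A is centro-symmetric
# iff A = V_n A V_n, i.e., a_ij = a_{n+1-i, n+1-j}.
n = 4
V = np.fliplr(np.eye(n))                    # V_n
A = np.arange(n * n, dtype=float).reshape(n, n)
A = (A + V @ A @ V) / 2                     # project onto the centro-symmetric part
assert np.allclose(V @ A @ V, A)            # matrix identity
i, j = 1, 3
assert np.isclose(A[i, j], A[n - 1 - i, n - 1 - j])  # entrywise (0-based indices)
```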
As extensions of centro-symmetric matrices, we define generalized centro-symmetric matrices and mirror-symmetric matrices.
Definition 2
([32]). For P ∈ SOR^{n×n}, a matrix A is said to be generalized centro-symmetric (generalized centro anti-symmetric) with respect to P if it satisfies PAP = A (PAP = −A). The set of all such generalized centro-symmetric (generalized centro anti-symmetric) matrices with respect to P is denoted by CSR_P^{n×n} (CASR_P^{n×n}).
Remark 2.
If the definition above is given over C, then A is called a reflexive (anti-reflexive) matrix. In particular, if A also satisfies A^T = A, then A is called symmetric P-symmetric. Furthermore, if P, Q ∈ SOR^{n×n} satisfy (PXQ)^T = (−)PXQ, then X is called (P, Q)-orthogonal (skew-)symmetric.
Definition 3
([34]). A matrix A ∈ R^{(2n+m)×(2n+m)} is said to be (n, m)-mirrorsymmetric ((n, m)-mirrorskew) if
P(n, m) A P(n, m) = A (−A),
where
P(n, m) = \begin{pmatrix} 0 & 0 & J_n \\ 0 & I_m & 0 \\ J_n & 0 & 0 \end{pmatrix},
and J_n is the n × n matrix with ones on the anti-diagonal and zeros elsewhere.
In addition, for A = (a_{ij}) ∈ R^{n×n}, if it obeys a_{ij} = a_{j−i} (i, j = 1, …, n), then A is said to be a general Toeplitz matrix. In other words,
A = \begin{pmatrix} a_0 & a_1 & a_2 & \cdots & a_{n-2} & a_{n-1} \\ a_{-1} & a_0 & a_1 & \cdots & \cdots & a_{n-2} \\ a_{-2} & a_{-1} & a_0 & \ddots & & \vdots \\ \vdots & & \ddots & \ddots & & \vdots \\ a_{-(n-2)} & a_{-(n-3)} & \cdots & & a_0 & a_1 \\ a_{-(n-1)} & a_{-(n-2)} & \cdots & & a_{-1} & a_0 \end{pmatrix}.
If A satisfies a_{ij} = a_{i+j−1} (i, j = 1, …, n), then A is called a Hankel matrix. This can be represented as follows:
A = \begin{pmatrix} a_1 & a_2 & \cdots & a_n \\ a_2 & a_3 & \cdots & a_{n+1} \\ \vdots & \vdots & & \vdots \\ a_n & a_{n+1} & \cdots & a_{2n-1} \end{pmatrix}.

2.2. Complex Matrix

2.2.1. Hermitian, Positive Semidefinite, and Positive Definite Matrices

A matrix A ∈ C^{n×n} is called nonnegative definite or positive semidefinite, denoted A ≥ 0, if
x* A x ≥ 0 for all x ∈ C^n.
A is further termed positive definite, denoted A > 0, if x* A x > 0 holds for all x ≠ 0. Specifically, if A satisfies Re(x* A x) ≥ 0 for all x ≠ 0, then A is called Re-nonnegative definite and is denoted as Re-nnd [75], where Re(x* A x) denotes the real part of x* A x. Let Re_n^≥ be the set of all n × n Re-nnd matrices. In addition, A ∈ C^{n×n} is said to be Hermitian (or self-adjoint) if it satisfies the condition
A = A * ,
where A* denotes the conjugate transpose of A, and Ā denotes the conjugate of A. Key properties of Hermitian matrices include real diagonal elements, real eigenvalues, and the existence of an orthonormal set of eigenvectors.
For A , B C n × n , if there exists a nonsingular matrix P such that A = P B P * , then A and B are in the same *congruence class. It is evident that Hermitian, positive definite, and positive semidefinite matrices are special cases within *congruence.

2.2.2. Moore–Penrose Inverses of Matrices

In 1955, Penrose [2] formulated the generalized inverse (now known as the Moore–Penrose inverse) of a matrix A ∈ C^{m×n}, denoted by A^†, using four matrix equations:
A A^† A = A, A^† A A^† = A^†, (A A^†)* = A A^†, (A^† A)* = A^† A,
and A^† is unique. In particular, if A X A = A holds, then X is a generalized inner inverse (g-inverse) of A, denoted as A^−. The introduction of the concept of the generalized inverse of a matrix has significantly facilitated the solving of matrix equations.
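The four Penrose equations are easy to verify numerically; the following sketch (assuming NumPy, whose pinv computes the Moore–Penrose inverse via the SVD) checks all four for a random complex matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3)) + 1j * rng.standard_normal((4, 3))
Ad = np.linalg.pinv(A)  # the Moore-Penrose inverse A^+

assert np.allclose(A @ Ad @ A, A)              # A A^+ A = A
assert np.allclose(Ad @ A @ Ad, Ad)            # A^+ A A^+ = A^+
assert np.allclose((A @ Ad).conj().T, A @ Ad)  # (A A^+)* = A A^+
assert np.allclose((Ad @ A).conj().T, Ad @ A)  # (A^+ A)* = A^+ A
```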

2.2.3. Reflexive, { P , k + 1 } Reflexive, Generalized Reflexive, and ( R , S )-Symmetric Matrices

In 1998, Chen [3] introduced the concepts of reflexive matrices and anti-reflexive matrices.
Definition 4.
Assume that P is a nontrivial unitary involution matrix (generalized reflection matrix), i.e., P* = P and P² = I; then A is said to be a reflexive (anti-reflexive) matrix if
A ∈ C_r^{n×n}(P) (A ∈ C_a^{n×n}(P)),
where C_r^{n×n}(P) = {A ∈ C^{n×n} | A = PAP} and C_a^{n×n}(P) = {A ∈ C^{n×n} | A = −PAP}.
Remark 3.
If the matrix P obeys P^{k+1} = P = P*, then P is called a generalized {k+1}-reflection matrix. Correspondingly, a matrix X satisfying PXP = (−)X is called {P, k+1} (anti-)reflexive.
A more general definition is as follows.
Suppose that A ∈ C^{m×n}, R ∈ C^{m×m}, S ∈ C^{n×n}, and R, S are involutory Hermitian matrices. Then, A is called a generalized (anti-)reflexive matrix if and only if
R A S = (−)A.
If R and S are simply involutory matrices, we have the following definition:
Definition 5
([76]). Suppose that A ∈ C^{m×n}; then A is called an (R, S)-symmetric ((R, S)-skew symmetric) matrix if
R A S = A (R A S = −A), R ∈ C^{m×m}, S ∈ C^{n×n},
where R² = I and S² = I.
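Since R² = S² = I, the map X ↦ (X + RXS)/2 projects any matrix onto the (R, S)-symmetric matrices, which gives a quick way to generate test cases; a minimal sketch (assuming NumPy, with R and S chosen as simple diagonal involutions):

```python
import numpy as np

R = np.diag([1.0, -1.0, 1.0])        # an involution: R @ R = I
S = np.diag([1.0, 1.0, -1.0, -1.0])  # another involution
rng = np.random.default_rng(1)
X = rng.standard_normal((3, 4))

Y = (X + R @ X @ S) / 2              # (R, S)-symmetric part of X
assert np.allclose(R @ Y @ S, Y)     # Y satisfies R Y S = Y
```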

2.2.4. Generalized Singular Value Decomposition

Generalized singular value decomposition (GSVD) [77]: for A ∈ C^{m×n} and B ∈ C^{p×n}, there exist unitary matrices U ∈ C^{m×m} and V ∈ C^{p×p}, as well as an invertible matrix P ∈ C^{n×n}, such that
U A P = (Σ_A, O_{n−l}), V B P = (Σ_B, O_{n−l}),
where
Σ_A = \begin{pmatrix} I_t & & \\ & S_A & \\ & & O_A \end{pmatrix}, Σ_B = \begin{pmatrix} O_B & & \\ & S_B & \\ & & I_{l−t−s} \end{pmatrix}, S_A = diag(α_{t+1}, …, α_{t+s}), S_B = diag(β_{t+1}, …, β_{t+s}),
and l = r((A^T, B^T)^T), t = l − r(B), s = r(A) + r(B) − l, α_i² + β_i² = 1 (i = t+1, …, t+s), 1 > α_{t+1} ≥ ⋯ ≥ α_{t+s} > 0, 0 < β_{t+1} ≤ ⋯ ≤ β_{t+s} < 1.

2.3. Semi-Tensor Product of Matrices

The semi-tensor product of matrices removes the dimension restriction of traditional matrix multiplication while retaining its main algebraic properties. First, we define the Kronecker product of matrices over the fields of real and complex numbers.
Definition 6
([67]). For matrices M = (m_{ij}) ∈ R^{p×q} (C^{p×q}) and N ∈ R^{t×s} (C^{t×s}), the Kronecker product of M and N is defined as
M ⊗ N = \begin{pmatrix} m_{11}N & m_{12}N & \cdots & m_{1q}N \\ m_{21}N & m_{22}N & \cdots & m_{2q}N \\ \vdots & \vdots & & \vdots \\ m_{p1}N & m_{p2}N & \cdots & m_{pq}N \end{pmatrix} ∈ R^{pt×qs} (C^{pt×qs}).
The operator vec(·) maps a matrix to the column vector obtained by stacking its columns, i.e.,
vec(M) = (m_{11}, m_{21}, …, m_{p1}, …, m_{1q}, m_{2q}, …, m_{pq})^T.
Here are some operations similar to vec ( · ) .
Definition 7
([78]). Let A R n × n .
1.
Set a 1 = A ( 2 : n 1 , 1 ) T , a 2 = A ( 3 : n 2 , 2 ) T , , a k 1 = A ( k : n k + 1 , k 1 ) T , a k = A ( k + 1 , k ) T , denote
vec A ( A ) = a 1 , a 2 , , a k 1 T , w h e n n = 2 k , a 1 , a 2 , , a k 1 , a k T , w h e n n = 2 k + 1 .
2.
Set b 1 = A ( 1 : n , 1 ) T , b 2 = A ( 2 : n 1 , 2 ) T , , b k = A ( k : n k + 1 , k ) T , b k + 1 = A ( k + 1 , k + 1 ) T , denote
vec B ( A ) = b 1 , b 2 , , b k T , w h e n n = 2 k , b 1 , b 2 , , b k , b k + 1 T , w h e n n = 2 k + 1 .
Assume that M, N, T, and I are matrices of appropriate sizes; then
(M ⊗ N) ⊗ T = M ⊗ (N ⊗ T), (MN) ⊗ I = (M ⊗ I)(N ⊗ I),
and solving the linear matrix equation M X N = T is equivalent to solving the linear system
(N^T ⊗ M) vec(X) = vec(T).
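This vectorization identity is the workhorse behind many of the algorithms surveyed later; a minimal sketch (assuming NumPy; note that order='F' gives the column-stacking vec):

```python
import numpy as np

rng = np.random.default_rng(2)
M, X, N = rng.random((3, 4)), rng.random((4, 5)), rng.random((5, 2))
T = M @ X @ N

# vec(M X N) = (N^T kron M) vec(X)
vec = lambda A: A.ravel(order="F")           # column-stacking vectorization
assert np.allclose(vec(T), np.kron(N.T, M) @ vec(X))

# Hence M X N = T can be solved as an ordinary least squares problem.
x = np.linalg.lstsq(np.kron(N.T, M), vec(T), rcond=None)[0]
assert np.allclose(M @ x.reshape(4, 5, order="F") @ N, T)
```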
Now we give the concept of the semi-tensor product.
Definition 8
([79]). Let A ∈ R^{m×n} (C^{m×n}) and B ∈ R^{p×q} (C^{p×q}); the semi-tensor product of A and B is defined as
A ⋉ B = (A ⊗ I_{t/n})(B ⊗ I_{t/p}),
where t = lcm(n, p) is the least common multiple of n and p.
Remark 4.
If n = p , the semi-tensor product of matrices reduces to the traditional matrix product.
Regarding the semi-tensor product of matrices, we can easily derive the following properties:
Proposition 1
([79]). (1) Suppose A, B, and C are real (complex) matrices of any dimensions; then
(A ⋉ B) ⋉ C = A ⋉ (B ⋉ C).
(2) Let x ∈ R^m (C^m) and y ∈ R^n (C^n); then
x ⋉ y = x ⊗ y.
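A direct implementation of Definition 8 takes only a few lines; the sketch below (assuming NumPy) also checks Remark 4, i.e., that the semi-tensor product reduces to the ordinary product when the dimensions match, along with the associativity of Proposition 1.

```python
import numpy as np
from math import lcm

def stp(A: np.ndarray, B: np.ndarray) -> np.ndarray:
    """Semi-tensor product (A kron I_{t/n}) (B kron I_{t/p}), t = lcm(n, p)."""
    n, p = A.shape[1], B.shape[0]
    t = lcm(n, p)
    return np.kron(A, np.eye(t // n)) @ np.kron(B, np.eye(t // p))

A = np.arange(6.0).reshape(2, 3)
B = np.arange(12.0).reshape(3, 4)
assert np.allclose(stp(A, B), A @ B)   # Remark 4: n = p gives the usual product

C = np.arange(8.0).reshape(4, 2)       # mismatched dimensions are now allowed
assert np.allclose(stp(stp(A, C), B), stp(A, stp(C, B)))  # associativity
```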

2.4. Quaternion Matrix and Dual Quaternion Matrix

2.4.1. Quaternions

A quaternion a can be expressed in the form a = a_0 + a_1 i + a_2 j + a_3 k, where a_0, a_1, a_2, a_3 ∈ R and i, j, k satisfy
i² = j² = k² = −1, ij = −ji = k, jk = −kj = i, and ki = −ik = j.
The conjugate of a is defined as a* = a_0 − a_1 i − a_2 j − a_3 k, and the norm of a is |a| = √(a a*) = √(a_0² + a_1² + a_2² + a_3²).
If a map ϕ : H → H obeys
ϕ(ab) = ϕ(b)ϕ(a), ϕ(a + b) = ϕ(a) + ϕ(b), ∀ a, b ∈ H,
then ϕ is said to be an antiendomorphism. For a ∈ H, if ϕ is an antiendomorphism satisfying ϕ(ϕ(a)) = a, then ϕ is called an involution.
Definition 9
([80]). An involution ϕ is said to be nonstandard if and only if ϕ can be represented as a real matrix
ϕ = \begin{pmatrix} 1 & 0 \\ 0 & P \end{pmatrix}
under the basis 1, i, j, k, where P is a real orthogonal symmetric matrix whose eigenvalues are 1, 1, −1.
In fact, for a ∈ H, any nonstandard involution ϕ can be written as ϕ(a) = γ^{−1} a* γ for some γ ∈ H with γ² = −1. Building on these fundamental definitions of quaternions, we can now proceed to introduce quaternion matrices and explore their associated properties.

2.4.2. Quaternion Matrix

A quaternion matrix A can be written as A = A_0 + A_1 i + A_2 j + A_3 k ∈ H^{m×n}, where A_0, A_1, A_2, A_3 ∈ R^{m×n}. Its conjugate transpose A* is defined as follows:
A* = A_0^T − A_1^T i − A_2^T j − A_3^T k;
the Frobenius norm of A is defined by
‖A‖_F = \sqrt{\sum_{i=1}^{m} \sum_{j=1}^{n} |a_{ij}|^2}.
In the context of complex fields, we understand the concepts of centro-symmetric matrices and ( R , S )-symmetric (( R , S )-skew symmetric) matrices. Similarly, there are related concepts in the realm of quaternions.
Let A ∈ H^{n×n}, V_n = (e_n, …, e_1), and B ∈ H^{m×n}. Then
  • if A = A*, A is called Hermitian;
  • if A = V_n A* V_n, A is called persymmetric;
  • if A = V_n A V_n and A = A*, A is called bisymmetric (bihermitian);
  • if A = V_n A* V_n and A = −A*, A is called a quaternion skew bihermitian matrix;
  • if R B S = B (R B S = −B), R ∈ H^{m×m}, and S ∈ H^{n×n}, where R² = I and S² = I, B is called an (R, S)-symmetric ((R, S)-skew symmetric) matrix.
Additionally, if A ∈ H^{n×n} satisfies A = (−)A^{η*} := (−)(−η A* η), where η ∈ {i, j, k}, then A is said to be an η-(anti-)Hermitian matrix. Further, if A = A^ϕ, then A is called a ϕ-Hermitian matrix, where A^ϕ = γ^{−1} A* γ for some γ ∈ H with γ² = −1. For example, if ϕ obeys ϕ(i) = i, ϕ(j) = −j, ϕ(k) = k, i.e., ϕ(A) = j^{−1} A* j for A ∈ H^{m×n}, then
(1 + i + j)^ϕ = 1 + i − j.
The following properties hold for ϕ -Hermitian matrices.
Proposition 2
([64]). Suppose A ∈ H^{m×n}; then
1.
(A^ϕ)^† = (A^†)^ϕ.
2.
(L_A)^ϕ = R_{A^ϕ}.
3.
(R_A)^ϕ = L_{A^ϕ}.
Penrose provided four matrix equations that define the conditions for a matrix to have a generalized inverse over the complex field, and these conditions also hold over the quaternions. Let L_A = I − A^† A and R_A = I − A A^†. We can easily derive the following properties.
Proposition 3
([81]). Suppose that A ∈ H^{m×n}; then
1.
(A^{η*})^† = (A^†)^{η*}.
2.
r(A) = r(A^{η*}).
3.
(L_A)^{η*} = −η L_A η = (L_A)^η = R_{A^{η*}}.
4.
(R_A)^{η*} = −η R_A η = (R_A)^η = L_{A^{η*}}.
For A = A_0 + A_1 i + A_2 j + A_3 k ∈ H^{m×n}, we give its real representation matrix A^R as below:
A^R := \begin{pmatrix} A_0 & −A_1 & −A_2 & −A_3 \\ A_1 & A_0 & −A_3 & A_2 \\ A_2 & A_3 & A_0 & −A_1 \\ A_3 & −A_2 & A_1 & A_0 \end{pmatrix}.  (4)
For convenience of description, we denote by A_{ri}^R and A_{cj}^R the i-th row block and j-th column block of A^R, respectively. The real representation matrix of A is not unique. For instance,
A^R = \begin{pmatrix} A_0 & A_1 & A_2 & A_3 \\ −A_1 & A_0 & −A_3 & A_2 \\ −A_2 & A_3 & A_0 & −A_1 \\ −A_3 & −A_2 & A_1 & A_0 \end{pmatrix} ∈ R^{4m×4n}
is also a real representation of A. See Reference [82] for details. Certainly, we can derive
‖A‖ = (1/2)‖A^R‖ = ‖A_{ri}^R‖ = ‖A_{ci}^R‖, i = 1, …, 4,
and some of the following properties.
Proposition 4
([83]). Assume that A, B ∈ H^{m×n}, C ∈ H^{n×l}, k ∈ R, and i = 1, …, 4; then
1.
A = B ⇔ A^R = B^R ⇔ A_{ri}^R = B_{ri}^R ⇔ A_{ci}^R = B_{ci}^R.
2.
(A + B)_{ri}^R = A_{ri}^R + B_{ri}^R, (A + B)_{ci}^R = A_{ci}^R + B_{ci}^R.
3.
(kA)_{ri}^R = k A_{ri}^R, (kA)_{ci}^R = k A_{ci}^R.
4.
(AC)_{ri}^R = A_{ri}^R C^R, (AC)_{ci}^R = A^R C_{ci}^R.
Let
Q_n = \begin{pmatrix} 0 & −I_n & 0 & 0 \\ I_n & 0 & 0 & 0 \\ 0 & 0 & 0 & I_n \\ 0 & 0 & −I_n & 0 \end{pmatrix}, G_n = \begin{pmatrix} 0 & 0 & −I_n & 0 \\ 0 & 0 & 0 & −I_n \\ I_n & 0 & 0 & 0 \\ 0 & I_n & 0 & 0 \end{pmatrix}, T_n = \begin{pmatrix} 0 & 0 & 0 & −I_n \\ 0 & 0 & I_n & 0 \\ 0 & −I_n & 0 & 0 \\ I_n & 0 & 0 & 0 \end{pmatrix}.  (5)
Then, we have the following properties.
Proposition 5
([29]). Suppose that A, B ∈ H^{n×n} and a ∈ R; then
1.
(A + B)^R = A^R + B^R, (aA)^R = a A^R.
2.
(AB)^R = A^R B^R.
3.
Q_n^T A^R Q_n = A^R, G_n^T A^R G_n = A^R, T_n^T A^R T_n = A^R.
4.
(A*)^R = (A^R)^T, (A^{−1})^R = (A^R)^{−1}.
5.
A^R commutes with Q_n, G_n, and T_n with respect to multiplication.
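The multiplicativity in item 2 is easy to check numerically. Below is a hedged sketch (assuming NumPy; the sign pattern is the standard left-multiplication representation as reconstructed in (4)) that builds A^R from the four real parts and verifies (AB)^R = A^R B^R.

```python
import numpy as np

def real_rep(A0, A1, A2, A3):
    """Real representation A^R of A = A0 + A1 i + A2 j + A3 k (pattern of (4))."""
    return np.block([
        [A0, -A1, -A2, -A3],
        [A1,  A0, -A3,  A2],
        [A2,  A3,  A0, -A1],
        [A3, -A2,  A1,  A0],
    ])

def qmatmul(A, B):
    """Quaternion matrix product from i^2 = j^2 = k^2 = -1, ij = k, jk = i, ki = j."""
    A0, A1, A2, A3 = A
    B0, B1, B2, B3 = B
    return (A0 @ B0 - A1 @ B1 - A2 @ B2 - A3 @ B3,
            A0 @ B1 + A1 @ B0 + A2 @ B3 - A3 @ B2,
            A0 @ B2 - A1 @ B3 + A2 @ B0 + A3 @ B1,
            A0 @ B3 + A1 @ B2 - A2 @ B1 + A3 @ B0)

rng = np.random.default_rng(3)
A = tuple(rng.random((3, 3)) for _ in range(4))
B = tuple(rng.random((3, 3)) for _ in range(4))
assert np.allclose(real_rep(*qmatmul(A, B)), real_rep(*A) @ real_rep(*B))
```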

2.4.3. Dual Quaternion Matrix

The set of dual quaternion matrices is defined as
DQ^{m×n} := { Y = Y_0 + Y_1 ϵ | Y_0, Y_1 ∈ H^{m×n} },
where Y_0, Y_1 represent the standard part and the infinitesimal part of Y, respectively. The infinitesimal unit ϵ obeys ϵ² = 0 and commutes under multiplication with real numbers, complex numbers, and quaternions. Below are some basic operations on dual quaternion matrices. For A = A_0 + A_1ϵ, C = C_0 + C_1ϵ ∈ DQ^{m×n} and B = B_0 + B_1ϵ ∈ DQ^{n×k}, we have
A + C = A 0 + C 0 + ( A 1 + C 1 ) ϵ , A B = A 0 B 0 + ( A 0 B 1 + A 1 B 0 ) ϵ .
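The rule ϵ² = 0 is all that is needed to implement this arithmetic. A minimal sketch (assuming NumPy; real blocks are used here only to keep the example short, since the quaternion case multiplies the blocks in exactly the same dual pattern):

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class DualMatrix:
    std: np.ndarray  # standard part A0
    inf: np.ndarray  # infinitesimal part A1

    def __add__(self, other):
        return DualMatrix(self.std + other.std, self.inf + other.inf)

    def __matmul__(self, other):
        # (A0 + A1 eps)(B0 + B1 eps) = A0 B0 + (A0 B1 + A1 B0) eps, since eps^2 = 0
        return DualMatrix(self.std @ other.std,
                          self.std @ other.inf + self.inf @ other.std)

A = DualMatrix(np.eye(2), np.ones((2, 2)))
B = DualMatrix(2 * np.eye(2), np.zeros((2, 2)))
C = A @ B
assert np.allclose(C.std, A.std @ B.std)
assert np.allclose(C.inf, A.std @ B.inf + A.inf @ B.std)
```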
We know that there are some special matrices over quaternions, and there are also some special matrices over dual quaternions.
A dual quaternion matrix A = A_0 + A_1 ϵ ∈ DQ^{n×n} is called ϕ-Hermitian if A = A^ϕ, where A^ϕ is defined as
A^ϕ := γ^{−1} A* γ = γ^{−1} A_0* γ + (γ^{−1} A_1* γ) ϵ = A_0^ϕ + A_1^ϕ ϵ,
with γ ∈ H and γ² = −1. It has the following properties.
Proposition 6
([64]). Suppose that A , B DQ n × n , then
1.
( A + B ) ϕ = A ϕ + B ϕ .
2.
( A B ) ϕ = B ϕ A ϕ .
3.
( A ϕ ) ϕ = A .

2.5. Dual Split Quaternion Matrix

2.5.1. Split Quaternion Matrix

A split quaternion matrix A can be written as A = A_0 + A_1 i + A_2 j + A_3 k ∈ H_s^{m×n}, where A_0, A_1, A_2, A_3 ∈ R^{m×n}, and
i² = −1, j² = k² = 1, ij = −ji = k, jk = −kj = −i, ki = −ik = j;
the conjugate transpose A* is defined as follows:
A* = A_0^T − A_1^T i − A_2^T j − A_3^T k;
in addition, we define the i-conjugate and i-conjugate transpose of A as follows:
A^i = i^{−1} A i = A_0 + A_1 i − A_2 j − A_3 k, A^{i*} = i^{−1} A* i = A_0^T − A_1^T i + A_2^T j + A_3^T k.
It is evident that A^{i*} = (A*)^i = (A^i)*. The real representation method is of great significance in solving split quaternion matrix equations. For A = A_0 + A_1 i + A_2 j + A_3 k ∈ H_s^{m×n}, two real representations of A are defined as follows:
A^{σ_1} := \begin{pmatrix} A_0 & −A_1 & A_2 & A_3 \\ A_1 & A_0 & A_3 & −A_2 \\ A_2 & −A_3 & A_0 & A_1 \\ A_3 & A_2 & −A_1 & A_0 \end{pmatrix}, A^{σ_i} := U_m A^{σ_1} = \begin{pmatrix} A_0 & −A_1 & A_2 & A_3 \\ A_1 & A_0 & A_3 & −A_2 \\ −A_2 & A_3 & −A_0 & −A_1 \\ −A_3 & −A_2 & A_1 & −A_0 \end{pmatrix},
where
U_m = \begin{pmatrix} I_m & 0 & 0 & 0 \\ 0 & I_m & 0 & 0 \\ 0 & 0 & −I_m & 0 \\ 0 & 0 & 0 & −I_m \end{pmatrix}.  (7)
Denote
P_m = \begin{pmatrix} 0 & 0 & I_m & 0 \\ 0 & 0 & 0 & I_m \\ I_m & 0 & 0 & 0 \\ 0 & I_m & 0 & 0 \end{pmatrix}, W_m = \begin{pmatrix} 0 & −I_m & 0 & 0 \\ I_m & 0 & 0 & 0 \\ 0 & 0 & 0 & I_m \\ 0 & 0 & −I_m & 0 \end{pmatrix}, R_m = \begin{pmatrix} 0 & 0 & 0 & I_m \\ 0 & 0 & −I_m & 0 \\ 0 & −I_m & 0 & 0 \\ I_m & 0 & 0 & 0 \end{pmatrix};
then, we can derive the following properties.
Proposition 7
([65]). Assume that A , B H s m × n , C H s n × q , and k R , then we have the following conclusions.
1.
A = B ⇔ A^{σ_1} = B^{σ_1}, A = B ⇔ A^{σ_i} = B^{σ_i}.
2.
( A + B ) σ 1 = A σ 1 + B σ 1 , ( k A ) σ 1 = k A σ 1 , ( A + B ) σ i = A σ i + B σ i , ( k A ) σ i = k A σ i .
3.
( A C ) σ 1 = A σ 1 C σ 1 , ( A C ) σ i = A σ i U n C σ i .
4.
(i)    P m T A σ 1 P n = A σ 1 , W m T A σ 1 W n = A σ 1 , R m T A σ 1 R n = A σ 1 .
(ii)    P m T A σ i P n = A σ i , W m T A σ i W n = A σ i , R m T A σ i R n = A σ i .
5.
(i)    A = (1/4) (I_m, I_m i, I_m j, I_m k) A^{σ_1} (I_n, −I_n i, I_n j, I_n k)^T.
(ii)    A = (1/4) (I_m, I_m i, −I_m j, −I_m k) A^{σ_i} (I_n, −I_n i, I_n j, I_n k)^T.
6.
( A * ) σ i = ( A σ i ) T , ( A i ) σ i = U m A σ i U n .

2.5.2. Dual Split Quaternion Matrix

The set of dual split quaternion matrices is defined as
DH_s^{m×n} := { Y = Y_0 + Y_1 ϵ | Y_0, Y_1 ∈ H_s^{m×n} },
where Y_0 and Y_1 represent the standard part and the infinitesimal part of Y, respectively. The infinitesimal unit ϵ obeys ϵ² = 0 and commutes under multiplication with real numbers, complex numbers, quaternions, and split quaternions. For A = A_0 + A_1 ϵ ∈ DH_s^{m×n}, the Hamiltonian conjugate of A is denoted by Ā = Ā_0 + Ā_1 ϵ ∈ DH_s^{m×n}, the transpose of A is represented by A^T = A_0^T + A_1^T ϵ ∈ DH_s^{n×m}, and the conjugate transpose of A, denoted by A*, is given by A* = A_0* + A_1* ϵ ∈ DH_s^{n×m}.
Similar to dual quaternion matrices, dual split quaternion matrices also have the following basic operational properties. For A = A 0 + A 1 ϵ , C = C 0 + C 1 ϵ D H s m × n , B = B 0 + B 1 ϵ D H s n × k , then
A + C = A 0 + C 0 + ( A 1 + C 1 ) ϵ , A B = A 0 B 0 + ( A 0 B 1 + A 1 B 0 ) ϵ .

2.6. Dual Generalized Commutative Quaternion Matrix

2.6.1. Generalized Commutative Quaternion Matrix

A generalized commutative quaternion matrix A has the form A = A_0 + A_1 i + A_2 j + A_3 k ∈ H_g^{m×n}, where A_0, A_1, A_2, A_3 ∈ R^{m×n}, and i, j, k satisfy
i² = α, j² = β, k² = αβ, ij = ji = k, jk = kj = βi, ki = ik = αj.
Here α, β ∈ R∖{0}. The concept of generalized commutative quaternions was introduced by Tian et al. [84] in 2023. In particular, when α = −1 and β = 1, the generalized commutative quaternion matrix A reduces to a commutative quaternion matrix. Similar to split quaternion matrices, generalized commutative quaternion matrices also have real matrix representations. For A = A_0 + A_1 i + A_2 j + A_3 k ∈ H_g^{m×n}, it has the following three real matrix representations:
A^{σ_i} = \begin{pmatrix} A_0 & αA_1 & βA_2 & αβA_3 \\ A_1 & A_0 & βA_3 & βA_2 \\ A_2 & αA_3 & A_0 & αA_1 \\ A_3 & A_2 & A_1 & A_0 \end{pmatrix}, A^{σ_j} = V_m A^{σ_i}, A^{σ_k} = U_m A^{σ_i},
where U_m is defined in Equation (7) and
V_m = \begin{pmatrix} I_m & 0 & 0 & 0 \\ 0 & −I_m & 0 & 0 \\ 0 & 0 & I_m & 0 \\ 0 & 0 & 0 & −I_m \end{pmatrix}.
Set
G_n^1 = \begin{pmatrix} I_n & 0 & 0 & 0 \\ 0 & αI_n & 0 & 0 \\ 0 & 0 & βI_n & 0 \\ 0 & 0 & 0 & αβI_n \end{pmatrix}, R_n^1 = \begin{pmatrix} 0 & αI_n & 0 & 0 \\ I_n & 0 & 0 & 0 \\ 0 & 0 & 0 & αI_n \\ 0 & 0 & I_n & 0 \end{pmatrix}, S_n^1 = \begin{pmatrix} 0 & 0 & βI_n & 0 \\ 0 & 0 & 0 & βI_n \\ I_n & 0 & 0 & 0 \\ 0 & I_n & 0 & 0 \end{pmatrix}, T_n^1 = \begin{pmatrix} 0 & 0 & 0 & αβI_n \\ 0 & 0 & βI_n & 0 \\ 0 & αI_n & 0 & 0 \\ I_n & 0 & 0 & 0 \end{pmatrix}.
Thus, we can derive the following properties.
Proposition 8
([85]). Let A , B H g m × n , C H g n × s , and k R . Then the following conclusions hold.
1.
A = B ⇔ A^{σ_η} = B^{σ_η}, η ∈ {i, j, k}.
2.
( A + B ) σ η = A σ η + B σ η , η { i , j , k } .
3.
( k A ) σ η = k A σ η , η { i , j , k } .
4.
( A C ) σ i = A σ i C σ i , ( A C ) σ j = A σ j V n C σ j , ( A C ) σ k = A σ k U n C σ k .
5.
(a)    ( R m 1 ) 1 A σ i R n 1 = A σ i , ( S m 1 ) 1 A σ i S n 1 = A σ i , ( T m 1 ) 1 A σ i T n 1 = A σ i .
(b)    ( R m 1 ) 1 A σ j R n 1 = A σ j , ( S m 1 ) 1 A σ j S n 1 = A σ j , ( T m 1 ) 1 A σ j T n 1 = A σ j .
(c)    ( R m 1 ) 1 A σ k R n 1 = A σ k , ( S m 1 ) 1 A σ k S n 1 = A σ k , ( T m 1 ) 1 A σ k T n 1 = A σ k .
6.
(a)    A = (1/4) (I_m, I_m i, I_m j, I_m k) A^{σ_i} (I_n, (1/α) I_n i, (1/β) I_n j, (1/(αβ)) I_n k)^T.
(b)    A = (1/4) (I_m, I_m i, I_m j, I_m k) A^{σ_j} (I_n, (1/α) I_n i, (1/β) I_n j, (1/(αβ)) I_n k)^T.
(c)    A = (1/4) (I_m, I_m i, I_m j, I_m k) A^{σ_k} (I_n, (1/α) I_n i, (1/β) I_n j, (1/(αβ)) I_n k)^T.

2.6.2. Dual Generalized Commutative Quaternion Matrix

We use
DH_g^{m×n} := { A = A_0 + A_1 ϵ | A_0, A_1 ∈ H_g^{m×n} },
where the infinitesimal unit ϵ satisfies ϵ² = 0, to represent all m × n dual generalized commutative quaternion matrices. The definitions of addition, multiplication, and equality for two dual generalized commutative quaternion matrices are similar to those for dual quaternions and dual split quaternions, and thus will not be repeated here.

2.7. Tensor

A tensor
A = (a_{i_1⋯i_N}), 1 ≤ i_j ≤ I_j (j = 1, …, N),
of order N is a multidimensional array with I_1 × ⋯ × I_N entries, where N is a positive integer. The sets of tensors of order N with dimension I_1 × ⋯ × I_N over the complex field C, the real field R, and the real quaternion algebra are represented, respectively, by C^{I_1×⋯×I_N}, R^{I_1×⋯×I_N}, and H^{I_1×⋯×I_N}. In particular, when N = 2, the tensor is a matrix.
For a quaternion tensor
A = a i 1 i N j 1 j M H I 1 × × I N × J 1 × × J M ,
let
B = b i 1 i M j 1 j N H J 1 × × J M × I 1 × × I N
be the conjugate transpose of A, where
b_{i_1⋯i_M j_1⋯j_N} = ā_{j_1⋯j_N i_1⋯i_M},
and the tensor B is denoted by A * . A “square” tensor D = d i 1 i N i 1 i N H I 1 × × I N × I 1 × × I N is called a diagonal tensor if all its entries are zero except for d i 1 i N i 1 i N . If all the diagonal entries d i 1 i N i 1 i N = 1 , then D is a unit tensor, denoted by I . The zero tensor with suitable order is denoted by 0.
Next, we give the definition of the Einstein product of tensors.
Definition 10
([86]). For A ∈ H^{I_1×⋯×I_P×K_1×⋯×K_N} and B ∈ H^{K_1×⋯×K_N×J_1×⋯×J_M}, the Einstein product of tensors A and B is defined via the operation *_N as follows:
(A *_N B)_{i_1⋯i_P j_1⋯j_M} = Σ_{1≤k_1≤K_1, …, 1≤k_N≤K_N} a_{i_1⋯i_P k_1⋯k_N} b_{k_1⋯k_N j_1⋯j_M},
where A *_N B ∈ H^{I_1×⋯×I_P×J_1×⋯×J_M}.
Remark 5.
When N = P = M = 1, A and B are quaternion matrices, and their Einstein product is the usual matrix product.
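The Einstein product contracts the trailing N indices of A against the leading N indices of B, which np.einsum (or tensordot) expresses directly; a minimal sketch over the reals (assuming NumPy), with N = 2:

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.random((2, 3, 4, 5))  # shape I1 x I2 x K1 x K2
B = rng.random((4, 5, 6))     # shape K1 x K2 x J1

C = np.einsum("abkl,klm->abm", A, B)   # A *_2 B, shape I1 x I2 x J1
assert C.shape == (2, 3, 6)
assert np.allclose(C, np.tensordot(A, B, axes=2))  # the same contraction
```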
Now, we provide the Moore–Penrose inverse of quaternion tensors via Einstein product.
Definition 11
([13]). For a tensor A H I 1 × × I N × K 1 × × K N , the tensor
X H K 1 × × K N × I 1 × × I N
satisfying
(1) 
A *_N X *_N A = A,
(2) 
X *_N A *_N X = X,
(3) 
(A *_N X)* = A *_N X,
(4) 
(X *_N A)* = X *_N A,
is called the Moore–Penrose inverse of A , abbreviated by M-P inverse, denoted by A .
Furthermore, L A and R A stand for the two projectors
L_A = I − A^† *_N A, R_A = I − A *_N A^†,
induced by A , respectively. We say that the tensor B H I 1 × × I N × I 1 × × I N is the inverse of tensor A H I 1 × × I N × I 1 × × I N , if
A *_N B = I = B *_N A,
and we denote B = A^{−1}.
Reducible matrices are intimately linked to the connectivity of directed graphs and have found diverse applications in various fields. Below, we introduce the concept of a tensor being K-reducible. Before that, we first give the definition of a permutation tensor: for A ∈ H^{J_1×⋯×J_M×J_1×⋯×J_M}, if it has a matricized form B that is a permutation matrix, then A is said to be a permutation tensor.
Definition 12
([13]). A tensor A ∈ H^{α_1×⋯×α_N×α_1×⋯×α_N} is said to be K-reducible if there exists a permutation tensor K such that A is permutation similar to an M-upper (lower) triangular block tensor,
A = K *_N \begin{pmatrix} B_1 & B_2 \\ 0 & B_3 \end{pmatrix} *_N K^{−1},
where
B_1 ∈ H^{I_1×⋯×I_N×I_1×⋯×I_N}, B_2 ∈ H^{I_1×⋯×I_N×J_1×⋯×J_N}, B_3 ∈ H^{J_1×⋯×J_N×J_1×⋯×J_N}, K ∈ H^{α_1×⋯×α_N×α_1×⋯×α_N}, α_i = I_i + J_i, i = 1, …, N.

3. Various Solutions of Matrix Equation AXB = C

In 1955, Penrose defined the generalized inverse of a matrix using four matrix equations. He provided necessary and sufficient conditions for the solvability of the matrix equation A X B = C using the generalized inverse of matrices, along with an expression for the general solution.
Theorem 1
([2]). Let A, B, and C be given with appropriate sizes over C. Then A X B = C is solvable if and only if
A A^† C B^† B = C.
In this case, the general solution can be expressed as
X = A^† C B^† + W − A^† A W B B^†,
where W is arbitrary.
Remark 6.
By applying Theorem 1, the Moore–Penrose inverse can be used to provide the solvability conditions and the expression for the general solution of
A x = b .
For more details, refer to reference [2].
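Theorem 1 translates directly into a numerical recipe; the sketch below (assuming NumPy) builds a consistent right-hand side, tests the solvability criterion, and forms the general solution for an arbitrary W.

```python
import numpy as np

rng = np.random.default_rng(5)
A, B = rng.random((4, 3)), rng.random((5, 6))
C = A @ rng.random((3, 5)) @ B               # consistent by construction

Ap, Bp = np.linalg.pinv(A), np.linalg.pinv(B)
assert np.allclose(A @ Ap @ C @ Bp @ B, C)   # solvability: A A^+ C B^+ B = C

W = rng.random((3, 5))                       # arbitrary matrix
X = Ap @ C @ Bp + W - Ap @ A @ W @ B @ Bp    # general solution of Theorem 1
assert np.allclose(A @ X @ B, C)
```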
In the field of matrix analysis, Penrose’s theory of the generalized inverse matrix provides a powerful tool for solving linear matrix equations. In certain specific fields, the solution matrix of a matrix equation is required to be a special type of matrix. For example, in the estimation of covariance components in statistical models [4], load-flow analysis and short-circuit studies in power systems [5], the solution matrix needs to be Hermitian or nonnegative definite. Reflexive and anti-reflexive matrices are applied in the fields of engineering and scientific computations [3]. The η-Hermitian matrix is significantly utilized in applications related to statistical signal processing [6]. Reducible matrices are utilized in a range of applications, including Markov chains [7], compartmental analysis [8], and continuous-time positive systems [9], among others. Orthogonal solutions are required for covariance assignment [10] and data matching problems in multivariate analysis [11,12]. In addition, the bisymmetric matrix has been widely recognized for its applications in information theory, Markov processes, physical engineering, and many other fields. Consequently, many scholars have studied various special solutions to the matrix Equation (1).
  • In 1976, Khatri and Mitra [14] studied the solvability conditions and general solution expressions for the matrix Equation (1) with Hermitian and nonnegative definite solutions. Subsequently, in 2004, Zhang [15] investigated Hermitian nonnegative-definite and positive-definite solutions of this equation. Later, Wang et al. [16] and Cvetković-Ilić [17] explored Re-nonnegative definite solutions of the same equation. Since *congruence encompasses Hermitian, positive definite, and positive semidefinite matrices, Zheng et al. [18] studied the *congruence class of the solutions to this matrix equation in 2009.
  • Employing the GSVD, Hua [19] and Liao [20] investigated the symmetric solutions and symmetric positive semidefinite least squares solutions of matrix Equation (1) over R . Additionally, in 2022, Hu et al. [21] studied the symmetric solutions of this matrix equation within a specific subspace over the real number field. The quaternion is an extension of real and complex numbers with broad applications. Accordingly, Liu [23] explored the η -Hermitian solution of this matrix equation. In addition, Wang et al. [22] and Zhang et al. [24] studied the least squares bisymmetric solutions and the skew bihermitian solutions of matrix Equation (1) over H .
  • In 2006, D.S. Cvetković-Ilić [25] studied the reflexive and anti-reflexive solutions of the matrix Equation (1) over the complex field. In 2011, Herrero et al. [26] investigated the { P , k + 1 } reflexive and anti-reflexive solutions of the same equation over C. Building on these efforts, Liu et al. [27] explored the minimum norm least squares Hermitian (anti-)reflexive solutions of this equation in 2017. Subsequently, Yuan et al. [28] examined generalized reflexive solutions of this matrix equation over the complex field. In 2024, Liao et al. [29] extended this line of research by studying the ( R , S )-symmetric solutions of this matrix equation over H, which encompassed generalized reflexive solutions.
  • In 2018, Yang et al. [30] studied the Hankel solutions and various Toeplitz solutions of matrix Equation (1) over R . In 2022, Zhang et al. [31] investigated the orthogonal solutions of this matrix equation in the complex field. Moreover, since tensors are higher-dimensional matrices with broader applications, Xie et al. [13] studied the K-reducible solutions of this matrix equation in quaternion tensors.
Furthermore, the study of the matrix Equation (1) using matrix ranks has attracted significant interest from researchers. Many scholars have also focused on solutions to this equation under specific conditions, such as when A , B , and C are operators, when the traditional matrix product is replaced by the semi-tensor product, or when the elements of the matrices come from a principal ideal domain, DQ , DH s , or DH g .
  • Previous studies on various specific solutions to matrix Equation (1) have mostly been based on the assumption that A , B and C are matrices. We know that matrices can be viewed as a special type of operator. Thus, in 2010, Arias et al. [68] explored the existence of positive solutions to this operator equation without this additional assumption. Building on this work, Cvetković-Ilić et al. [69] further investigated the positive solutions of this operator equation in 2019.
  • In addition, some scholars have employed matrix rank to investigate various aspects of matrix Equation (1). For example, Porter et al. (1979) [70] studied the number of solutions to this matrix equation over a given finite field. In 2007, Liu [71] explored the problems of maximal and minimal ranks for the least squares solutions of this equation over the complex field. Subsequently, Zhang et al. [73] extended the study to the maximal and minimal ranks of submatrices of the least squares solutions over C . In 2010, Wang et al. [72] investigated the maximal and minimal ranks of the four real matrices involved in the quaternion solution of this equation.
  • The traditional matrix product imposes requirements on the dimensions of the two matrices involved, while the semi-tensor product removes these restrictions and has broad applications. Consequently, in 2019, Ji et al. [67] studied matrix Equation (1) over the field of real numbers under the semi-tensor product. In 2020, Prokip [87] investigated this matrix equation over a principal ideal domain. DQ, DH_s, and DH_g are extensions of H, H_s, and H_g, respectively. Therefore, in 2024, Chen et al. [64] investigated this matrix equation over DQ, while Si et al. [65] explored it over DH_s, and Shi et al. [66] concentrated on DH_g.
Let R(X), N(X), and tr(X) denote the column space, null space, and trace of the matrix X, respectively. For A ∈ C^{m×n}, let I − A^− A and I − A A^− be denoted by L_{gA} and R_{gA}, respectively.

3.1. Hermitian, Nonnegative Definite, and Re-Nonnegative Definite Solutions

In 1976, Khatri and Mitra used the g-inverse to provide necessary and sufficient conditions for the matrix Equation (1) to have Hermitian and nonnegative definite solutions, along with the corresponding expressions for the general solutions.
Theorem 2
([14]). Suppose that A, B, C ∈ C^{n×n} and A ≥ 0, B ≥ 0; then we have the following conclusions:
1.
The matrix Equation (1) has a Hermitian solution if
B (A + B)^− C (A + B)^− A
is Hermitian. In this case, the general Hermitian solution can be expressed as
X = (A + B)^− (C + C* + Y + Z) ((A + B)^−)* + W − (A + B)^− (A + B) W ((A + B)^− (A + B))*,
where W is an arbitrary Hermitian matrix with appropriate sizes, and Y, Z are arbitrary Hermitian solutions of the matrix equations
Y (A + B)^− B = C (A + B)^− A, A (A + B)^− Z = B (A + B)^− C.  (11)
2.
The matrix Equation (1) has a nonnegative definite solution if
T := B (A + B)^− C (A + B)^− A
is nonnegative definite and
r(T) = r(A (A + B)^− C*) = r(B (A + B)^− C).
In this case, the general nonnegative definite solution can be expressed as
X = (A + B)^− (C + C* + Y + Z) ((A + B)^−)* + L_{g(A+B)} W (L_{g(A+B)})*,
where W ∈ C^{n×n} is an arbitrary nonnegative definite matrix, and Y, Z are arbitrary nonnegative definite solutions of (11) such that C + C* + Y + Z is nonnegative definite.
For the solutions of (11), refer to Reference [14]; they are not reproduced here. Subsequently, in 2004, Zhang [15] employed matrix decomposition methods to present the necessary and sufficient conditions for matrix Equation (1) to have Hermitian nonnegative-definite and Hermitian positive-definite solutions, along with the general solution expressions. Let H_n^≥ and H_n^> be the sets of n-by-n Hermitian nonnegative-definite matrices and n-by-n Hermitian positive-definite matrices, respectively.
Theorem 3
([15]). Assume that A ∈ C^{m×n}, B ∈ C^{n×p}, and C ∈ C^{m×p} obey
0 < m = r(A), r(B) = p ≤ n.
There exist an integer s and matrices P, Q, and T such that
P^{−1} C T^{−1} = \begin{pmatrix} C_1 & C_2 \\ C_3 & C_4 \end{pmatrix}, C_1 ∈ C^{(p−s)×(p−s)}.
Then we can derive the following two conclusions:
1.
The matrix Equation (1) has a Hermitian nonnegative-definite solution if
C_1 ∈ H_{p−s}^≥, R((C_3*, C_2)) ⊆ R(C_1).
In this case, the general Hermitian nonnegative-definite solution can be expressed as
X = Q^{−1} \begin{pmatrix} C_1 & C_3* & C_2 & X_{14} \\ C_3 & X_{22} & C_4 & X_{24} \\ C_2* & C_4* & X_{33} & X_{34} \\ X_{14}* & X_{24}* & X_{34}* & X_{44} \end{pmatrix} (Q*)^{−1},  (12)
where X_{14}, X_{22}, X_{24}, X_{33}, X_{34}, and X_{44} satisfy
R(X_{14}) ⊆ R(C_1), R((C_0, Y_2)) ⊆ R(Y_1), R(Z) ⊆ R(Y_3 − C_0* Y_1^† C_0), and Y_1 ∈ H_{m−p+s}^≥, Y_3 − C_0* Y_1^† C_0 ∈ H_s^≥, Y_5 − Y_2* Y_1^† Y_2 − Z*(Y_3 − C_0* Y_1^† C_0)^† Z ∈ H_{n−m−s}^≥,
with
C_0 = C_4 − C_3 C_1^† C_2, Y_1 = X_{22} − C_3 C_1^† C_3*, Y_2 = X_{24} − C_3 C_1^† X_{14}, Y_3 = X_{33} − C_2* C_1^† C_2, Y_4 = X_{34} − C_2* C_1^† X_{14}, Y_5 = X_{44} − X_{14}* C_1^† X_{14}, Z = Y_4 − C_0* Y_1^† Y_2.  (13)
2.
The matrix Equation (1) has a Hermitian positive-definite solution if
C_1 ∈ H_{p−s}^>.
In this case, the general Hermitian positive-definite solution can be expressed as (12), where X_{14}, X_{22}, X_{24}, X_{33}, X_{34}, and X_{44} satisfy
Y_1 ∈ H_{m−p+s}^>, Y_3 − C_0* Y_1^{−1} C_0 ∈ H_s^>, Y_5 − Y_2* Y_1^{−1} Y_2 − Z*(Y_3 − C_0* Y_1^{−1} C_0)^{−1} Z ∈ H_{n−m−s}^>,
with (13).
For details on the integer s and matrices P , Q , and T mentioned in Theorem 3, refer to Reference [15].
In 1998, Wang et al. [16] presented the necessary and sufficient conditions for the solvability of matrix Equation (1) and the general solution expression using the GSVD.
Theorem 4
([16]). Assume that A ∈ C^{m×n}, B ∈ C^{n×q}, and C ∈ C^{m×q} are given. Set
P^{−1} X (P^{−1})* = \begin{pmatrix} X_{11} & X_{12} & X_{13} & X_{14} \\ X_{21} & X_{22} & X_{23} & X_{24} \\ X_{31} & X_{32} & X_{33} & X_{34} \\ X_{41} & X_{42} & X_{43} & X_{44} \end{pmatrix}, U C V* = \begin{pmatrix} C_{11} & C_{12} & C_{13} \\ C_{21} & C_{22} & C_{23} \\ C_{31} & C_{32} & C_{33} \end{pmatrix},
with the blocks partitioned conformably with the GSVD block sizes t, s, l − t − s, and n − l.
1.
The matrix Equation (1) is consistent if and only if
C_{i1} = 0, i = 1, 2; C_{3j} = 0, j = 1, 2, 3.
In this case, the general solution can be expressed as
X = P \begin{pmatrix} X_{11} & C_{12} S_B^{−1} & C_{13} & X_{14} \\ X_{21} & S_A^{−1} C_{22} S_B^{−1} & S_A^{−1} C_{23} & X_{24} \\ X_{31} & X_{32} & X_{33} & X_{34} \\ X_{41} & X_{42} & X_{43} & X_{44} \end{pmatrix} P*,
where X_{i1}, X_{i4} (i = 1, 2) and X_{3j}, X_{4j} (j = 1, …, 4) are arbitrary matrices with appropriate sizes over C.
2.
The matrix Equation (1) has a Re-nnd solution if and only if
C_{i1} = 0, i = 1, 2; C_{3j} = 0, j = 1, 2, 3; and S_A^{−1} C_{22} S_B^{−1} is Re-nnd.
In this case, the general Re-nnd solution can be expressed as
X = P \begin{pmatrix} M & N \\ N* + T*(M + M*) & D + T* M T \end{pmatrix} P*,
where
M = \begin{pmatrix} D_2 + T_2* S_A^{−1} C_{22} S_B^{−1} T_2 & C_{12} S_B^{−1} & C_{13} \\ F & S_A^{−1} C_{22} S_B^{−1} & S_A^{−1} C_{23} \\ X_{31} & G & D_1 + T_1* S_A^{−1} C_{22} S_B^{−1} T_1 \end{pmatrix},
and
F = S_B^{−1} C_{12}* + (S_A^{−1} C_{22} S_B^{−1} + S_B^{−1} C_{22}* S_A^{−1}) T_2, G = C_{23}* S_A^{−1} + T_1* (S_A^{−1} C_{22} S_B^{−1} + S_B^{−1} C_{22}* S_A^{−1}),
with M ∈ Re_l^≥, X_{31} ∈ C^{(l−t−s)×t}, D_1 ∈ Re_{l−t−s}^≥, D_2 ∈ Re_t^≥, D ∈ Re_{n−l}^≥, T_1 ∈ C^{s×(l−t−s)}, T_2 ∈ C^{s×t}, T ∈ C^{l×(n−l)}, N ∈ C^{l×(n−l)} all arbitrary matrices.
The matrices P, U, V, S_A, and S_B in Theorem 4 are obtained by applying the GSVD to the matrices A and B*. In 2008, Cvetković-Ilić [17] provided the Re-nonnegative definite solution to matrix Equation (1) using the g-inverse of matrices. Let H(A) denote the Hermitian part of A, i.e., H(A) = (1/2)(A + A*).
Theorem 5
([17]). Suppose that A, B, C ∈ C^{n×n} are given with A ≥ 0 and B ≥ 0; then the matrix Equation (1) has a Re-nnd solution if
G = B (A + B)^− C (A + B)^− A
is Re-nnd. In this case, the general Re-nnd solution can be expressed as
X = K (C + Y + Z + W) K* + L_{g(A+B)} U U* (L_{g(A+B)})* + Q L_{A+B} − L_{A+B} Q*,
where Y, Z, W are arbitrary solutions of the following matrix equations
Y (A + B)^− B = C (A + B)^− A, A (A + B)^− Z = B (A + B)^− C, A (A + B)^− W (A + B)^− B = G,
with C + Y + Z + W Re-nnd. Here K is defined by
K = (A + B)^− + L_{g(A+B)} P H(C + Y + Z + W)^{1/2},
and P, Q ∈ C^{n×n}, U ∈ C^{n×(n−t)} are arbitrary matrices, t = r(C + Y + Z + W).

3.2. The *Congruence Class of the Solutions

Since Hermitian, positive definite, and positive semidefinite matrices are special cases of *congruence, Zheng et al. [18] studied the *congruence class of the solutions to matrix Equation (1) in 2009. Applying the GSVD to the matrices A C m × n and B C n × l , we obtain:
A = U_A Σ_A P*, B = P Σ_B V_B*,
where U_A, V_B are unitary matrices, P is a nonsingular matrix, and
Σ_A = \begin{pmatrix} I & 0 \\ 0 & 0 \end{pmatrix} (block rows of sizes r_1, m − r_1; block columns of sizes r_1, n − r_1), Σ_B = \begin{pmatrix} S_1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & S_2 & 0 \\ 0 & 0 & 0 \end{pmatrix} (block rows of sizes r_2, r_1 − r_2 − r_3, r_3, n − r_1; block columns of sizes r_2, r_3, l − r_4).
Here, r_1 = r(A), r_4 = r_2 + r_3 = r(B), and S_1, S_2 are diagonal matrices with positive elements.
Theorem 6
([18]). Let A ∈ C^{m×n}, B ∈ C^{n×l}, C ∈ C^{m×l}, and let X ∈ C^{n×n} be an unknown matrix. Denote
U_A* C V_B = \begin{pmatrix} C_{11} & C_{12} & C_{13} \\ C_{21} & C_{22} & C_{23} \\ C_{31} & C_{32} & C_{33} \end{pmatrix}, P* X P = \begin{pmatrix} X_{11} & X_{12} & X_{13} & X_{14} \\ X_{21} & X_{22} & X_{23} & X_{24} \\ X_{31} & X_{32} & X_{33} & X_{34} \\ X_{41} & X_{42} & X_{43} & X_{44} \end{pmatrix}.
Then the matrix Equation (1) is consistent if
C_{3i} = 0, i = 1, 2, 3; C_{13} = C_{23} = 0.
In this case, the general solution (or least squares solution) is *congruent to
Y = \begin{pmatrix} C_{11} S_1^{−1} & X_{12} & C_{12} S_2^{−1} & X_{14} \\ C_{21} S_1^{−1} & X_{22} & C_{22} S_2^{−1} & X_{24} \\ X_{31} & X_{32} & X_{33} & X_{34} \\ X_{41} & X_{42} & X_{43} & X_{44} \end{pmatrix},
where X_{3i}, X_{4i} (i = 1, …, 4) and X_{j2}, X_{j4} (j = 1, 2) are arbitrary complex matrices.

3.3. Symmetric, Symmetric Positive Semidefinite, η -Anti-Hermitian and Bihermitian Solutions

Employing the GSVD, Hua [19] and Liao [20] investigated the solvability conditions and general solution expressions for the symmetric and symmetric positive semidefinite solutions of the matrix Equation (1) over R.
Theorem 7
([19]). Let A ∈ R^{m×n}, B ∈ R^{n×q}, C ∈ R^{m×q}, and let X ∈ R^{n×n} be an unknown matrix. We partition X and C using the method in Theorem 4; then the matrix Equation (1) has a symmetric solution if
C_{i1} = 0, i = 1, 2, 3; C_{32} = C_{33} = 0; S_A S_B^{−1} C_{22}^T = C_{22} S_B^{−1} S_A.
In this case, the general symmetric solution is given by
X = P \begin{pmatrix} X_{11} & C_{12} S_B^{−1} & C_{13} & X_{14} \\ S_B^{−1} C_{12}^T & S_A^{−1} C_{22} S_B^{−1} & S_A^{−1} C_{23} & X_{24} \\ C_{13}^T & C_{23}^T S_A^{−1} & X_{33} & X_{34} \\ X_{14}^T & X_{24}^T & X_{34}^T & X_{44} \end{pmatrix} P^T,
where X_{11}, X_{33}, X_{44} are arbitrary symmetric matrices of appropriate sizes over R, and X_{14}, X_{24}, X_{34} are arbitrary real matrices of appropriate sizes.
Theorem 8
([20]). Assume that A, B, C are given with appropriate sizes, and apply the GSVD to the matrix pair [A^T, B]. Set C_{ij} = U_i^T C V_j (i, j = 1, 2, 3). Then Equation (1) has a least squares symmetric positive semidefinite solution only if
r(X̂_{22}) = r(S_B^{−1} C_{12}^T, X̂_{22}, S_A^{−1} C_{23}).
In this case, the solution can be expressed as
X = P^{−T} \begin{pmatrix} Y & Y Z \\ (Y Z)^T & Z^T Y Z + G_3 \end{pmatrix} P^{−1},
where Z is an arbitrary real matrix and X̂_{22} is the unique minimizer of ‖S_A X_{22} S_B − C_{22}‖ with respect to
X_{22} ≥ 0, X_{22}^T = X_{22}.
Here
Y = \begin{pmatrix} X_{11} & C_{12} S_B^{−1} & C_{13} \\ S_B^{−1} C_{12}^T & X̂_{22} & S_A^{−1} C_{23} \\ C_{13}^T & C_{23}^T S_A^{−1} & X_{33} \end{pmatrix}, X_{11} = C_{12} S_B^{−1} X̂_{22}^† S_B^{−1} C_{12}^T + G_1, X_{33} = C_{23}^T S_A^{−1} X̂_{22}^† S_A^{−1} C_{23} + (C_{13} − C_{12} S_B^{−1} X̂_{22}^† S_A^{−1} C_{23})^T G_1^† (C_{13} − C_{12} S_B^{−1} X̂_{22}^† S_A^{−1} C_{23}) + G_2,
G_2, G_3 are arbitrary symmetric positive semidefinite matrices, and G_1 is a symmetric positive semidefinite matrix with
r(G_1) = r(G_1, C_{13} − C_{12} S_B^{−1} X̂_{22}^† S_A^{−1} C_{23}).
The matrices P, U_i, V_j in Theorems 7 and 8 are obtained by applying the GSVD to the matrix pair [A, B^T] or [A^T, B]. Afterward, Hu et al. [21] further investigated symmetric solutions of matrix Equation (1) on the subspace
N(G) = { x ∈ R^n | G x = 0 }, G ∈ R^{m×n},
over the real field. Let
SR_{N(G)}^{n×n} = { A ∈ R^{n×n} | (x, A y) = (A x, y), ∀ x, y ∈ N(G) }.
Theorem 9
([21]). Suppose that A ∈ R^{m×n}, B ∈ R^{n×q}, and C ∈ R^{m×q} are given. Then, the matrix Equation (1) has a symmetric solution X ∈ SR_{N(G)}^{n×n} if and only if
A_1 A_1^† C B_1^† B_1 = C, L_K D L_K = 0.
In this case, the solution set is given by
S = { X ∈ R^{n×n} | X = V_0 \begin{pmatrix} X_{11} & X_{12} \\ X_{21} & X_{22} \end{pmatrix} V_0^T },
where
A_1 = A V_0, B_1 = V_0^T B, K = (A_2^T, B_2), K^† = \begin{pmatrix} K_1 \\ K_2 \end{pmatrix}, D = (P_2 A_1^† C B_1^† Q_2)^T − P_2 A_1^† C B_1^† Q_2,
and
X_{11} = P_1 A_1^† C B_1^† Q_1 + A_3 Y_1 + Z_1 B_3, X_{12} = P_1 A_1^† C B_1^† Q_2 + (1/2) A_3 (K_1^T D + K_1^T D L_K) + A_3 V_1 − (1/2) A_3 A_2^T K_1 V_1 (I_{n−s} + L_K) − (1/2) A_3 A_2^T K_2 V_2 (I_{n−s} + L_K) + (1/2) A_3 K_1^T V_1^T A_2^T − (1/2) A_3 K_1^T V_2^T B_2 + Z_1 B_2, X_{21} = P_2 A_1^† C B_1^† Q_1 + (1/2) (D^T K_2 + L_K D^T K_2) B_3 + A_2 Y_1 + (1/2) (I_{n−s} + L_K) V_1^T K_1^T B_2^T B_3 + V_2^T B_3 + (1/2) (I_{n−s} + L_K) V_2^T K_2^T B_2^T B_3 + (1/2) A_2 V_1 K_2 B_3 − (1/2) B_2^T V_2 K_2 B_3, X_{22} = P_2 A_1^† C B_1^† Q_2 + (1/2) A_2 (K_1^T D + K_1^T D L_K) + (1/2) (D^T K_2 + L_K D^T K_2) B_2 + V_2^T B_2 + A_2 V_1 − (1/2) A_2 A_2^T K_1 V_1 (I_{n−s} + L_K) − (1/2) A_2 A_2^T K_2 V_2 (I_{n−s} + L_K) + (1/2) A_2 K_1^T V_1^T A_2^T − (1/2) A_2 K_1^T V_2^T B_2 + (1/2) (I_{n−s} + L_K) V_1^T K_1^T B_2^T B_2 + (1/2) (I_{n−s} + L_K) V_2^T K_2^T B_2^T B_2 + (1/2) A_2 V_1 K_2 B_2 − (1/2) B_2^T V_2 K_2 B_2,
with V_1, V_2, Y_1, and Z_1 arbitrary real matrices. Here
P_1 = (I_s, 0), P_2 = (0, I_{n−s}), Q_1 = (I_s, 0)^T, Q_2 = (0, I_{n−s})^T, A_2 = P_2 L_{A_1}, B_2 = R_{B_1} Q_2, A_3 = P_1 L_{A_1}, B_3 = R_{B_1} Q_1.
The matrix V_0 is obtained from the singular value decomposition (SVD) of the matrix G.
Quaternions, an extension of the real and complex numbers, have significant applications in signal processing, color image processing, and quantum physics. In addition, η-Hermitian and bisymmetric matrices have practical applications. Consequently, Liu [23] employed the Moore–Penrose inverse of matrices to investigate the solvability conditions and the general solution expression of the matrix Equation (1) with an η-anti-Hermitian solution. Additionally, Wang et al. [22] and Zhang et al. [24] explored the least squares bisymmetric and skew bihermitian solutions of this matrix equation over H. Since these studies utilize the real representation of quaternion matrices and the Kronecker product as tools to derive the least squares bisymmetric solution, only the results on the least squares bisymmetric solution published by Zhang et al. in 2022 are presented here.
Theorem 10
([23]). Assume that A, B, C are given with appropriate dimensions, and set
Ã = R_{B^{η*}} L_A, B̃ = A^† C B^† − (A^† C B^†)^{η*}, D = R_{B^{η*}} + L_A.
Then the matrix Equation (1) is consistent if
R_D B̃ R_D = 0, R_A C = 0, C L_B = 0.
In this case, the general η-anti-Hermitian solution can be expressed as
X = A^† C B^† + L_A (0, I) U + U^{η*} (I, 0)^T R_B,
where
U = Ã^† B̃ − (1/2) Ã^† B̃ (Ã^†)^{η*} Ã^{η*} + L_Ã V + W^{η*} (Ã^†)^{η*} Ã^{η*} − Ã^† Ã W Ã^{η*},
and V, W are arbitrary quaternion matrices.
Denote by K ∈ R^{4n²×4n²} the fixed block matrix, composed of I_n and 0 blocks, constructed in [24].
Theorem 11
([24]). Let A ∈ H^{m×n}, B ∈ H^{n×l}, C ∈ H^{m×l}, X = X_1 + X_2 i + X_3 j + X_4 k ∈ H^{n×n}, and set
M = (B_{c1}^R)^T ⊗ A^R, F = (diag(I_{4n}, …, I_{4n}), diag(Q_n, …, Q_n), diag(G_n, …, G_n), diag(T_n, …, T_n)), G = diag(B_n, W_n, W_n, W_n), D = M F K G, Ĝ = diag(W_n, B_n, B_n, B_n), D̂ = M F K Ĝ.
Then we derive the following results.
1.
The least squares bisymmetric solutions of matrix Equation (1) are given by
H_B = { X | (vec_B(X_1), vec_A(X_2), vec_A(X_3), vec_A(X_4))^T = D^† vec(C_{c1}^R) + L_D y },
where Q_n, G_n, T_n are defined in the form (5) and y is an arbitrary vector of appropriate size. Additionally, the minimal norm least squares bisymmetric solution of this matrix equation satisfies
(vec_B(X_1), vec_A(X_2), vec_A(X_3), vec_A(X_4))^T = D^† vec(C_{c1}^R).
2.
The least squares skew bihermitian solutions of matrix Equation (1) are given by
H_{SB} = { X | (vec_A(X_1), vec_B(X_2), vec_B(X_3), vec_B(X_4))^T = D̂^† vec(C_{c1}^R) + L_{D̂} y },
where y is an arbitrary real vector of appropriate size, and the minimal norm least squares skew bihermitian solution of this matrix equation satisfies
(vec_A(X_1), vec_B(X_2), vec_B(X_3), vec_B(X_4))^T = D̂^† vec(C_{c1}^R).
The matrices B n and W n consist of standard unit vectors e i , as detailed in reference [24].

3.4. Reflexive, { P , k + 1 } -Reflexive, Hermitian Reflexive, and ( R , S ) -Symmetric Solutions

Now, we present the necessary and sufficient conditions for the existence of reflexive solutions to matrix Equation (1), along with the general solution.
Theorem 12
([25]). Let A, B, and C be given with suitable dimensions. Then the matrix Equation (1) has a reflexive solution X ∈ C_r^{n×n}(P) if the system of matrix equations
A_1 Y B_1 + A_2 Z B_3 = C_1, A_1 Y B_2 + A_2 Z B_4 = C_2, A_3 Y B_1 + A_4 Z B_3 = C_3, A_3 Y B_2 + A_4 Z B_4 = C_4
is consistent. Under such circumstances, the solution can be expressed as
X = V \begin{pmatrix} Y & 0 \\ 0 & Z \end{pmatrix} V*.
The unitary matrix V is obtained through the decomposition of the generalized reflection matrix P, i.e.,
P = V \begin{pmatrix} I_r & 0 \\ 0 & −I_{n−r} \end{pmatrix} V*.
In addition, the matrices A_i, B_i, C_i (i = 1, …, 4) are derived from the decomposition of the matrices A, B, C:
A = V \begin{pmatrix} A_1 & A_2 \\ A_3 & A_4 \end{pmatrix} V*, B = V \begin{pmatrix} B_1 & B_2 \\ B_3 & B_4 \end{pmatrix} V*, C = V \begin{pmatrix} C_1 & C_2 \\ C_3 & C_4 \end{pmatrix} V*,
where A_1, B_1, C_1 ∈ C^{r×r}.
In 2011, Herrero et al. [26] simplified the problem of { P , k + 1 } reflexive solutions for matrix Equation (1) to the following issue:
{P, 2} reflexive solutions, when k is odd (method used: SVD, vec); {P, 3} reflexive solutions, when k is even (method used: GSVD, vec).
Since the solution to matrix Equation (1) using the GSVD method has been provided earlier, this part will focus on presenting the { P , k + 1 } -reflexive solution to matrix Equation (1) using the vec operator method, as described in the reference [26].
Theorem 13
([26]). Suppose A, B, C are given with appropriate sizes, and denote
D = B_1^T ⊗ A_1*, E = (B_{11}^T ⊗ A_{11}*, B_{22}^T ⊗ A_{22}*).
Then we have the following conclusions.
1.
The matrix Equation (1) has a {P, 2} reflexive solution if one of the following conditions is satisfied:
(1) 
vec(C) ∈ R(D),
(2) 
vec(C) ∈ N(R_{gD}).
In this case, the general {P, 2} reflexive solution can be expressed as
X = U \begin{pmatrix} X_{11} & 0 \\ 0 & 0 \end{pmatrix} U*,
where X_{11} ∈ C^{r×r} can be obtained by rearranging vec(X_{11}), and r = r(P). Here
vec(X_{11}) = D^− vec(C) + L_{gD} y,
with y an arbitrary vector.
2.
The matrix Equation (1) has a {P, 3} reflexive solution if vec(C) ∈ R(E) or vec(C) ∈ N(R_{gE}). In this case, the general {P, 3} reflexive solution can be expressed as
X = U \begin{pmatrix} X_{11} & 0 & 0 \\ 0 & X_{22} & 0 \\ 0 & 0 & 0 \end{pmatrix} U*,
where X_{11}, X_{22} can be reconstructed from vec(X_{11}) and vec(X_{22}), respectively. Here
(vec(X_{11}), vec(X_{22}))^T = E^− vec(C) + L_{gE} z,
with z an arbitrary vector.
Remark 7.
For item 1 in Theorem 13, the matrix P satisfies P² = P = P*, implying that P can be unitarily diagonalized as
P = U \begin{pmatrix} I_r & 0 \\ 0 & 0 \end{pmatrix} U*.
Define X̂ = U* X U, so X = U X̂ U*. In this case, the matrix equation A X B = C is transformed into Â X̂ B̂ = C, where
Â = A U := (A_1*, A_2*), B̂ = U* B := \begin{pmatrix} B_1 \\ B_2 \end{pmatrix}.
Additionally, from X = P X P, we can conclude that
X̂ = \begin{pmatrix} X_{11} & 0 \\ 0 & 0 \end{pmatrix},
where the order of X_{11} is equal to r(P). Following a similar approach, we can obtain the matrices A_{11}, B_{11}, A_{22}, and B_{22} in item 2. The details are omitted here.
In 2017, Liu et al. [27] utilized the real representation of complex matrices, the Kronecker product, and the vec operator to derive the minimum norm least squares Hermitian (anti-)reflexive solution for matrix Equation (1). For B = B_1 + B_2 i ∈ C^{m×n} with B_1, B_2 ∈ R^{m×n}, the real representation matrix of B can be expressed as
f(B) = \begin{pmatrix} B_1 & B_2 \\ −B_2 & B_1 \end{pmatrix} ∈ R^{2m×2n}.
Theorem 14
([27]). Suppose that A ∈ C^{m×n}, B ∈ C^{n×l}, C = C_1 + C_2 i, and set
g(C) = \begin{pmatrix} C_1 \\ C_2 \end{pmatrix}, M = \begin{pmatrix} f(B_1* ⊗ Ā_1) \\ f(B_2* ⊗ Ā_2) \end{pmatrix} · \begin{pmatrix} K_{S_1} & 0 & 0 & 0 \\ 0 & K_{A_1} & 0 & 0 \\ 0 & 0 & K_{S_2} & 0 \\ 0 & 0 & 0 & K_{A_2} \end{pmatrix}, P_{(r,n−r)} = \begin{pmatrix} E_{11}^T & \cdots & E_{1,n−r}^T \\ \vdots & & \vdots \\ E_{n,1}^T & \cdots & E_{n,n−r}^T \end{pmatrix}, N = f(B_1* ⊗ Ā_2) \begin{pmatrix} P_{(r,n−r)} & 0 \\ 0 & P_{(r,n−r)} \end{pmatrix} + f(B_2* ⊗ Ā_1),
with E_{i,j} representing a matrix whose (i, j) entry is 1 and all other entries are 0. Then, we have the following conclusions:
1.
The least squares Hermitian reflexive solution is expressed as
X = U \begin{pmatrix} Y & 0 \\ 0 & Z \end{pmatrix} U*,
where Y, Z can be obtained from
vec(Y) = (K_{S_1}, K_{A_1} i, 0, 0)(M^† vec(g(C)) + L_M y), vec(Z) = (0, 0, K_{S_2}, K_{A_2} i)(M^† vec(g(C)) + L_M y),
and y is an arbitrary real vector. In this case, the solution X with the minimum norm is provided by
X = U \begin{pmatrix} Y_1 & 0 \\ 0 & Z_1 \end{pmatrix} U*,
where Y_1, Z_1 are given by
vec(Y_1) = (K_{S_1}, K_{A_1} i, 0, 0) M^† vec(g(C)), vec(Z_1) = (0, 0, K_{S_2}, K_{A_2} i) M^† vec(g(C)).
2.
The least squares Hermitian anti-reflexive solution is expressed as
X = U \begin{pmatrix} 0 & Y \\ Y* & 0 \end{pmatrix} U*,
where
vec(Y) = (I_{r(n−r)}, i I_{r(n−r)})(N^† vec(g(C)) + L_N z),
and z is an arbitrary real vector. In this case, the solution X with the minimum norm is derived by
X = U \begin{pmatrix} 0 & Y_2 \\ Y_2* & 0 \end{pmatrix} U*,
where Y_2 is given by
vec(Y_2) = (I_{r(n−r)}, i I_{r(n−r)}) N^† vec(g(C)).
The matrix U is derived from the unitary diagonalization of the generalized reflection matrix P, i.e., there exists a unitary matrix U such that
P = U \begin{pmatrix} I_r & 0 \\ 0 & −I_{n−r} \end{pmatrix} U*.
It should be noted that the matrix P here satisfies P^{−1} = P* = P ≠ I, so the result of the unitary diagonalization of P is different from that in Remark 7. The matrices A_i, B_i (i = 1, 2) are obtained in the same way as described in Remark 7. Additionally, the matrices K_{A_i}, K_{S_i} (i = 1, 2) are related to the standard unit vectors e_i; see Reference [27] for details.
In 2008, Yuan et al. [28] explored generalized reflexive solutions to matrix Equation (1) using the GSVD. Notably, generalized reflexive solutions encompass reflexive solutions. A distinct step in obtaining the generalized reflexive solution is performing unitary diagonalization on the corresponding generalized reflection matrices R and S, respectively, i.e.,
R = U \begin{pmatrix} I_r & 0 \\ 0 & −I_s \end{pmatrix} U*, S = V \begin{pmatrix} I_k & 0 \\ 0 & −I_l \end{pmatrix} V*,
where U and V are unitary matrices.
Theorem 15
([28]). Assume that A, B, C are given with appropriate sizes, and denote
A U = (A_1, A_2), V* B = \begin{pmatrix} B_1 \\ B_2 \end{pmatrix}.
Applying the GSVD to the matrix pairs (A_1, A_2) and (B_1*, B_2*) yields
A_1 = M Σ_{A_1} E*, A_2 = M Σ_{A_2} F*, B_1* = N Σ_{B_1} K*, B_2* = N Σ_{B_2} H*.
Set
M^{−1} C (N^{−1})* = \begin{pmatrix} C_{11} & C_{12} & C_{13} & C_{14} \\ C_{21} & C_{22} & C_{23} & C_{24} \\ C_{31} & C_{32} & C_{33} & C_{34} \\ C_{41} & C_{42} & C_{43} & C_{44} \end{pmatrix}.
Then the matrix Equation (1) has a generalized reflexive solution if
C_{i4} = C_{4j} = 0, C_{13} = C_{31} = 0, i = 1, …, 4; j = 1, 2, 3.
In this case, the general generalized reflexive solution is given by
X = U \begin{pmatrix} Y & 0 \\ 0 & Z \end{pmatrix} V*,
where
Y = E \begin{pmatrix} C_{11} & C_{12} S_{B_1}^{−1} & Y_{13} \\ S_{A_1}^{−1} C_{21} & S_{A_1}^{−1} (C_{22} − S_{A_2} Z_{22} S_{B_2}) S_{B_1}^{−1} & Y_{23} \\ Y_{31} & Y_{32} & Y_{33} \end{pmatrix} K*,
and
Z = F \begin{pmatrix} Z_{11} & Z_{12} & Z_{13} \\ Z_{21} & Z_{22} & S_{A_2}^{−1} C_{23} \\ Z_{31} & C_{32} S_{B_2}^{−1} & C_{33} \end{pmatrix} H*,
where Y_{3i}, Y_{j3}, Z_{1i}, Z_{l1}, Z_{22} (i = 1, 2, 3; j = 1, 2; l = 2, 3) are arbitrary matrices.
So far, references [25,26,27,28] have all studied the reflexive solution of matrix Equation (1) over C . In 2024, Liao et al. [29] extended this research by investigating the ( R , S ) -(skew) symmetric solution of this matrix equation over H , which is a more general solution compared to the generalized reflexive solution. For A H m × n , A can be represented as a real matrix in the form of (4). This allows the ( R , S ) -(skew) symmetric solution over H to be transformed into an ( R R , S R ) -(skew) symmetric solution. Since R R and S R are merely nontrivial involutory matrices, they can be diagonalized, but not necessarily unitary diagonalized, which differs from the previous case.
For R R R 4 m × 4 m and S R R 4 n × 4 n , there are positive integers r , s , k , l and matrices P C 4 m × r , Q C 4 m × s , U C 4 n × k , V C 4 n × l , such that
r + s = 4 m , P * P = I r , R R P = P , Q * Q = I s , R R Q = Q , k + l = 4 n , U * U = I k , S R U = U , V * V = I l , S R V = V .
The choices for P, Q, U, and V are not unique. Suitable P, Q, U, and V can be obtained by applying the Gram–Schmidt process to the columns of $I + R^{R}$, $I - R^{R}$, $I + S^{R}$, and $I - S^{R}$, respectively. Set
$\hat{P} = \frac{P^{*}(I + R^{R})}{2}, \quad \hat{Q} = \frac{Q^{*}(I - R^{R})}{2}, \quad \hat{U} = \frac{U^{*}(I + S^{R})}{2}, \quad \hat{V} = \frac{V^{*}(I - S^{R})}{2}.$
Then, we obtain
$\hat{P}P = I, \quad \hat{P}Q = 0, \quad \hat{Q}P = 0, \quad \hat{Q}Q = I, \quad \hat{U}U = I, \quad \hat{U}V = 0, \quad \hat{V}U = 0, \quad \hat{V}V = I.$
In fact, $R^{R} = (R^{R})^{T}$ and $(S^{R})^{T} = S^{R}$ if and only if $P^{*}Q = 0$ and $U^{*}V = 0$. In this case,
$\hat{P} = P^{*}, \quad \hat{Q} = Q^{*}, \quad \hat{U} = U^{*}, \quad \hat{V} = V^{*},$
so $\begin{bmatrix} P & Q \end{bmatrix}$ and $\begin{bmatrix} U & V \end{bmatrix}$ are unitary.
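In practice, suitable P, Q, U, and V can be read off from the projectors $(I \pm R^{R})/2$ and $(I \pm S^{R})/2$. The following NumPy sketch is our own illustration; it uses an SVD-based orthonormalization in place of classical Gram–Schmidt, which produces an equally valid orthonormal basis.

```python
import numpy as np

def eigenspace_bases(R, tol=1e-10):
    """For an involutory matrix R (R @ R = I), return P, Q with
    orthonormal columns such that R @ P = P and R @ Q = -Q."""
    n = R.shape[0]
    def col_basis(M):
        # Orthonormal basis of the column space of M.
        U, s, _ = np.linalg.svd(M)
        return U[:, : int(np.sum(s > tol))]
    P = col_basis((np.eye(n) + R) / 2)   # projector onto the +1 eigenspace
    Q = col_basis((np.eye(n) - R) / 2)   # projector onto the -1 eigenspace
    return P, Q

R = np.diag([1.0, 1.0, -1.0, -1.0, -1.0])   # a nontrivial involution
P, Q = eigenspace_bases(R)
assert np.allclose(R @ P, P) and np.allclose(R @ Q, -Q)
```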
Theorem 16
([29]). Let $A \in \mathbb{H}^{e \times m}$, $B \in \mathbb{H}^{n \times f}$, and $C \in \mathbb{H}^{e \times f}$ be given, and denote
$A^{R} \begin{bmatrix} P & Q \end{bmatrix} = \begin{bmatrix} A_1 & A_2 \end{bmatrix}, \qquad \begin{bmatrix} \hat{U} \\ \hat{V} \end{bmatrix} B^{R} = \begin{bmatrix} B_1 \\ B_2 \end{bmatrix};$
then we have the following conclusions.
1.
The matrix Equation (1) has an (R, S)-symmetric solution X if and only if $A^{R} Y B^{R} = C^{R}$ has an $(R^{R}, S^{R})$-symmetric solution Y, or, equivalently,
$r\begin{bmatrix} A_1 & A_2 & C^{R} \end{bmatrix} = r\begin{bmatrix} A_1 & A_2 \end{bmatrix}, \quad r\begin{bmatrix} B_1 \\ B_2 \\ C^{R} \end{bmatrix} = r\begin{bmatrix} B_1 \\ B_2 \end{bmatrix}, \quad r\begin{bmatrix} C^{R} & A_1 \\ B_2 & 0 \end{bmatrix} = r\begin{bmatrix} 0 & A_1 \\ B_2 & 0 \end{bmatrix}, \quad r\begin{bmatrix} C^{R} & A_2 \\ B_1 & 0 \end{bmatrix} = r\begin{bmatrix} 0 & A_2 \\ B_1 & 0 \end{bmatrix}.$
In this case, the general (R, S)-symmetric solution is provided by
X = 1 16 I m i I m j I m k I m ( Y + Q m Y Q n T + G m Y G n T + T m Y T n T ) I n i I n j I n k I n ,
where $Q_n$, $G_n$, $T_n$ are defined as in (5), and
$Y = \begin{bmatrix} P & Q \end{bmatrix} \begin{bmatrix} Y_1 & 0 \\ 0 & Y_2 \end{bmatrix} \begin{bmatrix} \hat{U} \\ \hat{V} \end{bmatrix}$
with
$Y_1 = \hat{Y}_1 + M_1 L_{K_1} U_1 R_{K_2} N_1 + L_{A_1} W_1 + W_2 R_{B_1}, \qquad Y_2 = \hat{Y}_2 + M_2 L_{K_1} U_1 R_{K_2} N_2 + L_{A_2} V_1 + V_2 R_{B_2}.$
Here
$M_1 = \begin{bmatrix} I_r & 0 \end{bmatrix}, \quad M_2 = \begin{bmatrix} 0 & I_s \end{bmatrix}, \quad N_1 = \begin{bmatrix} I_k \\ 0 \end{bmatrix}, \quad N_2 = \begin{bmatrix} 0 \\ I_l \end{bmatrix}, \quad K_1 = \begin{bmatrix} A_1 & A_2 \end{bmatrix}, \quad K_2 = \begin{bmatrix} B_1 \\ B_2 \end{bmatrix},$
and $U_1, V_i, W_i$ $(i = 1, 2)$ are arbitrary matrices, while $(\hat{Y}_1, \hat{Y}_2)$ is a pair of solutions of the matrix equation $A_1 Y_1 B_1 + A_2 Y_2 B_2 = C^{R}$.
2.
The matrix Equation (1) has an (R, S)-skew symmetric solution X if and only if $A^{R} Y B^{R} = C^{R}$ has an $(R^{R}, S^{R})$-skew symmetric solution Y, or, equivalently,
$r\begin{bmatrix} A_2 & A_1 & C^{R} \end{bmatrix} = r\begin{bmatrix} A_2 & A_1 \end{bmatrix}, \quad r\begin{bmatrix} B_1 \\ B_2 \\ C^{R} \end{bmatrix} = r\begin{bmatrix} B_1 \\ B_2 \end{bmatrix}, \quad r\begin{bmatrix} C^{R} & A_2 \\ B_2 & 0 \end{bmatrix} = r\begin{bmatrix} 0 & A_2 \\ B_2 & 0 \end{bmatrix}, \quad r\begin{bmatrix} C^{R} & A_1 \\ B_1 & 0 \end{bmatrix} = r\begin{bmatrix} 0 & A_1 \\ B_1 & 0 \end{bmatrix}.$
In this case, the general (R, S)-skew symmetric solution is given by
X = 1 16 I m i I m j I m k I m ( Y + Q m Y Q n T + G m Y G n T + T m Y T n T ) I n i I n j I n k I n ,
where
$Y = \begin{bmatrix} P & Q \end{bmatrix} \begin{bmatrix} 0 & Y_3 \\ Y_4 & 0 \end{bmatrix} \begin{bmatrix} \hat{U} \\ \hat{V} \end{bmatrix}$
and
$Y_3 = \hat{Y}_3 + M_3 L_{K_1} U_2 R_{K_2} N_2 + L_{A_1} W_3 + W_4 R_{B_2}, \qquad Y_4 = \hat{Y}_4 + M_4 L_{K_1} U_2 R_{K_2} N_1 + L_{A_2} V_3 + V_4 R_{B_1}.$
Here
$M_3 = \begin{bmatrix} I_r & 0 \end{bmatrix}, \quad M_4 = \begin{bmatrix} 0 & I_s \end{bmatrix}, \quad N_1 = \begin{bmatrix} I_k \\ 0 \end{bmatrix}, \quad N_2 = \begin{bmatrix} 0 \\ I_l \end{bmatrix}, \quad K_1 = \begin{bmatrix} A_1 & A_2 \end{bmatrix}, \quad K_2 = \begin{bmatrix} B_1 \\ B_2 \end{bmatrix},$
and $U_2, V_i, W_i$ $(i = 3, 4)$ are arbitrary matrices, while $(\hat{Y}_3, \hat{Y}_4)$ is a pair of solutions of the matrix equation $A_2 Y_4 B_1 + A_1 Y_3 B_2 = C^{R}$.
Additionally, Liao et al. utilized the vec operator to study the least squares ( R , S ) -(skew) symmetric solution of the matrix Equation (1). Since the solution method is similar to Herrero’s approach for solving the { P , k + 1 } reflexive solutions of the same matrix equation, it will not be described here. For details, please refer to reference [29].

3.5. Other Solutions with Specific Structures

In 2018, Yang et al. [30] studied the solutions of matrix Equation (1) under the constraints of general Toeplitz matrices, upper triangular Toeplitz matrices, lower triangular Toeplitz matrices, symmetric Toeplitz matrices, and Hankel matrices. We know that a general Toeplitz matrix $A \in \mathbb{R}^{n \times n}$ can be written as
$A = \sum_{l=-(n-1)}^{n-1} a_l \Delta_l,$
where $a_l \in \mathbb{R}$ and
$(\Delta_l)_{ij} = \begin{cases} 1, & j = i + l, \\ 0, & \text{otherwise}, \end{cases} \qquad l = -(n-1), \ldots, n-1.$
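As a concrete illustration (ours, not code from [30]), the basis matrices $\Delta_l$ and a Toeplitz matrix assembled from them can be generated as follows.

```python
import numpy as np

def toeplitz_basis(n, l):
    """Delta_l: ones on the l-th diagonal (j = i + l), zeros elsewhere."""
    D = np.zeros((n, n))
    i = np.arange(n)
    mask = (i + l >= 0) & (i + l < n)
    D[i[mask], i[mask] + l] = 1.0
    return D

n = 4
A = sum(float(l) * toeplitz_basis(n, l) for l in range(-(n - 1), n))  # a_l = l
i, j = np.indices((n, n))
assert np.allclose(A, (j - i).astype(float))   # A is Toeplitz with entry a_{j-i}
```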
Theorem 17
([30]). Suppose that the matrix Equation (1) is consistent. Then, we have the following conclusions.
1.
Its general Toeplitz solution is given by $X = \sum_{l=-(n-1)}^{n-1} a_l \Delta_l$, where
$(a_{-(n-1)}, \ldots, a_0, \ldots, a_{n-1})^{T} = N^{\dagger} b + L_N w,$
and w is an arbitrary real vector. Here, N is the $(2n-1) \times (2n-1)$ matrix with (p, q) entry
$N_{pq} = \frac{\operatorname{tr}(B^{T} \Delta_p^{T} A^{T} A \Delta_q B) + \operatorname{tr}(B^{T} \Delta_q^{T} A^{T} A \Delta_p B)}{2}, \qquad p, q = -(n-1), \ldots, n-1,$
and
$b = \left( \operatorname{tr}(C^{T} A \Delta_{-(n-1)} B),\; \operatorname{tr}(C^{T} A \Delta_{-(n-2)} B),\; \ldots,\; \operatorname{tr}(C^{T} A \Delta_{n-1} B) \right)^{T}.$
(A numerical sketch of this subspace least squares computation follows the theorem.)
Specifically, when $l, p, q = 0, \ldots, n-1$, we can obtain the upper triangular Toeplitz solution of this matrix equation. Similarly, when $l, p, q = -(n-1), \ldots, 0$, we can also obtain the lower triangular Toeplitz solution.
2.
Its symmetric Toeplitz solution is expressed as $X = \sum_{l=0}^{n-1} a_l \Delta_l$, where
$(\Delta_l)_{ij} = \begin{cases} 1, & l = |j - i|, \\ 0, & \text{otherwise}, \end{cases} \qquad l = 0, \ldots, n-1,$
and the method for finding $a_l$ is the same as for Equation (15), except that $p, q = 0, \ldots, n-1$.
3.
Its Hankel solution is provided by $X = \sum_{l=1}^{2n-1} a_l \Delta_l$, where
$(\Delta_l)_{ij} = \begin{cases} 1, & l + 1 = j + i, \\ 0, & \text{otherwise}, \end{cases} \qquad l = 1, \ldots, 2n-1,$
and the procedure to determine $a_l$ is similar to that in Equation (15), except that p and q range from 1 to $2n-1$.
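All three cases reduce to a small least squares problem in the coefficients $a_l$. The sketch below is our own vectorized subspace formulation rather than the exact algorithm of [30]: the columns $\operatorname{vec}(A \Delta_l B)$ form a design matrix, and the coefficients follow from one least squares solve (toeplitz_basis is reused from the sketch above).

```python
import numpy as np

def toeplitz_lsq_solution(A, B, C):
    """Least squares Toeplitz solution of A X B = C: minimize
    ||A (sum_l a_l Delta_l) B - C||_F over the coefficients a_l."""
    n = A.shape[1]
    basis = [toeplitz_basis(n, l) for l in range(-(n - 1), n)]
    G = np.column_stack([(A @ D @ B).ravel(order="F") for D in basis])
    a, *_ = np.linalg.lstsq(G, C.ravel(order="F"), rcond=None)
    return sum(ai * D for ai, D in zip(a, basis))

rng = np.random.default_rng(1)
A, B = rng.standard_normal((6, 4)), rng.standard_normal((4, 5))
C = A @ toeplitz_basis(4, 1) @ B               # a consistent instance
X = toeplitz_lsq_solution(A, B, C)
assert np.allclose(A @ X @ B, C, atol=1e-8)
```

Restricting the range of l to $0, \ldots, n-1$ or $-(n-1), \ldots, 0$ yields the upper and lower triangular Toeplitz variants in the same way.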
In 2022, Zhang et al. [31] provided the column unitary solution to matrix Equation (1) using singular value decomposition and spectral decomposition. For A C m × n , the SVD of A can be expressed as
$A = \begin{bmatrix} P_1 & P_2 \end{bmatrix} \begin{bmatrix} \Sigma_A & 0 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} U_1^{*} \\ U_2^{*} \end{bmatrix},$
where $\Sigma_A$ is a diagonal matrix composed of the non-zero singular values of matrix A, and $\begin{bmatrix} P_1 & P_2 \end{bmatrix}$, $\begin{bmatrix} U_1 & U_2 \end{bmatrix}$ are unitary matrices. If $D \in \mathbb{C}^{k \times k}$ satisfies $D \succeq 0$, then the spectral decomposition of D is given by
$D = \begin{bmatrix} Q_1 & Q_2 \end{bmatrix} \begin{bmatrix} \Delta_D & 0 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} Q_1^{*} \\ Q_2^{*} \end{bmatrix},$
where $\Delta_D$ is a diagonal matrix composed of the non-zero eigenvalues of matrix D and $\begin{bmatrix} Q_1 & Q_2 \end{bmatrix}$ is a unitary matrix.
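Both partitioned decompositions are straightforward to produce numerically. The sketch below (ours) returns the factors in the notation above, with $\Sigma_A$ and $\Delta_D$ collecting the non-zero singular values and eigenvalues, respectively.

```python
import numpy as np

def partitioned_svd(A, tol=1e-10):
    """A = [P1 P2] @ [[Sigma_A, 0], [0, 0]] @ [U1 U2]*."""
    P, s, Vh = np.linalg.svd(A)
    r = int(np.sum(s > tol))
    return P[:, :r], P[:, r:], np.diag(s[:r]), Vh[:r, :].conj().T, Vh[r:, :].conj().T

def partitioned_spectral(D, tol=1e-10):
    """For Hermitian D >= 0: D = [Q1 Q2] @ [[Delta_D, 0], [0, 0]] @ [Q1 Q2]*."""
    w, Q = np.linalg.eigh(D)
    idx = np.argsort(-w)               # positive eigenvalues first
    w, Q = w[idx], Q[:, idx]
    r = int(np.sum(w > tol))
    return Q[:, :r], Q[:, r:], np.diag(w[:r])

A = np.random.default_rng(7).standard_normal((4, 3))
P1, P2, SA, U1, U2 = partitioned_svd(A)
assert np.allclose(P1 @ SA @ U1.conj().T, A)
```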
Theorem 18
([31]). Let $A \in \mathbb{C}^{m \times n}$, $B \in \mathbb{C}^{l \times k}$, $C \in \mathbb{C}^{m \times k}$ be given with $n \ge l$. Set
$D = B^{*}B - C^{*}(AA^{*})^{\dagger}C.$
Then the matrix Equation (1) has a column unitary solution $X \in \mathbb{C}^{n \times l}$ if and only if
$R_A C = 0, \qquad D \succeq 0, \qquad r(D) \le n - r(A).$
In this case, the general column unitary solution is provided by
$X = U_3 \begin{bmatrix} I & 0 \\ 0 & G \end{bmatrix} P_3^{*},$
where G is an arbitrary column unitary matrix and $U_3, P_3$ are derived from the singular value decomposition of the matrix $E = (A^{\dagger}C + U_2 J_1 \Delta_D^{1/2} Q_1^{*}) B^{*}$:
$E = U_3 \begin{bmatrix} \Sigma_E & 0 \\ 0 & 0 \end{bmatrix} P_3^{*}.$
Here, $J_1$ is an arbitrary column unitary matrix and $\Sigma_E$ is a diagonal matrix formed by the non-zero singular values of the matrix E.
Tensors, as higher-order generalizations of matrices, have a wide range of applications. Xie et al. [13] studied the K-reducible solutions of the quaternion tensor equation $A *_M X *_M B = C$ under the Einstein product.
Theorem 19
([13]). Assume that A H J 1 × × J M × K 1 × × K M , B H K 1 × × K M × L 1 × × L M , C H J 1 × × J M × L 1 × × L M are given and K H K 1 × × K M × K 1 × × K M is a permutation tensor with K i = N i + P i . Set
A M K : = A 1 A 2 , K 1 M B : = B 1 B 2 , A 1 H J 1 × × J M × N 1 × × N M , A 2 H J 1 × × J M × P 1 × × P M , B 1 H N 1 × × N M × L 1 × × L M , B 2 H P 1 × × P M × L 1 × × L M , B 3 = B 1 M L B 2 , A 3 = R A 1 M A 2 , C 1 = C M L B 2 + R A 1 M C , B 4 = B 2 M L B 3 .
Then the quaternion tensor equation A M X M B = C has a K-reducible solution if and only if
$R_{A_3} *_M R_{A_1} *_M C_1 = 0, \quad C_1 *_M L_{B_3} *_M L_{B_4} = 0, \quad R_{A_1} *_M C_1 *_M L_{B_2} = 0, \quad R_{A_3} *_M C_1 *_M L_{B_3} = 0.$
In this case, the general K-reducible solution is given by
$X = K *_M \begin{bmatrix} X_1 & X_2 \\ 0 & X_3 \end{bmatrix} *_M K^{-1},$
where
X 1 = A 1 M C 1 M B 3 + L A 1 M W 1 + W 2 M R B 3 , X 2 = A 1 M ( C A 1 M X 1 M B 1 A 2 M X 3 M B 2 ) M B 2 + W 3 M R B 2 L A 1 M W 4 , X 3 = A 3 M C 1 M B 2 + L A 3 M W 5 + W 6 M R B 2 ,
where W i ( i = 1 , , 6 ) are arbitrary quaternion tensors.

3.6. Positive Solutions

In previous studies, A, B, and C in the equation A X B = C were all finite-dimensional matrices. However, in 2010, Arias et al. [68] investigated a more general case where A, B, and C are bounded linear operators acting on suitable Hilbert spaces, with the underlying vector spaces being infinite-dimensional. The symbols F, G, H, and K represent complex Hilbert spaces equipped with the inner product $\langle \cdot, \cdot \rangle$. Let $L(F, G)$ represent the set of bounded linear operators from F to G and $L(F) = L(F, F)$. The set $L(F)^{+} \subseteq L(F)$ denotes the cone of positive operators, defined as $L(F)^{+} := \{A \in L(F) : \langle A\delta, \delta \rangle \ge 0,\ \forall \delta \in F\}$. For any $A \in L(F, G)$, $A^{*}$ denotes the adjoint operator of A. For a closed subspace $V \subseteq F$, $P_V$ represents the orthogonal projection onto V, and $P_V|_V$ denotes the restriction of $P_V$ to V.
Theorem 20
([68]). If $A \in L(H, K)$, $B \in L(G, H)$, $C \in L(G, K)$, and they satisfy $R(B) \subseteq \overline{R(A^{*})}$, then the following descriptions are equivalent.
1.
The equation A X B = C is consistent.
2.
There exists a positive operator $\hat{X} \in L(H)^{+}$ such that $A\hat{X}B = C$.
3.
There exists a positive operator $\hat{Y} \in L(H)^{+}$ such that $\hat{Y}B = A^{\dagger}C$.
4.
The operator $B^{*}A^{\dagger}C$ is nonnegative, and $R\big( (A^{\dagger}C)^{*} \big) \subseteq R\big( (B^{*}A^{\dagger}C)^{1/2} \big)$.
In this case, the general positive solution can be expressed as
$\hat{X} = \begin{bmatrix} \hat{x}_{11} & \hat{x}_{12} \\ \hat{x}_{12}^{*} & \big( (\hat{x}_{11}^{1/2})^{\dagger} \hat{x}_{12} \big)^{*} \big( (\hat{x}_{11}^{1/2})^{\dagger} \hat{x}_{12} \big) + l \end{bmatrix},$
where $R(\hat{x}_{12}) \subseteq R(\hat{x}_{11}^{1/2})$ and l is positive. Also,
$\hat{x}_{11} = P_{\overline{R(A^{*})}}\, \hat{Y} \big|_{\overline{R(A^{*})}}.$
Here, $\hat{Y} \in L(H)^{+}$ and obeys $\hat{Y}B = A^{\dagger}C$.
Arias et al. provided several equivalent solvability conditions and corresponding expressions for the positive solution of the operator equation A X B = C under the condition $R(B) \subseteq \overline{R(A^{*})}$. In 2019, Cvetković-Ilić [69] removed the condition $R(B) \subseteq \overline{R(A^{*})}$ and, using the results of Douglas [88] and Sebestyén [89], presented several equivalent solvability conditions and corresponding expressions for the positive solution of this operator equation.
Theorem 21
([69]). Suppose that A L ( H , K ) , B L ( G , H ) , and C L ( G , K ) . Then, we have the following conclusions.
1.
The following statements are equivalent.
(a) 
The operator equation A X B = C has a positive solution.
(b) 
There exists a real number μ > 0 and $Y \in L(H)$ such that, for every $x \in G$,
$\|A^{\dagger}C x\|^{2} + \|L_A Y B x\|^{2} \le \mu \left\langle \left( B^{*}A^{\dagger}C + B^{*}L_A Y B \right) x,\; x \right\rangle.$
(c) 
There exists a positive operator $D = D_1 D_1^{*} \in L(G)$ such that $R((A^{\dagger}C)^{*}) \subseteq R(D_1)$ and the equation
$B^{*}A^{\dagger}C + B^{*}L_A Y B = D$
is consistent.
(d) 
There exists $Y \in L(H)$ and a real number δ > 0 such that
$B^{*}A^{\dagger}C - \delta\, (A^{\dagger}C)^{*}(A^{\dagger}C) + B^{*}L_A Y B \succeq 0.$
2.
Specifically, if R ( B * L A ) is closed, we have the following equivalent descriptions.
(a) 
The operator equation A X B = C is consistent;
(b) 
$(I - Q)\, B^{*}A^{\dagger}C\, (I - Q) \succeq 0$;
(c) 
$R\big( (I - Q) B^{*}A^{\dagger}C \big) \subseteq R(E)$;
(d) 
$R\big( (I - Q)(A^{\dagger}C)^{*} \big) \subseteq R(E)$;
where $Q = B^{*}L_A (B^{*}L_A)^{\dagger}$ and $E = \big( (I - Q) B^{*}A^{\dagger}C (I - Q) \big)^{1/2}$. In this case, the general positive solution is expressed as
X = A C B + B * L A W B * A C B + U ( I S ) U B B ,
where
S = ( B * L A ) B * L A , D 12 = ( I Q ) B * A C Q , W = ( I Q ) B * A C + Q B * A C * ( I Q ) + E D 12 * E D 12 + Q F Q .
Here, U L H and F L G + satisfy the conditions
R ( I S ) L A U B * R E + E D 12 * + ( Q F Q ) 1 2
and
R Q A C * E D 12 * E A C * R ( Q F Q ) 1 2 ,
respectively.

3.7. Rank Solutions

In 1974, Marsaglia and Styan [90] presented several equalities and inequalities related to matrix ranks, including the following lemma.
Lemma 1.
Assume that A, B, C, D, and E are given with appropriate sizes over $\mathbb{C}$; then
$r\begin{bmatrix} A & BL_C \\ R_D E & 0 \end{bmatrix} = r\begin{bmatrix} A & B & 0 \\ E & 0 & D \\ 0 & C & 0 \end{bmatrix} - r(C) - r(D).$
Lemma 1 plays an important role in the study of matrix equations. For instance, it can be used to investigate the extreme ranks of solutions to matrix Equation (1), as well as to provide necessary and sufficient conditions for the existence of solutions. Traditionally, determining whether a matrix equation has a solution requires computing the Moore–Penrose inverse of a matrix, but with this lemma, one only needs to compute the rank of the matrix. From a practical standpoint, this significantly reduces computational costs. In addition, this lemma can be easily generalized to H .
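Lemma 1 is also easy to sanity-check numerically. In the sketch below (ours), $L_C = I - C^{\dagger}C$ and $R_D = I - DD^{\dagger}$, and the two sides of the identity are compared on random matrices of compatible sizes.

```python
import numpy as np

rng = np.random.default_rng(2)
r, pinv = np.linalg.matrix_rank, np.linalg.pinv

# A: m x n, B: m x p, C: q x p, D: s x t, E: s x n.
m, n, p, q, s, t = 4, 5, 3, 2, 3, 2
A, B = rng.standard_normal((m, n)), rng.standard_normal((m, p))
C, D = rng.standard_normal((q, p)), rng.standard_normal((s, t))
E = rng.standard_normal((s, n))

L_C = np.eye(p) - pinv(C) @ C        # projector onto the null space of C
R_D = np.eye(s) - D @ pinv(D)        # projector onto the left null space of D

lhs = r(np.block([[A, B @ L_C], [R_D @ E, np.zeros((s, p))]]))
rhs = r(np.block([[A, B, np.zeros((m, t))],
                  [E, np.zeros((s, p)), D],
                  [np.zeros((q, n)), C, np.zeros((q, t))]])) - r(C) - r(D)
assert lhs == rhs
```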
In 1979, Porter et al. [70] studied the number of rank-k solutions to matrix Equation (1) over the finite field GF($p^n$), where p is an odd prime. Let $q = p^n$. The number of $m \times n$ matrices of rank r over GF(q) is denoted by
$g(m, n, r) = q^{\frac{r(r-1)}{2}} \prod_{i=1}^{r} \frac{(q^{m-i+1} - 1)(q^{n-i+1} - 1)}{q^{i} - 1}.$
It is evident that $1 \le r \le \min\{m, n\}$. In particular, if r = 0, then $g(m, n, 0) = 1$.
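This counting function is simple to evaluate exactly with rational arithmetic; the sketch below (ours) also checks that summing $g(m, n, r)$ over all ranks recovers $q^{mn}$, the total number of $m \times n$ matrices over GF(q).

```python
from fractions import Fraction

def g(m, n, r, q):
    """Number of m x n matrices of rank r over GF(q); g(m, n, 0) = 1."""
    count = Fraction(q ** (r * (r - 1) // 2))
    for i in range(1, r + 1):
        count *= Fraction((q ** (m - i + 1) - 1) * (q ** (n - i + 1) - 1),
                          q ** i - 1)
    return int(count)

q, m, n = 3, 2, 3
assert sum(g(m, n, r, q) for r in range(min(m, n) + 1)) == q ** (m * n)
```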
Theorem 22
([70]). Let A , B , C be matrices of size s × m , f × t , and s × t , respectively, with ranks r ( A ) = ρ , r ( B ) = v , and r ( C ) = b . There exist nonsingular matrices P 1 , P 2 , Q 1 , Q 2 such that
$P_1 A Q_1 = \begin{bmatrix} I_{\rho} & 0 \\ 0 & 0 \end{bmatrix}, \qquad P_2 B Q_2 = \begin{bmatrix} I_{v} & 0 \\ 0 & 0 \end{bmatrix}.$
Then, the number of rank-r solutions to matrix Equation (1) over GF ( q ) is given by
$N_r = \delta(B_1) \sum_{i = r + v - f - b}^{\min\{v - b,\; m - \rho,\; r - b\}} q^{(m-\rho)b + (b+i)(f-v)}\; g(m-\rho,\, v-b,\, i)\; g(m-b-i,\, f-v,\, r-b-i).$
Furthermore, the number of solutions to this matrix equation can be expressed as
$N(A, B, C) = \sum_{r=b}^{\min\{m, f\}} N_r = q^{mf - \rho v}\, \delta(B_1),$
where $B_1 = P_1 C Q_2 = (l_{ij})$ and
$\delta(B_1) = \begin{cases} 1, & \text{if } l_{ij} = 0 \text{ whenever } i > \rho \text{ or } j > v, \\ 0, & \text{otherwise.} \end{cases}$
In 2007, Liu [71] used Lemma 1 to provide the maximal and minimal ranks of the least squares solutions to matrix Equation (1) over $\mathbb{C}$. In addition, the maximal and minimal ranks of the real and imaginary parts of the least squares solutions to this matrix equation were also given.
Theorem 23
([71]). If $A = A_0 + A_1 i$, $B = B_0 + B_1 i$, and $C = C_0 + C_1 i$ are given with appropriate sizes, and $X = X_0 + X_1 i \in \mathbb{C}^{m \times n}$ is a least squares solution to (1), then
$\max r(X) = \min\left\{ m,\; n,\; m + n + r(A^{*}CB^{*}) - r(A) - r(B) \right\}, \qquad \min r(X) = r(A^{*}CB^{*}).$
Furthermore, we can also provide the maximal and minimal ranks of X 0 and X 1 .
1.
The extreme ranks of X 0 are provided by
max r ( X 0 ) = min m , n , m + n + k 1 2 ( r ( A ) + r ( B ) ) , min r ( X 0 ) = k 1 r A 0 A 1 r B 0 B 1 ,
where
k 1 = r A 0 T A 1 T 0 A 1 T A 0 T 0 0 0 I C 0 C 1 A 0 C 1 C 0 A 1 B 0 B 1 0 B 0 T B 1 T 0 B 1 T B 0 T 0 0 0 I .
2.
The extreme ranks of X 1 are presented by
max r ( X 1 ) = min m , n , m + n + k 2 2 ( r ( A ) + r ( B ) ) , min r ( X 1 ) = k 2 r A 0 A 1 r B 0 B 1 ,
where
k 2 = r A 0 T A 1 T 0 A 1 T A 0 T 0 0 0 I C 0 C 1 A 0 C 1 C 0 A 1 B 1 B 0 0 B 0 T B 1 T 0 B 1 T B 0 T 0 0 0 I .
Subsequently, Zhang et al. [73] partitioned the least squares solutions of matrix Equation (1) into a 2 × 2 block form and provided the maximal and minimal ranks for each sub-block. Suppose $X \in \mathbb{C}^{m \times n}$ is a least squares solution of matrix Equation (1), partitioned as
$X = \begin{bmatrix} X_1 & X_2 \\ X_3 & X_4 \end{bmatrix},$
where
$X_1 = \begin{bmatrix} I_{m_1} & 0 \end{bmatrix} X \begin{bmatrix} I_{n_1} \\ 0 \end{bmatrix} := E_1 X F_1, \quad X_2 = \begin{bmatrix} I_{m_1} & 0 \end{bmatrix} X \begin{bmatrix} 0 \\ I_{n_2} \end{bmatrix} := E_1 X F_2, \quad X_3 = \begin{bmatrix} 0 & I_{m_2} \end{bmatrix} X \begin{bmatrix} I_{n_1} \\ 0 \end{bmatrix} := E_2 X F_1, \quad X_4 = \begin{bmatrix} 0 & I_{m_2} \end{bmatrix} X \begin{bmatrix} 0 \\ I_{n_2} \end{bmatrix} := E_2 X F_2,$
and $m_1 + m_2 = m$, $n_1 + n_2 = n$.
Theorem 24
([73]). Assume that A, B, and C are given with suitable dimensions over $\mathbb{C}$, and
$X = \begin{bmatrix} X_1 & X_2 \\ X_3 & X_4 \end{bmatrix}$
is a least square solution of the matrix Equation (1). Then, the maximal and minimal ranks of X 1 , X 2 , X 3 , and X 4 can be expressed as follows:
$\min r(X_1) = r\begin{bmatrix} A^{*}CB^{*} & A^{*}A & 0 \\ BB^{*} & 0 & F_1 \\ 0 & E_1 & 0 \end{bmatrix} - r\begin{bmatrix} A \\ E_1 \end{bmatrix} - r\begin{bmatrix} B & F_1 \end{bmatrix}, \qquad \max r(X_1) = \min\left\{ m_1,\; n_1,\; r\begin{bmatrix} A^{*}CB^{*} & A^{*}A & 0 \\ BB^{*} & 0 & F_1 \\ 0 & E_1 & 0 \end{bmatrix} - r(A) - r(B) \right\},$
$\min r(X_2) = r\begin{bmatrix} A^{*}CB^{*} & A^{*}A & 0 \\ BB^{*} & 0 & F_2 \\ 0 & E_1 & 0 \end{bmatrix} - r\begin{bmatrix} A \\ E_1 \end{bmatrix} - r\begin{bmatrix} B & F_2 \end{bmatrix}, \qquad \max r(X_2) = \min\left\{ m_1,\; n_2,\; r\begin{bmatrix} A^{*}CB^{*} & A^{*}A & 0 \\ BB^{*} & 0 & F_2 \\ 0 & E_1 & 0 \end{bmatrix} - r(A) - r(B) \right\},$
$\min r(X_3) = r\begin{bmatrix} A^{*}CB^{*} & A^{*}A & 0 \\ BB^{*} & 0 & F_1 \\ 0 & E_2 & 0 \end{bmatrix} - r\begin{bmatrix} A \\ E_2 \end{bmatrix} - r\begin{bmatrix} B & F_1 \end{bmatrix}, \qquad \max r(X_3) = \min\left\{ m_2,\; n_1,\; r\begin{bmatrix} A^{*}CB^{*} & A^{*}A & 0 \\ BB^{*} & 0 & F_1 \\ 0 & E_2 & 0 \end{bmatrix} - r(A) - r(B) \right\},$
$\min r(X_4) = r\begin{bmatrix} A^{*}CB^{*} & A^{*}A & 0 \\ BB^{*} & 0 & F_2 \\ 0 & E_2 & 0 \end{bmatrix} - r\begin{bmatrix} A \\ E_2 \end{bmatrix} - r\begin{bmatrix} B & F_2 \end{bmatrix}, \qquad \max r(X_4) = \min\left\{ m_2,\; n_2,\; r\begin{bmatrix} A^{*}CB^{*} & A^{*}A & 0 \\ BB^{*} & 0 & F_2 \\ 0 & E_2 & 0 \end{bmatrix} - r(A) - r(B) \right\}.$
In 2010, Wang et al. [72] extended the study of extremal ranks for solutions of the matrix Equation (1) to quaternions. They utilized the real representation of quaternion matrices to determine the maximal and minimal ranks of the four real matrices in the quaternion solutions to this matrix equation. For convenience of description, we denote the real representation of the quaternion matrix $B = B_1 + B_2 i + B_3 j + B_4 k \in \mathbb{H}^{n \times k}$ as follows:
$B^{R} = \begin{bmatrix} B_1 & B_2 & B_3 & B_4 \\ -B_2 & B_1 & -B_4 & B_3 \\ -B_3 & B_4 & B_1 & -B_2 \\ -B_4 & -B_3 & B_2 & B_1 \end{bmatrix} := \begin{bmatrix} L_1 \\ L_2 \\ L_3 \\ L_4 \end{bmatrix},$
where $L_i$ denotes the i-th block row.
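The real representation is mechanical to construct. The sketch below (ours) fixes one standard sign convention — [72] may employ an equivalent variant — and checks multiplicativity on the quaternion units, where $i \cdot j = k$.

```python
import numpy as np

def real_rep(B1, B2, B3, B4):
    """Real representation of the quaternion matrix B1 + B2 i + B3 j + B4 k
    (one standard sign convention)."""
    return np.block([[ B1,  B2,  B3,  B4],
                     [-B2,  B1, -B4,  B3],
                     [-B3,  B4,  B1, -B2],
                     [-B4, -B3,  B2,  B1]])

one, zero = np.ones((1, 1)), np.zeros((1, 1))
i_R = real_rep(zero, one, zero, zero)
j_R = real_rep(zero, zero, one, zero)
k_R = real_rep(zero, zero, zero, one)
assert np.allclose(i_R @ j_R, k_R)       # the representation respects i j = k
```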
Theorem 25
([72]). Let A = A 1 + A 2 i + A 3 j + A 4 k , B = B 1 + B 2 i + B 3 j + B 4 k , and C = C 1 + C 2 i + C 3 j + C 4 k be given with appropriate size. Set
D = A 2 A 3 A 4 A 1 A 4 A 3 A 4 A 1 A 2 A 3 A 2 A 1 , F 1 = L 2 L 3 L 4 , F 2 = L 1 L 3 L 4 , F 3 = L 1 L 2 L 4 , F 4 = L 1 L 2 L 3 .
Then the quaternion matrix Equation (1) has a solution $X = X_1 + X_2 i + X_3 j + X_4 k \in \mathbb{H}^{m \times n}$ if and only if the real matrix equation
$A^{R} \begin{bmatrix} Z_{11} & Z_{12} & Z_{13} & Z_{14} \\ Z_{21} & Z_{22} & Z_{23} & Z_{24} \\ Z_{31} & Z_{32} & Z_{33} & Z_{34} \\ Z_{41} & Z_{42} & Z_{43} & Z_{44} \end{bmatrix} B^{R} = C^{R} \quad (16)$
is solvable over $\mathbb{R}$. Here, the general solution is given by
$X = \frac{1}{4}\left( Z_{11} + Z_{22} + Z_{33} + Z_{44} \right) + \frac{1}{4}\left( Z_{12} - Z_{21} + Z_{34} - Z_{43} \right) i + \frac{1}{4}\left( Z_{13} - Z_{31} + Z_{42} - Z_{24} \right) j + \frac{1}{4}\left( Z_{14} - Z_{41} + Z_{23} - Z_{32} \right) k,$
where $Z_{ij}$ $(i, j = 1, \ldots, 4)$ are the general solutions of (16). In this case, the maximal and minimal ranks of $X_i$ $(i = 1, \ldots, 4)$ can be expressed as
$\max r(X_i) = \min\left\{ m,\; n,\; m + n + r\begin{bmatrix} 0 & F_i \\ D & C^{R} \end{bmatrix} - 4(r(A) + r(B)) \right\}, \qquad \min r(X_i) = r\begin{bmatrix} 0 & F_i \\ D & C^{R} \end{bmatrix} - r(D) - r(F_i).$

3.8. General Solution Under Specific Conditions

In traditional matrix multiplication, there are dimensional requirements concerning the rows and columns of the matrices involved. However, in the context of the semi-tensor product, matrices of arbitrary dimensions can be multiplied. The semi-tensor product finds significant applications in various fields such as networked evolutionary games [91], dynamical games [92], and Boolean networks [93,94]. Therefore, in 2019, Ji et al. [67] studied the solvability conditions of matrix Equation (1) under the semi-tensor product.
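Recall that the (left) semi-tensor product of $A \in \mathbb{R}^{m \times n}$ and $B \in \mathbb{R}^{p \times q}$ is $A \ltimes B = (A \otimes I_{t/n})(B \otimes I_{t/p})$ with $t = \operatorname{lcm}(n, p)$, which reduces to the usual product when n = p. A minimal NumPy sketch (ours):

```python
import numpy as np
from math import lcm

def stp(A, B):
    """Left semi-tensor product (A kron I_{t/n}) @ (B kron I_{t/p}), t = lcm(n, p)."""
    n, p = A.shape[1], B.shape[0]
    t = lcm(n, p)
    return np.kron(A, np.eye(t // n)) @ np.kron(B, np.eye(t // p))

A, B = np.arange(6.0).reshape(2, 3), np.arange(12.0).reshape(3, 4)
assert np.allclose(stp(A, B), A @ B)     # n == p: ordinary matrix product
```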
Theorem 26
([67]). Suppose $A \in \mathbb{R}^{m \times n}$, $B \in \mathbb{R}^{e \times f}$, and $C \in \mathbb{R}^{l \times k}$ are given. When m = l, the matrix equation A X B = C has a solution $X \in \mathbb{R}^{g \times r}$ if and only if the matrix equation
$(B^{T} \otimes I_{\frac{km}{f}})(I_r \otimes \hat{A}) \operatorname{vec}(X) = \operatorname{vec}(C)$
is consistent, where
$\hat{A} = \begin{bmatrix} A_1 & A_{\beta+1} & \cdots & A_{(g-1)\beta+1} \\ A_2 & A_{\beta+2} & \cdots & A_{(g-1)\beta+2} \\ \vdots & \vdots & & \vdots \\ A_{\beta} & A_{2\beta} & \cdots & A_{g\beta} \end{bmatrix},$
and $A_i$ is the i-th column of matrix A. Here β is a common factor of n and $\frac{ek}{f}$, satisfying $g = \frac{n}{\beta}$ and $r = \frac{ek}{f\beta}$.
Remark 8.
When $m \neq l$, Reference [67] provides only a necessary condition for the solvability of Equation (1). This condition is not discussed here; interested readers can refer to that document for more details.
Prior research on matrix Equation (1) was mostly conducted over $\mathbb{R}$, $\mathbb{C}$, or $\mathbb{H}$, and the corresponding conclusions do not necessarily hold over specific rings. Therefore, in 2020, Prokip [87] provided the necessary and sufficient conditions for the solvability of this matrix equation, as well as the general solution, over a principal ideal domain. Let R be a principal ideal ring with a unity. The set of invertible matrices in $R^{n \times n}$ is denoted by GL(n, R). For $A \in R^{m \times n}$ with r(A) = k, the Smith normal form of A is given by
$T_A = P_A A Q_A = \begin{bmatrix} T(A)_k & 0 \\ 0 & 0 \end{bmatrix},$
where $P_A \in GL(m, R)$, $Q_A \in GL(n, R)$, and $T(A)_k = \operatorname{diag}(a_1, \ldots, a_k)$ with $a_i \mid a_{i+1}$ $(i = 1, \ldots, k-1)$.
Theorem 27
([87]). Assume that A R m × n , B R k × l , and C R m × l . The Smith normal forms of matrices A and B are given by
$P_A A Q_A = \begin{bmatrix} T(A)_p & 0 \\ 0 & 0 \end{bmatrix}, \qquad P_B B Q_B = \begin{bmatrix} T(B)_q & 0 \\ 0 & 0 \end{bmatrix}.$
If the matrix C satisfies
$P_A C Q_B = \begin{bmatrix} T(A)_p\, G\, T(B)_q & 0 \\ 0 & 0 \end{bmatrix}, \qquad G \in R^{p \times q},$
then the general solution of the matrix Equation (1) can be expressed as
$X = Q_A \begin{bmatrix} G & D_{12} \\ D_{21} & D_{22} \end{bmatrix} P_B \in R^{n \times k},$
where $D_{12}$, $D_{21}$, and $D_{22}$ are arbitrary matrices of appropriate sizes over R.
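Over $\mathbb{Z}$, a concrete principal ideal domain, the Smith normal form in Theorem 27 can be computed with SymPy. A small sketch (ours; the example matrix is arbitrary):

```python
from sympy import Matrix, ZZ
from sympy.matrices.normalforms import smith_normal_form

A = Matrix([[2, 4, 4],
            [-6, 6, 12],
            [10, 4, 16]])
D = smith_normal_form(A, domain=ZZ)   # diag(d1, d2, d3) with d1 | d2 | d3
print(D)
```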
Dual quaternions, dual split quaternions, and dual generalized commutative quaternions are extensions of quaternions, split quaternions, and generalized commutative quaternions, respectively, with significant applications in screw motions, computer graphics, rigid body motions, and robotics (see [95,96,97,98,99,100]). Notably, dual quaternions play a crucial role in the control of unmanned aerial vehicles and small satellites. Therefore, Chen et al. [64], Si et al. [65], and Shi et al. [66] provided several equivalent conditions for the solvability of matrix Equation (1) in the context of dual quaternions, dual split quaternions, and dual generalized commutative quaternions, along with expressions for the general solution.
Theorem 28
([64]). Let $A = A_0 + A_1 \epsilon \in \mathbb{DQ}^{a \times b}$, $B = B_0 + B_1 \epsilon \in \mathbb{DQ}^{c \times d}$, and $C = C_0 + C_1 \epsilon \in \mathbb{DQ}^{a \times d}$ be given. Set
$A_2 = A_1 L_{A_0}, \quad C_{11} = A_0 A_0^{\dagger} C_0 B_0^{\dagger} B_1, \quad B_2 = R_{B_0} B_1, \quad C_{22} = A_1 A_0^{\dagger} C_0 B_0^{\dagger} B_0,$
$C_3 = C_1 - C_{11} - C_{22}, \quad M_1 = R_{A_0} A_2, \quad N_1 = R_{A_0} C_3, \quad E_1 = B_2 L_{B_0}, \quad F_1 = C_3 L_{B_0}.$
Then the matrix Equation (1) is solvable if and only if
$R_{A_0} C_0 = 0, \quad R_{M_1} N_1 = 0, \quad C_0 L_{B_0} = 0, \quad F_1 L_{E_1} = 0, \quad R_{A_0} C_3 L_{B_0} = 0,$
or if they satisfy the following rank equalities.
$r\begin{bmatrix} A_0 & C_0 \end{bmatrix} = r(A_0), \quad r\begin{bmatrix} B_0 \\ C_0 \end{bmatrix} = r(B_0), \quad r\begin{bmatrix} C_1 & A_0 \\ B_0 & 0 \end{bmatrix} = r(A_0) + r(B_0), \quad r\begin{bmatrix} A_1 & C_1 & A_0 \\ A_0 & C_0 & 0 \end{bmatrix} = r\begin{bmatrix} A_1 & A_0 \\ A_0 & 0 \end{bmatrix}, \quad r\begin{bmatrix} B_1 & B_0 \\ C_1 & C_0 \\ B_0 & 0 \end{bmatrix} = r\begin{bmatrix} B_1 & B_0 \\ B_0 & 0 \end{bmatrix}.$
In this situation, the general solution can be expressed as X = X 0 + X 1 ϵ , where
$X_0 = A_0^{\dagger} C_0 B_0^{\dagger} + L_{A_0} U_1 + V_1 R_{B_0}, \quad X_1 = A_0^{\dagger}\left( C_3 - A_0 V_1 B_2 - A_2 U_1 B_0 \right) B_0^{\dagger} + L_{A_0} Z_1 + Z_2 R_{B_0},$
$U_1 = M_1^{\dagger} N_1 B_0^{\dagger} + L_{M_1} Z_3 + Z_4 R_{B_0}, \quad V_1 = A_0^{\dagger} F_1 E_1^{\dagger} + L_{A_0} Z_5 + Z_6 R_{E_1},$
and Z i ( i = 1 , , 6 ) are arbitrary matrices over H .
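The two-stage construction in Theorem 28 is easy to exercise numerically. In the sketch below (ours), real matrices stand in for the quaternion blocks — the Moore–Penrose formulas carry over verbatim — all arbitrary $Z_i$ are set to zero, and the right-hand side is built from a known dual solution so that the solvability conditions hold.

```python
import numpy as np

pinv = np.linalg.pinv
L = lambda M: np.eye(M.shape[1]) - pinv(M) @ M   # L_M = I - M^+ M
R = lambda M: np.eye(M.shape[0]) - M @ pinv(M)   # R_M = I - M M^+

rng = np.random.default_rng(3)
a, b, c, d = 3, 4, 5, 2
A0, A1 = rng.standard_normal((a, b)), rng.standard_normal((a, b))
B0, B1 = rng.standard_normal((c, d)), rng.standard_normal((c, d))
Xs, Xi = rng.standard_normal((b, c)), rng.standard_normal((b, c))
C0 = A0 @ Xs @ B0                                 # standard part of C
C1 = A0 @ Xs @ B1 + A0 @ Xi @ B0 + A1 @ Xs @ B0   # dual part (eps^2 = 0)

A2, B2 = A1 @ L(A0), R(B0) @ B1
C11 = A0 @ pinv(A0) @ C0 @ pinv(B0) @ B1
C22 = A1 @ pinv(A0) @ C0 @ pinv(B0) @ B0
C3 = C1 - C11 - C22
M1, N1 = R(A0) @ A2, R(A0) @ C3
E1, F1 = B2 @ L(B0), C3 @ L(B0)

U1 = pinv(M1) @ N1 @ pinv(B0)
V1 = pinv(A0) @ F1 @ pinv(E1)
X0 = pinv(A0) @ C0 @ pinv(B0) + L(A0) @ U1 + V1 @ R(B0)
X1 = pinv(A0) @ (C3 - A0 @ V1 @ B2 - A2 @ U1 @ B0) @ pinv(B0)

assert np.allclose(A0 @ X0 @ B0, C0)
assert np.allclose(A0 @ X0 @ B1 + A0 @ X1 @ B0 + A1 @ X0 @ B0, C1)
```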
Theorem 29
([65]). Suppose that $A = A_0 + A_1 \epsilon \in \mathbb{DH}_s^{p \times q}$, $B = B_0 + B_1 \epsilon \in \mathbb{DH}_s^{k \times r}$, and $C = C_0 + C_1 \epsilon \in \mathbb{DH}_s^{p \times r}$ are given. Set
$A_{00} = A_0^{\sigma_1}, \; A_{01} = A_1^{\sigma_1}, \; B_{00} = B_0^{\sigma_1}, \; B_{01} = B_1^{\sigma_1}, \; C_{00} = C_0^{\sigma_1}, \; C_{01} = C_1^{\sigma_1}, \; A_2 = A_{01} L_{A_{00}}, \; B_2 = R_{B_{00}} B_{01},$
$C_{11} = A_{00} A_{00}^{\dagger} C_{00} B_{00}^{\dagger} B_{01}, \quad C_{22} = A_{01} A_{00}^{\dagger} C_{00} B_{00}^{\dagger} B_{00}, \quad C_3 = C_{01} - C_{11} - C_{22},$
$M_1 = R_{A_{00}} A_2, \quad N_1 = R_{A_{00}} C_3, \quad E_1 = B_2 L_{B_{00}}, \quad F_1 = C_3 L_{B_{00}}.$
Then the following statements are equivalent.
1.
The dual split quaternion matrix Equation (1) is solvable.
2.
The system
$A_{00} X_{00} B_{00} = C_{00}, \qquad A_{00} X_{00} B_{01} + A_{00} X_{01} B_{00} + A_{01} X_{00} B_{00} = C_{01}$
is solvable.
3.
$R_{A_{00}} C_{00} = 0, \quad C_{00} L_{B_{00}} = 0, \quad R_{M_1} N_1 = 0, \quad R_{A_{00}} C_3 L_{B_{00}} = 0, \quad F_1 L_{E_1} = 0.$
4.
$r\begin{bmatrix} A_{00} & C_{00} \end{bmatrix} = r(A_{00}), \quad r\begin{bmatrix} B_{00} \\ C_{00} \end{bmatrix} = r(B_{00}), \quad r\begin{bmatrix} A_{01} & A_{00} & C_{01} \\ A_{00} & 0 & C_{00} \end{bmatrix} = r\begin{bmatrix} A_{01} & A_{00} \\ A_{00} & 0 \end{bmatrix}, \quad r\begin{bmatrix} C_{01} & A_{00} \\ B_{00} & 0 \end{bmatrix} = r(A_{00}) + r(B_{00}), \quad r\begin{bmatrix} B_{01} & B_{00} \\ B_{00} & 0 \\ C_{01} & C_{00} \end{bmatrix} = r\begin{bmatrix} B_{01} & B_{00} \\ B_{00} & 0 \end{bmatrix}.$
In such cases, the general solution X = X 0 + X 1 ϵ is given by
X 0 = 1 8 I q I q i I q j I q k X 00 + P q X 00 P k T + W q X 00 W k T + R q X 00 R k T I k I k i I k j I k k , X 1 = 1 8 I q I q i I q j I q k X 01 + P q X 01 P k T + W q X 01 W k T + R q X 01 R k T I k I k i I k j I k k ,
where
$X_{00} = A_{00}^{\dagger} C_{00} B_{00}^{\dagger} + L_{A_{00}} U_1 + V_1 R_{B_{00}}, \quad X_{01} = A_{00}^{\dagger}\left( C_3 - A_{00} V_1 B_2 - A_2 U_1 B_{00} \right) B_{00}^{\dagger} + L_{A_{00}} Z_1 + Z_2 R_{B_{00}},$
$U_1 = M_1^{\dagger} N_1 B_{00}^{\dagger} + L_{M_1} Z_3 + Z_4 R_{B_{00}}, \quad V_1 = A_{00}^{\dagger} F_1 E_1^{\dagger} + L_{A_{00}} Z_5 + Z_6 R_{E_1},$
and Z i ( i = 1 , , 6 ) are arbitrary real matrices.
Theorem 30
([66]). Assume that $A = A_0 + A_1 \epsilon \in \mathbb{DH}_g^{m \times n}$, $B = B_0 + B_1 \epsilon \in \mathbb{DH}_g^{p \times q}$, and $C = C_0 + C_1 \epsilon \in \mathbb{DH}_g^{m \times q}$. Denote
A 00 η = A 0 σ i , η = i A 0 σ j V n , η = j A 0 σ k U n , η = k , A 11 η = A 1 σ i , η = i A 1 σ j V n , η = j A 1 σ k U n , η = k , B 00 η = B 0 σ i , η = i B 0 σ j V q , η = j B 0 σ k U q , η = k , B 11 η = B 1 σ i , η = i B 1 σ j V q , η = j B 1 σ k U q , η = k , C 00 η = C 0 σ η , C 11 η = C 1 σ η , C 00 = A 00 η A 00 η C 00 η B 00 η B 11 η + A 11 η A 00 η C 00 η B 00 η B 00 η , A 00 = A 11 η L A 00 η , B 00 = R B 00 η B 11 η , L 1 = B 00 η T A 00 , M 1 = B 00 T A 00 η , N 1 = B 00 η T A 00 η , Q 1 = L 1 M 1 N 1 , d = vec C 11 η C 00 ,
where $V_n, V_q$ are defined in Equation (9) and $U_n, U_q$ are given in Equation (7). Then, the following statements are equivalent.
1.
The dual generalized commutative quaternion matrix Equation (1) is solvable.
2.
The system
$A_{00}^{\eta} X_{00}^{\eta} B_{00}^{\eta} = C_{00}^{\eta}, \qquad A_{00}^{\eta} X_{00}^{\eta} B_{11}^{\eta} + A_{00}^{\eta} X_{11}^{\eta} B_{00}^{\eta} + A_{11}^{\eta} X_{00}^{\eta} B_{00}^{\eta} = C_{11}^{\eta}$
is solvable.
3.
$A_{00}^{\eta} (A_{00}^{\eta})^{\dagger} C_{00}^{\eta} (B_{00}^{\eta})^{\dagger} B_{00}^{\eta} = C_{00}^{\eta}, \quad A_{00}^{\eta} (A_{00}^{\eta})^{\dagger} C_{00}^{\eta} = C_{00}^{\eta}, \quad C_{00}^{\eta} (B_{00}^{\eta})^{\dagger} B_{00}^{\eta} = C_{00}^{\eta}, \quad Q_1 Q_1^{\dagger} d = d.$
In this case, the general solution can be expressed as X = X 0 + X 1 ϵ .
(1) 
When η = i ,
X 0 = 1 16 I n I n i I n j I n k X 00 i + ( R n 1 ) 1 X 00 i R n 1 + ( S n 1 ) 1 X 00 i S n 1 + ( T n 1 ) 1 X 00 i T n 1 I p 1 α I p i 1 β I p j 1 α β I p k , X 1 = 1 16 I n I n i I n j I n k X 11 i + ( R n 1 ) 1 X 11 i R n 1 + ( S n 1 ) 1 X 11 i S n 1 + ( T n 1 ) 1 X 11 i T n 1 I p 1 α I p i 1 β I p j 1 α β I p k .
(2) 
When η = j ,
X 0 = 1 16 I n I n i I n j I n k X 00 j ( R n 1 ) 1 X 00 j R n 1 ( S n 1 ) 1 X 00 j S n 1 + ( T n 1 ) 1 X 00 j T n 1 I p 1 α I p i 1 β I p j 1 α β I p k , X 1 = 1 16 I n I n i I n j I n k X 11 j ( R n 1 ) 1 X 11 j R n 1 ( S n 1 ) 1 X 11 j S n 1 + ( T n 1 ) 1 X 11 j T n 1 I n 1 α I n i 1 β I n j 1 α β I n k .
(3) 
When η = k ,
X 0 = 1 16 I n I n i I n j I n k X 00 k + ( R n 1 ) 1 X 00 k R n 1 ( S n 1 ) 1 X 00 k S n 1 ( T n 1 ) 1 X 00 k T n 1 I p 1 α I p i 1 β I p j 1 α β I p k , X 1 = 1 16 I n I n i I n j I n k X 11 k + ( R n 1 ) 1 X 11 k R n 1 ( S n 1 ) 1 X 11 k S n 1 ( T n 1 ) 1 X 11 k T n 1 I p 1 α I p i 1 β I p j 1 α β I p k .
where
$X_{00}^{\eta} = (A_{00}^{\eta})^{\dagger} C_{00}^{\eta} (B_{00}^{\eta})^{\dagger} + L_{A_{00}^{\eta}} V_1 + U_1 R_{B_{00}^{\eta}}, \qquad \eta \in \{i, j, k\},$
$\begin{bmatrix} \operatorname{vec}(V_1) \\ \operatorname{vec}(U_1) \\ \operatorname{vec}(X_{11}^{\eta}) \end{bmatrix} = Q_1^{\dagger} d + L_{Q_1} u,$
and u is any column vector of appropriate dimension over R .

4. Various Algorithms for Solving the Matrix Equation AXB = C

The previous section presented various special solutions to matrix Equation (1), and the numerical algorithms for these special solutions have also attracted significant attention from many scholars.
In 2008, Ding et al. [41] proposed a gradient-based iterative algorithm (GBI) to find the general solution of matrix Equation (1). Later, in 2013, Khorsand Zak et al. [46] introduced a nested splitting conjugate gradient (NSCG) iteration method, which leverages the symmetric and skew-symmetric splitting of the coefficient matrices A and B. Building on this, Wang et al. [47] extended the idea to $\mathbb{C}$ and proposed a new iterative method—the Hermitian and skew-Hermitian splitting (HSS) iteration method. Compared to GBI, this method offers advantages in terms of the number of iteration steps and computation time. In 2016, Zhou et al. [50] proposed a modified HSS iteration method (MHSS), which reduces computational complexity and enhances efficiency compared to the HSS iteration method. In 2017, Tian et al. [51] proposed Jacobi and Gauss–Seidel-type (GS) iteration methods, which require fewer iteration steps and have a broader range of applications compared to the HSS iteration method, but take more computing time. Subsequently, in 2019, Liu et al. [101] filled gaps in the convergence proofs of the Jacobi and GS iteration methods proposed by Tian et al. In 2021, Tian et al. [56] developed the relaxed Jacobi-type method and the relaxed Gauss–Seidel-type (RGS) method based on the Jacobi and GS iteration methods. These two relaxed iterative methods outperform HSS in terms of computation time and the number of iterations, with the RGS method being the most effective among them. Meanwhile, Chen et al. [57] introduced the two-step accelerated over-relaxation iteration method (TS-AOR), which demonstrates faster convergence and reduced computation time compared to the Jacobi and GS iteration methods. In 2024, Tian et al. [62] presented a parameterized two-step iteration (PTSI) method based on the TS-AOR method. In solving matrix Equation (1), the Jacobi, GS, relaxed Jacobi, RGS, and TS-AOR iteration methods all involve the Kronecker product and inversion of the coefficient matrix, which inevitably increases both the matrix size and computational complexity. Therefore, Liu et al. [54] proposed the stationary splitting iterative method in 2020, which directly splits the coefficient matrices A and B instead of processing the matrix $B^{T} \otimes A$. This approach is more efficient when dealing with large-scale data. Furthermore, the method can be applied to any convergent splitting, not just limited to Jacobi and Gauss–Seidel splittings, thereby increasing its flexibility. In 2023, Tian et al. [60] developed the parameterized accelerated iteration (PAI) method to circumvent the coefficient matrix inversion problem. Moreover, the PAI method outperforms HSS, GBI, and MHSS in terms of CPU time; in iteration steps it surpasses the GBI method, though it remains behind the HSS method. In addition, in 2022, Wu et al. [58] combined the Kaczmarz methods with relaxed greedy selection to introduce the matrix equation relaxed greedy randomized Kaczmarz (ME-RGRK) method and the maximal weighted residual Kaczmarz (ME-MWRK) method. Both methods demonstrate superior convergence speed and computation time compared to GBI methods and iterative orthogonal direction methods. Currently, research primarily focuses on iterative methods for matrix equations, while some scholars are also exploring numerical algorithms for tensor equations. In 2019, Wang et al. [53] provided an iterative algorithm for the general solution of the tensor equation $A *_N X *_M B = C$ under the Einstein product. For details, refer to Table 1.
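To give a flavor of the simplest of these schemes, the sketch below (ours) implements a gradient-based iteration of the kind proposed in [41], $X_{k+1} = X_k + \mu A^{T}(C - A X_k B)B^{T}$, which converges for $0 < \mu < 2/(\sigma_{\max}^{2}(A)\,\sigma_{\max}^{2}(B))$; we take half this bound.

```python
import numpy as np

def gbi(A, B, C, tol=1e-12, max_iter=500_000):
    """Gradient-based iteration for A X B = C."""
    mu = 1.0 / (np.linalg.norm(A, 2) ** 2 * np.linalg.norm(B, 2) ** 2)
    X = np.zeros((A.shape[1], B.shape[0]))
    for _ in range(max_iter):
        Rk = C - A @ X @ B                     # residual at the current step
        if np.linalg.norm(Rk) < tol:
            break
        X += mu * A.T @ Rk @ B.T               # gradient step
    return X

rng = np.random.default_rng(4)
A, B = rng.standard_normal((6, 4)), rng.standard_normal((3, 5))
C = A @ rng.standard_normal((4, 3)) @ B        # consistent right-hand side
assert np.allclose(A @ gbi(A, B, C) @ B, C, atol=1e-6)
```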
In 2005, Peng et al. [35] developed an iterative method to find symmetric solutions to matrix Equation (1) over R , providing the optimal approximation solution in the Frobenius norm. Subsequently, Deng et al. [37] introduced the iteration orthogonal direction (IOD) method for Hermitian solutions over C , which outperformed the conjugate gradient for the normal equation (CGNE) method in terms of iteration steps and computation time. While the optimal approximation solution focuses on finding the closest solution to a given matrix under specific conditions, the least squares solution generally aims to find the best fit for potentially unsolvable equations. Their goals and applications differ significantly. Therefore, Peng et al. [36] investigated iterative algorithms for the least squares symmetric solution to this matrix equation. In 2006, Hou et al. [38] presented another iterative algorithm for the least squares symmetric solution. In 2007, Liao et al. [39] provided the least squares solution expressions for matrix Equation (1) using the GSVD and the canonical correlation decomposition (CCD). In the same year, Lei et al. [40], in response to the potential irregular convergence behavior in the residual norm of the iterative algorithm proposed by Peng et al. [36] for this matrix equation, introduced the minimal residual method based on the conjugate gradient method. Currently, most algorithms focus on symmetric solutions. Therefore, Huang et al. [42] proposed an iterative algorithm for skew-symmetric solutions in 2008. In 2010, Peng [43] utilized the LSQR algorithm to solve the matrix Equation (1) for symmetric, symmetric R-symmetric, and ( R , S )-symmetric solutions. This algorithm was proposed by Paige et al. [102] in 1982 and demonstrates superior convergence speed and accuracy compared to the iterative algorithms presented in references [35,36,40,42]. Following this, Peng [44] introduced two new matrix iterative methods based on Paige’s algorithms. In 2016, Peng et al. [63] introduced two new iterative methods based on the alternating variable minimization with multiplier (AVMM) method. Each of these methods has its advantages and disadvantages compared to LSQR. When the number of rows and columns in the coefficient matrix is relatively close, LSQR performs better. In 2020, Yu et al. [55] employed the alternating direction method with multipliers (ADMM) to solve the nearness skew-symmetric and symmetric solutions for matrix Equation (1). With the appropriate selection of parameters and preconditioners, ADMM outperforms the iterative algorithms presented in references [42,63]. Additionally, in 2016, Xie et al. [49] considered the generalized Lanczos trust region (GLTR) algorithm for the least squares symmetric solution of the matrix equation with a norm inequality constraint. In 2024, Duan et al. [61] employed the ADMM method to solve the least squares symmetric solution problem of the tensor equation A N X N B = C under the Einstein product. For details, see Table 2.
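Constrained solutions can be approached in the same spirit by projecting each gradient step onto the constraint set. The sketch below is our own illustrative projected-gradient scheme, not any of the cited algorithms; it computes a least squares symmetric solution by symmetrizing every update.

```python
import numpy as np

def lsq_symmetric(A, B, C, tol=1e-12, max_iter=500_000):
    """Projected gradient descent for min ||A X B - C||_F over symmetric X."""
    n = A.shape[1]
    mu = 1.0 / (np.linalg.norm(A, 2) ** 2 * np.linalg.norm(B, 2) ** 2)
    X = np.zeros((n, n))
    for _ in range(max_iter):
        G = A.T @ (C - A @ X @ B) @ B.T
        if np.linalg.norm(G) < tol:
            break
        X += mu * (G + G.T) / 2                # keep the iterate symmetric
    return X

rng = np.random.default_rng(6)
A, B = rng.standard_normal((5, 4)), rng.standard_normal((4, 6))
S = rng.standard_normal((4, 4)); S = (S + S.T) / 2
C = A @ S @ B                                  # consistent, symmetric solution
X = lsq_symmetric(A, B, C)
assert np.allclose(X, X.T) and np.allclose(A @ X @ B, C, atol=1e-5)
```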
In addition, many scholars have studied algorithms for other specific structured solutions to matrix Equation (1), as shown in Table 3.

5. An Application

In this section, we propose an encryption and decryption scheme for color images based on the dual quaternion matrix equation A X B = C , accompanied by a practical example to validate our approach.
We know that quaternions are a generalization of complex numbers, and dual quaternions are a further generalization of quaternions. Therefore, we consider the dual quaternion matrix equation A X B = C to illustrate its application in color image processing. Additionally, a color image can be represented by a quaternion matrix, and both the standard part and the infinitesimal part of a dual quaternion matrix are quaternion matrices. This means that a dual quaternion matrix can represent two color images. The encryption and decryption scheme is shown in Figure 1.
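In stripped-down form, the pipeline reads as follows (our sketch: real matrices stand in for the quaternion encoding of the color channels, and the keys are chosen invertible so that decryption is a direct two-stage solve rather than the full Moore–Penrose construction of Theorem 28).

```python
import numpy as np

rng = np.random.default_rng(5)
n = 64
img_std, img_inf = rng.random((n, n)), rng.random((n, n))  # stand-ins for two images

# Keys: invertible "dual" matrices A = A0 + A1*eps and B = B0 + B1*eps.
A0, A1 = rng.standard_normal((n, n)), rng.standard_normal((n, n))
B0, B1 = rng.standard_normal((n, n)), rng.standard_normal((n, n))

# Encrypt: C = A X B with X = img_std + img_inf*eps and eps^2 = 0.
C0 = A0 @ img_std @ B0
C1 = A0 @ img_std @ B1 + A0 @ img_inf @ B0 + A1 @ img_std @ B0

# Decrypt: recover the standard part first, then the dual part.
X0 = np.linalg.solve(A0, C0) @ np.linalg.inv(B0)
X1 = np.linalg.solve(A0, C1 - A0 @ X0 @ B1 - A1 @ X0 @ B0) @ np.linalg.inv(B0)
assert np.allclose(X0, img_std) and np.allclose(X1, img_inf)
```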
Select any two color images and encrypt them according to the principles illustrated in Figure 1.
Encrypt the image shown in Figure 2 to obtain Figure 3.
Decrypt the encrypted image in Figure 3 using the encryption matrices A and B along with Theorem 28 to obtain Figure 4.
From Figure 4, we can see that the decrypted image is indistinguishable from the original image. We use the Structural Similarity Index Measure (SSIM) to assess the quality of the decrypted image. The SSIM values for the images “Apple” and “Kettle” are both 1, indicating that the encryption scheme based on the dual quaternion matrix equation A X B = C is highly feasible. For more details, see Table 4.

6. Conclusions

This survey has provided an overview of various special solutions to matrix Equation (1) and the corresponding numerical algorithms. In the process, definitions of certain special matrices over R , C , H , DQ , DH s , and DH g have been given, along with a discussion of their related properties. The various special solutions to matrix Equation (1) have been classified and summarized, and the corresponding numerical algorithms have been explored. Furthermore, using the dual quaternion matrix equation A X B = C as an example, a scheme for color image encryption and decryption has been designed, with experimental results demonstrating its feasibility. This has enriched both the theoretical and practical applications of the matrix equation in color image processing. Due to the wide-ranging nature of this survey, numerous references have been cited. Although we have conducted a comprehensive search, the vast volume of academic information and diverse research perspectives may have led to some literature omissions. However, these omissions do not impact the core value of this review. Finally, we have found that most research on the special solutions of matrix Equation (1) has focused primarily on real numbers, complex numbers, and quaternions, while numerical algorithms have mostly been limited to real and complex numbers. Therefore, the following areas of work can be considered for future research:
  • The exploration of special solutions to matrix Equation (1) over dual quaternions, dual split quaternions, or dual generalized commutative quaternions could be a valuable direction for future research. This includes solutions such as (anti-)symmetric solutions, (anti-)reflexive solutions, (R, S)-(skew)symmetric solutions, bisymmetric solutions, reducible solutions, and so on. Furthermore, it would be interesting to investigate whether these solutions can be considered over dual quaternion tensors, dual split quaternion tensors, or dual generalized commutative quaternion tensors.
  • The study of corresponding numerical algorithms over quaternions, dual quaternions, dual split quaternions, or dual generalized commutative quaternions is another promising direction for future research. Furthermore, it would be worth exploring whether these algorithms can be extended to tensors (over the complex field), dual quaternion tensors, dual split quaternion tensors, or dual generalized commutative quaternion tensors.

Author Contributions

Methodology, Q.-W.W. and L.-M.X.; software, L.-M.X.; investigation, Q.-W.W., L.-M.X. and Z.-H.G.; writing—original draft preparation, Q.-W.W., L.-M.X., and Z.-H.G.; writing—review and editing, Q.-W.W. and L.-M.X.; supervision, Q.-W.W.; project administration, Q.-W.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research is supported by the National Natural Science Foundation of China (No. 12371023).

Data Availability Statement

Not applicable.

Acknowledgments

The authors are very grateful to the handling editor and the anonymous reviewers for their constructive advice, which greatly improved the completeness of the original survey.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. van der Woude, J.W. Almost non-interacting control by measurement feedback. Syst. Control Lett. 1987, 9, 7–16. [Google Scholar] [CrossRef]
  2. Penrose, R. A generalized inverse for matrices. Math. Proc. Camb. Philos. Soc. 1955, 51, 406–413. [Google Scholar] [CrossRef]
  3. Chen, H.C. Generalized reflexive matrices: Special properties and applications. SIAM J. Matrix Anal. Appl. 1998, 19, 140–153. [Google Scholar] [CrossRef]
  4. Rao, C.R. Estimation of variance and covariance components in linear models. J. Am. Stat. Assoc. 1972, 67, 112–115. [Google Scholar] [CrossRef]
  5. Stagg, G.W.; El-Abaid, A.H. Computer Methods in Power System Analysis; McGraw- Hill: New York, NY, USA, 1968. [Google Scholar]
  6. Took, C.C.; Mandic, D.P. Augmented second-order statistics of quaternion random signals. Signal Process. 2011, 91, 214–224. [Google Scholar] [CrossRef]
  7. Kirkland, S.J.; Neumann, M.; Xu, J.H. Transition matrices for well-conditioned Markov chains. Linear Algebra Appl. 2007, 424, 118–131. [Google Scholar] [CrossRef]
  8. Lei, J.Z.; Wang, C.Y. On the reducibility of compartmental matrices. Comput. Biol. Med. 2008, 38, 881–885. [Google Scholar] [CrossRef] [PubMed]
  9. Santesso, P.; Valcher, M.E. On the zero pattern properties and asymptotic behavior of continuous-time positive system trajectories. Linear Algebra Appl. 2007, 425, 283–302. [Google Scholar] [CrossRef]
  10. Hsieh, C.; Skelton, R.E. All covariance controllers for linear discrete-time systems. IEEE Trans. Autom. Control. 1990, 35, 908–915. [Google Scholar] [CrossRef]
  11. Chu, M.T.; Trendafilov, N.T. On a differential equation approach to the weighted orthogonal procrustes problem. Stat. Comput. 1998, 8, 125–133. [Google Scholar] [CrossRef]
  12. Chu, M.T.; Trendafilov, N.T. The orthogonally constrained regression revisited. J. Comput. Graph. Stat. 2001, 10, 746–771. [Google Scholar] [CrossRef]
  13. Xie, M.Y.; Wang, Q.W. Reducible solution to a quaternion tensor equation. Front. Math. China 2020, 15, 1047–1070. [Google Scholar] [CrossRef]
  14. Khatri, C.G.; Mitra, S.K. Hermitian and nonnegative definite solutions of linear matrix equations. SIAM J. Appl. Math. 1976, 31, 579–585. [Google Scholar] [CrossRef]
  15. Zhang, X. Hermitian nonnegative-definite and positive-definite solutions of the matrix equation AXB = C. Appl. Math. E-Notes 2004, 4, 40–47. [Google Scholar]
  16. Wang, Q.W.; Yang, C.L. The Re-nonnegative definite solutions to the matrix equation AXB = C. Comment. Math. Univ. Carolin. 1998, 39, 7–13. [Google Scholar]
  17. Cvetković-ilić, D.S. Re-nnd solutions of the matrix equation AXB = C. J. Aust. Math. Soc. 2008, 84, 63–72. [Google Scholar] [CrossRef]
  18. Zheng, B.; Ye, L.J.; Cvetković-ilić, D.S. The *congruence class of the solutions of some matrix equations. Comput. Math. Appl. 2009, 57, 540–549. [Google Scholar] [CrossRef]
  19. Hua, D. On the symmetric solutions of linear matrix equations. Linear Algebra Appl. 1990, 131, 1–7. [Google Scholar] [CrossRef]
  20. Liao, A.P. Least-squares solution of AXB=D over symmetric positive semidefinite matrices. J. Comput. Math. 2003, 21, 175–182. [Google Scholar]
  21. Hu, S.S.; Yuan, Y.X. The symmetric solution of the matrix equation AXB = D on subspace. Comput. Appl. Math. 2022, 41, 373. [Google Scholar] [CrossRef]
  22. Wang, D.; Li, Y.; Ding, W.X. The least squares Bisymmetric solution of quaternion matrix equation AXB = C. AIMS Math. 2021, 6, 13247–13257. [Google Scholar] [CrossRef]
  23. Liu, X. The η-anti-Hermitian solution to some classic matrix equations. Appl. Math. Comput. 2018, 320, 264–270. [Google Scholar] [CrossRef]
  24. Zhang, Y.Z.; Li, Y.; Zhao, H.; Zhao, J.L.; Wang, G. Least-squares bihermitian and skew bihermitian solutions of the quaternion matrix equation AXB = C. Linear Multilinear Algebra 2022, 70, 1081–1095. [Google Scholar] [CrossRef]
  25. Cvetković-ilić, D.S. The reflexive solutions of the matrix equation AXB = C. Comput. Math. Appl. 2006, 51, 897–902. [Google Scholar] [CrossRef]
  26. Herrero, A.; Thome, N. Using the GSVD and the lifting technique to find {P,k+1} reflexive and anti-reflexive solutions of AXB = C. Appl. Math. Lett. 2011, 24, 1130–1141. [Google Scholar] [CrossRef]
  27. Liu, X.; Wang, Q.W. The least squares Hermitian (anti)reflexive solution with the least norm to matrix equation AXB = C. Math. Probl. Eng. 2017, 2017, 9756035. [Google Scholar] [CrossRef]
  28. Yuan, Y.X.; Dai, H. Generalized reflexive solutions of the matrix equation AXB=D and an associated optimal approximation problem. Comput. Math. Appl. 2008, 56, 1643–1649. [Google Scholar] [CrossRef]
  29. Liao, R.P.; Liu, X.; Long, S.J.; Zhang, Y. (R,S)-(skew) symmetric solutions to matrix equation AXB = C over quaternions. Mathematics 2024, 12, 323. [Google Scholar] [CrossRef]
  30. Yang, J.; Deng, Y.B. On the solutions of the equation AXB = C under Toeplitz-like and Hankel matrices constraint. Math. Methods Appl. Sci. 2018, 41, 2074–2094. [Google Scholar] [CrossRef]
  31. Zhang, H.T.; Liu, L.N.; Liu, H.; Yuan, Y.X. The solution of the matrix equation AXB=D and the system of matrix equations AX=C,XB=D with X*X=Ip. Appl. Math. Comput. 2022, 418, 126789. [Google Scholar] [CrossRef]
  32. Liang, M.L.; You, C.H.; Dai, L.F. An efficient algorithm for the generalized centro-symmetric solution of matrix equation AXB = C. Numer. Alg. 2007, 44, 173–184. [Google Scholar] [CrossRef]
  33. Peng, Y.X.; Hu, X.Y.; Zhang, L. An Iterative Method for Bisymmetric Solutions and Optimal Approximation Solution of AXB = C. In Proceedings of the Third International Conference on Natural Computation (ICNC 2007), Washington, DC, USA, 24–27 August 2007; pp. 829–832. [Google Scholar]
  34. Li, J.F.; Hu, X.Y.; Duan, X.F.; Zhang, L. Numerical solutions of AXB = C for mirrorsymmetric matrix X under a specified submatrix constraint. Computing 2010, 90, 39–56. [Google Scholar] [CrossRef]
  35. Peng, Y.X.; Hu, X.Y.; Zhang, L. An iteration method for the symmetric solutions and the optimal approximation solution of the matrix equation AXB = C. Appl. Math. Comput. 2005, 160, 763–777. [Google Scholar] [CrossRef]
  36. Peng, Z.Y. An iterative method for the least squares symmetric solution of the linear matrix equation AXB = C. Appl. Math. Comput. 2005, 170, 711–723. [Google Scholar] [CrossRef]
  37. Deng, Y.B.; Bai, Z.Z.; Gao, Y.H. Iterative orthogonal direction methods for Hermitian minimum norm solutions of two consistent matrix equations. Numer. Linear Algebra Appl. 2006, 13, 801–823. [Google Scholar] [CrossRef]
  38. Hou, J.J.; Peng, Z.Y.; Zhang, Z.L. An iterative method for the least squares symmetric solution of matrix equation AXB = C. Numer. Alg. 2006, 42, 181–192. [Google Scholar] [CrossRef]
  39. Liao, A.P.; Lei, Y. Optimal approxomate solution of the matrix equation AXB = C over symmetric matrices. J. Comput. Math. 2007, 25, 543–552. [Google Scholar]
  40. Lei, Y.; Liao, A.P. A minimal residual algorithm for the inconsistent matrix equation AXB = C over symmetric matrices. Appl. Math. Comput. 2007, 188, 499–513. [Google Scholar] [CrossRef]
  41. Ding, F.; Liu, P.X.; Ding, J. Iterative solutions of the generalized Sylvester matrix equations by using the hierarchical identification principle. Appl. Math. Comput. 2008, 197, 41–50. [Google Scholar] [CrossRef]
  42. Huang, G.X.; Yin, F.; Guo, K. An iterative method for the skew-symmetric solution and the optimal approximate solution of the matrix equation AXB = C. J. Comput. Appl. Math. 2008, 212, 231–244. [Google Scholar] [CrossRef]
  43. Peng, Z.Y. A matrix LSQR iterative method to solve matrix equation AXB = C. Int. J. Comput. Math. 2010, 87, 1820–1830. [Google Scholar] [CrossRef]
  44. Peng, Z.Y. New matrix iterative methods for constraint solutions of the matrix equation AXB = C. J. Comput. Appl. Math. 2010, 235, 726–735. [Google Scholar] [CrossRef]
  45. Li, J.F.; Hu, X.Y.; Zhang, L. Numerical solutions of AXB = C for centrosymmetric matrix X under a specified submatrix constraint. Numer. Linear Algebra Appl. 2011, 18, 857–873. [Google Scholar] [CrossRef]
  46. Khorsand Zak, M.; Toutounian, F. Nested splitting conjugate gradient method for matrix equation AXB = C and preconditioning. Comput. Math. Appl. 2013, 66, 269–278. [Google Scholar] [CrossRef]
  47. Wang, X.; Li, Y.; Dai, L. On Hermitian and skew-Hermitian splitting iteration methods for the linear matrix equation AXB = C. Comput. Math. Appl. 2013, 65, 657–664. [Google Scholar] [CrossRef]
  48. Sarduvan, M.; Si̇maek, S.; Özdemi̇r, H. On the best approximate (P,Q)-orthogonal symmetric and skew-symmetric solution of the matrix equation AXB = C. J. Numer. Math. 2014, 22, 255–270. [Google Scholar] [CrossRef]
  49. Xie, D.X.; Xu, A.B.; Peng, Z.Y. Least-squares symmetric solution to the matrix equation AXB = C with the norm inequality constraint. Int. J. Comput. Math. 2016, 93, 1564–1578. [Google Scholar] [CrossRef]
  50. Zhou, R.; Wang, X.; Zhou, P. A modified HSS iteration method for solving the complex linear matrix equation AXB = C. J. Comput. Math. 2016, 34, 437–450. [Google Scholar] [CrossRef]
  51. Tian, Z.L.; Tian, M.Y.; Liu, Z.Y.; Xu, T.Y. The Jacobi and Gauss–Seidel-type iteration methods for the matrix equation AXB = C. Appl. Math. Comput. 2017, 292, 63–75. [Google Scholar] [CrossRef]
  52. Wang, X.; Tang, X.B.; Gao, X.G.; Wu, W.H. Finite iterative algorithms for the generalized reflexive and anti-reflexive solutions of the linear matrix equation AXB = C. Filomat 2017, 31, 2151–2162. [Google Scholar] [CrossRef]
  53. Wang, Q.W.; Xu, X.J. Iterative algorithms for solving some tensor equations. Linear Multilinear Algebra 2019, 67, 1325–1349. [Google Scholar] [CrossRef]
  54. Liu, Z.Y.; Li, Z.; Ferreira, C.; Zhang, Y.L. Stationary splitting iterative methods for the matrix equation AXB = C. Appl. Math. Comput. 2020, 378, 125195. [Google Scholar] [CrossRef]
  55. Yu, D.M.; Chen, C.R.; Han, D.R. ADMM-based methods for nearness skew-symmetric and symmetric solutions of matrix equation AXB = C. East Asian J. Appl. Math. 2020, 10, 698–716. [Google Scholar] [CrossRef]
  56. Tian, Z.L.; Li, X.J.; Dong, Y.H.; Liu, Z.Y. Some relaxed iteration methods for solving matrix equation AXB = C. Appl. Math. Comput. 2021, 403, 126189. [Google Scholar] [CrossRef]
  57. Chen, F.; Li, T.Y. Two-step AOR iteration method for the linear matrix equation AXB = C. Comp. Appl. Math. 2021, 40, 89. [Google Scholar] [CrossRef]
  58. Wu, N.C.; Liu, C.Z.; Zuo, Q. On the Kaczmarz methods based on relaxed greedy selection for solving matrix equation AXB = C. J. Comput. Appl. Math. 2022, 413, 114374. [Google Scholar] [CrossRef]
  59. Duan, X.F.; Zhang, Y.S.; Wang, Q.W.; Li, C.M. Paige’s Algorithm for solving a class of tensor least squares problem. BIT 2023, 63, 48. [Google Scholar] [CrossRef]
  60. Tian, Z.L.; Duan, X.F.; Wu, N.C.; Liu, Z.Y. The parameterized accelerated iteration method for solving the matrix equation AXB = C. Numer. Alg. 2024, 97, 843–867. [Google Scholar] [CrossRef]
  61. Duan, X.F.; Zhang, Y.S.; Wang, Q.W. An efficient iterative method for solving a class of constrained tensor least squares problem. Appl. Numer. Math. 2024, 196, 104–117. [Google Scholar] [CrossRef]
  62. Tian, Z.L.; Wang, Y.D.; Wu, N.C.; Liu, Z.Y. On the parameterized two-step iteration method for solving the matrix equation AXB = C. Appl. Math. Comput. 2024, 464, 128401. [Google Scholar] [CrossRef]
  63. Peng, Z.Y.; Fang, Y.Z.; Xiao, X.W.; Du, D.D. New algorithms to compute the nearness symmetric solution of the matrix equation. SpringerPlus 2016, 5, 1005. [Google Scholar] [CrossRef]
  64. Chen, Y.; Wang, Q.W.; Xie, L.M. Dual quaternion matrix equation AXB = C with applications. Symmetry 2024, 16, 287. [Google Scholar] [CrossRef]
  65. Si, K.W.; Wang, Q.W. The general solution to a classical matrix equation AXB = C over the dual split quaternion algebra. Symmetry 2024, 16, 491. [Google Scholar] [CrossRef]
  66. Shi, L.; Wang, Q.W.; Xie, L.M.; Zhang, X.F. Solving the dual generalized commutative quaternion matrix equation AXB = C. Symmetry 2024, 16, 1359. [Google Scholar] [CrossRef]
  67. Ji, Z.D.; Li, J.F.; Zhou, X.L.; Duan, F.J.; Li, T. On solutions of matrix equation AXB = C under semi-tensor product. Linear Multilinear Algebra 2021, 69, 1935–1963. [Google Scholar] [CrossRef]
  68. Arias, M.L.; Gonzalez, M.C. Positive solutions to operator equations AXB = C. Linear Algebra Appl. 2010, 433, 1194–1202. [Google Scholar] [CrossRef]
  69. Cvetković-Ilić, D.; Wang, Q.W.; Xu, Q.X. Douglas’ +Sebestyén’s lemmas =a tool for solving an operator equation problem. J. Math. Anal. Appl. 2020, 482, 123599. [Google Scholar] [CrossRef]
  70. Porter, A.D.; Mousouris, N. Ranked solutions of AXC=B and AX=B. Linear Algebra Appl. 1979, 24, 217–224. [Google Scholar] [CrossRef]
  71. Liu, Y.H. Ranks of least squares solutions of the matrix equation AXB = C. Comput. Math. Appl. 2008, 55, 1270–1278. [Google Scholar] [CrossRef]
  72. Wang, Q.W.; Yu, S.W.; Xie, W. Extreme ranks of real matrices in solution of the quaternion matrix equation AXB = C with applications. Algebra Colloq. 2010, 17, 345–360. [Google Scholar] [CrossRef]
  73. Zhang, F.X.; Li, Y.; Guo, W.B.; Zhao, J.L. Least squares solutions with special structure to the linear matrix equation AXB = C. Appl. Math. Comput. 2011, 217, 10049–10057. [Google Scholar] [CrossRef]
  74. Zhou, F.Z.; Hu, X.Y.; Zhang, L. The solvability conditions for the inverse eigenvalue problems of centro-symmetric matrices. Linear Algebra Appl. 2003, 364, 147–160. [Google Scholar] [CrossRef]
  75. Wu, L.; Cain, B. The Re-nonnegative definite solutions to the matrix inverse problem AX=B. Linear Algebra Appl. 1996, 236, 137–146. [Google Scholar] [CrossRef]
  76. Trench, W.F. Minimization problems for (R,S)-symmetric and (R,S)-skew symmetric matrices. Linear Algebra Appl. 2004, 389, 23–31. [Google Scholar] [CrossRef]
  77. Paige, C.C.; Saunders, M.A. Towards a generalized singular value decomposition. SIAM J. Numer. Anal. 1981, 18, 398–405. [Google Scholar] [CrossRef]
  78. Yuan, S.F.; Liao, A.P.; Wang, P. Least squares η-bi-Hermitian problems of the quaternion matrix equation (AXB,CXD)=(E,F). Linear Multilinear Algebra 2015, 63, 1849–1863. [Google Scholar] [CrossRef]
  79. Liu, X.C.; Li, Y.; Ding, W.X.; Tao, R.Y. A real method for solving octonion matrix equation AXB = C based on semi-tensor product of matrices. Adv. Appl. Clifford Algebr. 2024, 34, 12. [Google Scholar] [CrossRef]
  80. Rodman, L. Topics in Quaternion Linear Algebra; Princeton University Press: Princeton, NJ, USA, 2014. [Google Scholar]
  81. He, Z.H.; Wang, Q.W. A real quaternion matrix equation with applications. Linear Multilinear Algebra 2013, 61, 725–740. [Google Scholar] [CrossRef]
  82. Wei, M.S.; Li, Y.; Zhang, F.X. Quaternion Matrix Computations; Nova Science Publisher: Beijing, China, 2018. [Google Scholar]
  83. Zhang, F.X.; Wei, M.S.; Li, Y.; Zhao, J.L. An efficient method for least squares problem of the quaternion matrix equation X-AX^B=C. Linear Multilinear Algebra 2020, 70, 2569–2581. [Google Scholar] [CrossRef]
  84. Tian, Y.; Liu, X.; Zhang, Y. Least-squares solutions of the generalized reduced biquaternion matrix equations. Filomat 2023, 37, 863–870. [Google Scholar] [CrossRef]
  85. Ren, B.Y.; Wang, Q.W.; Chen, X.Y. The η-anti-Hermitian solution to a system of constrained matrix equations over the generalized Segre quaternion algebra. Symmetry 2023, 15, 592. [Google Scholar] [CrossRef]
  86. He, Z.H. The general solution to a system of coupled Sylvester-type quaternion tensor equations involving η-Hermicity. Bull. Iran. Math. Soc. 2019, 45, 1407–1430. [Google Scholar] [CrossRef]
  87. Prokip, V.M. On solvability of the matrix equation AXB = C over a principal ideal domain. Model. Control. Inf. Technol. Proc. Int. Sci. Pract. Conf. 2020, 4, 47–50. [Google Scholar] [CrossRef]
  88. Douglas, R.G. On majorization, factorization and range inclusion of operators in Hilbert spaces. Proc. Am. Math. Soc. 1966, 17, 413–415. [Google Scholar] [CrossRef]
  89. Sebestyén, Z. Restriction of positive operators. Acta Sci. Math. 1983, 46, 299–301. [Google Scholar]
  90. Marsaglia, G.; Styan, G.P. Equalities and inequalities for ranks of matrices. Linear Multilinear Algebra 1974, 2, 269–292. [Google Scholar] [CrossRef]
  91. Cheng, D.Z.; Xu, T.T.; Qi, H.S. Evolutionarily stable strategy of networked evolutionary games. IEEE Trans. Neural Netw. Learn. Syst. 2014, 25, 1335–1345. [Google Scholar] [CrossRef]
  92. Cheng, D.Z.; Zhao, Y.; Mu, Y.F. Strategy optimization with its application to dynamic games. In Proceedings of the 49th IEEE Conference on Decision and Control, Atlanta, GA, USA, 15–17 December 2010; pp. 5822–5827. [Google Scholar] [CrossRef]
  93. Cheng, D.Z. Input-state approach to Boolean networks. IEEE Trans. Neural Netw. Learn. Syst. 2009, 20, 512–521. [Google Scholar] [CrossRef]
  94. Cheng, D.Z.; Qi, H.S. A linear representation of dynamics of Boolean networks. IEEE Trans. Automat. Control. 2010, 55, 2251–2258. [Google Scholar] [CrossRef]
  95. Özkaldi, S.; Gündoğan, H. Dual split quaternions and screw motion in 3-dimensional Lorentzian space. Adv. Appl. Clifford Algebr. 2011, 21, 193–202. [Google Scholar] [CrossRef]
  96. Daniilidis, K. Hand-eye calibration using dual quaternions. Int. J. Robot. Res. 1999, 18, 286–298. [Google Scholar] [CrossRef]
  97. Wang, X.; Yu, C.; Lin, Z. A dual quaternion solution to attitude and position control for rigid body coordination. IEEE Trans. Rob. 2012, 28, 1162–1170. [Google Scholar] [CrossRef]
  98. da Cruz Figueredo, L.F.; Adorno, B.V.; Ishihara, J.Y. Robust H kinematic control of manipulator robots using dual quaternion algebra. Automatica 2021, 132, 109817. [Google Scholar] [CrossRef]
  99. Kenright, B. A beginners guide to dual-quaternions. In Proceedings of the 20th International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision, Plzen, Czech, 26–28 June 2012. [Google Scholar]
  100. Kula, L.; Yayli, Y. Dual split quaternions and screw motion in Minkowski 3-space. Iran. J. Sci. Technol. Trans. 2006, 30, 245–258. [Google Scholar]
  101. Liu, Z.Y.; Zhou, Y.; Zhang, Y.L.; Lin, L.; Xie, D.X. Some remarks on Jacobi and Gauss–Seidel-type iteration methods for the matrix equation AXB = C. Appl. Math. Comput. 2019, 354, 305–307. [Google Scholar] [CrossRef]
  102. Paige, C.C.; Saunders, M.A. LSQR: An algorithm for sparse linear equations and sparse least squares. ACM Trans. Math. Softw. 1982, 8, 43–71. [Google Scholar] [CrossRef]
Figure 1. Scheme.
Figure 2. Original image.
Figure 3. Encrypted image.
Figure 4. Decrypted image.
Table 1. The general solution of Equation (1).

| Proposed by | Type of Solution | Algorithm Type | Number Field |
|---|---|---|---|
| Ding, 2008 [41] | general solution | GBI | R |
| Khorsand Zak, 2013 [46] | general solution | NSCG | R |
| Wang, 2013 [47] | general solution | HSS | C |
| Zhou, 2016 [50] | general solution | MHSS | C |
| Tian, 2017 [51] | general solution | Jacobi and GS | R |
| Liu, 2020 [54] | general solution | stationary splitting iteration | R |
| Tian, 2021 [56] | general solution | relaxed Jacobi and RGS | R |
| Chen, 2021 [57] | general solution | TS-AOR | R |
| Wu, 2022 [58] | general solution | ME-RGRK and ME-MWRK | R |
| Tian, 2023 [60] | general solution | PAI | R |
| Tian, 2024 [62] | general solution | PTSI | R |
| Wang, 2019 [53] | general solution (tensor) | iteration | R |
Table 2. Various symmetric solutions for Equation (1).

| Proposed by | Type of Solution | Algorithm Type | Number Field |
|---|---|---|---|
| Peng, 2005 [35] | symmetric, optimal approximation | iteration | R |
| Peng, 2005 [36] | least squares symmetric | iteration | R |
| Deng, 2006 [37] | Hermitian minimum norm | IOD | C |
| Hou, 2006 [38] | least squares symmetric | iteration | R |
| Liao, 2007 [39] | optimal approximate least squares symmetric | GSVD, CCD | R |
| Lei, 2007 [40] | optimal approximate least squares symmetric | minimal residual algorithm | R |
| Huang, 2008 [42] | skew-symmetric, optimal approximation | iteration | R |
| Peng, 2010 [43] | symmetric, symmetric R-symmetric, (R,S)-symmetric | LSQR | R |
| Peng, 2010 [44] | symmetric, symmetric R-symmetric, (R,S)-symmetric | Paige's algorithm | R |
| Xie, 2016 [49] | least squares symmetric | GLTR | R |
| Peng, 2016 [63] | nearness symmetric | AVMM | R |
| Yu, 2020 [55] | nearness skew-symmetric and symmetric | ADMM | R |
| Duan, 2024 [61] | least squares symmetric (tensor) | ADMM | R |
Table 3. Other types of special solutions for Equation (1).

| Proposed by | Type of Solution | Algorithm Type | Number Field |
|---|---|---|---|
| Liang, 2007 [32] | generalized centro-symmetric | iteration | R |
| Peng, 2007 [33] | bisymmetric, optimal approximation | iteration | R |
| Li, 2010 [34] | mirrorsymmetric | conjugate gradient least squares method (CGLS) | R |
| Li, 2011 [45] | centro-symmetric | CGLS | R |
| Sarduvan, 2014 [48] | (P,Q)-orthogonal (skew-)symmetric | spectral decomposition | R |
| Wang, 2017 [52] | generalized reflexive and anti-reflexive | iteration | R |
| Duan, 2023 [59] | least squares solution (tensor) | Paige's algorithm | R |
Table 4. Evaluation of effect.

| Color Image Name | SSIM |
|---|---|
| Apple | 1 |
| Kettle | 1 |