Article

Solving Quaternion Linear System Based on Semi-Tensor Product of Quaternion Matrices

1 College of Mathematical Sciences, Liaocheng University, Liaocheng 252000, China
2 Research Center of Semi-Tensor Product of Matrices: Theory and Applications, Liaocheng 252000, China
* Author to whom correspondence should be addressed.
Symmetry 2022, 14(7), 1359; https://doi.org/10.3390/sym14071359
Submission received: 2 June 2022 / Revised: 24 June 2022 / Accepted: 28 June 2022 / Published: 1 July 2022
(This article belongs to the Special Issue The Quaternion Matrix and Its Applications)

Abstract: In this paper, we use the semi-tensor product of quaternion matrices, the L-representation of quaternion matrices, and the GH-representation of special quaternion matrices such as quaternion (anti-)centrosymmetric matrices to solve for special solutions of a quaternion matrix equation. Based on the semi-tensor product of quaternion matrices and the structure matrix of quaternion multiplication, we propose vector representation results for quaternion matrix operations and study different matrix representations of quaternion matrices. The quaternion matrix equation problem is then transformed into a corresponding problem over the real field by using the vector representation and the L-representation of quaternion matrices; combined with the special structure of (anti-)centrosymmetric matrices, the independent elements are extracted by the GH-representation method, which reduces the number of variables to be computed and improves the calculation accuracy. Finally, the effectiveness of the method is verified by numerical examples, and its running time is compared with that of two existing algorithms. The algorithm of this paper is also applied to a centrosymmetric color digital image restoration model.

1. Introduction

The symbols used in this paper are as follows: $\mathbb{R}$/$\mathbb{Q}$ denote the set of all real numbers/quaternions, respectively. $\mathbb{R}^t$ denotes the set of all real column vectors of dimension $t$. $\mathbb{R}^{m\times n}$/$\mathbb{Q}^{m\times n}$ denote the sets of all $m\times n$ real/quaternion matrices, respectively. $\mathrm{SR}^{n\times n}$/$\mathrm{ASR}^{n\times n}$/$\mathrm{S}^{n\times n}$/$\mathrm{AS}^{n\times n}$ denote the sets of all $n\times n$ real centrosymmetric/real anti-centrosymmetric/quaternion centrosymmetric/quaternion anti-centrosymmetric matrices, respectively. In addition, $I_n$ denotes the identity matrix of order $n$, and $\delta_n^i$ $(i = 1, 2, \ldots, n)$ denotes the $i$th column of $I_n$. $\bar{A}$/$A^T$/$A^H$/$A^\dagger$ denote the conjugate/transpose/conjugate transpose/Moore–Penrose inverse of a matrix $A$. $\otimes$ denotes the Kronecker product of matrices, and $\|\cdot\|$ denotes the Frobenius norm of a matrix or the Euclidean norm of a vector.
Currently, numerical computation is not only a tool for scientific calculations but also one of the ways to discover truths. However, traditional matrix theory has some shortcomings; for example, it has dimension-matching restrictions and noncommutativity. The semi-tensor product of matrices proposed by Cheng [1] is different from the traditional matrix product: it does not need size-matching conditions and can be applied to any two matrices. It is designed to deal with higher-dimensional data as well as multilinear mappings. In a computer, higher-dimensional data can easily be treated without arranging them into a cube or an even higher-dimensional cuboid, because the semi-tensor product is designed so that the product rule automatically finds the proper position for each factor of the multiplier. At present, the semi-tensor product of matrices is widely used in biological systems and life science [2,3], game theory [4,5], graph theory and formation control [6,7], fuzzy control [8,9], coding theory, and algorithm implementation [10,11]. In addition, some scholars proposed a new real vector representation method for quaternions [12,13] based on the semi-tensor product of matrices and applied it to the solution of quaternion linear systems. In this paper, some new conclusions on the semi-tensor product of quaternion matrices are proposed, which will be used to solve quaternion linear systems.
A quaternion is a hypercomplex number composed of a scalar part and a vector part, and it has properties of both real and complex numbers. Due to the rapid development of computer graphics [14], robotics and other fields [15,16], quaternions are more and more widely used in computer animation, robot trajectory planning [17], modeling [18], rendering and three-dimensional fractal display. The application of quaternion matrices to color digital images is becoming increasingly important and extensive [19,20]. Color digital image restoration is usually modeled as the solution of a quaternion matrix equation.
Matrix equations have wide applications in many spheres, and real, complex and quaternion matrix equations have attracted extensive attention. As a special kind of matrix equation, the quaternion matrix equation has been widely used in computer science [21], signal processing [22], statistics [23], and color image processing [24]. Because quaternion multiplication is not commutative, a quaternion matrix equation is usually transformed into a familiar real or complex matrix equation problem by means of a real or complex representation, so as to simplify its treatment. Many scholars have discussed different solutions to different equations with the help of these methods. For example, using the real representation matrix of quaternion matrices, ref. [25] obtained expressions for the minimal norm least squares solution of the quaternion matrix equation $AXB + CXD = E$; ref. [26] investigated the minimal norm least squares $\eta$-(anti-)Hermitian solution of the quaternion matrix equation $AXB + CYD = E$; ref. [27] discussed the minimal norm least squares (anti-)$j$-self-conjugate solution of the quaternion matrix equation $X - A\hat{X}B = C$. In addition, ref. [28] used the complex representation matrix of quaternion matrices to study the $\eta$-(anti-)Hermitian solution of the quaternion matrix equation $AXB + CYD = E$, and ref. [29] derived expressions for the least squares solution, the pure imaginary solution and the real solution with the least norm of the quaternion matrix equation $AX = B$ by using the complex representation matrix of quaternion matrices. Some scholars have also devoted themselves to the study of quaternion matrix equations by using Cramer’s rules [30,31,32], iterative algorithms [33,34,35,36] or the rank method [37,38,39,40].
Definition 1
([41]). If $X = (x_{ij}) \in \mathbb{Q}^{n\times n}$ satisfies
$$x_{ij} = x_{n-i+1,\,n-j+1}, \quad (i, j = 1, \ldots, n),$$
then $X$ is called a quaternion centrosymmetric matrix. If $X = (x_{ij}) \in \mathbb{Q}^{n\times n}$ satisfies
$$x_{ij} = -x_{n-i+1,\,n-j+1}, \quad (i, j = 1, \ldots, n),$$
then $X$ is called a quaternion anti-centrosymmetric matrix.
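For real matrices (and hence for each real part of a quaternion matrix) this definition can be checked numerically with a rotation by 180 degrees. The following short MATLAB sketch is illustrative only and is not taken from the paper.

% Centrosymmetry test: X is centrosymmetric iff rotating it by 180 degrees
% leaves it unchanged, and anti-centrosymmetric iff the rotation negates it.
n  = 4;
Xs = randn(n);  Xs = (Xs + rot90(Xs, 2)) / 2;   % projection onto centrosymmetric matrices
Xa = randn(n);  Xa = (Xa - rot90(Xa, 2)) / 2;   % projection onto anti-centrosymmetric matrices
isCentro     = norm(Xs - rot90(Xs, 2), 'fro') < 1e-12   % returns true
isAntiCentro = norm(Xa + rot90(Xa, 2), 'fro') < 1e-12   % returns true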
As two special kinds of matrices, (anti-)centrosymmetric matrices are applied broadly in statistical analysis, information theory, linear system theory and numerical analysis. For matrices whose elements obey such special rules, we want to extract the independent elements so as to remove redundancy and reduce the complexity of solving the matrix equation. The H-representation method [42] realizes exactly this idea.
This paper presents the (anti-)centrosymmetric solutions of the quaternion matrix equation
$$\sum_{i=1}^{k} A_i X B_i = C \qquad (1)$$
by using the semi-tensor product of quaternion matrices, the L-representation and the GH-representation.
Problem 1. Let $A_i \in \mathbb{Q}^{m\times n}$, $B_i \in \mathbb{Q}^{n\times p}$ $(i = 1, \ldots, k)$, $C \in \mathbb{Q}^{m\times p}$, and
$$M_S = \Big\{ X \in \mathrm{S}^{n\times n} \,\Big|\, \Big\| \sum_{i=1}^{k} A_i X B_i - C \Big\| = \min \Big\}.$$
Find $X_S \in M_S$ such that
$$\| X_S \| = \min_{X \in M_S} \| X \|.$$
$X_S$ is called the minimal norm least squares centrosymmetric solution of quaternion matrix Equation (1). If $\min = 0$, $X_S$ is called the minimal norm centrosymmetric solution of quaternion matrix Equation (1).
Problem 2. Let $A_i \in \mathbb{Q}^{m\times n}$, $B_i \in \mathbb{Q}^{n\times p}$ $(i = 1, \ldots, k)$, $C \in \mathbb{Q}^{m\times p}$, and
$$M_A = \Big\{ X \in \mathrm{AS}^{n\times n} \,\Big|\, \Big\| \sum_{i=1}^{k} A_i X B_i - C \Big\| = \min \Big\}.$$
Find $X_A \in M_A$ such that
$$\| X_A \| = \min_{X \in M_A} \| X \|.$$
$X_A$ is called the minimal norm least squares anti-centrosymmetric solution of quaternion matrix Equation (1). If $\min = 0$, $X_A$ is called the minimal norm anti-centrosymmetric solution of quaternion matrix Equation (1).
Several new conclusions on the semi-tensor product of quaternion matrices are presented in this article. By using the semi-tensor product of quaternion matrices, quaternion matrix equations can be analyzed directly by vector representation. Based on the structure matrix of quaternion multiplication, we establish different matrix representations of quaternion matrices via the semi-tensor product and, on this basis, give the definition of the L-representation. Employing the vector representation of quaternion matrices and combining the L-representation with the GH-representation, several types of special minimal norm solutions of the quaternion matrix equation $\sum_{i=1}^{k} A_i X B_i = C$ are presented, along with the necessary and sufficient conditions for compatibility. Using the GH-representation method, we can remove redundancy and reduce the complexity of the problem by identifying the independent elements of a special matrix. It can be seen that the GH-representation simplifies the solution of quaternion matrix equations in a simple and effective manner.
The main sections of this article are as follows. In Section 2, the fundamentals of quaternions and the semi-tensor product of quaternion matrices are covered. In Section 3, the vector representation results for quaternion matrices are given and, combined with the structure matrix, the definition of the L-representation of quaternion matrices is proposed. In Section 4, the H-representations of several special matrices are given, and the definition of the GH-representation of special quaternion matrices is proposed. In Section 5, the necessary and sufficient conditions for the minimal norm solutions and the compatibility of the above problems are explored. In Section 6, the corresponding algorithms and numerical examples are presented to verify the effectiveness of the method, and we give the time comparison between the algorithm in this paper and the algorithms in references [43,44]. In Section 7, an application to centrosymmetric color digital image restoration is given. In Section 8, a brief summary of the paper is made.

2. Preliminaries

2.1. Quaternion and Quaternion Matrices

This part mainly introduces the basic knowledge of quaternions. For more information, please refer to the literature [25,26,27].
Definition 2.
A quaternion $x$ can be uniquely expressed as
$$x = x_0 + x_1 i + x_2 j + x_3 k \in \mathbb{Q},$$
where $x_s \in \mathbb{R}$, $s = 0, 1, 2, 3$, and the three imaginary units $i, j, k$ satisfy $i^2 = j^2 = k^2 = -1$, $ij = -ji = k$, $jk = -kj = i$, $ki = -ik = j$. The conjugate of $x$ is defined as
$$\bar{x} = x_0 - x_1 i - x_2 j - x_3 k \in \mathbb{Q}.$$
A quaternion matrix $X$ can be uniquely expressed as $X = X_0 + X_1 i + X_2 j + X_3 k \in \mathbb{Q}^{m\times n}$, where $X_s \in \mathbb{R}^{m\times n}$, $s = 0, 1, 2, 3$. The conjugate of $X$ is defined as $\bar{X} = X_0 - X_1 i - X_2 j - X_3 k \in \mathbb{Q}^{m\times n}$.
Definition 3
([24]). The norm of a quaternion $x = x_0 + x_1 i + x_2 j + x_3 k \in \mathbb{Q}$ is defined as
$$\|x\| = \sqrt{|x_0|^2 + |x_1|^2 + |x_2|^2 + |x_3|^2} = \sqrt{x\bar{x}},$$
and the Frobenius norm of $X = X_0 + X_1 i + X_2 j + X_3 k \in \mathbb{Q}^{m\times n}$ is defined as
$$\|X\| = \sqrt{\|X_0\|^2 + \|X_1\|^2 + \|X_2\|^2 + \|X_3\|^2}.$$

2.2. Semi-Tensor Product of Quaternion Matrices

In this section, some basic knowledge about semi-tensor product of quaternion matrices is given. For more details of semi-tensor product of matrices on real number fields, please refer to the literature [1,45,46].
Definition 4.
Suppose $A \in \mathbb{Q}^{m\times n}$, $B \in \mathbb{Q}^{p\times q}$. The semi-tensor product of $A$ and $B$ is defined as
$$A \ltimes B = (A \otimes I_{t/n})(B \otimes I_{t/p}),$$
where $t = \operatorname{lcm}(n, p)$ is the least common multiple of $n$ and $p$. If $n = p$, the semi-tensor product reduces to the traditional matrix product.
Example 1.
Suppose $A = \begin{pmatrix} 3 & 0 \\ 2 & 1 \end{pmatrix}$, $B = \begin{pmatrix} 4 & 1 & 4 & 1 \\ 5 & 1 & 1 & 1 \\ 3 & 4 & 5 & 3 \\ 1 & 1 & 2 & 2 \end{pmatrix}$. First, we partition $A$ and $B$ into blocks
$$A = \begin{pmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{pmatrix}, \qquad B = \begin{pmatrix} B_{11} & B_{12} \\ B_{21} & B_{22} \end{pmatrix}.$$
Then the semi-tensor product of $A$ and $B$ is
$$A \ltimes B = (A \otimes I_2)B = \begin{pmatrix} 3 & 0 & 0 & 0 \\ 0 & 3 & 0 & 0 \\ 2 & 0 & 1 & 0 \\ 0 & 2 & 0 & 1 \end{pmatrix} \begin{pmatrix} 4 & 1 & 4 & 1 \\ 5 & 1 & 1 & 1 \\ 3 & 4 & 5 & 3 \\ 1 & 1 & 2 & 2 \end{pmatrix} = \begin{pmatrix} 12 & 3 & 12 & 3 \\ 15 & 3 & 3 & 3 \\ 11 & 6 & 13 & 5 \\ 11 & 3 & 4 & 4 \end{pmatrix} = \begin{pmatrix} A_{11}B_{11} + A_{12}B_{21} & A_{11}B_{12} + A_{12}B_{22} \\ A_{21}B_{11} + A_{22}B_{21} & A_{21}B_{12} + A_{22}B_{22} \end{pmatrix}.$$
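Since the entries in Example 1 are real, the example can be reproduced directly with Kronecker products. The following MATLAB lines are an illustrative sketch, not code from the paper.

% Semi-tensor product A |x| B = (A ox I_{t/n})(B ox I_{t/p}), t = lcm(n, p),
% reproduced for the matrices of Example 1.
A = [3 0; 2 1];
B = [4 1 4 1; 5 1 1 1; 3 4 5 3; 1 1 2 2];
n = size(A, 2);  p = size(B, 1);  t = lcm(n, p);
AB = kron(A, eye(t/n)) * kron(B, eye(t/p))
% AB = [12 3 12 3; 15 3 3 3; 11 6 13 5; 11 3 4 4], matching Example 1.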
Theorem 1.
Suppose $\alpha, \beta \in \mathbb{R}$ and let $A, B, C$ be quaternion matrices. Then
(1) (Associative rule)
$$(A \ltimes B) \ltimes C = A \ltimes (B \ltimes C).$$
(2) (Distributive rule)
$$A \ltimes (\alpha B + \beta C) = \alpha A \ltimes B + \beta A \ltimes C, \qquad (\alpha B + \beta C) \ltimes A = \alpha B \ltimes A + \beta C \ltimes A.$$
(3) (Conjugate transpose)
$$(A \ltimes B)^H = B^H \ltimes A^H.$$
Definition 5
([46]). A swap matrix $W_{[m,n]}$ is an $mn \times mn$ matrix defined as
$$W_{[m,n]} = [\, I_n \otimes \delta_m^1,\ I_n \otimes \delta_m^2,\ \ldots,\ I_n \otimes \delta_m^m \,].$$
The following properties of the swap matrix facilitate matrix calculations.
Theorem 2.
(1) Suppose $A \in \mathbb{Q}^{m\times n}$. Then
$$W_{[m,n]} \ltimes V_r(A) = V_c(A), \qquad W_{[n,m]} \ltimes V_c(A) = V_r(A).$$
(2) Suppose $A \in \mathbb{Q}^{s\times t}$. Then for any integer $m > 0$,
$$W_{[s,m]} \ltimes A \ltimes W_{[m,t]} = I_m \otimes A.$$
Example 2.
Assume $A = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix} \in \mathbb{Q}^{2\times 2}$, $B = \begin{pmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \\ b_{31} & b_{32} \end{pmatrix} \in \mathbb{Q}^{3\times 2}$; then $m = n = 2$, $s = 3$, $t = 2$. Hence, we have
$$W_{[3,2]} = [\, I_2 \otimes \delta_3^1,\ I_2 \otimes \delta_3^2,\ I_2 \otimes \delta_3^3 \,] = \begin{pmatrix} 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \end{pmatrix}; \qquad W_{[2,2]} = [\, I_2 \otimes \delta_2^1,\ I_2 \otimes \delta_2^2 \,] = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}.$$
Then
$$W_{[3,2]} \ltimes \bar{B} \ltimes W_{[2,2]} \ltimes \bar{A} = (I_2 \otimes \bar{B}) \ltimes \bar{A} = \begin{pmatrix} \bar{b}_{11} & \bar{b}_{12} & 0 & 0 \\ \bar{b}_{21} & \bar{b}_{22} & 0 & 0 \\ \bar{b}_{31} & \bar{b}_{32} & 0 & 0 \\ 0 & 0 & \bar{b}_{11} & \bar{b}_{12} \\ 0 & 0 & \bar{b}_{21} & \bar{b}_{22} \\ 0 & 0 & \bar{b}_{31} & \bar{b}_{32} \end{pmatrix} \ltimes \begin{pmatrix} \bar{a}_{11} & \bar{a}_{12} \\ \bar{a}_{21} & \bar{a}_{22} \end{pmatrix} = \overline{A \otimes B},$$
where the $(i,j)$ block of the last matrix is $\bar{B}\,\bar{a}_{ij} = \overline{a_{ij} B}$, with entries $\bar{b}_{kl}\bar{a}_{ij} = \overline{a_{ij} b_{kl}}$.
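Both parts of Theorem 2 can be checked numerically. The MATLAB sketch below is illustrative only (delta and swapMat are helper names introduced here); it builds $W_{[m,n]}$ from Definition 5 and verifies the identities on real matrices, for which the conjugations appearing in Example 2 are trivial.

% Swap matrix W_{[m,n]} = [I_n ox d_m^1, ..., I_n ox d_m^m] and Theorem 2.
delta   = @(m, i) full(sparse(i, 1, 1, m, 1));                 % i-th column of I_m
swapMat = @(m, n) cell2mat(arrayfun(@(i) kron(eye(n), delta(m, i)), ...
                                    1:m, 'UniformOutput', false));

m = 2;  n = 3;  A = randn(m, n);
Vr = reshape(A.', [], 1);   Vc = A(:);                         % row / column stacking
err1 = norm(swapMat(m, n) * Vr - Vc)                           % Theorem 2(1), ~ 0

s = 3;  t = 2;  B = randn(s, t);                               % Theorem 2(2)
lhs  = swapMat(s, m) * kron(B, eye(m)) * swapMat(m, t);        % W_{[s,m]} |x| B |x| W_{[m,t]}
err2 = norm(lhs - kron(eye(m), B), 'fro')                      % ~ 0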

3. Main Conclusions

3.1. Vector Representation of Quaternion Matrices

As we know, quaternion multiplication does not satisfy the commutative law, so the familiar identity
$$V_c(AXB) = (B^T \otimes A) V_c(X)$$
does not hold over the quaternions. Therefore, some scholars [24,26,28,29] mainly study quaternion matrix equations based on the real representation matrix and the complex representation matrix of quaternion matrices. However, we can find a new straightening result for quaternions according to the property of quaternion conjugation. Some straightening conclusions on the semi-tensor product of quaternion matrices are given below, which will be used to solve quaternion matrix equations.
Definition 6.
For $A = (a_{ij}) \in \mathbb{Q}^{m\times n}$, the column vector representation of the quaternion matrix $A$ is defined as
$$V_c(A) = (a_{11}, \ldots, a_{m1}, a_{12}, \ldots, a_{m2}, \ldots, a_{1n}, \ldots, a_{mn})^T,$$
and the row vector representation of $A$ is defined as
$$V_r(A) = (a_{11}, \ldots, a_{1n}, a_{21}, \ldots, a_{2n}, \ldots, a_{m1}, \ldots, a_{mn})^T.$$
Theorem 3.
Suppose $A \in \mathbb{Q}^{m\times n}$, $X \in \mathbb{Q}^{n\times q}$, $Y \in \mathbb{Q}^{p\times m}$. Then
$$(1)\ \ V_r(A \ltimes X) = A \ltimes V_r(X), \qquad V_c(A \ltimes X) = (I_q \otimes A) \ltimes V_c(X);$$
$$(2)\ \ V_c(\overline{Y \ltimes A}) = A^H \ltimes V_c(\bar{Y}), \qquad V_r(\overline{Y \ltimes A}) = (I_p \otimes A^H) \ltimes V_r(\bar{Y}).$$
Proof. 
(1) We first prove $V_r(A \ltimes X) = A \ltimes V_r(X)$. Suppose $C = A \ltimes X$, let $a_i$ $(i = 1, \ldots, m)$ denote the $i$th row of $A$, $x_j$ $(j = 1, \ldots, n)$ the $j$th row of $X$, and $c_i$ $(i = 1, \ldots, m)$ the $i$th row of $C$. Then the $i$th block of $A \ltimes V_r(X)$ is
$$a_i \ltimes V_r(X) = a_i \ltimes \begin{pmatrix} (x_1)^T \\ \vdots \\ (x_n)^T \end{pmatrix} = \begin{pmatrix} \sum_{k=1}^{n} a_{ik} x_{k1} \\ \vdots \\ \sum_{k=1}^{n} a_{ik} x_{kq} \end{pmatrix} = (c_i)^T,$$
hence $V_r(A \ltimes X) = A \ltimes V_r(X)$.
By the properties of the swap matrix and $V_r(A \ltimes X) = A \ltimes V_r(X)$, we obtain
$$V_c(A \ltimes X) = W_{[m,q]} \ltimes V_r(A \ltimes X) = W_{[m,q]} \ltimes A \ltimes V_r(X) = W_{[m,q]} \ltimes A \ltimes W_{[q,n]} \ltimes V_c(X) = (I_q \otimes A) \ltimes V_c(X).$$
(2) We prove $V_c(\overline{Y \ltimes A}) = A^H \ltimes V_c(\bar{Y})$. Let $A = [a_1, a_2, \ldots, a_n]$, where $a_i$ $(i = 1, 2, \ldots, n)$ denotes the $i$th column of $A$, and $Y = [y_1, y_2, \ldots, y_m]$, where $y_j$ $(j = 1, 2, \ldots, m)$ denotes the $j$th column of $Y$. Then
$$V_c(\overline{Y \ltimes A}) = V_c(\overline{Y a_1}, \ldots, \overline{Y a_n}) = \begin{pmatrix} \overline{Y a_1} \\ \vdots \\ \overline{Y a_n} \end{pmatrix}.$$
By the conjugate properties of quaternions, we have
$$\overline{Y a_i} = \overline{y_1 a_{1i}} + \overline{y_2 a_{2i}} + \cdots + \overline{y_m a_{mi}} = \bar{a}_{1i} \bar{y}_1 + \bar{a}_{2i} \bar{y}_2 + \cdots + \bar{a}_{mi} \bar{y}_m = [\, \bar{a}_{1i} I_p, \ldots, \bar{a}_{mi} I_p \,] \ltimes V_c(\bar{Y}).$$
So
$$V_c(\overline{Y \ltimes A}) = \begin{pmatrix} \bar{a}_{11} I_p & \bar{a}_{21} I_p & \cdots & \bar{a}_{m1} I_p \\ \bar{a}_{12} I_p & \bar{a}_{22} I_p & \cdots & \bar{a}_{m2} I_p \\ \vdots & \vdots & & \vdots \\ \bar{a}_{1n} I_p & \bar{a}_{2n} I_p & \cdots & \bar{a}_{mn} I_p \end{pmatrix} V_c(\bar{Y}) = (A^H \otimes I_p)\, V_c(\bar{Y}) = A^H \ltimes V_c(\bar{Y}).$$
By the properties of the swap matrix and $V_c(\overline{Y \ltimes A}) = A^H \ltimes V_c(\bar{Y})$, we obtain
$$V_r(\overline{Y \ltimes A}) = W_{[n,p]} \ltimes V_c(\overline{Y \ltimes A}) = W_{[n,p]} \ltimes A^H \ltimes V_c(\bar{Y}) = W_{[n,p]} \ltimes A^H \ltimes W_{[p,m]} \ltimes V_r(\bar{Y}) = (I_p \otimes A^H) \ltimes V_r(\bar{Y}).$$
   □
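The two identities of Theorem 3 can also be verified numerically. In the MATLAB sketch below (illustrative only; qmul, qconj and qvc are helper names introduced here, not notation from the paper), quaternion matrices are stored as 1×4 cells of their real parts.

% Numerical check of Theorem 3.
qmul  = @(X, Y) { X{1}*Y{1}-X{2}*Y{2}-X{3}*Y{3}-X{4}*Y{4}, ...
                  X{1}*Y{2}+X{2}*Y{1}+X{3}*Y{4}-X{4}*Y{3}, ...
                  X{1}*Y{3}-X{2}*Y{4}+X{3}*Y{1}+X{4}*Y{2}, ...
                  X{1}*Y{4}+X{2}*Y{3}-X{3}*Y{2}+X{4}*Y{1} };
qconj = @(X) { X{1}, -X{2}, -X{3}, -X{4} };
qvc   = @(X) { reshape(X{1},[],1), reshape(X{2},[],1), ...
               reshape(X{3},[],1), reshape(X{4},[],1) };       % componentwise V_c

m = 2; n = 3; q = 4; p = 2;
A = { randn(m,n), randn(m,n), randn(m,n), randn(m,n) };
X = { randn(n,q), randn(n,q), randn(n,q), randn(n,q) };
Y = { randn(p,m), randn(p,m), randn(p,m), randn(p,m) };

% (1)  V_c(A |x| X) = (I_q ox A) |x| V_c(X)
lhs1 = qvc(qmul(A, X));
rhs1 = qmul({kron(eye(q),A{1}), kron(eye(q),A{2}), ...
             kron(eye(q),A{3}), kron(eye(q),A{4})}, qvc(X));
err1 = norm([lhs1{1}-rhs1{1}; lhs1{2}-rhs1{2}; lhs1{3}-rhs1{3}; lhs1{4}-rhs1{4}])

% (2)  V_c(conj(Y |x| A)) = A^H |x| V_c(conj(Y))
AH   = { A{1}.', -A{2}.', -A{3}.', -A{4}.' };
lhs2 = qvc(qconj(qmul(Y, A)));
rhs2 = qmul({kron(AH{1},eye(p)), kron(AH{2},eye(p)), ...
             kron(AH{3},eye(p)), kron(AH{4},eye(p))}, qvc(qconj(Y)));
err2 = norm([lhs2{1}-rhs2{1}; lhs2{2}-rhs2{2}; lhs2{3}-rhs2{3}; lhs2{4}-rhs2{4}])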

3.2. L -Representation of Quaternion Matrices

Our main work in this section is to study the matrix representation of quaternion matrices by using the structure matrix of the multiplication of quaternion.
Definition 7.
([1]). Let $V_i$ $(i = 1, 2, \ldots, k)$ be $n_i$-dimensional vector spaces with $e_1^i, \ldots, e_{n_i}^i$ as the fixed basis of $V_i$, and let $\phi: V_1 \times \cdots \times V_k \to V_0$ be a multilinear mapping. Denote
$$\phi(e_{i_1}^1, \ldots, e_{i_k}^k) = \sum_{i_0=1}^{n_0} \mu_{i_1 i_2 \cdots i_k}^{i_0}\, e_{i_0}^0, \qquad i_j = 1, \ldots, n_j,\ j = 1, \ldots, k.$$
Then the matrix
$$M_\phi^1 = \begin{pmatrix} \mu_{1\cdots11}^{1} & \cdots & \mu_{1\cdots1n_k}^{1} & \cdots & \mu_{n_1\cdots n_{k-1}1}^{1} & \cdots & \mu_{n_1\cdots n_{k-1}n_k}^{1} \\ \vdots & & \vdots & & \vdots & & \vdots \\ \mu_{1\cdots11}^{n_0} & \cdots & \mu_{1\cdots1n_k}^{n_0} & \cdots & \mu_{n_1\cdots n_{k-1}1}^{n_0} & \cdots & \mu_{n_1\cdots n_{k-1}n_k}^{n_0} \end{pmatrix},$$
whose columns are indexed by $(i_1, \ldots, i_k)$ with the last index running fastest, is called the right structure matrix of $\phi$. The matrix
$$M_\phi^2 = \begin{pmatrix} \mu_{11\cdots1}^{1} & \cdots & \mu_{n_1 1\cdots1}^{1} & \cdots & \mu_{1 n_2\cdots n_k}^{1} & \cdots & \mu_{n_1 n_2\cdots n_k}^{1} \\ \vdots & & \vdots & & \vdots & & \vdots \\ \mu_{11\cdots1}^{n_0} & \cdots & \mu_{n_1 1\cdots1}^{n_0} & \cdots & \mu_{1 n_2\cdots n_k}^{n_0} & \cdots & \mu_{n_1 n_2\cdots n_k}^{n_0} \end{pmatrix},$$
whose columns are ordered with the first index running fastest, is called the left structure matrix of $\phi$. The left and right structure matrices are collectively called structure matrices.
Remark 1.
For multi-dimensional data, the entries can be arranged according to chosen index orderings. The left and right structure matrices given in Definition 7 correspond to two different orderings of the column indices.
Example 3.
For $x = x_0 + x_1 i + x_2 j + x_3 k$, $y = y_0 + y_1 i + y_2 j + y_3 k \in \mathbb{Q}$, fix the ordered basis $\{1, i, j, k\}$ and normalize it to
$$1 \sim \delta_4^1, \quad i \sim \delta_4^2, \quad j \sim \delta_4^3, \quad k \sim \delta_4^4.$$
Each quaternion can then be represented as a column vector:
$$x = x_0 + x_1 i + x_2 j + x_3 k \sim (x_0, x_1, x_2, x_3)^T =: x^r.$$
The right structure matrix of quaternion multiplication is
$$M_Q^1 = \begin{pmatrix} 1 & 0 & 0 & 0 & 0 & -1 & 0 & 0 & 0 & 0 & -1 & 0 & 0 & 0 & 0 & -1 \\ 0 & 1 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & -1 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 & -1 & 1 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 & 1 & 0 & 0 & -1 & 0 & 0 & 1 & 0 & 0 & 0 \end{pmatrix}.$$
In addition, the left structure matrix of quaternion multiplication is
$$M_Q^2 = \begin{pmatrix} 1 & 0 & 0 & 0 & 0 & -1 & 0 & 0 & 0 & 0 & -1 & 0 & 0 & 0 & 0 & -1 \\ 0 & 1 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & 0 & 0 & 1 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 & 1 & 1 & 0 & 0 & 0 & 0 & -1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 & -1 & 0 & 0 & 1 & 0 & 0 & 1 & 0 & 0 & 0 \end{pmatrix}.$$
Moreover, we have
$$(xy)^r = M_Q^1 \ltimes x^r \ltimes y^r = M_Q^2 \ltimes y^r \ltimes x^r.$$
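As a sanity check on the two structure matrices (whose sign pattern is reconstructed above), the following MATLAB sketch (illustrative only) verifies $(xy)^r = M_Q^1 \ltimes x^r \ltimes y^r = M_Q^2 \ltimes y^r \ltimes x^r$ for random quaternions, using the fact that for column vectors the semi-tensor product reduces to a Kronecker product.

% Structure matrices of quaternion multiplication and a numerical check.
MQ1 = [1  0  0  0   0 -1  0  0   0  0 -1  0   0  0  0 -1;
       0  1  0  0   1  0  0  0   0  0  0  1   0  0 -1  0;
       0  0  1  0   0  0  0 -1   1  0  0  0   0  1  0  0;
       0  0  0  1   0  0  1  0   0 -1  0  0   1  0  0  0];
MQ2 = [1  0  0  0   0 -1  0  0   0  0 -1  0   0  0  0 -1;
       0  1  0  0   1  0  0  0   0  0  0 -1   0  0  1  0;
       0  0  1  0   0  0  0  1   1  0  0  0   0 -1  0  0;
       0  0  0  1   0  0 -1  0   0  1  0  0   1  0  0  0];

x = randn(4, 1);  y = randn(4, 1);          % x = x0 + x1 i + x2 j + x3 k, etc.
xy = [ x(1)*y(1) - x(2)*y(2) - x(3)*y(3) - x(4)*y(4);   % quaternion product,
       x(1)*y(2) + x(2)*y(1) + x(3)*y(4) - x(4)*y(3);   % written out componentwise
       x(1)*y(3) - x(2)*y(4) + x(3)*y(1) + x(4)*y(2);
       x(1)*y(4) + x(2)*y(3) - x(3)*y(2) + x(4)*y(1) ];
err = norm(xy - MQ1 * kron(x, y)) + norm(xy - MQ2 * kron(y, x))   % ~ 0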
Under different normalizations of the basis, the structure matrix of quaternion multiplication takes different forms. We now define the matrix representation of quaternion matrices systematically by using the structure matrix of quaternion multiplication and the semi-tensor product of quaternion matrices.
Definition 8.
Let $X = X_0 + X_1 i + X_2 j + X_3 k \in \mathbb{Q}^{m\times n}$ be a quaternion matrix, where $X_t \in \mathbb{R}^{m\times n}$ $(t = 0, 1, 2, 3)$, and denote $\hat{X} = \begin{pmatrix} \pm X_0^T & \pm X_1^T & \pm X_2^T & \pm X_3^T \end{pmatrix}^T$. Let $\Phi$ be the mapping $\Phi: X \mapsto \Phi(X) \in \mathbb{R}^{4m\times 4n}$ given by
$$\Phi(X) = M_Q \ltimes (I_4 \otimes \hat{X});$$
$\Phi(X)$ is called a matrix representation of the quaternion matrix $X$. Furthermore, the first block column of $\Phi(X)$ is denoted by
$$\Phi_c(X) = \Phi(X) \ltimes \delta_4^1.$$
Remark 2.
It can be seen from the definition that $\Phi(X)$ and $\Phi_c(X)$ are determined by $\hat{X}$ and $M_Q$; that is, once $\hat{X}$ and $M_Q$ are fixed, $\Phi(X)$ and $\Phi_c(X)$ are uniquely determined.
Example 4.
Let $X = X_0 + X_1 i + X_2 j + X_3 k \in \mathbb{Q}^{n\times n}$ and take $\hat{X} = (X_0^T\ X_1^T\ X_2^T\ X_3^T)^T$. With $M_Q^1$ defined in Example 3, the matrix representation of $X$ can be expressed as
$$\Phi_1(X) = M_Q^1 \ltimes \left( I_4 \otimes \begin{pmatrix} X_0 \\ X_1 \\ X_2 \\ X_3 \end{pmatrix} \right) = \begin{pmatrix} X_0 & -X_1 & -X_2 & -X_3 \\ X_1 & X_0 & X_3 & -X_2 \\ X_2 & -X_3 & X_0 & X_1 \\ X_3 & X_2 & -X_1 & X_0 \end{pmatrix}.$$
If we select $M_Q = M_Q^2$, then
$$\Phi_2(X) = M_Q^2 \ltimes \left( I_4 \otimes \begin{pmatrix} X_0 \\ X_1 \\ X_2 \\ X_3 \end{pmatrix} \right) = \begin{pmatrix} X_0 & -X_1 & -X_2 & -X_3 \\ X_1 & X_0 & -X_3 & X_2 \\ X_2 & X_3 & X_0 & -X_1 \\ X_3 & -X_2 & X_1 & X_0 \end{pmatrix}.$$
The matrix representation method in reference [47] is the matrix representation Φ 2 ( X ) in Example 4. Furthermore, the matrix representation of quaternion matrices plays an important role in many aspects of quaternion research.
Definition 9.
Suppose $X \in \mathbb{Q}^{m\times n}$, $Y \in \mathbb{Q}^{n\times p}$. $\Phi(X)$ is called the L-representation of quaternion matrices if and only if $\Phi$ satisfies the following equations:
$$(1)\ \ \Phi(X \ltimes Y) = \Phi(X) \ltimes \Phi(Y), \qquad (2)\ \ \Phi_c(X \ltimes Y) = \Phi(X) \ltimes \Phi_c(Y).$$
It is easy to verify that, of the two matrix representations given in Example 4, $\Phi_1(X)$ does not satisfy the two conditions of the L-representation, while $\Phi_2(X)$ does. Clearly, Definition 9 has the following equivalent form.
Definition 10.
Suppose $X \in \mathbb{Q}^{m\times n}$, $Y \in \mathbb{Q}^{n\times p}$. $\Phi(X)$ is called the L-representation of quaternion matrices if and only if it satisfies
$$(1)\ \ (M_Q \otimes I_m)(I_4 \otimes \widehat{X \ltimes Y}) = (M_Q \otimes I_m)(I_4 \otimes \hat{X})\,(M_Q \otimes I_n)(I_4 \otimes \hat{Y}),$$
$$(2)\ \ (M_Q \otimes I_m)(\delta_4^1 \otimes \widehat{X \ltimes Y}) = (M_Q \otimes I_m)(I_4 \otimes \hat{X})\,(M_Q \otimes I_n)(\delta_4^1 \otimes \hat{Y}).$$
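The claim above that $\Phi_2$ satisfies the L-representation conditions can be checked numerically. The following MATLAB sketch is illustrative only; the helper name Phi2 and the sign pattern are taken from the reconstruction in Example 4, and quaternion matrices are stored as four real parts.

% Quaternion matrix product computed componentwise, then compared in the
% Phi_2 block-matrix representation: Phi_2(X Y) = Phi_2(X) Phi_2(Y).
n = 3;
X0 = randn(n); X1 = randn(n); X2 = randn(n); X3 = randn(n);
Y0 = randn(n); Y1 = randn(n); Y2 = randn(n); Y3 = randn(n);
Phi2 = @(A0, A1, A2, A3) [ A0, -A1, -A2, -A3;
                           A1,  A0, -A3,  A2;
                           A2,  A3,  A0, -A1;
                           A3, -A2,  A1,  A0 ];
Z0 = X0*Y0 - X1*Y1 - X2*Y2 - X3*Y3;       % components of Z = X Y
Z1 = X0*Y1 + X1*Y0 + X2*Y3 - X3*Y2;
Z2 = X0*Y2 - X1*Y3 + X2*Y0 + X3*Y1;
Z3 = X0*Y3 + X1*Y2 - X2*Y1 + X3*Y0;
err = norm(Phi2(Z0,Z1,Z2,Z3) - Phi2(X0,X1,X2,X3)*Phi2(Y0,Y1,Y2,Y3), 'fro')   % ~ 0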

4. GH -Representation of Quaternion Matrices

In this section, we will first introduce the definition of H -representation, and then give examples of H -representation of special matrices.
Definition 11
([42]). Consider a $q$-dimensional real matrix subspace $\mathbb{X} \subseteq \mathbb{R}^{n\times n}$ over the field $\mathbb{R}$. Assume that $e_1, e_2, \ldots, e_q$ form a basis of $\mathbb{X}$, and define $H = [\, V_c(e_1), V_c(e_2), \ldots, V_c(e_q) \,]$. For each $X \in \mathbb{X}$, if we express $\Psi(X) = V_c(X)$ in the form
$$\Psi(X) = H \tilde{X},$$
with a $q \times 1$ vector $\tilde{X} = (x_1, x_2, \ldots, x_q)^T$ and $X = \sum_{i=1}^{q} x_i e_i$, then $H \tilde{X}$ is called an H-representation of $\Psi(X)$, and $H$ is called an H-representation matrix of $\Psi(X)$.
From the definition of quaternion (anti-)centrosymmetric matrices, we know that quaternion (anti-)centrosymmetric matrices are closely related to real (anti-)centrosymmetric matrices. In the following, we take real (anti-)centrosymmetric matrices as examples and give their H-representations.
Example 5.
Let $\mathbb{X} = \mathrm{SR}^{3\times 3}$ and $X = (x_{ij}) \in \mathbb{X}$; then $\dim(\mathbb{X}) = 5$. Select a basis of $\mathbb{X}$ as
$$e_1 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix},\ e_2 = \begin{pmatrix} 0 & 0 & 0 \\ 1 & 0 & 1 \\ 0 & 0 & 0 \end{pmatrix},\ e_3 = \begin{pmatrix} 0 & 0 & 1 \\ 0 & 0 & 0 \\ 1 & 0 & 0 \end{pmatrix},\ e_4 = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix},\ e_5 = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{pmatrix}.$$
It is easy to compute
$$\Psi(X) = V_c(X) = (x_{11}, x_{21}, x_{31}, x_{12}, x_{22}, x_{12}, x_{31}, x_{21}, x_{11})^T,$$
$$\tilde{X} = (x_{11}, x_{21}, x_{31}, x_{12}, x_{22})^T,$$
$$H = \begin{pmatrix} 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ 0 & 1 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \end{pmatrix}^T.$$
Example 6.
Let $\mathbb{X} = \mathrm{ASR}^{3\times 3}$ and $X = (x_{ij}) \in \mathbb{X}$; then $\dim(\mathbb{X}) = 4$. Select a basis of $\mathbb{X}$ as
$$e_1 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & -1 \end{pmatrix},\ e_2 = \begin{pmatrix} 0 & 0 & 0 \\ 1 & 0 & -1 \\ 0 & 0 & 0 \end{pmatrix},\ e_3 = \begin{pmatrix} 0 & 0 & -1 \\ 0 & 0 & 0 \\ 1 & 0 & 0 \end{pmatrix},\ e_4 = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 0 \\ 0 & -1 & 0 \end{pmatrix}.$$
It is easy to compute
$$\Psi(X) = V_c(X) = (x_{11}, x_{21}, x_{31}, x_{12}, 0, -x_{12}, -x_{31}, -x_{21}, -x_{11})^T,$$
$$\tilde{X} = (x_{11}, x_{21}, x_{31}, x_{12})^T,$$
$$H = \begin{pmatrix} 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 \\ 0 & 1 & 0 & 0 & 0 & 0 & 0 & -1 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & -1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & -1 & 0 & 0 & 0 \end{pmatrix}^T.$$
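The H-representation matrices of Examples 5 and 6 can be generated for any $n$ from the mirror pairing $(i,j) \leftrightarrow (n+1-i,\,n+1-j)$. The MATLAB sketch below is illustrative only (Hs, Ha and the index bookkeeping are this sketch's own constructions); for $n = 3$ it reproduces the $9\times 5$ and $9\times 4$ matrices above and checks that $V_c(X) = H_s\tilde{X}$ recovers a random centrosymmetric matrix from its independent elements.

% Build H_s and H_a column by column from the mirror pairing of positions.
n = 3;  N = n^2;
idx    = reshape(1:N, n, n);                  % linear index of position (i,j)
mirror = rot90(idx, 2);                       % linear index of (n+1-i, n+1-j)
Hs = [];  Ha = [];
for k = 1:N
    if k <= mirror(k)                         % keep the first of each mirror pair
        col = zeros(N, 1);  col(k) = 1;  col(mirror(k)) = 1;
        Hs = [Hs, col];
    end
    if k < mirror(k)                          % center element is excluded (it must be 0)
        col = zeros(N, 1);  col(k) = 1;  col(mirror(k)) = -1;
        Ha = [Ha, col];
    end
end
% size(Hs) = [9 5], size(Ha) = [9 4], matching Examples 5 and 6.

Xs  = randn(n);  Xs = (Xs + rot90(Xs, 2)) / 2;   % random centrosymmetric matrix
xt  = Hs \ Xs(:);                                % its independent elements
err = norm(Hs * xt - Xs(:))                      % ~ 0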
Then, we select the standard basis for centrosymmetric and anti-centrosymmetric matrices, and give the H -representation matrices, respectively.
1. If $\mathbb{X} = \mathrm{SR}^{n\times n}$, we select a standard basis
$$\{ E_1, E_2, \ldots, E_\alpha \},$$
where $E_i = (e_{pq})$ with $e_{l(k+1)} = e_{(n+1-l)(n-k)} = 1$ and all other entries zero, $i = kn + l$ $(0 \le k,\ 1 \le l \le n)$, $i = 1, 2, \ldots, \alpha$, and
$$\alpha = \begin{cases} \dfrac{n^2+1}{2}, & \text{if } n \text{ is odd}, \\[1mm] \dfrac{n^2}{2}, & \text{if } n \text{ is even}. \end{cases}$$
Based on the above standard basis, for any $X \in \mathbb{X}$ we have
$$\tilde{X} = (x_1, x_2, \ldots, x_\alpha)^T$$
and
$$H_s = [\, V_c(E_1), V_c(E_2), \ldots, V_c(E_\alpha) \,] \in \mathbb{R}^{n^2 \times \alpha}.$$
2. If $\mathbb{X} = \mathrm{ASR}^{n\times n}$, we select a standard basis
$$\{ F_1, F_2, \ldots, F_\beta \},$$
where $F_i = (f_{pq})$ with $f_{l(k+1)} = -f_{(n+1-l)(n-k)} = 1$ and all other entries zero, $i = kn + l$ $(0 \le k,\ 1 \le l \le n)$, $i = 1, 2, \ldots, \beta$, and
$$\beta = \begin{cases} \dfrac{n^2-1}{2}, & \text{if } n \text{ is odd}, \\[1mm] \dfrac{n^2}{2}, & \text{if } n \text{ is even}. \end{cases}$$
Based on the above standard basis, for any $X \in \mathbb{X}$ we have
$$\tilde{X} = (x_1, x_2, \ldots, x_\beta)^T$$
and
$$H_a = [\, V_c(F_1), V_c(F_2), \ldots, V_c(F_\beta) \,] \in \mathbb{R}^{n^2 \times \beta}.$$
Note that $\Psi(X)$ is a column vector formed by all elements of the matrix $X$. For the sake of clarity, we denote the H-representation matrix corresponding to $\mathbb{X} = \mathrm{SR}^{n\times n}$ by $H_s$ and that corresponding to $\mathbb{X} = \mathrm{ASR}^{n\times n}$ by $H_a$.
Theorem 4.
For an $n^2 \times 1$ vector $\alpha_1$, if $\Psi^{-1}(\alpha_1) \in \mathrm{SR}^{n\times n}$, then there exists an $\alpha \times 1$ vector $\beta_1$ such that $\alpha_1 = H_s \beta_1$. For an $n^2 \times 1$ vector $\alpha_2$, if $\Psi^{-1}(\alpha_2) \in \mathrm{ASR}^{n\times n}$, then there exists a $\beta \times 1$ vector $\beta_2$ such that $\alpha_2 = H_a \beta_2$.
The H-representation prompts us to define the GH-representation of quaternion matrices.
Definition 12.
Consider a quaternion matrix subspace $\mathbb{X} \subseteq \mathbb{Q}^{n\times n}$. For each $X = X_0 + X_1 i + X_2 j + X_3 k \in \mathbb{X}$, let $S = \{X_0, X_1, X_2, X_3\}$. A permutation $\sigma$ on $S$ is a one-to-one mapping from $S$ to $S$; denote $\hat{X} = \begin{pmatrix} \pm\sigma(X_0)^T & \pm\sigma(X_1)^T & \pm\sigma(X_2)^T & \pm\sigma(X_3)^T \end{pmatrix}^T$. If we express $\Psi(X) = V_c(\hat{X})$ in the form
$$\Psi(X) = H_G \tilde{X},$$
where $H_G = \operatorname{diag}\big( H_{\sigma(X_0)}, H_{\sigma(X_1)}, H_{\sigma(X_2)}, H_{\sigma(X_3)} \big)$ and $\tilde{X} = \widetilde{V_c(X)}$ collects the independent elements of each part of $V_c(X)$, then $H_G \tilde{X}$ is called a GH-representation of $\Psi(X)$, and $H_G$ is called a GH-representation matrix of $\Psi(X)$.

5. The Solutions of Problem 1 and Problem 2

In order to obtain the solution of the quaternion matrix Equation (1), we begin with the following Lemmas.
Lemma 1
([48]). The least squares solutions of the linear system of equations $Ax = b$, with $A \in \mathbb{R}^{m\times n}$ and $b \in \mathbb{R}^m$, can be represented as
$$x = A^\dagger b + (I - A^\dagger A) y,$$
where $y \in \mathbb{R}^n$ is an arbitrary vector. The minimal norm least squares solution of the linear system $Ax = b$ is $A^\dagger b$.
Lemma 2
([48]). The linear system of equations $Ax = b$, with $A \in \mathbb{R}^{m\times n}$ and $b \in \mathbb{R}^m$, has a solution $x \in \mathbb{R}^n$ if and only if
$$A A^\dagger b = b.$$
In this case its general solution is
$$x = A^\dagger b + (I - A^\dagger A) y,$$
where $y \in \mathbb{R}^n$ is an arbitrary vector. The minimal norm solution of the linear system $Ax = b$ is $A^\dagger b$.
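In MATLAB terms, both lemmas are captured by the pseudoinverse. The following lines are a small illustrative check, not code from the paper.

% pinv(A)*b is the minimal norm least squares solution of A x = b, and
% A x = b is consistent iff A*pinv(A)*b = b (Lemmas 1 and 2).
A = randn(6, 4) * randn(4, 8);            % a rank-deficient 6 x 8 matrix
b_consistent   = A * randn(8, 1);         % lies in the range of A
b_inconsistent = randn(6, 1);             % generically not in the range of A
x_min = pinv(A) * b_consistent;                              % minimal norm solution
res   = norm(A * pinv(A) * b_consistent - b_consistent)      % ~ 0  (solvable)
gap   = norm(A * pinv(A) * b_inconsistent - b_inconsistent)  % > 0  (least squares only)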
In the following solution process we select $\hat{X} = (X_0^T\ X_1^T\ X_2^T\ X_3^T)^T$ for $X = X_0 + X_1 i + X_2 j + X_3 k \in \mathbb{Q}^{n\times n}$; in this case, the matrix representation we obtain is of the form $\Phi_2(X)$ in Example 4. Based on the earlier discussion, we now turn our attention to Problem 1 and obtain the necessary and sufficient condition for the existence of centrosymmetric solutions of quaternion matrix Equation (1) in the following theorems.
Theorem 5.
Suppose $A_i \in \mathbb{Q}^{m\times n}$, $B_i \in \mathbb{Q}^{n\times p}$ $(i = 1, \ldots, k)$, $C \in \mathbb{Q}^{m\times p}$. Then the set $M_S$ of Problem 1 can be represented as
$$M_S = \Big\{ X \in \mathrm{S}^{n\times n} \,\Big|\, \Phi_c(V_c(X)) = \vec{H}_s R_1^\dagger \Phi_c(V_c(C)) + \vec{H}_s (I_{4\alpha} - R_1^\dagger R_1) y \Big\},$$
where $y$ is an arbitrary vector of suitable dimension. Moreover, the minimal norm least squares centrosymmetric solution $X_S$ of quaternion matrix Equation (1) satisfies
$$\Phi_c(V_c(X_S)) = \vec{H}_s R_1^\dagger \Phi_c(V_c(C)), \qquad (3)$$
where
$$\vec{H}_s = \operatorname{diag}(H_s, H_s, H_s, H_s) \in \mathbb{R}^{4n^2 \times 4\alpha}, \quad K_n = \operatorname{diag}(I_n, -I_n, -I_n, -I_n), \quad R_1 = \sum_{i=1}^{k} \Phi(I_p \otimes A_i)\, K_{np}\, \Phi(B_i^H \otimes I_n)\, K_{n^2}\, \vec{H}_s.$$
Proof. 
For $X = X_0 + X_1 i + X_2 j + X_3 k \in \mathrm{S}^{n\times n}$, from Theorem 3, Theorem 4 and the definition of the GH-representation, we obtain
$$\Big\| \sum_{i=1}^{k} A_i X B_i - C \Big\| = \Big\| \sum_{i=1}^{k} V_c(A_i X B_i) - V_c(C) \Big\| = \Big\| \sum_{i=1}^{k} (I_p \otimes A_i) \ltimes \overline{V_c(\overline{X \ltimes B_i})} - V_c(C) \Big\|$$
$$= \Big\| \sum_{i=1}^{k} (I_p \otimes A_i) \ltimes \overline{(B_i^H \otimes I_n) \ltimes V_c(\bar{X})} - V_c(C) \Big\| = \Big\| \sum_{i=1}^{k} \Phi_c\Big( (I_p \otimes A_i) \ltimes \overline{(B_i^H \otimes I_n) \ltimes V_c(\bar{X})} \Big) - \Phi_c(V_c(C)) \Big\|$$
$$= \Big\| \sum_{i=1}^{k} \Phi(I_p \otimes A_i)\, K_{np}\, \Phi(B_i^H \otimes I_n)\, K_{n^2}\, \Phi_c(V_c(X)) - \Phi_c(V_c(C)) \Big\| = \Big\| \sum_{i=1}^{k} \Phi(I_p \otimes A_i)\, K_{np}\, \Phi(B_i^H \otimes I_n)\, K_{n^2}\, \vec{H}_s \begin{pmatrix} \tilde{X}_0 \\ \tilde{X}_1 \\ \tilde{X}_2 \\ \tilde{X}_3 \end{pmatrix} - \Phi_c(V_c(C)) \Big\|$$
$$= \Big\| R_1 \begin{pmatrix} \tilde{X}_0 \\ \tilde{X}_1 \\ \tilde{X}_2 \\ \tilde{X}_3 \end{pmatrix} - \Phi_c(V_c(C)) \Big\|.$$
Thus
$$\Big\| \sum_{i=1}^{k} A_i X B_i - C \Big\| = \min \ \Longleftrightarrow\ \Big\| R_1 \begin{pmatrix} \tilde{X}_0 \\ \tilde{X}_1 \\ \tilde{X}_2 \\ \tilde{X}_3 \end{pmatrix} - \Phi_c(V_c(C)) \Big\| = \min.$$
Consider the real matrix equation
$$R_1 \begin{pmatrix} \tilde{X}_0 \\ \tilde{X}_1 \\ \tilde{X}_2 \\ \tilde{X}_3 \end{pmatrix} = \Phi_c(V_c(C)).$$
By Lemma 1, its least squares solutions can be represented as
$$\begin{pmatrix} \tilde{X}_0 \\ \tilde{X}_1 \\ \tilde{X}_2 \\ \tilde{X}_3 \end{pmatrix} = R_1^\dagger \Phi_c(V_c(C)) + (I_{4\alpha} - R_1^\dagger R_1) y, \qquad y \in \mathbb{R}^{4\alpha}.$$
Then we have
$$\Phi_c(V_c(X)) = \vec{H}_s \begin{pmatrix} \tilde{X}_0 \\ \tilde{X}_1 \\ \tilde{X}_2 \\ \tilde{X}_3 \end{pmatrix} = \vec{H}_s R_1^\dagger \Phi_c(V_c(C)) + \vec{H}_s (I_{4\alpha} - R_1^\dagger R_1) y, \qquad y \in \mathbb{R}^{4\alpha},$$
and Equation (3) follows.    □
Theorem 6.
Suppose $A_i \in \mathbb{Q}^{m\times n}$, $B_i \in \mathbb{Q}^{n\times p}$ $(i = 1, \ldots, k)$, $C \in \mathbb{Q}^{m\times p}$. Then quaternion matrix Equation (1) has a solution $X \in \mathrm{S}^{n\times n}$ if and only if
$$(R_1 R_1^\dagger - I_{4mp})\, \Phi_c(V_c(C)) = 0, \qquad (4)$$
where $R_1$ is defined in Theorem 5. Moreover, if (4) holds, the centrosymmetric solution set of quaternion matrix Equation (1) can be represented as
$$M_S = \Big\{ X \in \mathrm{S}^{n\times n} \,\Big|\, \Phi_c(V_c(X)) = \vec{H}_s R_1^\dagger \Phi_c(V_c(C)) + \vec{H}_s (I_{4\alpha} - R_1^\dagger R_1) y \Big\},$$
where $y$ is an arbitrary vector of suitable dimension. Then, the minimal norm centrosymmetric solution $X_S$ satisfies
$$\Phi_c(V_c(X_S)) = \vec{H}_s R_1^\dagger \Phi_c(V_c(C)). \qquad (5)$$
Proof. 
Quaternion matrix Equation (1) has a solution $X \in \mathrm{S}^{n\times n}$ if and only if
$$\Big\| \sum_{i=1}^{k} A_i X B_i - C \Big\| = 0.$$
By Theorem 5 and the properties of the Moore–Penrose inverse, we obtain
$$\Big\| \sum_{i=1}^{k} A_i X B_i - C \Big\| = \Big\| R_1 \begin{pmatrix} \tilde{X}_0 \\ \tilde{X}_1 \\ \tilde{X}_2 \\ \tilde{X}_3 \end{pmatrix} - \Phi_c(V_c(C)) \Big\| = \Big\| R_1 R_1^\dagger R_1 \begin{pmatrix} \tilde{X}_0 \\ \tilde{X}_1 \\ \tilde{X}_2 \\ \tilde{X}_3 \end{pmatrix} - \Phi_c(V_c(C)) \Big\| = \Big\| R_1 R_1^\dagger \Phi_c(V_c(C)) - \Phi_c(V_c(C)) \Big\| = \Big\| (R_1 R_1^\dagger - I_{4mp})\, \Phi_c(V_c(C)) \Big\|.$$
Therefore,
$$\Big\| \sum_{i=1}^{k} A_i X B_i - C \Big\| = 0 \ \Longleftrightarrow\ \Big\| (R_1 R_1^\dagger - I_{4mp})\, \Phi_c(V_c(C)) \Big\| = 0 \ \Longleftrightarrow\ (R_1 R_1^\dagger - I_{4mp})\, \Phi_c(V_c(C)) = 0.$$
When quaternion matrix Equation (1) is compatible, its solution $X \in \mathrm{S}^{n\times n}$ satisfies
$$R_1 \begin{pmatrix} \tilde{X}_0 \\ \tilde{X}_1 \\ \tilde{X}_2 \\ \tilde{X}_3 \end{pmatrix} = \Phi_c(V_c(C)).$$
Moreover, by Lemma 2, the centrosymmetric solution $X$ satisfies
$$\begin{pmatrix} \tilde{X}_0 \\ \tilde{X}_1 \\ \tilde{X}_2 \\ \tilde{X}_3 \end{pmatrix} = R_1^\dagger \Phi_c(V_c(C)) + (I_{4\alpha} - R_1^\dagger R_1) y, \qquad y \in \mathbb{R}^{4\alpha}.$$
Then we have
$$\Phi_c(V_c(X)) = \vec{H}_s R_1^\dagger \Phi_c(V_c(C)) + \vec{H}_s (I_{4\alpha} - R_1^\dagger R_1) y, \qquad y \in \mathbb{R}^{4\alpha},$$
and the minimal norm centrosymmetric solution $X_S$ satisfies
$$\Phi_c(V_c(X_S)) = \vec{H}_s R_1^\dagger \Phi_c(V_c(C)).$$
   □
For Problem 2, we can also obtain the necessary and sufficient condition for the existence of anti-centrosymmetric solutions of quaternion matrix Equation (1) through vector representation of quaternion matrices, L -representation and GH -representation method. Similar to the analysis procedure of Problem 1, we obtain the following conclusions.
Theorem 7.
Suppose $A_i \in \mathbb{Q}^{m\times n}$, $B_i \in \mathbb{Q}^{n\times p}$ $(i = 1, \ldots, k)$, $C \in \mathbb{Q}^{m\times p}$. Then the set $M_A$ of Problem 2 can be represented as
$$M_A = \Big\{ X \in \mathrm{AS}^{n\times n} \,\Big|\, \Phi_c(V_c(X)) = \vec{H}_a R_2^\dagger \Phi_c(V_c(C)) + \vec{H}_a (I_{4\beta} - R_2^\dagger R_2) y \Big\},$$
where $y \in \mathbb{R}^{4\beta}$. Then, the minimal norm least squares anti-centrosymmetric solution $X_A$ of quaternion matrix Equation (1) satisfies
$$\Phi_c(V_c(X_A)) = \vec{H}_a R_2^\dagger \Phi_c(V_c(C)),$$
where
$$\vec{H}_a = \operatorname{diag}(H_a, H_a, H_a, H_a) \in \mathbb{R}^{4n^2 \times 4\beta}, \quad K_n = \operatorname{diag}(I_n, -I_n, -I_n, -I_n), \quad R_2 = \sum_{i=1}^{k} \Phi(I_p \otimes A_i)\, K_{np}\, \Phi(B_i^H \otimes I_n)\, K_{n^2}\, \vec{H}_a.$$
Theorem 8.
Suppose $A_i \in \mathbb{Q}^{m\times n}$, $B_i \in \mathbb{Q}^{n\times p}$ $(i = 1, \ldots, k)$, $C \in \mathbb{Q}^{m\times p}$. Then quaternion matrix Equation (1) has a solution $X \in \mathrm{AS}^{n\times n}$ if and only if
$$(R_2 R_2^\dagger - I_{4mp})\, \Phi_c(V_c(C)) = 0, \qquad (8)$$
where $R_2$ is defined in Theorem 7. Moreover, if (8) holds, the anti-centrosymmetric solution set of quaternion matrix Equation (1) can be represented as
$$M_A = \Big\{ X \in \mathrm{AS}^{n\times n} \,\Big|\, \Phi_c(V_c(X)) = \vec{H}_a R_2^\dagger \Phi_c(V_c(C)) + \vec{H}_a (I_{4\beta} - R_2^\dagger R_2) y,\ y \in \mathbb{R}^{4\beta} \Big\}.$$
Then, the minimal norm anti-centrosymmetric solution $X_A$ satisfies
$$\Phi_c(V_c(X_A)) = \vec{H}_a R_2^\dagger \Phi_c(V_c(C)). \qquad (9)$$

6. Algorithms and Numerical Examples

In this section, numerical experiments are used to verify the effectiveness of the proposed algorithms.
Example 7.
Let $m = n = p$ and let $A_i, B_i \in \mathbb{Q}^{n\times n}$ be generated randomly for $n = 5K$, $K = 1{:}11$. Randomly generate a centrosymmetric matrix $X_S$ or an anti-centrosymmetric matrix $X_A$, respectively. Then, for the left side of quaternion matrix Equation (1) with $X$ replaced by $X_S$ or $X_A$ and $k = 2$, compute $C = A_1 X_S B_1 + A_2 X_S B_2$ or $C = A_1 X_A B_1 + A_2 X_A B_2$. For quaternion matrix Equation (1) with the above $A_i$, $B_i$ and $C$, the computed solutions obtained by Algorithms 1 and 2 are denoted by $\breve{X}_S$ and $\breve{X}_A$, respectively. Denote $\varepsilon_1 = \log_{10} \| \Phi_c(X_S) - \Phi_c(\breve{X}_S) \|$ and $\varepsilon_2 = \log_{10} \| \Phi_c(X_A) - \Phi_c(\breve{X}_A) \|$. The errors $\varepsilon_t$ $(t = 1, 2)$ for the different dimensions are shown in Figure 1.
It can be seen from Figure 1 that the order of magnitude of the error between the exact solution and the numerical solution in Problems 1 and 2 increases as the dimension increases. However, for Problem 1 the order of magnitude of the error of the centrosymmetric solution is always less than $-11$, and for Problem 2 the order of magnitude of the error of the anti-centrosymmetric solution is always less than $-12$, which indicates that the error between the numerical solution and the exact solution is very small; that is, the algorithm in this paper is effective.
Algorithm 1 Calculate the minimal norm centrosymmetric solution of quaternion matrix Equation (1).
Input: Quaternion matrices $A_i \in \mathbb{Q}^{m\times n}$, $B_i \in \mathbb{Q}^{n\times p}$ $(i = 1, 2, \ldots, k)$, $C \in \mathbb{Q}^{m\times p}$;
Output: The minimal norm centrosymmetric solution $\breve{X}_S$ of quaternion matrix Equation (1) according to (5);
1: Compute $\Phi_c(V_c(C))$;
2: Input $H_s$, $K_{np}$, $K_{n^2}$, $\Phi(I_p \otimes A_i)$, $\Phi(B_i^H \otimes I_n)$;
3: Compute $\vec{H}_s$ and $R_1 = \sum_{i=1}^{k} \Phi(I_p \otimes A_i) K_{np} \Phi(B_i^H \otimes I_n) K_{n^2} \vec{H}_s$;
4: if (4) holds then
5:     Calculate the minimal norm solution of the quaternion matrix equation according to (5);
6: end if
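To make the steps of Algorithm 1 concrete, the following MATLAB sketch builds $R_1$ as in Theorem 5 for a small consistent system with $k = 2$ and $m = n = p$, recovers a centrosymmetric solution, and checks its residual. It is an illustration under the constructions reconstructed above, not the authors' implementation; helper names such as qmul, Phi2, PhicVc, Kr and Hsvec are introduced here only for this sketch.

% Quaternion matrices are stored as 1x4 cells {X0, X1, X2, X3} of real parts.
n = 4;  p = n;  k = 2;

qmul  = @(X, Y) { X{1}*Y{1}-X{2}*Y{2}-X{3}*Y{3}-X{4}*Y{4}, ...
                  X{1}*Y{2}+X{2}*Y{1}+X{3}*Y{4}-X{4}*Y{3}, ...
                  X{1}*Y{3}-X{2}*Y{4}+X{3}*Y{1}+X{4}*Y{2}, ...
                  X{1}*Y{4}+X{2}*Y{3}-X{3}*Y{2}+X{4}*Y{1} };
qH    = @(X) { X{1}.', -X{2}.', -X{3}.', -X{4}.' };            % conjugate transpose
kronI = @(P, X) { kron(P, X{1}), kron(P, X{2}), kron(P, X{3}), kron(P, X{4}) };
kronR = @(X, P) { kron(X{1}, P), kron(X{2}, P), kron(X{3}, P), kron(X{4}, P) };
Phi2  = @(X) [ X{1}, -X{2}, -X{3}, -X{4};  X{2}, X{1}, -X{4}, X{3};
               X{3}, X{4}, X{1}, -X{2};    X{4}, -X{3}, X{2}, X{1} ];
PhicVc = @(X) [reshape(X{1},[],1); reshape(X{2},[],1); ...
               reshape(X{3},[],1); reshape(X{4},[],1)];
Kr    = @(r) kron(diag([1 -1 -1 -1]), eye(r));

% H-representation matrix of n x n real centrosymmetric matrices.
idx = reshape(1:n^2, n, n);  mir = rot90(idx, 2);  Hs = [];
for t = 1:n^2
    if t <= mir(t), col = zeros(n^2, 1); col(t) = 1; col(mir(t)) = 1; Hs = [Hs, col]; end
end
Hsvec = kron(eye(4), Hs);                        % block-diagonal H_s

% Random data with a known centrosymmetric solution.
csym = @(M) (M + rot90(M, 2)) / 2;
X  = { csym(randn(n)), csym(randn(n)), csym(randn(n)), csym(randn(n)) };
for i = 1:k
    A{i} = { randn(n), randn(n), randn(n), randn(n) };
    B{i} = { randn(n), randn(n), randn(n), randn(n) };
end
C = { zeros(n), zeros(n), zeros(n), zeros(n) };
for i = 1:k
    T = qmul(qmul(A{i}, X), B{i});
    C = { C{1}+T{1}, C{2}+T{2}, C{3}+T{3}, C{4}+T{4} };
end

% Theorem 5: build R1 and solve for the independent elements.
R1 = 0;
for i = 1:k
    R1 = R1 + Phi2(kronI(eye(p), A{i})) * Kr(n*p) * ...
              Phi2(kronR(qH(B{i}), eye(n))) * Kr(n^2) * Hsvec;
end
xt = pinv(R1) * PhicVc(C);                       % minimal norm least squares x-tilde
v  = Hsvec * xt;                                 % stacked components of the solution
Xs = { reshape(v(1:n^2), n, n), reshape(v(n^2+1:2*n^2), n, n), ...
       reshape(v(2*n^2+1:3*n^2), n, n), reshape(v(3*n^2+1:end), n, n) };

% Residual of the recovered solution (should be ~ 0 for this consistent system).
R = { -C{1}, -C{2}, -C{3}, -C{4} };
for i = 1:k
    T = qmul(qmul(A{i}, Xs), B{i});
    R = { R{1}+T{1}, R{2}+T{2}, R{3}+T{3}, R{4}+T{4} };
end
residual = sqrt(norm(R{1},'fro')^2 + norm(R{2},'fro')^2 + ...
                norm(R{3},'fro')^2 + norm(R{4},'fro')^2)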
Algorithm 2 Calculate the minimal norm anti-centrosymmetric solution of quaternion matrix Equation (1).
Input: Quaternion matrices $A_i \in \mathbb{Q}^{m\times n}$, $B_i \in \mathbb{Q}^{n\times p}$ $(i = 1, 2, \ldots, k)$, $C \in \mathbb{Q}^{m\times p}$;
Output: The minimal norm anti-centrosymmetric solution $\breve{X}_A$ of quaternion matrix Equation (1) according to (9);
1: Compute $\Phi_c(V_c(C))$;
2: Input $H_a$, $K_{np}$, $K_{n^2}$, $\Phi(I_p \otimes A_i)$, $\Phi(B_i^H \otimes I_n)$;
3: Compute $\vec{H}_a$ and $R_2 = \sum_{i=1}^{k} \Phi(I_p \otimes A_i) K_{np} \Phi(B_i^H \otimes I_n) K_{n^2} \vec{H}_a$;
4: if (8) holds then
5:     Calculate the minimal norm solution of the quaternion matrix equation according to (9);
6: end if
Next, taking the centrosymmetric solution as an example, we compare the method of this paper for solving special solutions of the quaternion matrix equation with the methods in references [43,44].
The method in reference [43] first uses the real representation of quaternion matrices to process the quaternion matrix equation, transforming the quaternion matrix equation into a real matrix equation; the straightening operator is then used to transform the real matrix equation into a real vector equation.
Remark 3.
The symbols appearing in Algorithm 3 follow the notation of reference [43]; $J$ and $K$ are defined in reference [43]. $H_s$ is the H-representation matrix of centrosymmetric matrices defined in this paper (see also Theorem 4).
Algorithm 3 Calculate the minimal norm centrosymmetric solution of quaternion matrix Equation (1) by the method of reference [43].
Input: Quaternion matrices $A_i \in \mathbb{Q}^{m\times n}$, $B_i \in \mathbb{Q}^{n\times p}$ $(i = 1, 2, \ldots, k)$, $C \in \mathbb{Q}^{m\times p}$;
Output: The minimal norm centrosymmetric solution $\breve{X}_s$ of quaternion matrix Equation (1);
1: Compute $\mathrm{vec}(C^c)$;
2: Input $H_s$, $J$, $K$;
3: Compute $\vec{H}_s$ and $R_3 = \sum_{i=1}^{k} \big( (B_i^c)^T \otimes A_i \big) J K \vec{H}_s$;
4: Calculate the minimal norm solution of the quaternion matrix equation according to $\breve{X}_s = \vec{H}_s R_3^\dagger \mathrm{vec}(C^c)$.
The real vector representation method in reference [44] represents a quaternion as a $4 \times 1$ real vector and then establishes the relationships between the real vector representations of quaternion matrix operations through the semi-tensor product of matrices.
Remark 4.
The symbols appearing in Algorithm 4 follow the notation of reference [44], and
$$J_n = \begin{cases} \begin{pmatrix} I_{8k^2} \\ V_{2k^2} \otimes I_4 \end{pmatrix}, & \text{if } n \text{ is even}, \\[2mm] \begin{pmatrix} I_{4(2k^2+2k+1)} \\ V'_{2k^2+2k+1} \otimes I_4 \end{pmatrix}, & \text{if } n \text{ is odd}, \end{cases}$$
where $V_n$ is the $n \times n$ backward identity matrix
$$V_n = \begin{pmatrix} 0 & \cdots & 0 & 1 \\ 0 & \cdots & 1 & 0 \\ \vdots & & & \vdots \\ 1 & 0 & \cdots & 0 \end{pmatrix}_{n\times n}$$
and $V'_n$ is the $(n-1) \times n$ matrix
$$V'_n = \begin{pmatrix} 0 & \cdots & 1 & 0 \\ \vdots & & & \vdots \\ 1 & \cdots & 0 & 0 \end{pmatrix}_{(n-1)\times n}.$$
Algorithm 4 Calculate the minimal norm centrosymmetric solution of quaternion matrix Equation (1) by the method of reference [44].
Input: Quaternion matrices $A_i \in \mathbb{Q}^{m\times n}$, $B_i \in \mathbb{Q}^{n\times p}$ $(i = 1, 2, \ldots, k)$, $C \in \mathbb{Q}^{m\times p}$;
Output: The minimal norm centrosymmetric solution $\breve{X}_s$ of quaternion matrix Equation (1);
1: Compute $A_i^r$, $B_i^c$, $C^c$;
2: Compute $G$, $G^\dagger$, $J_n$;
3: Compute $R_4 = \sum_{i=1}^{k} G G^\dagger A_i^r W_{[4np,\,4n^2]} B_i^c J_n$;
4: Calculate the minimal norm solution of the quaternion matrix equation according to $\breve{X}_s = J_n R_4^\dagger C^c$.
Example 8.
Let $m = n = p$ and let $A_i, B_i \in \mathbb{Q}^{n\times n}$ be generated randomly for $n = 4K$, $K = 1{:}10$. Randomly generate a centrosymmetric matrix $X_S$. Then, for the left side of quaternion matrix Equation (1), let $k = 1$ and compute $C = A_1 X_S B_1$. For quaternion matrix Equation (1) with the above $A_i$, $B_i$ and $C$, the computed solutions are obtained by Algorithms 1, 3 and 4. The time consumed by the algorithms for the different dimensions is shown in Figure 2.
For the method in reference [44], because the matrix dimensions involved are too large, we only choose $K = 1{:}4$. For the form of the solution obtained by the algorithm in reference [43] to be consistent with the form of the solution obtained by the algorithm in this paper, a transformation by a large matrix is needed. The method of expressing a quaternion as a real vector in reference [44] makes the dimensions in the calculation of the quaternion matrix equation large, which is not conducive to computational efficiency. As can be seen from Figure 2, the algorithm in this paper takes less time than the algorithms in references [43,44].

7. Application in Color Digital Image Restoration

We know that a color digital image consists of three primary colors, red, green and blue, and these three channels can be identified with the three imaginary parts of a quaternion. That is, a color digital image can be represented by a pure imaginary quaternion matrix. One of the most basic applications concerning color digital images is color digital image restoration, and the restoration process amounts to computing the minimal norm least squares solution of a quaternion matrix equation. For an $n \times n$ pixel observed image $g = g_r i + g_g j + g_b k$ with known blurring matrix $K$, where $K$ is a real matrix, the color digital image restoration model is established as
$$g = K f + N.$$
In general, the noise $N$ is unknown. In this section, we work with the centrosymmetric color digital image restoration model: the centrosymmetric color image restoration problem is transformed into the least squares pure imaginary centrosymmetric solution problem of the quaternion matrix equation $K f = g$.
Example 9.
Given two ideal centrosymmetric color digital images (see Figure 3a and Figure 4a), let $f = (f_r, f_g, f_b)$ be the image matrix, which can be represented as $f = f_r i + f_g j + f_b k$. Using LEN = 15; THETA = 30; PSF = fspecial('motion', LEN, THETA), we disturb the channel $f_g$ and obtain the disturbed channel $g_g$. Obviously, $K = g_g f_g^{\dagger}$ is a singular matrix. Using the matrix $K$, we obtain the disturbed image $g = (g_r, g_g, g_b)$ (see Figure 3b and Figure 4b). The minimal norm least squares pure imaginary centrosymmetric solution $F$ can be obtained by Algorithm 5. Through the reshape command of MATLAB, we obtain the corresponding restored color digital image $F = (F_r, F_g, F_b)$ (see Figure 3c and Figure 4c).
Finally, we give the mean-square error of each channel which is defined as
$$MSE = \frac{1}{mn} \sum_{i=0}^{m-1} \sum_{j=0}^{n-1} \big[ I(i,j) - K(i,j) \big]^2.$$
The mean-square error of each channel is represented by ε r , ε g , ε b , respectively, and the results are shown in Table 1.
Algorithm 5 Calculate the minimal norm least squares pure imaginary centrosymmetric solution of the color digital image model $K f = g$.
Output: The minimal norm least squares pure imaginary centrosymmetric solution of the quaternion matrix equation $K f = g$;
1: Compute $\mathcal{K} = \operatorname{diag}(I_n \otimes K,\ I_n \otimes K,\ I_n \otimes K)$ and $\vec{H}_S = \operatorname{diag}(H_s, H_s, H_s)$;
2: Compute $\vec{g} = \big( V_c(g_r)^T,\ V_c(g_g)^T,\ V_c(g_b)^T \big)^T$;
3: Calculate the minimal norm least squares pure imaginary centrosymmetric solution of the quaternion matrix equation $K f = g$ according to $\vec{f} = \vec{H}_S (\mathcal{K} \vec{H}_S)^\dagger \vec{g}$.
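Because $\mathcal{K}$ and $\vec{H}_S$ in Algorithm 5 are block diagonal, the three color channels decouple, and the restoration step can be sketched per channel as follows. This is an illustrative MATLAB example on synthetic data; Hs, Kblur, csym and the other names are this sketch's own, not the authors' code.

% Per-channel restoration of a synthetic centrosymmetric "image".
n = 8;
idx = reshape(1:n^2, n, n);  mir = rot90(idx, 2);  Hs = [];
for t = 1:n^2
    if t <= mir(t), col = zeros(n^2,1); col(t) = 1; col(mir(t)) = 1; Hs = [Hs, col]; end
end

csym  = @(M) (M + rot90(M, 2)) / 2;
f     = cat(3, csym(rand(n)), csym(rand(n)), csym(rand(n)));    % ideal image channels
Kblur = randn(n, n-2) * randn(n-2, n);                          % singular blurring matrix
g     = cat(3, Kblur*f(:,:,1), Kblur*f(:,:,2), Kblur*f(:,:,3)); % observed channels

M = kron(eye(n), Kblur) * Hs;      % maps independent elements to vec(Kblur*channel)
F = zeros(n, n, 3);
for c = 1:3
    gc = g(:,:,c);
    F(:,:,c) = reshape(Hs * pinv(M) * gc(:), n, n);             % restored channel
end
restorationError = norm(F(:) - f(:))    % small when the channels are recoverable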

8. Conclusions

The new conclusions on the vector representation and L-representation of quaternion matrices give the semi-tensor product of quaternion matrices a new application in solving quaternion matrix equations. Starting from these new conclusions on the semi-tensor product of quaternion matrices, and combining the L-representation with the H-representation method, special solutions of the quaternion matrix equation $\sum_{i=1}^{k} A_i X B_i = C$ are obtained. Furthermore, numerical examples show that the method is effective. Through a time comparison, it is found that the algorithm in this paper is relatively efficient compared with the algorithms in references [43,44]. The application to centrosymmetric color digital image restoration is also considered.
  • Notes:
  • The images used are from the MATLAB image processing toolbox or USC-SIPI image database image library of the University of Southern California (http://sipi.usc.edu/database/, accessed on 1 June 2022).
  • All computations are performed on an Intel(R) core(TM) i9-10940U @3.30 GHz/64 GB computer using MATLAB R2019b software.

Author Contributions

Methodology, X.F., Y.L., and Z.L.; software, X.F. and Z.L.; writing—original draft preparation, Y.L. and J.Z.; writing—review and editing, Y.L., X.F., and J.Z.; supervision, Y.L.; project administration, Y.L. All authors have read and agreed to the published version of the manuscript.

Funding

Supported by the National Natural Science Foundation of China (62176112) and the Natural Science Foundation of Shandong Province (ZR2020MA053).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Cheng, D.Z. An Introduction to Semi-Tensor Product of Matrices and Its Applications; World Scientific: Singapore, 2012. [Google Scholar]
  2. Zhao, Y.; Kim, J.; Filippone, M. Aggregation algorithm towards large-scale Boolean Network analysis. IEEE Trans. Autom. Control 2013, 58, 1976–1985. [Google Scholar] [CrossRef] [Green Version]
  3. Li, H.T.; Wang, Y.Z. Output feedback stabilization control design for Boolean control networks. Automatica 2013, 49, 3641–3645. [Google Scholar] [CrossRef]
  4. Zhao, Y.; Li, Z.Q.; Cheng, D.Z. Optimal control of Logical Control Networks. IEEE Trans. Autom. Control 2011, 56, 1766–1776. [Google Scholar] [CrossRef]
  5. Cheng, D.Z.; Liu, T.; Zhang, K.Z.; Qi, H.S. On decomposed subspaces of Finite Games. IEEE Trans. Autom. Control 2016, 61, 3651–3656. [Google Scholar] [CrossRef]
  6. Meng, M.; Feng, J.E. A matrix approach to hypergraph stable set and coloring problems with its application to storing problem. J. Appl. Math. 2014, 2014, 783784. [Google Scholar] [CrossRef] [Green Version]
  7. Xu, M.R.; Wang, Y.Z.; Wei, A.R. Robust graph coloring based on the matrix semi-tensor product with application to examination timetabling. Control Theory Technol. 2014, 12, 187–197. [Google Scholar] [CrossRef]
  8. Yan, Y.Y.; Chen, Z.Q.; Liu, Z.X. Solving type-2 fuzzy relation equations via semi-tensor product of matrices. Control Theory Technol. 2014, 12, 173–186. [Google Scholar] [CrossRef]
  9. Hua, X.H.; Duan, P.Y.; Lv, H.L.; Zhang, Z.S.; Yang, X.W.; Zhang, C.J. Design of fuzzy controller for air-conditioning systems based-on semi-tensor product. In Proceedings of the 26th Chinese Control and Decision Conference, Changsha, China, 31 May–2 June 2014. [Google Scholar]
  10. Zhong, J.H.; Lin, D.D. A new linearization method for nonlinear feedback shift registers. J. Comput. Syst. Sci. 2015, 81, 783–796. [Google Scholar] [CrossRef]
  11. Zhong, J.H.; Lin, D.D. Stability of nonlinear feedback shift registers. Sci. China Inf. Sci. 2016, 59, 1–12. [Google Scholar] [CrossRef]
  12. Ding, W.X.; Li, Y.; Wang, D.; Wei, A.L. Constrainted least squares solution of Sylvester equation. Math. Model. Control 2021, 1, 112–120. [Google Scholar] [CrossRef]
  13. Ding, W.X.; Li, Y.; Wang, D. A real method for solving quaternion matrix equation X − AX̂B = C based on semi-tensor product of matrices. Adv. Appl. Clifford Algebras 2021, 31, 4–17. [Google Scholar] [CrossRef]
  14. Heise, R.; Macdonald, B.A. Quaternions and Motion Interpolation: A Tutorial; Springer: Tokyo, Japan, 1989; pp. 229–243. [Google Scholar]
  15. Zhang, Y.Z.; Li, Y.; Wei, M.S.; Zhao, H. An algorithm based on QSVD for quaternion equality constrained least squares problem. Numer. Algorithms 2021, 87, 1563–1576. [Google Scholar] [CrossRef]
  16. Jia, Z.G.; Ng, M.K.; Song, G.J. Lanczos method for large-scale quaternion singular value decomposition. Numer. Algorithms 2019, 82, 699–717. [Google Scholar] [CrossRef]
  17. Pletincks, D. Quaternion calculus as a basic tool in computer graphics. Vis. Comput. 1989, 5, 2–13. [Google Scholar] [CrossRef]
  18. Li, T.; Wang, Q.W.; Zhang, X.F. A modified conjugate residual method and nearest Kronecker product preconditioner for the generalized coupled Sylvester tensor equations. Mathematics 2022, 10, 1730. [Google Scholar] [CrossRef]
  19. Chen, B.J.; Sun, X.M.; Wang, D.C.; Zhao, X.P. Color face recognition using quaternion representation of color image. Acta Autom. Sin. 2012, 38, 1815–1823. [Google Scholar] [CrossRef]
  20. Pei, S.C.; Ding, M.J.J.; Chang, J.H. Efficient implementation of quaternion Fourier Transform, Convolution, and Correlation by 2-D Complex FFT. IEEE Trans. Signal Process. 2001, 49, 2783–2797. [Google Scholar]
  21. Ping, J.; Wu, H.T. A closed-form forward kinematics solution for the 6-6/sup p/Stewart platform. IEEE Trans. Robot. Autom. 2001, 17, 522–526. [Google Scholar] [CrossRef]
  22. Wang, Q.W.; He, Z.H.; Zhang, Y. Constrained two-sided coupled Sylvester-type quaternion matrix equations. Automatica 2019, 101, 207–213. [Google Scholar] [CrossRef]
  23. Song, G.J.; Wang, Q.W.; Yu, S.W. Cramer’s rule for a system of quaternion matrix equations with applications. Appl. Math. Comput. 2018, 336, 490–499. [Google Scholar] [CrossRef]
  24. Zhang, F.X.; Wei, M.S.; Li, Y.; Zhao, J.L. Special least squares solutions of the quaternion matrix equation AX = B with applications. Appl. Math. Comput. 2015, 270, 425–433. [Google Scholar]
  25. Zhang, F.X.; Wei, M.S.; Li, Y.; Zhao, J.L. Special least squares solutions of the quaternion matrix equation AXB + CXD = E. Comput. Math. Appl. 2016, 72, 1426–1435. [Google Scholar] [CrossRef]
  26. Zhang, F.X.; Wei, M.S.; Li, Y.; Zhao, J.L. An efficient real representation method for least squares problem of the quaternion constrained matrix equation AXB + CYD = E. Int. J. Comput. Math. 2021, 98, 1408–1419. [Google Scholar] [CrossRef]
  27. Zhang, F.X.; Wei, M.S.; Li, Y.; Zhao, J.L. An efficient method for least-squares problem of the quaternion matrix equation X − AX̂B = C. Linear Multilinear Algebra 2020, 1–13. [Google Scholar] [CrossRef]
  28. Yuan, S.F.; Wang, Q.W.; Zhang, X. Least-squares problem for the quaternion matrix equation AXB + CYD = E over different constrained matrices. Int. J. Comput. Math. 2013, 90, 565–576. [Google Scholar] [CrossRef]
  29. Yuan, S.F.; Wang, Q.W.; Duan, X.F. On solutions of the quaternion matrix equation AX = B and their applications in color image restoration. Appl. Math. Comput. 2013, 221, 10–20. [Google Scholar]
  30. Kyrchei, I. Explicit representation formulas for the minimum norm least squares solutions of some quaternion matrix equations. Linear Algebra Its Appl. 2013, 438, 136–152. [Google Scholar] [CrossRef] [Green Version]
  31. Kyrchei, I. Cramer’s rules for Sylvester quaternion matrix equation and its special cases. Adv. Appl. Clifford Algebras 2018, 28, 90. [Google Scholar] [CrossRef]
  32. Kyrchei, I. Cramer’s rules of η-(skew-) Hermitian solutions to the quaternion Sylvester-type matrix equations. Adv. Appl. Clifford Algebras 2019, 29, 56. [Google Scholar] [CrossRef]
  33. Ling, S.T.; Jia, Z.G.; Lu, X.; Yang, B. Matrix LSQR algorithm for structured solutions to quaternionic least squares problem. Comput. Math. Appl. 2019, 77, 830–845. [Google Scholar] [CrossRef]
  34. Ling, S.T.; Jia, Z.G.; Jiang, T.S. LSQR algorithm with structured preconditioner for the least squares problem in quaternionic quantum theory. Comput. Math. Appl. 2017, 73, 2208–2220. [Google Scholar] [CrossRef]
  35. Ling, S.T.; Wang, M.H.; Wei, M.S. Hermitian tridiagonal solution with the least norm to quaternionic least squares problem. Comput. Phys. Commun. 2010, 181, 481–488. [Google Scholar] [CrossRef]
  36. Wang, M.H.; Wei, M.S.; Feng, Y. An iterative algorithm for least squares problem in quaternionic quantum theory. Comput. Phys. Commun. 2008, 179, 203–207. [Google Scholar] [CrossRef]
  37. Liu, L.S.; Wang, Q.W.; Mehany, M.S. A Sylvester-Type matrix equation over the Hamilton quaternions with an application. Mathematics 2022, 10, 1758. [Google Scholar] [CrossRef]
  38. Liu, L.S.; Wang, Q.W.; Chen, J.F.; Xie, Y.Z. An exact solution to a quaternion matrix equation with an application. Symmetry 2022, 14, 375. [Google Scholar] [CrossRef]
  39. Mehany, M.S.; Wang, Q.W. Three symmetrical systems of coupled sylvester-like quaternion matrix equations. Symmetry 2022, 14, 550. [Google Scholar] [CrossRef]
  40. Wang, R.N.; Wang, Q.W.; Liu, L.S. Solving a system of Sylvester-like quaternion matrix equations. Symmetry 2022, 14, 1056. [Google Scholar] [CrossRef]
  41. Wang, Q.W. Bisymmetric and centrosymmetric solutions to systems of real quaternion matrix equations. Comput. Math. Appl. 2005, 49, 641–650. [Google Scholar] [CrossRef] [Green Version]
  42. Zhang, W.H.; Chen, B.S. H-Representation and applications to Generalized Lyapunov Equations and Linear Stochastic Systems. IEEE Trans. Autom. Control 2012, 57, 3009–3022. [Google Scholar] [CrossRef]
  43. Wei, A.L.; Li, Y.; Ding, W.X.; Zhao, J.L. Three special kinds of least squares solutions for the quaternion generalized Sylvester matrix equation. AIMS Math. 2022, 7, 5029–5048. [Google Scholar] [CrossRef]
  44. Wang, D.; Li, Y.; Ding, W.X. Several kinds of special least squares solutions to quaternion matrix equation AXB = C. J. Appl. Math. Comput. 2022, 68, 1881–1899. [Google Scholar] [CrossRef]
  45. Cheng, D.Z.; Qi, H.S.; Liu, Z.Q. From STP to game-based control. Sci. China Inf. Sci. 2018, 61, 010201. [Google Scholar] [CrossRef] [Green Version]
  46. Cheng, D.Z.; Qi, H.S.; Xue, A.C. A survey on semi-tensor product of matrices. J. Syst. Sci. Complex. 2007, 20, 304–322. [Google Scholar] [CrossRef]
  47. Jia, Z.G.; Wei, M.S.; Zhao, M.X.; Chen, Y. A new real structure-preserving quaternion QR algorithm. J. Comput. Appl. Math. 2018, 343, 26–48. [Google Scholar] [CrossRef] [Green Version]
  48. Golub, G.H.; Van Loan, C.F. Matrix Computations, 4th ed.; The Johns Hopkins University Press: Baltimore, MD, USA, 2013. [Google Scholar]
Figure 1. Errors in different dimensions.
Figure 2. Time comparison results.
Figure 3. Image 1: 100 × 100 pixel centrosymmetric color digital image.
Figure 4. Image 2: 110 × 110 pixel centrosymmetric color digital image.
Table 1. Mean-square error (MSE).
             ε_r               ε_g               ε_b
Figure 3     4.9586 × 10^(−18)  2.4722 × 10^(−19)  1.9076 × 10^(−18)
Figure 4     1.4071 × 10^(−20)  4.0846 × 10^(−22)  1.2557 × 10^(−21)

Cite as: Fan, X.; Li, Y.; Liu, Z.; Zhao, J. Solving Quaternion Linear System Based on Semi-Tensor Product of Quaternion Matrices. Symmetry 2022, 14, 1359. https://doi.org/10.3390/sym14071359
