Article

Some Results on Majorization of Matrices

by Divya K. Udayan *,† and Kanagasabapathi Somasundaram
Department of Mathematics, Amrita School of Engineering, Amrita Vishwavidyapeetham, Coimbatore 641112, India
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Axioms 2022, 11(4), 146; https://doi.org/10.3390/axioms11040146
Submission received: 1 February 2022 / Revised: 28 February 2022 / Accepted: 20 March 2022 / Published: 23 March 2022
(This article belongs to the Section Algebra and Number Theory)

Abstract
For two n × m real matrices X and Y, X is said to be majorized by Y, written X ≺ Y, if X = SY for some doubly stochastic matrix S of order n. Matrix majorization has several applications in statistics, wireless communications and other fields of science and engineering. Hwang and Park obtained necessary and sufficient conditions for X, Y to satisfy X ≺ Y for the cases where the rank of Y is n − 1 or n. In this paper, we obtain necessary and sufficient conditions for X, Y to satisfy X ≺ Y for the case where the rank of Y is n − 2 and, in general, for the case where the rank of Y is n − k, 1 ≤ k ≤ n − 1. We also obtain necessary and sufficient conditions for X to be majorized by Y under certain conditions on X and Y. The matrix X is said to be doubly stochastic majorized by Y if there is S ∈ Ω_m such that X = YS; we obtain necessary and sufficient conditions for this relation as well. Finally, we introduce a new concept of column stochastic majorization: X is said to be column stochastic majorized by Y, denoted X ≺_c Y, if there exists a column stochastic matrix S such that X = SY. We give characterizations of column stochastic majorization and doubly stochastic majorization for (0, 1) matrices.

1. Introduction

Let ℝ^n denote the set of all real column vectors with n coordinates. A real matrix A is called non-negative, denoted A ≥ 0, if all its entries are non-negative. Let Ω_n denote the set of all n × n doubly stochastic matrices, i.e., non-negative real matrices with each row sum and column sum equal to 1. For a vector (a_1, a_2, …, a_n)^T ∈ ℝ^n, let (a_[1], a_[2], …, a_[n])^T denote the vector obtained from (a_1, a_2, …, a_n)^T by rearranging the coordinates in nonincreasing order. For vectors x = (x_1, x_2, …, x_n)^T and y = (y_1, y_2, …, y_n)^T in ℝ^n, x is said to be majorized by y, denoted x ≺ y, if ∑_{i=1}^{k} x_[i] ≤ ∑_{i=1}^{k} y_[i] for all k with 1 ≤ k ≤ n − 1, and ∑_{i=1}^{n} x_i = ∑_{i=1}^{n} y_i.
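As a quick numerical companion to this definition, the partial-sum test can be sketched as follows (a Python/NumPy sketch; the function name is ours, not from the paper):

```python
import numpy as np

def is_majorized(x, y, tol=1e-9):
    """Vector majorization test x ≺ y: after sorting both vectors in
    nonincreasing order, every partial sum of x is at most the
    corresponding partial sum of y, and the total sums agree."""
    x = np.sort(np.asarray(x, dtype=float))[::-1]  # x_[1] >= x_[2] >= ...
    y = np.sort(np.asarray(y, dtype=float))[::-1]
    px, py = np.cumsum(x), np.cumsum(y)
    return bool(np.all(px[:-1] <= py[:-1] + tol) and abs(px[-1] - py[-1]) <= tol)
```

For instance, (2, 2, 2)^T is majorized by (3, 2, 1)^T, while (4, 1, 1)^T is not, because its largest coordinate already exceeds the largest coordinate of (3, 2, 1)^T.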
It is well known that for two vectors x, y ∈ ℝ^n, x ≺ y if and only if x = Sy for some S ∈ Ω_n. In [1], the polytope of doubly stochastic matrices D for which x = Dy was investigated. The notion of vector majorization extends naturally to matrix majorization as follows. For two matrices X and Y in ℝ^{n×m}, X is said to be majorized by Y, denoted X ≺ Y, if X = SY for some S ∈ Ω_n. In [2], a new notion of matrix majorization was introduced, called weak matrix majorization: X ≺_w Y if there exists a row stochastic matrix R such that X = RY; relations between this concept and strong majorization (usual majorization) were also considered. In [3], the polytope of row stochastic matrices R for which X = RY was investigated, and generalizations of the results for vector majorization were obtained. The notion of matrix majorization was referred to as multivariate majorization in [4], where it was proved that if X ≺ Y for X, Y ∈ ℝ^{n×m}, then XC ≺ YC for any real matrix C with m rows.
For X = [x_1, x_2, …, x_m], Y = [y_1, y_2, …, y_m] ∈ ℝ^{n×m}, if X ≺ Y, then certainly x_j ≺ y_j for all j = 1, 2, …, m. However, the converse does not hold in general, as is easily seen with the matrices X = [[1/2, 1], [1/2, 0]] and Y = [[1, 1], [0, 0]] (matrices are written here as lists of rows).
For x, y ∈ ℝ^n, let Ω(x ≺ y) denote the set of all S ∈ Ω_n satisfying x = Sy. The set Ω(x ≺ y) is known to contain various special types of doubly stochastic matrices [5,6]. While quite a lot of progress has been made in the theory of vector majorization, much less is known about multivariate majorization. In [7], it is proved that X is majorized by Y if Xv is majorized by Yv for every real n-vector v, under the assumption that [X, e][Y, e]^+ is non-negative, where e denotes the m-vector of ones and [Y, e]^+ denotes the Moore–Penrose generalized inverse of [Y, e].
In [8], a new matrix majorization order for classes (sets) of matrices was introduced, which generalizes several existing notions of matrix majorization. The matrix majorization order has several applications in mathematical statistics. Some applications of matrix majorization were discussed by Marshall, Olkin and Arnold [4] and by Tong [9]. Majorization of (0,1) matrices has important applications in classification theory and principal/dominant component analysis. Jorswieck and Boche [10] reviewed the basic definitions of majorization theory and matrix monotone functions, illustrated the concepts with many examples, and showed their applications in wireless communications. In [11], an algorithm based on majorization was developed for the problem of finding a low-rank correlation matrix nearest to a given correlation matrix. The problem of rank reduction of correlation matrices occurs when pricing a derivative dependent on a large number of assets, where the asset prices are modeled as correlated log-normal processes; such applications mainly concern interest rates. Matrix majorization also has applications in the comparison of eigenvalues [12].
Hwang and Park [13] obtained some necessary and sufficient conditions for X, Y ∈ ℝ^{n×m} to satisfy X ≺ Y for the case that the rank of Y is any one of 1, n − 1 or n, and for the case that n ≤ 3. In Section 2, we obtain some necessary and sufficient conditions for X, Y ∈ ℝ^{n×m} to satisfy X ≺ Y for the case that the rank of Y is n − 2 and, in general, for the case that the rank of Y is n − k, 1 ≤ k ≤ n − 1. We also obtain some necessary and sufficient conditions for X to be majorized by Y under certain conditions on X and Y, as well as some necessary and sufficient conditions for X to be doubly stochastic majorized by Y under certain conditions on X and Y.
Dahl, Guterman and Shteyner [14] obtained several results concerning matrix majorizations of (0,1) matrices and characterizations of certain matrix majorization orders. We extend these results for (0,1) matrices in Section 3, where we introduce a new concept of column stochastic majorization: a matrix X is said to be column stochastic majorized by Y, denoted X ≺_c Y, if there exists a column stochastic matrix S such that X = SY. We obtain some characterizations of column stochastic majorization and doubly stochastic majorization for (0,1) matrices.

2. Matrix Majorization

Let I_n denote the identity matrix of order n. For two real matrices A, B of the same size, let A ≥ B (resp. A ≤ B) denote that each entry of A is greater (resp. less) than or equal to the corresponding entry of B. Let e_k denote the all-ones vector in ℝ^k. For a matrix A, let σ(A), r_A and c_A denote the sum of all of the entries of A, the row sum vector of A and the column sum vector of A, respectively. A vector z ∈ ℝ^n is called a stochastic vector if z ≥ 0 and z^T e_n = 1.
Theorem 1.
Let X = [x_ij] ∈ ℝ^{n×(n−2)} and Y = [I_{n−2}, y_1, y_2]^T ∈ ℝ^{n×(n−2)}. Then X ≺ Y if and only if the following hold.
  (1) c_X = c_Y.
  (2) There exist stochastic vectors z_1, z_2 ∈ ℝ^n such that e_n − r_X = (1 − σ(y_1)) z_1 + (1 − σ(y_2)) z_2 and X ≥ z_1 y_1^T + z_2 y_2^T.
Proof. 
Assume X ≺ Y. Then there exists S ∈ Ω_n satisfying X = SY, and it is easy to see that c_X = c_Y. Write S = [A, z_1, z_2] with z_1, z_2 ∈ ℝ^n. Then X = SY = A + z_1 y_1^T + z_2 y_2^T. Since A ≥ 0, the stochastic vectors z_1 and z_2 satisfy X ≥ z_1 y_1^T + z_2 y_2^T. Moreover, S e_n = A e_{n−2} + z_1 + z_2 = (X − z_1 y_1^T − z_2 y_2^T) e_{n−2} + z_1 + z_2 = r_X + (1 − σ(y_1)) z_1 + (1 − σ(y_2)) z_2. Since S e_n = e_n, we obtain e_n − r_X = (1 − σ(y_1)) z_1 + (1 − σ(y_2)) z_2. Conversely, suppose (1) and (2) hold. For the vectors z_1, z_2 in (2), let S = [A, z_1, z_2], where A = X − z_1 y_1^T − z_2 y_2^T. Then SY = [A, z_1, z_2][I_{n−2}, y_1, y_2]^T = A + z_1 y_1^T + z_2 y_2^T = X. It remains to show that S ∈ Ω_n. We see from (2) that S ≥ 0 and S e_n = e_n. Additionally, e_n^T S = [e_n^T X − e_n^T z_1 y_1^T − e_n^T z_2 y_2^T, e_n^T z_1, e_n^T z_2] = [c_X − y_1^T − y_2^T, 1, 1] = [e_{n−2}^T, 1, 1] = e_n^T, since c_X = c_Y and c_Y = e_{n−2}^T + y_1^T + y_2^T. This implies S ∈ Ω_n. □
The above theorem can be extended to any Y ∈ ℝ^{n×(n−k)} of rank n − k.
Theorem 2.
Let X = [x_ij] ∈ ℝ^{n×(n−k)} and Y = [I_{n−k}, y_1, y_2, …, y_k]^T ∈ ℝ^{n×(n−k)}, where 1 ≤ k ≤ n − 1. Then X ≺ Y if and only if the following hold.
  (1) c_X = c_Y.
  (2) There exist stochastic vectors z_1, z_2, …, z_k ∈ ℝ^n such that e_n − r_X = ∑_{i=1}^{k} (1 − σ(y_i)) z_i and X ≥ z_1 y_1^T + z_2 y_2^T + ⋯ + z_k y_k^T.
Proof. 
The proof is similar to that of Theorem 1, extended to k vectors. □
In the next theorem, we give a necessary condition for X ≺ Y for any two matrices X, Y ∈ ℝ^{n×m}.
Theorem 3.
Let X, Y ∈ ℝ^{n×m}. If X ≺ Y, then there exist δ_1, δ_2, …, δ_{n−1} ∈ ℝ such that |δ_i| ≤ 1 for i = 1, 2, …, n − 1, δ_1 + δ_2 + ⋯ + δ_{n−1} ≥ n − 3 and, for j = 1, 2, …, m, x_{1j} + x_{2j} + ⋯ + x_{n−1,j} − x_{nj} = δ_1 y_{1j} + δ_2 y_{2j} + ⋯ + δ_{n−1} y_{n−1,j} + (n − 2 − δ_1 − δ_2 − ⋯ − δ_{n−1}) y_{nj}.
Proof. 
Suppose X ≺ Y, so that X = SY for some S = (a_ij) ∈ Ω_n. Multiplying on the left by the 1 × n row vector [1, 1, …, 1, −1], the j-th entry of [1, 1, …, 1, −1] X = [1, 1, …, 1, −1] SY is
x_{1j} + x_{2j} + ⋯ + x_{n−1,j} − x_{nj} = ∑_{k=1}^{n} (a_{1k} + a_{2k} + ⋯ + a_{n−1,k} − a_{nk}) y_{kj} = ∑_{k=1}^{n} (2(a_{1k} + a_{2k} + ⋯ + a_{n−1,k}) − 1) y_{kj},
since each column of S sums to 1, so that a_{nk} = 1 − (a_{1k} + ⋯ + a_{n−1,k}). Let δ_k = 2(a_{1k} + a_{2k} + ⋯ + a_{n−1,k}) − 1 for k = 1, 2, …, n. The first n − 1 rows of S have total entry sum n − 1, so ∑_{k=1}^{n} (1 + δ_k)/2 = n − 1, i.e., δ_n = n − 2 − (δ_1 + δ_2 + ⋯ + δ_{n−1}). This gives
x_{1j} + x_{2j} + ⋯ + x_{n−1,j} − x_{nj} = δ_1 y_{1j} + δ_2 y_{2j} + ⋯ + δ_{n−1} y_{n−1,j} + (n − 2 − δ_1 − δ_2 − ⋯ − δ_{n−1}) y_{nj}.
Since 0 ≤ a_{1k} + a_{2k} + ⋯ + a_{n−1,k} ≤ 1, we have −1 ≤ δ_k ≤ 1, i.e., |δ_k| ≤ 1 for each k. Finally, a_{nn} ≥ 0 gives δ_n = 1 − 2a_{nn} ≤ 1, that is, n − 2 − (δ_1 + ⋯ + δ_{n−1}) ≤ 1, so δ_1 + δ_2 + ⋯ + δ_{n−1} ≥ n − 3. □
Example 1.
Let Y = [[1, 3, 1], [1, 1, 1], [2, 2, 3]], S = [[1, 0, 0], [0, 0.5, 0.5], [0, 0.5, 0.5]] and X = [[1, 3, 1], [1.5, 1.5, 2], [1.5, 1.5, 2]]. Then X = SY, and δ_1 = 1 and δ_2 = 0 satisfy the required conditions of the theorem, so this example validates Theorem 3.
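The data of Example 1 can be checked numerically. The snippet below (a NumPy sketch with variable names of our choosing) verifies that S is doubly stochastic, that X = SY, and that the δ-identity of Theorem 3 holds with δ_1 = 1, δ_2 = 0:

```python
import numpy as np

Y = np.array([[1, 3, 1], [1, 1, 1], [2, 2, 3]], dtype=float)
S = np.array([[1, 0, 0], [0, 0.5, 0.5], [0, 0.5, 0.5]])
X = np.array([[1, 3, 1], [1.5, 1.5, 2], [1.5, 1.5, 2]])

# S is doubly stochastic and realizes the majorization X = S Y.
assert np.all(S >= 0)
assert np.allclose(S.sum(axis=0), 1) and np.allclose(S.sum(axis=1), 1)
assert np.allclose(S @ Y, X)

# Theorem 3: delta_k = 2 * (sum of the first n-1 entries of column k of S) - 1.
n = 3
delta = 2 * S[:n-1, :n-1].sum(axis=0) - 1   # gives delta = (1, 0)
lhs = X[:n-1].sum(axis=0) - X[n-1]          # x_{1j} + x_{2j} - x_{3j}
rhs = delta @ Y[:n-1] + (n - 2 - delta.sum()) * Y[n-1]
assert np.allclose(lhs, rhs)
assert np.all(np.abs(delta) <= 1) and delta.sum() >= n - 3
```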
The converse of Theorem 3 is not true.
Example 2.
Let X = [[1.5, 1, 1], [0, 1, 2], [0, 2, 3]] and Y = [[1, 3, 1], [1, 1, 1], [2, 2, 3]].
Take δ_1 = 0.5 and δ_2 = 0. Then X, Y and δ_1, δ_2 satisfy the conditions of the above theorem, but there exists no doubly stochastic matrix S such that X = SY. Indeed, assume that such a matrix exists and write S = [[s_11, s_12, 1 − s_11 − s_12], [s_21, s_22, 1 − s_21 − s_22], [1 − s_11 − s_21, 1 − s_12 − s_22, s_11 + s_12 + s_21 + s_22 − 1]]. Multiplying the first row of S with the first column of Y gives x_11 = 1.5 = s_11 + s_12 + 2(1 − s_11 − s_12), which implies s_11 + s_12 = 0.5. Similarly, for x_31 we have (1 − s_11 − s_21) + (1 − s_12 − s_22) + 2(s_11 + s_12 + s_21 + s_22 − 1) = 0, which implies s_11 + s_12 + s_21 + s_22 = 0; since the entries of S are non-negative, this forces s_11 = s_12 = s_21 = s_22 = 0, contradicting s_11 + s_12 = 0.5. Therefore, there exists no such matrix S with X = SY.
In the next theorem, we give sufficient conditions for X ≺ Y for any two matrices X, Y ∈ ℝ^{n×m}.
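More generally, whether a doubly stochastic factor exists at all can be decided numerically as a linear feasibility problem in the n² entries of S. The sketch below assumes SciPy is available; the function name and setup are ours, not from the paper:

```python
import numpy as np
from scipy.optimize import linprog

def exists_ds_factor(X, Y):
    """Return True iff some doubly stochastic S of order n solves X = S Y.
    Feasibility LP: S >= 0, every row and column sum of S equals 1,
    and the linear equations (S Y)[i, j] = X[i, j] hold."""
    X, Y = np.asarray(X, dtype=float), np.asarray(Y, dtype=float)
    n, m = X.shape
    A_eq, b_eq = [], []
    for i in range(n):                  # row sums of S equal 1
        a = np.zeros((n, n)); a[i, :] = 1
        A_eq.append(a.ravel()); b_eq.append(1.0)
    for j in range(n):                  # column sums of S equal 1
        a = np.zeros((n, n)); a[:, j] = 1
        A_eq.append(a.ravel()); b_eq.append(1.0)
    for i in range(n):                  # (S Y)[i, j] = X[i, j]
        for j in range(m):
            a = np.zeros((n, n)); a[i, :] = Y[:, j]
            A_eq.append(a.ravel()); b_eq.append(X[i, j])
    res = linprog(np.zeros(n * n), A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=[(0, 1)] * (n * n), method="highs")
    return res.status == 0  # status 0 means a feasible S was found
```

Applied to the matrices above (with Y = [[1, 3, 1], [1, 1, 1], [2, 2, 3]]), it finds a feasible factor for the X of Example 1 and reports infeasibility for the X of Example 2.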
Theorem 4.
Let X, Y ∈ ℝ^{n×m}. A sufficient condition for X to be majorized by Y is that c_X = c_Y, x_{1j} = x_{2j} = ⋯ = x_{n−1,j} for j = 1, 2, …, m, and there exist δ_1, δ_2, …, δ_{n−1} ∈ ℝ with |δ_i| ≤ 1 for i = 1, 2, …, n − 1 and δ_1 + δ_2 + ⋯ + δ_{n−1} ≥ n − 3 such that, for j = 1, 2, …, m, x_{1j} + x_{2j} + ⋯ + x_{n−1,j} − x_{nj} = δ_1 y_{1j} + δ_2 y_{2j} + ⋯ + δ_{n−1} y_{n−1,j} + (n − 2 − δ_1 − δ_2 − ⋯ − δ_{n−1}) y_{nj}.
Proof. 
Assume that c_X = c_Y and that δ_1, δ_2, …, δ_{n−1} ∈ ℝ satisfy the conditions of the theorem. Let S be the matrix whose first n − 1 rows are all equal to
[(1 + δ_1)/(2(n − 1)), (1 + δ_2)/(2(n − 1)), …, (1 + δ_{n−1})/(2(n − 1)), 1/2 − (δ_1 + δ_2 + ⋯ + δ_{n−1})/(2(n − 1))]
and whose last row is
[(1 − δ_1)/2, (1 − δ_2)/2, …, (1 − δ_{n−1})/2, 1 − (n − 1)/2 + (δ_1 + δ_2 + ⋯ + δ_{n−1})/2].
Then S is doubly stochastic: its entries are non-negative because |δ_i| ≤ 1 and δ_1 + ⋯ + δ_{n−1} ≥ n − 3, and all its row and column sums equal 1. Multiplying the first row of S with the j-th column of Y gives
∑_{k=1}^{n−1} ((1 + δ_k)/(2(n − 1))) y_{kj} + (1/2 − (δ_1 + ⋯ + δ_{n−1})/(2(n − 1))) y_{nj}
= (1/(2(n − 1))) [∑_{k=1}^{n} y_{kj} + ∑_{k=1}^{n−1} δ_k y_{kj} + (n − 2 − δ_1 − ⋯ − δ_{n−1}) y_{nj}]
= (1/(2(n − 1))) [∑_{k=1}^{n} x_{kj} + (x_{1j} + x_{2j} + ⋯ + x_{n−1,j} − x_{nj})]
= (1/(2(n − 1))) · 2(x_{1j} + x_{2j} + ⋯ + x_{n−1,j}) = x_{1j},
using c_X = c_Y, the hypothesis identity and x_{1j} = x_{2j} = ⋯ = x_{n−1,j}. Hence the first n − 1 rows of SY agree with those of X; since the column sums of SY equal c_Y = c_X, the last row agrees as well. Therefore X = SY, and X is majorized by Y. □
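The doubly stochastic matrix used in this proof can be written down explicitly. A small sketch (our function name; NumPy assumed) that builds S from δ_1, …, δ_{n−1} and checks the required properties:

```python
import numpy as np

def theorem4_matrix(deltas, n):
    """Build the matrix S from the proof of Theorem 4: the first n-1
    rows are identical, with entries (1 + delta_k) / (2(n-1)) and a last
    entry making the row sum 1; the last row has entries (1 - delta_k)/2
    and last entry 1 - (n-1)/2 + (sum of deltas)/2."""
    d = np.asarray(deltas, dtype=float)
    assert d.size == n - 1 and np.all(np.abs(d) <= 1) and d.sum() >= n - 3
    top = np.append((1 + d) / (2 * (n - 1)), 0.5 - d.sum() / (2 * (n - 1)))
    bottom = np.append((1 - d) / 2, 1 - (n - 1) / 2 + d.sum() / 2)
    return np.vstack([np.tile(top, (n - 1, 1)), bottom])

S = theorem4_matrix([0.5, 0], 3)   # the matrix S used in Example 3 below
assert np.all(S >= 0)
assert np.allclose(S.sum(axis=0), 1) and np.allclose(S.sum(axis=1), 1)
```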
Example 3.
Let X = [[1, 0.5, 1], [1, 0.5, 1], [2, 3, 2]] and Y = [[4, −2, 1], [4, 8, 4], [−4, −2, −1]] ∈ ℝ^{3×3}.
Then X and Y satisfy the conditions of Theorem 4, and so X is majorized by Y. Take δ_1 = 0.5 and δ_2 = 0. Then S = [[0.375, 0.25, 0.375], [0.375, 0.25, 0.375], [0.25, 0.5, 0.25]] ∈ Ω_3 and X = SY, so X is majorized by Y.
A matrix X is said to be doubly stochastic majorized by Y, denoted X ≺_ds Y, when there is S ∈ Ω_m such that X = YS. In the next theorem, we prove a necessary condition for X to be doubly stochastic majorized by Y.
Theorem 5.
Let X, Y ∈ ℝ^{n×m}. If X ≺_ds Y, then there exist δ_1, δ_2, …, δ_{m−1} ∈ ℝ such that |δ_i| ≤ 1 for i = 1, 2, …, m − 1, δ_1 + δ_2 + ⋯ + δ_{m−1} ≥ m − 3 and, for i = 1, 2, …, n, x_{i1} + x_{i2} + ⋯ + x_{i,m−1} − x_{im} = δ_1 y_{i1} + δ_2 y_{i2} + ⋯ + δ_{m−1} y_{i,m−1} + (m − 2 − δ_1 − δ_2 − ⋯ − δ_{m−1}) y_{im}.
Proof. 
Suppose X ≺_ds Y, so that X = YS for some S = (a_ij) ∈ Ω_m. Multiplying on the right by the column vector [1, 1, …, 1, −1]^T, the i-th entry of X [1, 1, …, 1, −1]^T = YS [1, 1, …, 1, −1]^T is
x_{i1} + x_{i2} + ⋯ + x_{i,m−1} − x_{im} = ∑_{k=1}^{m} y_{ik} (a_{k1} + a_{k2} + ⋯ + a_{k,m−1} − a_{km}) = ∑_{k=1}^{m} y_{ik} (2(a_{k1} + a_{k2} + ⋯ + a_{k,m−1}) − 1),
since each row of S sums to 1, so that a_{km} = 1 − (a_{k1} + ⋯ + a_{k,m−1}). Let δ_k = 2(a_{k1} + a_{k2} + ⋯ + a_{k,m−1}) − 1 for k = 1, 2, …, m. The first m − 1 columns of S have total entry sum m − 1, so ∑_{k=1}^{m} (1 + δ_k)/2 = m − 1, i.e., δ_m = m − 2 − (δ_1 + δ_2 + ⋯ + δ_{m−1}). This gives
x_{i1} + x_{i2} + ⋯ + x_{i,m−1} − x_{im} = δ_1 y_{i1} + δ_2 y_{i2} + ⋯ + δ_{m−1} y_{i,m−1} + (m − 2 − δ_1 − δ_2 − ⋯ − δ_{m−1}) y_{im}.
Since 0 ≤ a_{k1} + a_{k2} + ⋯ + a_{k,m−1} ≤ 1, we have |δ_k| ≤ 1 for each k. Finally, a_{mm} ≥ 0 gives δ_m = 1 − 2a_{mm} ≤ 1, so δ_1 + δ_2 + ⋯ + δ_{m−1} ≥ m − 3. □
Example 4.
Let X = [[1, 1.5, 1.5], [3, 1.5, 1.5], [1, 2, 2]], Y = [[1, 1, 2], [3, 1, 2], [1, 1, 3]] and S = [[1, 0, 0], [0, 0.5, 0.5], [0, 0.5, 0.5]]. Then X = YS, and δ_1 = 1 and δ_2 = 0 satisfy the required conditions of the theorem, so this example validates Theorem 5.
The converse of Theorem 5 is not true.
Example 5.
Let X = [[1.5, 1, 1], [0, 1, 2], [0, 2, 3]] and Y = [[1, 3, 1], [1, 1, 1], [2, 2, 3]]. Take δ_1 = 0.5 and δ_2 = 0. Then X, Y and δ_1, δ_2 satisfy the conditions of the above theorem, but there exists no doubly stochastic matrix S such that X = YS. Indeed, assume that such a matrix exists and write S = [[s_11, s_12, 1 − s_11 − s_12], [s_21, s_22, 1 − s_21 − s_22], [1 − s_11 − s_21, 1 − s_12 − s_22, s_11 + s_12 + s_21 + s_22 − 1]]. Multiplying the first row of Y with the first column of S gives x_11 = 1.5 = 2 s_21 + 1, which implies s_21 = 0.25. Similarly, for x_12 we have x_12 = 1 = 2 s_22 + 1, which implies s_22 = 0. Similarly, for x_13 we have x_13 = 1 = 3 − 2 s_21 − 2 s_22, which implies s_21 + s_22 = 1, a contradiction since s_21 = 0.25 and s_22 = 0. Therefore, there exists no such matrix S with X = YS.
Theorem 6.
Let X, Y ∈ ℝ^{n×m}. A sufficient condition for X to be doubly stochastic majorized by Y is that r_X = r_Y, x_{i1} = x_{i2} = ⋯ = x_{i,m−1} for i = 1, 2, …, n, and there exist δ_1, δ_2, …, δ_{m−1} ∈ ℝ with |δ_i| ≤ 1 for i = 1, 2, …, m − 1 and δ_1 + δ_2 + ⋯ + δ_{m−1} ≥ m − 3 such that, for i = 1, 2, …, n, x_{i1} + x_{i2} + ⋯ + x_{i,m−1} − x_{im} = δ_1 y_{i1} + δ_2 y_{i2} + ⋯ + δ_{m−1} y_{i,m−1} + (m − 2 − δ_1 − δ_2 − ⋯ − δ_{m−1}) y_{im}.
Proof. 
The proof is similar to the proof of Theorem 4. □
Example 6.
Let X = [[1, 1, 2], [0.5, 0.5, 3], [1, 1, 2]] and Y = [[4, 4, −4], [−2, 8, −2], [1, 4, −1]] ∈ ℝ^{3×3}. Then X and Y satisfy the conditions of Theorem 6, and so X is doubly stochastic majorized by Y.
Take δ_1 = 0.5 and δ_2 = 0. Then S = [[0.375, 0.375, 0.25], [0.25, 0.25, 0.5], [0.375, 0.375, 0.25]] ∈ Ω_3 and YS = X, so X is doubly stochastic majorized by Y.

3. Majorization for (0,1) Matrices

There are two main motivations for the study of matrix majorization for (0,1) matrices. First, it is of interest to see whether the restriction to the subclass of (0,1) matrices leads to simpler characterizations of the majorization order in question. Secondly, (0,1) matrices are essential for representing combinatorial objects, so one may ask for the combinatorial meaning of a given matrix majorization order. In [14], weak, directional and strong majorizations of (0,1) matrices were characterized, and matrix majorization on (0,1) matrices was investigated.
Definition 1.
A matrix X is said to be column stochastic majorized by Y, denoted X ≺_c Y, if there exists a column stochastic matrix S such that X = SY.
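For concreteness, the defining property of the factor S can be tested as follows (a NumPy sketch with a name of our choosing):

```python
import numpy as np

def is_column_stochastic(S, tol=1e-9):
    """S is column stochastic when it is entrywise non-negative and
    each of its columns sums to 1 (no condition on the row sums)."""
    S = np.asarray(S, dtype=float)
    return bool(np.all(S >= -tol) and np.allclose(S.sum(axis=0), 1.0))
```

For instance, [[1, 0.5], [0, 0.5]] is column stochastic but not doubly stochastic, since its row sums are 1.5 and 0.5.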
Theorems 7 and 8 give characterizations of ≺_ds and ≺_c for (0,1) matrices.
Theorem 7.
Let X, Y ∈ M_{m,n}(0,1) and X ≺_ds Y. Then for every i = 1, 2, …, n, the number of 1s in the i-th column of X is equal to the number of 1s in the i-th column of Y.
Proof. 
By assumption, X = SY for some doubly stochastic S. Then e^T X = e^T S Y = e^T Y, where e = [1, 1, …, 1]^T denotes the all-ones vector. Since both matrices are (0,1), the i-th entry of e^T X is the number of 1s in the i-th column of X, and the same holds for Y. Hence the result follows. □
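The necessary condition of Theorem 7 is easy to test: for (0,1) matrices, the per-column counts of 1s are exactly the column sums. A NumPy sketch (our naming):

```python
import numpy as np

def equal_column_one_counts(X, Y):
    """Necessary condition from Theorem 7: for every i, the i-th column
    of X and the i-th column of Y contain the same number of 1s, i.e.
    the column sum vectors of the two (0,1) matrices coincide."""
    X, Y = np.asarray(X), np.asarray(Y)
    return bool(np.array_equal(X.sum(axis=0), Y.sum(axis=0)))
```

Note that the condition is only necessary, not sufficient, for doubly stochastic majorization.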
Theorem 8.
Let X, Y ∈ M_{m,n}(0,1). Assume X ≺_c Y but X ⊀_ds Y. Then
(i) 
There exists a column stochastic matrix S satisfying X = SY such that S contains a zero row, and each row sum r_i of S satisfies either r_i = 0 or r_i ≥ 1.
(ii) 
If Y does not contain a zero row, then for any column stochastic matrix S satisfying X = SY, S contains a zero row and each row sum r_i of S satisfies either r_i = 0 or r_i ≥ 1.
(iii) 
X contains a zero row.
Proof. 
(i) Suppose X = SY, where S is column stochastic of order m; then the sum of all entries of S is m. Since by assumption X ⊀_ds Y, S is not doubly stochastic, so not all of its row sums equal 1; as the row sums total m, there is an i with r_i < 1 and a k with r_k > 1. If r_i < 1, then the row X^(i) is zero: each entry of X^(i) is at most r_i < 1 and is 0 or 1, hence it is zero. We now modify S to construct a column stochastic S′ with X = S′Y such that S′^(i) is zero. Fix k with r_k > 1 and suppose s_ip ≠ 0. Since 0 = X^(i) = ∑_{j=1}^{m} s_ij Y^(j) ≥ s_ip Y^(p), we obtain Y^(p) = 0. Consequently, x_fg = ∑_{h=1}^{m} s_fh y_hg = ∑_{h≠p} s_fh y_hg for all f, g, so the entries in column p of S do not affect the product SY. Let S′ be obtained from S by changing s_kp to s_kp + s_ip and s_ip to 0; then S′ is still column stochastic and X = S′Y. Doing the same for the remaining nonzero entries of S^(i), we obtain S′ such that X = S′Y, S′^(i) is a zero row, r_k > 1 and S′^(l) = S^(l) for l ≠ i, k. We repeat this procedure for every q with 0 < r_q < 1. After finitely many such substitutions, we obtain a column stochastic S with X = SY such that every row sum r_q is either 0 or at least 1; this S contains a zero row, as required. (ii) Suppose Y does not contain a zero row, and let S be any column stochastic matrix with X = SY. As in (i), there is an i with r_i < 1, and X^(i) is zero. Since X^(i) = ∑_{j=1}^{m} s_ij Y^(j), all summands are non-negative and no Y^(j) is zero, it follows that s_ij = 0 for all j. Hence every row sum r_i < 1 equals 0, and S contains a zero row. (iii) By (i), there is a column stochastic S with X = SY whose i-th row is zero; then X^(i) = S^(i) Y is zero, so X contains a zero row. □
In [14], it was proved that if A ≺_w B then R(A) ⊆ R(B), but matrix majorization cannot be described in terms of row/column inclusion. The following examples show that strong majorization also cannot be described in terms of row or column inclusion. The first example shows that column inclusion does not follow from X ≺_s Y, and the second one shows that the converse implication does not hold either.
Example 7.
Let X = [[0, 0], [0, 1], [1, 0]], Y = [[1, 0], [0, 1], [0, 0]] and S = [[0, 0, 1], [0, 1, 0], [1, 0, 0]].
Then X = SY, so X ≺_s Y. However, the first column of X, (0, 0, 1)^T, does not belong to C(Y), the set of columns of Y; likewise, the first column of Y, (1, 0, 0)^T, does not belong to C(X).
Example 8.
Let X = [[1, 1, 1], [1, 1, 1], [0, 0, 1], [0, 0, 1]] and Y = [[1, 1, 1], [0, 1, 1], [1, 0, 1], [0, 0, 1]]. Then e^T X = e^T Y and R(X) ⊆ R(Y), but it is easy to verify that X ⊀_s Y. Suppose that X = SY for some doubly stochastic S, and write the first two rows of S as (a, d, g, j) and (b, e, h, k). The corresponding rows of SY are (a + g, a + d, a + d + g + j) and (b + h, b + e, b + h + e + k), and both must equal (1, 1, 1). From a + g = a + d = a + d + g + j = 1 it follows that d = g = j = 0 and a = 1. For similar reasons, b = 1. But then the first column of S sums to at least 2, so S is not doubly stochastic, a contradiction.

4. Conclusions

In this paper, we proved some necessary and sufficient conditions for X, Y ∈ ℝ^{n×m} to satisfy X ≺ Y when the rank of Y is n − k, 1 ≤ k ≤ n − 1. We obtained some necessary and sufficient conditions for X to be majorized by Y under certain conditions on X and Y, and likewise for X to be doubly stochastic majorized by Y. We also obtained some characterizations of majorization for (0,1) matrices. Finding necessary and sufficient conditions for general matrix majorization is difficult and remains an open problem. Necessary and sufficient conditions for doubly stochastic majorization and strong majorization are a subject for future study.

Author Contributions

Both authors have contributed equally to this paper. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Brualdi, R.A. The doubly stochastic matrices of a vector majorization. Linear Algebra Appl. 1984, 61, 141–154.
  2. Martínez Pería, F.D.; Massey, P.G.; Silvestre, L.E. Weak matrix majorization. Linear Algebra Appl. 2005, 403, 343–368.
  3. Dahl, G. Majorization polytopes. Linear Algebra Appl. 1999, 297, 157–175.
  4. Marshall, A.W.; Olkin, I. Inequalities: Theory of Majorization and Its Applications; Academic Press: New York, NY, USA, 1979.
  5. Brualdi, R.A.; Hwang, S.G. Vector majorization via Hessenberg matrices. J. Lond. Math. Soc. 1996, 53, 28–38.
  6. Chao, K.M.; Wong, C.S. Applications of M-matrices to majorization. Linear Algebra Appl. 1992, 169, 31–40.
  7. Hwang, S.G.; Pyo, S.S. Matrix majorization via vector majorization. Linear Algebra Appl. 2001, 332–334, 15–21.
  8. Dahl, G.; Guterman, A.; Shteyner, P. Majorization for matrix classes. Linear Algebra Appl. 2018, 555, 201–221.
  9. Tong, Y.L. Some recent developments on majorization inequalities in probability and statistics. Linear Algebra Appl. 1994, 199, 69–90.
  10. Jorswieck, E.; Boche, H. Majorization and Matrix-Monotone Functions in Wireless Communications; Now Publishers: Delft, The Netherlands, 2007.
  11. Pietersz, R.; Groenen, P.J. Rank reduction of correlation matrices by majorization. Quant. Financ. 2004, 4, 649–662.
  12. Ando, T. Majorization, doubly stochastic matrices, and comparison of eigenvalues. Linear Algebra Appl. 1989, 118, 163–248.
  13. Hwang, S.G.; Park, J.-Y. A note on multivariate majorization. Commun. Korean Math. Soc. 1999, 14, 479–485.
  14. Dahl, G.; Guterman, A.; Shteyner, P. Majorization for (0,1)-matrices. Linear Algebra Appl. 2020, 585, 147–163.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
