A Robust Manifold Graph Regularized Nonnegative Matrix Factorization Algorithm for Cancer Gene Clustering

1 School of Information Science and Engineering, Central South University, Changsha 410083, China
2 School of Information Science and Engineering, Qufu Normal University, Rizhao 276826, China
* Author to whom correspondence should be addressed.
Molecules 2017, 22(12), 2131; https://doi.org/10.3390/molecules22122131
Submission received: 27 October 2017 / Revised: 27 November 2017 / Accepted: 29 November 2017 / Published: 2 December 2017

Abstract

Detecting genes with similar expression patterns using clustering techniques plays an important role in gene expression data analysis. Non-negative matrix factorization (NMF) is an effective method for the clustering analysis of gene expression data. However, NMF-based methods operate in Euclidean space, which is usually inappropriate for revealing the intrinsic geometric structure of the data space. To overcome this shortcoming, Cai et al. proposed a novel algorithm, graph regularized non-negative matrix factorization (GNMF). Motivated by the topological structure underlying the GNMF-based method, we propose an improved GNMF algorithm that better reveals the geometric structure of the data space: robust manifold non-negative matrix factorization (RM-GNMF), designed for cancer gene clustering and enhancing the robustness of the GNMF-based algorithm. We combine the $\ell_{2,1}$-norm NMF with spectral clustering and conduct wide-ranging experiments on three well-known datasets. The clustering results indicate that the proposed method outperforms previous methods, demonstrating the application of the RM-GNMF-based method to cancer gene clustering.

1. Introduction

With the progressive development of human whole-genome and microarray technologies, it has become possible to simultaneously observe the expression of numerous genes in different tissue samples. By analyzing gene expression data, genes whose expression differs across tissues, together with their relationships, can be identified to elucidate the pathogenic mechanisms of cancers based on genetic changes [1]. Recently, cancer classification based on gene expression data has become a hot research topic in bioinformatics.
Because the analysis of genome-wide expression patterns can provide unique perspectives on the structure of genetic networks, clustering techniques have been used to analyze gene expression data [2,3]. Cluster analysis is one of the most widespread statistical techniques for analyzing massive gene expression data. Its major task is to group genes with similar expression in order to discover groups of genes with identical features or similar biological functions, so that a deeper understanding can be gained of the essence of many biological phenomena such as gene function, development, cancer, and pharmacology [4].
Currently, it has been shown that non-negative matrix factorization (NMF) [5,6] is superior to hierarchical clustering (HC) and the self-organizing map (SOM) [7] for clustering cancer samples from gene expression data. Over the past few years, NMF-based methods have been used for the statistical analysis of gene expression data for clustering [8,9,10,11,12,13]. The main idea is to approximately factorize a non-negative data matrix into a product of two non-negative matrices, ensuring that all elements of the factors are non-negative. The appearance of NMF has therefore attracted considerable attention. Recently, various variants of the original NMF have been developed by modifying the objective function or the constraint conditions [14,15,16]. For instance, Cai et al. proposed graph regularized non-negative matrix factorization (GNMF), which takes the neighboring geometric structure into account: a nearest-neighbor graph preserves the neighborhood information of the high-dimensional space in the low-dimensional space. GNMF reveals the intrinsic geometrical structure by incorporating a Laplacian regularization term [17], which is effective for solving clustering problems. After that, sparse NMF [18] was proposed, imposing sparseness constraints on the basis and coefficient matrices produced by NMF so that sparseness in the data may be reflected. Non-smooth NMF [19] can realize global or local sparseness [20] by making the basis and encoding matrices sparse simultaneously. To enhance the robustness of the GNMF-based method in gene clustering, we propose an improved robust manifold non-negative matrix factorization (RM-GNMF), which combines the $\ell_{2,1}$-norm and spectral clustering with Laplacian regularization so that the data representation reflects the internal geometry. This facilitates revealing the intrinsic geometric structure of the cancer gene data space.
This paper is organized as follows. In Section 2, we briefly review NMF and GNMF. In Section 3, we propose the improved RM-GNMF algorithm. In Section 4, we present experimental results and compare them with those of previous methods. Finally, conclusions are drawn in Section 5.

2. The NMF-Based and GNMF-Based Methods

The NMF-based method [5] provides a linear, non-negative approximate description of non-negative data matrices. Consider an original matrix $X$ of size $m \times n$, where $m$ is the number of data features and $n$ the number of samples. The NMF method decomposes $X$ into two non-negative matrices $W \in \mathbb{R}^{m \times r}$ and $H \in \mathbb{R}^{n \times r}$, i.e.,

$$X = W H^T, \quad (1)$$

where $r \ll \min(m, n)$. For a given decomposition $X = W H^T$, the $n$ samples can be divided into $r$ classes according to the matrix $H^T$. Each sample is assigned to its highest metagene expression level, meaning that if $H^T_{ij}$ is the largest entry in column $j$, then sample $j$ is assigned to class $i$.
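As a concrete illustration, this assignment rule can be written in a few lines of numpy; the array shapes and values below are hypothetical, chosen only to show the argmax-per-column rule.

```python
import numpy as np

# Hypothetical example: r = 3 metagenes (rows of H^T), n = 5 samples (columns).
rng = np.random.default_rng(0)
HT = rng.random((3, 5))  # HT[i, j] = expression level of metagene i in sample j

# Assign each sample j to the class i with the largest entry in column j of H^T.
labels = np.argmax(HT, axis=0)
print(labels)  # one cluster index per sample, e.g. [1 2 0 ...]
```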
Using the square of the Euclidean distance between $X$ and $W H^T$ [21], we have the objective function of the NMF method

$$O_{NMF} = \| X - W H^T \|^2 = \sum_{ij} \Big( x_{ij} - \sum_{k=1}^{r} w_{ik} h_{jk} \Big)^2. \quad (2)$$
According to the iterative update algorithm [6], the NMF-based method is performed with the multiplicative update rules for $W$ and $H$ given by

$$w_{ik} \leftarrow w_{ik} \frac{(X H)_{ik}}{(W H^T H)_{ik}}, \qquad h_{jk} \leftarrow h_{jk} \frac{(X^T W)_{jk}}{(H W^T W)_{jk}}. \quad (3)$$
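To make the update rules concrete, the following is a minimal numpy sketch of the multiplicative iterations in Equation (3); the function name, random initialization, iteration count, and the small eps safeguard against division by zero are our illustrative choices, not prescribed by the paper.

```python
import numpy as np

def nmf_multiplicative(X, r, n_iter=200, eps=1e-10):
    """Lee-Seung multiplicative updates for X ~ W H^T (Equation (3))."""
    m, n = X.shape
    rng = np.random.default_rng(0)
    W = rng.random((m, r))   # basis matrix, m x r
    H = rng.random((n, r))   # coefficient matrix, n x r
    for _ in range(n_iter):
        W *= (X @ H) / (W @ (H.T @ H) + eps)
        H *= (X.T @ W) / (H @ (W.T @ W) + eps)
    return W, H
```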
To overcome this limitation of the NMF-based method, Cai et al. [17] proposed the GNMF-based method, in which an affinity graph is generated to encode the geometrical information, followed by a matrix factorization that respects the graph structure. In contrast to the NMF-based method, it imposes a graph regularization constraint, which retains the advantage of the local sparse representation of the NMF-based method while preserving the similarity between the original data points after dimensionality reduction.
There are several weighting schemes, such as zero-one weighting, heat kernel weighting, and Gaussian weighting [17]. In what follows, we consider the zero-one weighting described as
$$Q_{ij} = \begin{cases} 1, & \text{if } x_i \in N_k(x_j) \text{ or } x_j \in N_k(x_i), \\ 0, & \text{otherwise}. \end{cases} \quad (4)$$
Based on the weight matrix Q, we obtain the objective function of the GNMF method given by
$$O_{GNMF} = \| X - W H^T \|^2 + \lambda \, \mathrm{Tr}(H^T L H), \quad (5)$$
where $\mathrm{Tr}(\cdot)$ denotes the trace of a matrix and $L = D - Q$, with $D$ a diagonal matrix whose entries are the column (or, equivalently, row) sums of $Q$, $D_{jj} = \sum_l Q_{jl}$ [22]. The regularization parameter $\lambda \geq 0$ controls the smoothness of the new representation. Using iterative algorithms to minimize the objective function $O_{GNMF}$, we obtain the updating rules
$$w_{ik} \leftarrow w_{ik} \frac{(X H)_{ik}}{(W H^T H)_{ik}}, \qquad h_{jk} \leftarrow h_{jk} \frac{(X^T W + \lambda Q H)_{jk}}{(H W^T W + \lambda D H)_{jk}}. \quad (6)$$
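A sketch of these GNMF iterations in numpy follows, assuming the weight-matrix form of the H-update in Equation (6); Q is the graph weight matrix and D its degree matrix, and, as before, the initialization and eps guard are illustrative assumptions.

```python
import numpy as np

def gnmf_multiplicative(X, Q, r, lam=0.05, n_iter=200, eps=1e-10):
    """GNMF multiplicative updates (Equation (6)) with graph weights Q."""
    m, n = X.shape
    rng = np.random.default_rng(0)
    W = rng.random((m, r))
    H = rng.random((n, r))
    D = np.diag(Q.sum(axis=1))  # degree matrix of the graph
    for _ in range(n_iter):
        W *= (X @ H) / (W @ (H.T @ H) + eps)
        H *= (X.T @ W + lam * (Q @ H)) / (H @ (W.T @ W) + lam * (D @ H) + eps)
    return W, H
```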

3. The RM-GNMF-Based Method for Gene Clustering

So far, we have described the NMF-based and GNMF-based methods. In what follows, we develop RM-GNMF gene clustering by combining the $\ell_{2,1}$-norm and spectral clustering with Laplacian regularization.

3.1. The $\ell_{2,1}$-Norm

The $\ell_{2,1}$-norm of a matrix was initially employed as a rotational invariant $\ell_1$-norm [23] and has commonly been used for multi-task learning [24,25] and tensor factorization [26]. Instead of an $\ell_2$-norm-based loss function, which is sensitive to outliers, we resort to the $\ell_{2,1}$-norm-based loss function and regularization [23], whose convergence has been proved.
To overcome the drawbacks of the NMF-based method and enhance the robustness of the GNMF-based method, we employ the $\ell_{2,1}$-norm for the matrix factorization in the RM-GNMF-based method. For a non-negative matrix $X$ of size $m \times n$, the $\ell_{2,1}$-norm of $X$ is defined as

$$\| X \|_{2,1} = \sum_{i=1}^{n} \| x_i \|_2, \quad (7)$$

where the data vectors $x_i$ are arranged as columns, so that the $\ell_{2,1}$-norm first computes the $\ell_2$-norm of each column. The matrix factorization task then becomes

$$\min_{W, H \geq 0} \| X - W H^T \|_{2,1}. \quad (8)$$
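The definition in Equation (7) is a one-liner in numpy; the toy matrix below is our own example, chosen to show why the norm grows only linearly with a corrupted column.

```python
import numpy as np

def l21_norm(X):
    """l_{2,1}-norm: sum of the l_2-norms of the columns of X (Equation (7))."""
    return np.sum(np.linalg.norm(X, axis=0))

X = np.array([[3.0, 0.0],
              [4.0, 1.0]])
print(l21_norm(X))  # ||(3,4)||_2 + ||(0,1)||_2 = 5.0 + 1.0 = 6.0
```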

3.2. Spectral-Based Manifold Learning for Constrained GNMF

The spectral method is a classical analytic and algebraic tool in mathematics, widely used for low-dimensional representation and clustering of high-dimensional data [27,28]. A relational matrix describing the similarity of each pair of data points is defined according to the given sample dataset, and the eigenvalues and eigenvectors of this matrix are calculated. The appropriate eigenvectors are then selected, yielding a low-dimensional embedding of the data. Several such matrices can be defined on a given graph, such as the adjacency matrix, the Laplacian matrix, the degree matrix, and so on [22].
Based on the spectrum of the matrices associated with a graph, spectral theory further reveals the information contained in the graph and establishes a connection between discrete and continuous spaces through the techniques of geometry, analysis, and algebra. It has a wide range of applications in manifold learning. In this section, the p-nearest-neighbor method is used to establish the relationship between each data point and its neighborhood.
For a data matrix $X \in \mathbb{R}^{m \times n}$, we treat each column of $X$ as a data point and each data point as a vertex. The p-nearest-neighbor graph $G$ can then be constructed with $n$ vertices, and the symmetric weight matrix $Q \in \mathbb{R}^{n \times n}$ is generated, where the element $q_{ij}$ denotes the weight of the edge joining vertices $i$ and $j$:

$$q_{ij} = \begin{cases} 1, & \text{if } x_i \in N_p(x_j) \text{ or } x_j \in N_p(x_i), \\ 0, & \text{otherwise}, \end{cases} \quad (9)$$

where $N_p(x_i)$ denotes the set of p-nearest neighbors of $x_i$. The matrix $Q$ clearly represents the affinity between the data points.
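A possible numpy construction of the zero-one p-nearest-neighbor graph and its Laplacian is sketched below; the brute-force distance computation is an illustrative choice and would be replaced by a neighbor-search library for large n.

```python
import numpy as np

def knn_graph_laplacian(X, p=5):
    """Zero-one p-NN weight matrix Q (Equation (9)), degree matrix D,
    and graph Laplacian L = D - Q, for data points in the columns of X."""
    n = X.shape[1]
    sq = np.sum(X ** 2, axis=0)
    dist = sq[:, None] + sq[None, :] - 2 * (X.T @ X)  # pairwise squared distances
    np.fill_diagonal(dist, np.inf)                    # exclude self-neighbors
    Q = np.zeros((n, n))
    for i in range(n):
        Q[i, np.argsort(dist[i])[:p]] = 1             # p nearest neighbors of x_i
    Q = np.maximum(Q, Q.T)  # the "or" in Equation (9) makes Q symmetric
    D = np.diag(Q.sum(axis=1))
    return Q, D, D - Q
```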
There is a manifold assumption: if two data points $x_i$ and $x_j$ are close in the intrinsic geometric structure of the data distribution, then their representations under a new basis will also be close [29]. We therefore define the relationship

$$\min_{M} \sum_{ij} \| m_i - m_j \|^2 q_{ij}, \quad (10)$$

where $m_i$ and $m_j$ denote the mappings of $x_i$ and $x_j$, respectively. The degree matrix $D$ is a diagonal matrix given by $d_{ii} = \sum_j q_{ij}$; clearly, $d_{ii}$ is the sum of all the similarities associated with $x_i$. The graph Laplacian matrix is then given by

$$L = D - Q. \quad (11)$$
The graph embedding can be written as

$$\min_{X} \sum_{ij} \| x_i - x_j \|^2 q_{ij} = \min_{X} \mathrm{tr}\big( X (D - Q) X^T \big) = \min_{X} \mathrm{tr}( X L X^T ). \quad (12)$$
In the RM-GNMF-based method, we combine the GNMF-based method with spectral clustering, resulting in the $\ell_{2,1}$-norm constrained GNMF objective

$$O_{RM\text{-}GNMF} = \| X - W H^T \|_{2,1} + \lambda \, \mathrm{Tr}(H^T L H), \quad (13)$$
where λ 0 is the regularization parameter. We resort to the augmented Lagrange multiplier (ALM) method to solve the above problem.
Introducing an auxiliary variable $Z = X - W H^T$, we rewrite $O_{RM\text{-}GNMF}$ in Equation (13) as

$$\min_{W, H, Z} \| Z \|_{2,1} + \alpha \, \mathrm{Tr}(H^T L H), \quad (14)$$
subject to the constraints $Z - X + W H^T = 0$ and $H^T H = I$. We then define the augmented Lagrangian function

$$L_{\mu}(Z, W, H, \Lambda) = \| Z \|_{2,1} + \mathrm{Tr}\big( \Lambda^T (Z - X + W H^T) \big) + \frac{\mu}{2} \| Z - X + W H^T \|_F^2 + \alpha \, \mathrm{Tr}(H^T L H), \quad (15)$$

subject to the constraint $H^T H = I$, where $\mu$ is the step size of the update and $\Lambda$ is the Lagrangian multiplier.
To optimize Equation (15), we rewrite the objective function to obtain the following task:

$$L_{\mu}(Z, W, H, \Lambda) = \| Z \|_{2,1} + \frac{\mu}{2} \Big\| Z - X + W H^T + \frac{\Lambda}{\mu} \Big\|_F^2 + \alpha \, \mathrm{Tr}(H^T L H), \quad (16)$$

subject to the constraint $H^T H = I$.

3.3. Computation of Z

For given $W$ and $H$, we can solve for $Z$ in Equation (15) through the update

$$Z^{r+1} = \arg\min_{Z} \| Z \|_{2,1} + \frac{\mu}{2} \Big\| Z - \Big( X - W^r (H^r)^T - \frac{\Lambda^r}{\mu} \Big) \Big\|_F^2. \quad (17)$$
We need the following Lemma to solve Z in Equation (15). Please see the Appendix for a detailed proof.
Lemma 1.
Given a matrix $W = [w_1, \ldots, w_n] \in \mathbb{R}^{m \times n}$ and a positive scalar $\lambda$, $Z^*$ is the optimal solution of

$$\min_{Z} \frac{1}{2} \| Z - W \|_F^2 + \lambda \| Z \|_{2,1}, \quad (18)$$

and the $i$-th column of $Z^*$ is given by

$$Z^*(:, i) = \begin{cases} \frac{\| w_i \| - \lambda}{\| w_i \|} \, w_i, & \text{if } \lambda < \| w_i \|, \\ 0, & \text{otherwise}. \end{cases} \quad (19)$$
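Lemma 1 gives a closed-form, column-wise shrinkage operator, which might be implemented as follows; the vectorized masking is our own stylistic choice.

```python
import numpy as np

def l21_prox(W, lam):
    """Solve min_Z 0.5 * ||Z - W||_F^2 + lam * ||Z||_{2,1} (Equation (18)).

    Per Lemma 1, each column w_i is scaled by (||w_i|| - lam) / ||w_i||
    and set to zero whenever ||w_i|| <= lam.
    """
    Z = np.zeros_like(W)
    norms = np.linalg.norm(W, axis=0)
    keep = norms > lam
    Z[:, keep] = W[:, keep] * (norms[keep] - lam) / norms[keep]
    return Z
```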

3.4. Computation of W and H

With the other parameters given, we solve for the optimal $W$. The update of $W$ amounts to solving

$$W^{r+1} = \arg\min_{W} \frac{\mu}{2} \Big\| Z^r - X + W (H^r)^T + \frac{\Lambda}{\mu} \Big\|_F^2. \quad (20)$$
Let $M = X - Z - \frac{\Lambda}{\mu}$. The problem in Equation (20) can then be rewritten as

$$W^{r+1} = \arg\min_{W} \frac{\mu}{2} \big\| W (H^r)^T - M \big\|_F^2. \quad (21)$$
Setting the partial derivative with respect to $W$ to zero and using $H^T H = I$, we obtain

$$W^{r+1} = M H^r. \quad (22)$$
Next, we derive the optimal $H$. Substituting $W = M H$, the update problem for $H$ can be expressed as

$$H^{r+1} = \arg\min_{H} \frac{\mu}{2} \| M - M H H^T \|_F^2 + \alpha \, \mathrm{Tr}(H^T L H), \quad (23)$$
subject to the constraint $H^T H = I$. Since $\| M - M H H^T \|_F^2 = \mathrm{Tr}(M^T M) - \mathrm{Tr}(H^T M^T M H)$ under this constraint, we have

$$H^{r+1} = \arg\min_{H} \| M - M H H^T \|_F^2 + \frac{2\alpha}{\mu} \mathrm{Tr}(H^T L H) = \arg\min_{H} \mathrm{Tr}\Big( H^T \Big( \frac{2\alpha}{\mu} L - M^T M \Big) H \Big). \quad (24)$$
Therefore, the optimal $H^{r+1}$ is obtained by computing the $k$ eigenvectors of $\frac{2\alpha}{\mu} L - M^T M$ associated with its smallest eigenvalues:

$$H^{r+1} = (h_1, \ldots, h_k). \quad (25)$$
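In code, this H-step reduces to one symmetric eigendecomposition; the sketch below uses numpy's eigh, which returns eigenvalues in ascending order, so the first k eigenvectors are the ones required.

```python
import numpy as np

def update_H(M, L, alpha, mu, k):
    """H-step of RM-GNMF: minimize Tr(H^T (2*alpha/mu * L - M^T M) H)
    subject to H^T H = I (Equation (24)), solved by the k eigenvectors
    with the smallest eigenvalues (Equation (25))."""
    A = (2 * alpha / mu) * L - M.T @ M
    A = (A + A.T) / 2                     # enforce exact symmetry numerically
    _, eigvecs = np.linalg.eigh(A)        # eigenvalues in ascending order
    return eigvecs[:, :k]                 # H is n x k with H^T H = I
```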

3.5. Updating of Λ and μ

The updates of $\Lambda$ and $\mu$ can be described as

$$\Lambda^{r+1} = \Lambda^r + \mu \big( Z^{r+1} - X + W^{r+1} (H^{r+1})^T \big), \quad (26)$$

$$\mu^{r+1} = \rho \, \mu^r, \quad (27)$$

where $\rho > 1$ is a scaling factor that enlarges the penalty parameter at each iteration. The detailed process of the RM-GNMF-based method is listed in Algorithm 1.
Algorithm 1: The RM-GNMF-Based Algorithm
Input: the dataset $X = [x_1, x_2, \ldots, x_n] \in \mathbb{R}^{m \times n}$;
 a predefined number of clusters $k$;
 the parameter $\mu$ and the regularization parameter $\alpha$ (denoted $\lambda$ in Equation (13));
 the nearest-neighbor graph parameter $p$;
 the maximum iteration number $t_{max}$.
Initialization: $Z = \Lambda = 0$, $W^0 \in \mathbb{R}^{m \times k}$, $H^0 \in \mathbb{R}^{n \times k}$, $t = 0$.
Repeat
 Fix the other variables, and update $Z$ by formula (17);
 Fix the other variables, and update $W$ by $W = \big( X - Z - \frac{\Lambda}{\mu} \big) H$;
 Update $H$ by $H = U V^T$, where $U$ and $V$ contain the left and right singular vectors of the corresponding SVD;
 Fix the other variables, and update $\Lambda$ and $\mu$ by formulas (26) and (27);
 $t = t + 1$;
Until $t \geq t_{max}$.
Output: matrix $W \in \mathbb{R}^{m \times k}$, matrix $H \in \mathbb{R}^{n \times k}$.
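Putting the steps together, a compact numpy sketch of Algorithm 1 might look as follows. It reuses the l21_prox and knn_graph_laplacian helpers sketched earlier; the default parameter values, the QR-based initialization of H, and the penalty growth factor rho are illustrative assumptions rather than the authors' exact settings.

```python
import numpy as np

def rm_gnmf(X, k, alpha=0.05, mu=1e3, p=5, rho=1.1, t_max=100):
    """Sketch of the RM-GNMF ALM iterations (Algorithm 1)."""
    m, n = X.shape
    _, _, L = knn_graph_laplacian(X, p)         # graph Laplacian of the data
    rng = np.random.default_rng(0)
    W = rng.random((m, k))
    H, _ = np.linalg.qr(rng.random((n, k)))     # orthonormal start: H^T H = I
    Z = np.zeros((m, n))
    Lam = np.zeros((m, n))
    for _ in range(t_max):
        # Z-step: l_{2,1} proximal operator, formula (17).
        Z = l21_prox(X - W @ H.T - Lam / mu, 1.0 / mu)
        # W-step: closed form W = M H with M = X - Z - Lam / mu, formula (22).
        M = X - Z - Lam / mu
        W = M @ H
        # H-step: smallest eigenvectors of 2*alpha/mu * L - M^T M, formula (25).
        A = (2 * alpha / mu) * L - M.T @ M
        _, V = np.linalg.eigh((A + A.T) / 2)
        H = V[:, :k]
        # Multiplier and penalty updates, formulas (26) and (27).
        Lam = Lam + mu * (Z - X + W @ H.T)
        mu = rho * mu
    return W, H
```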

4. Results and Discussion

In this section, we evaluate the performance of the proposed method on the three gene expression datasets. We compare the RM-GNMF-based method with the NMF-based method [6], the l 2 , 1 -NMF-based method [23], the LNMF-based method [20], and the GNMF-based method [17].

4.1. Datasets

In order to evaluate the performance of the proposed RM-GNMF algorithm, clustering experiments were conducted on several gene expression datasets of cancer patients. Three classical genetic datasets were used in the experiments: leukemia [1], colon, and GLI_85 [30]. These gene expression datasets were downloaded from http://featureselection.asu.edu/datasets.php. The colon cancer dataset consists of the gene expression profiles of 2000 genes for 62 tissue samples, of which 40 are colon cancer tissues and 22 are normal tissues. The leukemia dataset consists of 7129 genes and 72 samples (47 ALL and 25 AML).
A brief description of the experimental datasets is given in Table 1. More detailed information on these datasets can be found in the relevant references, and the datasets are available for download from the website above.

4.2. Evaluation Metrics

For the sake of evaluating the clustering results, we use the clustering accuracy and normalized mutual information (NMI) to demonstrate the performance of the proposed algorithm.
Clustering accuracy can be calculated as

$$ACC = \frac{\sum_{i=1}^{n} \delta\big( map(c_i), l_i \big)}{n}, \quad (28)$$

where $c_i$ is the cluster label of $x_i$, $l_i$ is the true class label of the $i$-th sample, $n$ denotes the total number of samples, and $\delta(map(c_i), l_i)$ is a delta function: $\delta(map(c_i), l_i) = 1$ if $map(c_i) = l_i$ and 0 otherwise, where $map(c_i)$ is the mapping function that maps the cluster label $c_i$ onto the actual label $l_i$. The best mapping can be found by the Kuhn–Munkres method [31]. NMI can be described as
$$NMI = \frac{\sum_{i=1}^{N} \sum_{j=1}^{N} n_{i,j} \log \frac{n \, n_{i,j}}{n_i \hat{n}_j}}{\sqrt{\Big( \sum_{i=1}^{N} n_i \log \frac{n_i}{n} \Big) \Big( \sum_{j=1}^{N} \hat{n}_j \log \frac{\hat{n}_j}{n} \Big)}}, \quad (29)$$

where $n_i$ is the size of the $i$-th cluster, $\hat{n}_j$ is the size of the $j$-th class, $n_{i,j}$ is the number of data points in their intersection, and $N$ denotes the number of clusters. We perform 100 experiments under each target feature dimension and take the means of the ACC and NMI values as the experimental results.
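Both metrics are straightforward to compute with standard scientific Python; the sketch below assumes integer labels starting at 0, uses scipy's Hungarian solver for the Kuhn–Munkres mapping in Equation (28), and defers NMI to scikit-learn.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.metrics import normalized_mutual_info_score

def clustering_accuracy(labels_true, labels_pred):
    """ACC of Equation (28) with the best label mapping (Kuhn-Munkres)."""
    labels_true = np.asarray(labels_true)
    labels_pred = np.asarray(labels_pred)
    n_classes = max(labels_true.max(), labels_pred.max()) + 1
    # cost[i, j] counts samples with predicted label i and true label j.
    cost = np.zeros((n_classes, n_classes), dtype=int)
    for c, l in zip(labels_pred, labels_true):
        cost[c, l] += 1
    rows, cols = linear_sum_assignment(-cost)   # maximize matched counts
    return cost[rows, cols].sum() / labels_true.size

y_true = [0, 0, 1, 1]
y_pred = [1, 1, 0, 0]                           # same partition, swapped labels
print(clustering_accuracy(y_true, y_pred))      # 1.0 after remapping
print(normalized_mutual_info_score(y_true, y_pred))  # 1.0
```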

4.3. Parameter Selection

The RM-GNMF-based method involves two essential parameters, i.e., the regularization parameter $\lambda$ and the penalty coefficient $\mu$, which determines the penalty for infeasibility.
We set the parameters in the ranges $\lambda \in \{0.001, 0.005, 0.01, 0.05, 0.1, 0.5, 1\}$ and $\mu \in \{10^1, 10^2, 10^3, 10^4, 10^5, 10^6, 10^7\}$, and use cross-validation to obtain the best parameter values $\lambda = 0.05$ and $\mu = 10^3$. To analyze intuitively the influence of the parameters $\lambda$ and $\mu$ on clustering accuracy, Figure 1 shows the variation in clustering accuracy as the two parameters are modified; the three subgraphs in Figure 1 correspond to the three gene expression datasets, respectively. As can be seen in Figure 1, $\mu = 10^3$ yields higher ACC. As the regularization parameter $\lambda$ changes, ACC varies relatively little, and clustering accuracy is higher when $\lambda$ is smaller. We therefore set $\lambda = 0.05$ and $\mu = 10^3$ in the follow-up experiments.
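A grid search over these ranges is simple to script; in the sketch below, cv_accuracy is a hypothetical placeholder for a cross-validated accuracy estimate (here replaced by a dummy scorer so the snippet runs), not a function from the paper.

```python
import itertools
import numpy as np

def cv_accuracy(lam, mu):
    # Placeholder: in practice, run RM-GNMF with (lam, mu) under
    # cross-validation and return the mean clustering accuracy.
    # This dummy score peaks at lam = 0.05, mu = 1e3 so the sketch runs.
    return -abs(np.log10(lam / 0.05)) - abs(np.log10(mu / 1e3))

lambdas = [0.001, 0.005, 0.01, 0.05, 0.1, 0.5, 1]
mus = [10.0 ** e for e in range(1, 8)]
best_lam, best_mu = max(itertools.product(lambdas, mus),
                        key=lambda pair: cv_accuracy(*pair))
print(best_lam, best_mu)  # 0.05 1000.0 with the dummy scorer
```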

4.4. Clustering Results

In Table 2, we present the clustering results on the colon, GLI_85, and leukemia datasets. We report the mean clustering results over 100 runs for each NMF-based method.
It can be seen that the RM-GNMF-based method outperforms the original NMF-based method and achieves the best performance among the compared methods on all three datasets. The clustering accuracies of the RM-GNMF-based method are 66.13%, 75.29%, and 65.28% for the colon, GLI_85, and leukemia datasets, respectively.
Our tests on several gene expression profiling datasets of cancer patients consistently indicate that the RM-GNMF-based method achieves significant improvements in comparison with the NMF-based method, the l 2 , 1 -NMF-based method, the LNMF-based method, and the GNMF-based method, in terms of cancer prediction accuracy.
As shown in Figure 2, the RM-GNMF-based method consistently yields better clustering results than the other NMF-based methods on the three original datasets.
To demonstrate the robustness of our approach to data changes, we add uniform noise to the three gene expression datasets. A disturbed matrix $Y_{noise}$ is generated by adding independent uniform noise, defined as

$$Y_{noise} = Y + r, \quad (30)$$

where $Y$ is the original matrix and $r$ is a random matrix whose entries are generated from a uniform distribution on the interval $[0, max]$, with $max$ being the maximum expression value of $Y$.
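The perturbation in Equation (30) can be reproduced with a few lines of numpy; drawing every entry of r independently is our reading of 'independent uniform noise'.

```python
import numpy as np

def add_uniform_noise(Y, rng=None):
    """Disturb Y as in Equation (30): each entry of r is drawn uniformly
    from [0, max(Y)], where max(Y) is the largest expression value of Y."""
    rng = rng or np.random.default_rng(0)
    r = rng.uniform(0.0, Y.max(), size=Y.shape)
    return Y + r
```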
The experimental results with added noise are shown in Figure 3. The clustering results of the RM-GNMF algorithm remain stable after the addition of noise, which shows that the RM-GNMF algorithm is robust.
To verify the results obtained from the algorithms in the experiments, we imported the clustering results of the compared methods into the STAC web platform (http://tec.citius.usc.es/stac/) to perform a statistical test. We selected the Friedman test for non-parametric comparison of multiple groups, with a significance level of 0.05. The analysis results are presented in Table 3 and Table 4.
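For readers who prefer to stay in Python, scipy offers the same non-parametric test; the snippet below feeds it the ACC columns of Table 2 as an illustration (the STAC analysis in the paper may combine the metrics differently, so the statistic need not match Table 3 exactly).

```python
from scipy.stats import friedmanchisquare

# ACC per dataset (colon, GLI_85, leukemia) for each method, from Table 2.
nmf     = [0.6290, 0.6088, 0.6389]
l21nmf  = [0.5323, 0.6088, 0.6328]
lnmf    = [0.6129, 0.5294, 0.6250]
gnmf    = [0.6290, 0.6000, 0.6389]
rmgnmf  = [0.6613, 0.7529, 0.6528]

stat, p = friedmanchisquare(nmf, l21nmf, lnmf, gnmf, rmgnmf)
print(stat, p)  # reject H0 at the 0.05 level when p < 0.05
```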
From the above test results, it can be concluded that $H_0$ is rejected. Hence, the clustering results of the five algorithms are significantly different.

5. Conclusions

We have proposed the RM-GNMF-based method, which combines the $\ell_{2,1}$-norm with spectral-based manifold learning. The algorithm is suitable for clustering cancer gene expression data with an elegant geometric structure. Our tests on several gene expression profiling datasets of cancer patients consistently indicate that the RM-GNMF-based method achieves significant improvements over the NMF-based, $\ell_{2,1}$-NMF-based, LNMF-based, and GNMF-based methods in terms of cancer prediction accuracy and robustness.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (Grant Nos. 61379153, 61572529, 61572284).

Author Contributions

Rong Zhu and Jin-Xing Liu conceived and designed the experiments; Rong Zhu performed the experiments; Rong Zhu and Yuan-Ke Zhang analyzed the data; Rong Zhu and Ying Guo wrote the paper.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Lemma A1.
With the given matrix $W = [w_1, \ldots, w_n] \in \mathbb{R}^{m \times n}$ and the positive scalar $\lambda$, $Z^*$ is the optimal solution of

$$\min_{Z} \frac{1}{2} \| Z - W \|_F^2 + \lambda \| Z \|_{2,1}, \quad (A1)$$

and the $i$-th column of $Z^*$ can be calculated as

$$Z^*(:, i) = \begin{cases} \frac{\| w_i \| - \lambda}{\| w_i \|} \, w_i, & \text{if } \lambda < \| w_i \|, \\ 0, & \text{otherwise}. \end{cases} \quad (A2)$$
Proof. 
The objective function in Equation (A1) is equivalent to

$$\sum_{i=1}^{n} \frac{1}{2} \| z_i - w_i \|_2^2 + \lambda \sum_{i=1}^{n} \| z_i \|_2, \quad (A3)$$

which can be solved in a decoupled manner:

$$\min_{z_i} \frac{1}{2} \| z_i - w_i \|_2^2 + \lambda \| z_i \|_2. \quad (A4)$$

Taking the derivative with respect to $z_i$, we get

$$\frac{\partial \| z_i \|_2}{\partial z_i} = \begin{cases} r, & \text{if } z_i = 0, \\ \frac{z_i}{\sqrt{z_i^T z_i}}, & \text{otherwise}, \end{cases} \quad (A5)$$

where $r$ is a subgradient vector with $\| r \|_2 \leq 1$. For $z_i = 0$, we get

$$-w_i + \lambda r = 0, \quad (A6)$$

which requires $\lambda \geq \| w_i \|$. For $z_i \neq 0$, we get

$$z_i - w_i + \lambda \frac{z_i}{\sqrt{z_i^T z_i}} = 0. \quad (A7)$$

Combining Equation (A6) with Equation (A7), we obtain

$$z_i = \alpha w_i, \quad (A8)$$

where $\alpha = \frac{\| z_i \|_2}{\| z_i \|_2 + \lambda} > 0$. Plugging $z_i$ from Equation (A8) into Equation (A7), we solve for $\alpha$, which is then substituted back into Equation (A8). After performing these steps, we obtain

$$z_i = \Big( 1 - \frac{\lambda}{\| w_i \|} \Big) w_i. \quad (A9)$$
 □

References

  1. Golub, T.R.; Slonim, D.K.; Tamayo, P.; Huard, C.; Gaasenbeek, M.; Mesirov, J.P.; Coller, H.; Loh, M.L.; Downing, J.R.; Caligiuri, M.A.; et al. Molecular classification of cancer: Class discovery and class prediction by gene expression monitoring. Science 1999, 286, 531–537. [Google Scholar] [CrossRef] [PubMed]
  2. Jiang, D.; Tang, C.; Zhang, A. Cluster analysis for gene expression data: Survey. IEEE Trans. Knowl. Data Eng. 2004, 16, 1370–1386. [Google Scholar] [CrossRef]
  3. Devarajan, K. Nonnegative matrix factorization: An analytical and interpretive tool in computational biology. PLoS Comput. Biol. 2008, 4, e1000029. [Google Scholar] [CrossRef] [PubMed]
  4. Luo, F.; Khan, L.; Bastani, F.; Yen, I.L.; Zhou, J. A dynamically growing self-organizing tree (DGSOT) for hierarchical clustering gene expression profiles. Bioinformatics 2004, 20, 2605–2617. [Google Scholar] [CrossRef] [PubMed]
  5. Lee, D.D.; Seung, H.S. Learning the parts of objects by non-negative matrix factorization. Nature 1999, 401, 788. [Google Scholar] [PubMed]
  6. Lee, D.D.; Seung, H.S. Algorithms for non-negative matrix factorization. In Advances in Neural Information Processing Systems; The MIT Press: Cambridge, MA, USA, 2001; pp. 556–562. [Google Scholar]
  7. Brunet, J.P.; Tamayo, P.; Golub, T.R.; Mesirov, J.P. Metagenes and molecular pattern discovery using matrix factorization. Proc. Natl. Acad. Sci. USA 2004, 101, 4164–4169. [Google Scholar] [CrossRef] [PubMed]
  8. Li, T.; Ding, C.H. Nonnegative Matrix Factorizations for Clustering: A Survey. In Data Clustering: Algorithms and Applications; CRC Press: Boca Raton, FL, USA, 2013. [Google Scholar]
  9. Ding, C.; He, X.; Simon, H.D. On the equivalence of nonnegative matrix factorization and spectral clustering. In Proceedings of the 2005 SIAM International Conference on Data Mining; SIAM: Philadelphia, PA, USA, 2005; pp. 606–610. [Google Scholar]
  10. Kuang, D.; Ding, C.; Park, H. Symmetric nonnegative matrix factorization for graph clustering. In Proceedings of the 2012 SIAM International Conference on Data Mining; SIAM: Philadelphia, PA, USA, 2012; pp. 106–117. [Google Scholar]
  11. Akata, Z.; Thurau, C.; Bauckhage, C. Non-negative matrix factorization in multimodality data for segmentation and label prediction. In Proceedings of the 16th Computer vision winter workshop, Mitterberg, Austria, 2–4 February 2011. [Google Scholar]
  12. Liu, J.; Wang, C.; Gao, J.; Han, J. Multi-view clustering via joint nonnegative matrix factorization. In Proceedings of the 2013 SIAM International Conference on Data Mining; SIAM: Philadelphia, PA, USA, 2013; pp. 252–260. [Google Scholar]
  13. Singh, A.P.; Gordon, G.J. Relational learning via collective matrix factorization. In Proceedings of the 14th ACM SIGKDD international conference on Knowledge discovery and Data Mining, Las Vegas, Nevada, USA, 24–27 August 2008; pp. 650–658. [Google Scholar]
  14. Huang, Z.; Zhou, A.; Zhang, G. Non-negative matrix factorization: A short survey on methods and applications. In Computational Intelligence and Intelligent Systems; Springer: Berlin, Germany, 2012; pp. 331–340. [Google Scholar]
  15. Wang, Y.X.; Zhang, Y.J. Nonnegative matrix factorization: A comprehensive review. IEEE Trans. Knowl. Data Eng. 2013, 25, 1336–1353. [Google Scholar] [CrossRef]
  16. Kim, J.; He, Y.; Park, H. Algorithms for nonnegative matrix and tensor factorizations: A unified view based on block coordinate descent framework. J. Glob. Optim. 2014, 58, 285–319. [Google Scholar] [CrossRef]
  17. Cai, D.; He, X.; Han, J.; Huang, T.S. Graph regularized nonnegative matrix factorization for data representation. IEEE Trans. Pattern Anal. Mach. Intell. 2011, 33, 1548–1560. [Google Scholar] [PubMed]
  18. Kim, H.; Park, H. Sparse non-negative matrix factorizations via alternating non-negativity-constrained least squares for microarray data analysis. Bioinformatics 2007, 23, 1495–1502. [Google Scholar] [CrossRef] [PubMed]
  19. Pascual-Montano, A.; Carazo, J.M.; Kochi, K.; Lehmann, D.; Pascual-Marqui, R.D. Nonsmooth nonnegative matrix factorization (nsNMF). IEEE Trans. Pattern Anal. Mach. Intell. 2006, 28, 403–415. [Google Scholar] [CrossRef] [PubMed]
  20. Li, S.Z.; Hou, X.W.; Zhang, H.J.; Cheng, Q.S. Learning spatially localized, parts-based representation. In Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), Kauai, HI, USA, 8–14 December 2001. [Google Scholar]
  21. Paatero, P.; Tapper, U. Positive matrix factorization: A non-negative factor model with optimal utilization of error estimates of data values. Environmetrics 1994, 5, 111–126. [Google Scholar] [CrossRef]
  22. Chung, F.R. Spectral Graph Theory; Number 92; American Mathematical Soc.: Providence, RI, USA, 1997. [Google Scholar]
  23. Nie, F.; Huang, H.; Cai, X.; Ding, C.H. Efficient and robust feature selection via joint l2, 1-norms minimization. In Advances in Neural Information Processing Systems; The MIT Press: Cambridge, MA, USA, 2010; pp. 1813–1821. [Google Scholar]
  24. Argyriou, A.; Evgeniou, T.; Pontil, M. Multi-task feature learning. In Advances in Neural Information Processing Systems; The MIT Press: Cambridge, MA, USA, 2007; pp. 41–48. [Google Scholar]
  25. Guyon, I.; Weston, J.; Barnhill, S.; Vapnik, V. Gene selection for cancer classification using support vector machines. Mach. Learn. 2002, 46, 389–422. [Google Scholar] [CrossRef]
  26. Huang, H.; Ding, C. Robust tensor factorization using r1 norm. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Anchorage, AK, USA, 23–28 June 2008. [Google Scholar]
  27. Tao, H.; Hou, C.; Nie, F.; Zhu, J.; Yi, D. Scalable Multi-View Semi-Supervised Classification via Adaptive Regression. IEEE Trans. Image Process. 2017, 26, 4283–4296. [Google Scholar] [CrossRef] [PubMed]
  28. Shawe-Taylor, J.; Cristianini, N.; Kandola, J.S. On the concentration of spectral properties. In Advances in Neural Information Processing Systems; The MIT Press: Cambridge, MA, USA, 2002; pp. 511–517. [Google Scholar]
  29. Yin, M.; Gao, J.; Lin, Z. Laplacian regularized low-rank representation and its applications. IEEE Trans. Pattern Anal. Mach. Intell. 2016, 38, 504–517. [Google Scholar] [CrossRef] [PubMed]
  30. Li, J.; Cheng, K.; Wang, S.; Morstatter, F.; Robert, T.; Tang, J.; Liu, H. Feature Selection: A Data Perspective. arXiv 2017, arXiv:1601.07996. [Google Scholar]
  31. Lovász, L.; Plummer, M.D. Matching Theory; American Mathematical Soc.: Providence, RI, USA, 2009; Volume 367. [Google Scholar]
Figure 1. Influence of parameters on clustering accuracy.
Figure 2. Clustering results on different datasets.
Figure 3. Influence of noise on clustering accuracy.
Table 1. Statistics of three gene expression datasets.

Data Sets    Instances    Features    Classes
Colon        62           2000        2
GLI_85       85           22,283      2
Leukemia     72           7070        2
Table 2. Clustering results on different datasets. NMF: non-negative matrix factorization; GNMF: graph regularized non-negative matrix factorization; RM-GNMF: robust manifold non-negative matrix factorization; NMI: normalized mutual information.

Methods       Colon             GLI_85            Leukemia
              ACC      NMI      ACC      NMI      ACC      NMI
NMF           0.6290   0.0110   0.6088   0.1906   0.6389   0.0193
L2,1-NMF      0.5323   0.0048   0.6088   0.1916   0.6328   0.0258
LNMF          0.6129   0.0181   0.5294   0.0011   0.6250   0.0306
GNMF          0.6290   0.0110   0.6000   0.1584   0.6389   0.0193
RM-GNMF       0.6613   0.0220   0.7529   0.1925   0.6528   0.0369
Table 3. Friedman test (significance level of 0.05).

Statistic    p-Value    Result
7.00000      0.01003    H0 is rejected
Table 4. Ranking.

Rank       Algorithm
1.33333    LNMF
2.33333    NMF
2.66667    L2,1-NMF
3.66667    GNMF
5.00000    RM-GNMF
