Article

Fusion and Enhancement of Consensus Matrix for Multi-View Subspace Clustering

School of Mathematics and Statistics, Guangdong University of Technology, Guangzhou 510520, China
* Author to whom correspondence should be addressed.
Mathematics 2023, 11(6), 1509; https://doi.org/10.3390/math11061509
Submission received: 7 February 2023 / Revised: 10 March 2023 / Accepted: 16 March 2023 / Published: 20 March 2023

Abstract

Multi-view subspace clustering is an effective method that has been successfully applied in many applications and has attracted considerable attention from scholars. Existing multi-view subspace clustering methods learn representations from the different views and then fuse them into a consensus matrix. Until now, most existing efforts have considered only the individual views and ignored the feature concatenation of the data, so they may fail to explore the high correlation among the views. Consequently, this paper proposes a multi-view subspace clustering algorithm with a novel consensus matrix construction strategy. It learns a consensus matrix by fusing the information from the individual views, and the matrix is enhanced by the information contained in the direct concatenation of the original features. The error matrix of the feature-concatenated data is reconstructed by regularization constraints and the sparse structure of the multi-view subspace. The feature-concatenated data are simultaneously used to fuse the individual views and learn the consensus matrix. Finally, the data are clustered by spectral clustering according to the consensus matrix. We compare the proposed algorithm with its counterparts on seven datasets. Experimental results verify the effectiveness of the proposed algorithm.

1. Introduction

As a classical machine learning method, clustering has been an enduring area of research. A major reason is that it can be used as a data processing method independently and as a preliminary step in fields such as computer vision [1,2] and data mining [3,4]. Therefore, many researchers have focused on improving clustering algorithms, and further improvements of classical algorithms such as K-means and spectral clustering [5,6] have been made. Traditional clustering methods only use a single set of features. However, with the continuous development of computer technology, data can be acquired through different channels and processed by different methods. For example, data can be obtained by different feature extraction methods such as the Gabor filter [7] and local binary patterns (LBP) [8] to represent the same subject in computer vision. Multi-view clustering classifies similar subjects into the same group by combining the available multi-view feature information [9]. Because the information from different views is often compatible and complementary, it is possible to obtain better clustering performance using multi-view rather than single-view features.
Multi-view clustering has been widely used in data mining. For example, a multi-view recurrent neural network (MV-RNN) [10] was proposed for sequential recommendation and for alleviating the item cold-start problem. A knowledge transfer strategy [11,12] based on multiple views was introduced to handle incomplete multi-modal visual data. A multi-view feature selection method [13] was presented to leverage the knowledge of all views and to guide the feature selection process in an individual view. Inspired by these works, multi-view clustering has received much attention from scholars [14]. The development of multi-view clustering algorithms has gone through several stages: the earliest approaches applied single-view clustering algorithms, such as K-means [15,16] and spectral clustering [17], to obtain the category assignments. Later, multi-view clustering algorithms based on collaborative training [18,19] and kernel methods [20,21] gradually emerged. In recent years, multi-view learning has been extensively studied, and many multi-view clustering algorithms have been proposed [9,15]. These methods can be broadly divided into the following two categories: model-based and similarity-based approaches.
Model-based approaches cooperatively train a generative model to represent the data, where each model represents one cluster [22]. For example, Kumar et al. [18] proposed a multi-view graph clustering algorithm based on collaborative training to obtain common clustering results across views. Wang et al. [23] processed missing multi-view data by view encoding and generative adversarial networks (GANs) before implementing clustering through clustering networks. In [24], a shared subspace was learned from different views to explicitly account for the individual and shared patterns hidden in different views. A multi-view graph clustering [25] algorithm was proposed to explore the topological manifold structure from multiple adaptive graphs, so that the topological relevance across multiple views can be explicitly detected. Wen et al. [26] proposed a unified embedding alignment framework to reconstruct the missing views and simultaneously learn the common representation of multiple views for incomplete multi-view clustering. An autoencoder in autoencoder networks (AE2-Nets) [27] was introduced to automatically encode intrinsic information from heterogeneous views into a comprehensive representation; it can adaptively balance the complementarity and consistency among different views. In model-based approaches, the choice of model and the sharing mechanism of the model parameters significantly impact the algorithm's performance, especially when the distributions of the different views are very different.
Up to now, researchers have proposed many similarity-based approaches to multi-view subspace clustering. We further divide them into three classes: matrix/tensor decomposition [28,29], graph learning [30], and subspace learning [31,32]. Subspace and matrix decomposition [33] remain popular methods in multi-view learning. For example, Wang et al. [34] proposed a multi-view tensor method based on third-order tensor decomposition, which learns an appropriate matrix representation by constructing the tensor and operating on its different slices. Graph learning methods first find a fusion graph across the input graphs of all views and then employ an additional clustering algorithm on this fusion graph to produce the final clusters. For example, Chen et al. [7] proposed low-rank tensor graph (LRTG) learning, which simultaneously learns the representation and affinity matrix in a single step to preserve their correlation. Wang et al. [35] proposed graph-based multi-view clustering, in which the learning of each view's graph matrix and the unified graph matrix reinforce each other. An adaptive sample-level graph combination for partial multi-view clustering (ASGC-PMVC) [36] was presented to cluster partial multi-view samples by automatically adjusting the contributions of different views.
Multi-view subspace clustering, which leverages the complementary information from multiple views to benefit clustering, has been successfully used in numerous applications. Among these methods, the most representative is the self-representation-based multi-view clustering algorithm. Zhang et al. [37] proposed a novel latent multi-view subspace clustering model (LMSC), which explores complementary information from multiple views and simultaneously seeks the underlying latent representation. Jia et al. [38] designed a novel structured tensor low-rank norm tailored to multi-view spectral clustering, which imposes a symmetric low-rank constraint and a structured sparse low-rank constraint. Brbic et al. [39] introduced regularization constraints that enforce both sparse and low-rank structures on the learned subspace. Zheng et al. [40] proposed feature concatenation multi-view subspace clustering to improve the clustering results by mining the consensus information of multiple views. A multi-view low-rank sparse subspace clustering (MLRSSC) [39] algorithm was proposed to learn a joint subspace representation by constructing an affinity matrix shared among all views.
The above-mentioned multi-view subspace clustering algorithms fuse the individual views to obtain the consensus matrix. A consensus matrix is used to integrate the complementary and exclusive information between multiple views. However, these algorithms generally ignore the information contained in the joint-view representation of the whole data, which may be equally complementary to and different from the original views. In addition, we must consider the relationship between the consensus matrix and the joint view. This paper proposes a fusion and enhancement strategy for the consensus matrix of multi-view subspace clustering (FEMV) to address these problems. The algorithm not only fuses the information of the joint-view representation of the multi-view data to enhance information utilization but also extends the learning of the consensus matrix to the fused view subspace to ensure the differences and consensus between the original views. Moreover, we design an optimization algorithm based on the augmented Lagrange multiplier (ALM) method to optimize the proposed model. We compare the proposed algorithm with its counterparts on seven datasets. Comprehensive experiments show the superiority of the proposed algorithm. In summary, the main contributions of this paper can be summarized as follows:
  • We propose a self-representation-based multi-view subspace clustering method. The consensus matrix is constructed on both the individual views and the joint view to leverage the consensus and complementary information.
  • We introduce the 2,1-norm to explore the structural sparsity of the multi-view subspace and thus benefit the clustering performance.
  • Comprehensive experimental results comparing the proposed algorithm with its counterparts demonstrate the advantage of the proposed method.
The rest of this paper is organized as follows. Section 2 presents some relevant premises. Section 3 presents the main improvements achieved by the proposed algorithm and the optimization of the formulation. Section 4 compares the proposed algorithm with several existing algorithms through experiments and discusses the results. Finally, a summary is presented in Section 5.

2. Related Work

2.1. Notation

We first introduce the notation involved in this paper. The original data matrices are denoted by $X_1, X_2, \ldots, X_v$, where $X_i \in \mathbb{R}^{d_i \times n}$ is the data matrix of the $i$th view, $n$ is the number of subjects, and $d_i$ is the number of features in the $i$th view. $X = [X_1, X_2, \ldots, X_v] \in \mathbb{R}^{d \times n}$ is the joint data matrix obtained by feature concatenation, with $d = \sum_{i=1}^{v} d_i$. $V \in \mathbb{R}^{n \times n}$ is the self-representation matrix of $X$ in subspace clustering, or the consensus matrix in multi-view clustering. $V_i \in \mathbb{R}^{n \times n}$ is the self-representation matrix of $X_i$. $\|C\|_{2,1} = \sum_{j=1}^{n} \sqrt{\sum_{i=1}^{n} C_{i,j}^2}$ is the 2,1-norm of matrix $C$, and $\|C\|_F = \sqrt{\sum_{i=1}^{n} \sum_{j=1}^{n} C_{i,j}^2}$ is the Frobenius norm of matrix $C$.
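For concreteness, the two matrix norms can be computed as follows. This NumPy sketch is ours and only illustrates the notation (the paper's experiments were run in Matlab); the function names are not from the original implementation.

```python
import numpy as np

def norm_21(C):
    # ||C||_{2,1}: sum of the Euclidean norms of the columns of C
    return np.sum(np.linalg.norm(C, axis=0))

def norm_fro(C):
    # ||C||_F: square root of the sum of all squared entries
    return np.linalg.norm(C, 'fro')

C = np.array([[3.0, 0.0],
              [4.0, 1.0]])
print(norm_21(C))   # columns (3, 4) and (0, 1): 5 + 1 = 6.0
print(norm_fro(C))  # sqrt(9 + 16 + 0 + 1) = sqrt(26), approx. 5.0990
```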

2.2. Subspace Clustering

Subspace learning is an effective means of processing high-dimensional data and is achieved by learning an optimal subspace representation instead of the original data. In this way, we can consider subspace learning as a particular type of matrix decomposition; therefore, it is widely used in various machine learning tasks. The essential purpose of subspace learning is to find a suitable subspace for a given data set $X \in \mathbb{R}^{m \times n}$ (whose columns are the $n$ samples) such that the self-representation matrix $V \in \mathbb{R}^{n \times n}$ satisfies the following equation:

$$\min_{V} \|X - XV\|_{\delta}, \quad \text{s.t.} \ \operatorname{diag}(V) = 0, \qquad (1)$$

where $\|\cdot\|_{\delta}$ denotes a generic matrix norm and $\operatorname{diag}(V) = 0$ constrains the diagonal entries of $V$ to be zero.
After further research and development, a regularization term was added to the above formula, which reduces overfitting and imposes structure (sparsity or low rank) on the learned subspace. The formula then becomes:

$$\min_{V} \|X - XV\|_{\delta} + \alpha \|V\|_{\rho}, \quad \text{s.t.} \ \operatorname{diag}(V) = 0. \qquad (2)$$

Sparse subspace clustering (SSC) imposes a sparse constraint ($\rho = 1$, the $\ell_1$-norm) on the data representation matrix. It represents each data point as a sparse linear combination of other points and captures the local structure of the data. Low-rank representation (LRR) imposes a low-rank constraint [41] with the nuclear norm ($\rho = *$) on the data representation matrix and captures the global structure of the data.
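To make the self-representation idea concrete, the following sketch solves a simplified variant of problem (2) in which the structural penalty is replaced by a Frobenius-norm (ridge) penalty, so that a closed-form solution exists; SSC and LRR themselves require iterative solvers. The code is illustrative only and is not the implementation used in this paper.

```python
import numpy as np

def self_representation_ridge(X, alpha=1.0):
    """Solve min_V ||X - X V||_F^2 + alpha * ||V||_F^2 (samples are the
    columns of X), then zero the diagonal of V."""
    n = X.shape[1]
    G = X.T @ X                                    # n x n Gram matrix
    V = np.linalg.solve(G + alpha * np.eye(n), G)  # (G + alpha*I)^{-1} G
    np.fill_diagonal(V, 0.0)                       # enforce diag(V) = 0
    return V

rng = np.random.default_rng(0)
X = rng.standard_normal((10, 30))                  # 30 samples with 10 features
V = self_representation_ridge(X, alpha=0.5)
print(V.shape)                                     # (30, 30)
```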

2.3. Multiview Subspace Clustering

For a given dataset $D = \{X_1, X_2, \ldots, X_v\}$ with $v$ views, $X_i \in \mathbb{R}^{d_i \times n}$, the multi-view subspace clustering problem can be written as [39]

$$\min_{\{V_i\}_{i=1}^{v}} \sum_{i=1}^{v} \|X_i - X_i V_i\|_{\delta} + \alpha \|V_i\|_{\rho}, \quad \text{s.t.} \ \operatorname{diag}(V_i) = 0, \ i = 1, 2, \ldots, v, \qquad (3)$$

where $V_i$ is the self-representation matrix of the $i$th view, $i = 1, 2, \ldots, v$.
Researchers have developed many strategies to obtain the consensus matrix $V$ from the self-representation matrices $V_i$ of the individual views. For example, the multi-view low-rank sparse subspace clustering proposed by Brbic et al. [39] obtains subspaces with particular structures by unifying consensus matrix learning and view-specific subspace learning in a single constrained framework. Zhao et al. [41] proposed a method for learning the final consensus matrix that assigns weights to the individual view clusters to obtain a better clustering result. Overall, many improvements have been made according to the following unified framework:

$$\min_{\{V_i\}_{i=1}^{v}, V} \sum_{i=1}^{v} \|X_i - X_i V_i\|_{\delta} + \alpha \|V_i\|_{\rho} + \beta \|V_i - V\|_F^2, \quad \text{s.t.} \ \operatorname{diag}(V_i) = 0, \ i = 1, 2, \ldots, v. \qquad (4)$$

After obtaining the consensus matrix $V$, we can compute the affinity matrix and find the cluster membership of the data points from the affinity matrix by using spectral clustering [17].

3. The Proposed Multi-View Subspace Clustering

3.1. The Proposed Multi-View Subspace Clustering Model

Most existing multi-view clustering algorithms learn the consensus matrix $V$ by weighting the self-representation matrices from multiple views. Because they ignore the feature concatenation, they may fail to explore the correlation among the views. This section proposes a multi-view subspace clustering method that considers both the joint view and the individual views. The joint view is the concatenation of all views. The consensus matrix of the proposed algorithm fuses the information from all views and is enhanced by the knowledge of the joint view to benefit clustering.
Equation (4) learns the view representation matrices and obtains the consensus matrix. In this way, each view's complementary information is not adequately utilized, and the consensus matrix is just a simple integration of each view. Consequently, adding an error term can make the consensus matrix more reasonable and closer to each view. Equation (4) is replaced by

$$\min_{\{V_i\}_{i=1}^{v}, V} \sum_{i=1}^{v} \frac{1}{2} \|X_i - X_i V_i\|_F^2 + \alpha \|V_i\|_{2,1} + \frac{1}{2} \beta \|V - V_i\|_F^2, \quad \text{s.t.} \ \operatorname{diag}(V_i) = 0, \ i = 1, 2, \ldots, v, \qquad (5)$$

where $\|\cdot\|_{2,1}$ serves as a sparsity constraint so that each view's representation matrix is learned to be as sparse as possible.
Equation (5) still ignores the feature concatenation and may therefore fail to explore the correlation among the views. Thus, we enhance the consensus matrix with the information of the joint view $X$. The consensus matrix should satisfy the following conditions: (1) the matrix is learned from the feature-concatenated data, and (2) all the views should recognize the representation matrix; in other words, the matrix should contain the complementary information of the other views. Different from Equation (5), the algorithm proposed in this paper obtains the consensus matrix from the joint data. All the views are fused with this matrix through the error terms, so the consensus matrix carries more comprehensive and reliable information about the samples. On this basis, by adding the joint view, Equation (5) is replaced by Equation (6),

$$\min_{\{V_i\}_{i=1}^{v}, V} \sum_{i=1}^{v} \left( \frac{1}{2} \|X_i - X_i V_i\|_F^2 + \beta \|V_i\|_{2,1} + \frac{1}{2} \gamma \|V - V_i\|_F^2 \right) + \alpha \|X - XV\|_{2,1}, \quad \text{s.t.} \ \operatorname{diag}(V) = 0, \ \operatorname{diag}(V_i) = 0, \ i = 1, 2, \ldots, v, \qquad (6)$$

where $\alpha, \beta, \gamma$ are the penalty parameters. In this way, we can fully fuse the information from each view of the data, yielding better clustering results. The flowchart of the proposed algorithm is given in Figure 1.

3.2. Optimization

The optimization objective (6) can be converted to Equation (7) by introducing the auxiliary variables $C_i$ and $E$:

$$\min_{\{V_i\}_{i=1}^{v}, \{C_i\}_{i=1}^{v}, E, V} \sum_{i=1}^{v} \left( \frac{1}{2} \|X_i - X_i V_i\|_F^2 + \beta \|C_i\|_{2,1} + \frac{1}{2} \gamma \|V - V_i\|_F^2 \right) + \alpha \|E\|_{2,1}, \quad \text{s.t.} \ C_i = V_i, \ \operatorname{diag}(V_i) = 0, \ i = 1, 2, \ldots, v, \ E = X - XV, \ \operatorname{diag}(V) = 0. \qquad (7)$$

The augmented Lagrangian function of problem (7) is formulated as follows:

$$\mathcal{L}\left(\{C_i\}_{i=1}^{v}, E, \{V_i\}_{i=1}^{v}, V, \{Y_i\}_{i=1}^{v}, Y, \mu\right) = \sum_{i=1}^{v} \left( \frac{1}{2} \|X_i - X_i V_i\|_F^2 + \beta \|C_i\|_{2,1} + \frac{1}{2} \gamma \|V - V_i\|_F^2 + \langle Y_i, C_i - V_i \rangle + \frac{\mu}{2} \|C_i - V_i\|_F^2 \right) + \alpha \|E\|_{2,1} + \langle Y, E - X + XV \rangle + \frac{\mu}{2} \|E - X + XV\|_F^2, \qquad (8)$$

where $\langle A, B \rangle$ is the trace of $A^T B$, $\mu$ is a penalty parameter that needs to be tuned, and $\{Y_i\}_{i=1}^{v}$ and $Y$ are the Lagrange dual variables. Optimization is executed using the augmented Lagrange method (ALM), in which all variables but one are fixed while that variable is updated. We give a detailed derivation of the optimization process in the following.
  1. Updating $C_i$ given the values of the other variables $E, V_i, V, Y_i, Y, \mu$. Equation (8) is equivalent to the following optimization problem:

$$\min_{\{C_i\}_{i=1}^{v}} \sum_{i=1}^{v} \beta \|C_i\|_{2,1} + \langle Y_i, C_i - V_i \rangle + \frac{\mu}{2} \|C_i - V_i\|_F^2. \qquad (9)$$

Because the $C_i$, $i = 1, \ldots, v$, do not interact with each other, each $C_i$ can be solved for separately. For a specific $C_i$, we need to solve

$$\min_{C_i} \beta \|C_i\|_{2,1} + \langle Y_i, C_i - V_i \rangle + \frac{\mu}{2} \|C_i - V_i\|_F^2, \quad i = 1, \ldots, v. \qquad (10)$$

Equation (10) is equivalent to

$$\min_{C_i} \beta \|C_i\|_{2,1} + \frac{\mu}{2} \left\| C_i - V_i + \frac{Y_i}{\mu} \right\|_F^2. \qquad (11)$$

The optimal solution $C_i^*$ of Equation (11) can be given as

$$[C_i^*]_{:,j} = \begin{cases} \dfrac{\|Q_{:,j}\|_2 - \frac{\beta}{\mu}}{\|Q_{:,j}\|_2} Q_{:,j}, & \text{if} \ \|Q_{:,j}\|_2 > \frac{\beta}{\mu}, \\ 0, & \text{otherwise}, \end{cases} \qquad (12)$$

where $Q = V_i - \frac{Y_i}{\mu}$, $[\cdot]_{:,j}$ denotes the $j$th column of a matrix, and $\|\cdot\|_2$ is the 2-norm of a vector.
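The column-wise shrinkage in Equations (12) and (14) is the proximal operator of the 2,1-norm. The following NumPy sketch is ours and is only illustrative; the sign convention follows the reconstructed Equation (11), and the variable names below are placeholders rather than the paper's implementation.

```python
import numpy as np

def prox_l21(Q, tau):
    """Column-wise shrinkage: the minimizer of
    tau * ||C||_{2,1} + 0.5 * ||C - Q||_F^2, as in Equations (12) and (14)."""
    norms = np.linalg.norm(Q, axis=0)        # ||Q_{:,j}||_2 for every column j
    keep = norms > tau                       # columns that survive the threshold
    C = np.zeros_like(Q)
    C[:, keep] = Q[:, keep] * ((norms[keep] - tau) / norms[keep])
    return C

# small demo: the first column (norm 5) is shrunk, the second (norm ~0.14) is zeroed
Q = np.array([[3.0, 0.1],
              [4.0, 0.1]])
print(prox_l21(Q, 1.0))

# Update of C_i (Equation (12)); V_i, Y_i, mu and beta are placeholders here:
# C_i = prox_l21(V_i - Y_i / mu, beta / mu)
# Update of E (Equation (14)):
# E = prox_l21(X - X @ V - Y / mu, alpha / mu)
```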
  2. Updating $E$ given the values of $\{C_i\}_{i=1}^{v}, \{V_i\}_{i=1}^{v}, V, \{Y_i\}_{i=1}^{v}, Y, \mu$. The optimization problem with respect to $E$ can be written as

$$\min_{E} \alpha \|E\|_{2,1} + \langle Y, E - X + XV \rangle + \frac{\mu}{2} \|E - X + XV\|_F^2. \qquad (13)$$

Problem (13) has the same form as problem (10). Thus, its optimal solution has an expression similar to (12) and can be given as

$$[E^*]_{:,j} = \begin{cases} \dfrac{\|R_{:,j}\|_2 - \frac{\alpha}{\mu}}{\|R_{:,j}\|_2} R_{:,j}, & \text{if} \ \|R_{:,j}\|_2 > \frac{\alpha}{\mu}, \\ 0, & \text{otherwise}, \end{cases} \qquad (14)$$

where $R = X - XV - \frac{Y}{\mu}$.
  3. Updating $\{V_i\}_{i=1}^{v}$ given the values of $\{C_i\}_{i=1}^{v}, E, V, \{Y_i\}_{i=1}^{v}, Y, \mu$. Equation (8) is equivalent to the following quadratic programming problem:

$$\min_{\{V_i\}_{i=1}^{v}} \sum_{i=1}^{v} \frac{1}{2} \|X_i - X_i V_i\|_F^2 + \frac{1}{2} \gamma \|V - V_i\|_F^2 + \langle Y_i, C_i - V_i \rangle + \frac{\mu}{2} \|C_i - V_i\|_F^2. \qquad (15)$$

Since the $V_i$, $i = 1, \ldots, v$, are independent and problem (15) is a convex quadratic programming problem, we can obtain the optimal solution of problem (15) by setting the gradient with respect to $V_i$ equal to zero, that is,

$$-X_i^T (X_i - X_i V_i) - \gamma (V - V_i) - Y_i - \mu (C_i - V_i) = 0. \qquad (16)$$

Therefore,

$$V_i = \left( X_i^T X_i + \gamma I + \mu I \right)^{-1} \left( X_i^T X_i + \gamma V + Y_i + \mu C_i \right), \quad V_i = V_i - \operatorname{diag}(V_i). \qquad (17)$$
  4. Updating $V$ given the values of $\{C_i\}_{i=1}^{v}, E, \{V_i\}_{i=1}^{v}, \{Y_i\}_{i=1}^{v}, Y, \mu$. The optimization problem with respect to $V$ is also a convex quadratic programming problem according to problem (8). The formula is

$$\min_{V} \sum_{i=1}^{v} \frac{1}{2} \gamma \|V - V_i\|_F^2 + \langle Y, E - X + XV \rangle + \frac{\mu}{2} \|E - X + XV\|_F^2. \qquad (18)$$

The resulting update formula is

$$V = \left( \mu X^T X + v \gamma I \right)^{-1} \left( \gamma \sum_{i=1}^{v} V_i + \mu X^T (X - E) - X^T Y \right), \quad V = V - \operatorname{diag}(V). \qquad (19)$$
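A minimal NumPy sketch of the two closed-form updates in Equations (17) and (19) is given below. It is ours and only illustrates the reconstructed formulas; the function and variable names are not from the original implementation.

```python
import numpy as np

def update_Vi(Xi, V, Ci, Yi, gamma, mu):
    """Closed-form update of a view's representation matrix, Equation (17)."""
    n = Xi.shape[1]
    G = Xi.T @ Xi
    Vi = np.linalg.solve(G + (gamma + mu) * np.eye(n),
                         G + gamma * V + Yi + mu * Ci)
    np.fill_diagonal(Vi, 0.0)                  # V_i = V_i - diag(V_i)
    return Vi

def update_V(X, V_list, E, Y, gamma, mu):
    """Closed-form update of the consensus matrix, Equation (19)."""
    n = X.shape[1]
    v = len(V_list)
    A = mu * (X.T @ X) + gamma * v * np.eye(n)
    B = gamma * sum(V_list) + X.T @ (mu * (X - E) - Y)
    V = np.linalg.solve(A, B)
    np.fill_diagonal(V, 0.0)                   # V = V - diag(V)
    return V
```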
  5. Updating the dual variables. Given the values of $\{C_i\}_{i=1}^{v}, E, \{V_i\}_{i=1}^{v}, V$, the dual variables $Y_i$, $Y$ and the penalty parameter $\mu$ are updated as follows:

$$Y_i = Y_i + \mu (C_i - V_i), \ i = 1, \ldots, v, \quad Y = Y + \mu (E - X + XV), \quad \mu = \min(\rho \mu, \mu_{\max}), \qquad (20)$$

where $\rho > 1$ is a constant that makes the penalty parameter $\mu$ increase monotonically until it reaches the maximum value $\mu_{\max}$.
These update steps are repeated until $\|V^{k} - V^{k+1}\| < \varepsilon$ or until the maximum number of iterations is reached, where $V^{k}$ and $V^{k+1}$ are the values of $V$ at iterations $k$ and $k+1$, respectively. After obtaining the consensus matrix through the above formulas, we compute the affinity matrix $W$ for spectral clustering by Equation (21):

$$W = \frac{\operatorname{abs}(V) + \operatorname{abs}(V^T)}{2}, \qquad (21)$$

where $\operatorname{abs}(\cdot)$ is the element-wise absolute value function. Given the affinity matrix $W$, we can find the cluster membership of the data points by applying k-means clustering to the eigenvectors of the graph Laplacian matrix of $W$.
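The final clustering step can be sketched as follows. The paper only states that k-means is applied to the eigenvectors of the graph Laplacian; the symmetric normalization and the row normalization used below are common choices that we assume for illustration, and the code is not the original Matlab implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

def spectral_clustering_from_consensus(V, k):
    """Form W = (|V| + |V^T|) / 2 (Equation (21)) and cluster the rows of the
    k bottom eigenvectors of the normalized graph Laplacian with k-means."""
    W = (np.abs(V) + np.abs(V).T) / 2.0
    d = W.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    L = np.eye(W.shape[0]) - d_inv_sqrt[:, None] * W * d_inv_sqrt[None, :]
    _, eigvecs = np.linalg.eigh(L)               # eigenvalues in ascending order
    U = eigvecs[:, :k]                           # eigenvectors of the k smallest eigenvalues
    U = U / np.maximum(np.linalg.norm(U, axis=1, keepdims=True), 1e-12)
    return KMeans(n_clusters=k, n_init=10).fit_predict(U)
```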
Algorithm 1 gives the steps of the proposed algorithm. We first input the dataset $D$, the number of clusters $k$, which is given by the number of classes, and the parameters of the proposed algorithm. We randomly initialize $\{V_i\}_{i=1}^{v}$ and $V$ in $[0, 1]$, and initialize $Y_i = 0$, $Y = 0$, and $\mu = 10^{-4}$.
Algorithm 1 FEMV algorithm
Require: Multi-view data $D = \{X_1, X_2, \ldots, X_v\}$;
  Number of clusters: $k$;
  Parameters: $\rho, \alpha, \beta, \gamma$ and $\varepsilon$.
Ensure: Assignments of the data points to $k$ clusters.
  1: Initialize $\{V_i\}_{i=1}^{v}$, $V$, $\{Y_i\}_{i=1}^{v}$, $Y$ and $\mu$.
  2: while the stopping criterion is not satisfied do
  3:   Update $\{C_i\}_{i=1}^{v}$ according to Equation (12);
  4:   Update $E$ according to Equation (14);
  5:   Update $\{V_i\}_{i=1}^{v}$ according to Equation (17);
  6:   Update $V$ according to Equation (19);
  7:   Update $Y_i$, $Y$, $\mu$ according to Equation (20).
  8: end while
  9: Compute the affinity matrix $W$ by Equation (21).
 10: Apply spectral clustering to the affinity matrix $W$ to obtain the assignments of the data points.

3.3. Computational Complexity

As shown in Algorithm 1, the main computational burden comprises five parts. The complexity of computing $\{C_i\}_{i=1}^{v}$ by Equation (12) is $O(vn^2)$, and the complexity of updating $E$ is $O(n^2)$. We need to compute the inverse of an $n \times n$ matrix when updating each $V_i$ and $V$. The complexity of updating $\{V_i\}_{i=1}^{v}$ is $O(dn^2 + vn^3)$, and the complexity of computing $V$ is $O(dn^2 + n^3)$. The complexity of updating the dual variables $Y_i$ and $Y$ is $O(n^2)$. To sum up, the computational complexity of each iteration is $O(dn^2 + n^3)$.

4. Experiments

We compare the proposed algorithm with six existing multi-view clustering algorithms on seven real-world datasets. Three performance metrics are used to evaluate the performance of the compared algorithms. The experiments also investigate the sensitivity of the parameters of the proposed algorithm and its convergence. All the experiments are implemented in the following environment: Matlab R2021a running on a PC with a 64-bit Windows OS, an Intel 3.40 GHz CPU, and 8 GB RAM.

4.1. Datasets

Seven real-world datasets are used: BBCSport, BBC4views, HandWritten, MSRCv1, WebKB, 100leaves, and NGs. BBCSport (http://mlg.ucd.ie/datasets/bbc.html, accessed on 1 October 2022) contains five categories (athletics, cricket, soccer, rugby, and tennis), with 544 documents and two views. BBC4view (http://mlg.ucd.ie/datasets/bbc.html, accessed on 1 October 2022) contains four views and 688 documents from the BBC News website, with five categories: business, entertainment, politics, sports, and technology. HandWritten (https://cs.nyu.edu/roweis/data.html, accessed on 1 October 2022) contains 2000 samples with six views and ten categories. MSRCv1 (http://research.microsoft.com/en-us/projects/objectclassrecognition/, accessed on 1 October 2022) is an image dataset with 210 image samples from seven classes and four views. WebKB (http://www.cs.cmu.edu/webkb/, accessed on 1 October 2022) is a hypertext dataset collected from various university web pages, with two views and two classes. NGs (http://lig-membres.imag.fr/grimal/data.html, accessed on 1 October 2022) contains 500 newsgroup documents with three views. 100leaves (https://archive.ics.uci.edu/ml/datasets/One-hundred+plant+species+leaves+data+set, accessed on 1 October 2022) contains 1600 images of plant species leaves, with three views. Table 1 gives the number of instances, views, and classes of the seven datasets.

4.2. The Comparison Algorithms and Performance Metrics

In order to investigate the performance of the proposed algorithm, we compared it with the following six algorithms.
  • Spectral clustering (SC) [17]: The SC algorithm is applied to each view of the multi-view data. The best clustering result obtained by clustering each view separately is denoted as SC-BEST, while clustering the data obtained by feature union (concatenation) is denoted as SC-FU.
  • Adaptive structure concept factorization for multi-view clustering (MVCF) [42]: This algorithm enhances the use of the correlation between views by jointly optimizing the representation matrices and proposes a multi-view clustering algorithm based on concept factorization.
  • Multi-view low-rank sparse subspace clustering (MLRSSC) [39]: This method balances the different views by using low-rank and sparse constraints when constructing a consensus matrix of all views.
  • Graph learning for multi-view clustering (MVGL) [43]: This method learns a low-rank adjacency matrix from each view, integrates them into a global adjacency matrix, and obtains the clustering from the corresponding Laplacian matrix.
  • Graph-based system for multi-view clustering (MVGS) [35]: This method constructs an adjacency matrix from the feature matrix of each view, fuses the adjacency matrices by weighting them into a combined adjacency matrix, and finally obtains the clustering results.
In this paper, the three most commonly used performance metrics, i.e., accuracy (ACC), normalized mutual information (NMI), and F-score, are employed to evaluate the clustering performance of the compared algorithms. The formulas are given as follows:

$$\mathrm{ACC} = \frac{\sum_{i=1}^{n} \varphi\left(c_i, \operatorname{map}(\bar{c}_i)\right)}{n}, \qquad (22)$$

where $c_i$ and $\bar{c}_i$ correspond to the clustering result and the true clustering label, respectively. When $x = y$, $\varphi(x, y) = 1$; otherwise, $\varphi(x, y) = 0$. The best match between the true clustering labels and the clustering result is obtained using the Hungarian algorithm.

$$\mathrm{NMI}(\Omega; C) = \frac{2\, I(\Omega, C)}{H(\Omega) + H(C)}, \qquad (23)$$

where $\Omega = \{\bar{c}_1, \bar{c}_2, \ldots, \bar{c}_k\}$ and $C = \{c_1, c_2, \ldots, c_k\}$ denote the true category partitioning and the algorithmic cluster partitioning, respectively. $I(X, Y)$ is the mutual information, and $H(X)$ is the entropy.

$$\mathrm{F\text{-}score} = \frac{(1 + \tau^2)\, P \cdot R}{\tau^2 \cdot P + R}, \qquad (24)$$

where $P$ is the precision, $R$ is the recall, and $\tau$ is a parameter that balances their weights; it is usually set to 1, indicating equal importance. The values of all three metrics lie within $[0, 1]$, and the closer the value is to 1, the better the result.
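For illustration, ACC with Hungarian matching (Equation (22)) and NMI with the arithmetic normalization of Equation (23) can be computed as in the following sketch; it is ours and not the evaluation code used in the paper.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.metrics import normalized_mutual_info_score

def clustering_accuracy(y_true, y_pred):
    """ACC: match predicted clusters to true labels with the Hungarian
    algorithm, then count the fraction of correctly assigned points."""
    true_ids = np.unique(y_true)
    pred_ids = np.unique(y_pred)
    overlap = np.zeros((pred_ids.size, true_ids.size), dtype=int)
    for i, p in enumerate(pred_ids):
        for j, t in enumerate(true_ids):
            overlap[i, j] = np.sum((y_pred == p) & (y_true == t))
    row, col = linear_sum_assignment(-overlap)   # maximize the matched overlap
    return overlap[row, col].sum() / y_true.size

y_true = np.array([0, 0, 1, 1, 2, 2])
y_pred = np.array([1, 1, 0, 0, 2, 2])            # a relabelled but perfect clustering
print(clustering_accuracy(y_true, y_pred))               # 1.0
print(normalized_mutual_info_score(y_true, y_pred))       # 1.0
```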

4.3. Experimental Results

Table 2 lists the means and standard deviations of the ACC, NMI, and F-score values obtained by each algorithm over ten independent runs. The best-performing algorithm on each dataset is highlighted in bold. To reach a statistically sound conclusion, the Wilcoxon rank sum test at the significance level of 0.05 is used to analyze the differences between FEMV and the other algorithms. The symbol † indicates that the results obtained by the other algorithms are significantly worse than those of FEMV. The table shows that the proposed FEMV algorithm achieves promising clustering results on all seven datasets, which means that FEMV achieves significant improvements in terms of ACC, NMI, and F-score. The experimental results obtained by SC-FU are superior to those obtained by SC-BEST on all datasets except MSRCv1 and NGs in terms of all performance metrics. This indirectly demonstrates that multi-view fusion significantly improves clustering performance when multiple views have complementary information. It also shows that the joint view provides effective and usable clustering information that previous multi-view clustering algorithms ignored.
In a more specific analysis, the FEMV algorithm improves upon MVGS by 9.11%, 4.41%, and 8.39% in terms of ACC, NMI, and F-score on the BBCSport dataset. Compared with MLRSSC, it improves the ACC and F-score metrics by 2% and 0.2%, while it is similar to MLRSSC in terms of NMI. Although FEMV is inferior to the MVGL and MVGS algorithms in terms of NMI, it is superior to them in terms of ACC and F-score on HandWritten. The result obtained by FEMV is slightly inferior to that of MVGS in terms of NMI on 100leaves. In the other cases, the proposed FEMV algorithm significantly outperforms the comparison algorithms. The proposed FEMV algorithm is thus a significant improvement over the previous methods: because it incorporates the overall joint information and learns the potential subspace representation from it, it fully integrates the information of the joint view.

4.4. Parametric Sensitivity Analysis and Convergence Analysis

The objective function of the proposed FEMV algorithm, i.e., Equation (6), has three penalty parameters $\alpha$, $\beta$, and $\gamma$. This section discusses the sensitivity analysis of these parameters. Taking the BBCsport, BBC4view, MSRCv1, and NGs datasets as examples, we apply a grid search to find the best parameters with respect to ACC. We first fix $\alpha$ and $\beta$ and fine-tune the value of $\gamma$, which is chosen from $\{1, 10, 100, 1000\}$, to find the best value for the FEMV algorithm on the BBCsport, BBC4view, MSRCv1, and NGs datasets. The best values for the proposed FEMV are shown in Figure 2. Then we fix $\gamma$ at its best value and fine-tune the values of $\alpha$ and $\beta$, which are both chosen from $\{0.01, 0.1, 1, 10, 100\}$. Figure 2 shows the ACC obtained by the proposed algorithm with different $\alpha$ and $\beta$ and a given $\gamma$. This figure shows that $\alpha$ and $\beta$ have little effect on the result when $\gamma = 100$ on BBCSport. Thus, $\alpha$ and $\beta$ are set to 1 in the experiments on BBCSport. Similarly, we can obtain the best parameters for the proposed FEMV on BBC4view, MSRCv1, and NGs.
Figure 3 plots the objective function value with respect to the number of iterations for the proposed FEMV algorithm on BBCsport, BBC4view, MSRCv1, and NGs. Figure 3 shows that the objective function value of the proposed algorithm flattens out after about 10 iterations on these datasets, which means that the FEMV algorithm achieves fast convergence.

4.5. Ablation Experiments

The effectiveness of adding the feature concatenation (joint view) term to the proposed algorithm is verified by ablation experiments. The joint-view term and its constraints are removed from the proposed formulation, which reduces it to Equation (5), and the resulting model is compared with the full model under the same experimental conditions. Table 3 shows the results of the ablation experiments, where del-FEMV denotes the variant that eliminates the feature-concatenated joint data. From this table, we can see that the algorithm with the joint view is significantly superior to the algorithm without it.

5. Conclusions

This paper proposed a multi-view clustering algorithm based on information fusion and enhancement. The method adds a joint view of the data and learns an appropriate subspace from it for information fusion, thereby enhancing the latent information. Unlike traditional multi-view subspace clustering, the proposed algorithm can fuse the joint data information. Furthermore, it enhances the utilization of the information while ensuring its sparsity through regularization constraints. Finally, experiments on seven real datasets demonstrate the effectiveness of the proposed algorithm.
However, since matrix inversion is involved, the computation is expensive for large datasets. In addition, the algorithm does not consider the weighting of the views. The proposed information fusion and enhancement framework also applies to other multi-view clustering methods beyond subspace clustering, such as graph learning and tensor decomposition. Subsequent work will address these three aspects. For example, we intend to extend this framework to other multi-view clustering algorithms and investigate the impact of the view weights and the choice of the number of clusters.

Author Contributions

Build model and write code, Y.Z.; methodology, review and editing, F.G.; validation and funding acquisition, X.D. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded in part by the Natural Science Foundation of Guangdong Province (2021A1515011839, 2021A1414030004), and in part by Guangdong Provincial Key Laboratory of Cyber-Physical System under Grant (2020B1212060069).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors would like to thank Liping Zheng for helping debug the code.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
ACC         Accuracy
AE2-Nets    Autoencoder in autoencoder networks
ALM         Augmented Lagrange method
ASGC-PMVC   Adaptive sample-level graph combination for partial multi-view clustering
GAN         Generative adversarial network
SC          Spectral clustering
SSC         Sparse subspace clustering
LBP         Local binary pattern
LRTG        Low-rank tensor graph
LRR         Low-rank representation
MSC         Multi-view subspace clustering
MLRSSC      Multi-view low-rank sparse subspace clustering
MVCF        Concept factorization for multi-view clustering
MVGL        Graph learning for multi-view clustering
MVGS        Graph-based system for multi-view clustering
MV-RNN      Multi-view recurrent neural network
NMI         Normalized mutual information

References

  1. Houthuys, L.; Langone, R.; Suykens, J.A.K. Multi-View Kernel Spectral Clustering. Inf. Fusion 2018, 44, 46–56.
  2. Zhang, L.; Shi, Z.; Cheng, M.; Liu, Y.; Bian, J.; Zhou, J.T.; Zheng, G.; Zeng, Z. Nonlinear Regression via Deep Negative Correlation Learning. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 43, 982–998.
  3. Huang, L.; Chao, H.; Wang, C. Multi-view Intact Space Clustering. Pattern Recognit. 2019, 86, 344–353.
  4. Huang, L.; Wang, C.; Chao, H.; Yu, P.S. MVStream: Multiview Data Stream Clustering. IEEE Trans. Neural Netw. Learn. Syst. 2020, 31, 3482–3496.
  5. Jia, Y.; Hou, J.; Kwong, S. Constrained Clustering with Dissimilarity Propagation-Guided Graph-Laplacian PCA. IEEE Trans. Neural Netw. Learn. Syst. 2021, 32, 3985–3997.
  6. Jia, Y.; Kwong, S.; Hou, J. Semi-Supervised Spectral Clustering with Structured Sparsity Regularization. IEEE Signal Process. Lett. 2018, 25, 403–407.
  7. Chen, Y.; Xiao, X.; Peng, C.; Lu, G.; Zhou, Y. Low-Rank Tensor Graph Learning for Multi-View Subspace Clustering. IEEE Trans. Circuits Syst. Video Technol. 2022, 32, 92–104.
  8. Wang, H.; Han, G.; Zhang, B.; Hu, Y.; Peng, H.; Han, C.; Cai, H. Tensor-based Low-rank and Graph Regularized Representation Learning for Multi-view Clustering. In Proceedings of the 2020 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), Seoul, Republic of Korea, 16–19 December 2020; pp. 821–826.
  9. Chao, G.; Sun, S.; Bi, J. A Survey on Multiview Clustering. IEEE Trans. Artif. Intell. 2021, 2, 146–168.
  10. Cui, Q.; Wu, S.; Liu, Q.; Zhong, W.; Wang, L. MV-RNN: A Multi-View Recurrent Neural Network for Sequential Recommendation. IEEE Trans. Knowl. Data Eng. 2020, 32, 317–331.
  11. Zhao, H.; Liu, H.; Fu, Y. Incomplete Multi-Modal Visual Data Grouping. In Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence, New York, NY, USA, 9–15 July 2016; pp. 2392–2398.
  12. Gu, F.; Liu, H.L.; Cheung, Y.M.; Zheng, M. A Rough-to-Fine Evolutionary Multiobjective Optimization Algorithm. IEEE Trans. Cybern. 2022, 52, 13472–13485.
  13. Komeili, M.; Armanfard, N.; Hatzinakos, D. Multiview Feature Selection for Single-View Classification. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 43, 3573–3586.
  14. Yu, J.; Duan, Q.; Huang, H.; He, S. Effective Incomplete Multi-View Clustering via Low-Rank Graph Tensor Completion. Mathematics 2023, 11, 652.
  15. Fu, L.; Lin, P.; Vasilakos, A.V.; Wang, S. An Overview of Recent Multi-view Clustering. Neurocomputing 2020, 402, 148–161.
  16. Cheung, Y.M.; Zeng, H. Local Kernel Regression Score for Selecting Features of High-Dimensional Data. IEEE Trans. Knowl. Data Eng. 2009, 21, 1798–1802.
  17. Xia, T.; Tao, D.; Mei, T.; Zhang, Y. Multiview Spectral Embedding. IEEE Trans. Syst. Man Cybern. Part B (Cybern.) 2010, 40, 1438–1446.
  18. Kumar, A.; Rai, P.; Daume, H. Co-regularized Multi-view Spectral Clustering. In Proceedings of the 24th International Conference on Neural Information Processing Systems, Granada, Spain, 12–15 December 2011; pp. 1413–1421.
  19. Balcan, M.; Blum, A.; Yang, K. Co-Training and Expansion: Towards Bridging Theory and Practice. In Proceedings of the 17th International Conference on Neural Information Processing Systems, Vancouver, BC, Canada, 13–18 December 2004; pp. 89–96.
  20. Tzortzis, G.; Likas, A. Kernel-Based Weighted Multi-view Clustering. In Proceedings of the 12th International Conference on Data Mining, Brussels, Belgium, 10–13 December 2012; pp. 675–684.
  21. Liu, X.; Wang, L.; Zhu, X.; Li, M.; Zhu, E.; Liu, T.; Liu, L.; Dou, Y.; Yin, J. Absent Multiple Kernel Learning Algorithms. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 42, 1303–1316.
  22. Bai, R.; Huang, R.; Chen, Y.; Qin, Y. Deep Multi-view Document Clustering with Enhanced Semantic Embedding. Inf. Sci. 2021, 564, 273–287.
  23. Wang, Q.; Ding, Z.; Tao, Z.; Gao, Q.; Fu, Y. Partial Multi-view Clustering via Consistent GAN. In Proceedings of the IEEE International Conference on Data Mining, Los Angeles, CA, USA, 5–9 February 2018; pp. 1290–1295.
  24. Tan, Q.; Yu, G.; Wang, J.; Domeniconi, C.; Zhang, X. Individuality- and Commonality-Based Multiview Multilabel Learning. IEEE Trans. Cybern. 2021, 51, 1716–1727.
  25. Zhao, P.; Wu, H.; Huang, S. Multi-View Graph Clustering by Adaptive Manifold Learning. Mathematics 2022, 10, 1821.
  26. Wen, J.; Zhang, Z.; Xu, Y.; Zhang, B.; Fei, L.; Liu, H. Unified Embedding Alignment with Missing Views Inferring for Incomplete Multi-view Clustering. In Proceedings of the 33rd AAAI Conference on Artificial Intelligence (AAAI 2019), Honolulu, HI, USA, 29–31 January 2019; pp. 5393–5400.
  27. Zhang, C.; Liu, Y.; Fu, H. AE2-Nets: Autoencoder in Autoencoder Networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 2577–2585.
  28. Li, J.; Zhou, G.; Qiu, Y.; Wang, Y.; Zhang, Y.; Xie, S. Deep Graph Regularized Non-negative Matrix Factorization for Multi-view Clustering. Neurocomputing 2020, 390, 108–116.
  29. Ashraphijuo, M.; Wang, X.; Aggarwal, V. Fundamental Sampling Patterns for Low-rank Multi-view Data Completion. Pattern Recognit. 2020, 103, 107307.
  30. Guo, J.; Ye, J. Anchors Bring Ease: An Embarrassingly Simple Approach to Partial Multi-View Clustering. In Proceedings of the AAAI Conference on Artificial Intelligence, Honolulu, HI, USA, 29–31 January 2019; pp. 118–125.
  31. Rong, W.; Zhuo, E.; Peng, H.; Chen, J.; Wang, H.; Han, C.; Cai, H. Learning a Consensus Affinity Matrix for Multi-view Clustering via Subspaces Merging on Grassmann Manifold. Inf. Sci. 2021, 547, 68–87.
  32. Chen, M.; Huang, L.; Wang, C.; Huang, D.; Lai, J. Relaxed Multi-view Clustering in Latent Embedding Space. Inf. Fusion 2021, 68, 8–21.
  33. Liu, X.; Hu, Z.; Ling, H.; Cheung, Y.M. MTFH: A Matrix Tri-Factorization Hashing Framework for Efficient Cross-Modal Retrieval. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 43, 964–981.
  34. Wang, S.; Chen, Y.; Jin, Y.; Cen, Y.; Li, Y.; Zhang, L. Error-robust Low-rank Tensor Approximation for Multi-view Clustering. Knowl. Based Syst. 2021, 215, 106745.
  35. Wang, H.; Yang, Y.; Liu, B.; Fujita, H. A Study of Graph-based System for Multi-view Clustering. Knowl. Based Syst. 2019, 163, 1009–1019.
  36. Yang, L.; Shen, C.; Hu, Q.; Jing, L.; Li, Y. Adaptive Sample-Level Graph Combination for Partial Multiview Clustering. IEEE Trans. Image Process. 2020, 29, 2780–2794.
  37. Zhang, C.; Fu, H.; Hu, Q.; Cao, X.; Xie, Y.; Tao, D.; Xu, D. Generalized Latent Multi-View Subspace Clustering. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 42, 86–99.
  38. Jia, Y.; Liu, H.; Hou, J.; Kwong, S.; Zhang, Q. Multi-View Spectral Clustering Tailored Tensor Low-Rank Representation. IEEE Trans. Circuits Syst. Video Technol. 2021, 31, 4784–4797.
  39. Brbic, M.; Kopriva, I. Multi-view Low-rank Sparse Subspace Clustering. Pattern Recognit. 2018, 73, 247–258.
  40. Zheng, Q.; Zhu, J.; Li, Z.; Pang, S.; Wang, J.; Li, Y. Feature Concatenation Multi-view Subspace Clustering. Neurocomputing 2020, 379, 89–102.
  41. Zhao, Q.; Zong, L.; Zhang, X.; Liu, X.; Yu, H. Multi-view Clustering Via Clusterwise Weights Learning. Knowl. Based Syst. 2020, 193, 105459.
  42. Zhan, K.; Shi, J.; Wang, J.; Wang, H.; Xie, Y. Adaptive Structure Concept Factorization for Multiview Clustering. Neural Comput. 2018, 30, 1080–1103.
  43. Zhan, K.; Zhang, C.; Guan, J.; Wang, J. Graph Learning for Multiview Clustering. IEEE Trans. Cybern. 2018, 48, 2887–2895.
Figure 1. Flowchart of the proposed algorithm. The left half shows the learning process of the consensus matrix, which is learned from the joint data matrix X and combined with the representation matrices of the individual views.
Figure 2. Parameter study of $\alpha$ and $\beta$ for the proposed FEMV on BBCsport, BBC4view, MSRCv1 and NGs with the best values of $\gamma$. (a) BBCsport with $\gamma = 100$; (b) BBC4view with $\gamma = 10$; (c) MSRCv1 with $\gamma = 10$; (d) NGs with $\gamma = 1$.
Figure 3. Objective function value with respect to the number of iterations for the proposed FEMV algorithm on BBCsport, BBC4view, MSRCv1, and NGs. (a) BBCsport; (b) BBC4view; (c) MSRCv1; (d) NGs.
Table 1. Number of views and categories for the seven data sets.

Dataset       Instance  Views  Classes
BBCsport      544       2      5
BBC4views     685       4      5
HandWritten   2000      6      10
MSRCv1        210       4      7
WEBkb         1051      2      2
NGs           500       3      5
100leaves     1600      3      100
Table 2. Comparison results of different algorithms on seven datasets.

Dataset       Algorithm  ACC               NMI               F-score
BBCsport      SC-BEST    0.6610 ± 0.0006   0.4396 ± 0.0050   0.5585 ± 0.0042
              SC-FU      0.7470 ± 0.0026   0.5468 ± 0.0118   0.6435 ± 0.0083
              MVGL       0.4099 ± 0.0000   0.1586 ± 0.0000   0.3985 ± 0.0000
              MLRSSC     0.8631 ± 0.0488   0.7961 ± 0.0074   0.8542 ± 0.0279
              MVCF       0.6607 ± 0.0355   0.4459 ± 0.0431   0.5522 ± 0.0283
              MVGS       0.8070 ± 0.0000   0.7600 ± 0.0000   0.7943 ± 0.0000
              FEMV       0.8805 ± 0.0000   0.7935 ± 0.0000   0.8600 ± 0.0000
BBC4view      SC-BEST    0.5213 ± 0.0017   0.3487 ± 0.0018   0.3902 ± 0.0009
              SC-FU      0.6233 ± 0.0000   0.5462 ± 0.0000   0.5659 ± 0.0000
              MVGL       0.3518 ± 0.0000   0.0762 ± 0.0000   0.3743 ± 0.0000
              MLRSSC     0.7532 ± 0.0810   0.6518 ± 0.0209   0.6703 ± 0.0294
              MVCF       0.6369 ± 0.0273   0.5065 ± 0.0316   0.5935 ± 0.0311
              MVGS       0.6934 ± 0.0000   0.5628 ± 0.0000   0.6333 ± 0.0000
              FEMV       0.8701 ± 0.0000   0.7331 ± 0.0021   0.8193 ± 0.0008
HandWritten   SC-BEST    0.7411 ± 0.0012   0.6760 ± 0.0011   0.6447 ± 0.0011
              SC-FU      0.7508 ± 0.0010   0.7122 ± 0.0014   0.6758 ± 0.0017
              MVGL       0.8560 ± 0.0000   0.9055 ± 0.0000   0.8502 ± 0.0000
              MLRSSC     0.7862 ± 0.0389   0.8073 ± 0.0180   0.7566 ± 0.0334
              MVCF       0.6085 ± 0.0456   0.8017 ± 0.0276   0.4254 ± 0.0683
              MVGS       0.8810 ± 0.0000   0.9011 ± 0.0000   0.8654 ± 0.0000
              FEMV       0.9375 ± 0.0163   0.8728 ± 0.0078   0.8781 ± 0.0263
MSRCv1        SC-BEST    0.7100 ± 0.0105   0.5822 ± 0.0145   0.5761 ± 0.0109
              SC-FU      0.5747 ± 0.0100   0.4872 ± 0.0106   0.4577 ± 0.0063
              MVGL       0.6333 ± 0.0000   0.5897 ± 0.0000   0.4976 ± 0.0000
              MLRSSC     0.6271 ± 0.0410   0.5257 ± 0.0230   0.4915 ± 0.0303
              MVCF       0.8105 ± 0.0456   0.7220 ± 0.0355   0.6994 ± 0.0371
              MVGS       0.7774 ± 0.0000   0.0023 ± 0.0000   0.7876 ± 0.0000
              FEMV       0.8400 ± 0.0023   0.7279 ± 0.0020   0.7171 ± 0.0029
WEBKb         SC-BEST    0.7792 ± 0.0000   0.0055 ± 0.0000   0.7917 ± 0.0000
              SC-FU      0.7792 ± 0.0000   0.0055 ± 0.0000   0.7917 ± 0.0000
              MVGL       0.7498 ± 0.0000   0.0062 ± 0.0000   0.5632 ± 0.7495
              MLRSSC     0.7726 ± 0.0000   0.0001 ± 0.0000   0.7832 ± 0.0000
              MVGS       0.7685 ± 0.0000   0.4351 ± 0.0000   0.7004 ± 0.0000
              FEMV       0.9514 ± 0.0000   0.6592 ± 0.0000   0.9287 ± 0.0000
NGs           SC-BEST    0.5358 ± 0.0006   0.3745 ± 0.0006   0.4222 ± 0.0004
              SC-FU      0.2110 ± 0.0000   0.0723 ± 0.0000   0.3291 ± 0.0000
              MVGL       0.2720 ± 0.0000   0.1286 ± 0.0000   0.3296 ± 0.0000
              MLRSSC     0.9724 ± 0.0018   0.9230 ± 0.0053   0.9453 ± 0.0036
              MVCF       0.3138 ± 0.0237   0.2173 ± 0.0385   0.3531 ± 0.0097
              MVGS       0.9820 ± 0.0000   0.9392 ± 0.0000   0.9643 ± 0.0000
              FEMV       0.9920 ± 0.0000   0.9722 ± 0.0000   0.9840 ± 0.0000
100leaves     SC-BEST    0.4483 ± 0.0099   0.7115 ± 0.0060   0.3182 ± 0.0099
              SC-FU      0.7162 ± 0.0172   0.8778 ± 0.0077   0.6262 ± 0.0257
              MVGL       0.7700 ± 0.0000   0.8972 ± 0.0000   0.5415 ± 0.0000
              MLRSSC     0.5835 ± 0.0189   0.7904 ± 0.0084   0.4677 ± 0.0195
              MVCF       0.5195 ± 0.0000   0.7556 ± 0.0020   0.3830 ± 0.0025
              MVGS       0.8244 ± 0.0000   0.9343 ± 0.0000   0.5765 ± 0.0000
              FEMV       0.8413 ± 0.0000   0.9175 ± 0.0000   0.7555 ± 0.0000
† indicates that the results obtained by the other algorithms are significantly worse than that of FEMV.
Table 3. Ablation comparison experiment.

Dataset       Algorithm  ACC               NMI               F-score
BBCsport      FEMV       0.8805 ± 0.0000   0.7935 ± 0.0000   0.8600 ± 0.0000
              del-FEMV   0.8363 ± 0.0000   0.7057 ± 0.0000   0.7880 ± 0.0000
BBC4views     FEMV       0.8701 ± 0.0000   0.7331 ± 0.0021   0.8193 ± 0.0008
              del-FEMV   0.8399 ± 0.0011   0.6807 ± 0.0045   0.7835 ± 0.0033
HandWritten   FEMV       0.9375 ± 0.0163   0.8728 ± 0.0078   0.8781 ± 0.0263
              del-FEMV   0.8856 ± 0.0234   0.7830 ± 0.0113   0.7956 ± 0.0023
MSRCv1        FEMV       0.8400 ± 0.0023   0.7279 ± 0.0020   0.7171 ± 0.0029
              del-FEMV   0.8095 ± 0.0000   0.7264 ± 0.0000   0.6866 ± 0.0000
Webkb         FEMV       0.9514 ± 0.0000   0.6592 ± 0.0000   0.9287 ± 0.0000
              del-FEMV   0.9248 ± 0.0000   0.5174 ± 0.0000   0.8960 ± 0.0000
NGS           FEMV       0.9920 ± 0.0000   0.9722 ± 0.0000   0.9840 ± 0.0000
              del-FEMV   0.9800 ± 0.0000   0.9390 ± 0.0000   0.9604 ± 0.0000
100leaves     FEMV       0.8413 ± 0.0000   0.9175 ± 0.0000   0.7555 ± 0.0000
              del-FEMV   0.7723 ± 0.0000   0.8463 ± 0.0000   0.7269 ± 0.0000