
Multiview Clustering of Adaptive Sparse Representation Based on Coupled P Systems

Academy of Management Science, Business School, Shandong Normal University, Jinan 250014, China
* Author to whom correspondence should be addressed.
Entropy 2022, 24(4), 568; https://doi.org/10.3390/e24040568
Submission received: 22 March 2022 / Revised: 12 April 2022 / Accepted: 15 April 2022 / Published: 18 April 2022
(This article belongs to the Topic Machine and Deep Learning)

Abstract

Multiview clustering (MVC) has become an important technique for data mining. Most existing studies on this topic adopt a fixed number of neighbors when constructing the similarity matrix of each view, as in single-view clustering. However, this may reduce the clustering quality, because multiview data come from diverse sources. Moreover, most MVC methods rely on iterative optimization to obtain the clustering results, which consumes a significant amount of time. Therefore, this paper proposes a multiview clustering method of adaptive sparse representation based on a coupled P system (MVCS-CP) that requires no iteration. The whole algorithm runs within the coupled P system. First, a parameter-free natural neighbor search algorithm automatically determines the number of neighbors of each view. In turn, manifold learning and sparse representation are employed to construct the similarity matrix, which preserves the internal geometry of the views. Next, a soft-thresholding operator is introduced to form the unified graph, from which the clustering results are obtained. The experimental results on nine real datasets indicate that MVCS-CP outperforms other state-of-the-art comparison algorithms.

1. Introduction

At present, many fields have accumulated large amounts of data from which cluster analysis can mine useful knowledge. Cluster analysis has been effectively applied to text mining [1], information retrieval [2], pattern recognition [3], molecular biology [4], etc. With the rapid growth of multimedia technology and the widespread deployment of the Internet of Things, multiview data have become more common and publicly available [5]. Due to the limitations of traditional clustering algorithms, multiview clustering has become a research hotspot. Multiview clustering takes multiple feature views of the data as input and merges them to obtain an optimized model before dividing the instances into different clusters, which is more effective than traditional single-view clustering [6,7]. Two important principles underlie multiview clustering: complementarity and consensus. The former holds that multiple views can complement each other to describe instances more comprehensively, while the latter aims to maximize the consistency between different views.
In recent years, multiview clustering has developed rapidly. In theory, most existing multiview clustering methods can be divided into four categories: graph-based techniques, non-negative matrix factorization, multikernel clustering, and deep multiview clustering. The multiview spectral clustering method and the subspace clustering method are graph-based multiview clustering methods. The former commonly applies spectral embedding [8], tensor learning [9,10] and relaxation matrix methods [11]. The latter is more effective in processing high-dimensional data. Methods such as sparse representation and low-rank representation are usually adopted to gain subspace self-representation [12,13,14,15]. The non-negative matrix factorization method adopts multiple normalizations, double regularization, and graph regularization strategies of non-negative matrix factorization factors to improve the performance of multiview clustering [16,17,18]. Multikernel clustering fuses linear kernels, polynomial kernels, Gaussian kernels and other kernel functions as base kernels. Meanwhile, it combines kernel norm minimization [19], tensor kernel norm [20], non-convex norm [21], extreme learning machine [22] and other methods to realize the clustering process.
Deep multiview clustering is another development direction of current multiview clustering. Partial-view-aligned clustering constructs category-level correspondences for unaligned data using the latent space learned by neural networks [23,24]. Robust multiview clustering with incomplete information addresses the partially view-unaligned problem and the partially sample-missing problem under a unified framework combined with neural networks [25]. Moreover, the concepts of intelligent algorithms such as particle swarm optimization (PSO) [26,27,28], the Boltzmann machine [29], the encoder [30], and the convolutional neural network (CNN) [31] have been introduced into multiview clustering to solve practical application problems. Since graph-based clustering methods have advantages in terms of accuracy, this paper focuses on them.
In multiview clustering, the construction of the single-view similarity matrices and the formation of a unified graph matrix are important issues. Zhan et al. [32] put forward an unsupervised multifeature learning method based on graph construction and dimensionality reduction to maximize the use of the correlation between different features. The weights of the affinity matrix are learned through well-designed optimization problems rather than fixed functions. In addition, a rank constraint is imposed on the Laplacian matrix of the global graph to achieve an ideal neighbor assignment. Brbić and Kopriva [12] learned a joint subspace representation by constructing an affinity matrix shared by all views while encouraging the sparsity and low rank of the solution. The proposed multiview low-rank sparse subspace clustering (MLRSSC) algorithm strengthens the consistency between the affinity matrices of pairs of views. Wang et al. [33] presented a graph-based system (GBS) for multiview clustering.
GBS works by employing the nearest-neighbor method to construct the similarity matrix of each view efficiently after extracting its data feature matrix. Then, an iterative optimization algorithm automatically weights each similarity matrix to learn a unified affinity matrix, from which the final cluster labels are obtained directly. Peng et al. [34] set the neighbor size to 10 and used the cosine distance as a metric to construct a similarity matrix, and then updated it iteratively with an objective function that includes geometric consistency. GMC [35] automatically weights and merges the similarity matrix of each view to generate a unified graph matrix; the two improve each other through iterative optimization, and the final clusters are given directly without an additional algorithm. Tan et al. [7] proposed a two-step multiview clustering method that exploits sparse representation and adaptive graph learning to optimize the similarity matrix of each single view while retaining its internal structural characteristics. Furthermore, a globally optimal matrix is obtained through adaptive weighted cooperative learning across the views. Huang et al. [36] merged the consistency and diversity of multiple views into a unified framework to form a "consistent and divergent multiview graph" clustering method (CDMGC). An alternating iterative algorithm combines the consistency part with automatically learned weights, and the consistency part is further integrated into the target affinity matrix. Finally, the clustering label of each instance is assigned directly.
Membrane computing (also known as P systems) is a distributed parallel computing model that Professor Păun proposed in 1998, inspired by the structure and function of biological cells [37]. Since it was put forward, its computational models have been proved to have computing power equivalent to a Turing machine [38]. The spiking neural P system is the third-generation neural membrane computing model, inspired by discrete neurons whose information is encoded in the number and timing of spikes. In addition, there are cell-like P systems and tissue-like P systems [39]. The development of membrane computing (P systems) proceeds mainly along theoretical research and application research. Theoretical research on the parallel computing capabilities of various P systems and on solving NP problems continues to grow [40,41,42,43]. In terms of application research, membrane computing has been widely used in spectral clustering [44,45] and density peak clustering [46].
Most of the above algorithms adopt the concept of neighbors when initially constructing the similarity matrix, but most use a fixed, manually determined number of neighbors. However, multiview data are usually collected through different measurement methods, such as images and videos. Since the noise, corruption, and view-specific attributes of different data sources differ, the number of neighbors should also differ when constructing the similarity matrix of each view. Simultaneously, most existing multiview clustering algorithms rely on iterative optimization when merging the views into a unified graph matrix, decomposing the problem into subproblems to be solved. Although this can achieve higher accuracy, it increases the computation time. Therefore, regarding the issues above, this paper proposes a multiview clustering method of adaptive sparse representation based on a coupled P system (MVCS-CP) and verifies its clustering performance. The main contributions of this paper are as follows:
(1)
A new coupled P system is proposed, which integrates the construction of a single view matrix and the formation of a unified graph into the P system to perform clustering tasks.
(2)
To construct the similarity matrix of each view, this paper introduces a parameter-free natural neighbor search algorithm, which automatically determines the number of neighbors in each view. After that, sparse representation and manifold learning are employed to construct the similarity matrix so as to preserve the internal geometry of the views.
(3)
In forming the unified graph, this paper adopts a soft-thresholding operator to learn a consistent, sparse affinity matrix from the similarity matrices of the views and then obtains the clustering result. Iterative optimization is not required, so good clustering results can be obtained quickly.
(4)
Nine multiview datasets are employed to verify the clustering performance of MVCS-CP.
The remaining parts of this paper are arranged as follows: Section 2 introduces the related concepts of the P system, graph learning, and other related work. The proposed multiview clustering of adaptive sparse representation based on a coupled P system (MVCS-CP) is outlined in Section 3. Section 4 details the experimental results and analyzes the performance of the algorithm. The summary of this paper and perspectives for future work are given in Section 5.

2. Related Work

2.1. Notations

In this paper, vectors, matrices, and scalars are represented by bold lowercase letters ($\mathbf{x}$), bold uppercase letters ($\mathbf{X}$), and lowercase letters ($x$), respectively. $X = \{X^1, X^2, \dots, X^m\}$ denotes a dataset with $m$ views, where $X^v \in \mathbb{R}^{d_v \times n}$. Its $j$-th column vector is represented as $x_j$, and the $(i,j)$-th entry is $x_{ij}$. $I$ represents the identity matrix and $\mathbf{1}$ represents a column vector with all entries equal to one. $\mathrm{Tr}(X)$ and $\|X\|_F$ are the trace and the Frobenius norm of $X$, respectively. For a vector $x$, its $\ell_p$-norm is $\|x\|_p$. $L$ denotes the Laplacian matrix constructed from the similarity matrix $S \in \mathbb{R}^{n \times n}$.

2.2. Graph-Based Clustering and Graph Learning

Supposing that all elements in the similarity matrix $S \in \mathbb{R}^{n \times n}$ are non-negative, the relevant properties of the Laplacian matrix $L$ can be obtained [47,48].
Theorem 1. 
The multiplicity c of the eigenvalue 0 of the Laplacian matrix L is equal to the number of connected components of the similarity matrix S.
That is, when the constraint $\mathrm{rank}(L) = n - c$ is satisfied, the similarity matrix $S$ yields the most suitable neighbor assignment and the data points are already divided into $c$ clusters [49]. The constraint $\mathrm{rank}(L) = n - c$ holds exactly when the sum of the $c$ smallest eigenvalues of $L$ is zero, i.e., $\sum_{i=1}^{c} \lambda_i = 0$, where $\lambda_i$ refers to the $i$-th smallest eigenvalue of $L$. Hence, according to Fan's theorem [50], we have:

$$\sum_{i=1}^{c} \lambda_i = \min_{F} \ \mathrm{tr}(F^{\top} L F) \quad \mathrm{s.t.} \ F \in \mathbb{R}^{n \times c}, \ F^{\top} F = I \tag{1}$$

where $F = (f_1, f_2, \dots, f_c)$ is the matrix of eigenvectors of $L = D - (S^{\top} + S)/2$, and $D$ is the diagonal degree matrix with entries $D_{ii} = \sum_{j} (S^{\top} + S)_{ij} / 2$.
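The following minimal numerical check (Python with NumPy; not part of the original paper) illustrates Theorem 1 on a toy four-point similarity matrix with two connected components: the Laplacian eigenvalue 0 appears with multiplicity 2.

import numpy as np

# Toy similarity matrix with two connected components: {0,1} and {2,3}.
S = np.array([[0., 1., 0., 0.],
              [1., 0., 0., 0.],
              [0., 0., 0., 1.],
              [0., 0., 1., 0.]])

W = (S + S.T) / 2                    # symmetrize, matching L = D - (S^T + S)/2
D = np.diag(W.sum(axis=1))           # diagonal degree matrix
L = D - W                            # unnormalized graph Laplacian

eigvals = np.linalg.eigvalsh(L)      # ascending eigenvalues: [0, 0, 2, 2]
print(int(np.sum(np.isclose(eigvals, 0.0))))   # -> 2 connected components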

2.3. Natural Neighbours

On the basis of previous studies, Zhu et al. [51,52] systematically summarized and defined the concept of natural neighbors. For the data objects in $X^v \in \mathbb{R}^{d_v \times n}$, the natural neighborhood stable structure can be expressed as $(\forall x_i)(\exists x_j)(k \in \mathbb{N})(i \neq j)\colon (x_i \in NN_k(x_j)) \wedge (x_j \in NN_k(x_i))$, where $NN_k(x_i) = \{x_j \in X \mid d(x_i, x_j) \le d(x_i, kn)\}$ is the set of $k$ nearest neighbors of $x_i$, and $d(x_i, kn)$ is the distance from $x_i$ to its $k$-th nearest neighbor.
Definition 1. 
(The Natural Characteristic Value Ncv). Ncv is equal to the number of natural neighbors (that is, $k$) of the data points $x$:

$$Ncv = \min \left\{ k \ \middle|\ \sum_{i=1}^{n} f(Nb_k(x_i)) = 0 \ \ \mathrm{or} \ \ \sum_{i=1}^{n} f(Nb_k(x_i)) = \sum_{i=1}^{n} f(Nb_{k-1}(x_i)) \right\} \tag{2}$$

where $Nb_k(x_i)$ is the number of reverse neighbors ($RNN(x_i) = \{x \in X \mid x_i \in NN_k(x)\}$) of $x_i$ at the $k$-th iteration, and $f(x) = 1$ if $x = 0$, and $f(x) = 0$ otherwise.
Definition 2. 
(The Natural Neighbors). The natural neighbors of the object $x$ in the dataset are its $k$ nearest neighbors, expressed as $NaN(x)$.

2.4. P System

The cell-like P system is the first generation of the membrane computing model. Its structure is shown in Figure 1. It is divided into basic membranes (such as 2, 3, 5, 7, 8 and 9) and non-basic membranes (such as 1, 4 and 6). Membrane 1 is also called the skin membrane, which isolates the P system from the external environment.
The tissue-like P system is composed of multiple single-membrane cells, which rely on designated channels for communication. The basic membrane structure of the tissue-like P system is shown in Figure 2. The initial objects are placed in the input cell (membrane 0), and rules and communication mechanisms pass objects among cells 1 to n. Cell n + 1 is the output cell, which stores the obtained results.

3. Multi-View Clustering of Adaptive Sparse Representation Based on Coupled P Systems

This section puts forward the multiview clustering method of adaptive sparse graph learning based on a coupled P system. First, we elaborate the general framework of the defined coupled P system. After that, the different evolution rules and operations are discussed in turn, including the construction of the similarity matrix of each view after the number of neighbors is determined, the formation of the unified graph, and the clustering itself. In addition, the communication rules between the different subsystems connected by synapses are explained. The flow chart of the MVCS-CP algorithm is shown in Figure 3.

3.1. The General Framework of the Proposed Coupled P System

The proposed coupled P system (MVCS-CP) is formed on the basis of the tissue-like P system by adding relevant elements of the cell-like P system. Figure 4 shows the basic structure of the coupled P system (MVCS-CP), including part of the basic information in the algorithm system.
Definition 3. 
The formal definition of the MVCS-CP system is
$$\Pi = (\Gamma, \varepsilon, syn, \sigma_0, \dots, \sigma_t, R, in, out)$$
where
  • $\Gamma = \{X^1, X^2, \dots, X^m, S^1, S^2, \dots, S^m, NaN(x), Ncv, Nb(x), W, D, L, para, c\}$. $X^i$ and $S^i$ represent the original data of the $m$ views and the similarity matrix corresponding to each view, respectively. $NaN(x)$ is the set of natural neighbors of the data point $x$ in a view. $Ncv$ refers to the natural characteristic value, and the number of reverse neighbors of $x$ is denoted as $Nb(x)$. $W$ represents the learned unified graph matrix. $D$ and $L$ indicate the degree matrix and the Laplacian matrix, respectively. The parameters $para$ and $c$ respectively refer to the parameter required for the experiment and the number of clusters.
  • $\varepsilon = \{X^1, X^2, \dots, X^m, para, c\} \subseteq \Gamma$ is the set of initial objects in the coupled system.
  • $syn = \{\{0,1\}, \{0,2\}, \{1,2\}, \{2,3\}, \{3,4\}\}$ signifies the synapses between cells, whose main function is to connect the cells and let them communicate with each other.
  • $\sigma_0, \dots, \sigma_t$ denote the cells (membranes) in the system, where $t$ depends on the number of views and the number of clusters in the dataset, that is, on the total number of cells in the system.
  • R represents a collection of communication rules and evolution rules in the system. The role of evolution rules is to modify objects and communication rules are used to transfer objects between cells (membranes).
  • $in$ is cell 0, the input membrane; $out$ is cell 4, the output membrane, which stores the final clustering results. A toy encoding of this structure is sketched below.
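To make Definition 3 concrete, the following Python sketch (illustrative only, not the authors' implementation; the object names and the example rule call are placeholders) encodes the cells, the synapse set, and one-directional transport along a synapse:

from dataclasses import dataclass, field

# A toy encoding of the membrane structure in Definition 3.
@dataclass
class Cell:
    label: int
    objects: dict = field(default_factory=dict)  # multiset of named objects

SYNAPSES = {(0, 1), (0, 2), (1, 2), (2, 3), (3, 4)}  # syn from Definition 3

cells = {i: Cell(i) for i in range(5)}               # sigma_0 ... sigma_4
cells[0].objects = {"views": "X^1..X^m", "para": 0.02, "c": 7}  # epsilon

def communicate(src: int, dst: int, key: str) -> None:
    """One-directional transport of an object along an existing synapse."""
    assert (src, dst) in SYNAPSES, "no synapse between these cells"
    cells[dst].objects[key] = cells[src].objects.pop(key)

communicate(0, 2, "c")   # e.g., rule R02 sends the cluster number c to cell 2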

3.2. The Evolution Rules

The input cell 0 initializes the data objects. It transmits the multiview data and the corresponding parameters to cell 1 to determine the natural characteristic value and to construct the similarity matrix of each view. At the same time, the number of clusters $c$ is transported from cell 0 to cell 2, from which the new cluster samples for k-means are later formed. The rule R0 can be described in detail as:
  • $R_{01} = \{X^1, X^2, \dots, X^m, para \rightarrow X^1, X^2, \dots, X^m, para, go\,[\ ]_1\}$
  • $R_{02} = \{c, para \rightarrow c, para, go\,[\ ]_2\}$
The output cell 4 stores the clustering results obtained by the algorithm (rule R4).

3.2.1. The Evolution Rules of Determining Ncv and Constructing Similarity Matrix in Cell 1

In practice, when constructing the similarity matrix, data objects are expected to be similar to their neighbors, so the choice of the number of neighbors is an important influencing factor. In most traditional algorithms, it is a manually supplied value chosen from experience, such as 10 or 15. However, the source channel of each view in multiview data is different, so the number of neighbors should differ as well. Therefore, in order to improve the accuracy of the algorithm, a parameter-free natural neighbor search algorithm is adopted in this paper to automatically determine the number of neighbors in each view.
In summary, the detailed evolution rules for determining the natural characteristic value in cell 1 are shown in rule R1; a code sketch of this search follows the list:
  • R11 (Iterative search rule): At the $k$-th iteration, for each data point $x_i$ in the single view $X^v$, we search for its $r$-th neighbor $x_j$ using a KD tree. After that, $Nb(x_j) = Nb(x_j) + 1$ and $NaN_k(x_i) = NaN_{k-1}(x_i) \cup \{x_j\}$, corresponding to the concepts in Section 2.3. $NaN(x)$ is transported to the related subcell to construct the similarity matrix $S^v$.
  • R12 (Iteration stop rule): If the number of data points whose reverse-neighbor count $Nb(x)$ equals zero no longer changes, or every point has $Nb(x) > 0$, the evolution rule stops.
  • R13 (Determine the $Ncv$ rule): The natural characteristic value $Ncv$ is calculated by Equation (2); it equals the number of neighbors $k$, and $k$ is then transmitted to the relevant subcells to prepare for the construction of $S^v$.
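The following Python sketch (not part of the original paper; NumPy and SciPy are assumed, and the function name and the max_k safety cap are our own) illustrates how rules R11-R13 can be realized with a KD tree:

import numpy as np
from scipy.spatial import cKDTree

def natural_characteristic_value(X: np.ndarray, max_k: int = 50) -> int:
    """A sketch of rules R11-R13: parameter-free natural-neighbor search.

    X is an (n, d) view matrix. Returns Ncv, the smallest k at which either
    every point has a reverse neighbor or the count of points without one
    stops changing (Equation (2)); max_k is just a safety cap.
    """
    n = X.shape[0]
    tree = cKDTree(X)
    nb = np.zeros(n, dtype=int)            # reverse-neighbor counts Nb(x)
    prev_zero = n
    for k in range(1, max_k + 1):
        _, idx = tree.query(X, k=k + 1)    # column 0 is the query point itself
        for j in idx[:, k]:                # R11: x_j gains one reverse neighbor
            nb[j] += 1
        num_zero = int(np.sum(nb == 0))    # points still lacking reverse neighbors
        if num_zero == 0 or num_zero == prev_zero:   # R12: stop condition
            return k                       # R13: Ncv equals the current k
        prev_zero = num_zero
    return max_k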
Manifold learning seeks a low-dimensional manifold embedded in a high-dimensional space and the corresponding embedding mapping, in order to achieve dimensionality reduction or data visualization [53,54]. Intuitively, if two data objects are close in the original space, they should also be close in the embedded graph. Noise and outliers are persistent factors that affect the final clustering results, and research [55] has found that sparse representation is robust to them. Therefore, this paper introduces manifold learning and sparse representation to construct the similarity matrix. In detail, the similarity matrix $S^v$ of each view $X^v$ is obtained by solving the following problem:
$$\min_{S^v} \sum_{i,j=1}^{n} \|x_i^v - x_j^v\|_2^2 \, s_{ij}^v + \alpha \sum_{i}^{n} \|s_i^v\|_1 \quad \mathrm{s.t.} \ s_{ii}^v = 0, \ s_{ij}^v \ge 0 \tag{3}$$
When $s_i^v$ is normalized so that $\mathbf{1}^{\top} s_i^v = 1$, the second term of Equation (3) becomes a constant; that is, the normalization and the sparse representation of $s_i^v$ are equivalent. Then, problem (3) becomes:
$$\min_{S^v} \sum_{i,j=1}^{n} \|x_i^v - x_j^v\|_2^2 \, s_{ij}^v \quad \mathrm{s.t.} \ s_{ii}^v = 0, \ s_{ij}^v \ge 0, \ \mathbf{1}^{\top} s_i^v = 1 \tag{4}$$
However, problem (4) has a trivial solution: the entry for the single data point with the smallest distance to $x_i^v$ is 1, while the entries for all other data points are 0. Adding an $\ell_2$-norm regularization term to problem (4) avoids this, giving:
$$\min_{S^v} \sum_{i,j=1}^{n} \|x_i^v - x_j^v\|_2^2 \, s_{ij}^v + \beta \sum_{i}^{n} \|s_i^v\|_2^2 \quad \mathrm{s.t.} \ s_{ii}^v = 0, \ s_{ij}^v \ge 0, \ \mathbf{1}^{\top} s_i^v = 1 \tag{5}$$
If we only pay attention to the second term of Equation (5), the prior can be regarded as assigning each data point the same similarity value to $x_i^v$, that is, $1/n$. As can be seen from the above, Equation (5) is separable with respect to each data object $i$. Therefore, the following problem can be solved separately for each data object $i$:
$$\min_{s_i^v} \sum_{j=1}^{n} \|x_i^v - x_j^v\|_2^2 \, s_{ij}^v + \beta \|s_i^v\|_2^2 \quad \mathrm{s.t.} \ s_{ii}^v = 0, \ s_{ij}^v \ge 0, \ \mathbf{1}^{\top} s_i^v = 1 \tag{6}$$
We adopt $d_{ij}$ to represent $\|x_i^v - x_j^v\|_2^2$, and $d_i$ is the corresponding vector. Problem (6) can then be written in vector form:

$$\min_{s_i^v} \left\| s_i^v + \frac{d_i}{2\beta} \right\|_2^2 \tag{7}$$
Problem (7) has a closed-form solution, as shown in [56]. As mentioned at the beginning of this section, constructing the similarity matrix requires the number of neighbors $k$, which has already been obtained and equals the natural characteristic value $Ncv$.
To sum up, the evolution rules for constructing the similarity matrix of each view in cell 1 are as follows:
  • R14 (Lagrange function rule): The Lagrange function of Equation (7) is $\mathcal{L}(s_i^v, \epsilon, \zeta) = \left\| s_i^v + \frac{d_i}{2\beta} \right\|_2^2 - \epsilon (\mathbf{1}^{\top} s_i^v - 1) - \zeta^{\top} s_i^v$.
  • R15 (Constraint rule): Based on the Karush–Kuhn–Tucker conditions, the optimal solution $\hat{s}_{ij}^v = \left( -\frac{d_{ij}}{2\beta} + \epsilon \right)_+$ can be acquired, where $(a)_+ = \max(a, 0)$. As a result of the constraint $\mathbf{1}^{\top} s_i^v = 1$, we have $\epsilon = \frac{1}{k} + \frac{1}{2k\beta} \sum_{j=1}^{k} d_{ij}$.
  • R16 (Determining $\beta$ rule): Since there are only $k$ non-zero values in $s_i^v$, $\beta$ has a maximal value, which is expressed as $\beta = \frac{k}{2} d_{i,k+1} - \frac{1}{2} \sum_{j=1}^{k} d_{ij}$.
  • R17 (Getting the $s_i^v$ rule): The $j$-th element of $s_i^v$ is as follows:

$$s_{ij}^v = \begin{cases} \dfrac{d_{i,k+1} - d_{ij}}{k \, d_{i,k+1} - \sum_{h=1}^{k} d_{ih}}, & j \le k \\ 0, & j > k \end{cases}$$
Through the above evolution rules, the number of neighbors is determined automatically, and the manifold learning and sparse representation introduced here are robust to noise and outliers; thus, the similarity matrix $S^v$ of each view is obtained in cell 1. A compact sketch of this construction is given below.
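The sketch below (Python with NumPy; not the authors' code, and the function name and the small numerical guard are our own) computes the row-wise closed-form solution from rules R14-R17; each row of the resulting matrix has exactly k non-zero entries and sums to one:

import numpy as np

def view_similarity(Xv: np.ndarray, k: int) -> np.ndarray:
    """A sketch of rules R14-R17: row-wise closed-form solution of problem (7).

    Xv is an (n, d) view with rows as samples; k is the neighbor count (Ncv).
    """
    n = Xv.shape[0]
    sq = np.sum(Xv ** 2, axis=1)
    dist = sq[:, None] + sq[None, :] - 2.0 * (Xv @ Xv.T)  # d_ij = ||x_i - x_j||^2
    np.fill_diagonal(dist, np.inf)                        # enforce s_ii = 0
    S = np.zeros((n, n))
    for i in range(n):
        order = np.argsort(dist[i])                       # ascending distances
        d = dist[i, order]
        denom = k * d[k] - d[:k].sum()                    # k*d_{i,k+1} - sum_h d_{ih}
        S[i, order[:k]] = (d[k] - d[:k]) / max(denom, 1e-12)
    return S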

3.2.2. The Evolution Rules of Constructing the Unified Graph Matrix, Degree Matrix, and Laplacian Matrix in Cell 2

The unified graph matrix $W$ merges the similarity matrices $S^v$ of the multiple views into one affinity matrix, on which the subsequent clustering algorithm runs to obtain the final clustering result. Enlightened by previous models, this paper introduces a soft-thresholding operator into the construction of the unified graph affinity matrix based on the following two principles:
(1).
The unified graph matrix $W$ and the similarity matrix $S^v$ of each view should be as consistent as possible.
(2).
The unified graph matrix $W$ is sparse, which further alleviates the noise generated by the different views.
In order to construct the unified graph matrix as quickly as possible, the objective function is:
$$\min_{W} \sum_{v=1}^{m} \|S^v - W\|_F^2 + para \, \|W\|_0 \tag{8}$$
The first term of Equation (8) satisfies principle (1), while the second term satisfies principle (2). Due to the $\ell_0$-norm minimization, solving Equation (8) is an NP-hard problem. According to previous studies on sparse learning [57,58], Equation (8) can be rewritten as:
$$\min_{W} \sum_{v=1}^{m} \|S^v - W\|_F^2 + para \, \|W\|_1 \tag{9}$$
where $\|W\|_1$ is the convex relaxation of $\|W\|_0$. Equation (9) can then be expanded as:
$$\begin{aligned} & \min_{W} \|S^1 - W\|_F^2 + \|S^2 - W\|_F^2 + \dots + \|S^m - W\|_F^2 + para \, \|W\|_1 \\ \Leftrightarrow\ & \min_{W} \sum_{v=1}^{m} \|S^v\|_F^2 - 2 \mathrm{Tr}\Big( \sum_{v=1}^{m} S^v W^{\top} \Big) + m \|W\|_F^2 + para \, \|W\|_1 \\ \Leftrightarrow\ & \min_{W} \frac{1}{m} \sum_{v=1}^{m} \|S^v\|_F^2 - 2 \mathrm{Tr}\Big( \frac{1}{m} \sum_{v=1}^{m} S^v W^{\top} \Big) + \|W\|_F^2 + \frac{para}{m} \|W\|_1 \\ \Leftrightarrow\ & \min_{W} \Big\| \frac{1}{m} \sum_{v=1}^{m} S^v \Big\|_F^2 - 2 \mathrm{Tr}\Big( \frac{1}{m} \sum_{v=1}^{m} S^v W^{\top} \Big) + \|W\|_F^2 + \frac{para}{m} \|W\|_1 + cons \end{aligned} \tag{10}$$
where $cons$ is the balancing constant term, that is:

$$cons = \frac{1}{m} \sum_{v=1}^{m} \|S^v\|_F^2 - \Big\| \frac{1}{m} \sum_{v=1}^{m} S^v \Big\|_F^2 \tag{11}$$
All in all, the evolution rules for constructing the unified graph matrix in cell 2 are shown in rule R2; a code sketch follows the list:
  • R21 (Removing the $cons$ rule): After removing $cons$, the problem above is redefined as $\min_{W} \|T - W\|_F^2 + \frac{para}{m} \|W\|_1$, where $T = \sum_{v=1}^{m} S^v / m$.
  • R22 (Soft-thresholding operator rule): Based on the above, for $\mu > 0$, the soft-thresholding operator is introduced:

$$S_{\mu}(x) = \begin{cases} x - \mu, & x > \mu \\ x + \mu, & x < -\mu \\ 0, & \mathrm{otherwise} \end{cases}$$
  • R23 (Obtaining $W$ rule): By applying $S_{\mu}$ element-wise, the operator extends to matrices. In addition, as shown in [52], the approximate solution to the problem in R21 is $W^* = S_{\frac{para}{2m}}(T)$.
  • R24 (Constructing the degree matrix rule): According to $D_{ii} = \sum_{j=1}^{n} W_{ij}$, the degree matrix $D$ is obtained.
  • R25 (Constructing the Laplacian matrix rule): The Laplacian matrix is built from $W$ and $D$ as $L = I - D^{-1/2} W D^{-1/2}$.
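A minimal sketch of rules R21-R25 follows (Python with NumPy; not the authors' code — the function names, the symmetrization step, and the small numerical guard are our own additions):

import numpy as np

def soft_threshold(T: np.ndarray, mu: float) -> np.ndarray:
    """Element-wise soft-thresholding operator S_mu from rule R22."""
    return np.sign(T) * np.maximum(np.abs(T) - mu, 0.0)

def unified_laplacian(S_views, para: float) -> np.ndarray:
    """A sketch of rules R21-R25: fuse the view similarities into W, then
    build the normalized Laplacian L = I - D^{-1/2} W D^{-1/2}."""
    m = len(S_views)
    T = sum(S_views) / m                       # R21: T = (1/m) * sum_v S^v
    W = soft_threshold(T, para / (2.0 * m))    # R23: W* = S_{para/(2m)}(T)
    W = (W + W.T) / 2.0                        # our own symmetrization safeguard
    d = W.sum(axis=1)                          # R24: degree entries D_ii
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    return np.eye(W.shape[0]) - d_inv_sqrt[:, None] * W * d_inv_sqrt[None, :]  # R25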

3.2.3. The Evolution Rules of K-Means in Cell 3

On the basis of the unified graph matrix, this section adopts the spectral clustering method to acquire the final clustering results. The evolution rules are shown in R3; a sketch follows the list.
  • R31 (Building new cluster instances rule): Forming new clustering instances facilitates the k-means step. We select the eigenvectors $U = \{u_1, u_2, \dots, u_c\}$, $U \in \mathbb{R}^{n \times c}$, corresponding to the first $c$ eigenvalues of $L$, and normalize the rows to obtain $Y_{ij} = U_{ij} / \big( \sum_{j} U_{ij}^2 \big)^{1/2}$.
  • R32 (Randomly selecting clustering centers rule): Among the n points of Y , it randomly selects c points as the initial clustering centers and stores them in the subcells.
  • R33 (Clustering rule): After that, the distance from each instance to each cluster center is computed in the subcells simultaneously and transported to cell 3. Finally, the instances are allocated based on the principle of minimum distance to form c different clusters in cell 3.
  • R34 (Outputting result rule): For the clusters divided according to rule R33, the current mean of each cluster is taken as the new cluster center. The current cluster centers are compared with the previous ones; if they have changed, rule R33 is repeated. Otherwise, the clustering result is output to cell 4.
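The sketch below (Python; SciPy and scikit-learn are assumed, and scikit-learn's KMeans stands in for the membrane-parallel center updates described in rules R32-R34) shows the spectral embedding plus k-means step:

import numpy as np
from scipy.linalg import eigh
from sklearn.cluster import KMeans

def cluster_labels(L: np.ndarray, c: int, seed: int = 0) -> np.ndarray:
    """A sketch of rules R31-R34: spectral embedding followed by k-means."""
    # R31: eigenvectors of the c smallest eigenvalues of L
    _, U = eigh(L, subset_by_index=[0, c - 1])
    # row normalization: Y_ij = U_ij / (sum_j U_ij^2)^(1/2)
    Y = U / np.maximum(np.linalg.norm(U, axis=1, keepdims=True), 1e-12)
    # R32-R34: k-means on the rows of Y
    return KMeans(n_clusters=c, n_init=10, random_state=seed).fit_predict(Y)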

3.3. The Communication Rules between Different Cells

In the MVCS-CP system, communication between cells relies on the synapses between different cells. The membranes have distinct functions, such as initializing objects, executing the algorithm, and outputting the clustering results. In this way, the ordered communication between membranes makes the whole algorithm more efficient.
The rules of communication between different cells are as follows:
(1)
Unidirectional transport between cells. $u$ is a string containing the objects, and $\lambda$ is the empty string.
  • $Rule\ 1: (0, u/\lambda, 1)$: It feeds $u$, containing the original data $X$ of the $m$ views, into cell 1 for the determination of the similarity matrix of each view.
  • $Rule\ 2: (0, u/\lambda, 2)$: The string $u$, including the parameter $para$ and the number of clusters $c$, is transferred to cell 2 to form the unified graph matrix and to construct the degree matrix and the Laplacian matrix.
  • $Rule\ 4: (1, u/\lambda, 2)$: The string $u$ containing the similarity matrices $S^v$ of the views produced by cell 1 is transported to cell 2 for the construction of the unified graph matrix.
  • $Rule\ 5: (2, u/\lambda, 3)$: It conveys the string $u$ containing the Laplacian matrix and the number of clusters $c$ to cell 3 for k-means clustering.
  • $Rule\ 6: (3, u/\lambda, 4)$: The string $u$ of clustering results generated by k-means is transmitted from cell 3 to cell 4 for storage.
(2)
Unidirectional transport between cells and the environment.
  • $Rule\ 3: (1, u/\lambda, env)$: It releases the string $u$ of the computed reverse neighbors $Nb(x)$ from cell 1 into the environment.

4. Experiments

In this section, we verify the performance of MVCS-CP on real multiview datasets. All experiments were carried out in the MATLAB 2016a environment on a computer with an Intel Core i7 2.9 GHz CPU, 16 GB RAM, and 64-bit Windows 10.

4.1. Datasets

Experiments are conducted on nine commonly used multiview datasets, and the general information of the dataset is shown in Table 1.
  • Caltech101 [59]: Caltech101-7 and Caltech101-20 are selected from the Caltech101 dataset and include 1474 and 2386 images, respectively. Each image is described by six feature vectors: GABOR, WM (wavelet moment), CENT (CENTRIST features), HOG, GIST and LBP.
  • NUS [60]: It contains 2400 images in 12 categories. The six features of colour histogram, CM, edge direction histogram, wavelet texture, block-wise colour moment and SIFT description are included for each image.
  • ORL [61]: This dataset contains 400 images with four feature vectors of GIST, HOG, LBP, and CENT.
  • 3sources: This dataset contains 169 news documents reported by three online news organizations, BBC, The Guardian and Reuters.
  • BBC [62]: It is a collection of 685 documents from the BBC News website, each divided into four feature vectors.
  • BBC_Sport [62]: This dataset consists of 544 documents collected from the BBC Sports website; each document has two feature vectors.
  • 100leaves [63]: It consists of 1600 samples from the UCI repository, each of which is one of a hundred species.
  • Scene15 [64]: It consists of 4485 images of indoor and outdoor scenes with a total of three views.

4.2. Evaluation Metrics

This paper adopts six evaluation indicators to measure the quality of clustering results, namely accuracy (ACC), Adjusted Rand Index (ARI), Normalized Mutual Information (NMI), F1-score (F), Purity and Precision.
(1).
Accuracy: ACC refers to the ratio of the number of correctly clustered samples to the total number of instances $N$:

$$Acc = \frac{TP + TN}{N}$$

where $N = TP + FP + FN + TN$; $TP$ represents true positives, $FP$ false positives, $FN$ false negatives, and $TN$ true negatives.
(2).
Adjusted Rand Index: the value range of the ARI is $[-1, 1]$:

$$RI = \frac{TP + TN}{TP + FP + TN + FN}, \qquad ARI = \frac{RI - E[RI]}{\max(RI) - E[RI]}$$
(3).
Normalized Mutual Information: NMI measures the difference between cluster partitions through information theory; its value range is $[0, 1]$:

$$NMI(X; Y) = \frac{2 I(X; Y)}{H(X) + H(Y)}$$

where $I(X; Y)$ denotes the mutual information between random variables $X$ and $Y$, and $H(X)$, $H(Y)$ are their entropies.
(4).
Precision: it represents the proportion of true positives among all predicted positive samples:

$$Precision = \frac{TP}{TP + FP}$$
(5).
F1-score: $F$ is the harmonic mean of precision and recall, measuring the clustering effect comprehensively:

$$R = \frac{TP}{TP + FN}, \qquad F = \frac{2 \cdot Precision \cdot R}{Precision + R}$$
(6).
Purity: the general idea of cluster purity is to divide the number of correctly clustered instances by the total number of instances:

$$Purity(\Omega, T) = \frac{1}{N} \sum_{c} \max_{j} |\omega_c \cap t_j|$$

where $\Omega = \{\omega_1, \omega_2, \dots, \omega_c\}$ denotes the obtained clusters and $T = \{t_1, t_2, \dots, t_j\}$ represents the correct categories. $\omega_c$ is the set of all samples in the $c$-th cluster, and $t_j$ comprises the samples of the $j$-th true class. Its value range is $[0, 1]$; the higher the better.
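Of these metrics, purity is the one least likely to ship with a standard library, so a small helper is sketched below (Python with NumPy; the function name is our own):

import numpy as np

def purity(labels_true: np.ndarray, labels_pred: np.ndarray) -> float:
    """Cluster purity as defined above: for each predicted cluster, count its
    most frequent true class, sum over clusters, and divide by N."""
    total = 0
    for c in np.unique(labels_pred):
        members = labels_true[labels_pred == c]
        _, counts = np.unique(members, return_counts=True)
        total += counts.max()
    return total / labels_true.size

# Example: purity(np.array([0, 0, 1, 1]), np.array([1, 1, 1, 0])) -> 0.75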

4.3. Compared Methods

To verify that the proposed method can effectively improve the clustering performance, we compare the MVCS-CP method with a single-view clustering method (spectral clustering method) and six state-of-the-art multiview clustering methods.
  • SC [65] performs clustering on every single view and concatenates all views in the dataset into one view (Featconcat) for clustering.
  • GBS [33] proposes a general graph-based multiview clustering system. The number of neighbors takes its default setting of 5.
  • AMGL [66] is a parameter-free model for spectral embedding learning that automatically learns the weights for each view by solving a square root trace minimization problem.
  • MVGL [67] explores a Laplacian rank-constrained graph after obtaining the similarity graph of each view, where the number of neighbors is set to the default value of 10.
  • ASMV [32] adaptively jointly optimizes the data correlation between multiple features, and the number of neighbors is set to 15.
  • CDMGC [36] is a graph clustering method of explicitly exploiting both multiview consistency and multiview diversity. The parameters in the experiment leverage the default values in the code provided by the author.
  • CoMSC [68] is a multiview subspace clustering algorithm that groups objects and simultaneously removes data redundancy. In the experiments, the two parameters $\lambda$ and $c$ are respectively searched in $\{2^{-10}, 2^{-8}, 2^{-6}, 2^{-4}, 2^{-2}, 2^{0}, 2^{2}, 2^{4}, 2^{6}, 2^{8}, 2^{10}\}$ and $\{k, 2k, \dots, 20k\}$, where $k$ is the number of classes.
The experimental results on the nine datasets are shown in Table 2, Table 3, Table 4, Table 5, Table 6, Table 7, Table 8, Table 9 and Table 10, with the standard deviation in parentheses. The value with the best experimental result is bolded, and the second-best value is marked with an underscore (_). The following results can be obtained from the tables:
  • The proposed MVCS-CP method performs well on all six evaluation metrics across all datasets, basically ranking best or second best. On the Caltech101-20 dataset, it performs best on the four metrics of ARI, NMI, Precision and Purity, with improvements of 8%, 2%, 4% and 6% over the second-best results. On the Caltech101-7 dataset, the three indicators ACC, Precision and Purity are the best, and the remaining indicators ARI, NMI and F are the second best. On the NUS dataset, all indicators except F perform the best; compared with CoMSC, which has the better overall performance among the baselines, the effect increases by 5% (ACC), 2% (ARI), 2% (NMI), 2% (Precision) and 16% (Purity). Synthesizing the Caltech101-20, Caltech101-7 and NUS datasets, it can be concluded that the proposed MVCS-CP achieves better results when processing more than five views. MVCS-CP performs optimally on all six metrics for the ORL dataset, with an average improvement of 2% per metric over the second-best results. As for the 100leaves dataset, except for the NMI indicator, which is 0.5% lower than the second best, all other indicators perform the best. The ORL and 100leaves datasets have larger numbers of clusters (40 and 100 categories, respectively), so MVCS-CP copes well with large numbers of clusters. On the 3sources and BBC datasets, the proposed algorithm demonstrates an obvious improvement on all indicators. In addition, on the BBC_Sport dataset it performs best on five metrics and second best on Precision. The 3sources, BBC and BBC_Sport datasets all have dimensions on the order of thousands, so the proposed MVCS-CP achieves satisfactory results when dealing with higher-dimensional datasets.
  • Furthermore, on the Scene15 dataset MVCS-CP performs best on four metrics (ACC, ARI, Precision and Purity); on Purity in particular it is 7% better than the second-best result, and it is comparable to the best result on the NMI indicator. This illustrates that MVCS-CP can handle larger-scale datasets.
  • Compared with the state-of-the-art multiview clustering algorithms, the MVCS-CP algorithm has better or comparable performance. This suggests that taking each view’s geometry and sparse representation into account yields better results.
  • In terms of the single-view methods, the multiview clustering algorithms are basically better, which shows that considering multiple features of a dataset leads to better clustering. However, on the BBC_Sport dataset, Featconcat performs best in terms of Precision, which means that multiview clustering methods still have room for improvement.
In order to display the results more intuitively, the unified graphs learned by the different methods are visualized, taking ORL and BBC_Sport as examples (see Figure 5). On the ORL dataset, all methods obtain the correct number of diagonal blocks; nevertheless, AMGL and MVGL produce considerable noise, and while the result of GBS is clearer, it is still noisier than that of MVCS-CP. On the BBC_Sport dataset, ASMV and CDMGC cannot acquire the correct number of diagonal blocks, and the CoMSC block-diagonal structure is obvious but noisier. The visual display of the unified graph indicates that the sparse representation can effectively reduce noise.

4.4. Running Time

Table 11 (the value with the best experimental result is bolded) and Figure 6 show the runtime comparison of the different multiview clustering methods on the nine real-world datasets. Except for the BBC dataset, on which AMGL has the shortest running time, the proposed MVCS-CP method runs fastest on the remaining eight datasets. Even on the relatively large-scale Scene15 dataset, it takes less than 10 s. Compared with the MVGL, ASMV and CoMSC methods, MVCS-CP has an obvious advantage: on most datasets its time cost is only about one percent of theirs. In conclusion, by avoiding iteration the method saves a significant amount of time.

4.5. Comparison of the Number of Neighbors

Determining the number of neighbors of each view is an important step of the MVCS-CP method before constructing the similarity matrix, and it distinguishes MVCS-CP from other methods. Table 12 shows the numbers of neighbors automatically determined for each view of the nine datasets. Except for the 3sources dataset, where every view has the same number of neighbors, the numbers differ across views in all datasets. To verify that automatically determining the number of neighbors effectively improves the clustering results, MVCS-CP with a fixed number of neighbors is adopted for comparison. Figure 7 and Figure 8 compare the ACC and F metrics when the number of neighbors is fixed at 5, 10, 15 and 20 and when it is determined automatically. The automatic MVCS-CP achieves the best clustering effect on both ACC and F values, which shows the effectiveness of automatically determining the number of neighbors.

4.6. Parameter Analysis

In this paper, the parameter $para$ is used when forming the unified graph matrix, and its selection range is $\{0.01, 0.02, 0.03, 0.04\}$. It can be seen from Figure 9 that the clustering effect of MVCS-CP is relatively stable over the parameter range from 0.01 to 0.04. When the parameter of the NUS dataset is 0.04, the data become too complex to be processed during clustering, so only the results for 0.01 to 0.03 are shown in the figure. Figure 9 demonstrates that the proposed method is not very sensitive to the parameter.

4.7. Result Discussion

As mentioned previously, the comprehensive results on nine common datasets indicate that the MVCS-CP can handle datasets with different numbers of views and clusters, different dimensions and different sizes. For higher-dimensional and larger-size datasets, it can still obtain better clustering results. The above shows that considering the geometry and sparse representation of each view enables better clustering. The visualization of the unified graph demonstrates that MVCS-CP obtains a clearer and more concentrated clustering structure. This shows that the introduction of sparse representation effectively reduces noise. In terms of the running time results, the MVCS-CP method without iterations saves more time. As far as the number of neighbors, automatically determining the number of neighbors can effectively improve the clustering results. Furthermore, MVCS-CP is not sensitive to parameters.

5. Conclusions

This paper proposes a multiview clustering method of adaptive sparse representation based on a coupled P system (MVCS-CP). After reading the data matrices, the number of neighbors of each view is determined automatically, and the concepts of manifold learning and sparse representation are then adopted to construct the similarity matrices. During the unified graph formation stage, the method learns a sparse similarity matrix that is as consistent as possible with all views. In addition, the model directly obtains a closed-form solution without iteration, consuming less time. The experiments on nine real datasets demonstrate that the proposed MVCS-CP method outperforms the state-of-the-art multiview clustering algorithms. Moreover, the comparison with fixed numbers of neighbors indicates that the automatic determination of the number of neighbors is effective. In brief, the method is fast and intuitive to apply, which makes it suitable for dealing with practical problems. Embedding deep neural networks into multiview clustering to determine the number of clusters automatically and to remove the remaining parameter will be issues worth exploring in the future.

Author Contributions

Conceptualization, X.Z. and X.L.; methodology, X.Z. and X.L.; software, X.Z.; validation, X.Z.; Formal analysis, X.Z.; Writing—original draft preparation, X.Z.; writing—review and editing, X.Z. and X.L.; supervision, X.Z.; project administration, X.L.; Funding acquisition, X.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was partially supported by the National Natural Science Foundation of China (Nos. 62172622, 61876101, 61802234, and 61806114), the Social Science Fund Project of Shandong (16BGLJ06, 11CGLJ22), China Postdoctoral Science Foundation Funded Project (2017M612339, 2018M642695), Natural Science Foundation of the Shandong Provincial (ZR2019QF007), China Postdoctoral Special Funding Project (2019T120607) and Youth Fund for Humanities and Social Sciences, Ministry of Education (19YJCZH244).

Data Availability Statement

The datasets used in this paper come from the related papers cited in Section 4.1; contact the authors for the full datasets.

Conflicts of Interest

The authors of this paper declare no conflict of interest.

References

  1. Janani, R.; Vijayarani, S. Text document clustering using Spectral Clustering algorithm with Particle Swarm Optimization. Expert Syst. Appl. 2019, 134, 192–200. [Google Scholar] [CrossRef]
  2. Djenouri, Y.; Belhadi, A.; Fournier-Viger, P.; Lin, J.C.W. Fast and effective cluster-based information retrieval using frequent closed itemsets. Inf. Sci. 2018, 453, 154–167. [Google Scholar] [CrossRef] [Green Version]
  3. Ge, C.J.; de Oliveira, R.A.; Gu, I.Y.H.; Bollen, M.H.J. Deep Feature Clustering for Seeking Patterns in Daily Harmonic Variations. IEEE Trans. Instrum. Meas. 2021, 70, 2501110. [Google Scholar] [CrossRef]
  4. Bang, H.; Zhou, X.K.; Van Epps, H.L.; Mazumdar, M. Statistical Methods in Molecular Biology; Humana Press: Totowa, NJ, USA, 2010. [Google Scholar]
  5. Fu, L.L.; Lin, P.F.; Vasilakos, A.V.; Wang, S.P. An overview of recent multi-view clustering. Neurocomputing 2020, 402, 148–161. [Google Scholar] [CrossRef]
  6. Hu, Z.X.; Nie, F.P.; Chang, W.; Hao, S.Z.; Wang, R.; Li, X.L. Multi-view spectral clustering via sparse graph learning. Neurocomputing 2020, 384, 1–10. [Google Scholar] [CrossRef]
  7. Tan, J.P.; Yang, Z.J.; Cheng, Y.Q.; Ye, J.L.; Wang, B.; Dai, Q.Y. SRAGL-AWCL: A two-step multi-view clustering via sparse representation and adaptive weighted cooperative learning. Pattern Recognit. 2021, 117, 107987. [Google Scholar] [CrossRef]
  8. Cai, Y.; Jiao, Y.Y.; Zhuge, W.Z.; Tao, H.; Hou, C.P. Partial multi-view spectral clustering. Neurocomputing 2018, 311, 316–324. [Google Scholar] [CrossRef]
  9. Shi, S.J.; Nie, F.P.; Wang, R.; Li, X.L. Auto-weighted multi-view clustering via spectral embedding. Neurocomputing 2020, 399, 369–379. [Google Scholar] [CrossRef]
  10. Li, Z.L.; Tang, C.; Chen, J.J.; Wan, C.; Yan, W.Q.; Liu, X.W. Diversity and consistency learning guided spectral embedding for multi-view clustering. Neurocomputing 2019, 370, 128–139. [Google Scholar] [CrossRef]
  11. Wu, J.L.; Lin, Z.C.; Zha, H.B. Essential Tensor Learning for Multi-View Spectral Clustering. IEEE Trans. Image Process. 2019, 28, 5910–5922. [Google Scholar] [CrossRef] [Green Version]
  12. Brbic, M.; Kopriva, I. Multi-view low-rank sparse subspace clustering. Pattern. Recogn. 2018, 73, 247–258. [Google Scholar] [CrossRef] [Green Version]
  13. Niu, G.L.; Yang, Y.L.; Sun, L.Q. One-step multi-view subspace clustering with incomplete views. Neurocomputing 2021, 438, 290–301. [Google Scholar] [CrossRef]
  14. Zhu, W.C.; Lu, J.W.; Zhou, J. Structured general and specific multi-view subspace clustering. Pattern. Recognit. 2019, 93, 392–403. [Google Scholar] [CrossRef]
  15. Xiong, L.Y.; Wang, C.; Huang, X.H.; Zeng, H. An Entropy Regularization k-Means Algorithm with a New Measure of between-Cluster Distance in Subspace Clustering. Entropy 2019, 21, 683. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  16. Zong, L.L.; Zhang, X.C.; Zhao, L.; Yu, H.; Zhao, Q.L. Multi-view clustering via multi-manifold regularized non-negative matrix factorization. Neural Netw. 2017, 88, 74–89. [Google Scholar] [CrossRef] [Green Version]
  17. Luo, P.; Peng, J.Y.; Guan, Z.Y.; Fan, J.P. Dual regularized multi-view non-negative matrix factorization for clustering. Neurocomputing 2018, 294, 1–11. [Google Scholar] [CrossRef]
  18. Zhang, X.Y.; Gao, H.B.; Li, G.P.; Zhao, J.H.; Huo, J.H.; Yin, J.L.; Liu, Y.C.; Zheng, L. Multi-view clustering based on graph-regularized nonnegative matrix factorization for object recognition. Inf. Sci. 2018, 432, 463–478. [Google Scholar] [CrossRef]
  19. Huang, A.P.; Zhao, T.S.; Lin, C.W. Multi-View Data Fusion Oriented Clustering via Nuclear Norm Minimization. IEEE Trans. Image Process. 2020, 29, 9600–9613. [Google Scholar] [CrossRef]
  20. Lu, G.F.; Zhao, J.B. Latent multi-view self-representations for clustering via the tensor nuclear norm. Appl. Intell. 2022, 52, 6539–6551. [Google Scholar] [CrossRef]
  21. Zhang, X.Q.; Sun, H.J.; Liu, Z.G.; Ren, Z.W.; Cui, Q.J.; Li, Y.M. Robust low-rank kernel multi-view subspace clustering based on the Schatten p-norm and correntropy. Inf. Sci. 2019, 477, 430–447. [Google Scholar] [CrossRef]
  22. Wang, Q.; Dou, Y.; Liu, X.W.; Xia, F.; Lv, Q.; Yang, K. Local kernel alignment based multi-view clustering using extreme learning machine. Neurocomputing 2018, 275, 1099–1111. [Google Scholar] [CrossRef]
  23. Huang, Z.Y.; Hu, P.; Peng, X. Partially View-aligned Clustering. In Proceedings of the 33th Annual Conference on Neural Information Processing Systems, Vancouver, BC, Canada, 6–12 December 2020. [Google Scholar]
  24. Yang, M.X.; Li, Y.F.; Huang, Z.Y.; Liu, Z.T.; Hu, P.; Peng, X. Partially View-aligned Representation Learning with Noise-robust Contrastive Loss. In Proceedings of the 2021 IEEE Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 19–25 June 2021. [Google Scholar]
  25. Yang, M.X.; Li, Y.F.; Hu, P.; Bai, J.F.; Lv, J.C.; Peng, X. Robust Multi-View Clustering with Incomplete Information. IEEE Trans. Pattern. Anal. 2022; online ahead of print. [Google Scholar] [CrossRef] [PubMed]
  26. Jiang, B.; Qiu, F.Y.; Wang, L.P.; Zhang, Z.J. Bi-level weighted multi-view clustering via hybrid particle swarm optimization. Inf. Process. Manag. 2016, 52, 387–398. [Google Scholar] [CrossRef]
  27. De Gusmao, R.P.; de Carvalho, F.D.T. Clustering of multi-view relational data based on particle swarm optimization. Expert Syst. Appl. 2019, 123, 34–53. [Google Scholar] [CrossRef]
  28. De Gusmao, R.P.; de Carvalho, F.D.T. PSO for Fuzzy Clustering of Multi-View Relational Data. Int. J. Pattern. Recognit. 2020, 34, 2050022. [Google Scholar] [CrossRef]
  29. Dutta, P.; Mishra, P.; Saha, S. Incomplete multi-view gene clustering with data regeneration using Shape Boltzmann Machine. Comput. Biol. Med. 2020, 125, 103965. [Google Scholar] [CrossRef] [PubMed]
  30. Saini, N.; Bansal, D.; Saha, S.; Bhattacharyya, P. Multi-objective multi-view based search result clustering using differential evolution framework. Expert Syst. Appl. 2021, 168, 114299. [Google Scholar] [CrossRef]
  31. Guerin, J.; Thiery, S.; Nyiri, E.; Gibaru, O.; Boots, B. Combining pretrained CNN feature extractors to enhance clustering of complex natural images. Neurocomputing 2021, 423, 551–571. [Google Scholar] [CrossRef]
  32. Zhan, K.; Chang, X.; Guan, J.; Chen, L.; Ma, Z.; Yang, Y. Adaptive Structure Discovery for Multimedia Analysis Using Multiple Features. IEEE Trans. Cybern. 2019, 49, 1826–1834. [Google Scholar] [CrossRef]
  33. Wang, H.; Yang, Y.; Liu, B.; Fujita, H. A study of graph-based system for multi-view clustering. Knowl.-Based Syst. 2019, 163, 1009–1019. [Google Scholar] [CrossRef]
  34. Peng, X.; Huang, Z.Y.; Lv, J.C.; Zhou, J.T. COMIC: Multi-View Clustering without Parameter Selection. In Proceedings of the 36th International Conference on Machine Learning, Long Beach, CA, USA, 9–15 June 2019. [Google Scholar]
  35. Wang, H.; Yang, Y.; Liu, B. GMC: Graph-Based Multi-View Clustering. IEEE Trans. Knowl. Data Eng. 2020, 32, 1116–1129. [Google Scholar] [CrossRef]
  36. Huang, S.; Tsang, I.; Xu, Z.; Lv, J.C. Measuring Diversity in Graph Learning: A Unified Framework for Structured Multi-View Clustering. IEEE Trans. Knowl. Data Eng. 2021; early access. [Google Scholar] [CrossRef]
  37. Paun, G. Computing with membranes. J. Comput. Syst. Sci. 2000, 61, 108–143. [Google Scholar] [CrossRef] [Green Version]
  38. Zhang, G.X.; Pan, L.Q. A Survey of Membrane Computing as a New Branch of Natural Computing. Chin. J. Comput. 2010, 33, 208–214. [Google Scholar]
  39. Wu, T.; Pan, L.; Yu, Q.; Tan, K.C. Numerical Spiking Neural P Systems. IEEE Trans. Neural Netw. Learn. Syst. 2021, 32, 2443–2457. [Google Scholar] [CrossRef]
  40. Ren, Q.; Liu, X.; Sun, M. Turing Universality of Weighted Spiking Neural P Systems with Anti-Spikes. Comput. Intell. Neurosci. 2020, 2020, 8892240. [Google Scholar] [CrossRef]
  41. Wang, L.P.; Liu, X.Y.; Zhao, Y.Z. Universal Nonlinear Spiking Neural P Systems with Delays and Weights on Synapses. Comput. Intell. Neurosci. 2021, 2021, 3285719. [Google Scholar] [CrossRef]
  42. Song, B.S.; Li, K.L.; Zeng, X.X. Monodirectional Evolutional Symport Tissue P Systems with Promoters and Cell Division. IEEE Trans. Parall. Distr. 2022, 33, 332–342. [Google Scholar] [CrossRef]
  43. Zhao, S.; Zhang, L.; Liu, Z.; Peng, H.; Wang, J. ConvSNP: A deep learning model embedded with SNP-like neurons. J. Membr. Comput. 2022, 4, 87–95. [Google Scholar] [CrossRef]
  44. Zhang, Z.; Liu, X.; Wang, L. Spectral Clustering Algorithm Based on Improved Gaussian Kernel Function and Beetle Antennae Search with Damping Factor. Comput. Intell. Neurosci. 2020, 2020, 1648573. [Google Scholar] [CrossRef]
  45. Zhang, X.; Liu, X. Noises Cutting and Natural Neighbors Spectral Clustering Based on Coupling P System. Processes 2021, 9, 439. [Google Scholar] [CrossRef]
  46. Jiang, Z.; Liu, X.; Sun, M. A Density Peak Clustering Algorithm Based on the K-Nearest Shannon Entropy and Tissue-Like P System. Math. Probl. Eng. 2019, 2019, 1713801. [Google Scholar] [CrossRef] [Green Version]
  47. Mohar, B. The Laplacian spectrum of graphs. In Graph Theory, Combinatorics, and Applications; Wiley: New York, NY, USA, 1991; Volume 2, pp. 871–898. [Google Scholar]
  48. Surhone, L.M.; Tennoe, M.T.; Henssonow, S.F. Spectral Graph Theory; Published for the Conference Board of the Mathematical Sciences by the American Mathematical Society; American Mathematical Society: Providence, RI, USA, 2010. [Google Scholar]
  49. Tarjan, R. Depth-first search and linear graph algorithms. In Proceedings of the Symposium on Switching & Automata Theory, East Lansing, MI, USA, 13–15 October 1971. [Google Scholar]
  50. Fan, K. On a Theorem of Weyl Concerning Eigenvalues of Linear Transformations I. Proc. Natl. Acad. Sci. USA 1949, 35, 11. [Google Scholar] [CrossRef] [Green Version]
  51. Huang, J.L.; Zhu, Q.S.; Yang, L.J.; Feng, J. A non-parameter outlier detection algorithm based on Natural Neighbor. Knowl.-Based Syst. 2016, 92, 71–77. [Google Scholar] [CrossRef]
  52. Zhu, Q.S.; Feng, J.; Huang, J.L. Natural neighbor: A self-adaptive neighborhood method without parameter K. Pattern. Recognit. Lett. 2016, 80, 30–36. [Google Scholar] [CrossRef]
  53. Cai, D.; He, X.F.; Han, J.W.; Huang, T.S. Graph Regularized Nonnegative Matrix Factorization for Data Representation. IEEE Trans. Pattern. Anal. 2011, 33, 1548–1560. [Google Scholar]
  54. Hao, W.; Yan, Y.; Li, T. Multi-View Clustering via Concept Factorization with Local Manifold Regularization. In Proceedings of the IEEE International Conference on Data Mining (ICDM2016), Barcelona, Spain, 12–15 December 2016. [Google Scholar]
  55. Wright, J.; Yang, A.Y.; Ganesh, A.; Sastry, S.S.; Ma, Y. Robust Face Recognition via Sparse Representation. IEEE Trans. Pattern. Anal. 2009, 31, 210–227. [Google Scholar] [CrossRef] [Green Version]
  56. Nie, F.; Wang, X.; Jordan, M.I.; Huang, H. The Constrained Laplacian Rank Algorithm for Graph-Based Clustering. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence (AAAI’16), Phoenix, AZ, USA, 12–17 February 2016. [Google Scholar]
  57. Candes, E.J.; Wakin, M.B. An introduction to compressive sampling. IEEE Signal Process. Mag. 2008, 25, 21–30. [Google Scholar] [CrossRef]
  58. Candes, E.J.; Li, X.D.; Ma, Y.; Wright, J. Robust Principal Component Analysis? J. ACM 2011, 58, 11. [Google Scholar] [CrossRef]
  59. Dueck, D.; Frey, B.J. Non-metric affinity propagation for unsupervised image categorization. In Proceedings of the IEEE 11th International Conference on Computer Vision, Rio de Janeiro, Brazil, 14–21 October 2007. [Google Scholar]
  60. Chua, T.-S.; Tang, J.; Hong, R.; Li, H.; Luo, Z.; Zheng, Y. NUS-WIDE: A real-world web image database from National University of Singapore. In Proceedings of the ACM International Conference on Image and Video Retrieval, Fira, Greece, 8–10 July 2009. [Google Scholar]
  61. Samaria, F.S.; Harter, A.C. Parameterisation of a stochastic model for human face identification. In Proceedings of the 1994 IEEE Workshop on Applications of Computer Vision, Sarasota, FL, USA, 5–7 December 1994. [Google Scholar]
  62. Greene, D.; Cunningham, P. Practical solutions to the problem of diagonal dominance in kernel document clustering. In Proceedings of the 23rd International Conference on Machine Learning, Pittsburgh, PA, USA, 25–29 June 2006. [Google Scholar]
  63. Mallah, C.; Cope, J.; Orwell, J. Plant Leaf Classification Using Probabilistic Integration of Shape, Texture and Margin Features; Acta Press: Calgary, AB, USA, 2013. [Google Scholar]
  64. Li, F.F.; Perona, P. A Bayesian Hierarchical Model for Learning Natural Scene Categories. In Proceedings of the 2005 IEEE Conference on Computer Vision and Pattern Recognition, San Diego, CA, USA, 20–26 June 2005. [Google Scholar]
  65. Ng, A.Y.; Jordan, M.I.; Weiss, Y. On Spectral Clustering: Analysis and an Algorithm. In Proceedings of the 14th International Conference on Neural Information Processing Systems: Natural and Synthetic, Vancouver, BC, Canada, 3–8 December 2001. [Google Scholar]
  66. Nie, F.P.; Li, J.; Li, X.L. Parameter-Free Auto-Weighted Multiple Graph Learning: A Framework for Multiview Clustering and Semi-Supervised Classification. In Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence, New York, NY, USA, 9–15 July 2016. [Google Scholar]
  67. Zhan, K.; Zhang, C.; Guan, J.; Wang, J. Graph Learning for Multiview Clustering. IEEE Trans. Cybern. 2018, 48, 2887–2895. [Google Scholar] [CrossRef] [PubMed]
  68. Liu, J.; Liu, X.; Yang, Y.; Guo, X.; Kloft, M.; He, L. Multiview Subspace Clustering via Co-Training Robust Data Representation. IEEE Trans. Neural Netw. Learn. Syst. 2021. [Google Scholar] [CrossRef] [PubMed]
Figure 1. The basic membrane structure of the cell-like P system.
Figure 2. The basic membrane structure of the tissue-like P system.
Figure 3. The flow chart of the MVCS-CP algorithm.
Figure 4. The basic structure of the coupled P system.
Figure 5. Unified graph of ORL and BBC_Sport.
Figure 6. Performance comparison of running time on nine real-world datasets.
Figure 7. The comparison of ACC with a fixed number of neighbors and MVCS-CP.
Figure 8. The comparison of F with a fixed number of neighbors and MVCS-CP.
Figure 9. The influence of para changes on clustering effect indicators on nine datasets.
Table 1. The general information of the dataset (- means null).
Datasets | Objects | Views | Clusters | d1 | d2 | d3 | d4 | d5 | d6
Caltech101-20 | 2386 | 6 | 20 | 48 | 40 | 254 | 1984 | 512 | 928
Caltech101-7 | 1474 | 6 | 7 | 48 | 40 | 254 | 1984 | 512 | 928
NUS | 2400 | 6 | 12 | 64 | 44 | 73 | 128 | 155 | 500
ORL | 400 | 4 | 40 | 512 | 898 | 64 | 254 | - | -
3sources | 169 | 3 | 6 | 3560 | 3631 | 3068 | - | - | -
BBC | 685 | 4 | 5 | 4659 | 4633 | 4665 | 4684 | - | -
BBC_Sport | 544 | 2 | 5 | 3183 | 3203 | - | - | - | -
100leaves | 1600 | 3 | 100 | 64 | 64 | 64 | - | - | -
Scene15 | 4485 | 3 | 15 | 20 | 59 | 40 | - | - | -
Table 2. Experimental results on Caltech101-20 datasets (%).
Caltech101-20 | ACC | ARI | NMI | Precision | F | Purity
SC1 | 26.55 ± 1.46 | 11.73 ± 1.03 | 26.99 ± 0.4 | 36.13 ± 1.79 | 19.47 ± 1.01 | 52.46 ± 0.92
SC2 | 28.32 ± 1.38 | 16.27 ± 0.26 | 33.43 ± 0.32 | 46.73 ± 0.86 | 22.96 ± 0.32 | 59.62 ± 0.63
SC3 | 28.32 ± 1.38 | 16.27 ± 0.26 | 33.43 ± 0.32 | 46.73 ± 0.86 | 22.96 ± 0.32 | 59.62 ± 0.63
SC4 | 40.49 ± 1.13 | 30.05 ± 1.67 | 52.89 ± 0.99 | 71.02 ± 1.78 | 35.78 ± 1.67 | 75.43 ± 0.68
SC5 | 39.28 ± 1.72 | 27.47 ± 1.85 | 48.82 ± 0.98 | 67.2 ± 2.4 | 33.31 ± 1.81 | 73.19 ± 1.15
SC6 | 35.44 ± 2.75 | 24.18 ± 1.93 | 43.31 ± 1.37 | 60.39 ± 3.51 | 30.39 ± 1.8 | 68.64 ± 1.6
Featconcat | 49.97 ± 0.13 | 14.52 ± 0.35 | 20.2 ± 0.29 | 23.09 ± 0.19 | 36.51 ± 0.21 | 52.77 ± 0.12
AMGL | 52.73 ± 3.14 | 26.82 ± 2.82 | 52.19 ± 3.33 | 35.21 ± 3.1 | 40.67 ± 1.99 | 67.62 ± 1.88
MVGL | 60.69 ± 0 | 28.92 ± 0 | 50.73 ± 0 | 33.54 ± 0 | 44.15 ± 0 | 71.29 ± 0
ASMV | 41.17 ± 2.07 | 28.79 ± 2.06 | 54.23 ± 0.65 | 63.13 ± 2.75 | 35.25 ± 2 | 74.78 ± 0.7
GBS | 64 ± 0 | 34.08 ± 0 | 53.73 ± 0 | 37.07 ± 0 | 47.95 ± 0 | 73.34 ± 0
CoMSC | 53.98 ± 4.83 | 43.01 ± 6.31 | 59.47 ± 6.59 | 78.6 ± 4.91 | 78.21 ± 5.05 | 48.77 ± 2.32
CDMGC | 55.7 ± 9.49 | 22.72 ± 10.33 | 44.68 ± 8.29 | 29.28 ± 6.34 | 40.43 ± 6.76 | 65.08 ± 8.99
MVCS-CP | 60.6 ± 0.59 | 51.05 ± 2.2 | 61.36 ± 1.48 | 82.23 ± 1.56 | 56.56 ± 2.07 | 81.98 ± 1.01
The value with the best experimental result is bolded, and the second-best value is marked with an underscore (_).
Table 3. Experimental results on Caltech101-7 datasets (%).
Caltech101-7 | ACC | ARI | NMI | Precision | F | Purity
SC1 | 28.83 ± 2.1 | 7.94 ± 0.92 | 11.51 ± 0.5 | 48.77 ± 0.97 | 29.14 ± 1.45 | 65.88 ± 1.66
SC2 | 34.79 ± 2.24 | 19.67 ± 1.26 | 24.18 ± 0.59 | 66.23 ± 1.46 | 36.94 ± 1.13 | 73.09 ± 0.75
SC3 | 55.6 ± 0.22 | 2.81 ± 0.26 | 3.15 ± 0.37 | 39.44 ± 0.09 | 55.91 ± 0.03 | 56.61 ± 0.29
SC4 | 42.43 ± 2.57 | 29.55 ± 1.99 | 37.88 ± 1.62 | 78.48 ± 2.15 | 45.18 ± 1.9 | 81.25 ± 1.46
SC5 | 40.72 ± 0.39 | 28.11 ± 1.48 | 35.36 ± 0.7 | 77.99 ± 1.45 | 43.59 ± 1.37 | 81.41 ± 0.54
SC6 | 46.15 ± 3.24 | 30.32 ± 1.91 | 36.04 ± 1.17 | 78.42 ± 2.02 | 46.1 ± 1.64 | 80.62 ± 1.07
Featconcat | 54.04 ± 0.04 | 1.22 ± 0.08 | 1.47 ± 0.03 | 38.93 ± 0.03 | 55.69 ± 0.06 | 54.52 ± 0.06
AMGL | 64.46 ± 6.14 | 44.36 ± 5.82 | 54.6 ± 1.96 | 70.94 ± 6.65 | 63.71 ± 4.75 | 84.79 ± 0.77
MVGL | 57.06 ± 0 | 45.96 ± 0 | 53.17 ± 0 | 87.25 ± 0 | 60.37 ± 0 | 87.04 ± 0
ASMV | 40.77 ± 1.2 | 29.04 ± 1.22 | 41.55 ± 0.81 | 76.53 ± 0.75 | 45.2 ± 1.22 | 82.5 ± 0.53
GBS | 69.2 ± 0 | 59.43 ± 0 | 60.56 ± 0 | 88.58 ± 0 | 72.17 ± 0 | 88.47 ± 0
CoMSC | 63.28 ± 3.68 | 49.02 ± 3.96 | 53.62 ± 3.9 | 86.26 ± 4.95 | 63.49 ± 3.55 | 86.57 ± 1.32
CDMGC | 51.74 ± 11.66 | 5.97 ± 23.25 | 23.71 ± 16 | 42.53 ± 12.76 | 50.26 ± 10.59 | 61.8 ± 12.3
MVCS-CP | 69.95 ± 0.03 | 57.69 ± 0.07 | 56.13 ± 0.3 | 94.27 ± 1.29 | 69.99 ± 0.16 | 89.48 ± 0
The value with the best experimental result is bolded, and the second-best value is marked with an underscore (_).
Table 4. Experimental results on NUS datasets (%).
NUS | ACC | ARI | NMI | Precision | F | Purity
SC1 | 21.25 ± 0.42 | 4.32 ± 0.35 | 8.74 ± 0.19 | 12.03 ± 0.37 | 12.71 ± 0.22 | 22.99 ± 0.58
SC2 | 20.76 ± 0.42 | 4.23 ± 0.18 | 8.75 ± 0.34 | 12.04 ± 0.2 | 12.42 ± 0.09 | 22.41 ± 0.34
SC3 | 18.7 ± 0.22 | 3.4 ± 0.17 | 7.18 ± 0.23 | 11.33 ± 0.17 | 11.62 ± 0.15 | 19.94 ± 0.2
SC4 | 23.43 ± 1.1 | 5.23 ± 0.36 | 10.02 ± 0.61 | 13.02 ± 0.31 | 13.21 ± 0.36 | 24.84 ± 0.84
SC5 | 21.03 ± 0.45 | 4.73 ± 0.22 | 9.64 ± 0.72 | 12.44 ± 0.2 | 12.98 ± 0.22 | 22.41 ± 0.65
SC6 | 11.43 ± 0.18 | 0.32 ± 0.01 | 4.61 ± 0.14 | 8.44 ± 0.01 | 15.31 ± 0.02 | 13.09 ± 0.2
Featconcat | 10.79 ± 0.23 | 0.32 ± 0.02 | 4.5 ± 0.16 | 8.44 ± 0.01 | 15.4 ± 0.02 | 12.75 ± 0.12
AMGL | 21.43 ± 0.96 | 4.15 ± 0.66 | 12.2 ± 0.96 | 10.68 ± 0.48 | 16.33 ± 0.2 | 23.37 ± 0.99
MVGL | 13 ± 0 | 0.36 ± 0 | 5.57 ± 0 | 8.46 ± 0 | 15.44 ± 0 | 13.83 ± 0
ASMV | 12.13 ± 1.2 | 0.71 ± 0.84 | 8.13 ± 2.21 | 9.14 ± 0.94 | 14.21 ± 0.15 | 22.46 ± 2.67
GBS | 16.5 ± 0 | 1.24 ± 0 | 7.88 ± 0 | 8.88 ± 0 | 15.92 ± 0 | 17.88 ± 0
CoMSC | 26.83 ± 2.65 | 8.32 ± 3.47 | 14.12 ± 3.47 | 15.84 ± 2.98 | 27.46 ± 2.76 | 16 ± 1.49
CDMGC | 11.96 ± 1.43 | 0.27 ± 0.25 | 4.14 ± 1.57 | 8.42 ± 0.12 | 15.42 ± 0.17 | 12.68 ± 1.54
MVCS-CP | 31.38 ± 0.83 | 10.49 ± 0.58 | 16.1 ± 0.29 | 17.52 ± 0.52 | 18.21 ± 0.52 | 32.42 ± 0.38
The value with the best experimental result is bolded, and the second-best value is marked with an underscore (_).
Table 5. Experimental results on ORL datasets (%).
ORL | ACC | ARI | NMI | Precision | F | Purity
SC1 | 75.7 ± 1.95 | 68.01 ± 1.65 | 89.92 ± 0.68 | 58.42 ± 1.75 | 68.86 ± 1.6 | 80.6 ± 1.31
SC2 | 49.25 ± 2.02 | 35.32 ± 2.11 | 70.37 ± 1.16 | 34.84 ± 1.7 | 36.86 ± 2.07 | 53.3 ± 1.87
SC3 | 65.45 ± 1.16 | 58.83 ± 2.31 | 85.05 ± 0.81 | 51.09 ± 2.7 | 59.92 ± 2.23 | 71.8 ± 0.87
SC4 | 53.65 ± 2.06 | 37.44 ± 2.82 | 72.1 ± 1.5 | 36.77 ± 2.74 | 38.94 ± 2.76 | 57.15 ± 1.71
Featconcat | 74.4 ± 0.72 | 68.87 ± 1.18 | 89.37 ± 0.41 | 60.55 ± 1.72 | 69.67 ± 1.14 | 79.5 ± 0.71
AMGL | 72.91 ± 3.33 | 65.43 ± 6.51 | 89.69 ± 1.77 | 54.66 ± 7.71 | 66.39 ± 6.27 | 80.21 ± 2.54
MVGL | 73.75 ± 0 | 52.74 ± 0 | 87.15 ± 0 | 40.38 ± 0 | 54.17 ± 0 | 80.25 ± 0
ASMV | 67 ± 1.23 | 49.46 ± 0.67 | 81.08 ± 0.45 | 43.59 ± 1.37 | 50.79 ± 0.82 | 72.34 ± 0.71
GBS | 83.75 ± 0 | 76.32 ± 0 | 92.6 ± 0 | 68.75 ± 0 | 76.92 ± 0 | 86.75 ± 0
CoMSC | 86.5 ± 9.67 | 83.63 ± 13.03 | 94.42 ± 6.76 | 80.84 ± 11.97 | 84.01 ± 12.72 | 88.75 ± 9.78
CDMGC | 71.35 ± 1.9 | 47.16 ± 3.27 | 86.7 ± 0.85 | 33.95 ± 3.15 | 48.88 ± 3.12 | 79.2 ± 0.96
MVCS-CP | 89.5 ± 2.71 | 85.96 ± 0.46 | 94.87 ± 0.08 | 84.27 ± 0.82 | 86.28 ± 0.45 | 90.75 ± 1.41
The value with the best experimental result is bolded, and the second-best value is marked with an underscore (_).
Table 6. Experimental results on 3sources datasets (%).
3sources | ACC | ARI | NMI | Precision | F | Purity
SC1 | 30.3 ± 0.77 | −2.87 ± 0.42 | 6.34 ± 0.73 | 22.06 ± 0.19 | 34.37 ± 0.53 | 36.45 ± 0.9
SC2 | 37.4 ± 0.77 | 4.58 ± 0.44 | 10.37 ± 1.6 | 25.19 ± 0.2 | 38.27 ± 0.2 | 39.76 ± 1.28
SC3 | 31.95 ± 0 | −2 ± 0.19 | 7.07 ± 0.62 | 22.42 ± 0.08 | 35 ± 0.23 | 37.63 ± 0.79
Featconcat | 31.01 ± 1.36 | −0.37 ± 1.36 | 5.45 ± 2.12 | 23.09 ± 0.77 | 27.54 ± 1.84 | 37.28 ± 1.82
AMGL | 34.02 ± 2.69 | −1.66 ± 1.45 | 7.2 ± 2.95 | 22.58 ± 0.6 | 34.78 ± 0.56 | 39.25 ± 2.73
MVGL | 30.77 ± 0 | −3.38 ± 0 | 6.6 ± 0 | 21.86 ± 0 | 34.17 ± 0 | 37.87 ± 0
ASMV | 69.82 ± 4.7 | 60.01 ± 7.14 | 64.07 ± 4.56 | 65.99 ± 6.45 | 69.84 ± 5.23 | 77.51 ± 3.75
GBS | 69.23 ± 0 | 44.31 ± 0 | 54.8 ± 0 | 48.44 ± 0 | 60.47 ± 0 | 74.56 ± 0
CoMSC | 64.93 ± 4.39 | 53.44 ± 5.59 | 62.41 ± 3.63 | 68.11 ± 4.98 | 63.54 ± 4.4 | 78.27 ± 3.18
CDMGC | 34.91 ± 0 | −1.26 ± 0.05 | 6.31 ± 0.26 | 22.73 ± 0.02 | 35.77 ± 0.08 | 39.35 ± 0.31
MVCS-CP | 78.11 ± 0.74 | 65.86 ± 1.27 | 71.62 ± 1.41 | 80.31 ± 3.49 | 73.49 ± 0.69 | 85.21 ± 0.56
The value with the best experimental result is bolded, and the second-best value is marked with an underscore (_).
Table 7. Experimental results on BBC datasets (%).
BBC | ACC | ARI | NMI | Precision | F | Purity
SC1 | 33.11 ± 2.04 | −1.4 ± 0.71 | 7.73 ± 2.46 | 22.88 ± 0.29 | 35.09 ± 0.39 | 36.15 ± 3.35
SC2 | 31.53 ± 0 | −0.66 ± 0 | 1.24 ± 0.13 | 23.2 ± 0 | 37.26 ± 0 | 33.02 ± 0.07
SC3 | 30.92 ± 1.38 | −0.71 ± 0.37 | 2.1 ± 0.16 | 23.17 ± 0.15 | 36.84 ± 0.6 | 33.28 ± 0.21
SC4 | 33.75 ± 0.28 | −0.29 ± 0.13 | 2.71 ± 0.32 | 23.34 ± 0.05 | 37.24 ± 0.13 | 35.07 ± 0.52
Featconcat | 33.26 ± 0.12 | −0.23 ± 0.03 | 1.19 ± 0.07 | 23.37 ± 0.01 | 37.59 ± 0.03 | 34.01 ± 0.18
AMGL | 35.66 ± 2.75 | 0.88 ± 1.22 | 2.23 ± 1.28 | 23.83 ± 0.51 | 37.22 ± 0.45 | 36.66 ± 2.93
MVGL | 35.04 ± 0 | 0.24 ± 0 | 3.82 ± 0 | 23.55 ± 0 | 37.49 ± 0 | 36.35 ± 0
ASMV | 63.94 ± 1.2 | 46.07 ± 3.02 | 46.82 ± 1.25 | 50.86 ± 0.8 | 60 ± 3.33 | 64.09 ± 1.21
GBS | 69.34 ± 0 | 47.89 ± 0 | 48.52 ± 0 | 50.12 ± 0 | 63.33 ± 0 | 69.34 ± 0
CoMSC | 70.18 ± 5.63 | 45.72 ± 8.07 | 51.49 ± 6.53 | 60.36 ± 6.92 | 57.99 ± 6.06 | 71.77 ± 3.89
CDMGC | 31.53 ± 1.24 | −0.69 ± 0.09 | 1.08 ± 1.03 | 23.19 ± 0.03 | 36.93 ± 0.13 | 32.99 ± 1.16
MVCS-CP | 74.89 ± 0.15 | 52.64 ± 0.19 | 51.76 ± 0.28 | 64.67 ± 0.11 | 63.56 ± 0.16 | 74.89 ± 0.15
The value with the best experimental result is bolded, and the second-best value is marked with an underscore (_).
Table 8. Experimental results on BBC_Sport datasets (%).
BBC_Sport | ACC | ARI | NMI | Precision | F | Purity
SC1 | 35.59 ± 0.1 | −0.07 ± 0.06 | 1.33 ± 0.05 | 23.83 ± 0.02 | 38.25 ± 0.04 | 36.54 ± 0.08
SC2 | 36.76 ± 0 | 0.36 ± 0.02 | 1.78 ± 0.06 | 23.99 ± 0.01 | 38.41 ± 0 | 37.1 ± 0.08
Featconcat | 36.04 ± 0.38 | 0.12 ± 0.12 | 1.4 ± 0.22 | 23.9 ± 0.05 | 38.27 ± 0.08 | 36.84 ± 0.28
AMGL | 36.21 ± 0 | 0.15 ± 0 | 1.34 ± 0.3 | 23.91 ± 0 | 38.42 ± 0.04 | 36.58 ± 0
MVGL | 39.15 ± 0 | 1.89 ± 0 | 6.98 ± 0 | 24.59 ± 0 | 39.07 ± 0 | 39.52 ± 0
ASMV | 69.12 ± 6.7 | 40.78 ± 5.49 | 39.26 ± 5.09 | 48.07 ± 4.8 | 57.76 ± 3.04 | 69.3 ± 5.95
GBS | 80.7 ± 0 | 72.18 ± 0 | 72.26 ± 0 | 72.71 ± 0 | 79.43 ± 0 | 84.38 ± 0
CoMSC | 88.6 ± 0.81 | 72.37 ± 2.66 | 71.63 ± 1.84 | 80.28 ± 0.99 | 78.84 ± 2.15 | 88.6 ± 0.81
CDMGC | 36.03 ± 0.19 | 0.06 ± 0.14 | 1.43 ± 0.06 | 23.88 ± 0.05 | 38.33 ± 0.09 | 36.76 ± 0.19
MVCS-CP | 93.75 ± 0.41 | 84.18 ± 0.65 | 81.77 ± 0.44 | 88.2 ± 1.88 | 87.94 ± 0.78 | 93.75 ± 0.63
The value with the best experimental result is bolded, and the second-best value is marked with an underscore (_).
Table 9. Experimental results on 100leaves datasets (%).
100leaves | ACC | ARI | NMI | Precision | F | Purity
SC1 | 41.78 ± 1.23 | 28.47 ± 1.17 | 67.71 ± 0.43 | 26.94 ± 1.16 | 29.2 ± 1.16 | 44.39 ± 1.29
SC2 | 33.3 ± 1.04 | 20.85 ± 0.84 | 62.44 ± 0.76 | 18.51 ± 0.76 | 21.72 ± 0.83 | 36.28 ± 1
SC3 | 45.96 ± 2.08 | 31.41 ± 1.85 | 70.13 ± 0.87 | 29.73 ± 1.91 | 32.1 ± 1.83 | 48.85 ± 1.86
Featconcat | 62.91 ± 2.45 | 52.85 ± 2.42 | 82.01 ± 1.03 | 49.96 ± 2.59 | 53.32 ± 2.39 | 66.23 ± 2.15
AMGL | 77.58 ± 2.5 | 47.47 ± 11.8 | 87.87 ± 2.17 | 34.87 ± 11.62 | 48.18 ± 11.58 | 81.25 ± 1.94
MVGL | 81.06 ± 0 | 51.55 ± 0 | 89.12 ± 0 | 37.95 ± 0 | 52.17 ± 0 | 83.31 ± 0
ASMV | 48.5 ± 0.41 | 23.8 ± 0.59 | 71.38 ± 0.51 | 16.36 ± 0.37 | 24.89 ± 0.19 | 54.06 ± 0.58
GBS | 82.44 ± 0 | 57.11 ± 0 | 91.15 ± 0 | 42.67 ± 0 | 57.65 ± 0 | 85.13 ± 0
CoMSC | 88.5 ± 6.83 | 86.56 ± 6.95 | 95.95 ± 4.84 | 82.92 ± 6.83 | 86.69 ± 6.2 | 90.88 ± 5.49
CDMGC | 88.61 ± 1.34 | 76.15 ± 9.08 | 94.54 ± 1.1 | 66.56 ± 12.45 | 76.42 ± 8.95 | 89.93 ± 1.04
MVCS-CP | 91.5 ± 0.74 | 86.82 ± 0.17 | 95.39 ± 0.13 | 84.1 ± 0.43 | 86.95 ± 0.16 | 92 ± 0.32
The value with the best experimental result is bolded, and the second-best value is marked with an underscore (_).
Table 10. Experimental results on Scene15 datasets (%).
Scene15 | ACC | ARI | NMI | Precision | F | Purity
SC1 | 34.69 ± 0.7 | 19.64 ± 0.26 | 36.53 ± 0.19 | 24.87 ± 0.38 | 25.29 ± 0.23 | 40.08 ± 0.56
SC2 | 25.39 ± 0.43 | 10.05 ± 0.14 | 21.86 ± 0.29 | 14.06 ± 0.22 | 18.01 ± 0.1 | 27.87 ± 0.54
SC3 | 22.7 ± 0.99 | 8.82 ± 0.6 | 19.89 ± 0.18 | 14.77 ± 0.56 | 15.38 ± 0.57 | 28.54 ± 0.34
Featconcat | 14.46 ± 0.59 | 1.58 ± 0.33 | 10.49 ± 1.05 | 7.68 ± 0.17 | 13.78 ± 0.16 | 17.33 ± 0.66
AMGL | 32.78 ± 2.41 | 15.1 ± 1.73 | 30.79 ± 1.84 | 16.66 ± 1.6 | 23.38 ± 1.16 | 34.06 ± 2.09
MVGL | 23.21 ± 0 | 6.01 ± 0 | 20.44 ± 0 | 10 ± 0 | 17.16 ± 0 | 24.41 ± 0
ASMV | 34.09 ± 0.41 | 17.52 ± 0.48 | 33.74 ± 0.51 | 22.38 ± 0.54 | 23.51 ± 0.41 | 38.86 ± 0.69
GBS | 14 ± 0 | 0.42 ± 0 | 5.82 ± 0 | 7.11 ± 0 | 13.17 ± 0 | 14.65 ± 0
CoMSC | 43.15 ± 2.69 | 25.86 ± 1.97 | 41.24 ± 1.39 | 30.72 ± 2.02 | 47.29 ± 2.53 | 31.04 ± 1.79
CDMGC | 12.44 ± 0.73 | 0.19 ± 0.13 | 3.99 ± 0.84 | 7 ± 0.06 | 13.01 ± 0.09 | 12.97 ± 0.71
MVCS-CP | 45.84 ± 2.12 | 26.71 ± 1.01 | 41.18 ± 0.39 | 30.85 ± 1.01 | 31.95 ± 0.92 | 47.42 ± 0.81
The value with the best experimental result is bolded, and the second-best value is marked with an underscore (_).
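As a reading aid for Tables 2–10: ACC, ARI, NMI, precision, F and purity are standard external clustering indices. The sketch below shows one common way to compute ACC and purity; it is illustrative code under simple assumptions (integer labels starting at 0), not the authors' evaluation script, and the function names clustering_accuracy and purity are ours. ARI and NMI come directly from scikit-learn.
```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.metrics import adjusted_rand_score, normalized_mutual_info_score

def clustering_accuracy(y_true, y_pred):
    """ACC: accuracy under the best one-to-one cluster-to-class mapping,
    found with the Hungarian algorithm."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    k = max(y_true.max(), y_pred.max()) + 1
    cost = np.zeros((k, k), dtype=np.int64)
    for t, p in zip(y_true, y_pred):
        cost[p, t] += 1                                   # contingency counts
    row, col = linear_sum_assignment(cost.max() - cost)   # maximize matches
    return cost[row, col].sum() / y_true.size

def purity(y_true, y_pred):
    """Purity: every cluster is credited with its majority class."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    hits = 0
    for c in np.unique(y_pred):
        _, counts = np.unique(y_true[y_pred == c], return_counts=True)
        hits += counts.max()
    return hits / y_true.size

# ARI and NMI directly from scikit-learn:
# adjusted_rand_score(y_true, y_pred)
# normalized_mutual_info_score(y_true, y_pred)
```
Note the design difference between the two: ACC matches clusters to classes one-to-one via the Hungarian assignment, whereas purity allows several clusters to share a majority class, which is why purity is typically at least as large as ACC.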
Table 11. Performance comparison of running time on nine real-world datasets.
Time (s) | AMGL | MVGL | ASMV | GBS | CoMSC | CDMGC | MVCS-CP
Caltech101-20 | 80.954 | 662.011 | 317.870 | 28.231 | 562.931 | 22.846 | 3.411
Caltech101-7 | 21.376 | 3150.587 | 288.339 | 8.102 | 282.196 | 51.511 | 1.467
NUS | 144.625 | 545.729 | 349.962 | 27.655 | 81.597 | 144.333 | 2.333
ORL | 0.809 | 5.115 | 8.740 | 0.459 | 4.441 | 2.612 | 0.062
3sources | 1.436 | 0.528 | 3.918 | 0.193 | 0.683 | 0.354 | 0.028
BBC | 4.837 | 8.776 | 22.629 | 29.819 | 9.583 | 5.269 | 6.742
BBC_Sport | 2.387 | 4.753 | 8.214 | 6.199 | 1.628 | 3.054 | 1.552
100leaves | 47.818 | 90.341 | 449.571 | 5.849 | 175.703 | 40.080 | 1.204
Scene15 | 616.978 | 3485.092 | 1100.982 | 97.190 | 432.381 | 641.264 | 8.988
Table 12. The number of neighbors for each view on the nine datasets (- means null).
Datasets | d1 | d2 | d3 | d4 | d5 | d6
Caltech101-20 | 18 | 18 | 21 | 33 | 25 | 33
Caltech101-7 | 17 | 16 | 21 | 28 | 31 | 27
NUS | 13 | 2 | 4 | 5 | 5 | 16
ORL | 7 | 10 | 8 | 19 | - | -
3sources | 8 | 8 | 8 | - | - | -
BBC | 16 | 13 | 9 | 15 | - | -
BBC_Sport | 14 | 16 | - | - | - | -
100leaves | 17 | 26 | 14 | - | - | -
Scene15 | 24 | 18 | 31 | - | - | -
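The per-view neighbor counts above are determined by the parameter-free natural neighbor search of Zhu et al. [52]. The following is a minimal sketch of that search under simple assumptions (one NumPy feature matrix per view, Euclidean distance); the names natural_neighbor_lambda and max_r are ours, not from the paper.
```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def natural_neighbor_lambda(X, max_r=50):
    """Parameter-free natural-neighbor search (sketch of Zhu et al. [52]).

    Grows the neighborhood round r until the number of points that are
    nobody's r-nearest neighbor stabilizes; the final r is the adaptive
    neighbor count (the natural-neighbor eigenvalue lambda)."""
    n = X.shape[0]
    k = min(max_r + 1, n)
    nn = NearestNeighbors(n_neighbors=k).fit(X)
    _, idx = nn.kneighbors(X)               # idx[:, 0] is each point itself
    reverse_count = np.zeros(n, dtype=int)  # how often each point is chosen
    prev_orphans = -1
    for r in range(1, k):
        for i in range(n):
            reverse_count[idx[i, r]] += 1   # point i selects its r-th neighbor
        orphans = int(np.sum(reverse_count == 0))
        # Stop when every point has a reverse neighbor, or when the number
        # of orphan points no longer changes between rounds.
        if orphans == 0 or orphans == prev_orphans:
            return r
        prev_orphans = orphans
    return max_r

# Hypothetical usage, one adaptive neighbor count per view as in Table 12:
# lambdas = [natural_neighbor_lambda(X_v) for X_v in views]
```
Because each view runs the search on its own features, different views of the same dataset can end up with different neighbor counts, consistent with the varying values reported in Table 12.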
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
