Article

A Clustering Algorithm Based on the Detection of Density Peaks and the Interaction Degree Between Clusters

1 Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650500, China
2 Artificial Intelligence Key Laboratory of Yunnan Province, Kunming University of Science and Technology, Kunming 650500, China
3 City College, Kunming University of Science and Technology, Kunming 650051, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(7), 3612; https://doi.org/10.3390/app15073612
Submission received: 21 January 2025 / Revised: 18 March 2025 / Accepted: 20 March 2025 / Published: 25 March 2025
(This article belongs to the Special Issue Trusted Service Computing and Trusted Artificial Intelligence)

Abstract

To cope with data of irregular shape and uneven density, this paper proposes a two-phase clustering algorithm based on detecting dimensional density peaks and the interaction degree between clusters (CPDD-ID). In the partitioning phase, the local densities of the data in each dimension are calculated using kernel density estimation, density curves are constructed from these densities, and the peaks of the density curves serve as benchmarks for a Kd-Tree that assigns each data point to its nearest peak, yielding the initial sub-clusters. The final sub-clusters are then obtained by intersecting the initial sub-clusters from all dimensions. The proposed partitioning strategy accurately identifies clusters with density differences and is particularly effective on data with irregular shapes and uneven densities. In addition, a new similarity measure based on the interaction degree between clusters is proposed for the merging stage. This measure iteratively merges the sub-clusters with maximum similarity by evaluating the interaction degree of shared k-nearest neighbors between neighboring sub-clusters, and it effectively handles highly overlapping clusters and ambiguous boundaries. The proposed algorithm is tested in detail on 10 synthetic datasets and 10 UCI real datasets and compared with existing state-of-the-art algorithms. The experimental results show that the CPDD-ID algorithm accurately identifies the underlying cluster structures and exhibits excellent clustering accuracy.

1. Introduction

Cluster analysis is an unsupervised machine learning algorithm that explores potential relationships and rules in a dataset based on the similarities between data, and its primary task is to partition the data into a number of clusters while ensuring that the data within the same cluster have high similarity. In order to generate natural groups that maximize intra-cluster similarity and minimize inter-cluster similarity, clustering algorithms are widely used in a variety of fields, including image segmentation [1], bioinformatics [2,3], cybersecurity [4], computer vision [5], and so on. In recent years, clustering algorithms based on different strategies have been proposed and categorized into different types.
K-means [6] is the most representative partition-based clustering algorithm; it can effectively deal with spherical datasets, but it cannot handle datasets with complex structures, and the number of clusters needs to be selected manually. Unlike k-means, density-based spatial clustering of applications with noise (DBSCAN) [7] is a density-based clustering algorithm that can recognize clusters of arbitrary shapes, but it performs poorly when dealing with clusters with uneven density or ambiguous boundaries between clusters. Density peak clustering (DPC) [8] is a density-based clustering algorithm proposed by Rodriguez and Laio that rapidly clusters data points by manually selecting cluster centers from a decision graph and assigning each remaining data point to its nearest neighbor of higher density, based on the two assumptions that cluster centers have a high local density and that different cluster centers are far away from each other. Although DPC has achieved good results on non-spherical datasets, it is difficult for it to find the correct clustering centers in manifold and density-heterogeneous datasets.
Hierarchical clustering algorithms utilize distance or similarity metrics to build a tree-like hierarchy [9]. In particular, agglomerative hierarchical clustering, such as balanced iterative reducing and clustering using hierarchies (BIRCH) [10], treats each data point as a cluster and merges them bottom-up to obtain a hierarchical structure, whereas split hierarchical clustering [11] starts from all data points as a single cluster and iteratively partitions it into smaller clusters. A hierarchical clustering algorithm using dynamic modeling (Chameleon) [12] essentially belongs to agglomerative hierarchical clustering; it finds the real clustering result by constructing k-nearest neighbor graphs, using a graph partitioning algorithm to partition the dataset into a number of subclusters, and then iteratively merging subclusters according to the connectivity between them. It can handle complex structured datasets of arbitrary shape and size; however, the process is quite time-consuming when dealing with high-dimensional datasets or those with an excessive amount of data. Cheng et al. proposed a local core-based hierarchical clustering algorithm (HCLORE) [13], which first partitions the dataset into small subclusters by finding local cores and then merges the subclusters based on a newly defined similarity. Although HCLORE is able to recognize complex cluster structures, it is sensitive to cluster structures with ambiguous boundaries between clusters.
In order to effectively deal with the cluster structure with uneven density and the problem of ambiguous inter-cluster boundaries, we propose a two-phase clustering algorithm based on detecting density peaks in dimensions and measuring the degree of interaction between clusters (CPDD-ID). Firstly, we detect the local density peaks of each dimension for subclustering based on the proposed partitioning strategy. Secondly, the degree of interaction between subclusters is evaluated to define a new similarity metric to merge the subclusters. Lastly, we iteratively merge the subclusters with maximum similarity to achieve the desired number of clusters. The specific algorithmic process will be described later. In comparison to existing methods, the proposed algorithm possesses the following characteristics:
  • This paper proposes a new partitioning strategy. Unlike existing clustering algorithms that use only representative dimensions for partitioning, our partitioning strategy is based on all dimensions of the data. It utilizes a non-parametric density measure to calculate the local density peaks of all dimensions in the dataset and takes each peak as the division benchmark for the first stage of subcluster division. In the second stage, the final subclusters are obtained by intersecting the results of each dimension. This partitioning strategy can accurately identify clusters with density differences, avoiding the problem that traditional density-based clustering algorithms and their variants incorrectly partition sparse clusters into dense clusters. Furthermore, the proposed partitioning strategy is effective in identifying clusters with irregular shapes and uneven densities, enabling the adaptive partitioning of subclusters.
  • This paper proposes a new similarity metric for merging subclusters from the perspective of structural similarity. Specifically, a new similarity metric is defined for evaluating the degree of interaction of shared nearest neighbors among the subclusters to be merged and iteratively merging the subclusters with the maximum similarity until the desired number of clusters is achieved. This merging strategy effectively addresses the issues of ambiguous boundaries and the high overlap between subclusters, enhancing the accuracy of clustering performance.
  • The CPDD-ID algorithm was tested on 20 benchmark datasets, and the comparison results with the existing clustering algorithms show that the proposed algorithm achieves good performance in discovering the underlying data structures in all cases.
The rest of the paper is organized as follows: Section 2 reviews the related work. Section 3 describes the algorithm proposed in this paper in detail. Section 4 compares the proposed algorithm with six advanced clustering algorithms and presents the experimental results. Section 5 summarizes the whole paper.

2. Related Work

2.1. Density Peak Clustering Algorithm

The DPC algorithm is a density-based clustering algorithm published in 2014 that has been widely used in various scenarios and has achieved good results based on its two core assumptions: cluster centers have a high local density, and different cluster centers are far away from each other. The clustering centers of the DPC algorithm can be selected manually from a decision graph with two important attributes: the horizontal axis represents the local density, while the vertical axis indicates the relative distance, allowing the identification of density peaks and outliers. In addition, its data point allocation strategy is based on single-chain label propagation, which assigns each data point to its nearest neighbor of higher local density according to the nearest-neighbor principle. In recent years, density-based clustering algorithms have gradually become mainstream. For example, Bryant et al. [14] proposed a density-based clustering algorithm called the density-based clustering algorithm using reverse nearest neighbor density estimates (RNN-DBSCAN) based on DBSCAN, which uses reverse nearest neighbors to estimate the local density and traverses the data points based on the k-nearest neighbor graph. Compared to DBSCAN, it uses only one parameter (the number of nearest neighbors k) to estimate the density and also demonstrates superiority in dealing with variable-density datasets, but it performs poorly on datasets with ambiguous boundaries. Cheng et al. proposed a clustering algorithm called the local density peak and minimum spanning tree clustering algorithm (LDP-MST) [15], which utilizes the minimum spanning tree and the density peaks. It constructs a minimum spanning tree using local density peaks and then iteratively splits the longest edges until the termination condition is reached, which makes it robust to datasets with complex structures. Fan et al. [16] integrated the idea of nearest-neighbor graphs into DPC, calculating the local densities and distances through an improved mutual k-nearest-neighbor graph and using the two assumptions of DPC to constrain and select the clustering centers. The proposed algorithm ensures the correct selection of suitable clustering centers and effectively improves the clustering performance. Ding et al. [17] calculated the local density and relative distance based on the natural neighbors to classify the subclusters, which can effectively eliminate the effect of the cutoff parameter on the clustering results; a novel merging strategy is then utilized to merge the subclusters generated in the division stage until the termination condition is satisfied. The Euclidean distance is a commonly used distance metric for density clustering algorithms, but it ignores the contribution of individual features to the similarity and clustering of data points. In order to address this limitation, Xie et al. proposed standard deviation weighted distance and fuzzy weighted k-nearest neighbor-based density peak clustering (SFKNN-DPC) [18], which enhances the Euclidean distance by utilizing standard-deviation-weighted distances, a strategy that takes into account the specific contribution of each individual feature to the similarity between data points.
To address clustering with multiple density peaks, Rasool et al. [19] proposed an effective data relevance measure based on probability mass, named the density peak clustering algorithm based on probability mass (MP-DPC), and integrated it into the DPC algorithm. To avoid the domino effect inherent in DPC's single-allocation strategy, where an error in assigning one point leads to the misallocation of other data points, Qin et al. [20] utilized label propagation and the Jaccard coefficient to propose a two-step allocation strategy for measuring the similarity between data points, which is concerned only with the k points of highest similarity. The first step assigns labels to the data points near the cluster centers, while the second step completes the label assignment for the remaining points based on the nearest labeled data to each unassigned sample. Guo et al. [21] proposed a connectivity-based density peak clustering algorithm called density peak clustering with connectivity estimation (DPC-CE), which selects local centers with higher relative distances to further compute cluster centers and then introduces a neighbor graph strategy to calculate the connectivity information between cluster centers. This method not only overcomes the domino effect but also adapts to clusters with uneven densities. Although the aforementioned algorithms have achieved good clustering results, they do not provide an effective solution to the problem of parameter fine-tuning within the algorithms, and selecting appropriate parameters for data without true labels is quite challenging. In response to this issue, parameter-free density peak clustering algorithms have also been proposed. García-García et al. [22] described an optimized adaptive methodology that utilizes clustering validity indices as the objective function for the adaptive parameters of DPC. The results indicate that this approach is not only suitable for the DPC algorithm but also effective for its derivative algorithms. Zhu et al. [23] proposed a density-based hierarchical clustering algorithm by combining the ideas of DBSCAN and density-connectivity DBSCAN, which recognizes clusters of arbitrary shape by generating a tree diagram. Inspired by the vagueness and uncertainty in clustering, Bian et al. [24] proposed the concept of fuzzy density peaks, which expresses the density of a sample point as a coupling of fuzzy distances between the sample point and its neighbors, and designed an improved density peak clustering algorithm based on fuzzy operators. Experiments show that the proposed algorithm performs better than most density-based clustering algorithms. Wang et al. [25] modified the selection of clustering centers in DPC by identifying the density peak points as the clustering centers. Liu et al. [26] proposed a fast density peak identification method named shared-nearest-neighbor-based clustering by fast search and find of density peaks (SNN-DPC), which completes the clustering by considering the nearest-neighbor structure of the sample points and overcomes the inability of the traditional density peak algorithm to handle variable-density clusters effectively, but it is sensitive to noise.
Although the above algorithms effectively overcome the problems of DPC, there are still shortcomings: when the data samples have obvious density differences and are close to each other, it is difficult to find the correct clustering center in the low-density region, and the normal sample points in the low-density region can easily be identified as noise points. Compared to existing methods, the dimension density-peak-based partitioning method proposed in this paper can accurately detect cluster structures with density differences, and the dense and sparse clusters can be divided correctly when they are close to each other.

2.2. Hierarchical Clustering

Hierarchical clustering is another significant technique in unsupervised clustering, which analyzes the similarity or dissimilarity between clusters according to a hierarchical structure and mainly includes split hierarchical clustering and cohesive (agglomerative) hierarchical clustering. Split hierarchical clustering starts from all data points as a single cluster and splits it step by step according to dissimilarity, while cohesive hierarchical clustering measures the similarity between the initial clusters to be merged and merges them step by step. This paper focuses on cohesive hierarchical clustering.
Ros et al. [27] proposed a cohesive hierarchical clustering algorithm combining mutual neighboring and hierarchical approaches with a new selection criterion (KdMutual), driven by the number of clusters, in three steps: the first step identifies core subclusters using mutual neighborhoods, the second step deals with outliers, and the last step selects the initial clusters to construct the resulting partition using an ordering criterion. KdMutual combines the features of density peaks as well as structural similarities and is suitable for high-dimensional datasets. In order to avoid the influence of outliers on the clustering results, Cheng et al. [15] proposed a hierarchical clustering algorithm based on noise removal (HCBNR), which is capable of discovering arbitrarily shaped clusters. The algorithm initially removes noise points based on natural neighbors, then constructs a mutual nearest neighbor graph with the remaining points to partition them into sub-clusters; eventually, sub-clusters are iteratively merged based on their similarity measures until the desired number of clusters is achieved. Han et al. [28] proposed an efficient hierarchical clustering algorithm that uses a scalable sample set kernel to measure the similarity between existing clusters in the clustering tree and new samples in the data stream; moreover, the hierarchical structure can be automatically updated while iteratively merging clusters. Yang et al. [29] proposed a novel hierarchical clustering algorithm based on density-distance cores (HCDC) for variable-density clustering, which first selects a density-distance representative point for each data point from candidate points, then selects the density-distance cores from all the representative points, and completes the clustering using a newly proposed distance. The experimental results prove that HCDC has good performance on complex structured datasets as well as variable-density datasets. Hulot et al. [30] proposed an algorithm that combines a tree aggregation method with hierarchical clustering, which merges clusters by aggregating tree structures that have the same leaves.
Overall, unlike the similarity measures proposed by the hierarchical clustering algorithms described above, the similarity measure proposed in this paper takes into account the structural composition between clusters. Subcluster pairs with a large number of shared nearest neighbors are usually considered to belong to the same cluster. Moreover, the similarity metric based on shared k-nearest neighbors can address the problems of high overlap between clusters and ambiguous boundaries.

2.3. Partition–Merge Clustering Algorithm

Clustering algorithms based on the partition–merge strategy have gained favor among researchers for their ability to overcome the limitations of traditional clustering methods. Clustering by combining k-means with the density- and distance-based method (KMDD) [31] is a classical clustering algorithm following the partition–merge strategy, which combines k-means and density in order to quickly find the structure of the data in the space. In the partitioning phase, KMDD uses k-means to divide the dataset into a number of smaller spherical clusters and assigns the corresponding local densities. In the merging phase, the initial subclusters are merged by manually selecting the cluster centers from the decision graph, following the observation that the density of a cluster core is higher than that of neighboring subclusters at a greater distance. KMDD accurately identifies clusters of various shapes, but choosing the optimal k value remains a challenge for the algorithm. To address the limitations of KMDD, Yuan et al. [32] proposed a clustering algorithm called Beyond K-Means++, which explores clusters through local geometric information. To overcome the dependence on k, the algorithm automatically increases or decreases k starting from 1 as clusters split and merge, thereby not only avoiding k-dependence but also enhancing clustering accuracy. To address the DPC algorithm's poor performance on manifold datasets and its high dependence on the cutoff parameter, Ding et al. [17] proposed a division and merging algorithm called an improved density peaks clustering algorithm based on natural neighbor with a merging strategy (IDPC-NNMS), which combines peak density and natural nearest neighbors. IDPC-NNMS adaptively obtains the local densities by accurately identifying the natural neighbors of each data point and then defines a novel merging strategy to merge the subclusters. Cheng et al. [33] proposed a novel projection-based split-and-merge clustering algorithm (PSM), which extends projection techniques to K-means to select the initial cluster centers and then determines the distribution of clusters by the distribution of density profiles. Experiments show that PSM performs well under strict data conditions.

3. The Proposed Algorithm

This paper proposes a clustering algorithm based on detecting density peaks in each dimension and measuring the degree of interaction between clusters (CPDD-ID). The algorithm consists of two main phases. (1) In the partitioning phase, kernel density estimation is first used to calculate the local density of the points in each dimension, and the local densities of all dimensions are used to construct kernel density curves, which reflect the density distribution of the data in each dimension. Secondly, the coordinates of the peaks of the kernel density curves are marked in each dimension, and a Kd-Tree is established to quickly and accurately assign the data points to their nearest peaks. Lastly, the intersection over all dimensions is taken to adaptively divide reasonable subclusters. (2) In the merging phase, the interaction degree of shared neighbors between subclusters is calculated to define a novel similarity metric, and the subclusters with the highest similarity are iteratively merged until the desired number of clusters is reached. The main clustering process is illustrated in Figure 1.

3.1. Partitioning Phase

In the partitioning stage, a partitioning strategy that considers both density and distance is proposed, which aims to partition the data into high-density regions and ensure that the data points in the same dense region have highly similar densities. The details are as follows. Before calculating local densities, box plots are used to remove outliers from the data to prevent them from affecting the partitioning results; the filtered data consist of the points within the upper and lower limits of the box plots, and the calculations for these limits are provided in Equations (1) and (2).
$UB_{d_i} = Q3_{d_i} + e \cdot IQR_{d_i}$  (1)
$LB_{d_i} = Q1_{d_i} - e \cdot IQR_{d_i}$  (2)
where $LB_{d_i}$ and $UB_{d_i}$ represent the lower and upper bounds of the $d_i$-th dimension, respectively; $Q1_{d_i}$ and $Q3_{d_i}$ denote the lower quartile and the upper quartile of the $d_i$-th dimension; and $IQR_{d_i} = Q3_{d_i} - Q1_{d_i}$ is used to define the range of outliers. In addition, based on the statistical properties of the normal distribution, $e$ is set to 1.5 [34], because under a normal distribution only about 0.7 percent of the samples fall outside $LB_{d_i}$ and $UB_{d_i}$ and are considered outliers. This threshold balances sensitivity and robustness.
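For concreteness, the box-plot filtering of Equations (1) and (2) can be sketched in a few lines of NumPy. The function names and the choice to keep only points that lie inside the bounds in every dimension are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def boxplot_bounds(X, e=1.5):
    """Per-dimension lower/upper bounds LB, UB from Equations (1) and (2)."""
    q1 = np.percentile(X, 25, axis=0)      # lower quartile Q1 per dimension
    q3 = np.percentile(X, 75, axis=0)      # upper quartile Q3 per dimension
    iqr = q3 - q1                          # inter-quartile range IQR
    return q1 - e * iqr, q3 + e * iqr      # (LB, UB)

def remove_outliers(X, e=1.5):
    """Split X into filtered points and outliers (points outside the bounds)."""
    X = np.asarray(X, dtype=float)
    lb, ub = boxplot_bounds(X, e)
    inside = np.all((X >= lb) & (X <= ub), axis=1)   # assumption: keep a point only
    return X[inside], X[~inside]                     # if it is inside in every dimension
```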
Secondly, according to kernel density estimation, the local densities of the points are calculated after removing the outliers, a density curve is constructed to detect the density peaks in each dimension, and the peak coordinates of the density curve are used as benchmarks for a Kd-Tree that quickly gathers the surrounding data points into the initial sub-cluster division. The intersection of the partitioning results from each dimension is then taken to obtain several subclusters.

3.1.1. Local Density Is Calculated Using Kernel Density Estimation

Kernel density estimation [35] (KDE) is a nonparametric density estimation function that can estimate the probability distribution of data without assuming a density distribution. The motivation of this paper is to partition the subclusters using KDE to evaluate the density differences of the data in each dimension. The kernel density estimation function can be expressed as Equation (3):
$f(x) = \frac{1}{2nh} \sum_{i=1}^{n} \exp\left(-\frac{\left|x - x_i\right|}{h}\right)$  (3)
where $h$ represents the kernel density bandwidth, and the optimal bandwidth $h_{d_v}$ [35] is defined as follows:
$h_{d_v} = 0.9\, n^{-\frac{1}{d+2}} \min\left(\mathrm{Std}(X_v),\ \frac{\mathrm{IQR}(X_v)}{8}\right)$  (4)
where $n$ represents the data size, $h_{d_v}$ ($v = 1, 2, 3, \ldots, V$) denotes the bandwidth of the $v$-th dimension, and $IQR$ is the same as defined above.
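A minimal NumPy sketch of the per-dimension density curve is given below. It assumes the Laplace-kernel reading of Equation (3) and the bandwidth rule as reconstructed in Equation (4); the constants and function names are therefore indicative only and should be checked against the original formulas.

```python
import numpy as np

def bandwidth(x, d):
    """Bandwidth for one dimension, following the reconstructed Equation (4)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    iqr = np.percentile(x, 75) - np.percentile(x, 25)
    spread = max(min(np.std(x), iqr / 8.0), 1e-12)   # guard against zero spread
    return 0.9 * spread * n ** (-1.0 / (d + 2))

def kde_curve(x, d, grid_size=512):
    """Density curve f over a regular grid for one dimension (Equation (3))."""
    x = np.asarray(x, dtype=float)
    h = bandwidth(x, d)
    grid = np.linspace(x.min(), x.max(), grid_size)
    # f(g) = 1/(2nh) * sum_i exp(-|g - x_i| / h)
    dens = np.exp(-np.abs(grid[:, None] - x[None, :]) / h).sum(axis=1) / (2 * len(x) * h)
    return grid, dens
```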

3.1.2. Partitioning Initial Sub-Cluster

As previously mentioned, the two-dimensional dataset Aggregation is used as an example to calculate local densities via kernel density estimation and generate density curves. The resulting curves exhibit distinct changes in the density distribution, as illustrated in Figure 2a,b.
It can be seen that the curve presents multiple regions of high density. These regions record the probability distribution of the data density in this dimension, and the peaks indicate where data points are most likely to appear, i.e., where the density reaches a local maximum. These peaks can be used to rationally partition the initial subclusters. The peak coordinate matrix $coordinate = \{coordinate_1, coordinate_2, coordinate_3, \ldots, coordinate_n\}$ is recorded, and subsequent work uses these peak coordinates as the basis for partitioning sub-clusters.
On this basis, all the peaks are utilized, with each of them marked as a separate category. In particular, because of the spatial partitioning properties of the Kd-Tree, it is possible to quickly locate points within a specific region, which helps to quickly identify which points belong to the same density peak while reducing the running time of the algorithm. In this paper, a Kd-Tree is introduced to calculate the distance from all data points to the categories, find the nearest category index corresponding to each data point, and finally assign each point to its nearest-neighbor category, as shown in Figure 3a,b. Meanwhile, data points that are classified into the same category are marked with the same color.
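The peak detection and nearest-peak assignment described above can be sketched with SciPy; the use of scipy.signal.find_peaks as the peak detector and the function name assign_to_peaks are assumptions made here for illustration.

```python
import numpy as np
from scipy.signal import find_peaks
from scipy.spatial import cKDTree

def assign_to_peaks(x, grid, dens):
    """Assign 1-D points to the index of their nearest density peak."""
    x = np.asarray(x, dtype=float)
    peak_idx, _ = find_peaks(dens)            # local maxima of the density curve
    peaks = grid[peak_idx].reshape(-1, 1)     # peak coordinates in this dimension
    tree = cKDTree(peaks)                     # Kd-Tree built on the peak coordinates
    _, labels = tree.query(x.reshape(-1, 1))  # nearest-peak index for every point
    return labels                             # initial sub-cluster label per point
```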

3.1.3. Final Sub-Cluster

This section focuses on the initial subclusters generated in the previous phase and utilizes the idea of intersection to complete the partitioning phase. The density distribution of the data in a single dimension was computed in the previous phase, but a single dimension does not reflect the overall distribution of the data. Therefore, the purpose of this phase is to further process the initial subclusters partitioned by a single dimension: the initial subclusters generated from all dimensions are intersected to obtain the overall distribution, as shown in Figure 4.
It can be seen that after taking the intersection, the initial subclusters are partitioned into a number of subclusters, and each subcluster exhibits a locally dense pattern. The aforementioned steps utilize the characteristics of the kernel density function to automatically detect the optimal number of peaks under the appropriate bandwidth conditions, and they assign the same labels to the most similar data points to achieve the adaptive partitioning of subclusters, which can cope with different types of cluster structures and overcomes the problem that traditional density-based clustering algorithms and their variants incorrectly partition sparse clusters into dense clusters. The main flow of the partition phase is shown in Algorithm 1.
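One way to realize the intersection step is to treat each point's vector of per-dimension peak labels as a key: points share a final sub-cluster only if they were assigned to the same peak in every dimension. The sketch below assumes this reading; intersect_dimensions is a hypothetical helper, not the authors' code.

```python
import numpy as np

def intersect_dimensions(per_dim_labels):
    """Combine per-dimension peak assignments into final sub-cluster labels.

    per_dim_labels: list of D arrays, one per dimension, giving each point's
    nearest-peak index in that dimension (e.g., from assign_to_peaks above).
    """
    stacked = np.stack(per_dim_labels, axis=1)                  # shape (n_points, D)
    _, final_labels = np.unique(stacked, axis=0, return_inverse=True)
    return final_labels                                         # one label per point
```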
Algorithm 1 Noise detection and partition strategy based on dimensional density peaks
Input: Dataset X, Dimensions D
Output: Final subclusters Sub_final, Noises
 1: Initialize S = [X], Sub_final = [ ], peak_points = [ ], Noises = [ ]
 2: for each dimension i ∈ [1, D] do
 3:    Calculate the noises in the i-th dimension of S according to Equations (1) and (2)
 4:    Identify the noises in the i-th dimension and remove them from S
 5:    Append the noises to Noises
 6: end for
 7: for each dimension i ∈ [1, D] do
 8:    Extract the i-th dimension points from S
 9:    Calculate the kernel density estimate (KDE) for the i-th dimension according to Equation (3)
10:    Find all peak points and append them to peak_points
11: end for
12: Initialize subclusters = [ ]
13: for each peak point p ∈ peak_points do
14:    Initialize the subcluster for p = [ ]
15:    for each dimension i ∈ [1, D] do
16:        Use a Kd-Tree to search for the points nearest to peak point p in the i-th dimension
17:        Assign the nearest points to the subcluster for p
18:    end for
19:    Append the subcluster for p to subclusters
20: end for
21: while |subclusters| > 0 do
22:    Sub_final = first subcluster in subclusters
23:    for each subcluster sc ∈ subclusters do
24:        Sub_final = intersection of Sub_final and sc
25:    end for
26: end while
27: return Sub_final, Noises

3.2. Merging Phase

The merging phase aims to merge subclusters that are structurally close and of similar density. In order to obtain the actual clusters, this paper proposes a novel similarity metric called the inter-cluster interaction degree, based on the degree of interaction of the shared nearest neighbors between subclusters. The proposed similarity metric starts from the structural similarity between subclusters and considers the shared nearest neighbors between neighboring subclusters. The neighboring subclusters with maximum similarity are iteratively merged by evaluating the interaction degree of the shared nearest neighbors between the clusters until the desired number of clusters is reached. The merging phase consists of two parts: (1) calculate the inter-cluster interaction degree; (2) iteratively merge subclusters with maximum similarity.

3.2.1. Methods of Similarity Measurement

In this section, a similarity metric based on shared nearest neighbors is proposed to iteratively merge the subclusters with maximum similarity, where the shared nearest neighbors are obtained from the k-nearest neighbors and the reverse k-nearest neighbors. In the proposed merging strategy, the two subclusters to be merged should have a sufficiently small shortest distance and a large number of points in the set of shared nearest neighbors; this motivates the shared k-nearest neighbor, which evaluates similarity by counting the number of shared neighbors in the neighborhoods of the subclusters. In addition, unlike similarity measures that consider only Euclidean distances and therefore rely on global linearity assumptions, shared k-nearest neighbors focus on the local neighborhood structure, take into account the relative positions of the samples in their neighborhoods, and are able to capture local structural and contextual information in the samples. This means that they are able to adapt to data distributions of different densities. Based on the above properties, some definitions are given to describe the new similarity metric.
  • Shared k-nearest neighbor (SKNN): For any two sample points $x_i$ and $x_j$, $KNN(x_i)$ and $KNN(x_j)$ denote the sets of k nearest neighbors of $x_i$ and $x_j$, respectively. $RKNN(x_i)$ refers to the set of data points that include $x_i$ within their respective sets of k-nearest neighbors. In particular, since $KNN$ and $RKNN$ are symmetric neighborhoods and both utilize the Euclidean distance between samples, $RKNN(x_i)$ can be utilized instead of $KNN(x_j)$ to denote the reverse k-nearest neighbors of the sample point $x_i$, as shown in Equation (5); together they form the basis for the shared k-nearest neighbors.
$RKNN(x_i) = \left\{ x_j \in cluster_j \mid x_i \in KNN(x_j) \right\}$  (5)
Therefore, the shared nearest neighbors $SKNN$ of any sample point $x_i$, defined jointly by $KNN$ and $RKNN$, are given in Equation (6):
$SKNN(x_i) = KNN(x_i) \cap RKNN(x_i)$  (6)
where $KNN(x_i)$ denotes the set of k-nearest neighbors of the sample point $x_i$ and $RKNN(x_i)$ denotes the set of reverse k-nearest neighbors of the sample point $x_i$.
  • Shared k-nearest neighbors between clusters: The closer two sample points on the structure are and the higher the degree of shared nearest neighbors, the higher their similarity and the higher the probability that they will be merged. As shown in Figure 5, it can be seen that the higher the degree of shared k-nearest neighbors of neighboring subclusters, the more likely they are to be merged into one cluster.
    According to Equations (5) and (6), the shared k-nearest neighbors of subcluster $cluster_i$ can be defined as in Equation (7):
$SKNN(cluster_i) = KNN(cluster_i) \cap RKNN(cluster_i)$  (7)
Equation (8) denotes the total number of shared k-nearest neighbor sample points of two neighboring subclusters $cluster_i$ and $cluster_j$:
$SNN(cluster_i, cluster_j) = SKNN(cluster_i) + SKNN(cluster_j)$  (8)
  • Shared k-nearest neighbor similarity measure: According to Equations (7) and (8), the similarity measure $Sim(cluster_i, cluster_j)$ of two neighboring subclusters can be defined as in Equation (9):
$Sim(cluster_i, cluster_j) = \frac{SNN(cluster_i, cluster_j)}{nums_{cluster_i}} \cdot \frac{con_{cluster_i}(cluster_j)}{nums_{cluster_i}} \cdot \frac{con_{cluster_j}(cluster_i)}{nums_{cluster_j}}$  (9)
where $nums_{cluster_i}$ denotes the number of data points in subcluster $cluster_i$.
And $con_{cluster_i}(cluster_j) = \sum_{i=1}^{\left|cluster_j\right|} \left|KNN(cluster_i) \cap cluster_j\right|$ denotes the number of k-nearest neighbors of the subcluster $cluster_i$ that fall in the subcluster $cluster_j$. Similarly, $con_{cluster_j}(cluster_i) = \sum_{i=1}^{\left|cluster_j\right|} \left|KNN(cluster_j) \cap cluster_i\right|$ denotes the number of k-nearest neighbor sample points of the subcluster $cluster_j$ that fall in the subcluster $cluster_i$.
In order to have a clearer understanding of the proposed similarity metric, a further analysis of Equation (9) is made. First, S N N denotes the number of shared nearest neighbors between two clusters. This is one of the important metrics to measure the similarity between two clusters. If two clusters have more shared neighbors, they are likely to belong to the same taxon. Secondly, c o n c l u s t e r i ( c l u s t e r j ) denotes the total number of points of k-nearest neighbors of c l u s t e r j that are connected to c l u s t e r i . This term can be viewed as the strength of the connection between two clusters. If a cluster has a large number of points connected to another cluster, then the two clusters are likely to be similar. Lastly, n u m s c l u s t e r i is used to represent the size of c l u s t e r i . When calculating similarity, it is important to consider the size of the cluster because larger clusters are likely to have more connected points. By dividing the cluster size, we can obtain a normalized similarity metric that allows direct comparison of clusters of different sizes.
In summary, the proposed similarity metric takes into account the number of shared nearest neighbors between clusters, the strength of connections, and the size of the clusters, which is helpful in merging subclusters with maximum result similarity.
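The following sketch assembles the ingredients of the similarity measure (k-NN, reverse k-NN, shared neighbors, connection counts, and cluster sizes) with scikit-learn. Because Equation (9) is reconstructed here, the exact way the terms are combined in the last line is an assumption, and the helper names are hypothetical.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def cluster_similarity(X, cluster_i, cluster_j, k=10):
    """Shared-k-NN similarity between two sub-clusters (index arrays into X)."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)          # refit per call: sketch only
    knn = nn.kneighbors(X, return_distance=False)[:, 1:]     # k nearest neighbours, self excluded

    # reverse k-NN: p is in RKNN(q) iff q is in KNN(p)
    rknn = [set() for _ in range(len(X))]
    for p, neigh in enumerate(knn):
        for q in neigh:
            rknn[q].add(p)

    def sknn_size(members):
        # |KNN ∩ RKNN| accumulated over the cluster's points (one reading of Eq. (7))
        return sum(len(set(knn[p]) & rknn[p]) for p in members)

    def con(a, b):
        # number of k-NN links from cluster a that land inside cluster b
        b_set = set(b)
        return sum(len(set(knn[p]) & b_set) for p in a)

    snn = sknn_size(cluster_i) + sknn_size(cluster_j)        # Eq. (8)
    ni, nj = len(cluster_i), len(cluster_j)
    # combination of terms as in the reconstructed Eq. (9)
    return (snn / ni) * (con(cluster_i, cluster_j) / ni) * (con(cluster_j, cluster_i) / nj)
```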

3.2.2. Merging Subclusters

The similarity between all subclusters is calculated by traversing Equation (9). After obtaining the similarities, all subcluster pairs are sorted in descending order of similarity, and the subclusters with maximum similarity and closer distance are merged. The merging process is performed iteratively until the desired number of clusters is reached, which means that new clusters generated during the merging process are evaluated again by the similarity metric to decide whether to merge them. Moreover, the merging process reduces the time complexity when there are more than two subclusters with maximum similarity: if multiple subcluster pairs share the same maximum similarity, as shown in Figure 6, they can be merged simultaneously.
Therefore, in the merging process, not only are two subclusters with maximum similarity merged, but several subclusters with the same maximum similarity and similar distance can be merged at the same time. After traversing all the subclusters, the outliers identified in the partitioning phase are assigned to the nearest neighbor clusters. At this point, the merging process ends. The main flow of the merging phase is shown in Algorithm 2.
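A small sketch of one tie-aware merge step is given below; clusters is assumed to be a dictionary mapping cluster ids to lists of point indices, and similarity a dictionary keyed by frozenset pairs of cluster ids. After each step, the similarities involving the merged clusters would be recomputed, as formalized in Algorithm 2.

```python
def merge_step(clusters, similarity):
    """Merge every pair of clusters tied at the maximum similarity (one iteration)."""
    if not similarity:
        return clusters
    best = max(similarity.values())
    tied = [pair for pair, s in similarity.items() if s == best]
    for pair in tied:
        a, b = tuple(pair)
        if a in clusters and b in clusters:               # both still unmerged in this step
            clusters[a] = clusters[a] + clusters.pop(b)   # absorb b into a
    return clusters                                       # caller recomputes similarities next
```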
Algorithm 2 Merging strategies based on shared nearest neighbors
Input: Subclusters Sub_final, Noise
Output: Label
 1: Initialize similarity_matrix = [ ], Label = [ ]
 2: for each cluster c ∈ Sub_final do
 3:    Calculate KNN_count and RKNN_count for c according to Equation (5)
 4: end for
 5: for each pair (cluster1, cluster2) ∈ combinations(clusters, 2) do
 6:    Calculate shared_KNN_count for cluster1 and cluster2 according to Equations (7) and (8)
 7:    Calculate similarity between cluster1 and cluster2 according to Equation (9)
 8:    Store similarity in similarity_matrix with key (cluster1, cluster2)
 9: end for
10: while |clusters| > 1 do
11:    Initialize merge_list = [ ], max_similarity = 0
12:    for each pair (cluster1, cluster2) ∈ combinations(clusters, 2) do
13:       if similarity_matrix[(cluster1, cluster2)] > max_similarity then
14:           max_similarity = similarity_matrix[(cluster1, cluster2)]
15:           merge_list = [(cluster1, cluster2)]
16:       else if similarity_matrix[(cluster1, cluster2)] == max_similarity then
17:           Append (cluster1, cluster2) to merge_list
18:       end if
19:    end for
20:    for each pair (cluster1, cluster2) ∈ merge_list do
21:       merged_cluster = merge_clusters(cluster1, cluster2)
22:       Remove cluster1 from clusters
23:       Remove cluster2 from clusters
24:       Append merged_cluster to clusters
25:       Update similarity_matrix with the new merged_cluster
26:    end for
27: end while
28: Initialize Label = [ ]
29: for index, cluster ∈ enumerate(clusters) do
30:    for data_point ∈ cluster do
31:        Label[data_point] = index
32:    end for
33: end for
34: for each noise ∈ Noise do
35:    Assign the noise to the nearest cluster
36:    Label[noise] = index of the nearest cluster
37: end for
38: return Label

3.3. Complexity Analysis

Computational complexity is an important metric for evaluating the performance of an algorithm. The previous subsections described all phases of the algorithm in detail; in this section, the time complexity of the algorithm is analyzed. In the partitioning phase, it is assumed that the size of the dataset is N. First, calculating the density curves in all dimensions using the kernel density estimation function requires O(Nm), where N is the number of data points and m is the number of grid points. After obtaining the density curves, building a Kd-Tree to accelerate the search for nearest-neighbor points and assigning them to the nearest peak index takes O(NlogP), where P is the number of peaks, and the time complexity of partitioning the subclusters is O(VP), with V denoting the dimension. In the merging phase, it takes O(logN) to compute the shared k-nearest neighbors among subclusters, O(n^2) to compute the similarity among subclusters, and O(nlogn) to iteratively merge the subclusters, where n is the number of subclusters. In summary, the total time complexity of the algorithm is O(Nm + NlogP + VP + logN + n^2 + nlogn), which can be approximated as O(NlogP). This is because, when analyzing the time complexity of an algorithm, we usually look for the highest-order terms, since the highest-order terms dominate the growth rate of the whole expression when the input size N becomes very large. Next, we analyze why the time complexity is approximated as O(NlogP).
First, the term O(Nm) depends on the value of m. If m is a constant, then Nm is O(N); however, regardless of the value of m, Nm will not exceed O(N^2). For O(NlogP), when P is a constant, it is O(N); otherwise, it remains below O(N^2). Similarly, O(VP), O(logN), and O(nlogn) are all below O(N^2). Consequently, the time complexity can be approximated as O(NlogP).
The above-mentioned O denotes a mathematical notation used to describe an upper bound on the growth rate of a function. Specifically, we can say that the time complexity of the algorithm is O ( g ( n ) ) if there exist positive constants c and n 0 such that the running time of the algorithm is always less than or equal to c multiplied by some function g ( n ) when the input size N is greater than or equal to n 0 .
In order to show the time efficiency of the CPDD-ID algorithm more clearly, we report the running time of the CPDD-ID algorithm with six comparison algorithms on the selected synthetic dataset and the real dataset. The details are shown in Table 1 and Table 2.

4. Experiments and Results

To further illustrate the effectiveness of the algorithm in detecting clusters of different shapes and densities in close proximity to each other, experiments were conducted on several synthetic datasets and on real datasets from the UCI (University of California, Irvine) repository, a public repository used for machine learning and data mining research, and the results were compared with current state-of-the-art clustering algorithms using three external evaluation metrics. The selected datasets are characterized by ambiguous boundaries and highly overlapping samples, especially R15, Pathbased, D31, S1, DS577, etc., with the aim of demonstrating the applicability of the proposed method. Finally, the effect of the nearest-neighbor parameter K on the clustering results is analyzed. All experiments were conducted on a PC with an i9-12900H 2.50 GHz processor, the Windows 11 operating system, 16 GB of RAM, and Python 3.11.

4.1. Preparation

Ten synthetic datasets and ten UCI real datasets were prepared for this experiment, and the detailed information of these datasets is displayed in Table 3 and Table 4, including the data size, dimensions, and the number of real clusters. All datasets were obtained from https://github.com/Chelsea547/Clustering-Datasets (accessed on 21 January 2025).
Secondly, this part compares the proposed algorithm with several classical and advanced algorithms to demonstrate its superiority, including DPC, DBSCAN, K-means, RNN-DBSCAN, LDP-MST, and HCDC. These six algorithms have already been described in the introduction and related work, so they are not repeated here. In particular, this paper uses three traditional external evaluation metrics, clustering accuracy [36] (ACC), the adjusted Rand index [37] (ARI), and normalized mutual information [38] (NMI), to measure the algorithm's performance. Among them, ACC is an important metric for evaluating the performance of a classification model, which indicates the ratio between the number of samples correctly predicted by the model and the total number of samples and takes values in the range [0,1]. In this paper, ACC is used to evaluate the consistency between the clustering results and the true labels; ARI is used to measure the similarity between two clustering results, which takes into account the random assignment of element pairs in the clustering results and takes values in the range [−1,1]; and NMI is used to compare the consistency of different clustering results and takes values in the range [0,1]. Overall, ACC, ARI, and NMI all describe the accuracy of the clustering results, with larger values indicating that the algorithm is more effective.
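The three external metrics are available off the shelf; only ACC needs the usual Hungarian matching between predicted and true labels. The sketch below assumes integer labels starting at 0; clustering_accuracy is a hypothetical helper, while ARI and NMI come directly from scikit-learn.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.metrics import adjusted_rand_score, normalized_mutual_info_score

def clustering_accuracy(y_true, y_pred):
    """ACC: fraction of samples correctly matched under the best one-to-one
    mapping between predicted clusters and true classes (Hungarian algorithm)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    n = max(y_true.max(), y_pred.max()) + 1
    cost = np.zeros((n, n), dtype=int)
    for t, p in zip(y_true, y_pred):
        cost[t, p] += 1                            # contingency table
    rows, cols = linear_sum_assignment(-cost)      # maximise the matched samples
    return cost[rows, cols].sum() / len(y_true)

# ARI and NMI, given ground-truth labels y_true and predicted labels y_pred:
# ari = adjusted_rand_score(y_true, y_pred)
# nmi = normalized_mutual_info_score(y_true, y_pred)
```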

4.2. Experiments on Synthetic Datasets

This section analyzes the comparative tests with six other state-of-the-art algorithms on ten synthetic datasets. The original distributions of the synthetic datasets are shown in Figure 7a–j.
In them, data points belonging to the same cluster are marked with the same color. In particular, Figure 7d,g exhibit uneven density characteristics, and datasets D31, S1, DS577, and T4.8K all feature indistinct cluster boundaries, especially D31 and S1. In order to clearly illustrate the experimental setup of each algorithm, Table 5 references the selection of parameters in the original paper for each algorithm and provides the experimental parameters on the synthetic dataset.
Table 6 presents the ACC, ARI, and NMI results of each algorithm on different synthetic datasets to illustrate their performance.
Figure 8 shows the clustering results of the proposed CPDD-ID algorithm on the synthetic datasets. The results on manifold datasets such as Spiral, Jain, and Zelink1 are consistent with the distribution of the original dataset, and the scores on the three evaluation metrics are all 1. For datasets with a high degree of overlap between clusters, such as D31 and S1, the CPDD-ID algorithm accurately separates clusters that do not belong to the same category and also achieves good results on the evaluation metrics. Finally, the CPDD-ID algorithm also correctly recognizes different cluster structures when dealing with datasets with ambiguous boundaries such as Pathbased and DS577.
Additionally, Figure 9, Figure 10, Figure 11, Figure 12, Figure 13 and Figure 14 display the clustering results of the comparative algorithms DPC, DBSCAN, K-means, RNN-DBSCAN, LDP-MST, and HCDC on the ten synthetic datasets. It is worth noting that for k-means the clustering centers detected on each dataset are shown with a red pentagram, and the reported results are averaged over 30 runs, since variations in the initial parameters can lead to different results. Figure 9 presents the clustering results of DPC.
DPC is applied to the various datasets by setting different truncation distances and gives good results on spherical datasets. It can be seen that it performs well on datasets such as R15 but performs poorly on datasets with uneven density such as Pathbased and Jain. The clustering results of DBSCAN are shown in Figure 10.
DBSCAN can handle datasets of arbitrary shapes, and it especially excels in handling datasets such as Jain and Zelink1 that have clear cluster structures and are far away from each other. However, it incorrectly categorizes clusters close to each other as one cluster when dealing with datasets such as R15, S1, etc., which have unclear inter-cluster structures and are highly overlapping.
As shown in Figure 11, K-means performs well on the Zelink1 dataset but performs poorly on Aggregation, Spiral and Pathbased, incorrectly classifying different clusters together and even detecting the wrong cluster centers. It also performs poorly on noisy datasets such as T4.8k.
RNN-DBSCAN is a density-based clustering algorithm based on DBSCAN that uses reverse nearest neighbors to compute the local density, and it shows better clustering performance than DBSCAN, as shown in Figure 12.
Especially on the Jain and Aggregation datasets, the potential cluster structure is correctly identified. However, it is not as good as DBSCAN in dealing with datasets containing noise like T4.8k. Moreover, the RNN-DBSCAN algorithm is not suitable for datasets with ambiguous boundaries such as R15, D31, and Pathbased.
The clustering results of LDP-MST and HCDC are shown in Figure 13 and Figure 14, respectively.
It can be seen that both LDP-MST and HCDC perform well in dealing with arbitrarily shaped and highly overlapping datasets but perform poorly in dealing with datasets with uneven density such as Pathbased and DS577.
Unlike the six comparison algorithms mentioned above, the CPDD-ID algorithm uses two-phase clustering. Local density maxima are detected using kernel density estimation in the partitioning phase, which can effectively detect the density distributions of dense and sparse regions and avoid incorrectly classifying sparse clusters into dense clusters. For example, on the Pathbased, Aggregation, and Jain datasets, correctly identifying subclusters with different density distributions in the partitioning phase provides strong support for the subsequent merging phase. The merging phase is similar to hierarchical clustering in that the subclusters with maximum similarity are iteratively merged, starting from structural similarity, through the interaction degree of shared nearest neighbors among subclusters. The proposed merging strategy shows good performance in dealing with datasets whose clusters are highly overlapping and have ambiguous boundaries, achieving first-place results on the Jain, Spiral, Zelink1, Pathbased, and DS577 datasets. Even on datasets with high overlap between clusters such as R15 and S1, it achieves second-place results. In summary, the CPDD-ID algorithm combines the advantages of both density-based clustering and hierarchical clustering, demonstrating more consistently strong performance than the other algorithms.

4.3. Experiments on Real Datasets

In this section, the performance of the CPDD-ID algorithm is further evaluated against six other algorithms on ten real datasets, all of which are taken from the UCI machine learning repository, and the parameters of all the real datasets on the seven different algorithms are given in Table 7.
In particular, Table 8 shows the clustering results of the selected real datasets on the different algorithms.
The results show that the CPDD-ID algorithm ranks first in ACC, NMI, and ARI on the Ionosphere and Pima datasets and also ranks first in two metrics on the Wine, Satimage, and Balance datasets, with top-three performance on the remaining datasets. Overall, the CPDD-ID algorithm shows excellent performance in dealing with low-dimensional datasets, and the results highlight the effectiveness of its strategy of reasonable partitioning and merging based on the correlation between all dimensions when dealing with high-dimensional datasets. As a result, the clustering performance of the CPDD-ID algorithm on high-dimensional datasets, such as Ionosphere and Satimage, is better than that of the other algorithms.

4.4. Parameter Analysis of Algorithms

To verify the effect of the shared nearest neighbor parameter K on the performance of the CPDD-ID algorithm, a further analysis was performed on the synthetic datasets. This analysis varies K from 1 to 50, and Figure 15 demonstrates the effect of the variation of the parameter K on the ACC, NMI, and ARI scores.
It can be seen that the results on the Aggregation dataset smooth out as the value of K increases, remaining constant especially in the range of 11 to 33 and showing a downward trend when the value of K is greater than about 43. The Spiral dataset shows a decreasing and then stabilizing trend. There is a small fluctuation on the Pathbased dataset. There is also a stabilizing effect on DSS577, which is a dataset with uneven density. In particular, the CPDD-ID algorithm performs very smoothly and well on datasets with ambiguous and highly overlapping boundaries such as R15, D31, and S1, which shows that the CPDD-ID algorithm is effective in dealing with this type of dataset.
On the basis of the above analysis, we suggest that the k value be set within the range of [5,12] when dealing with datasets such as Spiral, Jain, and Pathbased, which have a manifold structure. This is because there are generally many localized density peaks in such datasets, which may be partitioned into multiple subclusters during the division stage, and if the k value is too large, subclusters that do not belong to the same cluster structure may be merged incorrectly. In addition, for datasets such as Aggregation, R15, D31, and S1, which have highly overlapping and interconnected clusters, incorrectly merging highly overlapping cluster structures can only be avoided by enlarging the scope used to evaluate the number of shared k-nearest neighbors; moreover, smaller k values tend to merge clusters that are connected by only a small number of samples into a single cluster. Therefore, a k value in the range of [10,25] is recommended for such datasets.

5. Conclusions

In this paper, a new clustering algorithm CPDD-ID is proposed, which combines both density-based and hierarchical clustering ideas. First, in order to cope with cluster structures with uneven densities and irregular shapes, a density-maxima-based delineation strategy is proposed, which utilizes kernel density estimation to compute the local density maxima and partition the subclusters using this coordinate as a benchmark. The proposed partitioning strategy accurately recognizes cluster structures with different density distributions and can overcome the problem of subclusters that do not belong to the same cluster but are close to each other being partitioned into one cluster. Second, a similarity metric focusing on the local neighborhood structure is proposed. This similarity metric iteratively merges the subclusters with the highest similarity generated in the division phase based on the degree of interaction between the shared k-nearest neighbors of the subclusters, which solves the problem of incorrectly merging subclusters that occurs in traditional hierarchical clustering algorithms when dealing with highly overlapping and interconnected cluster structures. The experimental results show that the CPDD-ID algorithm is applicable to more types of datasets and exhibits greater robustness compared to existing clustering algorithms, especially when dealing with datasets with uneven density and ambiguous boundaries, where it shows more competitive performance.
Despite the fact that the proposed study has achieved some meaningful findings, there are still some limitations. The performance of the CPDD-ID algorithm is not optimal when dealing with the large number of redundant features present in high-dimensional datasets. In future work, in order to address the poor performance of the CPDD-ID algorithm on high-dimensional datasets, the goal is to propose a feature-selection algorithm to cope with the large amount of redundant information that exists in high-dimensional datasets. The proposed feature-selection algorithm should be able to select a representative subset of features that can improve the performance of the algorithm as well as maximize the representation of the original feature set.

Author Contributions

Conceptualization, Y.L., J.D., Y.D. and H.W.; methodology, Y.L.; software, H.W. and Y.D.; validation, Y.L., J.D., H.W. and Y.D.; formal analysis, Y.L., H.W. and Y.D.; investigation, H.W. and Y.D.; resources, Y.L.; data curation, H.W. and Y.D.; writing—original draft preparation, Y.L.; writing—review and editing, Y.L. and J.D.; visualization, H.W. and Y.D.; supervision, J.D. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (grants 62262034 and 62262035).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The publicly available datasets used in this study can be found at https://github.com/Chelsea547/Clustering-Datasets (accessed on 21 January 2025).

Conflicts of Interest

The authors declare no conflicts of interest.

References

1. Guan, J.; Li, S.; He, X.; Chen, J. Peak-graph-based fast density peak clustering for image segmentation. IEEE Signal Process. Lett. 2021, 28, 897–901.
2. Yeganova, L.; Kim, W.; Kim, S.; Wilbur, W.J. Retro: Concept-based clustering of biomedical topical sets. Bioinformatics 2014, 30, 3240–3248.
3. Xu, C.; Su, Z. Identification of cell types from single-cell transcriptomes using a novel clustering method. Bioinformatics 2015, 31, 1974–1980.
4. Yan, Y.; Qian, Y.; Sharif, H.; Tipper, D. A survey on cyber security for smart grid communications. IEEE Commun. Surv. Tutor. 2012, 14, 998–1010.
5. Dong, G.; Xie, M. Color clustering and learning for image segmentation based on neural networks. IEEE Trans. Neural Netw. 2005, 16, 925–936.
6. MacQueen, J. Some methods for classification and analysis of multivariate observations. In Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability, Oakland, CA, USA, 21 June–18 July 1965; Volume 1, pp. 281–297.
7. Ester, M.; Kriegel, H.-P.; Sander, J.; Xu, X. A Density-Based Algorithm for Discovering Clusters in Large Spatial Databases with Noise. In Proceedings of the 2nd International Conference on Knowledge Discovery and Data Mining, Portland, OR, USA, 2–4 August 1996; Volume 96, pp. 226–231.
8. Rodriguez, A.; Laio, A. Clustering by fast search and find of density peaks. Science 2014, 344, 1492–1496.
9. Nielsen, F. Introduction to HPC with MPI for Data Science; Springer: Berlin/Heidelberg, Germany, 2016.
10. Zhang, T.; Ramakrishnan, R.; Livny, M. BIRCH: An efficient data clustering method for very large databases. ACM Sigmod Rec. 1996, 25, 103–114.
11. Guha, S.; Rastogi, R.; Shim, K. CURE: An efficient clustering algorithm for large databases. ACM Sigmod Rec. 1998, 27, 73–84.
12. Karypis, G.; Han, E.H.; Kumar, V. Chameleon: Hierarchical clustering using dynamic modeling. Computer 1999, 32, 68–75.
13. Cheng, D.; Zhu, Q.; Huang, J.; Wu, Q.; Yang, L. A local cores-based hierarchical clustering algorithm for data sets with complex structures. Neural Comput. Appl. 2019, 31, 8051–8068.
14. Bryant, A.; Cios, K. RNN-DBSCAN: A density-based clustering algorithm using reverse nearest neighbor density estimates. IEEE Trans. Knowl. Data Eng. 2017, 30, 1109–1121.
15. Cheng, D.; Zhu, Q.; Huang, J.; Wu, Q.; Yang, L. Clustering with local density peaks-based minimum spanning tree. IEEE Trans. Knowl. Data Eng. 2019, 33, 374–387.
16. Fan, J.C.; Jia, P.L.; Ge, L. Mk-NN G-DPC: Density peaks clustering based on improved mutual K-nearest-neighbor graph. Int. J. Mach. Learn. Cybern. 2020, 11, 1179–1195.
17. Ding, S.; Du, W.; Xu, X.; Shi, T.; Wang, Y.; Li, C. An improved density peaks clustering algorithm based on natural neighbor with a merging strategy. Inf. Sci. 2023, 624, 252–276.
18. Xie, J.; Liu, X.; Wang, M. SFKNN-DPC: Standard deviation weighted distance based density peak clustering algorithm. Inf. Sci. 2024, 653, 119788.
19. Rasool, Z.; Aryal, S.; Bouadjenek, M.R.; Dazeley, R. Overcoming weaknesses of density peak clustering using a data-dependent similarity measure. Pattern Recognit. 2023, 137, 109287.
20. Qin, X.; Han, X.; Chu, J.; Zhang, Y.; Xu, X.; Xie, J.; Xie, G. Density peaks clustering based on Jaccard similarity and label propagation. Cogn. Comput. 2021, 13, 1609–1626.
21. Guo, W.; Wang, W.; Zhao, S.; Niu, Y.; Zhang, Z.; Liu, X. Density peak clustering with connectivity estimation. Knowl.-Based Syst. 2022, 243, 108501.
22. García-García, J.C.; García-Ródenas, R. A methodology for automatic parameter-tuning and center selection in density-peak clustering methods. Soft Comput. 2021, 25, 1543–1561.
23. Zhu, Y.; Ting, K.M.; Jin, Y.; Angelova, M. Hierarchical clustering that takes advantage of both density-peak and density-connectivity. Inf. Syst. 2022, 103, 101871.
24. Bian, Z.; Chung, F.L.; Wang, S. Fuzzy density peaks clustering. IEEE Trans. Fuzzy Syst. 2020, 29, 1725–1738.
25. Wang, S.; Li, Q.; Zhao, C.; Zhu, X.; Yuan, H.; Dai, T. Extreme clustering—A clustering method via density extreme points. Inf. Sci. 2021, 542, 24–39.
26. Liu, R.; Wang, H.; Yu, X. Shared-nearest-neighbor-based clustering by fast search and find of density peaks. Inf. Sci. 2018, 450, 200–226.
27. Ros, F.; Guillaume, S.; El Hajji, M.; Riad, R. KdMutual: A novel clustering algorithm combining mutual neighboring and hierarchical approaches using a new selection criterion. Knowl.-Based Syst. 2020, 204, 106220.
28. Han, X.; Zhu, Y.; Ting, K.M.; Zhan, D.C.; Li, G. Streaming hierarchical clustering based on point-set kernel. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Washington, DC, USA, 14–18 August 2022; pp. 525–533.
29. Yang, Q.F.; Gao, W.Y.; Han, G.; Li, Z.Y.; Tian, M.; Zhu, S.H.; Deng, Y.H. HCDC: A novel hierarchical clustering algorithm based on density-distance cores for data sets with varying density. Inf. Syst. 2023, 114, 102159.
30. Hulot, A.; Chiquet, J.; Jaffrézic, F.; Rigaill, G. Fast tree aggregation for consensus hierarchical clustering. BMC Bioinform. 2020, 21, 120.
31. Wang, J.; Zhu, C.; Zhou, Y.; Zhu, X.; Wang, Y.; Zhang, W. From partition-based clustering to density-based clustering: Fast find clusters with diverse shapes and densities in spatial databases. IEEE Access 2017, 6, 1718–1729.
32. Ping, Y.; Li, H.; Hao, B.; Guo, C.; Wang, B. Beyond k-Means++: Towards better cluster exploration with geometrical information. Pattern Recognit. 2024, 146, 110036.
33. Cheng, M.; Ma, T.; Liu, Y. A projection-based split-and-merge clustering algorithm. Expert Syst. Appl. 2019, 116, 121–130.
34. Rousseeuw, P.J.; Ruts, I.; Tukey, J.W. The bagplot: A bivariate boxplot. Am. Stat. 1999, 53, 382–387.
35. Silverman, B.W. Density Estimation for Statistics and Data Analysis; Routledge: London, UK, 2018.
36. Fränti, P.; Sieranoja, S. Clustering accuracy. Appl. Comput. Intell. 2024, 4, 24–44.
37. Hubert, L.; Arabie, P. Comparing partitions. J. Classif. 1985, 2, 193–218.
38. Kvalseth, T.O. Entropy and correlation: Some comments. IEEE Trans. Syst. Man Cybern. 1987, 17, 517–519.
Figure 1. Clustering process of the CPDD-ID algorithm (developed by authors).
Figure 2. (a) Density curves in the first dimension generated using kernel density estimation. (b) Density curves in the second dimension generated using kernel density estimation. Black points are the coordinates of the labeled density peaks.
Figure 3. (a) Segmentation results obtained by utilizing the peaks of the wave in the first dimension. (b) Segmentation results obtained by utilizing the peaks of the wave in the second dimension. Different colors represent different clusters.
Figure 4. The final segmentation result obtained by taking the intersection of all dimension segmentation results.
Figure 5. Merge subclusters with maximum shared nearest neighbor similarity.
Figure 6. The process of simultaneously merging subclusters with maximum similarity.
Figure 7. (a–j) show the original distributions of the ten synthetic datasets selected for the experiments.
Figure 8. (a–j) show the clustering results of the CPDD-ID algorithm on ten synthetic datasets.
Figure 9. (a–j) show the clustering results of the DPC algorithm on ten synthetic datasets.
Figure 10. (a–j) show the clustering results of the DBSCAN algorithm on ten synthetic datasets.
Figure 11. (a–j) show the clustering results of the K-means algorithm on ten synthetic datasets.
Figure 12. (a–j) show the clustering results of the RNN-DBSCAN algorithm on ten synthetic datasets.
Figure 13. (a–j) show the clustering results of the LDP-MST algorithm on ten synthetic datasets.
Figure 14. (a–j) show the clustering results of the HCDC algorithm on ten synthetic datasets.
Figure 15. (a–h) show the trend of ACC, NMI, and ARI with respect to the parameter K of the CPDD-ID algorithm on the synthetic datasets. (The blue curves are ACC, the orange curves are NMI, and the green curves are ARI.)
Table 1. Runtime comparison on synthetic datasets.

Dataset | K-Means | DBSCAN | DPC | RNN-DBSCAN | LDP-MST | HCDC | CPDD-ID
Aggregation | 0.35 | 0.82 | 1.71 | 0.98 | 0.41 | 0.92 | 0.315
Spiral | 0.09 | 0.31 | 1.42 | 0.08 | 0.12 | 0.14 | 0.15
R15 | 0.85 | 0.44 | 0.99 | 0.56 | 0.25 | 0.55 | 0.46
Pathbased | 0.02 | 0.50 | 1.31 | 0.07 | 0.20 | 0.10 | 0.23
Jain | 0.05 | 0.30 | 1.49 | 0.11 | 0.33 | 0.14 | 0.18
D31 | 8.03 | 13.85 | 37.51 | 14.52 | 0.96 | 13.24 | 5.09
Zelink1 | 0.02 | 0.40 | 1.36 | 0.06 | 0.18 | 0.11 | 0.15
S1 | 9.73 | 37.80 | 105.07 | 36.84 | 5.92 | 41.33 | 5.07
DS577 | 0.03 | 0.56 | 3.18 | 0.28 | 0.39 | 0.33 | 0.33
T4.8K | 6.80 | 79.97 | 202.21 | 150.80 | 10.24 | 94.50 | 25.70
Table 2. Runtime comparison on real datasets.

Dataset | K-Means | DBSCAN | DPC | RNN-DBSCAN | LDP-MST | HCDC | CPDD-ID
Thyroid | 0.03 | 0.01 | 1.07 | 0.04 | 0.08 | 0.03 | 0.62
Wine | 0.30 | 0.05 | 0.29 | 0.05 | 0.01 | 0.08 | 0.16
Ionosphere | 0.05 | 0.04 | 0.52 | 0.11 | 0.15 | 0.12 | 0.06
Wireless | 0.38 | 0.30 | 4.52 | 3.51 | 0.67 | 3.56 | 31.30
Ecoli | 1.21 | 0.48 | 1.12 | 0.23 | 0.29 | 0.34 | 0.66
Pima | 0.14 | 0.24 | 0.34 | 0.49 | 0.18 | 0.56 | 1.80
Yeast | 6.82 | 3.38 | 19.92 | 5.69 | 5.85 | 3.94 | 4.34
Balance | 0.22 | 0.32 | 0.25 | 0.34 | 0.20 | 0.48 | 0.09
Satimage | 25.33 | 24.32 | 26.68 | 35.45 | 23.52 | 37.55 | 29.14
Breast | 1.87 | 0.98 | - | 0.90 | 1.45 | 1.08 | 1.04
Table 3. Details of ten synthetic datasets.

NO. | Dataset | Instances | Dimensions | Clusters
1 | Aggregation | 788 | 2 | 7
2 | Spiral | 312 | 2 | 3
3 | R15 | 600 | 2 | 15
4 | Pathbased | 300 | 2 | 3
5 | Jain | 373 | 2 | 2
6 | D31 | 3100 | 2 | 31
7 | Zelink1 | 299 | 2 | 3
8 | S1 | 5000 | 2 | 15
9 | DS577 | 577 | 2 | 3
10 | T4.8K | 8000 | 2 | 6
Table 4. Details of ten UCI real datasets.

NO. | Dataset | Instances | Dimensions | Clusters
1 | Thyroid | 215 | 5 | 3
2 | Wine | 178 | 14 | 3
3 | Ionosphere | 351 | 33 | 15
4 | Wireless | 2000 | 7 | 3
5 | Ecoli | 366 | 7 | 2
6 | Pima | 768 | 8 | 31
7 | Yeast | 1484 | 8 | 3
8 | Balance | 625 | 4 | 15
9 | Satimage | 6435 | 36 | 3
10 | Breast | 699 | 9 | 6
Table 5. Optimal parameters of 7 algorithms on synthetic datasets.

Dataset | K-Means | DBSCAN | DPC | RNN-DBSCAN | LDP-MST | HCDC | CPDD-ID
Aggregation | k = 7 | EPS = 1.2, MinPts = 5 | dc = 2 | k = 14 | NC = 7 | NC = 7 | k = 11
Spiral | k = 3 | EPS = 3, MinPts = 3 | dc = 2 | k = 7 | NC = 3 | NC = 3 | k = 3
R15 | k = 15 | EPS = 0.3, MinPts = 3 | dc = 2 | k = 10 | NC = 15 | NC = 15 | k = 19
Pathbased | k = 3 | EPS = 2, MinPts = 9 | dc = 3 | k = 6 | NC = 3 | NC = 3 | k = 6
Jain | k = 2 | EPS = 2.5, MinPts = 15 | dc = 2 | k = 15 | NC = 2 | NC = 2 | k = 5
D31 | k = 31 | EPS = 0.5, MinPts = 6 | dc = 2 | k = 13 | NC = 31 | NC = 31 | k = 29
Zelink1 | k = 3 | EPS = 0.2, MinPts = 5 | dc = 3 | k = 14 | NC = 3 | NC = 3 | k = 6
S1 | k = 15 | EPS = 197,500, MinPts = 2 | dc = 3 | k = 15 | NC = 15 | NC = 15 | k = 40
DS577 | k = 3 | EPS = 0.45, MinPts = 5 | dc = 2 | k = 9 | NC = 3 | NC = 3 | k = 10
T4.8K | k = 6 | EPS = 6, MinPts = 6 | dc = 2 | k = 14 | NC = 6 | NC = 6, w = 0.6 | k = 11
Table 6. The comparison of ACC, NMI, and ARI of 7 algorithms on synthetic datasets.

Dataset | Metric | K-Means | DBSCAN | DPC | RNN-DBSCAN | LDP-MST | HCDC | CPDD-ID
Aggregation | ACC | 0.78 | 0.98 | 0.99 | 0.99 | 0.99 | 0.99 | 0.99
Aggregation | NMI | 0.88 | 0.97 | 0.97 | 0.99 | 0.99 | 0.98 | 0.99
Aggregation | ARI | 0.76 | 0.97 | 0.98 | 0.99 | 0.99 | 0.99 | 0.99
Spiral | ACC | 0.34 | 1 | 1 | 1 | 1 | 1 | 1
Spiral | NMI | 0.001 | 1 | 1 | 1 | 1 | 1 | 1
Spiral | ARI | −0.005 | 1 | 1 | 1 | 1 | 1 | 1
R15 | ACC | 0.71 | 0.91 | 0.97 | 0.97 | 0.99 | 0.99 | 0.99
R15 | NMI | 0.87 | 0.90 | 0.95 | 0.97 | 0.99 | 0.98 | 0.99
R15 | ARI | 0.69 | 0.85 | 0.94 | 0.97 | 0.99 | 0.98 | 0.99
Pathbased | ACC | 0.74 | 0.98 | 0.72 | 0.98 | 0.67 | 0.56 | 0.99
Pathbased | NMI | 0.54 | 0.96 | 0.51 | 0.93 | 0.49 | 0.59 | 0.98
Pathbased | ARI | 0.46 | 0.98 | 0.44 | 0.95 | 0.41 | 0.45 | 0.99
Jain | ACC | 0.78 | 0.89 | 0.92 | 0.99 | 1 | 1 | 1
Jain | NMI | 0.37 | 0.79 | 0.64 | 0.95 | 1 | 1 | 1
Jain | ARI | 0.32 | 0.79 | 0.69 | 0.98 | 1 | 1 | 1
D31 | ACC | 0.78 | 0.85 | 0.96 | 0.91 | 0.97 | 0.86 | 0.96
D31 | NMI | 0.95 | 0.87 | 0.94 | 0.95 | 0.96 | 0.93 | 0.94
D31 | ARI | 0.82 | 0.69 | 0.92 | 0.89 | 0.94 | 0.84 | 0.91
Zelink1 | ACC | 1 | 1 | 0.46 | 1 | 1 | 1 | 1
Zelink1 | NMI | 1 | 1 | 0.17 | 1 | 1 | 1 | 1
Zelink1 | ARI | 1 | 1 | 0.06 | 1 | 1 | 1 | 1
S1 | ACC | 0.82 | 0.97 | 0.99 | 0.98 | 0.99 | 0.73 | 0.99
S1 | NMI | 0.93 | 0.96 | 0.98 | 0.98 | 0.99 | 0.90 | 0.98
S1 | ARI | 0.82 | 0.96 | 0.98 | 0.97 | 0.99 | 0.70 | 0.98
DS577 | ACC | 0.98 | 0.99 | 0.99 | 0.99 | 0.99 | 0.99 | 0.99
DS577 | NMI | 0.94 | 0.98 | 0.97 | 0.97 | 0.99 | 0.98 | 0.99
DS577 | ARI | 0.96 | 0.99 | 0.98 | 0.98 | 0.99 | 0.99 | 0.99
T4.8K | ACC | 0.64 | 0.98 | 0.73 | 0.90 | 0.91 | 0.65 | 0.89
T4.8K | NMI | 0.52 | 0.97 | 0.66 | 0.86 | 0.86 | 0.47 | 0.85
T4.8K | ARI | 0.61 | 0.95 | 0.74 | 0.86 | 0.87 | 0.69 | 0.84
Table 7. Optimal parameters of 7 algorithms on real datasets.

Dataset | K-Means | DBSCAN | DPC | RNN-DBSCAN | LDP-MST | HCDC | CPDD-ID
Thyroid | k = 3 | EPS = 5500, MinPts = 3 | dc = 0.2 | k = 4 | NC = 3 | NC = 3 | k = 17
Wine | k = 3 | EPS = 30,000, MinPts = 2 | dc = 0.2 | k = 4 | NC = 3 | NC = 3 | k = 7
Ionosphere | k = 2 | EPS = 0.78, MinPts = 0.78 | dc = 2 | k = 50 | NC = 2 | NC = 2 | k = 3
Wireless | k = 4 | EPS = 0.14, MinPts = 11 | dc = 2 | k = 10 | NC = 4 | NC = 4 | k = 36
Ecoli | k = 8 | EPS = 0.5, MinPts = 5 | dc = 2 | k = 20 | NC = 8 | NC = 8 | k = 11
Pima | k = 2 | EPS = 7, MinPts = 7 | dc = 2 | k = 10 | NC = 2 | NC = 2 | k = 7
Yeast | k = 10 | EPS = 0.09, MinPts = 4 | dc = 3 | k = 15 | NC = 10 | NC = 10 | k = 3
Balance | k = 15 | EPS = 100, MinPts = 2 | dc = 3 | k = 15 | NC = 15 | NC = 15 | k = 40
Satimage | k = 6 | EPS = 18, MinPts = 3 | dc = 2 | k = 10 | NC = 6 | NC = 6 | k = 6
Breast | k = 2 | EPS = 1.5, MinPts = 2 | dc = 2 | k = 45 | NC = 2 | NC = 2 | k = 31
Table 8. The comparison of ACC, NMI, and ARI of 7 algorithms on real datasets.

Dataset | Metric | K-Means | DBSCAN | DPC | RNN-DBSCAN | LDP-MST | HCDC | CPDD-ID
Thyroid | ACC | 0.87 | 0.82 | 0.57 | 0.91 | 0.70 | 0.87 | 0.90
Thyroid | NMI | 0.57 | 0.54 | 0.19 | 0.71 | 0.14 | 0.56 | 0.92
Thyroid | ARI | 0.59 | 0.73 | 0.18 | 0.64 | 0.23 | 0.55 | 0.67
Wine | ACC | 0.71 | 0.59 | 0.76 | 0.65 | 0.75 | 0.75 | 0.76
Wine | NMI | 0.31 | 0.22 | 0.42 | 0.43 | 0.42 | 0.43 | 0.412
Wine | ARI | 0.31 | 0.20 | 0.43 | 0.39 | 0.43 | 0.42 | 0.45
Ionosphere | ACC | 0.71 | 0.61 | 0.68 | 0.66 | 0.55 | 0.73 | 0.81
Ionosphere | NMI | 0.13 | 0.09 | 0.24 | 0.05 | 0.09 | 0.19 | 0.42
Ionosphere | ARI | 0.18 | 0.04 | 0.28 | 0.03 | −0.05 | 0.17 | 0.35
Wireless | ACC | 0.43 | 0.63 | 0.52 | 0.26 | 0.77 | 0.53 | 0.72
Wireless | NMI | 0.49 | 0.51 | 0.35 | 0.02 | 0.80 | 0.55 | 0.75
Wireless | ARI | 0.28 | 0.51 | 0.31 | 0.00 | 0.69 | 0.34 | 0.64
Ecoli | ACC | 0.38 | 0.47 | 0.42 | 0.43 | 0.58 | 0.54 | 0.56
Ecoli | NMI | 0.26 | 0.16 | 0.30 | 0.02 | 0.43 | 0.31 | 0.33
Ecoli | ARI | 0.26 | 0.13 | 0.16 | 0.01 | 0.45 | 0.35 | 0.33
Pima | ACC | 0.65 | 0.02 | 0.67 | 0.66 | 0.66 | 0.65 | 0.75
Pima | NMI | 0.005 | 0.01 | 0.02 | 0.002 | 0.01 | 0.002 | 0.14
Pima | ARI | 0.04 | −0.02 | 0.07 | 0.01 | 0.02 | 0.01 | 0.22
Satimage | ACC | 0.64 | 0.39 | 0.69 | 0.24 | 0.58 | 0.32 | 0.76
Satimage | NMI | 0.61 | 0.30 | 0.61 | 0.001 | 0.61 | 0.28 | 0.58
Satimage | ARI | 0.52 | 0.04 | 0.54 | 0.00 | 0.44 | 0.09 | 0.57
Yeast | ACC | 0.17 | 0.32 | 0.28 | 0.32 | 0.25 | 0.28 | 0.32
Yeast | NMI | 0.01 | 0.01 | 0.01 | 0.001 | 0.01 | 0.01 | 0.02
Yeast | ARI | 0.01 | 0.002 | −0.005 | 0.002 | 0.003 | 0.006 | 0.005
Balance | ACC | 0.04 | 0.005 | 0.46 | 0.47 | 0.47 | 0.47 | 0.56
Balance | NMI | 0.01 | 0.25 | 0.06 | 0.02 | 0.03 | 0.008 | 0.13
Balance | ARI | 0.002 | 0.14 | 0.02 | 0.01 | 0.02 | −0.001 | 0.15
Breast | ACC | 0.95 | 0.90 | 0.00 | 0.66 | 0.95 | 0.59 | 0.95
Breast | NMI | 0.73 | 0.62 | 0.00 | 0.005 | 0.72 | 0.06 | 0.70
Breast | ARI | 0.83 | 0.65 | 0.00 | 0.003 | 0.82 | −0.04 | 0.81
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
