Article

A Local Neighborhood Robust Fuzzy Clustering Image Segmentation Algorithm Based on an Adaptive Feature Selection Gaussian Mixture Model

1
Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China
2
Key Laboratory of Airborne Optical Imaging and Measurement, Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China
3
School of Physics, Northeast Normal University, Changchun 130024, China
*
Author to whom correspondence should be addressed.
Sensors 2020, 20(8), 2391; https://doi.org/10.3390/s20082391
Submission received: 19 March 2020 / Revised: 13 April 2020 / Accepted: 17 April 2020 / Published: 22 April 2020
(This article belongs to the Section Intelligent Sensors)

Abstract

Since the fuzzy local information C-means (FLICM) segmentation algorithm cannot take into account the impact of different features on the clustering segmentation results, a local fuzzy clustering segmentation algorithm based on a feature selection Gaussian mixture model is proposed. First, a membership-degree constraint on the spatial distance was added to the local information function. Second, feature saliency was introduced into the objective function, and the optimal expression of the objective function was solved using the Lagrange multiplier method. Neighborhood weighting information was added to the iteration expression of the classification membership degree to obtain a local fuzzy clustering segmentation algorithm based on feature selection. The improved algorithm, the fuzzy C-means with spatial constraints (FCM_S) algorithm, and the original FLICM algorithm were then used to cluster and segment images corrupted by Gaussian noise, salt-and-pepper noise, multiplicative noise, and mixed noise. The peak signal-to-noise ratios and error rates of the segmentation results were compared, along with the iteration time and number of iterations needed for the objective function of each algorithm to converge. In summary, the improved algorithm significantly improved noise suppression under strong noise interference, improved operational efficiency, facilitates remote sensing image capture under strong noise interference, and promotes the development of robust anti-noise fuzzy clustering algorithms.

1. Introduction

1.1. Image Segmentation Algorithms

Existing image segmentation methods are mainly divided into the following categories: edge-based methods, region-based methods, and methods based on a specific theory. Cluster segmentation, as a typical unsupervised segmentation method, has attracted the attention of many scholars and has been widely used and studied in many fields [1,2].
Clustering algorithms can be divided into hard partition and soft partition clustering algorithms. Hard partition clustering algorithms segment an image by directly dividing it according to the similarity of pixels in qualities such as grayness, color, and texture; the optimal partition is obtained by minimizing an objective function, as in the H-means algorithm [3,4], the global K-means algorithm, and the K-means algorithm [5,6]. This type of clustering has the advantages of fast segmentation, a clear structure, and good usability [7,8], but it is prone to falling into local minima while optimizing the segmentation. Soft partition clustering algorithms use the degree of belonging or the probability of pixels to indirectly partition pixels by similarity, searching for an optimal decomposition by minimizing the objective function or maximizing the likelihood of the parameters [9,10]. For example, Dunn [4] proposed the fuzzy C-means clustering algorithm in 1973. In 1981, Bezdek [5] compared the measurement theory of mean clustering and fuzzy mean clustering, proved the convergence of the fuzzy mean clustering algorithm, and established fuzzy clustering theory, which promoted the development of fuzzy clustering and made the fuzzy mean clustering algorithm an important branch of fuzzy theory. Introducing this theory into clustering algorithms improved their adaptability, and they have been widely used [11,12].
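As a baseline for the algorithms discussed below, the standard FCM alternating updates (memberships and centers under the Euclidean distance) can be sketched as follows; function and variable names are illustrative, not taken from the paper:

```python
import numpy as np

def fcm(X, C, m=2.0, max_iter=100, tol=1e-5, seed=0):
    """Minimal fuzzy C-means sketch: alternate the membership and center
    updates until the memberships stop changing."""
    rng = np.random.default_rng(seed)
    N = X.shape[0]
    Z = rng.random((N, C))
    Z /= Z.sum(axis=1, keepdims=True)              # memberships sum to 1 per sample
    V = np.zeros((C, X.shape[1]))
    for _ in range(max_iter):
        W = Z ** m
        V = (W.T @ X) / W.sum(axis=0)[:, None]     # centers: membership-weighted means
        # squared Euclidean distances to each center (small eps avoids division by 0)
        D = ((X[:, None, :] - V[None, :, :]) ** 2).sum(axis=2) + 1e-12
        inv = D ** (-1.0 / (m - 1.0))              # z_ij = 1 / sum_k (D_ij/D_ik)^(1/(m-1))
        Z_new = inv / inv.sum(axis=1, keepdims=True)
        if np.abs(Z_new - Z).max() < tol:
            Z = Z_new
            break
        Z = Z_new
    return Z, V
```

On well-separated data the centers converge to the cluster means, illustrating why the plain objective has no defense against noisy pixels: each pixel is treated in isolation.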

1.2. Fuzzy Clustering Algorithm Based on Feature Selection

At present, research on clustering analysis focuses on the scalability of clustering methods, the validity of clustering for complex shapes and data types, high-dimensional clustering analysis technology, and clustering methods for mixed data. Among these, high-dimensional data clustering is a difficult problem that traditional clustering algorithms struggle to solve. For example, there are large numbers of invalid clustering features in high-dimensional sample spaces, and the Euclidean distance used as the distance measure in the FCM algorithm [13] cannot take into account the correlation of the feature spaces in high-dimensional space. At present, the problem of high-dimensional data is mainly dealt with using feature transformation and feature selection. Methods based on feature selection can effectively reduce the dimension and have been widely applied. A subspace-based clustering image segmentation method has been proposed in the literature: by defining search strategies and evaluation criteria, effective features for clustering are screened, and the original data sets are clustered in different subspaces to reduce storage and computation costs [14].
Existing supervised feature selection methods achieve the goal of dimensionality reduction but lower operational efficiency. To achieve clustering segmentation using adaptive feature selection, a similarity measurement method for high-dimensional data, which takes into account the correlation between high-dimensional spatial features and effectively reduces the impact of the "curse of dimensionality" on high-dimensional data, has been proposed in the literature. However, there is a lack of theoretical guidance on how to select the similarity measurement criteria. To avoid a combinatorial search and to apply the method to unsupervised learning, the concept of feature saliency has been proposed in the literature. Considering the influence of different features on the clustering results, the Gaussian mixture model is used for clustering analysis to improve the performance of the algorithm [15].
The fuzzy Gaussian mixture models (FGMMs) algorithm replaces the Euclidean distance of the FCM algorithm with the Gaussian mixture model, which can more accurately fit multipeak data and achieves better segmentation of noiseless complex images. Traditional fuzzy C-means clustering analysis treats the different features of samples equally and ignores the important influence of key features on the clustering results, which leads to a difference between the clustering segmentation results and the real classification results. According to the theory of feature selection, the concept of feature saliency is used to assume that the saliency of sample features obeys a probability distribution, and the clustering analysis is carried out using the Gaussian mixture model. Ju and Liu [16] proposed an online feature selection method based on fuzzy clustering, along with its application (OFSBFCM), and a fuzzy C-means clustering method combined with a Gaussian mixture model with feature selection using Kullback–Leibler (KL) divergence (FSFCM) has also been proposed [16,17].
In short, the advantages of the feature-based selection of the GMM-based fuzzy clustering algorithm are as follows:
(1)
By using the Gaussian mixture model as a distance measure and accurately fitting multipeak data, the algorithm can, in contrast to the FCM algorithm, manage data sample sets with complex structures.
(2)
The Gaussian mixture model algorithm for feature selection assumes that different features of samples play different roles in pattern analysis, with some features playing a decisive role. This overcomes the limitation of the FCM algorithm, which treats the different features of samples equally for clustering analysis and ignores the important influence of key features on the clustering results, leading to a certain gap between the clustering results and the real classification results.
(3)
KL divergence regularization clustering can be widely used in the clustering analysis of class unbalanced data.
The problems of the feature-based GMM-based fuzzy clustering algorithm are as follows:
(1)
The parameters need to be adjusted to increase the running time of the algorithm.
(2)
Like the FCM algorithm, it only clusters a single pixel without considering the influence of spatial neighborhood pixels on each central pixel. For different types of noisy images, the algorithm does not have good robustness against noise.

2. Algorithm Analysis

2.1. FLICM Algorithm

The FCM algorithm uses the fuzzy membership degree and a nonsimilarity measure to construct the objective function; it finds the membership degrees and clustering centers that minimize the objective function in the iteration process to realize the sample classification. Its structure is simple and easy to simulate, and its convergence is fast. However, it does not consider the interference from neighborhood information on the central pixel, and the results of segmenting images with noise interference are unsatisfactory. To improve the robustness of the algorithm against noise, Chen et al. [17] proposed the neighborhood mean and neighborhood median fuzzy C-means algorithms FCM_S1 and FCM_S2. Later, the Greek scholars Krinidis et al. [18,19] proposed a neighborhood local fuzzy C-means segmentation algorithm (FLICM), which combines neighborhood pixel spatial information, gray information, and fuzzy classification information to improve the anti-noise performance of the algorithm. Its objective function expression is as follows [20,21]:
$$J = \sum_{i=1}^{N} \sum_{j=1}^{C} z_{ij}^{m} \left[ \left\| x_i - v_j \right\|^2 + G_{ij} \right],$$

$$G_{ij} = \sum_{\beta \in N_i,\, \beta \neq i} \frac{1}{\tilde{d}_{i\beta} + 1} \left( 1 - z_{\beta j} \right)^m d^2\left( x_\beta, v_j \right).$$
Specifically, $x_i = (x_{i1}, x_{i2}, x_{i3}, x_{i4}, \ldots, x_{iD})^T$ is the $i$th sample, where $x_{i1}, x_{i2}, \ldots, x_{iD}$ represent its different attributes. $C$ is the number of clusters. $z_{ij}$ denotes the fuzzy membership of the $i$th pixel in the $j$th category; the clustering centers are $v_j\ (j = 1, 2, \ldots, C)$. $\tilde{d}_{i\beta}$ is the Euclidean distance between the spatial positions of the pixel point $x_i$ and the neighboring pixel $x_\beta$. $N_i$ represents the set of neighborhood spatial pixels $x_\beta$ of pixel point $x_i$; the neighborhood window size is $3 \times 3$ or $5 \times 5$. The optimal iteration expressions of the classification membership degree and the clustering center are as follows [22,23]:
$$z_{ij} = \left[ \sum_{k=1}^{C} \left( \frac{\left\| x_i - v_j \right\|^2 + G_{ij}}{\left\| x_i - v_k \right\|^2 + G_{ik}} \right)^{\frac{1}{m-1}} \right]^{-1},$$

$$v_j = \frac{\sum_{i=1}^{N} z_{ij}^{m} x_i}{\sum_{i=1}^{N} z_{ij}^{m}}.$$
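As an illustration of the fuzzy factor $G_{ij}$ defined above, the following sketch computes it for one pixel from a window around it; the function name and the choice $m = 2$ are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def flicm_G(img, Z, V, i_rc, j, win=1, m=2.0):
    """Fuzzy factor G_ij of FLICM for the pixel at (row, col) = i_rc and class j:
    G_ij = sum over neighbors b of (1 - z_bj)^m / (d_ib + 1) * (x_b - v_j)^2,
    where d_ib is the spatial Euclidean distance between pixel i and neighbor b."""
    r, c = i_rc
    H, W = img.shape
    G = 0.0
    for dr in range(-win, win + 1):
        for dc in range(-win, win + 1):
            if dr == 0 and dc == 0:
                continue                            # the center pixel is excluded
            rr, cc = r + dr, c + dc
            if 0 <= rr < H and 0 <= cc < W:
                d = np.hypot(dr, dc)                # spatial distance d_ib
                G += (1 - Z[rr, cc, j]) ** m / (d + 1) * (img[rr, cc] - V[j]) ** 2
    return G
```

When all neighbors already match the class center, $G_{ij}$ vanishes; it grows with neighbors that disagree with class $j$, which is what pulls noisy center pixels toward the label of their neighborhood.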

2.2. Local Neighborhood Robust Fuzzy Clustering Image Segmentation Algorithm Based on an Adaptive Feature Selection Gaussian Mixture Model

2.2.1. Improved FLICM Algorithm

The FLICM algorithm does not strictly follow the Lagrange multiplier method to solve the optimal expression of the objective function. Furthermore, it runs for too long and can fall into local minima. To solve these problems, the unconstrained expression of the objective function is solved using the Lagrange multiplier method as follows [24,25]:
$$J_M = \sum_{i=1}^{N} \sum_{j=1}^{C} z_{ij}^{m} \left[ \left\| x_i - v_j \right\|^2 + G_{ij} \right] + \sum_{i=1}^{N} \lambda_i \left( 1 - \sum_{j=1}^{C} z_{ij} \right).$$
The partial derivatives of $J_M$ with respect to the membership degree $z_{ij}$ and the clustering center $v_j$ are obtained and set to 0:
$$\frac{\partial J_M}{\partial z_{ij}} = m z_{ij}^{m-1} \left[ \left\| x_i - v_j \right\|^2 + G_{ij} \right] - \lambda_i = 0,$$

$$\frac{\partial J_M}{\partial v_j} = \sum_{i=1}^{N} z_{ij}^{m} \left[ -2\left( x_i - v_j \right) - \sum_{\beta \in N_i,\, \beta \neq i} \frac{2 \left( x_\beta - v_j \right) \left( 1 - z_{\beta j} \right)^m}{\tilde{d}_{i\beta} + 1} \right] = 0.$$
By solving Equations (6) and (7), the following solutions are obtained:
$$z_{ij} = \left[ \sum_{k=1}^{C} \left( \frac{\left\| x_i - v_j \right\|^2 + G_{ij}}{\left\| x_i - v_k \right\|^2 + G_{ik}} \right)^{\frac{1}{m-1}} \right]^{-1},$$

$$v_j = \frac{\sum_{i=1}^{N} z_{ij}^{m} \left[ x_i + \sum_{\beta \in N_i,\, \beta \neq i} \left( \tilde{d}_{i\beta} + 1 \right)^{-1} \left( 1 - z_{\beta j} \right)^m x_\beta \right]}{\sum_{i=1}^{N} z_{ij}^{m} \left[ 1 + \sum_{\beta \in N_i,\, \beta \neq i} \left( \tilde{d}_{i\beta} + 1 \right)^{-1} \left( 1 - z_{\beta j} \right)^m \right]}.$$
Compared with the iteration expressions in the literature, the iteration formula of the clustering centers needs to consider the value of the central pixel $x_i$. Furthermore, the neighborhood pixels $x_\beta$ influence the clustering center $v_j$, and the classification membership degrees also have some influence on $v_j$. To accurately compare the influence of the neighborhood pixels on the central pixels, this section uses the neighborhood spatial classification membership $z_{\beta j}$ to restrict the Euclidean distance $\tilde{d}_{i\beta}$ between the spatial positions of pixel $x_i$ and pixel $x_\beta$, and redefines the ambiguity factor $G_{ij}$ to be [26,27]:
$$G_{ij} = \sum_{\beta \in N_i,\, \beta \neq i} \frac{1}{z_{\beta j} \tilde{d}_{i\beta} + 1} \left( 1 - z_{\beta j} \right)^m d^2\left( x_\beta, v_j \right),$$

$$z_{ij} = \left[ \sum_{k=1}^{C} \left( \frac{\left\| x_i - v_j \right\|^2 + G_{ij}}{\left\| x_i - v_k \right\|^2 + G_{ik}} \right)^{\frac{1}{m-1}} \right]^{-1},$$

$$v_j = \frac{\sum_{i=1}^{N} z_{ij}^{m} \left[ x_i + \sum_{\beta \in N_i,\, \beta \neq i} \left( z_{\beta j} \tilde{d}_{i\beta} + 1 \right)^{-1} \left( 1 - z_{\beta j} \right)^m x_\beta \right]}{\sum_{i=1}^{N} z_{ij}^{m} \left[ 1 + \sum_{\beta \in N_i,\, \beta \neq i} \left( z_{\beta j} \tilde{d}_{i\beta} + 1 \right)^{-1} \left( 1 - z_{\beta j} \right)^m \right]}.$$

2.2.2. Local Neighborhood Robust Fuzzy Clustering Algorithm Based on an Adaptive Feature Selection Gaussian Mixture Model

The FLICM algorithm introduces neighborhood spatial information into the objective function of the algorithm to enhance the anti-noise performance of the algorithm; however, the algorithm treats the different features of the samples equally for clustering analysis, ignoring the important impact of key features on the clustering results, which results in unsatisfactory segmentation results. In this section, the idea of feature selection is introduced into the improved FLICM algorithm, KL divergence is introduced as a regularization term to realize feature selection constraints, and a new objective function is obtained as follows [28,29]:
$$J = \sum_{i=1}^{N} \sum_{j=1}^{C} z_{ij} \left( d(x_i, v_j) + G_{ij} \right) + \lambda \sum_{i=1}^{N} \sum_{j=1}^{C} z_{ij} \log \frac{z_{ij}}{\pi_j} + \gamma \sum_{i=1}^{N} \sum_{j=1}^{C} \sum_{l=1}^{D} z_{ij} \left( s_{ijl} \log \frac{s_{ijl}}{\rho_l} + \left( 1 - s_{ijl} \right) \log \frac{1 - s_{ijl}}{1 - \rho_l} \right)$$

Further, $d_{ij} = \sum_{l=1}^{D} \left( s_{ijl} \left( x_{il} - \mu_{jl} \right)^2 + \left( 1 - s_{ijl} \right) \left( x_{il} - \varepsilon_l \right)^2 \right)$ and

$$G_{ij} = \sum_{\beta \in N_i} \frac{\left( 1 - z_{\beta j} \right)^m}{z_{\beta j} \tilde{d}_{i\beta} + 1} d_{\beta j}, \qquad d_{\beta j} = \sum_{l=1}^{D} \left( s_{\beta jl} \left( x_{\beta l} - \mu_{jl} \right)^2 + \left( 1 - s_{\beta jl} \right) \left( x_{\beta l} - \varepsilon_l \right)^2 \right).$$
$d_{ij}$ is the weighted Euclidean distance between the $i$th sample and the center $\mu_j$ of class $j$, and $\tilde{d}_{i\beta}$ is the Euclidean distance between the spatial positions of pixel point $x_i$ and pixel point $x_\beta$. $s_{ijl}$ is the degree of influence of the $l$th characteristic attribute $x_{il}$ of the $i$th sample on the $j$th class. $\varepsilon_l$ is the $l$th component of the mean of all samples. $\rho_l$ is the weight factor of the $l$th-dimension feature attribute of the samples. $G_{ij}$ is used as a fuzzy factor.
In the literature, the membership degree has been obtained strictly according to the Lagrange multiplier method after finding an unconstrained solution of the objective function; however, the clustering center is directly calculated using the traditional fuzzy C-means clustering center expression, which is not strictly derived via the Lagrange method, resulting in an inconsistency between Equation (4) and the clustering objective function. In this section, the clustering objective function is optimized strictly using the Lagrange multiplier method, and the iterative optimization expressions are solved. The process is as follows [30,31]:
Finding the partial derivative of the objective function with respect to $s_{ijl}$:
$$\frac{\partial L}{\partial s_{ijl}} = z_{ij} \left[ \left( x_{il} - \mu_{jl} \right)^2 - \left( x_{il} - \varepsilon_l \right)^2 + \sum_{\beta \in N_i} \frac{\left( 1 - z_{\beta j} \right)^m}{z_{\beta j} \tilde{d}_{i\beta} + 1} \left( \left( x_{\beta l} - \mu_{jl} \right)^2 - \left( x_{\beta l} - \varepsilon_l \right)^2 \right) \right] + \gamma z_{ij} \left( \log \frac{s_{ijl}}{\rho_l} - \log \frac{1 - s_{ijl}}{1 - \rho_l} \right).$$
Setting the partial derivative to zero gives:

$$s_{ijl} = \frac{\rho_l \exp\left( -t_{ijl} / \gamma \right)}{1 - \rho_l + \rho_l \exp\left( -t_{ijl} / \gamma \right)},$$

where $t_{ijl}$ denotes the bracketed distance term in the above partial derivative.
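The closed-form saliency update has a logistic shape in the per-feature cost difference $t_{ijl}$; the following sketch (the helper name is illustrative, and $t_{ijl}$ is taken as precomputed) shows the behavior:

```python
import numpy as np

def feature_saliency(t, rho, gamma):
    """Closed-form saliency: s = rho*exp(-t/gamma) / ((1-rho) + rho*exp(-t/gamma)).

    t is the per-feature cost difference (class-center fit minus global-mean fit);
    a negative t means the feature discriminates the class well, pushing s above
    the prior weight rho, while a positive t pushes s below rho."""
    e = rho * np.exp(-np.asarray(t, dtype=float) / gamma)
    return e / ((1.0 - rho) + e)
```

Because the update is monotonically decreasing in $t_{ijl}$ and always lies in $(0, 1)$, irrelevant features are smoothly down-weighted rather than discarded outright.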
The unconstrained expression of the objective function obtained using the Lagrange multiplier method is given by $L_m = L - \sum_{i=1}^{N} \eta_i \left( \sum_{j=1}^{C} z_{ij} - 1 \right)$. Finding the partial derivative of this formula with respect to $z_{ij}$:
$$\frac{\partial L_m}{\partial z_{ij}} = d(x_i, v_j) + G_{ij} + \lambda \log \frac{z_{ij}}{\pi_j} + \lambda + \gamma \sum_{l=1}^{D} \left( s_{ijl} \log \frac{s_{ijl}}{\rho_l} + \left( 1 - s_{ijl} \right) \log \frac{1 - s_{ijl}}{1 - \rho_l} \right) - \eta_i$$
Bringing the local ambiguity factor $G_{ij}$ into the formula and setting it equal to zero:
$$\lambda \log \frac{z_{ij}}{\pi_j} = - \sum_{l=1}^{D} \left\{ s_{ijl} \left[ \left( x_{il} - \mu_{jl} \right)^2 + \sum_{\beta \in N_i} \frac{\left( 1 - z_{\beta j} \right)^m}{z_{\beta j} \tilde{d}_{i\beta} + 1} \left( x_{\beta l} - \mu_{jl} \right)^2 + \gamma \log \frac{s_{ijl}}{\rho_l} \right] + \left( 1 - s_{ijl} \right) \left[ \left( x_{il} - \varepsilon_l \right)^2 + \sum_{\beta \in N_i} \frac{\left( 1 - z_{\beta j} \right)^m}{z_{\beta j} \tilde{d}_{i\beta} + 1} \left( x_{\beta l} - \varepsilon_l \right)^2 + \gamma \log \frac{1 - s_{ijl}}{1 - \rho_l} \right] \right\} - \lambda + \eta_i$$
The membership degree satisfies the constraint $\sum_{j=1}^{C} z_{ij} = 1$.
The iteration expression of the membership degree $z_{ij}$ is solved by introducing Equation (15) into Equation (17), as follows.
$$z_{ij} = \frac{\pi_j \exp\left( -\eta_{ij} / \lambda \right)}{\sum_{k=1}^{C} \pi_k \exp\left( -\eta_{ik} / \lambda \right)}$$
Therefore:
$$\eta_{ij} = \sum_{l=1}^{D} \left\{ s_{ijl} \left[ \left( x_{il} - \mu_{jl} \right)^2 + \sum_{\beta \in N_i} \frac{\left( 1 - z_{\beta j} \right)^m}{z_{\beta j} \tilde{d}_{i\beta} + 1} \left( x_{\beta l} - \mu_{jl} \right)^2 + \gamma \log \frac{s_{ijl}}{\rho_l} \right] + \left( 1 - s_{ijl} \right) \left[ \left( x_{il} - \varepsilon_l \right)^2 + \sum_{\beta \in N_i} \frac{\left( 1 - z_{\beta j} \right)^m}{z_{\beta j} \tilde{d}_{i\beta} + 1} \left( x_{\beta l} - \varepsilon_l \right)^2 + \gamma \log \frac{1 - s_{ijl}}{1 - \rho_l} \right] \right\}$$

with $s_{ijl} = \rho_l \exp\left( -t_{ijl} / \gamma \right) / \left( 1 - \rho_l + \rho_l \exp\left( -t_{ijl} / \gamma \right) \right)$ substituted from Equation (15).
Finding the partial derivative of the objective function with respect to $\mu_{jl}$ and setting it to zero gives:

$$\frac{\partial L_m}{\partial \mu_{jl}} = -2 \sum_{i=1}^{N} z_{ij} \left[ s_{ijl} \left( x_{il} - \mu_{jl} \right) + \sum_{\beta \in N_i} \frac{\left( 1 - z_{\beta j} \right)^m}{z_{\beta j} \tilde{d}_{i\beta} + 1} s_{\beta jl} \left( x_{\beta l} - \mu_{jl} \right) \right],$$

$$\mu_{jl} = \frac{1}{M_{jl}} \sum_{i=1}^{N} z_{ij} s_{ijl} x_{il},$$

where $M_{jl} = \sum_{i=1}^{N} z_{ij} s_{ijl}$.
Finding the partial derivative of the objective function with respect to $\varepsilon_l$ gives:

$$\frac{\partial L_m}{\partial \varepsilon_l} = -2 \sum_{i=1}^{N} \sum_{j=1}^{C} z_{ij} \left[ \left( 1 - s_{ijl} \right) \left( x_{il} - \varepsilon_l \right) + \sum_{\beta \in N_i} \frac{\left( 1 - z_{\beta j} \right)^m}{z_{\beta j} \tilde{d}_{i\beta} + 1} \left( 1 - s_{\beta jl} \right) \left( x_{\beta l} - \varepsilon_l \right) \right]$$

Setting the partial derivative to zero yields the expression for $\varepsilon_l$:

$$\varepsilon_l = \frac{1}{F_l} \sum_{i=1}^{N} \sum_{j=1}^{C} z_{ij} \left( 1 - s_{ijl} \right) x_{il}, \qquad F_l = \sum_{i=1}^{N} \sum_{j=1}^{C} z_{ij} \left( 1 - s_{ijl} \right).$$
Taking the partial derivative of the objective function with respect to $\rho_l$ and setting it to 0 gives the following iterative expression.
$$\rho_l = \frac{1}{N} \sum_{i=1}^{N} \sum_{j=1}^{C} z_{ij} s_{ijl}$$
Using the Lagrange multiplier method, the partial derivative of the objective function with respect to π j is set to 0:
$$\frac{\partial}{\partial \pi_j} \left[ L - \sum_{i=1}^{N} \eta_i \left( \sum_{j=1}^{C} \pi_j - 1 \right) \right] = 0$$
The iterative expression of π j is obtained from the above formula:
$$\pi_j = \frac{1}{N} \sum_{i=1}^{N} z_{ij}$$
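Taken together, the closed-form updates for $\mu_{jl}$, $\varepsilon_l$, $\rho_l$, and $\pi_j$ can be sketched as follows (the neighborhood terms are dropped from the center update, matching the final simplified expressions above; names are illustrative):

```python
import numpy as np

def update_parameters(X, Z, S):
    """Closed-form updates given memberships Z (N,C) and saliencies S (N,C,D):

    mu_jl = sum_i z_ij s_ijl x_il / sum_i z_ij s_ijl
    eps_l = sum_ij z_ij (1-s_ijl) x_il / sum_ij z_ij (1-s_ijl)
    rho_l = (1/N) sum_ij z_ij s_ijl
    pi_j  = (1/N) sum_i z_ij
    """
    N, D = X.shape
    num_mu = np.einsum("ij,ijl,il->jl", Z, S, X)    # saliency-weighted sums
    den_mu = np.einsum("ij,ijl->jl", Z, S)
    mu = num_mu / den_mu
    w = Z[:, :, None] * (1.0 - S)                   # weights z_ij (1 - s_ijl)
    eps = (w * X[:, None, :]).sum(axis=(0, 1)) / w.sum(axis=(0, 1))
    rho = (Z[:, :, None] * S).sum(axis=(0, 1)) / N
    pi = Z.sum(axis=0) / N
    return mu, eps, rho, pi
```

Note how each parameter is a membership-weighted average: the class centers use the salient part of each feature, while the shared background mean $\varepsilon$ absorbs the non-salient part.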

2.2.3. Postprocessing Method of the Clustering Membership Degree

To further enhance the robustness against noise, neighborhood weighting information is added to the iteration expression of the membership degree. Combined with the idea of the non-Markov random field (MRF) space-constrained Gaussian model in the literature, this section constructs a neighborhood weighting function using the classification membership degree to postprocess the clustering membership degree. The function takes the corresponding median as a probability by sorting the membership degrees of the neighborhood pixels in ascending order, which is expressed as follows [32,33]:
$$H_{ij} = \underset{\beta \in N_i}{\operatorname{median}} \left\{ z_{\beta j} \right\}$$
Here, $z_{\beta j}$ denotes the classification membership degrees of the neighborhood pixels, with neighborhood window sizes of $3 \times 3$ or $5 \times 5$, and $N_i$ represents the set of classified membership degrees of the neighborhood pixels. According to the Bayesian theorem, the weight factor of the neighborhood information function is added to Equation (18), and the new expression of the membership degree is given in Equation (27):
$$z_{ij} = \frac{\pi_j \left( H_{ij} \right)^\alpha \exp\left( -\eta_{ij} / \lambda \right)}{\sum_{k=1}^{C} \pi_k \left( H_{ik} \right)^\alpha \exp\left( -\eta_{ik} / \lambda \right)}$$
In this equation, $\alpha$ is the weight factor; a value of 2.0 is usually chosen. Its function is similar to the fuzzy weight factor $m$ in the traditional fuzzy C-means clustering objective function.
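The median-based neighborhood weighting can be sketched as follows (a $3 \times 3$ window and $\alpha = 2$ are assumed, the function name is illustrative, and $\eta_{ij}$ is taken as given):

```python
import numpy as np

def reweight_memberships(Z, pi, eta, alpha=2.0, lam=1.0):
    """Post-process memberships with the neighborhood median weight H_ij.

    Z   : (H, W, C) current memberships
    eta : (H, W, C) per-pixel, per-class cost (the eta_ij term)
    Hw is the 3x3 median of each class's membership map; the new membership is
    proportional to pi_j * Hw^alpha * exp(-eta/lam), normalized over classes."""
    H, W, C = Z.shape
    Hw = np.empty_like(Z)
    for j in range(C):
        padded = np.pad(Z[:, :, j], 1, mode="edge")
        # 3x3 sliding-window median built from the nine shifted views
        stack = np.stack([padded[r:r + H, c:c + W] for r in range(3) for c in range(3)])
        Hw[:, :, j] = np.median(stack, axis=0)
    num = pi[None, None, :] * Hw ** alpha * np.exp(-eta / lam)
    return num / num.sum(axis=2, keepdims=True)
```

An isolated pixel whose membership disagrees with all of its neighbors receives a small median weight for its own class and is pulled toward the neighborhood's class, which is exactly the noise-suppression effect described above.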
The improved membership degree of the sample classification in this chapter has the following properties [34,35]:
(1)
Neighborhood-weighted membership still satisfies the constraint $\sum_{j=1}^{C} z_{ij} = 1$.
(2)
The membership degree of the current pixel $x_i$ in class $j$ is proportional to the probability that the neighborhood pixels $x_\beta$ belong to class $j$.
As this probability increases, the degree of membership increases. Conversely, when the probability that neighborhood pixel $x_\beta$ belongs to class $j$ tends to zero, the membership degree of the current pixel $x_i$ in class $j$ decreases.
In addition, let $\varphi_{ij} = \left( H_{ij} \right)^\alpha$, such that:

$$z_{ij} = \frac{\pi_j \varphi_{ij} \exp\left( -\eta_{ij} / \lambda \right)}{\sum_{k=1}^{C} \pi_k \varphi_{ik} \exp\left( -\eta_{ik} / \lambda \right)}$$
The derivative with respect to $\varphi_{ij}$ is obtained as follows:

$$\frac{\partial z_{ij}}{\partial \varphi_{ij}} = \frac{\pi_j \exp\left( -\eta_{ij} / \lambda \right) \sum_{k=1,\, k \neq j}^{C} \pi_k \varphi_{ik} \exp\left( -\eta_{ik} / \lambda \right)}{\left( \sum_{k=1}^{C} \pi_k \varphi_{ik} \exp\left( -\eta_{ik} / \lambda \right) \right)^2} \geq 0$$
This proves that the neighborhood-weighted membership degree is a monotonically increasing function of $\varphi_{ij}$; that is, the neighborhood information can be used through $\varphi_{ij}$ to constrain the classification membership degree, which improves the performance of the sample classification to a certain extent and enhances the robustness of the algorithm against noise. To achieve image segmentation, the local fuzzy clustering algorithm based on feature selection presented in this chapter needs to solve the iterative optimization expressions. The detailed steps are as follows [36,37]:
Step 1: Transform the image pixel value into sample eigenvector x i , where x i = ( x i 1 , , x i D ) ( i = 1 , 2 , , N ) , N is the total number of pixels, and C is the number of clusters.
The termination condition threshold is δ , the maximum iteration number is τ max , the regularization parameter is λ , and the feature selection parameter is γ .
Step 2: Initialize the feature attribute weight coefficients ρ l = 1 / D and π j = 1 / C ( j = 1 , , C ) to find the prior probability of sample classification.
Step 3: The central vectors of the classes are obtained using FCM clustering: $\mu_j = (\mu_{j1}, \ldots, \mu_{jD})$. The class variance matrix is $\sigma_j^2 = (\sigma_{j1}^2, \ldots, \sigma_{jD}^2)$, the sample eigenvalue mean vector is $\varepsilon = (\varepsilon_1, \ldots, \varepsilon_D)$, and the eigenvalue variance matrix is $\nu^2 = (\nu_1^2, \ldots, \nu_D^2)$. Given the improved adaptive spatial neighborhood information, in this section, the initial values of the Gaussian mixture fuzzy clustering algorithm are selected as $\mu_j^{(0)}$, $\sigma_j^{2(0)}$, $\varepsilon^{(0)}$, and $\nu^{2(0)}$.
Step 4: Compute the adaptive spatial neighborhood information function H i j using Equation (26).
Step 5: Use Equation (15) to calculate the eigenweight function s i j l .
Step 6: Calculate the membership function z i j using Equation (28).
Step 7: Update μ j , σ j 2 , ε , ν 2 , π j , ρ l using Equations (20) to (26).
Step 8: If the number of iterations reaches $\tau = \tau_{\max}$ or the convergence condition $\max_{i,j} \left| z_{ij}^{(\tau+1)} - z_{ij}^{(\tau)} \right| \leq \delta$ is satisfied, the iteration stops; otherwise, the algorithm returns to step 4.
Step 9: The image pixels are classified and segmented according to the principle of the maximum membership degree using the z i j values obtained when the algorithm’s iterations have been completed.
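The iteration structure of steps 4–8 and the maximum-membership classification of step 9 can be sketched generically; here `update_step` is a stand-in for the per-iteration updates ($H_{ij}$, $s_{ijl}$, $z_{ij}$, and the parameters), not the paper's actual implementation:

```python
import numpy as np

def run_clustering(X, C, update_step, delta=1e-4, tau_max=300, seed=0):
    """Alternating-optimization shell for steps 4-8: repeat the update until
    max |z^(tau+1) - z^(tau)| <= delta or tau_max iterations are reached."""
    rng = np.random.default_rng(seed)
    Z = rng.random((X.shape[0], C))
    Z /= Z.sum(axis=1, keepdims=True)       # random memberships, rows sum to 1
    for tau in range(tau_max):
        Z_new = update_step(X, Z)
        if np.abs(Z_new - Z).max() <= delta:
            return Z_new, tau + 1
        Z = Z_new
    return Z, tau_max

def labels_from(Z):
    """Step 9: classify each pixel by the maximum membership degree."""
    return Z.argmax(axis=1)
```

Any of the update rules derived above can be plugged in as `update_step`; the shell only fixes the convergence test and the final hard classification.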

3. Experimental Results and Analysis

To verify the segmentation performance and anti-noise ability of the improved algorithm, high-resolution remote sensing images containing common ground objects (such as forest, farmland, bare land, and grassland), synthetic images, standard images, and high-resolution medical images were selected, as shown in Figure 1. The improved algorithm and the FCM_S, FLICM, kernel-weighted FLICM (KWFLICM), and local data and membership relative entropy-based FCM (LDMREFCM) algorithms were used to segment gray images with different noises [36,37]. The peak signal-to-noise ratio (PSNR) and the misclassification rate (MCR) were used to compare the segmentation performance and anti-noise performance of the algorithms [38,39]. The MCR is often used to quantitatively evaluate the performance of segmentation algorithms and is defined as:
$$MCR = \left[ 1 - \left( \sum_{j=1}^{C} \left| C_j \right| \right)^{-1} \sum_{j=1}^{C} \left| A_j \cap C_j \right| \right] \times 100\%$$

where $C_j$ is the set of pixels belonging to class $j$ in the reference segmentation and $A_j$ is the set of pixels assigned to class $j$ by the algorithm.
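Both evaluation metrics can be sketched as follows; the PSNR form $10 \log_{10}(255^2 / \mathrm{MSE})$ is an assumption, since the paper does not restate its formula, and `mcr` assumes the segmentation labels are already aligned with the reference labels:

```python
import numpy as np

def mcr(segmented, reference):
    """Misclassification rate: percentage of pixels whose label disagrees
    with the reference segmentation (labels assumed aligned)."""
    seg = np.asarray(segmented)
    ref = np.asarray(reference)
    return 100.0 * np.count_nonzero(seg != ref) / ref.size

def psnr(img, ref, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images of equal shape."""
    mse = np.mean((np.asarray(img, float) - np.asarray(ref, float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```

A lower MCR and a higher PSNR both indicate a segmentation closer to the reference, which is how the tables below rank the algorithms.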
The efficiency of the algorithms was compared using the running time after convergence and the number of iterations n. A Dell OptiPlex 360 (Intel Core 4, 8 GB of memory) running a Windows 7 system with the MATLAB 2013a (MathWorks, Natick, MA, USA) programming environment comprised the evaluation platform. The maximum number of iterations T max of the algorithm was set to 300. The cluster numbers C for each noise were chosen to be 2, 3, and 4. The regularization and characteristic parameters were selected to be λ = 10 −3 and γ = 10 −3, respectively. The iteration threshold was δ = 10 −4, and the neighborhood window size was set to 3 × 3.

3.1. Image Segmentation Test with Gaussian Noise

3.1.1. Segmentation Performance Test

Gaussian noise was added to two remote sensing images with a mean value of 0 and mean variances of 57 and 80. Gaussian noise was added to images containing four artificial categories, brain CT (Computed Tomography) images, and camera images with a mean value of 0 and mean variances of 140 and 161. The number of clusters was set to 3, 4, 2, and 2. The results were compared using the results from the FLICM, FCM_S, LDMREFCM, and KWFLICM algorithms and the improved algorithm. The original image is shown in Figure 1, and the experimental results are shown in Figure 2, Figure 3, Figure 4 and Figure 5 (b–f). The error rate and PSNR of the segmentation results are shown in Table 1 and Table 2, and the iteration time and the number of iterations are shown in Table 3 [40,41].

3.1.2. Test Result

Comparing the segmentation results of the five algorithms in Figure 2, Figure 3, Figure 4 and Figure 5 for four images with different degrees of Gaussian noise interference, we can see that the segmentation results of the FCM_S, FLICM, and LDMREFCM algorithms still contained many noise points; the KWFLICM algorithm produced fewer noise points, while the improved algorithm produced the fewest. Table 1 shows that the improved algorithm had the highest signal-to-noise ratio of the five algorithms, which shows that it had the strongest resistance to Gaussian noise. Table 2 shows that the error rate of the improved algorithm's segmentation was the smallest of all the algorithms, which shows that its segmentation results were closer to the ideal segmentation and had a better segmentation performance. Comparing the PSNR and iteration time of each algorithm in Table 3, the average PSNR of the improved algorithm was 0.7 dB higher than that of the KWFLICM algorithm, and its average iteration time was 500 s less [42,43]. The iteration times of the FCM_S and FLICM algorithms were the lowest, but their PSNRs were 2–5 dB lower than that of the improved algorithm, and the anti-noise ability of the FLICM and FCM_S methods was poor. Combining the PSNR test results and the iteration times, the improved algorithm had the best anti-Gaussian-noise segmentation performance.

3.2. Image Segmentation Test of Salt-and-Pepper Noise

3.2.1. Segmentation Performance Test

In this experiment, 20% and 40% salt-and-pepper noise were added to two remote sensing images, respectively, while 40% and 30% salt-and-pepper noise were added to brain CT images and images containing four artificial categories, respectively. The experimental results are shown in Figure 6, Figure 7, Figure 8 and Figure 9. The number of clusters was set to 3, 4, 2, and 2. The PSNRs and error rates are shown in Table 4 and Table 5, respectively, and the iterative operation time and number of iterations are shown in Table 6.

3.2.2. Test Result

Comparing the results of the image segmentation with salt-and-pepper noise in Figure 6, Figure 7, Figure 8 and Figure 9, we can see that the FCM_S and FLICM algorithms took neighborhood information into account and suppressed some of the salt-and-pepper noise, but in the case of high noise interference, their segmentation results contained a large amount of noise compared with the improved algorithm. As seen from the segmentation results of the artificial image in Figure 6, Figure 7, Figure 8 and Figure 9, the LDMREFCM algorithm produced false segmentation. The KWFLICM algorithm and the improved algorithm could remove a large number of noise points. From the test results of the PSNR and the error rate (ERR) of the algorithms in Table 4 and Table 5, along with the iteration times in Table 6, it can be concluded that, compared with the FCM_S and FLICM algorithms, the LDMREFCM, KWFLICM, and improved algorithms had a significantly greater noise suppression ability. Table 6 shows that the iteration time of the improved algorithm was the lowest. Although the PSNR of the improved algorithm was 0.7 dB less than that of the KWFLICM algorithm [44,45], its iteration time was 300 s less; likewise, for the brain CT image segmentation test in Table 6, its PSNR was 0.7 dB less than that of the KWFLICM algorithm, but its iteration time was 45 s less. In summary, the proposed algorithm showed a superior performance compared with the FCM_S, FLICM, KWFLICM, and LDMREFCM algorithms: a large amount of salt-and-pepper noise was suppressed, and the iteration speed of the algorithm was faster.

3.3. Image Segmentation Test with Multiplicative Noise

3.3.1. Segmentation Performance Test

Multiplicative noise was added to the remote sensing images, the medical image, and the man-made image with a mean value of 0 and mean variances of 80, 114, 140, and 161. The number of clusters was set to 3, 4, 2, and 2. The experimental results are shown in Figure 10, Figure 11, Figure 12 and Figure 13. The PSNR and error rate of the segmentation results are shown in Table 7 and Table 8. The iteration times and number of iterations of the algorithms are shown in Table 9 [46,47,48].

3.3.2. Test Result

Comparing the results of the image segmentation with multiplicative noise in Figure 10, Figure 11, Figure 12 and Figure 13, we can see that the FCM_S and FLICM algorithms took neighborhood information into account and suppressed part of the multiplicative noise. The KWFLICM and LDMREFCM algorithms could remove a large number of noise points. Compared with the other algorithms, the improved algorithm contained the fewest noise points, and the edges of its segmentation results were continuous and smooth [49]. As shown in Table 7, the PSNR of the improved algorithm was the largest, which proves that the improved algorithm is more robust against multiplicative noise. Comparing the error rates of the segmentation results of each algorithm in Table 8 shows that the segmentation results of the improved algorithm were closer to the ideal segmentation results and had a better segmentation performance. Combined with the comparison of the iteration times in Table 9, the segmentation performance and PSNR of the KWFLICM algorithm were lower than those of the improved algorithm, and the iteration time of the improved algorithm was much shorter. In conclusion, the improved algorithm not only guaranteed good robustness against noise, but also reduced the iteration time and improved the operational efficiency of the algorithm.

3.4. Segmentation Performance Test

To test the segmentation efficiency of the algorithm, several real remote sensing images of different sizes were selected for segmentation. Table 10 shows the segmentation time comparison for the real remote sensing images of different sizes (Figure 14a–g, with sizes of 256 × 256, 532 × 486, 350 × 290, 500 × 500, 590 × 490, 700 × 680, and 1024 × 768, respectively), in which the bold value is the optimal value. It can be seen that the segmentation efficiency of the first four comparison algorithms on each real remote sensing image was lower, and the larger the image, the longer the segmentation time; the improved algorithm achieved shorter segmentation times on real remote sensing images of different sizes, and its segmentation efficiency was much higher than that of the other algorithms. The above analysis shows that the algorithm proposed in this paper is highly efficient and has practical significance and reference value for large-scale remote sensing image processing in practical applications.

3.5. Segmentation Test of Remote Sensing Images Disturbed Using Mixed Noise

Three remote sensing images of farmland, a stadium, and a river (Figure 15) were segmented after adding Gaussian noise (mean of 0, standard deviation of 25) and salt-and-pepper noise of different intensities (5%, 10%, and 30%). The number of clusters was set to 2, 3, and 2, respectively, and the segmentation results are shown in Figure 16, Figure 17 and Figure 18.
Compared with the other algorithms, the improved algorithm better met the needs of segmenting images disturbed by mixed salt-and-pepper and Gaussian noise, as shown in Table 11 and Table 12.
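Mixed-noise test images of this kind can be generated by adding zero-mean Gaussian noise and then corrupting a fraction of the pixels with salt-and-pepper noise; one possible sketch (NumPy, 8-bit images; the function name and defaults are assumptions, not the authors' code) is:

```python
import numpy as np

def add_mixed_noise(image, sigma=25.0, density=0.05, rng=None):
    """Add Gaussian noise (mean 0, std sigma) then salt-and-pepper noise
    at the given density to an 8-bit grayscale image."""
    rng = np.random.default_rng() if rng is None else rng
    noisy = image.astype(np.float64) + rng.normal(0.0, sigma, image.shape)
    mask = rng.random(image.shape)
    noisy[mask < density / 2] = 0.0            # pepper pixels
    noisy[mask > 1.0 - density / 2] = 255.0    # salt pixels
    return np.clip(noisy, 0, 255).astype(np.uint8)
```

Here `density=0.05` corresponds to the 5% salt-and-pepper intensity used above; passing an explicit `rng` makes the corrupted test images reproducible.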

4. Conclusions

The FLICM algorithm combines the spatial information of neighborhood pixels, gray information, and fuzzy classification information, which improves its anti-noise performance. However, the algorithm does not take into account the impact of different features on clustering. Additionally, because FLICM does not minimize its objective function strictly according to the Lagrange method, it easily falls into local optima and iterates slowly. In this study, the FLICM algorithm was improved. First, the membership degree was introduced into the local constraint information of the FLICM algorithm. Second, to account for the influence of individual features on clustering, the feature saliency was introduced into the objective function of the algorithm. Finally, a neighborhood weighting function was constructed from the classification membership degree and applied to the membership update, yielding the feature-selection-based local fuzzy clustering algorithm. The improved algorithm was compared with existing robust clustering segmentation algorithms in clustering segmentation tests on noisy images. The segmentation results were objectively compared in terms of the PSNR and error rate, which demonstrated the effectiveness and practicability of the proposed algorithm.
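For reference, the classical FCM iteration that the improvements above build on, without the local-information, feature-saliency, or neighborhood-weighting terms, can be sketched for grayscale pixels as follows (a baseline sketch only, not the authors' improved update):

```python
import numpy as np

def fcm_gray(pixels, c, m=2.0, iters=100, tol=1e-5):
    """Plain FCM on gray levels: alternate membership and centre updates
    until the cluster centres stop moving."""
    x = np.asarray(pixels, dtype=np.float64).reshape(-1, 1)  # N x 1 gray values
    v = np.linspace(x.min(), x.max(), c)                     # initial centres
    for _ in range(iters):
        d = np.abs(x - v) + 1e-10                # N x c distances to centres
        u = d ** (-2.0 / (m - 1.0))              # inverse-distance weights
        u /= u.sum(axis=1, keepdims=True)        # memberships sum to 1 per pixel
        v_new = (u ** m * x).sum(axis=0) / (u ** m).sum(axis=0)
        if np.max(np.abs(v_new - v)) < tol:      # centres converged
            v = v_new
            break
        v = v_new
    return u.argmax(axis=1), v                   # hard labels and final centres
```

FLICM and the improved algorithm replace the plain distance `d` with terms that also penalize disagreement with the neighborhood, which is what suppresses isolated noise pixels in the results above.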

Author Contributions

All authors contributed to the article. H.R. conceived and designed the simulations under the supervision of T.H. H.R. performed the experiments, analyzed the data, and wrote the paper. T.H. reviewed the manuscript and provided valuable suggestions. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Fundamental Research Funds for the Central Universities (No. 2412019FZ037).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Vrooman, H.A.; Cocosco, C.A.; van der Lijn, F.; Stokking, R.; Ikram, M.A.; Vernooij, M.W.; Breteler, M.M.B.; Niessen, W.J. Multi-spectral brain tissue segmentation using automatically trained k-nearest-neighbor classification. Neuroimage 2007, 37, 71–81.
  2. Kim, S.; Chang, D.Y.; Nowozin, S.; Kohli, P. Image segmentation using higher-order correlation clustering. IEEE Trans. Pattern Anal. Mach. Intell. 2014, 36, 1761–1774.
  3. Pereyra, M.; McLaughlin, S. Fast unsupervised Bayesian image segmentation with adaptive spatial regularisation. IEEE Trans. Image Process. 2017, 26, 2577–2587.
  4. Dunn, J.C. A fuzzy relative of the ISODATA process and its use in detecting compact well-separated clusters. J. Cybern. 1973, 3, 32–57.
  5. Bezdek, J.C. Pattern Recognition with Fuzzy Objective Function Algorithms; Plenum Press: New York, NY, USA, 1981.
  6. Herman, G.T.; Carvalho, B.M. Multiseeded segmentation using fuzzy connectedness. IEEE Trans. Pattern Anal. Mach. Intell. 2001, 23, 460–474.
  7. Rueda, S.; Knight, C.L.; Papageorghiou, A.T.; Noble, J.A. Feature-based fuzzy connectedness segmentation of ultrasound images with an object completion step. Med. Image Anal. 2015, 26, 30–46.
  8. Dokur, Z.; Olmez, T. Segmentation of ultrasound images by using a hybrid neural network. Pattern Recognit. Lett. 2002, 23, 1824–1836.
  9. Seyedhosseini, M.; Tasdizen, T. Multi-class multi-scale series contextual model for image segmentation. IEEE Trans. Image Process. 2013, 22, 4486–4496.
  10. Vese, L.A.; Chan, T.F. A multiphase level set framework for image segmentation using the Mumford and Shah model. Int. J. Comput. Vis. 2002, 50, 271–293.
  11. Cai, W.; Chen, S.; Zhang, D. Fast and robust fuzzy c-means clustering algorithms incorporating local information for image segmentation. Pattern Recognit. 2007, 40, 825–838.
  12. Nguyen, T.M.; Wu, Q.M.J. Gaussian mixture model based spatial neighborhood relationships for pixel labeling problem. IEEE Trans. Syst. Man Cybern. 2012, 42, 193–202.
  13. Li, C.; Kao, C.Y.; Gore, J.C.; Ding, Z. Minimization of region-scalable fitting energy for image segmentation. IEEE Trans. Image Process. 2008, 17, 1940–1949.
  14. Wang, X.; Min, H.; Zou, L.; Zhang, Y.-G. A novel level set method for image segmentation by incorporating local statistical analysis and global similarity measurement. Pattern Recognit. 2015, 48, 189–204.
  15. Wang, X.; Tang, Y.; Masnou, S.; Chen, L. A global/local affinity graph for image segmentation. IEEE Trans. Image Process. 2015, 24, 1399–1411.
  16. Ju, Z.; Liu, H. Fuzzy Gaussian mixture models. Pattern Recognit. 2012, 45, 1146–1158.
  17. Chen, S.M.; Chang, Y.C. Multivariable fuzzy forecasting based on fuzzy clustering and fuzzy rule interpolation techniques. Inf. Sci. 2010, 180, 4772–4783.
  18. Krinidis, S.; Chatzis, V. A robust fuzzy local information C-means clustering algorithm. IEEE Trans. Image Process. 2010, 19, 1328–1337.
  19. Zhang, X.; Zhang, C.; Tang, W.; Wei, Z. Medical image segmentation using improved FCM. Sci. China Inf. Sci. 2012, 55, 1052–1061.
  20. Zhao, X.; Li, Y.; Zhao, Q. Mahalanobis distance based on fuzzy clustering algorithm for image segmentation. Digit. Signal Process. 2015, 3, 8–16.
  21. Sikka, K.; Sinha, N.; Singh, P.K.; Mishra, A.K. A fully automated algorithm under modified FCM framework for improved brain MR image segmentation. Magn. Reson. Imaging 2009, 27, 994–1004.
  22. Benaichouche, A.N.; Oulhadj, H.; Siarry, P. Improved spatial fuzzy c-means clustering for image segmentation using PSO initialization, Mahalanobis distance and post-segmentation correction. Digit. Signal Process. 2013, 23, 1390–1400.
  23. Kandwal, R.; Kumar, A.; Bhargava, S. Review: Existing image segmentation techniques. Int. J. Adv. Res. Comput. Sci. Softw. Eng. 2014, 4, 153–156.
  24. Khan, A.M.; Ravi, S. Segmentation methods: A comparative study. Int. J. Soft Comput. Eng. 2013, 4, 84–92.
  25. Shivhare, P.; Gupta, V. Review of image segmentation techniques including pre & post processing operations. Int. J. Eng. Adv. Technol. 2015, 4, 153–157.
  26. Dass, R.; Devi, S. Image segmentation techniques. Graph. Models Image Process. 2012, 29, 100–132.
  27. Marr, D.; Hildreth, E. Theory of edge detection. Proc. R. Soc. Lond. 1980, 207, 187–217.
  28. Kuang, Y.H. Applications of an enhanced cluster validity index method based on the fuzzy C-means and rough set theories to partition and classification. Expert Syst. Appl. 2010, 37, 8757–8769.
  29. Vandenbroucke, N.; Macaire, L.; Postaire, J.G. Color Image Segmentation by Supervised Pixel Classification in a Color Texture Feature Space: Application to Soccer Image Segmentation. In Proceedings of the 15th International Conference on Pattern Recognition, Barcelona, Spain, 3–7 September 2000.
  30. Hou, X.; Zhang, T.; Xiong, G.; Lu, Z.; Xie, K. A novel steganalysis framework of heterogeneous images based on GMM clustering. Signal Process. Image Commun. 2014, 29, 385–399.
  31. Zhao, B.; Zhong, Y.; Ma, A.; Zhang, L. A spatial Gaussian mixture model for optical remote sensing image clustering. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2016, 9, 1–12.
  32. Yang, M.S.; Lai, C.Y.; Lin, C.Y. A robust EM clustering algorithm for Gaussian mixture models. Pattern Recognit. 2012, 45, 3950–3961.
  33. Lin, P.L.; Huang, P.W.; Kuo, C.H.; Lai, Y.H. A size-insensitive integrity-based fuzzy C-means method for data clustering. Pattern Recognit. 2014, 47, 2042–2056.
  34. Chen, S.; Zhang, D. Robust image segmentation using FCM with spatial constraints based on new kernel-induced distance measure. IEEE Trans. Syst. Man Cybern. Part B Cybern. 2004, 34, 1907–1916.
  35. Hussain, R.; Alican Noyan, M.; Woyessa, G.; Marín, R.R.R.; Martinez, P.A.; Mahdi, F.M.; Finazzi, V.; Hazlehurst, T.A.; Hunter, T.H.; Coll, T.; et al. An ultra-compact particle size analyser using a CMOS image sensor and machine learning. Light Sci. Appl. 2020, 9, 21.
  36. Wei, M.; Xing, F.; You, Z. A real-time detection and positioning method for small and weak targets using a 1D morphology-based approach in 2D images. Light Sci. Appl. 2018, 7, 18006.
  37. Rivenson, Y.; Liu, T.; Wei, Z.; Zhang, Y.; de Haan, K.; Ozcan, A. PhaseStain: The digital staining of label-free quantitative phase microscopy images using deep learning. Light Sci. Appl. 2019, 8, 23.
  38. Zhang, D.; Lan, L.; Bai, Y.; Majeed, H.; Kandel, M.E.; Popescu, G.; Cheng, J.-X. Bond-selective transient phase imaging via sensing of the infrared photothermal effect. Light Sci. Appl. 2019, 8, 116.
  39. Yao, R.; Ochoa, M.; Yan, P.; Intes, X. Net-FLICS: Fast quantitative wide-field fluorescence lifetime imaging with compressed sensing–a deep learning approach. Light Sci. Appl. 2019, 8, 26.
  40. Liu, C.; Dong, W.-f.; Jiang, K.-m.; Zhou, W.-p.; Zhang, T.; Li, H.-w. Recognition of dense fluorescent droplets using an improved watershed segmentation algorithm. Chin. Opt. 2019, 12, 783–790.
  41. Hu, H.-r.; Dan, X.-z.; Zhao, Q.-h.; Sun, F.-y.; Wang, Y.-h. Automatic extraction of speckle area in digital image correlation. Chin. Opt. 2019, 12, 1329–1337.
  42. Wang, X.-s.; Guo, S.; Xu, X.-j.; Li, A.-z.; He, Y.-g.; Guo, W.; Liu, R.-b.; Zhang, W.-j.; Zhang, T.-l. Fast recognition and classification of tetrazole compounds based on laser-induced breakdown spectroscopy and Raman spectroscopy. Chin. Opt. 2019, 12, 888–895.
  43. Cai, H.-y.; Zhang, W.-q.; Chen, X.-d.; Liu, S.-s.; Han, X.-y. Image processing method for ophthalmic optical coherence tomography. Chin. Opt. 2019, 12, 731–740.
  44. Wang, J.; He, X.; Wei, Z.-h.; Mu, Z.-y.; Lv, Y.; He, J.-w. Restoration method for blurred star images based on region filters. Chin. Opt. 2019, 12, 321–331.
  45. Liu, D.-m.; Chang, F.-l. Active contour model for image segmentation based on Retinex correction and saliency. Opt. Precis. Eng. 2019, 27, 1593–1600.
  46. Lu, B.; Hu, T.; Liu, T. Variable exponential chromaticity filtering for microscopic image segmentation of wire harness terminals. Opt. Precis. Eng. 2019, 27, 1894–1900.
  47. Deng, J.; Li, J.; Feng, H.; Zeng, Z.-m. Three-dimensional depth segmentation technique utilizing discontinuities of wrapped phase sequence. Opt. Precis. Eng. 2019, 27, 2459–2466.
  48. Wei, T.; Zhou, Y.-h. Blind sidewalk image location based on machine learning recognition and marked watershed segmentation. Opt. Precis. Eng. 2019, 27, 201–210.
  49. Zhang, K.-h.; Tan, Z.-h.; Li, B. Automated image segmentation based on pulse coupled neural network with particle swarm optimization and comprehensive evaluation. Opt. Precis. Eng. 2018, 26, 962–970.
Figure 1. Original images.
Figure 2. Gaussian noise disturbing remote sensing image 1 (a) and the segmentation results (b–f). FLICM: Fuzzy local information C-means, FCM_S: Fuzzy C-means with spatial constraints, LDMREFCM: Local data and membership relative entropy-based FCM, KWFLICM: Kernel-weighted FLICM, and Improved algorithm.
Figure 3. Gaussian noise disturbing remote sensing image 2 (a) and the segmentation results (b–f).
Figure 4. Gaussian noise interfering with the brain slice image (a) and the segmentation results (b–f).
Figure 5. Gaussian noise disturbing the camera image (a) and the segmentation results (b–f).
Figure 6. Disturbance of salt-and-pepper noise on remote sensing image 1 (a) and the segmentation results (b–f).
Figure 7. Remote sensing image 4 disturbed using salt-and-pepper noise (a) and the segmentation results (b–f).
Figure 8. Salt-and-pepper noise interfering with brain slice images (a) and the segmentation results (b–f).
Figure 9. Images containing four artificial categories disturbed by salt-and-pepper noise (a) and the segmentation results (b–f).
Figure 10. Multiplicative noise disturbing remote sensing image 1 (a) and the segmentation results (b–f).
Figure 11. Multiplicative noise disturbing remote sensing image 3 (a) and the segmentation results (b–f).
Figure 12. Multiplicative noise disturbing brain CT images (a) and the segmentation results (b–f).
Figure 13. Multiplicative noise interfering with three types of artificial images (a) and the segmentation results (b–f).
Figure 14. Real remote sensing images.
Figure 15. Original remote sensing images.
Figure 16. Interference of mixed noise on the farmland image (a) and the segmentation results (b–f).
Figure 17. Interference of mixed noise on the stadium image (a) and the segmentation results (b–f).
Figure 18. Interference of mixed noise on the river image (a) and the segmentation results (b–f).
Table 1. Comparison of the peak signal-to-noise ratio (PSNR) (dB) against Gaussian noise for each algorithm.

Split Image | FLICM | FCM_S | LDMREFCM | KWFLICM | Improved Algorithm
Remote sensing image 1 (57) | 13.5301 | 13.0150 | 14.0443 | 15.4893 | 15.9516
Remote sensing image 2 (80) | 8.1045 | 6.8466 | 9.3648 | 8.9022 | 9.4914
Brain slice image (140) | 13.7431 | 11.6259 | 13.9768 | 16.1645 | 16.2804
Cameraman (161) | 9.4264 | 8.0588 | 10.0477 | 12.5087 | 13.2316
Table 2. Comparisons of the misclassification rate (MCR) (%) against Gaussian noise using different algorithms.

Split Image | FLICM | FCM_S | LDMREFCM | KWFLICM | Improved Algorithm
Remote sensing image 1 (57) | 17.13 | 19.60 | 14.93 | 11.00 | 9.84
Remote sensing image 2 (80) | 15.47 | 20.60 | 11.57 | 12.88 | 11.24
Brain slice image (140) | 4.22 | 6.88 | 4.01 | 2.42 | 2.40
Cameraman (161) | 12.29 | 13.07 | 9.89 | 5.61 | 4.75
Table 3. Comparison of the iteration times and number of iterations.

Split Image | Metric | FLICM | FCM_S | LDMREFCM | KWFLICM | Improved Algorithm
Remote sensing image 1 (57) | t_s (s) | 26.442 | 2.855 | 15.489 | 31465.473 | 760.046
Remote sensing image 1 (57) | n | 47 | 18 | 137 | 57 | 56
Remote sensing image 2 (80) | t_s (s) | 13.728 | 1.014 | 189.209 | 1265.792 | 226.326
Remote sensing image 2 (80) | n | 40 | 16 | 56 | 88 | 28
Brain slice image (140) | t_s (s) | 8.284 | 1.498 | 60.017 | 248.337 | 180.134
Brain slice image (140) | n | 22 | 17 | 21 | 5 | 19
Cameraman (161) | t_s (s) | 15.179 | 1.389 | 181.138 | 226.937 | 180.611
Cameraman (161) | n | 41 | 16 | 51 | 13 | 19
Table 4. Comparison of the PSNR (dB) for algorithms applied to images disturbed by salt-and-pepper noise.

Split Image | FLICM | FCM_S | LDMREFCM | KWFLICM | Improved Algorithm
Remote sensing image 1 (20%) | 10.9083 | 12.1014 | 17.7271 | 19.3663 | 18.5909
Remote sensing image 2 (40%) | 10.9748 | 8.1522 | 10.3905 | 13.2863 | 15.5261
Cerebral section (40%) | 9.1041 | 8.8360 | 12.6613 | 18.6563 | 17.9488
Four artificial categories (30%) | 13.3005 | 15.2344 | 11.3057 | 25.4605 | 25.6753
Table 5. Comparison of the MCR (%) for algorithms applied to images disturbed by salt-and-pepper noise.

Split Image | FLICM | FCM_S | LDMREFCM | KWFLICM | Improved Algorithm
Remote sensing image 1 (20%) | 31.21 | 23.37 | 6.11 | 4.50 | 5.22
Remote sensing image 2 (40%) | 7.99 | 15.30 | 9.14 | 4.69 | 2.80
Cerebral section (40%) | 12.29 | 13.07 | 5.42 | 1.36 | 2.38
Four artificial categories (30%) | 38.57 | 23.36 | 52.48 | 1.48 | 1.43
Table 6. Operation time and number of iterations for each algorithm.

Split Image | Metric | FLICM | FCM_S | LDMREFCM | KWFLICM | Improved Algorithm
Remote sensing image 1 (20%) | t_s (s) | 10.582 | 2.933 | 494.152 | 620.837 | 332.492
Remote sensing image 1 (20%) | n | 20 | 27 | 97 | 24 | 23
Remote sensing image 2 (40%) | t_s (s) | 18.549 | 4.227 | 165.631 | 224.407 | 252.513
Remote sensing image 2 (40%) | n | 50 | 15 | 45 | 14 | 26
Cerebral section (40%) | t_s (s) | 8.58 | 1.482 | 213.65 | 230.743 | 185.451
Cerebral section (40%) | n | 23 | 17 | 13 | 13 | 17
Four artificial categories (30%) | t_s (s) | 21.481 | 4.337 | 580.81 | 179.561 | 123.545
Four artificial categories (30%) | n | 31 | 33 | 78 | 6 | 6
Table 7. Comparison of the PSNR (dB) for the multiplicative noise resistance of algorithms.

Split Image | FLICM | FCM_S | LDMREFCM | KWFLICM | Improved Algorithm
Remote sensing image 1 (80) | 17.1948 | 16.4000 | 18.5557 | 17.7491 | 18.2079
Remote sensing image 3 (114) | 16.2157 | 16.9107 | 18.8884 | 17.2875 | 19.3224
Brain CT (140) | 17.2170 | 17.4349 | 18.3648 | 19.2218 | 19.2364
Artificial, three categories (161) | 12.0180 | 14.5374 | 20.1556 | 20.8528 | 24.1451
Table 8. Comparison of the MCR (%) for multiplicative noise resistance of algorithms.

Split Image | FLICM | FCM_S | LDMREFCM | KWFLICM | Improved Algorithm
Remote sensing image 1 (80) | 7.48 | 8.63 | 5.49 | 6.62 | 6.00
Remote sensing image 3 (114) | 9.51 | 8.72 | 5.09 | 7.27 | 4.61
Brain CT (140) | 7.60 | 7.15 | 3.78 | 4.43 | 1.36
Artificial, three categories (161) | 25.65 | 12.06 | 3.67 | 2.19 | 0.81
Table 9. Iteration time and number of iterations for each algorithm.

Split Image | Metric | FLICM | FCM_S | LDMREFCM | KWFLICM | Improved Algorithm
Remote sensing image 1 (80) | t_s (s) | 26.395 | 2.839 | 352.083 | 26.783 | 341.965
Remote sensing image 1 (80) | n | 50 | 26 | 66 | 14 | 25
Remote sensing image 3 (114) | t_s (s) | 17.721 | 3.495 | 580.462 | 1225.57 | 407.895
Remote sensing image 3 (114) | n | 33 | 32 | 105 | 49 | 30
Brain CT (140) | t_s (s) | 17.666 | 1.841 | 308.691 | 541.184 | 407.535
Brain CT (140) | n | 33 | 17 | 60 | 24 | 30
Artificial, three categories (161) | t_s (s) | 22.278 | 1.076 | 261.068 | 1116.625 | 198.809
Artificial, three categories (161) | n | 41 | 10 | 49 | 49 | 14
Table 10. Segmentation time (s) of real remote sensing images of different sizes for each algorithm.

Segmentation Algorithm | Figure 14a | Figure 14b | Figure 14c | Figure 14d | Figure 14e | Figure 14f | Figure 14g
FCM_S | 2.0428 | 6.2186 | 2.1626 | 5.5815 | 8.6430 | 10.248 | 13.976
FLICM | 5.1719 | 20.7453 | 5.0523 | 15.4640 | 22.2379 | 24.543 | 26.787
KWFLICM | 5.9442 | 22.9801 | 7.0612 | 23.4308 | 24.1369 | 26.453 | 28.285
LDMREFCM | 4.5021 | 20.6175 | 5.2836 | 13.4459 | 17.5801 | 19.456 | 22.167
Improved algorithm | 1.1204 | 3.2987 | 1.4128 | 2.2367 | 4.2376 | 6.298 | 8.213
Table 11. The PSNR (dB) comparison between different algorithms against mixed noise.

Split Image | FLICM | FCM_S | LDMREFCM | KWFLICM | Improved Algorithm
Farmland | 6.3382 | 8.9837 | 9.2103 | 7.7234 | 15.7468
Stadium | 8.3001 | 9.4640 | 9.1565 | 9.5530 | 15.1158
Rivers | 10.0101 | 10.0697 | 10.4482 | 12.4691 | 17.4963
Table 12. The MCR (%) comparison between different algorithms against mixed noise.

Split Image | FLICM | FCM_S | LDMREFCM | KWFLICM | Improved Algorithm
Farmland | 46.44 | 25.34 | 37.14 | 24.07 | 5.34
Stadium | 44.82 | 34.90 | 41.19 | 33.99 | 10.93
Rivers | 20.02 | 19.75 | 11.37 | 18.01 | 3.57

Ren, H.; Hu, T. A Local Neighborhood Robust Fuzzy Clustering Image Segmentation Algorithm Based on an Adaptive Feature Selection Gaussian Mixture Model. Sensors 2020, 20, 2391. https://doi.org/10.3390/s20082391