Article

Kernel Clustering with a Differential Harmony Search Algorithm for Scheme Classification

School of Hydropower and Information Engineering, Huazhong University of Science and Technology, Wuhan 430074, China
* Author to whom correspondence should be addressed.
Algorithms 2017, 10(1), 14; https://doi.org/10.3390/a10010014
Submission received: 8 October 2016 / Revised: 22 December 2016 / Accepted: 11 January 2017 / Published: 14 January 2017

Abstract

This paper presents a kernel fuzzy clustering method combined with a novel differential harmony search algorithm for diversion scheduling scheme classification. First, we employ a self-adaptive solution generation strategy and a differential evolution-based population update strategy to improve the classical harmony search. Second, we apply the differential harmony search algorithm to kernel fuzzy clustering to help the clustering method obtain better solutions. Finally, the combination of kernel fuzzy clustering and differential harmony search is applied to water diversion scheduling in East Lake. A comparison of the proposed method with other methods has been carried out. The results show that kernel clustering with the differential harmony search algorithm performs well on the water diversion scheduling problem.

1. Introduction

Metaheuristics are designed to find, generate, or select a heuristic that provides a good solution to an optimization problem [1]. By searching over a large set of feasible solutions, metaheuristics can often find good solutions with less computational effort [2]. Metaheuristics have been widely used in artificial intelligence and its applications, e.g., transit network design problems [3], sewer pipe networks [4], water distribution systems [5], sizing optimization of truss structures [6], ordinary differential equations [7], and so forth. With metaheuristic algorithms, even complex problems can be brought close to a solution [7]. Metaheuristics are good at finding near-optimal solutions to numerical real-valued problems [8].
Harmony search (HS) is a phenomenon-mimicking algorithm proposed by Geem in 2001 [9]. It is a relatively recent metaheuristic inspired by musical performance. In the HS algorithm, a new solution is generated by pitch adjustment and random selection. HS can deal with discontinuous optimization problems as well as continuous ones. Compared with other artificial intelligence algorithms such as the genetic algorithm (GA) and its variants, the HS algorithm requires fewer parameters, and these parameters are easy to set. Moreover, HS can overcome the drawback of GA's building block theory. These advantages have attracted much interest in recent years, and HS has been widely applied in many fields, such as email classification [10], single machine scheduling problems [11], and so on.
Although HS has its advantages, i.e., it is good at identifying the high-performance regions of the solution space in a reasonable time, it has difficulty performing local searches in numerical applications [12]. In order to improve its optimization ability, different variants of HS have been developed. Mahdavi et al. presented an improved harmony search algorithm (IHS) which changes the parameters dynamically with the generation number [12]. The IHS algorithm introduces a parameter-adjustment strategy that improves the performance of the HS algorithm. However, the user needs to specify the new parameters, which are not easy to set, and the algorithm still performs poorly in local search for some numerical applications. Other modifications, such as the global-best harmony search algorithm (GHS) [13] and chaotic harmony search algorithms (CHS) [14], have shown better performance than the classical HS, but they still have their own shortcomings. The GHS algorithm generates new harmonies using the variable from the best harmony. However, the algorithm cannot be adopted when the variables have different value ranges, which limits the scope of its application. The CHS generates new solutions following a chaotic map, but simulations show that CHS still suffers from local optima when dealing with some numerical applications.
Fuzzy clustering is a class of algorithms for cluster analysis which determine the affinities of samples using mathematical methods. It divides samples into classes or clusters so that items in the same class are as similar as possible, while items in different classes are dissimilar. It is a useful technique for analyzing statistical data in multi-dimensional space.
Fuzzy clustering has gained significant interest for its ability to classify samples that are multi-attributed and difficult to analyze. In recent years, there have been a large number of studies on fuzzy clustering. Fuzzy c-means clustering (FCM) [14] and fuzzy k-means clustering (FKM) [15] are widely used to categorize similar data into clusters. The kernel fuzzy clustering algorithm (KFC) [16] and the weighted fuzzy kernel-clustering algorithm (WFKCA) [17] are also efficient approaches for cluster analysis. Research has shown that WFKCA has good convergence properties and that the prototypes obtained can be well represented in the original space. However, these clustering methods share the same problem: the iterative solution is not guaranteed to be optimal. To overcome this drawback, in this paper we combine KFC with the HS algorithm to help KFC perform better.
Although the modifications of HS show better performance than the classical HS, their performance still needs improvement. In this paper, we propose a new differential harmony search algorithm (DHS) and apply it to kernel fuzzy clustering. A comparison of the proposed method with other methods has been carried out. Finally, the proposed method is applied to the water diversion scheduling assessment in East Lake, which is a new study in the East Lake Ecological Water Network Project. The water diversion scheduling diverts water from the Yangtze River to the East Lake Network, aiming to improve water quality in the sub-lakes. Using a hydrodynamic simulation model and a water quality model, the quality of the lakes at the end of the water diversion can be simulated. In order to obtain a better improvement of the water quality and reduce the economic cost, the water diversion scheme must be carefully developed. The diversion scheduling in the East Lake Network is a multi-objective problem; however, multi-objective evolutionary algorithms cannot be adopted because the water quality simulation is time-consuming. Thus, we constructed several typical schemes within the feasible range and obtained their results with the water quality model. The purpose of the kernel clustering with differential harmony search method is to classify the results in order to identify the schemes that perform better than the others.
This paper is organized as follows: Section 2 presents an overview of the harmony search algorithm and kernel fuzzy clustering; Section 3 describes the modification and the combination of kernel fuzzy clustering and the differential harmony search algorithm; Section 4 discusses the computational results; and Section 5 provides the summary of this paper.

2. Harmony Search and Kernel Fuzzy Clustering

2.1. Harmony Search Algorithm

Harmony search (HS) is a phenomenon-mimicking algorithm inspired by the improvisation process of musicians, proposed by Geem in 2001 [9]. The method is a population-based evolutionary algorithm; it provides a simple and effective way to find a solution that optimizes an objective function. Parameters of the harmony search algorithm usually include the harmony memory size (hms), harmony memory considering rate (hmcr), pitch adjustment rate (par), and fret width (fw).
The main steps of the classical harmony search are memory consideration, pitch adjustment, and randomization. Details of HS are explained as follows:
Step 1
Initialize algorithm parameters
This step specifies the HS algorithm parameters, including hms, hmcr, par, and fw.
Step 2
Initialize the harmony memory
In HS, each solution is called a harmony. The harmony memory (HM) is equivalent to the population of other population-based algorithms. In HM, all solution vectors are stored. In this step, random vectors, as many as hms, are generated following Equation (1):
$$x_i^j = (UB_i - LB_i) \cdot U(0,1) + LB_i, \quad i = 1, 2, \ldots, D$$
where LBi and UBi are the lower and upper bounds of the ith variable. D represents the dimensions of the problem. U(0, 1) is a uniformly-distributed random number between 0 and 1.
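For illustration, this initialization can be sketched in Python as follows (a minimal sketch, not the exact Java implementation used in our experiments; the function name and data layout are illustrative):

```python
import random

def init_harmony_memory(hms, lb, ub):
    """Randomly fill the harmony memory following Equation (1) (illustrative sketch).

    hms : harmony memory size; lb, ub : per-dimension lower/upper bounds.
    Returns a list of hms harmonies, each a list of D real values.
    """
    d = len(lb)
    return [[(ub[i] - lb[i]) * random.random() + lb[i] for i in range(d)]
            for _ in range(hms)]
```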
Step 3
Improvising a new harmony
In this step, a new harmony $x' = (x'_1, x'_2, \ldots, x'_D)$ is generated based on three rules: memory consideration, pitch adjustment, and random selection. The procedure is shown in Algorithm 1.
Algorithm 1 Improvisation of a New Harmony
for i = 1 to D do
  if U(0,1) ≤ hmcr then
    x'_i = x_i^j, where j is a random integer from {1, 2, ..., hms}
    if U(0,1) ≤ par then
      x'_i = x'_i + U(−1,1) × fw
    end if
  else
    x'_i = (UB_i − LB_i) × U(0,1) + LB_i
  end if
end for
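A minimal Python sketch of Algorithm 1 is given below (illustrative only, not the exact Java implementation used in our experiments; how the fret width fw scales the perturbation is an assumption of the sketch):

```python
import random

def improvise_hs(hm, lb, ub, hmcr=0.9, par=0.3, fw=0.005):
    """One improvisation of classical HS, mirroring Algorithm 1 (illustrative sketch)."""
    d = len(lb)
    new = [0.0] * d
    for i in range(d):
        if random.random() <= hmcr:
            # memory consideration: copy dimension i from a random stored harmony
            new[i] = hm[random.randrange(len(hm))][i]
            if random.random() <= par:
                # pitch adjustment: small perturbation scaled by the fret width
                new[i] += random.uniform(-1.0, 1.0) * fw
        else:
            # random selection within the variable bounds
            new[i] = (ub[i] - lb[i]) * random.random() + lb[i]
    return new
```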
Step 4
Update harmony memory
If the new harmony generated in Step 3 is better than the worst harmony stored in HM, replace the worst harmony with the new harmony.
Step 5
Check the terminal criteria
If the maximum number of improvisations (NI) is reached, then stop the algorithm. Otherwise, the improvisation will continue by repeating Steps 3 to 4.

2.2. Kernel Fuzzy Clustering

In the past decade, several studies on the kernel fuzzy clustering method (KFC) have been conducted [18,19]. A kernel method uses kernel functions to map the original sample data to a higher-dimensional space without ever computing the coordinates of the data in that space. The kernel method takes advantage of the fact that dot products in the kernel space can be expressed by a Mercer kernel K, given by $K(x, y) = \Phi(x)^T \Phi(y)$, where $x, y \in \mathbb{R}^d$ [20].
Suppose the sample dataset is given as $X = \{X_1, X_2, \ldots, X_n\}$, where each sample has K attributes, $X_i = \{X_{i1}, X_{i2}, \ldots, X_{iK}\}$. Firstly, the original sample data are normalized following Equation (2):
$$x_{jk} = \frac{X_{jk} - X_k^{\min}}{X_k^{\max} - X_k^{\min}}$$
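A short NumPy sketch of this min-max normalization (an illustrative helper of our own, with a small guard against constant features):

```python
import numpy as np

def min_max_normalize(X):
    """Column-wise min-max normalization of Equation (2) (illustrative sketch)."""
    x_min = X.min(axis=0)
    x_max = X.max(axis=0)
    # guard against zero range so constant features do not divide by zero
    return (X - x_min) / np.maximum(x_max - x_min, 1e-12)
```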
The purpose of kernel fuzzy clustering is to minimize the following objective function:
$$J = \sum_{i=1}^{C} \sum_{j=1}^{N} u_{ij}^{m} \left\| \Phi(x'_j) - \Phi(v'_i) \right\|^2$$
$$x'_j = \mathrm{diag}(w)\, x_j$$
$$v'_i = \mathrm{diag}(w)\, v_i$$
Constraints are:
$$\sum_{i=1}^{C} u_{ij} = 1, \quad 1 \le j \le N$$
$$\sum_{k=1}^{K} w_k = 1$$
where C is the number of clusters, N is the number of samples, K is the number of features, $v_i = [v_{i1}, v_{i2}, \ldots, v_{iK}]$ is the ith cluster center, $u_{ij}$ is the membership of sample $x_j$ in cluster i, $w = [w_1, w_2, \ldots, w_K]^T$ is the feature-weight vector, and m is a coefficient with m > 1.
According to the relevant literature [17,20,21], Equation (3) is simplified as:
$$J = 2 \sum_{i=1}^{C} \sum_{j=1}^{N} u_{ij}^{m} \left( 1 - K(x'_j, v'_i) \right)$$
where $K(x, x')$ is a Mercer kernel function. In this paper, the Gaussian radial basis function (RBF) kernel is adopted:
$$K(x, x') = \exp\left( -\frac{\| x - x' \|_2^2}{2 \sigma^2} \right)$$
The membership and cluster center updates are:
$$u_{ij} = \left( \sum_{r=1}^{C} \frac{1 - K(x'_j, v'_i)}{1 - K(x'_j, v'_r)} \right)^{-\frac{1}{m-1}}$$
$$v'_i = \frac{\sum_{j=1}^{N} u_{ij} K(x'_j, v'_i)\, x'_j}{\sum_{j=1}^{N} u_{ij} K(x'_j, v'_i)}$$
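To make Equations (10) and (11) concrete, the sketch below performs one membership/center update in the feature-weighted space (an illustrative sketch under the reconstruction above; the values of sigma and m and the small epsilon guard are assumptions):

```python
import numpy as np

def rbf(a, b, sigma=1.0):
    """Gaussian (RBF) kernel of Equation (9)."""
    return np.exp(-np.sum((a - b) ** 2) / (2.0 * sigma ** 2))

def kfc_step(xw, vw, m=2.0, sigma=1.0):
    """One update of memberships (Eq. (10)) and centers (Eq. (11)); illustrative sketch.

    xw : (N, K) feature-weighted samples diag(w) x_j
    vw : (C, K) feature-weighted cluster centers diag(w) v_i
    """
    n, c = xw.shape[0], vw.shape[0]
    # kernel-induced distances d[i, j] = 1 - K(x'_j, v'_i), guarded away from zero
    d = np.array([[max(1.0 - rbf(xw[j], vw[i], sigma), 1e-12) for j in range(n)]
                  for i in range(c)])
    u = np.zeros((c, n))
    for j in range(n):
        for i in range(c):
            u[i, j] = np.sum(d[i, j] / d[:, j]) ** (-1.0 / (m - 1.0))
    vw_new = np.zeros_like(vw)
    for i in range(c):
        k = np.array([u[i, j] * rbf(xw[j], vw[i], sigma) for j in range(n)])
        vw_new[i] = (k[:, None] * xw).sum(axis=0) / k.sum()
    return u, vw_new
```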

3. DHS-KFC

3.1. Differential Harmony Search Algorithm

The harmony search (HS) algorithm replaces the worst item in the harmony memory (HM) only if the generated harmony is better. However, some local optima stored in HM may remain unchanged for a long time. Additionally, guidance from the best harmony can improve the local search ability of the algorithm. In this section, a novel modification of the harmony search algorithm, named differential harmony search (DHS), is proposed to help the algorithm converge faster.
In the proposed method, the harmonies in the HM which meet specific conditions are changed following a differential evolution strategy. Moreover, newly generated harmonies take the best vector into account.

3.1.1. A Self-Adaptive Solution Generation Strategy

The purpose of this strategy is to improve the local search ability of the algorithm. In the pitch adjustment step, the classical HS changes the random harmony selected from HM by small perturbations. Unlike classical HS, the pitch adjustment in DHS also considers the best vector, as follows:
$$x'_i = \begin{cases} x_i + fw \cdot U(-1,1), & u(0,1) < \frac{1}{N} \ \text{or} \ u(0,1) < cr \\ x_i^{best} + fw \cdot U(-1,1), & \text{otherwise} \end{cases}$$
where cr is the crossover probability, which generally varies from 0 to 1. Through a large number of numerical simulations, we found that a suitable range for cr is [0.4, 0.9].
In this paper, the parameter cr is dynamically adjusted following Equation (13). When cr is smaller, the new harmony has a higher probability to use the value coming from the best vector, which means that the convergence speed will be faster; when cr is larger, the harmonies will retain their own diversity. Overall, using this strategy, the algorithm will have better local search ability in the early stage.
$$cr = 0.4 + \frac{CI}{2 \times NI}$$
where CI is the current iteration, and NI is the maximum iteration.
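Written as a helper (the function name is our own), the schedule is simply:

```python
def crossover_probability(ci, ni):
    """Dynamic cr of Equation (13): rises linearly from 0.4 to 0.9 over the run."""
    return 0.4 + ci / (2.0 * ni)
```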

3.1.2. A Differential Evolution-Based Population Update Strategy

The purpose of this strategy is to help the algorithm avoid premature local convergence. When updating HM with the new harmony, some local optima in HM may remain unchanged for a long time. In DHS, if a harmony in HM remains unchanged for several iterations (sc = 20), it is replaced with a new harmony using the differential evolution strategy shown in Equation (14):
$$x'_i = \begin{cases} x_i + fw \cdot (x_i^{r_1} - x_i^{r_2}), & u(0,1) < \frac{1}{N} \ \text{or} \ u(0,1) < cr \\ x_i, & \text{otherwise} \end{cases}$$
where $x_i^{r_1}$ and $x_i^{r_2}$ are values taken from two randomly selected harmonies stored in HM. Using the dynamic parameter cr, the harmonies have a higher probability of variation in the later stage.

3.1.3. Implementation of DHS

The DHS algorithm consists of five steps, as follows:
Step 1
Initialize the algorithm parameters
This step specifies parameters, including harmony memory size (hms), harmony memory considering rate (hmcr), pitch adjustment rate (par), fret width (fw), and memory keep iteration (sc).
Step 2
Initialize the harmony memory
This step is consistent with the basic harmony search algorithm. New random vectors $(x^1, \ldots, x^{hms})$, as many as hms, are generated following Equation (15):
$$x_i^j = (UB_i - LB_i) \cdot U(0,1) + LB_i, \quad i = 1, 2, \ldots, D$$
Then each vector will be evaluated by the objective function and stored in the HM.
Step 3
Improvising a new harmony
In this step, a new random vector $x' = (x'_1, x'_2, \ldots, x'_D)$ is generated considering the self-adaptive solution generation strategy. The procedure is shown in Algorithm 2.
Algorithm 2 Improvisation of a New Harmony of DHS
for i = 1 to D do
  if U(0,1) ≤ hmcr then
    x'_i = x_i^j, where j is a random integer from {1, 2, ..., hms}
    if U(0,1) < par then
      shift = U(−1,1) × fw × (UB_i − LB_i)
      if U(0,1) < 1/N or U(0,1) < cr then
        x'_i = x_i^{best} + shift
      else
        x'_i = x'_i + shift
      end if
    end if
  else
    x'_i = LB_i + U(0,1) × (UB_i − LB_i)
  end if
end for
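A Python sketch of Algorithm 2 follows (illustrative, not the exact Java implementation used in the experiments; N in the 1/N term is read here as the problem dimension, which is an assumption):

```python
import random

def improvise_dhs(hm, best, lb, ub, cr, hmcr=0.9, par=0.3, fw=0.005):
    """One DHS improvisation following Algorithm 2 (illustrative sketch).

    hm   : harmony memory (list of vectors), best : best harmony found so far,
    cr   : current crossover probability from Equation (13).
    """
    d = len(lb)
    new = [0.0] * d
    for i in range(d):
        if random.random() <= hmcr:
            new[i] = hm[random.randrange(len(hm))][i]           # memory consideration
            if random.random() < par:
                shift = random.uniform(-1.0, 1.0) * fw * (ub[i] - lb[i])
                if random.random() < 1.0 / d or random.random() < cr:
                    new[i] = best[i] + shift                    # adjust around the best harmony
                else:
                    new[i] = new[i] + shift                     # adjust the remembered value
        else:
            new[i] = lb[i] + random.random() * (ub[i] - lb[i])  # random selection
    return new
```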
Step 4
Update the harmony memory
If the vector generated in Step 3 is better than the worst vector in HM, replace the worst vector with the generated vector.
If a vector in HM remains unchanged for several iterations (sc = 20), replace it with a new vector generated following Equation (14), provided the new vector is better.
The procedure of step 4 is shown in Algorithm 3.
Algorithm 3 Update the Harmony Memory
if f(x') < f(x^worst) then
  replace x^worst with x'
  set flag(worst) = 0
end if
for r = 1 to hms do
  flag(r) = flag(r) + 1
  if flag(r) > sc then
    for i = 1 to D do
      if U(0,1) < 1/N or U(0,1) < cr then
        shift = U(−1,1) × (x_i^{r1} − x_i^{r2}), where r1, r2 ∈ {1, 2, ..., hms} and r1 ≠ r2
        x'_i = x_i^r + shift
      else
        x'_i = x_i^r
      end if
    end for
    if f(x') < f(x^r) then
      replace x^r with x'
    end if
  end if
end for
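The update of Algorithm 3 can be sketched as follows (illustrative; resetting the stagnation counter after a successful regeneration, and the in-place data layout, are assumptions of the sketch):

```python
import random

def update_memory(hm, fitness, flags, new, f_new, f, cr, sc=20):
    """Harmony-memory update following Algorithm 3 (illustrative sketch, in-place).

    hm      : list of harmonies, fitness : their objective values (to minimize)
    flags   : per-harmony count of iterations without change
    new, f_new : the freshly improvised harmony and its objective value
    f       : objective function used to score regenerated harmonies
    """
    d = len(new)
    worst = max(range(len(hm)), key=lambda r: fitness[r])
    if f_new < fitness[worst]:                 # replace the worst stored harmony
        hm[worst], fitness[worst], flags[worst] = list(new), f_new, 0
    for r in range(len(hm)):
        flags[r] += 1
        if flags[r] > sc:                      # stagnant harmony: DE-style regeneration
            cand = list(hm[r])
            for i in range(d):
                if random.random() < 1.0 / d or random.random() < cr:
                    r1, r2 = random.sample(range(len(hm)), 2)
                    cand[i] = hm[r][i] + random.uniform(-1.0, 1.0) * (hm[r1][i] - hm[r2][i])
            f_cand = f(cand)
            if f_cand < fitness[r]:
                hm[r], fitness[r], flags[r] = cand, f_cand, 0
```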
Step 5
Check the stop criterion
Repeat Step 3 to Step 4 until the termination criterion is satisfied. In this paper, the termination criterion is the maximum number of evaluations.

3.2. DHS-KFC

In this paper, we use the proposed DHS to search for the optimal result of the kernel fuzzy clustering (KFC). When DHS is applied to KFC, we must define the optimization variables. In this paper, the cluster centers $v = \{v_1, \ldots, v_C\}$ are chosen as the optimization variables. The weight vector $w = [w_1, \ldots, w_K]^T$ is given by experts using a method which determines the relative importance of attributes, such as the Fuzzy Analytic Hierarchy Process (FAHP) [22]. The membership matrix $u$ can be obtained from Equation (10). The objective function of KFC is:
$$J = \sum_{i=1}^{C} \sum_{j=1}^{N} u_{ij}^{m} \left\| \Phi(x'_j) - \Phi(v'_i) \right\|^2$$
The DHS algorithm will find the optimized result which minimizes the objective function. The procedure is described as follows (Figure 1):
Step 1
initialize the parameters of the harmony search algorithm, initialize the harmony memory.
Step 2
initialize the parameters of the KFC, the maximum generation N, and the weight vector $w$; set the initial value of the cluster center matrix $v^0$ to a randomly-generated matrix. Then the membership matrix $u^0$ can be obtained from Equation (10).
Step 3
generate a new solution vector based on the harmony search algorithm.
Step 4
obtain the cluster center matrix $v^n$ from the solution vector, calculate the membership matrix $u^n$ based on Equation (10), and then calculate $J^n$ based on Equation (16).
Step 5
compare $J^n$ with $J^{n-1}$. If $J^n$ has remained unchanged for 10 iterations, go to Step 7.
Step 6
set the current iteration $n = n + 1$. If $n > N$, go to Step 7; otherwise, go to Step 3.
Step 7
classify the samples based on their membership.
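Putting the pieces together, the objective that DHS evaluates for each candidate harmony (a flattened set of cluster centers) can be sketched as below; the decision-vector layout and the sigma value are assumptions of the sketch, not part of the original formulation:

```python
import numpy as np

def rbf(a, b, sigma=1.0):
    """Gaussian (RBF) kernel of Equation (9)."""
    return np.exp(-np.sum((a - b) ** 2) / (2.0 * sigma ** 2))

def dhs_kfc_objective(harmony, xw, n_clusters, w, m=1.7, sigma=1.0):
    """Objective J of Equation (16), in the simplified form of Equation (8); a sketch.

    harmony : DHS decision vector, the C cluster centers laid out end to end
    xw      : (N, K) normalized, feature-weighted samples
    w       : (K,) expert-given feature-weight vector
    """
    k_feat = xw.shape[1]
    vw = np.asarray(harmony).reshape(n_clusters, k_feat) * w   # diag(w) v_i
    n = xw.shape[0]
    d = np.array([[max(1.0 - rbf(xw[j], vw[i], sigma), 1e-12) for j in range(n)]
                  for i in range(n_clusters)])
    u = np.zeros_like(d)
    for j in range(n):
        for i in range(n_clusters):
            u[i, j] = np.sum(d[i, j] / d[:, j]) ** (-1.0 / (m - 1.0))
    return 2.0 * float(np.sum(u ** m * d))                     # J = 2 sum u^m (1 - K)
```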

4. Experiments

4.1. Numerical Experiments of DHS

4.1.1. Benchmark Function Tests

To evaluate the performance of the proposed DHS, we chose several well-known benchmark functions, shown in Table 1. These functions are tested by simulating the HS, IHS, GHS, CHS, and the proposed DHS algorithms. Of the functions in Table 1, the Ackley function, Griewank function, Rastrigin function, and Rosenbrock function are multimodal functions. The Sphere function and Schwefel 2.22 function are unimodal functions.
Parameters of the algorithms are shown in Table 2. All of the experiments are performed using a computer with 3.80 GHz AMD Athlon x4 760k with 8 GB RAM. The source code is compiled with Java SE8.
We evaluated the optimization algorithms on 10-dimensional versions of the benchmark functions. The maximum evaluation count (NI) is set to 100,000. Each function is tested 100 times for each algorithm.
Table 3 shows the maximum, minimum, mean, and standard deviation of the errors of the algorithms on each benchmark function. Table 4 shows the distribution of the results obtained by the algorithms. As demonstrated in Table 3 and Table 4, DHS outperforms the other variants of HS on all functions except the Rosenbrock function. In these cases, the minimum, maximum, means, and standard deviations obtained by DHS are smaller than the results obtained by HS and its variants. For the Rosenbrock function, the global minimum lies inside a long, narrow, parabolic-shaped flat valley, which makes convergence to the global minimum difficult. Although the minimum of GHS is smaller than that of DHS, the maximum, mean, and standard deviation show that DHS is more stable than GHS.
We choose one typical unimodal function and one multimodal function to test the convergence speed of the algorithms. Figure 2 shows the convergence of the DHS algorithm compared to the variants of HS. The results clearly show that the DHS algorithm converges faster than other variants of HS.

4.1.2. Sensitivity Analysis of Parameters

In this section, the effect of each parameter in the search process of the DHS algorithm will be discussed.
Similar to the basic HS, the DHS algorithm has parameters that include hms, hmcr, par, and fw. The default parameters of DHS are hms = 50, hmcr = 0.9, par = 0.3, fw = 0.005, and sc = 20. The maximum number of evaluations is set to 100,000. We then vary each parameter and test it on the benchmark functions. Each scenario is run 50 times. Table 5, Table 6, Table 7, Table 8 and Table 9 show the results of the optimization of the benchmark functions.
The results in Table 5 show that the performance of the DHS algorithm is better when sc is smaller. The value of sc determines how frequently the memories in HM are replaced. The results suggest sc values between 10 and 40.
In Table 6 and Table 7, we found that although the values of hms and hmcr affect the optimization results, there is no obvious rule for selecting them. hms values between 30 and 70 are applicable for most cases, while hmcr values between 0.8 and 0.99 are suggested.
In Table 8, DHS performs better when par values are less than 0.3. In Table 9, we found that the algorithm is not sensitive to the parameter fw. A value of 0.005 is applicable for most cases.

4.2. Numerical Experiments of DHS-KFC

To test the effectiveness of the DHS-KFC method, two datasets from the University of California Irvine (UCI) machine learning repository, the wine dataset and the iris dataset, are used here. The iris dataset consists of 50 samples from each of three species of iris. The wine dataset contains the results of a chemical analysis of wines grown in the same region in Italy but derived from three different cultivars. The method is compared with the k-means and fuzzy clustering methods. Classification results are shown in Table 10. The classification rates for both wine and iris were higher for DHS-KFC than for k-means and fuzzy clustering.

4.3. Case Study

East Lake is the largest scenic tourist attraction in Wuhan, China, located on the south bank of the Yangtze River. The East Lake Network covers an area of 436 km², consisting of East Lake, Sha Lake, Yangchun Lake, Yanxi Lake, Yandong Lake, and Bei Lake. A map of the East Lake is shown in Figure 3. In recent years, climate change and human activities have influenced the lakes significantly. Increasing sewage has led to serious eutrophication, and most of the sub-lakes in the East Lake Network are highly polluted. The chemical oxygen demand (COD), total nitrogen (TN), and total phosphorus (TP) of the lakes are shown in Table 11.
In recent years, the government has invested 15.8 billion yuan (RMB) to build the Ecological Water Network Project. The Water Network Connection Project is one of its subprojects; it transfers water from the Yangtze River through diversion channels to improve the water quality in the lakes. In this project, water diversion scheduling (WDS) is a new scheduling approach combined with hydrodynamics. Previous work, including a hydrodynamic simulation model and a water quality model, has already been completed. Using these models, the water quality of the lakes at the terminal time can be simulated. The purpose of WDS is to find a suitable scheme which yields better water quality at lower economic cost. This is a multi-objective problem but, unfortunately, multi-objective evolutionary algorithms (MOEA) cannot be adopted because the water quality simulation is time-consuming. After a variety of simulations, several feasible schemes and their simulation results were obtained. However, the differences among the results are small. Thus, we use cluster analysis to summarize these schemes.
The water diversion scheduling in the East Lake Network is a multi-objective problem. To reduce the construction cost, only the existing diversion channels and pumping stations (Zengjiaxiang, Qingshangang, Jiufengqu, Luojiagang, and Beihu) are considered. Water is brought from the Yangtze River to the East Lake Network through the Zengjiaxiang and Qingshangang channels, while water in the lakes is drained out through Luojiagang and the Beihu pumping stations (Figure 4).
The objective functions are shown as:
$$\begin{cases} I = f_1(q_z, q_q, q_l) \\ C = f_2(q_z, q_q, q_l) \\ Q = f_3(q_z, q_q, q_l) \end{cases}$$
where I is the water quality index vector including TP, TN, and COD information, which is obtained by the water quality model; C is the total amount of diverted water; Q is the economic cost; $q_z$ is the inflow vector of the Zengjiaxiang channel; $q_q$ is the inflow vector of the Qingshangang channel; and $q_l$ is the outflow vector of the Luojiagang channel. The outflow of the Jiufengqu channel and the Beihu pumping stations then follows:
$$q_j = q_b = q_z + q_q - q_l$$
The initial water quality of the six lakes is shown in Table 11. The maximum inflow of the Qingshangang channel is 30 m³/s, the maximum inflow of the Zengjiaxiang channel is 10 m³/s, and the total diversion time is 30 days. The water quality of Yandong Lake has already reached the standard, so it is not considered in this paper. The water diversion scheduling problem is to design the daily flows of the Zengjiaxiang, Qingshangang, and Luojiagang channels so as to improve the water quality as much as possible at minimum cost.
Since the water quality simulation is time-consuming and MOEA cannot be applied to this problem, after some pretreatment we identified a number of feasible schemes, shown in Table 12. We then applied these schemes to the water quality model; the simulation results of the schemes are shown in Table 13. Since the water quality model may have a certain amount of error, we decided to determine a set of good schemes instead of a single one.
In this paper, the goal is to use the DHS-KFC method to determine the good schemes. According to the requirements, we need to divide the schemes into five categories, including excellent, good, average, fair, and poor. Using the kernel cluster method explained in Section 3, we input the scheme results shown in Table 13 as the samples for the clustering, including three water quality parameters (COD, TP, TN) of five lakes, water diversion quantity, and economic cost. The weight vector of water quality, water diversion quantity, and economic cost is defined in Equation (23) according to the advice given by experts.
In this case study, the parameter m = 1.7, and the minimum objective function value is 0.225269. The cluster results are shown in Table 14. The cluster center v, membership matrix u, and weight vector w are shown as:
$$w = [0.04, 0.04, 0.04, 0.04, 0.04, 0.04, 0.04, 0.04, 0.04, 0.04, 0.04, 0.04, 0.04, 0.04, 0.04, 0.2, 0.2]^T$$
$$v = \begin{bmatrix} 0.077 & 0.102 & 0.087 & 1 & 0.084 & 0.001 & 0.087 & 0.120 & 0.030 & 0.364 & 0.198 & 0.118 & 0.981 & 0.846 & 0.995 & 1 & 0.946 \\ 0.459 & 0.461 & 0.417 & 1 & 0 & 0 & 0.457 & 0.459 & 0.388 & 0.533 & 0.307 & 0.322 & 0.772 & 0.883 & 0.878 & 0.648 & 0.645 \\ 0.601 & 0.600 & 0.660 & 1 & 0.979 & 1 & 0.784 & 0.741 & 0.764 & 0.626 & 0.352 & 0.391 & 0.752 & 0.808 & 0.953 & 0.185 & 0.191 \\ 0.761 & 0.554 & 0.769 & 1 & 0.915 & 0.916 & 0.775 & 0.563 & 0.696 & 0.937 & 0.914 & 1 & 0 & 0.107 & 0 & 0.313 & 0.286 \\ 0.223 & 0.243 & 0.230 & 1 & 0.032 & 0.067 & 0.750 & 0.669 & 0.467 & 0.645 & 0.098 & 0 & 0.88 & 0.773 & 1 & 0.48 & 0.479 \end{bmatrix}$$
$$u = \begin{bmatrix} 0.935 & 0.001 & 0.973 & 0.858 & 0.064 & 0.010 & 0.021 & 0.034 & 0.028 & 0.010 & 0.004 & 0.127 & 0.029 & 0.015 & 0.013 & 0.021 & 0.005 & 0.059 \\ 0.035 & 0.003 & 0.015 & 0.102 & 0.869 & 0.842 & 0.044 & 0.377 & 0.125 & 0.026 & 0.014 & 0.697 & 0.815 & 0.042 & 0.044 & 0.923 & 0.012 & 0.525 \\ 0.005 & 0.982 & 0.002 & 0.005 & 0.006 & 0.006 & 0.077 & 0.044 & 0.018 & 0.873 & 0.915 & 0.019 & 0.015 & 0.824 & 0.780 & 0.005 & 0.035 & 0.106 \\ 0.004 & 0.006 & 0.001 & 0.004 & 0.004 & 0.003 & 0.813 & 0.018 & 0.008 & 0.048 & 0.045 & 0.017 & 0.012 & 0.040 & 0.115 & 0.003 & 0.935 & 0.084 \\ 0.019 & 0.006 & 0.007 & 0.028 & 0.054 & 0.137 & 0.042 & 0.525 & 0.819 & 0.040 & 0.019 & 0.137 & 0.127 & 0.077 & 0.047 & 0.046 & 0.011 & 0.223 \end{bmatrix}$$
Using the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) model [22,23], the similarity values of the cluster centers are shown as:
$$CC = \begin{bmatrix} 0.950 & 0.592 & 0.13 & 0.052 & 0.816 \end{bmatrix}$$
The schemes in cluster I are considered better than the others.
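For reference, closeness coefficients of this kind can be computed with a generic TOPSIS routine such as the following sketch (a standard formulation, not necessarily the exact variant of [22,23]; the criteria directions must be supplied by the analyst):

```python
import numpy as np

def topsis_closeness(matrix, weights, benefit):
    """Closeness coefficients of a standard TOPSIS model (illustrative sketch).

    matrix  : (n_alternatives, n_criteria) decision matrix, e.g. the cluster centers
    weights : (n_criteria,) criteria weights
    benefit : boolean array, True where larger values are better
    """
    norm = matrix / np.maximum(np.sqrt((matrix ** 2).sum(axis=0)), 1e-12)  # vector normalization
    v = norm * weights                                          # weighted normalized matrix
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))     # positive ideal solution
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))      # negative ideal solution
    d_pos = np.sqrt(((v - ideal) ** 2).sum(axis=1))
    d_neg = np.sqrt(((v - anti) ** 2).sum(axis=1))
    return d_neg / (d_pos + d_neg)                              # closeness coefficient CC
```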

5. Conclusions

The main contributions of this work are (i) a novel modification of the harmony search algorithm is proposed to deal with the optimization problem; (ii) a kernel fuzzy clustering algorithm with the harmony search algorithm is proposed; and (iii) the methodology is adopted for a water diversion scheduling assessment in East Lake.
In order to demonstrate the performance of the proposed differential harmony search algorithm, several tests have been conducted to compare it with other methods. The results show that the modification effectively improves the convergence of the harmony search algorithm. Finally, the combination of KFC and DHS is applied to the water diversion scheduling assessment in East Lake. It is efficient in clustering multi-dimensional data, and the result shows that the methodology is reasonable and reliable. However, simulation results show that DHS still suffers from local convergence when dealing with some functions. Our future work will aim to overcome this shortcoming.

Acknowledgments

This work is supported by The State Key Program of National Natural Science of China (Grant No. 51239004). We also acknowledge the entire development team, without whose help this research could not have been achieved.

Author Contributions

Yu Feng wrote the DHS-KFC algorithm. Muhammad Tayyab performed the experiments. Final checks were done by Jianzhong Zhou. All authors have read and approved the final manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Bianchi, L.; Dorigo, M.; Gambardella, L.M.; Gutjahr, W.J. A survey on metaheuristics for stochastic combinatorial optimization. Nat. Comput. 2009, 8, 239–287. [Google Scholar] [CrossRef]
  2. Blum, C.; Roli, A. Metaheuristics in combinatorial optimization: Overview and conceptual comparison. ACM Comput. Surv. 2003, 35, 268–308. [Google Scholar] [CrossRef]
  3. Almasi, M.H.; Sadollah, A.; Mounes, S.M.; Karim, M.R. Optimization of a transit services model with a feeder bus and rail system using metaheuristic algorithms. J. Comput. Civ. Eng. 2015, 29, 1–4. [Google Scholar] [CrossRef]
  4. Yazdi, J.; Sadollah, A.; Lee, E.H.; Yoo, D.; Kim, J.H. Application of multi-objective evolutionary algorithms for the rehabilitation of storm sewer pipe networks. J. Flood Risk Manag. 2015. [Google Scholar] [CrossRef]
  5. Yoo, D.G. Improved mine blast algorithm for optimal cost design of water distribution systems. Eng. Optim. 2014, 47, 1602–1618. [Google Scholar]
  6. Eskandar, H.; Sadollah, A.; Bahreininejad, A. Weight optimization of truss structures using water cycle algorithm. Int. J. Optim. Civ. Eng. 2013, 3, 115–129. [Google Scholar]
  7. Sadollah, A.; Eskandar, H.; Yoo, D.G.; Kim, J.H. Approximate solving of nonlinear ordinary differential equations using least square weight function and metaheuristic algorithms. Eng. Appl. Artif. Intell. 2015, 40, 117–132. [Google Scholar] [CrossRef]
  8. Glover, F.W.; Kochenberger, G.A. Handbook of Metaheuristics; Springer: New York, NY, USA, 2003; pp. 293–377. [Google Scholar]
  9. Geem, Z.W.; Kim, J.H.; Loganathan, G. A new heuristic optimization algorithm: Harmony search. Simulation 2001, 76, 60–68. [Google Scholar] [CrossRef]
  10. Wang, Y.; Liu, Y.; Feng, L.; Zhu, X. Novel feature selection method based on harmony search for email classification. Knowl. Based Syst. 2014, 73, 311–323. [Google Scholar] [CrossRef]
  11. Zammori, F.; Braglia, M.; Castellano, D. Harmony search algorithm for single-machine scheduling problem with planned maintenance. Comput. Ind. Eng. 2014, 76, 333–346. [Google Scholar] [CrossRef]
  12. Mahdavi, M.; Fesanghary, M.; Damangir, E. An improved harmony search algorithm for solving optimization problems. Appl. Math. Comput. 2007, 188, 1567–1579. [Google Scholar] [CrossRef]
  13. Omran, M.G.H.; Mahdavi, M. Global-best harmony search. Appl. Math. Comput. 2008, 198, 643–656. [Google Scholar] [CrossRef]
  14. Alatas, B. Chaotic harmony search algorithms. Appl. Math. Comput. 2010, 216, 2687–2699. [Google Scholar] [CrossRef]
  15. Jain, A.K.; Dubes, R.C. Algorithms for Clustering Data; Prentice Hall: Englewood Cliffs, NJ, USA, 1988. [Google Scholar]
  16. Girolami, M. Mercer kernel-based clustering in feature space. IEEE Trans. Neural Netw. 2002, 13, 780–784. [Google Scholar] [CrossRef] [PubMed]
  17. Shen, H.; Yang, J.; Wang, S.; Liu, X. Attribute weighted mercer kernel based fuzzy clustering algorithm for general non-spherical datasets. Soft Comput. 2006, 10, 1061–1073. [Google Scholar] [CrossRef]
  18. Yang, M.-S.; Tsai, H.-S. A Gaussian kernel-based fuzzy c-means algorithm with a spatial bias correction. Pattern Recognit. Lett. 2008, 29, 1713–1725. [Google Scholar] [CrossRef]
  19. Ferreira, M.R.P.; de Carvalho, F.A.T. Kernel fuzzy c-means with automatic variable weighting. Fuzzy Sets Syst. 2014, 237, 1–46. [Google Scholar] [CrossRef]
  20. Graves, D.; Pedrycz, W. Kernel-based fuzzy clustering and fuzzy clustering: A comparative experimental study. Fuzzy Sets Syst. 2010, 161, 522–543. [Google Scholar] [CrossRef]
  21. Xing, H.-J.; Ha, M.-H. Further improvements in feature-weighted fuzzy c-means. Inf. Sci. 2014, 267, 1–15. [Google Scholar] [CrossRef]
  22. Chen, Z.P.; Yang, W. An MAGDM based on constrained FAHP and FTOPSIS and its application to supplier selection. Math. Comput. Model. 2011, 54, 2802–2815. [Google Scholar] [CrossRef]
  23. Singh, R.K.; Benyoucef, L. A fuzzy TOPSIS based approach for e-sourcing. Eng. Appl. Artif. Intell. 2011, 24, 437–448. [Google Scholar] [CrossRef]
Figure 1. Framework of the proposed method.
Figure 2. Convergence characteristics: (a) Ackley function; (b) Sphere function.
Figure 3. Map of the East Lake Network.
Figure 4. Diversion channels of the East Lake Network.
Table 1. Test functions.
Function | Formula | Search Domain | Optimum
Ackley function | $f(x_1, \ldots, x_n) = -20 \exp\left(-0.2 \sqrt{\frac{1}{n} \sum_{i=1}^{n} x_i^2}\right) - \exp\left(\frac{1}{n} \sum_{i=1}^{n} \cos(2\pi x_i)\right) + e + 20$ | $-40 \le x_i \le 40$ | $f(0, \ldots, 0) = 0$
Griewank function | $f(x_1, \ldots, x_n) = 1 + \frac{1}{4000} \sum_{i=1}^{n} x_i^2 - \prod_{i=1}^{n} \cos\left(\frac{x_i}{\sqrt{i}}\right)$ | $-600 \le x_i \le 600$ | $f(0, \ldots, 0) = 0$
Rastrigin function | $f(x_1, \ldots, x_n) = A n + \sum_{i=1}^{n} \left[ x_i^2 - A \cos(2\pi x_i) \right]$, $A = 10$ | $-5.12 \le x_i \le 5.12$ | $f(0, \ldots, 0) = 0$
Rosenbrock function | $f(x_1, \ldots, x_n) = \sum_{i=1}^{n-1} \left[ 100 (x_{i+1} - x_i^2)^2 + (x_i - 1)^2 \right]$ | $x_i$ | $f(1, \ldots, 1) = 0$
Sphere function | $f(x_1, \ldots, x_n) = \sum_{i=1}^{n} x_i^2$ | $-5.12 < x_i < 5.12$ | $f(0, \ldots, 0) = 0$
Schwefel 2.22 function | $f(x_1, \ldots, x_n) = \sum_{i=1}^{n} |x_i| + \prod_{i=1}^{n} |x_i|$ | $-10 < x_i < 10$ | $f(0, \ldots, 0) = 0$
Table 2. Parameters of the algorithms.
Algorithm | hms | hmcr | par | fw | NI
HS | 50 | 0.9 | 0.3 | 0.005 | 100,000
IHS | 50 | 0.9 | Max: 0.5, Min: 0.1 | Max: 0.01, Min: 0.001 | 100,000
GHS | 50 | 0.9 | 0.3 | 0.005 | 100,000
CHS | 50 | 0.9 | 0.3 | 0.005 | 100,000
DHS | 50 | 0.9 | 0.3 | 0.005 | 100,000
Table 3. Errors of test results of the algorithms.
Function | Statistic | HS | IHS | GHS | CHS | DHS
Ackley function | Mean | 8.49 × 10−3 | 6.16 × 10−3 | 3.95 × 10−3 | 1.12 × 10−2 | 1.57 × 10−13
 | Min | 3.29 × 10−3 | 2.44 × 10−3 | 1.35 × 10−5 | 3.56 × 10−3 | 4.57 × 10−14
 | Max | 1.62 × 10−2 | 1.16 × 10−2 | 2.10 × 10−2 | 2.54 × 10−2 | 4.69 × 10−13
 | Stdv | 2.43 × 10−3 | 1.67 × 10−3 | 3.48 × 10−3 | 3.79 × 10−3 | 6.78 × 10−14
Griewank function | Mean | 1.58 × 10−2 | 1.46 × 10−2 | 5.00 × 10−4 | 1.37 × 10−2 | 5.45 × 10−10
 | Min | 8.62 × 10−5 | 1.92 × 10−5 | 1.59 × 10−7 | 9.62 × 10−5 | 0
 | Max | 8.13 × 10−2 | 7.38 × 10−2 | 1.13 × 10−2 | 9.15 × 10−2 | 1.69 × 10−8
 | Stdv | 1.81 × 10−2 | 1.76 × 10−2 | 1.24 × 10−3 | 1.54 × 10−2 | 2.06 × 10−9
Rastrigin function | Mean | 1.49 × 10−4 | 8.20 × 10−5 | 7.44 × 10−5 | 2.45 × 10−4 | 1.02 × 10−12
 | Min | 2.66 × 10−5 | 1.35 × 10−5 | 1.82 × 10−8 | 4.33 × 10−5 | 0
 | Max | 3.30 × 10−4 | 2.01 × 10−4 | 1.17 × 10−3 | 9.92 × 10−4 | 6.20 × 10−11
 | Stdv | 7.13 × 10−5 | 4.22 × 10−5 | 1.64 × 10−4 | 1.42 × 10−4 | 6.17 × 10−12
Rosenbrock function | Mean | 2.10 | 1.98 | 1.77 | 2.32 | 7.16 × 10−1
 | Min | 5.65 × 10−3 | 1.00 × 10−2 | 1.80 × 10−6 | 1.04 × 10−2 | 6.22 × 10−4
 | Max | 5.32 | 5.34 | 8.37 | 6.08 | 4.64
 | Stdv | 1.57 | 1.51 | 3.10 | 1.69 | 9.75 × 10−1
Sphere function | Mean | 7.33 × 10−7 | 4.30 × 10−7 | 3.35 × 10−7 | 1.01 × 10−6 | 1.85 × 10−28
 | Min | 1.40 × 10−7 | 1.02 × 10−7 | 3.00 × 10−14 | 2.21 × 10−7 | 2.92 × 10−29
 | Max | 2.03 × 10−6 | 1.43 × 10−6 | 3.58 × 10−6 | 3.55 × 10−6 | 7.97 × 10−28
 | Stdv | 3.83 × 10−7 | 2.24 × 10−7 | 6.13 × 10−7 | 6.07 × 10−7 | 1.72 × 10−28
Schwefel 2.22 function | Mean | 3.07 × 10−3 | 2.41 × 10−3 | 3.23 × 10−3 | 3.68 × 10−3 | 3.12 × 10−12
 | Min | 1.29 × 10−3 | 1.12 × 10−3 | 1.62 × 10−6 | 1.23 × 10−3 | 7.35 × 10−13
 | Max | 5.12 × 10−3 | 3.91 × 10−3 | 1.32 × 10−2 | 6.62 × 10−3 | 1.33 × 10−11
 | Stdv | 9.05 × 10−4 | 5.91 × 10−4 | 2.86 × 10−3 | 1.08 × 10−3 | 1.99 × 10−12
Table 4. Distribution of error of the algorithms.
Function | Precision | HS | IHS | GHS | CHS | DHS
Ackley function<1 × 10−1100%100%100%100%100%
<1 × 10−274%99%93%40%100%
<1 × 10−30%0%23%0%100%
<1 × 10−40%0%2%0%100%
<1 × 10−50%0%0%0%100%
<1 × 10−60%0%0%0%100%
<1 × 10−70%0%0%0%100%
Griewank function<1 × 10−1100%100%100%100%100%
<1 × 10−232%44%99%32%100%
<1 × 10−330%44%84%27%100%
<1 × 10−41%7%47%1%100%
<1 × 10−50%0%17%0%100%
<1 × 10−60%0%5%0%100%
<1 × 10−70%0%0%0%100%
Rastrigin function<1 × 10−1100%100%100%100%100%
<1 × 10−2100%100%100%100%100%
<1 × 10−3100%100%99%100%100%
<1 × 10−425%69%81%11%100%
<1 × 10−50%0%43%0%100%
<1 × 10−60%0%14%0%100%
<1 × 10−70%0%4%0%100%
Rosenbrock function<1 × 10−16%9%49%4%34%
<1 × 10−22%0%29%0%10%
<1 × 10−30%0%6%0%1%
<1 × 10−40%0%3%0%0%
<1 × 10−50%0%1%0%0%
<1 × 10−60%0%0%0%0%
<1 × 10−70%0%0%0%0%
Sphere function<1 × 10−1100%100%100%100%100%
<1 × 10−2100%100%100%100%100%
<1 × 10−3100%100%100%100%100%
<1 × 10−4100%100%100%100%100%
<1 × 10−5100%100%100%100%100%
<1 × 10−677%98%89%59%100%
<1 × 10−70%0%49%0%100%
Schwefel 2.22 function<1 × 10−1100%100%100%100%100%
<1 × 10−2100%100%98%100%100%
<1 × 10−30%0%23%0%100%
<1 × 10−40%0%4%0%100%
<1 × 10−50%0%1%0%100%
<1 × 10−60%0%0%0%100%
<1 × 10−70%0%0%0%100%
Table 5. The effect of sc on the means and standard deviations.
Function | Statistic | sc = 20 | sc = 40 | sc = 60 | sc = 80 | sc = 100
AckleyMean9.12 × 10−132.81 × 10−113.87 × 10−103.01 × 10−91.33 × 10−8
Stdv3.38 × 10−131.15 × 10−111.48 × 10−101.18 × 10−94.91 × 10−9
GriewankMean4.65 × 10−62.63 × 10−57.38 × 10−43.13 × 10−33.80 × 10−3
Stdv2.91 × 10−51.84 × 10−43.86 × 10−37.26 × 10−39.49 × 10−3
RastriginMean5.97 × 10−150002.84 × 10−16
Stdv4.22 × 10−140002.01 × 10−15
RosenbrockMean1.321.551.851.491.72
Stdv1.361.401.641.291.57
SphereMean7.18 × 10−279.36 × 10−241.56 × 10−219.81 × 10−201.92 × 10−18
Stdv4.38 × 10−278.31 × 10−249.09 × 10−226.95 × 10−201.42 × 10−18
Schwefel 2.22Mean4.38 × 10−123.82 × 10−112.53 × 10−101.09 × 10−93.66 × 10−9
Stdv1.99 × 10−121.76 × 10−119.65 × 10−114.93 × 10−101.64 × 10−9
Table 6. The effect of hms on means and standard deviations.
Function | Statistic | hms = 10 | hms = 30 | hms = 50 | hms = 70 | hms = 90
AckleyMean1.71 × 10−113.11 × 10−153.76 × 10−131.58 × 10−91.68 × 10−7
Stdv3.45 × 10−133.11 × 10−151.66 × 10−136.43 × 10−108.20 × 10−8
GriewankMean6.58 × 10−22.47 × 10−49.37 × 10−64.32 × 10−72.64 × 10−5
Stdv7.93 × 10−21.74 × 10−36.62 × 10−57.10 × 10−71.92 × 10−5
RastriginMean0002.72 × 10−62.46 × 10−6
Stdv0001.92 × 10−51.74 × 10−5
RosenbrockMean1.941.388.70 × 10−17.11 × 10−11.09
Stdv1.511.351.146.14 × 10−16.05 × 10−1
SphereMean1.16 × 10−896.20 × 10−431.97 × 10−281.96 × 10−212.62 × 10−17
Stdv2.96 × 10−898.53 × 10−431.93 × 10−281.36 × 10−211.20 × 10−17
Schwefel 2.22Mean1.57 × 10−419.47 × 10−192.92 × 10−123.76 × 10−92.27 × 10−7
Stdv4.08 × 10−416.63 × 10−191.37 × 10−121.78 × 10−98.24 × 10−8
Table 7. The effect of hmcr on the means and standard deviations.
Function | Statistic | hmcr = 0.7 | hmcr = 0.8 | hmcr = 0.9 | hmcr = 0.99
AckleyMean3.95 × 10−105.27 × 10−121.73 × 10−136.09 × 10−15
Stdv5.56 × 10−102.04 × 10−126.69 × 10−141.81 × 10−15
GriewankMean9.08 × 10−57.97 × 10−75.26 × 10−92.46 × 10−4
Stdv1.34 × 10−41.13 × 10−63.17 × 10−81.74 × 10−3
RastriginMean7.39 × 10−145.23 × 10−142.58 × 10−84.18 × 10−1
Stdv3.73 × 10−133.70 × 10−131.82 × 10−76.39 × 10−1
RosenbrockMean7.88 × 10−15.49 × 10−11.261.72
Stdv4.52 × 10−13.80 × 10−11.431.54
SphereMean8.59 × 10−221.09 × 10−252.17 × 10−287.11 × 10−32
Stdv5.18 × 10−218.31 × 10−261.65 × 10−286.02 × 10−32
Schwefel 2.22Mean3.53 × 10−85.96 × 10−103.10 × 10−121.43 × 10−14
Stdv1.88 × 10−84.04 × 10−102.01 × 10−126.77 × 10−15
Table 8. The effect of par on the means and standard deviations.
Function | Statistic | par = 0.1 | par = 0.3 | par = 0.5 | par = 0.7 | par = 0.9
AckleyMean3.04 × 10−151.54 × 10−135.74 × 10−105.64 × 10−93.02 × 10−8
Stdv5.02 × 10−166.46 × 10−141.21 × 10−94.21 × 10−94.19 × 10−8
GriewankMean1.03 × 10−32.23 × 10−93.36 × 10−65.70 × 10−51.11 × 10−4
Stdv3.37 × 10−31.35 × 10−81.06 × 10−51.83 × 10−42.98 × 10−4
RastriginMean01.64 × 10−71.88 × 10−58.43 × 10−24.59 × 10−1
Stdv01.16 × 10−69.86 × 10−52.67 × 10−16.11 × 10−1
RosenbrockMean3.36 × 10−11.051.141.892.18
Stdv3.88 × 10−11.171.351.451.54
SphereMean1.03 × 10−401.82 × 10−281.92 × 10−212.81 × 10−197.43 × 10−19
Stdv1.08 × 10−401.59 × 10−284.61 × 10−213.63 × 10−192.10 × 10−18
Schwefel 2.22Mean7.58 × 10−212.85 × 10−122.21 × 10−86.46 × 10−87.54 × 10−8
Stdv5.31 × 10−211.62 × 10−121.07 × 10−82.73 × 10−83.71 × 10−8
Table 9. The effect of fw on the means and standard deviations.
Function | Statistic | fw = 0.001 | fw = 0.004 | fw = 0.007 | fw = 0.01
AckleyMean3.11 × 10−152.97 × 10−153.04 × 10−152.82 × 10−15
Stdv07.03 × 10−165.02 × 10−169.74 × 10−16
GriewankMean2.24 × 10−32.41 × 10−31.24 × 10−32.95 × 10−3
Stdv4.77 × 10−35.48 × 10−33.73 × 10−35.86 × 10−3
RastriginMean1.90 × 10−5000
Stdv1.34 × 10−4000
RosenbrockMean4.10 × 10−13.37 × 10−14.42 × 10−13.03 × 10−1
Stdv6.34 × 10−15.74 × 10−18.63 × 10−14.94 × 10−1
SphereMean8.78 × 10−582.91 × 10−572.44 × 10−561.23 × 10−55
Stdv1.73 × 10−574.10 × 10−573.88 × 10−562.55 × 10−55
Schwefel 2.22Mean7.58 × 10−332.49 × 10−327.39 × 10−322.05 × 10−31
Stdv9.03 × 10−331.94 × 10−326.15 × 10−321.92 × 10−31
Table 10. Results of the clustering method.
Dataset | Class | Actual | k-Means | Fuzzy Cluster | DHS-KFC
Wine data set | Class 1 | 59 | 61 | 63 | 59
 | Class 2 | 71 | 63 | 63 | 70
 | Class 3 | 48 | 54 | 52 | 49
 | Error rates (%) | | 4.49% | 4.49% | 0.56%
Iris data set | Class 1 | 50 | 50 | 50 | 50
 | Class 2 | 50 | 61 | 60 | 58
 | Class 3 | 50 | 39 | 40 | 42
 | Error rates (%) | | 11.33% | 12.00% | 10.67%
Table 11. Initial water qualities of the six lakes.
Lake | COD (mg/L) | TN (mg/L) | TP (mg/L)
East Lake | 24 | 2.32 | 0.196
Sha Lake | 50 | 6.11 | 0.225
Yangchun Lake | 26 | 1.14 | 0.085
Yanxi Lake | 34 | 3.82 | 0.2
Yandong Lake | 10 | 0.7 | 0.05
Bei Lake | 32 | 3.81 | 0.122
Table 12. Feasible schemes.
No. | 1–5 Days | 6–10 Days | 11–15 Days | 16–20 Days | 21–25 Days | 26–30 Days
(each period lists qz, qq, ql)
110010103040103020103020103020103020
210010103040522.513.8522.513.8522.513.8
310010103040103040103020103020103020
4100101030401030407.53018.87.53018.87.53018.8
510010103040103040527.516.2527.516.2527.516.2
610010103040103040522.513.8522.513.8522.513.8
7100101030401030401030407.52013.8
81001010253552515525155251552515
910010102535102015102015102015102015
101001010253552012.552012.552012.5
11100101025351025355251552515
12100101025351025351025357.527.517.57.527.517.5
13100101025351025351025355251552515
1410010530351022.516.21022.516.21022.516.2
151001053035530357.527.517.57.527.517.5
161001053035530357.527.517.57.527.517.57.527.517.5
1710010530355303553017.552515
1810010530355303553035522.513.8522.513.8
Table 13. Results of the schemes shown in Table 12.
No. | Sha Lake | Yangchun Lake | East Lake | Yandong Lake | Bei Lake | Water | Cost
(each lake column lists TP, TN, COD)
10.1082.38617.5690.0660.99710.6020.162.13619.430.1983.29430.0230.1653.90732.889072226.8
20.1262.94222.0510.0660.99810.7730.1722.2320.8950.23.36630.6440.1613.91232.935724143.1
30.1082.38617.5690.0660.99710.6020.1612.1419.4380.23.36630.5660.1613.91732.9439072226.8
40.1152.59319.4130.0660.99710.6020.1622.16119.6680.23.36530.5660.1613.91732.9438748218.7
50.122.75120.4530.0660.99710.6020.1652.17719.950.23.36730.5810.1613.91732.9438100202.5
60.122.75120.4530.0660.99710.6020.1682.20920.3210.23.36930.6070.1613.91732.9437452186.3
70.1162.62419.6340.0660.99810.7730.1682.21120.4820.2023.62732.4890.1443.87732.5976804170.1
80.1232.86521.3210.0660.99710.6020.172.22220.5750.1993.30130.120.1653.90732.8817128178.2
90.1082.38617.5690.0660.99710.6020.1682.22120.4310.23.30630.1840.1653.90732.8827128178.2
100.1262.94222.0510.0660.99810.7730.1752.2621.260.23.36930.6840.1613.91232.935184129.6
110.1222.82721.1740.0660.99810.7730.172.21720.6590.2013.46931.3820.1553.90632.876048151.2
120.1132.53618.8930.0660.99710.6020.1652.18519.9880.2013.47331.3790.1563.9132.8827992199.8
130.1172.66819.8230.0660.99710.6020.1672.220.240.2013.47431.3830.1563.9132.8827560189
140.1152.6219.6270.0660.99810.7730.1692.2320.7190.23.36630.6480.1613.91232.936156153.9
150.1262.93222.2610.0660.99810.7730.1692.20820.5780.2013.46831.3680.1553.90632.876480162
160.1222.81821.3280.0660.99710.6020.1672.20320.2360.23.36730.5830.1613.91732.9437992199.8
170.1343.20424.4880.0660.99810.7730.1722.20720.7520.2023.62732.4880.1443.87732.5976264156.6
180.1333.13723.920.0660.99710.6020.1712.2220.6320.2013.47331.3750.1563.9132.8827344183.6
Table 14. Results of the proposed method.
No. | Classification | No. | Classification | No. | Classification
1 | I | 7 | IV | 13 | II
2 | III | 8 | V | 14 | III
3 | I | 9 | V | 15 | III
4 | I | 10 | III | 16 | II
5 | II | 11 | III | 17 | IV
6 | II | 12 | II | 18 | II
