Article

Clustering Using an Improved Krill Herd Algorithm

1 School of Information Science and Technology, Jinan University, Guangzhou 510630, China
2 Department of Information Science and Technology, Jinan University, Guangzhou 510630, China
* Author to whom correspondence should be addressed.
Algorithms 2017, 10(2), 56; https://doi.org/10.3390/a10020056
Submission received: 27 March 2017 / Revised: 6 May 2017 / Accepted: 12 May 2017 / Published: 17 May 2017

Abstract: In recent years, metaheuristic algorithms have been widely used to solve clustering problems because of their good performance and application effects. The krill herd algorithm (KHA) is a new and effective algorithm for solving optimization problems, based on imitating the behavior of individual krill, and it has been shown to perform better than other swarm intelligence algorithms. However, it still has some weaknesses. In this paper, an improved krill herd algorithm (IKHA) is studied. Modified mutation operators and an update mechanism are applied to improve global optimization; the proposed IKHA overcomes the weaknesses of KHA and performs better on optimization problems. KHA and IKHA are then applied to the clustering problem, where they are used to find appropriate cluster centers. Experiments were conducted on University of California Irvine (UCI) standard datasets, and the results show that the IKHA clustering algorithm is the most effective.

1. Introduction

Clustering is an important research direction in data analysis. Because this method makes no statistical hypotheses about the data, it is known as unsupervised learning in pattern recognition and data mining. Clustering is mainly used in text clustering [1], search engine optimization [2], landmark selection [3], face recognition [4], and medicine and biology [5].
Clustering is one of the most difficult and challenging problems in machine learning. Clustering algorithms are roughly divided into three main types, namely, overlapping (so-called non-exclusive) [6], partitional [7], and hierarchical [8]. Regardless of the type of clustering algorithm applied, the main goal is to maximize homogeneity within each cluster and heterogeneity among different clusters. In other words, objects that belong to the same cluster should be more similar to each other than objects that belong to different clusters.
Although existing algorithms have their own advantages, they are sensitive to initialization parameters, and finding their optimal clusters is difficult. In recent years, optimization methods inspired by natural phenomena have provided new ways to solve clustering problems. A swarm of individuals is employed to explore the search space and obtain an optimal solution, as in genetic algorithms (GA) [9], particle swarm optimization (PSO) [10], and ant colony optimization (ACO) [11], among others. Other novel swarm intelligence algorithms have also been proposed, such as harmony search (HS) [12], the honeybee mating optimization algorithm (HBMO) [13], the artificial fish swarm algorithm (AFSA) [14], artificial bee colony (ABC) [15], the firefly algorithm (FA) [16], the monkey algorithm (MA) [17], the bat algorithm (BA) [18], and many others.
The krill herd algorithm (KHA) [19] is a novel swarm algorithm based on simulating the herding behavior of krill individuals, in which the minimum distances of each krill from food and from the highest density of the herd are taken as the objective functions for krill movement. Although proposed only recently, KHA has quickly been applied in multiple scenarios. Amudhavel et al. [20] used KHA to optimize a peer-to-peer network, and KHA has also been applied to smartphone ad hoc networks [21]. Kowalski et al. [22] used KHA for training an artificial neural network. In [23], KHA demonstrated better performance than well-known algorithms such as PSO and GA. Gandomi and Alavi [19] showed that KHA with a crossover operator is superior to other well-known algorithms, including differential evolution (DE) [24], biogeography-based optimization (BBO) [25], and ACO.
Although KHA outperforms many other swarm intelligence algorithms [19], its global search ability is limited [26]. In [27], a free search KHA for function optimization was proposed to improve the feasibility and effectiveness of KHA. An improved KHA with a linearly decreasing step was proposed by Li et al. [28]. In this paper, we present a new KHA that improves the original genetic operator by modifying the mutation mechanism and adding a new update scheme.
In this paper, we treat clustering as an optimization problem and apply KHA to solve it. In other words, each krill individual represents K cluster centers (K is the number of clusters), and KHA is used to search for the optimal clustering centers. According to the minimum-distance principle, all objects of the dataset are then assigned to clusters, which yields the clustering result.
The rest of the paper is organized as follows: In Section 2, details of KHA are introduced. Section 3 explains the improved KHA. In Section 4, clustering with the IKHA approach is proposed. Section 5 presents the experimental results of our proposed algorithm. Finally, the summary and future work are provided in Section 6.

2. Introduction to Krill Herd Algorithm

KHA is based on the simulation of the herding of krill swarms in response to specific biological and environmental processes. Nearly all necessary coefficients for KHA are obtained from real-world empirical studies [19].
In nature, the fitness of an individual is judged by its distance to food and to the region of maximum density of the krill population. Thus, based on this imagined distance, the fitness is taken to be the value of the objective function. Within a two-dimensional space, the position of an individual krill varies with time according to the following three actions [19]:
  • movement induced by other krill individuals;
  • foraging activity; and
  • random diffusion.
KHA uses a Lagrangian model to extend the search space to an n-dimensional decision space:

\frac{dX_i}{dt} = N_i + F_i + D_i \quad (1)

where N_i is the motion of the ith krill induced by other krill individuals, F_i represents the foraging activity, and D_i denotes the physical diffusion of the krill individual.
The explanations for basic KHA are given as follows:
(1) Motion induced by other krill individuals
According to theoretical arguments, krill individuals try to maintain a high density and move due to their mutual effects. The direction of the induced motion, \alpha_i, is estimated from the local swarm density (local effect), the target swarm density (target effect), and the repulsive swarm density (repulsive effect). For a krill individual, the induced motion can be defined as:

N_i^{new} = N^{max}\alpha_i + \omega_n N_i^{old} \quad (2)

where:

\alpha_i = \alpha_i^{local} + \alpha_i^{target} \quad (3)

N^{max} is the maximum induced speed, \omega_n \in [0, 1] is the inertia weight of the induced motion, N_i^{old} is the last induced motion, \alpha_i^{local} is the local effect provided by the neighbors, and \alpha_i^{target} is the target direction effect provided by the best krill individual. Based on measured values of the maximum induced speed, N^{max} is taken as 0.01 ms^{-1} in [19].
Different strategies can be used for choosing the neighbors. Following the actual behavior of krill individuals, a sensing distance (d_s) is determined around each krill individual, and the neighbors within it are found. The sensing distance can be determined by different heuristic methods; here, it is computed for each iteration as:

d_{s,i} = \frac{1}{5N}\sum_{j=1}^{N}\|X_i - X_j\| \quad (4)

where d_{s,i} is the sensing distance of the ith krill individual, N is the number of krill individuals, and X_i is the position of the ith krill. If the distance between X_i and X_j is less than the sensing distance d_{s,i}, then X_j is a neighbor of X_i.
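For concreteness, the following minimal NumPy sketch implements Equation (4) and the neighbor rule; the function names and array layout (one krill per row) are our own illustrative choices, not from the paper:

```python
import numpy as np

def sensing_distance(X, i):
    """Equation (4): d_{s,i} = (1 / 5N) * sum over j of ||X_i - X_j||."""
    dists = np.linalg.norm(X - X[i], axis=1)  # distances to every krill (self = 0)
    return dists.sum() / (5 * len(X))

def neighbors(X, i):
    """Indices j != i whose distance to krill i is below d_{s,i}."""
    dists = np.linalg.norm(X - X[i], axis=1)
    d_s = sensing_distance(X, i)
    return [j for j in range(len(X)) if j != i and dists[j] < d_s]
```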
(2) Foraging motion
This movement follows two criteria: the location of the food, and previous experience about the food location. For the ith krill, the foraging motion can be expressed as:

F_i = V_f\beta_i + \omega_f F_i^{old} \quad (5)

where:

\beta_i = \beta_i^{food} + \beta_i^{best} \quad (6)

V_f is the foraging speed, \omega_f \in [0, 1] is the inertia weight of the foraging motion, \beta_i^{food} is the food attraction, and \beta_i^{best} is the effect of the best fitness found by the ith krill so far. Based on measured values of the foraging speed, V_f is taken as 0.02 ms^{-1} in [19].
The food effect is defined in terms of the food location. The center of food must be found and then used to formulate the food attraction. It cannot be determined exactly, but it can be estimated. In this study, the virtual center of food concentration is estimated from the fitness distribution of the krill individuals, inspired by the "center of mass" concept. The center of food for each iteration is formulated as:

X^{food} = \frac{\sum_{i=1}^{N} X_i/K_i}{\sum_{i=1}^{N} 1/K_i} \quad (7)

where K_i is the objective function value of the ith krill individual.
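A minimal sketch of Equation (7), assuming minimization with strictly nonzero objective values (variable names are illustrative):

```python
import numpy as np

def food_center(X, K):
    """Equation (7): fitness-weighted 'center of mass' of the swarm.
    X: (N, d) krill positions; K: (N,) objective values, assumed nonzero."""
    w = 1.0 / np.asarray(K, dtype=float)          # smaller (better) K -> larger weight
    return (w[:, None] * X).sum(axis=0) / w.sum()
```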
(3) Physical diffusion
The physical diffusion of the krill individuals is treated as a random process. It can be expressed in terms of a maximum diffusion speed and a random directional vector:

D_i = D^{max}\left(1 - \frac{I}{I_{max}}\right)\delta \quad (8)

where D^{max} is the maximum diffusion speed, \delta is a random directional vector whose entries are random values between -1 and 1, I is the current iteration number, and I_{max} is the maximum number of iterations.
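A sketch of Equation (8); the random direction is drawn uniformly from [-1, 1] per component, as stated above:

```python
import numpy as np

def physical_diffusion(dim, D_max, I, I_max, rng=np.random.default_rng()):
    """Equation (8): random diffusion, damped linearly over the iterations."""
    delta = rng.uniform(-1.0, 1.0, size=dim)  # random directional vector
    return D_max * (1.0 - I / I_max) * delta
```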
(4) Motion process of KHA
The defined motions continually change the krill positions toward better fitness. The foraging motion and the motion induced by other krill individuals contain two local strategies (\alpha_i^{local}, \beta_i^{best}) and two global strategies (\alpha_i^{target}, \beta_i^{food}), which work simultaneously and make the algorithm powerful. Using the various operative parameters of the motions over time, the position of a krill individual during the interval t to t + \Delta t is given by:

X_i(t + \Delta t) = X_i(t) + \Delta t\,\frac{dX_i}{dt} \quad (9)

where X_i(t + \Delta t) is the updated position of the krill individual and X_i(t) is its current position. Note that \Delta t is the most important constant and should be tuned carefully for the optimization problem at hand, because it acts as a scale factor on the speed vector. \Delta t can be obtained from:

\Delta t = C_t\sum_{j=1}^{NV}(UB_j - LB_j) \quad (10)

where NV is the total number of variables, and LB_j and UB_j are the lower and upper bounds of the jth variable (j = 1, 2, \ldots, NV), respectively; the absolute value of their difference defines the extent of the search space. It has been found empirically that C_t is a constant in the interval [0, 2], and low values of C_t let the krill individuals search the space carefully.
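The position update of Equations (9) and (10) then reduces to a few lines; bound handling is left out here, since the text does not specify it:

```python
import numpy as np

def delta_t(C_t, LB, UB):
    """Equation (10): Delta t = C_t * sum over j of (UB_j - LB_j)."""
    return C_t * np.sum(np.asarray(UB) - np.asarray(LB))

def step(X_i, N_i, F_i, D_i, dt):
    """Equation (9): X_i(t + dt) = X_i(t) + dt * (N_i + F_i + D_i)."""
    return X_i + dt * (N_i + F_i + D_i)
```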
(5) Genetic operators
The crossover operation uses a binomial crossover scheme to update the mth component of the ith krill by the following formula:

X_{i,m} = \begin{cases} x_{r,m}, & rand_{i,m} < Cr \\ x_{i,m}, & \text{else} \end{cases} \quad (11)

Cr = 0.2\,\hat{K}_{i,best} \quad (12)

where Cr is the crossover probability, rand_{i,m} is a uniform random number between 0 and 1, and r \in \{1, 2, \ldots, i-1, i+1, \ldots, N\}. Mutation is controlled by the mutation probability Mu. The adaptive mutation scheme used is formulated as:

X_{i,m} = \begin{cases} x_{gbest,m} + \mu(x_{p,m} - x_{q,m}), & rand_{i,m} < Mu \\ x_{i,m}, & \text{else} \end{cases} \quad (13)

Mu = 0.05/\hat{K}_{i,best} \quad (14)

where p, q \in \{1, 2, \ldots, i-1, i+1, \ldots, N\} and \mu is a number between 0 and 1. In \hat{K}_{i,best}, the numerator is K_i - K_{best}. Under this scheme, the mutation probability for the global best is zero and it increases as fitness decreases.
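The sketch below illustrates Equations (11)-(14). It assumes the normalization \hat{K}_{i,best} = (K_i - K_{best})/(K_{worst} - K_{best}) from [19]; the small epsilon and the explicit guard for the global best are our additions:

```python
import numpy as np

rng = np.random.default_rng()

def k_hat(K, i):
    """Normalized fitness (K_i - K_best) / (K_worst - K_best), as in [19]."""
    K = np.asarray(K, dtype=float)
    return (K[i] - K.min()) / (K.max() - K.min() + 1e-12)

def crossover(X, K, i):
    """Equations (11)-(12): binomial crossover with Cr = 0.2 * K_hat."""
    Cr = 0.2 * k_hat(K, i)
    r = rng.choice([j for j in range(len(X)) if j != i])
    mask = rng.random(X.shape[1]) < Cr
    return np.where(mask, X[r], X[i])

def mutation(X, K, i, mu=0.5):
    """Equations (13)-(14): adaptive mutation around the global best;
    the global best itself (K_hat = 0) is left unmutated, per the text."""
    kh = k_hat(K, i)
    if kh == 0.0:
        return X[i].copy()
    Mu = 0.05 / kh
    p, q = rng.choice([j for j in range(len(X)) if j != i], size=2, replace=False)
    gbest = int(np.argmin(K))
    mask = rng.random(X.shape[1]) < Mu
    return np.where(mask, X[gbest] + mu * (X[p] - X[q]), X[i])
```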

3. Improved KHA

The KHA algorithm accounts for the various motion characteristics of individual krill, as well as global exploration and local exploitation. Simulations and experiments [19] show that its performance is better than that of the majority of swarm intelligence algorithms. However, recent studies show that while KHA has excellent local exploitation ability, its global exploration ability is not as strong, especially for high-dimensional multimodal function optimization [29], because the algorithm cannot always converge rapidly. To address this, selection and crossover operators were added to the basic KHA in [29], and [30] used a local search to explore around the solution obtained by KHA. Inspired by these developments, we propose the improved KHA (IKHA), based on a modified mutation scheme and a new update mechanism.
The main ideas of IKHA are as follows: First, we sort the individuals of each generation by fitness value in ascending order. The first part consists of individuals with good fitness (those with fitness values among the top 10%, excluding the global best), and the rest form the second part. The first part, which we call the sub-optimal individuals, have fitness values close to, but worse than, that of the optimal individual; in the original optimization process, these individuals do not contribute much. Another noteworthy point, based on Equation (14) in the previous section, is that the mutation probability Mu is zero for the global best and increases with decreasing fitness; in other words, the smaller the fitness value, the higher the probability of mutation. Thus, we can improve the mutation mechanism to make use of these individuals and let them search for potential solutions in the vicinity of the optimal solution.
For the first part, the sub-optimal individuals, we use an individual's own neighbor x_a (a neighbor of x_i) in the mutation scheme instead of the original stochastically selected x_p, x_q. The specific operation follows the formula below, where SN denotes the set of sub-optimal individuals and \mu_{nn} is a number between 0 and 1:

X_{i,m} = \begin{cases} x_{i,m} + \mu_{nn}(x_{a,m} - x_{i,m}), & x_i \in SN,\ rand_{i,m} \le Mu \\ x_{i,m}, & x_i \in SN,\ rand_{i,m} > Mu \end{cases} \quad (15)

For the individuals of the second part, we only need good individuals to guide them toward a better direction of evolution. Therefore, we choose sub-optimal individuals for their mutation scheme. The specific formula is as follows:

X_{i,m} = \begin{cases} x_{gbest,m} + \mu(x_{b,m} - x_{c,m}), & x_i \notin SN,\ rand_{i,m} \le Mu \\ x_{i,m}, & x_i \notin SN,\ rand_{i,m} > Mu \end{cases} \quad (16)

where x_b, x_c \in SN are sub-optimal individuals.
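A sketch of the divide-and-rule mutation of Equations (15) and (16). The 10% cut-off follows the text; how x_a is picked from the neighbor set is not specified, so we draw it at random (the helper `neighbors` is the sketch from Section 2), and Mu is passed in as a scalar for brevity, although in the algorithm it is the adaptive probability of Equation (14):

```python
import numpy as np

rng = np.random.default_rng()

def ikha_mutation(X, K, Mu, mu=0.5, mu_nn=0.5):
    """Equations (15)-(16): sub-optimal krill (SN, top 10% by fitness
    excluding the global best) mutate toward one of their own neighbors;
    the rest mutate toward the global best, guided by two SN members."""
    order = np.argsort(K)                 # ascending fitness: best first
    gbest = order[0]
    n_sn = max(2, int(0.10 * len(X)))     # at least two, so x_b != x_c
    SN = order[1:1 + n_sn]
    X_new = X.copy()
    for i in range(len(X)):
        mask = rng.random(X.shape[1]) <= Mu
        if i in SN:
            nbrs = neighbors(X, i) or [j for j in range(len(X)) if j != i]
            a = rng.choice(nbrs)          # Equation (15): explore near x_i
            X_new[i] = np.where(mask, X[i] + mu_nn * (X[a] - X[i]), X[i])
        elif i != gbest:                  # the global best is never mutated
            b, c = rng.choice(SN, size=2, replace=False)
            X_new[i] = np.where(mask, X[gbest] + mu * (X[b] - X[c]), X[i])
    return X_new
```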
Beyond the modified mutation mechanism, our approach adds an update operator. After many iterations, KHA tends to stagnate. To avoid premature convergence, we add an update mechanism to escape local extrema. In our approach, a parameter, the maximum number of stalls (S_{max}), is introduced. Suppose K_{gbest} (the fitness value of the global best individual of the population) remains unchanged and num_{samebest} (the number of iterations without change) exceeds S_{max}; then the update formulas are:

X_{best}^{new1} = X_{best} + \nu_{best}(X_{best} - \overline{X_{SN}}), \quad \text{if } num_{samebest} > S_{max} \quad (17)

X_{best}^{new2} = X_{best} - \nu_{best}(X_{best} - \overline{X_{SN}}), \quad \text{if } num_{samebest} > S_{max} \quad (18)

where \overline{X_{SN}} is the average position of the SN, and \nu_{best} is a number between 0 and 1. If the fitness value of X_{best}^{new1} or X_{best}^{new2} is less than K_{gbest}, we replace the old position with the new one. S_{max}, defined as follows, is a positive integer that decreases as the iteration number increases:

S_{max} = s_{max}\left(1 - \frac{I}{I_{max}}\right) \quad (19)
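A sketch of the stagnation-escape update of Equations (17)-(19); `f` is the objective function, and the greedy acceptance follows the text:

```python
import numpy as np

def escape_update(X_best, K_gbest, X_SN_mean, nu_best, num_samebest, S_max, f):
    """Equations (17)-(18): if the global best has stalled for more than
    S_max iterations, probe on both sides of X_best along its offset from
    the SN mean, keeping a probe only if it improves the objective."""
    if num_samebest <= S_max:
        return X_best, K_gbest
    step = nu_best * (X_best - X_SN_mean)
    for cand in (X_best + step, X_best - step):
        if f(cand) < K_gbest:
            return cand, f(cand)
    return X_best, K_gbest

def stall_limit(s_max, I, I_max):
    """Equation (19): the stall limit shrinks linearly with the iterations."""
    return s_max * (1.0 - I / I_max)
```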
In IKHA, the optimized mutation scheme abandons the original randomly selected individuals for mutation and applies different mutations to individuals with different fitness values. With such a divide-and-rule strategy, we take full advantage of all individuals, in contrast to KHA. For example, sub-optimal individuals can be used to find potentially better values, preventing the algorithm from falling into a local optimum, while excellent individuals guide the remaining ones, speeding up optimization. The purpose of the update operation is to find potential escapes from local solutions in the later run phase of the process.
The time complexity of IKHA is the same as that of KHA. The analysis is as follows: In KHA, for each krill in an iteration, computing the sensing distance d_{s,i} takes O(N) time, so KHA's overall time complexity is O(I_{max}N^2). In IKHA, the added update operation follows Equations (17) and (18) and takes O(1) time. For the improved mutation mechanism, the individuals must be sorted by fitness value; using quicksort, this takes O(N log N) time on average and O(N^2) in the worst case, with one sorting operation added per generation. Thus, the time complexity of IKHA is still O(I_{max}N^2).
To test IKHA further, we conducted experiments using the Ackley function [31], which is defined as follows and shown in Figure 1:

f(x) = -20\exp\left(-0.2\sqrt{\frac{1}{n}\sum_{i=1}^{n}x_i^2}\right) - \exp\left(\frac{1}{n}\sum_{i=1}^{n}\cos 2\pi x_i\right) + 20 + e \quad (20)
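A direct implementation of Equation (20) for reference; the global minimum is f(0) = 0:

```python
import numpy as np

def ackley(x):
    """Equation (20): the Ackley benchmark function."""
    x = np.asarray(x, dtype=float)
    n = x.size
    return (-20.0 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / n))
            - np.exp(np.sum(np.cos(2.0 * np.pi * x)) / n)
            + 20.0 + np.e)
```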
The convergence curves for the Ackley function are shown in Figure 2. In our experiment, the number of iterations was set to 100, the population size to 50, and the results were obtained over 50 trials. For KHA and the proposed IKHA, we set the same parameters N^{max} = 0.01, V_f = 0.02, and D^{max} = 0.005; in IKHA, S_{max} = 5 and \nu_{best} = 0.5 at the beginning, linearly decreasing to 0.1 at the end [32,33]. Regarding convergence behavior, both IKHA and KHA converged quickly in the early run phase, but IKHA converged faster. In the later phase, KHA began to stagnate after its rapid convergence, whereas IKHA continued to find better values. Thus, IKHA converges quickly in the early iterations and can jump out of local optima to find a better solution.

4. Clustering Algorithms with IKHA

4.1. Basic Idea of Clustering

Data clustering, an NP-complete problem, groups data by minimizing some measure of dissimilarity. Given Dataset = \{data_1, data_2, \ldots, data_n\}, clustering aims to divide the data into K clusters (K \le n), where n is the total number of data objects, such that data objects in the same cluster are similar according to the similarity criterion. The similarity measure is the Euclidean distance:

dis(data_i, data_j) = \sqrt{\sum_{d=1}^{D}(data_{i,d} - data_{j,d})^2} \quad (21)

where i, j \in \{1, 2, \ldots, n\}, data_{i,d} is the dth attribute of the ith datum, dis(data_i, data_j) denotes the distance between data_i and data_j, and D is the number of attributes of each data object.
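In code, Equation (21) and the minimum-distance assignment rule that follows from it look like this (a sketch; names are ours):

```python
import numpy as np

def dis(a, b):
    """Equation (21): Euclidean distance between two data objects."""
    return np.sqrt(np.sum((np.asarray(a, float) - np.asarray(b, float)) ** 2))

def assign_clusters(data, centers):
    """Assign every object to its nearest cluster center (minimum-distance
    principle); returns an (n,) array of cluster indices."""
    d2 = ((data[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)  # (n, K)
    return d2.argmin(axis=1)
```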

4.2. Clustering Based on IKHA

Clustering searches for an optimal partition according to appropriate indexes; its essence is an optimization process. The key question is how to combine the optimization algorithm IKHA with clustering. By representing each krill in IKHA as a clustering scheme, we find the optimal scheme by choosing an appropriate objective function. A clustering scheme can be expressed by all of its cluster centers; that is, each krill X_i represents the K cluster centers:

X_i = \{C_1, C_2, \ldots, C_k, \ldots, C_{K-1}, C_K\} \quad (22)

C_k = \{C_k^1, C_k^2, \ldots, C_k^d, \ldots, C_k^{D-1}, C_k^D\} \quad (23)

where D denotes the number of attributes of the data to be clustered and C_k^d represents the dth attribute of the kth cluster center. Each krill individual can thus be expressed as the following matrix:

X_i = \begin{pmatrix} C_1^1 & C_2^1 & \cdots & C_k^1 & \cdots & C_{K-1}^1 & C_K^1 \\ C_1^2 & C_2^2 & \cdots & C_k^2 & \cdots & C_{K-1}^2 & C_K^2 \\ \vdots & \vdots & & \vdots & & \vdots & \vdots \\ C_1^D & C_2^D & \cdots & C_k^D & \cdots & C_{K-1}^D & C_K^D \end{pmatrix} \quad (24)
In this study, one krill represents a candidate solution to the problem: the selected K initial cluster centers. A krill and K initial clustering centers thus play equivalent roles in our algorithm, so a mapping between a krill individual and K initial clustering centers can be established. In the coding of the krill location structure, a set of initial cluster centers is generated by randomly sampling points from the dataset.
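A sketch of this encoding: a krill is a flat vector of K × D values, decoded into a (K, D) matrix of centers and initialized by sampling K dataset points, per Equations (22)-(24); function names are illustrative:

```python
import numpy as np

def init_krill(data, K, rng=np.random.default_rng()):
    """One krill = K initial cluster centers sampled from the dataset,
    flattened into a single position vector (Equations (22)-(24))."""
    idx = rng.choice(len(data), size=K, replace=False)
    return data[idx].ravel()

def decode_centers(krill, K, D):
    """Reshape a krill position back into its (K, D) matrix of centers."""
    return np.asarray(krill).reshape(K, D)
```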
The whole krill population represents a variety of clustering schemes. In this manner, our aim is to find the optimal clustering centers. According to the principle of the minimal distance, data are categorized into the appropriate cluster. The description of the improved krill-herd clustering algorithm (IKHCA) is shown in Algorithm 1.
Algorithm 1. Improved Krill Herd Clustering Algorithm (IKHCA)
(1) Define the parameters (K, Imax, N, Nmax, Vf, Dmax, and so on).
(2) Initialize N krills randomly as initial clustering centers.
(3) Evaluate each krill individual by fitness function.
(4) For each krill individual:
  • Perform the three motions (motion induced by other individuals, foraging motion, and physical diffusion).
  • Then, implement the crossover operator and the modified mutation operator (two mutation schemes are applied to individuals with different fitness levels).
  • Calculate the fitness according to the krill’s new position; if the new fitness is better than the old one, update the krill individual’s position in the search space.
(5) Use the update mechanism to update the krill’s position if the new position is superior to the old one.
(6) Repeat Steps 4 and 5 until the stopping criteria are satisfied.
(7) Return the best clustering solution.

5. Simulation and Experiment

To investigate the performance of IKHCA, five clustering algorithms, namely, K-means [34], ACO [35], PSO [36], KHCA I from [30], and KHCA II, were compared. KHCA II is a clustering algorithm based on KHA [19]. Five datasets from the UCI Machine Learning Repository [37] were used in our experiments. The details of the datasets, including name, number of classes, attributes, and records, are presented in Table 1. Our experiments were conducted with Eclipse 4.6.0 in a Windows 7 environment on an Intel Core i7 at 3.40 GHz with 4 GB RAM.
Before the experiments, the parameter settings and objective functions of KHCA II and IKHCA were specified. In KHCA II and IKHCA, we use the sum of squared errors (I_{SSE}) directly as the objective function, given in Equation (25); the lower the value of I_{SSE}, the higher the quality of the clustering:

I_{SSE} = \sum_{k=1}^{K}\sum_{i=1}^{n}\omega_{ik}\,(dis(data_i, C_k))^2 \quad (25)

\omega_{ik} = \begin{cases} 1, & \text{if } data_i \text{ belongs to cluster } k \\ 0, & \text{else} \end{cases} \quad (26)
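Equations (25)-(26) in code: since \omega_{ik} simply marks the cluster each object belongs to, the double sum collapses to a sum of each object's squared distance to its own center (a sketch, reusing the minimum-distance assignment from Section 4.1):

```python
import numpy as np

def sse(data, centers):
    """Equations (25)-(26): sum of squared errors over all objects, each
    charged the squared distance to its own (nearest) cluster center."""
    d2 = ((data[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)  # (n, K)
    return d2.min(axis=1).sum()  # omega_ik selects one column per row
```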
The parameters are set in accordance with [19,38]:
N^{max} = 0.01;
V_f = 0.02; and
D^{max} = 0.005.
Here, C_t is set to 0.5, and the inertia weights (\omega_n, \omega_f) are equal to 0.9 at the beginning of the search and linearly decrease to 0.1 at the end to encourage exploitation. The population size is set to 25; in IKHCA, s_{max} = 5 and \nu_{best} = 0.5 at the beginning, linearly decreasing to 0.1 at the end.
We compared the performance of the clustering algorithms from two aspects: Table 2 compares the objective function values of the different algorithms, and Table 3 compares their accuracies. Accuracy is defined as:

accuracy = \frac{\text{number of correctly placed data}}{\text{total number of data}} \times 100\% \quad (27)
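A sketch of Equation (27). The paper does not state how predicted cluster indices are matched to true class labels, so we take the usual convention of maximizing over all K! relabelings (feasible here, since K ≤ 6 for these datasets):

```python
import numpy as np
from itertools import permutations

def accuracy(y_true, y_pred, K):
    """Equation (27): percentage of correctly placed data, maximized over
    every possible cluster-index-to-class-label mapping."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    best = 0
    for perm in permutations(range(K)):
        mapped = np.array([perm[c] for c in y_pred])
        best = max(best, int(np.sum(mapped == y_true)))
    return 100.0 * best / len(y_true)
```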
Table 2 lists the best, worst, and mean solutions, and ranks the algorithms by mean value over all datasets in Table 1. The results of the compared algorithms are taken directly from [30]. The KHCA II and IKHCA algorithms were executed 100 times independently with the parameters described in this paper, except that the maximum number of generations was set to 200. As shown in Table 2, IKHCA obtained better best and worst solutions than the other algorithms on the Wine, Glass, Cancer, and CMC datasets, but not on Iris, where KHCA II obtained the best solution for the best criterion but a poor solution for the worst criterion. IKHCA achieved the best mean values on all datasets except Glass, where its result is very close to that of KHCA II. From these results, our proposed algorithm achieves better optimal solutions with improved stability within a limited number of iterations; IKHCA ranks first among all the algorithms.
Table 3 gives the clustering accuracies of IKHCA and the other clustering algorithms; part of the results are taken directly from [30], with bold font indicating the best results. At a glance, the last three clustering algorithms (KHCA I, KHCA II, and IKHCA), which are based on KHA, are clearly better than the K-means, ACO, and PSO algorithms, showing that introducing KHA into the clustering problem is reasonable and effective. Based on these results, IKHCA proves to be the best algorithm with respect to both objective function value and accuracy.

6. Conclusions and Future Work

KHA is a good swarm intelligence heuristic algorithm that can gradually be applied to real-world problems. Because the original KHA cannot always converge rapidly or search globally particularly well, we proposed IKHA, which improves the original mutation mechanism with two different mutation schemes and introduces an update mechanism. In IKHA, mutation schemes are assigned according to the fitness of individuals: outstanding individuals search for better solutions nearby, while the rest move closer to the good individuals. Through the update mechanism, the optimal individual then probes the surrounding space for potential solutions to avoid getting stuck in a local optimum. Experimental results showed that IKHA performs better than KHA.
Many clustering algorithms depend heavily on the initial states and converge to the local optimum nearest the starting position of the search. In order to find the optimal cluster centers, we applied IKHA to an actual clustering problem and proposed the improved krill herd clustering algorithm (IKHCA). According to the experiments, IKHCA is more efficient than, and outperforms, other well-known clustering approaches; the results also show that IKHA can successfully be applied to clustering problems and performs best on almost all experimental datasets. In the future, several issues can be further studied, such as using the optimization ability of IKHA to find the optimal number of clusters and applying IKHA to other scenarios to solve a wide range of real-world problems.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (U1431227); and the Guangzhou Scientific and Technological Project (201604010037).

Author Contributions

All of the authors contributed to the content of this paper. Qin Li participated in the algorithm analysis, design, implementation, and draft preparation. Bo Liu analyzed the experimental data and revised this paper. All authors read and approved the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Abualigah, L.M.Q.; Hanandeh, E.S. Applying Genetic Algorithms to Information Retrieval Using Vector Space Model. Int. J. Comput. Sci. Eng. Appl. 2015, 5, 19–28. [Google Scholar]
  2. Carpineto, C.; Osiński, S.; Romano, G.; Weiss, D. A Survey of Web Clustering Engines. Acm Comput. Surv. 2009, 41, 17. [Google Scholar] [CrossRef]
  3. Rafailidis, D.; Constantinou, E.; Manolopoulos, Y. Landmark selection for spectral clustering based on Weighted PageRank. Futur. Gener. Comput. Syst. 2017, 68, 465–472. [Google Scholar] [CrossRef]
  4. Wu, B.; Hu, B.G.; Ji, Q. A Coupled Hidden Markov Random Field Model for Simultaneous Face Clustering and Tracking in Videos. Pattern Recognit. 2016, 64, 361–373. [Google Scholar] [CrossRef]
  5. Kaya, I.E.; Pehlivanlı, A.C.; Sekizkardeş, E.G.; Ibrikci, T. PCA Based Clustering for Brain Tumor Segmentation of T1w MRI Images. Comput. Method. Progr. Biomed. 2017, 140, 19–28. [Google Scholar] [CrossRef] [PubMed]
  6. Macqueen, J. Some Methods for Classification and Analysis of MultiVariate Observations. Proc. Berkeley Symp. Math. Stat. Probab. 1967, 1, 281–297. [Google Scholar]
  7. Jain, A.K. Data Clustering: 50 Years Beyond K-Means; Springer: Berlin, Germany, 2008. [Google Scholar]
  8. Langfelder, P.; Zhang, B.; Horvath, S. Defining clusters from a hierarchical cluster tree: The Dynamic Tree Cut package for R. Bioinformatics 2008, 24, 719. [Google Scholar] [CrossRef] [PubMed]
  9. Goldberg, D.E. Genetic Algorithms in Search, Optimization and Machine Learning. Choice Rev. Online 1989, 27, 2104–2116. [Google Scholar]
  10. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the 1995 IEEE International Conference on Neural Networks, Perth, Australia, 27 November–1 December 1995; Volume 4, pp. 1942–1948. [Google Scholar]
  11. Xu, B.; Zhu, J.; Chen, Q. Ant Colony Optimization. New Advances in Machine Learning; InTech: Jiangsu, China, 2010; pp. 1155–1173. [Google Scholar]
  12. Zong, W.G.; Kim, J.H.; Loganathan, G.V. A New Heuristic Optimization Algorithm: Harmony Search. Simul. Trans. Soc. Model. Simul. Int. 2001, 76, 60–68. [Google Scholar]
  13. Abbass, H.A. MBO: Marriage in honey bees optimization-a Haplometrosis polygynous swarming approach. In Proceedings of the 2001 Congress on Evolutionary Computation, Seoul, Korea, 27–30 May 2001; Volume 1, pp. 207–214. [Google Scholar]
  14. Xiaolei, L.I.; Shao, Z.; Qian, J. An Optimizing Method Based on Autonomous Animats: Fish-swarm Algorithm. Syst. Eng. Theory Pract. 2002, 11, 32–38. [Google Scholar]
  15. Karaboga, D. An Idea Based on Honey Bee Swarm for Numerical Optimization; Erciyes University, Engineering Faculty, Computer Engineering Department: Kayseri, Turkey, 2005. [Google Scholar]
  16. Yang, X.S. Firefly Algorithms for Multimodal Optimization. Mathematics 2009, 5792, 169–178. [Google Scholar]
  17. Zhao, R.; Tang, W. Monkey algorithm for global numerical optimization. J. Uncertain Syst. 2008, 2, 165–176. [Google Scholar]
  18. Yang, X.S. A New Metaheuristic Bat-Inspired Algorithm. Comput. Knowl. Technol. 2010, 284, 65–74. [Google Scholar]
  19. Gandomi, A.H.; Alavi, A.H. Krill herd: A new bio-inspired optimization algorithm. Commun. Nonlinear Sci. Numer. Simul. 2012, 17, 4831–4845. [Google Scholar] [CrossRef]
  20. Amudhavel, J.; Sathian, D.; Raghav, R.S.; Pasupathi, L.; Baskaran, R.; Dhavachelvan, P. A Fault Tolerant Distributed Self Organization in Peer To Peer (P2P) Using Krill Herd Optimization. In Proceedings of the 2015 International Conference on Advanced Research in Computer Science Engineering & Technology, Unnao, India, 6–7 March 2015. [Google Scholar]
  21. Amudhavel, J.; Kumarakrishnan, S.; Gomathy, H.; Jayabharathi, A.; Malarvizhi, M.; Prem Kumar, K. An Scalable Bandwidth Reduction and Optimization in Smart Phone Ad hoc Network (SPAN) Using Krill Herd Algorithm. In Proceedings of the 2015 International Conference on Advanced Research in Computer Science Engineering & Technology, Unnao, India, 6–7 March 2015. [Google Scholar]
  22. Kowalski, P.A.; Lukasik, S. Training Neural Networks with Krill Herd Algorithm. Neural Proc. Lett. 2016, 9463, 5–17. [Google Scholar] [CrossRef]
  23. Chaturvedi, S.; Pragya, P.; Verma, H.K. Comparative analysis of particle swarm optimization, genetic algorithm and krill herd algorithm. In Proceedings of the 2015 International Conference on Computer, Communication and Control, Indore, India, 10–12 September 2015. [Google Scholar]
  24. Storn, R.; Price, K. Differential Evolution—A Simple and Efficient Heuristic for global Optimization over Continuous Spaces. J. Glob. Optim. 1997, 11, 341–359. [Google Scholar] [CrossRef]
  25. Simon, D. Biogeography-Based Optimization. IEEE Trans. Evol. Comput. 2009, 12, 702–713. [Google Scholar] [CrossRef]
  26. Wang, G.; Guo, L.; Gandomi, A.H.; Cao, L.; Alavi, A.H.; Duan, H.; Li, J. Lévy-Flight Krill Herd Algorithm. Math. Probl. Eng. 2013, 2013, 1–14. [Google Scholar] [CrossRef]
  27. Li, L.; Zhou, Y.; Xie, J. A Free Search Krill Herd Algorithm for Functions Optimization. Math. Probl. Eng. 2014, 2014, 1–21. [Google Scholar] [CrossRef]
  28. Li, J.; Tang, Y.; Hua, C.; Guan, X. An improved krill herd algorithm: Krill herd with linear decreasing step. Appl. Math. Comput. 2014, 234, 356–367. [Google Scholar] [CrossRef]
  29. Wang, G.G.; Gandomi, A.H.; Alavi, A.H. Stud krill herd algorithm. Neurocomputing 2014, 128, 363–370. [Google Scholar] [CrossRef]
  30. Nikbakht, H.; Mirvaziri, H. A new clustering approach based on K-means and Krill Herd algorithm. In Proceedings of the 2015 IEEE Electrical Engineering, Tehran, Iran, 10–14 May 2015; pp. 662–667. [Google Scholar]
  31. Gandomi, A.H.; Yang, X.S. Benchmark Problems in Structural Optimization. In Computational Optimization, Methods and Algorithms; Springer: Berlin, Germany, 2011; pp. 259–281. [Google Scholar]
  32. Price, H.J. Swimming behavior of krill in response to algal patches: A mesocosm study. Limnol. Oceanogr. 1989, 34, 649–659. [Google Scholar] [CrossRef]
  33. Morin, A.; Okubo, A.; Kawasaki, K. Acoustic data analysis and models of krill spatial distribution. In Scientific Committee for the Conservation of Antarctic Marine Living Resources; Selected Scientific Papers, Part I; Scientific Committee: Tasmania, Australia, 1988; pp. 311–329. [Google Scholar]
  34. Hartigan, J.A.; Wong, M.A. Algorithm AS 136: A K-Means Clustering Algorithm. Appl. Stat. 1979, 28, 100–108. [Google Scholar] [CrossRef]
  35. Shelokar, P.S.; Jayaraman, V.K.; Kulkarni, B.D. An ant colony approach for clustering. Anal. Chim. Acta 2004, 509, 187–195. [Google Scholar] [CrossRef]
  36. Chen, C.Y.; Ye, F. Particle swarm optimization algorithm and its application to clustering analysis. In Proceedings of the 2004 IEEE International Conference on Networking, Sensing and Control, Taipei, Taiwan, 21–23 March 2004; Volume 2, pp. 789–794. [Google Scholar]
  37. Blake, C.L.; Merz, C.J. University of California at Irvine Repository of Machine Learning Databases. 1998. Available online: http://www.ics.uci.edu/mlearn/MLRepository.html (accessed on 15 May 2017).
  38. Kowalski, P.A.; Lukasik, S. Experimental Study of Selected Parameters of the Krill Herd Algorithm. In Intelligent Systems; Springer: Cham, Switzerland, 2014; pp. 473–485. [Google Scholar]
Figure 1. Ackley function graph.
Figure 2. Comparison of convergence of the KHA and IKHA for Ackley (D = 20).
Table 1. The details of selected datasets.

Name      Number of Clusters    Parameters    Elements
Iris      3                     4             150
Wine      3                     13            178
Glass     6                     9             214
Cancer    2                     9             683
CMC       3                     10            1473
Table 2. Objective function values obtained by the algorithms.

Data Set   Criteria   K-means    ACO        PSO        KHCA I     KHCA II    IKHCA
Iris       Best       98.5       97.4       97.1       96.4       96.65      96.66
           Worst      117.4      99.2       100.5      103.1      97.68      96.67
           Mean       104.7      97.8       98.8       98.6       96.67      96.66
           Rank       6          3          5          4          2          1
Wine       Best       16,562.6   16,510.3   16,336.4   16,328.1   16,293.01  16,292.12
           Worst      17,995.9   16,535.8   16,426.4   16,430.9   17,710.16  16,589.23
           Mean       17,101.5   16,528.5   16,396.3   16,384.2   16,490.17  16,305.51
           Rank       6          5          3          2          4          1
Glass      Best       225.3      219.8      271.1      216.2      210.85     210.30
           Worst      263.1      258.1      286.8      255.9      246.16     223.03
           Mean       248.2      241.3      279.3      238.3      215.86     215.90
           Rank       5          4          6          3          1          2
Cancer     Best       2994.9     2966.6     2974.4     2945.8     2964.39    2964.39
           Worst      3651.5     3098.9     3289.1     3088.8     3571.53    2971.15
           Mean       3131.3     2984.9     3102.8     2981.4     2995.87    2968.16
           Rank       6          3          5          2          4          1
CMC        Best       5891.3     5721.8     5795.5     5711.2     5700.16    5692.20
           Worst      5989.4     5836       5866       5821.3     5791.52    5695.02
           Mean       5945.1     5773       5823.1     5759.4     5760.29    5694.91
           Rank       6          4          5          2          3          1
Mean Rank             5.8        3.8        4.8        2.6        2.8        1.2
Final Rank            6          4          5          2          3          1
Table 3. Accuracy obtained by the algorithms.

Data Set   K-means   ACO     PSO     KHCA I   KHCA II   IKHCA
Iris       83.3      88.5    87.7    89.07    89.67     90.67
Wine       63.62     70.6    70.4    71.12    70.99     73.03
Glass      60.8      64.7    56.65   64.98    65.01     65.88
Cancer     93.37     94.1    94.62   95.01    95.02     95.16
CMC        41.8      45.5    45.2    45.5     45.55     45.62
