A New Parallel Cuckoo Flower Search Algorithm for Training Multi-Layer Perceptron
Abstract
1. Introduction
- To avoid premature convergence and stagnation in local optima, the best-known properties of FPA and CS are combined in the proposed algorithm.
- The global and local search-phase equations of FPA and CS are adapted for inclusion in the proposed algorithm.
- Solutions generated by FPA and CS are compared, and the better of the two is selected as the current best solution. New solutions are generated in this way over the course of iterations to find the global best solution.
- A greedy selection operation retains the best solution across subsequent iterations.
- The proposed algorithm is tested on 19 classical benchmark functions, and the Wilcoxon rank-sum test is applied to establish its statistical significance (a brief example follows this list).
- Finally, five datasets, namely Heart, Breast Cancer, Iris, Balloon, and XOR, are used to evaluate the proposed algorithm.
- The source code of CFS algorithm is available at: https://github.com/rohitsalgotra/CFS (accessed on 20 June 2023).
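As an illustration of the Wilcoxon rank-sum comparison mentioned above, the short sketch below runs the test on two sets of per-run best-fitness values; the arrays are made-up placeholders, not results from the paper, and SciPy is assumed to be available.

```python
# Hedged sketch: Wilcoxon rank-sum test between two algorithms' per-run results.
# The fitness arrays are illustrative placeholders, not values reported in the paper.
from scipy.stats import ranksums

cfs_runs = [1.2e-10, 3.4e-10, 8.9e-11, 2.1e-10, 5.5e-10]
cs_runs = [4.1e-05, 6.3e-05, 2.8e-05, 9.0e-05, 1.1e-04]

stat, p_value = ranksums(cfs_runs, cs_runs)
print(f"rank-sum statistic = {stat:.4f}, p-value = {p_value:.4g}")
# A p-value below 0.05 would indicate a statistically significant difference.
```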
2. Feed-Forward Neural Networks and Multi-Layer Perceptron
- Weighted sum of inputs at hidden node j is given by: $s_j = \sum_{i=1}^{n} w_{ij} x_i + b_j$, where $n$ is the number of input nodes, $w_{ij}$ is the weight connecting the $i$-th input to the $j$-th hidden node, $x_i$ is the $i$-th input, and $b_j$ is the bias of the $j$-th hidden node.
- Outputs of the hidden layer are calculated as: $S_j = \operatorname{sigmoid}(s_j) = \dfrac{1}{1 + e^{-s_j}}$.
- Final output based on the hidden node outputs is given as: $o_k = \sum_{j=1}^{h} w_{jk} S_j + b_k$, where $h$ is the number of hidden nodes, $w_{jk}$ is the weight from the $j$-th hidden node to the $k$-th output node, and $b_k$ is the bias of the $k$-th output node.
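A minimal sketch of the forward pass described by the three expressions above, assuming a single hidden layer with sigmoid activations (the sigmoid output layer and the random weights are illustrative assumptions, not the paper's exact configuration):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def mlp_forward(x, W1, b1, W2, b2):
    """Forward pass of a single-hidden-layer MLP.

    x:  (n_inputs,) input vector
    W1: (n_hidden, n_inputs) input-to-hidden weights, b1: (n_hidden,) hidden biases
    W2: (n_outputs, n_hidden) hidden-to-output weights, b2: (n_outputs,) output biases
    """
    s = W1 @ x + b1            # weighted sum at each hidden node
    S = sigmoid(s)             # hidden-layer outputs
    o = sigmoid(W2 @ S + b2)   # final outputs (sigmoid output layer assumed)
    return o

# Illustrative 3-7-1 structure (the structure later used for the XOR dataset).
rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((7, 3)), rng.standard_normal(7)
W2, b2 = rng.standard_normal((1, 7)), rng.standard_normal(1)
print(mlp_forward(np.array([1.0, 0.0, 1.0]), W1, b1, W2, b2))
```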
3. Basic Cuckoo Search and Flower Pollination Algorithm
3.1. Cuckoo Search Algorithm
- Each cuckoo lays one egg and dumps it in a random nest;
- The best nests with high-quality eggs (highest fitness) carry over to the next generation;
- The number of available host nests is fixed, and the host bird discovers the cuckoo’s egg with a probability pa ∈ [0, 1]. Depending on pa, the host bird either throws the egg away or abandons the nest and builds a new nest at a new location.
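The global search phase of CS perturbs each nest with a Lévy flight. A hedged sketch is shown below, using Mantegna's algorithm to draw Lévy-distributed steps; β = 1.5 and α = 0.01 are common defaults, not necessarily the values used in the paper.

```python
import numpy as np
from math import gamma, sin, pi

def levy_step(dim, beta=1.5, rng=None):
    """Draw a Lévy-distributed step vector via Mantegna's algorithm."""
    rng = rng if rng is not None else np.random.default_rng()
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, dim)
    v = rng.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)

def cs_update(x, best, alpha=0.01, rng=None):
    """CS-style global move: Lévy-flight step scaled by the distance to the best nest."""
    return x + alpha * levy_step(len(x), rng=rng) * (x - best)
```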
3.2. Flower Pollination Algorithm
- Global pollination arises via biotic and cross-pollination.
- Local pollination occurs via abiotic and self-pollination.
- Flower constancy, treated as a reproduction probability, is proportional to the similarity of the two flowers involved.
- Switch probability p ∈ [0, 1] balances global and local pollination.
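A sketch of the two pollination moves governed by the switch probability p is given below; it reuses levy_step from the CS sketch above, and the exact scaling factors follow the standard FPA formulation rather than the paper's equations.

```python
import numpy as np

def fpa_update(x, best, population, p=0.8, rng=None):
    """FPA-style move: global pollination with probability p, local pollination otherwise."""
    rng = rng if rng is not None else np.random.default_rng()
    if rng.random() < p:
        # Global pollination: Lévy-flight step toward the current best flower.
        return x + levy_step(len(x), rng=rng) * (best - x)
    # Local pollination: random walk between two randomly chosen flowers.
    j, k = rng.choice(len(population), size=2, replace=False)
    return x + rng.random() * (population[j] - population[k])
```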
4. Cuckoo Flower Search Algorithm
4.1. Algorithm Definition
- Initialization
- Solution generation
- Final evaluation
Algorithm 1: Pseudocode of the CFS algorithm
begin
1. Initialize: parameters, maximum iterations
2. Define population, objective function f(x)
3. While (t < maximum iterations)
       For i = 1 to n
           For j = 1 to n
               Evaluate new solution using the CS-inspired equation;
               Evaluate new solution using the FPA-inspired equation;
               Keep the best among the two using greedy selection;
           End for j
       End for i
4. Update current best
5. End while
6. Find final best
end
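One possible reading of Algorithm 1 is sketched below: for every solution, a CS-inspired and an FPA-inspired candidate are generated, greedy selection keeps the better one, and the improved solution replaces the old one. It reuses the cs_update and fpa_update helpers from the Section 3 sketches; the abandonment of a fraction pa of the worst nests mirrors basic CS and is an assumption, since Algorithm 1 does not spell it out. This is an illustrative sketch, not the authors' reference implementation (which is linked in the Introduction).

```python
import numpy as np

def cfs_minimize(f, dim, bounds, n=20, max_iter=200, p=0.8, pa=0.25, rng=None):
    """Illustrative CFS loop: greedy selection between CS- and FPA-style candidates."""
    rng = rng if rng is not None else np.random.default_rng()
    lo, hi = bounds
    pop = rng.uniform(lo, hi, (n, dim))
    fit = np.array([f(x) for x in pop])
    best = pop[np.argmin(fit)].copy()

    for _ in range(max_iter):
        for i in range(n):
            cand_cs = np.clip(cs_update(pop[i], best, rng=rng), lo, hi)
            cand_fpa = np.clip(fpa_update(pop[i], best, pop, p=p, rng=rng), lo, hi)
            fc_cs, fc_fpa = f(cand_cs), f(cand_fpa)
            # Greedy selection: keep the better of the two candidates, and
            # accept it only if it improves on the current solution.
            cand, fc = (cand_cs, fc_cs) if fc_cs < fc_fpa else (cand_fpa, fc_fpa)
            if fc < fit[i]:
                pop[i], fit[i] = cand, fc
        # Abandon a fraction pa of the worst nests, as in basic CS
        # (assumed here; Algorithm 1 does not spell this step out).
        k = max(1, int(pa * n))
        worst = np.argsort(fit)[-k:]
        pop[worst] = rng.uniform(lo, hi, (k, dim))
        fit[worst] = [f(x) for x in pop[worst]]
        best = pop[np.argmin(fit)].copy()
    return best, float(fit.min())

# Example: minimize the 30-dimensional sphere function.
# best_x, best_f = cfs_minimize(lambda x: float(np.sum(x**2)), 30, (-100, 100))
```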
4.2. CFS-MLP Trainer
- To find a combination of weights and biases of the MLP that achieves the minimum error, using meta-heuristic algorithms. In this approach, proper values of the weights are found without changing the basic architecture of the network. It has a simple encoding phase and a difficult decoding phase, and so it is often used for simple NNs (a sketch of this encoding appears after this list).
- To find a proper architecture for an MLP using heuristic algorithms. In this method, the architecture itself is varied, which can be achieved by varying the connections between hidden nodes, layers, and neurons, as proposed in [46]. This method has a simple decoding phase but, owing to the complexity of the encoding phase, it is used for complex structures.
- To tune the parameters of a gradient-based learning algorithm using a heuristic approach. This method has been used to train FNNs with EAs [47] and with other methods, such as GA [48], in combination. In this method, both the encoding and decoding processes are very complicated, and hence the structure becomes very complex.
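For the first strategy, which is the one adopted for the CFS-MLP trainer, a candidate solution can be encoded as a flat vector of all weights and biases and scored by the mean squared error of the decoded network. The sketch below assumes the mlp_forward helper from the Section 2 sketch; the dataset arrays X_xor and y_xor, the 3-7-1 structure, and the [−10, 10] search range are placeholders, not values from the paper.

```python
import numpy as np

def decode(vec, n_in, n_hidden, n_out):
    """Split a flat candidate vector into the weight matrices and bias vectors of an MLP."""
    vec, i = np.asarray(vec), 0
    W1 = vec[i:i + n_hidden * n_in].reshape(n_hidden, n_in); i += n_hidden * n_in
    b1 = vec[i:i + n_hidden];                                 i += n_hidden
    W2 = vec[i:i + n_out * n_hidden].reshape(n_out, n_hidden); i += n_out * n_hidden
    b2 = vec[i:i + n_out]
    return W1, b1, W2, b2

def mse_fitness(vec, X, y, structure=(3, 7, 1)):
    """Fitness of one candidate: mean squared error of the decoded MLP on the training set."""
    W1, b1, W2, b2 = decode(vec, *structure)
    preds = np.array([mlp_forward(x, W1, b1, W2, b2) for x in X]).reshape(len(X), -1)
    return float(np.mean((preds - np.asarray(y).reshape(len(X), -1)) ** 2))

# A 3-7-1 MLP has 3*7 + 7 + 7*1 + 1 = 36 parameters, so the search space is 36-dimensional.
# X_xor and y_xor are placeholder dataset arrays; (-10, 10) is an assumed search range.
# best_vec, best_err = cfs_minimize(lambda v: mse_fitness(v, X_xor, y_xor), 36, (-10, 10))
```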
5. Results and Discussion
5.1. Benchmark Problems
5.1.1. Unimodal Functions
5.1.2. Multimodal Functions
5.1.3. Fixed Dimension Functions
5.2. FNN–MLP Datasets
5.2.1. XOR Dataset
5.2.2. Balloon Dataset
5.2.3. Iris Dataset
5.2.4. Breast Cancer Dataset
5.2.5. Heart Dataset
5.3. Discussion of Results
6. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- McCulloch, W.S.; Pitts, W. A logical calculus of the ideas immanent in nervous activity. Bull. Math. Biophys. 1943, 5, 115–133. [Google Scholar] [CrossRef]
- Kohonen, T. The self-organizing map. Proc. IEEE 1990, 78, 1464–1480. [Google Scholar] [CrossRef]
- Dorffner, G. Neural networks for time series processing. Neural Netw. World 1996. [Google Scholar]
- Ghosh-Dastidar, S.; Adeli, H. Spiking neural networks. Int. J. Neural Syst. 2009, 19, 295–308. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Bebis, G.; Georgiopoulos, M. Feed-forward neural networks. IEEE Potentials 1994, 13, 27–31. [Google Scholar] [CrossRef]
- Rosenblatt, F. The Perceptron, A Perceiving and Recognizing Automaton Project Para; Cornell Aeronautical Laboratory: Buffalo, NY, USA, 1957. [Google Scholar]
- Werbos, P. Beyond Regression: New Tools for Prediction and Analysis in the Behavioral Sciences. Ph.D. Thesis, Harvard University, Cambridge, MA, USA, 1974. [Google Scholar]
- Reed, R.D.; Marks, R.J. Neural Smithing: Supervised Learning in Feedforward Artificial Neural Networks; MIT Press: Cambridge, MA, USA, 1998. [Google Scholar]
- Caruana, R.; Niculescu-Mizil, A. An empirical comparison of supervised learning algorithms. In Proceedings of the 23rd International Conference on Machine Learning, Pittsburgh, PA, USA, 25–29 June 2006; pp. 161–168. [Google Scholar]
- Hinton, G.E.; Sejnowski, T.J. Unsupervised Learning: Foundations of Neural Computation; MIT Press: Cambridge, MA, USA, 1999. [Google Scholar]
- Wang, D. Unsupervised Learning: Foundations of Neural Computation; MIT Press: Cambridge, MA, USA, 2001; p. 101. [Google Scholar]
- Hertz, J. Introduction to the Theory of Neural Computation. Basic Books 1; Taylor Francis: Abingdon, UK, 1991. [Google Scholar]
- Wang, G.-G.; Guo, L.; Gandomi, A.H.; Hao, G.-S.; Wang, H. Chaotic krill herd algorithm. Inf. Sci. 2014, 274, 17–34. [Google Scholar] [CrossRef]
- Wang, G.-G.; Gandomi, A.H.; Alavi, A.H.; Hao, G.-S. Hybrid krill herd algorithm with differential evolution for global numerical optimization. Neural Comput. Appl. 2013, 25, 297–308. [Google Scholar] [CrossRef]
- Van Laarhoven, P.J.; Aarts, E.H. Simulated Annealing; Springer: Berlin/Heidelberg, Germany, 1987. [Google Scholar]
- Szu, H.; Hartley, R. Fast simulated annealing. Phys. Lett. A 1987, 122, 157–162. [Google Scholar] [CrossRef]
- Mitchell, M.; Holland, J.H.; Forrest, S. When will a genetic algorithm outperform hill climbing? NIPS 1993, 51–58. [Google Scholar]
- Sanju, P. Enhancing Intrusion Detection in IoT Systems: A Hybrid Metaheuristics-Deep Learning Approach with Ensemble of Recurrent Neural Networks. J. Eng. Res. 2023; in press. [Google Scholar]
- Mirjalili, S.; Mohd Hashim, S.Z.; Moradian Sardroudi, H. Training feedforward neural networks using hybrid particle swarm optimization and gravitational search algorithm. Appl. Math. Comput. 2012, 218, 11125–11137. [Google Scholar] [CrossRef]
- Whitley, D.; Starkweather, T.; Bogart, C. Genetic algorithms and neural networks: Optimizing connections and connectivity. Parallel Comput. 1990, 14, 347–361. [Google Scholar] [CrossRef]
- Shokouhifar, A.; Shokouhifar, M.; Sabbaghian, M.; Soltanian-Zadeh, H. Swarm intelligence empowered three-stage ensemble deep learning for arm volume measurement in patients with lymphedema. Biomed. Signal Process. Control. 2023, 85, 105027. [Google Scholar] [CrossRef]
- Socha, K.; Blum, C. An ant colony optimization algorithm for continuous optimization: Application to feed-forward neural network training. Neural Comput. Appl. 2007, 16, 235–247. [Google Scholar] [CrossRef]
- Ozturk, C.; Karaboga, D. Hybrid Artificial Bee Colony algorithm for neural network training. In Proceedings of the 2011 IEEE Congress on Evolutionary Computation (CEC), New Orleans, LA, USA, 5–8 June 2011; pp. 84–88. [Google Scholar]
- Mendes, R.; Cortez, P.; Rocha, M.; Neves, J. Particle swarms for feed forward neural network training. In Proceedings of the 2002 International Joint Conference on Neural Networks. IJCNN’02 (Cat. No.02CH37290), Honolulu, HI, USA, 12–17 May 2002. [Google Scholar]
- Gudise, V.G.; Venayagamoorthy, G.K. Comparison of particle swarm optimization and backpropagation as training algorithms for neural networks. In Proceedings of the Swarm Intelligence Symposium, SIS’03, Indianapolis, IN, USA, 26 April 2003; pp. 110–117. [Google Scholar]
- Ilonen, J.; Kamarainen, J.-K.; Lampinen, J. Differential evolution training algorithm for feed-forward neural networks. Neural Process. Lett. 2003, 17, 93–105. [Google Scholar] [CrossRef]
- Uzlu, E.; Kankal, M.; Akpınar, A.; Dede, T. Estimates of energy consumption in Turkey using neural networks with the teaching–learning-based optimization algorithm. Energy 2014, 75, 295–303. [Google Scholar] [CrossRef]
- Moallem, P.; Razmjooy, N. A multi-layer perceptron neural network trained by invasive weed optimization for potato color image segmentation. Trends Appl. Sci. Res. 2012, 7, 445–455. [Google Scholar] [CrossRef]
- Darekar, R.V.; Chavan, M.; Sharanyaa, S.; Ranjan, N.M. A hybrid meta-heuristic ensemble based classification technique speech emotion recognition. Adv. Eng. Softw. 2023, 180, 103412. [Google Scholar] [CrossRef]
- Mirjalili, S. How effective is the Grey Wolf Optimizer in training multi-layer perceptrons. Appl. Intell. 2015, 43, 150–161. [Google Scholar] [CrossRef]
- Yang, X.-S.; Deb, S. Engineering optimization by cuckoo search. Int. J. Math. Model. Numer. Optim. 2010, 1, 330–343. [Google Scholar]
- Yang, X.-S. Flower Pollination Algorithm for Global Optimization. In Proceedings of the 11th International Conference, UCNC 2012, Orléan, France, 3–7 September 2012; Volume 7445, pp. 240–249. [Google Scholar] [CrossRef] [Green Version]
- Fine, T.L. Feedforward Neural Network Methodology; Springer: Berlin/Heidelberg, Germany, 1999. [Google Scholar]
- Mirjalili, S.; Sadiq, A.S. Magnetic optimization algorithm for training multi-layer perceptron. In Proceedings of the Communication Software and Networks (ICCSN), 2011 IEEE 3rd International Conference, Xi’an, China, 27–29 May 2011; pp. 42–46. [Google Scholar]
- Payne, R.B.; Sorenson, M.D.; Klitz, K. The Cuckoos; Oxford University Press: Oxford, UK, 2005. [Google Scholar]
- Barthelemy, P.; Bertolotti, J.; Wiersma, D.S. A Lévy flight for light. Nature 2008, 453, 495–498. [Google Scholar] [CrossRef]
- Yang, X.-S.; Deb, S. Cuckoo Search via Lévy Flights. In Proceedings of the 2009 World Congress on Nature & Biologically Inspired Computing (NaBIC), Coimbatore, India, 9–11 December 2009; IEEE Publications: Piscataway, NJ, USA, 2009. [Google Scholar]
- Brown, C.; Liebovitch, L.S.; Glendon, R. Lévy Flights in Dobe Ju/’hoansi Foraging Patterns. Human Ecol. 2007, 35, 129–138. [Google Scholar] [CrossRef]
- Pavlyukevich, I. Cooling down Lévy flights. J. Phys. A Math. Theory 2007, 40, 12299–12313. [Google Scholar] [CrossRef] [Green Version]
- Walker, M. How Flowers Conquered the World, BBC Earth News, 10 July 2009. Available online: http://news.bbc.co.uk/earth/hi/earth_news/newsid_8143000/8143095.stm (accessed on 1 January 2019).
- Waser, N.M. Flower constancy: Definition, cause and measurement. Am. Nat. 1986, 127, 596–603. [Google Scholar] [CrossRef]
- Glover, B.J. Understanding Flowers and Flowering: An Integrated Approach; Oxford University Press: Oxford, UK, 2007. [Google Scholar]
- Yang, X.-S.; Karamanoglu, M.; He, X. Flower pollination algorithm: A novel approach for multiobjective optimization. Eng. Optim. 2014, 46, 1222–1237. [Google Scholar]
- Belew, R.K.; McInerney, J.; Schraudolph, N.N. Evolving Networks: Using the Genetic Algorithm with Connectionist Learning; Cognitive Computer Science Research Group: La Jolla, CA, USA, 1990. [Google Scholar]
- Mizuta, S.; Sato, T.; Lao, D.; Ikeda, M.; Shimizu, T. Structure design of neural networks using genetic algorithms. Complex Syst. 2001, 13, 161–176. [Google Scholar]
- Yu, J.; Wang, S.; Xi, L. Evolving artificial neural networks using an improved PSO and DPSO. Neurocomputing 2008, 71, 1054–1060. [Google Scholar] [CrossRef]
- Leung, F.H.; Lam, H.; Ling, S.; Tam, P.K.S. Tuning of the structure and parameters of a neural network using an improved genetic algorithm. IEEE Trans. Neural Netw. 2003, 14, 79–88. [Google Scholar] [CrossRef] [Green Version]
- Montana, D.J.; Davis, L. Training Feedforward Neural Networks Using Genetic Algorithms. IJCAI 1989, 89, 762–767. [Google Scholar]
- Derrac, J.; García, S.; Molina, D.; Herrera, F. A practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary and swarm intelligence algorithms. Swarm Evolut. Comput. 2011, 1, 3–18. [Google Scholar] [CrossRef]
- Karaboga, D. An Idea Based on Honey Bee Swarm for Numerical Optimization; Technical Report TR-06; Erciyes University, Engineering Faculty, Computer Engineering Department: Kayseri, Turkey, 2005. [Google Scholar]
- Yang, X.S. Firefly algorithms for multimodal optimization. In Stochastic Algorithms: Foundations and Applications; Lecture Notes in Computer Sciences; SAGA: Chicago, IL, USA, 2009; Volume 5792, pp. 169–178. [Google Scholar]
- Singh, U.; Salgotra, R. Synthesis of linear antenna array using flower pollination algorithm. Neural Comput. Appl. 2016, 29, 435–445. [Google Scholar]
- Blake, C.; Merz, C.J. {UCI} Repository of Machine Learning Databases; UCI: Aigle, Switzerland, 1998. [Google Scholar]
- Beyer, H.-G.; Schwefel, H.-P. Evolution strategies—A comprehensive introduction. Nat. Comput. 2002, 1, 3–52. [Google Scholar] [CrossRef]
- Yao, X.; Liu, Y.; Lin, G. Evolutionary programming made faster. Evol. Comput. IEEE Trans. 1999, 3, 82–102. [Google Scholar]
- Yao, X.; Liu, Y. Fast evolution strategies. In Proceedings of the Evolutionary Programming VI, Indianapolis, IN, USA, 13–16 April 1997; pp. 149–161. [Google Scholar]
- Baluja, S. Population-Based Incremental Learning: A Method for Integrating Genetic Search-Based Function Optimization and Competitive Learning; DTIC Document; Carnegie Mellon University: Pittsburgh, PA, USA, 1994. [Google Scholar]
- Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar]
- Mirjalili, S. Moth-flame optimization algorithm: A novel nature-inspired heuristic paradigm. Knowl. Based Syst. 2015, 89, 228–249. [Google Scholar]
- Zhou, Y.; Niu, Y.; Luo, Q.; Jiang, M. Teaching learning-based whale optimization algorithm for multi-layer perceptron neural network training. Math. Biosci. Eng. 2020, 17, 5987–6025. [Google Scholar] [CrossRef]
- Chong, H.Y.; Yap, H.J.; Tan, S.C.; Yap, K.S.; Wong, S.Y. Advances of metaheuristic algorithms in training neural networks for industrial applications. Soft Comput. 2021, 25, 11209–11233. [Google Scholar] [CrossRef]
Algorithm | Parameters | Values
---|---|---
FA | Number of fireflies | 20
 | Alpha (α) | 0.5
 | Beta (β) | 0.2
 | Gamma (γ) | 1
 | Stopping criteria | 200 iterations
ABC | Swarm size | 20
 | Limit | 100
 | Stopping criteria | 200 iterations
FPA | Population size | 20
 | Probability switch | 0.8
 | Stopping criteria | 200 iterations
CS | Population size | 20
 | Discovery rate of alien egg (pa) | 0.25
 | Maximum number of iterations | 200
 | Stopping criteria | Maximum iterations
BFP | Population size | 20
 | Probability switch | 0.8
 | Alpha (α) | 0.5
 | Stopping criteria | 200 iterations
CFS | Population size | 20
 | Probability switch | 0.8
 | Discovery rate of alien egg (pa) | 0.25
 | Stopping criteria | 200 iterations
Unimodal Test Problems | Search Range | Optimum Value | D
---|---|---|---
Schwefel function | [−500, 500] | −418.9829 × D | 30, 50, 100
Sphere function | [−100, 100] | 0 | 30, 50, 100
Elliptic function | [−100, 100] | 0 | 30, 50, 100
Schaffer function | [−100, 100] | 0 | 30, 50, 100
Algorithm | Best | Worst | Mean | Standard Deviation |
---|---|---|---|---|
CFS | −1.16 × 104 | −1.03 × 104 | −1.08 × 104 | 3.47 × 102 | |
FA | −4.85 × 103 | −2.53 × 103 | −3.78 × 103 | 6.61 × 102 | |
ABC | −9.65 × 103 | −7.67 × 103 | −8.68 × 103 | 4.93 × 102 | |
FPA | −6.36 × 1019 | −4.73 × 1015 | −3.72 × 1018 | 1.41 × 1019 | |
CS | −7.34 × 103 | −6.49 × 103 | −6.94 × 103 | 2.30 × 102 | |
BFP | −5.19 × 1010 | −2.08 × 103 | −2.76 × 109 | 1.15 × 1010 | |
CFS | 1.0666 | 2.9397 | 2.0917 | 0.4731 | |
FA | 0.0282 | 0.0818 | 0.0567 | 0.0137 | |
ABC | 1.09 × 104 | 2.31 × 104 | 1.56 × 104 | 3.27 × 103 | |
FPA | 9.52 × 103 | 2.28 × 104 | 1.53 × 104 | 3.20 × 103 | |
CS | 2.93 × 102 | 1.23 × 103 | 8.07 × 102 | 2.45 × 102 | |
BFP | 3.49 × 104 | 7.41 × 104 | 6.00 × 104 | 1.27 × 104 | |
CFS | 9.73 × 103 | 3.56 × 104 | 2.08 × 104 | 6.88 × 103 | |
FA | 1.95 × 106 | 1.66 × 107 | 6.96 × 106 | 4.06 × 106 | |
ABC | 6.75 × 106 | 5.16 × 108 | 1.04 × 108 | 1.17 × 108 | |
FPA | 1.60 × 108 | 5.09 × 108 | 2.81 × 108 | 8.27 × 107 | |
CS | 9.32 × 105 | 5.87 × 106 | 2.31 × 106 | 1.13 × 106 | |
BFP | 9.86 × 108 | 4.43 × 109 | 2.73 × 109 | 7.60 × 108 | |
CFS | 0 | 6.43 × 10−14 | 1.40 × 10−14 | 1.74 × 10−14 | |
FA | 3.61 × 10−10 | 0.0298 | 0.0066 | 0.0091 | |
ABC | 0 | 0 | 0 | 0 | |
FPA | 1.28 × 10−5 | 0.0029 | 5.12 × 10−4 | 7.13 × 10−4 | |
CS | 1.82 × 10−8 | 5.84 × 10−5 | 1.05 × 10−5 | 1.73 × 10−5 | |
BFP | 4.67 × 10−2 | 4.75 × 10−1 | 3.25 × 10−1 | 1.51 × 10−1 |
FA | FPA | CS | ABC | CFS |
---|---|---|---|---|
6.79 × 10−8 | NA | 6.79 × 10−8 | 6.79 × 10−8 | 6.79 × 10−8 | |
NA | 6.79 × 10−8 | 6.79 × 10−8 | 6.79 × 10−8 | 6.79 × 10−8 | |
6.79 × 10−8 | 6.79 × 10−8 | 6.79 × 10−8 | 6.79 × 10−8 | NA | |
8.00 × 10−9 | 8.00 × 10−9 | 8.00 × 10−9 | NA | NA |
Algorithm | Best | Worst | Mean | Standard Deviation |
---|---|---|---|---|
CFS | −1.66 × 104 | −1.53 × 104 | −1.60 × 104 | 4.10 × 102 | |
FA | −9.18 × 103 | −3.88 × 103 | −6.19 × 103 | 1.59 × 103 | |
ABC | −1.36 × 104 | −1.11 × 104 | −1.23 × 104 | 7.09 × 102 | |
FPA | −1.18 × 1020 | −9.35 × 1015 | −7.75 × 1018 | 2.69 × 1019 | |
CS | −1.08 × 104 | −9.62 × 103 | −1.00 × 104 | 3.52 × 102 | |
BFP | −6.32 × 1011 | −1.22 × 103 | −3.31 × 1010 | 1.41 × 1011 | |
CFS | 4.5385 | 11.9049 | 9.2753 | 4.5385 | |
FA | 0.1062 | 0.2069 | 0.1578 | 0.0303 | |
ABC | 5.83 × 103 | 1.81 × 104 | 1.37 × 104 | 3.21 × 103 | |
FPA | 1.46 × 104 | 4.84 × 104 | 3.03 × 104 | 8.91 × 103 | |
CS | 2.09 × 103 | 5.50 × 103 | 3.83 × 103 | 8.82 × 102 | |
BFP | 9.06 × 104 | 1.43 × 105 | 1.18 × 105 | 1.63 × 104 | |
CFS | 1.14 × 104 | 3.03 × 104 | 1.95 × 104 | 5.71 × 103 | |
FA | 2.80 × 106 | 1.34 × 107 | 6.66 × 106 | 2.99 × 106 | |
ABC | 2.91 × 107 | 1.14 × 109 | 5.12 × 108 | 3.16 × 109 | |
FPA | 1.16 × 108 | 4.53 × 108 | 2.76 × 108 | 1.04 × 108 | |
CS | 1.17 × 106 | 4.58 × 106 | 2.42 × 106 | 8.49 × 105 | |
BFP | 1.71 × 109 | 4.60 × 109 | 2.70 × 109 | 8.34 × 108 | |
CFS | 0 | 7.88 × 10−14 | 1.63 × 10−14 | 2.32 × 10−14 | |
FA | 9.36 × 10−10 | 0.0336 | 0.0082 | 0.0106 | |
ABC | 0 | 0 | 0 | 0 | |
FPA | 2.38 × 10−5 | 0.0069 | 0.0012 | 0.0019 | |
CS | 2.37 × 10−8 | 2.39 × 10−4 | 3.22 × 10−5 | 7.18 × 10−5 | |
BFP | 2.19 × 10−2 | 4.86 × 10−1 | 3.33 × 10−1 | 1.36 × 10−1 |
FA | FPA | CS | ABC | CFS |
---|---|---|---|---|
6.79 × 10−8 | NA | 6.79 × 10−8 | 6.79 × 10−8 | 6.79 × 10−8 | |
NA | 6.79 × 10−8 | 6.79 × 10−8 | 6.79 × 10−8 | 6.79 × 10−8 | |
6.79 × 10−8 | 6.79 × 10−8 | 6.79 × 10−8 | 6.79 × 10−8 | NA |
8.00 × 10−9 | 8.00 × 10−9 | 8.00 × 10−9 | NA | NA |
Algorithm | Best | Worst | Mean | Standard Deviation |
---|---|---|---|---|
CFS | −2.78 × 104 | −2.36 × 104 | −2.60 × 104 | 1.07 × 103 | |
FA | −1.54 × 104 | −5.88 × 103 | −9.40 × 103 | 3.05 × 103 | |
ABC | −2.28 × 104 | −1.73 × 104 | −1.98 × 104 | 1.45 × 103 | |
FPA | −1.63 × 1019 | −1.24 × 1016 | −1.50 × 1018 | 3.85 × 1018 | |
CS | −1.05 × 104 | −9.41 × 103 | −1.00 × 104 | 2.79 × 102 | |
BFP | −5.75 × 108 | −4.56 × 103 | −5.87 × 107 | 1.73 × 108 | |
CFS | 32.0745 | 1.01 × 102 | 69.1336 | 19.4049 | |
FA | 14.1504 | 1.69 × 102 | 55.1173 | 40.4882 | |
ABC | 5.70 × 103 | 1.88 × 104 | 1.23 × 104 | 3.88 × 103 | |
FPA | 3.03 × 104 | 9.66 × 104 | 5.99 × 104 | 1.93 × 104 | |
CS | 1.38 × 104 | 2.45 × 104 | 1.69 × 104 | 2.59 × 103 | |
BFP | 1.61 × 105 | 3.16 × 105 | 2.53 × 105 | 4.57 × 104 | |
CFS | 6.22 × 103 | 3.48 × 104 | 2.11 × 104 | 6.55 × 103 | |
FA | 1.89 × 106 | 1.10 × 107 | 5.29 × 106 | 2.83 × 106 | |
ABC | 2.57 × 108 | 1.69 × 109 | 1.03 × 109 | 3.57 × 108 | |
FPA | 1.71 × 108 | 4.66 × 108 | 3.22 × 108 | 8.91 × 107 | |
CS | 1.26 × 106 | 6.12 × 106 | 2.69 × 106 | 1.06 × 106 | |
BFP | 1.92 × 109 | 4.22 × 109 | 2.87 × 108 | 2.87 × 109 | |
CFS | 2.22 × 10−16 | 2.83 × 10−13 | 2.49 × 10−14 | 6.35 × 10−14 | |
FA | 1.36 × 10−11 | 0.0667 | 0.0121 | 0.0164 | |
ABC | 0 | 0 | 0 | 0 | |
FPA | 1.34 × 10−6 | 0.0028 | 4.55 × 10−4 | 7.03 × 10−4 | |
CS | 1.80 × 10−8 | 6.88 × 10−5 | 1.43 × 10−5 | 2.11 × 10−5 | |
BFP | 3.10 × 10−2 | 4.92 × 10−1 | 3.30 × 10−1 | 1.42 × 10−1 |
FA | FPA | CS | ABC | CFS |
---|---|---|---|---|
6.79 × 10−8 | NA | 6.79 × 10−8 | 6.79 × 10−8 | 6.79 × 10−8 |
6.79 × 10−8 | 6.79 × 10−8 | 6.79 × 10−8 | 6.79 × 10−8 | NA |
6.79 × 10−8 | 6.79 × 10−8 | 6.79 × 10−8 | 6.79 × 10−8 | NA |
8.00 × 10−9 | 8.00 × 10−9 | 8.00 × 10−9 | NA | 7.97 × 10−9 |
Multimodal Test Problems | Search Range | Optimum Value | D
---|---|---|---
Rastrigin function | [−5.12, 5.12] | 0 | 30, 50, 100
Weierstrass function (a = 0.5, b = 3, kmax = 20) | [−0.5, 0.5] | 0 | 30, 50, 100
Griewank function | [−600, 600] | 0 | 30, 50, 100
Penalized 1 function (with penalty term u(xi, 10, 100, 4)) | [−50, 50] | 0 | 30, 50, 100
Penalized 2 function (with penalty term u(xi, a, k, m)) | [−50, 50] | 0 | 30, 50, 100
Ackley function | [−100, 100] | 0 | 30, 50, 100
Algorithm | Best | Worst | Mean | Standard Deviation |
---|---|---|---|---|
CFS | 7.53 × 10−13 | 3.59 × 10−9 | 7.46 × 10−10 | 9.59 × 10−10 | |
FA | 2.07 × 10−9 | 0.339 | 0.0226 | 0.0765 | |
ABC | 7.24 × 102 | 4.17 × 103 | 2.06 × 103 | 9.69 × 102 | |
FPA | 0.0017 | 0.2469 | 0.0595 | 0.063 | |
CS | 8.00 × 102 | 1.86 × 103 | 1.16 × 103 | 3.00 × 102 | |
BFP | 4.00 × 10−3 | 1.70 × 101 | 8.13 × 10 | 3.16 × 10 | |
CFS | 1.9004 | 2.6108 | 2.3049 | 0.2186 | |
FA | 13.279 | 21.3048 | 16.8613 | 1.8416 | |
ABC | 11.1346 | 19.6264 | 15.5206 | 2.4429 | |
FPA | 35.7517 | 39.4474 | 37.5697 | 1.2557 | |
CS | 16.6273 | 23.8924 | 19.9146 | 1.9027 | |
BFP | 42.4496 | 50.6708 | 46.8925 | 2.1102 | |
CFS | 1.31 × 10−13 | 8.61 × 10−11 | 2.18 × 10−11 | 2.44 × 10−11 | |
FA | 2.25 × 10−7 | 1.48 × 10−5 | 4.01 × 10−6 | 3.54 × 10−6 | |
ABC | 3.5295 | 72.961 | 28.0289 | 16.7278 | |
FPA | 1.50 × 10−4 | 0.0874 | 0.0161 | 0.0231 | |
CS | 4.5803 | 18.3641 | 8.8762 | 3.1545 | |
BFP | 7.5279 | 1.65 × 102 | 7.69 × 101 | 4.51 × 101 | |
CFS | 0.187 | 4.4737 | 0.5345 | 0.9324 | |
FA | 0.0013 | 0.133 | 0.0167 | 0.0287 | |
ABC | 8.04 × 106 | 1.68 × 108 | 6.93 × 107 | 4.39 × 107 | |
FPA | 5.42 × 105 | 3.37 × 107 | 8.71 × 106 | 8.63 × 106 | |
CS | 9.846 | 81.0669 | 24.5174 | 15.5959 | |
BFP | 3.63 × 108 | 9.02 × 108 | 5.82 × 108 | 1.64 × 108 |
CFS | 0.0086 | 0.0873 | 0.0286 | 0.0015 | |
FA | 0.0059 | 0.0619 | 0.0119 | 0.0031 | |
ABC | 2.41 × 107 | 3.19 × 108 | 1.49 × 108 | 8.95 × 107 | |
FPA | 1.38 × 107 | 1.09 × 108 | 4.99 × 107 | 2.45 × 107 | |
CS | 51.2308 | 1.83 × 105 | 3.13 × 104 | 5.18 × 104 | |
BFP | 4.59 × 108 | 1.65 × 109 | 1.13 × 109 | 3.41 × 108 | |
CFS | 0.5159 | 0.879 | 0.6915 | 0.0111 | |
FA | 0.1462 | 0.469 | 0.261 | 0.0745 | |
ABC | 6.0052 | 14.9839 | 11.0513 | 2.6699 | |
FPA | 14.0678 | 19.0195 | 17.4088 | 1.0373 | |
CS | 10.0393 | 17.1319 | 13.0073 | 1.6329 | |
BFP | 19.8877 | 20.849 | 20.5093 | 0.2645 |
FA | FPA | CS | ABC | BFP | CFS |
---|---|---|---|---|---|
1.23 × 10−7 | 1.23 × 10−7 | 6.79 × 10−8 | 6.79 × 10−8 | 6.79 × 10−8 | NA | |
6.79 × 10−8 | 6.79 × 10−8 | 6.79 × 10−8 | 6.79 × 10−8 | 6.79 × 10−8 | NA | |
6.79 × 10−8 | 6.79 × 10−8 | 6.79 × 10−8 | 6.79 × 10−8 | 6.79 × 10−8 | NA | |
NA | 6.79 × 10−8 | 6.79 × 10−8 | 6.79 × 10−8 | 6.79 × 10−8 | 6.79 × 10−8 | |
NA | 6.79 × 10−8 | 6.79 × 10−8 | 6.79 × 10−8 | 6.79 × 10−8 | 7.57 × 10−4 | |
NA | 6.79 × 10−8 | 6.79 × 10−8 | 6.79 × 10−8 | 6.79 × 10−8 | NA |
Algorithm | Best | Worst | Mean | Standard Deviation |
---|---|---|---|---|
CFS | 7.88 × 10−10 | 3.30 × 10−9 | 7.39 × 10−10 | 9.49 × 10−10 | |
FA | 1.85 × 10−9 | 0.1989 | 0.0109 | 0.0444 |
ABC | 7.08 × 103 | 2.62 × 104 | 1.74 × 104 | 6.25 × 103 |
FPA | 0.0083 | 0.323 | 0.1085 | 0.1053 |
CS | 3.08 × 103 | 5.85 × 103 | 4.34 × 103 | 8.13 × 102 |
BFP | 1.51 × 100 | 1.98 × 101 | 1.09 × 101 | 5.08 × 100 |
CFS | 4.4288 | 6.3445 | 5.6194 | 0.5645 | |
FA | 28.8712 | 39.2514 | 33.3655 | 3.1566 | |
ABC | 32.506 | 46.6375 | 38.085 | 3.2999 | |
FPA | 66.9832 | 74.2188 | 70.8866 | 2.0856 | |
CS | 33.8792 | 46.2211 | 39.4868 | 3.1594 | |
BFP | 68.1743 | 89.0548 | 80.9581 | 5.9281 | |
CFS | 9.55 × 10−14 | 3.91 × 10−10 | 4.20 × 10−11 | 9.14 × 10−11 | |
FA | 1.31 × 10−9 | 0.0038 | 1.91 × 10−4 | 8.44 × 10−4 | |
ABC | 45.2242 | 3.06 × 102 | 1.84 × 102 | 67.741 | |
FPA | 0.0017 | 0.0634 | 0.0144 | 0.0174 | |
CS | 21.4091 | 67.0806 | 38.1323 | 9.8239 | |
BFP | 1.5237 | 1.77 × 102 | 8.07 × 101 | 6.33 × 101 | |
CFS | 1.4169 | 11.1903 | 4.2718 | 2.5936 | |
FA | 0.0061 | 2.0945 | 0.4204 | 0.5428 | |
ABC | 8.05 × 106 | 1.07 × 108 | 5.58 × 107 | 2.92 × 107 |
FPA | 4.61 × 105 | 9.02 × 107 | 2.58 × 107 | 2.07 × 107 |
CS | 65.7835 | 8.56 × 105 | 6.28 × 104 | 1.90 × 105 | |
BFP | 2.73 × 108 | 1.33 × 109 | 9.55 × 108 | 3.23 × 108 | |
CFS | 0.0809 | 1.446 | 0.4249 | 0.0559 | |
FA | 0.0075 | 0.3539 | 0.0444 | 0.0817 | |
ABC | 8.40 × 107 | 2.33 × 108 | 1.52 × 108 | 4.62 × 107 | |
FPA | 3.80 × 107 | 3.59 × 108 | 1.24 × 108 | 7.85 × 107 | |
CS | 1.47 × 105 | 1.09 × 106 | 5.76 × 105 | 2.86 × 105 | |
BFP | 4.59 × 108 | 1.65 × 109 | 1.13 × 109 | 3.41 × 109 | |
CFS | 2.3662 | 18.0276 | 9.7138 | 4.4605 | |
FA | 0.3074 | 0.8458 | 0.3074 | 0.1439 | |
ABC | 10.5978 | 18.3258 | 15.8443 | 1.9489 | |
FPA | 16.287 | 18.9147 | 17.6333 | 0.7004 | |
CS | 10.3972 | 17.8387 | 13.9978 | 1.9681 | |
BFP | 19.5794 | 20.8868 | 20.6361 | 0.3188 |
FA | FPA | CS | ABC | BFP | CFS |
---|---|---|---|---|---|
1.06 × 10−7 | 6.79 × 10−8 | 6.79 × 10−8 | 6.79 × 10−8 | 6.79 × 10−8 | NA | |
6.79 × 10−8 | 6.79 × 10−8 | 6.79 × 10−8 | 6.79 × 10−8 | 6.79 × 10−8 | NA | |
6.79 × 10−8 | 6.79 × 10−8 | 6.79 × 10−8 | 6.79 × 10−8 | 6.79 × 10−8 | NA | |
NA | 6.79 × 10−8 | 6.79 × 10−8 | 6.79 × 10−8 | 6.79 × 10−8 | 6.79 × 10−8 | |
NA | 6.79 × 10−8 | 6.79 × 10−8 | 6.79 × 10−8 | 6.79 × 10−8 | 9.12 × 10−7 | |
NA | 6.79 × 10−8 | 6.79 × 10−8 | 6.79 × 10−8 | 6.79 × 10−8 | 6.79 × 10−8 |
Algorithm | Best | Worst | Mean | Standard Deviation |
---|---|---|---|---|
CFS | 3.21 × 10−10 | 2.21 × 10−9 | 7.12 × 10−10 | 7.73 × 10−10 | |
FA | 1.49 × 10−8 | 1.91 × 10−6 | 3.98 × 10−7 | 5.25 × 10−7 | |
ABC | 8.09 × 104 | 1.34 × 105 | 1.05 × 105 | 1.44 × 104 | |
FPA | 0.0033 | 0.2715 | 0.0813 | 0.0755 | |
CS | 1.22 × 104 | 2.03 × 104 | 1.65 × 104 | 2.37 × 103 | |
BFP | 2.10 × 10−3 | 1.49 × 101 | 8.27 × 100 | 4.11 × 100 |
CFS | 16.4301 | 22.9819 | 18.3616 | 1.4303 | |
FA | 67.1686 | 81.6839 | 74.2381 | 4.1565 | |
ABC | 1.03 × 102 | 1.25 × 102 | 1.16 × 102 | 6.363 | |
FPA | 1.23 × 102 | 1.62 × 102 | 1.53 × 102 | 9.3169 | |
CS | 81.7266 | 96.5011 | 89.2076 | 4.8649 | |
BFP | 1.47 × 102 | 1.86 × 102 | 1.72 × 102 | 1.03 × 101 | |
CFS | 9.20 × 10−14 | 3.20 × 10−10 | 4.46 × 10−11 | 7.82 × 10−11 | |
FA | 1.11 × 10−6 | 6.13 × 10−6 | 1.83 × 10−6 | 1.65 × 10−6 | |
ABC | 6.68 × 102 | 1.08 × 103 | 8.83 × 102 | 1.15 × 102 | |
FPA | 0.003 | 0.071 | 0.022 | 0.0212 | |
CS | 1.06 × 102 | 1.91 × 102 | 1.41 × 102 | 22.8388 | |
BFP | 1.0051 | 1.60 × 102 | 5.98 × 101 | 4.05 × 101 | |
CFS | 18.9008 | 1.86 × 102 | 50.7091 | 35.0491 | |
FA | 9.0521 | 48.0274 | 28.2233 | 10.0771 | |
ABC | 3.42 × 106 | 7.72 × 107 | 3.44 × 107 | 1.94 × 107 | |
FPA | 3.35 × 107 | 1.78 × 107 | 8.73 × 107 | 4.64 × 107 | |
CS | 2.67 × 104 | 4.32 × 106 | 7.91 × 105 | 9.75 × 105 | |
BFP | 7.01 × 108 | 3.61 × 109 | 2.49 × 109 | 8.69 × 108 | |
CFS | 1.7343 | 5.7693 | 3.5451 | 1.1998 | |
FA | 2.3704 | 9.5091 | 4.5693 | 1.5113 | |
ABC | 3.89 × 107 | 2.01 × 108 | 1.07 × 108 | 4.86 × 108 | |
FPA | 5.12 × 107 | 7.27 × 108 | 3.17 × 108 | 2.03 × 108 | |
CS | 2.86 × 106 | 2.04 × 107 | 7.63 × 106 | 5.15 × 106 | |
BFP | 1.64 × 109 | 6.02 × 109 | 4.75 × 109 | 1.40 × 109 | |
CFS | 4.5948 | 19.4525 | 12.4022 | 3.9196 | |
FA | 1.3412 | 3.5494 | 2.7283 | 0.5434 | |
ABC | 18.2683 | 19.7601 | 19.1378 | 0.3621 | |
FPA | 16.6528 | 19.686 | 18.2081 | 0.8273 | |
CS | 13.5684 | 17.8497 | 15.6218 | 1.4927 | |
BFP | 20.0389 | 21.0637 | 20.7694 | 0.2923 |
FA | FPA | CS | ABC | BFP | CFS |
---|---|---|---|---|---|
6.79 × 10−8 | 6.79 × 10−8 | 6.79 × 10−8 | 6.79 × 10−8 | 6.79 × 10−8 | NA | |
6.79 × 10−8 | 6.79 × 10−8 | 6.79 × 10−8 | 6.79 × 10−8 | 6.79 × 10−8 | NA | |
6.79 × 10−8 | 6.79 × 10−8 | 6.79 × 10−8 | 6.79 × 10−8 | 6.79 × 10−8 | NA | |
NA | 6.79 × 10−8 | 6.79 × 10−8 | 6.79 × 10−8 | 6.79 × 10−8 | 6.79 × 10−8 | |
0.0439 | 6.79 × 10−8 | 6.79 × 10−8 | 6.79 × 10−8 | 6.79 × 10−8 | NA | |
NA | 6.79 × 10−8 | 6.79 × 10−8 | 6.79 × 10−8 | 6.79 × 10−8 | 6.79 × 10−8 |
Fixed Dimension Test Problems | Search Range | Optimum Value | D
---|---|---|---
Branin RCOS function | x1 ∈ [−5, 10], x2 ∈ [0, 15] | 0.397887 | 2
Six Hump Camel function | [−5, 5] | −1.0316 | 2
Goldstein & Price function | [−2, 2] | 3 | 2
Hartmann function 3 | [0, 1] | −3.86278 | 3
Hartmann function 6 | [0, 1] | −3.32237 | 6
Shekel 5 | [0, 10] | −10.1532 | 4
Shekel 7 | [0, 10] | −10.4029 | 4
Shekel 10 | [0, 10] | −10.5364 | 4
Easom function | [−10, 10] | −1 | 2
Algorithm | Best | Worst | Mean | Standard Deviation |
---|---|---|---|---|
CFS | 0.3979 | 0.3979 | 0.3979 | 2.19 × 10−11 | |
FA | 0.3979 | 0.3979 | 0.3979 | 1.30 × 10−8 | |
ABC | 0 | 0 | 0 | 0 | |
FPA | 0.3979 | 0.3983 | 0.398 | 9.64 × 10−5 | |
CS | 0.3979 | 0.3979 | 0.3979 | 5.32 × 10−8 | |
BFP | 0.4416 | 5.3576 | 3.0721 | 1.63 × 100 |
CFS | −1.0316 | −1.0316 | −1.0316 | 1.66 × 10−10 | |
FA | −1.0316 | −1.0315 | −1.0316 | 3.47 × 10−5 |
ABC | −1.0316 | −1.0250 | −1.0310 | 0.0015 |
FPA | −1.0316 | −1.0316 | −1.0316 | 1.24 × 10−5 |
CS | −1.0316 | −1.0316 | −1.0316 | 8.22 × 10−11 |
BFP | −0.9884 | 4.4587 | 0.1862 | 1.50 × 100 |
CFS | 3 | 3 | 3 | 1.54 × 10−12 | |
FA | 3 | 3 | 3 | 1.51 × 10−7 | |
ABC | 3.0004 | 3.0531 | 3.0107 | 0.0148 | |
FPA | 3 | 3.0015 | 3.0004 | 4.73 × 10−4 | |
CS | 3 | 3 | 3 | 9.86 × 10−9 | |
BFP | 3.3525 | 98.258 | 47.458 | 3.46 × 101 | |
CFS | −3.8628 | −3.8628 | −3.8628 | 6.56 × 10−12 | |
FA | −3.8628 | −2.1968 | −3.3064 | 0.6077 | |
ABC | −3.8628 | −3.8621 | −3.8626 | 2.17 × 10−4 | |
FPA | −3.8325 | −1.5171 | −3.3253 | 0.6709 | |
CS | −3.8628 | −3.8628 | −3.8628 | 1.13 × 10−8 | |
BFP | −0.5359 | −3.25 × 10−6 | −0.0814 | 1.65 × 10−1 |
CFS | −3.3224 | −3.3224 | −3.3224 | 3.74 × 10−7 | |
FA | −3.3224 | −3.0639 | −3.2469 | 9.32 × 10−2 | |
ABC | −3.3223 | −3.1954 | −3.2461 | 0.059 | |
FPA | −3.2275 | −2.9663 | −3.1345 | 0.0702 | |
CS | −3.3223 | −3.3140 | −3.3201 | 0.0028 | |
BFP | −2.6298 | −0.7595 | −1.6330 | 0.5693 | |
CFS | −10.1532 | −10.1532 | −10.1532 | 5.74 × 10−5 | |
FA | −5.0552 | −5.0552 | −5.0552 | 1.03 × 10−8 | |
ABC | −10.1486 | −2.6075 | −5.5322 | 3.4454 | |
FPA | −5.0546 | −5.0419 | −5.0513 | 0.0033 | |
CS | −10.0826 | −9.3309 | −10.0826 | 0.1799 | |
BFP | −3.9584 | −1.2893 | −2.3915 | 0.8119 | |
CFS | −10.4029 | −10.4029 | −10.4029 | 1.33 × 10−4 | |
FA | −5.0877 | −5.0877 | −5.0877 | 9.13 × 10−9 | |
ABC | −10.5359 | −2.4206 | −5.3332 | 3.1817 | |
FPA | −5.0864 | −5.0771 | −5.0837 | 0.0025 | |
CS | −10.5358 | −7.3868 | −10.3006 | 0.6974 |
BFP | −4.4980 | −1.6336 | −2.6498 | 0.8739 | |
CFS | −10.5364 | −10.5364 | −10.5364 | 1.87 × 10−6 | |
FA | −5.1285 | −5.1285 | −5.1285 | 9.16 × 10−9 | |
ABC | −10.4895 | −1.8556 | −4.6289 | 3.0032 | |
FPA | −5.1279 | −5.1185 | −5.1244 | 0.0028 | |
CS | −10.5357 | −9.8686 | −10.4320 | 0.1724 | |
BFP | −4.3369 | −1.6523 | −2.5939 | 0.7823 | |
CFS | −1 | −1.0000 | −1.0000 | 6.07 × 10−14 | |
FA | −1.0000 | −1.0000 | −1.0000 | 1.26 × 10−8 | |
ABC | −1.0000 | −0.9886 | −0.9977 | 0.0029 | |
FPA | −1.0000 | −0.9998 | −0.9999 | 6.79 × 10−5 | |
CS | −1.0000 | −1.0000 | −1.0000 | 5.30 × 10−10 | |
BFP | −0.5894 | −2.18 × 10−13 | −0.0574 | 1.51 × 10−1 |
FA | FPA | CS | ABC | BFP | CFS |
---|---|---|---|---|---|
7.89 × 10−8 | 7.89 × 10−8 | 9.17 × 10−8 | 8.00 × 10−9 | 6.79 × 10−8 | NA | |
6.79 × 10−8 | 6.79 × 10−8 | 0.0679 | 6.79 × 10−8 | 6.79 × 10−8 | NA | |
6.79 × 10−8 | 6.79 × 10−8 | 6.79 × 10−8 | 6.79 × 10−8 | 6.79 × 10−8 | NA | |
6.79 × 10−8 | 6.79 × 10−8 | 7.89 × 10−8 | 6.79 × 10−8 | 6.79 × 10−8 | NA | |
0.1895 | 6.79 × 10−8 | 6.79 × 10−8 | 6.79 × 10−8 | 6.79 × 10−8 | NA | |
6.79 × 10−8 | 6.79 × 10−8 | 6.79 × 10−8 | 6.79 × 10−8 | 6.79 × 10−8 | NA | |
6.79 × 10−8 | 6.79 × 10−8 | 0.0012 | 1.60 × 10−4 | 6.79 × 10−8 | NA | |
6.79 × 10−8 | 6.79 × 10−8 | 6.79 × 10−8 | 6.79 × 10−8 | 6.79 × 10−8 | NA | |
6.79 × 10−8 | 6.79 × 10−8 | 6.79 × 10−8 | 6.79 × 10−8 | 6.79 × 10−8 | NA |
Algorithm | Parameters | Value
---|---|---
CFS | Population size | 50 for XOR and Balloon; 20 for the rest
 | Probability switch | 0.8
 | Discovery rate of alien egg (pa) | 0.25
 | Maximum number of iterations | 250
Classification Datasets | Attributes Count | Training Samples Count | Test Samples Count | Number of Classes
---|---|---|---|---
3-bit XOR | 3 | 8 | 8 (same as the training samples) | 2
Balloon | 4 | 16 | 16 (same as the training samples) | 2
Iris | 4 | 150 | 150 (same as the training samples) | 3
Breast Cancer | 9 | 599 | 100 | 2
Heart | 22 | 80 | 187 | 2
Classification Datasets | Attributes Count | MLP Structure |
---|---|---|
3-bit XOR | 3 | 3−7−1 |
Balloon | 4 | 4−9−1 |
Iris | 4 | 4−9−3 |
Breast Cancer | 9 | 9−19−1 |
Heart | 22 | 22−45−1 |
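As a quick sanity check on the structures above, the dimension of the search space for each dataset can be counted as (inputs × hidden) + hidden + (hidden × outputs) + outputs, assuming one bias per hidden and output node:

```python
def n_parameters(n_in, n_hidden, n_out):
    """Number of weights and biases to optimize for an I-H-O MLP."""
    return n_in * n_hidden + n_hidden + n_hidden * n_out + n_out

structures = {"XOR": (3, 7, 1), "Balloon": (4, 9, 1), "Iris": (4, 9, 3),
              "Breast Cancer": (9, 19, 1), "Heart": (22, 45, 1)}
for name, (i, h, o) in structures.items():
    print(f"{name}: {n_parameters(i, h, o)} parameters")
# XOR: 36, Balloon: 55, Iris: 75, Breast Cancer: 210, Heart: 1081
```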
Algorithm | Average | Standard Deviation |
---|---|---|
CFS−MLP | 9.687 × 10−12 | 2.520 × 10−11 |
GWO−MLP | 9.410 × 10−3 | 2.950 × 10−1 |
PSO−MLP | 8.405 × 10−2 | 3.594 × 10−2 |
GA−MLP | 1.810 × 10−4 | 4.130 × 10−4 |
ACO−MLP | 1.803 × 10−1 | 2.526 × 10−2 |
ES−MLP | 1.187 × 10−1 | 1.157 × 10−2 |
PBIL−MLP | 3.022 × 10−2 | 3.966 × 10−2 |
WOA−MLP | 8.420 × 10−2 | 5.140 × 10−2 |
MFO−MLP | 5.298 × 10−6 | 1.038 × 10−5 |
Algorithm | Average | Standard Deviation |
---|---|---|
CFS−MLP | 1.19 × 10−41 | 1.90 × 10−41 |
GWO−MLP | 9.38 × 10−15 | 2.81 × 10−14 |
PSO−MLP | 0.000585 | 0.000749 |
GA−MLP | 5.08 × 10−24 | 1.06 × 10−23 |
ACO−MLP | 0.004854 | 0.00776 |
ES−MLP | 0.019055 | 0.17026 |
PBIL−MLP | 2.49 × 10−5 | 5.27 × 10−5 |
WOA−MLP | 4.88 × 10−6 | 1.41 × 10−5 |
MFO−MLP | 1.85 × 10−15 | 6.18 × 10−15 |
Algorithm | Average | Standard Deviation |
---|---|---|
CFS−MLP | 0.06673 | 5.31 × 10−4 |
GWO−MLP | 0.0229 | 0.0032 |
PSO−MLP | 0.22868 | 0.057235 |
GA−MLP | 0.089912 | 0.123638 |
ACO−MLP | 0.405979 | 0.053775 |
ES−MLP | 0.31434 | 0.052142 |
PBIL−MLP | 0.116067 | 0.036355 |
WOA−MLP | 0.734134 | 0.051808 |
MFO−MLP | 0.667957 | 0.003467 |
Algorithm | Average | Standard Deviation |
---|---|---|
CFS−MLP | 0.0018 | 2.83 × 10−4 |
GWO−MLP | 0.0012 | 7.44 × 10−5 |
PSO−MLP | 0.034881 | 0.002472 |
GA−MLP | 0.003026 | 0.0015 |
ACO−MLP | 0.01351 | 0.002137 |
ES−MLP | 0.04032 | 0.00247 |
PBIL−MLP | 0.032009 | 0.003065 |
WOA−MLP | 0.006243 | 0.003128 |
MFO−MLP | 0.004038 | 0.003041 |
Algorithm | Average | Standard Deviation |
---|---|---|
CFS−MLP | 0.0686 | 0.0067 |
GWO−MLP | 0.1226 | 0.0077 |
PSO−MLP | 0.188568 | 0.008939 |
GA−MLP | 0.093047 | 0.02246 |
ACO−MLP | 0.22843 | 0.004979 |
ES−MLP | 0.192473 | 0.015174 |
PBIL−MLP | 0.154096 | 0.018204 |
WOA−MLP | 0.179664 | 0.052152 |
MFO−MLP | 0.08321 | 0.02062 |