“Optimizing the Optimization”: A Hybrid Evolutionary-Based AI Scheme for Optimal Performance
Abstract
1. Introduction
Motivation
2. Materials and Methods
2.1. Grouping and Normalization of Unique Initial Values
- Objective Function: The function to be solved is common to all algorithms, since it is the basis on which every optimization algorithm compares solutions. The algorithms evaluate the quality of solutions by their ability to minimize or maximize the objective function, and every process within each algorithm is assessed through it in each iteration.
- Problem Dimensions: The number of dimensions of the objective function is common to all algorithms. A problem can have from 1 to N dimensions, each representing an independent variable determined by the function’s form and the solving objective.
- Value Range: The solution space defined by the objective function has a predetermined range within which the algorithms search for candidate solutions. This range is set by constraints imposed by the objective function itself or by additional constraints, and it can vary from a very narrow interval to infinity. It is described by a set of minimum and maximum values, one pair per dimension; depending on the problem type and constraints, these minima and maxima may differ for each independent variable.
- Number of Iterations: One of the essential optimization variables is the total number of iterations an algorithm performs before stopping and returning the best values identified up to that point. Since these algorithms are stochastic and the ideal solution is often unknown, the number of iterations is one of the most decisive factors for convergence. In this study, all algorithms must run for the same effective number of iterations; because each algorithm has a different architecture, iterations are measured by the number of objective function evaluations.
- Population of Solutions: Each algorithm initializes and maintains a predetermined number of candidate solutions. Although the solutions themselves change to facilitate convergence toward an optimum, the population size remains constant. Population size is equally important because it correlates with the speed and quality of convergence, especially as the difficulty imposed by the objective function increases.
- Beta: This value determines the rate of selecting individuals from the solution set for mutation. Due to its influence on expanding the search in the solution space, this value corresponds to “Exploration”.
- PC: The term “PC” stands for “Population to Children”, i.e., the ratio of the number of “offspring” (new values generated in each iteration) to the total population. This variable determines the percentage of the value set that is replaced, and thus corresponds to “Exploitation”.
- Gamma: The Gamma variable determines the intensity of crossover between the parameterized values. As this parameterization affects the way the solution space is explored, this term corresponds to “Exploration”.
- MU: MU determines the percentage of features parameterized in each iteration for each chromosome. In other words, this value determines how much each chromosome to be parameterized changes. This value corresponds to the “Mutation Rate”.
- Sigma: The last parameter that characterizes only GAs is Sigma. This parameter determines the size of parameterization for each chromosome to be parameterized, depending on MU. This variable also corresponds to the “Mutation Rate”.
- Alpha: This variable determines the degree of randomization with which the fireflies that do not have the highest “intensity” (fitness per the objective function) move. Their degree of change is therefore partly determined by this parameter, so it corresponds to the “Mutation Rate”.
- Beta: This term determines how strongly fireflies are attracted to the “radiance” of the “brightest” firefly, in other words the better solution. The larger this term, the stronger the pull the best fireflies exert on the others, drawing them closer and thereby increasing the “Exploitation” characteristic of the algorithm.
- Gamma: This term collaborates with the Beta value and determines the degree to which the remaining fireflies react to this attraction, or how much light they allow to be absorbed and influence them. Conversely, this term determines the “Exploration”.
- Harmony Memory Considering Rate (HMCR): With this term, the algorithm decides how many of the stored values (melodies) in the solution population it will reuse per iteration, compared with newly generated ones. This determines the size of the solution space in which the algorithm searches for new solutions, and it corresponds to “Exploration”.
- Pitch Adjusting Rate (PAR): This variable characterizes the maximum allowable differentiation that an existing solution can have when it is deemed necessary to parameterize. In short, with the term PAR, it is determined how close a new solution will appear to the existing solution, thus determining the “Mutation Rate”.
- Maximum Pitch Adjustment Proportion/Index (MPAP/MPAI): These two parameters have essentially the same influence on a harmony search algorithm; one is used when the function to be solved is continuous, the other when the function takes discrete values. In practice, they define the range of adjustment that all perturbed values (melodies) undergo in each iteration, so they directly affect “Exploitation”.
- Exploration Degree:
  - Beta, Gamma (Genetic Algorithm)
  - Gamma (Firefly Algorithm)
  - HMCR (Harmony Search Algorithm)
- Exploitation Degree:
  - PC (Genetic Algorithm)
  - Beta (Firefly Algorithm)
  - MPAP/MPAI (Harmony Search Algorithm)
- Mutation Rate:
  - MU, Sigma (Genetic Algorithm)
  - Alpha (Firefly Algorithm)
  - PAR (Harmony Search Algorithm)
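As a concrete illustration of these correspondences, the sketch below shows how the three compressed tuning values (exploration, exploitation, mutation rate) could be translated into each player algorithm’s native parameters. The function name, dictionary layout, and the assumption of a direct one-to-one assignment are illustrative only and do not reproduce the authors’ implementation.

```python
# Illustrative mapping of the three compressed tuning values onto each player
# algorithm's native parameters, following the correspondences listed above.
# Names and the one-to-one scaling are assumptions for demonstration only.

def map_compressed_values(exploration, exploitation, mutation_rate):
    """Translate the leader's compressed values into per-algorithm parameters."""
    return {
        "genetic_algorithm": {
            "beta": exploration,      # exploration
            "gamma": exploration,     # exploration
            "pc": exploitation,       # exploitation
            "mu": mutation_rate,      # mutation rate
            "sigma": mutation_rate,   # mutation rate
        },
        "firefly_algorithm": {
            "gamma": exploration,     # exploration
            "beta": exploitation,     # exploitation
            "alpha": mutation_rate,   # mutation rate
        },
        "harmony_search": {
            "hmcr": exploration,      # exploration
            "mpap": exploitation,     # exploitation
            "par": mutation_rate,     # mutation rate
        },
    }

params = map_compressed_values(exploration=0.5, exploitation=0.5, mutation_rate=0.5)
print(params["firefly_algorithm"])
```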
2.2. Algorithmic Scheme
2.2.1. Objective Functions
2.2.2. Architecture of the Algorithmic Scheme
2.2.3. Initial Values
- Problem dimensions (independent variables) (e.g., d = 2);
- Minimum value in the domain (where Min = [value] * d, creating a minimum for each dimension);
- Maximum value in the domain (where Max = [value] * d, creating a maximum for each dimension);
- Number of iterations (e.g., iterations = 20);
- Population size (e.g., population = 1000);
- Exploration rate (e.g., exploration = 0.5);
- Exploitation rate (e.g., exploitation = 0.5);
- Mutation rate (e.g., mutation_rate = 0.5).
2.2.4. Special Values
- Optimal solution: The optimal solution is the best-performing value returned by the best algorithm compared with the others, e.g., for a problem with two independent variables, the point (1, 1).
- Fitness of the optimal solution (Cost): Fitness is defined as the value of the objective function at the point of the optimal solution, e.g., Cost = 2 at the point (1, 1) for the given objective function.
- Selection index: To allow the leader to determine which player performs better or worse, there must be a logical (true/false) index that, in each iteration of the mixed scheme, takes the appropriate value according to the players’ performance. This selection index is incorporated into the compressed single variable fed to the algorithms, so that each algorithm knows whether to start its next optimization attempt from a single better solution or from an entire population.
- Distributed performance list: To give feedback to the user during execution of the mixed scheme and to return the optimal solution at every level (optimal cost, optimal position), a distributed performance list is created. This list ranks the algorithms from best to worst and returns the corresponding index for each.
2.2.5. Objective Function Evaluations
2.3. Programming Environment
2.4. Hybrid Scheme Rules and Programming
- All algorithms should “accept” and “perceive” the values of the domain in the same way. This means that every function fed into the algorithms, where xi represents the value from the domain at each time instance, must use a common variable format for both the leader algorithm and the players. The format chosen is the NumPy array, a numeric array type provided by the NumPy library, a widely used, standalone numerical library for Python. This choice ensures input uniformity based on a library that is independent of any individual algorithm implementation.
- All algorithms should output the optimal value from the domain and the full solution population as NumPy arrays. This allows the leader algorithm to manage, compare, and present the data independently of how they are generated inside the individual player algorithms.
- All algorithms should be able to assign initial and special values appropriately, according to the correspondences made in their theoretical study.
- All algorithms should be able to extract the same parameters in the same format. Specifically, the output values for all of the algorithms are as follows: (i) optimal input value, (ii) optimal value of the objective function, and (iii) total population at the last iteration.
- All algorithms should parameterize the value of the maximum iterations given to them, according to the number of evaluations of the objective function they perform in total, so that all of the algorithms evaluate the objective function the same number of times in each “call”.
- All algorithms should be able to accept the objective function encoded in the same way.
- The algorithms should be in the form of a library, which will be imported by the leader, who, in turn, can call on the execution method, providing the compressed initial and special values and the objective function as input variables.
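The sketch below illustrates, under assumed names, what a player “library” obeying these rules might look like: it accepts the objective function and the compressed values as NumPy-based inputs, scales its iteration count by the number of objective function evaluations it performs per iteration (so that all players consume the same evaluation budget), and returns the three required outputs. The update step is a deliberately trivial placeholder, not any of the four studied algorithms.

```python
import numpy as np

def sphere(x):
    """Example objective function, encoded the same way for all players."""
    return float(np.sum(np.asarray(x, dtype=float) ** 2))

def run_player(objective, config, eval_budget, evals_per_iteration):
    """Hypothetical player entry point obeying the interface rules above."""
    rng = np.random.default_rng(0)
    d = config["dimensions"]
    low = np.asarray(config["minimum"], dtype=float)
    high = np.asarray(config["maximum"], dtype=float)

    # Normalize the iteration count by the number of objective-function
    # evaluations this algorithm performs per iteration (equal-budget rule).
    iterations = eval_budget // evals_per_iteration

    population = rng.uniform(low, high, size=(config["population"], d))
    for _ in range(iterations):
        costs = np.array([objective(ind) for ind in population])
        best = population[np.argmin(costs)]
        # Placeholder update step: random perturbation around the best solution.
        population = np.clip(best + rng.normal(0.0, 0.1, size=population.shape), low, high)

    costs = np.array([objective(ind) for ind in population])
    best_idx = int(np.argmin(costs))
    # Required outputs: optimal input value, optimal objective value, final population.
    return population[best_idx], float(costs[best_idx]), population

best_x, best_cost, final_pop = run_player(
    sphere,
    config={"dimensions": 2, "minimum": [-10, -10], "maximum": [10, 10], "population": 50},
    eval_budget=1000,
    evals_per_iteration=50,
)
print(best_x, best_cost)
```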
3. Results
3.1. Testing Optimization Algorithms Individually
3.1.1. Sphere Reference Function
3.1.2. Schwefel Reference Function
3.1.3. Alpine n.2 Reference Function
3.2. Testing the Hybrid Scheme
4. Discussion
4.1. Evaluation Summary for Individual Algorithms
- Genetic Algorithm: Exceptional ability to solve the majority of optimization problems in a very short time.
- Firefly Algorithm: Good problem-solving ability, but high instability depending on the combination of initial values. Solution accuracy rarely exceeds 10⁻², and it struggles to converge when the optimal value is close to 0.
- Harmony Search Algorithm: Its agnostic approach, i.e., the way it globally evaluates the entire domain population it maintains, requires an exceptionally large number of iterations to solve a problem. Nevertheless, its solutions converge to the desired target with very high accuracy.
- Black Hole Algorithm: Its inability to adjust its reaction method makes it extremely unstable, to the point of failing to converge in some cases. Its results show a high degree of randomness regardless of the initial parameter values provided, and its instability grows as the domain of definition expands.
4.2. Comparing the Hybrid Scheme to Individual Algorithms
4.3. Identified Breakthrough
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
| Function | Objective Function | Value Range | Optimized Result |
|---|---|---|---|
| Sphere | | | |
| Schwefel | | | |
| Alpine N.2 | | | |
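The formula cells of this table did not survive extraction. For orientation only, the standard literature forms of the Sphere and Alpine N.2 benchmarks are reproduced below; they are consistent with the optima reported in the results tables (0 for Sphere, about 7.885 for Alpine N.2 at d = 2), but the exact variants, value ranges, and optima used by the authors, including which Schwefel variant was tested, are not recoverable here and remain assumptions.

```latex
% Standard forms from the benchmark literature (assumed; not recovered from the paper's table):
f_{\text{Sphere}}(\mathbf{x}) = \sum_{i=1}^{d} x_i^{2},
\qquad \min f = 0 \ \text{at}\ x_i = 0 .
\\[6pt]
f_{\text{Alpine N.2}}(\mathbf{x}) = \prod_{i=1}^{d} \sqrt{x_i}\,\sin(x_i),
\qquad \max f \approx 2.808^{\,d} \ \text{at}\ x_i \approx 7.917 .
```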
Dimensions—Independent Values = 2

| ITE | POP | GA | DEV-GA | FA | DEV-FA | HSA | DEV-HSA | BHA | DEV-BHA |
|---|---|---|---|---|---|---|---|---|---|
| 50 | 200 | 2.249 × 10⁻⁷ | 2.249 × 10⁻⁷ | 0.415 | 0.415 | 0.171 | 0.171 | 0.020 | 0.020 |
| | 400 | 2.990 × 10⁻⁷ | 2.990 × 10⁻⁷ | 0.340 | 0.340 | 0.018 | 0.018 | 0.343 | 0.343 |
| | 1000 | 2.599 × 10⁻⁸ | 2.599 × 10⁻⁸ | 0.318 | 0.318 | 0.096 | 0.096 | 0.082 | 0.082 |
| 240 | 200 | 1.868 × 10⁻²¹ | 1.868 × 10⁻²¹ | 0.010 | 0.010 | 0.407 | 0.407 | 0.247 | 0.247 |
| | 400 | 1.762 × 10⁻²¹ | 1.762 × 10⁻²¹ | 0.456 | 0.456 | 0.082 | 0.082 | 0.075 | 0.075 |
| | 1000 | 3.057 × 10⁻²³ | 3.057 × 10⁻²³ | 0.095 | 0.095 | 0.013 | 0.013 | 0.057 | 0.057 |

Comparison of each algorithm reaching an accuracy of 10⁻²

| Algorithm | ITE | POP | Best Value Returned |
|---|---|---|---|
| GA | 20 | 1000 | 1.880 × 10⁻⁵ |
| FA | 1200 | 1000 | 2.464 × 10⁻⁴ |
| HSA | 1000 | 1000 | 4.654 × 10⁻³ |
| BHA | 700 | 1000 | 1.455 × 10⁻³ |

Dimensions—Independent Values = 3

| ITE | POP | GA | DEV-GA | FA | DEV-FA | HSA | DEV-HSA | BHA | DEV-BHA |
|---|---|---|---|---|---|---|---|---|---|
| 50 | 200 | 7.241 × 10⁻⁵ | 7.241 × 10⁻⁵ | 2.209 | 2.209 | 1.702 | 1.702 | 0.488 | 0.488 |
| | 400 | 4.668 × 10⁻⁵ | 4.668 × 10⁻⁵ | 2.061 | 2.061 | 7.337 | 7.337 | 2.300 | 2.300 |
| | 1000 | 1.557 × 10⁻⁵ | 1.557 × 10⁻⁵ | 1.572 | 1.572 | 0.318 | 0.318 | 3.049 | 3.049 |
| 240 | 200 | 2.727 × 10⁻¹⁰ | 2.727 × 10⁻¹⁰ | 0.469 | 0.469 | 0.354 | 0.354 | 3.053 | 3.053 |
| | 400 | 1.009 × 10⁻¹⁰ | 1.009 × 10⁻¹⁰ | 1.600 | 1.600 | 0.635 | 0.635 | 3.828 | 3.828 |
| | 1000 | 5.415 × 10⁻¹¹ | 5.415 × 10⁻¹¹ | 0.512 | 0.512 | 3.337 | 3.337 | 3.256 | 3.256 |

Comparison of each algorithm reaching an accuracy of 10⁻²

| Algorithm | ITE | POP | Best Value Returned |
|---|---|---|---|
| GA | 20 | 1000 | 1.694 × 10⁻³ |
| FA | 2400 | 1000 | 1.130 × 10⁻³ |
| HSA | 30,000 | 1000 | 7.109 × 10⁻³ |
| BHA | - | - | No convergence better than 10⁻¹ |
Dimensions—Independent Values = 2

| ITE | POP | GA | DEV-GA | FA | DEV-FA | HSA | DEV-HSA | BHA | DEV-BHA |
|---|---|---|---|---|---|---|---|---|---|
| 50 | 200 | 3.198 × 10⁻⁵ | 3.198 × 10⁻⁵ | 44.615 | 44.615 | 131.036 | 131.036 | 101.910 | 101.910 |
| | 400 | 2.634 × 10⁻⁵ | 2.634 × 10⁻⁵ | 7.410 | 7.410 | 53.005 | 53.005 | 44.155 | 44.155 |
| | 1000 | 2.589 × 10⁻⁵ | 2.589 × 10⁻⁵ | 41.289 | 41.289 | 8.540 | 8.540 | 118.362 | 118.362 |
| 240 | 200 | 2.545 × 10⁻⁵ | 2.545 × 10⁻⁵ | 4.721 | 4.721 | 0.720 | 0.720 | 3.762 | 3.762 |
| | 400 | 2.545 × 10⁻⁵ | 2.545 × 10⁻⁵ | 48.525 | 48.525 | 31.229 | 31.229 | 196.613 | 196.613 |
| | 1000 | 2.545 × 10⁻⁵ | 2.545 × 10⁻⁵ | 43.090 | 43.090 | 97.473 | 97.473 | 60.672 | 60.672 |

Comparison of each algorithm reaching an accuracy of 10⁻²

| Algorithm | ITE | POP | Best Value Returned |
|---|---|---|---|
| GA | 30 | 1000 | 1.120 × 10⁻⁴ |
| FA | 10,000 | 1000 | 7.456 × 10⁻² |
| HSA | 10,000 | 1000 | 3.748 × 10⁻² |
| BHA | - | - | No convergence |

Dimensions—Independent Values = 3

| ITE | POP | GA | DEV-GA | FA | DEV-FA | HSA | DEV-HSA | BHA | DEV-BHA |
|---|---|---|---|---|---|---|---|---|---|
| 50 | 200 | 0.430 | 0.430 | 334.301 | 334.301 | 445.006 | 445.006 | 370.047 | 370.047 |
| | 400 | 0.013 | 0.013 | 101.121 | 101.121 | 114.760 | 114.760 | 270.679 | 270.679 |
| | 1000 | 0.001 | 0.001 | 338.982 | 338.982 | 153.056 | 153.056 | 152.497 | 152.497 |
| 240 | 200 | 3.818 × 10⁻⁵ | 3.818 × 10⁻⁵ | 128.789 | 128.789 | 136.208 | 136.208 | 306.345 | 306.345 |
| | 400 | 3.818 × 10⁻⁵ | 3.818 × 10⁻⁵ | 347.940 | 347.940 | 223.536 | 223.536 | 272.299 | 272.299 |
| | 1000 | 3.818 × 10⁻⁵ | 3.818 × 10⁻⁵ | 218.610 | 218.610 | 255.159 | 255.159 | 189.642 | 189.642 |

Comparison of each algorithm reaching an accuracy of 10⁻²

| Algorithm | ITE | POP | Best Value Returned |
|---|---|---|---|
| GA | 50 | 1000 | 3.546 × 10⁻³ |
| FA | - | - | No convergence |
| HSA | 150,000 | 1000 | 9.954 × 10⁻² |
| BHA | - | - | No convergence |
Dimensions—Independent Values = 2

| ITE | POP | GA | DEV-GA | FA | DEV-FA | HSA | DEV-HSA | BHA | DEV-BHA |
|---|---|---|---|---|---|---|---|---|---|
| 50 | 200 | 7.885 | 0.001 | 7.836 | 0.048 | 7.343 | 0.541 | 6.681 | 1.203 |
| | 400 | 7.885 | 0.001 | 7.795 | 0.089 | 7.872 | 0.012 | 7.759 | 0.125 |
| | 1000 | 7.885 | 0.001 | 7.710 | 0.174 | 7.846 | 0.038 | 7.865 | 0.019 |
| 240 | 200 | 7.885 | 0.001 | 7.871 | 0.013 | 7.753 | 0.131 | 7.678 | 0.206 |
| | 400 | 7.885 | 0.001 | 7.861 | 0.023 | 7.617 | 0.267 | 7.592 | 0.292 |
| | 1000 | 7.885 | 0.001 | 7.878 | 0.006 | 7.295 | 0.589 | 7.671 | 0.213 |

Comparison of each algorithm reaching an accuracy of 10⁻²

| Algorithm | ITE | POP | Best Value Returned |
|---|---|---|---|
| GA | 3 | 1000 | 7.821 |
| FA | 300 | 1000 | 7.847 |
| HSA | 300 | 1000 | 7.463 |
| BHA | 200 | 1000 | 7.803 |

Dimensions—Independent Values = 3

| ITE | POP | GA | DEV-GA | FA | DEV-FA | HSA | DEV-HSA | BHA | DEV-BHA |
|---|---|---|---|---|---|---|---|---|---|
| 50 | 200 | 22.130 | 0.013 | 14.552 | 7.588 | 16.963 | 5.171 | 10.118 | 12.022 |
| | 400 | 22.141 | 0.001 | 16.225 | 5.915 | 19.221 | 2.919 | 20.047 | 2.093 |
| | 1000 | 22.143 | 0.003 | 18.846 | 3.294 | 20.776 | 1.364 | 21.504 | 0.636 |
| 240 | 200 | 22.143 | 0.003 | 21.433 | 0.707 | 19.410 | 2.730 | 9.617 | 12.523 |
| | 400 | 22.143 | 0.003 | 20.100 | 2.040 | 20.087 | 2.053 | 20.170 | 1.970 |
| | 1000 | 22.143 | 0.003 | 16.885 | 5.255 | 18.975 | 3.165 | 20.260 | 1.880 |

Comparison of each algorithm reaching an accuracy of 10⁻²

| Algorithm | ITE | POP | Best Value Returned |
|---|---|---|---|
| GA | 30 | 1000 | 22.042 |
| FA | 3000 | 1000 | 22.075 |
| HSA | 13,000 | 1000 | 22.055 |
| BHA | - | - | No convergence |
| Number of iterations with a complete population for each algorithm | Population for each algorithm | Total runs of the hybrid scheme | Returned output value of objective function | Total hybrid scheme deviation |
|---|---|---|---|---|
| Sphere Reference Function | | | | |
| Dimensions—Independent Values = 2 | | | | |
| 10 | 50 | 2 | 1.033 × 10⁻³ | 0.001 |
| 10 | 300 | 2 | 1.267 × 10⁻⁴ | 0.0001 |
| 20 | 50 | 2 | 8.784 × 10⁻⁶ | 10⁻⁶ |
| 20 | 300 | 2 | 8.294 × 10⁻⁶ | 10⁻⁶ |
| Total effort to reach an accuracy of 10⁻² | | | | |
| 7 | 50 | 2 | 2.658 × 10⁻² | |
| Dimensions—Independent Values = 3 | | | | |
| 10 | 50 | 2 | 0.027 | 0.027 |
| 10 | 300 | 2 | 0.012 | 0.012 |
| 20 | 50 | 2 | 5.536 × 10⁻⁴ | 10⁻⁴ |
| 20 | 300 | 2 | 5.255 × 10⁻⁴ | 10⁻⁴ |
| Total effort to reach an accuracy of 10⁻² | | | | |
| 10 | 50 | 2 | 2.700 × 10⁻² | |
| Schwefel Reference Function | | | | |
| Dimensions—Independent Values = 2 | | | | |
| 10 | 50 | 2 | 3.939 | 3.939 |
| 10 | 300 | 2 | 1.003 | 1.003 |
| 20 | 50 | 2 | 3.044 × 10⁻⁴ | 10⁻⁴ |
| 20 | 300 | 2 | 1.284 × 10⁻⁴ | 10⁻⁴ |
| Total effort to reach an accuracy of 10⁻² | | | | |
| 16 | 50 | 2 | 5.532 × 10⁻² | |
| Dimensions—Independent Values = 3 | | | | |
| 10 | 50 | 2 | 90.644 | 90.644 |
| 10 | 300 | 2 | 18.618 | 18.618 |
| 20 | 50 | 2 | 6.728 | 6.728 |
| 20 | 300 | 2 | 5.681 × 10⁻³ | 10⁻³ |
| Total effort to reach an accuracy of 10⁻² | | | | |
| 20 | 300 | 2 | 5.681 × 10⁻³ | |
| Alpine N.2 Reference Function | | | | |
| Dimensions—Independent Values = 2 | | | | |
| 10 | 50 | 2 | 7.835 | 0.049 |
| 10 | 300 | 2 | 7.885 | 0.001 |
| 20 | 50 | 2 | 7.883 | −0.001 |
| 20 | 300 | 2 | 7.885 | 0.001 |
| Total effort to reach an accuracy of 10⁻² | | | | |
| 10 | 50 | 2 | 7.835 | |
| Dimensions—Independent Values = 3 | | | | |
| 10 | 50 | 2 | 20.000 | 2.140 |
| 10 | 300 | 2 | 21.492 | 0.647 |
| 20 | 50 | 2 | 21.234 | 0.906 |
| 20 | 300 | 2 | 22.142 | 0.002 |
| Total effort to reach an accuracy of 10⁻² | | | | |
| 20 | 300 | 2 | 22.137 | |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).