CNN-HT: A Two-Stage Algorithm Selection Framework
Abstract
1. Introduction
The main contributions of this work are as follows:
- This paper presents CNN-HT, a novel two-stage algorithm selection framework. In contrast, existing algorithm selection methods built on the structure presented in [16] typically employ a one-stage approach, in which a single regression or classification model maps problems directly to algorithms. One-stage approaches incur high computational costs when the problems or algorithms change, because the model must be retrained. CNN-HT instead adapts to different algorithm combinations without retraining the whole model: only the second stage needs to be modified (see the sketch after this list). Experiment 4.4 demonstrates CNN-HT's adaptability to algorithm sets of different sizes.
- The paper demonstrates that a convolutional neural network (CNN) is the most suitable classification model for identifying problem classes. In Experiments 4.2 and 4.3, CNN, Random Forest (RF), and Support Vector Machine (SVM) classifiers are compared, and the CNN achieves the highest accuracy.
- Feature selection is applied as a preprocessing step for problem classification. It removes redundant features and reduces the cost of training the classification model, since fewer input features mean fewer parameters to train. Experiment 4.3 shows that the selected features achieve higher accuracy than both the initial 169 features and 19 randomly selected features.
- CNN-HT outperforms the individual algorithms in its algorithm set as well as the algorithm-portfolio method PAP, as Experiment 4.4 shows.
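To make the two-stage structure concrete, here is a minimal Python sketch of the selection pipeline. The function and table names are illustrative assumptions, not the paper's code: stage one classifies a problem from its landscape features, and stage two is a per-class lookup table built offline by hypothesis testing, so swapping the algorithm set only means rebuilding the table.

```python
# Minimal sketch of the two-stage CNN-HT pipeline (names are assumptions).
# Stage 1: a trained classifier maps a problem's ELA feature vector to one
# of the 24 BBOB problem classes. Stage 2: a lookup table, built offline by
# hypothesis testing over benchmark runs, maps each class to its best
# algorithm. Changing the algorithm set only requires rebuilding this table.

def select_algorithm(ela_features, classifier, best_algorithm_per_class):
    """Return the recommended algorithm for one problem instance."""
    problem_class = int(classifier.predict(ela_features[None, :])[0])
    return best_algorithm_per_class[problem_class]

# Hypothetical stage-2 table for a 5-algorithm set; in practice each entry
# would come from Kruskal-Wallis tests on per-class benchmark results.
best_algorithm_per_class = {0: "CoDE", 1: "CMA-ES", 2: "L-SHADE"}  # etc.
```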
2. Related Works
2.1. Exploratory Landscape Analysis
2.2. Feature Selection
2.3. Convolutional Neural Network
2.4. Hypothesis Testing
3. CNN-HT
4. Experimental Results
4.1. Data Description
4.2. Results of Problem Classification
4.3. Results of Feature Selection
4.4. Performance Analysis of Algorithm Selection
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
1. Terayama, K.; Sumita, M.; Tamura, R.; Tsuda, K. Black-box optimization for automated discovery. Accounts Chem. Res. 2021, 54, 1334–1346.
2. Roy, R.; Hinduja, S.; Teti, R. Recent advances in engineering design optimisation: Challenges and future trends. CIRP Ann. 2008, 57, 697–715.
3. Zhang, X.; Fong, K.F.; Yuen, S.Y. A novel artificial bee colony algorithm for HVAC optimization problems. HVAC&R Res. 2013, 19, 715–731.
4. Omidvar, M.N.; Yang, M.; Mei, Y.; Li, X.; Yao, X. DG2: A faster and more accurate differential grouping for large-scale black-box optimization. IEEE Trans. Evol. Comput. 2017, 21, 929–942.
5. Hansen, N.; Auger, A.; Ros, R.; Finck, S.; Pošík, P. Comparing results of 31 algorithms from the black-box optimization benchmarking BBOB-2009. In Proceedings of the 12th Annual Conference Companion on Genetic and Evolutionary Computation, Portland, OR, USA, 7–11 July 2010; pp. 1689–1696.
6. Wang, M.; Lu, G. A modified sine cosine algorithm for solving optimization problems. IEEE Access 2021, 9, 27434–27450.
7. Shi, R.; Liu, L.; Long, T.; Wu, Y.; Tang, Y. Filter-based adaptive Kriging method for black-box optimization problems with expensive objective and constraints. Comput. Methods Appl. Mech. Eng. 2019, 347, 782–805.
8. Lou, Y.; Yuen, S.Y. On constructing alternative benchmark suite for evolutionary algorithms. Swarm Evol. Comput. 2019, 44, 287–292.
9. Lou, Y.; Yuen, S.Y.; Chen, G. Non-revisiting stochastic search revisited: Results, perspectives, and future directions. Swarm Evol. Comput. 2021, 61, 100828.
10. Wolpert, D.H.; Macready, W.G. No free lunch theorems for optimization. IEEE Trans. Evol. Comput. 1997, 1, 67–82.
11. Peng, F.; Tang, K.; Chen, G.; Yao, X. Population-based algorithm portfolios for numerical optimization. IEEE Trans. Evol. Comput. 2010, 14, 782–800.
12. Kerschke, P.; Kotthoff, L.; Bossek, J.; Hoos, H.H.; Trautmann, H. Leveraging TSP solver complementarity through machine learning. Evol. Comput. 2018, 26, 597–620.
13. Kerschke, P.; Hoos, H.H.; Neumann, F.; Trautmann, H. Automated algorithm selection: Survey and perspectives. Evol. Comput. 2019, 27, 3–45.
14. Schmidhuber, J. Deep learning in neural networks: An overview. Neural Netw. 2015, 61, 85–117.
15. Lou, Y.; He, Y.; Wang, L.; Chen, G. Predicting network controllability robustness: A convolutional neural network approach. IEEE Trans. Cybern. 2022, 52, 4052–4063.
16. Rice, J.R. The algorithm selection problem. In Advances in Computers; Elsevier: Amsterdam, The Netherlands, 1976; Volume 15, pp. 65–118.
17. Bischl, B.; Mersmann, O.; Trautmann, H.; Preuß, M. Algorithm selection based on exploratory landscape analysis and cost-sensitive learning. In Proceedings of the 14th Annual Conference on Genetic and Evolutionary Computation, Philadelphia, PA, USA, 7–11 July 2012; pp. 313–320.
18. Li, Z.; Tian, X.; Liu, X.; Liu, Y.; Shi, X. A Two-Stage Industrial Defect Detection Framework Based on Improved-YOLOv5 and Optimized-Inception-ResnetV2 Models. Appl. Sci. 2022, 12, 834.
19. Khan, M.A.; Karim, M.R.; Kim, Y. A Two-Stage Big Data Analytics Framework with Real World Applications Using Spark Machine Learning and Long Short-Term Memory Network. Symmetry 2018, 10, 485.
20. Liu, A.; Xiao, Y.; Ji, X.; Wang, K.; Tsai, S.B.; Lu, H.; Cheng, J.; Lai, X.; Wang, J. A novel two-stage integrated model for supplier selection of green fresh product. Sustainability 2018, 10, 2371.
21. Zhang, X.; Yang, W.; Tang, X.; Liu, J. A fast learning method for accurate and robust lane detection using two-stage feature extraction with YOLO v3. Sensors 2018, 18, 4308.
22. Škvorc, U.; Eftimov, T.; Korošec, P. Understanding the problem space in single-objective numerical optimization using exploratory landscape analysis. Appl. Soft Comput. 2020, 90, 106138.
23. Renau, Q.; Doerr, C.; Dreo, J.; Doerr, B. Exploratory landscape analysis is strongly sensitive to the sampling strategy. In Proceedings of the Parallel Problem Solving from Nature–PPSN XVI: 16th International Conference, PPSN 2020, Leiden, The Netherlands, 5–9 September 2020; Proceedings, Part II; Springer: Berlin/Heidelberg, Germany, 2020; pp. 139–153.
24. Mersmann, O.; Preuss, M.; Trautmann, H. Benchmarking evolutionary algorithms: Towards exploratory landscape analysis. In Parallel Problem Solving from Nature; Springer: Berlin/Heidelberg, Germany, 2010.
25. Kerschke, P.; Bossek, J.; Trautmann, H. Parameterization of state-of-the-art performance indicators: A robustness study based on inexact TSP solvers. In Proceedings of the 20th Annual Conference on Genetic and Evolutionary Computation (GECCO) Companion, Kyoto, Japan, 15–19 July 2018; pp. 1737–1744.
26. Tian, Y.; Peng, S.; Zhang, X.; Rodemann, T.; Tan, K.C.; Jin, Y. A recommender system for metaheuristic algorithms for continuous optimization based on deep recurrent neural networks. IEEE Trans. Artif. Intell. 2020, 1, 5–18.
27. Mersmann, O.; Bischl, B.; Trautmann, H.; Preuss, M.; Weihs, C.; Rudolph, G. Exploratory landscape analysis. In Proceedings of the 13th Annual Conference on Genetic and Evolutionary Computation, Dublin, Ireland, 12–16 July 2011; pp. 829–836.
28. Malan, K.M.; Oberholzer, J.F.; Engelbrecht, A.P. Characterising constrained continuous optimisation problems. In Proceedings of the 2015 IEEE Congress on Evolutionary Computation (CEC), Sendai, Japan, 25–28 May 2015; pp. 1351–1358.
29. Shirakawa, S.; Nagao, T. Bag of local landscape features for fitness landscape analysis. Soft Comput. 2016, 20, 3787–3802.
30. Kerschke, P.; Dagefoerde, J. flacco: Feature-based landscape analysis of continuous and constrained optimization problems. R-Package Version 2017, 1, 1–6.
31. Li, J.; Cheng, K.; Wang, S.; Morstatter, F.; Trevino, R.P.; Tang, J.; Liu, H. Feature selection: A data perspective. ACM Comput. Surv. 2017, 50, 1–45.
32. Thakkar, A.; Lohiya, R. Fusion of statistical importance for feature selection in Deep Neural Network-based Intrusion Detection System. Inf. Fusion 2023, 90, 353–363.
33. Bolón-Canedo, V.; Alonso-Betanzos, A.; Morán-Fernández, L.; Cancela, B. Feature selection: From the past to the future. In Advances in Selected Artificial Intelligence Areas: World Outstanding Women in Artificial Intelligence; Springer: Berlin/Heidelberg, Germany, 2022; pp. 11–34.
34. Sun, L.; Yin, T.; Ding, W.; Qian, Y.; Xu, J. Feature selection with missing labels using multilabel fuzzy neighborhood rough sets and maximum relevance minimum redundancy. IEEE Trans. Fuzzy Syst. 2021, 30, 1197–1211.
35. Venkatesh, B.; Anuradha, J. A review of feature selection and its methods. Cybern. Inf. Technol. 2019, 19, 3–26.
36. Abbas, F.; Zhang, F.; Abbas, F.; Ismail, M.; Iqbal, J.; Hussain, D.; Khan, G.; Alrefaei, A.F.; Albeshr, M.F. Landslide Susceptibility Mapping: Analysis of Different Feature Selection Techniques with Artificial Neural Network Tuned by Bayesian and Metaheuristic Algorithms. Remote Sens. 2023, 15, 4330.
37. Fan, Y.; Chen, B.; Huang, W.; Liu, J.; Weng, W.; Lan, W. Multi-label feature selection based on label correlations and feature redundancy. Knowl.-Based Syst. 2022, 241, 108256.
38. Yu, L.; Liu, H. Efficient feature selection via analysis of relevance and redundancy. J. Mach. Learn. Res. 2004, 5, 1205–1224.
39. Khaire, U.M.; Dhanalakshmi, R. Stability of feature selection algorithm: A review. J. King Saud Univ.-Comput. Inf. Sci. 2022, 34, 1060–1073.
40. Hu, Q.; Yu, D.; Liu, J.; Wu, C. Neighborhood rough set based heterogeneous feature subset selection. Inf. Sci. 2008, 178, 3577–3594.
41. Lou, Y.; He, Y.; Wang, L.; Tsang, K.F.; Chen, G. Knowledge-based prediction of network controllability robustness. IEEE Trans. Neural Netw. Learn. Syst. 2022, 33, 5739–5750.
42. Lou, Y.; Wu, R.; Li, J.; Wang, L.; Chen, G. A convolutional neural network approach to predicting network connectedness robustness. IEEE Trans. Netw. Sci. Eng. 2021, 8, 3209–3219.
43. Lou, Y.; Wu, C.; Li, J.; Wang, L.; Chen, G. Network robustness prediction: Influence of training data distributions. IEEE Trans. Neural Netw. Learn. Syst. 2023.
44. Gu, J.; Wang, Z.; Kuen, J.; Ma, L.; Shahroudy, A.; Shuai, B.; Liu, T.; Wang, X.; Wang, G.; Cai, J.; et al. Recent advances in convolutional neural networks. Pattern Recognit. 2018, 77, 354–377.
45. Li, Z.; Liu, F.; Yang, W.; Peng, S.; Zhou, J. A survey of convolutional neural networks: Analysis, applications, and prospects. IEEE Trans. Neural Netw. Learn. Syst. 2021.
46. Albawi, S.; Mohammed, T.A.; Al-Zawi, S. Understanding of a convolutional neural network. In Proceedings of the 2017 International Conference on Engineering and Technology (ICET), Antalya, Turkey, 21–23 August 2017; pp. 1–6.
47. Kotthoff, L.; Kerschke, P.; Hoos, H.; Trautmann, H. Improving the state of the art in inexact TSP solving using per-instance algorithm selection. In Proceedings of Learning and Intelligent Optimization, Lille, France, 12–15 January 2015; Dhaenens, C., Jourdan, L., Marmion, M.E., Eds.; Springer: Cham, Switzerland, 2015; pp. 202–217.
48. Loreggia, A.; Malitsky, Y.; Samulowitz, H.; Saraswat, V. Deep learning for algorithm portfolios. In Proceedings of the AAAI Conference on Artificial Intelligence, Phoenix, AZ, USA, 12–17 February 2016; Volume 30.
49. He, Y.; Yuen, S.Y.; Lou, Y.; Zhang, X. A sequential algorithm portfolio approach for black box optimization. Swarm Evol. Comput. 2019, 44, 559–570.
50. Wilcox, R.R. Introduction to Robust Estimation and Hypothesis Testing; Academic Press: New York, NY, USA, 2011.
51. Smith-Miles, K.; Baatar, D.; Wreford, B.; Lewis, R. Towards objective measures of algorithm performance across instance space. Comput. Oper. Res. 2014, 45, 12–24.
52. Kruskal, W.H.; Wallis, W.A. Use of ranks in one-criterion variance analysis. J. Am. Stat. Assoc. 1952, 47, 583–621.
53. Lou, Y.; Yuen, S.Y.; Chen, G. Evolving benchmark functions for optimization algorithms. In From Parallel to Emergent Computing; CRC Press: Boca Raton, FL, USA, 2019; pp. 239–260.
54. Lou, Y.; Yuen, S.Y.; Chen, G. Evolving benchmark functions using Kruskal-Wallis test. In Proceedings of the Genetic and Evolutionary Computation Conference Companion, Kyoto, Japan, 15–19 July 2018; pp. 1337–1341.
55. McKay, M.D.; Beckman, R.J.; Conover, W.J. A comparison of three methods for selecting values of input variables in the analysis of output from a computer code. Technometrics 2000, 42, 55–61.
56. Liaw, A.; Wiener, M. Classification and regression by randomForest. R News 2002, 2, 18–22.
57. Karatzoglou, A.; Smola, A.; Hornik, K.; Zeileis, A. kernlab—An S4 package for kernel methods in R. J. Stat. Softw. 2004, 11, 1–20.
58. Wang, Y.; Cai, Z.; Zhang, Q. Differential evolution with composite trial vector generation strategies and control parameters. IEEE Trans. Evol. Comput. 2011, 15, 55–66.
59. Hansen, N.; Ostermeier, A. Completely derandomized self-adaptation in evolution strategies. Evol. Comput. 2001, 9, 159–195.
60. Jain, M.; Singh, V.; Rani, A. A novel nature-inspired algorithm for optimization: Squirrel search algorithm. Swarm Evol. Comput. 2019, 44, 148–175.
61. Tanabe, R.; Fukunaga, A.S. Improving the search performance of SHADE using linear population size reduction. In Proceedings of the 2014 IEEE Congress on Evolutionary Computation (CEC), Beijing, China, 6–11 July 2014; pp. 1658–1665.
62. Trojovská, E.; Dehghani, M.; Trojovský, P. Zebra Optimization Algorithm: A New Bio-Inspired Optimization Algorithm for Solving Optimization Algorithm. IEEE Access 2022, 10, 49445–49473.
63. Karaboga, D.; Basturk, B. A powerful and efficient algorithm for numerical function optimization: Artificial bee colony (ABC) algorithm. J. Glob. Optim. 2007, 39, 459–471.
64. Qin, A.K.; Huang, V.L.; Suganthan, P.N. Differential evolution algorithm with strategy adaptation for global numerical optimization. IEEE Trans. Evol. Comput. 2008, 13, 398–417.
65. García-Martínez, C.; Lozano, M.; Herrera, F.; Molina, D.; Sánchez, A.M. Global and local real-coded genetic algorithms based on parent-centric crossover operators. Eur. J. Oper. Res. 2008, 185, 1088–1113.
| Group | Layers | Kernel Size | Stride | Output Channel |
|---|---|---|---|---|
| Group 1 | Conv3-16 | 3 | 1 | 16 |
| | Conv3-16 | 3 | 1 | 16 |
| | Max3 | 3 | 1 | 16 |
| Group 2 | Conv3-64 | 3 | 1 | 64 |
| | Conv3-64 | 3 | 1 | 64 |
| | Max3 | 3 | 1 | 64 |
| Group 3 | Conv3-64 | 3 | 1 | 64 |
| | Conv3-64 | 3 | 1 | 64 |
| | Max3 | 3 | 1 | 64 |
| | FC-64 | 1 | 1 | 64 |
| | Softmax | 1 | 1 | 24 |
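To make the architecture table concrete, below is a minimal PyTorch sketch consistent with it: three groups of two 1D convolutions plus max pooling, an FC-64 layer, and a 24-way output. The input length (here 19, the selected ELA features), the padding scheme, and the ReLU activations are assumptions not specified by the table.

```python
import torch
import torch.nn as nn

class ProblemClassifierCNN(nn.Module):
    """1D CNN matching the table above; padding/activations are assumptions."""
    def __init__(self, in_len=19, n_classes=24):
        super().__init__()
        def group(c_in, c_out):
            return nn.Sequential(
                nn.Conv1d(c_in, c_out, kernel_size=3, stride=1, padding=1),
                nn.ReLU(),
                nn.Conv1d(c_out, c_out, kernel_size=3, stride=1, padding=1),
                nn.ReLU(),
                nn.MaxPool1d(kernel_size=3, stride=1, padding=1),
            )
        self.conv = nn.Sequential(group(1, 16), group(16, 64), group(64, 64))
        self.fc = nn.Linear(64 * in_len, 64)
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):                # x: (batch, in_len) feature vectors
        h = self.conv(x.unsqueeze(1))    # -> (batch, 64, in_len)
        h = torch.relu(self.fc(h.flatten(1)))
        return self.head(h)              # logits; softmax is applied in the loss

model = ProblemClassifierCNN()
logits = model(torch.randn(8, 19))       # 8 instances, 19 ELA features each
```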
| Feature Class | Feature Name in flacco | F1 | F2 | F3 |
|---|---|---|---|---|
| cell mapping features | cm_angle.dist_ctr2best.mean | 2.77 | 2.63 | 1.94 |
| gradient homogeneity features | cm_grad.mean | 0.59 | 0.61 | 0.36 |
| ELA curvature features | ela_curv.grad_norm.max | 14.9 | 8,909,220.50 | 639.14 |
| ELA y-distribution features | ela_distr.number_of_peaks | 1.00 | 4.00 | 5.00 |
| Classifier | D = 2 | D = 5 | D = 10 | D = 20 |
|---|---|---|---|---|
| 1D CNN | 99.97% | 96.78% | 95.71% | 93.79% |
| SVM | 96.25% | 93.1% | 93.54% | 91.56% |
| RF | 94.3% | 94.12% | 93.3% | 91.22% |
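For the SVM and RF baselines, a minimal scikit-learn comparison could look like the following sketch. The random stand-in data, hyperparameters, and 5-fold protocol are assumptions; the paper's exact settings may differ.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# X: ELA feature matrix (instances x features), y: BBOB class labels 0..23.
# Random data stands in for the real benchmark features here.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(480, 169)), rng.integers(0, 24, size=480)

for name, clf in [
    ("SVM", make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))),
    ("RF", RandomForestClassifier(n_estimators=500, random_state=0)),
]:
    acc = cross_val_score(clf, X, y, cv=5).mean()  # mean 5-fold accuracy
    print(f"{name}: {acc:.3f}")
```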
| Feature Class | Feature Name in flacco | Description |
|---|---|---|
| ELA convexity features | ela_conv.conv_prob | percentage of convexity |
| ELA convexity features | ela_conv.lin_dev.orig | average (original) deviation between the linear combination of the objectives and the objective of the linear combination of the observations |
| ELA curvature features | ela_curv.grad_scale.lq | aggregations (lower quartile) of the ratios between biggest and smallest (absolute) gradient directions |
| ELA curvature features | ela_curv.grad_scale.sd | aggregations (standard deviation) of the ratios between biggest and smallest (absolute) gradient directions |
| ELA y-distribution features | ela_distr.costs_runtime | number of features and runtime (in seconds) which were needed for the computation of these features |
| ELA local search features | ela_local.best2mean_contr.orig | each cluster is represented by its center; this feature is the ratio of the objective values of the best and average cluster |
| ELA local search features | ela_local.best2mean_contr.ratio | each cluster is represented by its center; this feature is the ratio of the differences in the objective values of average to best and worst to best cluster |
| ELA local search features | ela_local.fun_evals.lq | aggregations (lower quartile) of the performed local searches |
| ELA local search features | ela_local.fun_evals.sd | aggregations (standard deviation) of the performed local searches |
| information content features | ic.eps.ratio | ratio of partial information sensitivity where the ratio is 0.5 |
| cell mapping features | cm_angle.dist_ctr2worst.mean | arithmetic mean of distances from the cell center to the worst observation within the cell (over all cells) |
| cell mapping features | cm_angle.costs_runtime | runtime (in seconds) needed for the computation of these features |
| cell mapping features | cm_grad.mean | arithmetic mean of the aforementioned lengths |
| linear model features | limo.avg_length.norm | length of the average coefficient vector (based on normalized vectors) |
| linear model features | limo.cor.norm | correlation of all coefficient vectors (based on normalized vectors) |
| linear model features | limo.sd_ratio.reg | max-by-min ratio of the standard deviations of the (non-intercept) coefficients (based on regular ones) |
| principal component features | pca.expl_var.cov_init | proportion of the explained variance when applying PCA to the covariance matrix of the entire initial design (init) |
| principal component features | pca.expl_var_PC1.cov_x | proportion of variance which is explained by the first principal component when applying PCA to the covariance matrix of the decision space (x) |
| principal component features | pca.expl_var_PC1.cov_init | proportion of variance which is explained by the first principal component when applying PCA to the covariance matrix of the entire initial design |
| Experiment | Input and Classifier | D = 2 | D = 5 | D = 10 | D = 20 |
|---|---|---|---|---|---|
| | Feature selection (19 features), classifier: CNN | 99.97% | 98.78% | 98.66% | 98.13% |
| Comparative Experiment 1 | Feature selection (19 features), classifier: SVM | 97.5% | 96.4% | 95.3% | 93.18% |
| | Feature selection (19 features), classifier: RF | 96.35% | 95.21% | 94.74% | 94.8% |
| Comparative Experiment 2 | Random selection (19 features) | 50.33% | 62.88% | 74.17% | 55.33% |
| | Initial feature set (169 features) | 99.97% | 96.78% | 95.71% | 93.79% |
| Comparative Experiment 3 | 2D feature selection (14 features) | 99.97% | − | − | − |
| | 5D feature selection (22 features) | − | 99.56% | − | − |
| | 10D feature selection (13 features) | − | − | 99.22% | − |
| | 20D feature selection (21 features) | − | − | − | 99.22% |
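As one concrete instance of a filter-style selection step that cuts 169 features down to 19, the sketch below ranks features by mutual information with the class label using scikit-learn. The paper's actual relevance/redundancy criterion may differ, so treat this as an assumption-laden stand-in, and the random data as a placeholder for the real ELA feature matrix.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif

# X: 169 ELA features per instance; y: problem-class labels (stand-in data).
rng = np.random.default_rng(0)
X, y = rng.normal(size=(480, 169)), rng.integers(0, 24, size=480)

# Keep the 19 features most informative about the class label.
selector = SelectKBest(score_func=mutual_info_classif, k=19).fit(X, y)
X_reduced = selector.transform(X)               # shape: (480, 19)
kept = np.flatnonzero(selector.get_support())   # indices of retained features
print(X_reduced.shape, kept[:5])
```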
[Table: the best-performing algorithm(s) among CoDE, CMA-ES, SSA, L-SHADE, and ZOA for each of the 24 BBOB functions F1–F24; the marks in the table body were not recoverable from the extraction.]
| Function | CNN-HT | PAP | CoDE | CMA-ES | SSA | L-SHADE | ZOA |
|---|---|---|---|---|---|---|---|
| f1 | 1 | 1 (=) | 1 (=) | 1 (=) | 1 (=) | 1 (=) | 1 (=) |
| f2 | 1 | 1 (=) | 1 (=) | 1 (=) | 1 (=) | 1 (=) | 1 (=) |
| f3 | 1 | 1 (=) | 1 (=) | 6 (+) | 1 (=) | 1 (=) | 6 (+) |
| f4 | 1 | 1 (=) | 1 (=) | 6 (+) | 1 (=) | 1 (=) | 6 (+) |
| f5 | 1 | 1 (=) | 1 (=) | 1 (=) | 1 (=) | 1 (=) | 1 (=) |
| f6 | 1 | 1 (=) | 1 (=) | 1 (=) | 1 (=) | 1 (=) | 1 (=) |
| f7 | 1 | 1 (=) | 1 (=) | 6 (+) | 1 (=) | 1 (=) | 6 (+) |
| f8 | 1 | 3 (=) | 1 (=) | 3 (=) | 7 (+) | 3 (=) | 6 (+) |
| f9 | 1 | 7 (+) | 3 (=) | 1 (=) | 5 (+) | 4 (=) | 6 (+) |
| f10 | 1 | 6 (+) | 4 (=) | 3 (=) | 7 (+) | 1 (=) | 5 (+) |
| f11 | 2 | 1 (=) | 5 (+) | 3 (=) | 4 (=) | 5 (+) | 5 (+) |
| f12 | 1 | 3 (=) | 7 (+) | 1 (=) | 5 (+) | 3 (=) | 5 (+) |
| f13 | 1 | 1 (=) | 1 (=) | 1 (=) | 5 (+) | 5 (+) | 5 (+) |
| f14 | 1 | 1 (=) | 5 (+) | 1 (=) | 7 (+) | 5 (+) | 1 (=) |
| f15 | 1 | 3 (+) | 4 (+) | 4 (+) | 4 (+) | 4 (+) | 1 (=) |
| f16 | 2 | 2 (=) | 5 (+) | 5 (+) | 2 (=) | 5 (+) | 1 (=) |
| f17 | 1 | 1 (=) | 1 (=) | 6 (+) | 5 (+) | 1 (=) | 7 (+) |
| f18 | 1 | 1 (=) | 1 (=) | 5 (+) | 7 (+) | 1 (=) | 5 (+) |
| f19 | 1 | 6 (+) | 3 (+) | 4 (+) | 5 (+) | 6 (+) | 1 (=) |
| f20 | 2 | 3 (=) | 1 (=) | 5 (+) | 3 (=) | 5 (+) | 5 (+) |
| f21 | 3 | 1 (=) | 2 (=) | 7 (+) | 5 (=) | 6 (=) | 4 (=) |
| f22 | 3 | 1 (=) | 4 (=) | 7 (+) | 6 (+) | 4 (=) | 2 (=) |
| f23 | 1 | 3 (+) | 4 (+) | 4 (+) | 4 (+) | 4 (+) | 1 (=) |
| f24 | 1 | 3 (+) | 4 (+) | 4 (+) | 4 (+) | 4 (+) | 1 (=) |
| average | 1.291 | 2.208 | 2.583 | 3.583 | 3.833 | 3.041 | 3.458 |

| | PAP | CoDE | CMA-ES | SSA | L-SHADE | ZOA |
|---|---|---|---|---|---|---|
| Significantly worse than CNN-HT | 5 | 8 | 13 | 13 | 9 | 12 |
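The (+)/(=)/(−) marks and the "significantly worse" counts above can be derived mechanically from the per-run results. Here is a sketch assuming a Kruskal-Wallis test at α = 0.05 on the repeated-run error values (the paper cites this test, but the exact protocol here is an assumption):

```python
import numpy as np
from scipy.stats import kruskal

def compare(cnn_ht_runs, rival_runs, alpha=0.05):
    """Mark a rival '+' (significantly worse than CNN-HT), '-' (significantly
    better), or '=' on one function, from repeated-run error values."""
    stat, p = kruskal(cnn_ht_runs, rival_runs)
    if p >= alpha:
        return "="
    return "+" if np.median(rival_runs) > np.median(cnn_ht_runs) else "-"

# Counting how often a rival is significantly worse over f1..f24:
# worse = sum(compare(runs_cnn[f], runs_rival[f]) == "+" for f in range(24))
```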
| Function | CNN-HT | PAP | CoDE | CMA-ES | SSA | L-SHADE | ZOA |
|---|---|---|---|---|---|---|---|
| f1 | 1 | 1 (=) | 1 (=) | 1 (=) | 1 (=) | 1 (=) | 1 (=) |
| f2 | 1 | 1 (=) | 1 (=) | 1 (=) | 1 (=) | 1 (=) | 1 (=) |
| f3 | 1 | 1 (=) | 1 (=) | 6 (+) | 1 (=) | 1 (=) | 6 (+) |
| f4 | 1 | 1 (=) | 1 (=) | 6 (+) | 1 (=) | 1 (=) | 6 (+) |
| f5 | 1 | 1 (=) | 1 (=) | 1 (=) | 1 (=) | 1 (=) | 1 (=) |
| f6 | 1 | 1 (=) | 1 (=) | 1 (=) | 1 (=) | 1 (=) | 1 (=) |
| f7 | 3 | 1 (=) | 1 (=) | 6 (+) | 6 (+) | 3 (=) | 3 (=) |
| f8 | 1 | 1 (=) | 1 (=) | 1 (=) | 7 (+) | 1 (=) | 1 (=) |
| f9 | 1 | 7 (+) | 3 (=) | 1 (=) | 6 (+) | 5 (=) | 3 (=) |
| f10 | 1 | 1 (=) | 5 (=) | 1 (=) | 7 (+) | 1 (=) | 6 (+) |
| f11 | 2 | 2 (=) | 5 (+) | 1 (=) | 4 (=) | 5 (+) | 5 (+) |
| f12 | 2 | 3 (=) | 6 (+) | 1 (=) | 6 (+) | 3 (=) | 3 (=) |
| f13 | 1 | 1 (=) | 1 (=) | 1 (=) | 6 (+) | 5 (=) | 6 (+) |
| f14 | 1 | 1 (=) | 5 (+) | 1 (=) | 7 (+) | 5 (+) | 1 (=) |
| f15 | 2 | 3 (+) | 5 (+) | 6 (+) | 7 (+) | 4 (+) | 1 (=) |
| f16 | 2 | 2 (=) | 6 (+) | 7 (+) | 4 (=) | 5 (+) | 1 (=) |
| f17 | 1 | 1 (=) | 1 (=) | 5 (+) | 5 (+) | 1 (=) | 5 (+) |
| f18 | 1 | 3 (=) | 4 (=) | 6 (+) | 7 (+) | 1 (=) | 5 (=) |
| f19 | 1 | 6 (+) | 4 (+) | 4 (+) | 3 (+) | 6 (+) | 1 (=) |
| f20 | 2 | 2 (=) | 1 (−) | 5 (+) | 3 (=) | 5 (+) | 5 (+) |
| f21 | 2 | 1 (=) | 4 (=) | 7 (+) | 2 (=) | 4 (=) | 4 (=) |
| f22 | 2 | 1 (=) | 5 (=) | 6 (+) | 6 (+) | 4 (=) | 2 (=) |
| f23 | 2 | 2 (=) | 5 (+) | 1 (−) | 5 (+) | 5 (+) | 2 (=) |
| f24 | 1 | 3 (+) | 4 (+) | 4 (+) | 4 (+) | 4 (+) | 1 (=) |
| average | 1.416 | 1.958 | 3 | 3.33 | 4.208 | 3.0416 | 2.958 |

| | PAP | CoDE | CMA-ES | SSA | L-SHADE | ZOA |
|---|---|---|---|---|---|---|
| Significantly worse than CNN-HT | 4 | 8 | 12 | 14 | 8 | 7 |
| Function | CNN-HT | PAP | CoDE | CMA-ES | SSA | L-SHADE | ZOA |
|---|---|---|---|---|---|---|---|
| f1 | 1 | 1 (=) | 1 (=) | 1 (=) | 1 (=) | 1 (=) | 1 (=) |
| f2 | 1 | 1 (=) | 1 (=) | 1 (=) | 1 (=) | 1 (=) | 1 (=) |
| f3 | 1 | 1 (=) | 1 (=) | 6 (+) | 1 (=) | 1 (=) | 6 (+) |
| f4 | 1 | 1 (=) | 1 (=) | 6 (+) | 1 (=) | 1 (=) | 6 (+) |
| f5 | 1 | 1 (=) | 1 (=) | 1 (=) | 1 (=) | 1 (=) | 1 (=) |
| f6 | 1 | 1 (=) | 1 (=) | 1 (=) | 1 (=) | 1 (=) | 1 (=) |
| f7 | 1 | 1 (=) | 1 (=) | 6 (+) | 1 (=) | 1 (=) | 7 (+) |
| f8 | 1 | 3 (=) | 1 (=) | 3 (=) | 6 (+) | 3 (=) | 6 (+) |
| f9 | 1 | 7 (+) | 4 (=) | 2 (=) | 6 (+) | 3 (=) | 5 (+) |
| f10 | 2 | 6 (+) | 4 (=) | 3 (=) | 7 (+) | 1 (=) | 5 (+) |
| f11 | 1 | 1 (=) | 5 (+) | 1 (=) | 5 (+) | 1 (=) | 5 (+) |
| f12 | 1 | 3 (=) | 7 (+) | 1 (=) | 5 (+) | 3 (=) | 5 (+) |
| f13 | 1 | 2 (=) | 4 (=) | 3 (=) | 5 (+) | 5 (+) | 5 (+) |
| f14 | 1 | 4 (=) | 5 (=) | 1 (=) | 6 (+) | 6 (+) | 1 (=) |
| f15 | 1 | 3 (=) | 3 (=) | 6 (+) | 6 (+) | 3 (=) | 1 (=) |
| f16 | 2 | 2 (=) | 5 (+) | 5 (+) | 2 (=) | 5 (+) | 1 (=) |
| f17 | 1 | 1 (=) | 1 (=) | 5 (+) | 5 (+) | 1 (=) | 5 (+) |
| f18 | 1 | 1 (=) | 1 (=) | 5 (+) | 7 (+) | 1 (=) | 5 (+) |
| f19 | 1 | 6 (+) | 7 (+) | 4 (+) | 5 (+) | 3 (+) | 1 (=) |
| f20 | 3 | 3 (=) | 2 (=) | 5 (+) | 1 (=) | 5 (+) | 5 (+) |
| f21 | 3 | 1 (=) | 4 (=) | 7 (+) | 6 (=) | 4 (=) | 1 (=) |
| f22 | 3 | 1 (=) | 4 (=) | 7 (+) | 6 (+) | 4 (=) | 2 (=) |
| f23 | 1 | 3 (+) | 4 (+) | 4 (+) | 4 (+) | 4 (+) | 1 (=) |
| f24 | 1 | 3 (+) | 4 (+) | 4 (+) | 4 (+) | 4 (+) | 1 (=) |
| average | 1.333 | 2.375 | 3 | 3.667 | 3.875 | 2.625 | 3.25 |

| | PAP | CoDE | CMA-ES | SSA | L-SHADE | ZOA |
|---|---|---|---|---|---|---|
| Significantly worse than CNN-HT | 5 | 6 | 13 | 14 | 7 | 12 |
| Function | CNN-HT | PAP | CoDE | CMA-ES | SSA | L-SHADE | ZOA |
|---|---|---|---|---|---|---|---|
| f1 | 1 | 1 (=) | 1 (=) | 1 (=) | 1 (=) | 1 (=) | 1 (=) |
| f2 | 1 | 1 (=) | 1 (=) | 1 (=) | 1 (=) | 1 (=) | 1 (=) |
| f3 | 1 | 1 (=) | 1 (=) | 6 (+) | 1 (=) | 1 (=) | 6 (+) |
| f4 | 1 | 1 (=) | 1 (=) | 6 (+) | 1 (=) | 1 (=) | 6 (+) |
| f5 | 1 | 1 (=) | 1 (=) | 1 (=) | 1 (=) | 1 (=) | 1 (=) |
| f6 | 1 | 1 (=) | 1 (=) | 1 (=) | 1 (=) | 1 (=) | 1 (=) |
| f7 | 1 | 1 (=) | 1 (=) | 6 (+) | 1 (=) | 1 (=) | 6 (+) |
| f8 | 3 | 1 (−) | 3 (=) | 1 (−) | 6 (+) | 3 (=) | 7 (+) |
| f9 | 1 | 5 (+) | 3 (=) | 1 (=) | 5 (+) | 3 (=) | 5 (+) |
| f10 | 1 | 4 (+) | 4 (+) | 1 (=) | 4 (+) | 1 (=) | 4 (+) |
| f11 | 1 | 1 (=) | 4 (+) | 1 (=) | 4 (+) | 4 (+) | 4 (+) |
| f12 | 3 | 2 (=) | 6 (+) | 1 (=) | 5 (+) | 7 (+) | 4 (+) |
| f13 | 1 | 1 (=) | 1 (=) | 1 (=) | 5 (+) | 5 (+) | 5 (+) |
| f14 | 1 | 1 (=) | 5 (+) | 3 (=) | 5 (+) | 7 (+) | 3 (=) |
| f15 | 1 | 3 (+) | 3 (+) | 3 (+) | 3 (+) | 3 (+) | 1 (=) |
| f16 | 2 | 2 (=) | 5 (+) | 5 (+) | 2 (=) | 5 (+) | 1 (=) |
| f17 | 1 | 1 (=) | 1 (=) | 5 (+) | 5 (+) | 1 (=) | 5 (+) |
| f18 | 1 | 1 (=) | 1 (=) | 5 (+) | 7 (+) | 4 (=) | 5 (+) |
| f19 | 2 | 4 (+) | 3 (+) | 4 (+) | 6 (+) | 6 (+) | 1 (=) |
| f20 | 2 | 2 (=) | 1 (=) | 5 (+) | 2 (=) | 7 (+) | 5 (+) |
| f21 | 3 | 1 (=) | 4 (=) | 7 (+) | 5 (=) | 6 (=) | 1 (=) |
| f22 | 2 | 5 (+) | 4 (=) | 6 (+) | 6 (+) | 1 (=) | 2 (=) |
| f23 | 1 | 1 (=) | 4 (+) | 4 (+) | 4 (+) | 4 (+) | 1 (=) |
| f24 | 1 | 3 (+) | 3 (+) | 3 (+) | 3 (+) | 3 (+) | 1 (=) |
| average | 1.416 | 1.875 | 2.583 | 3.25 | 3.5 | 3.208 | 3.208 |

| | PAP | CoDE | CMA-ES | SSA | L-SHADE | ZOA |
|---|---|---|---|---|---|---|
| Significantly worse than CNN-HT | 6 | 9 | 13 | 14 | 10 | 12 |
| Function | CNN-HT | PAP | GL25 | CMA-ES | ABC | SaDE |
|---|---|---|---|---|---|---|
| f1 | 1 | 1 (=) | 1 (=) | 1 (=) | 1 (=) | 1 (=) |
| f2 | 1 | 1 (=) | 1 (=) | 1 (=) | 1 (=) | 1 (=) |
| f3 | 1 | 1 (=) | 4 (+) | 4 (+) | 4 (+) | 1 (=) |
| f4 | 1 | 1 (=) | 4 (+) | 4 (+) | 4 (+) | 3 (+) |
| f5 | 1 | 1 (=) | 1 (=) | 1 (=) | 1 (=) | 1 (=) |
| f6 | 1 | 1 (=) | 5 (+) | 1 (=) | 6 (+) | 1 (=) |
| f7 | 1 | 2 (=) | 4 (+) | 6 (+) | 5 (+) | 2 (=) |
| f8 | 1 | 3 (+) | 5 (+) | 2 (=) | 5 (+) | 4 (+) |
| f9 | 1 | 1 (=) | 5 (+) | 1 (=) | 5 (+) | 4 (+) |
| f10 | 1 | 1 (=) | 4 (+) | 1 (=) | 4 (+) | 4 (+) |
| f11 | 1 | 1 (=) | 4 (+) | 1 (=) | 4 (+) | 4 (+) |
| f12 | 1 | 1 (=) | 4 (+) | 1 (=) | 4 (+) | 4 (+) |
| f13 | 1 | 3 (=) | 4 (+) | 1 (=) | 4 (+) | 4 (+) |
| f14 | 1 | 1 (=) | 5 (+) | 1 (=) | 5 (+) | 4 (+) |
| f15 | 1 | 1 (=) | 1 (=) | 5 (+) | 6 (+) | 1 (=) |
| f16 | 2 | 3 (=) | 5 (+) | 3 (=) | 5 (+) | 1 (=) |
| f17 | 2 | 2 (=) | 4 (=) | 5 (+) | 5 (+) | 1 (=) |
| f18 | 3 | 4 (+) | 1 (=) | 4 (+) | 4 (+) | 2 (=) |
| f19 | 1 | 3 (+) | 5 (+) | 1 (=) | 5 (+) | 3 (+) |
| f20 | 1 | 1 (=) | 4 (+) | 4 (+) | 4 (+) | 1 (=) |
| f21 | 1 | 4 (=) | 1 (=) | 6 (+) | 4 (=) | 1 (=) |
| f22 | 1 | 3 (=) | 3 (=) | 6 (+) | 5 (=) | 1 (=) |
| f23 | 1 | 1 (=) | 4 (+) | 1 (=) | 4 (+) | 4 (+) |
| f24 | 3 | 1 (=) | 5 (+) | 1 (=) | 6 (+) | 4 (=) |
| average | 1.25 | 1.75 | 3.5 | 2.583 | 4.208 | 2.375 |

| | PAP | GL25 | CMA-ES | ABC | SaDE |
|---|---|---|---|---|---|
| Significantly worse than CNN-HT | 3 | 16 | 9 | 19 | 10 |