Adaptive Neural Architecture Search Using Meta-Heuristics: Discovering Fine-Tuned Predictive Models for Photocatalytic CO2 Reduction
Abstract
1. Introduction
Literature Review
2. Materials and Methods
2.1. The Dataset
2.2. Data Preprocessing
2.2.1. Data Transformation
2.2.2. Data Imputation
2.3. Adaptive NAS and HyperNetExplorer
2.3.1. Artificial Neural Networks (ANNs)
2.3.2. Algorithms for Hyperparameter Optimization
Algorithm | Advantages | Disadvantages | Types of Optimization Problems Best Suited for | Convergence Rates |
---|---|---|---|---|
FPA | Simplicity and flexibility | Inadequate optimization precision [79] | Large integer programming problems, high-complexity convergence problems | Poor |
GA | Parallel capabilities and flexibility | Solutions can be difficult to interpret | Timetabling and scheduling problems | Slow
HS | No need for derivative information [80] | Can become stuck at local optima | Multi-objective optimization problems | Fast
JA | Simplicity, efficiency, and no algorithm-specific parameters | Can get stuck in local minima | Large-scale real-life urban traffic light scheduling problems | Medium
PSO | Flexible and easy to implement | Easily becomes trapped in local optima | Constrained and unconstrained (single- or multi-objective) optimization problems | Slow
TLBO | Powerful exploration | Suboptimal performance | Complex optimization problems | Fast |
2.3.3. Validation and Performance Evaluation Strategies
k-Fold Cross-Validation
Performance Evaluation
2.3.4. HyperNetExplorer: Architecture and Operation
2.4. Utilisation of Conventional Machine Learning Algorithms
2.4.1. Decision Tree (DT)
2.4.2. Gaussian Naïve Bayes (Gaussian NB)
2.4.3. K-Nearest Neighbors (KNN)
2.4.4. Linear Discriminant Analysis (LDA)
2.4.5. Support Vector Machine (SVM)
2.4.6. Random Forest (RF)
2.4.7. Gradient Boosting (GB)
2.4.8. Histogram Gradient Boosting (HGB)
2.4.9. Category Boosting (CatBoost)
3. Results and Discussion
4. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Saadetnejad, D.; Oral, B.; Can, E.; Yıldırım, R. Machine learning analysis of gas phase photocatalytic CO2 reduction for hydrogen production. Int. J. Hydrogen Energy 2022, 47, 19655–19668.
- Ren, T.; Wang, L.; Chang, C.; Li, X. Machine learning-assisted multiphysics coupling performance optimization in a photocatalytic hydrogen production system. Energy Convers. Manag. 2020, 216, 112935.
- Mageed, A.K. Modeling photocatalytic hydrogen production from ethanol over copper oxide nanoparticles: A comparative analysis of various machine learning techniques. Biomass Convers. Biorefin. 2021, 13, 3319–3327.
- Ramkumar, G.; Tamilselvi, M.; Jebaseelan, S.S.; Mohanavel, V.; Kamyab, H.; Anitha, G.; Prabu, R.T.; Rajasimman, M. Enhanced machine learning for nanomaterial identification of photo thermal hydrogen production. Int. J. Hydrogen Energy 2024, 52, 696–708.
- Yurova, V.Y.; Potapenko, K.O.; Aliev, T.A.; Kozlova, E.A.; Skorb, E.V. Optimization of g-C3N4 synthesis parameters based on machine learning to predict the efficiency of photocatalytic hydrogen production. Int. J. Hydrogen Energy 2024, 81, 193–203.
- Wang, F.; Harindintwali, J.D.; Yuan, Z.; Wang, M.; Wang, F.; Li, S.; Yin, Z.; Huang, L.; Fu, Y.; Li, L.; et al. Technologies and perspectives for achieving carbon neutrality. Innovation 2021, 2, 100180.
- Zohuri, B. Hydrogen Energy: Challenges and Solutions for a Cleaner Future; Springer International Publishing: Cham, Switzerland, 2019.
- Da Rosa, A.V.; Ordonez, J.C. Fundamentals of Renewable Energy Processes; Academic Press: Cambridge, MA, USA, 2021.
- Olabi, A.G.; Abdelkareem, M.A. Renewable energy and climate change. Renew. Sustain. Energy Rev. 2022, 158, 112111.
- Abbasi, K.R.; Shahbaz, M.; Zhang, J.; Irfan, M.; Alvarado, R. Analyze the environmental sustainability factors of China: The role of fossil fuel energy and renewable energy. Renew. Energy 2022, 187, 390–402.
- Vivek, C.M.; Ramkumar, P.; Srividhya, P.K.; Sivasubramanian, M. Recent strategies and trends in implanting of renewable energy sources for sustainability–A review. Mater. Today Proc. 2021, 46, 8204–8208.
- IEA. The Energy World Is Set to Change Significantly by 2030, Based on Today’s Policy Settings Alone. Available online: https://www.iea.org/news/the-energy-world-is-set-to-change-significantly-by-2030-based-on-today-s-policy-settings-alone (accessed on 20 July 2024).
- Average Monthly Carbon Dioxide (CO2) Levels in the Atmosphere Worldwide from 1990 to 2024. Available online: https://www.statista.com/statistics/1091999/atmospheric-concentration-of-co2-historic/ (accessed on 22 July 2024).
- Basics of Climate Change. Available online: https://www.epa.gov/climatechange-science/basics-climate-change (accessed on 22 July 2024).
- Xu, X.; Zhou, Q.; Yu, D. The future of hydrogen energy: Bio-hydrogen production technology. Int. J. Hydrogen Energy 2022, 47, 33677–33698.
- Hassan, Q.; Sameen, A.Z.; Salman, H.M.; Jaszczur, M.; Al-Jiboory, A.K. Hydrogen energy future: Advancements in storage technologies and implications for sustainability. J. Energy Storage 2023, 72, 108404.
- Sharma, S.; Agarwal, S.; Jain, A. Significance of hydrogen as economic and environmentally friendly fuel. Energies 2021, 14, 7389.
- Yue, M.; Lambert, H.; Pahon, E.; Roche, R.; Jemei, S.; Hissel, D. Hydrogen energy systems: A critical review of technologies, applications, trends and challenges. Renew. Sustain. Energy Rev. 2021, 146, 111180.
- Penner, S.S. Steps toward the hydrogen economy. Energy 2006, 31, 33–43.
- Hydrogen and Fuel Cell Technologies Office. Fuel Cells. Available online: https://www.energy.gov/eere/fuelcells/fuel-cells (accessed on 29 July 2024).
- International Renewable Energy Agency. Overview. Available online: https://www.irena.org/Energy-Transition/Technology/Hydrogen (accessed on 11 November 2024).
- Aziz, M.; Darmawan, A.; Juangsa, F.B. Hydrogen production from biomasses and wastes: A technological review. Int. J. Hydrogen Energy 2021, 46, 33756–33781.
- Arcos, J.M.M.; Santos, D.M. The hydrogen color spectrum: Techno-economic analysis of the available technologies for hydrogen production. Gases 2023, 3, 25–46.
- Aydın, Y.; Işıkdağ, Ü.; Bekdaş, G.; Nigdeli, S.M.; Geem, Z.W. Use of machine learning techniques in soil classification. Sustainability 2023, 15, 2374.
- Cakiroglu, C.; Aydın, Y.; Bekdaş, G.; Geem, Z.W. Interpretable predictive modelling of basalt fiber reinforced concrete splitting tensile strength using ensemble machine learning methods and SHAP approach. Materials 2023, 16, 4578.
- Aydın, Y.; Bekdaş, G.; Nigdeli, S.M.; Isıkdağ, Ü.; Kim, S.; Geem, Z.W. Machine learning models for ecofriendly optimum design of reinforced concrete columns. Appl. Sci. 2023, 13, 4117.
- Aydın, Y.; Cakiroglu, C.; Bekdaş, G.; Işıkdağ, Ü.; Kim, S.; Hong, J.; Geem, Z.W. Neural network predictive models for alkali-activated concrete carbon emission using metaheuristic optimization algorithms. Sustainability 2024, 16, 142.
- Hu, G.; Liu, Y.; Chu, X.; Liu, Z. Fourier ptychographic layer-based imaging of hazy environments. Results Phys. 2024, 56, 107216.
- Aydın, Y.; Cakiroglu, C.; Bekdaş, G.; Geem, Z.W. Explainable Ensemble Learning and Multilayer Perceptron modeling for compressive strength prediction of Ultra-high-performance concrete. Biomimetics 2024, 9, 544.
- Cakiroglu, C.; Aydın, Y.; Bekdaş, G.; Isikdag, U.; Sadeghifam, A.N.; Abualigah, L. Cooling load prediction of a double-story terrace house using ensemble learning techniques and genetic programming with SHAP approach. Energy Build. 2024, 313, 114254.
- Jayasinghe, T.; Chen, B.W.; Zhang, Z.; Meng, X.; Li, Y.; Gunawardena, T.; Mangalathu, S.; Mendis, P. Data-driven shear strength predictions of recycled aggregate concrete beams with/without shear reinforcement by applying machine learning approaches. Constr. Build. Mater. 2023, 387, 131604.
- Qiu, W.X.; Si, Z.Z.; Mou, D.S.; Dai, C.Q.; Li, J.T.; Liu, W. Data-driven vector degenerate and nondegenerate solitons of coupled nonlocal nonlinear Schrödinger equation via improved PINN algorithm. Nonlinear Dyn. 2024, 1–14.
- Olyanasab, A.; Annabestani, M. Leveraging Machine Learning for Personalized Wearable Biomedical Devices: A Review. J. Pers. Med. 2024, 14, 203.
- Yan, L.; Zhong, S.; Igou, T.; Gao, H.; Li, J.; Chen, Y. Development of machine learning models to enhance element-doped g-C3N4 photocatalyst for hydrogen production through splitting water. Int. J. Hydrogen Energy 2022, 47, 34075–34089.
- Xu, Y.; Ju, C.W.; Li, B.; Ma, Q.S.; Chen, Z.; Zhang, L.; Chen, J. Hydrogen evolution prediction for alternating conjugated copolymers enabled by machine learning with multidimension fragmentation descriptors. ACS Appl. Mater. Interfaces 2021, 13, 34033–34042.
- Haq, Z.U.; Ullah, H.; Khan, M.N.A.; Naqvi, S.R.; Ahsan, M. Hydrogen production optimization from sewage sludge supercritical gasification process using machine learning methods integrated with genetic algorithm. Chem. Eng. Res. Des. 2022, 184, 614–626.
- Ajmal, Z.; Haq, M.U.; Naciri, Y.; Djellabi, R.; Hassan, N.; Zaman, S.; Murtaza, A.; Kumar, A.; Al-Sehemi, A.G.; Algarni, H.; et al. Recent advancement in conjugated polymers based photocatalytic technology for air pollutants abatement: Cases of CO2, NOx, and VOCs. Chemosphere 2022, 308, 136358.
- Abbas, A.M.; Hammad, S.A.; Sallam, H.; Mahfouz, L.; Ahmed, M.K.; Abboudy, S.M.; Ahmed, A.E.; Alhag, S.K.; Taher, M.A.; Alrumman, S.A.; et al. Biosynthesis of zinc oxide nanoparticles using leaf extract of Prosopis juliflora as potential photocatalyst for the treatment of paper mill effluent. Appl. Sci. 2021, 11, 11394.
- Mohanty, S.; Moulick, S.; Maji, S.K. Adsorption/photodegradation of crystal violet (basic dye) from aqueous solution by hydrothermally synthesized titanate nanotube (TNT). J. Water Process Eng. 2020, 37, 101428.
- Lofrano, G.; Ubaldi, F.; Albarano, L.; Carotenuto, M.; Vaiano, V.; Valeriani, F.; Libralato, G.; Gianfranceschi, G.; Fratoddi, I.; Meric, S.; et al. Antimicrobial effectiveness of innovative photocatalysts: A review. Nanomaterials 2022, 12, 2831.
- Chang, J.; Ma, J.; Ma, Q.; Zhang, D.; Qiao, N.; Hu, M.; Ma, H. Adsorption of methylene blue onto Fe3O4/activated montmorillonite nanocomposite. Appl. Clay Sci. 2016, 119, 132–140.
- Bhom, F.; Isa, Y.M. Photocatalytic Hydrogen Production Using TiO2-based Catalysts: A Review. Glob. Chall. 2024, 8, 2400134.
- Ma, Y.; Wang, X.; Jia, Y.; Chen, X.; Han, H.; Li, C. Titanium dioxide-based nanomaterials for photocatalytic fuel generations. Chem. Rev. 2014, 114, 9987–10043.
- Chen, D.; Cheng, Y.; Zhou, N.; Chen, P.; Wang, Y.; Li, K.; Huo, S.; Cheng, P.; Peng, P.; Zhang, R.; et al. Photocatalytic degradation of organic pollutants using TiO2-based photocatalysts: A review. J. Clean. Prod. 2020, 268, 121725.
- Xia, C.; Nguyen, T.H.C.; Nguyen, X.C.; Kim, S.Y.; Nguyen, D.L.T.; Raizada, P.; Singh, P.; Nguyen, V.-H.; Nguyen, C.C.; Hoang, V.C.; et al. Emerging cocatalysts in TiO2-based photocatalysts for light-driven catalytic hydrogen evolution: Progress and perspectives. Fuel 2022, 307, 121745.
- Arora, I.; Chawla, H.; Chandra, A.; Sagadevan, S.; Garg, S. Advances in the strategies for enhancing the photocatalytic activity of TiO2: Conversion from UV-light active to visible-light active photocatalyst. Inorg. Chem. Commun. 2022, 143, 109700.
- Albahra, S.; Gorbett, T.; Robertson, S.; D’Aleo, G.; Kumar, S.V.S.; Ockunzzi, S.; Lallo, D.; Hu, B.; Rashidi, H.H. Artificial intelligence and machine learning overview in pathology & laboratory medicine: A general review of data preprocessing and basic supervised concepts. In Seminars in Diagnostic Pathology; WB Saunders: Philadelphia, PA, USA, 2023; Volume 40, pp. 71–87.
- Tawakuli, A.; Havers, B.; Gulisano, V.; Kaiser, D.; Engel, T. Survey: Time-series data preprocessing: A survey and an empirical analysis. J. Eng. Res. 2024.
- García, S.; Luengo, J.; Herrera, F. Data Preprocessing in Data Mining; Springer International Publishing: Cham, Switzerland, 2015; Volume 72, pp. 59–139.
- Python (3.9) [Computer Software]. Available online: http://python.org (accessed on 5 August 2024).
- Anaconda3 [Computer Software]. Available online: https://anaconda.org/ (accessed on 5 August 2024).
- Raybaut, P. Spyder-Documentation. 2009. Available online: https://www.spyder-ide.org/ (accessed on 7 August 2024).
- Harris, C.R.; Millman, K.J.; Van Der Walt, S.J.; Gommers, R.; Virtanen, P.; Cournapeau, D.; Wieser, E.; Taylor, J.; Berg, S.; Smith, N.J.; et al. Array programming with NumPy. Nature 2020, 585, 357–362.
- About Pandas. Available online: https://pandas.pydata.org/ (accessed on 7 August 2024).
- Pedregosa, F.; Varoquaux, G.; Gramfort, A.; Michel, V.; Thirion, B.; Grisel, O.; Blondel, M.; Prettenhofer, P.; Weiss, R.; Dubourg, V. Scikit-learn: Machine learning in Python. J. Mach. Learn. Res. 2011, 12, 2825–2830.
- Ma, Y.; Zhang, Z. Travel mode choice prediction using deep neural networks with entity embeddings. IEEE Access 2020, 8, 64959–64970.
- Yedla, A.; Kakhki, F.D.; Jannesari, A. Predictive modeling for occupational safety outcomes and days away from work analysis in mining operations. Int. J. Environ. Res. Public Health 2020, 17, 7054.
- Hale, T.; Angrist, N.; Goldszmidt, R.; Kira, B.; Petherick, A.; Phillips, T.; Webster, S.; Cameron-Blake, E.; Hallas, L.; Majumdar, S.; et al. A global panel database of pandemic policies (Oxford COVID-19 Government Response Tracker). Nat. Hum. Behav. 2021, 5, 529–538.
- Bennett, D.A. How can I deal with missing data in my study? Aust. N. Z. J. Public Health 2001, 25, 464–469.
- Schafer, J.L. Multiple imputation: A primer. Stat. Methods Med. Res. 1999, 8, 3–15.
- Mealpy. Available online: https://github.com/thieu1995/mealpy (accessed on 8 August 2024).
- Lima, A.A.; Mridha, M.F.; Das, S.C.; Kabir, M.M.; Islam, M.R.; Watanobe, Y. A comprehensive survey on the detection, classification, and challenges of neurological disorders. Biology 2022, 11, 469.
- McCulloch, W.S.; Pitts, W. A Logical Calculus of the Ideas Immanent in Nervous Activity. Bull. Math. Biophys. 1943, 5, 115–133.
- Öztemel, E. Artificial Neural Networks, 3rd ed.; Papatya Publishing Education: Istanbul, Turkey, 2012.
- Calandra, H.; Gratton, S.; Riccietti, E.; Vasseur, X. On a multilevel Levenberg–Marquardt method for the training of artificial neural networks and its application to the solution of partial differential equations. Optim. Methods Softw. 2022, 37, 361–386.
- Panimalar, S.A.; Krishnakumar, A. Customer churn prediction model in cloud environment using DFE-WUNB: ANN deep feature extraction with weight updated tuned Naïve bayes classification with block-jacobi SVD dimensionality reduction. Eng. Appl. Artif. Intell. 2023, 126, 107015.
- Yalçın, O.G. Deep learning and neural networks overview. In Applied Neural Networks with Tensorflow 2: API Oriented Deep Learning with Python; Apress: New York, NY, USA, 2021; pp. 57–80.
- Uma, U.U.; Nmadu, D.; Ugwuanyi, N.; Ogah, O.E.; Eli-Chukwu, N.; Eheduru, M.; Ekwue, A. Adaptive overcurrent protection scheme coordination in presence of distributed generation using radial basis neural network. Prot. Control Mod. Power Syst. 2023, 8, 1–19.
- Gülcü, A.; Kuş, Z. A Survey of Hyper-parameter Optimization Methods in Convolutional Neural Networks. Gazi Univ. J. Sci. 2019, 7, 503–522.
- Misra, D. Mish: A self regularized non-monotonic activation function. arXiv 2019, arXiv:1908.08681.
- Clevert, D.A.; Unterthiner, T.; Hochreiter, S. Fast and accurate deep network learning by exponential linear units (ELUs). arXiv 2015, arXiv:1511.07289.
- Yang, X.S. Flower pollination algorithm for global optimization. In Proceedings of the International Conference on Unconventional Computing and Natural Computation, Orléans, France, 3–7 September 2012; Springer: Berlin/Heidelberg, Germany, 2012; pp. 240–249.
- Albadr, M.A.; Tiun, S.; Ayob, M.; Al-Dhief, F. Genetic algorithm based on natural selection theory for optimization problems. Symmetry 2020, 12, 1758.
- Xue, Y.; Zhu, H.; Liang, J.; Słowik, A. Adaptive crossover operator based multi-objective binary genetic algorithm for feature selection in classification. Knowl.-Based Syst. 2021, 227, 107218.
- Geem, Z.W.; Kim, J.H.; Loganathan, G.V. A new heuristic optimization algorithm: Harmony search. Simulation 2001, 76, 60–68.
- Rao, R. Jaya: A simple and new optimization algorithm for solving constrained and unconstrained optimization problems. Int. J. Ind. Eng. Comput. 2016, 7, 19–34.
- Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN’95-International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995; IEEE: New York, NY, USA, 1995; Volume 4, pp. 1942–1948.
- Rao, R.V.; Savsani, V.J.; Vakharia, D.P. Teaching–learning-based optimization: A novel method for constrained mechanical design optimization problems. Comput.-Aided Des. 2011, 43, 303–315.
- Chen, Y.; Pi, D. An innovative flower pollination algorithm for continuous optimization problem. Appl. Math. Model. 2020, 83, 237–265.
- Nancy, M.; Stephen, S.E.A. A comprehensive review on harmony search algorithm. Ann. Rom. Soc. Cell Biol. 2021, 25, 5480–5483.
- Bellman, R.E.; Dreyfus, S.E. Applied Dynamic Programming; Princeton University Press: Princeton, NJ, USA, 2015.
- Li, W.; He, Z.; Zheng, J.; Hu, Z. Improved flower pollination algorithm and its application in user identification across social networks. IEEE Access 2019, 7, 44359–44371.
- What Is The Time Complexity Of A Genetic Algorithm? Available online: https://www.ontechnos.com/the-time-complexity-of-a-genetic-algorithm (accessed on 25 November 2024).
- Tian, Z.; Zhang, C. An improved harmony search algorithm and its application in function optimization. J. Inf. Process. Syst. 2018, 14, 1237–1253.
- Shingade, S.; Niyogi, R.; Pichare, M. Hybrid Particle Swarm Optimization-Jaya Algorithm for Team Formation. Algorithms 2024, 17, 379.
- Zhang, X.; Zou, D.; Shen, X. A novel simple particle swarm optimization algorithm for global optimization. Mathematics 2018, 6, 287.
- Dastan, M.; Shojaee, S.; Hamzehei-Javaran, S.; Goodarzimehr, V. Hybrid teaching–learning-based optimization for solving engineering and mathematical problems. J. Braz. Soc. Mech. Sci. Eng. 2022, 44, 431.
- Yadav, S.; Shukla, S. Analysis of k-fold cross-validation over hold-out validation on colossal datasets for quality classification. In Proceedings of the 2016 IEEE 6th International Conference on Advanced Computing (IACC), Bhimavaram, India, 27–28 February 2016; IEEE: New York, NY, USA, 2016; pp. 78–83.
- Talukder, M.A.; Islam, M.M.; Uddin, M.A.; Akhter, A.; Hasan, K.F.; Moni, M.A. Machine learning-based lung and colon cancer detection using deep feature extraction and ensemble learning. Expert Syst. Appl. 2022, 205, 117695.
- Zhao, L.; Lee, S.; Jeong, S.P. Decision tree application to classification problems with boosting algorithm. Electronics 2021, 10, 1903.
- Winterburn, J.L.; Voineskos, A.N.; Devenyi, G.A.; Plitman, E.; de la Fuente-Sandoval, C.; Bhagwat, N.; Graff-Guerrero, A.; Knight, J.; Chakravarty, M.M. Can we accurately classify schizophrenia patients from healthy controls using magnetic resonance imaging and machine learning? A multi-method and multi-dataset study. Schizophr. Res. 2019, 214, 3–10.
- Yacouby, R.; Axman, D. Probabilistic Extension of Precision, Recall, and F1 Score for More Thorough Evaluation of Classification Models. In Proceedings of the First Workshop on Evaluation and Comparison of NLP Systems, Online, 20 November 2020; pp. 79–91.
- Derczynski, L. Complementarity, F-score, and NLP Evaluation. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC’16), Portorož, Slovenia, 23–28 May 2016; pp. 261–266.
- Paszke, A.; Gross, S.; Massa, F.; Lerer, A.; Bradbury, J.; Chanan, G.; Killeen, T.; Lin, Z.; Gimelshein, N.; Antiga, L.; et al. PyTorch: An imperative style, high-performance deep learning library. Adv. Neural Inf. Process. Syst. 2019, 32.
- HyperNetExplorer. Available online: https://hyperopt.streamlit.app (accessed on 11 August 2024).
- Van Thieu, N.; Mirjalili, S. MEALPY: An open-source library for latest meta-heuristic algorithms in Python. J. Syst. Archit. 2023, 139, 102871.
- Van Thieu, N.; Barma, S.D.; Van Lam, T.; Kisi, O.; Mahesha, A. Groundwater level modeling using augmented artificial ecosystem optimization. J. Hydrol. 2023, 617, 129034.
- Ahmed, A.N.; Van Lam, T.; Hung, N.D.; Van Thieu, N.; Kisi, O.; El-Shafie, A. A comprehensive comparison of recent developed meta-heuristic algorithms for streamflow time series forecasting problem. Appl. Soft Comput. 2021, 105, 107282.
- Dorogush, A.V.; Ershov, V.; Gulin, A. CatBoost: Gradient boosting with categorical features support. arXiv 2018, arXiv:1810.11363.
- Bibal, A.; Delchevalerie, V.; Frénay, B. DT-SNE: t-SNE discrete visualizations as decision tree structures. Neurocomputing 2023, 529, 101–112.
- Assis, A.; Véras, D.; Andrade, E. Explainable Artificial Intelligence-An Analysis of the Trade-offs Between Performance and Explainability. In Proceedings of the 2023 IEEE Latin American Conference on Computational Intelligence (LA-CCI), Recife, Brazil, 29 October–1 November 2023; IEEE: New York, NY, USA, 2023; pp. 1–6.
- Cheraghi, Y.; Kord, S.; Mashayekhizadeh, V. Application of machine learning techniques for selecting the most suitable enhanced oil recovery method; challenges and opportunities. J. Pet. Sci. Eng. 2021, 205, 108761.
- Shobha, G.; Rangaswamy, S. Chapter 8—Machine Learning. In Handbook of Statistics; Elsevier: Amsterdam, The Netherlands, 2018; Volume 38.
- Özbay Karakuş, M.; Er, O. A comparative study on prediction of survival event of heart failure patients using machine learning algorithms. Neural Comput. Appl. 2022, 34, 13895–13908.
- Halder, R.K.; Uddin, M.N.; Uddin, M.A.; Aryal, S.; Khraisat, A. Enhancing K-nearest neighbor algorithm: A comprehensive review and performance analysis of modifications. J. Big Data 2024, 11, 113.
- Qu, L.; Pei, Y. A Comprehensive Review on Discriminant Analysis for Addressing Challenges of Class-Level Limitations, Small Sample Size, and Robustness. Processes 2024, 12, 1382.
- Lewis, D.N. Machine Learning Made Easy with R: An Intuitive Step by Step Blueprint for Beginners; CreateSpace Independent Publishing Platform: Scotts Valley, CA, USA, 2017.
- Anowar, F.; Sadaoui, S.; Selim, B. Conceptual and empirical comparison of dimensionality reduction algorithms (pca, kpca, lda, mds, svd, lle, isomap, le, ica, t-sne). Comput. Sci. Rev. 2021, 40, 100378.
- Alpaydin, E. Introduction to Machine Learning; MIT Press: Cambridge, MA, USA, 2020; ISBN 9780262043793.
- Breiman, L. Bagging predictors. Mach. Learn. 1996, 24, 123–140.
- Nisbet, R.; Miner, G.; Yale, K. Chapter 9—Classification. In Handbook of Statistical Analysis and Data Mining Applications, 2nd ed.; Academic Press: Cambridge, MA, USA, 2018; pp. 169–186. ISBN 9780124166325.
- Kiangala, S.K.; Wang, Z. An effective adaptive customization framework for small manufacturing plants using extreme gradient boosting-XGBoost and random forest ensemble learning algorithms in an Industry 4.0 environment. Mach. Learn. Appl. 2021, 4, 100024.
- Guryanov, A. Histogram-based algorithm for building gradient boosting ensembles of piecewise linear decision trees. In Proceedings of the Analysis of Images, Social Networks and Texts: 8th International Conference, AIST 2019, Kazan, Russia, 17–19 July 2019; Revised Selected Papers 8; Springer International Publishing: Cham, Switzerland, 2019; pp. 39–50.
- Features in Histogram Gradient Boosting Trees. Available online: https://scikit-learn.org/1.5/auto_examples/ensemble/plot_hgbt_regression.html (accessed on 11 November 2024).
- Parameter Tuning. Available online: https://catboost.ai/en/docs/concepts/parameter-tuning (accessed on 11 November 2024).
- Borisov, V.; Leemann, T.; Seßler, K.; Haug, J.; Pawelczyk, M.; Kasneci, G. Deep neural networks and tabular data: A survey. IEEE Trans. Neural Netw. Learn. Syst. 2022, 35, 7499–7519.
Year | CO2 Concentration (ppm) |
---|---|
2014 (December) | 399.08 |
2015 (December) | 402.06 |
2018 (December) | 409.27 |
2019 (December) | 411.98 |
2022 (December) | 418.99 |
2023 (December) | 421.86 |
2024 (June) | 426.91 |
Gas-Phase Variables | Range |
---|---|
Light | Vis, UV, UVVis |
Temperature (K) | 278–473 |
Pressure (bar) | 0.7–4 |
Feed Molar Ratio | 0.2–97.7 |
Additive | None, Na2CO3, NaHCO3, KHCO3, K2CO3, H2SO4 |
Amount of Additive (mol/L) | 0–0.5 |
Semiconductor Preparation Method | None, Thermal method, Sol gel, Hydrothermal, Chemical reduction, Commercial, Solid state, Ion Exchange |
Co-catalyst Preparation Method | None, Thermal method, Sol gel, IW, Deposition-precipitation, Hydrothermal, Photodeposition, ALD, Solid state |
CalcTemp (K) | 298–1223 |
Co-catalyst 1 Loading, wt% | 0–100 |
Co-catalyst 1 | None, Pd, Cu, Ag, Pt, Co, V, Cr, Ru, Au, MgO, In2O3, CuO
Co-catalyst 2 Loading, wt% | 0–2
Co-catalyst 2 | None, Fe, Cu
Dopant1/Semiconductor1 Ratio | 0–90
Dop1 | None, Fe, Mg, Cu, V, W
Dopant2/Semiconductor1 Ratio | 0–5
Dop2 | None, W
Semiconductor1/Semiconductor2 Ratio | 2.3–100
Semiconductor 1 | C3N4, CeO2, GO, SrTiO3, TiO2, ZnS, Zn2Ti3O8, ZrO2
Semiconductor 2 | None, SiO2, TCPP, Modified Zeolite, ZnTiO3, ZnFe2O4 |
Band Gap Energy (eV) | 1.54–3.08 |
Total Gas Yield (µmol/gcat/h) | 0.01–310 |
Liquid-Phase Variables | Range |
---|---|
Light S. | UV, Vis |
Temperature (K) | 278–973 |
Pressure (bar) | 0.25–7.5 |
Feed Molar Ratio | None
Additive | None, K2CO3, NaOH, C3H8O, CH3CN, TEOA, NaHCO3, KHCO3
Amount of Additive (mol/L) | 0–6 |
Semiconductor Preparation Method | Sol gel, Chemical, Solid state, Hydrothermal, None, Commercial, Polymerizable complex method, Thermal methods |
Co-catalyst Preparation Method | None, Sol gel, Chemical, Photodeposition, IW, Freeze-drying, Thermal methods, Hydrothermal, Solid state, Deposition-precipitation |
CalcTemp (K) | 298–1473 |
Co-catalyst 1 Loading, wt% | 0–175 |
Co-catalyst 1 | None, NiO, Ag, Pd, Cu, Ru, CPDs, Au, Co, AgBr, Pt, Ru (II) dinuclear complex, Bi2S3, Eu |
Co-catalyst 2 Loading, wt% | 0–1 |
Co-catalyst 2 | None, Co, Fe, Ni, Cu, Ru(bpy3)+2, Ru (II) dinuclear complex |
Dopant1/Semiconductor1 Ratio | 0–25.22 |
Dop1 | None, S |
Dopant2/Semiconductor1 Ratio | 0 |
Dop2 | None
Semiconductor1/Semiconductor2 Ratio | 0–100
Semiconductor 1 | TiO2, rG, SiC, InTaO4, Ta2O5, CaFe2O4, BaLa4Ti4O15, CdS, ZnO, ZnTe, C3N4, BP, PbBiO2Br, CeO2, Fe2O3, NiO, SrNb2O6, TaON, BiVO4, rGO, ZnS, ZnGa2O4
Semiconductor 2 | ACF, CuI, Ta2O5, SiO2, WO3, ZnTe, Cu2O, Fe2O3, NiO, rGO, CdS, Modified Molecular Sieve, CeO2
Band Gap Energy (eV) | 1.6–4.67 |
Total Gas Yield (µmol/gcat/h) | 0.01–840 |
Class | Total Gas Yield (µmol/gcat/h) |
---|---|
A | 6.97–310 |
B | 0.75–6.97 |
C | 0.01–0.75 |
Class | Total Gas Yield (µmol/gcat/h) |
---|---|
A | 27.9–840 |
B | 4.05–27.9 |
C | 0–4.05 |
Variable Name (Gas-Phase) | Size | Variable Name (Liquid-Phase) | Size
---|---|---|---|
Light S. | 2 | Light S. | 1 |
Additive | 5 | Additive | 7 |
Semiconductor Preparation Method | 7 | Semiconductor Preparation Method | 7 |
Co-catalyst Preparation Method | 9 | Co-catalyst Preparation Method | 9 |
Co-catalyst 1 | 12 | Co-catalyst 1 | 13 |
Co-catalyst 2 | 2 | Co-catalyst 2 | 6 |
Dop1 | 5 | Dop1 | 1 |
Semiconductor 1 | 7 | Semiconductor 1 | 23 |
Semiconductor 2 | 6 | Semiconductor 2 | 13 |
Algorithm | Time Complexity | Space Complexity |
---|---|---|
FPA | O(T⋅N⋅f(D)) [82] | O(N⋅D) |
GA | O(T⋅N⋅(f(D) + log N + D)) [83] | O(N⋅D) |
HS | O(N⋅D + T⋅(D + f(D) + N)) [84] | O(N⋅D) |
JA | O(T⋅N⋅f(D)) [85] | O(N⋅D) |
PSO | O(T⋅N⋅f(D)) [86] | O(N⋅D) |
TLBO | O(T⋅N⋅f(D)) [87] | O(N⋅D) |
Confusion Matrix | Actually Positive (1) | Actually Negative (0) |
---|---|---|
Predicted Positive (1) | True Positives (TPs) | False Positives (FPs) |
Predicted Negative (0) | False Negatives (FNs) | True Negatives (TNs) |
Parameter Name | Lower Bound | Upper Bound | Options |
---|---|---|---|
Number of Hidden Layers (HL) | 0 | 2 | 0: Single HL; 1: Two HLs; 2: Three HLs
Number of Neurons in HL = 1 | 0 | 6 | 0: 8; 1: 16; 2: 32; 3: 64; 4: 128; 5: 256; 6: 512
Number of Neurons in HL = 2 | 0 | 6 | 0: 8; 1: 16; 2: 32; 3: 64; 4: 128; 5: 256; 6: 512
Number of Neurons in HL = 3 | 0 | 6 | 0: 8; 1: 16; 2: 32; 3: 64; 4: 128; 5: 256; 6: 512
Activation Function of HL = 1 | 0 | 6 | 0: Leaky-ReLU; 1: Sigmoid; 2: Tanh; 3: ReLU; 4: LogSigmoid; 5: ELU; 6: Mish
Activation Function of HL = 2 | 0 | 6 | 0: Leaky-ReLU; 1: Sigmoid; 2: Tanh; 3: ReLU; 4: LogSigmoid; 5: ELU; 6: Mish
Activation Function of HL = 3 | 0 | 6 | 0: Leaky-ReLU; 1: Sigmoid; 2: Tanh; 3: ReLU; 4: LogSigmoid; 5: ELU; 6: Mish
Algorithm | Parameters |
---|---|
DT | criterion = ‘gini’, splitter = ‘best’, max_depth = None, min_samples_split = 2, min_samples_leaf = 1, min_weight_fraction_leaf = 0.0, max_features = None, random_state = None, max_leaf_nodes = None, min_impurity_decrease = 0.0, class_weight = None, ccp_alpha = 0.0, monotonic_cst = None
Gaussian NB | priors = None, var_smoothing = 1 × 10−9 |
KNN | n_neighbors = 5, weights = ‘uniform’, algorithm = ‘auto’, leaf_size = 30, p = 2, metric = ‘minkowski’, metric_params = None, n_jobs = None |
LDA | solver = ‘svd’, shrinkage = None, priors = None, n_components = None, store_covariance = False, tol = 0.0001, covariance_estimator = None |
SVM | C = 1.0, kernel = ‘rbf’, degree = 3, gamma = ‘scale’, coef0 = 0.0, shrinking = True, probability = False, tol = 0.001, cache_size = 200, class_weight = None, verbose = False, max_iter = −1, decision_function_shape = ‘ovr’, break_ties = False, random_state = None
RF | n_estimators = 100, criterion = ‘gini’, max_depth = None, min_samples_split = 2, min_samples_leaf = 1, min_weight_fraction_leaf = 0.0, max_features = ‘sqrt’, max_leaf_nodes = None, min_impurity_decrease = 0.0, bootstrap = True, oob_score = False, n_jobs = None, random_state = None, verbose = 0, warm_start = False, class_weight = None, ccp_alpha = 0.0, max_samples = None, monotonic_cst = None |
GB | loss = ‘log_loss’, learning_rate = 0.1, n_estimators = 100, subsample = 1.0, criterion = ‘friedman_mse’, min_samples_split = 2, min_samples_leaf = 1, min_weight_fraction_leaf = 0.0, max_depth = 3, min_impurity_decrease = 0.0, init = None, random_state = None, max_features = None, verbose = 0, max_leaf_nodes = None, warm_start = False, validation_fraction = 0.1, n_iter_no_change = None, tol = 0.0001, ccp_alpha = 0.0
HGB | loss = ‘log_loss’, learning_rate = 0.1, max_iter = 100, max_leaf_nodes = 31, max_depth = None, min_samples_leaf = 20, l2_regularization = 0.0, max_features = 1.0, max_bins = 255, categorical_features = ‘from_dtype’, monotonic_cst = None, interaction_cst = None, warm_start = False, early_stopping = ‘auto’, scoring = ‘loss’, validation_fraction = 0.1, n_iter_no_change = 10, tol = 1 × 10−7, verbose = 0, random_state = None, class_weight = None
CatBoost | iterations = None, learning_rate = None, depth = None, l2_leaf_reg = None, model_size_reg = None, rsm = None, loss_function = None, border_count = None, feature_border_type = None, per_float_feature_quantization = None, input_borders = None, output_borders = None, fold_permutation_block = None, od_pval = None, od_wait = None, od_type = None, nan_mode = None, counter_calc_method = None, leaf_estimation_iterations = None, leaf_estimation_method = None, thread_count = None, random_seed = None, use_best_model = None, verbose = None, logging_level = None, metric_period = None, ctr_leaf_count_limit = None, store_all_simple_ctr = None, max_ctr_complexity = None, has_time = None, allow_const_label = None, classes_count = None, class_weights = None, auto_class_weights = None, one_hot_max_size = None, random_strength = None, name = None, ignored_features = None, train_dir = None, custom_loss = None, custom_metric = None, eval_metric = None, bagging_temperature = None, save_snapshot = None, snapshot_file = None, snapshot_interval = None, fold_len_multiplier = None, used_ram_limit = None, gpu_ram_part = None, allow_writing_files = None, final_ctr_computation_mode = None, approx_on_full_history = None, boosting_type = None, simple_ctr = None, combinations_ctr = None, per_feature_ctr = None, task_type = None, device_config = None, devices = None, bootstrap_type = None, subsample = None, sampling_unit = None, dev_score_calc_obj_block_size = None, max_depth = None, n_estimators = None, num_boost_round = None, num_trees = None, colsample_bylevel = None, random_state = None, reg_lambda = None, objective = None, eta = None, max_bin = None, scale_pos_weight = None, gpu_cat_features_storage = None, data_partition = None, metadata = None, early_stopping_rounds = None, cat_features = None, grow_policy = None, min_data_in_leaf = None, min_child_samples = None, max_leaves = None, num_leaves = None, score_function = None, leaf_estimation_backtracking = None, ctr_history_unit = None, monotone_constraints = None, feature_weights = None, penalties_coefficient = None, first_feature_use_penalties = None, model_shrink_rate = None, model_shrink_mode = None, langevin = None, diffusion_temperature = None, posterior_sampling = None, boost_from_average = None, text_features = None, tokenizers = None, dictionaries = None, feature_calcers = None, text_processing = None, fixed_binary_splits = None
Optimizer | Gas-Phase Accuracy (%) | Gas-Phase Precision | Gas-Phase Recall | Gas-Phase F1 Score |
---|---|---|---|---|
FPA | 94.32 | 0.9412 | 0.9386 | 0.9398 |
GA | 94.36 | 0.9418 | 0.9557 | 0.9486 |
TLBO | 94.49 | 0.9417 | 0.9446 | 0.9431 |
HS | 94.24 | 0.9401 | 0.9475 | 0.9437 |
JA | 94.61 | 0.9446 | 0.9460 | 0.9452 |
PSO | 94.44 | 0.9424 | 0.9439 | 0.9431 |
Optimizer | Liquid-Phase Accuracy (%) | Liquid-Phase Precision | Liquid-Phase Recall | Liquid-Phase F1 Score |
---|---|---|---|---|
FPA | 90.67 | 0.9057 | 0.9025 | 0.9040 |
GA | 90.79 | 0.9073 | 0.9075 | 0.9074 |
TLBO | 90.71 | 0.9061 | 0.9062 | 0.9061 |
HS | 90.51 | 0.9045 | 0.9047 | 0.9046 |
JA | 90.79 | 0.9071 | 0.9075 | 0.9073 |
PSO | 90.63 | 0.9052 | 0.9054 | 0.9053 |
Parameter | Best Gas-Phase Network
---|---|
Type of optimization method | JA
Number of layers in the network | 3
Number of neurons in the input layer | 5
Number of hidden layers | 4
Numbers of hidden layer neurons | 1024–16–16–512
Total number of iterations | 1050
Accuracy (%) | 94.61

Parameter | Best Liquid-Phase Network (GA)
---|---|
Type of optimization method | GA
Number of layers in the network | 2
Number of neurons in the input layer | 5
Number of hidden layers | 4
Numbers of hidden layer neurons | 16–64–1024–128
Total number of iterations | 1050
Number of acc iterations | 384
Accuracy (%) | 90.79

Parameter | Best Liquid-Phase Network (JA)
---|---|
Type of optimization method | JA
Number of layers in the network | 2
Number of neurons in the input layer | 5
Number of hidden layers | 4
Numbers of hidden layer neurons | 1024–64–1024–1024
Total number of iterations | 1050
Number of acc iterations | 236
Accuracy (%) | 90.79
ML Models | Gas-Phase Rank | Liquid-Phase Rank | Gas-Phase Accuracy (%) | Liquid-Phase Accuracy (%) |
---|---|---|---|---|
Decision Tree | 2 | 5 | 59.90 | 31.30 |
GaussianNB | 8 | 6 | 41.20 | 26.76 |
KNeighborsClassifier | 10 | 4 | 33.56 | 32.03 |
LinearDiscriminantAnalysis | 9 | 7 | 40.44 | 25.23 |
SVC | 6 | 9 | 43.96 | 19.16 |
Random Forest | 4 | 3 | 50.89 | 35.92 |
GradientBoostingClassifier | 5 | 2 | 44.30 | 39.15 |
HistGradientBoostingClassifier | 3 | 8 | 50.90 | 24.18 |
CatBoostClassifier | 7 | 10 | 43.20 | 0.39 |
Decision Tree [1] | - | - | 79 | 65 |
HyperNetExplorer (ANN) | 1 | 1 | 94.61 | 90.79 |
Author | Aim | Methods | Results |
---|---|---|---|
Regression-Based | |||
Ren et al. [2] | Photocatalysis to produce maximum hydrogen | Gaussian Process Regression (GPR) | Results reported as successful
Mageed [3] | Prediction of photocatalytic hydrogen production from ethanol using copper oxide (CuO) nanoparticles as a photocatalyst | Artificial Neural Networks (ANNs), Support Vector Machine (SVM), Nonlinear Regression Model (NLM), and Response Surface Model (RSM) | R2 > 0.99 |
Yurova et al. [5] | Prediction of the efficiency of photocatalytic hydrogen production | Gradient Boosting, Multilayer Perceptron, and Random Forest | R2 > 0.99 |
Yan et al. [34] | Prediction of hydrogen production rate | Reported as an ML model | R2 = 0.78 |
Haq et al. [36] | Prediction of hydrogen yield for supercritical water gasification | Support Vector Machine, Ensembled Tree, Gaussian Process Regression, and Artificial Neural Network | R2 = 0.99 |
Classification-Based | |||
Ramkumar et al. [4] | Prediction of the hydrogen production value by photolysis | ANN | Accuracy 92% |
Xu et al. [35] | Prediction of the hydrogen evolution reaction | Electronic property-based ML model | Accuracy 91% |
Saadetnejad et al. [1] | Predict the band gap while investigating the gas phase of photocatalytic CO2 reduction for hydrogen production | Random Forest | Gas-phase accuracy 79%; Liquid-phase accuracy 65%
This study | Prediction of the total hydrogen production of the photocatalysis process | Fine-tuned ANN/NAS | Gas-phase accuracy 94.61%; Liquid-phase accuracy 90.79%