The integration of artificial intelligence (AI) into public health has emerged as a transformative force, reshaping how health data are collected, analyzed, and utilized. The increasing availability of big data, coupled with advancements in machine learning, natural language processing, and deep learning, has enabled AI-driven approaches to improve disease surveillance, optimize healthcare systems, and personalize medical interventions [1]. AI applications in public health have expanded rapidly, offering novel approaches to the early detection of outbreaks, risk assessment, and health resource management.
One of the most significant contributions of AI to public health is its ability to process vast amounts of data in real time, providing insights previously unattainable with traditional epidemiological methods. AI-powered models can predict disease outbreaks by analyzing diverse data sources, including electronic health records, social media trends, and environmental factors [2]. Machine learning algorithms are increasingly used for patient stratification, allowing healthcare providers to identify high-risk individuals and tailor interventions accordingly [3]. Additionally, AI-driven imaging and diagnostic tools have enhanced the accuracy and speed of disease detection, improving patient outcomes and reducing the burden on healthcare systems [4].
The COVID-19 pandemic underscored both the potential and challenges associated with the application of AI in public health. AI played a crucial role in tracking the spread of the virus, forecasting infection trends, and expediting vaccine development [5]. Real-world applications such as AI-driven contact tracing, automated diagnostics, and the predictive modelling of healthcare demand demonstrated how AI could enhance our response to the pandemic [6]. However, the pandemic also revealed significant challenges, including data privacy concerns, biases in AI models due to incomplete or skewed data, and the need for transparent decision-making processes [7]. These lessons emphasize the importance of developing AI systems that are robust, ethical, and adaptable to dynamic public health needs.
Beyond individual patient care, AI is also instrumental in optimizing public health policies and decision-making [8]. Predictive analytics enable governments and health organizations to allocate resources more efficiently, ensuring timely and targeted interventions [9]. AI-driven models assist in understanding social determinants of health, identifying disparities, and formulating strategies to address healthcare access and delivery inequities. As AI technology evolves, its applications in public health will expand, fostering data-driven approaches that enhance prevention, treatment, and overall public health preparedness [10].
The growing reliance on AI in public health underscores the need for interdisciplinary collaboration among data scientists, healthcare professionals, and policymakers. Ethical considerations, such as data privacy, algorithmic bias, and transparency, must be addressed to ensure that AI-driven public health initiatives benefit diverse populations equitably. Despite these challenges, AI holds immense potential to transform public health, paving the way for more proactive and personalized healthcare solutions.
This Special Issue, entitled “Artificial Intelligence Applications in Public Health”, presents a collection of interdisciplinary studies that showcase the impact of AI-driven methodologies on various aspects of public health. The rapid advancements in artificial intelligence, particularly in machine learning and deep learning, have enabled innovative solutions to pressing public health challenges, ranging from disease surveillance and outbreak prediction to clinical decision support and healthcare system optimization.
Out of fourteen manuscripts submitted, seven high-quality papers were accepted for publication. These contributions reflect the current state of AI applications in public health, offering valuable insights into real-world implementations and methodological advancements. The accepted studies cover diverse topics, including AI-driven epidemic modelling, predictive analytics for health trends, and the integration of machine learning techniques for improved diagnostics and patient monitoring.
Several papers emphasize the role of AI in epidemiological modelling, highlighting its utility in understanding disease dynamics under complex social and environmental conditions. Others explore the intersection of AI and digital health, analyzing web behavior to assess misinformation trends and public health awareness. Additionally, contributions focusing on the processing of small medical data demonstrate how novel computational techniques can enhance the accuracy of classification when working with limited datasets.
The selected research highlights the opportunities and challenges associated with deploying AI solutions in public health. Ethical considerations, data privacy, and model interpretability remain critical factors in ensuring that AI-driven interventions are effective and responsible. The findings presented in this Special Issue contribute to ongoing discussions regarding the role of artificial intelligence in public health and provide a foundation for future research in the field.
The paper titled “Epidemiological Implications of War: Machine Learning Estimations of the Russian Invasion’s Effect on Italy’s COVID-19 Dynamics” [11] explores the impact of geopolitical crises on pandemic dynamics using machine learning techniques. The study employs the XGBoost algorithm to model Italy’s COVID-19 trajectory, focusing on the period following the Russian invasion of Ukraine and the resulting influx of refugees. The predictive model demonstrated an 86.03% accuracy in forecasting new COVID-19 cases and 96.29% accuracy for fatalities over 30 days. The findings indicate that the migration of Ukrainian refugees did not significantly alter Italy’s COVID-19 epidemic trends during the study period. The study highlights the robustness of machine learning models in public health forecasting. It underscores the need for high-quality data and rigorous methodological approaches to ensure accurate crisis predictions. The results contribute to the broader discourse on the intersection of geopolitical events and public health, emphasizing the importance of data-driven decision-making in epidemic management.
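Forecasting a case series with a gradient-boosted regressor such as XGBoost typically begins by recasting the time series as a supervised learning problem. The sketch below shows only that standard windowing step; the helper name, lag width, and toy numbers are illustrative and not taken from the paper:

```python
def make_lag_features(series, n_lags):
    """Turn a daily-case series into supervised (X, y) pairs:
    each target value is predicted from the n_lags preceding days."""
    X = [list(series[i:i + n_lags]) for i in range(len(series) - n_lags)]
    y = list(series[n_lags:])
    return X, y

# Toy 7-day series with a 3-day look-back window; the resulting
# (X, y) pairs could then be fed to any regressor, e.g. XGBoost.
cases = [100, 120, 150, 170, 160, 180, 200]
X, y = make_lag_features(cases, n_lags=3)
```

Each row of `X` holds three consecutive days of counts, and the matching entry of `y` is the following day's count.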
The paper entitled “Marburg Virus Outbreak and a New Conspiracy Theory: Findings from a Comprehensive Analysis and Forecasting of Web Behavior” [12] examines the intersection of public health crises and misinformation by analyzing web behavior related to the 2023 Marburg Virus Disease (MVD) outbreak. Using time-series forecasting models, including ARIMA, LSTM, and autocorrelation, the study evaluates search data from 216 global regions to identify trends in online discussions about MVD and its association with conspiracy theories, particularly linking the virus to the U.S. Federal Emergency Management Agency’s emergency alert system. The findings reveal statistically significant correlations between MVD-related searches and zombie-related queries in several regions, demonstrating the rapid spread of misinformation across digital platforms. The results underscore the role of web analytics in tracking public sentiment during health emergencies and highlight the challenges posed by misinformation in crisis communication. This study contributes to the growing body of research on the impact of online behavior in shaping public health narratives and emphasizes the need for proactive misinformation management strategies.
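Of the cited techniques, autocorrelation is the simplest to illustrate: it measures how strongly a search-volume series correlates with lagged copies of itself, which is how periodic structure in web interest can be detected. A minimal sketch (the function name and normalization are illustrative, not the paper's code):

```python
import numpy as np

def autocorr(x, max_lag):
    """Sample autocorrelation of a series for lags 0..max_lag,
    normalized so that lag 0 is exactly 1."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    denom = float(np.sum(x * x))
    return [float(np.sum(x[: len(x) - k] * x[k:]) / denom)
            for k in range(max_lag + 1)]

# A toy series that repeats every 8 samples shows a strong
# positive autocorrelation at lag 8 and a negative one at lag 4.
weekly = [1, 2, 3, 4, 5, 4, 3, 2] * 2
r = autocorr(weekly, 8)
```

A spike in `r` at some lag suggests the search interest repeats on that cycle, a pattern worth checking before fitting an ARIMA model.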
The paper entitled “Impact of Ukrainian Refugees on the COVID-19 Pandemic Dynamics after 24 February 2022” [13] examines the effects of the mass migration caused by the Russian invasion of Ukraine on COVID-19 transmission in host countries. Using a generalized Susceptible–Infectious–Removed model, the study analyzes smoothed daily case data from Ukraine, Poland, Germany, the UK, and Moldova, together with global trends, to estimate variations in epidemic dynamics. The findings indicate a short-term increase in the reproduction number and daily case counts in host countries, particularly in the UK and Germany, where pre-existing infection rates were lower than in Ukraine. However, in Poland and Moldova, where infection rates were similar to Ukraine’s, the influx of refugees had minimal impact on case trends. The study underscores the role of mathematical modelling in epidemic forecasting and highlights the necessity of robust public health strategies to mitigate the spread of disease during humanitarian crises.
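The generalized SIR family used in the study builds on the classic compartmental equations, in which susceptible, infectious, and removed counts evolve under a transmission rate and a removal rate. A minimal discrete-time sketch with daily Euler steps (the parameter values and function name are illustrative, not the paper's generalized model):

```python
def sir_simulate(s0, i0, r0, beta, gamma, days):
    """Classic SIR model stepped forward with daily Euler updates.
    beta: transmission rate, gamma: removal rate; the total
    population n = s0 + i0 + r0 is conserved at every step."""
    s, i, r = float(s0), float(i0), float(r0)
    n = s + i + r
    history = [(s, i, r)]
    for _ in range(days):
        new_inf = beta * s * i / n   # new infections this day
        new_rec = gamma * i          # removals (recovery or death) this day
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        history.append((s, i, r))
    return history

# Toy epidemic: 10 initial cases in a population of 10,000,
# with basic reproduction number R0 = beta / gamma = 3.
hist = sir_simulate(9990, 10, 0, beta=0.3, gamma=0.1, days=100)
```

Comparing such trajectories before and after a migration event is, in spirit, how the study isolates the refugees' effect on the reproduction number.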
The paper entitled “Enhanced Input-Doubling Method Leveraging Response Surface Linearization to Improve Classification Accuracy in Small Medical Data Processing” [14] addresses the challenge of limited training data in medical machine learning applications, particularly for cardiovascular risk assessment. The study introduces an improved input-doubling method that enhances classification accuracy by expanding the dataset with additional independent attributes, specifically class membership probabilities. The authors implement this method using two Naïve Bayes classifiers and evaluate its effectiveness against standard machine learning approaches. The results demonstrate that the enhanced input-doubling method significantly improves classification accuracy, particularly in cases where medical datasets are small and imbalanced. The findings highlight the potential of data augmentation techniques to improve predictive performance while maintaining model generalization, making the approach relevant for various applications in medical data analysis.
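The core idea, appending class-membership probabilities as extra attributes, can be sketched with a hand-rolled Gaussian Naive Bayes stage. The paper's full pipeline chains two Naïve Bayes classifiers; the single stage, helper names, and toy data below are illustrative only:

```python
import numpy as np

def gaussian_nb_proba(X_train, y_train, X):
    """Class-membership probabilities from a minimal Gaussian Naive Bayes:
    per-class Gaussian likelihoods plus log-priors, softmax-normalized."""
    classes = np.unique(y_train)
    log_scores = []
    for c in classes:
        Xc = X_train[y_train == c]
        mu, var = Xc.mean(axis=0), Xc.var(axis=0) + 1e-9
        prior = np.log(len(Xc) / len(X_train))
        ll = -0.5 * (np.log(2 * np.pi * var) + (X - mu) ** 2 / var)
        log_scores.append(prior + ll.sum(axis=1))
    log_scores = np.stack(log_scores, axis=1)
    log_scores -= log_scores.max(axis=1, keepdims=True)  # numeric stability
    p = np.exp(log_scores)
    return p / p.sum(axis=1, keepdims=True)

def augment_with_probabilities(X_train, y_train, X):
    """The augmentation step: append the probability columns as
    additional independent attributes for a second classifier."""
    return np.hstack([X, gaussian_nb_proba(X_train, y_train, X)])
```

A second classifier trained on the augmented matrix then sees both the raw attributes and the first model's probability estimates, which is what lets small datasets carry more usable signal.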
The paper entitled “Classification of Acoustic Tones and Cardiac Murmurs Based on Digital Signal Analysis Leveraging Machine Learning Methods” [15] presents a machine learning-based framework for the automated classification of heart sounds, aiming to improve diagnostic accuracy and accessibility in cardiac auscultation. The study employs digital signal processing techniques for feature extraction, including Mel-frequency cepstral coefficients, wavelet transforms, and spectrograms, followed by classification using convolutional neural networks (CNNs), random forests, and support vector machines. The experimental evaluation, performed using a dataset of approximately 5000 heart sound recordings, demonstrates that the CNN model achieved the highest classification accuracy of 92.5%, outperforming traditional machine learning methods. The findings highlight the ability of AI-driven auscultation analysis to reduce diagnostic subjectivity and improve the early detection of cardiac abnormalities, with implications for clinical decision support and telemedicine applications.
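Of the feature types listed, a magnitude spectrogram is the simplest to sketch: the audio signal is cut into overlapping windowed frames and each frame is transformed with a short-time FFT. The frame length, hop size, and toy test tone below are illustrative choices, not the paper's settings:

```python
import numpy as np

def spectrogram(signal, frame_len=256, hop=128):
    """Magnitude spectrogram: Hann-windowed frames -> real-FFT magnitudes.
    Returns an array of shape (num_frames, frame_len // 2 + 1)."""
    window = np.hanning(frame_len)
    frames = [signal[i:i + frame_len] * window
              for i in range(0, len(signal) - frame_len + 1, hop)]
    return np.abs(np.fft.rfft(np.stack(frames), axis=1))

# Toy input: a 50 Hz tone sampled at 1 kHz, roughly the frequency
# band where low-pitched heart sounds live.
fs = 1000
tone = np.sin(2 * np.pi * 50 * np.arange(2048) / fs)
S = spectrogram(tone)
```

A 2-D array like `S` is exactly the kind of image-shaped input a CNN classifier then consumes.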
The paper entitled “Interpretable Conversation Routing via the Latent Embeddings Approach” [16] explores the development of a routing layer for large language model (LLM)-based chatbots, aiming to improve accuracy, efficiency, and interpretability in conversational AI systems. The authors propose a latent embedding retrieval method to classify and direct user queries to the most suitable response agent, eliminating the need for a single monolithic prompt. The study uses benchmark datasets to evaluate the routing method against traditional machine learning classifiers and LLM-based routers, demonstrating that the proposed approach achieves comparable accuracy while offering greater transparency and control over model decision-making. The findings highlight the benefits of semantic routing in multilingual, domain-specific environments, showcasing its potential to enhance the reliability and security of chatbots by filtering harmful inputs and reducing instruction conflicts. The study underscores the necessity of interpretability in AI-driven dialogue systems and suggests further refinements for optimizing routing performance.
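Embedding-based routing can be sketched as nearest-exemplar retrieval: the query embedding is compared against one precomputed embedding per route, and the decision stays interpretable because the chosen route and its similarity score are returned explicitly. The threshold, fallback route, and dictionary layout below are illustrative assumptions, not the paper's API:

```python
import numpy as np

def route(query_vec, route_embeddings, threshold=0.3):
    """Pick the route whose exemplar embedding has the highest cosine
    similarity to the query; fall back to 'default' below the threshold
    so off-topic or harmful queries are not forced onto an agent."""
    def unit(v):
        return v / np.linalg.norm(v)
    q = unit(np.asarray(query_vec, dtype=float))
    best_name, best_score = "default", threshold
    for name, emb in route_embeddings.items():
        score = float(unit(np.asarray(emb, dtype=float)) @ q)
        if score > best_score:
            best_name, best_score = name, score
    return best_name, best_score
```

In practice the exemplar vectors would come from a sentence-embedding model; here plain 2-D vectors stand in for them.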
The paper entitled “Enhancing Accessibility: Automated Tactile Graphics Generation for Individuals with Visual Impairments” [17] addresses the challenges that visually impaired individuals face in accessing graphic information, which is crucial for education and social inclusion. The study proposes a novel generative model that integrates a Bidirectional and Auto-Regressive Transformer with a Vector Quantized Variational Auto-Encoder to convert textual descriptions into tactile graphics. The model was trained on a publicly available and custom-designed tactile graphics dataset and evaluated using cross-entropy, perplexity, mean square error, and CLIP Score metrics. The results indicate that the model effectively generates high-quality, customizable tactile graphics, significantly reducing the production time and operator expertise requirements. Testing with educational and rehabilitation institutions confirmed the system’s practicality, highlighting its ability to improve the accessibility of educational materials for visually impaired individuals. The study underscores the need for further refinement, particularly in expanding training datasets and improving the generalization of models for complex scenarios.
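The vector-quantization bottleneck at the heart of a VQ-VAE maps each continuous latent vector to its nearest entry in a learned codebook, yielding the discrete tokens an autoregressive transformer can model. A minimal sketch of that lookup only (codebook size, shapes, and names are illustrative, not the paper's architecture):

```python
import numpy as np

def vector_quantize(latents, codebook):
    """VQ-VAE bottleneck: replace each latent row with its nearest
    codebook entry (squared Euclidean distance) and return the indices,
    which serve as discrete tokens for a downstream transformer."""
    dists = ((latents[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    idx = dists.argmin(axis=1)
    return codebook[idx], idx
```

In the full model the encoder produces `latents`, the decoder reconstructs the tactile graphic from the quantized vectors, and the index sequence is what the transformer learns to generate from text.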