Search Results (478)

Search Parameters:
Keywords = learning rate tuning

13 pages, 2827 KB  
Article
Predictive Modelling of Exam Outcomes Using Stress-Aware Learning from Wearable Biosignals
by Sham Lalwani and Saideh Ferdowsi
Sensors 2025, 25(18), 5628; https://doi.org/10.3390/s25185628 - 9 Sep 2025
Abstract
This study investigates the feasibility of using wearable technology and machine learning algorithms to predict academic performance based on physiological signals. It also examines the correlation between stress levels, reflected in the collected physiological data, and academic outcomes. To this aim, six key physiological signals (skin conductance, heart rate, skin temperature, electrodermal activity, blood volume pulse, and inter-beat interval), along with accelerometer data, were recorded during three examination sessions using a wearable device. A novel pipeline, comprising data preprocessing and feature engineering, is proposed to prepare the collected data for training machine learning algorithms. We evaluated five machine learning models to predict exam outcomes: Random Forest, Support Vector Machine (SVM), eXtreme Gradient Boosting (XGBoost), Categorical Boosting (CatBoost), and Gradient-Boosting Machine (GBM). The Synthetic Minority Oversampling Technique (SMOTE), followed by hyperparameter tuning and dimensionality reduction, is implemented to optimise model performance and address issues like class imbalance and overfitting. The results obtained by our study demonstrate that physiological signals can effectively predict stress and its impact on academic performance, offering potential for real-time monitoring systems that support student well-being and academic success.
(This article belongs to the Special Issue Biomedical Imaging, Sensing and Signal Processing)
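A minimal sketch of the SMOTE-plus-hyperparameter-tuning stage this abstract describes, using scikit-learn and imbalanced-learn; the synthetic features, labels, and parameter grid below are illustrative stand-ins, not the paper's data or search space:

```python
# Rebalance an imbalanced training set with SMOTE, then grid-search a
# Random Forest (one of the paper's five candidate models).
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 12))            # stand-in for engineered biosignal features
y = (rng.random(300) < 0.2).astype(int)   # imbalanced pass/fail exam labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr, y_tr)  # oversample minority class

grid = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [100, 300], "max_depth": [None, 10]},
    scoring="f1", cv=5,
)
grid.fit(X_bal, y_bal)
print(grid.best_params_, "held-out F1:", grid.score(X_te, y_te))
```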

41 pages, 7601 KB  
Article
Hybrid Deep Neural Architectures with Evolutionary Optimization and Explainable AI for Drought Susceptibility Assessment
by Jinping Liu, Jie Li and Yanqun Ren
Remote Sens. 2025, 17(17), 3122; https://doi.org/10.3390/rs17173122 - 8 Sep 2025
Abstract
This study presents a novel ensemble deep-learning framework integrating Convolutional Neural Networks (CNN), self-attention mechanisms, and Long Short-Term Memory (LSTM) networks, designed to generate high-resolution drought susceptibility maps for the Oroqen Autonomous Banner of Inner Mongolia. The model was further enhanced through two metaheuristic optimization techniques, Differential Evolution (DE) and Biogeography-Based Optimization (BBO), which tuned hyperparameters including CNN filters, LSTM units, and learning rate. Model evaluation, quantified via predictive accuracy (RMSE = 0.22 and MAE = 0.12), goodness-of-fit (R² = 0.79), and classification discrimination (Area Under the Receiver Operating Characteristic curve, AUROC = 0.91), revealed that the BBO-optimized ensemble achieved the best overall performance on the test set, outperforming the DE-enhanced (AUROC = 0.86) and baseline models (AUROC = 0.80). Pairwise z-statistics confirmed the statistical superiority of the BBO-enhanced ensemble with a p-value < 0.001. The final susceptibility map, classified into five levels using the Jenks natural breaks method, identified western rangelands and transitional ecotones as high-susceptibility zones, while eastern areas were marked by lower susceptibility. The resulting outputs offer decision-makers and land managers an interpretable, high-precision tool to guide drought preparedness, implement resource allocation strategies, and design early-warning systems. This research establishes a scalable, interpretable, and statistically robust approach for drought susceptibility assessment in vulnerable landscapes.
(This article belongs to the Special Issue Remote Sensing and Geoinformatics in Sustainable Development)
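How a metaheuristic such as DE tunes hyperparameters can be sketched with SciPy's differential_evolution; the objective below is a placeholder surrogate, where the paper would instead train the CNN-attention-LSTM ensemble and return its validation error (BBO follows the same pattern with a different search strategy):

```python
from scipy.optimize import differential_evolution

def val_error(params):
    log_lr, lstm_units, cnn_filters = params
    # Placeholder surrogate; in practice: build the model, train, return val RMSE.
    return (log_lr + 3.0) ** 2 + 0.01 * abs(lstm_units - 64) + 0.01 * abs(cnn_filters - 32)

bounds = [(-5.0, -1.0),  # log10(learning rate)
          (16, 256),     # LSTM units
          (8, 128)]      # CNN filters
res = differential_evolution(val_error, bounds, seed=0, maxiter=30)
log_lr, units, filters = res.x
print(f"lr={10 ** log_lr:.2e}, lstm_units={int(units)}, cnn_filters={int(filters)}")
```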

26 pages, 2337 KB  
Article
Explainable Prediction of UHPC Tensile Strength Using Machine Learning with Engineered Features and Multi-Algorithm Comparative Evaluation
by Zhe Zhang, Tianqin Zeng, Yongge Zeng and Ping Zhu
Buildings 2025, 15(17), 3217; https://doi.org/10.3390/buildings15173217 - 6 Sep 2025
Viewed by 279
Abstract
To explore a direct predictive model for the tensile strength of ultra-high-performance concrete (UHPC), machine learning (ML) algorithms are presented. Initially, a database comprising 178 samples of UHPC tensile strength with varying parameters is established. Then, feature engineering strategies are proposed to optimize the robustness of ML models under a small-sample condition. Further, the performance and efficiency of the algorithms are compared under default hyperparameters and hyperparameter tuning, respectively. Moreover, SHapley Additive exPlanations (SHAP) enables analysis of the relationships between UHPC tensile strength and its influencing factors. The quantitative analysis indicates that ensemble algorithms exhibit superior performance under default hyperparameters, with R² values above 0.92. After hyperparameter tuning, both conventional and ensemble models achieve R² values exceeding 0.94. However, Bayesian ridge regression (BRR) consistently demonstrates suboptimal performance, irrespective of hyperparameter tuning. Notably, Categorical Boosting (CatBoost) requires a substantial training duration of 1208 s, considerably more time-consuming than the other algorithms. The most influential feature identified is the fiber reinforcement index, with a contribution of 37.5%, followed by the water-to-cement ratio, strain rate, and cross-sectional size. The nonlinear relationship between UHPC tensile strength and the top four factors is visualized, and the critical thresholds are identified.
(This article belongs to the Special Issue Research on Structural Analysis and Design of Civil Structures)
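The default-versus-tuned comparison reported above can be reproduced in miniature with scikit-learn; synthetic regression data stands in for the 178-sample UHPC database, and gradient boosting stands in for the ensemble algorithms:

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import GridSearchCV, cross_val_score

X, y = make_regression(n_samples=178, n_features=8, noise=10.0, random_state=0)

# Cross-validated R2 with default hyperparameters.
default_r2 = cross_val_score(GradientBoostingRegressor(random_state=0),
                             X, y, scoring="r2", cv=5).mean()

# Cross-validated R2 after a small hyperparameter search.
search = GridSearchCV(
    GradientBoostingRegressor(random_state=0),
    {"learning_rate": [0.03, 0.1, 0.3], "n_estimators": [100, 300]},
    scoring="r2", cv=5,
).fit(X, y)

print(f"default R2 = {default_r2:.3f}, tuned R2 = {search.best_score_:.3f}")
```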

22 pages, 2152 KB  
Article
Dynamic PSO-Optimized XGBoost–RFE with Cross-Domain Hierarchical Transfer: A Small-Sample Feature Selection Approach for Equipment Health Management
by Yao Lei, Jianyin Zhao, Weimin Lv and Youwei Hu
Electronics 2025, 14(17), 3521; https://doi.org/10.3390/electronics14173521 - 3 Sep 2025
Viewed by 306
Abstract
In equipment health management, inefficient key feature selection and model overfitting caused by data scarcity in small-sample scenarios severely restrict the practical applications of predictive maintenance technologies. To address this challenge, this study proposes an improved key feature selection method integrating dynamic particle swarm optimization (PSO) and cross-domain transfer learning. First, principal component analysis (PCA) is employed for the dimensionality reduction of high-dimensional health-related features. An improved PSO algorithm is then used to dynamically optimize XGBoost hyperparameters, coupled with a recursive feature elimination (RFE) framework to screen for key features. A hierarchical transfer strategy is then introduced to address small-sample data limitations in the target domain via source domain knowledge transfer, achieving cross-domain feature space alignment and model parameter fine-tuning. Experiments on the UCI bearing dataset demonstrated that the proposed model achieved a 9% improvement in the classification F1-score, a 60% reduction in overfitting and a 24% increase in the feature selection overlap rate compared to traditional methods in small-sample scenarios.
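A compact sketch of the PCA-then-RFE screening step, with scikit-learn's GradientBoostingClassifier standing in for the PSO-tuned XGBoost and synthetic data standing in for the small-sample health features:

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.feature_selection import RFE

X, y = make_classification(n_samples=120, n_features=40, n_informative=8, random_state=0)

X_red = PCA(n_components=20, random_state=0).fit_transform(X)   # dimensionality reduction
rfe = RFE(GradientBoostingClassifier(random_state=0),
          n_features_to_select=8).fit(X_red, y)                 # recursive elimination
print("kept components:", [i for i, keep in enumerate(rfe.support_) if keep])
```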

19 pages, 1164 KB  
Article
Improving GPT-Driven Medical Question Answering Model Using SPARQL–Retrieval-Augmented Generation Techniques
by Abdulelah Algosaibi and Abdul Rahaman Wahab Sait
Electronics 2025, 14(17), 3488; https://doi.org/10.3390/electronics14173488 - 31 Aug 2025
Viewed by 524
Abstract
The development of medical question-answering systems (QASs) encounters substantial challenges due to the complexities of medical terminologies and the lack of reliable datasets. The shortcomings of traditional artificial intelligence (AI)-driven QASs lead to outputs with a high rate of hallucinations. To overcome these limitations, there is a demand for a reliable QAS that can understand and process complex medical queries and validate the quality and relevance of its outcomes. In this study, we develop a medical QAS by integrating SPARQL, retrieval-augmented generation (RAG), and generative pre-trained transformer (GPT)-Neo models. Using this strategy, we generate a synthetic dataset to train and validate the proposed model, addressing the limitations of existing QASs. The proposed QAS was generalized on the MEDQA dataset. The findings revealed that the model achieves a generalization accuracy of 87.26% with a minimal hallucination rate of 0.16. The model outperformed existing models by leveraging deep learning techniques to handle complex medical queries. The dynamic responsive capability of the proposed model enables it to maintain the accuracy of medical information in a rapidly evolving healthcare environment. Employing advanced hallucination reduction and query refinement techniques can further fine-tune the model's performance.
(This article belongs to the Special Issue The Future of AI-Generated Content (AIGC))
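The retrieval-augmented flow can be sketched with Hugging Face's GPT-Neo as below; TF-IDF retrieval over two toy snippets stands in for the paper's SPARQL queries against a knowledge graph, and the snippets, question, and small model choice are all illustrative:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from transformers import pipeline

corpus = [
    "Metformin is a first-line treatment for type 2 diabetes.",
    "ACE inhibitors are commonly used to treat hypertension.",
]
question = "What is a first-line drug for type 2 diabetes?"

# Retrieve the most relevant snippet (stand-in for the SPARQL step).
vec = TfidfVectorizer().fit(corpus + [question])
sims = cosine_similarity(vec.transform([question]), vec.transform(corpus))[0]
context = corpus[int(sims.argmax())]

# Condition the generator on the retrieved context.
generator = pipeline("text-generation", model="EleutherAI/gpt-neo-125M")
prompt = f"Context: {context}\nQuestion: {question}\nAnswer:"
print(generator(prompt, max_new_tokens=30)[0]["generated_text"])
```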

30 pages, 1166 KB  
Article
A Novel DRL-Transformer Framework for Maximizing the Sum Rate in Reconfigurable Intelligent Surface-Assisted THz Communication Systems
by Pardis Sadatian Moghaddam, Sarvenaz Sadat Khatami, Francisco Hernando-Gallego and Diego Martín
Appl. Sci. 2025, 15(17), 9435; https://doi.org/10.3390/app15179435 - 28 Aug 2025
Viewed by 367
Abstract
Terahertz (THz) communication is a key technology for sixth-generation (6G) networks, offering ultra-high data rates, low latency, and massive connectivity. However, the THz band faces significant propagation challenges, including high path loss, molecular absorption, and susceptibility to blockage. Reconfigurable intelligent surfaces (RISs) have emerged as an effective solution to overcome these limitations by reconfiguring the wireless environment through passive beam steering. In this work, we propose a novel framework, namely the optimized deep reinforcement learning transformer (ODRL-Transformer), to maximize the sum rate in RIS-assisted THz systems. The framework integrates a Transformer encoder for extracting temporal and contextual features from sequential channel observations, a DRL agent for adaptive beamforming and phase shift control, and a hybrid biogeography-based optimization (HBBO) algorithm for tuning the hyperparameters of both modules. This design enables efficient long-term decision-making and improved convergence. Extensive simulations of dynamic THz channel models demonstrate that ODRL-Transformer outperforms other optimization baselines in terms of sum rate, convergence speed, stability, and generalization. The proposed model achieved an error rate of 0.03, strong robustness, and fast convergence, highlighting its potential for intelligent resource allocation in next-generation RIS-assisted THz networks.
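The encoder-plus-policy core of such a framework might look like the PyTorch sketch below; the dimensions, layer counts, and phase-shift head are illustrative assumptions, and the DRL training loop and HBBO tuning are omitted:

```python
# A Transformer encoder over sequential channel observations feeding a
# policy head, mirroring the feature-extraction stage the abstract describes.
import torch
import torch.nn as nn

class ChannelEncoderPolicy(nn.Module):
    def __init__(self, obs_dim=16, d_model=64, n_phase_shifts=32):
        super().__init__()
        self.proj = nn.Linear(obs_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, n_phase_shifts)  # RIS phase-shift logits

    def forward(self, obs_seq):                 # (batch, time, obs_dim)
        h = self.encoder(self.proj(obs_seq))    # contextual features per step
        return self.head(h[:, -1])              # act on the latest channel state

policy = ChannelEncoderPolicy()
actions = policy(torch.randn(8, 10, 16))        # 8 episodes, 10 observations each
print(actions.shape)                            # torch.Size([8, 32])
```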

24 pages, 2062 KB  
Article
A Flexible Multi-Channel Deep Network Leveraging Texture and Spatial Features for Diagnosing New COVID-19 Variants in Lung CT Scans
by Shervan Fekri-Ershad and Khalegh Behrouz Dehkordi
Tomography 2025, 11(9), 99; https://doi.org/10.3390/tomography11090099 - 27 Aug 2025
Viewed by 367
Abstract
Background: The COVID-19 pandemic has claimed thousands of lives worldwide. While infection rates have declined in recent years, emerging variants remain a deadly threat. Accurate diagnosis is critical to curbing transmission and improving treatment outcomes. However, the similarity of COVID-19 symptoms to those of the common cold and flu complicates clinical diagnosis, which has spurred the development of automated diagnostic methods, particularly through lung computed tomography (CT) scan analysis. Methodology: This paper proposes a novel deep learning-based approach for detecting diverse COVID-19 variants using advanced textural feature extraction. The framework employs a dual-channel convolutional neural network (CNN), where one channel processes texture-based features and the other analyzes spatial information. Unlike existing methods, our model dynamically learns textural patterns during training, eliminating reliance on predefined features. A modified local binary pattern (LBP) technique extracts texture data in matrix form, while the CNN's adaptable internal architecture optimizes the balance between accuracy and computational efficiency. To enhance performance, hyperparameters are fine-tuned using the Adam optimizer and a focal loss function. Results: The proposed method is evaluated on two benchmark datasets, COVID-349 and Italian COVID-Set, which include diverse COVID-19 variants. Conclusions: The results demonstrate superior accuracy (94.63% and 95.47%, respectively), outperforming competing approaches in precision, recall, and overall diagnostic reliability.
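The texture channel can be illustrated with scikit-image's classic LBP feeding one branch of a two-branch network; the paper's LBP variant is modified and learned during training, so the fixed LBP and tiny CNN below are only stand-ins:

```python
import numpy as np
import torch
import torch.nn as nn
from skimage.feature import local_binary_pattern

ct = np.random.rand(64, 64)                            # stand-in CT slice
lbp = local_binary_pattern(ct, P=8, R=1.0, method="uniform")  # texture map

class DualChannelCNN(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        branch = lambda: nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                                       nn.AdaptiveAvgPool2d(4), nn.Flatten())
        self.spatial, self.texture = branch(), branch()  # one branch per channel
        self.fc = nn.Linear(2 * 8 * 16, n_classes)

    def forward(self, img, tex):
        return self.fc(torch.cat([self.spatial(img), self.texture(tex)], dim=1))

model = DualChannelCNN()
to_t = lambda a: torch.tensor(a, dtype=torch.float32)[None, None]
print(model(to_t(ct), to_t(lbp)).shape)                 # torch.Size([1, 2])
```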

21 pages, 2799 KB  
Article
Few-Shot Leukocyte Classification Algorithm Based on Feature Reconstruction Network with Improved EfficientNetV2
by Xinzheng Wang, Cuisi Ou, Guangjian Pan, Zhigang Hu and Kaiwen Cao
Appl. Sci. 2025, 15(17), 9377; https://doi.org/10.3390/app15179377 - 26 Aug 2025
Viewed by 409
Abstract
Deep learning has excelled in image classification largely due to large, professionally labeled datasets. However, in the field of medical imaging, data annotation often relies on experienced experts, especially in tasks such as white blood cell classification, where staining methods vary greatly between cells and the number of samples in certain categories is relatively small. To evaluate leukocyte classification performance with limited labeled samples, a few-shot learning method based on a Feature Reconstruction Network with Improved EfficientNetV2 (FRNE) is proposed. First, this paper presents a feature extractor based on the improved EfficientNetV2 architecture. To enhance the receptive field and extract multi-scale features effectively, the network incorporates an ASPP module with dilated convolutions at different dilation rates. This enhancement improves the model's spatial reconstruction capability during feature extraction. Subsequently, the support set and query set are processed by the feature extractor to obtain the respective feature maps. A feature reconstruction-based classification method is then applied: ridge regression reconstructs the query feature map using features from the support set, and by analyzing the reconstruction error, the model determines the likelihood of the query sample belonging to a particular class, without requiring additional modules or extensive parameter tuning. Evaluated on the LDWBC and Raabin datasets, the proposed method achieves accuracy improvements of 3.67% and 1.27%, respectively, over the strongest baseline in overall accuracy (OA) on both datasets.
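The ridge-regression reconstruction classifier at the heart of this method reduces to a few lines of NumPy; the feature dimension, shot count, and regularization strength below are illustrative:

```python
# Classify a query by reconstructing it from each class's support features;
# the class with the smallest reconstruction error wins.
import numpy as np

def reconstruction_error(support, query, lam=0.1):
    # Ridge solution: W = (S S^T + lam I)^-1 S Q^T, reconstruction = W^T S.
    S, Q = support, query                       # (n_support, d), (n_query, d)
    gram = S @ S.T + lam * np.eye(S.shape[0])
    W = np.linalg.solve(gram, S @ Q.T)          # (n_support, n_query)
    recon = W.T @ S                             # (n_query, d)
    return np.linalg.norm(recon - Q, axis=1)

rng = np.random.default_rng(0)
supports = {c: rng.normal(loc=c, size=(5, 32)) for c in range(3)}  # 3 classes, 5 shots
query = rng.normal(loc=1, size=(1, 32))
errors = {c: reconstruction_error(S, query)[0] for c, S in supports.items()}
print("predicted class:", min(errors, key=errors.get))  # likely class 1
```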

16 pages, 3972 KB  
Article
Solar Panel Surface Defect and Dust Detection: Deep Learning Approach
by Atta Rahman
J. Imaging 2025, 11(9), 287; https://doi.org/10.3390/jimaging11090287 - 25 Aug 2025
Viewed by 705
Abstract
In recent years, solar energy has emerged as a pillar of sustainable development. However, maintaining panel efficiency under extreme environmental conditions remains a persistent hurdle. This study introduces an automated defect detection pipeline that leverages deep learning and computer vision to identify five standard anomaly classes: Non-Defective, Dust, Defective, Physical Damage, and Snow on photovoltaic surfaces. To build a robust foundation, a heterogeneous dataset of 8973 images was sourced from public repositories and standardized into a uniform labeling scheme. This dataset was then expanded through an aggressive augmentation strategy, including flips, rotations, zooms, and noise injections. A YOLOv11-based model was trained and fine-tuned using both fixed and adaptive learning rate schedules, achieving a mAP@0.5 of 85% and accuracy, recall, and F1-score above 95% when evaluated across diverse lighting and dust scenarios. The optimized model is integrated into an interactive dashboard that processes live camera streams, issues real-time alerts upon defect detection, and supports proactive maintenance scheduling. Comparative evaluations highlight the superiority of this approach over manual inspections and earlier YOLO versions in both precision and inference speed, making it well suited for deployment on edge devices. Automating visual inspection not only reduces labor costs and operational downtime but also enhances the longevity of solar installations. By offering a scalable solution for continuous monitoring, this work contributes to improving the reliability and cost-effectiveness of large-scale solar energy systems.
(This article belongs to the Section Computer Vision and Pattern Recognition)
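The adaptive side of the fixed-versus-adaptive learning-rate comparison can be sketched with PyTorch's ReduceLROnPlateau on a toy model; YOLOv11 training itself, and the actual schedule the authors used, are out of scope here:

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 5)                                   # toy stand-in model
opt = torch.optim.SGD(model.parameters(), lr=0.01)         # fixed baseline LR
sched = torch.optim.lr_scheduler.ReduceLROnPlateau(opt, factor=0.5, patience=2)

for epoch in range(12):
    val_loss = max(0.3, 1.0 / (epoch + 1))                 # placeholder loss, plateaus early
    sched.step(val_loss)                                   # halve the LR once loss stalls
    print(epoch, opt.param_groups[0]["lr"])
```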

17 pages, 3976 KB  
Article
A Self-Supervised Pre-Trained Transformer Model for Accurate Genomic Prediction of Swine Phenotypes
by Weixi Xiang, Zhaoxin Li, Qixin Sun, Xiujuan Chai and Tan Sun
Animals 2025, 15(17), 2485; https://doi.org/10.3390/ani15172485 - 24 Aug 2025
Viewed by 372
Abstract
Accurate genomic prediction of complex phenotypes is crucial for accelerating genetic progress in swine breeding. However, conventional methods like Genomic Best Linear Unbiased Prediction (GBLUP) face limitations in capturing complex non-additive effects that contribute significantly to phenotypic variation, restricting the potential accuracy of phenotype prediction. To address this challenge, we introduce a novel framework based on a self-supervised, pre-trained encoder-only Transformer model. Its core novelty lies in tokenizing SNP sequences into non-overlapping 6-mers (sequences of 6 SNPs), enabling the model to directly learn local haplotype patterns instead of treating SNPs as independent markers. The model first undergoes self-supervised pre-training on the unlabeled version of the same SNP dataset used for subsequent fine-tuning, learning intrinsic genomic representations through a masked 6-mer prediction task. Subsequently, the pre-trained model is fine-tuned on labeled data to predict phenotypic values for specific economic traits. Experimental validation demonstrates that our proposed model consistently outperforms baseline methods, including GBLUP and a Transformer of the same architecture trained from scratch (without pre-training), in prediction accuracy across key economic traits. This outperformance suggests the model's capacity to capture non-linear genetic signals missed by linear models. This research contributes not only a new, more accurate methodology for genomic phenotype prediction but also validates the potential of self-supervised learning to decipher complex genomic patterns for direct application in breeding programs. Ultimately, this approach offers a powerful new tool to enhance the rate of genetic gain in swine production by enabling more precise selection based on predicted phenotypes.
(This article belongs to the Section Pigs)
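The 6-mer tokenization and masking pretext task can be sketched in a few lines; the 0/1/2 genotype coding and sequence length are illustrative assumptions, and the Transformer itself is omitted:

```python
import random

def to_6mers(snps, k=6):
    # Non-overlapping k-mers over an integer-coded SNP sequence.
    return ["".join(map(str, snps[i:i + k])) for i in range(0, len(snps) - k + 1, k)]

random.seed(0)
snps = [random.choice([0, 1, 2]) for _ in range(30)]   # toy genotype sequence
tokens = to_6mers(snps)                                # e.g. ['120021', '201102', ...]

masked = list(tokens)
target_idx = random.randrange(len(masked))
label = masked[target_idx]
masked[target_idx] = "[MASK]"                          # model learns to recover `label`
print(masked, "->", label)
```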

29 pages, 13156 KB  
Article
Exchange Rate Forecasting: A Deep Learning Framework Combining Adaptive Signal Decomposition and Dynamic Weight Optimization
by Xi Tang and Yumei Xie
Int. J. Financial Stud. 2025, 13(3), 151; https://doi.org/10.3390/ijfs13030151 - 22 Aug 2025
Viewed by 458
Abstract
Accurate exchange rate forecasting is crucial for investment decisions, multinational corporations, and national policies. The nonlinear nature and volatility of the foreign exchange market hinder traditional forecasting methods in capturing exchange rate fluctuations. Despite advancements in machine learning and signal decomposition, challenges remain in high-dimensional data handling and parameter optimization. This study addresses these constraints by introducing an enhanced prediction framework that integrates the optimal complete ensemble empirical mode decomposition with adaptive noise (OCEEMDAN) method and a strategically optimized combination weight prediction model. The grey wolf optimizer (GWO) is employed to autonomously adjust the noise parameters of OCEEMDAN, while the zebra optimization algorithm (ZOA) dynamically fine-tunes the weights of the predictive models (Bi-LSTM, GRU, and FNN). The proposed methodology exhibits enhanced prediction accuracy and robustness in simulation experiments on exchange rate data (EUR/USD, GBP/USD, and USD/JPY). This research improves the precision of exchange rate forecasts and introduces an innovative approach to enhancing model efficacy in volatile financial markets.
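The role ZOA plays, tuning the combination weights of the three forecasters against validation data, can be approximated with a constrained SciPy optimizer; the predictions below are noisy stand-ins for Bi-LSTM, GRU, and FNN outputs:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
y_true = rng.normal(size=100)                          # validation exchange rates
preds = np.stack([y_true + rng.normal(scale=s, size=100)
                  for s in (0.1, 0.2, 0.4)])           # three forecaster stand-ins

def rmse(w):
    return np.sqrt(np.mean((w @ preds - y_true) ** 2))

# Convex combination: weights non-negative and summing to one.
res = minimize(rmse, x0=np.ones(3) / 3, method="SLSQP",
               bounds=[(0, 1)] * 3,
               constraints={"type": "eq", "fun": lambda w: w.sum() - 1})
print("weights:", np.round(res.x, 3))                  # most weight on the best model
```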

20 pages, 492 KB  
Article
CurriculumPT: LLM-Based Multi-Agent Autonomous Penetration Testing with Curriculum-Guided Task Scheduling
by Xingyu Wu, Yunzhe Tian, Yuanwan Chen, Ping Ye, Xiaoshu Cui, Jingqi Jia, Shouyang Li, Jiqiang Liu and Wenjia Niu
Appl. Sci. 2025, 15(16), 9096; https://doi.org/10.3390/app15169096 - 18 Aug 2025
Viewed by 909
Abstract
As autonomous driving systems and intelligent transportation infrastructures become increasingly software-defined and network-connected, ensuring their cybersecurity has become a critical component of traffic safety. Large language models (LLMs) have recently shown promise in automating aspects of penetration testing, yet most existing approaches remain limited to simple, single-step exploits. They struggle to handle complex, multi-stage vulnerabilities that demand precise coordination, contextual reasoning, and knowledge reuse. This is particularly problematic in safety-critical domains, such as autonomous vehicles, where subtle software flaws can cascade across interdependent subsystems. In this work, we present CurriculumPT, a novel LLM-based penetration testing framework specifically designed for the security of intelligent systems. CurriculumPT combines curriculum learning and a multi-agent system to enable LLM agents to progressively acquire and apply exploitation skills across common vulnerabilities and exposures (CVE)-based tasks. Through a structured progression from simple to complex vulnerabilities, agents build and refine an experience knowledge base that supports generalization to new attack surfaces without requiring model fine-tuning. We evaluate CurriculumPT on 15 real-world vulnerability scenarios and demonstrate that it outperforms three state-of-the-art baselines by up to 18 percentage points in exploit success rate, while achieving superior efficiency in execution time and resource usage. Our results confirm that CurriculumPT is capable of autonomous, scalable penetration testing and knowledge transfer, laying the groundwork for intelligent security auditing of modern autonomous driving systems and other cyber-physical transportation platforms.
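The curriculum-guided scheduling and experience reuse can be abstracted as below; the CVE names, difficulty scores, and always-succeeding exploit step are placeholders, and no LLM agent is invoked:

```python
# Order tasks easy-to-hard and bank what is learned, so harder tasks can
# draw on prior experience, mirroring the curriculum idea in the abstract.
tasks = [
    {"cve": "CVE-A", "difficulty": 1},
    {"cve": "CVE-C", "difficulty": 3},
    {"cve": "CVE-B", "difficulty": 2},
]

experience = []                                             # reusable exploit knowledge
for task in sorted(tasks, key=lambda t: t["difficulty"]):   # curriculum: easy first
    hints = [e for e in experience if e["difficulty"] <= task["difficulty"]]
    success = True                                          # placeholder exploit attempt
    if success:
        experience.append(task)                             # refine the knowledge base
    print(f"{task['cve']}: used {len(hints)} prior experiences")
```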

21 pages, 2574 KB  
Article
Clinically Explainable Prediction of Immunotherapy Response Integrating Radiomics and Clinico-Pathological Information in Non-Small Cell Lung Cancer
by Jhimli Mitra, Soumya Ghose and Rajat Thawani
Cancers 2025, 17(16), 2679; https://doi.org/10.3390/cancers17162679 - 18 Aug 2025
Viewed by 555
Abstract
Background/Objectives: Immunotherapy is a viable therapeutic approach for non-small cell lung cancer (NSCLC). Despite the significant survival benefit of PD-1/PD-L1 immune checkpoint inhibitors, the objective response rate is, on average, around 20% as monotherapy and around 50% in combination with chemotherapy. While PD-L1 IHC is used as a predictive biomarker, its accuracy is subpar. Methods: In this work, we develop a machine learning (ML) method to predict response to immunotherapy in NSCLC from multimodal clinicopathological biomarkers and tumor and peritumoral radiomic biomarkers from CT images. We further learn a graph structure to understand the associations between biomarkers and treatment response. The graph is then used to create sentences with clinical hypotheses that are finally used in a Large Language Model (LLM) that explains the treatment response in terms of the biomarkers, in language comprehensible to clinicians. From a retrospective study, a training dataset of NSCLC with n = 248 tumors from 140 subjects was used for feature selection, ML model training, learning the graph structure, and fine-tuning the LLM. Results: An AUC = 0.83 was achieved for prediction of treatment response on a separate test dataset of n = 84 tumors from 47 subjects. Conclusions: Our study therefore not only improves the prediction of immunotherapy response in patients with NSCLC from multimodal data but also assists clinicians in making clinically interpretable predictions by providing language-based explanations.
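The tabular prediction stage might be sketched as follows, with synthetic features standing in for the combined radiomic and clinicopathological biomarkers; the learned graph and LLM explanation stages are not reproduced here:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in: 248 "tumors" with 30 multimodal features each.
X, y = make_classification(n_samples=248, n_features=30, n_informative=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
print("AUC:", round(roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]), 3))
```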

22 pages, 2887 KB  
Article
Autoencoder-Assisted Stacked Ensemble Learning for Lymphoma Subtype Classification: A Hybrid Deep Learning and Machine Learning Approach
by Roseline Oluwaseun Ogundokun, Pius Adewale Owolawi, Chunling Tu and Etienne van Wyk
Tomography 2025, 11(8), 91; https://doi.org/10.3390/tomography11080091 - 18 Aug 2025
Viewed by 447
Abstract
Background: Accurate subtype identification of lymphoma cancer is crucial for effective diagnosis and treatment planning. Although standard deep learning algorithms have demonstrated robustness, they are still prone to overfitting and limited generalization, necessitating more reliable and robust methods. Objectives: This study presents an autoencoder-augmented stacked ensemble learning (SEL) framework integrating deep feature extraction (DFE) and ensembles of machine learning classifiers to improve lymphoma subtype identification. Methods: A convolutional autoencoder (CAE) was utilized to obtain high-level feature representations of histopathological images, followed by dimensionality reduction via Principal Component Analysis (PCA). Various models were used to classify the extracted features: Random Forest (RF), Support Vector Machine (SVM), Multi-Layer Perceptron (MLP), AdaBoost, and Extra Trees classifiers. A Gradient Boosting Machine (GBM) meta-classifier was utilized in an SEL approach to further fine-tune the final predictions. Results: All the models were tested using accuracy, area under the curve (AUC), and Average Precision (AP) metrics. The stacked ensemble classifier performed better than all the individual models with a 99.04% accuracy, 0.9998 AUC, and 0.9996 AP, far exceeding what standard deep learning (DL) methods would achieve. Of the standalone classifiers, MLP (97.71% accuracy, 0.9986 AUC, 0.9973 AP) and Random Forest (96.71% accuracy, 0.9977 AUC, 0.9953 AP) provided the best prediction performance, while AdaBoost was the poorest performer (68.25% accuracy, 0.8194 AUC, 0.6424 AP). PCA and t-SNE plots confirmed that DFE effectively enhances class discrimination. Conclusions: This study demonstrates a highly accurate and reliable approach to lymphoma classification using autoencoder-assisted ensemble learning, reducing the misclassification rate and significantly enhancing the accuracy of diagnosis. The AI-based models are designed to assist pathologists by providing interpretable outputs such as class probabilities and visualizations (e.g., Grad-CAM), enabling them to understand and validate predictions in the diagnostic workflow. Future studies should enhance computational efficiency and conduct multi-centre validation studies to confirm the model's generalizability on extensive collections of histopathological datasets.
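The stacking stage maps directly onto scikit-learn's StackingClassifier with a GBM meta-learner; the synthetic features below stand in for the CAE-plus-PCA representations, and all model settings are defaults rather than the paper's tuned ones:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import (AdaBoostClassifier, ExtraTreesClassifier,
                              GradientBoostingClassifier, RandomForestClassifier,
                              StackingClassifier)
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=400, n_features=50, n_informative=15, random_state=0)

# Five base learners feed a GBM meta-classifier, as in the SEL framework.
stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=0)),
                ("svm", SVC(probability=True, random_state=0)),
                ("mlp", MLPClassifier(max_iter=500, random_state=0)),
                ("ada", AdaBoostClassifier(random_state=0)),
                ("et", ExtraTreesClassifier(random_state=0))],
    final_estimator=GradientBoostingClassifier(random_state=0),
    cv=5,
)
print("stacked accuracy:", stack.fit(X[:300], y[:300]).score(X[300:], y[300:]))
```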

24 pages, 2009 KB  
Article
Artificial Intelligence and Sustainable Practices in Coastal Marinas: A Comparative Study of Monaco and Ibiza
by Florin Ioras and Indrachapa Bandara
Sustainability 2025, 17(16), 7404; https://doi.org/10.3390/su17167404 - 15 Aug 2025
Viewed by 576
Abstract
Artificial intelligence (AI) is playing an increasingly important role in driving sustainable change across coastal and marine environments. Artificial intelligence offers strong support for environmental decision-making by helping to process complex data, anticipate outcomes, and fine-tune day-to-day operations. In busy coastal zones such as the Mediterranean, where tourism and boating place significant strain on marine ecosystems, AI can be an effective means for marinas to reduce their ecological impact without sacrificing economic viability. This research examines the contribution of artificial intelligence toward the development of environmental sustainability in marina management. It investigates how AI can potentially reconcile economic imperatives with ecological conservation, especially in high-traffic coastal areas. Through a focus on the impact of social and technological context, this study emphasizes the way in which local conditions constrain the design, deployment, and reach of AI systems. The marinas of Ibiza and Monaco are used as a comparative backdrop to depict these dynamics. In Monaco, efforts like the SEA Index® and predictive maintenance for superyachts contributed to a 28% drop in CO₂ emissions between 2020 and 2025. In contrast, Ibiza focused on circular economy practices, reaching an 85% landfill diversion rate using solar power, AI-assisted waste systems, and targeted biodiversity conservation initiatives. This research organizes AI tools into three main categories: supervised learning, anomaly detection, and rule-based systems. Their effectiveness is assessed using statistical techniques, including t-test results contextualized with Cohen's d to convey practical effect sizes. Regression R² values are interpreted in light of real-world policy relevance, such as thresholds for energy audits or emissions certification. In addition to measuring technical outcomes, this study considers ethical concerns, the role of local communities, and comparisons to global best practices. The findings highlight how artificial intelligence can meaningfully contribute to environmental conservation while also supporting sustainable economic development in maritime contexts. However, the analysis also reveals ongoing difficulties, particularly in areas such as ethical oversight, regulatory coherence, and the practical replication of successful initiatives across diverse regions. In response, this study outlines several practical steps forward: promoting AI-as-a-Service models to lower adoption barriers, piloting regulatory sandboxes within the EU to test innovative solutions safely, improving access to open-source platforms, and working toward common standards for the stewardship of marine environmental data.
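The t-test-with-Cohen's-d reporting style the study uses can be sketched as below on synthetic before/after figures; the numbers are invented solely to mirror the scale of the reported 28% CO₂ reduction:

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
before = rng.normal(100, 15, 40)        # e.g. a CO2 metric in 2020
after = rng.normal(72, 15, 40)          # e.g. the same metric in 2025 (~28% lower)

t, p = ttest_ind(before, after)
pooled_sd = np.sqrt((before.var(ddof=1) + after.var(ddof=1)) / 2)
cohens_d = (before.mean() - after.mean()) / pooled_sd   # practical effect size
print(f"t={t:.2f}, p={p:.4f}, d={cohens_d:.2f}")
```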
