Search Results (479)

Search Parameters:
Keywords = AI modules integration

28 pages, 7099 KB  
Article
AI-Driven Tethered Drone Surveillance for Maritime Security in Ports and Coastal Areas
by Alberto Belmonte-Hernández, Briac Grauby, Anaida Fernández García, Solange Tardi, Torbjørn Houge, Hidalgo García Bango and Álvaro Gutiérrez
Drones 2026, 10(4), 268; https://doi.org/10.3390/drones10040268 - 8 Apr 2026
Abstract
Effective port and coastal surveillance requires persistent monitoring, flexible deployment, and reliable target detection in dynamic maritime environments. This paper presents a system- and deployment-oriented autonomous tethered drone architecture, integrated with AI-based perception, for persistent maritime surveillance in ports and coastal areas. Mounted on a moving maritime platform and powered through a tether, the drone provides a persistent elevated viewpoint without the endurance limitations of conventional battery-powered Unmanned Aerial Vehicles (UAVs). The system combines maritime platform integration, tethered flight operation, fail-safe and safety mechanisms, and a distributed Artificial Intelligence (AI) pipeline for real-time object detection and tracking. The perception module is based on YOLOv8m for vessel detection and BoT-SORT for multi-object tracking, enabling continuous monitoring of maritime targets in realistic operational scenarios. Field trials conducted from moving vessels in maritime environments demonstrate autonomous take-off and landing, stable surveillance operation under realistic wind and wave conditions, and effective vessel detection and tracking on real image sequences. The results show the potential of AI-enabled tethered drone surveillance as a persistent and operationally relevant tool for maritime monitoring and security. Full article
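The perception stack pairs YOLOv8m detections with BoT-SORT tracking. At the heart of any such tracker is frame-to-frame association of detections with existing tracks; the sketch below shows a minimal IoU-based greedy association (an illustration only; BoT-SORT additionally uses Kalman-filter motion prediction and appearance cues, which are not reproduced here):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def associate(tracks, detections, min_iou=0.3):
    """Greedily match existing tracks to new detections by IoU.
    tracks: {track_id: box}, detections: list of boxes.
    Returns {track_id: detection_index} for matched pairs."""
    matches, used = {}, set()
    for tid, tbox in tracks.items():
        best, best_iou = None, min_iou
        for j, dbox in enumerate(detections):
            if j in used:
                continue
            o = iou(tbox, dbox)
            if o > best_iou:
                best, best_iou = j, o
        if best is not None:
            matches[tid] = best
            used.add(best)
    return matches
```

Unmatched detections would spawn new tracks and unmatched tracks would age out; those bookkeeping steps are omitted here.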

25 pages, 3712 KB  
Article
An AI-Enabled Single-Cell Transcriptomic Analysis Pipeline for Gene Signature Discovery in Natural Killer Cells Linked to Remission Outcomes in Chronic Myeloid Leukemia
by Santoshi Borra, Da Yan, Robert S. Welner and Zongliang Yue
Biology 2026, 15(7), 588; https://doi.org/10.3390/biology15070588 - 6 Apr 2026
Abstract
Background: A major technical challenge in single-cell transcriptomics is the absence of an integrative analytic pipeline that can simultaneously leverage gene regulatory network (GRN) architecture, AI-assisted gene panel discovery, and functional relevance analyses to generate coherent biological insights. Existing approaches often treat these components independently, focusing on clusters, marker genes, or predictive features without integrating them into a mechanistically grounded framework. Consequently, comprehensive screening that links regulatory association, gene signature screening, and functional interpretation within single-cell datasets remains limited, underscoring the need for an integrated strategy. Methods: We developed an integrative bioinformatics pipeline based on Gene regulatory network–AI–Functional Analysis (GAFA), combining latent-space integration, unsupervised clustering, diffusion pseudotime analysis, lineage-resolved generalized additive modeling, GRN inference, and machine learning-based gene panel discovery. This framework enables systematic mapping of cell-state structure, reconstruction of differentiation and effector trajectories, and identification of transcriptional and regulatory features strongly associated with clinical outcomes. As a case study, we applied the pipeline to NK cell transcriptomes from six CML patients (two early relapse, two late relapse, two durable treatment-free remission—TFR; 15 samples) collected at TKI discontinuation and 6–12 months after therapy cessation. Results: We reanalyzed publicly available scRNA-seq data from a previously published CML cohort to evaluate NK-cell transcriptional programs associated with treatment-free remission and relapse. 
We resolved six transcriptionally distinct NK cell states spanning CD56bright-like cytokine-responsive, early activated, terminally mature, cytotoxic, lymphoid trafficking, and HLA-DR+ immunoregulatory populations, each exhibiting outcome-specific compositional differences. Pseudotime analysis revealed two major NK cell lineages—a maturation trajectory and a cytotoxic effector trajectory. TFR samples displayed balanced occupancy of both lineages, whereas early relapse samples showed marked depletion of the maturation branch and preferential accumulation in cytotoxic end states. AI-guided feature selection and random forest modeling identified an 18-gene panel that distinguished NK cells from TFR and relapse samples in an exploratory manner. Among them, CST7, FCER1G, GNLY, GZMA, and HLA-C were conventional NK-associated genes, whereas ACTB, CYBA, IFITM2, IFITM3, LYZ, MALAT1, MT2A, MYOM2, NFKBIA, PIM1, S100A8, S100B, and TSC22D3 were novel. The GRN inference further uncovered outcome-specific regulatory modules, with RUNX3, EOMES, ELK4, and REL regulons enriched in TFR, whereas FOSL2 and MAF regulons were enriched in relapse, and their downstream targets linked to IFN-γ signaling, metabolic reprogramming, and immunoregulatory feedback circuits. Conclusions: This AI-enabled single-cell analysis demonstrates how NK cell state composition, differentiation trajectories, and regulatory network rewiring collectively shape TFR versus relapse following TKI discontinuation in CML. The integrative pipeline provides a modular framework that could be extended to additional datasets for data-driven biomarker discovery and mechanistic stratification, and highlights candidate transcriptional regulators and NK cell programs that may be leveraged to improve remission durability, pending validation in larger patient cohorts. Full article
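The pipeline's AI-guided gene panel discovery uses random forest modeling; as a much simpler stand-in, genes can be ranked by the absolute difference in mean expression between outcome groups. A minimal sketch (gene names are taken from the abstract, but the expression values are invented for illustration):

```python
def rank_genes(expr, labels, top_k=3):
    """Rank genes by absolute mean-expression difference between two
    outcome groups (e.g. TFR vs. relapse).
    expr: {gene: [expression per sample]}, labels: parallel 0/1 list."""
    scores = {}
    for gene, values in expr.items():
        g0 = [v for v, l in zip(values, labels) if l == 0]
        g1 = [v for v, l in zip(values, labels) if l == 1]
        scores[gene] = abs(sum(g1) / len(g1) - sum(g0) / len(g0))
    return sorted(scores, key=scores.get, reverse=True)[:top_k]
```

A real pipeline would use permutation importance or out-of-bag error from the random forest rather than a raw mean difference.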

19 pages, 5823 KB  
Article
A Human-Centric AI-Enabled Ecosystem for SME Cybersecurity: Cross-Sectoral Practices and Adaptation Framework for Maritime Defence
by Kitty Kioskli, Eleni Seralidou, Wissam Mallouli, Dimitrios Koutras, Pedro Tomás and Dimitrios Kallergis
Electronics 2026, 15(7), 1520; https://doi.org/10.3390/electronics15071520 - 4 Apr 2026
Abstract
Artificial intelligence (AI) is increasingly integrated into cybersecurity tools to improve threat detection, anomaly identification, and incident response. However, organisations, particularly small- and medium-sized enterprises (SMEs), often struggle to discover, evaluate, and effectively use AI-enabled cybersecurity solutions due to skills gaps, usability challenges, and fragmented tool ecosystems. This paper presents the advaNced cybErsecurity awaReness ecOsystem for SMEs (NERO), a human-centric cybersecurity ecosystem that combines a cybersecurity marketplace with a competency-based training and awareness platform to support the practical adoption of advanced cybersecurity technologies. The NERO Marketplace enables structured discovery, comparison, and assessment of cybersecurity tools based on usability, operational relevance, and competency alignment. Complementing this, the NERO Training Platform delivers modular, multi-modal training aligned with the European Cybersecurity Skills Framework (ECSF) to develop the human competencies required to operate advanced cybersecurity systems. This study contributes a socio-technical framework that addresses the gap between AI tool availability and organisational readiness through ECSF role-based competency mapping and iterative design-based evaluation. The platform targets technical roles like Cybersecurity Implementer to ensure training is aligned with the operational requirements of critical infrastructure protection. Results from cross-sector SME training activities show measurable improvements in cybersecurity awareness, knowledge, and user satisfaction, with knowledge gains exceeding 30% in some modules. Finally, the paper provides a structural mapping of these cross-sectoral results to the maritime defence domain, specifically addressing legacy OT systems and intermittent connectivity constraints. Full article
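The ECSF role-based competency mapping described above can be pictured as a small lookup structure linking roles to competencies and competencies to training modules. A hypothetical sketch (only the Cybersecurity Implementer role is named in the abstract; every other identifier below is invented):

```python
# Hypothetical ECSF-style mappings; not NERO's actual data model.
ROLE_COMPETENCIES = {
    "Cybersecurity Implementer": ["secure deployment", "incident response"],
    "Cyber Threat Intelligence Specialist": ["threat analysis"],
}
MODULE_COMPETENCIES = {
    "mod-ir-101": ["incident response"],
    "mod-dep-201": ["secure deployment"],
    "mod-cti-301": ["threat analysis"],
}

def modules_for_role(role):
    """Return training modules covering any competency of the role."""
    needed = set(ROLE_COMPETENCIES.get(role, []))
    return sorted(m for m, cs in MODULE_COMPETENCIES.items()
                  if needed & set(cs))
```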

28 pages, 4837 KB  
Article
AI-Driven Adaptive Encryption Framework for a Modular Hardware-Based Data Security Device: Conceptual Architecture, Formal Foundations, and Security Analysis
by Pruthviraj Pawar and Gregory Epiphaniou
Appl. Sci. 2026, 16(7), 3522; https://doi.org/10.3390/app16073522 - 3 Apr 2026
Abstract
This paper presents a conceptual architecture for an AI-Driven Adaptive Encryption Device (AI-AED), a tri-modular hardware platform embodied in a registered industrial design. The device integrates a Secure Input Module, an AI-Enhanced Central Processing Unit with biometric authentication, and a Secure Output Module connected by unidirectional buses. We formalise the adaptive encryption policy as a constrained Markov decision process (CMDP) over a discrete action space of 216 cryptographic configurations, with safety constraints that provably prevent convergence to insecure states. A formal threat model based on extended Dolev–Yao assumptions with four physical access tiers defines attacker capabilities, and anti-downgrade safeguards enforce a monotonically non-decreasing security floor during threat escalation. An information-theoretic analysis shows that adaptive algorithm selection contributes an additional entropy term H(α) to ciphertext uncertainty, upper-bounded by log2(|L_enc|) ≈ 1.58 bits, while noting this represents increased attacker uncertainty rather than a strengthening of any individual cipher. A component-level latency model estimates 0.91–1.00 ms pipeline latency under normal operation and 3.14–3.42 ms under active threat, including integration overhead. Simulation validation over 1000 episodes compares a tabular Q-learning baseline against the proposed Deep Q-Network operating on the continuous state space: the DQN achieves 82% fewer constraint violations, 6× faster threat response, and more stable policy switching, demonstrating the advantage of continuous-state reinforcement learning for safety-critical adaptive encryption. All claims are positioned as theoretical contributions requiring empirical validation through prototype implementation. Full article
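Two of the formal ingredients, the safety constraint that blocks insecure configurations and the monotonically non-decreasing security floor, can be illustrated with a toy three-level action space (a sketch of the constraint logic only; the paper's 216-configuration CMDP and DQN policy are not reproduced). The entropy bound H(α) ≤ log2(|L_enc|) ≈ 1.58 bits is also checked numerically:

```python
import math

# Toy action space: three security levels of increasing cipher strength.
LEVELS = [0, 1, 2]

def allowed_actions(current_floor):
    """Safety constraint: never select a configuration below the floor."""
    return [a for a in LEVELS if a >= current_floor]

def step(floor, threat_escalated, chosen):
    """Apply one policy step; the floor is monotonically non-decreasing
    while the threat escalates (anti-downgrade safeguard)."""
    assert chosen in allowed_actions(floor), "insecure state blocked"
    if threat_escalated:
        floor = max(floor, chosen)
    return floor

# Entropy contribution of algorithm selection: H(alpha) <= log2(|L_enc|).
h_max = math.log2(3)  # about 1.58 bits for three cipher families
```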

17 pages, 1372 KB  
Article
GastroMalign: Vision Transformer-Based Framework for Early Detection and Malignancy-Risk Stratification for High-Risk Gastrointestinal Lesions
by Sri Harsha Boppana, Sachin Sravan Kumar Komati, Medha Sharath, Aditya Chandrashekar, Gautam Maddineni, Raja Chandra Chakinala, Pradeep Yarra and C. David Mintz
J. Clin. Med. 2026, 15(7), 2701; https://doi.org/10.3390/jcm15072701 - 2 Apr 2026
Abstract
Background: Current artificial intelligence (AI) systems in gastrointestinal (GI) endoscopy primarily emphasize binary detection or static classification, providing limited support for the graded assessment of malignant potential that underpins clinical decision-making. We developed GastroMalign, a transformer-based framework designed to stratify GI lesions according to ordinal disease severity while maintaining clinical interpretability, addressing this unmet need in endoscopic risk assessment. Methods: This retrospective development and validation study used the publicly available GastroVision dataset, comprising 8000 de-identified endoscopic still images from the upper and lower gastrointestinal tract, including the esophagus, stomach, duodenum, colon, rectum, and terminal ileum. GastroMalign integrates a Vision Transformer (ViT) encoder with a Sequential Feature Learner that explicitly models ordinal disease severity along a benign-to-malignant spectrum. The framework produces both categorical risk classification and a continuous malignancy risk score. Images were stratified into training (80%), validation (10%), and test (10%) sets. Performance was compared with convolutional neural network (CNN) baselines and a Swin Transformer. Interpretability was assessed using Score-CAM visualizations reviewed by blinded expert endoscopists. Results: On the held-out test set (n = 800 images), GastroMalign achieved an overall accuracy of 80.06%, precision of 79.65%, recall of 80.06%, and F1-score of 79.17%, with a micro-averaged AUC of 0.98. In comparison, ResNet-50 and DenseNet-121 achieved accuracies of 32.42% and 36.77%, respectively, while the Swin Transformer achieved 60.56% accuracy (AUC = 0.93). Ablation analyses demonstrated a 17% absolute reduction in High-Risk lesion recall when the progression-aware module was removed. 
Continuous malignancy risk scores increased monotonically across ordinal classes, with mean values < 0.18 for Benign and >0.72 for High-Risk/Malignant lesions. Score-CAM visualizations demonstrated 92% overlap with clinician-annotated lesion regions. Conclusions: GastroMalign delivers an interpretable, progression-aware AI framework for GI lesion risk stratification that outperforms existing CNN- and transformer-based models. Clinically, GastroMalign is intended as an adjunct decision-support tool during endoscopic review to standardize lesion risk stratification (benign to malignant spectrum), support management decisions (biopsy vs. resection vs. surveillance), and reduce operator-dependent variability by pairing ordinal risk outputs with interpretable visual explanations. Full article
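A common way to derive a continuous malignancy score from ordinal class probabilities is the probability-weighted class index normalized to [0, 1]; whether GastroMalign computes its score this way is not stated, so the following is an assumed sketch:

```python
def malignancy_score(probs):
    """Continuous risk score in [0, 1] from ordinal class probabilities,
    with classes ordered benign -> malignant. The expected class index
    is normalized by the highest index."""
    assert abs(sum(probs) - 1.0) < 1e-6, "probabilities must sum to 1"
    k = len(probs) - 1
    return sum(i * p for i, p in enumerate(probs)) / k
```

Such a score increases monotonically as probability mass shifts toward higher-severity classes, matching the behavior reported for the ordinal classes above.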

25 pages, 869 KB  
Article
Fostering Sustainable Learning via Embodied Intelligence: The E3-HOT Framework for Higher-Order Thinking in the AI Era
by Hanzi Zhu, Xin Jiang, Xiaolei Zhang, Huiying Xu, Deang Su, Zhendong Chen and Xinzhong Zhu
Sustainability 2026, 18(7), 3469; https://doi.org/10.3390/su18073469 - 2 Apr 2026
Abstract
Artificial intelligence (AI) can help students accelerate assignment completion, but it may also foster cognitive outsourcing and learning detached from authentic contexts. This paper presents E3-HOT, a conceptual framework that leverages embodied intelligence to sustain learners’ cognitive agency and higher-order thinking for sustainable learning, aligned with SDG 4 (Sustainable Development Goal 4) and its emphasis on inclusive and equitable quality education and lifelong learning. Using an iterative conceptual synthesis, we distill three embodied pathways—situational embedding, embodied participation, and cognitive creation—and translate them into a practical system design with a three-module E3 core. It includes a virtual–real integrated learning environment for rich scenarios, embodied interaction for action and sensing, and an intelligent core that provides bounded and teacher-controlled support. To facilitate equitable adoption across resource-diverse settings, we specify multi-fidelity enactment options and an auditable set of evidence artifacts for subsequent expert review and future validation studies. We further provide an illustrative university human–AI design project that outlines a week-by-week workflow and corresponding evidence plan, presented as a worked example rather than a report of an implemented study. E3-HOT offers a traceable design-and-evidence blueprint without claiming measured learning gains. Full article
(This article belongs to the Section Sustainable Education and Approaches)

23 pages, 2936 KB  
Article
Lightweight Transient-Source Detection Method for Edge Computing
by Jiahao Zhang, Yutian Fu, Feng Dong and Lingfeng Huang
Universe 2026, 12(4), 101; https://doi.org/10.3390/universe12040101 - 1 Apr 2026
Abstract
Transient-source detection without relying on difference images still faces challenges in achieving high accuracy, especially under practical space-based astronomical survey conditions where the data volume is enormous, on-orbit transmission bandwidth is limited, and real-time response is required for rapid follow-up observations. To address these issues, this paper proposes a lightweight detection network that integrates multi-scale feature fusion with contextual feature extraction, enabling efficient real-time processing on resource-constrained edge devices. The proposed model enhances robustness to point-spread-function variations across observation conditions and to complex background environments, while simultaneously improving detection accuracy. To evaluate performance comprehensively, lightweight VGG and lightweight ResNet architectures and other baseline models—commonly used as baselines for transient-source detection—are adopted for comparison. Experimental results show that under the condition that the models have approximately the same number of parameters, the proposed network achieves the best accuracy, obtaining nearly 1% improvement compared with the best-performing baseline model. Based on this design, an ultra-lightweight version with only 7k parameters is further developed by incorporating a compact multi-scale module, improving accuracy by 1% over the version without the multi-scale structure. Moreover, through heterogeneous knowledge distillation and adaptive iterative training, the accuracy of the ultra-lightweight model is further increased from 93.3% to 94.0%. Finally, the model is deployed and validated on an AI hardware acceleration platform. The results demonstrate that the proposed method substantially improves inference throughput while maintaining high accuracy, providing a practical solution for real-time, low-latency, on-device transient-source detection under large data volume and limited transmission conditions. 
Specifically, the proposed models are trained offline on a high-performance GPU and subsequently deployed on the Fudan Microelectronics 7100 AI board to evaluate their real-world inference efficiency on resource-constrained edge devices. Full article
(This article belongs to the Special Issue Applications of Artificial Intelligence in Modern Astronomy)
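The knowledge-distillation step can be sketched as the standard temperature-softened KL loss between teacher and student logits (a generic formulation; the paper's heterogeneous distillation and adaptive iterative training are not specified here):

```python
import math

def softmax(logits, t=1.0):
    """Temperature-softened softmax over a list of logits."""
    exps = [math.exp(z / t) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def distill_loss(teacher_logits, student_logits, t=2.0):
    """KL(teacher || student) on temperature-softened distributions,
    scaled by t^2 as in standard knowledge distillation."""
    p = softmax(teacher_logits, t)
    q = softmax(student_logits, t)
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
    return t * t * kl
```

In training, this term is typically mixed with the ordinary cross-entropy on ground-truth labels; the mixing weight is a tuning choice.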

23 pages, 6673 KB  
Article
ERZA-DETR: A Deep Learning-Based Detection Transformer with Enhanced Relational-Zone Aggregation for WCE Lesion Detection
by Shiren Ye, Haipeng Ma, Zetong Zhang and Liangjing Li
Algorithms 2026, 19(4), 268; https://doi.org/10.3390/a19040268 - 1 Apr 2026
Abstract
Wireless capsule endoscopy (WCE) plays a vital role in non-invasive screening of small intestinal lesions. However, the automated detection of lesions remains challenging due to low contrast, uneven illumination, and severe visual variability across images. Existing convolutional detectors rely heavily on manually designed anchors and post-processing, while end-to-end detection transformers developed for natural images exhibit limited adaptability to the complex texture and spectral characteristics of WCE data. To overcome these limitations, this study proposes a deep learning-based detection transformer with enhanced relational-zone aggregation for WCE lesion detection, termed ERZA-DETR, specifically tailored for WCE lesion detection. The framework integrates three complementary modules: a Dual-Band Adaptive Fourier Spectral module (DBFS) that recalibrates frequency responses to suppress illumination artifacts and highlight lesion boundaries; a Fused Dual-scale Gated Convolutional module (FD-gConv) that selectively fuses multi-scale texture features; and a Graph-Linked Embedding at Semantic Scales module (GLES) that preserves local topological relationships through coordinate-gated aggregation. Experimental evaluations on the SEE-AI small intestine dataset demonstrate that ERZA-DETR achieves a 3.2% improvement in mAP@50 and a 12.4% reduction in parameters compared with RT-DETRv2, achieving a superior balance between detection accuracy, computational efficiency, and clinical applicability. Full article
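The DBFS module's frequency recalibration can be illustrated in one dimension: split the spectrum at a cutoff and scale the low- and high-frequency bands separately (a 1-D NumPy sketch with fixed gains; the actual module operates on 2-D feature maps with adaptive, learned responses):

```python
import numpy as np

def dual_band_recalibrate(signal, low_gain=1.0, high_gain=2.0, cutoff=0.25):
    """Scale the low- and high-frequency bands of a 1-D signal
    separately, then invert the FFT. cutoff is the fraction of the
    spectrum treated as the low band."""
    spec = np.fft.rfft(signal)
    n_low = max(1, int(len(spec) * cutoff))
    spec[:n_low] *= low_gain   # low band: overall illumination
    spec[n_low:] *= high_gain  # high band: edges and boundaries
    return np.fft.irfft(spec, n=len(signal))
```

Boosting the high band sharpens boundary-like structure, which is the intuition behind suppressing illumination artifacts while highlighting lesion edges.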

31 pages, 2539 KB  
Article
Design and Evaluation of an AI-Based Conversational Agent for Travel Agencies: Enhancing Training, Assistance, and Operational Efficiency
by Pablo Vicente-Martínez, Emilio Soria-Olivas, Inés Esteve-Mompó, Manuel Sánchez-Montañés, María Ángeles García Escrivà and Edu William-Secin
AI 2026, 7(4), 123; https://doi.org/10.3390/ai7040123 - 1 Apr 2026
Abstract
The tourism industry faces increasing pressure for agile, personalized services, yet travel agencies struggle with fragmented knowledge scattered across isolated systems and legacy formats. While Large Language Models (LLMs) are widely applied in customer-facing roles, their potential to enhance internal operational efficiency remains largely underexplored. This study presents the design and evaluation of an intelligent assistant specifically for travel agency operations, built upon a Retrieval-Augmented Generation (RAG) architecture using Gemini 2.0 Flash. The system integrates heterogeneous data sources, including structured product catalogs and unstructured documentation processed via Optical Character Recognition (OCR), into a unified interface comprising work assistance, interactive training, and evaluation modules. Results demonstrate information retrieval times not greater than 45 s, ensuring its daily usability, while maintaining 95% accuracy. Furthermore, the system democratizes tacit senior expertise and accelerates new employee onboarding. This research validates RAG architectures as a powerful solution to knowledge fragmentation, shifting the strategic AI focus from customer automation to employee empowerment and operational optimization. Full article
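At the core of any RAG system is retrieval over a document store. A minimal bag-of-words cosine retrieval sketch (the deployed system uses Gemini 2.0 Flash and richer retrieval over OCR-processed documents; the snippets below are invented):

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, top_k=1):
    """Return the top_k documents most similar to the query."""
    q = Counter(query.lower().split())
    scored = [(cosine(q, Counter(d.lower().split())), d) for d in docs]
    scored.sort(key=lambda t: t[0], reverse=True)
    return [d for _, d in scored[:top_k]]
```

In a full RAG loop, the retrieved passages are then placed into the LLM prompt so the generated answer is grounded in agency documentation.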

23 pages, 2459 KB  
Article
Optimizing Renewable Energy Distribution Networks with AI Techniques: The A-IsolE Project
by Gian Giuseppe Soma, Maria Giulia Pasquarelli, Massimo Pentolini, Cristina Dore, Francesco Martini, Andrea Bagnasco, Andrea Vinci, Giulio Valfrè, Enrico Bessone, Gabriele Mosaico and Matteo Saviozzi
Energies 2026, 19(7), 1718; https://doi.org/10.3390/en19071718 - 31 Mar 2026
Abstract
The large-scale penetration of Distributed Energy Resources (DERs), the proliferation of Energy Communities, and the increasing provision of flexibility services are fundamentally transforming distribution network operation, rendering traditional Distribution Management Systems (DMSs) structurally inadequate. This paper addresses this structural gap by proposing and experimentally validating A-ISolE, a novel hybrid Artificial Intelligence (AI) architecture that natively integrates centralized and distributed intelligence within a unified DMS framework. The core scientific contribution of this work lies in the formulation and deployment of a coordinated, hierarchical AI paradigm in which cloud-level predictive and optimization modules dynamically interact with edge-level autonomous control agents. Specifically, the paper introduces: (1) an integrated forecasting state estimation pipeline with AI-enhanced grid observability; (2) intelligent fault location and optimal feeder reconfiguration algorithms embedded into operational control loops; and (3) distributed edge control strategies enabling autonomous yet coordinated microgrid stabilization. The architecture is validated on a real pilot microgrid in Sanremo (Italy). Experimental results demonstrate quantifiable gains in many parameters, substantiating the feasibility of hybrid centralized/distributed AI as a foundational paradigm for future resilient and decarbonized distribution networks. Full article
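The forecasting and state-estimation pipeline builds on classical network state estimation, whose textbook baseline is weighted least squares over a linear measurement model z = Hx + e. The sketch below shows that generic baseline, not the paper's AI-enhanced estimator:

```python
import numpy as np

def wls_state_estimate(H, z, weights):
    """Classical weighted least-squares state estimation:
    minimize (z - H x)^T W (z - H x) for state vector x.
    H: measurement matrix, z: measurements, weights: per-measurement
    confidences (inverse error variances)."""
    W = np.diag(weights)
    A = H.T @ W @ H            # gain matrix
    b = H.T @ W @ z
    return np.linalg.solve(A, b)
```

AI enhancement typically enters by supplying pseudo-measurements (forecasts) where real telemetry is sparse, improving observability of poorly metered feeders.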

20 pages, 13941 KB  
Article
A Graph Learning-Driven Method for Multi-Ship Collision Risk Prediction in Complex Waterways
by Jie Wang, Shijie Liu and Yan Zhang
J. Mar. Sci. Eng. 2026, 14(7), 658; https://doi.org/10.3390/jmse14070658 - 31 Mar 2026
Abstract
The proactive identification of emerging collision risks is pivotal for maritime traffic safety, particularly in congested hub ports where multi-ship encounters exhibit complex spatiotemporal dependencies. Conventional risk assessment methods, predominantly predicated on instantaneous geometric indicators, often fall short in capturing the systemic evolution of risk. To address these limitations, this study proposes an Improved Spatio-Temporal Graph Convolutional Network (IST-GCN) framework for the short-term forecasting of ship collision risk. The framework models maritime traffic as a rule-integrated dynamic interaction graph, where edge weights are adaptively modulated by navigational rules and the Collision Risk Index (CRI). By leveraging historical observation windows, the model forecasts the maximum collective risk level over a subsequent prediction horizon, categorizing traffic scenes into three ordinal levels: Low, Medium, and High. A comprehensive case study utilizing real-world Automatic Identification System (AIS) data from the core waters of Ningbo–Zhoushan Port demonstrates the efficacy of the proposed approach. The IST-GCN achieves a superior prediction Accuracy of 92.4% and an F1-score of 0.91, significantly outperforming representative baselines including Long Short-Term Memory (LSTM), Temporal Convolutional Network (TCN), and standard ST-GCN. Notably, by explicitly encoding COLREGs-based interaction logic, the framework reduces the False Alarm Rate (FAR) to 8.5% in complex crossing and merging scenarios. These findings indicate that the IST-GCN serves as an interpretable, reliable, and early-warning decision-support tool for intelligent maritime supervision and modern Vessel Traffic Services (VTS). Full article
(This article belongs to the Special Issue Advances in Maritime Shipping)
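The rule-integrated interaction graph assigns each ship pair an edge weight modulated by navigational rules and the CRI. A minimal sketch of building such a weighted adjacency matrix (the rule factors and CRI values below are invented; real CRI computation combines DCPA, TCPA, and related kinematic indicators):

```python
def build_adjacency(cri, rule_factor):
    """Edge weight between ships i and j = CRI * rule modulation.
    cri and rule_factor are symmetric pairwise matrices given as
    lists of lists; the diagonal (self-edges) is left at zero."""
    n = len(cri)
    adj = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i != j:
                adj[i][j] = cri[i][j] * rule_factor[i][j]
    return adj
```

In the ST-GCN setting, one such adjacency matrix is produced per time step, and the graph convolution aggregates neighbor features weighted by these edges.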

13 pages, 1799 KB  
Proceeding Paper
Cooling Tower Decision Support Web System: A Case Study
by Hao-Yu Lien, Wen-Hao Chen and Yen-Jen Chen
Eng. Proc. 2026, 134(1), 7; https://doi.org/10.3390/engproc2026134007 - 30 Mar 2026
Abstract
Conventional cooling tower operations often rely on the operator’s experience for fan-switching control, lacking precise decision support and real-time monitoring capabilities. This makes it challenging to maintain water temperature within an optimal range, thereby affecting industrial process efficiency. Using a case study approach, we integrate a Long Short-Term Memory (LSTM) model for temperature prediction with a Reinforcement Learning (RL) model to develop a web-based decision support system for cooling tower operations. The system uses an LSTM model to predict the trend of return water temperature for the next 15 min. This prediction, along with environmental conditions and historical data, is then fed into the RL model. Through a reward mechanism, the model is designed to receive a higher score when the predicted temperature is close to the benchmark of 30.5 °C and a lower score otherwise, enabling it to learn the optimal fan control strategy. Based on the evaluation results, the system automatically determines the optimal action—turning the fan on, off, or maintaining its current state—and provides specific fan operation suggestions and a decision-making basis to the operator via a web interface. This system is designed with a layered architecture, comprising functional modules such as a real-time monitoring dashboard, historical data query, and AI model management. Through visual elements like temperature trend line charts, fan status indicators, and a decision suggestion interface, it provides operators with real-time water temperature status, predicted temperature trends, and specific operational recommendations. The system has been deployed and is running in an actual manufacturing factory, where the AI model generates predictions and decision outputs every 15 min, assisting operators in adjusting fan control. 
This has successfully stabilized the outlet water temperature within the target range of 30–31 °C, thereby enhancing the efficiency of cooling water temperature regulation. This work demonstrates the practical application of AI technology in a manufacturing control scenario and establishes a web-based decision support system, providing a concrete example for smart manufacturing transformation within an Industrial IoT environment. Full article
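The reward design described in the abstract (higher score when the predicted return-water temperature is near the 30.5 °C benchmark, lower otherwise) can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the function name, the tolerance band, and the linear decay are assumptions; only the 30.5 °C benchmark comes from the abstract.

```python
def temperature_reward(predicted_temp_c: float,
                       benchmark_c: float = 30.5,
                       tolerance_c: float = 0.5) -> float:
    """Return a reward in [0, 1] that decays linearly with the
    absolute deviation of the prediction from the benchmark.
    The 0.5 °C tolerance band and linear shape are assumptions."""
    deviation = abs(predicted_temp_c - benchmark_c)
    # Full reward inside the tolerance band, linear decay outside it.
    return max(0.0, 1.0 - max(0.0, deviation - tolerance_c))

# A prediction at the benchmark earns the maximum reward; one far
# outside the 30–31 °C target range earns none.
print(temperature_reward(30.5))  # 1.0
print(temperature_reward(33.0))  # 0.0
```

An RL agent trained against such a reward would, every 15 min, score the LSTM's predicted temperature under each candidate action (fan on, off, or unchanged) and prefer the action with the highest expected reward.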

10 pages, 2959 KB  
Proceeding Paper
AI-Driven Detection, Characterization and Localization of GNSS Interference: A Comprehensive Approach Using Portable Sensors
by Yasamin Keshmiri Esfandabadi, Amir Tabatabaei and Ruediger Hein
Eng. Proc. 2026, 126(1), 43; https://doi.org/10.3390/engproc2026126043 - 30 Mar 2026
Viewed by 188
Abstract
The increasing interest in the development and integration of navigation and positioning services across a wide range of receivers has exposed them to various security threats, including GNSS jamming and spoofing attacks. Early detection of jamming and spoofing interference is crucial to mitigating these threats and preventing service degradation. This research introduces an interference detection technique that applies an AI algorithm to GNSS data, combining several methods to enhance detection accuracy and efficiency. The objective was to use modern sensors and AI to develop an effective tool that detects, characterizes, and localizes interference, thereby reducing associated risks. These sensors and algorithms enable continuous GNSS interference monitoring and support real-time decision-making. A server plays a crucial role in managing the entire system. Its primary function is to process data collected from various sensors, referred to as nodes (e.g., static, rover, drone, and space), and from (public) GNSS networks, as well as to perform localization using rotating-antenna nodes. Within the interference detection module, various methods were implemented at different points in the software receiver architecture. Each method's certainty in identifying an interference source depends on its design and capabilities, and its outcome, whether positive or negative, remains subject to error. To enhance the decision process, an AI-based decision-making block determines the presence of interference at a given epoch. The proposed interference monitoring methods were evaluated through experiments using GNSS signals under clean, jamming, and spoofing scenarios. The results demonstrate the techniques' applicability across diverse scenarios, achieving high performance in interference detection, characterization, and localization. Full article
(This article belongs to the Proceedings of European Navigation Conference 2025)
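The abstract describes fusing verdicts from several detection methods, each with its own reliability, into one epoch-level decision. A minimal sketch of such a fusion step is below; the method names, reliability weights, and weighted-vote rule are illustrative assumptions, not the paper's actual AI decision block.

```python
from dataclasses import dataclass

@dataclass
class MethodVerdict:
    name: str           # detection method (e.g. AGC level, C/N0 drop)
    interference: bool  # this method's verdict for the current epoch
    reliability: float  # weight reflecting the method's known accuracy

def decide_epoch(verdicts: list[MethodVerdict], threshold: float = 0.5) -> bool:
    """Weighted vote: declare interference when the reliability-weighted
    fraction of positive verdicts exceeds the threshold."""
    total = sum(v.reliability for v in verdicts)
    positive = sum(v.reliability for v in verdicts if v.interference)
    return total > 0 and positive / total > threshold

# Two reliable methods flag interference, one weaker method does not.
votes = [
    MethodVerdict("agc_level", True, 0.9),
    MethodVerdict("cn0_drop", True, 0.7),
    MethodVerdict("spectrum_scan", False, 0.4),
]
print(decide_epoch(votes))  # True (1.6 / 2.0 = 0.8 > 0.5)
```

In the paper's architecture this role is played by a trained AI block rather than a fixed vote, but the input shape is the same: per-method verdicts of varying certainty, reduced to one decision per epoch.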

17 pages, 2368 KB  
Article
LANTERN-XGB: An Interpretable Multi-Modal Machine Learning for Improving Clinical Decision-Making in Lung Cancer
by Davide Dalfovo, Carolina Sassorossi, Elisa De Paolis, Annalisa Campanella, Dania Nachira, Leonardo Petracca Ciavarella, Luca Boldrini, Esther G. C. Troost, Róza Ádány, Núria Farré, Ece Öztürk, Angelo Minucci, Rocco Trisolini, Emilio Bria, Steffen Löck, Stefano Margaritora and Filippo Lococo
Int. J. Mol. Sci. 2026, 27(7), 3128; https://doi.org/10.3390/ijms27073128 - 30 Mar 2026
Viewed by 290
Abstract
Non-small cell lung cancer (NSCLC) remains the leading cause of cancer-related mortality globally. While multi-modal artificial intelligence (AI) models offer significant predictive potential, their translation into routine clinical practice is delayed by the “black box” nature of complex algorithms and the fragmentation of heterogeneous data. We present LANTERN-XGB, a hierarchical machine learning workflow designed to bridge this gap by generating interpretable “digital human avatars” for precision oncology. The methodology employs a multi-stage scalable tree boosting system (XGBoost) architecture utilizing Shapley additive explanations (SHAP) for rigorous hierarchical feature selection, missing value management, and patient-specific decision support. The workflow was developed and benchmarked using a retrospective cohort of 437 patients with clinical N0 NSCLC, followed by validation on a prospective dataset (n = 100) and an independent external dataset (n = 100). The pipeline integrates diverse data modalities to predict occult lymph node metastasis (OLM). LANTERN-XGB identified a robust consensus signature driven by non-linear interactions among CT textural fragmentation, PET metabolic heterogeneity, tumor density distribution, and systemic clinical modulators. Exploratory transcriptomic pathway analysis (GSVA) revealed that high-risk predictions strongly correlate with systemic molecular dysregulation, such as the enrichment of immune-inflammatory signaling and metabolic stress pathways. The model achieved robust discrimination in external validation (AUC ≈ 0.77), performing comparably to state-of-the-art nomogram benchmarks. Crucially, the LANTERN-XGB framework demonstrated superior utility in handling diagnostic ambiguity; local force plots allowed for the correct reclassification of “borderline” predictions by visualizing feature interactions that standard linear models fail to capture.
LANTERN-XGB provides a validated, open-source framework that successfully balances predictive power with clinical transparency. By empowering clinicians to visualize and verify the logic behind AI predictions, this workflow offers a pragmatic path for integrating reliable multi-modal avatars into daily medical decision-making. Full article
(This article belongs to the Special Issue Omics Science and Research in Human Health and Disease)
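The hierarchical, SHAP-driven feature selection described above can be illustrated schematically: within each data modality, features are ranked by mean |SHAP| contribution and only the top-ranked survive into a pooled consensus signature. The modality names, feature names, and values below are invented for illustration; the actual workflow computes SHAP values from trained XGBoost models.

```python
def select_top_features(mean_abs_shap: dict[str, float], k: int) -> list[str]:
    """Keep the k features with the largest mean |SHAP| contribution."""
    ranked = sorted(mean_abs_shap, key=mean_abs_shap.get, reverse=True)
    return ranked[:k]

# Hypothetical per-modality mean |SHAP| values (illustrative only).
modalities = {
    "ct_radiomics": {"texture_entropy": 0.41, "glcm_contrast": 0.08,
                     "shape_sphericity": 0.02},
    "pet": {"suv_max": 0.35, "metabolic_volume": 0.12},
    "clinical": {"age": 0.05, "smoking_pack_years": 0.21},
}

# Stage 1: per-modality selection; stage 2: pool into a consensus signature.
consensus = []
for name, shap_values in modalities.items():
    consensus += select_top_features(shap_values, k=2)
print(consensus)
```

The hierarchical step matters because it prevents one high-dimensional modality (e.g. radiomics) from crowding out low-dimensional but informative clinical variables before the final model is fit.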

7 pages, 2523 KB  
Proceeding Paper
AI- and IoT-Enabled Smart Dustbin for Automated Hazardous Electronic Waste Separation
by Min Xuan Soh, Hou Kit Mun, Hui Ziang Lee, Zhi Khai Ng and Yan Chai Hum
Eng. Proc. 2026, 134(1), 10; https://doi.org/10.3390/engproc2026134010 - 30 Mar 2026
Viewed by 330
Abstract
Electronic waste (e-waste) continues to increase globally, yet conventional bins cannot distinguish hazardous batteries and devices from recyclable metals. This article presents an AI- and IoT-enabled smart dustbin that automatically identifies and segregates general waste, metals, and electronic or battery-based hazards while providing real-time monitoring through a cloud-based dashboard. The system integrates inductive sensing, Time-of-Flight detection, an Espressif Systems Platform 32 (ESP32)-CAM module, and Google Gemini 1.5 Flash for image classification. The prototype achieved a waste segregation accuracy of 93.5% with a total cycle time of 4–6 s per item. The touch-free lid, swift mechanical actuation, and compact 59 × 59 × 100 cm form factor make the dustbin suitable for deployment in campuses, offices, and shopping malls. Dual ESP32 controllers, cloud connectivity through Message Queuing Telemetry Transport (MQTT), Firebase, and a Streamlit web interface enable automated alerts through Discord and email, demonstrating a scalable and energy-efficient approach to sustainable e-waste management. Full article
