Search Results (1,050)

Search Parameters:
Keywords = multi-level context modeling

70 pages, 8778 KB  
Systematic Review
Beyond Accuracy: Transferability Limits, Validation Inflation, and Uncertainty Gaps in Satellite-Based Water Quality Monitoring—A Systematic Quantitative Synthesis and Operational Framework
by Saeid Pourmorad, Valerie Graw, Andreas Rienow and Luca Antonio Dimuccio
Remote Sens. 2026, 18(7), 1098; https://doi.org/10.3390/rs18071098 (registering DOI) - 7 Apr 2026
Abstract
Satellite remote sensing has become essential for water quality assessment across inland and coastal environments, with rapid improvements in recent years. Significant advances have been made in detecting optically active parameters (such as chlorophyll-a, suspended matter, and turbidity), showing consistently strong performance across multiple studies. Specifically, the median validation performance (R2) derived from the quantitative synthesis indicates R2 = 0.82 for chlorophyll-a (interquartile range—IQR: 0.75–0.90), R2 = 0.80 for total suspended matter (IQR: 0.78–0.85), and R2 = 0.88 for turbidity (IQR: 0.85–0.90). Conversely, the retrieval of optically inactive parameters (such as nutrients like total phosphorus and total nitrogen) remains more context dependent. It exhibits moderate, more variable results, with median R2 = 0.68 (IQR: 0.64–0.74) for total phosphorus and R2 = 0.75 (IQR: 0.70–0.80) for total nitrogen. These findings clearly illustrate the varying success of retrievals of optically active and inactive parameters and underscore the inherent difficulties of indirect estimation methods. However, high reported accuracy has yet to translate into transferable, uncertainty-informed, and operational monitoring systems. This gap stems from structural issues in validation design, physics integration, uncertainty management, and multi-sensor compatibility rather than data limitations alone. We present a PRISMA-guided, distribution-aware quantitative synthesis of 152 peer-reviewed studies (1980–2025), based on a systematic search protocol, to evaluate satellite-based retrievals of both optically active and inactive parameters. Instead of simply averaging performance, we analyse the empirical distributions of validation metrics, considering the validation protocol, sensor type, parameter category, degree of physics integration, and uncertainty quantification. 
The synthesis demonstrates that validation strategy often influences reported results more than the algorithm class itself, with accuracy inflated under non-independent cross-validation methods and notable variability between studies concealed by mean-based reports. Across four decades, four persistent structural challenges remain: limited transferability across sites and sensors beyond calibration areas; weak or implicit physical integration in many data-driven models; lack of or inconsistency in uncertainty quantification; and fragmented multi-sensor harmonisation that restricts operational scalability. To address these issues, we introduce two evidence-based coding frameworks: a physics-integration taxonomy (P0–P4) and an uncertainty-quantification hierarchy (U0–U4). Applying these frameworks shows that most studies remain focused on low-to-moderate levels of physics integration and primarily consider uncertainty at the prediction stage, with limited attention to upstream sources throughout the observation and inference process. Building on this structured synthesis, we propose a transferable, physics-informed, and uncertainty-aware conceptual framework that links model architecture, validation robustness, and probabilistic uncertainty to well-founded design principles. By shifting satellite water quality modelling from isolated algorithm demonstrations towards integrated, evidence-based system design, this study promotes scalable, decision-grade environmental monitoring amid the accelerating impacts of climate change. Full article
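The distribution-aware reporting this synthesis argues for — medians and interquartile ranges of validation R² rather than means — can be sketched in a few lines; the per-study R² values below are hypothetical stand-ins, not data from the review:

```python
import statistics

def summarize_r2(values):
    """Median and interquartile range (IQR) of per-study validation R^2."""
    q1, _, q3 = statistics.quantiles(values, n=4, method="inclusive")
    return {"median": statistics.median(values), "iqr": (q1, q3)}

chla_r2 = [0.75, 0.78, 0.82, 0.86, 0.90]   # hypothetical per-study R^2 values
summary = summarize_r2(chla_r2)            # median 0.82, IQR (0.78, 0.86)
```

Reporting the IQR alongside the median is what exposes the between-study variability that the abstract says mean-based reports conceal.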

29 pages, 2990 KB  
Article
Federated and Interpretable AI Framework for Secure and Transparent Loan Default Prediction in Financial Institutions
by Awad M. Awadelkarim
Math. Comput. Appl. 2026, 31(2), 56; https://doi.org/10.3390/mca31020056 - 5 Apr 2026
Viewed by 226
Abstract
Predicting loan defaults is a significant challenge for financial institutions; however, current machine learning techniques often encounter issues with data privacy, cross-institutional cooperation, and model transparency. Practical deployment of advanced predictive models is restricted by centralized training paradigms, which raise regulatory and confidentiality concerns, and by black-box decision making, which diminishes confidence in automated credit risk tools. This study mitigates these problems by adopting a federated-inspired decentralized ensemble learning model combined with explainable artificial intelligence (XAI) for predicting loan defaults. Various machine learning classifiers (K-Nearest Neighbors, support vector machine, random forest, and XGBoost) are trained on partitioned institutional data without any data sharing, and a prediction-level aggregation strategy simulates collaborative decision making while preserving data locality. SHAP and LIME promote model interpretability by giving both global and local explanations of the predictions. The proposed framework was tested on a large public loan dataset containing more than 116,000 records with various financial and borrower-related features. The experimental findings show that XGBoost delivers high and reliable predictive accuracy in both centralized and decentralized scenarios, achieving 99.7% accuracy under federated-inspired evaluation. The explanation analysis identifies interest rate spread and upfront charges as the most significant predictors of loan default risk.
The main contributions of this research are as follows: (i) a privacy-preserving decentralized ensemble learning framework that is applicable in multi-institutional financial contexts, (ii) a detailed analysis of centralized and decentralized predictive performances, and (iii) the pipeline of the XAI, which can be used to increase its transparency and regulatory confidence in automated credit risk evaluation. These results prove that decentralized learning combined with explainable AI can provide high-performing, transparent and privacy-sensitive loan default prediction systems in practice in real-world banking systems. Full article
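The prediction-level aggregation strategy the abstract describes — institution-local models contributing predictions without sharing data — reduces, in its simplest form, to a majority vote; the per-institution predictions here are hypothetical:

```python
from collections import Counter

def aggregate(votes):
    """Prediction-level aggregation: majority vote over the labels
    predicted by each institution's locally trained classifier."""
    return Counter(votes).most_common(1)[0][0]

# hypothetical predictions from four institution-local classifiers
local_votes = ["default", "no_default", "default", "default"]
decision = aggregate(local_votes)   # "default"
```

Only labels (or, in a softer variant, predicted probabilities to be averaged) cross institutional boundaries, which is what keeps the raw loan records local.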

23 pages, 4788 KB  
Article
Leakage-Free Evaluation and Multi-Prototype Contrastive Learning for Hyperspectral Classification of Vegetation
by Tong Jia and Haiyong Ding
Appl. Sci. 2026, 16(7), 3543; https://doi.org/10.3390/app16073543 - 4 Apr 2026
Viewed by 139
Abstract
Hyperspectral image (HSI) classification regarding vegetation is hampered by strong intra-class spectral variability and inter-class similarity, and commonly used random pixel splits can introduce spatial-context leakage that inflates test accuracy in patch-based models. To address these issues, we propose a classification framework that couples a leakage-free block partition (LFBP) strategy with class-aware multi-prototype contrastive loss (CAMP-CL). LFBP assigns non-overlapping spatial blocks to training/validation/test sets and reserves a buffer matched to the patch radius to prevent contextual overlap while keeping class distributions balanced. CAMP-CL represents each class with multiple learnable prototypes and performs supervised contrastive learning at the prototype level, encouraging compact yet multimodal intra-class embedding and improved inter-class separation. Experiments conducted on the Matiwan Village airborne HSI dataset under the LFBP protocol show that the proposed method can achieve 91.51% overall accuracy (OA) and 91.49% average accuracy (AA). Compared with the strongest baseline, supervised contrastive learning (SupCon), the proposed method yields consistent gains of 1.07 percentage points (pp) in both OA and AA while improving OA by 5.76 pp over the cross-entropy baseline. The results suggest that CAMP-CL is beneficial for addressing the challenges of HSI classification for fine-grained vegetation, while leakage-free evaluation protocols are important for obtaining more reliable performance estimates in practical settings. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
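The LFBP idea — assigning whole spatial blocks to splits and discarding a buffer strip matched to the patch radius — can be sketched as follows; block size, buffer width, and split ratios are illustrative assumptions, not the paper's settings:

```python
import random

def block_split(height, width, block=64, buffer=4, seed=0):
    """Leakage-free block partition sketch: whole non-overlapping spatial
    blocks go to train/val/test, and a buffer strip (width matched to the
    patch radius) along every block border is excluded, so no test patch
    overlaps training context."""
    rng = random.Random(seed)
    assignment = {}
    for by in range(0, height, block):
        for bx in range(0, width, block):
            assignment[(by, bx)] = rng.choices(
                ["train", "val", "test"], weights=[0.6, 0.2, 0.2])[0]

    def label(y, x):
        by, bx = (y // block) * block, (x // block) * block
        edge = min(y - by, by + block - 1 - y, x - bx, bx + block - 1 - x)
        return "buffer" if edge < buffer else assignment[(by, bx)]

    return label

label = block_split(128, 128)   # 2x2 grid of 64x64 blocks
```

With a random pixel split, a test pixel's patch can contain training pixels; the buffer is what removes that spatial-context leakage.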
16 pages, 609 KB  
Article
Dynamic Simulation of Ecological Risk Thresholds Under Multi-Reservoir Water Transfer Operations in the Upper Yangtze River Basin
by Zeyu Zhang, Yong Li, Peiying Tan, Hongsen You, Yi Peng, Zhuying Mao, Jia Li, Lingling Ni and Yun Lu
Land 2026, 15(4), 594; https://doi.org/10.3390/land15040594 - 3 Apr 2026
Viewed by 216
Abstract
This study systematically evaluates the regulatory effects of multi-reservoir water diversion on ecological risk thresholds in the upper Yangtze River. Taking multiple reservoirs in the upper basin as the research object, a system dynamics model was developed to simulate reservoir operation, water level regulation, ecological water diversion, and diversion capacity enhancement. Key indicators included upstream ecological risk thresholds, ecohydrological risk levels, habitat ecological risk levels, and water environment ecological risk levels. Five scenarios were designed: S0 (baseline), S1 (enhanced ecological compensation), S2 (industrial coordination and optimization), S3 (economic synergy promotion), and S4 (comprehensive regulation and optimization). These scenarios were used to assess the combined effects of different diversion strategies on ecological risk control. Results indicate the following: (1) All scenarios reduce ecological risks to some extent, but the degree of effectiveness differs. (2) The overall ranking is S4 > S1 > S3 > S2 > S0, demonstrating that comprehensive regulation optimization is most effective in mitigating ecohydrological risks, improving habitat quality, and enhancing water environment security. (3) S1 is particularly effective in reducing ecohydrological risks and is suitable as an emergency safeguard during dry seasons, though less effective than S4 in habitat and water quality improvements. (4) S3 supports economic–ecological synergy but remains less effective than S1 and S4. (5) S2 primarily enhances industrial–ecological coordination with limited contribution to overall risk control. (6) S0 yields minimal improvement under existing operational conditions, failing to meet ecosystem safety thresholds. 
Overall, the findings highlight that in multi-reservoir joint diversion contexts, a composite strategy centered on comprehensive regulation optimization, supplemented by ecological compensation and economic synergy, should be prioritized to achieve systematic ecological risk reduction and ensure long-term watershed ecological security. Full article
(This article belongs to the Special Issue Conservation of Bio- and Geo-Diversity and Landscape Changes II)
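At its core, a system dynamics model of reservoir operation like the one described is stock-flow bookkeeping of storage under inflow, demand, and diversion policies; a minimal sketch in which all quantities are hypothetical numbers, not values from the study:

```python
def simulate_storage(inflow, demand, diversion, s0=100.0, cap=200.0):
    """Stock-flow sketch of one reservoir under an ecological diversion
    policy: storage(t+1) = storage(t) + inflow - demand - diversion,
    clipped to [0, capacity]. Units are arbitrary."""
    s, trace = s0, []
    for q_in, q_dem, q_div in zip(inflow, demand, diversion):
        s = min(max(s + q_in - q_dem - q_div, 0.0), cap)
        trace.append(s)
    return trace

trace = simulate_storage([50, 30, 20], [25, 25, 25], [10, 10, 10])
# storage trajectory: [115.0, 110.0, 95.0]
```

The study's scenarios S0–S4 would correspond to different diversion and compensation policies fed into such a model, coupled with the ecological risk indicators.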

18 pages, 688 KB  
Article
Food Insecurity and Adolescent Obesity in the United States: A Social Ecological Analysis of Multi-Level Risk Factors and Structural Inequities
by Ogochukwu R. Abasilim, Kenechukwu O. S. Nwosu, Opeyemi O. Akintimehin, Ogochukwu J. Ezeigwe, Odinakachukwu O. Dimgba, Meghna Lama, Amarachi H. Njoku, Nnenna C. Okoye and Elizabeth O. Obekpa
Int. J. Environ. Res. Public Health 2026, 23(4), 458; https://doi.org/10.3390/ijerph23040458 - 3 Apr 2026
Viewed by 235
Abstract
While the association between food insecurity and adolescent obesity is well-established, the mechanisms through which these co-occurring public health crises are linked remain inadequately understood. Using the Social Ecological Model as a theoretical framework, this study examines how individual (physical activity), interpersonal (household food security), community (poverty level, residence), and societal (race/ethnicity) factors interact to influence adolescent weight outcomes. Cross-sectional data from 37,425 adolescents aged 12–17 years in the 2022–2023 National Survey of Children’s Health using weighted multinomial logistic regression with interaction terms were used. Adolescents experiencing nutrition insecurity (adequate quantity but poor-quality food) had 41% higher odds of obesity (adjusted odds ratio (aOR) = 1.41; 95% CI: 1.20–1.65), while those with food insecurity (insufficient quantity) had 48% higher odds (aOR = 1.48; 95% CI: 1.08–2.02) compared to food-secure peers. Significant effect modification emerged across ecological levels: poverty below the 200% federal poverty level (FPL) significantly amplified the food insecurity–obesity relationship (interaction p < 0.001), Hispanic and Black adolescents demonstrated 49% and 78% higher obesity odds, respectively, independent of household food and nutrition security status, and physical activity showed protective effects that varied by food security context (interaction p = 0.003). These findings underscore the necessity of multi-level interventions addressing structural inequities alongside individual behaviors to combat adolescent obesity in food-insecure populations effectively. Full article
(This article belongs to the Special Issue Health Promotion in Childhood and Adolescence)
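The adjusted odds ratios and Wald confidence intervals quoted above follow from logistic-regression coefficients by exponentiation; a sketch with a hypothetical coefficient and standard error chosen so the result lands near the reported aOR = 1.41 (95% CI 1.20–1.65):

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Adjusted odds ratio and Wald 95% CI from a logistic-regression
    coefficient and its standard error: aOR = exp(beta),
    CI = (exp(beta - z*se), exp(beta + z*se))."""
    return math.exp(beta), (math.exp(beta - z * se), math.exp(beta + z * se))

# hypothetical beta and SE, not taken from the paper's model output
aor, (lo, hi) = odds_ratio_ci(0.344, 0.081)
```

The interaction terms the study reports (e.g., poverty × food insecurity) enter the same way: an extra coefficient whose exponential scales the odds ratio in the subgroup.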

28 pages, 1876 KB  
Article
Network Analysis of Convergent and Specific Molecular Pathways of Nutraceuticals with Antioxidant and Neuroprotective Potential in Glaucoma
by Pavlina Teneva, Sylvia Stamova, Kaloyan Varlyakov, Neli Ermenlieva, Emilia Georgieva and Todorka Kostadinova
Antioxidants 2026, 15(4), 445; https://doi.org/10.3390/antiox15040445 - 2 Apr 2026
Viewed by 296
Abstract
Optic neuropathy represents a leading cause of irreversible vision loss, in which oxidative stress, chronic inflammation, dysregulated lipid metabolism, and mitochondrial dysfunction contribute to the progressive degeneration of retinal ganglion cells (RGCs). In recent years, a number of nutraceuticals have been investigated as potential neuroprotective agents; however, the molecular mechanisms through which they exert their effects remain incompletely understood and are often considered in isolation. In the present in silico study, an integrative network-based approach was applied for a systematic analysis of the predicted molecular targets of selected nutraceuticals with antioxidant and anti-inflammatory potential. By combining target prediction, protein–protein interaction analysis, and functional enrichment, their functional convergence was assessed in the context of optic nerve pathophysiology. The results indicate that, despite their chemical and functional heterogeneity, the investigated nutraceuticals do not act through fully independent mechanisms but instead converge on interconnected regulatory axes. In particular, lipid–inflammatory signaling, epigenetic and stress-adaptive mechanisms, as well as nuclear-receptor mediated transcriptional regulation emerged as key pathways. These pathways form integrated molecular models potentially determining cellular susceptibility to injury and the adaptive capacity of RGCs. In conclusion, the present analysis provides a systems-level framework for understanding the neuroprotective potential of nutraceuticals, highlighting the importance of network convergence and multi-target activity. The obtained results support the conceptual shift from isolated antioxidant strategies towards integrative, network-oriented approaches in the study of optic neuropathy. Full article
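The functional convergence assessed here can be approximated, at its simplest, by overlap between the predicted target sets of the nutraceuticals; a sketch using Jaccard similarity, with entirely hypothetical target lists:

```python
from itertools import combinations

def pairwise_overlap(target_sets):
    """Jaccard overlap between predicted target sets, a minimal proxy
    for the network-level convergence analysed in the study."""
    out = {}
    for (a, ta), (b, tb) in combinations(target_sets.items(), 2):
        sa, sb = set(ta), set(tb)
        out[(a, b)] = len(sa & sb) / len(sa | sb)
    return out

# hypothetical predicted targets for three compounds
targets = {"resveratrol": ["SIRT1", "NFKB1", "PPARG"],
           "curcumin":    ["NFKB1", "PPARG", "HMOX1"],
           "quercetin":   ["NFKB1", "SIRT1", "CAT"]}
overlap = pairwise_overlap(targets)
```

The paper goes further — mapping shared targets onto protein–protein interaction networks and enrichment terms — but non-zero pairwise overlap is the signal that the compounds do not act through fully independent mechanisms.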

31 pages, 2018 KB  
Article
Structuring Sustainability-Oriented Reconstruction Decisions After Earthquakes: A MIVES-Based Methodological Framework
by Josephin Rezk, Carlos Muñoz-Blanc and Oriol Pons-Valladares
Appl. Sci. 2026, 16(7), 3449; https://doi.org/10.3390/app16073449 - 2 Apr 2026
Viewed by 388
Abstract
Post-earthquake reconstruction involves complex decision-making that extends beyond structural safety to include economic, environmental, and social considerations under conditions of uncertainty and limited resources. Although sustainability-oriented assessment frameworks and multi-criteria decision-making approaches have increasingly been applied in disaster contexts, existing models typically address localized technical interventions and rarely support strategic reconstruction planning after earthquakes. This study develops a sustainability-based decision-support framework for post-earthquake reconstruction of reinforced concrete buildings using the Integrated Value Model for Sustainability Assessment (MIVES). This framework is derived through a systematic synthesis of the post-earthquake, post-disaster, and MIVES-based literature. Reconstruction alternatives reported in previous studies are first identified and classified to structure the reconstruction decision space. Sustainability requirements, criteria, and indicators are then examined and adapted through processes of retention, modification, elimination, and addition. The principal outcome of the study is an adapted MIVES requirements tree composed of 10 criteria and 19 indicators organized across the sustainability dimensions, providing a context-consistent hierarchical structure for strategic building-level reconstruction decisions. By explicitly linking reconstruction alternatives with sustainability indicators within clearly defined decision boundaries, the framework strengthens methodological rigor in sustainability-oriented reconstruction planning. The present article focuses on the methodological development of the framework (Part I). The operational implementation of the model—including expert-based weighting, value-function definition, indicator aggregation, and empirical validation through case studies—will be presented in a companion study. 
The proposed framework provides a transparent and transferable basis for sustainability-oriented reconstruction planning and supports informed decision-making by public authorities. Full article
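MIVES scores an alternative by weighted aggregation over the requirements tree, after value functions map each indicator to [0, 1]; a sketch with a hypothetical two-requirement tree (the paper's tree has 10 criteria and 19 indicators, and its weights and value functions are deferred to the companion study):

```python
def mives_score(tree, values):
    """Weighted aggregation over a MIVES-style requirements tree.
    Each node is (weight, children) or (weight, indicator_name);
    indicator values are assumed pre-mapped to [0, 1] by value functions."""
    def score(node):
        weight, body = node
        if isinstance(body, str):          # leaf indicator
            return weight * values[body]
        return weight * sum(score(child) for child in body)
    return sum(score(node) for node in tree)

# hypothetical tree: two requirements, three indicators
tree = [(0.5, [(0.6, "cost"), (0.4, "duration")]),
        (0.5, [(1.0, "co2")])]
s = mives_score(tree, {"cost": 0.8, "duration": 0.5, "co2": 0.7})
```

The ranking of reconstruction alternatives then reduces to comparing these aggregate sustainability indices.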

31 pages, 28128 KB  
Article
HMF-DEIM: High-Fidelity Multi-Domain Fusion Transformer for UAV Small Object Detection
by Lan Ma, Yun Luo and Jiajun Xu
Sensors 2026, 26(7), 2187; https://doi.org/10.3390/s26072187 - 1 Apr 2026
Viewed by 304
Abstract
Unmanned aerial vehicle (UAV) small object detection faces critical challenges including irreversible geometric detail loss during multi-level downsampling, cross-scale feature distortion from interpolation blur and aliasing, and limited long-range dependency modeling due to constrained receptive fields. To address these limitations, we propose HMF-DEIM (High-Fidelity Multi-Domain Fusion Transformer for UAV Small Object Detection), an end-to-end architecture tailored for UAV small object detection. First, we design a lightweight hierarchical differentiation backbone that removes redundant deepest-layer features (P5) to prevent tiny object information loss, employing Multi-Domain Feature Blending (MDFB) in shallow layers for geometric detail preservation and a Hierarchical Attention-guided Feature Modulation Block (HAFMB) in deep layers for global semantic modeling. Second, we develop a full-chain high-fidelity feature transformation framework comprising Channel-Adaptive Shift Upsampling (CASU) for interpolation-free resolution recovery, Multi-scale Context Alignment Fusion (MCAF) for bridging deep–shallow semantic gaps via bidirectional gating, and Diversified Residual Frequency-aware Downsampling (DRFD) for aliasing suppression through a three-branch parallel architecture. Finally, we devise the FocusFeature module that aligns multi-scale features to a unified scale and employs parallel multi-scale large-kernel depthwise convolutions to capture cross-scale long-range dependencies, generating dual-scale (P3/P4) features balancing details and semantics. Experiments demonstrate that HMF-DEIM outperforms DEIM on VisDrone2019 test by 0.405 mAP50 (+2.1%) and 0.235 mAP50–95 (+1.6%), with a remarkable 21.3% relative improvement in APs for tiny objects, while maintaining real-time inference (465 FPS with TensorRT FP16) on an NVIDIA A100 GPU with only 11.87M parameters and 34.1 GFLOPs. 
Further validation on AI-TOD v2 and DOTA v1.5 datasets confirms robust generalization across diverse aerial scenarios, making it a practical solution for resource-constrained UAV applications. Full article
(This article belongs to the Special Issue Communications and Networking Based on Artificial Intelligence)
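The abstract does not detail CASU's internals; as a hedged sketch under the assumption that "interpolation-free resolution recovery" works pixel-shuffle style, r² channel groups can be rearranged into an r-times larger spatial grid so that no new values are invented by interpolation:

```python
def channel_shift_upsample(x, r=2):
    """Interpolation-free upsampling sketch (pixel-shuffle style):
    rearrange r*r channel groups of x, shaped [C*r*r][H][W] as nested
    lists, into an output shaped [C][H*r][W*r]. Every output value is an
    input value moved to a new position; none are interpolated."""
    c_out = len(x) // (r * r)
    h, w = len(x[0]), len(x[0][0])
    out = [[[0.0] * (w * r) for _ in range(h * r)] for _ in range(c_out)]
    for c in range(c_out):
        for dy in range(r):
            for dx in range(r):
                src = x[c * r * r + dy * r + dx]
                for i in range(h):
                    for j in range(w):
                        out[c][i * r + dy][j * r + dx] = src[i][j]
    return out

x = [[[1.0]], [[2.0]], [[3.0]], [[4.0]]]   # 4 channels, 1x1 -> 1 channel, 2x2
y = channel_shift_upsample(x)
```

This is the standard remedy for the interpolation blur and aliasing the abstract attributes to naive cross-scale resizing; HMF-DEIM's actual module may differ.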

29 pages, 2771 KB  
Review
Multiphysics Modeling and Simulation of NVH Phenomena in Electric Vehicle Powertrains
by Krisztian Horvath
World Electr. Veh. J. 2026, 17(4), 183; https://doi.org/10.3390/wevj17040183 - 1 Apr 2026
Viewed by 338
Abstract
The rapid electrification of road vehicles has fundamentally reshaped the priorities of noise, vibration, and harshness (NVH) engineering. In the absence of combustion-related broadband masking, tonal and order-related phenomena originating from the electric machine, inverter switching, and high-speed reduction gearing have become clearly perceptible and, in many cases, acoustically dominant. Consequently, drivetrain noise in electric vehicles can no longer be assessed at component level alone; it must be understood as a coupled system response shaped by excitation mechanisms, structural dynamics, transfer paths, radiation efficiency, and ultimately human perception. This review adopts a source-to-perception perspective and consolidates the principal physical mechanisms governing vibro-acoustic behavior in integrated electric drive units. Electromagnetic force harmonics and torque ripple are discussed alongside transmission-error-driven gear mesh excitation, while bearing and shaft nonlinearities are examined in the context of high-speed operation. In addition, ancillary thermoacoustic and aerodynamic contributions are considered, reflecting the increasingly integrated packaging of modern e-axle architectures. On this mechanism-oriented basis, dominant excitation types are linked to frequency-appropriate modeling strategies, spanning electromagnetic force extraction, multibody drivetrain simulation, structural finite element analysis, transfer path analysis, and acoustic radiation prediction. Particular attention is given to workflow integration across domains. Finally, the paper identifies research challenges that predominantly arise at system level, including multi-source interaction effects, installation-dependent transfer-path variability, emergent resonances in assembled structures, manufacturing-induced tonal artifacts, and the still limited correlation between predicted vibration fields and perceived sound quality. Full article
(This article belongs to the Section Propulsion Systems and Components)

30 pages, 44004 KB  
Article
Visualising Relation Between Terminologies and HBIM Models for Historic Architecture
by Alberto Pettineo and Sandro Parrinello
Heritage 2026, 9(4), 140; https://doi.org/10.3390/heritage9040140 - 30 Mar 2026
Viewed by 1079
Abstract
Moving beyond the limits of purely geometric or descriptive documentation, the study conceives digital models as structured information systems capable of organising, in a coherent and queryable way, both the formal-typological and the interpretative-historical dimensions of heritage. The methodology is developed within the framework of the European Horizon MSCA project Hephaestus, which investigates cross-border Cultural Heritage Routes (CHRs) and historic fortification systems in the Adriatic and Baltic basins. The paper focuses on the Adriatic CHR through the selection, organisation, and interrelation of a distributed corpus of fortified architectures, articulated according to historical phases, territorial clusters, typological classes, and multilevel relationships. The study adopts an approach centred on HBIM models and ontological frameworks, implemented through complementary top-down and bottom-up processes. The results show the possibility of structuring HBIM-derived data within an ontology-based framework capable of linking, within a single information system, architectural elements, fortified systems, and territorial entities across heterogeneous case studies. The application to differentiated contexts highlights the ability of the models to adapt to different scales and levels of complexity, supporting querying, comparison, and multi-level interpretation of heritage. The variety of sources and contexts enables the methodology to be tested across heterogeneous historical and typological scenarios, strengthening its applicability and robustness within a multiscalar information structure. Full article

33 pages, 8145 KB  
Article
Multi-View Transformers for Structure-Aware HA–NA Drift Risk Scoring and Mutation Hotspot Mapping
by Pankaj Agarwal, Sumendra Yogarayan, Md. Shohel Sayeed and Rupesh Kumar Tipu
Viruses 2026, 18(4), 421; https://doi.org/10.3390/v18040421 - 30 Mar 2026
Viewed by 340
Abstract
Seasonal influenza A evolves quickly through mutations in haemagglutinin (HA) and neuraminidase (NA), which can reduce vaccine match and lower protection. Many sequence-only models do not link codon-level mutations to three-dimensional (3D) protein context and long-term evolutionary signals within one scoring framework. This study presents TRIAD-Influenza (TRIAD: Token–Residue–Integrated Architecture for Drift), a multi-view transformer that combines (i) codon- and residue-level sequence representations, (ii) structure-derived residue interaction features from predicted HA/NA models, and (iii) an embedding-space phylogeny that captures cluster and drift context. The pipeline curates more than 3×10⁵ paired HA/NA coding sequences from the NCBI Virus resource (2010–2024) using strict quality control and codon-aware alignment and predicts 3D structures for nearly all unique HA and NA proteins to build contact graphs and surface/stability descriptors. TRIAD-Influenza outputs a continuous, structure-aware risk score for each HA/NA pair and produces interpretable mutation hotspot maps using gradient saliency and a contact-weighted mutation risk index (CMRI). On rolling-origin temporal cross-validation and for a temporally held-out internal test window with strong class imbalance (∼3.4% high-risk), the model shows strong ranking performance (AUROC 0.89; AUPRC 0.44; Brier score = 0.069) while operating at surveillance speed (median latency 1.6 ms per HA/NA pair). External validation on independent GISAID/Nextstrain cohorts (2023–2024; 5000 isolates) preserves discrimination (AUROC 0.85–0.86). Predicted risk scores correlate with experimental haemagglutination inhibition (HI) antigenic distances (Spearman ρ up to ≈0.82 at the virus-aggregated level), and CMRI hotspots enrich known epitope and deep mutational scanning escape residues (odds ratios 2.7–3.6).
Overall, token–residue–phylogeny coupling enables rapid, structure-aware prioritisation of emerging influenza A HA/NA sequences and delivers compact hotspot maps for expert review and targeted experiments. Full article
(This article belongs to the Section General Virology)

35 pages, 859 KB  
Article
Digitalizing Urban Planning Governance: Empirical Evidence from Yerevan and a Multi-Layer Framework for Data-Driven City Management
by Khoren Mkhitaryan, Anna Sanamyan, Hasmik Hambardzumyan, Armenuhi Ordyan and Gor Harutyunyan
Urban Sci. 2026, 10(4), 183; https://doi.org/10.3390/urbansci10040183 - 29 Mar 2026
Abstract
The rapid digitalization of cities is reshaping urban planning practices; however, significant gaps persist between technological investments and institutional governance capacity, particularly in transition economies. This study investigates how digital tools can be systematically embedded within planning processes to improve decision-making quality, coordination, and administrative efficiency. Drawing on urban governance theory and an empirical implementation study conducted in Yerevan, Armenia (population 1.1 million) between 2019 and 2023, the paper develops and operationalizes a multi-layer governance framework that aligns digital instruments—including geospatial information systems, performance dashboards, and decision-support platforms—with strategic, tactical, and operational levels of city management. The framework is evaluated through institutional analysis of municipal policy documents, planning databases, and semi-structured interviews with planning officials. The results reveal substantial governance barriers, including data fragmentation, organizational silos, and limited digital capacity. Framework-based implementation produced measurable improvements: planning decision cycles shortened by 43%, GIS utilization increased from 18% to 68% of eligible projects, inter-agency data sharing rose sixfold, and annual cost savings of approximately $1.2 million were achieved through reduced duplication and faster approvals. By combining conceptual design with empirical validation, the study advances digital urban governance research and offers a transferable, evidence-based model for implementing resilient and efficient data-driven planning systems in resource-constrained contexts. Full article
(This article belongs to the Special Issue Advances in Urban Planning and the Digitalization of City Management)

18 pages, 1305 KB  
Perspective
Reintegrating the Human in Health: A Triadic Blueprint for Whole-Person Care in the Age of AI
by Azizi A. Seixas and Debbie P. Chung
Int. J. Environ. Res. Public Health 2026, 23(4), 426; https://doi.org/10.3390/ijerph23040426 - 29 Mar 2026
Abstract
Modern healthcare remains structurally and conceptually fragmented, with profound clinical and policy implications. At its root lies an ontological fracture: the prevailing biomedical model reduces patients to discrete biological systems (organs, biomarkers, and symptoms) detached from the psychological, social, and ecological contexts in which health and illness are experienced. This is compounded by epistemological fragmentation, where medical knowledge is compartmentalized into increasingly narrow specialties, limiting holistic understanding. These philosophical divisions manifest in downstream operational, informational, financial, and policy dysfunctions: duplicative testing, misaligned incentives, disconnected care pathways, and population health failures. To address these multilevel fractures, we propose a unified architecture grounded in three interlocking components. First, the Precision and Personalized Population Health (P3H) framework offers a principle-based realignment toward care that is integrated, personalized, proactive, and population-wide. P3H addresses the conceptual shortcomings of fragmented care by focusing on the full human trajectory across time, systems, and determinants. Second, General Purpose Technologies, including artificial intelligence, biosensors, mobile diagnostics, and multimodal data systems, enable the operationalization of whole-person care at scale, especially in low-resource settings. Third, the AI-WHOLE policy framework (Alignment, Integration, Workflow, Holism, Outcomes, Learning, and Equity) provides governance principles to guide ethical, equitable, and context-specific implementation. We argue that this triadic blueprint is particularly critical for Global South nations, where the lack of legacy infrastructure offers an opportunity for leapfrogging toward integrated, intelligent systems of care. 
Early models illustrate how policy-aligned, technology-enabled care rooted in whole-person principles can yield improvements in continuity, cost-efficiency, and chronic disease outcomes. This manuscript offers a systems-level strategy to overcome fragmentation and reimagine healthcare delivery, not only by refining clinical tools, but by redefining what it means to care for the human being in full. Full article
(This article belongs to the Special Issue Perspectives in Health Care Sciences)

25 pages, 4776 KB  
Article
FireMambaNet: A Multi-Scale Mamba Network for Tiny Fire Segmentation in Satellite Imagery
by Bo Song, Bo Li, Hong Huang, Zhiyong Zhang, Zhili Chen, Tao Yue and Yun Chen
Remote Sens. 2026, 18(7), 1021; https://doi.org/10.3390/rs18071021 - 29 Mar 2026
Abstract
Satellite remote sensing plays an essential role in wildfire monitoring due to its large-scale observation capability. However, fire targets in satellite imagery are typically extremely small, sparsely distributed, and embedded in complex backgrounds, making accurate segmentation highly challenging for existing methods. To address these challenges, this paper proposes a multi-scale Mamba-based network for tiny fire segmentation, named FireMambaNet. The network adopts a nested U-shaped encoder-decoder architecture, primarily consisting of three modules: the Cross-layer Gated Residual U-shaped module (CG-RSU), the Fire-aware Directional Context Modulation module (FDCM), and the Multi-scale Mamba Attention Module (M2AM). The CG-RSU, as the core building block, adaptively suppresses background redundancy and enhances weak fire responses by extracting multi-scale features through cross-layer gating. The FDCM explicitly enhances the network’s ability to perceive anisotropic expansion features of fire points, such as those along the wind direction and terrain orientation, by modeling multi-directional context. The M2AM employs a Mamba state-space model to suppress background interference through global context modeling during cross-scale feature fusion, while enhancing consistency among sparsely distributed tiny fire targets. In addition, experimental validation is conducted on two subsets of the Active Fire dataset that exhibit significant pixel-level sparsity: Oceania and Asia4. The results show that the proposed method significantly outperforms various mainstream CNN, Transformer, and Mamba baseline models on both datasets. It achieves an IoU of 88.51% and F1 score of 93.76% on the Oceania dataset, and an IoU of 85.65% and F1 score of 92.26% on the Asia4 dataset. Compared with the best-performing CNN baseline, IoU improves by 1.81% and 2.07%, respectively. 
Overall, the FireMambaNet demonstrates significant advantages in detecting tiny fire points in complex backgrounds. Full article
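The pixel-level IoU and F1 scores reported above can be illustrated with a minimal NumPy sketch; the masks below are toy stand-ins, not FireMambaNet outputs, and assume binary fire/non-fire segmentation maps.

```python
# Minimal sketch of pixel-level IoU and F1 for binary fire segmentation masks.
import numpy as np

def iou_f1(pred: np.ndarray, truth: np.ndarray):
    """pred, truth: boolean masks where True marks fire pixels."""
    tp = np.logical_and(pred, truth).sum()    # correctly detected fire pixels
    fp = np.logical_and(pred, ~truth).sum()   # false alarms
    fn = np.logical_and(~pred, truth).sum()   # missed fire pixels
    iou = tp / (tp + fp + fn)
    f1 = 2 * tp / (2 * tp + fp + fn)          # equivalently the Dice coefficient
    return iou, f1

# Toy 4x4 tile with a 2-pixel fire; the prediction misses one pixel.
truth = np.zeros((4, 4), bool); truth[1, 1] = truth[1, 2] = True
pred = np.zeros((4, 4), bool); pred[1, 1] = True
iou, f1 = iou_f1(pred, truth)
print(iou, f1)  # 0.5 and ~0.667
```

For extremely small targets like the fire points here, a single missed pixel moves these metrics sharply, which is why tiny-object segmentation scores are so sensitive.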

24 pages, 304 KB  
Article
Engineering Predictive Applications for Academic Track Selection and Student Performance for Future Study Planning in High School Education
by Ka Ian Chan, Jingchi Huang, Huiwen Zou and Patrick Pang
Appl. Sci. 2026, 16(7), 3286; https://doi.org/10.3390/app16073286 - 28 Mar 2026
Abstract
With the rapid development in data mining and learning analytics, integrating predictive analytics into educational data has become increasingly critical for supporting students’ learning trajectories. In many schooling systems, the academic tracks (such as Liberal Arts or Science) and the performance of junior high school students can substantially shape their subsequent university pathways and career planning. Despite the long-term impact of these decisions, academic track selections and the evaluation of students’ potential are often made without systematic and evidence-based guidance. Predictive computer applications can assist, but the training of accurate models and the selection of adequate features remain key challenges. This paper details our process of engineering such an application comprising two tasks based on 1357 real-world junior high school academic performance records. The first task applies a classification approach to predict students’ academic track orientation, while the second task employs a multi-output regression model to forecast students’ future academic performance in senior high school. Our approach shows that the stacking ensemble model achieved a classification accuracy of 85.76%, whereas the Bi-LSTM model with multi-head attention attained an overall R2 exceeding 82% in performance forecasting; both models demonstrated strong and reliable predictive capability. Moreover, the proposed approach provides inherent interpretability by decomposing predictions at the subject level. Feature importance analysis reveals how different academic subjects contribute variably to both academic track decisions and future academic performance, offering actionable insights for academic counselling and future study planning. 
By bridging predictive modelling with students’ educational and career planning needs, this study advances the practical application of educational data mining and provides support for evidence-based academic guidance and future career choices in real-world contexts. Full article
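The two tasks described above — a stacking ensemble for track classification and multi-output regression for grade forecasting — can be sketched with scikit-learn. This is a hedged illustration on synthetic subject-score data, not the authors' Bi-LSTM pipeline; the feature and target construction is entirely made up for demonstration.

```python
# Sketch of (i) stacking-ensemble track classification and (ii) multi-output
# regression for forecasting several senior-high subject grades, on synthetic data.
import numpy as np
from sklearn.ensemble import StackingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression, Ridge
from sklearn.multioutput import MultiOutputRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.normal(70, 10, size=(400, 6))          # 6 junior-high subject scores
# Toy track label: 0 = Liberal Arts, 1 = Science, from relative subject strength.
track = (X[:, :3].mean(axis=1) > X[:, 3:].mean(axis=1)).astype(int)
# Toy senior-high grades: 3 targets linearly related to junior scores plus noise.
grades = X @ rng.uniform(0.1, 0.3, size=(6, 3)) + rng.normal(0, 2, size=(400, 3))

Xtr, Xte, ytr, yte, gtr, gte = train_test_split(X, track, grades, random_state=0)

# Task 1: stacking ensemble (base learner + logistic-regression meta-learner).
clf = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=0))],
    final_estimator=LogisticRegression(),
)
acc = clf.fit(Xtr, ytr).score(Xte, yte)

# Task 2: one regressor per target subject via MultiOutputRegressor.
reg = MultiOutputRegressor(Ridge()).fit(Xtr, gtr)
r2 = reg.score(Xte, gte)                        # mean R^2 across the 3 targets
print(f"track accuracy={acc:.2f}  grade R2={r2:.2f}")
```

Per-target decomposition (one fitted estimator per subject in `reg.estimators_`) is what gives this setup the subject-level interpretability the abstract highlights.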
(This article belongs to the Special Issue Innovative Applications of Artificial Intelligence in Education)