Search Results (369)

Search Parameters:
Keywords = adaptive multilayer perceptron

29 pages, 6909 KB  
Article
MDE-UNet: A Physically Guided Asymmetric Fusion Network for Multi-Source Meteorological Data Lightning Identification
by Yihua Chen, Yuanpeng Han, Yujian Zhang, Yi Liu, Lin Song, Jialei Wang, Xinjue Wang and Qilin Zhang
Remote Sens. 2026, 18(7), 1027; https://doi.org/10.3390/rs18071027 - 29 Mar 2026
Viewed by 198
Abstract
Utilizing multi-source meteorological data for lightning identification is crucial for monitoring severe convective weather. However, several key challenges persist in this field: dimensional imbalance and modal competition among multi-source heterogeneous data, model training bias caused by the extreme sparsity of lightning samples, and an imbalance between false alarms and missed detections resulting from complex background noise. To address these challenges, this paper proposes a lightning identification network guided by physical priors and constrained by supervision. First, to tackle the issue of modal competition in fusing satellite (high-dimensional) and radar (low-dimensional) data, a physical prior-guided asymmetric radar information enhancement mechanism is introduced. This mechanism uses radar physical features as contextual guidance to selectively enhance the latent weak radar signatures. Second, at the architectural level, a multi-source multi-scale feature fusion module and a weighted sliding window–multilayer perceptron (MLP) enhanced decoding unit are constructed. The former achieves the coupling of multi-scale physical features at a 2 km grid scale through cross-level semantic alignment, building a highly consistent feature field that effectively improves the model’s ability to detect lightning signals. The latter leverages adaptive receptive fields and the nonlinear modeling capability of MLPs to effectively smooth spatially discrete noise, ensuring spatial continuity in the reconstructed results. Finally, to address the model bias caused by severe class imbalance between positive and negative samples—resulting from the extreme sparsity of lightning events—an asymmetrically weighted BCE-DICE loss function is designed. Its “asymmetric” characteristic is implemented by assigning different penalty weights to false-positive and false-negative predictions. 
This loss function balances pixel-level accuracy and inter-class equilibrium while imposing high-weight penalties on false-positive predictions, achieving synergistic optimization of feature enhancement and directional suppression. Experimental results show that the proposed method effectively increases the hit rate while substantially reducing the false alarm rate, enabling efficient utilization of multi-source data and high-precision identification of lightning strike areas. Full article
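A minimal sketch of the asymmetrically weighted BCE-DICE idea described above (illustrative only: the penalty weights `w_fp`/`w_fn` and the mixing factor `alpha` are hypothetical placeholders, not the authors' settings):

```python
import numpy as np

def asymmetric_bce_dice(y_true, y_prob, w_fp=2.0, w_fn=1.0, alpha=0.5, eps=1e-7):
    """Asymmetrically weighted BCE + soft Dice loss (sketch).

    w_fp scales the penalty on false-positive pixels (background predicted
    as lightning); w_fn scales the false-negative penalty. With w_fp > w_fn,
    false alarms are suppressed more aggressively than misses.
    """
    y_prob = np.clip(y_prob, eps, 1.0 - eps)
    # Per-pixel BCE with different weights on the two error directions.
    bce = -(w_fn * y_true * np.log(y_prob)
            + w_fp * (1.0 - y_true) * np.log(1.0 - y_prob))
    # Soft Dice term on the positive (lightning) class.
    inter = np.sum(y_true * y_prob)
    dice = 1.0 - (2.0 * inter + eps) / (np.sum(y_true) + np.sum(y_prob) + eps)
    return alpha * bce.mean() + (1.0 - alpha) * dice
```

With the default weights, a confident false positive costs more than an equally confident false negative, which is the "directional suppression" the abstract refers to.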

22 pages, 5375 KB  
Article
A Novel AAF-SwinT Model for Automatic Recognition of Abnormal Goat Lung Sounds
by Shengli Kou, Decao Zhang, Jiadong Yu, Yanling Yin, Weizheng Shen and Qiutong Cen
Animals 2026, 16(7), 1021; https://doi.org/10.3390/ani16071021 - 26 Mar 2026
Viewed by 232
Abstract
In abnormal goat lung sound recognition, high inter-class similarity and large intra-class variability pose significant challenges. To address this issue and improve recognition performance, we propose a deep learning model, AAF-SwinT, based on an improved Swin Transformer. The model replaces the original Swin Transformer self-attention module with Axial Decomposed Attention (ADA), modeling the temporal and frequency axes separately and integrating attention weights to mitigate inter-class feature similarity. Adaptive Spatial Aggregation for Patch Merging (ASAP) is designed to emphasize key time-frequency regions, and a Frequency-Aware Multi-Layer Perceptron (FAM) is introduced to model features across different frequency bands, further enhancing the discriminative ability for abnormal lung sounds. Experiments on a self-constructed goat lung sound dataset demonstrate that AAF-SwinT achieves an accuracy of 88.21%, outperforming existing mainstream Transformer-based models by 2.68–5.98%. Ablation studies further confirm the effectiveness of each proposed module, which together raise the accuracy of the baseline Swin Transformer from 85.53% to 88.21%. These results indicate that the proposed approach exhibits strong robustness and practical potential for abnormal lung sound recognition in goats, providing technical support for early diagnosis and management of respiratory diseases in large-scale goat farming. Full article
(This article belongs to the Special Issue Artificial Intelligence Applications for Veterinary Medicine)

44 pages, 11575 KB  
Article
GeoAI-Driven Land Cover Change Prediction Using Copernicus Earth Observation and Geospatial Data for Law-Compliant Territorial Planning in the Aosta Valley (Italy)
by Tommaso Orusa, Duke Cammareri and Davide Freppaz
Land 2026, 15(4), 533; https://doi.org/10.3390/land15040533 - 25 Mar 2026
Viewed by 807
Abstract
Mapping land cover, monitoring its changes, and simulating future alterations are essential tasks for sustainable land management. These processes enable accurate assessment of environmental impacts, support informed policymaking, and assist in the planning needed to mitigate risks related to urban expansion, deforestation, and climate change. This study proposes a GeoAI-based framework leveraging Multilayer Perceptron (MLP), a class of Artificial Neural Networks (ANNs), to predict land cover changes in the Aosta Valley region (NW Italy). The model uses Copernicus Earth Observation data, specifically Sentinel-1 and Sentinel-2 imagery, and is trained and validated on land cover maps derived from different time periods previously validated with ground truth data. The objective is to provide a predictive tool capable of simulating potential future landscape configurations, supporting proactive regional land use planning including regulatory constraints under the current land use plan. Model performance is evaluated using accuracy metrics. The land cover classification methodology follows established approaches in the scientific literature, adapted to the specific geomorphological characteristics of the Aosta Valley. To explore and visualize potential future land cover transitions, Sankey and chord diagrams are used in combination with zonal statistics and thematic plots. These provide detailed insights into the intensity, direction, and magnitude of landscape dynamics. Training data were stratified-sampled across the study area, covering a diverse set of land cover classes to ensure robustness and generalization of the MLP model. This GeoAI approach offers a scalable and replicable methodology for anticipating land cover dynamics, identifying vulnerable areas, and informing adaptive environmental management strategies at the regional scale, while simultaneously considering the latest urban planning regulations. Full article

41 pages, 1130 KB  
Article
A Weighted Average-Based Heterogeneous Datasets Integration Framework for Intrusion Detection Using a Hybrid Transformer–MLP Model
by Hesham Kamal and Maggie Mashaly
Technologies 2026, 14(3), 180; https://doi.org/10.3390/technologies14030180 - 16 Mar 2026
Viewed by 436
Abstract
In today’s digital era, cyberattacks pose a critical threat to networks of all scales, from local systems to global infrastructures. Intrusion detection systems (IDSs) are essential for identifying and mitigating such threats. However, existing machine learning-based IDS often suffer from low detection accuracy, heavy reliance on manual feature extraction, and limited coverage of attack categories. To address these limitations, we propose a modular, deployment-ready intrusion detection framework that integrates multiple heterogeneous datasets through a hybrid transformer–multilayer perceptron (Transformer–MLP) architecture. The system employs three parallel Transformer–MLP models, each specialized for a distinct dataset, whose probabilistic outputs are fused using a weighted decision-level strategy. Unlike traditional feature-level fusion, this strategy ensures module independence, eliminates the need for global retraining when adding new components, and provides seamless modular scalability. The framework accurately identifies twenty-one traffic categories, including one benign and twenty attack classes, derived from a unified mapping across multiple heterogeneous sources to ensure a consistent cross-dataset taxonomy. By combining advanced contextual representation learning with ensemble-based probabilistic fusion, the framework demonstrates high detection accuracy and practical applicability in real-world network environments. The Transformer module captures complex contextual dependencies, while the MLP performs final classification. Class imbalance is mitigated via adaptive synthetic sampling (ADASYN), synthetic minority over-sampling technique (SMOTE), edited nearest neighbor (ENN), and class weight adjustments. 
Empirical evaluation demonstrates the framework’s high effectiveness: for binary classification, it achieves 99.98% on CICIDS2017, 99.19% on NSL-KDD, and 99.98% on NF-BoT-IoT-v2; for two-stage multi-class classification, 99.56%, 99.55%, and 97.75%; and for one-phase multi-class classification, 99.73%, 99.07%, and 98.23%, respectively. Moreover, the framework enables real-time deployment with 4.8–6.9 ms latency, 9800–14,200 fps throughput, and 412–458 MB memory. These results outperform existing multi-dataset IDS approaches, highlighting the architectural effectiveness, robustness, and practical applicability of the proposed framework. Full article
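The weighted decision-level fusion described above can be sketched as a convex combination of each specialist model's class probabilities. This is a minimal illustration, not the paper's implementation; the per-model weights (e.g., validation accuracies) are hypothetical:

```python
import numpy as np

def fuse_decisions(prob_list, weights):
    """Weighted decision-level fusion of per-model class probabilities.

    prob_list: list of (n_samples, n_classes) probability arrays,
               one per dataset-specialized model.
    weights:   one scalar per model (hypothetical, e.g. validation accuracy).
    Returns the fused class prediction and the fused probability matrix.
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                        # normalize so weights sum to 1
    fused = sum(wi * p for wi, p in zip(w, prob_list))
    return fused.argmax(axis=1), fused
```

Because fusion happens on the outputs rather than the features, adding a fourth specialist model only means appending one more array and weight; no global retraining is required, which is the modular-scalability point the abstract makes.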

17 pages, 1953 KB  
Article
Early Detection and Classification of Gibberella Zeae Contamination in Maize Kernels Using SWIR Hyperspectral Imaging and Machine Learning
by Kaili Liu, Shiling Li, Wenbo Shi, Zhen Guo, Xijun Shao, Yemin Guo, Jicheng Zhao, Xia Sun, Nortoji A. Khujamshukurov and Fangling Du
Sensors 2026, 26(6), 1834; https://doi.org/10.3390/s26061834 - 14 Mar 2026
Viewed by 420
Abstract
Early-stage fungal contamination in maize kernels is difficult to identify visually and can cause severe quality and safety risks during storage and transportation. Short-wave infrared (SWIR) hyperspectral imaging offers a rapid, non-destructive approach by capturing chemical information related to water, proteins, and lipids. This study investigates the early detection and classification of Gibberella zeae contamination in maize kernels using SWIR hyperspectral imaging combined with machine learning. Two maize varieties were artificially inoculated and cultured under controlled conditions, followed by hyperspectral data collection over six contamination stages. Various preprocessing techniques, including standard normal variate (SNV), second derivative (SD), multiplicative scatter correction (MSC), and derivatives, were evaluated to enhance data quality. Feature wavelength selection was performed using the successive projections algorithm (SPA), competitive adaptive reweighted sampling (CARS), and uninformative variable elimination (UVE), significantly reducing redundancy and improving classification performance. Multiple models, including linear discriminant analysis (LDA), a multilayer perceptron (MLP), a support vector machine (SVM), a convolutional neural network (CNN), a long short-term memory (LSTM) network, and a hybrid architecture that integrates a CNN, an LSTM network, and a Transformer (abbreviated as CLT), were constructed for both binary (healthy vs. contaminated) and multiclass classification tasks. Specifically, the multiclass task consisted of six contamination stages corresponding to contamination times from Day 0 to Day 5. The best binary classification accuracy of 100% was achieved using SNV-preprocessed data with the MLP model. For the multiclass classification task, the SD-preprocessed LDA model reached a test accuracy of 92.56%.
These results demonstrate that, when combined with appropriate preprocessing, feature selection, and modeling, hyperspectral imaging is a powerful tool for the non-destructive, early-stage identification of fungal contamination in maize kernels, offering strong support for food safety and quality monitoring. Full article
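The SNV preprocessing mentioned above has a standard definition: each spectrum is centered and scaled by its own mean and standard deviation, which removes multiplicative scatter and baseline offsets. A minimal sketch:

```python
import numpy as np

def snv(spectra):
    """Standard normal variate transform.

    spectra: (n_samples, n_wavelengths) array. Each row (one spectrum)
    is standardized independently, so per-kernel scatter effects cancel.
    """
    spectra = np.asarray(spectra, dtype=float)
    mean = spectra.mean(axis=1, keepdims=True)
    std = spectra.std(axis=1, keepdims=True)
    return (spectra - mean) / std
```

After SNV, two spectra that differ only by a multiplicative scatter factor and an offset become identical, which is why it helps before classification.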
(This article belongs to the Section Smart Agriculture)

21 pages, 474 KB  
Article
Performance Evaluation of Machine Learning and Deep Learning Models for Credit Risk Prediction
by Irvine Mapfumo and Thokozani Shongwe
J. Risk Financial Manag. 2026, 19(3), 210; https://doi.org/10.3390/jrfm19030210 - 11 Mar 2026
Viewed by 578
Abstract
Credit risk prediction is essential for financial institutions to effectively assess the likelihood of borrower defaults and manage associated risks. This study presents a comparative analysis of deep learning architectures and traditional machine learning models on imbalanced credit risk datasets. To address class imbalance, we employ three resampling techniques: Synthetic Minority Over-sampling Technique (SMOTE), Edited Nearest Neighbors (ENN), and the hybrid SMOTE-ENN. We evaluate the performance of various models, including multilayer perceptron (MLP), convolutional neural network (CNN), long short-term memory (LSTM), gated recurrent unit (GRU), logistic regression, decision tree, support vector machine (SVM), random forest, adaptive boosting, and extreme gradient boosting. The analysis reveals that SMOTE-ENN combined with MLP achieves the highest F1-score of 0.928 (accuracy 95.4%) on the German dataset, while SMOTE-ENN with random forest attains the best F1-score of 0.789 (accuracy 82.1%) on the Taiwanese dataset. SHapley Additive exPlanations (SHAP) are employed to enhance model interpretability, identifying key drivers of credit default. These findings provide actionable guidance for developing transparent, high-performing, and robust credit risk assessment systems. Full article
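The oversampling half of the SMOTE-ENN pipeline above can be illustrated with a toy numpy sketch: synthesize minority samples by interpolating between a minority point and one of its nearest minority neighbours. This shows only the SMOTE interpolation step; the studies' actual pipelines also apply ENN cleaning, and `k` and the neighbour choice here are illustrative:

```python
import numpy as np

def smote_sketch(X_min, n_new, k=3, rng=None):
    """Toy SMOTE: generate n_new synthetic minority samples.

    Each synthetic point lies on the segment between a random minority
    sample and one of its k nearest minority-class neighbours.
    """
    rng = np.random.default_rng(rng)
    X_min = np.asarray(X_min, dtype=float)
    # Pairwise distances within the minority class; exclude self-matches.
    d = np.linalg.norm(X_min[:, None] - X_min[None, :], axis=2)
    np.fill_diagonal(d, np.inf)
    nn = np.argsort(d, axis=1)[:, :k]          # k nearest neighbours per point
    out = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        j = nn[i, rng.integers(min(k, len(X_min) - 1))]
        lam = rng.random()                     # interpolation factor in [0, 1)
        out.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.array(out)
```

Because every synthetic point is a convex combination of two real minority points, the oversampled class stays inside the original minority region rather than drifting into the majority class.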
(This article belongs to the Section Financial Technology and Innovation)

21 pages, 6399 KB  
Article
Future Hydrological Drought and Water Sustainability in the Sava River Basin: Machine Learning Projections Under Climate Change Scenarios
by Igor Leščešen, Milan Josić, Slobodan Gnjato, Ana M. Petrović and Zbyněk Bajtek
Sustainability 2026, 18(6), 2678; https://doi.org/10.3390/su18062678 - 10 Mar 2026
Viewed by 360
Abstract
Hydrological drought projections are crucial for climate-resilient water management; however, many basins lack calibrated process-based models that can readily be forced with climate scenarios. This study develops a purely data-driven framework to forecast the Streamflow Drought Index (SDI) from standardized meteorological indices and to assess future drought regimes under different emission pathways. We used a 60-year monthly record (1961–2020) of the Standardized Precipitation Index (SPI), the Standardized Temperature Index (STI), the Standardized Precipitation–Evapotranspiration Index (SPEI), and the SDI for the Sava River Basin. Correlation analysis showed that the SDI is primarily controlled by the short-lag SPI (0–1 months), whereas the STI and SPEI play a minor role. Several machine learning models were tested for one-month-ahead SDI prediction; a Random Forest (RF) with hyperparameters optimized by TimeSeriesSplit cross-validation, combined with linear-scaling bias correction, clearly outperformed XGBoost, Elastic Net, support vector regression, and a multilayer perceptron. On the independent test period (2009–2020), the RF achieved MAE ≈ 0.62, RMSE ≈ 0.83, NSE ≈ 0.49, and KGE ≈ 0.65. Using SPI/STI/SPEI projections from RCP2.6, RCP4.5, and RCP8.5, the RF produced monthly SDI projections for 2021–2050, revealing increasingly frequent, severe, and persistent streamflow droughts with higher emissions. The results demonstrate that carefully tuned ensemble tree models driven solely by standardized climate indices can provide skilful and interpretable SDI projections for drought risk assessment, supporting sustainable, climate-resilient water resources planning and adaptation in this transboundary basin. Full article
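The linear-scaling bias correction mentioned above is commonly implemented as a simple mean shift fitted on a calibration period; the sketch below shows the additive variant (the abstract does not state which form the authors used, so treat this as an assumption):

```python
import numpy as np

def linear_scaling(sim_cal, obs_cal, sim):
    """Additive linear-scaling bias correction (one common variant).

    Shifts simulated values so that their calibration-period mean matches
    the observed calibration-period mean, then applies the same shift to
    any simulation (e.g. a future projection).
    """
    shift = np.mean(obs_cal) - np.mean(sim_cal)
    return np.asarray(sim, dtype=float) + shift
```

For a standardized index like the SDI, which can take negative values, the additive form is the natural choice; multiplicative scaling is typically reserved for strictly positive variables such as precipitation.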

46 pages, 990 KB  
Review
Machine Learning for Outdoor Thermal Comfort Assessment and Optimization: Methods, Applications and Perspectives
by Giouli Mihalakakou, John A. Paravantis, Alexandros Romeos, Sonia Malefaki, Paraskevas N. Georgiou and Athanasios Giannadakis
Sustainability 2026, 18(5), 2600; https://doi.org/10.3390/su18052600 - 6 Mar 2026
Viewed by 362
Abstract
Urban environments face increasing thermal stress from climate change and the Urban Heat Island effect, with significant implications for livability, public health, and energy sustainability. Outdoor thermal comfort, defined as the state in which thermal conditions are perceived as acceptable, depends on interactions among meteorological, morphological, physiological, and behavioral factors. This review synthesizes the application of machine learning (ML) to outdoor thermal comfort assessment into a practice-oriented taxonomy. Research spans diverse climates and urban forms, using inputs across environmental and human domains. Supervised learning dominates. Regression approaches (linear regression, support vector regression, random forest, gradient boosting) and classification algorithms (decision trees, support vector machines, K-nearest neighbors, Naïve Bayes, random forest classifiers) are widely used to predict thermal indices such as the Physiological Equivalent Temperature and Universal Thermal Climate Index, or to classify subjective responses including thermal sensation, comfort, and acceptability. Unsupervised learning (clustering, principal component analysis) supports identification of microclimatic zones and perceptual clusters, while deep learning (multilayer perceptrons, convolutional and recurrent neural networks, generative adversarial networks) achieves superior accuracy for complex, high-dimensional, and spatiotemporal data. Algorithms such as random forests, support vector machines, and gradient boosting consistently show strong performance for both indices and subjective responses when integrating multi-domain inputs. Semi-supervised and reinforcement learning remain underexplored but offer promise for leveraging large-scale sensor data and enabling adaptive, real-time comfort management.
The review concludes with a roadmap emphasizing explainable artificial intelligence, scalable surrogate modeling, and integration with simulation-based optimization and parametric design tools. Full article

19 pages, 7373 KB  
Article
District-Level Dengue Early Warning Prediction System in Bangladesh Using Hybrid Explainable AI and Bayesian Deep Learning
by Md. Abu Bokkor Shiddik, Farzana Zannat Toshi, Sadia Yesmin and Md. Siddikur Rahman
Trop. Med. Infect. Dis. 2026, 11(3), 73; https://doi.org/10.3390/tropicalmed11030073 - 5 Mar 2026
Viewed by 637
Abstract
Dengue is a mosquito-borne viral disease that is predominantly endemic in tropical and subtropical countries. In Bangladesh, 321,179 dengue cases were reported in 2023, followed by 101,214 cases in 2024, which highlights a severe and ongoing public health challenge. Dengue transmission risks are shaped by climatic variability, rapid urbanization, socio-economic vulnerability, and healthcare strain. However, existing dengue surveillance models remain limited in their ability to capture district-level disparities in Bangladesh. This study aimed to develop a district-level dengue early warning system that integrates climatic, socio-demographic, economic, healthcare, and environmental determinants to generate accurate and interpretable predictions. We examined dengue cases across all 64 districts in Bangladesh from 2017 to 2024, integrating Directorate General of Health Services (DGHS) case records with climate, socio-demographic, economic, and healthcare indicators. Machine learning and deep learning approaches, including Multi-Layer Perceptron (MLP) and Convolutional Long Short-Term Memory (ConvLSTM), were combined with SHAP (Shapley Additive Explanations)-based explainable artificial intelligence. We also used Bayesian spatio-temporal models to capture spatial clustering, temporal dependence, and the lagged transmission effects of dengue. Dengue outbreaks peaked in September 2023, with Dhaka recording 113,233 cases. DENV-4 (Dengue Virus type 4) emerged in 2022, accounting for 27% of infections in 2023. Climate was the strongest predictor of dengue transmission (humidity SHAP = 0.314; minimum temperature SHAP = 0.146; rainfall RR = 1.303). Poverty (SHAP = 0.193) and healthcare capacity (nursing/midwifery density SHAP = 0.073) were also substantial contributors to dengue prediction.
The MLP model achieved the best yearly performance (accuracy = 0.93; ROC-AUC = 0.99), ConvLSTM was the best model in monthly prediction (recall = 0.88; ROC-AUC = 0.81), and Bayesian BYM2_RW2 with lagged effects improved predictive fit (DIC = 3671.055). Our integrated framework delivers transparent, interpretable predictions and district-level early warnings, supporting adaptive dengue outbreak preparedness and resource allocation in Bangladesh. Full article
(This article belongs to the Special Issue Urban Vector-Borne Pathogens in Tropical Cities Under Climate Change)

16 pages, 8115 KB  
Article
Fusing Deep Learning and Gradient Boosting for Robust Minute-Level Atmospheric Visibility Nowcasting
by Yuguo Ni, Chenbo Xie, Zichen Zhang and Jianfeng Chen
Geosciences 2026, 16(3), 104; https://doi.org/10.3390/geosciences16030104 - 3 Mar 2026
Viewed by 339
Abstract
Atmospheric visibility nowcasting is vital for safety-critical operations but remains challenging due to complex atmospheric dynamics. We propose a compact stacking ensemble merging a multilayer perceptron (MLP) and gradient-boosted regression trees (GBRT). The model, trained on seven months of minute-scale resolution data with a variability-adaptive filter to suppress sensor noise, employs cross-validation. Results demonstrate that the ensemble achieves its peak performance in the operationally critical low-visibility regime (V < 5 km). This range is particularly significant as it encompasses the Category I and II (CAT I/II) operational thresholds defined by the World Meteorological Organization (WMO) for aviation and surface transportation safety. In this regime, the ensemble yields an R2 of 0.82 and an MAE≈385 m, significantly outperforming single learners during rapid weather transitions. Conversely, in the high-visibility regime (V > 20 km), the explanatory power decreases (R2 of 0.46) due to inherent forward-scattering sensor uncertainties and low aerosol concentrations. Despite these range-specific physical limitations, the model maintains high robustness with narrowly centered residuals. This efficient approach, utilizing cost-effective in situ sensors, is highly suitable for airport and road-weather applications and offers strong potential for multi-site scalability. Full article
(This article belongs to the Section Climate and Environment)

27 pages, 4167 KB  
Article
OptiNeRF: A Spatially Optimized Neural Rendering Framework for Complex Scene Reconstruction
by Xinyuan Gu, Yanbo Chang, Junyue Xia, Yue Yu, Zhen Tian and Junming Chen
Mathematics 2026, 14(5), 842; https://doi.org/10.3390/math14050842 - 1 Mar 2026
Viewed by 364
Abstract
Neural rendering techniques aim to generate photorealistic images and accurate 3D geometries from multi-view images but often struggle with efficiency and geometric consistency in complex or dynamic scenes. Optimized Neural Radiance Fields (OptiNeRF) addresses these challenges through several innovations. It uses spatially optimized sampling to focus on points near object surfaces, reducing computation while improving precision. Leveraging the pre-trained Marigold model, it generates depth and normal maps as geometric priors. Sampled points are processed through a hybrid network combining an MLP and a multi-resolution feature grid (MRF), capturing fine details and large-scale structures. To handle varying illumination and complex materials, OptiNeRF introduces adaptive volume rendering (AVR), dynamically adjusting light transparency and scattering. A progressive sampling strategy further focuses computation on regions with high geometric complexity. The loss function incorporates RGB, normal, depth, boundary, and lighting optimization losses, with adaptive weight modulation for geometric priors, ensuring both visual fidelity and geometric consistency even with inaccurate depth/normal estimates. Experiments on dynamic scenes show strong performance, with a PSNR of 32.10 dB, SSIM of 0.936, Chamfer distance of 1.28 × 10−3, training time of 12 h, and rendering speed of 25 FPS, demonstrating high geometric accuracy, realistic rendering, and computational efficiency over conventional methods. Full article
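The volume rendering that OptiNeRF builds on follows the classical NeRF compositing quadrature, sketched below; the paper's adaptive volume rendering (AVR) additionally modulates transparency and scattering dynamically, which this minimal standard version does not show:

```python
import numpy as np

def render_ray(sigmas, colors, deltas):
    """Classical NeRF volume-rendering quadrature along one ray.

    alpha_i  = 1 - exp(-sigma_i * delta_i)      (per-sample opacity)
    T_i      = prod_{j<i} (1 - alpha_j)          (accumulated transmittance)
    pixel    = sum_i T_i * alpha_i * c_i         (front-to-back compositing)
    """
    sigmas = np.asarray(sigmas, dtype=float)
    deltas = np.asarray(deltas, dtype=float)
    colors = np.asarray(colors, dtype=float)
    alphas = 1.0 - np.exp(-sigmas * deltas)
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alphas[:-1])))
    weights = trans * alphas
    return weights @ colors, weights
```

The `weights` array is also what surface-focused sampling exploits: samples near an opaque surface receive almost all the weight, so concentrating samples there (as OptiNeRF's spatially optimized sampling does) wastes far less computation on empty space.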
(This article belongs to the Special Issue Intelligent Mathematics and Applications)

22 pages, 14552 KB  
Article
Shallow Water Bathymetry Inversion Method Based on Spatiotemporal Coupling Correlation Adaptive Spectroscopy
by Jiaxing Du, Houpu Li, Shuaidong Jia, Gaixiao Li, Jian Dong, Bing Liu and Shaofeng Bian
Remote Sens. 2026, 18(5), 741; https://doi.org/10.3390/rs18050741 - 28 Feb 2026
Viewed by 314
Abstract
Shallow water bathymetry data underpins maritime shipping and marine resource survey/protection, but its accuracy is constrained by water heterogeneity and spectral interference. To address these issues, this study proposes a Spatio-Temporal Coupling and Correlation Adaptive Spectral (STCCAS) inversion method, integrating four machine learning models: Random Forest (RF), XGBoost, Support Vector Regression (SVR), and Multi-Layer Perceptron (MLP). Experiments were conducted in Tampa Bay’s nearshore waters, using Sentinel-2 imagery and Airborne LiDAR Bathymetry (ALB) data. Core to STCCAS, the Temporal Stability Index (TSI) quantifies spectral temporal consistency, while the Normalized Difference Turbidity Index (NDTI) characterizes water turbidity; together, the two indices form a dual-scale “spectral reliability-turbidity stability” evaluation system for pixel-level feature quality assessment. Coupled with spectral fusion features and spatial location, these indices drive pixel-level feature reliability weighting and dynamic filtering to build a water-condition-adaptive input set. A comparative analysis of inversion performance with the original spectral features (OSFs) versus the STCCAS method confirms that STCCAS significantly boosts accuracy. XGBoost outperforms the other models, achieving a coefficient of determination (R2) of 0.93, root mean square error (RMSE) of 0.16 m, and mean absolute error (MAE) of 0.12 m. STCCAS breaks the limitations of traditional fixed feature combinations, effectively adapting to nearshore water heterogeneity. It provides a novel method for high-frequency, high-precision shallow water bathymetry inversion, with important practical value for marine environmental monitoring and resource management. Full article
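The NDTI used above is commonly defined as a normalized difference of red and green surface reflectance; turbid water scatters relatively more in the red band. The exact band choice in the paper may differ, so this is a sketch of the conventional definition only:

```python
import numpy as np

def ndti(red, green):
    """Normalized Difference Turbidity Index (conventional form).

    NDTI = (Red - Green) / (Red + Green), computed on surface reflectance;
    higher values indicate more turbid water.
    """
    red = np.asarray(red, dtype=float)
    green = np.asarray(green, dtype=float)
    return (red - green) / (red + green)
```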

42 pages, 1422 KB  
Article
Exploring Handwriting-Based Biomarkers for Alzheimer’s Disease: Identifying Discriminative Features and Tasks to Enhance Diagnostic Accuracy
by Cansu Akyürek Anacur, Asuman Günay Yılmaz and Bekir Dizdaroğlu
Diagnostics 2026, 16(5), 697; https://doi.org/10.3390/diagnostics16050697 - 26 Feb 2026
Viewed by 353
Abstract
Background/Objectives: This study proposes a comprehensive classification framework for the automatic detection of Alzheimer’s disease using handwriting data. An enriched feature space is constructed by combining 18 baseline features extracted from raw handwriting signals with 30 additional features derived from established handwriting analysis studies, resulting in a total of 48 features. To enhance clinical practicality, a task reduction analysis is conducted by comparing the full dataset containing 25 handwriting tasks with a reduced dataset comprising 14 selected tasks. Methods: The proposed framework employs a two-stage evaluation strategy involving four feature selection methods (Random Forest Feature Importance, Extreme Gradient Boosting Feature Importance, L1 Regularization and Recursive Feature Elimination), three normalization techniques (Unnormalized, Min–Max and Z-Score), and five baseline machine learning classifiers (Random Forest, Logistic Regression, Multilayer Perceptron, XGBoost and Support Vector Machines). In the second stage, a dynamic ensemble learning strategy is introduced, where the most effective classifiers are adaptively selected for each cross-validation fold and integrated using soft and hard voting schemes. Results: The experimental results demonstrate that reducing the number of tasks leads to an improvement in average classification accuracy from 79.47% to 81.03%, while simultaneously decreasing training time and memory consumption by approximately 40% and 35%, respectively. The highest classification performance, achieving an accuracy of 94.20%, is obtained using the Hard Ensemble combined with L1-based feature selection. Conclusions: These findings highlight that the joint use of enriched feature representations, task reduction, and dynamic ensemble learning provides an effective and computationally efficient solution for handwriting-based Alzheimer’s disease detection. Full article
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)
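The per-fold dynamic selection plus hard voting described above can be sketched as follows. The base-model predictions, the 0.7 accuracy cutoff, and the tie-breaking rule are illustrative assumptions; in the study the base models are RF, Logistic Regression, MLP, XGBoost, and SVM fitted on each cross-validation fold.

```python
import numpy as np

y_true = np.array([0, 1, 1, 0, 1, 0, 1, 1, 0, 0])

# Synthetic per-classifier predictions for one fold: a perfect model,
# an 80%-accurate model (wrong at indices 0 and 5), and a bad model.
mask = np.arange(10) % 5 == 0
preds = np.stack([
    y_true,
    np.where(mask, 1 - y_true, y_true),
    1 - y_true,
])

# Dynamic selection: keep only models whose fold accuracy beats a cutoff.
accs = (preds == y_true).mean(axis=1)
selected = preds[accs >= 0.7]

# Hard voting: strict majority of the selected models (ties -> class 0).
votes = selected.sum(axis=0)
hard_pred = (votes * 2 > selected.shape[0]).astype(int)
print("ensemble accuracy:", (hard_pred == y_true).mean())
```

Soft voting would instead average the models’ class probabilities before taking the argmax; repeating the selection per fold is what makes the ensemble “dynamic”.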

19 pages, 5129 KB  
Article
High-Resolution Contact Localization and Three-Axis Force Estimation with a Sparse Strain-Node Tactile Interface Device
by Yanyan Wu, Hanhan Wu, Yifei Han, Yi Ding, Bosheng Cao and Chongkun Xia
Sensors 2026, 26(4), 1378; https://doi.org/10.3390/s26041378 - 22 Feb 2026
Viewed by 390
Abstract
High-resolution contact localization and three-axis force estimation are crucial for human–robot interaction and precision manipulation, yet the sensing area is limited by channel density and wiring cost. Sparse strain readout makes joint estimation of location and three-axis force challenging due to cross-axis coupling and nonlinear responses, while dense arrays or extensive calibration increase complexity. We present a sparse strain-node tactile interface device (SSTID) whose three-module layout is optimized via particle swarm optimization to maximize informative response overlap, enabling contact localization (x,y) and three-axis force (Fx,Fy,Fz) estimation using only nine strain channels. We further propose a strain-node contact-state decoding framework (SCDF) implemented with a lightweight multilayer perceptron and trained via a two-stage sim-to-real strategy, including FEM pretraining followed by few-shot real-data adaptation. Experiments demonstrate accurate contact-state decoding with full-workspace characterization, supporting low-cost and scalable deployment of sparse tactile interfaces. Full article
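A minimal numpy forward pass shows the shape of the decoding problem: nine strain channels mapped to a contact location and three-axis force. The layer sizes and random weights are illustrative only; in the paper’s two-stage strategy the weights would first be fitted on FEM-simulated data and then fine-tuned with few-shot real samples.

```python
import numpy as np

rng = np.random.default_rng(42)

def mlp_forward(strain, params):
    """Map 9 strain-channel readings to (x, y, Fx, Fy, Fz)."""
    h = strain
    for W, b in params[:-1]:
        h = np.maximum(0.0, h @ W + b)    # ReLU hidden layers
    W, b = params[-1]
    return h @ W + b                       # linear regression head

# Layer sizes are assumptions: 9 channels -> 64 -> 64 -> 5 outputs.
sizes = [9, 64, 64, 5]
params = [(rng.normal(0.0, 0.1, (m, n)), np.zeros(n))
          for m, n in zip(sizes[:-1], sizes[1:])]

batch = rng.normal(size=(4, 9))            # 4 simulated contact events
out = mlp_forward(batch, params)
print(out.shape)                           # one (x, y, Fx, Fy, Fz) per event
```

Keeping the network this small is what makes the decoder cheap enough for the low-cost, scalable deployment the abstract targets.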

23 pages, 4825 KB  
Article
Degradation-Aware Dynamic Kernel Generation Network for Hyperspectral Super-Resolution
by Huadong Liu, Haifeng Liang and Qian Wang
Sensors 2026, 26(4), 1362; https://doi.org/10.3390/s26041362 - 20 Feb 2026
Viewed by 407
Abstract
To address the difficulty of reconstructing high-resolution hyperspectral images under dynamic degradation characteristics, the poor adaptability of traditional static degradation models, and oversimplified noise modeling, this paper proposes a degradation-aware dynamic Fourier network (DADFN) for hyperspectral super-resolution. The method employs a dual-channel split module to decouple and encode spectral and spatial degradation information, realizes independent mapping of spectral and spatial features via a multi-layer perceptron module, and integrates a spectral–spatial dynamic cross-attention fusion module to generate 3D dynamic blur kernels tailored to different bands and spatial positions. A multi-scale spectral–spatial collaborative constraint (MSSCC) loss function is designed to ensure the coordinated optimization of modeling rationality, spectral continuity, and spatial detail fidelity. Experiments on the CAVE and Harvard benchmark datasets demonstrate that DADFN outperforms the baseline methods on all evaluation metrics, confirming its robustness in real-world complex degradation scenarios. The method offers a novel solution that balances physical interpretability and performance for hyperspectral image super-resolution and holds significant value for remote sensing monitoring, precision agriculture, and related applications. Full article
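A toy numpy sketch illustrates what a spatially varying (“dynamic”) blur kernel means in practice. The kernel generator here is just a random linear map followed by a softmax, a deliberate stand-in for the paper’s cross-attention fusion module; the degradation code, kernel size, and single-band image are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

def dynamic_kernels(code, k=3):
    """Turn a per-pixel degradation code into a normalized k x k blur
    kernel. Softmax guarantees each kernel sums to one."""
    H, W, d = code.shape
    proj = rng.normal(0.0, 0.5, (d, k * k))        # stand-in generator
    logits = code.reshape(-1, d) @ proj
    kern = np.exp(logits - logits.max(axis=1, keepdims=True))
    kern /= kern.sum(axis=1, keepdims=True)
    return kern.reshape(H, W, k, k)

def apply_dynamic_blur(img, kernels):
    """Spatially varying blur: every pixel is filtered by its own kernel."""
    k = kernels.shape[-1]
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            patch = padded[i:i + k, j:j + k]
            out[i, j] = (patch * kernels[i, j]).sum()
    return out

img = rng.random((8, 8))            # one band of a toy hyperspectral cube
code = rng.normal(size=(8, 8, 4))   # per-pixel degradation encoding
kern = dynamic_kernels(code)
blurred = apply_dynamic_blur(img, kern)
print(blurred.shape)
```

In DADFN the kernels additionally vary per band (3D), so each spectral channel gets its own spatially adaptive degradation model instead of one fixed blur for the whole cube.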
