Search Results (8,905)

Search Parameters:
Keywords = error distribution

24 pages, 2442 KB  
Article
Development of a Novel Weighted Maximum Likelihood-Based Parameter Estimation Technique for Improved Annual Energy Production Estimation of Wind Turbines
by Woobeom Han, Kanghee Lee, Jonghwa Kim and Seungjae Lee
Energies 2025, 18(19), 5265; https://doi.org/10.3390/en18195265 - 3 Oct 2025
Abstract
Conventional statistical models consider all wind speed ranges as equally important, causing significant prediction errors, particularly in wind speed intervals that contribute the most to wind turbine power generation. To overcome this limitation, this study proposes a novel parameter estimation method—Weighted Maximum Likelihood Estimation (WMLE)—to improve the accuracy of annual energy production (AEP) predictions for wind turbine systems. The proposed WMLE incorporates wind-speed-specific weights based on power generation contribution, along with a weighting amplification factor (β), to construct a power-oriented wind distribution model. WMLE performance was validated by comparing four offshore wind farm candidate sites in Korea—each exhibiting distinct wind characteristics. Goodness-of-fit evaluations against conventional wind statistical models demonstrated the improved distribution fitting performance of WMLE. Furthermore, WMLE consistently achieved relative AEP errors within ±2% compared to those of time-series-based methods. A sensitivity analysis identified the optimal β value, which narrowed the distribution fit around high-energy-contributing wind speeds, thereby enhancing the reliability of AEP predictions. In conclusion, WMLE provides a practical and robust statistical framework that bridges the gap between statistical distribution fitting and time-series-based methods for AEP. Moreover, the improved accuracy of AEP predictions enhances the reliability of wind farm feasibility assessments, reduces investment risk, and strengthens financial bankability.
(This article belongs to the Section B: Energy and Environment)
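A minimal sketch of the weighted-likelihood idea, assuming a Weibull wind-speed model, a caller-supplied turbine power curve, and a simple power-contribution weighting amplified by β; the paper's exact weight definition is not reproduced here.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min

def wmle_weibull(v, power_curve, beta=1.0):
    # Weights proportional to each observation's power contribution,
    # amplified by beta (beta = 0 recovers ordinary MLE).
    w = np.asarray(power_curve(v), dtype=float) ** beta
    w = w / w.mean()  # normalize so the weights average to one

    def neg_weighted_loglik(params):
        k, c = params
        if k <= 0 or c <= 0:
            return np.inf
        return -np.sum(w * weibull_min.logpdf(v, k, scale=c))

    res = minimize(neg_weighted_loglik, x0=[2.0, float(np.mean(v))],
                   method="Nelder-Mead")
    return res.x  # fitted Weibull shape k and scale c
```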
32 pages, 4520 KB  
Article
Beyond the Gold Standard: Linear Regression and Poisson GLM Yield Identical Mortality Trends and Death Counts for COVID-19 in Italy: 2021–2025
by Marco Roccetti and Giuseppe Cacciapuoti
Computation 2025, 13(10), 233; https://doi.org/10.3390/computation13100233 - 3 Oct 2025
Abstract
While it is undisputed that Poisson GLMs represent the gold standard for counting COVID-19 deaths, recent studies have analyzed the seasonal growth and decline trends of these deaths in Italy using a simple segmented linear regression. They found that, despite an overall decreasing trend throughout the entire period analyzed (2021–2025), rising mortality trends from COVID-19 emerged in all summers and winters of the period, though they were more pronounced in winter. The technical reasons for the general unsuitability of using linear regression for the precise counting of deaths are well-known. Nevertheless, the question remains whether, under certain circumstances, the use of linear regression can provide a valid and useful tool in a specific context, for example, to highlight the slopes of seasonal growth/decline in deaths more quickly and clearly. Given this background, this paper presents a comparison between the use of linear regression and a Poisson GLM with the aforementioned death data, leading to the following conclusions. Appropriate statistical hypothesis testing procedures have demonstrated that the conditions of a normal distribution of residuals, their homoscedasticity, and the lack of autocorrelation were essentially guaranteed in this particular Italian case (weekly COVID-19 deaths in Italy, from 2021 to 2025) with very rare exceptions, thus ensuring the acceptable performance of linear regression. Furthermore, the development of a Poisson GLM definitively confirmed a strong agreement between the two models in identifying COVID-19 mortality trends. This was supported by a Kolmogorov–Smirnov test, which found no statistically significant difference between the slopes calculated by the two models. Both the Poisson and the linear model also demonstrated a comparably high accuracy in counting COVID-19 deaths, with MAE values of 62.76 and a comparable 88.60, respectively. Based on an average of approximately 6300 deaths per period, this translated to a percentage error of just 1.15% for the Poisson and only a slightly higher 1.48% for the linear model.
(This article belongs to the Section Computational Biology)
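A small self-contained comparison in the spirit of the paper, fitting an ordinary least-squares line and a Poisson GLM to the same weekly counts; the data below are synthetic, not the Italian surveillance series.

```python
import numpy as np
import statsmodels.api as sm

# synthetic weekly death counts with a linear-in-time trend (illustrative only)
rng = np.random.default_rng(42)
weeks = np.arange(52.0)
deaths = rng.poisson(300 - 2.5 * weeks)

X = sm.add_constant(weeks)
ols = sm.OLS(deaths, X).fit()
glm = sm.GLM(deaths, X, family=sm.families.Poisson()).fit()

for name, fit in [("OLS", ols), ("Poisson GLM", glm)]:
    mae = np.mean(np.abs(deaths - fit.fittedvalues))
    print(f"{name}: slope-term p-value {fit.pvalues[1]:.3g}, MAE {mae:.1f}")
```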
16 pages, 1851 KB  
Article
A Method for Determining Medium- and Long-Term Renewable Energy Accommodation Capacity Considering Multiple Uncertain Influencing Factors
by Tingxiang Liu, Libin Yang, Zhengxi Li, Kai Wang, Pinkun He and Feng Xiao
Energies 2025, 18(19), 5261; https://doi.org/10.3390/en18195261 - 3 Oct 2025
Abstract
Amid the global energy transition, rapidly expanding wind and solar installations challenge power grids with variability and uncertainty. We propose an adaptive framework for renewable energy accommodation assessment under high-dimensional uncertainties, integrating three innovations: (1) Response Surface Methodology (RSM) is adopted for the first time to construct a closed-form polynomial of renewable energy accommodation in terms of resource hours, load, installed capacity, and transmission limits, enabling millisecond-level evaluation; (2) LASSO-regularized RSM suppresses high-dimensional overfitting by automatically selecting key interaction terms while preserving interpretability; (3) a Bayesian kernel density extension yields full posterior distributions and confidence intervals for renewable energy accommodation in small-sample scenarios, quantifying risk. A case study on a renewable-rich grid in Northwest China validates the framework: two-factor response surface models achieve R² > 90% with <0.5% mean absolute error across ten random historical cases; LASSO regression keeps errors below 1.5% in multidimensional space; Bayesian density intervals encompass all observed values. The framework flexibly switches between deterministic, sparse, or probabilistic modes according to data availability, offering efficient and reliable decision support for generation-transmission planning and market clearing under multidimensional uncertainty.
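A compact illustration of the LASSO-regularized response-surface step: a degree-2 polynomial surface with L1 regularization to prune interaction terms. The four features are synthetic stand-ins for resource hours, load, installed capacity, and transmission limits; the alpha value and data are placeholders.

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler

rng = np.random.default_rng(7)
X = rng.uniform(size=(300, 4))  # synthetic planning factors scaled to [0, 1]
y = (0.6 * X[:, 0] + 0.3 * X[:, 1] * X[:, 2] - 0.2 * X[:, 3] ** 2
     + rng.normal(scale=0.01, size=300))

# degree-2 response surface; LASSO zeroes out unneeded interaction terms
rsm = make_pipeline(StandardScaler(), PolynomialFeatures(degree=2),
                    Lasso(alpha=1e-3))
rsm.fit(X, y)
print(f"R^2 on training data: {rsm.score(X, y):.3f}")
```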
12 pages, 7595 KB  
Article
Predictive Modeling of Shear Strength for Lotus-Type Porous Copper Bonded to Alumina
by Sang-Gyu Choi, Sangwook Kim, Jinkwan Lee, Keun-Soo Kim and Soongkeun Hyun
Metals 2025, 15(10), 1103; https://doi.org/10.3390/met15101103 - 3 Oct 2025
Abstract
This study investigates the shear strength of lotus-type unidirectional porous copper bonded to alumina substrates using the Direct Bonded Copper (DBC) process. Porous copper specimens with various porosities (38.7–50.9%) and pore sizes (150–800 μm) were fabricated and joined to alumina discs. Shear testing revealed that both porosity and pore size significantly affect the interfacial strength. While higher porosity led to reduced shear strength, larger pore sizes enhanced the maximum shear strength owing to increased local contact areas and crack coalescence in the alumina substrate. Fractographic analysis using optical microscopy and SEM-EDS confirmed that failure mainly occurred in the alumina, with local fracture associated with pore distribution and size. To improve strength prediction, a modified model was proposed, reducing the error from 12.3% to 7.5% and increasing the coefficient of determination (R²) from 0.43 to 0.74. These findings highlight the necessity of considering both porosity and pore size when predicting the shear strength of porous copper/alumina DBC joints, and they provide important insights for optimizing metal structures in metal–ceramic bonding for high-performance applications.
(This article belongs to the Special Issue Fracture Mechanics of Metallic Materials—the State of the Art)
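The abstract does not give the modified model's form, so the sketch below fits only a generic load-bearing-area relation, sigma = sigma0 * (1 - p)**n, to hypothetical data; the paper's model additionally accounts for pore size, which is not reproduced here.

```python
import numpy as np
from scipy.optimize import curve_fit

def shear_strength(p, sigma0, n):
    # generic porosity-strength relation: strength scales with solid fraction
    return sigma0 * (1.0 - p) ** n

porosity = np.array([0.387, 0.43, 0.47, 0.509])  # hypothetical fractions
strength = np.array([33.5, 29.8, 26.4, 23.1])    # hypothetical MPa values
(sigma0, n), _ = curve_fit(shear_strength, porosity, strength, p0=[60.0, 1.5])
print(f"sigma0 = {sigma0:.1f} MPa, exponent n = {n:.2f}")
```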
26 pages, 12288 KB  
Article
An Optimal Scheduling Method for Power Grids in Extreme Scenarios Based on an Information-Fusion MADDPG Algorithm
by Xun Dou, Cheng Li, Pengyi Niu, Dongmei Sun, Quanling Zhang and Zhenlan Dou
Mathematics 2025, 13(19), 3168; https://doi.org/10.3390/math13193168 - 3 Oct 2025
Abstract
With the large-scale integration of renewable energy into distribution networks, the intermittency and uncertainty of renewable generation pose significant challenges to the voltage security of the power grid under extreme scenarios. To address this issue, this paper proposes an optimal scheduling method for power grids under extreme scenarios, based on an improved Multi-Agent Deep Deterministic Policy Gradient (MADDPG) algorithm. By simulating potential extreme scenarios in the power system and formulating targeted secure scheduling strategies, the proposed method effectively reduces trial-and-error costs. First, a time series clustering method is used to construct the extreme-scenario dataset based on the principle of maximizing scenario differences. Second, a mathematical model of optimal power grid dispatch is constructed with the objective of ensuring voltage security, with explicit constraints and environmental settings. Third, an interactive scheduling model of distribution network resources is designed based on a multi-agent algorithm, including the construction of an agent state space, an action space, and a reward function. Fourth, an improved MADDPG multi-agent algorithm based on specific information fusion is proposed, and a hybrid optimization experience sampling strategy is developed to enhance the training efficiency and stability of the model. Finally, the effectiveness of the proposed method is verified by case studies of a distribution network system.
(This article belongs to the Special Issue Artificial Intelligence and Game Theory)
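One possible shape for the voltage-security term of the reward function described above; the secure band and penalty weight are placeholders, not the paper's values.

```python
import numpy as np

def voltage_security_reward(v_pu, v_min=0.95, v_max=1.05, penalty=10.0):
    # negative sum of per-bus voltage violations outside the secure band
    v_pu = np.asarray(v_pu, dtype=float)
    low = np.clip(v_min - v_pu, 0.0, None)
    high = np.clip(v_pu - v_max, 0.0, None)
    return -penalty * float(np.sum(low + high))

print(voltage_security_reward([1.00, 0.93, 1.07]))  # -> -0.4
```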
19 pages, 1316 KB  
Article
A Comprehensive Model for Predicting Water Advance and Determining Infiltration Coefficients in Surface Irrigation Systems Using Beta Cumulative Distribution Function
by Amir Panahi, Amin Seyedzadeh, Miguel Ángel Campo-Bescós and Javier Casalí
Water 2025, 17(19), 2880; https://doi.org/10.3390/w17192880 - 2 Oct 2025
Abstract
Surface irrigation systems are among the most common yet often inefficient methods due to poor design and management. A key factor in optimizing their design is the accurate prediction of the coefficients of the water advance and infiltration relationships. This study introduces a novel model based on the Beta cumulative distribution function for predicting water advance and estimating infiltration coefficients in surface irrigation systems. Traditional methods, such as the two-point approach, rely on limited data from only the midpoint and endpoint of the field, often resulting in insufficient accuracy and non-physical outcomes under heterogeneous soil conditions. The proposed model enhances predictive flexibility by incorporating the entire advance dataset and integrating the midpoint as a constraint during optimization, thereby improving the accuracy of advance curve estimation and subsequent infiltration coefficient determination. Evaluation using field data from three distinct sites (FS, HF, WP) across 10 irrigation events demonstrated the superiority of the proposed model over the conventional power advance method. The new model achieved average RMSE, MAPE, and R² values of 0.790, 0.109, and 0.997, respectively, for advance estimation. For infiltration prediction, it yielded an average error of 12.9% in total infiltrated volume—outperforming the two-point method—and also showed higher accuracy during the advance phase, with average RMSE, MAPE, and R² values of 0.427, 0.075, and 0.990, respectively. These results confirm that the Beta-based model offers a more robust, precise, and reliable tool for optimizing the design and management of surface irrigation systems.
(This article belongs to the Section Water, Agriculture and Aquaculture)
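A minimal sketch of fitting a scaled Beta CDF to advance data with the midpoint enforced as a soft constraint; the penalty weight and parameterization are illustrative, not the paper's formulation.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import beta as beta_dist

def fit_advance(t_obs, x_obs, field_length, t_mid, x_mid):
    t_max = float(np.max(t_obs))

    def advance(t, a, b):
        # advance distance modeled as a scaled Beta CDF of normalized time
        return field_length * beta_dist.cdf(np.asarray(t) / t_max, a, b)

    def loss(p):
        a, b = p
        if a <= 0 or b <= 0:
            return np.inf
        sse = np.sum((advance(t_obs, a, b) - x_obs) ** 2)
        # midpoint observation enforced as a soft constraint via a penalty
        return sse + 1e3 * (advance(t_mid, a, b) - x_mid) ** 2

    return minimize(loss, x0=[1.0, 1.0], method="Nelder-Mead").x
```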
24 pages, 5840 KB  
Article
Numerical Study of Blast Load Acting on Typical Precast Segmental Reinforced Concrete Piers in Near-Field Explosions
by Lu Liu, Zhouhong Zong, Yulin Shan, Yao Yao, Chenglin Li and Yihao Cheng
CivilEng 2025, 6(4), 53; https://doi.org/10.3390/civileng6040053 - 2 Oct 2025
Abstract
Explosions, such as those caused by military weapons or terrorist attacks, can damage bridges or lead to their overall collapse. However, current bridge design specifications provide no clear guidelines for anti-blast design or protective measures for bridges under blast loading. With advancements in intelligent construction, precast segmental bridge piers have become a major construction trend, yet their anti-blast performance is not fully understood. To study an engineering calculation method for the blast load acting on a typical precast segmental reinforced concrete (RC) pier in near-field explosions, an air explosion test of the precast segmental RC pier is first carried out; then, a fluid–structure coupling numerical model of the pier is established, and the interaction between the explosion shock wave and the pier is discussed. A numerical simulation of the precast segmental RC pier in a near-field explosion is conducted based on the validated numerical model, and the distribution of the blast load acting on the pier is analyzed. The results show that the reflected overpressure on the pier and the incident overpressure in the free field are reliable. The simulation results are basically consistent with the experimental results (with a relative error of less than 8%), and the fluid–structure coupling model is reasonable and reliable. The explosion shock wave produces reflection and flow-around effects on the precast segmental RC pier. In near-field explosions, the back and side blast loads acting on the pier can be ignored in blast-resistant design. The front blast loads can be simplified and equalized, and blast-resistant design load coefficients (1, 0.2, 0.03, 0.02, and 0.01) and a calculation formula for the maximum equivalent overpressure peak value (applicable scaled distance range [0.175 m/kg^(1/3), 0.378 m/kg^(1/3)]) are proposed, which can serve as a reference for the blast-resistant design of precast segmental RC piers.
(This article belongs to the Section Mathematical Models for Civil Engineering)
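The applicability range quoted above is expressed in Hopkinson–Cranz scaled distance, Z = R / W^(1/3); a two-line check with hypothetical standoff and charge mass:

```python
def scaled_distance(standoff_m, charge_kg):
    # Hopkinson-Cranz scaled distance Z = R / W**(1/3)
    return standoff_m / charge_kg ** (1.0 / 3.0)

Z = scaled_distance(0.8, 12.0)  # hypothetical standoff and TNT-equivalent mass
print(f"Z = {Z:.3f} m/kg^(1/3), in range: {0.175 <= Z <= 0.378}")
```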
25 pages, 8881 KB  
Article
Evaluating Machine Learning Techniques for Brain Tumor Detection with Emphasis on Few-Shot Learning Using MAML
by Soham Sanjay Vaidya, Raja Hashim Ali, Shan Faiz, Iftikhar Ahmed and Talha Ali Khan
Algorithms 2025, 18(10), 624; https://doi.org/10.3390/a18100624 - 2 Oct 2025
Abstract
Accurate brain tumor classification from MRI is often constrained by limited labeled data. We systematically compare conventional machine learning, deep learning, and few-shot learning (FSL) for four classes (glioma, meningioma, pituitary, no tumor) using a standardized pipeline. Models are trained on the Kaggle Brain Tumor MRI Dataset and evaluated across dataset regimes (100%→10%). We further test generalization on BraTS and quantify robustness to resolution changes, acquisition noise, and modality shift (T1→FLAIR). To support clinical trust, we add visual explanations (Grad-CAM/saliency) and report per-class results (confusion matrices). A fairness-aligned protocol (shared splits, optimizer, early stopping) and a complexity analysis (parameters/FLOPs) enable balanced comparison. With full data, Convolutional Neural Networks (CNNs)/Residual Networks (ResNets) perform strongly but degrade with 10% data; Model-Agnostic Meta-Learning (MAML) retains competitive performance (AUC-ROC ≥ 0.9595 at 10%). Under cross-dataset validation (BraTS), FSL—particularly MAML—shows smaller performance drops than CNN/ResNet. Variability tests reveal FSL’s relative robustness to down-resolution and noise, although modality shift remains challenging for all models. Interpretability maps confirm correct activations on tumor regions in true positives and explain systematic errors (e.g., “no tumor”→pituitary). Conclusion: FSL provides accurate, data-efficient, and comparatively robust tumor classification under distribution shift. The added per-class analysis, interpretability, and complexity metrics strengthen clinical relevance and transparency.
(This article belongs to the Special Issue Machine Learning Models and Algorithms for Image Processing)
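A minimal PyTorch sketch of MAML's inner-loop adaptation, the mechanism that lets a model adapt from a few labeled scans; the model architecture and data handling are omitted.

```python
import torch

def maml_inner_step(model, loss_fn, x_support, y_support, inner_lr=0.01):
    # One inner-loop adaptation step on a task's support set.
    # create_graph=True keeps second-order gradients so the outer
    # (meta) update can differentiate through this step.
    params = list(model.parameters())
    loss = loss_fn(model(x_support), y_support)
    grads = torch.autograd.grad(loss, params, create_graph=True)
    return [p - inner_lr * g for p, g in zip(params, grads)]
```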
12 pages, 765 KB  
Article
Optimising Ventilation System Preplanning: Duct Sizing and Fan Layout Using Mixed-Integer Programming
by Julius H. P. Breuer and Peter F. Pelz
Int. J. Turbomach. Propuls. Power 2025, 10(4), 32; https://doi.org/10.3390/ijtpp10040032 - 1 Oct 2025
Abstract
Traditionally, duct sizing in ventilation systems is based on balancing pressure losses across all branches, with fan selection performed subsequently. However, this sequential approach is inadequate for systems with distributed fans in the central duct network, where pressure losses can vary significantly. Consequently, when designing the system topology, fan placement and duct sizing must be considered together. Recent research has demonstrated that discrete optimisation methods can account for multiple load cases and produce ventilation layouts that are both cost- and energy-efficient. However, existing approaches usually concentrate on component placement and assume that duct sizing has already been finalised. While this is sufficient for later design stages, it is unsuitable for the early stages of planning, when numerous system configurations must be evaluated quickly. In this work, we present a novel methodology that simultaneously optimises duct sizing, fan placement, and volume flow controller configuration to minimise life-cycle costs. To achieve this, we exploit the structure of the problem and formulate a mixed-integer linear program (MILP), which, unlike existing non-linear models, significantly reduces computation time while introducing only minor approximation errors. The resulting model enables fast and robust early-stage planning, providing optimal solutions in a matter of seconds to minutes. The methodology is demonstrated on a case study, yielding an optimal configuration with distributed fans in the central fan station and achieving a 5% reduction in life-cycle costs compared to conventional central designs. The MILP formulation achieves these results within seconds, with linearisation errors in electrical power consumption below 1.4%, confirming the approach's accuracy and suitability for early-stage planning.
(This article belongs to the Special Issue Advances in Industrial Fan Technologies)
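A toy MILP in the same spirit, sketched with PuLP under invented catalogue numbers: binary variables pick one discrete diameter per duct segment to minimize duct cost subject to a fan pressure budget. Real models add fan placement, controller configuration, and life-cycle cost terms.

```python
import pulp

diam = [0.20, 0.25, 0.315]                          # m, hypothetical catalogue
cost_per_m = {0.20: 40.0, 0.25: 55.0, 0.315: 75.0}  # EUR/m, hypothetical
dp_per_m = {0.20: 95.0, 0.25: 32.0, 0.315: 11.0}    # Pa/m at design flow

length = {"A": 12.0, "B": 8.0}                      # segment lengths, m
prob = pulp.LpProblem("duct_sizing", pulp.LpMinimize)
x = {(s, i): pulp.LpVariable(f"x_{s}_{i}", cat="Binary")
     for s in length for i in range(len(diam))}

# objective: total duct cost
prob += pulp.lpSum(length[s] * cost_per_m[diam[i]] * x[s, i]
                   for s in length for i in range(len(diam)))
for s in length:  # exactly one diameter per segment
    prob += pulp.lpSum(x[s, i] for i in range(len(diam))) == 1
# total pressure loss must stay within the available fan pressure (Pa)
prob += pulp.lpSum(length[s] * dp_per_m[diam[i]] * x[s, i]
                   for s in length for i in range(len(diam))) <= 600.0

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for (s, i), var in x.items():
    if var.value() > 0.5:
        print(f"segment {s}: diameter {diam[i]} m")
```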
13 pages, 4253 KB  
Article
Satellite DNA in Populus and Molecular Karyotyping of Populus xiaohei and Its Derived Double Haploids
by Bo Liu, Xinyu Wang, Wenjie Shen, Meng Wang, Guanzheng Qu and Quanwen Dou
Plants 2025, 14(19), 3046; https://doi.org/10.3390/plants14193046 - 1 Oct 2025
Abstract
Karyotype analysis and the investigation of chromosomal variations in Populus are challenging due to its small and morphologically similar chromosomes. Despite its utility in chromosome identification and karyotype evolutionary research, satellite DNA (satDNA) remains underutilized in Populus. In the present study, 12 satDNAs were identified from P. trichocarpa, and the copy numbers and chromosomal distributions of each satDNA were analyzed bioinformatically in the reference genomes of P. trichocarpa, P. simonii, and P. nigra. Ten satDNA probes for fluorescence in situ hybridization (FISH) were successfully developed and validated on chromosomes of P. xiaohei (poplar hybrid P. simonii × P. nigra). By integrating bioinformatic genomic satDNA distribution patterns with experimental FISH signals, we constructed a molecular karyotype of P. xiaohei. Comparative analysis revealed errors in current poplar genome assemblies. Comparative karyotype analysis of P. xiaohei and its doubled haploid (DH) lines revealed chromosomal variations in the DH lines relative to the donor tree. The results demonstrate that the newly developed satDNA probes constitute robust cytogenetic tools for detecting structural variations in Populus, while molecular karyotyping provides new insights into the genetic mechanisms underlying chromosome variations in P. xiaohei and the derived DH plants.
(This article belongs to the Section Plant Genetics, Genomics and Biotechnology)
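A crude stand-in for the copy-number part of the analysis: counting exact, non-overlapping occurrences of a satellite monomer per chromosome in a FASTA file. Real satDNA pipelines use dedicated tools and tolerate monomer variation; this sketch only illustrates the idea.

```python
from collections import Counter

def monomer_counts(fasta_path, monomer):
    # per-sequence exact-match counts as a rough copy-number proxy
    counts, name, chunks = Counter(), None, []
    with open(fasta_path) as fh:
        for line in fh:
            if line.startswith(">"):
                if name is not None:
                    counts[name] = "".join(chunks).upper().count(monomer.upper())
                name, chunks = line[1:].split()[0], []
            else:
                chunks.append(line.strip())
    if name is not None:
        counts[name] = "".join(chunks).upper().count(monomer.upper())
    return counts
```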
15 pages, 479 KB  
Article
Security of Quantum Key Distribution with One-Time-Pad-Protected Error Correction and Its Performance Benefits
by Roman Novak
Entropy 2025, 27(10), 1032; https://doi.org/10.3390/e27101032 - 1 Oct 2025
Abstract
In quantum key distribution (QKD), public discussion over the authenticated classical channel inevitably leaks information about the raw key to a potential adversary, which must later be mitigated by privacy amplification. To limit this leakage, a one-time pad (OTP) has been proposed to protect message exchanges in various settings. Building on the security proof of Tomamichel and Leverrier, which is based on a non-asymptotic framework and considers the effects of finite resources, we extend the analysis to the OTP-protected scheme. We show that when the OTP key is drawn from the entropy pool of the same QKD session, the achievable quantum key rate is identical to that of the reference protocol with unprotected error-correction exchange. This equivalence holds for a fixed security level, defined via the diamond distance between the real and ideal protocols modeled as completely positive trace-preserving maps. At the same time, the proposed approach reduces the computational requirements: for non-interactive low-density parity-check codes, the encoding problem size is reduced by the square of the syndrome length, while privacy amplification requires less compression. The technique preserves security, avoids the use of QKD keys between sessions, and has the potential to improve performance.
(This article belongs to the Section Quantum Information)
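The masking step itself is just an XOR of the LDPC syndrome with pad bits; a sketch, assuming 0/1 arrays and leaving key management and the entropy-pool accounting aside.

```python
import numpy as np

def otp_mask_syndrome(syndrome, otp_key):
    # Mask an LDPC syndrome with one-time-pad bits before public discussion;
    # the receiver removes the mask by XOR-ing with the same key.
    syndrome = np.asarray(syndrome, dtype=np.uint8)
    otp_key = np.asarray(otp_key, dtype=np.uint8)
    assert syndrome.shape == otp_key.shape
    return np.bitwise_xor(syndrome, otp_key)
```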
55 pages, 4152 KB  
Article
Compliance with the Euro Area Financial Criteria and Economic Convergence in the European Union over the Period 2000–2023
by Constantin Duguleana, Liliana Duguleana, Klára-Dalma Deszke and Mihai Bogdan Alexandrescu
Int. J. Financial Stud. 2025, 13(4), 183; https://doi.org/10.3390/ijfs13040183 - 1 Oct 2025
Abstract
The two groups of EU economies, the euro area and the non-euro area, are statistically analyzed, taking into account the fulfillment of the euro area financial criteria and economic performance over the period 2000–2023. Compliance with the financial criteria, economic performance, and their significant influencing factors are presented comparatively for the two groups of countries. The long-run equilibrium between economic growth and its factors is identified econometrically with error correction models (ECMs) and autoregressive distributed lag (ARDL) models for the two data panels. In the short term, economic shocks are taken into account to compare their different influences on economic growth within the two groups of countries. The system GMM approach is used to model economic convergence at the EU level over the period under review. Comparisons between GDP growth and its theoretical values from the econometric models have led to interesting conclusions regarding the existence and characteristics of economic convergence at the group and EU levels. EU countries outside the euro area had higher economic growth rates than euro area economies over the period 2000–2023. In the long run, investment brings a greater increase in economic development in EU countries outside the euro area than in euro area countries. Economic shocks have been felt more deeply in economic growth in the euro area than in the non-euro area. The speed of adjustment towards long-run equilibrium in the econometric models is slower for non-euro area economies than for the euro area over a one-year period. At the level of the European Monetary Union, policy changes have a faster impact on economic development and a faster speed of adjustment towards equilibrium.
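For a flavor of the ARDL machinery, statsmodels provides a direct interface; the sketch below fits a single synthetic series and does not reproduce the paper's panel ARDL/ECM estimators.

```python
import numpy as np
from statsmodels.tsa.ardl import ARDL

# toy ARDL(1, 1): GDP growth regressed on investment, synthetic data only
rng = np.random.default_rng(3)
invest = rng.normal(2.0, 1.0, size=96)
growth = 0.5 * invest + rng.normal(scale=0.4, size=96)

res = ARDL(growth, lags=1, exog=invest.reshape(-1, 1), order=1).fit()
print(res.params)  # constant, lagged growth, current and lagged investment
```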
17 pages, 1225 KB  
Article
Assessment of the ZJWARMS Forecast Model’s Adaptability and AI-Based Bias Correction over Complex Terrain
by Qi Zhang, Yiwen Shi, Yifan Wang, Shiyun Mou, Zhidan Zhu, Tu Qian, Zhijun Mao, Shujie Yuan, Lin Han and Xiaocan Lao
Atmosphere 2025, 16(10), 1151; https://doi.org/10.3390/atmos16101151 - 1 Oct 2025
Abstract
This study assesses the efficacy of the ZJWARMS model’s AI-based post-processing correction method for temperature and wind speed forecasts in complex terrain. By analyzing 72 h forecasts at four stations with varying elevations (from 273 m to 1327 m) in the Liuchun Lake region during December 2021–December 2022, the study found that AI-based corrections substantially enhanced both forecast accuracy and stability. The results indicate that, after correction, temperature forecast accuracy at all stations exceeded 99%, with the most notable relative gains at higher elevations (up to 48.1%). The mean absolute error (MAE) for temperature declined from 3.08 °C to below 0.8 °C at Octagonal Palace, and from 3.29 °C to below 0.6 °C at Mountaintop. Wind speed forecast accuracy also increased from approximately 60–70% to nearly 100%, with MAE generally constrained to the range of 0.2–0.4 m/s. In terms of extreme error control, the number of samples with temperature errors exceeding ±2 °C was markedly reduced; for instance, at Mountainside, the count dropped from 127 to 0. Extreme wind speed errors were also effectively eliminated. After correction, error distributions became more concentrated, and both temporal stability and spatial consistency showed notable improvement. These gains enhance operational forecasting and risk management in mountainous regions by providing more reliable, high-confidence guidance, for example, for threshold-based wind-hazard alerts and mountain-road icing warnings.
(This article belongs to the Section Atmospheric Techniques, Instruments, and Modeling)
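The abstract does not specify the correction model, so the sketch below uses a generic gradient-boosting corrector trained on the forecast error, with invented features and synthetic data, purely to illustrate the post-processing pattern.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(5)
n = 2000
raw = rng.normal(10.0, 8.0, n)               # raw model temperature forecast, deg C
lead = rng.integers(1, 73, n).astype(float)  # forecast lead time, h
hour = rng.integers(0, 24, n).astype(float)  # valid hour of day
obs = (raw - (2.0 + 0.02 * lead) * np.sin(np.pi * hour / 24.0)
       + rng.normal(scale=0.5, size=n))      # synthetic observations with bias

# learn the forecast error from the features, then add it back as a correction
X = np.column_stack([raw, lead, hour])
corrector = GradientBoostingRegressor().fit(X, obs - raw)
corrected = raw + corrector.predict(X)
print(f"MAE raw: {np.mean(np.abs(obs - raw)):.2f}  "
      f"MAE corrected: {np.mean(np.abs(obs - corrected)):.2f}")
```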
30 pages, 852 KB  
Article
Bayesian Model Updating of Structural Parameters Using Temperature Variation Data: Simulation
by Ujjwal Adhikari and Young Hoon Kim
Machines 2025, 13(10), 899; https://doi.org/10.3390/machines13100899 - 1 Oct 2025
Abstract
Finite element (FE) models are widely used in structural health monitoring to represent real structures and assess their condition, but discrepancies often arise between numerical and actual structural behavior due to simplifying assumptions, uncertain parameters, and environmental influences. Temperature variation, in particular, significantly affects structural stiffness and modal properties, yet it is often treated as noise in traditional model updating methods. This study treats temperature changes as valuable information for model updating and structural damage quantification. The Bayesian model updating approach (BMUA) is a probabilistic approach that updates uncertain model parameters by combining prior knowledge with measured data to estimate their posterior probability distributions. However, traditional BMUA methods assume mass is known and only update stiffness. A novel BMUA framework is proposed that incorporates thermal buckling and temperature-dependent stiffness estimation and introduces an algorithm to eliminate the coupling effect between mass and stiffness by using temperature-induced stiffness changes. This enables the simultaneous updating of both parameters. The framework is validated through numerical simulations on a three-story aluminum shear frame under uniform and non-uniform temperature distributions. Under healthy and uniform temperature conditions, stiffness parameters were estimated with high accuracy, with errors below 0.5% and within uncertainty bounds, while mass parameters exhibited errors up to 13.8% that exceeded their extremely low standard deviations, indicating potential model bias. Under non-uniform temperature distributions, accuracy declined, particularly for localized damage cases, with significant deviations in both parameters.
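A generic BMUA-style ingredient, sketched under assumptions: a Gaussian likelihood over measured natural frequencies plus a Gaussian prior, sampled with a random-walk Metropolis chain. The paper's temperature-dependent stiffness model and mass-stiffness decoupling algorithm are not reproduced.

```python
import numpy as np

def log_posterior(theta, f_meas, f_model, sigma, mu0, sd0):
    # Gaussian likelihood on measured natural frequencies + Gaussian prior;
    # f_model maps the stiffness/mass parameters to predicted frequencies
    if np.any(theta <= 0.0):
        return -np.inf
    resid = (f_meas - f_model(theta)) / sigma
    prior = (theta - mu0) / sd0
    return -0.5 * (np.sum(resid ** 2) + np.sum(prior ** 2))

def metropolis(log_post, theta0, steps=20000, scale=0.02, seed=0):
    # random-walk Metropolis sampler returning posterior draws
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    lp = log_post(theta)
    chain = np.empty((steps, theta.size))
    for i in range(steps):
        prop = theta + scale * rng.standard_normal(theta.size)
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        chain[i] = theta
    return chain
```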
15 pages, 2939 KB  
Article
DIC-Aided Mechanoluminescent Film Sensor for Quantitative Measurement of Full-Field Strain
by Guoqing Gu, Liya Dai and Liyun Chen
Sensors 2025, 25(19), 6018; https://doi.org/10.3390/s25196018 - 1 Oct 2025
Abstract
To break through the bottleneck in mapping the mechanoluminescent (ML) intensity field to the strain field, a quantification method for full-field strain measurement based on pixel-level data fusion is proposed, integrating ML imaging with digital image correlation (DIC) to achieve precise reconstruction of the strain field. Experiments are conducted using aluminum alloy specimens coated with an ML film sensor on their surfaces. During the tensile process, ML images of the films and speckle images of the specimen backsides are simultaneously acquired. Combined with DIC technology, high-precision full-field strain distributions are obtained. Through spatial registration and region matching algorithms, a quantitative calibration model between ML intensity and DIC strain is established. The results indicate that the ML intensity and DIC strain exhibit a significant linear correlation (R² = 0.92). To verify the universality of the model, tests on notched aluminum alloy specimens show that the reconstructed strain field is in good agreement with the DIC and finite element analysis results, with an average relative error of 0.23%. This method enables full-field, non-contact conversion of ML signals into strain distributions with high spatial resolution, providing a quantitative basis for studying ML response mechanisms under complex loading.
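The calibration step reduces to a linear fit between co-registered ML intensity and DIC strain fields; a sketch on synthetic arrays (slope, intercept, and noise level are invented):

```python
import numpy as np

rng = np.random.default_rng(9)
ml_intensity = rng.uniform(0.0, 1.0, size=(64, 64))  # registered ML field
dic_strain = (1.8e-3 * ml_intensity + 2.0e-4
              + rng.normal(scale=5.0e-5, size=(64, 64)))  # synthetic DIC field

# least-squares calibration line strain = a * intensity + b, plus an R^2 check
a, b = np.polyfit(ml_intensity.ravel(), dic_strain.ravel(), deg=1)
pred = a * ml_intensity + b
r2 = 1.0 - (np.sum((dic_strain - pred) ** 2)
            / np.sum((dic_strain - dic_strain.mean()) ** 2))
print(f"strain = {a:.2e} * I + {b:.2e}, R^2 = {r2:.3f}")
```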