Search Results (189)

Search Parameters:
Keywords = probability density curves

24 pages, 1057 KB  
Article
A New Weibull–Rayleigh Distribution: Characterization, Estimation Methods, and Applications with Change Point Analysis
by Hanan Baaqeel, Hibah Alnashri, Amani S. Alghamdi and Lamya Baharith
Axioms 2025, 14(9), 649; https://doi.org/10.3390/axioms14090649 - 22 Aug 2025
Viewed by 118
Abstract
Many scholars are interested in modeling complex data in an effort to create novel probability distributions. This article proposes a novel class of distributions based on the inverse of the exponentiated Weibull hazard rate function. A particular member of this class, the Weibull–Rayleigh distribution (WR), is presented with focus. The WR features diverse probability density functions, including symmetric, right-skewed, left-skewed, and the inverse J-shaped distribution which is flexible in modeling lifetime and systems data. Several significant statistical features of the suggested WR are examined, covering the quantile, moments, characteristic function, probability weighted moment, order statistics, and entropy measures. The model accuracy was verified through Monte Carlo simulations of five different statistical estimation methods. The significance of WR is demonstrated with three real-world data sets, revealing a higher goodness of fit compared to other competing models. Additionally, the change point for the WR model is illustrated using the modified information criterion (MIC) to identify changes in the structures of these data. The MIC and curve analysis captured a potential change point, supporting and proving the effectiveness of WR distribution in describing transitions. Full article
(This article belongs to the Special Issue Probability, Statistics and Estimations, 2nd Edition)

14 pages, 838 KB  
Article
Research on Commuting Mode Split Model Based on Dominant Transportation Distance
by Jinhui Tan, Shuai Teng, Zongchao Liu, Wei Mao and Minghui Chen
Algorithms 2025, 18(8), 534; https://doi.org/10.3390/a18080534 - 21 Aug 2025
Viewed by 174
Abstract
Conventional commuting mode split models are characterized by inherent limitations in dynamic adaptability, primarily due to persistent dependence on periodic survey data with significant temporal gaps. A dominant transportation distance-based modeling framework for commuting mode choice is proposed, formalizing a generalized cost function. Through the application of random utility theory, probability density curves are generated to quantify mode-specific dominant distance ranges across three demographic groups: car-owning households, non-car households, and collective households. Empirical validation was conducted using Dongguan as a case study, with model parameters calibrated against 2015 resident travel survey data. Parameter updates are dynamically executed through the integration of big data sources (e.g., mobile signaling and LBS). Successful implementation has been achieved in maintaining Dongguan’s transportation models during the 2021 and 2023 iterations. Full article
(This article belongs to the Special Issue Algorithms for Smart Cities (2nd Edition))
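The random-utility step described in the abstract above is, in its most common form, a multinomial logit over mode-specific generalized costs. A minimal sketch of that generic step — the cost values and scale parameter below are illustrative assumptions, not the paper's calibrated Dongguan model:

```python
import numpy as np

def mode_split_probabilities(costs, theta=1.0):
    """Multinomial logit under random utility theory: utility = -theta * generalized cost.

    costs: generalized costs for one trip, shape (n_modes,).
    Returns choice probabilities summing to 1.
    """
    u = -theta * np.asarray(costs, dtype=float)
    u -= u.max()                      # shift for numerical stability
    p = np.exp(u)
    return p / p.sum()

# Illustrative generalized costs (walk, bike, bus, car) for one commute.
probs = mode_split_probabilities([8.0, 5.0, 4.0, 6.0], theta=0.8)
print(probs.round(3))  # the lowest-cost mode (bus) gets the highest share
```

Sweeping such probabilities over trip distance is what yields the mode-specific "dominant distance range" curves the abstract describes.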

24 pages, 8202 KB  
Article
Study on the Empirical Probability Distribution Model of Soil Factors Influencing Seismic Liquefaction
by Zhengquan Yang, Meng Fan, Jingjun Li, Xiaosheng Liu, Jianming Zhao and Hui Yang
Buildings 2025, 15(16), 2861; https://doi.org/10.3390/buildings15162861 - 13 Aug 2025
Viewed by 223
Abstract
One of the important tasks in sand liquefaction assessment is to evaluate the likelihood of soil liquefaction. However, most liquefaction assessment methods are deterministic for influencing factors and fail to calculate the liquefaction probability by systematically considering the probability distributions of soil factors. Based on field liquefaction investigation cases, probability distribution fitting and a hypothesis test were carried out. For the variables that failed to pass the fitting and test, the kernel density estimation was conducted. Methods for calculating the liquefaction probability using a Monte Carlo simulation with the probability distribution were then proposed. The results indicated that for (N₁)₆₀, SM, S, and GM followed a Gaussian distribution, while CL and ML followed a lognormal distribution; for FC, SM and GM followed a lognormal distribution; and for d₅₀, ML and S followed a Gaussian and lognormal distribution, respectively. The other factors’ distribution curves can be calculated by kernel density estimation. It is feasible to calculate the liquefaction probability based on a Monte Carlo simulation of the variable distribution. The result of the liquefaction probability calculation in this case was similar to that of the existing probability model and was consistent with actual observations. Regional sample differences were considered by introducing the normal distribution error term, and the liquefaction probability accuracy could be improved to a certain extent. The liquefaction probability at a specific seismic level or the total probability within a certain period in the future can be calculated with the method proposed in this paper. It provides a data-driven basis for realistically estimating the likelihood of soil liquefaction under seismic loading and contributes to site classification, liquefaction potential zoning, and ground improvements in seismic design decisions. The practical value of seismic hazard mapping and performance-based design in earthquake-prone regions was also demonstrated. Full article
(This article belongs to the Section Building Structures)
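The kernel density estimation and Monte Carlo steps named in the abstract above can be sketched generically. This is not the paper's calibrated model — the blow counts and the liquefaction threshold below are made-up illustrations of the technique:

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_kde_sample(data, n, rng):
    """Draw n samples from a Gaussian KDE of `data`.

    Sampling from a KDE: pick a data point at random, then add
    Gaussian noise with the kernel bandwidth (Silverman's rule).
    """
    data = np.asarray(data, dtype=float)
    h = 1.06 * data.std(ddof=1) * len(data) ** (-1 / 5)  # Silverman bandwidth
    centers = rng.choice(data, size=n, replace=True)
    return centers + rng.normal(0.0, h, size=n)

# Illustrative (N1)60 blow counts; threshold below which liquefaction is assumed.
blow_counts = np.array([8., 12., 15., 10., 22., 18., 9., 14., 11., 20.])
draws = gaussian_kde_sample(blow_counts, 100_000, rng)
p_liq = np.mean(draws < 12.0)   # Monte Carlo estimate of P((N1)60 < 12)
print(round(p_liq, 3))
```

In the paper, factors that pass a distribution fit use a parametric (Gaussian/lognormal) sampler in place of the KDE step; the Monte Carlo averaging is the same.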

27 pages, 942 KB  
Article
Individual Homogeneity Learning in Density Data Response Additive Models
by Zixuan Han, Tao Li, Jinhong You and Narayanaswamy Balakrishnan
Stats 2025, 8(3), 71; https://doi.org/10.3390/stats8030071 - 9 Aug 2025
Viewed by 203
Abstract
In many complex applications, both data heterogeneity and homogeneity are present simultaneously. Overlooking either aspect can lead to misleading statistical inferences. Moreover, the increasing prevalence of complex, non-Euclidean data calls for more sophisticated modeling techniques. To address these challenges, we propose a density data response additive model, where the response variable is represented by a distributional density function. In this framework, individual effect curves are assumed to be homogeneous within groups but heterogeneous across groups, while covariates that explain variation share common additive bivariate functions. We begin by applying a transformation to map density functions into a linear space. To estimate the unknown subject-specific functions and the additive bivariate components, we adopt a B-spline series approximation method. Latent group structures are uncovered using a hierarchical agglomerative clustering algorithm, which allows our method to recover the true underlying groupings with high probability. To further improve estimation efficiency, we develop refined spline-backfitted local linear estimators for both the grouped structures and the additive bivariate functions in the post-grouping model. We also establish the asymptotic properties of the proposed estimators, including their convergence rates, asymptotic distributions, and post-grouping oracle efficiency. The effectiveness of our method is demonstrated through extensive simulation studies and real-world data analysis, both of which show promising and robust performance. Full article

13 pages, 272 KB  
Article
Asymptotic Behavior of the Bayes Estimator of a Regression Curve
by Agustín G. Nogales
Mathematics 2025, 13(14), 2319; https://doi.org/10.3390/math13142319 - 21 Jul 2025
Viewed by 184
Abstract
In this work, we prove the convergence to 0 in both L¹ and L² of the Bayes estimator of a regression curve (i.e., the conditional expectation of the response variable given the regressor). The strong consistency of the estimator is also derived. The Bayes estimator of a regression curve is the regression curve with respect to the posterior predictive distribution. The result is general enough to cover discrete and continuous cases, parametric or nonparametric, and no specific supposition is made about the prior distribution. Some examples, two of them of a nonparametric nature, are given to illustrate the main result; one of the nonparametric examples exhibits a situation where the estimation of the regression curve has an optimal solution, although the problem of estimating the density is meaningless. An important role in the demonstration of these results is the establishment of a probability space as an adequate framework to address the problem of estimating regression curves from the Bayesian point of view, putting at our disposal powerful probabilistic tools in that endeavor. Full article
(This article belongs to the Section D1: Probability and Statistics)
27 pages, 4019 KB  
Article
Study of the Applicability of CMADS Data Based on the BTOPMC Model in the South Yunnan Region—An Example from the Jinping River Basin
by Hongbo Zhang, Chunyong Li, Junjie Wu, Ban Yin, Hongbin Liu, Guliang Xie, Yanglin Xie and Ting Yang
Water 2025, 17(12), 1802; https://doi.org/10.3390/w17121802 - 16 Jun 2025
Viewed by 483
Abstract
Data-driven distributed hydrological models utilizing atmospheric assimilation are crucial for simulating hydrological processes, particularly in regions lacking historical observational data, and for managing and developing local water resources due to the impacts of climate change and human activities. The southern part of Yunnan is located at the southwestern border of China, and the small number of observation stations poses a major obstacle to local water-resource management and hydrological research. This paper carries out an evaluation of the accuracy of the China Atmospheric-Assimilation Dataset (CMADS) in southern Yunnan and uses CMADS data and measured data to drive the BTOPMC model to investigate hydrological processes in the Jinping River basin, a representative local sub-basin. The study shows that the probability density function statistic (SS) between CMADS data and the measured precipitation data is 0.941, and their probability density curves of precipitation are basically the same. The relative error of daily precipitation is −19%, with 90% of the daily precipitation error concentrated within ±10 mm/day, which increases as daily precipitation increases. This paper examines three precipitation scenarios to drive the hydrological model, resulting in Nash–Sutcliffe efficiency (NSE) coefficients of 66.8%, 81.0%, and 83.9% for calibration, and 54.5%, 70.2%, and 74.5% for validation. These results indicate that CMADS data possesses a certain degree of applicable accuracy in southern Yunnan. Furthermore, the CMADS-driven BTOPMC model is suitable for simulating hydrological processes and conducting water-resource research in the region. The integration of CMADS data with actual measurement data can enhance the accuracy of hydrological simulations. Overall, the CMADS data have good applicability in southern Yunnan, and the CMADS-driven BTOPMC model can be used for hydrological modeling studies and water-resource management applications in southern Yunnan. Full article
(This article belongs to the Special Issue Remote Sensing of Spatial-Temporal Variation in Surface Water)

23 pages, 422 KB  
Article
A Novel Alpha-Power X Family: A Flexible Framework for Distribution Generation with Focus on the Half-Logistic Model
by A. A. Bhat, Aadil Ahmad Mir, S. P. Ahmad, Badr S. Alnssyan, Abdelaziz Alsubie and Yashpal Singh Raghav
Entropy 2025, 27(6), 632; https://doi.org/10.3390/e27060632 - 13 Jun 2025
Viewed by 470
Abstract
This study introduces a new and flexible class of probability distributions known as the novel alpha-power X (NAP-X) family. A key development within this framework is the novel alpha-power half-logistic (NAP-HL) distribution, which extends the classical half-logistic model through an alpha-power transformation, allowing for greater adaptability to various data shapes. The paper explores several theoretical aspects of the proposed model, including its moments, quantile function and hazard rate. To assess the effectiveness of parameter estimation, a detailed simulation study is conducted using seven estimation techniques: Maximum likelihood estimation (MLE), Cramér–von Mises estimation (CVME), maximum product of spacings estimation (MPSE), least squares estimation (LSE), weighted least squares estimation (WLSE), Anderson–Darling estimation (ADE) and a right-tailed version of Anderson–Darling estimation (RTADE). The results offer comparative insights into the performance of each method across different sample sizes. The practical value of the NAP-HL distribution is demonstrated using two real datasets from the metrology and engineering domains. In both cases, the proposed model provides a better fit than the traditional half-logistic and related distributions, as shown by lower values of standard model selection criteria. Graphical tools such as fitted density curves, Q–Q and P–P plots, survival functions and box plots further support the suitability of the model for real-world data analysis. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
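For orientation, the classical alpha-power transform applies G(x) = (α^F(x) − 1)/(α − 1) to a base CDF F for α > 0, α ≠ 1; the paper's "novel" variant may differ from this. A sketch applying the classical transform to the standard half-logistic CDF and checking that the result is again a valid CDF:

```python
import numpy as np

def half_logistic_cdf(x):
    """CDF of the standard half-logistic distribution (x >= 0)."""
    return (1 - np.exp(-x)) / (1 + np.exp(-x))

def alpha_power_cdf(x, alpha, base_cdf=half_logistic_cdf):
    """Classical alpha-power transform of a base CDF (alpha > 0, alpha != 1):
    G(x) = (alpha**F(x) - 1) / (alpha - 1)."""
    F = base_cdf(x)
    return (alpha ** F - 1) / (alpha - 1)

x = np.linspace(0, 10, 1001)
G = alpha_power_cdf(x, alpha=3.0)
print(G[0], round(float(G[-1]), 4))  # starts at 0 and rises toward 1
```

Varying α reshapes the density's skewness and tail weight while keeping the half-logistic as the α → 1 limit, which is the kind of added flexibility the abstract describes.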

28 pages, 4199 KB  
Article
A Sustainable SOH Prediction Model for Lithium-Ion Batteries Based on CPO-ELM-ABKDE with Uncertainty Quantification
by Meng-Xiang Yan, Zhi-Hui Deng, Lianfeng Lai, Yong-Hong Xu, Liang Tong, Hong-Guang Zhang, Yi-Yang Li, Ming-Hui Gong and Guo-Ju Liu
Sustainability 2025, 17(11), 5205; https://doi.org/10.3390/su17115205 - 5 Jun 2025
Viewed by 705
Abstract
The battery management system (BMS) is crucial for the efficient operation of batteries, with state of health (SOH) prediction being one of its core functions. Accurate SOH prediction can optimize battery management, enhance utilization and range, and extend battery lifespan. This study proposes an SOH estimation model for lithium-ion batteries that integrates the Crested Porcupine Optimizer (CPO) for parameter optimization, Extreme Learning Machine (ELM) for prediction, and Adaptive Bandwidth Kernel Function Density Estimation (ABKDE) for uncertainty quantification, aiming to enhance the long-term reliability and sustainability of energy storage systems. Health factors (HFs) are extracted by analyzing the charging voltage curves and capacity increment curves of lithium-ion batteries, and their correlation with battery capacity is validated using Pearson and Spearman correlation coefficients. The ELM model is optimized using the CPO algorithm to fine-tune input weights (IWs) and biases (Bs), thereby enhancing prediction performance. Additionally, ABKDE-based probability density estimation is introduced to construct confidence intervals for uncertainty quantification, further improving prediction accuracy and stability. Experiments using the NASA battery aging dataset validate the proposed model. Comparative analysis with different models demonstrates that the CPO-ELM-ABKDE model achieves SOH estimation with a mean absolute error (MAE) and root-mean-square error (RMSE) within 0.65% and 1.08%, respectively, significantly outperforming other approaches. Full article
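Of the three components named above, the ELM backbone is simple enough to sketch: a random, fixed hidden layer whose output weights are solved in closed form by least squares. Synthetic capacity-fade data stands in for the NASA dataset, and the hidden-layer size and noise level are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(42)

def elm_fit(X, y, n_hidden=50, rng=rng):
    """Extreme Learning Machine regression: random hidden layer,
    output weights solved by least squares."""
    W = rng.normal(size=(X.shape[1], n_hidden))   # input weights (random, fixed)
    b = rng.normal(size=n_hidden)                 # hidden biases (random, fixed)
    H = np.tanh(X @ W + b)                        # hidden activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # output weights, closed form
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy capacity-fade curve: SOH decays with normalised cycle count plus noise.
cycles = np.linspace(0, 1, 200).reshape(-1, 1)
soh = 1.0 - 0.3 * cycles.ravel() ** 1.5 + rng.normal(0, 0.005, 200)
W, b, beta = elm_fit(cycles, soh)
rmse = np.sqrt(np.mean((elm_predict(cycles, W, b, beta) - soh) ** 2))
print(round(float(rmse), 4))  # fit error on the order of the injected noise
```

The paper's CPO step tunes the input weights and biases rather than leaving them random, and ABKDE wraps the point predictions in confidence intervals; both are beyond a short sketch.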

19 pages, 4846 KB  
Article
Research on the Degradation Model of a Smart Circuit Breaker Based on a Two-Stage Wiener Process
by Zhenhua Xie, Jianmin Ren, Puquan He, Linming Hou and Yao Wang
Processes 2025, 13(6), 1719; https://doi.org/10.3390/pr13061719 - 30 May 2025
Viewed by 640
Abstract
As the global energy transition moves towards the goal of low-carbon sustainability, it is crucial to build a new energy power system. The performance and reliability of Smart Circuit Breakers are the key to ensuring safe operation. The control circuit is the key to the reliability of Smart Circuit Breakers, so studying its performance-degradation process is of great significance. This study centers on the development of a degradation model and the performance-degradation-assessment method for the control circuit of Smart Circuit Breakers and proposes a novel approach for lifetime prediction. Firstly, a test platform is established to collect necessary data for developing a performance-degradation model based on the two-stage Wiener process. According to the theory of maximum likelihood estimation and the Schwarz information criterion, the estimation method of the model distribution parameters in each degradation stage and the method for identifying the degradation ‘turning point’ are studied. Then, reliability along with residual life serve as evaluation criteria for analyzing the control circuit’s performance deterioration. Substituting the degradation characteristic data into the degradation model for an example analysis, combined with the Arrhenius empirical formula, the reliability function at room temperature and the curve of the residual life probability density function are obtained. Ultimately, the average service life of the Smart Circuit Breaker control circuit at room temperature is 178,100 h (20.3 years), with a degradation turning point at 155,000 h (17.7 years), providing a basis for the lifetime evaluation of low-voltage circuit breakers. Full article
(This article belongs to the Special Issue Fault Diagnosis Technology in Machinery Manufacturing)
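A two-stage Wiener degradation path of the kind modeled above is straightforward to simulate: linear drift that switches at the turning point, plus Brownian noise. Only the 155,000 h turning point is taken from the abstract; the drift and diffusion values below are illustrative assumptions, not the paper's estimates:

```python
import numpy as np

rng = np.random.default_rng(1)

def two_stage_wiener(t, tau, mu1, mu2, sigma, rng=rng):
    """Simulate one path of a two-stage Wiener degradation process:
    drift mu1 before the turning point tau, mu2 after, diffusion sigma."""
    dt = np.diff(t, prepend=t[0])                 # dt[0] = 0, so the path starts at 0
    drift = np.where(t <= tau, mu1, mu2)
    increments = drift * dt + sigma * np.sqrt(dt) * rng.normal(size=t.size)
    return np.cumsum(increments)

t = np.linspace(0, 200_000, 2001)                 # hours
path = two_stage_wiener(t, tau=155_000, mu1=2e-6, mu2=8e-6, sigma=1e-4)
print(round(float(path[-1]), 3))  # mean end level = 2e-6*155e3 + 8e-6*45e3 = 0.67
```

Fitting such a model amounts to estimating (mu1, mu2, sigma, tau) from observed paths, which is where the maximum likelihood and Schwarz-criterion machinery of the abstract comes in.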

23 pages, 9305 KB  
Article
Structure and Regeneration Differentiation of Coniferous Stand Groups in Representative Altay Montane Forests: Demographic Evidence from Dominant Boreal Conifers
by Haiyan Zhang, Yang Yu, Lingxiao Sun, Chunlan Li, Jing He, Ireneusz Malik, Malgorzata Wistuba and Ruide Yu
Forests 2025, 16(6), 885; https://doi.org/10.3390/f16060885 - 23 May 2025
Viewed by 497
Abstract
With the intensification of global climate change and human activities, coniferous species as the main components of natural forests in the Altay Mountains are facing the challenges of aging and regeneration. This study systematically analyzed structural heterogeneity and regeneration of three coniferous stand groups, Larix sibirica Ledeb. stand group, Abies sibirica Ledeb.-Picea obovata Ledeb.-Larix sibirica mixed stand group, and Picea obovata stand group, respectively, across western, central, and eastern forest areas of the Altay Mountains in Northwest China based on field surveys in 2023. Methodologically, we integrated Kruskal–Wallis/Dunn’s post hoc tests, nonlinear power-law modeling (diameter at breast height (DBH)–age relationships, validated via R², root mean square error (RMSE), and F-tests), static life tables (age class mortality and survival curves), and dynamic indices. Key findings revealed structural divergence: the L. sibirica stand group exhibited dominance of large-diameter trees (>30 cm DBH) with sparse seedlings/saplings and limited regeneration; the mixed stand group was dominated by small DBH individuals (<10 cm), showing young age structures and vigorous regeneration; while the P. obovata stand group displayed uniform DBH/height distributions and slow regeneration capacity. Radial growth rates differed significantly—highest in the mixed stand group (average of 0.315 cm/a), intermediate in the P. obovata stand group (0.216 cm/a), and lowest in the L. sibirica stand group (0.180 cm/a). Age–density trends varied among stand groups: unimodal in the L. sibirica and P. obovata stand groups while declining in the mixed stand group. All stand groups followed a Deevey-II survival curve (constant mortality across ages). The mixed stand group showed the highest growth potential but maximum disturbance risk, the L. sibirica stand group exhibited complex variation with lowest risk probability, while the P. obovata stand group had weaker adaptive capacity. These results underscore the need for differentiated management: promoting L. sibirica regeneration via gap-based interventions, enhancing disturbance resistance in the mixed stand group through structural diversification, and prioritizing P. obovata conservation to maintain ecosystem stability. This multi-method framework bridges stand-scale heterogeneity with demographic mechanisms, offering actionable insights for climate-resilient forestry. Full article
(This article belongs to the Section Forest Ecology and Management)

31 pages, 19278 KB  
Article
Fractal Dimension of Pollutants and Urban Meteorology of a Basin Geomorphology: Study of Its Relationship with Entropic Dynamics and Anomalous Diffusion
by Patricio Pacheco and Eduardo Mera
Fractal Fract. 2025, 9(4), 255; https://doi.org/10.3390/fractalfract9040255 - 17 Apr 2025
Viewed by 318
Abstract
A total of 108 maximum Kolmogorov entropy (S_K) values, calculated by means of chaos theory, are obtained from 108 time series (TSs) (each consisting of 28,463 hourly data points). The total TSs are divided into 54 urban meteorological (temperature (T), relative humidity (RH) and wind speed magnitude (WS)) and 54 pollutants (PM10, PM2.5 and CO). The measurement locations (6) are located at different heights and the data recording was carried out in three periods, 2010–2013, 2017–2020 and 2019–2022, which determines a total of 3,074,004 data points. For each location, the sum of the maximum entropies of urban meteorology and the sum of maximum entropies of pollutants, S_K,MV and S_K,P, are calculated and plotted against h, generating six different curves for each of the three data-recording periods. The tangent of each figure is determined and multiplied by the average temperature value of each location according to the period, obtaining, in a first approximation, the magnitude of the entropic forces associated with urban meteorology (F_K,MV) and pollutants (F_K,P), respectively. It is verified that all the time series have a fractal dimension, and that the fractal dimension of the pollutants shows growth towards the most recent period. The entropic dynamics of pollutants is more dominant with respect to the dynamics of urban meteorology. It is found that this greater influence favors subdiffusion processes (α < 1), which is consistent with a geographic basin with lower atmospheric resilience. By applying a heavy-tailed probability density analysis, it is shown that atmospheric pollution states are more likely, generating an extreme environment that favors the growth of respiratory diseases and low relative humidity, makes heat islands more stable over time, and strengthens heat waves. Full article

16 pages, 7443 KB  
Article
Identification and Temporal Distribution of Typical Rainfall Types Based on K-Means++ Clustering and Probability Distribution Analysis
by Qiting Zhang and Jinglin Qian
Hydrology 2025, 12(4), 88; https://doi.org/10.3390/hydrology12040088 - 14 Apr 2025
Viewed by 915
Abstract
Characterizing rainfall events with recurrence periods of 1–5 years is crucial for urban flood risk assessment and water management system design. Traditional hydrological frequency analysis methods inadequately describe the temporal structure and intensity distribution of rainfall. In this study, we analyzed 1580 independent rainfall events in central Hangzhou (1950–2023) using PCA dimension reduction and K-means++ clustering to investigate typical rainfall types across different recurrence periods. The integrated approach effectively captures temporal characteristics while reducing dimensionality and improving clustering efficiency. Our results indicate that concentrated single-peak rainfall with short duration and a mid-to-late peak dominates the region, with longer recurrence periods exhibiting higher intensity, shorter duration, and greater temporal concentration. Furthermore, cumulative distribution function (CDF) and probability density function (PDF) analyses were conducted on these typical rainfall types, quantifying their distributional characteristics and yielding precise mathematical expressions. These standardized rainfall curves provide direct applications for engineering design and hydrological modeling, enabling more accurate flood prediction and mitigation strategies for Hangzhou’s urban infrastructure. Full article
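The clustering step above can be sketched in pure NumPy (the PCA stage is omitted for brevity). K-means++ seeding picks each new center with probability proportional to squared distance from the existing centers; the synthetic early-peak/late-peak hyetographs below are illustrative, not the Hangzhou data:

```python
import numpy as np

rng = np.random.default_rng(7)

def kmeans_pp(X, k, n_iter=50, rng=rng):
    """K-means clustering with k-means++ seeding."""
    # --- k-means++ initialisation: distance-weighted center choice ---
    centers = [X[rng.integers(len(X))]]
    for _ in range(k - 1):
        d2 = np.min([((X - c) ** 2).sum(1) for c in centers], axis=0)
        centers.append(X[rng.choice(len(X), p=d2 / d2.sum())])
    centers = np.array(centers)
    # --- Lloyd iterations: assign to nearest center, recompute means ---
    for _ in range(n_iter):
        labels = ((X[:, None, :] - centers[None]) ** 2).sum(-1).argmin(1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(0)
    return labels, centers

# Synthetic normalised hyetographs: early-peak vs late-peak rainfall events.
t = np.linspace(0, 1, 24)
early = np.exp(-((t - 0.25) / 0.1) ** 2) + rng.normal(0, 0.03, (40, 24))
late = np.exp(-((t - 0.75) / 0.1) ** 2) + rng.normal(0, 0.03, (40, 24))
X = np.vstack([early, late])
labels, _ = kmeans_pp(X, k=2)
print(labels[:40].mean(), labels[40:].mean())  # the two event types separate
```

The distance-weighted seeding is what makes k-means++ markedly less sensitive to initialisation than plain k-means on temporal-pattern data like this.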

15 pages, 4546 KB  
Article
Optimizing Methanol Flow Rate for Enhanced Semi-Passive Mini-Direct Methanol Fuel Cell Performance
by Laura Faria and Vicenta María Barragán
Fuels 2025, 6(2), 21; https://doi.org/10.3390/fuels6020021 - 24 Mar 2025
Viewed by 612
Abstract
Direct methanol fuel cells (DMFCs) typically operate in passive mode, where methanol is distributed across the membrane electrode assembly through natural diffusion. Usual methanol concentrations range from 1% to 5% by weight (wt.%), although this can vary depending on the specific configuration and application. In this work, the effect of an additional pumping system to supply the methanol has been analyzed by varying the methanol flow rate within the pump’s range. To this end, a parametric experimental study was carried out to study the influence of temperature (25–40 °C), concentration (0.15–6 wt.% methanol in water), and the flow rate of methanol (1.12–8.65 g/s) on the performance of a single mini-direct methanol fuel cell (DMFC) operating in semi-passive mode with a passive cathode and an active anode. Open circuit voltage, maximum power density, and cell efficiency were analyzed. To this purpose, open circuit voltage and current–voltage curves were measured in different experimental conditions. Results indicate that temperature is the most decisive parameter to increase DMFC performance. For all methanol concentrations and flow rates, performance improves with higher operating temperatures. However, the impact of the concentration and flow rate depends on the other parameters. The optimal operating concentration was 1 wt.%. At this concentration, a maximum power of 14.2 mW was achieved at 40 °C with a methanol flow of 7.6 g/s. Under these same conditions, the cell also reached its maximum efficiency of 23%. The results show that switching from passive to semi-passive mode generally increases open-circuit voltage and maximum power, thus improving fuel cell performance, likely due to the enhanced uniform distribution of the reactant in semi-passive mode. However, further increases in flow rate led to a decrease in performance, probably due to the methanol crossover effect. An optimal methanol flow rate is observed, depending on methanol flow temperature and concentration. Full article

15 pages, 2161 KB  
Article
Enhancing Lymph Node Metastasis Risk Prediction in Early Gastric Cancer Through the Integration of Endoscopic Images and Real-World Data in a Multimodal AI Model
by Donghoon Kang, Han Jo Jeon, Jie-Hyun Kim, Sang-Il Oh, Ye Seul Seong, Jae Young Jang, Jung-Wook Kim, Joon Sung Kim, Seung-Joo Nam, Chang Seok Bang and Hyuk Soon Choi
Cancers 2025, 17(5), 869; https://doi.org/10.3390/cancers17050869 - 3 Mar 2025
Viewed by 1144
Abstract
Objectives: The accurate prediction of lymph node metastasis (LNM) and lymphovascular invasion (LVI) is crucial for determining treatment strategies for early gastric cancer (EGC). This study aimed to develop and validate a deep learning-based clinical decision support system (CDSS) to predict LNM including LVI in EGC using real-world data. Methods: A deep learning-based CDSS was developed by integrating endoscopic images, demographic data, biopsy pathology, and CT findings from the data of 2927 patients with EGC across five institutions. We compared a transformer-based model to an image-only (basic convolutional neural network (CNN)) model and a multimodal classification (CNN with random forest) model. Internal testing was conducted on 449 patients from the five institutions, and external validation was performed on 766 patients from two other institutions. Model performance was assessed using the area under the receiver operating characteristic curve (AUC), probability density function, and clinical utility curve. Results: In the training, internal validation, and two external validation cohorts, LNM/LVI was observed in 379 (12.95%), 49 (10.91%), 15 (9.09%), and 41 (6.82%) patients, respectively. The transformer-based model achieved an AUC of 0.9083, sensitivity of 85.71%, and specificity of 90.75%, outperforming the CNN (AUC 0.5937) and the CNN with random forest (AUC 0.7548). High sensitivity and specificity were maintained in internal and external validation. The transformer model distinguished 91.8% of patients with LNM in the internal validation dataset, and 94.0% and 89.1% in the two external datasets. Conclusions: We propose a deep learning-based CDSS for predicting LNM/LVI in EGC by integrating real-world data, potentially guiding treatment strategies in clinical settings. Full article
(This article belongs to the Collection Artificial Intelligence and Machine Learning in Cancer Research)
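The AUC values reported for the three models are, equivalently, the probability that a randomly chosen positive case is scored above a randomly chosen negative one (the Mann–Whitney U interpretation). A minimal sketch of that computation, on hypothetical labels and scores rather than the study's predictions:

```python
# Minimal sketch: AUC as the Mann-Whitney U statistic, i.e. the fraction of
# (positive, negative) pairs ranked correctly (ties counted as 0.5).
# Labels/scores below are hypothetical, not the study's model outputs.

def auc(labels, scores):
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

example_auc = auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])
```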

21 pages, 4678 KB  
Article
Guided Firework Algorithm (GFWA) Optimization Research on Viscoelastic Damper (VED) Structure Based on Vulnerability Evaluation
by Xianjie Wang, Chunyu Lei, Mengjie Xiang, Donghai Jiang and Xin Wang
Buildings 2025, 15(5), 712; https://doi.org/10.3390/buildings15050712 - 24 Feb 2025
Viewed by 638
Abstract
The vulnerability curve serves as a precise evaluation metric for structural seismic performance and a critical component in earthquake loss assessment. In this study, the orthogonal expansion method for random ground motion generation is integrated with the probability density evolution method (PDEM) to address the dynamic reliability and vulnerability of general multi-degree-of-freedom (MDOF) nonlinear structures. By employing dynamic reliability as a constraint and vulnerability as an evaluation index, the guided firework algorithm (GFWA) is introduced to optimize the design of viscoelastic damper (VED) structural systems. To validate the proposed methods, several examples are presented, including the generation of artificial waves, the vulnerability analysis of a five-story reinforced concrete (RC) structure, and a comparative study of GFWA and genetic algorithm (GA) optimization of VED parameters to assess optimization efficiency. The results demonstrate that the proposed vulnerability method achieves satisfactory accuracy and is well suited for evaluating optimized damper-structure designs. Furthermore, GFWA significantly outperforms GA in terms of efficiency and feasibility, offering a promising approach for the optimization design of architectural structures. Full article
(This article belongs to the Section Building Structures)
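Vulnerability (fragility) curves of the kind used as the evaluation index are commonly parameterized as a lognormal CDF in the ground-motion intensity measure. The paper derives its curves via the PDEM; the closed-form lognormal shape below is only a generic illustrative stand-in, with made-up median-capacity and dispersion values.

```python
import math

# Generic lognormal fragility curve: P(damage state reached | IM = im).
# theta is the median capacity, beta the lognormal dispersion.
# Illustrative stand-in only -- NOT the paper's PDEM-based procedure.
def fragility(im, theta, beta):
    z = math.log(im / theta) / beta
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# At the median capacity, the exceedance probability is exactly 50%.
p_at_median = fragility(0.4, theta=0.4, beta=0.5)
```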
