Search Results (5,418)

Search Parameters:
Keywords = cost rate model

32 pages, 12099 KB  
Article
Hardware–Software System for Biomass Slow Pyrolysis: Characterization of Solid Yield via Optimization Algorithms
by Ismael Urbina-Salas, David Granados-Lieberman, Juan Pablo Amezquita-Sanchez, Martin Valtierra-Rodriguez and David Aaron Rodriguez-Alejandro
Computers 2025, 14(10), 426; https://doi.org/10.3390/computers14100426 - 5 Oct 2025
Abstract
Biofuels represent a sustainable alternative that supports global energy development without compromising environmental balance. This work introduces a novel hardware–software platform for the experimental characterization of biomass solid yield during the slow pyrolysis process, integrating physical experimentation with advanced computational modeling. The hardware consists of a custom-designed pyrolizer equipped with temperature and weight sensors, a dedicated control unit, and a user-friendly interface. On the software side, a two-step kinetic model was implemented and coupled with three optimization algorithms, i.e., Particle Swarm Optimization (PSO), Genetic Algorithm (GA), and Nelder–Mead (N-M), to estimate the Arrhenius kinetic parameters governing biomass degradation. Slow pyrolysis experiments were performed on wheat straw (WS), pruning waste (PW), and biosolids (BS) at a heating rate of 20 °C/min within 250–500 °C, with a 120 min residence time favoring biochar production. The comparative analysis shows that the N-M method achieved the highest accuracy (100% fit in estimating solid yield), with a convergence time of 4.282 min, while GA converged faster (1.675 min), with a fit of 99.972%, and PSO had the slowest convergence time at 6.409 min and a fit of 99.943%. These results highlight both the versatility of the system and the potential of optimization techniques to provide accurate predictive models of biomass decomposition as a function of time and temperature. Overall, the main contributions of this work are the development of a low-cost, custom MATLAB-based experimental platform and the tailored implementation of optimization algorithms for kinetic parameter estimation across different biomasses, together providing a robust framework for biomass pyrolysis characterization. Full article
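The Arrhenius parameter estimation step described above can be sketched in a minimal, hedged form. This is not the authors' two-step MATLAB model or their PSO/GA/Nelder–Mead setup; it fits A and Ea of a one-step rate law k = A·exp(−Ea/RT) by log-linearization, on synthetic data spanning the paper's 250–500 °C window, with made-up parameter values.

```python
import math

R = 8.314  # universal gas constant, J/(mol*K)

def arrhenius_fit(temps_K, ks):
    # Linearize k = A*exp(-Ea/(R*T)):  ln k = ln A - (Ea/R) * (1/T),
    # then recover A and Ea from an ordinary least-squares line.
    xs = [1.0 / T for T in temps_K]
    ys = [math.log(k) for k in ks]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
            sum((x - xbar) ** 2 for x in xs)
    intercept = ybar - slope * xbar
    Ea = -slope * R          # activation energy, J/mol
    A = math.exp(intercept)  # pre-exponential factor, 1/s
    return A, Ea

# Synthetic rate constants from assumed parameters (A = 1e7 1/s, Ea = 90 kJ/mol)
A_true, Ea_true = 1e7, 90e3
temps = [523.15, 573.15, 623.15, 673.15, 723.15, 773.15]  # 250-500 °C in K
ks = [A_true * math.exp(-Ea_true / (R * T)) for T in temps]
A_fit, Ea_fit = arrhenius_fit(temps, ks)
```

On noiseless synthetic data the linearized fit recovers the assumed parameters exactly; with real thermogravimetric data a nonlinear optimizer such as Nelder–Mead, as in the paper, is the more robust choice.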
24 pages, 6085 KB  
Article
Heat Pump Optimization—Comparative Study of Different Optimization Algorithms and Heat Exchanger Area Approximations
by Eivind Brodal
Energies 2025, 18(19), 5270; https://doi.org/10.3390/en18195270 - 3 Oct 2025
Abstract
More energy efficient heat pumps can be designed if the industry is able to identify reliable optimization schemes that can predict how a fixed amount of money is best spent on the different individual components. For example, how to optimally design and size the different heat exchangers (HEs) in a heat pump with respect to cost and performance. In this work, different optimization algorithms and HE area integral approximations are compared for heat pumps with two and three HEs, with or without ejectors. Since the main goal is to identify optimal numerical schemes, not optimal designs, heat transfer is simplified, assuming a constant U-value for all HEs, which reduces the computational work significantly. Results show that high-order HE area approximations are 10–400 times faster than conventional trapezoidal and adaptive integral methods. High-order schemes with 4–5 grid points (N) obtained 80–100% optimization success rates. For subcritical processes, the LMTD method produced accurate results with N ≤ 5, but such schemes are unreliable and difficult to extend to real HE models with non-constant U. Results also show that constrained gradient-based optimizations are 10 times faster than particle swarm, and that conventional GA optimizations are extremely inefficient. This study therefore recommends applying high-order HE area approximations and gradient-based optimization methods when developing accurate optimization schemes for the industry that include realistic heat transfer coefficients. Full article
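The speed advantage of high-order quadrature over trapezoidal rules for the HE area integral can be illustrated with a hedged sketch. Assuming a constant U-value (as the paper does) and a made-up linear temperature-difference profile, the area A = ∫ dQ/(U·ΔT) computed by a 5-point Gauss–Legendre rule matches a 1000-interval trapezoid while using a tiny fraction of the function evaluations. The U-value and ΔT profile below are assumptions, not the paper's cases.

```python
import math

def trapezoid(f, a, b, n):
    # Composite trapezoidal rule with n intervals (n + 1 evaluations)
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return s * h

# 5-point Gauss-Legendre nodes and weights on [-1, 1]
GL5 = [(-0.9061798459386640, 0.2369268850561891),
       (-0.5384693101056831, 0.4786286704993665),
       ( 0.0,                0.5688888888888889),
       ( 0.5384693101056831, 0.4786286704993665),
       ( 0.9061798459386640, 0.2369268850561891)]

def gauss5(f, a, b):
    # High-order rule: only 5 evaluations of the integrand
    c, h = 0.5 * (a + b), 0.5 * (b - a)
    return h * sum(w * f(c + h * x) for x, w in GL5)

U = 500.0                     # W/(m^2*K), assumed constant

def dT(q):
    return 12.0 + 8.0 * q     # assumed linear ΔT along the duty fraction q

def integrand(q):
    return 1.0 / (U * dT(q))  # dA/dq for unit duty

# Closed form for the linear profile: ln((12+8)/12) / (8*U)
exact = math.log(20.0 / 12.0) / (8.0 * U)
```

For this smooth integrand the 5-point rule is already at round-off-level accuracy, which mirrors the abstract's point that a few grid points suffice once a high-order scheme is used.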
11 pages, 2360 KB  
Article
Temperature Hysteresis Calibration Method of MEMS Accelerometer
by Hak Ju Kim and Hyoung Kyoon Jung
Sensors 2025, 25(19), 6131; https://doi.org/10.3390/s25196131 - 3 Oct 2025
Abstract
Micro-electromechanical system (MEMS) sensors are widely used in various navigation applications because of their cost-effectiveness, low power consumption, and compact size. However, their performance is often degraded by temperature hysteresis, which arises from internal temperature gradients. This paper presents a calibration method that corrects temperature hysteresis without requiring any additional hardware or modifications to the existing MEMS sensor design. By analyzing the correlation between the external temperature change rate and hysteresis errors, a mathematical calibration model is derived. The method is experimentally validated on MEMS accelerometers, with results showing an up to 63% reduction in hysteresis errors. We further evaluate bias repeatability, scale factor repeatability, nonlinearity, and Allan variance to assess the broader impacts of the calibration. Although minor trade-offs in noise characteristics are observed, the overall hysteresis performance is substantially improved. The proposed approach offers a practical and efficient solution for enhancing MEMS sensor accuracy in dynamic thermal environments. Full article
(This article belongs to the Section Navigation and Positioning)
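The calibration idea above, correlating the external temperature change rate with the hysteresis error, can be sketched in a hedged form. The linear rate-coefficient model and every number below are assumptions for illustration, not the paper's derived calibration model.

```python
def fit_rate_coefficient(temp_rates, bias_errors):
    # Least-squares slope through the origin for the assumed model
    # bias_error ≈ k * (dT/dt)
    num = sum(r * e for r, e in zip(temp_rates, bias_errors))
    den = sum(r * r for r in temp_rates)
    return num / den

rates = [-2.0, -1.0, 0.0, 1.0, 2.0]   # external temperature change rate, °C/min
errors = [-0.8, -0.4, 0.0, 0.4, 0.8]  # synthetic hysteresis bias error, mg

k = fit_rate_coefficient(rates, errors)
corrected = [e - k * r for e, r in zip(errors, rates)]  # model-based correction
```

On this synthetic data the correction removes the rate-correlated error entirely; the paper reports an up to 63% reduction on real accelerometers, where the residual reflects noise and higher-order effects.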
26 pages, 14595 KB  
Article
Practical Application of Passive Air-Coupled Ultrasonic Acoustic Sensors for Wheel Crack Detection
by Aashish Shaju, Nikhil Kumar, Giovanni Mantovani, Steve Southward and Mehdi Ahmadian
Sensors 2025, 25(19), 6126; https://doi.org/10.3390/s25196126 - 3 Oct 2025
Abstract
Undetected cracks in railroad wheels pose significant safety and economic risks, while current inspection methods are limited by cost, coverage, or contact requirements. This study explores the use of passive, air-coupled ultrasonic acoustic (UA) sensors for detecting wheel damage on stationary or moving wheels. Two controlled datasets of wheelsets, one with clear damage and another with early, service-induced defects, were tested using hammer impacts. An automated system identified high-energy bursts and extracted features in both time and frequency domains, such as decay rate, spectral centroid, and entropy. The results demonstrate the effectiveness of UAE (ultrasonic acoustic emission) techniques through Kernel Density Estimation (KDE) visualization, hypothesis testing with effect sizes, and Receiver Operating Characteristic (ROC) analysis. The decay rate consistently proved to be the most effective discriminator, achieving near-perfect classification of severely damaged wheels and maintaining meaningful separation for early defects. Spectral features provided additional information but were less decisive. The frequency spectrum characteristics were effective across both axial and radial sensor orientations, with ultrasonic frequencies (20–80 kHz) offering higher spectral fidelity than sonic frequencies (1–20 kHz). This work establishes a validated “ground-truth” signature essential for developing a practical wayside detection system. The findings guide a targeted engineering approach to physically isolate this known signature from ambient noise and develop advanced models for reliable in-motion detection. Full article
(This article belongs to the Special Issue Sensing and Imaging for Defect Detection: 2nd Edition)
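The decay-rate feature that the study found most discriminative can be illustrated with a hedged sketch: for a burst whose envelope decays as exp(−a·t), the decay constant a is the negative slope of the log-envelope. The sampling rate and the synthetic envelope below are assumptions, not the study's acquisition settings.

```python
import math

def decay_rate(envelope, fs):
    # Fit ln(envelope) vs. time by least squares; for exp(-a*t) the slope is -a
    ys = [math.log(v) for v in envelope]
    n = len(ys)
    ts = [i / fs for i in range(n)]
    tbar, ybar = sum(ts) / n, sum(ys) / n
    slope = sum((t - tbar) * (y - ybar) for t, y in zip(ts, ys)) / \
            sum((t - tbar) ** 2 for t in ts)
    return -slope  # decay constant, 1/s

fs = 200_000  # assumed sampling rate in Hz, covering the 20-80 kHz band
burst = [math.exp(-5000.0 * i / fs) for i in range(400)]  # synthetic burst envelope
```

A cracked wheel rings down faster (larger a) than an intact one, which is why this single scalar separated damaged from healthy wheels so cleanly in the study's ROC analysis.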
27 pages, 4873 KB  
Article
The Streamer Selection Strategy for Live Streaming Sales: Genuine, Virtual, or Hybrid
by Delong Jin
J. Theor. Appl. Electron. Commer. Res. 2025, 20(4), 273; https://doi.org/10.3390/jtaer20040273 - 3 Oct 2025
Abstract
Research Problem and Gap: Live streaming sales rely heavily on streamers, with both genuine and AI-generated virtual streamers gaining popularity. However, these streamer types possess contrasting capabilities. Genuine streamers are superior at building trust and reducing product valuation uncertainty but have limited reach, while virtual streamers excel at broad audience engagement but are less effective at mitigating uncertainty, often leading to higher product return rates. This trade-off creates a critical strategic gap; that is, brand firms lack clear guidance on whether to invest in genuine or virtual streamers or adopt a hybrid approach for their live channels. Objective and Methods: This study addresses this gap by developing a theoretical analytical model to determine a monopolistic brand firm’s optimal streamer strategy among three options: using only a genuine streamer, only a virtual streamer, or a combination of the two (hybrid approach). The researchers model consumer utility, factoring in uncertainty and the streamers’ differential impact on reach, to derive optimal decisions on pricing and streamer selection. Results and Findings: The analysis yields several key findings with direct managerial implications. First, while a hybrid strategy leverages the complementary strengths of both streamer types, its success depends on employing high-quality streamers; in other words, this strategy does not justify settling for inferior talent of either type. Second, employing a virtual streamer requires a moderate price reduction to compensate for higher consumer uncertainty and prevent high profit-eroding return rates. Third, a pure strategy (only genuine or only virtual) is optimal only when that streamer type has a significant cost advantage. Otherwise, the hybrid strategy tends to be the most profitable. 
Moreover, higher product return costs directly diminish the viability of virtual streamers, making a genuine or hybrid strategy more attractive for products with expensive return processes. Conclusions: The results provide a clear framework for brand firms—that is, the choice of streamer is a strategic decision intertwined with pricing and product return costs. Firms should pursue a hybrid strategy not as a compromise but as a premium approach, use targeted pricing to mitigate the risk of virtual streamers, and avoid virtual options altogether for products with high return costs. Full article
38 pages, 5753 KB  
Article
EfficientNet-B3-Based Automated Deep Learning Framework for Multiclass Endoscopic Bladder Tissue Classification
by A. A. Abd El-Aziz, Mahmood A. Mahmood and Sameh Abd El-Ghany
Diagnostics 2025, 15(19), 2515; https://doi.org/10.3390/diagnostics15192515 - 3 Oct 2025
Abstract
Background: Bladder cancer (BLCA) is a malignant growth that originates from the urothelial lining of the urinary bladder. Diagnosing BLCA is complex due to the variety of tumor features and its heterogeneous nature, which leads to significant morbidity and mortality. Understanding tumor histopathology is crucial for developing tailored therapies and improving patient outcomes. Objectives: Early diagnosis and treatment are essential to lower the mortality rate associated with bladder cancer. Manual classification of muscular tissues by pathologists is labor-intensive and relies heavily on experience, which can result in interobserver variability due to the similarities in cancerous cell morphology. Traditional methods for analyzing endoscopic images are often time-consuming and resource-intensive, making it difficult to efficiently identify tissue types. Therefore, there is a strong demand for a fully automated and reliable system for classifying smooth muscle images. Methods: This paper proposes a deep learning (DL) technique utilizing the EfficientNet-B3 model and a five-fold cross-validation method to assist in the early detection of BLCA. This model enables timely intervention and improved patient outcomes while streamlining the diagnostic process, ultimately reducing both time and costs for patients. We conducted experiments using the Endoscopic Bladder Tissue Classification (EBTC) dataset for multiclass classification tasks. The dataset was preprocessed using resizing and normalization methods to ensure consistent input. In-depth experiments were carried out utilizing the EBTC dataset, along with ablation studies to evaluate the best hyperparameters. A thorough statistical analysis and comparisons with five leading DL models—ConvNeXtBase, DenseNet-169, MobileNet, ResNet-101, and VGG-16—showed that the proposed model outperformed the others. 
Conclusions: The EfficientNet-B3 model achieved impressive results: accuracy of 99.03%, specificity of 99.30%, precision of 97.95%, recall of 96.85%, and an F1-score of 97.36%. These findings indicate that the EfficientNet-B3 model demonstrates significant potential in accurately and efficiently diagnosing BLCA. Its high performance and ability to reduce diagnostic time and cost make it a valuable tool for clinicians in the field of oncology and urology. Full article
(This article belongs to the Special Issue AI and Big Data in Medical Diagnostics)
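The performance figures reported in this abstract (accuracy, specificity, precision, recall, F1) all derive from confusion-matrix counts. A hedged sketch of those standard definitions, using made-up counts rather than the EBTC results:

```python
def classification_metrics(tp, fp, tn, fn):
    # Standard binary-classification metrics from confusion-matrix counts
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)              # also called sensitivity
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return {"precision": precision, "recall": recall,
            "specificity": specificity, "accuracy": accuracy, "f1": f1}

# Hypothetical confusion counts for one class, not the paper's data
m = classification_metrics(tp=90, fp=10, tn=80, fn=20)
```

For the multiclass EBTC task these would typically be macro- or micro-averaged over the tissue classes; the formulas per class are the ones above.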
21 pages, 1658 KB  
Article
Utilization of Eye-Tracking Metrics to Evaluate User Experiences—Technology Description and Preliminary Study
by Julia Falkowska, Janusz Sobecki and Michał Falkowski
Sensors 2025, 25(19), 6101; https://doi.org/10.3390/s25196101 - 3 Oct 2025
Abstract
This study examines the feasibility of applying eye tracking as a rigorous method for assessing user experience in web design. A controlled experiment was conducted with 102 participants, who interacted with both guideline-compliant websites and systematically degraded variants violating specific principles of Material Design 2. Eleven websites were presented in A/B conditions with modifications targeting three design dimensions: contrast, link clarity, and iconography. Eye-tracking indicators—time to first fixation, fixation duration, fixation count, and time to first click—are examined in conjunction with subjective ratings and expert assessments. Mixed-effects models are employed to ensure robust statistical inference. The results demonstrate that reduced contrast and unclear links consistently impair user performance and increase search effort, whereas the influence of icons is more context-dependent. The study contributes by quantifying the usability costs of guideline deviations and by validating a triangulated evaluation framework that combines objective, subjective, and expert data. From a practical perspective, the findings support the integration of eye tracking into A/B testing and guideline validation, providing design teams with empirical evidence to inform and prioritize improvements in user interfaces. Full article
(This article belongs to the Section Intelligent Sensors)
31 pages, 11924 KB  
Article
Enhanced 3D Turbulence Models Sensitivity Assessment Under Real Extreme Conditions: Case Study, Santa Catarina River, Mexico
by Mauricio De la Cruz-Ávila and Rosanna Bonasia
Hydrology 2025, 12(10), 260; https://doi.org/10.3390/hydrology12100260 - 2 Oct 2025
Abstract
This study compares enhanced turbulence models in a natural river channel 3D simulation under extreme hydrometeorological conditions. Using ANSYS Fluent 2024 R1 and the Volume of Fluid scheme, five RANS closures were evaluated: realizable k–ε, Renormalization-Group k–ε, Shear Stress Transport k–ω, Generalized k–ω, and Baseline-Explicit Algebraic Reynolds Stress model. A segment of the Santa Catarina River in Monterrey, Mexico, defined the computational domain, which produced high-energy, non-repeatable real-world flow conditions where hydrometric data were not yet available. Empirical validation was conducted using surface velocity estimations obtained through high-resolution video analysis. Systematic bias was minimized through mesh-independent validation (<1% error) and a benchmarked reference closure, ensuring a fair basis for inter-model comparison. All models were realized on a validated polyhedral mesh with consistent boundary conditions, evaluating performance in terms of mean velocity, turbulent viscosity, strain rate, and vorticity. Mean velocity predictions matched the empirical value of 4.43 [m/s]. The Baseline model offered the highest overall fidelity in turbulent viscosity structure (up to 43 [kg/m·s]) and anisotropy representation. Simulation runtimes ranged from 10 to 16 h, reflecting a computational cost that increases with model complexity but justified by improved flow anisotropy representation. Results show that all models yielded similar mean flow predictions within a narrow error margin. However, they differed notably in resolving low-velocity zones, turbulence intensity, and anisotropy within a purely hydrodynamic framework that does not include sediment transport. Full article
13 pages, 1292 KB  
Article
Development and Internal Validation of Machine Learning Algorithms to Predict 30-Day Readmission in Patients Undergoing a C-Section: A Nation-Wide Analysis
by Audrey Andrews, Nadia Islam, George Bcharah, Hend Bcharah and Misha Pangasa
J. Pers. Med. 2025, 15(10), 476; https://doi.org/10.3390/jpm15100476 - 2 Oct 2025
Abstract
Background/Objectives: Cesarean section (C-section) is a common surgical procedure associated with an increased risk of 30-day postpartum hospital readmissions. This study utilized machine learning (ML) to predict readmissions using a nationwide database. Methods: A retrospective analysis of the National Surgical Quality Improvement Project (2012–2022) included 54,593 patients who underwent C-sections. Random Forests (RF) and Extreme Gradient Boosting (XGBoost) models were developed and compared to logistic regression (LR) using demographic, preoperative, and perioperative data. Results: Of the cohort, 1306 (2.39%) patients were readmitted. Readmitted patients had higher rates of African American race (17.99% vs. 9.83%), diabetes (11.03% vs. 8.19%), and hypertension (11.49% vs. 4.68%) (p < 0.001). RF achieved the highest performance (AUC = 0.737, sensitivity = 72.03%, specificity = 61.33%), and a preoperative-only RF model achieved a sensitivity of 83.14%. Key predictors included age, BMI, operative time, white blood cell count, and hematocrit. Conclusions: ML effectively predicts C-section readmissions, supporting early identification and interventions to improve patient outcomes and reduce healthcare costs. Full article
(This article belongs to the Special Issue Advances in Prenatal Diagnosis and Maternal Fetal Medicine)
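The AUC reported for the readmission models has a simple rank interpretation that can be sketched in a hedged form: it is the probability that a randomly chosen readmitted patient receives a higher risk score than a randomly chosen non-readmitted one (the Mann–Whitney view of ROC AUC). The toy labels and scores below are hypothetical.

```python
def auc(labels, scores):
    # Mann-Whitney estimate of ROC AUC: fraction of positive/negative
    # score pairs where the positive outscores the negative (ties count half)
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [0, 0, 1, 1]                # 1 = readmitted (hypothetical)
scores = [0.10, 0.40, 0.35, 0.80]    # model risk scores (hypothetical)
```

An AUC of 0.737, as the RF model achieved, means roughly three out of four such pairs are ranked correctly.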
14 pages, 879 KB  
Article
Predicting Factors Associated with Extended Hospital Stay After Postoperative ICU Admission in Hip Fracture Patients Using Statistical and Machine Learning Methods: A Retrospective Single-Center Study
by Volkan Alparslan, Sibel Balcı, Ayetullah Gök, Can Aksu, Burak İnner, Sevim Cesur, Hadi Ufuk Yörükoğlu, Berkay Balcı, Pınar Kartal Köse, Veysel Emre Çelik, Serdar Demiröz and Alparslan Kuş
Healthcare 2025, 13(19), 2507; https://doi.org/10.3390/healthcare13192507 - 2 Oct 2025
Abstract
Background: Hip fractures are common in the elderly and often require ICU admission post-surgery due to high ASA scores and comorbidities. Length of hospital stay after ICU is a crucial indicator affecting patient recovery, complication rates, and healthcare costs. This study aimed to develop and validate a machine learning-based model to predict the factors associated with extended hospital stay (>7 days from surgery to discharge) in hip fracture patients requiring postoperative ICU care. The findings could help clinicians optimize ICU bed utilization and improve patient management strategies. Methods: In this retrospective single-centre cohort study conducted in a tertiary ICU in Turkey (2017–2024), 366 ICU-admitted hip fracture patients were analysed. Conventional statistical analyses were performed using SPSS 29, including Mann–Whitney U and chi-squared tests. To identify independent predictors associated with extended hospital stay, Least Absolute Shrinkage and Selection Operator (LASSO) regression was applied for variable selection, followed by multivariate binary logistic regression analysis. In addition, machine learning models (binary logistic regression, random forest (RF), extreme gradient boosting (XGBoost) and decision tree (DT)) were trained to predict the likelihood of extended hospital stay, defined as the total number of days from the date of surgery until hospital discharge, including both ICU and subsequent ward stay. Model performance was evaluated using AUROC, F1 score, accuracy, precision, recall, and Brier score. SHAP (SHapley Additive exPlanations) values were used to interpret feature contributions in the XGBoost model. Results: The XGBoost model showed the best performance, except for precision. The XGBoost model gave an AUROC of 0.80, precision of 0.67, recall of 0.92, F1 score of 0.78, accuracy of 0.71 and Brier score of 0.18. 
According to SHAP analysis, time from fracture to surgery, hypoalbuminaemia and ASA score were the variables that most affected the length of hospital stay. Conclusions: The developed machine learning model successfully classified hip fracture patients into short and extended hospital stay groups following postoperative intensive care. This classification model has the potential to aid in patient flow management, resource allocation, and clinical decision support. External validation will further strengthen its applicability across different settings. Full article
25 pages, 3236 KB  
Article
A Wearable IoT-Based Measurement System for Real-Time Cardiovascular Risk Prediction Using Heart Rate Variability
by Nurdaulet Tasmurzayev, Bibars Amangeldy, Timur Imankulov, Baglan Imanbek, Octavian Adrian Postolache and Akzhan Konysbekova
Eng 2025, 6(10), 259; https://doi.org/10.3390/eng6100259 - 2 Oct 2025
Abstract
Cardiovascular diseases (CVDs) remain the leading cause of global mortality, with ischemic heart disease (IHD) being the most prevalent and deadly subtype. The growing burden of IHD underscores the urgent need for effective early detection methods that are scalable and non-invasive. Heart Rate Variability (HRV), a non-invasive physiological marker influenced by the autonomic nervous system (ANS), has shown clinical relevance in predicting adverse cardiac events. This study presents a photoplethysmography (PPG)-based Zhurek IoT device, a custom-developed Internet of Things (IoT) device for non-invasive HRV monitoring. The platform’s effectiveness was evaluated using HRV metrics from electrocardiography (ECG) and PPG signals, with machine learning (ML) models applied to the task of early IHD risk detection. ML classifiers were trained on HRV features, and the Random Forest (RF) model achieved the highest classification accuracy of 90.82%, precision of 92.11%, and recall of 91.00% when tested on real data. The model demonstrated excellent discriminative ability with an area under the ROC curve (AUC) of 0.98, reaching a sensitivity of 88% and specificity of 100% at its optimal threshold. The preliminary results suggest that data collected with the “Zhurek” IoT devices are promising for the further development of ML models for IHD risk detection. This study aimed to address the limitations of previous work, such as small datasets and a lack of validation, by utilizing real and synthetically augmented data (conditional tabular GAN (CTGAN)), as well as multi-sensor input (ECG and PPG). The findings of this pilot study can serve as a starting point for developing scalable, remote, and cost-effective screening systems. The further integration of wearable devices and intelligent algorithms is a promising direction for improving routine monitoring and advancing preventative cardiology. Full article
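The HRV features that feed the ML classifiers above include standard time-domain metrics. A hedged sketch of two canonical ones, SDNN and RMSSD, computed from RR intervals (the population-variance convention for SDNN and all interval values below are assumptions, not the paper's feature set):

```python
import math

def sdnn(rr_ms):
    # Standard deviation of NN (normal-to-normal) intervals, population form
    m = sum(rr_ms) / len(rr_ms)
    return math.sqrt(sum((x - m) ** 2 for x in rr_ms) / len(rr_ms))

def rmssd(rr_ms):
    # Root mean square of successive differences between adjacent NN intervals
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

rr = [800.0, 810.0, 790.0, 805.0, 795.0]  # synthetic RR intervals, ms
```

SDNN reflects overall variability while RMSSD emphasizes beat-to-beat (parasympathetic) variation, which is why both commonly appear as inputs to HRV-based risk models.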
21 pages, 8233 KB  
Article
Integrated Optimization of Ground Support Systems and UAV Task Planning for Efficient Forest Fire Inspection
by Ze Liu, Zhichao Shi, Wei Liu, Lu Zhang and Rui Wang
Drones 2025, 9(10), 684; https://doi.org/10.3390/drones9100684 - 1 Oct 2025
Abstract
With the increasing frequency and intensity of forest fires driven by climate change and human activities, efficient detection and rapid response have become critical for forest fire prevention. Effective fire detection, swift response, and timely rescue are vital for forest firefighting efforts. This paper proposes an unmanned aerial vehicle (UAV)-based forest fire inspection system that integrates a ground support system (GSS), aiming to enhance automation and flexibility in inspection tasks. A three-layer mixed-integer linear programming model is developed: the first layer focuses on the site selection and capacity planning of the GSS; the second layer defines the coverage scope of different GSS units; and the third layer plans the inspection routes of UAVs and coordinates multi-UAV collaborative tasks. For planning UAV patrol routes and collaborative tasks, a goal-driven greedy algorithm (GDGA) based on traditional greedy methods is proposed. Simulation experiments based on a real forest fire case in Turkey demonstrate that the proposed model reduces the total annual costs by 28.1% and 16.1% compared to task-only and renewable-only models, respectively, with a renewable energy penetration rate of 68.71%. The goal-driven greedy algorithm also shortens UAV patrol distances by 7.0% to 12.5% across different rotation angles. These results validate the effectiveness of the integrated model in improving inspection efficiency and economic benefits, thereby providing critical support for forest fire prevention. Full article
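The goal-driven greedy algorithm (GDGA) above builds on traditional greedy route construction. As a generic illustration only (this is plain nearest-neighbor greedy, not GDGA, and the waypoint coordinates are hypothetical), a UAV patrol route can be built by repeatedly flying to the closest unvisited waypoint:

```python
import math

def greedy_route(points, start=0):
    # Nearest-neighbor greedy tour over inspection waypoints
    unvisited = set(range(len(points))) - {start}
    route = [start]
    while unvisited:
        cur = points[route[-1]]
        nxt = min(unvisited, key=lambda i: math.dist(cur, points[i]))
        route.append(nxt)
        unvisited.remove(nxt)
    return route

waypoints = [(0.0, 0.0), (2.0, 0.0), (5.0, 1.0), (9.0, 0.0)]  # made-up coords, km
```

Greedy tours are fast but myopic; the paper's goal-driven variant reportedly shortens patrol distances by 7.0–12.5% over this kind of baseline.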
16 pages, 1057 KB  
Article
Cost-Effectiveness Analysis of Pitavastatin in Dyslipidemia: Vietnam Case
by Nam Xuan Vo, Hanh Thi My Nguyen, Nhat Manh Phan, Huong Lai Pham, Tan Trong Bui and Tien Thuy Bui
Healthcare 2025, 13(19), 2494; https://doi.org/10.3390/healthcare13192494 - 1 Oct 2025
Abstract
Background/Objectives: Dyslipidemia is becoming a significant economic healthcare burden in low- to middle-income countries (LMICs) due to its role in heightening cardiovascular-related mortality. Statins are the first-line treatment for reducing LDL-C levels, thereby minimizing direct costs associated with cardiovascular disease management, with pitavastatin belonging to the newest generation of statins. This research work conducted a cost-utility analysis of pitavastatin to determine its economic benefit in Vietnam. Methods: A decision tree model was developed to compare the rate of LDL-C controlled patients over a lifetime horizon among patients treated with pitavastatin, atorvastatin, and rosuvastatin. The primary outcome was the incremental cost-effectiveness ratio (ICER), measured from the healthcare system perspective. Effectiveness was evaluated in terms of quality-adjusted life years (QALYs), using an annual discount rate of 3%. A one-way sensitivity analysis was performed to identify the key input parameters that most influenced the ICER outcomes. Results: Pitavastatin was cost-effective compared to atorvastatin but was dominated by rosuvastatin. Although pitavastatin gained fewer QALYs than atorvastatin, the ICER was 195,403,312 VND/QALY, well below Vietnam’s 2024 willingness-to-pay threshold. Drug cost had the most significant impact on ICERs. Conclusions: Pitavastatin represents an economical short-term alternative to atorvastatin, particularly in resource-constrained settings. Full article
26 pages, 3111 KB  
Article
Design and Experiment of Bare Seedling Planting Mechanism Based on EDEM-ADAMS Coupling
by Huaye Zhang, Xianliang Wang, Hui Li, Yupeng Shi and Xiangcai Zhang
Agriculture 2025, 15(19), 2063; https://doi.org/10.3390/agriculture15192063 - 30 Sep 2025
Abstract
In traditional scallion cultivation, the bare-root transplanting method—direct seeding, seedling raising in the field, and lifting—is commonly adopted to minimize seedling production costs. However, mechanized transplanting of bare-root scallion seedlings suffers from practical problems such as severe seedling damage and poor planting uprightness. In this paper, the Hertz–Mindlin with Bonding contact model was used to establish a scallion seedling model, and the bonding parameters were calibrated through a Plackett–Burman experiment, a steepest-ascent experiment, and a Box–Behnken experiment. The accuracy of the model parameters was verified against the stress–strain behavior observed when real scallion seedlings were loaded and compressed. The results indicate that the normal/tangential contact stiffness, the normal/tangential ultimate stress, and Poisson's ratio significantly influence the mechanical properties of scallion seedlings; optimization experiments determined the optimal combination of these parameters to be 4.84 × 10⁹ N/m, 5.64 × 10⁷ Pa, and 0.38. Taking the flexible planting components as the research object, flexible protrusions were added to the planting disc to reduce the seedling damage rate, and an EDEM-ADAMS coupled interaction model between the planting components and the seedlings was established and used to optimize and verify the key components. Orthogonal experiments were conducted with the contact area between seedling and disc, the rotational speed of the flexible disc, the furrow depth, and the clamping force on the seedlings as experimental factors, and with seedling uprightness and damage as evaluation criteria.
The experimental results showed that the planting mechanism performed best with a seedling–disc contact area of 255 mm², an angular velocity of 0.278 rad/s, and a furrow depth of 102.15 mm; at this point the uprightness of the scallion seedlings was 94.80% and the damage rate was 3%. Field experiments under these parameters yielded an average uprightness of 93.86% and a damage rate of 2.76%, within 2 percentage points of the simulation predictions. The parameter model constructed in this paper is therefore reliable and effective, and the improved transplanting mechanism achieves upright, low-damage planting of scallion seedlings, providing a reference for low-damage, high-uprightness scallion transplanting operations. Full article
(This article belongs to the Section Agricultural Technology)
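The Box–Behnken step used in the calibration above is a standard three-level response-surface design. As a rough sketch (the function `box_behnken` below is an illustrative helper, not the authors' code), a Box–Behnken design for k factors places each pair of factors at the four ±1 corner combinations while holding all remaining factors at the center level, and then appends replicated center runs:

```python
from itertools import combinations

def box_behnken(k, center_runs=3):
    """Three-level Box-Behnken design for k coded factors (-1, 0, +1)."""
    runs = []
    for i, j in combinations(range(k), 2):
        for a in (-1, 1):
            for b in (-1, 1):
                row = [0] * k          # all other factors at center level
                row[i], row[j] = a, b  # this pair swept over its corners
                runs.append(row)
    runs += [[0] * k for _ in range(center_runs)]  # replicated center points
    return runs

design = box_behnken(3)  # 12 edge-midpoint runs + 3 center runs = 15 runs
```

For the three calibrated bonding parameters this yields the usual 15-run design, to which a second-order response surface can then be fitted.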
13 pages, 655 KB  
Article
Capacity Configuration Optimization of Wind–Light–Load Storage Based on Improved PSO
by Benhong Wang, Ligui Wu, Peng Zhang, Yifeng Gu, Fangqing Zhang and Jiang Guo
Energies 2025, 18(19), 5212; https://doi.org/10.3390/en18195212 - 30 Sep 2025
Abstract
To improve the economy and stability of green power direct supply for data centers, a capacity configuration optimization of wind–light–load storage based on improved particle swarm optimization (PSO) is conducted. Wind output is modeled with a Weibull distribution fitted to wind speed, and solar output with a Beta distribution fitted to light intensity. A correlation analysis indicates a negative correlation between wind and solar output, which helps to optimize the mix of the two. A capacity configuration optimization model is established to minimize the yearly average cost of wind–light–load storage, subject to constraints on wind and solar output, energy storage capacity, and the balance between wind–solar output and data center load. The model is solved with an improved PSO that adjusts the inertia weight factor dynamically; compared with other optimization algorithms such as differential evolution (DE), the genetic algorithm (GA), and the grey wolf optimizer (GWO), it is more likely to escape local optima. A case study validates the feasibility of green power direct supply with wind–light–load storage: solving the capacity configuration model with the improved PSO improves the balance rate between wind–solar output and data center load by 12.5% and reduces the rate of abandoned wind and solar output by 17.5%, improving both the economy and stability of the supply. Full article
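The key ingredient named in the abstract — a PSO whose inertia weight is adjusted dynamically — is commonly realized by decreasing the weight linearly over the iterations, so the swarm explores early and exploits late. The following is a minimal sketch under that assumption (the exact update schedule the authors use is not given in the abstract); the sphere function stands in for the yearly-average-cost objective, which in the paper would encode the storage and balance constraints:

```python
import random

def improved_pso(cost, dim, n_particles=30, iters=200,
                 w_max=0.9, w_min=0.4, c1=1.5, c2=1.5,
                 lo=-10.0, hi=10.0, seed=1):
    """Minimize `cost` with a PSO whose inertia weight decreases linearly."""
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for t in range(iters):
        # dynamic inertia weight: large early (exploration), small late (exploitation)
        w = w_max - (w_max - w_min) * t / (iters - 1)
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            v = cost(pos[i])
            if v < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], v
                if v < gbest_val:
                    gbest, gbest_val = pos[i][:], v
    return gbest, gbest_val

# Toy stand-in for the yearly average cost of the wind-light-load-storage system
sphere = lambda x: sum(v * v for v in x)
best, val = improved_pso(sphere, dim=3)
```

In the paper's setting, each particle would encode the candidate wind, solar, and storage capacities, and the objective would penalize load imbalance and abandoned wind/solar output.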