
Search Results (1,854)

Search Parameters:
Keywords = level splitting

24 pages, 324 KB  
Article
Data-Leakage-Aware Preoperative Prediction of Postoperative Complications from Structured Data and Preoperative Clinical Notes
by Anastasia Amanatidis, Kyle Egan, Kusuma Nio and Milan Toma
Surgeries 2025, 6(4), 87; https://doi.org/10.3390/surgeries6040087 (registering DOI) - 9 Oct 2025
Abstract
Background/Objectives: Machine learning has been suggested as a way to improve how we predict anesthesia-related complications after surgery. However, many studies report overly optimistic results due to issues like data leakage and not fully using information from clinical notes. This study provides a transparent comparison of different machine learning models using both structured data and preoperative notes, with a focus on avoiding data leakage and involving clinicians throughout. We show how high reported metrics in the literature can result from methodological pitfalls and may not be clinically meaningful. Methods: We used a dataset containing both structured patient and surgery information and preoperative clinical notes. To avoid data leakage, we excluded any variables that could directly reveal the outcome. The data was cleaned and processed, and information from clinical notes was summarized into features suitable for modeling. We tested a range of machine learning methods, including simple, tree-based, and modern language-based models. Models were evaluated using a standard split of the data and cross-validation, and we addressed class imbalance with sampling techniques. Results: All models showed only modest ability to distinguish between patients with and without complications. The best performance was achieved by a simple model using both structured and summarized text features, with an area under the curve of 0.644 and accuracy of 60%. Other models, including those using advanced language techniques, performed similarly or slightly worse. Adding information from clinical notes gave small improvements, but no single type of data dominated. Overall, the results did not reach the high levels reported in some previous studies. Conclusions: In this analysis, machine learning models using both structured and unstructured preoperative data achieved only modest predictive performance for postoperative complications. 
These findings highlight the importance of transparent methodology and clinical oversight to avoid data leakage and inflated results. Future progress will require better control of data leakage, richer data sources, and external validation to develop clinically useful prediction tools. Full article
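The leakage-avoidance step described above — excluding any variable that could directly reveal the outcome before making a train/test split — can be sketched as follows. The column names (`icu_days`, `complication`, etc.) are hypothetical illustrations; the paper's abstract does not list its excluded variables.

```python
import random

def leakage_aware_split(rows, outcome_key, leaky_keys, test_frac=0.2, seed=42):
    """Drop the outcome and any feature that directly reveals it,
    then make a shuffled train/test split.

    `leaky_keys` (e.g. a postoperative ICU-stay field) are hypothetical
    names for illustration only.
    """
    cleaned = [
        ({k: v for k, v in r.items() if k not in leaky_keys and k != outcome_key},
         r[outcome_key])
        for r in rows
    ]
    rng = random.Random(seed)
    idx = list(range(len(cleaned)))
    rng.shuffle(idx)
    n_test = int(len(idx) * test_frac)
    test = [cleaned[i] for i in idx[:n_test]]
    train = [cleaned[i] for i in idx[n_test:]]
    return train, test
```

The key design point is that leaky columns are dropped from every row before the split, so no outcome-revealing signal can reach either partition.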
17 pages, 1033 KB  
Review
Towards Carbon-Neutral Hydrogen: Integrating Methane Pyrolysis with Geothermal Energy
by Ayann Tiam, Marshall Watson and Talal Gamadi
Processes 2025, 13(10), 3195; https://doi.org/10.3390/pr13103195 - 8 Oct 2025
Abstract
Methane pyrolysis produces hydrogen (H2) with solid carbon black as a co-product, eliminating direct CO2 emissions and enabling a low-carbon supply when combined with renewable or low-carbon heat sources. In this study, we propose a hybrid geothermal pyrolysis configuration in which an enhanced geothermal system (EGS) provides base-load preheating and isothermal holding, while either electrical or solar–thermal input supplies the final temperature rise to the catalytic set-point. The work addresses four main objectives: (i) integrating field-scale geothermal operating envelopes to define heat-integration targets and duty splits; (ii) assessing scalability through high-pressure reactor design, thermal management, and carbon separation strategies that preserve co-product value; (iii) developing a techno-economic analysis (TEA) framework that lists CAPEX and OPEX, incorporates carbon pricing and credits, and evaluates dual-product economics for hydrogen and carbon black; and (iv) reorganizing state-of-the-art advances chronologically, linking molten media demonstrations, catalyst development, and integration studies. The process synthesis shows that allocating geothermal heat to the largest heat-capacity streams (feed, recycle, and melt/salt hold) reduces electric top-up demand and stabilizes reactor operation, thereby mitigating coking, sintering, and broad particle size distributions. High-pressure operation improves the hydrogen yield and equipment compactness, but it also requires corrosion-resistant materials and careful thermal-stress management. The TEA indicates that the levelized cost of hydrogen is primarily influenced by two factors: (a) electric duty and the carbon intensity of power, and (b) the achievable price and specifications of the carbon co-product. Secondary drivers include the methane price, geothermal capacity factor, and overall conversion and selectivity. 
Overall, geothermal-assisted methane pyrolysis emerges as a practical pathway to turquoise hydrogen, if the carbon quality is maintained and heat integration is optimized. The study offers design principles and reporting guidelines intended to accelerate pilot-scale deployment. Full article

15 pages, 2184 KB  
Article
Neural Network-Based Prediction of Traffic Accidents and Congestion Levels Using Real-World Urban Road Data
by Baraa A. Alfasi, Khaled R. M. Mahmoud, Al-Hussein Matar and Mohamed H. Abdelati
Future Transp. 2025, 5(4), 138; https://doi.org/10.3390/futuretransp5040138 - 7 Oct 2025
Viewed by 28
Abstract
This study presents a machine learning framework for predicting traffic accident occurrence and congestion intensity using artificial neural networks (ANNs) trained on real-world traffic data collected from a central urban corridor in Egypt. The research aims to enhance proactive traffic management by providing reliable, data-driven forecasts derived from temporal and environmental road features. Sixty-seven traffic observations were recorded over three months, capturing variations across vehicle flow, speed, weather, holidays, and road conditions. Two predictive models were developed: a binary accident detection classifier and a multi-class congestion level estimation classifier. Both models employed Bayesian optimization for hyperparameter tuning and were evaluated under three validation strategies—5-fold cross-validation, 10-fold cross-validation, and resubstitution—combined with different train/test splits. The results demonstrated that the model using 10-fold cross-validation and a 75/25 split achieved the highest accuracy in accident prediction (93.8% on test data), with minimal variance between validation and testing phases. In contrast, resubstitution validation yielded artificially high training accuracy (up to 100%) but lower generalization performance, confirming overfitting risks. Congestion prediction showed similarly strong classification trends, with the optimized model effectively distinguishing between congestion levels under dynamic traffic conditions. These findings validate the use of ANN-based prediction in real-world traffic scenarios and highlight the critical role of validation design in developing robust forecasting models. The proposed approach holds promise for integrating intelligent transportation systems, enabling anticipatory interventions, and enhancing road safety. Full article
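The contrast the abstract draws between resubstitution and k-fold validation is easy to reproduce: a model that memorises its training data scores perfectly when evaluated on that same data, yet performs at chance on held-out folds. A minimal stdlib sketch using a toy 1-nearest-neighbour rule on pure noise (not the paper's ANN):

```python
import random

def knn1_predict(train, x):
    # 1-nearest-neighbour: memorises the training set exactly.
    return min(train, key=lambda p: abs(p[0] - x))[1]

def resubstitution_acc(data):
    # Evaluate on the very points the model "trained" on.
    return sum(knn1_predict(data, x) == y for x, y in data) / len(data)

def kfold_acc(data, k=10):
    # Held-out evaluation: each fold is scored by a model fit on the rest.
    folds = [data[i::k] for i in range(k)]
    correct = total = 0
    for i in range(k):
        train = [p for j, f in enumerate(folds) if j != i for p in f]
        for x, y in folds[i]:
            correct += knn1_predict(train, x) == y
            total += 1
    return correct / total

# Pure noise: the feature carries no information about the label, so
# resubstitution accuracy is a perfect 1.0 while k-fold accuracy is near chance.
rng = random.Random(0)
data = [(rng.random(), rng.randint(0, 1)) for _ in range(200)]
```

This is exactly the overfitting pattern the study reports: up to 100% training accuracy under resubstitution, with much lower generalization performance.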

12 pages, 1163 KB  
Article
Sensor Input Type and Location Influence Outdoor Running Terrain Classification via Deep Learning Approaches
by Gabrielle Thibault, Philippe C. Dixon and David J. Pearsall
Sensors 2025, 25(19), 6203; https://doi.org/10.3390/s25196203 - 7 Oct 2025
Viewed by 133
Abstract
Background/Objective: Understanding the training effect in high-level running is important for performance optimization and injury prevention. This includes awareness of how different running surface types (e.g., hard versus soft) may modify biomechanics. Recent studies have demonstrated that deep learning algorithms, such as convolutional neural networks (CNNs), can accurately classify human activity collected via body-worn sensors. To date, no study has assessed optimal signal type, sensor location, and model architecture to classify running surfaces. This study aimed to determine which combination of signal type, sensor location, and CNN architecture would yield the highest accuracy in classifying grass and asphalt surfaces using inertial measurement unit (IMU) sensors. Methods: Running data were collected from forty participants (age 27.4 ± 7.8 years; 10.5 ± 7.3 years of running experience) with a full-body IMU system (head, sternum, pelvis, upper legs, lower legs, feet, and arms) on grass and asphalt outdoor surfaces. Performance (accuracy) for signal type (acceleration and angular velocity), sensor configuration (full body, lower body, pelvis, and feet), and CNN model architecture was tested for this specific task. Moreover, the effect of preprocessing steps (separating into running cycles and amplitude normalization) and two different data splitting protocols (leave-n-subject-out and subject-dependent split) was evaluated. Results: In general, acceleration signals improved classification results compared to angular velocity (3.8%). Moreover, the foot sensor configuration had the best performance-to-sensor-count ratio (95.5% accuracy). Finally, separating trials into gait cycles and not normalizing the raw signals improved accuracy by approximately 28%. Conclusion: This analysis sheds light on the important parameters to consider when developing machine learning classifiers in the human activity recognition field. 
A surface classification tool could provide useful quantitative feedback to athletes and coaches in terms of running technique effort on varied terrain surfaces, improve training personalization, prevent injuries, and improve performance. Full article
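The preprocessing step that mattered most — separating trials into gait cycles without amplitude normalization — amounts to cutting the signal at successive foot strikes and time-normalising each cycle to a fixed length. A sketch under the assumption that foot-strike indices have already been detected (the detection method itself is not described in the abstract):

```python
def segment_cycles(signal, strike_idx, n_points=100):
    """Cut a 1-D sensor trace into gait cycles at successive foot strikes
    and linearly resample each cycle to `n_points` samples. Amplitude is
    left untouched, matching the study's better-performing preprocessing."""
    cycles = []
    for a, b in zip(strike_idx, strike_idx[1:]):
        seg = signal[a:b]
        step = (len(seg) - 1) / (n_points - 1)
        cycle = []
        for i in range(n_points):
            t = i * step
            lo = int(t)
            hi = min(lo + 1, len(seg) - 1)
            frac = t - lo
            cycle.append(seg[lo] * (1 - frac) + seg[hi] * frac)
        cycles.append(cycle)
    return cycles
```

Each resulting fixed-length cycle can then serve as one input window for a CNN classifier.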

11 pages, 512 KB  
Article
Run-Based Tests Performed on an Indoor and Outdoor Surface Are Comparable in Adolescent Rugby League Players
by Michael A. Carron and Vincent J. Dalbo
Sports 2025, 13(10), 351; https://doi.org/10.3390/sports13100351 - 4 Oct 2025
Viewed by 193
Abstract
At non-professional levels of rugby league, run-based tests are commonly performed on outdoor turfed fields and on indoor multipurpose sport surfaces, and results are monitored to gauge player performance and progression. However, test–retest reliability has not been conducted on indoor surfaces in adolescent rugby league players, and no research has examined if results obtained on outdoor and indoor surfaces are comparable for practitioners. Adolescent, male, rugby league players (N = 15; age = 17.1 ± 0.7 years) completed a 20 m linear sprint test (10- and 20 m splits), 505-Agility Test, and Multistage Fitness Test (MSFT) weekly for three consecutive weeks. Absolute (coefficient of variation (CV)) and relative (intraclass correlation coefficient (ICC)) reliability of each run-based test performed on the indoor surface was quantified. Dependent t-tests, Hedges g, and 95% confidence intervals were used to examine if differences in performance occurred between indoor and outdoor surfaces. Effect size magnitudes were determined as Trivial: <0.20, Small: 0.20–0.49, Medium: 0.50–0.79, and Large: ≥0.80. All tests were considered reliable on the indoor surface (CV < 5.0%; ICCs = moderate-good) except for the 505-Agility Test (CV = 4.6–5.1%; ICCs = poor). Non-significant (p > 0.05), trivial differences were revealed between surface types for 10 (g = 0.15, 95% CI = −0.41 to 0.70) and 20 m (g = 0.06, 95% CI = −0.49 to 0.61) sprint tests, the 505-Agility Test (Right: g = −0.53, 95% CI = −1.12 to 0.06; Left: g = −0.40, 95% CI = −0.97 to 0.17), and the MSFT (g = 0.25, 95% CI = −0.31 to 0.81). The 10 and 20 m linear sprint test and MSFT have acceptable test–retest reliability on an indoor multipurpose sport surface, and practitioners may compare results of run-based tests obtained on an outdoor and indoor surface. Full article
(This article belongs to the Special Issue Sport-Specific Testing and Training Methods in Youth)
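The absolute reliability measure used in the study above, the coefficient of variation, is simply the within-athlete standard deviation across the three weekly trials expressed as a percentage of the mean, with CV < 5% taken as acceptable. A minimal sketch:

```python
from statistics import mean, stdev

def cv_percent(trials):
    """Coefficient of variation (%) across one athlete's repeated trials:
    the sample SD expressed as a percentage of the mean."""
    return 100 * stdev(trials) / mean(trials)

def acceptable(trials, threshold=5.0):
    # The study's absolute-reliability criterion: CV below 5%.
    return cv_percent(trials) < threshold
```

For example, three weekly 20 m sprint times of 3.0, 3.1, and 2.9 s give a CV of about 3.3%, which would pass the 5% criterion.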

13 pages, 1799 KB  
Article
Comparative Analysis of Speed-Power Performance and Sport-Specific Skills Among Elite Youth Soccer Players with Different Start Procedures
by Eduard Bezuglov, Anton Emanov, Timur Vakhidov, Elizaveta Kapralova, Georgiy Malyakin, Vyacheslav Kolesnichenko, Zbigniew Waśkiewicz, Larisa Smekalkina and Mikhail Vinogradov
Sports 2025, 13(10), 341; https://doi.org/10.3390/sports13100341 - 2 Oct 2025
Viewed by 241
Abstract
Accurate interpretation of physical test results is essential to objectively measure parameters both at a single point in time and throughout longitudinal assessments. This is particularly relevant for tests of speed and change of direction, which are among the most commonly used assessments for soccer players at different levels. This study aimed to quantify the impact of start-line distance (30 cm vs. 100 cm) on linear sprint splits (5–30 m), change-of-direction (COD), and T-test performance in elite youth soccer players, while also examining potential order effects. The study involved 82 youth soccer players (14–19 y; 180.68 ± 6.97 cm; 71.65 ± 7.91 kg; BMI 21.90 ± 1.57) from an elite academy, divided into two groups. The first group started trials at 30 cm from the starting line, then at 100 cm, while the second group performed in the reverse order. All participants underwent a standard sequence of tests: anthropometric measurements, 5, 10, 20, and 30 m sprints, change-of-direction running, and the T-test. The longer start (100 cm) improved sprint times with large effects tapering with distance: 5 m (Hedges’ g = 1.00, 95% CI 0.80–1.25; Δ = 0.076 s, 0.060–0.093; 6.99%), 10 m (g = 1.37, 1.14–1.68; Δ = 0.102 s, 0.086–0.119; 5.63%), 20 m (g = 1.58, 1.36–1.88; Δ = 0.112 s, 0.096–0.127; 3.66%), 30 m (g = 1.48, 1.26–1.80; Δ = 0.114 s, 0.097–0.131; 2.71%). COD also improved (rank-biserial r = 0.516, 0.294–0.717; Δ = 0.075 s, 0.034–0.116; 1.00%) and the T-test improved (g = 0.61, 0.37–0.86; Δ = 0.107 s, 0.068–0.145; 1.26%). Order effects on Δ were evident for 30 m (Welch t = −3.05, p_Holm = 0.0157, d = −0.67) and COD (MWU p_Holm = 0.0048, r = −0.43). Protocols must specify and report the start geometry; the order should be randomised or counter-balanced, particularly for 30 m and COD. Full article
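Hedges' g, the effect size reported throughout the abstract above, is Cohen's d (mean difference over pooled SD) multiplied by a small-sample bias correction, commonly approximated as J = 1 − 3/(4(n₁ + n₂) − 9). A stdlib sketch:

```python
from math import sqrt
from statistics import mean, stdev

def hedges_g(x, y):
    """Hedges' g: Cohen's d (mean difference over pooled SD) times the
    small-sample bias correction J = 1 - 3/(4*(n1 + n2) - 9)."""
    n1, n2 = len(x), len(y)
    pooled_sd = sqrt(((n1 - 1) * stdev(x) ** 2 + (n2 - 1) * stdev(y) ** 2)
                     / (n1 + n2 - 2))
    d = (mean(x) - mean(y)) / pooled_sd
    return d * (1 - 3 / (4 * (n1 + n2) - 9))
```

With two tiny samples [1, 2, 3] and [2, 3, 4], d = −1 and J = 0.8, so g = −0.8 — the correction pulls the estimate toward zero, which matters most at small n.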

14 pages, 1037 KB  
Article
MMSE-Based Dementia Prediction: Deep vs. Traditional Models
by Yuyeon Jung, Yeji Park, Jaehyun Jo and Jinhyoung Jeong
Life 2025, 15(10), 1544; https://doi.org/10.3390/life15101544 - 1 Oct 2025
Viewed by 213
Abstract
Early and accurate diagnosis of dementia is essential to improving patient outcomes and reducing societal burden. The Mini-Mental State Examination (MMSE) is widely used to assess cognitive function, yet traditional statistical and machine learning approaches often face limitations in capturing nonlinear interactions and subtle decline patterns. This study developed a novel deep learning-based dementia prediction model using MMSE data collected from domestic clinical settings and compared its performance with traditional machine learning models. A notable strength of this work lies in its use of item-level MMSE features combined with explainable AI (SHAP analysis), enabling both high predictive accuracy and clinical interpretability—an advancement over prior approaches that primarily relied on total scores or linear modeling. Data from 164 participants, classified into cognitively normal, mild cognitive impairment (MCI), and dementia groups, were analyzed. Individual MMSE items and total scores were used as input features, and the dataset was divided into training and validation sets (8:2 split). A fully connected neural network with regularization techniques was constructed and evaluated alongside Random Forest and support vector machine (SVM) classifiers. Model performance was assessed using accuracy, F1-score, confusion matrices, and receiver operating characteristic (ROC) curves. The deep learning model achieved the highest performance (accuracy 0.90, F1-score 0.90), surpassing Random Forest (0.86) and SVM (0.82). SHAP analysis identified Q11 (immediate memory), Q12 (calculation), and Q17 (drawing shapes) as the most influential variables, aligning with clinical diagnostic practices. These findings suggest that deep learning not only enhances predictive accuracy but also offers interpretable insights aligned with clinical reasoning, underscoring its potential utility as a reliable tool for early dementia diagnosis. 
However, the study is limited by the use of data from a single clinical site with a relatively small sample size, which may restrict generalizability. Future research should validate the model using larger, multi-institutional, and multimodal datasets to strengthen clinical applicability and robustness. Full article
(This article belongs to the Section Biochemistry, Biophysics and Computational Biology)

19 pages, 4035 KB  
Article
Optimization of Metakaolin-Based Geopolymer Composite for Repair Application
by Layal Hawa, Abdulkader El-Mir, Jamal Khatib, Dana Nasr, Joseph Assaad, Adel Elkordi and Mohamad Ezzedine El Dandachy
J. Compos. Sci. 2025, 9(10), 527; https://doi.org/10.3390/jcs9100527 - 1 Oct 2025
Viewed by 298
Abstract
This paper assesses the feasibility of metakaolin (MK)-based geopolymer (GP) composite as an environmentally friendly substitute for cement-based composite in repair applications. The Taguchi orthogonal array method was used to find the optimum GP mix in terms of mechanical properties and adhesion to concrete substrates. Four key parameters, each with three levels, are investigated, including the alkaline activator-to-MK ratio (A/M: 1, 1.2, 1.4), the sodium silicate-to-sodium hydroxide ratio (S/H: 2.0, 2.5, 3.0), sodium hydroxide (SH) molarity (12, 14, 16 M), and curing temperature (30, 45, 60 °C). The evaluated properties include flowability, compressive strength, splitting tensile strength, flexural strength, ultrasonic pulse velocity, and bond strength under various interface configurations. Experimental results demonstrated that the performance of MK-based GP composite was primarily governed by the A/M ratio and sodium hydroxide molarity. The Taguchi optimization method revealed that the mix design featuring A/M of 1.4, S/H of 2.0, 16 M sodium hydroxide, and curing at 60 °C yielded notable improvements in compressive and bond strengths compared to conventional cement-based composites. Full article
(This article belongs to the Section Polymer Composites)
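The design above is a textbook Taguchi setup: four factors at three levels would need 3⁴ = 81 full-factorial runs, but an L9 orthogonal array covers them in 9 balanced trials, each scored by a signal-to-noise ratio. A sketch with the standard L9 array and the "larger is better" S/N used for responses like compressive or bond strength (the array-to-factor assignment here is illustrative, not the paper's):

```python
from math import log10

# Standard L9(3^4) orthogonal array: 9 runs for 4 three-level factors
# (here, illustratively: A/M, S/H, NaOH molarity, curing temperature).
# Every level of every factor appears exactly three times.
L9 = [
    (1, 1, 1, 1), (1, 2, 2, 2), (1, 3, 3, 3),
    (2, 1, 2, 3), (2, 2, 3, 1), (2, 3, 1, 2),
    (3, 1, 3, 2), (3, 2, 1, 3), (3, 3, 2, 1),
]

def sn_larger_is_better(responses):
    """Taguchi signal-to-noise ratio for a 'larger is better' response,
    e.g. replicate compressive-strength measurements for one run:
    S/N = -10 * log10(mean(1 / y^2))."""
    return -10 * log10(sum(1 / y ** 2 for y in responses) / len(responses))
```

The optimum level for each factor is then read off by averaging S/N over the three runs at each level and picking the level with the highest mean.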

19 pages, 1061 KB  
Systematic Review
Autologous Tooth-Derived Biomaterials in Alveolar Bone Regeneration: A Systematic Review of Clinical Outcomes and Histological Evidence
by Angelo Michele Inchingolo, Grazia Marinelli, Francesco Inchingolo, Roberto Vito Giorgio, Valeria Colonna, Benito Francesco Pio Pennacchio, Massimo Del Fabbro, Gianluca Tartaglia, Andrea Palermo, Alessio Danilo Inchingolo and Gianna Dipalma
J. Funct. Biomater. 2025, 16(10), 367; https://doi.org/10.3390/jfb16100367 - 1 Oct 2025
Viewed by 416
Abstract
Background: Autologous tooth-derived grafts have recently gained attention as an innovative alternative to conventional biomaterials for alveolar ridge preservation (ARP) and augmentation (ARA). Their structural similarity to bone and osteoinductive potential support clinical use. Methods: This systematic review was conducted according to PRISMA 2020 guidelines and registered in PROSPERO (CRD420251108128). A comprehensive search was performed in PubMed, Scopus, and Web of Science (2010–2025). Randomized controlled trials (RCTs), split-mouth, and prospective clinical studies evaluating autologous dentin-derived grafts were included. Two reviewers independently extracted data and assessed risk of bias using Cochrane RoB 2.0 (for RCTs) and ROBINS-I (for non-randomized studies). Results: Nine studies involving 321 patients were included. Autologous dentin grafts effectively preserved ridge dimensions, with horizontal and vertical bone loss significantly reduced compared to controls. Histomorphometric analyses reported 42–56% new bone formation within 4–6 months, with minimal residual graft particles and favorable vascularization. Implant survival ranged from 96–100%, with stable marginal bone levels and no major complications. Conclusions: Autologous tooth-derived biomaterials represent a safe, biologically active, and cost-effective option for alveolar bone regeneration, showing comparable or superior results to xenografts and autologous bone. Further standardized, long-term RCTs are warranted to confirm their role in clinical practice. Full article
(This article belongs to the Special Issue Property, Evaluation and Development of Dentin Materials)

26 pages, 2687 KB  
Article
Mixed-Fleet Goods-Distribution Route Optimization Minimizing Transportation Cost, Emissions, and Energy Consumption
by Mohammad Javad Jafari, Luca Parodi, Giulio Ferro, Riccardo Minciardi, Massimo Paolucci and Michela Robba
Energies 2025, 18(19), 5147; https://doi.org/10.3390/en18195147 - 27 Sep 2025
Viewed by 333
Abstract
At the international level, new measures, policies, and technologies are being developed to reduce greenhouse gas emissions and, more broadly, air pollutants. Road transportation is one of the main contributors to such emissions, as vehicles are extensively used in logistics operations, and many fleet owners of fossil-fueled trucks are adopting new technologies such as electric, hybrid, and hydrogen-based vehicles. This paper addresses the Hybrid Fleet Capacitated Vehicle Routing Problem with Time Windows (HF-CVRPTW), with the objectives of minimizing costs and mitigating environmental impacts. A mixed-integer linear programming model is developed, incorporating split deliveries, scheduled arrival times at stores, and a carbon cap-and-trade mechanism. The model is tested on a real case study provided by Decathlon, evaluating the performance of internal combustion engine (ICE), electric (EV), and hydrogen fuel cell (HV) vehicles. Results show that when considering economic and emission trading costs, the optimal fleet deployment priority is to use ICE vehicles first, followed by EVs and then HVs, but considering only total emissions, the result is the reverse. Further analysis explores the conditions under which alternative fuel, electricity, or hydrogen prices can achieve competitiveness, and a further analysis investigates the impact of different electricity generation and hydrogen production pathways on overall indirect emissions. Full article
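The cap-and-trade mechanism in such a routing model adds a linear cost for emissions above the cap and a credit for emissions below it, which is what can let a zero-tailpipe vehicle with a higher energy cost break even against a cheaper ICE route. An illustrative per-route cost term (parameter names are mine, not the paper's MILP notation):

```python
def route_cost(distance_km, energy_cost_per_km, emissions_kg_per_km,
               cap_kg, carbon_price_per_kg):
    """Per-route operating cost under cap-and-trade: allowances are bought
    for emissions above the cap and sold below it. All names are
    illustrative, not the paper's notation."""
    energy = distance_km * energy_cost_per_km
    emissions = distance_km * emissions_kg_per_km
    trade = (emissions - cap_kg) * carbon_price_per_kg  # > 0 buy, < 0 sell
    return energy + trade
```

For instance, a 100 km ICE route at 0.5 per km emitting 0.2 kg/km costs the same under a 10 kg cap and a price of 2 per kg as a zero-emission route at 0.9 per km, because the latter sells its unused allowances.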

15 pages, 2673 KB  
Article
Research on and Experimental Verification of the Efficiency Enhancement of Powerspheres Through Distributed Incidence Combined with Intracavity Light Uniformity
by Tiefeng He, Jiawen Li, Chongbo Zhou, Haixuan Huang, Wenwei Zhang, Zhijian Lv, Qingyang Wu, Lili Wan, Zhaokun Yang, Zikun Xu, Keyan Xu, Guoliang Zheng and Xiaowei Lu
Photonics 2025, 12(10), 957; https://doi.org/10.3390/photonics12100957 - 27 Sep 2025
Viewed by 260
Abstract
In laser wireless power transmission systems, the powersphere serves as a spherical enclosed receiver that performs photoelectric conversion, achieving uniform light distribution within the cavity through infinite internal light reflection. However, in practical applications, the high level of light absorption displayed by photovoltaic cells leads to significant disparities in light intensity between directly irradiated regions and reflected regions on the inner surface of the powersphere, resulting in poor light uniformity. One approach aimed at addressing this issue uses a spectroscope to split the incident beam into multiple paths, allowing the direct illumination of all inner surfaces of the powersphere and reducing the light intensity difference between direct and reflected regions. However, experimental results indicate that light transmission through lenses introduces power losses, leading to improved uniformity but reduced output power. To address this limitation, this study proposes a method that utilizes multiple incident laser beams combined with a centrally positioned spherical reflector within the powersphere. A wireless power transmission system model was developed using optical simulation software, and the uniformity of the intracavity light field in the system was analyzed through simulation. To validate the design and simulation accuracy, an experimental system incorporating semiconductor lasers, spherical mirrors, and a powersphere was constructed. The data from the experiments aligned with the simulation results, jointly confirming that integrating a spherical reflector and distributed incident lasers enhances the uniformity of the internal light field within the powersphere and improves the system’s efficiency. Full article
(This article belongs to the Special Issue Technologies of Laser Wireless Power Transmission)

20 pages, 4963 KB  
Article
Enhancing Cherry Tomato Performance Under Water Deficit Through Microbial Inoculation with Bacillus subtilis and Burkholderia seminalis
by Henrique Fonseca Elias de Oliveira, Thiago Dias Silva, Jhon Lennon Bezerra da Silva, Priscila Jane Romano Gonçalves Selaria, Marcos Vinícius da Silva, Marcio Mesquita, Josef Augusto Oberdan Souza Silva and Rhuanito Soranz Ferrarezi
Horticulturae 2025, 11(10), 1157; https://doi.org/10.3390/horticulturae11101157 - 26 Sep 2025
Viewed by 529
Abstract
Crop productivity can be affected by biotic and abiotic stressors, and plant growth-promoting bacteria (PGPB) from the genera Bacillus and Burkholderia have the potential to maintain fruit yield and quality, as these bacteria can promote plant growth by solubilizing nutrients, fixing atmospheric nitrogen, producing phytohormones, and exhibiting antagonistic activity against pathogens. This study aimed to evaluate the effects of inoculating plants with Bacillus subtilis and Burkholderia seminalis on their morphological characteristics, fruit technological attributes and yield of common cherry tomatoes (Solanum lycopersicum L.) subjected to induced water deficit. The study was arranged on a split-plot randomized block design, with four water replacement levels (40%, 60%, 80% and 100% of crop evapotranspiration, ETc) and three inoculation treatments (Bacillus subtilis ATCC 23858, Burkholderia seminalis TC3.4.2R3 and non-inoculation). Data were subjected to analysis of variance using the F-test and compared using Tukey’s test (p < 0.05) and multivariate statistics from principal component analysis. Inoculation with Burkholderia seminalis increased the plant fresh and dry shoot and root mass, as well as root volume. Inoculation with Bacillus subtilis increased carotenoid and chlorophyll b contents. Both inoculations enhanced leaf water content in plants experiencing severe water deficit (40% of ETc). The use of these strains as PGPB increased the fruit soluble solids content. Higher productivity in inoculated plants was achieved through a greater number of fruits per cluster, despite the individual fruits being lighter. Treatments with higher water replacement levels resulted in greater yield. Inoculations showed biotechnological potential in mitigating water deficit in cherry tomatoes. Full article
(This article belongs to the Special Issue Advancements in Horticultural Irrigation Water Management)

13 pages, 1334 KB  
Review
Artificial Intelligence for Myocardial Infarction Detection via Electrocardiogram: A Scoping Review
by Sosana Bdir, Mennatallah Jaber, Osaid Tanbouz, Fathi Milhem, Iyas Sarhan, Mohammad Bdair, Thaer Alhroob, Walaa Abu Alya and Mohammad Qneibi
J. Clin. Med. 2025, 14(19), 6792; https://doi.org/10.3390/jcm14196792 - 25 Sep 2025
Abstract
Background/Objectives: Acute myocardial infarction (MI) is a major cause of death worldwide, and it imposes a heavy burden on health care systems. Although diagnostic methods have improved, detecting the disease early and accurately remains difficult. Recently, AI has demonstrated increasing capability in improving ECG-based MI detection. From this perspective, this scoping review aimed to systematically map and evaluate AI applications for detecting MI through ECG data. Methods: A systematic search was performed in Ovid MEDLINE, Ovid Embase, Web of Science Core Collection, and Cochrane Central. The search covered publications from 2015 to 9 October 2024; non-English articles were included if a reliable translation was available. Studies that used AI to diagnose MI via ECG were eligible; studies that used other diagnostic modalities were excluded. The review followed the PRISMA extension for scoping reviews (PRISMA-ScR) to ensure transparent, methodologically sound reporting. Of a total of 7189 articles, 220 were selected for inclusion. Data extraction covered parameters such as first author, year, country, AI model type, algorithm, ECG data type, accuracy, and AUC to ensure all relevant information was captured. Results: Publications began in 2015, with a peak in 2022. Most studies used 12-lead ECGs; the Physikalisch-Technische Bundesanstalt database and other public and single-center datasets were the most common sources. Convolutional neural networks and support vector machines predominated. While many reports described high apparent performance, these estimates frequently came from relatively small, single-source datasets and validation strategies prone to optimism. Cross-validation was reported in 57% of studies, whereas 36% did not specify their split method, and several noted that accuracy declined under inter-patient or external validation, indicating limited generalizability. Accordingly, headline figures (sometimes ≥99% for accuracy, sensitivity, or specificity) should be interpreted in light of dataset size, case mix, and validation design, with risks of spectrum/selection bias, overfitting, and potential data leakage when patient-level independence is not enforced. Conclusions: AI-based approaches for MI detection using ECGs have grown quickly, but diagnostic performance is limited by dataset and validation issues. Variability in reporting, datasets, and validation strategies has been noted, and standardization is needed. Future work should address clinical integration, explainability, and algorithmic fairness for safe and equitable deployment.
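The patient-level independence this review highlights is straightforward to enforce in code: split by patient ID, not by record, so no patient contributes ECGs to both train and test. A minimal sketch, with entirely hypothetical record and patient identifiers:

```python
# Hypothetical ECG records as (record_id, patient_id) pairs. One patient
# can contribute several ECGs, so a naive record-level split can leak
# patient-specific signal between train and test sets.
records = [(i, f"P{i % 5}") for i in range(20)]  # 20 ECGs from 5 patients

def patient_level_split(records, test_patients):
    """Split records so train and test share no patients."""
    train, test = [], []
    for rec_id, pid in records:
        (test if pid in test_patients else train).append((rec_id, pid))
    return train, test

train, test = patient_level_split(records, test_patients={"P0", "P1"})
train_pids = {pid for _, pid in train}
test_pids = {pid for _, pid in test}
print(train_pids & test_pids)  # empty set: patient-level independence holds
```

Library implementations of the same idea (e.g., group-aware K-fold splitters) generalize this to cross-validation, which is where the review notes many studies fell short.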
(This article belongs to the Section Cardiology)

21 pages, 1478 KB  
Article
Working Capital Management and Profitability in India’s Cement Sector: Evidence and Sustainability Implications
by Ashok Kumar Panigrahi
J. Risk Financial Manag. 2025, 18(10), 541; https://doi.org/10.3390/jrfm18100541 - 25 Sep 2025
Abstract
This study investigates the impact of working capital management (WCM) on profitability in the Indian cement industry, an energy-intensive sector central to the country’s infrastructure growth. Using a balanced panel of listed firms over 2010–2024, we employ pooled OLS, two-way fixed effects, quantile regressions, and dynamic system GMM to address heterogeneity and endogeneity concerns. The results demonstrate that reductions in the cash conversion cycle (CCC), accelerated receivables collection, leaner inventories, and prudent use of payables significantly improve profitability. Quantile regressions reveal that highly profitable firms capture larger absolute gains from CCC reductions, while size-split analysis indicates that smaller and liquidity-constrained firms achieve proportionally greater marginal relief. These findings represent complementary perspectives rather than a unified statistical relationship, a limitation we acknowledge. Dynamic estimates confirm the robustness of the results after accounting for persistence and reverse causality. Beyond firm-level outcomes, the study contributes conceptually by linking WCM efficiency to sustainability financing: liquidity released from shorter operating cycles can be redeployed into green and energy-efficient investments, offering a potential channel for ESG alignment in carbon-intensive industries. Policy implications highlight the role of digital reforms such as TReDS and e-invoicing in strengthening liquidity efficiency, particularly for mid-sized firms. The findings extend the international WCM–profitability literature, provide sector-specific evidence for India, and suggest new avenues for integrating financial and sustainability strategies.
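The cash conversion cycle central to this abstract is a standard accounting identity, CCC = DSO + DIO − DPO (days sales, inventory, and payables outstanding). A minimal sketch with hypothetical balance-sheet figures, not the paper's panel data:

```python
def cash_conversion_cycle(receivables, inventory, payables,
                          revenue, cogs, days=365):
    """CCC in days: how long cash is tied up in the operating cycle."""
    dso = receivables / revenue * days  # days sales outstanding
    dio = inventory / cogs * days       # days inventory outstanding
    dpo = payables / cogs * days        # days payables outstanding
    return dso + dio - dpo

# Hypothetical cement firm (illustrative monetary units):
ccc = cash_conversion_cycle(receivables=120, inventory=200, payables=150,
                            revenue=1100, cogs=800)
print(round(ccc, 1))
```

A shorter CCC means cash returns to the firm sooner, which is the liquidity-release channel the authors connect to profitability and to financing green investments.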
(This article belongs to the Section Business and Entrepreneurship)

14 pages, 356 KB  
Article
The Charmed Meson Spectrum Using One-Loop Corrections to the One-Gluon Exchange Potential
by André Capelo-Astudillo, Telmo Aguilar, Marlon Conde-Correa, Álvaro Duenas-Vidal, Pablo G. Ortega and Jorge Segovia
Symmetry 2025, 17(9), 1575; https://doi.org/10.3390/sym17091575 - 20 Sep 2025
Abstract
We investigate the charmed meson spectrum using a constituent quark model (CQM) with one-loop corrections applied to the one-gluon exchange (OGE) potential. The study aims to determine whether this modified version of our CQM sufficiently accounts for the experimentally observed charmed meson spectrum without invoking exotic quark and gluon configurations such as hybrid mesons or tetraquarks. Within this model, the masses of charmed mesons are computed and the theoretical predictions are compared with experimental data. The results, within uncertainties, suggest that our theoretical framework generally reproduces the mass splittings and level ordering observed for charmed mesons. In particular, the large discrepancies between theory and experiment found in P-wave states are significantly ameliorated by incorporating higher-order interaction terms. The findings therefore emphasize that, while the traditional quark model is limited in fully describing charmed mesons, enhanced potential terms may bridge the gap with experimental observations. The study contributes a framework for predicting excited charmed meson states for future experimental validation.
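For orientation, the tree-level OGE interaction commonly used in constituent quark models combines a color-Coulomb term with a contact spin-spin term; the latter drives hyperfine mass splittings such as D versus D*. The textbook form below is a generic sketch, not necessarily the exact regularized potential of this model, whose one-loop corrections further modify the spin-dependent structure:

```latex
V_{\mathrm{OGE}}(\vec r) \;=\; -\frac{4}{3}\,\frac{\alpha_s}{r}
\;+\; \frac{32\pi\alpha_s}{9\,m_q m_{\bar q}}
\left(\vec S_q \cdot \vec S_{\bar q}\right)\delta^{3}(\vec r)
```

Here $m_q$ and $m_{\bar q}$ are the constituent quark masses and $\vec S_q \cdot \vec S_{\bar q}$ takes the values $-3/4$ and $+1/4$ for spin-singlet and spin-triplet mesons, respectively.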
(This article belongs to the Section Physics)
