Search Results (317)

Search Parameters:
Keywords = breaking probability

15 pages, 883 KB  
Article
An Enhanced RPN Model Incorporating Maintainability Complexity for Risk-Based Maintenance Planning in the Pharmaceutical Industry
by Shireen Al-Hourani and Ali Hassanlou
Processes 2025, 13(10), 3153; https://doi.org/10.3390/pr13103153 - 2 Oct 2025
Abstract
In pharmaceutical manufacturing, the reliability of machines and utility assets is critical to ensuring product quality, regulatory compliance, and uninterrupted operations. Traditional Risk-Based Maintenance (RBM) models quantify asset criticality using the Risk Priority Number (RPN), calculated from the probability and impact of failure alongside detectability. However, these models often neglect the practical challenges involved in diagnosing and resolving equipment issues, particularly in GMP-regulated environments. This study proposes an enhanced RPN framework that replaces the conventional detectability component with Maintainability Complexity (MC), quantified through two practical indicators: Ease of Diagnosis (ED) and Ease of Resolution (ER). Thirteen Key Performance Indicators (KPIs) were developed to assess Probability, Impact, and MC across 185 pharmaceutical utility assets. To enable objective risk stratification, Jenks Natural Breaks Optimization was applied to group assets into Low, Medium, and High risk tiers. Both multiplicative and normalized averaging methods were tested for score aggregation, allowing comparative analysis of their impact on prioritization outcomes. The enhanced model produced stronger alignment with operational realities, enabling more accurate asset classification and maintenance scheduling. A 3D risk matrix was introduced to translate scores into proactive strategies, offering traceability and digital compatibility with Computerized Maintenance Management Systems (CMMS). This framework provides a practical, auditable, and scalable approach to maintenance planning, supporting Industry 4.0 readiness in pharmaceutical operations.
(This article belongs to the Section Pharmaceutical Processes)
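
As an editorial illustration of the scoring pipeline described above, here is a minimal Python sketch, not the authors' implementation: the 1-10 KPI scales and the example asset scores are assumptions, and the three-tier split uses an exact brute-force version of Jenks natural breaks for three classes.

```python
import itertools

def rpn_score(prob, impact, mc, method="multiplicative"):
    """Aggregate Probability, Impact, and Maintainability Complexity
    (each assumed here to be on a 1-10 scale) into one risk score."""
    if method == "multiplicative":
        return prob * impact * mc                    # classic RPN-style product
    return (prob / 10 + impact / 10 + mc / 10) / 3   # normalized average in [0, 1]

def jenks_three_classes(values):
    """Exact 3-class Jenks natural breaks by brute force: pick the two break
    points that minimize the total within-class squared deviation."""
    xs = sorted(values)
    def ssd(seg):
        m = sum(seg) / len(seg)
        return sum((v - m) ** 2 for v in seg)
    best = None
    for i, j in itertools.combinations(range(1, len(xs)), 2):
        cost = ssd(xs[:i]) + ssd(xs[i:j]) + ssd(xs[j:])
        if best is None or cost < best[0]:
            best = (cost, xs[i - 1], xs[j - 1])
    _, low_max, med_max = best
    return low_max, med_max   # upper bounds of the Low and Medium tiers

# Illustrative (Probability, Impact, MC) triples for a handful of assets.
assets = [(2, 3, 4), (8, 9, 7), (5, 5, 6), (9, 8, 9), (1, 2, 2), (6, 7, 5)]
scores = [rpn_score(*a) for a in assets]
low_max, med_max = jenks_three_classes(scores)
tiers = ["Low" if s <= low_max else "Medium" if s <= med_max else "High"
         for s in scores]
print(list(zip(scores, tiers)))
```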

43 pages, 1895 KB  
Article
Bi-Level Dependent-Chance Goal Programming for Paper Manufacturing Tactical Planning: A Reinforcement-Learning-Enhanced Approach
by Yassine Boutmir, Rachid Bannari, Abdelfettah Bannari, Naoufal Rouky, Othmane Benmoussa and Fayçal Fedouaki
Symmetry 2025, 17(10), 1624; https://doi.org/10.3390/sym17101624 - 1 Oct 2025
Abstract
Tactical production–distribution planning in paper manufacturing involves hierarchical decision-making under hybrid uncertainty, where aleatory randomness (demand fluctuations, machine variations) and epistemic uncertainty (expert judgments, market trends) simultaneously affect operations. Existing approaches fail to address the bi-level nature under hybrid uncertainty, treating production and distribution decisions independently or using single-paradigm uncertainty models. This research develops a bi-level dependent-chance goal programming framework based on uncertain random theory, where the upper level optimizes distribution decisions while the lower level handles production decisions. The framework exploits structural symmetries through machine interchangeability, symmetric transportation routes, and temporal symmetry, incorporating symmetry-breaking constraints to eliminate redundant solutions. A hybrid intelligent algorithm (HIA) integrates uncertain random simulation with a Reinforcement-Learning-enhanced Arithmetic Optimization Algorithm (RL-AOA) for bi-level coordination, where Q-learning enables adaptive parameter tuning. The RL component utilizes symmetric state representations to maintain solution quality across symmetric transformations. Computational experiments demonstrate HIA’s superiority over standard metaheuristics, achieving 3.2–7.8% solution quality improvement and 18.5% computational time reduction. Symmetry exploitation reduces search space by approximately 35%. The framework provides probability-based performance metrics with optimal confidence levels (0.82–0.87), offering 2.8–4.5% annual cost savings potential.
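
The Q-learning-based parameter tuning mentioned above can be illustrated with a generic tabular sketch. This is not the authors' RL-AOA; the states, actions, and reward signal are illustrative assumptions.

```python
import random

# Hypothetical discrete settings for one metaheuristic control parameter.
ACTIONS = [0.1, 0.5, 0.9]
STATES = ["improving", "stalled"]   # coarse search-progress states (assumption)

Q = {(s, a): 0.0 for s in STATES for a in range(len(ACTIONS))}
alpha, gamma, eps = 0.1, 0.9, 0.2   # learning rate, discount, exploration rate

def choose(state):
    """Epsilon-greedy action selection over the Q-table."""
    if random.random() < eps:
        return random.randrange(len(ACTIONS))
    return max(range(len(ACTIONS)), key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    """Standard one-step Q-learning update."""
    target = reward + gamma * max(Q[(next_state, a)] for a in range(len(ACTIONS)))
    Q[(state, action)] += alpha * (target - Q[(state, action)])
```

In an RL-AOA-style loop, each outer iteration would call choose(), run the optimizer step with ACTIONS[action], and feed the observed improvement in the objective back through update() as the reward.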

19 pages, 1442 KB  
Article
Benova and Cenova Models in the Homogenization of Climatic Time Series
by Peter Domonkos
Climate 2025, 13(10), 199; https://doi.org/10.3390/cli13100199 - 23 Sep 2025
Abstract
For the correct evaluation of climate trends and climate variability, it is important to remove non-climatic biases from the observed data. Such biases, referred to as inhomogeneities, arise from station relocations or changes in the instrumentation or instrument installation, among other causes. Most inhomogeneities are related to a sudden change (break) in the technical conditions of the climate observations. In long time series (>30 years), multiple breaks usually occur, and their joint impact on the long-term trends and variability is more important than their individual evaluation. Benova is the optimal method for the joint calculation of correction terms for removing inhomogeneity biases. Cenova is a modified, imperfect version of Benova, which, however, can also be used in discontinuous time series. In the homogenization of section means, the use of Benova should be preferred, while in homogenizing probability distributions, only Cenova can be applied. This study presents the Benova and Cenova methods, discusses their main properties and compares their efficiencies using the benchmark dataset of the Spanish MULTITEST project (2015–2017), the largest existing dataset of this kind. The root mean square error (RMSE) of the annual means and the mean absolute trend bias were calculated for the Benova and Cenova results. When the signal-to-noise ratio (SNR) is high, the errors in the Cenova results are 14% to 24% higher, while when the SNR is low, or concerted inhomogeneities occur in several time series, the advantage of Benova over Cenova might disappear.
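
The two efficiency measures used in this comparison are straightforward to compute. A minimal sketch, assuming the homogenized and "true" benchmark annual series are available as equal-length arrays:

```python
import numpy as np

def rmse_annual_means(homogenized, truth):
    """Root mean square error between homogenized and true annual means."""
    h, t = np.asarray(homogenized), np.asarray(truth)
    return float(np.sqrt(np.mean((h - t) ** 2)))

def mean_abs_trend_bias(homogenized, truth, years):
    """Absolute difference between the linear trend slopes (per year)
    fitted to the homogenized series and to the true series."""
    slope_h = np.polyfit(years, homogenized, 1)[0]
    slope_t = np.polyfit(years, truth, 1)[0]
    return abs(slope_h - slope_t)
```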

18 pages, 3331 KB  
Article
DeepFocusNet: An Attention-Augmented Deep Neural Framework for Robust Colorectal Cancer Classification in Whole-Slide Histology Images
by Shah Md Aftab Uddin, Muhammad Yaseen, Md Kamran Hussain Chowdhury, Rubina Akter Rabeya, Shah Muhammad Imtiyaj Uddin and Hee-Cheol Kim
Electronics 2025, 14(18), 3731; https://doi.org/10.3390/electronics14183731 - 21 Sep 2025
Abstract
Colorectal cancer is a major cause of cancer-related mortality globally, which emphasises the critical need for state-of-the-art diagnostic tools for early identification and categorisation. We use deep learning methodology to automatically classify colorectal cancer histology images into eight different categories. To improve classification accuracy and maximise feature extraction, we create the DeepFocusNet architecture with attention mechanisms, using a dataset of 5000 high-resolution (150 × 150) histological images. To improve model generalisation, our progressive training approach combines data augmentation, fine-tuning, and freezing of early layers. Additionally, using a dedicated tiling technique, we break up large-scale histology images (5000 × 5000) into smaller windows for classification and then reconstruct full-scale images as heatmaps and multi-class overlays. Attention mechanisms improve the model’s performance and interpretability by focusing on the most important histopathological traits. The model provides pathologists with high-resolution probability maps that aid in precise and speedy diagnosis. Empirical results demonstrate the robustness of our methodology, with 97% classification accuracy, and show its potential for real-world clinical applications in AI-assisted histopathology.
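
The tiling-and-reassembly step lends itself to a short sketch. This is an illustrative reconstruction, not the released code: classify_window stands in for the trained network, and non-overlapping 150 × 150 tiles are an assumption.

```python
import numpy as np

TILE = 150  # window size used for classification in the paper

def probability_map(slide, classify_window, n_classes=8):
    """Slide over a large histology image in TILE x TILE windows, classify
    each window, and assemble a per-class probability map."""
    h, w = slide.shape[:2]
    heat = np.zeros((h // TILE, w // TILE, n_classes))
    for i in range(h // TILE):
        for j in range(w // TILE):
            window = slide[i * TILE:(i + 1) * TILE, j * TILE:(j + 1) * TILE]
            heat[i, j] = classify_window(window)  # softmax vector, len n_classes
    return heat  # upscale / overlay on the slide for the multi-class view
```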

19 pages, 12692 KB  
Article
Long-Range Plume Transport from Brazilian Burnings to Urban São Paulo: A Remote Sensing Analysis
by Gabriel Marques da Silva, Mateus Fernandes Rodrigues, Laura Silva Pelicer, Gregori de Arruda Moreira, Alexandre Cacheffo, Fábio Juliano da Silva Lopes, Luisa D’Antola de Mello, Giovanni Souza and Eduardo Landulfo
Atmosphere 2025, 16(9), 1022; https://doi.org/10.3390/atmos16091022 - 29 Aug 2025
Abstract
In 2024, Brazil experienced record-breaking wildfire activity, underscoring the escalating influence of climate change. This study examines the long-range transport of wildfire-generated aerosol plumes to São Paulo, combining multi-platform observations to trace their origin and properties. During August and September—a period marked by intense fire outbreaks in Pará and Mato Grosso do Sul—lidar measurements performed at São Paulo detected pronounced aerosol plumes. To investigate their source and characteristics, we integrated data from the Earth Cloud Aerosol and Radiation Explorer (EarthCARE) satellite, HYSPLIT back-trajectory modeling, and ground-based AERONET and Raman lidar measurements. Aerosol properties were derived from aerosol optical depth (AOD), Ångström exponent, and lidar ratio (LR) retrievals. Back-trajectory analysis identified three transport pathways originating from active fire zones, with coinciding AOD values (0.7–1.1) and elevated LR (60–100 sr), indicative of dense smoke plumes. Compositional analysis revealed a significant black carbon component, implicating wildfires near Corumbá (Mato Grosso do Sul) and São Félix do Xingu (Pará) as probable emission sources. These findings highlight the efficacy of satellite-based lidar systems, such as Atmospheric Lidar (ATLID) onboard EarthCARE, in atmospheric monitoring, particularly in data-sparse regions where ground instrumentation is limited.
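
Among the retrievals listed, the Ångström exponent has a standard closed form: α = −ln(τ₁/τ₂) / ln(λ₁/λ₂) for AODs τ₁, τ₂ at wavelengths λ₁, λ₂. A minimal sketch with illustrative AERONET-style values:

```python
import math

def angstrom_exponent(aod1, aod2, wl1, wl2):
    """Angstrom exponent from AOD at two wavelengths (wl1, wl2 in same units)."""
    return -math.log(aod1 / aod2) / math.log(wl1 / wl2)

# Illustrative values: AOD 0.9 at 440 nm and 0.5 at 870 nm.
alpha = angstrom_exponent(0.9, 0.5, 440.0, 870.0)
print(round(alpha, 2))  # ~0.86; higher values indicate finer particles
```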

14 pages, 857 KB  
Article
Research on the Law of Top Coal Movement and Influence Factors of Coal Caving Ratio for Fully Mechanized Top Coal Caving Working Face
by Jinhu Zhang, Zhiheng Cheng, Sheng Lei, Kai Guo, Liang Chen, Zherui Zhang and Jiahui Chen
Energies 2025, 18(16), 4312; https://doi.org/10.3390/en18164312 - 13 Aug 2025
Abstract
To investigate the movement law of top coal and the influencing factors of coal caving ratio in fully mechanized top coal caving faces, this study adopts the theory of dispersoid mechanics. First, a top coal flow model was established without considering the influence of the support. Then, the effect of the support was analyzed, and it was found that the sliding resistance of the top coal body increases with the square of both the support width and the top coal thickness. Furthermore, the positive stress on the coal particles was derived through a microelement force analysis, and a theoretical formula for arching probability was proposed. The mobility of top coal was evaluated using a flow factor, and the influence of lump size on arching tendency was quantitatively analyzed. Based on these insights, several measures to improve top coal flowability and recovery rate were proposed, including increasing mining height, enlarging the coal caving opening, enhancing the initial support force, extending the caving step, and applying multiple alternating loads to pre-break top coal. These strategies provide a theoretical basis and practical guidance for enhancing top coal caving efficiency.
(This article belongs to the Special Issue Coal, Oil and Gas: Latest Advances and Prospects)

19 pages, 1476 KB  
Article
Network Design and Content Deployment Optimization for Cache-Enabled Multi-UAV Socially Aware Networks
by Yikun Zou, Gang Wang, Guanyi Chen, Jinlong Wang, Siyuan Yu, Chenxu Wang and Zhiquan Zhou
Drones 2025, 9(8), 568; https://doi.org/10.3390/drones9080568 - 12 Aug 2025
Abstract
Unmanned aerial vehicles (UAVs) with high mobility and self-organization capabilities can establish highly connected networks to cache popular content for edge users, which improves network stability and significantly reduces access time. However, an uneven distribution of demand and storage capacity may reduce the utilization of the storage capacity of UAVs without a proper UAV coordination mechanism. This work proposes a multi-UAV-enabled caching socially aware network (SAN) where UAVs can switch roles by adjusting the social attributes, effectively enhancing data interaction within the UAVs. The proposed network breaks down communication barriers at the UAV layer and integrates the collective storage resources by incorporating social awareness mechanisms to mitigate these imbalances. Furthermore, we formulate a multi-objective optimization problem (MOOP) with the objectives of maximizing both the diversity of cached content and the total request probability (RP) of the network, while employing a multi-objective particle swarm optimization (MOPSO) algorithm with a mutation strategy to approximate the Pareto front. Finally, the impact of key parameters on the Pareto front is analyzed under various scenarios. Simulation results validate the benefits of leveraging social attributes for resource allocation and demonstrate the effectiveness and convergence of the proposed algorithm for the multi-UAV caching strategy.
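
The Pareto-front approximation rests on a standard dominance test over the two maximized objectives (cached-content diversity and total request probability). A minimal generic sketch, not the authors' MOPSO:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (all objectives maximized)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(solutions):
    """Filter a list of objective vectors down to the non-dominated set."""
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o is not s)]

# Example: (diversity, total request probability) pairs for candidate placements.
print(pareto_front([(3, 0.6), (5, 0.4), (4, 0.5), (2, 0.7), (3, 0.5)]))
# -> [(3, 0.6), (5, 0.4), (4, 0.5), (2, 0.7)]; (3, 0.5) is dominated by (3, 0.6)
```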

14 pages, 2413 KB  
Article
Effect of Carbon and Nitrogen Concentrations on the Superconducting Properties of (NbMoTaW)1CxNy Carbonitride Films
by Gabriel Pristáš, Slavomír Gabáni, Petra Hviščová, Jozef Dobrovodský, Dmitry Albov, Maksym Lisnichuk, Oleksandr Onufriienko, Janina Zorych, František Lofaj and Karol Flachbart
Materials 2025, 18(16), 3732; https://doi.org/10.3390/ma18163732 - 8 Aug 2025
Abstract
We report on the effect of nitrogen and carbon concentration on the superconducting transition temperature TC of (NbMoTaW)1CxNy carbonitride films deposited using reactive DC magnetron sputtering. By measuring the temperature dependence of the electrical resistance and magnetization of these carbonitrides, with 0.20 ≤ x ≤ 1.17 and 0 ≤ y ≤ 0.73, we observe a TC enhancement that occurs especially at high (x ≥ 0.76) carbon concentrations, with the largest TC = 9.6 K observed in the over-doped fcc crystal structure with x = 1.17 and y = 0.41. The reason why the largest TC appears at high C concentrations is probably related to the lower atomic mass of carbon compared to nitrogen and to the increase in the electron–phonon interaction due to the different bonding of carbon (compared to nitrogen) to the Nb-Mo-Ta-W metallic sublattice. However, for concentrations where y > 0.71 and x + y > 1.58, two structural phases begin to form. Further measurements in a magnetic field show that the upper critical fields BC2 of (NbMoTaW)1CxNy carbonitrides yield BC2/TC < 2 T/K, which falls within the weak-coupling pair-breaking limit; the proximity to structural instability may also play a role in the observed BC2 enhancement.
(This article belongs to the Special Issue High-Entropy Alloys: Synthesis, Characterization, and Applications)

38 pages, 522 KB  
Article
Modified Engel Algorithm and Applications in Absorbing/Non-Absorbing Markov Chains and Monopoly Game
by Chunhe Liu and Jeff Chak Fu Wong
Math. Comput. Appl. 2025, 30(4), 87; https://doi.org/10.3390/mca30040087 - 8 Aug 2025
Abstract
The Engel algorithm was created to solve chip-firing games and can be used to find the stationary distribution for absorbing Markov chains. Kaushal et al. developed a MATLAB-based version of the generalized Engel algorithm based on Engel’s probabilistic abacus theory. This paper introduces a modified version of the generalized Engel algorithm, which we call the modified Engel algorithm, or the mEngel algorithm for short. This modified version is designed to address issues related to non-absorbing Markov chains. It achieves this by breaking down the transition matrix into two distinct matrices, where each entry in the transition matrix is calculated from the ratio of the numerator and denominator matrices. In a nested iteration setting, these matrices play a crucial role in converting non-absorbing Markov chains into absorbing ones and then back again, thereby providing an approximation of the solutions of non-absorbing Markov chains until the distribution of a Markov chain converges to a stationary distribution. Our results show that the numerical outcomes of the mEngel algorithm align with those obtained from the power method and the canonical decomposition of absorbing Markov chains. We provide an example, Torrence’s problem, to illustrate the application of absorbing probabilities. Furthermore, our proposed algorithm analyzes the Monopoly transition matrix as a form of non-absorbing probabilities based on the rules of the Monopoly game, a complete-information dynamic game, particularly the probability of landing on the Jail square, which is determined by the order of the product of the movement, Jail, Chance, and Community Chest matrices. The Long Jail strategy, the Short Jail strategy, and the strategy of getting out of Jail by rolling consecutive doubles three times have been formulated and tested. In addition, choosing which color group to buy is an important strategy. By comparing the probability distribution of each strategy and the profit return for each property and color group of properties, we find which strategies should be used when playing Monopoly. In conclusion, the mEngel algorithm, implemented in R, offers an alternative approach to solving the Monopoly game and demonstrates practical value.
(This article belongs to the Section Engineering)
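
For reference, the power method that the mEngel results are checked against is compact enough to sketch. The toy transition matrix below is an assumption, not the Monopoly matrix:

```python
import numpy as np

def stationary_power_method(P, tol=1e-12, max_iter=10_000):
    """Iterate pi <- pi P until convergence; P must be row-stochastic."""
    n = P.shape[0]
    pi = np.full(n, 1.0 / n)          # uniform starting distribution
    for _ in range(max_iter):
        nxt = pi @ P
        if np.abs(nxt - pi).sum() < tol:
            return nxt
        pi = nxt
    return pi

# Toy 3-state ergodic (non-absorbing) chain, purely illustrative.
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])
print(stationary_power_method(P))   # sums to 1; the long-run state occupancy
```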

19 pages, 618 KB  
Article
Application of Microwaves to Reduce Checking in Low-Fat Biscuits: Impact on Sensory Characteristics and Energy Consumption
by Raquel Rodríguez, Xabier Murgui, Yolanda Rios, Eduardo Puértolas and Izaskun Pérez
Foods 2025, 14(15), 2693; https://doi.org/10.3390/foods14152693 - 30 Jul 2025
Abstract
The use of microwaves (MWs) has been proposed as an energy-efficient method for reducing checking. Along with understanding moisture distribution, it is essential to consider structural characteristics to explain how MWs reduce checking. The influence of MWs on these characteristics depends on the food matrix’s dielectric and viscoelastic properties, which vary significantly between fresh and pre-baked dough. This study investigates the effects of MW treatment applied before (MW-O) or after conventional oven baking (O-MW) on low-fat biscuits that are prone to checking. Color (CIELab), thickness, moisture content and distribution, checking rate, texture, sensory properties, energy consumption and baking time were analyzed. The findings suggest that MWs reduce the checking rate by eliminating internal moisture differences, while also changing structural properties, as evidenced by increased thickness and hardness. MW-O eliminated checking (control samples showed 100%) but negatively affected color, texture (increased hardness and breaking work), and sensory quality. The O-MW checking rate (3.41%) was slightly higher than that of MW-O, probably due to the resulting differences in structural properties (lower thickness, hardness, and breaking work). O-MW biscuits were the most preferred by consumers (54.76% ranked them first), with color and texture close to the control samples. MW-O reduced total energy consumption by 16.39% and baking time by 25.00%. For producers, these improvements could compensate for the lower biscuit quality. O-MW did not affect energy consumption but reduced baking time by 14.38%. The productivity improvement, along with the reduction in checking and the satisfactory sensory quality, indicates that O-MW could be beneficial for the bakery sector.
(This article belongs to the Special Issue Cereal Processing and Quality Control Technology)

19 pages, 6821 KB  
Article
Effects of Process Parameters on Tensile Properties of 3D-Printed PLA Parts Fabricated with the FDM Method
by Seçil Ekşi and Cetin Karakaya
Polymers 2025, 17(14), 1934; https://doi.org/10.3390/polym17141934 - 14 Jul 2025
Cited by 1
Abstract
This study investigates the influence of key fused deposition modeling (FDM) process parameters, namely, print speed, infill percentage, layer thickness, and layer width, on the tensile properties of PLA specimens produced using 3D printing technology. A Taguchi L9 orthogonal array was employed to design the experiments efficiently, enabling the systematic evaluation of parameter effects with fewer tests. Tensile strength and elongation at break were measured for each parameter combination, and statistical analyses, including the signal-to-noise (S/N) ratio and analysis of variance (ANOVA), were conducted to identify the most significant factors. The results showed that infill percentage significantly affected tensile strength, while layer thickness was the dominant factor influencing elongation. The highest tensile strength (47.84 MPa) was achieved with the parameter combination of 600 mm/s print speed, 100% infill percentage, 0.4 mm layer thickness, and 0.4 mm layer width. A linear regression model was developed to predict tensile strength with an R2 value of 83.14%, and probability plots confirmed the normal distribution of the experimental data. This study provides practical insights into optimizing FDM process parameters to enhance the mechanical performance of PLA components, supporting their use in structural and functional applications.
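
The larger-is-better signal-to-noise ratio used in Taguchi analysis has the standard form S/N = −10 · log₁₀((1/n) Σ 1/yᵢ²). A minimal sketch applying it to replicate tensile-strength measurements; the values are illustrative, not the study's data:

```python
import math

def sn_larger_is_better(ys):
    """Taguchi larger-is-better S/N ratio in dB for replicate responses ys."""
    return -10.0 * math.log10(sum(1.0 / y ** 2 for y in ys) / len(ys))

# Illustrative replicate tensile strengths (MPa) for one L9 run.
print(round(sn_larger_is_better([46.9, 47.8, 47.2]), 2))  # higher is better
```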

18 pages, 2268 KB  
Article
Effects of a Novel Mechanical Vibration Technology on the Internal Stress Distribution and Macrostructure of Continuously Cast Billets
by Shuai Liu, Jianliang Zhang, Hui Zhang and Minglin Wang
Metals 2025, 15(7), 794; https://doi.org/10.3390/met15070794 - 14 Jul 2025
Abstract
This paper studies a new mechanical vibration technology for continuous casting production, which is used to break dendrites at the solidification front, expand the equiaxed dendrite zone, and improve the center quality of the billet. The exciting force of this vibration technology is provided by a newly developed and designed piece of vibration equipment, the Vibration roll (VR). Firstly, an investigation is conducted into the impacts of vibration acceleration, vibration frequency, and the contact area between the VR and the billet surface on the internal stress distribution within the billet shell. Secondly, billets with and without vibration treatment were sampled and analyzed through industrial tests. The results show that the area ratio of equiaxed dendrites in transverse specimens treated with vibration technology was 11.96%, compared to 6.55% in untreated specimens. Similarly, for longitudinal samples, the linear ratio of equiaxed dendrites was 34.56% in treated samples and 22.95% in untreated samples. Compared to the specimens without mechanical vibration, the billets treated with mechanical vibration thus exhibit increases of 5.41 and 11.61 percentage points in the area ratio and linear ratio of equiaxed dendrites, respectively. Moreover, the probability of bridging at the end of solidification of the billets treated by vibration technology was significantly reduced, and the central porosity and shrinkage cavities of the billets were significantly improved. This study provides the first definitive evidence that the novel mechanical vibration technology can enhance billet quality during the continuous casting process.

9 pages, 550 KB  
Case Report
Psychotic Disorder Secondary to Cerebral Venous Thrombosis Caused by Primary Thrombophilia in a Pediatric Patient with Protein S Deficiency and an MTHFR p.Ala222Val Variant: A Case Report
by Darío Martínez-Pascual, Alejandra Dennise Solis-Mendoza, Jacqueline Calderon-García, Bettina Sommer, Eduardo Calixto, María E. Martinez-Enriquez, Arnoldo Aquino-Gálvez, Hector Solis-Chagoyan, Luis M. Montaño, Bianca S. Romero-Martinez, Ruth Jaimez and Edgar Flores-Soto
Hematol. Rep. 2025, 17(4), 34; https://doi.org/10.3390/hematolrep17040034 - 3 Jul 2025
Abstract
Background and Clinical Significance: Herein, we describe the clinical case of a 17-year-old patient with a psychotic disorder secondary to cerebral venous thrombosis due to primary thrombophilia, which was related to protein S deficiency and a heterozygous MTHFR gene mutation with the p.Ala222Val variant. Case presentation: A 17-year-old female, with no history of previous illnesses, was admitted to the emergency service department due to a psychotic break. Psychiatric evaluation detected disorganized thought, euphoria, fleeting and loosely associated ideas, psychomotor excitement, and impaired judgment. On the fifth day, an inflammatory process in the parotid gland was detected, suggesting probable viral meningoencephalitis and prompting antiviral and antimicrobial treatment. One week after antiviral and steroidal anti-inflammatory treatments, the improvement in symptoms was minimal, which led to further neurological workup. MRI venography revealed a filling defect in the transverse sinus, consistent with cerebral venous thrombosis. Consequently, anticoagulation treatment with enoxaparin was initiated. The patient’s behavior improved, revealing that the encephalopathic symptoms were secondary to thrombosis of the venous sinus. Hematological studies indicated that the cause of the venous sinus thrombosis was a primary thrombophilia caused by a heterozygous MTHFR mutation, variant p.Ala222Val, and a 35% decrease in plasmatic protein S. Conclusions: This case highlights the possible relationship between psychiatric and thrombotic disorders, suggesting that both the MTHFR mutation and protein S deficiency could lead to psychotic disorders. Early detection of thrombotic risk factors in early-onset psychiatric disorders is essential for the comprehensive management of patients.

15 pages, 455 KB  
Article
Dead or Alive? Identification of Postmortem Blood Through Detection of D-Dimer
by Amy N. Brodeur, Tai-Hua Tsai, Gulnaz T. Javan, Dakota Bell, Christian Stadler, Gabriela Roca and Sara C. Zapico
Biology 2025, 14(7), 784; https://doi.org/10.3390/biology14070784 - 28 Jun 2025
Abstract
At crime scenes, apart from the detection of blood, it may be important to determine whether a person was alive at the time of blood deposition. Based on the rapid onset of fibrinolysis after death, this pathway could be considered to identify potential biomarkers for postmortem blood. Fibrinolysis is the natural process that breaks down blood clots after healing a vascular injury. One of its products, D-dimer, could be a potential biomarker for postmortem blood. SERATEC® (SERATEC® GmbH, Göttingen, Germany) has developed the PMB immunochromatographic assay to simultaneously detect human hemoglobin and D-dimer. The main goals of this study were to assess the possibility of using this test to detect postmortem blood, evaluate D-dimer levels in antemortem, menstrual, and postmortem blood, and assess the ability to obtain STR profiles from postmortem blood. Except for one degraded sample, all postmortem blood samples reacted positively for the presence of D-dimer using the SERATEC® PMB test. All antemortem blood samples from living individuals showed negative results for D-dimer detection, except for one liquid sample with a weak positive result, probably due to pre-existing health conditions. Menstrual blood samples gave variable results for D-dimer. The DIMERTEST® Latex assay was used for semi-quantitative measurement of D-dimer concentrations, with postmortem and menstrual blood yielding higher D-dimer concentrations compared to antemortem blood. Full STR profiles were developed for all postmortem samples tested except for one degraded sample, pointing to the possibility of not only detecting postmortem blood at the crime scene but also the potential identification of the victim.

13 pages, 751 KB  
Article
Potential Associations Between Anthropometric Characteristics, Biomarkers, and Sports Performance in Regional Ultra-Marathon Swimmers: A Quasi-Experimental Study
by Iasonas Zompanakis, Konstantinos Papadimitriou and Nikolaos Koutlianos
Appl. Sci. 2025, 15(13), 7210; https://doi.org/10.3390/app15137210 - 26 Jun 2025
Abstract
Background/Objectives: This study aimed to investigate the associations of anthropometric characteristics with performance and potential biomarker changes resulting from a continuous 10 h ultra-marathon swimming effort in regional-level swimmers. Methods: Nine adult male swimmers (age: 43 ± 6 years) participated in a 10 h swim in a 50 m outdoor pool, self-managing their nutrition and hydration breaks. Pre- and post-swim measurements included body weight (BW), body fat percentage (BF%), limb lengths (LL), circumferences (C), lean mass (LM), body mass index (BMI), skinfold thicknesses, heart rate (HR) and blood pressure (BP). Results: A significant reduction was observed in bicep skinfold thickness (Fb) (p = 0.022), while both HR and systolic BP increased post-effort (p = 0.030 and p = 0.045, respectively). Most anthropometric parameters, such as BMI, LM, and some C, remained unchanged (p ≥ 0.05). A statistically significant negative correlation was found between post-swim hip circumference (Ph) and total swimming distance (r = –0.682, p = 0.043). Conclusions: While most anthropometric traits remained stable and unrelated to performance, isolated changes in specific biomarkers indicate a physiological response to prolonged exertion. Although pacing and nutritional strategies were not directly examined, observational data, such as consistent swimming rhythm, time allocation for active recovery (AR), and structured carbohydrate intake, suggest these factors may have contributed to performance maintenance and probably to the lack of body composition differences after the ultra-marathon effort. These insights are interpretive and align with the existing literature, highlighting the need for future studies with targeted experimental designs.
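
The reported association corresponds to an ordinary Pearson correlation test. A minimal sketch using SciPy, with illustrative data for nine swimmers (not the study's measurements):

```python
from scipy.stats import pearsonr

# Illustrative paired observations: post-swim hip circumference (cm)
# vs. total swimming distance (km); values are assumptions.
hip = [98, 102, 95, 105, 100, 97, 103, 99, 101]
dist = [26.0, 22.5, 27.0, 21.0, 24.0, 26.5, 22.0, 25.0, 23.5]

r, p = pearsonr(hip, dist)
print(round(r, 3), round(p, 3))   # a negative r mirrors the reported direction
```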
