We study how small harmful mutations spread in populations that reproduce asexually. This process is known as Muller’s ratchet—it means that even though these mutations are damaging, they can still build up over generations. To explore this, we use a mathematical model that describes how such mutations move through a population living in an environment with limited resources. We model Muller’s ratchet deterministically using differential equations, incorporating modifications that account for extinction risk of small mutation classes. We analyze two modifications: a published cutoff modification and a more flexible exponential modification. We show that the exponential modification better matches stochastic simulations over specific parameter ranges.
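As a rough illustration of the kind of modified deterministic model involved, the sketch below integrates a haploid mutation-selection system in which a damping factor suppresses the selection term of nearly extinct mutation classes, once with a hard cutoff and once with a smooth exponential variant. The equations, parameter values, and damping forms are assumptions chosen for illustration; they are not the paper's exact model.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative sketch only: n_k is the number of individuals carrying k deleterious
# mutations; phi(n_k) damps the selection term of nearly extinct classes.
s, lam, K = 0.02, 0.3, 40                     # selection coefficient, mutation rate, classes

def phi_cutoff(n, threshold=1.0):
    return (n >= threshold).astype(float)     # hard cutoff below one expected individual

def phi_exponential(n, theta=1.0):
    return 1.0 - np.exp(-n / theta)           # smooth exponential damping of small classes

def rhs(t, n, phi):
    w = (1.0 - s) ** np.arange(K)             # relative fitness of class k
    wbar = np.dot(w, n) / max(n.sum(), 1e-12) # mean fitness of the population
    selection = n * (w - wbar) * phi(n)       # selection, damped for small classes
    mutation = lam * (np.roll(n, 1) - n)      # inflow from class k-1, outflow to class k+1
    mutation[0] = -lam * n[0]                 # the mutation-free class only loses individuals
    return selection + mutation

n0 = np.zeros(K)
n0[0] = 1000.0                                # start with a mutation-free population
for name, phi in [("cutoff", phi_cutoff), ("exponential", phi_exponential)]:
    sol = solve_ivp(rhs, (0.0, 500.0), n0, args=(phi,))
    best = int(np.argmax(sol.y[:, -1] > 1.0)) # least-loaded class still above one individual
    print(f"{name} modification: least-loaded class at t = 500 is {best}")
```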
Objectives: Typical BPPV forms are widespread and easily diagnosed disorders. However, some forms of labyrinthine lithiasis can differ from the typical BPPV paradigm, showing their own signs and symptoms and resulting in variable therapeutic responses. The aim of this retrospective study is to describe the incidence of the so-called atypical forms compared to the more common BPPV and to characterize their clinical behavior. Methods: This retrospective study analyzed clinical and instrumental data of 139 patients evaluated over a 12-month period at a referral center. Patients were divided into two groups. The first group (Group A) included patients with so-called “typical” and unilateral labyrintholithiasis, while the second group (Group B) included patients with so-called “atypical” forms. Results: Based on clinical characteristics, 82 patients were assigned to Group A and 57 (41.01%) to Group B. In Group A, resolution of the clinical picture required fewer sessions and a smaller number of therapeutic maneuvers than in Group B (p < 0.001). Furthermore, in Group A, resolution of symptoms was observed immediately after one of the therapeutic maneuvers performed in 74.07% of cases, while in Group B, resolution of the clinical picture was observed during one of the follow-up visits in 39.66% of cases (p < 0.001). Conclusions: Although considered rare, “atypical” forms have an increased prevalence in tertiary centers. The location of the canaliths within the labyrinth can be hypothesized based on the pattern of nystagmus, which serves as a guide for treatment.
Soil electrical conductivity (EC) is a key indicator of soil salinity and sustainability, particularly in arid and semi-arid regions. Accurate estimation of EC is essential for managing soil salinity and ensuring crop productivity. Five pedotransfer functions (PTFs) were developed and evaluated for predicting electrical conductivity in a saturated paste extract using soil parameters, such as particle size analysis, pH, organic carbon, total nitrogen, cation exchange capacity, and electrical conductivity in a 1:5 soil-to-water extract, in agricultural soils of northern Tunisia. The accuracy of each PTF was systematically evaluated: PTF1 yielded an R² of 0.85, PTF2 (stepwise regression) an R² of 0.71, PTF3 an R² of 0.84, PTF4 (Lasso/Ridge regression) an R² of 0.89, and PTF5 an R² of 0.83. Our findings revealed regional variations in soil salinity, with certain areas showing elevated salinity levels that could affect agricultural sustainability. This research emphasizes the importance of developing ad hoc PTFs as a reliable tool for predicting soil salinity and, consequently, supporting sustainable soil management in northeastern Tunisia.
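As a schematic of how a regression-based PTF such as PTF4 can be built and scored, the snippet below fits a cross-validated Lasso pipeline. The predictor set and the synthetic data are placeholders, not the Tunisian dataset or the study's exact model.

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical Lasso-based pedotransfer function (in the spirit of PTF4).
rng = np.random.default_rng(0)
n = 120
X = np.column_stack([
    rng.uniform(5, 60, n),      # clay content (%)
    rng.uniform(6.5, 8.5, n),   # pH
    rng.uniform(0.2, 2.5, n),   # organic carbon (%)
    rng.uniform(0.1, 4.0, n),   # EC of the 1:5 soil-to-water extract (dS/m)
])
y = 5.8 * X[:, 3] + 0.03 * X[:, 0] + rng.normal(0, 0.5, n)   # ECe (dS/m), toy relation

ptf = make_pipeline(StandardScaler(), LassoCV(cv=5, random_state=0))
scores = cross_val_score(ptf, X, y, cv=5, scoring="r2")
print("cross-validated R2: %.2f +/- %.2f" % (scores.mean(), scores.std()))
```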
Background: Current research is paying more attention to neurological outcomes and quality of life after life-threatening events. Children with heart disease are particularly vulnerable, especially after resuscitation events. While newer data show that adults with heart failure and a left-ventricular assist device suffer from a higher incidence of depression, mental health in pediatric heart disease patients is poorly understood. This is the first study in Germany to examine the quality of life and psychological burden in cardiac arrest survivors with congenital or acquired heart disease. Methods: This monocentric study retrospectively analyzed survival outcomes of pediatric heart disease patients who underwent in-hospital resuscitation between 2008 and 2022. The PedsQL and the Strengths and Difficulties Questionnaire were prospectively administered to survivors to assess quality of life and emotional/behavioral problems, while academic achievements were additionally documented. Results: Of 127 patients experiencing cardiac arrest, 91 (71.7%) survived to discharge. Most had complex congenital heart diseases; mean cardiopulmonary resuscitation duration was 14 min. Five patients received extracorporeal cardiopulmonary resuscitation. Of the 22 patients receiving follow-up care at the pediatric cardiology outpatient clinic at the time of the study, 14 returned completed questionnaires. Overall quality of life was comparable to healthy controls, though those with prolonged or multiple resuscitations showed lower physical, emotional, social, and school functioning scores. The Strengths and Difficulties Questionnaire revealed no pathological scores but elevated average values for hyperactivity and emotional problems in parent reports, and emotional and peer difficulties in self-reports, indicating increased psychological burden. Conclusions: While survival rates are comparable to international data, gaps exist in structured follow-up and neuropsychological care, especially for high-risk subgroups like ECMO survivors. Routine neuropsychological screening and multidisciplinary outpatient programs are essential to improve long-term follow-up care.
Bispecific peptides represent an emerging therapeutic platform in immunotherapy, offering simultaneous engagement of two distinct molecular targets to enhance specificity, functional synergy, and immune modulation. Their compact structure and modular design enable precise interaction with protein–protein interfaces and shallow binding sites that are otherwise difficult to target. This review summarizes current design strategies of bispecific peptides, including fused, linked, and self-assembled architectures, and elucidates their mechanisms in bridging tumor cells with immune effector cells and blocking immune checkpoint pathways. Recent developments highlight their potential applications not only in oncology but also in autoimmune and infectious diseases. Key translational challenges, including proteolytic stability, immunogenicity, delivery barriers, and manufacturing scalability, are discussed, along with emerging peptide engineering and computational design strategies to address these limitations. Bispecific peptides offer a versatile and adaptable platform poised to advance precision immunotherapy and expand therapeutic options across immune-mediated diseases.
Feature screening procedures for ultra-high dimensional longitudinal data are widely studied, but most require model assumptions, and their screening performance can deteriorate when the model is misspecified. To address this problem, a new model-free method is introduced in which feature screening is performed via sample splitting and data aggregation. Distance correlation is used to measure the association at each time point separately, while longitudinal correlation is modeled by a specific cumulative distribution function to achieve efficiency. In addition, we extend this new method to handle situations where the predictors are correlated. Both methods possess excellent asymptotic properties and are capable of handling longitudinal data with unequal numbers of repeated measurements and unequal intervals between repeated measurement time points. Compared to other model-free methods, the two new methods are relatively insensitive to within-subject correlation, and they can help reduce the computational burden when applied to longitudinal data. Finally, simulated and empirical examples show that both new methods achieve better screening performance.
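The sketch below illustrates the core screening idea at a single time point: rank predictors by their sample distance correlation with the response and keep the top-ranked ones. It is a generic illustration rather than the paper's sample-splitting and aggregation procedure, and the data and cutoff are toy choices.

```python
import numpy as np

def distance_correlation(x, y):
    # Sample distance correlation between two 1-D arrays (double-centered V-statistic).
    def centered(a):
        d = np.abs(a[:, None] - a[None, :])
        return d - d.mean(0) - d.mean(1)[:, None] + d.mean()
    A, B = centered(x), centered(y)
    dcov2 = max((A * B).mean(), 0.0)
    dvar_x, dvar_y = (A * A).mean(), (B * B).mean()
    return 0.0 if dvar_x * dvar_y == 0 else np.sqrt(dcov2 / np.sqrt(dvar_x * dvar_y))

rng = np.random.default_rng(1)
n, p = 200, 1000
X = rng.normal(size=(n, p))
y = np.sin(X[:, 0]) + X[:, 1] ** 2 + 0.5 * rng.normal(size=n)   # depends only on the first two predictors

scores = np.array([distance_correlation(X[:, j], y) for j in range(p)])
top = np.argsort(scores)[::-1][:10]          # keep the highest-ranked predictors
print("top-ranked predictor indices:", top)
```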
Based on the perspective of multi-stage dynamic competition, this study constructs a discrete dynamic model of price competition between the “direct sales” and “resale” channels in cross-border e-commerce (CBEC) under three blockchain deployment modes. Drawing on nonlinear dynamics theory, the Nash equilibrium of the system and its stability conditions are examined. Using numerical simulations, the effects of factors such as the channel price adjustment speed, tariff rate, and commission ratio on the dynamic evolution, entropy, and stability of the system under the empowerment of blockchain technology are investigated. Furthermore, the impact of noise factors on system stability and the corresponding chaos control strategies are further analyzed. This study finds that a single-channel deployment tends to induce asymmetric system responses, whereas dual-channel collaborative deployment helps enhance strategic coordination. An increase in price adjustment speed, tariffs, and commission rates can drive the system’s pricing dynamics from a stable state into chaos, thereby raising its entropy, while the adoption of blockchain technology tends to weaken dynamic stability. Therefore, after deploying blockchain technology, each channel should make its pricing decisions more cautiously. Moderate noise can exert a stabilizing effect, whereas excessive disturbances may cause the system to diverge. Hence, enterprises should carefully assess the magnitude of disturbances and capitalize on the positive effects brought about by moderate fluctuations. In addition, the delayed feedback control method can effectively suppress chaotic fluctuations and enhance system stability, demonstrating strong adaptability across different blockchain deployment modes.
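A toy version of such a discrete price-adjustment game with a delayed feedback control (DFC) term is sketched below. The linear demand functions, parameter values, and control gain are illustrative assumptions rather than the paper's model; setting the gain to zero shows the uncontrolled oscillation.

```python
import numpy as np

# Two channels adjust prices with bounded rationality; a DFC term damps the oscillation.
a, b, d = 10.0, 1.0, 0.5      # demand intercept, own-price slope, cross-price slope
c1, c2 = 2.0, 2.5             # marginal costs of the direct-sales and resale channels
alpha = 0.115                 # price adjustment speed (larger values destabilize the map)
k = -0.5                      # DFC gain, tuned by trial; k = 0 gives the uncontrolled map

def marginal_profit(p_own, p_other, c):
    # pi = (p_own - c) * (a - b * p_own + d * p_other)  ->  d(pi)/d(p_own)
    return a - 2.0 * b * p_own + d * p_other + b * c

p_curr = np.array([3.0, 3.0])
p_prev = p_curr.copy()
for t in range(300):
    grad = np.array([marginal_profit(p_curr[0], p_curr[1], c1),
                     marginal_profit(p_curr[1], p_curr[0], c2)])
    control = k * (p_prev - p_curr)                     # delayed feedback term
    p_next = p_curr + alpha * p_curr * grad + control   # gradient price adjustment
    p_prev, p_curr = p_curr, p_next
print("prices after 300 periods:", np.round(p_curr, 3))
```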
This paper presents a dual-objective Model Predictive Control (MPC) framework for fixed-wing unmanned aerial vehicles (UAVs). The framework was designed with two goals in mind: improving longitudinal motion control and optimizing the flight trajectory when connectivity and no-fly zone constraints are present. A multi-input–multi-output model derived from NASA’s Generic Transport Model (T-2) was used and linearized for controller design. We compared the MPC controller with a Linear Quadratic Regulator (LQR) in MATLAB simulations. The results showed that MPC reached the reference values faster, with less overshoot and phase error, particularly under sinusoidal reference inputs. These differences became even more evident when the UAV had to fly in windy conditions. Trajectory optimization was carried out using the CasADi framework, which allowed us to evaluate paths that balance two competing requirements: reaching the target quickly and maintaining cellular connectivity. We observed that changing the weights of the cost function had a strong influence on the trade-off between direct flight and reliable communication, especially when multiple base stations and no-fly zones were included. Although the study was limited to simulations at constant altitude, the results suggest that MPC can serve as a practical tool for UAV missions that demand both accurate flight control and robust connectivity. Future work will extend the framework to more complete models and experimental validation.
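The snippet below sketches this kind of trajectory trade-off in CasADi's Opti interface: reach a target quickly while keeping the link to a base station short and staying outside a no-fly zone. The single-integrator kinematics, weights, and geometry are placeholder assumptions, not the linearized GTM (T-2) model or the paper's exact cost function.

```python
import casadi as ca

# Planar, constant-altitude toy problem: minimize a weighted sum of distance-to-target,
# distance-to-base-station (connectivity proxy), and control effort.
N, dt = 60, 1.0
start, target = ca.DM([0.0, 0.0]), ca.DM([50.0, 40.0])
base_station = ca.DM([25.0, 10.0])             # stay near this point to keep the link short
nfz_center, nfz_radius = ca.DM([30.0, 25.0]), 8.0
w_goal, w_conn, w_effort = 1.0, 0.02, 0.1      # cost weights controlling the trade-off

opti = ca.Opti()
X = opti.variable(2, N + 1)                    # position at constant altitude
U = opti.variable(2, N)                        # velocity commands

cost = 0
for i in range(N):
    opti.subject_to(X[:, i + 1] == X[:, i] + dt * U[:, i])                   # kinematics
    opti.subject_to(ca.sumsqr(U[:, i]) <= 2.0 ** 2)                          # speed limit
    opti.subject_to(ca.sumsqr(X[:, i + 1] - nfz_center) >= nfz_radius ** 2)  # no-fly zone
    cost += w_goal * ca.sumsqr(X[:, i + 1] - target)                         # reach the target fast
    cost += w_conn * ca.sumsqr(X[:, i + 1] - base_station)                   # connectivity proxy
    cost += w_effort * ca.sumsqr(U[:, i])                                    # control effort
opti.subject_to(X[:, 0] == start)
opti.minimize(cost)
opti.solver("ipopt")
sol = opti.solve()
print("final position:", sol.value(X[:, N]))
```

Raising w_conn pulls the path toward the base station at the cost of a longer route, which mirrors the weight sensitivity reported above.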
South Africa’s urbanization is often driven by poverty, unemployment, and limited resource access. Unearned income, such as social grants and other sources, has contributed to poverty alleviation. However, concerns have also been raised that this unearned support may reduce individuals’ motivation to pursue earned income opportunities. This study investigates whether a two-step modelling approach provides better insight than a single-framework model to assess the influence of youths’ access to resources on household income generation. The results indicate that the two-step model is more effective, as different factors influence the decision to earn income and the amount earned. Youth unemployment and household receipt of remittances had similar effects on both the decision to earn income and the amount earned. In contrast, youth involvement in agriculture was positively associated with the decision to earn income but negatively associated with the amount of income. Youth-headed households face additional constraints due to limited access to and ownership of productive resources. The study concludes that a two-step approach provides more information and thus a more accurate understanding of rural income dynamics. Enhancing youth access to quality resources and evaluating the effectiveness of support programs are essential for fostering income generation and improving rural livelihoods.
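A minimal two-part (hurdle-style) sketch of such a two-step approach is shown below: a probit for the decision to earn income, then OLS on the (log) amount for earners only. The variable names and simulated data are placeholders mirroring the reported signs, not the study's survey data or exact estimator.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 800
youth_unemployed = rng.binomial(1, 0.4, n)
remittances = rng.binomial(1, 0.3, n)
in_agriculture = rng.binomial(1, 0.5, n)
X = sm.add_constant(np.column_stack([youth_unemployed, remittances, in_agriculture]))

# Simulated behaviour: agriculture raises the chance of earning but lowers the amount.
earns = (0.3 - 0.6 * youth_unemployed - 0.4 * remittances + 0.5 * in_agriculture
         + rng.normal(0, 1, n)) > 0
amount = np.where(earns,
                  np.exp(8 - 0.5 * youth_unemployed - 0.3 * remittances
                         - 0.4 * in_agriculture + rng.normal(0, 0.5, n)), 0.0)

step1 = sm.Probit(earns.astype(float), X).fit(disp=False)   # step 1: participation decision
step2 = sm.OLS(np.log(amount[earns]), X[earns]).fit()       # step 2: amount, earners only
print("participation coefficients:", step1.params.round(2))
print("amount coefficients:       ", step2.params.round(2))
```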
A phosphogypsum flotation tailings-derived zeolite (PGTZ) was synthesized from the tailings produced during the reverse flotation of phosphogypsum through alkaline fusion and hydrothermal treatment. Response surface methodology (RSM) with a three-level Box–Behnken design (BBD) was used to assess the adsorption of methylene blue (MB) by PGTZ. Polynomial regression models were developed to analyze the effects of process parameters on adsorption capacity (qe). The maximum MB adsorption occurred under the following optimized conditions: PGTZ dosage = 5.31 g·L⁻¹; initial MB concentration = 294.59 mg·L⁻¹; pH = 7.42; and adsorption time = 187.89 min. Additionally, adsorption isotherm and kinetic models were fitted to the experimental data to determine model parameters. The Langmuir isotherm model and a pseudo-second-order kinetic model incorporating intraparticle diffusion effectively predicted MB adsorption onto PGTZ. Thermodynamic analyses indicated that the adsorption process was spontaneous, with strong chemical interactions between MB and PGTZ.
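For illustration, the snippet below fits the Langmuir isotherm, qe = qmax·KL·Ce/(1 + KL·Ce), to equilibrium data by nonlinear least squares; the data points are synthetic placeholders, not the PGTZ measurements reported in the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(Ce, q_max, K_L):
    return q_max * K_L * Ce / (1.0 + K_L * Ce)

Ce = np.array([5, 20, 50, 100, 150, 200, 300], dtype=float)    # equilibrium concentration, mg/L
qe = np.array([18, 55, 95, 130, 148, 158, 170], dtype=float)   # adsorbed amount, mg/g

(q_max, K_L), _ = curve_fit(langmuir, Ce, qe, p0=[150.0, 0.01])
residuals = qe - langmuir(Ce, q_max, K_L)
r2 = 1 - np.sum(residuals ** 2) / np.sum((qe - qe.mean()) ** 2)
print(f"q_max = {q_max:.1f} mg/g, K_L = {K_L:.4f} L/mg, R2 = {r2:.3f}")
```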
by Pauline Celine Raoul, Maurizio Romano, Francesca Sofia Galli, Marco Cintoni, Esmeralda Capristo, Vincenzina Mora, Maria Cristina Mele, Antonio Gasbarrini and Emanuele Rinninella
Nutrients 2025, 17(20), 3251; https://doi.org/10.3390/nu17203251 - 16 Oct 2025
Background: Artificial sweeteners, widely used as non-nutritive sugar substitutes, are increasingly prevalent in ultra-processed products. Although promoted for weight management due to their minimal caloric content, their impact on systemic inflammation remains uncertain. This systematic review of animal studies aims to evaluate the association between artificial sweetener consumption and inflammatory biomarkers. Methods: A systematic literature search was conducted up to May 2025 across PubMed, Web of Science, and Scopus, following PRISMA guidelines and registered in PROSPERO (CRD420251084004). Risk of bias was assessed using the ARRIVE guidelines and SYRCLE’s risk of bias tool. Results: Thirty-seven animal studies were included: aspartame (n = 17), sucralose (n = 16), acesulfame potassium (n = 5), and saccharin (n = 4). Protocols varied in terms of dosage, exposure duration, animal models, and assessment of inflammatory outcomes, including C-reactive protein, interleukins (IL-6 and IL-1β), and tumor necrosis factor alpha. Aspartame and sucralose could elevate inflammatory markers, with sucralose also disrupting gut integrity and microbiota. Acesulfame K and saccharin showed variable, dose-dependent effects. Conclusions: This systematic review of animal studies suggests a possible mechanistic association between the consumption of certain artificial sweeteners and systemic inflammation. However, this relationship remains to be clarified and warrants exploration through well-designed, large-scale randomized controlled trials.
The peroxisome proliferator-activated receptors (PPAR-α, PPAR-δ, and PPAR-γ) are transcription factors that belong to the nuclear hormone receptor superfamily. Upon activation by specific lipids, they regulate gene expression by directly binding to PPAR response elements (PPREs) in the DNA. Although the functions of the different PPARs are specific to the isoform, tissue, and context, all three PPARs are generally involved in energy homeostasis through lipid sensing in physiological conditions. Importantly, there is increasing evidence linking PPARs with malignant behavior in cancer, regulating features frequently attributed to the aggressive subpopulation of cancer stem cells (CSCs): self-renewal, tumor initiation, chemoresistance, metastasis, and immune evasion. However, contradictory effects have been described for each isoform in various cancer types, and their implication in these malignant features may not consistently follow a pro- or anti-tumoral pattern. In this review, we examine the current knowledge on the role of the PPAR family members in cancer, with a special focus on cancer stemness, and discuss the potential for PPARs as therapeutic targets in CSC-driven relapse and resistance.
This review investigates recent progress in the field of PLA-based conductive composites for 3D printing. First, it introduces PLA as a biodegradable thermoplastic polymer, describing its processing and recycling methods and highlighting its environmental advantages over conventional polymers. In order to evaluate its printability, PLA is briefly compared to other commonly used thermoplastics in additive manufacturing. The review then examines the incorporation of conductive fillers such as carbon black, carbon nanotubes, graphene, and metal particles into the PLA matrix, with a particular focus on the percolation threshold and its effect on conductivity. Critical challenges such as filler dispersion, agglomeration, and conductivity anisotropy are also highlighted. Recent results are summarized to identify promising formulations that combine improved electrical performance with acceptable mechanical integrity, while also emphasizing the structural and morphological characteristics that govern these properties. Finally, potential applications in the fields of electronics, biomedicine, energy, and electromagnetic shielding are discussed. From an overall perspective, the review highlights that while PLA-based conductive composites show great potential for sustainable functional materials, further progress is needed to improve reproducibility, optimize processing parameters, and ensure reliable large-scale applications.
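For orientation, the dependence of composite conductivity on filler loading above the percolation threshold is commonly described by the classical power law of percolation theory (a standard relation, not an equation quoted from this review):

\[ \sigma(\varphi) = \sigma_0 \,(\varphi - \varphi_c)^{t}, \qquad \varphi > \varphi_c, \]

where \(\varphi\) is the filler volume fraction, \(\varphi_c\) the percolation threshold, \(\sigma_0\) a scaling prefactor, and \(t\) a critical exponent typically around 1.6–2.0 for three-dimensional composites.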
Improving the efficiency of spark-ignited (SI) engines while simultaneously reducing emissions remains a critical challenge in meeting global energy demands and increasingly stringent environmental regulations. Lean burn combustion is a proven strategy for increasing efficiency in SI engines. However, the air dilution level is limited by the mixture’s ignition ability and poor combustion efficiency and stability. A promising method to extend the dilution limit and ensure stable combustion is the implementation of an active pre-chamber combustion system. The pre-chamber spark-ignited (PCSI) engine facilitates stable and rapid combustion of very lean mixtures in the main chamber by utilizing high ignition energy from multiple flame jets penetrating from the pre-chamber (PC) to the main chamber (MC). Together with the increase in efficiency by dilution of the mixture, nitrogen oxide (NOX) emissions are lowered. However, at peak efficiencies, the NOX emissions are still too high and require aftertreatment. The use of exhaust gas recirculation (EGR) as a dilutant might enable simple aftertreatment by using a three-way catalyst. This study experimentally investigates the use of EGR as a dilution method in a PCSI engine fueled by methane and analyzes the benefits and drawbacks compared to the use of air as a dilution method. The experimental results are categorized into three sets: measurements at wide open throttle (WOT) conditions, at a constant engine load of indicated mean effective pressure (IMEP) of 5 bar, and at IMEP = 7 bar, all at a fixed engine speed of 1600 rpm. The experimental results were further enhanced with numerical 1D/0D simulations to obtain parameters such as the residual combustion products and excess air ratio in the pre-chamber, which could not be directly measured during the experimental testing. The findings indicate that air dilution achieves higher indicated efficiency than EGR, at all operating conditions. However, EGR shows an increasing trend in indicated efficiency with the increase in EGR rates but is limited due to misfires. In both dilution approaches, at peak efficiencies, aftertreatment is required for exhaust gases because they are above the legal limit, but a significant decrease in NOX emissions can be observed.
The functional performance of porous metals and alloys is dictated by pore features such as size, connectivity, and morphology. While methods like mercury porosimetry or gas pycnometry provide cumulative information, direct observation via scanning electron microscopy (SEM) offers detailed insights unavailable through other means, especially for microscale or nanoscale pores. Each scanned image can contain hundreds or thousands of pores, making efficient identification, classification, and quantification challenging due to the processing time required for pixel-level edge recognition. Traditionally, pore outlines on scanned images were hand-traced and analyzed using image-processing software, a process that is time-consuming and often inconsistent for capturing both large and small pores while accurately removing noise. In this work, a software framework was developed that leverages modern computing tools and methodologies for automated image processing for pore identification, classification, and quantification. Vectorization was implemented as the final step to utilize the direction and magnitude of unconnected endpoints to reconstruct incomplete or broken edges. Combined with other existing pore analysis methods, this automated approach reduces manual effort dramatically, reducing analysis time from multiple hours per image to only minutes, while maintaining acceptable accuracy in quantified pore metrics.
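A simplified sketch of the endpoint-bridging idea is given below: skeletonize the edge map, locate free endpoints, and connect nearby endpoint pairs whose outgoing directions roughly face each other. The heuristics and thresholds are illustrative assumptions, not the framework's actual vectorization algorithm.

```python
import numpy as np
from scipy.ndimage import convolve
from skimage.draw import line
from skimage.morphology import skeletonize

def bridge_broken_edges(edge_mask, max_gap=15, min_alignment=0.5):
    skel = skeletonize(edge_mask > 0)
    neighbors = convolve(skel.astype(int), np.ones((3, 3)), mode="constant")
    endpoints = np.argwhere(skel & (neighbors == 2))    # the pixel itself plus one neighbor

    directions = []                                     # outward direction at each endpoint
    for r, c in endpoints:
        r0, c0 = max(r - 3, 0), max(c - 3, 0)
        patch = np.argwhere(skel[r0:r + 4, c0:c + 4])
        v = np.array([r - r0, c - c0], dtype=float) - patch.mean(axis=0)
        directions.append(v / (np.linalg.norm(v) + 1e-9))

    bridged = skel.copy()
    for i in range(len(endpoints)):
        for j in range(i + 1, len(endpoints)):
            gap = endpoints[j] - endpoints[i]
            dist = np.linalg.norm(gap)
            if dist == 0 or dist > max_gap:
                continue
            u = gap / dist
            # Bridge only if the two free ends roughly face each other across the gap.
            if np.dot(directions[i], u) > min_alignment and np.dot(directions[j], -u) > min_alignment:
                rr, cc = line(*endpoints[i], *endpoints[j])
                bridged[rr, cc] = True
    return bridged

mask = np.zeros((40, 40), dtype=bool)
mask[20, 5:15] = True
mask[20, 22:35] = True                          # the same edge, broken by a 7-pixel gap
print(bridge_broken_edges(mask)[20, 15:22])     # the gap is reconnected (all True)
```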
by Beatriz M. Ferrer-González, Ricardo Aguilar-Garay, Carla I. Acosta-Ramírez, Liliana Alamilla-Beltrán, Georgina Calderón-Domínguez, Humberto Hernández-Sánchez and Gustavo F. Gutiérrez-López
Popcorn maize (Zea mays everta) exhibits complex morphologies that challenge structural analysis. This study assessed the fidelity of the three-dimensional (3D) reconstruction and printing of four popcorn morphologies (unilateral, bilateral, multilateral, and mushroom) by integrating structured-light 3D scanning and digital image analysis (DIA), an approach that can support the construction of food replicas. Morphometric parameters (projected area, perimeter, Feret diameter, circularity, and roundness) and fractal descriptors (fractal dimension, lacunarity, and entropy) were quantified as the relative ratios of printed/real parameters (P/R) to compare real flakes with their 3D-printed counterparts. Results revealed the lowest mean errors for Feret diameter (6%) and projected area (10%), while deviations in circularity and roundness were more pronounced in mushroom flakes. With respect to the actual mean values of the morphological parameters, real flakes showed slightly larger perimeter values (86 mm for real and 82 mm for printed objects) and a higher fractal dimension (1.36 for real and 1.33 for printed), indicating greater texture irregularity, whereas the projected area remained highly comparable (225 mm² for real and 229 mm² for printed objects). These parameters reinforced that the overall morphological fidelity remained high (P/R = 0.9–1.0), despite localized deviations in circularity and fractal descriptors. Less complex morphologies (unilateral and bilateral) demonstrated higher structural fidelity (P/R = 0.95), whereas multilateral and mushroom types showed greater variability due to surface irregularity. Fractal dimension and lacunarity effectively described textural complexity, highlighting the role of flake geometry and moisture in determining expansion patterns and printing accuracy. Principal Component Analysis confirmed that circularity and fractal indicators are critical descriptors for distinguishing morphological fidelity. Overall, the findings demonstrated that 3D scanning and printing provide reliable physical replicas of irregular food structures such as popcorn flakes, supporting their application in food engineering.
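For context, the fractal dimension reported above is typically estimated by box counting: cover the outline with boxes of decreasing size and regress the log of the number of occupied boxes on the log of the inverse box size. A minimal sketch (with a toy outline, not the study's DIA pipeline or settings) is:

```python
import numpy as np

def box_counting_dimension(mask, box_sizes=(2, 4, 8, 16, 32)):
    counts = []
    for s in box_sizes:
        h, w = (mask.shape[0] // s) * s, (mask.shape[1] // s) * s
        blocks = mask[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(np.count_nonzero(blocks.any(axis=(1, 3))))   # occupied boxes at scale s
    # Slope of log(count) versus log(1/size) estimates the fractal dimension.
    slope, _ = np.polyfit(np.log(1.0 / np.array(box_sizes)), np.log(counts), 1)
    return slope

# Toy outline: a thin circle, whose box-counting dimension should be close to 1.
yy, xx = np.mgrid[0:256, 0:256]
outline = np.abs(np.hypot(yy - 128, xx - 128) - 90) < 1.5
print("estimated fractal dimension:", round(box_counting_dimension(outline), 2))
```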
This study investigated the structural and seismic performance of monolithic stone columns in the historical Mosque–Cathedral of Córdoba, with a focus on the earliest section constructed during the reign of Abd al-Rahman I (8th century). An advanced 3D finite element (FE) model was developed to assess the effects of geometric imperfections and component interactions on the stability of the columns under both vertical and horizontal static loading. Three distinct modelling strategies were employed in OpenSees 3.7.1, incorporating column inclination and contact elements to simulate mortar interfaces. Material properties were calibrated using experimental data and in situ observations. The gravitational analysis showed no significant damage in any of the configurations, in agreement with the observed undamaged state of the structure. Conversely, the horizontal analyses revealed that tensile damage occurred predominantly at the lower shaft. The inclusion of contact elements led to a significant reduction in lateral resistance, highlighting the importance of accounting for friction and interface behaviour. Column inclination was found to have a significant influence on failure patterns. These findings highlight the critical role of detailed modelling in evaluating structural vulnerabilities, features that are generally not included in the numerical modelling and evaluation of heritage buildings. Consequently, they can contribute to a better understanding of the seismic behaviour of historic masonry structures.
Single-crystal diamond (SCD) combining favorable dielectric properties with low stress is regarded as an ideal material for terahertz windows. However, the intrinsic step-like growth pattern of SCD can easily lead to stress concentration and degraded dielectric performance. In this study, a “two-step method” was designed to optimize the growth mode of SCD. A novel large-platform growth pattern was achieved by controlling diamond seed crystal etching and the epitaxial layer growth process. The experimental results indicate that, compared with the traditional step-like growth mode, the root mean square (RMS) roughness of the as-prepared SCD decreased from 5 nm (step growth) to 0.4–1.0 nm (platform growth) within a 5 μm × 5 μm area. Furthermore, the growth step height difference diminished from 30 nm to 3–4 nm, reducing the step-induced stress to a mere 0.1976 GPa. Additionally, at frequencies from 0.1 to 3 THz, the diamond windows exhibit a lower refractive index, dielectric constant, and dielectric loss. Finally, large-platform growth effectively reduces phenomena such as dislocation pile-up caused by step growth, enabling low-damage ultra-precision machining of diamond windows measuring 1 mm in diameter.
Objective: Preterm birth has been associated with an elevated risk of a broad range of neurodevelopmental impairments, including attentional deficits. This systematic review and meta-analysis aimed to synthesize the existing evidence on sustained and selective attention in school-aged children born preterm. Methods: Following PRISMA guidelines, a comprehensive literature search was conducted across PubMed, Ovid MEDLINE, EMBASE, and Web of Science. Eligible studies included assessments of sustained and/or selective attention in children aged 5–12 years born before 37 weeks of gestation. Data from 15 studies (sustained attention) and 12 studies (selective attention) were analyzed using random-effects meta-analyses. Additionally, subgroup analyses were performed based on gestational age. Results: Preterm-born children showed significantly poorer performance in sustained (Hedges’ g = −0.31, p < 0.001) and selective attention (Hedges’ g = −0.27, p < 0.001) compared to term-born controls. While sustained attention deficits were consistent across all gestational age subgroups, selective attention deficits were more pronounced in very early and extremely early preterm-born children. Moderate to late preterm-born children showed less impairment in selective attention tasks. Conclusions: Preterm birth is associated with measurable and persistent deficits in both sustained and selective attention, with greater vulnerability in children born before 32 weeks of gestation. These findings underscore the importance of implementing early monitoring and intervention strategies specifically designed to support attentional development in this high-risk population.
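For reference, the pooled effect sizes quoted above are standardized mean differences with a small-sample correction; a standard formulation of Hedges' g (a textbook definition, not an equation taken from the paper) is

\[ g = J \,\frac{\bar{X}_{\mathrm{preterm}} - \bar{X}_{\mathrm{term}}}{s_p}, \qquad s_p = \sqrt{\frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2}}, \qquad J \approx 1 - \frac{3}{4(n_1 + n_2) - 9}, \]

so the negative values reported here indicate poorer attention performance in the preterm-born group.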
Designing Low Earth Orbit (LEO) constellations for the continuous, collaborative observation of space objects in MEO/GEO is a complex optimization task, frequently limited by prohibitive computational costs. This study introduces an efficient surrogate-based framework to overcome this challenge. Our approach integrates Optimized Latin Hypercube Sampling (OLHS) with a Radial Basis Function (RBF) model to minimize the required number of satellites. In a comprehensive case study targeting 18 diverse space objects—including communication satellites in GEO (e.g., EUTELSAT, ANIK) and navigation satellites in MEO/IGSO from GPS, Galileo, and BeiDou constellations—the method proved highly effective and scalable. It successfully designed a 208-satellite Walker constellation that provides 100% continuous coverage over a 36-h period. Furthermore, the design ensures that each target is simultaneously observed by at least three satellites at all times. A key finding is the method’s remarkable efficiency and scalability: the optimal solution for this larger problem was found using only 46 high-fidelity function evaluations, maintaining a computational time that was 5–8 times faster than traditional global optimization algorithms. This research demonstrates that surrogate-assisted optimization can drastically lower the computational barrier in constellation design, offering a powerful tool for building cost-effective and robust Space Situational Awareness (SSA) systems.
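The loop below sketches the surrogate workflow in the same spirit: draw an optimized Latin hypercube sample of the design variables, fit an RBF model to the expensive evaluations, and search the surrogate cheaply. The design variables, bounds, and toy objective stand in for the high-fidelity coverage simulation and are not the paper's actual parameterization.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.optimize import differential_evolution
from scipy.stats import qmc

lower, upper = np.array([400.0, 45.0, 4.0]), np.array([1200.0, 90.0, 16.0])
# design variables (placeholders): altitude [km], inclination [deg], number of planes

def expensive_objective(x):
    # Stand-in for a high-fidelity coverage simulation (e.g., satellites needed for 3-fold coverage).
    alt, inc, planes = x
    return 150 + 0.05 * (900 - alt) ** 2 / 1000 + 0.8 * abs(inc - 65) + 3.0 * abs(planes - 10)

sampler = qmc.LatinHypercube(d=3, optimization="random-cd", seed=0)    # optimized LHS
X = qmc.scale(sampler.random(46), lower, upper)                        # 46 expensive evaluations
y = np.array([expensive_objective(x) for x in X])

surrogate = RBFInterpolator(X, y, kernel="thin_plate_spline")          # cheap RBF model
result = differential_evolution(lambda x: surrogate(x[None, :])[0],
                                bounds=list(zip(lower, upper)), seed=0)
print("surrogate-optimal design:", np.round(result.x, 1),
      "| predicted objective:", round(result.fun, 1))
```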
by Luana Barbosa Dias, Thiago De Marchi, Ana Paula Vargas Visentin, Juliana Maria Chaves, Catia Santos Branco, Fernando Joel Scariot, Matheus Marinho Aguiar Lino, Older Manoel Araújo-Silva, Amanda Lima Pereira, Heliodora Leão Casalechi, Douglas Scott Johnson, Shaiane Silva Tomazoni and Ernesto Cesar Pinto Leal-Junior
Antioxidants 2025, 14(10), 1243; https://doi.org/10.3390/antiox14101243 - 16 Oct 2025
Background: Recent technological advances have sparked growing interest in high-power laser devices due to their capacity for energy delivery and therapeutic potential, especially in deeper tissues. This promising approach may be comparable to photobiomodulation for modulating inflammatory and redox processes in various tissues. However, to our knowledge, this is the first study to evaluate the safety profile and redox modulation capacity of high-power laser therapy in BV-2 microglial cells. Methods: This study investigated the cellular responses of BV-2 microglial cells exposed to three laser irradiation protocols using a high-power laser device (650/810/915/980 nm, 657 J total dose), applied at variable distances to simulate in vivo power attenuation. Cell viability, apoptosis, adenosine triphosphate (ATP) levels, mitochondrial membrane potential (MMP), reactive oxygen species (ROS), nitric oxide (NO), and intracellular calcium levels were assessed at multiple time points (5 min to 24 h). Results: Protocol-dependent effects were observed. Protocol A promoted early increases in cell viability and ATP levels, along with decreased apoptotic markers and ROS production, suggesting a protective bioenergetic response. In contrast, Protocol C showed transient increases in oxidative stress and reduced MMP, suggesting possible mitochondrial stress. A selective increase in NO levels under Protocol A also suggests modulation of inflammatory pathways without cytotoxicity. Conclusions: High-power laser therapy modulates redox balance, mitochondrial function, and inflammatory mediators (e.g., NO) in a dual-phase manner in BV-2 microglial cells. These findings contribute to defining safe and effective parameters for potential musculoskeletal and neurological applications.
This study examines the thermal sensitivity of the mechanical and optical transmission coefficients of a microoptoelectromechanical (MOEM) accelerometer based on evanescent coupling over a temperature range from −40 to +125 °C. Two types of optical measuring transducers are considered: one based on a directional coupler and one based on a resonator. The analysis covers the optical and mechanical components of the thermal sensitivity of the transmission coefficient. On the mechanical side, temperature changes alter the linear dimensions of the structure and its material characteristics and induce internal mechanical stresses. The temperature effect on the optical system of the accelerometer is governed by the thermo-optic effect of the materials the optical waveguides are made of. The study includes experiments on the temperature dependence of the refractive index of the optical films that compose the optical system of the MOEM accelerometer. The experiments show that the refractive index of the films grows with temperature, with a coefficient of 0.12642 ppm/°C for silicon nitride on the SiO2/Si substrate. For the optical measuring transducer based on a directional coupler, the thermal sensitivity of the accelerometer's optical transmission coefficient is 580 ppm/°C. For the resonator-based transducer, the thermal sensitivity is 0.33 °C⁻¹. The thermal sensitivity of the normalized mechanical transmission coefficient of the accelerometer is 120 ppm/°C. For optical measuring transducers based on a directional coupler, the contribution of the temperature-dependent refractive index change to the overall error is 5 times larger than that of the MOEM accelerometer's mechanical parameters, while for the resonator-based transducer the difference reaches 3000 times, meaning the latter can only operate in a thermostatically controlled environment.
Industrial safety in high-risk sectors such as mining, construction, oil and gas, petrochemicals, and offshore fishing remains a strategic global challenge due to the high incidence of occupational accidents and their human, financial, and legal consequences. Despite international standards and advancements in safety strategies, significant barriers persist in the effective implementation of a Zero Accident culture. This scoping review, conducted under PRISMA-ScR guidelines, analyzed 11 studies selected from 232 records, focusing on documented practices in both multinational corporations from developed economies and local companies in emerging markets. The methodological synthesis validated theoretical models, practical interventions, and regulatory frameworks across diverse industrial settings. The findings led to the construction of a five-pillar model that provides the structural foundation for a comprehensive safety strategy: (1) strategic safety planning, defining long-term vision, mission, and objectives with systematic risk analysis; (2) executive leadership and commitment, expressed through decision-making, resource allocation, and on-site engagement; (3) people and competencies, emphasizing continuous training, communities of practice, and the development of safe behaviors; (4) process risk management, using validated protocols, structured methodologies, and early warning systems; and (5) performance measurement and auditing, combining reactive and proactive indicators within continuous improvement cycles. The results demonstrate that only a holistic approach, one that aligns strategy, culture, and performance, can sustain a robust safety culture. While notable reductions in incident rates were observed when these pillars were applied, the current literature is dominated by theoretical contributions and model replication from developed countries, with limited empirical evaluation in emerging contexts. This study provides a comparative, practice-oriented framework to guide the implementation and refinement of safety systems in high-risk organizations. This review was registered in Open Science Framework (OSF): 10.17605/OSF.IO/XFDPR.
Bloom-forming dinoflagellates and euglenophytes were observed in the coastal waters of Hammam-Lif (Southern Mediterranean) during a green tide event on 3 June 2023. The bloom was dominated by Lepidodinium chlorophorum, identified through ribotyping, with densities reaching 2.3 × 10⁷ cells·L⁻¹. Euglena spp. and Eutreptiella spp. contributed to the discoloration, with abundances up to 2.9 × 10⁷ cells·L⁻¹. Environmental data revealed significant depletion of nitrite and nitrate, coinciding with a rapid increase in sunlight duration, likely promoting the proliferation of L. chlorophorum and euglenophytes. By 5 June, two days after the bloom, nutrient stocks were exhausted. Diatoms appeared limited by low silicate concentrations (<0.05 µmol·L⁻¹), while dissolved inorganic phosphate and ammonia nitrogen were elevated during the bloom (0.88 and 4.8 µmol·L⁻¹, respectively) and then decreased significantly afterward (0.23 and 1.06 µmol·L⁻¹, respectively). Low salinity (34.0) indicated substantial freshwater input from the Meliane River, likely contributing to nutrient enrichment and bloom initiation. After the event, phytoplankton abundance and chlorophyll levels declined, with a shift from dinoflagellates to diatoms. The accumulation of pigments (chlorophyll b and carotenoids) and the presence of mycosporine-like amino acids (MAAs) during and after the bloom suggest that UV radiation and ammonia nitrogen were key drivers of this green tide.
by Yenny Trinidad Fierro-Salgado, Manuel Reiriz, Ana Isabel Beltrán-Velasco, Javier Calleja-Conde, Xabier Hernández-Oñativia, Sara Uceda and Víctor Echeverry-Alzate
Int. J. Mol. Sci. 2025, 26(20), 10074; https://doi.org/10.3390/ijms262010074 - 16 Oct 2025
Breast cancer is a globally prevalent oncological disease whose treatments, while improving survival rates, often lead to adverse cognitive effects. Brain-derived neurotrophic factor (BDNF) and cytokines, key mediators of the inflammatory response, may play a significant role in these cognitive alterations. This systematic review (osf.io/vk37x) addresses the use of BDNF and cytokines as biomarkers of cognitive impairment in breast cancer animal models. A comprehensive literature search was conducted across the following databases: Web of Science, Scopus, ScienceDirect, PubMed, and Medline. The keywords used were: “breast cancer” AND “cognitive impairment” AND (“brain derived neurotrophic factor” OR “cytokines”). A total of 9876 articles were identified, of which 17 studies met the inclusion criteria. For quality assessment, SYRCLE’s risk of bias tool was used. Neuroinflammatory and systemic inflammatory responses, particularly increases in pro-inflammatory cytokines (IL-6, IL-1β, TNF-α) and reductions in hippocampal BDNF, are consistently linked to breast cancer and chemotherapy-induced cognitive impairment in animal models. Several interventions normalized these biomarkers and improved cognitive performance after chemotherapy. Anti-inflammatory cytokines (IL-10 or IL-4) were measured in fewer studies, and recent research suggests that they could serve as potential protective biomarkers. BDNF and pro- and anti-inflammatory cytokines may represent candidate biomarkers for cancer-related cognitive impairment.
In an increasingly uncertain business environment, developing organizational resilience to cope with supply chain disruptions is crucial for firms aiming to achieve sustainable growth. This study investigates how forward and backward vertical integration influence organizational resilience in the face of large-scale supply chain disruptions, with particular attention to the moderating role of a firm’s position in the supply network. Drawing on a comprehensive dataset of 2931 publicly listed Chinese firms, we integrate the relational view and information processing theory to examine how integration strategies affect two key dimensions of resilience: organizational stability and flexibility. Our results show that backward integration enhances both stability (reducing the severity of loss by about 17%) and flexibility by accelerating recovery, especially benefiting downstream firms in terms of stability and upstream firms in terms of flexibility. In contrast, forward integration is associated with reduced stability (raising the severity of loss by about 7%) but enables faster recovery for firms closer to end markets. Moreover, we find that the effectiveness of vertical integration depends on organizational context and alternative resilience mechanisms. These findings highlight the importance of aligning integration direction with supply chain position to optimize resilience. By disentangling the distinct strategic trade-offs of forward versus backward integration, this study advances theoretical understanding and offers practical guidance for firms seeking to strengthen their capacity to withstand and recover from systemic shocks.
Inter-turn short-circuit faults in power transformers generate enormous short-circuit currents within the affected turns, making full-scale experimental investigations impractical. To address this issue, this study proposes an experimental method utilizing a third external short-circuit winding to simulate inter-turn faults through structural improvements in winding configuration and conductor current-carrying capacity. A simulation calculation model for transformer inter-turn short circuits was first established to investigate the equivalence between the proposed equivalent fault model and actual fault conditions under varying short-circuit positions and proportions. Simulation results demonstrate that both models exhibit consistent primary/secondary winding currents, short-circuit turn currents, and spatial radial leakage magnetic field distributions post-fault, with average errors less than 5%. Subsequently, an experimental platform for inter-turn short-circuit fault simulation was constructed. Current and leakage magnetic field measurements under different fault positions and proportions were validated against simulation data, confirming the proposed method’s equivalence. This approach provides an effective pathway for investigating fault characteristics and monitoring methodologies of transformer inter-turn short circuits.