Search Results (1,388)

Search Parameters:
Keywords = parallel training

17 pages, 2324 KB  
Article
The Study of Influence of Quarry Bench Elevation on the Prediction of Blasting Vibration Using Empirical Attenuation Equations and Artificial Neural Networks
by Chi-Han Wang, Yung-Chin Ding and Wei-Yuan Su
Appl. Sci. 2026, 16(7), 3556; https://doi.org/10.3390/app16073556 - 5 Apr 2026
Abstract
Blasting operations in quarries are frequently carried out across benches with pronounced elevation variations, which affect the propagation of ground vibrations. This study examines vibration attenuation in a marble quarry in eastern Taiwan using both traditional empirical formulas and artificial neural networks (ANNs). Field measurements were collected from 54 production blasts, resulting in 322 vibration records at three distinct elevation levels. Several empirical equations—including an elevation correction factor—were applied and compared. Among these, the equation incorporating an adjusted elevation factor yielded higher R² values than the other empirical models. In parallel, a three-layer ANN trained in MATLAB, using inputs such as instantaneous charge, distance, elevation difference, and total charge per blast, achieved an R² of 0.951, highlighting total charge as a key parameter. Both the empirical and ANN methods proved effective for PPV prediction, but the ANN models demonstrated better accuracy when total charge was included.
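The empirical attenuation equations referenced in this abstract are typically variants of the square-root scaled-distance law; a minimal sketch, where the site constants `k` and `beta` are illustrative placeholders, not the paper's fitted values:

```python
import math

def ppv_scaled_distance(distance_m, charge_kg, k=1000.0, beta=1.6):
    """Peak particle velocity (mm/s) from the square-root scaled-distance
    law PPV = K * (D / sqrt(Q))**(-beta), where D is the distance to the
    blast and Q the maximum instantaneous charge. K and beta are site
    constants fitted by regression; the defaults here are illustrative."""
    scaled_distance = distance_m / math.sqrt(charge_kg)
    return k * scaled_distance ** (-beta)

# Attenuation: for the same charge, PPV falls off with distance.
near = ppv_scaled_distance(100.0, 50.0)
far = ppv_scaled_distance(300.0, 50.0)
```

As the abstract notes, elevation-corrected variants add a term for the bench-height difference between blast and monitoring point; those corrected forms are what the study compares against the ANN.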

37 pages, 33258 KB  
Article
An Intelligent Gated Fusion Network for Waterbody Recognition in Multispectral Remote Sensing Imagery
by Tong Zhao, Chuanxun Hou, Zhili Zhang and Zhaofa Zhou
Remote Sens. 2026, 18(7), 1088; https://doi.org/10.3390/rs18071088 - 4 Apr 2026
Abstract
Accurate water body segmentation from multispectral remote sensing imagery is critical for hydrological monitoring and environmental management. However, leveraging transfer learning with pre-trained models remains challenging due to the dimensional mismatch between three-channel RGB-based architectures and multi-band spectral data. To address this, this study proposes a novel segmentation network, termed Intelligent Gated Fusion Network (IGF-Net), built upon a dual-branch feature encoder module and a core Intelligent Gated Fusion Module (IGFM). The IGFM achieves adaptive fusion of visual and spectral features through a cascaded mechanism integrating differences-and-commonalities parallel modeling, channel-context priors, and adaptive temperature control. We evaluate IGF-Net on the newly constructed Tiangong-2 remote sensing image water body semantic segmentation dataset, which comprises 3776 meticulously annotated multispectral image patches. Comprehensive experiments demonstrate that IGF-Net achieves strong and consistent performance on this dataset, with an Intersection over Union of 0.8742 and a Dice coefficient of 0.9239, consistently outperforming the evaluated baseline methods, such as FCN, U-Net, and DeepLabv3+. It also exhibits strong cross-dataset generalization capabilities on an independent Sentinel-2 water segmentation dataset. Ablation studies and visualization analyses confirm that the proposed fusion strategy significantly enhances segmentation accuracy and stability, particularly in complex scenarios.
(This article belongs to the Topic Advances in Hydrological Remote Sensing)
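The Intersection over Union and Dice coefficient quoted above are standard overlap metrics for binary segmentation masks; a minimal pure-Python sketch of their definitions:

```python
def iou_and_dice(pred, truth):
    """Overlap metrics for two binary masks given as flat 0/1 sequences.
    IoU = |A∩B| / |A∪B|; Dice = 2|A∩B| / (|A| + |B|) = 2*IoU / (1 + IoU)."""
    inter = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    union = total - inter
    iou = inter / union if union else 1.0
    dice = 2 * inter / total if total else 1.0
    return iou, dice

# One overlapping pixel out of a three-pixel union: IoU = 1/3, Dice = 1/2.
iou, dice = iou_and_dice([1, 1, 0, 0], [1, 0, 1, 0])
```

Note that Dice is always at least as large as IoU for the same masks, which is why the paper's Dice (0.9239) exceeds its IoU (0.8742).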
25 pages, 7087 KB  
Article
Digital Twin-Based Improved YOLOv8 Algorithm for Micro-Defect Detection of Labyrinth Drip Emitters in High-Speed Agricultural Production Lines
by Renzhong Niu, Zhangliang Wei, Peilin Jin, Qi Zhang and Zhigang Li
Sensors 2026, 26(7), 2220; https://doi.org/10.3390/s26072220 - 3 Apr 2026
Abstract
In water-scarce regions such as Xinjiang, China, agricultural development is constrained not only by limited water resources but also by a strong reliance on water-saving irrigation technologies. Drip irrigation is a key measure for improving irrigation efficiency and promoting the sustainable development of water-saving agriculture. However, defects arising during the manufacture of labyrinth drip emitters—the core components of drip irrigation systems—can undermine system reliability, leading to channel blockage and non-uniform irrigation. To tackle this issue, a defect detection approach is developed by integrating Digital Twin technology with an enhanced YOLOv8 model for online inspection of labyrinth drip emitters on drip irrigation tape production lines. In parallel, a self-built dataset covering six defect categories is established. Supported by the DT framework, the standard YOLOv8 network is refined to strengthen its capability in identifying complex micro-defects. Specifically, DySnakeConv is introduced to better represent the curved and slender characteristics of labyrinth channels; DySample is incorporated to improve the reconstruction and representation of fine-grained details; an Efficient Multi-Scale Attention module is adopted to capture richer contextual information while suppressing background noise; and Inner-SIoU is applied to optimize the bounding-box regression process. Experimental results show that the model achieves 89.6% precision, 90.9% recall, and 93.9% mAP50. Compared with the baseline YOLOv8, precision, recall, and mAP50 are improved by 7.3, 3.9, and 3.3 percentage points, respectively. Under the same training conditions, the proposed model outperforms YOLOv10 and YOLOv11 in accuracy-related metrics: compared with YOLOv11, precision, recall, and mAP50 are improved by 4.8, 5.0, and 2.6 percentage points, respectively; compared with YOLOv10, by 10.0, 7.7, and 7.3 percentage points, respectively. Meanwhile, the model maintains a lightweight size of 3.7 M parameters and a real-time inference speed of 150.2 FPS, demonstrating a favorable accuracy–efficiency trade-off. By extending manufacturing-level quality control to agricultural applications, the approach helps ensure uniform irrigation and improve water-use efficiency, providing practical technical support for precision agriculture in arid regions.
(This article belongs to the Section Smart Agriculture)
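The precision, recall, and percentage-point comparisons quoted above follow the standard detection-count definitions; a short sketch (the counts below are illustrative, not the paper's):

```python
def detection_metrics(tp, fp, fn):
    """Precision and recall from detection counts: precision penalises
    false alarms (FP), recall penalises missed defects (FN)."""
    return tp / (tp + fp), tp / (tp + fn)

# Percentage points are absolute differences: a model at 89.6% precision
# that gained 7.3 points implies a baseline at 89.6 - 7.3 = 82.3%.
precision, recall = detection_metrics(tp=90, fp=10, fn=9)
baseline_precision = 89.6 - 7.3
```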

18 pages, 527 KB  
Article
An Empirical Comparison of Cascade and Direct End-to-End Speech Translation for Low-Resource Language Pair
by Zhanibek Kozhirbayev
Computers 2026, 15(4), 222; https://doi.org/10.3390/computers15040222 - 2 Apr 2026
Abstract
Speech-to-text translation (S2TT) for low-resource languages remains challenging due to the scarcity of parallel speech translation data and the susceptibility of modular pipelines to error propagation. This paper presents a controlled empirical comparison of cascade and end-to-end approaches for Kazakh–Russian speech translation using the ST-kk-ru dataset (≈332 h, 140k triplets). The cascade framework is strengthened with recent pre-trained models for automatic speech recognition and neural machine translation, achieving 21.3 BLEU on the test set. Three representative end-to-end architectures are evaluated under identical data conditions. The strongest direct model, combining a Wav2Vec 2.0 encoder with an mBART decoder augmented by a length adaptor and adapter modules, reaches 17.97 BLEU, compared with 15.35 BLEU for FAIRSEQ S2T and 16.3 BLEU for ESPnet-ST. Automatic evaluation is complemented by expert manual assessment and targeted linguistic analysis. Results indicate that, under current low-resource conditions, cascade systems provide higher translation accuracy and better morpho-syntactic fidelity, while end-to-end models remain competitive and offer advantages in architectural simplicity and potentially reduced inference latency (due to single-pass processing), although empirical latency measurements were not conducted in this study. This study establishes a reproducible benchmark for Kazakh–Russian speech translation and highlights practical trade-offs between modeling paradigms in low-resource, morphologically rich settings.

29 pages, 1851 KB  
Systematic Review
Technological Trends in Lean Construction for Engineering Design Improvement and Productivity in Civil Engineering Projects: A Systematic Literature Review
by Luis Mayo-Alvarez, Jorge Córdova-Maraví, Diego García-Gómez and Iván Paredes-Julca
Designs 2026, 10(2), 40; https://doi.org/10.3390/designs10020040 - 1 Apr 2026
Abstract
Lean Construction has become a key strategy for improving productivity, reducing waste, and increasing efficiency in civil engineering projects. In parallel, advances in digital technologies have transformed the way engineering design and project planning processes are conceived and managed. However, there remains a limited systematic understanding of how emerging technologies support engineering design practices and influence the implementation and performance of Lean Construction in diverse civil engineering scenarios. This study presents a systematic literature review of 70 peer-reviewed articles published between 2019 and 2025, following the PRISMA 2020 guidelines. The selected studies were examined using a structured classification framework consisting of three analytical categories: Technologies and Tools, Construction Methods and Sustainability, and Production Philosophies and Management. From an engineering design perspective, this framework allows the identification of technological trends, design-support tools, and management strategies that influence the planning, modeling, and optimization of construction processes. The results show that digital technologies, such as Building Information Modeling (BIM), automation systems, Artificial Intelligence, and Industry 4.0 tools, play a significant role in supporting engineering design activities by improving project visualization, coordination, and decision-making during the design and planning stages. These technologies contribute to more integrated design processes aligned with Lean Construction principles. At the same time, the analysis reveals that the adoption of Lean Construction technologies varies depending on project characteristics, levels of digital maturity, and regional industry conditions. The main barriers identified in the literature include interoperability limitations, insufficient workforce training, and organizational resistance to technological change. Overall, the review provides a structured synthesis of recent research trends and highlights the technological and managerial factors that influence the successful integration of Lean Construction with engineering design practices in civil engineering. The findings contribute to bridging the gap between technological innovation, design methodologies, and Lean Construction implementation, offering insights for both researchers and practitioners seeking to improve efficiency, sustainability, and design performance in construction projects.

26 pages, 725 KB  
Article
Effects of Multicomponent Versus Aerobic Training on Body Composition, Physical Fitness, Psychological Health, and Quality of Life in Cancer Survivors: A 24-Week Randomized Controlled Trial
by Alessandro Petrelli, Ilaria Pepe, Luca Poli, Gianpiero Greco, Carla Minoia, Antonella Daniele, Patrizia Dicillo, Francesca Romito, Francesco Fischetti and Stefania Cataldi
Sports 2026, 14(4), 135; https://doi.org/10.3390/sports14040135 - 1 Apr 2026
Abstract
Background: Cancer survivors frequently experience persistent physical and psychological sequelae, including impaired physical function, fatigue, anxiety/depressive symptoms, and reduced health-related quality of life (HRQoL). Exercise is an effective non-pharmacological intervention; however, comparative evidence between multicomponent training (MCT) and aerobic training (AT) using a multidomain framework remains limited. Methods: In this randomized controlled parallel-group trial, 47 cancer survivors (mean age 63.0 ± 8.9 years) were allocated to a 24-week supervised MCT programme (n = 16), an AT programme (n = 16), or a non-exercise control group (CG; n = 15). Outcomes were assessed at baseline and post-intervention including body composition (BIA), physical performance, fatigue (FSS), anxiety (STAI-Y1/Y2), depressive symptoms (BDI), and HRQoL (EORTC QLQ-C30). Results: Fat mass decreased in both MCT (p = 0.005) and AT (p = 0.034), whereas arm circumference increased only in MCT (p < 0.001). Significant Group × Time interactions were observed for major physical performance outcomes; improvements were broader in MCT, while AT showed its largest change in aerobic endurance. Between-group contrasts indicated greater gains with MCT than AT for chair-stand (p = 0.046), sit-and-reach (p = 0.048), and handgrip strength (p = 0.049). Significant interaction effects were also observed for fatigue and psychological outcomes (FSS: p = 0.003; STAI-Y1 and STAI-Y2: p < 0.001; BDI: p < 0.001) and for HRQoL global health (p = 0.003), with larger improvements in MCT than AT for fatigue, state anxiety, and depressive symptoms (all p < 0.05), but not for trait anxiety (p > 0.05). Conclusions: A 24-week supervised MCT programme produced broader benefits than AT alone across physical function and selected psychological outcomes in cancer survivors. These findings support the incorporation of multicomponent exercise into survivorship care as a feasible and effective strategy for addressing multidimensional treatment sequelae.

16 pages, 1045 KB  
Article
Risk Level Assessment and Impact Range Analysis of CCUS CO₂ Pipeline Leakage Based on Machine Learning
by Haoyuan Zhang, Siqi Wang, Xiaoping Jia and Fang Wang
Safety 2026, 12(2), 44; https://doi.org/10.3390/safety12020044 - 31 Mar 2026
Abstract
In emergency decision-making for carbon capture, utilization, and storage (CCUS) CO₂ pipeline leakage, risk levels and warning distances/impact ranges are often derived from different methodological systems—risk-matrix scoring versus mechanistic consequence modeling. Differences in threshold definitions and modeling assumptions make it difficult to align level assignment with distance boundaries for the same scenario, which in turn reduces the comparability and traceability of multi-scenario batch screening. To address this, this study proposes an integrated framework based on “threshold impact-distance calculation–risk-matrix mapping,” with physical consequence quantification as the main thread. A scenario library (N = 4320) covering phase state, leak aperture, operating conditions, and meteorological fields is constructed; impact distances corresponding to CO₂ volume-fraction thresholds of 1%/4%/10% (R1%, R4%, R10%) are computed and then mapped to five RiskLevel classes under a unified rule set, enabling standardized synchronous outputs. The modeling tasks are formulated as RiskLevel classification and threshold-distance regression. Using a stratified 70%/30% train–test split, Extreme Gradient Boosting (XGBoost) is adopted as the primary model and compared with logistic regression (LR), support vector classification (SVC), ordinary least squares regression (OLS), and support vector regression (SVR). Results show that XGBoost achieves an accuracy of 0.806 and a macro-F1 of 0.825 for RiskLevel classification, with a recall of 0.631 for the high-risk classes (RiskLevel 4–5), and yields mean absolute errors (MAEs) of 95/62/41 m for R1%/R4%/R10% regression with coefficient of determination (R²) values of 0.795–0.814. Distributional analysis further indicates that threshold impact distances increase overall with higher RiskLevel, while dispersion becomes larger at higher levels. Accordingly, a parallel representation of “RiskLevel + multi-threshold rings” is recommended to support coordinated graded control and zoned warning delineation.
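The macro-F1 reported above averages per-class F1 scores without class weighting, so rare high-risk classes influence the score as much as common ones; a minimal sketch of the definition (the counts below are illustrative):

```python
def macro_f1(per_class_counts):
    """Macro-averaged F1 from per-class (tp, fp, fn) counts: F1 is
    computed per class and averaged without weighting, so rare classes
    (here, the high-risk levels) count as much as common ones."""
    f1_scores = []
    for tp, fp, fn in per_class_counts:
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1_scores.append(2 * precision * recall / (precision + recall)
                         if precision + recall else 0.0)
    return sum(f1_scores) / len(f1_scores)

# A perfectly predicted class and a fully missed class average to 0.5.
score = macro_f1([(10, 0, 0), (0, 0, 10)])
```

This unweighted averaging is why the paper can report a macro-F1 (0.825) above its accuracy (0.806) while the high-risk recall (0.631) remains comparatively low.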

29 pages, 8422 KB  
Article
A Transformer-Based Method for Bidirectional French–Lingala Machine Translation in Speech and Text
by Reagan E. Mandiya, Selain K. Kasereka, Christophe B. Wizamo, Milena Savova-Mratsenkova, Ruffin-Benoît M. Ngoie, Tasho Tashev and Nathanaël M. Kasoro
Appl. Sci. 2026, 16(7), 3399; https://doi.org/10.3390/app16073399 - 31 Mar 2026
Abstract
Underrepresented languages such as Lingala are a significant part of the world’s cultural and linguistic heritage. Lingala plays a central role in daily communication, business, media, education, and culture for millions of people in the Democratic Republic of Congo (DRC) and the Republic of Congo. However, due to data scarcity and dialectal diversity, natural language processing (NLP) research often overlooks this language. In this paper, we propose a deep neural network pipeline for bidirectional French–Lingala automatic translation, covering both text-to-text and voice-to-text scenarios, by integrating Long Short-Term Memory (LSTM) and Transformer models on a specialized parallel corpus. The Bidirectional Encoder Representations from Transformers (BERT) model is used as a bidirectional source encoder to improve contextual representation, while the Whisper model handles automatic speech recognition as the first stage of the audio translation pipeline. Experimental results show that the standalone Transformer achieves a BLEU score of 35.3, compared to 8.12 for the LSTM SeqToSeq baseline. Fine-tuning with BERT raises the BLEU score to 38.6. Integrating the Whisper ASR module for an end-to-end speech translation task yields a final pipeline BLEU score of 55.4, with a Word Error Rate of 12.3% on the speech recognition sub-task, confirming the effectiveness of each component. These results demonstrate the potential of combining domain-specific pre-trained models with modular neural architectures to achieve competitive translation performance in a critically under-resourced language.
(This article belongs to the Special Issue The Advanced Trends in Natural Language Processing)
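The Word Error Rate quoted for the speech recognition sub-task is the word-level edit distance between hypothesis and reference, divided by the reference length; a minimal dynamic-programming sketch:

```python
def word_error_rate(reference, hypothesis):
    """Word Error Rate: word-level Levenshtein distance between reference
    and hypothesis transcripts, divided by the reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    prev = list(range(len(hyp) + 1))  # distances against the empty prefix
    for i, r in enumerate(ref, 1):
        cur = [i]
        for j, h in enumerate(hyp, 1):
            cur.append(min(prev[j] + 1,                # deletion
                           cur[j - 1] + 1,             # insertion
                           prev[j - 1] + (r != h)))    # substitution
        prev = cur
    return prev[-1] / len(ref)
```

Because insertions are counted, WER can exceed 100% for a sufficiently over-generated hypothesis; 12.3% means roughly one word edit per eight reference words.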

19 pages, 9863 KB  
Article
Analysis of Slope Braking Adaptability of Copper-Based Powder Metallurgy Brake Pads for High-Speed Trains Based on Full-Scale Bench Tests
by Xueqian Geng
Lubricants 2026, 14(4), 146; https://doi.org/10.3390/lubricants14040146 - 31 Mar 2026
Abstract
With the opening of complex service routes, the service performance of brake pads under long slope braking conditions is becoming increasingly important, and the slope braking adaptability of current brake pad products needs to be analyzed. This work takes the copper-based powder metallurgy brake pads of an in-service high-speed train as the research object and conducts friction and wear behavior tests of the brake pads on a full-scale brake test bench. Through microscopic observation and damage analysis, the differences in friction and wear behavior under stop braking and slope braking conditions are compared, revealing the wear mechanism and damage evolution characteristics of the brake pads. The results show that under the high speed, high braking force, and severe thermal load of stop braking, the brake pads exhibit pronounced uneven wear, and the eccentric wear of the friction blocks is affected by both the friction radius and the friction direction. The friction surface shows numerous large damage features, and the stability of the friction interface is poor. The brake pads exhibit a composite wear mechanism dominated by abrasive wear and brittle-fracture-induced exfoliation. Under slope braking, with low speed, low braking force, and a long-term stable thermal load, uneven wear is comparatively mild, surface damage is small, and the friction blocks show eccentric wear only along the friction direction. Cracks mainly initiate along the interfaces between components and propagate parallel to the friction surface, producing a progressive delamination-and-flaking exfoliation mechanism with a low wear rate. Although the friction interface is relatively stable under slope braking conditions, the cumulative delamination wear of the brake pads under long-term braking action warrants further attention.

21 pages, 5987 KB  
Article
Machine Learning-Based Fluorescence Assessment for Augmented Imaging and Decision Support in Glioblastoma Resections
by Anna Schaufler, Klaus-Peter Stein, Sunisha Pamnani, Claudia A. Dumitru, Belal Neyazi, Ali Rashidi, Axel Boese and I. Erol Sandalcioglu
Cancers 2026, 18(7), 1125; https://doi.org/10.3390/cancers18071125 - 31 Mar 2026
Abstract
Background/Objectives: Glioblastoma is the most common and aggressive primary malignant brain tumor in adults, characterized by infiltrative growth and poor prognosis. Achieving maximal resection without inducing neurological deficits remains a challenge in glioblastoma surgery. While 5-aminolevulinic acid-based fluorescence-guided surgery supports intraoperative tumor visualization, its reliability is limited by patient variability and weak fluorescence signals. This study proposes a machine learning framework to enhance fluorescence-guided surgery sensitivity by analyzing surgical microscope images at the pixel level. Methods: Fluorescence-mode neurosurgical microscope images of synthetic samples with known Protoporphyrin IX (PPIX) concentrations were used to train three classifiers (Support Vector Machine, Naïve Bayes, Neural Network) for pixel-wise fluorescence detection. In parallel, three contrastive-learning-based Variational Autoencoders (VAE, β = 1, 2, 3) were evaluated for detecting weak fluorescence beyond visual perception. Additionally, a regression model was trained to relate pixel features to PPIX concentration. The best-performing VAE (β = 1) was subsequently trained on real intraoperative data, and its detection sensitivity was compared to annotations from four experienced surgeons. Results: The proposed model achieved the highest detection rates on synthetic test data when calibrated for 99% specificity. Applied to real intraoperative images, the model revealed fluorescent areas substantially larger than those marked by experienced surgeons. In non-5-ALA control cases, minimal false positives were observed, indicating a specificity exceeding 99.9%. The regression model reliably quantified PPIX concentration in synthetic samples (R² = 0.92). Conclusions: By enabling more sensitive and objective fluorescence detection, this approach offers a valuable tool for improving surgical decision-making and facilitating safer, more extensive tumor resections.
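The R² quoted for the PPIX regression is the standard coefficient of determination, R² = 1 − SS_res/SS_tot; a minimal sketch of the definition:

```python
def r_squared(y_true, y_pred):
    """Coefficient of determination: R² = 1 - SS_res / SS_tot, i.e. the
    fraction of variance in y_true explained by the predictions."""
    mean = sum(y_true) / len(y_true)
    ss_tot = sum((y - mean) ** 2 for y in y_true)
    ss_res = sum((y - p) ** 2 for y, p in zip(y_true, y_pred))
    return 1.0 - ss_res / ss_tot
```

An R² of 0.92 therefore means the model leaves about 8% of the variance in PPIX concentration unexplained on the synthetic samples.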

13 pages, 437 KB  
Article
Caregiver Qualities and Resident Satisfaction in Long-Term Care: Mediating Roles of Spending Time and Environment
by Xiaoli Li, Cheng Yin and Elias Mpofu
Healthcare 2026, 14(7), 897; https://doi.org/10.3390/healthcare14070897 - 31 Mar 2026
Abstract
Background: Caregiver and resident interactions are important to resident satisfaction with long-term care (LTC). However, these interactions are variously operationalized, and caregiver–resident “spending time” (activity and autonomy) and environmental quality are less well-investigated modifiable factors that could inform LTC resident support policies for healthy aging. Methods: This quantitative, cross-sectional study analyzed secondary survey data from 326 long-term care facility (LTCF) residents (aged ≥60) across Shanghai, Nanjing, and Changsha, China. Satisfaction was measured using the Chinese version of the Ohio Long-Term Care Resident Satisfaction Survey. Caregiver qualities served as the primary predictor, with spending time and environment as parallel mediators. The analysis adjusted for age cohort, functional independence, and length of stay. Results: Caregiver qualities were positively associated with overall satisfaction (β = 0.30, p < 0.01). Spending time (effect = 0.14, 95% CI: −0.01 to 0.30) and environment quality (effect = 0.05, 95% CI: −0.03 to 0.15) showed non-significant mediated pathways between caregiver qualities and satisfaction, but the combined indirect effect of these domains was statistically significant (effect = 0.19, 95% CI: 0.04 to 0.36). The direct association between caregiver qualities and satisfaction remained significant after accounting for these mediators (effect = 0.36, 95% CI: 0.11 to 0.61). Conclusions: These findings clarify how caregiver interactions matter to resident satisfaction both directly and indirectly through spending time, activity engagement, and environmental perceptions. To promote longevity and healthy aging in LTCFs, providers should prioritize caregiver training that fosters resident autonomy, supports daily activity, and maintains age-responsive care environments.
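The parallel-mediation results above follow the product-of-coefficients decomposition; a minimal sketch, with path coefficients chosen purely for illustration so that they reproduce the reported indirect (0.14, 0.05) and direct (0.36) effects:

```python
def mediation_effects(a_paths, b_paths, direct):
    """Product-of-coefficients decomposition for a parallel-mediator
    model: each indirect effect is a_i * b_i (predictor -> mediator_i ->
    outcome), and the total effect is the direct effect plus the sum of
    all indirect effects. Coefficients passed in are illustrative only."""
    indirect = [a * b for a, b in zip(a_paths, b_paths)]
    return indirect, direct + sum(indirect)

# Two parallel mediators (e.g. spending time and environment), with paths
# chosen so the indirect effects match the reported 0.14 and 0.05.
indirect, total = mediation_effects([0.5, 0.25], [0.28, 0.2], direct=0.36)
```

This decomposition also shows how two individually non-significant indirect paths can sum to a significant combined indirect effect, as the study reports.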

24 pages, 3457 KB  
Article
Hypoxia and DNA-Repair Radiosensitivity Signatures Are Associated with Radiotherapy-Modified Survival in TCGA Breast Cancer, with External Prognostic Validation of the Hypoxia Score in METABRIC
by Jimmy Carter Osei, Mei-Han Chen and Tim A. D. Smith
BioTech 2026, 15(2), 28; https://doi.org/10.3390/biotech15020028 - 31 Mar 2026
Abstract
Radiotherapy (RT) is one of the main treatments for breast cancer, but response varies between patients. Tumour hypoxia and intrinsic radiosensitivity are major determinants of response to RT. Using TCGA-BRCA, a 563-gene hypoxia meta-signature was built by combining curated hypoxia gene sets from MSigDB with published hypoxia metagenes (Buffa, Winter, Elvidge, Fardin, and related sets). After Cox screening and penalised regression, a simple three-gene hypoxia score (CP, GPC3, STC1) was derived. In parallel, based on DSB-repair factors highlighted by Mladenov et al. as key regulators of intrinsic radiosensitivity, a four-gene radiosensitivity (RS) signature (ATR, RPA2, BLM, MRE11A) was trained using only RT-treated patients. In TCGA, both signatures were prognostic and showed significant interaction with RT status in Cox models. The hypoxia score was strongly associated with worse outcomes in RT-untreated patients, but this effect was much weaker in RT-treated patients (Hypoxia × RT HR = 0.009, p = 0.044). The RS score showed a similarly strong interaction with RT (RS × RT HR = 0.011, p = 0.003). When both signatures were combined into one interaction model, it gave the best performance (C-index = 0.785), and both interaction terms remained independently significant. The hypoxia score was then validated externally in METABRIC (N = 1979; 1143 events), where it remained associated with overall survival, although more weakly than in TCGA (HR = 1.34, 95% CI: 1.10–1.63; p = 0.0042). Overall, these results suggest that hypoxia and DSB-repair capacity capture two complementary sides of radiosensitivity and RT-modified survival patterns, and they support further prospective testing and validation in independent datasets with strong RT annotation.
(This article belongs to the Special Issue The Emerging Role of Bioinformatics in Biotechnology)
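The score × RT interaction terms reported above can be made concrete with a small sketch of how an interaction enters a Cox model's linear predictor. The coefficients below are illustrative values, not the paper's fitted estimates:

```python
import numpy as np

# Illustrative coefficients only -- not the fitted values from the paper.
beta_score, beta_rt, beta_interaction = 1.2, -0.3, np.log(0.01)

def log_hazard(score, rt):
    """Linear predictor of a Cox model with a score x RT interaction.
    The interaction term lets the score's effect differ by RT status (0/1)."""
    return beta_score * score + beta_rt * rt + beta_interaction * score * rt

# Hazard ratio for a one-unit score increase within each RT group:
hr_untreated = np.exp(log_hazard(1, 0) - log_hazard(0, 0))  # score effect without RT
hr_treated = np.exp(log_hazard(1, 1) - log_hazard(0, 1))    # score effect with RT
```

An interaction HR well below 1, as reported for both signatures, corresponds to the score's adverse effect being strongly attenuated in the RT-treated group, which is the pattern the sketch reproduces.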
22 pages, 2559 KB  
Article
SEG-FAUSP: Anatomical Structure Segmentation of the Standard Sections of Fetal Abdominal Ultrasounds
by Jianhui Chen, Peizhong Liu, Xiaying Yang, Xiaoling Wang, Xiuming Wu, Zhonghua Liu and Shunlan Liu
Bioengineering 2026, 13(4), 403; https://doi.org/10.3390/bioengineering13040403 - 31 Mar 2026
Viewed by 320
Abstract
This study addresses the challenge of identifying organ structures in the standard sections of fetal abdominal ultrasounds. A deep learning-based multi-task model named SEG-FAUSP was developed to segment the core anatomical structures of seven key fetal abdominal ultrasound sections. We collected fetal abdominal ultrasound images from pregnant women in mid-pregnancy (18–24 weeks) using various mainstream ultrasound devices, and professional physicians annotated key anatomical structures (e.g., umbilical veins, gastric bubbles, spine) in the images. Based on an improved deep learning framework, the model accurately segments and locates the target organ structures through a parallel dual-branch semantic segmentation network, which avoids the over-reliance on large-scale pre-trained data seen in traditional methods. Experimental results show that the model achieves excellent performance in anatomical structure segmentation, with the intersection over union of the bladder and gastric bubble both reaching above 0.84; its segmentation accuracy for complex structures such as the inferior vena cava is also significantly superior to that of the baseline model. As an end-to-end model, it simplifies the clinical interpretation of fetal abdominal ultrasounds, reduces the risk of missed diagnoses caused by unclear organ identification, provides an efficient auxiliary tool for prenatal screening in primary-level medical institutions, and supports better neonatal health outcomes. Full article
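The per-structure intersection over union (IoU) figures quoted above follow the standard definition for binary segmentation masks. A generic sketch of that metric (not the paper's evaluation code):

```python
import numpy as np

def iou(pred, target):
    """Intersection over union for binary segmentation masks.
    pred, target: boolean arrays of the same shape."""
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return inter / union if union else 1.0  # two empty masks count as a perfect match

# Toy masks: prediction covers 4 pixels, target covers 6, overlap is 4.
a = np.zeros((4, 4), dtype=bool); a[:2, :2] = True
b = np.zeros((4, 4), dtype=bool); b[:2, :3] = True
# iou(a, b) = 4 / 6
```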
17 pages, 853 KB  
Article
Low-Dose CT Image Denoising Based on a Progressive Fusion Distillation Network with Pixel Attention
by Xinyi Wang and Bao Pang
Appl. Sci. 2026, 16(7), 3292; https://doi.org/10.3390/app16073292 - 28 Mar 2026
Viewed by 227
Abstract
Low-dose computed tomography (LDCT) can effectively reduce ionizing radiation; however, the associated image noise and artifacts can severely compromise the accuracy of clinical diagnosis. To address the challenge of balancing noise suppression and detail preservation in LDCT images, this study proposes a deep learning (DL)-based image denoising method termed the Progressive Fusion Distillation Network (PFDN). Building upon the Information Multi-distillation Network (IMDN), the proposed method incorporates a pixel attention (PA) mechanism and a progressive fusion strategy, and introduces a Pixel Parallel Extraction Block (PPEB) together with a Progressive Fusion Distillation Block (PFDB) to fully exploit multi-scale and multi-channel features, thereby optimizing the denoising network through efficient feature separation and re-fusion. In addition, by explicitly leveraging the noise characteristics specific to LDCT images, the method establishes an end-to-end training framework suitable for medical imaging. Experimental results demonstrate that PFDN not only effectively reduces image noise and artifacts, but also enhances overall image quality while preserving diagnostically relevant structures under the adopted evaluation setting. Full article
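The pixel attention (PA) idea used here is, in general, a 1×1 convolution followed by a sigmoid that produces a per-pixel gate rescaling the feature map. A rough NumPy sketch of that generic mechanism; the function name, shapes, and weights are illustrative assumptions, not PFDN's actual layer:

```python
import numpy as np

def pixel_attention(features, w, b):
    """Generic pixel attention: a 1x1 convolution across channels plus a
    sigmoid yields a gate in (0, 1) at every spatial position, which then
    rescales the feature map. features: (C, H, W); w: (1, C); b: scalar."""
    # A 1x1 convolution is a per-pixel linear map over the channel axis.
    logits = np.tensordot(w, features, axes=([1], [0])) + b  # -> (1, H, W)
    gate = 1.0 / (1.0 + np.exp(-logits))                     # sigmoid gate
    return features * gate                                   # broadcast over C

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8, 8))                 # toy feature map, 4 channels
out = pixel_attention(x, rng.normal(size=(1, 4)), 0.0)
```

Because the gate lies strictly in (0, 1), the output can only attenuate each feature value, letting the network suppress noisy pixels while leaving informative ones largely intact.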
15 pages, 780 KB  
Article
Time–Frequency Parallel and Channel-Adaptive Gating for Multivariate Time Series Prediction
by Xin He and Zhenwen He
Appl. Sci. 2026, 16(7), 3266; https://doi.org/10.3390/app16073266 - 27 Mar 2026
Viewed by 223
Abstract
In real-world scenarios, multivariate time series data typically present a variety of complex characteristics simultaneously, including long-term trends, multiple seasonalities, sudden event disturbances and random noise. Because variables differ markedly in scale, periodic stability and other respects, and because these periodic characteristics gradually evolve over time, models face numerous challenges in handling non-stationarity, multi-scale dynamic variation and the heterogeneous fusion of variables. To tackle these problems, this paper proposes TFDG-Net (Time–Frequency Dual-Branch Gated Fusion Network), a time–frequency parallel fusion framework. The framework models frequency-domain prior information and a time-domain temporal query network in parallel, and introduces a channel-wise gating mechanism to achieve more flexible adaptive fusion after inverse normalization of the data. This design enables the model to operate on the original physical scale, which not only improves long-term prediction for periodically stable variables but also effectively suppresses interference from noise and event-driven factors, significantly enhancing prediction accuracy and the robustness of training. In multiple long-term prediction benchmarks covering fields such as energy and finance, TFDG-Net reduces the mean squared error and mean absolute error by an average of 12.0% and 7.8%, respectively, compared with various mainstream models. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
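A channel-wise gate blending two parallel branches can be sketched in a few lines: each variable (channel) gets its own learned mixing weight between the time-domain and frequency-domain forecasts. The array shapes and the gate values below are illustrative assumptions, not TFDG-Net's implementation:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def channel_gated_fusion(time_pred, freq_pred, gate_logits):
    """Blend two forecasts with one learned weight per channel.
    time_pred, freq_pred: (horizon, channels); gate_logits: (channels,)."""
    g = sigmoid(gate_logits)                 # per-channel weight in (0, 1)
    return g * time_pred + (1.0 - g) * freq_pred

t = np.ones((3, 2))        # toy time-branch forecast
f = np.zeros((3, 2))       # toy frequency-branch forecast
fused = channel_gated_fusion(t, f, np.array([10.0, -10.0]))
# channel 0: gate near 1, follows the time branch;
# channel 1: gate near 0, follows the frequency branch
```

Letting the gate act per channel is what allows, say, a periodically stable variable to lean on the frequency branch while a noisy, event-driven variable leans on the time branch.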