Search Results (8,185)

Search Parameters:
Keywords = local attention

28 pages, 16466 KB  
Article
SAW-YOLOv8l: An Enhanced Sewer Pipe Defect Detection Model for Sustainable Urban Drainage Infrastructure Management
by Linna Hu, Hao Li, Jiahao Guo, Penghao Xue, Weixian Zha, Shihan Sun, Bin Guo and Yanping Kang
Sustainability 2026, 18(8), 3685; https://doi.org/10.3390/su18083685 - 8 Apr 2026
Abstract
Urban underground sewage pipelines often suffer from defects such as cracks, irregular joint misalignment, and stratified sedimentation blockages, which may lead to pipeline bursts, sewage overflow, and water pollution. Timely detection of abnormal defects in sewage pipelines is critical to ensuring public health and environmental sustainability. Vision-based sewage pipeline defect detection plays a crucial role in modern urban wastewater treatment systems. However, it still faces challenges such as limited feature extraction capabilities, insufficient multi-scale defect characterization, and poor positioning stability when dealing with low-contrast images and in environments with severe background interference. To address these issues, this study proposes an enhanced SAW-YOLOv8l model that integrates RT-DETR (real-time detection Transformer) with CNN (convolutional neural network) architecture. First, a C2f_SCA module improves the long-distance feature extraction capability and localization precision. Second, an AIFI-PRBN module enhances global feature correlation through attention-mechanism-based intra-scale feature interaction and reduces computational complexity using lightweight techniques. Finally, an adaptive dynamic weighted loss function based on Wise-IoU (weighted intersection over union) further improves training convergence and robustness by balancing the gradient distribution of samples. Experiments on a mixed dataset comprising Sewer-ML and industrial images demonstrate that the SAW-YOLOv8l model achieved an mAP@0.5 of 86.2% and a precision of 84.4%, improvements of 2.4% and 6.6%, respectively, over the baseline model, significantly enhancing the detection performance of abnormal defects in sewage pipelines. Full article
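The mAP@0.5 figure reported above scores a predicted box as correct only when its intersection over union (IoU) with a ground-truth box reaches 0.5. A minimal sketch of that overlap measure (plain IoU only; the paper's Wise-IoU adds a dynamic weighting on top, which is not reproduced here):

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# A detection counts toward mAP@0.5 only when IoU >= 0.5:
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # 0.333... -> rejected at the 0.5 threshold
```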

28 pages, 6176 KB  
Article
Modeling Spectral–Temporal Information for Estimating Cotton Verticillium Wilt Severity Using a Transformer-TCN Deep Learning Framework
by Yi Gao, Changping Huang, Xia Zhang and Ze Zhang
Remote Sens. 2026, 18(8), 1105; https://doi.org/10.3390/rs18081105 - 8 Apr 2026
Abstract
Hyperspectral remote sensing provides essential biochemical and structural information for crop disease monitoring, yet its application to cotton Verticillium wilt has largely focused on single-period evaluations or multi-temporal classifications. Such approaches overlook the progressive nature of this vascular disease, whose pigment, water, and mesophyll responses evolve over time, making temporal hyperspectral information critical for reliable severity estimation but still insufficiently utilized. To overcome this limitation, we conducted daily time-series observations on cotton leaves and collected 2895 hyperspectral reflectance measurements and 770 high-resolution RGB images together with disease severity records, generating a temporally dense spectral-severity dataset spanning symptom-free to severe stages. Five categories of disease-related vegetation indices were derived and organized into 5-day spectral–temporal slices. Based on these features, we introduce a dual-branch Transformer-TCN model that integrates global temporal dependencies captured by self-attention with local temporal variations resolved by dilated causal convolutions for severity inversion. The model delivers the strongest performance with an R2 of 0.8813, exceeding multiple single and hybrid time-series alternatives by 0.0446–0.1407 in R2, equivalent to a relative improvement of 5.33–19.00%. Temporal spectral features also outperform their non-temporal counterparts, highlighting that disease progression dynamics captured by time-series spectra are critical for reliable severity retrieval. Feature contribution analysis indicates that the blue red index BRI provides the highest contribution, consistent with the single-index time-series modelling results. Photosynthesis- and water-related indices provide secondary but complementary support. 
Collectively, our results demonstrate that the dual-branch Transformer-TCN model can capture complex spectral–temporal relationships between cotton Verticillium wilt and disease severity, providing methodological support for crop disease monitoring and evaluation. Full article
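The TCN branch described above resolves local temporal variation with dilated causal convolutions: the output at time t depends only on x[t], x[t-d], x[t-2d], and so on, never on future samples. A minimal single-channel sketch (the filter values and dilation are illustrative, not taken from the paper):

```python
import numpy as np

def dilated_causal_conv1d(x, kernel, dilation=1):
    """Causal 1-D convolution: output at t sees only x[t], x[t-d], x[t-2d], ..."""
    k = len(kernel)
    pad = (k - 1) * dilation  # left-pad so no future value leaks into the output
    xp = np.concatenate([np.zeros(pad), np.asarray(x, dtype=float)])
    return np.array([
        sum(kernel[j] * xp[t + pad - j * dilation] for j in range(k))
        for t in range(len(x))
    ])

# With kernel [1, -1] and dilation 2, each output is x[t] - x[t-2]:
x = np.array([1.0, 2.0, 4.0, 7.0])
print(dilated_causal_conv1d(x, [1.0, -1.0], dilation=2))  # [1. 2. 3. 5.]
```

Stacking such layers with growing dilations is what lets a TCN cover long windows with few parameters.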

31 pages, 10425 KB  
Article
AVGS-YOLO: A Quad-Synergistic Lightweight Enhanced YOLOv11 Model for Accurate Cotton Weed Detection in Complex Field Environments
by Suqi Wang and Linjing Wei
Agriculture 2026, 16(8), 828; https://doi.org/10.3390/agriculture16080828 (registering DOI) - 8 Apr 2026
Abstract
Cotton represents one of the world’s most significant agricultural commodities. However, severe weed proliferation in cotton fields seriously hampers the development of the cotton industry, making precise weed control essential for ensuring healthy cotton growth. Traditional object detection methods often suffer from computational complexity, rendering them difficult to deploy on resource-constrained edge devices. To address this challenge, this paper proposes AVGS-YOLO, a lightweight and enhanced model employing a Quadruple Synergistic Lightweight Perception Mechanism (QSLPM) for precise weed detection in complex cotton field environments. The QSLPM emphasizes synergistic interactions between modules. It integrates lightweight neck architecture (Slimneck) to optimize feature extraction pathways for cotton weeds; the ADown module (Adaptive Downsampling) replaces Conv modules to address model parameter redundancy; the small object attention modulation module (SEAM) enhances the recognition of small-scale cotton weed features; and angle-sensitive geometric regression (SIoU) improves bounding box localization accuracy. Experimental results demonstrate that the AVGS-YOLO model achieves 95.9% precision, 94.2% recall, 98.2% mAP50, and 93.3% mAP50-95. While maintaining high detection accuracy, the model achieves a lightweight design with reductions of 17.4% in parameters, 27% in GFLOPs, and 14.5% in model size. Demonstrating strong performance in identifying cotton weeds within complex cotton field environments, this model provides technical support for deployment on resource-constrained edge devices, thereby advancing intelligent agricultural development and safeguarding the healthy growth of cotton crops. Full article
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)
29 pages, 111196 KB  
Article
Deep Learning-Driven Sparse Light Field Enhancement: A CNN-LSTM Framework for Novel View Synthesis and 3D Scene Reconstruction
by Vivek Dwivedi, Gregor Rozinaj, Javlon Tursunov, Ivan Minárik, Marek Vanco and Radoslav Vargic
Mach. Learn. Knowl. Extr. 2026, 8(4), 94; https://doi.org/10.3390/make8040094 (registering DOI) - 8 Apr 2026
Abstract
Sparse light field imaging often limits the quality of 3D scene reconstruction due to insufficient viewpoint coverage, resulting in incomplete or inaccurate reconstructions. This work introduces a hybrid CNN–LSTM-based framework to address this issue by generating novel camera poses and the corresponding synthesized novel views, effectively densifying the light field representation. The CNN extracts spatial features from the sparse input views, while the LSTM predicts temporal and positional dependencies, enabling smooth interpolation of novel poses and views. The proposed method integrates these synthesized views with the original sparse dataset to produce a comprehensive set of images. Our approach was evaluated on several datasets, including challenging datasets. The inference capability of our method was tested extensively, and it showed good generalization across diverse datasets. The effectiveness of the framework was evaluated not only with local light field fusion (LLFF) but also with NeRF and 3D Gaussian Splatting, which are considered state-of-the-art reconstruction methods. Overall, the enriched dataset generated by our method led to consistent improvements in 3D reconstruction quality, including higher depth estimation accuracy, reduced artifacts, and enhanced structural consistency. Most importantly, LSTM-based approaches have so far attracted limited attention in the context of generating novel views. While LSTMs have been widely applied in sequential data domains such as natural language processing, their use for image generation conditioned on camera poses remains largely unexplored, which underscores the novelty and significance of the proposed work. This approach provides a scalable and generalizable solution to the sparsity problem in light fields, advancing the capabilities of computational imaging, photorealistic rendering, and immersive 3D scene reconstruction. 
The results firmly establish the proposed method as a robust and versatile tool for improving reconstruction quality in sparse-view settings. Full article
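As a point of contrast with the learned pose generation described above, the naive way to densify a sparse set of camera positions is straight linear interpolation between neighbors (a hypothetical baseline, not the paper's CNN-LSTM method):

```python
import numpy as np

def interpolate_poses(positions, n_new):
    """Insert n_new evenly spaced camera positions between each adjacent pair
    of sparse input positions (naive linear densification)."""
    positions = np.asarray(positions, dtype=float)
    out = [positions[0]]
    for a, b in zip(positions[:-1], positions[1:]):
        for i in range(1, n_new + 1):
            out.append(a + (b - a) * i / (n_new + 1))
        out.append(b)
    return np.stack(out)

sparse = [[0, 0, 0], [3, 0, 0]]
print(interpolate_poses(sparse, n_new=2))
# rows: [0,0,0], [1,0,0], [2,0,0], [3,0,0]
```

Linear interpolation cannot, of course, synthesize the views at those poses; that is the part the CNN-LSTM framework handles.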
16 pages, 1100 KB  
Review
Tumor Microenvironment Acidosis and Alkalization-Oriented Interventions in Advanced Solid Tumors: A Narrative Review and Science-Based Medicine Perspective on Long-Tail Survival
by Kazuyuki Suzuki, Shion Kachi and Hiromi Wada
Cancers 2026, 18(8), 1193; https://doi.org/10.3390/cancers18081193 - 8 Apr 2026
Abstract
Median overall survival remains a central endpoint in oncology, but it can obscure a clinically meaningful long tail of patients with advanced solid tumors who survive well beyond the median. One biological context in which this pattern may be relevant is tumor microenvironment (TME) acidosis. Driven by aerobic glycolysis, hypoxia, impaired perfusion, and proton-export programs, acidic TME is increasingly implicated in invasion, therapeutic resistance, and immune suppression. This narrative review examines TME acidosis as the primary biological framework and considers long-tail survival as a clinical lens through which its implications may be interpreted. We summarize the biological basis and heterogeneity of acidic TME, review current approaches to clinical and translational assessment of tumor acidity, including acidoCEST magnetic resonance imaging (MRI) and positron emission tomography (PET)-based approaches, and discuss the potential and limitations of alkalization-oriented interventions such as buffering and diet-based strategies. Particular attention is given to the distinction between direct measurements of tumor acidity and clinically feasible but indirect markers such as urinary pH, which should not be interpreted as a direct surrogate for local tumor extracellular pH. From a science-based medicine perspective, long-tail survival is treated here as a hypothesis-generating clinical signal rather than proof of causality. Overall, alkalization-oriented interventions appear biologically plausible and clinically testable, but current clinical evidence remains limited and context-dependent. Future progress will require mechanistically informed biomarkers, careful safety evaluation, and trial designs capable of detecting delayed separation of survival curves and tail-oriented patterns of benefit. Full article

21 pages, 11316 KB  
Article
Multimodal Fusion Prediction of Radiation Pneumonitis via Key Pre-Radiotherapy Imaging Feature Selection Based on Dual-Layer Attention Multiple-Instance Learning
by Hao Wang, Dinghui Wu, Shuguang Han, Jingli Tang and Wenlong Zhang
J. Imaging 2026, 12(4), 158; https://doi.org/10.3390/jimaging12040158 - 8 Apr 2026
Abstract
Radiation pneumonitis (RP), one of the most common and severe complications in locally advanced non-small cell lung cancer (LA-NSCLC) patients following thoracic radiotherapy, presents significant challenges in prediction due to the complexity of clinical risk factors, incomplete multimodal data, and unavailable slice-level annotations in pre-radiotherapy CT images. To address these challenges, we propose a multimodal fusion framework based on Dual-Layer Attention-Based Adaptive Bag Embedding Multiple-Instance Learning (DAAE-MIL) for accurate RP prediction. This study retrospectively collected data from 995 LA-NSCLC patients who received thoracic radiotherapy between November 2018 and April 2025. After screening, 670 patients were retained; 535 were allocated for training, and the remaining 135 were reserved for an independent test set. The proposed framework first extracts pre-radiotherapy CT image features using a fine-tuned C3D network, followed by the DAAE-MIL module to screen critical instances and generate bag-level representations, thereby enhancing the accuracy of deep feature extraction. Subsequently, clinical data, radiomics features, and CT-derived deep features are integrated to construct a multimodal prediction model. The proposed model demonstrates promising RP prediction performance across multiple evaluation metrics, outperforming both state-of-the-art and unimodal RP prediction approaches. On the test set, it achieves an accuracy (ACC) of 0.93 and an area under the curve (AUC) of 0.97. This study validates that the proposed method effectively addresses the limitations of single-modal prediction and the unknown key features in pre-radiotherapy CT images while providing significant clinical value for RP risk assessment. Full article
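Attention-based multiple-instance pooling of the kind the DAAE-MIL module builds on can be sketched in a few lines: each instance (here, a slice-level feature vector) gets a learned score, the scores are softmaxed, and the bag embedding is the weighted sum. The parameter shapes and values below are illustrative, not the paper's:

```python
import numpy as np

def attention_mil_pool(instances, v, w):
    """Attention-weighted bag embedding: score each instance, softmax the
    scores, and return the weighted sum of instance feature vectors."""
    scores = np.tanh(instances @ v) @ w        # one scalar score per instance
    weights = np.exp(scores - scores.max())    # numerically stable softmax
    weights /= weights.sum()
    return weights @ instances, weights

# Two instances; the second scores higher under these illustrative parameters,
# so it dominates the bag embedding:
bag = np.array([[0.0, 0.0], [5.0, 0.0]])
v = np.array([[1.0], [0.0]])   # (feature_dim, hidden_dim)
w = np.array([1.0])            # (hidden_dim,)
embedding, weights = attention_mil_pool(bag, v, w)
```

The attention weights double as an explanation of which instances drove the bag-level prediction, which is what makes MIL attractive when slice-level labels are unavailable.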
(This article belongs to the Section Medical Imaging)

29 pages, 6506 KB  
Article
A Hybrid VMD–Informer Framework for Forecasting Volatile Pork Prices
by Xudong Lin, Guobao Liu, Zhiguo Du, Bin Wen, Zhihui Wu, Xianzhi Tu and Yongjie Zhang
Agriculture 2026, 16(8), 827; https://doi.org/10.3390/agriculture16080827 - 8 Apr 2026
Abstract
Accurate forecasting of pork prices is important yet challenging because pork price series are highly volatile and non-stationary. Existing hybrid forecasting models often rely on fixed-weight integration, which may limit their ability to adapt to multi-scale temporal variation and complex temporal dependencies. To address these issues, this study proposes VMD–EMSA–HCTM–Informer, a hybrid forecasting framework that combines signal decomposition with an enhanced encoder–decoder architecture. Variational Mode Decomposition (VMD) is first used to reduce signal non-stationarity by extracting intrinsic mode functions. Within the Informer backbone, an Enhanced Multi-Scale Attention (EMSA) encoder is introduced to capture local fluctuations at different temporal scales, while a Hybrid Convolutional–Temporal Module (HCTM) decoder is used to strengthen temporal feature extraction and channel interaction modeling. Empirical evaluation was conducted on daily pork price data from the China Pig Industry Network and a large-scale intensive breeding enterprise in southern China over the period 2013–2025. Under the current experimental setting, the proposed framework achieved the lowest average errors among the compared baselines across five independent runs, with an average MAE of 0.4875 and an average MAPE of 3.0540%. These results suggest that the proposed framework provides a useful and relatively stable univariate forecasting approach for volatile pork prices. However, the findings should be interpreted within the scope of the present dataset and experimental design, and future work will extend the framework to multivariate forecasting with exogenous drivers and uncertainty quantification. Full article
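The MAE and MAPE figures quoted above are standard point-forecast errors; for reference, a minimal implementation:

```python
import numpy as np

def mae(y_true, y_pred):
    """Mean absolute error, in the units of the series (e.g. CNY/kg)."""
    return float(np.mean(np.abs(np.asarray(y_true, float) - np.asarray(y_pred, float))))

def mape(y_true, y_pred):
    """Mean absolute percentage error, in percent (undefined when y_true has zeros)."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.mean(np.abs((y_true - y_pred) / y_true)) * 100)

print(mae([20.0, 25.0], [21.0, 24.0]))   # 1.0
print(mape([20.0, 25.0], [21.0, 24.0]))  # 4.5
```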
(This article belongs to the Section Agricultural Economics, Policies and Rural Management)

23 pages, 1612 KB  
Article
DARNet: Dual-Head Attention Residual Network for Multi-Step Short-Term Load Forecasting
by Jianyu Ren, Yun Zhao, Yiming Zhang, Haolin Wang, Hao Yang, Yuxin Lu and Ziwen Cai
Electronics 2026, 15(8), 1548; https://doi.org/10.3390/electronics15081548 - 8 Apr 2026
Abstract
Short-term load forecasting (STLF) plays a pivotal role in modern power system operations, yet it remains challenging due to the complex spatiotemporal dependencies in load data. This paper proposes a dual-head attention residual network (DARNet) that significantly advances STLF through three key innovations: (1) a hybrid encoder combining 1D-CNN and GRU architectures to simultaneously capture the local load patterns and long-term temporal dependencies, achieving a 28% better locality awareness than that of conventional approaches; (2) a novel dual-head attention mechanism that dynamically models both the inter-temporal relationships and cross-variable dependencies, reducing the feature engineering requirements; and (3) an autocorrelation-adjusted recursive forecasting framework that cuts the multi-step prediction error accumulation by 33% compared to that with standard seq2seq models. Extensive experiments on real-world datasets from three Chinese cities demonstrate DARNet’s superior performance, outperforming six state-of-the-art benchmarks by 21–35% across all of the evaluation metrics (MAPE, SMAPE, MAE, and RRSE) while maintaining robust generalization across different geographical regions and prediction horizons. Full article
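The recursive forecasting scheme mentioned in innovation (3) feeds each prediction back as input for the next step; the paper's autocorrelation adjustment is not reproduced here. A bare sketch with a toy one-step model:

```python
def recursive_forecast(model, history, steps):
    """Multi-step forecasting by feeding each prediction back as input.
    `model` maps a fixed-length window of past values to the next value."""
    window = list(history)
    preds = []
    for _ in range(steps):
        y = model(window[-len(history):])  # keep the window length fixed
        preds.append(y)
        window.append(y)                   # the prediction becomes an input
    return preds

# Toy one-step model: next value = last value + 1
print(recursive_forecast(lambda w: w[-1] + 1, [1, 2, 3], steps=3))  # [4, 5, 6]
```

Because every step consumes earlier predictions, one-step errors compound; that accumulation is exactly what DARNet's adjustment targets.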
(This article belongs to the Section Artificial Intelligence)

25 pages, 3968 KB  
Article
Explainable Data-Driven Approach for Smart Crop Yield Prediction in Sub-Saharan Africa: Performance and Interpretability Analysis
by Damilola D. Olatinwo, Herman C. Myburgh, Allan De Freitas and Adnan Abu-Mahfouz
Agriculture 2026, 16(8), 826; https://doi.org/10.3390/agriculture16080826 - 8 Apr 2026
Abstract
The increasing demand for innovative strategies in sustainable food production—driven by rapid global population growth, particularly in sub-Saharan Africa (SSA)—necessitates urgent attention to agricultural resilience. Recent technological advancements have enhanced crop productivity, post-harvest preservation, and environmentally sustainable farming practices. However, three critical bottlenecks remain: (i) the lack of accurate, maize-specific yield prediction methods tailored to SSA; (ii) limited multimodal modeling approaches capable of capturing complex, nonlinear interactions among heterogeneous data sources; and (iii) a lack of explainability mechanisms, which render high-performing models “black boxes” and hinder stakeholder trust. To address these gaps, this study presents an explainable machine learning framework for smart maize yield prediction. We integrate multimodal SSA-specific soil, crop, and weather data to capture the multi-dimensional drivers of maize productivity. Six diverse algorithms—extreme gradient boosting (XGBoost), light gradient boosting machine (LGBM), categorical boosting (CatBoost), support vector machine (SVM), random forest (RF), and an artificial neural network (ANN) combined with k-nearest neighbors (kNN)—were benchmarked to evaluate predictive performance. To ensure robustness against spatial heterogeneity, we employed a Leave-One-Plot-Out (LOPO) cross-validation strategy. Empirical results on unseen test data identify CatBoost as the best-performing model, achieving a coefficient of determination (R2) of approximately 0.76, demonstrating its ability to capture complex, nonlinear relationships in agricultural data. To enhance transparency and stakeholder trust, we integrated Local Interpretable Model-agnostic Explanations (LIME), providing plot-level insights into the physiological and environmental drivers of maize yield.
Together, these contributions establish a scalable and interpretable modeling framework capable of supporting data-driven agricultural decision-making in SSA. Full article
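The Leave-One-Plot-Out strategy holds out every sample from one plot per fold, so spatially correlated rows from the same plot never appear in both the training and test sides of a split. A minimal sketch (the plot labels are illustrative):

```python
def leave_one_plot_out(plot_ids):
    """Yield (train_idx, test_idx) pairs where each fold holds out all
    samples belonging to one plot."""
    for plot in sorted(set(plot_ids)):
        test = [i for i, p in enumerate(plot_ids) if p == plot]
        train = [i for i, p in enumerate(plot_ids) if p != plot]
        yield train, test

plots = ["A", "A", "B", "C", "C"]
for train, test in leave_one_plot_out(plots):
    print(train, test)
# [2, 3, 4] [0, 1]
# [0, 1, 3, 4] [2]
# [0, 1, 2] [3, 4]
```

This is the same idea as scikit-learn's grouped cross-validators, with the plot identifier acting as the group key.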
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)

17 pages, 6586 KB  
Article
Harnessing Foundation Models for Optical–SAR Object Detection via Gated–Guided Fusion
by Qianyin Jiang, Jianshang Liao, Qiuyu Lin and Junkang Zhang
ISPRS Int. J. Geo-Inf. 2026, 15(4), 160; https://doi.org/10.3390/ijgi15040160 - 8 Apr 2026
Abstract
Remote sensing object detection is fundamental to Earth observation, yet remains challenging when relying on a single sensing modality. While optical imagery provides rich spatial and textural details, it is highly sensitive to illumination and adverse weather; conversely, Synthetic Aperture Radar (SAR) offers robust all-weather acquisition but suffers from speckle noise and limited semantic interpretability. To address these limitations, we leverage the potential of foundation models for optical–SAR object detection via a novel gated–guided fusion approach. By integrating transferable and generalizable representations from foundation models into the detection pipeline, we enhance semantic expressiveness and cross-environment robustness. Specifically, a gated–guided fusion mechanism is designed to selectively merge cross-modal features with foundational priors, enabling the network to prioritize informative cues while suppressing unreliable signals in complex scenes. Furthermore, we propose a dual-stream architecture incorporating attention mechanisms and State Space Models (SSMs) to simultaneously capture local and long-range dependencies. Extensive experiments on the large-scale M4-SAR dataset demonstrate that our method achieves state-of-the-art performance, significantly improving detection accuracy and robustness under challenging sensing conditions. Full article
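The gated fusion idea, in its simplest element-wise form, learns a gate from the concatenated modality features and blends the two streams accordingly. A hypothetical sketch only; the paper's gated–guided mechanism, which also injects foundation-model priors, is more elaborate:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_fusion(optical, sar, w_gate, b_gate):
    """Element-wise gated fusion of two modality feature vectors:
    g = sigmoid(W @ [optical; sar] + b), fused = g * optical + (1 - g) * sar."""
    g = sigmoid(np.concatenate([optical, sar]) @ w_gate + b_gate)
    return g * optical + (1.0 - g) * sar

# With zero weights, the bias alone sets the gate: a strongly positive bias
# routes the output toward the optical features, a negative one toward SAR.
opt = np.array([1.0, 2.0])
sar = np.array([3.0, 4.0])
fused = gated_fusion(opt, sar, np.zeros((4, 2)), np.full(2, 10.0))
```

In a trained network the gate would instead learn to favor SAR under clouds and optical in clear scenes, which is the robustness argument made above.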

17 pages, 1073 KB  
Review
Cannabinoids in Motor Control: From Receptor Distribution to Motor Disorders
by Dan Faganeli and Metoda Lipnik-Stangelj
Biomedicines 2026, 14(4), 844; https://doi.org/10.3390/biomedicines14040844 - 8 Apr 2026
Abstract
Cannabinoid receptors occupy strategic control nodes within motor circuitry, making them potential targets for modulating different motor manifestations. They are positioned both within basal ganglia circuits that regulate movement and within spinal circuits that control skeletal muscle tone. Consequently, cannabinoids have been studied across diverse motor disorders, most notably in movement disorders and tone disorders, particularly those resulting in spasticity. Because motor control spans multiple anatomically and functionally distinct levels, relating cannabinoid signaling to effects on motor function is not straightforward. Limited understanding of cannabinoid receptor distribution has led to cannabinoids being tested even in disorders where receptor localization would predict little or no benefit. Mapping receptor distribution within individual motor circuits and integrating them with their pharmacological effects can help anticipate how cannabinoid signaling shapes motor output. Combined with characteristic motor manifestations, one can identify motor disorders in which cannabinoids may have therapeutic value. In this review, we integrate existing evidence to place cannabinoid receptors within key motor pathways, ranging from basal ganglia circuits controlling movement to peripheral mechanisms governing muscle tone. We consider both cannabinoid 1 receptor (CB1R) and cannabinoid 2 receptor (CB2R), with CB2R gaining attention only recently for its potential relevance within the central nervous system. Building on this framework, we infer how cannabinoids acting at these sites may modulate motor control, and consequently, influence motor manifestations across major motor disorders. Finally, we examine how these distribution-based expectations align with available clinical observations. Full article

17 pages, 465 KB  
Article
Mapping the Use of Real-World Evidence Across the EU Health Technology Assessment Regulation: Methodological Considerations, Challenges, and Opportunities for Harmonization
by Grammati Sarri, Bengt Liljas, Keith R. Abrams, Stephen J. Duffield and Murtuza Bharmal
J. Mark. Access Health Policy 2026, 14(2), 20; https://doi.org/10.3390/jmahp14020020 - 8 Apr 2026
Abstract
Methodological guidelines for real-world evidence (RWE) in European Union (EU) joint clinical assessments (JCA) are lacking. This manuscript explores RWE potential in EU health technology assessment (HTA) and offers recommendations for generating high-quality RWE. An environmental scan of peer-reviewed and gray literature was conducted to review RWE frameworks and documents in EU regulatory and HTA decision-making. Extraction elements were standardized across key RWE themes: data quality, methodological rigor, stakeholder engagement, and applications. In JCA, RWE has multiple uses, including informing PICO simulation exercises, understanding disease landscape, identifying prognostic factors and effect modifiers, and directly or indirectly informing comparative clinical assessments. Methodological guidance from the HTA Coordination Group is limited to cases in which evidence from non-randomized studies is used as direct inputs in comparative assessments. Individual HTA bodies provide more detailed guidance, missing an opportunity to leverage RWE within JCAs that can offer insight for local Member State submissions. Generating high-quality RWE that is credible, actionable, and acceptable for JCA submissions and local HTA bodies requires careful attention to methodological considerations and early planning. Broader RWE integration that reflects patient journeys is needed. Expanding the HTA Coordination Group guidance can unlock RWE’s full potential in supporting EU JCA submissions. Full article
(This article belongs to the Collection European Health Technology Assessment (EU HTA))

26 pages, 7110 KB  
Article
Research on an Automatic Detection Method for Response Keypoints of Three-Dimensional Targets in Directional Borehole Radar Profiles
by Xiaosong Tang, Maoxuan Xu, Feng Yang, Jialin Liu, Suping Peng and Xu Qiao
Remote Sens. 2026, 18(7), 1102; https://doi.org/10.3390/rs18071102 - 7 Apr 2026
Abstract
During the interpretation of Borehole Radar (BHR) B-scan profiles, the accurate determination of the azimuth of geological targets in three-dimensional space is a critical issue for achieving precise anomaly localization and spatial structure inversion. However, existing directional BHR anomaly localization methods exhibit limited intelligence, insufficient adaptability to multi-site data, and weak generalization capability, rendering them inadequate for engineering applications under complex geological conditions. To address these challenges, a robust deep learning model, termed BSS-Pose-BHR, is developed based on YOLOv11n-pose for keypoint detection in directional BHR profiles. The model incorporates three key optimizations: Bi-Level Routing Attention (BRA) replaces Multi-Head Self-Attention (MHSA) in the backbone to improve computational efficiency; Conv_SAMWS enhances keypoint-related feature weighting in the backbone and neck; and Spatial and Channel Reconstruction Convolution (SCConv) is integrated into the detection head to reduce redundancy and strengthen local feature extraction, thereby improving suitability for keypoint detection tasks. In addition, a three-dimensional electromagnetic model of limestone containing a certain density of clay particles is established to construct a simulation dataset. On the simulated test set, compared with current mainstream deep learning approaches and conventional directional borehole radar anomaly localization algorithms, BSS-Pose-BHR achieves superior performance, with an mAP50(B) of 0.9686, an mAP50–95(B) of 0.7712, an mAP50(P) of 0.9951, and an mAP50–95(P) of 0.9952. Ablation experiments demonstrate that each proposed module contributes significantly to performance improvement. Compared with the baseline, BSS-Pose-BHR improves mAP50(B) by 5.39% and mAP50(P) by 0.86%, while increasing model weight by only 1.05 MB, thereby achieving a reasonable trade-off between detection accuracy and complexity. Furthermore, indoor physical model experiments validate the effectiveness of the method on measured data. Robustness experiments under different Peak Signal-to-Noise Ratio (PSNR) conditions and varying missing-trace rates indicate that BSS-Pose-BHR maintains high detection accuracy under moderate noise and data loss, demonstrating strong engineering applicability and practical value.
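The PSNR-controlled robustness tests mentioned in the abstract can be emulated with a small noise-injection helper. The sketch below is illustrative only and not the authors' code: the function names, the unit peak value, and the zero-mean Gaussian noise model are assumptions. It derives the noise variance from the target PSNR via the standard relation PSNR = 10·log10(peak² / MSE).

```python
import numpy as np

def add_noise_at_psnr(signal: np.ndarray, target_psnr_db: float,
                      peak: float = 1.0, rng=None) -> np.ndarray:
    """Add zero-mean Gaussian noise so the result has, in expectation,
    the requested PSNR relative to `peak`."""
    rng = np.random.default_rng(rng)
    # PSNR = 10*log10(peak^2 / MSE)  =>  MSE = peak^2 / 10^(PSNR/10)
    mse = peak ** 2 / (10.0 ** (target_psnr_db / 10.0))
    noise = rng.normal(0.0, np.sqrt(mse), size=signal.shape)
    return signal + noise

def psnr(clean: np.ndarray, noisy: np.ndarray, peak: float = 1.0) -> float:
    """Measure the achieved PSNR between a clean and a degraded profile."""
    mse = np.mean((clean - noisy) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```

Sweeping `target_psnr_db` over a range of values would reproduce the kind of degradation curve used to probe detection accuracy under increasing noise.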
25 pages, 6398 KB  
Article
StageAttn-VTON: Stage-Wise Flow Deformation with Attention for High-Resolution Virtual Try-On
by Li Yao, Wenhui Liang and Yan Wan
Appl. Sci. 2026, 16(7), 3609; https://doi.org/10.3390/app16073609 - 7 Apr 2026
Abstract
Virtual try-on is a key enabling technology for online fashion retail and digital garment visualization. It aims to realistically render a target garment on a person while preserving geometric alignment and fine texture details. Appearance flow-based approaches provide explicit deformation modeling but often suffer from texture squeezing and boundary artifacts in challenging scenarios, such as long sleeves and tucked-in garments, especially under high-resolution settings. In this work, we propose StageAttn-VTON (Stage-wise Attentive Virtual Try-On), an appearance flow-based framework that improves structural coherence and visual fidelity through stage-wise deformation modeling. Specifically, garment warping is decomposed into three stages—coarse alignment, local refinement, and non-target region removal—which mitigates the coupling between competing objectives, such as smooth texture preservation and accurate structural alignment. Furthermore, we introduce a self-attention module in the image synthesis stage to enhance global dependency modeling and capture long-range garment–body interactions. Experiments on VITON-HD and the upper-body subset of DressCode demonstrate that StageAttn-VTON achieves consistently strong performance against representative warping-based and diffusion-based baselines. In addition, qualitative comparisons show that the proposed method better alleviates deformation artifacts in challenging regions such as sleeves and waist areas.
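The self-attention module used in the synthesis stage is described only at a high level. As a point of reference, the standard single-head scaled dot-product self-attention that such modules build on can be sketched as follows; all names and the plain NumPy form are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def softmax(x: np.ndarray, axis: int = -1) -> np.ndarray:
    """Numerically stable softmax along `axis`."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x: np.ndarray, wq: np.ndarray,
                   wk: np.ndarray, wv: np.ndarray) -> np.ndarray:
    """Scaled dot-product self-attention over a feature sequence x of
    shape (n, d): every position attends to every other, which is what
    gives attention its global (long-range) receptive field."""
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(k.shape[-1])  # pairwise affinities (n, n)
    weights = softmax(scores, axis=-1)       # each row sums to 1
    return weights @ v                       # globally mixed features
```

In an image-synthesis network the sequence positions would be flattened spatial locations of a feature map, which is how long-range garment–body interactions can be captured in a single layer.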
25 pages, 1851 KB  
Article
Where to Start? Participatory Systems Mapping for Place-Based Service Integration in the City of Casey
by Matt Healey, Joseph Lea and Vanessa Hammond
Systems 2026, 14(4), 407; https://doi.org/10.3390/systems14040407 - 7 Apr 2026
Abstract
Place-based approaches have gained significant attention as a means of addressing entrenched disadvantage through collaborative, locally responsive service delivery, yet implementation has yielded mixed results and the systemic factors that facilitate or impede inter-organisational collaboration remain inadequately understood. This study applied participatory systems mapping as part of a systemic inquiry to identify leverage points for place-based integrated service delivery in the City of Casey, an outer-metropolitan municipality in Melbourne, Australia. Twenty-one representatives from the Casey Futures Partnership engaged in group model building workshops, co-producing a causal loop diagram containing 33 factors and 104 directional connections. The resulting map was analysed using a blended analytical approach combining network metrics with the Action Scales Model. Funding availability and criteria emerged as the most central factor within the system, while belief-level factors, including territorial behaviour and resource and collaboration mindset, were found to be substantially shaped by upstream structural conditions. Factors combining network influence with deeper system positioning and amenability to local action included awareness of community needs and priorities, trust and willingness to collaborate from funders, inter-organisational communication, and advocacy effectiveness. The findings support multi-level place-based approaches that address underlying beliefs and structural conditions alongside operational improvements. Full article
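The network-metric side of the blended analysis can be illustrated with a toy causal loop diagram. The factor names and links below are hypothetical stand-ins for the 33 factors and 104 directional connections in the actual map; the sketch only shows how a simple degree-centrality ranking would surface a highly connected factor such as funding criteria:

```python
from collections import Counter

# Hypothetical miniature causal loop diagram: directed "cause -> effect"
# links standing in for the connections co-produced in the workshops.
links = [
    ("funding criteria", "territorial behaviour"),
    ("funding criteria", "collaboration mindset"),
    ("trust from funders", "funding criteria"),
    ("inter-org communication", "trust from funders"),
    ("advocacy effectiveness", "funding criteria"),
    ("collaboration mindset", "inter-org communication"),
]

def degree_centrality(edges):
    """Total degree (incoming + outgoing links) per factor."""
    deg = Counter()
    for src, dst in edges:
        deg[src] += 1
        deg[dst] += 1
    return deg

ranked = degree_centrality(links).most_common()
```

Richer metrics (betweenness, feedback-loop membership) refine this picture, but even raw degree already distinguishes hub factors from peripheral ones.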