Search Results (1,342)

Search Parameters:
Keywords = contrast-enhanced image detection

32 pages, 6103 KB  
Article
An Optimal Deep Hybrid Framework with Selective Kernel U-Net for Skin Lesion Detection and Classification
by Guzal Gulmirzaeva, Robert Hudec, Baxtiyorjon Akbaraliev and Batirbek Samandarov
Bioengineering 2026, 13(4), 427; https://doi.org/10.3390/bioengineering13040427 - 6 Apr 2026
Abstract
Early and accurate detection of skin cancer is critical for reducing mortality rates, particularly for malignant melanoma. Automated analysis of dermoscopic images has gained significant attention due to its potential to support clinical diagnosis and overcome the limitations of manual inspection. Motivated by challenges such as image noise, low contrast, lesion variability, and redundant feature representation, this study proposes an optimal deep hybrid framework for skin lesion detection and classification. The objective of this work is to design a robust and efficient system that integrates advanced preprocessing, precise segmentation, optimal feature selection, and accurate classification. Initially, contrast enhancement using Contrast Limited Adaptive Histogram Equalization (CLAHE) and noise reduction using Wiener filtering are applied to improve image quality. Lesion regions are then segmented using a Selective Kernel U-Net (SK-UNet), which adaptively captures multi-scale spatial information. Subsequently, discriminative color, texture, and shape features are extracted and optimized using the Fossa Optimization Algorithm (FOA) to eliminate redundancy. A hybrid one-dimensional Convolutional Neural Network–Gated Recurrent Unit (1D-CNN–GRU) classifier is employed for final classification, learning both spatial and sequential feature patterns. Experimental evaluation on the ISIC and DermMNIST datasets demonstrates that the proposed framework achieves classification accuracies of 97.6% and 95.6%, respectively, outperforming several existing methods. The results confirm that the proposed hybrid framework provides reliable, accurate, and scalable skin cancer diagnosis, highlighting its potential for assisting clinical decision-making and early detection. Full article
(This article belongs to the Special Issue Deep Learning for Medical Applications: Challenges and Opportunities)
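The preprocessing stage described above combines CLAHE with Wiener filtering. As a rough illustration of the CLAHE idea, here is a minimal NumPy sketch of per-tile clipped histogram equalization; full CLAHE also bilinearly interpolates between tile mappings, which is omitted here, and the tile grid and clip limit are illustrative, not the paper's settings:

```python
import numpy as np

def clipped_equalize(tile, clip_limit=40, n_bins=256):
    # histogram with per-bin clipping; clipped excess is redistributed uniformly
    hist, _ = np.histogram(tile, bins=n_bins, range=(0, n_bins))
    excess = np.maximum(hist - clip_limit, 0).sum()
    hist = np.minimum(hist, clip_limit) + excess // n_bins
    cdf = hist.cumsum().astype(np.float64)
    cdf = (cdf - cdf.min()) / max(cdf.max() - cdf.min(), 1) * (n_bins - 1)
    return cdf[tile].astype(np.uint8)

def tile_clahe(img, grid=(8, 8), clip_limit=40):
    # equalize each tile independently (full CLAHE would also interpolate)
    out = np.empty_like(img)
    h, w = img.shape
    for i in range(grid[0]):
        for j in range(grid[1]):
            ys = slice(i * h // grid[0], (i + 1) * h // grid[0])
            xs = slice(j * w // grid[1], (j + 1) * w // grid[1])
            out[ys, xs] = clipped_equalize(img[ys, xs], clip_limit)
    return out
```

Clipping the histogram before equalization is what keeps CLAHE from over-amplifying noise in flat regions, the failure mode of plain histogram equalization on dermoscopic images.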

32 pages, 43664 KB  
Article
MVFF: Multi-View Feature Fusion Network for Small UAV Detection
by Kunlin Zou, Haitao Zhao, Xingwei Yan, Wei Wang, Yan Zhang and Yaxiu Zhang
Drones 2026, 10(4), 264; https://doi.org/10.3390/drones10040264 - 4 Apr 2026
Abstract
With the widespread adoption of various types of Unmanned Aerial Vehicles (UAVs), their non-compliant operations pose a severe challenge to public safety, necessitating the urgent identification and detection of UAV targets. However, in complex backgrounds, UAV targets exhibit small-scale dimensions and low contrast, coupled with extremely low signal-to-noise ratios. This forces conventional target detection methods to confront issues such as feature convergence, missed detections, and false alarms. To address these challenges, we propose a Multi-View Feature Fusion Network (MVFF) that achieves precise identification of small, low-contrast UAV targets by leveraging complementary multi-view information. First, we design a collaborative view alignment fusion module. This module employs a cross-map feature fusion attention mechanism to establish pixel-level mapping relationships and perform deep fusion, effectively resolving geometric distortion and semantic overlap caused by imaging angle differences. Furthermore, we introduce a view feature smoothing module that employs displacement operators to construct a lightweight long-range modeling mechanism. This overcomes the limitations of traditional convolutional local receptive fields, effectively eliminating ghosting artifacts and response discontinuities arising from multi-view fusion. Additionally, we developed a small object binary cross-entropy loss function. By incorporating scale-adaptive gain factors and confidence-aware weights, this function enhances the learning capability of edge features in small objects, significantly reducing prediction uncertainty caused by background noise. Comparative experiments conducted on a multi-perspective UAV dataset demonstrate that our approach consistently outperforms existing state-of-the-art methods across multiple performance metrics. 
Specifically, it achieves a Structure-measure of 91.50% and an F-measure of 85.14%, validating the effectiveness and superiority of the proposed method. Full article
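The loss described above weights binary cross-entropy by object scale and by prediction confidence. One plausible form is sketched below; the exact gain and weight functions used by MVFF are not specified in the abstract, so both terms here are illustrative:

```python
import numpy as np

def small_object_bce(pred, target, box_areas, img_area, gamma=0.5, eps=1e-7):
    # scale-adaptive gain: smaller boxes receive larger weight (illustrative form)
    gain = (img_area / np.maximum(box_areas, 1.0)) ** gamma
    gain = gain / gain.mean()
    # confidence-aware weight: ambiguous predictions (p near 0.5) are up-weighted
    conf_w = 1.0 + 4.0 * pred * (1.0 - pred)
    bce = -(target * np.log(pred + eps) + (1 - target) * np.log(1 - pred + eps))
    return float((gain * conf_w * bce).mean())
```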

20 pages, 7512 KB  
Article
PDA-YOLO: An Early Detection Method for Egg Fertilization Rate Based on Position-Decoupled Attention
by Yifan Zhou, Zhengxiang Shi, Geqi Yan, Haiqing Peng, Fuwei Li, Wei Liu and Dapeng Li
Agriculture 2026, 16(7), 784; https://doi.org/10.3390/agriculture16070784 - 2 Apr 2026
Abstract
This study addresses the inefficiencies, subjectivity, and poor adaptability to lighting variations inherent in traditional candling methods used in large-scale egg incubation. We developed a high-throughput transmissive imaging system capable of capturing 30 eggs simultaneously. Based on this system, we propose PDA-YOLO, an enhanced YOLOv8-based object detection model featuring a position-decoupled attention strategy. Specifically, a lightweight C2f-SE module is integrated into the backbone to amplify subtle feature responses in low-contrast regions, while a CBAM is deployed prior to the detection head to mitigate background clutter through precise spatial attention. Experimental results on a self-constructed Hailan White egg dataset show that at the critical 60 h incubation stage, PDA-YOLO achieves a Recall of 91.5% and an mAP@0.5 of 97.4%, outperforming the YOLOv8 baseline while maintaining a real-time inference speed of 62.1 FPS. Grad-CAM visualizations confirm the model’s ability to focus on vascular textures and suppress noise. Furthermore, the model demonstrates robust performance under varying illumination (180–540 lumens), effectively mitigating missed detections in low light and recognition degradation from overexposure. This work provides a scalable, real-time solution for non-destructive, early-stage detection of poultry health and fertilization status in commercial hatcheries. Full article
(This article belongs to the Special Issue Computer Vision Analysis Applied to Farm Animals)
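The C2f-SE module above builds on the standard squeeze-and-excitation operation. A minimal NumPy forward pass of an SE gate is shown below; `w1, b1, w2, b2` stand in for the learned bottleneck parameters (reduction ratio and initialization are placeholders):

```python
import numpy as np

def se_gate(x, w1, b1, w2, b2):
    # x: (C, H, W) feature map
    z = x.mean(axis=(1, 2))                    # squeeze: global average pool -> (C,)
    h = np.maximum(w1 @ z + b1, 0.0)           # excitation: FC reduce + ReLU -> (C/r,)
    s = 1.0 / (1.0 + np.exp(-(w2 @ h + b2)))   # FC expand + sigmoid -> (C,)
    return x * s[:, None, None]                # channel-wise recalibration
```

Because the gate `s` lies in (0, 1) per channel, the block can only attenuate channels, which is how it amplifies *relative* responses of informative low-contrast features.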

14 pages, 615 KB  
Article
Effectiveness of Contrast-Enhanced Spectral Mammography Following Contrast-Enhanced Computed Tomography in Breast Cancer Diagnosis
by Iksan Tasdelen, Ahmet Gunkan and Fatma Nur Soylu
Diagnostics 2026, 16(7), 1062; https://doi.org/10.3390/diagnostics16071062 - 1 Apr 2026
Abstract
Background: Contrast-enhanced spectral mammography (CEM) provides functional information on tumor vascularity and is increasingly used for breast imaging, particularly as an alternative to breast MRI in selected clinical scenarios. During breast cancer staging, many patients undergo contrast-enhanced computed tomography (CT) and may subsequently receive a second dose of iodinated contrast for CEM, thus increasing contrast exposure, cost, and potential risk. This study aimed to evaluate the diagnostic effectiveness of CEM performed immediately after contrast-enhanced CT using the same contrast bolus (CT/CEM), in comparison with standard CEM requiring a separate contrast injection. Methods: The retrospective single-center study included 63 women with histopathologically confirmed breast cancer who underwent imaging between January of 2020 and December of 2024. Patients were divided into two groups: CT followed by CEM (CT/CEM, n = 29) and standard CEM alone (n = 34). Results: The CEM findings—including lesion visibility, enhancement characteristics, lesion conspicuity, background parenchymal enhancement (BPE), additional lesion detection, and tumor size—were assessed by two radiologists in consensus. The primary lesion was identified in all patients in both groups. No significant differences were found between groups in terms of lesion enhancement, conspicuity, BPE, or additional lesion detection (p > 0.05). Lesion conspicuity was higher in patients with low BPE, particularly in the CT/CEM group. Conclusions: CEM performed immediately after CT demonstrated diagnostic performance comparable to standard CEM, while eliminating the need for additional contrast administration. Full article
(This article belongs to the Section Medical Imaging and Theranostics)

23 pages, 6673 KB  
Article
ERZA-DETR: A Deep Learning-Based Detection Transformer with Enhanced Relational-Zone Aggregation for WCE Lesion Detection
by Shiren Ye, Haipeng Ma, Zetong Zhang and Liangjing Li
Algorithms 2026, 19(4), 268; https://doi.org/10.3390/a19040268 - 1 Apr 2026
Abstract
Wireless capsule endoscopy (WCE) plays a vital role in non-invasive screening of small intestinal lesions. However, the automated detection of lesions remains challenging due to low contrast, uneven illumination, and severe visual variability across images. Existing convolutional detectors rely heavily on manually designed anchors and post-processing, while end-to-end detection transformers developed for natural images exhibit limited adaptability to the complex texture and spectral characteristics of WCE data. To overcome these limitations, this study proposes a deep learning-based detection transformer with enhanced relational-zone aggregation for WCE lesion detection, termed ERZA-DETR, specifically tailored for WCE lesion detection. The framework integrates three complementary modules: a Dual-Band Adaptive Fourier Spectral module (DBFS) that recalibrates frequency responses to suppress illumination artifacts and highlight lesion boundaries; a Fused Dual-scale Gated Convolutional module (FD-gConv) that selectively fuses multi-scale texture features; and a Graph-Linked Embedding at Semantic Scales module (GLES) that preserves local topological relationships through coordinate-gated aggregation. Experimental evaluations on the SEE-AI small intestine dataset demonstrate that ERZA-DETR achieves a 3.2% improvement in mAP@50 and a 12.4% reduction in parameters compared with RT-DETRv2, achieving a superior balance between detection accuracy, computational efficiency, and clinical applicability. Full article
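The DBFS module above recalibrates frequency responses; its exact formulation is not given in the abstract, but the basic idea of scaling low and high spatial-frequency bands separately (damping slow illumination variation, boosting boundary detail) can be sketched as:

```python
import numpy as np

def dual_band_recalibrate(img, low_gain=0.8, high_gain=1.2, cutoff=0.1):
    # hypothetical two-band split: damp low frequencies (illumination),
    # boost high frequencies (edges/lesion boundaries)
    F = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.sqrt(((yy - h / 2) / h) ** 2 + ((xx - w / 2) / w) ** 2)
    gain = np.where(r < cutoff, low_gain, high_gain)
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * gain)))
```

A learned version would replace the fixed `low_gain`/`high_gain` scalars with adaptive per-band responses.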

31 pages, 4842 KB  
Article
FDR-Net: Fine-Grained Lesion Detection Model for Tilapia in Aquaculture via Multi-Scale Feature Enhancement and Spatial Attention Fusion
by Chenhui Zhou and Vladimir Y. Mariano
Symmetry 2026, 18(4), 598; https://doi.org/10.3390/sym18040598 - 31 Mar 2026
Abstract
In disease control and precision management in aquaculture, rapid and accurate identification of common fish diseases is pivotal to mitigating economic losses and ensuring aquaculture profitability. However, fish diseases are characterized by subtle symptoms, polymorphic lesions, and high susceptibility to environmental perturbations such as water turbidity and illumination fluctuations. Existing detection models generally suffer from inadequate lightweight design, poor fine-grained lesion feature extraction, and deficient adaptability to class imbalance, failing to meet the stringent requirements of precise diagnosis in real-world aquaculture scenarios. To address these challenges, this study proposes FDR-Net: a fine-grained lesion detection model for tilapia via multi-scale feature enhancement and spatial attention fusion. Using image data of Nile tilapia (Oreochromis niloticus) covering 6 common diseases and healthy individuals (from the NTD-1 dataset), the model incorporates symmetry-aware design logic, leveraging the morphological and textural symmetry of healthy tilapia tissues to capture lesion-induced symmetry-breaking features, thereby improving fine-grained lesion detection accuracy. 
Through depth-width scaling coefficients, FDR-Net achieves lightweight optimization while integrating three core modules and a task-specific loss function for full-chain optimization: specifically, a Micro-lesion Feature Enhancement Module (MLFEM) is embedded in key feature layers of the backbone network to accurately extract edge and texture features of incipient fine-grained lesions via multi-scale frequency decomposition and residual fusion; subsequently, a Lightweight Multi-scale Position Attention Module (MS_PSA) and a Single-modal Intra-feature Contrastive Fusion Module (SMICFM) are collaboratively deployed—the former focusing on spatial localization of lesion features, and the latter enhancing lesion-background discriminability through channel-spatial feature recalibration and contrastive fusion; finally, a Class-Aware Weighted Hybrid Loss (CAWHL) function is combined with customized small-target anchor boxes to alleviate class imbalance and further improve localization and classification accuracy of fine-grained lesions. Empirical evaluations on the NTD-1 dataset demonstrate that compared with mainstream state-of-the-art baseline models, FDR-Net achieves a peak recognition accuracy of 90.1% with substantially enhanced mAP50-95 performance. Retaining lightweight characteristics, it exhibits superior performance in identifying incipient fine-grained lesions and strong adaptability to simulated complex aquaculture scenarios. Collectively, this study provides an efficient technical backbone for the rapid and precise detection of tilapia fine-grained lesions, offering a potential solution for precise disease management in tilapia farming. Full article
(This article belongs to the Special Issue Symmetry and Asymmetry in Computer Vision Under Extreme Environments)
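The CAWHL loss above targets class imbalance, but its exact form is not given in the abstract. One common class-weighting scheme it could resemble is inverse effective-number weighting; the sketch below is illustrative, not the paper's formula:

```python
import numpy as np

def effective_number_weights(class_counts, beta=0.999):
    # weight each class by the inverse of its "effective number" of samples,
    # so rarer disease classes contribute more to the loss
    counts = np.asarray(class_counts, dtype=float)
    effective = (1.0 - beta ** counts) / (1.0 - beta)
    w = 1.0 / effective
    return w * len(counts) / w.sum()   # normalize so weights average to 1
```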

21 pages, 5987 KB  
Article
Machine Learning-Based Fluorescence Assessment for Augmented Imaging and Decision Support in Glioblastoma Resections
by Anna Schaufler, Klaus-Peter Stein, Sunisha Pamnani, Claudia A. Dumitru, Belal Neyazi, Ali Rashidi, Axel Boese and I. Erol Sandalcioglu
Cancers 2026, 18(7), 1125; https://doi.org/10.3390/cancers18071125 - 31 Mar 2026
Abstract
Background/Objectives: Glioblastoma is the most common and aggressive primary malignant brain tumor in adults, characterized by infiltrative growth and poor prognosis. Achieving maximal resection without inducing neurological deficits remains a challenge in glioblastoma surgery. While 5-aminolevulinic acid-based fluorescence-guided surgery supports intraoperative tumor visualization, its reliability is limited by patient variability and weak fluorescence signals. This study proposes a machine learning framework to enhance fluorescence-guided surgery sensitivity by analyzing surgical microscope images at the pixel level. Methods: Fluorescence-mode neurosurgical microscope images of synthetic samples with known Protoporphyrin IX (PPIX) concentrations were used to train three classifiers (Support Vector Machine, Naïve Bayes, Neural Network) for pixel-wise fluorescence detection. In parallel, three contrastive-learning-based Variational Autoencoders (VAE, β = 1, 2, 3) were evaluated for detecting weak fluorescence beyond visual perception. Additionally, a regression model was trained to relate pixel features to PPIX concentration. The best-performing VAE (β = 1) was subsequently trained on real intraoperative data, and its detection sensitivity was compared to annotations from four experienced surgeons. Results: The proposed model achieved the highest detection rates on synthetic test data when calibrated for 99% specificity. Applied to real intraoperative images, the model revealed fluorescent areas substantially larger than those marked by experienced surgeons. In non-5-ALA control cases, minimal false positives were observed, indicating a specificity exceeding 99.9%. The regression model reliably quantified PPIX concentration in synthetic samples (R² = 0.92). Conclusions: By enabling more sensitive and objective fluorescence detection, this approach offers a valuable tool for improving surgical decision-making and facilitating safer, more extensive tumor resections. Full article
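Calibrating a detector "for 99% specificity," as above, amounts to choosing a score threshold from the negative-class (non-fluorescent) pixel scores. A minimal sketch, assuming higher scores indicate fluorescence:

```python
import numpy as np

def threshold_for_specificity(negative_scores, specificity=0.99):
    # the threshold below which the desired fraction of negatives falls;
    # scoring above it then yields roughly (1 - specificity) false positives
    return float(np.quantile(np.asarray(negative_scores, dtype=float), specificity))
```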

24 pages, 3163 KB  
Review
Amplified Light Absorption with Nanomaterials for Enhanced Photoacoustic Imaging in Biomedical Research: A Review
by Yong Duk Kim, Jijoe Samuel Prabagar and Dong-Kwon Lim
Bioengineering 2026, 13(4), 404; https://doi.org/10.3390/bioengineering13040404 - 31 Mar 2026
Abstract
Recently, photoacoustic (PA) imaging has made a significant impact on biomedical imaging, providing detailed information on tissue structure and function by integrating optical and acoustic techniques. PA imaging can provide functional information at the cellular (e.g., oxygen saturation, hemoglobin concentration, metabolic rate) and molecular levels, owing to its substantial advantages over conventional imaging techniques. PA imaging is particularly useful for neuroimaging, cancer detection, and cardiovascular studies. Over the last decade, there has been a tremendous amount of research and development dedicated to nanomaterials that are ideal for PA imaging. Examples of nanomaterials include carbon-based and gold nanorods, both of which demonstrate greatly enhanced light absorption capabilities in the near-infrared range. Therefore, the properties of these materials make them perfect for achieving deep penetration into tissues. In addition, they exhibit biocompatibility, tunable optical properties, and enhance the acoustic signal for PA imaging, resulting in greater accuracy and precision in PA results. Researchers working in this area have focused on developing nanomaterials with fabrication capabilities, enabling real-time visualization of therapeutic events and enhancing light absorption. This review critically examines recent advances in nanomaterials for PA imaging, emphasizing strategies for signal enhancement and evaluating their impact on imaging performance, including imaging depth, photostability, and signal intensity, as well as their suitability for biomedical applications. Furthermore, complementary approaches for PA signal enhancement are discussed to provide a broader perspective and guide the selection and design of effective contrast agents for clinical and preclinical use. Full article

36 pages, 6675 KB  
Review
Application of Composite Raman Probes in Tumor Diagnosis and Imaging
by Shuting Zou, Yue Wen, Wanneng Li, Huanhuan Sun, Hongyi Yin, Dean Tian, Sidan Tian, Mei Liu and Jun Liu
Polymers 2026, 18(7), 843; https://doi.org/10.3390/polym18070843 - 30 Mar 2026
Abstract
Raman spectroscopy offers unique molecular fingerprinting capability for cancer diagnosis and monitoring, yet its biomedical application is fundamentally limited by weak intrinsic signals and complex biological backgrounds. Composite Raman probes, particularly surface-enhanced Raman scattering (SERS)—based systems, overcome these limitations through synergistic electromagnetic and chemical enhancement combined with functional integration. By engineering plasmonic nanostructures, interfacial electronic states, and molecular architectures, composite Raman probes achieve synergistic electromagnetic and chemical enhancement while incorporating biorecognition units, reporter molecules, and protective coatings to improve stability, specificity, and biocompatibility. In recent years, these probes have evolved from simple signal tags into multifunctional platforms capable of ultrasensitive tumor biomarker detection, high-contrast imaging, surgical guidance, therapy monitoring, and dynamic analysis of the tumor microenvironment (TME). This review systematically summarizes recent advances in composite Raman probes for oncological applications, with an emphasis on material design strategies, enhancement mechanisms, and stimulus-responsive regulation. Representative applications at both molecular and tissue levels are highlighted, including nucleic acid, protein, and exosome detection, as well as in vivo imaging and microenvironmental sensing. Finally, current challenges and future perspectives toward clinical translation are discussed, aiming to provide guidance for the rational design of next-generation Raman probes for precision oncology. Full article
(This article belongs to the Section Polymer Applications)

29 pages, 3941 KB  
Article
Explainable Deep Learning for Thoracic Radiographic Diagnosis: A COVID-19 Case Study Toward Clinically Meaningful Evaluation
by Divine Nicholas-Omoregbe, Olamilekan Shobayo, Obinna Okoyeigbo, Mansi Khurana and Reza Saatchi
Electronics 2026, 15(7), 1443; https://doi.org/10.3390/electronics15071443 - 30 Mar 2026
Abstract
COVID-19 still poses a global public health challenge, exerting pressure on radiology services. Chest X-ray (CXR) imaging is widely used for respiratory assessment due to its accessibility and cost-effectiveness. However, its interpretation is often challenging because of subtle radiographic features and inter-observer variability. Although recent deep learning (DL) approaches have shown strong performance in automated CXR classification, their black-box nature limits interpretability. This study proposes an explainable deep learning framework for COVID-19 detection from chest X-ray images. The framework incorporates anatomically guided preprocessing, including lung-region isolation, contrast-limited adaptive histogram equalization (CLAHE), bone suppression, and feature enhancement. A novel four-channel input representation was constructed by combining lung-isolated soft-tissue images with frequency-domain opacity maps, vessel enhancement maps, and texture-based features. Classification was performed using a modified Xception-based convolutional neural network, while Gradient-weighted Class Activation Mapping (Grad-CAM) was employed to provide visual explanations and enhance interpretability. The framework was evaluated on the publicly available COVID-19 Radiography Database, achieving an accuracy of 95.3%, an AUC of 0.983, and a Matthews Correlation Coefficient of approximately 0.83. Threshold optimisation improved sensitivity, reducing missed COVID-19 cases while maintaining high overall performance. Explainability analysis showed that model attention was primarily focused on clinically relevant lung regions. Full article
(This article belongs to the Special Issue Image Processing Based on Convolution Neural Network: 2nd Edition)
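The Matthews Correlation Coefficient reported above balances all four confusion-matrix cells, which makes it more informative than raw accuracy on imbalanced CXR data. For reference, it is computed as:

```python
import math

def mcc(tp, fp, tn, fn):
    # Matthews Correlation Coefficient in [-1, 1]; 0 means chance-level
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0
```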

12 pages, 1575 KB  
Article
Comparison of Quantitative Evaluation and Conventional Scar Scale Analysis for Pediatric Pathological Scars
by Jin-Ye Guan, Xing Zou, Jun-Wen Ge, Rui-Cheng Tian, Wei Liu, Mei-Yun Li and Dan Deng
Biomedicines 2026, 14(4), 784; https://doi.org/10.3390/biomedicines14040784 - 30 Mar 2026
Abstract
Background/Objectives: The incidence of pediatric pathological scars (PPS) has been gradually increasing due to various causes, highlighting the need for accurate scar assessment to monitor disease progression and therapeutic efficacy. Vancouver Scar Scale (VSS) and other scar evaluation systems are relatively subjective evaluation methods that rely on physicians’ or patients’ own judgment. By contrast, when comparing different scar scale evaluation methods, a three-dimensional (3D) camera and dermoscopy may provide relatively objective measurable parameters to avoid possible subjective bias created by the observers. This study aimed to compare the utility of traditional VSS evaluation with that of 3D cameras and dermoscopy in PPS evaluation. Methods: A total of 35 pediatric patients (aged 0–18 years) with PPS were involved, and their scars were assessed via the VSS, dermoscopy, and the Antera 3D® system. In addition, a subset of 18 patients (36 scar regions) was also evaluated for therapeutic efficacy after 3–6 months of treatment. Briefly, VSS scores were blindly evaluated by two independent dermatologists under standardized conditions. Quantitative assessment was also performed using dermoscopy and the Antera 3D® system. The former quantified chromatic parameters (pigmentation: L*, vascularity: a*, green value); the latter captured multispectral 3D images to analyze volume, pigmentation, and erythema. Data are presented as means ± standard deviation and analyzed using paired-sample t tests (one-tailed), the Wilcoxon signed-rank test, and standardized response means (SRMs) to assess therapeutic sensitivity, while baseline variability was evaluated using the standard deviation and coefficient of variation (CV). 
Results: The results showed that Antera 3D® detected significant reductions in pigmentation (p < 0.01, SRM = −0.46), vascularity (p < 0.001, SRM = −0.59), and volume (p < 0.0001, SRM = −0.83), while dermoscopy indicated similar moderate improvements in vascularity (Green value: p < 0.001, SRM = 0.57; a*: p < 0.0001, SRM = −0.68) and pigmentation (L*: p < 0.0001, SRM = 0.48) after treatments. VSS showed significant gains in pliability (p < 0.0001, SRM = −1.13), height (p < 0.01, SRM = −0.54), and overall impression (p < 0.0001, SRM = −0.86), but minimal changes in pigmentation (p > 0.05, SRM = 0) or vascularity (p > 0.05, SRM = −0.21). At baseline, Antera 3D® showed the greatest variability in pigmentation (CV 43.41%) and volume (CV 91.21%), followed by VSS in vascularity (CV 52.95%), pliability (CV 34.05%), and overall impression (CV 31.76%). Dermoscopy presented the lowest variability, indicating limited discriminative power. Conclusions: In conclusion, Antera 3D® offers an objective, sensitive, and spatially precise approach for PPS assessment and may provide additional quantitative information for evaluating subtle and early changes alongside traditional scar assessment scales. Its integration into clinical practice will enhance treatment monitoring and support more accurate timing of therapeutic interventions. Full article
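The standardized response mean (SRM) used throughout these results is the mean within-patient change divided by the standard deviation of that change:

```python
import numpy as np

def standardized_response_mean(before, after):
    # SRM = mean(change) / sd(change); |SRM| >= 0.8 is conventionally "large"
    change = np.asarray(after, dtype=float) - np.asarray(before, dtype=float)
    return float(change.mean() / change.std(ddof=1))
```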

15 pages, 287 KB  
Review
Potential Benefits of Ultra-High Field MRI for Embryonic and Fetal Brain Investigation: A Comprehensive Review
by Dan Boitor, Mihaela Oancea, Alexandru Farcasanu, Simion Simon, Daniel Muresan, Ioana Cristina Rotar, Georgiana Irina Nemeti, Iulian Goidescu, Adelina Staicu and Mihai Surcel
Diagnostics 2026, 16(7), 1026; https://doi.org/10.3390/diagnostics16071026 - 29 Mar 2026
Abstract
Ultra-high-field (UHF) magnetic resonance imaging, defined as imaging at field strengths of 7 Tesla (7T) and above, represents a frontier technology in neuroimaging with emerging applications in prenatal brain research. This narrative review examines the current evidence on the potential benefits of UHF-MRI for investigating embryonic and fetal brain development. Through analysis of 97 studies identified across multiple databases, we find that UHF-MRI offers substantial advantages in spatial resolution, tissue contrast, and anatomical detail compared to conventional clinical field strengths (1.5T and 3T). The primary applications to date have been in ex vivo imaging of post-mortem fetal specimens and preclinical animal models, where UHF-MRI has enabled unprecedented visualization of laminar cortical organization, early sulcation patterns, microstructural development, and subtle anatomical features critical for understanding normal and abnormal neurodevelopment. Key benefits include enhanced delineation of transient developmental zones, improved characterization of cortical folding, superior detection of subtle malformations, and the ability to create high-resolution three-dimensional atlases of fetal brain development. However, significant technical and safety challenges currently limit in utero human applications, including concerns about specific absorption rate, acoustic noise, and fetal motion artifacts. This review identifies critical knowledge gaps and future directions for translating UHF-MRI technology to clinical prenatal diagnostics. Full article
(This article belongs to the Special Issue Advances in Diagnostic Imaging for Maternal–Fetal Medicine)
31 pages, 3515 KB  
Article
Improving Deep Learning Based Lung Nodule Classification Through Optimized Adaptive Intensity Correction
by Saba Khan, Muhammad Nouman Noor, Haya Mesfer Alshahrani, Wided Bouchelligua and Imran Ashraf
Bioengineering 2026, 13(4), 396; https://doi.org/10.3390/bioengineering13040396 - 29 Mar 2026
Abstract
Lung cancer is one of the most common causes of death from cancer around the world, and catching it early through computed tomography (CT) scans can drastically improve survival. However, automated classification of pulmonary nodule candidates is hard because image intensities vary across scanners and acquisition protocols, resulting in inconsistent performance, more false positives (FP), and a ceiling on how well deep learning models perform in a typical clinical setting. In this work, we tackle this by introducing a preprocessing step that corrects intensity differences before feeding images into classification models. We use Contrast-Limited Adaptive Histogram Equalization (CLAHE), but with its key parameters tuned automatically via a modified version of the Covariance Matrix Adaptation Evolution Strategy (CMA-ES). This boosts local contrast adaptively, keeps important anatomical details intact, and cuts down on noise. We tested the approach on the public LUNA16 dataset, first checking image quality (Peak Signal-to-Noise Ratio (PSNR) around 53 dB and Structural Similarity Index (SSIM) of 0.9, better than standard methods), then training three popular deep models—namely, ResNet-50, EfficientNet-B0, and InceptionV3—with CutMix augmentation for better generalization. On the enhanced images, ResNet-50 achieved up to 99.0% classification accuracy with substantially fewer FP than when using the raw scans. Taken together, these results demonstrate that intelligent, optimized preprocessing can effectively mitigate intensity variations in deep learning-based lung nodule detection, bringing computer-aided diagnosis closer to routine clinical practice. Full article
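The clip limit is what distinguishes CLAHE from plain histogram equalization: per-bin counts are capped and the excess is redistributed, which bounds contrast amplification and keeps noise from being over-enhanced. The following is a minimal single-tile sketch of that step, not the authors' CMA-ES-tuned pipeline; the `clip_limit` value is an arbitrary illustration of the kind of parameter their optimizer would tune.

```python
import numpy as np

def clip_limited_equalize(tile, clip_limit=0.01, n_bins=256):
    """Histogram equalization with a CLAHE-style clip limit on one tile.

    Counts above the clip limit are redistributed uniformly across bins,
    bounding the slope of the mapping and suppressing noise amplification.
    """
    hist, _ = np.histogram(tile, bins=n_bins, range=(0, 256))
    limit = max(1, int(clip_limit * tile.size))
    excess = np.sum(np.maximum(hist - limit, 0))
    hist = np.minimum(hist, limit) + excess // n_bins
    cdf = np.cumsum(hist).astype(np.float64)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min() + 1e-12)
    lut = np.round(cdf * 255).astype(np.uint8)
    return lut[tile.astype(np.uint8)]

rng = np.random.default_rng(0)
tile = rng.integers(100, 140, size=(64, 64))  # low-contrast synthetic tile
out = clip_limited_equalize(tile)
print(tile.std(), out.std())  # equalization widens the intensity spread
```

Full CLAHE additionally tiles the image and bilinearly interpolates between per-tile mappings; an optimizer such as CMA-ES would then search over the clip limit and tile grid size against an image-quality objective.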
24 pages, 15151 KB  
Article
SG-YOLO: A Multispectral Small-Object Detector for UAV Imagery Based on YOLO
by Binjie Zhang, Lin Wang, Quanwei Yao, Keyang Li and Qinyan Tan
Remote Sens. 2026, 18(7), 1003; https://doi.org/10.3390/rs18071003 - 27 Mar 2026
Abstract
Object detection in unmanned aerial vehicle (UAV) imagery remains a crucial yet challenging task due to complex backgrounds, large scale variations, and the prevalence of small objects. Visible-spectrum images lack robustness under all-weather and all-illumination conditions; by contrast, multispectral sensing provides complementary cues (e.g., thermal signatures) that improve detection robustness. However, existing multispectral solutions often incur high computational costs and are therefore difficult to deploy on resource-constrained UAV platforms. To address these issues, SG-YOLO is proposed, a lightweight and efficient multispectral object detection framework that aims to balance accuracy and efficiency. First, a Spectral Gated Downsampling Stem (SGDS) is designed, in which grouped convolutions and a gating mechanism are employed at the early stage of the network to extract band-specific features, thereby maximizing spectral complementarity while minimizing redundancy. Second, a Spectral–Spatial Iterative Attention Fusion (SSIAF) module is introduced, in which spectral-wise (channel) attention and spatial-wise attention are iteratively coupled and cascaded in a multi-scale manner to jointly model cross-band dependencies and spatial saliency, thereby aggregating high-level semantic information while suppressing redundant spectral responses. Finally, a Spatial–Channel Synergistic Fusion (SCSF) module is designed to enhance multi-scale and cross-channel feature integration in the neck. Experiments on the MODA dataset show that SG-YOLO achieves 72.4% mAP50, outperforming the baseline by 3.2%. Moreover, compared with a range of mainstream one-stage detectors and multispectral detection methods, SG-YOLO delivers the best overall performance, providing an effective solution for UAV object detection while maintaining a favorable trade-off between model size and detection accuracy. Full article
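The abstract does not disclose SSIAF's internals, so the following is only a toy sketch of the general idea it names: iteratively coupling spectral-wise (channel) attention with spatial-wise attention over features concatenated from two bands. All shapes, weightings, and the number of iterations are invented for illustration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def iterative_attention(x, n_iters=2):
    """Toy iterative spectral (channel) + spatial attention coupling.

    x: feature map of shape (C, H, W) built by concatenating band features.
    Each iteration reweights channels by their global response, then
    reweights locations by their cross-band mean, so the two attention
    forms condition on each other's output.
    """
    for _ in range(n_iters):
        chan = sigmoid(x.mean(axis=(1, 2)))   # (C,) spectral attention
        x = x * chan[:, None, None]
        spat = sigmoid(x.mean(axis=0))        # (H, W) spatial attention
        x = x * spat[None, :, :]
    return x

rgb_feat = np.random.rand(8, 16, 16)   # visible-band features (toy)
ir_feat = np.random.rand(8, 16, 16)    # thermal-band features (toy)
fused = iterative_attention(np.concatenate([rgb_feat, ir_feat], axis=0))
print(fused.shape)
```

In a real detector these attention maps would be produced by small learned layers (e.g. 1×1 convolutions) rather than parameter-free means, and the fused features would feed the detection neck.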
20 pages, 13035 KB  
Article
Development of Wideband Circular Microstrip Patch Antenna for Use in Microwave Imaging for Brain Tumor Detection
by Hüseyin Özmen, Mengwei Wu and Mariana Dalarsson
Sensors 2026, 26(7), 2062; https://doi.org/10.3390/s26072062 - 25 Mar 2026
Abstract
This work presents the design of a compact, wideband circular microstrip patch antenna for microwave imaging-based brain tumor detection. The main contribution is the development of a compact antenna structure incorporating enhanced ground-plane slot modifications, which significantly improves impedance bandwidth while maintaining a small electrical size, making it highly suitable for medical imaging systems. In addition, the study integrates antenna design, safety evaluation, and microwave imaging analysis within a unified framework to assess tumor localization feasibility using a realistic head model in CST Microwave Studio. The proposed antenna is fabricated on an FR-4 substrate with dimensions of 37 × 54.5 × 1.6 mm3, corresponding to an electrical size of 0.176λ × 0.260λ × 0.0076λ at the lowest operating frequency of 1.43 GHz. Ground-plane slot enhancements are introduced to achieve wideband performance, resulting in an impedance bandwidth from 1.43 to 4 GHz and a fractional bandwidth of 94.7%. The antenna exhibits a maximum realized gain of 3.7 dB. To evaluate its suitability for medical applications, specific absorption rate (SAR) analysis is performed using a realistic human head model at multiple antenna positions and at 1.5, 2.1, 2.5, 3.3, and 3.9 GHz frequencies. The computed SAR values range from 0.109 to 1.56 W/kg averaged over 10 g of tissue, satisfying the IEEE C95.1 safety guideline limit of 2 W/kg. For tumor detection assessment, time-domain simulations are conducted in CST Microwave Studio using a monostatic radar configuration, where the antenna operates as both transmitter and receiver at twelve angular positions around the head with 30° increments. The collected scattered signals are processed using the Delay-and-Sum (DAS) beamforming algorithm to reconstruct dielectric contrast maps and localize the tumor. It should be noted that the tumor-imaging demonstrations presented in this work are based on numerical simulations, while experimental validation is limited to the characterization of the fabricated antenna. Nevertheless, the findings indicate that the proposed antenna is a promising candidate for noninvasive, low-cost microwave brain tumor imaging applications. Full article
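Delay-and-sum beamforming itself is standard: for each candidate pixel, every channel's recorded signal is sampled at that pixel's round-trip delay and the samples are summed, so energy focuses at the true scatterer location. The following is a minimal synthetic 2-D monostatic sketch with twelve antennas at 30° increments as in the paper; the geometry, Gaussian echo pulse, sampling rate, and tissue propagation speed are illustrative assumptions, not the authors' CST setup.

```python
import numpy as np

# Toy 2-D delay-and-sum (DAS) reconstruction, monostatic configuration.
c = 3e8 / np.sqrt(40.0)       # assumed propagation speed in head-like tissue
fs = 50e9                     # assumed sampling rate of the recorded signals
angles = np.deg2rad(np.arange(0, 360, 30))       # 12 positions, 30 deg apart
radius = 0.1                                     # antennas on a 10 cm circle
ant_xy = radius * np.stack([np.cos(angles), np.sin(angles)], axis=1)
target = np.array([0.02, 0.01])                  # true scatterer position (m)

# Synthesize monostatic echoes: a Gaussian pulse at each round-trip delay.
t = np.arange(2048) / fs
signals = np.zeros((len(ant_xy), t.size))
for i, a in enumerate(ant_xy):
    tau = 2 * np.linalg.norm(a - target) / c
    signals[i] = np.exp(-((t - tau) * 5e9) ** 2)

# DAS: for each pixel, sum every channel at that pixel's round-trip delay.
grid = np.linspace(-0.05, 0.05, 41)
image = np.zeros((grid.size, grid.size))
for ix, x in enumerate(grid):
    for iy, y in enumerate(grid):
        s = 0.0
        for i, a in enumerate(ant_xy):
            tau = 2 * np.linalg.norm(a - [x, y]) / c
            s += signals[i, min(int(round(tau * fs)), t.size - 1)]
        image[ix, iy] = s ** 2
peak = np.unravel_index(image.argmax(), image.shape)
print(grid[peak[0]], grid[peak[1]])  # peak should fall near the true target
```

In practice the raw channels would first be calibrated (e.g. background subtraction against a tumor-free reference) before summation, since the skin reflection otherwise dominates the tumor response.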