Search Results (1,171)

Search Parameters:
Keywords = Superresolution imaging

21 pages, 6790 KB  
Article
MGFormer: Super-Resolution Reconstruction of Retinal OCT Images Based on a Multi-Granularity Transformer
by Jingmin Luan, Zhe Jiao, Yutian Li, Yanru Si, Jian Liu, Yao Yu, Dongni Yang, Jia Sun, Zehao Wei and Zhenhe Ma
Photonics 2025, 12(9), 850; https://doi.org/10.3390/photonics12090850 - 25 Aug 2025
Abstract
Optical coherence tomography (OCT) acquisitions often reduce lateral sampling density to shorten scan time and suppress motion artifacts, but this strategy degrades the signal-to-noise ratio and obscures fine retinal microstructures. To recover these details without hardware modifications, we propose MGFormer, a lightweight Transformer for OCT super-resolution (SR) that integrates a multi-granularity attention mechanism with tensor distillation. A feature-enhancing convolution first sharpens edges; stacked multi-granularity attention blocks then fuse coarse-to-fine context, while a row-wise top-k operator retains the most informative tokens and preserves their positional order. We trained and evaluated MGFormer on B-scans from the Duke SD-OCT dataset at 2×, 4×, and 8× scaling factors. Relative to seven recent CNN- and Transformer-based SR models, MGFormer achieves the highest quantitative fidelity; at 4× it reaches 34.39 dB PSNR and 0.8399 SSIM, surpassing SwinIR by +0.52 dB and +0.026 SSIM, and reduces LPIPS by 21.4%. Compared with the same backbone without tensor distillation, FLOPs drop from 289G to 233G (−19.4%), and per-B-scan latency at 4× falls from 166.43 ms to 98.17 ms (−41.01%); the model size remains compact (105.68 MB). A blinded reader study shows higher scores for boundary sharpness (4.2 ± 0.3), pathology discernibility (4.1 ± 0.3), and diagnostic confidence (4.3 ± 0.2), exceeding SwinIR by 0.3–0.5 points. These results suggest that MGFormer can provide fast, high-fidelity OCT SR suitable for routine clinical workflows.
(This article belongs to the Section Biophotonics and Biomedical Optics)
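The row-wise top-k token selection described in this abstract is easy to picture in code. Below is a minimal sketch, assuming (B, N, C) token tensors and a precomputed per-token informativeness score; the paper's actual operator and scoring function are not reproduced here.

```python
import torch

def rowwise_topk_keep_order(tokens: torch.Tensor, scores: torch.Tensor, k: int) -> torch.Tensor:
    """Keep the k highest-scoring tokens in each sequence, in their original order.

    tokens: (B, N, C) token embeddings; scores: (B, N) per-token scores.
    Shapes and scoring are assumptions for illustration only.
    """
    idx = scores.topk(k, dim=1).indices        # (B, k): positions of the k best tokens
    idx, _ = idx.sort(dim=1)                   # re-sort indices so positional order survives
    idx = idx.unsqueeze(-1).expand(-1, -1, tokens.size(-1))
    return tokens.gather(1, idx)               # (B, k, C) pruned token sequence
```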

19 pages, 1225 KB  
Article
Lightweight Image Super-Resolution Reconstruction Network Based on Multi-Order Information Optimization
by Shengxuan Gao, Long Li, Wen Cui, He Jiang and Hongwei Ge
Sensors 2025, 25(17), 5275; https://doi.org/10.3390/s25175275 - 25 Aug 2025
Abstract
Traditional information distillation networks using single-scale convolution and simple feature fusion often result in insufficient information extraction and ineffective restoration of high-frequency details. To address this problem, we propose a lightweight image super-resolution reconstruction network based on multi-order information optimization. The core of this network lies in the enhancement and refinement of high-frequency information. Our method operates in two main stages to fully exploit the high-frequency features in images while eliminating redundant information, thereby enhancing the network’s detail restoration capability. In the high-frequency information enhancement stage, we design a self-calibration high-frequency information enhancement block. This block generates calibration weights through self-calibration branches to modulate the response strength of each pixel, selectively enhancing critical high-frequency information. Additionally, we combine an auxiliary branch and a chunked space optimization strategy to extract local details and adaptively reinforce high-frequency features. In the high-frequency information refinement stage, we propose a multi-scale high-frequency information refinement block. First, multi-scale information is captured through multiplicity sampling to enrich the feature hierarchy. Second, the high-frequency information is further refined using a multi-branch structure incorporating wavelet convolution and band convolution, enabling the extraction of diverse detailed features. Experimental results demonstrate that our network achieves an optimal balance between complexity and performance, outperforming popular lightweight networks in both quantitative metrics and visual quality.
(This article belongs to the Section Sensing and Imaging)
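The self-calibration idea in this abstract, a branch that predicts per-pixel weights to modulate response strength, can be sketched as a small gating module. The layer sizes and the single-convolution calibration branch below are assumptions, not the paper's design.

```python
import torch
import torch.nn as nn

class SelfCalibrationGate(nn.Module):
    """Per-pixel gating sketch: a calibration branch predicts weights in (0, 1)
    that modulate the response strength of each pixel. Illustrative only."""
    def __init__(self, channels: int):
        super().__init__()
        self.calib = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.Sigmoid(),                      # calibration weights per pixel/channel
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.calib(x)               # selectively amplify informative responses
```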

18 pages, 43842 KB  
Article
DPO-ESRGAN: Perceptually Enhanced Super-Resolution Using Direct Preference Optimization
by Wonwoo Yun and Hanhoon Park
Electronics 2025, 14(17), 3357; https://doi.org/10.3390/electronics14173357 - 23 Aug 2025
Abstract
Super-resolution (SR) is a long-standing task in computer vision that aims to improve the quality and resolution of an image. ESRGAN is a representative generative adversarial network specialized for producing perceptually convincing SR images. However, it often fails to recover local details and still produces blurry or unnatural visual artifacts, yielding SR images that human viewers do not prefer. To address this problem, we propose adopting Direct Preference Optimization (DPO), originally devised to fine-tune large language models from human preferences. To this end, we develop a method for applying DPO to ESRGAN and add a DPO loss for training the ESRGAN generator. Through ×4 SR experiments on benchmark datasets, we demonstrate that the proposed method produces SR images with significantly higher perceptual quality and human preference than ESRGAN and other ESRGAN variants that modify its loss or network structure. Specifically, compared to ESRGAN, the proposed method achieved, on average, 0.32 lower PieAPP values, 0.79 lower NIQE values, and 0.05 higher PSNR values on the BSD100 dataset, as well as 0.32 lower PieAPP values, 0.32 lower NIQE values, and 0.17 higher PSNR values on the Set14 dataset.
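For readers unfamiliar with DPO, the objective the authors adapt can be sketched generically: rank the preferred output above the rejected one, relative to a frozen reference model. How ESRGAN outputs are mapped to the scalar scores below is the paper's contribution and is not shown here.

```python
import torch
import torch.nn.functional as F

def dpo_loss(score_preferred: torch.Tensor,
             score_rejected: torch.Tensor,
             ref_preferred: torch.Tensor,
             ref_rejected: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """Generic DPO objective: increase the margin by which the trained model
    prefers the chosen sample over the rejected one, measured against a frozen
    reference model. The mapping from SR images to scores is an assumption."""
    margin = (score_preferred - ref_preferred) - (score_rejected - ref_rejected)
    return -F.logsigmoid(beta * margin).mean()
```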

14 pages, 7081 KB  
Article
SupGAN: A General Super-Resolution GAN-Promoting Training Method
by Tao Wu, Shuo Xiong, Qiuhang Chen, Huaizheng Liu, Weijun Cao and Haoran Tuo
Appl. Sci. 2025, 15(17), 9231; https://doi.org/10.3390/app15179231 - 22 Aug 2025
Abstract
Image super-resolution (SR) methods based on Generative Adversarial Networks (GANs) have achieved impressive results in terms of visual performance. However, the weights of the loss functions in these methods are usually set to fixed values manually, which cannot fully adapt to different datasets and tasks, and may reduce the perceptual quality of the SR images. To address this issue and further improve visual quality, we propose a perception-driven SupGAN, which improves the generator and loss function of GAN-based image super-resolution models. The generator adopts multi-scale feature extraction and fusion to restore SR images with diverse and fine textures. We design a network-training method based on the proportion of high-frequency information in images (BHFTM), which uses the proportion of high-frequency information obtained through the Canny operator to set the weights of the loss function. In addition, we employ the four-patch method to better simulate the degradation of complex real-world scenarios. We extensively test our method and compare it with recent SR methods (BSRGAN, Real-ESRGAN, RealSR, SwinIR, LDL, etc.) on different types of datasets (OST300, 2020track1, RealWorld38, BSDS100, etc.) with a scaling factor of ×4. The results show that the NIQE metric improves, and that SupGAN generates more natural and fine textures while suppressing unpleasant artifacts.
(This article belongs to the Special Issue Collaborative Learning and Optimization Theory and Its Applications)
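The BHFTM weighting signal, the proportion of high-frequency information measured with the Canny operator, can be computed in a few lines. The thresholds below are common defaults, not the paper's settings, and how the ratio maps to loss weights is left to the training recipe.

```python
import cv2
import numpy as np

def high_freq_ratio(img_bgr: np.ndarray, lo: int = 100, hi: int = 200) -> float:
    """Fraction of edge pixels found by the Canny operator, usable as a
    loss-weighting signal. Thresholds are illustrative assumptions."""
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, lo, hi)            # binary edge map (uint8)
    return float((edges > 0).mean())           # proportion of high-frequency pixels
```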

16 pages, 1072 KB  
Article
ωk MUSIC Algorithm for Subsurface Target Localization
by Antonio Cuccaro, Angela Dell’Aversano, Maria Antonia Maisto, Rosa Scapaticci, Adriana Brancaccio and Raffaele Solimene
Remote Sens. 2025, 17(16), 2838; https://doi.org/10.3390/rs17162838 - 15 Aug 2025
Abstract
This paper addresses the problem of subsurface target localization from single-snapshot multimonostatic and multifrequency radar measurements. In this context, the use of subspace projection methods—known for their super-resolution capabilities—is hindered by the rank deficiency of the data correlation matrix and the lack of a Vandermonde structure, especially in near-field configurations and layered media. To overcome this issue, we propose a novel pre-processing strategy that transforms the measured data into the ωk domain, thereby restoring the structural conditions required for subspace-based detection. The resulting algorithm, referred to as ωk MUSIC, enables the application of subspace projection techniques in scenarios where traditional smoothing procedures are not viable. Numerical experiments in a 2-D scalar configuration demonstrate the effectiveness of the proposed method in terms of resolution and robustness under various noise conditions. A Monte Carlo simulation study is also included to provide a quantitative assessment of localization accuracy. Comparisons with conventional migration imaging highlight the superior performance of the proposed approach.
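The subspace-projection core of MUSIC, which the ωk pre-processing makes applicable here, is compact. The sketch below shows the classic pseudospectrum; the ωk-domain transform that conditions the correlation matrix for single-snapshot data is the paper's step and is not reproduced.

```python
import numpy as np

def music_spectrum(R: np.ndarray, steering: np.ndarray, n_targets: int) -> np.ndarray:
    """Classic MUSIC pseudospectrum: project candidate steering vectors onto
    the noise subspace of the correlation matrix R.

    R: (M, M) data correlation matrix; steering: (M, P) candidate steering
    vectors over a grid of P trial locations. Peaks mark target locations.
    """
    eigvals, eigvecs = np.linalg.eigh(R)            # eigenvalues in ascending order
    En = eigvecs[:, : R.shape[0] - n_targets]       # noise subspace (smallest eigenvalues)
    proj = En.conj().T @ steering                   # (M - n_targets, P) projections
    return 1.0 / np.sum(np.abs(proj) ** 2, axis=0)  # large where steering ⟂ noise subspace
```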

23 pages, 5458 KB  
Article
Global Prior-Guided Distortion Representation Learning Network for Remote Sensing Image Blind Super-Resolution
by Guanwen Li, Ting Sun, Shijie Yu and Siyao Wu
Remote Sens. 2025, 17(16), 2830; https://doi.org/10.3390/rs17162830 - 14 Aug 2025
Abstract
Most existing deep learning-based super-resolution (SR) methods for remote sensing images rely on predefined degradation assumptions (e.g., bicubic downsampling). However, when real-world degradations deviate from these assumptions, their performance deteriorates significantly. Moreover, explicit degradation estimation approaches based on iterative schemes inevitably lead to accumulated estimation errors and time-consuming processing. In this paper, instead of explicitly estimating degradation types, we first introduce an MSCN_G coefficient to capture global prior information corresponding to different distortions. Distortion-enhanced representations are then implicitly estimated through contrastive learning and embedded into a super-resolution network equipped with multiple distortion decoders (D-Decoder). Furthermore, we propose a distortion-related channel segmentation (DCS) strategy that reduces the network’s parameters and computation (FLOPs). We refer to this Global Prior-guided Distortion-enhanced Representation Learning Network as GDRNet. Experiments on both synthetic and real-world remote sensing images demonstrate that GDRNet outperforms state-of-the-art blind SR methods for remote sensing images in overall performance. Under anisotropic Gaussian blurring without added noise, with a kernel width of 1.2 and an upscaling factor of 4, super-resolution reconstruction on the NWPU-RESISC45 dataset achieves a PSNR of 28.98 dB and an SSIM of 0.7656.
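MSCN (mean-subtracted contrast-normalized) coefficients, which the MSCN_G prior builds on, are a standard local-normalization statistic from no-reference quality assessment. A sketch follows; the global MSCN_G variant is the paper's extension and is not shown.

```python
import cv2
import numpy as np

def mscn(img: np.ndarray, ksize: int = 7, sigma: float = 7 / 6, c: float = 1.0) -> np.ndarray:
    """Standard MSCN coefficients: normalize each pixel by a Gaussian-weighted
    local mean and standard deviation. Window parameters are common defaults."""
    img = img.astype(np.float64)
    mu = cv2.GaussianBlur(img, (ksize, ksize), sigma)                 # local mean
    var = cv2.GaussianBlur(img * img, (ksize, ksize), sigma) - mu * mu
    sigma_map = np.sqrt(np.abs(var))                                  # local std
    return (img - mu) / (sigma_map + c)                               # normalized coefficients
```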

26 pages, 5964 KB  
Article
Super-Resolution Reconstruction of Part Images Using Adaptive Multi-Scale Object Tracking
by Yaohe Li, Long Jin, Yindi Bai, Zhiwen Song and Dongyuan Ge
Processes 2025, 13(8), 2563; https://doi.org/10.3390/pr13082563 - 14 Aug 2025
Abstract
Computer vision-based part surface inspection is widely used for quality evaluation. However, challenges such as low image quality, caused by factors like inadequate acquisition equipment, camera vibrations, and environmental conditions, often lead to reduced detection accuracy. Although super-resolution reconstruction can enhance image quality, existing methods face issues such as limited accuracy, information distortion, and high computational cost. To overcome these challenges, we propose a novel super-resolution reconstruction method for part images that incorporates adaptive multi-scale object tracking. Our approach first adaptively segments the input sequence of part images into blocks of varying scales, improving both reconstruction accuracy and computational efficiency. Optical flow is then applied to estimate the motion parameters between sequence images, followed by the construction of a feature tracking and sampling model to extract detailed features from all images, addressing information distortion caused by pixel misalignment. Finally, a non-linear reconstruction algorithm is employed to generate the high-resolution target image. Experimental results demonstrate that our method achieves superior performance in terms of both quantitative metrics and visual quality, outperforming existing methods. This contributes to a significant improvement in subsequent part detection accuracy and production efficiency.
(This article belongs to the Section Manufacturing Processes and Systems)
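The motion-estimation step between sequence images can be illustrated with dense optical flow. The sketch below uses OpenCV's Farneback method with generic parameters; the paper's feature tracking and sampling model built on top of such estimates is not reproduced.

```python
import cv2
import numpy as np

def motion_between(prev_gray: np.ndarray, next_gray: np.ndarray) -> np.ndarray:
    """Dense optical flow between two consecutive grayscale (uint8) frames.
    Parameter values are typical defaults, not the paper's configuration."""
    return cv2.calcOpticalFlowFarneback(
        prev_gray, next_gray, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0,
    )   # (H, W, 2) per-pixel displacement field
```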

20 pages, 2798 KB  
Article
LSTMConvSR: Joint Long–Short-Range Modeling via LSTM-First–CNN-Next Architecture for Remote Sensing Image Super-Resolution
by Qiwei Zhu, Guojing Zhang, Xiaoying Wang and Jianqiang Huang
Remote Sens. 2025, 17(15), 2745; https://doi.org/10.3390/rs17152745 - 7 Aug 2025
Abstract
The inability of existing super-resolution methods to jointly model short-range and long-range spatial dependencies in remote sensing imagery limits reconstruction efficacy. To address this, we propose LSTMConvSR, a novel framework inspired by top-down neural attention mechanisms. Our approach pioneers an LSTM-first–CNN-next architecture. First, an LSTM-based global modeling stage efficiently captures long-range dependencies via downsampling and spatial attention, achieving 80.3% lower FLOPs and 11× faster speed. Second, a CNN-based local refinement stage, guided by the LSTM’s attention maps, enhances details in critical regions. Third, a top-down fusion stage dynamically integrates global context and local features to generate the output. Extensive experiments on Potsdam, UAVid, and RSSCN7 benchmarks demonstrate state-of-the-art performance, achieving 33.94 dB PSNR on Potsdam with 2.4× faster inference than MambaIRv2.
(This article belongs to the Special Issue Neural Networks and Deep Learning for Satellite Image Processing)
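The LSTM-first–CNN-next idea, cheap global modeling on a downsampled map followed by local CNN refinement, can be sketched in a few lines. All sizes, the unidirectional LSTM, and the fusion-by-addition below are assumptions, not the published architecture.

```python
import torch
import torch.nn as nn

class LSTMFirstCNNNext(nn.Module):
    """Minimal sketch: an LSTM pass over a downsampled feature map captures
    long-range context cheaply, then a CNN refines local detail."""
    def __init__(self, c: int = 64, down: int = 4):
        super().__init__()
        self.down = down
        self.lstm = nn.LSTM(c, c, batch_first=True)
        self.refine = nn.Sequential(nn.Conv2d(c, c, 3, padding=1), nn.ReLU(),
                                    nn.Conv2d(c, c, 3, padding=1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, C, H, W)
        B, C, H, W = x.shape
        g = nn.functional.avg_pool2d(x, self.down)        # cheap global view
        seq = g.flatten(2).transpose(1, 2)                # (B, h*w, C) token sequence
        ctx, _ = self.lstm(seq)                           # long-range modeling
        ctx = ctx.transpose(1, 2).reshape(B, C, H // self.down, W // self.down)
        ctx = nn.functional.interpolate(ctx, size=(H, W), mode="nearest")
        return self.refine(x + ctx)                       # local refinement with global context
```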

42 pages, 6539 KB  
Article
Multimodal Sparse Reconstruction and Deep Generative Networks: A Paradigm Shift in MR-PET Neuroimaging
by Krzysztof Malczewski
Appl. Sci. 2025, 15(15), 8744; https://doi.org/10.3390/app15158744 - 7 Aug 2025
Abstract
A novel multimodal super-resolution framework is introduced, combining GAN-based synthesis, perceptual constraints, and joint low-rank sparsity regularization to noticeably enhance MR-PET image quality. The architecture integrates modality-specific ResNet encoders, a transformer-based attention fusion block, and a multi-scale PatchGAN discriminator. Training is guided by a hybrid loss function incorporating adversarial, pixel-wise, perceptual (VGG19), and structured Hankel constraints. The proposed method outperforms all baselines in PSNR, SSIM, LPIPS, and diagnostic confidence metrics. Clinical PET metrics, such as SUV recovery and lesion detectability, show substantial improvement. A thorough analysis of computational complexity, dataset composition, training reproducibility, and motion compensation is provided. These findings are visually supported by processed scan panels and benchmark tables. This framework advances reproducible and interpretable hybrid neuroimaging with strong clinical and technical validation.
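A hybrid loss of the kind described, adversarial plus pixel-wise plus perceptual (VGG19) terms, can be sketched as a weighted sum. The weights below are placeholders, ImageNet input normalization is omitted for brevity, and the structured Hankel constraint is not reproduced.

```python
import torch
import torch.nn as nn
from torchvision.models import vgg19, VGG19_Weights

class HybridSRLoss(nn.Module):
    """Weighted hybrid loss sketch: adversarial + pixel-wise + perceptual terms.
    Weights and the VGG feature depth are illustrative assumptions."""
    def __init__(self, w_adv: float = 1e-3, w_pix: float = 1.0, w_per: float = 0.1):
        super().__init__()
        self.vgg = vgg19(weights=VGG19_Weights.DEFAULT).features[:16].eval()
        for p in self.vgg.parameters():
            p.requires_grad_(False)            # frozen perceptual feature extractor
        self.w = (w_adv, w_pix, w_per)

    def forward(self, fake, real, d_fake_logits):
        w_adv, w_pix, w_per = self.w
        adv = nn.functional.softplus(-d_fake_logits).mean()   # non-saturating GAN loss
        pix = nn.functional.l1_loss(fake, real)               # pixel-wise fidelity
        per = nn.functional.l1_loss(self.vgg(fake), self.vgg(real))  # perceptual term
        return w_adv * adv + w_pix * pix + w_per * per
```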

19 pages, 8091 KB  
Article
Leveraging Synthetic Degradation for Effective Training of Super-Resolution Models in Dermatological Images
by Francesco Branciforti, Kristen M. Meiburger, Elisa Zavattaro, Paola Savoia and Massimo Salvi
Electronics 2025, 14(15), 3138; https://doi.org/10.3390/electronics14153138 - 6 Aug 2025
Abstract
Teledermatology relies on digital transfer of dermatological images, but compression and resolution differences compromise diagnostic quality. Image enhancement techniques are crucial to compensate for these differences and improve quality for both clinical assessment and AI-based analysis. We developed a customized image degradation pipeline simulating common artifacts in dermatological images, including blur, noise, downsampling, and compression. This synthetic degradation approach enabled effective training of DermaSR-GAN, a super-resolution generative adversarial network tailored for dermoscopic images. The model was trained on 30,000 high-quality ISIC images and evaluated on three independent datasets (ISIC Test, Novara Dermoscopic, PH2) using structural similarity and no-reference quality metrics. DermaSR-GAN achieved statistically significant improvements in quality scores across all datasets, with up to 23% enhancement in perceptual quality metrics (MANIQA). The model preserved diagnostic details while doubling resolution and surpassed existing approaches, including traditional interpolation methods and state-of-the-art deep learning techniques. Integration with downstream classification systems demonstrated up to 14.6% improvement in class-specific accuracy for keratosis-like lesions compared to original images. Synthetic degradation represents a promising approach for training effective super-resolution models in medical imaging, with significant potential for enhancing teledermatology applications and computer-aided diagnosis systems.
(This article belongs to the Section Computer Science & Engineering)
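A blur, noise, downsample, JPEG-compression degradation pipeline of the kind described here is straightforward to assemble. The parameter values and ordering below are assumptions, not the published configuration.

```python
import cv2
import numpy as np

def degrade(img_bgr: np.ndarray, scale: int = 2, jpeg_q: int = 40,
            noise_sigma: float = 5.0) -> np.ndarray:
    """Synthetic degradation sketch: blur -> noise -> downsample -> JPEG.
    All parameters are illustrative, not the paper's settings."""
    h, w = img_bgr.shape[:2]
    out = cv2.GaussianBlur(img_bgr, (5, 5), 1.0)                       # optical blur
    out = out.astype(np.float32) + np.random.normal(0, noise_sigma, out.shape)
    out = np.clip(out, 0, 255).astype(np.uint8)                        # sensor noise
    out = cv2.resize(out, (w // scale, h // scale), interpolation=cv2.INTER_AREA)
    ok, buf = cv2.imencode(".jpg", out, [cv2.IMWRITE_JPEG_QUALITY, jpeg_q])
    return cv2.imdecode(buf, cv2.IMREAD_COLOR)                         # compression artifacts
```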

11 pages, 60623 KB  
Article
Super Resolution for Mangrove UAV Remote Sensing Images
by Qin Qin, Wenlong Dai and Xin Wang
Symmetry 2025, 17(8), 1250; https://doi.org/10.3390/sym17081250 - 6 Aug 2025
Abstract
Mangroves play a crucial role in ecosystems, and the accurate classification and real-time monitoring of mangrove species are essential for their protection and restoration. To improve the segmentation performance of mangrove UAV remote sensing images, this study performs species segmentation after super-resolution (SR) reconstruction of the images. To this end, we propose SwinNET, an SR reconstruction network. We design a convolutional enhanced channel attention (CEA) module within the network to enhance feature reconstruction through channel attention. Additionally, the Neighborhood Attention Transformer (NAT) is introduced to help the model focus on local neighborhood features, aiming to improve the reconstruction of leaf details. These two attention mechanisms are symmetrically integrated within the network to jointly capture complementary information from the spatial and channel dimensions. The experimental results demonstrate that SwinNET not only achieves superior performance in SR tasks but also significantly enhances the segmentation accuracy of mangrove species.
(This article belongs to the Section Computer)
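Channel attention of the kind CEA builds on can be sketched in squeeze-and-excitation form. The reduction ratio and layer layout below are generic, not CEA's exact internals.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation-style channel attention: pool to per-channel
    statistics, predict channel weights, reweight the feature map."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                         # squeeze: global statistics
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(),
            nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.fc(x)                                # excite: reweight channels
```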

16 pages, 2750 KB  
Article
Combining Object Detection, Super-Resolution GANs and Transformers to Facilitate Tick Identification Workflow from Crowdsourced Images on the eTick Platform
by Étienne Clabaut, Jérémie Bouffard and Jade Savage
Insects 2025, 16(8), 813; https://doi.org/10.3390/insects16080813 - 6 Aug 2025
Abstract
Ongoing changes in the distribution and abundance of several tick species of medical relevance in Canada have prompted the development of the eTick platform—an image-based crowd-sourcing public surveillance tool for Canada enabling rapid tick species identification by trained personnel, and public health guidance based on tick species and province of residence of the submitter. Considering that more than 100,000 images from over 73,500 identified records representing 25 tick species have been submitted to eTick since the public launch in 2018, a partial automation of the image processing workflow could save substantial human resources, especially as submission numbers have been steadily increasing since 2021. In this study, we evaluate an end-to-end artificial intelligence (AI) pipeline to support tick identification from eTick user-submitted images, characterized by heterogeneous quality and uncontrolled acquisition conditions. Our framework integrates (i) tick localization using a fine-tuned YOLOv7 object detection model, (ii) resolution enhancement of cropped images via super-resolution Generative Adversarial Networks (RealESRGAN and SwinIR), and (iii) image classification using deep convolutional (ResNet-50) and transformer-based (ViT) architectures across three datasets (12, 6, and 3 classes) of decreasing granularities in terms of taxonomic resolution, tick life stage, and specimen viewing angle. ViT consistently outperformed ResNet-50, especially in complex classification settings. The configuration yielding the best performance—relying on object detection without incorporating super-resolution—achieved a macro-averaged F1-score exceeding 86% in the 3-class model (Dermacentor sp., other species, bad images), with minimal critical misclassifications (0.7% of “other species” misclassified as Dermacentor). Given that Dermacentor ticks represent more than 60% of tick volume submitted on the eTick platform, the integration of a low granularity model in the processing workflow could save significant time while maintaining very high standards of identification accuracy. Our findings highlight the potential of combining modern AI methods to facilitate efficient and accurate tick image processing in community science platforms, while emphasizing the need to adapt model complexity and class resolution to task-specific constraints.
(This article belongs to the Section Medical and Livestock Entomology)

17 pages, 1571 KB  
Review
Super-Resolution Microscopy in the Structural Analysis and Assembly Dynamics of HIV
by Aiden Jurcenko, Olesia Gololobova and Kenneth W. Witwer
Appl. Nano 2025, 6(3), 13; https://doi.org/10.3390/applnano6030013 - 31 Jul 2025
Abstract
Super-resolution microscopy (SRM) has revolutionized our understanding of subcellular structures, including cell organelles and viruses. For human immunodeficiency virus (HIV), SRM has significantly advanced knowledge of viral structural biology and assembly dynamics. This review analyzes how SRM techniques (particularly PALM, STORM, STED, and SIM) have been applied over the past decade to study HIV structural components and assembly. By categorizing and comparing studies based on SRM methods, HIV components, and labeling strategies, we assess the strengths and limitations of each approach. Our analysis shows that PALM is most commonly used for live-cell imaging of HIV Gag, while STED is primarily used to study the viral envelope (Env). STORM and SIM have been applied to visualize various components, including Env, capsid, and matrix. Antibody labeling is prevalent in PALM and STORM studies, targeting Env and capsid, whereas fluorescent protein labeling is mainly associated with PALM and focused on Gag. A recent emphasis on Gag and Env points to deeper investigation into HIV assembly and viral membrane dynamics. Insights from SRM studies of HIV not only enhance virological understanding but also inform future research in therapeutic strategies and delivery systems, including extracellular vesicles.
(This article belongs to the Collection Review Papers for Applied Nano Science and Technology)

19 pages, 7161 KB  
Article
Dynamic Snake Convolution Neural Network for Enhanced Image Super-Resolution
by Weiqiang Xin, Ziang Wu, Qi Zhu, Tingting Bi, Bing Li and Chunwei Tian
Mathematics 2025, 13(15), 2457; https://doi.org/10.3390/math13152457 - 30 Jul 2025
Abstract
Image super-resolution (SR) is essential for enhancing image quality in critical applications such as medical imaging and satellite remote sensing. However, existing methods are often limited in their ability to effectively process and integrate multi-scale information, from fine textures to global structures. To address these limitations, this paper proposes DSCNN, a dynamic snake convolution neural network for enhanced image super-resolution. DSCNN optimizes feature extraction and network architecture to enhance both performance and efficiency. To improve feature extraction, its core innovation is a feature extraction and enhancement module with dynamic snake convolution, which dynamically adjusts the convolution kernel’s shape and position to better fit the image’s geometric structures. To optimize the network’s structure, DSCNN employs an enhanced residual network framework that uses parallel convolutional layers and a global feature fusion mechanism to further strengthen feature extraction capability and gradient flow efficiency. Additionally, the network incorporates a SwishReLU-based activation function and a multi-scale convolutional concatenation structure; this multi-scale design effectively captures both local details and global image structure, enhancing SR reconstruction. The proposed DSCNN outperforms existing methods in both objective metrics and visual perception (e.g., it achieves the best PSNR and SSIM results on the Set5 ×4 dataset).
(This article belongs to the Special Issue Structural Networks for Image Application)
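The multi-scale convolutional concatenation structure mentioned here can be sketched as parallel kernels fused by a 1×1 convolution. The kernel sizes are assumptions, and the dynamic snake convolution itself is not reproduced.

```python
import torch
import torch.nn as nn

class MultiScaleConcat(nn.Module):
    """Parallel convolutions with different receptive fields, fused by
    concatenation and a 1x1 convolution. Kernel choices are illustrative."""
    def __init__(self, c_in: int, c_out: int):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(c_in, c_out, k, padding=k // 2) for k in (3, 5, 7)
        )
        self.fuse = nn.Conv2d(3 * c_out, c_out, 1)       # 1x1 fusion across scales

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))
```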

13 pages, 2827 KB  
Article
Ultrasonic Nondestructive Testing Image Enhancement Model Based on Super-Resolution Imaging
by Jinxuan Zhu, Guoyou Wang, Kang Luo and Xinfang Zhang
Appl. Sci. 2025, 15(15), 8339; https://doi.org/10.3390/app15158339 - 26 Jul 2025
Abstract
Ultrasonic nondestructive testing has been widely used across industries because it is simple to operate and harmless to the inspected object. However, due to the mechanism of ultrasonic image generation, the resulting images often have low resolution, which strongly affects the final detection results. Improving the resolution of ultrasonic images is therefore key to improving the accuracy of defect detection. This paper proposes an ultrasonic super-resolution model based on up- and down-sampling layers and multi-layer residual networks, combined with the Charbonnier loss function. The degradation features of the image are learned through the up- and down-sampling layers, and the intrinsic features are learned through the multi-layer residual networks, so that the feature information of the image is fully exploited. The Charbonnier loss function accelerates model convergence. Experimental results show that the proposed model outperforms common baseline models.
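The Charbonnier loss the model trains with is essentially a one-liner: a smooth, outlier-tolerant variant of L1. The eps value below is a common choice, not necessarily the paper's.

```python
import torch

def charbonnier_loss(pred: torch.Tensor, target: torch.Tensor,
                     eps: float = 1e-3) -> torch.Tensor:
    """Charbonnier loss: differentiable everywhere, behaves like L1 for large
    errors and like L2 near zero, which stabilizes SR training."""
    return torch.sqrt((pred - target) ** 2 + eps ** 2).mean()
```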
