Search Results (596)

Search Parameters:
Keywords = super resolution reconstruction

12 pages, 1436 KB  
Article
Enhancing Lesion Detection in Rat CT Images: A Deep Learning-Based Super-Resolution Study
by Sungwon Ham, Sang Hoon Jeong, Hong Lee, Yoon Jeong Nam, Hyejin Lee, Jin Young Choi, Yu-Seon Lee, Yoon Hee Park, Su A Park, Wooil Kim, Hangseok Choi, Haewon Kim, Ju-Han Lee and Cherry Kim
Biomedicines 2025, 13(10), 2421; https://doi.org/10.3390/biomedicines13102421 - 3 Oct 2025
Abstract
Background/Objectives: Preclinical chest computed tomography (CT) imaging in small animals is often limited by low resolution due to scan time and dose constraints, which hinders accurate detection of subtle lesions. Traditional super-resolution (SR) metrics, such as peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM), may not adequately reflect clinical interpretability. We aimed to evaluate whether deep learning-based SR models could enhance image quality and lesion detectability in rat chest CT, balancing quantitative metrics with radiologist assessment. Methods: We retrospectively analyzed 222 chest CT scans acquired from polyhexamethylene guanidine phosphate (PHMG-p) exposure studies in Sprague Dawley rats. Three SR models were implemented and compared: single-image SR (SinSR), segmentation-guided SinSR with lung cropping (SinSR3), and omni-super-resolution (OmniSR). Models were trained on rat CT data and evaluated using PSNR and SSIM. Two board-certified thoracic radiologists independently performed blinded evaluations of lesion margin clarity, nodule detectability, image noise, artifacts, and overall image quality. Results: SinSR1 achieved the highest PSNR (33.64 ± 1.30 dB), while SinSR3 showed the highest SSIM (0.72 ± 0.08). Despite lower PSNR (29.21 ± 1.46 dB), OmniSR received the highest radiologist ratings for lesion margin clarity, nodule detectability, and overall image quality (mean score 4.32 ± 0.41, κ = 0.74). Reader assessments diverged from PSNR and SSIM, highlighting the limited correlation between conventional metrics and clinical interpretability. Conclusions: Deep learning-based SR improved visualization of rat chest CT images, with OmniSR providing the most clinically interpretable results despite modest numerical scores. These findings underscore the need for reader-centered evaluation when applying SR techniques to preclinical imaging. Full article
(This article belongs to the Section Molecular and Translational Medicine)
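The PSNR and SSIM figures quoted in this abstract are the standard full-reference metrics. As a minimal numpy sketch of how they are computed (using a simplified global SSIM rather than the windowed form used in practice, and synthetic images rather than the study's CT data):

```python
import numpy as np

def psnr(ref, img, max_val=255.0):
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean((ref.astype(np.float64) - img.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)

def ssim_global(ref, img, max_val=255.0):
    """Simplified single-window SSIM; the standard metric averages
    local 11x11 Gaussian windows, so this global form is only a sketch."""
    x, y = ref.astype(np.float64), img.astype(np.float64)
    c1, c2 = (0.01 * max_val) ** 2, (0.03 * max_val) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

rng = np.random.default_rng(0)
ref = rng.integers(0, 256, (64, 64)).astype(np.float64)     # synthetic "ground truth"
noisy = np.clip(ref + rng.normal(0, 5, ref.shape), 0, 255)  # degraded version
print(f"PSNR: {psnr(ref, noisy):.2f} dB, SSIM: {ssim_global(ref, noisy):.4f}")
```

As the abstract notes, such pixel-level scores can diverge from reader-rated image quality, which is precisely the gap the blinded radiologist evaluation addresses.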

22 pages, 4596 KB  
Article
Image Super-Resolution Reconstruction Network Based on Structural Reparameterization and Feature Reuse
by Tianyu Li, Xiaoshi Jin, Qiang Liu and Xi Liu
Sensors 2025, 25(19), 5989; https://doi.org/10.3390/s25195989 - 27 Sep 2025
Abstract
In the task of integrated circuit micrograph acquisition, image super-resolution reconstruction technology can significantly enhance acquisition efficiency. With the advancement of deep learning techniques, the performance of image super-resolution reconstruction networks has improved markedly, but their demand for inference device memory has also increased substantially, greatly limiting their practical application in engineering and deployment on resource-constrained devices. Against this backdrop, we designed image super-resolution reconstruction networks based on feature reuse and structural reparameterization techniques, ensuring that the networks maintain reconstruction performance while being more suitable for deployment in resource-limited environments. Traditional image super-resolution reconstruction networks often redundantly compute similar features through standard convolution operations, leading to significant computational resource wastage. By employing low-cost operations, we replaced some redundant features with those generated from the inherent characteristics of the image and designed a reparameterization layer using structural reparameterization techniques. Building upon local feature fusion and local residual learning, we developed two efficient deep feature extraction modules, which together form the image super-resolution reconstruction networks. Compared to performance-oriented image super-resolution reconstruction networks (e.g., DRCT), our network reduces algorithm parameters by 84.5% and shortens inference time by 49.8%. In comparison with lightweight image reconstruction algorithms, our method improves the mean structural similarity index by 3.24%. Experimental results demonstrate that the image super-resolution reconstruction network based on feature reuse and structural reparameterization achieves an excellent balance between network performance and complexity. Full article
(This article belongs to the Section Sensing and Imaging)
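Structural reparameterization, as used in this paper's reparameterization layer, folds parallel training-time branches into a single inference-time convolution. A minimal single-channel numpy sketch of the idea (RepVGG-style merging of a 3x3 branch, a 1x1 branch, and an identity branch; not the authors' exact layer):

```python
import numpy as np

def conv2d_same(x, k):
    """2-D cross-correlation with zero padding (stride 1, 'same' output)."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x, dtype=np.float64)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k)
    return out

rng = np.random.default_rng(1)
x = rng.normal(size=(8, 8))
k3 = rng.normal(size=(3, 3))   # 3x3 branch
k1 = rng.normal(size=(1, 1))   # 1x1 branch

# Training-time multi-branch output: 3x3 conv + 1x1 conv + identity.
y_train = conv2d_same(x, k3) + conv2d_same(x, k1) + x

# Reparameterize: embed the 1x1 kernel and the identity mapping as
# centered 3x3 kernels, then fold all branches into a single kernel.
k1_pad = np.zeros((3, 3)); k1_pad[1, 1] = k1[0, 0]
k_id = np.zeros((3, 3)); k_id[1, 1] = 1.0
k_merged = k3 + k1_pad + k_id

y_infer = conv2d_same(x, k_merged)
print(np.allclose(y_train, y_infer))  # the single merged conv reproduces all branches
```

Because convolution is linear, the merge is exact: the deployed network pays for one convolution per layer while keeping the representational benefit of the multi-branch training topology.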

24 pages, 3908 KB  
Article
Transform Domain Based GAN with Deep Multi-Scale Features Fusion for Medical Image Super-Resolution
by Huayong Yang, Qingsong Wei and Yu Sang
Electronics 2025, 14(18), 3726; https://doi.org/10.3390/electronics14183726 - 20 Sep 2025
Abstract
High-resolution (HR) medical images provide clearer anatomical details and facilitate early disease diagnosis, yet acquiring HR scans is often limited by imaging conditions, device capabilities, and patient factors. We propose a transform domain deep multiscale feature fusion generative adversarial network (MSFF-GAN) for medical image super-resolution (SR). Considering the advantages of generative adversarial networks (GANs) and convolutional neural networks (CNNs), MSFF-GAN integrates a deep multi-scale convolution network into the GAN generator, which is composed primarily of a series of cascaded multi-scale feature extraction blocks in a coarse-to-fine manner to restore the medical images. Two tailored blocks are designed: a multiscale information distillation (MSID) block that adaptively captures long- and short-path features across scales, and a granular multiscale (GMS) block that expands receptive fields at fine granularity to strengthen multiscale feature extraction with reduced computational cost. Unlike conventional methods that predict HR images directly in the spatial domain, which often yield excessively smoothed outputs with missing textures, we formulate SR as the prediction of coefficients in the non-subsampled shearlet transform (NSST) domain. This transform domain modeling enables better preservation of global anatomical structure and local texture details. The predicted coefficients are inverted to reconstruct HR images, and the transform domain subbands are also fed to the discriminator to enhance its discrimination ability and improve perceptual fidelity. Extensive experiments on medical image datasets demonstrate that MSFF-GAN outperforms state-of-the-art approaches in structural similarity index (SSIM) and peak signal-to-noise ratio (PSNR), while more effectively preserving global anatomy and fine textures. 
These results validate the effectiveness of combining multiscale feature fusion with transform domain prediction for high-quality medical image super-resolution. Full article
(This article belongs to the Special Issue New Trends in AI-Assisted Computer Vision)

21 pages, 4379 KB  
Article
Deep Learning-Based Super-Resolution Reconstruction of a 1/9 Arc-Second Offshore Digital Elevation Model for U.S. Coastal Regions
by Chenhao Wu, Bo Zhang, Meng Zhang and Chaofan Yang
Remote Sens. 2025, 17(18), 3205; https://doi.org/10.3390/rs17183205 - 17 Sep 2025
Abstract
High-resolution offshore digital elevation models (DEMs) are essential for coastal geomorphology, marine resource management, and disaster prevention. While deep learning-based super-resolution (SR) techniques have become a mainstream solution for enhancing DEMs, they often fail to maintain a balance between large-scale geomorphological structure and fine-scale topographic detail due to limitations in modeling spatial dependency. To overcome this challenge, we propose DEM-AMSSRN, a novel asymmetric multi-scale super-resolution network tailored for offshore DEM reconstruction. Our method incorporates region-level non-local (RL-NL) modules to capture long-range spatial dependencies and residual multi-scale blocks (RMSBs) to extract hierarchical terrain features. Additionally, a hybrid loss function combining pixel-wise, perceptual, and adversarial losses is introduced to ensure both geometric fidelity and visual realism. Experimental evaluations on U.S. offshore DEM datasets demonstrate that DEM-AMSSRN significantly outperforms existing GAN-based models, reducing RMSE by up to 72.47% (vs. SRGAN) and achieving 53.30 dB PSNR and 0.995056 SSIM. These results highlight its effectiveness in preserving both continental shelf-scale bathymetric patterns and detailed terrain textures. Using this model, we also constructed USA_OD_2025, a 1/9 arc-second high-resolution offshore DEM for U.S. coastal zones, providing a valuable geospatial foundation for future marine research and engineering. Full article
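The abstract reports both an RMSE reduction and a PSNR; once the data range is fixed, these are two views of the same error, since PSNR = 20 log10(range / RMSE). A small numpy sketch of the relation (the bathymetry values here are illustrative, not the paper's DEM data):

```python
import numpy as np

def rmse(a, b):
    return float(np.sqrt(np.mean((a - b) ** 2)))

def psnr_from_rmse(r, data_range):
    # PSNR and RMSE are interchangeable given the data range:
    # PSNR = 20 * log10(data_range / RMSE).
    return 20.0 * np.log10(data_range / r)

rng = np.random.default_rng(6)
truth = rng.uniform(-50.0, 10.0, size=(32, 32))          # toy bathymetry grid (m)
recon = truth + rng.normal(0.0, 0.2, size=truth.shape)   # toy SR reconstruction

r = rmse(truth, recon)
data_range = float(truth.max() - truth.min())
print(f"RMSE = {r:.3f} m, PSNR = {psnr_from_rmse(r, data_range):.2f} dB")

# Halving the RMSE raises PSNR by 20*log10(2), about 6.02 dB, which is
# why a large RMSE reduction translates into a large PSNR gain.
gain = psnr_from_rmse(r / 2, data_range) - psnr_from_rmse(r, data_range)
```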

43 pages, 3753 KB  
Review
Comprehensive Review of Deep Learning Approaches for Single-Image Super-Resolution
by Zirun Liu, Shijie Jiang, Shuhan Feng, Qirui Song and Ji Zhang
Sensors 2025, 25(18), 5768; https://doi.org/10.3390/s25185768 - 16 Sep 2025
Abstract
Single-image super-resolution (SISR) is a core challenge in the field of image processing, aiming to overcome the physical limitations of imaging systems and improve their resolution. This article systematically introduces the SISR method based on deep learning, proposes a method-oriented classification framework, and explores it from three aspects: theoretical basis, technological evolution, and domain-specific applications. Firstly, the basic concepts, development trajectory, and practical value of SISR are introduced. Secondly, in-depth research is conducted on key technical components, including benchmark dataset construction, a multi-scale upsampling strategy, objective function optimization, and quality assessment indicators. Thirdly, some classic SISR model reconstruction results are listed and compared. Finally, the limitations of SISR research are pointed out, and some prospective research directions are proposed. This article provides a systematic knowledge framework for researchers and offers important reference value for the future development of SISR. Full article

26 pages, 20242 KB  
Article
Multi-Source Feature Selection and Explainable Machine Learning Approach for Mapping Nitrogen Balance Index in Winter Wheat Based on Sentinel-2 Data
by Botai Shi, Xiaokai Chen, Yiming Guo, Li Liu, Peng Li and Qingrui Chang
Remote Sens. 2025, 17(18), 3196; https://doi.org/10.3390/rs17183196 - 16 Sep 2025
Abstract
The Nitrogen Balance Index is a key indicator of crop nitrogen status, but conventional monitoring methods are invasive, costly, and unsuitable for large-scale application. This study targets early-season winter wheat in the Guanzhong Plain and proposes a framework that integrates Sentinel-2 imagery with Sen2Res super-resolution reconstruction, multi-feature optimization, and interpretable machine learning. Super-resolved imagery demonstrated improved spatial detail and enhanced correlations between reflectance, texture, and vegetation indices and the Nitrogen Balance Index compared to native imagery. A two-stage feature-selection strategy, combining correlation analysis and recursive feature elimination, identified a compact set of key variables. Among the tested algorithms, the random forest model achieved the highest accuracy, with R2 = 0.77 and RMSE = 1.57, representing an improvement of about 20% over linear models. Shapley Additive Explanations revealed that red-edge and near-infrared features accounted for up to 75% of predictive contributions, highlighting their physiological relevance to nitrogen metabolism. Overall, this study contributes to the remote sensing of crop nitrogen status through three aspects: (1) integration of super-resolution with feature fusion to overcome coarse spatial resolution, (2) adoption of a two-stage feature optimization strategy to reduce redundancy, and (3) incorporation of interpretable modeling to improve transparency. The proposed framework supports regional-scale NBI monitoring and provides a scientific basis for precision fertilization. Full article
(This article belongs to the Special Issue Perspectives of Remote Sensing for Precision Agriculture)
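The two-stage feature-selection strategy described above, correlation screening followed by recursive elimination, can be sketched on hypothetical synthetic data. The informative columns below merely stand in for the red-edge/NIR features; this is not the authors' pipeline:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 200, 12
X = rng.normal(size=(n, p))
# Hypothetical target: driven by features 0, 3 and 7 plus noise,
# standing in for NBI driven by a few informative bands.
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + 1.0 * X[:, 7] + rng.normal(0, 0.3, n)

# Stage 1: correlation screening: keep features with |r| above a threshold.
r = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(p)])
stage1 = [j for j in range(p) if abs(r[j]) > 0.1]

# Stage 2: recursive elimination: repeatedly drop the feature whose
# least-squares coefficient is smallest in magnitude, down to 3 features.
selected = list(stage1)
while len(selected) > 3:
    beta, *_ = np.linalg.lstsq(X[:, selected], y, rcond=None)
    selected.pop(int(np.argmin(np.abs(beta))))

print(sorted(selected))
```

The screening stage cheaply discards clearly irrelevant variables; the elimination stage then removes redundancy among the survivors, which is the same division of labor as the correlation-plus-RFE scheme in the paper.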

21 pages, 37484 KB  
Article
Reconstructing Hyperspectral Images from RGB Images by Multi-Scale Spectral–Spatial Sequence Learning
by Wenjing Chen, Lang Liu and Rong Gao
Entropy 2025, 27(9), 959; https://doi.org/10.3390/e27090959 - 15 Sep 2025
Abstract
With rapid advancements in transformers, the reconstruction of hyperspectral images from RGB images, also known as spectral super-resolution (SSR), has made significant breakthroughs. However, existing transformer-based methods often struggle to balance computational efficiency with long-range receptive fields. Recently, Mamba has demonstrated linear complexity in modeling long-range dependencies and shown broad applicability in vision tasks. This paper proposes a multi-scale spectral–spatial sequence learning method, named MSS-Mamba, for reconstructing hyperspectral images from RGB images. First, we introduce a continuous spectral–spatial scan (CS3) mechanism to improve cross-dimensional feature extraction in the foundational Mamba model. Second, we propose a sequence tokenization strategy that generates multi-scale-aware sequences to overcome Mamba's limitations in hierarchically learning multi-scale information. Specifically, we design the multi-scale information fusion (MIF) module, which tokenizes input sequences before feeding them into Mamba. The MIF employs a dual-branch architecture to process global and local information separately, dynamically fusing features through an adaptive router that generates weighting coefficients. This produces feature maps that contain both global contextual information and local details, ultimately reconstructing a high-fidelity hyperspectral image. Experimental results on the ARAD_1k, CAVE, and grss_dfc_2018 datasets demonstrate the effectiveness of MSS-Mamba. Full article

5 pages, 1086 KB  
Abstract
First Laboratory Measurements of a Super-Resolved Compressive Instrument in the Medium Infrared
by Donatella Guzzi, Tiziano Bianchi, Marco Corti, Sara Francés González, Cinzia Lastri, Enrico Magli, Vanni Nardino, Christophe Pache, Lorenzo Palombi, Diego Valsesia and Valentina Raimondi
Proceedings 2025, 129(1), 24; https://doi.org/10.3390/proceedings2025129024 - 12 Sep 2025
Abstract
In the framework of the SURPRISE EU project, the Compressive Sensing (CS) paradigm was applied to the development of a laboratory demonstrator with improved spatial sampling operating from the visible up to the Medium InfraRed (MIR). The demonstrator, which utilizes a commercial Digital Micromirror Device modified by replacing its front window with one transparent up to the MIR, has 10 bands in the VIS-NIR range and 2 bands in the MIR range, with a super-resolution factor of up to 32. Measurements performed in the MIR spectral range using hot sources as targets show that CS is effective in reconstructing super-resolved hot targets. Full article
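Compressive sensing recovers a scene from fewer measurements than pixels by exploiting sparsity. A toy numpy sketch using orthogonal matching pursuit as the recovery algorithm (the demonstrator's actual reconstruction method is not specified in this abstract, so OMP is only a stand-in):

```python
import numpy as np

def omp(Phi, y, k):
    """Orthogonal matching pursuit: recover a k-sparse signal x
    from m < n compressive measurements y = Phi @ x."""
    residual, support = y.copy(), []
    for _ in range(k):
        # Pick the column most correlated with the current residual.
        j = int(np.argmax(np.abs(Phi.T @ residual)))
        if j not in support:
            support.append(j)
        # Re-fit on the chosen support and update the residual.
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(3)
n, m, k = 64, 32, 3                   # scene length, measurements, sparsity
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.normal(3, 1, k)  # sparse "hot spots"
Phi = rng.normal(size=(m, n)) / np.sqrt(m)                # random sensing matrix
y = Phi @ x                                               # compressive measurement

x_hat = omp(Phi, y, k)
print(np.linalg.norm(x - x_hat))
```

This mirrors the DMD's role in the instrument: the micromirror patterns implement the rows of the sensing matrix optically, and the sparse scene is reconstructed computationally from far fewer detector readings than output pixels.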

25 pages, 7964 KB  
Article
DSCSRN: Physically Guided Symmetry-Aware Spatial-Spectral Collaborative Network for Single-Image Hyperspectral Super-Resolution
by Xueli Chang, Jintong Liu, Guotao Wen, Xiaoyu Huang and Meng Yan
Symmetry 2025, 17(9), 1520; https://doi.org/10.3390/sym17091520 - 12 Sep 2025
Abstract
Hyperspectral images (HSIs), with their rich spectral information, are widely used in remote sensing; yet the inherent trade-off between spectral and spatial resolution in imaging systems often limits spatial details. Single-image hyperspectral super-resolution (HSI-SR) seeks to recover high-resolution HSIs from a single low-resolution input, but the high dimensionality and spectral redundancy of HSIs make this task challenging. In HSIs, spectral signatures and spatial textures often exhibit intrinsic symmetries, and preserving these symmetries provides additional physical constraints that enhance reconstruction fidelity and robustness. To address these challenges, we propose the Dynamic Spectral Collaborative Super-Resolution Network (DSCSRN), an end-to-end framework that integrates physical modeling with deep learning and explicitly embeds spatial–spectral symmetry priors into the network architecture. DSCSRN processes low-resolution HSIs with a Cascaded Residual Spectral Decomposition Network (CRSDN) to compress redundant channels while preserving spatial structures, generating accurate abundance maps. These maps are refined by two Synergistic Progressive Feature Refinement Modules (SPFRMs), which progressively enhance spatial textures and spectral details via a multi-scale dual-domain collaborative attention mechanism. The Dynamic Endmember Adjustment Module (DEAM) then adaptively updates spectral endmembers according to scene context, overcoming the limitations of fixed-endmember assumptions. Grounded in the Linear Mixture Model (LMM), this unmixing–recovery–reconstruction pipeline restores subtle spectral variations alongside improved spatial resolution. Experiments on the Chikusei, Pavia Center, and CAVE datasets show that DSCSRN outperforms state-of-the-art methods in both perceptual quality and quantitative performance, achieving an average PSNR of 43.42 and a SAM of 1.75 (×4 scale) on Chikusei. 
The integration of symmetry principles offers a unifying perspective aligned with the intrinsic structure of HSIs, producing reconstructions that are both accurate and structurally consistent. Full article
(This article belongs to the Section Computer)
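The Linear Mixture Model underlying the unmixing–recovery–reconstruction pipeline treats each pixel spectrum as a weighted sum of endmember spectra, with abundances summing to one. A minimal sketch on synthetic spectra, folding the sum-to-one constraint into a weighted extra equation (nonnegativity is not enforced in this sketch, and the spectra are invented):

```python
import numpy as np

rng = np.random.default_rng(4)
bands, endmembers = 30, 3
E = np.abs(rng.normal(1.0, 0.3, size=(bands, endmembers)))  # endmember spectra

# Synthesize a pixel as a convex mixture (abundances sum to one).
a_true = np.array([0.6, 0.3, 0.1])
pixel = E @ a_true

# Unmixing via least squares, with the sum-to-one constraint added
# as a heavily weighted extra row of the system.
w = 1e4
E_aug = np.vstack([E, w * np.ones((1, endmembers))])
p_aug = np.append(pixel, w * 1.0)
a_hat, *_ = np.linalg.lstsq(E_aug, p_aug, rcond=None)

print(np.round(a_hat, 3))
```

In DSCSRN the analogous abundance maps are produced by the CRSDN and only later recombined with the (dynamically adjusted) endmembers, which is what lets the network compress the spectral dimension without discarding physical structure.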

23 pages, 4203 KB  
Article
Improved Super-Resolution Reconstruction Algorithm Based on SRGAN
by Guiying Zhang, Tianfu Guo, Zhiqiang Wang, Wenjia Ren and Aryan Joshi
Appl. Sci. 2025, 15(18), 9966; https://doi.org/10.3390/app15189966 - 11 Sep 2025
Abstract
To improve the performance of image super-resolution reconstruction, this paper optimizes the classical SRGAN model architecture. The original SRResNet is replaced with the EDSR network as the generator, which effectively enhances the ability to restore image details. To address the issue of insufficient multi-scale feature extraction in SRGAN during image reconstruction, an LSK attention mechanism is introduced into the generator. By fusing features from different receptive fields through parallel multi-scale convolution kernels, the model improves its ability to capture key details. To mitigate the instability and overfitting problems in discriminator training, the Mish activation function is used instead of LeakyReLU to improve gradient flow, and a Dropout layer is introduced to enhance the discriminator's generalization ability, preventing overfitting to the generator. Additionally, a staged training strategy is employed during adversarial training. Experimental results show that the improved model effectively enhances image reconstruction quality while maintaining low complexity, with clearer details and more natural visual effects. On the public datasets Set5, Set14, and BSD100, PSNR and SSIM improved over the original SRGAN by 13.4% and 5.9%, 9.9% and 6.0%, and 6.8% and 5.8%, respectively, yielding more refined and realistic reconstructions. The model also demonstrates stronger generalization ability on complex cross-domain data, such as remote sensing images and medical images. Overall, the improved model achieves higher-quality image reconstruction while maintaining moderate computational overhead, validating the effectiveness of the proposed improvements. Full article
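The Mish activation substituted for LeakyReLU in the discriminator is x · tanh(softplus(x)); unlike LeakyReLU it is smooth everywhere, which is the gradient-flow argument made above. A quick numpy comparison of the two:

```python
import numpy as np

def softplus(x):
    # Numerically stable softplus: log(1 + exp(x)).
    return np.logaddexp(0.0, x)

def mish(x):
    """Mish activation: x * tanh(softplus(x)). Smooth and non-monotonic,
    with a small negative dip instead of LeakyReLU's fixed slope."""
    return x * np.tanh(softplus(x))

def leaky_relu(x, alpha=0.2):
    return np.where(x >= 0, x, alpha * x)

xs = np.linspace(-4, 4, 9)
print(np.round(mish(xs), 3))
print(np.round(leaky_relu(xs), 3))
```

For large positive inputs Mish approaches the identity, while for large negative inputs it decays toward zero rather than growing linearly, bounding the negative response the way the paper relies on for more stable discriminator training.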

16 pages, 7343 KB  
Article
Accelerated Super-Resolution Reconstruction for Structured Illumination Microscopy Integrated with Low-Light Optimization
by Caihong Huang, Dingrong Yi and Lichun Zhou
Micromachines 2025, 16(9), 1020; https://doi.org/10.3390/mi16091020 - 3 Sep 2025
Abstract
Structured illumination microscopy (SIM) with π/2 phase-shift modulation traditionally relies on frequency-domain computation, which greatly limits processing efficiency. In addition, the illumination regime inherent in structured illumination techniques often results in poor visual quality of reconstructed images. To address these dual challenges, this study introduces DM-SIM-LLIE (Differential Low-Light Image Enhancement SIM), a novel framework that integrates two synergistic innovations. First, the study pioneers a spatial-domain computational paradigm for π/2 phase-shift SIM reconstruction. Through system differentiation, mathematical derivation, and algorithm simplification, an optimized spatial-domain model is established. Second, an adaptive local overexposure correction strategy is developed, combined with a zero-shot learning deep learning algorithm, RUAS, to enhance the image quality of structured light reconstructed images. Experimental validation using specimens such as fluorescent microspheres and bovine pulmonary artery endothelial cells demonstrates the advantages of this approach: compared with traditional frequency-domain methods, the reconstruction speed is accelerated by five times while maintaining equivalent lateral resolution and excellent axial resolution. The image quality of the low-light enhancement algorithm after local overexposure correction is superior to existing methods. These advances significantly increase the application potential of SIM technology in time-sensitive biomedical imaging scenarios that require high spatiotemporal resolution. Full article
(This article belongs to the Special Issue Advanced Biomaterials, Biodevices, and Their Application)

28 pages, 19672 KB  
Article
A Multi-Fidelity Data Fusion Approach Based on Semi-Supervised Learning for Image Super-Resolution in Data-Scarce Scenarios
by Hongzheng Zhu, Yingjuan Zhao, Ximing Qiao, Jinshuo Zhang, Jingnan Ma and Sheng Tong
Sensors 2025, 25(17), 5373; https://doi.org/10.3390/s25175373 - 31 Aug 2025
Abstract
Image super-resolution (SR) techniques can significantly enhance visual quality and information density. However, existing methods often rely on large amounts of paired low- and high-resolution (LR-HR) data, which limits their generalization and robustness when faced with data scarcity, distribution inconsistencies, and missing high-frequency details. To tackle the challenges of image reconstruction in data-scarce scenarios, this paper proposes a semi-supervised learning-driven multi-fidelity fusion (SSLMF) method, which integrates multi-fidelity data fusion (MFDF) and semi-supervised learning (SSL) to reduce reliance on high-fidelity data. More specifically, (1) an MFDF strategy is employed to leverage low-fidelity data for global structural constraints, enhancing information compensation; (2) an SSL mechanism is introduced to reduce data dependence by using only a small amount of labeled HR samples along with a large quantity of unlabeled multi-fidelity data. This framework significantly improves data efficiency and reconstruction quality. We first validate the reconstruction accuracy of SSLMF on benchmark functions and then apply it to image reconstruction tasks. The results demonstrate that SSLMF can effectively model both linear and nonlinear relationships among multi-fidelity data, maintaining high performance even with limited high-fidelity samples. Finally, its cross-disciplinary potential is illustrated through an audio restoration case study, offering a novel solution for efficient image reconstruction, especially in data-scarce scenarios where high-fidelity samples are limited. Full article
(This article belongs to the Section Sensing and Imaging)

24 pages, 17568 KB  
Article
Super-Resolved Pseudo Reference in Dual-Branch Embedding for Blind Ultra-High-Definition Image Quality Assessment
by Jiacheng Gu, Qingxu Meng, Songnan Zhao, Yifan Wang, Shaode Yu and Qiurui Sun
Electronics 2025, 14(17), 3447; https://doi.org/10.3390/electronics14173447 - 29 Aug 2025
Abstract
In the Ultra-High-Definition (UHD) domain, blind image quality assessment remains challenging due to the high dimensionality of UHD images, which exceeds the input capacity of deep learning networks. Motivated by the visual discrepancies observed between high- and low-quality images after down-sampling and Super-Resolution (SR) reconstruction, we propose a SUper-Resolved Pseudo References In Dual-branch Embedding (SURPRIDE) framework tailored for UHD image quality prediction. SURPRIDE employs one branch to capture intrinsic quality features from the original patch input and the other to encode comparative perceptual cues from the SR-reconstructed pseudo-reference. The fusion of the complementary representation, guided by a novel hybrid loss function, enhances the network’s ability to model both absolute and relational quality cues. Key components of the framework are optimized through extensive ablation studies. Experimental results demonstrate that the SURPRIDE framework achieves competitive performance on two UHD benchmarks (AIM 2024 Challenge, PLCC = 0.7755, SRCC = 0.8133, on the testing set; HRIQ, PLCC = 0.882, SRCC = 0.873). Meanwhile, its effectiveness is verified on high- and standard-definition image datasets across diverse resolutions. Future work may explore positional encoding, advanced representation learning, and adaptive multi-branch fusion to align model predictions with human perceptual judgment in real-world scenarios. Full article

21 pages, 6790 KB  
Article
MGFormer: Super-Resolution Reconstruction of Retinal OCT Images Based on a Multi-Granularity Transformer
by Jingmin Luan, Zhe Jiao, Yutian Li, Yanru Si, Jian Liu, Yao Yu, Dongni Yang, Jia Sun, Zehao Wei and Zhenhe Ma
Photonics 2025, 12(9), 850; https://doi.org/10.3390/photonics12090850 - 25 Aug 2025
Abstract
Optical coherence tomography (OCT) acquisitions often reduce lateral sampling density to shorten scan time and suppress motion artifacts, but this strategy degrades the signal-to-noise ratio and obscures fine retinal microstructures. To recover these details without hardware modifications, we propose MGFormer, a lightweight Transformer for OCT super-resolution (SR) that integrates a multi-granularity attention mechanism with tensor distillation. A feature-enhancing convolution first sharpens edges; stacked multi-granularity attention blocks then fuse coarse-to-fine context, while a row-wise top-k operator retains the most informative tokens and preserves their positional order. We trained and evaluated MGFormer on B-scans from the Duke SD-OCT dataset at 2×, 4×, and 8× scaling factors. Relative to seven recent CNN- and Transformer-based SR models, MGFormer achieves the highest quantitative fidelity; at 4× it reaches 34.39 dB PSNR and 0.8399 SSIM, surpassing SwinIR by +0.52 dB and +0.026 SSIM, and reduces LPIPS by 21.4%. Compared with the same backbone without tensor distillation, FLOPs drop from 289G to 233G (−19.4%), and per-B-scan latency at 4× falls from 166.43 ms to 98.17 ms (−41.01%); the model size remains compact (105.68 MB). A blinded reader study shows higher scores for boundary sharpness (4.2 ± 0.3), pathology discernibility (4.1 ± 0.3), and diagnostic confidence (4.3 ± 0.2), exceeding SwinIR by 0.3–0.5 points. These results suggest that MGFormer can provide fast, high-fidelity OCT SR suitable for routine clinical workflows. Full article
(This article belongs to the Section Biophotonics and Biomedical Optics)

19 pages, 1225 KB  
Article
Lightweight Image Super-Resolution Reconstruction Network Based on Multi-Order Information Optimization
by Shengxuan Gao, Long Li, Wen Cui, He Jiang and Hongwei Ge
Sensors 2025, 25(17), 5275; https://doi.org/10.3390/s25175275 - 25 Aug 2025
Viewed by 985
Abstract
Traditional information distillation networks that rely on single-scale convolution and simple feature fusion often extract insufficient information and fail to restore high-frequency details. To address this problem, we propose a lightweight image super-resolution reconstruction network based on multi-order information optimization, whose core is the enhancement and refinement of high-frequency information. Our method operates in two stages to fully exploit high-frequency features while eliminating redundant information, thereby strengthening the network's detail restoration capability. In the high-frequency information enhancement stage, we design a self-calibration high-frequency information enhancement block. This block generates calibration weights through self-calibration branches to modulate the response strength of each pixel, selectively enhancing critical high-frequency information. Additionally, we combine an auxiliary branch with a chunked spatial optimization strategy to extract local details and adaptively reinforce high-frequency features. In the high-frequency information refinement stage, we propose a multi-scale high-frequency information refinement block. First, multi-scale information is captured through sampling at multiple rates to enrich the feature hierarchy. Second, the high-frequency information is further refined by a multi-branch structure incorporating wavelet convolution and band convolution, enabling the extraction of diverse detailed features. Experimental results demonstrate that our network achieves an optimal balance between complexity and performance, outperforming popular lightweight networks in both quantitative metrics and visual quality. Full article
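The self-calibration idea — deriving per-pixel weights from the feature map itself and using them to modulate response strength — can be sketched in a few lines. This is not the paper's implementation; it is a minimal NumPy illustration under assumed details (2x average pooling for the coarse context, a sigmoid gate), and the function name `self_calibrated_enhance` is hypothetical:

```python
import numpy as np

def self_calibrated_enhance(feat):
    """Toy self-calibration gate on a (h, w) feature map with even h, w.

    A coarse context is formed by 2x average pooling and nearest-neighbour
    upsampling; the gap between each pixel and its local context is passed
    through a sigmoid to produce calibration weights in (0, 1), which then
    modulate (selectively amplify) the original responses.
    """
    h, w = feat.shape
    # coarse context: 2x average pooling ...
    coarse = feat.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    # ... upsampled back to full resolution
    context = np.repeat(np.repeat(coarse, 2, axis=0), 2, axis=1)
    # calibration weights: pixels above their local context gate higher
    weights = 1.0 / (1.0 + np.exp(-(feat - context)))
    return feat * (1.0 + weights)

feat = np.array([[0.0, 2.0],
                 [0.0, 2.0]])
out = self_calibrated_enhance(feat)
# pixels above the local mean (the 2.0s) are amplified more strongly,
# which is the "selective enhancement" of high-frequency content
```

The design point is that the gate is computed from the feature map itself rather than learned attention over a fixed template, so strong local deviations (edges, fine texture) receive proportionally larger responses.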
(This article belongs to the Section Sensing and Imaging)
