Search Results (619)

Search Parameters:
Keywords = super-resolution reconstruction

18 pages, 24463 KB  
Article
Multi-Scale Adaptive Modulation Network for Efficient Image Super-Resolution
by Zepeng Liu, Guodong Zhang, Jiya Tian and Ruimin Qi
Electronics 2025, 14(22), 4404; https://doi.org/10.3390/electronics14224404 - 12 Nov 2025
Abstract
As convolutional neural networks (CNNs) grow progressively larger and deeper, their applicability in real-time and resource-constrained environments is significantly limited. Furthermore, while self-attention (SA) mechanisms excel at capturing global dependencies, they often emphasize low-frequency information and struggle to represent fine local details. To overcome these limitations, we propose a multi-scale adaptive modulation network (MAMN) for image super-resolution. The MAMN mainly consists of a series of multi-scale adaptive modulation blocks (MAMBs), each of which incorporates a multi-scale adaptive modulation layer (MAML), a local detail extraction layer (LDEL), and two Swin Transformer layers (STLs). The MAML is designed to capture multi-scale non-local representations, while the LDEL complements this by extracting high-frequency local features. Additionally, the STLs enhance long-range dependency modeling, effectively expanding the receptive field and integrating global contextual information. Extensive experiments demonstrate that the proposed method achieves an optimal trade-off between computational efficiency and reconstruction performance across five benchmark datasets.
(This article belongs to the Special Issue Intelligent Signal Processing and Its Applications)
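For intuition, here is a minimal PyTorch sketch of one such block, assuming plausible internals: the abstract specifies only that each MAMB combines a MAML, an LDEL, and two Swin Transformer layers, so the kernel sizes, the pooling-based modulation, and the use of a plain Transformer encoder layer in place of true shifted-window attention are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MAML(nn.Module):
    """Multi-scale adaptive modulation: pooled multi-scale branches produce
    a modulation map that gates the input features."""
    def __init__(self, dim, scales=(1, 2, 4)):
        super().__init__()
        self.scales = scales
        self.proj_in = nn.Conv2d(dim, dim, 1)
        self.dw = nn.ModuleList(
            nn.Conv2d(dim, dim, 3, padding=1, groups=dim) for _ in scales)
        self.proj_out = nn.Conv2d(dim, dim, 1)

    def forward(self, x):
        h, w = x.shape[-2:]
        a = self.proj_in(x)
        out = 0
        for s, conv in zip(self.scales, self.dw):
            y = F.adaptive_max_pool2d(a, (max(h // s, 1), max(w // s, 1)))
            y = F.interpolate(conv(y), size=(h, w), mode='nearest')
            out = out + y
        return self.proj_out(x * torch.sigmoid(out))  # adaptive modulation

class LDEL(nn.Module):
    """Local detail extraction: small conv stack for high-frequency features."""
    def __init__(self, dim):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(dim, dim, 3, padding=1), nn.GELU(),
            nn.Conv2d(dim, dim, 3, padding=1))

    def forward(self, x):
        return x + self.body(x)

class MAMB(nn.Module):
    def __init__(self, dim, heads=4):
        super().__init__()
        self.maml, self.ldel = MAML(dim), LDEL(dim)
        # stand-in for two Swin Transformer layers (no window partitioning here)
        self.stls = nn.ModuleList(
            nn.TransformerEncoderLayer(dim, heads, dim * 2, batch_first=True)
            for _ in range(2))

    def forward(self, x):
        x = self.ldel(self.maml(x))
        b, c, h, w = x.shape
        t = x.flatten(2).transpose(1, 2)   # (B, H*W, C) token view
        for stl in self.stls:
            t = stl(t)
        return t.transpose(1, 2).view(b, c, h, w)

lr_feats = torch.randn(1, 32, 48, 48)
print(MAMB(32)(lr_feats).shape)  # torch.Size([1, 32, 48, 48])
```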

25 pages, 2896 KB  
Article
A Multi-Scale Windowed Spatial and Channel Attention Network for High-Fidelity Remote Sensing Image Super-Resolution
by Xiao Xiao, Xufeng Xiang, Jianqiang Wang, Liwen Wang, Xingzhi Gao, Yang Chen, Jun Liu, Peng He, Junhui Han and Zhiqiang Li
Remote Sens. 2025, 17(21), 3653; https://doi.org/10.3390/rs17213653 - 6 Nov 2025
Viewed by 346
Abstract
Remote sensing image super-resolution (SR) plays a crucial role in enhancing the quality and resolution of satellite and aerial imagery, which is essential for various applications, including environmental monitoring and urban planning. While recent image super-resolution networks have achieved strong results, remote sensing images present domain-specific challenges—complex spatial distribution, large cross-scale variations, and dynamic topographic effects—that can destabilize multi-scale fusion and limit the direct applicability of generic SR models. These characteristics make it difficult for single-scale feature extraction methods to fully capture the complex structure, leading to artifacts and structural distortion in the reconstructed remote sensing images. New methods are therefore needed to overcome these challenges and improve the accuracy and detail fidelity of remote sensing image super-resolution reconstruction. This paper proposes a novel Multi-scale Windowed Spatial and Channel Attention Network (MSWSCAN) for high-fidelity remote sensing image super-resolution. The proposed method combines multi-scale feature extraction, window-based spatial attention, and channel attention mechanisms to effectively capture both global and local image features while addressing the challenges of fine details and structural distortion. The network is evaluated on several benchmark datasets, including WHU-RS19, UCMerced, and RSSCN7, where it demonstrates superior performance in terms of peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) compared to state-of-the-art methods. The results show that the MSWSCAN not only enhances texture details and edge sharpness but also reduces reconstruction artifacts. To address the cross-scale variations and dynamic topographic effects that cause texture drift in multi-scale SR, we combine windowed spatial attention, which preserves local geometry, with a channel-aware fusion layer (FFL) that reweights multi-scale channels. This stabilizes cross-scale aggregation at a runtime comparable to DAT and yields sharper details on heterogeneous land covers. Averaged over WHU-RS19, RSSCN7, and UCMerced_LandUse at ×2/×3/×4, MSWSCAN improves PSNR/SSIM by +0.10 dB/+0.0038 over SwinIR and by +0.05 dB/+0.0017 over DAT. In conclusion, the proposed MSWSCAN achieves state-of-the-art performance in remote sensing image SR, offering a promising solution for high-quality image enhancement in remote sensing applications.
(This article belongs to the Special Issue Artificial Intelligence for Optical Remote Sensing Image Processing)
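As a rough illustration of the channel-aware fusion idea, the sketch below reweights concatenated multi-scale features with a squeeze-and-excitation-style gate before fusing them; the abstract states only that the FFL reweights multi-scale channels, so the pooling/MLP structure and dimensions are assumptions.

```python
import torch
import torch.nn as nn

class ChannelAwareFusion(nn.Module):
    def __init__(self, dim, num_scales=3, reduction=4):
        super().__init__()
        total = dim * num_scales
        self.gate = nn.Sequential(            # global pooling -> channel weights
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(total, total // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(total // reduction, total, 1), nn.Sigmoid())
        self.fuse = nn.Conv2d(total, dim, 1)  # project back to base width

    def forward(self, feats):                 # feats: list of (B, dim, H, W)
        x = torch.cat(feats, dim=1)
        return self.fuse(x * self.gate(x))    # reweight, then fuse

scales = [torch.randn(2, 16, 32, 32) for _ in range(3)]
print(ChannelAwareFusion(16)(scales).shape)   # torch.Size([2, 16, 32, 32])
```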

19 pages, 2680 KB  
Article
ESSTformer: A CNN-Transformer Hybrid with Decoupled Spatial Spectral Transformers for Hyperspectral Image Super-Resolution
by Hehuan Li, Chen Yi, Jiming Liu, Zhen Zhang and Yu Dong
Appl. Sci. 2025, 15(21), 11738; https://doi.org/10.3390/app152111738 - 4 Nov 2025
Viewed by 267
Abstract
Hyperspectral images (HSIs) are crucial for ground object classification, target detection, and related applications due to their rich spatial spectral information. However, hardware limitations in imaging systems make it challenging to directly acquire HSIs with a high spatial resolution. While deep learning-based single hyperspectral image super-resolution (SHSR) methods have made significant progress, existing approaches primarily rely on convolutional neural networks (CNNs) with fixed geometric kernels, which struggle to model global spatial spectral dependencies effectively. To address this, we propose ESSTformer, a novel SHSR framework that synergistically integrates CNNs’ local feature extraction and Transformers’ global modeling capabilities. Specifically, we design a multi-scale spectral attention module (MSAM) based on dilated convolutions to capture local multi-scale spatial spectral features. Considering the inherent differences between spatial and spectral information, we adopt a decoupled processing strategy by constructing separate Spatial and Spectral Transformers. The Spatial Transformer employs window attention mechanisms and an improved convolutional multi-layer perceptron (CMLP) to model long-range spatial dependencies, while the Spectral Transformer utilizes self-attention mechanisms combined with a spectral enhancement module to focus on discriminative spectral features. Extensive experiments on three hyperspectral datasets demonstrate that the proposed ESSTformer achieves superior performance in super-resolution reconstruction compared to state-of-the-art methods.
(This article belongs to the Special Issue Advances in Optical Imaging and Deep Learning)
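A minimal sketch of a dilated-convolution multi-scale spectral attention module of the kind described is shown below; the branch count, dilation rates, and channel-attention form are illustrative assumptions rather than the paper's exact design.

```python
import torch
import torch.nn as nn

class MSAM(nn.Module):
    def __init__(self, bands, dilations=(1, 2, 4), reduction=4):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(bands, bands, 3, padding=d, dilation=d)
            for d in dilations)                  # multi-scale receptive fields
        self.attn = nn.Sequential(               # spectral (channel) attention
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(bands, bands // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(bands // reduction, bands, 1), nn.Sigmoid())

    def forward(self, x):
        feats = sum(b(x) for b in self.branches)
        return x + feats * self.attn(feats)      # residual spectral reweighting

hsi = torch.randn(1, 31, 64, 64)                 # e.g., a 31-band hyperspectral cube
print(MSAM(31)(hsi).shape)                       # torch.Size([1, 31, 64, 64])
```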

21 pages, 2773 KB  
Article
MSFANet: A Multi-Scale Feature Fusion Transformer with Hybrid Attention for Remote Sensing Image Super-Resolution
by Jie Yu, Chengcheng Lin, Luyao Peng, Cheng Zhong and Hui Li
Sensors 2025, 25(21), 6729; https://doi.org/10.3390/s25216729 - 3 Nov 2025
Viewed by 403
Abstract
To address the issue of insufficient resolution in remote sensing images due to limitations in sensors and transmission, this paper proposes a multi-scale feature fusion model, MSFANet, based on the Swin Transformer architecture for remote sensing image super-resolution reconstruction. The model comprises three main modules: shallow feature extraction, deep feature extraction, and high-quality image reconstruction. The deep feature extraction module innovatively introduces three core components: Feature Refinement Augmentation (FRA), Local Structure Optimization (LSO), and Residual Fusion Network (RFN), which effectively extract and adaptively aggregate multi-scale information from local to global levels. Experiments conducted on three public remote sensing datasets (RSSCN7, AID, and WHU-RS19) demonstrate that MSFANet outperforms state-of-the-art models (including HSENet and TransENet) across five evaluation metrics in ×2, ×3, and ×4 super-resolution tasks. Furthermore, MSFANet achieves superior reconstruction quality with reduced computational overhead, striking an optimal balance between efficiency and performance. This positions MSFANet as an effective solution for remote sensing image super-resolution applications.
(This article belongs to the Section Remote Sensors)

15 pages, 1970 KB  
Article
Super-Resolution Reconstruction of Sonograms Using Residual Dense Conditional Generative Adversarial Network
by Zengbo Xu and Yiheng Wei
Sensors 2025, 25(21), 6694; https://doi.org/10.3390/s25216694 - 2 Nov 2025
Viewed by 227
Abstract
A method for super-resolution reconstruction of sonograms based on a Residual Dense Conditional Generative Adversarial Network (RDC-GAN) is proposed in this paper. The resolution of medical ultrasound images is limited, and single-frame image super-resolution algorithms based on convolutional neural networks are prone to losing texture details and extracting far fewer features, thereby blurring the reconstructed images. It is therefore important that high-resolution medical image reconstruction retains texture details. A Generative Adversarial Network can learn the mapping relationship between low-resolution and high-resolution images. Based on GAN, a new network is designed in which the generation network is composed of dense residual modules. On the one hand, low-resolution (LR) images are input into the dense residual network, the multi-level features of the images are learned, and these features are then fused into global residual features. On the other hand, conditional variables are introduced into the discriminator network to guide the process of super-resolution image reconstruction. The proposed method achieves 4× magnification reconstruction of medical ultrasound images. Compared with classical algorithms including bicubic interpolation, SRGAN, and SRCNN, experimental results show that RDC-GAN effectively improves the super-resolution of medical ultrasound images in both objective numerical evaluation and subjective visual assessment. Moreover, the use of super-resolution reconstructed images for staging the diagnosis of cirrhosis is discussed, and the resulting accuracy rates demonstrate their practicality compared with the original images.
(This article belongs to the Section Sensing and Imaging)
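For concreteness, below is a minimal sketch of the kind of dense residual module such a generator stacks, following the standard residual-dense-block pattern (dense connectivity, local feature fusion, residual skip); the growth rate and layer count are illustrative, and the conditional discriminator is not shown.

```python
import torch
import torch.nn as nn

class ResidualDenseBlock(nn.Module):
    def __init__(self, dim=64, growth=32, layers=4):
        super().__init__()
        self.convs = nn.ModuleList()
        for i in range(layers):
            self.convs.append(nn.Sequential(
                nn.Conv2d(dim + i * growth, growth, 3, padding=1),
                nn.LeakyReLU(0.2, inplace=True)))
        self.lff = nn.Conv2d(dim + layers * growth, dim, 1)  # local feature fusion

    def forward(self, x):
        feats = [x]
        for conv in self.convs:
            feats.append(conv(torch.cat(feats, dim=1)))      # dense connectivity
        return x + self.lff(torch.cat(feats, dim=1))         # local residual

x = torch.randn(1, 64, 40, 40)
print(ResidualDenseBlock()(x).shape)  # torch.Size([1, 64, 40, 40])
```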

24 pages, 25418 KB  
Article
A Transformer-Based Residual Attention Network Combining SAR and Terrain Features for DEM Super-Resolution Reconstruction
by Ruoxuan Chen, Yumin Chen, Tengfei Zhang, Fei Zeng and Zhanghui Li
Remote Sens. 2025, 17(21), 3625; https://doi.org/10.3390/rs17213625 - 1 Nov 2025
Viewed by 326
Abstract
Acquiring high-resolution digital elevation models (DEMs) across extensive regions remains challenging due to high costs and insufficient detail, creating demand for super-resolution (SR) techniques. However, existing DEM SR methods still rely on limited data sources and often neglect essential terrain features. To address these issues, SAR data is used to complement existing sources, given its all-weather capability and strong penetration, and a Transformer-based Residual Attention Network combining SAR and Terrain Features (TRAN-ST) is proposed. The network incorporates intensity and coherence as SAR features to restore the details of high-resolution DEMs, while slope and aspect constraints in the loss function enhance terrain consistency. Additionally, it combines a lightweight Transformer module with a residual feature aggregation module, which enhances global perception while aggregating local residual features, thereby improving reconstruction accuracy and training efficiency. Experiments were conducted on two DEMs in San Diego, USA, and the results show that compared with bicubic interpolation, SRCNN, EDSR, RFAN, and HNCT, the model reduces the mean absolute error (MAE) by 2–30%, the root mean square error (RMSE) by 1–31%, and the MAE of the slope by 2–13%, while effectively reducing the number of parameters, demonstrating that TRAN-ST outperforms current typical methods.
(This article belongs to the Special Issue Deep Learning Innovations in Remote Sensing)
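The slope and aspect constraints lend themselves to a compact illustration: below is a hedged sketch of a terrain-consistency loss that derives slope and aspect from finite-difference DEM gradients and penalizes their mismatch alongside a pixel-wise term. The gradient scheme and weighting factors are assumptions; the paper's exact formulation may differ.

```python
import torch
import torch.nn.functional as F

def dem_gradients(dem):                         # dem: (B, 1, H, W) elevation grid
    dzdx = dem[..., :, 1:] - dem[..., :, :-1]   # finite differences in x
    dzdy = dem[..., 1:, :] - dem[..., :-1, :]   # finite differences in y
    dzdx = F.pad(dzdx, (0, 1, 0, 0))            # keep spatial shape
    dzdy = F.pad(dzdy, (0, 0, 0, 1))
    return dzdx, dzdy

def terrain_loss(pred, target, w_slope=0.1, w_aspect=0.05):
    px, py = dem_gradients(pred)
    tx, ty = dem_gradients(target)
    slope_p = torch.sqrt(px ** 2 + py ** 2 + 1e-8)
    slope_t = torch.sqrt(tx ** 2 + ty ** 2 + 1e-8)
    # note: aspect wraps at +/-pi; a cyclic loss would be more faithful
    aspect_p = torch.atan2(py, px + 1e-8)
    aspect_t = torch.atan2(ty, tx + 1e-8)
    return (F.l1_loss(pred, target)
            + w_slope * F.l1_loss(slope_p, slope_t)
            + w_aspect * F.l1_loss(aspect_p, aspect_t))

hr_pred, hr_true = torch.randn(2, 1, 64, 64), torch.randn(2, 1, 64, 64)
print(terrain_loss(hr_pred, hr_true).item())
```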

17 pages, 3049 KB  
Article
PECNet: A Lightweight Single-Image Super-Resolution Network with Periodic Boundary Padding Shift and Multi-Scale Adaptive Feature Aggregation
by Tianyu Gao and Yuhao Liu
Symmetry 2025, 17(11), 1833; https://doi.org/10.3390/sym17111833 - 1 Nov 2025
Viewed by 267
Abstract
Lightweight Single-Image Super-Resolution (SISR) faces the core challenge of balancing computational efficiency with reconstruction quality, particularly in preserving both high-frequency details and global structures under constrained resources. To address this, we propose the Periodically Enhanced Cascade Network (PECNet). Our main contributions are as follows: 1. Its core component is a novel Multi-scale Adaptive Feature Aggregation (MAFA) module, which employs three functionally complementary branches that work synergistically: one dedicated to extracting local high-frequency details, another to efficiently modeling long-range dependencies, and a third to capturing structured contextual information within windows. 2. To seamlessly integrate these branches and enable cross-window information interaction, we introduce the Periodic Boundary Padding Shift (PBPS) mechanism, a symmetric preprocessing step that achieves implicit window shifting without introducing any additional computational overhead. Extensive benchmarking shows that PECNet achieves better reconstruction quality without a complexity increase. Taking the representative shift-window-based lightweight model NGswin as an example, for ×4 SR on the Manga109 dataset, PECNet achieves an average PSNR 0.25 dB higher, while its computational cost (in FLOPs) is merely 40% of NGswin’s.
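Assuming PBPS behaves like the cyclic shift used in shifted-window attention (which the name and the "no extra overhead" claim suggest, though the paper's exact mechanism may differ), the idea can be sketched in a few lines: rolling the feature map wraps content across the periodic boundary, so a plain window partition afterwards effectively sees shifted windows.

```python
import torch

def periodic_shift(x, shift):              # x: (B, C, H, W)
    return torch.roll(x, shifts=(shift, shift), dims=(-2, -1))

def window_partition(x, win):               # -> (B * nH * nW, C, win, win)
    b, c, h, w = x.shape
    x = x.view(b, c, h // win, win, w // win, win)
    return x.permute(0, 2, 4, 1, 3, 5).reshape(-1, c, win, win)

feat = torch.randn(1, 8, 16, 16)
shifted = periodic_shift(feat, shift=4)      # wrap borders periodically
wins = window_partition(shifted, win=8)      # plain partition now yields shifted windows
restored = periodic_shift(shifted, -4)       # inverse shift is exact
print(wins.shape, torch.equal(restored, feat))  # torch.Size([4, 8, 8, 8]) True
```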

34 pages, 7677 KB  
Article
JSPSR: Joint Spatial Propagation Super-Resolution Networks for Enhancement of Bare-Earth Digital Elevation Models from Global Data
by Xiandong Cai and Matthew D. Wilson
Remote Sens. 2025, 17(21), 3591; https://doi.org/10.3390/rs17213591 - 30 Oct 2025
Viewed by 399
Abstract
(1) Background: Digital Elevation Models (DEMs) are digital representations of the bare earth surface that are essential for spatial data analysis, such as hydrological and geological modelling, as well as for other applications, such as agriculture and environmental management. However, available bare-earth DEMs can have limited coverage or accessibility. Moreover, the majority of available global DEMs have lower spatial resolutions (∼30–90 m) and contain errors introduced by surface features such as buildings and vegetation. (2) Methods: This research presents an innovative method to convert global DEMs to bare-earth DEMs while enhancing their spatial resolution, as measured by the improved vertical accuracy of each pixel combined with reduced pixel size. We propose the Joint Spatial Propagation Super-Resolution network (JSPSR), which integrates Guided Image Filtering (GIF) and a Spatial Propagation Network (SPN). By leveraging guidance features extracted from remote sensing images, with or without auxiliary spatial data, our method can correct elevation errors and enhance the spatial resolution of DEMs. We developed a dataset for real-world bare-earth DEM Super-Resolution (SR) problems in low-relief areas utilising open-access data. Experiments were conducted on the dataset using JSPSR and other methods to predict 3 m and 8 m spatial resolution DEMs from 30 m spatial resolution Copernicus GLO-30 DEMs. (3) Results: JSPSR improved prediction accuracy by 71.74% on Root Mean Squared Error (RMSE) and reconstruction quality by 22.9% on Peak Signal-to-Noise Ratio (PSNR) compared to bicubic interpolated GLO-30 DEMs, and achieves 56.03% and 13.8% improvements on the same metrics over a baseline Single Image Super-Resolution (SISR) method. Overall RMSE was 1.06 m at 8 m spatial resolution and 1.1 m at 3 m, compared to 3.8 m for GLO-30, 1.8 m for FABDEM, and 1.3 m for FathomDEM, at either resolution. (4) Conclusions: JSPSR outperforms other methods in bare-earth DEM super-resolution tasks, with improved elevation accuracy compared to other state-of-the-art globally available datasets.
(This article belongs to the Special Issue Artificial Intelligence Remote Sensing for Earth Observation)
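As background for the GIF component, here is the textbook single-channel guided image filter that such a network builds on; this is the classical formulation (He et al.), not JSPSR's learned variant, and the radius and epsilon are illustrative.

```python
import torch
import torch.nn.functional as F

def box_filter(x, r):
    k = 2 * r + 1
    return F.avg_pool2d(x, k, stride=1, padding=r, count_include_pad=False)

def guided_filter(guide, src, r=4, eps=1e-4):
    """Smooth `src` (e.g., an upsampled DEM) guided by `guide` (e.g., optical imagery)."""
    mean_i, mean_p = box_filter(guide, r), box_filter(src, r)
    corr_ip = box_filter(guide * src, r)
    var_i = box_filter(guide * guide, r) - mean_i * mean_i
    cov_ip = corr_ip - mean_i * mean_p
    a = cov_ip / (var_i + eps)             # per-pixel linear coefficients
    b = mean_p - a * mean_i
    return box_filter(a, r) * guide + box_filter(b, r)

guide = torch.rand(1, 1, 128, 128)          # guidance features
dem_up = torch.rand(1, 1, 128, 128)         # bicubic-upsampled coarse DEM
print(guided_filter(guide, dem_up).shape)   # torch.Size([1, 1, 128, 128])
```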

19 pages, 20616 KB  
Article
Toward Trustworthy On-Device AI: A Quantization-Robust Parameterized Hybrid Neural Filtering Framework
by Sangwoo Hong, Seung-Wook Kim, Seunghyun Moon and Seowon Ji
Mathematics 2025, 13(21), 3447; https://doi.org/10.3390/math13213447 - 29 Oct 2025
Viewed by 350
Abstract
Recent advances in deep learning have led to a proliferation of AI services for the general public. Consequently, constructing trustworthy AI systems that operate on personal devices has become a crucial challenge. While on-device processing is critical for privacy-preserving and latency-sensitive applications, conventional deep learning approaches often suffer from instability under quantization and high computational costs. Toward a trustworthy and efficient on-device solution for image processing, we present a hybrid neural filtering framework that combines the representational power of lightweight neural networks with the stability of classical filters. In our framework, the neural network predicts a low-dimensional parameter map that guides the filter’s behavior, effectively decoupling parameter estimation from the final image synthesis. This design enables a truly trustworthy AI system by operating entirely on-device, which eliminates the reliance on servers and significantly reduces computational cost. To ensure quantization robustness, we introduce a basis-decomposed parameterization, a design mathematically proven to bound reconstruction errors. Our network predicts a set of basis maps that are combined via fixed coefficients to form the final guidance. This architecture is intrinsically robust to quantization and supports runtime-adaptive precision without retraining. Experiments on depth map super-resolution validate our approach. Our framework demonstrates exceptional quantization robustness, exhibiting no performance degradation under 8-bit quantization, whereas a baseline suffers a significant 1.56 dB drop. Furthermore, our model’s significantly lower Mean Squared Error highlights its superior stability, providing a practical and mathematically grounded pathway toward trustworthy on-device AI.
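A hedged sketch of the basis-decomposed parameterization idea appears below: the network predicts several bounded basis maps, and the guidance map is a fixed linear combination of them, so the error a quantizer can inject through any one basis map is capped by its fixed coefficient. The tiny backbone, the sigmoid bound, and the dyadic coefficients are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class BasisParamPredictor(nn.Module):
    def __init__(self, in_ch=1, k=4):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, k, 3, padding=1))            # K basis maps
        # fixed (non-learned) combination coefficients, e.g., dyadic weights
        self.register_buffer('coeffs', 2.0 ** -torch.arange(1, k + 1).float())

    def forward(self, x):
        basis = torch.sigmoid(self.backbone(x))         # bounded basis maps
        # fixed linear combination -> single guidance parameter map
        return (basis * self.coeffs.view(1, -1, 1, 1)).sum(dim=1, keepdim=True)

lr_depth = torch.rand(1, 1, 64, 64)
guidance = BasisParamPredictor()(lr_depth)              # would feed a classical filter
print(guidance.shape, float(guidance.max()) < 1.0)      # torch.Size([1, 1, 64, 64]) True
```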

23 pages, 11997 KB  
Article
Deep Learning-Driven Automatic Segmentation of Weeds and Crops in UAV Imagery
by Jianghan Tao, Qian Qiao, Jian Song, Shan Sun, Yijia Chen, Qingyang Wu, Yongying Liu, Feng Xue, Hao Wu and Fan Zhao
Sensors 2025, 25(21), 6576; https://doi.org/10.3390/s25216576 - 25 Oct 2025
Viewed by 427
Abstract
Accurate segmentation of crops and weeds is essential for enhancing crop yield, optimizing herbicide usage, and mitigating environmental impacts. Traditional weed management practices, such as manual weeding or broad-spectrum herbicide application, are labor-intensive, environmentally harmful, and economically inefficient. In response, this study introduces a novel precision agriculture framework integrating Unmanned Aerial Vehicle (UAV)-based remote sensing with advanced deep learning techniques, combining Super-Resolution Reconstruction (SRR) and semantic segmentation. This study is the first to integrate UAV-based SRR and semantic segmentation for tobacco fields, systematically evaluate recent Transformer and Mamba-based models alongside traditional CNNs, and release an annotated dataset that not only ensures reproducibility but also provides a resource for the research community to develop and benchmark future models. Initially, SRR enhanced the resolution of low-quality UAV imagery, significantly improving detailed feature extraction. Subsequently, to identify the optimal segmentation model for the proposed framework, semantic segmentation models incorporating CNN, Transformer, and Mamba architectures were used to differentiate crops from weeds. Among the evaluated SRR methods, RCAN achieved the best reconstruction performance, reaching a Peak Signal-to-Noise Ratio (PSNR) of 24.98 dB and a Structural Similarity Index (SSIM) of 69.48%. In semantic segmentation, the ensemble model integrating Transformer (DPT with DINOv2) and Mamba-based architectures achieved the highest mean Intersection over Union (mIoU) of 90.75%, demonstrating superior robustness across diverse field conditions. Additionally, comprehensive experiments quantified the impact of magnification factors, Gaussian blur, and Gaussian noise, identifying an optimal magnification factor of 4× and demonstrating that the method is robust to common environmental disturbances at these settings. Overall, this research establishes an efficient, precise framework for crop cultivation management, offering valuable insights for precision agriculture and sustainable farming practices.
(This article belongs to the Special Issue Smart Sensing and Control for Autonomous Intelligent Unmanned Systems)
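For reference, the PSNR figures quoted above follow the standard definition; a generic implementation (not the authors' code) is sketched below for images scaled to [0, 1].

```python
import torch

def psnr(pred, target, max_val=1.0):
    # PSNR in dB: 10 * log10(MAX_I^2 / MSE)
    mse = torch.mean((pred - target) ** 2)
    return 10.0 * torch.log10(max_val ** 2 / mse)

sr = torch.rand(1, 3, 256, 256)
hr = (sr + 0.05 * torch.randn_like(sr)).clamp(0, 1)   # toy SR/HR pair
print(round(psnr(sr, hr).item(), 2), "dB")
```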

24 pages, 11432 KB  
Article
MRDAM: Satellite Cloud Image Super-Resolution via Multi-Scale Residual Deformable Attention Mechanism
by Liling Zhao, Zichen Liao and Quansen Sun
Remote Sens. 2025, 17(21), 3509; https://doi.org/10.3390/rs17213509 - 22 Oct 2025
Viewed by 412
Abstract
High-resolution meteorological satellite cloud imagery plays a crucial role in diagnosing and forecasting severe convective weather phenomena characterized by suddenness and locality, such as tropical cyclones. However, constrained by imaging principles and various internal and external interferences during satellite data acquisition, current satellite imagery often fails to meet the spatiotemporal resolution requirements for fine-scale monitoring of these weather systems. In particular, for real-time tracking of tropical cyclone genesis and evolution and for capturing detailed cloud structure variations within cyclone cores, existing spatial resolutions remain insufficient. Therefore, developing super-resolution techniques for meteorological satellite cloud imagery through software-based approaches holds significant application potential. This paper proposes a Multi-scale Residual Deformable Attention Model (MRDAM) based on Generative Adversarial Networks (GANs), specifically designed for satellite cloud image super-resolution given the morphological diversity and non-rigid deformation characteristics of clouds. The generator architecture incorporates two key components: a Multi-scale Feature Progressive Fusion Module (MFPFM), which enhances texture detail preservation and spectral consistency in reconstructed images, and a Deformable Attention Additive Fusion Module (DAAFM), which captures irregular cloud pattern features through adaptive spatial-attention mechanisms. Comparative experiments against multiple GAN-based super-resolution baselines demonstrate that MRDAM achieves superior performance in both objective evaluation metrics (PSNR/SSIM) and subjective visual quality, confirming its effectiveness for satellite cloud image super-resolution tasks.
(This article belongs to the Special Issue Neural Networks and Deep Learning for Satellite Image Processing)
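Deformable attention over non-rigid cloud shapes can be illustrated with torchvision's deformable convolution as the sampling primitive; the offset-prediction head and the additive identity path below are assumptions based on the abstract's description, not the paper's exact DAAFM.

```python
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class DeformableAdditiveFusion(nn.Module):
    def __init__(self, dim, k=3):
        super().__init__()
        self.offset = nn.Conv2d(dim, 2 * k * k, 3, padding=1)  # per-pixel sampling offsets
        self.deform = DeformConv2d(dim, dim, k, padding=k // 2)

    def forward(self, x):
        offsets = self.offset(x)             # irregular cloud shapes -> free-form sampling
        return x + self.deform(x, offsets)   # additive fusion with the identity path

cloud_feats = torch.randn(1, 32, 64, 64)
print(DeformableAdditiveFusion(32)(cloud_feats).shape)  # torch.Size([1, 32, 64, 64])
```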

30 pages, 2370 KB  
Review
Nanobiosensors for Single-Molecule Diagnostics: Toward Integration with Super-Resolution Imaging
by Seungah Lee, Sobia Rafiq and Seong Ho Kang
Biosensors 2025, 15(10), 705; https://doi.org/10.3390/bios15100705 - 21 Oct 2025
Viewed by 759
Abstract
Recent advances in nanotechnology and optical imaging have transformed molecular diagnostics, enabling the detection and analysis of individual biomolecules with unprecedented precision. Nanobiosensors provide ultrasensitive molecular detection, and super-resolution microscopy (SRM) exceeds the diffraction limit of conventional optics to achieve nanometer-scale resolution. Although their integration remains in its infancy, with only a handful of proof-of-concept studies reported, the convergence of nanobiosensors and SRM holds significant promise for next-generation diagnostics. In this review, we first outline nanobiosensor-based single-molecule detection strategies and highlight representative implementations. These include plasmonic–SRM hybrids, electrochemical–optical correlatives, and SRM-enabled immunoassays, with a focus on their applications in oncology, infectious diseases, and neurodegenerative disorders. Then, we discuss emerging studies at the interface of nanobiosensors and SRM, including nanostructure-assisted SRM. Despite not being true biosensing approaches, these studies provide valuable insights into how engineered nanomaterials can improve imaging performance. Finally, we evaluate current challenges, including reproducibility, multiplexing, and clinical translation, and outline future opportunities, such as the development of photostable probes, artificial intelligence-assisted image reconstruction, microfluidic integration, and regulatory strategies. This review highlights the synergistic potential of nanobiosensors and SRM, outlining a roadmap toward clinically translatable next-generation single-molecule diagnostic platforms.

16 pages, 21685 KB  
Article
MambaUSR: Mamba and Frequency Interaction Network for Underwater Image Super-Resolution
by Guangze Shen, Jingxuan Zhang and Zhe Chen
Appl. Sci. 2025, 15(20), 11263; https://doi.org/10.3390/app152011263 - 21 Oct 2025
Viewed by 352
Abstract
In recent years, underwater image super-resolution (SR) reconstruction has increasingly become a core focus of underwater machine vision. Light scattering and refraction in underwater environments result in images with blurred details, low contrast, color distortions, and multiple visual artifacts. Despite the promising results achieved by deep learning in underwater SR tasks, global and frequency-domain information remains poorly addressed. In this study, we introduce a novel underwater SR method based on the Vision State-Space Model, dubbed MambaUSR. At its core, we design the Frequency State-Space Module (FSSM), which integrates two complementary components: the Visual State-Space Module (VSSM) and the Frequency-Assisted Enhancement Module (FAEM). The VSSM models long-range dependencies to enhance global structural consistency and contrast, while the FAEM employs the Fast Fourier Transform combined with channel attention to extract high-frequency details, thereby improving the fidelity and naturalness of reconstructed images. Comprehensive evaluations on benchmark datasets confirm that MambaUSR delivers superior performance in underwater image reconstruction.
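A minimal sketch of a frequency-assisted branch in this spirit is shown below: the feature map is transformed with a real FFT, processed in the spectral domain, inverted, and reweighted by channel attention. The exact layer arrangement is an illustrative assumption.

```python
import torch
import torch.nn as nn

class FAEM(nn.Module):
    def __init__(self, dim, reduction=4):
        super().__init__()
        self.freq = nn.Conv2d(2 * dim, 2 * dim, 1)      # acts on stacked (real, imag)
        self.attn = nn.Sequential(                       # channel attention
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(dim, dim // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(dim // reduction, dim, 1), nn.Sigmoid())

    def forward(self, x):
        b, c, h, w = x.shape
        spec = torch.fft.rfft2(x, norm='ortho')          # (B, C, H, W//2+1) complex
        z = torch.cat([spec.real, spec.imag], dim=1)
        z = self.freq(z)                                 # spectral-domain mixing
        spec = torch.complex(z[:, :c], z[:, c:])
        hf = torch.fft.irfft2(spec, s=(h, w), norm='ortho')
        return x + hf * self.attn(hf)                    # attended high-frequency residual

x = torch.randn(1, 32, 64, 64)
print(FAEM(32)(x).shape)  # torch.Size([1, 32, 64, 64])
```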

12 pages, 6063 KB  
Article
Prex-NetII: Attention-Based Back-Projection Network for Light Field Reconstruction
by Dong-Myung Kim and Jae-Won Suh
Electronics 2025, 14(20), 4117; https://doi.org/10.3390/electronics14204117 - 21 Oct 2025
Viewed by 251
Abstract
We propose an attention-based back-projection network that enhances light field reconstruction quality by modeling inter-view dependencies. The network uses pixel shuffle to efficiently extract initial features. Spatial attention focuses on important regions while capturing inter-view dependencies. Skip connections in the refinement network improve stability and reconstruction performance. In addition, channel attention within the projection blocks enhances structural representation across views. The proposed method reconstructs high-quality light field images not only in general scenes but also in complex scenes containing occlusions and reflections. The experimental results show that the proposed method outperforms existing approaches.
(This article belongs to the Special Issue Image Processing Based on Convolution Neural Network: 2nd Edition)
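The back-projection idea at the heart of such networks can be sketched compactly: upsample, project back down, and use the low-resolution reconstruction error to correct the high-resolution estimate. The kernel/stride choices below follow common ×2 settings and are illustrative; the attention components described above are omitted.

```python
import torch
import torch.nn as nn

class UpProjection(nn.Module):
    def __init__(self, dim, scale=2):
        super().__init__()
        k, s, p = 6, scale, 2                      # common configuration for x2
        self.up1 = nn.ConvTranspose2d(dim, dim, k, s, p)
        self.down = nn.Conv2d(dim, dim, k, s, p)
        self.up2 = nn.ConvTranspose2d(dim, dim, k, s, p)

    def forward(self, lr):
        h0 = self.up1(lr)                          # first HR estimate
        e = self.down(h0) - lr                     # LR-space reconstruction error
        return h0 + self.up2(e)                    # correct HR with projected error

views = torch.randn(1, 16, 32, 32)                 # features of one light-field view
print(UpProjection(16)(views).shape)               # torch.Size([1, 16, 64, 64])
```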

26 pages, 28516 KB  
Article
Geology-Topography Constrained Super-Resolution of Geochemical Maps via Enhanced U-Net
by Yao Pei, Yuanfang Wang, Xiaolong Li, Tie Gao, Shengfa Wang and Xiaoshan Zhou
Minerals 2025, 15(10), 1088; https://doi.org/10.3390/min15101088 - 19 Oct 2025
Viewed by 355
Abstract
Geochemical maps are essential visualization tools for studying the distribution patterns of elements on the Earth’s surface. They provide critical insights into geological structure, mineralization processes, and environmental evolution. Traditional interpolation methods often fail to adequately reconstruct high-frequency details in geochemical maps with low sampling density. This study proposes a super-resolution (SR) reconstruction method for geochemical maps based on an enhanced U-Net architecture, validated in the Gouli area of Qinghai Province. By integrating residual blocks, multi-scale neural networks, and constraints from topographic features (elevation, slope, aspect) and geological map embeddings, our method enhances the resolution of stream sediment geochemical maps from 1:50,000 to 1:25,000 scale. Experimental results demonstrate that the proposed method outperforms SRCNN, VDSR, and standard U-Net models in both peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM). Specifically, with all constraints incorporated, the method achieves maximum and mean PSNR values of 38.486 dB and 25.334 dB, respectively, and maximum and mean SSIM values of 0.968 and 0.817. The reconstructed high-resolution (HR) geochemical maps exhibit superior detail clarity and maintain strong spatial correlation with the original HR data. The results further show that the method can effectively learn multi-scale geochemical patterns and detect subtle anomalies missed in low-resolution (LR) maps. Moreover, the reconstructed HR geochemical maps align better with the Ag, Cu, and Pb anomalies in known mineralization zones (the Maixiulongwa and Sanchakou areas), thereby providing strong support for precise mineral exploration.
(This article belongs to the Special Issue Selected Papers from the 7th National Youth Geological Congress)
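One plausible way the topographic and geological constraints enter such a model is as extra input channels; the sketch below stacks elevation, slope, and aspect rasters plus an embedded geological class map with the low-resolution geochemical map. The embedding size and stacking strategy are assumptions, and the enhanced U-Net itself is not reproduced here.

```python
import torch
import torch.nn as nn

class ConstrainedInput(nn.Module):
    def __init__(self, n_geo_classes=12, embed_dim=4, out_ch=32):
        super().__init__()
        self.geo_embed = nn.Embedding(n_geo_classes, embed_dim)
        # 1 geochem + 3 topographic + embed_dim geological channels
        self.stem = nn.Conv2d(1 + 3 + embed_dim, out_ch, 3, padding=1)

    def forward(self, geochem_lr, topo, geo_ids):
        # geochem_lr: (B,1,H,W); topo: (B,3,H,W); geo_ids: (B,H,W) int class map
        geo = self.geo_embed(geo_ids).permute(0, 3, 1, 2)   # (B, embed_dim, H, W)
        return self.stem(torch.cat([geochem_lr, topo, geo], dim=1))

x = ConstrainedInput()(torch.rand(1, 1, 64, 64),
                       torch.rand(1, 3, 64, 64),
                       torch.randint(0, 12, (1, 64, 64)))
print(x.shape)  # torch.Size([1, 32, 64, 64])
```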
