Search Results (17)

Search Parameters:
Keywords = deblurring and denoising performance

37 pages, 20758 KB  
Review
A Comprehensive Review of Image Restoration Research Based on Diffusion Models
by Jun Li, Heran Wang, Yingjie Li and Haochuan Zhang
Mathematics 2025, 13(13), 2079; https://doi.org/10.3390/math13132079 - 24 Jun 2025
Viewed by 5253
Abstract
Image restoration is an indispensable and challenging task in computer vision, aiming to enhance the quality of images corrupted by various forms of degradation. Diffusion models have achieved remarkable progress in AIGC (Artificial Intelligence Generated Content) image generation, and numerous studies have explored their application to image restoration, achieving performance that surpasses other methods. This paper provides a comprehensive overview of diffusion models for image restoration, starting with an introduction to their background. It summarizes recent theories and research on applying diffusion models to image restoration, elaborating on six commonly used methods and their unified paradigm. Based on these six categories, the paper classifies restoration tasks into two main areas: image super-resolution reconstruction and frequency-selective image restoration, where the latter includes image deblurring, inpainting, deraining, desnowing, dehazing, denoising, and low-light enhancement. For each area, the paper delves into the technical principles and modeling strategies and analyzes the specific characteristics and contributions of the diffusion models employed in each application category. It also summarizes commonly used datasets and evaluation metrics to facilitate comprehensive evaluation of existing methods. Finally, it identifies the limitations of current research, outlines open challenges, and offers perspectives on future applications.
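For orientation, the forward noising and reverse denoising steps that underpin diffusion-based restoration can be written in a few lines; the sketch below is a generic DDPM-style illustration in NumPy (the linear schedule and the oracle noise "predictor" are illustrative assumptions, not any surveyed method's implementation).

```python
import numpy as np

def make_schedule(T=1000, beta_start=1e-4, beta_end=0.02):
    """Linear variance schedule and the cumulative products used by DDPM-style models."""
    betas = np.linspace(beta_start, beta_end, T)
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)
    return betas, alphas, alpha_bars

def q_sample(x0, t, alpha_bars, rng):
    """Forward process: noise a clean image x0 directly to timestep t in closed form."""
    eps = rng.standard_normal(x0.shape)
    xt = np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps
    return xt, eps

def p_sample_step(xt, t, eps_pred, betas, alphas, alpha_bars, rng):
    """One reverse (denoising) step, given a prediction of the injected noise."""
    mean = (xt - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps_pred) / np.sqrt(alphas[t])
    if t == 0:
        return mean
    return mean + np.sqrt(betas[t]) * rng.standard_normal(xt.shape)

rng = np.random.default_rng(0)
betas, alphas, alpha_bars = make_schedule()
x0 = rng.random((64, 64))                 # stand-in for a clean image
xt, eps = q_sample(x0, t=500, alpha_bars=alpha_bars, rng=rng)
x_prev = p_sample_step(xt, 500, eps, betas, alphas, alpha_bars, rng)  # oracle noise as a placeholder predictor
```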

37 pages, 4940 KB  
Review
Graph Convolutional Network for Image Restoration: A Survey
by Tongtong Cheng, Tingting Bi, Wen Ji and Chunwei Tian
Mathematics 2024, 12(13), 2020; https://doi.org/10.3390/math12132020 - 28 Jun 2024
Cited by 6 | Viewed by 3525
Abstract
Image restoration technology is a crucial field in image processing and is extensively utilized across various domains. Recently, with advances in graph convolutional network (GCN) technology, GCN-based methods have increasingly been applied to image restoration, yielding impressive results. Despite these advances, comprehensive research consolidating the various image denoising techniques is still lacking. In this paper, we conduct a comparative study of image restoration techniques using GCNs. We begin by categorizing GCN methods into three primary application areas: image denoising, image super-resolution, and image deblurring. We then delve into the motivations and principles underlying the various deep learning approaches. Subsequently, we provide quantitative and qualitative comparisons of state-of-the-art methods on public denoising datasets. Finally, we discuss potential challenges and future directions, aiming to pave the way for further advances in this domain. Our key finding is that GCN-based methods excel at capturing long-range dependencies and improving image quality across different restoration tasks, highlighting their potential for future research and applications.
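For readers unfamiliar with GCNs, the basic propagation rule the surveyed methods build on is a normalised neighbourhood aggregation followed by a linear map; a minimal NumPy sketch (the toy adjacency and features are assumptions for illustration):

```python
import numpy as np

def gcn_layer(H, A, W):
    """One graph convolution: add self-loops, symmetrically normalise A, aggregate, project, ReLU."""
    A_hat = A + np.eye(A.shape[0])                   # self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)

# toy example: 5 patch nodes with 8-dimensional features
rng = np.random.default_rng(0)
A = (rng.random((5, 5)) > 0.6).astype(float)
A = np.maximum(A, A.T)                               # symmetric adjacency
H = rng.standard_normal((5, 8))
W = rng.standard_normal((8, 4))
print(gcn_layer(H, A, W).shape)                      # (5, 4)
```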

17 pages, 7142 KB  
Article
Performance Evaluation of L1-Norm-Based Blind Deconvolution after Noise Reduction with Non-Subsampled Contourlet Transform in Light Microscopy Images
by Kyuseok Kim and Ji-Youn Kim
Appl. Sci. 2024, 14(5), 1913; https://doi.org/10.3390/app14051913 - 26 Feb 2024
Cited by 3 | Viewed by 1662
Abstract
Noise and blurring in light microscope images are representative factors that hinder the accurate identification of cellular and subcellular structures in biological research. In this study, a method combining l1-norm-based blind deconvolution with noise reduction by the non-subsampled contourlet transform (NSCT) was designed and applied to light microscope images to analyze its feasibility. The designed NSCT-based algorithm first separates the low- and high-frequency components; the restored microscope image was then compared with the deblurred-only and denoised-only images. In both the simulations and experiments, the average coefficient of variation (COV) of the image produced by the proposed NSCT-based algorithm was similar to that of the denoised image and significantly better than that of the degraded image. In particular, the restored image in the experiment improved the COV by approximately 2.52 times compared with the deblurred image, and the proposed NSCT-based algorithm achieved the best peak signal-to-noise ratio and edge preservation index in the simulation. In conclusion, the proposed algorithm was successfully modeled, and its applicability to light microscope images was demonstrated using various quantitative evaluation indices.
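The coefficient of variation used as an evaluation index above is simply the standard deviation divided by the mean over a region; a small sketch of how such a comparison might be computed (the synthetic images are placeholders):

```python
import numpy as np

def coefficient_of_variation(image, mask=None):
    """COV = standard deviation / mean over a (masked) region; lower typically indicates less noise."""
    vals = image[mask] if mask is not None else image.ravel()
    return vals.std() / vals.mean()

rng = np.random.default_rng(0)
clean = np.full((128, 128), 100.0)                   # uniform region, the ideal case
noisy = clean + rng.normal(0.0, 10.0, clean.shape)   # degraded version
print(coefficient_of_variation(clean), coefficient_of_variation(noisy))
```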

30 pages, 32451 KB  
Article
A Framework for Reconstructing Super-Resolution Magnetic Resonance Images from Sparse Raw Data Using Multilevel Generative Methods
by Krzysztof Malczewski
Appl. Sci. 2024, 14(4), 1351; https://doi.org/10.3390/app14041351 - 6 Feb 2024
Cited by 4 | Viewed by 2114
Abstract
Super-resolution magnetic resonance (MR) scans give anatomical data for quantitative analysis and treatment. The use of convolutional neural networks (CNNs) in image processing and deep learning research has led to super-resolution reconstruction methods based on deep learning. This study offers a G-guided generative multilevel network for training 3D neural networks with sparsely sampled MR input data. The author suggests using super-resolution reconstruction (SRR) and modified sparse sampling to address these issues. Image-based Wasserstein Generative Adversarial Networks (WGANs) retain k-space data sparsity while storing and representing image-space knowledge. The method takes null-valued k-space data and fills the gaps in the dataset to preserve data integrity. The proposed reconstruction method processes raw data samples and is able to perform subspace synchronization, deblurring, denoising, motion estimation, and super-resolution image production. The suggested algorithm uses different preprocessing methods to deblur and denoise datasets, and preliminary trials contextualize and speed up assessments. Results indicate that the reconstructed images have better high-frequency features than sophisticated multi-frame techniques, supported by improved PSNR, MAE, and IEM measurements. A k-space correction block improves the GAN's refinement learning in the suggested method; this block improves the network's ability to discard unnecessary data, speeding up reconstruction. The k-space correction module can limit the generator's output to the critical lines, allowing only the missing lines to be reconstructed, which improves convergence and accelerates the rebuilding. This study shows that the strategy reduces aliasing artifacts better than contemporaneous and noniterative methods.
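The k-space correction block described above is a form of data consistency; a generic sketch of that idea (not the paper's implementation), which keeps the generator's prediction only where no measurement exists and copies measured lines through unchanged:

```python
import numpy as np

def kspace_correction(x_gen, kspace_meas, mask):
    """Enforce data consistency: measured k-space lines replace the generator's values there."""
    k_gen = np.fft.fft2(x_gen)
    k_corr = np.where(mask, kspace_meas, k_gen)      # keep prediction only off the sampled lines
    return np.real(np.fft.ifft2(k_corr))

rng = np.random.default_rng(0)
x_true = rng.random((64, 64))
mask = np.zeros((64, 64), dtype=bool)
mask[::4, :] = True                                  # keep every 4th phase-encode line (toy undersampling)
kspace_meas = np.fft.fft2(x_true) * mask
x_gen = rng.random((64, 64))                         # stand-in for the GAN generator output
x_dc = kspace_correction(x_gen, kspace_meas, mask)
```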

13 pages, 6702 KB  
Communication
Image Deblurring Based on Convex Non-Convex Sparse Regularization and Plug-and-Play Algorithm
by Yi Wang, Yating Xu, Tianjian Li, Tao Zhang and Jian Zou
Algorithms 2023, 16(12), 574; https://doi.org/10.3390/a16120574 - 18 Dec 2023
Cited by 2 | Viewed by 2293
Abstract
Image deblurring based on sparse regularization has garnered significant attention, but certain limitations still need to be addressed. For instance, convex sparse regularization tends to exhibit biased estimation, which can adversely impact deblurring performance, while non-convex sparse regularization poses challenges for the solution techniques. Furthermore, the performance of traditional iterative algorithms also needs to be improved. In this paper, we propose an image deblurring method based on convex non-convex (CNC) sparse regularization and a plug-and-play (PnP) algorithm. The CNC sparse regularization not only mitigates estimation bias but also guarantees the overall convexity of the image deblurring model. The PnP algorithm is a learning-based optimization algorithm that surpasses traditional optimization algorithms in efficiency and performance by using a state-of-the-art denoiser in place of the proximal operator. Numerical experiments verify the performance of the proposed algorithm for image deblurring.
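A minimal sketch of the plug-and-play idea the abstract refers to, here as an ISTA-style loop in which a generic smoother stands in for the learned denoiser and a Gaussian blur stands in for the blur operator; both are assumptions for illustration, not the paper's algorithm:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def blur(x, sigma=2.0):
    """Stand-in forward operator A: symmetric Gaussian blur, so A^T = A."""
    return gaussian_filter(x, sigma)

def pnp_ista_deblur(y, denoiser, step=1.0, iters=30):
    """Plug-and-play ISTA: gradient step on the data term, then a denoiser replaces the proximal operator."""
    x = y.copy()
    for _ in range(iters):
        grad = blur(blur(x) - y)                     # A^T (A x - y)
        x = denoiser(x - step * grad)
    return x

rng = np.random.default_rng(0)
x_true = rng.random((64, 64))
y = blur(x_true) + 0.01 * rng.standard_normal((64, 64))
x_hat = pnp_ista_deblur(y, denoiser=lambda z: gaussian_filter(z, 0.5))  # weak smoother as a stand-in denoiser
```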

15 pages, 8102 KB  
Article
Ambiguity in Solving Imaging Inverse Problems with Deep-Learning-Based Operators
by Davide Evangelista, Elena Morotti, Elena Loli Piccolomini and James Nagy
J. Imaging 2023, 9(7), 133; https://doi.org/10.3390/jimaging9070133 - 30 Jun 2023
Cited by 5 | Viewed by 2580
Abstract
In recent years, large convolutional neural networks have been widely used as tools for image deblurring because of their ability to restore images very precisely. It is well known that image deblurring is mathematically modeled as an ill-posed inverse problem whose solution is difficult to approximate when noise affects the data. Indeed, one limitation of neural networks for deblurring is their sensitivity to noise and other perturbations, which can lead to instability and poor reconstructions. In addition, networks do not necessarily take into account the numerical formulation of the underlying imaging problem when trained end-to-end. In this paper, we propose strategies to improve the stability of deep-learning-based image deblurring without losing too much accuracy. First, we suggest a very small neural architecture, which reduces the execution time for training, satisfying a green AI need, and does not severely amplify noise in the computed image. Second, we introduce a unified framework in which a pre-processing step balances the lack of stability of the subsequent neural-network-based step. Two different pre-processors are presented: the former implements a strong parameter-free denoiser, and the latter is a variational-model-based regularized formulation of the latent imaging problem. The framework is also formally characterized by mathematical analysis. Numerical experiments verify the accuracy and stability of the proposed approaches for image deblurring when unknown or unquantified noise is present; the results confirm that they improve network stability with respect to noise. In particular, the model-based framework represents the most reliable trade-off between visual precision and robustness.
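A schematic sketch of the two-step framework described above (a pre-processing denoiser followed by a learned deblurrer), together with a simple empirical stability check; the median filter and the identity "network" are placeholders, not the paper's components:

```python
import numpy as np
from scipy.ndimage import median_filter

def stabilised_deblur(y, network, pre_denoise=True):
    """Pre-process the degraded input before the learned deblurring step."""
    if pre_denoise:
        y = median_filter(y, size=3)                 # parameter-free denoiser stand-in
    return network(y)

def stability_ratio(restore, y, noise_std=0.01, seed=0):
    """Empirical local Lipschitz-style ratio: output change divided by input perturbation."""
    rng = np.random.default_rng(seed)
    e = noise_std * rng.standard_normal(y.shape)
    return np.linalg.norm(restore(y + e) - restore(y)) / np.linalg.norm(e)

rng = np.random.default_rng(0)
y = rng.random((64, 64))
net = lambda z: z                                    # placeholder for a trained deblurring network
print(stability_ratio(lambda v: stabilised_deblur(v, net), y))
```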
(This article belongs to the Topic Computer Vision and Image Processing)

21 pages, 11083 KB  
Article
A Deep-Learning-Based Approach for Aircraft Engine Defect Detection
by Anurag Upadhyay, Jun Li, Steve King and Sri Addepalli
Machines 2023, 11(2), 192; https://doi.org/10.3390/machines11020192 - 1 Feb 2023
Cited by 24 | Viewed by 7352
Abstract
Borescope inspection is a labour-intensive process used to find defects in aircraft engines that contain areas not visible during a general visual inspection. The outcome of the process largely depends on the judgment of the maintenance professionals who perform it. This research develops a novel deep learning framework for automated borescope inspection. In the framework, a customised U-Net architecture is developed to detect defects on high-pressure compressor blades. Since motion blur is introduced in some images while the blades rotate during the inspection, a hybrid motion-deblurring method for image sharpening and denoising, based on classic computer vision techniques combined with a customised generative adversarial network (GAN) model, is applied to remove this effect. The framework also addresses data imbalance, the small size of the defects and limited data availability, in part by testing different loss functions and by generating synthetic images with the customised GAN model. The implemented deep learning framework achieves precisions and recalls of over 90%, and the hybrid motion-deblurring model yields a 10× improvement in image quality. However, the framework achieves only modest success with particular loss functions for very small defects. Future work will focus on detecting very small defects and on extending the deep learning framework to general borescope inspection.
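The classic-computer-vision half of a hybrid motion-deblurring step is often a frequency-domain deconvolution; a minimal Wiener-filter sketch under an assumed known motion kernel (the paper's exact hybrid method is not reproduced here):

```python
import numpy as np

def motion_kernel(length=9, size=64):
    """Simple horizontal motion-blur kernel embedded in a size x size array."""
    k = np.zeros((size, size))
    k[0, :length] = 1.0 / length
    return k

def wiener_deblur(y, kernel, nsr=0.01):
    """Wiener deconvolution: divide in the Fourier domain with a noise-to-signal regulariser."""
    K = np.fft.fft2(kernel, s=y.shape)
    H = np.conj(K) / (np.abs(K) ** 2 + nsr)
    return np.real(np.fft.ifft2(np.fft.fft2(y) * H))

rng = np.random.default_rng(0)
x = rng.random((64, 64))
k = motion_kernel()
y = np.real(np.fft.ifft2(np.fft.fft2(x) * np.fft.fft2(k, s=x.shape)))   # synthetic motion blur
x_hat = wiener_deblur(y, k)
```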

11 pages, 2958 KB  
Article
One-Step Enhancer: Deblurring and Denoising of OCT Images
by Shunlei Li, Muhammad Adeel Azam, Ajay Gunalan and Leonardo S. Mattos
Appl. Sci. 2022, 12(19), 10092; https://doi.org/10.3390/app121910092 - 7 Oct 2022
Cited by 6 | Viewed by 2844
Abstract
Optical coherence tomography (OCT) is a rapidly evolving imaging technology that combines a broadband, low-coherence light source with interferometry and signal processing to produce high-resolution images of living tissues. However, the speckle noise introduced by low-coherence interferometry and the blur from device motion significantly degrade the quality of OCT images. Convolutional neural networks (CNNs) are a potential solution for dealing with these issues and enhancing OCT image quality. However, training such networks with traditional supervised learning methods is impractical due to the lack of clean ground truth images. Consequently, this research proposes an unsupervised learning method for OCT image enhancement, termed the one-step enhancer (OSE), which performs denoising and deblurring in a single step using a generative adversarial network (GAN). Encoders disentangle the raw images into a content domain, a blur domain and a noise domain to extract features, from which the generator produces clean images. A KL divergence loss is employed to regularize the distribution of the retrieved blur characteristics, while noise patches are enforced to promote more accurate disentanglement. Used jointly, these strategies considerably increase the effectiveness of GAN training for OCT image enhancement. Both quantitative and qualitative findings demonstrate that the proposed method is effective for OCT image denoising and deblurring. These results are significant not only for providing an enhanced visual experience for clinicians but also for supplying good-quality data for OCT-guided operations, for example in the development of robust, reliable and accurate autonomous OCT-guided surgical robotic systems.
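The KL divergence loss used to regularise the extracted blur features is commonly the closed-form divergence between a diagonal Gaussian and a standard normal; a small sketch under that common assumption (the example feature vectors are placeholders):

```python
import numpy as np

def kl_to_standard_normal(mu, logvar):
    """Closed-form KL( N(mu, diag(exp(logvar))) || N(0, I) ), summed over the latent dimensions."""
    return 0.5 * np.sum(np.exp(logvar) + mu ** 2 - 1.0 - logvar)

mu = np.array([0.3, -0.1, 0.0])          # stand-in for encoded blur features
logvar = np.array([-0.5, 0.2, 0.0])      # stand-in log-variances
print(kl_to_standard_normal(mu, logvar))
```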
(This article belongs to the Special Issue Deep Neural Networks in Medical Imaging)

21 pages, 9620 KB  
Article
Blind Restoration of Atmospheric Turbulence-Degraded Images Based on Curriculum Learning
by Jie Shu, Chunzhi Xie and Zhisheng Gao
Remote Sens. 2022, 14(19), 4797; https://doi.org/10.3390/rs14194797 - 26 Sep 2022
Cited by 11 | Viewed by 3017
Abstract
Atmospheric turbulence-degraded images in typical practical application scenarios are always disturbed by severe additive noise, which corrupts the prior assumptions of most baseline deconvolution methods. Existing methods either ignore the additive noise term during optimization or perform denoising and deblurring completely independently. However, their performance is limited because they do not exploit the prior that multiple degradation factors are tightly coupled. This paper proposes a Noise Suppression-based Restoration Network (NSRN) for turbulence-degraded images, in which a noise suppression module learns low-rank subspaces from turbulence-degraded images, an attention-based asymmetric U-Net module performs blurred-image deconvolution, and a Fine Deep Back-Projection (FDBP) module fuses multi-level features to reconstruct a sharp image. Furthermore, an improved curriculum learning strategy is proposed that trains the network gradually through a local-to-global, easy-to-difficult learning scheme to achieve superior performance. With NSRN, we achieve state-of-the-art performance, with a PSNR of 30.1 dB and an SSIM of 0.9 on the simulated dataset, and better visual results on real images.
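One simple way to realise an easy-to-difficult curriculum of this kind is to ramp the severity of the training degradations over epochs; the linear schedule below is an illustrative assumption rather than the paper's exact strategy:

```python
def corruption_level(epoch, total_epochs, level_min=0.1, level_max=1.0, warmup_frac=0.6):
    """Linearly ramp the severity of training degradations from easy to difficult."""
    frac = min(1.0, epoch / (warmup_frac * total_epochs))
    return level_min + frac * (level_max - level_min)

# severity sampled every 10 epochs of a 100-epoch run
print([round(corruption_level(e, 100), 2) for e in range(0, 100, 10)])
```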

23 pages, 3309 KB  
Article
An Unsupervised Weight Map Generative Network for Pixel-Level Combination of Image Denoisers
by Lijia Yu, Jie Luo, Shaoping Xu, Xiaojun Chen and Nan Xiao
Appl. Sci. 2022, 12(12), 6227; https://doi.org/10.3390/app12126227 - 19 Jun 2022
Cited by 5 | Viewed by 2519
Abstract
Image denoising is a classic but still important issue in image processing, as the denoising quality has a significant impact on subsequent processing results such as target recognition and edge detection. In the past few decades, various denoising methods, both model-based and learning-based, have been proposed and have achieved promising results. However, no stand-alone method consistently outperforms the others across different complex imaging situations. Building on the complementary strengths of model-based and learning-based methods, in this study we design a pixel-level image combination strategy that leverages their respective advantages for the denoised images (referred to as initial denoised images) generated by individual denoisers. The key to this combination strategy is to generate a corresponding weight map of the same size for each initial denoised image. To this end, we introduce an unsupervised weight map generative network that adjusts its parameters to generate a weight map for each initial denoised image under the guidance of our designed loss function. Using the weight maps, we fully utilize the internal and external information of the various denoising methods at a finer granularity, ensuring that the final combined image is close to optimal. To the best of our knowledge, this method of combining denoised images at the pixel level is the first proposed in the image combination field. Extensive experiments demonstrate that the proposed method shows superior performance, both quantitatively and visually, as well as stronger generalization. Specifically, in comparison with the stand-alone denoising methods FFDNet and BM3D, our method improves the average peak signal-to-noise ratio (PSNR) by 0.18 dB to 0.83 dB on two benchmarking datasets across different noise levels. Its denoising performance also exceeds that of other competitive stand-alone and combination methods, surpassing the second-best method by 0.03 dB to 1.42 dB. Since the image combination strategy is generic, it can be used not only for image denoising but can also be extended to low-light image enhancement, image deblurring or image super-resolution.
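The pixel-level combination itself reduces to a per-pixel softmax over the generated weight maps followed by a weighted sum of the initial denoised images; a minimal sketch with random stand-ins for the denoiser outputs and the network's weight maps:

```python
import numpy as np

def combine(denoised_stack, weight_logits):
    """Per-pixel softmax over the weight maps, then a weighted sum of the initial denoised images."""
    w = np.exp(weight_logits - weight_logits.max(axis=0, keepdims=True))
    w = w / w.sum(axis=0, keepdims=True)
    return (w * denoised_stack).sum(axis=0)

rng = np.random.default_rng(0)
stack = rng.random((2, 64, 64))                      # e.g. outputs of two denoisers such as FFDNet and BM3D
logits = rng.standard_normal((2, 64, 64))            # stand-in for the network's generated weight maps
fused = combine(stack, logits)
print(fused.shape)                                    # (64, 64)
```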
(This article belongs to the Section Computing and Artificial Intelligence)

27 pages, 34906 KB  
Article
Fine-Tuned Siamese Network with Modified Enhanced Super-Resolution GAN Plus Based on Low-Quality Chest X-ray Images for COVID-19 Identification
by Grace Ugochi Nneji, Jingye Cai, Happy Nkanta Monday, Md Altab Hossin, Saifun Nahar, Goodness Temofe Mgbejime and Jianhua Deng
Diagnostics 2022, 12(3), 717; https://doi.org/10.3390/diagnostics12030717 - 15 Mar 2022
Cited by 14 | Viewed by 5076
Abstract
Coronavirus disease has spread rapidly around the globe since early January 2020. With millions of deaths, it is essential to use automated systems to aid clinical diagnosis and reduce the time needed for image analysis. This article presents a generative adversarial network (GAN)-based deep learning application for precisely recovering high-resolution (HR) CXR images from low-resolution (LR) counterparts for COVID-19 identification. Using the building blocks of a GAN, we introduce a modified enhanced super-resolution generative adversarial network plus (MESRGAN+) that implements a nonlinear mapping from noise-contaminated low-resolution input images to deblurred and denoised HR images. In contrast to the current trend toward greater network complexity and computational cost, we incorporate an enhanced, fine-tuned VGG19 twin network with a wavelet pooling strategy to extract distinct features for COVID-19 identification. We demonstrate the proposed model on a publicly available dataset of 11,920 chest X-ray images, comprising 2980 COVID-19 CXR cases alongside healthy, viral and bacterial cases. The proposed model performs well on both binary and four-class classification, achieving an accuracy of 98.8%, precision of 98.6%, sensitivity of 97.5%, specificity of 98.9%, an F1 score of 97.8% and an ROC AUC of 98.8% for the multi-class task, while, for the binary task, it achieves an accuracy of 99.7%, precision of 98.9%, sensitivity of 98.7%, specificity of 99.3%, an F1 score of 98.2% and an ROC AUC of 99.7%. According to the experimental results, our method obtains state-of-the-art (SOTA) performance, which is helpful for COVID-19 screening. This conceptual framework is proposed to play an influential role in addressing the issues surrounding COVID-19 examination and other diseases.
(This article belongs to the Section Medical Imaging and Theranostics)

14 pages, 2400 KB  
Article
Effect of Denoising and Deblurring 18F-Fluorodeoxyglucose Positron Emission Tomography Images on a Deep Learning Model’s Classification Performance for Alzheimer’s Disease
by Min-Hee Lee, Chang-Soo Yun, Kyuseok Kim and Youngjin Lee
Metabolites 2022, 12(3), 231; https://doi.org/10.3390/metabo12030231 - 7 Mar 2022
Cited by 5 | Viewed by 3189
Abstract
Alzheimer’s disease (AD) is the most common progressive neurodegenerative disease. 18F-fluorodeoxyglucose positron emission tomography (18F-FDG PET) is widely used to predict AD with deep learning models; however, the effects of noise and blurring on 18F-FDG PET images have not been considered. The performance of a classification model trained with a 3D deep convolutional neural network using raw, deblurred (by the fast total variation deblurring method), or denoised (by the median modified Wiener filter) 18F-FDG PET images, with or without cropping around the limbic system area, was investigated. The classification model trained using denoised whole-brain 18F-FDG PET images achieved higher classification performance (0.75/0.65/0.79/0.39 for sensitivity/specificity/F1-score/Matthews correlation coefficient (MCC), respectively) than the models trained with raw or deblurred 18F-FDG PET images. The model trained using cropped raw 18F-FDG PET images achieved higher performance (0.78/0.63/0.81/0.40 for sensitivity/specificity/F1-score/MCC) than its whole-brain counterpart (0.72/0.32/0.71/0.10, respectively). Combining deblurring and cropping (0.89/0.67/0.88/0.57 for sensitivity/specificity/F1-score/MCC) was the most helpful for improving performance. For this model, the class activation map indicated that the right middle frontal, middle temporal, insula, and hippocampus areas were the most predictive of AD. Our findings demonstrate that 18F-FDG PET image preprocessing and cropping improve the explainability and potential clinical applicability of deep learning models.
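For reference, the reported sensitivity, specificity, F1-score and Matthews correlation coefficient are all simple functions of the binary confusion matrix; a small helper sketch with purely illustrative counts:

```python
import math

def binary_metrics(tp, tn, fp, fn):
    """Sensitivity, specificity, F1-score and MCC from a binary confusion matrix."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    prec = tp / (tp + fp)
    f1 = 2 * prec * sens / (prec + sens)
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / den if den else 0.0
    return sens, spec, f1, mcc

print(binary_metrics(tp=40, tn=35, fp=15, fn=10))    # illustrative counts only
```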
(This article belongs to the Special Issue Deep Learning for Metabolomics)

17 pages, 33299 KB  
Article
A Marine Organism Detection Framework Based on the Joint Optimization of Image Enhancement and Object Detection
by Xueting Zhang, Xiaohai Fang, Mian Pan, Luhua Yuan, Yaxin Zhang, Mengyi Yuan, Shuaishuai Lv and Haibin Yu
Sensors 2021, 21(21), 7205; https://doi.org/10.3390/s21217205 - 29 Oct 2021
Cited by 28 | Viewed by 3677
Abstract
Underwater vision-based detection plays an increasingly important role in underwater security, ocean exploration and other fields. Due to the absorption and scattering of light by water, as well as the movement of the carrier, underwater images generally suffer from problems such as noise pollution, color cast and motion blur, which seriously affect the performance of underwater vision-based detection. To address these problems, this study proposes an end-to-end marine organism detection framework that jointly optimizes image enhancement and object detection. The framework uses a two-stage detection network with a dynamic intersection over union (IoU) threshold as the backbone and adds an underwater image enhancement module (UIEM) composed of denoising, color correction and deblurring sub-modules, greatly improving the framework's ability to handle severely degraded underwater images. Meanwhile, a self-built dataset is introduced to pre-train the UIEM so that the entire framework can be trained end-to-end. The experimental results show that, compared with existing end-to-end models for marine organism detection, the proposed framework improves detection precision by at least 6% without significantly reducing detection speed, enabling high-precision real-time detection of marine organisms.
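The dynamic IoU threshold in the detection backbone is built on the standard intersection-over-union measure between predicted and ground-truth boxes; a minimal sketch of that measure (corner-coordinate boxes are an assumption for illustration):

```python
def iou(box_a, box_b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

print(iou((10, 10, 50, 50), (30, 30, 70, 70)))       # partial overlap example
```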
(This article belongs to the Special Issue Artificial Intelligence-Driven Ocean Monitoring (AID-OM))

19 pages, 5851 KB  
Article
Image Restoration Based on End-to-End Unrolled Network
by Xiaoping Tao, Hao Zhou and Yueting Chen
Photonics 2021, 8(9), 376; https://doi.org/10.3390/photonics8090376 - 8 Sep 2021
Cited by 5 | Viewed by 3763
Abstract
Recent studies on image restoration (IR) methods under unrolled optimization frameworks have shown that deep convolutional neural networks (DCNNs) can be used implicitly as priors to solve inverse problems. Due to the ill-conditioned nature of the inverse problem, the selection of prior knowledge is crucial for IR. However, existing methods use a fixed DCNN in each iteration and therefore cannot fully adapt to the image characteristics at each iteration stage. In this paper, we combine deep learning with traditional optimization and propose an end-to-end unrolled network based on deep priors. The entire network contains several iterations, each composed of an analytic solution update and a small multiscale deep denoiser network. In particular, we use different denoiser networks at different stages to improve adaptability. Compared with a fixed DCNN, this greatly reduces the number of computations when the total parameters and the number of iterations are equal, although the practical runtime gains are not as significant as the FLOP count suggests. Experimental results on three IR tasks, namely denoising, deblurring, and lensless imaging, demonstrate that the proposed method achieves state-of-the-art performance in terms of both visual effects and quantitative evaluations.
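A minimal sketch of one common unrolled scheme consistent with this description: each stage applies a denoiser and then a closed-form Fourier-domain update of the data-fidelity subproblem. The Gaussian smoothers stand in for the paper's learned multiscale denoiser networks:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unrolled_restore(y, kernel, denoisers, rho=0.1):
    """Unrolled half-quadratic splitting: per stage, a denoiser step followed by the
    closed-form solution of min_x ||k*x - y||^2 + rho ||x - z||^2 in the Fourier domain."""
    K = np.fft.fft2(kernel, s=y.shape)
    Y = np.fft.fft2(y)
    x = y.copy()
    for denoise in denoisers:                        # a different denoiser at each stage
        z = denoise(x)
        X = (np.conj(K) * Y + rho * np.fft.fft2(z)) / (np.abs(K) ** 2 + rho)
        x = np.real(np.fft.ifft2(X))
    return x

rng = np.random.default_rng(0)
kernel = np.zeros((64, 64)); kernel[:3, :3] = 1.0 / 9.0                  # toy 3x3 box blur
x_true = rng.random((64, 64))
y = np.real(np.fft.ifft2(np.fft.fft2(x_true) * np.fft.fft2(kernel))) + 0.01 * rng.standard_normal((64, 64))
stages = [lambda v, s=s: gaussian_filter(v, s) for s in (1.5, 1.0, 0.5)]  # stand-in stage denoisers
x_hat = unrolled_restore(y, kernel, stages)
```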
(This article belongs to the Special Issue Smart Pixels and Imaging)

22 pages, 918 KB  
Article
Balancing Heterogeneous Image Quality for Improved Cross-Spectral Face Recognition
by Zhicheng Cao, Xi Cen, Heng Zhao and Liaojun Pang
Sensors 2021, 21(7), 2322; https://doi.org/10.3390/s21072322 - 26 Mar 2021
Cited by 7 | Viewed by 2943
Abstract
Matching infrared (IR) facial probes against a gallery of visible light faces remains a challenge, especially across distance, due to the deteriorated quality of the IR data. In this paper, we study the scenario where visible light faces are acquired at a short standoff while the IR faces are long-range data. To address the quality imbalance between the heterogeneous imagery, we propose to compensate for it by upgrading the lower-quality IR faces. Specifically, this is realized through cascaded face enhancement that combines an existing denoising algorithm (BM3D) with a new deep-learning-based deblurring model we propose, named SVDFace. Different IR bands, short-wave infrared (SWIR) and near-infrared (NIR), as well as different standoffs, are involved in the experiments. Results show that, in all cases, the proposed quality-balancing approach yields improved recognition performance, and it is especially effective for SWIR images at a longer standoff. Our approach also outperforms the simpler, straightforward downgrading approach, and the cascaded face enhancement structure is shown to be both beneficial and necessary. Finally, inspired by singular value decomposition (SVD) theory, the proposed SVDFace deblurring model is succinct, efficient and interpretable in structure, and it proves advantageous over both traditional and state-of-the-art deep-learning-based deblurring algorithms.
(This article belongs to the Section Sensing and Imaging)