Article
Peer-Review Record

Learning from Multiple Instances: A Two-Stage Unsupervised Image Denoising Framework Based on Deep Image Prior

Appl. Sci. 2022, 12(21), 10767; https://doi.org/10.3390/app122110767
by Shaoping Xu, Xiaojun Chen, Yiling Tang *, Shunliang Jiang, Xiaohui Cheng and Nan Xiao
Reviewer 1: Anonymous
Reviewer 2:
Submission received: 18 September 2022 / Revised: 14 October 2022 / Accepted: 20 October 2022 / Published: 24 October 2022

Round 1

Reviewer 1 Report

This paper presents a denoising framework based on the deep image prior (DIP) that introduces two kinds of targets: the noisy image and preliminary images. The preliminary images are generated from the denoised image produced by the FFDNet denoiser through up-sampling and down-sampling. Finally, unsupervised image fusion is performed to obtain the final denoised image. The authors show the superiority of their framework over previous denoisers in the general case.
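For illustration, a minimal Python sketch of the preliminary-image generation described above might look as follows; the scale range, the interpolation modes, and the use of an already pre-denoised input are assumptions for this sketch, not the authors' actual implementation.

```python
# Hypothetical sketch: a pre-denoised image (e.g., the FFDNet output) is randomly
# down-sampled and restored to its original size to produce several "preliminary"
# DIP targets.
import torch
import torch.nn.functional as F

def make_preliminary_images(pre_denoised: torch.Tensor, n: int = 4) -> list:
    """pre_denoised: a (1, C, H, W) tensor already denoised by a pre-trained denoiser."""
    _, _, h, w = pre_denoised.shape
    targets = []
    for _ in range(n):
        # Assumed scale range; each random draw yields a slightly different target.
        scale = float(torch.empty(1).uniform_(0.5, 0.9))
        down = F.interpolate(pre_denoised, scale_factor=scale,
                             mode="bilinear", align_corners=False)
        up = F.interpolate(down, size=(h, w),
                           mode="bilinear", align_corners=False)
        targets.append(up)
    return targets

# The DIP network would then be optimized against both the noisy image and these
# preliminary targets before the unsupervised fusion stage.
```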

To understand the real contribution of the proposed method, I raise the following issues.

(1) The authors recently published two papers related to unsupervised denoisers; however, these two denoisers by the same authors are not mentioned in this paper. Please cite these previous works and compare their performance:

[1] Xu, S. et al. An unsupervised fusion network for boosting denoising performance. Journal of Visual Communication and Image Representation, 2022.

[2] Xu, S. et al. An unsupervised weight map generative network for pixel-level combination of image denoising. Applied Sciences, 2022.

(2) Table 1 shows the average PSNR obtained using only the noisy image, only the preliminary images, and both. Because the preliminary images are generated randomly from the up-sampled image, they vary with the random selection. Please add the variance or standard deviation of the PSNR in the columns for the preliminary images and for both images (see the computation sketch after this list of comments).

(3) From Table 3, we can observe that for noisy images with a standard deviation of 50, other denoisers, such as SwinIR, DRUNet, DeamNet, and DAGL, outperform the proposed denoiser. Please add some explanation of why the performance of the proposed denoiser drops when the noise has a larger standard deviation.

(4) Should the condition of Equation (4) not use an absolute-value operation?

(5) Please revise lines 131 and 309.
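As a concrete illustration of the statistic requested in issue (2), the following sketch (assuming a hypothetical psnr() helper and a set of denoising results obtained from independent random preliminary-image draws) shows how the mean and standard deviation of the PSNR could be computed.

```python
# Sketch of the requested statistic: mean and standard deviation of the PSNR over
# denoising results from independent random preliminary-image draws. The psnr()
# helper and the list of results are illustrative assumptions.
import numpy as np

def psnr(clean: np.ndarray, estimate: np.ndarray, peak: float = 255.0) -> float:
    mse = np.mean((clean.astype(np.float64) - estimate.astype(np.float64)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def psnr_mean_std(clean: np.ndarray, estimates: list) -> tuple:
    """estimates: denoised results from several independent random draws."""
    values = np.array([psnr(clean, e) for e in estimates])
    return float(values.mean()), float(values.std(ddof=1))
```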

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 2 Report

This paper proposes a two-stage unsupervised deep learning framework based on deep image prior (DIP) for image denoising.

In general, the proposed idea is clearly expressed and is verified by sufficient experiments, but the reviewer has doubts about the experimental results.

 

First of all, we must admit that supervised methods have a great performance advantage over unsupervised methods. At the same time, among unsupervised methods, Noise2Noise-based methods perform better than DIP-based methods, because the training of the former is based on a large number of samples. If the authors can improve the performance of DIP so that it surpasses the latest Transformer-based methods, such as SwinIR and Restormer, through such a simple scheme, it would be a great breakthrough. However, the reviewer is highly suspicious of this claim. Therefore, if the authors can provide open-source code, or upload visual images of the results, such as the denoising results on the Set12, BSD68, and SCIE datasets, the reviewer's doubts can be dispelled.
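For example, the kind of check the reviewer has in mind could be as simple as recomputing the average PSNR of released result images against the dataset ground truth; the sketch below assumes a hypothetical folder layout with matching file names and is not part of the manuscript.

```python
# Sketch of the suggested verification: recompute the average PSNR of released
# result images against the dataset ground truth (e.g., Set12). The folder layout
# and matching file names are assumptions.
from pathlib import Path
import cv2
import numpy as np

def average_psnr(gt_dir: str, result_dir: str) -> float:
    scores = []
    for gt_path in sorted(Path(gt_dir).glob("*.png")):
        res_path = Path(result_dir) / gt_path.name  # assumed identical file names
        gt = cv2.imread(str(gt_path), cv2.IMREAD_GRAYSCALE).astype(np.float64)
        res = cv2.imread(str(res_path), cv2.IMREAD_GRAYSCALE).astype(np.float64)
        mse = np.mean((gt - res) ** 2)
        scores.append(10.0 * np.log10(255.0 ** 2 / mse))
    return float(np.mean(scores))

# e.g., average_psnr("Set12/ground_truth", "Set12/results_sigma50")
```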

 

Also, it is well known that network design is the most important factor affecting performance; hence, the latest Transformer-based methods, such as SwinIR and Restormer, outperform the most powerful CNN-based method, DRUNet. It is therefore hard not to doubt that such a simple network as the one in the manuscript can achieve such a large performance improvement.

The reviewer suggests that the experimental data should be reorganized to remove inappropriate data.

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Round 2

Reviewer 1 Report

The authors have revised their paper according to my suggestions, so I am satisfied with this revised version.

Reviewer 2 Report

I can't agree with the authors' answers.

Although the three algorithms, DIP, DRUNet, and Restormer, are all based on U-Net, DRUNet and Restormer additionally incorporate residual modules and attention (Transformer) modules, respectively. Therefore, they can achieve better performance.

 

The reviewer thinks that this scheme, which combines supervised learning with unsupervised learning, can indeed improve the performance of the original DIP, but it cannot reach the state-of-the-art level.
