Article
Peer-Review Record

An Unsupervised Fundus Image Enhancement Method with Multi-Scale Transformer and Unreferenced Loss

Electronics 2023, 12(13), 2941; https://doi.org/10.3390/electronics12132941
by Yanzhe Hu 1, Yu Li 1,*, Hua Zou 2,* and Xuedong Zhang 3
Submission received: 1 May 2023 / Revised: 12 June 2023 / Accepted: 19 June 2023 / Published: 4 July 2023
(This article belongs to the Special Issue Signal, Image and Video Processing: Development and Applications)

Round 1

Reviewer 1 Report

This paper presents an unsupervised framework to tackle the problem of low-quality color fundus images in computer-aided analysis of ophthalmic diseases. Many technical details were found to be missing from the paper, and the novelty and contribution of this work are unclear. My comments can be summarised as follows:

1. In the "Introduction", the authors should provide a paragraph to highlight the novelty and contribution of the proposed work. The literature review of this work can be enhanced by considering the following papers.

Lee, K.G. et al. A deep learning-based framework for retinal fundus image enhancement. PLoS ONE 2023, 18(3), e0282416.

Ghani, A. et al. Accelerating Retinal Fundus Image Classification by Using Artificial Neural Networks (ANNs) and Reconfigurable Hardware (FPGA). Electronics 2019, 8(12), 1522.

2. In Section "3.2. Swin Transformer layer", the authors should provide references. Also, for Sections "3.4. Dual Discriminator and Generator" and "3.5.1. Self Feature Preserving loss", please provide references in the first paragraph.

3. Normally, three metrics are used to assess the quality of the enhanced image and to evaluate the proposed framework, i.e., PSNR, SSIM, and r (linear index of fuzziness). The authors did not provide any information on the r results. Is this not important?

4. In the "Experiment and Results" section, why was the EyeQ [34] dataset selected?

5. In Section "4.2. Implementation", how were the model parameters used in the training phase, i.e., input image, batch size, local discriminator, learning rate, etc., chosen?

6. Statistical analysis was missing from the paper.

7. The performance of the image processing, in terms of accuracy and execution time, should be discussed.

8. In the "Conclusion", the authors should provide the quantitative achievements of the proposed method.

 

Many typos and grammatical and punctuation mistakes were found in the paper. The paper should be proofread by a native English speaker before resubmission.

 

Author Response

1. The reviewer’s comment: In the "Introduction", the authors should provide a paragraph to highlight the novelty and contribution of the proposed work. The literature review of this work can be enhanced by considering the following papers.

The authors’ Answer: Thank you for your suggestion. In the "Introduction" section, we have incorporated an additional paragraph that emphasizes the novelty and contributions of our proposed work. We have also included and discussed the suggested references, thereby strengthening the literature review.

 

2. The reviewer’s comment: In Section "3.2. Swin Transformer layer", the authors should provide references. Also, for Sections "3.4. Dual Discriminator and Generator" and "3.5.1. Self Feature Preserving loss", please provide references in the first paragraph.

The authors’ Answer: Thank you for your valuable feedback. Following your suggestion, we have provided references in Section "3.2. Swin Transformer layer", Section "3.4. Dual Discriminator and Generator", and Section "3.5.1. Self Feature Preserving loss".

 

3. The reviewer’s comment: Normally, three metrics are used to assess the quality of the enhanced image and to evaluate the proposed framework, i.e., PSNR, SSIM, and r (linear index of fuzziness). The authors did not provide any information on the r results. Is this not important?

The authors’ Answer: We apologize for the omission of the linear index of fuzziness (r) from our evaluation of image quality, and we appreciate the reviewer's meticulous review and valuable suggestions. The decision not to include r was based on two considerations. Firstly, our choice of evaluation metrics follows widely accepted and utilized standards in the field, namely the peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM), which have proven highly effective in assessing image quality and preserving image structure. Secondly, we have not come across the linear index of fuzziness (r) in the papers that employ this dataset. We regret any confusion caused by this omission. In light of this feedback, we will consider including r in future work to ensure a more comprehensive evaluation of image quality.
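
For readers who wish to reproduce the two reported metrics, a minimal sketch using scikit-image (>= 0.19) follows; the file names are hypothetical placeholders, not files from the paper.

from skimage import io
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

# Hypothetical files: a high-quality reference image and the model's output.
reference = io.imread("reference_fundus.png")
enhanced = io.imread("enhanced_fundus.png")

# data_range=255 assumes 8-bit images; channel_axis=-1 marks the RGB axis.
psnr = peak_signal_noise_ratio(reference, enhanced, data_range=255)
ssim = structural_similarity(reference, enhanced, channel_axis=-1, data_range=255)
print(f"PSNR: {psnr:.2f} dB  SSIM: {ssim:.4f}")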

 

4. The reviewer’s comment: In the "Experiment and Results" section, why was the EyeQ [34] dataset selected?

The authors’ Answer: We have chosen the EyeQ [34] dataset for the "Experiments and Results" section because of its diverse and extensive collection of images, which facilitates a comprehensive evaluation of our model. We have stated this in the manuscript to clarify our rationale for dataset selection.

 

5. The reviewer’s comment: In Section "4.2. Implementation", how were the model parameters used in the training phase, i.e., input image, batch size, local discriminator, learning rate, etc., chosen?

The authors’ Answer: In Section "4.2. Implementation", we provide a detailed explanation of how we select model parameters during the training phase. This includes the selection of input images, batch size, local discriminators, learning rate, and other relevant factors.

6. The reviewer’s comment: Statistical analysis was missing from the paper.

The authors’ Answer: We appreciate the valuable feedback from the reviewer. We have thoroughly considered the issue raised regarding the lack of statistical analysis and made corresponding revisions to the paper. In the revised version, we have included appropriate statistical analyses to support our research findings.
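
One common form such an analysis can take is a paired significance test on per-image metric values; the sketch below uses SciPy's Wilcoxon signed-rank test, and the scores are made-up illustrations, not the paper's data.

import numpy as np
from scipy.stats import wilcoxon

# Hypothetical per-image PSNR scores (dB) for the proposed method and a baseline.
psnr_ours = np.array([23.1, 24.5, 22.8, 25.0, 23.9, 24.2])
psnr_base = np.array([22.4, 23.9, 22.5, 24.1, 23.2, 23.6])

# Paired Wilcoxon signed-rank test: are the per-image improvements significant?
stat, p = wilcoxon(psnr_ours, psnr_base)
print(f"Wilcoxon W = {stat:.1f}, p = {p:.4f}")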

7. The reviewer’s comment: The performance of the image processing, in terms of accuracy and execution time, should be discussed.

The authors’ Answer: Thank you for your suggestion. We have revised the paper to include a discussion on the accuracy and execution time of image processing. In the revised version, we provide an analysis of the achieved accuracy and discuss the execution time of our techniques.

8. The reviewer’s comment: In the "Conclusion", the authors should provide the quantitative achievements of the proposed method.

The authors’ Answer: Thank you for your valuable feedback, which has contributed to the enhancement of our work. In the "Conclusion" section, we have incorporated the quantitative results of the proposed method.

Reviewer 2 Report

The manuscript is an interesting piece of work but needs improvement.

Clarification of the method in relation to the findings is suggested.

The development of the algorithm has to be further analyzed, with a deeper explanation of how the results improve upon existing methodologies.

Nomenclature, current references, limitations of the method, and future research directions are recommended.

 

Language proofreading is suggested.

Author Response

1. The reviewer’s comment: Clarification of the method in relation to the findings is suggested.

The authors’ Answer: Thank you for your valuable feedback on the manuscript. We have clarified the method in correlation to the findings. We have provided a more detailed explanation of the algorithm's development, highlighting how our results improve upon existing methodologies.

 

2. The reviewer’s comment: The development of the algorithm has to be further analyzed, with a deeper explanation of how the results improve upon existing methodologies.

The authors’ Answer: We have considered your recommendation regarding nomenclature. We have carefully reviewed and ensured consistent and accurate use of terms throughout the manuscript.

 

3. The reviewer’s comment: Nomenclature, current references, limitations of the method, and future research directions are recommended.

The authors’ Answer: We have also addressed the limitations of the method by acknowledging its constraints and discussing potential areas for future research. This provides a comprehensive view of the scope and potential extensions of our work.

Reviewer 3 Report

The main question addressed by this research is the enhancement of color fundus images, which are now widely used in computer-aided analysis systems for ophthalmic diseases. The authors designed an unsupervised method that integrates a multi-scale feature fusion transformer and an unreferenced loss function for enhancing low-quality fundus images. The idea is very interesting and the paper is well written, with some minor errors listed as follows.

 

The topic is relevant as it addresses a specific gap in the field, namely training in the absence of paired training data.

 

To reduce the blurring of image details caused by deep unsupervised networks, the authors define unreferenced loss functions that improve the model’s ability to suppress edge sharpness degradation. In addition, they use an a priori luminance-based attention mechanism to correct uneven illumination in low-quality images. On public datasets, the authors improve on the SOTA by 0.88 dB in PSNR and 0.024 in SSIM.
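
As a rough illustration of how such a luminance prior can drive attention (a minimal sketch in the spirit of EnlightenGAN-style self-regularized attention, not necessarily the authors' exact mechanism):

import numpy as np

def luminance_attention(rgb):
    """rgb: float image in [0, 1] with shape (H, W, 3); returns an (H, W) map."""
    luminance = rgb.max(axis=-1)  # simple illumination prior: per-pixel max channel
    return 1.0 - luminance        # darker pixels receive larger attention weights

# Hypothetical usage on a stand-in for a normalized fundus image.
image = np.random.rand(256, 256, 3)
attention = luminance_attention(image)  # values near 1 mark under-lit regions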

 

The authors’ conclusions are consistent with the evidence and arguments presented.

 

References are appropriate.

More general comments and minor errors are listed as follows.

"Q, K, and V is calculated as: " -> "Q, K, and V are calculated as: "

 

Are equations 6 and 7 really correct? From what is shown, it means that MSA(LN(X)) and MLP(LN(X)) are always zero.
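
For reference, the standard pre-norm formulation of the Swin Transformer block (Liu et al., 2021) reads:

\hat{z}^{l} = \mathrm{W\text{-}MSA}\left(\mathrm{LN}(z^{l-1})\right) + z^{l-1}, \qquad
z^{l} = \mathrm{MLP}\left(\mathrm{LN}(\hat{z}^{l})\right) + \hat{z}^{l}

If the equations reuse the same symbol on both sides of the residual connection, i.e. X = \mathrm{MSA}(\mathrm{LN}(X)) + X, then the MSA and MLP terms are forced to zero, which is presumably the issue being raised here.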

 

" constraining the generator to generate" -> please rewrite

 

"model. , Wi,j, Hi,jrepresent" -> "model. Wi,j and Hi,j represent"

 

"a total of color fundus images" -> how many images?

 

"Comparison with State-of-the-Arts" -> "Comparison with State-of-the-Art"

 

"Our" -> "Ours"

 

Author Response

1. The reviewer’s comment: "Q, K, and V is calculated as: " -> "Q, K, and V are calculated as: "

The authors’ Answer: Thank you for bringing this to our attention. We have made the necessary correction in the manuscript by changing "Q, K, and V is calculated as" to "Q, K, and V are calculated as."

 

2. The reviewer’s comment: Are equations 6 and 7 really correct? From what is shown, it means that MSA(LN(X)) and MLP(LN(X)) are always zero.

The authors’ Answer: Thank you for your comment. We have carefully reviewed equations 6 and 7 and can confirm that they are correct as presented in the manuscript.

 

3. The reviewer’s comment: "constraining the generator to generate" -> please rewrite.

The authors’ Answer: Thank you for your feedback. We have revised the sentence "constraining the generator to generate" to make it more readable and clearer.

 

4. The reviewer’s comment: "model. , Wi,j, Hi,jrepresent" -> "model. Wi,j and Hi,j represent"

The authors’ Answer: Thank you for pointing out the error. We have corrected the sentence from "model., Wi,j, Hi,j represent" to "model. Wi,j and Hi,j represent".
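
For context, this notation appears to follow the self feature preserving loss of EnlightenGAN, where W_{i,j} and H_{i,j} denote the width and height of the feature map produced by the j-th convolutional layer after the i-th max-pooling layer of a pre-trained VGG network \phi; a common formulation (an assumption, not verified against the manuscript) is:

L_{SFP}(I) = \frac{1}{W_{i,j} H_{i,j}} \sum_{x=1}^{W_{i,j}} \sum_{y=1}^{H_{i,j}} \left( \phi_{i,j}(I)_{x,y} - \phi_{i,j}(G(I))_{x,y} \right)^{2}

where I is the input low-quality image and G is the generator.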

 

5. The reviewer’s comment: "a total of color fundus images" -> how many images?

The authors’ Answer: Thank you for your suggestion. We have specified the exact number of color fundus images in the sentence "a total of color fundus images." We appreciate your attention to detail; this clarification improves the accuracy and completeness of the information presented.

 

6. The reviewer’s comment: "Comparison with State-of-the-Arts" -> "Comparison with State-of-the-Art"

The authors’ Answer: Thank you for bringing this to our attention. We have corrected the phrase "Comparison with State-of-the-Arts" to "Comparison with State-of-the-Art" as suggested.

 

7. The reviewer’s comment: "Our" -> "Ours"

The authors’ Answer: Thank you for pointing out the error. We have made the correction by replacing "Our" with "Ours" as suggested. We appreciate your keen eye and attention to detail.

Round 2

Reviewer 1 Report

The authors have addressed my comments adequately.

Okay.
