DEF-Net: A Dual-Encoder Fusion Network for Fundus Retinal Vessel Segmentation
Round 1
Reviewer 1 Report
I am a bit confused by the term "dense" in the DC block. What does "dense" mean here, given that the DC block just uses more convolutional layers and a residual connection compared with the traditional U-Net block?
The loss function used for network training should be presented.
Some experiments should be provided to compare the blocks in Fig. 2 (a) and (b).
More relevant works on image segmentation should be discussed. For example, Volumetric Memory Network for Interactive Medical Image Segmentation is a famous network architecture for medical image segmentation, and Rethinking Semantic Segmentation: A Prototype View also introduces a novel network design.
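Regarding the "dense" terminology questioned above: in the literature, "dense" usually refers to DenseNet-style connectivity, where each layer receives the concatenation of all preceding feature maps, as opposed to a residual connection, which adds the input back to the output. The sketch below is purely illustrative of that distinction and is not the authors' actual DC block; feature maps are modeled as lists of floats and each "layer" as a simple element-wise function, so only the wiring pattern is shown.

```python
def layer(features):
    """Stand-in for a convolutional layer: here, just scales every value."""
    return [2.0 * f for f in features]

def dense_block(x, num_layers=3):
    """Dense connectivity: each layer sees the CONCATENATION of the block
    input and all preceding layers' outputs (DenseNet-style), so the
    feature width grows at every step."""
    collected = [x]
    for _ in range(num_layers):
        inp = [f for feats in collected for f in feats]  # concat all so far
        collected.append(layer(inp))
    return [f for feats in collected for f in feats]

def residual_block(x):
    """Residual connection: the layer output is ADDED to the input,
    so the feature width stays the same."""
    out = layer(x)
    return [a + b for a, b in zip(x, out)]

x = [1.0, 1.0]
print(len(dense_block(x)))     # width grows with each concatenation: 16
print(len(residual_block(x)))  # width unchanged: 2
```

If the DC block only stacks extra convolutions with one residual addition, the reviewer's point stands: it matches the residual pattern, not the dense one.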
Author Response
The authors would like to express their sincere appreciation to the Editor who handled the paper and to Reviewer 1 for the constructive suggestions and comments on further improving the quality and presentation of the results in the revised version. The detailed response to the reviewer is provided in the "Reviewer 1.pdf" file; please check it.
Author Response File: Author Response.pdf
Reviewer 2 Report
The authors propose a variation of an encoder-decoder architecture to segment fundus retinal structures. The work focuses on two main proposals: a dual-encoder structure and a block that extracts and fuses features. The authors have made a good effort to describe the methodology, as the writing is clear and the structure satisfactory. Moreover, the results show that the proposed architecture outperforms other state-of-the-art works.
-- Shortcomings
1. Line 4:
"..., but due to their single path, ..."
- Single path meaning what?
- Actually, this is partly addressed in lines 9-10; still, context could be added in line 4.
2. Line 17:
"..., millions of people lose their eyesight every year..."
- Cite a statement from a health institution that backs this up.
3. Line 29:
FCN has not been defined.
4. Line 34:
"... image segmentation. where..."
- "... image segmentation, where..."
5. Line 45:
"[19] embedded..."
- Add the author's name
6. Line 104+ (After Figure 3., before eq 5):
"Where high-level features from the encoder adjust the size..."
- It is not clear what the authors mean. Please review and rewrite.
7. Line 108:
"The utilization of features from multiple scales is helpful to improve the fusion efficiency"
- Reference(?)
8. Line 155:
"..., a original ..."
- "..., an original ..."
9. Line 216:
"..., and the qualitive results with the quantitative results can be seen in the subsections."
-It may be rewritten as:
-"..., quantitative and qualitative results are described in sections 4.2.1 and 4.2.2, respectively."
10. Line 231:
"... which was not obvious for model performance improvement".
- This sentence is not clear enough: do the authors mean sub-optimal results or subjective performance metrics for the task?
11. Line 243:
"..., which strongly demonstrated the superiority of the proposed method ..."
- Definitely, the method shows improvement, though I would not say that it is strongly superior.
Author Response
The authors would like to express their sincere appreciation to the Editor who handled the paper and to Reviewer 2 for the constructive suggestions and comments on further improving the quality and presentation of the results in the revised version. The detailed response to the reviewer is provided in the "Reviewer 2.pdf" file; please check it.
Author Response File: Author Response.pdf
Reviewer 3 Report
1. The authors start using abbreviations without defining them, for example "FCN" and many more.
2. Please change the title of Section 5 to "Conclusion" instead of "Conclusions".
3. It is observed that a few references are not relevant to this study. Please cross-check all the references.
4. Section 3.1 starts directly after the Section 3 heading without any discussion. Please give a brief overview of the content to be covered before sub-section 3.1.
5. The image dataset used by the authors is very small, only 14 to 20 pairs of images. Please check the possibility of extending the study with a larger dataset.
6. The authors have performed a sequence of pre-processing steps on the given images. A better justification of these operations is required to support the claim that they help improve the results (Page 6).
7. In many places, sub-sections start directly after the section title; please correct this.
8. In Table 4, accuracy is missing for one work; please recheck the table, because it is an important result that should be discussed.
9. The plagiarism score is too high (26% in Turnitin). This should be brought down to less than 10%.
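On the pre-processing justification requested in comment 6: the sketch below is purely illustrative (it is not the authors' actual pipeline, whose steps are on Page 6 of the manuscript) and shows the kind of steps commonly applied to fundus images, each of which has a concrete, stateable effect the authors could cite as justification: grayscale conversion, min-max intensity normalization, and gamma correction.

```python
def to_gray(rgb_pixels):
    """Luminance-weighted grayscale conversion (ITU-R BT.601 weights)."""
    return [0.299 * r + 0.587 * g + 0.114 * b for r, g, b in rgb_pixels]

def normalize(pixels):
    """Min-max normalization to [0, 1]; stretches a narrow intensity range."""
    lo, hi = min(pixels), max(pixels)
    return [(p - lo) / (hi - lo) for p in pixels]

def gamma_correct(pixels, gamma=0.8):
    """Gamma < 1 brightens dark regions, where thin vessels tend to hide."""
    return [p ** gamma for p in pixels]

# Three hypothetical RGB pixels, dark to bright.
pixels = [(30, 60, 20), (120, 140, 90), (200, 180, 160)]
out = gamma_correct(normalize(to_gray(pixels)))
print(out[0], out[-1])  # darkest pixel maps to 0.0, brightest to 1.0
```

Justifications of this form ("normalization compensates for uneven illumination across acquisitions; gamma correction enhances low-contrast vessel regions") would address the comment more convincingly than listing the steps alone.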
Author Response
The authors would like to express their sincere appreciation to the Editor who handled the paper and to Reviewer 3 for the constructive suggestions and comments on further improving the quality and presentation of the results in the revised version. The detailed response to the reviewer is provided in the "Reviewer 3.pdf" file; please check it.
Author Response File: Author Response.pdf
Round 2
Reviewer 3 Report
Nice to see that the authors addressed all the comments.