Article
Peer-Review Record

Underwater Image Enhancement Based on Color Feature Fusion

Electronics 2023, 12(24), 4999; https://doi.org/10.3390/electronics12244999
by Tianyu Gong 1, Mengmeng Zhang 2,*, Yang Zhou 3 and Huihui Bai 3
Reviewer 1:
Reviewer 3: Anonymous
Reviewer 4:
Submission received: 13 November 2023 / Revised: 5 December 2023 / Accepted: 11 December 2023 / Published: 14 December 2023

Round 1

Reviewer 1 Report

Comments and Suggestions for Authors

Dear Authors,

I carefully read the article entitled "Underwater Image Enhancement Based on Color Feature Fusion", and it is a very interesting and original article. Therefore, I am very much in favor of its publication in MDPI's journal Electronics.

However, I found some formatting errors in this article that should not remain in the final version. Therefore, I strongly suggest that the authors make the following writing corrections to this version of the article:

1. In the title, change “based” to “Based”.

2. In the Keywords, only the first word of the first technical term should start with a capital letter; the remaining words should start with lowercase letters (according to the ACS (American Chemical Society) standard).

3. On p. 1 (line 24), change “autonomous underwater vehicle (AUV)” to “Autonomous Underwater Vehicle (AUV)”. Whenever the meaning of an acronym is given alongside the acronym, it should be highlighted by capitalizing the first letter of each word. In general, the authors did this correctly, but some acronyms escaped their attention: CNNs (p. 3, line 139), GANs (p. 4, line 142), BN (p. 4, line 151), AUVs (p. 15, line 488), and ROVs (p. 15, line 489).

4. This is just a suggestion to maintain uniformity in the text. (p. 4, line 155) Replace “CBAM (Convolutional Block Attention Module)” with “Convolutional Block Attention Module (CBAM)”. Do the same with BN (Batch Normalization) (p. 5, lines 183 and 198).

5. (p. 1 lower left corner under “Citation”). The title of the article was missing after the name of the authors.

6. At the end of the Introduction. End Section 1 with a brief paragraph describing what will be done in the rest of the article. For example, “In Section 2, we report...”

7. (p. 2, line 80) Change “Jodson et al.introduced” to “Jodson et al. introduced” (missing space). Do the same on (p. 3, line 128) and (p. 11, lines 400-401 and line 403).

8. (p. 3, line 118) Explain very briefly what the pix2pix method is.

9. Place a full stop at the end of the legend in Figure 1. Do the same in Figures 2, 3, 4, 5, and 6. In Figure 7 it is not necessary.

10. The titles of Sections and Sub-sections are written with all words starting in capital letters. So, arrange this in Sections 3, 8, 8.2, 8.3.1, 8.3.2, and 8.3.3.

11. In Tables 1 and 2, replace “WaterNet[10]” with “WaterNet [10]” (missing space). Do the same with the other names.

12. (p. 11, lines 400-404) Where it says “Deep WaveNet[54]”, should it not be “Deep WaveNet [29]”? Reference [54] does not even exist in the article! Check the other terms in these lines as well. Furthermore, this paragraph should end with a full stop, which is missing.

13. The bibliographic references at the end of the article differ slightly from the standard required by the ACS. Here are my suggestions for improvement:

(i) According to the ACS standard, a conference reference must include the following information after the title of the article: conference name, city, country, date of occurrence, and pages (see references [5, 15, 24, 25, 27, 28, 30, 31, 32, 34, 35, 36]).

(ii) Would it not be possible to discover the origin of reference [11]?

(iii) In references [12, 19, 20, 37], to maintain the uniformity of the “References”, write the journal names with each word starting in a capital letter. It would also be good to abbreviate the journal names according to the ISO4 standard rather than writing them in full.

  Example of (i):

To replace:

5. Ancuti, C.; Ancuti, C.O.; Haber, T.; Bekaert, P. Enhancing underwater images and videos by fusion. In Proceedings of the 2012 IEEE conference on computer vision and pattern recognition. IEEE, 2012, pp. 81–88.

Per:

5. Ancuti, C.; Ancuti, C.O.; Haber, T.; Bekaert, P. Enhancing underwater images and videos by fusion. In Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition, Providence, Rhode Island, 16-21 June 2012, pp. 81–88.

 

Yours sincerely,

 

The Reviewer.

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 2 Report

Comments and Suggestions for Authors

 

Page 4, Fig. 4.- What pre-processing is carried out on the images prior to entering the network? What type of image enhancement is performed? What advantages does it have over low-pass filtering using convolution kernels?

Page 4, line 163.- “In neural networks, the term "Receptive field" describes the portion of an image that each neuron can access and interpret.” How is the size of the portion of the image that each neuron processes determined? What happens if this size is increased or reduced?
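As background for this question: for a stack of convolutions, the receptive-field size can be computed layer by layer as r_out = r_in + (k - 1) * j, where k is the kernel size and j is the cumulative stride of all earlier layers. A minimal illustrative sketch (the layer configurations here are hypothetical, not the manuscript's network):

```python
# Illustrative sketch: receptive-field size of stacked convolutions.
# The layer configurations below are hypothetical examples.
def receptive_field(layers):
    """layers: iterable of (kernel_size, stride) pairs."""
    r, jump = 1, 1  # current receptive field and cumulative stride
    for kernel, stride in layers:
        r += (kernel - 1) * jump  # each layer widens the field
        jump *= stride            # stride compounds for later layers
    return r

# Three 3x3 stride-1 convolutions: each output neuron "sees" a 7x7 region.
print(receptive_field([(3, 1), (3, 1), (3, 1)]))  # 7
# Making the first two layers stride-2 more than doubles the region.
print(receptive_field([(3, 2), (3, 2), (3, 1)]))  # 15
```

Enlarging the receptive field (bigger kernels, strides, or dilation) lets each neuron aggregate more global context at the cost of spatial detail; shrinking it preserves local detail but limits context.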

 

The proposed method should be compared with other recently published methods:

Jian, M., Liu, X., Luo, H., Lu, X., Yu, H., & Dong, J. (2021). Underwater image processing and analysis: A review. Signal Processing: Image Communication, 91, 116088.

Zhang, W., Pan, X., Xie, X., Li, L., Wang, Z., & Han, C. (2021). Color correction and adaptive contrast enhancement for underwater image enhancement. Computers & Electrical Engineering, 91, 106981.

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 3 Report

Comments and Suggestions for Authors

In this paper, a model employing a multi-channel feature extraction strategy is presented to extract features from the red, green, and blue channels, learning both global and local information of underwater images.

The paper includes an appropriate introduction together with a Related Work section covering methods based on traditional image processing, on physical models, and on deep learning.

The network structure is appropriately described, including each of the proposed blocks. However, the performance of the algorithms is evaluated only with image-quality metrics such as UIQM, PSNR, or SSIM; metrics based on complexity or computational time are not included. These metrics must be included in order to perform a fair comparison. Moreover, the conclusions must also be improved considering the main results of the paper.
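For context on the metrics named above: PSNR, for example, follows directly from the mean-squared error between a reference image and an enhanced one. A minimal sketch (illustrative only; the arrays are toy data, not results from the paper):

```python
import numpy as np

def psnr(reference, enhanced, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two same-sized images."""
    diff = reference.astype(np.float64) - enhanced.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

# Toy example: a constant offset of 10 gray levels gives MSE = 100.
a = np.zeros((4, 4), dtype=np.uint8)
b = np.full((4, 4), 10, dtype=np.uint8)
print(round(psnr(a, b), 2))  # 10*log10(255^2 / 100) ≈ 28.13
```

The computational-time figures requested here could be obtained analogously, e.g. by timing a forward pass with `time.perf_counter()`.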

Comments on the Quality of English Language

Only minor editing of English language required

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 4 Report

Comments and Suggestions for Authors

This paper presents an underwater image enhancement approach based on color feature fusion, which is a positive contribution. However, I have a few concerns:

 

1. The authors could provide more specific details regarding the "traditional image processing techniques" they employed to enhance the quality of the collected low-quality underwater images.

 

2. The paper does not state the size of the images used in the experiments. For your information, some researchers have studied how to determine the most beneficial image size for training deep learning models (see: https://doi.org/10.3390/electronics12040985).

Comments on the Quality of English Language

Good

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Round 2

Reviewer 2 Report

Comments and Suggestions for Authors

The generalization analysis has been expanded, explaining how the robustness of the proposed method is improved. In addition, the explanation of the preprocessing of the images was extended, and the advantages over other methods were detailed.

Reviewer 3 Report

Comments and Suggestions for Authors

Once the required aspects have been addressed, the paper can be accepted.
