Article
Peer-Review Record

Color Structured Light Stripe Edge Detection Method Based on Generative Adversarial Networks

Appl. Sci. 2023, 13(1), 198; https://doi.org/10.3390/app13010198
by Dieuthuy Pham 1,2, Minhtuan Ha 1,2 and Changyan Xiao 1,*
Reviewer 1:
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Submission received: 11 November 2022 / Revised: 12 December 2022 / Accepted: 20 December 2022 / Published: 23 December 2022

Round 1

Reviewer 1 Report

In line 97, ligjht -> light.

Figure 7 seems to already contain Figure 8.

It can be confusing to judge a segmentation result as correct based on whether a line exists in a patch or not, so it seems better to use MSE metrics. Why did you use statistical MSE in the Pixel Location Accuracy section? It makes it difficult to compare with previous results.

It is said that attention gates improve performance in U-Net. However, there is no evidence for this in the paper. Please add a result without attention gates.

Author Response

Response to Reviewers’ comments
Concerning the manuscript


Color Structured Light Stripe Edge Detection Method Based on Generative Adversarial Networks

Dieuthuy Pham, Minhtuan Ha, Changyan Xiao

 

First of all, we would like to thank the associate editor and the anonymous reviewers for their constructive comments, and we thank you for considering our work for publication in Applied Sciences. The manuscript has been carefully revised, and all changes are highlighted in one copy of the manuscript. In this letter, a point-by-point response is given to each of the issues raised by the reviewers: the comments are repeated in italics, and our responses follow in blue.

Answers to the first reviewer

Comment #1

English language and style are fine/minor spell check required.

Response:

Thanks. We have carefully and thoroughly checked our English with the help of a senior colleague. Additionally, we have asked the Language Editing Services to polish our English writing. We hope the quality of the revised version meets the requirements for publication.

 

Comment #2

In line 97, ligjht -> light.

Response:

Thank you! It was a typo. We have corrected it.

 

Comment #3

Figure 7 seems to already contain Figure 8.

Response:

Thanks! After carefully reviewing these figures, we found that Figure 7 illustrates the input samples in the training set. It does not contain Figure 8, which illustrates the training process of our model.

 

Comment #4

It can be confusing to judge a segmentation result as correct based on whether a line exists in a patch or not, so it seems better to use MSE metrics. Why did you use statistical MSE in the Pixel Location Accuracy section? It makes it difficult to compare with previous results.

Response:

Thank you! After discussion, we agree with the reviewer that segmentation results should not be evaluated based on whether a line exists in a patch or not. To facilitate comparison with previous methods, the definitions of TP, FP, TN, and FN have been modified, and the experimental results in Tables 1 and 2 have been updated accordingly. In addition, the reasons for using statistical MSE in the Pixel Location Accuracy section have been added to the second paragraph of that section.
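[Editor's note] For illustration only, the following is a minimal sketch of the kind of pixel-wise evaluation discussed here, not the authors' implementation; their modified TP/FP/TN/FN definitions appear in the revised manuscript, and the array names pred and gt are assumptions (binary edge maps, 1 = edge pixel).

import numpy as np

def edge_map_scores(pred, gt):
    # Pixel-wise confusion counts and derived scores for binary edge maps.
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.sum(pred & gt)        # edge pixels detected correctly
    fp = np.sum(pred & ~gt)       # spurious edge pixels
    fn = np.sum(~pred & gt)       # missed edge pixels
    tn = np.sum(~pred & ~gt)      # background pixels detected correctly
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"TP": int(tp), "FP": int(fp), "FN": int(fn), "TN": int(tn),
            "precision": float(precision), "recall": float(recall), "F1": float(f1)}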

 

Comment #5

It is said that attention gates improve performance in U-Net. However, there is no evidence for this in the paper. Please add a result without attention gates.

Response:

Thanks. The experimental result without attention gates has been added to Table 2.
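[Editor's note] For readers, the sketch below shows a generic additive attention gate in the style of Attention U-Net (Oktay et al., 2018), which is what such an ablation removes. This is an illustrative PyTorch sketch, not the authors' module; the class and argument names are invented for the example, and x and g are assumed to share the same spatial size.

import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    # Additive attention gate in the style of Attention U-Net.
    # x: skip-connection features from the encoder; g: gating signal from the decoder.
    # Removing this gate and passing x through unchanged corresponds to the
    # "without attention gates" ablation.
    def __init__(self, x_channels, g_channels, inter_channels):
        super().__init__()
        self.theta_x = nn.Conv2d(x_channels, inter_channels, kernel_size=1)
        self.phi_g = nn.Conv2d(g_channels, inter_channels, kernel_size=1)
        self.psi = nn.Conv2d(inter_channels, 1, kernel_size=1)

    def forward(self, x, g):
        att = torch.relu(self.theta_x(x) + self.phi_g(g))
        att = torch.sigmoid(self.psi(att))   # per-pixel weights in [0, 1]
        return x * att                       # re-weighted skip features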

Author Response File: Author Response.docx

Reviewer 2 Report

The results seem to be correct and publishable.

Author Response

Response to Reviewers’ comments
Concerning the manuscript


Color Structured Light Stripe Edge Detection Method Based on Generative Adversarial Networks

Dieuthuy Pham, Minhtuan Ha, Changyan Xiao

 

First of all, we would like to thank the associate editor and the anonymous reviewers for their constructive comments, and we thank you for considering our work for publication in Applied Sciences. The manuscript has been carefully revised, and all changes are highlighted in one copy of the manuscript. In this letter, a point-by-point response is given to each of the issues raised by the reviewers: the comments are repeated in italics, and our responses follow in blue.

 

Answers to the second reviewer

Comment #1

The results seem to be correct and publishable.

Response:

Thanks a lot for your positive comment!

Author Response File: Author Response.docx

Reviewer 3 Report

This work discusses the problem of color structured light stripe edge detection and proposes using a GAN to solve it. The current presentation has some major issues:

1) Following the title, the main contribution should be GAN-based color structured light stripe edge detection. But after reading the paper, this is rather minor (Section 2.6). Instead, the authors want to present the creation of the dataset and the system design;

2) The organization of the paper is not good: Section 2 should only focus on the system design or related work, and Section 3 should describe the proposed method addressing the noise and complex characteristics of the scene. The training and experimental setup should be moved to Section 4;

3) The motivation for using a GAN to address the noise of the scene is unclear;

4) Sections 2.3, 2.4, and 2.5 contain very little information; in fact, their content can be found in the references;

5) Some measurements, such as the sample mean, variance, and SE in (6)-(8), are not necessary to present;

6) What is the meaning of the results in (10) and (11)?

7) Why do references [39-41] appear in the first paragraph of the Introduction?

8) Related work should be briefly discussed.

Author Response

Response to Reviewers’ comments
Concerning the manuscript


Color Structured Light Stripe Edge Detection Method Based on Generative Adversarial Networks

Dieuthuy Pham, Minhtuan Ha, Changyan Xiao

 

First of all, we would like to thank the associate editor and the anonymous reviewers for their constructive comments, and we thank you for considering our work for publication in Applied Sciences. The manuscript has been carefully revised, and all changes are highlighted in one copy of the manuscript. In this letter, a point-by-point response is given to each of the issues raised by the reviewers: the comments are repeated in italics, and our responses follow in blue.

 

Answers to the third reviewer

Comment #1

This work discusses the problem of color structured light stripe edge detection and proposes using a GAN to solve it. The current presentation has some major issues:

Response:

Thanks a lot for your comments! We have made many improvements in the revised version for better understanding. Please see the following responses.

 

Comment #2

Moderate English changes required.

Response:

Thanks. We have carefully and thoroughly checked our English with the help of a senior colleague. Additionally, we have asked the Language Editing Services to polish our English writing. We hope the quality of the revised version meets the requirements for publication.

 

Comment #3

Following the title, the main contribution should be GAN-based color structured light stripe edge detection. But after reading the paper, this is rather minor (Section 2.6). Instead, the authors want to present the creation of the dataset and the system design.

Response:

Thank you! Following the reviewer's comment, Section 2.6 has been renumbered as Section 3.3. Also, a paragraph has been added at the end of this section to express the novelty of the proposed method.

 

Comment #4

The organization of the paper is not good: Section 2 should only focus on the system design or related work, and Section 3 should describe the proposed method addressing the noise and complex characteristics of the scene. The training and experimental setup should be moved to Section 4.

Response:

Thank you! The organization of the paper has been modified. Section 2 gives the problem definition and an overview of our approach, and a short paragraph has been added to this section to introduce that overview. Section 3 details the proposed method. Section 4 includes the experimental setup, training process, results, and discussion.

 

 

Comment #5

The motivation for using a GAN to address the noise of the scene is unclear.

Response:

Thanks! The motivation for using GANs to address the noise of the scene has been added to paragraph 7 of the Introduction. Related studies have been added to the References.

 

Comment #6

Sections 2.3, 2.4, and 2.5 contain very little information; in fact, their content can be found in the references.

Response:

Thanks! Following the reviewer's comment, these sections have been renumbered as Sections 3.1, 3.2, and 4.2, respectively. To clarify the innovations of the proposed method, corresponding paragraphs have been added at the end of these sections.

 

Comment #7

Some measurements, such as the sample mean, variance, and SE in (6)-(8), are not necessary to present.

Response:

Thanks! Since these measurements can be found in previous studies, Equations (6)-(8) have been removed from the manuscript.

 

Comment #8

What is the meaning of the results in (10) and (11)?

Response:

Thanks! Equations (10), (11), (12), and (13) have been modified for better understanding. Their meanings have been added at the end of Section 4.2.

 

Comment #9

Why do references [39-41] appear in the first paragraph of the Introduction?

Response:

Thanks! References 39 and 41 have been modified in the References. References [39-41] are now cited in the first paragraph of Section 2.1.

 

Comment #10

Related work should be briefly discussed.

Response:

Thanks! Related work has been briefly discussed, with modifications in paragraphs 2, 5, and 6 of the Introduction.

Author Response File: Author Response.docx

Round 2

Reviewer 1 Report

- Pixel Location Accuracy -

I understand what you suggested, but I think there is a simpler way, such as calculating the MSE from dilated images.
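[Editor's note] One possible reading of this suggestion, sketched with OpenCV and NumPy; the function and parameter names (mse_against_dilated_gt, kernel_size) are illustrative, not from the paper. The idea: dilate the ground-truth edge map so small localization offsets fall inside a tolerance band, then compute a plain pixel-wise MSE against the prediction.

import cv2
import numpy as np

def mse_against_dilated_gt(pred_edges, gt_edges, kernel_size=3):
    # Dilate the binary ground-truth edge map to create a tolerance band,
    # then compute the pixel-wise MSE between the prediction and that band.
    kernel = np.ones((kernel_size, kernel_size), np.uint8)
    gt_band = cv2.dilate(gt_edges.astype(np.uint8), kernel, iterations=1)
    diff = pred_edges.astype(np.float32) - gt_band.astype(np.float32)
    return float(np.mean(diff ** 2))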

Reviewer 3 Report

Following my extensive comments, the authors have improved the paper. The current form is worth publishing.
