Article
Peer-Review Record

M-SKSNet: Multi-Scale Spatial Kernel Selection for Image Segmentation of Damaged Road Markings

Remote Sens. 2024, 16(9), 1476; https://doi.org/10.3390/rs16091476
by Junwei Wang 1,2, Xiaohan Liao 1,*, Yong Wang 1, Xiangqiang Zeng 1,2, Xiang Ren 1, Huanyin Yue 1 and Wenqiu Qu 1,2
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Submission received: 25 January 2024 / Revised: 6 April 2024 / Accepted: 15 April 2024 / Published: 23 April 2024

Round 1

Reviewer 1 Report

Comments and Suggestions for Authors

The paper is well-structured and presents a significant contribution to the field of remote sensing and road damage detection. The M-SKSNet model is well-described with a clear explanation of its components.

Please consider the following suggestions:

1. It was mentioned that some photos in the dataset were collected at night. Please provide specific information on how many were collected during the day and how many were collected at night, and how you collected the data. It is interesting because most of the current datasets are captured during the day.

2. Discuss the potential biases or limitations of the dataset and how it compares with existing datasets in terms of diversity and representativeness. Also, in the paper it is mentioned "The data presented in this study are openly available in www.gxdata.com." But the website is now for sale.

3. The networks selected for benchmarking in the study are somewhat dated. Qiu's research, which evaluated four different deep learning networks for detecting road cracks at night, found that ResNeSt and ConvNext were more effective. It is recommended to experiment with these two networks or explore other advanced deep learning models developed after 2022 for potentially better results.

Qiu, Zhouyan, et al. "A novel low-cost multi-sensor solution for pavement distress segmentation and characterization at night." International Journal of Applied Earth Observation and Geoinformation 120 (2023): 103331.

Liu, Zhuang, et al. "A convnet for the 2020s." Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2022.

Xie, Enze, et al. "SegFormer: Simple and efficient design for semantic segmentation with transformers." Advances in Neural Information Processing Systems 34 (2021): 12077-12090.

4. Extend the conclusion: Suggest specific areas for future research, such as integrating the model with other types of road infrastructure analysis.

5. In Table 5, there is no point in leaving so many digits after the decimal point.

6. Some paragraphs are shaded, please check the formatting.

7. The main contributions of this paper -- This part is too long. Shorten each point.

8. Section title 

Road Damage Marking Data Set Research -> Road Damage Marking Dataset

Methods Based on Deep Learning -> Deep learning based methods

9. In the first two sections, it is obvious that some background text was generated by AI. Please re-read those parts and rewrite the complicated sentences.

10. Some typos:

road marking damage based on Baidu” 's public data set as CDM-P,

4.2. Transformer (Encoder)- Use English ()

Comments on the Quality of English Language

Please rewrite those complex sentences that appear to be generated by AI.

Author Response

3. Point-by-point response to Comments and Suggestions for Authors

Comments 1: It was mentioned that some photos in the dataset were collected at night. Please provide specific information on how many were collected during the day and how many were collected at night, and how you collected the data. It is interesting because most of the current datasets are captured during the day.

Response 1: Agree. Regarding the data collection for our semantic segmentation project, we utilized proprietary datasets for both highway and urban road environments.

The CDM-H and CDM-C datasets were gathered by our team using the LiMobile M1 system, capturing images with a high-definition resolution of 3520 × 1080 pixels. On the other hand, the CDM-P dataset is compiled from the Apollo Scape public repository. The CDM-H dataset encompasses data from Fuzhou, Chongqing, and Wuhan, producing a total of 35,565 images. Collection activities were conducted in each city, lasting around an hour per session and spanning 70 kilometers. As for the CDM-C dataset, it focuses on the urban streets of Wuhan and Shanghai. Data collection was a one-time event in each city, lasting roughly 30 minutes per session and covering 40 kilometers, resulting in 6,838 images.

Nighttime collection was chosen to minimize visual distractions and improve the visibility of road markings.

The CDM-P dataset, sourced from Apollo Scape and captured using the VMX-1HA device, showcases the urban landscape of Beijing with 4,274 images, each boasting a resolution of 1920 × 1080 pixels. Through the release of the CDM-P dataset, we aim to provide a valuable resource that underscores the versatility and applicability of our research in real-world scenarios.

The revised manuscript provides specific details regarding the datasets, including the acquisition equipment, image resolution, and collection times.

Thank you for pointing this out. We agree with this comment. This change can be found on page 4, paragraph 4, lines 162-173.

Comments 2: Discuss the potential biases or limitations of the dataset and how it compares with existing datasets in terms of diversity and representativeness. Also, in the paper it is mentioned: "The data presented in this study are openly available at www.gxdata.com." But the website is on sale now.

Response 2: Agree. We acknowledge the limitations and biases inherent in our dataset and compare its diversity and representativeness with existing datasets. These changes can be found on page 3, paragraph 1, lines 96-99; page 5, paragraphs 1-4, lines 196-205; and page 6, paragraphs 1-4, lines 206-223. We apologize for the inconvenience caused by the issues with the previously provided website. We have now re-uploaded the dataset to the Science Data Bank (www.scidb.cn), and it is currently under review. Once approved, the dataset will be publicly available. The Science Data Bank is a reliable and open platform dedicated to the global sharing of scientific data. Thank you for your understanding and support. This change can be found on page 17, paragraph 6, line 620.

Comments 3: The networks selected for benchmarking in the study are somewhat dated. Qiu's research, which evaluated four different deep-learning networks for detecting road cracks at night, found that ResNeSt and ConvNext were more effective. It is recommended to experiment with these two networks or explore other advanced deep learning models developed after 2022 for potentially better results.

Qiu, Zhouyan, et al. "A novel low-cost multi-sensor solution for pavement distress segmentation and characterization at night." International Journal of Applied Earth Observation and Geoinformation 120 (2023): 103331.

Liu, Zhuang, et al. "A convnet for the 2020s." Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2022.

Xie, Enze, et al. "SegFormer: Simple and efficient design for semantic segmentation with transformers." Advances in Neural Information Processing Systems 34 (2021): 12077-12090.

Response 3: Agree. We appreciate the reviewer's suggestion regarding the selection of deep learning networks for benchmarking. In response, we have included a discussion of more recent models such as ResNeSt and ConvNext, as recommended by Qiu et al. (2023). We have also considered exploring other advanced deep learning models, such as SegFormer (Xie et al., 2021), for potential performance improvements. Thank you for pointing this out. We agree with this comment. This change can be found on page 10, paragraph 2, lines 363-364.

Comments 4: Extend the conclusion: Suggest specific areas for future research, such as integrating the model with other types of road infrastructure analysis.

Response 4: Agree. We have expanded the conclusion to include suggestions for future research directions, as requested. We provide an extended discussion of potential areas for future exploration, including integrating the model with other types of road infrastructure analysis. We believe this expansion enriches the conclusion and provides valuable insights for further research endeavours. Thank you for pointing this out. We agree with this comment. This change can be found on page 17, paragraph 5, lines 605-609.

Comments 5: In Table 5, there is no point in leaving so many digits after the decimal point.

Response 5: Agree. We have revised Table 5 to align with the reviewer's suggestion regarding the number of digits after the decimal point. Specifically, we have adjusted the precision to two decimal places in Table 5, as recommended. Thank you for pointing this out. We agree with this comment. This change can be found on page 16, paragraph 4, lines 577-578.

Comments 6: Some paragraphs are shaded, please check the formatting.

Response 6: Agree. Thank you for bringing this to our attention. We agree with this comment. We have carefully reviewed the manuscript and rectified the formatting issue regarding shaded paragraphs. The revised manuscript now ensures consistent formatting throughout.

Comments 7: The main contributions of this paper -- This part is too long. Shorten each point.

Response 7: Agree. We have revisited the main contributions section and condensed each point for brevity. Thank you for pointing this out. We agree with this comment.  The revised version can be found on page 2, paragraphs 4-6,  lines 70-77.

Comments 8: Road Damage Marking Data Set Research -> Road Damage Marking Dataset

Methods Based on Deep Learning -> Deep learning based methods

Response 8: Agree. Following the reviewer's suggestion, we have updated the section titles accordingly. The revised titles now read "Road Damage Marking Dataset" and "Deep learning-based methods," as recommended. Thank you for pointing this out. We agree with this comment. These changes can be found on page 3, paragraph 4, lines 103 and 131.

Comments 9: In the first two sections, it is obvious that some backgrounds are generated by AI. Please re-read the parts and re-write those complicated sentences.

Response 9: Agree. We have carefully reviewed and revised the background sections to ensure clarity and readability. Complicated sentences have been simplified for better understanding, as per the reviewer's suggestion. Thank you for pointing this out. We agree with this comment. The revised sections can be found at the beginning of the manuscript.

Comments 10: Some typos:

road marking damage based on Baidu” 's public data set as CDM-P,

4.2. Transformer Encoder- Use English ()

Response 10: Agree. Thank you for bringing these typos to our attention. We agree with this comment. We have changed the Chinese parentheses to English parentheses and the Chinese quotation marks to English quotation marks. Additionally, we have reviewed and corrected other spelling and formatting issues throughout the text.
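For readers curious how such a punctuation cleanup can be automated, the following is a minimal, hypothetical sketch (not the authors' actual workflow) that maps full-width Chinese parentheses and quotation marks to their English equivalents:

```python
# Hypothetical helper: map full-width (Chinese) punctuation to English equivalents.
FULLWIDTH_TO_ASCII = str.maketrans({
    "（": "(",   # full-width left parenthesis
    "）": ")",   # full-width right parenthesis
    "“": '"',    # left double quotation mark
    "”": '"',    # right double quotation mark
})

def normalize_punctuation(text: str) -> str:
    """Return the text with Chinese-style punctuation replaced by English punctuation."""
    return text.translate(FULLWIDTH_TO_ASCII)

# Example: normalize_punctuation("4.2. Transformer（Encoder）") -> "4.2. Transformer(Encoder)"
```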

4. Response to Comments on the Quality of English Language

Point 1: Please rewrite those complex sentences that appear to be generated by AI.

Response 1:  

We have conducted a thorough review of the manuscript to address the formatting and grammar issues highlighted. Additionally, we have thoroughly reviewed the quality of the English language throughout the manuscript, making necessary improvements for clarity and correctness.

 

Author Response File: Author Response.pdf

Reviewer 2 Report

Comments and Suggestions for Authors

A Multi-scale Spatial Kernel Selection Net named M-SKSNet was proposed to accurately segment damaged road markings from images. It integrates a Transformer and a Multi-dilated Large Kernel convolutional neural network (MLKC) block. The results demonstrate its effectiveness in extracting damaged road markings from images in various complex scenarios. However, the introduction needs to be revised, the typographical errors need to be corrected, the paper content should be polished, and the language needs further revision. These issues must be addressed and require major revisions.

 

1. Some abbreviations are not introduced at their first use.

2. The introduction seems light and not rich enough. The research status of DL-based methods is lacking and should be supplemented. Some key papers about segmentation models in other fields should be discussed:

a) Automatic pixel-level detection of vertical cracks in asphalt pavement based on GPR investigation and improved mask R-CNN. https://doi.org/10.1016/j.autcon.2022.104689

b) Deep learning framework for intelligent pavement condition rating: A direct classification approach for regional and local roads. https://doi.org/10.1016/j.autcon.2023.104945

c) ISTD-PDS7: A Benchmark Dataset for Multi-Type Pavement Distress Segmentation from CCD Images in Complex Scenarios. https://doi.org/10.3390/rs15071750

3. Datasets:

a) The dataset is the key innovation in this paper. Did you create the CDM or was it open source? There seems to be a contradiction.

b) Why are they divided according to a ratio of 8:1?

c) How were these images captured? By road vehicle or drone? How long did it take? These are the things readers care about.

d) What preprocessing did you do to the original data set?

4. The grammar in the essay should be checked thoroughly.

5. You say that your model is robust; how was that illustrated?

6. Part of the logic of the experimental analysis is not clear enough, so it is suggested to divide the section with subheadings from different angles.

7. The overall quality of the pictures in this article is very poor. It is recommended to use vector images.

8. Table 1: The units of the evaluation indicators should be indicated in the table. How were the indicators of the other models obtained? If they are cited from elsewhere, they should be strictly cited.

9. There are some shadows in Section 5.3.1. Is it a typographical error or does it have special meaning?

10. Section 6 still contains experimental results and analysis, which is not suitable for a separate section and should be integrated into Section 5.

11. How stable and robust is your proposed model? How does it perform on other datasets? Is it as good as on your own dataset? From the current experimental results alone, I cannot see the superiority of your model, so it is suggested to evaluate the model from additional angles.

12. Limitations of the study should be appropriately mentioned in the conclusion.

13. Authors should strictly follow the paper template on the official website of the conference for formatting.

Comments on the Quality of English Language

Moderate editing of English language required

Author Response

3. Point-by-point response to Comments and Suggestions for Authors

Comments 1: Some abbreviations are not introduced at the first time.

Response 1: Agree. Thank you for pointing out this oversight. We have ensured that all abbreviations are introduced upon their first use in the manuscript. The necessary introductions have been added for clarity, as per the reviewer's suggestion.

Comments 2: The introduction seems light and not rich enough. The research status of DL-based methods is lacking and should be supplemented. Some key papers about segmentation models in other fields should be discussed:

a) Automatic pixel-level detection of vertical cracks in asphalt pavement based on GPR investigation and improved mask R-CNN. https://doi.org/10.1016/j.autcon.2022.104689

b) Deep learning framework for intelligent pavement condition rating: A direct classification approach for regional and local roads. https://doi.org/10.1016/j.autcon.2023.104945

c) ISTD-PDS7: A Benchmark Dataset for Multi-Type Pavement Distress Segmentation from CCD Images in Complex Scenarios. https://doi.org/10.3390/rs15071750

Response 2: Agree. We appreciate the suggestion to enrich the introduction with additional details on DL-based methods and relevant segmentation models in other fields. In response, we have supplemented the introduction with discussions of the key papers mentioned (a-c). Thank you for pointing this out. We agree with this comment. These additions can be found on page 2, paragraphs 1-2, lines 49-52 and 64-67.

 

Comments 3: Datasets:

a) The dataset is the key innovation in this paper. Did you create the CDM or was it open source? There seems to be a contradiction.

Response 3: Agree. We apologize for any confusion regarding the dataset. The CDM-H and CDM-C datasets were gathered by our team using the LiMobile M1 system, capturing images with a high-definition resolution of 3520 × 1080 pixels. On the other hand, the CDM-P dataset is compiled from the Apollo Scape public repository. The CDM-H dataset encompasses data from Fuzhou, Chongqing, and Wuhan, producing a total of 35,565 images. Collection activities were conducted in each city, lasting around an hour per session and spanning 70 kilometers. As for the CDM-C dataset, it focuses on the urban streets of Wuhan and Shanghai. Data collection was a one-time event in each city, lasting roughly 30 minutes per session and covering 40 kilometers, resulting in 6,838 images. The CDM-P dataset, sourced from Apollo Scape and captured using the VMX-1HA device, showcases the urban landscape of Beijing with 4,274 images, each boasting a resolution of 1920 × 1080 pixels. Thank you for pointing this out. We agree with this comment. We have clarified this aspect on page 4, paragraph 4, lines 162-173.

b) Why are they divided according to a ratio of 8:1?

Response 3: Agree. The division of the datasets into an 8:1 ratio was based on standard practice to ensure a balanced distribution of training and testing data. This ratio allows for comprehensive model training while maintaining a sufficient amount of data for evaluation. Thank you for pointing this out. We agree with this comment. We have added this clarification on page 9, paragraph 4, lines 329-339.
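As an illustration of the 8:1 split described above, a minimal sketch is shown below; the directory layout, file extension, and random seed are assumptions for illustration only and do not reflect the authors' actual pipeline:

```python
import random
from pathlib import Path

def split_8_to_1(image_dir: str, seed: int = 42):
    """Shuffle image files and split them into training and test sets at an 8:1 ratio."""
    images = sorted(Path(image_dir).glob("*.png"))  # assumed image format
    random.Random(seed).shuffle(images)
    cut = round(len(images) * 8 / 9)  # 8 parts for training, 1 part for testing
    return images[:cut], images[cut:]

# Hypothetical usage:
# train_files, test_files = split_8_to_1("CDM-H/images")
```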

c) How were these images captured? By road vehicle or drone? How long did it take? These are the things readers care about.

Response 3: Agree. Thank you for highlighting these important details. The images were captured using vehicle-mounted mobile measurement systems, namely the LiMobile M1 and the VMX-1HA. The CDM-H and CDM-C datasets were gathered by our team using the LiMobile M1 system, capturing images with a high-definition resolution of 3520 × 1080 pixels. On the other hand, the CDM-P dataset is compiled from the Apollo Scape public repository. The CDM-H dataset encompasses data from Fuzhou, Chongqing, and Wuhan, producing a total of 35,565 images. Collection activities were conducted in each city, lasting around an hour per session and spanning 70 kilometers. As for the CDM-C dataset, it focuses on the urban streets of Wuhan and Shanghai. Data collection was a one-time event in each city, lasting roughly 30 minutes per session and covering 40 kilometers, resulting in 6,838 images. The CDM-P dataset, sourced from Apollo Scape and captured using the VMX-1HA device, showcases the urban landscape of Beijing with 4,274 images, each boasting a resolution of 1920 × 1080 pixels. Thank you for pointing this out. We agree with this comment. Further information on the data capture process, including the equipment used and the time taken, has been provided on page 4, paragraph 4, lines 162-173.

d) What preprocessing did you do to the original data set?

Response 3: Agree. We appreciate the inquiry regarding data preprocessing. Initially, the dataset underwent meticulous curation to isolate instances of damaged road markings. This was followed by manual annotation and cropping to enhance extraction precision. Thank you for pointing this out. We agree with this comment. Details of the preprocessing steps have been included on page 4, paragraphs 5-6, lines 174-188.

Comments 4: The grammar in the essay should be checked thoroughly.

Response 4: Agree. We have conducted a thorough review of the manuscript to identify and correct any grammatical errors. All modifications have been made to ensure grammatical accuracy and clarity, as per the reviewer's suggestion.

Comments 5: You say that your model is robust, how was that illustrated?

Response 5: Agree. The robustness of our model has been demonstrated through comprehensive testing across multiple datasets, including internal and publicly available datasets. Additionally, qualitative analysis has been conducted to assess the model's performance under various scenarios. Thank you for pointing this out. We agree with this comment. Further details on model evaluation and robustness testing have been provided on page 13, paragraph 2, lines 463-479.

Comments 6: Part of the logic of the experimental analysis is not clear enough, so it is suggested to divide the section with subheadings from different angles.

Response 6: Agree. Thank you for the suggestion to improve the clarity of the experimental analysis. We have divided the experimental analysis into subsections to provide a clearer presentation of the results from different perspectives. Thank you for pointing this out. We agree with this comment. These modifications can be found on page 10, paragraph 3, line 374 and page 12, paragraph 4, line 462.

Comments 7: The overall quality of the pictures in this article is very poor. It is recommended to use vector images.

Response 7: Agree. We acknowledge the feedback regarding image quality and have taken steps to enhance the visual presentation of the article. Where applicable, vector images have been incorporated to improve clarity and readability. This improvement can be observed throughout the manuscript.

Comments 8: Table 1: The units of the evaluation indicators should be indicated in the table. How were the indicators of the other models obtained? If they are cited from elsewhere, they should be strictly cited.

Response 8: Agree. Thank you for your attention to detail regarding Table 1. We have added units to the evaluation indicators for clarity. Additionally, we have ensured that the sources of indicators for other models are cited appropriately following academic standards. Thank you for pointing this out. We agree with this comment. These revisions can be found on page 10, paragraph 7, line 395.

Comments 9: There are some shadows in Section 5.3.1. Is it a typographical error or does it have special meaning?

Response 9: Agree. We apologize for any confusion caused by the shadows in Section 5.3.1. These shadows were unintended and have been removed as per your observation. Additionally, we have conducted a thorough review of the entire manuscript to ensure consistent formatting and addressed any other formatting issues that may have arisen. Thank you for pointing this out. We agree with this comment. These changes have been made throughout the manuscript for clarity and consistency.

Comments 10: Section 6 still contains experimental results and analysis, which is not suitable for a separate section and should be integrated into Section 5.

Response 10: Agree. We appreciate your feedback regarding the organization of the manuscript. Following your suggestion, we have merged Section 6 into Section 5 to improve the flow and coherence of the document. This revision ensures that the experimental results and analysis are presented seamlessly within the same section for better clarity and readability.

Comments 11: How stable and robust is your proposed model? How does it perform on other datasets? Is it as good as on your own dataset? From the current experimental results alone, I cannot see the superiority of your model, so it is suggested to evaluate the model from additional angles.

Response 11: Agree. Thank you for raising this important point regarding the stability and robustness of our proposed model. We have addressed this concern by providing additional insights into the model's performance on other datasets and scenarios. Through comprehensive evaluation and comparison with existing models, we have demonstrated the superiority of our model not only on our dataset but also on external datasets. Thank you for pointing this out. We agree with this comment. This information has been included in Section 5.3.1 and 5.3.2.

Comments 12: Limitations of the study should be appropriately mentioned in the conclusion.

Response 12: Agree. We agree that it's essential to acknowledge the limitations of our study in the conclusion. We have included a section outlining the limitations of our research, highlighting areas for future improvement and research directions. Thank you for pointing this out. We agree with this comment. This addition can be found on page 17, paragraph 3, lines 600-609.

Comments 13: Authors should strictly follow the paper template on the official website of the conference for formatting.

Response 13: Agree. We have carefully reviewed the paper template provided on the official website of the conference and ensured that our manuscript adheres to the specified formatting guidelines. Any deviations from the template have been rectified to maintain consistency and compliance with the conference requirements.

 

4. Response to Comments on the Quality of English Language

Point 1: Moderate editing of English language required

Response 1:  

We have conducted a thorough review of the manuscript to address the formatting and grammar issues highlighted. Additionally, we have thoroughly reviewed the quality of the English language throughout the manuscript, making necessary improvements for clarity and correctness.

 

Author Response File: Author Response.pdf

Reviewer 3 Report

Comments and Suggestions for Authors

Suggested revision: Major

The present manuscript proposes a new model, M-SKSNet, for identifying damage to road markings, a method that integrates the strengths of both CNN and Transformer models as well as a multi-scale spatial selection mechanism. The structure follows the specifications of the journal, and the topic is of interest to its readers. However, there are some points that need to be addressed.

Please refer to the following comments and suggestions:

1. English language needs to be improved.

2. The abstract should contain more details: specifically, provide more information indicating the main research purpose of detecting damaged road markings and include values for the accuracy metrics obtained with the proposed method, highlighting its performance.

3. The title should be: “Multi-Scale Spatial Kernel Selection for Image Segmentation of Damaged Road Markings” or “…for Damage Detection of Road Markings”.

4. Throughout the manuscript, the formulation should be “Road Marking Damage Detection”, not “Road Damage Marking”, because it is the road markings that are damaged. Please correct this everywhere (e.g., the title of Section 3 and the name of the dataset CDM: Chinese Damaged Road Markings Dataset).

5. More examples for the quality assessment:

 

Line 69: a novel method for damaged road marking detection

Line 104: strongly relies on ..

Line 122: damaged road markings scenarios

Line 147: add space between “model which”

Lines 175, 180, 197: replace “study’s method” with “the method proposed in this study”

Line 368: “damaged road marking”

Line 444: add space between “evaluates the”

Comments on the Quality of English Language

English language should be improved.

Author Response

3. Point-by-point response to Comments and Suggestions for Authors

Comments 1: English language needs to be improved.

Response 1: Agree. Thank you for pointing out this oversight. We have meticulously revised the manuscript to enhance the quality of the English language according to the journal's guidelines. Changes have been implemented throughout the document to ensure clarity and correctness of language.

Comments 2: The abstract should contain more details: specifically provide more information indicating the main research purpose of detecting the damaged road markings and include values for the accuracy metrics with the proposed method, highlighting its performance.

Response 2: Agree. Thank you for your suggestion; we have enriched the abstract by including specific details regarding the main research purpose of detecting damaged road markings. Specifically, M-SKSNet demonstrated the highest improvement in F1 by 3.77% and in IOU by 4.6% compared to other models. Thank you for pointing this out. We agree with this comment. These revisions can be found on page 1, paragraph 1, lines 19-21.
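For context on the reported metrics, the sketch below shows how pixel-wise F1 and IoU are commonly computed for binary segmentation masks; the function name and epsilon guard are illustrative and are not the authors' exact evaluation code:

```python
import numpy as np

def f1_and_iou(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-9):
    """Compute pixel-wise F1 and IoU for binary (0/1) segmentation masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()    # true positive pixels
    fp = np.logical_and(pred, ~gt).sum()   # false positive pixels
    fn = np.logical_and(~pred, gt).sum()   # false negative pixels
    precision = tp / (tp + fp + eps)
    recall = tp / (tp + fn + eps)
    f1 = 2 * precision * recall / (precision + recall + eps)
    iou = tp / (tp + fp + fn + eps)
    return f1, iou
```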

Comments 3: The title should be: “Multi-Scale Spatial Kernel Selection for Image Segmentation of Damaged Road Markings” or “…for Damage Detection of Road Markings”.

Response 3: Agree. We have revised the title of the manuscript based on your recommendation. The title now reads "Multi-Scale Spatial Kernel Selection for Image Segmentation of Damaged Road Markings," aligning it more closely with the focus of our research.

Comments 4: Throughout the manuscript, the formulation should be “Road Marking Damage Detection”, not “Road Damage Marking”, because it is the road markings that are damaged. Please correct this everywhere (e.g., the title of Section 3 and the name of the dataset CDM: Chinese Damaged Road Markings Dataset).

Response 4: Agree. Throughout the manuscript, we have corrected “Road Damage Marking” to “Damaged Road Marking”.

Comments 5: More examples for the quality assessment:

 Line 69: a novel method for damaged road marking detection

Line 104: strongly relies on.

Line 122: damaged road markings scenarios

Line 147: add space between “model which”

Lines 175, 180, 197: replace “study’s method” with “the method proposed in this study”

Line 368: “damaged road marking”

Line 444: add space between “evaluates the”

Response 5: Agree. We have conducted a thorough review of the manuscript to address the formatting and grammar issues highlighted. Additionally, we have thoroughly reviewed the quality of the English language throughout the manuscript, making necessary improvements for clarity and correctness.

 

4. Response to Comments on the Quality of English Language

Point 1: English language needs to be improved

Response 1:  

We have conducted a thorough review of the manuscript to address the formatting and grammar issues highlighted. Additionally, we have thoroughly reviewed the quality of the English language throughout the manuscript, making necessary improvements for clarity and correctness.

 

Author Response File: Author Response.pdf

Round 2

Reviewer 1 Report

Comments and Suggestions for Authors

I have no other questions. Thank you for the improvements.

Comments on the Quality of English Language

Do check all Chinese parentheses again.

Author Response

We are grateful for your meticulous review and the identification of the oversight regarding the use of Chinese parentheses. Upon receiving your feedback, we immediately rectified the specific instances in Table 5 at line 574, converting all Chinese parentheses to their English counterparts for ‘Params’ and ‘Throughput’. To prevent any recurrence and uphold the quality of our manuscript, we have conducted an exhaustive review of the entire document, paying special attention to punctuation consistency. We have implemented a rigorous proofreading process to ensure that all punctuation marks now adhere to the standard English format. We are committed to maintaining these high standards in all future scholarly work.

Author Response File: Author Response.pdf

Reviewer 2 Report

Comments and Suggestions for Authors

My comments on the initial version of the manuscript have been sufficiently addressed by the authors in this revised version. I have no further comments on the technical aspects. The manuscript may be considered for publication after a proofreading.

Author Response

We sincerely appreciate the time and effort you have invested in reviewing our manuscript. Your feedback has been instrumental in enhancing the quality of our work. We will ensure that the manuscript undergoes thorough proofreading to meet the publication standards. Thank you once again for your constructive comments and for considering our manuscript for publication. We wish you all the best in your endeavors.
