Article
Peer-Review Record

A Method for Detecting the Yarn Roll’s Margin Based on VGG-UNet

by Junru Wang 1, Xiong Zhao 1, Laihu Peng 1,* and Honggeng Wang 2
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Appl. Sci. 2024, 14(17), 7928; https://doi.org/10.3390/app14177928
Submission received: 27 June 2024 / Revised: 26 August 2024 / Accepted: 27 August 2024 / Published: 5 September 2024

Round 1

Reviewer 1 Report

Comments and Suggestions for Authors

The paper titled "A Method for Detecting the Yarn Roll’s Margin Based on VGG-UNet" presents a study that aims to improve image processing methods, validated through comparative experiments. As a result, the accuracy and average intercept were superior to those of the other models used. The subject is of interest to the readers of Applied Sciences, but there are some small errors.

 

1 - The introduction is not complete. It should be improved mainly in the context of detecting the margin of yarn rolls, which is a key step in automated textile production. For example, describe more about traditional visual detection methods.

2 - Figure 2 could be divided into two parts so that the information it contains can be better visualized.

3 - In Figure 3, which shows the VGG-UNet architecture diagram, the information on the cubes cannot be identified.

4 - The equation after Equation 2 should be numbered 3, not 8.

5 - The captions in Figure 9 cannot be read because they are too small.

Comments on the Quality of English Language

The English text needs to be improved as there are several errors.

Author Response

Comment 1: The introduction is not complete. It should be improved mainly in the context of detecting the margin of yarn rolls, which is a key step in automated textile production. For example, describe more about traditional visual detection methods.

Response 1: We thank you for your suggestion to improve the introduction. We have supplemented the background on traditional detection methods in the "Related Work" section, emphasizing the strengths and limitations of these methods in comparison to our proposed deep learning-based approach.

Comments 2 and 3: Figure 2 could be divided into two parts so that the information it contains can be better visualized. In Figure 3, which shows the VGG-UNet architecture diagram, the information on the cubes cannot be identified.

Response 2 and 3: We acknowledge your concerns regarding the visualization of information in Figures 2 and 3. Due to space constraints, we have opted to create a new table (Table 1) that provides detailed information about the models and architectures presented in these figures. This allows for a clearer presentation of the data, addressing the issues raised about the visibility of details in the figures.
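
For readers unfamiliar with the architecture named in the title, a minimal sketch of a VGG16-encoder U-Net in PyTorch follows. The stage splits, decoder widths, and class count are illustrative assumptions, not the configuration reported in the manuscript's Table 1.

```python
# Illustrative VGG-UNet sketch: a VGG16 encoder with a U-Net-style decoder.
# Layer choices are assumptions for demonstration, not the authors' model.
import torch
import torch.nn as nn
from torchvision.models import vgg16

class VGGUNet(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        features = vgg16(weights=None).features  # VGG16 convolutional stages
        # Split the VGG16 backbone at its pooling layers to expose skip features.
        self.enc1 = features[:4]     # 64 ch,  full resolution
        self.enc2 = features[4:9]    # 128 ch, 1/2
        self.enc3 = features[9:16]   # 256 ch, 1/4
        self.enc4 = features[16:23]  # 512 ch, 1/8
        self.enc5 = features[23:30]  # 512 ch, 1/16
        self.up4, self.dec4 = self._up(512, 512), self._conv(1024, 512)
        self.up3, self.dec3 = self._up(512, 256), self._conv(512, 256)
        self.up2, self.dec2 = self._up(256, 128), self._conv(256, 128)
        self.up1, self.dec1 = self._up(128, 64), self._conv(128, 64)
        self.head = nn.Conv2d(64, num_classes, kernel_size=1)

    def _up(self, in_c, out_c):
        return nn.ConvTranspose2d(in_c, out_c, kernel_size=2, stride=2)

    def _conv(self, in_c, out_c):
        return nn.Sequential(
            nn.Conv2d(in_c, out_c, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(out_c, out_c, 3, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        # Input height/width should be multiples of 16 for the skips to align.
        e1, e2 = self.enc1(x), None
        e2 = self.enc2(e1)
        e3 = self.enc3(e2)
        e4 = self.enc4(e3)
        e5 = self.enc5(e4)
        d4 = self.dec4(torch.cat([self.up4(e5), e4], dim=1))
        d3 = self.dec3(torch.cat([self.up3(d4), e3], dim=1))
        d2 = self.dec2(torch.cat([self.up2(d3), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)  # per-pixel class logits
```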

Comment 4: The equation after Equation 2 should be numbered 3, not 8.

Response 4: We apologize for the oversight in the equation numbering. The numbering error following Equation 2 has been corrected.

Comment 5: The captions in Figure 9 cannot be read because they are too small.

Response 5: We understand your concern regarding the small captions in Figure 9. The figure has been redrawn with improved clarity, and the caption size has been increased to ensure readability.

Lastly, as for English language quality, we have thoroughly reviewed and optimized the English throughout the manuscript to enhance clarity and readability.

 

Reviewer 2 Report

Comments and Suggestions for Authors

In this manuscript, a method for detecting the yarn roll’s margin is presented. A semantic segmentation dataset for yarn rolls is introduced, and a new deep learning-based method for detecting the yarn roll’s margin is proposed. In general, the manuscript is well written, the layout is good, and it deserves to be published. Here are some comments:

1 – Regarding the dataset, I suggest more discussion about it; is it in a public repository? I think this dataset could be interesting for other scientists.

2 – In the same way, I can see the dataset only considers white yarn. Can the proposed approach be used with other yarn colors? Why did the authors only consider white yarn? I recommend more discussion about that.

3 – Regarding implementation details, I can see “good” experimental results, but in a real-world scenario, would a generic camera be connected to a PC running the proposed model? I think a smart camera implementation could be more feasible and suitable for industrial applications, but in that context, does the model allow an embedded implementation using dedicated hardware (GPGPU/FPGA/ASIC)? If not, could embedded sequential processors such as ARM provide enough computational resources? A discussion of this could be a nice complement to the current manuscript.

4 – Results in terms of processing speed are missing. I imagine an industrial scenario in which the roll becomes empty in a few minutes, so real-time processing would be required. Or do typical scenarios take several minutes or hours for the roll to become empty, so that real-time estimation is not required?

Comments on the Quality of English Language

1 – There are some minor grammatical and style errors. I suggest a detailed revision of the English language.

Author Response

Comment 1: Regarding the dataset, I suggest more discussion about it; is it in a public repository? I think this dataset could be interesting for other scientists.

Response 1: We acknowledge the importance of making datasets publicly available to support further research. The dataset used in our study was collected and created by our team specifically for industrial applications. Due to the proprietary nature of the data collected in an industrial setting, we are currently unable to make the dataset publicly available. However, we are open to sharing the dataset with other researchers who have reasonable and justified requests. Interested parties can contact us directly for access.

Comment 2: In the same way, I can see the dataset only considers white yarn. Can the proposed approach be used with other yarn colors? Why did the authors only consider white yarn? I recommend more discussion about that.

Response 2: The focus on white yarn in our study is due to the industrial environment where the primary production involves white base fabric, resulting in a higher prevalence of white yarn. Additionally, white yarn poses more significant challenges in image processing due to reflections and other environmental factors in an open factory setting. The dataset also includes some examples of blue yarn rolls. We acknowledge the reviewer's point regarding the generalization of our method to other yarn colors. We will consider expanding our dataset and experiments to include different yarn colors in future research.

Comment 3: Regarding implementation details, I can see “good” experimental results, but in a real-world scenario, would a generic camera be connected to a PC running the proposed model? I think a smart camera implementation could be more feasible and suitable for industrial applications, but in that context, does the model allow an embedded implementation using dedicated hardware (GPGPU/FPGA/ASIC)? If not, could embedded sequential processors such as ARM provide enough computational resources? A discussion of this could be a nice complement to the current manuscript.

Response 3: We appreciate your suggestion regarding the use of smart cameras. Currently, our industrial setup utilizes Hikvision Industrial Cameras due to their cost-effectiveness. While we agree that smart cameras could offer better performance for industrial applications, the decision to use conventional cameras was made to balance performance and cost in an industrial setting. We will explore the potential of smart cameras in future research to determine if they provide significant advantages over the current setup.

Comment 4: Results in terms of processing speed are missing. I imagine an industrial scenario in which the roll becomes empty in a few minutes, so real-time processing would be required. Or do typical scenarios take several minutes or hours for the roll to become empty, so that real-time estimation is not required?

Response 4: We have taken note of your concern about the processing speed and real-time capabilities of our model. We have now included a discussion on processing speed in the Conclusions section of the manuscript. While the speed at which yarn rolls deplete varies in industrial settings and no specific time frame is universally applicable, our current model's processing speed is sufficient to meet real-time detection requirements. We will continue to monitor and improve this aspect in future iterations of our work.
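
As context for what verifying "sufficient to meet real-time detection requirements" typically involves, a minimal latency-measurement sketch is shown below, reusing the hypothetical VGGUNet sketch above. The input resolution, warm-up count, and run count are assumptions, not the timing setup reported in the manuscript.

```python
# Illustrative per-frame latency measurement for a segmentation model.
# Model, input size, and iteration counts are assumptions for demonstration.
import time
import torch

model = VGGUNet(num_classes=2).eval()   # hypothetical model from the sketch above
frame = torch.randn(1, 3, 512, 512)     # one camera frame, assumed resolution

with torch.no_grad():
    for _ in range(5):                  # warm-up runs (excluded from timing)
        model(frame)
    runs = 50
    start = time.perf_counter()
    for _ in range(runs):
        model(frame)
    latency = (time.perf_counter() - start) / runs

print(f"average latency: {latency * 1000:.1f} ms ({1.0 / latency:.1f} FPS)")
```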

Lastly, as for English language quality, we have thoroughly reviewed and optimized the English throughout the manuscript to enhance clarity and readability.

 
