Article
Peer-Review Record

Apple Surface Defect Detection Method Based on Weight Comparison Transfer Learning with MobileNetV3

Agriculture 2023, 13(4), 824; https://doi.org/10.3390/agriculture13040824
by Haiping Si 1, Yunpeng Wang 1, Wenrui Zhao 1, Ming Wang 1, Jiazhen Song 1, Li Wan 1, Zhengdao Song 1, Yujie Li 1, Bacao Fernando 2 and Changxia Sun 1,*
Reviewer 1:
Reviewer 2:
Submission received: 2 March 2023 / Revised: 30 March 2023 / Accepted: 31 March 2023 / Published: 3 April 2023

Round 1

Reviewer 1 Report

The manuscript contains original and interesting results. The proposed procedure turned out to be very effective in detecting apple surface defects.

Why was only "Yantai Red Fuji" used?

lines 180-185: Were natural defects also investigated?

The beginning of section 3. Experimental results looks like it should be in the Material and Methods section.

The heading of section 2 is missing.

3.2.6. Parameter (M): It should be explained in more detail.

The results in Table 6 are not sufficiently described and discussed.

Future research should be indicated in more detail.

It would be recommended to test the developed procedure for more sets of apples belonging to different varieties.

Some errors in the text and Figures should be corrected.

Author Response

Dear editor:

We apologize for the late reply to your review comments. We respond to each comment point by point below.

Question 1: Why was only "Yantai Red Fuji" used?

Answer 1: The main sample used in this study is "Yantai Red Fuji" because it is widely available in our city and easy to obtain; it was therefore chosen as the main experimental sample.

Question 2: lines 180-185: Were natural defects also investigated?

Answer 2: The research in this paper focuses mainly on apple surface defects caused by mechanical damage, chiefly scrapes and scratches. However, because of the special characteristics of fused infrared and visible images, common natural defects on the fruit surface, such as rot and insect spots, still leave obvious feature information in the fused images. In subsequent research, we plan to expand the number of defect types, identify defect classes during detection, and provide high-quality suggestions for subsequent fruit cultivation and management.

Question 3: The beginning of section 3. Experimental results looks like it should be in the Material and Methods section.

Answer 3: We have made adjustments according to your comments.

Question 4: The heading of section 2 is missing.

Answer 4: We have added the heading of Section 2 according to your suggestion.

Question 5: 3.2.6. Parameter (M): It should be explained in more detail.

Answer 5: We have revised this part according to your suggestion; for details about the modification, see Lines 500-510.

Question 6: The results in Table 6 are not sufficiently described and discussed.

Answer 6: Table 6 has been renumbered as Table 8, and the experimental analysis is as follows:

As the experimental results in Table 8 show, the number of model parameters increases as the number of frozen layers decreases. At the same time, the model's evaluation indexes (accuracy, precision, recall, and F1-score) increase, and the overall performance of the model improves. When only the first 4 network layers remain frozen, the model achieves the best performance, with an accuracy of 96.8%, a precision of 96.45%, a recall of 96.6%, an F1-score of 96.48%, 6.71 M parameters, and a Ts of 22.77 ms. Because the number of frozen layers affects how the model learns feature information, the model with the best overall performance (the one with only the first 4 network layers frozen) is selected as the fine-tuned model for this study.
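For readers unfamiliar with this freezing strategy, the minimal sketch below illustrates the general idea. It assumes PyTorch and torchvision; the block indices, class count, and optimizer settings are illustrative assumptions and are not the exact WC-MobileNetV3 configuration used in the paper.

```python
# Minimal sketch of partial layer freezing for fine-tuning MobileNetV3 (PyTorch/torchvision).
# FROZEN_BLOCKS and NUM_CLASSES are illustrative assumptions, not the paper's exact settings.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 2      # assumed: defective vs. sound apples
FROZEN_BLOCKS = 4    # keep only the first 4 feature blocks frozen, as discussed above

model = models.mobilenet_v3_large(weights=models.MobileNet_V3_Large_Weights.DEFAULT)

# Replace the classifier head to match the number of defect classes.
in_features = model.classifier[-1].in_features
model.classifier[-1] = nn.Linear(in_features, NUM_CLASSES)

# Freeze the first FROZEN_BLOCKS feature blocks; the remaining layers stay trainable.
for idx, block in enumerate(model.features):
    if idx < FROZEN_BLOCKS:
        for param in block.parameters():
            param.requires_grad = False

# Optimize only the parameters that still require gradients.
trainable_params = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.Adam(trainable_params, lr=1e-4)
```

Only the unfrozen parameters are passed to the optimizer, so the frozen blocks retain their pretrained weights while the rest of the network is fine-tuned on the apple images.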

Question 7: Future research should be indicated in more detail.

Answer 7: Future research directions are now described in detail in the Discussion and Conclusion.

The research in this paper focuses mainly on apple surface defects caused by mechanical damage, chiefly scrapes and scratches. However, because of the special characteristics of fused infrared and visible images, common natural defects on the fruit surface, such as rot and insect spots, still leave obvious feature information in the fused images. In subsequent research, we plan to expand the number of defect types, identify defect classes during detection, and provide high-quality suggestions for subsequent fruit cultivation and management. In addition, the quality of the acquired images is limited by the low precision of the infrared camera in the dual-light camera used in this study, although the experiments in Sub-section 3.5 still demonstrate the effectiveness of infrared and visible image fusion for apple surface defect detection. In future research, a new dual-light camera will be constructed from a visible-light industrial camera and a thermal infrared industrial camera of suitable precision to acquire high-quality infrared and visible images. The RFN-Nest algorithm fuses infrared and visible images of apples with a good fusion effect, but it still has many deficiencies for fruit moving at high speed on a sorting line. In future research, we will consider introducing the idea of infrared and visible image fusion into the object detection model to achieve real-time image fusion and defect detection. In addition, more high-quality infrared and visible images of apples will be acquired to train the deep learning models in subsequent studies, in the expectation of obtaining models with better generalization and detection performance for detecting surface defects across most apple varieties.

Question 8: It would be recommended to test the developed procedure for more sets of apples belonging to different varieties.

Answer 8: We performed generalizability experiments with other varieties, and the detailed experimental analysis is presented in Section 3.6.

Question 9: Some errors in the text and Figures should be corrected.

Answer 9: We have corrected some errors in the text and Figures, and the corrections are marked in yellow.

More details of the revisions are in the submitted manuscript, and the revisions are marked in yellow.

Author Response File: Author Response.docx

Reviewer 2 Report

A pretrained-network-based apple surface defect detection approach is proposed in this manuscript. The manuscript is well written; however, there are some concerns that should be addressed before it is considered for publication.

Minor Concerns:

In some places there is no space after a dot.

Whenever authors from the literature are mentioned, the format should be the author's last name followed by "et al."; however, I found this format missing in some places.

Recent literature is missing. More recently published papers should be cited. For example:

J. Andrew, J. Eunice, D. E. Popescu, M. K. Chowdary, and J. Hemanth, “Deep Learning-Based Leaf Disease Detection in Crops Using Images for Agricultural Applications,” Agronomy, vol. 12, no. 10, Oct. 2022, doi: 10.3390/AGRONOMY12102395.

Major Concerns

In the introduction section, the authors seem to have jumped directly into machine learning and deep learning techniques without properly defining them. To maintain an appropriate flow, it is important to introduce machine learning and its techniques, then the literature in that field, and to do the same for deep learning. Some subsections may also be helpful.

In Section 2.2, it is mentioned that the dataset is split in an 80:20 ratio. How did you arrive at this? Why not 70:30 or something else? Justification is required.

Figure 6 does not convey much information. I suggest the authors refine this figure to add more details.

The final hyperparameter tuning details can be reported in a table. 

An ROC curve for the different models implemented should be plotted to show the best-performing model. This is very important.

Is there any literature that uses a similar dataset? If so, compare your results with it for performance evaluation.

Author Response

Dear editor:

We apologize for the late reply to your review comments. We respond to each comment point by point below.

Minor Concerns:

Question 1: In some places there is no space after a dot.

Answer 1: We checked the article and made corrections.

Question 2: Whenever authors from the literature are mentioned, the format should be the author's last name followed by "et al."; however, I found this format missing in some places.

Answer 2: We checked the article and made corrections.

Question 3: Recent literature is missing. More recently published papers should be cited. For example:

J. Andrew, J. Eunice, D. E. Popescu, M. K. Chowdary, and J. Hemanth, “Deep Learning-Based Leaf Disease Detection in Crops Using Images for Agricultural Applications,” Agronomy, vol. 12, no. 10, Oct. 2022, doi: 10.3390/AGRONOMY12102395.

Answer 3: We have cited this paper as reference [19].

Major Concerns:

Question 1: In the introduction section, the authors seem to have jumped directly into machine learning and deep learning techniques without properly defining them. To maintain an appropriate flow, it is important to introduce machine learning and its techniques, then the literature in that field, and to do the same for deep learning. Some subsections may also be helpful.

Answer 1: We have rewritten the Introduction according to your comments. See the uploaded revision for details.

Question 2: In Section 2.2, it is mentioned that the dataset is split in an 80:20 ratio. How did you arrive at this? Why not 70:30 or something else? Justification is required.

Answer 2: We conducted experiments with different division ratios of the dataset and provide an experimental analysis of the reasons for choosing 8:2; see Section 2.2 for details.
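As a purely illustrative sketch of how such a split-ratio comparison can be run, the snippet below trains the same classifier under several train/test ratios and reports test accuracy. The toy dataset, classifier, and ratios are stand-in assumptions and do not reproduce the experiment described in Section 2.2.

```python
# Illustrative comparison of train/test split ratios (scikit-learn).
# The toy dataset and classifier are stand-ins for the apple image data and model.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)

for test_size in (0.1, 0.2, 0.3):
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=test_size, stratify=y, random_state=42)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    acc = accuracy_score(y_te, clf.predict(X_te))
    print(f"train:test = {1 - test_size:.0%}:{test_size:.0%} -> test accuracy {acc:.3f}")
```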

Question 3: Figure 6 does not convey much information. I suggest the authors refine this figure to add more details.

Answer 3: Since Figure 6 alone cannot convey sufficient model information, the WC-MobileNetV3 training procedure is described as a supplement in Section 2.4.

Question 4: The final hyperparameter tuning details can be reported in a table.

Answer 4: The final hyperparameter tuning details are now reported in Table 5.

Question 5: An ROC curve for the different models implemented should be plotted to show the best-performing model. This is very important.

Answer 5: We have plotted the ROC curves of the different models in Figure 10, and the corresponding experimental analysis is given in Section 3.6.
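For reference, the snippet below is a generic, hedged example of how ROC curves for several models can be plotted and compared by AUC; the models and synthetic data are assumptions for illustration and do not reproduce Figure 10.

```python
# Generic ROC-curve comparison for multiple classifiers (scikit-learn / matplotlib).
# Synthetic data and model choices are illustrative only.
import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import auc, roc_curve
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

models = {
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "Random Forest": RandomForestClassifier(n_estimators=200, random_state=0),
}

for name, clf in models.items():
    clf.fit(X_tr, y_tr)
    scores = clf.predict_proba(X_te)[:, 1]   # probability of the positive class
    fpr, tpr, _ = roc_curve(y_te, scores)
    plt.plot(fpr, tpr, label=f"{name} (AUC = {auc(fpr, tpr):.3f})")

plt.plot([0, 1], [0, 1], "k--", label="Chance level")
plt.xlabel("False positive rate")
plt.ylabel("True positive rate")
plt.legend()
plt.show()
```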

Question 6: Is there any literature that uses a similar dataset? If so, compare your results with it for performance evaluation.

Answer 6: Unfortunately, according to our investigation, no similar dataset currently exists, so there is no comparable work against which to evaluate our results.

See the uploaded revised manuscript for more details of the revisions.

Author Response File: Author Response.docx

Round 2

Reviewer 1 Report

Please correct the Chinese words in some Figures and Tables.

Author Response

Dear editor:

We apologize for overlooking the titles of Table 12 and Table 13.

They have now been corrected, and the changed parts are marked in yellow.

Author Response File: Author Response.docx

Reviewer 2 Report

The authors have addressed my concerns. 

Author Response

Dear editor:

Thank you for your recognition of our work and for the advice you have given us.

Author Response File: Author Response.docx
