Article
Peer-Review Record

Tomato Maturity Estimation Using Deep Neural Network

Appl. Sci. 2023, 13(1), 412; https://doi.org/10.3390/app13010412
by Taehyeong Kim 1, Dae-Hyun Lee 2,*, Kyoung-Chul Kim 3, Taeyong Choi 4 and Jun Myoung Yu 5
Reviewer 1:
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Submission received: 11 November 2022 / Revised: 16 December 2022 / Accepted: 23 December 2022 / Published: 28 December 2022

Round 1

Reviewer 1 Report

1. Please describe the external environment of the tomatoes, such as illumination, when the original images were collected.

2. As described in the paper, the tomato regions selected by the bounding boxes are rectangles of different sizes, but all samples are resized to 128 × 128 pixels. This resizing may distort the tomatoes, which in turn means the tomato features learned by the model may not be real.

3. The discussion in Section 3.1 is insufficient. It does not reflect the advantages of this research over current similar work, and compared with existing tomato maturity discrimination studies, the novelty of this research is not obvious.

4. Whether light affects tomato maturity estimation, and whether the model is sensitive to lighting conditions, must be considered before applying this method to actual agricultural production, but this is not mentioned in the model classification performance verification section.

5. The classification results show that the model has low accuracy for the 'Turning' and 'Pink' tomato samples, which should be the most important part of tomato maturity discrimination. If a model optimization method could be proposed to improve the classification accuracy for such samples, this study would be more meaningful.

Author Response

Thank you for your kind review. We have provided the revised manuscript together with a response report; please check the attachment.

Author Response File: Author Response.pdf

Reviewer 2 Report

1. The authors have prepared too modest a list of references for an article in a serious scientific journal. Tomato maturity estimation has been attempted with many methods, including neural networks, and papers on computerized evaluation of tomato ripeness have already been published; one can refer to the results reported in them. The authors should familiarize the reader, at least roughly, with a few procedures other than their own and with their results. They should also expand the literature review to include publications on computer methods, including those using neural networks.

2. The authors completely omitted the biological side, although the objects of their study were plant fruits. The manuscript should mention the tomato variety used in the study, with its Latin name and brief characteristics such as the size and shape of the fruits examined. It is worth stating whether a specific feature of the objects determined the selection of fruits for analysis or whether the choice was random.

Note that this research aims to robotize agriculture and not to automate the choice of ball colors for children's play...

3. The manuscript lacks a "Discussion" section, in which the authors should compare their results to those of other researchers described in the cited papers. This section should be included in the manuscript.

Author Response

Thank you for your kind review. We have provided a response report; please check the attachment.

Author Response File: Author Response.pdf

Reviewer 3 Report

The article is interesting; however, there are several points to fix before it is ready for publication.

 

- Please divide the opening into an introduction and a literature review; otherwise there is too much confusion between the two. CNNs should be properly introduced, with an overview of their architecture, citing proper literature such as:

- LeCun, Yann, Yoshua Bengio, and Geoffrey Hinton. "Deep learning." Nature 521.7553 (2015): 436-444.

- Chua, Leon O., and Tamas Roska. "The CNN paradigm." IEEE Transactions on Circuits and Systems I: Fundamental Theory and Applications 40.3 (1993): 147-156.

- Girshick, Ross. "Fast R-CNN." Proceedings of the IEEE International Conference on Computer Vision. 2015.

- Dimitri, Giovanna Maria, et al. "Multimodal and multicontrast image fusion via deep generative models." Information Fusion 88 (2022): 146-160.

- He, Kaiming, et al. "Mask R-CNN." Proceedings of the IEEE International Conference on Computer Vision. 2017.

- Bhatt, Dulari, et al. "CNN variants for computer vision: History, architecture, application, challenges and future scope." Electronics 10.20 (2021): 2470.

- Yang, Ruoyu, et al. "CNN-LSTM deep learning architecture for computer vision-based modal frequency detection." Mechanical Systems and Signal Processing 144 (2020): 106885.

- Bianchini, Monica, et al. "Deep neural networks for structured data." Computational Intelligence for Pattern Recognition. Springer, Cham, 2018. 29-51.

- Figure 3 needs more details in its caption.

- Please define all of the variables used in the equations introduced.

- How were the parameters of the DNN found? Grid search or another hyperparameter optimization method?

- Figure 5: to which set does the confusion matrix refer (training, validation, or test)?

- Likewise for Table 1: to which dataset does it refer?

- How did you perform the division into training and test sets? Cross-validation should be performed and the results reported here.

- Table 3 should be more contextualized with respect to the type of GPU used.

 

Author Response

Thank you for your kind review. We have provided a response report; please check the attachment.

Author Response File: Author Response.pdf

Round 2

Reviewer 1 Report

The revised parts of the manuscript should be marked in red. There are two remaining questions.

1. As shown in Figure 2, some tomatoes are occluded. How does the model determine the maturity of occluded tomatoes? For example, if the camera captures an image containing three occluded tomatoes, whose maturity is estimated? Please add a discussion of occluded tomatoes, because this problem must be considered in practical applications.

2. Lines 367-377: the authors conducted experiments on a low-cost Jetson development board to show that the model can run in real time, but they only report the detection speed. Please add graphs or interface screenshots of on-site real-time detection, which will help readers understand whether the model proposed in this study can estimate maturity in real time in practical applications. That is to say, show how the real-time experiment was conducted and its results.

Author Response

Thank you for your high-quality review. Please check the attachment.

Author Response File: Author Response.docx

Reviewer 2 Report

Before publishing the manuscript, the authors might want to consider whether chapter "5. Discussion" should be appended to chapter "4. Results and Discussion" as its last subchapter. I leave this matter to the authors to decide.

I have no more comments on the manuscript.

Author Response

Thank you for your high-quality review. Please check the attachment.

Author Response File: Author Response.docx

Reviewer 3 Report

The authors still need to improve the manuscript. 

 

1) A division into training and test sets alone is not sufficient. Please perform k-fold cross-validation and report the mean and standard deviation of the accuracy obtained.

2) Please add more details about the performance.

3) How does the method compare to other methods?

4) The state of the art and literature background still need improvement.
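Regarding point 1, the requested protocol can be sketched in a few lines of plain Python (a minimal illustration, not the authors' actual pipeline; `accuracy_fn` is a hypothetical placeholder for one train/evaluate cycle of the network):

```python
import random
import statistics

def kfold_indices(n_samples, k, seed=0):
    """Shuffle sample indices and split them into k nearly equal folds."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def kfold_evaluate(n_samples, k, accuracy_fn, seed=0):
    """Run one train/evaluate cycle per fold; summarize fold accuracies."""
    folds = kfold_indices(n_samples, k, seed)
    scores = []
    for i, test_idx in enumerate(folds):
        # Train on all folds except the held-out one.
        train_idx = [j for f in folds[:i] + folds[i + 1:] for j in f]
        scores.append(accuracy_fn(train_idx, test_idx))
    return statistics.mean(scores), statistics.stdev(scores)
```

Reporting the returned mean and standard deviation across, say, five folds would address this comment.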

 

Author Response

Thank you for your high-quality review. Please check the attachment.

Author Response File: Author Response.docx
