Automation of Crop Disease Detection through Conventional Machine Learning and Deep Transfer Learning Approaches
Round 1
Reviewer 1 Report
1. On the PlantVillage plant disease image dataset, the authors compared models built with machine learning and deep learning frameworks; a higher-precision model was obtained by replacing the activation function and optimizer. However, the authors simply replaced the activation function and optimizer without making any effective improvements to the network itself. Such a study is not an innovation.
2. The article only describes various network models and does not demonstrate the automation claimed in the title.
3. The subtitles in the article are confusing, some figures are blurred (pages 22 and 31), and the figure formatting does not match the surrounding paragraphs.
4. The article is long and redundant; it is recommended that the content be expressed more concisely.
5. There are spelling mistakes in the article; for example, the keyword "cop" should be "crop".
Author Response
Please find the attached file
Author Response File: Author Response.docx
Reviewer 2 Report
In the manuscript, a thorough comparison of many cutting-edge deep learning models and machine learning models has been performed. This work is well executed and very interesting at this moment.
To prevent production loss, crop diseases must be detected early. However, manually monitoring leaf diseases is a laborious activity that requires a lot of work, prolonged processing time, and in-depth familiarity with plant pathogens. For these reasons, the authors have developed and analysed a variety of algorithms based on image processing, machine learning, and deep learning for the identification of leaf diseases, and these methods have frequently produced notable results.
The authors used a dataset drawn from the PlantVillage Dataset, composed of healthy and diseased crop leaves, for binary classification and conducted an extensive comparison between traditional machine learning (SVM, LDA, KNN, CART, RF, NB) and deep transfer learning (VGG16, VGG19, InceptionV3, ResNet50, and CNN) models in terms of accuracy, precision, recall, and F1-score. In addition, they used a variety of activation functions and deep learning optimization techniques to improve the performance of these CNN architectures. Through experimentation, they achieved very good classification accuracy (CA) of leaf diseases for all models. The results show that NB provides the lowest CA (60.09%), whereas the InceptionV3 model provides the best CA (98.01%).
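For reference, a minimal sketch of such a comparison (not the authors' code; a synthetic feature matrix stands in for the features extracted from the PlantVillage leaf images, and scikit-learn defaults are assumed) could look as follows:

from sklearn.datasets import make_classification
from sklearn.model_selection import cross_validate
from sklearn.svm import SVC
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB

# Synthetic stand-in for leaf-image features (binary task: healthy vs diseased).
X, y = make_classification(n_samples=1000, n_features=64, random_state=0)

models = {
    "SVM": SVC(),
    "LDA": LinearDiscriminantAnalysis(),
    "KNN": KNeighborsClassifier(),
    "CART": DecisionTreeClassifier(),
    "RF": RandomForestClassifier(),
    "NB": GaussianNB(),
}

# Report the same four metrics the manuscript compares.
for name, model in models.items():
    scores = cross_validate(model, X, y, cv=5,
                            scoring=["accuracy", "precision", "recall", "f1"])
    print(name, {k: round(v.mean(), 3) for k, v in scores.items() if k.startswith("test_")})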
The performance of the best-performing model was further evaluated by the authors using a variety of activation functions and deep learning optimization algorithms. The results are quite encouraging and clearly show that the DL approach outperforms traditional ML algorithms.
This confirms that the DL method is the best path forward for image classification problems with reasonably large datasets, based on the recall, the obtained accuracy, and, most importantly, the simplicity of the methodology. However, DL techniques also have serious limitations, such as the requirement for a very powerful GPU or TPU for training, since CNN models take a long time to train and may take days to weeks depending on the size of the dataset. Therefore, the authors used pre-trained models to shorten the learning period.
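As an illustration of this transfer-learning setup (a minimal sketch assuming TensorFlow/Keras; the image size, learning rate, and classification head are assumptions, not the authors' configuration), a frozen ImageNet-pre-trained InceptionV3 base combined with the SGD optimizer and swish activation mentioned above could be wired up as follows:

import tensorflow as tf

# Pre-trained InceptionV3 base; freezing it shortens training considerably.
base = tf.keras.applications.InceptionV3(
    include_top=False, weights="imagenet",
    input_shape=(224, 224, 3), pooling="avg")
base.trainable = False

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(256, activation="swish"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # binary: healthy vs diseased
])

model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=1e-3),
              loss="binary_crossentropy",
              metrics=["accuracy",
                       tf.keras.metrics.Precision(),
                       tf.keras.metrics.Recall()])
# model.fit(train_ds, validation_data=val_ds, epochs=10)  # datasets not shown here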
Additionally, combining deep networks and machine learning techniques uses around half the memory bandwidth and significantly less CPU power while producing superior models. In the future, a hybrid model based on the best-found combination (SGD optimizer + swish activation function + RF + InceptionV3), with features such as a display of the crop's recognised diseases, could be deployed in the field for validation and testing and implemented as a whole system in a web application. Agronomists and farmers may also be able to discuss remedies and safety measures for the diseases they encounter in a forum provided by the application.
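How such a hybrid might be assembled is sketched below (again an assumption, not the authors' implementation): the frozen InceptionV3 base serves purely as a feature extractor, and a RandomForest is fitted on the pooled feature vectors, so the expensive CNN is used only for inference while the final classifier trains cheaply on the CPU.

import numpy as np
import tensorflow as tf
from sklearn.ensemble import RandomForestClassifier

# Frozen CNN used only to produce 2048-dimensional feature vectors.
extractor = tf.keras.applications.InceptionV3(
    include_top=False, weights="imagenet",
    input_shape=(224, 224, 3), pooling="avg")

def extract_features(images):
    # images: float array of shape (n, 224, 224, 3) with values in [0, 255]
    x = tf.keras.applications.inception_v3.preprocess_input(images)
    return extractor.predict(x, verbose=0)

# Placeholder data; in practice these would be the PlantVillage images and labels.
images = np.random.rand(16, 224, 224, 3).astype("float32") * 255.0
labels = np.random.randint(0, 2, size=16)

rf = RandomForestClassifier(n_estimators=200, random_state=0)
rf.fit(extract_features(images), labels)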
The authors put a lot of work into providing current knowledge and a plan for the future of disease management. Last but not least, the manuscript is sound and clear, with only a few small mistakes, and requires only a few minor changes. New references could be added for support.
The content of the manuscript could be summarized more briefly.
Author Response
Please find the attached file
Author Response File: Author Response.docx
Reviewer 3 Report
1. The authors have done a lot of work, and compared a variety of different machine learning algorithms (including traditional and deep learning).
2. However, the work lacks innovation; it is just a simple comparison of existing algorithms.
3. What is the purpose of mixing images of various crops together for training? To obtain a good general-purpose model? Would a model trained on a single crop work better?
4. How is data imbalance handled? If the data are imbalanced, is the subsequent comparison of classification accuracy scientifically sound? (One possible way to account for imbalance is sketched after this list.)
5. Figure 9 looks a bit deformed.
6. Table 3 and Table 8 do not appear to be aligned.
7. It would be better to unify the table style; Table 1 and Table 2 look different from the other tables.
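Regarding point 4, one common way to account for class imbalance (shown here only as an illustration of what the authors could report, with hypothetical labels, not as their method) is to compute class weights from the label distribution and to report balanced accuracy alongside plain accuracy:

import numpy as np
from sklearn.utils.class_weight import compute_class_weight
from sklearn.metrics import balanced_accuracy_score

y_train = np.array([0] * 900 + [1] * 100)   # hypothetical skewed label distribution
weights = compute_class_weight("balanced", classes=np.unique(y_train), y=y_train)
class_weight = dict(zip(np.unique(y_train), weights))
print(class_weight)   # ~{0: 0.56, 1: 5.0}; can be passed to model.fit(..., class_weight=...)

y_true = np.array([0, 0, 0, 0, 1])           # hypothetical test labels
y_pred = np.array([0, 0, 0, 0, 0])
print(balanced_accuracy_score(y_true, y_pred))   # 0.5, even though plain accuracy is 80%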
Author Response
Please find the attached file
Author Response File: Author Response.docx