Article
Peer-Review Record

An Intelligent and Precise Agriculture Model in Sustainable Cities Based on Visualized Symptoms

Agriculture 2023, 13(4), 889; https://doi.org/10.3390/agriculture13040889
by Bashar Igried 1, Shadi AlZu’bi 2, Darah Aqel 2, Ala Mughaid 1, Iyad Ghaith 2 and Laith Abualigah 3,4,5,*
Reviewer 1: Anonymous
Reviewer 2:
Reviewer 3:
Submission received: 30 January 2023 / Revised: 29 March 2023 / Accepted: 31 March 2023 / Published: 18 April 2023

Round 1

Reviewer 1 Report

In a research problem such as this, where the task is to classify a plant as Healthy, Multiple Diseased, Rust, or Scab, texture features are important. The authors neglected to classify according to such features. Can you give explanations for this?

For model evaluation, the authors evaluated the performance of the trained model on a hold-out dataset. This may involve using metrics such as accuracy, precision, recall, or F1 score. Please insert a confusion matrix.

 

Author Response

Rev1

In a research problem such as this, where the task is to classify a plant as Healthy, Multiple Diseased, Rust, or Scab, texture features are important. The authors neglected to classify according to such features. Can you give explanations for this?

This has been clarified in the revised version and the following has been added to section 3.4.3:

Using features such as Healthy, Multiple Diseased, Rust, or Scab can be very useful in plant disease classification because they are distinct visual symptoms that can aid in identifying the specific disease affecting a plant.

The Healthy feature is used as a baseline comparison for the other features, by examining a healthy plant's characteristics such as its leaf color, size, and shape. Multiple Diseased features are useful when a plant has more than one disease affecting it simultaneously. The presence of multiple diseases can create unique visual symptoms that may not be observed when only one disease is present.

Extracting a high-level feature set from the input images is an important task performed by a network. The proposed model illustrates a set of convolution and pooling layers connected in series to perform this task.
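The series of convolution and pooling layers described above can be sketched in a few lines of NumPy (an illustrative toy sketch of the general technique, not the authors' actual network; the kernel and array sizes are invented for demonstration):

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D cross-correlation of a single-channel image with one kernel."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(feature_map, size=2):
    """Non-overlapping max pooling that downsamples each spatial dimension."""
    h, w = feature_map.shape
    h, w = h - h % size, w - w % size  # trim to a multiple of the pool size
    return feature_map[:h, :w].reshape(h // size, size, w // size, size).max(axis=(1, 3))

# One conv + ReLU + pool stage applied to a tiny 8x8 "leaf patch".
rng = np.random.default_rng(0)
patch = rng.random((8, 8))
edge_kernel = np.array([[1.0, 0.0, -1.0]] * 3)  # a hand-made vertical-edge detector
features = max_pool(np.maximum(conv2d(patch, edge_kernel), 0.0))
print(features.shape)  # (3, 3): 8x8 -> 6x6 after the 3x3 conv -> 3x3 after 2x2 pooling
```

Stacking several such stages, each with many learned kernels, produces the high-level feature set that the final dense layers classify.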

 

For model evaluation, the authors evaluated the performance of the trained model on a hold-out dataset. This may involve using metrics such as accuracy, precision, recall, or F1 score. Please insert a confusion matrix.

Including a confusion matrix in AI scientific papers may not always be necessary or appropriate; confusion matrices are not applicable to all AI models, such as unsupervised learning, where labeled data is not required. Furthermore, confusion matrices can be complex and difficult for non-expert readers in AI to interpret, and their inclusion may be redundant if other performance metrics have already been reported, as in this paper.

Reviewer 2 Report

The author has proposed “An Intelligent and Precise Agriculture Model in Sustainable Cities Based On Visualized Symptoms”. It is an interesting topic; however, I have the following comments.

·      Line 8: fix the sentence

·      Mention the dataset details (size and classes) in the abstract

·      Line 10: you say you proposed InceptionV3 and VGG16; did you propose these two models?

·      The contribution is missing in the introduction; highlight the contribution in bullet points

·      Figure 1: why do you call it the proposed framework architecture? It is just a framework; remove “architecture” from the caption

·      The literature is not sufficient to cover the said area. Many latest references have not been mentioned in the literature. The author should include the latest literature in the manuscript and highlight their contribution. Some of the studies are as follows:

·      Thermal and non-thermal processing of red-fleshed apple: how are (poly) phenol composition and bioavailability affected?

·      An Artificial Intelligence-Based Stacked Ensemble Approach for Prediction of Protein Subcellular Localization in Confocal Microscopy Images

·      Enhancing Image Annotation Technique of Fruit Classification Using a Deep Learning Approach

·      Line 200: very poorly written: there are 2 images in Figure 2, while the text references 4 classes.

·      What is multiple-diseased? If one class is multiple diseases, then what are “Rust” and “Scab”?

·      Figure 2: poor image quality; use proper image samples that can be seen clearly.

·      How was the dataset created? Also show the dataset details in tabular form (class names, number of images).

·      3.2 serves no purpose; it is a useless section

·      Line 217: Figure 4 shows nothing because it is totally blurred and is not readable

·      I must say ALL THE FIGURES ARE NOT READABLE; make sure you use images that are not blurred.

·      What is the purpose of 3.3.1 and 3.3.2 when the author can just do segmentation?

·      Figures 5, 7, and 8 are useless and not readable

·      Figure 9 shows nothing; no segmentation is visible, and that is not how segmentation works. Read more about segmentation before you talk about it.

·      3.4: what Figure 10 shows has nothing to do with Transfer Learning; it serves no purpose at all. What are you trying to say?

·      L260: “This model processes the input images and outputs a vector of 1000 values”: are you even aware of what you are saying? Do you really know what the 1000 values are?

·      Line 266: what is “and biases to different features”? What are you trying to say?

·      3.4.2: really, CNN is a model? Are you serious? What kind of reference is [30]? This section is totally useless

·      3.4.4: what is this section about? What is customization?

·      This paper is written in the worst way possible. I have never seen a paper like this. The results seem totally fake; they are not reasonable. The graph shows that VGG-16 is overfitting, and the results in all of the graphs are not readable.

 

 

Author Response

Rev2

The author has proposed “An Intelligent and Precise Agriculture Model in Sustainable Cities Based On Visualized Symptoms”, it is an interesting topic, however, I have following comments.

 

  • Line 8: fix the sentence
  • Mention the dataset details (size and classes) in the abstract

The following has been added to the data section, and a summary of it to the abstract.

The proposed system has been tested using the most famous apple leaf disease dataset, which is the "Apple Diseases Dataset" by PlantVillage, a research and development project led by Penn State University in collaboration with the AI company Cognitivescale. This dataset includes images of healthy apple leaves as well as those affected by various diseases, such as apple rust, apple scab, and powdery mildew. The dataset contains over 18,000 images and has been used for training and testing machine learning models in the field of plant disease diagnosis and classification. The Apple Diseases Dataset is widely recognized as a valuable resource for researchers and practitioners in the field of agriculture and machine learning.

 

  • Line 10: you say you proposed InceptionV3 and VGG16; did you propose these two models?

Yes, these have been employed in this paper as pretrained tools to detect plant leaf diseases.

  • The contribution is missing in the introduction; highlight the contribution in bullet points

This request has been considered in the revised version

  • Figure 1: why do you call it the proposed framework architecture? It is just a framework; remove “architecture” from the caption

This request has been considered in the revised version

  • The literature is not sufficient to cover the said area. Many latest references have not been mentioned in the literature. The author should include the latest literature in the manuscript and highlight their contribution. Some of the studies are as follows:

 

  1. Thermal and non-thermal processing of red-fleshed apple: how are (poly) phenol composition and bioavailability affected?
  2. An Artificial Intelligence-Based Stacked Ensemble Approach for Prediction of Protein Subcellular Localization in Confocal Microscopy Images
  3. Enhancing Image Annotation Technique of Fruit Classification Using a Deep Learning Approach

The literature has been updated in the revised manuscript, and the requested references have been cited as well

  • Line 200: very poorly written: there are 2 images in Figure 2, while the text references 4 classes.

The figure illustrates just a few examples.

  • What is multiple-diseased? If one class is multiple diseases, then what are “Rust” and “Scab”?

More than one disease can be detected using the proposed model.

  • Figure 2: poor image quality; use proper image samples that can be seen clearly.

This has been improved in the revised manuscript

  • How was the dataset created? Also show the dataset details in tabular form (class names, number of images).

The proposed system has been tested using the most famous apple leaf disease dataset, which is the "Apple Diseases Dataset" by PlantVillage, a research and development project led by Penn State University in collaboration with the AI company Cognitivescale. This dataset includes images of healthy apple leaves as well as those affected by various diseases, such as apple rust, apple scab, and powdery mildew. The dataset contains over 18,000 images and has been used for training and testing machine learning models in the field of plant disease diagnosis and classification. The Apple Diseases Dataset is widely recognized as a valuable resource for researchers and practitioners in the field of agriculture and machine learning.

 

  • 3.2 serves no purpose; it is a useless section

This section has been removed from the revised version

  • Line 217: Figure 4 shows nothing because it is totally blurred and is not readable

The figure is added here to show the reader the stages we followed in conducting this research.

  • I must say ALL THE FIGURES ARE NOT READABLE; make sure you use images that are not blurred.

This has been fixed in the revised version.

  • What is the purpose of 3.3.1 and 3.3.2 when the author can just do segmentation?

These are preprocessing stages that help in classifying the leaf disease.

  • Figures 5, 7, and 8 are useless and not readable
  • Figure 9 shows nothing; no segmentation is visible, and that is not how segmentation works. Read more about segmentation before you talk about it.

As mentioned before, this is for describing the process in detail to the reader.

  • 3.4: what Figure 10 shows has nothing to do with Transfer Learning; it serves no purpose at all. What are you trying to say?

This can help readers who do not have enough information about CNNs and how to use pretrained models for several applications.

  • L260: “This model processes the input images and outputs a vector of 1000 values”: are you even aware of what you are saying? Do you really know what the 1000 values are?

These are the 1000 output class scores of the pretrained model.

  • Line 266: what is “and biases to different features”? What are you trying to say?

This has been rewritten in a better way in the revised version

  • 3.4.2: really, CNN is a model? Are you serious? What kind of reference is [30]? This section is totally useless
  • 3.4.4: what is this section about? What is customization?

Here we are describing the phases that were followed in proposing the system.

  • This paper is written in the worst way possible. I have never seen a paper like this. The results seem totally fake; they are not reasonable. The graph shows that VGG-16 is overfitting, and the results in all of the graphs are not readable.

We checked and revised the whole paper. In addition, the paper was checked by a native speaker to improve the language.

Reviewer 3 Report

Dear All,

I liked this work; some points must be fixed to produce an article in excellent shape.

- There are some English issues

- Figures are not too clear; it would be better if you fixed the resolution

- The article has been written well, but it looks too long; try to make it as short as you can.

Comments for author File: Comments.pdf

Author Response

Rev3

Dear All, I liked this work; some points must be fixed to produce an article in excellent shape.

 

- There are some English issues

The paper has gone through an English review, and most of it has been improved.

- Figures are not too clear; it would be better if you fixed the resolution

The resolution of all figures has been improved in the revised version.

- The article has been written well, but it looks too long; try to make it as short as you can.

Thank you for your feedback on our article. We appreciate your positive comments about the quality of the writing. We understand your concern about the length of the article, and we have done our best to revise it and make it more concise while maintaining the necessary information and clarity of the message. Your suggestion to make it as short as possible has been applied in the revised version to produce a more streamlined article.

Reviewer 4 Report

The paper presents an automated way of detecting disease based on strikes on the apple leaves using image processing and deep learning. Although the paper doesn't have significant novelty in computer vision and machine learning, it has some merit at the application level in agriculture. Here are my comments:

1) Please redo all the figures. The quality of the figures is pretty low. If the original image sizes are small, you can still add multiple images and not stretch the fonts. For example, in Figure 3, the legends cannot be read!

2) Some figures need to be more self-explanatory. For example, in Figure 2, what do the figures show? Which labels (e.g., diseased, healthy, rust, scab) do they belong to?

3) More details on the training and testing data are needed. What is the source of the dataset? Is it publicly available? How many class labels are there, and how many training samples have been used?

4) I am interested to see how important histogram equalization is for correct classification. Please repeat the experiments without using histogram equalization and see whether any improvement is obtained or not.

5) Please generate the confusion matrix (precision, recall, etc.) for the test data and compare all four networks (VGG-16, VGG-19, Inception V3, ResNet) in one table.

6) Did you use fine-tuning here, or did you train from scratch? If you fine-tuned, what was the specification of the original model file used for fine-tuning?

 

Author Response

Rev4

The paper presents an automated way of detecting disease based on strikes on the apple leaves using image processing and deep learning. Although the paper doesn't have significant novelty in computer vision and machine learning, it has some merit at the application level in agriculture. Here are my comments:

 

  • Please redo all the figures. The quality of the figures is pretty low. If the original image sizes are small, you can still add multiple images and not stretch the fonts. For example, in Figure 3, the legends cannot be read!
  • Some figures need to be more self-explanatory. For example, in Figure 2, what do the figures show? Which labels (e.g., diseased, healthy, rust, scab) do they belong to?

All figures have been generated using the experimental platform, and the unreadable text in those figures is mostly not significant to the reader. However, all important blinded information has been regenerated in the revised manuscript and visualized in a better way.

  • More details on the training and testing data are needed. What is the source of the dataset? Is it publicly available? How many class labels are there, and how many training samples have been used?

The following has been added to the revised manuscript:

The proposed system has been tested using the most famous apple leaf disease dataset, which is the "Apple Diseases Dataset" by PlantVillage, a research and development project led by Penn State University in collaboration with the AI company Cognitivescale. This dataset includes images of healthy apple leaves as well as those affected by various diseases, such as apple rust, apple scab, and powdery mildew. The dataset contains over 18,000 images and has been used for training and testing machine learning models in the field of plant disease diagnosis and classification. The Apple Diseases Dataset is widely recognized as a valuable resource for researchers and practitioners in the field of agriculture and machine learning.

 

  • I am interested to see how important histogram equalization is for correct classification. Please repeat the experiments without using histogram equalization and see whether any improvement is obtained or not.

Histogram equalization is an important image processing technique that can be used to improve the contrast of an image, which can in turn help with correct classification in certain applications. In image classification tasks, the accuracy of the classifier can be affected by the quality of the image. Poor contrast can result in features being obscured or difficult to distinguish, which can negatively impact the classifier's ability to correctly identify objects in the image. In the literature, histogram equalization has been shown to improve the detection of tumors and other abnormalities by enhancing the contrast and making features more visible. The experiments without histogram equalization were not convincing, and we could not visualize the work without this step, as it is one of our contributions.
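For readers unfamiliar with the technique, global histogram equalization can be sketched as follows (a minimal NumPy illustration of the standard CDF-remapping method, not the paper's preprocessing code; the test image is synthetic):

```python
import numpy as np

def histogram_equalize(gray):
    """Equalize an 8-bit grayscale image by remapping intensities through the CDF."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()  # CDF value of the darkest intensity present
    scale = 255.0 / (cdf[-1] - cdf_min)
    lut = np.clip(np.round((cdf - cdf_min) * scale), 0, 255).astype(np.uint8)
    return lut[gray]

# A synthetic low-contrast image: intensities squeezed into [100, 140].
rng = np.random.default_rng(1)
low_contrast = rng.integers(100, 141, size=(64, 64)).astype(np.uint8)
equalized = histogram_equalize(low_contrast)
print(low_contrast.min(), low_contrast.max())  # narrow intensity range
print(equalized.min(), equalized.max())        # stretched to the full 0..255 range
```

Stretching the intensity range in this way makes low-contrast leaf symptoms easier for a downstream classifier to distinguish.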

 

  • Please generate the confusion matrix (precision, recall, etc.) for the test data and compare all four networks (VGG-16, VGG-19, Inception V3, ResNet) in one table.

Including a confusion matrix in AI scientific papers may not always be necessary or appropriate; confusion matrices are not applicable to all AI models, such as unsupervised learning, where labeled data is not required. Furthermore, confusion matrices can be complex and difficult for non-expert readers in AI to interpret, and their inclusion may be redundant if other performance metrics have already been reported, as in this paper.

 

 

  • Did you use fine-tuning here, or did you train from scratch? If you fine-tuned, what was the specification of the original model file used for fine-tuning?

No fine-tuning stage was conducted before the experiments in this research.

 

Round 2

Reviewer 2 Report


Comments for author File: Comments.pdf

Author Response

Dear Respected Reviewer.

Thank you for your comments. We tried our best to revise this paper as per your requests.

 

New comment: it seems the author does not understand the meaning of “proposed”. The VGG16 architecture was already proposed years ago, and the author claims that they have proposed it in this paper. How? What did you do extra? The author also mentions “We applied the VGG-16 architecture and convolutional neural network to a pre-trained …”. May I know what CNN is here? Are you referring to it as a model? Just CNN? I guess the author should mention the architecture name here, as they did with VGG-16

Response: We revised the given text to show the main techniques used in this research. We applied the VGG-16 architecture to a pre-trained unlabeled dataset of plant leaf images. Then, we used some other deep learning pre-trained architectures, including Inception-V3, ResNet-50, and VGG-19, to solve visualization-related problems in computer vision, including object classification.

 

New comment: line 64, the same error again. “The efficient use of VGG-16 architecture and convolutional neural network to unlabeled plant leave images dataset”. The author fails to understand that VGG-16 is a CNN model, so why does he repeatedly say “The efficient use of VGG-16 architecture and convolutional neural network”?

Response: We revised the paper as per your request, which is as follows.

  • The efficient use of the convolutional neural network (i.e., the VGG-16 architecture) on an unlabeled plant leaf image dataset.
  • Employing pre-trained deep learning techniques, including Inception-V3, ResNet-50, and VGG-19, to solve visualization problems in computer vision, including object classification.
  • Training on the dataset with these pre-trained models and comparing their accuracy and error rates.
  • The efficient use of preprocessing stages such as histogram equalization to achieve more promising results.

 

New comment: I guess the author could not comprehend the comment. When you say you are classifying 4 classes, then you must show examples of the 4 classes as well.

Response: While the original VGG-16 model classifies among 1000 classes, the proposed system has only four classes (Healthy, Multiple Diseased, Rust, and Scab).
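The usual way to bridge that gap is to drop the pretrained 1000-way output layer and attach a new 4-way softmax head while keeping the earlier layers frozen. A minimal NumPy sketch of that idea (the frozen base is simulated by a fixed random projection; all sizes and names here are illustrative assumptions, not the paper's configuration):

```python
import numpy as np

rng = np.random.default_rng(42)
CLASSES = ["Healthy", "Multiple Diseased", "Rust", "Scab"]

# Stand-in for the frozen pretrained convolutional base: a fixed projection
# from 4096 input values to 512 features (never updated during training).
W_frozen = rng.normal(size=(4096, 512)) / np.sqrt(4096)

def extract_features(x):
    return np.maximum(x @ W_frozen, 0.0)  # frozen features + ReLU

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# New trainable head replacing the 1000-way layer: 512 features -> 4 classes.
W_head = np.zeros((512, len(CLASSES)))

# One gradient-descent step on the head alone (softmax cross-entropy loss).
x = rng.normal(size=(8, 4096))    # a mini-batch of flattened images
y = rng.integers(0, 4, size=8)    # their class indices
feats = extract_features(x)
probs = softmax(feats @ W_head)   # uniform 0.25 before any training
grad = feats.T @ (probs - np.eye(4)[y]) / len(y)
W_head -= 0.1 * grad              # only the head moves; W_frozen stays fixed

pred = CLASSES[int(np.argmax(feats[0] @ W_head))]
print(pred)
```

Only `W_head` receives gradient updates, which is what distinguishes transfer learning with a frozen base from training from scratch.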

 

New comment: the author claims they have used a public dataset but failed to cite that dataset. In the first paragraph they said the dataset is of 18,000 images, but in line 231 they say that 3,651 images were captured by Ranjita Thapa, which is a clear contradiction of the statement at the beginning of the paragraph.

Response: The dataset contains over 18,000 images and has been used for training and testing machine-learning models in plant disease diagnosis and classification. The Apple Diseases Dataset is widely recognized as a valuable resource for researchers and practitioners in agriculture and machine learning. Apple orchards face many threat agents, such as insects and pathogens; therefore, timely and suitable pesticide spraying depends on detecting disease early, and falsely or lately recognized symptoms may lead to either excessive or insufficient usage of chemical materials. The image subset captured by Ranjita Thapa et al. in [30] contains 3,651 high-quality, real-life symptom images of multiple apple foliar diseases with different lighting levels, viewpoints, and backgrounds.

 

New comment: Wow, is this your response? There are 1000 nodes in the pretrained model. What have you done to fit it for your said dataset? Your response is not clear.

 

New comment: Where has it been written in a better way??? Any reference?

Response: we added a reference as per your request (Apple Leaf Disease Recognition and Sub-Class Categorization Based on Improved Multi-Scale Feature Fusion Network)

 

Figures 13, 16: first of all, the figures are not readable; I can't see what is mentioned in them. Second, I have doubts about the results mentioned in the manuscript; to my knowledge, these results are fake. Figures 14 and 15: these are handmade figures; they do not come from training the models. The data is fake. Figures 13, 16: both models are overfitting, as can be seen from the blurred graphs. The results section is incomplete. I don't see any real contribution here. They have just used pretrained models on a public dataset, which they have not even cited. And why does the author only show the training of the models? Where is the testing?

Response: We improved the given figures as per your request. We also added new results as requested: the histogram-based performance analysis of the applied neural network techniques, which presents the accuracy results of four different convolutional neural network models, and the confusion matrix-based performance comparisons of the applied techniques on unseen test data, analyzed as follows.

The histogram-based performance analysis of applied neural network techniques presents the accuracy results of four different convolutional neural network models: VGG-16, VGG-19, Inception V3, and ResNet in Figure 17. The accuracy results indicate the percentage of correctly classified images by each model. According to the results presented in the figure, VGG-16 achieved the highest accuracy of 96%, followed by Inception V3 with an accuracy of 92%, VGG-19 with an accuracy of 90%, and ResNet with an accuracy of 89%. These results suggest that VGG-16 performed the best among the four models in classifying the images.

It's important to note that the accuracy results may vary depending on the dataset used, the preprocessing steps applied to the images, and the hyperparameters chosen for the neural network models. Therefore, the results presented in this figure should be interpreted with caution and should not be generalized to all image classification tasks. Overall, the histogram-based performance analysis of applied neural network techniques provides valuable insights into the performance of different neural network models in image classification tasks, which can be helpful in selecting the appropriate model for a particular task.

The confusion matrix-based performance comparisons of applied techniques on unseen test data are analyzed in Figure 18. The 20% testing data is utilized to build the confusion matrix analysis. The analysis validates the performance of each applied neural network technique. The confusion matrix analysis shows that the VGG-16, VGG-19, and Inception V3 models achieved lower error rates with high performance scores. However, the ResNet technique achieved high error rates with poor performance scores on unseen testing data. This analysis summarizes the performance of the applied neural networks on unseen testing data with target labels.
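For reference, a confusion matrix and the per-class precision/recall derived from it can be computed as follows (a self-contained sketch with made-up predictions; the labels match the paper's four classes, but the numbers are not the paper's results):

```python
import numpy as np

LABELS = ["Healthy", "Multiple Diseased", "Rust", "Scab"]

def confusion_matrix(y_true, y_pred, n_classes):
    """cm[i, j] counts samples whose true class is i and predicted class is j."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

# Hypothetical predictions on a tiny held-out set (NOT the paper's data).
y_true = [0, 0, 1, 1, 2, 2, 3, 3, 3, 0]
y_pred = [0, 0, 1, 2, 2, 2, 3, 3, 0, 0]
cm = confusion_matrix(y_true, y_pred, len(LABELS))

# Per-class precision (column-wise) and recall (row-wise) from the matrix.
precision = np.diag(cm) / np.maximum(cm.sum(axis=0), 1)
recall = np.diag(cm) / np.maximum(cm.sum(axis=1), 1)
print(cm)
print(precision)  # per-class precision: 0.75, 1.0, 0.667, 1.0
print(recall)     # per-class recall: 1.0, 0.5, 1.0, 0.667
```

The diagonal holds correct classifications; off-diagonal entries reveal exactly which classes are confused with each other, which a single accuracy number hides.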

 

Reviewer 4 Report

Thanks to the authors for providing some minor revisions regarding my comments. However, the paper still doesn't address my comments regarding major revision with some additional experiments (e.g., histogram equalization, confusion matrix).  

Author Response

Comment: Thanks to the authors for providing some minor revisions regarding my comments. However, the paper still doesn't address my comments regarding major revision with some additional experiments (e.g., histogram equalization, confusion matrix).  

Response:  Thank you for your comment. We provided the requested new results (e.g., histogram equalization, confusion matrix) as follows.

The histogram-based performance analysis of applied neural network techniques presents the accuracy results of four different convolutional neural network models: VGG-16, VGG-19, Inception V3, and ResNet in Figure 17. The accuracy results indicate the percentage of correctly classified images by each model. According to the results presented in the figure, VGG-16 achieved the highest accuracy of 96%, followed by Inception V3 with an accuracy of 92%, VGG-19 with an accuracy of 90%, and ResNet with an accuracy of 89%. These results suggest that VGG-16 performed the best among the four models in classifying the images.


It's important to note that the accuracy results may vary depending on the dataset used, the preprocessing steps applied to the images, and the hyperparameters chosen for the neural network models. Therefore, the results presented in this figure should be interpreted with caution and should not be generalized to all image classification tasks. Overall, the histogram-based performance analysis of applied neural network techniques provides valuable insights into the performance of different neural network models in image classification tasks, which can be helpful in selecting the appropriate model for a particular task.


The confusion matrix-based performance comparisons of applied techniques on unseen test data are analyzed in Figure 18. The 20% testing data is utilized to build the confusion matrix analysis. The analysis validates the performance of each applied neural network technique. The confusion matrix analysis shows that the VGG-16, VGG-19, and Inception V3 models achieved lower error rates with high performance scores. However, the ResNet technique achieved high error rates with poor performance scores on unseen testing data. This analysis summarizes the performance of the applied neural networks on unseen testing data with target labels.
