Article

Detection and Classification of Agave angustifolia Haw Using Deep Learning Models

by Idarh Matadamas 1,*, Erik Zamora 2 and Teodulfo Aquino-Bolaños 1
1 Centro Interdisciplinario de Investigación para el Desarrollo Integral Regional, Unidad Oaxaca, Instituto Politécnico Nacional, Hornos No. 1003, Colonia Noche Buena, Municipio de Santa Cruz Xoxocotlán 71230, Oaxaca, Mexico
2 Centro de Investigación en Computación, Instituto Politécnico Nacional, Avenida Juan de Dios Batiz esquina Miguel Othón de Mendizábal, Colonia Nueva Industrial Vallejo, Gustavo A. Madero, Ciudad de México 07738, Mexico
* Author to whom correspondence should be addressed.
Agriculture 2024, 14(12), 2199; https://doi.org/10.3390/agriculture14122199
Submission received: 28 August 2024 / Revised: 24 November 2024 / Accepted: 29 November 2024 / Published: 2 December 2024
(This article belongs to the Special Issue Computational, AI and IT Solutions Helping Agriculture)

Abstract
In Oaxaca, Mexico, there are more than 30 species of the Agave genus, and their cultivation is of great economic and social importance. The incidence of pests, diseases, and environmental stress causes significant losses to the crop. The identification of damage through non-invasive tools based on visual information is important for reducing economic losses. The objective of this study was to evaluate and compare five deep learning models for the detection and classification of Agave angustifolia plants in digital images: YOLO versions 7, 7-tiny, and 8, and two models from the Detectron2 library, Faster-RCNN and RetinaNet. In the town of Santiago Matatlán, Oaxaca, 333 images were taken in an open-air plantation, and 1317 plants were labeled into five classes: sick, yellow, healthy, small, and spotted. The models were trained with a random 70% partition, validated with 10%, and tested with the remaining 20%. The results indicate that YOLOv7 is the best-performing model on the test set, with a mAP of 0.616, outperforming YOLOv7-tiny and YOLOv8, both with a mAP of 0.606 on the same set. This demonstrates that the detection and classification of Agave angustifolia plants under planting conditions is feasible using digital images and artificial intelligence.

1. Introduction

According to data from the National Commission for the Knowledge and Use of Biodiversity (Comisión Nacional para el Conocimiento y Uso de la Biodiversidad, CONABIO), 159 species of the Agave genus are distributed across Mexican territory, representing 75% of the species in the world [1]. Agave angustifolia, also known as “agave espadin”, is the species with the widest distribution in North America and can be found from Costa Rica to northern Mexico, with Oaxaca being one of the states where it is cultivated, thanks to the existence of countless microclimates that favor its production and yield [2]. According to data from the Mexican Regulatory Council for the Quality of Mezcal (Consejo Mexicano Regulador de la Calidad del Mezcal, COMERCAM), mezcal production in Mexico in 2022 was 14,165,505 L, with agave espadin being the most widely used variety, accounting for 81.08% of this production [3].
Recently, damage has been reported in plantations of different agave species caused by pests such as the agave weevil (Scyphophorus acupunctatus Gyllenhal) [4] or the rhinoceros beetle (Strategus aloeus) [5], whose presence is described by visual characteristics such as wilting, yellowing of the plant’s leaves, and reduced growth [6], or by diseases like ‘sooty mold’, which produces black spots on the leaves due to the presence of fungi of the Bipolaris genus [7], compromising yield and, consequently, economic benefits.
Artificial intelligence in agriculture offers the possibility of improving and automating activities such as irrigation planning, orchard monitoring, and greenhouse automation, thanks to the implementation of techniques such as robotics and the use of unmanned aerial vehicles [8,9]. In addition, machine learning offers an option for tasks such as the classification of fruit varieties using the visual information contained in images of their leaves [10], the detection of pests [11,12] and diseases [13,14], and the counting of plants [15,16]. This is done by locating the objects of interest within the image, enclosing them in a space defined by a bounding box, and subsequently estimating the class to which they belong [17]. Traditionally, disease identification in plantations is done through visual inspection by people considered experts [18,19]; however, ref. [20] reported cases where deep learning achieved results of up to 95% in this task.
Different studies report the use of techniques such as digital image processing, pattern recognition, and machine learning for the identification of plants and fruits, mostly focused on counting and classification. Ref. [21] implemented a modified version of an Inception-ResNet architecture for fruit counting, trained on a set of synthetic images and later tested on real images; the results showed 91% accuracy in the detection of fruits in real images and 93% in synthetic images. In the same vein, ref. [22] tested the capacity of deep learning models to perform leaf counting tasks; the tested models were trained using synthetic images, demonstrating that a dataset of this type is sufficient to achieve counting in real images. The detection of corn plants using versions 4 and 5 of the YOLO model was reported by [23], who obtained the best results with YOLOv5, with an average accuracy of 73.1%.
In applications related to agave plants, ref. [24] trained a convolutional neural network for the detection and counting of agave plants, showing that the network’s performance (F1 = 0.96) is higher than that of the Haar algorithm (F1 = 0.57). Ref. [25] implemented an algorithm based on mathematical morphology and parallel computing for the detection of agave plants in images obtained by an unmanned aerial vehicle, reporting accuracies between 83% and 89%. In the same vein, ref. [16] implemented a deep learning algorithm based on YOLOv5 to automate the counting of agave plants in images taken from an unmanned aerial vehicle. More recently, ref. [26] presented a method for the segmentation and maturity classification of Agave tequilana Weber blue crops in very high-resolution satellite images, achieving an accuracy of 95% in the classification of test images. However, no studies have been reported on the detection and classification of agave plants in photographs taken directly in the field.
The main objective of this work is the implementation and evaluation of five deep learning models for the detection and classification of Agave angustifolia H. plants in photographs taken in open-air planting plots, with the aim of providing an alternative in the design of automatic evaluation and diagnostic systems directly in the field, for the identification of possible deficiencies and early diagnosis of diseases.
Five object detection models were used: Faster-RCNN, RetinaNet, YOLOv8n (hereinafter YOLOv8), and two versions of the YOLOv7 model [27], YOLOv7 and YOLOv7-tiny. During 2023 and 2024, image labeling, training, validation, and results analysis activities were carried out. The main contributions of this work are: (i) the generation of a database of Agave angustifolia images under planting conditions; (ii) the demonstration of the use of deep learning models for the detection and classification of agave plants; and (iii) the comparison of the performance of five deep learning models for the detection of agave plants: Faster-RCNN, RetinaNet, YOLOv8, YOLOv7, and YOLOv7-tiny.

2. Materials and Methods

2.1. Image Collection and Capture Site

To create the digital image database, a plot was visited in the Central Valleys, within the so-called “mezcal region”, in the town of Santiago Matatlán, district of Tlacolula, in the state of Oaxaca (16°51′13.7″ N, 96°23′25.1″ W), with an average altitude of 1725 m above sea level and an annual rainfall of 166 mm. The 35 m × 215 m plot corresponds to a four-year-old plantation of Agave angustifolia H. The digital images of the agave plants were obtained with a Canon® Rebel T3i digital camera (Canon Inc., Tokyo, Japan) between 12:00 h and 13:30 h on 25 August 2022, under a partially cloudy sky. The capture angle was a “normal angle” (camera lens parallel to the ground, at the same level as the photographed object). A total of 333 primary images of agave plants planted in the open air under monoculture conditions were obtained; the dimensions of the images were 2592 × 1728 px in RGB color space.

2.2. Image Annotation and Classification

The VGG Image Annotator tool [28] was used to define the regions of interest in each of the images. The image annotation process was carried out manually, using rectangles that delimit the presence of an agave plant within the image, thus obtaining a total of 1317 sub-images, each representing an individual plant (Figure 1).
Each of the detections was assigned to one of five classes, each identified by a visual characteristic that distinguishes it from the others, as seen in Figure 2:
  • Sick agave: plants with most of their leaves withered with a brown color, thin, and small in size;
  • Yellow agave: plants that have a yellowish color on most of their leaves;
  • Healthy agave: plants of normal size compared to the size of the plants present in the plot, with a homogeneous green color on all the leaves;
  • Small agave: plants of small size compared to the rest of the plants found in the plot;
  • Spotted agave: plants of normal size compared to the size of the plants present in the plot, which have dark-colored spots on some or all of their leaves.
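To illustrate how such rectangle annotations can be turned into training labels for the detectors described in the next section, the sketch below converts a VIA 2.x JSON export into YOLO-style text files (one normalized class/x-center/y-center/width/height line per box). The export layout, the "class" region-attribute name, and the file paths are assumptions, not the exact pipeline used in this study; the image size follows Section 2.1.

```python
import json
from pathlib import Path

CLASSES = ["sick", "yellow", "healthy", "small", "spotted"]
IMG_W, IMG_H = 2592, 1728  # image dimensions reported in Section 2.1

def via_to_yolo(via_json: str, out_dir: str) -> None:
    """Convert a VIA 2.x rectangle export (assumed layout) into YOLO label files."""
    with open(via_json, encoding="utf-8") as f:
        data = json.load(f)
    Path(out_dir).mkdir(parents=True, exist_ok=True)
    for entry in data.values():                          # one entry per annotated image
        lines = []
        for region in entry.get("regions", []):
            shape = region["shape_attributes"]           # rectangle: x, y, width, height in px
            cls = region["region_attributes"]["class"]   # hypothetical attribute name
            xc = (shape["x"] + shape["width"] / 2) / IMG_W
            yc = (shape["y"] + shape["height"] / 2) / IMG_H
            w, h = shape["width"] / IMG_W, shape["height"] / IMG_H
            lines.append(f"{CLASSES.index(cls)} {xc:.6f} {yc:.6f} {w:.6f} {h:.6f}")
        label_file = Path(out_dir) / (Path(entry["filename"]).stem + ".txt")
        label_file.write_text("\n".join(lines))

via_to_yolo("via_annotations.json", "labels/")
```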

2.3. Training of the Models

You Only Look Once (YOLO) is an advanced object detection method that uses a convolutional neural network to divide the input image into a grid and, for each cell, calculate a bounding box and the probability that the object of interest is located in that space [29,30]. YOLO, in its version 7 (YOLOv7), offers three basic models designed for edge GPUs, regular GPUs, and cloud GPUs: YOLOv7-tiny, YOLOv7, and YOLOv7-W6, respectively. Here, the YOLOv7-tiny and YOLOv7 models were selected because they are the smallest architectures, with 6.2 M and 36.9 M trainable parameters, respectively, in addition to sharing the same set of training hyperparameters [31].
Detectron2 is a library that provides the capability to perform object detection and segmentation tasks. It includes various available models such as Mask R-CNN, Faster R-CNN, Fast R-CNN, RetinaNet, TridentNet, DensePose, Cascade R-CNN, and Tensor-Mask [32,33,34]. In this work, two models were selected: RetinaNet and Faster R-CNN, as they specialize in object detection tasks.
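A minimal sketch of how these two detectors can be set up with Detectron2 is shown below, assuming the annotations have been exported to COCO format; the dataset names, file paths, backbone choices, and solver settings are illustrative rather than the exact configuration used in this study.

```python
import os
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.data.datasets import register_coco_instances
from detectron2.engine import DefaultTrainer

# Register the agave dataset (COCO-format JSON assumed; paths are placeholders).
register_coco_instances("agave_train", {}, "annotations/train.json", "images/train")
register_coco_instances("agave_val", {}, "annotations/val.json", "images/val")

def make_cfg(zoo_yaml: str, num_classes: int = 5, max_iter: int = 1000):
    cfg = get_cfg()
    cfg.merge_from_file(model_zoo.get_config_file(zoo_yaml))
    cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url(zoo_yaml)  # COCO-pretrained weights
    cfg.DATASETS.TRAIN = ("agave_train",)
    cfg.DATASETS.TEST = ("agave_val",)
    cfg.SOLVER.MAX_ITER = max_iter                      # 1000 iterations, as in Section 2.3
    cfg.MODEL.ROI_HEADS.NUM_CLASSES = num_classes       # used by Faster R-CNN
    cfg.MODEL.RETINANET.NUM_CLASSES = num_classes       # used by RetinaNet
    return cfg

# Faster R-CNN and RetinaNet baselines from the Detectron2 model zoo (backbones assumed).
for yaml_name in ("COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml",
                  "COCO-Detection/retinanet_R_50_FPN_3x.yaml"):
    cfg = make_cfg(yaml_name)
    os.makedirs(cfg.OUTPUT_DIR, exist_ok=True)
    trainer = DefaultTrainer(cfg)
    trainer.resume_or_load(resume=False)
    trainer.train()
```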
The image database was randomly partitioned into three subsets: 70% for the training set, 20% for the test set, and the remaining 10% for the validation set (Val) (Table 1). The number of iterations used for training was 1000.
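As a point of reference, a random split with these proportions can be produced as in the short sketch below; the image directory, file extension, and fixed seed are illustrative assumptions rather than the exact procedure followed here.

```python
import random
from pathlib import Path

def split_dataset(image_dir: str, seed: int = 0):
    """Randomly split images into 70% train, 10% validation, 20% test."""
    images = sorted(Path(image_dir).glob("*.jpg"))
    random.Random(seed).shuffle(images)
    n_train, n_val = int(0.7 * len(images)), int(0.1 * len(images))
    return (images[:n_train],                   # training set
            images[n_train:n_train + n_val],    # validation set
            images[n_train + n_val:])           # test set

train, val, test = split_dataset("images/")
```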
All the models were trained under the same conditions in a programming environment based on the Python language accelerated by CUDA. The configuration of the training environment in terms of hardware and software is shown in Table 2. The size of the input images to the models was 640 × 640 px.
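The YOLO models were trained in this environment through their respective repositories; as one hedged example of such a setup, the sketch below fine-tunes a YOLOv8n checkpoint with the Ultralytics Python API using the 640 px input size and 1000 epochs stated above. The dataset YAML path and batch size are assumptions, not the exact values used in this study.

```python
from ultralytics import YOLO

# Load the pretrained YOLOv8n checkpoint and fine-tune it on the agave dataset
# (dataset YAML path and batch size are illustrative assumptions).
model = YOLO("yolov8n.pt")
model.train(data="agave.yaml", epochs=1000, imgsz=640, batch=16)

metrics = model.val()    # evaluate on the validation split defined in agave.yaml
print(metrics.box.map)   # COCO-style mAP reported by Ultralytics
```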

2.4. Model Performance Metrics

To evaluate the detection capacity of the models, the metrics Precision (Pr) and Recall (Rc) were used. Precision represents the proportion of true positives among all elements identified as positive, while Recall represents the sensitivity, i.e., the proportion of all positive elements that the model correctly identified. The Precision-Recall (P-R) curve presents Pr along the vertical axis and Rc on the horizontal axis. The area under this curve gives a scalar value that represents the average precision (AP); a high value indicates high classifier performance [35]. For each of the classes present in the dataset, it is possible to calculate the P-R curve and its corresponding AP. The mAP represents the average of the APs over all classes:
$$ Pr = \frac{TP}{TP + FP} $$
$$ Rc = \frac{TP}{TP + FN} $$
$$ AP = \int_{0}^{1} Pr(Rc)\,d(Rc) $$
$$ mAP = \frac{1}{n} \sum_{k=1}^{n} AP_{k} $$
where:
TP (True Positives): the number of items correctly detected as positive;
FP (False Positives): the number of items incorrectly detected as positive;
FN (False Negatives): the number of items incorrectly detected as negative;
AP_k: the AP corresponding to class k;
n: the number of classes.
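As a concrete illustration of these formulas, the following sketch approximates AP as the area under a precision-recall curve using the trapezoidal rule (detection toolkits typically use interpolated variants) and computes mAP as the mean over classes; the toy curves are invented for the example.

```python
import numpy as np

def average_precision(recall, precision) -> float:
    """AP approximated as the area under the precision-recall curve (trapezoidal rule)."""
    r = np.asarray(recall, dtype=float)
    p = np.asarray(precision, dtype=float)
    order = np.argsort(r)                                   # integrate over increasing recall
    r, p = r[order], p[order]
    return float(np.sum(np.diff(r) * (p[1:] + p[:-1]) / 2.0))

def mean_average_precision(per_class_curves: dict) -> float:
    """mAP = mean of the per-class AP values."""
    return float(np.mean([average_precision(r, p)
                          for r, p in per_class_curves.values()]))

# Toy example with two classes; each entry is (recall points, precision points).
curves = {"sick":    ([0.0, 0.5, 1.0], [1.0, 0.8, 0.6]),
          "healthy": ([0.0, 0.5, 1.0], [1.0, 0.7, 0.4])}
print(mean_average_precision(curves))
```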

3. Results

The loss curves of the models during training and validation were obtained up to the last epoch (Figure 3). The YOLOv7 model obtained a loss of 0.00018 in training and 0.032 in validation after 1000 epochs. The YOLOv7-tiny model obtained losses of 0.001 and 0.056 in training and validation, respectively, after the same number of epochs. YOLOv8 achieved a loss of 0.248 during training and 1.7545 during validation; for Faster-RCNN, the loss was 0.328 during training, compared to a validation value of 0.316; similar values were observed with the RetinaNet model, with 0.2154 for training and 0.3017 for validation.
The confusion matrix (Figure 4) calculated for each of the models with the validation set shows that the class with the highest classification accuracy was “sick” with the highest value reached by the YOLOv8 model at 0.86, while Faster R-CNN had the lowest value at 0.43. The class with the lowest classification value in the YOLOv7, YOLOv7-tiny, and YOLOv8 models was “small”, with values ranging from 0.033 to 0.5, while for RetinaNet, the class with the lowest value was “yellow” at 0.29. Faster R-CNN was the model with the worst accuracy in four classes: “yellow”, “healthy”, “small”, and “spotted”, with values of 0 for all four classes.
The performance metrics during model validation were recorded in order to later calculate the AP per class (Table 3) and determine the overall mAP.
The YOLOv7 model stands out primarily in the “sick” class with an AP of 0.714, which indicates good detection of plants belonging to this class. Additionally, the results show that the classes sharing similar characteristics have very similar performance metrics, with AP values of 0.554, 0.552, and 0.546 for the “yellow”, “healthy” and “spotted” classes, respectively. The class with the lowest AP value is “small” with 0.397, making it the class with the worst performance for the model.
The YOLOv7-tiny model, overall, performs similarly to YOLOv7, with the “sick” class still being the best detected, achieving an AP of 0.678, although this is lower than the 0.714 obtained by YOLOv7, indicating a loss of precision relative to the previous model. In the “healthy” class, it reaches an AP of 0.573, slightly higher than YOLOv7, while in the “small” class it achieves 0.577, showing a clear improvement. The lowest value for this model was obtained by the “spotted” class, with an AP of 0.542.
The YOLOv8 model shows the best overall performance on the validation set, achieving high AP values in most classes. It performs best in the “sick” class, with an AP of 0.791, making it the most effective model for detecting plants in this class. It also achieved similar values for the “yellow” and “healthy” classes, with 0.671 and 0.61, respectively. The values obtained for the “spotted” class (0.567) and the “small” class (0.577) are close to those of YOLOv7-tiny, without representing a clear improvement in these categories.
The Faster-RCNN model performs worse compared to the YOLO family models across all classes. The AP scores are consistently low, reaching a minimum value in the “yellow” and “spotted” classes, both with AP = 0, suggesting an inability to detect plants in these categories. In the “sick” class, it achieves an AP of 0.287, which is low compared to the YOLO models. The “healthy” class also shows a very low AP of 0.063, indicating very poor performance in detecting healthy plants.
Finally, in the AP metric, the RetinaNet model shows a performance similar to that of Faster-RCNN, with poor results in most of the evaluated classes. Like the previous model, it is unable to efficiently detect most of the classes, with “sick” and “small” having the lowest performance at 0.246 and 0.216, respectively. Unlike the other models, the best-performing class for RetinaNet is “yellow” (0.404), although, overall, its values are low compared to the other evaluated models.
The models were evaluated using the set of weights that showed the best performance during training, employing a test set of 261 labeled plant sub-images that were not used in training or validation, i.e., images unknown to the models (Table 4).
The YOLOv7 model showed good performance in most classes. In the “sick” class, it achieved an average precision of 0.799, indicating a good ability to detect sick plants. It also performed well in the “spotted” class with an AP of 0.602. Although its performance in the “yellow” class with 0.630 and the “healthy” class with 0.574 was not as high as in other classes, it remains comparable to other models evaluated in this study. The “small” class, with an AP of 0.474, demonstrates that YOLOv7 is able to identify small plants, though with low accuracy.
The YOLOv7-tiny model stands out in the “sick” class, with a precision of 0.892, suggesting that it is effective in detecting sick plants. However, in other classes, the YOLOv7-tiny showed a decrease in the AP metric compared to the full YOLOv7 model. In “yellow”, it achieved an AP of 0.583, in “healthy”, 0.585, and in “small”, 0.404, reflecting its average performance in those categories.
YOLOv8 showed results similar to YOLOv7 in several classes, with good performance in the “sick” class, where it achieved a precision of 0.833, the highest for the model; this result is above that of YOLOv7 but below that of YOLOv7-tiny. In the “yellow” class, it achieved a precision of 0.568, slightly lower than YOLOv7, and in the “healthy” class, it reached 0.558, indicating moderate performance in detecting healthy plants. The model had its lowest value in the “small” class, where it obtained an AP of 0.474, equal to that of YOLOv7. The “spotted” class was detected with a precision of 0.596, indicating that YOLOv8 has comparable performance, although it did not outperform the previous models.
Faster-RCNN performed worse than the YOLO family models in almost all classes. In the “yellow” class, it achieved a result of 0.0, indicating that it was unable to adequately detect plants with yellow leaves in the test set. In the “healthy” and “small” classes, the results were also very low, barely reaching 0.013 and 0.026, suggesting that Faster-RCNN had significant difficulties in identifying healthy and small plants. In the “spotted” class, it also achieved a precision of 0.0, reflecting the model’s inability to detect plants with spots. In summary, Faster-RCNN showed poor performance across all classes.
RetinaNet, like Faster-RCNN, did not show good performance in detecting agave plants with this dataset. Although its results were better than those of Faster-RCNN, they were still inferior to those of the YOLO models. In the “sick” class, it achieved a precision of 0.421, which was notably low compared to YOLOv7 and YOLOv7-tiny. In the “yellow” class, its performance was also low, with an AP of 0.301, and in “healthy”, it reached an AP of 0.259. In the “small” class, it achieved an AP of 0.170, indicating difficulties in detecting small plants. The “spotted” class was detected with a precision of 0.294, better than Faster-RCNN’s performance. Overall, RetinaNet did not reach the precision level of the YOLO models.
The mAP metric was calculated for each of the evaluated models. These results reflect the performance of the detection models in the task of identifying agave plants (Figure 5).
The YOLOv7 model shows similar performance across both datasets. In the validation set, it achieved a mAP of 0.552, indicating a reasonable ability to generalize over the validation data. In the test set, it improved its performance, reaching a mAP of 0.616, demonstrating that the model has good predictive ability on previously unseen data.
YOLOv7-tiny showed slightly lower performance compared to the YOLOv7 model. In the validation set, it achieved a mAP of 0.586, which is a good result considering the reduced size and complexity of the model. In the test set, its mAP was 0.606, similar to the validation performance, demonstrating that YOLOv7-tiny is capable of maintaining good accuracy despite being a more efficient model in terms of speed and resource consumption.
YOLOv8 showed the best performance in the validation set, with a mAP of 0.643, indicating its capability to detect agave plants in the validation data. However, in the test set, its mAP decreased to 0.606, which is slightly lower than that of YOLOv7.
The Faster-RCNN model showed very low performance in both sets, with a mAP of 0.079 in validation and 0.076 in the test. These results indicate that Faster-RCNN has serious difficulties in detecting agave plants in this dataset. This poor performance could be related to several factors, such as the lack of data or the model’s inability to adapt to the specific characteristics of the plants.
Similarly, RetinaNet showed low results in both datasets, achieving a mAP of 0.303 in validation, and its mAP decreased to 0.259 in the test. Although RetinaNet’s results were better than Faster-RCNN’s, they are still inferior to those of the YOLO models.
Since the models from the YOLO family showed the best results in terms of detection accuracy, their computational cost in FLOPs (floating-point operations) was calculated to determine which one exhibits the highest efficiency. YOLOv7 reaches a value of 72.26 GFLOPs, higher than the 36.13 GFLOPs of the YOLOv7-tiny model. Finally, the YOLOv8 model achieved 2.87 GFLOPs, making it the model with the lowest computational cost of the three for inference on the same image.
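For reference, one common way to obtain this kind of per-image FLOP count for a PyTorch detector is sketched below with the fvcore counter; the model-loading step is left as a placeholder because each YOLO repository exposes its own loader, and counting conventions differ between tools, so the numbers reported above were not necessarily produced this way.

```python
import torch
from fvcore.nn import FlopCountAnalysis

def count_gflops(model: torch.nn.Module, img_size: int = 640) -> float:
    """Count floating-point operations for a single 640x640 RGB image (in GFLOPs)."""
    model.eval()
    dummy = torch.randn(1, 3, img_size, img_size)
    with torch.no_grad():
        flops = FlopCountAnalysis(model, dummy)
        # fvcore counts multiply-accumulate-style operations; conventions vary by tool.
        return flops.total() / 1e9

# Hypothetical usage: model = <load a detector here>; print(count_gflops(model))
```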

4. Discussion

The Table 1 data show that the class with the largest number of images is “healthy”, and the ones with the lowest representation in the sets are “sick” and “spotted”, resulting in an unbalanced database; for this reason, it is common for models to favor the classes that dominate the database and misclassify those in the minority [36]. In many cases, this problem is treated by augmenting the images through scaling, adding noise, translation, rotation, or changing contrast [37]; however, in our case, the class that obtained the highest percentage of correct classification for the validation and test sets was “sick” in the trained models, since this class is visually more distant from the others, making it easier for the tested models to classify.
Although the use of computer vision techniques in agave cultivation has grown [16,24,25,26], their specific application for the detection and classification of symptoms that may indicate the presence of diseases or pests in a crop with high economic relevance in Mexico, particularly in the state of Oaxaca, is relatively novel.
The YOLOv7 and YOLOv7-tiny models showed the best performance in terms of accuracy, with a mAP of 0.616 and 0.606 on the test set, respectively. These models exhibit a favorable balance between speed and accuracy, making them suitable for implementation in real-time automatic assessment systems in open-field plots.
Although the YOLOv8 model showed a slight improvement in accuracy during validation with a mAP of 0.643, during testing it achieved a mAP of 0.606, similar to that of YOLOv7-tiny on the test set. This suggests that, in this context, it does not offer a significant advantage.
On the other hand, Faster-RCNN and RetinaNet showed notably lower performance, especially on the test set, with mAP scores of 0.076 and 0.259, respectively, indicating that these models are not suitable for the specific task of detecting and classifying Agave angustifolia.
One of the critical factors limiting the effectiveness of the detection models tested in this work is the relatively small size and the imbalance of the image dataset. While the results obtained are promising, a larger and more diverse dataset could significantly improve the performance of the models in terms of generalization and precision.
The YOLOv7 model has a higher number of FLOPs, indicating greater complexity compared to YOLOv7-tiny and YOLOv8. This difference is attributed to its higher number of parameters [31,38]. The results highlight the typical trade-off between precision and efficiency: YOLOv7 requires the most FLOPs but also delivers the best detection performance. It therefore emerges as the most suitable option for the task of detecting Agave angustifolia plants in the development environment employed in this study.

5. Conclusions

This article presents the implementation of five different deep learning models to detect Agave angustifolia H. plants under open-field planting conditions. The plants can be classified into one of five classes based on their visible characteristics. These features may suggest the presence of a disease or pest that affects the optimal development of the plants.
The creation of the image dataset, although currently limited, represents a significant advancement in research applied to the monitoring, evaluation, and diagnosis of Agave plants. This dataset will enable researchers to develop more accurate computer vision models to detect issues such as diseases, pests, and environmental stress, contributing to more efficient and sustainable agricultural practices.
This study compares advanced versions of YOLO, one of the most popular models for real-time object detection, along with models from the Detectron2 library, specifically Faster-RCNN and RetinaNet. This comparative approach is an important contribution, as it allows for the evaluation of which of these models is more efficient and accurate for detecting agaves in images, considering the real-world conditions of a crop field.
Although no specific modifications were made to the parameters during this study, its development focused on demonstrating that, by using default configurations, the selected models, especially those belonging to the YOLO family, are effective for monitoring and classifying agave plants, which is an important contribution to the field of precision agriculture.

6. Future Works

In future work, we will expand the dataset with more representative images of the classes presented here, aiming to have a larger and more balanced database. This will allow us to train more robust models, as well as help reduce overfitting and improve the models’ ability to detect maguey plants in real-world scenarios like the one presented in this study.
Although the evaluation in terms of computational efficiency was not the primary objective of this work, the results obtained provide a broader perspective on the performance of the tested models for the task at hand. In future research, we will conduct a detailed comparison of these models and their implementation on specific devices, such as Raspberry Pi, NVIDIA Jetson, mobile devices, or drones, to assess their performance in real-world conditions. This approach will allow us to gather crucial information about inference time and FLOPS (floating point operations per second), which will be essential for understanding how the models behave in practical scenarios, where resources such as memory, processor, and battery may be limited. In this way, we will gain a more comprehensive view of computational efficiency and detection accuracy, contributing to a better understanding of the challenges associated with the practical implementation of these models in specific agricultural tasks.
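As a starting point for that kind of on-device benchmarking, the hedged sketch below measures average single-image inference latency with PyTorch; warm-up iterations and CUDA synchronization are included because GPU execution is asynchronous, and the run count and input size are illustrative assumptions.

```python
import time
import torch

def mean_latency_ms(model: torch.nn.Module, runs: int = 100, img_size: int = 640) -> float:
    """Average single-image inference time in milliseconds (illustrative benchmark)."""
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = model.to(device).eval()
    dummy = torch.randn(1, 3, img_size, img_size, device=device)
    with torch.no_grad():
        for _ in range(10):                 # warm-up so one-time initialization is excluded
            model(dummy)
        if device == "cuda":
            torch.cuda.synchronize()        # GPU calls are asynchronous; wait before timing
        start = time.perf_counter()
        for _ in range(runs):
            model(dummy)
        if device == "cuda":
            torch.cuda.synchronize()
    return (time.perf_counter() - start) / runs * 1000
```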

Author Contributions

Conceptualization, I.M., E.Z. and T.A.-B.; methodology, I.M. and E.Z.; validation, I.M. and E.Z.; investigation, I.M., E.Z. and T.A.-B.; resources, T.A.-B. and E.Z.; writing—original draft preparation, I.M.; writing—review and editing, E.Z. and T.A.-B.; visualization, I.M.; supervision, E.Z. and T.A.-B. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Instituto Politécnico Nacional through the Secretaría de Investigación y Posgrado (SIP20220002, 20230232, 20240108), and the Mexican National Council of Humanities, Science, and Technology CONAHCyT under a doctorate grant.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

The data associated with this paper will be made available after publication upon reasonable request to any of the authors.

Acknowledgments

The authors thank Rafael Hernández Martínez for providing the facilities to take the photographs on his plot in Santiago Matatlán, Oaxaca. I. Matadamas thanks CONAHCYT for the scholarship granted to continue his postgraduate studies.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Palma, F.; Pérez, P.; Vinicio, M. Diagnóstico de la Cadena de Valor Mezcal en las Regiones de Oaxaca. 2016. Available online: https://www.oaxaca.gob.mx/coplade/wp-content/uploads/sites/29/2017/04/Perfiles/AnexosPerfiles/6.%20CV%20MEZCAL.pdf (accessed on 28 November 2024).
  2. Mariles-Flores, V.; Ortiz-Solorio, C.A. Las clases de tierras productoras de maguey mezcalero en la Soledad Salinas, Oaxaca* Classes maguey mezcal producing land in La Soledad Salinas, Oaxaca. Rev. Mex. De Cienc. Agrícolas 2016, 7, 1199–1210. [Google Scholar] [CrossRef]
  3. COMERCAM. Informe Estadístico 2023. 2023. Available online: https://comercam-dom.org.mx/wp-content/uploads/2023/05/INFORME-2023_PUBLICO.pdf (accessed on 28 November 2024).
  4. Aquino-Bolaños, T.; Parraguirre-Cruz, M.A.; Ruiz-Vega, J. Scyphophorus acupunctatus (=interstitialis) Gyllenhal (Coleoptera: Curculionidae). Pest of agave mezcalero: Losses and damage in Oaxaca, Mexico. Rev. Científica UDO Agrícola 2007, 7, 175–180. [Google Scholar]
  5. Aquino-Bolaños, T.; Aquino-Lopez, T.; Ruiz-Vega, J.; Bautista-Cruz, A. Strategus aloeus (Coleoptera: Scarabaeidae) damage in two agave species and its management based on entomopathogenic fungi in oil suspensions. Rev. Colomb. Entomol. 2024, 50, e12865. [Google Scholar] [CrossRef]
  6. CESAVEG. Manual de Plagas y Enfermedades. Available online: http://cesaveg.org.mx/divulgacion/agave/manual_agave.pdf (accessed on 11 November 2024).
  7. Romero-Cortes, T.; Pérez España, V.H.; Pescador-Rojas, J.A.; Rangel-Cortés, E.; Armendaríz-Ontiveros, M.M.; Cuervo-Parra, J.A. First Report of Leaf Spot Disease (“Negrilla”) on Agave salmiana Otto Ex Salm-Dyck (ssp. salmiana) Plants Caused by Bipolaris zeae Zivan in Mexico. Agronomy 2024, 14, 623. [Google Scholar] [CrossRef]
  8. De Oliveira, R.C.; e Silva, R.D.d.S. Artificial Intelligence in Agriculture: Benefits, Challenges, and Trends. Appl. Sci. 2023, 13, 7405. [Google Scholar] [CrossRef]
  9. Wakchaure, M.; Patle, B.K.; Mahindrakar, A.K. Application of AI techniques and robotics in agriculture: A review. Artif. Intell. Life Sci. 2023, 3, 100057. [Google Scholar] [CrossRef]
  10. Ayala-Niño, D.; González-Camacho, J.M. Evaluation of machine learning models to identify peach varieties based on leaf color. Agrociencia 2022, 4, 21–179. [Google Scholar] [CrossRef]
  11. Agnihotri, V. Machine Learning Based Pest Identification in Paddy Plants. In Proceedings of the 2019 3rd International conference on Electronics, Communication and Aerospace Technology (ICECA), Coimbatore, India, 12–14 June 2019; pp. 246–250. [Google Scholar] [CrossRef]
  12. Park, Y.-H.; Choi, S.H.; Kwon, Y.-J.; Kwon, S.-W.; Kang, Y.J.; Jun, T.-H. Detection of Soybean Insect Pest and a Forecasting Platform Using Deep Learning with Unmanned Ground Vehicles. Agronomy 2023, 13, 477. [Google Scholar] [CrossRef]
  13. Ambrosio, J.P.A.; Camacho, J.M.G.; Aguilar, A.R.; Paniagua, D.H.d.V. Identification of disease in tomato leaves using machine learning classifiers and digital images. Agrociencia 2023, 8, 2462. [Google Scholar] [CrossRef]
  14. Joseph, D.S.; Pawar, P.M.; Pramanik, R. Intelligent plant disease diagnosis using convolutional neural network: A review. Multimed. Tools Appl. 2022, 82, 21415–21481. [Google Scholar] [CrossRef]
  15. Ribera, J.; Chen, Y.; Boomsma, C.; Delp, E.J. Counting plants using deep learning. In Proceedings of the 2017 IEEE Global Conference on Signal and Information Processing (GlobalSIP), Montreal, QC, Canada, 14–16 November 2017; pp. 1344–1348. [Google Scholar] [CrossRef]
  16. Omar, H.C.; Florián, F.; Sánchez, M.G.; Guadalupe, S.M.; Ávila-George, H. Conteo de plantas de agave usando redes neuronales convolucionales e imágenes adquiridas desde un vehículo aéreo no tripulado. RISTI—Rev. Ibérica De Sist. E Tecnol. De Informação 2022, 1, 64–76. [Google Scholar] [CrossRef]
  17. Gündüz, M.; Işik, G. A new YOLO-based method for real-time crowd detection from video and performance analysis of YOLO models. J. Real-Time Image Process. 2023, 20, 5. [Google Scholar] [CrossRef] [PubMed]
  18. Shahi, T.B.; Xu, C.Y.; Neupane, A.; Guo, W. Recent Advances in Crop Disease Detection Using UAV and Deep Learning Techniques. Remote Sens. 2023, 15, 2450. [Google Scholar] [CrossRef]
  19. Toda, Y.; Okura, F. How Convolutional Neural Networks Diagnose Plant Disease. Plant Phenomics 2019, 2019, 9237136. [Google Scholar] [CrossRef] [PubMed]
  20. Saleem, M.H.; Potgieter, J.; Arif, K.M. Plant Disease Detection and Classification by Deep Learning. Plants 2019, 8, 468. [Google Scholar] [CrossRef]
  21. Rahnemoonfar, M.; Sheppard, C. Deep Count: Fruit Counting Based on Deep Simulated Learning. Sensors 2017, 17, 905. [Google Scholar] [CrossRef]
  22. Ubbens, J.; Cieslak, M.; Prusinkiewicz, P.; Stavness, I. The use of plant models in deep learning: An application to leaf counting in rosette plants. Plant Methods 2018, 14, 6. [Google Scholar] [CrossRef]
  23. Mota-Delfin, C.; López-Canteñs, G.d.J.; López-Cruz, I.L.; Romantchik-Kriuchkova, E.; Olguín-Rojas, J.C. Detection and Counting of Corn Plants in the Presence of Weeds with Convolutional Neural Networks. Remote Sens. 2022, 14, 4892. [Google Scholar] [CrossRef]
  24. Flores, D.; González-Hernández, I.; Lozano, R.; Vazquez-Nicolas, J.M.; Hernandez Toral, J.L. Automated Agave Detection and Counting Using a Convolutional Neural Network and Unmanned Aerial Systems. Drones 2021, 5, 4. [Google Scholar] [CrossRef]
  25. Calvario, G.; Alarcón, T.E.; Dalmau, O.; Sierra, B.; Hernandez, C. An Agave Counting Methodology Based on Mathematical Morphology and Images Acquired through Unmanned Aerial Vehicles. Sensors 2020, 20, 6247. [Google Scholar] [CrossRef]
  26. Sánchez, A.; Nanclares, R.; Quevedo, A.; Pelagio, U.; Aguilar, A.; Calvario, G.; Moya-Sánchez, E.U. Agave crop segmentation and maturity classification with deep learning data-centric strategies using very high-resolution satellite imagery. arXiv 2023, arXiv:2303.11564. [Google Scholar] [CrossRef]
  27. Wang, M.; Yang, B.; Wang, X.; Yang, C.; Jie, X.; Mu, B.; Xiong, K.; Li, Y. YOLO-T: Multitarget Intelligent Recognition Method for X-ray Images Based on the YOLO and Transformer Models. Appl. Sci. 2022, 12, 11848. [Google Scholar] [CrossRef]
  28. Dutta, A.; Zisserman, A. The VIA Annotation Software for Images, Audio and Video. In Proceedings of the 27th ACM International Conference on Multimedia, Nice, France, 21–25 October 2019; pp. 2276–2279. [Google Scholar] [CrossRef]
  29. Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You Only Look Once: Unified, Real-Time Object Detection. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 779–788. [Google Scholar] [CrossRef]
  30. Shinde, S.; Kothari, A.; Gupta, V. YOLO based Human Action Recognition and Localization. Procedia Comput. Sci. 2018, 133, 831–838. [Google Scholar] [CrossRef]
  31. Wang, C.Y.; Bochkovskiy, A.; Liao, H.Y.M. YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors. arXiv 2022, arXiv:2207.02696. [Google Scholar]
  32. Rani, A.; Ortiz-Arroyo, D.; Durdevic, P. Defect Detection in Synthetic Fibre Ropes Using Detectron2 Framework. Appl. Ocean Res. 2024, 150, 104109. [Google Scholar] [CrossRef]
  33. de Almeida, G.P.S.; dos Santos, L.N.S.; da Silva Souza, L.R.; da Costa Gontijo, P.; de Oliveira, R.; Teixeira, M.C.; De Oliveira, M.; Teixeira, M.B.; do Carmo França, H.F. Performance Analysis of YOLO and Detectron2 Models for Detecting Corn and Soybean Pests Employing Customized Dataset. Agronomy 2024, 14, 2194. [Google Scholar] [CrossRef]
  34. Butt, M.; Glas, N.; Monsuur, J.; Stoop, R.; de Keijzer, A. Application of YOLOv8 and Detectron2 for Bullet Hole Detection and Score Calculation from Shooting Cards. AI 2024, 5, 72–90. [Google Scholar] [CrossRef]
  35. Liu, K.; Sun, Q.; Sun, D.; Yang, M.; Wang, N.; Peng, L. Underwater target detection based on improved YOLOv7. J. Mar. Sci. Eng. 2023, 11, 77. [Google Scholar] [CrossRef]
  36. Ferreira, U.E.C.; Camacho, J.M.G. Clasificador de red neuronal convolucional para identificar enfermedades del fruto de aguacate (Persea americana Mill.) a partir de imágenes digitales. Agrociencia 2021, 55, 695–709. [Google Scholar] [CrossRef]
  37. Kumar, N.; Nagarathna; Flammini, F. YOLO-Based Light-Weight Deep Learning Models for Insect Detection System with Field Adaption. Agriculture 2023, 13, 741. [Google Scholar] [CrossRef]
  38. Sohan, M.; Sai Ram, T.; Rami Reddy, C.V. A Review on YOLOv8 and Its Advancements. In Data Intelligence and Cognitive Informatics; ICDICI 2023. Algorithms for Intelligent Systems; Springer: Singapore, 2024. [Google Scholar] [CrossRef]
Figure 1. Selection criteria for agave samples in the images obtained in the field. A: complete agaves in the foreground of the image; B: partially recognizable agaves; C: agaves in the background whose shape can be fully recognized; and D: agaves in the far background that are partially or totally recognizable.
Figure 2. Examples of images (detections) of the different classes in the labeling process. (A) Sick agave; (B) yellow agave; (C) healthy agave; (D) small agave; (E) spotted agave.
Figure 3. Comparison of the loss behavior of the YOLOv7, YOLOv7-tiny, YOLOv8, Faster-RCNN, and RetinaNet models during the training and validation processes. (A) Training process; (B) validation process.
Figure 4. Confusion matrix of the performance in predicting images of agave espadin. (A) YOLOv7 model; (B) YOLOv7-tiny model; (C) Faster R-CNN model; (D) RetinaNet model; (E) YOLOv8 model.
Figure 5. mAP metric calculated for the five evaluated models using the validation and test sets.
Table 1. Number of agave sub-images per class in each of the training sets of the YOLOv7 model.

Set           Sick    Yellow    Healthy    Small    Spotted    Total
Training      109     213       345        173      100        940
Test          34      60        77         55       35         261
Validation    7       24        43         27       15         116
Total         150     297       465        255      150        1317
Table 2. Environment settings and initial values of the training hyperparameters of the models.

Item/Element            Model/Version
CPU                     AMD Ryzen 5 3600 G
GPU                     NVIDIA GeForce RTX 3050
RAM                     Adata XPG Spectrix D50 8 GB 3200 MHz (2) 1
OS                      Ubuntu Linux 22.04.2 LTS
Programming Language    Python 3.10.9
CUDA Toolkit            12.1
NVIDIA Driver           530.30.02
Epochs                  1000

1 Number of units used.
Table 3. Performance in AP metric for the prediction in images of agave espadin of the models with the validation set.

Class/Model    YOLOv7    YOLOv7-Tiny    YOLOv8    Faster-RCNN    RetinaNet
Sick           0.714     0.678          0.791     0.287          0.246
Yellow         0.554     0.56           0.671     0              0.404
Healthy        0.552     0.573          0.61      0.063          0.327
Small          0.397     0.577          0.577     0.043          0.216
Spotted        0.546     0.542          0.567     0              0.316
Table 4. Performance in AP metric for prediction in images of agave espadin of the models with the test set.

Class/Model    YOLOv7    YOLOv7-Tiny    YOLOv8    Faster-RCNN    RetinaNet
Sick           0.799     0.892          0.833     0.342          0.421
Yellow         0.630     0.583          0.568     0              0.301
Healthy        0.574     0.585          0.558     0.013          0.259
Small          0.474     0.404          0.474     0.026          0.170
Spotted        0.602     0.567          0.596     0              0.294
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
