Article

Pattern Classification of an Onion Crop (Allium Cepa) Field Using Convolutional Neural Network Models

by Manuel de Jesús López-Martínez 1,†, Germán Díaz-Flórez 1,*, Santiago Villagrana-Barraza 1, Celina L. Castañeda-Miranda 1,2, Luis Octavio Solís-Sánchez 1,2, Diana I. Ortíz-Esquivel 3, José I. de la Rosa-Vargas 3 and Carlos A. Olvera-Olvera 1,*,†

1 Laboratorio de Invenciones Aplicadas a la Industria (LIAI), Unidad Académica de Ingeniería Eléctrica, Universidad Autónoma de Zacatecas, Zacatecas 98000, Mexico
2 Laboratorio de Inteligencia Artificial Avanzada (LIAA), Unidad Académica de Ingeniería Eléctrica, Posgrado en Ingeniería y Tecnología Aplicada, Universidad Autónoma de Zacatecas, Zacatecas 98000, Mexico
3 Unidad Académica de Ingeniería Eléctrica, Universidad Autónoma de Zacatecas, Campus Siglo XXI, Zacatecas 98160, Mexico
* Authors to whom correspondence should be addressed.
† These authors contributed equally to this work.
Agronomy 2024, 14(6), 1206; https://doi.org/10.3390/agronomy14061206
Submission received: 1 May 2024 / Revised: 24 May 2024 / Accepted: 29 May 2024 / Published: 3 June 2024

Abstract

Agriculture is an area that currently benefits from the use of new technologies and techniques, such as artificial intelligence, to improve production in crop fields. Zacatecas is one of the states producing the most onions in the northeast region of Mexico. Identifying and determining vegetation, soil, and humidity zones could help solve problems such as irrigation demands or excesses, identify spaces with different levels of soil homogeneity, and estimate the yield or health of the crop. This study examines the application of artificial intelligence through the use of deep learning, specifically convolutional neural networks, to identify the patterns that can be found in a crop field, in this case, vegetation, soil, and humidity zones. To extract the mentioned patterns, the K-nearest neighbor algorithm was used to pre-process images taken using unmanned aerial vehicles and form a dataset composed of 3672 images of vegetation, soil, and humidity (1224 for each class). A total of six convolutional neural network models were used to identify and classify the patterns, namely AlexNet, DenseNet, VGG16, SqueezeNet, MobileNetV2, and ResNet18. Each model was evaluated with the following validation metrics: accuracy, F1-score, precision, and recall. The results showed a variation in performance between 90% and almost 100%. AlexNet obtained the highest metrics with an accuracy of 99.92%, while MobileNetV2 had the lowest accuracy of 90.85%. Other models, such as DenseNet, VGG16, SqueezeNet, and ResNet18, showed an accuracy of between 92.02% and 98.78%. Furthermore, our study highlights the importance of adopting artificial intelligence in agriculture, particularly in the management of onion fields in Zacatecas, Mexico. The findings can help farmers and agronomists make more informed and efficient decisions, which can lead to greater production and sustainability in local agriculture.

1. Introduction

Cultivated for thousands of years, the onion (Allium Cepa) is the most important vegetable in the Alliaceae family and has fed humanity for countless generations. Around 25 million tons of onion are currently produced globally, making it one of the most basic and essential crops in human nutrition [1]. According to the Food and Agriculture Organization of the United Nations, the main onion-producing countries are China, Japan, the Republic of Korea, Mali, Angola, New Zealand, Nigeria, Tunisia, Turkey, and Iraq [2]. The onion is also the second most produced vegetable worldwide, and Mexico supplies an important share that has increased steadily since 2020 [3,4]: it contributes 1 in every 50 tons of onion consumed worldwide, with a production volume of around 1.5 million tons. The state of Zacatecas, a prominent producer in Mexico's Bajío region, ranks third nationwide in onion production, with approximately 0.2 million tons [4]. It is well known that many factors intervene in the growth and development of crops, including soil conditions, climate, fertilizer management, agricultural machinery, irrigation water, weeds, pests, and humidity [5]. If not managed well, these factors can reduce production or, in the worst case, kill the onion crop.
Onion production worldwide is frequently affected by problems derived from inadequate agronomic practices by producers [6]. This challenge is especially notable in the state of Zacatecas, where constant humidity, a lack of sunlight in times of rain, and pests have caused a production loss of approximately 20% [7]. The use of artificial intelligence (AI) can help overcome these limitations and support the growth of the onion industry in Zacatecas. Therefore, it is crucial to encourage the adoption of modern agricultural technologies among onion producers to ensure a prosperous and sustainable future for the industry in the state.
Currently, agriculture faces significant environmental pressures, including soil erosion and resource mismanagement, impacting ecosystems and biodiversity. To address these challenges, the agricultural industry must adopt new technologies that improve sustainability and efficiency. This involves the use of advanced technologies to process data and analyze plants, which represents a transformative change in agriculture [8,9,10]. These technologies include advanced sensor systems [11], digital image processing (DIP) [12], and the application of machine learning (ML) algorithms [13], enabling more efficient crop management and increased agricultural productivity. In this context, deep learning (DL) is a rapidly growing technology that can improve and benefit the correct management of different crops, not only onions. Specifically, convolutional neural networks (CNNs) [14] are used to process large-scale images of crop fields. CNNs, designed to automatically learn spatial features from input images [15], are essential in numerous fields of scientific research, including agriculture. Their application goes beyond simple image classification and includes object detection, semantic segmentation, and image synthesis, providing detailed and accurate information about agricultural practices [16,17]. This technology is fundamentally transforming the ways in which crops are managed and production is optimized. One of the most prominent applications of this technology in agriculture is the processing of images taken using unmanned aerial vehicles (UAVs) or drones. Equipped with high-resolution cameras, these UAVs can capture detailed images of mines, ecological zones, crop fields, and other locations in real time [18]. In addition, through transfer learning, CNNs can be trained with fewer data points, which is especially useful in agriculture, where data can be limited or expensive to obtain. Transfer learning reuses pre-existing, pre-trained models and adjusts them to meet specific needs, taking advantage of the knowledge acquired from general tasks and applying it to specific tasks such as identifying areas of vegetation, soil, and humidity in onion fields [19].
Some previous studies have presented the use of this technology, applying CNNs and transfer learning to resolve challenges in the agricultural industry and to facilitate the management of crop fields and the production of organic food. In the work of Pandey et al. [20], a conjugated dense CNN integrating data fusion and feature map extraction was used to classify different types of crop fields. They obtained an accuracy of 96.2% for the examined data. Rachmad et al. [21] used several CNN models, such as SqueezeNet, AlexNet, ResNet101, ResNet50, and ResNet18, to classify diseases in corn plants, obtaining 95.59% accuracy with the ResNet50 CNN model. Additionally, UAVs and CNNs can be used in combination to create systems for the detection of weeds, as in the study presented by Haq et al. [22], which proposed a CNN with a learning vector quantization algorithm to classify crop images into soybean, grass, soil, and broadleaf weed classes, obtaining an overall accuracy of 99.44%. The work of Tetila et al. [23] evaluated four DL models—InceptionV3, ResNet50, VGG19, and Xception—for the automatic recognition of soybean leaf diseases from images taken via UAVs by applying transfer learning. The highest accuracy was found for InceptionV3, at 99.04%. Zheng et al. [24] used UAVs and CNNs to detect and monitor the growing status of oil palm trees photographed at two sites in Indonesia and proposed a solution to categorize different kinds of oil palms into five classes: healthy palms, dead palms, mismanaged palms, smallish palms, and yellowish palms. They achieved F1-scores of 87.91% for site 1 and 99.04% for site 2.
Another relevant study presented by Gulzar et al. [25] investigated the classification of diseases in sunflower plants using transfer learning by applying the CNN models AlexNet, VGG16, InceptionV3, MobileNetV3, and EfficientNetB3; the highest precision, recall, F1-score, and accuracy of 97.6% were achieved with the EfficientNetB3 model. Alkanan et al. [26] identified broken, discolored, silk-cut, and pure corn states using the CNN model MobileNetV2, incorporating additional layers that augmented its feature capabilities and achieving an accuracy of 96% for each class. A soybean seed classification was presented in [27], using InceptionV3 to perform the training; the performance of this approach was validated using the metrics of precision, recall, and the F1-score. In this study, the results showed good performance in all classes, with values ranging between 98.51 and 100%. Ibarra-Pérez et al. [28] proposed a transfer learning approach to identify different phenological stages of beans using the CNN models AlexNet, VGG19, SqueezeNet, and GoogleNet. They used validation metrics like accuracy, precision, sensitivity, specificity, and F1-score to obtain the best results with the GoogleNet architecture, reporting the highest value of 98.73% for specificity. Tsai et al. [29] implemented the model YoloV5m integrated with BoTNet, ShuffleNet, and GhostNet to detect tomato fruits, reporting accuracies of 94%, 95%, and 96% for these models, respectively.
In the context of onion crop fields, the authors of [30] proposed a DCNN architecture to detect onion disease (purple blotch), obtaining a classification accuracy of 85.47%. Additionally, Kim et al. [31] proposed a DCNN to classify and identify six classes of disease symptoms in onion crop images. Paymode et al. [32] proposed a CNN architecture to identify the quality of onions and classify them as healthy or defective onions, achieving an accuracy value of 97.59%.
This article explores the potential of various CNN models, such as AlexNet, SqueezeNet, DenseNet, VGG16, MobileNetV2, and ResNet18, for the detection of characteristics such as vegetation zones, soil, and humidity in onion cultivation fields. These models were used to analyze 3672 images of three different classes (vegetation, soil, and humidity), which were taken using UAVs. In implementing the proposed methodology, the aim was to identify the most accurate CNN model for feature identification. The results seek to contribute to the development of good agricultural practices, providing farmers with a technological tool that allows them to optimize the management of their onion fields, which could help them to maximize their crop yield, reduce costs, and minimize the production losses generated by environmental factors and the poor management of crop fields.

2. Materials and Methods

This section presents the proposed methodology developed to extract patterns present in onion (Allium Cepa) crop fields. This methodology takes advantage of CNNs to identify and classify data. With the images obtained using UAVs, pre-processing is performed to form a dataset with which the CNN models are trained. Classifying patterns like vegetation, soil, and humidity in the collected images can help generate compensatory strategies to be applied by farmers. This knowledge will empower them to choose better agricultural practices and potentially minimize production losses in their onion fields.
The process involves several stages, as shown in Figure 1a, including data acquisition, image pre-processing, image analysis using CNN models, and, finally, the evaluation of the models using various validation metrics. Each stage is designed to ensure the accuracy and reliability of the results. Figure 1b provides a graphical overview of the proposed methodology.

2.1. Data Acquisition

This approach extends the research presented in the work by Duarte-Correa et al. [7] by leveraging the same dataset of images to explore further growth patterns in onion crop fields, namely vegetation, soil, and humidity zones. Image segmentation techniques are applied in the methodology outlined in this paper to classify and categorize patterns related to vegetation, soil, and humidity distributions. In doing so, we aim not only to validate the findings from that study but also to extend the methodology's applications into new domains and real-world scenarios. Compared with the work of Duarte-Correa et al. [7], this project can provide more detailed and accurate results through the statistical metrics used to validate AI models. In addition, it takes advantage of current technological applications and tools used in agricultural research (the cloud, sensor networks, microcontrollers, robots, etc.), particularly in arid-zone contexts.
Figure 2 shows the sorts of images that are normally obtained and used in a crop field monitoring process. The UAVs carry out a sweeping or field tracking process from which multiple sequential images are obtained until the entire study area is captured (Figure 2b). To obtain these images, a Phantom 4 Pro UAV (Shenzhen, China) with an RGB camera with a 4K resolution was used. Once the UAVs have completed this process, specialized applications such as OpenDroneMap are used to generate the orthomosaic or orthophoto, which is a detailed representation of the onion field (Figure 2a). In this case, Figure 2a is used only as a representative example of what can be achieved with the sequential images taken using a UAV. It is important to mention that the monitoring or scanning process is always carried out first, and then the orthomosaic is formed. To enlarge the dataset and thus increase the information available for image analysis, crops of size 900 × 700 px were made (Figure 2c) to better extract the patterns from the images captured via UAV. Originally, the total number of images taken in the study area was 758; from these, a sample of 51 was taken to apply the cuts and thus expand the batch of data; therefore, the new dataset was made up of 1224 cropped images of size 900 × 700 px, which are the ones to which pre-processing was applied.
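As a rough illustration of this tiling step, the sketch below cuts a UAV frame into non-overlapping 900 × 700 px tiles with Pillow. The folder layout and file-naming scheme are assumptions for demonstration only; the paper does not state how the 51 sampled images were cropped in practice.

```python
from pathlib import Path
from PIL import Image

TILE_W, TILE_H = 900, 700  # crop size reported in the text (px)

def slice_image(src_path: Path, out_dir: Path) -> int:
    """Cut one UAV frame into non-overlapping 900 x 700 px tiles."""
    out_dir.mkdir(parents=True, exist_ok=True)
    img = Image.open(src_path)
    width, height = img.size
    count = 0
    for top in range(0, height - TILE_H + 1, TILE_H):
        for left in range(0, width - TILE_W + 1, TILE_W):
            tile = img.crop((left, top, left + TILE_W, top + TILE_H))
            tile.save(out_dir / f"{src_path.stem}_{top}_{left}.png")
            count += 1
    return count

# Hypothetical folder names; the original frame files are not named in the paper.
total = sum(slice_image(p, Path("tiles")) for p in Path("uav_frames").glob("*.jpg"))
print(f"{total} tiles of {TILE_W}x{TILE_H} px generated")
```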

2.2. Image Pre-Processing

To enhance the methodology introduced in [7], we employed an advanced pre-processing approach that builds upon semantic segmentation as its foundational element. Unlike traditional methods, our approach goes beyond mere classification or grouping of pixels in an image. This refined technique serves as a cornerstone in this research, elevating the level of detail and accuracy in our exploration of agricultural dynamics.
Many agricultural investigations present methodologies for extracting significant features from agricultural images. These methods encompass spectral analysis, photogrammetry, color indices, feature extraction analysis, and statistical analysis, with a primary emphasis on leveraging vegetation indices [33].
KNN (K-nearest neighbors) is one of the most widely used methods within semantic segmentation. This technique, known for its simplicity and effectiveness, operates by assigning labels to pixels based on the majority class of their nearest neighbors in the feature space [34]. Its application in semantic segmentation contributes to accurate and context-aware classification, making it a popular choice in image analysis and pattern recognition tasks.
This approach is based on the classification of input images using three different kernels, each of which represents a specific characteristic present in onion cultivation fields: vegetation, soil, and humidity zones. These kernels are designed to discern crucial patterns in data images. To correlate the colors corresponding to each kernel, the RGB (Red, Green, and Blue) color space is used to numerically represent the colors corresponding to each of the representative patterns. For example, the vegetation zone corresponds to green colors in the RGB color space. By setting these green colors, patterns such as leaves, weeds, and leaf overlap, among others, can be identified. On the other hand, the ground presents greater luminosity and brightness compared to the vegetation areas. Humidity zones are those where colors in the brown range are present, which is why this range was used for their segmentation. The pre-processing with KNN was carried out using three kernels: kernel 1 for vegetation zones, kernel 2 for soil, and kernel 3 for humidity zones. Using these three kernels, the centroids can be extracted and the values correlated with the colors obtained from the corresponding channels of the RGB color space for each pixel in the image. In this way, KNN identifies the nearest neighbors of each pixel based on their pattern representations. This approach offers a robust and accurate methodology to classify patterns present in crop fields by pre-processing the images, which improves the efficiency and performance of the CNN models used later.
The majority class among these neighbors determines the label assigned to the pixel. This process is repeated throughout the image, resulting in a semantically segmented output where each pixel is assigned a label indicative of the predominant pattern it belongs to. The above is demonstrated in Figure 3.
To determine the similarity between data points, KNN computes distances in the feature space; the Euclidean distance is the most common choice, although other similarity measures can also be used. In this method, we use the squared Euclidean distance, given by

$$d_{st}^{2} = (x_s - y_t)(x_s - y_t)',$$ (1)

where $d_{st}^{2}$ is the squared distance between the two points $x_s$ and $y_t$ (expressed as row vectors). By expanding Equation (1), we obtain the Pythagorean theorem [35].
The utilization of KNN is a valuable component in this methodology, contributing to the accurate delineation of distinct patterns within the agricultural images.
Once KNN was applied for image segmentation and to form the dataset for training, 3672 new images were created. For each segmented pattern, 1224 images were obtained, and, with these, the classes with which the CNN models would be trained were formed.
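A minimal sketch of this per-pixel KNN segmentation with scikit-learn is shown below. The seed (kernel) RGB values, the choice of k = 1, and the tile filename are illustrative assumptions; the centroid values actually used by the authors are not reported in the text.

```python
import numpy as np
from PIL import Image
from sklearn.neighbors import KNeighborsClassifier

# Illustrative seed (kernel) colors in RGB; these are assumptions for demonstration.
seeds = np.array([
    [60, 120, 50],    # kernel 1: vegetation (greens)
    [190, 170, 140],  # kernel 2: soil (bright, high-luminosity tones)
    [110, 80, 60],    # kernel 3: humidity (brownish tones)
], dtype=float)
labels = np.array([0, 1, 2])  # 0 = vegetation, 1 = soil, 2 = humidity

# KNN with the Euclidean distance of Equation (1); k = 1 assigns each pixel
# to the closest seed color in RGB space.
knn = KNeighborsClassifier(n_neighbors=1, metric="euclidean").fit(seeds, labels)

img = np.asarray(Image.open("tile_0_0.png").convert("RGB"), dtype=float)  # hypothetical tile
pixels = img.reshape(-1, 3)
segmented = knn.predict(pixels).reshape(img.shape[:2])  # per-pixel class map

# Save one binary mask per pattern, mirroring the three output classes in Figure 3.
for class_id, name in enumerate(["vegetation", "soil", "humidity"]):
    mask = (segmented == class_id).astype(np.uint8) * 255
    Image.fromarray(mask).save(f"{name}_mask.png")
```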

2.3. Deep Learning for Image Analysis

ML has permeated many aspects of our daily lives, from search engines to smart devices. ML systems can identify objects in images, translate spoken language into text, and personalize advertising [36]. DL, a technique within ML, enables models to learn representations of data with multiple levels of abstraction [37]. This capability is particularly useful when analyzing data such as the UAV images of our onion crop field, as it allows us to extract meaningful values that can be used to improve maintenance and production.
There are several DL techniques, including artificial neural networks (ANNs), recurrent neural networks (RNNs), generative adversarial networks (GANs), transfer learning, and autoencoders. Each of these can help us perform different types of processing. However, in this article, we focus on CNNs and transfer learning. These techniques were used to analyze and classify patterns within the agricultural images, providing valuable insights for efficient and effective farming practices.

2.3.1. Convolutional Neural Networks

CNNs are a type of artificial neural network primarily used for object detection in images. They were created to mimic human vision on computers. The image recognition accuracy that a CNN achieves depends on the architecture or model of the neural network (NN), that is, on the number of layers and the depth it possesses.
CNNs are generally feed-forward networks, in which each layer learns from the previous (or input) layer and passes its results forward. Typically, pooling layers follow the convolutional layers, and the fully connected layers are the last during training [15].
Convolutional layers are the main blocks that characterize this type of ANN; their parameters are a set of learnable filters defined by height, width, and, in some cases, depth. Pooling layers are inserted between the convolutional layers to progressively reduce the size of the image, the number of parameters, and the computation in the NN, helping to avoid overfitting. Fully connected (FC) layers contain neurons with complete connections to all activations from the previous layer, so their activations can be calculated with a matrix multiplication followed by a bias offset.
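To make these layer types concrete, the following is a minimal PyTorch sketch of a small CNN with convolutional, pooling, and fully connected layers. It is purely illustrative and is not one of the six architectures evaluated in this study.

```python
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    """Minimal example of the layer types described above (illustrative only)."""
    def __init__(self, num_classes: int = 3):  # vegetation, soil, humidity
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # convolutional layer (learnable filters)
            nn.ReLU(),
            nn.MaxPool2d(2),                             # pooling layer reduces spatial size
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 56 * 56, num_classes)  # fully connected layer

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)        # feed-forward pass through conv/pool blocks
        x = torch.flatten(x, 1)
        return self.classifier(x)   # matrix multiplication followed by a bias offset

model = TinyCNN()
logits = model(torch.randn(1, 3, 224, 224))  # one RGB image resized to 224 x 224
print(logits.shape)  # torch.Size([1, 3])
```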
In this study, the application of these CNN principles was evaluated for the analysis of images taken via UAVs in onion fields to identify the characteristics present in these fields. This same method can be implemented in various aspects of the agroindustry. By harnessing the power of CNNs and transfer learning, we can support improvements in the efficiency and accuracy of agricultural data processing, which is crucial for real-time decision making in modern agriculture.
Conducting experiments with CNNs presents certain challenges, particularly in terms of hardware requirements. CNNs are known to require significant storage and processing capabilities, especially when working with large datasets such as images.
Additional random access memory (RAM) or video random access memory (VRAM) is usually a necessity, and the size required depends on the volume of the data batch being used. As such, it is recommended to use computing systems that exceed the capabilities of conventional computers to effectively handle these demands [38].
While there are now services and infrastructures that can be rented or purchased to enhance analytical capabilities, these options are outside the scope of this article. However, it is important to highlight that the choice of hardware can significantly affect the efficiency and effectiveness of CNN experiments.
Table 1 shows the computer specifications used for the experiments in this study, demonstrating the hardware capability required for such tasks. This underscores the importance of considering hardware requirements when planning and conducting experiments with CNN models.

2.3.2. Transfer Learning

This approach is particularly beneficial when working with limited data, a common scenario in agricultural applications. Often, photographs of crop fields taken via UAVs are very similar, and only in areas where there are abnormalities—such as weeds, garbage, excess humidity, or growth deficit, among other very visible characteristics—are the images a little different.
In this research, transfer learning was used to adapt pre-trained CNN models to the task of pattern classification in onion crop fields. Transfer learning is a highly effective technique in DL that makes it possible to use pre-trained models for specific tasks [39]. To implement transfer learning, the final classification layer in a pre-trained CNN model is replaced with a new layer tailored to the specific task. Using transfer learning offers several advantages. First, it allows us to take advantage of the strong feature extraction capabilities of pre-trained CNNs. Second, it reduces the amount of data needed to train the model since it has already learned valuable features from its pre-training dataset. Finally, it can significantly reduce the training time since we only need to train the final layer of the model [40].
In this study, the pre-processing results obtained from the KNN were used to create the classes (vegetation, soil, and humidity) to form the new dataset. This dataset was then integrated into the transfer learning process. Specifically, the KNN algorithm segmented the images from the onion crop fields into the distinct patterns mentioned above. These segmented features served as the input for the pre-trained CNN models.
The pre-trained models were then fine-tuned on this segmented data using transfer learning. This process enabled the CNNs to learn the specific patterns of the segmented images, thereby enhancing their ability to accurately classify patterns in onion crop fields. Figure 4 shows the representation of our process in which the convolutional and pooling layers of pre-trained CNN models were transferred to our architecture. Moreover, the last layers were modified to obtain better results in our pattern classification.
The CNN models used for this purpose include Resnet18 [41], VGG16 [42], Alexnet [43], SqueezeNet [44], MobileNet [45], and DenseNet [46]. These models, with their diverse architectures and capabilities, provide a comprehensive approach to pattern classification in this study.
Furthermore, this approach allows for effective inferences to be made from the applied models. Once the CNN models are trained, the generated weights can be used to analyze new images, not only of onion fields but also of other classes and types.
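A minimal PyTorch sketch of this layer-replacement and weight-reuse step is given below, using ResNet18 from torchvision as an example. The decision to freeze the convolutional base and the weight file name are assumptions, since the paper does not specify these details.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 3  # vegetation, soil, humidity

# Load a CNN pre-trained on ImageNet and replace its final classification layer.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False                           # freeze the transferred convolutional base (assumption)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)   # new, trainable classification head

# ... fine-tune the new head on the segmented dataset (see the training sketch below) ...

# After training, the generated weights can be stored and later reloaded to
# analyze new images (hypothetical file name).
torch.save(model.state_dict(), "resnet18_onion_patterns.pt")
model.load_state_dict(torch.load("resnet18_onion_patterns.pt"))
model.eval()
```

For the other architectures, the attribute holding the final layer differs (for example, `classifier` in AlexNet, VGG16, DenseNet, and MobileNetV2), but the replacement idea is the same.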

2.4. Training and Evaluation

This study evaluated methods by considering three classes related to the patterns found in onion cultivation fields: vegetation, soil, and humidity. Each class consisted of 1224 images (vegetation, soil, and humidity). The data were randomly divided into training and test sets, with 70% of the data used for training and 30% for testing.
For the CNN models, a batch size of 64 was set, meaning that each gradient update was performed on 64 data points. Another training configuration was the number of epochs, that is, the number of complete passes over the training data, processed in increments of the batch size. Since the previous literature indicates that transfer learning does not require many epochs, as reusing pre-trained weights is one of the main benefits of this technique, only 20 epochs were configured to train the models.
In most projects related to machine learning and large-scale learning, a procedure called stochastic gradient descent (SGD) is widely used. This method consists of presenting the input vectors for a small set of examples, computing the outputs and errors, computing the average gradient over those examples, and adjusting the weights accordingly. The process is repeated over many small batches of training examples until the average of the objective function stops decreasing. It is called stochastic because each small batch of examples gives a noisy estimate of the average gradient over all examples [37]. The six pre-trained CNN models were configured with the same parameters.
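Putting the reported settings together (a 70/30 split, a batch size of 64, 20 epochs, and SGD), a training loop might look like the sketch below. The dataset folder layout, the input image size, and the learning-rate and momentum values are assumptions, since they are not stated in the text.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, random_split
from torchvision import datasets, models, transforms

# One sub-folder per class (vegetation/, soil/, humidity/) is assumed.
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
full_set = datasets.ImageFolder("onion_patterns", transform=transform)

n_train = int(0.7 * len(full_set))                                    # 70% for training
train_set, test_set = random_split(full_set, [n_train, len(full_set) - n_train])
train_loader = DataLoader(train_set, batch_size=64, shuffle=True)     # batch size of 64
test_loader = DataLoader(test_set, batch_size=64)

# Pre-trained model with a new 3-class head, as in the previous sketch.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 3)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.001, momentum=0.9)  # SGD, as described above

for epoch in range(20):                                               # 20 epochs
    model.train()
    for images, targets in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), targets)   # outputs and error for the batch
        loss.backward()                            # average gradient over the 64 examples
        optimizer.step()                           # weight update
```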
The evaluation phase was performed using the following validation metrics: accuracy, F1-score, precision, and recall. Each metric provides valuable insights into different aspects of the model's effectiveness. Accuracy refers to the proportion of true results (both true positives and true negatives) among the total number of cases examined. It is calculated as the ratio of the number of correctly classified images to the total number of images. The F1-score is the harmonic mean of precision and recall; it captures how well the model identifies relevant items (recall) while also avoiding incorrect positive classifications (precision). Precision is the proportion of positive predictions that are correct; in other words, it measures the accuracy of the model's positive identifications. Recall indicates how well the model identifies all the relevant positive cases in the data; in simple terms, it is the proportion of actual positive instances that the model correctly classified.
The evaluation metrics were calculated as follows:
$$\mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}$$ (2)

$$\mathrm{F1\text{-}score} = \frac{2 \times \mathrm{Precision} \times \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}$$ (3)

$$\mathrm{Precision} = \frac{TP}{TP + FP}$$ (4)

$$\mathrm{Recall} = \frac{TP}{TP + FN}$$ (5)
where TP, FP, TN, and FN represent true positive, false positive, true negative, and false negative, respectively [47].
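Assuming the predicted and true labels of the test images are available, the four metrics can be computed with scikit-learn as in the sketch below. The label values are placeholders, and macro averaging over the three classes is an assumption, since the averaging scheme is not stated in the text.

```python
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

# Placeholder labels: 0 = vegetation, 1 = soil, 2 = humidity.
y_true = [0, 0, 1, 2, 2, 1]   # hypothetical ground truth for the test set
y_pred = [0, 0, 1, 2, 1, 1]   # hypothetical model predictions

accuracy = accuracy_score(y_true, y_pred)
precision, recall, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="macro", zero_division=0
)
print(f"Accuracy {accuracy:.4f} | Precision {precision:.4f} | "
      f"Recall {recall:.4f} | F1-score {f1:.4f}")
```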

3. Results

This study used six pre-trained CNN models to classify patterns in an onion (Allium Cepa) crop field, namely vegetation, soil, and humidity zones. The models were configured and trained using the dataset described in Section 2.1. The performance of the CNN models was evaluated based on their ability to accurately classify the patterns present in the onion crop field. As shown in Table 2, the models achieved high values on all validation metrics, outperforming traditional pattern classification methods.
The experimental results showed performance variations ranging from 90% to nearly 100%. Among the models tested, AlexNet stood out with the highest performance across all metrics, achieving an impressive accuracy of 99.92%. On the other hand, MobileNetV2 demonstrated the lowest values, with an accuracy of 90.85% and an F1-score of 90.71%. DenseNet, VGG16, SqueezeNet, and ResNet18 also performed well, with accuracy values ranging from 92.02% (in VGG16) to 98.78% (in SqueezeNet).
Another way to evaluate the performance of the models is by examining their learning curves, which represent the model's learning progress over time. Specifically, these curves plot the model's performance on the training and validation sets as a function of the training iterations. These curves provide valuable insights into the models; for example, they can help determine whether a model is underfitting or overfitting the training data. If the model is underfitting, both the training and validation errors will be high. Conversely, if the model is overfitting, the training error will be low, but the validation error will be high. When such problems are identified, the model parameters are modified or adjusted. Figure 5 shows the learning curves obtained from each model in this study.
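For reference, learning curves of this kind can be produced by logging per-epoch accuracies during training and plotting them, as in the matplotlib sketch below; the values shown are placeholders, not the curves of Figure 5.

```python
import matplotlib.pyplot as plt

# Per-epoch accuracies logged during training (placeholder values).
train_acc = [0.72, 0.85, 0.91, 0.94, 0.96, 0.97, 0.98, 0.98, 0.99, 0.99]
val_acc   = [0.70, 0.83, 0.90, 0.93, 0.95, 0.96, 0.97, 0.97, 0.98, 0.98]

epochs = range(1, len(train_acc) + 1)
plt.plot(epochs, train_acc, label="Training accuracy")
plt.plot(epochs, val_acc, label="Validation accuracy")  # converging curves suggest no strong overfitting
plt.xlabel("Epoch")
plt.ylabel("Accuracy")
plt.legend()
plt.savefig("learning_curves.png", dpi=150)
```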
As depicted in Figure 5, the learning curves for both the validation accuracy and training accuracy converge, indicating similar behavior across the models. The only exception is observed in the VGG16 model (Figure 5c), where a noticeable variation is seen in the first 15 epochs. However, after these initial epochs, the learning curves for VGG16 also begin to converge, aligning with the trend observed in the other models.

4. Discussion

The results of this article provide compelling evidence of the effectiveness of various CNN models in classifying patterns in onion crops (Table 2). The performance variation observed, ranging from 90% to nearly 100%, underscores the importance of selecting the appropriate model for specific tasks. AlexNet, with its impressive accuracy of 99.92%, emerged as the top-performing model. This suggests that AlexNet's architecture and features may be particularly well-suited for pattern recognition tasks in agricultural contexts. However, further research is needed to understand the factors contributing to its superior performance. On the other hand, the lower performance of MobileNetV2, with an accuracy of 90.85%, indicates that this model may not be the optimal choice for this specific task. It would be interesting to explore whether modifications to the model's parameters or training process could enhance its performance. The performance of the other models, DenseNet, VGG16, SqueezeNet, and ResNet18, fell within the range of 92.02–98.78%. This highlights the potential of these models for agricultural pattern recognition tasks. However, their performance may be influenced by various factors, such as the pattern's complexity, the image quality, and the amount of training data.
Specifically, this study, which classifies vegetation, soil, and humidity patterns, contributes to the timely identification of critical areas or points for onion production in the state of Zacatecas. For example, in the case of vegetation, if there is no crop in a certain area of the furrows, the deficient area can be treated with fertilizer [48]. The soil is a fundamental part of the field, so identifying the type of soil in time, whether clay, sandy, or limestone, among others, helps to determine whether it is a good area to cultivate [49]. Finally, in the case of humidity, poor irrigation control by the system that supplies the water can be corrected to avoid irregular growth or the appearance of diseases in the crop due to an excess or deficit of water at certain points [50].
Table 3 shows similar recent studies in which transfer learning was implemented using some of the same CNN models as those used in this study. The results of these studies support those presented in this work and demonstrate that AI applications in agriculture can help create support tools that facilitate processes such as irrigation, plant disease diagnosis, pest control, and the control of irregular growth in any crop field (beans, corn, sunflower, onion, etc.) through the classification, extraction, and identification of patterns. All the studies presented take advantage of pre-trained models and apply them to different specific tasks focused on agribusiness, which helps to reduce processing times and improve decision making.
Other studies, such as that of Duarte-Correa et al. [7], reported a methodology in which DIP was performed to segment and identify growth patterns in onion crops. This work serves as an important precedent and motivated the development of our study. Another similar work was presented by Din et al. [1], in which DL techniques were used to recognize the onion crop growth stage from a captured dataset. The performance accuracy of the system was 96.10% for a batch size of 16 and 93.80% for a batch size of 32. This opens up the possibility of carrying out multiple comparative studies with the aim of providing farmers with the technological strategies that best adapt to their needs.
Furthermore, it is important to mention that the analysis was carried out only using the resources of the central processing unit (CPU) of the computer, so no acceleration unit was used to make the processes more efficient; that is, there was no installed graphics processing unit (GPU) to increase the number of images and the power in model training, so our experimentation was limited in this aspect.
In the context of the classification of patterns in onion plants or crop fields, our methodology provided strong results compared with the works mentioned above, and this has inspired us to further explore and refine our models. We aim to continue advancing in this field, developing more precise and efficient models for agricultural applications, particularly for onion crop fields.
In our future work, we have identified several areas for exploration. We aim to investigate other ML and DL algorithms to improve the accuracy and efficiency of our model. This includes expanding the number of images in our dataset and collecting data from other crop fields in Zacatecas State. Currently, there are multispectral cameras integrated into the UAVs, which opens the opportunity to create new datasets with multiple spectral bands and thus be able to identify more patterns such as water stress, altered pigmentation in the crop, and chlorophyll decrease, among other patterns present in the crop fields. Additionally, we plan to measure processing time using multi-worker GPUs and employ data parallelism to verify new results. This approach will allow us to adapt our applications to cloud infrastructures, such as Google Earth, to provide detailed information about a certain study area. Furthermore, the weights generated by the experiments carried out in this study could be used to propose the development of web-based applications and platforms for mapping patterns identified in crop fields more generally. In other words, the application would not be limited to onion crops but could be extended to encompass a wider range of agricultural crops and specific plants.

5. Conclusions

DL and transfer learning offer promising solutions to the unique challenges faced by onion growers. They have the potential to transform agricultural practices, making them more efficient, sustainable, and resilient to environmental challenges.
In the context of local onion production, this study serves as a foundational step for Zacatecas state farmers. It demonstrates the potential of implementing new technologies, specifically DL and CNNs, to enhance agricultural practices. By leveraging these technologies, farmers can develop compensatory strategies to ensure optimal crop production and resource efficiency in their fields through data-driven decision making.
This study demonstrates the power of combining ML algorithms such as KNN and DL techniques such as CNN for image classification tasks in agriculture, not only in fragile areas or points of interest in the fields but also in the classification of representative plant characteristics such as diseases, phenotypes, and the detection and diagnosis of stress, among other traits. By leveraging the strengths of both algorithms, we were able to achieve superior accuracy and efficiency in our results, achieving the main objective of this research. This underlines the potential of hybrid models to advance the field of image classification applied in agriculture, specifically in onion cultivation field-related decision-making processes.
In this article, the proposed and applied methodology provides valuable information on the application of CNN models in the field of agriculture. These results can be used to create new applications for monitoring or mapping areas of interest in crop fields, as well as adapting the models to embedded systems or microcontrollers to develop electronic devices that can be used to assist farmers in agronomic tasks and work. We believe that these findings will pave the way for more advanced and precise agricultural practices, ultimately leading to higher crop yields and sustainability.

Author Contributions

Methodology, M.d.J.L.-M.; validation, M.d.J.L.-M., G.D.-F. and C.A.O.-O.; formal analysis, M.d.J.L.-M., G.D.-F., J.I.d.l.R.-V. and C.A.O.-O.; writing—review and editing, S.V.-B., L.O.S.-S. and C.L.C.-M.; supervision, C.A.O.-O., G.D.-F., D.I.O.-E. and L.O.S.-S.; project administration, C.A.O.-O., G.D.-F., S.V.-B. and J.I.d.l.R.-V. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Naseer, U.D.; Bushra, N.; Samer, Z.; Bakhtawer; Waqar, A. Onion Crop Monitoring with Multispectral Imagery using Deep Neural Network. Int. J. Adv. Comput. Sci. Appl. 2021, 12, 0120537. [Google Scholar] [CrossRef]
  2. Food and Agriculture Organization of the United Nations. Available online: https://www.fao.org/faostat/en/#data/QCL/visualize (accessed on 14 May 2024).
  3. Valencia-Sandoval, K.; Zetina-Espinosa, A.M. La cebolla mexicana: Un análisis de competitividad en el mercado estadounidense, 2002–2013. Región Soc. 2017, 29, 70. [Google Scholar] [CrossRef]
  4. Gobierno de México. Available online: https://www.gob.mx/agricultura/prensa/aporta-mexico-una-de-cada-50-toneladas-de-cebolla-que-se-consumen-en-el-mundo?idiom=es (accessed on 10 May 2024).
  5. Malik, M.F.; Nawaz, M.; Hafeez, Z. Evaluation of Onion Crop Production, Management Techniques and Economic Status in Balochistan, Pakistan. J. Agron. 2003, 2, 70–76. [Google Scholar]
  6. Hernández, M.R.; Ríos, Á.C.; Calzada, R.T. Crecimiento, rendimiento y calidad de cebolla en dos densidades de plantación en Calera, Zacatecas, México. Dialnet 2013, 13, 85–92. [Google Scholar]
  7. Duarte-Correa, D.; Rodríguez-Reséndiz, J.; Díaz-Flórez, G.; Olvera-Olvera, C.A.; Álvarez-Alvarado, J.M. Identifying Growth Patterns in Arid-Zone Onion Crops (Allium Cepa) Using Digital Image Processing. Technologies 2023, 11, 67. [Google Scholar] [CrossRef]
  8. Vitousek, P.M.; Aber, J.D.; Howarth, R.W.; Likens, G.E.; Matson, P.A.; Schindler, D.W.; Tilman, D.G. Human alteration of the global nitrogen cycle: Sources and consequences. Ecol. Appl. 1997, 7, 737–750. [Google Scholar] [CrossRef]
  9. Salin, V. Information technology in agri-food supply chains. Int. Food Agribus. Manag. Rev. 1998, 1, 329–334. [Google Scholar] [CrossRef]
  10. Bauer, A.; Bostrom, A.G.; Ball, J.; Applegate, C.; Cheng, T.; Laycock, S.; Rojas, S.M.; Kirwan, J.; Zhou, J. Combining computer vision and deep learning to enable ultra-scale aerial phenotyping and precision agriculture: A case study of lettuce production. Hortic. Res. 2019, 6, 70. [Google Scholar] [CrossRef] [PubMed]
  11. Baggio, A. Wireless sensor networks in precision agriculture. In ACM Workshop on Real-World Wireless Sensor Networks REALWSN; ACM: Stockholm, Sweden, 2005; pp. 1567–1576. [Google Scholar]
  12. García-Mateos, G.; Hernández-Hernández, J.L.; Escarabajal-Henarejos, D.; Jaén-Terrones, S.; Molina-Martínez, J.M. Study and comparison of color models for automatic image analysis in irrigation management applications. Agric. Water Manag. 2015, 151, 158–166. [Google Scholar] [CrossRef]
  13. Ullah, S.; Awan, M.D.; Khiyal, M.S. Big Data in Cloud Computing: A Resource Management Perspective. Sci. Program. 2018, 2018, 5418679. [Google Scholar] [CrossRef]
  14. Jha, K.; Doshi, A.; Patel, P.; Shah, M. A comprehensive review on automation in agriculture using artificial intelligence. Artif. Intell. Agric. 2019, 2, 1–12. [Google Scholar] [CrossRef]
  15. Krizhevsky, A.; Sutskever, I.; Hinton, G. ImageNet Classification with Deep Convolutional Neural Networks. Commun. ACM 2017, 60, 84–90. [Google Scholar] [CrossRef]
  16. Akula, C.S.; Sunkari, V.; Prathima, C. High-Performance Computing Center Framework for Smart Farming. In Proceedings of the International Conference on Computer Vision, High Performance Computing, Smart Devices and Networks, Kakinada, India, 28–29 December 2022; Advanced Technologies and Societal Change. Springer: Singapore, 2022. [Google Scholar] [CrossRef]
  17. Gniady, T.; Ruan, G.; Sherman, W.; Tuna, E.; Wernert, E. Scalable Photogrammetry with High Performance Computing. In Proceedings of the Practice and Experience in Advanced Research Computing 2017 on Sustainability, Success and Impact—PEARC17, New Orleans, LA, USA, 9–13 July 2017. [Google Scholar] [CrossRef]
  18. Boon, M.A.; Greenfield, R.; Tesfamichael, S. Unmanned Aerial Vehicle (UAV) photogrammetry produces accurate high-resolution orthophotos, point clouds and surface models for mapping wetlands. S. Afr. J. Geomat. 2016, 5, 186. [Google Scholar] [CrossRef]
  19. Pan, S.J.; Qiang, Y. A Survey on Transfer Learning. IEEE Trans. Knowl. Data Eng. 2010, 22, 1345–1359. [Google Scholar] [CrossRef]
  20. Pandey, A.; Jain, K. An intelligent system for crop identification and classification from UAV images using conjugated dense convolutional neural network. Comput. Electron. Agric. 2022, 192, 106543. [Google Scholar] [CrossRef]
  21. Rachmad, A.; Fuad, M.; Rochman, E.M.S. Convolutional neural network-based classification model of corn leaf disease. Math. Model. Eng. Probl. 2023, 10, 530–536. [Google Scholar] [CrossRef]
  22. Haq, M.A. CNN based automated weed detection system using UAV imagery. Comput. Syst. Sci. Eng. 2022, 42, 837–849. [Google Scholar]
  23. Tetila, E.C.; Machado, B.B.; Menezes, G.K.; Oliveira, A.D.S.; Alvarez, M.; Amorim, W.P.; Belete, N.A.; Da Silva, G.G.; Pistori, H. Automatic Recognition of Soybean Leaf Diseases Using UAV Images and Deep Convolutional Neural Networks. IEEE Geosci. Remote Sens. Lett. 2020, 17, 903–907. [Google Scholar] [CrossRef]
  24. Zheng, J.; Fu, H.; Li, W.; Wu, W.; Yu, L.; Yuan, S.; Kanniah, K.D. Growing Status Observation for Oil Palm Trees Using Unmanned Aerial Vehicle (UAV) Images. ISPRS J. Photogramm. Remote Sens. 2021, 173, 95–121. [Google Scholar] [CrossRef]
  25. Gulzar, Y.; Ünal, Z.; Aktaş, H.; Mir, M.S. Harnessing the power of transfer learning in sunflower disease detection: A comparative study. Agriculture 2023, 13, 1479. [Google Scholar] [CrossRef]
  26. Alkanan, M.; Gulzar, Y. Enhanced corn seed disease classification: Leveraging MobileNetV2 with feature augmentation and transfer learning. Front. Appl. Math. Stat. 2023, 9, 1320177. [Google Scholar] [CrossRef]
  27. Gulzar, Y. Enhancing soybean classification with modified inception model: A transfer learning approach. Emir. J. Food Agric. 2024, 36, 1–9. [Google Scholar] [CrossRef]
  28. Ibarra-Pérez, T.; Jaramillo-Martínez, R.; Correa-Aguado, H.C.; Ndjatchi, C.; Martínez-Blanco, M.d.R.; Guerrero-Osuna, H.A.; Mirelez-Delgado, F.D.; Casas-Flores, J.I.; Reveles-Martínez, R.; Hernández-González, U.A. A Performance Comparison of CNN Models for Bean Phenology Classification Using Transfer Learning Techniques. AgriEngineering 2024, 6, 841–857. [Google Scholar] [CrossRef]
  29. Tsai, F.-T.; Nguyen, V.-T.; Duong, T.-P.; Phan, Q.-H.; Lien, C.-H. Tomato Fruit Detection Using Modified Yolov5m Model with Convolutional Neural Networks. Plants 2023, 12, 3067. [Google Scholar] [CrossRef] [PubMed]
  30. Zaki, M.A.; Narejo, S.; Ahsan, M.; Zai, S.; Anjum, M.R.; Din, N. Image-based Onion Disease (Purple Blotch) Detection using Deep Convolutional Neural Network. Int. J. Adv. Comput. Sci. Appl. 2021, 12, 5. [Google Scholar] [CrossRef]
  31. Kim, W.S.; Lee, D.-H.; Kim, Y.-J. Machine vision-based automatic disease symptom detection of onion downy mildew. Comput. Electron. Agric. 2020, 168, 105099. [Google Scholar] [CrossRef]
  32. Paymode, A.S.; Mohite, J.N.; Shinde, U.B.; Malode, V.B. Artificial intelligence for agriculture: A technique of vegetables crop onion sorting and grading using deep learning. Int. J. Adv. Sci. Res. Eng. Trends 2021, 6, 4. [Google Scholar]
  33. Chiu, M.T.; Xu, X.; Wei, Y.; Huang, Z.; Schwing, A.G.; Brunner, R.; Khachatrian, H.; Karapetyan, H.; Dozier, I.; Rose, G.; et al. Agriculture-vision: A large aerial image database for agricultural pattern analysis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; arXiv:2001.01306. [Google Scholar]
  34. Yigit, E.; Sabanci, K.; Toktas, A.; Kayabasi, A. A study on visual features of leaves in plant identification using artificial intelligence techniques. Comput. Electron. Agric. 2019, 156, 369–377. [Google Scholar] [CrossRef]
  35. MathWorks: Classification Using Nearest Neighbors. Available online: https://la.mathworks.com/help/stats/classification-using-nearest-neighbors.html (accessed on 11 April 2024).
  36. Hruška, J.; Adão, T.; Pádua, L.; Marques, P.; Cunha, A.; Peres, E.; Sousa, J.J. Machine learning classification methods in hyperspectral data processing for agricultural applications. In Proceedings of the International Conference on Geoinformatics and Data Analysis—ICGDA, Prague, Czech Republic, 20–22 April 2018. [Google Scholar] [CrossRef]
  37. Lecun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef]
  38. Raj, P.; Raman, A.; Nagaraj, D.; Duggirala, S. High-Performance Big-Data Analytics: Computing Systems and Approaches; Springer International Publishing: Germany, 2015. [Google Scholar]
  39. TensorFlow: Transferencia de Aprendizaje y Ajuste. Available online: https://www.tensorflow.org/tutorials/images/transfer_learning?hl=es-419#freeze_the_convolutional_base (accessed on 30 March 2024).
  40. Abbas, A.; Jain, S.; Gour, M.; Vankudothu, S. Tomato plant disease detection using transfer learning with C-GAN synthetic images. Comput. Electron. Agric. 2021, 187, 106279. [Google Scholar] [CrossRef]
  41. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2016, Las Vegas, NV, USA, 26 June–1 July 2016; pp. 770–778. [Google Scholar] [CrossRef]
  42. Simonyan, K.; Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. arXiv 2014, arXiv:1409.1556. [Google Scholar]
  43. Krizhevsky, A. One weird trick for parallelizing convolutional neural networks. arXiv 2014, arXiv:1404.5997. [Google Scholar]
  44. Iandola, F.N.; Moskewicz, M.W.; Ashraf, K.; Han, S.; Dally, W.J.; Keutzer, K. SqueezeNet: AlexNet-level accuracy with 50× fewer parameters and <1 MB model size. arXiv, 2016; arXiv:1602.07360. [Google Scholar]
  45. Sandler, M.; Andrew, G.H.; Menglong, Z.; Andrey, Z.; Liang-Chieh, C. MobileNetV2: Inverted Residuals and Linear Bottlenecks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18 June–23 June 2018; pp. 4510–4520. [Google Scholar]
  46. Huang, G.; Liu, Z.; Van Der Maaten, L.; Weinberger, K.Q. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 4700–4708. [Google Scholar]
  47. Yao, M.; Li, W.; Chen, L.; Zou, H.; Zhang, R.; Qiu, Z.; Yang, S.; Shen, Y. Rice Counting and Localization in Unmanned Aerial Vehicle Imagery Using Enhanced Feature Fusion. Agronomy 2024, 14, 868. [Google Scholar] [CrossRef]
  48. Divya, S.; Rusyn, I.; Solorza-Feria, O.; Sathish-Kumar, K. Sustainable SMART fertilizers in agriculture systems: A review on fundamentals to in-field applications. Sci. Total Environ. 2023, 904, 166729. [Google Scholar]
  49. Elbeltagi, A.; Kushwaha, N.L.; Srivastava, A.; Zoof, A.T. Artificial intelligent-based water and soil management. In Deep Learning for Sustainable Agriculture; Springer: Berlin/Heidelberg, Germany, 2022; pp. 129–142. [Google Scholar] [CrossRef]
  50. Manik, S.M.N.; Pengilley, G.; Dean, G.; Field, B.; Shabala, S.; Zhou, M. Soil and Crop Management Practices to Minimize the Impact of Waterlogging on Crop Productivity. Front. Plant Sci. 2019, 10, 140. [Google Scholar] [CrossRef]
Figure 1. Methodology proposed in this research; (a) step-by-step procedure of the proposed methodology; (b) graphic representation of the proposed methodology.
Figure 2. Images obtained from the onion crop field: (a) orthophoto from the onion crop field; (b) aerial image sequence taken by UAV during the crop monitoring; (c) sliced images to extend the batch of data.
Figure 3. Image pre-processing method performed with KNN algorithm to segment our images into three patterns.
Figure 4. Representation of transfer learning in this approach.
Figure 5. Learning curves obtained during training in each model: (a) AlexNet; (b) DenseNet; (c) VGG16; (d) SqueezeNet; (e) MobileNetV2; (f) ResNet18.
Table 1. Computer characteristics used to train CNN models.

| Characteristic | Description |
|---|---|
| Operating System | Windows 11 |
| Processor | AMD Ryzen 9 5900X, 12-Core Processor, 3.7 GHz |
| RAM memory | 96 GB |
| Applications | Python 3.10, PyTorch 2.0.1 |
Table 2. Classification results of different pre-trained CNN models, expressed as percentages (%).

| Validation Metric | AlexNet | DenseNet | VGG16 | SqueezeNet | MobileNetV2 | ResNet18 |
|---|---|---|---|---|---|---|
| Accuracy | 99.92 | 97.84 | 92.02 | 98.77 | 90.85 | 98.12 |
| F1-score | 99.91 | 97.83 | 92.10 | 98.76 | 90.71 | 98.12 |
| Precision | 99.91 | 97.88 | 92.46 | 98.78 | 91.55 | 98.15 |
| Recall | 99.91 | 97.83 | 92.02 | 98.76 | 90.84 | 98.12 |
Table 3. Summary comparison with recent and similar studies. The bold format highlights the higher values.

| Author | Data Source | Architectures | Classes | Metrics | Best Accuracy |
|---|---|---|---|---|---|
| Rachmad et al. [21] | Corn Leaf | AlexNet, ResNet101, ResNet18, SqueezeNet, ResNet50 | Healthy; Gray leaf spot; Blight; Common rust | Accuracy | 95.59% |
| Gulzar et al. [25] | Sunflower | AlexNet, VGG16, InceptionV3, MobileNetV3, EfficientNetB3 | Downy Mildew; Fresh leaf; Gray mold; Leaf scars | Precision, Recall, F1-score, Accuracy | 100% |
| Ibarra-Pérez et al. [28] | Bean | AlexNet, VGG19, SqueezeNet, GoogleNet | Vegetative phase; Reproductive phase in prefloration and floration; Reproductive stage in the formation and filling of pods; Reproductive phase in maturation | Accuracy, Precision, Sensitivity, Specificity, F1-score | 96.71% |
| This study | Onion | AlexNet, DenseNet, VGG16, SqueezeNet, MobileNetV2, ResNet18 | Vegetation; Soil; Humidity | Accuracy, F1-score, Precision, Recall | **99.92%** |