Article

Harnessing the Power of Transfer Learning in Sunflower Disease Detection: A Comparative Study

1 Department of Management Information Systems, College of Business Administration, King Faisal University, Al-Ahsa 31982, Saudi Arabia
2 Department of Biosystem Engineering, Niğde Ömer Halisdemir University, Central Campus, Niğde 51240, Türkiye
3 Department of Computer Engineering, Niğde Ömer Halisdemir University, Central Campus, Niğde 51240, Türkiye
* Authors to whom correspondence should be addressed.
Agriculture 2023, 13(8), 1479; https://doi.org/10.3390/agriculture13081479
Submission received: 19 June 2023 / Revised: 20 July 2023 / Accepted: 25 July 2023 / Published: 26 July 2023
(This article belongs to the Special Issue Applications of Data Analysis in Agriculture)

Abstract

Sunflower is an important crop that is susceptible to various diseases, which can significantly impact crop yield and quality. Early and accurate detection of these diseases is crucial for implementing appropriate management strategies. In recent years, deep learning techniques have shown promising results in the field of disease classification using image data. This study presents a comparative analysis of deep-learning models for the classification of sunflower diseases. Five widely used deep learning models, namely AlexNet, VGG16, InceptionV3, MobileNetV3, and EfficientNet, were trained and evaluated using a dataset of sunflower disease images. The performance of each model was measured in terms of precision, recall, F1-score, and accuracy. The experimental results demonstrated that all the deep learning models achieved high precision, recall, F1-score, and accuracy values for sunflower disease classification. Among the models, EfficientNetB3 exhibited the highest precision, recall, F1-score, and accuracy of 0.979, whereas the other models, AlexNet, VGG16, InceptionV3, and MobileNetV3, achieved accuracies of 0.865, 0.965, 0.954, and 0.969, respectively. Based on this comparative analysis, it can be concluded that deep learning models are effective for the classification of sunflower diseases. The results highlight the potential of deep learning in early disease detection and classification, which can assist farmers and agronomists in implementing timely disease management strategies. Furthermore, the findings suggest that models such as MobileNetV3 and EfficientNetB3 could be preferred choices due to their high performance and relatively fewer training epochs.

1. Introduction

Sunflower (Helianthus annuus) is a widely cultivated crop across various regions of the world due to its economic significance and versatility. It is grown in diverse climates ranging from temperate to subtropical regions, making it adaptable to a wide range of environmental conditions. Sunflower cultivation spans continents, including North America, Europe, Asia, Africa, and South America, with major producing countries including Russia, Ukraine, Argentina, the United States, and China. The global production of sunflower seeds in 2022–2023 was around 51.17 million metric tons [1]. The biggest producer is Russia with 16.5 million metric tons, followed by Ukraine with 10 million metric tons, the European Union with 9.48 million metric tons, Argentina with 4.6 million metric tons, Türkiye with 1.9 million metric tons, and the rest of the world with 8.6 million metric tons [1]. Sunflowers are cultivated for numerous purposes, such as animal feed production, oil extraction for culinary and industrial applications, and as a source of raw materials for biofuel production [2]. The widespread cultivation of sunflower reflects its agricultural significance and the recognition of its economic value in different regions of the world.
Sunflower seed cultivation is important in many ways, including economic, agricultural, nutritional, and environmental considerations. Sunflower seeds are high in oil and have numerous applications, making them a valuable crop around the world [3]. Sunflower seeds are an excellent source of edible oil, commonly referred to as sunflower oil. Given its pleasant flavor, high smoke point, and nutritional benefits, sunflower oil is frequently used in cooking, frying, and salad dressings. Sunflower cultivation offers various agricultural advantages as well. Sunflowers are well known for their ability to draw nutrients from the soil, making them an excellent rotation crop. Farmers can improve soil fertility and prevent pest and disease build-up by planting sunflowers in rotation with other crops [4]. Sunflower is an adaptive crop that can tolerate a wide range of climates and growing conditions, helping to increase agricultural biodiversity. Sunflower farming also enhances biodiversity by providing a habitat for various beneficial insects, birds, and pollinators [4].
Sunflowers are prone to various diseases that can significantly affect the productivity, growth, and quality of the crop. To avoid the adverse effects of diseases on productivity, it is important to identify and manage them at the right time. Diseases commonly found in sunflowers include Downy Mildew (Plasmopara halstedii), Sclerotinia Head Rot (Sclerotinia sclerotiorum), Rust (Puccinia helianthi), Phoma Black Stem (Phoma macdonaldii), and Powdery Mildew (Erysiphe cichoracearum). Timely identification of such diseases is essential for protecting yield, limiting environmental impact, and informing crop rotation and future planning. Identifying these diseases requires a trained expert or a person with substantial experience; it cannot reliably be done by a layperson. In many developing countries, however, farmers often lack such training and have limited access to technology, which further reduces yield. In these settings, identification and detection of sunflower diseases are performed manually by farmers through visual observation. Such observations are highly error-prone. Moreover, it is not practical to inspect crops visually on a regular basis, which results in significant financial losses [5]. Different methods exist to detect diseases in plants, such as the use of spectrometers, which are more accurate than traditional visual methods for distinguishing diseased leaves from healthy ones [6]. Furthermore, molecular techniques such as the polymerase chain reaction and the real-time polymerase chain reaction can be employed to detect plant diseases [7,8,9]. However, such methods require qualified personnel and are time-consuming, expensive, and difficult to deploy. Therefore, a system that is easy to use and works directly on images would be more practical than these traditional approaches.
Automated disease recognition systems have gained popularity recently thanks to developments in computer vision and machine learning. These systems use deep learning algorithms to detect diseases quickly and accurately by analyzing large amounts of visual data [10,11]. Deep learning has attracted considerable attention lately due to its promising results in terms of improving accuracy, and it has been incorporated into many domains because of its remarkable ability to solve complex problems, such as computer vision [12], object detection and recognition [13], image segmentation [14], image generation [10], healthcare [15,16], finance [17], autonomous systems [18], education [19,20], and natural language processing (NLP) [21]. Bantan et al. [22] proposed a CNN-based sunflower seed classification model using a dataset containing seven different varieties of sunflower seeds; their model achieved 98% accuracy in identifying the varieties. Kurtulmuş [23] evaluated three deep learning models, AlexNet, GoogleNet, and ResNet, for identifying sunflower seeds and reported that GoogleNet achieved 95% accuracy. In [24], the authors proposed a deep learning model based on VGG16 to classify fourteen different types of seeds. The VGG16 model was modified by adding five more layers before the classification layer to improve performance, and the proposed model achieved 99% accuracy. Sirohi et al. [25] proposed a hybrid deep learning model, based on VGG16 and MobileNet, for classifying sunflower diseases. They created their own dataset containing four classes: Alternaria leaf blight, Downy mildew, Phoma blight, and Verticillium wilt. Their hybrid model achieved 89.2% accuracy in identifying different types of sunflower diseases, and they claimed that it outperformed other models. Albarrak et al. [26] proposed a model based on MobileNetV2 to classify different types of date fruits. They trained the model on a dataset containing eight types of date fruits found in Saudi Arabia; the model achieved 99% accuracy and outperformed state-of-the-art models.
Chen et al. [27] proposed a model based on YOLOv4 to detect sunflower leaf diseases. The modified YOLOv4 was obtained by taking three effective layers from three versions of the MobileNet model and using them to replace the feature layers of the original YOLOv4, which improved accuracy and outperformed the original YOLOv4. Malik et al. [5] proposed a deep learning model for sunflower disease identification using a dataset collected from the Internet. Their approach was based on MobileNet and VGG16; they reported that the VGG16-based model achieved 81% accuracy, whereas the MobileNet-based model achieved around 86%. Carbone et al. [28] proposed a model for sunflower-weed segmentation classification, creating a dataset containing both RGB and NIR data; their model achieved an IoU of 76.4%. Dawod et al. [29] proposed an R-CNN-based model to classify foliar diseases in sunflowers, performing segmentation before classifying the different classes of foliar lesions. Gulzar [30] proposed a modified MobileNetV2-based model to classify forty different types of fruits; the modified model achieved 99% accuracy and outperformed state-of-the-art models such as VGG16 and AlexNet. In other studies, Aktaş et al. [31,32] trained AlexNet and InceptionV2 models to classify open and closed pistachios, achieving 96.13% and 96.54% accuracy, respectively. Dawod et al. [33] proposed a model to classify foliar diseases in sunflowers based on the ResNet architecture and claimed that performing segmentation substantially increased accuracy and avoided many wrong predictions that typically occur without segmentation. Song et al. [34] trained a model on remote sensing images to recognize sunflower growth periods; their PSPNet model achieved 89.01% accuracy. Furthermore, Barrio-Conde et al. [35] developed a deep learning-based model to classify high oleic sunflower seed varieties, using around 6000 images of six different types of sunflower seeds. The model achieved 100% accuracy for two of the classes and 89.5% accuracy for the remaining classes. Sathi et al. [36] conducted a study to classify sunflower diseases using a dataset containing 1428 images of three classes; they trained and tested four CNN models and concluded that ResNet50 was the optimal model, with an accuracy of 97.88%. Ghosh et al. [37] proposed a hybrid model, combining VGG19 with a small CNN, for the recognition of sunflower diseases; they trained around eight CNN models, and the proposed hybrid model outperformed all the others in detecting sunflower diseases.
This study investigated the classification of sunflower diseases using deep learning techniques, specifically employing five popular convolutional neural network (CNN) models: AlexNet, VGG16, InceptionV3, MobileNetV3, and EfficientNetB3. The contributions of this study are as follows:
  • Improved Disease Detection: The study highlights the use of deep learning models, such as AlexNet, VGG16, InceptionV3, MobileNetV3, and EfficientNet, for the classification of sunflower diseases. These models have demonstrated high precision, recall, F1-score, and accuracy in detecting and classifying various sunflower diseases. This contribution improves the early detection of diseases, allowing farmers to implement timely management strategies and minimize crop yield and quality losses.
  • Comparative Analysis of Deep Learning Models: The study provides a comparative analysis of different deep learning models, allowing researchers and practitioners to assess their performance in sunflower disease classification. By evaluating the precision, recall, F1-score, and accuracy of each model, the study offers valuable insights into their effectiveness and helps in selecting the most suitable model for sunflower disease detection tasks.
  • Potential Benefits for Farmers and Agronomists: The study’s results emphasize the potential of deep learning models in early disease detection and classification, offering significant benefits to farmers and agronomists. By utilizing these models, farmers can quickly identify and categorize sunflower diseases, enabling them to implement timely and appropriate disease management strategies. This contribution has practical implications in enhancing crop productivity and minimizing economic losses in sunflower farming.
  • General Applicability to Other Crops: While this study specifically focuses on sunflower disease detection, the findings have broader implications for other crops as well. Deep learning models trained on image data can be adapted and applied to different plant species, aiding in disease identification and classification across various agricultural contexts. The study’s comparative analysis provides valuable insights that can be utilized in similar studies on different crops, benefiting farmers and researchers in multiple agricultural domains.
  • Reduced Training Epochs: The study suggests that models like MobileNetV3 and EfficientNetB3 offer high performance while requiring relatively fewer training epochs. This finding is beneficial in terms of computational efficiency and time-saving during the training process. By reducing the required training time, farmers and researchers can expedite the development and deployment of disease detection models, facilitating faster decision-making and implementation of appropriate management strategies.

2. Material and Methods

Figure 1 provides a summary of the overall methodology used in this study. Images of sunflower leaves and blooms that are available online were used for four-class classification into healthy and infected groups. Pre-trained networks, namely AlexNet [38], VGG16 [39], InceptionV3 [40], MobileNetV3 [41], and EfficientNet [42], were used for the classification. Prior to splitting the dataset into training, validation, and testing sets, images were resized according to the input requirements of each model, as given in Figure 1.

2.1. Dataset

In this study, a public dataset [43] containing a total of 1892 images of healthy and infected sunflower leaves and blooms was used. The National Institute of Textile Engineering and Research in Bangladesh is the host institution for the dataset, which is publicly accessible online. The original dataset of 467 images was created by researchers using sunflowers from the Bangladesh Agricultural Research Institute (BARI) demonstration farm at Gazipur. All the images were reported to have been captured manually using a digital camera at a resolution of 512 × 512 pixels. The researchers later used the original images to create 1892 augmented images. The dataset of augmented images was split into training, validation, and testing subsets using a ratio of 70:15:15. The number of images per class and their distribution across subsets are shown in Table 1, and Figure 2 shows a sample image of each disease class found in sunflowers.
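For illustration, a stratified 70:15:15 split can be produced as in the following Python sketch; the directory layout and class-folder names are assumptions made for the example and are not taken from the dataset description.

```python
import os
from sklearn.model_selection import train_test_split

# Hypothetical layout: one folder per class under "sunflower_dataset/".
data_dir = "sunflower_dataset"
classes = ["Downy_Mildew", "Fresh_Leaf", "Gray_Mold", "Leaf_Scars"]  # assumed folder names

paths, labels = [], []
for label, cls in enumerate(classes):
    for fname in os.listdir(os.path.join(data_dir, cls)):
        paths.append(os.path.join(data_dir, cls, fname))
        labels.append(label)

# First hold out 70% for training, then split the remaining 30% in half
# (15% validation, 15% testing), stratified so class ratios are preserved.
train_p, rest_p, train_y, rest_y = train_test_split(
    paths, labels, test_size=0.30, stratify=labels, random_state=42)
val_p, test_p, val_y, test_y = train_test_split(
    rest_p, rest_y, test_size=0.50, stratify=rest_y, random_state=42)

print(len(train_p), len(val_p), len(test_p))  # approximately 70% / 15% / 15% of the images
```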

2.2. State-of-Art Models

In this study, we selected state-of-the-art models that are well known for image classification. Due to their high popularity, we selected AlexNet [38], VGG16 [39], InceptionV3 [40], MobileNetV3 [41], and EfficientNet [42] for this comparative study.

2.2.1. AlexNet

AlexNet [38] is a groundbreaking convolutional neural network (CNN) architecture that won the ImageNet Large Scale Visual Recognition Challenge in 2012. Figure 3 shows the architecture of AlexNet. It introduced several key innovations that revolutionized deep learning and image recognition. AlexNet consisted of eight layers, including five convolutional and three fully connected layers. It employed the rectified linear unit (ReLU) activation function, local response normalization, and dropout regularization to improve performance and prevent overfitting. Notably, AlexNet utilized multiple GPUs for parallel processing, significantly reducing training time. With a top-5 error rate of 15.3% on the ILSVRC 2012 dataset, AlexNet outperformed previous approaches by a large margin. Its success propelled the development of more complex CNN architectures and accelerated the advancement of deep learning in computer vision tasks. The influential principles and design choices of AlexNet continue to shape the field of deep learning, making CNNs the standard model for image classification.
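For reference, a minimal Keras sketch of an AlexNet-style network (five convolutional and three fully connected layers with ReLU and dropout) is shown below. It omits local response normalization and the original two-GPU split, and the 227 × 227 input size and layer widths follow the common single-GPU reproduction rather than the exact configuration used in this study.

```python
from tensorflow.keras import layers, models

def alexnet(num_classes=4, input_shape=(227, 227, 3)):
    """Minimal AlexNet-style network: 5 convolutional + 3 fully connected layers."""
    return models.Sequential([
        layers.Conv2D(96, 11, strides=4, activation="relu", input_shape=input_shape),
        layers.MaxPooling2D(3, strides=2),
        layers.Conv2D(256, 5, padding="same", activation="relu"),
        layers.MaxPooling2D(3, strides=2),
        layers.Conv2D(384, 3, padding="same", activation="relu"),
        layers.Conv2D(384, 3, padding="same", activation="relu"),
        layers.Conv2D(256, 3, padding="same", activation="relu"),
        layers.MaxPooling2D(3, strides=2),
        layers.Flatten(),
        layers.Dense(4096, activation="relu"),
        layers.Dropout(0.5),                      # dropout regularization, as in the original design
        layers.Dense(4096, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(num_classes, activation="softmax"),
    ])

model = alexnet()
model.summary()
```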

2.2.2. VGG16

VGG16 [39] is a popular convolutional neural network (CNN) model known for its simplicity and effectiveness in image classification. Figure 4 shows the architecture of VGG16. Developed by the Visual Geometry Group (VGG) at the University of Oxford, VGG16 consists of 16 layers, including 13 convolutional layers and three fully connected layers. It follows a uniform architecture with small 3 × 3 filters and max pooling layers. VGG16’s depth and small filter size contribute to its impressive performance. Despite its simplicity, VGG16 achieved outstanding results on the ImageNet dataset, surpassing previous models in accuracy. However, its large number of parameters makes it computationally expensive and memory-intensive, limiting its usage in resource-constrained environments. Nevertheless, VGG16’s impact has been significant, serving as a benchmark for subsequent CNN architectures and influencing the development of deeper networks. Its simplicity and effectiveness have made it a widely studied and referenced model in the field of deep learning.

2.2.3. InceptionV3

The InceptionV3 [40] model is a deep convolutional neural network (CNN) architecture developed by Google researchers for image recognition tasks. Figure 5 shows the architecture of InceptionV3. It builds upon the original Inception architecture and is known for its high accuracy and computational efficiency. InceptionV3 utilizes inception modules, which incorporate different filter sizes to capture both local and global information while reducing the computational cost. Additional techniques like batch normalization, factorized convolution, and regularization contribute to improved performance and prevent overfitting. InceptionV3 has achieved impressive results in various image recognition challenges, including top-ranking results in the ImageNet Large Scale Visual Recognition Challenge in 2015. Its balance between accuracy and efficiency has made it a popular choice in computer vision applications. Furthermore, InceptionV3 has influenced subsequent iterations of the Inception architecture, such as InceptionV4 and Inception-ResNet, which have introduced advanced techniques to further enhance performance. The model’s impact on deep learning is substantial, inspiring researchers to explore more sophisticated and efficient CNN architectures for image recognition tasks.

2.2.4. MobileNetV3

MobileNetV3 [41] is a highly efficient convolutional neural network (CNN) model designed for mobile and resource-constrained devices. Figure 6 shows the architecture of MobileNetV3. It builds upon the success of the original MobileNet architecture by introducing several improvements. Developed by Google, MobileNetV3 utilizes depth-wise separable convolutions, which split the standard convolutional operation into separate depth-wise and pointwise convolutions. This reduces computational complexity while maintaining high accuracy. It also incorporates inverted residuals and linear bottleneck layers to further enhance efficiency. MobileNetV3 achieves a good balance between accuracy and efficiency, making it suitable for real-time applications on devices with limited computational resources. It has been widely adopted in mobile vision applications, enabling tasks such as image classification, object detection, and semantic segmentation. MobileNetV3’s contributions to efficient CNN architectures have played a crucial role in enabling deep learning on mobile devices and expanding the reach of AI-powered applications.
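The parameter savings of depth-wise separable convolution can be illustrated with the following Keras sketch, which compares a standard 3 × 3 convolution with its separable counterpart on an arbitrary feature map; the shapes are illustrative and unrelated to the sunflower dataset.

```python
from tensorflow.keras import layers, models

inp_shape = (224, 224, 32)  # an arbitrary intermediate feature map

standard = models.Sequential([
    layers.Conv2D(64, 3, padding="same", input_shape=inp_shape),           # full 3x3 convolution
])
separable = models.Sequential([
    layers.SeparableConv2D(64, 3, padding="same", input_shape=inp_shape),  # depth-wise 3x3 + point-wise 1x1
])

print("standard conv parameters :", standard.count_params())   # 32*3*3*64 + 64  = 18,496
print("separable conv parameters:", separable.count_params())  # 32*3*3 + 32*64 + 64 = 2,400
```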

2.2.5. EfficientNet

EfficientNet [42] is a state-of-the-art convolutional neural network (CNN) model renowned for its exceptional efficiency and accuracy trade-off. Figure 7 shows the architecture of EfficientNet. Developed by researchers at Google, EfficientNet introduces a compound scaling method that optimizes the model’s depth, width, and resolution simultaneously. By systematically scaling up these dimensions, EfficientNet achieves superior performance while keeping computational requirements in check. This approach ensures efficient resource utilization and enables the model to perform well across a wide range of resource constraints. EfficientNet has consistently achieved top-tier performance on various benchmark datasets, surpassing previous models in both accuracy and efficiency. Its versatility and scalability have made it popular in computer vision tasks such as image classification and object detection. The EfficientNet architecture serves as a guiding principle for developing efficient and effective CNN models, empowering the deployment of deep learning models on devices with limited computational capabilities. EfficientNet-B0 is the baseline model and consists of 237 layers; moving from EfficientNet-B0 to EfficientNet-B7, the number of layers increases, reaching 813.
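The compound scaling rule can be sketched as follows, using the base coefficients reported in the EfficientNet paper (α = 1.2, β = 1.1, γ = 1.15); note that the mapping from the scaling coefficient φ to the named B0–B7 variants is only approximate in released implementations.

```python
# Compound scaling as described in the EfficientNet paper (Tan & Le, 2019):
# depth, width, and input resolution are scaled together by one coefficient phi.
ALPHA, BETA, GAMMA = 1.2, 1.1, 1.15   # base depth, width, resolution factors
# ALPHA * BETA**2 * GAMMA**2 is approximately 2, so each unit increase in phi
# roughly doubles the computational cost.

def compound_scale(phi):
    return ALPHA ** phi, BETA ** phi, GAMMA ** phi

for phi in range(4):                  # phi = 0 corresponds to the B0 baseline
    d, w, r = compound_scale(phi)
    print(f"phi={phi}: depth x{d:.2f}, width x{w:.2f}, resolution x{r:.2f}")
```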

2.3. Model Tuning

Transfer Learning: Transfer learning is a powerful technique in deep learning in which pre-trained models are used as a starting point for solving new tasks. In this study, we incorporated transfer learning for all of the state-of-the-art models (AlexNet, VGG16, InceptionV3, MobileNetV3, and EfficientNet). By leveraging transfer learning, we initialized all the networks with weights pretrained on a large-scale image classification dataset (ImageNet). Fine-tuning the pretrained models accelerates training and improves performance.
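A minimal Keras sketch of this transfer-learning setup is given below for EfficientNetB3; the 300 × 300 input size, the global-average-pooling head, and the dropout rate are illustrative assumptions rather than details reported in the study.

```python
from tensorflow.keras import layers, models
from tensorflow.keras.applications import EfficientNetB3

# Load the EfficientNetB3 convolutional base with ImageNet weights and
# without its original 1000-class ImageNet classifier.
base = EfficientNetB3(weights="imagenet", include_top=False,
                      input_shape=(300, 300, 3))
base.trainable = True   # fine-tune the pretrained layers on the sunflower data

# Attach a new classification head for the four sunflower classes.
model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.3),
    layers.Dense(4, activation="softmax"),
])
model.summary()
```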

2.4. K-Fold Validation

Cross-validation is a popular technique for estimating the skill of machine learning models. Because it has a lower bias, it is widely used to compare and select models for a given predictive modelling problem [44]. The K-fold cross-validation procedure used in this study consists of the following steps (a minimal code sketch is given after the list):
1. Shuffle the entire dataset and split it into training and test sets with a ratio of 80:20.
2. Split the training set into 4 subsets and create a loop to train the model 4 times.
3. In the first loop, the first subset is used for validation and the remaining 3 subsets are used for training. Train and test the model.
4. In the second loop, the second subset is used for validation and the remaining 3 subsets are used for training. Train and test the model.
5. In the third loop, the third subset is used for validation and the remaining 3 subsets are used for training. Train and test the model.
6. In the last loop, the last subset is used for validation and the first 3 subsets are used for training. Train and test the model.
7. Summarize the overall performance of all models across all trained folds.
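The following Python sketch illustrates this procedure with scikit-learn's KFold on dummy data; the placeholder CNN, image size, and number of epochs are stand-ins for the actual models and settings used in the study.

```python
import numpy as np
from sklearn.model_selection import train_test_split, KFold
from tensorflow.keras import layers, models

def build_model(num_classes=4):
    # Placeholder CNN; in the study this would be one of the five architectures.
    return models.Sequential([
        layers.Conv2D(16, 3, activation="relu", input_shape=(64, 64, 3)),
        layers.GlobalAveragePooling2D(),
        layers.Dense(num_classes, activation="softmax"),
    ])

# Dummy data standing in for the augmented sunflower images and one-hot labels.
X = np.random.rand(100, 64, 64, 3).astype("float32")
y = np.eye(4)[np.random.randint(0, 4, 100)]

# Step 1: shuffle and hold out 20% as the fixed test set.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.20, shuffle=True, random_state=42)

# Steps 2-6: 4-fold loop over the training portion, rotating the validation subset.
scores = []
for fold, (tr, val) in enumerate(KFold(n_splits=4, shuffle=True, random_state=42).split(X_tr), 1):
    model = build_model()
    model.compile(optimizer="sgd", loss="categorical_crossentropy", metrics=["accuracy"])
    model.fit(X_tr[tr], y_tr[tr], validation_data=(X_tr[val], y_tr[val]),
              epochs=2, batch_size=32, verbose=0)     # a few epochs just for the sketch
    scores.append(model.evaluate(X_te, y_te, verbose=0)[1])
    print(f"fold {fold}: test accuracy = {scores[-1]:.3f}")

# Step 7: summarize across folds.
print("mean accuracy:", np.mean(scores))
```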

2.5. Experimental Environment Settings and Performance Evaluation Metrics

This research aims to identify an optimal model for classifying the sunflower images. The models were implemented using Python (v. 3.8), OpenCV (v. 4.7), and the Keras library (v. 2.8) on Windows 10 Pro, on a system with an Intel i5 processor running at 2.9 GHz, an Nvidia RTX 2060 graphics processing unit, and 16 GB of RAM.
Several metrics were employed to evaluate the performance of classifying sunflower blooms and leaves, including accuracy, precision, recall, and F1-score, which are frequently used indicators [30]. Accuracy is the ratio of samples from all classes that are correctly identified, recall is the ratio of correctly classified positives among all actual positives, and precision is the ratio of correctly identified positives among all predicted positives [45]. The metrics were calculated using Equations (1)–(4).
$$\text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN} \tag{1}$$
$$\text{Recall} = \frac{TP}{TP + FN} \tag{2}$$
$$\text{Precision} = \frac{TP}{TP + FP} \tag{3}$$
$$\text{F1-score} = \frac{2 \times \text{Recall} \times \text{Precision}}{\text{Recall} + \text{Precision}} \tag{4}$$
where TP, TN, FP, and FN denote true positives, true negatives, false positives, and false negatives, respectively.
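In practice, these metrics can be computed directly from the predicted and true labels, for example with scikit-learn, as in the sketch below (the label values shown are toy examples, and macro averaging across the four classes is an assumption).

```python
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, confusion_matrix, classification_report)

# y_true / y_pred are integer class indices
# (0 = Downy Mildew, 1 = Fresh Leaf, 2 = Gray Mold, 3 = Leaf Scars).
y_true = [0, 1, 2, 3, 0, 2]          # toy values for illustration only
y_pred = [0, 1, 2, 3, 3, 2]

print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred, average="macro"))
print("recall   :", recall_score(y_true, y_pred, average="macro"))
print("f1-score :", f1_score(y_true, y_pred, average="macro"))
print(confusion_matrix(y_true, y_pred))
print(classification_report(y_true, y_pred, zero_division=0))
```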

3. Results

The performance of all classifiers, namely the AlexNet, VGG16, InceptionV3, MobileNetV3, and EfficientNet models, was evaluated with the evaluation metrics described in Section 2.5. Convergence graphs of the training runs are also given to track training accuracy and loss over the epochs. Transfer learning was employed for the training processes, and ImageNet weights were used to initialize the model weights. To produce comparable results, each training procedure was completed using the same settings, such as batch size, dataset split ratio, learning rate, and optimizer. The number of epochs was set to 300, the early stopping patience was set to 30, the batch size was set to 32, the initial learning rate was set to 0.0005, and SGD was chosen as the optimizer during the training process. Figure 8 displays the training accuracy, validation accuracy, training loss, and validation loss versus the number of epochs for all models considered in this study after applying transfer learning.
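For illustration, these common settings correspond to a Keras configuration along the following lines; the variables model, x_train, y_train, x_val, and y_val are assumed to be defined earlier in the pipeline, and monitoring validation loss with weight restoration in early stopping is an assumption rather than a reported detail.

```python
import tensorflow as tf
from tensorflow.keras.callbacks import EarlyStopping

# `model`, `x_train`, `y_train`, `x_val`, and `y_val` are assumed to exist.
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.0005),
              loss="categorical_crossentropy",
              metrics=["accuracy"])

early_stop = EarlyStopping(monitor="val_loss", patience=30,
                           restore_best_weights=True)   # assumed early-stopping behaviour

history = model.fit(x_train, y_train,
                    validation_data=(x_val, y_val),
                    epochs=300,
                    batch_size=32,
                    callbacks=[early_stop])
```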
Initially, the number of epochs was set to 300 for all the models, but the early stopping criterion was also deployed to avoid wasting time when no further change occurred during model training. As Figure 8 shows, each model took a different number of epochs to train and validate: AlexNet, VGG16, InceptionV3, MobileNetV3, and EfficientNetB3 took 117, 76, 201, 126, and 89 epochs, respectively. From Figure 8A, it can be seen that the training accuracy of AlexNet started at 40% in the initial iterations and gradually increased to around 85%, where it remained once there was no further improvement in training accuracy. From Figure 8B, it can be seen that the training accuracy of VGG16 started at 35% in the first iteration and continued to improve until iteration 22, after which the model reached its optimal accuracy. The validation accuracy behaved almost identically to the training accuracy. Regarding the training and validation loss, the figure shows that in the initial iterations the model performed poorly, with high loss values, but as the model learned more about the data, both training and validation loss decreased. By the 20th iteration, the model accuracy had reached its maximum value, and Figure 8B shows that VGG16 achieved 96.5% accuracy. InceptionV3 performed similarly to VGG16 in terms of training and validation; however, the accuracy of VGG16 was better than that of InceptionV3, which may be due to factors such as architectural design, parameter efficiency, computational efficiency, dimensionality reduction, and auxiliary classifiers.
Figure 8D,E present the training accuracy, validation accuracy, training loss, and validation loss of MobileNetV3 and EfficientNetB3. From Figure 8D,E, it can be seen that both models performed better than AlexNet, VGG16, and InceptionV3 in terms of training accuracy. The validation accuracy of MobileNetV3 and EfficientNetB3 started from 55% and 70%, respectively. From Figure 8D, it is evident that MobileNetV3 reached its optimal accuracy before 35 iterations, whereas EfficientNetB3 reached its optimal accuracy within the first 10 iterations. As far as training and validation loss are concerned, both models performed well. Considering the training accuracy, validation accuracy, training loss, and validation loss, it can be concluded that EfficientNetB3 outperformed the other state-of-the-art models.
Figure 9 shows the confusion matrices of the sunflower leaf and bloom classification results for all models, where it can be seen that most of the images were assigned to the correct classes. According to Figure 9E, the EfficientNetB3 model has the fewest misclassifications: it predicted 3 items in the Leaf Scars class as Downy Mildew and 4 items in the Downy Mildew class as Leaf Scars. Figure 9A shows that 39 out of 286 images were misclassified by AlexNet, which is the highest misclassification count. According to Figure 9B,D, several misclassifications were made by VGG16 (10) and MobileNetV3 (9). However, all models except AlexNet showed perfect recognition of healthy leaves and gray mold disease, with no misclassifications observed. It can also be observed that AlexNet did not perform well and had the highest number of misclassifications across all classes.
Table 2 shows the evaluation metrics, in terms of precision, recall, F1-score, and accuracy, of all the models used in this study. According to the results given in Table 2, the performance of the models in discriminating healthy leaves from infected items is very high, with a classification accuracy of 100%. The classification accuracy for the detection of Gray Mold disease was also 100% (except for AlexNet), whereas the classification accuracies for Downy Mildew and Leaf Scars were comparatively lower for all models. The Downy Mildew and Leaf Scars classes were classified by InceptionV3 with accuracies of 91.4% and 91.0%, respectively, whereas VGG16 achieved 94.2% and 92.4% for the Downy Mildew and Leaf Scars classes, respectively. The lowest performance in detecting Downy Mildew and Leaf Scars was obtained by AlexNet, with classification accuracies of 82.4% and 74.7%, respectively. The highest performance in detecting sunflower diseases was obtained by EfficientNetB3, with classification accuracies of 95.7% and 94.9% for Downy Mildew and Leaf Scars, respectively.
The overall performance of the models before and after incorporating transfer learning is given in Table 3. From the table, it can be seen that incorporating transfer learning improved the precision, recall, F1-score, and accuracy considerably for all the models. Before transfer learning, the accuracies of AlexNet, VGG16, InceptionV3, MobileNetV3, and EfficientNetB3 were 83.5%, 93.4%, 93.2%, 83.2%, and 88.5%, respectively. After incorporating transfer learning, the EfficientNetB3 model achieved the highest accuracy among all the models, 97.6%, in just 89 epochs. MobileNetV3 achieved an accuracy of 96.9% in 126 epochs, whereas the AlexNet model achieved the lowest classification accuracy (86.4%) in 117 epochs; the second-lowest accuracy was obtained by InceptionV3, with 95.4% within 201 epochs, the highest number of epochs among all the models.
To evaluate the skill of the models and check their dependency on the dataset split, the K-fold cross-validation technique was applied. In this study, the entire dataset was divided into 5 subsets: one subset was used for testing, and the remaining 4 subsets were used for training in a loop in which a different subset was assigned as the validation set in each iteration. The results of the applied method are given in Table 4.
According to the results given in Table 4, the EfficientNetB3 model achieved the highest mean accuracy of 98.0% among all the models, while the AlexNet model achieved the lowest classification accuracy of 85.1%. The second-best accuracy was obtained by the VGG16 model (96.5%), and InceptionV3 and MobileNetV3 achieved the same mean accuracy of 95.7%.
Inference time results for these models are given in Table 5. Since the trained models may be used in real-time systems, inference times were calculated for different batch sizes for each model in order to obtain test performance results. A total of 1000 test images were used so that the inference times could be calculated precisely and thus more normalized values obtained. For example, for the AlexNet architecture with batch size = 1, as shown in Table 5, the calculated time is 4.4 ms. This value is calculated as follows: 1000 images are given as test data with batch size = 1; the total test time is 4447 ms, during which the 1000 images are classified, so classifying one image takes 4447/1000 = 4.4 ms. Similarly, with the batch size set to 16 for the AlexNet model, the total test time for 1000 images was 1832 ms; in other words, it took 1832/1000 = 1.8 ms to classify one image. These calculations were performed for all models and batch sizes, and the results are reported in Table 5.
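A sketch of how such per-image inference times can be measured is given below; the synthetic 300 × 300 images and the variable model are placeholders for the real test data and trained networks.

```python
import time
import numpy as np

def mean_inference_ms(model, images, batch_size):
    """Average per-image classification time in milliseconds."""
    start = time.perf_counter()
    model.predict(images, batch_size=batch_size, verbose=0)
    total_ms = (time.perf_counter() - start) * 1000.0
    return total_ms / len(images)

# 1000 synthetic test images standing in for the real test data; `model` is
# assumed to be a trained network whose input size matches these images.
images = np.random.rand(1000, 300, 300, 3).astype("float32")

for bs in (1, 16, 32):
    print(f"batch size {bs}: {mean_inference_ms(model, images, bs):.1f} ms/image")
```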
Based on these results, when such a classification system is to be used in real-time applications, the EfficientNetB3 model can be used to achieve high accuracy if inference time is not a constraint. If the application is to run on mobile devices or under hardware capacity limitations, the MobileNetV3 model would be the better choice in terms of both accuracy and low inference time.

4. Discussion and Conclusions

In this comparative study, we investigated the classification of sunflower diseases using deep learning models and incorporated transfer learning techniques. The models evaluated in this study included AlexNet, VGG16, InceptionV3, MobileNetV3, and EfficientNet. By leveraging pre-trained models, we aimed to enhance the accuracy and efficiency of sunflower disease classification.
Our findings indicate that deep learning models are highly effective in accurately identifying and classifying various sunflower diseases. Through a thorough analysis of the results, we observed that all the models achieved considerable accuracy rates, highlighting their potential for disease detection in sunflower crops. However, some models demonstrated superior performance compared to others, providing valuable insights into their capabilities for this specific task.
Among the models evaluated, EfficientNetB3 consistently outperformed the other architectures in terms of accuracy, precision, recall, and F1-score. This may be due to several characteristics that make it stand out compared to the other models:
  • Compound Scaling: EfficientNetB3 includes a technique known as compound scaling, which systematically grows the model’s depth, width, and resolution all at once. This method enables EfficientNetB3 to perform better across a wide variety of computational resources and dataset sizes. EfficientNetB3 achieves improved accuracy while keeping efficiency by scaling the model in a balanced manner.
  • Depth and Width: When compared to MobileNetV3 and InceptionV3, EfficientNetB3 is deeper and wider. Because of the enhanced depth, it can capture more complicated features and hierarchies in the data. The increased width, which refers to the number of channels in each layer, improves the model’s representational capacity and allows for more fine-grained detail to be captured.
  • Resolution: When compared to AlexNet, MobileNetV3, InceptionV3, and VGG16, EfficientNetB3 has higher-resolution input images. The higher the input resolution, the more visual information the model can learn from, which is very useful when working with detailed or high-quality photos.
  • Efficient Architecture: Despite its depth and breadth, EfficientNetB3 retains computational efficiency. It accomplishes this by employing efficient network design ideas such as inverted residual blocks and squeeze-and-excite modules. When compared to VGG16 and InceptionV3, these design choices lower the number of parameters and operations, resulting in faster training and inference times.
  • State-of-the-Art Performance: EfficientNetB3 has regularly achieved top performance in a variety of computer vision tasks, including image classification and object recognition, on many benchmark datasets, including ImageNet. Its balanced scaling and efficient architecture contribute to its high performance, making it a suitable solution for many real-world applications.
The utilization of transfer learning in sunflower disease classification showcases its potential as a valuable technique for agricultural applications. By leveraging pre-trained models, we were able to benefit from the learned features and representations of general images, adapting them to the specific task of sunflower disease classification. This approach circumvents the need for training models from scratch, which can be computationally expensive and resource intensive.
As with any scientific study, there are opportunities for future work to build upon the findings of this research. One avenue for future research includes expanding the dataset used for training and evaluation. While our study utilized a comprehensive dataset, incorporating additional samples from diverse geographical locations and under different environmental conditions would further enhance the generalization capability of the models. This would enable the models to accurately classify diseases in various regions and environments, ensuring robustness and reliability. Exploring the interpretability of the deep learning models could be also a potential improvement. Interpretability is a crucial aspect, especially in agricultural applications, where understanding the decision-making process is vital for the acceptance and adoption of these models. While deep learning models are often regarded as black boxes due to their complex architectures, efforts have been made to interpret and visualize the learned features and decision boundaries. Such interpretability techniques can provide valuable insights into the factors contributing to disease classification, ultimately aiding farmers and agricultural experts in decision-making processes.
Furthermore, exploring model optimization techniques specific to sunflower disease classification could potentially improve the performance of the deep learning models. Techniques such as neural architecture search (NAS) can be employed to automatically discover optimal architectures for this task, considering the unique characteristics of sunflower diseases.

Author Contributions

Conceptualization, Y.G., Z.Ü., H.A. and M.S.M.; Data curation, Y.G., Z.Ü. and H.A.; Formal analysis, Y.G., Z.Ü. and H.A.; Funding acquisition, Y.G.; Investigation, Y.G., Z.Ü., H.A. and M.S.M.; Methodology, Y.G., Z.Ü., H.A. and M.S.M.; Project administration, Y.G.; Resources, Y.G., Z.Ü. and H.A.; Software, Y.G., Z.Ü. and H.A.; Supervision, Y.G. and Z.Ü.; Validation, Y.G., Z.Ü. and H.A.; Visualization, Y.G., Z.Ü. and M.S.M.; Writing—original draft, Y.G. and Z.Ü.; Writing—review & editing, Y.G. and Z.Ü. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Deanship of Scientific Research, Vice Presidency for Graduate Studies and Scientific Research, King Faisal University, Saudi Arabia, under the Project GRANT3,625.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

We have used a public dataset [43].

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Production Volume of Sunflower Seed in Major Producer Countries in 2022/2023. Available online: https://www.statista.com/statistics/263928/production-of-sunflower-seed-since-2000-by-major-countries/#:~:text=Sunflower%20seed%20production%20in%20major%20countries%202022%2F2023&text=During%20that%20time%20period%2C%20Russia,metric%20tons%20in%202022%2F2023 (accessed on 11 June 2023).
  2. Adeleke, B.S.; Babalola, O.O. Oilseed crop sunflower (Helianthus annuus) as a source of food: Nutritional and health benefits. Food Sci. Nutr. 2020, 8, 4666–4684. [Google Scholar] [CrossRef]
  3. Urrutia, R.I.; Yeguerman, C.; Jesser, E.; Gutierrez, V.S.; Volpe, M.A.; González, J.O.W. Sunflower seed hulls waste as a novel source of insecticidal product: Pyrolysis bio-oil bioactivity on insect pests of stored grains and products. J. Clean. Prod. 2020, 287, 125000. [Google Scholar] [CrossRef]
  4. Hladni, N.; Terzić, S.; Mutavdžić, B.; Zorić, M. Classification of confectionary sunflower genotypes based on morphological characters. J. Agric. Sci. 2017, 155, 1594–1609. [Google Scholar] [CrossRef]
  5. Malik, A.; Vaidya, G.; Jagota, V.; Eswaran, S.; Sirohi, A.; Batra, I.; Rakhra, M.; Asenso, E. Design and Evaluation of a Hybrid Technique for Detecting Sunflower Leaf Disease Using Deep Learning Approach. J. Food Qual. 2022, 2022, 9211700. [Google Scholar] [CrossRef]
  6. Sasaki, Y.; Okamoto, T.; Imou, K.; Torii, T. Automatic diagnosis of plant disease-Spectral reflectance of healthy and diseased leaves. IFAC Proc. Vol. 1998, 31, 145–150. [Google Scholar] [CrossRef]
  7. Haber, S. Diagnosis of Flame Chlorosis by Reverse Transcription-Polymerase Chain Reaction (RT-PCR). Plant Dis. 1995, 79, 626–630. [Google Scholar] [CrossRef]
  8. Koo, C.; Malapi-Wight, M.; Kim, H.S.; Cifci, O.S.; Vaughn-Diaz, V.L.; Ma, B.; Kim, S.; Abdel-Raziq, H.; Ong, K.; Jo, Y.-K.; et al. Development of a Real-Time Microchip PCR System for Portable Plant Disease Diagnosis. PLoS ONE 2013, 8, e82704. [Google Scholar] [CrossRef]
  9. Aggarwal, S.; Gupta, S.; Gupta, D.; Gulzar, Y.; Juneja, S.; Alwan, A.A.; Nauman, A. An Artificial Intelligence-Based Stacked Ensemble Approach for Prediction of Protein Subcellular Localization in Confocal Microscopy Images. Sustainability 2023, 15, 1695. [Google Scholar] [CrossRef]
  10. Ayoub, S.; Gulzar, Y.; Rustamov, J.; Jabbari, A.; Reegu, F.A.; Turaev, S. Adversarial Approaches to Tackle Imbalanced Data in Machine Learning. Sustainability 2023, 15, 7097. [Google Scholar] [CrossRef]
  11. Ayoub, S.; Gulzar, Y.; Reegu, F.A.; Turaev, S. Generating Image Captions Using Bahdanau Attention Mechanism and Transfer Learning. Symmetry 2022, 14, 2681. [Google Scholar] [CrossRef]
  12. Khan, S.A.; Gulzar, Y.; Turaev, S.; Peng, Y.S. A Modified HSIFT Descriptor for Medical Image Classification of Anatomy Objects. Symmetry 2021, 13, 1987. [Google Scholar] [CrossRef]
  13. Hamid, Y.; Elyassami, S.; Gulzar, Y.; Balasaraswathi, V.R.; Habuza, T.; Wani, S. An improvised CNN model for fake image detection. Int. J. Inf. Technol. 2022, 15, 5–15. [Google Scholar] [CrossRef]
  14. Gulzar, Y.; Khan, S.A. Skin Lesion Segmentation Based on Vision Transformers and Convolutional Neural Networks—A Comparative Study. Appl. Sci. 2022, 12, 5990. [Google Scholar] [CrossRef]
  15. Alam, S.; Raja, P.; Gulzar, Y. Investigation of Machine Learning Methods for Early Prediction of Neurodevelopmental Disorders in Children. Wirel. Commun. Mob. Comput. 2022, 2022, 5766386. [Google Scholar] [CrossRef]
  16. Anand, V.; Gupta, S.; Gupta, D.; Gulzar, Y.; Xin, Q.; Juneja, S.; Shah, A.; Shaikh, A. Weighted Average Ensemble Deep Learning Model for Stratification of Brain Tumor in MRI Images. Diagnostics 2023, 13, 1320. [Google Scholar] [CrossRef]
  17. Gulzar, Y.; Alwan, A.A.; Abdullah, R.M.; Abualkishik, A.Z.; Oumrani, M. OCA: Ordered Clustering-Based Algorithm for E-Commerce Recommendation System. Sustainability 2023, 15, 2947. [Google Scholar] [CrossRef]
  18. Qurashi, J.M.; Jambi, K.M.; Eassa, F.E.; Khemakhem, M.; Alsolami, F.; Basuhail, A.A. Toward Attack Modeling Technique Addressing Resilience in Self-Driving Car. IEEE Access 2022, 11, 2652–2673. [Google Scholar] [CrossRef]
  19. Hanafi, M.F.F.M.; Nasir, M.S.F.M.; Wani, S.; Abdulghafor, R.A.A.; Gulzar, Y.; Hamid, Y. A Real Time Deep Learning Based Driver Monitoring System. Int. J. Perceptive Cogn. Comput. 2021, 7, 79–84. [Google Scholar]
  20. Sahlan, F.; Hamidi, F.; Misrat, M.Z.; Adli, M.H.; Wani, S.; Gulzar, Y. Prediction of Mental Health Among University Students. Int. J. Perceptive Cogn. Comput. 2021, 7, 85–91. [Google Scholar]
  21. Lauriola, I.; Lavelli, A.; Aiolli, F. An introduction to Deep Learning in Natural Language Processing: Models, techniques, and tools. Neurocomputing 2021, 470, 443–456. [Google Scholar] [CrossRef]
  22. Bantan, R.A.R.; Ali, A.; Naeem, S.; Jamal, F.; Elgarhy, M.; Chesneau, C. Discrimination of sunflower seeds using multispectral and texture dataset in combination with region selection and supervised classification methods. Chaos Interdiscip. J. Nonlin. Sci. 2020, 30, 113142. [Google Scholar] [CrossRef]
  23. Kurtulmuş, F. Identification of sunflower seeds with deep convolutional neural networks. J. Food Meas. Charact. 2020, 15, 1024–1033. [Google Scholar] [CrossRef]
  24. Gulzar, Y.; Hamid, Y.; Soomro, A.B.; Alwan, A.A.; Journaux, L. A Convolution Neural Network-Based Seed Classification System. Symmetry 2020, 12, 2018. [Google Scholar] [CrossRef]
  25. Sirohi, A.; Malik, A. A Hybrid Model for the Classification of Sunflower Diseases Using Deep Learning; Institute of Electrical and Electronics Engineers Inc.: Piscataway, NJ, USA, 2021; pp. 58–62. [Google Scholar]
  26. Albarrak, K.; Gulzar, Y.; Hamid, Y.; Mehmood, A.; Soomro, A.B. A Deep Learning-Based Model for Date Fruit Classification. Sustainability 2022, 14, 6339. [Google Scholar] [CrossRef]
  27. Chen, S.; Lv, F.; Huo, P. Improved Detection of Yolov4 Sunflower Leaf Diseases. In Proceedings of the Proceedings-2021 2nd International Symposium on Computer Engineering and Intelligent Communications, ISCEIC, Nanjing, China, 6–8 August 2021. [Google Scholar]
  28. Carbone, C.; Potena, C.; Nardi, D. Augmentation of Sunflower-Weed Segmentation Classification with Unity Generated Imagery Including Near Infrared Sensor Data. Lect. Notes Netw. Syst. 2022, 306, 42–63. [Google Scholar]
  29. Dawod, R.G.; Dobre, C. Automatic Segmentation and Classification System for Foliar Diseases in Sunflower. Sustainability 2022, 14, 11312. [Google Scholar] [CrossRef]
  30. Gulzar, Y. Fruit Image Classification Model Based on MobileNetV2 with Deep Transfer Learning Technique. Sustainability 2023, 15, 1906. [Google Scholar] [CrossRef]
  31. Aktaş, H.; Kızıldeniz, T.; Ünal, Z. Classification of pistachios with deep learning and assessing the effect of various datasets on accuracy. J. Food Meas. Charact. 2022, 16, 1983–1996. [Google Scholar] [CrossRef]
  32. Ünal, Z.; Aktaş, H. Classification of hazelnut kernels with deep learning. Postharvest Biol. Technol. 2023, 197, 112225. [Google Scholar] [CrossRef]
  33. Dawod, R.G.; Dobre, C. ResNet interpretation methods applied to the classification of foliar diseases in sunflower. J. Agric. Food Res. 2022, 9, 100323. [Google Scholar] [CrossRef]
  34. Song, Z.; Wang, P.; Zhang, Z.; Yang, S.; Ning, J. Recognition of sunflower growth period based on deep learning from UAV remote sensing images. Precis. Agric. 2023, 24, 1417–1438. [Google Scholar] [CrossRef]
  35. Barrio-Conde, M.; Zanella, M.A.; Aguiar-Perez, J.M.; Ruiz-Gonzalez, R.; Gomez-Gil, J. A Deep Learning Image System for Classifying High Oleic Sunflower Seed Varieties. Sensors 2023, 23, 2471. [Google Scholar] [CrossRef]
  36. Sathi, T.A.; Hasan, M.A.; Alam, M.J. SunNet: A Deep Learning Approach to Detect Sunflower Disease. In Proceedings of the 7th International Conference on Trends in Electronics and Informatics, ICOEI 2023—Proceedings, Tirunelveli, India, 11–13 April 2023; pp. 1210–1216. [Google Scholar]
  37. Ghosh, P.; Mondal, A.K.; Chatterjee, S.; Masud, M.; Meshref, H.; Bairagi, A.K. Recognition of Sunflower Diseases Using Hybrid Deep Learning and Its Explainability with AI. Mathematics 2023, 11, 2241. [Google Scholar] [CrossRef]
  38. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. Adv. Neural Inf. Process. Syst. 2012, 25. Available online: https://proceedings.neurips.cc/paper/2012/hash/c399862d3b9d6b76c8436e924a68c45b-Abstract.html (accessed on 18 June 2023). [CrossRef]
  39. Simonyan, K.; Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. arXiv 2014, arXiv:1409.1556. [Google Scholar]
  40. Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J.; Wojna, Z. Rethinking the Inception Architecture for Computer Vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 26 June–1 July 2016; pp. 2818–2826. [Google Scholar]
  41. Sandler, M.; Howard, A.; Zhu, M.; Zhmoginov, A.; Chen, L.-C. Mobilenetv2: Inverted Residuals and Linear Bottlenecks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 4510–4520. [Google Scholar]
  42. Tan, M.; Le, Q. EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks. In Proceedings of the International Conference on Machine Learning, Long Beach, CA, USA, 9–15 June 2019; pp. 6105–6114. [Google Scholar]
  43. Sara, U.; Rajbongshi, A.; Shakil, R.; Akter, B.; Sazzad, S.; Uddin, M.S. An extensive sunflower dataset representation for successful identification and classification of sunflower diseases. Data Brief 2022, 42, 108043. [Google Scholar] [CrossRef]
  44. Nti, I.K.; Nyarko-Boateng, O.; Aning, J. Performance of Machine Learning Algorithms with Different K Values in K-fold CrossValidation. Int. J. Inf. Technol. Comput. Sci. 2021, 13, 61–71. [Google Scholar] [CrossRef]
  45. Hamid, Y.; Wani, S.; Soomro, A.B.; Alwan, A.A.; Gulzar, Y. Smart Seed Classification System Based on MobileNetV2 Architecture. In Proceedings of the 2022 2nd International Conference on Computing and Information Technology (ICCIT), Tabuk, Saudi Arabia, 25–27 January 2022; pp. 217–222. [Google Scholar]
Figure 1. A diagram of the proposed approach.
Figure 2. Sample images of four classes. (A) Downy Mildew, (B) Fresh Leaf, (C) Gray Mold, (D) Leaf Scars.
Figure 3. AlexNet Architecture.
Figure 4. VGG16 architecture.
Figure 5. InceptionV3 architecture.
Figure 6. MobileNetV3 Architecture.
Figure 7. EfficientNetB0 baseline architecture.
Figure 8. Convergence Curves of Models: (A) AlexNet, (B) VGG16, (C) InceptionV3, (D) MobileNetV3, (E) EfficientNetB3.
Figure 9. Confusion Matrix of Models. (A) AlexNet, (B) VGG16, (C) InceptionV3, (D) MobileNetV3, (E) EfficientNetB3.
Table 1. Dataset distribution.

Class Name      Training Set    Validation Set    Testing Set    Number of Images Per Class
Downy Mildew    329             70                71             470
Fresh Leaf      360             77                78             515
Gray Mold       278             60                60             398
Leaf Scars      356             76                77             509
Total           1323            283               286            1892
Table 2. Evaluation Metrics for the Performance of Models.

Model            Class          Precision   Recall   F1-Score   Accuracy
AlexNet          Downy Mildew   0.824       0.662    0.734      0.662
                 Fresh Leaf     0.950       0.974    0.962      0.974
                 Gray Mold      0.952       0.983    0.967      0.983
                 Leaf Scars     0.747       0.844    0.793      0.844
VGG16            Downy Mildew   0.942       0.915    0.929      0.915
                 Fresh Leaf     1.000       1.000    1.000      1.000
                 Gray Mold      1.000       1.000    1.000      1.000
                 Leaf Scars     0.924       0.948    0.936      0.948
InceptionV3      Downy Mildew   0.914       0.901    0.908      0.901
                 Fresh Leaf     1.000       1.000    1.000      1.000
                 Gray Mold      1.000       1.000    1.000      1.000
                 Leaf Scars     0.910       0.922    0.916      0.922
MobileNetV3      Downy Mildew   0.956       0.915    0.935      0.915
                 Fresh Leaf     1.000       1.000    1.000      1.000
                 Gray Mold      1.000       1.000    1.000      1.000
                 Leaf Scars     0.925       0.961    0.943      0.961
EfficientNetB3   Downy Mildew   0.957       0.944    0.950      0.944
                 Fresh Leaf     1.000       1.000    1.000      1.000
                 Gray Mold      1.000       1.000    1.000      1.000
                 Leaf Scars     0.949       0.961    0.955      0.961
Table 3. Overall performance of models before and after transfer learning.

                           Models           Precision   Recall   F1-Score   Accuracy   Epochs
Before Transfer Learning   AlexNet          0.835       0.831    0.832      0.835      300
                           VGG16            0.934       0.936    0.935      0.934      300
                           InceptionV3      0.932       0.933    0.932      0.932      300
                           MobileNetV3      0.832       0.835    0.838      0.832      300
                           EfficientNetB3   0.902       0.890    0.889      0.885      300
After Transfer Learning    AlexNet          0.865       0.866    0.861      0.864      117
                           VGG16            0.965       0.965    0.965      0.965      76
                           InceptionV3      0.954       0.954    0.954      0.954      201
                           MobileNetV3      0.969       0.969    0.969      0.969      126
                           EfficientNetB3   0.976       0.976    0.976      0.976      89
Table 4. K-fold cross-validation results.

Models           Loop 1 Acc   Loop 2 Acc   Loop 3 Acc   Loop 4 Acc   Mean Acc
AlexNet          0.846        0.857        0.854        0.846        0.851
VGG16            0.960        0.960        0.981        0.957        0.965
InceptionV3      0.936        0.955        0.955        0.981        0.957
MobileNetV3      0.959        0.955        0.960        0.955        0.957
EfficientNetB3   0.984        0.970        0.976        0.989        0.980
Table 5. Inference time performance.

Model Name       Batch Size = 1 (ms)   Batch Size = 16 (ms)   Batch Size = 32 (ms)
AlexNet          4.4                   1.8                    1.7
VGG16            9.9                   4.1                    4.0
InceptionV3      18.3                  3.8                    3.3
MobileNetV3      11.1                  2.2                    1.9
EfficientNetB3   25.7                  9.4                    5.7
