1. Introduction
Grapes are a globally important crop with a significant economic impact [1]. Diseases in grapevines pose a severe threat to global food security, contributing to crop losses of 10–30% [2]. Vines are highly susceptible to a wide variety of fungal diseases that can reduce yields, including leaf spot (Isariopsis griseola), gray mold (Botrytis cinerea), downy mildew (Plasmopara viticola), powdery mildew (Erysiphe necator), esca (Phaeomoniella chlamydospora and Phaeoacremonium aleophilum), and black rot (Guignardia bidwellii). All of these diseases adversely affect the leaves or the crop itself, which can lead to moderate to extreme losses in the production of one or both [3]. The health of grapevines can be affected by a variety of factors, with stress induced by both biotic and abiotic elements. Biotic stress arises from living pathogens such as fungi, viruses, and bacteria, which are the most prevalent pathogenic agents [4]. In contrast, abiotic stress is linked to non-living factors such as climate and soil conditions. For instance, chlorosis is often a physiological symptom rather than a disease in itself; it is caused by nutrient deficiencies (iron, nitrogen, magnesium, or zinc), poor soil conditions (inadequate drainage, high pH, or compacted soil), or environmental stress (water imbalance, root damage, or exposure to pollutants), all of which lead to the yellowing of leaves [5,6]. The most prevalent method of mitigating biotic stress in grapevines is the application of chemical treatments. While this method has proven highly successful, it can also harm the environment and overall agricultural revenue, as it is not always cost-effective [7]. Numerous precision agriculture techniques have been developed in response to these factors in an effort to maximize agricultural output while reducing the impact of external factors such as pests and diseases [7]. As a result, the agricultural industry makes extensive use of remote and proximal sensing techniques, as well as big data technologies, computer vision, robotics, deep learning (DL) and machine learning (ML) techniques, and high-performance computing. These methods have utility beyond plant disease diagnosis, encompassing weed detection, crop quality assessment, yield prediction, species identification, water and soil monitoring, and irrigation system management [7,8].
The success of classification algorithms and digital imaging techniques depends on a number of factors, such as transfer learning, the characteristics of the training inputs, data augmentation, and the combination of multiple trained deep networks. To solve classification problems with a small dataset, ML practitioners often turn to transfer learning, a technique that employs previously trained networks, typically those with deep architectures [9]. Under this strategy, the original pretraining weights are preserved and only partially updated as new data are introduced to the network. The key idea is to exploit what a deep network has already learned from earlier training in order to ease the training of a new, related classification task whose feature space or distribution differs from the original [10]. Multiple studies have confirmed that deep learning, particularly transfer learning, effectively classifies plant diseases, achieving over 80% accuracy [11,12,13]. This approach reduces computational time and enables the training of diverse classes with a substantial number of instances, making it particularly suitable for deep architectures. A variety of well-established pretrained networks, such as AlexNet, GoogLeNet, ResNet, and the VGG family, are available. These DL models differ in their layer architectures. In transfer learning, typically only the final layer's parameters are adjusted, while the rest of the architecture extracts features from the training samples.
Texture features based on the gray level co-occurrence matrix (GLCM) are also crucial for identifying diseases in grapevines. Jaisakthi et al. [14] developed a system for detecting grapevine diseases that extracts texture features, such as color variations, after segmentation and classifies diseases using support vector machine (SVM), AdaBoost, and random forest (RF) algorithms. Distinguished by its high accuracy in diagnosing diseases such as rot and leaf blight, this method highlights the value of texture features in improving agricultural disease management. Data augmentation, applied during the training phase in this work, is an additional factor that can influence the performance of the detection model. Augmenting data by applying a sequence of transformations, such as mirroring or rotating an image, increases the dataset's usefulness and depth [15]. Moreover, deep network fusion has been extensively employed to enhance the quality and robustness of disease detection models, having previously been utilized for identifying plant phenotypes. Xiao et al. [16] employed a convolutional neural network (CNN) with the ResNet50 architecture to detect various strawberry diseases, including gray mold, crown leaf blight, fruit leaf blight, powdery mildew, and leaf blight, using datasets comprising both original and feature-enhanced images. Similarly, Koklu et al. [17] achieved impressive classification performance by constructing a CNN-SVM model, extracting features from the Logits layer of the MobileNetV2 architecture, and employing various SVM kernels to classify grapevine leaves into five different species.
Traditionally, DL models in agriculture have relied on single-modal data, particularly plant images. However, advanced agricultural practices have led to a growing trend of exploiting multimodal data, which combine plant images with additional features such as GLCM variables and pretrained characteristics. This shift has the potential to improve accuracy and performance in estimating plant phenotypes by incorporating diverse data sources. The current study distinguishes itself from other research in the field by exploring innovative deep networks, along with the creation of standalone software designed for a first-level model. Hence, the primary objectives of this research were (i) to construct a well-organized hybrid deep network to detect diseases in grapevines using high-level characteristics extracted from RGB images and GLCM texture features, (ii) to develop an independent, easy-to-use software solution named AI GrapeCare for the quick assessment and analysis of digital imagery related to grapevine disease spread, (iii) to identify the components of a deep network that make the detection of grapevine infections robust, (iv) to examine the behavior of deep networks in different scenarios involving both augmented and non-augmented data, and (v) to compare the performance of various hybrid deep networks that combine CNNs and deep neural networks (DNNs) with long short-term memory (LSTM), as well as applying pretrained features such as VGG16, VGG19, ResNet50, and ResNet101V2 during training. All of these procedures aim to select the best model that can be recommended for precision agriculture in the future.
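Objective (v), fusing a CNN image branch with an LSTM branch over GLCM-derived features, can be sketched as a two-branch network whose outputs are concatenated before a dense classifier. The sketch below assumes PyTorch as the framework and uses illustrative layer sizes; it is not the architecture trained in this study.

```python
import torch
import torch.nn as nn

class HybridNet(nn.Module):
    """Two-branch sketch: a small CNN for RGB images and an LSTM for a
    sequence of GLCM feature vectors, fused by concatenation."""

    def __init__(self, n_classes=4, glcm_dim=6):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),    # -> (B, 32)
        )
        self.lstm = nn.LSTM(glcm_dim, 24, batch_first=True)
        self.head = nn.Linear(32 + 24, n_classes)     # classifier on fused features

    def forward(self, img, glcm_seq):
        img_feat = self.cnn(img)                      # (B, 32)
        _, (h, _) = self.lstm(glcm_seq)               # final hidden state (1, B, 24)
        fused = torch.cat([img_feat, h[-1]], dim=1)   # (B, 56)
        return self.head(fused)

model = HybridNet()
# Toy batch: two 64x64 RGB images and two length-5 GLCM feature sequences.
logits = model(torch.randn(2, 3, 64, 64), torch.randn(2, 5, 6))
```

In this design the pretrained features (VGG16, ResNet50, etc.) would simply be a third fixed-length vector concatenated at the same fusion point.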
4. Conclusions
Grapevine, a globally significant fruit crop, is often plagued by four prevalent diseases: esca, chlorosis, powdery mildew, and black rot. Timely and accurate diagnosis plays a pivotal role in containing the spread of diseases and minimizing production losses in grapevines. Progress in deep learning has paved the way for innovative diagnostic algorithms in the field of plant disease identification, unlocking new possibilities and avenues for accurate and efficient detection. This paper introduces an innovative hybrid approach, utilizing standalone software named AI GrapeCare, to process and analyze RGB images for identifying grapevine diseases. The proposed framework integrates various deep learning networks, including a convolutional neural network (CNN), long short-term memory (LSTM), and a deep neural network (DNN). It also employs multimodal features such as VGG16, VGG19, ResNet50, and ResNet101V2, along with texture characteristics based on the gray level co-occurrence matrix (GLCM). Hybrid models such as CNNimg-VGG16, CNNimg-LSTMGLCM-VGG16, DNNimg-VGG16, and DNNimg-LSTMGLCM-VGG16 were trained with 80% of the data, while the remaining 20% was reserved for evaluation. This methodology allowed for a thorough performance assessment, enabling the selection of the most effective model. Based on the experimental outcomes, the hybrid network CNNimg-LSTMGLCM-VGG16 exhibited remarkable predictive capability as a feature extractor and classifier for grapevine disease diagnosis. By incorporating its extracted features, the proposed system achieved impressive classification performance, with precision, recall, and F-measure reaching 96.6% and an intersection over union of 93.4%. The system's validation accuracy was 96.6% with a loss of 0.123. The proposed AI GrapeCare-based approach demonstrated the superiority of the chosen model architecture, highlighting its effectiveness in classifying diseased grapevine leaves with minimal processing time and labor involvement.
AI GrapeCare is expected to enhance phenotyping efficiency in precision agriculture by analyzing fruit health dynamics, leading to the consistent cultivation of high-quality crops.