Article

An AI Based Approach for Medicinal Plant Identification Using Deep CNN Based on Global Average Pooling

1 Department of Biosystems Engineering, University of Tehran, Karaj 3158777871, Iran
2 Information Technology, Al-Mustaqbal University College, Babylon 51001, Iraq
3 Department of Industrial Engineering and Management Systems, University of Central Florida, Orlando, FL 32816, USA
4 Department of Computer Engineering, Bandirma Onyedi Eylul University, Balikesir 10200, Turkey
5 Department of Agriculture, Karaj Branch, Islamic Azad University, Karaj 3158777871, Iran
6 Institute of Sciences and Technologies for Sustainable Energy and Mobility (STEMS), National Research Council (CNR) of Italy, 10129 Torino, Italy
* Author to whom correspondence should be addressed.
Agronomy 2022, 12(11), 2723; https://doi.org/10.3390/agronomy12112723
Submission received: 16 September 2022 / Revised: 19 October 2022 / Accepted: 26 October 2022 / Published: 2 November 2022
(This article belongs to the Special Issue Imaging Technology for Detecting Crops and Agricultural Products)

Abstract

Medicinal plants have long been studied and valued for their importance in preserving human health. However, identifying medicinal plants is time-consuming, tedious, and requires an experienced specialist. A vision-based system can therefore support researchers and ordinary people in recognising herb plants quickly and accurately. Thus, this study proposes an intelligent vision-based system that identifies herb plants with an automatic Convolutional Neural Network (CNN). The proposed Deep Learning (DL) model consists of a CNN block for feature extraction and a classifier block for classifying the extracted features. The classifier block includes a Global Average Pooling (GAP) layer, a dense layer, a dropout layer, and a softmax layer. The solution was tested on three image resolutions (64 × 64, 128 × 128, and 256 × 256 pixels) for leaf recognition of five different medicinal plants. The vision-based system achieved more than 99.3% accuracy at all three resolutions. Hence, the proposed method effectively identifies medicinal plants in real time and is capable of replacing traditional methods.

1. Introduction

Medicinal plants have long been used in traditional medicine because of their nutrients and medicinal properties [1]. Owing to bioactive compounds such as phenolics, carotenoids, and anthocyanins, they are known for their antioxidant, anti-allergic, anti-inflammatory, and antibacterial properties [2]. Many plant species are recognized as having medicinal properties; they can be trees, shrubs, or herbs, and their distribution depends on the environmental conditions they have adapted to over time. According to the statistics, about 14–28% of all plants have medicinal uses [3]. Furthermore, medicinal plants are used to treat disease by about 3–5% of patients in developed countries, over 80% of the rural population in developing countries, and about 85% of the population south of the Sahara [4]. Moreover, in developed countries, some people have turned to traditional medicines prepared from medicinal plants to treat and control illnesses after weighing the harmfulness and side effects of chemical drugs [5,6]. In addition to their medicinal uses, these plants can be consumed as food and beverages and even used in cosmetics [7,8]. Unfortunately, many counterfeit, low-quality, damaged, or poorly preserved medicinal plants are manufactured and distributed worldwide, which may harm their users [9].
Botanists have long identified various species of medicinal plants using traditional and experience-based methods. Nevertheless, visually and manually identifying medicinal plants from other similar plants can be extremely challenging and time-consuming for inexperienced people [10,11,12].
Plants are generally classified based on their various organs, including roots, flowers, and leaves. The leaves, among the most important plant organs, differ largely among species and varieties in color, shape, and texture. However, identifying medicinal plants can still be challenging because of the apparent similarity of their leaves. Furthermore, leaf color is not a suitable criterion for plant classification, both because of these similarities and, most importantly, because it varies during the growing period.
Several real-time vision systems have recently been developed using machine vision applications and computational methods to identify medicinal plants [13,14,15,16]. Deep Learning (DL) techniques have been widely applied because they handle feature extraction and feature selection simultaneously. As a result, DL has gained great popularity over the last few years in agricultural automation applications where object detection and image classification are required [17,18,19,20].
The Convolutional Neural Network (CNN) is one of the most successful DL methods and has shown excellent performance in image segmentation and pattern recognition [21]. CNNs use trained layers that have been adopted in several studies for plant identification and classification [22,23,24,25] and plant disease identification [26,27,28,29]. A CNN consists of a hierarchy of self-learned features: low-level features such as colors, corners, and edges are learned in the first layers, while high-level features such as textures and objects are learned in the deeper layers. Automatic feature learning has reduced the sensitivity of CNNs to environmental variables such as lighting changes. These models combine feature extraction with classification, both crucial steps in image processing. Therefore, unlike conventional machine learning algorithms, manual (hand-crafted) feature extraction is not required.
Nasiri et al. [30] showed that a DL algorithm applied to image recognition in the visible range (400–700 nm) could discriminate grape leaves among six different cultivars with 99% accuracy. Paulson and Ravishankar [31] identified 64 types of medicinal plants from digital images using three DL models, a custom CNN, VGG16, and VGG19, achieving accuracy rates of 95.7%, 97.8%, and 97.6%, respectively. Grinblat et al. [32] used a 3-layer CNN model to identify three different plants based on their leaf vein patterns and attained a recognition accuracy of 92%. Hu et al. [33] developed a Multi-Scale Fusion CNN (MSF-CNN) model that classifies plant leaves by integrating multi-scale features; it consists of multiple learning branches operating at different scales. They performed experiments on two public plant-leaf datasets, MalayaKew (MK) and LeafSnap, and demonstrated that the proposed MSF-CNN recognizes plant leaves more effectively than state-of-the-art methods.
Even with vision-based systems, whose ability to extract complex features and select the most important ones via conventional Machine Learning (ML) has significantly improved, it is still difficult to distinguish medicinal herbs from other plants. Therefore, the main objective of the current study was to develop a real-time automatic vision system for identifying medicinal plants using a proposed DL algorithm coupled with a machine vision technique. More precisely, this research aimed to improve the automatic identification of medicinal plants, whose popularity is growing and which are increasingly used for artisanal and industrial purposes.

2. Materials and Methods

2.1. Sampling Protocol

The leaves of five medicinal plants, Lemon Balm (Melissa officinalis L.), Stevia (Stevia rebaudiana Bertoni), Peppermint (Mentha balsamea Wild), Bael (Aegle marmelos) and Tulsi (Ocimum sanctum L.), were collected in the northern area of Iran, in the cities of Salmas (38°11′41″ N, 44°45′53″ E), Khoy (38°33′01″ N, 44°57′08″ E), Maku (39°17′44.96″ N, 44°30′47.64″ E) and Urmia (37°32′55″ N, 45°04′03″ E). The characteristics of these plants are summarized in Appendix A. One hundred and fifty leaves per medicinal plant (a total of 750 samples) were used for the investigation. After collection, the leaves were immediately sealed in zipped plastic bags and transferred to the laboratory for the subsequent operations.
The leaf images were captured in an imaging box consisting of a camera, a lighting box, and a computer, equipped with a ring of LED lamps emitting low-intensity infrared light (450 lm) with intensity adjustable on three levels (low, medium, and high). Refer to Azadnia and Kheiralipour [13] for more details on the imaging box. The imaging distance from the leaf samples was set at 250 mm. The images were captured with a smartphone camera (Galaxy A8, SAMSUNG Corporation, Suwon, Korea, 16 MP), with settings adjusted to achieve high-quality images (f/1.2, 9.8 mm). For each leaf sample, an RGB image of 3456 × 4608 pixels was taken and stored as a JPEG file on a personal computer.

2.2. Image Pre-Processing

Plant leaf images were pre-processed to remove backgrounds using code written in Python (version 3.10). The images were uploaded and processed automatically by the program, which was built with the OpenCV (cv2), Python Imaging Library (PIL), and NumPy libraries. The optimal threshold value T for removing the background was selected and applied to separate the leaves from the backgrounds; Otsu's method was used to identify this optimal threshold [34].
The pre-processing operation for background removal was performed via the following steps, whose results are shown in Figure 1:
Step 1: The acquired images were resized.
Step 2: The images were binarized with Otsu's threshold method.
Step 3: Empty pixels inside the leaf regions were suppressed via a morphological dilation operation.
Step 4: The binary mask from the previous step was inverted.
Step 5: The inverted mask was replaced by the pixels of the original plant images.
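The thresholding and mask steps above can be sketched with a minimal NumPy-only implementation. The study used OpenCV's built-in routines; the function names, the single-pixel dilation, and the assumption that the leaf is darker than the background are illustrative simplifications here, not the authors' exact code.

```python
import numpy as np

def otsu_threshold(gray):
    """Return the threshold maximizing between-class variance (Otsu's method)."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    probs = hist / gray.size
    omega = np.cumsum(probs)                # class-0 probability up to t
    mu = np.cumsum(probs * np.arange(256))  # cumulative mean
    mu_t = mu[-1]
    # between-class variance for every candidate threshold
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    return int(np.argmax(np.nan_to_num(sigma_b)))

def remove_background(rgb):
    """Steps 2-5, simplified: binarize, dilate by one pixel, apply the mask."""
    gray = rgb.mean(axis=2).astype(np.uint8)
    t = otsu_threshold(gray)
    mask = gray <= t                        # leaf assumed darker than background
    # crude one-pixel dilation: OR of the four axis-aligned shifts
    dil = mask.copy()
    dil[1:, :] |= mask[:-1, :]; dil[:-1, :] |= mask[1:, :]
    dil[:, 1:] |= mask[:, :-1]; dil[:, :-1] |= mask[:, 1:]
    return rgb * dil[..., None]             # background pixels set to 0
```

A production version would use `cv2.threshold(..., cv2.THRESH_OTSU)` and `cv2.dilate` with a proper structuring element, as the paper's OpenCV-based pipeline implies.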
Figure 1. Image pre-processing steps to remove the background from an original image (see text for 1–5 descriptions).
The speed and accuracy of image processing through a DL network are mainly affected by the image size in pixels [35]. To be effective, automatic vision systems must be both fast and accurate. For this reason, the images were resized to three resolution levels, 256 × 256, 128 × 128, and 64 × 64 pixels, to enhance speed and to investigate the classification accuracy at different image resolutions. Although pre-processing is generally not necessary for training deep learning algorithms, we explored whether it could reduce computational time and increase recognition accuracy.

2.3. Data Augmentation (DA)

Data Augmentation (DA) is a technique that effectively increases the amount of data available for network training, aimed at enhancing model accuracy and preventing over-fitting [36]. In addition to the original images, five DA transformation methods were adopted, as reported in Figure 2: rotations at four angles (45°, 90°, 135°, and 180°) and a color manipulation. Of the total data (13,500 images), 80% (10,800 images) and 20% (2700 images) were randomly selected for training and testing the proposed network, respectively (Table 1).
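An augmentation and split of this kind can be sketched as follows. This is an illustrative NumPy-only sketch, not the authors' code: it shows exact 90°-multiple rotations via `np.rot90` (the paper's 45° and 135° rotations require interpolation, e.g. PIL's `Image.rotate`), a simple channel-scaling stand-in for the unspecified color manipulation, and the random 80/20 split.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(image):
    """Yield the original plus simple transformations of one RGB image.
    Only 90-degree-multiple rotations are shown (exact with np.rot90); the
    paper's 45 and 135 degree rotations need interpolation and are omitted."""
    yield image
    for k in (1, 2, 3):                      # 90, 180, 270 degrees
        yield np.rot90(image, k)
    # an illustrative colour manipulation: scale the green channel
    tinted = image.astype(np.float32)
    tinted[..., 1] *= 0.8
    yield tinted.clip(0, 255).astype(image.dtype)

def train_test_split(samples, labels, test_fraction=0.2):
    """Random split, as in the paper's 80/20 partition of 13,500 images."""
    idx = rng.permutation(len(samples))
    n_test = int(len(samples) * test_fraction)
    test, train = idx[:n_test], idx[n_test:]
    return ([samples[i] for i in train], [labels[i] for i in train],
            [samples[i] for i in test], [labels[i] for i in test])
```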

2.4. Architecture of the Proposed CNN Model

A CNN is a DL architecture containing a collection of non-linear transformation functions. These networks are an advanced version of Artificial Neural Networks (ANNs) that can perform image processing operations such as object identification, segmentation, and classification. CNN models are made up of several layers that produce an output from an input. Convolutional, pooling, and fully connected layers are the most common layers of a CNN implemented to process RGB images. Convolutional layers exploit the local correlation of information in the images to extract features: they are applied to the input data to extract low-level features such as edges, corners, and blobs, generating a feature map with a set of filters, the so-called kernels [37]. Pooling layers subsample the top-layer feature map, keeping the model stable under operations such as rotation and scaling. These two layer types are employed to reduce the network parameters and training time and to prevent overfitting [38]. The two most common types of pooling layer are maximum and average pooling. Fully connected layers are applied after the convolutional and pooling layers. These layers include neurons, biases, and weights; each neuron is connected to every neuron of the previous layer, converting the integrated multidimensional features into one-dimensional features for classification and identification [39].
Figure 3 summarizes the architecture of the model adopted for image recognition. It consists of five convolutional blocks and a classifier block, where the output of each convolutional block is the input of the next one. In each convolutional block, two convolutional layers extract the important features, including shape, color, and texture. The convolutional layers use 3 × 3 kernels with a stride of 1 and 1 pixel of padding, and each is followed by a ReLU activation function. Each convolutional layer is also followed by a batch normalization layer, which allows the CNN to become deeper while reducing the number of training iterations required. After the batch normalization layers, 2 × 2 max-pooling layers with stride 2 reduce the dimensions of the feature map. Finally, a dropout layer with a rate of 0.1 is used in each block to prevent overfitting.
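The spatial geometry implied by this block structure can be traced with a few lines of Python. This is a sketch under two assumptions taken from the text: the 3 × 3 convolutions with 1-pixel padding preserve spatial size, and each 2 × 2 stride-2 max pool halves it; filter counts are not given in the text, so only spatial dimensions are traced.

```python
def trace_shapes(input_size, n_blocks=5):
    """Side length of the square feature map after each convolutional block.

    Assumes each block applies two 'same' 3x3 convolutions (spatial size
    unchanged) followed by one 2x2 stride-2 max pool (spatial size halved).
    """
    sizes = [input_size]
    for _ in range(n_blocks):
        sizes.append(sizes[-1] // 2)  # only the pooling layer changes the size
    return sizes

# For a 64 x 64 input, five blocks shrink the map to 2 x 2 before the classifier.
```

This also illustrates why the GAP-based classifier is convenient: whatever the input resolution (64, 128, or 256), the final feature map stays small and GAP collapses it to one value per channel.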
The output of the convolutional blocks is the input of the following classifier block. The architecture of the classifier is described in Figure 4. It contains 4 layers: Global Average Pooling (GAP), dense (ReLU), dropout, and softmax layers.
In this study, we adopted a GAP layer instead of a fully connected layer, as reported in similar experiments [40]. The main advantage of the GAP layer is that it prevents over-fitting by reducing the number of parameters and the computational complexity of the network. The main difference between GAP and fully connected layers is that the output of a flatten layer must be fed to the fully connected layer to obtain the required features (Figure 5), whereas the output of the GAP layer already has the appropriate dimensions, so no fully connected layer is required. When fully connected layers are used, many parameters are added to the model, resulting in a more complex model with a higher probability of over-fitting. Conversely, the GAP layer has no parameters to optimize; therefore, the model has fewer parameters and a lower degree of complexity.
The feature extraction performed by the GAP layer is summarized in Figure 6: the GAP layer reduces a tensor of spatial dimension N × M × D to a tensor of dimension 1 × 1 × D.
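In code, GAP is simply a per-channel spatial mean; a minimal NumPy sketch (the function name is illustrative):

```python
import numpy as np

def global_average_pooling(feature_maps):
    """Average each of the D feature maps over its N x M spatial extent,
    collapsing an (N, M, D) tensor to a D-vector (i.e. 1 x 1 x D)."""
    return feature_maps.mean(axis=(0, 1))

maps = np.arange(2 * 2 * 3, dtype=float).reshape(2, 2, 3)
pooled = global_average_pooling(maps)  # shape (3,): one value per channel
```

Because the output length equals the channel count D regardless of N and M, no flatten layer or learned weights are needed between the last convolutional block and the classifier.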
The ReLU function in the classifier block performs mathematical operations (Equation (1)) on the input data, while the softmax function calculates the numerical value of the normalized probability for each neuron based on (Equation (2)) [41].
f(x) = x if x > 0, and f(x) = 0 otherwise  (1)
p_i = exp(a_i) / Σ_{j=1}^{n} exp(a_j)  (2)
where a_i is the input of the softmax function for node i (class i).
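Equations (1) and (2) translate directly into NumPy; the max-shift in the softmax below is a standard numerical-stability trick, not part of the paper's formula:

```python
import numpy as np

def relu(x):
    """Equation (1): f(x) = x if x > 0, else 0."""
    return np.maximum(x, 0.0)

def softmax(a):
    """Equation (2): p_i = exp(a_i) / sum_j exp(a_j).
    Subtracting max(a) leaves the result unchanged but avoids overflow."""
    e = np.exp(a - np.max(a))
    return e / e.sum()
```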
Figure 7 summarizes the classification model adopted to classify the leaf images. It includes a GAP layer, a dense layer, a dropout layer, and a softmax classifier. Three variants of the classifier block, with dense layers of 64, 128, and 256 neurons, were tested to select the model with the best performance.

2.5. Performance Metrics

A confusion matrix was utilized to evaluate the predictive performance of the test data after the classification. The confusion matrix is a table layout that allows for visualizing the performance of a supervised algorithm. Thus, the performance metrics, including accuracy, precision, sensitivity, specificity, and Area-Under-the-Curve (AUC) were measured based on the confusion matrix elements.
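These per-class metrics follow mechanically from the confusion matrix counts. A sketch of the computation (the function name is illustrative; AUC needs per-sample scores and is omitted):

```python
import numpy as np

def per_class_metrics(cm):
    """Accuracy, precision, sensitivity and specificity per class, derived
    from a confusion matrix cm (rows = true class, columns = predicted)."""
    cm = np.asarray(cm, dtype=float)
    total = cm.sum()
    tp = np.diag(cm)
    fp = cm.sum(axis=0) - tp      # predicted as this class but actually another
    fn = cm.sum(axis=1) - tp      # actually this class but predicted as another
    tn = total - tp - fp - fn
    return {
        "accuracy":    (tp + tn) / total,
        "precision":   tp / (tp + fp),
        "sensitivity": tp / (tp + fn),   # a.k.a. recall
        "specificity": tn / (tn + fp),
    }
```

For example, a two-class matrix [[9, 1], [2, 8]] gives class-0 sensitivity 9/10 and precision 9/11, matching the definitions used in Table 2.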

3. Results

A common way to visually assess a model's performance is to inspect the feature maps produced by its filters [42]. The proposed CNN model extracts low-level features such as edges, blobs, and corners in its initial layers, while high-level features such as textures and objects are identified in the deeper layers. Figure 8 summarizes how the proposed CNN model extracts low-level features from the images; these features contain efficient information that the model captures in its initial layers.
The results of feature extraction and the subsequent classification of the medicinal plant images by the adopted CNN are presented in the confusion matrices reported in Figure 9.
The proposed CNN model correctly classified the images of the medicinal plants with overall accuracy rates of 99.66%, 99.32%, and 99.45% for input images of 64 × 64, 128 × 128, and 256 × 256 pixels, respectively. According to the results, the 64 × 64 pixel configuration achieved the best performance, with an overall accuracy of 99.66%. In its confusion matrix, the Peppermint class had one misclassified image, confused with the Tulsi class; the Bael and Stevia classes had three misclassified images; and the Tulsi class had two misclassified images, confused with the Peppermint and Stevia classes. This could be attributed to the similarity of their shape features. Notably, the Lemon Balm leaves were classified correctly by the CNN model at all three resolutions, without any confusion (100% accuracy). The high similarity among the leaves of the medicinal plants caused the proposed model to misclassify several images of the other four species involved in the study; nevertheless, despite these similarities, the model predicted nearly all classifications correctly.
Figure 10 shows the CNN model's prediction accuracy and loss for the training and test leaf images at the three image sizes. Accuracy and loss values were obtained for each epoch, i.e., each full pass over the training data during which the weights are updated. The loss value measures how far the model's predictions are from the targets after each optimization iteration; in other words, it is the sum of the errors over the images, used to find the model's best weights. The downward trend of the training and testing curves shows that the proposed model performs very well in classifying the medicinal plant images. Convergence of the proposed CNN model was obtained after 100 epochs; similar models with hundreds of classes and millions of images can require close to 1000 epochs [43]. Incorrect adjustment of parameters such as the learning rate causes the model to over-fit and fail to reach convergence.
Table 2 summarizes the accuracy, precision, sensitivity, specificity, and AUC for the three image sizes, extracted from the confusion matrices of the proposed CNN. All metrics reach 100% for the Lemon Balm images. The highest average per-class accuracy (99.8%) was achieved at 64 × 64 pixels, while the average accuracy for 128 × 128 and 256 × 256 pixel images was 99.7%. The highest and lowest precision values, 99.6% and 99.2%, were obtained at 64 × 64 and 128 × 128 pixels, respectively. According to Table 2, the highest average per-class AUC, 99.7%, was obtained at 64 × 64 pixels; the values for 128 × 128 and 256 × 256 pixels were slightly lower, at 99.5% and 99.6%, respectively. The specificity was 99.8% for all three image sizes.

4. Discussion

According to the results, the proposed CNN algorithm proved effective in identifying medicinal plants with similar leaves, compared to models developed in previous studies for similar purposes.
Amuthalingeswaran et al. [44] implemented a DL model to identify medicinal plants. Their model was trained on 800 images of four types of medicinal plants and could identify medicinal plants in the field with 85% accuracy. Anubha Pearline et al. [45] identified different plants using DL methods and conventional ML algorithms. In the conventional approach, texture, shape, and color features were extracted using the LBP and Haralick algorithms, Hu moments, and various color channels, respectively; the extracted features were then classified by Linear Discriminant Analysis (LDA), Logistic Regression (LR), K-Nearest Neighbors (KNN), Classification and Regression Trees (CART), and Random Forest (RF). The best identification performance, 82.38%, was obtained with the RF algorithm, while the VGG16 network identified plants with 97.14% accuracy. Zhu et al. [46] proposed a two-way attention model based on a DL network to identify plants from the Flower 102 dataset, reaching a recognition accuracy of 97.2%. Muneer and Fati [47] recognized Malaysian herbs using an automated classification system: they extracted the shape and texture features of plant images and classified them with a DL model, achieving accuracy, precision, and sensitivity of 98%, 93%, and 85%, respectively. Moreover, Reddy et al. [48] proposed an optimized CNN model consisting of four convolutional layers followed by two fully connected layers and a softmax layer, focused on color images of leaves; the accuracy, precision, and sensitivity obtained were 97.6%, 93.4%, and 95.2%, respectively. Finally, Zhang et al. [49] proposed a 7-layer CNN to classify 32 plant species, employing the DA technique to increase the dataset size and achieving a 94.7% accuracy rate in plant identification.
The results of the study highlight that, unlike conventional ML methods, the proposed DL model automatically extracts the important features from medicinal plant leaves and classifies them with accuracy above 99%, so there is no need to extract features manually. Furthermore, the experimental results indicate that our approach outperformed previous studies in identifying medicinal plants, with precision, sensitivity, specificity, and AUC of 99.6%, 99.6%, 99.8%, and 99.7%, respectively.
Nonetheless, basic CNN models have been associated with challenges such as high computational cost, complexity, and long running times. To overcome these problems, we adopted two strategies: (1) increasing the number of images and reducing their size during image processing, and (2) replacing the fully connected layer with a GAP layer, which significantly decreased the number of parameters and the model complexity. These solutions proved to increase the model's accuracy and computational speed and to prevent over-fitting. It is also noteworthy that the prediction time of the proposed model on the collected medicinal plant leaf database was 40.81 s, lower than the 44.1 s reported for the same operation by Roopashree and Anitha [50].
According to the results obtained, the adopted method can be used to design an automated vision-based recognition system that identifies various herb plants with high accuracy and speed. Such an application can increase the public's interest in and use of medicinal herbs, supporting the demand for healthier and safer food. Furthermore, the model developed in this study will be tested in future work, with possible adjustments, to identify less common medicinal plants.

5. Conclusions

Identifying medicinal plants and separating them from other non-edible plants is essential in botany and the food industry. The traditional identification methods are time-consuming, complex, and require experienced, trained people. The automatic real-time vision-based system developed to identify widely used medicinal herbs with similar leaves has given positive results. The proposed method includes an improved CNN consisting of convolutional blocks and a classifier block; the classifier comprises a Global Average Pooling (GAP) layer, a dense layer, a dropout layer, and a softmax layer. Compared with the results of previous studies, this solution reduces the number of parameters and increases the speed and accuracy of the model. The proposed CNN model identified the medicinal plant images at three resolution levels, 64 × 64, 128 × 128, and 256 × 256 pixels, with overall accuracy rates of 99.66%, 99.32%, and 99.45%, respectively. Therefore, combining image processing with the proposed CNN algorithm is an efficient alternative to traditional methods. Future work will aim to improve the model's performance in classifying further species of medicinal plants, to confirm the effectiveness of the developed solution on a larger scale. The model will also be adopted to develop an intelligent mobile application for the real-time identification of medicinal plants.
The current study aimed to improve the automatic identification of medicinal plants, given their growing popularity and the increasing demand for artisanal and industrial uses. The proposed DL algorithm and image processing technique can therefore play a valuable role in plant science and industry for identifying and classifying various medicinal plants separately from other non-edible plants.

Author Contributions

Conceptualization, R.A.; methodology, R.A. and H.M.; software, R.A. and M.A.C.; validation, R.A. and H.M.; formal analysis, R.A. and M.A.C.; investigation, R.A. and E.C.; resources, M.A.C., A.D. and M.M.A.-A.; data curation, A.D. and M.M.A.-A.; writing—original draft preparation, R.A. and H.M.; input to different sections of the manuscript and contribution to data interpretation, E.C.; writing—review, editing and finalization of the manuscript, E.C.; visualization, R.A. and M.A.C.; supervision, R.A. and E.C.; project administration, R.A.; funding acquisition, E.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

We declare that we do not have any commercial or associative interest that represents a conflict of interest in connection with the submitted work.

Appendix A

The medicinal plants studied and their pertinent details.
Melissa officinalis L. (Lemon Balm): This medicinal plant belongs to the Lamiaceae family. It contains a variety of secondary metabolites that are very beneficial to human health [51,52]. In traditional medicine, this herb is used to treat headaches, mental disorders, rheumatism and hypersensitivity [53].
Stevia rebaudiana Bertoni (Stevia): This plant belongs to the Asteraceae family. Stevia can be a good sugar substitute for diabetics. It is also used to reduce blood glucose levels and blood pressure and to treat mental problems [54,55].
Mentha balsamea Wild (Peppermint): A multipurpose herb from the Lamiaceae family. In Iran, it is used as an antiviral, invigorating, stimulant and antifungal agent [56]. The oil extracted from Mentha balsamea is also used as a medicine to treat cancer, colds, sore throats, nausea, toothaches and muscle soreness [57].
Aegle marmelos (Bael): A fruit tree with purely medicinal properties. Its leaves are febrifugal and expectorant and are used for bleeding, diarrhea and intestinal disorders. It is also used to treat urinary problems, regulate heart rate and treat stomach ache [58,59].
Ocimum sanctum L. (Tulsi): This plant belongs to the Lamiaceae family. It grows mostly in warm regions and is used as an antiseptic, antiemetic, anti-flatulence and sedative agent, and in the treatment of gastrointestinal disorders, rheumatism and skin disorders [60,61].

References

  1. Naserifar, R.; Bahmani, M.; Abdi, J.; Abbaszadeh, S.; Nourmohammadi, G.A.; Rafieian-Kopaei, M. A review of the most important native medicinal plants of Iran effective on leishmaniasis according to Iranian ethnobotanical references. Int. J. Adv. Biotechnol. Res. 2017, 8, 1330–1336.
  2. Altemimi, A.; Lakhssassi, N.; Baharlouei, A.; Watson, D.G.; Lightfoot, D.A. Phytochemicals: Extraction, isolation, and identification of bioactive compounds from plant extracts. Plants 2017, 6, 42.
  3. Naeem, S.; Ali, A.; Chesneau, C.; Tahir, M.H.; Jamal, F.; Sherwani, R.A.K.; Ul Hassan, M. The classification of medicinal plant leaves based on multispectral and texture feature using machine learning approach. Agronomy 2021, 11, 263.
  4. Ozioma, E.O.J.; Chinwe, O.A.N. Herbal medicines in African traditional medicine. Herb. Med. 2019, 10, 191–214.
  5. Amenu, E. Use and Management of Medicinal Plants by Indigenous People of Ejaji Area (Chelya Woreda) West Shoa, Ethiopia: An Ethnobotanical Approach. Master’s Thesis, Addis Ababa University, Addis Ababa, Ethiopia, 2007.
  6. Hu, R.; Lin, C.; Xu, W.; Liu, Y.; Long, C. Ethnobotanical study on medicinal plants used by Mulam people in Guangxi, China. J. Ethnobiol. Ethnomed. 2020, 16, 40.
  7. Crini, G.; Lichtfouse, E.; Chanet, G.; Morin-Crini, N. Applications of hemp in textiles, paper industry, insulation and building materials, horticulture, animal nutrition, food and beverages, nutraceuticals, cosmetics and hygiene, medicine, agrochemistry, energy production and environment: A review. Environ. Chem. Lett. 2020, 18, 1451–1476.
  8. Nabavi, S.F.; Di Lorenzo, A.; Izadi, M.; Sobarzo-Sánchez, E.; Daglia, M.; Nabavi, S.M. Antibacterial effects of cinnamon: From farm to food, cosmetic and pharmaceutical industries. Nutrients 2015, 7, 7729–7748.
  9. Chukwuma, E.C.; Soladoye, M.O.; Feyisola, R.T. Traditional medicine and the future of medicinal plants in Nigeria. J. Med. Plants Stud. 2015, 3, 23–29.
  10. Sladojevic, S.; Arsenovic, M.; Anderla, A.; Culibrk, D.; Stefanovic, D. Deep neural networks based recognition of plant diseases by leaf image classification. Comput. Intell. Neurosci. 2016, 2016, 3289801.
  11. Singh, V.; Misra, A.K. Detection of plant leaf diseases using image segmentation and soft computing techniques. Inf. Process. Agric. 2017, 4, 41–49.
  12. Wäldchen, J.; Mäder, P. Plant species identification using computer vision techniques: A systematic literature review. Arch. Comput. Methods Eng. 2018, 25, 507–543.
  13. Azadnia, R.; Kheiralipour, K. Recognition of leaves of different medicinal plant species using a robust image processing algorithm and artificial neural networks classifier. J. Appl. Res. Med. Aromat. Plants 2021, 25, 100327.
  14. Wang, J.; Mo, W.; Wu, Y.; Xu, X.; Li, Y.; Ye, J.; Lai, X. Combined Channel Attention and Spatial Attention Module Network for Chinese Herbal Slices Automated Recognition. Front. Neurosci. 2022, 16, 920820.
  15. Mukherjee, G.; Tudu, B.; Chatterjee, A. A convolutional neural network-driven computer vision system toward identification of species and maturity stage of medicinal leaves: Case studies with Neem, Tulsi and Kalmegh leaves. Soft Comput. 2021, 25, 14119–14138.
  16. Bisen, D. Deep convolutional neural network based plant species recognition through features of leaf. Multimed. Tools Appl. 2021, 80, 6443–6456.
  17. Azadnia, R.; Jahanbakhshi, A.; Rashidi, S. Developing an automated monitoring system for fast and accurate prediction of soil texture using an image-based deep learning network and machine vision system. Measurement 2022, 190, 110669.
  18. Apolo-Apolo, O.E.; Pérez-Ruiz, M.; Martínez-Guanter, J.; Valente, J. A cloud-based environment for generating yield estimation maps from apple orchards using UAV imagery and a deep learning technique. Front. Plant Sci. 2020, 11, 1086.
  19. Ferentinos, K.P. Deep learning models for plant disease detection and diagnosis. Comput. Electron. Agric. 2018, 145, 311–318. [Google Scholar] [CrossRef]
  20. Ziyaee, P.; Farzand Ahmadi, V.; Bazyar, P.; Cavallo, E. Comparison of Different Image Processing Methods for Segregation of Peanut (Arachis hypogaea L.) Seeds Infected by Aflatoxin-Producing Fungi. Agronomy 2021, 11, 873. [Google Scholar] [CrossRef]
  21. Yamashita, R.; Nishio, M.; Do, R.K.G.; Togashi, K. Convolutional neural networks: An overview and application in radiology. Insights Imaging 2018, 9, 611–629. [Google Scholar] [CrossRef] [Green Version]
  22. Ghazi, M.M.; Yanikoglu, B.; Aptoula, E. Plant identification using deep neural networks via optimization of transfer learning parameters. Neurocomputing 2017, 235, 228–235. [Google Scholar] [CrossRef]
  23. Russel, N.S.; Selvaraj, A. Leaf species and disease classification using multiscale parallel deep CNN architecture. Neural Comput. Appl. 2022, 34, 19217–19237. [Google Scholar] [CrossRef]
  24. Bodhwani, V.; Acharjya, D.; Bodhwani, U. Deep Residual Networks for Plant Identification. Procedia Comput. Sci. 2019, 152, 186–194. [Google Scholar] [CrossRef]
  25. Lee, S.H.; Chan, C.S.; Mayo, S.J.; Remagnino, P. How deep learning extracts and learns leaf features for plant classification. Pattern Recognit. 2017, 71, 1–13. [Google Scholar] [CrossRef] [Green Version]
  26. Arsenovic, M.; Karanovic, M.; Sladojevic, S.; Anderla, A.; Stefanovic, D. Solving Current Limitations of Deep Learning Based Approaches for Plant Disease Detection. Symmetry 2019, 11, 939. [Google Scholar] [CrossRef] [Green Version]
  27. Bedi, P.; Gole, P. Plant disease detection using hybrid model based on convolutional autoencoder and convolutional neural network. Artif. Intell. Agric. 2021, 5, 90–101. [Google Scholar] [CrossRef]
  28. Mohanty, S.P.; Hughes, D.P.; Salathé, M. Using Deep Learning for Image-Based Plant Disease Detection. Front. Plant Sci. 2016, 7, 1419. [Google Scholar] [CrossRef] [Green Version]
  29. Barbedo, J.G.A. Plant disease identification from individual lesions and spots using deep learning. Biosyst. Eng. 2019, 180, 96–107. [Google Scholar] [CrossRef]
  30. Nasiri, A.; Taheri-Garavand, A.; Fanourakis, D.; Zhang, Y.-D.; Nikoloudakis, N. Automated Grapevine Cultivar Identification via Leaf Imaging and Deep Convolutional Neural Networks: A Proof-of-Concept Study Employing Primary Iranian Varieties. Plants 2021, 10, 1628. [Google Scholar] [CrossRef]
  31. Paulson, A.; Ravishankar, S. AI Based Indigenous Medicinal Plant Identification. In Proceedings of the 2020 Advanced Computing and Communication Technologies for High Performance Applications (ACCTHPA), Cochin, India, 2–4 July 2020; pp. 57–63. [Google Scholar] [CrossRef]
  32. Grinblat, G.L.; Uzal, L.C.; Larese, M.G.; Granitto, P.M. Deep learning for plant identification using vein morphological patterns. Comput. Electron. Agric. 2016, 127, 418–424. [Google Scholar] [CrossRef] [Green Version]
  33. Hu, J.; Chen, Z.; Yang, M.; Zhang, R.; Cui, Y. A Multiscale Fusion Convolutional Neural Network for Plant Leaf Recognition. IEEE Signal Process. Lett. 2018, 25, 853–857. [Google Scholar] [CrossRef]
  34. Vala, H.J.; Baxi, A. A review on Otsu image segmentation algorithm. Int. J. Adv. Res. Comput. Eng. Technol. IJARCET 2013, 2, 387–389. [Google Scholar]
  35. Jahanbakhshi, A.; Momeny, M.; Mahmoudi, M.; Zhang, Y.-D. Classification of sour lemons based on apparent defects using stochastic pooling mechanism in deep convolutional neural networks. Sci. Hortic. 2020, 263, 109133. [Google Scholar] [CrossRef]
  36. Shorten, C.; Khoshgoftaar, T.M. A survey on Image Data Augmentation for Deep Learning. J. Big Data 2019, 6, 60. [Google Scholar] [CrossRef]
  37. Ma, J.; Jiang, X.; Fan, A.; Jiang, J.; Yan, J. Image matching from handcrafted to deep features: A survey. Int. J. Comput. Vis. 2021, 129, 23–79. [Google Scholar] [CrossRef]
  38. Farooq, M.; Sazonov, E. Feature Extraction Using Deep Learning for Food Type Recognition. Bioinform. Biomed. Eng. 2017, 10208, 464–472. [Google Scholar] [CrossRef]
  39. Gu, J.; Wang, Z.; Kuen, J.; Ma, L.; Shahroudy, A.; Shuai, B.; Liu, T.; Wang, X.; Wang, G.; Cai, J.; et al. Recent advances in convolutional neural networks. Pattern Recognit. 2018, 77, 354–377. [Google Scholar] [CrossRef] [Green Version]
  40. Chen, Y.; Peng, G.; Xie, C.; Zhang, W.; Li, C.; Liu, S. ACDIN: Bridging the gap between artificial and real bearing damages for bearing fault diagnosis. Neurocomputing 2018, 294, 61–71. [Google Scholar] [CrossRef]
  41. Tang, Y. Deep learning using linear support vector machines. arXiv 2013, arXiv:1306.0239. [Google Scholar] [CrossRef]
  42. Nasiri, A.; Taheri-Garavand, A.; Zhang, Y.-D. Image-based deep learning automated sorting of date fruit. Postharvest Biol. Technol. 2019, 153, 133–141. [Google Scholar] [CrossRef]
  43. Azizi, A.; Gilandeh, Y.A.; Mesri-Gundoshmian, T.; Saleh-Bigdeli, A.A.; Moghaddam, H.A. Classification of soil aggregates: A novel approach based on deep learning. Soil Tillage Res. 2020, 199, 104586. [Google Scholar] [CrossRef]
  44. Amuthalingeswaran, C.; Sivakumar, M.; Renuga, P.; Alexpandi, S.; Elamathi, J.; Hari, S.S. Identification of Medicinal Plant’s and Their Usage by Using Deep Learning. In Proceedings of the 2019 3rd International Conference on Trends in Electronics and Informatics (ICOEI), Tirunelveli, India, 23–25 April 2019; pp. 886–890. [Google Scholar] [CrossRef]
  45. Pearline, S.A.; Kumar, V.S.; Harini, S. A study on plant recognition using conventional image processing and deep learning approaches. J. Intell. Fuzzy Syst. 2019, 36, 1997–2004. [Google Scholar] [CrossRef]
  46. Zhu, Y.; Sun, W.; Cao, X.; Wang, C.; Wu, D.; Yang, Y.; Ye, N. TA-CNN: Two-way attention models in deep convolutional neural network for plant recognition. Neurocomputing 2019, 365, 191–200. [Google Scholar] [CrossRef]
  47. Muneer, A.; Fati, S.M. Efficient and automated herbs classification approach based on shape and texture features using deep learning. IEEE Access 2020, 8, 196747–196764. [Google Scholar] [CrossRef]
  48. Reddy, S.R.; Varma, G.P.; Davuluri, R.L. Optimized convolutional neural network model for plant species identification from leaf images using computer vision. Int. J. Speech Technol. 2021, 1–28. [Google Scholar] [CrossRef]
  49. Zhang, C.; Zhou, P.; Li, C.; Liu, L. A convolutional neural network for leaves recognition using data augmentation. In Proceedings of the 2015 IEEE International Conference on Computer and Information Technology; Ubiquitous Computing and Communications; Dependable, Autonomic and Secure Computing; Pervasive Intelligence and Computing, Liverpool, UK, 26–28 October 2015; pp. 2143–2150. [Google Scholar] [CrossRef]
  50. Roopashree, S.; Anitha, J. DeepHerb: A Vision Based System for Medicinal Plants Using Xception Features. IEEE Access 2021, 9, 135927–135941. [Google Scholar] [CrossRef]
  51. Mimica-Dukic, N.; Bozin, B.; Sokovic, M.; Simin, N. Antimicrobial and Antioxidant Activities of Melissa officinalis L. (Lamiaceae) Essential Oil. J. Agric. Food Chem. 2004, 52, 2485–2489. [Google Scholar] [CrossRef]
  52. Tagashira, M.; Ohtake, Y. A New Antioxidative 1,3-Benzodioxole from Melissa officinalis. Planta Med. 1998, 64, 555–558. [Google Scholar] [CrossRef]
  53. Reiter, M.; Brandt, W. Relaxant effects on tracheal and ileal smooth muscles of the guinea pig. Arzneim. Forsch. 1985, 35, 408–414. [Google Scholar]
  54. Rincon, F.; Mayer, S.A. Intracerebral Hemorrhage: Clinical Overview and Pathophysiologic Concepts. Transl. Stroke Res. 2012, 3, 10–24. [Google Scholar] [CrossRef]
  55. Madan, S.; Ahmad, S.; Singh, G.N.; Kohli, K.; Kumar, Y.; Singh, R.; Garg, M. Stevia rebaudiana (Bert.) Bertoni—A review. Indian J. Nat. Prod. Resour. 2010, 1, 267–286. [Google Scholar]
  56. Herro, E.; Jacob, S.E. Mentha piperita (Peppermint). Dermatitis 2010, 21, 327–329. [Google Scholar] [CrossRef] [PubMed]
  57. Mahendran, G.; Rahman, L.U. Ethnomedicinal, phytochemical and pharmacological updates on Peppermint (Mentha × piperita L.)—A review. Phytother. Res. 2020, 34, 2088–2139. [Google Scholar] [CrossRef] [PubMed]
  58. Bhardwaj, R.L.; Nandal, U. Nutritional and therapeutic potential of bael (Aegle marmelos Corr.) fruit juice: A review. Nutr. Food Sci. 2015, 45, 895–919. [Google Scholar] [CrossRef]
  59. Baliga, M.S.; Bhat, H.P.; Joseph, N.; Fazal, F. Phytochemistry and medicinal uses of the bael fruit (Aegle marmelos Correa): A concise review. Food Res. Int. 2011, 44, 1768–1775. [Google Scholar] [CrossRef]
  60. Bhattacharyya, P.; Bishayee, A. Ocimum sanctum Linn. (Tulsi): An ethnomedicinal plant for the prevention and treatment of cancer. Anti-Cancer Drugs 2013, 24, 659–666. [Google Scholar] [CrossRef]
  61. Kumar, A.; Rahal, A.; Chakraborty, S.; Tiwari, R.; Latheef, S.K.; Dhama, K. Ocimum sanctum (Tulsi): A miracle herb and boon to medical science—A Review. Int. J. Agron. Plant Prod. 2013, 4, 1580–1589. [Google Scholar]
Figure 2. The DA method using image rotation and color manipulation.
Figure 3. The architecture of the CNN model for image recognition.
Figure 4. A block diagram of the proposed CNN model.
Figure 5. Differences in the performance of GAP vs. fully connected layers in image classification.
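The contrast drawn in Figure 5 can be illustrated with a minimal NumPy sketch: Global Average Pooling (GAP) collapses each feature map to a single value, while a Flatten + fully connected head keeps every spatial position, so its input size grows with image resolution. The tensor shape below is illustrative, not taken from the paper's architecture.

```python
import numpy as np

# Hypothetical feature-map tensor from the last convolutional block:
# (height, width, channels). Shape chosen for illustration only.
feature_maps = np.arange(2 * 2 * 3, dtype=float).reshape(2, 2, 3)

# Global Average Pooling collapses each channel's spatial grid to its mean,
# producing one value per channel regardless of the input resolution.
gap_vector = feature_maps.mean(axis=(0, 1))  # shape: (3,)

# A Flatten + fully connected head instead keeps every spatial position,
# so the following dense layer's weight matrix scales with image size
# (here 2 * 2 * 3 = 12 inputs instead of 3).
flat_vector = feature_maps.reshape(-1)       # shape: (12,)

print(gap_vector.shape, flat_vector.shape)   # (3,) (12,)
```

This is why a GAP head has far fewer trainable parameters than a flatten-based head and is less prone to overfitting, which is consistent with the classifier block (GAP, dense, dropout, softmax) described in the abstract.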
Figure 6. The ability of the GAP layer to identify medicinal plants.
Figure 7. A summary of the proposed model for the identification of medicinal plant species.
Figure 8. The first activation layer of each channel related to the test images.
Figure 9. Predicted distributions of the five classes of medicinal plant images for image sizes of (a) 64 × 64, (b) 128 × 128, and (c) 256 × 256 pixels.
Figure 10. Accuracy and loss performance in the training and testing phases for the leaf image sizes of (a) 64 × 64, (b) 128 × 128, and (c) 256 × 256 pixels obtained through the proposed CNN model.
Table 1. Data splitting for training and testing in the proposed CNN.
Set      Data Splitting (%)    Original Images    Augmented Data    Total Images
Train    80                    600                10,200            10,800
Test     20                    150                2,550             2,700
Total    100                   750                12,750            13,500
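The arithmetic behind Table 1 can be checked with a short sketch. The 80/20 ratio and the image counts come from the table itself; the per-image augmentation factor of 17 is inferred from 10,200 / 600 and is not stated explicitly in this section.

```python
# Dataset split from Table 1: 750 original images, 80/20 train/test.
original_total = 750
train_ratio = 0.8

train_orig = int(original_total * train_ratio)   # 600 original training images
test_orig = original_total - train_orig          # 150 original test images

# Inferred augmentation factor: 10,200 augmented / 600 originals = 17 per image.
aug_per_image = 17

train_total = train_orig + train_orig * aug_per_image  # originals + augmented
test_total = test_orig + test_orig * aug_per_image

print(train_total, test_total)  # 10800 2700
```

Note that augmentation is applied within each split, so augmented variants of a training image never leak into the test set.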
Table 2. The performance results of medicinal plant classification obtained from the proposed CNN model.
Image size 64 × 64
Plant Samples        Accuracy    Precision    Sensitivity    Specificity    AUC
Lemon Balm           1.000       1.000        1.000          1.000          1.000
Peppermint           0.998       0.997        0.996          0.999          0.997
Bael                 0.998       0.994        0.998          0.998          0.998
Stevia               0.998       0.995        0.993          0.999          0.996
Tulsi                0.997       0.994        0.996          0.998          0.997
Average per class    0.998       0.996        0.996          0.998          0.997

Image size 128 × 128
Plant Samples        Accuracy    Precision    Sensitivity    Specificity    AUC
Lemon Balm           1.000       1.000        1.000          1.000          1.000
Peppermint           0.997       0.993        0.991          0.998          0.994
Bael                 0.995       0.983        0.992          0.995          0.993
Stevia               0.996       0.991        0.991          0.998          0.994
Tulsi                0.997       0.990        0.996          0.999          0.997
Average per class    0.997       0.992        0.994          0.998          0.995

Image size 256 × 256
Plant Samples        Accuracy    Precision    Sensitivity    Specificity    AUC
Lemon Balm           1.000       1.000        1.000          1.000          1.000
Peppermint           0.997       0.991        0.995          0.998          0.996
Bael                 0.996       0.996        0.990          0.999          0.994
Stevia               0.996       0.989        0.991          0.997          0.994
Tulsi                0.997       0.993        0.996          0.998          0.997
Average per class    0.997       0.994        0.994          0.998          0.996
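The per-class indicators reported in Table 2 are conventionally computed from one-vs-rest confusion counts. The sketch below shows the standard definitions; the counts used in the example are hypothetical and not taken from the paper.

```python
# Standard one-vs-rest classification metrics for a single class.
# tp/fp/fn/tn are the confusion counts when that class is the "positive" label.
def class_metrics(tp, fp, fn, tn):
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    sensitivity = tp / (tp + fn)   # recall / true positive rate
    specificity = tn / (tn + fp)   # true negative rate
    return accuracy, precision, sensitivity, specificity

# Hypothetical counts for one class out of five (illustrative only):
acc, prec, sens, spec = class_metrics(tp=48, fp=1, fn=2, tn=199)
print(round(acc, 3), round(prec, 3), round(sens, 3), round(spec, 3))
```

Averaging these per-class values over the five classes yields the "Average per class" row of Table 2; AUC is obtained separately from the ROC curve of each class's softmax score.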
Azadnia, R.; Al-Amidi, M.M.; Mohammadi, H.; Cifci, M.A.; Daryab, A.; Cavallo, E. An AI Based Approach for Medicinal Plant Identification Using Deep CNN Based on Global Average Pooling. Agronomy 2022, 12, 2723. https://doi.org/10.3390/agronomy12112723