1. Introduction
Medicinal plants are gaining popularity in the pharmaceutical industry as they are less likely to have adverse effects and are less expensive than modern pharmaceuticals. According to the World Health Organization, there are over 21,000 plant species that can potentially be utilized for medicinal purposes. It is also reported that 80% of people around the world use medicinal plants for the treatment of their primary health ailments [
1].
Systematic identification and naming of plants are often carried out by professional botanists (taxonomists), who have deep knowledge of plant taxonomy [
2,
3]. Manually identifying plant species is a challenging and time-consuming process. Furthermore, the process is prone to errors, as every aspect of the identification is entirely based on human perception [
4]. There is also a dearth of these plant identification subject matter experts, which gives rise to a situation of “taxonomic impediment” [
5]. It is, therefore, important to develop an effective and reliable method for the accurate identification of these valuable plants.
Machine learning (ML) is a sub-field of artificial intelligence that solves various intricate problems in different application domains with minimal human intervention [
6]. Deep learning (DL), inspired by the structure and functionality of biological neurons, is a sub-field of ML. DL algorithms demand a large amount of training data to achieve good identification results. Owing to advancements in hardware technology and the extensive availability of data, DL has gained popularity in tasks such as natural language processing, game playing, and image processing, achieving performance on patterns that would otherwise be difficult for humans to discern [
7].
Various research studies have revealed that researchers are showing a great interest in the automatic identification and classification of plant species using plant image specimens and by employing ML and DL techniques [
8,
9,
10,
11,
12]. This automatic identification is carried out in different stages, viz. (a) image acquisition, (b) image preprocessing, (c) feature extraction, and (d) classification. Plant species identification can be performed using different parts of plants, such as flowers, bark, fruits, and leaves, or using the entire plant image. Researchers prefer leaf images for the identification process because leaves are easily identifiable parts of the plant and are usually available for most of the year, whereas flowers and fruits appear only in a particular season. More than one hundred studies have used plant leaf images for the automatic identification process [
4].
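The four stages above can be sketched as a minimal pipeline. Every stage implementation below is a toy stand-in (a random image, nearest-neighbour resizing, hand-crafted colour statistics, and a random linear classifier), not the method used in this study:

```python
import numpy as np

def acquire_image(h=256, w=256):
    # Stand-in for reading a leaf photograph from disk or a camera.
    rng = np.random.default_rng(0)
    return rng.integers(0, 256, size=(h, w, 3), dtype=np.uint8)

def preprocess(img, size=64):
    # Crude nearest-neighbour resize plus [0, 1] normalisation.
    step_h, step_w = img.shape[0] // size, img.shape[1] // size
    small = img[::step_h, ::step_w][:size, :size]
    return small.astype(np.float32) / 255.0

def extract_features(img):
    # Toy features: per-channel means and standard deviations.
    return np.concatenate([img.mean(axis=(0, 1)), img.std(axis=(0, 1))])

def classify(features, n_classes=30):
    # Stand-in linear classifier with fixed random weights (untrained).
    rng = np.random.default_rng(1)
    W = rng.normal(size=(n_classes, features.shape[0]))
    return int(np.argmax(W @ features))

# Acquisition -> preprocessing -> feature extraction -> classification.
label = classify(extract_features(preprocess(acquire_image())))
```

In a real system, each stage would be replaced by its learned counterpart (e.g. a CNN backbone for feature extraction), but the data flow between the four stages is the same.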
In this research, transfer learning and ensemble learning approaches were employed to classify medicinal plant leaf images into thirty classes. The transfer learning approach uses the existing knowledge gained from one task to solve problems of a related nature [
13]. This approach is extensively used in image-classification problems, particularly with convolutional neural network models pre-trained on large datasets to categorize objects across 1000 classes. Transfer learning suits medicinal plant image classification because it facilitates the migration of acquired features and parameters, reducing the need for extensive training from scratch.
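The core mechanic of transfer learning, freezing the pre-trained feature extractor and training only a new classification head, can be illustrated framework-free. The NumPy sketch below is a toy stand-in (a fixed random projection plays the role of the frozen backbone), not the VGG/DenseNet setup used in this study:

```python
import numpy as np

rng = np.random.default_rng(42)
n, d, c = 200, 16, 3              # samples, feature dim, classes

# Stand-in for a frozen pre-trained backbone: a fixed projection whose
# weights are never updated. Only the head below gets trained.
backbone = rng.normal(size=(8, d))
x_raw = rng.normal(size=(n, 8))
feats = np.tanh(x_raw @ backbone)  # "pretrained" features, held fixed

# Synthetic labels that are learnable from the frozen features.
true_W = rng.normal(size=(d, c))
y = np.argmax(feats @ true_W, axis=1)

# New classification head trained from scratch on the target task.
W = np.zeros((d, c))
for _ in range(300):
    logits = feats @ W
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    grad = feats.T @ (p - np.eye(c)[y]) / n  # softmax cross-entropy grad
    W -= 0.5 * grad                           # only the head is updated

acc = (np.argmax(feats @ W, axis=1) == y).mean()
```

Fine-tuning, by contrast, would also allow some backbone weights to change, typically with a small learning rate.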
The main aim of ensemble learning is to improve the overall performance of classifiers by combining the predictions of individual neural network models. Ensemble learning has recently gained popularity in image classification using deep learning [
14,
15,
16]. We trained VGG16, VGG19, and DenseNet201 on the Mendeley Medicinal Leaf Dataset and evaluated the performance of each component model. After this individual evaluation, an ensemble learning approach using averaging and weighted-averaging strategies was employed to make the final prediction. The main contributions of this work are as follows:
An automated system was developed to reduce the reliance on human experts when it comes to identifying different medicinal plant species. Classically, identifying plant species often requires experts who possess knowledge of botanical characteristics, but this system aims to lessen that dependency by using technology to perform the identification task.
In this work, we tuned hyperparameters such as the learning rate, batch size, and regularization settings to ensure the model performed optimally for this classification task.
Transfer learning and fine-tuning were used to extract meaningful and informative features from images of medicinal plant leaves. Instead of training a deep learning model from scratch, the pre-trained models VGG16, VGG19, and DenseNet201 were used as starting points and leveraged to enhance the ability to identify and extract relevant information from medicinal plant images for species identification and related purposes.
A comparative analysis was also performed against several previous state-of-the-art approaches for medicinal plant identification on the same dataset. By leveraging the complementary features of VGG19 and DenseNet201, the ensemble approach improved robustness and balanced the performance, making it a potent solution for medicinal plant identification.
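The averaging and weighted-averaging strategies used for the final prediction can be sketched as follows; the random probability arrays stand in for the softmax outputs of the trained component models, and the weights are illustrative rather than the tuned values:

```python
import numpy as np

# Softmax outputs of three hypothetical component models for 4 test
# images over 5 classes (stand-ins for VGG16, VGG19, and DenseNet201
# predictions over the 30 medicinal plant classes).
rng = np.random.default_rng(7)
preds = rng.dirichlet(np.ones(5), size=(3, 4))  # (models, samples, classes)

# Plain averaging ensemble: mean of the class probabilities.
avg = preds.mean(axis=0)
avg_labels = avg.argmax(axis=1)

# Weighted averaging: stronger models (e.g. higher validation accuracy)
# receive larger weights; these example weights sum to 1.
w = np.array([0.2, 0.3, 0.5])
weighted = np.tensordot(w, preds, axes=1)       # (samples, classes)
weighted_labels = weighted.argmax(axis=1)
```

Because both combinations are convex mixtures of probability vectors, the ensembled scores remain valid probability distributions, and the final label is simply the argmax over classes.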
In this section, we provide an overview of medicinal plants, highlighting their numerous benefits and the challenges associated with their identification. We also discuss existing methods used for identifying medicinal plants.
The upcoming sections explore related research studies, shedding light on the advancements made in utilizing deep learning techniques for plant identification from image data. Following the literature review, we describe the methodology employed and present the results obtained from our experiments. Furthermore, we discuss the significance and relevance of our proposed approach in addressing the existing challenges in medicinal plant identification.
3. Methods and Materials
3.4. DenseNet201
DenseNet201 [
27], a deep convolutional neural network (CNN) introduced in 2017, is known for its dense connectivity pattern. With 201 layers, it employs dense blocks in which each layer is directly connected to every subsequent layer, promoting feature reuse and gradient flow. Transition layers are inserted to control the number of parameters and reduce the spatial dimensions, while batch normalization and ReLU activation enhance training. DenseNet201 achieves high accuracy in image-recognition tasks by capturing intricate patterns and hierarchical representations. Pre-trained on ImageNet, it can serve as a feature extractor or be fine-tuned for transfer learning, making it a powerful CNN model for image classification and related applications.
DenseNet201 has various advantages owing to its densely connected 201-layer design. These include feature reuse, alleviation of the vanishing gradient problem, an even distribution of features across the network, and a reduced number of parameters [
33]. Let us consider an image $x_0$ being fed into a neural network comprising $S$ layers, where the $s$-th layer applies a nonlinear transformation denoted $H_s(\cdot)$. In this case, conventional skip connections within the feed-forward network (see Figure 2) allow bypassing the nonlinear transformation using an identity function, represented as follows:

$$x_s = H_s(x_{s-1}) + x_{s-1}.$$
Such identity connections offer a distinct benefit in that the gradient can flow directly from the early layers to the last layer. Dense networks, on the other hand, utilize direct connections from each layer to all subsequent layers to maximize the information available within each layer. In this case, the $s$-th layer receives the feature maps of all preceding layers as input, as follows:

$$x_s = H_s([x_0, x_1, \ldots, x_{s-1}]),$$

where $[\cdot]$ denotes the channel-wise concatenation of the feature maps.
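The concatenation-based connectivity just described can be sketched in NumPy. Here $H_s$ is a toy nonlinear transform (a random channel projection plus ReLU) rather than the real BN–ReLU–conv layers; the point is that with $k_0$ input channels and growth rate $k$, the $s$-th layer sees $k_0 + s \cdot k$ channels, which is why the feature-map count grows linearly through a dense block:

```python
import numpy as np

rng = np.random.default_rng(3)
k0, k, n_layers = 64, 32, 6     # input channels, growth rate, layers
h = w = 8                       # toy spatial resolution

def H(x, out_ch):
    # Toy stand-in for BN -> ReLU -> 3x3 conv: a random channel
    # projection followed by ReLU.
    proj = rng.normal(size=(x.shape[-1], out_ch))
    return np.maximum(x @ proj, 0.0)

features = [rng.normal(size=(h, w, k0))]        # x_0
for s in range(n_layers):
    concat = np.concatenate(features, axis=-1)  # [x_0, ..., x_{s-1}]
    assert concat.shape[-1] == k0 + s * k       # linear channel growth
    features.append(H(concat, k))               # each layer adds k maps
```

After the block, a transition layer (1 × 1 convolution plus pooling) would shrink both the channel count and the spatial dimensions before the next dense block.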
DenseNet’s architectural design downsamples the feature maps between dense blocks. The dense blocks are separated by transition layers, each comprising batch normalization (BN), a 1 × 1 convolutional layer (CONV), and an average pooling layer. The transition layers progressively reduce the feature maps passed on to the subsequent dense block, making the network less intricate. In an effort to enhance network utility, the average pooling layer was converted into a 2 × 2 max pooling layer, and each convolutional layer is preceded by batch normalization. The network’s growth rate, denoted by the hyperparameter k, allows DenseNet to yield state-of-the-art results; however, the hyperparameters should be adjusted depending on the complexity and nonlinearity of the data characteristics [
37]. The conventional pooling layers were removed, and the proposed detection layers were fully connected to the classification layers for detection purposes. DenseNet264 encompasses an even more complex network design than the 201-layer network; however, due to its smaller footprint, the 201-layer structure was deemed suitable for the plant leaf detection task. Despite a smaller growth rate, DenseNet201 still performs excellently because its design reuses feature maps as a network-wide mechanism. The architecture of DenseNet201 is depicted in Figure 2.
DenseNet201, as a deep learning architecture, possesses distinct advantages and disadvantages. Firstly, it effectively addresses the vanishing gradient problem by establishing direct connections between layers, facilitating smoother gradient flow during training and improving gradient-based optimization. Secondly, its dense connections distribute features evenly across layers, enhancing the model’s overall representational capability. Additionally, DenseNet201 enables the reuse of feature maps at various depths, promoting efficient information flow and potentially strengthening the network’s learning capacity. Lastly, it reduces the number of parameters compared to conventional deep learning architectures by eliminating redundant feature learning in individual layers, resulting in a more efficient utilization of the model’s parameters.
Deep learning methods applied to single leaf images are not a replacement for approaches dealing with segmentation and identification in cluttered environments. Instead, they serve the specific purpose of simplifying data collection, enabling in-depth species-specific analysis, and providing a foundational step for more complex plant identification tasks. These methods are valuable and contribute to developing more robust, comprehensive plant identification systems.
Author Contributions
Conceptualization, M.A.H., T.A. and A.M.U.D.K.; methodology, A.M.U.D.K. and T.A.; validation, M.A.H.; formal analysis, M.A.H.; investigation, M.A.H., T.A. and M.N.; data curation, M.A.H. and T.A.; writing—original draft, A.M.U.D.K. and M.A.H.; writing—review and editing, A.M.U.D.K., M.N. and M.A.H.; visualization, A.M.U.D.K., M.A.H., T.A. and M.N.; supervision, T.A. and A.M.U.D.K.; project administration, T.A.; funding acquisition, M.N. All authors have read and agreed to the published version of the manuscript.
Funding
This research received no external funding.
Institutional Review Board Statement
Not applicable.
Informed Consent Statement
Not applicable.
Data Availability Statement
The data presented in this study are available in this article.
Acknowledgments
The authors would like to acknowledge Baba Ghulam Shah Badshah University for their valuable support. Also, the authors would like to thank the Center for Artificial Intelligence Research and Optimisation, Torrens University Australia, for supporting the APC of this publication.
Conflicts of Interest
The authors declare no conflict of interest.
References
- Introduction and Importance of Medicinal Plants and Herbs. Available online: https://www.nhp.gov.in/introduction-and-importance-of-medicinal-plants-and-herbs_mtl (accessed on 9 November 2023).
- Azlah, M.A.F.; Chua, L.S.; Rahmad, F.R.; Abdullah, F.I.; Wan Alwi, S.R. Review on techniques for plant leaf classification and recognition. Computers 2019, 8, 77. [Google Scholar] [CrossRef]
- Tripathi, K.; Khan, F.A.; Khanday, A.M.U.D.; Nisa, K.U. The classification of medical and botanical data through majority voting using artificial neural network. Int. J. Inf. Technol. 2023, 15, 3271–3283. [Google Scholar] [CrossRef]
- Wäldchen, J.; Mäder, P. Plant species identification using computer vision techniques: A systematic literature review. Arch. Comput. Methods Eng. 2018, 25, 507–543. [Google Scholar]
- De Carvalho, M.R.; Bockmann, F.A.; Amorim, D.S.; Brandão, C.R.F.; de Vivo, M.; de Figueiredo, J.L.; Britski, H.A.; de Pinna, M.C.; Menezes, N.A.; Marques, F.P.; et al. Taxonomic impediment or impediment to taxonomy? A commentary on systematics and the cybertaxonomic-automation paradigm. Evol. Biol. 2007, 34, 140–143. [Google Scholar] [CrossRef]
- Rabani, S.T.; Khanday, A.M.U.D.; Khan, Q.R.; Hajam, U.A.; Imran, A.S.; Kastrati, Z. Detecting suicidality on social media: Machine learning at rescue. Egypt. Inform. J. 2023, 24, 291–302. [Google Scholar]
- Sarker, I.H. Deep learning: A comprehensive overview on techniques, taxonomy, applications and research directions. SN Comput. Sci. 2021, 2, 420. [Google Scholar] [CrossRef] [PubMed]
- Husin, Z.; Shakaff, A.; Aziz, A.; Farook, R.; Jaafar, M.; Hashim, U.; Harun, A. Embedded portable device for herb leaves recognition using image processing techniques and neural network algorithm. Comput. Electron. Agric. 2012, 89, 18–29. [Google Scholar] [CrossRef]
- Gokhale, A.; Babar, S.; Gawade, S.; Jadhav, S. Identification of medicinal plant using image processing and machine learning. In Proceedings of the Applied Computer Vision and Image Processing; Proceedings of ICCET 2020; Springer: Berlin/Heidelberg, Germany, 2020; Volume 1, pp. 272–282. [Google Scholar]
- Puri, D.; Kumar, A.; Virmani, J.; Kriti. Classification of leaves of medicinal plants using laws texture features. Int. J. Inf. Technol. 2019, 14, 1–12. [Google Scholar] [CrossRef]
- Roopashree, S.; Anitha, J. DeepHerb: A vision based system for medicinal plants using xception features. IEEE Access 2021, 9, 135927–135941. [Google Scholar] [CrossRef]
- Sachar, S.; Kumar, A. Deep ensemble learning for automatic medicinal leaf identification. Int. J. Inf. Technol. 2022, 14, 3089–3097. [Google Scholar] [CrossRef]
- Pan, S.J.; Yang, Q. A survey on transfer learning. IEEE Trans. Knowl. Data Eng. 2009, 22, 1345–1359. [Google Scholar] [CrossRef]
- Nakata, N.; Siina, T. Ensemble Learning of Multiple Models Using Deep Learning for Multiclass Classification of Ultrasound Images of Hepatic Masses. Bioengineering 2023, 10, 69. [Google Scholar] [CrossRef] [PubMed]
- Kuzinkovas, D.; Clement, S. The detection of COVID-19 in chest x-rays using ensemble cnn techniques. Information 2023, 14, 370. [Google Scholar] [CrossRef]
- D’Angelo, M.; Nanni, L. Deep Learning-Based Human Chromosome Classification: Data Augmentation and Ensemble. Information 2023, 14, 389. [Google Scholar] [CrossRef]
- Nazarenko, D.; Kharyuk, P.; Oseledets, I.; Rodin, I.; Shpigun, O. Machine learning for LC–MS medicinal plants identification. Chemom. Intell. Lab. Syst. 2016, 156, 174–180. [Google Scholar] [CrossRef]
- Kumar, N.; Belhumeur, P.N.; Biswas, A.; Jacobs, D.W.; Kress, W.J.; Lopez, I.C.; Soares, J.V. Leafsnap: A computer vision system for automatic plant species identification. In Proceedings of the Computer Vision–ECCV 2012: 12th European Conference on Computer Vision, Part II 12, Florence, Italy, 7–13 October 2012; Springer: Berlin/Heidelberg, Germany, 2012; pp. 502–516. [Google Scholar]
- Kadir, A.; Nugroho, L.E.; Susanto, A.; Santosa, P.I. Leaf classification using shape, color, and texture features. arXiv 2013, arXiv:1401.4447. [Google Scholar]
- Wu, S.G.; Bao, F.S.; Xu, E.Y.; Wang, Y.X.; Chang, Y.F.; Xiang, Q.L. A leaf recognition algorithm for plant classification using probabilistic neural network. In Proceedings of the 2007 International Symposium on Signal Processing and Information Technology, Giza, Egypt, 15–18 December 2007; pp. 11–16. [Google Scholar]
- Sabu, A.; Sreekumar, K.; Nair, R.R. Recognition of Ayurvedic medicinal plants from leaves: A computer vision approach. In Proceedings of the 2017 4th International Conference on Image Information Processing (ICIIP), Shimla, India, 21–23 December 2017; pp. 1–5. [Google Scholar]
- Sundara Sobitha Raj, A.P.; Vajravelu, S.K. DDLA: Dual deep learning architecture for classification of plant species. IET Image Process. 2019, 13, 2176–2182. [Google Scholar] [CrossRef]
- Barré, P.; Stöver, B.C.; Müller, K.F.; Steinhage, V. LeafNet: A computer vision system for automatic plant species identification. Ecol. Inform. 2017, 40, 50–56. [Google Scholar] [CrossRef]
- Geetharamani, G.; Pandian, A. Identification of plant leaf diseases using a nine-layer deep convolutional neural network. Comput. Electr. Eng. 2019, 76, 323–338. [Google Scholar]
- Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556. [Google Scholar]
- Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 1–9. [Google Scholar]
- Huang, G.; Liu, Z.; Van Der Maaten, L.; Weinberger, K.Q. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 4700–4708. [Google Scholar]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
- Duong-Trung, N.; Quach, L.D.; Nguyen, M.H.; Nguyen, C.N. A combination of transfer learning and deep learning for medicinal plant classification. In Proceedings of the 2019 4th International Conference on Intelligent Information Technology, New York, NY, USA, 16–17 November 2019; pp. 83–90. [Google Scholar]
- Prashar, N.; Sangal, A. Plant disease detection using deep learning (convolutional neural networks). In Proceedings of the 2nd International Conference on Image Processing and Capsule Networks: ICIPCN 2021 2, Bidar, India, 16–17 December 2016; Springer: Berlin/Heidelberg, Germany, 2022; pp. 635–649. [Google Scholar]
- Khanday, A.M.U.D.; Bhushan, B.; Jhaveri, R.H.; Khan, Q.R.; Raut, R.; Rabani, S.T. Nnpcov19: Artificial neural network-based propaganda identification on social media in COVID-19 era. Mob. Inf. Syst. 2022, 2022, 1–10. [Google Scholar] [CrossRef]
- Patil, S.S.; Patil, S.H.; Azfar, F.N.; Pawar, A.M.; Kumar, S.; Patel, I. Medicinal plant identification using convolutional neural network. In Proceedings of the AIP Conference Proceedings; AIP Publishing: Long Island, NY, USA, 2023; Volume 2890. [Google Scholar]
- Akhtar, M.J.; Mahum, R.; Butt, F.S.; Amin, R.; El-Sherbeeny, A.M.; Lee, S.M.; Shaikh, S. A Robust Framework for Object Detection in a Traffic Surveillance System. Electronics 2022, 11, 3425. [Google Scholar] [CrossRef]
- Dadgar, S.; Neshat, M. Comparative hybrid deep convolutional learning framework with transfer learning for diagnosis of lung cancer. In Proceedings of the International Conference on Soft Computing and Pattern Recognition; Springer: Berlin/Heidelberg, Germany, 2022; pp. 296–305. [Google Scholar]
- Leonidas, L.A.; Jie, Y. Ship classification based on improved convolutional neural network architecture for intelligent transport systems. Information 2021, 12, 302. [Google Scholar] [CrossRef]
- Ahsan, M.; Naz, S.; Ahmad, R.; Ehsan, H.; Sikandar, A. A deep learning approach for diabetic foot ulcer classification and recognition. Information 2023, 14, 36. [Google Scholar] [CrossRef]
- Neshat, M.; Lee, S.; Momin, M.M.; Truong, B.; van der Werf, J.H.; Lee, S.H. An effective hyper-parameter can increase the prediction accuracy in a single-step genetic evaluation. Front. Genet. 2023, 14, 1104906. [Google Scholar] [CrossRef]
- Al-Kaltakchi, M.T.; Mohammad, A.S.; Woo, W.L. Ensemble System of Deep Neural Networks for Single-Channel Audio Separation. Information 2023, 14, 352. [Google Scholar] [CrossRef]
- Neshat, M.; Ahmedb, M.; Askarid, H.; Thilakaratnee, M.; Mirjalilia, S. Hybrid Inception Architecture with Residual Connection: Fine-tuned Inception-ResNet Deep Learning Model for Lung Inflammation Diagnosis from Chest Radiographs. arXiv 2023, arXiv:2310.02591. [Google Scholar]
- Alibrahim, H.; Ludwig, S.A. Hyperparameter optimization: Comparing genetic algorithm against grid search and bayesian optimization. In Proceedings of the 2021 IEEE Congress on Evolutionary Computation (CEC), Krakow, Poland, 28 June–1 July 2021; pp. 1551–1559. [Google Scholar]
- Medicinal Leaf Dataset—Mendeley Data. Available online: https://data.mendeley.com/datasets/nnytj2v3n5/1 (accessed on 9 December 2023).
- Patil, S.S.; Patil, S.H.; Pawar, A.M.; Patil, N.S.; Rao, G.R. Automatic Classification of Medicinal Plants Using State-Of-The-Art Pre-Trained Neural Networks. J. Adv. Zool. 2022, 43, 80–88. [Google Scholar] [CrossRef]
- Ayumi, V.; Ermatita, E.; Abdiansah, A.; Noprisson, H.; Jumaryadi, Y.; Purba, M.; Utami, M.; Putra, E.D. Transfer Learning for Medicinal Plant Leaves Recognition: A Comparison with and without a Fine-Tuning Strategy. Int. J. Adv. Comput. Sci. Appl. 2022, 13, 138–144. [Google Scholar] [CrossRef]
- Almazaydeh, L.; Alsalameen, R.; Elleithy, K. Herbal leaf recognition using mask-region convolutional neural network (mask R-CNN). J. Theor. Appl. Inf. Technol. 2022, 100, 3664–3671. [Google Scholar]
- Ghosh, S.; Singh, A.; Kumar, S. Identification of medicinal plant using hybrid transfer learning technique. Indones. J. Electr. Eng. Comput. Sci. 2023, 31, 1605–1615. [Google Scholar] [CrossRef]
Figure 1.
CNN architecture (feature-extraction block).
Figure 2.
DenseNet201 architecture (feature-extraction block). The star symbol (*) denotes multiplication.
Figure 3.
The process flow of the proposed research study.
Figure 4.
An overview of the proposed ensemble model framework.
Figure 5.
A sample of the medicinal leaf dataset images, illustrating the diverse range of plant images incorporated for classification purposes: (a) Alpinia Galanga (Rasna), (b) Amaranthus Viridis (Arive-Dantu), (c) Artocarpus Heterophyllus (jackfruit), (d) Azadirachta Indica (neem), (e) Basella Alba (Basale), (f) Brassica Juncea (Indian mustard), (g) Carissa Carandas (Karanda), (h) Citrus Limon (lemon), (i) Ficus Auriculata (Roxburgh fig), (j) Ficus Religiosa (peepal tree), (k) Jasminum (jasmine), (l) Mangifera Indica (mango).
Figure 6.
Accuracy comparison of VGG16, VGG19, and DenseNet201.
Figure 7.
The training and validation loss and accuracy of DenseNet201.
Figure 8.
The confusion matrix of the DenseNet201 component model.
Figure 9.
Performance comparison between VGG16, VGG19, and DenseNet201 based on precision, recall, and F1-score.
Figure 10.
Accuracy comparison between ensemble models: VGG19 + DenseNet201, VGG16 + VGG19, VGG16 + DenseNet201, and VGG16 + VGG19 + DenseNet201.
Figure 11.
The confusion matrix of the average ensemble of VGG19+DenseNet201.
Table 1.
The tuned hyperparameters of the proposed classification model.
Parameter | Description | Value |
---|---|---|
Number of Layers | Composed of dense blocks, transition blocks, and a final classification layer | 201 |
Dense Blocks | For the dense blocks, the number of layers in these blocks was (6, 12, 48, 32) | 4 |
Growth Rate | Sets the number of feature maps added to each layer in the DenseNet | 32 |
Learning Rate | Step size at each iteration while moving toward a minimum of a loss function | 0.00001 |
Batch Size | Number of samples contributing to training | 32 |
Activation Function | Introduces nonlinearity into the network | ReLU |
Dropout Factor | Disregards certain nodes in a layer at random during training to prevent overfitting | 0.5 |
Table 2.
Training, test, and validation accuracy of VGG16, VGG19, and DenseNet201.
Deep Neural Network | Training Accuracy (%) | Validation Accuracy (%) | Test Accuracy (%) |
---|---|---|---|
VGG16 | 96.19 | 89.7 | 93.67 |
VGG19 | 95.41 | 87.94 | 92.26 |
DenseNet201 | 100 | 94.64 | 98.93 |
Table 3.
Precision, recall, and F1-score obtained by the component deep neural networks.
Deep Neural Network | Precision (%) | Recall (%) | F1-Score (%) |
---|---|---|---|
VGG16 | 94.02 | 93.67 | 93.84 |
VGG19 | 92.67 | 92.26 | 92.46 |
DenseNet201 | 99.01 | 98.94 | 98.97 |
Table 4.
Accuracy outcomes of different ensembled deep neural networks using averaging ensemble approach.
Ensemble Deep Neural Networks | Training Accuracy (%) | Validation Accuracy (%) | Test Accuracy (%) |
---|---|---|---|
VGG19 + DenseNet201 | 100 | 95.52 | 99.12 |
VGG16 + VGG19 | 99.04 | 90.65 | 96.66 |
VGG16 + DenseNet201 | 100 | 95.52 | 98.76 |
VGG16 + VGG19 + DenseNet201 | 99.90 | 93.90 | 98.41 |
Table 5.
Accuracy outcomes of different ensembled deep neural networks using weighted average ensemble.
Ensemble Deep Neural Networks | Training Accuracy (%) | Validation Accuracy (%) | Test Accuracy (%) |
---|---|---|---|
VGG16 + VGG19 + DenseNet201 | 99.61 | 92.68 | 97.89 |
VGG19 + DenseNet201 | 99.23 | 91.86 | 96.83 |
VGG19 + VGG16 | 98.75 | 89.02 | 96.66 |
VGG16 + DenseNet201 | 99.80 | 91.05 | 98.06 |
Table 6.
Comparative analysis of the proposed study with the previously state-of-the-art approaches for the identification of medicinal plant images.
Reference | Technique | Medicinal Leaf Dataset | Accuracy |
---|---|---|---|
[42] | MobileNetV1 | Mendeley Medicinal Leaf Dataset | 98% |
[43] | MobileNetV2 | Mendeley Medicinal Leaf Dataset | 81.82% |
[44] | Mask RCNN | Mendeley Medicinal Leaf Dataset | 95.7% |
[45] | Hybrid Transfer | Mendeley Medicinal Leaf Dataset | 95.25% |
Proposed Approach | Ensemble Learning | Mendeley Medicinal Leaf Dataset | 99.12% |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).