Article

Wild Chrysanthemums Core Collection: Studies on Leaf Identification

1 Department of Plant Biotechnology, Sejong University, Seoul 05006, Korea
2 Department of Information and Communication Engineering, Convergence Engineering for Intelligent Drone, Sejong University, Seoul 05006, Korea
3 Department of Computer Science and Engineering, Sejong University, Seoul 05006, Korea
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Horticulturae 2022, 8(9), 839; https://doi.org/10.3390/horticulturae8090839
Submission received: 30 June 2022 / Revised: 5 September 2022 / Accepted: 8 September 2022 / Published: 13 September 2022
(This article belongs to the Special Issue Smart Horticulture, Plant Secondary Compounds and Their Applications)

Abstract

Wild chrysanthemums represent valuable germplasm collections, with diverse leaf forms, flower colors, aromas, and secondary compounds. Identifying wild chrysanthemums from their leaves is critical for farm owners, breeders, and researchers, whether or not the plants are in their flowering period. However, chrysanthemum identification studies to date have mostly concerned flower color recognition. This study contributes a leaf classification method that rapidly recognizes varieties of wild chrysanthemums with a support vector machine (SVM). The principal contributions of this article are: (1) an assembled and verified chrysanthemum leaf dataset; (2) an adjusted SVM model that handles the complex backgrounds present in smartphone pictures by combining color and shape features, with classification results more attractive than those of the original process. As our study shows, the proposed method is viable for real smartphone pictures and can support further investigation of chrysanthemum identification.

1. Introduction

Chrysanthemum (Chrysanthemum sp.), which belongs to the Asteraceae family, is a floricultural crop with high economic value, second only to roses in the floral trade market [1]. Wild chrysanthemums are attractive because of their flower color, leaf shape and type, and their secondary compounds, which are the main characteristics for discerning distinctions between floral crops in horticultural studies [1,2]. Chrysanthemum boreale, a diploid species, displays small yellow flowers and shows a variety of morphological characteristics in its natural habitats in Korea [2]. Wild C. boreale is present in numerous habitats, including Gangwon-do, Gyeonggi-do, Gyeongsangbuk-do, Gyeongsangnam-do, and Jeollabuk-do in Korea [2]. C. indicum consists of diploid, tetraploid, or hexaploid populations, presents small yellow ray florets, and has been used in treatments of hypertension, inflammation, and respiratory disorders [2]. C. indicum, which has been studied for its flowering time, yields, and bioactive compounds, shows diverse leaf shapes in wild habitats [2]. In Korea, wild C. indicum is located in Incheon, Jeollabuk-do, Gyeongsangbuk-do, Chungcheongbuk-do, and Jeju-do. C. makinoi, a diploid wild species, has been mentioned as one of the progenitors of the cultivated hexaploid chrysanthemum varieties that are presently produced worldwide [2]. Wild C. makinoi grows in the sedimentary rock regions of Gangwon-do and Daegu in Korea. C. zawadskii is the major and most popular wild chrysanthemum in Korea, with white ray florets or white-purple flowers [2]. The ploidy level of C. zawadskii ranges from diploid to octoploid. In Korea, C. zawadskii is found in Gangwon-do, Gyeonggi-do, Gyeongsangbuk-do, and Gyeongsangnam-do. Aster spathulifolius is a perennial herb of the Asteraceae family that grows wild in the seaside regions of Korea [2]. It has vivid white or light blue ray florets and thick leaves. A. spathulifolius is diploid. In Korea, it is found in Ulleungdo, Busan, and Jeju-do.
Chrysanthemum classification systems have been organized by the International Union for the Protection of New Varieties of Plants (UPOV). However, assessing morphological characteristics is time-consuming, and only a few classification systems for chrysanthemums have been investigated. Leaf shape, which varies among wild chrysanthemums, is one of the main classification characteristics [3]. However, manual phenotype identification has some drawbacks: first, it is not automatic, since floral experts or owners must inspect the plants by hand in the wild; second, chrysanthemums have many leaf shapes, and assigning individual leaves of the same type by eye is error-prone. Therefore, proper leaf shape identification tools require time, investigation, and application.
Machine learning (ML) is a golden key that allows computers to learn various features to perform a given task automatically [4,5]. Previously, various automatic recognition systems were built with different ML algorithms [6], including Support Vector Machines (SVM) applied to spectral vegetation data [7]. One framework combined image processing methods and ML to classify five different plant leaf diseases [8]. These automatic recognition techniques achieved identification accuracies varying from 83% to 94%. Computer-based image analysis can be used to extract morphological features, such as those of leaves, for botanical identification [9]; traditionally, such taxonomic problems required extensively trained specialists using visual identification as the primary method. One study used 40 leaves each from 30 tree and shrub species belonging to 19 different families to compare scanner and mobile phone pictures based on color, shape, and texture [10]. All devices were compared using three ML algorithms (adaptive boosting (AdaBoost), random forest, and SVM) and an artificial neural network model (deep learning) [10]. Computer vision identified species efficiently (higher than 93%), with similar results for mobile phones and scanners [10]; SVM, random forest, and deep learning performed better than AdaBoost [10]. Smartphone-assisted diagnosis has also been applied to identify crop diseases with deep learning methods. Based on a public dataset of 54,306 images of plant leaves collected under controlled conditions, showing both diseased and healthy samples, a deep convolutional neural network was trained to identify 14 crop species and 26 diseases (or absence thereof). The trained model obtained an accuracy of 99.35% on a held-out test set. Deep learning models thus benefit from increasingly large, publicly available image datasets, which presents a clear path toward smartphone-assisted crop disease diagnosis on a massive global scale [11].
In this study, we classify five chrysanthemum species by their five leaf shapes using an SVM tool. We hope that this tool is useful for identifying wild chrysanthemum species even when flowers are absent. We emphasize that the shape recognition tool for chrysanthemum leaves might be applied to chrysanthemum image identification, rapidly providing predicted cultivar characteristics that correspond to the cultivar image system. Moreover, tools for input data are needed to upgrade to real-time identification in the next iteration. In addition, the tool could be used to recognize individual species to improve germplasm material for chrysanthemum breeding.

2. Methodology

Figure 1 describes the five main processes of the leaf dataset, which include: (1) data collection, (2) data partitioning, (3) feature extraction, (4) feature engineering, and (5) prediction model.

2.1. Plant Material and Data Collection

We collected four wild chrysanthemums, including C. boreale, C. indicum, C. makinoi, and the common C. zawadskii (Figure 2). A. spathulifolius was used as a wild type belonging to the Asteraceae family (Figure 2). In this study, dataset collection was performed by greenhouse shooting with an LG Q52 smartphone with a main quad camera: 48 MP, f/1.8 (wide), 1/2.0″, 0.8 µm, PDAF; 5 MP, f/2.2, 115° (ultrawide), 1/5.0″, 1.12 µm; 2 MP, f/2.4 (macro); and 2 MP, f/2.4 (depth). The dataset contains a total of 1317 images, all taken under the same shooting conditions at 9 a.m. in a greenhouse at the Chrysanthemum Research Institute of Sejong University, Korea.

2.2. Data Partitioning

A total of 1317 images covering 5 types of leaves were collected at the end of the data collection process. A detailed description of the dataset used in this study is given in Table 1. The original dataset, which contains 1068 images of the 5 examined chrysanthemum species, is divided into a training set (80%) and a validation set (20%). Finally, a testing set of an additional 249 images is collected to evaluate model performance.

2.3. Feature Extraction

Before feature extraction, various preprocessing methods are applied to reduce noise and improve the feature extraction process. First, a Gaussian blur is applied to the original images to smooth them. Otsu's method is then applied to threshold the blurred images automatically [12]. A morphological closing operation is conducted in order to fill the small holes that may appear after thresholding. Finally, the shape boundary is extracted from the contours.
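To make the thresholding step concrete, the following is a minimal NumPy sketch of Otsu's threshold selection, which picks the gray level that maximizes between-class variance; the blur, closing, and contour steps are typically delegated to an image library such as OpenCV, and the toy image below is hypothetical:

```python
import numpy as np

def otsu_threshold(gray):
    """Return the threshold maximizing between-class variance (Otsu's method)."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()   # class probabilities
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * prob[:t]).sum() / w0        # background mean
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1   # foreground mean
        var = w0 * w1 * (mu0 - mu1) ** 2                  # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# toy bimodal "image": dark background vs. bright leaf pixels
img = np.array([[20, 25, 30, 200],
                [22, 28, 210, 220],
                [24, 205, 215, 225],
                [26, 27, 29, 230]], dtype=np.uint8)
t = otsu_threshold(img)
mask = img >= t   # binary leaf mask
```

The returned threshold lands in the gap between the two intensity clusters, separating leaf from background.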
By investigating the dataset, we conclude that it is challenging to perform leaf classification using the raw image alone because some types of leaves appear identical at first glance to the human eye (C. indicum and C. makinoi). Therefore, additional features, such as shape, color, and texture, need to be extracted from the original images to support the classification process. In total, 17 features are extracted for the training process. Each feature is described in detail below.

2.3.1. Shape Features

Shape features are important indicators that can be used to differentiate between species. This section extracts various shape-related features, such as physiological width, length, area, perimeter, aspect ratio, rectangularity, and circularity, from the extracted contours. These shape features are explained in Table 2.
Even though leaves from two different species can have the same physiological width/length, other shape information, such as perimeter and area, can be used to differentiate them.
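As an illustration, the three derived descriptors can be computed directly from the measured width, length, area, and perimeter; the function name and values below are a hypothetical sketch, not the authors' code:

```python
def shape_features(width, length, area, perimeter):
    """Derived shape descriptors computed from basic leaf measurements."""
    return {
        "aspect_ratio": width / length,              # w / l
        "rectangularity": area / (width * length),   # A / (w * l)
        "circularity": perimeter ** 2 / area,        # P^2 / A
    }

# toy measurements for one leaf contour (illustrative numbers)
feats = shape_features(width=2.0, length=4.0, area=6.0, perimeter=12.0)
```

A long narrow leaf yields a small aspect ratio and a large circularity value, while a round leaf approaches the circularity of a perfect circle.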

2.3.2. Color Features

Color is an important feature that can distinguish objects that have very similar geometric properties but different colors, and vice versa. Therefore, this section extracts various color features from the most common RGB color space, which contains red, green, and blue components with values ranging from 0 to 255. Initially, the individual red, green, and blue color components are separated from the original image. Each component's mean and standard deviation are then computed and used as the color features.
At the end of this process, a total of six color features are extracted: mean red, mean green, mean blue, standard deviation red, standard deviation green, and standard deviation blue.
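A minimal sketch of this channel-statistics step, assuming an H × W × 3 RGB array; the function and feature names are illustrative:

```python
import numpy as np

def color_features(rgb):
    """Mean and standard deviation of each channel of an H x W x 3 RGB image."""
    feats = {}
    for i, name in enumerate(("red", "green", "blue")):
        channel = rgb[:, :, i].astype(float)
        feats["mean_" + name] = float(channel.mean())
        feats["std_" + name] = float(channel.std())
    return feats

# toy image: uniform red channel, zero green and blue
img = np.zeros((2, 2, 3), dtype=np.uint8)
img[:, :, 0] = 10
feats = color_features(img)
```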

2.3.3. Texture Features

The Zernike and Haralick features [13] were first extracted using the Mahotas image processing library to compute the texture features used in this study. Texture features, which describe the surface characteristics and appearance of an object in an image, are crucial for many computer vision tasks. They can be extracted using various approaches, such as structural, statistical, and model-based techniques [14]. The most common method is the Gray Level Co-occurrence Matrix (GLCM), which offers 13 statistical measures of the spatial relationships of pixels in an image [15]. Various important textural features can be computed from the GLCM to expose details about the image content.
Table 3 describes four textural features that are essential for leaf classification in this study.
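To make the GLCM idea concrete, here is a small NumPy sketch that builds a normalized co-occurrence matrix for one pixel offset and computes two of the statistics in Table 3; it is an illustration under simplified assumptions (tiny image, two gray levels), not the Mahotas implementation used in the study:

```python
import numpy as np

def glcm(gray, levels, dx=1, dy=0):
    """Normalized gray-level co-occurrence matrix for one pixel offset (dx, dy)."""
    m = np.zeros((levels, levels))
    h, w = gray.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[gray[y, x], gray[y + dy, x + dx]] += 1  # count pixel pairs
    return m / m.sum()

def contrast(p):
    """Sum of p_ij * (i - j)^2 over the co-occurrence matrix."""
    i, j = np.indices(p.shape)
    return (p * (i - j) ** 2).sum()

def entropy(p):
    """-sum of p_ij * ln p_ij over nonzero entries."""
    nz = p[p > 0]
    return -(nz * np.log(nz)).sum()

# toy 2-level image
g = np.array([[0, 0, 1],
              [1, 1, 0]])
p = glcm(g, levels=2)
```

For this toy image each of the four co-occurrence cells gets probability 0.25, so the contrast is 0.5 and the entropy is ln 4.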

2.4. Feature Engineering

Seventeen individual features were collected for each image from the dataset after the feature extraction process. However, there was a huge difference in the value ranges of the features. Therefore, StandardScaler, a crucial feature engineering technique, was used to standardize each extracted feature to a mean of 0 and a standard deviation of 1.
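The standardization step can be sketched as follows; this is a hand-rolled equivalent of scikit-learn's StandardScaler, and in practice the mean and standard deviation should be fitted on the training set only and reused on validation and test data to avoid leakage:

```python
import numpy as np

def standardize(X):
    """Rescale each feature column to mean 0 and standard deviation 1."""
    mu = X.mean(axis=0)
    sigma = X.std(axis=0)
    return (X - mu) / sigma

# two samples, two features with very different ranges
X = np.array([[1.0, 100.0],
              [3.0, 300.0]])
Z = standardize(X)
```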

2.5. Model Description

A support vector machine (SVM) is a supervised learning algorithm mostly used for classification, which has been proven to fit training datasets well and to accurately classify unseen data [16]. SVM takes training samples and produces an optimal hyperplane that efficiently separates the training samples into different classes and can be used to classify new data points. In two dimensions, the hyperplane is a simple line.
As explained in the previous section, 17 features are extracted for each image, which increases the complexity and leads to a nonlinearity problem. Therefore, the kernel trick is introduced to SVM to handle the nonlinear dataset by mapping it to a higher-dimensional space. Common kernel functions are linear, polynomial, radial basis function (RBF), and sigmoid. This study implements the RBF kernel because it is localized and has a finite response along the complete x-axis [17]. The equation for the RBF kernel is given below.
F(x_i, x_j) = exp(−γ ‖x_i − x_j‖²)
where ‖x_i − x_j‖² is the squared Euclidean distance between x_i and x_j, and γ represents the Gamma parameter.
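The RBF kernel equation translates directly into code; a minimal sketch:

```python
import numpy as np

def rbf_kernel(xi, xj, gamma):
    """F(x_i, x_j) = exp(-gamma * ||x_i - x_j||^2)."""
    return np.exp(-gamma * np.sum((xi - xj) ** 2))

a = np.zeros(2)
b = np.ones(2)
s_same = rbf_kernel(a, a, gamma=0.1)  # identical points: kernel value 1
s_far = rbf_kernel(a, b, gamma=0.1)   # decays with squared distance
```

The kernel equals 1 for identical points and decays toward 0 as the squared distance grows, with γ controlling how quickly the response falls off.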

3. Experimental Results and Discussion

The SVM model has some hyperparameters, which must be determined before training because a set of optimal parameters can significantly improve the model’s performance.
In order to determine the optimal hyperparameter values, a grid search technique is first implemented on all possible hyperparameter combinations. After that, each set of hyperparameters is used to train the SVM model in order to find the one that helps the model obtain the highest performance. Finally, a 5-fold cross-validation, which randomly splits the training data into five non-overlapping subsets of equal size, is implemented to further improve the model’s robustness. For each iteration, 4 subsets are utilized for training and the remaining subset is applied to evaluate the model. The final output is the mean value of the five folds.
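The combined grid search and 5-fold cross-validation procedure described above can be sketched as follows; `train_and_score` is a hypothetical stand-in for fitting and scoring the SVM on one fold split (in practice scikit-learn's GridSearchCV performs the same loop):

```python
import itertools
import numpy as np

def kfold_indices(n, k=5, seed=0):
    """Randomly split n sample indices into k non-overlapping folds."""
    idx = np.random.default_rng(seed).permutation(n)
    return np.array_split(idx, k)

def grid_search(train_and_score, n_samples, grid, k=5):
    """Evaluate every hyperparameter combination with k-fold cross-validation."""
    folds = kfold_indices(n_samples, k)
    best_params, best_score = None, -np.inf
    for values in itertools.product(*grid.values()):
        params = dict(zip(grid.keys(), values))
        scores = []
        for i in range(k):
            val_idx = folds[i]
            train_idx = np.concatenate([folds[j] for j in range(k) if j != i])
            scores.append(train_and_score(params, train_idx, val_idx))
        mean = np.mean(scores)  # final score is the mean over the k folds
        if mean > best_score:
            best_score, best_params = mean, params

    return best_params, best_score

folds = kfold_indices(10, 5)
# dummy scorer that simply prefers the larger gamma, for illustration only
best_params, best_score = grid_search(
    train_and_score=lambda params, train_idx, val_idx: params["gamma"],
    n_samples=10,
    grid={"gamma": [1e-3, 1e-1], "C": [1, 10]},
)
```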
Table 4 illustrates the value ranges for each hyperparameter and the optimized parameter value after performing the grid search approach.

3.1. Training Results

Figure 3 shows the training and validation scores of the SVM model for different values of the kernel hyperparameter γ. For low values of γ (10⁻⁶, 10⁻⁵), both the training and validation scores stay low at about 0.24, which indicates that the model is underfitted. The training and validation scores increase gradually as γ increases and peak at 0.986.

3.2. Testing Results

In this section, the model performance is tested on a manually collected testing set that was not used during the training process.
Figure 4 visualizes the confusion matrix between the true and predicted labels for each leaf type. Overall, the model showed a high rate of classification accuracy of over 80% on A. spathulifolius, C. boreale, C. indicum, and C. zawadskii. However, it displayed poor performance for the C. makinoi class with an accuracy of 57%. It incorrectly predicted 15 C. makinoi images as A. spathulifolius and 7 C. makinoi images as C. boreale, which may be due to the similarities in shape and color between the C. makinoi leaf and those of the C. boreale and A. spathulifolius leaves.
We then visualize the model performance for two different scenarios, including the frontside and backside of the leaves (Figure 5). The results prove that the model performed well on A. spathulifolius for both scenarios, most likely because the leaf shape of A. spathulifolius is different from other types. Conversely, the model predicted the backside of the C. zawadskii and C. boreale significantly better than the frontside with a confidence rate of 90.3% and 62.6%, respectively. For the C. makinoi, the backside shows higher confidence than the frontside at 69%. Finally, for the C. indicum, the model showed higher confidence of 90% on the frontside compared to 63.4% on the backside.
In roses, a Convolutional Neural Network (CNN) was used to identify and classify flower characteristics [18]. In work on flower identification, ten species (Anthurium, Bougainvillea, Dianthus, Euphorbia, Ixora, Jatropha, Petunia, Phlox, Periwinkle, and Tecoma) were classified by their flower characteristics using a Faster Region-based Convolutional Neural Network (Faster-RCNN) and a Single Shot Detector (SSD) [19]. Furthermore, CNNs have been applied to classify 43 different plant flowers in smartphone pictures with up to 90% accuracy [20].
Chrysanthemum identification is an important technique to provide the exact recognized chrysanthemum individual species. There are some tools for chrysanthemum identification, such as self-incompatibility [21], morphological traits [22,23,24], molecular identification [1,25,26,27,28], and deep-learning identification [29,30,31,32,33].
For plant leaf feature extraction, one approach used both healthy and dead leaves to identify plant leaves with a CNN; when using the Hue, Saturation, Value (HSV) color model, the proposed model reached almost 98% accuracy [34]. A smartphone application was developed to identify four kinds of herb, fruit, and vegetable plants available in Sri Lanka using leaf features such as shape, texture, and color [35]. Five machine learning algorithms, SVM, Multilayer Perceptron, Random Forest, K-Nearest Neighbors, and Decision Tree, achieved 85.82%, 82.88%, 80.85%, 75.45%, and 64.39% accuracy, respectively [35]. SVM and the Multilayer Perceptron exhibited satisfactory performance according to the results [35].
However, chrysanthemum leaf identification has not received enough attention from the research community. Our study applies a suitable tool for wild chrysanthemum identification, and our results demonstrate solid performance.

4. Conclusions

In this study, we classified five wild chrysanthemum leaf-shape types using a dataset collected with a handheld smartphone. The confidence percentage for A. spathulifolius was the highest for both scenarios. For the remaining four wild chrysanthemums, differences between the frontside and backside were used to identify the leaves. Overall, we find that our model is suitable for wild chrysanthemum leaf identification. As this is the first study of chrysanthemum leaf identification, we plan to extend the proposed model to identify over 200 wild chrysanthemum individuals. In the future, we hope more attention will be given to the development of leaf identification systems on smartphone devices because these devices are useful and easily accessible to farm owners, breeders, and researchers.

Author Contributions

T.K.N.: Data curation, Methodology, Visualization, Written and Revised Manuscript; L.M.D.: Formal analysis, Investigation, Written, Edited and Reviewed Manuscript; H.-K.S.: Conceptualization; H.M.: Conceptualization, Funding acquisition, Validation; S.J.L.: Data curation; J.H.L.: Conceptualization, Funding acquisition, Validation, Supervision of the project. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the project “Establishment of infrastructure for efficient management of clonal resources at the national seed cluster of central bank and sub-bank”, funded by the Rural Development Administration (RDA) (Project No. PJ0166632022) and by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (Project No. 2020R1A6A1A03038540).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

We would like to thank the cooperation between three departments: Plant Biotechnology; Information and Communication Engineering, Convergence Engineering for Intelligent Drone; and Computer Science and Engineering in Sejong University.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Nguyen, T.K.; Lim, J.-H. Tools for Chrysanthemum genetic research and breeding: Is genotyping-by-sequencing (GBS) the best approach? Hortic. Environ. Biotechnol. 2019, 60, 625–635.
2. Wang, Y.; Jin Hee, L.; Jae, A.J.; Won Hee, K.; Ki-Byung, L.; Raisa Aone, M.C.; Yoon-Jung, H. Analysis of ploidy levels of Korean Wild Asteraceae species using chromosome counting. Flower Res. J. 2019, 27, 278–284.
3. Xiong, J.; Yu, D.; Liu, S.; Shu, L.; Wang, X.; Liu, Z. A review of plant phenotypic image recognition technology based on deep learning. Electronics 2021, 10, 81.
4. Dang, L.M.; Hassan, S.I.; Suhyeon, I.; kumar Sangaiah, A.; Mehmood, I.; Rho, S.; Seo, S.; Moon, H. UAV based wilt detection system via convolutional neural networks. Sustain. Comput. Inform. Syst. 2020, 28, 100250.
5. Dang, L.M.; Wang, H.; Li, Y.; Min, K.; Kwak, J.T.; Lee, O.N.; Park, H.; Moon, H. Fusarium wilt of radish detection using RGB and near infrared images from Unmanned Aerial Vehicles. Remote Sens. 2020, 12, 2863.
6. Nguyen, T.N.; Lee, S.; Nguyen-Xuan, H.; Lee, J. A novel analysis-prediction approach for geometrically nonlinear problems using group method of data handling. Comput. Methods Appl. Mech. Eng. 2019, 354, 506–526.
7. Rumpf, T.; Mahlein, A.K.; Steiner, U.; Oerke, E.C.; Dehne, H.W.; Plümer, L. Early detection and classification of plant diseases with Support Vector Machines based on hyperspectral reflectance. Comput. Electron. Agric. 2010, 74, 91–99.
8. Hiary, H.; Ahmad, S.B.; Reyalat, M.; Braik, M.; Al-Rahamneh, Z. Fast and accurate detection and classification of plant diseases. Int. J. Comput. Appl. 2011, 17, 31–38.
9. Minh, D.; Wang, H.X.; Li, Y.F.; Nguyen, T.N. Explainable artificial intelligence: A comprehensive review. Artif. Intell. Rev. 2021, 55, 3503–3568.
10. Bao, F.; Bambil, D. Applicability of computer vision in seed identification: Deep learning, random forest, and support vector machine classification algorithms. Acta Bot. Bras. 2021, 35, 17–21.
11. Mohanty, S.P.; Hughes, D.P.; Salathé, M. Using deep learning for image-based plant disease detection. Front. Plant Sci. 2016, 7, 1419.
12. Xu, X.; Xu, S.; Jin, L.; Song, E. Characteristic analysis of Otsu threshold and its applications. Pattern Recognit. Lett. 2011, 32, 956–961.
13. Agarwal, R.; Verma, O.P. Image splicing detection using hybrid feature extraction. In Advances in Mechanical Engineering; Springer: Singapore, 2021; pp. 663–672.
14. Humeau-Heurtier, A. Texture feature extraction methods: A survey. IEEE Access 2019, 7, 8975–9000.
15. Iqbal, N.; Mumtaz, R.; Shafi, U.; Zaidi, S.M.H. Gray level co-occurrence matrix (GLCM) texture based crop classification using low altitude remote sensing platforms. PeerJ Comput. Sci. 2021, 7, e536.
16. Bambil, D.; Pistori, H.; Bao, F.; Weber, V.; Alves, F.M.; Gonçalves, E.G.; de Alencar Figueiredo, L.F.; Abreu, U.G.P.; Arruda, R.; Bortolotto, I.M. Plant species identification using color learning resources, shape, texture, through machine learning and artificial neural networks. Environ. Syst. Decis. 2020, 40, 480–484.
17. Pham, B.T.; Phong, T.V.; Nguyen, H.D.; Qi, C.; Al-Ansari, N.; Amini, A.; Ho, L.S.; Tuyen, T.T.; Yen, H.P.H.; Ly, H.-B. A comparative study of kernel logistic regression, radial basis function classifier, multinomial naïve bayes, and logistic model tree for flash flood susceptibility mapping. Water 2020, 12, 239.
18. Anjani, I.A.; Pratiwi, Y.R.; Norfa Bagas Nurhuda, S. Implementation of deep learning using convolutional neural network algorithm for classification rose flower. J. Phys. Conf. Ser. 2021, 1842, 012002.
19. Abbas, T.; Razzaq, A.; Zia, M.A.; Mumtaz, I.; Saleem, M.A.; Akbar, W.; Khan, M.A.; Akhtar, G.; Shivachi, C.S. Deep neural networks for automatic flower species localization and recognition. Comput. Intell. Neurosci. 2022, 2022, 9359353.
20. Adak, M.F. Identification of plant species by deep learning and providing as a mobile application. Sakarya Univ. J. Comput. Inform. Sci. 2020, 3, 231–237.
21. Wang, F.; Zhang, F.-J.; Chen, F.-D.; Fang, W.-M.; Teng, N.-J. Identification of chrysanthemum (Chrysanthemum morifolium) self-incompatibility. Sci. World J. 2014, 2014, 625658.
22. Song, X.; Gao, K.; Fan, G.; Zhao, X.; Liu, Z.; Dai, S. Quantitative classification of the morphological traits of ray florets in large-flowered chrysanthemum. Hort. Sci. 2018, 53, 1258–1265.
23. Fanourakis, D.; Kazakos, F.; Nektarios, P.A. Allometric individual leaf area estimation in chrysanthemum. Agronomy 2021, 11, 795.
24. Hoang, T.K.; Wang, Y.; Hwang, Y.-J.; Lim, J.-H. Analysis of the morphological characteristics and karyomorphology of wild Chrysanthemum species in Korea. Hortic. Environ. Biotechnol. 2020, 61, 359–369.
25. Song, X.; Xu, Y.; Gao, K.; Fan, G.; Zhang, F.; Deng, C.; Dai, S.; Huang, H.; Xin, H.; Li, Y. High-density genetic map construction and identification of loci controlling flower-type traits in Chrysanthemum (Chrysanthemum × morifolium Ramat). Hortic. Res. 2020, 7, 108.
26. Nguyen, T.K.; Lim, J.H. High-throughput identification of chrysanthemum gene function and expression: An overview and an effective proposition. J. Plant Biotechnol. 2021, 48, 139–147.
27. Ma, Y.-P.; Zhao, L.; Zhang, W.-J.; Zhang, Y.-H.; Xing, X.; Duan, X.-X.; Hu, J.; Harris, A.; Liu, P.-L.; Dai, S.-L.; et al. Origins of cultivars of Chrysanthemum—Evidence from the chloroplast genome and nuclear LFY gene. J. Syst. Evol. 2020, 58, 925–944.
28. Gao, K.; Song, X.; Kong, D.; Dai, S. Genetic analysis of leaf traits in small-flower chrysanthemum (Chrysanthemum × morifolium Ramat.). Agronomy 2020, 10, 697.
29. Liu, Z.; Wang, J.; Tian, Y.; Dai, S. Deep learning for image-based large-flowered chrysanthemum cultivar recognition. Plant Methods 2019, 15, 146.
30. Liu, C.; Lu, W.; Gao, B.; Kimura, H.; Li, Y.; Wang, J. Rapid identification of chrysanthemum teas by computer vision and deep learning. Food Sci. Nutr. 2020, 8, 1968–1977.
31. Wang, B.; Brown, D.; Gao, Y.; Salle, J.L. Mobile plant leaf identification using smart-phones. In Proceedings of the 2013 IEEE International Conference on Image Processing, Melbourne, VIC, Australia, 15–18 September 2013; pp. 4417–4421.
32. Sun, Y.; Liu, Y.; Wang, G.; Zhang, H. Deep learning for plant identification in natural environment. Comput. Intell. Neurosci. 2017, 2017, 7361042.
33. Prasad, S.; Kumar, P.S.; Ghosh, D. An efficient low vision plant leaf shape identification system for smart phones. Multimed. Tools Appl. 2017, 76, 6915–6939.
34. Ahmad, M.U.; Ashiq, S.; Badshah, G.; Khan, A.H.; Hussain, M. Feature extraction of plant leaf using deep learning. Complexity 2022, 2022, 6976112.
35. Dissanayake, C.; Kumara, W.G.C.W. Plant leaf identification based on machine learning algorithms. Sri Lankan J. Technol. 2021, 60–66.
Figure 1. The architecture of the framework highlighting comprehensive guidance for our overall study.
Figure 2. Sample images for five types of wild chrysanthemum.
Figure 3. Training and validation curves using SVM with different kernel coefficient values. Note: For each coefficient value, the grey field of the validation score is formed based on the lowest and highest scores of the 10 folds.
Figure 4. Confusion matrix on the testing dataset using the trained model.
Figure 5. The model performance on the frontside and backside of leaves of five wild chrysanthemum species found in Korea.
Table 1. Detailed explanation of the dataset used in this study.

Class              Training Set   Validation Set   Testing Set
C. boreale         192            48               52
C. indicum         201            51               59
C. makinoi         221            56               56
C. zawadskii       197            50               23
A. spathulifolius  193            49               59
Total              814            254              249
Table 2. Description and explanation for shape features extracted from the dataset.

Name                       Sample               Explanation
Physiological width (w)    [leaf sample image]  Physiological width of the leaf.
Physiological length (l)   [leaf sample image]  Physiological length of the leaf.
Area (A)                   [leaf sample image]  Area of the leaf.
Perimeter (P)              [leaf sample image]  Total contour length of the leaf.
Aspect ratio               w / l                The ratio of the leaf's width to its length.
Rectangularity             A / (w × l)          The variation of width and length with respect to the area.
Circularity                P² / A               Measures the similarity of the leaf to a perfect circle; P² is the square of the perimeter.
Table 3. Description and explanation for texture features extracted from the dataset.

Name                       Formula                                          Explanation
Contrast                   Σ_{i,j=0}^{N−1} p_ij (i − j)²                    Measures the intensity or gray-level variation between the reference pixel and its neighbor.
Correlation                Σ_i Σ_j [(i · j) p(i, j) − μ_x μ_y] / (σ_x σ_y)  Measures the gray-tone linear dependencies in the image.
Inverse difference moment  Σ_i Σ_j p(i, j) / (1 + (i − j)²)                 Measures the local homogeneity of an image.
Entropy                    −Σ_{i,j=0}^{N−1} p_ij ln p_ij                    Measures the randomness of the gray-level distribution in the image.
Table 4. Default value range and optimal value for two important hyperparameters of the SVM algorithm.

Hyperparameter   Description                             Considered Values      Optimal Value
γ                Kernel coefficient for the RBF kernel   10⁻⁶, 10⁻⁵, …, 10⁻¹    10⁻¹
C                Regularization parameter                10⁰, 10¹, …, 10³       10⁰
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
