Article

Pollen Grain Classification Based on Ensemble Transfer Learning on the Cretan Pollen Dataset

by Nikos Tsiknakis 1,*, Elisavet Savvidaki 2, Georgios C. Manikis 1, Panagiota Gotsiou 3, Ilektra Remoundou 3, Kostas Marias 1,4, Eleftherios Alissandrakis 2,† and Nikolas Vidakis 4,†

1 Computational Biomedicine Laboratory, Institute of Computer Science, Foundation for Research and Technology Hellas–FORTH, 70013 Heraklion, Greece
2 Department of Agriculture, Hellenic Mediterranean University, 71004 Heraklion, Greece
3 Department of Food Quality and Chemistry of Natural Products, Mediterranean Agronomic Institute of Chania (M.A.I.Ch./CIHEAM), 73100 Chania, Greece
4 Department of Electrical and Computer Engineering, Hellenic Mediterranean University, 71004 Heraklion, Greece
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Plants 2022, 11(7), 919; https://doi.org/10.3390/plants11070919
Submission received: 1 March 2022 / Revised: 17 March 2022 / Accepted: 26 March 2022 / Published: 29 March 2022
(This article belongs to the Section Plant Cell Biology)

Abstract

Pollen identification is an important task for the botanical certification of honey. It is performed via thorough microscopic examination of the pollen present in honey, a process called melissopalynology. However, manual examination of the images is hard, time-consuming and subject to inter- and intra-observer variability. In this study, we investigated the applicability of deep learning models for the classification of pollen-grain images into 20 pollen types, based on the Cretan Pollen Dataset. In particular, we applied transfer and ensemble learning methods to achieve an accuracy of 97.5%, a sensitivity of 96.9%, a precision of 97%, an F1 score of 96.89% and an AUC of 0.9995. However, in a preliminary case study, when we applied the best-performing model to honey-based pollen-grain images, we found that it performed poorly, only 0.02 better than random guessing (i.e., an AUC of 0.52). This indicates that the model should be further fine-tuned on honey-based pollen-grain images to increase its effectiveness on such data.

1. Introduction

Honey is a natural, sweet food produced by bees from the nectar of plants and/or from the secretions of plants and insects. It is a complex mixture with excellent healing properties and nutrients, and it plays an increasing role in the human diet [1,2]. Due to the great variety of melliferous plants, honeys vary considerably in composition. A honey is considered unifloral (or monofloral) when the honeybees have collected mainly from a single floral source. To certify the botanical origin of a honey, physicochemical analyses are combined with an investigation of the pollen it contains. Pollen analysis (also known as melissopalynology) is a procedure for identifying the pollen grains present in the honey sediment after dilution and centrifugation. The sediment is examined under a microscope by a specialist scientist, who identifies the granules according to their morphological characteristics. To date, this process has been performed manually, and it is very time-consuming; a method that automates and speeds it up, while reducing potential human error, is therefore needed. Pollen analysis is a necessary procedure for characterizing the origin of a honey; among other things, it enables geographical identification based on the flora of each region. In particular, the region of Crete presents a unique floral diversity, with a significant number of endemic plants, from which honey with special organoleptic and physicochemical characteristics is produced. Such products need protection against fraud and the misleading of consumers.
Recent advances in artificial intelligence, especially deep learning, have enabled complex analyses to be performed across a variety of tasks, with impressive performance [3,4]. The successful development of robust and powerful deep learning models depends on the availability of large, well-annotated datasets, particularly for supervised tasks such as pollen identification. However, the collection, curation and processing of pollen samples is difficult and time-consuming; thus, in comparison with other computer vision applications where large datasets exist, e.g., ImageNet [5], pollen-grain datasets are much smaller [6,7,8,9]. Nevertheless, the knowledge obtained by models trained on larger datasets can be leveraged for other tasks via transfer learning [10]. When knowledge is transferred from models trained on ImageNet, a collection of more than 14 million real-life images spanning a thousand classes, to datasets from another domain, the models readily detect low-level imaging features but cannot directly perform the task at hand (i.e., pollen classification). Fine-tuning is therefore necessary, either by retraining the classification part of the pretrained models or by training a new classifier on newly annotated data, using the deep features produced by the pretrained convolutional network. In this study, we utilized transfer learning to investigate the applicability of four well-established convolutional neural network (CNN) architectures, i.e., Inception v3 [11], Xception [12], ResNet [13] and Inception–ResNet [14], for pollen classification on the Cretan Pollen Dataset v1 (CPD-1) [6,15], which comprises more than 4000 pollen-grain images of 20 pollen types gathered from the region of Crete. In addition, we examined two ensemble approaches to combine the predictive power of all or some of the base models. Finally, we obtained preliminary results from applying the best-performing model, trained on plant-based images, to pollen-grain images extracted from honey samples, to evaluate the applicability of such models in a real-life case such as pollen identification within the context of honey botanical certification.

2. Materials and Methods

2.1. Data

The study used the Cretan Pollen Dataset v1 (CPD-1) [15] for developing the classification models. CPD-1 is a publicly available dataset comprising images of 4034 pollen grains of 20 plant species (Figure 1). The pollen samples were collected from various places in the region of Crete, Greece, between April 2019 and April 2021. Figure 2 illustrates a mosaic of all pollen grains, numbered in accordance with Figure 1. The dataset is very rich in terms of the variety of pollen-grain types, covering a wide spectrum of pollen types commonly found in Crete's characteristic honey "Pefkothymaromelo Kritis PDO". However, it also has some limitations, such as the class imbalance seen in Figure 1, as well as some poorly segmented pollen grains. The proportion of poorly segmented pollen grains is very small, and we believe that it does not affect the training process of the model. In addition, some of the pollen types with a low representation in the dataset have a very distinct and unique morphology (e.g., Pinus); thus, we believe that the model is able to classify these correctly, even if the number of corresponding images is low. The dataset was split into three subsets, i.e., training, validation and hold-out testing sets (Figure 3). First, we split the dataset in half to generate the training and testing sets. Then, the training set was augmented and split into final training and validation sets. The augmentation techniques applied to increase the size of the training set included randomly blurring the image with a Gaussian kernel, linearly adjusting contrast and brightness, rotating the image, translating it along the x and y axes, and flipping it vertically and horizontally (see Table 1 for a detailed presentation of the augmentation techniques). Zero padding was then applied to each image to standardize the image sizes to 512 × 512 pixels; this size is sufficient because the largest pollen-grain image measures 435 × 419 pixels. Finally, each image was standardized sample-wise, by centering its mean value at 0 and dividing by its standard deviation.
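As a reading aid, the following is a minimal sketch of the preprocessing pipeline described above, written with the imgaug and NumPy libraries. The hyperparameters are taken from Table 1, but the paper does not state which library or implementation the authors used, so treat this as an illustration under those assumptions rather than the authors' code.

```python
import numpy as np
import imgaug.augmenters as iaa

# Augmentation pipeline mirroring Table 1 (probabilities and ranges from the paper).
augmenter = iaa.Sequential([
    iaa.Sometimes(0.3, iaa.GaussianBlur(sigma=(0.0, 0.3))),
    iaa.Sometimes(0.3, iaa.LinearContrast((0.75, 1.25))),
    iaa.Sometimes(0.3, iaa.Multiply((0.7, 1.3))),        # brightness multiplication
    iaa.Affine(rotate=(-180, 180),
               translate_percent={"x": (-0.2, 0.2), "y": (-0.2, 0.2)}),
    iaa.Flipud(0.5),   # vertical flip with 50% probability
    iaa.Fliplr(0.5),   # horizontal flip with 50% probability
])

def pad_and_standardize(image: np.ndarray, size: int = 512) -> np.ndarray:
    """Zero-pad an image to size x size pixels, then standardize it sample-wise."""
    h, w = image.shape[:2]
    top, left = (size - h) // 2, (size - w) // 2
    pad = ((top, size - h - top), (left, size - w - left)) + ((0, 0),) * (image.ndim - 2)
    padded = np.pad(image, pad).astype(np.float32)
    # Sample-wise standardization: zero mean, unit standard deviation per image.
    return (padded - padded.mean()) / padded.std()
```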

2.2. Base Models

The current study utilized transfer learning to compare several well-established architectures pretrained on ImageNet [5]. In particular, the pretrained convolutional parts of the following architectures were used: Inception v3 [11], Xception [12], ResNet [13] and Inception–ResNet [14]. The ResNet architecture introduced residual connections to mitigate the vanishing/exploding gradients problem of deep neural networks. With its increased depth, yet considerably lower complexity than the VGG networks thanks to the use of global average pooling layers, ResNet outperformed the state of the art of its time, winning multiple classification, detection and segmentation challenges. Google's Inception architecture, on the other hand, grows wider rather than deeper. It is built upon Inception modules, which consist of multiple parallel convolutional and pooling operations with different filter sizes, each computing a different transformation over the same input feature map. To contain the resulting computational cost, 1 × 1 convolutions are used to reduce the dimensionality of the output feature maps. The Inception–ResNet architecture, introduced alongside Inception v4 [14], combines the principles of Inception modules and residual connections to produce a wide, yet very deep, CNN. Finally, Xception, which stands for "extreme Inception", replaces the Inception modules with depth-wise separable convolutions, which map cross-channel and spatial correlations entirely separately while keeping the complexity at the same level as Inception v3. Like ResNet, Xception also uses linear residual connections.
A custom classification part, described in Table 2, was appended to the feature extraction part of each pretrained model. The global average pooling (GAP) layer computes the mean value of each feature map, effectively downscaling and flattening the output of the convolutional network. The dense layers use the rectified linear unit (ReLU) activation function, and each dropout layer has a rate of 50%. The output dense layer uses the softmax activation function, which produces a probability for each of the examined classes. Each model was trained for a maximum of 50 epochs, with early stopping based on the validation loss. The categorical cross-entropy loss function (Equation (1)) and the Adam optimizer [16] with a batch size of 8 samples were used to train the models, with an exponentially decaying learning rate with an initial value of 0.001 and a decay rate of 0.96.
CE = -\sum_{c=1}^{C} y_{o,c} \log(p_{o,c})    (1)
Here, C is the number of classes, y_{o,c} is a binary indicator of whether class c is the correct classification for observation o, and p_{o,c} is the predicted probability that observation o belongs to class c. The models were trained on a server with an AMD EPYC 7251 8-core 2.9 GHz CPU, an RTX 2080 Ti 11 GB GPU and 64 GB of RAM, and were implemented using TensorFlow 2.5.
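For concreteness, the sketch below shows how such a transfer learning model could be assembled in TensorFlow/Keras, using the Xception backbone as an example and the head layers of Table 2. The decay-step interval of the learning-rate schedule is not reported in the paper, so the decay_steps value here is an assumption; the remaining settings follow the training configuration described above.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_model(num_classes: int = 20) -> tf.keras.Model:
    # Pretrained convolutional backbone (ImageNet weights, classifier removed).
    base = tf.keras.applications.Xception(
        include_top=False, weights="imagenet", input_shape=(512, 512, 3))

    # Custom classification head from Table 2: GAP, then a dense/dropout stack.
    x = layers.GlobalAveragePooling2D()(base.output)
    for units in (1024, 512, 256, 128):
        x = layers.Dense(units, activation="relu")(x)
        x = layers.Dropout(0.5)(x)
    outputs = layers.Dense(num_classes, activation="softmax")(x)
    model = models.Model(base.input, outputs)

    # Adam with an exponentially decaying learning rate (initial 0.001, rate 0.96);
    # decay_steps=1000 is our assumption, not a value reported in the paper.
    schedule = tf.keras.optimizers.schedules.ExponentialDecay(
        initial_learning_rate=1e-3, decay_steps=1000, decay_rate=0.96)
    model.compile(optimizer=tf.keras.optimizers.Adam(schedule),
                  loss="categorical_crossentropy", metrics=["accuracy"])
    return model

# Training: at most 50 epochs, batch size 8, early stopping on validation loss.
early_stop = tf.keras.callbacks.EarlyStopping(monitor="val_loss",
                                              restore_best_weights=True)
# model.fit(train_ds, validation_data=val_ds, epochs=50, callbacks=[early_stop])
```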

2.3. Ensemble Techniques

In addition, we utilized ensemble techniques to combine the predictive power of the models. The hypothesis was that each classification model identifies imaging features that the other models do not, so that combining their predictions boosts the overall classification performance. Two ensemble learning strategies were utilized: (a) a soft voting strategy, where the mean of the prediction probabilities of all models is calculated and used for the final prediction, and (b) a hard voting strategy, where each model casts a single class prediction and the class with the most votes is chosen as the final prediction. An example of each strategy for a binary classification task is shown in Table 3.
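The two voting schemes can be expressed in a few lines. The following is a generic NumPy sketch of the strategies defined above (not the authors' code), operating on the stacked class-probability outputs of the base models.

```python
import numpy as np

def soft_vote(probs: np.ndarray) -> np.ndarray:
    """probs: (n_models, n_samples, n_classes) array of predicted probabilities.
    Averages the probabilities across models and picks the most likely class."""
    return probs.mean(axis=0).argmax(axis=1)

def hard_vote(probs: np.ndarray) -> np.ndarray:
    """Each model casts one vote (its argmax class); the most frequent class wins.
    Ties resolve to the lowest class index, as in the Table 3 example."""
    votes = probs.argmax(axis=2)  # shape (n_models, n_samples)
    n_classes = probs.shape[2]
    return np.array([np.bincount(votes[:, j], minlength=n_classes).argmax()
                     for j in range(votes.shape[1])])
```

Applied to the Table 3 example, soft_vote returns class 1 (mean probabilities [0.483, 0.517]), while hard_vote returns class 0 (two votes against one).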

3. Results

3.1. Performance Metrics

To evaluate the performance of the models on the hold-out testing set, we computed the following metrics based on the model predictions and the ground truth: accuracy (ACC), sensitivity (SEN), precision (PRE), F1 score and AUC score (one versus the rest). Because these metrics are mainly defined for binary classification tasks, we computed them in two different settings: (a) averaged across all classes (macro and weighted averages) and (b) per class. In addition, we obtained the receiver operating characteristic (ROC) curve and the confusion matrix for each model. Note that we do not provide AUC values and ROC curves for the hard voting ensemble models, because they produce class predictions rather than prediction probabilities; for them, only metrics derived from the confusion matrix are applicable.
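A sketch of how these metrics can be computed with scikit-learn is shown below. The paper does not state which library was used for evaluation, so the function calls here are simply the standard scikit-learn API, used for illustration.

```python
import numpy as np
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score)

def evaluate(y_true: np.ndarray, y_prob: np.ndarray) -> dict:
    """y_true: (n,) integer labels; y_prob: (n, n_classes) predicted probabilities."""
    y_pred = y_prob.argmax(axis=1)
    scores = {"ACC": accuracy_score(y_true, y_pred)}
    for avg in ("macro", "weighted"):
        scores[f"PRE_{avg}"] = precision_score(y_true, y_pred, average=avg, zero_division=0)
        scores[f"SEN_{avg}"] = recall_score(y_true, y_pred, average=avg, zero_division=0)
        scores[f"F1_{avg}"] = f1_score(y_true, y_pred, average=avg)
        # One-vs-rest AUC needs probabilities, hence no AUC for hard voting ensembles.
        scores[f"AUC_{avg}"] = roc_auc_score(y_true, y_prob, multi_class="ovr", average=avg)
    return scores
```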

3.2. Performance Analysis of the Models

Each model was trained to classify the pollen-grain images into one of the 20 plant species. The training, validation and testing sets comprised 7129, 802 and 2013 pollen-grain images, respectively. The specific distribution of each class is presented in Figure 3. The validation set was used for assessing the performance during training and for early stopping of the training procedure. Table 4 presents the overall performance results averaged across all classes for each of the models, while Table 5 presents the performance of each model regarding the Thymbra class. For the sake of brevity, the model performances on the rest of the classes are presented in their respective tables in the Supplementary Materials. We chose to include the performance regarding the Thymbra class in the main part of the paper because it is the main pollen type present in Crete’s trademark honey “Pefkothymaromelo Kritis PDO”.
Since our main objective was to correctly identify as many pollen grains belonging to the Thymbra class as possible, the metric that should be used to compare the model performances is the sensitivity (recall). At the same time, false positive cases for the Thymbra class should be minimized, which essentially means maximizing the specificity for that class. However, the perfect classifier, which would have a sensitivity of 1 and a specificity of 1, is rarely if ever observed in any real-life application in AI. To balance these two metrics, ROC curves and AUC scores are used. Thus, the metric that was used to compare the models and select the most appropriate model for our task was the AUC score of each model. However, because hard voting ensemble classifiers do not produce an ROC curve, based on which the AUC score is calculated, the comparison was based on the sensitivity.
We observe that all the models performed exceptionally well in the aggregated results (Table 4), with a weighted mean AUC value of 0.99895 ± 0.00054, a macro AUC value of 0.99898 ± 0.00047, a weighted mean sensitivity value of 0.96839 ± 0.00663 and a macro mean sensitivity value of 0.96237 ± 0.00739. However, the same does not apply for the Thymbra class evaluation. Although the mean AUC value of all models was 0.997295 ± 0.002001, the mean sensitivity was 0.895548 ± 0.04364. Interestingly, the base models (Inception v3, Inception–ResNet, Xception and ResNet) had a mean sensitivity value of 0.8869865 ± 0.0326692, the hard voting ensemble classifiers had a mean sensitivity value of 0.8712329 ± 0.050592 and the soft voting ensemble classifiers had a mean sensitivity value of 0.9232878 ± 0.0139697. In fact, the Xception–Inception soft voting ensemble model (“ens_x_i_soft”) had the highest sensitivity value of 0.945205 of all the models, while the highest sensitivity for a hard voting ensemble was 0.931507. Thus, soft voting ensemble classifiers should be preferred to hard voting classifiers in the context of this study. The receiver operating characteristic (ROC) analysis regarding the Thymbra class is presented in Figure 4.
Thus, based on the previous assumptions and observations, the best-performing model is the soft voting ensemble of all the base models ("ens_all_soft"). Its ROC analysis is presented in Figure 5, and its performance metrics are presented in Table 6. Looking at the confusion matrix of this classifier in Figure 6, we observe only a few misclassified cases across all classes. Specifically, for Thymbra, 68 pollen-grain images were correctly classified, while 5 were misclassified (one each as Erica, Vitis, Origanum, Satureja and Calicotome). Conversely, only 2 pollen-grain images were falsely classified as Thymbra, when in fact they belonged to the Satureja and Calicotome pollen types, respectively. Such misclassifications may be attributed to the fact that several pollen species, such as Thymbra, Origanum and Salvia, share morphological characteristics.
According to Greek legislation, at least 18% of the pollen found in the sediment of a sample must be thyme pollen for the honey to be characterized as thyme honey [17]. On this basis, if there are enough Thymbra pollen grains in the sample, the chosen model with a sensitivity of 0.931507 and a specificity of 0.998969 will identify more than enough pollen for the honey to be certified. However, extensive evaluation and possibly fine-tuning of the model on real-world honey-based samples should be carried out before any application of the model in a production environment.
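As a rough, hypothetical illustration of this point (our own arithmetic, not a result from the paper): in a sediment containing exactly 18% Thymbra pollen, the expected fraction of grains that the chosen model labels as Thymbra combines its true positive and false positive rates.

```python
# Hypothetical back-of-the-envelope check using the reported metrics.
sen, spe = 0.931507, 0.998969   # sensitivity and specificity for Thymbra
true_fraction = 0.18            # legal threshold for thyme honey in Greece

detected = true_fraction * sen + (1 - true_fraction) * (1 - spe)
print(f"Expected fraction labelled Thymbra: {detected:.4f}")  # ~0.1685
```

The slight undercount (roughly 16.9% detected at a true 18%) suggests that samples sitting exactly at the legal threshold could be misjudged, which reinforces the need for the real-world evaluation and fine-tuning mentioned above.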
The performance metrics and graphs for all the other models across all studied classes are included in the Supplementary Material.

4. Discussion

4.1. Comparison to Other Studies

Pollen analysis is an important task in melissopalynology, with a large financial impact on agricultural applications such as the certification of the botanical origin of honey. However, manual inspection of pollen microscopy images is a cumbersome and time-consuming task, subject to large inter- and intra-observer variability, and it often achieves low identification accuracy. Thus, automatic pollen-grain classification has attracted a great deal of attention from the research community in recent years.
In a previous study by our group, Manikis et al. [18] proposed a machine learning pipeline that extracts geometric, textural and wavelet features from a private dataset of 564 pollen-grain images and trains a random forest classifier to classify them into six pollen types. They achieved a satisfactory performance, with a reported accuracy of 88.24%, precision of 88.60%, recall of 88.16% and F1 score of 87.79%. Battiato et al. [19] presented a machine-learning-based analysis for classifying images into four classes (well-developed Corylus avellana pollen grains, anomalous Corylus avellana pollen grains, Alnus and debris). They investigated the performance of five classifiers based on hand-crafted features (i.e., HOG and LBP features), and they also developed end-to-end models based on two convolutional neural network architectures, AlexNet and VGGNet. The best-performing model was AlexNet, achieving an accuracy of 0.8963 and an F1 score of 0.8897. A key difference between their study and ours is the larger number of unique pollen species that our dataset includes. Although the total number of pollen-grain images in our dataset was lower, our approach achieved a much better accuracy (0.975161) and F1 score (0.968880), with many more classes.
Although hand-crafted approaches report satisfactory performance, we believe that models trained on selected hand-crafted features are inferior to deep learning approaches for large-scale tasks such as the one presented in this study. For reference, Battiato et al. [19] reported accuracy gains of 3% up to 13% for their deep learning approach in comparison with the hand-crafted ones. Sevillano et al. [20] presented a model combining CNN-based deep features with a linear discriminant classifier for classifying 23 types of pollen images in the Pollen23E dataset. Although that dataset comprises only 805 images (approximately 35 images per pollen type), it is very similar to ours in its high number of classes. Their approach achieved an accuracy of 0.932273, a precision of 0.9477, a recall (sensitivity) of 0.9964 and an F1 score of 0.9669. It is interesting to note that our approach slightly outperformed theirs, with an accuracy of 0.975161, a precision of 0.970042, a sensitivity of 0.969219 and an F1 score of 0.968880, indicating the robustness and wide applicability of advanced AI models across a wide spectrum of pollen samples.
Astolfi et al. [7] presented a very similar study to ours, as they trained and evaluated several well-established CNN architectures within the context of a large pollen-grain dataset. Specifically, their database comprised 2523 images of 73 pollen types collected from the Brazilian savanna (Cerrado). They reported that DenseNet-201 outperformed all the other models, achieving a precision score of 95.7%, an F1 score of 96.4%, an accuracy of 95.8% and a recall score of 95.7%. However, their models’ performances varied considerably across several pollen types. This may be attributed to the significantly lower number of images per class, i.e., there are many more classes but a much lower total number of images in the Pollen73S dataset, compared with the CPD v1 dataset. Finally, they conclude by stating the importance of investigating the potential gains of an ensemble approach, which, as our study indicates, can play an important role in achieving a performance boost compared to single-model approaches.
Table 7 compares the performance results of our method with those of the studies discussed above.

4.2. Performance on Honey Data

To evaluate the model in a real-life setting, 17 samples were prepared from two different honeys. Pollen grains were isolated according to [21]. In brief, 10 g of honey was dissolved in deionized water and centrifuged to remove the sugars. The sediment was used to make microscopic preparations with the addition of 0.05% pararosaniline chloride (Acros Organics, Mumbai, India). The slides were placed on a heating plate at 40 °C to evaporate the moisture, after which the samples were sealed with coverslips and mounting medium (Eukitt, Sigma-Aldrich, Taufkirchen, Germany). The permanent pollen preparations were allowed to dry before storage. The images were acquired using a Kern Optics microscope with a built-in ODC 832 camera (5.1 MP) at 400× magnification. After the images were taken, the granules were identified by a specialist scientist. The images were processed to segment each pollen grain and then standardized according to the preprocessing pipeline presented in Section 2. The final data comprised a total of 152 pollen-grain images, distributed across the classes as shown in Figure 7. This is a very imbalanced collection, with many classes not having a single data point. The best-performing model on the test set, i.e., the "ens_all_soft" classifier, was used to predict the classes of the pollen-grain images derived from the honey samples. It should be stressed that this is a preliminary case study, since the collected samples were not sufficient to produce reliable and trustworthy results. Hence, we chose to include it in the Discussion section of this manuscript as an indication of future research directions.
Unfortunately, the results of the selected model on the honey-sampled pollen-grain images were very poor, with a weighted average AUC score of 0.52, which is only 2% better than a random prediction classifier. The confusion matrix in Figure 8 shows that there were many misclassifications, especially for the target species of this study, i.e., Thymbra. However, we can observe that most of the Thymbra samples were misclassified as Salvia and some as Origanum. These two, in addition to Satureja, have a very similar morphology to Thymbra (Figure 9 presents indicative pollen-grain images from these species sharing common morphological characteristics). We can also observe that all Origanum images were misclassified as Salvia. Taking into consideration the nature of this honey-based image collection and the morphological similarities between these species, these specific misclassifications could be expected.
However, there were still many unexpected misclassifications, which can be attributed to several factors. The images from the honey sediment showed some differences compared to those from the plant. It is possible that the granules in the sediment were more swollen, as they contained more water. At the same time, there is a possibility that many grains were joined together. Finally, honey contains other elements such as sugars and proteins, which can blur the images, affecting both the quality of the image and the integrity of the imaging features. In addition, the segmented pollen-grain images from the honey-based samples were unfortunately very few compared to the testing set of the initial plant-based data. Thus, any preliminary results presented in this discussion are not representative of the actual predictive power of the model.
However, this analysis paves the way for future work on real-life data based on honey samples, while showing some necessary and important steps that must be followed in future studies. First, a larger number of samples should be collected and preprocessed to create large pollen-grain datasets from honey samples, which will be representative of all the pollen classes. Second, a robust and standardized preprocessing pipeline should be established to eliminate the variability between the pollen-grain images collected from plant samples and those collected from honey samples. Models that are trained on the plant-based dataset should be fine-tuned on a small subset of the honey-based dataset, to gain insight into this variability and learn the unique imaging features that may be present in the honey-based dataset. Following this, the fine-tuned models will be more useful to the melissopalynology community.

5. Conclusions

In this paper, we presented a comparative study for deep-learning-based classification of pollen-grain images from the 20 most common pollen species in Crete’s unique Pefkothymaromelo Kritis PDO honey. We compared four well-established CNN models, which were pretrained on the ImageNet database and fine-tuned on the publicly available dataset CPD v1 [15]. The best-performing model was based on the soft voting ensemble of all the base models, achieving an accuracy of 97.5%, a precision of 97%, a sensitivity of 96.9%, an F1 score of 96.89% and an AUC of 0.9995. When tested on a small collection of pollen grains extracted from honey samples, it performed poorly, as it only exceeded random guessing by 2% (i.e., an AUC of 0.52). Further development and fine-tuning on a larger honey-based collection should be performed to increase the model’s performance and robustness.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/plants11070919/s1.

Author Contributions

Conceptualization, N.T., G.C.M., K.M., E.A. and N.V.; methodology, N.T.; software, N.T.; validation, N.T., E.S., P.G. and I.R.; formal analysis, N.T. and G.C.M.; investigation, N.T., E.S., P.G. and I.R.; resources, E.S. and E.A.; data curation, N.T. and E.S.; writing—original draft preparation, N.T., E.S. and G.C.M.; writing—review and editing, N.T., G.C.M., N.V., E.A. and K.M.; visualization, N.T.; supervision, G.C.M., N.V., K.M. and E.A.; project administration, E.A. and N.V.; funding acquisition, N.V. and E.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Emblematic Action “The Honeybee Roads” of the Greek Public Investments Program (P.I.P.) of the General Secretariat for Research and Technology (GSRT) (project code: 2018ΣE01300000).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data used in this study are available on Zenodo at https://doi.org/10.5281/zenodo.4756360 (accessed on 28 March 2022).

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Ilia, G.; Simulescu, V.; Merghes, P.; Varan, N. The health benefits of honey as an energy source with antioxidant, antibacterial and antiseptic effects. Sci. Sports 2021, 36, 272.e1–272.e10.
2. Majtan, J.; Bucekova, M.; Kafantaris, I.; Szweda, P.; Hammer, K.; Mossialos, D. Honey antibacterial activity: A neglected aspect of honey quality assurance as functional food. Trends Food Sci. Technol. 2021, 118, 870–886.
3. Esteva, A.; Chou, K.; Yeung, S.; Naik, N.; Madani, A.; Mottaghi, A.; Liu, Y.; Topol, E.; Dean, J.; Socher, R. Deep learning-enabled medical computer vision. NPJ Digit. Med. 2021, 4, 5.
4. Santos, L.; Santos, F.N.; Oliveira, P.M.; Shinde, P. Deep learning applications in agriculture: A short review. In Iberian Robotics Conference; Springer: Berlin/Heidelberg, Germany, 2019; pp. 139–151.
5. Deng, J.; Dong, W.; Socher, R.; Li, L.-J.; Li, K.; Fei-Fei, L. ImageNet: A large-scale hierarchical image database. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 248–255.
6. Tsiknakis, N.; Savvidaki, E.; Kafetzopoulos, S.; Manikis, G.; Vidakis, N.; Marias, K.; Alissandrakis, E. Cretan Pollen Dataset v1 (CPD-1); Zenodo, 2021.
7. Astolfi, G.; Gonçalves, A.B.; Menezes, G.V.; Borges, F.S.B.; Astolfi, A.C.M.N.; Matsubara, E.T.; Alvarez, M.; Pistori, H. POLLEN73S: An image dataset for pollen grains classification. Ecol. Inform. 2020, 60, 101165.
8. Gonçalves, A.B.; Souza, J.S.; Da Silva, G.G.; Cereda, M.P.; Pott, A.; Naka, M.H.; Pistori, H. Feature Extraction and Machine Learning for the Classification of Brazilian Savannah Pollen Grains. PLoS ONE 2016, 11, e0157044.
9. Battiato, S.; Ortis, A.; Trenta, F.; Ascari, L.; Politi, M.; Siniscalco, C. POLLEN13K: A Large Scale Microscope Pollen Grain Image Dataset. In Proceedings of the 2020 IEEE International Conference on Image Processing (ICIP), Abu Dhabi, United Arab Emirates, 25–28 October 2020; pp. 2456–2460.
10. Goodfellow, I.; Bengio, Y.; Courville, A. Deep Learning; MIT Press: Cambridge, MA, USA, 2016.
11. Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J.; Wojna, Z. Rethinking the Inception Architecture for Computer Vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 2818–2826.
12. Chollet, F. Xception: Deep learning with depthwise separable convolutions. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 1800–1807.
13. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
14. Szegedy, C.; Ioffe, S.; Vanhoucke, V.; Alemi, A.A. Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning; AAAI Press: Palo Alto, CA, USA, 2017; pp. 4278–4284.
15. Tsiknakis, N.; Savvidaki, E.; Kafetzopoulos, S.; Manikis, G.; Vidakis, N.; Marias, K.; Alissandrakis, E. Segmenting 20 Types of Pollen Grains for the Cretan Pollen Dataset v1 (CPD-1). Appl. Sci. 2021, 11, 6657.
16. Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. In Proceedings of the International Conference on Learning Representations (ICLR), San Diego, CA, USA, 5–8 May 2015.
17. Official Government Gazette B-239/23-2-2005, Annex II, Article 67 of the Greek Food Code 2005, Greek Ministry of Agriculture. Available online: http://www.et.gr/index.php/anazitisi-fek (accessed on 28 March 2022).
18. Manikis, G.C.; Marias, K.; Alissandrakis, E.; Perrotto, L.; Savvidaki, E.; Vidakis, N. Pollen Grain Classification using Geometrical and Textural Features. In Proceedings of the 2019 IEEE International Conference on Imaging Systems and Techniques (IST), Abu Dhabi, United Arab Emirates, 9–10 December 2019; pp. 1–6.
19. Battiato, S.; Ortis, A.; Trenta, F.; Ascari, L.; Politi, M.; Siniscalco, C. Detection and Classification of Pollen Grain Microscope Images. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Seattle, WA, USA, 14–19 June 2020; pp. 4220–4227.
20. Sevillano, V.; Aznarte, J.L. Improving classification of pollen grain images of the POLEN23E dataset through three different applications of deep learning convolutional neural networks. PLoS ONE 2018, 13, e0201807.
21. Louveaux, J.; Maurizio, A.; Vorwohl, G. Methods of Melissopalynology. Bee World 1978, 59, 139–157.
Figure 1. Data histogram across classes.
Figure 2. Mosaic of images of all pollen types: 1 Thymbra; 2 Erica; 3 Castanea; 4 Eucalyptus; 5 Myrtus; 6 Ceratonia; 7 Urginea; 8 Vitis; 9 Origanum; 10 Satureja; 11 Pinus; 12 Calicotome; 13 Salvia; 14 Sinapis; 15 Ferula; 16 Asphodelus; 17 Oxalis; 18 Pistacia; 19 Ebenus; 20 Olea.
Figure 3. Histogram of each subset of the data.
Figure 4. Receiver operating characteristic curve for Thymbra class.
Figure 5. Receiver operating characteristic curve for soft voting ensemble of all base models.
Figure 6. Confusion matrix for soft voting ensemble of all base models.
Figure 7. Histogram of honey-based dataset.
Figure 8. Confusion matrix of soft voting ensemble of all models on the honey-based dataset.
Figure 9. Pollen-grain images of pollen types with similar morphological characteristics.
Table 1. Augmentations used in the study.

| Augmentation Method | Hyperparameters | Probability |
|---|---|---|
| Gaussian Blurring | Sigma [0, 0.3] | 30% |
| Linear Contrast Adjustment | Alpha [0.75, 1.25] | 30% |
| Brightness Multiplication | Multiplication factor [0.7, 1.3] | 30% |
| Rotation | Degrees [−180, 180] | 100% |
| Translation in x Plane | Translation percentage [−0.2, 0.2] | 100% |
| Translation in y Plane | Translation percentage [−0.2, 0.2] | 100% |
| Vertical Flipping | – | 50% |
| Horizontal Flipping | – | 50% |
Table 2. Classification part of the network. The input and output sizes of the GAP layer, as well as the input size of the first dense layer, depend on the backbone convolutional network used.

| Layer | Input Size | Output Size | Activation Function |
|---|---|---|---|
| Global Average Pooling 2D | – | – | – |
| Dense Layer | – | 1024 | ReLU |
| Dropout of 50% | 1024 | 1024 | – |
| Dense Layer | 1024 | 512 | ReLU |
| Dropout of 50% | 512 | 512 | – |
| Dense Layer | 512 | 256 | ReLU |
| Dropout of 50% | 256 | 256 | – |
| Dense Layer | 256 | 128 | ReLU |
| Dropout of 50% | 128 | 128 | – |
| Dense Layer | 128 | 20 | Softmax |
Table 3. Example of the ensemble strategies for a binary classification task. The same procedure applies for a 20-class task.

| Models | Prediction Probability | Prediction |
|---|---|---|
| Model 1 | [0.8, 0.2] | Class 0 |
| Model 2 | [0.55, 0.45] | Class 0 |
| Model 3 | [0.1, 0.9] | Class 1 |
| Soft Voting Ensemble | [0.8 + 0.55 + 0.1, 0.2 + 0.45 + 0.9]/3 = [0.483, 0.517] | Class 1 |
| Hard Voting Ensemble | Maximum occurrence of [0, 0, 1] | Class 0 |
Table 4. Performance metrics for each model on the test set, averaged across all classes.

| Model | ACC | Pre (Macro) | Sen (Macro) | F1 (Macro) | AUC (Macro) | Pre (Weighted) | Sen (Weighted) | F1 (Weighted) | AUC (Weighted) |
|---|---|---|---|---|---|---|---|---|---|
| ens_all_hard | 0.975161 | 0.970762 | 0.966647 | 0.967991 | NA | 0.976031 | 0.975161 | 0.975231 | NA |
| ens_all_soft | 0.975161 | 0.970042 | 0.969219 | 0.968880 | 0.999542 | 0.976306 | 0.975161 | 0.975334 | 0.999533 |
| ens_ir_i_r_hard | 0.972678 | 0.969362 | 0.966809 | 0.967251 | NA | 0.973837 | 0.972678 | 0.972838 | NA |
| ens_ir_i_r_soft | 0.973671 | 0.969781 | 0.966628 | 0.967443 | 0.999437 | 0.974688 | 0.973671 | 0.973803 | 0.999338 |
| ens_ir_r_hard | 0.957278 | 0.952362 | 0.947819 | 0.947840 | NA | 0.959952 | 0.957278 | 0.957202 | NA |
| ens_ir_r_soft | 0.966716 | 0.965132 | 0.963203 | 0.962795 | 0.999358 | 0.969911 | 0.966716 | 0.967410 | 0.999204 |
| ens_i_r_hard | 0.964729 | 0.959518 | 0.956076 | 0.956596 | NA | 0.966231 | 0.964729 | 0.964742 | NA |
| ens_i_r_soft | 0.974168 | 0.967764 | 0.970974 | 0.968863 | 0.999177 | 0.975065 | 0.974168 | 0.974326 | 0.999204 |
| ens_x_ir_hard | 0.959265 | 0.958602 | 0.949879 | 0.951970 | NA | 0.962451 | 0.959265 | 0.959344 | NA |
| ens_x_ir_i_hard | 0.972181 | 0.971409 | 0.969076 | 0.969541 | NA | 0.973656 | 0.972181 | 0.972402 | NA |
| ens_x_ir_i_soft | 0.974168 | 0.971959 | 0.968699 | 0.969395 | 0.999222 | 0.975805 | 0.974168 | 0.974393 | 0.999170 |
| ens_x_ir_r_hard | 0.969697 | 0.967619 | 0.964934 | 0.965034 | NA | 0.972198 | 0.969697 | 0.970201 | NA |
| ens_x_ir_r_soft | 0.971187 | 0.968954 | 0.966567 | 0.966683 | 0.999464 | 0.973259 | 0.971187 | 0.971571 | 0.999475 |
| ens_x_ir_soft | 0.966716 | 0.963960 | 0.961913 | 0.961237 | 0.998892 | 0.970550 | 0.966716 | 0.967457 | 0.998980 |
| ens_x_i_hard | 0.965723 | 0.960852 | 0.954880 | 0.956783 | NA | 0.966856 | 0.965723 | 0.965620 | NA |
| ens_x_i_r_hard | 0.976155 | 0.969923 | 0.970366 | 0.969455 | NA | 0.977386 | 0.976155 | 0.976399 | NA |
| ens_x_i_r_soft | 0.976652 | 0.969540 | 0.971659 | 0.969963 | 0.999387 | 0.977790 | 0.976652 | 0.976889 | 0.999454 |
| ens_x_i_soft | 0.974168 | 0.967077 | 0.971752 | 0.968744 | 0.998931 | 0.975470 | 0.974168 | 0.974411 | 0.999008 |
| ens_x_r_hard | 0.967213 | 0.960367 | 0.952631 | 0.954955 | NA | 0.968760 | 0.967213 | 0.967131 | NA |
| ens_x_r_soft | 0.971684 | 0.964913 | 0.963993 | 0.963454 | 0.999097 | 0.973144 | 0.971684 | 0.971921 | 0.999184 |
| inception | 0.964729 | 0.960787 | 0.961660 | 0.960547 | 0.998633 | 0.966119 | 0.964729 | 0.964959 | 0.998587 |
| inception_resnet | 0.952310 | 0.953253 | 0.952506 | 0.950253 | 0.998199 | 0.958644 | 0.952310 | 0.953534 | 0.997607 |
| resnet | 0.958271 | 0.950490 | 0.955365 | 0.951001 | 0.998360 | 0.962226 | 0.958271 | 0.959258 | 0.998233 |
| xception | 0.961749 | 0.950759 | 0.953611 | 0.950483 | 0.998096 | 0.964355 | 0.961749 | 0.962129 | 0.998363 |

Individual models (short names used in ensemble naming): Inception (i); ResNet (r); Inception–ResNet (ir); Xception (x). Ensemble strategies: soft or hard voting. The ensemble models are named according to the schema "ens_MODELS_VOTING" (e.g., ens_i_r_soft is the soft voting ensemble of Inception and ResNet). NA: not applicable.
Table 5. Performance metrics for each model on the test set regarding the Thymbra class.

| Model | Sensitivity | Specificity | Precision | Accuracy | F1 | AUC |
|---|---|---|---|---|---|---|
| ens_all_hard | 0.917808 | 0.998969 | 0.971014 | 0.996026 | 0.943662 | NA |
| ens_all_soft | 0.931507 | 0.998969 | 0.971429 | 0.996523 | 0.951049 | 0.999569 |
| ens_ir_i_r_hard | 0.917808 | 0.998969 | 0.971014 | 0.996026 | 0.943662 | NA |
| ens_ir_i_r_soft | 0.931507 | 0.998969 | 0.971429 | 0.996523 | 0.951049 | 0.999364 |
| ens_ir_r_hard | 0.780822 | 1.000000 | 1.000000 | 0.992052 | 0.876923 | NA |
| ens_ir_r_soft | 0.917808 | 1.000000 | 1.000000 | 0.997019 | 0.957143 | 0.998757 |
| ens_i_r_hard | 0.835616 | 0.998969 | 0.968254 | 0.993045 | 0.897059 | NA |
| ens_i_r_soft | 0.931507 | 0.997938 | 0.944444 | 0.995529 | 0.937931 | 0.999244 |
| ens_x_ir_hard | 0.821918 | 1.000000 | 1.000000 | 0.993542 | 0.902256 | NA |
| ens_x_ir_i_hard | 0.904110 | 0.998454 | 0.956522 | 0.995032 | 0.929577 | NA |
| ens_x_ir_i_soft | 0.931507 | 0.998969 | 0.971429 | 0.996523 | 0.951049 | 0.995220 |
| ens_x_ir_r_hard | 0.917808 | 0.998969 | 0.971014 | 0.996026 | 0.943662 | NA |
| ens_x_ir_r_soft | 0.904110 | 0.999485 | 0.985075 | 0.996026 | 0.942857 | 0.998913 |
| ens_x_ir_soft | 0.904110 | 1.000000 | 1.000000 | 0.996523 | 0.949640 | 0.995629 |
| ens_x_i_hard | 0.863014 | 0.998454 | 0.954545 | 0.993542 | 0.906475 | NA |
| ens_x_i_r_hard | 0.931507 | 0.998454 | 0.957746 | 0.996026 | 0.944444 | NA |
| ens_x_i_r_soft | 0.931507 | 0.998969 | 0.971429 | 0.996523 | 0.951049 | 0.999477 |
| ens_x_i_soft | 0.945205 | 0.996907 | 0.920000 | 0.995032 | 0.932432 | 0.995523 |
| ens_x_r_hard | 0.821918 | 0.998969 | 0.967742 | 0.992548 | 0.888889 | NA |
| ens_x_r_soft | 0.904110 | 0.998969 | 0.970588 | 0.995529 | 0.936170 | 0.998687 |
| inception | 0.931507 | 0.995361 | 0.883117 | 0.993045 | 0.906667 | 0.994549 |
| inception_resnet | 0.849315 | 1.000000 | 1.000000 | 0.994536 | 0.918519 | 0.993913 |
| resnet | 0.863014 | 0.997938 | 0.940299 | 0.993045 | 0.900000 | 0.997522 |
| xception | 0.904110 | 0.996907 | 0.916667 | 0.993542 | 0.910345 | 0.995763 |
Table 6. Performance results of soft voting ensemble of all base models across all classes.

| Class | Sensitivity | Specificity | Precision | Accuracy | F1 | AUC |
|---|---|---|---|---|---|---|
| 1. Thymbra | 0.931507 | 0.998969 | 0.971429 | 0.996523 | 0.951049 | 0.999569 |
| 2. Erica | 1.000000 | 0.998439 | 0.968085 | 0.998510 | 0.983784 | 1.000000 |
| 3. Castanea | 1.000000 | 0.998950 | 0.981982 | 0.999006 | 0.990909 | 1.000000 |
| 4. Eucalyptus | 0.941176 | 0.998444 | 0.963855 | 0.996026 | 0.952381 | 0.999713 |
| 5. Myrtus | 0.989822 | 1.000000 | 1.000000 | 0.998013 | 0.994885 | 0.999991 |
| 6. Ceratonia | 0.960000 | 0.995925 | 0.857143 | 0.995032 | 0.905660 | 0.998839 |
| 7. Urginea | 1.000000 | 1.000000 | 1.000000 | 1.000000 | 1.000000 | 1.000000 |
| 8. Vitis | 0.962963 | 0.995208 | 0.935252 | 0.993045 | 0.948905 | 0.999101 |
| 9. Origanum | 0.941176 | 0.999481 | 0.987654 | 0.997019 | 0.963855 | 0.995973 |
| 10. Satureja | 0.972222 | 0.998988 | 0.945946 | 0.998510 | 0.958904 | 0.999930 |
| 11. Pinus | 1.000000 | 1.000000 | 1.000000 | 1.000000 | 1.000000 | 1.000000 |
| 12. Calicotome | 0.946309 | 0.997854 | 0.972414 | 0.994039 | 0.959184 | 0.999622 |
| 13. Salvia | 1.000000 | 1.000000 | 1.000000 | 1.000000 | 1.000000 | 1.000000 |
| 14. Sinapis | 1.000000 | 0.993730 | 0.891892 | 0.994039 | 0.942857 | 0.999609 |
| 15. Ferula | 0.975610 | 1.000000 | 1.000000 | 0.999503 | 0.987654 | 0.999975 |
| 16. Asphodelus | 1.000000 | 0.999499 | 0.944444 | 0.999503 | 0.971429 | 1.000000 |
| 17. Oxalis | 1.000000 | 0.999485 | 0.985915 | 0.999503 | 0.992908 | 1.000000 |
| 18. Pistacia | 0.882353 | 1.000000 | 1.000000 | 0.999006 | 0.937500 | 0.999882 |
| 19. Ebenus | 0.909091 | 1.000000 | 1.000000 | 0.999503 | 0.952381 | 0.999273 |
| 20. Olea | 0.972152 | 0.998764 | 0.994819 | 0.993542 | 0.983355 | 0.999368 |
Table 7. Comparison table between other studies and ours.

| Ref. | Method | Dataset | AUC | Sensitivity | Precision | Accuracy | F1 Score |
|---|---|---|---|---|---|---|---|
| Manikis et al. [18] | Hand-crafted Features + ML | 546 images | – | 88.16% | 88.60% | 88.24% | 87.79% |
| Battiato et al. [19] | CNN | Pollen23E, 805 images | – | – | – | 89.63% | 88.97% |
| Sevillano et al. [20] | CNN + LD | Pollen23E, 805 images | – | 99.64% | 94.77% | 93.22% | 96.69% |
| Astolfi et al. [7] | CNN | Pollen73S, 2523 images | – | 95.7% | 95.7% | 95.8% | 96.4% |
| Our study | CNN | CPD, 4034 images | 0.9995 | 96.9% | 97% | 97.5% | 96.89% |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
