Article

On the Effectiveness of Leukocytes Classification Methods in a Real Application Scenario

1 Department of Mathematics and Computer Science, University of Cagliari, via Ospedale 72, 09124 Cagliari, Italy
2 Department of Electrical and Electronic Engineering, University of Cagliari, Piazza d’Armi, 09123 Cagliari, Italy
* Authors to whom correspondence should be addressed.
AI 2021, 2(3), 394-412; https://doi.org/10.3390/ai2030025
Submission received: 15 June 2021 / Revised: 20 July 2021 / Accepted: 23 August 2021 / Published: 25 August 2021
(This article belongs to the Topic Medical Image Analysis)

Abstract

Automating the analysis of digital microscopic images to identify cell sub-types or the presence of illness has assumed great importance, since it aids the laborious manual process of review and diagnosis. In this paper, we have focused on the analysis of white blood cells. They are the body’s main defence against infections and diseases and, therefore, their reliable classification is very important. Current systems for leukocyte analysis are mainly dedicated to counting, sub-type classification, and disease detection or classification. Although these tasks seem very different, they share many steps in the analysis process, especially those dedicated to the detection of cells in blood smears. A very accurate detection step gives accurate results in the classification of white blood cells. Conversely, when detection is not accurate, it can adversely affect classification performance. However, real-world applications very commonly have to work on inaccurate regions. Many problems can affect detection results: they can be related to the quality of the blood smear images, e.g., colour and lighting conditions or the absence of standards, or even to the density and presence of overlapping cells. To this end, we performed an in-depth investigation of the above scenario, simulating the regions produced by detection-based systems. We exploited various image descriptors combined with different classifiers, including CNNs, in order to evaluate which is the most suitable in such a scenario when performing two different tasks: classification of WBC sub-types and leukaemia detection. Experimental results have shown that Convolutional Neural Networks are very robust in such a scenario, outperforming common machine learning techniques combined with hand-crafted descriptors. However, when exploiting appropriate images for model training, even simpler approaches can lead to accurate results in both tasks.

1. Introduction

Blood contains three types of cells: platelets or thrombocytes, red blood cells (RBCs) or erythrocytes, and white blood cells (WBCs) or leukocytes. Platelets play an important role in haemostasis, leading to the formation of blood clots when there is an injury to blood vessels or other haemorrhages [1]. Red blood cells are important for the transport of oxygen from the heart to all tissues and carry away carbon dioxide [2]. White blood cells have important functions for the immune system, as they are the body’s main defence against infection and disease [3]. Consequently, their reliable classification is important for recognising their different types and potential diseases. All leukocytes contain a nucleus and can be grouped into two main types, according to the appearance of their structure (in particular, the presence of granules in the cytoplasm): granulocytes and agranulocytes. These broader categories can be further subdivided into five sub-types: the first includes basophils, eosinophils and neutrophils, while lymphocytes and monocytes belong to the second [4]. Some examples are shown in Figure 1.
Haematology is the branch of medicine concerned with the study of the blood and the prevention of blood-related diseases. Many diseases can affect the number and type of blood cells produced, their function and their lifespan. Usually, only normal, mature or nearly mature cells are released into the bloodstream, but certain circumstances may cause the bone marrow to release immature or abnormal cells into the circulation. Therefore, manual visual examinations performed by experienced pathologists are essential to detect or monitor these conditions. Microscopic examination of peripheral blood smear (PBS) slides by clinical pathologists is considered the gold standard for detecting various disorders [5]. Depending on the disorder being inspected (blood cancer, anaemia, presence of parasites), it may require manual counting of cells, classification of cell types and analysis of their morphological characteristics. This manual inspection has several disadvantages: it is time-consuming, repetitive and error-prone, and it is subjective, because different operators may produce different interpretations of the same scene. Therefore, to avoid misdiagnosis, an automated procedure is increasingly necessary, especially in emerging countries. Among existing blood diseases, leukaemia is a malignant tumour, which can be further grouped into four main types: Acute Lymphoblastic Leukaemia (ALL), Acute Myeloid Leukaemia (AML), Chronic Lymphocytic Leukaemia (CLL), and Chronic Myeloid Leukaemia (CML) [6]. This disease causes the bone marrow to produce abnormal and undeveloped white blood cells, called blasts or leukaemia cells [7]. Their dark purple appearance makes them distinguishable, although analysis and further treatment can be complex due to their variability in shape and consistency. Indeed, different types of WBCs can differ significantly in shape and size, which is one of the most challenging aspects, even considering that they are surrounded by other blood components such as red blood cells and platelets. To address these problems, several computer-aided diagnosis (CAD) systems have been proposed to automate the described manual tasks using image processing techniques and classical Machine Learning (ML) [8,9,10,11,12], as well as Deep Learning (DL) approaches [13,14,15], especially after the introduction of Convolutional Neural Networks (CNNs) [16,17].
CAD systems for PBS analysis can differ greatly from each other because they deal with different medical problems and focus on different tasks, ranging from simple cell counting to complete cell analysis. Despite these significant differences, they generally consist of the same steps: image pre-processing, segmentation or detection, feature extraction and classification. Not all CAD systems take advantage of all the steps mentioned. As the images captured by new digital microscopes are of excellent quality, the pre-processing step may not be necessary. Furthermore, methods that focus on cell counting do not need feature extraction and classification. On the contrary, some steps may be present several times to address different issues related to the type of images under investigation. For example, multiple segmentation steps might be performed to identify cells and then separate their components. Other times, segmentation steps are combined with detection steps, especially when it is necessary to identify adjacent or clustered cells [10,12,18]. As a general rule, CAD systems for PBS analysis can be grouped into two types: segmentation-based or detection-based. Segmentation-based systems are preferred for fine-grained analysis [19], e.g., for pathology grading, analysis of various cellular components, or checking for inclusions or parasites. Detection-based systems, on the other hand, are preferred for quantitative or coarse-grained analyses, such as cataloguing and counting cells [10,12,18,20]. Nowadays, detection-based methods are often preferred to segmentation-based methods, as the generated Bounding Boxes (BBs) can be directly used as input for modern feature extractors or CNNs [21]. Moreover, a particular type of CNN, the Region-based CNN (R-CNN), performs both the detection phase, mainly exploiting region proposals, and the classification phase [22]. In this way, it is possible to filter out all the BBs produced by the region proposal whose classification does not reach a high confidence value. However, this process has two main disadvantages: some relevant BBs can be filtered out due to their low confidence value or, even worse, some inaccurate BBs can reach a high confidence value but with a wrong classification. In this work, we study how accurate the BBs produced by a detection system must be in order to generate a robust classification model and, at the same time, correctly classify new bounding boxes, both accurate and inaccurate. To this end, we performed several experiments simulating a real-world application scenario in which a model is trained using near-perfect manually annotated BBs and deployed on automatically generated BBs. In the literature, many works deal with the classification of white blood cells [17,23,24,25,26], also proposing Unsupervised Domain Adaptation methods to address the domain shift present between different datasets [27]. However, to the best of our knowledge, no one has ever thoroughly investigated whether and to what extent specific properties, such as bounding box size and quality, can be crucial for extracting representative features and creating robust models for WBC classification.
Motivated by these observations, in this work, we propose an in-depth investigation of the above scenario by simulating the BBs produced by detection-based systems with different quality levels. We exploited different image descriptors combined with different classifiers, including CNNs, to assess which is the most suitable in such a scenario in performing two different tasks: Classification of WBC subtypes and Leukaemia detection.
The contribution of this paper is not to create a classification system that can reach or outperform state-of-the-art methods. It instead consists of evaluating state-of-the-art methods in a scenario different from standard laboratory tests, to provide some valuable suggestions/guidelines for creating real CAD systems. For this purpose, in our experiments we performed a quantitative evaluation to assess each single descriptor/classifier; therefore, we discarded all combinations of descriptors or ensembles of classifiers. The rest of the manuscript is organised as follows. Section 2 discusses related work, in particular recent methods for the classification of white blood cells from microscopic images. In Section 3 we illustrate the data sets used, the methods and the experimental setup. The results are presented and discussed in Section 4, and finally, in Section 5 we draw the conclusions and directions for future work.

2. Related Work

Current CAD systems that perform WBC analysis address several tasks, ranging from simple counting to disease detection and classification. Since in this work we focus on two main tasks, the classification of WBC sub-types and the detection of ALL, in this section we describe related work that has addressed them.
Classification of White Blood Cell sub-types. The classification of WBC sub-types is one of the most commonly ordered laboratory tests, since it allows monitoring the proportion of WBCs in the bloodstream, which can be affected by numerous diseases and conditions. Even if the WBCs are easily identifiable inside the PBS, this task is very challenging due to wide variations in cell shape, dimensions and edges [1], which are even higher in the presence of a disease. For this reason, most recent works addressing this task exploited CNN-based systems, which are better suited to cope with such variability. The authors in [23] concatenated pre-trained AlexNet and GoogleNet feature vectors by taking their maximum values. Then, they classified lymphocytes, monocytes, eosinophils, and neutrophils with a Support Vector Machine (SVM). The results are higher than 97% for both data sets investigated. Semerjian et al. [24] proposed a built-in customisable CNN, trained with several WBC templates extracted from the data set used by means of a segmentation step. They reached a correlation of 90% in recognising different WBCs. Yao et al. [25] proposed a two-module weighted optimised deformable CNN for WBC classification, achieving best F1-scores (F1) of 95.7%, 94.5% and 91.6% in testing on low-resolution and noisy undisclosed data sets and on the BCCD [28] data set, respectively. Qin et al. [17] realised Cell3Net, a fine-grained leukocyte classification method for microscopic images based on deep residual learning theory and medical domain knowledge. Their method first uses a convolutional layer to extract the overall shape of the cell body, and then three residual blocks (each composed of two convolutional layers and a residual layer) to extract fine features from various aspects. Finally, the last fully connected layers produce a compact feature representation for 40 different types of white blood cells. Ridoy et al. [26] realised a novel CNN architecture to distinguish among eosinophils, lymphocytes, monocytes, and neutrophils, obtaining F1-scores of 86%, 99%, 97%, and 85%, respectively.
Acute Lymphoblastic Leukaemia Detection. The detection of a disease is partially related to the previous task, since most diseases affect a particular cell sub-type. In particular, ALL affects the lymphocytes, which are released prematurely into the bloodstream. Lymphocytes affected by ALL, called lymphoblasts, present morphological changes that increase with the severity of the disease [6]. Thus, the analysis in this task is more fine-grained, and the systems must distinguish small morphological variations and small cavities inside the nucleus and cytoplasm. Recently, even in this task, CNNs have gained more attention, and they have been used both as feature extractors [29,30,31] and to directly classify the images [32,33,34,35,36]. In [32], the authors proposed a novel CNN architecture to differentiate normal and abnormal leukocytes, reaching 96.43% accuracy on the ALL-IDB1 data set (described in Section 3.1). Khandekar et al. [33] proposed an ALL detection system composed of a modified version of the YOLOv4 detector, adjusting the number of filters to support the custom data sets used. They reached an F1-score of 92% and a weighted F1-score (WFS) of 92% on the ALL-IDB1 and C-NMC (accessed on 10 June 2021) [37] data sets, respectively. Mondal et al. [34] proposed a weighted automated CNN-based ensemble model, trained with centre-cropped images to detect ALL. It is based on Xception, VGG-16, DenseNet-121, MobileNet, and InceptionResNet-V2. They reached 81.6% WFS on the C-NMC data set. The authors in [29] used transfer learning to extract image features for further classification from three different CNN architectures, both separately and jointly. Moreover, the features were selected according to their gain ratios and used as input to an SVM classifier. The authors also proposed a new hybrid data set from the union of three distinct databases and aimed to diagnose leukaemia without a segmentation process, achieving accuracy, precision and recall above 99%. Huang et al. [35] realised a WBC classification framework that combines a modulated Gabor wavelet and deep CNN kernels for each convolutional layer. The authors state that, in this way, the features learned by modulated kernels at different frequencies and orientations are more representative and discriminative for the task. In particular, they tested their method on a data set of hyperspectral images of blood cells. The authors in [30] employed a VGG architecture (described in Section 3.3.3) to extract features from WBC images and then filtered them using a statistically Enhanced Salp Swarm Algorithm. Finally, using an SVM classifier, they reached average accuracies of 96.11% and 87.9% on ALL-IDB2 and C-NMC, respectively. Toğaçar et al. [31] used the Maximal Information Coefficient and Ridge feature selection methods on the combination of features extracted by AlexNet, GoogLeNet, and ResNet-50 (described in Section 3.3.3). Finally, quadratic discriminant analysis was used as a classifier. The overall accuracy on the BCCD data set [28] was 97.95% in the classification of white blood cells. In [36], the authors realised a system to classify WBCs through an Attention-aware Residual Network-based Manifold Learning model that exploits first- and second-order category-relevant image-level features. It reached an average classification accuracy of 95.3% on a proprietary microscopic WBC image data set collected from Shandong Shengli Hospital. The authors in [38] proposed a new CNN to deal with ALL detection.
They reached an accuracy of 88.25% on a 10-fold cross-validation average. However, they pinpointed that the best fold achieved an accuracy of 99.3%, outperforming most of the works for this task. Still, they could not outperform the work in [39], which achieved 99.50% accuracy for leukaemia detection on ALL-IDB2 by employing a fine-tuned AlexNet and extensive data augmentation, which also comprised an analysis of different colour spaces. Nevertheless, the latter did not specify how they selected training and testing samples or whether they used a cross-validation strategy.
In most of the cited works dealing with one of the WBC analysis tasks, the authors used reference data sets in which the images present single centred cells. This represents the ideal scenario, where salient and highly discriminative features can be extracted from the images [34]. Of course, this is valid under the assumption that the crops are still performed manually by pathologists or that the detection systems provide perfect crops. However, this assumption is not verified in real application scenarios, as the systems are fully automated, and therefore the crops are not always precise or perfectly centred. Indeed, the BBs produced by a detection system could be too large, including a wide background region, or too narrow, cutting off a portion of the cell. Although, until now, no one has performed an exhaustive analysis of this issue, we believe that these factors can significantly influence the performance of a classification system. For this reason, we performed an in-depth investigation to verify whether and to what extent the mentioned issues can affect the performance of automated systems for WBC sub-type classification and leukaemia detection. Furthermore, we also investigated hand-crafted features combined with common ML methods: even if the reported related works (which are also the most recent) mainly exploited CNN-based methods, they do not provide clear evidence on which one to prefer for this task.

3. Materials and Methods

This section describes the materials and methods used in this work to perform the above evaluation. We first describe the datasets used, the methods employed, and the experimental setup.

3.1. Data Sets

We used two well-known benchmark data sets: the Acute Lymphoblastic Leukaemia Image Database (ALL-IDB) [3], proposed for ALL detection, and Raabin-WBC (R-WBC) [40], a recently proposed data set for WBC sub-types classification.

3.1.1. ALL-IDB2

The ALL-IDB is a public dataset of PBS images from healthy individuals and leukaemia patients, collected at the M. Tettamanti Research Centre for Childhood Leukaemia and Haematological Diseases, Monza, Italy. It is organised in two versions: ALL-IDB1, which presents 108 complete RGB images containing many cells and clusters, and ALL-IDB2, a collection of single WBCs extracted from ALL-IDB1. Since in this work we are only interested in evaluating classification performance, we only used the ALL-IDB2 version. It contains 260 images in JPG format with 24-bit colour depth; each image presents a single centred leukocyte, and 50% of them are lymphoblasts. The images were taken with a light laboratory microscope at different magnifications, ranging from 300× to 500×, coupled with two different cameras: an Olympus Optical C2500L and a Canon PowerShot G5. This leads to several variations in terms of colour, brightness, scale and cell size, making this a very challenging dataset. Figure 2 shows a healthy leukocyte and a lymphoblast taken from ALL-IDB2.

3.1.2. R-WBC

R-WBC is a large open-access dataset collected from several laboratories: the Razi Hospital laboratory in Rasht, and the Gholhak, Shahr-e-Qods and Takht-e Tavous laboratories in Tehran, Iran. The images were taken with Olympus CX18 and Zeiss microscopes at 100× magnification, paired with Samsung Galaxy S5 and LG G3 smartphone cameras, respectively. For this reason, this data set is also very challenging: even though the scale is fixed, it presents several variations in terms of illumination and colour. After the imaging process, the images were labelled and processed to provide multiple subsets for different tasks. In this work, we exploited the subset containing WBC bounding boxes and the ground truth of the nucleus and cytoplasm. This subset contains 1145 images of selected WBCs, including 242 lymphocytes, 242 monocytes, 242 neutrophils, 201 eosinophils and 218 basophils. Thus, unlike the previous data set, the task here is to identify the different sub-types of WBCs. A sample image for each WBC sub-type, taken from R-WBC, is shown in Figure 3.

3.2. Data Pre-Processing

As can be observed from Figure 2 and Figure 3, in both data sets each WBC is perfectly located in the centre of the BB; at the same time, the BB is larger than the WBC, and many other cells (mainly RBCs) are present in the images. Given that we are investigating how the quality of the extracted BB influences classification performance, for each original data set, from now on called large, we created three alternative versions, from now on called tight, eroded and dilated. As the names suggest, the tight versions contain images where the BBs perfectly fit the WBC size, which should simulate the ideal case. The remaining versions are more realistic with respect to current detection-based systems, given that in most cases the provided BBs are not precise. Indeed, it could happen that the WBC is not entirely enclosed inside the BB (as in the eroded version) or that the WBC is not perfectly centred and a consistent portion of the background is still present in the BB (as in the dilated version). To create these alternative versions, we exploited the pixel-wise ground truth in the form of a binary mask, where the foreground contains the WBCs only (for ALL-IDB2, whose ground truth was not provided by the authors, we made a copy available at ALL-IDB2 masks, accessed on 10 June 2021). We extracted the extreme points of the contours (left, right, top and bottom) to re-crop the RGB images from the foreground. The tight version has been re-cropped adding just 3 pixels on each side of the box. The eroded/dilated versions have been re-cropped removing/adding 30 pixels for ALL-IDB and 60 pixels for R-WBC (whose resolution is double) from one randomly drawn side of the box. A sample image for each version of a healthy leukocyte and a lymphoblast, taken from ALL-IDB2, and for each WBC sub-type, taken from R-WBC, is shown in Figure 4.
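A minimal sketch of this re-cropping procedure, assuming the ground truth is available as a binary NumPy mask, is shown below; all function and parameter names are ours, and the sketch draws the side to shift independently for the eroded and dilated crops, which the paper does not specify.

```python
import numpy as np

def crop_versions(rgb, mask, shift=30, rng=np.random.default_rng(0)):
    """Re-crop an image around the WBC; `shift` is 30 for ALL-IDB2
    and 60 for R-WBC, as stated in the text."""
    ys, xs = np.nonzero(mask)
    # tight: extreme points of the foreground plus 3 pixels per side
    t, b = max(ys.min() - 3, 0), min(ys.max() + 3, rgb.shape[0] - 1)
    l, r = max(xs.min() - 3, 0), min(xs.max() + 3, rgb.shape[1] - 1)
    tight = rgb[t:b + 1, l:r + 1]

    def shifted(sign):
        # move one randomly drawn side inwards (sign=-1, eroded)
        # or outwards (sign=+1, dilated)
        box = [t, b, l, r]                 # 0=top, 1=bottom, 2=left, 3=right
        side = int(rng.integers(4))
        outward = shift if side in (1, 3) else -shift
        box[side] += sign * outward
        t2 = int(np.clip(box[0], 0, rgb.shape[0] - 1))
        b2 = int(np.clip(box[1], 0, rgb.shape[0] - 1))
        l2 = int(np.clip(box[2], 0, rgb.shape[1] - 1))
        r2 = int(np.clip(box[3], 0, rgb.shape[1] - 1))
        return rgb[t2:b2 + 1, l2:r2 + 1]

    return tight, shifted(-1), shifted(+1)
```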

3.3. Methods

Here we describe the methods used in our evaluation: hand-crafted image descriptors, classic machine learning, and deep learning approaches.

3.3.1. Hand-Crafted Image Descriptors

We evaluated different hand-crafted image descriptors that we grouped into four important classes: invariant moments, texture, colour and wavelet features. As Invariant Moments we computed Legendre and Zernike moments. Legendre moments were first introduced in image analysis by Teague [41] and have been used extensively in many pattern recognition and feature extraction applications due to their invariance to scale, rotation and reflection changes [42,43]. They are computed from the Legendre polynomials. Zernike moments have also been used as a feature set in many applications [44], as they can represent the properties of an image with no redundancy or overlap of information between moments. They are constructed as the mapping of an image onto a set of complex Zernike polynomials [41]. In both cases, the order of the moments is equal to 5, since a higher order would have decreased system performance, adding too specific features, more useful for image reconstruction than for image classification [45].
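A sketch of the Legendre moment computation, assuming pixel coordinates mapped onto [-1, 1] and a common discrete normalisation (the exact constant used in [45] may differ), is the following:

```python
import numpy as np
from numpy.polynomial.legendre import legval

def legendre_moments(grey, order=5):
    """Legendre moments with p + q <= `order` for a greyscale image."""
    h, w = grey.shape
    # P_p evaluated on the normalised x (columns) and y (rows) grids
    Px = np.stack([legval(np.linspace(-1, 1, w), [0] * p + [1])
                   for p in range(order + 1)])
    Py = np.stack([legval(np.linspace(-1, 1, h), [0] * q + [1])
                   for q in range(order + 1)])
    f = grey.astype(float)
    feats = []
    for p in range(order + 1):
        for q in range(order + 1 - p):
            lam = (2 * p + 1) * (2 * q + 1) / (w * h)
            feats.append(lam * Py[q] @ f @ Px[p])
    return np.asarray(feats)

# Zernike moments are available off the shelf, e.g. in mahotas:
# mahotas.features.zernike_moments(grey, radius, degree=5)
```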
The Texture Features computed were the rotation-invariant Gray Level Co-occurrence Matrix (GLCM) features, as proposed in [46], and the rotation-invariant Local Binary Pattern (LBP) features [47]. In both cases we focused on fine textures; thus, we computed four GLCMs with d = 1 and θ = [0, 45, 90, 135], and the LBP map in the neighbourhood identified by r and n equal to 1 and 8, respectively. From the GLCMs we extracted thirteen features [48] and converted them into rotationally invariant ones, Har_ri (for more details see [46]). The LBP map is then converted into a rotationally invariant one, and its histogram is used as the feature vector LBP_ri [47].
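A minimal sketch of these texture features with scikit-image is shown below; averaging the GLCM statistics over the four angles is used here as a simple stand-in for the rotation-invariant conversion of [46], and only four of the thirteen Haralick statistics are listed for brevity.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops, local_binary_pattern

def texture_features(grey):
    """`grey` is a uint8 greyscale image."""
    # four GLCMs at distance 1 and angles 0, 45, 90 and 135 degrees
    glcm = graycomatrix(grey, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=256, symmetric=True, normed=True)
    har = [graycoprops(glcm, prop).mean()
           for prop in ('contrast', 'homogeneity', 'energy', 'correlation')]
    # rotation-invariant uniform LBP with r = 1, n = 8; values fall in 0..9
    lbp = local_binary_pattern(grey, P=8, R=1, method='uniform')
    hist, _ = np.histogram(lbp, bins=np.arange(11), density=True)
    return np.concatenate([har, hist])
```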
As Colour Features, we extracted basic colour histogram and colour auto-correlogram features [49]. The colour histogram features describe the global colour distribution in the image. We computed seven well-known descriptors: mean, standard deviation, smoothness, skewness, kurtosis, uniformity and entropy, calculated from the images converted to greyscale. The auto-correlogram combines colour information with the spatial correlation between colours. In particular, it stores the probability of finding two pixels with the same colour at a distance d. Here, we used four distance values, d = 1, 2, 3, 4. Finally, the four probability vectors are concatenated to create a single feature vector.
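The histogram part can be sketched as follows (the auto-correlogram is omitted for brevity); the smoothness definition, 1 - 1/(1 + variance) with intensities in [0, 1], is the usual textbook form and is an assumption here.

```python
import numpy as np
from scipy.stats import skew, kurtosis

def colour_histogram_stats(grey):
    """The seven first-order statistics on a uint8 greyscale image."""
    p = grey.astype(float).ravel() / 255.0
    hist = np.bincount((p * 255).astype(int), minlength=256).astype(float)
    hist /= hist.sum()
    nz = hist[hist > 0]
    return np.array([
        p.mean(),                      # mean
        p.std(),                       # standard deviation
        1 - 1 / (1 + p.var()),         # smoothness
        skew(p),                       # skewness
        kurtosis(p),                   # kurtosis
        np.sum(hist ** 2),             # uniformity (energy)
        -np.sum(nz * np.log2(nz)),     # entropy
    ])
```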
As Wavelet Features, we computed the Gabor [50] and Haar [51] wavelets. A Gabor wavelet filter bank is an efficient tool for the analysis of local time-frequency characteristics. It uses a set of specific filters with fixed directions and scales to describe the time-frequency coefficients for each direction and scale. Here we used a common filter bank composed of 40 filters (5 scales and 8 directions) of size 39 × 39. The Haar wavelet has been used in many different applications, mostly in combination with other techniques. Here we applied the Haar wavelet directly to the original images using three different levels. From each level, we extracted the approximation image and computed a smoothed histogram with 64 bins. The three histograms are finally concatenated to create a single feature vector.
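The two feature sets can be sketched as below; the wavelength/sigma progression across Gabor scales is an assumption, since the text only fixes the bank layout (40 filters of size 39 × 39).

```python
import cv2
import numpy as np
import pywt

def gabor_bank(ksize=39, scales=5, orientations=8):
    """A 5-scale, 8-orientation Gabor filter bank (40 kernels)."""
    kernels = []
    for s in range(scales):
        lambd = 4.0 * 2 ** (s / 2.0)       # wavelength grows with scale
        for o in range(orientations):
            theta = o * np.pi / orientations
            kernels.append(cv2.getGaborKernel((ksize, ksize),
                                              sigma=0.56 * lambd, theta=theta,
                                              lambd=lambd, gamma=0.5, psi=0))
    return kernels

def haar_features(grey, levels=3, bins=64):
    """Concatenated 64-bin histograms of the Haar approximation image
    at each of the three decomposition levels."""
    feats, approx = [], grey.astype(float)
    for _ in range(levels):
        approx, _ = pywt.dwt2(approx, 'haar')  # keep only the approximation
        hist, _ = np.histogram(approx, bins=bins, density=True)
        feats.append(hist)
    return np.concatenate(feats)
```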

3.3.2. Classic Machine Learning

The classification accuracy was estimated using three different classifiers: the k-Nearest Neighbour (k-NN) classifier [52], the Support Vector Machine (SVM) [53] and the Random Forest (RF) [54]. The k-NN was used because it is one of the simplest classifiers and can document the effectiveness of the extracted features rather than the performance of the classifier itself. Here we used k = 1, computed using the Euclidean distance. The SVM, on the other hand, is one of the most used in biomedical applications [12,23,29,30]. Here we used a Gaussian radial basis function (RBF) kernel trained using the one-vs-rest approach. To speed up the selection of the SVM hyper-parameters, we employed an error-correcting output code mechanism [55] with 5-fold cross-validation to fit and automatically tune the hyper-parameters. The RF was chosen because it combines the results of many decision trees, thus reducing over-fitting and improving generalisation. Here we used a forest consisting of 100 trees.
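A scikit-learn sketch of this setup is shown below; a plain 5-fold grid search stands in for the error-correcting output code tuning of [55], the grid values are ours, and X_train/X_test and y_train/y_test are assumed to be already extracted feature matrices and labels.

```python
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

# 1-NN with Euclidean distance
knn = KNeighborsClassifier(n_neighbors=1, metric='euclidean')

# RBF-kernel SVM; note that SVC is one-vs-one internally, with 'ovr'
# only reshaping the decision scores
svm = GridSearchCV(SVC(kernel='rbf', decision_function_shape='ovr'),
                   param_grid={'C': [1, 10, 100],
                               'gamma': ['scale', 1e-2, 1e-3]},
                   cv=5)

# random forest of 100 trees
rf = RandomForestClassifier(n_estimators=100, random_state=0)

for clf in (knn, svm, rf):
    clf.fit(X_train, y_train)
    print(type(clf).__name__, clf.score(X_test, y_test))
```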

3.3.3. Deep Learning

In this work, we also employed DL approaches, both as classifiers and as feature extractors. We evaluated different well-known CNN architectures: AlexNet, VGG-16, VGG-19, ResNet-18, ResNet-50, ResNet-101, GoogleNet and Inceptionv3. They were all pre-trained on a well-known natural image dataset (ImageNet [56]) and adapted to medical image tasks, following an established procedure for transfer learning and fine-tuning CNN models [57]. AlexNet [16], VGG-16 and VGG-19 [58] are very simple architectures but, at the same time, the most used for transfer learning and fine-tuning [57], since they gained popularity for their excellent performance in many classification tasks [16]. They are quite similar except for the number of layers, which is 8 for AlexNet, 16 for VGG-16 and 19 for VGG-19. The three ResNet architectures are slightly more complex but, being based on residual learning, they are easier to optimise even when the depth increases considerably [59]. They present 18, 50 and 101 layers for ResNet-18, ResNet-50 and ResNet-101, respectively. GoogleNet [60] and Inceptionv3 [61] are both based on the Inception layer; indeed, Inceptionv3 is a variant of GoogleNet exploiting only a few additional layers. They also differ in the number of layers, 100 and 140 for GoogleNet and Inceptionv3, respectively. For transfer learning, we followed the approach used in [57], preserving all the CNN layers except the last fully connected one. We replaced it with a new layer, freshly initialised and set up to accommodate the new object categories.
CNNs can even be used to replace traditional feature extractors, since they have a solid ability to extract complex features that describe the image in much more detail [31,35]. Therefore, we leveraged them both for classification and for feature extraction, acting in different ways depending on the end use of the CNN. We considered the entire CNN architecture, taking its prediction values, when used as a classifier, while we only leveraged a portion of the CNN when used as a feature extractor. In particular, we extracted features from the penultimate fully connected layer (FC7) of AlexNet, VGG-16 and VGG-19, and from the last (and only) fully connected layer of the ResNet and Inception architectures.
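The dual use can be sketched in PyTorch as follows (a sketch assuming torchvision ≥ 0.13 for the weights API; the hook-based feature capture is our illustration of the general procedure, not the paper's code):

```python
import torch.nn as nn
from torchvision import models

num_classes = 2  # ALL-IDB2; use 5 for R-WBC

# fine-tuning: keep all pre-trained layers, replace only the last FC layer
net = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
net.classifier[6] = nn.Linear(4096, num_classes)

# feature extraction: capture the activations of FC7, the penultimate
# fully connected layer (classifier[4] in torchvision's AlexNet)
features = {}
net.classifier[4].register_forward_hook(
    lambda module, inputs, output: features.update(fc7=output.detach()))

# after calling net(batch), features['fc7'] holds one 4096-d vector per
# image, ready to feed the classic classifiers of Section 3.3.2
```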

3.4. Experimental Setup

In order to perform a fair comparison, we split all the above-mentioned data sets/versions into three parts. To have a sufficient number of samples for the training process while preserving enough samples for performance evaluation, we first split each data set into two parts, a training and a testing set, with about 70% and 30% of the images, respectively. Given that the data sets are well balanced, we used a stratified sampling procedure to keep the splits balanced. We then further split the original training set into a training and a validation set (used during CNN training), with about 80% and 20% of the images, respectively. In this split too, we tried to keep the class proportions intact. Also, to further ease reproducibility, the splits were not created randomly but by taking the images in lexicographic order from each class, as sketched below. We conducted all the experiments on a single machine with the following configuration: Intel(R) Core(TM) i9-8950HK @ 2.90 GHz CPU with 32 GB RAM and an NVIDIA GTX1050 Ti 4 GB GPU.
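A deterministic per-class split along these lines could look as follows; which slice goes to validation is our assumption, since the text only states that images are taken in lexicographic order rather than at random.

```python
import os

def lexicographic_split(class_dir, train_frac=0.7, val_frac=0.2):
    """Split one class folder into ~70/30 train/test, then carve ~20% of
    the training part out as a validation set."""
    files = sorted(os.listdir(class_dir))        # lexicographic order
    n_train = round(train_frac * len(files))
    train_all, test = files[:n_train], files[n_train:]
    n_val = round(val_frac * len(train_all))
    train = train_all[:len(train_all) - n_val]
    val = train_all[len(train_all) - n_val:]
    return train, val, test
```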
To evaluate the classification performance, we used five common metrics: Accuracy (A), Precision (P), Recall (R), Specificity (S) and F1-score (F1). They all range over the interval [0, 1]. On ALL-IDB, which is a two-class problem, the above metrics are used as they are, while on R-WBC, which is a multi-class problem, they are used to compute the per-class performance, and then we compute the weighted average to obtain a single performance measure.
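The evaluation can be sketched as below; specificity is computed per class from the confusion matrix (for the two-class case, the positive-class entry reduces to the usual TN/(TN + FP)), and the weighted combination mirrors the per-class averaging described above.

```python
import numpy as np
from sklearn.metrics import (accuracy_score, confusion_matrix,
                             f1_score, precision_score, recall_score)

def evaluate(y_true, y_pred, multiclass=False):
    """Return A, P, R, S, F1; weighted averages on the multi-class task."""
    avg = 'weighted' if multiclass else 'binary'
    a = accuracy_score(y_true, y_pred)
    p = precision_score(y_true, y_pred, average=avg, zero_division=0)
    r = recall_score(y_true, y_pred, average=avg, zero_division=0)
    f1 = f1_score(y_true, y_pred, average=avg, zero_division=0)
    # specificity = TN / (TN + FP) per class, averaged with class supports
    cm = confusion_matrix(y_true, y_pred)
    tn = cm.sum() - cm.sum(axis=0) - cm.sum(axis=1) + np.diag(cm)
    fp = cm.sum(axis=0) - np.diag(cm)
    s = np.average(tn / np.maximum(tn + fp, 1), weights=cm.sum(axis=1))
    return a, p, r, s, f1
```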
As mentioned in Section 3.3.3, we fine-tuned the CNN architectures on both data sets in order to extract more meaningful features. In particular, we used the hyper-parameters defined in Table 1. Furthermore, considering that we did not apply any image augmentation, we set the L2 regularisation factor so as to avoid possible over-fitting during the training phase.
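Mapped onto PyTorch objects, the settings of Table 1 would read roughly as follows (a sketch reusing `net` from the previous listing; `train_loader` is an assumed DataLoader with batch_size=8, and Adam's weight_decay is used as the L2 term, which is close but not strictly equivalent to an explicit L2 penalty on the loss):

```python
import torch
import torch.nn.functional as F

optimizer = torch.optim.Adam(net.parameters(), lr=1e-4, weight_decay=0.1)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.1)

for epoch in range(50):                      # Max Epochs = 50
    for images, labels in train_loader:      # Mini Batch Size = 8
        optimizer.zero_grad()
        loss = F.cross_entropy(net(images), labels)
        loss.backward()
        optimizer.step()
    scheduler.step()                         # drop LR by 0.1 every 10 epochs
```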

4. Experimental Results

As mentioned before, in this work we are interested in investigating how the quality of the extracted BB influences classification performance; thus, we performed different experiments involving the mentioned data sets/versions. The first experiments are devoted to comparing the performance of the different features and classifiers in a controlled (ideal) environment, as in most benchmark data sets created specifically for model training, where the BBs are precise. In these experiments, from now on called “same-version”, we compared the original (large) versions of the data sets with the tight versions, in order to evaluate whether and to what extent the presence of a large portion of background in the BBs can influence the extracted features and, consequently, the resulting classification models.
The subsequent experiments are devoted to evaluating the robustness of the different features and classifiers in an uncontrolled (real) environment, where the features used to train the classification models could still be extracted from controlled benchmark data sets, but the models are deployed in a real scenario. To simulate this scenario, in these experiments, from now on called “cross-version”, we exploited the classification models already trained during the previous experiments, that is, trained on the training sets of the large and tight versions of the data sets, but this time tested on the test sets of the other versions of the data sets.

4.1. Results

For the sake of brevity, we report the results of the experiments in a single table for each version of the source data sets. This means that Table 2 and Table 3 report the performance obtained when the source models are created exploiting the large versions of the data sets, while Table 4 and Table 5 report the performance obtained with the tight versions; in all cases, the models are then tested using all versions of the data sets (large, tight, eroded and dilated) in turn. To emphasise the same-version performance, the corresponding columns in the tables have been highlighted in grey. In addition, we report the average performance of each classifier over the different feature sets to facilitate comparisons.
As can be observed from the same-version experiments on the ALL-IDB2 data set, reported in Table 2 and Table 4, the performance is, on average, very similar for both the large and tight data set versions. More in detail, looking at the AVG rows of every single classifier, it is possible to notice that, on ALL-IDB, the large-to-large configuration achieves slightly better performance with the kNN classifier, while, for the remaining classifiers, the tight-to-tight configuration produces better performance (considerably better with the CNNs). A similar trend occurs on the R-WBC data set (see Table 3 and Table 5), in which the only clear difference from ALL-IDB2 comes from the CNN results: their performance is very similar in both the large-to-large and tight-to-tight configurations.
The cross-version experiments aim to verify which combinations of descriptors and classifiers are more robust in a real scenario, where the testing images could differ from those used for training. In this scenario, performance can be expected to worsen quickly; however, this trend did not occur in all cases. In particular, on the ALL-IDB2 data set, when the large version is exploited for creating the models (reported in Table 2), the CNNs alone proved to be very robust and, in many cases, the cross-version performance is even better than the same-version one. On average, from this table, it can be observed that the highest drop in performance (about 30%) corresponds to the large-to-eroded crossing, for which the SVM proved to be the best among the classic ML classifiers. On the contrary, on the same data set, when the tight version is exploited for creating the models (reported in Table 4), the highest drop in performance (about 20% for all classifiers) corresponds to the tight-to-large crossing, while for the other crossings the drop is less evident (about 10%). On average, even in the crossing cases, the CNNs proved to be the most robust, still followed by the SVM.
The trends just mentioned are confirmed for the R-WBC data set on both the large and tight versions, reported in Table 3 and Table 5. Indeed, the highest drops in performance correspond to the large-to-eroded and tight-to-large crossings. In this case, the CNNs show a consistent decline in the worst cases (about 30% when tested on eroded and large, respectively) but, in the other cases, they proved to be very robust, especially with the models created with the tight version.

4.2. Discussion

Going into more detail on the individual descriptors on both data sets and comparing the performance obtained with the large and tight versions, it can be seen that the hand-crafted descriptors perform better with the tight versions. In fact, invariant moments and texture features computed on the tight versions outperformed their large-version counterparts, particularly on R-WBC, by about 30% and 20%, respectively, with all classifiers. In addition, on ALL-IDB2, several hand-crafted descriptors produced results comparable to the absolute best. For example, kNN and SVM trained with the Haar features in the large case, and kNN trained with the auto-correlogram and Gabor features in the tight case, produced results comparable to the same classifiers trained with CNN features.
On the contrary, features extracted from CNNs show a counter-trend and always perform better on the large version. This is most likely due to the filtering operations, which slowly degrade the information present at the edges of the images and therefore strongly penalise tight BBs.
Furthermore, the trend regarding the CNNs is that they performed better when used for feature extraction to feed SVM or RF classifiers while performing slightly worse on their own.
On average, the tables show that CNN models trained with the tight versions produced excellent results when tested on the tight, eroded and dilated versions compared to the remaining classifiers. This trend is even more evident on R-WBC: on the same three crossings defined above, the performance is higher than 90%. Therefore, it seems that the tight versions of the data sets are more suitable for training CNNs. As a general rule, the tight version is preferable to the large version because of the cross-version performance: a detection or segmentation method can rarely produce bounding boxes as large as those provided in the original versions of the data sets.
Looking more closely at individual descriptors on both data sets when exploiting the large version, the CNNs, especially the simplest ones (AlexNet, VGG-16, and VGG-19), showed superior performance, even though they suffered the most significant drop, particularly on the R-WBC data set. In contrast, several descriptors proved to be robust when exploiting the tight version, particularly the Legendre moments on ALL-IDB2 and the Haar wavelet on R-WBC, with all classifiers.
In general, the following observations can be made from the results obtained:
1. HC descriptors are more appropriate for both tasks when they are extracted from the tight version of the data sets (in particular invariant moments and texture), which makes them more robust to BB variations;
2. on the ALL-IDB2 task (ALL vs. healthy cells detection), which is finer and more difficult, several HC descriptors (in particular Haar from large and Gabor from tight) produced results in line with the descriptors extracted from CNNs;
3. CNNs used as feature extractors produced better results than CNNs alone in practically all cases, although the large version is certainly more suitable than the tight one for feature extraction;
4. CNNs alone, when trained on the tight versions, proved to be very robust to every variation of BBs except the large case, which is nevertheless rare in real application scenarios.

5. Conclusions

In this work, we proposed an in-depth investigation of current white blood cell analysis methods in a real application scenario. In such a scenario, the regions of interest are automatically extracted by a region detector or a region proposal method and, as a result, are inaccurate: the cells may not be well centred or may not even be completely included. In order to assess whether and to what extent such factors can affect the performance of classification systems for WBC sub-type classification and leukaemia detection, we evaluated both hand-crafted and deep features combined with different classifiers, as well as Convolutional Neural Networks. Obviously, in this work, we did not aim to create a classification system capable of competing with state-of-the-art methods, but only to make a quantitative evaluation of each single descriptor/classifier; therefore, we discarded a priori all combinations of descriptors or ensembles of classifiers. Experimental results confirmed that Convolutional Neural Networks are very robust in a scenario where there is large variability and the testing images differ considerably from the training ones. Nevertheless, compared with the hand-crafted features combined with traditional classifiers, the gap in performance is limited or absent, especially when exploiting appropriate images for training the models. In this case, the images used for training are well centred and present the smallest portion of background, which is a valid assumption even in a real application scenario, given that the images used for training could still be produced manually or, better, produced automatically and double-checked by an operator. In the future, we aim to investigate a similar scenario where the regions of interest are produced by a segmentation-based system rather than a detection-based one. It could also be interesting to investigate features created ad hoc for peripheral blood image analysis in such a scenario.

Author Contributions

Conceptualisation, A.L. and L.P.; Methodology, A.L. and L.P.; Investigation, A.L. and L.P.; software, A.L. and L.P.; writing—original draft, A.L. and L.P.; writing—review and editing, A.L. and L.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

All the features extracted and the models produced in this study are available at the following URL: GitHub repository (accessed on 21 June 2021).

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
PBS Peripheral Blood Smear
RBC Red Blood Cells
WBC White Blood Cells
ALL Acute Lymphoblastic Leukaemia
AML Acute Myeloid Leukaemia
CLL Chronic Lymphocytic Leukaemia
CML Chronic Myeloid Leukaemia
CAD Computer-Aided Diagnosis
ALL-IDB Acute Lymphoblastic Leukaemia Image Database
R-WBC Raabin-WBC
BB Bounding Box
ML Machine Learning
DL Deep Learning
CNN Convolutional Neural Network
LBP Local Binary Pattern
GLCM Gray Level Co-occurrence Matrix
FC Fully Connected
TP True Positive
TN True Negative
FN False Negative
FP False Positive
A Accuracy
P Precision
R Recall
S Specificity
F1 F1-score
WFS Weighted F1-score

References

  1. Ciesla, B. Hematology in Practice; FA Davis: Philadelphia, PA, USA, 2011. [Google Scholar]
  2. Biondi, A.; Cimino, G.; Pieters, R.; Pui, C.H. Biological and therapeutic aspects of infant leukemia. Blood 2000, 96, 24–33. [Google Scholar] [CrossRef]
  3. Labati, R.D.; Piuri, V.; Scotti, F. All-IDB: The acute lymphoblastic leukemia image database for image processing. In Proceedings of the IEEE ICIP International Conference on Image Processing, Brussels, Belgium, 29 December 2011; pp. 2045–2048. [Google Scholar]
  4. University Of Leeds The Histology Guide. 2021. Available online: https://www.histology.leeds.ac.uk/blood/blood_wbc.php (accessed on 10 June 2021).
  5. Bain, B.J. A Beginner’s Guide to Blood Cells; Wiley Online Library: New York, NY, USA, 2004. [Google Scholar]
  6. Cancer Treatment Centers of America, Types of Leukemia. 2021. Available online: https://www.cancercenter.com/cancer-types/leukemia/types (accessed on 11 June 2021).
  7. United States National Cancer Institute, Leukemia. 2021. Available online: https://www.cancer.gov/types/leukemia/hp (accessed on 11 June 2021).
  8. Madhloom, H.T.; Kareem, S.A.; Ariffin, H.; Zaidan, A.A.; Alanazi, H.O.; Zaidan, B.B. An automated white blood cell nucleus localization and segmentation using image arithmetic and automatic threshold. J. Appl. Sci. 2010, 10, 959–966. [Google Scholar] [CrossRef] [Green Version]
  9. Putzu, L.; Caocci, G.; Di Ruberto, C. Leucocyte classification for leukaemia detection using image processing techniques. AIM 2014, 62, 179–191. [Google Scholar] [CrossRef] [Green Version]
  10. Alomari, Y.M.; Sheikh Abdullah, S.N.H.; Zaharatul Azma, R.; Omar, K. Automatic detection and quantification of WBCs and RBCs using iterative structured circle detection algorithm. Comput. Math. Methods Med. 2014, 2014. [Google Scholar] [CrossRef] [Green Version]
  11. Mohapatra, S.; Patra, D.; Satpathy, S. An ensemble classifier system for early diagnosis of acute lymphoblastic leukemia in blood microscopic images. Neural Comput. Appl. 2014, 24, 1887–1904. [Google Scholar] [CrossRef]
  12. Ruberto, C.D.; Loddo, A.; Putzu, L. A leucocytes count system from blood smear images Segmentation and counting of white blood cells based on learning by sampling. Mach. Vis. Appl. 2016, 27, 1151–1160. [Google Scholar] [CrossRef]
  13. Vincent, I.; Kwon, K.; Lee, S.; Moon, K. Acute lymphoid leukemia classification using two-step neural network classifier. In Proceedings of the 2015 21st Korea-Japan Joint Workshop on Frontiers of Computer Vision, Mokpo, South Korea, 28–30 January 2015; pp. 1–4. [Google Scholar] [CrossRef]
  14. Singh, G.; Bathla, G.; Kaur, S. Design of new architecture to detect leukemia cancer from medical images. Int. J. Appl. Eng. Res. 2016, 11, 7087–7094. [Google Scholar]
  15. Di Ruberto, C.; Loddo, A.; Puglisi, G. Blob Detection and Deep Learning for Leukemic Blood Image Analysis. Appl. Sci. 2020, 10, 1176. [Google Scholar] [CrossRef]
  16. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet Classification with Deep Convolutional Neural Networks. In Proceedings of the 25th International Conference on Neural Information Processing Systems, Lake Tahoe, NV, USA, 3–6 December 2012; Volume 1, pp. 1097–1105. [Google Scholar]
  17. Qin, F.; Gao, N.; Peng, Y.; Wu, Z.; Shen, S.; Grudtsin, A. Fine-grained leukocyte classification with deep residual learning for microscopic images. Comput. Methods Programs Biomed. 2018, 162, 243–252. [Google Scholar] [CrossRef]
  18. Mahmood, N.H.; Lim, P.C.; Mazalan, S.M.; Razak, M.A.A. Blood cells extraction using color based segmentation technique. Int. J. Life Sci. Biotechnol. Pharma Res. 2013, 2. [Google Scholar] [CrossRef]
  19. Sipes, R.; Li, D. Using convolutional neural networks for automated fine grained image classification of acute lymphoblastic leukemia. In Proceedings of the 3rd International Conference on Computational Intelligence and Applications; Institute of Electrical and Electronics Engineers Inc.: New York, NY, USA, 2018; pp. 157–161. [Google Scholar]
  20. Ruberto, C.D.; Loddo, A.; Putzu, L. Detection of red and white blood cells from microscopic blood images using a region proposal approach. Comput. Biol. Med. 2020, 116, 103530. [Google Scholar] [CrossRef]
  21. Zhao, Z.; Zheng, P.; Xu, S.; Wu, X. Object Detection With Deep Learning: A Review. IEEE Trans. Neural Netw. Learn. Syst. 2019, 30, 3212–3232. [Google Scholar] [CrossRef] [Green Version]
  22. Ren, S.; He, K.; Girshick, R.B.; Sun, J. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 1137–1149. [Google Scholar] [CrossRef] [Green Version]
  23. Çınar, A.; Tuncer, S.A. Classification of lymphocytes, monocytes, eosinophils, and neutrophils on white blood cells using hybrid Alexnet-GoogleNet-SVM. SN Appl. Sci. 2021, 3, 1–11. [Google Scholar] [CrossRef]
  24. Semerjian, S.; Khong, Y.F.; Mirzaei, S. White Blood Cells Classification Using Built-in Customizable Trained Convolutional Neural Network. In Proceedings of the 2021 International Conference on Emerging Smart Computing and Informatics (ESCI), Pune, India, 5–7 March 2021; pp. 357–362. [Google Scholar]
  25. Yao, X.; Sun, K.; Bu, X.; Zhao, C.; Jin, Y. Classification of white blood cells using weighted optimized deformable convolutional neural networks. Artif. Cells Nanomed. Biotechnol. 2021, 49, 147–155. [Google Scholar] [CrossRef] [PubMed]
  26. Ridoy, M.A.R.; Islam, M.R. An automated approach to white blood cell classification using a lightweight convolutional neural network. In Proceedings of the 2020 2nd International Conference on Advanced Information and Communication Technology, Dhaka, Bangladesh, 28–29 November 2020; pp. 480–483. [Google Scholar]
  27. Pandey, P.; Kyatham, V.; Mishra, D.; Dastidar, T.R. Target-Independent Domain Adaptation for WBC Classification Using Generative Latent Search. IEEE Trans. Med. Imaging 2020, 39, 3979–3991. [Google Scholar] [CrossRef] [PubMed]
  28. Mooney, P. Blood Cell Images Data Set. 2014. Available online: https://github.com/Shenggan/BCCD_Dataset (accessed on 11 June 2021).
  29. Vogado, L.H.; Veras, R.M.; Araujo, F.H.; Silva, R.R.; Aires, K.R. Leukemia diagnosis in blood slides using transfer learning in CNNs and SVM for classification. Eng. Appl. Artif. Intell. 2018, 72, 415–422. [Google Scholar] [CrossRef]
  30. Sahlol, A.T.; Kollmannsberger, P.; Ewees, A.A. Efficient Classification of White Blood Cell Leukemia with Improved Swarm Optimization of Deep Features. Sci. Rep. 2020, 10, 1–11. [Google Scholar] [CrossRef] [PubMed]
  31. Toğaçar, M.; Ergen, B.; Cömert, Z. Classification of white blood cells using deep features obtained from Convolutional Neural Network models based on the combination of feature selection methods. Appl. Soft Comput. J. 2020, 97, 106810. [Google Scholar] [CrossRef]
  32. Ttp, T.; Pham, G.N.; Park, J.H.; Moon, K.S.; Lee, S.H.; Kwon, K.R. Acute leukemia classification using convolution neural network in clinical decision support system. CS IT Conf. Proc. 2017, 7, 49–53. [Google Scholar]
  33. Khandekar, R.; Shastry, P.; Jaishankar, S.; Faust, O.; Sampathila, N. Automated blast cell detection for Acute Lymphoblastic Leukemia diagnosis. Biomed. Signal Process. Control. 2021, 68, 102690. [Google Scholar] [CrossRef]
  34. Mondal, C.; Hasan, M.K.; Jawad, M.T.; Dutta, A.; Islam, M.R.; Awal, M.A.; Ahmad, M. Acute Lymphoblastic Leukemia Detection from Microscopic Images Using Weighted Ensemble of Convolutional Neural Networks. arXiv 2021, arXiv:2105.03995. [Google Scholar]
  35. Huang, Q.; Li, W.; Zhang, B.; Li, Q.; Tao, R.; Lovell, N.H. Blood Cell Classification Based on Hyperspectral Imaging with Modulated Gabor and CNN. IEEE J. Biomed. Health Inform. 2020, 24, 160–170. [Google Scholar] [CrossRef]
  36. Huang, P.; Wang, J.; Zhang, J.; Shen, Y.; Liu, C.; Song, W.; Wu, S.; Zuo, Y.; Lu, Z.; Li, D. Attention-Aware Residual Network Based Manifold Learning for White Blood Cells Classification. IEEE J. Biomed. Health Inform. 2020, 25, 1206–1214. [Google Scholar] [CrossRef] [PubMed]
  37. Duggal, R.; Gupta, A.; Gupta, R.; Mallick, P. SD-layer: Stain deconvolutional layer for CNNs in medical microscopic imaging. In International Conference on Medical Image Computing and Computer-Assisted Intervention; Springer: New York, NY, USA, 6–10 September 2017; pp. 435–443. [Google Scholar]
  38. Ahmed, N.; Yigit, A.; Isik, Z.; Alpkocak, A. Identification of Leukemia Subtypes from Microscopic Images Using Convolutional Neural Network. Diagnostics 2019, 9, 104. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  39. Shafique, S.; Tehsin, S. Acute Lymphoblastic Leukemia Detection and Classification of Its Subtypes Using Pretrained Deep Convolutional Neural Networks. Technol. Cancer Res. Treat. 2018, 17, 1533033818802789. [Google Scholar] [CrossRef] [Green Version]
  40. Kouzehkanan, S.Z.M.; Saghari, S.; Tavakoli, E.; Rostami, P.; Abaszadeh, M.; Satlsar, E.S.; Mirzadeh, F.; Gheidishahran, M.; Gorgi, F.; Mohammadi, S.; et al. Raabin-WBC: A large free access dataset of white blood cells from normal peripheral blood. bioRxiv 2021. [Google Scholar] [CrossRef]
  41. Teague, M.R. Image analysis via the general theory of moments. J. Opt. Soc. Am. 1980, 70, 920–930. [Google Scholar] [CrossRef]
  42. Chong, C.W.; Raveendran, P.; Mukundan, R. Translation and scale invariants of Legendre moments. Pattern Recognit. 2004, 37, 119–129. [Google Scholar] [CrossRef]
  43. Ma, Z.; Kang, B.; Ma, J. Translation and scale invariant of Legendre moments for images retrieval. J. Inf. Comput. Sci. 2011, 8, 2221–2229. [Google Scholar]
  44. Oujaoura, M.; Minaoui, B.; Fakir, M. Image annotation by moments. Moments-Moment-Invariants Theory Appl. 2014, 1, 227–252. [Google Scholar]
  45. Di Ruberto, C.; Putzu, L.; Rodriguez, G. Fast and accurate computation of orthogonal moments for texture analysis. Pattern Recognit. 2018, 83, 498–510. [Google Scholar] [CrossRef] [Green Version]
  46. Putzu, L.; Di Ruberto, C. Rotation Invariant Co-occurrence Matrix Features. In 19th International Conference ICIAP on Image Analysis and Processing; Springer International Publishing: Cham, Switzerland, 2017; Volume 10484, pp. 391–401. [Google Scholar] [CrossRef]
  47. Ojala, T.; Pietikäinen, M.; Maenpaa, T. Multiresolution gray-scale and rotation invariant texture classification with local binary pattern. IEEE Trans. Pattern Anal. Mach. Intell. 2002, 24, 971–987. [Google Scholar] [CrossRef]
  48. Haralick, R.M.; Shanmugam, K.; Dinstein, I.H. Textural Features for Image Classification. IEEE Trans. Syst. Man Cybern. 1973, SMC-3, 610–621. [Google Scholar] [CrossRef] [Green Version]
  49. Mitro, J. Content-based image retrieval tutorial. arXiv 2016, arXiv:1608.03811. [Google Scholar]
  50. Samantaray, A.; Rahulkar, A. New design of adaptive Gabor wavelet filter bank for medical image retrieval. IET Image Process. 2020, 14, 679–687. [Google Scholar] [CrossRef]
  51. Singha, M.; Hemachandran, K.; Paul, A. Content-based image retrieval using the combination of the fast wavelet transformation and the colour histogram. IET Image Process. 2012, 6, 1221–1226. [Google Scholar] [CrossRef]
  52. Cover, T.M.; Hart, P.E. Nearest neighbor pattern classification. IEEE Trans. Inf. Theory 1967, 13, 21–27. [Google Scholar] [CrossRef]
  53. Lin, Y.; Lv, F.; Zhu, S.; Yang, M.; Cour, T.; Yu, K.; Cao, L.; Huang, T.S. Large-scale image classification: Fast feature extraction and SVM training. In Proceedings of the 24th IEEE Conference on Computer Vision and Pattern Recognition, Colorado Springs, CO, USA, 20–25 June 2011; pp. 1689–1696. [Google Scholar]
  54. Breiman, L. Random Forests. Mach. Learn. 2001, 4, 5–32. [Google Scholar] [CrossRef] [Green Version]
  55. Bagheri, M.A.; Montazer, G.A.; Escalera, S. Error correcting output codes for multiclass classification: Application to two image vision problems. In Proceedings of the 16th CSI International Symposium on Artificial Intelligence and Signal Processing, Shiraz, Iran, 2–3 May 2012; pp. 508–513. [Google Scholar]
  56. Deng, J.; Dong, W.; Socher, R.; Li, L.J.; Li, K.; Fei-Fei, L. Imagenet: A large-scale hierarchical image database. In Proceedings of the Computer Vision and Pattern Recognition (CVPR), Miami, FL, USA, 20–25 June 2009; pp. 248–255. [Google Scholar]
  57. Shin, H.; Roth, H.R.; Gao, M.; Lu, L.; Xu, Z.; Nogues, I.; Yao, J.; Mollura, D.; Summers, R.M. Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning. IEEE Trans. Med. Imaging 2016, 35, 1285–1298. [Google Scholar] [CrossRef] [Green Version]
  58. Simonyan, K.; Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. In Proceedings of the 3rd International Conference on Learning Representations, San Diego, CA, USA, 7–9 May 2015. [Google Scholar]
  59. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar] [CrossRef] [Green Version]
  60. Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going deeper with convolutions. In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 1–9. [Google Scholar] [CrossRef] [Green Version]
  61. Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J.; Wojna, Z. Rethinking the Inception Architecture for Computer Vision. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 26 June–1 July 2016; pp. 2818–2826. [Google Scholar] [CrossRef] [Green Version]
Figure 1. Representation of the WBC sub-types categorisation.
Figure 2. WBC categories from ALL-IDB2: healthy lymphocyte (left) and lymphoblast (right).
Figure 3. WBC sub-type images from R-WBC. From left to right: Basophil, Eosinophil, Lymphocyte, Monocyte and Neutrophil.
Figure 4. Tight (top), eroded (middle) and dilated (bottom) versions of the ALL-IDB2 (first two columns) and R-WBC (remaining columns) data sets.
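For readers who wish to reproduce this kind of region perturbation, the sketch below shows one plausible way to derive tight, eroded and dilated crops from a segmented cell, mimicking the under- and over-segmentation produced by an imperfect detection step. It is an illustration only, not the authors' exact pipeline: the 9-pixel square structuring element, the single morphology iteration and the helper name make_region_variants are all assumptions.

    import cv2
    import numpy as np

    def make_region_variants(image, mask, kernel_size=9):
        """Derive tight, eroded and dilated crops of a cell from its binary mask."""
        kernel = np.ones((kernel_size, kernel_size), np.uint8)
        masks = {
            "tight": mask,
            "eroded": cv2.erode(mask, kernel, iterations=1),    # under-segmentation
            "dilated": cv2.dilate(mask, kernel, iterations=1),  # over-segmentation
        }
        crops = {}
        for name, m in masks.items():
            x, y, w, h = cv2.boundingRect(m)        # box enclosing the non-zero mask
            crops[name] = image[y:y + h, x:x + w]   # corresponding image region
        return crops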
Table 1. Hyper-parameter settings for CNN fine-tuning.

Parameter                Value
Solver                   Adam
Max Epochs               50
Mini Batch Size          8
Initial Learn Rate       1e-4
Learn Rate Drop Period   10
Learn Rate Drop Factor   0.1
L2 Regularisation        0.1
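A minimal sketch of how the Table 1 settings map onto a fine-tuning loop is given below. The framework (PyTorch), the fine_tune name and the use of Adam's weight_decay as a stand-in for the L2 regularisation term are assumptions, since the paper does not fix these details; model and train_loader (yielding mini-batches of 8) are taken as given.

    import torch
    from torch.optim import Adam
    from torch.optim.lr_scheduler import StepLR

    def fine_tune(model, train_loader, device="cuda"):
        criterion = torch.nn.CrossEntropyLoss()
        optimizer = Adam(model.parameters(),
                         lr=1e-4,           # Initial Learn Rate
                         weight_decay=0.1)  # L2 Regularisation (as weight decay)
        scheduler = StepLR(optimizer,
                           step_size=10,    # Learn Rate Drop Period (epochs)
                           gamma=0.1)       # Learn Rate Drop Factor
        model.to(device).train()
        for epoch in range(50):             # Max Epochs
            for images, labels in train_loader:
                images, labels = images.to(device), labels.to(device)
                optimizer.zero_grad()
                loss = criterion(model(images), labels)
                loss.backward()
                optimizer.step()
            scheduler.step()                # drop the learning rate every 10 epochs
        return model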
Table 2. ALL-IDB2 performance obtained with source models trained on the large data set version and tested, in turn (from left to right), on the large, tight, eroded and dilated versions. Values are percentages: A = Accuracy, P = Precision, R = Recall, S = Specificity, F1 = F1-score. The Large column reports same-version performance (highlighted in grey in the original table).

Classifier | Descriptor | Large: A P R S F1 | Tight: A P R S F1 | Eroded: A P R S F1 | Dilated: A P R S F1
kNN | Legendre | 56.4 55.6 64.1 48.7 59.5 | 52.6 100.0 5.1 100.0 9.8 | 53.8 80.0 10.3 97.4 18.2 | 53.8 63.6 17.9 89.7 28.0
kNN | Zernike | 35.9 33.3 28.2 43.6 30.6 | 10.3 10.3 10.3 10.3 10.3 | 19.2 16.7 15.4 23.1 16.0 | 21.8 25.0 28.2 15.4 26.5
kNN | HARri | 61.5 58.8 76.9 46.2 66.7 | 50.0 50.0 15.4 84.6 23.5 | 46.2 36.4 10.3 82.1 16.0 | 52.6 62.5 12.8 92.3 21.3
kNN | LBP18 | 53.8 52.5 82.1 25.6 64.0 | 47.4 48.7 94.9 0.0 64.3 | 50.0 50.0 100.0 0.0 66.7 | 47.4 48.7 94.9 0.0 64.3
kNN | Histogram | 64.1 60.8 79.5 48.7 68.9 | 46.2 33.3 7.7 84.6 12.5 | 47.4 37.5 7.7 87.2 12.8 | 43.6 27.3 7.7 79.5 12.0
kNN | Correlogram | 50.0 50.0 10.3 89.7 17.0 | 53.8 71.4 12.8 94.9 21.7 | 47.4 40.0 10.3 84.6 16.3 | 53.8 61.5 20.5 87.2 30.8
kNN | Haar wavelet | 75.6 69.2 92.3 59.0 79.1 | 70.5 73.5 64.1 76.9 68.5 | 53.8 53.7 56.4 51.3 55.0 | 79.5 87.1 69.2 89.7 77.1
kNN | Gabor wavelet | 43.6 45.1 59.0 28.2 51.1 | 60.3 63.3 48.7 71.8 55.1 | 53.8 54.3 48.7 59.0 51.4 | 52.6 52.2 61.5 43.6 56.5
kNN | AlexNet | 70.5 72.2 66.7 74.4 69.3 | 75.6 67.2 100.0 51.3 80.4 | 71.8 69.8 76.9 66.7 73.2 | 66.7 60.7 94.9 38.5 74.0
kNN | VGG-16 | 69.2 65.3 82.1 56.4 72.7 | 66.7 63.3 79.5 53.8 70.5 | 67.9 64.0 82.1 53.8 71.9 | 64.1 61.2 76.9 51.3 68.2
kNN | VGG-19 | 69.2 61.9 100.0 38.5 76.5 | 83.3 77.1 94.9 71.8 85.1 | 71.8 66.7 87.2 56.4 75.6 | 76.9 69.8 94.9 59.0 80.4
kNN | ResNet-18 | 75.6 67.9 97.4 53.8 80.0 | 66.7 69.7 59.0 74.4 63.9 | 60.3 60.0 61.5 59.0 60.8 | 73.1 73.7 71.8 74.4 72.7
kNN | ResNet-50 | 82.1 82.1 82.1 82.1 82.1 | 34.6 33.3 30.8 38.5 32.0 | 30.8 27.3 23.1 38.5 25.0 | 52.6 52.9 46.2 59.0 49.3
kNN | ResNet-101 | 76.9 80.0 71.8 82.1 75.7 | 46.2 38.5 12.8 79.5 19.2 | 39.7 10.0 2.6 76.9 4.1 | 50.0 50.0 20.5 79.5 29.1
kNN | GoogleNet | 82.1 77.8 89.7 74.4 83.3 | 51.3 100.0 2.6 100.0 5.0 | 50.0 0.0 0.0 100.0 0.0 | 57.7 100.0 15.4 100.0 26.7
kNN | Inception-v3 | 76.9 70.6 92.3 61.5 80.0 | 33.3 31.4 28.2 38.5 29.7 | 39.7 30.0 15.4 64.1 20.3 | 26.9 30.4 35.9 17.9 32.9
kNN | AVG | 65.2 62.7 73.4 57.1 66.0 | 53.0 58.2 41.7 64.4 40.7 | 50.2 43.5 38.0 62.5 36.4 | 54.6 57.9 48.1 61.1 46.9
SVM-RBF | Legendre | 60.3 68.2 38.5 82.1 49.2 | 48.7 40.0 5.1 92.3 9.1 | 47.4 40.0 10.3 84.6 16.3 | 51.3 100.0 2.6 100.0 5.0
SVM-RBF | Zernike | 41.0 42.6 51.3 30.8 46.5 | 38.5 41.8 59.0 17.9 48.9 | 60.3 58.3 71.8 48.7 64.4 | 48.7 49.2 79.5 17.9 60.8
SVM-RBF | HARri | 59.0 55.7 87.2 30.8 68.0 | 52.6 52.9 46.2 59.0 49.3 | 52.6 53.3 41.0 64.1 46.4 | 56.4 57.1 51.3 61.5 54.1
SVM-RBF | LBP18 | 61.5 58.2 82.1 41.0 68.1 | 50.0 50.0 100.0 0.0 66.7 | 50.0 50.0 100.0 0.0 66.7 | 50.0 50.0 100.0 0.0 66.7
SVM-RBF | Histogram | 38.5 35.5 28.2 48.7 31.4 | 47.4 37.5 7.7 87.2 12.8 | 46.2 20.0 2.6 89.7 4.5 | 35.9 7.7 2.6 69.2 3.8
SVM-RBF | Correlogram | 55.1 53.6 76.9 33.3 63.2 | 51.3 51.4 48.7 53.8 50.0 | 47.4 47.2 43.6 51.3 45.3 | 52.6 52.8 48.7 56.4 50.7
SVM-RBF | Haar wavelet | 82.1 74.5 97.4 66.7 84.4 | 50.0 50.0 94.9 5.1 65.5 | 50.0 50.0 97.4 2.6 66.1 | 69.2 62.7 94.9 43.6 75.5
SVM-RBF | Gabor wavelet | 50.0 50.0 76.9 23.1 60.6 | 64.1 69.0 51.3 76.9 58.8 | 60.3 65.4 43.6 76.9 52.3 | 55.1 54.8 59.0 51.3 56.8
SVM-RBF | AlexNet | 44.9 46.6 69.2 20.5 55.7 | 67.9 93.8 38.5 97.4 54.5 | 61.5 100.0 23.1 100.0 37.5 | 69.2 75.9 56.4 82.1 64.7
SVM-RBF | VGG-16 | 71.8 64.9 94.9 48.7 77.1 | 69.2 64.7 84.6 53.8 73.3 | 59.0 56.1 82.1 35.9 66.7 | 67.9 63.5 84.6 51.3 72.5
SVM-RBF | VGG-19 | 62.8 57.4 100.0 25.6 72.9 | 74.4 66.1 100.0 48.7 79.6 | 70.5 62.9 100.0 41.0 77.2 | 70.5 62.9 100.0 41.0 77.2
SVM-RBF | ResNet-18 | 78.2 71.2 94.9 61.5 81.3 | 70.5 80.8 53.8 87.2 64.6 | 62.8 67.9 48.7 76.9 56.7 | 74.4 80.6 64.1 84.6 71.4
SVM-RBF | ResNet-50 | 83.3 81.0 87.2 79.5 84.0 | 35.9 33.3 28.2 43.6 30.6 | 28.2 27.0 25.6 30.8 26.3 | 44.9 44.1 38.5 51.3 41.1
SVM-RBF | ResNet-101 | 82.1 85.7 76.9 87.2 81.1 | 73.1 95.0 48.7 97.4 64.4 | 61.5 90.9 25.6 97.4 40.0 | 79.5 92.6 64.1 94.9 75.8
SVM-RBF | GoogleNet | 83.3 79.5 89.7 76.9 84.3 | 51.3 100.0 2.6 100.0 5.0 | 50.0 0.0 0.0 100.0 0.0 | 57.7 100.0 15.4 100.0 26.7
SVM-RBF | Inception-v3 | 84.6 78.7 94.9 74.4 86.0 | 64.1 59.6 87.2 41.0 70.8 | 65.4 62.0 79.5 51.3 69.7 | 47.4 48.4 76.9 17.9 59.4
SVM-RBF | AVG | 64.9 62.7 77.9 51.9 68.4 | 56.8 61.6 53.5 60.1 50.2 | 54.6 53.2 49.7 59.5 46.0 | 58.2 62.6 58.7 57.7 53.9
RF | Legendre | 60.3 60.0 61.5 59.0 60.8 | 51.3 100.0 2.6 100.0 5.0 | 52.6 66.7 10.3 94.9 17.8 | 57.7 87.5 17.9 97.4 29.8
RF | Zernike | 25.6 25.6 25.6 25.6 25.6 | 35.9 36.6 38.5 33.3 37.5 | 26.9 17.9 12.8 41.0 14.9 | 43.6 40.7 28.2 59.0 33.3
RF | HARri | 52.6 51.6 82.1 23.1 63.4 | 48.7 47.8 28.2 69.2 35.5 | 43.6 39.1 23.1 64.1 29.0 | 56.4 61.9 33.3 79.5 43.3
RF | LBP18 | 47.4 48.6 92.3 2.6 63.7 | 50.0 0.0 0.0 100.0 0.0 | 50.0 0.0 0.0 100.0 0.0 | 51.3 100.0 2.6 100.0 5.0
RF | Histogram | 65.4 61.5 82.1 48.7 70.3 | 60.3 72.2 33.3 87.2 45.6 | 50.0 50.0 28.2 71.8 36.1 | 52.6 57.1 20.5 84.6 30.2
RF | Correlogram | 64.1 62.2 71.8 56.4 66.7 | 59.0 61.3 48.7 69.2 54.3 | 56.4 57.1 51.3 61.5 54.1 | 55.1 56.3 46.2 64.1 50.7
RF | Haar wavelet | 46.2 44.8 33.3 59.0 38.2 | 51.3 53.8 17.9 84.6 26.9 | 51.3 53.8 17.9 84.6 26.9 | 53.8 60.0 23.1 84.6 33.3
RF | Gabor wavelet | 42.3 45.2 71.8 12.8 55.4 | 35.9 40.7 61.5 10.3 49.0 | 46.2 46.5 51.3 41.0 48.8 | 39.7 43.1 64.1 15.4 51.5
RF | AlexNet | 60.3 62.5 51.3 69.2 56.3 | 88.5 96.9 79.5 97.4 87.3 | 69.2 89.5 43.6 94.9 58.6 | 69.2 69.2 69.2 69.2 69.2
RF | VGG-16 | 71.8 66.7 87.2 56.4 75.6 | 71.8 66.0 89.7 53.8 76.1 | 64.1 59.0 92.3 35.9 72.0 | 70.5 64.8 89.7 51.3 75.3
RF | VGG-19 | 65.4 59.1 100.0 30.8 74.3 | 78.2 73.9 87.2 69.2 80.0 | 69.2 65.3 82.1 56.4 72.7 | 80.8 74.0 94.9 66.7 83.1
RF | ResNet-18 | 76.9 70.6 92.3 61.5 80.0 | 67.9 79.2 48.7 87.2 60.3 | 57.7 59.4 48.7 66.7 53.5 | 76.9 88.9 61.5 92.3 72.7
RF | ResNet-50 | 80.8 78.6 84.6 76.9 81.5 | 34.6 35.0 35.9 33.3 35.4 | 30.8 27.3 23.1 38.5 25.0 | 46.2 45.2 35.9 56.4 40.0
RF | ResNet-101 | 79.5 81.1 76.9 82.1 78.9 | 59.0 66.7 35.9 82.1 46.7 | 52.6 55.6 25.6 79.5 35.1 | 62.8 70.8 43.6 82.1 54.0
RF | GoogleNet | 83.3 79.5 89.7 76.9 84.3 | 51.3 100.0 2.6 100.0 5.0 | 50.0 0.0 0.0 100.0 0.0 | 56.4 100.0 12.8 100.0 22.7
RF | Inception-v3 | 76.9 70.6 92.3 61.5 80.0 | 33.3 31.4 28.2 38.5 29.7 | 39.7 30.0 15.4 64.1 20.3 | 26.9 30.4 35.9 17.9 32.9
RF | AVG | 62.4 60.5 74.7 50.2 65.9 | 54.8 60.1 39.9 69.7 42.1 | 50.6 44.8 32.9 68.4 35.3 | 56.3 65.6 42.5 70.0 45.5
CNN | AlexNet | 44.9 20.5 40.0 46.5 27.1 | 65.4 94.9 59.7 87.5 73.3 | 57.7 100.0 54.2 100.0 70.3 | 71.8 89.7 66.0 84.0 76.1
CNN | VGG-16 | 61.5 23.1 100.0 56.5 37.5 | 73.1 56.4 84.6 67.3 67.7 | 67.9 43.6 85.0 62.1 57.6 | 70.5 51.3 83.3 64.8 63.5
CNN | VGG-19 | 57.7 15.4 100.0 54.2 26.7 | 71.8 59.0 79.3 67.3 67.6 | 66.7 51.3 74.1 62.7 60.6 | 70.5 48.7 86.4 64.3 62.3
CNN | ResNet-18 | 75.6 51.3 100.0 67.2 67.8 | 66.7 33.3 100.0 60.0 50.0 | 61.5 23.1 100.0 56.5 37.5 | 70.5 43.6 94.4 63.3 59.6
CNN | ResNet-50 | 75.6 64.1 83.3 70.8 72.5 | 35.9 35.9 35.9 35.9 35.9 | 25.6 20.5 22.9 27.9 21.6 | 41.0 41.0 41.0 41.0 41.0
CNN | ResNet-101 | 75.6 56.4 91.7 68.5 69.8 | 71.8 59.0 79.3 67.3 67.6 | 61.5 48.7 65.5 59.2 55.9 | 70.5 48.7 86.4 64.3 62.3
CNN | GoogleNet | 71.8 43.6 100.0 63.9 60.7 | 66.7 100.0 60.0 100.0 75.0 | 64.1 100.0 58.2 100.0 73.6 | 88.5 100.0 81.2 100.0 89.7
CNN | Inception-v3 | 78.2 64.1 89.3 72.0 74.6 | 34.6 41.0 36.4 32.3 38.5 | 39.7 64.1 43.1 30.0 51.5 | 29.5 23.1 26.5 31.8 24.6
CNN | AVG | 67.6 42.3 88.0 62.5 54.6 | 60.7 59.9 66.9 64.7 59.5 | 55.6 56.4 62.9 62.3 53.6 | 64.1 55.8 70.7 64.2 59.9
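For reference, the A, P, R, S and F1 columns in Tables 2–5 can be computed from per-class confusion counts as sketched below. The standard definitions are assumed, and the paper's exact per-class averaging may differ.

    def classification_metrics(tp, fp, tn, fn):
        """Accuracy, precision, recall, specificity and F1 (in %) from confusion counts."""
        accuracy = (tp + tn) / (tp + tn + fp + fn)
        precision = tp / (tp + fp) if (tp + fp) else 0.0
        recall = tp / (tp + fn) if (tp + fn) else 0.0          # a.k.a. sensitivity
        specificity = tn / (tn + fp) if (tn + fp) else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if (precision + recall) else 0.0)
        return {name: round(100.0 * value, 1)
                for name, value in [("A", accuracy), ("P", precision),
                                    ("R", recall), ("S", specificity), ("F1", f1)]}

    # Example: 32 true positives, 4 false positives, 35 true negatives, 7 false negatives
    # print(classification_metrics(32, 4, 35, 7))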
Table 3. R-WBC performance obtained with source models trained on the large data set version and tested, in turn (from left to right), on the large, tight, eroded and dilated versions. Values are percentages: A = Accuracy, P = Precision, R = Recall, S = Specificity, F1 = F1-score. The Large column reports same-version performance (highlighted in grey in the original table).

Classifier | Descriptor | Large: A P R S F1 | Tight: A P R S F1 | Eroded: A P R S F1 | Dilated: A P R S F1
kNN | Legendre | 75.0 39.1 37.8 84.4 37.9 | 65.7 22.5 13.7 78.9 11.6 | 66.3 25.3 15.4 79.3 12.5 | 67.6 20.7 18.9 80.1 16.8
kNN | Zernike | 71.3 28.9 28.5 82.3 28.3 | 65.8 13.1 15.7 78.4 12.6 | 69.0 17.1 22.7 80.8 16.4 | 68.3 20.3 20.6 80.3 16.7
kNN | HARri | 70.0 25.3 25.3 81.3 25.2 | 65.9 16.8 15.1 78.8 13.1 | 65.7 19.0 14.8 78.4 12.3 | 64.8 10.0 12.5 78.1 10.0
kNN | LBP18 | 78.3 47.2 46.2 86.6 46.4 | 68.1 3.6 18.9 81.1 6.0 | 68.1 3.6 18.9 81.1 6.0 | 68.1 3.6 18.9 81.1 6.0
kNN | Histogram | 70.2 25.6 26.2 81.4 25.7 | 69.0 23.3 23.5 80.4 21.8 | 70.1 24.0 26.5 80.9 23.2 | 67.2 18.9 18.6 79.2 17.9
kNN | Correlogram | 71.1 26.9 27.6 82.3 21.6 | 69.2 21.3 23.0 81.0 18.3 | 70.6 26.3 26.7 81.8 23.5 | 70.6 28.5 26.5 82.0 22.5
kNN | Haar wavelet | 79.6 49.9 49.7 87.3 49.3 | 63.8 14.7 9.3 77.4 10.9 | 64.4 12.8 11.6 77.6 11.0 | 63.0 13.2 7.0 77.1 8.8
kNN | Gabor wavelet | 74.4 36.9 36.0 84.1 35.1 | 71.5 28.8 28.8 82.0 23.5 | 69.6 24.1 23.8 81.2 19.2 | 72.0 31.3 29.7 82.5 25.5
kNN | AlexNet | 97.0 92.9 92.4 98.0 92.3 | 77.0 45.2 43.3 85.1 36.4 | 76.2 38.6 41.3 84.6 32.4 | 80.6 55.1 52.3 87.4 45.5
kNN | VGG-16 | 98.3 96.0 95.9 98.9 95.9 | 77.9 56.1 46.5 85.9 40.5 | 73.4 51.4 35.5 82.8 29.1 | 82.8 61.9 58.4 89.0 54.6
kNN | VGG-19 | 95.7 89.2 89.2 97.2 89.1 | 68.5 20.2 20.9 80.8 10.0 | 68.1 18.2 19.5 80.9 7.8 | 70.1 29.5 25.6 81.6 16.5
kNN | ResNet-18 | 96.3 90.6 90.7 97.7 90.6 | 72.2 33.6 30.2 83.0 20.2 | 69.7 42.7 23.3 81.9 13.6 | 74.2 40.4 35.5 84.2 26.5
kNN | ResNet-50 | 96.8 92.8 92.2 97.9 92.1 | 81.5 62.6 55.2 88.0 51.8 | 77.0 61.0 44.2 85.0 41.3 | 83.9 63.0 61.0 89.6 57.5
kNN | ResNet-101 | 98.6 96.5 96.5 99.1 96.5 | 73.5 37.2 35.8 82.8 28.5 | 69.7 35.4 26.2 80.2 16.2 | 78.1 54.5 47.1 86.0 39.5
kNN | GoogleNet | 97.7 94.4 94.2 98.5 94.1 | 77.5 46.6 44.5 85.7 34.6 | 75.9 43.2 40.7 84.7 28.7 | 82.5 55.7 57.3 89.0 51.3
kNN | Inception-v3 | 98.4 96.4 96.2 99.0 96.2 | 70.1 59.5 27.3 80.5 19.0 | 69.3 62.8 25.3 79.9 15.8 | 72.7 59.4 33.7 82.3 28.7
kNN | AVG | 85.5 64.3 64.0 91.0 63.5 | 71.1 31.6 28.2 81.9 22.4 | 70.2 31.6 26.0 81.3 19.3 | 72.9 35.4 32.7 83.1 27.8
SVM-RBF | Legendre | 77.1 43.1 43.0 85.6 43.0 | 64.6 18.6 10.8 78.5 6.9 | 64.5 43.5 11.0 78.1 10.2 | 69.0 34.3 21.8 81.1 13.8
SVM-RBF | Zernike | 74.9 35.7 37.8 84.3 35.2 | 63.6 11.1 9.0 77.4 6.8 | 68.2 29.3 20.1 80.6 13.6 | 68.0 12.5 19.5 80.5 11.9
SVM-RBF | HARri | 74.7 37.4 36.6 84.6 35.0 | 71.6 23.1 29.4 81.8 24.0 | 73.1 35.1 33.1 82.8 28.9 | 70.4 22.8 26.5 81.3 23.2
SVM-RBF | LBP18 | 74.8 42.9 38.1 84.1 39.0 | 67.7 4.5 21.2 78.8 7.4 | 67.7 4.5 21.2 78.8 7.4 | 67.7 4.5 21.2 78.8 7.4
SVM-RBF | Histogram | 79.6 51.2 49.1 87.8 45.3 | 62.8 6.1 5.2 77.6 5.0 | 63.8 6.6 8.4 78.0 6.9 | 63.5 14.8 7.0 77.9 8.5
SVM-RBF | Correlogram | 78.4 48.4 46.8 86.5 47.3 | 72.2 44.4 31.7 82.1 30.5 | 71.8 43.3 30.8 81.7 28.7 | 73.8 46.5 35.5 83.1 35.4
SVM-RBF | Haar wavelet | 82.5 57.4 56.7 89.1 56.6 | 67.7 15.8 20.1 79.2 12.7 | 68.8 19.0 23.3 79.8 15.1 | 67.8 21.1 19.8 79.6 15.0
SVM-RBF | Gabor wavelet | 79.9 47.7 49.7 87.4 48.4 | 71.8 34.1 29.9 82.1 24.3 | 71.4 33.5 29.1 81.8 24.4 | 73.6 38.3 34.3 83.4 30.3
SVM-RBF | AlexNet | 98.0 95.2 95.1 98.7 95.0 | 76.3 50.3 41.6 84.4 33.4 | 75.2 46.3 39.0 83.7 29.8 | 80.4 57.6 51.7 87.1 44.6
SVM-RBF | VGG-16 | 98.7 96.8 96.8 99.2 96.8 | 79.2 55.2 49.4 86.9 42.2 | 75.1 45.5 39.5 84.1 32.2 | 83.2 59.5 59.3 89.5 53.8
SVM-RBF | VGG-19 | 96.1 90.0 90.1 97.5 90.1 | 68.0 4.6 19.2 80.7 7.1 | 68.0 3.7 18.9 80.9 6.2 | 68.3 26.4 20.3 80.8 9.6
SVM-RBF | ResNet-18 | 96.2 90.3 90.4 97.6 90.3 | 70.8 39.7 26.7 82.3 16.9 | 68.8 37.8 20.9 81.3 9.8 | 72.8 41.2 32.3 83.3 22.7
SVM-RBF | ResNet-50 | 97.9 95.0 94.8 98.6 94.7 | 79.6 59.8 50.6 86.8 47.7 | 73.8 58.2 36.3 82.9 32.2 | 84.3 62.8 61.9 89.8 58.0
SVM-RBF | ResNet-101 | 98.8 97.1 97.1 99.2 97.1 | 75.7 37.7 41.3 84.2 31.5 | 73.9 36.4 36.6 83.0 27.1 | 77.7 57.9 46.2 85.7 37.8
SVM-RBF | GoogleNet | 97.7 94.5 94.2 98.5 94.1 | 74.6 36.2 36.3 84.4 25.6 | 72.8 34.4 32.0 83.3 20.3 | 78.5 43.5 46.2 86.6 38.1
SVM-RBF | Inception-v3 | 98.3 96.0 95.9 98.9 95.9 | 70.9 59.6 29.4 81.1 21.9 | 69.0 40.1 24.7 79.7 14.9 | 74.2 58.4 37.5 83.4 32.4
SVM-RBF | AVG | 87.7 69.9 69.5 92.3 69.0 | 71.1 31.3 28.2 81.8 21.5 | 70.4 32.3 26.6 81.3 19.2 | 73.3 37.6 33.8 83.2 27.7
RF | Legendre | 75.2 38.3 38.4 84.5 38.2 | 69.2 11.7 21.8 81.5 12.0 | 67.4 13.9 18.0 80.2 12.4 | 70.2 24.0 24.7 82.1 15.4
RF | Zernike | 73.2 31.3 33.1 83.5 31.5 | 65.4 7.7 12.8 79.0 6.3 | 68.7 16.3 20.9 81.2 12.1 | 68.3 16.9 19.8 80.9 11.2
RF | HARri | 83.5 60.5 59.0 89.8 59.4 | 72.6 21.3 31.1 83.2 22.1 | 70.6 20.3 26.2 82.0 17.0 | 74.2 47.1 35.8 83.9 28.6
RF | LBP18 | 79.9 53.5 49.4 87.8 49.9 | 68.1 9.8 22.1 79.2 11.0 | 67.8 13.9 21.5 78.9 9.4 | 68.8 15.3 23.8 79.7 14.3
RF | Histogram | 81.0 53.9 52.9 88.2 52.6 | 66.4 46.9 15.7 79.4 13.6 | 66.2 31.8 14.8 79.3 13.2 | 67.1 29.4 17.4 79.6 17.3
RF | Correlogram | 79.0 48.5 48.0 87.0 47.9 | 74.4 36.9 36.3 84.0 31.2 | 73.4 36.1 34.0 83.4 29.2 | 75.4 37.6 39.2 84.6 35.3
RF | Haar wavelet | 88.3 71.0 70.6 92.7 70.6 | 63.7 17.6 8.4 77.7 8.4 | 63.7 17.0 8.1 77.7 7.5 | 65.7 30.5 13.7 78.9 14.3
RF | Gabor wavelet | 81.3 51.2 53.2 88.3 51.8 | 72.3 36.3 30.8 82.5 26.4 | 70.6 30.6 26.7 81.4 22.5 | 72.9 32.8 32.3 83.0 27.9
RF | AlexNet | 97.7 94.5 94.2 98.4 94.1 | 77.8 50.8 45.3 85.6 38.5 | 76.6 49.1 42.2 84.8 33.5 | 82.0 55.8 55.5 88.3 47.2
RF | VGG-16 | 98.6 96.6 96.5 99.1 96.5 | 71.1 47.6 29.4 81.2 21.8 | 68.9 39.6 24.1 79.7 13.2 | 76.4 53.7 42.2 84.6 38.6
RF | VGG-19 | 95.8 89.4 89.5 97.3 89.4 | 68.3 26.7 20.3 80.8 9.3 | 67.9 3.9 18.9 80.8 6.5 | 69.7 28.9 24.4 81.5 15.3
RF | ResNet-18 | 96.3 90.5 90.7 97.7 90.6 | 71.7 39.3 29.4 82.6 18.6 | 69.2 40.4 22.4 81.4 12.0 | 72.8 38.9 32.6 83.2 22.3
RF | ResNet-50 | 97.7 94.3 94.2 98.5 94.1 | 75.5 53.5 39.8 84.1 36.3 | 72.3 51.2 32.0 82.0 25.8 | 81.0 59.2 53.5 87.8 50.8
RF | ResNet-101 | 98.7 96.8 96.8 99.2 96.8 | 70.3 26.8 25.9 81.3 15.7 | 69.8 27.0 24.4 81.1 12.5 | 74.7 50.9 37.2 84.1 29.1
RF | GoogleNet | 97.3 93.7 93.3 98.2 93.3 | 76.9 46.0 43.3 85.3 33.5 | 75.6 39.5 40.1 84.5 28.4 | 81.0 56.0 53.5 87.9 48.3
RF | Inception-v3 | 98.3 96.0 95.9 99.0 95.9 | 69.4 50.7 25.0 80.2 14.3 | 68.6 28.9 23.3 79.6 11.8 | 72.0 51.8 31.1 82.1 23.0
RF | AVG | 88.9 72.5 72.2 93.1 72.0 | 70.8 33.1 27.3 81.7 19.9 | 69.8 28.7 24.9 81.1 16.7 | 73.3 39.3 33.5 83.3 27.4
CNN | AlexNet | 98.1 95.5 95.3 98.9 95.4 | 64.7 82.6 43.9 96.1 50.2 | 63.3 80.2 38.7 95.4 46.2 | 72.9 86.6 51.4 96.9 58.8
CNN | VGG-16 | 96.3 90.7 90.4 97.7 90.5 | 61.9 78.3 32.8 94.0 42.5 | 37.3 88.7 23.3 97.4 33.1 | 68.9 73.9 36.3 92.9 45.5
CNN | VGG-19 | 96.2 90.5 90.4 97.7 90.5 | 49.2 78.2 23.8 93.5 34.4 | 35.0 89.1 20.6 97.5 31.8 | 57.7 74.0 27.6 92.6 37.3
CNN | ResNet-18 | 98.9 97.4 97.4 99.3 97.4 | 57.5 89.9 41.0 98.0 49.9 | 44.2 87.9 30.5 97.6 39.1 | 66.6 90.8 47.7 98.1 56.6
CNN | ResNet-101 | 98.7 96.8 96.8 99.2 96.8 | 66.7 86.7 47.7 97.1 54.6 | 52.2 87.3 37.8 97.4 44.7 | 76.4 85.9 57.3 96.9 62.5
CNN | ResNet-50 | 98.3 96.0 95.9 99.0 95.9 | 70.0 86.2 52.6 97.1 57.0 | 56.6 85.4 41.0 97.0 46.2 | 78.4 86.8 60.8 97.1 65.4
CNN | GoogleNet | 98.4 96.0 95.9 99.0 95.9 | 73.0 85.3 42.4 96.4 54.1 | 70.4 86.7 38.7 95.9 52.5 | 77.5 82.9 50.0 96.1 58.3
CNN | Inception-v3 | 98.4 96.3 96.2 99.0 96.2 | 48.7 82.1 29.6 96.1 37.2 | 43.2 83.9 25.0 96.3 34.7 | 58.9 79.2 38.4 95.5 43.0
CNN | AVG | 97.9 94.9 94.8 98.7 94.8 | 61.5 83.7 39.2 96.0 47.5 | 50.3 86.1 31.9 96.8 41.0 | 69.7 82.5 46.2 95.8 53.4
Table 4. ALL-IDB2 performance obtained with source models trained on the tight data set version and tested, in turn (from left to right), on the large, tight, eroded and dilated versions. Values are percentages: A = Accuracy, P = Precision, R = Recall, S = Specificity, F1 = F1-score. The Tight column reports same-version performance (highlighted in grey in the original table).

Classifier | Descriptor | Large: A P R S F1 | Tight: A P R S F1 | Eroded: A P R S F1 | Dilated: A P R S F1
kNN | Legendre | 47.4 48.7 94.9 0.0 64.3 | 69.2 68.3 71.8 66.7 70.0 | 73.1 87.5 53.8 92.3 66.7 | 60.3 56.3 92.3 28.2 69.9
kNN | Zernike | 46.2 48.0 92.3 0.0 63.2 | 47.4 47.1 41.0 53.8 43.8 | 51.3 51.9 35.9 66.7 42.4 | 39.7 42.3 56.4 23.1 48.4
kNN | HARri | 47.4 46.9 38.5 56.4 42.3 | 62.8 61.9 66.7 59.0 64.2 | 59.0 61.3 48.7 69.2 54.3 | 52.6 51.8 74.4 30.8 61.1
kNN | LBP18 | 50.0 50.0 100.0 0.0 66.7 | 59.0 56.9 74.4 43.6 64.4 | 67.9 64.6 79.5 56.4 71.3 | 60.3 60.0 61.5 59.0 60.8
kNN | Histogram | 23.1 30.2 41.0 5.1 34.8 | 26.9 26.3 25.6 28.2 26.0 | 43.6 40.0 25.6 61.5 31.2 | 26.9 29.5 33.3 20.5 31.3
kNN | Correlogram | 62.8 59.3 82.1 43.6 68.8 | 56.4 55.3 66.7 46.2 60.5 | 53.8 53.1 66.7 41.0 59.1 | 51.3 50.9 69.2 33.3 58.7
kNN | Haar wavelet | 35.9 41.8 71.8 0.0 52.8 | 53.8 55.6 38.5 69.2 45.5 | 61.5 73.7 35.9 87.2 48.3 | 41.0 41.9 46.2 35.9 43.9
kNN | Gabor wavelet | 50.0 50.0 87.2 12.8 63.6 | 55.1 53.7 74.4 35.9 62.4 | 67.9 65.9 74.4 61.5 69.9 | 61.5 57.9 84.6 38.5 68.7
kNN | AlexNet | 50.0 50.0 97.4 2.6 66.1 | 74.4 66.7 97.4 51.3 79.2 | 50.0 50.0 100.0 0.0 66.7 | 53.8 52.0 100.0 7.7 68.4
kNN | VGG-16 | 32.1 25.0 17.9 46.2 20.9 | 65.4 66.7 61.5 69.2 64.0 | 64.1 69.0 51.3 76.9 58.8 | 64.1 65.7 59.0 69.2 62.2
kNN | VGG-19 | 60.3 55.7 100.0 20.5 71.6 | 66.7 60.3 97.4 35.9 74.5 | 57.7 54.3 97.4 17.9 69.7 | 55.1 52.8 97.4 12.8 68.5
kNN | ResNet-18 | 75.6 67.9 97.4 53.8 80.0 | 80.8 72.2 100.0 61.5 83.9 | 74.4 66.1 100.0 48.7 79.6 | 75.6 67.2 100.0 51.3 80.4
kNN | ResNet-50 | 59.0 64.0 41.0 76.9 50.0 | 75.6 72.7 82.1 69.2 77.1 | 47.4 37.5 7.7 87.2 12.8 | 47.4 41.7 12.8 82.1 19.6
kNN | ResNet-101 | 39.7 44.3 79.5 0.0 56.9 | 80.8 74.0 94.9 66.7 83.1 | 64.1 59.3 89.7 38.5 71.4 | 52.6 51.7 76.9 28.2 61.9
kNN | GoogleNet | 50.0 50.0 100.0 0.0 66.7 | 88.5 81.3 100.0 76.9 89.7 | 50.0 50.0 100.0 0.0 66.7 | 50.0 50.0 100.0 0.0 66.7
kNN | Inception-v3 | 52.6 62.5 12.8 92.3 21.3 | 73.1 76.5 66.7 79.5 71.2 | 48.7 42.9 7.7 89.7 13.0 | 47.4 40.0 10.3 84.6 16.3
kNN | AVG | 48.9 49.6 72.1 25.6 55.6 | 64.7 62.2 72.4 57.1 66.2 | 58.4 57.9 60.9 55.9 55.1 | 52.5 50.7 67.1 37.8 55.4
SVM-RBF | Legendre | 48.7 49.2 82.1 15.4 61.5 | 75.6 77.8 71.8 79.5 74.7 | 69.2 100.0 38.5 100.0 55.6 | 69.2 66.0 79.5 59.0 72.1
SVM-RBF | Zernike | 50.0 50.0 100.0 0.0 66.7 | 62.8 60.9 71.8 53.8 65.9 | 74.4 67.3 94.9 53.8 78.7 | 59.0 54.9 100.0 17.9 70.9
SVM-RBF | HARri | 56.4 56.1 59.0 53.8 57.5 | 62.8 63.9 59.0 66.7 61.3 | 73.1 78.1 64.1 82.1 70.4 | 44.9 46.6 69.2 20.5 55.7
SVM-RBF | LBP18 | 50.0 50.0 100.0 0.0 66.7 | 62.8 58.9 84.6 41.0 69.5 | 67.9 63.0 87.2 48.7 73.1 | 61.5 58.8 76.9 46.2 66.7
SVM-RBF | Histogram | 10.3 17.0 20.5 0.0 18.6 | 43.6 41.9 33.3 53.8 37.1 | 55.1 61.1 28.2 82.1 38.6 | 33.3 34.9 38.5 28.2 36.6
SVM-RBF | Correlogram | 60.3 64.3 46.2 74.4 53.7 | 57.7 56.0 71.8 43.6 62.9 | 60.3 58.3 71.8 48.7 64.4 | 55.1 54.5 61.5 48.7 57.8
SVM-RBF | Haar wavelet | 50.0 50.0 100.0 0.0 66.7 | 79.5 71.7 97.4 61.5 82.6 | 78.2 70.4 97.4 59.0 81.7 | 61.5 56.7 97.4 25.6 71.7
SVM-RBF | Gabor wavelet | 46.2 47.9 89.7 2.6 62.5 | 62.8 60.4 74.4 51.3 66.7 | 56.4 55.1 69.2 43.6 61.4 | 61.5 57.9 84.6 38.5 68.7
SVM-RBF | AlexNet | 50.0 0.0 0.0 100.0 0.0 | 76.9 69.1 97.4 56.4 80.9 | 50.0 0.0 0.0 100.0 0.0 | 50.0 0.0 0.0 100.0 0.0
SVM-RBF | VGG-16 | 47.4 37.5 7.7 87.2 12.8 | 61.5 66.7 46.2 76.9 54.5 | 74.4 82.8 61.5 87.2 70.6 | 69.2 82.6 48.7 89.7 61.3
SVM-RBF | VGG-19 | 50.0 0.0 0.0 100.0 0.0 | 71.8 63.9 100.0 43.6 78.0 | 50.0 0.0 0.0 100.0 0.0 | 50.0 0.0 0.0 100.0 0.0
SVM-RBF | ResNet-18 | 62.8 100.0 25.6 100.0 40.8 | 82.1 73.6 100.0 64.1 84.8 | 50.0 0.0 0.0 100.0 0.0 | 50.0 0.0 0.0 100.0 0.0
SVM-RBF | ResNet-50 | 50.0 0.0 0.0 100.0 0.0 | 82.1 83.8 79.5 84.6 81.6 | 53.8 100.0 7.7 100.0 14.3 | 50.0 0.0 0.0 100.0 0.0
SVM-RBF | ResNet-101 | 48.7 49.4 97.4 0.0 65.5 | 66.7 60.0 100.0 33.3 75.0 | 61.5 57.9 84.6 38.5 68.7 | 60.3 56.1 94.9 25.6 70.5
SVM-RBF | GoogleNet | 50.0 50.0 100.0 0.0 66.7 | 88.5 81.3 100.0 76.9 89.7 | 50.0 50.0 100.0 0.0 66.7 | 50.0 50.0 100.0 0.0 66.7
SVM-RBF | Inception-v3 | 48.7 45.5 12.8 84.6 20.0 | 76.9 81.8 69.2 84.6 75.0 | 59.0 66.7 35.9 82.1 46.7 | 51.3 52.4 28.2 74.4 36.7
SVM-RBF | AVG | 48.7 41.7 52.6 44.9 41.2 | 69.6 67.0 78.5 60.7 71.3 | 61.5 56.9 52.6 70.4 49.4 | 54.8 42.0 55.0 54.6 46.0
RF | Legendre | 50.0 50.0 100.0 0.0 66.7 | 80.8 83.3 76.9 84.6 80.0 | 76.9 88.9 61.5 92.3 72.7 | 56.4 53.5 97.4 15.4 69.1
RF | Zernike | 48.7 49.2 79.5 17.9 60.8 | 35.9 36.6 38.5 33.3 37.5 | 24.4 27.3 30.8 17.9 28.9 | 25.6 32.1 43.6 7.7 37.0
RF | HARri | 47.4 48.7 94.9 0.0 64.3 | 79.5 72.5 94.9 64.1 82.2 | 80.8 74.0 94.9 66.7 83.1 | 52.6 51.4 97.4 7.7 67.3
RF | LBP18 | 50.0 50.0 100.0 0.0 66.7 | 62.8 57.8 94.9 30.8 71.8 | 57.7 54.3 97.4 17.9 69.7 | 55.1 53.2 84.6 25.6 65.3
RF | Histogram | 50.0 50.0 100.0 0.0 66.7 | 74.4 71.1 82.1 66.7 76.2 | 78.2 89.3 64.1 92.3 74.6 | 46.2 48.0 92.3 0.0 63.2
RF | Correlogram | 52.6 51.7 76.9 28.2 61.9 | 60.3 58.3 71.8 48.7 64.4 | 61.5 59.6 71.8 51.3 65.1 | 61.5 58.8 76.9 46.2 66.7
RF | Haar wavelet | 17.9 26.4 35.9 0.0 30.4 | 51.3 51.4 46.2 56.4 48.6 | 60.3 63.3 48.7 71.8 55.1 | 46.2 46.2 46.2 46.2 46.2
RF | Gabor wavelet | 43.6 45.1 59.0 28.2 51.1 | 55.1 54.3 64.1 46.2 58.8 | 55.1 55.9 48.7 61.5 52.1 | 52.6 52.1 64.1 41.0 57.5
RF | AlexNet | 61.5 56.7 97.4 25.6 71.7 | 76.9 69.1 97.4 56.4 80.9 | 70.5 62.9 100.0 41.0 77.2 | 69.2 61.9 100.0 38.5 76.5
RF | VGG-16 | 37.2 36.8 35.9 38.5 36.4 | 62.8 63.9 59.0 66.7 61.3 | 50.0 50.0 30.8 69.2 38.1 | 59.0 62.1 46.2 71.8 52.9
RF | VGG-19 | 50.0 0.0 0.0 100.0 0.0 | 66.7 60.7 94.9 38.5 74.0 | 50.0 0.0 0.0 100.0 0.0 | 50.0 0.0 0.0 100.0 0.0
RF | ResNet-18 | 57.7 87.5 17.9 97.4 29.8 | 82.1 73.6 100.0 64.1 84.8 | 57.7 68.8 28.2 87.2 40.0 | 56.4 72.7 20.5 92.3 32.0
RF | ResNet-50 | 57.7 80.0 20.5 94.9 32.7 | 78.2 76.2 82.1 74.4 79.0 | 46.2 20.0 2.6 89.7 4.5 | 48.7 33.3 2.6 94.9 4.8
RF | ResNet-101 | 35.9 41.8 71.8 0.0 52.8 | 79.5 72.5 94.9 64.1 82.2 | 57.7 55.2 82.1 33.3 66.0 | 50.0 50.0 82.1 17.9 62.1
RF | GoogleNet | 50.0 50.0 100.0 0.0 66.7 | 88.5 81.3 100.0 76.9 89.7 | 50.0 50.0 100.0 0.0 66.7 | 51.3 50.6 100.0 2.6 67.2
RF | Inception-v3 | 50.0 50.0 7.7 92.3 13.3 | 70.5 72.2 66.7 74.4 69.3 | 48.7 42.9 7.7 89.7 13.0 | 46.2 36.4 10.3 82.1 16.0
RF | AVG | 47.5 48.4 62.3 32.7 48.2 | 69.1 65.9 79.0 59.1 71.3 | 57.9 53.9 54.3 61.4 50.4 | 51.7 47.6 60.3 43.1 49.0
CNN | AlexNet | 55.1 12.8 83.3 52.8 22.2 | 80.8 61.5 100.0 72.2 76.2 | 76.9 53.8 100.0 68.4 70.0 | 76.9 53.8 100.0 68.4 70.0
CNN | VGG-16 | 51.3 7.7 60.0 50.7 13.6 | 78.2 56.4 100.0 69.6 72.1 | 76.9 53.8 100.0 68.4 70.0 | 82.0 64.1 100.0 73.6 78.1
CNN | VGG-19 | 57.7 15.4 100.0 54.2 26.7 | 74.4 48.7 100.0 66.1 65.5 | 67.9 38.5 93.7 61.3 54.5 | 66.7 33.3 100.0 60.0 50.0
CNN | ResNet-18 | 47.4 0.0 0.0 48.7 0.0 | 70.5 46.1 90.0 63.8 61.0 | 62.8 41.0 72.7 58.9 52.5 | 62.8 30.8 85.7 57.8 45.3
CNN | ResNet-50 | 53.8 7.7 100.0 52.0 14.3 | 70.5 41.0 100.0 62.9 58.2 | 56.4 20.5 72.7 53.7 32.0 | 71.8 43.6 100.0 63.9 60.7
CNN | ResNet-101 | 41.0 66.7 44.1 31.6 53.1 | 76.9 79.5 75.6 78.4 77.5 | 76.9 76.9 76.9 76.9 76.9 | 62.8 71.8 60.9 65.6 65.9
CNN | GoogleNet | 78.2 59.0 95.8 70.4 73.0 | 82.0 66.7 96.3 74.5 78.8 | 70.5 41.0 100.0 62.9 58.2 | 83.3 69.2 96.4 76.0 80.6
CNN | Inception-v3 | 43.6 2.6 14.3 46.5 4.3 | 57.7 28.2 68.7 54.8 40.0 | 53.8 25.6 58.8 52.5 35.7 | 57.7 30.8 66.7 55.0 42.1
CNN | AVG | 53.5 21.5 62.2 50.8 25.9 | 73.9 53.5 91.3 67.8 66.2 | 67.8 43.9 84.4 62.9 56.2 | 70.5 49.7 88.7 65.0 61.6
Table 5. R-WBC performance obtained with source models trained on the tight data set version and tested, in turn (from left to right), on the large, tight, eroded and dilated versions. Values are percentages: A = Accuracy, P = Precision, R = Recall, S = Specificity, F1 = F1-score. The Tight column reports same-version performance (highlighted in grey in the original table).

Classifier | Descriptor | Large: A P R S F1 | Tight: A P R S F1 | Eroded: A P R S F1 | Dilated: A P R S F1
kNN | Legendre | 67.0 22.9 19.2 78.7 14.5 | 83.1 57.8 58.1 89.2 56.8 | 79.3 50.7 48.0 87.4 46.6 | 74.7 41.7 37.5 83.9 37.9
kNN | Zernike | 68.1 8.1 20.9 79.8 11.3 | 84.0 60.7 60.5 89.9 60.3 | 78.7 47.0 47.4 86.5 46.6 | 73.3 43.7 34.3 82.9 32.7
kNN | HARri | 67.9 15.7 19.8 79.9 17.4 | 77.6 46.1 44.2 86.0 44.9 | 79.9 49.4 49.7 87.6 49.2 | 73.5 40.6 34.3 83.1 34.8
kNN | LBP18 | 67.7 4.5 21.2 78.8 7.4 | 87.5 70.5 68.6 92.4 69.2 | 84.6 62.8 61.6 90.2 60.8 | 84.8 64.6 62.2 90.5 61.7
kNN | Histogram | 68.7 20.4 22.4 80.1 16.6 | 76.5 42.6 41.6 85.2 41.6 | 78.5 46.3 46.5 86.6 46.2 | 76.3 45.9 41.0 85.0 40.6
kNN | Correlogram | 71.4 30.8 27.9 82.6 25.3 | 71.4 31.8 28.2 82.5 27.2 | 72.6 33.0 31.1 83.0 30.2 | 72.2 35.4 29.9 83.1 28.7
kNN | Haar wavelet | 68.0 21.1 20.9 79.6 16.4 | 89.6 75.1 74.1 93.5 74.1 | 88.8 71.6 72.1 93.0 71.6 | 83.9 65.6 59.9 89.9 59.3
kNN | Gabor wavelet | 70.9 28.1 27.3 81.7 26.9 | 74.8 38.2 37.5 84.0 35.6 | 71.7 28.5 29.7 82.1 28.5 | 74.1 35.3 35.5 83.6 34.5
kNN | AlexNet | 68.0 40.9 22.1 79.0 9.2 | 91.8 80.6 79.9 94.8 80.1 | 75.1 41.9 39.8 83.8 28.4 | 75.9 42.0 41.9 84.3 31.0
kNN | VGG-16 | 76.1 45.4 42.2 84.5 32.0 | 92.2 81.6 81.1 95.0 81.3 | 76.5 63.3 43.0 84.7 33.3 | 76.2 45.9 42.4 84.5 32.1
kNN | VGG-19 | 67.7 4.5 21.2 78.8 7.4 | 84.4 62.3 61.3 90.1 61.5 | 67.4 25.5 20.3 78.7 9.7 | 68.8 24.4 23.5 79.6 14.2
kNN | ResNet-18 | 67.7 4.5 21.2 78.8 7.4 | 87.0 67.9 67.7 91.8 67.0 | 67.7 4.9 20.9 78.9 7.9 | 67.4 4.8 20.3 78.7 7.7
kNN | ResNet-50 | 76.1 34.1 42.2 84.5 31.4 | 91.2 78.8 78.8 94.4 78.7 | 78.4 38.9 48.0 86.1 38.8 | 77.5 37.0 45.6 85.5 36.0
kNN | ResNet-101 | 71.2 33.1 29.9 81.1 20.9 | 90.6 77.9 77.0 93.9 77.1 | 81.3 64.2 55.5 88.0 48.3 | 79.7 63.2 51.5 87.0 44.7
kNN | GoogleNet | 67.8 5.7 21.5 78.9 8.5 | 92.5 82.0 81.7 95.2 81.4 | 68.8 40.4 23.5 79.7 13.3 | 68.1 33.0 22.1 79.2 10.5
kNN | Inception-v3 | 74.1 45.7 37.2 83.1 26.6 | 89.8 74.3 74.7 93.6 74.4 | 83.5 59.6 60.2 89.6 53.4 | 83.5 60.4 60.2 89.6 54.0
kNN | AVG | 69.9 22.8 26.1 80.6 17.5 | 85.3 64.3 63.4 90.7 63.2 | 77.0 45.5 43.6 85.4 38.3 | 75.6 42.7 40.1 84.4 35.0
SVM-RBF | Legendre | 66.8 8.1 19.2 78.3 11.2 | 85.8 65.1 64.5 91.0 64.6 | 84.5 65.2 61.3 90.4 60.4 | 76.6 47.0 41.6 85.1 39.9
SVM-RBF | Zernike | 67.8 7.5 20.9 79.2 9.6 | 87.7 69.8 69.5 92.3 69.6 | 84.1 63.6 60.5 90.0 60.2 | 77.5 51.0 44.2 85.6 40.5
SVM-RBF | HARri | 67.5 14.6 19.2 79.6 15.4 | 83.2 58.9 58.4 89.4 58.6 | 83.5 58.2 58.7 89.7 57.6 | 78.4 51.2 46.5 86.0 45.7
SVM-RBF | LBP18 | 67.7 4.5 21.2 78.8 7.4 | 85.7 68.1 64.2 91.0 64.9 | 83.4 61.1 58.7 89.4 58.7 | 77.2 57.8 43.9 85.4 43.0
SVM-RBF | Histogram | 70.0 16.1 24.4 81.4 18.0 | 89.0 74.4 72.4 93.3 72.7 | 89.4 73.5 73.5 93.4 73.4 | 83.8 65.2 59.3 89.9 58.0
SVM-RBF | Correlogram | 77.1 42.6 43.3 85.7 41.1 | 81.6 55.5 54.1 88.5 54.5 | 79.7 50.3 49.7 87.3 49.5 | 78.7 49.8 46.5 87.0 46.8
SVM-RBF | Haar wavelet | 68.8 18.7 22.4 80.3 19.8 | 90.4 76.8 76.2 94.0 76.0 | 89.5 74.9 74.1 93.4 74.0 | 85.2 67.6 63.1 90.8 62.7
SVM-RBF | Gabor wavelet | 73.2 39.5 33.7 82.6 32.1 | 80.9 51.9 52.3 88.0 51.8 | 75.5 43.1 39.5 84.4 39.5 | 78.3 48.1 46.2 86.3 46.5
SVM-RBF | AlexNet | 67.7 4.5 21.2 78.8 7.4 | 96.4 91.6 91.3 97.7 91.3 | 67.7 4.5 21.2 78.8 7.4 | 67.7 4.5 21.2 78.8 7.4
SVM-RBF | VGG-16 | 76.5 46.8 42.7 84.9 33.8 | 92.5 82.3 81.7 95.2 81.7 | 72.7 62.7 33.7 82.1 25.3 | 74.5 45.6 38.1 83.4 27.3
SVM-RBF | VGG-19 | 61.0 1.3 3.5 75.4 1.8 | 86.2 66.9 65.7 91.2 65.5 | 66.6 4.5 16.3 79.6 6.9 | 65.9 4.2 14.8 79.1 6.5
SVM-RBF | ResNet-18 | 67.7 4.5 21.2 78.8 7.4 | 88.7 72.3 71.8 92.8 71.5 | 67.7 4.5 21.2 78.8 7.4 | 67.7 4.5 21.2 78.8 7.4
SVM-RBF | ResNet-50 | 75.6 36.2 41.0 84.1 29.8 | 93.1 83.7 83.4 95.6 83.5 | 78.4 63.3 48.3 86.1 39.9 | 77.7 41.6 46.5 85.6 37.6
SVM-RBF | ResNet-101 | 70.7 24.2 28.8 80.8 19.3 | 92.9 83.5 82.6 95.4 82.5 | 68.4 62.1 23.0 79.2 10.9 | 68.3 37.4 22.7 79.2 10.3
SVM-RBF | GoogleNet | 67.7 4.8 21.2 78.8 7.8 | 95.1 89.2 88.1 96.9 87.7 | 69.6 43.0 25.9 80.1 17.1 | 69.6 45.8 25.9 80.1 17.0
SVM-RBF | Inception-v3 | 73.3 45.5 35.2 82.6 25.0 | 92.1 80.5 80.5 95.0 80.2 | 71.4 57.2 30.5 81.4 21.7 | 74.0 60.6 37.2 83.1 31.0
SVM-RBF | AVG | 69.9 20.0 26.2 80.6 17.9 | 88.8 73.2 72.3 93.0 72.3 | 77.0 49.5 43.5 85.3 38.1 | 75.1 42.6 38.7 84.0 33.0
RF | Legendre | 68.3 13.6 22.7 79.3 14.5 | 86.1 67.0 65.4 91.3 65.4 | 83.5 62.7 58.4 90.2 57.0 | 74.1 46.4 35.5 83.6 33.3
RF | Zernike | 67.6 6.5 20.9 78.8 7.9 | 87.6 69.6 69.2 92.1 69.3 | 85.1 65.8 63.4 90.5 63.1 | 76.2 48.7 41.3 84.6 38.5
RF | HARri | 67.4 17.8 20.6 78.7 10.3 | 89.8 75.7 75.0 93.6 75.2 | 89.3 74.6 73.5 93.4 73.3 | 81.0 60.8 52.9 87.6 52.1
RF | LBP18 | 75.3 24.2 40.1 83.9 27.6 | 89.7 74.0 74.1 93.6 74.0 | 83.9 63.3 60.2 89.8 58.9 | 85.8 69.2 64.5 91.2 63.5
RF | Histogram | 69.2 17.6 23.0 80.9 16.0 | 88.5 72.0 71.2 92.9 71.4 | 89.1 72.6 73.0 93.3 72.4 | 82.4 60.7 55.8 89.0 54.3
RF | Correlogram | 71.9 48.5 28.8 83.1 26.7 | 81.5 54.3 54.1 88.5 54.0 | 80.5 51.1 51.5 87.8 50.9 | 78.8 49.9 47.1 86.9 47.2
RF | Haar wavelet | 73.1 17.9 32.8 83.4 21.3 | 89.8 75.7 74.4 93.7 74.3 | 89.9 74.3 74.7 93.7 74.4 | 85.5 69.6 63.7 91.0 63.6
RF | Gabor wavelet | 71.8 36.7 29.9 81.9 28.3 | 82.3 55.1 55.8 89.0 54.7 | 75.7 39.9 39.8 84.6 39.3 | 79.3 48.2 48.3 87.1 48.0
RF | AlexNet | 70.1 37.9 27.3 80.5 18.4 | 94.0 85.4 85.2 96.1 85.1 | 71.5 62.9 31.1 81.5 24.5 | 72.8 41.4 34.3 82.4 26.1
RF | VGG-16 | 76.0 55.4 41.9 84.4 31.4 | 94.6 87.4 87.2 96.6 87.3 | 77.8 79.6 46.2 85.5 38.9 | 75.3 24.5 40.1 83.9 27.7
RF | VGG-19 | 68.4 9.1 22.7 79.5 10.3 | 88.9 73.2 72.4 92.9 72.5 | 67.4 13.5 19.5 79.2 12.6 | 69.1 17.1 23.3 80.2 16.7
RF | ResNet-18 | 67.9 4.9 20.9 79.2 7.9 | 89.8 75.1 74.4 93.6 74.4 | 67.4 9.5 18.6 79.4 9.6 | 67.7 26.9 19.5 79.5 10.4
RF | ResNet-50 | 75.1 24.4 39.8 83.8 27.5 | 92.3 81.8 81.7 95.1 81.7 | 76.9 39.8 44.2 85.1 34.7 | 76.0 34.2 41.9 84.4 31.0
RF | ResNet-101 | 71.6 41.8 31.1 81.4 22.0 | 92.6 82.7 82.0 95.2 82.0 | 77.0 63.4 44.5 85.0 35.6 | 76.1 60.5 42.2 84.4 31.5
RF | GoogleNet | 69.0 10.9 24.4 79.6 12.4 | 94.6 87.7 86.9 96.5 86.6 | 69.7 22.1 25.6 80.2 16.0 | 69.9 21.9 26.2 80.3 16.7
RF | Inception-v3 | 74.5 48.4 38.1 83.4 27.4 | 92.6 81.6 81.7 95.3 81.5 | 80.3 61.2 52.6 87.3 46.9 | 80.4 59.9 52.6 87.3 47.3
RF | AVG | 71.1 26.0 29.1 81.4 19.4 | 89.7 74.9 74.4 93.5 74.3 | 79.1 53.5 48.5 86.7 44.3 | 76.9 46.2 43.1 85.2 38.0
CNN | AlexNet | 70.6 75.1 39.8 93.2 50.3 | 97.7 94.7 94.5 98.7 94.5 | 96.7 92.7 91.9 98.2 92.0 | 96.4 91.6 91.3 97.9 91.2
CNN | VGG-16 | 68.5 62.1 34.9 88.4 44.0 | 96.8 92.5 92.1 98.1 92.2 | 95.0 88.5 87.2 97.1 87.5 | 94.6 87.5 86.6 96.9 86.7
CNN | VGG-19 | 55.7 85.9 34.3 94.2 46.6 | 95.9 90.8 89.8 97.7 90.0 | 95.4 88.7 88.4 97.2 88.5 | 95.2 88.8 87.8 97.3 88.0
CNN | ResNet-18 | 72.0 71.9 46.5 91.4 52.6 | 98.4 96.2 96.2 99.0 96.2 | 98.0 95.5 95.3 98.8 95.3 | 97.9 95.1 95.1 98.7 95.1
CNN | ResNet-101 | 52.5 97.2 40.7 99.4 51.2 | 98.2 95.7 95.6 98.9 95.6 | 97.2 93.6 93.3 98.3 93.3 | 98.1 95.4 95.3 98.8 95.4
CNN | ResNet-50 | 71.1 87.8 50.0 97.4 58.3 | 98.3 95.9 95.9 98.9 95.9 | 97.7 94.6 94.5 98.6 94.5 | 97.9 95.0 95.1 98.7 95.0
CNN | GoogleNet | 59.6 83.3 35.2 95.3 45.5 | 97.7 94.5 94.5 98.6 94.5 | 97.0 93.7 93.0 98.4 93.0 | 95.5 90.2 89.2 97.6 89.2
CNN | Inception-v3 | 80.4 85.0 57.8 96.0 64.7 | 97.9 95.0 95.1 98.7 95.0 | 98.2 95.7 95.6 98.9 95.6 | 98.0 95.3 95.3 98.8 95.3
CNN | AVG | 66.3 81.0 42.4 94.4 51.7 | 97.6 94.4 94.2 98.6 94.2 | 96.9 92.9 92.4 98.2 92.5 | 96.7 92.4 92.0 98.1 92.0
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
