Article

Automatic Detection and Counting of Blood Cells in Smear Images Using RetinaNet

Department of Electrical and Computer Engineering Fundamentals, Rzeszow University of Technology, 35-959 Rzeszow, Poland
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Entropy 2021, 23(11), 1522; https://doi.org/10.3390/e23111522
Submission received: 10 September 2021 / Revised: 10 November 2021 / Accepted: 11 November 2021 / Published: 16 November 2021
(This article belongs to the Topic eHealth and mHealth: Challenges and Prospects)

Abstract

A complete blood count is one of the significant clinical tests that evaluates overall human health and provides relevant information for disease diagnosis. The conventional strategies of blood cell counting include manual counting as well as counting with a hemocytometer, both of which are tedious and time-consuming tasks. This paper proposes an automatic, software-based alternative that counts blood cells accurately using the RetinaNet deep learning network, which recognizes and classifies objects in microscopic images. After training, the network automatically recognizes and counts red blood cells, white blood cells, and platelets. We tested a model trained on smear images and found that the trained model generalizes well. We assessed the quality of detection and cell counting using performance measures such as accuracy, sensitivity, precision, and F1-score. Moreover, we studied how the confidence threshold and the number of learning epochs affect the recognition and counting results. We compared the performance of the proposed approach with the results obtained by other authors who dealt with cell counting and show that object detection and labeling can be an additional advantage in the task of counting objects.

1. Introduction

A complete blood count (CBC) is a typical clinical test that provides relevant information for disease diagnosis. The three main types of blood cells are: Red Blood Cells (RBCs), also called erythrocytes, White Blood Cells (WBCs), also called leukocytes, and platelets, also called thrombocytes. A CBC provides information about the production of all blood cells, identifies the patient’s ability to carry oxygen by evaluating RBC counts, and allows for immune system evaluation by assessing WBC counts with differential. This test helps diagnose anemia, certain cancers, infections, and many other conditions, as well as monitor the side effects of certain medications [1]. For this reason, medical laboratories are flooded with large numbers of blood and tissue samples that need to be analyzed as accurately as possible and in the shortest possible time. The ability to accurately quantify specific populations of cells is important for precision diagnostics in laboratory medicine. Thus, medical staff work under heavy loads and time pressure. Medical workers often have to work overtime to analyze all samples on time, which increases staff fatigue and may result in mistakes and lower work efficiency [2]. Such errors may lead to severe and even fatal consequences in the treatment of patients.
Semi-automatic and automatic methods are an alternative to traditional manual counting of various cells by specialists. Automatic detection and counting of cells in images is a difficult and complex task: in practice, the resolution of input medical images can be very high, while the target cells can be extremely dense. Moreover, an image may contain a large number of cells that often overlap, which makes individual cells hard to distinguish. These difficulties are the principal motivation for research on automatic cell counting.
There are generally two main approaches to the automated counting of blood cells. Traditional methods involve several steps, such as preprocessing, segmentation, feature extraction, and classification, while other methods are based on deep neural networks (DNNs). Selected traditional automatic RBC counting methods are presented in [3,4]. Various methods of automatic WBC counting are presented in [5,6,7,8,9,10]. Despite their numerous advantages, automated methods also have disadvantages, such as limited counting accuracy and the need to prepare cell images. Reliable and accurate cell detection is usually a difficult problem due to the great variability of cells and the complexity of the data. Detection can determine the presence of a specific cell type in a microscopic image, e.g., lymphocytes. Moreover, detection can also be combined with counting and quantitative analysis of cells [11]. Automatic cell counting involves obtaining the number of cells in a medical image [12].
In recent years, due to the rapid development of deep learning networks, they have become a key component of many computer vision applications such as object detection, classification or segmentation. The efficiency and efficacy of deep learning in the medical imaging field is unquestionable, as evidenced by a large number of independent studies in different modalities and applications, including those suggested for automatic cell counting [13]. For example, deep learning models that classify various types of erythrocytes were proposed in [14,15]. Vogado et al. [16] proposed LeukNet, which is based on a convolutional neural network (CNN). Acevedo et al. proposed recognition of peripheral blood cell images using CNNs [17]. Automatic white blood cell classification using deep learning models was also presented in [18,19,20,21,22,23]. Automatic identification and counting of all three types of blood cells simultaneously using DNN was proposed in [24].
A literature review indicates that there are only a few articles on the detection and counting of RBCs, WBCs, and platelets simultaneously using deep learning methods [24]. However, it is not clear how to determine the optimal number of epochs and the optimal confidence threshold to achieve the highest performance. We also noted that results are usually compared based only on accuracy, which is no doubt an important metric to consider, but it does not always give the full picture. Results should also be discussed in the light of other quality metrics important in medical testing: recall, precision, and F1-score. Many works concern recognizing cells in small images that contain just a few cells each, whereas real microscopic images can include hundreds of crowded and overlapping cells. Motivated by the lack of a thorough examination of the above issues, we decided to propose our own solution.
This paper aims at developing a precise, automatic method for counting various types of cells in one image using deep learning. It will allow for a significant acceleration of cell counting work in laboratories and a reduction of the burden on staff. Performing this work with a computer will also reduce human error and increase accuracy. To achieve this goal, we developed methods that can automatically count blood cells. We propose an approach that employs RetinaNet, based on a CNN architecture, to detect all three types of blood cells, i.e., RBCs, WBCs, and platelets, simultaneously.
The main contribution of this work includes several points. We prepared our own training dataset and manually marked RBCs, WBCs, and platelets in the images. Then, we adapted and trained RetinaNet to recognize the three types of cells simultaneously by presenting a wide collection of microscopic medical images. Next, we prepared an application that counts the cells recognized by the RetinaNet network. Then, we evaluated the impact of the number of learning epochs and the confidence threshold on the performance and effectiveness of cell detection and counting for each class on several images, comparing the number of cells counted by the application with the manually counted numbers of correctly classified, incorrectly classified, and unclassified cells. Based on those preliminary results, we selected and tested two of the trained models to evaluate how accurately they mark RBCs, WBCs, and platelets on a larger test set over a range of confidence thresholds. Finally, we calculated the accuracy, precision, recall, and F1-score of automatic counting for each type of cell, determined the optimal confidence threshold for each type of cell, and compared the results with the state-of-the-art.

2. Materials and Methods

2.1. General Concept of CNN Construction

Deep learning is a method inspired by the structure of the human brain. It consists of a series of algorithms for finding a hierarchical representation of the input data, modeled on the way the human brain extracts the important parts of a sensory data set. It is a part of machine learning that revolves around algorithms for modeling high-level abstractions using many layers composed of nonlinear transformations. Due to their high efficiency, DNNs are nowadays the most popular group of deep learning algorithms.
In recent years, the rapid growth of data volumes has raised new scalability challenges in machine learning. This was particularly evident in object recognition and image processing. Even for a small black-and-white image, each neuron of a fully connected hidden layer would need thousands of weights. This causes problems of both a computational and a purely practical nature. Such problems are addressed by the architecture of CNNs [25].
A CNN is a class of DNNs, most commonly applied to analyzing images and object recognition. Figure 1 shows the sequence of transformations involved in a typical convolutional network [26] that has been adopted in our research to recognize blood cells.
First, the input image is scanned for feature extraction: the sliding rectangle is the filter that passes over the image, and the resulting activation maps are stacked one atop another, one for each of the employed filters. Second, the stack of activation maps is downsampled. Next, a new set of activation maps is created by passing filters over the first downsampled stack, and this second set of activation maps is condensed by a second downsampling. Finally, the fully connected layer classifies the output with one label per node.
This design is borrowed from the human visual system. Neurons are activated only when something appears in their part of the field of vision, exploiting the fact that features representing only a small part of an image can occur anywhere on the image surface. Based on this observation, groups of neurons are created with shared weights but located in different parts of the image. A CNN is made up of several types of layers:
  • Convolutional layers—they create feature maps by applying systematically learned filters to the input images and summarize the presence of these features in the input. A map of the activity of a particular feature across the entire image area can be interpreted as the set of output signals from neurons sharing the same weights. A filter is a feature represented by one shared set of weights. The convolutional layer operates in three dimensions: instead of the vector multiplication of the classical approach, the convolution operation is applied, which gives better results when detecting a pattern [25,26];
  • Pooling layers—they are used to streamline the computation. By combining the outputs of neuron clusters at one layer into a single neuron in the next layer, pooling layers reduce the dimensions of the data. Local pooling combines small clusters, while global pooling acts on all neurons of the convolutional layer. Pooling may calculate a maximum or an average: max pooling takes the maximum value, and average pooling the average value, from each cluster of neurons at the prior layer [25,26];
  • Fully connected layer—it uses the convolution results to classify the image into a label. The convolution output is flattened into a single vector of values representing the probability that the image belongs to each label. Each neuron receives weights that assign priority to the most appropriate label. Finally, the neurons vote for each label, and the winner of this vote is the classification decision [26].
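To make the pipeline above concrete, the following minimal Keras sketch stacks the same layer types (convolution, pooling, flattening, and a fully connected classifier). The layer sizes, input shape, and three-class softmax output are illustrative assumptions, not the network used in this work:

```python
# A minimal Keras sketch of the conv -> pool -> conv -> pool -> dense
# pipeline described above; sizes are illustrative assumptions.
from tensorflow.keras import layers, models

model = models.Sequential([
    # Filters slide over the image; one activation map is stacked per filter.
    layers.Conv2D(32, 3, activation="relu", input_shape=(128, 128, 3)),
    # Downsampling: max pooling keeps the strongest response in each cluster.
    layers.MaxPooling2D(2),
    # Second convolution/pooling stage over the downsampled maps.
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(2),
    # Fully connected classifier: flatten the maps and vote for a label.
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(3, activation="softmax"),   # e.g., RBC / WBC / platelet
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
```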

2.2. RetinaNet

RetinaNet is a one-stage detector that uses focal loss, so that easy negative samples contribute less to the total loss and training concentrates on problematic samples, which improves prediction accuracy. With ResNet and a Feature Pyramid Network (FPN) as the backbone for feature extraction and two task-specific subnetworks for classification and bounding box regression, RetinaNet achieves excellent performance and outperforms Faster R-CNN, the well-known two-stage detector [27,28].
The architecture of RetinaNet shown in Figure 2 can be divided into three main groups [29]:
  • a backbone FPN is used on the top of the ResNet model for constructing a rich multiscale feature pyramid from a single input image;
  • a subnet used for classifying objects based on FPN outputs;
  • a subnet that makes regression of the bounding box using the output data of the backbone network.
Feature pyramids are a basic component of recognition systems that detect objects at multiple scales. RetinaNet is based on the FPN presented in [30]. In a network containing residual blocks (ResNet), each layer feeds directly into the next layer and also, through skip connections, into layers two to three hops away; in traditional neural networks, each layer feeds only into the next layer. Shortcut connections allow the training of a few layers to be skipped. It has been shown that this type of network is easier to train than a plain DNN, and it deals particularly well with the problem of accuracy degradation.
The fully convolutional nature of the network enables it to take an image of any scale as input and to output proportional feature maps at multiple levels of the feature pyramid [31].
An FPN consists of a bottom-up and a top-down pathway. The bottom-up pathway is a convolutional network used for feature extraction, and the top-down pathway restores spatial resolution to the semantically strong features.
The classification and regression subnets are attached to each feature map produced by the FPN. The classification subnet predicts the probability of object presence for each of the A anchors and K object classes at each spatial position. It applies four 3 × 3 convolutional layers, each with 256 filters and each followed by Rectified Linear Unit (ReLU) activation, followed by a 3 × 3 convolutional layer with K × A filters. The regression subnet is identical to the classification subnet, except that it terminates in 4A linear outputs per spatial location.
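As an illustration of this description, the following sketch builds the two heads for a single FPN feature map. The filter counts and kernel sizes follow the text above, while the function names and the example level shape are our own assumptions:

```python
# Sketch of the two RetinaNet heads described above (names are ours).
from tensorflow.keras import Input, layers

def classification_subnet(feature_map, num_classes, num_anchors):
    x = feature_map
    for _ in range(4):
        # Four 3x3 convolutions with 256 filters and ReLU, as in the text.
        x = layers.Conv2D(256, 3, padding="same", activation="relu")(x)
    # K*A sigmoid outputs: one object-presence probability per class and
    # anchor at every spatial position.
    return layers.Conv2D(num_classes * num_anchors, 3, padding="same",
                         activation="sigmoid")(x)

def regression_subnet(feature_map, num_anchors):
    x = feature_map
    for _ in range(4):
        x = layers.Conv2D(256, 3, padding="same", activation="relu")(x)
    # 4*A linear outputs: four box offsets per anchor.
    return layers.Conv2D(4 * num_anchors, 3, padding="same")(x)

# Example: heads attached to one 32x32x256 FPN level, K = 3, A = 9.
p_level = Input(shape=(32, 32, 256))
cls_out = classification_subnet(p_level, num_classes=3, num_anchors=9)
reg_out = regression_subnet(p_level, num_anchors=9)
```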
We used the Keras implementation of RetinaNet object detection [32]. RetinaNet makes use of a ResNet-based backbone, from which an FPN is constructed; we used ResNet50 as the backbone. We took advantage of transfer learning: we set the weights option to a pretrained model when training and used the freeze-backbone argument to freeze the backbone layers. We set the input batch size to 5 due to GPU memory limitations. The trained RetinaNet model has 36,382,957 parameters in total, i.e., the sum of trainable and non-trainable parameters.
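For readers who want to reproduce a comparable setup, the fizyr keras-retinanet package referenced in [32] ships a training entry point that can be driven as sketched below. The CSV file names and pretrained-weights file are placeholders, and the exact flags should be verified against the 0.5.1 release used here:

```python
# A sketch of a comparable training run using the training entry point of
# fizyr/keras-retinanet [32]; file names are placeholders, and flags should
# be checked against the installed release.
from keras_retinanet.bin import train

train.main([
    "--backbone", "resnet50",                      # ResNet50 backbone
    "--weights", "resnet50_coco_best_v2.1.0.h5",   # transfer learning from pretrained weights
    "--freeze-backbone",                           # freeze the backbone layers
    "--batch-size", "5",                           # limited by GPU memory
    "--epochs", "40",                              # 40 epochs ...
    "--steps", "500",                              # ... of 500 steps each
    "csv", "annotations.csv", "classes.csv",       # hypothetical annotation files
])
```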

2.3. Focal Loss

The imbalance between the background, which contains no objects, and the foreground, which holds the interesting objects, is the main issue in training object detection models. Focal loss is designed to assign greater weight to difficult, easily misclassified objects and to down-weight trivial ones. The goal is to minimize the expected value of the loss from the model; in the case of the cross-entropy loss, the expected loss is approximated as:
$$\mathrm{CE}(p_i, y) = -\log(p_i), \qquad L \approx \frac{1}{n}\sum_{i=1}^{n} -\log(p_i) = \frac{1}{n}\sum_{i=1}^{n} L_i$$
where $L_i$ is the loss for one training example and the total loss $L$ is approximated as the mean over all examples, $p_i \in [0, 1]$ is the model’s estimated probability for the class $y = 1$, and $y \in \{\pm 1\}$ specifies the ground-truth class [30].
The loss is calculated depending on the loss function definition. One of the most common loss functions is cross-entropy loss. This loss function is beneficial for image classification tasks, but different tasks need different loss functions. For example, in the detection problem in which bounding boxes are estimated around objects, a regression loss function can be used to get a measure of how well the bounding box is placed in the image.
The cross-entropy loss is used when the model contains the Softmax classifier. The Softmax classifier gives a probability score for each object class. The loss function is calculated as:
$$L_i = -\log\left(\frac{e^{f_{y_i}}}{\sum_j e^{f_j}}\right)$$
where $L_i$ is the loss for the $i$-th training example, $f_j$ is the $j$-th element of the vector of class scores $f$, and $y_i$ is the index of the correct class.
The Mean Square Error (MSE) is the most commonly used regression loss function. It can be computed as the squared norm of the difference between the true value and the predicted value:
$$L_i = \left\| g - y_i \right\|_2^2$$
where g are the predicted values and y i are the true ones. This loss function can be used when the goal is to find the coordinates of a bounding box when performing object detection.
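The effect of focal loss described at the start of this section can also be illustrated numerically. The sketch below (our own, not the library implementation) compares plain cross-entropy with the focal loss $FL(p_t) = -\alpha (1 - p_t)^{\gamma} \log(p_t)$, using the defaults $\alpha = 0.25$ and $\gamma = 2$ from [30]:

```python
# A numerical sketch (ours, not the library implementation) contrasting
# cross-entropy with focal loss, FL(p_t) = -alpha*(1 - p_t)**gamma * log(p_t),
# with the default alpha = 0.25 and gamma = 2 from [30].
import numpy as np

def cross_entropy(p_t):
    return -np.log(p_t)

def focal_loss(p_t, alpha=0.25, gamma=2.0):
    # The (1 - p_t)**gamma factor down-weights well-classified examples.
    return -alpha * (1.0 - p_t) ** gamma * np.log(p_t)

p_t = np.array([0.9, 0.5, 0.1])   # easy, uncertain, and hard examples
print(cross_entropy(p_t))         # approx. [0.105  0.693  2.303]
print(focal_loss(p_t))            # approx. [0.00026  0.0433  0.466]
```

The easy example’s contribution shrinks by a factor of about 400, while the hard example keeps roughly a fifth of its cross-entropy weight, which is exactly the rebalancing described above.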

2.4. Metrics

To quantitatively evaluate the results of cell counting, the following measures are defined.
The accuracy is defined by the following formula:
$$\text{Accuracy} = \left(1 - \frac{\left| N_{\text{expert}} - N_{\text{count}} \right|}{\max\left(N_{\text{expert}}, N_{\text{count}}\right)}\right) \cdot 100\%$$
where $N_{\text{expert}}$ is the number of cells counted by an expert and $N_{\text{count}}$ is the number of cells counted by the application.
The classifier efficiency is evaluated based on its ability to correctly identify the number of cells belonging to one of the three classes. For each class, the quantitative measurement is performed based on True Positive (TP), False Positive (FP), True Negative (TN), and False Negative (FN) parameters.
Precision is the fraction of samples assigned to a given class that actually belong to that class. This value is given by the formula:
$$\text{Precision} = \frac{TP}{TP + FP} \cdot 100\%$$
Recall (sensitivity) is the ratio of correctly identified samples of a given class to all samples belonging to that class. It is expressed by the formula:
$$\text{Recall} = \frac{TP}{TP + FN} \cdot 100\%$$
F1-score is the harmonic average of recall and precision, which can be expressed by the formula:
$$\text{F1-score} = \frac{2 \cdot \text{Precision} \cdot \text{Recall}}{\text{Precision} + \text{Recall}}$$
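The four formulas translate directly into code. The helper functions below (names are ours, not from the paper’s application) reproduce, for example, the RN30 RBC counting accuracy at threshold 0.25 from the counts in Table 11:

```python
# The four quality measures above, transcribed directly into Python.
def counting_accuracy(n_expert, n_count):
    # Accuracy of the count itself, relative to the expert count.
    return (1 - abs(n_expert - n_count) / max(n_expert, n_count)) * 100

def precision(tp, fp):
    return tp / (tp + fp) * 100

def recall(tp, fn):
    return tp / (tp + fn) * 100

def f1_score(prec, rec):
    # Harmonic mean of precision and recall (both already in percent).
    return 2 * prec * rec / (prec + rec)

# Example: RN30 RBC counting at threshold 0.25 (Tables 8 and 11).
print(round(counting_accuracy(1822, 1828), 2))  # 99.67
```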

3. Implementation of Cell Counting Algorithm

Our goal is to use an object detection and classification algorithm to detect and count three types of blood cells directly in a smear image. For this purpose, we needed to train the RetinaNet network with selected settings and configurations on training images with blood cell annotations. In this way, we created an application for recognizing and counting blood cells.

3.1. Datasets

For training and validation, we used our own dataset of 900 images containing WBCs, RBCs, and platelets. For the validation dataset, we randomly selected 15 annotated images from the training set.
For application tests, we used images from the LISC dataset [33]. The dataset includes 251 images of resolution 720 × 576 acquired by a light microscope (Axioskope 40) at a magnification of 100× and recorded by a digital camera (Sony Model No. SSCDC50AP). From the test dataset, we randomly selected 131 images for counting WBCs, 64 images for counting platelets, and 15 images for counting RBCs. The numbers of images selected for testing differ because the numbers of individual cells per image differ: only a small number of images was needed for testing RBCs because of the large number of RBCs in each image (on average 121 RBCs per image), and the situation is similar for the platelet count.

3.2. Image Labelling

Before starting the network training process, we manually marked the three types of cells in the microscopic images using the LabelImg application, a graphical image annotation tool [34]. This process is shown in Figure 3. The objects in the images were divided into three categories (WBCs, RBCs, and platelets) and marked accordingly. In this way, the blood cell annotations for DNN training were acquired.
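LabelImg stores each image’s annotations as a Pascal VOC XML file. A minimal sketch (with hypothetical file names) of converting such files into the one-box-per-row CSV format expected by the keras-retinanet CSV generator could look as follows:

```python
# Sketch: Pascal VOC XML files written by LabelImg -> keras-retinanet CSV
# rows of the form image_path,x1,y1,x2,y2,class_name (file names hypothetical).
import csv
import glob
import xml.etree.ElementTree as ET

with open("annotations.csv", "w", newline="") as out:
    writer = csv.writer(out)
    for xml_path in glob.glob("labels/*.xml"):
        root = ET.parse(xml_path).getroot()
        image_path = root.findtext("path")
        for obj in root.iter("object"):
            box = obj.find("bndbox")
            writer.writerow([image_path,
                             box.findtext("xmin"), box.findtext("ymin"),
                             box.findtext("xmax"), box.findtext("ymax"),
                             obj.findtext("name")])   # RBC, WBC, or Platelets
```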

3.3. Training RetinaNet for Object Recognition

We used the RetinaNet network with ResNet50 as the backbone, with the input batch size set to 5 and the number of epochs set to 40, each of 500 training steps. We used 900 images to train the network. The training process outputs a JSON file containing the network trained on these images, based on the set parameters.
As a result of training each model, we obtained one file per epoch, 40 files in total. We then selected the models obtained after 10, 15, 20, 25, 30, 35, and 40 epochs to investigate the impact of the decreasing loss function on detection accuracy. The workflow of the network learning process is presented in Figure 4.
Figure 5 shows the learning curve of the RetinaNet algorithm for blood cell detection, in terms of the regression and classification loss functions as well as their sum.

3.4. Selection of the Optimal Model

The criteria for selecting the best model variant were based on observation of the loss function, which decreased during learning from epoch to epoch. Additionally, we manually validated the results obtained after 10, 15, 20, 25, 30, 35, and 40 learning epochs using a validation set consisting of 15 images not used for training. We assessed the efficiency of blood cell counting by calculating the mean F1-score for each of the considered thresholds for each epoch. The results of the preliminary analysis are presented in Table 1, Table 2 and Table 3.
The additional aim of this validation was to compare the quality of cell counting after a given number of epochs and to find the optimal model for further testing. The learning process was quite long. For detailed analysis, we selected the models trained for 10 and 30 epochs. The model trained for 10 epochs achieved very high F1-score results, and its loss function had stabilized. The model obtained after 30 epochs achieved the highest F1-score values. For these two selected models, we conducted research on a larger testing dataset and calculated metrics such as F1-score, accuracy, precision, and recall, allowing for an in-depth and comprehensive assessment of the quality of the RetinaNet model.
Analyzing the F1-score results presented in Table 1, Table 2 and Table 3, obtained by counting the 3 types of cells in 15 images for each model and each threshold, it turned out that the best results for RBC, WBC, and platelet counting were achieved by the RetinaNet trained for 30 epochs. That model returns the highest values for recognized RBCs and platelets, as well as WBCs. The same maximum F1-score values for WBCs also occur in other models, except RN40. Taking into account the maximum F1-score values for counting all three types of cells in Table 1, Table 2 and Table 3, the RN30 model can be indicated as the best.

4. Experiments and Results

After the network training, we performed tests using a specially developed application, which allows for the import of trained models, cell detection, and presentation of results in a graphical and numerical form.
The tests of the developed models were performed on 15 images with RBCs, 131 images with WBCs, and 64 images with platelets. The output of the deep learning model is an image with appropriate markings of the recognized cells. To verify the correctness of the obtained results, we counted all marked cells according to their type in the dedicated application. Thus, our application counts the different cells in the selected testing dataset at different confidence thresholds for the selected models (the RetinaNet model trained for 10 epochs (RN10) and the RetinaNet trained for 30 epochs (RN30)). We compared the results obtained for both models in order to check the impact of the loss function values on object recognition performance.
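A sketch of the counting step, using the keras-retinanet inference API [32] on a converted inference snapshot, is shown below; the model path, image name, and class-index mapping are placeholders:

```python
# Sketch of the counting step with the keras-retinanet inference API [32].
# "rn30_inference.h5" stands for a training snapshot converted to an
# inference model; the class-index mapping is an assumption.
from collections import Counter

import numpy as np
from keras_retinanet.models import load_model
from keras_retinanet.utils.image import preprocess_image, read_image_bgr, resize_image

model = load_model("rn30_inference.h5", backbone_name="resnet50")
labels = {0: "RBC", 1: "WBC", 2: "Platelets"}

image = preprocess_image(read_image_bgr("smear.jpg"))
image, scale = resize_image(image)
boxes, scores, classes = model.predict_on_batch(np.expand_dims(image, axis=0))

threshold = 0.35  # one global threshold; per-class thresholds are discussed below
counts = Counter(labels[int(c)]
                 for c, s in zip(classes[0], scores[0]) if s >= threshold)
print(counts)
```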
It should be noted that the confidence threshold plays an instrumental role: the accuracy of identification and counting depends significantly on the appropriate confidence threshold setting. The values of the different measures used to estimate the accuracy of recognizing and counting blood cells on the testing data are presented in Table 4, Table 5, Table 6, Table 7, Table 8, Table 9 and Table 10.
To visualize the operation of the proposed labeling and blood cell counting method, we present one of the images from the validation set. Figure 6 shows an original blood smear image, and Figure 7 shows the same image with automatically drawn bounding boxes, labels, and probabilities for each marked blood cell. It was returned by our application using the RN30 model with a confidence threshold of 0.35. Recognized cells were automatically marked with bounding boxes according to their type: orange bounding boxes mark RBCs, light blue WBCs, and dark blue platelets. As seen in Figure 7, WBCs and platelets are detected without error, and almost all RBCs are correctly labeled. However, three erroneous orange frames can also be noticed, each of which encloses parts of two neighboring RBCs.

4.1. Results for the RN10

Table 4 contains the determined values of the accepted quality measures for the RetinaNet model after 10 learning epochs (RN10) on 15 test images. Table 5 includes an assessment of the WBC counting quality in 131 images, and Table 6 contains the results of counting platelets in 64 images. Table 7 contains the ground-truth cell numbers and the numbers counted by our application for the considered confidence thresholds.

4.2. Results for the RN30

Table 8 contains the calculated values of the adopted quality measures for the RetinaNet model after 30 learning epochs (RN30) for 15 test images. Table 9 includes an assessment of the WBCs counting quality in 131 images, and Table 10 contains the results of counting platelets in 64 images. Total estimated numbers of cells of different types for different confidence threshold values are presented in Table 11.
As is apparent from Table 8, to count RBCs it is best to use the optimal threshold of 0.25. However, to count WBCs and platelets, the optimal threshold is much higher (0.65 and 0.35, respectively, from Table 9 and Table 10). Thus, appropriate thresholds for each type of cell are selected as follows (a short sketch applying these per-class thresholds follows the list):
  • RBCs—Confidence Threshold: 0.25,
  • WBCs—Confidence Threshold: 0.65,
  • Platelets—Confidence Threshold: 0.35.
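A short sketch of how such per-class thresholds replace a single global threshold during counting (the shape of the detection list is an assumption):

```python
# Counting with the per-class thresholds listed above instead of one
# global threshold; `detections` as (class_name, score) pairs is assumed.
THRESHOLDS = {"RBC": 0.25, "WBC": 0.65, "Platelets": 0.35}

def count_cells(detections):
    counts = {name: 0 for name in THRESHOLDS}
    for name, score in detections:
        if score >= THRESHOLDS[name]:
            counts[name] += 1
    return counts

print(count_cells([("RBC", 0.31), ("WBC", 0.70), ("RBC", 0.20)]))
# {'RBC': 1, 'WBC': 1, 'Platelets': 0}
```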
For the RN10 model, the optimal confidence thresholds determined from Table 4, Table 5 and Table 6 are as follows: 0.25 for RBCs, 0.60 for WBCs, and 0.30 for platelets. Thus, the optimal thresholds for counting WBCs and platelets in the RN30 model are higher, having increased as a result of training RN10 for another 20 epochs. Higher confidence thresholds give greater certainty that individual blood cells are recognized correctly.
The growth of the confidence threshold that occurs in the RetinaNet model as a result of the learning process can be seen by analyzing and comparing Table 4, Table 5, Table 6, Table 7, Table 8, Table 9, Table 10 and Table 11, as well as Figure 8, Figure 9 and Figure 10, which show the impact of the confidence threshold value on the number of counted cells relative to the ground truths. From these figures, the optimal confidence thresholds for the three blood cell classes considered can also be read off easily.
Figure 11 shows an original image from the testing dataset. It was processed by our application, which automatically drew the bounding boxes, labels, and probabilities for each marked blood cell. The image processed using the RN30 model with a confidence threshold of 0.45 is shown in Figure 12. The labels have defined colors, relevant names, and a probability value for each blood cell. At the confidence threshold of 0.45, the WBC was correctly recognized, and all platelets were correctly recognized, labeled, and counted. The vast majority of RBCs are recognized and labeled correctly; at this confidence threshold, the RN30 model counts RBCs with an accuracy of about 75%. As seen in Figure 12, only a few RBCs are missed (typical for this threshold value), especially RBCs that overlap or are trimmed near the edge of the image. The application correctly recognized and counted one WBC, 9 platelets, and 123 RBCs against a ground truth of 135 RBCs. The application also returns the probability value of each marked cell and the average probability of all recognized cells of each type; in this case, the average probability was 0.905 for WBCs, 0.875 for RBCs, and 0.784 for platelets.

4.3. Comparison with the State-of-the-Art

To evaluate the performance of the proposed approach, we used the accuracy, precision, recall, and F1-score metrics, which are those used most often for counting purposes. We compared the performance of the proposed approach with the results obtained by other authors who dealt with RBC, WBC, or platelet counting. It must be noted that only a few methods have aimed to count RBCs, WBCs, and platelets at the same time [24]. The selected methods are based on deep learning as well as traditional image processing. Alam et al. [24] proposed an approach that employs YOLO to detect all three types of blood cells simultaneously; their method does not require any greyscale conversion or binary segmentation, and the whole process is fully automated. It is very similar to our approach because it uses deep neural networks to detect and count the three types of cells. Dvanesh et al. [35] presented a method to digitally analyze blood cell images and obtain RBC and WBC counts from blood smear microscopic images using digital image processing. Acevedo et al. [17] proposed a system for the automatic classification of peripheral blood cells (WBCs and platelets) by means of a transfer learning approach using convolutional neural networks. Di Ruberto et al. [36] proposed a system for detecting and quantifying red and white blood cells based on the Edge Boxes method, an approach that generates object bounding box proposals directly from edges.
A comparison of the RBC, WBC, and platelet counting results with the results obtained by the other authors is reported in Table 12, Table 13 and Table 14. As can be observed, the proposed approach improves the counting performance; in particular, it significantly enhances accuracy. To put the performance obtained with the proposed method in context, in Table 12 we also report the number of images or ground truths used by the authors to test their approaches. The method proposed in [36] achieved higher precision, recall, and F1-score than our method.
The WBC counting results are reported in Table 13, again compared directly with the results obtained by the other authors. The numerical results in Table 13 confirm the good performance of our approach, as it is able to detect WBCs with higher accuracy and precision than the other methods. Only one method [36] achieved higher recall and F1-score, at the cost of accuracy and precision.
The platelet counting results, compared with the results obtained by the other authors, are reported in Table 14. The proposed approach obtained better accuracy than that presented by Alam et al. [24] and is slightly worse than that presented by Acevedo et al. [17].
The obtained results are very satisfactory considering that we are dealing with the recognition and counting of three types of cells simultaneously. An approach similar to ours was presented in only one of the selected works [24], and in comparison to it, we obtained better performance in recognizing all three types of cells. The other works concerned the simultaneous recognition of two types of cells: WBCs and RBCs [35,36] or WBCs and platelets [17]. It should be noted, however, that the compared results were obtained on different datasets; for an accurate comparison, the approaches should be tested under the same conditions using the same datasets. Furthermore, the images used to test our approach were of moderate resolution and contained large numbers of cells (typically 100–150 cells per image) that often overlapped each other, which impeded their correct recognition and counting.

5. Discussion

The results listed in Table 4, Table 5, Table 6, Table 7, Table 8, Table 9, Table 10 and Table 11 indicate that each cell type has its own optimal confidence threshold, at which the highest accuracy of recognizing and counting the individual cells was obtained. Using one common confidence threshold generally cannot provide accurate results: when choosing an intermediate common threshold, for example 0.55, all counting indicators will be worse, and the application will not count the individual cell types as exactly as it does with the optimal per-class confidence thresholds. Thus, each type of cell should be counted separately with an individually selected confidence threshold to obtain the most accurate results.
For the RetinaNet model after 30 epochs (RN30), at the confidence threshold of 0.25, the accuracy of RBC counting by the application is 99.7%, the precision is 86.4%, and the recall is 86.5%. The accuracy of WBC counting at a confidence threshold of 0.65 is 98.6%, the counting precision is 97.9%, and the recall is 96.5%. In the case of counting platelets, for the optimal threshold of 0.35, the accuracy is 97.8%, the precision is 92.8%, and the recall is 90.8%. Almost all quality indicators for the RN10 model are slightly lower than for the RN30. Only in a certain range of confidence thresholds are the accuracy and precision of cell counting in the RN10 model better than in the RN30. However, comparing F1-score values, the maximum value of this metric was obtained for the RN30 model.
In light of the results presented above, some general conclusions can be drawn. The RN10 model, after a relatively short learning process (10 epochs), can count blood cells quite accurately. Training up to 30 epochs improves almost all counting performance metrics, and it also raises the confidence threshold at which the best results occur. The model trained for 40 epochs shows signs of overtraining, visible in the preliminary results on the validation data. The presented results, however, only partially convey the complexity of the blood cell counting problem. Regarding the selection of the optimal model for blood cell counting applications, we came to the conclusion that it is a difficult, complex, and time-consuming process, because the counting accuracy depends on the confidence threshold, the training time, the number of epochs, the selection of performance evaluation metrics, and perhaps many other factors that we did not include in this work. Through this extensive study, the optimal confidence thresholds were established, for which the application counted cells very accurately and with high precision. We can eliminate redundant and incorrect estimates of the number of cells by selecting an appropriate confidence threshold for each cell type instead of a general threshold for all blood cells. Compared to other works on cell counting, the results obtained are very satisfactory for the recognition and counting of three cell types simultaneously, and we achieved better quality measure values than most of them.
Finally, we have to mention that a very big advantage of the application, in addition to the precise counting, is the appropriate marking of all recognized cells with labels and probabilities, which allows for easy verification of the obtained results. Marking recognized cells is still rare functionality in counting methods. Our method also works on images of high resolution and large dimensions, whereas other methods must divide a large image into smaller ones containing only a few cells each, which gives our method an additional advantage.

6. Conclusions

This article presents a machine learning approach to the automatic identification and counting of blood cells in a smear image based on the RetinaNet CNN. The proposed method is evaluated on publicly available datasets. The developed models have been tested on different types of cells with different cell densities in the images, and they show promising results. The developed application returns the results in numerical and graphical form, which enables their simple verification. Additionally, the graphical results, i.e., the labeled cells, show the probability of correct recognition for each cell. We observed that, on the testing dataset, our method accurately recognizes and counts RBCs, WBCs, and platelets. However, the counting accuracy depends on the proper selection of the confidence threshold for the individual cell classes.
An essential advantage is that the medical images do not require preliminary preparation, and all results are obtained after a single presentation of an image. The calculated metrics allow for an in-depth and comprehensive evaluation of the quality of the RetinaNet models. Due to its detection accuracy and performance, the proposed method has the potential to replace the manual identification and counting of blood cells. The developed application would allow for speeding up cell counting and increasing its accuracy.

Author Contributions

Conceptualization, G.D. and D.M.; methodology, G.D. and D.M.; software, G.D. and A.C.; validation, D.M. and G.D.; formal analysis, G.D.; investigation, G.D. and A.C.; resources, G.D.; data curation, G.D.; writing—original draft preparation, D.M., G.D. and A.C.; writing—review and editing, D.M., G.D. and A.C.; visualization, A.C.; supervision, G.D.; project administration, D.M.; funding acquisition, D.M. All authors have read and agreed to the published version of the manuscript.

Funding

This project is financed by the statutory funds (UPB) of the Department of Electrical and Computer Engineering Fundamentals, Rzeszow University of Technology.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
CBC: Complete blood count
CNN: Convolutional neural network
DNN: Deep Neural Network
FN: False Negative
FP: False Positive
FPN: Feature Pyramid Network
MSE: Mean Square Error
RBC: Red Blood Cell
ReLU: Rectified Linear Unit
TN: True Negative
TP: True Positive
WBC: White Blood Cell

References

  1. George-Gay, B.; Parker, K. Understanding the complete blood count with differential. J. Perianesthesia Nurs. 2003, 18, 96–114.
  2. Lockley, S.W.; Barger, L.K.; Ayas, N.T.; Rothschild, J.M.; Czeisler, C.A.; Landrigan, C.P. Effects of Health Care Provider Work Hours and Sleep Deprivation on Safety and Performance. Jt. Comm. J. Qual. Patient Saf. 2007, 33, 7–18.
  3. Maitra, M.; Gupta, R.K.; Mukherjee, M. Detection and counting of red blood cells in blood cell images using Hough transform. Int. J. Comput. Appl. 2012, 53, 13–17.
  4. Tomari, R.; Zakaria, W.N.W.; Ngadengon, R. An Empirical Framework for Automatic Red Blood Cell Morphology Identification and Counting. ARPN J. Eng. Appl. Sci. 2015, 10, 8894–8901.
  5. Nemane, J.B.; Chakkarwar, V.A.; Lahoti, P.B. White blood cell segmentation and counting using global threshold. Int. J. Emerg. Technol. Adv. Eng. 2013, 3, 639–643.
  6. Putzu, L.; Di Ruberto, C. White Blood Cells Identification and Counting from Microscopic Blood Image. Eng. Technol. Int. J. Med. Health Sci. 2013, 7, 20–27.
  7. Arslan, S.; Ozyurek, E.; Gunduz-Demir, C. A color and shape based algorithm for segmentation of white blood cells in peripheral blood and bone marrow images. Cytom. Part A 2014, 85, 480–490.
  8. Nazlibilek, S.; Karacor, D.; Ercan, T.; Sazli, M.H.; Kalender, O.; Ege, Y. Automatic segmentation, counting, size determination and classification of white blood cells. Measurement 2014, 55, 58–65.
  9. Safuan, S.; Tomari, R.; Zakaria, W.N.W. White blood cell (WBC) counting analysis in blood smear images using various color segmentation methods. Measurement 2018, 116, 543–555.
  10. Zhang, C.; Xiao, X.; Li, X.; Chen, Y.-J.; Zhen, W.; Chang, J.; Zheng, C.; Liu, Z. White Blood Cell Segmentation by Color-Space-Based K-Means Clustering. Sensors 2014, 14, 16128–16147.
  11. Xing, F.; Yang, L. Robust nucleus/cell detection and segmentation in digital pathology and microscopy images: A Comprehensive Review. IEEE Rev. Biomed. Eng. 2016, 9, 234–263.
  12. Xie, Y.; Xing, F.; Kong, X.; Su, H.; Yang, L. Beyond Classification: Structured Regression for Robust Cell Detection Using Convolutional Neural Network. In Medical Image Computing and Computer-Assisted Intervention—MICCAI 2015; Navab, N., Hornegger, J., Wells, W., Frangi, A., Eds.; Springer: Berlin/Heidelberg, Germany, 2015; Volume 9351, pp. 358–365.
  13. Kim, M.; Yan, C.; Yang, D.; Wang, Q.; Ma, J.; Wu, G. Chapter Eight—Deep learning in biomedical image analysis. In Biomedical Engineering, Biomedical Information Technology, 2nd ed.; Feng, D.D., Ed.; Academic Press: Cambridge, MA, USA, 2020; pp. 239–263.
  14. Alzubaidi, L.; Fadhel, M.A.; Al-Shamma, O.; Zhang, J.; Duan, Y. Deep Learning Models for Classification of Red Blood Cells in Microscopy Images to Aid in Sickle Cell Anemia Diagnosis. Electronics 2020, 9, 427.
  15. Parab, M.A.; Mehendale, N.D. Red Blood Cell Classification Using Image Processing and CNN. SN Comput. Sci. 2021, 2, 1–10.
  16. Vogado, L.; Veras, R.; Aires, K.; Araújo, F.; Silva, R.; Ponti, M.; Tavares, J.M.R.S. Diagnosis of Leukaemia in Blood Slides Based on a Fine-Tuned and Highly Generalisable Deep Learning Model. Sensors 2021, 21, 2989.
  17. Acevedo, A.; Alférez, S.; Merino, A.; Puigví, L.; Rodellar, J. Recognition of peripheral blood cell images using convolutional neural networks. Comput. Methods Programs Biomed. 2019, 180, 105020.
  18. Habibzadeh, M.; Jannesari, M.; Rezaei, Z.; Baharvand, H.; Totonchi, M. Automatic white blood cell classification using pre-trained deep learning models: ResNet and Inception. In Proceedings of the Tenth International Conference on Machine Vision (ICMV 2017), Vienna, Austria, 13–15 November 2017; Zhou, J., Radeva, P., Nikolaev, D., Verikas, A., Eds.; SPIE: Vienna, Austria, 2018; Volume 10696, p. 1069612.
  19. Wang, Q.; Bi, S.; Sun, M.; Wang, Y.; Wang, D.; Yang, S. Deep learning approach to peripheral leukocyte recognition. PLoS ONE 2019, 14, e0218808.
  20. Loey, M.; Naman, M.; Zayed, H. Deep Transfer Learning in Diagnosing Leukemia in Blood Cells. Computers 2020, 9, 29.
  21. Huang, X.; Liu, J.; Yao, J.; Wei, M.; Han, W.; Chen, J.; Sun, L. Deep-Learning Based Label-Free Classification of Activated and Inactivated Neutrophils for Rapid Immune State Monitoring. Sensors 2021, 21, 512.
  22. Khan, A.; Eker, A.; Chefranov, A.; Demirel, H. White blood cell type identification using multi-layer convolutional features with an extreme-learning machine. Biomed. Signal Process. Control 2021, 69, 102932.
  23. Reena, M.R.; Ameer, P.M. Localization and recognition of leukocytes in peripheral blood: A deep learning approach. Comput. Biol. Med. 2020, 126, 104034.
  24. Alam, M.M.; Islam, M.T. Machine learning approach of automatic identification and counting of blood cells. Healthc. Technol. Lett. 2019, 6, 103–108.
  25. Goodfellow, I.; Bengio, Y.; Courville, A. Deep Learning, 1st ed.; MIT Press: Cambridge, MA, USA, 2016.
  26. Xie, W.; Noble, J.A.; Zisserman, A. Microscopy cell counting and detection with fully convolutional regression networks. Comput. Methods Biomech. Biomed. Eng. Imaging Vis. 2016, 6, 283–292.
  27. Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 1137–1149.
  28. Zhao, Z.Q.; Zheng, P.; Xu, S.-T.; Wu, X. Object Detection With Deep Learning: A Review. IEEE Trans. Neural Netw. Learn. Syst. 2019, 30, 3212–3232.
  29. Lin, T.Y.; Dollár, P.; Girshick, R.; He, K.; Hariharan, B.; Belongie, S. Feature Pyramid Networks for Object Detection. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; IEEE Computer Society: Washington, DC, USA, 2017; pp. 936–944.
  30. Lin, T.Y.; Goyal, P.; Girshick, R.; He, K.; Dollár, P. Focal Loss for Dense Object Detection. In Proceedings of the 2017 IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; IEEE Computer Society: Washington, DC, USA, 2017; pp. 2999–3007.
  31. Peteiro-Barral, D.; Guijarro-Berdiñas, B. A Study on the Scalability of Artificial Neural Networks Training Algorithms Using Multiple-Criteria Decision-Making Methods. In Lecture Notes in Computer Science, Proceedings of the Artificial Intelligence and Soft Computing (ICAISC 2013), Zakopane, Poland, 9–13 June 2013; Springer: Heidelberg, Germany, 2013; pp. 162–173.
  32. Gaiser, H.; de Vries, M.; Lacatusu, V. Keras RetinaNet. 2019. Available online: https://github.com/fizyr/keras-retinanet/tree/0.5.1 (accessed on 27 May 2021).
  33. Rezatofighi, S.H.; Soltanian-Zadeh, H. Automatic recognition of five types of white blood cells in peripheral blood. Comput. Med. Imaging Graph. 2011, 35, 333–343.
  34. Tzutalin. LabelImg. Git Code. 2015. Available online: https://github.com/tzutalin/labelImg (accessed on 20 July 2021).
  35. Dvanesh, V.D.; Lakshmi, P.S.; Reddy, K.; Vasavi, A.S. Blood Cell Count using Digital Image Processing. In Proceedings of the 2018 International Conference on Current Trends towards Converging Technologies (ICCTCT), Tamil Nadu, India, 1–3 March 2018; pp. 1–7.
  36. Di Ruberto, C.; Loddo, A.; Putzu, L. Detection of red and white blood cells from microscopic blood images using a region proposal approach. Comput. Biol. Med. 2020, 116, 103530.
Figure 1. The sequence of transformations involved in the convolutional network for recognizing blood cells.
Figure 2. The architecture of RetinaNet Detector.
Figure 3. The process of marking the training dataset.
Figure 4. The cell counting workflow.
Figure 5. Learning curve of the RetinaNet blood cells identification (500 steps per epoch).
Figure 6. An example of blood smear image from the validation dataset.
Figure 7. An example of blood smear image with recognized RBCs, WBC and platelets by the RN30 model.
Figure 8. Number of detected RBCs vs. threshold value.
Figure 9. Number of detected WBCs vs. threshold value.
Figure 10. Number of detected platelets vs. threshold value.
Figure 11. An example of blood smear image from the testing dataset.
Figure 12. An example of blood smear image with recognized RBCs, WBC, and platelets by the RN30 model.
Table 1. Preliminary F1-scores [%] for the recognition of RBCs, obtained from the analysis of 15 images, used to select the optimal model.

Threshold | RN10  | RN15  | RN20  | RN25  | RN30  | RN35  | RN40
0.20      | 87.11 | 87.36 | 86.81 | 87.47 | 85.65 | 87.97 | 87.99
0.25      | 87.51 | 87.59 | 87.97 | 88.18 | 87.05 | 87.68 | 88.40
0.30      | 87.57 | 87.85 | 87.97 | 88.39 | 88.51 | 87.56 | 87.99
0.35      | 87.94 | 86.99 | 88.12 | 88.47 | 87.22 | 87.09 | 86.29
0.40      | 86.47 | 84.61 | 86.85 | 87.32 | 86.35 | 85.91 | 84.22
0.45      | 84.04 | 82.74 | 84.96 | 85.37 | 84.82 | 83.92 | 82.86
0.50      | 81.14 | 80.64 | 83.25 | 83.39 | 82.49 | 82.08 | 81.26
0.55      | 78.00 | 78.26 | 80.69 | 80.95 | 80.36 | 80.00 | 78.51
0.60      | 73.95 | 73.83 | 78.64 | 78.92 | 78.13 | 77.80 | 75.97
0.65      | 69.39 | 70.16 | 75.28 | 76.09 | 76.10 | 75.55 | 73.91
0.70      | 64.20 | 66.31 | 72.14 | 73.52 | 72.71 | 72.45 | 71.11
0.75      | 56.67 | 61.09 | 68.52 | 69.73 | 69.46 | 68.82 | 67.76
0.80      | 48.42 | 52.19 | 62.34 | 65.03 | 64.92 | 65.03 | 64.57
0.85      | 36.80 | 43.32 | 55.41 | 58.65 | 60.44 | 60.83 | 59.70
Table 2. Preliminary F1-score values [%] for the recognition of WBCs, obtained from the analysis of 15 images, used to select the optimal model.

Threshold | RN10  | RN15  | RN20  | RN25  | RN30  | RN35  | RN40
0.20      | 21.18 | 28.57 | 26.28 | 24.32 | 19.46 | 19.88 | 14.46
0.25      | 36.73 | 43.90 | 39.13 | 33.96 | 26.67 | 27.20 | 18.28
0.30      | 51.43 | 55.74 | 48.57 | 42.50 | 34.29 | 32.69 | 23.45
0.35      | 61.54 | 64.00 | 65.38 | 57.63 | 44.44 | 44.16 | 29.06
0.40      | 73.17 | 75.00 | 65.31 | 62.50 | 53.12 | 58.62 | 35.79
0.45      | 85.71 | 83.33 | 75.00 | 69.77 | 61.54 | 64.00 | 43.59
0.50      | 88.24 | 88.24 | 78.95 | 71.43 | 63.83 | 68.18 | 47.06
0.55      | 90.91 | 88.24 | 85.71 | 75.00 | 68.18 | 71.43 | 51.72
0.60      | 90.91 | 90.91 | 90.91 | 83.33 | 73.17 | 78.95 | 57.69
0.65      | 87.50 | 90.91 | 90.91 | 90.91 | 83.33 | 85.71 | 73.17
0.70      | 83.87 | 87.50 | 90.91 | 90.91 | 88.24 | 88.24 | 76.92
0.75      | 80.00 | 87.50 | 87.50 | 87.50 | 90.91 | 90.91 | 85.71
0.80      | 61.54 | 75.86 | 87.50 | 87.50 | 87.50 | 87.50 | 84.85
0.85      | 56.00 | 66.67 | 80.00 | 80.00 | 80.00 | 83.87 | 87.50
Table 3. Preliminary F1-score values [%] for the recognition of platelets, obtained from the analysis of 15 images, used to select the optimal model.

Threshold | RN10  | RN15  | RN20  | RN25  | RN30  | RN35  | RN40
0.20      | 73.95 | 71.07 | 75.62 | 75.23 | 73.17 | 73.49 | 66.95
0.25      | 80.65 | 81.72 | 82.58 | 81.82 | 81.23 | 80.75 | 73.35
0.30      | 83.99 | 85.46 | 83.03 | 83.57 | 86.53 | 80.94 | 76.88
0.35      | 82.05 | 85.36 | 83.91 | 85.45 | 85.11 | 82.39 | 80.60
0.40      | 80.13 | 81.70 | 82.39 | 83.60 | 83.97 | 81.97 | 80.51
0.45      | 77.82 | 80.00 | 80.41 | 80.95 | 81.19 | 80.41 | 79.21
0.50      | 73.84 | 78.08 | 77.03 | 78.32 | 78.50 | 77.35 | 77.51
0.55      | 68.42 | 73.76 | 75.27 | 74.82 | 75.00 | 73.45 | 74.47
0.60      | 58.30 | 68.66 | 64.59 | 68.18 | 70.85 | 67.18 | 68.66
0.65      | 47.83 | 58.06 | 57.61 | 60.80 | 64.59 | 60.24 | 63.53
0.70      | 41.28 | 52.14 | 52.77 | 52.77 | 55.00 | 53.16 | 53.78
0.75      | 35.24 | 42.73 | 38.32 | 44.84 | 49.57 | 47.58 | 49.35
0.80      | 27.86 | 34.45 | 30.39 | 35.24 | 41.28 | 39.07 | 39.81
0.85      | 13.98 | 24.37 | 24.37 | 24.37 | 27.86 | 26.13 | 26.13
Table 4. The accuracy, precision, recall, and F1-score of automatic counting of RBCs using RetinaNet model for 10 epochs (15 images).

Threshold | Accuracy [%] | Precision [%] | Recall [%] | F1-Score [%]
0.20      | 86.68        | 81.30         | 93.80      | 87.10
0.25      | 96.91        | 87.23         | 90.01      | 88.60
0.30      | 93.47        | 90.66         | 84.74      | 87.60
0.35      | 85.24        | 93.69         | 79.86      | 86.22
0.40      | 78.05        | 95.64         | 74.64      | 83.85
0.45      | 71.68        | 97.17         | 69.65      | 81.14
0.50      | 64.76        | 98.39         | 63.72      | 77.35
0.55      | 59.22        | 99.35         | 58.84      | 73.91
0.60      | 53.51        | 99.79         | 53.40      | 69.57
0.65      | 47.64        | 99.88         | 45.94      | 62.93
0.70      | 41.60        | 100.0         | 41.60      | 58.76
0.75      | 34.80        | 100.0         | 34.80      | 51.63
0.80      | 28.38        | 100.0         | 28.38      | 44.21
0.85      | 19.92        | 100.0         | 19.92      | 33.23
Table 5. The accuracy, precision, recall, and F1-score of automatic counting of WBCs using RetinaNet model for 10 epochs (131 images).

Threshold | Accuracy [%] | Precision [%] | Recall [%] | F1-Score [%]
0.20      | 18.51        | 18.25         | 98.61      | 30.80
0.25      | 31.17        | 30.74         | 98.61      | 46.86
0.30      | 45.71        | 45.08         | 98.61      | 61.87
0.35      | 66.67        | 64.81         | 97.22      | 77.78
0.40      | 79.56        | 77.35         | 97.22      | 86.15
0.45      | 92.90        | 89.68         | 97.20      | 93.29
0.50      | 97.30        | 93.24         | 96.50      | 94.85
0.55      | 97.22        | 96.43         | 93.75      | 95.07
0.60      | 94.44        | 98.53         | 93.06      | 95.71
0.65      | 90.28        | 98.46         | 90.14      | 94.12
0.70      | 81.25        | 99.15         | 81.69      | 89.58
0.75      | 70.83        | 99.02         | 72.14      | 83.47
0.80      | 52.78        | 97.37         | 52.86      | 68.52
0.85      | 36.11        | 100.0         | 37.14      | 54.17
Table 6. The accuracy, precision, recall, and F1-score of automatic counting of platelets using RetinaNet model for 10 epochs (64 images).

Threshold | Accuracy [%] | Precision [%] | Recall [%] | F1-Score [%]
0.20      | 69.18        | 68.56         | 98.97      | 81.01
0.25      | 86.36        | 82.93         | 95.90      | 88.94
0.30      | 98.72        | 91.68         | 90.85      | 91.26
0.35      | 88.45        | 96.08         | 86.09      | 90.81
0.40      | 81.39        | 97.48         | 79.84      | 87.78
0.45      | 72.79        | 99.47         | 73.06      | 84.24
0.50      | 63.80        | 99.80         | 64.50      | 78.36
0.55      | 53.79        | 100.0         | 54.49      | 70.54
0.60      | 45.96        | 100.0         | 46.86      | 63.81
0.65      | 38.25        | 100.0         | 39.11      | 56.23
0.70      | 30.94        | 100.0         | 31.54      | 47.96
0.75      | 22.21        | 100.0         | 22.21      | 36.34
0.80      | 14.25        | 100.0         | 14.25      | 24.94
0.85      | 8.09         | 100.0         | 8.09       | 14.96
Table 7. Ground truth and the estimated number of blood cells at different confidence thresholds for the RN10.

Threshold | RBC Ground Truth | RBC Estimated | WBC Ground Truth | WBC Estimated | Platelet Ground Truth | Platelet Estimated
0.20      | 1822 | 2102 | 144 | 778 | 779 | 1126
0.25      | 1822 | 1880 | 144 | 462 | 779 | 902
0.30      | 1822 | 1703 | 144 | 315 | 779 | 769
0.35      | 1822 | 1553 | 144 | 216 | 779 | 689
0.40      | 1822 | 1422 | 144 | 181 | 779 | 634
0.45      | 1822 | 1306 | 144 | 155 | 779 | 567
0.50      | 1822 | 1180 | 144 | 148 | 779 | 497
0.55      | 1822 | 1079 | 144 | 140 | 779 | 419
0.60      | 1822 | 975  | 144 | 136 | 779 | 358
0.65      | 1822 | 868  | 144 | 130 | 779 | 298
0.70      | 1822 | 758  | 144 | 117 | 779 | 241
0.75      | 1822 | 634  | 144 | 102 | 779 | 173
0.80      | 1822 | 517  | 144 | 76  | 779 | 111
0.85      | 1822 | 363  | 144 | 52  | 779 | 63
Table 8. The accuracy, precision, recall, and F1-score of automatic counting of RBCs using RetinaNet model for 30 epochs (15 images).

Threshold | Accuracy [%] | Precision [%] | Recall [%] | F1-Score [%]
0.20      | 91.24        | 81.18         | 88.41      | 84.64
0.25      | 99.67        | 86.44         | 86.54      | 86.49
0.30      | 91.99        | 90.51         | 83.35      | 86.78
0.35      | 86.39        | 93.20         | 80.60      | 86.45
0.40      | 80.90        | 94.91         | 76.87      | 84.94
0.45      | 75.30        | 96.43         | 72.69      | 82.89
0.50      | 70.25        | 97.11         | 68.30      | 80.19
0.55      | 65.81        | 97.75         | 64.40      | 77.64
0.60      | 62.18        | 98.23         | 61.15      | 75.38
0.65      | 58.29        | 98.59         | 57.53      | 72.66
0.70      | 53.35        | 99.18         | 52.97      | 69.05
0.75      | 48.35        | 99.66         | 48.08      | 64.86
0.80      | 43.14        | 99.87         | 43.02      | 60.14
0.85      | 37.87        | 100.0         | 37.91      | 54.98
Table 9. The accuracy, precision, recall, and F1-score of automatic counting of WBCs using RetinaNet model for 30 epochs (131 images).

Threshold | Accuracy [%] | Precision [%] | Recall [%] | F1-Score [%]
0.20      | 13.51        | 13.52         | 100.0      | 23.82
0.25      | 21.02        | 21.02         | 100.0      | 34.74
0.30      | 30.38        | 30.38         | 100.0      | 46.60
0.35      | 41.38        | 41.38         | 100.0      | 58.54
0.40      | 53.33        | 53.33         | 100.0      | 69.57
0.45      | 65.75        | 64.84         | 98.61      | 78.24
0.50      | 75.39        | 73.82         | 97.92      | 84.18
0.55      | 86.75        | 84.34         | 97.22      | 90.32
0.60      | 96.00        | 92.67         | 96.53      | 94.56
0.65      | 98.61        | 97.89         | 96.53      | 97.20
0.70      | 97.22        | 99.29         | 96.53      | 97.89
0.75      | 95.14        | 100.0         | 95.14      | 97.51
0.80      | 90.97        | 100.0         | 90.97      | 95.27
0.85      | 79.86        | 100.0         | 79.86      | 88.80
Table 10. The accuracy, precision, recall, and F1-score of automatic counting of platelets using RetinaNet model for 30 epochs (64 images).

Threshold | Accuracy [%] | Precision [%] | Recall [%] | F1-Score [%]
0.20      | 67.33        | 66.55         | 98.97      | 79.59
0.25      | 82.43        | 80.00         | 97.05      | 87.70
0.30      | 93.29        | 87.90         | 94.22      | 90.95
0.35      | 97.82        | 92.78         | 90.76      | 91.76
0.40      | 89.73        | 95.71         | 85.88      | 90.53
0.45      | 83.95        | 97.40         | 81.77      | 88.90
0.50      | 77.79        | 98.84         | 76.89      | 86.50
0.55      | 69.96        | 99.27         | 69.45      | 81.72
0.60      | 62.77        | 99.59         | 62.52      | 76.81
0.65      | 55.84        | 100.0         | 55.84      | 71.66
0.70      | 47.37        | 100.0         | 47.37      | 64.29
0.75      | 40.56        | 100.0         | 40.56      | 57.72
0.80      | 30.94        | 100.0         | 30.94      | 47.25
0.85      | 21.44        | 100.0         | 21.44      | 35.31
Table 11. Ground truth and the estimated number of blood cells at different confidence thresholds for the RN30.

Threshold | RBC Ground Truth | RBC Estimated | WBC Ground Truth | WBC Estimated | Platelet Ground Truth | Platelet Estimated
0.20      | 1822 | 1997 | 144 | 1066 | 779 | 1157
0.25      | 1822 | 1828 | 144 | 685  | 779 | 945
0.30      | 1822 | 1676 | 144 | 474  | 779 | 835
0.35      | 1822 | 1574 | 144 | 348  | 779 | 762
0.40      | 1822 | 1474 | 144 | 270  | 779 | 699
0.45      | 1822 | 1372 | 144 | 219  | 779 | 654
0.50      | 1822 | 1280 | 144 | 191  | 779 | 606
0.55      | 1822 | 1199 | 144 | 166  | 779 | 545
0.60      | 1822 | 1133 | 144 | 150  | 779 | 489
0.65      | 1822 | 1062 | 144 | 142  | 779 | 435
0.70      | 1822 | 972  | 144 | 140  | 779 | 369
0.75      | 1822 | 881  | 144 | 137  | 779 | 316
0.80      | 1822 | 786  | 144 | 131  | 779 | 241
0.85      | 1822 | 690  | 144 | 115  | 779 | 167
Table 12. RBCs counting performance compared with the state-of-the-art.

              | Alam [24] | Dvanesh [35] | Acevedo [17] | Ruberto [36]             | Our Approach
Model         | Tiny YOLO | ABCCS        | -            | Region proposal approach | RetinaNet50
No. images    | 60        | 63           | -            | 108                      | 15
Ground truths | 792       | -            | -            | -                        | 1822
Accuracy [%]  | 96.09     | 91.0         | -            | 95.6                     | 99.67
Precision [%] | -         | -            | -            | 98.4                     | 86.44
Recall [%]    | -         | -            | -            | 95.0                     | 86.54
F1-score [%]  | -         | -            | -            | 96.6                     | 86.49
Table 13. WBC counting performance compared with the state-of-the-art.

              | Alam [24] | Dvanesh [35] | Acevedo [17] | Ruberto [36]             | Our Approach
Model         | Tiny YOLO | ABCCS        | Vgg-16       | Region proposal approach | RetinaNet50
No. images    | 60        | 63           | 1919         | 108                      | 131
Ground truths | 61        | -            | -            | -                        | 144
Accuracy [%]  | 86.89     | 85.0         | 96.20        | 97.0                     | 98.61
Precision [%] | -         | -            | -            | 97.6                     | 97.89
Recall [%]    | -         | -            | -            | 98.7                     | 96.53
F1-score [%]  | -         | -            | -            | 98.0                     | 97.20
Table 14. Platelet counting performance compared with the state-of-the-art.

              | Alam [24] | Dvanesh [35] | Acevedo [17] | Ruberto [36] | Our Approach
Model         | Tiny YOLO | -            | Vgg-16       | -            | RetinaNet50
No. images    | 60        | -            | 1919         | -            | 64
Ground truths | 55        | -            | -            | -            | 144
Accuracy [%]  | 96.36     | -            | 99.61        | -            | 97.82
Precision [%] | -         | -            | -            | -            | 92.78
Recall [%]    | -         | -            | -            | -            | 90.76
F1-score [%]  | -         | -            | -            | -            | 91.76
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
