Article

COVID-19 Lesion Segmentation Using Lung CT Scan Images: Comparative Study Based on Active Contour Models

by Younes Akbari 1,*,†, Hanadi Hassen 1,†, Somaya Al-Maadeed 1,† and Susu M. Zughaier 2,†
1 Department of Computer Science and Engineering, Qatar University, Doha 122104, Qatar
2 College of Medicine, QU Health, Qatar University, Doha 122104, Qatar
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Appl. Sci. 2021, 11(17), 8039; https://doi.org/10.3390/app11178039
Submission received: 12 July 2021 / Revised: 20 August 2021 / Accepted: 24 August 2021 / Published: 30 August 2021

Abstract: Pneumonia is a lung infection that threatens all age groups. In this paper, we use CT scans to investigate the effectiveness of active contour models (ACMs), one of the successful families of image segmentation methods, for segmenting pneumonia caused by the coronavirus disease (COVID-19). A comparison is made between the performances of state-of-the-art methods on a database of lung CT scan images. This review helps the reader identify starting points for research on active contour models for COVID-19, which is a high priority for researchers and practitioners. Finally, the experimental results indicate that active contour methods achieve promising results when there are not enough images to train deep learning-based methods, which are among the most powerful tools for image segmentation.

1. Introduction

The new 2019 coronavirus disease, named COVID-19 by the World Health Organization (WHO), has attracted a lot of attention because it is caused by a new type of coronavirus that is highly contagious and had not been seen in humans before [1]. As of 20 August 2021, 209,201,939 confirmed cases of COVID-19, including 4,390,467 deaths, have been reported by the WHO (https://covid19.who.int/ (accessed on 20 August 2021)). The current gold-standard diagnostic method for COVID-19 is the detection of viral nucleic acids by Reverse Transcription Polymerase Chain Reaction (RT-PCR). Because the lower sensitivity of some tests leads to false-negative results, other methods may be considered to aid COVID-19 diagnosis. To facilitate diagnosis, medical radiological imaging is used as a valuable supplemental tool to evaluate the infectious process. Radiologic imaging such as radiographs is usually performed in patients with clinical symptoms suggestive of pulmonary infection [1]. The authors of [2] showed that CT scans have higher sensitivity than RT-PCR tests, with reported sensitivities of 98% and 71%, respectively. However, the duration of diagnosis remains the major limitation of CT scans: even experienced radiologists need about 21.5 min to analyze the results of each case [2,3].
Therefore, identifying the region of infection can help reduce the time radiologists spend analyzing examination results. Automatic segmentation of the infection region can be achieved by effective segmentation algorithms. Currently, several segmentation algorithms in the literature show accurate and robust segmentation results. In general, they can be divided into four categories: clustering-based [4,5], graph-cut-based [6,7], neural network-based (for example, deep learning methods) [8,9], and active-contour-based methods [10,11,12,13,14]. The clustering-based methods use clustering algorithms such as K-means and fuzzy C-means, which assume that pixels belonging to the same class follow a common intensity distribution and can therefore be grouped together. These methods are efficient, but they are sensitive to the initial clustering centers and require manual adjustment of the number of clusters. In the graph-cut-based methods, pixel correlation is considered and the segmentation problem is converted into a graph partition problem, where a cut energy model is constructed and the segmentation curve is the cut that minimizes the energy. The drawback of this approach is the difficulty of constructing accurate weights for the correlations between pixels, which usually leads to over-segmentation or under-segmentation, especially in complicated regions. The neural network-based semantic segmentation solutions [8,9] require a large set of images to train the network and also lack fine segmentation of tiny regions, which is very important in medical image analysis. One of the methods that ensures stable performance is the active contour model (snake model). This method guarantees closed boundaries and has proven to be effective and widely used. The general idea behind ACMs is to apply partial differential equations (PDEs) to iteratively evolve an initial contour toward object boundaries by minimizing a given energy functional [15,16], defined in terms of the internal and external energies of the contour.
Since our goal is to investigate the effectiveness of active contour methods for segmenting COVID-19-infected regions from CT scan images (CTSI), current methods for segmenting COVID-19 pneumonia using CTSI are examined below. It should be noted that the segmentation methods in COVID-19 applications can be divided into two main categories, namely, lung region-oriented methods and lung lesion-oriented methods. The lung region-oriented methods aim to separate the lung regions, i.e., the entire lung and the lung lobes, from other (background) regions in CT or X-ray images, which is considered a prerequisite in COVID-19 applications [17,18,19,20,21,22,23,24,25,26]. To the best of our knowledge, most COVID-19 pneumonia segmentation methods are based on deep learning networks, including the classic U-Net [17,18,19,20,21,22], UNet++ [22,23], and VB-Net [24]. U-Net and its variants achieved reasonable segmentation results in COVID-19 images. For example, Jin et al. [23] used the UNet++ network to segment the lung region from CT images; the damaged region of the lung is then separated. This process is challenging because the damaged area may have different shapes and textures. The authors of [21] developed an automatic AI-based analysis of CT images using a deep learning approach to classify CT images into coronavirus and non-coronavirus cases. They reused a system previously used to detect small opacities and nodules in the lungs and used U-Net to segment lung images. To detect coronavirus abnormalities, they used the CNN architecture ResNet-50, which consists of 50 layers, to classify images into normal and abnormal cases. One of the most important approaches to visually interpret and explain medical imaging is Gradient-Weighted Class Activation Mapping (Grad-CAM) [27]; accordingly, many studies have been conducted to segment COVID-19-infected regions based on activation mapping [28,29]. Finally, for more detail, the reader is referred to several comprehensive reviews of works on this topic, including deep learning methods [30,31,32,33].
The aim of this work is to compare current active contour methods for detecting COVID-19 pneumonia infections using CTSI, as shown in Figure 1. Our segmentation experiments are performed using the COVID-CS database [34], which contains one hundred COVID-19 CT images with dimensions of 512 × 512 × 1 pixels, each associated with a ground-truth image (GTI). In this paper, we consider state-of-the-art methods published between 2008 and 2020 that have achieved the best results in medical image segmentation. Moreover, these methods are compared in terms of robustness to initialization.
This paper deals with CT COVID-19 image segmentation. The main contributions of this paper are described as follows:
  • A survey of active contour models: One of the most important aspects of detecting diseases such as pneumonia from medical images is identifying the region of infection. Although deep learning methods are well suited to this goal, the experimental results of this paper show that active contour methods achieve promising results when there are not enough images to train deep learning models. Therefore, our main contribution is to verify whether active contour model-based image processing methods can be useful when only one image is available;
  • Study on COVID-19 as a current topic: To the best of the authors’ knowledge, this paper is the first attempt to study and compare active contour models on images of the disease;
  • Pointing out a line of research for future researchers: We examine different methods and show which of them are effective for this topic and where the problems lie.
The remainder of this paper is organised as follows: Section 2 presents our methodology, including the methodological background and the database description. Section 3 presents the experimental results, Section 4 discusses the results, and Section 5 concludes the paper.

2. Methodology

In this section, we describe the details of the ACM methods used in our experiments, including traditional and state-of-the-art methods. We also describe the database to which the methods are applied. The main idea of an ACM is to segment an image by evolving an initial contour so as to minimize an energy defined as the sum of internal and external energies. Depending on the contour representation (parametric or level set) and the object boundary description (edge-based or region-based), ACMs can be classified into four categories: parametric representation with edge-based description [15,35], parametric representation with region-based description [36,37], level set representation with edge-based description [38,39], and level set representation with region-based description [40,41].
In parametric ACMs, the contour is explicitly represented by polynomials or splines [42,43]. Given an initial contour, the external energies drive the evolution of a parametric ACM, while the internal energies maintain the shape of the contour. A parametric ACM is capable of extracting a single object given a single initial contour. A weakness is that parametric snakes have a limited capture range, i.e., the range in which the external forces are strong enough to drive the contour evolution; therefore, they must be initialized near the object contour. Another problem is that parametric snakes cannot accurately capture concave shapes. In level set ACMs, the contour is implicitly represented as the zero level set of a higher-dimensional function, and the deformation of the level set function drives the contour evolution. Compared to parametric ACMs, a level set model can capture multiple objects and complex geometry from a single initial contour. However, since level set models require the deformation of a higher-dimensional function, they are generally slower than parametric methods [39]. Moreover, for many applications, such as medical image processing, it is necessary to extract only a single object [44]; in such situations, the parametric representation is preferred over the level set representation. To solve the contour evolution equation, each representation scheme must select appropriate numerical methods: the finite element method [45] is used for parametric snake models, while the finite difference method [46] is used for level set models.
With respect to the description of object boundaries via the external force, segmentation models are divided into region-based and edge-based models. Region-based models use more global information to define object boundaries; to control the evolution, they use the statistical information inside and outside the contour [40,41], and the deformation of the ACM is driven by an energy minimization algorithm. In region-based methods, many functions have been considered as an edge stop function (ESF) [41], such as the signed pressure function (SPF). Edge-based models use image gradients to construct an ESF [39,42], which stops the contour evolution at object boundaries; using the ESF, boundary points can be characterized by a differential property with respect to the image gradient. It should be noted that hybrid approaches have recently been used to take advantage of edge-based and region-based methods while avoiding their drawbacks [47]. Two important challenges in using ACMs are initialization and convergence.
For example, some parametric ACMs are affected by saddle-point and stationary-point problems that lead to convergence failure [39]. ACM models are also sensitive to initial conditions, so a poor initial contour can lead to a poor result, as shown in Figure 2, which shows initializations of the contour outside, overlapping (crossing), or inside the image.
The region-based methods are more efficient than the edge-based methods in detecting the exterior and interior boundaries because they are less sensitive to the location of the initial contour. However, for weak edges and concave shapes, the edge-based methods are more successful than the region-based methods.

2.1. Background of ACMs

In this subsection, background knowledge of ACMs is presented by briefly explaining traditional ACMs, geometric ACM models, and deep learning methods with loss functions based on ACMs.

2.1.1. Traditional ACM

The earliest active contour, or snake, which was a parametric representation with an edge-based description, was proposed by Kass et al. (1988) [42]. Active contours are formulated as an energy minimization process. The energy functional is the sum of the contour's internal energy ($E_{int}$) and external energy ($E_{ext}$). These energies are functions of the set of points $(x(s), y(s))$ that make up a snake $c(s) = (x(s), y(s))$. The energy functional, denoted by $E_{Snake}$, is computed as follows:
E_{Snake} = \int_0^1 \left[ E_{int}(c(s)) + E_{ext}(c(s)) \right] ds,
where $s \in [0, 1]$ is the normalized arc length along the snake. To control the behavior of the snake, $E_{int}$ is defined as follows:
E_{int} = \frac{1}{2} \left( \alpha(s) \, \lVert c'(s) \rVert^{2} + \beta(s) \, \lVert c''(s) \rVert^{2} \right),
where $c'(s)$ and $c''(s)$ are the first and second derivatives of $c(s)$ with respect to $s$, and $\alpha$ and $\beta$, called the elasticity and rigidity parameters, respectively, are the weighting parameters of the contour. The external energy term, $E_{ext}$, attracts the snake to chosen low-level features (such as edge points), so that it has smaller values near the object boundary and larger values elsewhere.
Limited capture range and poor convergence to concave regions are two major issues of this ACM. Figure 3 shows an example of a concave region that has not been captured.
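To make the classic update rule concrete, the following is a minimal NumPy sketch of the semi-implicit snake iteration of Kass et al. [42]. It is only an illustration: the parameter values (alpha, beta, gamma, sigma, iters) are illustrative, and this is not the implementation used in the experiments of this paper.

```python
import numpy as np
from scipy import ndimage

def snake_step_matrix(n, alpha, beta, gamma):
    """Cyclic pentadiagonal internal-energy matrix for a closed snake
    (needs at least 5 contour points); inverted once for the update."""
    a = gamma + 2 * alpha + 6 * beta
    b = -(alpha + 4 * beta)
    c = beta
    A = np.zeros((n, n))
    for i in range(n):
        A[i, i] = a
        A[i, (i - 1) % n] = A[i, (i + 1) % n] = b
        A[i, (i - 2) % n] = A[i, (i + 2) % n] = c
    return np.linalg.inv(A)

def evolve_snake(image, xs, ys, alpha=0.1, beta=0.1, gamma=1.0,
                 sigma=2.0, iters=500):
    """Evolve a closed parametric snake toward edges of `image`.
    External force = gradient of the edge map |grad(G_sigma * I)|^2."""
    smoothed = ndimage.gaussian_filter(image.astype(float), sigma)
    gy, gx = np.gradient(smoothed)
    edge_map = gx ** 2 + gy ** 2
    fy, fx = np.gradient(edge_map)          # force pulls contour to strong edges
    Ainv = snake_step_matrix(len(xs), alpha, beta, gamma)
    for _ in range(iters):
        # nearest-neighbour sampling of the force field keeps the sketch short
        ix = np.clip(xs.round().astype(int), 0, image.shape[1] - 1)
        iy = np.clip(ys.round().astype(int), 0, image.shape[0] - 1)
        xs = Ainv @ (gamma * xs + fx[iy, ix])
        ys = Ainv @ (gamma * ys + fy[iy, ix])
    return xs, ys
```

Because the internal-energy matrix is inverted once, each iteration reduces to a matrix–vector product plus a sampling of the external force field.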

2.1.2. Geometric Active Contour (GAC) Model

One of the most popular level set representations with an edge-based description is the geometric active contour (GAC) model [38]. This model uses the image gradient to construct an ESF. Usually, a positive, decreasing, and regular ESF $g(t)$ is used such that $\lim_{t \to \infty} g(t) = 0$. For instance, $g(\lvert \nabla I \rvert)$ can be computed using the following formula:
g(\lvert \nabla I \rvert) = \frac{1}{1 + \lvert \nabla G_\sigma * I \rvert^{2}},
where $G_\sigma * I$ denotes the convolution of a Gaussian kernel (with standard deviation $\sigma$) with the image $I$. The GAC model suffers from some major disadvantages, as follows:
  • High computational cost due to computation of gradient of the curvature approximation of the current level set at each iteration;
  • Since the GAC model is formulated in terms of the curvature and the image gradient, only local boundary information is used, which makes the model sensitive to input noise. This problem is shown in Figure 4.
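As a small illustration of the ESF defined above, the following sketch computes $g(\lvert \nabla I \rvert)$ with SciPy; the value of sigma is an assumption for illustration only.

```python
import numpy as np
from scipy import ndimage

def edge_stop_function(image, sigma=1.5):
    """g(|grad I|) = 1 / (1 + |grad(G_sigma * I)|^2): close to 0 near strong
    edges and close to 1 in flat regions, so it slows the evolving contour."""
    smoothed = ndimage.gaussian_filter(image.astype(float), sigma)
    gy, gx = np.gradient(smoothed)
    return 1.0 / (1.0 + gx ** 2 + gy ** 2)
```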

2.1.3. The C-V Model

The C-V model is a region-based model presented by Chan and Vese [40], typically implemented with a level set representation. For a given image $I$ in the domain $\Omega$, the C-V model is formulated by minimizing the following energy functional:
E_{CV} = \lambda_1 \int_{\mathrm{inside}(C)} \lvert I(x) - c_1 \rvert^{2} \, dx + \lambda_2 \int_{\mathrm{outside}(C)} \lvert I(x) - c_2 \rvert^{2} \, dx, \quad x \in \Omega,
where $c_1$ and $c_2$ denote the average intensities inside and outside the contour, respectively. Two major disadvantages of the C-V model are as follows:
  • Similar to the GAC model, the C-V model also needs to calculate the curvature approximation weighted by $\delta(\phi)$, which is computationally expensive;
  • Despite the global segmentation power of the C-V model with a proper initial contour, it cannot extract an interior contour without setting the initial contour inside the object, and it may fail to extract all the objects. This problem is shown in Figure 5.
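For illustration, a minimal NumPy sketch of one C-V iteration follows. It uses the common smoothed Heaviside and Dirac approximations, and the parameter values are illustrative, not those used in the experiments of this paper.

```python
import numpy as np

def chan_vese_step(image, phi, lam1=1.0, lam2=1.0, mu=0.2, dt=0.5, eps=1.0):
    """One explicit update of the C-V level set (no edge information).
    c1/c2 are the mean intensities inside (phi > 0) and outside the contour."""
    H = 0.5 * (1 + (2 / np.pi) * np.arctan(phi / eps))   # smoothed Heaviside
    delta = (eps / np.pi) / (eps ** 2 + phi ** 2)         # smoothed Dirac delta
    c1 = (image * H).sum() / (H.sum() + 1e-8)
    c2 = (image * (1 - H)).sum() / ((1 - H).sum() + 1e-8)
    # curvature of the level set: div(grad(phi) / |grad(phi)|)
    gy, gx = np.gradient(phi)
    norm = np.sqrt(gx ** 2 + gy ** 2) + 1e-8
    curv = np.gradient(gx / norm, axis=1) + np.gradient(gy / norm, axis=0)
    force = mu * curv - lam1 * (image - c1) ** 2 + lam2 * (image - c2) ** 2
    return phi + dt * delta * force
```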

2.1.4. Deep Learning Approaches

Deep learning models have shown better results, but standard losses are limited to pixel-wise comparisons of the segmentation map. This limitation can be addressed by considering the length of the boundaries and the areas inside and outside the region of interest during the learning process, which can be achieved by using an active contour loss function inspired by ACMs. This loss function combines geometric information with region similarity to enable more accurate segmentation, and it has been used in many deep learning models, as in [48,49,50,51].
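As a hedged illustration of such a loss, the sketch below combines a boundary-length term with two region-fitting terms in NumPy (cf. the structure of the loss in [48]). In a real training pipeline the same expressions would be written with the framework's differentiable tensors; the weight values are assumptions.

```python
import numpy as np

def active_contour_loss(pred, target, w_region=1.0, w_length=1e-3, eps=1e-8):
    """ACM-inspired loss for a soft segmentation map `pred` in [0, 1] and a
    binary ground-truth mask `target`; weights are illustrative only."""
    # length term: total variation of the predicted boundary
    dy = pred[1:, :] - pred[:-1, :]
    dx = pred[:, 1:] - pred[:, :-1]
    length = np.sqrt(dy[:, :-1] ** 2 + dx[:-1, :] ** 2 + eps).sum()
    # region terms: fit constants c1 = 1 (object) and c2 = 0 (background)
    region_in = (pred * (target - 1.0) ** 2).sum()
    region_out = ((1.0 - pred) * (target - 0.0) ** 2).sum()
    return w_region * (region_in + region_out) + w_length * length
```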

2.2. The State-of-the-Art Methods

In this section, we review some of the best proposed ACMs and present the main advantages and disadvantages.

2.2.1. Magnetostatic Active Contour (MAC) Model

The magnetostatic active contour (MAC) model is based on the level set representation with edge-based description [39]. MAC represents the active contour using an implicit model in which the contour, c, is defined as follows:
c = \{ \bar{x} \mid \phi(\bar{x}) = 0 \},
where $\phi : \mathbb{R}^2 \to \mathbb{R}$. For image segmentation, MAC considers the following PDE:
\frac{\partial \phi}{\partial t} = \alpha \, s(\bar{x}) \, \nabla \cdot \left( \frac{\nabla \phi}{\lvert \nabla \phi \rvert} \right) \lvert \nabla \phi \rvert - (1 - \alpha) \, F(\bar{x}) \cdot \nabla \phi,
where $\alpha$, $s(\bar{x})$, and $F(\bar{x})$ are a real constant, the stopping function (computed with a Sobel filter), and the magnetostatic force, respectively. The advantages of MAC compared to other related works are as follows:
  • Significant improvement in initialization invariance;
  • Significant improvement in convergence capability; the contour is attracted into deep concave regions;
  • It is not affected by stationary-point and saddle-point problems;
  • It is able to capture complex geometries;
  • It is able to capture multiple objects with a single initial contour.

2.2.2. Online Region-Based Active Contour Model (ORACM)

The ORACM is a level set representation with a region-based description. The model does not require any parameters. Compared to traditional ACMs, ORACM requires less time without a change in segmentation accuracy. In each iteration, ORACM performs block thresholding. This process can produce ragged boundaries and small particles that do not belong to the object. To tackle this problem, morphological operations such as closing and opening are applied. The level set function, $\phi(x)$, is initialized to constants with different signs, such as $-1$ and $+1$, inside and outside the contour. A simple and efficient level set updating formulation is used in ORACM, as follows:
\frac{\partial \phi}{\partial t} = H\big( \mathrm{SPF}(I(x)) \big) \cdot \phi(x),
where $H(\cdot)$ is the Heaviside function and $\mathrm{SPF}(\cdot)$ is the signed pressure function defined as follows:
\mathrm{SPF}(I(x,y)) = \frac{ I(x,y) - \frac{c_1 + c_2}{2} }{ \max\!\left( \left\lvert I(x,y) - \frac{c_1 + c_2}{2} \right\rvert \right) },
where $I$ and $\phi$ denote the input image and the current level set, respectively. The two parameters $c_1$ and $c_2$ have the same definitions as in (20) and (21), respectively. The advantages of ORACM are as follows:
  • Reduced processing time without changing the accuracy of the image segmentation process;
  • Accurate segmentation of all object regions, both interior and exterior, for medical, real, and synthetic images with holes, complex backgrounds, weak edges, and high noise.
The disadvantage of ORACM is that it supports only bimodal segmentation of piecewise-constant intensity distributions. Therefore, the application of ORACM is limited to cases that satisfy this constraint. It should be noted that two variants of ORACM have been studied, called ORACM1 and ORACM2, which correspond to ORACM without and with the morphological operations, respectively.
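The following is a simplified sketch of one ORACM-style iteration, not the authors' released code: it computes the SPF of the equation above, applies a hard Heaviside-like update of the binary level set, and uses the morphological closing/opening step that distinguishes ORACM2.

```python
import numpy as np
from scipy import ndimage

def oracm_step(image, phi):
    """One ORACM-style update on a binary level set phi in {-1, +1}:
    the sign of the SPF assigns each pixel to object (+1) or background (-1)."""
    inside, outside = phi > 0, phi <= 0
    c1 = image[inside].mean() if inside.any() else 0.0
    c2 = image[outside].mean() if outside.any() else 0.0
    spf = image - (c1 + c2) / 2.0
    spf = spf / (np.abs(spf).max() + 1e-8)
    mask = spf > 0                                   # Heaviside-like hard update
    # ORACM2: morphological closing/opening removes spurious small particles
    mask = ndimage.binary_opening(ndimage.binary_closing(mask))
    return np.where(mask, 1.0, -1.0)
```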

2.2.3. Selective Binary and Gaussian Filtering Regularized Level Set (SBGFRLS)

The SBGFRLS [47] is an ACM based on a level set representation with a region-based description. This model uses both global and local information. At the initialization step, a user-defined active contour is determined, and it is continuously updated by a region-based signed pressure function (SPF) as defined in (25). Unlike the C-V method, which obtains $c_1$ and $c_2$ using (20) and (21), SBGFRLS uses the Heaviside function $H$ described in (22) only with $\varepsilon = 0$. The SPF function tunes the sign of the pressure force inside and outside the region of interest, so that the contour shrinks when it is outside the object and expands when it is inside. SBGFRLS evolves the level set according to:
\frac{\partial \phi}{\partial t} = \mathrm{SPF}(I(x)) \cdot \alpha \cdot \lvert \nabla \phi \rvert,
where the constant $\alpha$ controls the speed of evolution. The advantages of SBGFRLS are as follows:
  • Robust against noise because the image statistical information is used to stop the curve evolution on the desired boundaries;
  • Good performance on images with weak edges or even without edges;
  • The initial curve can be defined anywhere to extract the interior boundaries of the objects.
However, the disadvantages of the ACM with SBGFRLS include the following cases:
  • It is difficult to use the ACM with SBGFRLS on different images, because it needs to be tuned for each image; for this reason, it cannot be used on real-time video images;
  • The method is slow: propagating the SPF function updates only the boundary of the level set function, using $\lvert \nabla \phi \rvert$ at the level set, and updating only the boundary of the level set is the main cause of the slowness.
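A minimal sketch of the SBGFRLS update above is given below; the selective binary step and the Gaussian filtering that replace re-initialization follow the description in this subsection, while the parameter values are illustrative.

```python
import numpy as np
from scipy import ndimage

def sbgfrls_step(image, phi, alpha=20.0, dt=1.0, sigma=1.0):
    """SPF-driven update in the form of the equation above, followed by the
    Gaussian regularization that gives SBGFRLS its name."""
    inside, outside = phi > 0, phi <= 0
    c1 = image[inside].mean() if inside.any() else 0.0
    c2 = image[outside].mean() if outside.any() else 0.0
    spf = image - (c1 + c2) / 2.0
    spf /= np.abs(spf).max() + 1e-8
    gy, gx = np.gradient(phi)
    phi = phi + dt * spf * alpha * np.sqrt(gx ** 2 + gy ** 2)
    phi = np.sign(phi)                           # selective binary step
    return ndimage.gaussian_filter(phi, sigma)   # replaces re-initialization
```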

2.2.4. Level Set Active Contour Model (LSACM)

The LSACM is a region-based level set method for image segmentation in the presence of intensity inhomogeneity. This method models the inhomogeneous objects as Gaussian distributions with different means and variances. Using a sliding window, the original image is mapped into another domain in which the intensity distribution of each object is still Gaussian but better separated. The means of the Gaussian distributions in the transformed domain can be approximated by multiplying a bias field with the original signal inside the window. A maximum-likelihood energy functional is then defined on the entire image region, combining the bias field, the level set function, and the piecewise-constant function that approximates the true image signal [10].
The method works by combining information from neighbouring pixels that belong to the same class, so that the desired object is separated from its background. Let $N_{R_x}$ be a neighbouring region centred at location $x$, i.e., $N_{R_x} = \{ y : \lvert y - x \rvert \le \rho \}$, where $\rho$ is the radius of the region $N_{R_x}$. The whole image domain $\Omega$ can be represented as $\Omega = \bigcup_{i=1}^{n} \Omega_i$ with $\Omega_i \cap \Omega_j = \emptyset$ for all $i \ne j$, where $\Omega_i$ is the $i$-th object region. A mapping $T$ from the original image intensity domain to another domain, obtained by averaging image intensities, maps $I(x \mid \theta_i, B)$ to $\tilde{I}(x \mid \theta_i, B)$ and is defined as
\tilde{I}(x \mid \theta_i, B) = \frac{1}{L_i(x)} \int_{\Omega_i \cap N_{R_x}} I(y \mid \theta_i, B, x) \, dy,
where $L_i(x) = \lvert \Omega_i \cap N_{R_x} \rvert$ is the number of pixels in the region $\Omega_i \cap N_{R_x}$. The intensity of pixel $x$ is assumed to be independently Gaussian distributed, that is, $P(\tilde{I}(x \mid \theta_i, B)) = \mathcal{N}(\tilde{I} \mid U_i(x), \sigma_i^2 / L_i(x))$, where $U_i$ and $\sigma_i$ are the spatially varying mean and the standard deviation associated with the object in region $\Omega_i$, for all $\tilde{I}(x \mid \theta_i)$ in the transformed domain.
The advantages of this method are that it achieves soft classification, it is robust against noise, and it mitigates the over-smoothing of object boundaries.
The disadvantages are that only the neighbouring intensities belonging to the same class contribute to each class, and that the overlapping parts of the statistical distributions of intensities among different classes are suppressed.
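The local averaging that defines the LSACM mapping above can be sketched with box filters: the function below approximates the mean of $I$ over $\Omega_i \cap N_{R_x}$ given a current class mask. Both the class mask and the radius are assumptions for illustration.

```python
import numpy as np
from scipy import ndimage

def local_class_mean(image, class_mask, radius=5):
    """Mean of I over Omega_i intersected with N_R(x): average only the
    neighbours currently assigned to class i (a sketch of the LSACM mapping)."""
    size = 2 * radius + 1
    weights = ndimage.uniform_filter(class_mask.astype(float), size=size)
    sums = ndimage.uniform_filter(image.astype(float) * class_mask, size=size)
    return sums / (weights + 1e-8)
```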

2.2.5. Region-Scalable Fitting and Optimized Laplacian of Gaussian Energy (RSFOLGE)

The RSFOLGE applies a Laplacian of Gaussian (LoG) energy term optimized by an energy functional and then integrates the optimized LoG energy term with the region-scalable fitting energy term. The advantage of this combination is that it makes use of local region information to drive the curve towards the boundaries, so the RSFOLGE model achieves accurate image segmentation and is insensitive to the position of the initial contour [11]. The energy functional used to optimize the LoG of the image is as follows:
E_{LoG}(L) = \int_{\Omega} g(\lvert \nabla I \rvert) \, (L - 0)^{2} + \big( 1 - g(\lvert \nabla I \rvert) \big) \, \big( L - \beta \, \Delta(G_\sigma * I) \big)^{2} \, dx \, dy,
where $L$ denotes the optimized LoG of the image, $g(\lvert \nabla I \rvert) = e^{-a \lvert \nabla G_\sigma * I \rvert}$, and $a$, $\beta$ are positive constants. The function $g(\lvert \nabla I \rvert)$ acts as an edge indicator: its values are small and approximately equal to 0 near the object boundaries. The term $\Delta(G_\sigma * I)$ is:
\Delta(G_\sigma * I) = \left( \frac{\partial^{2} G_\sigma(x,y)}{\partial x^{2}} + \frac{\partial^{2} G_\sigma(x,y)}{\partial y^{2}} \right) * I(x,y),
where $G_\sigma(x,y)$ is a Gaussian kernel function with standard deviation $\sigma$.
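The LoG response and the edge indicator used in the energy above can be sketched with SciPy as follows; the values of sigma and a are assumptions.

```python
import numpy as np
from scipy import ndimage

def log_edge_terms(image, sigma=2.0, a=10.0):
    """Laplacian-of-Gaussian response Delta(G_sigma * I) and the edge
    indicator g = exp(-a * |grad(G_sigma * I)|)."""
    img = image.astype(float)
    log_response = ndimage.gaussian_laplace(img, sigma)
    gy, gx = np.gradient(ndimage.gaussian_filter(img, sigma))
    g = np.exp(-a * np.sqrt(gx ** 2 + gy ** 2))
    return log_response, g
```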
The advantages of this method are that it improves robustness to initialization and gives more accurate segmentation results than the original region-scalable fitting (RSF) method. Moreover, the optimization process is applied to the edge stopping function used in edge-based ACMs.
The disadvantage of this method is that it fails when objects inside the image have significantly different intensity values, which can be tackled by using multi-phase segmentation.
A simple and universal method to improve the robustness of such local fitting-based models to the initial contour is presented in [12]. The core idea of the proposed method is to exchange the fitting values on the two sides of the contour, so that the fitting values inside the contour are always larger (or smaller) than the values outside the contour during curve evolution. In this way, the whole curve evolves along the inner (or outer) boundaries of the object and is less likely to become stuck in the object or the background.

2.2.6. Adaptive Local-Fitting (ALF) Method

The ALF model pushes the initial contour towards the object boundary using adaptive local fitting energy and regularization energy. This method shows an accurate way to separate the region of interest. The traditional methods assume that the intensities in the local region are constant; however, the LBF method finds an optimal solution by fitting the original image using an adaptive technique [52].
The ALF method improves on the local binary fitting method. The energy function of the ALF is:
E_{ALF} = \sum_{i=1}^{N} \int \left( \kappa_i \int w_x(x - y) \, \big\lvert I(y) - \mu_i(x) - \lambda(x)\,\bar{\sigma}(y) \big\rvert^{2} \, M_i(\Phi(y)) \, dy \right) dx + \beta \int \frac{1}{2} \big( \lvert \nabla \Phi(x) \rvert - 1 \big)^{2} dx + \nu \int \delta(\Phi(x)) \, \lvert \nabla \Phi(x) \rvert \, dx,
where $w_x$ is a truncated weight function, $\bar{\sigma}$ is an appropriate estimate of $\sigma$, and $M_i(\Phi(y))$ is a membership function satisfying:
M_i(\Phi(y)) = \begin{cases} 1, & y \in \Omega_i \\ 0, & \text{otherwise} \end{cases}
In Equation (27), when $\lambda(x) = 0$ or $\bar{\sigma}(y) = 0$, the model reduces to the LBF model, so the LBF model can be seen as a special case of the proposed ALF model. Nevertheless, LBF discards the local information about intensity variance. Although the efficiency of the method still needs to be improved, the model can extract more details and performs robustly with regard to intensity inhomogeneity and noise.

2.2.7. Fuzzy Region-Based Active Contour Model (FRAGL)

The FRAGL model is driven by weighting global and local fitting energy, where a fuzzy region energy with local spatial image information is proposed. The segmentation results of this method are independent of initialization. To extract object boundaries, an initial evolving curve is represented by a pseudo level set function (LSF), which is further smoothed by an edge energy. The method thus combines a fuzzy region energy, formulated with local spatial image information, with an edge energy that stops the evolving curve at the object boundaries. The fuzzy region energy is strictly convex and is used to drive the motion of the evolving curves. To minimize the energy functional, the change in the fuzzy region energy is calculated directly instead of using the Euler–Lagrange equation [53].
This model succeeds in extracting objects in different situations, such as images with noise and images with intensity inhomogeneity; however, it fails when the similarity between the object and its background is high.

2.2.8. Global and Local Signed Energy-Based Pressure Force (GLSEPF)

The GLSEPF uses the energy difference between the inner and outer regions to obtain the contour of the object, which improves robustness to the initial curve. The local signed energy-based pressure force (LSEPF) is determined from the pixel-by-pixel energy difference within a local neighbourhood region, and it can handle images with intensity inhomogeneity and noise. Global image information and local energy information are used for the global and local force propagation functions, respectively. Global and local variances are used to balance the weights of GSEPF and LSEPF automatically, which solves the problem of parameter setting. In addition, a regularization term and a penalty term are added in order to prevent re-initialization during the iterations and to smooth the level set function [54]. The level set formulation of the GSEPF model is written as:
\frac{\partial \phi}{\partial t} = \frac{\Delta E_g(I(x))}{\max\big( \lvert \Delta E_g(I(x)) \rvert \big)} \cdot \lvert c_1 - c_2 \rvert \cdot \lvert \nabla \phi \rvert + \mu \, \delta(\phi) \cdot \mathrm{div}\!\left( \frac{\nabla \phi}{\lvert \nabla \phi \rvert} \right) + \nu \left( \nabla^{2} \phi - \mathrm{div}\!\left( \frac{\nabla \phi}{\lvert \nabla \phi \rvert} \right) \right),
where $\Delta E_g(I(x))$ is the global energy. Objects in images with noise and intensity inhomogeneity can be detected by incorporating the global and local image information; however, for colour images, the segmentation is poor because only the intensity information is used.
In the next subsection, we describe the database to which the methods are applied.

2.3. Database

In this paper, the COVID-19 pneumonia image database of [34] was used to test the performance of the compared methods. This database consists of 100 CT images with dimensions of 512 × 512 × 1 pixels, and each image is associated with a corresponding ground-truth image (GTI). The overall distribution of patients is shown in Table 1. Figure 6 shows examples of COVID-19 CT images and the corresponding ground truth. As can be seen in the figure, the ground truth is a binary image in which the white parts are infection regions. The segmentation step aims to divide the CT images into black and white regions, and the segmentation results are compared with the ground truth to evaluate the performance of each method.

3. Evaluation

In this section, we present the experimental results of the active contour methods defined in the previous section for comparison purposes. We also explain the implementation of the methods. To evaluate the effectiveness of the active contour methods, seven suitable measures were applied, namely: Dice, Jaccard, Bfscore, Precision, Recall, Iteration, and Time. Higher values of the first five measures (closer to one, or to 100%) indicate a better disease investigation scheme. If $I_O$ and $I_{GT}$ represent the binary output image of the segmentation result and the binary ground-truth image, respectively, the measures are defined as follows:
\mathrm{Precision} = \frac{TP}{TP + FP},
\mathrm{Recall} = \frac{TP}{TP + FN},
\mathrm{Jaccard} = \frac{\lvert I_O \cap I_{GT} \rvert}{\lvert I_O \cup I_{GT} \rvert},
\mathrm{Dice} = \frac{2 \, \lvert I_O \cap I_{GT} \rvert}{\lvert I_O \rvert + \lvert I_{GT} \rvert},
\mathrm{Bfscore} = \frac{2 \times \mathrm{Precision} \times \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}},
where TP, FP, and FN are the true positives, false positives, and false negatives, respectively.
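For reference, a NumPy sketch of these measures for a pair of binary masks is given below; note that MATLAB's bfscore is a boundary-based F1 measure, whereas this sketch simply applies the formula above to pixel-wise precision and recall.

```python
import numpy as np

def segmentation_metrics(pred, gt):
    """Pixel-wise metrics for binary masks `pred` (I_O) and `gt` (I_GT)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    precision = tp / (tp + fp + 1e-8)
    recall = tp / (tp + fn + 1e-8)
    jaccard = tp / (np.logical_or(pred, gt).sum() + 1e-8)
    dice = 2 * tp / (pred.sum() + gt.sum() + 1e-8)
    bfscore = 2 * precision * recall / (precision + recall + 1e-8)
    return dict(precision=precision, recall=recall,
                jaccard=jaccard, dice=dice, bfscore=bfscore)
```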

3.1. Lung Region Extraction

In this work, we first removed artifacts such as bones and other body parts from the test image using the threshold filter of [55]. This threshold filter separates the test image into two sections based on a chosen threshold. We plotted the Receiver Operating Characteristic (ROC) curve (the TP rate against the FP rate) by varying the threshold value, as shown in Figure 7. This method obtained an area under the curve (AUC) value of 0.94. We chose the threshold at the operating point TP = 0.9 and FP = 0.045 and then computed the accuracy of the lung region extraction step for each image with respect to the ground truth. We considered the method to have failed on an image if the accuracy was less than 95%; such images were segmented manually. A total of 12 images of the database (12%) failed and were processed manually. The results obtained with the threshold filter are shown in Figure 8.
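A sketch of how such an ROC curve can be traced for a simple intensity threshold is shown below; this is an illustrative helper, not the exact pipeline of [55].

```python
import numpy as np

def roc_points(image, reference_mask, thresholds):
    """TP and FP rates of a binary intensity threshold against a reference
    mask, evaluated for each candidate threshold."""
    reference_mask = reference_mask.astype(bool)
    points = []
    for t in thresholds:
        pred = image > t
        tp = np.logical_and(pred, reference_mask).sum() / (reference_mask.sum() + 1e-8)
        fp = np.logical_and(pred, ~reference_mask).sum() / ((~reference_mask).sum() + 1e-8)
        points.append((fp, tp))
    return points
```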

3.2. Comparison of the Active Contour Methods

In this section, the performances of the methods described in the previous sections are compared. We performed all simulations in MATLAB R2018a. All experiments were run on a 64-bit operating system with a CPU E5-2690 v3 @ 2.60 GHz, 64.0 GB of RAM, and a single NVIDIA GTX TITAN X. The methods were implemented using the codes provided by their authors: the C-V, SBGFRLS, and ORACM (available at http://iys.inonu.edu.tr/webpanel/dosyalar/1348/file/OnlineSeg.rar (accessed on 20 August 2021)), LSACM (available at http://www.comp.polyu.edu.hk/~cslzhang/LSACM/LSACM.htm (accessed on 20 August 2021)), RSFOLGE (available at https://github.com/dingkeyan93/Active-Contour-Model-Matlab-Code-Set (accessed on 20 August 2021)), ALF (available at https://github.com/madd2014/ALF (accessed on 20 August 2021)), FRAGL (available at https://github.com/fangchj2002/FRAGL (accessed on 20 August 2021)), and GLSEPF (available at https://github.com/HuaxiangLiu/GLSEPF/ (accessed on 20 August 2021)) methods were implemented in MATLAB, and the MAC method (available at http://csvision.swan.ac.uk/public_html_XX/snakes/mac/index.html (accessed on 20 August 2021)) was implemented in JAVA. For all of the methods used on the database, the default parameters provided in their codes were retained. The experimental results also provide the mean value of the infection rate for further evaluation. The comparative analysis was performed on the CT COVID-19 images. To test robustness to initialization, three initial contours were considered: inside, outside, and overlapping (crossing) the target object. However, some methods failed on some of the initial contours (when the inner contour cannot be extracted and the method enters an infinite loop), as shown in Table 2.

3.3. Result

The quality of the results of the methods is shown in Figure 9. According to the evaluations, the quantitative and qualitative segmentation results are promising, but they should also help researchers to further improve segmentation for such a sensitive application.
The quantitative results in terms of Dice, Jaccard, Bfscore, Precision, Recall, Iteration and Time measured using the database are shown in Table 3.
Figure 9 shows the binary results obtained for six sample images from the CT database. The six images were randomly selected for qualitative evaluation. The authors reviewed the results of each method against the corresponding ground truth.
To show the effectiveness of active contour methods, we compared the active contour method with the best recall in Table 3 (GLSEPF) with deep learning methods. Due to the different structure of the two sets of methods, the comparison is not entirely fair; however, it can show the effectiveness of the active contour methods. Table 4 shows the comparison in terms of the recall measure. The deep learning methods consist of eight popular methods, namely MiniSeg [56], MobileNet [57], Inf-Net [58], DeepLabv3+ [59], EfficientNet [60], EDANet [61], ENet [62], and ESPNetv2 [63].

4. Discussion

In this section, we elaborate on the results. As shown in Table 2, only two methods, MAC and ORACM, can perform the final segmentation for all three initial contours. RSFOLGE failed on only one of the initial contours, SBGFRLS failed on all three, and the remaining methods succeeded on only one of the initial contours. It should be noted that some methods (Figure 9) failed for at least one of the initial contours, which is why we report the best result of each method. In the overall evaluation, it can be seen that ORACM performs best in both the quantitative and qualitative evaluations compared to the other methods.
In the quantitative evaluation, Table 3 shows that ORACM performs significantly better than all other methods in terms of Bfscore, iteration count, and time, and performs as well as the FRAGL method in terms of the Dice and Jaccard measures; as shown in the table, FRAGL ranks first in terms of the Dice and Jaccard measures. Compared to the other methods, LSACM reaches the highest score in terms of precision, followed by ALF and MAC, which rank second and third, respectively. While GLSEPF ranks first in terms of the recall measure, ORACM scores second, and the other methods do not achieve promising results on this measure. As can be seen from the low values of time and iteration count, ORACM and ALF are also faster than the others. Since the Bfscore measures how well the predicted boundary of an object matches the true boundary, ORACM preserves the boundaries better than the other methods. Moreover, Jaccard and Dice similarities of 100% would imply that the segmentations in the two images match perfectly; FRAGL and ORACM achieve the closest matches compared to the other methods.
As the qualitative evaluation in Figure 9 shows, RSFOLGE, MAC and C-V methods achieve clean binary images. However, the results of the ORACM, FRAGL and GLSEPF methods are more effective and can efficiently produce higher visual quality from the input image with a dark shadow. In qualitative evaluation, as shown in the figure, ORACM is able to keep the boundaries in the patterns smooth. As shown, ORACM, FRAGL and GLSEPF methods are successful in delineating the area of infection, which serves as the foreground, from the background, but they also fail in maintaining thin strokes. Nevertheless, the methods achieve the best visual quality for the coronal and axial view database samples.
Finally, the comparison with deep learning methods shows that the active contour methods are competitive with deep learning methods, although the results show that deep learning methods are better overall.

5. Conclusions

This study is an introduction to the automatic investigation of COVID-19 infection with CTSI. Finding a collection of clinical-quality images is a difficult task due to the sudden onset of the disease. In this work, active contour methods were applied to 2D lung CT scans to automatically extract the infected sections. Methods from 2008 to 2020 were tested, divided into four classes according to the contour representation and object boundary description categories. In addition to a brief explanation of the methods, this paper also examined their advantages and disadvantages. Furthermore, a comparison between the methods in terms of computational cost and accuracy was carried out. Based on the experimental results, ORACM2 (the online region-based active contour model with morphological operations) has the best overall performance in terms of speed and accuracy.
In the future, we plan to test more segmentation methods from other categories, such as graph-cut-based and clustering-based methods. We also plan to implement the following: (i) automatic detection and classification of COVID-19 cases into mild, moderate, and severe classes; (ii) automatic detection of disease progression; and (iii) automatic classification of CTSI slices into normal and COVID-19 pneumonia classes.

Author Contributions

Y.A. and H.H. wrote the main text of the manuscript under the supervision of S.A.-M. and S.M.Z. Y.A. performed the experiments and analyzed the results under the supervision of S.A.-M. and S.M.Z. All authors reviewed the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Qatar University Emergency Response Grant (QUERG-CENG-2020-1) from Qatar University.

Acknowledgments

This publication was made possible by Qatar University Emergency Response Grant (QUERG-CENG-2020-1) from Qatar University. The statements made herein are solely the responsibility of the authors.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Rodriguez-Morales, A.J.; Cardona-Ospina, J.A.; Gutiérrez-Ocampo, E.; Villamizar-Peña, R.; Holguin-Rivera, Y.; Escalera-Antezana, J.P.; Alvarado-Arnez, L.E.; Bonilla-Aldana, D.K.; Franco-Paredes, C.; Henao-Martinez, A.F.; et al. Clinical, laboratory and imaging features of COVID-19: A systematic review and meta-analysis. Travel Med. Infect. Dis. 2020, 34, 101623. [Google Scholar] [CrossRef]
  2. Wu, Y.H.; Gao, S.H.; Mei, J.; Xu, J.; Fan, D.P.; Zhao, C.W.; Cheng, M.M. JCS: An Explainable COVID-19 Diagnosis System by Joint Classification and Segmentation. arXiv 2020, arXiv:2004.07054. [Google Scholar]
  3. Huang, Z.; Zhao, S.; Li, Z.; Chen, W.; Zhao, L.; Deng, L.; Song, B. The battle against coronavirus disease 2019 (COVID-19): Emergency management and infection control in a radiology department. J. Am. Coll. Radiol. 2020, 17, 710–716. [Google Scholar] [CrossRef] [PubMed]
  4. Ribbens, A.; Hermans, J.; Maes, F.; Vandermeulen, D.; Suetens, P. Unsupervised segmentation, clustering, and groupwise registration of heterogeneous populations of brain MR images. IEEE Trans. Med. Imaging 2013, 33, 201–224. [Google Scholar] [CrossRef] [PubMed]
  5. Gong, M.; Liang, Y.; Shi, J.; Ma, W.; Ma, J. Fuzzy c-means clustering with local information and kernel metric for image segmentation. IEEE Trans. Image Process. 2012, 22, 573–584. [Google Scholar] [CrossRef] [PubMed]
  6. Kuo, J.W.; Mamou, J.; Aristizábal, O.; Zhao, X.; Ketterling, J.A.; Wang, Y. Nested graph cut for automatic segmentation of high-frequency ultrasound images of the mouse embryo. IEEE Trans. Med. Imaging 2015, 35, 427–441. [Google Scholar] [CrossRef]
  7. Li, G.; Chen, X.; Shi, F.; Zhu, W.; Tian, J.; Xiang, D. Automatic liver segmentation based on shape constraints and deformable graph cut in CT images. IEEE Trans. Image Process. 2015, 24, 5315–5329. [Google Scholar] [CrossRef]
  8. He, K.; Gkioxari, G.; Dollár, P.; Girshick, R. Mask r-cnn. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2961–2969. [Google Scholar]
  9. Ronneberger, O.; Fischer, P.; Brox, T. U-net: Convolutional networks for biomedical image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention; Springer: Munich, Germany, 2015; pp. 234–241. [Google Scholar]
  10. Zhang, K.; Zhang, L.; Lam, K.M.; Zhang, D. A level set approach to image segmentation with intensity inhomogeneity. IEEE Trans. Cybern. 2015, 46, 546–557. [Google Scholar] [CrossRef]
  11. Ding, K.; Xiao, L.; Weng, G. Active contours driven by region-scalable fitting and optimized Laplacian of Gaussian energy for image segmentation. Signal Process. 2017, 134, 224–233. [Google Scholar] [CrossRef]
  12. Ding, K.; Xiao, L. A Simple Method to improve Initialization Robustness for Active Contours driven by Local Region Fitting Energy. arXiv 2018, arXiv:1802.10437. [Google Scholar]
  13. Dong, B.; Weng, G.; Jin, R. Active contour model driven by Self Organizing Maps for image segmentation. Expert Syst. Appl. 2021, 177, 114948. [Google Scholar] [CrossRef]
  14. Liu, H.; Rashid, T.; Habes, M. Cerebral Microbleed Detection Via Fourier Descriptor with Dual Domain Distribution Modeling. In Proceedings of the 2020 IEEE 17th International Symposium on Biomedical Imaging Workshops (ISBI Workshops), Iowa City, IA, USA, 4 April 2020; pp. 1–4. [Google Scholar]
  15. Kim, W.; Kim, C. Active contours driven by the salient edge energy model. IEEE Trans. Image Process. 2012, 22, 1667–1673. [Google Scholar]
  16. Lecellier, F.; Fadili, J.; Jehan-Besson, S.; Aubert, G.; Revenu, M.; Saloux, E. Region-based active contours with exponential family observations. J. Math. Imaging Vis. 2010, 36, 28. [Google Scholar] [CrossRef] [Green Version]
  17. Zheng, C.; Deng, X.; Fu, Q.; Zhou, Q.; Feng, J.; Ma, H.; Liu, W.; Wang, X. Deep Learning-based Detection for COVID-19 from Chest CT using Weak Label. medRxiv 2020. [Google Scholar] [CrossRef] [Green Version]
  18. Cao, Y.; Xu, Z.; Feng, J.; Jin, C.; Han, X.; Wu, H.; Shi, H. Longitudinal Assessment of COVID-19 Using a Deep Learning–based Quantitative CT Pipeline: Illustration of Two Cases. Radiol. Cardiothorac. Imaging 2020, 2, e200082. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  19. Huang, L.; Han, R.; Ai, T.; Yu, P.; Kang, H.; Tao, Q.; Xia, L. Serial Quantitative Chest CT Assessment of COVID-19: Deep-Learning Approach. Radiol. Cardiothorac. Imaging 2020, 2, e200075. [Google Scholar] [CrossRef] [Green Version]
  20. Qi, X.; Jiang, Z.; Yu, Q.; Shao, C.; Zhang, H.; Yue, H.; Ma, B.; Wang, Y.; Liu, C.; Meng, X.; et al. Machine learning-based CT radiomics model for predicting hospital stay in patients with pneumonia associated with SARS-CoV-2 infection: A multicenter study. medRxiv 2020. [Google Scholar] [CrossRef]
  21. Gozes, O.; Frid-Adar, M.; Greenspan, H.; Browning, P.D.; Zhang, H.; Ji, W.; Bernheim, A.; Siegel, E. Rapid ai development cycle for the coronavirus (covid-19) pandemic: Initial results for automated detection & patient monitoring using deep learning ct image analysis. arXiv 2020, arXiv:2003.05037. [Google Scholar]
  22. Li, L.; Qin, L.; Xu, Z.; Yin, Y.; Wang, X.; Kong, B.; Bai, J.; Lu, Y.; Fang, Z.; Song, Q.; et al. Artificial intelligence distinguishes covid-19 from community acquired pneumonia on chest ct. Radiology 2020, 200905. [Google Scholar] [CrossRef]
  23. Jin, S.; Wang, B.; Xu, H.; Luo, C.; Wei, L.; Zhao, W.; Hou, X.; Ma, W.; Xu, Z.; Zheng, Z.; et al. AI-assisted CT imaging analysis for COVID-19 screening: Building and deploying a medical AI system in four weeks. medRxiv 2020. [Google Scholar] [CrossRef] [Green Version]
  24. Shan, F.; Gao, Y.; Wang, J.; Shi, W.; Shi, N.; Han, M.; Xue, Z.; Shen, D.; Shi, Y. Lung infection quantification of covid-19 in ct images with deep learning. arXiv 2020, arXiv:2003.04655. [Google Scholar]
  25. Tang, L.; Zhang, X.; Wang, Y.; Zeng, X. Severe COVID-19 pneumonia: Assessing inflammation burden with volume-rendered chest CT. Radiol. Cardiothorac. Imaging 2020, 2, e200044. [Google Scholar] [CrossRef] [Green Version]
  26. Shen, C.; Yu, N.; Cai, S.; Zhou, J.; Sheng, J.; Liu, K.; Zhou, H.; Guo, Y.; Niu, G. Quantitative computed tomography analysis for stratifying the severity of Coronavirus Disease 2019. J. Pharm. Anal. 2020, 10, 123–129. [Google Scholar] [CrossRef]
  27. Selvaraju, R.R.; Cogswell, M.; Das, A.; Vedantam, R.; Parikh, D.; Batra, D. Grad-cam: Visual explanations from deep networks via gradient-based localization. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 618–626. [Google Scholar]
  28. Panwar, H.; Gupta, P.; Siddiqui, M.K.; Morales-Menendez, R.; Bhardwaj, P.; Singh, V. A deep learning and grad-CAM based color visualization approach for fast detection of COVID-19 cases using chest X-ray and CT-Scan images. Chaos Solitons Fractals 2020, 140, 110190. [Google Scholar] [CrossRef]
  29. Sarker, L.; Islam, M.M.; Hannan, T.; Ahmed, Z. COVID-densenet: A deep learning architecture to detect covid-19 from chest radiology images; 2020; Preprints. [Google Scholar]
  30. Shi, F.; Wang, J.; Shi, J.; Wu, Z.; Wang, Q.; Tang, Z.; He, K.; Shi, Y.; Shen, D. Review of artificial intelligence techniques in imaging data acquisition, segmentation and diagnosis for covid-19. arXiv 2020, arXiv:2004.02731/. [Google Scholar]
  31. Islam, M.M.; Karray, F.; Alhajj, R.; Zeng, J. A review on deep learning techniques for the diagnosis of novel coronavirus (COVID-19). IEEE Access 2021, 9, 30551–30572. [Google Scholar] [CrossRef]
  32. Nayak, J.; Naik, B.; Dinesh, P.; Vakula, K.; Rao, B.K.; Ding, W.; Pelusi, D. Intelligent system for COVID-19 prognosis: A state-of-the-art survey. Appl. Intell. 2021, 51, 2908–2938. [Google Scholar] [CrossRef]
  33. Soomro, T.A.; Zheng, L.; Afifi, A.J.; Ali, A.; Yin, M.; Gao, J. Artificial intelligence (AI) for medical imaging to combat coronavirus disease (COVID-19): A detailed review with direction for future research. Artif. Intell. Rev. 2021, 1–31. Available online: https://link.springer.com/article/10.1007/s10462-021-09985-z (accessed on 23 August 2021).
  34. Jenssen, H.B. COVID-19 ct Segmentation Dataset. Available online: http://medicalsegmentation.com/covid19/ (accessed on 4 October 2020).
  35. Xu, T.; Cheng, I.; Mandal, M. An improved fluid vector flow for cavity segmentation in chest radiographs. In Proceedings of the 2010 20th International Conference on Pattern Recognition, Istanbul, Turkey, 23–26 August 2010; pp. 3376–3379. [Google Scholar]
  36. Ronfard, R. Region-based strategies for active contour models. Int. J. Comput. Vis. 1994, 13, 229–251. [Google Scholar] [CrossRef]
  37. Huang, R.; Pavlovic, V.; Metaxas, D.N. A graphical model framework for coupling MRFs and deformable models. In Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Washington, DC, USA, 27 June–2 July 2004; Volume 2. [Google Scholar]
  38. Caselles, V.; Kimmel, R.; Sapiro, G. Geodesic active contours. Int. J. Comput. Vis. 1997, 22, 61–79. [Google Scholar] [CrossRef]
  39. Xie, X.; Mirmehdi, M. MAC: Magnetostatic active contour model. IEEE Trans. Pattern Anal. Mach. Intell. 2008, 30, 632–646. [Google Scholar] [CrossRef] [Green Version]
  40. Chan, T.F.; Vese, L.A. Active contours without edges. IEEE Trans. Image Process. 2001, 10, 266–277. [Google Scholar] [CrossRef] [Green Version]
  41. Talu, M.F. ORACM: Online region-based active contour model. Expert Syst. Appl. 2013, 40, 6233–6240. [Google Scholar] [CrossRef]
  42. Kass, M.; Witkin, A.; Terzopoulos, D. Snakes: Active contour models. Int. J. Comput. Vis. 1988, 1, 321–331. [Google Scholar] [CrossRef]
  43. Wang, T.; Cheng, I.; Basu, A. Fluid vector flow and applications in brain tumor segmentation. IEEE Trans. Biomed. Eng. 2009, 56, 781–789. [Google Scholar] [CrossRef] [PubMed]
  44. Liu, B.; Cheng, H.D.; Huang, J.; Tian, J.; Tang, X.; Liu, J. Probability density difference-based active contour for ultrasound image segmentation. Pattern Recognit. 2010, 43, 2028–2042. [Google Scholar] [CrossRef]
  45. Cohen, L.D.; Cohen, I. Finite-element methods for active contour models and balloons for 2-D and 3-D images. IEEE Trans. Pattern Anal. Mach. Intell. 1993, 15, 1131–1147. [Google Scholar] [CrossRef] [Green Version]
  46. Sethian, J.A.; Sethian, J. Level Set Methods: Evolving Interfaces in Geometry, Fluid Mechanics, Computer Vision, and Materials Science; Cambridge University Press: Cambridge, UK, 1996; Volume 1999. [Google Scholar]
  47. Zhang, K.; Zhang, L.; Song, H.; Zhou, W. Active contours with selective local or global segmentation: A new formulation and level set method. Image Vis. Comput. 2010, 28, 668–676. [Google Scholar] [CrossRef]
  48. Chen, X.; Williams, B.M.; Vallabhaneni, S.R.; Czanner, G.; Williams, R.; Zheng, Y. Learning active contour models for medical image segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 11632–11640. [Google Scholar]
  49. Marcos, D.; Tuia, D.; Kellenberger, B.; Zhang, L.; Bai, M.; Liao, R.; Urtasun, R. Learning deep structured active contours end-to-end. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 19–21 June 2018; pp. 8877–8885. [Google Scholar]
  50. Gur, S.; Wolf, L.; Golgher, L.; Blinder, P. Unsupervised microvascular image segmentation using an active contours mimicking neural network. In Proceedings of the IEEE International Conference on Computer Vision, Seoul, Korea, 27 October–2 November 2019; pp. 10722–10731. [Google Scholar]
  51. Rupprecht, C.; Huaroc, E.; Baust, M.; Navab, N. Deep active contours. arXiv 2016, arXiv:1607.05074. [Google Scholar]
  52. Ma, D.; Liao, Q.; Chen, Z.; Liao, R.; Ma, H. Adaptive local-fitting-based active contour model for medical image segmentation. Signal Process. Image Commun. 2019, 76, 201–213. [Google Scholar] [CrossRef]
  53. Fang, J.; Liu, H.; Zhang, L.; Liu, J.; Liu, H. Fuzzy region-based active contours driven by weighting global and local fitting energy. IEEE Access 2019, 7, 184518–184536. [Google Scholar] [CrossRef]
  54. Liu, H.; Fang, J.; Zhang, Z.; Lin, Y. A novel active contour model guided by global and local signed energy-based pressure force. IEEE Access 2020, 8, 59412–59426. [Google Scholar] [CrossRef]
  55. Rajinikanth, V.; Kadry, S.; Thanaraj, K.P.; Kamalanand, K.; Seo, S. Firefly-Algorithm Supported Scheme to Detect COVID-19 Lesion in Lung CT Scan Images using Shannon Entropy and Markov-Random-Field. arXiv 2020, arXiv:2004.09239. [Google Scholar]
  56. Qiu, Y.; Liu, Y.; Li, S.; Xu, J. Miniseg: An extremely minimum network for efficient covid-19 segmentation. arXiv 2020, arXiv:2004.09750. [Google Scholar]
  57. Howard, A.G.; Zhu, M.; Chen, B.; Kalenichenko, D.; Wang, W.; Weyand, T.; Andreetto, M.; Adam, H. Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv 2017, arXiv:1704.04861. [Google Scholar]
  58. Fan, D.P.; Zhou, T.; Ji, G.P.; Zhou, Y.; Chen, G.; Fu, H.; Shen, J.; Shao, L. Inf-net: Automatic covid-19 lung infection segmentation from ct images. IEEE Trans. Med. Imaging 2020, 39, 2626–2637. [Google Scholar] [CrossRef] [PubMed]
  59. Chen, L.C.; Zhu, Y.; Papandreou, G.; Schroff, F.; Adam, H. Encoder-decoder with atrous separable convolution for semantic image segmentation. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–12 September 2018; pp. 801–818. [Google Scholar]
  60. Tan, M.; Le, Q. Efficientnet: Rethinking model scaling for convolutional neural networks. In Proceedings of the International Conference on Machine Learning, Long Beach, CA, USA, 9–15 June 2019; PMLR: Long Beach, CA, USA, 2019; pp. 6105–6114. [Google Scholar]
  61. Lo, S.Y.; Hang, H.M.; Chan, S.W.; Lin, J.J. Efficient dense modules of asymmetric convolution for real-time semantic segmentation. In Proceedings of the ACM Multimedia Asia; 2019; pp. 1–6. Available online: https://scholar.google.com/scholar?q=Efficient+dense+modules+of+asymmetric+convolution+for+real-time+semantic+segmentation&hl=en&as_sdt=0&as_vis=1&oi=scholart (accessed on 23 August 2021).
  62. Paszke, A.; Chaurasia, A.; Kim, S.; Culurciello, E. Enet: A deep neural network architecture for real-time semantic segmentation. arXiv 2016, arXiv:1606.02147. [Google Scholar]
  63. Mehta, S.; Rastegari, M.; Shapiro, L.; Hajishirzi, H. Espnetv2: A light-weight, power efficient, and general purpose convolutional neural network. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 16-20 June 2019; pp. 9190–9200. [Google Scholar]
Figure 1. Overview of the proposed steps.
Figure 2. Three examples of initial contours considered outside (a), overlapping (b), and inside (c) the image.
Figure 3. An example of a concave region; (a) is the initial snake and (b) is the final state of the snake (Kass model) after 2346 iterations.
Figure 4. Segmentation result on a synthetic image based on GAC model; (a) the initial contour shown in red color, (b) the final segmentation result, and (c) shows all of the boundaries that should be surrounded.
Figure 5. An example of global segmentation property of the C-V model. (a) shows the initial contour and (b) shows the segmentation result of the C-V method, which cannot extract all boundaries in the image.
Figure 6. Samples of images of COVID-19 database and their ground truth (a binary image where the white parts represent the area of infection).
Figure 7. ROC curve plot based on the average TP and FP for all images in the database when the threshold was changed.
Figure 8. Outcomes of the lung region extraction.
Figure 9. Outcomes of all active contour models on the database.
Table 1. The overall distribution of samples used in the database.
Range of ages of patients: 32–86 (years)
No. of total patients (men + women + NA): 49 (= 27 + 11 + 11)
Minimum no. of COVID-19 CT images per patient: 1
Maximum no. of COVID-19 CT images per patient: 13
Total no. of COVID-19 CT images: 100
Table 2. Success of active contour methods in obtaining infection area in terms of initial contour.
Initial Contour | C-V | SBGFRLS | MAC | ORACM | LSACM | RSFOLGE | ALF | FRAGL | GLSEPF
Inside | Failed | Failed | Done | Done | Failed | Failed | Failed | Failed | Failed
Outside | Failed | Failed | Done | Done | Failed | Done | Done | Failed | Failed
Cross | Done | Failed | Done | Done | Done | Done | Failed | Done | Done
Table 3. Detailed evaluation of seven metrics based on COVID-19 CT images.
Measure | C-V | MAC | ORACM | LSACM | RSFOLGE | ALF | FRAGL | GLSEPF
Dice (%) | 93.55 | 95.94 | 96.30 | 95.77 | 89.88 | 92.12 | 96.44 | 95.60
Jaccard (%) | 88.31 | 92.32 | 93.06 | 92.01 | 82.49 | 85.70 | 93.21 | 91.77
Bfscore (%) | 66.82 | 61.40 | 74.13 | 60.46 | 63.50 | 57.05 | 65.55 | 71.96
Precision (%) | 84.44 | 92.37 | 77.73 | 96.89 | 73.03 | 93.33 | 91.33 | 68.24
Recall (%) | 58.15 | 48.25 | 72.41 | 45.67 | 61.44 | 43.34 | 53.18 | 78.81
Iteration | 1588 | 500 | 5 | 200 | 250 | 8 | 10 | 30
Time (s) | 55 | 700 | 1.4 | 124 | 1.40 | 100 | 1.8 | 5.5
Table 4. Comparison of the best active contour method with deep learning methods in terms of the Recall measure.
Method | MiniSeg | MobileNet | Inf-Net | DeepLabv3+ | EfficientNet | EDANet | ENet | ESPNetv2 | GLSEPF (ACM)
Recall (%) | 84.95 | 81.19 | 76.50 | 79.58 | 80.25 | 82.86 | 81.26 | 77.84 | 78.81
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
