Article

COVLIAS 1.0: Lung Segmentation in COVID-19 Computed Tomography Scans Using Hybrid Deep Learning Artificial Intelligence Models

by Jasjit S. Suri 1,2,*, Sushant Agarwal 2,3, Rajesh Pathak 4, Vedmanvitha Ketireddy 5, Marta Columbu 6, Luca Saba 6, Suneet K. Gupta 7, Gavino Faa 8, Inder M. Singh 1, Monika Turk 9, Paramjit S. Chadha 1, Amer M. Johri 10, Narendra N. Khanna 11, Klaudija Viskovic 12, Sophie Mavrogeni 13, John R. Laird 14, Gyan Pareek 15, Martin Miner 16, David W. Sobel 15, Antonella Balestrieri 6, Petros P. Sfikakis 17, George Tsoulfas 18, Athanasios Protogerou 19, Durga Prasanna Misra 20, Vikas Agarwal 20, George D. Kitas 21,22, Jagjit S. Teji 23, Mustafa Al-Maini 24, Surinder K. Dhanjil 25, Andrew Nicolaides 26, Aditya Sharma 27, Vijay Rathore 25, Mostafa Fatemi 28, Azra Alizad 29, Pudukode R. Krishnan 30, Nagy Frence 31, Zoltan Ruzsa 31, Archna Gupta 32, Subbaram Naidu 33 and Mannudeep Kalra 34
1 Stroke Diagnostic and Monitoring Division, AtheroPoint™, Roseville, CA 95661, USA
2 Advanced Knowledge Engineering Centre, GBTI, Roseville, CA 95661, USA
3 Department of Computer Science Engineering, PSIT, Kanpur 209305, India
4 Department of Computer Science Engineering, Rawatpura Sarkar University, Raipur 492015, India
5 Mira Loma High School, Sacramento, CA 95821, USA
6 Department of Radiology, Azienda Ospedaliero Universitaria (A.O.U.), 09124 Cagliari, Italy
7 Department of Computer Science, Bennett University, Noida 201310, India
8 Department of Pathology—AOU of Cagliari, 09124 Cagliari, Italy
9 The Hanse-Wissenschaftskolleg Institute for Advanced Study, 27753 Delmenhorst, Germany
10 Department of Medicine, Division of Cardiology, Queen’s University, Kingston, ON K7L 3N6, Canada
11 Department of Cardiology, Indraprastha APOLLO Hospitals, New Delhi 208011, India
12 Department of Radiology, University Hospital for Infectious Diseases, 10000 Zagreb, Croatia
13 Cardiology Clinic, Onassis Cardiac Surgery Center, 176 74 Athens, Greece
14 Heart and Vascular Institute, Adventist Health St. Helena, St. Helena, CA 94574, USA
15 Minimally Invasive Urology Institute, Brown University, Providence City, RI 02912, USA
16 Men’s Health Center, Miriam Hospital Providence, Providence, RI 02906, USA
17 Rheumatology Unit, National Kapodistrian University of Athens, 157 72 Athens, Greece
18 Department of Transplantation Surgery, Aristoteleion University of Thessaloniki, 541 24 Thessaloniki, Greece
19 National & Kapodistrian University of Athens, 157 72 Athens, Greece
20 Department of Immunology, Sanjay Gandhi Postgraduate Institute of Medical Sciences, Lucknow 226014, India
21 Academic Affairs, Dudley Group NHS Foundation Trust, Dudley DY1 2HQ, UK
22 Arthritis Research UK Epidemiology Unit, Manchester University, Manchester M13 9PL, UK
23 Ann and Robert H. Lurie Children’s Hospital of Chicago, Chicago, IL 60611, USA
24 Allergy, Clinical Immunology and Rheumatology Institute, Toronto, ON M5G 1N8, Canada
25 AtheroPoint LLC, Roseville, CA 95611, USA
26 Vascular Screening and Diagnostic Centre, University of Nicosia Medical School, Nicosia 2408, Cyprus
27 Division of Cardiovascular Medicine, University of Virginia, Charlottesville, VA 22904, USA
28 Department of Physiology & Biomedical Engg., Mayo Clinic College of Medicine and Science, Rochester, MN 55905, USA
29 Department of Radiology, Mayo Clinic College of Medicine and Science, Rochester, MN 55905, USA
30 Neurology Department, Fortis Hospital, Bangalore 560076, India
31 Department of Internal Medicines, Invasive Cardiology Division, University of Szeged, 6720 Szeged, Hungary
32 Radiology Department, Sanjay Gandhi Postgraduate Institute of Medical Sciences, Lucknow 226014, India
33 Electrical Engineering Department, University of Minnesota, Duluth, MN 55455, USA
34 Department of Radiology, Massachusetts General Hospital, 55 Fruit Street, Boston, MA 02114, USA
* Author to whom correspondence should be addressed.
Diagnostics 2021, 11(8), 1405; https://doi.org/10.3390/diagnostics11081405
Submission received: 6 July 2021 / Revised: 28 July 2021 / Accepted: 29 July 2021 / Published: 4 August 2021
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)

Abstract
Background: COVID-19 lung segmentation using Computed Tomography (CT) scans is important for the diagnosis of lung severity. The process of automated lung segmentation is challenging due to (a) CT radiation dosage and (b) ground-glass opacities caused by COVID-19. The lung segmentation methodologies proposed in 2020 were semi- or fully automated but not reliable, accurate, and user-friendly. The proposed study presents a COVID Lung Image Analysis System (COVLIAS 1.0, AtheroPoint™, Roseville, CA, USA) consisting of hybrid deep learning (HDL) models for lung segmentation. Methodology: COVLIAS 1.0 consists of three methods based on solo deep learning (SDL) or hybrid deep learning (HDL). SegNet is proposed in the SDL category, while VGG-SegNet and ResNet-SegNet are designed under the HDL paradigm. The three proposed AI approaches were benchmarked against the National Institutes of Health (NIH)-based conventional segmentation model using fuzzy connectedness. A cross-validation protocol with a 40:60 ratio between training and testing was designed, with 10% validation data. The ground truth (GT) was manually traced by trained personnel under a radiologist’s supervision. For performance evaluation, nine different criteria were selected to evaluate the SDL or HDL lung segmentation regions and lung long axes against GT. Results: Using a database of 5000 chest CT images (from 72 patients), COVLIAS 1.0 yielded AUC of ~0.96, ~0.97, ~0.98, and ~0.96 (p-value < 0.001) for SegNet, VGG-SegNet, ResNet-SegNet, and NIH, respectively, within a 5% range of the GT area. The mean Figure of Merit for the four models (left and right lung) was above 94%. On benchmarking against the NIH segmentation method, the proposed models demonstrated a 58% and 44% improvement for ResNet-SegNet, and a 52% and 36% improvement for VGG-SegNet, in lung area and lung long axis, respectively. The PE statistics performance was in the following order: ResNet-SegNet > VGG-SegNet > NIH > SegNet. The HDL runs in <1 s per test image. Conclusions: The COVLIAS 1.0 system can be applied in real-time in radiology-based clinical settings.

1. Introduction

On 30 January 2020, the International Health Regulations and Emergency Committee of the World Health Organization (WHO) declared COVID-19, or Coronavirus, a “public health emergency of international concern” or “pandemic”. As per the WHO’s statistics dated 28 July 2021, more than 196 million people have been infected with SARS-CoV-2, which can cause Acute Respiratory Distress Syndrome (ARDS), and nearly 4.2 million have lost their lives due to this virus [1]. The SARS-CoV-2 virus affects the respiratory system, damages the lungs, travels through the entire body, and can cause myocardial infarction or coronary artery syndrome [2,3], worsen diabetes [4], or cause pulmonary embolism [5]. Comorbidity has been seen to have a severe influence on COVID-19 [6]. As of today, even though vaccination is showing signs of pandemic control in some parts of the world, COVID-19 remains the highest concern for health professionals. The objective is to flatten the curve and evaluate COVID-19 lung severity at an early stage, prior to worsening symptoms and admission of patients to the ICU [7].
Computed Tomography (CT) is often used for assessing COVID-19 severity in the lung regions and is considered an important component of computer-aided diagnosis for lung image analysis [7,8,9]. For a complete diagnosis of COVID-19 severity, one must first identify the region of interest in these CT scans. There are two main challenges associated with the processing of CT scans. The first is the large volume of patients in diagnostic centers, each with around 200 slices to be processed, which makes the task tedious and time-consuming [10]. The second is the reliability, accuracy, and clinical effectiveness of current automated or semi-automated systems [11]. One major cause of unreliable accuracy and low performance is intensity-based segmentation, which is influenced by local or global statistical methods; furthermore, such methods do not take advantage of the cohort’s knowledge. Thus, there is a clear need for an automated and accurate joint left- and right-lung identification system for CT scans.
Artificial Intelligence (AI) has provided fast pace research and development in nearly all walks of life especially in healthcare imaging [12]. The reason for such growth is the ability of AI techniques to mimic the brain using deep neural networks [12,13,14]. Machine learning is the most fundamental class of AI and has dominated the medical imaging industry [15], covering many fields of healthcare such as cancer classification and diagnosis of several kinds of cancers such as breast [16], liver [17,18], thyroid [19,20,21], skin [22,23], ovarian [24], and lung [25]. ML has also been applied to risk stratification for coronary artery disease [26,27], atherosclerosis imaging [28,29], and stroke [30], diabetes [31,32], and arrhythmia classification [33].
A subclass of ML is deep learning (DL), conventionally called solo DL (SDL). Many researchers have proposed AI-based studies on COVID-19 that involve SDL techniques [34,35]. DL offers substantial benefits compared to ML-based solutions [11,36]: it provides fully automated feature extraction and classification simultaneously using the so-called dense layers. Despite its merits, DL poses challenges and uncertainties such as optimizing the learning rate, deciding the number of epochs, avoiding overfitting, handling large image size configurations, and operating in a multiresolution framework. One way to overcome the challenges in SDL is to fuse two kinds of DL systems, called hybrid DL (HDL). It offers advantages such as (i) extraction and refinement of a large number of features, (ii) application of diversity in classifiers, and (iii) the ability to apply transfer learning for weight reusability. The proposed study uses the spirit of HDL to design the COVID Lung Image Analysis System (COVLIAS) 1.0, which computes the segmentation of the left and right lungs jointly. The proposed framework uses two kinds of HDL models that combine the Visual Geometry Group (VGG), Segmentation Network (SegNet), and Residual Network (ResNet) architectures. The first combines the strengths of the VGG and SegNet models and offers a new VGG-SegNet framework.
Similarly, ResNet-SegNet includes the power of both ResNet and SegNet. These models were trained using a 40:60 training-to-testing ratio protocol, and our novel design was benchmarked against the NIH-based fuzzy-connectedness (FC) lung segmentation method [37]. Finally, we compute nine different criteria for the performance evaluation of the lung region and boundary estimation system.
The layout of this study is as follows: Section 2 presents the methodology with the architecture of the COVLIAS 1.0 (Figure 1) using HDL and SDL. The experimental protocol is shown in Section 3, while results and comprehensive performance evaluation are presented in Section 4. The discussions and benchmarking are presented in Section 5. Finally, the paper concludes in Section 6.

2. Methodology

2.1. Patient Demographics, Image Acquisition, and Data Preparation

2.1.1. Demographics

The dataset consists of 72 Italian patients, where 46 patients were male and 26 were female. The mean height and weight were 172.72 cm and 79 kg, respectively. A total of 60 patients were tested for PCR, while 12 could not be tested for PCR.

2.1.2. Image Acquisition

All chest computed tomography (CT) scans were performed during a single full inspiratory breath-hold in the supine position on a 128-slice multidetector-row CT scanner (Philips Ingenuity Core, Philips Healthcare, The Netherlands). No intravenous or oral contrast media were administered. The CT examinations were performed at 120 kV, 226 mAs (using automatic tube current modulation—Z-DOM, Philips), with a 1.08 spiral pitch factor, 0.5-s gantry rotation time, and a 64 × 0.625 mm detector configuration. One-mm-thick images were reconstructed with a soft-tissue kernel using a 512 × 512 matrix (mediastinal window) and a lung kernel using a 768 × 768 matrix (lung window). CT images were reviewed on a Picture Archiving and Communication System (PACS) workstation equipped with two 35 × 43 cm Eizo monitors with a 2048 × 1536 matrix. Figure 2 shows raw sample CT scans of COVID-19 patients with varying lung sizes and variable intensity patterns, which pose a challenge for segmentation.

2.1.3. Data Preparation

CT data from 72 COVID-positive patients were used in the proposed study. Each patient’s scan consisted of 200 slices, out of which 65–70 slices containing a visible lung region were selected by the radiologist [LS]. A total of 5000 images were used in the proposed study: 2000 images to train the AI-based segmentation models and 3000 images to test and validate them.
To prepare the data for segmentation, a binary mask was generated manually by tracing in the selected slices under the guidance of a trained radiologist [LS] using ImgTracer™ (Global Biomedical Technologies, Inc., Roseville, CA, USA) [38,39,40]. Figure 3 shows the raw lung COVID-19 CT scans with ground truth boundary (white) overlay using manual tracing. Figure 4 shows the white binary mask of the lung region computed using manual ImgTracer™.

2.2. Architecture

The COVLIAS 1.0 system incorporates three models: one solo deep learning (SDL) model and two hybrid deep learning (HDL) models. We benchmark these against the conventional NIH-based model using fuzzy connectedness. In all, the proposed study incorporates four models: (a) SegNet, (b) VGG-SegNet, (c) ResNet-SegNet, and (d) the NIH conventional model.

2.2.1. SegNet

SegNet is a semantic segmentation model, shown in Figure 5, originally implemented by Badrinarayanan et al. [41]; we adapted SegNet for comparison with HDL. SegNet is a trainable segmentation architecture that consists of (i) an encoder network (the left half of Figure 5), (ii) a corresponding decoder network (the right half of Figure 5), followed by (iii) a pixel-wise classification layer (the last block of Figure 5). The architecture of the encoder network is topologically identical to the 13 convolutional layers of the VGG16 network. To down-sample the encoder output, it stores the max-pooling (filter size of 2 × 2) indices. The max-pooling layer is placed at the end of each block on the encoder side, and the depth of the following layer doubles (64–128–256–512).
Similarly, in the second half of the architecture, up-sampling takes place, where the depth of the layer halves (512–256–128–64). The up-sampling process recalls the max-pooling indices stored at the corresponding encoder layer. Finally, at the end, a K-class softmax classifier predicts the class for each pixel. This gives reasonably good performance and is space-efficient.
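The index-based pooling and unpooling described above can be illustrated with a small toy sketch (pure Python, not the trained network): the encoder records where each 2 × 2 maximum came from, and the decoder scatters pooled values back to exactly those positions.

```python
# Illustrative sketch (not the trained network): SegNet-style 2x2 max-pooling
# that stores the winning positions, and index-based up-sampling that places
# each pooled value back at its original location (zeros elsewhere).

def max_pool_2x2(x):
    """Return (pooled, indices) for a 2D list with even dimensions."""
    h, w = len(x), len(x[0])
    pooled, indices = [], []
    for i in range(0, h, 2):
        prow, irow = [], []
        for j in range(0, w, 2):
            window = [(x[i + di][j + dj], (i + di, j + dj))
                      for di in (0, 1) for dj in (0, 1)]
            val, pos = max(window)          # max value and its coordinates
            prow.append(val)
            irow.append(pos)
        pooled.append(prow)
        indices.append(irow)
    return pooled, indices

def unpool_2x2(pooled, indices, h, w):
    """SegNet decoder step: scatter pooled values back to stored indices."""
    out = [[0.0] * w for _ in range(h)]
    for prow, irow in zip(pooled, indices):
        for val, (i, j) in zip(prow, irow):
            out[i][j] = val
    return out

x = [[1.0, 3.0, 2.0, 0.0],
     [4.0, 2.0, 1.0, 5.0],
     [0.0, 1.0, 3.0, 2.0],
     [2.0, 0.0, 1.0, 4.0]]
pooled, idx = max_pool_2x2(x)        # pooled -> [[4.0, 5.0], [2.0, 4.0]]
restored = unpool_2x2(pooled, idx, 4, 4)
```

Restoring values at the remembered positions (rather than interpolating) is what lets SegNet recover boundary detail without learnable up-sampling weights.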

2.2.2. VGG-SegNet

The VGG architecture consists of blocks, where each block is composed of 2D convolution (Conv) and max-pooling layers (Figure 6). VGGNet was born out of the need to reduce the number of parameters in the convolution layers and improve training time [42]. This architecture consists of 16 convolution layers compared to 13 in SegNet, which helps the model learn high-level features and improve the results. The VGG-SegNet architecture consists of (i) an encoder, (ii) a decoder, followed by (iii) a pixel-wise softmax classifier.
The distribution of the blocks in the encoder (the left half of Figure 6) is 2 × (2 Conv + max-pool) + 3 × (4 Conv + max-pool), and the decoder (the right half of Figure 6) is the reverse of the encoder part with up-sampling, thereby decreasing the depth of the layers.

2.2.3. ResNet-SegNet

Residual Network (ResNet) [43] came into existence to overcome the vanishing-gradient problem of the VGG architecture. During backpropagation, the gradient flows backward to the initial layers and is multiplied by each local gradient along the way. As a result, the gradient becomes smaller and smaller, making the updates to the initial layers very small; this is known as the vanishing gradient problem, and it increases training time. ResNet improves on VGG with skip connections, which act as gradient superhighways or bypasses, allowing the gradient to flow unhindered and solving the vanishing-gradient problem. With the help of the identity function (shown in the bottom right of Figure 7), the backpropagated gradient does not shrink, because the local gradient of the identity path is one.
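The identity-path argument can be checked numerically with a one-dimensional toy sketch (hypothetical functions, not the actual network): for y = F(x) + x, the derivative is F′(x) + 1, so even when the branch gradient F′(x) has nearly vanished, the skip connection keeps the total gradient near one.

```python
# Illustrative sketch: why a skip connection keeps gradients alive.
# With y = F(x) + x, dy/dx = F'(x) + 1, so even when F'(x) is ~0
# (a "vanished" branch), the identity path still carries gradient ~1.

def F(x):
    return 1e-6 * x * x      # toy branch whose gradient is nearly zero

def plain(x):
    return F(x)              # no skip connection

def residual(x):
    return F(x) + x          # skip connection adds the identity path

def num_grad(f, x, eps=1e-6):
    """Central finite-difference estimate of df/dx."""
    return (f(x + eps) - f(x - eps)) / (2 * eps)

g_plain = num_grad(plain, 2.0)     # ~4e-6: gradient has vanished
g_res = num_grad(residual, 2.0)    # ~1.000004: identity path preserves it
```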

2.2.4. NIH Segmentation Model

Figure 8 shows the architecture of the fuzzy-connected system for segmentation of the lung region. This system was not built in the study but simply adapted from the NIH website [37] for benchmarking. There are three steps in the process: (i) seed point selection, (ii) fuzzy-connected (FC) segmentation and binary mask cleaning, and (iii) grayscale segmentation. The user can select the seed point in two lines corresponding to the left and right lungs. FC segmentation is executed, and the results are displayed in the form of a binary mask. Finally, the lung is segmented with the given binary mask and the original CT COVID-19 grayscale scan leading to segmented lungs.

2.2.5. Loss Functions for SDL and HDL Models

The proposed system uses the cross-entropy (CE) loss in the SDL/HDL models. The CE-loss function, labeled LCE for all the models, is given by Equation (1):
LCE = −[(yi × log(ai)) + (1 − yi) × log(1 − ai)]
where yi is the input GT label 1, (1 − yi) is GT label 0, ai represents the softmax classifier probability, and × represents the product of the two terms. All three of the AI architectures shown in Figure 5, Figure 6 and Figure 7 were trained using the CE-loss function.
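A minimal per-pixel sketch of Equation (1), with y the GT label (0 or 1) and a the classifier probability:

```python
import math

# Minimal sketch of the per-pixel binary cross-entropy loss in Equation (1);
# y is the GT label (0 or 1) and a is the predicted probability.

def ce_loss(y, a, eps=1e-12):
    a = min(max(a, eps), 1.0 - eps)   # clamp to avoid log(0)
    return -(y * math.log(a) + (1.0 - y) * math.log(1.0 - a))

# Confident correct prediction -> small loss; confident wrong -> large loss.
good = ce_loss(1, 0.95)   # ~0.051
bad = ce_loss(1, 0.05)    # ~3.0
```

In training, this quantity is averaged over all pixels of all images in a batch.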

2.3. Sample Size Calculation

It is vital to understand the sample size needed to achieve the desired performance. In general, sample size calculation estimates the number of studies needed in our proposed system to reach the desired accuracy and error statistics. This is crucial because a sample size far larger than needed incurs additional computational and economic burdens, in both time and cost. We follow our previous paradigms for sample size calculation [27,44,45,46,47,48,49]. The sample size was computed using a z-score from the standard z-table, as shown in Equation (2):
Np = (z² × p̂ × (1 − p̂)) / MoE²
In the proposed study, we use a confidence level of 95%, a margin of error (MoE) of 5%, and a data proportion (p̂) of 0.5. A normal distribution was assumed for the data sample, and a confidence level of 95% gives a z-score of 1.96 (derived from the standard z-table). An MoE of 5% for SDL and 3% for HDL was used to ensure that the population remains in the tolerance band. The desired sample size (Np) for a large, true population was computed using Equation (2). At the 95% confidence level, the resultant sample size was 1351 samples for HDL (MoE of 3%) and 588 samples for SDL (MoE of 5%). Thus, the recruited sample size for HDL and SDL was ~270% and ~750% higher than the desired sample size, respectively.
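Equation (2) can be transcribed directly as a sketch; the values produced by the bare formula with these inputs may differ from the study's reported counts.

```python
import math

# Direct transcription of Equation (2): Np = z^2 * p_hat * (1 - p_hat) / MoE^2,
# rounded up to a whole sample count. z = 1.96 for a 95% confidence level,
# p_hat = 0.5, and MoE expressed as a fraction.

def sample_size(z, p_hat, moe):
    return math.ceil(z ** 2 * p_hat * (1 - p_hat) / moe ** 2)

n_5pct = sample_size(1.96, 0.5, 0.05)   # SDL setting (5% MoE)
n_3pct = sample_size(1.96, 0.5, 0.03)   # HDL setting (3% MoE)
```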

3. Experimental Protocol

3.1. Cross-Validation

A total of 5000 COVID-19 CT slices were used in the proposed system. A well-known data partitioning method, K5-r46, was used, in which the database was partitioned into five parts (K5). A 40:60 training-to-testing ratio was then applied: 40% of the data (2000 images) was used for training and 60% (3000 images) for testing. An internal validation set was also drawn from the training data. For each test patient, the binary mask was generated, and the lung area was estimated.
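The partition described above can be sketched as follows (a hypothetical helper, with the validation fraction assumed to be the 10% stated in the abstract):

```python
import random

# Sketch of the data partition: 5000 slice indices split 40:60 into training
# and testing, with 10% of the training portion held out for internal
# validation. The seed and helper name are illustrative assumptions.

def partition(n_total=5000, train_frac=0.40, val_frac=0.10, seed=0):
    idx = list(range(n_total))
    random.Random(seed).shuffle(idx)
    n_train = int(n_total * train_frac)            # 2000 training slices
    train, test = idx[:n_train], idx[n_train:]     # 2000 / 3000 split
    n_val = int(len(train) * val_frac)             # 200 validation slices
    return train[n_val:], train[:n_val], test

train, val, test = partition()
```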

3.2. Lung Quantification

The lung area is measured by counting the number of white pixels in the lung region and converting the count to mm² using the resolution factor of 0.05 mm per pixel. If N represents the total number of images in the database, Aai(m, n) represents the lung area in image “n” using the AI model “m”, Āai(m) represents the mean lung area corresponding to the AI model “m”, and Āgt represents the corresponding mean area of the GT binary lung mask, then Āai(m) and Āgt can be computed as shown in Equation (3):
Āai(m) = (1/N) Σn=1..N Aai(m, n),  Āgt = (1/N) Σn=1..N Agt(n)
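A minimal sketch of this area computation on toy masks, assuming the stated 0.05 mm/pixel resolution (so each pixel covers 0.0025 mm²):

```python
# Sketch of the lung-area computation: count white pixels in a binary mask,
# convert to mm^2 with the stated 0.05 mm/pixel resolution, then average
# over the database as in Equation (3).

PIXEL_MM = 0.05                      # mm per pixel side
PIXEL_AREA_MM2 = PIXEL_MM ** 2       # mm^2 covered by one pixel

def lung_area_mm2(mask):
    """mask: 2D list of 0/1 values (white = lung)."""
    return sum(sum(row) for row in mask) * PIXEL_AREA_MM2

def mean_area(masks):
    """Mean area over a set of masks, as in Equation (3)."""
    return sum(lung_area_mm2(m) for m in masks) / len(masks)

mask = [[0, 1, 1],
        [0, 1, 0]]
area = lung_area_mm2(mask)           # 3 pixels * 0.0025 mm^2 = 0.0075 mm^2
```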

3.3. Accuracy Computation

The accuracy of the AI system is measured by comparing the predicted output against the ground truth pixel values. These values are binary (0 or 1), since the output lung mask is only black and white; the matching pixels are summed and divided by the total number of pixels. Let TP represent true positives, TN true negatives, FN false negatives, and FP false positives. The accuracy of the AI system is then given by Equation (4) [50]:
ACC (%) = ((TP + TN) / (TP + FN + TN + FP)) × 100
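A minimal sketch of Equation (4) on a pair of toy masks:

```python
# Sketch of Equation (4): pixel-wise accuracy of a predicted binary mask
# against the GT mask via TP/TN/FP/FN counts.

def accuracy_pct(pred, gt):
    tp = tn = fp = fn = 0
    for prow, grow in zip(pred, gt):
        for p, g in zip(prow, grow):
            if p == 1 and g == 1:
                tp += 1
            elif p == 0 and g == 0:
                tn += 1
            elif p == 1 and g == 0:
                fp += 1
            else:
                fn += 1
    return (tp + tn) / (tp + fn + tn + fp) * 100

pred = [[1, 1, 0, 0],
        [0, 1, 0, 1]]
gt   = [[1, 0, 0, 0],
        [0, 1, 0, 1]]
acc = accuracy_pct(pred, gt)   # 7 of 8 pixels agree -> 87.5
```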

4. Results and Performance Evaluation

4.1. Results

COVLIAS 1.0 was run with 40% training data and 60% unseen test data from the 5000 scans. The output of the system is the binary mask for the left and right lungs jointly. These masks were then used to compute the segmented region from the grayscale CT COVID-19 scans. Figure 9 shows the segmented mask (column 2), the segmented lung region (column 3), and the segmented lung mask overlaid in green on the grayscale image (column 4). Figure 10 shows the accuracy (Equation (4)) and the loss (Equation (1)) for the three AI models: SegNet, VGG-SegNet, and ResNet-SegNet. The accuracy and loss values were plotted as a stacked line chart to show the increase in accuracy and the decrease in loss during training.

4.2. Performance Evaluation

As part of the performance evaluation, three metrics and nine evaluation criteria were considered. First are the visualization comparisons, second is the lung area metrics, and third are the long axis metrics. Section 4.2.1 presents the visualization of boundary error and lung area error (LAE). It has two sets of visualization components (i) visualization of the model boundary against the ground truth boundary with different boundary colors; and (ii) visualization of the superimposition of the model lung region against the GT lung region with different colors; Section 4.2.2 presents the performance evaluation based on the LAE and consists of (i) cumulative plot of the LAE; (ii) Bland–Altman plot for the LAE; (iii) JI and DS for the lung region and (iv) ROC curves for AI-based model performance. Section 4.2.3 presents the performance evaluation based on the lung long axis error (LLAE) and consists of the following components: (i) cumulative plot of the LLAE; (ii) correlation coefficient of the LLAE; and (iii) Bland–Altman plot for the LLAE.
The lung masks were then converted into lung boundary images using a region-to-boundary convertor. The lung boundary was superimposed over the original COVID-19 lung CT grayscale scans to visualize the lung, shown in Figure 11. Note that the top three AI models (1 SDL and 2 HDL) performed well (row-1, row-2, and row-3), as can be seen visually, unlike the NIH conventional model (row-4).

4.2.1. Visualization of Lung Boundary and Regional Lung Error

Figure 12 shows the overlay of the AI-model boundary (green) and the GT boundary (red) with a grayscale COVID-19 CT slice in the background. Four models are shown in the rows in the following order: SegNet, VGG-SegNet, ResNet-SegNet, and NIH. Going down the rows, the boundaries are bumper-to-bumper, showing a nearly negligible difference between the three AI models and GT, while the error is pronounced for the NIH model. The white arrows indicate the regions of mismatch between the NIH model and the ground truth. The top three rows (SegNet, VGG-SegNet, and ResNet-SegNet) show closer agreement between the AI-model borders (green) and the ground truth borders (red).
Figure 13 shows the lung region using AI-mask from the AI-model (green) vs. GT-mask (red). Note that the slight difference is visible between the two-colored masks for the first three AI models, while a more significant difference is seen between the NIH model and GT, typically at the basal region (shown by the white arrows). For better visualization of the differences in area region, we have taken two sample slices in Figure 14, where the combination (c) shows the zoomed version of (a), and combination (d) shows the zoomed version of (b). Figure 15 shows the long axis of the AI-model (green) and GT (red) with a grayscale COVID-19 CT slice in the background. Four models are shown in the rows in the following order: SegNet, VGG-SegNet, ResNet-SegNet, and NIH.

4.2.2. Performance Metrics for the Lung Area Error

Cumulative Plot for Lung Area Error

Cumulative frequency analysis is the analysis of the frequency of occurrence of the LAE against the reference value. Figure 16 below shows the cumulative distribution of left and right LAE over 3000 images corresponding to the test dataset (60% of 5000 scans). A cumulative frequency threshold of 80% was chosen to compare the four different models: SegNet, VGG-SegNet, ResNet-SegNet, and NIH. Note that, even though the AI models perform exceptionally well, the best of all was the HDL model ResNet-SegNet with 80% LAE for left, right, and combined left/right as 8.3 mm2, 5.8 mm2, and 7.5 mm2, respectively. Furthermore, note that the NIH showed the lowest performance at an 80% threshold (19.83 mm2, 13.12 mm2, and 19.22 mm2, corresponding to the left lung, right lung, and mean of the left lung and right lung). It is also interesting to note that the right LAE was lower than the left LAE for all four models.
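The 80% cumulative-frequency readout used above can be sketched as follows (the error values here are hypothetical, not the study's data):

```python
# Sketch of the 80% cumulative-frequency readout: the smallest error value
# below which the chosen fraction of per-image lung area errors fall.

def error_at_cumfreq(errors, threshold=0.80):
    ordered = sorted(errors)
    k = max(1, int(round(threshold * len(ordered))))
    return ordered[k - 1]

# Hypothetical per-image area errors (mm^2) for illustration only.
errors = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0]
e80 = error_at_cumfreq(errors)   # 80% of images have error <= 8.0
```

A lower value at the 80% threshold means the model keeps its area error small on the bulk of the test images, which is the basis of the model ranking above.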

Lung Area Error

Figure 17 shows that the LAE between the AI models and GT for the left, right, and mean areas increased in the order ResNet-SegNet < VGG-SegNet < SegNet < NIH. This is consistent with Figure 16 and demonstrates that the AI models (SDL and HDL) are superior to the conventional NIH model.

Correlation Plots for Lung Area

Figure 18 (4 × 3 matrix) shows the coefficient of correlation (CC) plot for AI-model lung area vs. GT area using the four models (rows) corresponding to left, right, and mean (columns). The CC values of all the models ranged from 0.91 to 0.98 (p-value < 0.0001). The 3D bar plot is shown in Figure 19 for the left, right, and mean lungs. For the left lung, the decreasing order of the CC performance was ResNet-SegNet, VGG-SegNet, SegNet, and NIH. Thus, HDL models were better than the SDL model. This order remained the same for the left lung, right lung, and mean lung area. These results were consistent with Figure 16, Figure 17 and Figure 18.

Jaccard Index and Dice Similarity for Lung Area

Figure 20 below shows the bar chart for the four models using the Jaccard index (JI) (blue) and Dice similarity (DS) (red). The performance of the models (best to worst) had the following order ResNet-SegNet > VGG-SegNet > SegNet > NIH, consistent with Figure 16, Figure 17, Figure 18 and Figure 19. The cumulative frequency for JI and DS for the four models is shown in Figure 21 (left column for JI and right column for DS). Figure 21 shows that 80% of the CT scans had a JI of 0.94, 0.96, 0.97, and 0.93, respectively, for the four models. The DS for 80% of the CT scans had DS of 0.97, 0.97, 0.98, and 0.96, respectively for the four models SegNet, VGG-SegNet, ResNet-SegNet, and NIH conventional models.
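The two overlap measures reported above can be sketched for a pair of toy binary masks:

```python
# Sketch of the Jaccard index (JI) and Dice similarity (DS) between an
# AI-predicted binary mask and the GT mask:
#   JI = |A ∩ B| / |A ∪ B|,  DS = 2|A ∩ B| / (|A| + |B|)

def ji_ds(pred, gt):
    inter = union = psum = gsum = 0
    for prow, grow in zip(pred, gt):
        for p, g in zip(prow, grow):
            inter += 1 if (p and g) else 0
            union += 1 if (p or g) else 0
            psum += p
            gsum += g
    return inter / union, 2 * inter / (psum + gsum)

pred = [[1, 1, 0],
        [0, 1, 0]]
gt   = [[1, 1, 0],
        [0, 1, 1]]
ji, ds = ji_ds(pred, gt)   # intersection 3, union 4 -> JI 0.75, DS ~0.857
```

DS is always at least as large as JI for the same pair of masks, which is why the DS values reported above sit slightly higher than the JI values.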

Bland–Altman Plot for Lung Area

A Bland–Altman plot is used to show the concurrency between two methods using the same variable. We follow the protocol for Bland–Altman computation based on our previous paradigms [40,51]. Figure 22 below shows the Bland–Altman plot by computing (i) mean and (ii) difference between AI model area and GT area corresponding to 3000 COVID-19 test CT scans. The strongest concentration can be seen for the HDL models over the SDL model and NIH models. This is consistent with previous plots.
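The quantities plotted in a Bland–Altman analysis can be sketched as follows (the area values are hypothetical, for illustration only):

```python
import statistics

# Sketch of the Bland-Altman quantities: per-image mean and difference of
# the AI and GT areas, plus the bias (mean difference) and 95% limits of
# agreement (bias +/- 1.96 SD of the differences).

def bland_altman(ai_areas, gt_areas):
    means = [(a + g) / 2 for a, g in zip(ai_areas, gt_areas)]
    diffs = [a - g for a, g in zip(ai_areas, gt_areas)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return means, diffs, bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical per-image areas (mm^2) for illustration only.
ai = [100.0, 102.0, 98.0, 101.0]
gt = [99.0, 100.0, 99.0, 100.0]
means, diffs, bias, loa = bland_altman(ai, gt)
```

A tight spread of the differences around the bias line (narrow limits of agreement) is what "strongest concentration" refers to above.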

ROC Plots for Lung Area

Figure 23 shows ROC plots for four models (rows) corresponding to left, right, and mean (columns). The corresponding AUC values, along with their p-values, are shown as well. All the AUCs were in the range of 0.96 to 0.99 (p-value < 0.0001). The highest performance was for ResNet-SegNet for the left, right, and mean lungs and given as 0.994, 0.974, and 0.987, respectively. The performance of the models for best to worst had the following order: ResNet-SegNet > VGG-SegNet > SegNet > NIH.

4.2.3. Performance Evaluation Using Lung Long Axis Error

Cumulative Frequency Plot for Lung Long Axis Error

Figure 24 shows the cumulative distribution of left and right LLAE over the 3000 images of the test dataset. The cumulative frequency threshold of 80% was chosen to compare the four models. The corresponding bar charts for the LLAE using the four models are shown in Figure 25. Decreasing error is shown from right to left, showing the performance of ResNet-SegNet, VGG-SegNet, SegNet, and NIH, which is consistent with the area error in Section 4.2.2.

Correlation Plot for Lung Long Axis Error

Figure 26 shows the correlation plots between the lung long axes of the models vs. GT for the left, right, and mean. The CC varied from 0.90 to 0.98 (p < 0.0001). The order of performance was ResNet-SegNet, VGG-SegNet, SegNet, and NIH. The bar chart is shown in Figure 27, and the performance follows nearly the same order: ResNet-SegNet > VGG-SegNet > SegNet > NIH.

Bland–Altman Plots for Lung Long Axis Error

Figure 28 shows the Bland–Altman (BA) plot for the lung long axis for four models corresponding to left, right, and mean. The order of the performance was ResNet-SegNet, VGG-SegNet, SegNet, and NIH. This can be seen from the spread of the deviations.

The Figure of Merit Using Lung Area

The figure of merit (FoM) is defined in terms of the central tendency of the error. Let Aai(m, n) and Agt(n) represent the lung area using AI model ‘m’ and GT, respectively, for image ‘n’. With N as the total number of CT scans, the corresponding mean AI area for model m and the mean GT area can be represented as Āai(m) and Āgt, as defined in Equation (3), and the FoM can be expressed as Equation (5).
Table 1 shows the FoM values for the models against the GT. The order of performance for the models is ResNet-SegNet, VGG-SegNet, SegNet, and NIH:
FoM(m) = 100 − |(Āai(m) − Āgt) / Āgt| × 100,
where Āai(m) = (1/N) Σn=1..N Aai(m, n) and Āgt = (1/N) Σn=1..N Agt(n).
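A minimal sketch of Equation (5) on hypothetical per-image areas:

```python
# Sketch of the figure of merit in Equation (5): 100 minus the absolute
# percentage error between the mean AI area and the mean GT area.

def figure_of_merit(ai_areas, gt_areas):
    mean_ai = sum(ai_areas) / len(ai_areas)
    mean_gt = sum(gt_areas) / len(gt_areas)
    return 100.0 - abs(mean_ai - mean_gt) / mean_gt * 100.0

ai = [98.0, 102.0, 101.0]      # hypothetical per-image areas (mm^2)
gt = [100.0, 100.0, 100.0]
fom = figure_of_merit(ai, gt)  # mean AI ~100.33 vs GT 100 -> FoM ~99.67
```

A FoM near 100 means the model's mean area closely tracks the GT mean area, which is why values above 94% indicate strong agreement.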

Percentage Improvement of AI Models against the NIH Model Using the Lung Area and Lung Long Axis

This performance measure shows the improvement of the AI models against the NIH, the so-called benchmarking solution (Table 2).

5. Discussion

This study presented the COVLIAS 1.0 system, which consists of two hybrid deep learning models and one solo deep learning artificial intelligence model for lung segmentation in COVID-19 CT lung scans. The system was compared against the conventional NIH-released model. Seventy-two COVID-19 patients were considered for the study, contributing 5000 CT scans. The cross-validation protocol was implemented using 40% training and 60% testing data. The performance of the system was computed using three main kinds of metrics. The first consisted of two visualization components: (i) visualization of the model boundary against the ground truth boundary with different boundary colors; and (ii) visualization of the superimposition of the segmented model lung region against the GT lung region with different colors. The second set of performance evaluations was based on the LAE and consisted of (i) the cumulative plot of the LAE; (ii) the Bland–Altman plot for the LAE; (iii) JI and DS for the lung region; and (iv) ROC curves for AI-based model performance. The third set was based on the LLAE and consisted of (i) the cumulative plot of the LLAE; (ii) the correlation coefficient of the LLAE; and (iii) the Bland–Altman plot for the LLAE. In all the metrics, HDL performed better than SDL, and both were better than the conventional NIH model, retaining the order ResNet-SegNet > VGG-SegNet > SegNet > NIH. Note that the COVLIAS 1.0 system (AtheroPoint™, Roseville, CA, USA) was designed in the Python language, unlike other tools typically designed in MATLAB. COVLIAS 1.0 uses solo deep learning and hybrid deep learning solutions for lung segmentation of CT lung COVID-19 scans. The Python system was developed in a Windows 10 environment, while the graphical user interface was designed using JavaScript. COVLIAS 1.0 adopts a multitasking, multithreaded architecture.

5.1. Benchmarking

Several studies have been published on deep learning systems based on chest CT imaging that distinguish COVID-19 cases from non-COVID-19 cases (which may include normal and abnormal cases) and segment the COVID-19 region in the lungs [56,57,58,59]. However, most of them do not include lung segmentation, boundary estimation, or lung region mask overlays compared against GT. Table 3 benchmarks four studies that focused on lung segmentation: Priya et al. (2021) [52], Saood et al. (2021) [53], Paluru et al. (2020) [54], and Cai et al. (2020) [55]. The table also compares them against the proposed study by Suri et al. (2021). Cai et al. [55] used a deep learning-based UNet model to segment the lung and showed the boundary of the segmented lung with a DSC of 0.98 and a Jaccard index of 0.96. Xiao et al. [60] demonstrated a CNN architecture for lung segmentation and showed a DSC of 0.98. Our study used a hybrid deep learning AI solution for computing the lung region on a database of 5000 CT scans, demonstrating DS values of 0.97, 0.97, 0.98, and 0.96 for the four models. The JI values for the four models were 0.94, 0.96, 0.97, and 0.93, respectively. The AUC values for the four models were 0.96, 0.96, 0.97, and 0.98, respectively, the highest among the previous methods. The accuracies of the three proposed AI models were 98%, 98%, and 99%, respectively. Overall, the proposed study offers a first-time hybrid solution in conjunction with solo deep learning, benchmarked against the NIH-based approach, supporting our hypothesis that hybrid deep learning artificial intelligence models are superior to solo deep learning models for COVID-19 lung segmentation.
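The Jaccard index and Dice similarity quoted above are standard overlap measures between a predicted mask and its GT; a minimal sketch:

```python
import numpy as np

def jaccard_dice(ai_mask, gt_mask):
    """Jaccard index (JI) and Dice similarity (DS) between a
    binary AI segmentation and its ground-truth (GT) mask."""
    ai, gt = ai_mask.astype(bool), gt_mask.astype(bool)
    inter = np.logical_and(ai, gt).sum()
    union = np.logical_or(ai, gt).sum()
    ji = inter / union
    ds = 2 * inter / (ai.sum() + gt.sum())
    return ji, ds

# toy masks: two 16-pixel squares shifted by one column
gt = np.zeros((8, 8), int); gt[2:6, 2:6] = 1
ai = np.zeros((8, 8), int); ai[2:6, 3:7] = 1

ji, ds = jaccard_dice(ai, gt)   # intersection = 12, union = 20
```

The two are monotonically related (DS = 2·JI / (1 + JI)), which is why model rankings by DS and JI agree in Table 3.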

5.2. A Special Note on the HDL Model

Our results show that HDL is superior to SDL, and both are superior to the conventional NIH model. HDL takes advantage of combining two SDL models. Several studies in the image classification area have shown HDL to be superior to SDL. El-Kenawy et al. [61] used Stochastic Fractal Search (SFS) combined with the Whale Optimization Algorithm (WOA) as part of the HDL, and the authors showed superior results compared to SDL models such as AlexNet, VGGNet16, VGGNet19, GoogleNet, and ResNet50. Vijayalakshmi et al. [62] implemented an HDL model in which the authors combined SVM with VGG16 and showed superior results compared to SDL models such as VGG16 or VGG19.
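A defining trait of the SegNet decoder, shared by both HDL models here, is max-unpooling: the decoder upsamples by placing values back at the argmax positions recorded by the encoder's pooling layers. A toy numpy sketch of that mechanism (illustrative only; the actual models operate on learned multi-channel feature maps):

```python
import numpy as np

def maxpool_with_indices(x):
    """2x2 max pooling that also records argmax positions,
    as the SegNet encoder does."""
    h, w = x.shape
    pooled = np.zeros((h // 2, w // 2))
    idx = np.zeros((h // 2, w // 2), dtype=int)
    for i in range(0, h, 2):
        for j in range(0, w, 2):
            win = x[i:i + 2, j:j + 2]
            k = int(win.argmax())            # flat position 0..3 in window
            pooled[i // 2, j // 2] = win.flat[k]
            idx[i // 2, j // 2] = (i + k // 2) * w + (j + k % 2)
    return pooled, idx

def max_unpool(pooled, idx, shape):
    """SegNet-style decoder upsampling: place each pooled value
    back at its recorded encoder position, zeros elsewhere."""
    out = np.zeros(np.prod(shape))
    out[idx.ravel()] = pooled.ravel()
    return out.reshape(shape)

x = np.array([[1., 3., 0., 2.],
              [4., 2., 1., 0.],
              [0., 1., 5., 0.],
              [2., 0., 0., 1.]])
p, idx = maxpool_with_indices(x)
up = max_unpool(p, idx, x.shape)
```

In the hybrid models, the VGG-19 or ResNet-50 backbone replaces the plain SegNet encoder, while the index-based decoder recovers boundary localization lost during pooling.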

5.3. A Note on Comparison between the Proposed AI and NIH Conventional Models

The fundamental difference between the proposed AI models and the NIH model is the role of feature extraction during the lung segmentation process. The feature extraction step is the most vital component of the entire system. It is accomplished automatically in the AI models, unlike the semi-automatic implementation in the NIH model. This automation provides considerable time savings compared to the conventional NIH model. Second, the NIH model has several user-controlled sub-systems, such as seed point selection and image format conversion from High Dynamic Range (HDR) to Neuroimaging Informatics Technology Initiative (NII/NIfTI) to Portable Network Graphics (PNG) format. Executing these steps on 5000 CT slices is very tedious and time-consuming. This is where the AI system comes in handy: it is time-saving and efficient. Our analysis showed that an AI model typically takes one second on any test CT slice, unlike the NIH model, which took 10 min for a single slice. Note that the NIH model required the user to input the entire CT volume of a patient to segment the lungs in the CT scans. The processing time for a volume of interest, typically consisting of 60 selected slices, was ~20 min depending on the hardware configuration. This included uploading the volumes, semi-automated segmentation, format conversion, and saving all the segmented results.
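The per-volume time saving follows directly from the figures reported above (~1 s per slice for the AI models, ~20 min end-to-end for the NIH pipeline on a typical 60-slice volume of interest):

```python
# Back-of-the-envelope per-volume comparison using the timings
# reported in the text; both figures are approximate.
slices_per_volume = 60

ai_minutes = slices_per_volume * 1 / 60    # ~1 min per volume at 1 s/slice
nih_minutes = 20                           # reported end-to-end NIH time

speedup = nih_minutes / ai_minutes         # roughly 20x faster per volume
```
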

5.4. Strength, Weakness, and Extension

The study presented two HDL models, one SDL model, and one conventional NIH model for lung segmentation in COVID-19 CT lungs for a cohort of 5000 images. The results showed consistency: HDL models were superior to SDL models. The online system took less than 1 s per image.
Even though COVLIAS 1.0 is the first of its kind in which HDL was designed and developed, more validation involving multicenter studies is needed [39,63]. Furthermore, cyclic cross-validation needs to be conducted for further comparisons [64,65]. The system can be extended to estimate COVID-19 severity in the lung region while keeping the services cost-effective [66]. Variability analysis can also be conducted in which multiple tracers are involved [40]. COVLIAS 1.0 is a pilot study with encouraging results. We intend to pursue more CV protocols with larger data sizes and more models in a big data framework [67] in the future.
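The K5 protocol with a 40/60 train/test split (K5-r46 in the symbol table) could be generated along the following lines; this is an assumed construction for illustration, not necessarily the authors' exact implementation:

```python
import random

def k5_r46_splits(n_images, k=5, train_frac=0.4, seed=0):
    """Sketch of a K5 cross-validation protocol with a 40/60
    train/test split: each of the k runs draws a fresh 40% training
    subset, and the remaining 60% is held out for testing."""
    rng = random.Random(seed)
    idx = list(range(n_images))
    n_train = int(train_frac * n_images)
    splits = []
    for _ in range(k):
        rng.shuffle(idx)
        splits.append((sorted(idx[:n_train]), sorted(idx[n_train:])))
    return splits

# 5000-image cohort as in the study: 2000 training / 3000 testing per run
splits = k5_r46_splits(5000)
```

A cyclic variant would instead rotate five fixed 20% blocks so that every image appears in training at least once; either way, per-run metrics are averaged across the k runs.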
Comorbidity is an essential factor that has been linked to COVID-19 from the beginning. Recently, a study showed the effect of comorbidity on AI model design [6]. In our study, the patients had no prior issues before being admitted to Novara Hospital, Italy, during the pandemic. In some cases, however, the patients may have had concomitant or incidental findings such as scars or small nodules. The scope of this study did not extend to incorporating such incidental findings into the AI model; doing so is a possible future direction requiring more sophisticated data collection and AI modeling. Furthermore, the COVLIAS 1.0 system can be extended to other kinds of pneumonia, interstitial lung diseases, and cancer [25,68]. Lastly, the AI model can be compared with conventional models, as attempted in previous applications [69].

6. Conclusions

The study presented four kinds of lung segmentation methods for COVID-19 CT lung scans under COVLIAS 1.0 (AtheroPoint™, Roseville, CA, USA). It consisted of three AI-based models: two hybrid deep learning (HDL) models and one solo deep learning (SDL) model. These three AI models were benchmarked against the NIH-based conventional model. COVLIAS 1.0 used a database of 5000 COVID-19 CT images for the cross-validation protocol, which used a 4:6 training-to-testing ratio with validation data.
Comprehensive performance metrics were designed: (i) visualization of the model boundary against the ground truth boundary in different boundary colors; (ii) visualization of the model lung region superimposed on the GT lung region in different colors; (iii) performance evaluation based on the LAE, consisting of (a) the cumulative plot of the lung area error; (b) the Bland–Altman plot for the lung area error; (c) the Jaccard index and Dice similarity for the lung region; and (d) ROC curves for AI-based model performance; and (iv) performance evaluation based on the lung long axis error, consisting of (a) the cumulative plot of the lung long axis error; (b) the correlation coefficient of the lung long axis error; and (c) the Bland–Altman plot for the lung long axis error.
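The Bland–Altman components above reduce to a bias (mean difference) and 95% limits of agreement on the paired AI/GT measurements; a minimal sketch with toy area values:

```python
import numpy as np

def bland_altman(a, b):
    """Bland-Altman statistics for two paired measurements
    (e.g. AI vs GT lung areas): mean difference (bias) and the
    95% limits of agreement at bias +/- 1.96 SD of the differences."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# toy AI vs GT lung areas (cm^2) for five slices
ai_area = [101.0, 98.5, 105.0, 99.0, 102.5]
gt_area = [100.0, 99.0, 104.0, 100.0, 101.0]

bias, lo, hi = bland_altman(ai_area, gt_area)
```

The plot itself scatters each pair's difference against its mean and draws horizontal lines at the bias and the two limits.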
We further conclude that the AI-based models were better than the conventional NIH model and that HDL was superior to SDL. The order of performance for all metrics was ResNet-SegNet > VGG-SegNet > SegNet > NIH. The COVLIAS 1.0 system is currently under evaluation by radiology departments at friendly sites.

Author Contributions

Conceptualization: J.S.S., M.K., N.N.K. and M.M.; Data Curation: M.C., L.S., G.F., M.T., K.V., P.R.K., N.F., Z.R. and A.G.; Formal Analysis: J.S.S. and M.K.; Investigation: J.S.S., M.K., I.M.S., P.S.C., A.M.J., S.M., J.R.L., G.P., D.W.S., P.P.S., G.T., A.P., D.P.M., V.A., J.S.T., M.A.-M., S.K.D., A.N., A.S. and A.A.; Methodology: J.S.S., S.A. and A.B.; Project Administration: J.S.S. and M.K.; Deep Learning Computing Resources: S.K.G.; Software Design and Usage: S.A. and V.K.; Software Verification: J.S.S., L.S. and M.K.; Supervision: J.S.S., M.K. and S.N.; Scientific Validation: J.S.S. and S.A.; Clinical Validation and Discussions: J.S.S., M.K. and L.S.; Visualization: S.A.; Writing—Original Draft: J.S.S. and S.A.; Writing—Review and Editing: J.S.S., M.K., L.S., R.P., S.K.G., I.M.S., M.T., P.S.C., A.M.J., N.N.K., K.V., S.M., J.R.L., G.P., M.M., D.W.S., A.B., P.P.S., G.T., A.P., D.P.M., V.A., G.D.K., M.A.-M., S.K.D., A.N., A.S., V.R., M.F., N.F., A.G. and S.N. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Acronym Table

SN | Acronym | Description
1 | ACC | Accuracy
2 | AE | Area Error
3 | AI | Artificial Intelligence
4 | ARDS | Acute Respiratory Distress Syndrome
5 | AUC | Area Under the Curve
6 | BA | Bland–Altman
7 | BE | Boundary Error
8 | CC | Correlation Coefficient
9 | CE | Cross Entropy
10 | COVID | Coronavirus Disease
11 | COVLIAS | COVID Lung Image Analysis System
12 | CT | Computed Tomography
13 | DL | Deep Learning
14 | DS | Dice Similarity
15 | FoM | Figure of Merit
16 | GT | Ground Truth
17 | HDL | Hybrid Deep Learning
18 | IS | Image Size
19 | JI | Jaccard Index
20 | LAE | Lung Area Error
21 | LLAE | Lung Long Axis Error
22 | NIH | National Institutes of Health
23 | PC | Pixel Counting
24 | RF | Resolution Factor
25 | ROC | Receiver Operating Characteristic
26 | SDL | Solo Deep Learning
27 | VGG | Visual Geometry Group
28 | VS | Variability Studies
29 | WHO | World Health Organization

Symbol Table

SN | Symbol | Description
1 | L_CE | Cross-entropy loss
2 | L_DSC | Dice similarity coefficient loss
3 | m | Model number used for segmentation, out of the total number of models M
4 | n | Image scan number, out of the total number of images N
5 | Ā_ai(m) | Mean estimated lung area over all images using AI model 'm'
6 | A_ai(m, n) | Estimated lung area using AI model 'm' on image 'n'
7 | A_gt(n) | GT lung area for image 'n'
8 | Ā_gt | Mean ground truth area over all N images in the database
9 | FoM(m) | Figure-of-merit for segmentation model 'm'
10 | JI | Mean Jaccard index for a specific segmentation model
11 | DSC | Dice similarity coefficient for a specific segmentation model
12 | N_p | Sample size required, computed using power analysis
13 | MoE | Margin of error
14 | TP, TN | True positive and true negative
15 | FP, FN | False positive and false negative
16 | y_i | GT label
17 | a_i | SoftMax classifier probability
18 | Y_p | Ground truth image
19 | Ŷ_p | Estimated image
20 | P | Total number of pixels in an image in the x, y-directions
21 | z | Z-score from the standard z-table
22 | K5-r46 | Cross-validation protocol with 40% training and 60% testing
Deep Learning Segmentation Architectures
23 | SegNet | SDL model for lung segmentation with reduced learning parameters
24 | VGG-SegNet | HDL model designed by fusing the VGG-19 and SegNet architectures
25 | ResNet-SegNet | HDL model designed by fusing the ResNet-50 and SegNet architectures
Conventional Model
26 | NIH | National Institutes of Health segmentation model

References

  1. WHO Coronavirus (COVID-19) Dashboard. Available online: https://covid19.who.int/ (accessed on 28 July 2021).
  2. Saba, L.; Gerosa, C.; Fanni, D.; Marongiu, F.; La Nasa, G.; Caocci, G.; Barcellona, D.; Balestrieri, A.; Coghe, F.; Orru, G.; et al. Molecular pathways triggered by COVID-19 in different organs: ACE2 receptor-expressing cells under attack? A review. Eur. Rev. Med. Pharmacol. Sci. 2020, 24, 12609–12622. [Google Scholar] [CrossRef]
  3. Cau, R.; Bassareo, P.P.; Mannelli, L.; Suri, J.S.; Saba, L. Imaging in COVID-19-related myocardial injury. Int. J. Cardiovasc. Imaging 2021, 37, 1349–1360. [Google Scholar] [CrossRef] [PubMed]
  4. Viswanathan, V.; Puvvula, A.; Jamthikar, A.D.; Saba, L.; Johri, A.M.; Kotsis, V.; Khanna, N.N.; Dhanjil, S.K.; Majhail, M.; Misra, D.P. Bidirectional link between diabetes mellitus and coronavirus disease 2019 leading to cardiovascular disease: A narrative review. World J. Diabetes 2021, 12, 215. [Google Scholar] [CrossRef] [PubMed]
  5. Cau, R.; Pacielli, A.; Fatemeh, H.; Vaudano, P.; Arru, C.; Crivelli, P.; Stranieri, G.; Suri, J.S.; Mannelli, L.; Conti, M.; et al. Complications in COVID-19 patients: Characteristics of pulmonary embolism. Clin. Imaging 2021, 77, 244–249. [Google Scholar] [CrossRef] [PubMed]
  6. Suri, J.S.; Agarwal, S.; Gupta, S.K.; Puvvula, A.; Biswas, M.; Saba, L.; Bit, A.; Tandel, G.S.; Agarwal, M.; Patrick, A. A narrative review on characterization of acute respiratory distress syndrome in COVID-19-infected lungs using artificial intelligence. Comput. Biol. Med. 2021, 130, 104210. [Google Scholar] [CrossRef]
  7. Cau, R.; Falaschi, Z.; Paschè, A.; Danna, P.; Arioli, R.; Arru, C.D.; Zagaria, D.; Tricca, S.; Suri, J.S.; Karla, M.K. Computed tomography findings of COVID-19 pneumonia in Intensive Care Unit-patients. J. Public Health Res. 2021, 2019. [Google Scholar] [CrossRef]
  8. El-Baz, A.; Suri, J. Lung Imaging and CADx; CRC Press: Boca Raton, FL, USA, 2019. [Google Scholar]
  9. El-Baz, A.; Suri, J.S. Lung Imaging and Computer Aided Diagnosis; CRC Press: Boca Raton, FL, USA, 2011. [Google Scholar]
  10. Saba, L.; Suri, J.S. Multi-Detector CT Imaging: Principles, Head, Neck, and Vascular Systems; CRC Press: Boca Raton, FL, USA, 2013; Volume 1. [Google Scholar]
  11. Saba, L.; Agarwal, M.; Patrick, A.; Puvvula, A.; Gupta, S.K.; Carriero, A.; Laird, J.R.; Kitas, G.D.; Johri, A.M.; Balestrieri, A.; et al. Six artificial intelligence paradigms for tissue characterisation and classification of non-COVID-19 pneumonia against COVID-19 pneumonia in computed tomography lungs. Int. J. Comput. Assist. Radiol. Surg. 2021, 16, 423–434. [Google Scholar] [CrossRef]
  12. Saba, L.; Biswas, M.; Suri, H.S.; Viskovic, K.; Laird, J.R.; Cuadrado-Godia, E.; Nicolaides, A.; Khanna, N.; Viswanathan, V.; Suri, J.S.J.C.d.; et al. Ultrasound-based carotid stenosis measurement and risk stratification in diabetic cohort: A deep learning paradigm. Cardiovasc. Diagn. Ther. 2019, 9, 439. [Google Scholar] [CrossRef]
  13. Biswas, M.; Kuppili, V.; Saba, L.; Edla, D.R.; Suri, H.S.; Cuadrado-Godia, E.; Laird, J.R.; Marinhoe, R.T.; Sanches, J.M.; Nicolaides, A.; et al. State-of-the-art review on deep learning in medical imaging. Front. Biosci. 2019, 24, 392–426. [Google Scholar]
  14. Saba, L.; Biswas, M.; Kuppili, V.; Cuadrado Godia, E.; Suri, H.S.; Edla, D.R.; Omerzu, T.; Laird, J.R.; Khanna, N.N.; Mavrogeni, S.; et al. The present and future of deep learning in radiology. Eur. J. Radiol. 2019, 114, 14–24. [Google Scholar] [CrossRef]
  15. El-Baz, A.; Suri, J. Machine Learning in Medicine; CRC Press: Boca Raton, FL, USA, 2021. [Google Scholar]
  16. Suri, J.S.; Rangayyan, R.M. Recent Advances in Breast Imaging, Mammography, and Computer-Aided Diagnosis of Breast Cancer; SPIE Press: Bellingham, WA, USA, 2006. [Google Scholar]
  17. Biswas, M.; Kuppili, V.; Edla, D.R.; Suri, H.S.; Saba, L.; Marinhoe, R.T.; Sanches, J.M.; Suri, J.S. Symtosis: A liver ultrasound tissue characterization and risk stratification in optimized deep learning paradigm. Comput. Methods Programs Biomed. 2018, 155, 165–177. [Google Scholar] [CrossRef] [PubMed]
  18. Acharya, U.R.; Sree, S.V.; Ribeiro, R.; Krishnamurthi, G.; Marinho, R.T.; Sanches, J.; Suri, J.S. Data mining framework for fatty liver disease classification in ultrasound: A hybrid feature extraction paradigm. Med. Phys. 2012, 39, 4255–4264. [Google Scholar] [CrossRef] [Green Version]
  19. Acharya, U.R.; Sree, S.V.; Krishnan, M.M.R.; Molinari, F.; Garberoglio, R.; Suri, J.S. Non-invasive automated 3D thyroid lesion classification in ultrasound: A class of ThyroScan™ systems. Ultrasonics 2012, 52, 508–520. [Google Scholar] [CrossRef]
  20. Acharya, U.R.; Swapna, G.; Sree, S.V.; Molinari, F.; Gupta, S.; Bardales, R.H.; Witkowska, A.; Suri, J.S. A review on ultrasound-based thyroid cancer tissue characterization and automated classification. Technol. Cancer Res. Treat. 2014, 13, 289–301. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  21. Molinari, F.; Mantovani, A.; Deandrea, M.; Limone, P.; Garberoglio, R.; Suri, J.S. Characterization of single thyroid nodules by contrast-enhanced 3-D ultrasound. Ultrasound Med. Biol. 2010, 36, 1616–1625. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  22. Shrivastava, V.K.; Londhe, N.D.; Sonawane, R.S.; Suri, J.S. Computer-aided diagnosis of psoriasis skin images with HOS, texture and color features: A first comparative study of its kind. Comput. Methods Programs Biomed. 2016, 126, 98–109. [Google Scholar] [CrossRef] [PubMed]
  23. Shrivastava, V.K.; Londhe, N.D.; Sonawane, R.S.; Suri, J.S. Reliable and accurate psoriasis disease classification in dermatology images using comprehensive feature space in machine learning paradigm. Expert Syst. Appl. 2015, 42, 6184–6195. [Google Scholar] [CrossRef]
  24. Mookiah, M.R.K.; Acharya, U.R.; Martis, R.J.; Chua, C.K.; Lim, C.M.; Ng, E.; Laude, A. Evolutionary algorithm based classifier parameter tuning for automatic diabetic retinopathy grading: A hybrid feature extraction approach. Knowl. Based Syst. 2013, 39, 9–22. [Google Scholar] [CrossRef]
  25. Than, J.C.; Saba, L.; Noor, N.M.; Rijal, O.M.; Kassim, R.M.; Yunus, A.; Suri, H.S.; Porcu, M.; Suri, J.S. Lung disease stratification using amalgamation of Riesz and Gabor transforms in machine learning framework. Comput. Biol. Med. 2017, 89, 197–211. [Google Scholar] [CrossRef] [PubMed]
  26. Acharya, U.R.; Sree, S.V.; Krishnan, M.M.R.; Krishnananda, N.; Ranjan, S.; Umesh, P.; Suri, J.S. Automated classification of patients with coronary artery disease using grayscale features from left ventricle echocardiographic images. Comput. Methods Programs Biomed. 2013, 112, 624–632. [Google Scholar] [CrossRef]
  27. Araki, T.; Ikeda, N.; Shukla, D.; Jain, P.K.; Londhe, N.D.; Shrivastava, V.K.; Banchhor, S.K.; Saba, L.; Nicolaides, A.; Shafique, S. PCA-based polling strategy in machine learning framework for coronary artery disease risk assessment in intravascular ultrasound: A link between carotid and coronary grayscale plaque morphology. Comput. Methods Programs Biomed. 2016, 128, 137–158. [Google Scholar] [CrossRef]
  28. Acharya, R.U.; Faust, O.; Alvin, A.P.; Sree, S.V.; Molinari, F.; Saba, L.; Nicolaides, A.; Suri, J.S. Symptomatic vs. asymptomatic plaque classification in carotid ultrasound. J. Med. Syst. 2012, 36, 1861–1871. [Google Scholar] [CrossRef] [PubMed]
  29. Acharya, U.R.; Faust, O.; Sree, S.V.; Molinari, F.; Saba, L.; Nicolaides, A.; Suri, J.S. An accurate and generalized approach to plaque characterization in 346 carotid ultrasound scans. IEEE Trans. Instrum. Meas. 2011, 61, 1045–1053. [Google Scholar] [CrossRef]
  30. Suri, J.S. Imaging Based Symptomatic Classification and Cardiovascular Stroke Risk Score Estimation. Google Patent 13053971, 28 July 2011. [Google Scholar]
  31. Cuadrado-Godia, E.; Dwivedi, P.; Sharma, S.; Ois Santiago, A.; Roquer Gonzalez, J.; Balcells, M.; Laird, J.; Turk, M.; Suri, H.S.; Nicolaides, A.; et al. Cerebral Small Vessel Disease: A Review Focusing on Pathophysiology, Biomarkers, and Machine Learning Strategies. J. Stroke 2018, 20, 302–320. [Google Scholar] [CrossRef]
  32. Maniruzzaman, M.; Rahman, M.J.; Al-MehediHasan, M.; Suri, H.S.; Abedin, M.M.; El-Baz, A.; Suri, J.S. Accurate diabetes risk stratification using machine learning: Role of missing value and outliers. J. Med. Syst. 2018, 42, 1–17. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  33. Martis, R.J.; Acharya, U.R.; Prasad, H.; Chua, C.K.; Lim, C.M.; Suri, J.S. Application of higher order statistics for atrial arrhythmia classification. Biomed. Signal Process. Control. 2013, 8, 888–900. [Google Scholar] [CrossRef]
  34. Suri, J.S.; Puvvula, A.; Biswas, M.; Majhail, M.; Saba, L.; Faa, G.; Singh, I.M.; Oberleitner, R.; Turk, M.; Chadha, P.S.; et al. COVID-19 pathways for brain and heart injury in comorbidity patients: A role of medical imaging and artificial intelligence-based COVID severity classification: A review. Comput. Biol. Med. 2020, 124, 103960. [Google Scholar] [CrossRef]
  35. Suri, J.S.; Puvvula, A.; Majhail, M.; Biswas, M.; Jamthikar, A.D.; Saba, L.; Faa, G.; Singh, I.M.; Oberleitner, R.; Turk, M. Integration of cardiovascular risk assessment with COVID-19 using artificial intelligence. Rev. Cardiovasc. Med. 2020, 21, 541–560. [Google Scholar] [PubMed]
  36. Agarwal, M.; Saba, L.; Gupta, S.K.; Carriero, A.; Falaschi, Z.; Paschè, A.; Danna, P.; El-Baz, A.; Naidu, S.; Suri, J.S. A novel block imaging technique using nine artificial intelligence models for COVID-19 disease classification, characterization and severity measurement in lung computed tomography scans on an Italian cohort. J. Med. Syst. 2021, 45, 1–30. [Google Scholar] [CrossRef]
  37. Mansoor, A.; Bagci, U.; Foster, B.; Xu, Z.; Douglas, D.; Solomon, J.M.; Udupa, J.K.; Mollura, D.J. CIDI-lung-seg: A single-click annotation tool for automatic delineation of lungs from CT scans. In Proceedings of the 2014 36th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Chicago, IL, USA, 26–30 August 2014; pp. 1087–1090. [Google Scholar]
  38. Saba, L.; Banchhor, S.K.; Araki, T.; Viskovic, K.; Londhe, N.D.; Laird, J.R.; Suri, H.S.; Suri, J.S. Intra- and inter-operator reproducibility of automated cloud-based carotid lumen diameter ultrasound measurement. Indian Heart J. 2018, 70, 649–664. [Google Scholar] [CrossRef]
  39. Saba, L.; Banchhor, S.K.; Suri, H.S.; Londhe, N.D.; Araki, T.; Ikeda, N.; Viskovic, K.; Shafique, S.; Laird, J.R.; Gupta, A.; et al. Accurate cloud-based smart IMT measurement, its validation and stroke risk stratification in carotid ultrasound: A web-based point-of-care tool for multicenter clinical trial. Comput. Biol. Med. 2016, 75, 217–234. [Google Scholar] [CrossRef] [PubMed]
  40. Saba, L.; Than, J.C.; Noor, N.M.; Rijal, O.M.; Kassim, R.M.; Yunus, A.; Ng, C.R.; Suri, J.S. Inter-observer Variability Analysis of Automatic Lung Delineation in Normal and Disease Patients. J. Med. Syst. 2016, 40, 142. [Google Scholar] [CrossRef]
  41. Badrinarayanan, V.; Kendall, A.; Cipolla, R. SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 2481–2495. [Google Scholar] [CrossRef]
  42. Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556. [Google Scholar]
  43. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR); IEEE: Las Vegas, NV, USA, 2016; pp. 770–778. [Google Scholar]
  44. Saba, L.; Banchhor, S.K.; Londhe, N.D.; Araki, T.; Laird, J.R.; Gupta, A.; Nicolaides, A.; Suri, J.S. Web-based accurate measurements of carotid lumen diameter and stenosis severity: An ultrasound-based clinical tool for stroke risk assessment during multicenter clinical trials. Comput. Biol. Med. 2017, 91, 306–317. [Google Scholar] [CrossRef]
  45. Molinari, F.; Zeng, G.; Suri, J.S. Intima-media thickness: Setting a standard for a completely automated method of ultrasound measurement. IEEE Trans. Ultrason. Ferroelectr. Freq. Control. 2010, 57, 1112–1124. [Google Scholar] [CrossRef]
  46. Suri, J.S.; Kathuria, C.; Molinari, F. Atherosclerosis Disease Management; Springer Science & Business Media: New York, NY, USA, 2010. [Google Scholar]
  47. Khanna, N.N.; Jamthikar, A.D.; Araki, T.; Gupta, D.; Piga, M.; Saba, L.; Carcassi, C.; Nicolaides, A.; Laird, J.R.; Suri, H.S.J.E. Nonlinear model for the carotid artery disease 10-year risk prediction by fusing conventional cardiovascular factors to carotid ultrasound image phenotypes: A Japanese diabetes cohort study. Echocardiography 2019, 36, 345–361. [Google Scholar] [CrossRef]
  48. Molinari, F.; Meiburger, K.M.; Saba, L.; Acharya, U.R.; Ledda, G.; Zeng, G.; Ho, S.Y.S.; Ahuja, A.T.; Ho, S.C.; Nicolaides, A.J.C.m.; et al. Ultrasound IMT measurement on a multi-ethnic and multi-institutional database: Our review and experience using four fully automated and one semi-automated methods. Comput. Methods Programs Biomed. 2012, 108, 946–960. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  49. Qualtrics, X. Determining Sample Size: How to Ensure You Get the Correct Sample Size. 2019. Available online: https://www.qualtrics.com/au/experience-management/research/determine-sample-size/ (accessed on 28 July 2021).
  50. Singh, B.K.; Verma, K.; Thoke, A.; Suri, J.S.J.M. Risk stratification of 2D ultrasound-based breast lesions using hybrid feature selection in machine learning paradigm. Measurement 2017, 105, 146–157. [Google Scholar] [CrossRef]
  51. Riffenburgh, R.H.; Gillen, D.L. Contents. In Statistics in Medicine, 4th ed.; Academic Press: Cambridge, MA, USA, 2020. [Google Scholar]
  52. Priya, C.; Sithi Shameem Fathima, S.M.H.; Kirubanandasarathy, N.; Valanarasid, A.; Safana Begam, M.H.; Aiswarya, N. Automatic Optimized Cnn Based Covid-19 Lung Infection Segmentation from Ct Image. Mater. Today Proc. 2021. [Google Scholar] [CrossRef]
  53. Saood, A.; Hatem, I. COVID-19 lung CT image segmentation using deep learning methods: U-Net versus SegNet. BMC Med. Imaging 2021, 21, 1–10. [Google Scholar] [CrossRef] [PubMed]
  54. Paluru, N.; Dayal, A.; Jenssen, H.B.; Sakinis, T.; Cenkeramaddi, L.R.; Prakash, J.; Yalavarthy, P.K. Anam-Net: Anamorphic Depth Embedding-Based Lightweight CNN for Segmentation of Anomalies in COVID-19 Chest CT Images. IEEE Trans. Neural Netw. Learn. Syst. 2021, 32, 932–946. [Google Scholar] [CrossRef]
  55. Cai, W.; Liu, T.; Xue, X.; Luo, G.; Wang, X.; Shen, Y.; Fang, Q.; Sheng, J.; Chen, F.; Liang, T. CT Quantification and Machine-learning Models for Assessment of Disease Severity and Prognosis of COVID-19 Patients. Acad. Radiol. 2020, 27, 1665–1678. [Google Scholar] [CrossRef] [PubMed]
  56. Chaddad, A.; Hassan, L.; Desrosiers, C. Deep CNN models for predicting COVID-19 in CT and X-ray images. J. Med. Imaging 2021, 8, 014502. [Google Scholar] [CrossRef]
  57. Gunraj, H.; Wang, L.; Wong, A. COVIDNet-CT: A Tailored Deep Convolutional Neural Network Design for Detection of COVID-19 Cases From Chest CT Images. Front. Med. 2020, 7, 608525. [Google Scholar] [CrossRef] [PubMed]
  58. Iyer, T.J.; Joseph Raj, A.N.; Ghildiyal, S.; Nersisson, R. Performance analysis of lightweight CNN models to segment infectious lung tissues of COVID-19 cases from tomographic images. PeerJ Comput. Sci. 2021, 7, e368. [Google Scholar] [CrossRef]
  59. Ranjbarzadeh, R.; Jafarzadeh Ghoushchi, S.; Bendechache, M.; Amirabadi, A.; Ab Rahman, M.N.; Baseri Saadi, S.; Aghamohammadi, A.; Kooshki Forooshani, M. Lung Infection Segmentation for COVID-19 Pneumonia Based on a Cascade Convolutional Network from CT Images. Biomed. Res. Int. 2021, 2021, 5544742. [Google Scholar] [CrossRef] [PubMed]
  60. Xiao, H.; Ran, Z.; Huang, J.; Ren, H.; Liu, C.; Zhang, B.; Dang, J. Research progress in lung parenchyma segmentation based on computed tomography. J. Biomed. Eng. 2021, 38, 379–386. [Google Scholar]
  61. El-Kenawy, E.-S.M.; Ibrahim, A.; Mirjalili, S.; Eid, M.M.; Hussein, S.E. Novel feature selection and voting classifier algorithms for COVID-19 classification in CT images. IEEE Access 2020, 8, 179317–179335. [Google Scholar] [CrossRef]
  62. Vijayalakshmi, A. Deep learning approach to detect malaria from microscopic images. Multimed. Tools Appl. 2020, 79, 15297–15317. [Google Scholar] [CrossRef]
  63. Saba, L.; Sanagala, S.S.; Gupta, S.K.; Koppula, V.K.; Laird, J.R.; Viswanathan, V.; Sanches, M.J.; Kitas, G.D.; Johri, A.M.; Sharma, N. A Multicenter Study on Carotid Ultrasound Plaque Tissue Characterization and Classification Using Six Deep Artificial Intelligence Models: A Stroke Application. IEEE Trans. Instrum. Meas. 2021, 70, 1–12. [Google Scholar] [CrossRef]
  64. Skandha, S.S.; Gupta, S.K.; Saba, L.; Koppula, V.K.; Johri, A.M.; Khanna, N.N.; Mavrogeni, S.; Laird, J.R.; Pareek, G.; Miner, M. 3-D optimized classification and characterization artificial intelligence paradigm for cardiovascular/stroke risk stratification using carotid ultrasound-based delineated plaque: Atheromatic™ 2.0. Comput. Biol. Med. 2020, 125, 103958. [Google Scholar] [CrossRef] [PubMed]
  65. Kuppili, V.; Biswas, M.; Sreekumar, A.; Suri, H.S.; Saba, L.; Edla, D.R.; Marinhoe, R.T.; Sanches, J.M.; Suri, J.S. Extreme learning machine framework for risk stratification of fatty liver disease using ultrasound tissue characterization. J. Med. Syst. 2017, 41, 1–20. [Google Scholar] [CrossRef] [PubMed]
  66. Jamthikar, A.; Gupta, D.; Khanna, N.N.; Saba, L.; Araki, T.; Viskovic, K.; Suri, H.S.; Gupta, A.; Mavrogeni, S.; Turk, M.J.C.d.; et al. A low-cost machine learning-based cardiovascular/stroke risk assessment system: Integration of conventional factors with image phenotypes. Cardiovasc. Diagn. Ther. 2019, 9, 420. [Google Scholar] [CrossRef] [Green Version]
  67. El-Baz, A.; Suri, J.S. Big Data in Multimodal Medical Imaging; CRC Press: Boca Raton, FL, USA, 2019. [Google Scholar]
  68. Noor, N.M.; Than, J.C.; Rijal, O.M.; Kassim, R.M.; Yunus, A.; Zeki, A.A.; Anzidei, M.; Saba, L.; Suri, J.S. Automatic lung segmentation using control feedback system: Morphology and texture paradigm. J. Med. Syst. 2015, 39, 1–18. [Google Scholar] [CrossRef] [PubMed]
  69. Jamthikar, A.; Gupta, D.; Saba, L.; Khanna, N.N.; Araki, T.; Viskovic, K.; Mavrogeni, S.; Laird, J.R.; Pareek, G.; Miner, M.J.C.D.; et al. Cardiovascular/stroke risk predictive calculators: A comparison between statistical and machine learning models. Cardiovasc. Diagn. Ther. 2020, 10, 919. [Google Scholar] [CrossRef]
Figure 1. COVLIAS 1.0: Global system for lung segmentation and evaluation. AI: Artificial Intelligence; CV: Cross-Validation; GT: Ground Truth; LAE: Lung Area Error; LLAE: Lung Long Axis Error; CC: Correlation coefficient; ROC: Receiver operating characteristic; BA: Bland–Altman.
Figure 1. COVLIAS 1.0: Global system for lung segmentation and evaluation. AI: Artificial Intelligence; CV: Cross-Validation; GT: Ground Truth; LAE: Lung Area Error; LLAE: Lung Long Axis Error; CC: Correlation coefficient; ROC: Receiver operating characteristic; BA: Bland–Altman.
Diagnostics 11 01405 g001
Figure 2. Raw lung COVID-19 CT images were taken from different patients in the database.
Figure 2. Raw lung COVID-19 CT images were taken from different patients in the database.
Diagnostics 11 01405 g002
Figure 3. Raw lung images with ground truth boundary (white) overlay on grayscale CT scans.
Figure 3. Raw lung images with ground truth boundary (white) overlay on grayscale CT scans.
Diagnostics 11 01405 g003
Figure 4. White binary mask of lung region used for AI-based model training.
Figure 4. White binary mask of lung region used for AI-based model training.
Diagnostics 11 01405 g004
Figure 5. SegNet architecture.
Figure 5. SegNet architecture.
Diagnostics 11 01405 g005
Figure 6. VGG-SegNet architecture.
Figure 6. VGG-SegNet architecture.
Diagnostics 11 01405 g006
Figure 7. ResNet-SegNet architecture.
Figure 7. ResNet-SegNet architecture.
Diagnostics 11 01405 g007
Figure 8. User interactive fuzzy-connectedness Center for Infectious Disease Imaging (CIDI) lung segmentation [37].
Figure 8. User interactive fuzzy-connectedness Center for Infectious Disease Imaging (CIDI) lung segmentation [37].
Diagnostics 11 01405 g008
Figure 9. Results from the four AI models with grayscale CT slice. Row 1 to Row 4 is four models: SegNet, VGG-SegNet, ResNet-SegNet, and NIH.
Figure 9. Results from the four AI models with grayscale CT slice. Row 1 to Row 4 is four models: SegNet, VGG-SegNet, ResNet-SegNet, and NIH.
Diagnostics 11 01405 g009
Figure 10. Accuracy and loss from the three AI models SegNet, VGG-SegNet, and ResNet-SegNet.
Figure 10. Accuracy and loss from the three AI models SegNet, VGG-SegNet, and ResNet-SegNet.
Diagnostics 11 01405 g010
Figure 11. Results from the three AI models and NIH overlaid on the grayscale CT slices. Rows 1 to 4 correspond to the four models: SegNet, VGG-SegNet, ResNet-SegNet, and NIH.
Figure 12. Boundary overlay of the segmented mask from the AI model (green) vs. the GT mask in the background (red). The white arrow shows the NIH-segmented boundary in an incorrect position.
Figure 13. Overlay of the segmented mask from the AI model (green) vs. the GT mask in the background (red). The white arrow shows the NIH-segmented boundary in an incorrect position.
Figure 14. Visualization of the zoomed region-of-interest showing the difference between AI area (green) and GT area (red). (a) central lung region, (b) basal region of the lung, (c) zoom of the central region, and (d) zoom of the basal region. Arrow shows the difference between GT and AI area.
Figure 15. Display of the AI long axis (green) and GT long axis (red) showing the reach of the top apical and bottom basal zones of the left and right lungs. Rows are the four models; columns are sample COVID-19 CT scans from four patients. White shows the GT lung borders delineated by the tracer using ImgTracer™.
Figure 16. Cumulative plot for LAE for the four models corresponding to left, right, and mean.
Figure 17. Bar chart showing the LAE for the four models corresponding to the left (blue), right (red), and mean (green) areas. LAE was in the following order: ResNet-SegNet < VGG-SegNet < SegNet < NIH.
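The LAE compared across models in Figures 16 and 17 can be sketched as a percentage area difference between the AI and GT segmentations. This is a common formulation, not code from COVLIAS; the function name and the exact normalization (dividing by the GT area) are assumptions for illustration.

```python
import numpy as np

def lung_area_error(ai_area, gt_area):
    """Lung area error (LAE, %) between AI-estimated and GT lung areas.

    Assumes LAE = |AI - GT| / GT * 100; the paper's exact
    normalization may differ.
    """
    ai = np.asarray(ai_area, dtype=float)
    gt = np.asarray(gt_area, dtype=float)
    return np.abs(ai - gt) / gt * 100.0

# Per-slice LAE for one lung, then the mean over slices
left_lae = lung_area_error([95.0, 102.0], [100.0, 100.0])  # array([5., 2.])
mean_lae = left_lae.mean()                                  # 3.5
```

The left/right/mean bars in Figure 17 then follow by averaging the per-slice errors for each lung and for the two lungs combined.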
Figure 18. Correlation plot for lung area for four models corresponding to left, right, and mean.
Figure 19. Plot of the correlation coefficient between the AI model and GT areas for the four models (ResNet-SegNet, VGG-SegNet, SegNet, and NIH) for the left lung, right lung, and mean of the two lungs.
Figure 20. Bar chart of JI and DS for the four models (two hybrid, one solo deep learning, and NIH).
Figure 21. Cumulative frequency for JI and DS for the four models.
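The JI and DS reported in Figures 20 and 21 are standard overlap measures between a predicted binary mask and the GT mask. A minimal sketch (the function name is illustrative, not from the COVLIAS code):

```python
import numpy as np

def dice_jaccard(pred_mask, gt_mask):
    """Dice similarity (DS) and Jaccard index (JI) for two binary masks."""
    pred = np.asarray(pred_mask).astype(bool)
    gt = np.asarray(gt_mask).astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    ds = 2.0 * inter / (pred.sum() + gt.sum())  # DS = 2|A∩B| / (|A| + |B|)
    ji = inter / union                          # JI = |A∩B| / |A∪B|
    return float(ds), float(ji)
```

The two metrics are monotonically related (DS = 2·JI / (1 + JI)), which is why the model rankings in Figures 20 and 21 agree for both.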
Figure 22. BA plot for LAE for the four models (rows) corresponding to left, right, and mean (columns).
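The Bland–Altman (BA) analysis in Figure 22 plots the per-slice difference between AI and GT measurements against their mean, with the bias and 95% limits of agreement. A sketch under the standard BA formulation (function name illustrative):

```python
import numpy as np

def bland_altman(ai, gt):
    """Return (means, diffs, bias, 95% limits of agreement) for AI vs. GT."""
    ai = np.asarray(ai, dtype=float)
    gt = np.asarray(gt, dtype=float)
    diffs = ai - gt
    means = (ai + gt) / 2.0
    bias = diffs.mean()                          # mean difference (bias line)
    sd = diffs.std(ddof=1)                       # sample SD of the differences
    loa = (bias - 1.96 * sd, bias + 1.96 * sd)   # 95% limits of agreement
    return means, diffs, bias, loa
```

Plotting `diffs` against `means` with horizontal lines at `bias` and the two `loa` values reproduces the layout of the panels in Figure 22.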
Figure 23. ROC plot for four models corresponding to left, right, and mean.
Figure 24. Cumulative frequency plot for LLAE, along with the 80% cumulative-frequency threshold.
Figure 25. Bar chart for LLAE using four models.
Figure 26. Plot of the correlation coefficient between the AI-based LLA and GT.
Figure 27. Correlation plot for LLA error for the four models corresponding to left, right, and mean.
Figure 28. BA plot for LLA for four models corresponding to left, right, and mean.
Table 1. FoM (figure of merit) for the segmentation models for lung area and lung long axis.

| Model | Lung Area: Left | Lung Area: Right | Lung Area: Mean | Lung Long Axis: Left | Lung Long Axis: Right | Lung Long Axis: Mean |
|---|---|---|---|---|---|---|
| SegNet | 94.28 | 97.60 | 95.94 | 97.03 | 98.53 | 97.78 |
| VGG-SegNet | 99.09 | 99.56 | 99.32 | 99.70 | 99.61 | 99.65 |
| ResNet-SegNet | 99.03 | 99.98 | 99.55 | 99.72 | 99.62 | 99.67 |
| NIH | 94.98 | 98.69 | 96.83 | 97.76 | 96.09 | 96.92 |
Table 2. Improvement of the three AI models against the NIH model for lung area and lung long axis.

| Model | Lung Area Error: Left | Lung Area Error: Right | Lung Area Error: Mean | Lung Long Axis Error: Left | Lung Long Axis Error: Right | Lung Long Axis Error: Mean |
|---|---|---|---|---|---|---|
| ResNet-SegNet | 57% | 60% | 58% | 42% | 44% | 43% |
| VGG-SegNet | 51% | 52% | 52% | 31% | 37% | 34% |
| SegNet | 22% | 19% | 21% | 5% | 13% | 4% |
Table 3. Benchmarking table.

| Author (Year) | # of Patients | Demo. | # of Images | IS² | # of Tracers | AI Model | SDL vs. HDL | Dim. | Training Speed | Heatmap | VS | AE | BE | DS | JI | AUC | ACC (%) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Priya et al. (2021) [56] | - | No | 78 | - | - | SqueezeNet | HDL | 2D | - | Yes | No | - | Yes | - | - | - | - |
| Saood et al. (2021) [57] | - | No | 100 | 256² | - | UNet, SegNet | SDL | 2D | - | - | No | - | - | 0.73, 0.74 | - | 0.94, 0.95 | 91, 95 |
| Paluru et al. (2020) [58] | 69 | No | 4339 | 512² | - | Anam-Net | SDL | 2D | 27 min | - | Yes | - | - | 0.75 | - | 0.99 | 99.1 |
| Cai et al. (2020) [59] | 99 | Yes | 6301 | - | 2 | UNet | SDL | 2D | - | Yes | No | - | - | 0.98 | 0.96 | - | - |
| Suri et al. (2021) | 72 | Yes | 5000 | 768² | 1 | NIH, SegNet, VGG-SegNet, ResNet-SegNet | SDL and HDL | 2D | 18 min | No | No | Yes | Yes | 0.96, 0.97, 0.97, 0.98 | 0.93, 0.94, 0.96, 0.97 | 0.96, 0.96, 0.97, 0.98 | NA, 98, 98, 99 |

Demo.: Demographics; VS: Variability studies; IS²: Image size (2 represents the dimension along the x-axis and y-axis); AE: Area error; Dim.: Dimensionality of the image (2D vs. 3D); BE: Boundary error; DS: Dice similarity; JI: Jaccard index; AUC: Area under the curve; SDL: Solo deep learning; HDL: Hybrid deep learning; ACC: Accuracy.
Suri, J.S.; Agarwal, S.; Pathak, R.; Ketireddy, V.; Columbu, M.; Saba, L.; Gupta, S.K.; Faa, G.; Singh, I.M.; Turk, M.; et al. COVLIAS 1.0: Lung Segmentation in COVID-19 Computed Tomography Scans Using Hybrid Deep Learning Artificial Intelligence Models. Diagnostics 2021, 11, 1405. https://doi.org/10.3390/diagnostics11081405