Article

Liver Tumor Segmentation in CT Scans Using Modified SegNet

1 Department of Natural and Applied Sciences, Faculty of Community College, Majmaah University, Majmaah 11952, Saudi Arabia
2 Department of Biomedical Engineering, Higher Technological Institute, 10th Ramadan City 44629, Egypt
3 Department of Information Technology, College of Computer Sciences and Information Technology, Majmaah University, Al-Majmaah 11952, Saudi Arabia
4 Faculty of Computer and Information Sciences, Ain Shams University, Cairo 11566, Egypt
5 Faculty of Media Engineering & Technology, German University in Cairo, Cairo 11835, Egypt
* Author to whom correspondence should be addressed.
Sensors 2020, 20(5), 1516; https://doi.org/10.3390/s20051516
Submission received: 5 December 2019 / Revised: 26 February 2020 / Accepted: 4 March 2020 / Published: 10 March 2020

Abstract
Hepatic cancer is one of the leading causes of cancer-related death worldwide. Early detection of hepatic cancer using computed tomography (CT) could prevent millions of patients' deaths every year. However, reading tens or even hundreds of slices in those CT scans is an enormous burden for radiologists. Therefore, there is an immediate need to read, detect, and evaluate CT scans automatically, quickly, and accurately. However, liver segmentation and extraction from CT scans is a bottleneck for any such system, and is still a challenging problem. In this work, a deep learning-based technique that was originally proposed for semantic pixel-wise classification of road scenes is adopted and modified to fit liver CT segmentation and classification. The deep convolutional encoder–decoder architecture, named SegNet, consists of a hierarchical correspondence of encoder–decoder layers. The proposed architecture was tested on a standard dataset of liver CT scans and achieved a tumor accuracy of up to 99.9% in the training phase.

1. Introduction

The liver is the largest internal organ, located underneath the right ribs and below the lung base. It has a role in digesting food [1,2]. It is responsible for filtering blood, processing and storing nutrients, and converting some of these nutrients into energy; it also breaks down toxic agents [3,4]. There are two main hepatic lobes, the left and the right. When the liver is viewed from the undersurface, there are two more lobes, the quadrate and the caudate [5].
Hepatocellular carcinoma (HCC) [6] may occur when liver cells begin to grow out of control, and it can spread to other areas of the body. Primary hepatic malignancies develop when the cells behave abnormally [7].
Liver cancer has been reported to be the second most frequent cause of cancer death in men, and the sixth in women. About 750,000 people were diagnosed with liver cancer in 2008, 696,000 of whom died from it. Globally, the incidence in males is twice that in females [8]. Liver cancer can also develop from viral hepatitis, which is a much more widespread problem. According to the World Health Organization (WHO), about 1.45 million deaths a year occur because of this infection [9]. In 2015, Egypt was named the country with the highest rate of adults infected with viral hepatitis C (HCV), at 7% [9]. Because treatment could not reach all infected people, the Egyptian government launched the "100 Million Seha" (seha is an Arabic word meaning "health") national campaign between October 2018 and April 2019. By the end of March 2019, around 35 million people had been examined for HCV [10].
Primary hepatic malignancy is more prevalent in Southeast Asia and Africa than in the United States [11,12]. The reported overall survival rate is generally 18%; however, survival rates depend on the stage of the disease at the time of diagnosis [13].
Primary hepatic malignancy is diagnosed by clinical, laboratory, and imaging tests, including ultrasound scans, magnetic resonance imaging (MRI) scans, and computed tomography (CT) scans [14]. A CT scan utilizes radiation to capture detailed images of the body from different angles, including sagittal, coronal, and axial views. It shows organs, bones, and soft tissues; the information is then processed by the computer to create images, usually in DICOM format [15]. Quite often, the examination requires intravenous injection of contrast material. The scans can help to differentiate malignant lesions from acute infection, chronic inflammation, fibrosis, and cirrhosis [16].
Staging of hepatic malignancies depends on the size and location of the malignancy [16,17]. Hence, it is important to develop an automatic procedure to detect and extract the cancerous region from the CT scan accurately. Image segmentation is the process of partitioning the liver region in the CT scan into regions, where each region represents a semantic part of the liver [18,19]. This is a fundamental step both to support the diagnosis by radiologists and to create automatic computer-aided diagnosis (CAD) systems [20,21,22]. CT scans of the liver are usually interpreted by manual or semi-manual techniques, but these techniques are subjective, expensive, time-consuming, and highly error-prone. Figure 1 shows an example where the gray-level intensities of the liver and the spleen are too similar to be differentiated by the naked eye. To overcome these obstacles and improve the quality of liver tumor diagnosis, multiple computer-aided methods have been developed. However, these systems have performed poorly at segmenting the liver and its lesions due to multiple challenges, such as the low contrast between the liver and neighboring organs and between the liver and tumors, different contrast levels within tumors, variation in the number and size of tumors, tissue abnormalities, and irregular tumor growth in response to medical treatment. Therefore, a new approach is needed to overcome these obstacles [23].
In this work, a review of a wide variety of recent publications on image analysis for liver malignancy segmentation is introduced. In recent years, extensive research has depended on supervised learning methods. Supervised methods use labeled inputs to train a model for a specific task, in this case liver or tumor segmentation. Chief among these learning methods are the deep learning methods [26,27]. Many different deep learning models have been introduced, such as stacked auto-encoders (SAE), deep belief nets (DBN), convolutional neural networks (CNNs), and deep Boltzmann machines (DBM) [28,29,30,31]. The superiority of the deep learning models in terms of accuracy has been established. However, it is still a challenge to find a proper training dataset, which should be large in size and prepared by experts.
CNNs are considered the most effective of the deep learning methods used. Elshaer et al. [13] reduced the computation time for a large number of slices by using two trained deep CNN models: the first model was used to extract the liver region, and the second was used to avoid the fogginess introduced by image re-sampling and to avoid missing small lesions.
Wen Li et al. [28] utilized a convolutional neural network (CNN) that operates on image patches. It considers an image patch for each pixel, such that the pixel of interest is at the center of that patch. The patches are classified as normal or tumorous liver tissue: if a patch contains 50 percent or more tumor tissue, it is labeled as a positive sample. The reported accuracy reached about 80.6%. The work presented in [12,13] reported more than 94% accuracy for classifying images as either normal or abnormal, where abnormal images show a liver with tumor regions. There are different CNN architectures, e.g., AlexNet, VGG-Net, and ResNet [32,33,34]. The work presented by Bellver et al. [5] used the VGG-16 architecture as the base network. Other work [11,16,29,32,35,36] has used the two-dimensional (2D) U-Net, which is designed mainly for medical image segmentation.
The main objective of this work is to present a novel segmentation technique for liver cross-sectional CT scans based on a deep learning model that has proven successful in image segmentation for scene understanding, namely SegNet [37]. Memory and performance efficiency are the main advantages of this architecture over other models. The model has been modified to fit the two-class classification task.
The paper is organized as follows. In the next section, a review is presented on recent segmentation approaches for the liver and lesions in CT images, as well as a short introduction to the basic concepts addressed in this work. Section 3 presents the proposed method and the experimental dataset. Experimental results are presented in Section 4. Finally, conclusions are presented and discussed in Section 5.

2. Basic Concepts

Convolutional neural networks are similar to traditional neural networks [20,38,39]. A convolutional neural network (CNN) includes one or more convolutional, pooling, fully connected, and rectified linear unit (ReLU) layers. Generally, as the network becomes deeper, with many more parameters, the accuracy of the results increases, but the network also becomes more computationally complex.
Recently, CNN models have been widely used in image classification for different applications [20,34,40,41,42], or to extract features from the convolutional layers before or after the down-sampling layers [41,43]. However, the architectures discussed above are not suitable for image segmentation or pixel-wise classification. The VGG-16 network architecture [44] is a type of CNN model. The network includes 41 layers in total, 16 of which have learnable weights: 13 convolutional layers and three fully connected layers. Figure 2 shows the architecture of VGG-16 as introduced by Simonyan and Zisserman [44].
Most pixel-wise classification networks have an encoder–decoder architecture, where the encoder part is the VGG-16 model. The encoder gradually decreases the spatial dimension of the images with pooling layers, while the decoder recovers the object details and spatial dimensions for fast and precise segmentation of images. U-Net [45,46] is a convolutional encoder–decoder network widely used for semantic image segmentation. It is interesting because it applies a fully convolutional network architecture to medical images. However, it is very time- and memory-consuming.
The semantic image segmentation approach uses the predetermined weights of the pretrained VGG-16 network [45]. Badrinarayanan et al. [37] proposed an encoder–decoder deep network, named SegNet, for scene understanding applications, tested on road and indoor scenes. The main parts of the core trainable segmentation engine are an encoder network, a decoder network, and a pixel-wise classification layer. The encoder network is architecturally identical to the 13 convolutional layers of the VGG-16 network. The function of the decoder network is to map the low-resolution encoder feature maps to full-input-resolution feature maps for pixel-wise classification. Figure 3 shows a simple illustration of the SegNet model. During down-sampling (the max-pooling or subsampling layers) in the encoder part, instead of transferring the pixel values to the decoder, the indices of the chosen pixels are saved and synchronized with the decoder for the up-sampling process. In SegNet, the indices from max pooling are copied instead of the full encoder feature maps, as is done in FCN [47], so SegNet is much more memory- and performance-efficient than FCN and U-Net.
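To make the index-copying mechanism concrete, here is a minimal sketch in PyTorch (our choice for illustration only; neither SegNet's original implementation nor this work uses PyTorch) of how max-pooling indices saved in the encoder drive sparse up-sampling in the decoder:

```python
import torch
import torch.nn as nn

# Encoder-side pooling that also returns the indices of the maxima.
pool = nn.MaxPool2d(kernel_size=2, stride=2, return_indices=True)
# Decoder-side unpooling that places values back at the saved indices,
# producing a sparse, upsampled feature map (no full feature maps copied).
unpool = nn.MaxUnpool2d(kernel_size=2, stride=2)

x = torch.randn(1, 64, 360, 480)      # example encoder feature maps
pooled, indices = pool(x)             # shape (1, 64, 180, 240) plus indices
restored = unpool(pooled, indices)    # back to (1, 64, 360, 480), sparse
print(pooled.shape, restored.shape)
```

Only the integer indices are stored between encoder and decoder, which is why SegNet's memory footprint is smaller than that of FCN-style architectures that transfer entire encoder feature maps.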

3. Materials and Method

This section discusses the steps and the implementation of the proposed method for segmentation of a liver tumor. The proposed method follows the conventional pattern recognition scheme: preprocessing, feature extraction and classification, and post-processing.

3.1. Dataset

The 3D-IRCADb-01 database is composed of three-dimensional (3D) CT scans of 20 different patients (10 females and 10 males), with hepatic tumors in 15 of the cases. Each image has a resolution of 512 × 512 pixels. The depth, or number of slices per patient, ranges between 74 and 260. Along with the patient images in DICOM format, labeled images and mask images are given that can be used as ground truth for the segmentation process. The location of the tumors is indicated according to the Couinaud segmentation [48]. The dataset also illustrates the main difficulties of segmenting the liver in software [49].
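As an illustration only (the folder path below is hypothetical, and pydicom is our assumption for this sketch, not the toolchain used in this work), a single 3D-IRCADb-01 slice can be read and converted to Hounsfield units as follows:

```python
import pydicom

# Read one DICOM slice; the path layout here is a hypothetical example.
ds = pydicom.dcmread("3Dircadb1.1/PATIENT_DICOM/image_0")
# The standard DICOM rescale tags map stored values to Hounsfield units.
hu = ds.pixel_array * float(ds.RescaleSlope) + float(ds.RescaleIntercept)
print(ds.Rows, ds.Columns, hu.min(), hu.max())   # 512 x 512 per slice
```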

3.2. Image Preprocessing

In the preprocessing step, the DICOM CT images were converted to the portable network graphics (PNG) file format. PNG was chosen because it is a lossless format and therefore preserves image quality. In DICOM format, the pixel values are in Hounsfield units, in the range [−1000, 4000]. In this format, the images cannot be displayed directly, and many image processing operations will fail. Therefore, color depth conversion, i.e., mapping the range of pixel values to unsigned 1-byte integers, is necessary. The mapping is done according to the following formula:
$$ g = \frac{h - m_1}{m_2 - m_1} \times 255 \qquad (1) $$
where h is the pixel value in Hounsfield units, g is the corresponding gray-level value, and m_1 and m_2 are the minimum and maximum of the Hounsfield range, respectively.
The second step is to put the images into an acceptable format for the SegNet model [37]. The images were converted to three channels, similar to the RGB color space, by simply duplicating the slice in each channel, and each was resized to the dimensions 360 × 480 × 3. Figure 4 shows three samples of the input images before color depth correction; in this form, the images have very low contrast and are not suitable for use by the deep learning model.
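A minimal Python sketch of these two preprocessing steps, assuming NumPy and Pillow (the actual implementation in this work is in MATLAB), might look as follows:

```python
import numpy as np
from PIL import Image

def hounsfield_to_gray(h, m1=-1000.0, m2=4000.0):
    """Map Hounsfield values in [m1, m2] to 8-bit gray levels, Equation (1)."""
    g = (h - m1) / (m2 - m1) * 255.0
    return np.clip(g, 0, 255).astype(np.uint8)

def to_segnet_input(hu_slice):
    """Convert one 512 x 512 CT slice to the 360 x 480 x 3 SegNet format."""
    gray = hounsfield_to_gray(hu_slice)
    img = Image.fromarray(gray).resize((480, 360))   # PIL takes (width, height)
    return np.stack([np.asarray(img)] * 3, axis=-1)  # duplicate into 3 channels

demo = np.random.uniform(-1000, 4000, (512, 512)).astype(np.float32)
print(to_segnet_input(demo).shape)   # (360, 480, 3)
```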
In order to increase the performance of the system, the training images were subjected to data augmentation, in which the images are transformed by a set of affine transformations, such as flipping, rotation, and mirroring, as well as by augmenting the color values [38,51,52]. Perez et al. [53] discuss the effect of data augmentation on classification results when deep learning is used, and showed that traditional augmentation techniques can improve the results by about 7%.
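A hedged sketch of such augmentation follows; applying the identical randomly chosen transform to the image and its label mask is our reading of what pixel-wise training requires, not a detail spelled out in the text:

```python
import numpy as np

def augment(image, label, rng):
    """Apply one random combination of the affine transforms listed above,
    identically to the image and its pixel-label mask."""
    k = int(rng.integers(0, 4))            # rotate by 0/90/180/270 degrees
    image, label = np.rot90(image, k), np.rot90(label, k)
    if rng.random() < 0.5:                 # horizontal mirror
        image, label = np.fliplr(image), np.fliplr(label)
    if rng.random() < 0.5:                 # vertical flip
        image, label = np.flipud(image), np.flipud(label)
    return image.copy(), label.copy()

rng = np.random.default_rng(0)
img = np.zeros((360, 480, 3), dtype=np.uint8)
msk = np.zeros((360, 480), dtype=np.uint8)
aug_img, aug_msk = augment(img, msk, rng)
print(aug_img.shape, aug_msk.shape)
```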

3.3. Training and Classification

The quality of CNN features was compared to that of traditional feature extraction methods, such as LBP, GLCM, wavelet, and spectral features; among these texture descriptors, the CNN gave the best performance. CNN training consumes some time; however, once the network is trained, features can be extracted quickly compared to other complex textural methods. CNNs have proven to be effective in classification tasks [26]. The training data and data augmentation are combined by reading batches of training data, applying the augmentation, and sending the augmented data to the training algorithm. Training starts from a data source that contains the training images, pixel labels, and their augmented forms.

4. Experimental Results

4.1. Evaluation Metrics

The output of the classification was compared against the ground truth given by the dataset, on a pixel-by-pixel basis. To evaluate the results, we applied the evaluation metrics given below. Table 1 presents the confusion matrix for binary classification.
From the confusion matrix, several important metrics were computed, as listed below; a short sketch implementing them follows the list.
1. Overall accuracy: this represents the percentage of correctly classified pixels out of the whole number of pixels, as formulated in Equation (2):
$$ \mathrm{accuracy} = \frac{TN + TP}{TN + TP + FN + FP} \qquad (2) $$
while the mean accuracy is the mean of accuracies reported across the different testing folds.
2. Recall (Re) or true positive rate (TPR): this represents the capability of the system to correctly detect tumor pixels relative to the total number of true tumor pixels, as formulated in Equation (3):
$$ Re = \frac{TP}{TP + FN} \qquad (3) $$
3. Specificity or the true negative rate (TNR): this represents the rate of correctly detected background or normal tissue, as formulated in Equation (4):
$$ \mathrm{Specificity} = \frac{TN}{TN + FP} \qquad (4) $$
Since most of the image is normal tissue or background, the overall accuracy is strongly influenced by the TNR. Therefore, additional measures focused on the tumor class are computed.
4. Intersection over union (IoU): this is the ratio of correctly classified pixels relative to the union of predicted and actual number of pixels for the same class. Equation (5) shows the formulation of the IoU:
$$ IoU = \frac{TP}{TP + FP + FN} \qquad (5) $$
5. Precision (Pr): this measures the trust in the predicted positive class, i.e., prediction of a tumor. It is formulated as in Equation (6):
$$ Pr = \frac{TP}{TP + FP} \qquad (6) $$
6. F1 score (F1): this is a harmonic mean of recall (true positive rate) and precision, as formulated in Equation (7). It measures whether a point on the predicted boundary has a match on the ground truth boundary or not:
$$ F1 = \frac{2 (Pr \cdot Re)}{Pr + Re} \qquad (7) $$
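The following sketch collects Equations (2)–(7) into one helper; the pixel counts in the example call are hypothetical:

```python
def segmentation_metrics(tp, fp, fn, tn):
    """Evaluation metrics of Equations (2)-(7) from confusion-matrix counts."""
    precision = tp / (tp + fp)                        # Equation (6)
    recall = tp / (tp + fn)                           # Equation (3), TPR
    return {
        "accuracy":    (tn + tp) / (tn + tp + fn + fp),   # Equation (2)
        "recall":      recall,
        "specificity": tn / (tn + fp),                    # Equation (4), TNR
        "iou":         tp / (tp + fp + fn),               # Equation (5)
        "precision":   precision,
        "f1": 2 * precision * recall / (precision + recall),  # Equation (7)
    }

# Hypothetical pixel counts for one test image:
print(segmentation_metrics(tp=4500, fp=250, fn=500, tn=230000))
```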

4.2. Data Set and Preprocessing

As mentioned before, the dataset used to test the proposed algorithm is 3D-IRCADb, offered by the French research institute against cancers of the digestive system, IRCAD [50]. It has two subsets; the first one, 3D-IRCADb-01, is the one appropriate for liver tumor segmentation. This subset consists of publicly available 3D CT scans of 20 patients, half of them women and half men, with hepatic tumors in 75% of the cases. All scans are available in DICOM format with axial dimensions of 512 × 512. For each case, tens of 2D images are available, together with labeled images and mask images prepared by radiologists. In this work, we considered all 15 tumor cases, with a total of 2063 images for training and testing. The dataset has been used widely in recent work [54,55,56,57].
All image slices were subjected to preprocessing as discussed above. The labeled images provided by the dataset were preprocessed by the same procedure, except for the range-mapping step, since they are given as binary images in the range [0, 255]. Figure 5 and Figure 6 show examples of the preprocessing steps on input images. Associated with the input (patient) images are the labeled images, which were annotated by experts and are fed to the system as ground truth for the segmentation process.

4.3. Training and Classification

Three of the 15 cases of the dataset were used for testing and evaluation, with a total of 475 images. Among these, 454 images were used for training and validation, and 45 images were used for testing.
The first training and testing experiments were carried out using the U-Net model of [45]. This U-Net model is trained to perform semantic segmentation on medical images and is based on VGG-16, as discussed before. The results were near-perfect for extracting the liver region. However, the model failed completely when used to extract the tumor regions from the image; the tumor region was almost always missed or predicted as other tissue.
The proposed architecture is based on the SegNet model [37]: an encoder network and a corresponding decoder network connected to a 2D multi-class classification layer for pixel-based semantic segmentation. However, the final classification layer was replaced by a 2D binary classification layer. The trained VGG-16 model was imported for the encoder part. Figure 7 shows an illustration of the proposed network architecture. To improve the training, class weighting with median frequency balancing was used to compensate for the class imbalance; a sketch of this weighting is given below.
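A small Python sketch of median frequency balancing under our reading of that step (the exact MATLAB call used here is not shown in the text): each class weight is the median class frequency divided by that class's own pixel frequency over the training labels.

```python
import numpy as np

def median_frequency_weights(label_masks):
    """Class weights w_c = median(freq) / freq_c over the training masks,
    where freq_c is the fraction of pixels belonging to class c."""
    counts = np.zeros(2, dtype=np.float64)      # [background, tumor]
    for mask in label_masks:                    # mask values: 0 or 1
        counts += np.bincount(mask.ravel(), minlength=2)
    freq = counts / counts.sum()
    return np.median(freq) / freq

masks = [np.random.binomial(1, 0.02, (360, 480)) for _ in range(4)]
print(median_frequency_weights(masks))   # rare tumor class gets a large weight
```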
For testing, the network was run on one image from the test set at a time, returning a semantic segmentation of the input image with classification scores for each categorical label.

4.4. Testing and Evaluation

The proposed method was trained on a machine with an NVIDIA GTX 1050 GPU (4 GB of memory) and an Intel Core i7-7700HQ 2.20 GHz CPU with 16 GB of RAM, and was developed with MATLAB 2018b, which offers a Neural Network Toolbox and an Image Processing Toolbox.
The images of the tested cases were divided randomly into two groups for training and testing at a ratio of 9:1. The results on the training set are normally higher than those achieved on the test set. Figure 8 shows three samples of the testing output, where the resulting binary segmentation is overlaid on the input gray-level images. At this stage, an almost perfect segmentation was achieved. Table 2 lists the evaluation metrics for the three cases. Network training was performed for 100 epochs of 1000 iterations each on a single GPU, with a constant learning rate of 0.001; a hedged sketch of this schedule follows. It is clear from Table 2 that as the number of training images increases, the segmentation quality increases, up to the near-perfect results of case 3.
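As a rough analogue of the reported schedule (the actual implementation used MATLAB 2018b), the following PyTorch sketch mirrors the constant learning rate of 0.001 with weighted binary pixel classification; the stand-in model, random data, and class weights are placeholders, not the trained network:

```python
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = nn.Conv2d(3, 2, kernel_size=1).to(device)    # stand-in for modified SegNet
weights = torch.tensor([0.51, 25.0], device=device)  # placeholder class weights
criterion = nn.CrossEntropyLoss(weight=weights)
optimizer = torch.optim.SGD(model.parameters(), lr=0.001)  # constant rate

for epoch in range(2):        # 100 epochs in the paper
    for it in range(3):       # 1000 iterations per epoch in the paper
        x = torch.randn(4, 3, 360, 480, device=device)       # random batch
        y = torch.randint(0, 2, (4, 360, 480), device=device)
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()
print("final loss:", float(loss))
```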
Figure 9 shows an illustration of the evaluation method, where the resulting segmented images are superimposed over the ground-truth image. The correctly classified tumor pixels, known as true positives, are colored white. It is clear from this figure that the results of case 1 are the least accurate, while the results of case 3 are perfect in terms of tumor detection; however, the tumor appears larger than it actually is.
The experimental results are presented as confusion matrices in Table 3, Table 4 and Table 5 for test cases 1, 2, and 3, respectively. The displayed results are normalized.
To provide more insight into the presented results, Table 6 compares the overall accuracy of the proposed method with selected works from the literature, according to the results reported in their papers. The proposed method achieves higher accuracy than the works in the comparison.

5. Conclusions

This paper presents experimental work that adapts a deep learning model originally used for semantic segmentation in road scene understanding to tumor segmentation in liver CT scans in DICOM format.
SegNet is a recent encoder–decoder network architecture that employs the trained VGG-16 image classification network as the encoder, together with a corresponding decoder that transforms the features back into the image domain to reach a pixel-wise classification at the end. The advantage of SegNet over the standard auto-encoder architecture is a simple yet very efficient modification: the max-pooling indices of the feature maps are saved, instead of the feature maps in full. As a result, the architecture is much more efficient in training time, memory requirements, and accuracy.
To facilitate binary segmentation of medical images, the classification layer was replaced with a binary pixel classification layer. For training and testing, the standard 3D-IRCADb-01 dataset was used. The proposed method correctly detects most parts of the tumor, with an accuracy above 86% for tumor classification. However, on examining the results, there were a few false positives, which could be reduced by applying false-positive filters or by training the model on a larger dataset.
As future work, we propose using a new deep learning model as an additional level to increase the localization accuracy of the tumor, and hence reduce the FN rate and increase the IoU metric, similar to the work introduced in [20].

Author Contributions

Conceptualization and methodology, M.A.-M.S. and S.A.; software and validation, G.K. and M.A.; formal analysis, M.A. and M.A.-M.S.; writing—original draft preparation, M.A.-M.S.; writing—review and editing, G.K., M.A., and B.A.; supervision, M.A.-M.S.; project administration, S.A.; funding acquisition, S.A. and B.A. All authors reviewed the manuscript and have read and agreed to the published version.

Funding

This research was funded by the Deanship of Scientific Research at Majmaah University under project RGP-2019-29.

Acknowledgments

The authors would like to thank Mamdouh Mahfouz, Professor of Radiology at Kasr-El-Aini School of Medicine, Cairo University, Egypt, for his help in revising the introduction in order to correct any misused medical terms.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Al-Shaikhli, S.D.S.; Yang, M.Y.; Rosenhahn, B. Automatic 3D liver segmentation using sparse representation of global and local image information via level set formulation. arXiv 2015, arXiv:1508.01521.
2. Whitcomb, E.; Choi, W.T.; Jerome, K.R.; Cook, L.; Landis, C.; Ahn, J.; Westerhoff, M. Biopsy Specimens From Allograft Liver Contain Histologic Features of Hepatitis C Virus Infection After Virus Eradication. Clin. Gastroenterol. Hepatol. 2017, 15, 1279–1285.
3. Woźniak, M.; Połap, D. Bio-inspired methods modeled for respiratory disease detection from medical images. Swarm Evol. Comput. 2018, 41, 69–96.
4. Mohamed, A.S.E.; Salem, M.A.-M.; Hegazy, D.; Shedeed, H.A. Improved Watershed Algorithm for CT Liver Segmentation Using Intraclass Variance Minimization. ICIST 2017.
5. Bellver, M.; Maninis, K.-K.; Pont-Tuset, J.; Nieto, X.G.; Torres, J.; Gool, L.V. Detection-aided liver lesion segmentation using deep learning. Machine Learning 4 Health Workshop, NIPS. arXiv 2017, arXiv:1711.11069.
6. Ben-Cohen, A.; Klang, E.; Amitai, M.M.; Goldberger, J.; Greenspan, H. Anatomical data augmentation for CNN based pixel-wise classification. In Proceedings of the IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018), Washington, DC, USA, 4–7 April 2018; pp. 1096–1099.
7. Bi, L.; Kim, J.; Kumar, A.; Feng, D. Automatic liver lesion detection using cascaded deep residual networks. arXiv 2017, arXiv:1704.02703.
8. Jemal, A.; Bray, F.; Center, M.M.; Ferlay, J.; Ward, E.; Forman, D. Global cancer statistics. CA Cancer J. Clin. 2017, 61, 60–90.
9. World Health Organization. Available online: http://www.emro.who.int/media/news/world-hepatitis-day-in-egypt-focuses-on-hepatitis-b-and-c-prevention (accessed on 24 February 2020).
10. World Health Organization Hepatitis. Available online: https://www.who.int/hepatitis/news-events/egypt-hepatitis-c-testing/en/ (accessed on 24 February 2020).
11. Chlebus, G.; Meine, H.; Moltz, J.H.; Schenk, A. Neural network-based automatic liver tumor segmentation with random forest-based candidate filtering. arXiv 2017, arXiv:1706.00842.
12. Christ, P.F.; Elshaer, M.E.A.; Ettlinger, F.; Tatavarty, S.; Bickel, M.; Bilic, P.; Rempfler, M.; Armbruster, M.; Hofmann, F.; D'Anastasi, M.; et al. Automatic liver and lesion segmentation in CT using cascaded fully convolutional neural networks and 3D conditional random fields. arXiv 2016, arXiv:1610.02177.
13. Christ, P.F.; Ettlinger, F.; Grün, F.; Elshaer, M.E.A.; Lipkova, J.; Schlecht, S.; Ahmaddy, F.; Tatavarty, S.; Bickel, M.; Bilic, P.; et al. Automatic liver and tumor segmentation of CT and MRI volumes using cascaded fully convolutional neural networks. arXiv 2017, arXiv:1702.05970.
14. Wei, W.; Zhou, B.; Połap, D.; Woźniak, M. A regional adaptive variational PDE model for computed tomography image reconstruction. Pattern Recognit. 2019, 92, 64–81.
15. Varma, D.R. Managing DICOM images: Tips and tricks for the radiologist. Indian J. Radiol. Imaging 2012, 22, 4–13.
16. The American Cancer Society medical and editorial content team. Liver Cancer Early Detection, Diagnosis, and Staging. Available online: https://www.cancer.org/content/dam/CRC/PDF/Public/8700.00.pdf (accessed on 22 July 2019).
17. Woźniak, M.; Połap, D.; Capizzi, G.; Sciuto, G.L.; Kośmider, L.; Frankiewicz, K. Small lung nodules detection based on local variance analysis and probabilistic neural network. Comput. Methods Programs Biomed. 2018, 161, 173–180.
18. Mohamed, A.S.E.; Salem, M.A.-M.; Hegazy, D.; Shedeed, H. Probabilistic-based framework for medical CT images segmentation. In Proceedings of the 2015 IEEE Seventh International Conference on Intelligent Computing and Information Systems (ICICIS), Cairo, Egypt, 12–14 December 2015; pp. 149–155.
19. Salem, M.A.; Meffert, B. Resolution mosaic EM algorithm for medical image segmentation. In Proceedings of the 2009 International Conference on High Performance Computing & Simulation, Leipzig, Germany, 21–24 June 2009; pp. 208–215.
20. El-Regaily, S.A.; Salem, M.A.M.; Abdel Aziz, M.H.; Roushdy, M.I. Multi-view convolutional neural network for lung nodule false positive reduction. Expert Syst. Appl. 2019.
21. Atteya, M.A.; Salem, M.A.-M.; Hegazy, D.; Roushdy, M.I. Image segmentation and particles classification using texture analysis method. Rev. Bras. Eng. Biomed. 2016, 32, 243–252.
22. Han, X. Automatic liver lesion segmentation using a deep convolutional neural network method. arXiv 2017, arXiv:1704.07239.
23. National Institute of Biomedical Imaging and Bioengineering (NIBIB). Computed Tomography (CT). Available online: https://www.nibib.nih.gov/science-education/science-topics/computed-tomography-ct (accessed on 18 May 2019).
24. Heimann, T.; Van Ginneken, B.; Styner, M. Sliver07 Grand Challenge. Available online: https://sliver07.grand-challenge.org/Home/ (accessed on 20 July 2019).
25. Heimann, T.; van Ginneken, B.; Styner, M.A.; Arzhaeva, Y.; Aurich, V.; Bauer, C.; Beck, A.; Becker, C.; Beichel, R.; Bekes, G.; et al. Comparison and evaluation of methods for liver segmentation from CT datasets. IEEE Trans. Med. Imaging 2009, 28, 1251–1265.
26. Salem, M.A.-M.; Atef, A.; Salah, A.; Shams, M. Recent survey on medical image segmentation. Comput. Vis. Concepts Methodol. Tools Appl. 2018, 129–169.
27. Horn, Z.C.; Auret, L.; McCoy, J.T.; Aldrich, C.; Herbst, B.M. Performance of convolutional neural networks for feature extraction in froth flotation sensing. IFAC-PapersOnLine 2017, 50, 13–18.
28. Li, W.; Jia, F.; Hu, Q. Automatic segmentation of liver tumor in CT images with deep convolutional neural networks. J. Comput. Commun. 2015, 3, 146–151.
29. Li, X.; Chen, H.; Qi, X.; Dou, Q.; Fu, C.; Heng, P. H-DenseUNet: Hybrid Densely Connected UNet for Liver and Tumor Segmentation From CT Volumes. IEEE Trans. Med. Imaging 2018, 37, 2663–2674.
30. Rajeshwaran, M.; Ahila, A. Segmentation of Liver Cancer Using SVM Techniques. Int. J. Comput. Commun. Inf. Syst. (IJCCIS) 2014, 6, 78–80.
31. El-Regaily, S.A.; Salem, M.A.M.; Aziz, M.H.A.; Roushdy, M.I. Lung nodule segmentation and detection in computed tomography. In Proceedings of the 8th IEEE International Conference on Intelligent Computing and Information Systems (ICICIS), Cairo, Egypt, 5–7 December 2017; pp. 72–78.
32. Yuan, Y. Hierarchical convolutional-deconvolutional neural networks for automatic liver and tumor segmentation. arXiv 2017, arXiv:1710.04540.
33. Yang, D.; Xu, D.; Zhou, S.K.; Georgescu, B.; Chen, M.; Grbic, S.; Metaxas, D.N.; Comaniciu, D. Automatic liver segmentation using an adversarial image-to-image network. arXiv 2017, arXiv:1707.08037.
34. Shafaey, M.A.; Salem, M.A.-M.; Ebied, H.M.; Al-Berry, M.N.; Tolba, M.F. Deep Learning for Satellite Image Classification. In Proceedings of the 3rd International Conference on Intelligent Systems and Informatics (AISI2018), Cairo, Egypt, 1–3 September 2018.
35. Sutskever, I.; Vinyals, O.; Le, Q.V. Sequence to sequence learning with neural networks. arXiv 2014, arXiv:1409.3215.
36. Ke, Q.; Zhang, J.; Wei, W.; Damaševičius, R.; Woźniak, M. Adaptive Independent Subspace Analysis of Brain Magnetic Resonance Imaging Data. IEEE Access 2019, 7, 12252–12261.
37. Badrinarayanan, V.; Kendall, A.; Cipolla, R. SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 2481–2495.
38. Ahmed, A.H.; Salem, M.A. Mammogram-based cancer detection using deep convolutional neural networks. In Proceedings of the 2018 13th International Conference on Computer Engineering and Systems (ICCES), Cairo, Egypt, 18–19 December 2018; pp. 694–699.
39. Lu, F.; Wu, F.; Hu, P.; Peng, Z.; Kong, D. Automatic 3D liver location and segmentation via convolutional neural networks and graph cut. Int. J. Comput. Assist. Radiol. Surg. 2017, 12, 171–182.
40. Wang, P.; Qian, Y.; Soong, F.K.; He, L.; Zhao, H. Part-of-speech tagging with bidirectional long short-term memory recurrent neural network. arXiv 2015, arXiv:1510.06168.
41. Abdelhalim, A.; Salem, M.A. Intelligent Organization of Multiuser Photo Galleries Using Sub-Event Detection. In Proceedings of the 12th International Conference on Computer Engineering & Systems (ICCES), Cairo, Egypt, 19–20 December 2017. Available online: https://ieeexplore.ieee.org/document/347/ (accessed on 22 January 2020).
42. Elsayed, O.A.; Marzouky, N.A.M.; Atefz, E.; Mohammed, A.-S. Abnormal action detection in video surveillance. In Proceedings of the 9th IEEE International Conference on Intelligent Computing and Information Systems (ICICIS), Cairo, Egypt, 8–10 December 2019.
43. El-Masry, M.; Fakhr, M.; Salem, M. Action Recognition by Discriminative EdgeBoxes. IET Comput. Vis. 2017. Available online: http://digital-library.theiet.org/content/journals/10.1049/iet-cvi.2017.0335 (accessed on 22 January 2020).
44. Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556.
45. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional networks for biomedical image segmentation. arXiv 2015, arXiv:1505.04597.
46. Ke, Q.; Zhang, J.; Wei, W.; Połap, D.; Woźniak, M.; Kośmider, L.; Damaševičius, R. A neuro-heuristic approach for recognition of lung diseases from X-ray images. Expert Syst. Appl. 2019, 126, 218–232.
47. Noh, H.; Hong, S.; Han, B. Learning Deconvolution Network for Semantic Segmentation. In Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV), Santiago, Chile, 7–13 December 2015; pp. 1520–1528.
48. Vadera, S.; Jones, J. Couinaud classification of hepatic segments. Available online: https://radiopaedia.org/articles/couinaud-classification-of-hepatic-segments (accessed on 28 November 2019).
49. Zayene, O.; Jouini, B.; Mahjoub, M.A. Automatic liver segmentation method in CT images. arXiv 2012, arXiv:1204.1634.
50. IRCAD. Hôpitaux Universitaires. 1, place de l'Hôpital, 67091 Strasbourg Cedex, France. Available online: https://www.ircad.fr/contact/ (accessed on 2 February 2020).
51. Golan, R.; Jacob, C.; Denzinger, J. Lung nodule detection in CT images using deep convolutional neural networks. In Proceedings of the 2016 International Joint Conference on Neural Networks (IJCNN), Vancouver, BC, Canada, 24–29 July 2016; pp. 243–250.
52. El-Regaily, S.A.; Salem, M.A.; Abdel Aziz, M.H.A.; Roushdy, M.I. Survey of Computer Aided Detection Systems for Lung Cancer in Computed Tomography. Curr. Med. Imaging 2018, 14, 3–18.
53. Perez, L.; Wang, J. The Effectiveness of Data Augmentation in Image Classification Using Deep Learning. arXiv 2017, arXiv:1712.04621.
54. Sun, C.; Guo, S.; Zhang, H.; Li, J.; Chen, M.; Ma, S.; Jin, L.; Liu, X.; Li, X.; Qian, X. Automatic segmentation of liver tumors from multiphase contrast-enhanced CT images based on FCNs. Artif. Intell. Med. 2017, 83, 58–66.
55. Badura, P.; Wieclawek, W. Calibrating level set approach by granular computing in computed tomography abdominal organs segmentation. Appl. Soft Comput. 2016, 49, 887–900.
56. Shi, C.; Cheng, Y.; Liu, F.; Wang, Y.; Bai, J.; Tamura, S. A hierarchical local region-based sparse shape composition for liver segmentation in CT scans. Pattern Recognit. 2015.
57. Hesamian, M.H.; Jia, W.; He, X.; Kennedy, P. Deep Learning Techniques for Medical Image Segmentation: Achievements and Challenges. J. Digit. Imaging 2019.
Figure 1. Example of the similarity in gray levels between the liver and the spleen in computed tomography (CT) images. Imported from the Medical Image Computing and Computer Assisted Intervention (MICCAI) SLIVER07 workshop datasets [24,25].
Figure 2. VGG-16 network architecture [44].
Figure 3. SegNet network architecture [37].
Figure 4. Raw CT slices in DICOM format for three different patients imported from the IRCAD dataset [50].
Figure 5. Samples of the slices after color range mapping to [0, 255]. The three images correspond respectively to the images in Figure 4.
Figure 6. Liver tumor labeled images from the ground truth given by the dataset. The three images correspond respectively to the images in Figure 4 and Figure 5.
Figure 7. The proposed architecture.
Figure 8. Samples of the results of testing. The predicted tumor position is highlighted with the red color.
Figure 9. Samples of the resulting segmented image superimposed over the ground truth image. The correctly classified tumor pixels (known as true positive (TP)) are colored in white. The missed tumor pixels are colored in purple. The pixels that are predicted to belong to the tumor, but actually are pixels representing normal tissue or the background, are colored in green. The black color represents pixels that are correctly classified as normal or background.
Table 1. Terms used to define sensitivity, specificity, and accuracy.

                | Predicted Positive | Predicted Negative
Actual Positive | TP                 | FN
Actual Negative | FP                 | TN
Table 2. Evaluation metrics for the training of the three test cases.

       | # Training Images | Overall Ac. | Mean Ac. | Tumor Ac. (TPR) | Background Ac. (TNR) | Weighted IoU | F1-Score
Case 1 | 70                | 91.27%      | 93.43%   | 95.58%          | 91.27%               | 91.26%       | 21.91%
Case 2 | 124               | 93.36%      | 95.18%   | 97.34%          | 93.03%               | 92.99%       | 46.58%
Case 3 | 260               | 94.57%      | 97.26%   | 99.99%          | 94.52%               | 93.72%       | 62.16%
Table 3. Normalized confusion matrix for test case 1.

Case 1        | Predicted Tumor | Predicted Others
Actual Tumor  | 45.82%          | 54.28%
Actual Others | 0.31%           | 99.69%
Table 4. Normalized confusion matrix for test case 2.

Case 2        | Predicted Tumor | Predicted Others
Actual Tumor  | 67.13%          | 32.87%
Actual Others | 3.04%           | 96.95%
Table 5. Normalized confusion matrix for test case 3.

Case 3        | Predicted Tumor | Predicted Others
Actual Tumor  | 99.99%          | 0%
Actual Others | 5.48%           | 94.52%
Table 6. Comparison of the overall accuracy of the proposed method against other work in the literature.

Author(s)          | Application                                          | Method                                                                                               | Accuracy
Chlebus [11]       | Liver tumor candidate classification                 | Random forest                                                                                        | 90%
Christ [12]        | Automatic liver and tumor segmentation of CT and MRI | Cascaded fully convolutional neural networks (CFCNs) with dense 3D conditional random fields (CRFs)  | 94%
Yang [33]          | Liver segmentation of CT volumes                     | Deep image-to-image network (DI2IN)                                                                  | 95%
Bi [7]             | Liver segmentation                                   | Deep residual network (cascaded ResNet)                                                              | 95.9%
Li [29]            | Liver and tumor segmentation from CT volumes         | H-DenseUNet                                                                                          | 96.5%
Yuan [32]          | Automatic liver and tumor segmentation               | Hierarchical convolutional–deconvolutional neural networks                                           | 96.7%
Wen Li et al. [28] | Patch-based liver tumor classification               | Conventional convolutional neural network (CNN)                                                      | 80.6%
Our method         | Liver tumor segmentation in CT scans                 | Modified SegNet                                                                                      | 98.8%
