Article

Automated Extraction of Cerebral Infarction Region in Head MR Image Using Pseudo Cerebral Infarction Image by CycleGAN

Mizuki Yoshida, Atsushi Teramoto, Kohei Kudo, Shoji Matsumoto, Kuniaki Saito and Hiroshi Fujita
1 Graduate School of Health Sciences, Fujita Health University, Toyoake 470-1192, Japan
2 Daido Hospital, Nagoya 457-8511, Japan
3 Faculty of Medicine, Fujita Health University, Toyoake 470-1192, Japan
4 Faculty of Engineering, Gifu University, Gifu 501-1194, Japan
* Author to whom correspondence should be addressed.
Appl. Sci. 2022, 12(1), 489; https://doi.org/10.3390/app12010489
Submission received: 18 November 2021 / Revised: 30 December 2021 / Accepted: 2 January 2022 / Published: 4 January 2022
(This article belongs to the Special Issue Applications of Artificial Intelligence in Medicine Practice)

Abstract:
Since recognizing the location and extent of infarction is essential for diagnosis and treatment, many methods using deep learning have been reported. Deep learning generally requires a large amount of training data. To overcome this problem, we generated pseudo patient images using CycleGAN, which performs image transformation without paired images, and used the generated images to improve the extraction of cerebral infarction regions. First, CycleGAN was used for data augmentation: pseudo cerebral infarction images were generated from healthy images. Then, U-Net was trained to segment the cerebral infarction region using both the CycleGAN-generated images and real patient images. Regarding extraction accuracy, the Dice index was 0.553 for U-Net with CycleGAN, an improvement over U-Net without CycleGAN (0.473). Furthermore, the number of false positives per case was 3.75 for U-Net without CycleGAN and 1.23 for U-Net with CycleGAN; introducing the CycleGAN-generated images into the training cases reduced the number of false positives by approximately 67%. These results indicate that utilizing CycleGAN-generated images was effective and facilitated the accurate extraction of the infarcted regions while maintaining the detection rate.

1. Introduction

Stroke is a leading cause of death globally [1]. Cerebral infarction, the most common type of stroke [2], often leaves after-effects that reduce quality of life. Early detection and treatment are essential for cerebral infarction because the infarcted region expands over time.
Computed tomography (CT) and magnetic resonance imaging (MRI) are mainly used to diagnose cerebral infarction. MRI is widely used today because of its high contrast resolution and ability to visualize brain structures and lesions. MR sequences such as T2-weighted imaging (T2WI), fluid-attenuated inversion recovery (FLAIR), and diffusion-weighted imaging (DWI) are mainly used to diagnose cerebral infarction. DWI is excellent for detecting stroke during the hyperacute phase because reduced diffusion caused by cellular edema produces a high signal [3]. Therefore, the detection of acute cerebral infarction by DWI is useful for prompt diagnosis and treatment selection. However, DWI has a lower resolution than other sequences, and its imaging principle tends to cause artifacts and distortion, making it challenging to identify the presence, absence, and extent of infarction. Moreover, stroke specialists are not always present during emergencies, and an accurate diagnosis may be difficult for non-specialists.
Here, we focus on computer-aided diagnosis (CAD). CAD uses image processing to detect and analyze lesions and is used by physicians to obtain a second opinion on diagnoses. Early CAD was based mainly on image analysis and machine learning [4]. Later, the concept of deep learning was proposed [5], and its application fields have broadened to include speech recognition, natural language processing, and image recognition [6]. In image recognition, deep learning has been actively used for classification, regression, and segmentation tasks [7]. In particular, segmentation has become an important task in the medical field for recognizing organs and understanding the location and extent of lesions. FCN [8], SegNet [9], and U-Net [10] are commonly used deep learning techniques for segmentation. In particular, U-Net has shown good performance in segmenting medical images and has been applied to many anatomical regions [11,12,13].
Since recognizing the location and extent of infarction is essential for diagnosis and treatment, several studies have been reported on the automatic detection and segmentation of cerebral infarction [14,15,16,17,18,19,20]. Rajini et al. [14] proposed a method for detecting cerebral infarction in CT images, and Barros et al. [15] proposed a method for segmenting the infarcted cerebral region using a convolutional neural network (CNN). In addition, Chen et al. [16] segmented the cerebral infarct region from a DWI using a CNN. Dolz et al. [17] proposed a method for segmenting infarcted regions with U-Net using multiple image sequences. Paing et al. [19] recently proposed automated segmentation of the infarcted region using variational mode decomposition and U-Net. Segmentation performance was evaluated using a total of 239 cases from a public dataset, and the results showed high similarity to the gold standard. Furthermore, Zhang et al. [20] proposed stroke lesion detection using a deep learning model for object detection.
Generally, deep learning requires a large amount of training data. However, the collection of medical imaging data is sometimes limited by ethical and other issues. Data augmentation is often used to prevent the overfitting caused by small datasets; the number of images is increased by image manipulations such as rotation, enlargement, contraction, contrast change, and the addition of noise. However, such augmentation does not significantly change the nature of the lesion or the target structure, so it cannot be expected to increase the effective variety of the data.
To overcome this problem, we focused on generative adversarial networks (GANs) [21]. A GAN is a network model, proposed by Goodfellow et al. in 2014, that generates images similar to its training data. Recently, the deep convolutional GAN [22], information maximizing GAN [23], Wasserstein GAN [24], and CycleGAN [25] have been developed as derivative technologies. Among them, CycleGAN can transform images without paired training data; for example, it can convert MR images to CT images [26] and reduce noise [27]. Several studies on the mutual conversion of images using CycleGAN have been performed. However, to the best of our knowledge, only a few studies have used CycleGAN-generated images to enlarge the training data of deep learning models. If CycleGAN can generate cerebral infarction images from normal brain MR images, the increased variety of training data is expected to improve extraction and segmentation accuracy.
In this study, we aimed to develop a novel method for virtually generating cerebral infarction images from healthy images using CycleGAN and applying them to the U-Net training model to improve the cerebral infarction automatic extraction accuracy by U-Net.
Hereafter, U-Net with CycleGAN represents an approach that uses CycleGAN-generated images for training; U-Net without CycleGAN represents an approach that uses only real images.

2. Materials and Methods

2.1. Outline

Figure 1 shows an outline of the proposed method. Normal brain MR images were converted into cerebral infarction images using CycleGAN. Subsequently, U-Net was trained using the CycleGAN-generated images and images of actual patients with cerebral infarction, and the infarcted regions were automatically extracted.

2.2. Image Dataset

In this study, we collected DWI scans acquired at the Fujita Health University Hospital and Daido Hospital, including 64 healthy cases (1280 images) and 160 cerebral infarction cases (4788 images).
Figure 2 shows the distribution of the cases. The mean area of the infarcted region was 922.54 ± 1169.25 mm². The mean signal intensity was 91.78 ± 14.75 within the healthy region and 161.70 ± 24.81 within the infarcted region. Figure 2a shows the distribution of the differences between the signal intensities of the healthy and infarcted regions; each difference was calculated by subtracting the mean pixel intensity of the healthy region from that of the infarcted region. The mean difference was 69.91 ± 22.76.
As ground truth, binary images were created in which the pixel intensity of the infarcted regions was set to 255 and the background to 0. The infarcted regions were confirmed by a radiological technologist with more than 10 years of clinical experience. As a basic data augmentation technique, the number of images was doubled by a left-right flipping operation, as sketched below. Examples of the collected images and ground truth are shown in Figure 3.
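A minimal sketch of this flip-based augmentation, assuming each DWI slice and its ground truth are 2-D numpy arrays (the function name `augment_pair` is illustrative):

```python
import numpy as np

def augment_pair(image: np.ndarray, mask: np.ndarray):
    # Return the original slice/ground-truth pair plus its left-right
    # flipped copy, doubling the number of training images.
    return [(image, mask), (np.fliplr(image), np.fliplr(mask))]
```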

2.3. Generation of Pseudo Abnormal Images by CycleGAN

We generated pseudo patient images from healthy images using a domain translation technique. Domain translation techniques include Pix2Pix and CycleGAN. Pix2Pix requires paired images for training, but it is difficult to obtain paired healthy and diseased images of the same subject. Therefore, we used CycleGAN, which performs image translation without paired images. The structure of CycleGAN is shown in Figure 4.
The CycleGAN structure used in this study followed the algorithm proposed by Zhu et al. [25]. CycleGAN consists of two generators and two discriminators. The two generators convert one image group into the other. Each discriminator determines whether the images transformed by a generator are real or fake relative to the actual data, and each generator, in turn, is trained to perform transformations whose outputs are as close to real as possible. In addition to the adversarial loss used in normal GANs, CycleGAN uses a cycle consistency loss, which is calculated by converting a training image to the other domain and back again and comparing the reconstruction with the original.
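For reference, the objective from Zhu et al. [25] can be written as follows, where G: X→Y and F: Y→X are the two generators, D_X and D_Y are the discriminators, and λ weights the cycle consistency term:

```latex
\mathcal{L}_{\mathrm{cyc}}(G,F) =
  \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\!\left[\lVert F(G(x)) - x \rVert_{1}\right]
+ \mathbb{E}_{y \sim p_{\mathrm{data}}(y)}\!\left[\lVert G(F(y)) - y \rVert_{1}\right]

\mathcal{L}(G,F,D_X,D_Y) =
  \mathcal{L}_{\mathrm{GAN}}(G,D_Y,X,Y)
+ \mathcal{L}_{\mathrm{GAN}}(F,D_X,Y,X)
+ \lambda\,\mathcal{L}_{\mathrm{cyc}}(G,F)
```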
We prepared two image groups: 243 slices from 52 stroke patients with infarction and 300 slices from 15 of the 64 healthy cases acquired at Daido Hospital. CycleGAN was trained to convert between images of healthy and infarction cases. After training, pseudo stroke images were generated by feeding healthy images to the generator that converts healthy images into infarction images. Furthermore, we created ground truth for the generated pseudo infarction images, as sketched below. First, the infarcted region was identified by subtracting the pre-conversion image from the generated image. Then, the noise introduced by the subtraction was removed manually. Finally, as with the real images, a binary image was created in which the pixel intensity of the infarcted region was set to 255 and the background to 0.
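The subtraction step can be sketched as follows, assuming `generated` and `healthy` are the CycleGAN output slice and its source slice as uint8 arrays; the threshold value is an illustrative assumption, and residual noise was removed manually in the actual workflow:

```python
import numpy as np

def pseudo_ground_truth(generated: np.ndarray, healthy: np.ndarray,
                        threshold: int = 50) -> np.ndarray:
    # Subtract the pre-conversion image from the generated image to
    # isolate regions brightened by the healthy-to-infarction conversion.
    difference = generated.astype(np.int16) - healthy.astype(np.int16)
    mask = difference > threshold
    # Binarize: infarcted region = 255, background = 0, as for the real images.
    return np.where(mask, 255, 0).astype(np.uint8)
```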
A nine-block ResNet [28] was used for the CycleGAN generators, and the PatchGAN structure [29] was used for the discriminators. To train the CycleGAN, we implemented an original Python program using the deep learning libraries TensorFlow and Keras. The number of epochs was set to 200, the learning rate to 0.0002, and the batch size to 1. Training and testing were executed on a computer equipped with a graphics processing unit (NVIDIA GeForce GTX TITAN X).
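As an illustration of the discriminator side, the following is a minimal Keras sketch of a PatchGAN-style discriminator under the stated settings (Adam optimizer, learning rate 0.0002); the filter counts and single-channel 256 × 256 input are assumptions, and instance normalization is omitted for brevity:

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_patchgan_discriminator(input_shape=(256, 256, 1)) -> Model:
    inp = layers.Input(shape=input_shape)
    x = inp
    # Stacked stride-2 convolutions shrink the feature map while widening it.
    for filters in (64, 128, 256, 512):
        x = layers.Conv2D(filters, 4, strides=2, padding="same")(x)
        x = layers.LeakyReLU(0.2)(x)
    # One-channel output map: each unit judges one image patch as real or fake.
    out = layers.Conv2D(1, 4, padding="same")(x)
    return Model(inp, out)

disc = build_patchgan_discriminator()
disc.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=2e-4, beta_1=0.5),
             loss=tf.keras.losses.MeanSquaredError())  # least-squares GAN loss
```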

2.4. Extraction of Infarcted Region

To extract the infarcted region, U-Net, a deep learning model, was used for segmentation. In this preliminary study, we chose U-Net because it has achieved good results in many segmentation tasks. U-Net is a semantic segmentation method that was presented at the Medical Image Computing and Computer-Assisted Intervention (MICCAI) conference in 2015 and achieved good results in the cell segmentation challenge of the International Symposium on Biomedical Imaging in 2015. The network is an extension of the fully convolutional network and allows accurate segmentation with less training data.
The U-Net structure used in this study is shown in Figure 5; the network is named for its U-shaped structure. The left half of the U is called the encoder, and the right half is called the decoder. The encoder is composed of convolutional and pooling layers: the convolutional layers perform convolution operations to extract features from the given image, each followed by a rectified linear unit activation function, and the pooling layers use max-pooling to downscale the image features. The features are thus extracted and compressed through several convolutional and pooling layers. In the decoder, the features are upsampled by deconvolution; the high-resolution features obtained from the encoder are cropped and concatenated with the upsampled output at the same depth, and a convolutional operation is performed. This process ensures that information is propagated from the encoder to the decoder at all scales and that no information is lost during the down-sampling operations of the encoder.
The input was an image with a matrix size of height (H) = 256 and width (W) = 256, and output pixels with intensity values >0 were regarded as the infarcted region. Training was performed on 100 cases of cerebral infarction from Daido Hospital and Fujita Health University Hospital. Fifty test cases were randomly selected from the Fujita Health University Hospital cases that were not used for CycleGAN or U-Net training.
In this study, the encoder and decoder each consisted of five convolutional levels, with a batch normalization layer added at the end of each level. We developed an original Python program using TensorFlow and Keras to train the U-Net; a minimal sketch is shown below. The number of epochs was set to 300, with a learning rate of 0.0001 and a batch size of 32. Training and testing were executed on a computer equipped with a graphics processing unit (NVIDIA TITAN RTX).
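A minimal Keras sketch of such a five-level U-Net, assuming a single-channel 256 × 256 input; the filter counts and the sigmoid output are illustrative assumptions rather than the exact configuration used in this study:

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def conv_block(x, filters):
    # Two 3x3 convolutions with ReLU, then batch normalization at the end of the level.
    for _ in range(2):
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return layers.BatchNormalization()(x)

def build_unet(input_shape=(256, 256, 1)) -> Model:
    inp = layers.Input(shape=input_shape)
    skips, x = [], inp
    # Encoder: convolution blocks followed by max-pooling; keep features for skips.
    for filters in (64, 128, 256, 512):
        x = conv_block(x, filters)
        skips.append(x)
        x = layers.MaxPooling2D(2)(x)
    x = conv_block(x, 1024)  # bottleneck
    # Decoder: deconvolution, concatenation with encoder features, convolution.
    for filters, skip in zip((512, 256, 128, 64), reversed(skips)):
        x = layers.Conv2DTranspose(filters, 2, strides=2, padding="same")(x)
        x = layers.Concatenate()([x, skip])
        x = conv_block(x, filters)
    out = layers.Conv2D(1, 1, activation="sigmoid")(x)  # per-pixel infarct probability
    return Model(inp, out)

model = build_unet()
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
              loss="binary_crossentropy")
# model.fit(train_images, train_masks, epochs=300, batch_size=32)
```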

2.5. Evaluation Metrics

The detection and segmentation accuracies were evaluated to verify the effectiveness of the proposed method. First, we defined the criteria for evaluation as follows.
An infarcted region was counted as detected if the region extracted by U-Net overlapped the corresponding infarcted region in the ground truth; this evaluation was conducted in 3D.
The number of false positives (FPs) was calculated using the healthy images: the total number of extracted regions, counted automatically by 3D labeling, was divided by the number of cases to obtain the number of FPs per case. A sketch of this counting step follows below.
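The sketch assumes `pred_volume` is a binary numpy array obtained by stacking the U-Net output slices of one case:

```python
import numpy as np
from scipy import ndimage

def count_extracted_regions(pred_volume: np.ndarray) -> int:
    # Label 3-D connected components; each component is one extracted region.
    _, num_regions = ndimage.label(pred_volume)
    return num_regions

# FPs per case on healthy data = sum of extracted regions over all cases
# divided by the number of cases.
```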
The Dice index (DI) and Jaccard index (JI) were employed to evaluate the extraction accuracy. Both assess the similarity between the extracted region and the ground truth (the ideal infarction region) and were calculated using the following equations:
$$\mathrm{Dice\ index}(A,B)=\frac{2\left|A\cap B\right|}{\left|A\right|+\left|B\right|}$$
$$\mathrm{Jaccard\ index}(A,B)=\frac{\left|A\cap B\right|}{\left|A\cup B\right|}$$
where A indicates the ground truth region and B indicates the extracted region of the cerebral infarction.
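These two definitions translate directly into numpy, assuming `a` and `b` are boolean masks of the ground truth and the extracted region (and that neither mask is empty):

```python
import numpy as np

def dice_index(a: np.ndarray, b: np.ndarray) -> float:
    intersection = np.logical_and(a, b).sum()
    return 2.0 * intersection / (a.sum() + b.sum())

def jaccard_index(a: np.ndarray, b: np.ndarray) -> float:
    intersection = np.logical_and(a, b).sum()
    return intersection / np.logical_or(a, b).sum()
```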

3. Results

3.1. CycleGAN-Generated Images

Examples of the CycleGAN-generated cerebral infarction images are shown in Figure 6. Figure 6a shows healthy images before conversion by CycleGAN, and Figure 6b shows the corresponding pseudo cerebral infarction images. Cases 1 and 2 show images with large infarcted regions, and case 3 shows an image with a small infarcted region. Case 4 shows a failure example in which no infarct was generated and the pixel intensity values of the entire image were high.

3.2. Extraction of the Infarcted Regions

The results of the U-Net extraction of the infarcted region are shown in Figure 7, and the extraction accuracy and sensitivity of cerebral infarction are shown in Table 1. Regarding extraction accuracy, the DI was 0.473 for U-Net without CycleGAN and 0.553 for U-Net with CycleGAN, an improvement of approximately 0.08. Figure 8 compares the extraction accuracies for different numbers of training cases; for training sets of 25 to 100 cases, the extraction accuracy was higher when the CycleGAN-generated images were added to the training cases.
The U-Net cerebral infarction sensitivity was 0.94 without CycleGAN and 0.92 with CycleGAN. The detection results using 64 healthy cases with no infarcted regions are shown in Figure 9. When only real images were used for training, the number of FPs per case was 3.75. On the other hand, it was 1.23 when CycleGAN-generated images were introduced. The number of FPs was reduced by approximately 67% by introducing CycleGAN-generated images to the training cases.

4. Discussion

The CycleGAN-generated images ranged from small infarcts, such as lacunar infarcts, to large infarcts, such as cardiogenic cerebral emboli. Infarct images were also generated from brain slices at different heights, including infarcts along the lateral ventricles and in the cerebellum. Most of the generated images had high signal intensities within the infarcted region, and infarcts with low signal intensity were rarely generated, likely because the signal intensities of the infarcts used for training tended to be high. In some of the generated images, the signal intensity of the entire image increased and no infarct was generated.
We extracted the cerebral infarction regions using U-Net and compared the extraction accuracy and sensitivity when only actual cases of stroke patients were used for training with those when CycleGAN-generated images were added. The DI was 0.473 when using only real cases and 0.553 when CycleGAN-generated images were added; thus, the extraction accuracy improved when generated images were used for training. One reason for this improvement is that the characteristics of the infarcted regions extracted by U-Net changed when the CycleGAN-generated images were used together. Most of the CycleGAN-generated images had small, high-contrast infarcted regions; for infarcted regions whose signal intensity differed from that of the healthy region by more than 80, the DI of the U-Net trained with these images was 0.707. However, when the infarcted region was large and its internal signal was uneven, only the high-signal portion was extracted, underestimating the region in some cases.
The sensitivity for detecting infarcts was 0.940 when training on real cases only and 0.920 when CycleGAN-generated images were also used. The combined use of CycleGAN-generated images decreased the sensitivity because one small infarct that had been detected when only real cases were used was missed. In both settings, the sensitivity depended on the size of the infarcted area and the difference in pixel values between the normal parenchyma and the infarcted area. The sensitivity for infarcts with diameters >10 mm and pixel-intensity differences >50 was 0.976, whereas infarcts smaller than 10 mm or faint infarcts with pixel-intensity differences <50 were not extracted in some cases, and the sensitivity was 0.750. It is challenging to recognize the contrast with the surrounding normal parenchyma in cases of small or low-contrast infarcts.
When the number of FPs was compared, the number of FPs per case was 3.750 with only real cases and 1.234 with CycleGAN-generated images, a reduction of approximately 67%. FPs were caused by slight signal increases in the brain parenchyma and by linear high-signal artifacts in the peripheral areas of the brain parenchyma due to DWI distortion. In many cases, a region of the healthy brain parenchyma with a relatively high signal compared to its surroundings was mistakenly identified as an infarcted area. When the CycleGAN-generated images were included, such high-signal areas were rarely misidentified. The CycleGAN introduced in this study generated many patterns that resembled actual infarcts; therefore, the training data separated the characteristics of FPs and infarct patterns more clearly, and the U-Net trained on both generated and actual infarcts could distinguish between infarcts and FPs accurately. Although the detection of small and low-contrast infarcts, including those detected when training was performed only on real cases, still needs improvement, the combined use of CycleGAN-generated images is highly effective in removing FPs, and the method is effective overall.
Studies using deep learning require a large number of images for training, and studies using large datasets, multi-sequence images, and 3D networks have been conducted to extract cerebral infarctions [16,17,30]. The accuracy comparison with the other groups is shown in Table 2. Among these, a study using a 3D network achieved a DI of nearly 0.8, so our method is inferior to the 3D network in terms of extraction accuracy. Compared with Chen et al. [16], who also used DWI in a 2D network, our extraction accuracy was lower, but the sensitivity was the same and the number of FPs was lower. Because our method performs region extraction using only DWI, which is almost always acquired at any institution for acute stroke diagnosis, case collection is facilitated and detection is not hindered by missing data. Some studies have been performed to improve CycleGAN accuracy [31]; however, no studies have generated pseudo lesion images of patients. The results of our study suggest that performance can be improved even when only limited patient data are available, so the method can also be applied to rare diseases with only a few reported cases.
Regarding the limitations of this method, although cases from two facilities were used, the images used for CycleGAN training were excluded from the U-Net segmentation test data, so the U-Net test cases were obtained at a single facility only. In the future, it will be necessary to increase the number of cases and collect data from multiple facilities to verify the results. In addition, because we did not evaluate the detailed image quality of the generated images in this study, subjective and quantitative image-quality evaluations are considered essential. Furthermore, U-Net and CycleGAN were introduced as a preliminary study; improved variants of U-Net and CycleGAN, as well as other models, should be used in the future to improve the accuracy.

5. Conclusions

We developed a method to extract infarcted regions from head MR images using U-Net. Furthermore, the training images were augmented using CycleGAN. The results showed that the use of CycleGAN-generated images was effective for accurately extracting the infarcted region while maintaining the detection rate.

Author Contributions

Conceptualization: M.Y. and A.T.; methodology: M.Y. and A.T.; software: M.Y. and A.T.; validation: M.Y., K.K. and S.M.; formal analysis: M.Y.; investigation: M.Y. and A.T.; resources: A.T.; data curation: K.K. and S.M.; writing—original draft preparation: M.Y.; writing—review and editing: A.T., K.S. and H.F.; visualization: M.Y.; supervision: A.T.; project administration: A.T. and K.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki, and approved by the Institutional Review Board of Fujita Health University (HM20-489, 16 March 2021).

Informed Consent Statement

Informed consent was obtained via an opt-out process at the Fujita Health University Hospital and Daido Hospital, and all data were anonymized.

Data Availability Statement

The source code used to support the findings of this study will be available from the corresponding author upon request.

Acknowledgments

We are grateful to Ayumi Yamada and Yuya Onishi of Fujita Health University for helpful advice for this study.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Benjamin, E.J.; Blaha, M.J.; Chiuve, S.E.; Cushman, M.; Das, S.R.; Deo, R.; de Ferranti, S.D.; Floyd, J.; Fornage, M.; Gillespie, C.; et al. Heart disease and stroke statistics—2016 update: A report from the American Heart Association. Circulation 2017, 135, e38–e48.
2. Feigin, V.L.; Lawes, C.M.; Bennett, D.A.; Anderson, C. Stroke epidemiology: A review of population-based studies of incidence, prevalence, and case-fatality in the late 20th century. Lancet Neurol. 2003, 2, 43–53.
3. Lutsep, H.L.; Albers, G.W.; Decrespigny, A.; Kamat, G.N.; Marks, M.P.; Moseley, M.E. Clinical utility of diffusion-weighted magnetic resonance imaging in the assessment of ischemic stroke. Ann. Neurol. 1997, 41, 574–580.
4. Doi, K. Computer-aided diagnosis in medical imaging: Historical review, current status and future potential. Comput. Med. Imaging Graph. 2007, 31, 198–211.
5. Hinton, G.E.; Salakhutdinov, R.R. Reducing the dimensionality of data with neural networks. Science 2006, 313, 504–507.
6. LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444.
7. Hesamian, M.H.; Jia, W.; He, X.; Kennedy, P. Deep learning techniques for medical image segmentation: Achievements and challenges. J. Digit. Imaging 2019, 32, 582–596.
8. Long, J.; Shelhamer, E.; Darrell, T. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 3431–3440.
9. Badrinarayanan, V.; Kendall, A.; Cipolla, R. SegNet: A deep convolutional encoder-decoder architecture for image segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 2481–2495.
10. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional networks for biomedical image segmentation. In Proceedings of the Medical Image Computing and Computer-Assisted Intervention—MICCAI 2015, Munich, Germany, 5–9 October 2015; pp. 234–241.
11. Dong, H.; Yang, G.; Liu, F.; Mo, Y.; Guo, Y. Automatic brain tumor detection and segmentation using U-Net based fully convolutional networks. In Communications in Computer and Information Science; Springer: Cham, Switzerland, 2017; pp. 506–517.
12. Seo, H.; Huang, C.; Bassenne, M.; Xiao, R.; Xing, L. Modified U-Net (mU-Net) with incorporation of object-dependent high level features for improved liver and liver-tumor segmentation in CT images. IEEE Trans. Med. Imaging 2020, 39, 1316–1325.
13. Gaál, G.; Maga, B.; Lukács, A. Attention U-Net based adversarial architectures for chest X-ray lung segmentation. In Proceedings of the Workshop on Applied Deep Generative Networks Co-Located with the 24th European Conference on Artificial Intelligence, CEUR Workshop Proceedings 2692, Santiago de Compostela, Spain, 29 August–8 September 2020.
14. Rajini, N.H.; Bhavani, R. Computer aided detection of ischemic stroke using segmentation and texture features. Measurement 2013, 46, 1865–1874.
15. Barros, R.S.; Tolhuisen, M.; Boers, A.M.; Jansen, I.; Ponomareva, E.; Dippel, D.W.J.; Van Der Lugt, A.; Van Oostenbrugge, R.J.; Van Zwam, W.H.; Berkhemer, O.A.; et al. Automatic segmentation of cerebral infarcts in follow-up computed tomography images with convolutional neural networks. J. NeuroInt. Surg. 2019, 12, 848–852.
16. Chen, L.; Bentley, P.; Rueckert, D. Fully automatic acute ischemic lesion segmentation in DWI using convolutional neural networks. NeuroImage Clin. 2017, 15, 633–643.
17. Dolz, J.; Ben Ayed, I.; Desrosiers, C. Dense multi-path U-Net for ischemic stroke lesion segmentation in multiple image modalities. Lect. Notes Comput. Sci. 2019, 11383, 271–282.
18. Karthik, R.; Menaka, R.; Johnson, A.; Anand, S. Neuroimaging and deep learning for brain stroke detection—A review of recent advancements and future prospects. Comput. Methods Programs Biomed. 2020, 197, 105728.
19. Paing, M.; Tungjitkusolmun, S.; Bui, T.; Visitsattapongse, S.; Pintavirooj, C. Automated segmentation of infarct lesions in T1-weighted MRI scans using variational mode decomposition and deep learning. Sensors 2021, 21, 1952.
20. Zhang, S.; Xu, S.; Tan, L.; Wang, H.; Meng, J. Stroke lesion detection and analysis in MRI images based on deep learning. J. Healthc. Eng. 2021, 2021, 5524769.
21. Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative adversarial nets. Adv. Neural Inf. Process. Syst. 2014, 27, 2672–2680.
22. Radford, A.; Metz, L.; Chintala, S. Unsupervised representation learning with deep convolutional generative adversarial networks. In Proceedings of the 4th International Conference on Learning Representations (ICLR 2016), San Juan, PR, USA, 2–4 May 2016.
23. Chen, X.; Duan, Y.; Houthooft, R.; Schulman, J.; Sutskever, I.; Abbeel, P. InfoGAN: Interpretable representation learning by information maximizing generative adversarial nets. Adv. Neural Inf. Process. Syst. 2016, 29, 2180–2188.
24. Arjovsky, M.; Chintala, S.; Bottou, L. Wasserstein GAN. In Proceedings of the 34th International Conference on Machine Learning, Sydney, Australia, 6–11 August 2017; Volume 70, pp. 214–223.
25. Zhu, J.-Y.; Park, T.; Isola, P.; Efros, A.A. Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2223–2232.
26. Hiasa, Y.; Otake, Y.; Takao, M.; Matsuoka, T.; Takashima, K.; Carass, A.; Prince, J.L.; Sugano, N.; Sato, Y. Cross-modality image synthesis from unpaired data using CycleGAN. Adv. Data Min. Appl. 2018, 11037, 31–41.
27. Zhou, L.; Schaefferkoetter, J.D.; Tham, I.W.; Huang, G.; Yan, J. Supervised learning with CycleGAN for low-dose FDG PET image denoising. Med. Image Anal. 2020, 65, 101770.
28. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
29. Isola, P.; Zhu, J.-Y.; Zhou, T.; Efros, A.A. Image-to-image translation with conditional adversarial networks. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 5967–5976.
30. Zhang, R.; Zhao, L.; Lou, W.; Abrigo, J.; Mok, V.C.T.; Chu, W.C.W.; Wang, D.; Shi, L. Automatic segmentation of acute ischemic stroke from DWI using 3-D fully convolutional DenseNets. IEEE Trans. Med. Imaging 2018, 37, 2149–2160.
31. Sandfort, V.; Yan, K.; Pickhardt, P.J.; Summers, R.M. Data augmentation using generative adversarial networks (CycleGAN) to improve generalizability in CT segmentation tasks. Sci. Rep. 2019, 9, 16884.
32. Mitra, J.; Bourgeat, P.; Fripp, J.; Ghose, S.; Rose, S.; Salvado, O.; Connelly, A.; Campbell, B.; Palmer, S.; Sharma, G.; et al. Lesion segmentation from multimodal MRI using random forest following ischemic stroke. NeuroImage 2014, 98, 324–335.
33. Muda, A.F.; Saad, N.; Bakar, S.; Muda, S.; Abdullah, A. Brain lesion segmentation using fuzzy C-means on diffusion-weighted imaging. ARPN J. Eng. Appl. Sci. 2015, 10, 1138–1144.
Figure 1. Study outline schematic.
Figure 2. Distribution of cases. (a) Distribution by difference in pixel values; (b) Distribution by area of infarction.
Figure 3. Example of original images and ground truths. (a) Original image; (b) Ground truth.
Figure 4. CycleGAN structure.
Figure 5. U-Net structure.
Figure 6. Examples of cerebral infarction CycleGAN-generated images. (a) Healthy image; (b) Pseudo cerebral infarction image. (The generated images are shown at each brain height. (Case 1) medulla oblongata level; (Case 2) midbrain level; (Cases 3 and 4) cortical level.).
Figure 7. Extraction outcomes of the infarcted region. (a) Original image; (b) Ground truth; (c) U-Net without CycleGAN; (d) U-Net with CycleGAN.
Figure 8. Comparison of Dice index for different numbers of study cases.
Figure 9. Extraction outcomes of healthy cases. (a) Original image; (b) U-Net without CycleGAN; (c) U-Net with CycleGAN.
Table 1. Extraction accuracy and sensitivity of cerebral infarction.

                           U-Net without CycleGAN   U-Net with CycleGAN
Dice index                 0.473                    0.553
Jaccard index              0.360                    0.433
Sensitivity                0.940                    0.920
False positives per case   3.750                    1.234
Table 2. Comparison of accuracy with other groups (DI: Dice index; FP: false positives per case).

                            Sensitivity   DI      FP
RF (Mitra et al. [32])      –             0.53    –
FCM (Muda et al. [33])      –             0.73    0.16
CNN (Chen et al. [16])      0.94          0.67    3.27
U-Net (Dolz et al. [17])    –             0.635   –
U-Net (Paing et al. [19])   –             0.668   –
U-Net without CycleGAN      0.94          0.473   3.75
U-Net with CycleGAN         0.92          0.553   1.23
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
