Article

An Improved 3D Deep Learning-Based Segmentation of Left Ventricular Myocardial Diseases from Delayed-Enhancement MRI with Inclusion and Classification Prior Information U-Net (ICPIU-Net)

1 ImViA EA 7535 Laboratory, University of Burgundy, 21078 Dijon, France
2 National Engineering School of Sousse, University of Sousse, Sousse 4054, Tunisia
3 LASEE Laboratory, National Engineering School of Monastir, University of Monastir, Monastir 5000, Tunisia
* Author to whom correspondence should be addressed.
Sensors 2022, 22(6), 2084; https://doi.org/10.3390/s22062084
Submission received: 29 December 2021 / Revised: 18 February 2022 / Accepted: 1 March 2022 / Published: 8 March 2022
(This article belongs to the Section Biomedical Sensors)

Abstract

Accurate segmentation of the myocardial scar may provide relevant advances in predicting and controlling deadly ventricular arrhythmias in subjects with cardiovascular disease. In this paper, we propose the inclusion and classification of prior information U-Net (ICPIU-Net) architecture to efficiently segment the left ventricle (LV) myocardium, myocardial infarction (MI), and microvascular obstruction (MVO) tissues from late gadolinium enhancement magnetic resonance (LGE-MR) images. Our approach cascades two subnetworks that first segment the LV cavity and myocardium. We then use inclusion and classification constraint networks to improve the resulting segmentation of the diseased regions within the pre-segmented LV myocardium. This network incorporates the inclusion and classification information of the LGE-MRI to maintain topological constraints on the pathological areas. In the testing stage, the outputs of each segmentation network, obtained with specific parameters estimated during training, were fused using a majority voting technique for the final label prediction of each voxel in the LGE-MR image. The proposed method was validated by comparing its results to manual drawings by experts on 50 LGE-MR images. Importantly, compared to various deep learning-based methods participating in the EMIDEC challenge, the results of our approach show stronger agreement with manual contouring in segmenting myocardial diseases.

1. Introduction

Cardiovascular diseases (CVDs) are the leading cause of global mortality, with an estimated 17.9 million deaths in 2019, mainly due to myocardial infarction (MI). Radiologically diagnosing MI in its early phases plays a crucial role in guiding further patient treatment. In recent decades, automatic methods have been developed to improve the diagnosis and prognosis of CVDs. A crucial clinical parameter for evaluating the state of the heart after MI is the viability of the considered segment, i.e., whether the segment recovers its functionality upon revascularization.
Replacement myocardial fibrosis is a known substrate for malignant ventricular arrhythmias (VA), a prevalent cause of sudden cardiac death worldwide. Scar formation in the heart most commonly results from MI, an irreversible death of the contractile muscle associated with the occlusion of a coronary artery. MI occurs as a result of atherosclerosis, in which plaque builds up inside the artery walls. This build-up makes the arteries progressively narrower and slows blood flow, causing angina. Eventually, an area of cholesterol plaque can rupture inside a coronary artery. This rupture results in a blood clot forming on the plaque’s surface, which can then completely block blood flow through the artery. If the blockage is not remedied quickly, the heart muscle begins to die. The healthy heart tissue is then replaced by the infarcted area. In chronic MI, capillaries in the myocardial region remain obstructed after reperfusion, indicating severe ischemic disease. MVO, also called the no-reflow phenomenon, usually appears in a proportion of subjects with acute MI following reperfusion therapy of an occluded coronary artery [1]. Patients with MVO regions have larger infarct extents and increased mortality. Recently, computational modeling for the accurate characterization of myocardial scar geometry, volume, and heterogeneity in patients with chronic ischemic cardiomyopathy (IC) has shown potential to help clinicians determine the appropriate analysis and treatment of related rhythm disorders [2,3].
Two-dimensional (2D) LGE-MRI is the primary reference for recognizing myocardial scarring, which is enhanced by the retention of gadolinium-based contrast agents. LGE-MRI is a non-invasive technique performed approximately 10 min after the injection of a gadolinium-based contrast agent. Healthy and infarcted tissues are distinguished by their different wash-in and wash-out kinetics of the contrast agent. Moreover, with progress in MRI acquisition techniques, 3D LGE-MRI has emerged with improved spatial resolution, allowing accurate volumetric quantification of scar tissue [4,5,6,7]. As manual segmentation is tedious, subject to observer variability, and time-consuming, automated volume segmentation is highly desirable for this task. This increased interest is primarily justified by the success rates achieved by these methods. There are several studies on automatic MI segmentation methods that motivate our approach.
Considering the performance reached by deep-learning approaches in medical image analysis, this paper proposes an inclusion and classification of prior information U-Net-based network (ICPIU-Net) for the fully automatic and efficient segmentation of the healthy LV myocardium and LV myocardial diseases in LGE-MRI. Our approach integrates image features and a post-processing decision phase to aggregate the final prediction. This algorithm offers a novel way to quantify the presence of myocardial tissues in a set of contrast-enhanced acquisitions, which could lead to better prevention and higher survival rates for patients.
Figure 1 shows short-axis LGE-MRI at the base, middle, and apical ventricular levels, illustrating a large region of hyperenhancement (scar) with a hypo-enhanced central area (MVO).
The rest of the paper is structured as follows: In Section 2, we introduce the previous literature related to our study. In Section 3, we describe the new methodology used throughout this work. In Section 4, we present experimental results that allow the quantitative evaluation of the performance of our approach on the EMIDEC test dataset and a comparison between different methods. Finally, in Section 5, we summarize the main conclusions of the paper.

2. Related Work

Traditional scar segmentation studies were frequently based on intensity thresholding or clustering techniques that are sensitive to local intensity variations [8]. The predominant limitation of these techniques is that they typically require expert delineations of the region of interest to decrease the computational cost and the search space [9]. As a result, researchers have focused on deep learning-based cardiac segmentation approaches that are more convenient for clinical guidance. For instance, Moccia et al. [10] applied fully convolutional networks (FCNs) for MI-tissue segmentation from LGE-MRI. The authors investigated two segmentation algorithms. Segmentation results against expert contours showed that both algorithms identified scar tissues in LGE-MRI, particularly when the search area was limited to the myocardium only. Xu et al. [11] proposed a new joint motion feature-learning architecture based on deep learning and optical flow to accurately segment MI from non-contrast cardiac MRI. The validation results showed that the suggested architecture has a performance comparable to a human expert’s delineation (pixel-level accuracy: 95.03%, Kappa statistic: 0.91, Dice: 89.87%, and Hausdorff distance: 5.91 mm). De La Rosa et al. [12] proposed a deep learning-based method for the automatic segmentation and quantification of the scar and MVO tissues in LGE-MRI. Their approach is based on a cascade framework in which, firstly, healthy and diseased slices are distinguished by a convolutional neural network. Secondly, the MI is segmented by an initial fast coarse segmentation. The resulting segmentation is then refined by a boundary-voxel reclassification strategy to incorporate MVO tissues in the infarction segmentation. Compared to the reference techniques, the proposed network achieved the highest agreement in volumetric infarct segmentation with the manual delineations (p < 0.001). This method reached an average Dice coefficient of 77.22 ± 14.3% and a volumetric error of 1.0 ± 6.9 cm. Hao et al. [13] developed a multi-branch fusion architecture for automatic MI screening from 12-lead ECG images. Their method included a feature fusion and classification network. Extensive experiments demonstrated that the proposed architecture reached human-level performance on all four evaluation criteria (accuracy, sensitivity, specificity, and F1-score of 94.73%, 96.41%, 95.94%, and 93.79%, respectively).
U-Net [14] has become the most widespread variant of FCNs for biomedical image segmentation and is commonly employed in cardiology. The network uses skip connections between the down-sampling and up-sampling paths to recover the spatial context lost in the encoder, yielding more accurate segmentation. Several previous cardiac image-segmentation networks have used the 3D U-Net [15] and the 3D V-Net [16] as their base architectures, achieving efficient segmentation for several cardiac segmentation tasks [17,18,19]. Isensee et al. [20] introduced nnU-Net (no-new-U-Net) to automatically adapt preprocessing techniques and network architectures to a given medical dataset. nnU-Net has been successfully applied to several segmentation tasks [21,22]. Despite the potential of deep learning in many fields, few deep learning-based methodologies have been proposed in the literature for infarct segmentation from LGE-MRI. Fahmy et al. [23] used a U-Net architecture with 150 layers to automatically quantify LV mass and infarct volume on LGE images of 1041 subjects with hypertrophic cardiomyopathy (HCM). Their methodology reported DSCs of 82 ± 0.08% (per-patient) and 81 ± 0.11% (per-slice) for the LV quantification and 57 ± 0.23% (per-patient) and 58 ± 0.28% (per-slice) for MI segmentation. In another study, Zabihollahy et al. [24] developed a U-Net-based network to accurately segment LV myocardium and infarct borders from 3D LGE-CMR images of 34 subjects with IC. Two cascaded subnets were used to segment the LV myocardium and to quantify the MI region within the segmented LV myocardium. Three U-Nets were trained in each subnet using slices extracted from the coronal, axial, and sagittal planes. The proposed network reached a DSC of 88.61 ± 2.54% for LV infarct segmentation on the 3D test dataset. Recently, Arega et al. [25] proposed a segmentation network that first generates uncertainty estimates during its learning process using the Monte Carlo dropout method and then integrates the uncertainty information into the loss function for better segmentation results. The proposed model showed accurate segmentation of all myocardial regions.
The major disadvantage of adopting 2D and 3D FCNs is that they are usually trained with cross-entropy, soft-Dice, or compound loss functions. These losses neglect high-level features that represent the implicit anatomical structures, such as their shape or topology and the spatial relationships between tissues [26]. Likewise, the U-Net architecture does not enforce contextual or anatomical consistency. Several research works focus on alleviating this challenge by incorporating further prior information to improve network robustness and produce plausible segmentations [27]. Prior knowledge can be introduced into segmentation in many ways, such as by appending it to the loss function as a penalty term or by imposing anatomical or contextual constraints. Many researchers have used autoencoders to extract semantic feature information from input images or labels to steer cardiac image segmentation [28,29,30]. The contextual information can include shape priors to guide the segmentation results toward a ground truth shape. Oktay et al. [31] modified the decoder layers of a U-Net architecture to embed prior information through super-resolution gold standard maps using cardiac cine MRI. Zotti et al. [32] developed a grid-Net-based network to segment heart structures from cardiac cine MRI. Their model integrates cardiac shape prior information to encode the likelihood of a 3D point belonging to a given class. More recently, El Jurdi et al. [33] incorporated position and shape priors into the learning phase by inserting bounding filters on the skip connections of a U-Net model. Duan et al. [34] introduced a shape-constrained bi-ventricular segmentation technique: first, a multi-task network is used to localize specific landmarks; then, these landmarks are employed to initialize atlas propagation during the segmentation step to improve segmentation quality. Such networks can also be adapted to improve the spatial, temporal, and topological consistency of segmentation predictions across the cardiac cycle [35,36,37,38,39,40].
Different studies based on deep learning models evaluating infarct segmentation from LGE-MRI have been included in the EMIDEC challenge (http://emidec.com/) (accessed on 1 April 2020). Camarasa et al. [41] presented two approaches to evaluate whether the uncertainty of an auxiliary unsupervised task is helpful for MI segmentation. Their baseline method first determined an ROI centered on the non-background labels and then used a U-Net architecture to segment all myocardial regions within that ROI. Feng et al. [42] developed an automatic LGE-MRI segmentation model using: (a) rotation-based augmentation to force the algorithm to ignore the image orientation and learn the anatomical and contrast relationships; (b) a dilated 2D U-Net to increase the robustness of the network against the misalignment of different slices. The authors applied weighted cross-entropy and soft-Dice loss functions to relieve the class imbalance problem. They also favored slices containing damaged tissues. Girum et al. [43] proposed a two-stage CNN to first segment the anatomical structures and then the pathological regions from LGE-MRI. The segmented myocardium from the anatomical network is further used to refine the pathological network’s segmentation, thus producing the final four-class segmentation output. Huellebrand et al. [44] compared a hybrid mixture-model approach with two U-Net segmentations. The proposed mixture model is inspired by [45] and adapted to the EMIDEC data. This algorithm differentiates the infarct regions based on the intensity distribution. The authors showed that a better segmentation is achieved using a mixture of Rayleigh and Gaussian distributions rather than a mixture of Rician and Gaussian distributions. In addition, they realigned the image slices to compensate for respiratory motion. Yang and Wang [46] developed an improved hybrid U-Net architecture for myocardial segmentation in LGE-MRI. The modified U-Net incorporated a squeeze-and-excitation residual (SE-Res) module in the encoder and a selective kernel (SK) block in the decoder. Zhang [47] proposed a cascaded convolutional neural network to automatically segment myocardial zones from LGE-MRI; this model achieved the best segmentation performance in the challenge. The winner first employed a 2D U-Net focusing on intra-slice information for a preliminary segmentation and then a 3D U-Net focusing on volumetric spatial information for a refined segmentation based on both the original volume and the 2D segmentation. Finally, post-processing that removes scattered pixels from the last segmentation is applied to produce the final result. Zhou et al. [48] developed an anatomy prior-based network, which combines the U-Net segmentation architecture with attention blocks. They also presented a neighborhood penalty strategy to take into account the inclusion relationship between the healthy myocardium and damaged areas, and a data-augmentation technique based on the mix-up strategy [49] to interpolate two images and their corresponding segmentation maps.

3. Materials and Methods

3.1. Study Subjects and Data Acquisition

The LV myocardial diseases LGE-MRI dataset used in our experiments is provided by the EMIDEC challenge [50]. The data were acquired with 1.5 T and 3 T Siemens MRI scanners. Additional data (12 features), including physiological and patient medical background information, were also provided but not used in the present study. The training dataset includes 100 subjects, among them 67 scar cases and 33 normal cases. The EMIDEC test dataset consists of 50 exams divided into 1/3 healthy cases and 2/3 pathological myocardial cases (Table 1). All subjects underwent a standardized LGE-MRI imaging protocol. Each patient’s LGE-MRI contains a series of 5–10 short-axis slices covering the whole LV myocardium from base to apex. Manual boundaries (LV cavity, LV healthy myocardium, scar, and MVO), delineated by a biophysicist with deep expertise in medical imaging, are provided for the training set.

3.2. Pre-Processing

Data pre-processing is an important part of the pipeline. We applied the same pre-processing steps to the test data as to the training data. It includes the following tasks.
To reduce the class imbalance in the medical images and eliminate unimportant anatomical structures, all images were cropped automatically to a region centered on the centroid of the LV cavity. The LGE-MRI cropping was fully automatic. Cropping reduces both the background class and the computational time needed to train our model.
Since the EMIDEC data shape varies between subjects, it is necessary to alleviate the shape mismatch by reshaping each patient’s image volume. Thus, all MR volumes are reshaped to 96 × 96 × 16 by appending empty slices [51].
A standard adaptive histogram equalization algorithm is then used to enhance the local image contrast [52,53] and, hence, improve the efficiency of the training process. We also applied non-local means denoising [54] to all the data to reduce noise.
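A minimal sketch of this pre-processing pipeline is given below. It is illustrative only: the crop size, the 96 × 96 × 16 target shape, and the library calls (scikit-image CLAHE and non-local means, SciPy centroid) are assumptions based on the description above, not the authors' released code, and the LV cavity mask is assumed to come from a preliminary segmentation.

```python
import numpy as np
from scipy.ndimage import center_of_mass
from skimage.exposure import equalize_adapthist
from skimage.restoration import denoise_nl_means, estimate_sigma

TARGET_SHAPE = (96, 96, 16)  # in-plane size x number of short-axis slices

def preprocess_volume(volume, lv_cavity_mask, crop_size=96):
    """Crop around the LV cavity centroid, pad to a fixed shape,
    then apply CLAHE and non-local means denoising slice by slice."""
    # 1) Crop each slice to a square window centered on the LV cavity centroid.
    cy, cx = center_of_mass(lv_cavity_mask)[:2]
    y0 = int(np.clip(round(cy) - crop_size // 2, 0, max(volume.shape[0] - crop_size, 0)))
    x0 = int(np.clip(round(cx) - crop_size // 2, 0, max(volume.shape[1] - crop_size, 0)))
    cropped = volume[y0:y0 + crop_size, x0:x0 + crop_size, :]

    # 2) Pad with empty slices (and borders, if needed) up to 96 x 96 x 16.
    pad = [(0, max(t - s, 0)) for s, t in zip(cropped.shape, TARGET_SHAPE)]
    padded = np.pad(cropped, pad, mode="constant")[:96, :96, :16]

    # 3) Contrast-limited adaptive histogram equalization per slice [52],
    #    followed by non-local means denoising [54].
    out = np.zeros_like(padded, dtype=np.float32)
    for z in range(padded.shape[2]):
        sl = padded[..., z].astype(np.float32)
        sl = (sl - sl.min()) / (np.ptp(sl) + 1e-8)       # rescale to [0, 1]
        sl = equalize_adapthist(sl)                       # CLAHE
        sigma = float(np.mean(estimate_sigma(sl)))
        sl = denoise_nl_means(sl, h=1.15 * sigma, fast_mode=True)
        out[..., z] = sl
    return out
```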

3.3. Architecture of ICPIU-Net

Figure 2 presents the pipeline of the ICPIU-Net network, which includes two major stages (Anatomical-Net, based on nnU-Net [20], and Pathology-Net), since the myocardial diseases (MI and MVO) are guaranteed to lie within the LV myocardium. By segmenting the LV myocardium and cavity first, we can exclude other hyper-enhanced and hypo-enhanced tissues of the LGE-MRI.
A schematic flowchart of the ICPIU-Net network is displayed in Figure 3. In the training step, Anatomical-Net and Pathology-Net were trained separately on 100 LGE-MRI volumes. In the testing step, 50 LGE-MR test images were supplied to the trained algorithm to generate the corresponding segmentation maps, which were then fused through a majority voting technique to yield the final label prediction of the myocardial segmentation output. Each stage is detailed in the following subsections.

3.3.1. Anatomical Network

The segmentation networks used in the anatomical network are based on nnU-Net [20]. nnU-Net is a fully automatic segmentation framework based on the widely used U-Net [14] architecture. Similar to U-Net, it uses convolutional layers between poolings in the down-sampling path and deconvolution operations in the up-sampling path. However, it replaces the classical rectified linear unit (ReLU) activation functions with leaky ReLUs (negative slope of $10^{-2}$) and uses instance normalization [55] rather than the more common batch normalization (BN) [56]. The order of operations is as follows: convolution, instance normalization, leaky ReLU. In addition, the downsampling is performed using strided convolutions instead of max-pooling. nnU-Net ensembles 2D U-Net and 3D U-Net networks. The 2D U-Net is trained on whole slices to concentrate on intra-slice information, while the 3D U-Net is used to extract volumetric spatial features. This design aims to restrict the impact of intra-slice heterogeneity and take into account the volumetric information for more accurate segmentation. The cross-validation results automatically determine the best ensemble to use for test-time prediction.
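For illustration, a single nnU-Net-style encoder stage with the convolution, instance normalization, leaky ReLU ordering and strided-convolution downsampling could look like the following PyTorch sketch. It is a simplification of the nnU-Net building block, not the authors' exact configuration; the channel count and the two-convolution layout are assumptions.

```python
import torch
import torch.nn as nn

class ConvBlock3D(nn.Module):
    """Two conv -> instance norm -> leaky ReLU units; the first convolution
    uses stride 2 to replace max-pooling for downsampling (as in nnU-Net)."""
    def __init__(self, in_ch, out_ch, downsample=True):
        super().__init__()
        stride = 2 if downsample else 1
        self.block = nn.Sequential(
            nn.Conv3d(in_ch, out_ch, kernel_size=3, stride=stride, padding=1),
            nn.InstanceNorm3d(out_ch, affine=True),
            nn.LeakyReLU(negative_slope=1e-2, inplace=True),
            nn.Conv3d(out_ch, out_ch, kernel_size=3, padding=1),
            nn.InstanceNorm3d(out_ch, affine=True),
            nn.LeakyReLU(negative_slope=1e-2, inplace=True),
        )

    def forward(self, x):
        return self.block(x)

# Example: one downsampling stage applied to a 96 x 96 x 16 LGE-MRI volume.
x = torch.randn(1, 1, 16, 96, 96)          # (batch, channels, D, H, W)
feat = ConvBlock3D(1, 32)(x)               # -> (1, 32, 8, 48, 48)
```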
The anatomical network uses 2D, 3D, and cascaded U-Nets to alleviate the usual shortcoming of a 3D U-Net architecture on datasets with large images. In the cascaded U-Net, a 3D U-Net is first trained on 3D down-sampled images for a preliminary segmentation. The resulting segmentations are then up-sampled to the original input resolution and passed to a second 3D U-Net, trained on patches at full resolution, for the final segmentation prediction.
The anatomical network was implemented on an NVIDIA Tesla V100 with four embedded GPUs using the PyTorch source code (http://github.com/MIC-DKFZ/nnunet/) (accessed on 1 February 2021) based on the nnU-Net implementation [20]. To train the model, we employed five-fold cross-validation. The Adam optimizer was used with an initial learning rate of $3 \times 10^{-4}$. The learning rate is reduced during training using a polynomial learning rate scheduler. Short-axis slice and volume inputs are provided to the 2D and 3D networks, respectively. The sum of the cross-entropy loss ($L_{CE}$) and the Dice loss ($L_{DICE}$) is used as the final loss function to train the proposed network (Equation (1)):
$$L = L_{CE} + L_{DICE}. \tag{1}$$
The $L_{DICE}$ function is formulated as follows:
$$L_{DICE} = -\frac{2}{|K|} \sum_{k \in K} \frac{\sum_{i \in I} u_i^k v_i^k}{\sum_{i \in I} u_i^k + \sum_{i \in I} v_i^k}, \tag{2}$$
where $u$ is the softmax output of the proposed network and $v$ is a one-hot encoding of the ground truth labels delineated manually by the experts. Both $u$ and $v$ have shape $I \times K$, with $i \in I$ indexing the pixels in the training patch/batch and $k \in K$ indexing the classes.
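A PyTorch sketch of the combined loss in Equations (1) and (2) is shown below. It is a minimal re-implementation for illustration only; nnU-Net's released loss additionally handles deep supervision and batch-Dice options, which are omitted here.

```python
import torch
import torch.nn.functional as F

def soft_dice_loss(logits, target, eps=1e-6):
    """Soft Dice loss of Equation (2): u is the softmax output,
    v the one-hot ground truth, averaged over the K classes."""
    num_classes = logits.shape[1]
    u = torch.softmax(logits, dim=1)                       # (N, K, D, H, W)
    v = F.one_hot(target, num_classes).permute(0, 4, 1, 2, 3).float()
    dims = (0, 2, 3, 4)                                    # sum over batch and voxels
    intersection = torch.sum(u * v, dims)
    denominator = torch.sum(u, dims) + torch.sum(v, dims)
    return -torch.mean(2.0 * intersection / (denominator + eps))

def combined_loss(logits, target):
    """Equation (1): L = L_CE + L_DICE."""
    return F.cross_entropy(logits, target) + soft_dice_loss(logits, target)

# logits: raw network output (N, K, D, H, W); target: integer labels (N, D, H, W)
logits = torch.randn(2, 4, 16, 96, 96, requires_grad=True)
target = torch.randint(0, 4, (2, 16, 96, 96))
loss = combined_loss(logits, target)
loss.backward()
```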

3.3.2. Pathological Network

3D U-Net Architecture

Our pathological network uses a 3D U-Net as its main network and incorporates prior knowledge to further improve the result. The encoder consists of stacked convolutions followed by BN, ReLU, and max-pooling layers to capture contextual information. The decoder consists of deconvolutions, convolutions, BN, and ReLU layers for the accurate localization of patterns. Skip connections symmetrically concatenate contextual and positional features from the opposing contracting and expanding paths. The last convolution layer reduces the number of output channels to the number of predicted classes, generating a myocardial segmentation map with the same dimensions as the target map.
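A compact version of such a 3D U-Net (two resolution levels only, for brevity) might be sketched as follows; the channel counts and depth are illustrative assumptions and do not reproduce the exact configuration of Figure 4.

```python
import torch
import torch.nn as nn

def conv_bn_relu(in_ch, out_ch):
    """Stacked convolutions with batch normalization and ReLU."""
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, 3, padding=1), nn.BatchNorm3d(out_ch), nn.ReLU(inplace=True),
        nn.Conv3d(out_ch, out_ch, 3, padding=1), nn.BatchNorm3d(out_ch), nn.ReLU(inplace=True),
    )

class TinyUNet3D(nn.Module):
    def __init__(self, in_ch=1, num_classes=4, base=16):
        super().__init__()
        self.enc1 = conv_bn_relu(in_ch, base)
        self.pool = nn.MaxPool3d(2)
        self.enc2 = conv_bn_relu(base, base * 2)
        self.up = nn.ConvTranspose3d(base * 2, base, kernel_size=2, stride=2)
        self.dec1 = conv_bn_relu(base * 2, base)                  # skip concat doubles channels
        self.head = nn.Conv3d(base, num_classes, kernel_size=1)   # one channel per class

    def forward(self, x):
        e1 = self.enc1(x)                           # contextual features, full resolution
        e2 = self.enc2(self.pool(e1))               # deeper features at half resolution
        d1 = self.up(e2)                            # deconvolution back to full resolution
        d1 = self.dec1(torch.cat([d1, e1], dim=1))  # skip connection
        return self.head(d1)                        # map with the same spatial size as the input

y = TinyUNet3D()(torch.randn(1, 1, 16, 96, 96))     # -> (1, 4, 16, 96, 96)
```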

Network Implementation

We trained the proposed architecture with sampled patches of size 12 × 12 × 12 voxels and a batch size of 4. The training was performed using the Adam optimizer with a learning rate of $\alpha = 10^{-4}$ for a maximum of 200,000 iterations, taking a total time of 314 min. The pathological network was implemented in Python using the Chainer library.

Shape Reconstruction

To learn a latent representation from which the original cardiac shapes can be reconstructed with inclusion-constrained segmentation, we train the proposed 3D convolutional variational autoencoder (CVAE) through an iterative optimization process with expert ground truth. The 3D CVAE encodes the original cardiac shape information as a compact representation of reduced dimension, interprets the code, and decompresses it back to the input with minimal reconstruction loss. Thus, a 3D CVAE pre-trained on an ensemble of cardiac shapes is used as a constraint to regularize a segmentation output toward a proper shape [30]. The pre-trained 3D CVAE captures in-depth information about several categories of feature representations. Compared to [30], our CVAE learns not only the general shape but also the inclusion of the MVO within the MI and of the MI within the myocardium. The inclusion criterion helps produce plausible reconstructions while accurately localizing the borders of the cardiac tissues. Figure 4 depicts the configuration of the proposed 3D CVAE.
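The shape-reconstruction constraint can be illustrated with the simplified 3D convolutional variational autoencoder below. The depth, channel counts, and latent size are assumptions made for brevity; the actual CVAE of Figure 4 is deeper and trained on expert label maps rather than random tensors.

```python
import torch
import torch.nn as nn

class CVAE3D(nn.Module):
    """Encodes a multi-class label volume into a compact latent code and
    decodes it back, so that reconstructions respect learned cardiac shapes
    (including the MVO-within-MI-within-myocardium inclusion)."""
    def __init__(self, num_classes=4, latent_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(num_classes, 16, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Flatten(),
        )
        feat = 32 * 4 * 24 * 24                    # flattened size for 16 x 96 x 96 inputs
        self.fc_mu = nn.Linear(feat, latent_dim)
        self.fc_logvar = nn.Linear(feat, latent_dim)
        self.fc_dec = nn.Linear(latent_dim, feat)
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(32, 16, 2, stride=2), nn.ReLU(inplace=True),
            nn.ConvTranspose3d(16, num_classes, 2, stride=2),
        )

    def forward(self, seg_onehot):
        h = self.encoder(seg_onehot)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterization trick
        h = self.fc_dec(z).view(-1, 32, 4, 24, 24)
        return self.decoder(h), mu, logvar

# Shape check on a dummy one-hot-like input volume.
recon, mu, logvar = CVAE3D()(torch.randn(1, 4, 16, 96, 96))      # recon: (1, 4, 16, 96, 96)
```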

Class Constraint

Throughout the optimization process, we develop a binary classification module to distinguish patients with myocardial infarction from normal patients. Hence, we incorporate classification priors into the segmentation process to constrain the predicted tissues to this known category.
As Figure 4 illustrates, we propose the inclusion (IC, in cyan) and class constraint (CC, in purple) modules, connected as an extended network and to the bottom of the 3D U-Net, respectively, to penalize the final prediction of the myocardial segmentation output. These constraints are introduced as auxiliary $L_{IC}$ and $L_{CC}$ loss terms to emphasize the small tissue classes.
One of the main challenges in segmenting diseased myocardial tissues is the class imbalance in the dataset (i.e., the healthy LV myocardium contains far more voxels than the pathological regions). As shown in Table 1, the EMIDEC test dataset comprises 1/3 healthy patients and 2/3 infarcted patients. Training with the cross-entropy loss function alone may therefore not be effective, as the most frequent class can dominate training. That is why it is critical to optimize an appropriate loss function for accurate segmentation. We train our penalty-based pathological network with a combination of the multi-class mean intersection over union (IoU) loss $L_{Seg}$ [57], the inclusion constraint loss $L_{IC}$, and the class constraint loss $L_{CC}$. This final loss function is given in Equation (3):
$$L_{Final} = L_{Seg} + \lambda_{IC} \cdot L_{IC} + \lambda_{CC} \cdot L_{CC}, \tag{3}$$
where $\lambda_{IC}$ denotes the inclusion constraint penalty term and $L_{IC}$ is the L2 loss defined with the Frobenius norm in Equation (4), while $\lambda_{CC}$ denotes the class constraint penalty term and $L_{CC}$ is the cross-entropy loss. The weights $\lambda_{IC}$ and $\lambda_{CC}$ are regularized in the interval $(10^{-2}, 10^{-1})$.
$$L_{IC} = \sum_{i=1}^{n} \left\| R_{P_i} - R_{G_i} \right\|_F^2, \tag{4}$$
where $n$ is the total number of training volumes, $R_{G_i}$ is the reconstruction of the manual delineation, $R_{P_i}$ is the reconstruction of the segmentation output, and $\| \cdot \|_F$ denotes the Frobenius norm of an $m \times n$ matrix.
The multiclass $L_{Seg}$ function measures the overlap between two samples [58] and is incorporated into deep learning networks as follows:
$$L_{Seg} = L_{IoU} = -\frac{1}{C} \sum_{c \in C} \frac{\sum_i p_i^c \times p_i^{c*}}{\sum_i \left( p_i^c + p_i^{c*} - p_i^c \times p_i^{c*} \right)}, \tag{5}$$
where $p_i^c$ is the prediction score at position $i$ for class $c$, and $p_i^{c*}$ is the gold standard distribution, which is a delta function at the true label $y_i^*$.
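A sketch of the penalty-based loss of Equations (3)–(5) is given below. The pathological network was implemented in Chainer; this PyTorch version is only illustrative, and the default λ values shown are placeholders within the stated range.

```python
import torch
import torch.nn.functional as F

def mean_iou_loss(logits, target_onehot, eps=1e-6):
    """Multi-class soft IoU loss of Equation (5)."""
    p = torch.softmax(logits, dim=1)                       # prediction scores p_i^c
    dims = (0, 2, 3, 4)
    inter = torch.sum(p * target_onehot, dims)
    union = torch.sum(p + target_onehot - p * target_onehot, dims)
    return -torch.mean(inter / (union + eps))

def inclusion_loss(recon_pred, recon_gt):
    """Equation (4): squared Frobenius norm between the CVAE reconstructions
    of the predicted and of the manual segmentation."""
    return torch.sum((recon_pred - recon_gt) ** 2)

def final_loss(logits, target_onehot, recon_pred, recon_gt,
               class_logits, class_label, lam_ic=0.05, lam_cc=0.05):
    """Equation (3): L_Final = L_Seg + lambda_IC * L_IC + lambda_CC * L_CC."""
    l_seg = mean_iou_loss(logits, target_onehot)
    l_ic = inclusion_loss(recon_pred, recon_gt)
    l_cc = F.cross_entropy(class_logits, class_label)      # healthy vs. infarcted prior
    return l_seg + lam_ic * l_ic + lam_cc * l_cc
```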

3.4. Post-Processing

In our post-processing, we employed morphological operators, such as opening (kernel size of 3 × 3), to remove small predicted regions with fewer than 64 voxels from the predicted segmentation. In addition, we used connected-component analysis to further improve the segmentation of the scar and MVO. Finally, the cropped slices were resized back to the original input LGE-MRI size.
Majority voting over the fused outputs of our models, trained with different estimated parameters ($\lambda_{IC}$ and $\lambda_{CC}$), is used to improve the segmentation. Based on the best experimental results, we chose $\lambda_{IC} \in [10^{-2}, 5 \times 10^{-1}]$ and $\lambda_{CC} \in [10^{-2}, 5 \times 10^{-1}]$, both with an increment step of $7 \times 10^{-2}$, which gave the best fit. Indeed, tuning the hyperparameters in the range $10^{-2}$ to $5 \times 10^{-1}$ provides the best trade-off between the evaluation metrics. A voxel was labeled as infarct if at least three of the combined models predicted the infarct label for that voxel. The final model (or ensemble) yielding the highest Dice similarity coefficient (DSC) on the training set is automatically selected.
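The post-processing and label fusion can be sketched as follows. The 64-voxel threshold and three-vote rule follow the text; the specific SciPy calls, the 3 × 3 in-plane structuring element, and the (H, W, D) array layout are assumptions, not the authors' released implementation.

```python
import numpy as np
from scipy.ndimage import binary_opening, label

def clean_segmentation(seg, min_voxels=64):
    """Morphological opening per class and removal of connected components
    with fewer than 64 voxels from the predicted label map (H, W, D)."""
    cleaned = np.zeros_like(seg)
    for cls in np.unique(seg[seg > 0]):
        mask = binary_opening(seg == cls, structure=np.ones((3, 3, 1)))
        components, n = label(mask)
        for comp_id in range(1, n + 1):
            comp = components == comp_id
            if comp.sum() >= min_voxels:
                cleaned[comp] = cls
    return cleaned

def majority_vote(label_maps):
    """Fuse the outputs of the models trained with different (lambda_IC,
    lambda_CC) values: a voxel keeps a label if at least three models agree."""
    stacked = np.stack(label_maps)                    # (n_models, H, W, D)
    fused = np.zeros_like(stacked[0])
    for cls in np.unique(stacked[stacked > 0]):
        votes = (stacked == cls).sum(axis=0)
        fused[votes >= 3] = cls
    return fused
```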

4. Results and Discussion

4.1. Evaluation Metrics

As proposed by the challenge organizers, we employed region-based and volume-based evaluation metrics (in 3D) to assess the performance of the myocardial tissue segmentations generated by our approach. The DSC (Equation (6)) measures the spatial overlap between our model’s delineation and the gold standard, varying from 0 (mismatch) to 1 (excellent match). A different class of scores evaluates the distance between segmentation contours.
$$DSC = \frac{2 \, |P \cap G|}{|P| + |G|}, \tag{6}$$
where P and G represent the predicted and manual segmentation maps, respectively.
Given the sets of boundary points generated by our algorithm, $A = \{a_i : i = 1, \ldots, N_A\}$, and by the manual segmentation, $M = \{m_j : j = 1, \ldots, N_M\}$, the Hausdorff distance (HD) [59] is determined as follows:
$$HD = \max_{a_i \in A} \min_{m_j \in M} \left\| a_i - m_j \right\|. \tag{7}$$
The HD quantifies the degree of mismatch between $A$ and $M$ by computing the Euclidean distance from the point $a_i$ that is farthest from any point $m_j$.
The absolute volume difference (AVD) measures the difference between the LV volume generated by our method, $V_A$, and the manual LV volume delineated by the expert, $V_M$. In addition to the AVD, the absolute volume difference rate with respect to the volume of the myocardium (AVDR), Equation (8), was reported:
$$AVDR = \frac{AVD}{V_{MYO}}, \tag{8}$$
where $AVD = |V_A - V_M|$ and $V_{MYO}$ is the myocardium volume of the manual annotation.
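For reference, the region- and volume-based metrics of Equations (6)–(8) can be computed as in the sketch below. The official challenge scripts are linked in the next paragraph; this is only an equivalent, simplified formulation, and the voxel-volume argument is an assumption of how physical volumes would be obtained.

```python
import numpy as np
from scipy.spatial.distance import cdist

def dice(pred, gt):
    """Equation (6): 2|P ∩ G| / (|P| + |G|) for binary masks."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum() + 1e-8)

def hausdorff(pred_pts, gt_pts):
    """Equation (7): directed Hausdorff distance between the boundary point
    sets A (automatic) and M (manual), in the units of the point coordinates."""
    d = cdist(pred_pts, gt_pts)                 # pairwise Euclidean distances
    return d.min(axis=1).max()

def avd_avdr(pred, gt, myo_gt, voxel_volume_mm3):
    """Equation (8): absolute volume difference and its rate with respect to
    the myocardium volume of the manual annotation."""
    avd = abs(pred.sum() - gt.sum()) * voxel_volume_mm3
    return avd, avd / (myo_gt.sum() * voxel_volume_mm3)
```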
For consistency with other publications, the metrics were computed with the online evaluation platform (http://github.com/EMIDEC-Challenge/Evaluation-metrics/) (accessed on 1 April 2020). Region- and volume-based metrics were measured for each test patient, and we calculated their mean values to assess the performance of our approach for myocardial disease segmentation. We also used Bland–Altman plots [60] to assess the agreement between the LV volumes obtained with the proposed method and those obtained manually.
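A Bland–Altman plot of automatic versus manual volumes, such as those in Figure 8, can be produced as in the following matplotlib sketch (variable names and plot styling are illustrative).

```python
import numpy as np
import matplotlib.pyplot as plt

def bland_altman(auto_vol, manual_vol, title="LV myocardium volume (cm$^3$)"):
    """Plot the per-patient difference against the mean of the two measures,
    with the bias and the 95% limits of agreement (bias +/- 1.96 SD)."""
    auto_vol, manual_vol = np.asarray(auto_vol), np.asarray(manual_vol)
    mean = (auto_vol + manual_vol) / 2.0
    diff = auto_vol - manual_vol
    bias, sd = diff.mean(), diff.std(ddof=1)
    plt.scatter(mean, diff, s=15)
    plt.axhline(bias, color="b", linestyle="--", label=f"bias = {bias:.3f}")
    for lim in (bias + 1.96 * sd, bias - 1.96 * sd):
        plt.axhline(lim, color="r", linestyle="--")
    plt.xlabel("Mean of automatic and manual volume")
    plt.ylabel("Automatic minus manual volume")
    plt.title(title)
    plt.legend()
    plt.show()
```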

4.2. Results Analysis and Extensive Discussions

We compared the results of our proposed network to those of different previous methods used in the EMIDEC challenge, including Feng et al. [42], Huellebrand et al. [44], Yang et al. [46], Zhang [47], Camarasa et al. [41], Zhou et al. [48], and Girum et al. [43]. The LV myocardium, scar, and MVO were segmented using these methods on the same test dataset. We also compared the ICPIU-Net segmentation results to those of Brahim et al. [61], which relies only on a shape prior constraint for myocardial disease segmentation, and to the manual delineations. The results show that our approach performs well on all substructures. A statistical test was conducted for each model to check whether the difference in results between the coronary arteries is significant; no statistically significant difference was found.
A total of 100 exams with published labels were available to train our algorithm. We performed random five-fold cross-validation by shuffling the scan sequence and splitting the database into five folds to provide a more comprehensive evaluation of our network. Table 2 and Table 3 present the cross-validation results of our segmentation output and of two other networks.
The internal cross-validation metrics shown in Table 2 demonstrate that our approach can accurately segment each target tissue despite the small sample size. The relatively small standard deviations demonstrate the stability of our proposed method for the segmentation of myocardial diseases.
Table 3 gives a summary of the comparison study. From these results, our approach yielded the best clinical and geometrical metrics compared with the two existing networks.
Figure 5 shows the LV myocardium, infarct, and MVO segmentation results of Brahim et al. [61] and of our proposed ICPIU-Net for three different slices, randomly chosen from three patients of the testing dataset. We stacked the segmented 2D slices of each EMIDEC test case to reproduce a 3D surface rendering of the myocardial regions for visualization purposes. In comparison to Brahim et al. [61], our approach segmented the myocardial diseases more accurately. Visually, the segmentation generated by the proposed network closely matches the manual delineation for all the labeled regions. The test results show that our approach is comparatively accurate in segmenting scar tissue.
Figure 6 shows the segmentation results of the EMIDEC challengers, the ground truth mask, and our proposed ICPIU-Net for all slices of one patient from the testing dataset. Visually, our proposed network correctly depicted the myocardial structures and showed good agreement with the gold standard. In comparison to Zhang’s method, our approach demonstrated promising performance in segmenting the damaged myocardial areas from LGE-MRI.
As shown in Figure 7, qualitative evaluations demonstrate that our proposed network accurately segments infarcted patients, especially at the middle slices. The segmentation results achieved by our proposed ICPIU-Net approach are consistent with the expert delineations for both volumes. Most segmentation errors appear at the basal and apical short-axis slices.
Table 4 summarizes the quantitative results of our proposed method against those of the alternatives on the testing dataset. The experimental results demonstrate that the myocardium segmentation is globally satisfying, whereas the diseased areas (i.e., MI and MVO) are comparatively challenging to predict accurately. The evaluation was based on the clinical metrics most applied in cardiac clinical practice, the AVD and AVDR of the diseased areas (MI and MVO), and on the geometric metrics, the DSC for the different tissues and the HD for the myocardium. A ranking is computed for each metric, and the final ranking, representing the sum of the rankings over all evaluation metrics, is used to select the best model. Our ICPIU-Net reported a higher DSC and better AVD and AVDR than the other methods for MI and MVO segmentation. The second-best DSC for fully automated segmentation of myocardial diseases was reached by the network proposed by Zhang [47] (71.24% for scar and 78.51% for MVO). On the test set, our proposed method also achieved a DSC, AVD, and HD of 87.65%, 8863.41 mm³, and 13.10 mm for LV myocardium segmentation, respectively. The publicly available test database consists of 50 exams divided into 17 cases with normal MRI after injection of a contrast agent and 33 patients with myocardial infarction. A patient-by-patient study of the segmentations generated by the proposed network revealed that the infarct tissue could be correctly detected in 32 out of 33 pathological subjects from the test dataset.
We conducted a comprehensive ablation study of prior information modules to investigate their impact on the segmentation results. As shown in Table 5, the combination of IC and CC regularization penalty terms provided more plausible segmentation close to the manual delineation. The results have demonstrated the effectiveness of inclusion and class constraints to the segmentation task on the EMIDEC dataset.
The DSC, AVD, and AVDR metrics applied to the MVO segmentation challenge can be misleading, since the MVO represents only a small volume of the input LGE-MRI. Indeed, predicting the absence of MVO in all the data would still seemingly yield correct DSC and volume results. The detection accuracy, however, highlights the effectiveness of the proposed method in identifying MVO regions. Therefore, additional metrics for the MVO tissue are provided in Table 6. The results reveal the pertinence of the inclusion and class constraints in segmenting MVO tissues.
Figure 8 depicts the Bland–Altman plots for the proposed ICPIU-Net vs. expert manual LV volumes. In these plots, the dashed blue line shows the mean difference (bias), and the upper and lower red dashed lines show the upper and lower limits of agreement. Compared to the manually segmented volumes, our method’s mean bias in assessing the LV myocardium, MI, and MVO volumes was 4.9888 cm³, 1.2266 cm³, and 0.5112 cm³, respectively. Thus, our proposed network slightly overestimated the LV myocardial tissue volumes, resulting in a mean absolute LV volume percentage error of 8.12%.
The EMIDEC classification contest aims to classify the patients as healthy or infarcted. Table 7 provides the classification results of each method. The proposed network achieved the best results, with a pathology classification accuracy of 98%. Our approach failed on only 1 exam out of 50, which the challenge organizers considered an outstanding result.
Deep learning networks have significantly boosted state-of-the-art segmentation performance in cardiac MRI. Evaluation of the presence and extent of MI and MVO remains essential for assessing myocardial viability. Visual estimation by cardiologists remains the routine approach. However, an accurate automatic prediction, providing an objective assessment of the volume and percentage of damaged myocardium, plays a crucial role in treatment and in improving clinical outcomes. The proposed network achieved a Dice score of 0.8765 for the myocardium and 0.7336 for the infarcted tissue. Nevertheless, MI tissue segmentation still proved to be a challenging task compared to the myocardium. Our results demonstrate that automatic myocardial segmentation is now feasible. Still, the segmentation of diseased regions requires further development before being included in software solutions for clinical practice.

5. Conclusions

In this paper, we described a novel deep learning-based approach for the fully automated segmentation of myocardial tissues within the LV myocardium from LGE-MRI. The experimental results showed the relevance of our proposed architecture for clinical diagnosis and treatment planning. Our ICPIU-Net outperformed prior deep learning-based techniques in terms of segmentation accuracy. In building our approach, we addressed the critical class imbalance issue, due to the relatively small size of myocardial diseases compared to the healthy area of the myocardium, by constructing informative inclusion and classification constraints from pathological tissues. A decision-fusion technique was used to aggregate the predictions obtained with different parameters estimated during training into the final prediction. Nonetheless, not all MVO cases were identified in the subject cohort enrolled in this work (accuracy of 84.00%). Thus, further studies integrating the clinical metadata are needed to improve the segmentation of myocardial abnormalities.

6. Code Availability

Updated versions of the anatomical network can be found at https://github.com/mic-dkfz/nnunet (accessed on 1 February 2021). The pathological network repository is available for download at https://github.com/KhawlaMarrakchi/Pathological-ICPIU-Net (accessed on 16 February 2022).

Author Contributions

Conceptualization, K.B. and T.W.A.; methodology, K.B.; software, K.B.; validation, K.B. and F.M.; formal analysis, K.B.; investigation, K.B.; resources, F.M.; data curation, K.B.; writing—original draft preparation, K.B.; writing—review and editing, K.B., T.W.A., A.B., S.B. and F.M.; visualization, K.B. and F.M.; supervision, A.S. and F.M.; project administration, F.M.; funding acquisition, F.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data available in a publicly accessible repository: the EMIDEC segmentation archive. Publicly available datasets were analyzed in this study. These data can be found at http://emidec.com/dataset (accessed on 1 April 2020).

Acknowledgments

This work was partly supported by the French “Investissements d’Avenir” program, ISITE-BFC project (number ANR-15-IDEX-0003).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Abbas, A.; Matthews, G.; Brown, I.; Shambrook, J.; Peebles, C.; Harden, S. Cardiac MR assessment of microvascular obstruction. Br. J. Radiol. Suppl. 2015, 88, 20140470. [Google Scholar] [CrossRef]
  2. Arevalo, H.J.; Vadakkumpadan, F.; Guallar, E.; Jebb, A.; Malamas, P.; Wu, K.C.; Trayanova, N.A. Arrhythmia risk stratification of patients after myocardial infarction using personalized heart models. Nat. Commun. 2016, 7, 1–8. [Google Scholar] [CrossRef]
  3. Trayanova, N.A.; Pashakhanloo, F.; Wu, K.C.; Halperin, H.R. Imaging-based simulations for predicting sudden death and guiding ventricular tachycardia ablation. Circ. Arrhythm. Electrophysiol. 2017, 10, e004743. [Google Scholar] [CrossRef]
  4. Kawaji, K.; Tanaka, A.; Patel, M.B.; Wang, H.; Maffessanti, F.; Ota, T.; Patel, A.R. 3D late gadolinium enhanced cardiovascular MR with CENTRA-PLUS profile/view ordering: Feasibility of right ventricular myocardial damage assessment using a swine animal model. Magn. Reson. Imaging 2017, 39, 7–14. [Google Scholar] [CrossRef]
  5. Rajchl, M.; Stirrat, J.; Goubran, M.; Yu, J.; Scholl, D.; Peters, T.M.; White, J.A. Comparison of semi-automated scar quantification techniques using high-resolution, 3-dimensional late-gadolinium-enhancement magnetic resonance imaging. Int. J. Card. Imaging 2015, 31, 349–357. [Google Scholar] [CrossRef]
  6. Ukwatta, E.; Arevalo, H.; Li, K.; Yuan, J.; Qiu, W.; Malamas, P.; Wu, K.C.; Trayanova, N.A.; Vadakkumpadan, F. Myocardial infarct segmentation from magnetic resonance images for personalized modeling of cardiac electrophysiology. IEEE Trans. Med. Imaging 2015, 35, 1408–1419. [Google Scholar] [CrossRef] [Green Version]
  7. Usta, F.; Gueaieb, W.; White, J.A.; McKeen, C.; Ukwatta, E. Comparison of myocardial scar geometries from 2D and 3D LGE-MRI. In Proceedings of the Medical Imaging 2018: Biomedical Applications in Molecular, Structural, and Functional Imaging, Houston, TX, USA, 11–13 February 2018. [Google Scholar]
  8. Zabihollahy, F.; White, J.A.; Ukwatta, E. Myocardial scar segmentation from magnetic resonance images using convolutional neural network. In Proceedings of the Medical Imaging 2018: Computer-Aided Diagnosis, Houston, TX, USA, 10–15 February 2018. [Google Scholar]
  9. Carminati, M.C.; Boniotti, C.; Fusini, L.; Andreini, D.; Pontone, G.; Pepi, M.; Caiani, E.G. Comparison of image processing techniques for nonviable tissue quantification in late gadolinium enhancement cardiac magnetic resonance images. J. Thorac. Imaging 2016, 31, 168–176. [Google Scholar] [CrossRef] [Green Version]
  10. Moccia, S.; Banali, R.; Martini, C.; Muscogiuri, G.; Pontone, G.; Pepi, M.; Caiani, E.G. Development and testing of a deep learning-based strategy for scar segmentation on CMR-LGE images. Magn. Reson. Mater. Phys. Biol. Med. 2019, 32, 187–195. [Google Scholar] [CrossRef] [Green Version]
  11. Xu, C.; Xu, L.; Gao, Z.; Zhao, S.; Zhang, H.; Zhang, Y.; Du, X.; Zhao, S.; Ghista, D.; Liu, H.; et al. Direct delineation of myocardial infarction without contrast agents using a joint motion feature learning architecture. Med. Image Anal. 2018, 50, 82–94. [Google Scholar] [CrossRef]
  12. de la Rosa, E.; Sidibé, D.; Decourselle, T.; Leclercq, T.; Cochet, A.; Lalande, A. Myocardial infarction quantification from late gadolinium enhancement mri using top-hat transforms and neural networks. arXiv 2019, arXiv:1901.02911. [Google Scholar] [CrossRef]
  13. Hao, P.; Gao, X.; Li, Z.; Zhang, J.; Wu, F.; Bai, C. Multi-branch fusion network for Myocardial infarction screening from 12-lead ECG images. Comput. Methods Programs Biomed. 2020, 184, 105286. [Google Scholar] [CrossRef]
  14. Ronneberger, O.; Fischer, P.; Brox, T. U-net: Convolutional networks for biomedical image segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Munich, Germany, 5–9 October 2015; pp. 234–241. [Google Scholar]
  15. Çiçek, Ö.; Abdulkadir, A.; Lienkamp, S.S.; Brox, T.; Ronneberger, O. 3D U-Net: Learning dense volumetric segmentation from sparse annotation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Athens, Greece, 17–21 October 2016; pp. 424–432. [Google Scholar]
  16. Milletari, F.; Navab, N.; Ahmadi, S.A. V-net: Fully convolutional neural networks for volumetric medical image segmentation. In Proceedings of the 2016 Fourth International Conference on 3D vision (3DV), Stanford, CA, USA, 5–28 October 2016; pp. 565–571. [Google Scholar]
  17. Tao, Q.; Yan, W.; Wang, Y.; Paiman, E.H.; Shamonin, D.P.; Garg, P.; Plein, S.; Huang, L.; Xia, L.; Sramko, M.; et al. Deep learning–based method for fully automatic quantification of left ventricle function from cine MR images: A multivendor, multicenter study. Radiology 2019, 290, 81–88. [Google Scholar] [CrossRef] [Green Version]
  18. Xia, Q.; Yao, Y.; Hu, Z.; Hao, A. Automatic 3D atrial segmentation from GE-MRIs using volumetric fully convolutional networks. In Proceedings of the International Workshop on Statistical Atlases and Computational Models of the Heart, Granada, Spain, 16 September 2018; pp. 211–220. [Google Scholar]
  19. Vigneault, D.M.; Xie, W.; Ho, C.Y.; Bluemke, D.A.; Noble, J.A. Ω-net (omega-net): Fully automatic, multi-view cardiac MR detection, orientation, and segmentation with deep neural networks. Med. Image Anal. 2018, 48, 95–106. [Google Scholar] [CrossRef]
  20. Isensee, F.; Jaeger, P.F.; Kohl, S.A.; Petersen, J.; Maier-Hein, K.H. nnU-Net: A self-configuring method for deep learning-based biomedical image segmentation. Nat. Methods 2021, 18, 203–211. [Google Scholar] [CrossRef]
  21. Fatemeh, Z.; Nicola, S.; Satheesh, K.; Eranga, U. Ensemble U-net-based method for fully automated detection and segmentation of renal masses on computed tomography images. J. Med. Phys. 2020, 47, 4032–4044. [Google Scholar] [CrossRef]
  22. Ma, J.; Wang, Y.; An, X.; Ge, C.; Yu, Z.; Chen, J.; Zhu, Q.; Dong, G.; He, J.; He, Z.; et al. Toward data-efficient learning: A benchmark for COVID-19 CT lung and infection segmentation. J. Med. Phys. 2021, 48, 1197–1210. [Google Scholar] [CrossRef]
  23. Fahmy, A.S.; Rausch, J.; Neisius, U.; Chan, R.H.; Maron, M.S.; Appelbaum, E.; Menze, B.; Nezafat, R. Automated cardiac MR scar quantification in hypertrophic cardiomyopathy using deep convolutional neural networks. JACC Cardiovasc. Imaging 2018, 11, 1917–1918. [Google Scholar] [CrossRef]
  24. Zabihollahy, F.; Rajchl, M.; White, J.A.; Ukwatta, E. Fully automated segmentation of left ventricular scar from 3D late gadolinium enhancement magnetic resonance imaging using a cascaded multi-planar U-Net (CMPU-Net). J. Med. Phys. 2020, 47, 1645–1655. [Google Scholar] [CrossRef]
  25. Arega, T.W.; Bricq, S.; Meriaudeau, F. Leveraging Uncertainty Estimates to Improve Segmentation Performance in Cardiac MR. In Proceedings of the MICCAI UNSURE Workshop 2021, Uncertainty for Safe Utilization of Machine Learning in Medical Imaging, London, UK, 1 October 2021. [Google Scholar]
  26. El Jurdi, R.; Petitjean, C.; Honeine, P.; Cheplygina, V.; Abdallah, F. High-level prior-based loss functions for medical image segmentation: A survey. Comput. Vis. Image Underst. 2021, 210, 103248. [Google Scholar] [CrossRef]
  27. Taghanaki, S.A.; Abhishek, K.; Cohen, J.P.; Cohen-Adad, J.; Hamarneh, G. Deep semantic segmentation of natural and medical images: A review. Artif. Intell. Rev. 2021, 54, 137–178. [Google Scholar] [CrossRef]
  28. Oktay, O.; Bai, W.; Lee, M.; Guerrero, R.; Kamnitsas, K.; Caballero, J.; de Marvao, A.; Cook, S.; O’Regan, D.; Rueckert, D. Multi-input cardiac image super-resolution using convolutional neural networks. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Athens, Greece, 17–21 October 2016; pp. 246–254. [Google Scholar]
  29. Schlemper, J.; Oktay, O.; Bai, W.; Castro, D.C.; Duan, J.; Qin, C.; Hajnal, J.V.; Rueckert, D. Cardiac MR segmentation from undersampled k-space using deep latent representation learning. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Granada, Spain, 16–20 September 2018; pp. 259–267. [Google Scholar]
  30. Yue, Q.; Luo, X.; Ye, Q.; Xu, L.; Zhuang, X. Cardiac segmentation from LGE MRI using deep neural network incorporating shape and spatial priors. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Shenzhen, China, 13–17 October 2019; pp. 559–567. [Google Scholar]
  31. Oktay, O.; Ferrante, E.; Kamnitsas, K.; Heinrich, M.; Bai, W.; Caballero, J.; Cook, S.A.; De Marvao, A.; Dawes, T.; O‘Regan, D.P.; et al. Anatomically constrained neural networks (ACNNs): Application to cardiac image enhancement and segmentation. IEEE Trans. Med. Imaging 2017, 37, 384–395. [Google Scholar] [CrossRef] [Green Version]
  32. Zotti, C.; Luo, Z.; Lalande, A.; Jodoin, P.M. Convolutional neural network with shape prior applied to cardiac MRI segmentation. IEEE J. Biomed. Health Inform. 2018, 23, 1119–1128. [Google Scholar] [CrossRef]
  33. El Jurdi, R.; Petitjean, C.; Honeine, P.; Abdallah, F. Bb-unet: U-net with bounding box prior. IEEE J. Sel. Top. Signal Process. 2020, 14, 1189–1198. [Google Scholar]
  34. Duan, J.; Bello, G.; Schlemper, J.; Bai, W.; Dawes, T.J.; Biffi, C.; de Marvao, A.; Doumoud, G.; O’Regan, D.P.; Rueckert, D. Automatic 3D bi-ventricular segmentation of cardiac images by a shape-refined multi-task deep learning approach. IEEE Trans. Med. Imaging 2019, 38, 2151–2164. [Google Scholar] [CrossRef]
  35. Du, X.; Yin, S.; Tang, R.; Zhang, Y.; Li, S. Cardiac-DeepIED: Automatic pixel-level deep segmentation for cardiac bi-ventricle using improved end-to-end encoder-decoder network. IEEE J. Transl. Eng. Health Med. 2019, 7, 1–10. [Google Scholar] [CrossRef]
  36. Yan, W.; Wang, Y.; Li, Z.; Van Der Geest, R.J.; Tao, Q. Left ventricle segmentation via optical-flow-net from short-axis cine MRI: Preserving the temporal coherence of cardiac motion. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Granada, Spain, 16–20 September 2018; pp. 613–621. [Google Scholar]
  37. Savioli, N.; Vieira, M.S.; Lamata, P.; Montana, G. Automated segmentation on the entire cardiac cycle using a deep learning work-flow. In Proceedings of the 2018 Fifth International Conference on Social Networks Analysis, Management and Security (SNAMS), Valencia, Spain, 15–18 October 2018; pp. 153–158. [Google Scholar]
  38. Qin, C.; Bai, W.; Schlemper, J.; Petersen, S.E.; Piechnik, S.K.; Neubauer, S.; Rueckert, D. Joint learning of motion estimation and segmentation for cardiac MR image sequences. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Granada, Spain, 16–20 September 2018; pp. 472–480. [Google Scholar]
  39. Wolterink, J.M.; Leiner, T.; Viergever, M.A.; Išgum, I. Automatic segmentation and disease classification using cardiac cine MR images. In Proceedings of the International Workshop on Statistical Atlases and Computational Models of the Heart, Quebec City, QC, Canada, 10–14 September 2017; pp. 101–110. [Google Scholar]
  40. Clough, J.R.; Oksuz, I.; Byrne, N.; Schnabel, J.A.; King, A.P. Explicit topological priors for deep-learning based image segmentation using persistent homology. In Proceedings of the International Conference on Information Processing in Medical Imaging, Hong Kong, China, 2–7 June 2019; pp. 16–28. [Google Scholar]
  41. Camarasa, R.; Faure, A.; Crozier, T.; Bos, D.; de Bruijne, M. Uncertainty-Based Segmentation of Myocardial Infarction Areas on Cardiac MR Images. In Proceedings of the International Workshop on Statistical Atlases and Computational Models of the Heart, Lima, Peru, 4 October 2020; pp. 385–391. [Google Scholar]
  42. Feng, X.; Kramer, C.M.; Salerno, M.; Meyer, C.H. Automatic Scar Segmentation from DE-MRI Using 2D Dilated UNet with Rotation-Based Augmentation. In Proceedings of the International Workshop on Statistical Atlases and Computational Models of the Heart, Lima, Peru, 4 October 2020; pp. 400–405. [Google Scholar]
  43. Girum, K.B.; Skandarani, Y.; Hussain, R.; Grayeli, A.B.; Créhange, G.; Lalande, A. Automatic Myocardial Infarction Evaluation from Delayed-Enhancement Cardiac MRI Using Deep Convolutional Networks. In Proceedings of the International Workshop on Statistical Atlases and Computational Models of the Heart, Lima, Peru, 4 October 2020; pp. 378–384. [Google Scholar]
  44. Huellebrand, M.; Ivantsits, M.; Zhang, H.; Kohlmann, P.; Kuhnigk, J.M.; Kuehne, T.; Schönberg, S.; Hennemuth, A. Comparison of a Hybrid Mixture Model and a CNN for the Segmentation of Myocardial Pathologies in Delayed Enhancement MRI. In Proceedings of the International Workshop on Statistical Atlases and Computational Models of the Heart, Lima, Peru, 4 October 2020; pp. 319–327. [Google Scholar]
  45. Hennemuth, A.; Friman, O.; Huellebrand, M.; Peitgen, H.O. Mixture-model-based segmentation of myocardial delayed enhancement MRI. In Proceedings of the International Workshop on Statistical Atlases and Computational Models of the Heart, Nice, France, 5 October 2012; pp. 87–96. [Google Scholar]
  46. Yang, S.; Wang, X. A Hybrid Network for Automatic Myocardial Infarction Segmentation in Delayed Enhancement-MRI. In Proceedings of the International Workshop on Statistical Atlases and Computational Models of the Heart, Lima, Peru, 4 October 2020; pp. 351–358. [Google Scholar]
  47. Zhang, Y. Cascaded Convolutional Neural Network for Automatic Myocardial Infarction Segmentation from Delayed-Enhancement Cardiac MRI. In Proceedings of the International Workshop on Statistical Atlases and Computational Models of the Heart, Lima, Peru, 4 October 2020; pp. 328–333. [Google Scholar]
  48. Zhou, Y.; Zhang, K.; Luo, X.; Wang, S.; Zhuang, X. Anatomy Prior Based U-net for Pathology Segmentation with Attention. In Proceedings of the International Workshop on Statistical Atlases and Computational Models of the Heart, Lima, Peru, 4 October 2020; pp. 392–399. [Google Scholar]
  49. Zhang, H.; Cisse, M.; Dauphin, Y.N.; Lopez-Paz, D. mixup: Beyond empirical risk minimization. arXiv 2017, arXiv:1710.09412. [Google Scholar]
  50. Lalande, A.; Chen, Z.; Decourselle, T.; Qayyum, A.; Pommier, T.; Lorgis, L.; de la Rosa, E.; Cochet, A.; Cottin, Y.; Ginhac, D.; et al. Emidec: A database usable for the automatic evaluation of myocardial infarction from delayed-enhancement cardiac MRI. Data 2020, 5, 89. [Google Scholar] [CrossRef]
  51. Qayyum, A.; Lalande, A.; Meriaudeau, F. Automatic segmentation of tumors and affected organs in the abdomen using a 3D hybrid model for computed tomography imaging. Comput. Biol. Med. 2020, 127, 104097. [Google Scholar] [CrossRef]
  52. Zuiderveld, K. Contrast limited adaptive histogram equalization. J. Geom. Graph. 1994, IV, 474–485. [Google Scholar]
  53. Xie, W.Q.; Zhang, X.P.; Yang, X.M.; Liu, Q.S.; Tang, S.H.; Tu, X.B. 3D size and shape characterization of natural sand particles using 2D image analysis. Eng. Geol. 2020, 279, 105915. [Google Scholar] [CrossRef]
  54. Buades, A.; Coll, B.; Morel, J.M. A non-local algorithm for image denoising. In Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05), San Diego, CA, USA, 20–25 June 2005; pp. 60–65. [Google Scholar]
  55. Ulyanov, D.; Vedaldi, A.; Lempitsky, V. Instance normalization: The missing ingredient for fast stylization. arXiv 2016, arXiv:1607.08022. [Google Scholar]
  56. Ioffe, S.; Szegedy, C. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of the International Conference on Machine Learning, Lille, France, 6–11 July 2015; pp. 448–456. [Google Scholar]
  57. Kervadec, H.; Bouchtiba, J.; Desrosiers, C.; Granger, E.; Dolz, J.; Ayed, I.B. Boundary loss for highly unbalanced segmentation. In Proceedings of the International Conference on Medical Imaging with Deep Learning, London, UK, 8–10 July 2019; pp. 285–296. [Google Scholar]
  58. Yuan, Y.; Chao, M.; Lo, Y.C. Automatic skin lesion segmentation using deep fully convolutional networks with jaccard distance. IEEE Trans. Med. Imaging 2017, 36, 1876–1886. [Google Scholar] [CrossRef]
  59. Menze, B.H.; Jakab, A.; Bauer, S.; Kalpathy-Cramer, J.; Farahani, K.; Kirby, J.; Burren, Y.; Porz, N.; Slotboom, J.; Wiest, R.; et al. The multimodal brain tumor image segmentation benchmark (BRATS). IEEE Trans. Med. Imaging 2014, 34, 1993–2024. [Google Scholar] [CrossRef]
  60. Giavarina, D. Understanding bland altman analysis. Biochem. Med. 2015, 25, 141–151. [Google Scholar] [CrossRef] [Green Version]
61. Brahim, K.; Qayyum, A.; Lalande, A.; Boucher, A.; Sakly, A.; Meriaudeau, F. A 3D Network Based Shape Prior for Automatic Myocardial Disease Segmentation in Delayed-Enhancement MRI. IRBM 2021, 42, 424–434.
62. Lalande, A.; Chen, Z.; Pommier, T.; Decourselle, T.; Qayyum, A.; Salomon, M.; Ginhac, D.; Skandarani, Y.; Boucher, A.; Brahim, K.; et al. Deep Learning methods for automatic evaluation of delayed enhancement-MRI. The results of the EMIDEC challenge. arXiv 2021, arXiv:2108.04016.
63. Lourenço, A.; Kerfoot, E.; Grigorescu, I.; Scannell, C.M.; Varela, M.; Correia, T.M. Automatic myocardial disease prediction from delayed-enhancement cardiac mri and clinical information. In Proceedings of the International Workshop on Statistical Atlases and Computational Models of the Heart, Lima, Peru, 4 October 2020; pp. 334–341.
64. Ivantsits, M.; Huellebrand, M.; Kelle, S.; Schönberg, S.O.; Kuehne, T.; Hennemuth, A. Deep-learning-based myocardial pathology detection. In Proceedings of the International Workshop on Statistical Atlases and Computational Models of the Heart, Lima, Peru, 4 October 2020; pp. 369–377.
65. Sharma, R.; Eick, C.F.; Tsekos, N.V. SM2N2: A Stacked Architecture for Multimodal Data and Its Application to Myocardial Infarction Detection. In Proceedings of the International Workshop on Statistical Atlases and Computational Models of the Heart, Lima, Peru, 4 October 2020; pp. 342–350.
66. Shi, J.; Chen, Z.; Couturier, R. Classification of Pathological Cases of Myocardial Infarction Using Convolutional Neural Network and Random Forest. In Proceedings of the International Workshop on Statistical Atlases and Computational Models of the Heart, Lima, Peru, 4 October 2020; pp. 406–413.
Figure 1. Short-axis LGE-MR images showing MI (blue triangle) and MVO (yellow triangle).
Figure 2. Workflow of our ICPIU-Net approach for fully automatic myocardial disease segmentation. The red, green, blue, and yellow colors show the LV cavity, the LV myocardium, scar, and MVO, respectively.
Figure 3. Block diagram of the ICPIU-Net network.
Figure 4. The overall architecture of our pathological segmentation network. The number of channels is indicated above each feature map. The classification branch is applied only during the training stage to deeply supervise the segmentation network.
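To make the idea of a training-only classification branch concrete, the PyTorch sketch below attaches a globally pooled classification head to the bottleneck of a generic encoder–decoder and adds its loss only during training. This is an illustrative sketch, not the authors' implementation: the `backbone` interface, the channel count, `num_disease_classes`, and the weight `lambda_cls` are assumptions.

```python
import torch.nn as nn

class SegWithAuxClassifier(nn.Module):
    """Segmentation network with a training-only classification branch
    (illustrative sketch; not the exact ICPIU-Net architecture)."""

    def __init__(self, backbone, bottleneck_channels, num_disease_classes):
        super().__init__()
        # `backbone` is assumed to return (segmentation logits, bottleneck features).
        self.backbone = backbone
        self.cls_head = nn.Sequential(
            nn.AdaptiveAvgPool3d(1),   # global average pooling over the 3D bottleneck
            nn.Flatten(),
            nn.Linear(bottleneck_channels, num_disease_classes),
        )

    def forward(self, x):
        seg_logits, bottleneck = self.backbone(x)
        cls_logits = self.cls_head(bottleneck)   # used only for the auxiliary loss
        return seg_logits, cls_logits


def training_loss(seg_logits, seg_target, cls_logits, cls_target,
                  seg_criterion, lambda_cls=0.1):
    """Total loss = segmentation loss + weighted classification loss.
    At test time only seg_logits are used, so the branch adds no inference cost."""
    cls_criterion = nn.CrossEntropyLoss()
    return seg_criterion(seg_logits, seg_target) + lambda_cls * cls_criterion(cls_logits, cls_target)
```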
Figure 5. Qualitative results. The first four columns show the input LGE-MRI, the manual delineations, and the segmentations produced by the method of Brahim et al. [61] and by our ICPIU-Net approach for three slices extracted from the LGE-MRI of three patients. The fifth column illustrates a 3D view of the myocardial tissues predicted by our method. The LV cavity is displayed in red, the LV myocardium in green, the infarct in blue, and the MVO in yellow.
Figure 6. Segmentation results and the ground-truth mask on Case 119. (a) LGE-MRI, (b) ground truth, (c) Zhang [47], and (d) ICPIU-Net.
Figure 7. Examples of test segmentation results and ground truth at three different levels (base, middle, and apex) for slices from two patients (columns 1–3 from patient 1 and columns 4–6 from patient 2). Red: LV cavity; green: LV myocardium; blue: infarct; yellow: MVO.
Figure 8. Bland–Altman plots of the difference between the volumes produced by our method and the expert reference volumes, plotted against their mean. (a) LV myocardium volume, (b) infarct volume, and (c) MVO volume, each obtained with our ICPIU-Net approach.
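For readers who wish to reproduce this kind of agreement analysis, the following minimal sketch computes the Bland–Altman bias and 95% limits of agreement. It assumes two NumPy arrays of automatic and expert volumes in mm³; the function and variable names are ours, not taken from the paper.

```python
import numpy as np
import matplotlib.pyplot as plt

def bland_altman_plot(auto_vol, expert_vol, label="volume (mm$^3$)"):
    """Plot differences (auto - expert) against means, with bias and 95% limits of agreement."""
    auto_vol = np.asarray(auto_vol, dtype=float)
    expert_vol = np.asarray(expert_vol, dtype=float)
    mean = (auto_vol + expert_vol) / 2.0
    diff = auto_vol - expert_vol
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)   # half-width of the 95% limits of agreement

    plt.scatter(mean, diff, s=15)
    for y, style in [(bias, "-"), (bias + loa, "--"), (bias - loa, "--")]:
        plt.axhline(y, linestyle=style, color="gray")
    plt.xlabel(f"Mean {label}")
    plt.ylabel(f"Difference {label}")
    plt.show()
```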
Table 1. Stratification of the EMIDEC dataset.

EMIDEC Dataset (n = 150)   | Healthy Cases | Pathological: Infarcted Cases | Pathological: Infarcted + MVO (a Subclass of MI) Cases
Training dataset (n = 100) | 33            | 27                            | 40
Testing dataset (n = 50)   | 17            | 22                            | 11
Table 2. Internal quantitative assessment on 5-fold cross-validation results.

Targets    | Metrics    | Fold 0 | Fold 1 | Fold 2 | Fold 3 | Fold 4 | Average | Standard Deviation
Myocardium | DSC (%)    | 95.38  | 95.07  | 95.21  | 95.35  | 95.59  | 95.32   | 0.17
Myocardium | AVD (mm³)  | 232.74 | 290.61 | 225.42 | 229.14 | 203.49 | 236.28  | 29.01
Myocardium | HD (mm)    | 4.02   | 4.78   | 3.87   | 3.61   | 3.46   | 3.95    | 0.44
MI         | DSC (%)    | 77.05  | 79.45  | 78.73  | 78.92  | 77.34  | 78.30   | 0.75
MI         | AVD (mm³)  | 283.31 | 267.26 | 190.34 | 156.53 | 271.25 | 233.74  | 50.65
MI         | AVDR (%)   | 4.01   | 4.20   | 3.18   | 2.03   | 4.53   | 3.39    | 0.77
MVO        | DSC (%)    | 76.54  | 79.15  | 79.92  | 75.51  | 78.03  | 77.83   | 1.62
MVO        | AVD (mm³)  | 34.18  | 26.80  | 45.10  | 49.19  | 46.68  | 40.39   | 8.51
MVO        | AVDR (%)   | 0.62   | 0.61   | 0.69   | 0.74   | 0.76   | 0.68    | 0.10
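The DSC, AVD, and HD values reported in Table 2 can be reproduced from binary 3D masks roughly as sketched below. This is our own minimal implementation, not the EMIDEC evaluation code; it assumes non-empty masks and a voxel spacing given in mm, and it uses SciPy's directed Hausdorff distance on scaled voxel coordinates.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(pred, gt):
    """Dice similarity coefficient (%) between two binary masks."""
    inter = np.logical_and(pred, gt).sum()
    return 100.0 * 2.0 * inter / (pred.sum() + gt.sum() + 1e-8)

def volume_difference(pred, gt, spacing):
    """Absolute volume difference in mm³, given voxel spacing (dz, dy, dx) in mm."""
    voxel_mm3 = float(np.prod(spacing))
    return abs(int(pred.sum()) - int(gt.sum())) * voxel_mm3

def hausdorff(pred, gt, spacing):
    """Symmetric Hausdorff distance in mm between the foreground voxels of two non-empty masks."""
    p = np.argwhere(pred) * np.asarray(spacing)   # voxel indices scaled to mm
    g = np.argwhere(gt) * np.asarray(spacing)
    return max(directed_hausdorff(p, g)[0], directed_hausdorff(g, p)[0])
```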
Table 3. A comparison of methods evaluated with 5-fold cross-validation on the EMIDEC dataset. Best values are marked in bold font.

Targets    | Metrics    | Huellebrand et al. [44] | Zhang [47] | Proposed (ICPIU-Net)
Myocardium | DSC (%)    | 81.00                   | 94.40      | 95.32
Myocardium | AVD (mm³)  | 13,655.55               | 6474.38    | 236.28
Myocardium | HD (mm)    | 16.72                   | 17.21      | 3.95
MI         | DSC (%)    | 36.08                   | 72.08      | 78.30
MI         | AVD (mm³)  | 8980.5                  | 4179.5     | 233.74
MI         | AVDR (%)   | 7.07                    | 3.41       | 3.39
MVO        | DSC (%)    | 54.15                   | 71.01      | 77.83
MVO        | AVD (mm³)  | 1501.73                 | 918.69     | 40.39
MVO        | AVDR (%)   | 1.08                    | 0.69       | 0.68
Table 4. Comparative study for EMIDEC myocardial segmentation in LGE-MRI (test leaderboard) [62]. Best values are marked in bold font.

Methods                 | Myocardium DSC (%) | Myocardium AVD (mm³) | Myocardium HD (mm) | MI DSC (%) | MI AVD (mm³) | MI AVDR (%) | MVO DSC (%) | MVO AVD (mm³) | MVO AVDR (%)
Feng et al. [42]        | 83.56              | 15,187.48            | 33.77              | 54.68      | 3970.73      | 2.89        | 72.22       | 883.42        | 0.53
Huellebrand et al. [44] | 84.08              | 10,874.47            | 18.3               | 37.87      | 6166.01      | 4.93        | 52.25       | 953.47        | 0.64
Yang et al. [46]        | 85.53              | 16,539.52            | 13.23              | 62.79      | 5343.69      | 4.37        | 60.99       | 1851.52       | 1.69
Zhang [47]              | 87.86              | 9258.24              | 13.01              | 71.24      | 3117.88      | 2.38        | 78.51       | 634.69        | 0.38
Camarasa et al. [41]    | 75.74              | 17,108.13            | 25.44              | 30.79      | 4868.56      | 3.64        | 60.52       | 867.86        | 0.52
Zhou et al. [48]        | 82.46              | 13,292.68            | 83.42              | 37.77      | 6104.99      | 4.71        | 51.98       | 879.99        | 0.54
Girum et al. [43]       | 80.26              | 11,807.68            | 51.48              | 34.00      | 11,521.71    | 8.58        | 78.00       | 891.13        | 0.51
Proposed (ICPIU-Net)    | 87.65              | 8863.41              | 13.10              | 73.36      | 2693.84      | 1.95        | 81.31       | 511.25        | 0.32
Table 5. Performance analysis and comparison of our proposed ICPIU-Net network without and with the IC and CC modules. Best values are marked in bold font.

Methods              | Myocardium DSC (%) | Myocardium AVD (mm³) | Myocardium HD (mm) | MI DSC (%) | MI AVD (mm³) | MI AVDR (%) | MVO DSC (%) | MVO AVD (mm³) | MVO AVDR (%)
Without IC and CC    | 87.77              | 9381.77              | 13.07              | 65.05      | 3096.54      | 2.39        | 78.82       | 553.56        | 0.34
Without CC           | 87.74              | 9201.04              | 13.09              | 71.71      | 2830.32      | 2.15        | 80.99       | 538.60        | 0.34
Proposed (ICPIU-Net) | 87.65              | 8863.41              | 13.10              | 73.36      | 2693.84      | 1.95        | 81.31       | 511.25        | 0.32
Table 6. Additional metrics for EMIDEC myocardial segmentation [62]. Best values are marked in bold font.

Methods                 | MVO Acc. (Case, %) | MVO Acc. (Slice, %)
Feng et al. [42]        | 80.00              | 90.78
Huellebrand et al. [44] | 70.00              | 85.75
Yang et al. [46]        | 76.00              | 81.56
Zhang [47]              | 84.00              | 94.97
Camarasa et al. [41]    | 74.00              | 84.36
Zhou et al. [48]        | 64.00              | 86.87
Girum et al. [43]       | 78.00              | 89.66
Proposed (ICPIU-Net)    | 84.00              | 94.97
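Table 6 reports detection accuracy of MVO presence at the case and slice levels. Under our reading of the metric, a case or slice counts as correct when the predicted and reference masks agree on whether any MVO is present; a minimal sketch under that assumption (the function and variable names are ours) is:

```python
import numpy as np

def mvo_detection_accuracy(pred_masks, gt_masks):
    """Case- and slice-level MVO detection accuracy (in %) from lists of binary 3D masks
    shaped (slices, H, W); a mask 'detects' MVO if it contains any MVO voxel."""
    case_ok, slice_ok, n_slices = 0, 0, 0
    for pred, gt in zip(pred_masks, gt_masks):
        case_ok += int(pred.any() == gt.any())
        per_slice = np.any(pred, axis=(1, 2)) == np.any(gt, axis=(1, 2))
        slice_ok += int(per_slice.sum())
        n_slices += per_slice.size
    return 100.0 * case_ok / len(pred_masks), 100.0 * slice_ok / n_slices
```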
Table 7. Results of the classification contest. Best results in bold.

Methods               | Sensitivity (%) | Specificity (%) | Precision (%) | Accuracy (%)
Lourenço et al. [63]  | 87.88           | 70.59           | 85.29         | 82
Ivantsits et al. [64] | 72.73           | 82.35           | 88.89         | 76
Sharma et al. [65]    | 72.73           | 41.18           | 70.59         | 62
Girum et al. [43]     | 78.79           | 88.24           | 92.86         | 82
Shi et al. [66]       | 90.91           | 94.12           | 96.77         | 92
Proposed (ICPIU-Net)  | 100             | 94.44           | 96.96         | 98
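The classification metrics in Table 7 follow the standard confusion-matrix definitions, with the pathological class taken as positive. The short sketch below computes them from hypothetical counts that are not taken from the paper.

```python
def classification_metrics(tp, fp, tn, fn):
    """Standard binary classification metrics (in %), pathological = positive class."""
    sensitivity = 100.0 * tp / (tp + fn)
    specificity = 100.0 * tn / (tn + fp)
    precision = 100.0 * tp / (tp + fp)
    accuracy = 100.0 * (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, precision, accuracy

# Hypothetical confusion-matrix counts (for illustration only):
print(classification_metrics(tp=30, fp=2, tn=15, fn=3))
```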
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
