Article

Ultrasound Intima-Media Complex (IMC) Segmentation Using Deep Learning Models

by Hanadi Hassen Mohammed 1,*, Omar Elharrouss 1, Najmath Ottakath 1, Somaya Al-Maadeed 1, Muhammad E. H. Chowdhury 2, Ahmed Bouridane 3 and Susu M. Zughaier 4

1 Department of Computer Science and Engineering, Qatar University, Doha P.O. Box 2713, Qatar
2 Department of Electrical Engineering, Qatar University, Doha P.O. Box 2713, Qatar
3 Cybersecurity and Data Analytics Research Center, University of Sharjah, Sharjah 27272, United Arab Emirates
4 Department of Basic Medical Sciences, College of Medicine, Qatar University, Doha P.O. Box 2713, Qatar
* Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(8), 4821; https://doi.org/10.3390/app13084821
Submission received: 6 March 2023 / Revised: 6 April 2023 / Accepted: 8 April 2023 / Published: 12 April 2023

Abstract

Common carotid intima-media thickness (CIMT) is a common measure of atherosclerosis, often assessed through carotid ultrasound images. However, the use of deep learning methods for medical image analysis, segmentation and CIMT measurement in these images has not been extensively explored. This study aims to evaluate the performance of four recent deep learning models, including a convolutional neural network (CNN), a self-organizing operational neural network (self-ONN), a transformer-based network and a pixel difference convolution-based network, in segmenting the intima-media complex (IMC) using the CUBS dataset, which includes ultrasound images acquired from both sides of the neck of 1088 participants. The results show that the self-ONN model outperforms the conventional CNN-based model, while the pixel difference- and transformer-based models achieve the best segmentation performance.

1. Introduction

The cardiovascular system is the primary mechanism that sustains life in the human body. Cardiovascular diseases (CVDs) are regarded as a major cause of death worldwide. Early diagnosis and treatment of these diseases can increase lifespan and decrease the death rate from CVDs. The cardiovascular system is made up of blood vessels that carry the blood necessary for all of the body’s organs to operate. The primary components of this system, which transport blood to and from the heart and to all organs, are the arteries and veins. Any obstruction of blood flow or disease in the arteries or veins seriously affects how well the organs operate. The most common types of cardiovascular disease include peripheral vascular disease, coronary artery disease and carotid artery disease. These disorders manifest as a result of the development of atherosclerotic plaques in the arteries, as illustrated in Figure 1. One consequence of carotid artery stenosis is ischemic stroke, caused by the accumulation of plaque on the carotid arterial walls. If the stenosis is detected early and the amount of plaque can be determined, the problem can be addressed immediately. For this, a variety of imaging modalities are used. Computed tomography (CT), EEG, ECG, ultrasound imaging, laboratory tests for coagulation status and cardiac monitoring are among the diagnostic techniques used in the assessment of carotid artery stenosis or stroke. The common carotid artery runs along both sides of the neck. The soft tissue characteristics of the arteries allow imaging with a variety of modalities, such as CT, ultrasound imaging and magnetic resonance imaging (MRI). The analysis of the resulting images can enhance diagnosis and support clinical judgment. Medical image analysis algorithms have advanced significantly, from image processing and pattern recognition methods to machine learning and deep learning algorithms that treat the analysis as a computer vision problem. A notable development in the automatic segmentation, analysis and grading of stenosis is the use of carotid artery imaging generated by CT scans, MRIs and ultrasound [1,2]. Due to the complexity of scanning the carotid artery, ultrasound is the preferred method to capture images with acceptable resolution. Ultrasound images have been used in many studies employing medical image analysis algorithms [3].
In order to segment the plaques on the carotid artery, many methods have been proposed, even in the absence of large datasets. Earlier methods used CIMT measurement to detect and localize the carotid artery walls and then the plaques [4,5]. The ground truth was provided as a set of points, marked by specialists, representing the plaques [6]. The analysis of these types of data used different statistical and machine learning algorithms, including snake-based segmentation and contours [4,5], bulb edge detection [6], wind-driven optimization techniques [7] and SVMs [8].
With convolutional neural networks, the proposed methods perform binary segmentation instead of direct CIMT measurement. By generating binary images containing the labeled regions instead of sets of points, deep learning methods can segment these regions with better precision [8]. Furthermore, the segmented regions can be used to compute CIMT [9], whose accuracy depends on the segmentation performance. This makes segmentation a crucial task.
Although CNNs have succeeded in solving many computer vision problems, recent studies have shown several drawbacks of CNNs, such as the need for large datasets [10] and the reliance on linear neuron models [11,12,13,14]. Operational neural networks (ONNs) [14,15,16,17] are heterogeneous networks with a non-linear neuron model that have recently been proposed as a solution for highly non-linearly separable problems. With the help of predefined nodal, pool and activation operators, ONNs are able to learn highly complex and multi-modal functions. The transformer has recently become a successful non-CNN alternative for computer vision problems. Instead of convolution, vision transformers utilize self-attention to combine information from several locations [18]. In this paper, we perform segmentation of the common carotid intima-media complex using deep learning models. For this, we adapted existing deep learning models, namely DeepCrack [19] and a transformer-based model [20], and used self-ONN layers instead of the standard convolutional layers of DeepCrack. In order to improve the segmentation quality, we applied morphological operations, such as erosion, to enhance the output results. The main contributions of the research are summarized as follows:
  • We develop and investigate various recent deep learning models for the segmentation of IMC in B-mode ultrasound images of the carotid artery.
  • We propose a pioneer application for self-organized operational neural networks (self-ONNs) for IMC segmentation.
  • We investigate the level of non-linearity for operational layers required to achieve a better segmentation performance.
The rest of the paper is organized as follows: in Section 2, we review recent work on carotid intima-media segmentation. In Section 3, we present the architectures of the deep learning models, and in Section 4, we present the experimental setup along with the evaluation metrics and the results. Finally, we conclude and outline future work in Section 5.

2. Related Works

Carotid artery segmentation, including the walls and plaques of the intima-media complex (IMC), can be used for the estimation of intima-media thickness (IMT), which makes it an important operation for atherosclerotic risk evaluation.
There are numerous methods for segmenting the intima-media complex. However, the majority of them are semi-automatic and require manual intervention, with medical experts having to define the boundary between the media–adventitia and the lumen. The subjectivity and variability of manual segmentation can be reduced using image segmentation algorithms. IMT has been assessed using active contours [21,22,23,24,25,26,27,28], dynamic programming [29,30,31,32,33,34], and edge detection and gradient-based approaches [35,36]. Among active contour-based approaches, the authors in [21] began with a simple segmentation of B-mode ultrasound images, followed by segmentation of the far-wall intima-media–adventitia, and then applied an active contour to obtain the desired region in the images. The same process was used in [22], but with some morphological operations, such as opening, after which an LI contour function was applied to detect the final common carotid artery result. In [23], the authors started with non-linear filtering, followed by detection of the intima layer using an iterative relaxation procedure, and detected the wall using a modified energy function and an optimal initial contour.
For dynamic programming-based approaches, the researchers in [29] used a multi-scale dynamic programming (DP) algorithm to estimate the vessel wall positions, leading to boundary detection, and combined the obtained results with geometrical characteristics to produce the final boundaries. In the same context, to detect the arterial wall, the authors in [31] proposed a dual dynamic programming (DDP) technique to detect the intima and adventitial layers of the common carotid artery. Furthermore, in [33] an improved dynamic programming method was proposed for carotid artery wall thickness evaluation.
Machine and deep learning techniques have become promising methods for medical image analysis tasks, such as image de-noising, segmentation and classification. Before the development of deep learning models, machine learning was the most commonly utilized technology, where comprehensive feature extraction techniques were applied to several aspects of carotid artery risk estimation. The deep learning strategy takes advantage of a neural network architecture that mimics the human brain by having more hidden layers. The neuron is the fundamental building block of a deep neural network (DNN); it accepts several inputs, combines them linearly and then passes them through a non-linearity to produce the desired output. Multiple processing layers make up a deep learning network, which uses deep graphs to extract high-level representations of meaningful information from low-level inputs. CNNs are among the most widely used networks in the medical image analysis domain [37]. U-Net is a CNN-based architecture used to solve the automatic image segmentation problem, and it has been adopted in many IMC segmentation works [38,39,40]. For example, in [38,41] the authors used the U-Net architecture for plaque segmentation in carotid ultrasound images. Furthermore, in [42] the authors used U-Net, U-Net+, U-Net++, U-Net+++ and three hybrids, namely Inception-U-Net, Fractal-U-Net and Squeeze-U-Net, to segment and measure the far-wall plaque area of the common carotid (CCA) and internal carotid arteries (ICA) in B-mode ultrasound images. Using M-Net [43] as the backbone, the authors in [44] proposed an automatic joint segmentation method named CSM-Net with triple spatial attention and cascaded dilated convolution modules.

3. Methods

Medical image segmentation is a challenging task. As our ultimate goal is to find the most accurate deep learning model for ultrasound IMC segmentation, we tested several deep learning methods. Three recent deep learning networks were used in this study: DeepCrack [19], PiDiNet [10] and CCTrans [45]. These networks have previously been used in different tasks, such as edge detection, crack segmentation and crowd counting. The DeepCrack network is a CNN-based architecture, which we modified with the recently proposed self-operational neural network (self-ONN) to see whether the CNN- or self-ONN-based architecture works better on our dataset. CCTrans is a transformer-based model used for crowd counting; we adapted it to ultrasound IMC segmentation while keeping the same first layers of the model. The following sections provide a detailed description of how these methods have been adapted to our problem.

3.1. Self-Operational Neural Network-Based Model

Self-organized operational neural networks with generative neurons, proposed by [46], are a type of artificial neural network designed to operate in a self-organizing manner. Instead of using a predefined set of operators, as in an ONN, self-ONNs with generative neurons generate the nodal operators during backpropagation training. This property of self-ONNs allows for maximum learning performance, diversity and flexibility. The use of generative neurons can improve the network’s robustness to unseen data and reduce the risk of overfitting. A generative neuron uses a Taylor series expansion around the point a to approximate the non-linear function f(x):
f(x) = \sum_{n=0}^{\infty} \frac{f^{(n)}(a)}{n!} (x - a)^n    (1)
If we truncate the Taylor series to q terms, then the approximation g(w, x, a) is given by:
g(w, x, a) = w_0 + w_1 (x - a) + \cdots + w_q (x - a)^q    (2)
where w_n = f^{(n)}(a) / n!, w_0 is the bias and, for a c-channel input tensor, w_n with n = 1, ..., q are the q banks of c-channel convolution kernels that are learned during backpropagation.
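As an illustration of how such a generative neuron can be realized in practice, the following PyTorch sketch implements an operational convolution layer whose output is a sum of convolutions applied to element-wise powers of the input, i.e., the truncated expansion above with the expansion point a taken as zero (a Maclaurin series), which is reasonable when a bounded activation precedes the layer. The class name SelfONNConv2d and the default q are our own illustrative choices and not the authors' released code.

```python
import torch
import torch.nn as nn

class SelfONNConv2d(nn.Module):
    """Illustrative generative-neuron (self-ONN) layer: the nodal operator is a
    truncated Taylor/Maclaurin expansion, so the output is a sum of convolutions
    applied to element-wise powers x, x^2, ..., x^q of the input."""

    def __init__(self, in_channels, out_channels, kernel_size, q=3, padding=1):
        super().__init__()
        self.q = q
        # One c-channel kernel bank per expansion term (w_1 ... w_q); w_0 is the bias.
        self.convs = nn.ModuleList(
            nn.Conv2d(in_channels, out_channels, kernel_size,
                      padding=padding, bias=False)
            for _ in range(q)
        )
        self.bias = nn.Parameter(torch.zeros(out_channels))

    def forward(self, x):
        # y = w_0 + sum_{n=1}^{q} conv(x^n, w_n)
        out = sum(conv(x.pow(n + 1)) for n, conv in enumerate(self.convs))
        return out + self.bias.view(1, -1, 1, 1)
```

One reason Tanh is commonly used before and after such layers, rather than ReLU, is that it keeps activations within [-1, 1], so the higher powers x^2, ..., x^q remain bounded during training.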
To investigate the performance of self-ONNs, we chose the DeepCrack [19] model as a baseline. The DeepCrack network, proposed by [19], is a CNN-based model built for crack segmentation. The architecture of the DeepCrack network is shown in Figure 2a. It has thirteen convolutional layers, each consisting of convolution, batch normalization and ReLU layers. The convolution produces a set of feature maps, batch normalization is used to reduce the covariate shift and the ReLU activation function learns the non-linearity in the data. A max-pooling layer with a 2 × 2 pixel filter is added between the convolutional layers. A convolutional layer with kernel size 1 is used to obtain side-output features. Deconvolutional layers are then used (except for the first side-output layer) to upsample the feature maps to match the input image size. The upsampled feature maps are concatenated to obtain the final features, after which a convolutional layer followed by a Softmax layer predicts the two classes; from this prediction, the label of each pixel can be obtained. We modified the network so that self-ONN layers can be used in place of the CNN layers, as shown in Figure 2a, and used Tanh activation layers instead of ReLU. The level of non-linearity of the network can be adjusted by modifying the parameter q.
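To make the swap concrete, one stage of the modified network could look like the following sketch, which reuses the SelfONNConv2d layer from the previous snippet and replaces the Conv+BN+ReLU unit with a self-ONN layer followed by batch normalization and Tanh. This is a hypothetical building block for illustration, not the exact DeepCrack implementation.

```python
import torch.nn as nn

def conv_block(cin, cout, use_self_onn=False, q=3):
    """Hypothetical building block for a DeepCrack-style encoder stage:
    either a standard Conv+BN+ReLU unit or its self-ONN counterpart with Tanh."""
    if use_self_onn:
        return nn.Sequential(
            SelfONNConv2d(cin, cout, kernel_size=3, q=q, padding=1),
            nn.BatchNorm2d(cout),
            nn.Tanh(),
        )
    return nn.Sequential(
        nn.Conv2d(cin, cout, kernel_size=3, padding=1),
        nn.BatchNorm2d(cout),
        nn.ReLU(inplace=True),
    )
```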

3.2. Pixel Difference-Based Model

Although CNNs can achieve human-level performance in many computer vision applications, the high performance of CNN-based models is usually achieved with a large pre-trained backbone [47], such as VGG, ResNet or DenseNet, which is memory- and energy-consuming. In contrast, simple and light-weight architectures have been proposed for edge detection, such as the pixel difference network (PiDiNet) [10]. PiDiNet adopts novel pixel difference convolutions that integrate traditional edge detection operators into the convolutional operations of modern convolutional neural networks, enjoying the best of both worlds. We used a PiDiNet model for IMC segmentation.
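The central pixel-difference idea can be sketched as follows: each kernel weight multiplies the difference between a neighbouring pixel and the centre pixel, which is equivalent to a vanilla convolution minus a 1 × 1 convolution with the spatially summed kernel. PiDiNet also defines angular and radial difference variants and re-parameterization tricks, so this snippet is only a minimal illustration of one variant, not the library's implementation.

```python
import torch.nn as nn
import torch.nn.functional as F

class CentralPixelDiffConv2d(nn.Module):
    """Sketch of a central pixel-difference convolution: each kernel weight
    multiplies (neighbour - centre), computed as a vanilla convolution minus
    the kernel-sum applied to the centre pixel as a 1x1 convolution."""

    def __init__(self, in_channels, out_channels, kernel_size=3, padding=1):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size,
                              padding=padding, bias=False)

    def forward(self, x):
        vanilla = self.conv(x)
        # Sum of each kernel over its spatial window, one value per (out, in) pair.
        kernel_sum = self.conv.weight.sum(dim=(2, 3), keepdim=True)
        centre = F.conv2d(x, kernel_sum)  # 1x1 convolution with the summed weights
        return vanilla - centre
```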

3.3. Transformer-Based Model

Traditionally, convolutional neural networks (CNNs) have been the preferred architecture for image segmentation tasks due to their ability to extract features from the input image. However, in recent years, transformer-based models have shown remarkable performance in a variety of natural language processing (NLP) tasks and have been extended to computer vision tasks, such as image segmentation.
CNNs have a strong ability to extract local features, but they inherently struggle to model the global context due to their limited receptive fields. The transformer can model the global context easily and has become one of the most widely used techniques in computer vision. For this reason, we used a transformer model for IMC segmentation. The proposed method uses a pyramid vision transformer backbone to capture the global information, a pyramid feature aggregation (PFA) module to combine low- and high-level features and an efficient regression head with multi-scale dilated convolution (MDC) to predict the final results [20]. The input image is first transformed into a 1D sequence, which is then fed into the transformer-based backbone. The pyramid transformer in [45] is adopted to capture the global context through several downsampling stages. The outputs of each stage are reshaped into 2D feature maps for pyramid feature aggregation. Finally, a simple regression head with multi-scale receptive fields regresses the final results. The adopted architecture is illustrated in Figure 2b.
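A minimal sketch of the aggregation-and-prediction part is given below: multi-scale feature maps from a pyramid backbone are projected to a common width, upsampled to the finest resolution, concatenated and passed through dilated convolutions. The channel widths, number of stages and dilation rate are assumptions made for the sketch and do not reproduce the exact published head.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PyramidFeatureAggregation(nn.Module):
    """Illustrative PFA-style head: project multi-scale 2D feature maps to a
    common channel width, upsample to the finest resolution, concatenate and
    predict the segmentation map with dilated convolutions."""

    def __init__(self, in_channels=(64, 128, 320, 512), mid=128, num_classes=2):
        super().__init__()
        self.proj = nn.ModuleList(nn.Conv2d(c, mid, 1) for c in in_channels)
        self.head = nn.Sequential(
            nn.Conv2d(mid * len(in_channels), mid, 3, padding=2, dilation=2),
            nn.ReLU(inplace=True),
            nn.Conv2d(mid, num_classes, 1),
        )

    def forward(self, feats):
        # feats: list of (B, C_i, H_i, W_i) maps, finest first, coarsest last.
        target = feats[0].shape[-2:]
        fused = torch.cat(
            [F.interpolate(p(f), size=target, mode='bilinear', align_corners=False)
             for p, f in zip(self.proj, feats)], dim=1)
        return self.head(fused)
```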

3.4. Post-Processing

IMC segmentation is a difficult task, due to the difficulty of capturing the precise thickness from an image, even when using deep learning methods. While the carotid intima-media region can be segmented, for some images this region can be very thin, which affects the performance of the segmentation method. We noticed that when using deep learning methods the segmented region is generally thicker than the ground truth, as presented in Figure 3b. For this reason, and in order to thin the segmented region so that it matches the ground truth, we applied morphological erosion. Morphological erosion is a post-processing step commonly used in medical image segmentation. In the context of IMC segmentation, morphological erosion is used to refine the initial segmentation results by removing small regions of noise or non-IMC tissue that may have been included. This helps to improve the accuracy and reliability of the segmentation by ensuring that only the true IMC structure is retained. The erosion operation is typically performed using a structuring element, which determines the size and shape of the erosion. The choice of structuring element depends on the characteristics of the image and the desired level of erosion. For example, a small circular element may be used to remove small regions of noise, while a larger rectangular element may be used to remove larger areas of non-IMC tissue. Figure 3c presents an example of the erosion result.
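A minimal sketch of this post-processing step, assuming an OpenCV-based pipeline with a hypothetical threshold and elliptical structuring element, is shown below; in practice the kernel shape, size and number of iterations are tuned on the dataset (the paper's post-processing was implemented in MATLAB).

```python
import cv2
import numpy as np

def postprocess_mask(prob_map, threshold=0.5, kernel_size=3, iterations=1):
    """Erode the predicted binary IMC mask to thin the over-segmented region."""
    mask = (prob_map >= threshold).astype(np.uint8)            # binarize the prediction
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE,
                                       (kernel_size, kernel_size))
    return cv2.erode(mask, kernel, iterations=iterations)      # morphological erosion
```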

4. Experimental Results

In this section, we present the experimental results of the proposed self-ONN–DeepCrack approach on the CUBS dataset and compare the obtained results with other published image segmentation methods, including DeepCrack [19], PiDiNet [10] and the adapted CCTrans [45]. The comparison was performed using image segmentation metrics as well as visual illustrations.

4.1. Implementation Details

The implementation details for training the proposed and implemented models are presented in Table 1. The implementation was carried out using the PyTorch library, while the post-processing and evaluation metrics were computed using MATLAB.
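For illustration, a training loop matching the Table 1 settings for the DeepCrack variants (Adam, learning rate 0.0001, 100 epochs) might look like the following sketch; the model, data loader and loss choice are placeholders rather than the authors' released code.

```python
import torch
import torch.nn as nn

def train(model, loader, epochs=100, lr=1e-4, device='cuda'):
    """Illustrative training loop following the Table 1 hyperparameters."""
    model = model.to(device)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()  # two-class (background / IMC) prediction
    for epoch in range(epochs):
        for images, masks in loader:
            images, masks = images.to(device), masks.to(device)
            optimizer.zero_grad()
            logits = model(images)           # (B, 2, H, W)
            loss = criterion(logits, masks)  # masks: (B, H, W) with labels {0, 1}
            loss.backward()
            optimizer.step()
```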

4.2. Dataset and Evaluation Metrics

The dataset used in this study is the CUBS dataset, acquired from both sides of the neck of 1088 participants, totalling 2176 images. All images are annotated by a skilled analyst. Figure 5 shows sample images and ground truths taken from the dataset. A total of 80% of the data are used for training and 20% for testing. The segmentation metrics used to evaluate the performance of the proposed models are precision, recall, F1 measure (Equation (3)), Jaccard index (Equation (4)) and Dice coefficient (Equation (5)). Precision measures the proportion of true positive (TP) predictions among all positive predictions, while recall measures the true positive rate (TPR), i.e., the proportion of actual positives that are correctly predicted. Both precision and recall are used to handle the class imbalance problem and to compute the F1 measure.
F_1 = \frac{2 \times \text{Precision} \times \text{Recall}}{\text{Precision} + \text{Recall}}    (3)

\text{Jaccard Index} = \frac{\text{True Positive}}{\text{True Positive} + \text{False Negative} + \text{False Positive}}    (4)

\text{Dice} = \frac{2 \times \text{True Positive}}{2 \times \text{True Positive} + \text{False Negative} + \text{False Positive}}    (5)
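Although the metrics in this work were computed in MATLAB, the following Python sketch shows how Equations (3)–(5), together with precision and recall, can be computed from binary prediction and ground-truth masks.

```python
import numpy as np

def segmentation_metrics(pred, gt):
    """Compute precision, recall, F1, Jaccard and Dice from boolean masks."""
    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    eps = 1e-8                                   # avoid division by zero
    precision = tp / (tp + fp + eps)
    recall = tp / (tp + fn + eps)
    f1 = 2 * precision * recall / (precision + recall + eps)
    jaccard = tp / (tp + fn + fp + eps)
    dice = 2 * tp / (2 * tp + fn + fp + eps)
    return dict(precision=precision, recall=recall, f1=f1,
                jaccard=jaccard, dice=dice)
```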

4.3. Evaluation

To evaluate the ultrasound IMC segmentation produced by the deep learning methods on the CUBS dataset, the set of metrics mentioned above is used. These metrics are predominantly used for image segmentation in computer vision tasks. Moreover, we compare the frames per second (FPS) of each model on the same dataset. In this section, we present the results obtained on the dataset using the proposed methods for ultrasound IMC segmentation. The results are reported in tables and figures to illustrate the performance of the different architectures.
We first investigated the effect of replacing the CNN layers with self-ONN layers in the DeepCrack model. The level of non-linearity was controlled using the parameter q = 3, 5, 7, 9 or 11. Figure 4a shows that the best performing model uses q = 3, after which the accuracy of the model starts to drop as the level of non-linearity increases. Figure 4b compares the CNN version of the model with the self-ONN versions at q = 3 and q = 5, which achieve the best precision and recall. The performance of all the deep learning models on the CUBS dataset is shown in Table 2. From the table, we can observe that the transformer- and pixel difference-based models perform similarly in all the performance measures, with a slight increase for PiDiNet in the F-measure, Dice and Jaccard index. Both models achieved considerably better performances than the CNN- and self-ONN-based models. From Table 2, we can also see that the post-processing operations improved the performance metrics of all the methods, including the DeepCrack, DeepCrack_Self_ONN, PiDiNet and transformer-based models. The post-processing improved the precision metric by about 20%, 14%, 19% and 20% for the DeepCrack, DeepCrack_Self_ONN, PiDiNet and transformer-based models, respectively, while the transformer-based + post-processing model demonstrated the best metrics, followed by PiDiNet + post-processing with an average difference of 1%, 10% and less than 1% for Dice, recall and precision, respectively. In addition to the quantitative results, we present the qualitative results in Figure 5, which shows the visual outputs of the segmentation. From Figure 5, we can see that all the proposed methods produced good segmentations, with differences in terms of thickness.
It is worth mentioning that image segmentation algorithms typically rely on edge detection and thresholding techniques to separate regions of interest from the background. However, these techniques can be affected by image noise, leading to the detection of false edges and the inclusion of noise as part of the segmented object. Additionally, image segmentation algorithms may also introduce a level of smoothing or blurring to the image, which can further contribute to the thickening effect described above. This smoothing can cause the boundaries of the segmented object to become slightly blurred and more diffuse, resulting in a larger area being assigned to the object than is actually present in the ground truth.

5. Conclusions

We developed and investigated several recent deep learning models for the segmentation of the IMC in B-mode ultrasound images of the carotid artery. Compared to the conventional CNN-based model, the self-ONN-based model performs better in all evaluation metrics; however, the pixel difference- and transformer-based models perform better still in all metrics, potentially due to the limited amount of data, as the pixel difference model performs better when data are scarce. A further investigation into suitable data augmentation techniques is needed to increase the accuracy.

Author Contributions

Conceptualization, H.H.M. and O.E.; data curation, H.H.M., O.E. and N.O.; formal analysis, H.H.M. and O.E.; methodology, H.H.M. and O.E.; project administration; supervision, S.A.-M., M.E.H.C., A.B. and S.M.Z.; validation, H.H.M. and O.E.; visualization, H.H.M. and O.E.; writing—original draft, H.H.M. and O.E.; writing—review and editing, H.H.M., O.E. and S.A.-M. All authors have read and agreed to the published version of the manuscript.

Funding

This publication was supported by the Qatar University Internal Grant #QUHI-CENG-22/23-548. The findings achieved herein are solely the responsibility of the authors.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
DR	Diabetic retinopathy
DL	Deep learning
AI	Artificial intelligence
CNN	Convolutional neural network

References

  1. Latha, S.; Samiappan, D.; Kumar, R. Carotid artery ultrasound image analysis: A review of the literature. Proc. Inst. Mech. Eng. Part H J. Eng. Med. 2020, 234, 417–443. [Google Scholar] [CrossRef]
  2. Vila, M.D.M.; Remeseiro, B.; Grau, M.; Elosua, R.; Igual, L. Last Advances on Automatic Carotid Artery Analysis in Ultrasound Images: Towards Deep Learning. In Handbook of Artificial Intelligence in Healthcare; Springer: Berlin/Heidelberg, Germany, 2022; pp. 215–247. [Google Scholar]
  3. Riahi, A.; Elharrouss, O.; Al-Maadeed, S. BEMD-3DCNN-based method for COVID-19 detection. Comput. Biol. Med. 2022, 142, 105188. [Google Scholar] [CrossRef]
  4. Loizou, C.P.; Kasparis, T.; Spyrou, C.; Pantziaris, M. Integrated system for the complete segmentation of the common carotid artery bifurcation in ultrasound images. In IFIP International Conference on Artificial Intelligence Applications and Innovations; Springer: Berlin/Heidelberg, Germany, 2013; Volume 9, pp. 292–301. [Google Scholar]
  5. Christodoulou, L.; Loizou, C.P.; Spyrou, C.; Kasparis, T.; Pantziaris, M. Full-automated system for the segmentation of the common carotid artery in ultrasound images. In Proceedings of the 2012 IEEE 5th International Symposium on Communications, Control and Signal Processing, Rome, Italy, 2–4 May 2012; pp. 1–6. [Google Scholar]
  6. Ikeda, N.; Dey, N.; Sharma, A.; Gupta, A.; Bose, S.; Acharjee, S.; Shafique, S.; Cuadrado-Godia, E.; Araki, T.; Saba, L.; et al. Automated segmental-IMT measurement in thin/thick plaque with bulb presence in carotid ultrasound from multiple scanners: Stroke risk assessment. Comput. Methods Programs Biomed. 2017, 141, 73–81. [Google Scholar] [CrossRef]
  7. Madipalli, P.; Kotta, S.; Dadi, H.; Nagaraj, Y.; Asha, C.S.; Narasimhadhan, A.V. Automatic Segmentation of Intima Media Complex in Common Carotid Artery using Adaptive Wind Driven Optimization. In Proceedings of the 2018 Twenty Fourth National Conference on Communications (NCC), Hyderbad, India, 25–28 February 2018; pp. 1–6. [Google Scholar]
  8. Nagaraj, Y.; Teja, A.; Narasimha, D. Automatic Segmentation of Intima Media Complex in Carotid Ultrasound Images Using Support Vector Machine. Arab. J. Sci. Eng. 2019, 44, 3489–3496. [Google Scholar] [CrossRef]
  9. Biswas, M.; Saba, L.; Chakrabartty, S.; Khanna, N.N.; Song, H.; Suri, H.S.; Sfikakis, P.P.; Mavrogeni, S.; Viskovic, K.; Laird, J.R.; et al. Two-stage artificial intelligence model for jointly measurement of atherosclerotic wall thickness and plaque burden in carotid ultrasound: A screening tool for cardiovascular/stroke risk assessment. Comput. Biol. Med. 2020, 123, 103847. [Google Scholar] [CrossRef]
  10. Su, Z.; Liu, W.; Yu, Z.; Hu, D.; Liao, Q.; Tian, Q.; Pietikainen, M.; Liu, L. Pixel difference networks for efficient edge detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, BC, Canada, 11–17 October 2021; pp. 5117–5127. [Google Scholar]
  11. Kiranyaz, S.; Malik, J.; Abdallah, H.B.; Ince, T.; Iosifidis, A.; Gabbouj, M. Self-organized operational neural networks with generative neurons. Neural Netw. 2021, 140, 294–308. [Google Scholar] [CrossRef]
  12. Gabbouj, M.; Kiranyaz, S.; Malik, J.; Zahid, M.U.; Ince, T.; Chowdhury, M.E.; Khandakar, A.; Tahir, A. Robust peak detection for holter ECGs by self-organized operational neural networks. IEEE Trans. Neural Netw. Learn. Syst. 2022. Early Access. [Google Scholar] [CrossRef]
  13. Malik, J.; Kiranyaz, S.; Gabbouj, M. Operational vs. convolutional neural networks for image denoising. arXiv 2020, arXiv:2009.00612. [Google Scholar]
  14. Malik, J.; Devecioglu, O.C.; Kiranyaz, S.; Ince, T.; Gabbouj, M. Real-time patient-specific ECG classification by 1D self-operational neural networks. IEEE Trans. Biomed. Eng. 2021, 69, 1788–1801. [Google Scholar] [CrossRef]
  15. Rahman, A.; Chowdhury, M.E.; Khandakar, A.; Tahir, A.M.; Ibtehaz, N.; Hossain, M.S.; Kiranyaz, S.; Malik, J.; Monawwar, H.; Kadir, M.A. Robust biometric system using session invariant multimodal EEG and keystroke dynamics by the ensemble of self-ONNs. Comput. Biol. Med. 2022, 142, 105238. [Google Scholar] [CrossRef]
  16. Soltanian, M.; Malik, J.; Raitoharju, J.; Iosifidis, A.; Kiranyaz, S.; Gabbouj, M. Speech command recognition in computationally constrained environments with a quadratic self-organized operational layer. In Proceedings of the 2021 IEEE International Joint Conference on Neural Networks (IJCNN), Shenzhen, China, 18–22 July 2021; pp. 1–6. [Google Scholar]
  17. Kiranyaz, S.; Ince, T.; Iosifidis, A.; Gabbouj, M. Operational neural networks. Neural Comput. Appl. 2020, 32, 6645–6668. [Google Scholar] [CrossRef] [Green Version]
  18. Khan, S.; Naseer, M.; Hayat, M.; Zamir, S.W.; Khan, F.S.; Shah, M. Transformers in vision: A survey. ACM Comput. Surv. (CSUR) 2022, 54, 1–41. [Google Scholar] [CrossRef]
  19. Liu, Y.; Yao, J.; Lu, X.; Xie, R.; Li, L. DeepCrack: A deep hierarchical feature learning architecture for crack segmentation. Neurocomputing 2019, 338, 139–153. [Google Scholar] [CrossRef]
  20. Elharrouss, O.; Hmamouche, Y.; Idrissi, A.K.; El Khamlichi, B.; El Fallah-Seghrouchni, A. Refined edge detection with cascaded and high-resolution convolutional network. Pattern Recognit. 2023, 138, 109361. [Google Scholar] [CrossRef]
  21. Petroudi, S.; Loizou, C.; Pantziaris, M.; Pattichis, C. Segmentation of the common carotid intima-media complex in ultrasound images using active contours. IEEE Trans. Biomed. Eng. 2012, 59, 3060–3069. [Google Scholar] [CrossRef]
  22. Molinari, F.; Meiburger, K.M.; Saba, L.; Acharya, U.R.; Ledda, M.; Nicolaides, A.; Suri, J.S. Constrained snake vs. conventional snake for carotid ultrasound automated IMT measurements on multi-center data sets. Ultrasonics 2012, 52, 949–961. [Google Scholar] [CrossRef] [Green Version]
  23. Ceccarelli, M.; Luca, N.D.; Morganella, A. An active contour approach to automatic detection of the intima-media thickness. In Proceedings of the 2006 IEEE International Conference on Acoustics Speech and Signal Processing, Toulouse, France, 14–19 May 2006; Volume 2, p. II. [Google Scholar]
  24. Loizou, C.P.; Pattichis, C.S.; Pantziaris, M.; Tyllis, T.; Nicolaides, A. Snakes based segmentation of the common carotid artery intima media. Med. Biol. Eng. Comput. 2007, 45, 35–49. [Google Scholar] [CrossRef]
  25. Gutierrez, M.A.; Pilon, P.E.; Lage, S.G.; Kopel, L.; Carvalho, R.T.; Furuie, S.S. Automatic measurement of carotid diameter and wall thickness in ultrasound images. In Proceedings of the Computers in Cardiology, Memphis, TN, USA, 22–25 September 2002; pp. 359–362. [Google Scholar]
  26. Chan, R.C.; Kaufhold, J.; Hemphill, L.C.; Lees, R.S.; Karl, W.C. Anisotropic edge-preserving smoothing in carotid B-mode ultrasound for improved segmentation and intima-media thickness (IMT) measurement. In Proceedings of the Computers in Cardiology 2000. Vol.27 (Cat. 00CH37163), Cambridge, MA, USA, 24–27 September 2000; pp. 37–40. [Google Scholar]
  27. Delsanto, S.; Molinari, F.; Giustetto, P.; Liboni, W.; Badalamenti, S.; Suri, J.S. Characterization of a completely user-independent algorithm for carotid artery segmentation in 2-D ultrasound images. IEEE Trans. Instrum. Meas. 2007, 56, 1265–1274. [Google Scholar] [CrossRef]
  28. Gagan, J.H.; Shirsat, H.S.; Mathias, G.P.; Mallya, B.V.; Andrade, J.; Rajagopal, K.V.; Kumar, J.H. Automated Segmentation of Common Carotid Artery in Ultrasound Images. IEEE Access 2022, 10, 58419–58430. [Google Scholar] [CrossRef]
  29. Liang, Q.; Wendelhag, I.; Wikstrand, J.; Gustavsson, T. A multiscale dynamic programming procedure for boundary detection in ultrasonic artery images. IEEE Trans. Med. Imaging 2000, 19, 127–142. [Google Scholar] [CrossRef]
  30. Wendelhag, I.; Liang, Q.; Gustavsson, T.; Wikstrand, J. A new automated computerized analyzing system simplifies readings and reduces the variability in ultrasound measurement of intima-media thickness. Stroke 1997, 28, 2195–2200. [Google Scholar] [CrossRef]
  31. Cheng, D.-C.; Jiang, X. Detections of arterial wall in sonographic artery images using dual dynamic programming. IEEE Trans. Inf. Technol. Biomed. 2008, 12, 792–799. [Google Scholar] [CrossRef]
  32. Gustavsson, T.; Wendelhag, Q.L.I.; Wikstrand, J. A dynamic programming procedure for automated ultrasonic measurement of the carotid artery. In Proceedings of the Computers in Cardiology, Bethesda, MD, USA, 25–28 September 1994; pp. 297–300. [Google Scholar]
  33. Santhiyakumari, N.; Madheswaran, M. Non-invasive evaluation of carotid artery wall thickness using improved dynamic programming technique. Signal Image Video Process. 2008, 2, 183–193. [Google Scholar] [CrossRef]
  34. Lee, Y.-B.; Choi, Y.-J.; Kim, M.-H. Boundary detection in carotid ultrasound images using dynamic programming and a directional Haar-like filter. Comput. Biol. Med. 2010, 40, 687–697. [Google Scholar] [CrossRef]
  35. Liguori, C.; Paolillo, A.; Pietrosanto, A. An automatic measurement system for the evaluation of carotid intima-media thickness. IEEE Trans. Instrum. Meas. 2001, 50, 1684–1691. [Google Scholar] [CrossRef]
  36. Selzer, R.H.; Mack, W.J.; Lee, P.L.; Kwong-Fu, H.; Hodis, H.N. Improved common carotid elasticity and intima-media thickness measurements from computer analysis of sequential ultrasound frames. Atherosclerosis 2002, 154, 185–193. [Google Scholar] [CrossRef]
  37. Pramulen, A.S.; Yuniarno, E.M.; Nugroho, J.; Sunarya, I.M.G.; Purnama, I.K.E. Carotid Artery Segmentation on Ultrasound Image using Deep Learning based on Non-Local Means-based Speckle Filtering. In Proceedings of the 2020 International Conference on Computer Engineering, Network and Intelligent Multimedia (CENIM), Surabaya, Indonesia, 17–18 November 2020; pp. 360–365. [Google Scholar]
  38. Jain, P.K.; Sharma, N.; Saba, L.; Paraskevas, K.I.; Kalra, M.K.; Johri, A.; Nicolaides, A.N.; Suri, J.S. Automated deep learning-based paradigm for high-risk plaque detection in B-mode common carotid ultrasound scans: An asymptomatic Japanese cohort study. Int. Angiol. 2021, 41, 9–23. [Google Scholar] [CrossRef]
  39. Lainé, N.; Liebgott, H.; Zahnd, G.; Orkisz, M. Carotid artery wall segmentation in ultrasound image sequences using a deep convolutional neural network. arXiv 2022, arXiv:2201.12152. [Google Scholar]
  40. Radovanovic, N.; Dašić, L.; Blagojevic, A.; Sustersic, T.; Filipovic, N. Carotid Artery Segmentation Using Convolutional Neural Network in Ultrasound Images. 2022. Available online: https://scidar.kg.ac.rs/bitstream/123456789/16643/4/p8.pdf (accessed on 1 January 2023).
  41. Park, J.H.; Seo, E.; Choi, W.; Lee, S.J. Ultrasound deep learning for monitoring of flow–vessel dynamics in murine carotid artery. Ultrasonics 2022, 120, 106636. [Google Scholar] [CrossRef]
  42. Jain, P.K.; Sharma, N.; Kalra, M.K.; Johri, A.; Saba, L.; Suri, J.S. Far wall plaque segmentation and area measurement in common and internal carotid artery ultrasound using U-series architectures: An unseen Artificial Intelligence paradigm for stroke risk assessment. Comput. Biol. Med. 2022, 149, 106017. [Google Scholar] [CrossRef]
  43. Fu, H.; Xu, J.C.Y.; Wong, D.W.K.; Liu, J.; Cao, X. Joint optic disc and cup segmentation based on multi-label deep network and polar transformation. IEEE Trans. Med. Imaging 2018, 37, 1597–1605. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  44. Yuan, Y.; Li, C.; Xu, L.; Zhu, S.; Hua, Y.; Zhang, J. CSM-Net: Automatic joint segmentation of intima-media complex and lumen in carotid artery ultrasound images. Comput. Biol. Med. 2022, 150, 106119. [Google Scholar] [CrossRef]
  45. Chu, X.; Tian, Z.; Wang, Y.; Zhang, B.; Ren, H.; Wei, X.; Xia, H.; Shen, C. Twins: Revisiting the design of spatial attention in vision transformers. Adv. Neural Inf. Process. Syst. 2021, 34, 9355–9366. [Google Scholar]
  46. Elharrouss, O.; Akbari, Y.; Almaadeed, N.; Al-Maadeed, S. Backbones-review: Feature extraction networks for deep learning and deep reinforcement learning approaches. arXiv 2022, arXiv:2206.08016. [Google Scholar]
  47. Meiburger, K.M.; Zahnd, G.; Faita, F.; Loizou, C.P.; Carvalho, C.; Steinman, D.A.; Gibello, L.; Bruno, R.M.; Marzola, F.; Clarenbach, R.; et al. Carotid ultrasound boundary study (CUBS): An open multicenter analysis of computerized intima-media thickness measurement systems and their clinical impact. Ultrasound Med. Biol. 2021, 47, 2442–2455. [Google Scholar] [CrossRef]
Figure 1. Visualization of plaque build-up and obstruction to the normal flow of blood in the artery (https://my.clevelandclinic.org/health/diseases/16845-carotid-artery-disease-carotid-artery-stenosis, accessed on 9 January 2023).
Figure 2. Networks used for ultrasound IMC segmentation.
Figure 3. Morphological erosion on ultrasound IMC segmentation results.
Figure 4. (a) The precision-recall curve for ultrasound IMC segmentation using the self-ONN with different q settings, (b) using the CNN and self-ONN with q = 3 and q = 5.
Figure 5. Original and ground truth sample images and the corresponding segmentation results for the proposed deep learning models.
Table 1. Training hyperparameters and parameters for each model.

Method               Learning Rate   Optimizer   Epochs   Training Parameters
DeepCrack            0.0001          Adam        100      14.720 M
DeepCrack_Self_ONN   0.0001          Adam        100      44.144 M
PiDiNet              0.005           Adam        70       1.150 M
Transformer          0.00001         Adam        70       104.609 M
Table 2. Performance of the proposed and implemented models on the CUBS dataset. The bold and underlined fonts represent the first and second place, respectively.

Model                               Precision   Recall   F-Measure   Dice    Jaccard   FPS
DeepCrack_CNN                       0.631       0.675    0.652       0.652   0.484     17.074
DeepCrack_CNN + Post-processing     0.834       0.618    0.697       0.697   0.544     17.074
DeepCrack_Self (q = 3)              0.652       0.688    0.669       0.669   0.503     13.45
DeepCrack_Self + Post-processing    0.792       0.691    0.721       0.721   0.571     13.45
PiDiNet                             0.687       0.825    0.750       0.750   0.602     20.62
PiDiNet + Post-processing           0.876       0.740    0.791       0.791   0.661     20.62
Transformer                         0.68        0.826    0.746       0.746   0.595     11.427
Transformer + Post-processing       0.882       0.849    0.801       0.801   0.656     11.427
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
