Article

Motion Artifact Detection Based on Regional–Temporal Graph Attention Network from Head Computed Tomography Images

1 School of Computer Science and Engineering, Northeastern University, Shenyang 110169, China
2 Office of Information Construction and Network Security, Northeastern University, Shenyang 110819, China
3 Department of Computer Science and Technology, Dalian Neusoft University of Information, Dalian 116023, China
4 School of Information Science and Engineering, Northeastern University, Shenyang 110819, China
* Author to whom correspondence should be addressed.
Electronics 2024, 13(4), 724; https://doi.org/10.3390/electronics13040724
Submission received: 27 December 2023 / Revised: 27 January 2024 / Accepted: 4 February 2024 / Published: 10 February 2024

Abstract: Artifacts are the main cause of degradation in CT image quality and diagnostic accuracy. Because of the complex texture of CT images, it is a challenging task to automatically detect artifacts from limited image samples. Recently, graph convolutional networks (GCNs) have achieved great success and shown promising results in medical imaging due to their powerful learning ability. However, GCNs do not take the attention mechanism into consideration. To overcome their limitations, we propose a novel Regional–Temporal Graph Attention Network (RT-GAT) for motion artifact detection from computed tomography images. In this paper, head CT images are viewed as a heterogeneous graph by taking regional and temporal information into consideration, and the graph attention network is utilized to extract the features of the constructed graph. Then, the feature vector is input into the classifier to detect the motion artifacts. The experimental results demonstrate that our proposed RT-GAT method outperforms the state-of-the-art methods on a real-world CT dataset.

1. Introduction

Regular physical examination can help to improve residents’ health. With the development of artificial intelligence technology, the construction of intelligent medical systems, the alleviation of medical resource shortages through computer-aided methods, and even the use of computers in place of doctors to screen patients have become major research directions. The work in this paper belongs to the core field of “virtual doctor” research in smart cities, which can promote future intelligent medical treatment.
Image quality detection technology based on computer vision is at the core of the intelligent medical field. Medical imaging technology plays an important role and has provided clinicians with an effective tool for diagnosis. In particular, computed tomography (CT) has emerged as an important tool and has been widely used due to its low cost, high efficiency, readily obtainable results, and non-invasive nature [1]; CT images contain valuable diagnostic information. However, due to an individual’s motion, motion artifacts often obscure the image and threaten the accuracy of judgement in clinical practice. Thus, in academia and industry, the detection of motion artifacts has become an important area in CT quality analysis and has attracted widespread attention [2,3]. Great efforts have been made to explore the internal mechanism of motion artifact detection throughout the years. Conventional methods manually classify and label each image according to its attributes, which is tedious and time-consuming. Moreover, evaluating the quality of CT images is a challenging task that requires extensive clinical experience and is error-prone.
Researchers have carried out a lot of valuable work to automatically assess CT image quality [4,5,6,7,8,9]. Several studies have mainly focused on handcrafted feature design. Deep learning methods [10,11,12,13,14], such as convolutional neural networks (CNNs), have achieved unprecedented performance in the processing of Euclidean data [15,16,17]. However, there are some limitations and challenges when it comes to the medical domain. For instance, these methods neglect useful relationships between images and are still under development for graph-structured data. Related nodes, i.e., the neighbors of a node, can give insights to better understand the current image. The relationship between quality and topological properties may be uncovered by combining domain knowledge at the macro and micro levels; this would help in improving the accuracy and in understanding the formation mechanism of motion artifacts. Because of the black-box nature of deep learning methods, features are extracted implicitly, and there are few clues as to how decisions are made; this makes it difficult to understand why the algorithms predict a certain label.
Complex networks are highly simplified models of complex systems [18,19,20] and provide a way to understand the nature and function of such systems. They have attracted increasing attention because they are powerful, flexible, and universal. For example, the representation of problems as complex networks and the topological characteristics of these networks have been studied [21,22]. Many interdisciplinary applications, including social networks [23,24], software networks [25], biology [26,27,28,29], and earthquake networks [30], have emerged based on complex networks. The topological characteristics of a network can be obtained, and the network can then be analyzed. However, there are few studies on image representation using complex networks, so there is still a lot of room for research on the application of complex networks to images.
To tackle the aforementioned problems, graph-based analysis methods have been established to preserve the structure and extract node features. Deep learning technologies, such as graph neural networks (GNNs), are being used to process graph-structured data and learn the representation of networks. Based on the feature learning ability of GNNs, effective motion artifact detection can be realized with a limited number of samples, meeting the needs of clinical practice [31]. Recently, graph convolutional networks (GCNs) [32,33,34] have been demonstrated to be effective in dealing with graph data for image classification tasks. An end-to-end GCN, ImageGCN [35], which learns the representation of an image, has been applied to chest X-ray images for disease identification. When it comes to medical domain knowledge, many researchers are devoted to analyzing the critical roles of shape and location. X-ray images of the cervical spine have been automatically segmented using the shape-aware deep segmentation network UNet-S, where ‘S’ refers to the use of the updated shape-aware loss function [36]. Researchers have also verified that it is critical to take rib region information into account and confirmed the necessity of region-specific features in the segmentation task [37], which may benefit the classification task. It has also been demonstrated that the incorporation of prior anatomical knowledge can effectively improve performance. In [38], it was shown that the photoplethysmography (PPG) signals used to extract RR are not very clean, largely because they are corrupted by motion artifacts, which poses a challenge to the reliable extraction of these metrics. A graph attention network (GAiA-Net) for image denoising has been proposed to deal with the increase in noise complexity in noisy images and to address the problem whereby existing artifact detection methods cannot provide satisfactory solutions. Because the presence of motion artifacts (MAs) hinders the accurate analysis of electrodermal activity (EDA) signals, the study in [39] presents a machine learning framework for automatic motion artifact detection in electrodermal activity signals.
The motivations of this study are the core issues in medical image classification identified from the above analysis, i.e., how to model the relationships between head CT images and efficiently fuse domain knowledge to improve the classification accuracy. To achieve these goals, in this work, it is essential to take region heterogeneity into account. First, we introduce complex network theory to model the relationships between head CT images based on anatomical prior knowledge, called the RT-graph. Efforts have been dedicated to promoting a systematic understanding of the relationship between topology, function [40,41,42,43,44,45,46], and network characteristics. Identifying this relationship may help in understanding images of different quality, leading to better explainability. Second, to learn the relationships between different images, we design a medical image classification algorithm based on GCN, which takes the degree into consideration, to learn the RT-graph. Compared with the popular CNN-based image classification method, the proposed RT-GAT approach can model graph-structured data to learn informative representations for nodes. Intuitively, by examining the content of each slice and comprehensively analyzing the relationships between slices, motion artifacts can be detected; from these, clinical diagnoses, prognoses, treatment plans, and other medical decisions may be better formulated.
The main contributions in this work are summarized as follows.
A Regional–Temporal Graph Attention Network is developed for motion artifact detection from head CT images. From the perspective of levels, the quality assessment problem is formulated as a node classification issue at the slice level, with node attributes acquired at the pixel level.
Network information, including the node feature information and the structure information extracted from the CT network, is utilized to detect motion artifacts via a two-stage decision scheme.
An improved GNN model called RT-GAT is presented and embedded into the execution of the proposed method, such that the features of CT image quality can be highlighted using the designed community detection and anatomical prior knowledge.

2. Materials and Methods

2.1. Basic Notations and Problem Statement

Artifacts may cause a sharp decline in the quality of the image, compromising post-processing and greatly affecting the accuracy of a diagnosis; they bring serious confusion to the diagnostic process and lead to deviations in diagnosis [47,48].
In this investigation, for simplicity, the following assumptions are made.
The quality of CT images is degraded merely by motion artifacts and is evaluated based on whether motion artifacts have affected the diagnosis process.
A single slice consists of many pixels, and all slices have the same resolution of 512 × 512.
The qualities and anatomical prior knowledge (APK), i.e., region information, of a slice are correctly labeled.

2.2. Pipeline of the Proposed RT-GAT Method

The schematic diagram of the proposed RT-GAT method for motion artifact detection is illustrated in Figure 1. RT-GAT’s architecture consists of four key modules: (1) graph construction; (2) feature extraction; (3) training and learning; and (4) classification and motion artifact detection.
First, CT images are converted into an RT-graph. Then, the graph attention network extracts features from the constructed graph, and the resulting feature vector is input into the classifier to detect the motion artifacts. The overall framework of the proposed motion artifact detection method based on the Regional–Temporal Graph Attention Network (RT-GAT) is illustrated in Figure 1.

2.2.1. Graph Construction

Based on the RT-graph model, using complex network theory, CT images are expressed in a macro manner. Suppose that the CT images can be modeled as an undirected, unweighted graph at the region level. The graph is denoted as G = {V, E, A}. V represents the node set of the graph, with |V| = N, where N is the total number of samples, and E is the edge set, in which the slices (represented as nodes) are connected in the form of a fully connected undirected graph. At the region level, a node corresponds to a slice, and an edge represents the relation between two slices based on the labeled anatomical prior knowledge (APK), i.e., region information. An edge is generated if two slices from different individuals belong to the same region and have the same quality, or if the two slices belong to the same individual, as seen in Figure 2, where the solid circles indicate individuals and the dashed circles indicate slices. At the pixel level, the attributes of the graph are obtained based on the complex network [49].
Specifically, Figure 2 shows an example of two types of nodes and two types of edge connections. To consider the relationships between CT images, an adjacency matrix was formed using complex network theory. A ∈ R^{N×N} is the adjacency matrix describing the connections between any two nodes; it is a symmetric connectivity matrix and a representative description of the graph structure in matrix form. Each element of the adjacency matrix takes the value 0 or 1. The adjacency matrix is defined as follows:
a_{i,j} = \begin{cases} 0, & \text{if } V_i \text{ and } V_j \text{ are not connected} \\ 1, & \text{if } V_i \text{ and } V_j \text{ are connected} \end{cases}
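The slice-level adjacency construction can be summarized in a short Python sketch. This is illustrative only: the metadata field names ('individual', 'region', 'quality') are hypothetical placeholders, not the authors' implementation.

```python
import numpy as np

def build_rt_adjacency(slices):
    """Build the slice-level RT-graph adjacency matrix from slice metadata."""
    n = len(slices)
    A = np.zeros((n, n), dtype=np.uint8)
    for i in range(n):
        for j in range(i + 1, n):
            same_individual = slices[i]['individual'] == slices[j]['individual']
            same_region_quality = (slices[i]['region'] == slices[j]['region']
                                   and slices[i]['quality'] == slices[j]['quality'])
            if same_individual or same_region_quality:
                A[i, j] = A[j, i] = 1  # a_ij = 1 when V_i and V_j are connected
    return A

# Example: three slices from two individuals (hypothetical labels).
slices = [
    {'individual': 'P1', 'region': 'skull_base', 'quality': 'artifact'},
    {'individual': 'P1', 'region': 'cerebrum',   'quality': 'clean'},
    {'individual': 'P2', 'region': 'skull_base', 'quality': 'artifact'},
]
print(build_rt_adjacency(slices))
```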
As the characterization and identification of CT images require a methodology that can express the context surrounding each slice, the local and global characteristics of the CT images were combined, and multi-level exploration was used to better characterize the CT images. Thus, we also took pixel-level graph construction into consideration. An image was represented as a graph, with each pixel denoted as a node, and an edge was generated according to the location and intensity relationship between two pixels. Pixel-level graph construction was conducted in a micro manner, and topological features, such as the average degree, average clustering coefficient, and average path length, were extracted, as they can be used to explain the internal mechanism.
The network topological characteristics [19] are explained in detail as follows.
Degree [18]: In an undirected graph, the degree of a given node i is the number of nodes directly connected to it. The degree is the most fundamental characteristic and measure of a node.
Clustering coefficient [20]: The clustering coefficient reflects the degree of clustering in the network. For a node i with degree k_i, let E_i denote the number of edges that actually exist among its k_i neighbors; the clustering coefficient of node i is then
C_i = \frac{2 E_i}{k_i (k_i - 1)}
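A minimal sketch of pixel-level graph construction and topological feature extraction with networkx is given below. The edge rule beyond "location and intensity" is not specified in the text, so the spatial radius and intensity threshold here are assumptions, and the small random patch is used only to keep the example fast.

```python
import numpy as np
import networkx as nx

def pixel_graph_features(img, radius=2, intensity_thr=32):
    """Build a pixel graph and return average degree, clustering, and path length."""
    h, w = img.shape
    G = nx.Graph()
    G.add_nodes_from(range(h * w))
    for y in range(h):
        for x in range(w):
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny_, nx_ = y + dy, x + dx
                    if (dy, dx) != (0, 0) and 0 <= ny_ < h and 0 <= nx_ < w:
                        # Assumed rule: connect nearby pixels with similar intensity.
                        if abs(int(img[y, x]) - int(img[ny_, nx_])) <= intensity_thr:
                            G.add_edge(y * w + x, ny_ * w + nx_)
    degrees = [d for _, d in G.degree()]
    giant = G.subgraph(max(nx.connected_components(G), key=len))
    return {
        'avg_degree': float(np.mean(degrees)),
        'avg_clustering': nx.average_clustering(G),                  # mean of C_i over all nodes
        'avg_path_length': nx.average_shortest_path_length(giant),   # on the largest component
    }

# In practice a slice would be downsampled before building the pixel graph.
patch = np.random.randint(0, 256, size=(16, 16), dtype=np.uint8)
print(pixel_graph_features(patch))
```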
A gray-level co-occurrence matrix (GLCM) reflects the spatial correlation characteristics of gray values and can effectively describe the spatial relationships between pixels in two-dimensional CT images. In order to quantitatively describe the artifacts in CT images, the GLCM, which is based on statistical theory, is used to analyze the head CT scan images and to further explore the details of the microstructure. GLCM features based on texture attributes were proposed by Haralick and Shanmugam in 1973. In this study, we extracted GLCM texture features from the head CT images and applied them to texture analysis, computing features such as contrast, dissimilarity, homogeneity, energy, and correlation.
P_{\Delta x, \Delta y}(i, j) = \sum_{x=1}^{N} \sum_{y=1}^{N} \begin{cases} 1, & \text{if } I(x, y) = i \text{ and } I(x + \Delta x, y + \Delta y) = j \\ 0, & \text{otherwise} \end{cases}
where i and j denote gray-level pixel values, and P(i,j) is the number of times that the pixel pair (i,j) is observed in the input image.
Contrast: This feature measures the local gray correlation in the image. The greater the difference in gray values in the image, the sharper the edge of the image and the greater the contrast.
\mathrm{Contrast} = \sum_{i=0}^{N_g - 1} \sum_{j=0}^{N_g - 1} p_{i,j} (i - j)^2
Dissimilarity: This feature reflects the local contrast change in the image. When the local contrast increases, the difference increases, but when the local contrast decreases, the difference decreases.
\mathrm{Dissimilarity} = \sum_{i=0}^{N_g - 1} \sum_{j=0}^{N_g - 1} p_{i,j} \left| i - j \right|
Homogeneity: A higher value indicates a smaller difference in gray-scale tone between two objects, while a low value indicates a greater change in contrast.
\mathrm{Homogeneity} = \sum_{i=0}^{N_g - 1} \sum_{j=0}^{N_g - 1} \frac{p_{i,j}}{1 + (i - j)^2}
Energy: Energy is used to calculate the number of pixel pair repetitions and the amount of randomness in the image texture. This feature reflects the uniformity of the gray distribution in an image. The more concentrated the gray distribution of the image, the greater the energy. The more dispersed the gray distribution of the image, the smaller the energy. When the pixels are closely related, the energy value increases significantly.
\mathrm{Energy} = \sum_{i=0}^{N_g - 1} \sum_{j=0}^{N_g - 1} (p_{i,j})^2
Correlation: This feature indicates the relationship between a pixel and those around it. Its value lies between −1 and 1, depending on whether the image has a negative or positive correlation. It reflects the local gray-level correlation of the image and measures the similarity of the elements of the gray-level co-occurrence matrix along rows or columns. The smaller the differences among the element values of the gray-level co-occurrence matrix, the greater the correlation; the greater the differences, the smaller the correlation.
\mathrm{Correlation} = \sum_{i=0}^{N_g - 1} \sum_{j=0}^{N_g - 1} \frac{p_{i,j} (i - \mu)(j - \mu)}{\sigma^2}
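As a concrete illustration of the texture features defined above, the following NumPy sketch computes them from a quantized image. The number of gray levels and the displacement (Δx, Δy) = (1, 0) are arbitrary choices for the example, not the settings used in the paper.

```python
import numpy as np

def glcm_features(img, dx=1, dy=0, levels=32):
    """Compute the GLCM features defined above for one displacement (dx, dy)."""
    # Quantize the 8-bit image to Ng = levels gray levels.
    q = np.clip((img.astype(np.float64) / 256.0 * levels).astype(int), 0, levels - 1)
    glcm = np.zeros((levels, levels), dtype=np.float64)
    h, w = q.shape
    for y in range(h - dy):
        for x in range(w - dx):
            glcm[q[y, x], q[y + dy, x + dx]] += 1      # count pixel pairs (i, j)
    glcm = glcm + glcm.T                               # symmetrize so a single mu/sigma applies
    p = glcm / glcm.sum()                              # normalize to joint probabilities p_ij
    i, j = np.meshgrid(np.arange(levels), np.arange(levels), indexing='ij')
    mu = (i * p).sum()
    sigma2 = (((i - mu) ** 2) * p).sum()
    return {
        'contrast':      (p * (i - j) ** 2).sum(),
        'dissimilarity': (p * np.abs(i - j)).sum(),
        'homogeneity':   (p / (1.0 + (i - j) ** 2)).sum(),
        'energy':        (p ** 2).sum(),
        'correlation':   (p * (i - mu) * (j - mu)).sum() / sigma2,
    }

patch = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
print(glcm_features(patch))
```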

2.2.2. Graph Attention Network

The number of nodes in different regions may differ, as may the node degrees. Thus, here, we take the graph convolution network into consideration. For an image, the original features and topological features were considered together as node attributes. In this way, the physical and topological properties of the head CT images, obtained at the micro level, were used as attributes of the corresponding nodes at the macro level.
In this paper, the graph attention network (GAT) is used to model the whole graph structure in the node feature extraction stage to obtain the corresponding feature expression of each CT image node in the network. The GAT model is an improvement on the GCN. The attention mechanism can assign different weights to different nodes, and training depends only on pairs of adjacent nodes rather than on a specific network structure.
Assuming that there are N nodes in the graph, the node vectors input into the network are denoted as h = {h_1, h_2, …, h_N}, h_i ∈ R^F, and the output vectors of the graph attention layer are denoted as h′ = {h′_1, h′_2, …, h′_N}, h′_i ∈ R^{F′}. The model used in this paper introduces a self-attention mechanism to calculate the information aggregation between nodes, and the corresponding formula is as follows:
e_{ij} = a(W h_i, W h_j)
where e_ij represents the importance of node j to node i. To make the coefficients comparable across different nodes, they are normalized over all neighbors of node i using the softmax function:
\alpha_{ij} = \mathrm{softmax}_j (e_{ij})
The attention mechanism used in this paper is implemented by a single-layer feedforward neural network with the LeakyReLU activation function, so the attention coefficient can be expanded as follows:
\alpha_{ij} = \frac{\exp\left(\mathrm{LeakyReLU}\left(\beta^{T} [W h_i \,\|\, W h_j]\right)\right)}{\sum_{k \in \mathcal{N}_i} \exp\left(\mathrm{LeakyReLU}\left(\beta^{T} [W h_i \,\|\, W h_k]\right)\right)}
where β is a trainable weight vector of the feedforward neural network, ‖ denotes concatenation, and W is a trainable weight matrix. A multi-head attention mechanism is added to the GAT: the feature vectors calculated by the K attention heads are concatenated, and the corresponding output feature vector is expressed as follows:
h'_i = \mathop{\Big\Vert}_{k=1}^{K} \sigma\left(\sum_{j \in \mathcal{N}_i} \alpha_{ij}^{k} W^{k} h_j\right)
where W^k is the parameter matrix that linearly transforms the input vectors in the k-th head. For the output of the last layer, it was found that concatenating the vectors does not perform well, so averaging is used instead to calculate the final feature vector:
h'_i = \sigma\left(\frac{1}{K} \sum_{k=1}^{K} \sum_{j \in \mathcal{N}_i} \alpha_{ij}^{k} W^{k} h_j\right)
h′_i is the feature vector extracted by the graph attention network, which is then passed to the output layer.
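The following is a minimal PyTorch sketch of one multi-head graph attention layer implementing the equations above (raw scores, softmax over neighbors, per-head aggregation, and concatenation or averaging of heads). It is illustrative only: the LeakyReLU slope, the initialization, and the dense adjacency-matrix formulation are assumptions rather than the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GATLayer(nn.Module):
    """One multi-head graph attention layer (dense adjacency formulation)."""

    def __init__(self, in_dim, out_dim, num_heads=4, concat=True):
        super().__init__()
        self.out_dim = out_dim
        self.concat = concat
        # W^k: one linear transform per head, stored as a (K, F, F') tensor.
        self.W = nn.Parameter(torch.empty(num_heads, in_dim, out_dim))
        # beta^k: attention vector applied to the concatenation [W h_i || W h_j].
        self.beta = nn.Parameter(torch.empty(num_heads, 2 * out_dim))
        nn.init.xavier_uniform_(self.W)
        nn.init.xavier_uniform_(self.beta)
        self.leaky_relu = nn.LeakyReLU(0.2)  # slope is an assumption

    def forward(self, h, adj):
        # h: (N, F) node features; adj: (N, N) binary adjacency with self-loops.
        n = h.size(0)
        wh = torch.einsum('nf,kfo->kno', h, self.W)                        # W^k h_i per head
        src = torch.einsum('kno,ko->kn', wh, self.beta[:, :self.out_dim])
        dst = torch.einsum('kno,ko->kn', wh, self.beta[:, self.out_dim:])
        e = self.leaky_relu(src.unsqueeze(2) + dst.unsqueeze(1))           # e_ij = LeakyReLU(beta^T [Wh_i || Wh_j])
        e = e.masked_fill(adj.unsqueeze(0) == 0, float('-inf'))            # restrict attention to neighbors
        alpha = torch.softmax(e, dim=2)                                    # alpha_ij = softmax_j(e_ij)
        out = torch.einsum('knm,kmo->kno', alpha, wh)                      # sum_j alpha_ij^k W^k h_j
        if self.concat:                                                    # hidden layers: concatenate heads
            return F.elu(out.permute(1, 0, 2).reshape(n, -1))
        return out.mean(dim=0)                                             # last layer: average heads

# Toy usage: 5 nodes, 8 input features, 2-class output layer.
h = torch.randn(5, 8)
adj = (torch.rand(5, 5) > 0.5).float()
adj = (((adj + adj.t()) > 0).float() + torch.eye(5) > 0).float()           # symmetrize, add self-loops
hidden = GATLayer(8, 16, num_heads=4, concat=True)(h, adj)                 # shape (5, 64)
logits = GATLayer(64, 2, num_heads=4, concat=False)(hidden, adj)           # shape (5, 2)
```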

2.2.3. Motion Artifact Detection Evaluation

We used machine learning methods and deep learning methods for classification. A deep learning method, RT-GAT, was used for feature extraction after graph construction. We also compared our framework to the GCN, which does not take attention into consideration, and a CNN (backbone: ResNet-50), a deep learning method for Euclidean data.
In terms of medical image quality assessment, the labels represent the quality of the medical image. For the binary classification problem, CT images with and without artifacts are treated as positive and negative, respectively. As shown in Table 1, true-positive (TP) cases were those in which an artifact-affected image was recognized as artifact-affected, whereas false-positive (FP) cases were those in which an artifact-free image was recognized as artifact-affected. Furthermore, true-negative (TN) cases were those in which artifact-free images were recognized as artifact-free, whereas false-negative (FN) cases were those in which artifact-affected images were incorrectly recognized as artifact-free.
Accuracy, specificity, and sensitivity were used for performance evaluation and were calculated as follows:
\mathrm{Accuracy} = \frac{TP + TN}{TP + FP + FN + TN}
\mathrm{Specificity} = \frac{TN}{FP + TN}
\mathrm{Sensitivity} = \frac{TP}{TP + FN}
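A minimal sketch of these three metrics computed from binary predictions (1 = artifact-affected, 0 = artifact-free) is shown below.

```python
def evaluate(y_true, y_pred):
    """Accuracy, specificity, and sensitivity from binary labels and predictions."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return {
        'accuracy':    (tp + tn) / (tp + fp + fn + tn),
        'specificity': tn / (fp + tn),
        'sensitivity': tp / (tp + fn),
    }

print(evaluate([1, 1, 0, 0, 1], [1, 0, 0, 1, 1]))
```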

3. Results

3.1. Dataset

For CT images, we used the Digital Imaging and Communications in Medicine (DICOM) format. To test the performance of the proposed method, a real-world dataset consisting of 480 samples collected from a Neusoft Medical CT scanner (Chongqing, China) was used. The ground truth for all training and testing data was assessed by a neuroradiologist with 10 years’ experience in interpreting and analyzing CT images. The slices were categorized into two classes: artifact-affected images and artifact-free images. Typical images from the two classes are presented in Figure 3. The artifact-affected image (a) shows fissures, whereas the artifact-free image (b) shows none.
After RT-graph construction, we obtained a corresponding graph (see Figure 4).
As shown in Figure 4, the graph was constructed based on regional–temporal information. Multi-dimensional network topological characteristics were fused with physical characteristics to form the node feature vectors. From the level perspective, topological characteristics were obtained as node attributes at the pixel level, and the RT-graph was constructed at the slice level. As seen in Figure 5a–h, community detection was conducted, and communities 1 to 8 are shown to demonstrate the clustering of the CT images.
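The paper does not state which community detection algorithm produced the eight communities in Figure 5; the sketch below shows one common option, greedy modularity maximization from networkx, applied to a placeholder graph.

```python
import networkx as nx

# Placeholder graph; in practice, G would be built from the RT-graph adjacency matrix.
G = nx.karate_club_graph()
communities = nx.algorithms.community.greedy_modularity_communities(G)
for idx, com in enumerate(communities, start=1):
    print(f"community {idx}: nodes {sorted(com)}")
```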

3.2. Performance Metrics

We evaluated the model’s performance on the test set in terms of accuracy, sensitivity, and specificity. As shown in Table 2, the proposed method works well in clinical practice.
We evaluated the performance of RT-GAT in terms of accuracy, sensitivity, and specificity (Table 2). The classification performance of the RT-GAT approach was compared with that of the CNN approach, which does not take anatomical prior knowledge into consideration, as well as the GCN, the GAT, and the machine learning methods.
The proposed RT-GAT method was compared with the DL-based baseline approach, a CNN; the results imply that a deep learning method that does not take regional characteristics into consideration may not be sufficient for motion artifact detection. The topology-aware method outperforms CNN-based pixel-wise classifiers, whose training images do not incorporate topological information.
It can be observed from Table 2 that the proposed RT-GAT method achieved the best classification performance, followed by GAT, GCN, GLCM + RF, GLCM + SVM, and CNN. This can be attributed to the regional–temporal graph construction and the attention mechanism, which improve the classification performance. The information utilized by each model is summarized in Table 3, where × means the information is not included and √ means it is included.

4. Discussion

In this paper, a novel robust strategy that incorporates regional–temporal information with a graph attention network, called RT-GAT, has been proposed for the detection of motion artifacts. The main outcome of this work is that RT-GAT has achieved impressive performance when acting as a feature extractor in a classification task on graph-structured data. By formulating a motion artifact detection problem as a node classification problem, a comprehensive description of head CT images has been developed to effectively express multi-level characteristics. The results show that the topology-aware method with topological information outperforms pixel-wise classifiers.
In addition, an interpretable multi-level quality assessment method has been carried out on head CT images, which helps to open the “black box” associated with these methods. Considering that regional information is essential and helpful in classification tasks and leads to better explainability, we used both regional and temporal information to recognize the head CT images in a more comprehensive way. Moreover, related nodes, i.e., the neighbors of a node, can give insights to better understand the current image. With a combination of domain knowledge at the macro and micro levels, the relationship between quality and topological properties has been uncovered, which helps to improve the accuracy and explains the formation mechanism of motion artifacts.
The experimental results obtained on real clinical data have demonstrated that the RT-GAT method can classify the quality of head CT images effectively, even with a limited number of labeled samples, which better fits the clinical scenario. It has been demonstrated to be robust and effective in characterizing CT images of different quality.
In this work, the ground truth for all training and testing data was provided by a neuroradiologist with 10 years’ experience in interpreting and analyzing CT images. However, it will be necessary to train and test the method on other datasets in the future.
The proposed RT-GAT method introduces a new line of research and may serve as a valuable tool in assessing the quality of CT images. In clinical practice, motion artifacts exist in different medical image modalities, such as magnetic resonance imaging (MRI), ultrasound, and positron emission tomography (PET). The generality and feasibility of the proposed strategy need to be verified on multi-modal medical data before it can ultimately benefit the quality assessment research and medical imaging community.

Author Contributions

Conceptualization, Y.L. and Z.W.; Methodology, Y.L.; Software, T.W.; Validation, T.W.; Formal analysis, Z.W.; Investigation, T.W.; Data curation, T.W.; Writing—original draft, Y.L.; Writing—review & editing, Z.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China under Grant 61772101.

Data Availability Statement

The dataset used and/or analyzed during the current study is available from the corresponding author upon reasonable request. The dataset is not publicly available due to privacy.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Boas, F.E.; Fleischmann, D. CT artifacts: Causes and reduction techniques. Imaging Med. 2012, 4, 229–240. [Google Scholar] [CrossRef]
  2. Fair, D.; Miranda-Dominguez, O.; Snyder, A.; Perrone, A.; Earl, E.; Van, A.N.; Koller, J.M.; Feczko, E.; Tisdall, M.D.; van der Kouwe, A.; et al. Correction of respiratory artifacts in MRI head motion estimates. NeuroImage 2020, 208, 116400. [Google Scholar] [CrossRef] [PubMed]
  3. Power, J.; Mitra, A.; Laumann, T.; Snyder, A.; Schlaggar, B.; Petersen, S. Methods to detect, characterize, and remove motion artifact in resting state fMRI. NeuroImage 2014, 84, 320–341. [Google Scholar] [CrossRef] [PubMed]
  4. Doi, K. Computer-aided diagnosis in medical imaging: Historical review, current status and future potential. Comput. Med. Imaging Graph. 2007, 31, 198–211. [Google Scholar] [CrossRef]
  5. Spin-Neto, R.; Kruse, C.; Hermann, L.; Kirkevang, L.L.; Wenzel, A. Impact of motion artefacts and motion-artefact correction on diagnostic accuracy of apical periodontitis in CBCT images: An ex vivo study in human cadavers. Int. Endod. J. 2020, 53, 1275. [Google Scholar] [CrossRef]
  6. Wei, L.; Rosen, B.; Vallières, M.; Chotchutipan, T.; Mierzwa, M.; Eisbruch, A.; El Naqa, I. Automatic recognition and analysis of metal streak artifacts in head and neck computed tomography for radiomics modeling. Phys. Imaging Radiat. Oncol. 2019, 10, 49–54. [Google Scholar] [CrossRef]
  7. Ger, R.B.; Craft, D.F.; Mackin, D.S.; Zhou, S.; Layman, R.R.; Jones, A.K.; Elhalawani, H.; Fuller, C.D.; Howell, R.M.; Li, H.; et al. Practical guidelines for handling head and neck computed tomography artifacts for quantitative image analysis. Comput. Med. Imaging Graph. 2018, 69, 134–139. [Google Scholar] [CrossRef]
  8. Stoeve, M.P.; Aubreville, M.; Oetter, N.; Knipfer, C.; Neumann, H.; Stelzle, F.; Maier, A. Motion Artifact Detection in Confocal Laser Endomicroscopy Images; Springer: Berlin/Heidelberg, Germany, 2017. [Google Scholar]
  9. Wallner, J.; Mischak, I.; Egger, J. Computed tomography data collection of the complete human mandible and valid clinical ground truth models. Sci. Data 2019, 6, 190003. [Google Scholar] [CrossRef]
  10. Hamilton, W.L. Graph Representation Learning. In Synthesis Lectures on Artificial Intelligence and Machine Learning; Claypool Publishers: San Rafael, CA, USA, 2020; Volume 14, pp. 1–159. [Google Scholar]
  11. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. Adv. Neural Inf. Process. Syst. 2012, 60, 84–90. [Google Scholar] [CrossRef]
  12. Graham, M.S.; Ivana, D.; Hui, Z. A supervised learning approach for diffusion MRI quality control with minimal training data. Neuroimage 2018, 178, 668–676. [Google Scholar] [CrossRef]
  13. Zaitsev, M.; Maclaren, J.; Herbst, M. Motion artifacts in MRI: A complex problem with many partial solutions. J. Magn. Reson. Imaging 2015, 42, 887–901. [Google Scholar] [CrossRef] [PubMed]
  14. Locatello, F.; Bauer, S.; Lucic, M.; Rätsch, G.; Gelly, S.; Schölkopf, B.; Bachem, O. Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations. In Proceedings of the 36th International Conference on Machine Learning, PMLR, Long Beach, CA, USA, 9–15 June 2019. [Google Scholar]
  15. Hernandez, S.; Sjogreen, C.; Gay, S.S.; Nguyen, C.; Cardenas, C.E. Development and dosimetric assessment of an automatic dental artifact classification tool to guide artifact management techniques in a fully automated treatment planning workflow. Comput. Med. Imaging Graph. 2021, 90, 101907. [Google Scholar] [CrossRef] [PubMed]
  16. Wei, Y.; Xia, W.; Lin, M.; Huang, J.; Ni, B.; Dong, J.; Zhao, Y.; Yan, S. Hcp: A flexible CNN framework for multi-label image classification. IEEE Trans. Pattern Anal. Mach. Intell. 2016, 38, 1901–1907. [Google Scholar] [CrossRef] [PubMed]
  17. Welch, M.L.; Mcintosh, C.; Traverso, A.; Wee, L.; Jaffray, D.A. External validation and transfer learning of convolutional neural networks for computed tomography dental artifact classification. Phys. Med. Biol. 2019, 65, 035017. [Google Scholar] [CrossRef]
  18. Albert, R.; Barabási, A.L. Statistical mechanics of complex networks. Rev. Mod. Phys. 2002, 74, 47–97. [Google Scholar] [CrossRef]
  19. Barabási, A.L.; Albert, R. Emergence of scaling in random networks. Science 1999, 286, 509–512. [Google Scholar] [CrossRef]
  20. Watts, D.J.; Strogatz, S.H. Collective dynamics of ‘small-world’ networks. Nature 1998, 393, 440–442. [Google Scholar] [CrossRef]
  21. Costa, L.D.F.; Oliveira, O.N., Jr.; Travieso, G.; Rodrigues, F.A.; Villas Boas, P.R.; Antiqueira, L.; Viana, M.P.; Correa Rocha, L.E. Analyzing and modeling real-world phenomena with complex networks: A survey of applications. Adv. Phys. 2011, 60, 329–412. [Google Scholar] [CrossRef]
  22. Zhou, J.; Cui, G.; Hu, S.; Zhang, Z.; Yang, C.; Liu, Z.; Wang, L.; Li, C.; Sun, M. Graph neural networks: A review of methods and applications. AI Open 2020, 1, 57–81. [Google Scholar] [CrossRef]
  23. Garlaschelli, D.; Battiston, S.; Castri, M.; Servedio, V.D.P.; Caldarelli, G. The scale-free topology of market investments. Phys. A Stat. Mech. Appl. 2005, 350, 491–499. [Google Scholar] [CrossRef]
  24. Borgatti, S.; Mehra, A.; Brass, D.J.; Labianca, G. Network Analysis in the Social Science. Science 2009, 323, 892–895. [Google Scholar] [CrossRef]
  25. Li, H.; Zhao, H.; Cai, W.; Xu, J.Q.; Ai, J. A modular attachment mechanism for software network evolution. Phys. A Stat. Mech. Appl. 2013, 392, 2025–2037. [Google Scholar] [CrossRef]
  26. Machado, B.B.; Scabini, L.F.; Margarido Orue, J.P.; De Arruda, M.S.; Goncalves, D.N.; Goncalves, W.N.; Moreira, R.; Rodrigues, J.F., Jr. A complex network approach for nanoparticle agglomeration analysis in nanoscale images. J. Nanopart. Res. 2017, 19, 65. [Google Scholar] [CrossRef]
  27. Backes, A.R.; Casanova, D.; Bruno, O.M. Texture analysis and classification: A complex network-based approach. Inf. Sci. 2013, 219, 168–180. [Google Scholar] [CrossRef]
  28. Ribas, L.C.; Riad, R.; Jennane, R.; Bruno, O.M. A complex network based approach for knee osteoarthritis detection: Data from the osteoarthritis initiative. Biomed. Signal Process. Control 2022, 71, 103133. [Google Scholar] [CrossRef]
  29. Xu, S.S.; Duan, L.H.; Zhang, Y.; Zhang, Z.C.; Sun, T.S.; Tian, L.X. Graph- and transformer-guided boundary aware network for medical image segmentation. Comput. Methods Programs Biomed. 2023, 242, 107849. [Google Scholar] [CrossRef]
  30. Abe, S.; Suzuki, N. Dynamical evolution of the community structure of complex earthquake network. EPL 2012, 99, 313–316. [Google Scholar] [CrossRef]
  31. Wein, S.; Malloni, W.; Tomé, A.M.; Frank, S.M.; Lang, E.W. A graph neural network framework for causal inference in brain networks. Sci. Rep. 2021, 11, 8061. [Google Scholar] [CrossRef] [PubMed]
  32. Yang, Z.; Wu, Q.; Zhang, F.; Chen, X.; Wang, W.; Zhang, X.S. Optimizing spatial relationships in gcn to improve the classification accuracy of remote sensing images. Intell. Autom. Soft Comput. 2023, 37, 491–506. [Google Scholar] [CrossRef]
  33. Ghorbani, M.; Kazi, A.; Baghshah, M.S.; Rabiee, H.R.; Navab, N. RA-GCN: Graph convolutional network for disease prediction problems with imbalanced data. Med. Image Anal. 2022, 75, 102272. [Google Scholar] [CrossRef] [PubMed]
  34. Hong, D.; Gao, L.; Yao, J.; Zhang, B.; Plaza, A.; Chanussot, J. Graph convolutional networks for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2021, 59, 5966–5978. [Google Scholar] [CrossRef]
  35. Mao, C.; Yao, L.; Luo, Y. ImageGCN: Multi-relational image graph convolutional networks for disease identification with chest X-rays. IEEE Trans. Med. Imaging 2022, 41, 1990–2003. [Google Scholar] [CrossRef]
  36. Al Arif, S.M.; Masudur, R.; Knapp, K.; Slabaugh, G. Fully Automatic Cervical Vertebrae Segmentation Framework for X-ray Images. Comput. Methods Programs Biomed. 2018, 157, 95–111. [Google Scholar] [CrossRef]
  37. Lee, H.M.; Kim, Y.J.; Kim, K.G. Segmentation Performance Comparison Considering Regional Characteristics in Chest X-ray Using Deep Learning. Sensors 2022, 22, 3143. [Google Scholar] [CrossRef]
  38. Pollreisz, D.; Taherinejad, N. Detection and Removal of Motion Artifacts in PPG Signals. Mob. Netw. Appl. 2022, 27, 728–738. [Google Scholar] [CrossRef]
  39. Hossain, M.B.; Posada-Quintero, H.F.; Kong, Y.; Mcnaboe, R.; Chon, K.H. Automatic motion artifact detection in electrodermal activity data using machine learning. Biomed. Signal Process. Control 2022, 74, 103483. [Google Scholar] [CrossRef]
  40. Zhang, Q.M.; Shang, M.S.; Lu, L. Similarity-based classification in partially labeled networks. Int. J. Mod. Phys. C 2010, 21, 813. [Google Scholar] [CrossRef]
  41. Da Costa, L.F.; Rodrigues, F.A.; Travieso, G.; Villas-Boas, P.R. Characterization of complex networks: A survey of measurements. Adv. Phys. 2007, 56, 167–242. [Google Scholar] [CrossRef]
  42. Stam, C.J. Characterization of anatomical and functional connectivity in the brain: A complex networks perspective. Int. J. Psychophysiol. 2010, 77, 186–194. [Google Scholar] [CrossRef] [PubMed]
  43. Marchette, D.J. Random Graphs for Statistical Pattern Recognition; Wiley-Interscience: Hoboken, NJ, USA, 2005. [Google Scholar]
  44. Chalumeau, T.; Costa, L.D.F.; Laligant, O.; Meriaudeau, F. Optimized texture classification by using hierarchical complex network measurements. In Machine Vision Applications in Industrial Inspection XIV; SPIE: Bellingham, WA, USA, 2006. [Google Scholar]
  45. Backes, A.R.; Bruno, O.M. A graph-based approach for shape skeleton analysis. In Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2009; Volume 5716. [Google Scholar]
  46. Antiqueira, L.; Nunes, M.D.G.V.; Oliveira, O.N., Jr.; Costa, L.D.F. Strong correlations between text quality and complex networks features. Phys. A 2007, 373, 811–820. [Google Scholar] [CrossRef]
  47. Barrett, J.F.; Keat, N. Artifacts in CT: Recognition and avoidance. Radiographics 2004, 24, 1679–1691. [Google Scholar] [CrossRef] [PubMed]
  48. De Crop, A.; Casselman, J.; Van Hoof, T.; Dierens, M.; Vereecke, E.; Bossu, N.; Pamplona, J.; D’Herde, K.; Thierens, H.; Bacher, K. Analysis of metal artifact reduction tools for dental hardware in CT scans of the oral cavity: kVp, iterative reconstruction, dual-energy CT, metal artifact reduction software: Does it make a difference? Neuroradiology 2015, 57, 841–849. [Google Scholar] [CrossRef] [PubMed]
  49. Liu, Y.; Wen, T.; Sun, W.; Liu, Z.; Song, X.; He, X.; Zhang, S.; Wu, Z. Graph-Based Motion Artifacts Detection Method from Head Computed Tomography Images. Sensors 2022, 22, 5666. [Google Scholar] [CrossRef] [PubMed]
Figure 1. The pipeline of the proposed RT-GAT method.
Figure 2. Transformation from Euclidean space to non-Euclidean space.
Figure 3. Typical samples: (a) artifact-affected image, (b) artifact-free image.
Figure 4. Graphs generated based on the RT-graph model.
Figure 5. Community detection of RT-graph: (a) 1st community, (b) 2nd community, (c) 3rd community, (d) 4th community, (e) 5th community, (f) 6th community, (g) 7th community, (h) 8th community.
Table 1. Metrics of motion artifact detection judgement.

                     Predicted: 1              Predicted: 0
Actual: 1 (True)     True Positive (TP)        False Negative (FN)
Actual: 0 (False)    False Positive (FP)       True Negative (TN)
Table 2. Performance of different methods.

Method        Sensitivity   Accuracy   Specificity
CNN           85.67%        78.67%     71.67%
GLCM + RF     87.00%        88.00%     89.00%
GLCM + SVM    86.00%        88.00%     93.00%
GCN           89.1%         90.3%      88.5%
GAT           89.5%         90.8%      87.2%
RT-GAT        90.7%         92.3%      89.1%
Table 3. Information utilized in different models (× means not included, √ means included).

Method        Region-Level Global Information   Pixel-Level Local Information   Topological Features   Regional–Temporal Information   Attention Mechanism
CNN           ×                                 √                               ×                      ×                               ×
GLCM + SVM    ×                                 √                               ×                      ×                               ×
GLCM + RF     ×                                 √                               ×                      ×                               ×
GCN           √                                 √                               √                      ×                               ×
GAT           √                                 √                               √                      ×                               √
RT-GAT        √                                 √                               √                      √                               √
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
