Article

Lung Radiomics Features Selection for COPD Stage Classification Based on Auto-Metric Graph Neural Network

1 College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China
2 College of Health Science and Environmental Engineering, Shenzhen Technology University, Shenzhen 518118, China
3 School of Applied Technology, Shenzhen University, Shenzhen 518060, China
4 Department of Radiology, the First Affiliated Hospital of Guangzhou Medical University, Guangzhou 510120, China
5 Shenzhen Institute of Respiratory Diseases, Shenzhen People’s Hospital, Shenzhen 518001, China
6 The Second Clinical Medical College, Jinan University, Guangzhou 518001, China
7 The First Affiliated Hospital, Southern University of Science and Technology, Shenzhen 518001, China
8 Engineering Research Centre of Medical Imaging and Intelligent Analysis, Ministry of Education, Shenyang 110169, China
* Authors to whom correspondence should be addressed.
These authors contributed equally to this work.
Diagnostics 2022, 12(10), 2274; https://doi.org/10.3390/diagnostics12102274
Submission received: 18 August 2022 / Revised: 13 September 2022 / Accepted: 18 September 2022 / Published: 20 September 2022
(This article belongs to the Special Issue Radiomics and Machine Learning in Disease Diagnosis)

Abstract

Chronic obstructive pulmonary disease (COPD) is a preventable, treatable, progressive chronic disease characterized by persistent airflow limitation. Patients with COPD form a fragile population that deserves special consideration in treatment and preclinical health management. Therefore, this paper proposes a novel lung radiomics combination vector, generated by a generalized linear model (GLM) and the Lasso algorithm, for COPD stage classification based on an auto-metric graph neural network (AMGNN) with a meta-learning strategy. First, lung parenchyma images are segmented from chest high-resolution computed tomography (HRCT) images by ResU-Net. Second, lung radiomics features are extracted from the parenchyma images by PyRadiomics. Third, a novel lung radiomics combination vector (3 + 106) is constructed by the GLM and Lasso algorithm, determining the radiomics risk factors (K = 3) and radiomics node features (d = 106). Last, the COPD stage is classified by the AMGNN. The results show that, compared with convolutional neural networks and machine learning models, the AMGNN based on the constructed lung radiomics combination vector performs best, achieving an accuracy of 0.943, precision of 0.946, recall of 0.943, F1-score of 0.943, and AUC of 0.984. These results indicate that our method is effective for COPD stage classification.

1. Introduction

As a common, non-infectious lung disease, chronic obstructive pulmonary disease (COPD) is a preventable, treatable, and progressive chronic disease characterized by persistent airflow limitation [1]. Severe COPD can cause chronic morbidity and eventually lead to death. By 2030, COPD is projected to become the third leading cause of death worldwide [2].
Accordingly, COPD is staged from stage 0 to IV according to the Global Initiative for Chronic Obstructive Lung Disease (GOLD) criteria accepted by the American Thoracic Society and the European Respiratory Society [1,3]. The assessment parameters in the GOLD criteria are the ratio of forced expiratory volume in 1 s to forced vital capacity (FEV1/FVC) and FEV1 % predicted, both measured by the pulmonary function test (PFT) [1,4]. FEV1/FVC and FEV1 % predicted reflect the impact of COPD on patients’ symptoms and quality of life [5,6], but they cannot reflect the change of lung tissue as the COPD stage evolves, because they change only after lung tissue has been destroyed to a certain extent. Moreover, the measurement accuracy of PFT is limited by patient compliance: the measurement procedure is complex, and it is difficult for patients to understand and follow the instructions given by doctors [7]. In addition, PFT cannot directly provide detailed anatomical information and morphological changes, such as subtypes of emphysema and bronchial wall thickening [8,9].
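To make the staging rule concrete, the sketch below encodes commonly cited GOLD spirometric cut-offs; the thresholds are an illustrative assumption, and the study itself follows the GOLD 2008 criteria [1,4].

```python
def gold_stage(fev1_fvc: float, fev1_pct_pred: float) -> str:
    """Simplified spirometric GOLD staging (illustrative thresholds only)."""
    if fev1_fvc >= 0.70:
        return "GOLD 0"    # no airflow limitation (the at-risk group in this cohort)
    if fev1_pct_pred >= 80:
        return "GOLD I"    # mild
    if fev1_pct_pred >= 50:
        return "GOLD II"   # moderate
    if fev1_pct_pred >= 30:
        return "GOLD III"  # severe
    return "GOLD IV"       # very severe

print(gold_stage(0.62, 65))  # -> GOLD II
```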
Compared with the GOLD criteria and other imaging modalities, computed tomography (CT) has been regarded as the most effective modality for characterizing and quantifying COPD [10]. For example, unlike PFT, chest CT images can show that smokers without airflow limitation have already developed mild centrilobular emphysema and decreased exercise tolerance [11]. In addition, with the significant progress of CT imaging, especially high-resolution CT (HRCT), CT has become an effective method for quantitative analysis of COPD, such as measuring the severity of air trapping, emphysema, airflow obstruction, and airway diseases [12,13]. However, quantitative analysis of the bronchi and vessels is limited by the resolution of HRCT, so it is challenging to automatically, semi-automatically, or manually segment the small airways and vessels from chest CT images; in particular, small airways (diameter < 2 mm) and their associated vessels can hardly be observed in chest CT images. To address this, parametric response mapping (PRM) [14] was proposed to locate the small airway lesion region, the emphysema region, and the healthy lung region. PRM first registers the expiratory and inspiratory chest CT images and then locates the small airway lesion, emphysema, and healthy lung regions with preset thresholds; the registration method therefore directly affects the localization of these regions.
Radiomics features in lung disease imaging have been regarded as state of the art for clinicians [15]. However, radiomics in COPD has developed more slowly than in other lung diseases, such as lung cancer and pulmonary nodules, because radiomics features must be extracted from a region of interest (ROI) of the chest CT images, and the diffuse distribution of COPD throughout the lung makes the ROI difficult to determine. As late as 2020, Refaee et al. pointed out that radiomics features had not yet been extensively investigated in COPD and outlined their potential applications in the diagnosis, treatment, and follow-up of COPD as well as future directions [16]. Meanwhile, the value of lung radiomics features in COPD assessment has also been confirmed [17].
Lung radiomics features extracted from the peripheral airways, pulmonary parenchyma, and pulmonary vessels in chest CT images are well suited to reflecting the change of lung tissue as the COPD stage evolves. Specifically, the characteristic pathological changes of COPD involve the central airways (trachea, bronchi, and bronchioles with an inner diameter greater than 2–4 mm), the peripheral airways (bronchioles with an inner diameter less than 2 mm), the pulmonary parenchyma, and the pulmonary vessels. Chronic inflammation causes the airway wall to be repeatedly damaged and repaired as the COPD stage evolves, resulting in airway blockage and a narrowed air cavity in the peripheral airways [18]. In addition, the destruction of the pulmonary parenchyma in patients with COPD involves the expansion and collapse of respiratory bronchioles. With COPD stage evolution, this expansion and collapse spreads from the upper region to the whole lung, and the pulmonary capillary beds are destroyed [19]. In the earlier COPD stages, the changes in pulmonary vessels are characterized by thickening of the vascular wall. With the continued development of COPD, increases in smooth muscle, proteoglycan, and collagen further thicken the pulmonary vessel walls, which may lead to cor pulmonale [20]. COPD therefore results from the joint involvement of the peripheral airways, pulmonary parenchyma, and pulmonary vessels, and taking these structures as the ROI [17] from which to extract lung radiomics features is reasonable for COPD stage classification.
Currently, radiomics features have also been used in COPD for survival prediction [21,22], spirometric assessment of emphysema presence and severity [23], prediction of COPD exacerbations [24], early COPD decision [3], COPD stage classification [25,26], COPD prediction [27,28], and analysis of COPD and resting heart rate [29]. Convolutional neural networks (CNN) and machine learning (ML) models can both perform the COPD stage classification task. Compared with a CNN based on chest HRCT images, the multi-layer perceptron (MLP, a kind of ML model) classifier performs better, achieving an accuracy of 0.83, precision of 0.83, recall of 0.83, F1-score of 0.82, and AUC of 0.95 [25]. However, this classification performance needs to be further improved. The graph neural network (GNN) was first proposed in 2005 [30] and later developed into the graph convolutional network (GCN), inspired by the CNN [31]. However, the fixed graph structure of the GCN, built from the entire dataset, limits its application and development. To overcome this limitation of the GCN while keeping the advantages of the GNN, an auto-metric graph neural network (AMGNN) based on a meta-learning strategy was proposed [32]. The AMGNN has been applied to Alzheimer’s disease classification with good performance [32,33]. However, the risk factors and node features are critical to the classification performance of the AMGNN; compared with the node features, the risk factors have a lower dimension, and their selection may have an important impact on the classification result. Therefore, we focus on COPD and construct a novel lung radiomics combination vector for COPD stage classification based on the AMGNN.
Lung radiomics features have been applied to COPD stage classification, and compared with CNN models, ML models with lung radiomics features selected by the least absolute shrinkage and selection operator (Lasso) algorithm perform better [25]. However, the difficulty of determining suitable risk factors and node features limits the application of the AMGNN to COPD stage classification. Therefore, this paper applies the AMGNN with lung radiomics features to improve COPD stage classification performance. Our contributions are briefly described as follows:
(1) This paper proposes a lung radiomics features selection method for the risk factors and node features of the AMGNN, which has the advantages of modeling the correlation between samples and building a small graph structure to classify the COPD stage. The lung radiomics features are extracted only from routine chest HRCT images, removing the limitation that risk factors and node features based on prior knowledge, such as gene information, are difficult to obtain. The Lasso algorithm with 10-fold cross-validation selects the independent variables related to the dependent variable, determining the critical independent variables and simplifying the classification model; its value for improving COPD classification has been confirmed [25]. The Cox model [3] and the generalized linear model (GLM) are often used to determine risk factors; compared with the Cox model, the GLM does not require follow-up features with multiple time series. Finally, a novel lung radiomics combination vector (3 + 106) is constructed by the GLM and Lasso algorithm, determining the radiomics risk factors (K = 3) and radiomics node features (d = 106) of the AMGNN;
(2) Compared with previous work on COPD identification (binary classification: COPD vs. non-COPD) [34,35], our work on COPD stage classification has more clinical significance. Our proposed model therefore eliminates the limitations of PFT and may become an effective tool for COPD management.

2. Materials and Methods

Materials and methods are described in detail in Section 2.1 and Section 2.2, respectively.

2.1. Materials

Figure 1 shows the participants’ selection flow diagram and the GOLD distribution of the participants in this study. Specifically, Figure 1a shows the participants’ selection flow. Our study cohort was enrolled at the national clinical research center of respiratory diseases, China, from 25 May 2009 to 11 January 2011. Four hundred and sixty-five Chinese participants aged 40–49 were included in this study after strict selection by the inclusion and exclusion criteria [36]. These 465 participants underwent chest HRCT scans (TOSHIBA; kVp: 120 kV; X-ray tube current: 40 mA; slice thickness: 1.0 mm) in the deep inspiration state and PFT on the same day. The COPD stages (GOLD 0, I, II, and III–IV) are diagnosed according to the GOLD 2008 criteria using FEV1/FVC and FEV1 % predicted from PFT [1,4]. Figure 1b shows that our study cohort has 129, 108, 121, and 107 participants in GOLD 0, I, II, and III–IV, respectively. The 465 participants are divided into a training set (70%) and a test set (30%). Furthermore, Figure 1c,d shows the detailed training and test set distributions for each COPD stage.
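A 70/30 hold-out of this kind can be reproduced with a stratified split, sketched below; whether the original split was stratified by stage and which random seed was used are not stated in the paper, so both are assumptions here.

```python
import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(465, 1316))      # placeholder radiomics feature matrix
y = rng.integers(0, 4, size=465)      # placeholder GOLD labels (0 = GOLD 0, ..., 3 = GOLD III-IV)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.30, stratify=y, random_state=42)
print(X_train.shape[0], X_test.shape[0])   # about 325 training and 140 test participants
```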
This study was approved by the ethics committee of the national clinical research center of respiratory diseases, China. In addition, all participants provided written informed consent to the First Affiliated Hospital of Guangzhou Medical University before the chest HRCT scans and PFT.

2.2. Methods

Figure 2 shows the detailed flow chart of our proposed method in this study.

2.2.1. Lung Parenchyma Segmentation and Radiomics Feature Extraction

Figure 2A(a) shows that a state-of-the-art ResU-Net (U-net (R231)) [37], trained on diverse lung disease images and established as a robust, standard segmentation model for pathological lungs, is transferred to segment lung parenchyma images from the 465 sets of chest HRCT images. The network architecture of the ResU-Net is described in detail in our previous study [38]. Then, Figure 2A(b) shows that 1316 lung radiomics features are extracted for each participant from the lung parenchyma images, expressed in Hounsfield units [39], by PyRadiomics [40].
Figure 2B shows the lung radiomics feature extraction model, PyRadiomics. Two steps are performed to extract the lung radiomics features from the lung parenchyma images: (1) the original lung parenchyma images are filtered by wavelet and Laplacian of Gaussian (LoG) filters, generating derived lung parenchyma images; (2) the original and derived lung parenchyma images are then used to calculate the lung radiomics features of the preset feature classes. More detailed descriptions can be found in our previous studies [25,29].
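As a sketch, extraction for one participant could look like the following; the file names, LoG sigmas, and other extractor settings are assumptions, since the paper does not give its exact PyRadiomics parameter file.

```python
from radiomics import featureextractor  # PyRadiomics

extractor = featureextractor.RadiomicsFeatureExtractor()
extractor.enableAllFeatures()
extractor.enableImageTypeByName('Original')
extractor.enableImageTypeByName('Wavelet')
extractor.enableImageTypeByName('LoG', customArgs={'sigma': [1.0, 3.0, 5.0]})

# HRCT volume (Hounsfield units) and the ResU-Net lung parenchyma mask of one participant.
features = extractor.execute('participant_hrct.nii.gz', 'lung_mask.nii.gz')

# Keep the numeric radiomics values, dropping PyRadiomics' diagnostic entries.
radiomics_vector = {k: float(v) for k, v in features.items()
                    if not k.startswith('diagnostics_')}
```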

2.2.2. Radiomics Feature Combination

The risk factors and node features are critical for the classification performance of the AMGNN. Therefore, a novel lung radiomics combination vector (3 + 106) is constructed by the GLM [41] and the Lasso algorithm [42] for determining the radiomics risk factors (K = 3) and radiomics node features (d = 106) of the AMGNN.
The results of a previous study [25] have shown that the Lasso algorithm helps improve the classification of COPD. Specifically, Figure 2A(c) shows that the Lasso algorithm with 10-fold cross-validation selects the radiomics node features from the 1316 lung radiomics features. Meanwhile, the radiomics risk factors are selected from the 1316 lung radiomics features as the three features with the largest R2 values generated by the GLM. Finally, the radiomics risk factors (K = 3) and radiomics node features (d = 106) are concatenated, giving the proposed lung radiomics combination vector (3 + 106) for COPD classification.
Equations (1) and (2) give the mathematical form of the GLM and Lasso [3,25,29], respectively.
$$g(\mu_{y_i}) = \beta_0 + \sum_{j=1}^{p} \beta_j x_j, \tag{1}$$

where the link function $g(\mu_{y_i}) = \eta$ relates the mean $\mu_{y_i} = E(y_i)$ to the linear predictor $\eta = \beta_0 + \sum_{j=1}^{p} \beta_j x_j$; $y_i$ denotes the dependent variable (the 465 COPD stages: GOLD 0, I, II, and III–IV); $x_j$ denotes the independent variables (the 465 × 1316 normalized lung radiomics features); and $\beta_j$ denotes the regression coefficients, with $i \in [1, n]$ and $j \in [0, p]$.
$$A = \arg\min_{\beta}\left\{ \sum_{i=1}^{n}\left( y_i - \beta_0 - \sum_{j=1}^{p}\beta_j x_j \right)^2 + \lambda \sum_{j=0}^{p} |\beta_j| \right\}, \tag{2}$$

where matrix $A$ denotes the node features (the selected lung radiomics features); $x_j$ denotes the independent variables (the 465 × 1316 normalized lung radiomics features); $y_i$ denotes the dependent variable (the 465 COPD stages: GOLD 0, I, II, and III–IV); $\lambda$ denotes the penalty parameter ($\lambda \ge 0$); and $\beta_j$ denotes the regression coefficients, with $i \in [1, n]$ and $j \in [0, p]$.
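A minimal sketch of this selection step is given below, assuming a Gaussian GLM (statsmodels) and scikit-learn's LassoCV for the 10-fold cross-validated Lasso; the paper does not state the GLM family or the software used, and random data stand in for the real feature matrix.

```python
import numpy as np
import statsmodels.api as sm
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(0)
X = rng.normal(size=(465, 1316))                 # placeholder normalized radiomics features
y = rng.integers(0, 4, size=465).astype(float)   # placeholder encoded GOLD stages

# Node features: Lasso with 10-fold cross-validation (Equation (2)); 106 features in the paper.
lasso = LassoCV(cv=10, random_state=0).fit(X, y)
node_idx = np.flatnonzero(lasso.coef_)

# Risk factors: the three features with the largest single-predictor GLM R^2 (Equation (1)).
r2 = []
for j in range(X.shape[1]):
    fit = sm.GLM(y, sm.add_constant(X[:, [j]])).fit()    # Gaussian family by default
    r2.append(1.0 - fit.deviance / fit.null_deviance)    # pseudo-R^2
risk_idx = np.argsort(r2)[-3:]

# Concatenate into the (3 + 106) lung radiomics combination vector per participant.
combination = np.hstack([X[:, risk_idx], X[:, node_idx]])
```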

2.2.3. COPD Stage Classification Based on AMGNN

Figure 2A(d) shows that the COPD stage is classified by the AMGNN with the proposed lung radiomics combination vector, and Figure 2C shows the pipeline of the AMGNN based on meta-learning. Specifically, T graphs are established, each by selecting 40 known nodes and 1 unknown node from the training set. The 40 (10 × 4) known nodes of each graph comprise 10 known nodes from each of the four COPD stages included in our study. Subsequently, T − 1 meta tasks train the AMGNN by randomly sampling graphs and continually updating the network parameters P and the loss value, yielding the trained AMGNN (PT). Lastly, the trained AMGNN (PT) classifies the unknown nodes in the test set into COPD stages.
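The graph construction can be sketched as below; this reproduces the 4-way-10-shot sampling described above and is not the authors' released code.

```python
import numpy as np

def sample_meta_task(features, labels, n_way=4, k_shot=10, rng=None):
    """Build one AMGNN graph: 40 labelled support nodes (10 per COPD stage)
    plus 1 query node whose stage the network must predict."""
    rng = np.random.default_rng() if rng is None else rng
    support = []
    for stage in range(n_way):
        stage_idx = np.flatnonzero(labels == stage)
        support.extend(rng.choice(stage_idx, size=k_shot, replace=False))
    query = rng.choice(np.setdiff1d(np.arange(len(labels)), support))
    nodes = np.append(support, query)
    return features[nodes], labels[nodes]        # the last node is the query

# Example with placeholder data: 465 participants, (3 + 106)-dimensional combination vectors.
rng = np.random.default_rng(42)
feats = rng.normal(size=(465, 109))
stages = rng.integers(0, 4, size=465)
graph_feats, graph_labels = sample_meta_task(feats, stages, rng=rng)
print(graph_feats.shape)                         # (41, 109)
```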
In addition, Figure 2D shows the detailed network structure of the AMGNN (Pi) for calculating the auto-metric connection with probability constraints and updating the nodes. Specifically, two stages are performed in parallel in each AMGNN (Pi) to generate the adjacency matrix Ã. First, the adjacency matrix Ã multiplies the proposed lung radiomics combination vectors of the nodes in the lth layer, V(l), and the node update Gn(V(l)) is then obtained through a fully connected layer with a Leaky ReLU nonlinear activation. Finally, the output V(l+1) of the AMGNN (Pi) is obtained by concatenating V(l) and Gn(V(l)); however, the output of the last AMGNN (PT−1) is fed into a softmax layer instead of the concatenation operation.
Furthermore, one stage constructs the edge weight matrix W from the 106 radiomics node features, while the other stage constructs the edge constraint matrix E from the 3 radiomics risk factors. Finally, the edge weight matrix W is element-wise multiplied by the edge constraint matrix E, giving the adjacency matrix Ã. In the W-construction stage, the N × N × 106 feature block C is generated by copying the N × 106 feature map N times. Then, each N × N × 1 slice of the feature block C is transposed, generating a new N × N × 106 feature block C′. The difference feature block C′′ is obtained by calculating the absolute difference between each feature of the two nodes in C and C′. Subsequently, a CNN with 1 × 1 kernels generates the edge weight matrix W. A detailed description of calculating the edge constraint matrix E can be found in [32].
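A PyTorch sketch of this adjacency computation follows; the layer widths and activations are assumptions, and the exact AMGNN architecture is given in [32].

```python
import torch
import torch.nn as nn

class EdgeWeight(nn.Module):
    """W branch sketch: pairwise absolute feature differences mapped to scalar
    edge weights by 1 x 1 convolutions, then gated by the constraint matrix E."""
    def __init__(self, d=106, hidden=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(d, hidden, kernel_size=1), nn.LeakyReLU(),
            nn.Conv2d(hidden, 1, kernel_size=1), nn.Sigmoid())

    def forward(self, v, e):
        # v: (N, d) node features; e: (N, N) edge constraint matrix E
        n = v.size(0)
        c = v.unsqueeze(0).expand(n, n, -1)                           # feature block C
        diff = (c - c.transpose(0, 1)).abs()                          # |C - C'| -> C''
        w = self.conv(diff.permute(2, 0, 1).unsqueeze(0)).squeeze()   # edge weights W: (N, N)
        return w * e                                                  # adjacency = W element-wise E

nodes = torch.randn(41, 106)                  # 40 support nodes + 1 query node
constraint = torch.ones(41, 41)               # placeholder edge constraint matrix E
adjacency = EdgeWeight()(nodes, constraint)   # (41, 41)
```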

3. Experiments and Results

3.1. Experiments

Figure 3 shows the four experiments designed to verify the effectiveness of our proposed method. The radiomics/CNN combination vectors (K risk factors + d node features) used with the AMGNN for COPD classification in Experiments 2–4 are given in the Supplementary Materials. The open-source code of the AMGNN with 5-fold cross-validation is directly applied to COPD stage classification, and the other ML models are likewise trained with 5-fold cross-validation. Finally, the trained models with the best AUC are loaded and evaluated on the test set.
CNN models based directly on the chest HRCT images perform unsatisfactorily [25], so a model fusion strategy combining CNN and ML models is studied further: features are extracted by a CNN model via transfer learning, and ML classifiers are then used for classification. Therefore, 3D CNN features are extracted with the encoder backbone (ResNet-10) of a pre-trained Med3d [41] using truncated transfer learning and compared with the lung radiomics features. Med3d, a heterogeneous 3D network, extracts general medical 3D features and was built on the 3DSeg-8 dataset with diverse modalities, target organs, and pathologies. Five hundred and twelve 3D CNN feature maps of size 3 × 3 × 3 are generated by the ResNet-10, so each participant has 13,824 3D CNN features (512 × 3 × 3 × 3 = 13,824) after flattening the 512 feature maps.
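The 3D CNN feature path can be sketched as follows; a toy 3D encoder stands in for the Med3d ResNet-10 backbone and its pretrained weights, which are not reproduced here, and only the reported (512, 3, 3, 3) output shape is taken from the text.

```python
import torch
import torch.nn as nn

class Toy3DEncoder(nn.Module):
    """Stand-in for the truncated Med3d ResNet-10 encoder [41] (sketch only)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(64, 512, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d((3, 3, 3)))    # force the reported (512, 3, 3, 3) output

    def forward(self, x):
        return self.features(x)

encoder = Toy3DEncoder().eval()
volume = torch.randn(1, 1, 64, 128, 128)        # one lung parenchyma CT volume (placeholder size)
with torch.no_grad():
    fmap = encoder(volume)                      # (1, 512, 3, 3, 3)
cnn_features = fmap.flatten()                   # 512 * 3 * 3 * 3 = 13,824 features per participant
print(cnn_features.numel())                     # 13824
```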
Experiment 1 is designed around ML classifiers with lung radiomics or 3D CNN features. The support vector machine (SVM) [42], multi-layer perceptron (MLP) [25,43], random forest (RF) [44], logistic regression (LR) [45], and the AMGNN have been applied to diagnose Alzheimer’s disease [32], and SVM, LR, MLP, RF, gradient boosting (GB) [46], and linear discriminant analysis (LDA) [47] have been applied to classify COPD stages [25]. Therefore, these six classic ML classifiers and the AMGNN are used here for COPD stage classification. Specifically, the original lung radiomics features (1316) and 3D CNN features (13,824) are directly and separately used to classify the COPD stage with the ML classifiers. In addition, 106 radiomics features and 60 3D CNN features per participant are automatically selected by the Lasso algorithm with 10-fold cross-validation. For comparison with the same number of features, 106 radiomics features and 60 3D CNN features per participant are also separately selected by the GLM or fused by the principal component analysis (PCA) algorithm, and these selected or fused features are separately used to classify the COPD stage with the ML classifiers. Combination feature vectors of the lung radiomics or 3D CNN features selected by the GLM or Lasso and of these features fused by PCA are also considered in Experiment 1; because the GLM and the Lasso algorithm both perform feature selection, the features they select are not further combined with each other.
Experiments 2–4 are designed around the AMGNN with lung radiomics or 3D CNN features to determine its risk factors and node features. Because the risk factors of the AMGNN have a lower dimension than the node features, the PCA algorithm and the GLM are separately used to determine the risk factors (K = 2–6). In Experiment 2, the PCA algorithm and GLM separately determine the risk factors (K = 2–6) from the original lung radiomics (1316) or 3D CNN (13,824) features, and the corresponding node features are the original lung radiomics (1316) or 3D CNN (13,824) features. In Experiment 3, the PCA and GLM separately determine the risk factors (K = 2–6) from the lung radiomics (106) or 3D CNN (60) features selected by the Lasso algorithm, and the corresponding node features are also the selected lung radiomics (106) or 3D CNN (60) features. In Experiment 4, the PCA and GLM separately determine the risk factors (K = 2–6) from the 1316 original lung radiomics or 13,824 original 3D CNN features, and the corresponding node features are the lung radiomics (106) or 3D CNN (60) features selected by the Lasso algorithm.
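For instance, the PCA-derived risk factors used in these experiments can be sketched as below; the feature matrix is a placeholder, and K follows the ranges above.

```python
import numpy as np
from sklearn.decomposition import PCA

X = np.random.default_rng(0).normal(size=(465, 1316))        # placeholder feature matrix

# One candidate risk-factor matrix of shape (465, K) for each K = 2-6.
risk_factor_sets = {k: PCA(n_components=k).fit_transform(X) for k in range(2, 7)}
```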
Table 1 lists the different ML classifiers and their definitions. The AMGNN uses the 4-way-10-shot setting of few-shot learning [48] (iterations = 600, batch_size_test = 28, batch_size_train = 28, and random_seed = 42). All source code was run in PyCharm 2020.3.5 (Professional Edition) on Windows 10 Pro 64-bit with two 2080 Ti GPUs, 32 GB RAM, 1 TB mechanical storage, and a 256 GB SSD.
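Collected as a configuration sketch (the values are those reported above; the field names are ours):

```python
amgnn_config = {
    "n_way": 4,               # 4 COPD stages per graph
    "k_shot": 10,             # 10 labelled nodes per stage
    "iterations": 600,
    "batch_size_train": 28,
    "batch_size_test": 28,
    "random_seed": 42,
    "cv_folds": 5,            # 5-fold cross-validation
}
```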

3.2. Results

This section reports the results of Experiments 1–4 using five standard evaluation metrics: accuracy, precision, recall, F1-score, and area under the curve (AUC). The AUC for multi-class classification is calculated from the receiver operating characteristic (ROC) curve [25]. Compared with the CNN and ML models, the AMGNN based on the novel lung radiomics combination vector (3 + 106) performs best, achieving an accuracy of 0.943, precision of 0.946, recall of 0.943, F1-score of 0.943, and AUC of 0.984.
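For reference, a multi-class AUC of this kind can be computed as below; one-vs-rest macro averaging is an assumption, since the paper follows [25] without stating the averaging scheme, and random predictions stand in for the model output.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 4, size=140)            # true GOLD stages of the test set (placeholder)
y_prob = rng.dirichlet(np.ones(4), size=140)     # predicted stage probabilities (placeholder)

auc = roc_auc_score(y_true, y_prob, multi_class='ovr', average='macro')
```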

3.2.1. COPD Stage Classification Based on Different ML Classifiers

Table 2, Table 3, Table 4, Table 5, Table 6 and Table 7 report the ML classifiers’ performance in Experiment 1, and Figure 4 and Figure 5 visualize the evaluation metrics and ROC curves of the ML classifiers in Experiment 1. The MLP classifier performs better than the other ML classifiers for COPD stage classification. Specifically, the MLP classifier with the 13,824 original 3D CNN/1316 original lung radiomics features performs best, achieving an accuracy of 0.793/0.786, precision of 0.798/0.784, recall of 0.793/0.784, F1-score of 0.790/0.784, and AUC of 0.938/0.919. In addition, the 60 3D CNN/106 lung radiomics features separately selected or fused by the Lasso algorithm, GLM, and PCA algorithm are further used to classify the COPD stage. Again, the MLP classifier with the 60 3D CNN/106 lung radiomics features selected by the Lasso algorithm performs best, achieving an accuracy of 0.821/0.829, precision of 0.826/0.828, recall of 0.821/0.829, F1-score of 0.821/0.824, and AUC of 0.946/0.950. Even among the CNN/radiomics combination vectors, the MLP classifier with the 60 3D CNN/106 lung radiomics features selected only by the Lasso algorithm performs best.
The GLM fails to improve the classification performance of any ML classifier with 3D CNN features; with these features, the PCA algorithm improves only the RF classifier, and the Lasso algorithm improves only the MLP classifier. With lung radiomics features, the GLM fails to improve the SVM classifier but improves the other ML classifiers; the PCA algorithm improves only the RF and LDA classifiers; and the Lasso algorithm improves all ML classifiers.
Compared to the evaluation metrics of the MLP classifier with the 13,824 original 3D CNN features, those of the MLP classifier with the 60 selected 3D CNN features are improved by 2.8% in accuracy, 2.8% in precision, 2.8% in recall, 3.1% in F1-score, and 0.8% in AUC. Similarly, compared to the evaluation metrics of the MLP classifier with the 1316 original lung radiomics features, those of the MLP classifier with the 106 selected lung radiomics features are improved by 4.3% in accuracy, 4.4% in precision, 4.5% in recall, 4.0% in F1-score, and 3.1% in AUC.

3.2.2. COPD Stage Classification Based on the AMGNN Classifier

Table 8, Table 9 and Table 10 report the AMGNN classifier’s performance in Experiments 2–4, and Figure 6 and Figure 7 visualize the corresponding evaluation metrics and ROC curves. The AMGNN classifier with the proposed lung radiomics combination vector, constructed from 3 radiomics risk factors (selected from the 1316 original lung radiomics features by the GLM) and 106 radiomics node features (selected from the 1316 original lung radiomics features by the Lasso algorithm), performs best for COPD stage classification, achieving an accuracy of 0.943, precision of 0.946, recall of 0.943, F1-score of 0.943, and AUC of 0.984.
The results in this section show that the AMGNN classifier performs better than the ML classifiers. Specifically, compared to the best evaluation metrics of the MLP classifier with the 13,824 original 3D CNN features in Experiment 1, those of the AMGNN classifier with the 3D CNN combination vector (K = 6 (GLM) + d = 13,824) in Experiment 2 are improved by 4.3% in accuracy, 4.2% in precision, 4.3% in recall, 4.1% in F1-score, and 1.2% in AUC. Meanwhile, compared to the best evaluation metrics of the MLP classifier with the 1316 original lung radiomics features in Experiment 1, those of the AMGNN classifier with the lung radiomics combination vector (K = 5 (PCA) + d = 1316) in Experiment 2 are improved by 11.4% in accuracy, 11.5% in precision, 11.6% in recall, 11.6% in F1-score, and 5.3% in AUC. Compared to the best evaluation metrics of the MLP classifier with the 60 3D CNN features selected by the Lasso algorithm in Experiment 1, those of the AMGNN classifier with the 3D CNN combination vector (K = 4 (PCA) + d = 60 in Experiment 3/K = 3 (PCA) + d = 60 in Experiment 4) are improved by 1.5%/2.2% in accuracy, 0.2%/2.5% in precision, 1.5%/2.2% in recall, 1.2%/2.0% in F1-score, and 1.1%/1.2% in AUC, respectively. Meanwhile, compared to the best evaluation metrics of the MLP classifier with the 106 lung radiomics features selected by the Lasso algorithm in Experiment 1, those of the AMGNN classifier with the lung radiomics combination vector (K = 2 (PCA) + d = 106 in Experiment 3/K = 3 (GLM) + d = 106 in Experiment 4) are improved by 10.0%/11.4% in accuracy, 10.1%/11.8% in precision, 10.0%/11.4% in recall, 10.4%/11.9% in F1-score, and 3.4%/3.4% in AUC, respectively.
Table 8 and Table 10 and Figure 6 and Figure 7 show that the Lasso algorithm helps the AMGNN classifier construct a good edge weight matrix by reducing redundant, collinear 3D CNN or lung radiomics features. The mean evaluation metrics of the CNN/radiomics combination vectors (K = 2–6 (PCA/GLM) + d = 60/106) in Experiment 4 are also better than those of the CNN/radiomics combination vectors (K = 2–6 (PCA/GLM) + d = 13,824/1316) in Experiment 2 with the AMGNN classifier. Specifically, compared with the mean evaluation metrics of the CNN/radiomics combination vectors (K = 2–6 (PCA/GLM) + d = 13,824/1316) in Experiment 2, those of the CNN/radiomics combination vectors (K = 2–6 (PCA/GLM) + d = 60/106) in Experiment 4 are improved by 1.4%/0.2%/2.3%/3.0% in accuracy, 0.5%/0.4%/3.3%/3.1% in precision, 1.4%/0.2%/2.3%/3.0% in recall, 2.2%/1.1%/2.5%/3.2% in F1-score, and 0.9%/1.8%/1.7%/1.3% in AUC, respectively.
Table 9 and Table 10 and Figure 6 and Figure 7 show that the PCA/GLM helps the AMGNN classifier determine the risk factors for constructing a good edge constraint matrix. The mean evaluation metrics of the CNN/radiomics combination vectors (K = 2–6 (PCA/GLM) + d = 60/106, with the K risk factors separately generated by PCA/GLM from the 13,824 original 3D CNN/1316 original lung radiomics features) in Experiment 4 are also better than those of the CNN/radiomics combination vectors (K = 2–6 (PCA/GLM) + d = 60/106, with the K risk factors separately generated by PCA/GLM from the 60 selected 3D CNN/106 selected lung radiomics features) in Experiment 3 with the AMGNN classifier. Specifically, compared with the mean evaluation metrics of the combination vectors in Experiment 3, those of the combination vectors in Experiment 4 are improved by –0.4%/0.5%/0.9%/1.4% in accuracy, –0.7%/0.6%/1.4%/1.4% in precision, –0.4%/0.5%/0.9%/1.4% in recall, –0.4%/0.6%/0.9%/1.4% in F1-score, and –1.9%/0.5%/1.0%/1.1% in AUC, respectively.
Table 8, Table 9 and Table 10 and Figure 6 and Figure 7 also show that the evaluation metrics of the AMGNN classifier with the lung radiomics combination vector are better than those of the AMGNN classifier with the 3D CNN combination vector.
Specifically, the AMGNN classifier with the 3D CNN combination vector (K = 4 or 5 (PCA)/K = 6 (GLM) + d = 13,824, with the K risk factors separately generated by PCA/GLM from the 13,824 original 3D CNN features) performs best in Experiment 2, achieving an accuracy of 0.807 or 0.807/0.836, precision of 0.831 or 0.811/0.840, recall of 0.807 or 0.807/0.836, F1-score of 0.799 or 0.804/0.831, and AUC of 0.929 or 0.924/0.840. Meanwhile, the AMGNN classifier with the lung radiomics combination vector (K = 5 (PCA)/K = 5 (GLM) + d = 1316, with the K risk factors separately generated by PCA/GLM from the 1316 original lung radiomics features) performs best in Experiment 2, achieving an accuracy of 0.900/0.871, precision of 0.899/0.871, recall of 0.900/0.871, F1-score of 0.900/0.870, and AUC of 0.972/0.964. Compared to the best evaluation metrics of the 3D CNN combination vector (K = 6 (GLM) + d = 13,824), those of the lung radiomics combination vector (K = 5 (PCA) + d = 1316) are improved by 6.4% in accuracy, 5.9% in precision, 6.4% in recall, 6.9% in F1-score, and 2.2% in AUC.
Specifically, the AMGNN classifier with the 3D CNN combination vector (K = 4 (PCA)/K = 4 (GLM) + d = 60, with the K risk factors separately generated by PCA/GLM from the 60 selected 3D CNN features) performs best in Experiment 3, achieving an accuracy of 0.836/0.814, precision of 0.846/0.837, recall of 0.836/0.814, F1-score of 0.833/0.812, and AUC of 0.957/0.949. Meanwhile, the AMGNN classifier with the lung radiomics combination vector (K = 3 (PCA)/K = 3 or 6 (GLM) + d = 106, with the K risk factors separately generated by PCA/GLM from the 106 selected lung radiomics features) performs best in Experiment 3, achieving an accuracy of 0.907/0.886 or 0.886, precision of 0.914/0.887 or 0.886, recall of 0.907/0.886 or 0.886, F1-score of 0.908/0.886 or 0.881, and AUC of 0.983/0.956 or 0.963. Compared to the best evaluation metrics of the 3D CNN combination vector (K = 4 (PCA) + d = 60), those of the lung radiomics combination vector (K = 3 (PCA) + d = 106) are improved by 7.1% in accuracy, 6.8% in precision, 7.1% in recall, 7.5% in F1-score, and 2.6% in AUC.
Specifically, the AMGNN classifier with the 3D CNN combination vector (K = 3 (PCA)/K = 6 (GLM) + d = 60, with the K risk factors separately generated by PCA/GLM from the 13,824 original 3D CNN features) performs best in Experiment 4, achieving an accuracy of 0.843/0.821, precision of 0.851/0.833, recall of 0.843/0.821, F1-score of 0.841/0.818, and AUC of 0.958/0.945. Meanwhile, the AMGNN classifier with the lung radiomics combination vector (K = 3 (PCA)/K = 3 or 6 (GLM) + d = 106, with the K risk factors separately generated by PCA/GLM from the 1316 original lung radiomics features) performs best in Experiment 4, achieving an accuracy of 0.929/0.943, precision of 0.929/0.946, recall of 0.929/0.943, F1-score of 0.928/0.943, and AUC of 0.984/0.984. Compared to the best evaluation metrics of the 3D CNN combination vector (K = 3 (PCA) + d = 60), those of the lung radiomics combination vector (K = 3 (GLM) + d = 106) are improved by 10.0% in accuracy, 9.5% in precision, 10.0% in recall, 10.2% in F1-score, and 2.6% in AUC.
Table 11 compares the results of our proposed method with previous methods, demonstrating the superiority of our method over these previous approaches.

4. Discussion

Based on the experimental results, we provide the following discussion and point out the limitations of this study and future directions. Compared with multimodal sleep data [49] and electromyography [50], lung radiomics features extracted from chest HRCT images are more suitable for COPD stage classification.
First, among the ML classifiers, the MLP classifier with 3D CNN or lung radiomics features performs best for COPD stage classification. The MLP classifier’s structure and its ability to handle complex nonlinear features account for this. The MLP classifier is composed of three fully connected layers, which makes it efficient and well suited for modeling long-range dependencies [51]. Meanwhile, COPD patients have high heterogeneity and different phenotypes [1], resulting in complex nonlinear 3D CNN or lung radiomics features extracted from their chest HRCT images. The MLP classifier can handle these complex nonlinear features and discover dependencies between different input features by approximating the nonlinear mapping globally, thereby realizing the COPD stage classification [25,43].
Second, the AMGNN classifier performs better than the best MLP classifier for COPD stage classification. This is because the AMGNN classifier overcomes the lack of flexibility of existing GNN models, introduces a meta-learning strategy, and is insensitive to graph size [32]. In addition, the AMGNN classifier needs fewer training samples than traditional ML classifiers while remaining stable as the number of training samples is reduced [32]. Therefore, even with a small number of graphs, it shows good performance for COPD stage classification. The AMGNN classifier thereby alleviates the problem that standard medical image cohorts are usually difficult to obtain, which results in small data volumes. In contrast, the MLP classifier needs sufficient data to train its parameters, so its performance with small data may deteriorate severely. Meanwhile, the AMGNN classifier performs well on the test set, showing neither underfitting nor overfitting. Specifically, a too deep or too complex network structure often causes overfitting, and the AMGNN classifier, with only two layers, effectively avoids this. In addition, the AMGNN classifier introduces the meta-learning strategy; meta-learning-based approaches can train powerful networks on small datasets in many vision problems [52]. Therefore, the meta-learning strategy further helps avoid overfitting.
Third, the Lasso algorithm improves the performance of both the MLP and AMGNN classifiers. In contrast to the Lasso algorithm, the PCA algorithm reduces the dimension of the original features by identifying the orthogonal linear combinations with the largest variance. The PCA algorithm can therefore compress the original features into a small number of fused features without losing too much information; however, because the PCA algorithm needs to retain most of the original information, the fused features still limit the COPD classification effect. Both the Lasso algorithm and the GLM perform feature selection. Compared with the GLM, the Lasso algorithm is a penalized likelihood approach, so it automatically selects a fixed number of the original features, whereas the GLM generates an R2 value for each feature rather than a fixed set of selected features. In addition, the Lasso algorithm is often used with survival analysis models to determine variables and eliminate collinearity between variables [3]. The Lasso algorithm has also been applied to select features that improve the MLP classifier’s performance by establishing the relationship between the independent variables (3D CNN or lung radiomics features) and the dependent variable (the COPD stage) [25]. The complexity of the 3D CNN or lung radiomics features is thereby reduced, and the MLP classifier can focus on the 60 selected 3D CNN or 106 selected lung radiomics features to improve its performance. For the AMGNN classifier, the Lasso algorithm helps construct an effective edge weight matrix by removing redundant, collinear 3D CNN or lung radiomics features; it also determines the node features, eliminates collinear features, and further helps avoid overfitting.
Fourth, lung radiomics features remove the limitation that risk factors and node features are difficult to obtain. Meanwhile, the AMGNN classifier with the lung radiomics combination vector performs better than with the 3D CNN combination vector. The risk factors of diseases, such as those of Alzheimer’s disease (age, gender, years of education, and APOE4 gene information) [32], are hard to obtain or determine, which limits the application of the AMGNN classifier. However, because a large number of lung radiomics features can be extracted from the ROI of chest HRCT images, this limitation can be overcome. Compared with the 3D CNN features, the lung radiomics features are calculated by preset formulas, which makes them easier to interpret in subsequent studies.
Lastly, this study has some limitations, and we point out future directions. Although the AMGNN classifier with the proposed lung radiomics combination vector achieves promising results, the model must be retrained whenever a new participant is to be classified. Furthermore, we have explained the method only from the engineering and algorithmic perspective; the clinical significance of the three radiomics risk factors needs further analysis by COPD experts and doctors. In addition, combination vectors of 3D CNN and lung radiomics features based on ML or AMGNN classifiers for COPD stage classification should be studied in the future.

5. Conclusions

This paper constructs a novel lung radiomics combination vector, generated by the GLM and Lasso algorithm, for COPD stage classification based on the auto-metric graph neural network. Compared with the CNN and ML models, the AMGNN based on this lung radiomics combination vector (K = 3 (GLM) + d = 106, where 3 radiomics risk factors are selected by the GLM and 106 radiomics node features are selected by the Lasso algorithm) performs best, achieving an accuracy of 0.943, precision of 0.946, recall of 0.943, F1-score of 0.943, and AUC of 0.984. Therefore, our proposed model eliminates the limitations of PFT and may become an effective tool for COPD management.

6. Patents

The method and device, electronic device and storage medium for stage classification of chronic obstructive pulmonary disease, CN2022104685981, Shenzhen Technology University and Northeastern University, China.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/diagnostics12102274/s1.

Author Contributions

Conceptualization, R.C. and Y.K.; methodology, Y.Y., S.W. and N.Z.; software, Y.Y. and S.W.; validation, N.Z., W.D. and Z.C.; formal analysis, Y.Y., S.W., W.L. and N.Z.; investigation, Y.Y., S.W., Y.G. and H.C.; resources, H.C., X.L. and R.C.; data curation, H.C., X.L. and R.C.; writing—original draft preparation, Y.Y.; writing—review and editing, Y.L., W.D. and Y.G.; visualization, Y.Y., S.W., Y.G. and N.Z.; supervision, R.C. and Y.K.; project administration, R.C. and Y.K.; funding acquisition, Y.K., W.L. and H.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China, grant number 62071311; the Stable Support Plan for Colleges and Universities in Shenzhen of China, grant number SZWD2021010; the Scientific Research Fund of Liaoning Province of China, grant number JL201919; the Natural Science Foundation of Guangdong Province of China, grant number 2019A1515011382; the special program for key fields of colleges and universities in Guangdong Province (biomedicine and health) of China, grant number 2021ZDZX2008.

Institutional Review Board Statement

All patients provided written informed consent, and this study was approved by the Guangzhou Medical University Ethics Committee (approval number: 2009-09, approval date: 5 August 2009) and registered at http://www.chictr.org.cn (registration number: ChiCTR2000034586, accessed on 11 July 2020).

Informed Consent Statement

Written informed consent has been obtained from the patients to publish this paper.

Data Availability Statement

The data supporting this study’s findings are available from the corresponding author upon reasonable request.

Acknowledgments

Thanks to the Department of Radiology, the First Affiliated Hospital of Guangzhou Medical University, for providing the dataset.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Singh, D.; Agusti, A.; Anzueto, A.; Barnes, P.J.; Bourbeau, J.; Celli, B.R.; Criner, G.J.; Frith, P.; Halpin, D.M.G.; Han, M.; et al. Global strategy for the diagnosis, management, and prevention of chronic obstructive lung disease: The GOLD science committee report 2019. Eur. Respir. J. 2019, 53, 1900164. [Google Scholar] [CrossRef] [PubMed]
  2. Washko, G.R.; Coxson, H.O.; Donnell, D.E.O.; Aaron, S.D. CT imaging of chronic obstructive pulmonary disease: Insights, disappointments, and promise. Lancet Respir. Med. 2017, 5, 903–908. [Google Scholar] [CrossRef]
  3. Yang, Y.; Li, W.; Guo, Y.; Liu, Y.; Li, Q.; Yang, K.; Wang, S.; Zeng, N.; Duan, W.; Chen, Z.; et al. Early COPD Risk Decision for Adults Aged From 40 to 79 Years Based on Lung Radiomics Features. Front. Med. 2022, 9, 845286. [Google Scholar] [CrossRef] [PubMed]
  4. Fortis, S.; Comellas, A.; Make, B.J.; Hersh, C.P.; Bodduluri, S.; Georgopoulos, D.; Kim, V.; Criner, G.J.; Dransfield, M.T.; Bhatt, S.P.; et al. Combined forced expiratory volume in 1 second and forced vital capacity bronchodilator response, exacerbations, and mortality in chronic obstructive pulmonary disease. Ann. Am. Thorac. Soc. 2019, 16, 826–835. [Google Scholar] [CrossRef]
  5. Jones, P.W. Health status measurement in chronic obstructive pulmonary disease. Thorax 2001, 56, 880–887. [Google Scholar] [CrossRef]
  6. Brown, C.D.; Benditt, J.O.; Sciurba, F.C.; Lee, S.M.; Criner, G.J.; Mosenifar, Z.; Shade, D.M.; Slivka, W.A.; Wise, R.A.; National emphysema Treatment Trial Research Group. Exercise Testing in Severe Emphysema: Association with Quality of Life and Lung Function. J. Chronic Obstr. Pulm. Dis. 2008, 5, 117–124. [Google Scholar] [CrossRef]
  7. Flesch, J.D.; Dine, C.J. Lung volumes: Measurement, clinical use, and coding. Chest 2012, 142, 506–510. [Google Scholar] [CrossRef]
  8. Fan, L.; Xia, Y.; Guan, Y.; Zhang, T.F.; Liu, S.Y. Characteristic features of pulmonary function test, CT volume analysis and MR perfusion imaging in COPD patients with different HRCT phenotypes. Clin. Respir. J. 2014, 8, 45–54. [Google Scholar] [CrossRef]
  9. Lynch, D.A.; Austin, J.H.; Hogg, J.C.; Grenier, P.A.; Kauczor, H.U.; Bankier, A.A.; Barr, R.G.; Colby, T.V.; Galvin, J.R.; Gevenois, P.A.; et al. CT-Definable Subtypes of Chronic Obstructive Pulmonary Disease: A Statement of the Fleischner Society. Radiology 2015, 277, 192–205. [Google Scholar] [CrossRef]
  10. Lynch, D.A. Progress in Imaging COPD, 2004–2014. Chronic Obstr. Pulm. Dis. J. COPD Found. 2014, 1, 73. [Google Scholar] [CrossRef]
  11. Castaldi, P.J.; San José Estépar, R.; Mendoza, C.S.; Hersh, C.P.; Laird, N.; Crapo, J.D.; Lynch, D.A.; Silverman, E.K.; Washko, G.R. Distinct quantitative computed tomography emphysema patterns are associated with physiology and function in smokers. Am. J. Respir. Crit. Care Med. 2013, 188, 1083–1090. [Google Scholar] [CrossRef]
  12. Barbosa, E.M., Jr.; Song, G.; Tustison, N.; Kreider, M.; Gee, J.C.; Gefter, W.B.; Torigian, D.A. Computational analysis of thoracic multidetector row HRCT for segmentation and quantification of small airway air trapping and emphysema in obstructive pulmonary disease. Acad. Radiol. 2011, 18, 1258–1269. [Google Scholar] [CrossRef]
  13. O’Donnell, R.A.; Peebles, C.; Ward, J.A.; Daraker, A.; Angco, G.; Broberg, P.; Pierrou, S.; Lund, J.; Holgate, S.T.; Davies, D.E.; et al. Relationship between peripheral airway dysfunction, airway obstruction, and neutrophilic inflammation in COPD. Thorax 2004, 59, 837–842. [Google Scholar] [CrossRef]
  14. Pompe, E.; Galbán, C.J.; Ross, B.D.; Koenderman, L.; Ten Hacken, N.H.; Postma, D.S.; van den Berge, M.; de Jong, P.A.; Lammers, J.J.; Mohamed Hoesein, F.A. Parametric response mapping on chest computed tomography associates with clinical and functional parameters in chronic obstructive pulmonary disease. Respir. Med. 2017, 123, 48–55. [Google Scholar] [CrossRef]
  15. Frix, A.-N.; Cousin, F.; Refaee, T.; Bottari, F.; Vaidyanathan, A.; Desir, C.; Vos, W.; Walsh, S.; Occhipinti, M.; Lovinfosse, P.; et al. Radiomics in lung diseases imaging: State-of-the-art for clinicians. J. Pers. Med. 2021, 11, 602. [Google Scholar] [CrossRef]
  16. Refaee, T.; Wu, G.; Ibrahim, A.; Halilaj, I.; Leijenaar, R.T.H.; Rogers, W.; Gietema, H.A.; Hendriks, L.E.L.; Lambin, P.; Woodruff, H.C. The Emerging Role of Radiomics in COPD and Lung Cancer. Respiration 2020, 99, 99–107. [Google Scholar] [CrossRef]
  17. Yang, K.; Yang, Y.; Kang, Y.; Liang, Z.; Wang, F.; Li, Q.; Xu, J.; Tang, G.; Chen, R. The value of radiomic features in chronic obstructive pulmonary disease assessment: A prospective study. Clin. Radiol. 2022, 77, e466–e472. [Google Scholar] [CrossRef]
  18. Eapen, M.S.; Myers, S.; Walters, E.H.; Sohal, S.S. Airway inflammation in chronic obstructive pulmonary disease (COPD): A true paradox. Expert Rev. Respir. Med. 2017, 11, 827–839. [Google Scholar] [CrossRef]
  19. Wright, J.L.; Churg, A. Advances in the pathology of COPD. Histopathology 2006, 49, 1–9. [Google Scholar] [CrossRef]
  20. Peinado, V.I.; Pizarro, S.; Barbera, J.A. Pulmonary vascular involvement in COPD. Chest 2008, 134, 808–814. [Google Scholar] [CrossRef]
  21. Cho, Y.H.; Seo, J.B.; Lee, S.M.; Kim, N.; Yun, J.; Hwang, J.E.; Lee, J.S.; Oh, Y.M.; Do Lee, S.; Loh, L.C.; et al. Radiomics approach for survival prediction in chronic obstructive pulmonary disease. Eur. Radiol. 2021, 31, 7316–7324. [Google Scholar] [CrossRef]
  22. Yun, J.; Cho, Y.H.; Lee, S.M.; Hwang, J.; Lee, J.S.; Oh, Y.M.; Lee, S.D.; Loh, L.C.; Ong, C.K.; Seo, J.B.; et al. Deep radiomics-based survival prediction in patients with chronic obstructive pulmonary disease. Sci. Rep. 2021, 11, 15144. [Google Scholar] [CrossRef]
  23. Occhipinti, M.; Paoletti, M.; Bartholmai, B.J.; Rajagopalan, S.; Karwoski, R.A.; Nardi, C.; Inchingolo, R.; Larici, A.R.; Camiciottoli, G.; Lavorini, F.; et al. Spirometric assessment of emphysema presence and severity as measured by quantitative CT and CT-based radiomics in COPD. Respir. Res. 2019, 20, 1–11. [Google Scholar] [CrossRef]
  24. Liang, C.; Xu, J.; Wang, F.; Chen, H.; Tang, J.; Chen, D.; Li, Q.; Jian, W.; Tang, G.; Zheng, J.; et al. Development of a radiomics model for predicting COPD exacerbations based on complementary visual information. Am. Thorac. Soc. 2021, 203, A2296. [Google Scholar]
  25. Yang, Y.; Li, W.; Guo, Y.; Zeng, N.; Wang, S.; Chen, Z.; Liu, Y.; Chen, H.; Duan, W.; Li, X.; et al. Lung radiomics features for characterizing and classifying COPD stage based on feature combination strategy and multi-layer perceptron classifier. Math. Biosci. Eng. 2022, 19, 7826–7855. [Google Scholar] [CrossRef]
  26. Li, Z.; Liu, L.; Zhang, Z.; Yang, X.; Li, X.; Gao, Y.; Huang, K. A Novel CT-Based Radiomics Features Analysis for Identification and Severity Staging of COPD. Acad. Radiol. 2022, 29, 663–673. [Google Scholar] [CrossRef]
  27. Makimoto, K.; Au, R.; Moslemi, A.; Hogg, J.C.; Bourbeau, J.; Tan, W.C.; Kirby, M. Comparison of Feature Selection Methods and Machine Learning Classifiers for Predicting Chronic Obstructive Pulmonary Disease Using Texture-Based CT Lung Radiomic Features. Acad. Radiol. 2022, 1–11. [Google Scholar] [CrossRef]
  28. Au, R.C.; Tan, W.C.; Bourbeau, J.; Hogg, J.C.; Kirby, M. Radiomics Analysis to Predict Presence of Chronic Obstructive Pulmonary Disease and Symptoms Using Machine Learning. TP121 COPD: From Cells to The Clinic. Am. Thorac. Soc. 2021, 203, A4568. [Google Scholar]
  29. Yang, Y.; Li, W.; Kang, Y.; Guo, Y.; Yang, K.; Li, Q.; Liu, Y.; Yang, C.; Chen, R.; Chen, H.; et al. A novel lung radiomics feature for characterizing resting heart rate and COPD stage evolution based on radiomics feature combination strategy. Math. Biosci. Eng. 2022, 19, 4145–4165. [Google Scholar] [CrossRef]
  30. Gori, M.; Monfardini, G.; Scarselli, F. A new model for learning in graph domains. In Proceedings of the 2005 IEEE International Joint Conference on Neural Networks, Montreal, QC, Canada, 31 July–4 August 2005; pp. 729–734. [Google Scholar]
  31. Kazi, A.; Shekarforoush, S.; Arvind Krishna, S.; Burwinkel, H.; Vivar, G.; Kortüm, K.; Ahmadi, S.; Albarqouni, S.; Navab, N. InceptionGCN: Receptive field aware graph convolutional network for disease prediction. In Proceedings of the International Conference on Information Processing in Medical Imaging, Hong Kong, China, 2–6 June 2019; pp. 73–85. [Google Scholar]
  32. Song, X.; Mao, M.; Qian, X. Auto-Metric Graph Neural Network Based on a Meta-Learning Strategy for the Diagnosis of Alzheimer’s Disease. IEEE J. Biomed. Health Inform. 2021, 25, 3141–3152. [Google Scholar] [CrossRef]
  33. McCombe, N.; Bamrah, J.; Sanchez-Bornot, J.M.; Finn, D.P.; McClean, P.L.; Wong-Lin, K.; Alzheimer’s Disease Neuroimaging Initiative (ADNI). Alzheimer’s Disease Classification Using Cluster-based Labelling for Graph Neural Network on Tau PET Imaging and Heterogeneous Data. medRxiv 2022, 3, 22271873. [Google Scholar]
  34. Xu, C.; Qi, S.; Feng, J.; Xia, S.; Kang, Y.; Yao, Y.; Qian, W. DCT-MIL: Deep CNN transferred multiple instance learning for COPD identification using CT images. Phys. Med. Biol. 2020, 65, 145011. [Google Scholar] [CrossRef] [PubMed]
  35. Sun, J.; Liao, X.; Yan, Y.; Zhang, X.; Sun, J.; Tan, W.; Liu, B.; Wu, J.; Guo, Q.; Gao, S.; et al. Detection and staging of chronic obstructive pulmonary disease using a computed tomography–based weakly supervised deep learning approach. Eur. Radiol. 2022, 32, 1–11. [Google Scholar]
  36. Zhou, Y.; Bruijnzeel, P.; Mccrae, C.; Zheng, J.; Nihlen, U.; Zhou, R.; Van Geest, M.; Nilsson, A.; Hadzovic, S.; Huhn, M.; et al. Study on risk factors and phenotypes of acute exacerbations of chronic obstructive pulmonary disease in Guangzhou, China-design and baseline characteristics. J. Thorac. Dis. 2015, 7, 720–733. [Google Scholar]
  37. Hofmanninger, J.; Prayer, F.; Pan, J.; Rohrich, S.; Prosch, H.; Langs, G. Automatic lung segmentation in routine imaging is a data diversity problem, not a methodology problem. Eur. Radiol. Exp. 2020, 4, 1–13. [Google Scholar] [CrossRef]
  38. Yang, Y.; Li, Q.; Guo, Y.; Liu, Y.; Li, X.; Guo, J.; Li, W.; Cheng, L.; Chen, H.; Kang, Y. Lung parenchyma parameters measure of rats from pulmonary window computed tomography images based on ResU-Net model for medical respiratory researches. Math. Biosci. Eng. 2021, 18, 4193–4211. [Google Scholar] [CrossRef]
  39. Yang, Y.; Guo, Y.; Guo, J.; Gao, Y.; Kang, Y. A method of abstracting single pulmonary lobe from computed tomography pulmonary images for locating COPD. In Proceedings of the Fourth International Conference on Biological Information and Biomedical Engineering, Chengdu, China, 21–23 July 2020; pp. 1–6. [Google Scholar]
  40. Van Griethuysen, J.J.M.; Fedorov, A.; Parmar, C.; Hosny, A.; Aucoin, N.; Narayan, V.; Beets-Tan, R.G.H.; Fillion-Robin, J.C.; Pieper, S.; Aerts, H.J.W.L. Computational Radiomics System to Decode the Radiographic Phenotype. Cancer Res. 2017, 77, e104–e107. [Google Scholar] [CrossRef]
  41. Chen, S.; Ma, K.; Zheng, Y. Med3d: Transfer learning for 3d medical image analysis. arXiv 2019, arXiv:1904.00625. [Google Scholar]
  42. Jakkula, V. Tutorial on Support Vector Machine (svm); School of EECS, Washington State University: Pullman, WA, USA, 2006; Volume 37, p. 3. [Google Scholar]
  43. Wan, S.; Liang, Y.; Zhang, Y.; Guizani, M. Deep multi-layer perceptron classifier for behavior analysis to estimate Parkinsons disease severity using smartphones. IEEE Access 2018, 6, 36825–36833. [Google Scholar] [CrossRef]
  44. Qi, Y. Random Forest for bioinformatics. In Ensemble Machine Learning; Springer: Boston, MA, USA, 2012; pp. 307–323. [Google Scholar]
  45. LaValley, M.P. Logistic regression. Circulation 2008, 117, 2395–2399. [Google Scholar] [CrossRef]
  46. Ayyadevara, V.K. Gradient boosting machine. In Pro Machine Learning Algorithms; Apress: Berkeley, CA, USA, 2018; pp. 117–134. [Google Scholar]
  47. Balakrishnama, S.; Ganapathiraju, A. Linear discriminant analysis-a brief tutorial. Inst. Signal Inf. Process. 1998, 18, 1–8. [Google Scholar]
  48. Sun, Q.; Liu, Y.; Chua, T.S.; Schiele, B. Meta-transfer learning for few-shot learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 403–412. [Google Scholar]
  49. Spina, G.; Casale, P.; Albert, P.S.; Alison, J.; Garcia-Aymerich, J.; Clarenbach, C.F.; Costello, R.W.; Hernandes, N.A.; Leuppi, J.D.; Mesquita, R.; et al. Nighttime features derived from topic models for classification of patients with COPD. Comput. Biol. Med. 2021, 132, 104322. [Google Scholar] [CrossRef]
  50. Bairagi, V.K.; Kanwade, A.B. Classification of Chronic Obstructive Pulmonary Disease (COPD) Using Electromyography. Sādhanā 2020, 45, 1–17. [Google Scholar] [CrossRef]
  51. Meng, Z.; Zhao, F.; Liang, M. SS-MLP: A Novel Spectral-Spatial MLP Architecture for Hyperspectral Image Classification. Remote Sens. 2021, 13, 4060. [Google Scholar] [CrossRef]
  52. Hospedales, T.M.; Antoniou, A.; Micaelli, P.; Storkey, A. Meta-learning in neural networks: A survey. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 44, 5149–5169. [Google Scholar] [CrossRef]
Figure 1. The participants’ selection flow diagram and GOLD distribution of the participants in this study. (a) The participants’ selection flow diagram; (b) GOLD distribution in our study cohort; (c) training set distribution; (d) test set distribution.
Figure 2. The proposed method in this study. (A) Our proposed method: COPD stage classification using auto-metric graph neural network; specifically, our proposed method includes (a) lung parenchyma segmentation, (b) radiomics feature extraction, (c) radiomics feature combination, and (d) COPD stage classification. (B) Lung radiomics feature extraction model: PyRadiomics [25,29,33,40]. (C) The pipeline of the AMGNN based on meta-learning. (D) Detailed network structure of the AMGNN (Pi).
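As a brief, hedged illustration of the radiomics feature extraction step summarized in Figure 2B, the sketch below uses the PyRadiomics extractor API [40]. The file names (lung_ct.nii.gz, lung_parenchyma_mask.nii.gz) and the default extractor settings are illustrative assumptions, not the exact configuration used in this study.

```python
# Minimal sketch of radiomics feature extraction with PyRadiomics.
# File names and default extractor settings are illustrative assumptions.
from radiomics import featureextractor

# Default extractor; the study's exact parameter file is not reproduced here.
extractor = featureextractor.RadiomicsFeatureExtractor()

# Extract features from a CT volume and its lung-parenchyma mask (hypothetical paths).
features = extractor.execute("lung_ct.nii.gz", "lung_parenchyma_mask.nii.gz")

# Keep the numeric radiomics values, dropping the diagnostic metadata entries.
radiomics_vector = {k: v for k, v in features.items() if not k.startswith("diagnostics_")}
print(len(radiomics_vector), "radiomics features extracted")
```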
Figure 3. Experimental design in this paper.
Figure 4. Visual evaluation metrics of different ML classifiers with different features in Experiment 1. (a,b) The evaluation metrics of different ML classifiers with 13,824 original 3D CNN/1316 original lung radiomics features. (c–h) The evaluation metrics of different ML classifiers with 60 3D CNN/106 lung radiomics features separately selected or fused by the Lasso algorithm, GLM, and PCA algorithm. (i–l) The evaluation metrics of different ML classifiers with different combinations of feature vectors of the selected and fused lung radiomics features or the selected and fused 3D CNN features.
Figure 5. Visual ROC curves of different ML classifiers with different features in Experiment 1. (a,b) The ROC curves of different ML classifiers with 13,824 original 3D CNN/1316 original lung radiomics features. (c–h) The ROC curves of different ML classifiers with 60 3D CNN/106 lung radiomics features separately selected or fused by the Lasso algorithm, GLM, and PCA algorithm. (i–l) The ROC curves of different ML classifiers with different combinations of feature vectors of the selected and fused lung radiomics features or the selected and fused 3D CNN features.
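The ROC curves in Figures 5 and 7 summarize one-vs-rest discrimination for each COPD stage. A minimal sketch of how such curves can be computed and drawn is given below; the synthetic data, the logistic-regression stand-in classifier, and the matplotlib plotting calls are illustrative assumptions rather than the study's plotting code.

```python
# Minimal sketch: one-vs-rest ROC curves for a four-stage classifier on synthetic data.
import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import label_binarize
from sklearn.metrics import roc_curve, auc

X, y = make_classification(n_samples=400, n_features=106, n_informative=20,
                           n_classes=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
y_score = clf.predict_proba(X_test)
y_test_bin = label_binarize(y_test, classes=[0, 1, 2, 3])

for c in range(4):  # one curve per class (one-vs-rest)
    fpr, tpr, _ = roc_curve(y_test_bin[:, c], y_score[:, c])
    plt.plot(fpr, tpr, label=f"class {c} (AUC = {auc(fpr, tpr):.3f})")
plt.plot([0, 1], [0, 1], linestyle="--")
plt.xlabel("False positive rate")
plt.ylabel("True positive rate")
plt.legend()
plt.show()
```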
Figure 6. Visual evaluation metrics and ROC curves of the AMGNN classifier in Experiments 2–4. (a–c) The mean evaluation metrics, and (d–f) the best evaluation metrics.
Figure 7. ROC curves of the AMGNN classifier in Experiments 2–4. (a–f) ROC curves of the AMGNN classifiers with different combination vectors.
Table 1. The different ML classifiers with their definitions.

| ML Classifier | ML Classifier Model Definition in Python 3.6 |
|---|---|
| SVM | sklearn.svm.SVC(kernel='rbf', probability=True) |
| MLP | sklearn.neural_network.MLPClassifier(hidden_layer_sizes=(400, 100), alpha=0.01, max_iter=10000) |
| RF | sklearn.ensemble.RandomForestClassifier(n_estimators=200) |
| LR | sklearn.linear_model.LogisticRegressionCV(max_iter=100000, solver="liblinear") |
| GB | sklearn.ensemble.GradientBoostingClassifier() |
| LDA | sklearn.discriminant_analysis.LinearDiscriminantAnalysis() |
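To make the Table 1 definitions concrete, the following is a minimal sketch of training two of the listed classifiers and computing the five metrics reported in Tables 2–10 (accuracy, precision, recall, F1-score, AUC). The synthetic dataset, the weighted multi-class averaging, and the one-vs-rest AUC are assumptions, since those details are not specified here.

```python
# Illustrative sketch only: scoring Table 1-style classifiers with scikit-learn metrics.
# A synthetic dataset stands in for the radiomics features and GOLD-stage labels.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score)

X, y = make_classification(n_samples=400, n_features=106, n_informative=20,
                           n_classes=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2,
                                                    stratify=y, random_state=0)

classifiers = {
    "SVM": SVC(kernel="rbf", probability=True),
    "MLP": MLPClassifier(hidden_layer_sizes=(400, 100), alpha=0.01, max_iter=10000),
}

for name, clf in classifiers.items():
    clf.fit(X_train, y_train)
    y_pred = clf.predict(X_test)
    y_prob = clf.predict_proba(X_test)
    print(name,
          round(accuracy_score(y_test, y_pred), 3),
          round(precision_score(y_test, y_pred, average="weighted"), 3),
          round(recall_score(y_test, y_pred, average="weighted"), 3),
          round(f1_score(y_test, y_pred, average="weighted"), 3),
          round(roc_auc_score(y_test, y_prob, multi_class="ovr",
                              average="weighted"), 3))
```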
Table 2. The different ML classifiers’ performance on the test set of original 3D CNN features (13,824)/lung radiomics features (1316) in Experiment 1.

| Features | Classifier | Accuracy | Precision | Recall | F1-Score | AUC |
|---|---|---|---|---|---|---|
| 3D CNN feature (13,824)/lung radiomics feature (1316) | SVM | 0.629/0.643 | 0.635/0.655 | 0.629/0.643 | 0.631/0.647 | 0.863/0.863 |
| | MLP | 0.793/0.786 | 0.798/0.784 | 0.793/0.784 | 0.790/0.784 | 0.938/0.919 |
| | RF | 0.657/0.664 | 0.652/0.668 | 0.657/0.664 | 0.652/0.664 | 0.858/0.886 |
| | LR | 0.650/0.679 | 0.647/0.680 | 0.650/0.679 | 0.643/0.678 | 0.835/0.863 |
| | GB | 0.643/0.729 | 0.644/0.727 | 0.643/0.729 | 0.641/0.724 | 0.869/0.906 |
| | LDA | 0.721/0.379 | 0.726/0.395 | 0.721/0.379 | 0.722/0.377 | 0.913/0.639 |
Table 3. The different ML classifiers’ performance on the test set of selected 3D CNN features (60) or lung radiomics features (106) generated by the Lasso algorithm in Experiment 1.

| Features | Classifier | Accuracy | Precision | Recall | F1-Score | AUC |
|---|---|---|---|---|---|---|
| Selected 3D CNN feature (60)/selected lung radiomics feature (106) | SVM | 0.629/0.736 | 0.637/0.737 | 0.629/0.736 | 0.630/0.736 | 0.880/0.915 |
| | MLP | 0.821/0.829 | 0.826/0.828 | 0.821/0.829 | 0.821/0.824 | 0.946/0.950 |
| | RF | 0.600/0.786 | 0.590/0.783 | 0.600/0.786 | 0.591/0.781 | 0.858/0.928 |
| | LR | 0.650/0.693 | 0.633/0.689 | 0.650/0.693 | 0.636/0.680 | 0.866/0.886 |
| | GB | 0.600/0.736 | 0.613/0.732 | 0.600/0.736 | 0.602/0.729 | 0.869/0.928 |
| | LDA | 0.664/0.786 | 0.679/0.785 | 0.664/0.786 | 0.669/0.784 | 0.898/0.920 |
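A hedged sketch of Lasso-based feature selection of the kind used to obtain the selected feature sets in Table 3 is given below. The use of LassoCV with the stage label as a numeric target, the standardization step, and the synthetic data are assumptions, not the study's exact procedure.

```python
# Hedged sketch of Lasso-based feature selection: keep features whose Lasso
# coefficients are non-zero. Synthetic data stands in for the radiomics matrix.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LassoCV

X, y = make_classification(n_samples=400, n_features=1316, n_informative=40,
                           n_classes=4, random_state=0)

# Standardize features so the L1 penalty treats them on a common scale.
X_std = StandardScaler().fit_transform(X)

# Cross-validated Lasso; features with non-zero coefficients are retained.
lasso = LassoCV(cv=5, random_state=0).fit(X_std, y.astype(float))
selected = np.flatnonzero(lasso.coef_)
print(f"{selected.size} features selected out of {X.shape[1]}")
X_selected = X_std[:, selected]
```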
Table 4. The different ML classifiers’ performance on the test set of selected 3D CNN features (60) or lung radiomics features (106) generated by the GLM in Experiment 1.

| Features | Classifier | Accuracy | Precision | Recall | F1-Score | AUC |
|---|---|---|---|---|---|---|
| Selected 3D CNN feature (60)/selected lung radiomics feature (106) | SVM | 0.443/0.427 | 0.402/0.416 | 0.443/0.427 | 0.407/0.401 | 0.736/0.685 |
| | MLP | 0.729/0.811 | 0.726/0.813 | 0.729/0.811 | 0.725/0.811 | 0.901/0.938 |
| | RF | 0.657/0.741 | 0.656/0.742 | 0.657/0.741 | 0.654/0.740 | 0.871/0.914 |
| | LR | 0.657/0.692 | 0.647/0.687 | 0.657/0.692 | 0.637/0.681 | 0.848/0.886 |
| | GB | 0.636/0.741 | 0.637/0.742 | 0.636/0.741 | 0.636/0.741 | 0.847/0.925 |
| | LDA | 0.607/0.678 | 0.597/0.685 | 0.607/0.678 | 0.599/0.679 | 0.844/0.882 |
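For the GLM-based selection reported in Table 4, the following is a minimal sketch of one common way to screen features with a generalized linear model: a univariate binomial GLM per feature with a p-value threshold. The univariate design, the binarized target, the 0.05 threshold, and the synthetic data are all assumptions; the paper's exact GLM formulation is not reproduced here.

```python
# Hedged sketch of GLM-based feature screening with statsmodels (illustration only).
import numpy as np
import statsmodels.api as sm
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=400, n_features=200, n_informative=30,
                           n_classes=4, random_state=0)
y_bin = (y > 0).astype(int)  # illustrative binarization of the stage label

selected = []
for j in range(X.shape[1]):
    design = sm.add_constant(X[:, [j]])          # intercept + one feature
    res = sm.GLM(y_bin, design, family=sm.families.Binomial()).fit()
    if res.pvalues[1] < 0.05:                    # p-value of the feature coefficient
        selected.append(j)
print(f"{len(selected)} features pass the GLM screening")
```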
Table 5. The different ML classifiers’ performance on the test set of fused 3D CNN features (60) or lung radiomics features (106) generated by the PCA algorithm in Experiment 1.

| Features | Classifier | Accuracy | Precision | Recall | F1-Score | AUC |
|---|---|---|---|---|---|---|
| Fused 3D CNN feature (60)/fused lung radiomics feature (106) | SVM | 0.571/0.636 | 0.569/0.642 | 0.571/0.636 | 0.565/0.637 | 0.821/0.876 |
| | MLP | 0.779/0.769 | 0.782/0.769 | 0.779/0.769 | 0.776/0.767 | 0.920/0.932 |
| | RF | 0.707/0.678 | 0.705/0.694 | 0.707/0.678 | 0.703/0.681 | 0.868/0.886 |
| | LR | 0.600/0.657 | 0.593/0.657 | 0.600/0.657 | 0.586/0.655 | 0.835/0.861 |
| | GB | 0.529/0.671 | 0.530/0.678 | 0.529/0.671 | 0.529/0.671 | 0.834/0.866 |
| | LDA | 0.614/0.678 | 0.631/0.688 | 0.614/0.678 | 0.612/0.680 | 0.863/0.899 |
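A minimal sketch of PCA-based feature fusion to a fixed number of components (106 here, matching the dimensionality in Table 5) is shown below; the standardization step and the synthetic data are assumptions.

```python
# Hedged sketch of PCA-based feature fusion: project the original radiomics
# features onto a fixed number of principal components.
from sklearn.datasets import make_classification
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

X, _ = make_classification(n_samples=400, n_features=1316, n_informative=40,
                           random_state=0)
X_std = StandardScaler().fit_transform(X)

pca = PCA(n_components=106, random_state=0)
X_fused = pca.fit_transform(X_std)
print(X_fused.shape)                        # (400, 106)
print(pca.explained_variance_ratio_.sum())  # variance retained by the fused features
```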
Table 6. The different ML classifiers’ performance on the test set of the CNN combination vector (Lasso + PCA/GLM + PCA) in Experiment 1.

| Features | Classifier | Accuracy | Precision | Recall | F1-Score | AUC |
|---|---|---|---|---|---|---|
| CNN combination vector (Lasso + PCA/GLM + PCA) | SVM | 0.614/0.600 | 0.608/0.582 | 0.614/0.600 | 0.604/0.586 | 0.852/0.856 |
| | MLP | 0.807/0.821 | 0.815/0.828 | 0.807/0.821 | 0.803/0.817 | 0.939/0.951 |
| | RF | 0.636/0.657 | 0.634/0.653 | 0.636/0.657 | 0.621/0.653 | 0.878/0.872 |
| | LR | 0.700/0.664 | 0.699/0.653 | 0.700/0.664 | 0.685/0.648 | 0.889/0.861 |
| | GB | 0.657/0.636 | 0.664/0.635 | 0.657/0.636 | 0.658/0.629 | 0.889/0.863 |
| | LDA | 0.664/0.650 | 0.675/0.656 | 0.664/0.650 | 0.661/0.648 | 0.902/0.882 |
Table 7. The different ML classifiers’ performance on the test set of the lung radiomics combination vector (Lasso + PCA/GLM + PCA) in Experiment 1.

| Features | Classifier | Accuracy | Precision | Recall | F1-Score | AUC |
|---|---|---|---|---|---|---|
| Lung radiomics combination vector (Lasso + PCA/GLM + PCA) | SVM | 0.615/0.531 | 0.617/0.524 | 0.615/0.531 | 0.615/0.525 | 0.871/0.798 |
| | MLP | 0.797/0.811 | 0.803/0.815 | 0.797/0.811 | 0.795/0.809 | 0.941/0.956 |
| | RF | 0.713/0.699 | 0.719/0.706 | 0.713/0.699 | 0.710/0.701 | 0.913/0.904 |
| | LR | 0.727/0.657 | 0.724/0.650 | 0.727/0.657 | 0.725/0.649 | 0.899/0.873 |
| | GB | 0.762/0.713 | 0.762/0.712 | 0.762/0.713 | 0.761/0.710 | 0.931/0.912 |
| | LDA | 0.755/0.713 | 0.762/0.717 | 0.755/0.713 | 0.758/0.713 | 0.933/0.920 |
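The combination vectors in Tables 6 and 7 concatenate a selected feature set with a fused feature set (e.g., Lasso + PCA). A minimal sketch of this concatenation is given below; the random placeholder matrices stand in for the study's selected and fused features and are purely illustrative.

```python
# Hedged sketch of building a "combination vector": concatenate features kept by a
# selection step with features fused by PCA (the Lasso + PCA variant in Table 7).
import numpy as np

rng = np.random.default_rng(0)
X_selected = rng.normal(size=(400, 106))  # stand-in for Lasso-selected features
X_fused = rng.normal(size=(400, 106))     # stand-in for PCA-fused features

X_combination = np.concatenate([X_selected, X_fused], axis=1)
print(X_combination.shape)  # (400, 212)
```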
Table 8. The AMGNN classifier’s performance on the test set in Experiment 2.

| Features | Classifier | Accuracy | Precision | Recall | F1-Score | AUC |
|---|---|---|---|---|---|---|
| 3D CNN feature (13,824) (K = 2–6 (PCA) + d = 13,824) | AMGNN (K = 2) | 0.793 | 0.789 | 0.793 | 0.774 | 0.933 |
| | AMGNN (K = 3) | 0.786 | 0.803 | 0.786 | 0.780 | 0.884 |
| | AMGNN (K = 4) | 0.807 | 0.831 | 0.807 | 0.799 | 0.929 |
| | AMGNN (K = 5) | 0.807 | 0.811 | 0.807 | 0.804 | 0.924 |
| | AMGNN (K = 6) | 0.800 | 0.825 | 0.800 | 0.795 | 0.927 |
| | Mean | 0.799 | 0.812 | 0.799 | 0.790 | 0.919 |
| 3D CNN feature (13,824) (K = 2–6 (GLM) + d = 13,824) | AMGNN (K = 2) | 0.779 | 0.798 | 0.779 | 0.762 | 0.917 |
| | AMGNN (K = 3) | 0.821 | 0.824 | 0.821 | 0.811 | 0.929 |
| | AMGNN (K = 4) | 0.786 | 0.787 | 0.786 | 0.776 | 0.913 |
| | AMGNN (K = 5) | 0.807 | 0.819 | 0.807 | 0.808 | 0.941 |
| | AMGNN (K = 6) | 0.836 | 0.840 | 0.836 | 0.831 | 0.950 |
| | Mean | 0.806 | 0.814 | 0.806 | 0.798 | 0.930 |
| Lung radiomics feature (1316) (K = 2–6 (PCA) + d = 1316) | AMGNN (K = 2) | 0.864 | 0.864 | 0.864 | 0.862 | 0.966 |
| | AMGNN (K = 3) | 0.879 | 0.880 | 0.879 | 0.878 | 0.972 |
| | AMGNN (K = 4) | 0.850 | 0.851 | 0.850 | 0.850 | 0.952 |
| | AMGNN (K = 5) | 0.900 | 0.899 | 0.900 | 0.900 | 0.972 |
| | AMGNN (K = 6) | 0.871 | 0.872 | 0.871 | 0.865 | 0.948 |
| | Mean | 0.873 | 0.873 | 0.873 | 0.871 | 0.962 |
| Lung radiomics feature (1316) (K = 2–6 (GLM) + d = 1316) | AMGNN (K = 2) | 0.864 | 0.880 | 0.864 | 0.861 | 0.948 |
| | AMGNN (K = 3) | 0.857 | 0.858 | 0.857 | 0.857 | 0.971 |
| | AMGNN (K = 4) | 0.864 | 0.869 | 0.864 | 0.862 | 0.943 |
| | AMGNN (K = 5) | 0.871 | 0.871 | 0.871 | 0.870 | 0.964 |
| | AMGNN (K = 6) | 0.864 | 0.864 | 0.864 | 0.862 | 0.973 |
| | Mean | 0.864 | 0.868 | 0.864 | 0.862 | 0.960 |
Table 9. The AMGNN classifier’s performance on the test set in Experiment 3.

| Features | Classifier | Accuracy | Precision | Recall | F1-Score | AUC |
|---|---|---|---|---|---|---|
| Selected 3D CNN feature (60) (K = 2–6 (PCA) + d = 60) | AMGNN (K = 2) | 0.800 | 0.816 | 0.800 | 0.801 | 0.927 |
| | AMGNN (K = 3) | 0.814 | 0.812 | 0.814 | 0.809 | 0.947 |
| | AMGNN (K = 4) | 0.836 | 0.846 | 0.836 | 0.833 | 0.957 |
| | AMGNN (K = 5) | 0.829 | 0.829 | 0.829 | 0.827 | 0.962 |
| | AMGNN (K = 6) | 0.807 | 0.818 | 0.807 | 0.809 | 0.943 |
| | Mean | 0.817 | 0.824 | 0.817 | 0.816 | 0.947 |
| Selected 3D CNN feature (60) (K = 2–6 (GLM) + d = 60) | AMGNN (K = 2) | 0.800 | 0.797 | 0.800 | 0.797 | 0.941 |
| | AMGNN (K = 3) | 0.800 | 0.806 | 0.800 | 0.802 | 0.940 |
| | AMGNN (K = 4) | 0.814 | 0.837 | 0.814 | 0.812 | 0.949 |
| | AMGNN (K = 5) | 0.807 | 0.822 | 0.807 | 0.810 | 0.944 |
| | AMGNN (K = 6) | 0.793 | 0.798 | 0.793 | 0.793 | 0.939 |
| | Mean | 0.803 | 0.812 | 0.803 | 0.803 | 0.943 |
| Selected lung radiomics feature (106) (K = 2–6 (PCA) + d = 106) | AMGNN (K = 2) | 0.900 | 0.910 | 0.900 | 0.900 | 0.981 |
| | AMGNN (K = 3) | 0.907 | 0.914 | 0.907 | 0.908 | 0.983 |
| | AMGNN (K = 4) | 0.879 | 0.884 | 0.879 | 0.879 | 0.963 |
| | AMGNN (K = 5) | 0.879 | 0.879 | 0.879 | 0.878 | 0.954 |
| | AMGNN (K = 6) | 0.871 | 0.872 | 0.871 | 0.868 | 0.962 |
| | Mean | 0.887 | 0.892 | 0.887 | 0.887 | 0.969 |
| Selected lung radiomics feature (106) (K = 2–6 (GLM) + d = 106) | AMGNN (K = 2) | 0.879 | 0.889 | 0.879 | 0.879 | 0.951 |
| | AMGNN (K = 3) | 0.886 | 0.887 | 0.886 | 0.886 | 0.956 |
| | AMGNN (K = 4) | 0.871 | 0.882 | 0.871 | 0.875 | 0.969 |
| | AMGNN (K = 5) | 0.879 | 0.881 | 0.879 | 0.878 | 0.971 |
| | AMGNN (K = 6) | 0.886 | 0.886 | 0.886 | 0.881 | 0.963 |
| | Mean | 0.880 | 0.885 | 0.880 | 0.880 | 0.962 |
Table 10. The AMGNN classifier’s performance on the test set in Experiment 4.

| Features | Classifier | Accuracy | Precision | Recall | F1-Score | AUC |
|---|---|---|---|---|---|---|
| 3D CNN feature (13,824) (K = 2–6 (PCA) + d = 60) | AMGNN (K = 2) | 0.793 | 0.793 | 0.793 | 0.791 | 0.933 |
| | AMGNN (K = 3) | 0.843 | 0.851 | 0.843 | 0.841 | 0.958 |
| | AMGNN (K = 4) | 0.800 | 0.801 | 0.800 | 0.798 | 0.941 |
| | AMGNN (K = 5) | 0.836 | 0.843 | 0.836 | 0.835 | 0.963 |
| | AMGNN (K = 6) | 0.793 | 0.798 | 0.793 | 0.794 | 0.847 |
| | Mean | 0.813 | 0.817 | 0.813 | 0.812 | 0.928 |
| 3D CNN feature (13,824) (K = 2–6 (GLM) + d = 60) | AMGNN (K = 2) | 0.800 | 0.797 | 0.800 | 0.797 | 0.941 |
| | AMGNN (K = 3) | 0.800 | 0.806 | 0.800 | 0.802 | 0.940 |
| | AMGNN (K = 4) | 0.807 | 0.821 | 0.807 | 0.807 | 0.955 |
| | AMGNN (K = 5) | 0.814 | 0.832 | 0.814 | 0.819 | 0.958 |
| | AMGNN (K = 6) | 0.821 | 0.833 | 0.821 | 0.818 | 0.945 |
| | Mean | 0.808 | 0.818 | 0.808 | 0.809 | 0.948 |
| Lung radiomics feature (1316) (K = 2–6 (PCA) + d = 106) | AMGNN (K = 2) | 0.929 | 0.929 | 0.929 | 0.928 | 0.984 |
| | AMGNN (K = 3) | 0.893 | 0.912 | 0.893 | 0.895 | 0.979 |
| | AMGNN (K = 4) | 0.893 | 0.894 | 0.893 | 0.892 | 0.979 |
| | AMGNN (K = 5) | 0.871 | 0.886 | 0.871 | 0.876 | 0.971 |
| | AMGNN (K = 6) | 0.893 | 0.908 | 0.893 | 0.889 | 0.984 |
| | Mean | 0.896 | 0.906 | 0.896 | 0.896 | 0.979 |
| Lung radiomics feature (1316) (K = 2–6 (GLM) + d = 106) | AMGNN (K = 2) | 0.886 | 0.885 | 0.886 | 0.884 | 0.984 |
| | AMGNN (K = 3) | 0.943 | 0.946 | 0.943 | 0.943 | 0.984 |
| | AMGNN (K = 4) | 0.871 | 0.889 | 0.871 | 0.874 | 0.947 |
| | AMGNN (K = 5) | 0.879 | 0.886 | 0.879 | 0.879 | 0.979 |
| | AMGNN (K = 6) | 0.893 | 0.891 | 0.893 | 0.891 | 0.969 |
| | Mean | 0.894 | 0.899 | 0.894 | 0.894 | 0.973 |
Table 11. Comparison of the results of our proposed method with other previous methods.

| Reference | Method | Feature | Accuracy | Precision | Recall (Sensitivity) | F1-Score | AUC | Specificity |
|---|---|---|---|---|---|---|---|---|
| Yang, Yingjian, et al. [25] | Lasso + MLP | CT-Based Radiomics | 0.830 | 0.830 | 0.830 | 0.820 | 0.950 | - |
| Spina, Gabriele, et al. [49] | Text representation + LDA | Multimodal Sleep Data | - | - | 0.78 | - | - | - |
| Bairagi, V.K., et al. [50] | CWT | Electromyography | 0.859 | 0.849 | 0.882 | - | 0.865 | 0.855 |
| Li, Zongli, et al. [26] | Variance threshold + Select K Best + Lasso + SVM | CT-Based Radiomics | 0.759 | 0.834 | 0.723 | 0.771 | 0.799 | 0.805 |
| Li, Zongli, et al. [26] | Variance threshold + Select K Best + Lasso + LR | CT-Based Radiomics | 0.763 | 0.820 | 0.758 | 0.778 | 0.797 | 0.766 |
| Our method | GLM + Lasso + AMGNN | CT-Based Radiomics | 0.943 | 0.946 | 0.943 | 0.943 | 0.984 | 0.982 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
