Article

A Joint Classification Method for COVID-19 Lesions Based on Deep Learning and Radiomics

Guoxiang Ma, Kai Wang, Ting Zeng, Bin Sun and Liping Yang

1 School of Public Health, Xinjiang Medical University, Urumqi 830017, China
2 College of Medical Engineering and Technology, Xinjiang Medical University, Urumqi 830017, China
* Author to whom correspondence should be addressed.
Tomography 2024, 10(9), 1488-1500; https://doi.org/10.3390/tomography10090109
Submission received: 4 August 2024 / Revised: 30 August 2024 / Accepted: 3 September 2024 / Published: 5 September 2024
(This article belongs to the Section Artificial Intelligence in Medical Imaging)

Abstract

Pneumonia caused by the novel coronavirus is an acute respiratory infectious disease, and its rapid spread over a short period has posed great challenges for global public health. Deep learning and radiomics methods can effectively distinguish subtypes of lung disease, improve the accuracy of clinical prognosis, and help clinicians adjust clinical management in a timely manner. The main goal of this study is to verify the performance of deep learning and radiomics methods in classifying COVID-19 lesions and to reveal the image characteristics of COVID-19 lung disease. An MFPN neural network model was proposed to extract deep features of lesions, and six machine-learning methods were used to compare the classification performance of deep features, key radiomics features and combined features for COVID-19 lung lesions. The results show that, in the COVID-19 image classification task, the method combining radiomics and deep features achieves good classification results and has clinical application value.

1. Introduction

Respiratory infectious diseases have become a major public health problem and have brought great challenges to global medical and health services. Early prevention, diagnosis and treatment are therefore particularly important for improving patient prognoses, reducing the economic burden on patients, and avoiding the waste of medical resources [1]. Pneumonia caused by novel coronavirus infection is an acute respiratory infectious disease; common signs include respiratory symptoms, fever, cough, shortness of breath and other flu-like symptoms. As the disease worsens, it affects multiple tissues and organs, causes pneumonia and can rapidly progress to severe acute respiratory syndrome and renal failure, with lasting effects on organs and their function [2]. Timely, accurate and effective diagnosis is therefore key to the treatment and prevention of infectious diseases such as COVID-19. At present, real-time Reverse Transcription–Polymerase Chain Reaction (RT-PCR) [3] testing of viral nucleic acid is the recommended method for the diagnosis of COVID-19 [4,5]. However, with a rapid increase in the number of infections, RT-PCR testing may become unreliable depending on viral load or sampling technique. In addition, RT-PCR relies heavily on manual sampling and imposes strict limitations on sampling criteria. A number of studies have shown the effectiveness of chest CT in the diagnosis of COVID-19, and deep learning and radiomics methods play important roles in the image-assisted diagnosis of lung diseases [6,7,8].
Common lung diseases include pneumonia, chronic obstructive pulmonary disease, bronchial asthma, pulmonary nodules and lung cancer, and are typically accompanied by cough, chest pain, fever, dyspnea, expectoration, hemoptysis and acute respiratory distress syndrome. Compared with other diseases, lung diseases caused by unknown viruses are usually highly contagious and may cause large-scale mass infection events in the short term, posing a threat to public healthcare [9]. Using medical imaging and clinical indicators to explore the characteristics of complex lung diseases can therefore enable more rapid and accurate prevention of lung disease in the future. Microscopic changes in gene or protein patterns are reflected in macroscopic images, and mining deep image features can reveal changes in human tissues, cells and genes. Deep learning and radiomics can extract a large amount of high-dimensional image information with which to analyze disease information objectively and comprehensively, and thereby play a potential role in promoting disease diagnosis, treatment selection and prognosis evaluation [10].
Current research has shown that lung-image processing methods based on deep learning and radiomics can effectively distinguish lung disease subtypes, provide better clinical prognosis accuracy, and assist clinicians in adjusting the level of clinical management in time and allocating medical resources more reasonably [11]. Deep learning can automatically learn deep feature representations from large amounts of data and has good feature-discrimination ability [12]. It can effectively improve the performance of machine-learning tasks and has been widely used in signal processing, computer vision, natural language processing and many other fields [13,14,15]. In the medical field, Zhang et al. applied a deep neural network to the imaging-based diagnosis of COVID-19, which provided an important basis for the standardized diagnosis of COVID-19 [16]. Good progress has also been made in MRI-based brain tumor diagnosis [17], pancreatic disease diagnosis [18] and breast disease diagnosis [19], each of which has provided new standardized diagnosis and treatment methods for hospitals. Radiomics can extract high-dimensional image features from conventional computed-tomography images and describe imaging differences between human tissues and organs, thereby quantitatively assessing disease and identifying imaging markers. It has been applied in many fields, such as tumor detection, prediction of lymph node metastasis and treatment response evaluation [20,21,22]. In the diagnosis of lung diseases, radiomics has been shown to play an important role in COVID-19 screening, diagnosis and prediction of hospital stay, as well as in the assessment of risk factors for pneumonia patients [23,24,25].
Using deep learning and radiomics methods to automate the diagnosis of lung lesion subtypes can not only explore individual differences in the imaging phenotypes of lung lesions, but also provide automated auxiliary diagnostic tools for clinical diagnosis, offering a better screening method for the prevention and control of infectious lung diseases. In this study, six machine-learning methods were used to compare the diagnostic efficacy of deep features, key radiomics features and combined features for different COVID-19 lung lesions. Radiomics features were used to analyze differences in the imaging phenotypes of the different lung lesions, providing a reference for the clinical diagnosis of lung lesions. Through this study, we compared the performance of different modeling methods for lung disease classification, determined the key radiomics features of different lung lesions, and obtained a tool that can automatically identify lung lesions.

2. Materials and Methods

In this study, Python (version 3.5.6) was used to write the experimental code. The validation set was used for model evaluation, the ratio of training set to validation set was 4:1, and the average size of the lesion images was 66 × 66 pixels. The experimental process mainly includes image input, feature extraction, feature selection and machine-learning modeling; the flow chart is shown in Figure 1. Feature extraction comprises radiomics feature extraction and deep feature extraction from the lesion images. A new multi-scale convolutional neural network was used to extract the deep features of lung lesions, and six machine-learning classifiers were used for the final classification task.

2.1. Data

The data used in this study are lung-CT images. The dataset is from China National Center for Bioinformation, via the China Consortium of Chest CT Image Investigation (CC-CCII) [26], which is publicly available globally and was compiled based on data from the Third Affiliated Hospital of Sun Yat-sen University, the First Affiliated Hospital of Anhui Medical University, the Huaxi Hospital of Sichuan University, the People’s Hospital of Jiangsu Province, the Central People’s Hospital of Yichang, and the People’s Hospital of Wuhan University. All hospitals obtained Institutional Review Board (IRB) or Independent Ethics Committee (IEC) review approval and the informed consent of subjects, and CC-CCII complied with the policy of the Chinese Center for Disease Control and Prevention as to reportable infectious diseases, the Chinese Health Quarantine Law and the Chinese patient privacy regulations, and also followed the principles of the Declaration of Helsinki.
The dataset includes two parts: lung disease classification data and segmentation data. The classification data comprise lung-CT images of novel coronavirus pneumonia and common pneumonia, as well as a normal control group, with the corresponding clinical diagnosis data. The lesion segmentation data were obtained from the CT slice images of CC-CCII. The data included 750 CT images, at 512 × 512 resolution, from 150 COVID-19 patients. Each image was manually segmented into background, lung field (LF), ground-glass opacity (GGO) and consolidation (CL). Manual annotation of the lung images was performed by eight radiologists: four with 5 to 15 years of clinical experience and four with 15 to 25 years of clinical experience. In cases involving disputes, a final consensus was reached by an independent panel of four senior radiologists, each with at least 25 years of clinical experience. Figure 2 shows sample CT images of lung field, ground-glass opacity and consolidation. In this paper, the CC-CCII segmentation dataset is used as the research object. During image preprocessing, ROI regions larger than 9 × 9 pixels were retained, yielding 2404 lung field images, 1716 ground-glass opacity images and 705 consolidation images. Because this class imbalance affects model performance, the ROI images of the three lesion types were downsampled to 705 images each, resulting in 2115 cases in total for the subsequent construction of the joint deep learning and radiomics model. Figure 3 shows a flow chart of the inclusion and exclusion criteria.
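As a point of reference, the snippet below is a minimal Python sketch of this preparation step: random downsampling to 705 ROIs per class and the 4:1 train/validation split reported above. The placeholder integer lists stand in for the actual ROI images, and the use of a stratified split is our assumption.

```python
# Minimal sketch of class balancing and the 4:1 split; placeholder data.
import random
from sklearn.model_selection import train_test_split

random.seed(0)

# Stand-ins for the ROI images retained after the 9 x 9 size filter.
images_by_class = {"LF": list(range(2404)), "GGO": list(range(1716)), "CL": list(range(705))}

def balance_by_downsampling(groups, target=705):
    """Randomly downsample each lesion class to `target` samples."""
    return {label: random.sample(items, target) if len(items) > target else list(items)
            for label, items in groups.items()}

balanced = balance_by_downsampling(images_by_class)
X = [img for imgs in balanced.values() for img in imgs]   # 2115 ROIs in total
y = [label for label, imgs in balanced.items() for _ in imgs]

# 4:1 train/validation split, stratified so the classes stay balanced.
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)
print(len(X_train), len(X_val))  # 1692 423
```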

2.2. Radiomics Feature Extraction

Radiomics can extract a large number of high-dimensional image features, which permits the analysis of disease information in a more objective and comprehensive manner and plays a potential role in promoting disease diagnosis, treatment selection and prognosis evaluation [27]. In this paper, the Pyradiomics package (version 3.1.0) is used to extract 873-dimensional radiomics features of lung lesions. The extracted features mainly include first-order statistical features, shape features, gray-level co-occurrence matrix features, gray-level dependence matrix features, gray-level run-length matrix features, gray-level size zone matrix features, and neighboring gray-tone difference matrix features of the original images and their wavelet transforms. The correlation clustering plot in Figure 4 reveals that the majority of radiomics features exhibit both correlation and redundancy. Therefore, the least absolute shrinkage and selection operator (LASSO) [28] was used to select features from the high-dimensional radiomics data, and the top 20 key features were retained as model inputs. The Lambda curve of the regression coefficients during LASSO feature selection is shown in Figure 5, and the selected key radiomics features and their importance are shown in Figure 6.
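A sketch of this extraction-and-selection step is given below, using the standard Pyradiomics interface; the per-case helper, the looping convention and the use of LassoCV on integer-encoded lesion labels are our assumptions, since the paper does not give implementation details.

```python
# Sketch: Pyradiomics extraction plus LASSO selection of the 20 key features.
import numpy as np
from radiomics import featureextractor
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

extractor = featureextractor.RadiomicsFeatureExtractor()
extractor.enableAllFeatures()               # first-order, shape, GLCM, GLDM, GLRLM, GLSZM, NGTDM
extractor.enableImageTypeByName("Wavelet")  # features of wavelet-transformed images

def extract_case(image_path, mask_path):
    """Return all numeric feature values for one lesion ROI (paths are placeholders)."""
    raw = extractor.execute(image_path, mask_path)
    numeric = {}
    for name, value in raw.items():
        try:
            numeric[name] = float(value)    # keeps numeric diagnostics entries too
        except (TypeError, ValueError):
            pass
    return numeric

def select_top20(X, y):
    """LASSO-based selection: indices of the 20 largest absolute coefficients."""
    Xs = StandardScaler().fit_transform(X)
    lasso = LassoCV(cv=5, random_state=0).fit(Xs, y)  # y: integer-encoded lesion labels
    return np.argsort(np.abs(lasso.coef_))[::-1][:20]
```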
According to the LASSO feature selection results, the first five key features are “original firstorder Median”, “diagnostics Image-original Mean”, “wavelet-HLL firstorder Median”, “wavelet-HLL gldm High Gray Level Emphasis”, and “wavelet-HLL gldm Dependence Entropy”. Among them, the original firstorder Median feature has the highest impact on the prediction target. This feature describes the average brightness distribution of the original-image gray values and measures the central tendency of image brightness, and can be used to evaluate disease state or underlying biological characteristics, such as tumor texture and tissue density. The remaining four features mainly capture the overall brightness of the original image, the intensity of local texture features, the texture of high gray-level regions, and the uncertainty or randomness of the gray-level information between pixels.

2.3. Deep Feature Extraction

In this paper, we use the deep learning framework PyTorch (version 1.4.0) to construct a new Multi-Feature Pyramid Network (MFPN) to extract high-dimensional deep features of lung lesions. The model uses ResNet34 as the baseline network and combines Feature Pyramid Networks (FPN) [29], convolutional attention and global average pooling to build the classification model. It can effectively extract channel and spatial features at different scales and addresses the problem of insufficient semantic information in the feature-extraction process. The structure of the MFPN model is shown in Figure 7. In this study, the fully connected layers FC1, FC2, FC3 and FC4 in the last stage of the MFPN network are used as the final deep features.
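The paper does not publish the network code, so the following PyTorch sketch only approximates the description above: a ResNet34 backbone, FPN-style top-down fusion of the four stages, global average pooling, and four 5-dimensional heads (FC1–FC4) whose concatenation forms the 20-dimensional deep feature. The lateral channel width (128) is an arbitrary choice, and the convolutional attention block is omitted for brevity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet34

class MFPNSketch(nn.Module):
    """Approximation of the MFPN: ResNet34 stages, FPN fusion, GAP, heads FC1-FC4."""
    def __init__(self, num_classes=3, feat_dim=5):
        super().__init__()
        backbone = resnet34(weights=None)  # use pretrained=False on older torchvision
        self.stem = nn.Sequential(backbone.conv1, backbone.bn1, backbone.relu, backbone.maxpool)
        self.stages = nn.ModuleList([backbone.layer1, backbone.layer2,
                                     backbone.layer3, backbone.layer4])
        # 1x1 lateral convs mapping the 64/128/256/512-channel stages to 128 channels
        self.laterals = nn.ModuleList([nn.Conv2d(c, 128, kernel_size=1)
                                       for c in (64, 128, 256, 512)])
        self.pool = nn.AdaptiveAvgPool2d(1)                                  # global average pooling
        self.heads = nn.ModuleList([nn.Linear(128, feat_dim) for _ in range(4)])  # FC1-FC4
        self.classifier = nn.Linear(4 * feat_dim, num_classes)

    def forward(self, x):
        x = self.stem(x)
        feats = []
        for stage in self.stages:
            x = stage(x)
            feats.append(x)
        # FPN top-down pathway: upsample deeper maps and add to shallower laterals
        p = [lat(f) for lat, f in zip(self.laterals, feats)]
        for i in range(len(p) - 1, 0, -1):
            p[i - 1] = p[i - 1] + F.interpolate(p[i], size=p[i - 1].shape[-2:], mode="nearest")
        # one 5-dim head per pyramid level; their concatenation is the deep feature
        deep = torch.cat([head(self.pool(level).flatten(1))
                          for head, level in zip(self.heads, p)], dim=1)     # (B, 20)
        return self.classifier(deep), deep
```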
In addition, to ensure that the network converges to a good state, the model is trained for 200 epochs with a batch size of 8, using the cross-entropy loss. To reduce the risk of the loss function becoming trapped in a poor local optimum, which would prevent the model from reaching its best performance, Adam is selected as the optimizer with an initial learning rate of 0.0001. The data augmentation methods include random cropping, flip transformation and scaling. The changes in loss and accuracy during MFPN training are shown in Figure 8. At epoch 70, the model reaches its highest accuracy on the test set, 83.63%.
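A minimal training-loop sketch under the stated settings follows. The `train_loader`, `val_loader` and `evaluate` helpers are placeholders, and checkpointing on best validation accuracy is our assumption about how the epoch-70 model was kept.

```python
import torch
from torchvision import transforms

# Augmentations named in the text: random cropping, flipping, scaling.
augment = transforms.Compose([
    transforms.RandomResizedCrop(64, scale=(0.8, 1.0)),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])

device = "cuda" if torch.cuda.is_available() else "cpu"
model = MFPNSketch(num_classes=3).to(device)
criterion = torch.nn.CrossEntropyLoss()                    # cross-entropy loss
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # Adam, initial lr 0.0001

best_acc = 0.0
for epoch in range(200):                                   # 200 training epochs
    model.train()
    for images, labels in train_loader:                    # batch size 8 (loader assumed)
        images, labels = images.to(device), labels.to(device)
        logits, _ = model(images)
        loss = criterion(logits, labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    acc = evaluate(model, val_loader)                      # validation accuracy (helper assumed)
    if acc > best_acc:                                     # keep the best checkpoint
        best_acc = acc
        torch.save(model.state_dict(), "mfpn_best.pt")
```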

2.4. Feature Fusion and Modeling

Multi-feature fusion often helps to improve model performance, increase feature diversity and alleviate overfitting. Therefore, in order to further discover regularities in the imaging features of lung lesions and to verify the influence of multi-omics data on lung disease classification, this paper combines the deep features of lung lesions with radiomics features: the deep features are the 5-dimensional outputs of FC1, FC2, FC3 and FC4 in the MFPN model, giving 20 deep feature dimensions in total, and the radiomics features are the 20 dimensions selected by LASSO. Six machine-learning methods, Logistic, KNN, Bayesian, Random Forest, XGBoost and Deep Learning, were then used to construct multi-classification models for distinguishing the typical types of lung lesions.
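The sketch below illustrates the fusion and modeling step: the two 20-dimensional feature sets are concatenated and the five classical classifiers are fit on the result (the sixth method, Deep Learning, is the MFPN itself). The feature matrices and index arrays are placeholders from the previous steps, and the scikit-learn/XGBoost default hyperparameters are our assumption; the paper does not report them.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import RandomForestClassifier
from xgboost import XGBClassifier

# X_radiomics: (n, 20) LASSO-selected features; X_deep: (n, 20) FC1-FC4 outputs.
X_combined = np.hstack([X_radiomics, X_deep])              # (n, 40) joint features

classifiers = {
    "Logistic": LogisticRegression(max_iter=1000),
    "KNN": KNeighborsClassifier(),
    "Bayesian": GaussianNB(),
    "Random Forest": RandomForestClassifier(random_state=0),
    "XGBoost": XGBClassifier(eval_metric="mlogloss"),
}
for name, clf in classifiers.items():
    clf.fit(X_combined[train_idx], y_enc[train_idx])       # y_enc: integer lesion labels
    print(name, clf.score(X_combined[val_idx], y_enc[val_idx]))
```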

3. Results

In this paper, the 20-dimensional deep features and 20-dimensional key radiomics features are used to quantitatively analyze the performance of machine-learning models in lung lesion classification. In the experiment, the classification performance of the radiomics features, the deep features and the combined features is compared across the six models: Logistic, KNN, Bayesian, Random Forest, XGBoost and Deep Learning. The deep model is evaluated both by combining its deep features with the other machine-learning classifiers and by using the conventional end-to-end deep classification method. The results of the comparative experiments are shown in Table 1.
From Table 1, when the 20-dimensional key radiomics features were used for modeling, Logistic achieved the best classification effect, with an accuracy of 90.07%; the accuracies of KNN, Bayesian, Random Forest and XGBoost were 89.60%, 86.76%, 89.36% and 88.89%, respectively, so Random Forest and KNN also perform well with radiomics features. With the 20-dimensional deep features, the accuracies of Logistic, KNN, Bayesian, Random Forest, XGBoost and Deep Learning were 69.50%, 72.34%, 46.81%, 74.23%, 75.41% and 83.63%, respectively; here, the conventional deep learning classifier performs best. Finally, when the key radiomics features and deep features were combined, XGBoost achieved the best classification performance, with an accuracy of 89.83%, while the accuracies of Logistic, KNN, Bayesian and Random Forest were 87.00%, 84.40%, 82.74% and 89.36%, respectively. Overall, the key radiomics features and the combined features yield high classification performance, and Logistic performs best with the key radiomics features, reaching an accuracy of 90.07%.
To further evaluate model performance, the two models with the highest classification accuracy, Logistic and XGBoost, were selected, and each lesion subtype was evaluated in detail using Precision, Recall, F1-Score and AUC. Table 2 shows the detailed evaluation metrics for Logistic and XGBoost.
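For reference, the per-subtype metrics in Table 2 can be computed as sketched below, with one-vs-rest AUC per class; `clf` is one of the fitted models from the previous step, and the LF/GGO/CL integer encoding is assumed.

```python
from sklearn.metrics import precision_recall_fscore_support, roc_auc_score
from sklearn.preprocessing import label_binarize

y_true = y_enc[val_idx]
y_pred = clf.predict(X_combined[val_idx])
y_prob = clf.predict_proba(X_combined[val_idx])

# Per-class precision, recall and F1 for LF, GGO and CL.
prec, rec, f1, _ = precision_recall_fscore_support(y_true, y_pred, labels=[0, 1, 2])

# One-vs-rest AUC per class, then the macro mean reported in Table 2.
y_bin = label_binarize(y_true, classes=[0, 1, 2])
aucs = [roc_auc_score(y_bin[:, k], y_prob[:, k]) for k in range(3)]

for k, name in enumerate(["LF", "GGO", "CL"]):
    print(f"{name}: P={prec[k]:.2%} R={rec[k]:.2%} F1={f1[k]:.2%} AUC={aucs[k]:.2%}")
print(f"Mean AUC: {sum(aucs) / 3:.2%}")
```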
According to Table 2, Logistic performs best with the key radiomics features, with an average precision of 90.04%, an average recall of 90.14%, an average F1 value of 90.06% and an average AUC of 97.32%. The average AUC values of the Logistic model with deep features and joint features are 87.49% and 96.19%, respectively, so Logistic achieves its highest AUC, and its best overall performance, with the key radiomics features. For XGBoost, the average AUC is 98.13% with the key radiomics features and 89.94% with the deep features. With the joint features, XGBoost has the best comprehensive performance: an average precision of 88.91%, an average recall of 90.93%, an average F1 value of 89.78% and an average AUC of 98.08%. Across lesion types, the XGBoost model with combined features outperforms the key radiomics features for the Lung field and Consolidation types, with slightly lower Recall and F1 values for Ground-glass opacity. The ROC curves of Logistic and XGBoost for the three categories are shown in Figure 9.

4. Discussion

In this study, deep learning and radiomics methods were used to construct an image classification model for COVID-19 lung lesion subtypes. The proposed method provides an automatic classification method for the diagnosis of lung lesions, helping to address the difficulty of diagnosing large numbers of lung lesions. The proposed model can distinguish Lung field, Ground-glass opacity and Consolidation in CT images of COVID-19 patients, and radiomics was used to analyze the imaging differences between lung lesions. Experimental results across six classification models show that the highest classification accuracies for key radiomics features, deep features and combined features are 90.07%, 75.41% and 89.83%, respectively, while the conventional deep learning method reaches 83.63%. These results indicate that models built on radiomics features outperform those built on deep features. A likely reason is that the lesion ROIs in this dataset vary considerably in size; deep learning models perform better on uniformly sized image patterns, and large variation in lesion size can degrade model performance [30,31]. With the combined features, however, the robustness of the model improves.
The Logistic model performs best with the key radiomics features, the conventional deep learning method performs best with the deep features, and XGBoost performs best with the joint features. Moreover, when the key radiomics and deep features are combined, the complexity and dimensionality of the data increase: the performance of the Logistic, KNN and Bayesian models declines, while the classification performance of XGBoost and Random Forest improves. XGBoost and Random Forest therefore generalize better as data complexity and dimensionality increase. According to Table 1, the classification accuracy with deep features alone is markedly lower than with the key radiomics features or the joint features. For the Logistic model, joint feature modeling does not improve performance but rather decreases it, whereas for XGBoost the joint features effectively improve classification accuracy. It can thus be seen that, in the classification of lung lesion images, the joint feature set composed of deep and radiomics features does not improve the performance of simple models but does have a positive effect on models with strong fitting ability. This further shows that ensemble learning approaches can perform excellently in complex tasks [32].
To further assess the robustness of the model, the different categories are evaluated with additional metrics [33]. For the three subtypes of COVID-19 lung lesions, the two models with the highest classification accuracy, Logistic and XGBoost, were selected for comparison. With the key radiomics features, Logistic achieved the highest classification performance across the three subtypes, with an average AUC of 97.32%. In this model, the indices for the Lung field subtype were significantly higher than those for Ground-glass opacity and Consolidation, and the indices for Consolidation were significantly higher than those for Ground-glass opacity, indicating that the imaging features of the Ground-glass opacity subtype are less distinctive than those of Lung field and Consolidation. In addition, XGBoost with the combined features achieved good performance on the Lung field and Consolidation subtypes; for Ground-glass opacity, its precision is higher than with the key radiomics features or the deep features, and the other indicators differ little. The average AUC values of XGBoost with key radiomics feature modeling and joint feature modeling are 98.13% and 98.08%, respectively. Although the AUC of the key radiomics feature modeling is slightly higher, the comprehensive performance of joint feature modeling is clearly better. Therefore, in the task of lesion recognition in COVID-19 lung images, the joint modeling method improves recognition performance and generalization ability, and is suitable for large-scale, high-dimensional radiomics analysis tasks. Radiomics features were also used to analyze differences in the imaging phenotypes of the different lung lesions. Feature selection determined that the original firstorder Median feature has the highest impact on the prediction target, showing that the differences among Lung field, Ground-glass opacity and Consolidation, the three principal lesion types, are mainly reflected in image brightness. Several other important radiomics features further indicated that the intensity of local texture features and the randomness of gray-level information are also important imaging markers of the differences between lung lesions.

5. Conclusions

In this study, deep learning and radiomics methods are used to distinguish different lesion types in COVID-19 images, the MFPN model is proposed to extract deep features of lesions, and the classification performance of six common machine-learning methods is compared. The experimental results show that, in the COVID-19 image classification task, the classification method combining radiomics and deep features achieves good classification results and has clinical application value. In addition, we analyzed the differences in the imaging phenotypes of different lung lesions using radiomics features, providing a reference for the imaging-based identification of lung lesions.

Author Contributions

Conceptualization, G.M. and K.W.; methodology, G.M. and K.W.; validation, K.W. and T.Z.; formal analysis, B.S. and L.Y.; investigation, G.M. and K.W.; resources, K.W.; data curation, G.M.; writing—original draft preparation, G.M.; writing—review and editing, K.W.; visualization, K.W. and T.Z.; supervision, K.W.; project administration, G.M.; funding acquisition, G.M. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Natural Science Foundation of Xinjiang (Grant No. 2022D01C202), the 14th Five-Year Plan Distinctive Program of Public Health and Preventive Medicine in Higher Education Institutions of Xinjiang Uygur Autonomous Region, and the Key Research and Development Program of Xinjiang Uygur Autonomous Region (Grant No. 2022B03023-2).

Institutional Review Board Statement

The dataset used in this study is publicly available, and in the interest of academic integrity and ethics we have identified the source of the data. We use the data in accordance with the requirements of the data providers and protect personal privacy and sensitive information in our research to ensure that personally identifiable data are not compromised. Therefore, the data sources used in this study were not subject to a separate ethical review process. We guarantee the openness, anonymity, legality and reliability of the data, and follow the principles of academic integrity and privacy protection in our research.

Informed Consent Statement

This study did not involve collecting data directly from subjects; our research methods and results are based on open-source datasets. Guidelines of academic integrity and ethics were followed, and the study did not involve any violation of privacy principles.

Data Availability Statement

The data are from the China National Center for Bioinformation, via the China Consortium of Chest CT Image Investigation (CC-CCII), and are publicly available globally at http://ncov-ai.big.ac.cn/download (accessed on 18 June 2024).

Conflicts of Interest

The authors declare there are no conflicts of interest.

References

1. Waterer, G. The global burden of respiratory infectious diseases before and beyond COVID. Respirology 2023, 28, 95.
2. Umakanthan, S.; Sahu, P.; Ranade, A.V.; Bukelo, M.M.; Rao, J.S.; Abrahao-Machado, L.F.; Dahal, S.; Kumar, H.; Kv, D. Origin, transmission, diagnosis and management of coronavirus disease 2019 (COVID-19). Postgrad. Med. J. 2020, 96, 753–758.
3. Green, M.R.; Sambrook, J. Quantification of RNA by real-time reverse transcription-polymerase chain reaction (RT-PCR). Cold Spring Harb. Protoc. 2018, 2018, pdb.prot095042.
4. Yoo, H.M.; Kim, I.H.; Kim, S. Nucleic acid testing of SARS-CoV-2. Int. J. Mol. Sci. 2021, 22, 6150.
5. Islam, N.; Ebrahimzadeh, S.; Salameh, J.P.; Kazi, S.; Fabiano, N.; Treanor, L.; Absi, M.; Hallgrimson, Z.; Leeflang, M.M.; Hooft, L.; et al. Thoracic imaging tests for the diagnosis of COVID-19. Cochrane Database Syst. Rev. 2021, 2021, CD013639.
6. Fang, Y.; Zhang, H.; Xie, J.; Lin, M.; Ying, L.; Pang, P.; Ji, W. Sensitivity of chest CT for COVID-19: Comparison to RT-PCR. Radiology 2020, 296, E115–E117.
7. Ayalew, A.M.; Salau, A.O.; Tamyalew, Y.; Abeje, B.T.; Woreta, N. X-Ray image-based COVID-19 detection using deep learning. Multimed. Tools Appl. 2023, 82, 44507–44525.
8. Khan, S.U.; Ullah, I.; Ullah, N.; Shah, S.; Affendi, M.E.; Lee, B. A novel CT image de-noising and fusion based deep learning network to screen for disease (COVID-19). Sci. Rep. 2023, 13, 6601.
9. Kalhan, R.; Dransfield, M.T.; Colangelo, L.A.; Cuttica, M.J.; Jacobs, D.R., Jr.; Thyagarajan, B.; Estepar, R.S.; Harmouche, R.; Onieva, J.O.; Ash, S.Y.; et al. Respiratory symptoms in young adults and future lung disease. The CARDIA lung study. Am. J. Respir. Crit. Care Med. 2018, 197, 1616–1624.
10. Kang, W.; Qiu, X.; Luo, Y.; Luo, J.; Liu, Y.; Xi, J.; Li, X.; Yang, Z. Application of radiomics-based multiomics combinations in the tumor microenvironment and cancer prognosis. J. Transl. Med. 2023, 21, 598.
11. Nakashima, M.; Uchiyama, Y.; Minami, H.; Kasai, S. Prediction of COVID-19 patients in danger of death using radiomic features of portable chest radiographs. J. Med. Radiat. Sci. 2023, 70, 13–20.
12. Zhou, T.; Cheng, Q.; Lu, H.; Li, Q.; Zhang, X.; Qiu, S. Deep learning methods for medical image fusion: A review. Comput. Biol. Med. 2023, 160, 106959.
13. Snoap, J.A.; Popescu, D.C.; Latshaw, J.A.; Spooner, C.M. Deep-Learning-Based classification of digitally modulated signals using capsule networks and cyclic cumulants. Sensors 2023, 23, 5735.
14. Zech, J.R.; Carotenuto, G.; Igbinoba, Z.; Tran, C.V.; Insley, E.; Baccarella, A.; Wong, T.T. Detecting pediatric wrist fractures using deep-learning-based object detection. Pediatr. Radiol. 2023, 53, 1125–1134.
15. Chen, A.; Yu, Z.; Yang, X.; Guo, Y.; Bian, J.; Wu, Y. Contextualized medication information extraction using transformer-based deep learning architectures. J. Biomed. Inform. 2023, 142, 104370.
16. Zhang, H.T.; Sun, Z.Y.; Zhou, J.; Gao, S.; Dong, J.H.; Liu, Y.; Bai, X.; Ma, J.L.; Li, M.; Li, G.; et al. Computed tomography–based COVID-19 triage through a deep neural network using mask–weighted global average pooling. Front. Cell. Infect. Microbiol. 2023, 13, 1116285.
17. Zhou, T.; Zhu, S. Uncertainty quantification and attention-aware fusion guided multi-modal MR brain tumor segmentation. Comput. Biol. Med. 2023, 163, 107142.
18. Tang, A.; Gong, P.; Fang, N.; Ye, M.; Hu, S.; Liu, J.; Wang, W.; Gao, K.; Wang, X.; Tian, L. Endoscopic ultrasound diagnosis system based on deep learning in images capture and segmentation training of solid pancreatic masses. Med. Phys. 2023, 50, 4197–4205.
19. Zhao, X.; Bai, J.W.; Guo, Q.; Ren, K.; Zhang, G.J. Clinical applications of deep learning in breast MRI. Biochim. Biophys. Acta Rev. Cancer 2023, 1878, 188864.
20. Du, G.; Zeng, Y.; Chen, D.; Zhan, W.; Zhan, Y. Application of radiomics in precision prediction of diagnosis and treatment of gastric cancer. Jpn. J. Radiol. 2023, 41, 245–257.
21. Zhao, X.; Li, W.; Zhang, J.; Tian, S.; Zhou, Y.; Xu, X.; Hu, H.; Lei, D.; Wu, F. Radiomics analysis of CT imaging improves preoperative prediction of cervical lymph node metastasis in laryngeal squamous cell carcinoma. Eur. Radiol. 2023, 33, 1121–1131.
22. Li, Z.F.; Kang, L.Q.; Liu, F.H.; Zhao, M.; Guo, S.Y.; Lu, S.; Quan, S. Radiomics based on preoperative rectal cancer MRI to predict the metachronous liver metastasis. Abdom. Radiol. 2023, 48, 833–843.
23. Jiang, X.; Su, N.; Quan, S.; Linning, E.; Li, R. Computed Tomography Radiomics-based Prediction Model for Gender–Age–Physiology Staging of Connective Tissue Disease-associated Interstitial Lung Disease. Acad. Radiol. 2023, 30, 2598–2605.
24. Zhou, T.; Tu, W.; Dong, P.; Duan, S.; Zhou, X.; Ma, Y.; Wang, Y.; Liu, T.; Zhang, H.; Feng, Y.; et al. CT-based radiomic nomogram for the prediction of chronic obstructive pulmonary disease in patients with lung cancer. Acad. Radiol. 2023, 30, 2894–2903.
25. Huang, G.; Hui, Z.; Ren, J.; Liu, R.; Cui, Y.; Ma, Y.; Han, Y.; Zhao, Z.; Lv, S.; Zhou, X.; et al. Potential predictive value of CT radiomics features for treatment response in patients with COVID-19. Clin. Respir. J. 2023, 17, 394–404.
26. Zhang, K.; Liu, X.; Shen, J.; Li, Z.; Sang, Y.; Wu, X.; Zha, Y.; Liang, W.; Wang, C.; Wang, K.; et al. Clinically applicable AI system for accurate diagnosis, quantitative measurements, and prognosis of COVID-19 pneumonia using computed tomography. Cell 2020, 181, 1423–1433.
27. Lambin, P.; Rios-Velazquez, E.; Leijenaar, R.; Carvalho, S.; Van Stiphout, R.G.; Granton, P.; Zegers, C.M.L.; Gillies, R.; Boellard, R.; Dekker, A.; et al. Radiomics: Extracting more information from medical images using advanced feature analysis. Eur. J. Cancer 2012, 48, 441–446.
28. Tibshirani, R. Regression shrinkage and selection via the lasso. J. R. Stat. Soc. Ser. B 1996, 58, 267–288.
29. Lin, T.Y.; Dollár, P.; Girshick, R.; He, K.; Hariharan, B.; Belongie, S. Feature Pyramid Networks for Object Detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 936–944.
30. Rakaraddi, A.; Pratama, M. Unsupervised Learning for Identifying High Eigenvector Centrality Nodes: A Graph Neural Network Approach. In Proceedings of the 2021 IEEE International Conference on Big Data (Big Data), Orlando, FL, USA, 15–18 December 2021; pp. 4945–4954.
31. Mire, A.; Elangovan, V.; Patil, S. (Eds.) Advances in Deep Learning for Medical Image Analysis; CRC Press: Boca Raton, FL, USA, 2022.
32. Sun, Y.; Salerno, S.; Pan, Z.; Yang, E.; Sujimongkol, C.; Song, J.; Wang, X.; Han, P.; Zeng, D.; Kang, J.; et al. Assessing the prognostic utility of clinical and radiomic features for COVID-19 patients admitted to ICU: Challenges and lessons learned. Harv. Data Sci. Rev. 2023, 6.
33. Qiu, J.; Yan, M.; Wang, H.; Liu, Z.; Wang, G.; Wu, X.; Gao, Q.; Hu, H.; Chen, J.; Dai, Y. Identifying ureteral stent encrustation using machine learning based on CT radiomics features: A bicentric study. Front. Med. 2023, 10, 1202486.
Figure 1. Technical framework diagram of a joint approach for deep learning and radiomics.
Figure 2. Sample images of lung field, ground-glass opacity, and consolidation in CT images.
Figure 3. Flow chart of data inclusion and exclusion criteria.
Figure 4. Cluster plot of radiomics feature correlation.
Figure 5. Lambda curves for LASSO regression coefficients.
Figure 6. The 20-dimensional key features selected from the radiomics set.
Figure 7. MFPN neural network structure diagram. C represents the convolutional layer, P represents the pyramid feature, and FC represents the fully connected layer.
Figure 8. Loss function and accuracy curves of the MFPN network.
Figure 9. ROC curves for the Logistic and XGBoost models.
Table 1. Model comparison using the experimental results, as evaluated for accuracy.

| Model         | Radiomics | Deep Learning | Combined Feature |
|---------------|-----------|---------------|------------------|
| Logistic      | 90.07%    | 69.50%        | 87.00%           |
| KNN           | 89.60%    | 72.34%        | 84.40%           |
| Bayesian      | 86.76%    | 46.81%        | 82.74%           |
| Random Forest | 89.36%    | 74.23%        | 89.36%           |
| XGBoost       | 88.89%    | 75.41%        | 89.83%           |
| Deep Learning | -         | 83.63%        | -                |
Table 2. Classification results for the Logistic and XGBoost models (all metrics in %; LF = Lung field, GGO = Ground-glass opacity, CL = Consolidation).

| Data Type        | Metric    | Logistic LF | Logistic GGO | Logistic CL | Logistic Mean | XGBoost LF | XGBoost GGO | XGBoost CL | XGBoost Mean |
|------------------|-----------|-------------|--------------|-------------|---------------|------------|-------------|------------|--------------|
| Radiomics        | Precision | 92.25       | 87.59        | 90.28       | 90.04         | 89.44      | 88.32       | 87.50      | 88.42        |
|                  | Recall    | 93.57       | 83.33        | 93.53       | 90.14         | 92.03      | 85.21       | 94.03      | 90.42        |
|                  | F1-Score  | 92.91       | 85.41        | 91.87       | 90.06         | 90.71      | 86.74       | 90.65      | 89.37        |
|                  | AUC       | 98.63       | 94.98        | 98.35       | 97.32         | 98.78      | 96.87       | 98.74      | 98.13        |
| Deep learning    | Precision | 68.31       | 78.83        | 61.81       | 69.65         | 77.46      | 68.61       | 62.50      | 69.52        |
|                  | Recall    | 90.65       | 58.70        | 67.42       | 72.26         | 83.33      | 74.02       | 76.27      | 77.87        |
|                  | F1-Score  | 77.91       | 67.29        | 64.49       | 69.90         | 80.29      | 71.21       | 68.70      | 73.40        |
|                  | AUC       | 89.95       | 87.55        | 84.96       | 87.49         | 92.96      | 88.00       | 88.85      | 89.94        |
| Combined feature | Precision | 92.25       | 83.94        | 84.72       | 86.97         | 89.44      | 89.78       | 87.50      | 88.91        |
|                  | Recall    | 92.25       | 79.31        | 89.71       | 87.09         | 94.78      | 82.55       | 95.45      | 90.93        |
|                  | F1-Score  | 92.25       | 81.56        | 87.14       | 86.98         | 92.03      | 86.01       | 91.30      | 89.78        |
|                  | AUC       | 97.79       | 94.08        | 96.70       | 96.19         | 98.96      | 96.74       | 98.53      | 98.08        |
