Article

Machine Learning: Using Xception, a Deep Convolutional Neural Network Architecture, to Implement Pectus Excavatum Diagnostic Tool from Frontal-View Chest X-rays

1 Division of Thoracic Surgery, Department of Surgery, Taipei Tzu Chi Hospital, Buddhist Tzu Chi Medical Foundation, New Taipei City 231016, Taiwan
2 Department of Research, Taipei Tzu Chi Hospital, Buddhist Tzu Chi Medical Foundation, New Taipei City 231016, Taiwan
3 Department of Computer Science and Information Engineering, National Changhua University of Education, Changhua City 50074, Taiwan
4 Department of Radiology, Taipei Tzu Chi Hospital, Buddhist Tzu Chi Medical Foundation, New Taipei City 231016, Taiwan
5 Department of R&D, Bamboo Technology Ltd., Taipei City 105037, Taiwan
6 School of Medicine, Tzu Chi University, Hualien 970374, Taiwan
* Authors to whom correspondence should be addressed.
Biomedicines 2023, 11(3), 760; https://doi.org/10.3390/biomedicines11030760
Submission received: 11 November 2022 / Revised: 17 February 2023 / Accepted: 19 February 2023 / Published: 2 March 2023
(This article belongs to the Special Issue Artificial Intelligence in the Detection of Diseases)

Abstract

Pectus excavatum (PE), a chest-wall deformity that can compromise cardiopulmonary function, cannot be detected by a radiologist through frontal chest radiography without a lateral view or chest computed tomography. This study aimed to train a convolutional neural network (CNN), a deep learning architecture with powerful image-processing ability, to screen for PE on frontal chest radiography, the most common imaging test in current hospital practice. Posteroanterior-view chest images of PE and normal patients were collected from our hospital to build the database. Of these, 80% were used as the training set for the established CNN algorithm, Xception, and the remaining 20% formed the test set for model performance evaluation. The area under the receiver operating characteristic curve of our diagnostic artificial intelligence model ranged from 0.976 to 1. The test accuracy of the model reached 0.989, and the sensitivity and specificity were 96.66% and 96.64%, respectively. Our study is the first to show that a CNN can be trained as a diagnostic tool for PE using frontal chest X-rays, a task that is not possible for the human eye. It offers a convenient way to screen potential candidates for the surgical repair of PE, primarily using already available imaging examinations.

1. Introduction

Pectus excavatum (PE), also called funnel chest, is a congenital structural deformity of the chest wall. Depression of the anterior chest wall to a certain degree is not merely a cosmetic problem; PE, with an estimated prevalence of approximately 1 in 300–400 births [1], can also cause exercise intolerance owing to compromised cardiopulmonary function [2]. On examination, the common cardiopulmonary findings are a restrictive pattern on pulmonary function testing (PFT) and decreased right-ventricular function on echocardiography [3].
Pectus excavatum can be corrected by a modified Nuss procedure, a minimally invasive method in which stainless-steel bars are placed retrosternally to strut the depressed chest wall [4]. Although surgical repair has been proven to improve cardiopulmonary function in patients with PE [5], regrettably, some patients miss the opportunity for surgical correction at the optimal age because of a delayed diagnosis. Patients may feel embarrassed about their appearance and may be unwilling to talk about it. Some may not even be aware of the deformity, and therefore cannot ascribe their exercise intolerance and cardiopulmonary symptoms to PE, which makes accessible PE diagnosis all the more important. Clinically, the imaging diagnosis of PE still relies on manual measurement of thoracic diameters on chest computed tomography (CT). Physicians divide the maximum transverse diameter of the thoracic cage by the shortest distance from the sternum to the anterior vertebral body to obtain the Haller index, which indicates the severity of pectus excavatum. A Haller index cut-off of 3.25 has been widely used for pectus excavatum diagnosis for decades, based on the preliminary report of Haller et al., in which all patients who received operative repair had a Haller index greater than 3.25 (Figure 1) [6]. However, chest CT is not economical and is requested far less frequently than frontal-view chest X-rays during routine health examinations. Moreover, a chest CT examination delivers a 30- to 170-fold higher radiation dose than a chest X-ray [7]. Since a human reader can hardly identify pectus excavatum on a frontal-view chest X-ray, we propose using artificial intelligence to accomplish this task. Our aim is to train a machine learning model to diagnose pectus excavatum from plain frontal chest radiographs automatically, without the need for a chest CT. As a screening tool, it would not only economize manual labor and medical expenses, but would also ensure that potentially diseased candidates can be identified for surgical repair to improve their quality of life, while sparing them extra radiation exposure.
Of note, a PE diagnosis is unlikely from frontal chest radiographs alone, without a lateral-view radiograph or CT images.

Convolutional Neural Networks (CNN) and Medical Image Detection

Machine learning and artificial intelligence in the computing field aim to develop algorithms that teach computers to detect patterns in data autonomously. Artificial neural network (ANN) architectures, and the convolutional neural network (CNN) architectures derived from them, are computing methods that can be used to develop such algorithms. CNNs are very good feature extractors: they use filters to produce convolutional-layer outputs, reducing the number of training parameters while preserving accuracy. The convolutional layers can also be repeated several times to go deeper and make the model match the data pattern more precisely [8]. Starting with the LeNet style [9], the subsequent refined, well-known CNN architectures (VGG-style networks, Inception and Inception-ResNet) have proven their powerful performance in image classification tasks. Some researchers have applied these models to the interpretation of chest X-ray images to detect pneumothorax, nodules, masses, opacities and fractures [10,11,12]. In recent research, the authors used VGG16 and VGG19 to identify pectus excavatum from chest computed tomography images [13]. However, artificial intelligence has not yet been applied to detecting PE from frontal chest X-rays. Among the above CNN models, Xception is the most recently proposed model derived from Inception; it performs better on the ImageNet dataset and on JFT (an internal dataset used at Google), and it has a smaller parameter count and higher training speed than Inception V3 [14]. For the reasons outlined above, we adopted Xception as our model training algorithm.
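To make the parameter saving concrete, the following minimal sketch (assuming the TensorFlow/Keras API, which the Methods section names; the layer sizes are illustrative only) compares a standard convolution with the depthwise separable convolution that Xception is built from.

```python
# Minimal sketch (TensorFlow/Keras API, illustrative sizes) comparing a
# standard convolution with the depthwise separable convolution Xception uses.
import tensorflow as tf

inputs = tf.keras.Input(shape=(224, 224, 32))          # 32-channel feature map

standard = tf.keras.layers.Conv2D(64, kernel_size=3, padding="same")
separable = tf.keras.layers.SeparableConv2D(64, kernel_size=3, padding="same")

_ = standard(inputs)    # calling the layers builds their weights
_ = separable(inputs)

print(standard.count_params())   # 3*3*32*64 + 64 biases = 18,496
print(separable.count_params())  # 3*3*32 depthwise + 32*64 pointwise + 64 = 2,400
```

The separable layer matches the output shape of the standard one with roughly an eighth of the parameters, which is the efficiency argument made above.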

2. Materials and Methods

All the radiographs and reports in our study database were obtained from the Taipei Tzu-Chi Hospital, Taipei, Taiwan, ROC, and were fully anonymized. The study was approved by the Ethics Committee and Institutional Review Board (IRB No: 11-XD-109). The requirement for patient consent was waived by the Institutional Review Board.
Images of the posteroanterior-view chest radiographs for model implementation were retrieved in JPEG format at 1760 × 2140 pixels from the clinical picture archiving and communication system (PACS). These images were matted through YOLOv4 and downsized to a resolution of 224 × 224 pixels, making them suitable for image recognition by a CNN [15]. Extra marks for clinical use can occasionally be seen at the periphery of chest X-ray images. By using object detection (YOLOv4) to identify the chest cage, we excluded such unrelated parts, including peripheral marks usually used for clinical labeling and the limb girdle, so that the CNN model could extract the diseased features from the chest cavity used for model training (Figure 2).
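The following is a minimal sketch of this preprocessing step. The chest-cage bounding box is assumed to come from a YOLOv4 detector (not shown here), and the file name and coordinates are hypothetical placeholders.

```python
# Minimal sketch of the crop-and-downsize step. The bounding box is assumed to
# be the output of a YOLOv4 chest-cage detector (not shown); file name and
# coordinates below are hypothetical placeholders.
from PIL import Image

def crop_and_downsize(path: str, bbox: tuple, size=(224, 224)) -> Image.Image:
    """Crop a radiograph to the detected chest cage and downsize it for the CNN."""
    left, top, right, bottom = bbox       # detector output, pixel coordinates
    img = Image.open(path)
    return img.crop((left, top, right, bottom)).resize(size)

# Hypothetical usage on a 1760 x 2140 PACS export:
patch = crop_and_downsize("cxr_0001.jpg", bbox=(180, 260, 1580, 1900))
patch.save("cxr_0001_224.jpg")
```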

2.1. Data Set

Images were captured from 1 January 2006 to 31 December 2020 from patients aged 12–50 years. The collected images were divided into two independent datasets: CXRs from pectus excavatum patients (PE) and normal subjects (N). The PE dataset included images from patients whose Haller index was greater than 3.25, measured and calculated manually from chest CT; all were diagnosed with PE and underwent surgical treatment in our hospital. The images in the N dataset were obtained from a normal group with no abnormalities on either frontal chest X-ray or chest CT. All patients in the normal group were verified to have no PE, based on a Haller index of less than 3.25 calculated manually from their chest CT images. The images in the two datasets were randomly split in an 80:20 ratio into training and test sets, respectively, by a computer program. The test set, a holdout dataset never encountered by the algorithm during training, was used to evaluate the trained models.
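A minimal sketch of such a random 80:20 split follows, with scikit-learn standing in for the computer program mentioned above. The file lists are hypothetical, with sizes taken from the Results section; stratifying the pooled images approximates splitting each dataset separately.

```python
# Minimal sketch of the random 80:20 split (scikit-learn as an assumed
# stand-in). File lists are hypothetical; sizes come from the Results section.
from sklearn.model_selection import train_test_split

pe_images = [f"pe_{i:04d}.jpg" for i in range(774)]    # PE dataset
n_images = [f"n_{i:04d}.jpg" for i in range(1253)]     # normal dataset

paths = pe_images + n_images
labels = [1] * len(pe_images) + [0] * len(n_images)    # 1 = PE, 0 = normal

train_x, test_x, train_y, test_y = train_test_split(
    paths, labels, test_size=0.20, stratify=labels, random_state=42)
```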

2.2. Model Development

The convolutional neural network architecture used for our model implementation was Keras Xception (version 2.5.0) in TensorFlow (2.5.0) [11]. This deep learning task was executed in Google Colab using up to an NVIDIA Tesla P100-PCIE GPU. In the normalization process, we normalized the X-ray intensity values of the 224 × 224 pixel images to the 0–255 range as input data, computed as:
$$x_i' = \frac{x_i - \min(x)}{\max(x) - \min(x)} \times 255$$
The convolutional layers of the Xception architecture were left completely unchanged. For transfer learning, the Xception architecture used for our model development retained its trained weights, having been pretrained on the large-scale, multi-class ImageNet dataset. The process flow of our model algorithm is exhibited in Figure 3. We applied a cross-entropy loss function to the final activation output, with a Softmax classifier with two outputs producing the binary final result. Nadamax was used as the optimizer. The batch size was set to 32. The learning rate and learning-rate schedule were left at their default values. We did not use dropout or image augmentation for our model because our image data are ample compared with pre-existing pectus excavatum image evaluation studies. We set early stopping to avoid overfitting. The hyperparameters used during model development are listed in Table 1.
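The setup described above can be sketched as follows in the Keras API. This is an illustration under stated assumptions, not the authors' exact script: the text names "Nadamax" as the optimizer, while Keras ships Nadam and Adamax, so Nadam is used here as a stand-in, and the fine-tuning policy for the pretrained layers is likewise not fully specified.

```python
# Sketch of the transfer-learning setup under stated assumptions
# (TensorFlow/Keras API; Nadam stands in for the paper's "Nadamax").
import tensorflow as tf

base = tf.keras.applications.Xception(
    weights="imagenet",          # retain the ImageNet-pretrained weights
    include_top=False,           # drop the original 1000-class head
    input_shape=(224, 224, 3),
    pooling="avg")               # global average pooling, as in Table 1

# Assumed fine-tuning policy (not fully specified in the text): keep the
# convolutional layers trainable; uncomment to freeze them instead.
# base.trainable = False

outputs = tf.keras.layers.Dense(2, activation="softmax")(base.output)
model = tf.keras.Model(base.input, outputs)

model.compile(optimizer=tf.keras.optimizers.Nadam(),  # default learning rate
              loss="categorical_crossentropy",        # cross-entropy loss
              metrics=["accuracy"])

# Early stopping: halt when accuracy fails to improve for three epochs.
early_stop = tf.keras.callbacks.EarlyStopping(monitor="accuracy", patience=3,
                                              restore_best_weights=True)

# Hypothetical training call on the 80% training split (batch size 32):
# model.fit(x_train, y_train, batch_size=32, epochs=100, callbacks=[early_stop])
```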

2.3. Model Evaluation

The primary outcome is the model's performance in distinguishing patients with PE on frontal chest X-rays in the test set. The performance is depicted as a confusion matrix. Accuracy, sensitivity, specificity and positive predictive value were computed. Receiver operating characteristic (ROC) curves were plotted using matplotlib (version 3.2.2).
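A minimal sketch of this evaluation follows, assuming scikit-learn and matplotlib with hypothetical NumPy arrays of holdout labels and predicted PE probabilities.

```python
# Minimal sketch of the evaluation (assumed scikit-learn/matplotlib APIs;
# y_true and y_score are hypothetical NumPy arrays for the holdout set).
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import auc, confusion_matrix, roc_curve

def evaluate(y_true: np.ndarray, y_score: np.ndarray, threshold: float = 0.5):
    """Compute confusion-matrix metrics and plot the ROC curve."""
    y_pred = (y_score >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)          # recall for the PE class
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)                  # positive predictive value

    fpr, tpr, _ = roc_curve(y_true, y_score)
    plt.plot(fpr, tpr, label=f"AUC = {auc(fpr, tpr):.3f}")
    plt.plot([0, 1], [0, 1], linestyle="--")   # chance line
    plt.xlabel("False positive rate")
    plt.ylabel("True positive rate")
    plt.legend()
    plt.show()
    return accuracy, sensitivity, specificity, ppv
```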

2.4. Statistical Analyses

SPSS Statistics for Windows, version 24 (IBM Corp., Armonk, NY, USA), was used for statistical analyses. The investigated parameters in our population were confirmed to be normally distributed using the Kolmogorov–Smirnov test. Continuous data are presented as mean ± standard deviation, whereas categorical data are presented as counts (%). The patient characteristics of the PE and N datasets were compared using Student's t-tests.
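For readers without SPSS, an equivalent analysis can be sketched in Python with scipy.stats; the age arrays below are synthetic stand-ins drawn from the summary statistics reported in the Results, not the study data.

```python
# Sketch of a Python equivalent of the SPSS analyses (scipy.stats). The age
# arrays are synthetic stand-ins based on the reported summary statistics.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
pe_age = rng.normal(23.4, 7.8, 520)    # hypothetical PE-group ages
n_age = rng.normal(41.0, 6.7, 667)     # hypothetical normal-group ages

# Kolmogorov-Smirnov test against a fitted normal distribution.
ks_stat, p_norm = stats.kstest(pe_age, "norm", args=(pe_age.mean(), pe_age.std()))

# Student's t-test comparing the two groups.
t_stat, p_val = stats.ttest_ind(pe_age, n_age)
print(f"K-S p = {p_norm:.3f}, t-test p = {p_val:.3g}")
```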

3. Results

A total of 2027 posteroanterior chest radiographs were utilized: 774 images in the PE dataset from 520 patients and 1253 images in the N dataset (normal group) from 667 people. The PE group comprised 84.6% men and 15.4% women, with an average examined age of 23.4 ± 7.8 years. The N group comprised 49.2% men and 50.8% women, with an average examined age of 41.0 ± 6.7 years. In total, 27.3% of patients in the PE group and 35.8% in the N group contributed more than one image because they underwent a series of chest X-ray examinations. We split our dataset into 80% training and 20% test sets. The PE group had a significantly higher Haller index than the N group (4 ± 1.2 vs. 2.5 ± 0.37) and a younger mean age (23.4 ± 7.8 vs. 41.0 ± 6.7 years). In our PE group, men predominated (84.6%); nevertheless, the mean Haller index of women was higher in both the PE and N groups (p < 0.001). The prevalence of obvious scoliosis (Cobb's angle greater than 20°), noted concomitantly during chest X-ray review, was 9.6% in the PE dataset and 4.2% in the N dataset. The shapes of the chest-wall depression in PE were not identical in our patient group; 53.1% were asymmetric (Table 2).

Model Performance

As previously mentioned, our model was implemented using Xception, a standard network architecture in the Keras deep learning library [14]. We set early stopping in our script, wherein the training process is stopped if there is no improvement in accuracy after three epochs. The highest accuracy for the model, evaluated on the test set, was realized after 28 epochs on the training set. The test set used for evaluating the performance of our model was never used during the training process. The confusion matrix indicating our model performance is shown in Figure 4, with an accuracy of 0.973 ± 0.005, precision of 0.986, recall of 0.943, F1-score of 0.964 and AUC of 0.976 ± 0.014 (Table 3; Figure 4) [16].
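As a consistency check, the reported F1-score follows directly from the reported precision and recall:

$$F_1 = \frac{2PR}{P + R} = \frac{2 \times 0.986 \times 0.943}{0.986 + 0.943} = \frac{1.8596}{1.929} \approx 0.964$$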

4. Discussion

The development of convolutional neural networks (CNNs) has driven progress in diagnostic imaging. In addition to automatic detection [17], CNNs can identify image features that are invisible to the human eye [18].
Since the release of the NIH ChestX-ray14 dataset [19], which contains 112,120 labeled frontal chest X-rays, artificial intelligence algorithms for automated diagnosis from chest radiographs have been developed. In addition to ChestX-ray14 (originally ChestX-ray8), several large open-access chest X-ray datasets worldwide may be utilized for future work, such as CheXpert from Stanford [20], MIMIC-CXR from MIT [21] and the well-known PadChest dataset from the Alicante hospital, with its large volume of chest X-ray images [22]. These datasets enable CNN architectures to be trained as automated chest X-ray diagnostic models. The most popular among these chest X-ray autodetection algorithms is CheXNet, which can replicate radiologist-level pneumonia detection [23]. The trained neural network CheXNeXt [12] can concurrently detect the 14 pathologic findings classified in the NIH ChestX-ray14 dataset on frontal-view chest radiographs. Many such algorithms have proven to be just as feasible, valid and competitive as certified radiologists for certain diagnoses [24,25]. Notably, none of these databases includes a PE category: the 14 pathology classes of ChestX-ray14, the largest and most well-known chest X-ray database, are pneumonia, pneumothorax, consolidation, atelectasis, nodule, mass, infiltration, cardiomegaly, emphysema, edema, effusion, fibrosis, hernia and pleural thickening. To our knowledge, no attempt has been made to diagnose PE from frontal chest radiographs. Previous studies have focused on diseases that require label annotation by radiologists on chest X-rays for comparison with the machine's reading [11,12,26]. Our study applies a CNN algorithm innovatively to detect PE, whose distinguishing features are beyond the resolving power of the human eye, so no manual label annotation was needed. Indeed, in real-world clinical practice, the radiologist is not obligated (and often unable) to diagnose PE from frontal chest X-rays. The results of our study reveal that a trained, established CNN architecture can distinguish PE on frontal chest X-rays, which is not possible for the human eye.
For model implementation, we selected Xception, a deep CNN architecture that replaces the Inception modules with depthwise separable convolutions and performs better than Inception V3 on the JFT dataset (an internal Google dataset) [14]. Some studies compare several CNN architectures, such as VGG, Inception, Xception, ResNet, DenseNet and EfficientNet, to evaluate machine learning performance and parameter optimization for frontal chest X-ray interpretation [26,27]. Taylor et al. used VGG16, VGG19, Inception, Xception and ResNet, and manipulated their parameters to determine the best model for detecting pneumothorax on frontal chest X-rays; the best-performing published models for predicting pneumothorax reached a validation AUC of 0.94, and the performances of these models converged after hyperparameter optimization [26]. In our study, Xception proved to be a promising CNN architecture for the detection of PE after parameter optimization, with a high accuracy of 0.973 and an AUC of 0.976. Further studies are required to train different CNN architectures for better model establishment.
As PE is a low-prevalence disease, the available data are limited. At the beginning of model development, we split the database into 70%, 15% and 15% for the training, validation and test sets, respectively, as datasets are conventionally divided. However, we could not train the model successfully with this setting because of the limited number of images. After discarding the validation set and redistributing the data into training and test sets to increase the data in both, the training model could be established. K-fold cross-validation is a modified validation method that splits the training set into K parts, each of which serves as the validation set in turn; it is a good way to evaluate the training model and tune the hyperparameters when training data are limited [28]. It can be adopted in a further experiment for comparison with the present model. We selected adolescent and adult patients for model implementation for two reasons: (i) the best time for PE repair surgery is adolescence or young adulthood [29,30]; (ii) chest X-rays confirmed as normal or as showing PE through chest CT during childhood are rare, and those available could be extreme data that disturb model training.
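A minimal sketch of the K-fold scheme suggested above, using scikit-learn's StratifiedKFold with hypothetical file lists sized to match our datasets:

```python
# Minimal sketch of K-fold cross-validation (scikit-learn API); the file list
# and labels are hypothetical, sized to match our datasets.
from sklearn.model_selection import StratifiedKFold

paths = [f"img_{i:04d}.jpg" for i in range(2027)]
labels = [1] * 774 + [0] * 1253        # 1 = PE, 0 = normal

kfold = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
for fold, (train_idx, val_idx) in enumerate(kfold.split(paths, labels)):
    # Train a fresh model on train_idx, validate on val_idx, then average the
    # fold metrics to estimate generalization and guide hyperparameter tuning.
    print(f"fold {fold}: {len(train_idx)} train / {len(val_idx)} validation")
```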
Considering the limited data for model training, the CNN architecture used for model implementation was pretrained on the ImageNet dataset in the Keras library [31]. ImageNet is an image database with more than 14 million images classified into more than 20,000 categories, with hand-annotated objects in the pictures. Ke et al. (2021) compared the transfer performance and parameter efficiency of 16 popular CNN architectures on a large chest X-ray dataset and found that ImageNet pretraining provided a statistically significant improvement in algorithm performance [27].

Limitation

The population of our study is entirely Asian and male-dominant, and it is small compared with those of other deep machine learning studies. More diverse data that include other ethnicities and more women would enhance the validity of the model and enable subgroup comparisons in further studies. In addition, evaluation datasets from other institutions are needed for external validation. As the Haller index is extensively used to quantify PE and to indicate and evaluate the results of surgery [6], we used a Haller index of 3.25 as the cutoff to separate the data into normal and disease groups for deep learning training. However, the Haller index is not necessarily related to the severity of a patient's symptoms or cardiopulmonary function. Further studies could combine the relevant symptoms and other parameters, in addition to chest X-ray images, to render the proposed diagnostic model more effective. Additionally, the image processing in our study consisted only of downsizing and simple intensity-value normalization. When processing large amounts of image information, state-of-the-art methods have been proposed to remove unwanted noise while preserving image texture and detail; wavelet denoising or denoising compressed sensing by regularization (DCSR) can be applied for image processing [32,33]. By simplifying features, such denoising may mitigate the overfitting problem in our machine learning model, which can be verified in further work.
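As an illustration of the wavelet approach, the following hedged sketch uses scikit-image's denoise_wavelet as a convenient stand-in; the cited works describe the general methods rather than this exact API, and the input here is a synthetic noisy patch.

```python
# Hedged sketch of wavelet denoising using scikit-image's denoise_wavelet as a
# stand-in for the methods cited above; the input is a synthetic noisy patch.
import numpy as np
from skimage.restoration import denoise_wavelet

rng = np.random.default_rng(0)
noisy = np.clip(0.5 + 0.1 * rng.standard_normal((224, 224)), 0.0, 1.0)

# Soft BayesShrink thresholding suppresses noise while preserving texture.
denoised = denoise_wavelet(noisy, method="BayesShrink", mode="soft",
                           rescale_sigma=True)
```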

Author Contributions

Conceptualization, Y.-L.C.; methodology, S.-T.H.; software, S.-T.H.; validation, S.-T.H., I.-S.T. and Y.-S.H.; formal analysis, S.-T.H. and I.-S.T.; investigation, Y.-J.F. and B.-C.W.; resources, Y.-L.C. and Y.-Y.H.; data curation, Y.-J.F., Y.-Y.H. and B.-C.W.; writing—original draft preparation, Y.-J.F.; writing—review and editing, Y.-J.F., I.-S.T. and Y.-S.H.; visualization, S.-T.H. and Y.-J.F.; supervision, Y.-L.C.; project administration, Y.-L.C.; funding acquisition, Y.-L.C. All authors have read and agreed to the published version of the manuscript.

Funding

Taipei Tzu Chi Hospital, Buddhist Tzu Chi Medical Foundation: TCRD-TPE-112-C1-3 (1/3).

Institutional Review Board Statement

All the radiographs and reports in our study database were obtained from the Taipei Tzu-Chi Hospital, Taipei, Taiwan, ROC, and were fully anonymized. The requirement for patient consent was waived by the Institutional Review Board. The study was approved by the Ethics Committee and Institutional Review Board (IRB No: 11-XD-109).

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

We would like to extend our sincere thanks to Bamboo Technology Ltd., Taiwan, for their technical support. This experimental study project would not have been possible without the help of their team.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Biavati, M.; Kozlitina, J.; Alder, A.C.; Foglia, R.; McColl, R.W.; Peshock, R.M.; Kelly, R.E., Jr.; Kim Garcia, C. Prevalence of pectus excavatum in an adult population-based cohort estimated from radiographic indices of chest wall shape. PLoS ONE 2020, 15, e0232575. [Google Scholar] [CrossRef] [PubMed]
  2. Jaroszewski, D.E.; Velazco, C.S.; Pulivarthi, V.; Arsanjani, R.; Obermeyer, R.J. Cardiopulmonary Function in Thoracic Wall Deformities: What Do We Really Know? Eur. J. Pediatr. Surg. 2018, 28, 327–346. [Google Scholar] [CrossRef]
  3. Kelly, R.E., Jr.; Obermeyer, R.J.; Nuss, D. Diminished pulmonary function in pectus excavatum: From denying the problem to finding the mechanism. Ann. Cardiothorac. Surg. 2016, 5, 466–475. [Google Scholar] [CrossRef] [Green Version]
  4. Lo, P.C.; Tzeng, I.S.; Hsieh, M.S.; Yang, M.C.; Wei, B.C.; Cheng, Y.L. The Nuss procedure for pectus excavatum: An effective and safe approach using bilateral thoracoscopy and a selective approach to use multiple bars in 296 adolescent and adult patients. PLoS ONE 2020, 15, e0233547. [Google Scholar] [CrossRef] [PubMed]
  5. Neviere, R.; Montaigne, D.; Benhamed, L.; Catto, M.; Edme, J.L.; Matran, R.; Wurtz, A. Cardiopulmonary response following surgical repair of pectus excavatum in adult patients. Eur. J. Cardiothorac. Surg. 2011, 40, e77–e82. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  6. Haller, J.A., Jr.; Kramer, S.S.; Lietman, S.A. Use of CT scans in selection of patients for pectus excavatum surgery: A preliminary report. J. Pediatr. Surg. 1987, 22, 904–906. [Google Scholar] [CrossRef]
  7. Ward, R.; Carroll, W.D.; Cunningham, P.; Ho, S.A.; Jones, M.; Lenney, W.; Thompson, D.; Gilchrist, F.J. Radiation dose from common radiological investigations and cumulative exposure in children with cystic fibrosis: An observational study from a single UK centre. BMJ Open 2017, 7, e017548. [Google Scholar] [CrossRef] [Green Version]
  8. de Oliveira Carvalho, P.E.; da Silva, M.V.; Rodrigues, O.R.; Cataneo, A.J. Surgical interventions for treating pectus excavatum. Cochrane Database Syst. Rev. 2014, 10, CD008889. [Google Scholar] [CrossRef]
  9. LeCun, Y.; Jackel, L.D.; Bottou, L.; Cortes, C.; Denker, J.S.; Drucker, H.; Guyon, I.; Muller, U.A.; Sackinger, E.; Simard, P. Learning algorithms for classification: A comparison on handwritten digit recognition. Neural Netw. Stat. Mech. Perspect. 1995, 261, 2. [Google Scholar]
  10. Soffer, S.; Ben-Cohen, A.; Shimon, O.; Amitai, M.M.; Greenspan, H.; Klang, E. Convolutional Neural Networks for Radiologic Images: A Radiologist’s Guide. Radiology 2019, 290, 590–606. [Google Scholar] [CrossRef]
  11. Majkowska, A.; Mittal, S.; Steiner, D.F.; Reicher, J.J.; McKinney, S.M.; Duggan, G.E.; Eswaran, K.; Cameron Chen, P.H.; Liu, Y.; Kalidindi, S.R.; et al. Chest Radiograph Interpretation with Deep Learning Models: Assessment with Radiologist-adjudicated Reference Standards and Population-adjusted Evaluation. Radiology 2020, 294, 421–431. [Google Scholar] [CrossRef] [PubMed]
  12. Rajpurkar, P.; Irvin, J.; Ball, R.L.; Zhu, K.; Yang, B.; Mehta, H.; Duan, T.; Ding, D.; Bagul, A.; Langlotz, C.P. Deep learning for chest radiograph diagnosis: A retrospective comparison of the CheXNeXt algorithm to practicing radiologists. PLoS Med. 2018, 15, e1002686. [Google Scholar] [CrossRef] [PubMed]
  13. Lai, L.; Cai, S.; Huang, L.; Zhou, H.; Xie, L. Computer-aided diagnosis of pectus excavatum using CT images and deep learning methods. Sci. Rep. 2020, 10, 20294. [Google Scholar] [CrossRef] [PubMed]
  14. Chollet, F. Xception: Deep Learning with Depthwise Separable Convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 1251–1258; arXiv:1610.02357. [Google Scholar]
  15. Bochkovskiy, A.; Wang, C.-Y.; Liao, H.-Y.M. YOLOv4: Optimal Speed and Accuracy of Object Detection. arXiv 2020, arXiv:2004.10934. [Google Scholar]
  16. Parker, C. An Analysis of Performance Measures for Binary Classifiers. In Proceedings of the 2011 IEEE 11th International Conference on Data Mining, Vancouver, BC, Canada, 11–14 December 2011; pp. 517–526. [Google Scholar]
  17. Hung, T.N.K.; Vy, V.P.T.; Tri, N.M.; Hoang, L.N.; Tuan, L.V.; Ho, Q.T.; Le, N.Q.K.; Kang, J.H. Automatic Detection of Meniscus Tears Using Backbone Convolutional Neural Networks on Knee MRI. J. Magn. Reson. Imaging 2022. [Google Scholar] [CrossRef]
  18. Le, N.Q.K.; Ho, Q.T. Deep transformers and convolutional neural network in identifying DNA N6-methyladenine sites in cross-species genomes. Methods 2022, 204, 199–206. [Google Scholar] [CrossRef]
  19. Wang, X.; Peng, Y.; Lu, L.; Lu, Z.; Bagheri, M.; Summers, R.M. ChestX-ray8: Hospital-scale Chest X-ray Database and Benchmarks on Weakly-Supervised Classification and Localization of Common Thorax Diseases. arXiv 2017, arXiv:1705.02315. [Google Scholar]
  20. Irvin, J.; Rajpurkar, P.; Ko, M.; Yu, Y.; Ciurea-Ilcus, S.; Chute, C.; Marklund, H.; Haghgoo, B.; Ball, R.; Shpanskaya, K. Chexpert: A large chest radiograph dataset with uncertainty labels and expert comparison. In Proceedings of the AAAI Conference on Artificial Intelligence, Honolulu, HI, USA, 27 January–1 February 2019; pp. 590–597. [Google Scholar]
  21. Johnson, A.E.W.; Pollard, T.J.; Berkowitz, S.J.; Greenbaum, N.R.; Lungren, M.P.; Deng, C.-y.; Mark, R.G.; Horng, S. MIMIC-CXR, a de-identified publicly available database of chest radiographs with free-text reports. Sci. Data 2019, 6, 317. [Google Scholar] [CrossRef] [Green Version]
  22. Bustos, A.; Pertusa, A.; Salinas, J.-M.; de la Iglesia-Vayá, M. PadChest: A large chest x-ray image dataset with multi-label annotated reports. arXiv 2019, arXiv:1901.07441. [Google Scholar] [CrossRef]
  23. Rajpurkar, P.; Irvin, J.; Zhu, K.; Yang, B.; Mehta, H.; Duan, T.; Ding, D.; Bagul, A.; Langlotz, C.; Shpanskaya, K.; et al. CheXNet: Radiologist-Level Pneumonia Detection on Chest X-Rays with Deep Learning. arXiv 2017, arXiv:1711.05225. [Google Scholar]
  24. Lakhani, P.; Sundaram, B. Deep learning at chest radiography: Automated classification of pulmonary tuberculosis by using convolutional neural networks. Radiology 2017, 284, 574–582. [Google Scholar] [CrossRef]
  25. Nam, J.G.; Park, S.; Hwang, E.J.; Lee, J.H.; Jin, K.N.; Lim, K.Y.; Vu, T.H.; Sohn, J.H.; Hwang, S.; Goo, J.M.; et al. Development and Validation of Deep Learning-based Automatic Detection Algorithm for Malignant Pulmonary Nodules on Chest Radiographs. Radiology 2019, 290, 218–228. [Google Scholar] [CrossRef] [Green Version]
  26. Taylor, A.G.; Mielke, C.; Mongan, J. Automated detection of moderate and large pneumothorax on frontal chest X-rays using deep convolutional neural networks: A retrospective study. PLoS Med. 2018, 15, e1002697. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  27. Ke, A.; Ellsworth, W.; Banerjee, O.; Ng, A.Y.; Rajpurkar, P. CheXtransfer: Performance and Parameter Efficiency of ImageNet Models for Chest X-Ray Interpretation. arXiv 2021, arXiv:2101.06871. [Google Scholar]
  28. Refaeilzadeh, P.; Tang, L.; Liu, H. Cross-validation. Encycl. Database Syst. 2009, 5, 532–538. [Google Scholar]
  29. Kuyama, H.; Uemura, S.; Yoshida, A. Recurrence of pectus excavatum in long-term follow-up after the Nuss procedure in young children based on the radiographic Haller index. J. Pediatr. Surg. 2020, 55, 2699–2702. [Google Scholar] [CrossRef]
  30. Gibreel, W.; Zendejas, B.; Joyce, D.; Moir, C.R.; Zarroug, A.E. Minimally Invasive Repairs of Pectus Excavatum: Surgical Outcomes, Quality of Life, and Predictors of Reoperation. J. Am. Coll. Surg. 2016, 222, 245–252. [Google Scholar] [CrossRef]
  31. Deng, J.; Dong, W.; Socher, R.; Li, L.; Kai, L.; Li, F.-F. ImageNet: A large-scale hierarchical image database. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 248–255. [Google Scholar]
  32. Ouahabi, A. A review of wavelet denoising in medical imaging. In Proceedings of the 2013 8th International Workshop on Systems, Signal Processing and their Applications (WoSSPA) IEEE, Algiers, Algeria, 12–15 May 2013; pp. 19–26. [Google Scholar]
  33. Mahdaoui, A.E.; Ouahabi, A.; Moulay, M.S. Image Denoising Using a Compressive Sensing Approach Based on Regularization Constraints. Sensors 2022, 22, 2199. [Google Scholar] [CrossRef]
Figure 1. Pectus excavatum is diagnosed by a Haller index > 3.25, calculated from chest computed tomography (CT) as TD (transverse chest diameter) divided by SVD (sternal-vertebral distance). Sample images used in our study: (a) posteroanterior-view chest X-ray of a 24-year-old patient with PE, with (b) a CT-calculated Haller index of 3.82; (c) posteroanterior-view chest X-ray of a 24-year-old patient with no abnormal radiographic findings, with (d) a CT-calculated Haller index of 2.49.
Figure 2. Image processing before input to the Xception architecture. The miniature in the dashed box shows how images were matted through YOLOv4 processing; the frame just abuts the chest cage to exclude unnecessary features. Arrow: extra mark used clinically.
Figure 3. Xception architecture diagram. Xception is a deep CNN structure containing both depthwise separable and pointwise convolutional layers, composed of an entry flow, a middle flow and an exit flow. Data first pass through the entry flow (blue part), then through the middle flow, which is repeated eight times (middle grey part), and finally through the exit flow. After the fully connected layer, the vectors pass through a Softmax function and a binary cross-entropy loss to obtain the binary classification results.
Figure 4. Diagnostic performance of the model on the test set: ROC curve (left) and confusion matrix (right).
Table 1. Hyperparameters during model training.

Parameter | Value | Explanation
Arch | Xception | Architecture: Xception
Imgshape | 224 × 224 | Image shape: downsized image pixels
Pooling | Global average | Pooling method after the convolutional filter layers
LR | default | Learning rate
LR schedule | default | Changes the learning rate during learning
Batch size | 32 | Number of samples processed each time the model is updated
Dropout | 0 | Dropout setting applied to fully connected layers
Augmentation (zoom, shear, rotation) | (0, 0, 0) | Image transformations to expand the data size
Optimizer | Nadamax | Optimization algorithm used for training
Batch normalization | no | A layer inserted before the pooling layer
Table 2. Patient characteristics of the training and test datasets.

Characteristic | PE Dataset | N Dataset | p Value
Total No. of chest X-rays | 774 | 1253 |
Mean examined age (y) | 23.4 ± 7.8 | 41.0 ± 6.7 | <0.001
Patients (n) | 520 | 667 |
Men (n) | 440 (84.6%) | 328 (49.2%) |
Women (n) | 80 (15.4%) | 339 (50.8%) |
Haller index, mean ± SD | 4 ± 1.2 | 2.5 ± 0.37 | <0.001
Men / Women * | (sex-specific values appear as images in the original) | |
Patients with (n) | | |
1 chest X-ray | 378 (72.7%) | 428 (64.2%) |
2 chest X-rays | 125 (24.0%) | 102 (15.2%) |
≥3 chest X-rays | 17 (3.3%) | 137 (20.6%) |
PE shape (n) | | |
Symmetric | 244 (46.9%) | 100 (100%) |
Asymmetric | 276 (53.1%) | NA |
Right-side depression | 176 (33.8%) | NA |
Left-side depression | 100 (19.3%) | NA |
Scoliosis (n) # | 50 (9.6%) | 28 (4.2%) |
No., n: number; y: years; PE: pectus excavatum; * p < 0.001; # Cobb's angle > 20°.
Table 3. Detection performance of PE from frontal chest X-rays.

Accuracy (95% CI) | Precision | Recall | F1-Score | AUROC (95% CI)
0.973 (0.968–0.978) | 0.986 | 0.943 | 0.964 | 0.976 (0.962–0.990)
CI: confidence interval; AUROC: area under the receiver operating characteristic curve.