Article

Fusion High-Resolution Network for Diagnosing ChestX-ray Images

Zhiwei Huang, Jinzhao Lin, Liming Xu, Huiqian Wang, Tong Bai, Yu Pang and Teen-Hang Meen

1 School of Communication and Information Engineering, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
2 School of Medical Information and Engineering, Southwest Medical University, Luzhou 646000, China
3 Chongqing Key Laboratory of Photoelectronic Information Sensing and Transmitting Technology, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
4 Chongqing Key Laboratory of Image Cognition, School of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
5 Department of Electronic Engineering, National Formosa University, Yunlin 632, Taiwan
* Authors to whom correspondence should be addressed.
† These authors contributed equally to this work.
Electronics 2020, 9(1), 190; https://doi.org/10.3390/electronics9010190
Submission received: 11 December 2019 / Revised: 15 January 2020 / Accepted: 16 January 2020 / Published: 19 January 2020
(This article belongs to the Section Artificial Intelligence)

Abstract

The application of deep convolutional neural networks (CNN) in the field of medical image processing has attracted extensive attention and demonstrated remarkable progress. An increasing number of deep learning methods have been devoted to classifying ChestX-ray (CXR) images, and most of them are based on classic pretrained models trained on global ChestX-ray images. In this paper, we are interested in diagnosing ChestX-ray images using our proposed Fusion High-Resolution Network (FHRNet). The FHRNet consists of three branch convolutional neural networks: it concatenates the global average pooling layers of the global and local feature extractors and is fine-tuned for thorax disease classification. Compared with other available methods, our experimental results show that the proposed model yields a better disease classification performance on the ChestX-ray 14 dataset, according to the receiver operating characteristic curve and area-under-the-curve score. An ablation study further confirmed the effectiveness of the global and local branch networks in improving the classification accuracy of thorax diseases.

1. Introduction

ChestX-rays (CXRs) are often included in routine physical examinations. Because it is rapid, simple and economical, X-ray photography has become the most popular method for performing chest examinations [1]. A ChestX-ray can clearly record gross lesions of the lungs, including pneumonia, masses and nodules. In current medical practice, however, CXR images are mainly interpreted by radiologists through manual reading. A senior radiologist needs at least 10 min to read a patient's ChestX-ray image and reach a diagnosis, and different doctors can diagnose the same image inconsistently, because the results are affected by the radiologist's cognitive ability, subjective experience, fatigue and other factors [2]. Computer-aided diagnosis (CAD) can compensate for these limitations by making quick, objective judgements, improving accuracy and stability and reducing misdiagnoses and missed diagnoses [3,4,5].
Recently, benefitting from deep learning techniques [6], computer vision [7] has achieved remarkable success in fields such as target detection [8], image classification [9,10] and image inpainting [11]. This progress has led to many medical image processing applications, including disease classification [12], lesion detection or segmentation [13,14,15], registration [16] and image annotation [17,18], among others [19]. Deep learning methods, particularly deep convolutional neural networks (CNN) [20,21], have quickly become the preferred approach for processing medical images [22,23]. Large-scale datasets are usually required to train deep neural networks [24]. The ChestX-ray 14 dataset, released by the National Institutes of Health (NIH) in 2017 [25], is one of the largest hospital-scale ChestX-ray datasets, and a series of studies has used it to classify thoracic diseases. Most existing deep learning methods for CXR image diagnosis [26,27,28,29,30,31,32,33] resize or downsample the original high-resolution images, eliminating most of the pixels in the hope that useful disease information is not lost. The mainstream CNN framework for diagnosing thorax disease is shown in Figure 1, in which the input size of the CXR image is normally set to 224 × 224 × 3. For example, Mao et al. [34] used deep generative classifiers to make the model architecture more robust and to reduce overfitting. Guan et al. [35] treated the entire image as a global branch, focused on local regions with disease specificity and proposed an attention guided convolutional neural network (AG-CNN) that fuses complementary information for favourable accuracy. Zhu et al. [36] proposed the deep local-global feature fusion (DLGFF) framework for multilevel semantic recognition in high spatial resolution images, which fuses local and global convolutional features together with fully connected features. Lin et al. [37] set the outputs of a trained CNN as fuzzy integral inputs and proposed evolutionary-fuzzy-integral-based CNNs (EFI-CNN) for improved classification accuracy.
The classic pretrained models, e.g., AlexNet [38], VGGNet [39], ResNet [40] and DenseNet [41], all take a CXR image resized to 224 × 224 × 3 as the input. The model encodes the image into C feature maps of size S × S and passes them to a transition layer, which reduces them to 1 × 1 × D; a pooling layer then turns this into a D-dimensional feature vector. Finally, a fully connected layer followed by a sigmoid function outputs probability scores for the 14 thorax diseases.
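As a rough sketch of this mainstream pipeline, the following PyTorch snippet uses torchvision's DenseNet-121 as a stand-in backbone; the pooling layer and 14-way sigmoid head follow the description above, while the backbone choice and layer names are illustrative assumptions rather than the exact configurations used in the cited works.

```python
import torch
import torch.nn as nn
from torchvision import models

class BaselineCXRClassifier(nn.Module):
    """Mainstream CXR pipeline: 224x224x3 input -> CNN feature maps ->
    global pooling -> fully connected layer -> sigmoid scores for 14 diseases."""
    def __init__(self, num_classes=14):
        super().__init__()
        # any classic pretrained model works here; set pretrained=True to load ImageNet weights
        backbone = models.densenet121(pretrained=False)
        self.features = backbone.features                 # outputs C feature maps of size S x S
        self.pool = nn.AdaptiveAvgPool2d(1)               # transition/pooling: S x S -> 1 x 1
        self.fc = nn.Linear(backbone.classifier.in_features, num_classes)

    def forward(self, x):                                 # x: (batch, 3, 224, 224)
        maps = self.features(x)                           # (batch, C, S, S)
        vec = self.pool(maps).flatten(1)                  # (batch, C) feature vector
        return torch.sigmoid(self.fc(vec))                # (batch, 14) probability scores

scores = BaselineCXRClassifier()(torch.randn(1, 3, 224, 224))
```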
Medical diagnosis, however, often requires finding abnormal disease information contained in only dozens of pixels within an image of millions of pixels. Artificial downsampling, or discarding pixels, therefore risks losing disease information, leading to missed diagnoses and misdiagnoses and potentially delaying the treatment of the patient.
In this paper, to take full advantage of neural network architectures and fuse image representation features, we adopt a fusion convolutional neural network and introduce a classification layer into a high-resolution network (HRNet) to improve the classification of CXR images. An illustration of HRNet is provided in Figure 2. Specifically, the four high-resolution feature maps are first fed into a bottleneck, which increases the number of output channels to 64, 128, 256 and 512. The highest-resolution representations are then downsampled by a 2-stride 3 × 3 convolution layer, resulting in 128 channels, and added to the representations of the second-highest resolution; this process is conducted twice more, yielding 256 and then 512 channels at the lowest resolution. Finally, the 512 channels are transformed into 1024 channels through a 1 × 1 convolution, followed by a global average pooling operation. The resulting 1024-dimensional representation is fed into the classifier [42].
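The snippet below is a simplified sketch of this classification head only: the HRNet backbone that produces the four multiresolution feature maps is omitted, the input channel counts (32, 64, 128, 256) are assumed for illustration, and the remaining channel numbers follow the description above.

```python
import torch
import torch.nn as nn

class HRNetClsHead(nn.Module):
    """Sketch of the classification head: bottlenecks raise the four multiresolution
    maps to 64/128/256/512 channels, strided 3x3 convolutions repeatedly merge each
    level into the next lower resolution, a 1x1 convolution expands 512 -> 1024
    channels, and global average pooling feeds the classifier."""
    def __init__(self, in_channels=(32, 64, 128, 256), num_classes=14):
        super().__init__()
        out_channels = (64, 128, 256, 512)
        self.bottlenecks = nn.ModuleList(
            [nn.Conv2d(c_in, c_out, kernel_size=1) for c_in, c_out in zip(in_channels, out_channels)])
        self.downsamples = nn.ModuleList(
            [nn.Conv2d(out_channels[i], out_channels[i + 1], kernel_size=3, stride=2, padding=1)
             for i in range(3)])
        self.final_conv = nn.Conv2d(512, 1024, kernel_size=1)
        self.fc = nn.Linear(1024, num_classes)

    def forward(self, feats):                       # feats: four maps, highest resolution first
        x = [b(f) for b, f in zip(self.bottlenecks, feats)]
        y = x[0]
        for i in range(3):                          # merge into the next lower resolution
            y = self.downsamples[i](y) + x[i + 1]
        y = self.final_conv(y)                      # 512 -> 1024 channels
        y = y.mean(dim=(2, 3))                      # global average pooling
        return torch.sigmoid(self.fc(y))            # probabilities for 14 diseases

# toy multiresolution maps for a 224 x 224 input (strides 4, 8, 16, 32)
feats = [torch.randn(1, c, s, s) for c, s in zip((32, 64, 128, 256), (56, 28, 14, 7))]
probs = HRNetClsHead()(feats)
```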
In summary, our contributions in this work are as follows: First, we propose the fusion high-resolution network as a feature extractor, which produces competitive results compared with those of other advanced methods. Second, we introduce a fusion CNN that diagnoses ChestX-ray images by combining local and global cues. The FHRNet improves the performance of thorax disease classification by reducing the impact of noise and highlighting lung regions. Third, we conduct a comparative experiment based on the ChestX-ray 14 dataset. The classification results show that the FHRNet model achieves better performance than other available approaches.

2. Method

2.1. Dataset

Wang et al. [25] released the ChestX-ray 14 dataset in October 2017, and it is the largest ChestX-ray dataset available to date. The ChestX-ray 14 dataset includes 112,120 CXR images from 30,805 patients. Every CXR image is 1024 × 1024 pixels and is saved in PNG format with 8-bit greyscale values. Each image is labelled with one or more of 14 thorax diseases; the ground truth labels are mined from the corresponding radiology reports through natural language processing (NLP), and the label accuracy is estimated to be greater than ninety percent. Among the 112,120 ChestX-ray images, 51,708 images contain one or more diseases, and the remaining 60,412 images are considered normal and labelled “No Finding”. An image example is shown in Figure 3.
The ChestX-ray 14 dataset supports multilabel classification and is large enough for deep learning; therefore, it was used to evaluate and validate the FHRNet model. In this experiment, we divided the whole dataset into a training set (75,708 images), a validation set (10,816 images) and a test set (25,596 images) at the patient level, so that all images from the same patient appear in only one of the training, validation and test sets.
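A minimal sketch of such a patient-level split is shown below. It assumes the NIH metadata file (Data_Entry_2017.csv) with its "Image Index" and "Patient ID" columns; the split fractions are placeholders chosen to roughly match the reported set sizes.

```python
import numpy as np
import pandas as pd

def patient_level_split(csv_path, frac=(0.7, 0.1, 0.2), seed=0):
    """Assign every image of a patient to exactly one of train/val/test."""
    df = pd.read_csv(csv_path)                        # NIH metadata: 'Image Index', 'Patient ID', ...
    patients = df["Patient ID"].unique()
    rng = np.random.default_rng(seed)
    rng.shuffle(patients)
    n_train = int(frac[0] * len(patients))
    n_val = int(frac[1] * len(patients))
    groups = {
        "train": set(patients[:n_train]),
        "val": set(patients[n_train:n_train + n_val]),
        "test": set(patients[n_train + n_val:]),
    }
    # image filenames per split; no patient appears in more than one split
    return {name: df[df["Patient ID"].isin(ids)]["Image Index"].tolist()
            for name, ids in groups.items()}

# splits = patient_level_split("Data_Entry_2017.csv")
```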

2.2. Network Framework

As shown in Figure 4, the proposed FHRNet has three branches: the local feature extractor, the global feature extractor and the feature fusion module. The local and global feature extractors are disease classification networks that output disease classification probabilities from their corresponding inputs. The global feature extractor takes the entire CXR image as input, whereas the input of the local feature extractor is a small lung region cropped using a mask inferred from the global feature extractor. Two HRNets are adapted to obtain the distinguishing features of the local lung region and the whole image, respectively.
Each HRNet is followed by a global average pooling layer, a fully connected layer, a sigmoid layer and a loss function. The feature fusion module concatenates the outputs of the two global average pooling layers after feature extraction and is then fine-tuned to make the final classification prediction.
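A minimal sketch of the feature fusion module is given below, assuming each HRNet branch already exposes a pooled 1024-dimensional feature vector as described above; the layer sizes are illustrative rather than the exact configuration.

```python
import torch
import torch.nn as nn

class FusionHead(nn.Module):
    """Concatenate the pooled features of the global and local branches, then
    fine-tune a fully connected + sigmoid classifier on the fused vector."""
    def __init__(self, feat_dim=1024, num_classes=14):
        super().__init__()
        self.fc = nn.Linear(2 * feat_dim, num_classes)

    def forward(self, global_feat, local_feat):        # each: (batch, feat_dim)
        fused = torch.cat([global_feat, local_feat], dim=1)
        return torch.sigmoid(self.fc(fused))

# the global and local HRNet branches would each supply a pooled 1024-d vector
probs = FusionHead()(torch.randn(2, 1024), torch.randn(2, 1024))
```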

2.3. Network Structure

Building a model for classifying CXR images based on deep learning from multibranch images usually involves three steps: feature extraction, feature fusion and classification prediction. These steps are described below.
Feature extraction from multibranch images. How to better extract features from multiview medical images is one of the main research topics in deep-learning-based medical image processing [43]. Although a variety of handcrafted features, such as HOG, LBP and SIFT, have been extracted from multiview medical images, classification based on these features can suffer from incompatibility problems; that is, the extracted features cannot effectively classify specific organs or diseases. Feature extraction based on a CNN addresses these problems, and with the continuous development of attention mechanisms, feature extraction from multiview medical images has continued to improve [44].
Taking the feature extraction network f as an example (an HRNet is used as the extractor here), suppose that the network can be expressed as follows:
$f(x, \theta) = W^{L} a^{(L-1)}\left( W^{(L-1)} a^{(L-2)}\left( \cdots W^{1} x + b^{1} \cdots \right) + b^{(L-1)} \right) + b^{L}$
in which $\theta := \{W^{1}, b^{1}, \ldots, W^{L-1}, b^{L-1}, W^{L}, b^{L}\}$ denotes the parameters of network $f$, $a^{l}$ ($1 \le l < L$) represents the activation function of the $l$th layer and $x$ represents the input of the network $f$. Here, $f(x, \theta)$ denotes the output before the activation function of the last layer is applied [45]. The overall output of the network is as follows:
$\mathrm{Output} = A(f(x, \theta))$
in which A represents the activation function of feature extraction network f [46].
As shown in Figure 2, the input of feature extraction network $f$ includes the global input image $x_g$ and the local input image $x_l$, where the $i$th local input image is obtained by masking, $x_l^i = m^i \odot x_g^i$. Therefore, according to the definition of the feature network, the global and local features can be expressed as follows:
$O_g = A(f_g(x_g, \theta))$
$O_l = A(f_l(x_g \odot m, \theta))$
Feature fusion from multibranch images. To use the images of different branches for classification prediction, it is necessary to construct a unified fusion feature that shares the features of the different branches. After the deep neural networks extract features from the different branch images, the shared fusion feature can be obtained by directly combining the branch features,
$O = w_1 A(f_g(x_g, \theta)) + w_2 A(f_l(x_l, \theta))$
in which $w_i$ ($1 \le i \le 2$) represents the weight of the feature extracted from the $i$th network in the fusion feature [47].
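The sketch below mirrors the three expressions above: the global feature, the masked local feature and their weighted combination. The extractors f_g and f_l, the mask and the weights are placeholders for the trained components.

```python
import torch

def fuse_branch_features(f_g, f_l, x_g, mask, A=torch.sigmoid, w1=0.5, w2=0.5):
    """Weighted fusion of the global and local branch features, as in the
    expressions above; w1 and w2 would normally be learned during training."""
    o_g = A(f_g(x_g))               # O_g = A(f_g(x_g, theta))
    o_l = A(f_l(x_g * mask))        # O_l = A(f_l(x_g (*) m, theta)): mask keeps the lung region
    return w1 * o_g + w2 * o_l      # O = w1 * O_g + w2 * O_l

# toy usage with stand-in extractors (a flatten layer in place of the two HRNets)
f_g = f_l = torch.nn.Flatten()
x_g = torch.randn(1, 1, 8, 8)
mask = (torch.rand(1, 1, 8, 8) > 0.5).float()
fused = fuse_branch_features(f_g, f_l, x_g, mask)
```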
The features extracted from the three branches inevitably contain some redundancy. An attention mechanism can be used to reduce this redundancy; that is, adding a random mask after the last activation layer to remove redundant features can increase the classification accuracy.
Classification prediction. At present, the prediction of lung disease is a multiclassification task that usually adopts the softmax classification function. The classification function is expressed as
$[p_1, p_2, \ldots, p_{14}] = \mathrm{Softmax}(W \times O)$
in which $O$ represents the fusion feature, $W$ represents the mapping matrix that maps the high-dimensional fusion feature to a low-dimensional probability distribution representing the disease information and $p_i$ ($1 \le i \le 14$) represents the probability of identifying the $i$th disease [48].
To dynamically determine the weights of the three features and further improve the prediction accuracy, a global–local consistency classification method can be used; that is, three classifiers, for the global, local and fusion features, are trained and alternately optimised for classification prediction,
$[p_1^1, p_2^1, \ldots, p_{14}^1] = \mathrm{Softmax}(W \times O_g)$
$[p_1^2, p_2^2, \ldots, p_{14}^2] = \mathrm{Softmax}(W \times O_l)$
$[p_1^3, p_2^3, \ldots, p_{14}^3] = \mathrm{Softmax}(W \times O)$
in which $p_i^j$ ($1 \le i \le 14$, $1 \le j \le 3$) represents the probability that the $j$th network predicts the $i$th disease. According to the global–local consistency mechanism, the probabilities of a patient suffering from the 14 diseases are $p_1^1 \cdot p_1^2$, $p_2^1 \cdot p_2^2$, …, $p_{14}^1 \cdot p_{14}^2$. Because these products are small, the final diagnosis probability is processed by a logarithmic function to make it more useful to doctors.
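A small sketch of this global–local consistency step is given below; the mapping matrices are random placeholders for the trained classifier weights, and the softmax, probability product and logarithm follow the expressions above.

```python
import torch
import torch.nn.functional as F

def global_local_consistency(o_g, o_l, o_fused, W_g, W_l, W_f):
    """Three softmax classifiers (global, local, fused); the final score for each
    disease multiplies the global and local probabilities and is log-transformed
    because the raw products are very small."""
    p_global = F.softmax(o_g @ W_g, dim=-1)      # [p_1^1, ..., p_14^1]
    p_local = F.softmax(o_l @ W_l, dim=-1)       # [p_1^2, ..., p_14^2]
    p_fused = F.softmax(o_fused @ W_f, dim=-1)   # [p_1^3, ..., p_14^3]
    log_scores = torch.log(p_global * p_local)   # log(p_i^1 * p_i^2) for each disease i
    return log_scores, p_fused

feat = torch.randn(1, 1024)                      # pooled branch feature (toy values)
W = torch.randn(1024, 14)                        # placeholder mapping matrix to 14 diseases
log_scores, p_fused = global_local_consistency(feat, feat, feat, W, W, W)
```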

3. Experimental Setting

In all pretrained models, input images are expected to be normalised in the same way, for example, by creating a minibatch of three-channel RGB images (3 × H × W) in which H and W are expected to be no less than 224. All images in the ChestX-ray 14 dataset are 1024 × 1024, with 8-bit greyscale values. We split the dataset into a training set (78,468 images of 21,528 patients), a validation set (11,219 images of 3090 patients) and a test set (22,433 images of 6187 patients), with no patient overlapping among sets. We converted the greyscale images to three-channel RGB images, centre-cropped them to a 224 × 224 resolution and then normalised them with the means ([0.485, 0.456, 0.406]) and standard deviations ([0.229, 0.224, 0.225]). We trained the model with the Adam optimiser, setting the initial learning rate and batch size to 1.0 × 10−4 and 32, respectively, and completed the training procedure after 50 epochs. After each epoch, we validated and tested the model and saved the weights with the best classification performance. For the multilabel classification task, we used the receiver operating characteristic (ROC) curve and area-under-the-curve (AUC) score to assess the classification performance; the model weights with the best AUC scores on the validation set were saved and used to extract representative features. In our experiment, we plotted the ROC curve for each thorax disease and calculated the AUC scores for the 14 diseases. The FHRNet was implemented with the PyTorch 1.0 framework in Python 3.6 on an Ubuntu 16.04 server, and the model was trained, validated and tested on an 8-core CPU and four TITAN V GPUs.
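The following PyTorch sketch reproduces this training configuration with stand-in data and a toy model, so that the preprocessing, optimiser and schedule described above are explicit; the actual experiments use the FHRNet and the ChestX-ray 14 images in place of the placeholders.

```python
import torch
from torch import nn, optim
from torch.utils.data import DataLoader, TensorDataset
from torchvision import transforms

# preprocessing described above: greyscale -> three-channel RGB, centre crop to 224 x 224,
# normalisation with the stated means and standard deviations (applied per PIL image when loading)
preprocess = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# stand-in model and data; in the paper these are the FHRNet and the ChestX-ray 14 images
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 224 * 224, 14), nn.Sigmoid())
loader = DataLoader(TensorDataset(torch.randn(64, 3, 224, 224),
                                  torch.randint(0, 2, (64, 14)).float()),
                    batch_size=32, shuffle=True)            # batch size 32

criterion = nn.BCELoss()                                    # multilabel loss on sigmoid outputs
optimizer = optim.Adam(model.parameters(), lr=1e-4)         # Adam, initial learning rate 1e-4

for epoch in range(50):                                     # 50 epochs
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    # after each epoch: compute per-disease AUC on the validation set and keep the best weights
```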

4. Results

The classification results for the existing methods and the FHRNet based on the ChestX-ray 14 dataset are presented in terms of the AUC scores in Table 1. The obtained ROC curves of the FHRNet for each of 14 thorax diseases are shown in Figure 5.
Based on published works by other researchers, including Wang et al. [25], Yao et al. [49] and Gündel et al. [50], we recorded and compared their reported AUC scores with those of the FHRNet on the ChestX-ray 14 dataset. We found that the FHRNet achieved the expected effect and provided a superior classification performance. A numerical comparison of the results for the 14 classes of thorax diseases and the average AUC of each method is shown in Table 1. Compared with Wang et al. [25], the proposed method increased the average AUC by 8.98% (from 0.7451 to 0.812); notably, for “Mass”, the relative increase in the AUC score reached 19.3% (from 0.6933 to 0.827).
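These percentages are relative increases over the corresponding baseline scores, which a one-line check confirms:

```python
# relative AUC increases reported above (baseline values from Wang [25] in Table 1)
avg_base, avg_ours = 0.7451, 0.812
mass_base, mass_ours = 0.6933, 0.827
print(f"average AUC: +{(avg_ours - avg_base) / avg_base:.2%}")     # ≈ +8.98%
print(f"'Mass' AUC:  +{(mass_ours - mass_base) / mass_base:.2%}")  # ≈ +19.3%
```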
From Table 1, a horizontal comparison shows that the existing methods and our model obtained different classification effects, even for the same thorax disease. Among the 14 diseases, 10 achieved their best AUC scores with the FHRNet model: “Atelectasis”, “Cardiomegaly”, “Effusion”, “Infiltration”, “Mass”, “Pneumothorax”, “Consolidation”, “Emphysema”, “Fibrosis” and “Hernia”. Table 1 also shows that the FHRNet model achieved the best average AUC score.
A vertical comparison shows that the existing methods and our model obtain different classification effects for the 14 thorax diseases. The most accurately identified thorax disease was “Hernia”, with an AUC score of 0.916, and the least accurately identified disease was “Pneumonia”, with an AUC score of 0.703.
We also plotted the ROC curves of the FHRNet for each of the 14 thorax diseases, as shown in Figure 5. The ROC curve of “Infiltration” is flatter than that of “Hernia”, which means that the classification of “Infiltration” was not as good as that of “Hernia”.

5. Discussion

The experimental results show that the proposed FHRNet provides excellent disease classification performance. Our method can obtain satisfactory results because two significant structures are introduced: (1) a high-resolution network is adopted as a feature extractor to exchange image representation features and (2) the local and global branches of the ChestX-ray images are introduced to obtain the most useful features. To illustrate the effectiveness of local and global branches in our method, we conducted a further ablation study that correspondingly yielded different AUC scores. The results of the ablation study of local and global branches are shown in Table 2.
We developed a three-branch convolutional neural network for diagnosing CXR images in this study. The fusion branch used two high-resolution networks to adaptively concentrate on pathologically abnormal regions, which thus improved the classification accuracy. The model achieved the effective utilisation of the fusion features extracted from both local lung region images and entire ChestX-ray images. If the fusion branch were to be eliminated, the performance of the FHRNet model would degrade. With reasonable confidence, we conclude that the fusion branch plays an important role in the FHRNet model. Among the existing methods that were trained only on the ChestX-ray 14 dataset, the FHRNet achieved good AUC scores for the 14 thorax diseases.

6. Conclusions

In this work, an innovative architecture, termed the FHRNet, was applied to classify 14 thorax diseases and diagnose ChestX-ray images. Unlike most previous networks, the FHRNet consists of four parallel high-to-low resolution subnetworks and repeatedly exchanges information via multiscale fusion. Two HRNets were trained as the local and global feature extraction branches, and the feature fusion module concatenated their outputs and was fine-tuned for the final prediction. Our experimental results on the ChestX-ray 14 dataset demonstrated the effectiveness and accuracy of the FHRNet model. Additional ablation studies showed that the local and global feature extraction branches affect the classification performance and improve the classification effect after fusion.
In our future work, we will focus on the pixel-level segmentation of the lung region from CXR images to further improve the classification performance. We will then train the model with more than 180,000 images from the PLCO dataset [51] as extra training data, to apply the model in computer-aided diagnosis.

Author Contributions

Z.H. designed and implemented the prototype, executed the experiments, analysed the results and wrote the article. L.X. analysed the auxiliary diagnosis process from a deep learning perspective and executed the experiments. H.W. and T.B. performed the literature review. J.L., Y.P. and T.-H.M. corrected the grammar, logical structure and typos. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded by the National Natural Science Foundation of China (61671091, 61971079), by the Science and Technology Research Program of Chongqing Municipal Education Commission (KJQN201800614), by the Chongqing Research Program of Basic Research and Frontier Technology (cstc2017jcyjBX0057, cstc2017jcyjAX0328), by the Scientific Research Foundation of CQUPT (A2016-73), by the Key Research Project of Sichuan Provincial Department of Education (18ZA0514) and by the Joint Project of LZSTB-SWMU (2015LZCYD-S08(1/5)).

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
FHRNet   Fusion High-Resolution Network
CXR      ChestX-ray
NLP      Natural Language Processing
CAD      Computer-Aided Diagnosis
CNN      Convolutional Neural Network
NIH      National Institutes of Health
ROC      Receiver Operating Characteristic
AG-CNN   Attention Guided Convolutional Neural Network
AUC      Area Under the Curve
LBP      Local Binary Pattern
HOG      Histogram of Oriented Gradients
SIFT     Scale-Invariant Feature Transform

References

  1. Xu, S.J.; Wu, H.; Bie, R.F. CXNet-m1: Anomaly detection on chest X-rays with image-based deep learning. IEEE Access 2018, 7, 4466–4477. [Google Scholar] [CrossRef]
  2. Shen, D.G.; Wu, G.R.; Suk, H.I. Deep Learning in Medical Image Analysis. Annu. Rev. Biomed. Eng. 2017, 19, 221–248. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  3. Lee, J.G.; Jun, S.; Cho, Y.W. Deep Learning in Medical Imaging: General Overview. Korean J. Radiol. 2017, 4, 570–584. [Google Scholar] [CrossRef] [Green Version]
  4. Qin, C.L.; Yao, D.M.; Shi, Y.H. Computer-aided detection in chest radiography based on artificial intelligence: A survey. Biomed. Eng. Online 2018, 17, 113. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  5. Bertrand, H.; Hashir, M.; Cohen, J.P. Do Lateral Views Help Automated Chest X-ray Predictions. arXiv 2019, arXiv:1904.08534. [Google Scholar]
  6. Wang, H.Y.; Xia, Y. ChestNet: A Deep Neural Network for Classification of Thoracic Diseases on Chest Radiography. arXiv 2018, arXiv:1807.03058. [Google Scholar]
  7. Chawla, A.; Lim, T.C.; Shikhare, S.N.; Munk, P.L.; Peh, W.C. Computer vision syndrome: Darkness under the shadow of light. Can. Assoc. Radiol. J. 2019, 70, 5–9. [Google Scholar] [CrossRef] [Green Version]
  8. Chen, J.X.; Mao, Z.J.; Zheng, R.; Huang, Y.F.; He, L.F. Feature selection of deep learning models for EEG-based RSVP target detection. IEICE Trans. Inf. Syst. 2019, 4, 836–844. [Google Scholar] [CrossRef] [Green Version]
  9. Yi, X.; Walia, E.; Babyn, P. Generative adversarial network in medical imaging: A review. Med. Image Anal. 2019, 58, 101552. [Google Scholar] [CrossRef] [Green Version]
  10. Xia, W.; Ma, C.H.; Liu, J.B.; Liu, S.B.; Chen, F. High-Resolution Remote Sensing Imagery Classification of Imbalanced Data Using Multistage Sampling Method and Deep Neural Networks. Remote Sens. 2019, 11, 2523. [Google Scholar] [CrossRef] [Green Version]
  11. Sun, T.Z.; Fang, W.D.; Chen, W.; Yao, Y.X.; Bi, F.M.; Wu, B.L. High-Resolution Image Inpainting Based on Multi-Scale Neural Network. Electronics 2019, 8, 1370. [Google Scholar] [CrossRef] [Green Version]
  12. Shen, Y.; Gao, M. Dynamic routing on deep neural network for thoracic disease classification and sensitive area localization. In International Workshop on Machine Learning in Medical Imaging; Springer International Publishing: Cham, Switzerland, 2018; pp. 389–397. [Google Scholar]
  13. Rajpurkar, P.; Irvin, J.; Zhu, K. Chexnet: Radiologist-level pneumonia detection on chest x-rays with deep learning. arXiv 2017, arXiv:1711.05225. [Google Scholar]
  14. Tang, Y.B.; Tang, Y.X.; Xiao, J. XLSor: A Robust and Accurate Lung Segmentor on Chest X-Rays Using Criss-Cross Attention and Customized Radiorealistic Abnormalities Generation. arXiv 2019, arXiv:1904.09229. [Google Scholar]
  15. Subramanian, V.; Wang, H.; Wu, J.T. Automated detection and type classification of central venous catheters in chest X-rays. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Lima, Peru, 4–8 October 2019. [Google Scholar]
  16. Aviles-Rivero, A.I.; Papadakis, N.; Li, R.T. GraphXNet-Chest X-Ray Classification Under Extreme Minimal Supervision. arXiv 2019, arXiv:1907.10085. [Google Scholar]
  17. Gooßen, A.; Deshpande, H.; Harder, T. Deep Learning for Pneumothorax Detection and Localization in Chest Radiographs. arXiv 2019, arXiv:1907.07324. [Google Scholar]
  18. Shin, H.C.; Roberts, K.; Lu, L. Learning to read chest X-rays: Recurrent neural cascade model for automated image annotation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 2497–2506. [Google Scholar]
  19. Litjens, G.; Kooi, T.; Bejnordi, B.E. A survey on deep learning in medical image analysis. Med. Image Anal. 2017, 42, 60–88. [Google Scholar] [CrossRef] [Green Version]
  20. Tajbakhsh, N.; Shin, J.Y.; Gurudu, S.R. Convolutional neural networks for medical image analysis: Full training or fine tuning. IEEE Trans. Med. Imaging 2016, 35, 1299–1312. [Google Scholar] [CrossRef] [Green Version]
  21. Tan, Z.; Yue, P.; Di, L.; Tang, J. Deriving High Spatiotemporal Remote Sensing Images Using Deep Convolutional Network. Remote Sens. 2018, 10, 1066. [Google Scholar] [CrossRef] [Green Version]
  22. Livieris, I.E.; Kanavos, A.; Tampakas, V.; Pintelas, P. An Ensemble SSL Algorithm for Efficient Chest X-Ray Image Classification. J. Imaging 2018, 4, 95. [Google Scholar] [CrossRef] [Green Version]
  23. Heo, S.J.; Kim, Y.; Yun, S.; Lim, S.S.; Kim, J.; Nam, C.M.; Park, E.C.; Jung, I.; Yoon, J.H. Deep Learning Algorithms with Demographic Information Help to Detect Tuberculosis in Chest Radiographs in Annual Workers’ Health Examination Data. Int. J. Environ. Res. Public Health 2019, 16, 250. [Google Scholar] [CrossRef] [Green Version]
  24. Jing, L.L.; Tian, Y. Self-supervised visual feature learning with deep neural networks: A survey. arXiv 2019, arXiv:1902.06162. [Google Scholar]
  25. Wang, X.; Peng, Y.; Lu, L.; Lu, Z.; Bagheri, M. Chestx-ray8: Hospital-scale chest x-ray database and benchmarks on weakly-supervised classification and localization of common thorax diseases. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 3462–3471. [Google Scholar]
  26. Liu, H.; Wang, L.; Nan, Y.; Jin, F.; Wang, Q. SDFN: Segmentation-based Deep Fusion Network for Thoracic Disease Classification in Chest X-ray Images. Comput. Med. Imaging Graph. 2019, 75, 66–73. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  27. Rajpurkar, P.; Irvin, J.; Ball, R.L.; Zhu, K.; Yang, B. Deep learning for chest radiograph diagnosis: A retrospective comparison of the CheXNeXt algorithm to practicing radiologists. PLoS Med. 2018, 15, e1002686. [Google Scholar] [CrossRef] [PubMed]
  28. Zhou, B.; Li, Y.; Wang, J. A weakly supervised adaptive densenet for classifying thoracic diseases and identifying abnormalities. arXiv 2018, arXiv:1807.01257. [Google Scholar]
  29. Kumar, P.; Grewal, M.; Srivastava, M.M. Boosted cascaded convnets for multilabel classification of thoracic diseases in chest radiographs. In Proceedings of the International Conference Image Analysis and Recognition, Montreal, QC, Canada, 27–29 June 2018; Springer International Publishing: Cham, Switzerland, 2018; pp. 546–552. [Google Scholar]
  30. Kovalev, V.; Kazlouski, S. Examining the Capability of GANs to Replace Real Biomedical Images in Classification Models Training. arXiv 2019, arXiv:1904.08688. [Google Scholar]
  31. Burwinkel, H.; Kazi, A.; Vivar, G. Adaptive image-feature learning for disease classification using inductive graph networks. arXiv 2019, arXiv:1905.03036. [Google Scholar]
  32. Guendel, S.; Ghesu, F.C.; Grbic, S. Multi-task Learning for Chest X-ray Abnormality Classification on Noisy Labels. arXiv 2019, arXiv:1905.06362. [Google Scholar]
  33. Tang, Y.X.; Wang, X.S.; Harrison, A.P.; Lu, L.; Xiao, J. Attention-guided curriculum learning for weakly supervised classification and localization of thoracic diseases on chest radiographs. In Proceedings of the International Workshop on Machine Learning in Medical Imaging, Granada, Spain, 10 September 2018; Springer International Publishing: Cham, Switzerland, 2018; pp. 249–258. [Google Scholar]
  34. Mao, C.; Yao, L.; Pan, Y.; Luo, Y.; Zeng, Z. Deep Generative Classifiers for Thoracic Disease Diagnosis with Chest X-ray Images. In Proceedings of the 2018 IEEE International Conference on Bioinformatics and Biomedicine, Madrid, Spain, 3–6 December 2018; pp. 1209–1214. [Google Scholar]
  35. Guan, Q.; Huang, Y.; Zhong, Z.; Zheng, Z.; Zheng, L.; Yang, Y. Diagnose like a radiologist: Attention guided convolutional neural network for thorax disease classification. arXiv 2018, arXiv:1801.09927. [Google Scholar]
  36. Zhu, Q.; Zhong, Y.; Liu, Y.; Zhang, L.; Li, D. A Deep-Local-Global Feature Fusion Framework for High Spatial Resolution Imagery Scene Classification. Remote Sens. 2018, 10, 568. [Google Scholar]
  37. Lin, C.J.; Lin, C.H.; Sun, C.C.; Wang, S.H. Evolutionary-Fuzzy-Integral-Based Convolutional Neural Networks for Facial Image Classification. Electronics 2019, 8, 997. [Google Scholar] [CrossRef] [Green Version]
  38. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. In Proceedings of the Advances in Neural Information Processing Systems 25 (NIPS 2012), Lake Tahoe, NV, USA, 3–6 December 2012; pp. 1097–1105. [Google Scholar]
  39. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
  40. Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556. [Google Scholar]
  41. Huang, G.; Liu, Z.; Van Der Maaten, L.; Weinberger, K.Q. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 4700–4708. [Google Scholar]
  42. Sun, K.; Xiao, B.; Liu, D.; Wang, J.D. Deep High-Resolution Representation Learning for Human Pose Estimation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 16–20 June 2019. [Google Scholar]
  43. Behzadi-khormouji, H.; Rostami, H.; Salehi, S. Deep learning, reusable and problem-based architectures for detection of consolidation on chest X-ray images. Comput. Methods Programs Biomed. 2020, 185, 105162. [Google Scholar] [CrossRef] [PubMed]
  44. Yang, H.; Xu, X.Y.; Kargoll, B.; Neumann, I. An automatic and intelligent optimal surface modeling method for composite tunnel structures. Compos. Struct. 2019, 208, 702–710. [Google Scholar] [CrossRef]
  45. Yang, H.; Xu, X.Y. Multi-sensor technology for B-spline modelling and deformation analysis of composite structures. Compos. Struct. 2019, 224, 111000. [Google Scholar] [CrossRef]
  46. Xu, X.Y.; Yang, H. Intelligent crack extraction and analysis for tunnel structures with terrestrial laser scanning measurement. Adv. Mech. Eng. 2019, 11, 1687814019872650. [Google Scholar] [CrossRef]
  47. Xu, X.Y.; Augello, R.; Yang, H. The generation and validation of a CUF-based FEA model with laser-based experiments. Mech. Adv. Mater. Struct. 2019, 1–8. [Google Scholar] [CrossRef]
  48. Spinks, G.; Moens, M.F. Justifying diagnosis decisions by deep neural networks. J. Biomed. Inform. 2019, 96, 103248. [Google Scholar] [CrossRef] [Green Version]
  49. Yao, L.; Poblenz, E.; Dagunts, D.; Covington, B.; Bernard, D.; Lyman, K. Learning to diagnose from scratch by exploiting dependencies among labels. arXiv 2017, arXiv:1710.10501. [Google Scholar]
  50. Gündel, S.; Grbic, S.; Georgescu, B.; Liu, S.; Maier, A.; Comaniciu, D. Learning to recognize abnormalities in chest x-rays with location-aware dense networks. In Proceedings of the Iberoamerican Congress on Pattern Recognition, Madrid, Spain, 7–10 November 2018; Springer International Publishing: Cham, Switzerland, 2018; pp. 757–765. [Google Scholar]
  51. Gohagan, J.K.; Prorok, P.C.; Hayes, R.B.; Kramer, B.S. Prostate, Lung, Colorectal and Ovarian (PLCO) cancer screening trial of the National Cancer Institute: History, organization, and status. Control. Clin. Trials 2000, 6, 251–272. [Google Scholar] [CrossRef]
Figure 1. The mainstream framework of a convolutional neural network for diagnosing thorax disease.
Figure 2. An architectural illustration of the proposed Fusion High-Resolution Network (FHRNet). The FHRNet is composed of four parallel high-to-low resolution subnetworks that repeatedly exchange information across multiresolution subnetworks. The vertical and horizontal directions correspond to the scale of the feature maps and the depth of the network, respectively.
Figure 3. Example of images in the ChestX-ray 14 dataset.
Figure 4. The overall framework of the FHRNet.
Figure 5. The receiver operating characteristic (ROC) curves of the FHRNet, for each of the 14 thorax diseases.
Table 1. The area-under-the-curve (AUC) scores of existing methods and the FHRNet based on the ChestX-ray 14 dataset. The scores that displayed a relative increase are marked in bold.
Thorax Disease        Wang [25]   Yao [49]   Gündel [50]   FHRNet
Atelectasis           0.7003      0.733      0.767         0.794
Cardiomegaly          0.8100      0.856      0.883         0.902
Effusion              0.7585      0.806      0.828         0.839
Infiltration          0.6614      0.673      0.709         0.714
Mass                  0.6933      0.718      0.821         0.827
Nodule                0.6687      0.777      0.758         0.727
Pneumonia             0.6580      0.684      0.731         0.703
Pneumothorax          0.7993      0.805      0.846         0.848
Consolidation         0.7032      0.711      0.745         0.773
Edema                 0.8052      0.806      0.835         0.834
Emphysema             0.8330      0.842      0.895         0.911
Fibrosis              0.7859      0.743      0.818         0.824
Pleural Thickening    0.6835      0.724      0.761         0.752
Hernia                0.8717      0.775      0.896         0.916
Average               0.7451      0.761      0.807         0.812
Table 2. The ablation study of local and global branches.
Thorax Disease        Global Fusion   Local Fusion   FHRNet
Atelectasis           0.778           0.783          0.794
Cardiomegaly          0.879           0.894          0.902
Effusion              0.822           0.828          0.839
Infiltration          0.703           0.697          0.714
Mass                  0.804           0.816          0.827
Nodule                0.708           0.721          0.727
Pneumonia             0.684           0.692          0.703
Pneumothorax          0.836           0.844          0.848
Consolidation         0.758           0.764          0.773
Edema                 0.827           0.821          0.834
Emphysema             0.897           0.903          0.911
Fibrosis              0.815           0.813          0.824
Pleural Thickening    0.735           0.453          0.752
Hernia                0.904           0.908          0.916
Average               0.803           0.806          0.812
