Article

Holographic Microwave Image Classification Using a Convolutional Neural Network

Biomedical Device Innovation Center, Shenzhen Technology University, Shenzhen 518118, China
Micromachines 2022, 13(12), 2049; https://doi.org/10.3390/mi13122049
Submission received: 11 November 2022 / Revised: 21 November 2022 / Accepted: 22 November 2022 / Published: 23 November 2022

Abstract

Holographic microwave imaging (HMI) has been proposed for early breast cancer diagnosis. Automatically classifying benign and malignant tumors in microwave images is challenging. Convolutional neural networks (CNNs) have demonstrated excellent image classification and tumor detection performance. This study investigates the feasibility of using a CNN architecture to identify and classify HMI images. A modified AlexNet with transfer learning was investigated to automatically identify, classify, and quantify four and five classes of HMI breast images. Various pre-trained networks, including ResNet18, GoogLeNet, ResNet101, VGG19, ResNet50, DenseNet201, SqueezeNet, Inception v3, AlexNet, and Inception-ResNet-v2, were used to evaluate the proposed network. The proposed network achieved high classification accuracy with a small training dataset (966 images) and short training times.

1. Introduction

Breast cancer is the leading cause of female cancer deaths [1]. Previous studies have shown that early breast cancer detection combined with suitable treatment can significantly improve survival rates [2]. X-ray mammography is the current gold-standard imaging tool for diagnosing breast cancer, but it exposes patients to harmful radiation and is unsuitable for dense breasts [3]. Microwave imaging has been proposed as one of the most promising breast imaging tools [4]. Researchers have extensively investigated microwave imaging in many aspects, including measurement of the microwave dielectric properties of breast tissues [5,6], image algorithms [7,8], numerical models [9,10], data acquisition systems [11,12,13], microwave antennas [14,15,16], clinical trials [17,18], image enhancement and improvement methods [19,20,21], and image classification [22,23,24]. If microwave images contained specific qualitative and quantitative indicators, this could help characterize benign and malignant tumors and predict disease. However, this work is challenging because it spans several disciplines, including microwave science, medical imaging, machine learning, and computer vision.
Over the past two decades, deep learning has attracted increasing attention and has achieved excellent performance in medical image classification and disease detection [25,26]. For example, Chen et al. employed a biclustering mining method on ultrasound images to identify breast lesions with accuracy, sensitivity, and specificity of 96.1%, 96.7%, and 95.7%, respectively [27]; however, the image datasets were too small to support generalization. Li et al. applied a deep neural network to enhance microwave images [28]. Khoshdel et al. investigated the feasibility of using a 3D U-Net architecture to improve microwave breast images [29]. Rana et al. investigated machine learning for breast lesion detection using microwave radar imaging [22]. Mojabi et al. applied convolutional neural networks (CNNs) to microwave and ultrasound images for breast tissue classification with uncertainty quantification [24]. However, obtaining large microwave image datasets for training networks remains challenging.
AlexNet is one of the most popular CNN architectures and was trained on the large-scale ImageNet dataset (millions of labeled images) [30]. Previous studies have shown that small datasets (a few hundred images) can be used for image classification [31]. However, training deep networks from scratch on such small datasets easily leads to overfitting. With the help of transfer learning, the training process can be conducted on a personal computer using small datasets [32].
In our previous studies, the holographic microwave imaging (HMI) method was proposed and tested for breast lesion detection [33,34,35]. This paper investigates the feasibility of using a modified AlexNet with transfer learning to identify, classify, and quantify five classes of HMI datasets (fatty, dense, heterogeneously dense, very dense, and very dense breasts containing tumors), thereby reducing the subjectivity of judging lesions or abnormal tissues. Experimental validations are conducted on realistic MRI-based breast models to investigate the effectiveness and accuracy of the modified AlexNet with transfer learning. In addition, a comparison study of several deep learning networks, including ResNet18, ResNet50, ResNet101, GoogLeNet, Inception v3, AlexNet, and VGG19, was conducted to evaluate HMI image classification performance. The research findings not only extend the application of deep learning but also help relate microwave imaging to deep learning and computer vision. The rest of this paper is organized as follows: Section 2 describes the materials and method. Section 3 presents the experimental validations and results. Section 4 concludes the study.

2. Materials and Method

2.1. Convolutional Neural Network

A typical CNN contains an input layer (which receives pixel values), convolution layers (which extract image features), pooling layers (which reduce the number of pixels to be processed and form abstract features), and an output layer (which maps the extracted features into classification vectors corresponding to the feature categories). The convolution operation can be described as:
$$z^{l} = W^{l} \ast x^{l-1} + b^{l}, \qquad a^{l} = \sigma\left(z^{l}\right)$$
where $l$ denotes the $l$th layer and $\ast$ denotes the convolution operation. $W^{l}$, $b^{l}$, and $z^{l}$ denote the weight matrix, bias matrix, and weighted input of the $l$th layer, respectively, and $\sigma$ is the nonlinear activation function. When $l = 2$, $x^{2-1} = x^{1}$ is the image matrix whose elements are pixel values. When $l > 2$, $x^{l-1}$ is the feature-map matrix $a^{l-1}$ extracted from the $(l-1)$th layer, i.e., $x^{l-1} = a^{l-1} = \sigma\left(z^{l-1}\right)$. Let $L$ be the output layer; $a^{L}$ is then the final output vector.
Nonlinear activation functions are employed from the second layer to the last layer. The cost function is:
$$E_{0}^{L} = -\frac{1}{n}\sum_{i=1}^{n}\sum_{k=1}^{N}\left[t_{k}^{L}\ln a_{k}^{L} + \left(1 - t_{k}^{L}\right)\ln\left(1 - a_{k}^{L}\right)\right]$$
where $n$ is the number of training samples and $N$ is the number of neurons in the output layer, corresponding to the $N$ classes. $t_{k}^{L}$ is the target value corresponding to the $k$th neuron of the output layer and $a_{k}^{L}$ is the actual output value of the $k$th neuron of the output layer.
The output layer error can be defined as:
$$\delta^{L} = \frac{\partial E_{0}^{L}}{\partial z^{L}}$$
where $\partial(\cdot)$ denotes the partial derivative operation. For $l = L-1, L-2, \ldots, 2$:
$$\delta^{l} = \left(\left(W^{l+1}\right)^{\mathrm{T}}\delta^{l+1}\right) \circ \sigma'\left(z^{l}\right)$$
where $\circ$ is the Hadamard product. The partial derivatives of $E_{0}^{L}$ with respect to $W^{l}$ and $b^{l}$ can be calculated as follows:
$$\frac{\partial E_{0}^{L}}{\partial W^{l}} = \frac{\partial E_{0}^{L}}{\partial a^{l}} \circ \frac{\partial a^{l}}{\partial W^{l}} = \delta^{l} \circ x^{l-1}, \qquad \frac{\partial E_{0}^{L}}{\partial b^{l}} = \frac{\partial E_{0}^{L}}{\partial a^{l}} \circ \frac{\partial a^{l}}{\partial b^{l}} = \delta^{l}$$
The parameter updates can then be computed as:
$$\Delta W^{l} = -\eta\frac{\partial E_{0}^{L}}{\partial W^{l}}, \qquad \Delta b^{l} = -\eta\frac{\partial E_{0}^{L}}{\partial b^{l}}$$
where η denotes the learning rate.
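To make these update equations concrete, the following MATLAB sketch runs one forward pass and one gradient-descent step for a single fully connected layer. The toy layer sizes, the sigmoid activation, and the random initial values are illustrative assumptions and are not taken from this paper; only the learning rate of 0.0003 matches Table 2.

```matlab
% Minimal numerical sketch (toy sizes, assumed sigmoid activation) of the
% forward pass, output-layer error, and gradient-descent update above.
sigma = @(z) 1./(1 + exp(-z));            % nonlinear activation sigma
x1    = rand(4,1);                        % input vector x^{l-1} (pixel values)
W2    = randn(3,4);  b2 = zeros(3,1);     % weights W^l and biases b^l
t     = [1; 0; 0];                        % one-hot target t^L
eta   = 0.0003;                           % learning rate used in this study

z2 = W2*x1 + b2;                          % weighted input z^l
a2 = sigma(z2);                           % activation a^l = sigma(z^l)
delta2 = a2 - t;                          % output error for sigmoid + cross-entropy loss
dW2 = delta2 * x1.';                      % gradient w.r.t. W^l (outer product with x^{l-1})
db2 = delta2;                             % gradient w.r.t. b^l
W2  = W2 - eta*dW2;                       % Delta W^l = -eta * dE/dW^l
b2  = b2 - eta*db2;                       % Delta b^l = -eta * dE/db^l
```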
The ResNet architecture reduces training error as the number of network layers grows [36]. The key to the ResNet architecture is adding a shortcut identity connection to the basic network unit:
$$H(X) = F(X) + X$$
where $H(X)$ is the desired mapping and $F(X)$ is the residual mapping.
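As a small illustration of the identity shortcut $H(X) = F(X) + X$, the following MATLAB sketch builds one residual unit with the Deep Learning Toolbox; the feature-map size and filter count are assumed values chosen only for illustration.

```matlab
% Minimal sketch (assumed feature-map size and filter count) of a residual
% unit H(X) = F(X) + X with an identity shortcut, using layerGraph.
layers = [
    imageInputLayer([56 56 64], 'Name','X', 'Normalization','none')
    convolution2dLayer(3, 64, 'Padding','same', 'Name','conv_a')
    reluLayer('Name','relu_a')
    convolution2dLayer(3, 64, 'Padding','same', 'Name','conv_b')   % residual branch F(X)
    additionLayer(2, 'Name','add')                                 % F(X) + X
    reluLayer('Name','relu_out')];
lgraph = layerGraph(layers);                     % sequential connections feed add/in1
lgraph = connectLayers(lgraph, 'X', 'add/in2');  % identity shortcut from the input X
```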

2.2. Datasets

As shown in Table 1, publicly available MRI-derived breast phantoms from nine human subjects were used to develop realistic breast models by converting the pixel values in MRI images to complex-valued permittivity [37,38]. Figure 1 shows one of the 12 phantoms (breast 9) together with the real and imaginary parts of its relative complex-valued permittivity. Figure 2 shows the real and imaginary parts of all 12 breast phantoms. The HMI method was applied to the developed, realistic numerical microwave breast models to generate the HMI breast image datasets. The numerical models simulated sphere-shaped inclusions as tumors (radii of 5 and 10 mm).
This study used two datasets to train and test the CNN networks (see Table 2). Dataset 1 consists of the real part of the HMI breast images, and dataset 2 consists of the imaginary part. Following [37], each dataset includes five classes of HMI images derived from the 12 phantoms: fatty, dense, heterogeneously dense, very dense, and breasts containing tumors. Class V was defined by the presence of tumors, and three Class V models were investigated in this study (see Table 1).

2.3. Training and Testing Data

2.3.1. Image Segmentation

An original HMI image contains different types of tissues with different sizes and cannot be applied directly for classification. We therefore applied an image segmentation method that partitions each original HMI image into sub-images. Each sub-image is a 227 × 227 pixel RGB image. Segmentation changes the representation into images that are more uniform and easier to analyze, rescales them to fit AlexNet, and facilitates the final determination of the percentage of each class. In addition, to preserve the authenticity of the features extracted from the training dataset and the integrity of the original images, image augmentation techniques such as rotation and height or width shifts were not used.
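A minimal MATLAB sketch of this tiling step is given below; the file name, the non-overlapping stride, and the output naming are illustrative assumptions rather than the exact preprocessing pipeline used in this study.

```matlab
% Minimal sketch (hypothetical file name, non-overlapping tiling assumed)
% of partitioning an original HMI image into 227 x 227 RGB sub-images.
img = imread('hmi_breast_real.png');                 % hypothetical exported HMI image
if size(img,3) == 1, img = repmat(img, [1 1 3]); end % ensure three color channels
tile = 227;                                          % AlexNet input size
k = 0;
for i = 1:tile:size(img,1)-tile+1
    for j = 1:tile:size(img,2)-tile+1
        sub = img(i:i+tile-1, j:j+tile-1, :);        % 227 x 227 x 3 sub-image
        k = k + 1;
        imwrite(sub, sprintf('sub_image_%03d.png', k));
    end
end
```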

2.3.2. Image Labeling

Both datasets 1 and 2 were divided into five classes (see Figure 2 and Table 2). A fatty breast (Class I) consists of skin, muscle, and fat tissue. A dense breast (Class II) consists of skin, muscle, fat, and dense tissue (which has higher dielectric properties than fatty tissue). A heterogeneously dense breast (Class III) consists of skin, muscle, fat, and heterogeneously dense tissue. A very dense breast (Class IV) consists of skin, muscle, fat, dense tissue, and very dense tissue (which has higher dielectric properties than fat and dense tissues). A breast containing tumors (Class V) consists of skin, muscle, fat, heterogeneously dense tissue, and two tumors.
The created HMI images illustrate the application behavior of the trained network; therefore, their sub-images were not labeled. Different numbers of sub-images from each class were selected for manual labeling and then used for training and testing the proposed network. The training and testing datasets were completely independent to ensure the reliability and stability of the proposed method.
For each dataset, 70% of the images were used to train the proposed network, 20% were used to validate it, and 10% were used to test it. All breast images were resized to 227 × 227 × 3 pixels. The training set was applied to tune the network parameters using a gradient-based method, and the testing set was used in the testing process to generate predictions. Table 2 shows the parameters used for training the networks.
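The following MATLAB sketch shows one way to realize the 70/20/10 split and the resizing to the AlexNet input size, assuming the labeled sub-images are stored in one folder per class; the folder name HMI_subimages is hypothetical.

```matlab
% Minimal sketch (assumed folder layout: one sub-folder per class I-V) of the
% 70/20/10 split and the resizing to the 227 x 227 x 3 AlexNet input size.
imds = imageDatastore('HMI_subimages', ...
    'IncludeSubfolders', true, 'LabelSource', 'foldernames');
[imdsTrain, imdsVal, imdsTest] = splitEachLabel(imds, 0.7, 0.2, 'randomized'); % remainder (10%) -> test
inputSize = [227 227 3];
augTrain = augmentedImageDatastore(inputSize, imdsTrain);  % resizing only, no augmentation
augVal   = augmentedImageDatastore(inputSize, imdsVal);
augTest  = augmentedImageDatastore(inputSize, imdsTest);
```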

2.4. Network Architecture

2.4.1. Modified AlexNet

AlexNet is one of the most popular CNN architectures owing to its strong performance in image classification. This study therefore applied a modified AlexNet with transfer learning (see Table 3) to HMI images to improve image classification accuracy; Table 3 shows the structure of the modified network. The first convolution layer takes the input datasets and passes them through convolution filters; the input image is therefore resized to 227 × 227 × 3 pixels, corresponding to the width, height, and three color channels (the depth) of the input image. The last convolutional layer aggregates the high-resolution patch-wise representations to produce the output image. The cross-entropy loss function is used to reduce errors. Batch normalization is performed before each activation function to mitigate overfitting. The ReLU layers provide faster and more efficient training by mapping negative values to zero and retaining positive values. The max pooling layers simplify the output and reduce the resolution by reducing the number of parameters to learn. The fully connected layer combines all features to classify the images into four classes, and the SoftMax function normalizes the output of the fully connected layer.

2.4.2. Transfer Learning

As shown in Table 3, the last three layers of AlexNet were replaced for transfer learning to avoid overfitting. The proposed AlexNet network therefore consists of a pre-trained part and a transferred part. The parameters of the pre-trained part were trained on the publicly available ImageNet dataset and can therefore be adapted to extract features from the HMI image dataset. The parameters of the transferred part represent only a small portion of the proposed network, so a small training dataset can meet the requirements of transfer learning.
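The following MATLAB sketch illustrates this replacement of the last three layers for the five classes described in Section 2.2; the learn-rate factors applied to the new head are a common transfer-learning choice and are assumptions, not values reported in this paper.

```matlab
% Minimal sketch of replacing the last three AlexNet layers for five-class
% transfer learning (learn-rate factors for the new head are assumed).
net = alexnet;                            % pre-trained part (ImageNet weights)
layersTransfer = net.Layers(1:end-3);     % keep all feature-extraction layers
numClasses = 5;                           % Classes I-V
layers = [
    layersTransfer
    fullyConnectedLayer(numClasses, ...
        'WeightLearnRateFactor', 20, 'BiasLearnRateFactor', 20)
    softmaxLayer
    classificationLayer];                 % transferred part
```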

2.5. Data Analysis and Image Processing

MATLAB R2020a with the Deep Learning Toolbox was used for data analysis and image processing. The proposed network was developed on a laptop (ThinkPad P53) with an Intel i7-8700K CPU (2.60 GHz) and 256 GB of RAM. Stochastic gradient descent with momentum (SGDM) was selected to train the transferred part of AlexNet.
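A minimal MATLAB sketch of the SGDM training run is shown below, using the settings listed in Table 2 (50 epochs, mini-batch size 25, validation frequency 30, initial learning rate 0.0003) and reusing the datastores and layer array from the earlier sketches; the remaining options are assumptions.

```matlab
% Minimal sketch of the SGDM training run with the Table 2 settings,
% reusing augTrain, augVal, and layers from the sketches above.
options = trainingOptions('sgdm', ...
    'InitialLearnRate', 0.0003, ...
    'MaxEpochs', 50, ...
    'MiniBatchSize', 25, ...
    'ValidationData', augVal, ...
    'ValidationFrequency', 30, ...
    'Shuffle', 'every-epoch', ...
    'Plots', 'training-progress', ...
    'Verbose', false);
netTransfer = trainNetwork(augTrain, layers, options);
```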
The MATLAB Transfer Learning of Pretrained Network for Classification tool was used to train and test various deep learning networks on dataset 2, including ResNet18, GoogLeNet, ResNet101, VGG19, ResNet50, DenseNet201, SqueezeNet, Inception v3, AlexNet, and Inception-ResNet-v2.

2.6. Performance Metrics

The overall performance of the proposed architecture is assessed using the evaluation matrix, which contains True Positives (TP), False Positives (FP), False Negatives (FN), and True Negatives (TN). The AlexNet architecture was evaluated on the testing dataset using performance metrics derived from this matrix, including precision and accuracy. Precision quantifies the exactness of a model and represents the ratio of carcinoma images accurately classified to all images predicted to belong to that class [39].
$$\mathrm{Precision} = \frac{TP}{TP + FP}$$
where TP refers to images correctly classified as breast tumor images and FP refers to normal images mistakenly classified as breast tumor images.
Accuracy evaluates the correctness of a model and is the ratio of the number of images accurately classified out of the total number of testing images.
$$\mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}$$
where TN refers to correctly classified normal images.
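The following MATLAB sketch shows how per-class precision and overall accuracy can be computed from the confusion matrix of the test-set predictions, reusing the trained network and the test datastores from the earlier sketches.

```matlab
% Minimal sketch of computing per-class precision and overall accuracy from
% predictions on the held-out test set (reusing netTransfer, augTest, imdsTest).
predLabels = classify(netTransfer, augTest);    % predicted class labels
trueLabels = imdsTest.Labels;                   % ground-truth labels
C = confusionmat(trueLabels, predLabels);       % rows: true class, columns: predicted class
precision = diag(C) ./ sum(C, 1).';             % TP ./ (TP + FP), per class
accuracy  = sum(diag(C)) / sum(C(:));           % correctly classified / all test images
```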

3. Results and Discussion

3.1. Results

Figure 3a shows the training progress of the proposed network using dataset 1 and the SGDM method, including the classification accuracy and cross-entropy loss for each epoch of training and validation. At 50 epochs, the classification accuracy reached 100% for both training and validation, and the cross-entropy loss reached 0 for both. The training time was 11 min and 13 s for the 966 training images of dataset 1.
Figure 3b displays the training progress of the modified AlexNet with transfer learning using dataset 2 and the SGDM method. At 50 epochs, the classification accuracy again reached 100% for both training and validation, and the cross-entropy loss reached 0 for both. The training time was 10 min and 55 s for the 966 training images of dataset 2.
As shown in Figure 4a, the performance of the proposed network was evaluated using the confusion matrix on the testing images from dataset 1. The rows (actual classes) give the classification accuracy and the columns (predicted classes) give the sensitivity of the proposed network. For example, in the first row, 16 images of Class I were present in the testing dataset, and 16 images (100%) were classified accurately. The classification accuracy of Classes I, II, III, IV, and V was 100, 100, 100, 91.7, and 67.7%, respectively. In the first column, 16 images were predicted as Class I, of which 16 images (100%) were classified accurately. The sensitivity of Classes I, II, III, IV, and V was 100, 78.3, 97.7, 100, and 100%, respectively.
Figure 4b shows the performance of modified AlexNet with transfer learning on testing images (from dataset 2). In the first row, 16 images were used to classify Class I in the testing dataset, and 16 images (100%) were classified accurately. The proposed network obtained a classification accuracy of 100, 100, 100, 100, and 100% for Classes I, II, III, IV, and V, respectively. In the first column, 16 images were used to predict Class I in the testing images, where 16 images (100%) were classified accurately. The proposed network obtained a sensitivity of 100, 100, 100, 100, and 100% for Classes I, II, III, IV, and V, respectively.
Figure 5a,b show 16 randomly selected training images and 16 randomly selected testing images from dataset 1, respectively, classified using the AlexNet with transfer learning network.
Figure 6a,b show 16 randomly selected training images and 16 randomly selected testing images from dataset 2, respectively.
Table 4 presents the prediction results for dataset 2 using several deep learning networks. MobileNet-v2 obtained the highest accuracy (96.84%) with a training time of 28 min and 38 s. AlexNet required the shortest training time (3 min and 4 s) with relatively low accuracy (79.89%); Inception-ResNet-v2 obtained the lowest accuracy (79.34%) with a long training time (106 min and 48 s); and DenseNet201 required the longest training time (132 min and 25 s) with relatively high accuracy (96.01%). The modified AlexNet with transfer learning achieved higher classification accuracy than the other deep learning networks, making it well suited to classifying HMI images.

3.2. Discussion

In this study, five classes of breast phantoms were developed using the method presented in [37]. The initial HMI breast images were created using the HMI method detailed in [33] and were then analyzed and processed using the proposed CNN architecture. The proposed architecture offered higher classification accuracy and sensitivity for image dataset 2 (imaginary-part HMI images; see Figure 4b) than for image dataset 1 (real-part HMI images; see Figure 4a). For image dataset 1, the modified AlexNet with transfer learning offers higher classification accuracy for Classes I–III (100%) than for Classes IV (91.7%) and V (67.7%), and higher sensitivity for Classes I (100%), IV (100%), and V (100%) than for Classes II (78.3%) and III (97.7%). In contrast, no significant difference in classification accuracy or sensitivity was observed for dataset 2. Figure 4 demonstrates that the choice of image dataset affects the classification accuracy and sensitivity of the modified AlexNet with transfer learning.
The 16 randomly selected testing examples of image dataset 1 are shown in Figure 5b, and the 16 randomly selected testing examples of image dataset 2 are shown in Figure 6b. Although a classification accuracy of 100% was obtained for the examples of image dataset 1 shown in Figure 5b, this does not mean that the overall classification accuracy for dataset 1 is 100%; for example, accuracies of 91.7% and 67.7% were obtained for Classes IV and V, respectively (see Figure 4a). Conversely, although the proposed CNN architecture achieved 100% accuracy and sensitivity on dataset 2 (see Figure 4b), the classification scores of some testing examples are below 100% (96.36–99.96%; see Figure 6b). This may be caused by numerical errors in the MATLAB computation.
Compared with some popular deep learning networks (see Table 4), modified AlexNet with transfer learning has apparent advantages in classification accuracy and training time. For example, modified AlexNet with transfer learning obtained higher accuracy (100% vs. 96.84%) and required shorter training time (10 min 55 s vs. 28 min 38 s) to classify image dataset 2 than MobileNet-v2. The experimental results demonstrated that the modified AlexNet with transfer learning could identify, classify, and quantify HMI images with high accuracy, sensitivity, and reasonable training time. Several factors may affect the test results, including image preprocessing, the number of training images (in percentages), the total number of image datasets, and MATLAB calculation errors.

4. Conclusions

In this study, a CNN architecture was introduced for analyzing HMI images. A modified AlexNet with transfer learning was developed to identify, classify, and quantify five classes of HMI images (fatty, dense, heterogeneously dense, very dense, and very dense breasts containing tumors). Experimental validations were conducted to verify the performance of the proposed network, and several popular deep learning networks, including AlexNet, were studied for comparison. The results demonstrated that the proposed network can automatically identify and classify HMI images more accurately (100%) than the other deep learning networks. In conclusion, the proposed network has the potential to become an effective tool for analyzing HMI images using small training datasets, which offers promising applications in the microwave breast imaging field.

Funding

This research was funded by the International Science and Technology Cooperation Project of the Shenzhen Science and Technology Commission (GJHZ20200731095804014).

Institutional Review Board Statement

Not applicable.

Data Availability Statement

Data and code are available from the corresponding author upon reasonable request.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Siegel, R.; Miller, K.; Fuchs, H.; Jemal, A. Cancer statistics, 2022. CA Cancer J. Clin. 2022, 72, 7–33. [Google Scholar] [CrossRef] [PubMed]
  2. Yang, Y.; Yin, X.; Sheng, L.; Xu, S.; Dong, L.; Liu, L. Perioperative chemotherapy more of a benefit for overall survival than adjuvant chemotherapy for operable gastric cancer: An updated meta-analysis. Sci. Rep. 2015, 5, 12850. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  3. Magna, G.; Casti, P.; Jayaraman, S.V.; Salmeri, M.; Mencattini, A.; Martinelli, E.; Di Natale, C. Identification of mammography anomalies for breast cancer detection by an ensemble of classification models based on artificial immune system. Knowl. Syst. 2016, 101, 60–70. [Google Scholar] [CrossRef]
  4. Meaney, P.M.; Golnabi, A.H.; Epstein, N.R.; Geimer, S.D.; Fanning, M.W.; Weaver, J.B.; Paulsen, K.D. Integration of microwave tomography with magnetic resonance for improved breast imaging. Med. Phys. 2013, 40, 103101. [Google Scholar] [CrossRef] [Green Version]
  5. Lazebnik, M.; McCartney, L.; Popovic, D.; Watkins, C.B.; Lindstrom, M.J.; Harter, J.; Sewall, S.; Magliocco, A.; Booske, J.H.; Okoniewski, M.; et al. A large-scale study of the ultrawideband microwave dielectric properties of normal breast tissue obtained from reduction surgeries. Phys. Med. Biol. 2007, 52, 2637. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  6. Lazebnik, M.; Popovic, D.; McCartney, L.; Watkins, C.B.; Lindstrom, M.J.; Harter, J.; Sewall, S.; Ogilvie, T.; Magliocco, A.; Breslin, T.M.; et al. A large-scale study of the ultrawideband microwave dielectric properties of normal, benign and malignant breast tissues obtained from cancer surgeries. Phys. Med. Biol. 2007, 52, 6093–6115. [Google Scholar] [CrossRef]
  7. Elahi, M.A.; O’Loughlin, D.; Lavoie, B.R.; Glavin, M.; Jones, E.; Fear, E.C.; O’Halloran, M. Evaluation of Image Reconstruction Algorithms for Confocal Microwave Imaging: Application to Patient Data. Sensors 2018, 18, 1678. [Google Scholar] [CrossRef] [Green Version]
  8. Moloney, B.M.; O’Loughlin, D.; Abd Elwahab, S.; Kerin, M.J. Breast Cancer Detection–A Synopsis of Conventional Modalities and the Potential Role of Microwave Imaging. Diagnostics 2020, 10, 103. [Google Scholar] [CrossRef] [Green Version]
  9. Soltani, M.; Rahpeima, R.; Kashkooli, F.M. Breast cancer diagnosis with a microwave thermoacoustic imaging technique—A numerical approach. Med. Biol. Eng. Comput. 2019, 57, 1497–1513. [Google Scholar] [CrossRef]
  10. Rahpeima, R.; Soltani, M.; Kashkooli, F.M. Numerical Study of Microwave Induced Thermoacoustic Imaging for Initial Detection of Cancer of Breast on Anatomically Realistic Breast Phantom. Comput. Methods Programs Biomed. 2020, 196, 105606. [Google Scholar] [CrossRef]
  11. Meaney, P.M.; Fanning, M.W.; Li, D.; Poplack, S.P.; Paulsen, K.D. A clinical prototype for active microwave imaging of the breast. IEEE Trans. Microw. Theory Tech. 2000, 48, 1841–1853. [Google Scholar]
  12. Islam, M.; Mahmud, M.; Islam, M.T.; Kibria, S.; Samsuzzaman, M. A Low Cost and Portable Microwave Imaging System for Breast Tumor Detection Using UWB Directional Antenna array. Sci. Rep. 2019, 9, 15491. [Google Scholar] [CrossRef] [Green Version]
  13. Adachi, M.; Nakagawa, T.; Fujioka, T.; Mori, M.; Kubota, K.; Oda, G.; Kikkawa, T. Feasibility of Portable Microwave Imaging Device for Breast Cancer Detection. Diagnostics 2021, 12, 27. [Google Scholar] [CrossRef] [PubMed]
  14. Srinivasan, D.; Gopalakrishnan, M. Breast Cancer Detection Using Adaptable Textile Antenna Design. J. Med. Syst. 2019, 43, 177. [Google Scholar] [CrossRef]
  15. Misilmani, H.M.E.; Naous, T.; Khatib, S.K.A.; Kabalan, K.Y. A Survey on Antenna Designs for Breast Cancer Detection Using Microwave Imaging. IEEE Access 2020, 8, 102570–102594. [Google Scholar] [CrossRef]
  16. Sheeba, I.R.; Jayanthy, T. Design and Analysis of a Flexible Softwear Antenna for Tumor Detection in Skin and Breast Model. Wirel. Pers. Commun. 2019, 107, 887–905. [Google Scholar] [CrossRef]
  17. Meaney, P.M.; Kaufman, P.A.; Muffly, L.S.; Click, M.; Poplack, S.P.; Wells, W.A.; Schwartz, G.N.; di Florio-Alexander, R.M.; Tosteson, T.D.; Li, Z.; et al. Microwave imaging for neoadjuvant chemotherapy monitoring: Initial clinical experience. Breast Cancer Res. 2013, 15, R35. [Google Scholar] [CrossRef] [Green Version]
  18. Sani, L.; Ghavami, N.; Vispa, A.; Paoli, M.; Raspa, G.; Ghavami, M.; Sacchetti, F.; Vannini, E.; Ercolani, S.; Saracini, A.; et al. Novel microwave apparatus for breast lesions detection: Preliminary clinical results. Biomed. Signal Process. Control 2019, 52, 257–263. [Google Scholar] [CrossRef]
  19. Williams, T.C.; Fear, E.C.; Westwick, D.T. Tissue sensing adaptive radar for breast cancer detection-investigations of an improved skin-sensing method. IEEE Trans. Microw. Theory Tech. 2006, 54, 1308–1314. [Google Scholar] [CrossRef] [Green Version]
  20. Abdollahi, N.; Jeffrey, I.; LoVetri, J. Improved Tumor Detection via Quantitative Microwave Breast Imaging Using Eigenfunction- Based Prior. IEEE Trans. Comput. Imaging 2020, 6, 1194–1202. [Google Scholar] [CrossRef]
  21. Coşğun, S.; Bilgin, E.; Çayören, M. Microwave imaging of breast cancer with factorization method: SPIONs as contrast agent. Med. Phys. 2020, 47, 3113–3122. [Google Scholar] [CrossRef] [PubMed]
  22. Rana, S.P.; Dey, M.; Tiberi, G.; Sani, L.; Vispa, A.; Raspa, G.; Duranti, M.; Ghavami, M.; Dudley, S. Machine Learning Approaches for Automated Lesion Detection in Microwave Breast Imaging Clinical Data. Sci. Rep. 2019, 9, 10510. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  23. Edwards, K.; Khoshdel, V.; Asefi, M.; LoVetri, J.; Gilmore, C.; Jeffrey, I. A Machine LearningWorkflow for Tumour Detection in Breasts Using 3D Microwave Imaging. Electronics 2021, 10, 674. [Google Scholar] [CrossRef]
  24. Mojabi, P.; Khoshdel, V.; Lovetri, J. Tissue-Type ClassificationWith Uncertainty Quantification of Microwave and Ultrasound Breast Imaging: A Deep Learning Approach. IEEE Access 2020, 8, 182092–182104. [Google Scholar] [CrossRef]
  25. Roslidar, R.; Rahman, A.; Muharar, R.; Syahputra, M.R.; Munadi, K. A review on recent progress in thermal imaging and deep learning approaches for breast cancer detection. IEEE Access 2020, 8, 116176–116194. [Google Scholar] [CrossRef]
  26. Bakx, N.; Bluemink, H.; Hagelaar, E.; Sangen, M.; Hurkmans, C. Development and evaluation of radiotherapy deep learning dose prediction models for breast cancer. Phys. Imaging Radiat. Oncol. 2021, 17, 65–70. [Google Scholar] [CrossRef]
  27. Chen, Y.; Ling, L.; Huang, Q. Classification of breast tumors in ultrasound using biclustering mining and neural network. In Proceedings of the 2016 9th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI), Datong, China, 15–17 October 2016; pp. 1788–1791. [Google Scholar]
  28. Li, Y.; Hu, W.; Chen, S.; Zhang, W.; Ligthart, L. Spatial resolution matching of microwave radiometer data with convolutional neural network. Remote Sens. 2019, 11, 2432. [Google Scholar] [CrossRef] [Green Version]
  29. Khoshdel, V.; Asefi, M.; Ashraf, A.; Lovetri, J. Full 3D microwave breast imaging using a deep-learning technique. J. Imaging 2020, 6, 80. [Google Scholar] [CrossRef]
  30. Deng, J.; Dong, W.; Socher, R.; Li, L.-J.; Li, K.; Li, F.-F. Imagenet: A Large-Scale Hierarchical Image Database. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 248–255. [Google Scholar]
  31. Liu, G.; Mao, S.; Kim, J.H. A mature-tomato detection algorithm using machine learning and color analysis. Sensors 2019, 19, 2023. [Google Scholar] [CrossRef] [Green Version]
  32. Lu, S.; Lu, Z.; Zhang, Y.D. Pathological brain detection based on AlexNet and transfer learning. J. Comput. Sci. 2019, 30, 41–47. [Google Scholar] [CrossRef]
  33. Wang, L.; Simpkin, R.; Al-Jumaily, A. Holographic microwave imaging array: Experimental investigation of breast tumour detection. In Proceedings of the 2013 IEEE International Workshop on Electromagnetics, Applications and Student Innovation Competition, Hong Kong, China, 1–3 August 2013; pp. 61–64. [Google Scholar]
  34. Wang, L.; Fatemi, M. Compressive Sensing Holographic Microwave Random Array Imaging of Dielectric Inclusion. IEEE Access 2018, 6, 56477–56487. [Google Scholar] [CrossRef]
  35. Wang, L. Multi-Frequency Holographic Microwave Imaging for Breast Lesion Detection. IEEE Access 2019, 7, 83984–83993. [Google Scholar] [CrossRef]
  36. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
  37. Burfeindt, M.J.; Colgan, T.J.; Mays, R.O.; Shea, J.D.; Behdad, N.; Van Veen, B.D.; Hagness, S.C. MRI-Derived 3-D-Printed Breast Phantom for Microwave Breast Imaging Validation. IEEE Antennas Wirel. Propag. Lett. 2012, 11, 1610–1613. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  38. Italian National Research Council. An Internet Resource for the Calculation of the Dielectric Properties of Body Tissues in the Frequency Range 10 Hz–100 GHz. Available online: http://niremf.ifac.cnr.it/tissprop (accessed on 23 October 2022).
  39. Sahlol, A.T.; Yousri, D.; Ewees, A.A.; Al-Qaness, M.; Elaziz, M.A. COVID-19 image classification using deep features and fractional-order marine predators algorithm. Sci. Rep. 2020, 10, 15364. [Google Scholar] [CrossRef] [PubMed]
Figure 1. (a) example of 12 breast phantoms (breast 9); (b) real part of the relative complex-valued permittivity of breast 9; and (c) imaginary part of the relative complex-valued permittivity of breast 9.
Figure 2. Real and imaginary parts of the relative complex-valued permittivity of the 12 breast phantoms: (a,b) breast 1; (c,d) breast 2; (e,f) breast 3; (g,h) breast 4; (i,j) breast 5; (k,l) breast 6; (m,n) breast 7; (o,p) breast 8; (q,r) breast 9; (s,t) breast 10; (u,v) breast 11; (w,x) breast 12.
Figure 3. Training progress of the proposed network using (a) dataset 1 and (b) dataset 2.
Figure 4. (a) Confusion matrix of testing dataset 1; (b) Confusion matrix of testing dataset 2.
Figure 5. (a) randomly selected 16 examples of training images (from dataset 1), and (b) randomly selected 16 examples of testing images (from dataset 1).
Figure 6. (a) randomly selected 16 examples of training images (from dataset 2); (b) randomly selected 16 examples of testing images (from dataset 2).
Table 1. Characteristics of breast phantoms.

Number | Phantom Class | Quantity | Model | Size
No 1 | I: fatty | 253 | RGB | 310 × 355 × 253
No 2 | I: fatty | 288 | RGB | 267 × 375 × 288
No 3 | II: dense | 307 | RGB | 316 × 352 × 307
No 4 | II: dense | 270 | RGB | 300 × 382 × 270
No 5 | II: dense | 251 | RGB | 258 × 253 × 251
No 6 | III: heterogeneously dense | 202 | RGB | 269 × 332 × 202
No 7 | III: heterogeneously dense | 248 | RGB | 258 × 365 × 248
No 8 | III: heterogeneously dense | 273 | RGB | 219 × 243 × 273
No 9 | IV: very dense | 212 | RGB | 215 × 328 × 212
No 10 | V: very dense breast contains two tumors | 212 | RGB | 215 × 328 × 212
No 11 | V: very dense breast contains two tumors | 212 | RGB | 215 × 328 × 212
No 12 | V: fatty breast contains two tumors | 253 | RGB | 310 × 355 × 253
Table 2. Training parameters.

Parameter | Dataset 1 | Dataset 2
Modality | Real part of HMI breast images | Imaginary part of HMI breast images
Number of phantoms | 12 | 12
Classes of images | 5 | 5
Number of HMI images | 1379 | 1379
Image size | 227 × 227 × 3 | 227 × 227 × 3
Number of training images | 966 | 966
Number of validation images | 275 | 275
Number of test images | 138 | 138
Number of Class I images | 160 | 160
Number of Class II images | 457 | 457
Number of Class III images | 444 | 444
Number of Class IV images | 108 | 108
Number of Class V images | 210 | 210
Cross-validation group | 8-fold | 8-fold
Maximum number of epochs | 50 | 50
Mini-batch size | 25 | 25
Validation frequency | 30 | 30
Initial learning rate | 0.0003 | 0.0003
Table 3. AlexNet with transfer learning.

No. | Name | Type | Activations | Weights & Bias
1 | data | Image input | 227 × 227 × 3 | —
2 | conv1 | Convolution | 55 × 55 × 96 | Weights: 11 × 11 × 3 × 96; bias: 1 × 1 × 96
3 | relu1 | ReLU | 55 × 55 × 96 | —
4 | norm1 | Cross-channel normalization | 55 × 55 × 96 | —
5 | pool1 | Max pooling | 27 × 27 × 96 | —
6 | conv2 | Grouped convolution | 27 × 27 × 256 | Weights: 5 × 5 × 48 × 128; bias: 1 × 1 × 128 × 2
7 | relu2 | ReLU | 27 × 27 × 256 | —
8 | norm2 | Cross-channel normalization | 27 × 27 × 256 | —
9 | pool2 | Max pooling | 13 × 13 × 256 | —
10 | conv3 | Convolution | 13 × 13 × 384 | Weights: 3 × 3 × 256 × 384; bias: 1 × 1 × 384
11 | relu3 | ReLU | 13 × 13 × 384 | —
12 | conv4 | Grouped convolution | 13 × 13 × 384 | Weights: 3 × 3 × 192 × 192; bias: 1 × 1 × 192 × 2
13 | relu4 | ReLU | 13 × 13 × 384 | —
14 | conv5 | Grouped convolution | 13 × 13 × 256 | Weights: 3 × 3 × 192 × 128; bias: 1 × 1 × 128 × 2
15 | relu5 | ReLU | 13 × 13 × 256 | —
16 | pool5 | Max pooling | 6 × 6 × 256 | —
17 | fc6 | Fully connected | 1 × 1 × 4096 | Weights: 4096 × 9216; bias: 4096 × 1
18 | relu6 | ReLU | 1 × 1 × 4096 | —
19 | drop6 | Dropout | 1 × 1 × 4096 | —
20 | fc7 | Fully connected | 1 × 1 × 4096 | Weights: 4096 × 4096; bias: 4096 × 1
21 | relu7 | ReLU | 1 × 1 × 4096 | —
22 | drop7 | Dropout | 1 × 1 × 4096 | —
23 | fc8 | Fully connected | 1 × 1 × 4 | Weights: 4 × 4096; bias: 4 × 1
24 | softmax | SoftMax | 1 × 1 × 4 | —
25 | output | Classification output | — | —
Table 4. HMI image classification using different deep learning networks.

Architecture | Accuracy | Training Time
MobileNet-v2 | 96.84% | 28 min 38 s
DenseNet201 | 96.01% | 132 min 25 s
SqueezeNet | 92.98% | 16 min 3 s
Inception-v3 | 86.24% | 11 min 30 s
ResNet101 | 84.73% | 43 min 5 s
GoogLeNet | 81.02% | 7 min 48 s
AlexNet | 80.33% | 5 min 39 s
ResNet50 | 78.40% | 36 min 16 s
ResNet18 | 77.30% | 11 min 45 s
Inception-ResNet-v2 | 73.18% | 106 min 48 s