Article

Two-Step Contrast Source Learning Method for Electromagnetic Inverse Scattering Problems

College of Electronic Science and Technology, National University of Defense Technology, Changsha 410073, China
* Author to whom correspondence should be addressed.
Sensors 2024, 24(18), 5997; https://doi.org/10.3390/s24185997
Submission received: 31 July 2024 / Revised: 14 September 2024 / Accepted: 14 September 2024 / Published: 16 September 2024
(This article belongs to the Section Electronic Sensors)

Abstract

This article is devoted to solving full-wave electromagnetic inverse scattering problems (EM-ISPs), which determine the geometrical and physical properties of scatterers from the knowledge of scattered fields. Due to the intrinsic ill-posedness and nonlinearity of EM-ISPs, traditional non-iterative and iterative methods struggle to meet the requirements of high accuracy and real-time reconstruction. To overcome these issues, we propose a two-step contrast source learning approach, cascading convolutional neural networks (CNNs) into the inversion framework, to tackle 2D full-wave EM-ISPs. In the first step, a contrast source network based on the CNNs architecture takes the determined part of the contrast source as input and then outputs an estimate of the total contrast source. Then, the recovered total contrast source is directly converted into the initial contrast. In the second step, the rough initial contrast obtained beforehand is input into the U-Net for refinement. Consequently, the EM-ISPs can be quickly solved with much higher accuracy, even for high-contrast objects, almost achieving real-time imaging. Numerical examples have demonstrated that the proposed two-step contrast source learning approach is able to improve accuracy and robustness even for high-contrast scatterers. The proposed approach offers a promising avenue for advancing EM-ISPs by integrating strengths from both traditional and deep learning-based approaches, to achieve real-time quantitative microwave imaging for high-contrast objects.

1. Introduction

Full-wave electromagnetic inverse scattering problems (EM-ISPs) determine both qualitative and quantitative information about unknown targets within a designated domain of interest (DoI). This non-invasive technique utilizes the scattered electromagnetic field data collected from an illuminating source to infer the properties of objects in the DoI. A significant advantage of employing EM-ISPs lies in their nondestructive nature, which contrasts sharply with traditional methods requiring physical intervention or invasive procedures. Instead, the method allows for the detection of internal heterogeneities by simply observing the scattered fields external to the medium. This attribute makes EM-ISPs highly suitable for applications where the integrity and preservation of the sample or object are paramount, such as remote sensing [1,2], through-the-wall imaging [3], and so on.
Due to the inherent ill-conditioning and nonlinearity of EM-ISPs, the reconstruction methods can be broadly categorized into two principal approaches: regularization iterative optimization methods and non-iterative methods. The conventional iterative approach involves formulating an objective function that encapsulates the problem’s constraints and goals. Subsequently, optimization techniques are employed to iteratively minimize this objective function, quantifying the discrepancy between computed and observed values. By iteratively refining this process, the objective is to achieve convergence and solve EM-ISPs, thereby reconstructing the properties of unknown scatterers. Methods such as the Born iteration method (BIM) [4], the contrast source inversion method (CSI) [5,6,7], and the subspace-based optimization method (SOM) [8,9] commonly encounter several formidable challenges when tasked with reconstructing targets featuring a high dielectric constant. These challenges include sensitivity to the initial selection of values, sluggish convergence rates, elevated computational demands, and susceptibility to becoming trapped in locally optimal solutions. Particularly problematic is their tendency to converge towards false local minima during the inversion process, which becomes notably pronounced when attempting to recover objects characterized by high permittivity. These issues collectively impede their suitability for applications necessitating real-time reconstruction capabilities. Stochastic optimization algorithms, such as genetic algorithm (GA) [10] and particle swarm optimization (PSO) [11], offer advantages in exploring global optimal solutions, due to their inherent uncertainty during iteration. However, these methods are associated with high temporal and spatial complexity. Non-iterative methods simplify the EM-ISPs by ignoring multiple scattering effects and approximating the target as an isolated scattering point, thereby linearizing the problem. Such methods, including Born approximation (BA), Rytov approximation (RA), and back-propagation (BP) [12], are primarily applicable in scenarios characterized by weak scattering (low-contrast targets). In an article by Wang et al. [13], a diffraction tomography (DT) algorithm was introduced for solving 3D EM-ISPs using a sparse planar array and polarization diversity. Nevertheless, non-iterative approaches often fail to achieve successful reconstruction, particularly when dealing with high contrast and large scatterers.
The constraints of conventional methods in computational electromagnetics, such as non-learnable parameters, significant computational overhead, and extensive manual intervention, have driven the rapid adoption of deep learning approaches in this domain [14,15,16,17]. This trend aims to strike a balance between computational efficiency and reconstruction accuracy [18,19,20]. Deep neural networks (DNNs) have the ability to automatically detect features from data; there have been numerous successful applications, including image classification [21] and segmentation [22]. In recent years, learning methods have been a powerful framework enabling unprecedented time and accuracy performance for solving complex EM-ISPs [23,24,25]. Recent developments have increasingly utilized CNNs, with notable success. CNNs employ key operations, such as convolution, addition, ReLU activation, up-sampling, down-sampling, and local maximum filtering. These abilities enable CNNs to learn complex relationships between inputs and outputs [26], and they can also be used to assist iterative optimization methods [27]. In addition to directly mapping scattered fields to scatterer contrasts, using the direct inversion scheme (DIS) [28], another approach involves preprocessing scattering data with iterative or non-iterative inversion algorithms before network training [29,30,31,32]. The dominant current scheme (DCS) [28] first obtains the initial contrast of multiple incident fields from the dominant current and then obtains the contrast through CNN training. This method leverages numerical algorithms to extract prior physical information, enhancing the overall generality and effectiveness of the learning algorithms. Yash Sanghvi et al. introduced the contrast source net (CS-Net) [33], which uses CNNs to derive the contrast source (CS) from scattered field data, followed by traditional iterative algorithms to obtain contrast estimates. These methods typically employ single-network structures. Yao [34] proposed a two-step network structure where the first-step network directly extracts preliminary contrast from scattered field data, refined further in the second-step network. While straightforward and practical, these approaches are purely data-driven and may suffer from limited generalization. In recent years, methods based on deep learning to solve EM-ISPs have flourished [24,32,35,36,37].
Based on prior research and the expressive power of DNNs, we propose a two-step CS learning method for addressing complex EM-ISPs, in order to improve the quality and efficiency of inversion imaging. Specifically, similar to the SOM algorithm, which incorporates subspace decomposition techniques, the CS-Net is employed to learn unknown signal subspaces and recover the complete CS. This recovered CS is then converted into a contrast image, which serves as the input for a second-step U-Net network for refined imaging. Compared with existing conventional methods and two-step DL-based methods [34,38,39,40], the proposed method offers four advantages:
(1) The proposed method enables CNNs to manage the entire imaging process without iterative procedures, thereby achieving near-real-time imaging.
(2) Despite the inherent non-linearity between the scattered field and object permittivity, the introduction of CSs as intermediate variables in inversion techniques effectively mitigates this issue in EM-ISPs.
(3) In the initial phase, integrating physical principles into the CS-Net training enhances noise resilience, incorporates prior physical knowledge, and expands the applicability of the learning algorithms.
(4) In the first step, the initial imaging breaks through the limitation of rough imaging being possible only for weak scatterers; it also provides an initial image of target scatterers with high contrast.
The remainder of this paper is organized as follows: In Section 2, the formulation of the EM-ISPs is introduced. Section 3 outlines the CNN framework and describes the two-step learning method implementation details, involving how to estimate the signal-space components of the CS. Synthetic data are then tested in Section 4 for performance verification. Finally, we summarize this paper and conclude with directions for future extension, in Section 5.

2. Problem Formulation

For clarity and ease of explanation, this paper investigates two-dimensional (2D) transverse magnetic (TM, i.e., $E_z$ polarization) EM-ISPs. The typical geometric model of the 2D free-space electromagnetic inversion imaging system is shown in Figure 1, where the imaging domain D has a free-space background with permittivity $\epsilon_0$ and the scatterer has relative permittivity $\epsilon_r$. Considering the incidence of 2D TM waves, a transmitter generates time-harmonic electromagnetic waves (with a time factor $e^{i\omega t}$) from different positions around the object and irradiates isotropic, non-magnetic medium scatterers at a single frequency. We measure the scattered field at different positions on a circular orbit centered on the target area, with receivers distributed equidistantly outside the target area. The entire electromagnetic scattering process is represented by two basic electric field integral equations [12] in the observation domain S and the imaging domain D. For convenience, we denote $\epsilon_r(\mathbf{r}) - 1$ as the contrast function $\chi(\mathbf{r})$ of the scatterer, where $\mathbf{r} = (x, y)$ denotes the source point. We define a CS variable (sometimes referred to as the induced current) as the product of the contrast and the internal field at any point in D.
For the sake of our numerical experiments, we solve the discretized version of the Lippmann–Schwinger equation by partitioning DoI into an M × M ( M = 32 ) square grid, using the method of moments (MOM) [41]. The discrete form is as follows:
$\bar{J} = \bar{\bar{\chi}} \cdot \bar{E}_{\mathrm{tot}}$   (1)
$\bar{E}_{\mathrm{tot}} = \bar{E}_{\mathrm{inc}} + \bar{\bar{G}}_D \cdot \bar{J}$   (2)
$\bar{E}_{\mathrm{sca}} = \bar{\bar{G}}_S \cdot \bar{J}$   (3)
Here, $\bar{E}_{\mathrm{tot}}$, $\bar{E}_{\mathrm{sca}}$, and $\bar{E}_{\mathrm{inc}}$ represent the total, scattered, and incident fields, respectively. The scattered field $\bar{E}_{\mathrm{sca}}$ is obtained by subtracting the incident field $\bar{E}_{\mathrm{inc}}$ from the total field $\bar{E}_{\mathrm{tot}}$; it is measured with $N_r$ receivers per illumination, with a total of $N_t$ illuminations in a single experiment. $\bar{J}$ is the CS function. The operators $\bar{\bar{G}}_S$ and $\bar{\bar{G}}_D$ are the discretized Green's functions mapping the imaging domain D to the measurement domain S and the imaging domain D to itself, respectively.
The above three Equations (1)–(3) are the basic equations for solving the EM-ISPs. If Ψ is expressed as an operator for solving the corresponding forward problem, the nonlinear relation between the input and output is represented as follows:
$\bar{E}_{\mathrm{sca}} = \Psi(\bar{\bar{\chi}})$   (4)
The EM-ISPs estimate $\bar{\bar{\chi}}$, given noisy measurements $\bar{E}_{\mathrm{sca}} + \eta$, where $\eta$ denotes noise, for various illuminations; as is typically assumed, $\bar{E}_{\mathrm{inc}}$ is taken to be known. It can be seen from the above equations that solving the EM-ISPs involves inverting a non-linear and ill-conditioned system of equations relating the scattered field to the object contrast. Because many variables appear in these formulas, Table 1 summarizes them for easy comparison and understanding.
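To make the discretized model concrete, the following minimal Python/NumPy sketch assembles Equations (1)–(3) for a single illumination, assuming the Green's matrices $\bar{\bar{G}}_D$ and $\bar{\bar{G}}_S$ have already been computed by a MoM forward solver. The function and variable names are illustrative, not taken from the paper's code.

```python
import numpy as np

def forward_scatter(chi, E_inc, G_D, G_S):
    """Illustrative solve of the discretized forward model, Equations (1)-(3).

    chi   : (M*M,) complex contrast values on the DoI grid (diagonal of the chi matrix)
    E_inc : (M*M,) incident field in the DoI for one illumination
    G_D   : (M*M, M*M) domain Green's operator (DoI -> DoI)
    G_S   : (N_r, M*M) radiation operator (DoI -> receivers)
    Returns the scattered field sampled at the N_r receivers.
    """
    n = chi.size
    X = np.diag(chi)                        # contrast arranged as a diagonal matrix
    # State equation: E_tot = E_inc + G_D @ (X @ E_tot)  =>  (I - G_D X) E_tot = E_inc
    A = np.eye(n) - G_D @ X
    E_tot = np.linalg.solve(A, E_inc)
    J = chi * E_tot                         # contrast source, Eq. (1)
    return G_S @ J                          # data equation, Eq. (3)
```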

3. Theory and Methodology

The equations between the scattered field and the contrast of the object are non-linear and ill-conditioned. Consequently, when solving the EM-ISPs, the system may admit infinitely many solutions, making it challenging to choose a meaningful one. This issue becomes especially pronounced for high-contrast objects or at high operating frequencies. The proposed method is structured into two sequential steps, as depicted in Figure 2. The details of the whole approach are introduced as follows.

3.1. Theoretical Background

In EM-ISPs, the full CS is not known, and it cannot be reconstructed directly from the data equation. Therefore, scholars have proposed SOM algorithms that incorporate subspace techniques based on the CSI method. In the SOM, the singular values of the $\bar{\bar{G}}_S$ operator divide the CS into mutually orthogonal signal- and noise-subspace components, i.e., $\bar{J} = \bar{J}_s + \bar{J}_n$. We perform singular value decomposition (SVD) to obtain $\bar{\bar{G}}_S \cdot \bar{v}_n = \sigma_n \bar{u}_n$, where $\bar{u}_n$, $\bar{v}_n$, and $\sigma_n$ represent the left singular vectors, right singular vectors, and singular values of $\bar{\bar{G}}_S$, respectively, with $\sigma_1 \geq \sigma_2 \geq \cdots \geq \sigma_{M^2} \geq 0$. By considering the orthogonality of the singular vectors and their relationships,
$\bar{J}_s = \sum_{j=1}^{L} \frac{\bar{u}_j^H \cdot \bar{E}_{\mathrm{sca}}}{\sigma_j} \, \bar{v}_j$   (5)
the complete CS can be represented as
$\bar{J} = \bar{J}_s + \bar{\bar{V}}_N \, \bar{\alpha}$   (6)
where the signal subspace is spanned by the first L right singular vectors and the noise subspace is spanned by the remaining ones. We need to determine the number of singular values L used to define the signal subspace in Equation (5). Defining a basis for the latter subspace as $\bar{\bar{V}}_N = [\bar{v}_{L+1}, \ldots, \bar{v}_{M^2}]$, $\bar{\alpha}$ is the coefficient vector of the noise-subspace basis, which is as yet unknown. Based on the singular-value magnitudes relative to the signal-to-noise ratio (SNR), the right singular vectors corresponding to the first L (larger) singular values form a matrix $\bar{\bar{V}}^{+}$, and the remaining $M^2 - L$ right singular vectors form a matrix $\bar{\bar{V}}^{-}$. L is chosen so as to avoid fitting the noise. In the SOM, the unknown parameters $\bar{\alpha}$ and $\bar{\bar{\chi}}$ are updated alternately.
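As an illustration of how the signal-subspace component of Equation (5) and the noise-subspace basis can be obtained, the following NumPy sketch performs the SVD of $\bar{\bar{G}}_S$ for one illumination. The function name is hypothetical, and the choice of L is assumed to come from the SNR, as described above.

```python
import numpy as np

def signal_subspace_cs(G_S, E_sca, L):
    """Compute the signal-subspace component J_s of the contrast source
    from one illumination's scattered-field data, following Eq. (5).

    G_S   : (N_r, M*M) radiation operator
    E_sca : (N_r,) measured scattered field for this illumination
    L     : number of leading singular values kept (e.g., via Morozov's principle)
    """
    U, s, Vh = np.linalg.svd(G_S, full_matrices=True)
    # J_s = sum_{j<=L} (u_j^H E_sca / sigma_j) v_j
    coeffs = (U[:, :L].conj().T @ E_sca) / s[:L]
    J_s = Vh[:L, :].conj().T @ coeffs
    # Basis of the complementary (noise) subspace, spanned by the remaining
    # right singular vectors; its coefficients alpha are what CS-Net must supply.
    V_N = Vh[L:, :].conj().T
    return J_s, V_N
```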
Neural networks are highly effective at parallel computing, making them well-suited to addressing large-scale EM-ISPs. CNNs excel in analyzing boundary changes in input data using convolutional modules, and they are particularly adept at handling matrix-structured data. Their robust nonlinear fitting capabilities allow for the establishment of complex one-to-one mappings between input and output images, facilitating accurate regression on image-like data. Consequently, the EM-ISPs can be modeled as an input–output mapping relationship. By training a neural network model with a series of data samples, we can tackle nonlinear issues while benefiting from the model’s strong robustness against noise and uncertainties in the input data.

3.2. Initial Guess (Step 1)

To begin, we employ the CS-Net framework to learn the component of the CS lying in the noise subspace of the radiation operator; the CS-Net is trained specifically for this purpose. Subsequently, the coarse contrast is derived from the restored CS. The architecture of the CS-Net, employed for the estimation of the noise-subspace component, is illustrated in Figure 3.
Similar to the method in the SOM, we first perform the SVD of the Green's function operator $\bar{\bar{G}}_S$. As an initial guess of the CS, we use the signal-subspace component $\bar{J}_s$, which contains the most important information but lacks the information contained in $\bar{J}_n$. To compensate for this missing information, the signal-subspace component $\bar{J}_s$ is used as the input to the CS-Net in the first step. Since $\bar{J}_s$ is complex-valued, we separate its real and imaginary parts into two channels; thus, the input of the CS-Net is a $2N_i$-channel real-valued concatenation of $\bar{J}_s$. The CS-Net aims to reconstruct the full CS $\bar{J}$ from the input $\bar{J}_s$ for each incidence. After the CS-Net predicts the CS $\bar{J}$ for every incidence, the predicted contrast can be calculated from the basic Equations (1)–(3). Each emission scenario generates a corresponding CS image, which is utilized to train the CS-Net for recovering the full CS. The network is trained using the average mean squared loss between the estimated and the true CS, and the network parameters are optimized using the Adam optimizer with a learning rate of $10^{-4}$. Afterwards, the restored CS is also used for SOM imaging, to compare with our proposed two-step CS learning method; that optimization process terminates when either the relative change in the cost function drops below $10^{-4}$ or 2000 iterations are reached, whichever comes first.
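A compact PyTorch sketch of this first step is given below. It follows the multi-scale first layer described in Figure 3 and the training settings quoted above (Adam, learning rate $10^{-4}$, mean-squared loss), but it replaces the fully connected middle stage of Figure 3 with 1 x 1 convolutions to keep the example short; this is an assumed simplification, not the authors' implementation.

```python
import torch
import torch.nn as nn

class CSNetSketch(nn.Module):
    """Simplified stand-in for the CS-Net of Figure 3 (assumed layout).

    Input : (batch, 2*N_i, 32, 32) real/imag-stacked signal-subspace CS J_s.
    Output: (batch, 2*N_i, 32, 32) estimate of the full CS J.
    """
    def __init__(self, n_channels=32):
        super().__init__()
        # Multi-scale first layer: 3x3, 5x5, 7x7, 9x9 filters, 8 channels each.
        self.branches = nn.ModuleList([
            nn.Conv2d(n_channels, 8, k, padding=k // 2) for k in (3, 5, 7, 9)
        ])
        # Stand-in for the fully connected middle stage of Figure 3.
        self.mid = nn.Sequential(
            nn.Conv2d(32, 64, 1), nn.ReLU(),
            nn.Conv2d(64, 64, 1), nn.ReLU(),
            nn.Conv2d(64, 16, 1), nn.ReLU(),
        )
        self.out = nn.Conv2d(16, n_channels, 3, padding=1)

    def forward(self, x):
        x = torch.cat([b(x) for b in self.branches], dim=1)  # -> 32 channels
        return self.out(self.mid(x))

def train_step(model, optimizer, J_s, J_true, loss_fn=nn.MSELoss()):
    """One Adam update minimizing the MSE between estimated and true CS."""
    optimizer.zero_grad()
    loss = loss_fn(model(J_s), J_true)
    loss.backward()
    optimizer.step()
    return loss.item()

# model = CSNetSketch(); optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
```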

3.3. Fine Imaging (Step 2)

In the second step, the U-Net [42] takes as input the coarse contrast image recovered by the CS-Net and generates an improved estimate of the contrast image. The U-Net model for the second step is benchmarked in MATLAB 2019a, using the Deep Learning Toolbox. An adaptive moment estimation optimizer is employed, to minimize the half-mean-squared-error loss function, while a dropout regularization rate of 0.2 is applied, to reduce overfitting. Due to the down-sampling operations and batch normalization (BN) structure in the U-Net CNN architecture, it is particularly well-suited for addressing EM-ISPs [43]. The chosen loss function is the mean square error (MSE), calculated between the true labels and the predicted pixel responses. The architecture details of the U-Net used for fine imaging are described in detail in Figure 4. Therefore, the initially retrieved result is further enhanced by the U-Net, to achieve improved reconstruction. However, the potential prior information acquired by the CS-Net and the U-Net throughout the two-step process has yet to be fully understood and explored.
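The following PyTorch sketch shows a minimal two-level encoder-decoder in the spirit of Figure 4 (3 x 3 convolution + BN + ReLU, 2 x 2 max-pooling, up-convolution, and skip connections), with a 0.2 dropout as mentioned above. The channel counts and depth are assumptions, and the original model was implemented in MATLAB rather than Python.

```python
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    """3x3 convolution + batch normalization + ReLU (applied twice per stage)."""
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.BatchNorm2d(c_out), nn.ReLU(),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.BatchNorm2d(c_out), nn.ReLU(),
    )

class UNetSketch(nn.Module):
    """Minimal two-level U-Net sketch mapping the coarse 1-channel 32x32
    contrast image from Step 1 to a refined contrast image."""
    def __init__(self):
        super().__init__()
        self.enc1 = conv_block(1, 32)
        self.enc2 = conv_block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.bottom = conv_block(64, 128)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = conv_block(128, 64)
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = conv_block(64, 32)
        self.head = nn.Conv2d(32, 1, 1)
        self.drop = nn.Dropout2d(0.2)      # dropout rate 0.2, as in the text

    def forward(self, x):
        e1 = self.enc1(x)                          # 32 x 32
        e2 = self.enc2(self.pool(e1))              # 16 x 16
        b = self.drop(self.bottom(self.pool(e2)))  # 8 x 8
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)
```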

3.4. Image Evaluation

To compare different schemes, we utilize three key quantitative metrics: the structural similarity index measure (SSIM) [44], the peak signal-to-noise ratio (PSNR), and the equivalent number of looks (ENL). These metrics serve as indicators to assess the fidelity of reconstructed contrast images. Among them, higher SSIM and PSNR values indicate greater similarity and higher quality between the true profile image and its reconstructed counterpart. Conversely, a lower ENL indicates improved clarity and reduced noise in the image.
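For reference, SSIM and PSNR between a true and a reconstructed contrast image can be computed as in the short sketch below, using scikit-image; the paper's exact ENL convention is not spelled out in the text, so ENL is omitted from this sketch.

```python
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

def image_quality(chi_true, chi_rec):
    """SSIM and PSNR between the true and reconstructed contrast images."""
    rng = float(chi_true.max() - chi_true.min())  # dynamic range of the truth
    ssim = structural_similarity(chi_true, chi_rec, data_range=rng)
    psnr = peak_signal_noise_ratio(chi_true, chi_rec, data_range=rng)
    return ssim, psnr
```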

3.5. Computational Complexity

Before the network training begins, performing a thin SVD on the matrix $\bar{\bar{G}}_S$ has a computational complexity of $O(N_r^2 M^2)$. In the first step, the CS-Net is primarily responsible for reconstructing the complete CS, with a relatively lightweight network architecture designed to handle the determined part of the CS. The computational complexity of this step is mainly due to the convolution operations, with a time complexity of approximately $O(N M^2 K)$, where N is the number of samples, $M^2$ is the size of the input feature maps, and K is the number of convolutional kernels. In the second step, the U-Net is used to further optimize the contrast images. Although the U-Net has relatively higher computational complexity, the use of skip connections and multi-scale feature-extraction mechanisms effectively reduces the computational burden. The complexity of the U-Net is primarily influenced by convolution and pooling operations, with a time complexity expressed as $O(p q^2 r)$, where p is the number of samples, $q^2$ is the size of the input feature maps, and r is the number of convolutional kernels. Such computations can be accelerated by using a GPU. In contrast, the traditional SOM has a computational complexity of $O(I N_i M^2 \log M)$, where I is the total number of iterations, $N_i$ is the number of illuminations, and $M^2$ is the number of pixels in the imaging domain. Each gradient and step calculation for a single view requires $O(M^2 \log M)$ rather than $O(M^4)$, due to the use of fast Fourier transform (FFT) operations for all matrix-vector computations involving the matrix $\bar{\bar{G}}_D$.
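The $O(M^2 \log M)$ cost quoted for applying $\bar{\bar{G}}_D$ rests on the fact that, for a homogeneous background on a uniform grid, the domain Green's operator acts as a 2-D convolution. A minimal NumPy sketch of this FFT-based matrix-vector product is shown below; construction of the convolution kernel itself is assumed to be done elsewhere.

```python
import numpy as np

def apply_gd_fft(kernel, J):
    """Apply the domain Green's operator G_D to a contrast source image J
    (M x M) in O(M^2 log M) via FFT-based convolution.

    kernel: (2M-1) x (2M-1) array with kernel[i, j] = g(offset), sampled on all
            grid offsets (assumes a translation-invariant background).
    J:      (M, M) complex contrast source image.
    """
    M = J.shape[0]
    P = 2 * M - 1                        # circular-convolution size (alias-free here)
    K_hat = np.fft.fft2(kernel, s=(P, P))
    J_hat = np.fft.fft2(J, s=(P, P))
    full = np.fft.ifft2(K_hat * J_hat)   # convolution over the padded grid
    # Samples aligned with the original grid start at offset M-1.
    return full[M - 1:2 * M - 1, M - 1:2 * M - 1]
```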

4. Numerical Example

To assess the effectiveness and accuracy of the proposed two-step CS learning approach, experiments were conducted using the Modified National Institute of Standards and Technology (MNIST) dataset [45]. The MNIST dataset consists of images of handwritten digits and is well-standardized, with a broad range of applications. Although it does not directly represent EM-ISPs, using this dataset allows the algorithm's fundamental image-processing and generalization capabilities to be tested. Specifically, the testing samples were digit images extracted from the MNIST test dataset that had not been encountered by the CS-Net and U-Net during the training phase. This section presents our results, using synthetic data to assess the performance of the proposed two-step approach in reconstructing the contrast from the scattered field. In all the tests, the reconstructions were performed at a fixed frequency, without employing frequency-hopping techniques. Moreover, as a comparison, the two-step enhanced deep learning approach [34] was also employed to reconstruct the contrast of the testing samples. All the numerical experiments in this paper were performed on an Intel(R) Xeon(R) processor running at 2.10 GHz with 128 GB RAM.

4.1. Configuration of the Scattering System

In our numerical experiments, we utilized the MNIST database to generate a scattered field, using a forward solver [41]. The scattered field was intentionally corrupted with additive Gaussian noise during each reconstruction. Under our experimental conditions, synthetic noise with a signal-to-noise ratio (SNR) of 25 dB was applied to the scattered field. Consequently, the optimal number of singular values, denoted as L = 19 , was determined, based on Morozov’s discrepancy principle [46]. This optimal choice varied with the SNR levels observed in the measurements. Based on MNIST, the contrast of the number-shaped objects was set between 1.0 and 7.0 with a free space background. Throughout, the incident field frequency was 400 MHz (i.e., wavelength λ = 0.75 m ). Measurements were taken along a circular path with a radius R = 4 m , centered precisely on the DoI. The measurement setup included N r = 32 receivers and N t = 16 transceivers uniformly distributed in an equi-angular manner across the measurement domain.
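For reproducibility, additive Gaussian noise at a prescribed SNR (25 dB or 15 dB in the experiments below) can be generated as in the following sketch; the complex-noise convention used here is an assumption.

```python
import numpy as np

def add_noise(E_sca, snr_db, seed=0):
    """Corrupt scattered-field data with complex additive Gaussian noise
    such that the resulting SNR (in dB) matches snr_db."""
    rng = np.random.default_rng(seed)
    p_signal = np.mean(np.abs(E_sca) ** 2)
    p_noise = p_signal / (10.0 ** (snr_db / 10.0))
    noise = np.sqrt(p_noise / 2.0) * (rng.standard_normal(E_sca.shape)
                                      + 1j * rng.standard_normal(E_sca.shape))
    return E_sca + noise
```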

4.2. Test Using MNIST Database with SNR = 25 dB

In the first example, the scattered field was corrupted by additive Gaussian noise such that the SNR was 25 dB. The reconstruction results of six randomly selected examples, labeled Test(1) through Test(6), are displayed in Figure 5. Quantitatively, the reconstructed results of the different methods were evaluated, and the corresponding image quality metrics ENL, PSNR, and SSIM are listed in Table 2. Figure 5 shows the ground truth images of the testing samples alongside the images reconstructed using the different methods: the SOM with the CS recovered by the CS-Net, the first and second steps of the two-step enhanced deep learning approach by Yao et al. [34], and the first and second steps of our proposed two-step CS learning method. Clearly, the final outputs of the proposed method were a much better reconstruction of the ground truth. Table 2 shows that incorporating prior labeled data enhanced the reconstruction quality, surpassing both the SOM and the two-step enhanced deep learning method in accuracy and efficiency. The contrast image converted directly from the CS restored by the CS-Net shows that, although the contrast value of the target scatterer and the edges of the contrast image were still not reconstructed accurately enough, the approximate position and contour of the target scatterer could already be discerned. It should also be noted that the metrics of the proposed method in the second row (Test 2) slightly outperformed those of the SOM. However, challenges include potential local-minima issues when using the CS restored by the CS-Net for SOM imaging. Figure 6 shows the statistical analysis of the testing results, confirming that our proposed two-step CS learning method significantly improves performance in solving EM-ISPs.

4.3. Test Using MNIST Database with SNR = 15 dB

To verify the robustness of the method, in the second example the scattered field was corrupted with additive Gaussian noise such that the SNR was 15 dB. The reconstruction results for Test(1)–Test(6) are presented in Figure 7, the corresponding quality metrics SSIM, PSNR, and ENL are detailed in Table 3, and the statistical analysis of the testing results is presented in Figure 8. Based on the imaging results and a comprehensive analysis of the three evaluation indicators, it is evident that, for scatterers with a contrast exceeding 2.0, the proposed method demonstrated superior performance compared to SOM imaging, in terms of both reconstructed contours and image clarity. Compared to the two-step enhanced deep learning approach [34], our proposed CS learning method exhibited stronger robustness. It should also be noted that the contrast imaging result of our proposed method in the first row (Test 1), with a contrast value of 2.0, was slightly better than that of the SOM. However, while the SOM method requires thousands of iterations and an imaging time of four to five minutes, our method achieves near-real-time imaging.

4.4. Test Using Austria Profile with SNR = 25 dB

Next, the proposed scheme was evaluated using the Austria profile with a contrast value of 2.0, which consists of one central ring and two disks. The reconstruction results for the Austria profile are presented in Figure 9, and the evaluation index values for the image reconstructions are presented in Table 4. The imaging results show that using the CS recovered by the CS-Net for SOM imaging of the low-contrast Austria target yields promising outcomes. However, the SOM required thousands of iterations, resulting in prolonged imaging times exceeding five minutes, whereas the proposed two-step CS method achieved near-real-time imaging.

5. Conclusions

In this study, we propose a novel two-step CS learning approach to solving EM-ISPs. Our method employs a two-step process for CS and contrast image reconstruction, validated through simulation data that demonstrate effective reconstruction capabilities, feasibility, and efficiency. Initially, the CS-Net integrates physical insights to restore the complete CS and converts the predicted complete CS into a contrast image, which is then used as input for the U-Net. The U-Net refines these initial results, leveraging the previously obtained rough contrasts to progressively improve the image quality from coarse to fine. Utilizing prior information in the CS-Net noticeably improves reconstruction outcomes, and the U-Net enhances the initial reconstruction, yielding clearer and more accurate images. As a result, our approach achieves significantly improved accuracy in solving EM-ISPs, especially for high-contrast scatterers (up to a contrast of 7.0). The proposed approach offers a promising avenue for advancing EM-ISPs by integrating strengths from both traditional and deep learning-based approaches, to achieve real-time quantitative microwave imaging. The parameterization of EM-ISPs using DNNs facilitates GPU-friendly processing, due to high parallelization. Interpretability remains a concern in the CNN steps, and the inversion accuracy relies on the proximity of the input image quality to the reference. Future work could focus on enhancing the CS-Net model structure and refining the training parameters to improve noise-subspace estimation and reduce the occurrence of local minima. Additionally, future research should focus on optimizing computational efficiency through improved model initialization and GPU parallelization. While the MNIST database was used to test the effectiveness of this method in terms of simplicity and generality, its direct applicability to EM-ISPs is limited, as it does not represent the real physical properties of scattering objects. Future work should involve testing with data that more accurately reflect actual EM-ISPs. Moreover, although the results of the proposed method are promising, further research is needed, particularly in enhancing its limited-aperture inversion capability and adapting it to 3D scenes.

Author Contributions

Conceptualization, A.S. and M.W.; methodology, A.S. and F.F.; software, M.W. and F.F.; validation, D.D. and M.W.; formal analysis, M.W.; investigation, A.S.; resources, D.D.; data curation, A.S. and F.F.; writing—original draft preparation, A.S.; writing—review and editing, M.W.; visualization, D.D.; supervision, D.D.; project administration, D.D.; funding acquisition, D.D. All authors have read and agreed to the published version of the manuscript.

Funding

National Natural Science Foundation of China under Grant 62231026.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
BA	Born approximation
BIM	Born iteration method
BN	batch normalization
BP	back-propagation
CG	conjugate gradient
CNN	convolutional neural network
CSI	contrast source inversion method
CS	contrast source
CS-Net	contrast source net
DCS	dominant current scheme
DIS	direct inversion scheme
DNNs	deep neural networks
DoI	domain of interest
DT	diffraction tomography
EM-ISPs	electromagnetic inverse scattering problems
ENL	equivalent number of looks
FFT	fast Fourier transform
MNIST	the Modified National Institute of Standards and Technology
MOM	method of moments
MSE	mean square error
PSNR	peak signal-to-noise ratio
PSO	particle swarm optimization
RA	Rytov approximation
ReLU	rectified linear unit
SNR	signal-to-noise ratio
SOM	subspace-based optimization method
SSIM	structural similarity index measurement
SVD	singular value decomposition
TM	transverse magnetic

References

  1. Tang, F.; Ji, Y.; Zhang, Y.; Dong, Z.; Wang, Z.; Zhang, Q.; Zhao, B.; Gao, H. Drifting ionospheric scintillation simulation for L-band geosynchronous SAR. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2023. [Google Scholar] [CrossRef]
  2. Kagiwada, H.; Kalaba, R.; Timko, S.; Ueno, S. Associate memories for system identification: Inverse problems in remote sensing. Math. Comput. Model. 1990, 14, 200–202. [Google Scholar] [CrossRef]
  3. Randazzo, A.; Ponti, C.; Fedeli, A.; Estatico, C.; D’Atanasio, P.; Pastorino, M.; Schettini, G. A two-step inverse-scattering technique in variable-exponent Lebesgue spaces for through-the-wall microwave imaging: Experimental results. IEEE Trans. Geosci. Remote Sens. 2021, 59, 7189–7200. [Google Scholar] [CrossRef]
  4. Nie, Z.; Yang, F.; Zhao, Y.; Zhang, Y. Variational Born iteration method and its applications to hybrid inversion. IEEE Trans. Geosci. Remote Sens. 2000, 38, 1709–1715. [Google Scholar]
  5. Van den Berg, P.M.; Abubakar, A. Contrast source inversion method: State of art. Prog. Electromagn. Res. 2001, 34, 189–218. [Google Scholar] [CrossRef]
  6. Sun, S.; Kooij, B.J.; Jin, T.; Yarovoy, A.G. Cross-correlated contrast source inversion. IEEE Trans. Antennas Propag. 2017, 65, 2592–2603. [Google Scholar] [CrossRef]
  7. Sun, S.; Dai, D.; Wang, X. A fast algorithm of cross-correlated contrast source inversion in homogeneous background media. IEEE Trans. Antennas Propag. 2023, 71, 4380–4393. [Google Scholar] [CrossRef]
  8. Chen, X. Subspace-based optimization method for solving inverse-scattering problems. IEEE Trans. Geosci. Remote Sens. 2009, 48, 42–49. [Google Scholar] [CrossRef]
  9. Zhong, Y.; Chen, X. Twofold subspace-based optimization method for solving inverse scattering problems. Inverse Probl. 2009, 25, 085003. [Google Scholar] [CrossRef]
  10. Pastorino, M.; Massa, A.; Caorsi, S. A microwave inverse scattering technique for image reconstruction based on a genetic algorithm. IEEE Trans. Instrum. Meas. 2000, 49, 573–578. [Google Scholar] [CrossRef]
  11. Yang, C.X.; Zhang, J.; Tong, M.S. A hybrid quantum-behaved particle swarm optimization algorithm for solving inverse scattering problems. IEEE Trans. Antennas Propag. 2021, 69, 5861–5869. [Google Scholar] [CrossRef]
  12. Chen, X. Computational Methods for Electromagnetic Inverse Scattering; John Wiley & Sons: Hoboken, NJ, USA, 2018. [Google Scholar]
  13. Wang, M.; Sun, S.; Dai, D.; Zhang, Y.; Su, Y. Coherence Factor-Based Polarimetric Diffraction Tomography for 3-D Inverse Scattering with a Sparse Planar Array. IEEE Trans. Geosci. Remote Sens. 2024, 62, 2002314. [Google Scholar] [CrossRef]
  14. Zhang, L.; Xu, K.; Song, R.; Ye, X.; Wang, G.; Chen, X. Learning-based quantitative microwave imaging with a hybrid input scheme. IEEE Sens. J. 2020, 20, 15007–15013. [Google Scholar] [CrossRef]
  15. Salucci, M.; Arrebola, M.; Shan, T.; Li, M. Artificial intelligence: New frontiers in real-time inverse scattering and electromagnetic imaging. IEEE Trans. Antennas Propag. 2022, 70, 6349–6364. [Google Scholar] [CrossRef]
  16. Wang, Y.; Zong, Z.; He, S.; Wei, Z. Multiple-space deep learning schemes for inverse scattering problems. IEEE Trans. Geosci. Remote Sens. 2023, 61, 2000511. [Google Scholar] [CrossRef]
  17. Chiu, C.C.; Lee, Y.H.; Chen, P.H.; Shih, Y.C.; Hao, J. Application of Self-Attention Generative Adversarial Network for Electromagnetic Imaging in Half-Space. Sensors 2024, 24, 2322. [Google Scholar] [CrossRef]
  18. Wu, Z.; Zhao, F.; Zhang, M.; Huan, S.; Pan, X.; Chen, W.; Yang, L. Fast Near-Field Frequency-Diverse Computational Imaging Based on End-to-End Deep-Learning Network. Sensors 2022, 22, 9771. [Google Scholar] [CrossRef]
  19. Guo, M.F.; Zeng, X.D.; Chen, D.Y.; Yang, N.C. Deep-learning-based earth fault detection using continuous wavelet transform and convolutional neural network in resonant grounding distribution systems. IEEE Sens. J. 2017, 18, 1291–1300. [Google Scholar] [CrossRef]
  20. Khoshdel, V.; Ashraf, A.; LoVetri, J. Enhancement of multimodal microwave-ultrasound breast imaging using a deep-learning technique. Sensors 2019, 19, 4050. [Google Scholar] [CrossRef]
  21. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. Imagenet classification with deep convolutional neural networks. Adv. Neural Inf. Process. Syst. 2012, 25. [Google Scholar] [CrossRef]
  22. Zhang, T.; Wang, Z.; Cheng, P.; Xu, G.; Sun, X. DCNNet: A distributed convolutional neural network for remote sensing image classification. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5603618. [Google Scholar] [CrossRef]
  23. Guo, R.; Huang, T.; Li, M.; Zhang, H.; Eldar, Y.C. Physics-embedded machine learning for electromagnetic data imaging: Examining three types of data-driven imaging methods. IEEE Signal Process. Mag. 2023, 40, 18–31. [Google Scholar] [CrossRef]
  24. Xu, K.; Qian, Z.; Zhong, Y.; Su, J.; Gao, H.; Li, W. Learning-assisted inversion for solving nonlinear inverse scattering problem. IEEE Trans. Microw. Theory Tech. 2022, 71, 2384–2395. [Google Scholar] [CrossRef]
  25. Massa, A.; Marcantonio, D.; Chen, X.; Li, M.; Salucci, M. DNNs as applied to electromagnetics, antennas, and propagation—A review. IEEE Antennas Wirel. Propag. Lett. 2019, 18, 2225–2229. [Google Scholar] [CrossRef]
  26. Li, L.; Wang, L.G.; Teixeira, F.L.; Liu, C.; Nehorai, A.; Cui, T.J. DeepNIS: Deep neural network for nonlinear electromagnetic inverse scattering. IEEE Trans. Antennas Propag. 2018, 67, 1819–1825. [Google Scholar] [CrossRef]
  27. Chen, G.; Shah, P.; Stang, J.; Moghaddam, M. Learning-assisted multimodality dielectric imaging. IEEE Trans. Antennas Propag. 2019, 68, 2356–2369. [Google Scholar] [CrossRef]
  28. Wei, Z.; Chen, X. Deep-learning schemes for full-wave nonlinear inverse scattering problems. IEEE Trans. Geosci. Remote Sens. 2018, 57, 1849–1860. [Google Scholar] [CrossRef]
  29. Wu, Z.; Peng, Y.; Wang, P.; Wang, W.; Xiang, W. A physics-induced deep learning scheme for electromagnetic inverse scattering. IEEE Trans. Microw. Theory Tech. 2024, 72, 927–947. [Google Scholar] [CrossRef]
  30. Xue, B.W.; Guo, R.; Li, M.K.; Sun, S.; Pan, X.M. Deep-learning-equipped iterative solution of electromagnetic scattering from dielectric objects. IEEE Trans. Antennas Propag. 2023, 71, 5954–5966. [Google Scholar] [CrossRef]
  31. Xue, F.; Guo, L.; Abbosh, A. Microwave imaging using cascaded convolutional neural networks. In Proceedings of the 2023 5th Australian Microwave Symposium (AMS), Melbourne, Australia, 16–17 February 2023; pp. 47–48. [Google Scholar]
  32. Wang, M.; Sun, S.; Dai, D.; Su, Y.; Wu, M. Quantitative diffraction tomography for weak scatterers based on aliasing modification of the multifrequency spatial spectrum. IEEE Trans. Geosci. Remote Sens. 2023, 61, 2002214. [Google Scholar] [CrossRef]
  33. Sanghvi, Y.; Kalepu, Y.; Khankhoje, U.K. Embedding deep learning in inverse scattering problems. IEEE Trans. Comput. Imaging 2019, 6, 46–56. [Google Scholar] [CrossRef]
  34. Yao, H.M.; Wei, E.; Jiang, L. Two-step enhanced deep learning approach for electromagnetic inverse scattering problems. IEEE Antennas Wirel. Propag. Lett. 2019, 18, 2254–2258. [Google Scholar] [CrossRef]
  35. Chen, X.; Wei, Z.; Maokun, L.; Rocca, P. A review of deep learning approaches for inverse scattering problems (invited review). Electromagn. Waves 2020, 167, 67–81. [Google Scholar] [CrossRef]
  36. Zhang, Y.; Lambert, M.; Fraysse, A.; Lesselier, D. Unrolled convolutional neural network for full-wave inverse scattering. IEEE Trans. Antennas Propag. 2022, 71, 947–956. [Google Scholar] [CrossRef]
  37. Xu, K.; Zhang, C.; Ye, X.; Song, R. Fast full-wave electromagnetic inverse scattering based on scalable cascaded convolutional neural networks. IEEE Trans. Geosci. Remote Sens. 2021, 60, 2001611. [Google Scholar] [CrossRef]
  38. Yao, H.M.; Ng, M.; Jiang, L. Deep Learning Electromagnetic Inversion Solver Based on Two-Step Framework for High-Contrast and Heterogeneous Scatterers. IEEE Trans. Antennas Propag. 2024. [Google Scholar] [CrossRef]
  39. Zhang, H.H.; Yao, H.M.; Jiang, L.; Ng, M. Enhanced two-step deep-learning approach for electromagnetic-inverse-scattering problems: Frequency extrapolation and scatterer reconstruction. IEEE Trans. Antennas Propag. 2022, 71, 1662–1672. [Google Scholar] [CrossRef]
  40. Si, A.; Dai, D.; Wang, M.; Fang, F. Two Steps Electromagnetic Quantitative Inversion Imaging Based on Convolutional Neural Network. In Proceedings of the 2024 5th International Conference on Geology, Mapping and Remote Sensing (ICGMRS), Wuhan, China, 12–14 April 2024; pp. 28–32. [Google Scholar]
  41. Gibson, W.C. The Method of Moments in Electromagnetics; Chapman and Hall/CRC: London, UK, 2021. [Google Scholar]
  42. Siddique, N.; Paheding, S.; Elkin, C.P.; Devabhaktuni, V. U-net and its variants for medical image segmentation: A review of theory and applications. IEEE Access 2021, 9, 82031–82057. [Google Scholar] [CrossRef]
  43. Ioffe, S.; Szegedy, C. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. arXiv 2015, arXiv:1502.03167. [Google Scholar]
  44. Huang, Y.; Song, R.; Xu, K.; Ye, X.; Li, C.; Chen, X. Deep learning-based inverse scattering with structural similarity loss functions. IEEE Sens. J. 2020, 21, 4900–4907. [Google Scholar] [CrossRef]
  45. Deng, L. The mnist database of handwritten digit images for machine learning research [best of the web]. IEEE Signal Process. Mag. 2012, 29, 141–142. [Google Scholar] [CrossRef]
  46. Anzengruber, S.W.; Ramlau, R. Convergence rates for Morozov’s discrepancy principle using variational inequalities. Inverse Probl. 2011, 27, 105007. [Google Scholar] [CrossRef]
Figure 1. Geometry of EM-ISPs in 2D case with TM illuminations.
Figure 2. Overall flowchart of the proposed two-step CS learning method for EM-ISPs. In Step 1, the Green’s function operator is subjected to SVD based on scattered field data. The dominant CS obtained from SVD is used as the input for the CS-Net, which generates an initial estimate of the total CS. This estimated CS is then directly converted into a contrast image. In Step 2, the coarse contrast image is refined, using the U-Net to obtain the final solution.
Figure 3. Architecture for CS-Net in Step 1. The input of the CS-Net is the signal subspace of the CS, which is represented as 32 × 32 image with V = 32 channels. The output of the CS-Net is an estimate of the true CS. The first layer performs convolution with different filter sizes, i.e., (3 × 3), (5 × 5), (7 × 7), and (9 × 9), each with eight channels, and stacks the filter activations, to form a 32 × 32 image with 32 channels. The image is then vectorized and passed through three fully connected layers, each with a ReLU activation. The output vector is reshaped to a 16-channel 32 × 32 image again, and a last convolution layer is used to generate the estimate of the true CS.
Figure 4. Architecture for the U-Net in Step 2. The encoding part is equipped with the repeated application of 3 × 3 convolution, BN, and a rectified linear unit (ReLU) and 2 × 2 max-pooling operation. Meanwhile, the decoding part is armed with the repeated application of 3 × 3 up-convolution, BN, ReLU, and a concatenation operation with skip connection.
Figure 5. Estimated contrast images on MNIST test images, SNR = 25 dB: (a) ground truth; (b) reconstruction results of the object from the SOM, using the CS recovered by the CS-Net; (c) reconstructed rough contrast image from the first step of the two-step enhanced deep learning approach [34]; (d) reconstructed final contrast image from the second step of the two-step enhanced deep learning approach [34]; (e) direct conversion of the CS recovered by the CS-Net into contrast imaging results; (f) final contrast image refined by the U-Net in the second step of the proposed method.
Figure 6. ENL, PSNR, and SSIM statistical histograms of the reconstructed contrast image quality, SNR = 25 dB: (a) results obtained from the SOM using the CS recovered by the CS-Net; (b) results obtained from the first step of the two-step enhanced deep learning approach [34]; (c) results obtained from the second step of the two-step enhanced deep learning approach [34]; (d) results obtained from the first step of the proposed two-step CS learning method; (e) results obtained from the second step of the proposed two-step CS learning method.
Figure 7. Estimated contrast images on MNIST test images, SNR = 15 dB: (a) ground truth; (b) reconstruction results of the object from the SOM, using the CS recovered by the CS-Net; (c) reconstructed rough contrast image from the first step of the two-step enhanced deep learning approach [34]; (d) reconstructed final contrast image from the second step of the two-step enhanced deep learning approach [34]; (e) direct conversion of the CS recovered by the CS-Net into contrast imaging results; (f) final contrast image refined by the U-Net in the second step of the proposed method.
Figure 8. ENL, PSNR and SSIM statistical histograms of the reconstructed contrast image quality, SNR = 15 dB: (a) results obtained from the SOM using the CS recovered by the CS-Net; (b) results obtained from the first step of the two-step enhanced deep learning approach [34]; (c) results obtained from the second step of the two-step enhanced deep learning approach [34]; (d) results obtained from the first step of the proposed two-step CS learning method; (e) results obtained from the second step of the proposed two-step CS learning method.
Figure 9. Estimated contrast image on the Austria profile, SNR = 25 dB: (a) true Austria profile; (b) reconstruction results of the object from the SOM using the CS recovered by the CS-Net; (c) reconstructed rough contrast image from the first step of the two-step enhanced deep learning approach [34]; (d) reconstructed final contrast image from the second step of the two-step enhanced deep learning approach [34]; (e) direct conversion of the CS recovered by the CS-Net into contrast imaging results; (f) final contrast image refined by the U-Net in the second step of the proposed method.
Table 1. For the p-th ($p = 1, 2, \ldots, N_t$) illumination, the relevant variables used in the above formulas.

Variable Name | Size | Meaning
$\bar{E}_{\mathrm{tot}}$ | $\mathbb{C}^{M^2 \times 1}$ | The total field data in the DoI.
$\bar{E}_{\mathrm{inc}}$ | $\mathbb{C}^{M^2 \times 1}$ | The incident field data in the DoI.
$\bar{E}_{\mathrm{sca}}$ | $\mathbb{C}^{N_r \times 1}$ | The scattered field data on domain S.
$\bar{J}$ | $\mathbb{C}^{M^2 \times 1}$ | The CS function in the DoI.
$\bar{\bar{\chi}}$ | $\mathbb{C}^{M^2 \times 1}$ | The diagonal matrix of the contrast function.
$\bar{\bar{G}}_S$ | $\mathbb{C}^{N_r \times M^2}$ | The radiation operator (the DoI to the domain S).
$\bar{\bar{G}}_D$ | $\mathbb{C}^{M^2 \times M^2}$ | The radiation operator (the DoI to the DoI).
Table 2. Metrics of different methods, SNR = 25 dB.

Metric | (b) | (c) | (d) | (e) | (f)
Test(1): first row, digit "7"
SSIM | 0.2201 | 0.1064 | 0.2836 | 0.4003 | 0.7175 ↑
PSNR | 14.1030 | 11.8024 | 12.3382 | 18.2844 | 24.4498 ↑
ENL | 0.3615 | 0.1303 | 0.1754 | 0.2353 | 0.1433 ↓
Test(2): second row, digit "1"
SSIM | 0.3532 | 0.1722 | 0.3311 | 0.5839 | 0.8924 ↑
PSNR | 22.9435 | 17.6328 | 14.1315 | 21.5712 | 30.3555 ↑
ENL | 0.1390 | 0.0487 | 0.0767 | 0.1070 | 0.0760 ↓
Test(3): third row, digit "0"
SSIM | 0.2861 | 0.2054 | 0.3219 | 0.5883 | 0.7931 ↑
PSNR | 19.7904 | 11.9012 | 12.2882 | 22.8354 | 24.7904 ↑
ENL | 0.5392 | 0.2267 | 0.2296 | 0.3488 | 0.2241 ↓
Test(4): fourth row, digit "4"
SSIM | 0.2369 | 0.1621 | 0.3483 | 0.4007 | 0.7147 ↑
PSNR | 17.9987 | 13.2725 | 14.0838 | 14.6851 | 19.2040 ↑
ENL | 0.3461 | 0.2015 | 0.1606 | 0.2630 | 0.1520 ↓
Test(5): fifth row, digit "9"
SSIM | 0.6835 | 0.2170 | 0.3672 | 0.6761 | 0.8810 ↑
PSNR | 23.4289 | 15.2951 | 17.0325 | 19.4508 | 28.7113 ↑
ENL | 0.2665 | 0.1842 | 0.2059 | 0.2910 | 0.1988 ↓
Test(6): sixth row, digit "6"
SSIM | 0.2442 | 0.1957 | 0.3692 | 0.3248 | 0.8093 ↑
PSNR | 19.4901 | 16.8602 | 17.4904 | 18.5999 | 21.6365 ↑
ENL | 0.4229 | 0.3294 | 0.3005 | 0.4351 | 0.2793 ↓
Table 3. Metrics of different methods, SNR = 15 dB.

Metric | (b) | (c) | (d) | (e) | (f)
Test(1): first row, digit "2"
SSIM | 0.5584 | 0.0837 | 0.1613 | 0.4605 | 0.7858 ↑
PSNR | 24.4427 | 9.1308 | 10.5866 | 18.2878 | 25.2928 ↑
ENL | 0.3148 | 0.0950 | 0.2762 | 0.3246 | 0.2074 ↓
Test(2): second row, digit "9"
SSIM | 0.2649 | 0.2029 | 0.3932 | 0.3721 | 0.7834 ↑
PSNR | 17.6125 | 13.9164 | 15.1215 | 19.9382 | 24.2291 ↑
ENL | 0.4853 | 0.1522 | 0.2116 | 0.3396 | 0.2251 ↓
Test(3): third row, digit "4"
SSIM | 0.1715 | 0.0825 | 0.2969 | 0.3782 | 0.6624 ↑
PSNR | 14.6102 | 13.2556 | 13.5544 | 19.4833 | 25.4906 ↑
ENL | 0.3144 | 0.2008 | 0.1457 | 0.2521 | 0.1836 ↓
Test(4): fourth row, digit "7"
SSIM | 0.2460 | 0.0425 | 0.1987 | 0.3335 | 0.7619 ↑
PSNR | 23.0121 | 10.8452 | 10.3681 | 19.3680 | 27.3919 ↑
ENL | 0.2694 | 0.1246 | 0.1618 | 0.2369 | 0.1318 ↓
Test(5): fifth row, digit "3"
SSIM | 0.2606 | 0.1185 | 0.2410 | 0.3281 | 0.6867 ↑
PSNR | 15.3602 | 12.0076 | 12.4106 | 17.9613 | 21.3712 ↑
ENL | 0.4859 | 0.2035 | 0.2214 | 0.3627 | 0.2007 ↓
Test(6): sixth row, digit "1"
SSIM | 0.2030 | 0.0949 | 0.2745 | 0.3910 | 0.7850 ↑
PSNR | 18.8818 | 7.6736 | 5.4605 | 22.9497 | 24.5452 ↑
ENL | 0.2282 | 0.2142 | 0.1063 | 0.1433 | 0.0936 ↓
Table 4. Metrics comparing different methods for the Austria profile, SNR = 25 dB.

Metric | (b) | (c) | (d) | (e) | (f)
Austria:
SSIM | 0.5787 | 0.2834 | 0.4001 | 0.3960 | 0.8195 ↑
PSNR | 16.0104 | 15.9636 | 16.8300 | 17.4791 | 19.0318 ↑
ENL | 0.5823 | 0.4490 | 0.3048 | 0.5658 | 0.4268 ↓
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
