Article

A U-Net Approach for InSAR Phase Unwrapping and Denoising

Sachin Vijay Kumar, Xinyao Sun, Zheng Wang, Ryan Goldsbury and Irene Cheng
1 Multimedia Research Centre, University of Alberta, Edmonton, AB T6G 2E8, Canada
2 3vGeomatics Inc., Vancouver, BC V5Y 0M6, Canada
* Author to whom correspondence should be addressed.
Remote Sens. 2023, 15(21), 5081; https://doi.org/10.3390/rs15215081
Submission received: 18 August 2023 / Revised: 10 October 2023 / Accepted: 19 October 2023 / Published: 24 October 2023
(This article belongs to the Special Issue Advance in SAR Image Despeckling)

Abstract: The interferometric synthetic aperture radar (InSAR) imaging technique computes relative distances or surface maps by measuring the absolute phase differences of returned radar signals. The measured phase difference is wrapped into a 2π cycle due to the wave nature of light. Hence, the proper multiple of 2π must be added back during restoration, and this process is known as phase unwrapping. The noise and discontinuity present in the wrapped signals pose challenges for error-free unwrapping procedures. Separate denoising and unwrapping algorithms introduce additional errors from excessive filtering and changes in the statistical nature of the signal. This can be avoided by joint unwrapping and denoising procedures. In recent years, research efforts have been made using deep-learning-based frameworks, which can learn the complex relationship between the wrapped phase, coherence, and amplitude images to perform better unwrapping than traditional signal processing methods. This research falls predominantly into segmentation- and regression-based unwrapping procedures. The regression-based methods have poor performance, while segmentation-based frameworks, like the conventional U-Net, rely on a wrap count estimation strategy with very poor noise immunity. In this paper, we present a two-stage phase unwrapping deep neural network framework based on U-Net, which can jointly unwrap and denoise InSAR phase images. The experimental results demonstrate that our approach outperforms related work in the presence of phase noise and discontinuities, with a root mean square error (RMSE) an order of magnitude lower than that of the others. Our framework exhibits better noise immunity, with a low average RMSE of 0.11.


1. Introduction

With the advancement of satellite remote sensing technology, interferometric synthetic aperture radar (InSAR) analysis [1] has become an effective way of measuring land surface deformation [2]. By emitting electromagnetic waves toward the Earth and receiving the reflected signals, a satellite can obtain high-precision ground deformation measurements. Owing to the significant surge in space launch missions over recent decades, abundant and continuous remote sensing data are available for Earth observation. Wide-area ground deformation monitoring is important because of natural and man-made hazards such as earthquakes, landslides, and mine site disruptions. Early detection and warning before disaster strikes reduce recovery costs and save lives.
Synthetic aperture radar (SAR) systems [1] capture the Earth's surface as a two-dimensional grid of complex-valued pixels, with amplitude as the modulus and phase as the angle. These images are called single-look complex (SLC) images. Two well-registered SLCs of the same surface are differenced pixel-wise to form an interferogram, in which the phase is highly correlated with topography. Many signal conditioning techniques have been studied to clean these noisy signals and remove undesired effects such as atmospheric conditions, distortion due to the imaging system, and satellite position error [1]. However, due to the wave nature of the emitted radar signal, the measured phase values are wrapped into a 2π range. Hence, recovering the absolute phase change from the wrapped phase (phase unwrapping) is necessary to measure relative ground displacements in InSAR signal processing [1].
In general, the phase wrapping process is observed in many other fields of optical interferometry (OI) in a variety of applications, such as 3D object surface measurement in fringe projection profilometry (FPP) [3] and blood flow measurement in magnetic resonance imaging (MRI) [4], where physical entities such as distance and refractive index are measured. Estimating the true phase from values limited to [−π, +π] is a challenging task: it requires distinguishing phase changes that wrap around [−π, +π] due to changes in the measured physical entity from phase changes due to noise sources, distortion, and signal undersampling errors.
Spatial and temporal changes with respect to a particular pixel location are used to unwrap the interferograms. However, in the presence of noise and discontinuities, finding the unwrapped solution is challenging. As the unwrapping strategies depend mainly on neighboring pixel values, noise and discontinuities in a particular locality tend to affect the unwrapped values everywhere. Noise is unavoidable in satellite image acquisition due to sources such as transient scatterers, dust, and multiple reflections in the atmosphere, and SAR preprocessing can also add highly correlated noise. In addition, in some applications, such as mine and dam monitoring systems, the ground displacements tend to be very fast, resulting in large spatial and temporal variations in pixel values between two data acquisitions. This leads to unwrapping errors due to insufficient resolution in the captured data; this effect is called aliasing [5] and results in phase changes of more than one cycle between samples. The unavailability of large public datasets and high computational costs are additional challenges that especially affect deep-learning-based solvers.
Mathematically, Equation (1) describes the phase ambiguity at a pixel location. Estimating the value of k from Equation (1) alone is impossible; the relative wrapped phase differences between neighboring pixel locations are also required to estimate the relative absolute phase differences and hence the value of k at a pixel location. A small numerical sketch follows the symbol definitions below.
Ω = μ − 2πk
where:
  • Ω = wrapped phase, with range [−π, π];
  • μ = unwrapped phase;
  • k = integer.
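To make the ambiguity concrete, the following minimal NumPy sketch (ours, not from the paper) wraps a known absolute phase; k is recoverable here only because the true phase is known, not from the wrapped value alone:

```python
# Minimal illustration of Equation (1): wrapping maps the absolute phase into
# [-pi, pi), discarding the integer wrap count k.
import numpy as np

def wrap(phi):
    """Wrap an absolute phase into [-pi, pi)."""
    return (phi + np.pi) % (2 * np.pi) - np.pi

mu = 7.5                               # unwrapped phase (radians)
omega = wrap(mu)                       # wrapped observation: Omega = mu - 2*pi*k
k = round((mu - omega) / (2 * np.pi))  # recoverable only because mu is known
print(omega, k)                        # ~1.2168, 1
```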
Early research in this area dates back decades [5]. In general, it can be assumed that the points whose phase differences are measured lie on a single line. This is called the one-dimensional phase unwrapping (PU) problem and can be solved using the continuity assumption proposed by Itoh [6]. The continuity assumption states that the absolute phase differences between all neighboring pixels can be recovered if the phase difference between any two neighboring pixels is less than π in absolute value; as a consequence, there is no ambiguity in the absolute phase differences. Equations (2)–(4) give the mathematical description of the unwrapping at a pixel location s proposed by Itoh [6].
μ(s) = Ω(s) − Ω(s−1), where |Ω(s) − Ω(s−1)| < π
μ(s) = Ω(s) − Ω(s−1) − 2π, where Ω(s) − Ω(s−1) > π
μ(s) = Ω(s) − Ω(s−1) + 2π, where Ω(s) − Ω(s−1) < −π
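A compact NumPy sketch of this rule (ours, assuming noise-free input that satisfies the continuity condition):

```python
# Itoh's 1-D unwrapping, Equations (2)-(4): re-add the 2*pi jumps removed by
# wrapping by correcting each wrapped gradient into (-pi, pi].
import numpy as np

def itoh_unwrap_1d(wrapped):
    d = np.diff(wrapped)                        # Omega(s) - Omega(s-1)
    d_corr = (d + np.pi) % (2 * np.pi) - np.pi  # apply Equations (2)-(4)
    return np.concatenate(([wrapped[0]], wrapped[0] + np.cumsum(d_corr)))

true_phase = np.linspace(0, 6 * np.pi, 200)     # smooth ramp spanning 3 cycles
wrapped = np.angle(np.exp(1j * true_phase))     # wrapped to [-pi, pi]
print(np.allclose(itoh_unwrap_1d(wrapped), true_phase))  # True
```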
Phase unwrapping in higher dimensions can be performed by dividing the problem into lower dimensions. Based on the number of phase images used, phase unwrapping can be divided into single-baseline (SB) and multi-baseline (MB) methods. SBPU methods use a single interferogram for unwrapping, while MBPU methods use a stack of interferograms. The various phase unwrapping strategies can be summarized as shown in Figure 1 [7].

1.1. Phase Unwrapping in Single-Baseline Images

Phase unwrapping in SB images is also called the two-dimensional PU problem, as the input is a two-dimensional wrapped phase. The continuity condition proposed by Itoh [6] applies to the gradients along the two dimensions, which are considered independently using Equations (2)–(4). Furthermore, phase is a scalar field, and the gradient of a scalar field is an irrotational vector field. These additional constraints are helpful in unwrapping noisy interferograms. Below, we discuss some classical signal processing and deep learning methods proposed to solve this problem.

Classical Methods to Solve SBPU

Ghiglia et al. [5] provided a thorough introduction to the theory of unwrapping and detailed many pre-established mathematical models and two-dimensional phase unwrapping strategies. According to them, integrating the phase gradient field should yield the same value irrespective of the path chosen, and the sum along a closed path should be zero. A residue exists if traversing a path in different directions yields different absolute phase values, or if the sum around a closed path is nonzero. If the summation around a closed path is positive, it is called a positive residue; otherwise, it is a negative residue. Such noisy paths can be avoided if the residues are known ahead of the traversal, which can be achieved by first evaluating loops as small as 2 × 2 pixel grids. Once the residues have been located, the integration paths are chosen to avoid them. Ghiglia et al. [8] referred to this as phase discontinuity.

Huntley [9] proposed a minimum length cut algorithm to find paths in the phase image that avoid noisy pixels. The algorithm works well on high-quality, low-noise phase maps but may not generate a solution at all if the phase maps are too noisy. The method proposed by Goldstein [10] computes a better solution by choosing unwrapping paths that encircle equal numbers of positive and negative residues. This is achieved by connecting opposite residues with branch cuts [11], which an unwrapping path must never cross. This solution tends to be optimal but is NP-hard. It also maintains the integer relationship between the wrapped and unwrapped phases, but it produces good results only if the noise is sparse. Flynn [12], Zhong [13], and Zhao [14] proposed using quality maps, such as coherence, to guide the choice of the integration path with the fewest errors. A higher coherence value indicates a lower probability of noise at that pixel. This simplifies finding noisy pixels but requires a large number of SLC images of the given site to create reliable quality maps.

The PU procedure outlined by Ching et al. [15] divides the image into segments whose internal phase variation is less than π. At the boundaries, phase unwrapping is then performed so that the error incurred while traversing from one segment to another is minimal. The same can be achieved using minimum spanning tree algorithms [16]. This procedure is computationally faster and more efficient than other procedures, as related pixels are grouped and solved together, but it tends to produce artifacts across boundaries. An et al. [17] proposed a faster minimum spanning tree algorithm for phase maps, reducing the time complexity by changing the way edge weights are stored and searched. Herráez et al. [18] proposed a reliability-guided unwrapping strategy, where the reliability of each pixel is calculated first and unwrapping proceeds from the most reliable pixels to the least reliable ones. Because the unwrapping paths are discrete, different groups of unwrapped pixels are created; these groups are finally merged while keeping the phase jumps across groups consistent. The procedure works well only in the presence of sparse, random noise.
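As an illustration of the residue test described above, the following sketch (ours) sums the wrapped gradients around every 2 × 2 loop of a phase image; a nonzero sum marks a residue that branch-cut methods such as Goldstein's [10] must avoid:

```python
# Residue detection on 2x2 loops (a sketch of the classical test, ours).
import numpy as np

def wrap(x):
    return (x + np.pi) % (2 * np.pi) - np.pi

def residues(psi):
    """Sum of wrapped gradients around each 2x2 loop: +1, 0, or -1."""
    d1 = wrap(psi[:-1, 1:] - psi[:-1, :-1])  # top edge, left to right
    d2 = wrap(psi[1:, 1:] - psi[:-1, 1:])    # right edge, top to bottom
    d3 = wrap(psi[1:, :-1] - psi[1:, 1:])    # bottom edge, right to left
    d4 = wrap(psi[:-1, :-1] - psi[1:, :-1])  # left edge, bottom to top
    return np.rint((d1 + d2 + d3 + d4) / (2 * np.pi)).astype(int)
```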
In optimization-based PU methods, an objective function that minimizes the error between the estimated and measured phase gradients is formulated and solved. These methods are global and find unwrapped values for all pixels simultaneously. The general form of the Lp-norm optimization problem is expressed in Equation (5), where ψ and Δψ are the absolute phase and the wrapped phase gradient at pixel location s, and p is the power of the norm. According to Ghiglia et al. [19], solving the optimization problem with p = 0 provides the best solution, but it is NP-hard. Costantini [20] provided a procedure for p = 1, which results in a minimum cost flow problem; the solution is smooth but does not maintain the integer count relationship between the wrapped and unwrapped phases. According to Ghiglia et al. [21], p = 2 results in an L2-norm problem, which is significantly affected by noise. A lower value of p provides better results but is harder to solve. Adaptive methods, such as the combination of different Lp problems proposed by Yu et al. [22], give better results but increase the computational time and complexity. Yan et al. [23] proposed using the additional scalar-field constraints of phase in the temporal domain when three differential interferograms are available; this improves the results but requires larger computational resources. Chartrand et al. [24] proposed a sparse optimization algorithm for phase unwrapping. In Equation (5), p = 0 provides an optimal solution but is NP-hard; as the residues in a noisy wrapped image tend to be sparse, the authors used the alternating direction method of multipliers (ADMM) to perform faster and more reliable unwrapping. However, they provide only a few qualitative comparisons against SNAPHU [25].
arg min_ψ Σ_{(s,s−1)} | ψ(s) − ψ(s−1) − Δψ(s, s−1) |^p
Statistical unwrapping techniques solve the PU problem by finding the unwrapped result that maximizes the a posteriori probability of the given wrapped input. These methods can also utilize probability information and capture the dependency of the unwrapped result on other parameters such as amplitude and coherence, among others. The methods proposed by Chen [25] and Nico et al. [26] are examples of such procedures. A generalized mathematical description is presented in Equation (6), where ψ(s) is the unwrapped value and ϕ(s) is the wrapped input. These methods require a large amount of data to develop the probability density functions; they are computationally hard but provide better results.
arg max_ψ Π_s g( ψ(s) | ϕ(s) )
Conventionally, the phase unwrapping step is preceded by a noise filtering step, as the performance of PU depends on the quality of the input. However, denoising may introduce algorithmic errors and change the statistical nature of the input. As a result, integrated denoising and unwrapping in one step has the potential to perform better. Such methods can handle a large amount of noise and perform well in low-coherence areas, but they are computationally expensive and underperform traditional methods in high-coherence regions. The variations in this research field differ mainly in the type of filtering used. Xie et al. [27] and Xie and Li [28] used the Kalman filter; the latter method [28] uses an improved matrix pencil model and a better path-following strategy to achieve better results. Zhang et al. [29] further enhanced the method with an adaptive Kalman filter model. These models were tested on only a few simulated images and were found to perform better only in low-coherence regions. Martinez-Espla et al. proposed a particle-filter-based solution [30] and further optimized it in later research [31]; the improvements in the later work are mainly attributable to the use of matrix pencils. The algorithm performs well under high-noise conditions but was tested with only a few real and simulated datasets. Chen et al. [32] used Markov random fields to denoise and unwrap jointly. This method is not time efficient but can generate good results compared with MAP-based estimation when the prior distributions have defects. A summary of some of the important research on classical methods to solve SBPU is provided in Table 1.
In general, these methods tend to be local, algorithmically simple, and computationally efficient, but they work only when noise is sparse, and the phase images are of high quality.

2. Deep Learning for Phase Unwrapping

With the rise of deep learning models in many signal-processing applications [33], similar trends are seen in the field of PU as well. Based on the inputs and intermediate outputs, these models can be categorized into regression methods and wrap count methods, illustrated in Figure 2 and Figure 3, respectively. In regression methods, the unwrapped phase is directly predicted from the input wrapped phase by the deep learning network; the methods proposed in the literature differ mainly in the network trained and the loss function used. Zhou et al. [34] proposed a GAN with an adversarial loss, which produces smooth output but does not enforce an integer relationship between the input and output. Xu et al. [35] proposed MNet, with mean absolute error (MAE) and structural similarity index measure (SSIM) [36] losses, which performs well at low concentrations of speckle noise. Zhou et al. [37] proposed a U-Net with MAE loss. Qin et al. [38] proposed ResUNet with MAE loss. An advantage of these methods is their better accuracy in the presence of noise and discontinuity; however, the unwrapped values are smooth but violate the mathematical relationship of Equation (1).

Wrap count methods predict the wrap count k from Equation (1), translating the problem into a classification or segmentation task. Liu et al. [39] were the first to propose this idea. Spoorthi et al. [40] proposed Dense-UNet for wrap count estimation with MAE, cross-entropy, and residue losses; it performs well with noisy data, but their method is validated only against a custom simulation dataset. Zhang et al. [41] employed EESANet, with weighted cross-entropy as the loss function. The advantages of wrap count methods are their simple network structure and short training time, but their unwrapping performance degrades in the presence of noise and discontinuity. Perera et al. [42] proposed a spatial quad-directional long short-term memory (LSTM)-based model for phase unwrapping. To learn the horizontal and vertical phase relationships, four distinct spatial directional sequencings were used to train the network, and a variance error loss was used to learn multiple solutions. The proposed algorithm performed well in the presence of noise but produced incorrect unwrapping boundaries. A summary of some of the important research on deep learning to solve SBPU is provided in Table 2.

U-Net for Phase Unwrapping

U-Net is a deep learning architecture proposed by Ronneberger et al. [44] for the semantic segmentation of medical images. The goal of semantic segmentation is to label each pixel of an image with the class to which it belongs; the output is not a single class label for the whole image but a high-resolution image with each pixel individually labeled. This yields higher-quality segmentation results than image-patch-based methods. Many complex computer vision tasks require high-quality semantic segmentation, and U-Nets are therefore employed in vehicle detection [45], crack detection in buildings and dams [46,47], medical image segmentation [44], modern agriculture [46], and geosensing [48].
U-Net is composed of encoder and decoder blocks. The downsampling encoder blocks extract high-level features but tend to lose locality information. The upsampling decoder blocks reconstruct the output image from the high-level coarse features. The skip connections from the encoder to the decoder re-inject the details lost during downsampling and also lead to faster convergence. As a result, U-Net preserves the locality information lost during downsampling, making it suitable for semantic segmentation tasks. U-Net can also be trained quickly with few training samples.
Deep learning for phase unwrapping using the wrap count method converts the phase unwrapping problem into a semantic segmentation or pixel-level classification problem. To minimize the error in unwrapped results, the segmented result should be of high quality and have good accuracy at the border between the regions. The limited availability of labeled training samples is another obstacle to the use of deep learning in this research area. U-Net’s ability to capture both the features of the context as well as the localization to generate high-quality segmentation maps, and the ability to be trained with few labeled samples, drove our design decision to use U-Net in our proposed architecture.
In general, deep learning segmentation-based phase estimation methods suffer from poor noise immunity [42,43]. Hence, these methods must be preceded by noise-filtering techniques, which alter the statistical distribution of the phase and introduce additional errors in the unwrapping. A single joint phase unwrapping and denoising architecture provides better noise immunity. Hence, we propose a two-stage U-Net network that performs denoising and unwrapping jointly.

3. Materials and Methods

The block diagram of our approach is presented in Figure 4. The proposed two-dimensional phase unwrapping procedure uses phase differences in the range and azimuth directions to estimate the unwrapped phase. Our method is inspired by the procedure proposed by Sica et al. [43]; however, we modified the architecture to improve the denoising performance. Our architecture is a two-stage deep neural network (DNN): the first stage approximates wrap count values, and the second stage refines them. We present our improvements on the method of Sica et al. [43] in this section. Equation (7) expresses the relationship between the wrapped phase σ and the unwrapped phase μ, where k represents the integer multiple of 2π by which the wrapped phase differs from the unwrapped phase at a given pixel location. The integer count vector k calculated from Equation (7) is referred to as the wrap count image. These wrap count images are converted into gradient wrap count vector images in the range and azimuth directions separately using Equation (8), which serve as the ground truth for training the first-stage U-Nets in our two-stage network.
k = (μ − σ) / (2π)
where k is the wrap count, σ is the wrapped phase whose range is [−π, π], and μ is the unwrapped phase.
δk(s) = k(s+1) − k(s), for elements in the first row/column of the k matrix
δk(s) = k(s) − k(s−1), for elements in the last row/column of the k matrix
δk(s) = (k(s+1) − k(s−1)) / 2, for all other elements of the k matrix
Equation (8) can be converted into matrix form so that it is simpler and mathematically more convenient to solve, as shown in Equations (9) and (10), where δ k ( r ) and δ k ( a ) represent the gradient wrap count vectors in the range and azimuth directions, respectively. Δ r and Δ a represent the forward gradient transformation matrices in the range and azimuth directions, respectively.
δk(r) = Δr k
δk(a) = Δa k
Δr =
[ −1    1    0    0    0    0    0    0    0  ]
[ −0.5  0    0.5  0    0    0    0    0    0  ]
[  0   −1    1    0    0    0    0    0    0  ]
[  0    0    0   −1    1    0    0    0    0  ]
[  0    0    0   −0.5  0    0.5  0    0    0  ]
[  0    0    0    0   −1    1    0    0    0  ]
[  0    0    0    0    0    0   −1    1    0  ]
[  0    0    0    0    0    0   −0.5  0    0.5 ]
[  0    0    0    0    0    0    0   −1    1  ]

Δa =
[ −1    0    0    1    0    0    0    0    0  ]
[  0   −1    0    0    1    0    0    0    0  ]
[  0    0   −1    0    0    1    0    0    0  ]
[ −0.5  0    0    0    0    0    0.5  0    0  ]
[  0   −0.5  0    0    0    0    0    0.5  0  ]
[  0    0   −0.5  0    0    0    0    0    0.5 ]
[  0    0    0   −1    0    0    1    0    0  ]
[  0    0    0    0   −1    0    0    1    0  ]
[  0    0    0    0    0   −1    0    0    1  ]
If the two-dimensional interferograms have N rows and N columns, the transformation matrices Δa and Δr have dimensions (N², N²). The elements of each transformation matrix are chosen such that matrix multiplication implements Equations (9) and (10). Examples of the gradient transformation matrices in the range and azimuth directions for N = 3 are shown in Equations (11) and (12), respectively. The N × N wrap count matrix k is flattened into an (N², 1) vector, and the gradient matrices are obtained from Equations (9) and (10). Based on Itoh's unwrapping condition [6], the values of the gradient vector are limited to the fixed set {−1, −0.5, 0, 0.5, 1}. These values are treated as semantic labels, converting the problem into a segmentation task, and a U-Net is employed to perform the segmentation. The two-channel wrapped phase and coherence image is used as input to the U-Net, and the semantically labeled gradient wrap count vectors are used as ground truth. Two U-Nets are trained to estimate the gradient wrap count vectors in the range and azimuth directions independently. The sum of the cross-entropy loss and the Jaccard loss is used as the loss function. The architecture of each U-Net used in the block diagram is illustrated in Figure 5.
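The construction of these matrices can be illustrated with a short NumPy sketch (ours, not the authors' code); the association of the row direction with range follows the N = 3 examples above:

```python
# Assemble the N^2 x N^2 gradient transformation matrices of Equations (11)
# and (12) from the forward/central/backward stencil of Equation (8).
import numpy as np

def gradient_matrix(N):
    D = np.zeros((N, N))
    D[0, 0], D[0, 1] = -1.0, 1.0              # first element: forward difference
    D[-1, -2], D[-1, -1] = -1.0, 1.0          # last element: backward difference
    for i in range(1, N - 1):
        D[i, i - 1], D[i, i + 1] = -0.5, 0.5  # interior: central difference
    return D

N = 3
delta_r = np.kron(np.eye(N), gradient_matrix(N))  # gradients along rows (range)
delta_a = np.kron(gradient_matrix(N), np.eye(N))  # gradients along columns (azimuth)

k = np.arange(N * N, dtype=float).reshape(N, N)   # toy wrap count image
print(np.allclose(delta_r @ k.ravel(), 1.0))      # row gradient of this ramp is 1
```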
As the U-Nets estimate only the gradient wrap count vectors for a given wrapped phase and coherence input during inference, the unwrapped phase must be recovered from these vectors using the method proposed by Sica et al. [43]. A single transformation, as shown in Equation (13), can be applied using a larger (2N², N²) matrix, where the first N² rows correspond to Δr and the remaining N² rows to Δa. Here, [Δr, Δa] denotes the matrix obtained by appending Δr and Δa row-wise. This stacked matrix is denoted Δk, as shown in Equation (14). The problem is now reduced to estimating N² unknowns from 2N² equations, an overdetermined system solved in the least-squares sense. The unwrapped phase value is then estimated using Equations (15) and (16); a reconstruction sketch follows the definitions below.
δk = [Δr, Δa] k
δk = Δk k
k = Δk⁻¹ δk
μ = σ + 2πk
μ = σ + 2πk + r
where:
  • r is the refinement factor calculated from the second-stage U-Net.
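The reconstruction of Equations (13)–(16) can be sketched as follows (ours; delta_r and delta_a come from the previous snippet, and the least-squares solution is defined only up to a global constant, since the gradient operators annihilate constant offsets):

```python
# Stack the two gradient operators (Equations (13)-(14)), solve the
# overdetermined 2N^2 x N^2 system for the wrap counts in the least-squares
# sense (Equation (15)), and re-add them to the wrapped phase (Equation (16)).
import numpy as np

def reconstruct(sigma, dk_r, dk_a, delta_r, delta_a):
    """sigma: wrapped phase (N x N); dk_r, dk_a: gradient wrap counts (N^2,)."""
    A = np.vstack([delta_r, delta_a])          # Delta_k
    b = np.concatenate([dk_r, dk_a])           # delta_k
    k, *_ = np.linalg.lstsq(A, b, rcond=None)  # minimum-norm least squares
    k = np.rint(k).reshape(sigma.shape)        # wrap counts are integers
    return sigma + 2.0 * np.pi * k             # unwrapped phase estimate
```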
The wrapped phase images in any practical scenario contain inherent noise. In Equation (16), this noise is added back into the result because the wrapped phase σ is added back to the scaled wrap counts. The above-mentioned method developed by Sica et al. [43] uses the phase wrap count vector rather than the phase information alone. Further experiments with this kind of architecture, using synthetic inputs, revealed that it works reliably in no-noise and very low noise conditions, or when the input wrapped phase and the ground truth unwrapped phase carry the same noise. However, if the network is made to learn unwrapping and denoising jointly, by providing noisy wrapped phase input and clean ground truth, noisy artifacts are observed, as shown in the output image of Figure 6 (middle). Upon further analysis, we found that this noisy pattern is created because the added noise shifts the wrapping boundaries between the wrapped phase input and the ground truth. As a result, segmentation-based unwrapping architectures such as that presented by Sica et al. [43] cannot be directly used for joint phase unwrapping and denoising.
To address this problem, the architecture in Figure 4 is proposed. A second-stage U-Net image restoration network, trained with the noiseless ground truth, is added to perform denoising. In the second stage, the U-Net estimates the noise in the unwrapped results, which is subtracted from the final result to obtain a denoised output. The two networks are trained jointly to perform unwrapping and denoising together. The two-channel wrapped phase and coherence image is used as input to the second-stage U-Net, and the clean unwrapped images are used as ground truth. The mean square error between the ground truth and the unwrapped phase predicted by the first-stage U-Nets is used as the loss function to train the second-stage network. The output of this second-stage U-Net is added back as a refinement value, as shown in Equation (17), to the unwrapped output of the first-stage network to obtain denoised unwrapped results.
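The overall two-stage design can be summarized in a PyTorch-style sketch (ours, simplified; "UNet" stands for any standard U-Net implementation and "solve_counts" for a batched version of the least-squares reconstruction sketched earlier, both assumptions rather than the authors' exact code). The argmax below reflects the inference path; training instead uses the soft class scores with the cross-entropy and Jaccard losses:

```python
import torch
import torch.nn as nn

GRAD_VALUES = torch.tensor([-1.0, -0.5, 0.0, 0.5, 1.0])  # 5 segment labels

class TwoStageUnwrapper(nn.Module):
    def __init__(self, UNet, solve_counts):
        super().__init__()
        self.unet_range = UNet(in_ch=2, out_ch=5)    # stage 1: range gradients
        self.unet_azimuth = UNet(in_ch=2, out_ch=5)  # stage 1: azimuth gradients
        self.unet_refine = UNet(in_ch=2, out_ch=1)   # stage 2: refinement r
        self.solve_counts = solve_counts

    def forward(self, wrapped, coherence):
        x = torch.cat([wrapped, coherence], dim=1)            # 2-channel input
        dk_r = GRAD_VALUES[self.unet_range(x).argmax(dim=1)]  # gradient counts
        dk_a = GRAD_VALUES[self.unet_azimuth(x).argmax(dim=1)]
        mu_coarse = self.solve_counts(wrapped, dk_r, dk_a)    # Equations (13)-(16)
        return mu_coarse + self.unet_refine(x)                # Equation (17)
```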

4. Results

The training, validation, and test interferogram datasets were generated using the simulator proposed by Sun et al. [49]. The simulator provides a controlled environment to generate the ground truth required for supervised learning, allowing us to objectively demonstrate the model's capability to jointly unwrap and denoise. Unlike many other simulators, which can only generate a few known patterns [50], the simulator of Sun et al. [49] can generate unlimited geometrical phase patterns. It can also generate irregular motion signals, ground reflective phenomena, and noise conditions. A set of control parameters allows the generation of very complex phase patterns with different motion, noise, and distortion levels. We followed the steps below to generate the dataset; a minimal noising sketch follows the list.
  • We generate the SLC images S1 and S2 with random Gaussian bubbles as the synthetic motion signals. We make sure that the generated SLCs satisfy Itoh’s condition [6] using a set of control parameters in the simulator.
  • We add random additive white Gaussian noise at a random signal level to both SLC images to generate noisy SLC images.
  • Using the SLCs, we generate clean and noisy interferometric phases and calculate the ground truth coherence.
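The noising step can be illustrated with the following toy NumPy sketch (ours, using synthetic data rather than the simulator of Sun et al. [49]): complex white Gaussian noise is added to each SLC, and the noisy and clean wrapped phases are formed from the interferograms.

```python
# A toy sketch of the SLC noising step (ours; the actual simulator and its
# parameters in Sun et al. [49] differ). Unit-amplitude SLCs are assumed.
import numpy as np

rng = np.random.default_rng(0)
H = W = 64
phi = np.cumsum(rng.normal(0.0, 0.15, (H, W)), axis=1)  # toy smooth motion phase
s1 = np.exp(1j * np.zeros((H, W)))                      # SLC pair whose
s2 = np.exp(-1j * phi)                                  # interferogram phase is phi

def add_noise(slc, sigma):
    """Add complex white Gaussian noise to a single-look complex image."""
    n = rng.normal(0.0, sigma, slc.shape) + 1j * rng.normal(0.0, sigma, slc.shape)
    return slc + n

sigma = rng.uniform(0.05, 0.5)                          # random noise level
phase_clean = np.angle(s1 * np.conj(s2))                # clean wrapped ground truth
phase_noisy = np.angle(add_noise(s1, sigma) * np.conj(add_noise(s2, sigma)))
```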
A total of 7000 interferograms were used for training, 3000 for validation, and 3000 for testing. We implemented all experiments on a computer with an Intel Xeon 4210 CPU, an NVIDIA GeForce RTX 2080 GPU, and 128 GB of memory. The network was trained for 100 epochs with a batch size of 32 and the Adam optimizer, with the learning rate set to 0.001, on the PyTorch platform. The weights and biases of all layers were initialized using a uniform distribution from −1 to 1. No data augmentation was used, as the dataset is sufficiently large to train the model.
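This setup corresponds to the following sketch (ours; TwoStageUnwrapper is from the previous snippet, while UNet and solve_counts remain assumptions). The uniform(−1, 1) initialization is unusual for deep networks and is reproduced here only because the text specifies it:

```python
import torch

def init_uniform(m):
    # Initialize weights and biases from U(-1, 1), as stated in the text.
    if isinstance(m, (torch.nn.Conv2d, torch.nn.Linear)):
        torch.nn.init.uniform_(m.weight, -1.0, 1.0)
        if m.bias is not None:
            torch.nn.init.uniform_(m.bias, -1.0, 1.0)

model = TwoStageUnwrapper(UNet, solve_counts)  # assumed definitions, see above
model.apply(init_uniform)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # 100 epochs, batch 32
```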
We employed the root mean square error (RMSE), structural similarity index measure (SSIM) [36], and unwrap failure rate (UFR) [51] between the reference ground truth and the outputs obtained from the different unwrapping procedures to measure performance. The RMSE is a popular metric for evaluating the accuracy of an unwrapping framework [43], and we follow the same approach to compare with other published results. A low RMSE value indicates better performance. The formulaic expression of the RMSE is presented in Equation (18); a direct NumPy translation follows the symbol definitions.
RMSE = √( Σ_{y=1}^{N} Σ_{x=1}^{M} ( ϕ̂_xy − ϕ_xy )² / (MN) )
where:
  • M = height of the interferogram;
  • N = width of the interferogram;
  • ϕ̂_xy = recovered absolute phase at pixel location [x, y];
  • ϕ_xy = reference absolute phase at pixel location [x, y].
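A direct NumPy translation of Equation (18) (our sketch):

```python
import numpy as np

def rmse(recovered, reference):
    """Root mean square error between recovered and reference absolute phase."""
    return np.sqrt(np.mean((recovered - reference) ** 2))
```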
The formula for the SSIM is given in Equation (19). The SSIM is better at evaluating the perceptual difference between images, and we employ it as a quantitative measure of the visual difference between the unwrapped image and the ground truth. A high SSIM value indicates better performance; an implementation sketch follows the symbol definitions.
SSIM(x, y) = ( (2 μ_x μ_y + c₁)(2 σ_xy + c₂) ) / ( (μ_x² + μ_y² + c₁)(σ_x² + σ_y² + c₂) )
where:
  • x = pixels of a window of the reference absolute phase image;
  • y = pixels of a window of the recovered absolute phase image;
  • μ_x, μ_y = means of the pixel values of the windows;
  • σ_x, σ_y = standard deviations of the pixel values of the windows;
  • σ_xy = cross-correlation of the pixels of the two windows;
  • c₁, c₂ = stabilizing constants.
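In practice, the SSIM can be computed with scikit-image (our choice of implementation; the paper does not state which one was used):

```python
from skimage.metrics import structural_similarity

def ssim_score(recovered, reference):
    # data_range sets the dynamic range of the absolute phase images.
    data_range = float(reference.max() - reference.min())
    return structural_similarity(recovered, reference, data_range=data_range)
```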
The RMSE and SSIM are average scores and fail to capture large inaccuracies concentrated in small areas of the phase map. Hence, we also use the UFR as a performance metric, which only counts pixels that deviate by more than π from the ground truth. The calculation of the UFR is presented in Equation (20); a NumPy translation follows the definitions below.
UFR = ( Σ_{y=1}^{N} Σ_{x=1}^{M} d_xy ) / (MN) × 100
where:
d_xy = 1 if | ϕ̂_xy − ϕ_xy | > π, and 0 otherwise;
  • M = height of the interferogram;
  • N = width of the interferogram;
  • ϕ̂_xy = recovered absolute phase at pixel location [x, y];
  • ϕ_xy = reference absolute phase at pixel location [x, y].
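A direct NumPy translation of Equation (20) (our sketch):

```python
import numpy as np

def ufr(recovered, reference):
    """Percentage of pixels deviating from the reference by more than pi."""
    return 100.0 * np.mean(np.abs(recovered - reference) > np.pi)
```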
Table 3 compares the performance of the proposed algorithm with traditional methods, spatial filtering followed by unwrapping, and deep-learning-based frameworks. The deep-learning-based frameworks used for comparison were retrained on our training dataset for one-to-one comparisons. Column 2 in Table 3 reports the average and worst (highest) RMSE, column 3 reports the average and worst (lowest) SSIM, and column 4 reports the UFR of the different unwrapping procedures, tested on a set of 3000 test wrapped images. The proposed algorithm performs best on all metrics, and its RMSE is an order of magnitude lower than that of the other methods.

5. Discussion

A qualitative comparison of the outputs produced by the various unwrapping algorithms is provided in Figure 7 and Figure 8. Error maps are generated by subtracting the outputs from the expected ground truth. The unwrapped images produced by Herráez et al. [18], SNAPHU [25], and Chartrand et al. [24] suffer from noise: the outputs have incorrect unwrap boundaries and artifacts caused by noise accumulation, indicated by the dark and white spots in the error maps. The higher UFR values of these methods in Table 3 also indicate their poor noise immunity.
BM3D [52] is a collaborative filtering technique that exploits the fact that images have a locally sparse representation in the transform domain. Filtering the noisy phase images with BM3D before unwrapping shows minimal improvement in the performance metrics in Table 3. The presence of noise in the unwrapped outputs is evident from rows 1 and 2 of Figure 8. In some scenarios, filtering introduces new noise artifacts into the output; sample 1 (column 1) in Figure 8 is an example. Filtering during preprocessing tends to introduce errors from over- or under-filtered patches.
Perera et al. [42] produce visually noise-free output (corroborated by a high SSIM in Table 3) but have incorrect unwrap boundaries, as shown in Figure 9 and the error map (row 8) in Figure 7. The unwrapped values produced by Perera et al. [42] have a higher dynamic range (also indicated by the error map), which leads to a higher UFR and to incorrect measurements in the InSAR processing pipeline. The method of Sica et al. [43] produces noisy output and a noisy error map, indicating its poor immunity to noise.
The consistently low RMSE and UFR values and the high SSIM values of the proposed method in Table 3 demonstrate that it performs better than the other methods. The low worst-case RMSE shows the generalization of our trained model on the test dataset and its ability to handle edge cases. The qualitative comparison of the unwrapped outputs in Figure 7 shows relatively little noise and distortion for the proposed method (indicated by its smoother error map, row 4 in Figure 7). The proposed method generates better phase boundaries, as U-Net-based models can generate high-quality semantic labels with few boundary errors, and it achieves better denoising through the second-stage network, which refines the coarse unwrapped values from the prior stage.

6. Conclusions and Future Work

In this article, we have introduced a novel two-stage deep learning architecture inspired by U-Net that shows better anti-noise behavior than many state-of-the-art algorithms. The first stage estimates the closest unwrapped value through wrap count estimation, and the second stage performs refinement to obtain a better estimate of the unwrapped values. This two-stage process achieves state-of-the-art performance on noisy input phases by learning the noise behavior and then removing it. In future work, training the network with more complex noise models to simulate the behavior of real InSAR phase maps can be explored. Speckle noise from random scatterers and highly correlated noise from InSAR preprocessing affect the quality of real-world InSAR images; an unwrapping algorithm robust to these different types of noise would be of great value in the field of InSAR processing. In addition, the amplitude images could be used as part of the network input to exploit the correlation of amplitude with the unwrapped phase values. The lack of real-world InSAR training data and of a perfect simulator to model real-world InSAR images may direct future research on phase unwrapping toward unsupervised learning.

Author Contributions

Conceptualization, S.V.K.; data curation, X.S. and Z.W.; formal analysis, X.S.; funding acquisition, I.C.; investigation, Z.W.; methodology, S.V.K. and X.S.; project administration, R.G. and I.C.; resources, Z.W. and R.G.; software, S.V.K.; supervision, R.G. and I.C.; validation, S.V.K.; visualization, S.V.K.; writing—original draft, S.V.K.; writing—review and editing, S.V.K., X.S., Z.W. and I.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Natural Sciences and Engineering Research Council of Canada (NSERC), grant number CRDPJ543428-19.

Data Availability Statement

Acknowledgments

We would like to acknowledge the anonymous reviewers for their valuable suggestions that have helped to improve this manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ferretti, A.; Monti-Guarnieri, A.; Prati, C.; Rocca, F.; Massonnet, D. InSAR Principles: Guidelines for SAR Interferometry Processing and Interpretation; ESA Publications: Noordwijk, The Netherlands, 2007; Volume 19.
  2. Ng, A.H.M.; Wang, H.; Dai, Y.; Pagli, C.; Chen, W.; Ge, L.; Du, Z.; Zhang, K. InSAR reveals land deformation at Guangzhou and Foshan, China between 2011 and 2017 with COSMO-SkyMed data. Remote Sens. 2018, 10, 813.
  3. Zhang, S. Absolute phase retrieval methods for digital fringe projection profilometry: A review. Opt. Lasers Eng. 2018, 107, 28–37.
  4. Dong, J.; Liu, T.; Chen, F.; Zhou, D.; Dimov, A.; Raj, A.; Cheng, Q.; Spincemaille, P.; Wang, Y. Simultaneous phase unwrapping and removal of chemical shift (SPURS) using graph cuts: Application in quantitative susceptibility mapping. IEEE Trans. Med. Imaging 2014, 34, 531–540.
  5. Ghiglia, D.C.; Pritt, M.D. Two-Dimensional Phase Unwrapping: Theory, Algorithms, and Software; Wiley: Hoboken, NJ, USA, 1998.
  6. Itoh, K. Analysis of the phase unwrapping algorithm. Appl. Opt. 1982, 21, 2470.
  7. Yu, H.; Lan, Y.; Yuan, Z.; Xu, J.; Lee, H. A Review on Phase Unwrapping in InSAR Signal Processing. IEEE Geosci. Remote Sens. Mag. 2019, 7, 40–58.
  8. Ghiglia, D.C.; Mastin, G.A.; Romero, L.A. Cellular-automata method for phase unwrapping. J. Opt. Soc. Am. A 1987, 4, 267–280.
  9. Huntley, J. Noise-immune phase unwrapping algorithm. Appl. Opt. 1989, 28, 3268–3270.
  10. Goldstein, R.M.; Zebker, H.A.; Werner, C.L. Satellite radar interferometry: Two-dimensional phase unwrapping. Radio Sci. 1988, 23, 713–720.
  11. Fried, D.L.; Vaughn, J.L. Branch cuts in the phase function. Appl. Opt. 1992, 31, 2865–2882.
  12. Flynn, T.J. Consistent 2-D phase unwrapping guided by a quality map. In Proceedings of the IGARSS'96, 1996 International Geoscience and Remote Sensing Symposium, Lincoln, NE, USA, 31 May 1996; Volume 4, pp. 2057–2059.
  13. Zhong, H.; Tang, J.; Zhang, S.; Chen, M. An improved quality-guided phase-unwrapping algorithm based on priority queue. IEEE Geosci. Remote Sens. Lett. 2010, 8, 364–368.
  14. Zhao, M.; Kemao, Q. Quality-guided phase unwrapping implementation: An improved indexed interwoven linked list. Appl. Opt. 2014, 53, 3492–3500.
  15. Ching, N.H.; Rosenfeld, D.; Braun, M. Two-dimensional phase unwrapping using a minimum spanning tree algorithm. IEEE Trans. Image Process. 1992, 1, 355–365.
  16. Graham, R.L.; Hell, P. On the history of the minimum spanning tree problem. Ann. Hist. Comput. 1985, 7, 43–57.
  17. An, L.; Xiang, Q.S.; Chavez, S. A fast implementation of the minimum spanning tree method for phase unwrapping. IEEE Trans. Med. Imaging 2000, 19, 805–808.
  18. Herráez, M.A.; Burton, D.R.; Lalor, M.J.; Gdeisat, M.A. Fast two-dimensional phase-unwrapping algorithm based on sorting by reliability following a noncontinuous path. Appl. Opt. 2002, 41, 7437–7444.
  19. Ghiglia, D.C.; Romero, L.A. Minimum Lp-norm two-dimensional phase unwrapping. J. Opt. Soc. Am. A 1996, 13, 1999–2013.
  20. Costantini, M. A novel phase unwrapping method based on network programming. IEEE Trans. Geosci. Remote Sens. 1998, 36, 813–821.
  21. Ghiglia, D.C.; Romero, L.A. Robust two-dimensional weighted and unweighted phase unwrapping that uses fast transforms and iterative methods. J. Opt. Soc. Am. A 1994, 11, 107–117.
  22. Yu, H.; Lan, Y.; Lee, H.; Cao, N. 2-D phase unwrapping using minimum infinity-norm. IEEE Geosci. Remote Sens. Lett. 2018, 15, 1887–1891.
  23. Yan, Y.; Wang, Y.; Yu, H. An Optimization Model for Two-Dimensional Single-Baseline InSAR Phase Unwrapping. In Proceedings of the IGARSS 2022, IEEE International Geoscience and Remote Sensing Symposium, Kuala Lumpur, Malaysia, 17–22 July 2022; pp. 727–730.
  24. Chartrand, R.; Calef, M.T.; Warren, M.S. Exploiting Sparsity for Phase Unwrapping. In Proceedings of the IGARSS 2019, IEEE International Geoscience and Remote Sensing Symposium, Yokohama, Japan, 28 July–2 August 2019; pp. 258–261.
  25. Chen, C.W. Statistical-Cost Network-Flow Approaches to Two-Dimensional Phase Unwrapping for Radar Interferometry; Stanford University: Stanford, CA, USA, 2001.
  26. Nico, G.; Palubinskas, G.; Datcu, M. Bayesian approaches to phase unwrapping: Theoretical study. IEEE Trans. Signal Process. 2000, 48, 2545–2556.
  27. Xie, X.; Pi, Y. Phase noise filtering and phase unwrapping method based on unscented Kalman filter. J. Syst. Eng. Electron. 2011, 22, 365–372.
  28. Xie, X.; Li, Y. Enhanced phase unwrapping algorithm based on unscented Kalman filter, enhanced phase gradient estimator, and path-following strategy. Appl. Opt. 2014, 53, 4049–4060.
  29. Zhang, Y.; Zhang, S.; Gao, Y.; Li, S.; Jia, Y.; Li, M. Adaptive Square-Root Unscented Kalman Filter Phase Unwrapping with Modified Phase Gradient Estimation. Remote Sens. 2022, 14, 1229.
  30. Martinez-Espla, J.J.; Martinez-Marin, T.; Lopez-Sanchez, J.M. A particle filter approach for InSAR phase filtering and unwrapping. IEEE Trans. Geosci. Remote Sens. 2009, 47, 1197–1211.
  31. Martinez-Espla, J.J.; Martinez-Marin, T.; Lopez-Sanchez, J.M. An optimized algorithm for InSAR phase unwrapping based on particle filtering, matrix pencil, and region-growing techniques. IEEE Geosci. Remote Sens. Lett. 2009, 6, 835–839.
  32. Chen, R.; Yu, W.; Wang, R.; Liu, G.; Shao, Y. Integrated denoising and unwrapping of InSAR phase based on Markov random fields. IEEE Trans. Geosci. Remote Sens. 2013, 51, 4473–4485.
  33. Sarker, I.H. Deep learning: A comprehensive overview on techniques, taxonomy, applications and research directions. SN Comput. Sci. 2021, 2, 420.
  34. Zhou, L.; Yu, H.; Pascazio, V.; Xing, M. PU-GAN: A One-Step 2-D InSAR Phase Unwrapping Based on Conditional Generative Adversarial Network. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5221510.
  35. Xu, M.; Tang, C.; Shen, Y.; Hong, N.; Lei, Z. PU-M-Net for phase unwrapping with speckle reduction and structure protection in ESPI. Opt. Lasers Eng. 2022, 151, 106824.
  36. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612.
  37. Zhou, H.; Cheng, C.; Peng, H.; Liang, D.; Liu, X.; Zheng, H.; Zou, C. The PHU-NET: A robust phase unwrapping method for MRI based on deep learning. Magn. Reson. Med. 2021, 86, 3321–3333.
  38. Qin, Y.; Wan, S.; Wan, Y.; Weng, J.; Liu, W.; Gong, Q. Direct and accurate phase unwrapping with deep neural network. Appl. Opt. 2020, 59, 7258–7267.
  39. Liu, Y.; Han, Y.; Li, F.; Zhang, Q. Speedup of minimum discontinuity phase unwrapping algorithm with a reference phase distribution. Opt. Commun. 2018, 417, 97–102.
  40. Spoorthi, G.; Gorthi, R.K.S.S.; Gorthi, S. PhaseNet 2.0: Phase unwrapping of noisy data based on deep learning approach. IEEE Trans. Image Process. 2020, 29, 4862–4872.
  41. Zhang, J.; Li, Q. EESANet: Edge-enhanced self-attention network for two-dimensional phase unwrapping. Opt. Express 2022, 30, 10470–10490.
  42. Perera, M.V.; De Silva, A. A joint convolutional and spatial quad-directional LSTM network for phase unwrapping. In Proceedings of the ICASSP 2021, IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Toronto, ON, Canada, 6–11 June 2021; pp. 4055–4059.
  43. Sica, F.; Calvanese, F.; Scarpa, G.; Rizzoli, P. A CNN-based coherence-driven approach for InSAR phase unwrapping. IEEE Geosci. Remote Sens. Lett. 2020.
  44. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional networks for biomedical image segmentation. In Proceedings of the Medical Image Computing and Computer-Assisted Intervention (MICCAI 2015): 18th International Conference, Munich, Germany, 5–9 October 2015; Proceedings, Part III; Springer: Cham, Switzerland, 2015; pp. 234–241.
  45. Zhang, T.; Jin, P.J.; Ge, Y.; Moghe, R.; Jiang, X. Vehicle detection and tracking for 511 traffic cameras with U-shaped dual attention inception neural networks and spatial-temporal map. Transp. Res. Rec. 2022, 2676, 613–629.
  46. Tang, Y.; Huang, Z.; Chen, Z.; Chen, M.; Zhou, H.; Zhang, H.; Sun, J. Novel visual crack width measurement based on backbone double-scale features for improved detection automation. Eng. Struct. 2023, 274, 115158.
  47. Tang, Y.; Chen, Z.; Huang, Z.; Nong, Y.; Li, L. Visual measurement of dam concrete cracks based on U-net and improved thinning algorithm. J. Exp. Mech. 2022, 37, 209–220.
  48. Tang, Y.; Zhu, M.; Chen, Z.; Wu, C.; Chen, B.; Li, C.; Li, L. Seismic performance evaluation of recycled aggregate concrete-filled steel tubular columns with field strain detected via a novel mark-free vision method. Structures 2022, 37, 426–441.
  49. Sun, X.; Zimmer, A.; Mukherjee, S.; Kottayil, N.K.; Ghuman, P.; Cheng, I. DeepInSAR: A Deep Learning Framework for SAR Interferometric Phase Restoration and Coherence Estimation. Remote Sens. 2020, 12, 2340.
  50. Sica, F.; Reale, D.; Poggi, G.; Verdoliva, L.; Fornaro, G. Nonlocal adaptive multilooking in SAR multipass differential interferometry. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2015, 8, 1727–1742.
  51. Pu, L.; Zhang, X.; Zhou, Z.; Li, L.; Zhou, L.; Shi, J.; Wei, S. A Robust InSAR Phase Unwrapping Method via Phase Gradient Estimation Network. Remote Sens. 2021, 13, 4564.
  52. Dabov, K.; Foi, A.; Katkovnik, V.; Egiazarian, K. Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. Image Process. 2007, 16, 2080–2095.
Figure 1. Summary of the different procedures for phase unwrapping.
Figure 2. Architecture of regression-based phase unwrapping methods.
Figure 3. Architecture of wrap-count-based phase unwrapping methods.
Figure 4. The block diagram of the proposed methodology.
Figure 5. Architecture of each U-Net used in our framework.
Figure 6. Boundary artifacts observed when trained with noisy input. Noisy wrap input (left column); the output produced by Sica et al. [43] when trained with noisy wrap phase input and noiseless ground truth (middle column); and ground truth (right column). The x- and y-axes correspond to the azimuth and range spatial dimensions. The color bars indicate the scale of the wrap input and unwrap output.
Figure 7. Qualitative comparison of the unwrapped outputs of various deep learning algorithms (from row 3 onward), with our proposed method in row 3. Each column shows one sample. Row 1 is the wrapped phase interferogram and row 2 is the ground truth. Rows 4, 6, and 8 are the error maps for the unwrapped outputs in rows 3, 5, and 7. [a] and [b] refer to the outputs of Sica et al. [43] and Perera et al. [42], respectively. (i), (ii), and (iii) indicate the color bars for the scale of the wrapped input, unwrapped output, and error map, respectively.
Figure 8. Qualitative comparison of the unwrapped outputs of various traditional algorithms (from row 1 onward). Each column shows one sample and has the same wrapped phase interferogram and ground truth (rows 1 and 2, respectively) as Figure 7. Rows 2, 4, 6, and 8 are the error maps for the unwrapped outputs in rows 1, 3, 5, and 7. [c], [d], [e], and [f] refer to the outputs of BM3D [52] + SNAPHU [25], SNAPHU [25], Chartrand et al. [24], and Herráez et al. [18], respectively. (i), (ii), and (iii) indicate the color bars for the scale of the wrapped input, unwrapped output, and error map, respectively.
Figure 9. Comparison of the unwrapped output of algorithms: Perera et al. [42] (column (a)) and the proposed method. Row 1 and row 2 show two samples. The rectangular boxes highlight the errors in boundaries in Perera et al. [42] not observed in the proposed method. The color bar indicates the scale of the unwrap outputs.
Table 1. Summary of the classical methods to solve SBPU.

| Reference | Methodology | Key Highlights | Limitation |
|---|---|---|---|
| Huntley [9] | Minimum length cut algorithm to avoid residues | Works well in high-quality and low-noise phase maps | May not generate output at all if phase maps are too noisy |
| Goldstein [10] | Integration paths encircle an equal number of positive and negative residues | Branch-cut is used to find a solution | NP-hard |
| Flynn [12], Zhong [13], and Zhao [14] | Quality-map-guided unwrapping | Coherence is used as a measure of quality | Requires many SLCs to generate reliable quality maps |
| Ching et al. [15] | Segmentation-guided unwrapping | Each segment is unwrapped independently | Boundary errors |
| An et al. [17] | Segmentation-guided unwrapping | Faster minimum spanning tree algorithm | Boundary errors |
| Herráez et al. [18] | Reliability-guided unwrapping | Unwraps from the most reliable to the least reliable pixel | Poor noise immunity |
| Costantini [20] | Lp-norm optimization | p = 1, minimum cost problem | Unwrap solution is smooth but not the best |
| Ghiglia et al. [21] | Lp-norm optimization | p = 2, results in a smooth unwrap solution | Affected by noise |
| Chartrand et al. [24] | Sparse optimization | Faster computation | Poor noise immunity |
| SNAPHU [25] | MAP estimation | Captures dependencies with factors such as amplitude | Accuracy depends on the estimates of the probability density functions |
Table 2. Summary of the deep learning methods to solve SBPU.

| Reference | Network Architecture | Loss Function | Limitation |
|---|---|---|---|
| Zhou et al. [34] | GAN | Adversarial loss | Dataset has no noise |
| Xu et al. [35] | MNet | MAE and SSIM | Only works at low concentrations of speckle noise |
| Qin et al. [38] | ResUNet | MAE | Works only in low noise |
| Spoorthi et al. [40] | Dense-UNet | Cross-entropy + mean absolute error (MAE) | Incorrect unwrapping boundaries |
| Zhang et al. [41] | EESANet | Cross-entropy + L1 loss + residue loss | Poor noise immunity |
| Sica et al. [43] | U-Net | Cross-entropy + Jaccard loss + mean absolute error (MAE) | Poor noise immunity |
| Perera et al. [42] | Spatial quad-directional long short-term memory (LSTM) | Variance error | Incorrect unwrapping boundaries |
Table 3. Comparison of our method with other methods: average/worst RMSE, average/worst SSIM, and UFR. The best-performing method for each metric is highlighted in bold text.

| Method | RMSE (Average/Worst) | SSIM (Average/Worst) | UFR (%) |
|---|---|---|---|
| Herráez et al. [18] | 1.425/9.36 | 0.89/0.33 | 3.52 |
| Chartrand et al. [24] | 0.93/14.12 | 0.93/0.61 | 1.25 |
| SNAPHU [25] | 0.96/12.59 | 0.91/0.63 | 1.2 |
| BM3D [52] + SNAPHU [25] | 0.87/14.69 | 0.93/0.65 | 1.1 |
| Sica et al. [43] | 0.54/2.91 | 0.91/0.64 | 0.01 |
| Perera et al. [42] | 1.01/5.66 | 0.98/0.92 | 0.75 |
| Our method | **0.11/2.18** | **0.99/0.95** | **0.006** |

