Article

An On-Site InSAR Terrain Imaging Method with Unmanned Aerial Vehicles

Graduate Institute of Communication Engineering, National Taiwan University, Taipei 10617, Taiwan
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Sensors 2024, 24(7), 2287; https://doi.org/10.3390/s24072287
Submission received: 27 December 2023 / Revised: 11 March 2024 / Accepted: 2 April 2024 / Published: 3 April 2024

Abstract

An on-site InSAR imaging method carried out with unmanned aerial vehicles (UAVs) is proposed to monitor terrain changes with high spatial resolution, short revisit time, and high flexibility. To survey and explore a specific area of interest in real time, a combination of a least-squares phase unwrapping technique and a mean filter for removing speckles is effective in reconstructing the terrain profile. The proposed method is validated by simulations on three scenarios scaled down from the high-resolution digital elevation models of the US Geological Survey (USGS) 3D Elevation Program (3DEP) datasets. The efficacy of the proposed method and its efficiency in CPU time are validated by comparison with several state-of-the-art techniques.

1. Introduction

Radars have been widely used for terrain surveillance under different weather conditions [1], which is crucial for environmental protection and natural disaster evaluation [2]. Synthetic aperture radars (SARs), including ALOS (L-band, 2006–2011) [3], Sentinel-1 (C-band, 2014–) [3], and UAVSAR (airborne, L-band and P-band, 2008–) [3], have been used for monitoring glaciers, volcanoes, earthquakes, and so on. TerraSAR-X, operating at X-band with 300 MHz bandwidth, offers a spatial resolution of 0.6 m × 1.1 m (slant range × azimuth) in spotlight mode, 0.6 m × 0.24 m in staring spotlight mode, and 1.2 m × 3.3 m in stripmap mode [4,5].
The InSAR technique has been used for measuring surface topography and altimetry profile [6], mapping three-dimensional building shape [7], and detecting building edge [8]. InSAR and TomoSAR imaging techniques demand precise coregistration between master image and slave images [9,10]. In [9], a two-step, scale-invariant feature transform (SIFT) registration method was proposed. In [11], an outlier-detecting total least-squares (OD-TLS) algorithm was proposed to enhance the precision and robustness of 3D point-set registration. In [12], a sinc interpolation method was used to implement subpixel-to-subpixel match.
Faithful reconstruction of a terrain profile relies on accurate acquisition of interferometric phase. Numerous filtering methods on interferometric phase have been developed in the past few decades [13], including transform domain methods [14], nonlocal methods [15], and spatial domain methods [16]. The trade-off between noise reduction and preservation of terrain-related signal with transform domain methods is typically adjusted via a threshold [14].
In [17], a 3D space-time nonlocal mean filter (NLMF) was applied to detect terrain changes by extracting nonlocal information from pixels in SAR images acquired in different time windows. In [18], a nonlocal mean filter was applied to a few persistent scattering points in a search area to improve the accuracy of 3D deformation profile. The nonlocal filters performed well in preserving details of complex structures, but were less effective in removing speckle noise [15].
A spatial-domain Gaussian filter was used to reduce high-frequency noise while preserving deformation information [19]. It could reduce impulse noise and preserve edges by replacing each pixel with the mean value of its neighboring pixels [20], but the edges might become blurred due to loss of fine details. On the other hand, nonlocal filters preserve intricate details and adapt to local structures by considering pixel patch similarities, with the downside of computational complexity and sensitivity to parameters.
Phase unwrapping (PU) is a critical step to derive a faithful terrain profile from the interferometric phase of the acquired InSAR image, and the results are affected by the number of baselines used in probing the target area [21]. A phase unwrapping problem could be formulated as a wrap count classification task to invoke deep learning methods [22], as used in processing optical images [23,24]. In [25], a quality-guided algorithm was developed by unwrapping the phases along an optimal path in the interferometric phase image, based on a quality map of all edges in the image. Although the result is insensitive to noise, its performance relies on the quality map and the errors may propagate along the path.
A least-squares (LS) phase unwrapping method was formulated as a global optimization task [26], which may be sensitive to outliers and takes long computational time to process a large image. In [27], a phase unwrapping method was proposed by minimizing the difference between the discrete partial derivative of the wrapped phase function and that of the unwrapped phase function. The unwrapped phases were obtained by solving a Hunt’s matrix and a discrete Poisson’s equation, accelerated by using FFT, and the result was comparable to other methods.
InSAR imaging tasks have been operated on spaceborne [28], airborne [29], ground-based [30], and UAV-borne platforms [31]. Spaceborne platforms are typically used to survey wide areas or large-scale phenomena [4], airborne platforms are more flexible in path planning [32], and ground-based platforms are used to monitor local environment [33].
UAV-borne platforms [34,35,36] are expedient for monitoring local areas on short notice and can achieve a spatial resolution of 10 cm [37] in the P and L bands [31]. For example, the Antarctic ice sheet (AIS) is covered with uncharted rifts and crevasses, endangering exploration personnel [38,39]. Satellite-borne sensors cannot provide up-to-date images and information for on-site tasks [38,40], but they can be complemented with InSAR images acquired with UAVs. Typical satellite-borne platforms take days to revisit the same area, with a baseline of a few hundred meters, while UAV-borne platforms can revisit the same area immediately after the previous flight, with a baseline of a few meters.
The radar signals can be acquired in two separate flights with single-channel SAR or a single flight with dual-channel SAR [41]. Typical position accuracy of UAVs derived from GPS lies between 0.5 and 2 m [42], which can be enhanced to the centimeter level by using differential GPS (DGPS) technique [43] or real-time kinematic GPS [44]. The downside of deploying UAVs is the impact of airflow disturbance and platform perturbation [42], which can be mitigated by applying motion compensation and autofocusing techniques [45,46,47].
In this work, an on-site InSAR imaging method is proposed to reconstruct a high-resolution local terrain profile with UAV-borne SARs in L-band. A mean filter is used to reduce artifact speckles, and a least-squares phase unwrapping method is used to acquire the 2D interferometric phase in almost real time. Three high-quality digital elevation models (DEMs), featuring a volcano, a glacier, and a landslide, are retrieved from the US Geological Survey (USGS) 3D Elevation Program (3DEP) [48] to validate the efficacy of the proposed method. The performance is further evaluated by comparing the acquired InSAR images with their counterparts acquired using other state-of-the-art techniques under the effects of noise.
The rest of this paper is organized as follows: the proposed InSAR method is presented in Section 2, the simulation results are discussed in Section 3, and some conclusions are drawn in Section 4.

2. Proposed InSAR Method

Figure 1 shows the schematic of InSAR operation with two parallel flight paths, where the x, y, and z axes are aligned in the ground-range direction, azimuth direction, and height direction, respectively. A ( x , y , z ) denotes a point target, and the platform flies at height H above ground, with the side-looking angle θ to A ( x , y , z ) .
The coordinates of radar P 0 ( η ) along the master track and radar P 1 ( η ) along the slave track are given by
$$P_0(\eta) = (0,\ \eta v_p,\ H), \qquad P_1(\eta) = (-b,\ \eta v_p,\ H)$$
The slant ranges from P 0 ( η ) and P 1 ( η ) to A ( x , y , z ) are R 0 ( η ) and R 1 ( η ) , respectively, with
$$R_0(\eta) = \sqrt{x^2 + (y - \eta v_p)^2 + (z - H)^2}, \qquad R_1(\eta) = \sqrt{(x + b)^2 + (y - \eta v_p)^2 + (z - H)^2}$$

2.1. Backscattered Signals

Figure 2 shows the flow-chart of the range-Doppler algorithm (RDA) used in this work [49]. The signal backscattered from the point target A ( x , y , z ) and received at P n ( η ) ( n = 0 , 1 ) is demodulated to the baseband as
$$s_{rn}(\tau, \eta) = A_0\, w_e\!\left(\tau - \frac{2R_n(\eta)}{c}\right) e^{-j4\pi f_0 R_n(\eta)/c\ +\ j\pi K_r\left[\tau - 2R_n(\eta)/c\right]^2}$$
where $A_0$ is the amplitude, $f_0$ is the carrier frequency, $K_r$ is the chirp rate of the linear frequency modulation (LFM) pulse, $\tau$ is the range (fast) time, $\eta$ is the azimuth (slow) time, and $w_e(t) = \mathrm{rect}(t)$ is a window function, which is equal to one when $|t| \le 1/2$ and zero otherwise.
By taking the Fourier transform of s r n ( τ , η ) with respect to τ and η sequentially, we have
$$S_n(f_\tau, f_\eta) \simeq A\, W_e(f_\tau)\, e^{j\phi_n}$$
where A is a constant of integration, W e ( f τ ) = w e ( f τ / K r ) , and
$$\phi_n \simeq -\frac{\pi f_\tau^2}{K_r} - \frac{2\pi f_\eta y}{v_p} - \frac{4\pi R_n(0) D}{\lambda} - \frac{4\pi R_n(0)}{\lambda D}\frac{f_\tau}{f_0} + \frac{4\pi R_n(0)}{\lambda}\frac{1 - D^2}{2D^3}\left(\frac{f_\tau}{f_0}\right)^2 - \frac{4\pi R_n(0)}{\lambda}\frac{1 - D^2}{2D^5}\left(\frac{f_\tau}{f_0}\right)^3$$
with $D = \sqrt{1 - \dfrac{c^2 f_\eta^2}{4 f_0^2 v_p^2}}$.
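As a concrete illustration, the demodulated baseband echo model of Section 2.1 can be synthesized numerically. The sketch below is not the paper's configuration: all parameter values are hypothetical L-band settings, and the rect envelope is assumed to be normalized by the pulse duration $T_r$.

```python
import numpy as np

# Sketch of the demodulated baseband echo of a point target, for a single
# azimuth position. All numbers are illustrative (not Table 1 values), and
# the rect envelope w_e is assumed normalized by the pulse duration T_r.
c = 3e8          # speed of light (m/s)
f0 = 1.3e9       # carrier frequency f_0 (Hz)
Kr = 1e12        # chirp rate K_r (Hz/s)
Tr = 10e-6       # pulse duration T_r (s)
R = 5000.0       # slant range R_n(eta) to the point target (m)
Fs = 100e6       # range sampling rate (Hz)

tau = np.arange(0, 80e-6, 1/Fs)            # fast time axis
t = tau - 2*R/c                            # time relative to echo arrival
we = (np.abs(t/Tr) <= 0.5).astype(float)   # rectangular envelope w_e
s = we * np.exp(-1j*4*np.pi*f0*R/c + 1j*np.pi*Kr*t**2)

support = tau[np.abs(s) > 0]
print(support[0], support[-1])             # echo occupies 2R/c +/- Tr/2
```

The quadratic phase inside the envelope is the LFM chirp that the range-compression filter of Section 2.2 later removes.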

2.2. Range Compression

Let us define a range-compression filter H rc ( f τ , f η ) , a coupling-compensation filter H cc ( f τ , f η ) , and a range cell migration correction (RCMC) filter H rcmc ( f τ , f η ) as
$$H_{rc}(f_\tau, f_\eta) = e^{j\pi f_\tau^2 / K_m}$$
$$H_{cc}(f_\tau, f_\eta) = \exp\left(j\, \frac{\pi \lambda R_n(0)\, f_\tau^3 f_\eta^2}{2 D^5 f_0^3 v_p^2}\right)$$
$$H_{rcmc}(f_\tau, f_\eta) = \exp\left[j\, \frac{4\pi R_n(0)\, f_\tau}{c}\left(\frac{1}{D} - \frac{1}{D(f_{dc})}\right)\right]$$
where f dc is the Doppler centroid and
$$\frac{1}{K_m} = \frac{1}{K_r} - \frac{4 R_n(0)}{\lambda}\, \frac{1 - D^2}{2 D^3 f_0^2}$$
Then, multiply these three filters with S n ( f τ , f η ) to have
$$S_n^{(1)}(f_\tau, f_\eta) = S_n(f_\tau, f_\eta)\, H_{rc}(f_\tau, f_\eta)\, H_{cc}(f_\tau, f_\eta)\, H_{rcmc}(f_\tau, f_\eta) = A\, W_e(f_\tau)\, e^{j\phi_n^{(1)}}$$
where
$$\phi_n^{(1)} = -\frac{2\pi f_\eta y}{v_p} - \frac{4\pi R_n(0) D}{\lambda} - \frac{4\pi R_n(0)}{\lambda D(f_{dc})}\, \frac{f_\tau}{f_0}$$
By taking the inverse Fourier transform of S n ( 1 ) ( f τ , f η ) in the range, we obtain the range-compressed signal
$$S_n^{(2)}(\tau, f_\eta) = A\, e^{-j2\pi f_\eta y / v_p}\, e^{-j4\pi R_n(0) D / \lambda}\, K_r T_r\, \operatorname{sinc}\!\left[K_r T_r\left(\tau - \frac{2 R_n(0)}{c\, D(f_{dc})}\right)\right]$$
where sinc ( x ) = sin ( π x ) / ( π x ) .

2.3. Azimuth Compression

Let us define an azimuth compression filter
$$H_{ac}(\tau, f_\eta) = e^{j4\pi R_n(0) D / \lambda}$$
which is multiplied with S n ( 2 ) ( τ , f η ) to have
$$S_n^{(3)}(\tau, f_\eta) = S_n^{(2)}(\tau, f_\eta)\, H_{ac}(\tau, f_\eta) = A\, K_r T_r\, e^{-j2\pi f_\eta y / v_p}\, \operatorname{sinc}\!\left[K_r T_r\left(\tau - \frac{2 R_n(0)}{c\, D(f_{dc})}\right)\right]$$
By taking the inverse Fourier transform of S n ( 3 ) ( τ , f η ) in azimuth, we obtain the azimuth-compressed signal
$$s_n^{(4)}(\tau, \eta) = \mathcal{F}_\eta^{-1}\left\{S_n^{(3)}(\tau, f_\eta)\right\} = A\, K_r T_r\, \operatorname{sinc}\!\left[K_r T_r\left(\tau - \frac{2 R_n(0)}{c\, D(f_{dc})}\right)\right] F_a\, \operatorname{sinc}\!\left[F_a\left(\eta - y / v_p\right)\right]$$
which is the SAR image stored in a matrix $s_n^{(4)}[u, v] = s_n^{(4)}(\tau_v, \eta_u)$ of size $N_a \times N_r$.

2.4. Coregistration

Figure 3 shows the flow-chart of InSAR imaging. In the master image, the $\tau$-axis is sampled at $\tau_a + n_r \Delta\tau$, with $\tau_a = 2R_0/c$ and $-N_r/2 \le n_r \le N_r/2 - 1$. These sampling values of $\tau$ are stored in a vector
$$\bar{a} = [\tau_a,\ \tau_a,\ \ldots,\ \tau_a]^t + \Delta\tau\, \left[-N_r/2,\ -N_r/2 + 1,\ \ldots,\ N_r/2 - 1\right]^t$$
The slant ranges associated with all the range cells in the master image are $\bar{r}_0 = c\bar{a}/2$, and the side-looking angle of the $v$th range cell is $\theta[v] = \cos^{-1}(H / r_0[v])$, with $1 \le v \le N_r$.
Figure 4a shows that the point target $A(x, y, h)$ appears at $A_0$ in the master image and $A_1$ in the slave image. If the platforms fly high enough, the range difference between the two tracks in Figure 4a can be approximated as that in Figure 4b, namely, $\Delta r_A[v] = r_{1A}[v] - r_{0A}[v] \simeq r_1[v] - r_0[v] = \Delta r[v]$. By the law of cosines, $r_1[v]$ can be represented as $r_1[v] = \sqrt{b^2 + (r_0[v])^2 - 2 b\, r_0[v] \cos(\theta_0[v] + \pi/2)}$. The range difference $\Delta r[v]$ is normalized with respect to $c/(2F_r)$ to have $\Delta r_p[v] = \Delta r[v]\, (2F_r / c)$.
Next, apply both sinc interpolation [12] and subpixel-to-subpixel match to coregister the slave image to the master image. The original slave image $\bar{\bar{s}}_1^{(10)}$ of size $N_a \times N_r$ is interpolated in the range direction by a factor of 16 to obtain a finer slave image $\bar{\bar{s}}_1^{(13)}$ of size $N_a \times 16 N_r$, which is resampled to derive a coregistered slave image $S_1^c[u, v]$ of size $N_a \times N_r$.
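The 16-fold range interpolation can be sketched with FFT zero-padding, which is equivalent to sinc interpolation for band-limited sampled data; the input image below is synthetic, and the function name is a hypothetical helper.

```python
import numpy as np

# Sketch of the 16x range-direction upsampling used before resampling the
# slave image. FFT zero-padding acts as a band-limited (sinc-equivalent)
# interpolator; the input here is synthetic random data.
def upsample_range(img, factor=16):
    """img: complex image of shape (Na, Nr) -> upsampled (Na, factor*Nr)."""
    Na, Nr = img.shape
    spec = np.fft.fft(img, axis=1)
    padded = np.zeros((Na, factor * Nr), dtype=complex)
    half = Nr // 2
    padded[:, :half] = spec[:, :half]      # keep positive frequencies
    padded[:, -half:] = spec[:, -half:]    # keep negative frequencies
    return np.fft.ifft(padded, axis=1) * factor

rng = np.random.default_rng(0)
slave = rng.standard_normal((4, 32)) + 1j * rng.standard_normal((4, 32))
fine = upsample_range(slave)
print(fine.shape)                          # (4, 512)
```

Every 16th sample of the interpolated image reproduces the corresponding original sample, so the subpixel match can shift the fine grid and decimate it back to size $N_a \times N_r$.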

2.5. Interferometry and Flat-Earth Phase Removal

An interferogram is formed from the master image S 0 [ u , v ] and the coregistered slave image S 1 c [ u , v ] as
$$I[u, v] = |S_0[u, v]|\, |S_1^c[u, v]|\, e^{j\phi[u, v]}$$
where $\phi[u, v] = \phi_0[u, v] - \phi_1^c[u, v]$ is the interferometric phase.
The interferometric phase attributed to the flat-earth reference plane is $\phi_f[v] = 4\pi (r_1[v] - r_0[v]) / \lambda$ [50], which is subtracted from the phase of $I[u, v]$ in (17) to obtain
$$I^{(1)}[u, v] = I[u, v]\, e^{-j\phi_f[v]} = |S_0[u, v]|\, |S_1^c[u, v]|\, e^{j\phi^{(1)}[u, v]}$$
where $\phi^{(1)}[u, v] = \phi[u, v] - \phi_f[v]$.

2.6. Mean Filter

Since the master image and the slave image are not perfectly coregistered, the interferometric phase manifests some random noise, inflicting errors in the subsequent phase unwrapping process. A mean filter is applied before phase unwrapping to reduce such phase noise.
Consider a target area of ( 2 L a + 1 ) azimuth cells by ( 2 L r + 1 ) range cells, centered at [ N a / 2 , N r / 2 ] . The interferometric phase in the target area is mapped from ϕ ( 1 ) [ u , v ] as
$$\bar{\bar{\phi}}^{(2)} = \begin{bmatrix} \phi^{(1)}[N_a/2 + L_a,\ N_r/2 - L_r] & \cdots & \phi^{(1)}[N_a/2 + L_a,\ N_r/2 + L_r] \\ \vdots & & \vdots \\ \phi^{(1)}[N_a/2 - L_a,\ N_r/2 - L_r] & \cdots & \phi^{(1)}[N_a/2 - L_a,\ N_r/2 + L_r] \end{bmatrix}$$
Next, apply a searching window of size ( 2 w a + 1 ) × ( 2 w r + 1 ) and centered at [ u , v ] on ϕ ¯ ¯ ( 2 ) to have
$$\bar{\bar{\phi}}^{(3)}_{uv} = \begin{bmatrix} \phi^{(2)}[u + w_a,\ v - w_r] & \cdots & \phi^{(2)}[u + w_a,\ v + w_r] \\ \vdots & & \vdots \\ \phi^{(2)}[u - w_a,\ v - w_r] & \cdots & \phi^{(2)}[u - w_a,\ v + w_r] \end{bmatrix}$$
An intermediate phase, ϕ s [ u , v ] , is derived from ϕ ¯ ¯ u v ( 3 ) as [51]
$$A_{\phi_s}[u, v]\, e^{j\phi_s[u, v]} = \sum_{u' = u_{\min}}^{u_{\max}} \sum_{v' = v_{\min}}^{v_{\max}} e^{j\phi^{(3)}_{uv}[u', v']}$$
The interferometric phase after mean filtering is computed as [20]
$$\phi^{(3)}[u, v] = \phi_s[u, v] + \frac{1}{(2w_a + 1)(2w_r + 1)} \sum_{u' = u_{\min}}^{u_{\max}} \sum_{v' = v_{\min}}^{v_{\max}} \left( \phi^{(2)}[u', v'] - \phi_s[u', v'] \right)$$
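A minimal sketch of this mean filter is given below, with the local average taken through the complex exponential so that $2\pi$ wrap jumps do not bias the result; the window sizes are illustrative choices, not the paper's settings.

```python
import numpy as np

# Sketch of the speckle-reducing mean filter: the wrapped phase is averaged
# through its complex exponential, which avoids the 2*pi jump errors a
# direct average of angles would introduce. Window sizes are illustrative.
def mean_filter_phase(phase, wa=2, wr=2):
    """phase: wrapped interferometric phase (2D); window (2wa+1) x (2wr+1)."""
    z = np.exp(1j * phase)
    U, V = phase.shape
    out = np.empty_like(phase)
    for u in range(U):
        for v in range(V):
            patch = z[max(0, u - wa):u + wa + 1, max(0, v - wr):v + wr + 1]
            out[u, v] = np.angle(patch.sum())   # phase of the complex sum
    return out

# A constant phase corrupted by noise is pulled back toward its true value.
rng = np.random.default_rng(0)
true = 3.0
noisy = np.angle(np.exp(1j * (true + 0.3 * rng.standard_normal((32, 32)))))
filtered = mean_filter_phase(noisy)
```

Because the true phase here sits near $\pi$, a naive arithmetic mean would be corrupted by samples wrapping to $-\pi$; the complex sum is immune to this.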

2.7. Poisson’s Equation of Unwrapped Phase

Let us define a wrapping operator as [25]
$$\phi^{(4)}[u, v] = \mathcal{W}\left\{\phi^{(3)}[u, v]\right\} = \phi^{(3)}[u, v] - 2\pi \left\lfloor \frac{\phi^{(3)}[u, v] + \pi}{2\pi} \right\rfloor$$
which returns the principal value of $\phi^{(3)}[u, v]$ in $(-\pi, \pi]$. The residue of $\phi^{(3)}[u, v]$ is determined as [25]
$$\mathcal{R}\left\{\phi^{(3)}[u, v]\right\} = \mathcal{W}\left\{\phi^{(3)}[u, v] - \phi^{(3)}[u+1, v]\right\} + \mathcal{W}\left\{\phi^{(3)}[u+1, v] - \phi^{(3)}[u+1, v+1]\right\} + \mathcal{W}\left\{\phi^{(3)}[u+1, v+1] - \phi^{(3)}[u, v+1]\right\} + \mathcal{W}\left\{\phi^{(3)}[u, v+1] - \phi^{(3)}[u, v]\right\}$$
with possible outcomes of $-2\pi$, $0$, or $2\pi$.
Next, take the mirror reflections of the wrapped phase function to obtain an even periodic function, which is continuous at the junction between two adjacent periods. Let $U = 2L_a + 1$ and $V = 2L_r + 1$; an expanded phase function is defined in terms of $\bar{\bar{\phi}}^{(4)}$ and its three versions of mirror reflection as
$$\bar{\bar{\phi}}^{(4)}_{2U \times 2V} = \begin{bmatrix} \bar{\bar{\phi}}^{(4b)} & \bar{\bar{\phi}}^{(4c)} \\ \bar{\bar{\phi}}^{(4)} & \bar{\bar{\phi}}^{(4a)} \end{bmatrix}$$
where
$$\bar{\bar{\phi}}^{(4a)} = \begin{bmatrix} \phi^{(4)}[U, V] & \cdots & \phi^{(4)}[U, 1] \\ \vdots & & \vdots \\ \phi^{(4)}[1, V] & \cdots & \phi^{(4)}[1, 1] \end{bmatrix}, \quad \bar{\bar{\phi}}^{(4b)} = \begin{bmatrix} \phi^{(4)}[1, 1] & \cdots & \phi^{(4)}[1, V] \\ \vdots & & \vdots \\ \phi^{(4)}[U, 1] & \cdots & \phi^{(4)}[U, V] \end{bmatrix}, \quad \bar{\bar{\phi}}^{(4c)} = \begin{bmatrix} \phi^{(4)}[1, V] & \cdots & \phi^{(4)}[1, 1] \\ \vdots & & \vdots \\ \phi^{(4)}[U, V] & \cdots & \phi^{(4)}[U, 1] \end{bmatrix}$$
The wrapped phase differences
$$\Delta\phi^v[u, v] = \mathcal{W}\left\{\phi^{(4)}[u, v+1] - \phi^{(4)}[u, v]\right\}, \qquad \Delta\phi^u[u, v] = \mathcal{W}\left\{\phi^{(4)}[u+1, v] - \phi^{(4)}[u, v]\right\}$$
fall in $(-\pi, \pi]$.
Given the wrapped phase ϕ ( 4 ) [ u , v ] , its unwrapped counterpart, ϕ ˜ [ u , v ] satisfies
$$\tilde{\phi}[u+1, v] - \tilde{\phi}[u, v] = \Delta\phi^u[u, v], \quad 1 \le u \le 2U - 1,\ 1 \le v \le 2V$$
$$\tilde{\phi}[u, v+1] - \tilde{\phi}[u, v] = \Delta\phi^v[u, v], \quad 1 \le u \le 2U,\ 1 \le v \le 2V - 1$$
The least-squares solution of (25) and (26) can be obtained by minimizing the cost function [27,52]
$$C = \sum_{u=1}^{2U-1} \sum_{v=1}^{2V} \left( \tilde{\phi}[u+1, v] - \tilde{\phi}[u, v] - \Delta\phi^u[u, v] \right)^2 + \sum_{u=1}^{2U} \sum_{v=1}^{2V-1} \left( \tilde{\phi}[u, v+1] - \tilde{\phi}[u, v] - \Delta\phi^v[u, v] \right)^2$$
with the Hunt’s method to have [52]
$$\tilde{\phi}[u+1, v] + \tilde{\phi}[u-1, v] + \tilde{\phi}[u, v+1] + \tilde{\phi}[u, v-1] - 4\tilde{\phi}[u, v] = \Delta\phi^u[u, v] - \Delta\phi^u[u-1, v] + \Delta\phi^v[u, v] - \Delta\phi^v[u, v-1]$$
which is rearranged into a Poisson’s difference equation on a 2 U × 2 V grid as
$$\left( \tilde{\phi}[u+1, v] - 2\tilde{\phi}[u, v] + \tilde{\phi}[u-1, v] \right) + \left( \tilde{\phi}[u, v+1] - 2\tilde{\phi}[u, v] + \tilde{\phi}[u, v-1] \right) = \rho[u, v]$$
where
$$\rho[u, v] = \Delta\phi^u[u, v] - \Delta\phi^u[u-1, v] + \Delta\phi^v[u, v] - \Delta\phi^v[u, v-1]$$

2.8. Solving Poisson’s Difference Equation with FFT

Define the 2D discrete Fourier transform (DFT) of ϕ ˜ [ u , v ] and its inverse as [52]
$$\Phi[m, n] = \sum_{u=1}^{2U} \sum_{v=1}^{2V} \tilde{\phi}[u, v]\, \exp\!\left[-j\frac{2\pi (m-1)(u-1)}{2U}\right] \exp\!\left[-j\frac{2\pi (n-1)(v-1)}{2V}\right], \quad 1 \le m \le 2U,\ 1 \le n \le 2V$$
$$\tilde{\phi}[u, v] = \frac{1}{4UV} \sum_{m=1}^{2U} \sum_{n=1}^{2V} \Phi[m, n]\, \exp\!\left[j\frac{2\pi (m-1)(u-1)}{2U}\right] \exp\!\left[j\frac{2\pi (n-1)(v-1)}{2V}\right], \quad 1 \le u \le 2U,\ 1 \le v \le 2V$$
By substituting (31) into the left-hand-side of (28), we obtain
$$\frac{1}{4UV} \sum_{m=1}^{2U} \sum_{n=1}^{2V} \Phi[m, n]\, e^{j\alpha} e^{j\beta} \left\{ e^{j\pi(m-1)/U} + e^{-j\pi(m-1)/U} + e^{j\pi(n-1)/V} + e^{-j\pi(n-1)/V} - 4 \right\}$$
where $\alpha = \pi(m-1)(u-1)/U$ and $\beta = \pi(n-1)(v-1)/V$. The right-hand side of (28) can be represented as
$$\rho[u, v] = \mathrm{IDFT}\left\{ P[m, n] \right\} = \frac{1}{4UV} \sum_{m=1}^{2U} \sum_{n=1}^{2V} P[m, n]\, e^{j\alpha} e^{j\beta}, \quad 1 \le u \le 2U,\ 1 \le v \le 2V$$
By equating (32) and (33), we obtain
$$P[m, n] = \Phi[m, n] \left[ 2\cos\frac{\pi(m-1)}{U} + 2\cos\frac{\pi(n-1)}{V} - 4 \right]$$
The phase unwrapping procedure is summarized as follows:
Step 1: Take the mirror reflections of $\bar{\bar{\phi}}^{(4)}$ to obtain $\bar{\bar{\phi}}^{(4)}_{2U \times 2V}$, as in (22);
Step 2: Compute $\rho[u, v]$ in (29), with $1 \le u \le 2U$ and $1 \le v \le 2V$;
Step 3: Take 2D DFT of ρ [ u , v ] to obtain P [ m , n ] , as in (33);
Step 4: Compute $\Phi[m, n]$ by using (34), with $1 \le m \le 2U$, $1 \le n \le 2V$, and $\Phi[1, 1] = 0$;
Step 5: Take 2D IDFT of Φ [ m , n ] to obtain the solution, ϕ ˜ [ u , v ] ;
Step 6: Retrieve the unwrapped interferometric phases in the target area, $\phi^{(5)}[u, v]$, as the $U \times V$ sub-block of $\tilde{\phi}[u, v]$ that occupies the position of $\bar{\bar{\phi}}^{(4)}$ in (22).
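A compact sketch of Steps 1–6 is given below, with the mirror expansion done by array reflection and periodic differences so that a single 2D FFT solves the Poisson difference equation; the reflection style is one reasonable reading of the construction above, not necessarily the exact arrangement used in this work.

```python
import numpy as np

# Sketch of Steps 1-6: least-squares phase unwrapping via mirror expansion
# and FFT. The U x V wrapped phase is reflected into a 2U x 2V even-periodic
# array, so the Poisson difference equation can be solved with one 2D FFT.
def W(phi):
    return phi - 2 * np.pi * np.floor((phi + np.pi) / (2 * np.pi))

def ls_unwrap(psi):
    """psi: wrapped phase of shape (U, V); returns the LS-unwrapped phase."""
    U, V = psi.shape
    # Step 1: mirror reflections -> even periodic extension
    p = np.block([[psi, psi[:, ::-1]], [psi[::-1, :], psi[::-1, ::-1]]])
    # Step 2: driving term rho from wrapped phase differences
    du = W(np.roll(p, -1, axis=0) - p)
    dv = W(np.roll(p, -1, axis=1) - p)
    rho = (du - np.roll(du, 1, axis=0)) + (dv - np.roll(dv, 1, axis=1))
    # Steps 3-5: solve the Poisson difference equation in the DFT domain
    P = np.fft.fft2(rho)
    m = np.arange(2 * U)[:, None]
    n = np.arange(2 * V)[None, :]
    denom = 2 * np.cos(np.pi * m / U) + 2 * np.cos(np.pi * n / V) - 4
    denom[0, 0] = 1.0          # avoid 0/0; the DC bin is zeroed below
    Phi = P / denom
    Phi[0, 0] = 0.0            # Phi[1,1] = 0 in the 1-based notation
    phi = np.real(np.fft.ifft2(Phi))
    # Step 6: keep the sub-block corresponding to the target area
    return phi[:U, :V]

# A wrapped linear ramp unwraps back to the ramp, up to an additive constant.
u, v = np.mgrid[0:32, 0:32]
true = 0.5 * u + 0.4 * v
err = ls_unwrap(W(true)) - true
print(np.ptp(err - err.mean()))   # ~0
```

The whole solve costs one forward and one inverse FFT on a $2U \times 2V$ grid, which is why the LSPU runs much faster than a pixel-by-pixel quality-guided scan.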

2.9. Nonlocal Filter

A nonlocal filter can be applied to either the interferometric phase ϕ ( 1 ) [ u , v ] in (18) before phase unwrapping or ϕ ( 5 ) [ u , v ] in (35) after phase unwrapping. The output of the nonlocal filter to ϕ ( 1 ) [ u , v ] is computed as [15,53]
$$\phi^{(2)}_{\mathrm{NL}}[u, v] = \sum_{[u', v'] \in W_{se}} W_1[u, v; u', v']\, \phi^{(1)}[u', v']$$
where W se is a search window and W 1 [ u , v ; u , v ] is a weighting coefficient that is determined by the difference of pixels between two similarity windows centered at [ u , v ] and [ u , v ] . The weighting coefficient is large if the pixels in these two similarity windows match closely, and vice versa. The sum of all weighting coefficients over W se is set to one.
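A sketch of such a nonlocal filter is given below, with patch similarity measured on the complex unit circle so that wrapped phases compare correctly; the window sizes and the smoothing parameter h are illustrative choices, not settings from this work.

```python
import numpy as np

# Sketch of a nonlocal mean filter for wrapped phase. Each pixel is replaced
# by a weighted average over a search window; the weight of a contributing
# pixel grows with the similarity of the patches around the two pixels, and
# the weights are normalized to sum to one. All parameters are illustrative.
def nlm_phase(phase, search=3, patch=1, h=0.7):
    z = np.exp(1j * phase)              # compare patches on the unit circle
    U, V = phase.shape
    pad = search + patch
    zp = np.pad(z, pad, mode='reflect')
    out = np.empty_like(phase)
    for u in range(U):
        for v in range(V):
            uc, vc = u + pad, v + pad
            ref = zp[uc - patch:uc + patch + 1, vc - patch:vc + patch + 1]
            acc, wsum = 0.0 + 0.0j, 0.0
            for du in range(-search, search + 1):
                for dv in range(-search, search + 1):
                    cand = zp[uc + du - patch:uc + du + patch + 1,
                              vc + dv - patch:vc + dv + patch + 1]
                    d2 = np.mean(np.abs(ref - cand) ** 2)
                    w = np.exp(-d2 / h ** 2)   # similar patches -> large weight
                    acc += w * zp[uc + du, vc + dv]
                    wsum += w
            out[u, v] = np.angle(acc / wsum)   # weights normalized to one
    return out

rng = np.random.default_rng(1)
noisy = np.angle(np.exp(1j * (1.0 + 0.4 * rng.standard_normal((12, 12)))))
smooth = nlm_phase(noisy)
```

The nested patch comparison is what makes the nonlocal filter roughly twice as expensive as the mean filter in the CPU-time comparison of Section 3.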
In the literature, a nonlocal filter is applied before phase unwrapping to reduce noise, speckle, or other artifacts embedded in the wrapped flattened phase, aiming to acquire a more accurate unwrapped phase. A nonlocal filter applied after phase unwrapping aims to smooth the unwrapped phase, at the risk of introducing artifacts or errors into the latter. The simulation results in this work show that a smoother interferometric phase distribution is acquired by applying a nonlocal filter before phase unwrapping than after it.

2.10. Quality-Guided Phase Unwrapping

A quality-guided phase unwrapping process is also used in this work for comparison. A quality map is defined over a window W s centered at [ u , v ] as [25]
$$Z[u, v] = \sum_{[u', v'] \in W_s} \left( \Delta\phi^u[u', v'] - \overline{\Delta\phi^u}[u, v] \right)^2 + \sum_{[u', v'] \in W_s} \left( \Delta\phi^v[u', v'] - \overline{\Delta\phi^v}[u, v] \right)^2$$
where $\Delta\phi^u[u, v]$ and $\Delta\phi^v[u, v]$ are the partial derivatives of the wrapped phase in the $u$ and $v$ directions, respectively, and their mean values over the window $W_s$ are denoted as $\overline{\Delta\phi^u}[u, v]$ and $\overline{\Delta\phi^v}[u, v]$, respectively.
After computing the quality map over an image area of interest, the pixel with the highest quality-map value is denoted as [ u s , v s ] . The phase unwrapping process begins with its four surrounding pixels, [ u s ± 1 , v s ] and [ u s , v s ± 1 ] , followed by the pixels surrounding them. The process is repeated until all the pixels in the image area are exhausted.

2.11. Target Height Estimation

By adding the flat-earth phases in the target area,
$$\bar{\phi}_f^{(2)} = \left[ \phi_f^{(2)}[1],\ \phi_f^{(2)}[2],\ \ldots,\ \phi_f^{(2)}[2L_r + 1] \right] = \left[ \phi_f[N_r/2 - L_r],\ \phi_f[N_r/2 - L_r + 1],\ \ldots,\ \phi_f[N_r/2 + L_r] \right]$$
back to the unwrapped phase, $\phi^{(5)}[u, v]$, we have
$$\phi^{(6)}[u, v] = \phi^{(5)}[u, v] + \phi_f^{(2)}[v]$$
Without loss of generality, choose cell $[1, 1]$ as the reference cell, with a reference phase $\phi_{\mathrm{ref}} = \phi^{(6)}[1, 1] - \phi_f^{(2)}[1]$. The phase difference between the master image and the slave image is calibrated as
$$\phi^{(7)}[u, v] = \phi^{(6)}[u, v] - \phi_{\mathrm{ref}}$$
Figure 5 shows the geometry for target-height estimation. The difference between $|\overline{P_0 A}|$ and $|\overline{P_1 A}|$ is estimated as
$$\Delta r_A[v] = \frac{\lambda}{4\pi}\, \phi^{(7)}[u, v]$$
The side-looking angle from the master track toward the point target A is calculated by using the law of cosines as
$$\theta_A[v] = \cos^{-1}\left\{ \frac{(r_{0A}[v])^2 + b^2 - \left( r_{0A}[v] + \Delta r_A[v] \right)^2}{2\, b\, r_{0A}[v]} \right\} - \frac{\pi}{2}$$
Finally, the height of point target A is estimated as
$$h[v] = H - r_{0A}[v] \cos\theta_A[v]$$
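The three-step estimation above (calibrated phase to range difference, law of cosines to side-looking angle, then height) can be checked numerically by forward-simulating the phase of a point target and inverting it. The geometry below places $P_0$ at $x = 0$ and $P_1$ at $x = -b$, both at height $H$; all numeric values are illustrative, not the paper's Table 1 parameters.

```python
import numpy as np

# Sketch of the target-height estimation: calibrated phase -> range
# difference -> side-looking angle (law of cosines) -> height.
# All numeric values are illustrative assumptions.
lam, H, b = 0.23, 500.0, 2.0   # wavelength (m), platform height (m), baseline (m)

def height_from_phase(phi7, r0A):
    dr = lam / (4 * np.pi) * phi7                          # range difference
    r1A = r0A + dr
    cosarg = (r0A**2 + b**2 - r1A**2) / (2 * b * r0A)      # law of cosines at P0
    thetaA = np.arccos(cosarg) - np.pi / 2                 # side-looking angle
    return H - r0A * np.cos(thetaA)                        # target height

# Forward-simulate a point target at ground range x and height h, then invert.
x, h_true = 300.0, 50.0
r0A = np.hypot(x, H - h_true)
r1A = np.hypot(x + b, H - h_true)
phi7 = 4 * np.pi * (r1A - r0A) / lam    # unwrapped, calibrated phase
print(height_from_phase(phi7, r0A))     # ~50.0
```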

3. Simulations and Discussions

In this section, three scenarios are simulated by using DEM models extracted from the US Geological Survey (USGS) 3D Elevation Program (3DEP) dataset [48]: Mount St. Helens, the Columbia Glacier, and the Santa Cruz landslide. Each DEM model is scaled down by a common factor in all three dimensions to reduce the computational time without loss of effectiveness. Table 1 lists the default InSAR parameters used in the simulations, from which the height of ambiguity is determined as [54]
$$z_{\mathrm{amb}} = \frac{\lambda R_0 \sin\theta}{2B} = 80.52\ \mathrm{m}$$
Aside from the mean filter (MF) and the least-squares phase unwrapping (LSPU) method, the nonlocal filter (NF) and the quality-guided phase unwrapping (QGPU) method are also used for comparison. The effects of noise are studied by comparing the images acquired without noise with their counterparts at SNR = 0 dB, −5 dB, and −10 dB.

3.1. Mount St. Helens

Figure 6 shows the intermediate images of Mount St. Helens, scaled down tenfold to reduce the computational time. Mount St. Helens is an active volcano located at (46.2° N, 122.18° W), Skamania County, Washington, USA. Its elevation is 2549 m and its prominence is 1404 m. The DEM is extracted from the USGS 3DEP dataset [48], with spatial resolution of 1 m × 1 m.
Figure 6a,b shows the master SAR images without noise and at SNR = −10 dB, respectively. The latter manifests speckles over the whole image. Figure 6c,d shows the interferometric phase without noise and at SNR = −10 dB, respectively. The latter is severely smeared by noise and covered with speckles. Figure 6e,f shows the wrapped flattened phase without noise and at SNR = −10 dB, respectively. Similar features as in the interferograms are observed.
Figure 6g,h shows the coherence maps without noise and at SNR = −10 dB, respectively. The coherence between the master SAR image $S_0[u, v]$ and the coregistered slave image $S_1^c[u, v]$ is defined as [54]
$$\gamma_{\mathrm{co}}[u, v] = \frac{\left| E\left\{ S_0[u, v]\, S_1^{c*}[u, v] \right\} \right|}{\sqrt{ E\left\{ |S_0[u, v]|^2 \right\}\, E\left\{ |S_1^c[u, v]|^2 \right\} }}$$
which is equal to one if the coregistration is perfect. It is observed that the coherence map without noise is close to one and that, at SNR = −10 dB, it is slightly reduced to about 0.8.
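In practice, the expectations in the coherence definition are replaced by local spatial averages, as is standard practice; a sketch under that assumption:

```python
import numpy as np

# Sketch of a coherence-map estimator: the expectations are replaced by
# spatial averages over a small window. A perfectly coregistered pair
# yields coherence 1 everywhere; window size is an illustrative choice.
def coherence(S0, S1c, w=2):
    U, V = S0.shape
    g = np.empty((U, V))
    for u in range(U):
        for v in range(V):
            s0 = S0[max(0, u - w):u + w + 1, max(0, v - w):v + w + 1]
            s1 = S1c[max(0, u - w):u + w + 1, max(0, v - w):v + w + 1]
            num = np.abs(np.mean(s0 * np.conj(s1)))
            den = np.sqrt(np.mean(np.abs(s0)**2) * np.mean(np.abs(s1)**2))
            g[u, v] = num / den
    return g

rng = np.random.default_rng(2)
S0 = rng.standard_normal((16, 16)) + 1j * rng.standard_normal((16, 16))
print(coherence(S0, S0).min())   # identical images are fully coherent
```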
Figure 7 shows the reconstructed images of Mount St. Helens with the proposed method and the effects of noise. Comparisons between the mean filter (MF) and the nonlocal filter (NF), as well as between the least-squares phase unwrapping (LSPU) and quality-guided phase unwrapping (QGPU) methods, under the noise-free condition are also demonstrated.
Figure 7a shows the true DEM of Mount St. Helens extracted from the dataset, Figure 7b shows the tenfold scale-down model of that in Figure 7a, and Figure 7c shows the reconstructed DEM with the proposed method.
The fidelity of the acquired InSAR image a against the true image b is evaluated with a structural similarity (SSIM) index defined as [55,56]
$$\mathrm{SSIM}(a, b) = \frac{\left( 2\mu_a \mu_b + c_1 \right) \left( 2\sigma_{ab} + c_2 \right)}{\left( \mu_a^2 + \mu_b^2 + c_1 \right) \left( \sigma_a^2 + \sigma_b^2 + c_2 \right)}$$
where μ p and σ p are the mean and standard deviation, respectively, of image p, with p = a , b ; σ a b is the covariance between images a and b; and c 1 and c 2 are stability constants. The SSIM index lies in [ 0 , 1 ] , with higher index indicating higher similarity. Each image pixel is stored in 8 bits, implying the dynamic range of L = 2 8 1 = 255 . The stability constants are chosen as c 1 = ( 0.01 L ) 2 = 6.50 and c 2 = ( 0.03 L ) 2 = 58.52 . The SSIM index between the images in Figure 7b,c is 0.90.
The fidelity of the acquired InSAR image a against the true image b is also evaluated with a root-mean-square error (RMSE) defined as [57]
$$\mathrm{RMSE}(a, b) = \sqrt{ \frac{1}{P} \sum_{p=1}^{P} \left( a_p - b_p \right)^2 }$$
where a p and b p are the values of the pth pixels in images a and b, respectively, and P is the number of pixels in one image. The RMSE between the images in Figure 7b,c is 5.79 m.
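The two fidelity metrics can be sketched as global, whole-image statistics with the stability constants quoted above; note that many SSIM implementations use local windows, whereas this sketch evaluates the single-index formula directly.

```python
import numpy as np

# Sketch of the two fidelity metrics, computed as global statistics over
# whole images, with the stability constants quoted in the text (L = 255).
def ssim(a, b, L=255):
    c1, c2 = (0.01 * L)**2, (0.03 * L)**2
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / \
           ((mu_a**2 + mu_b**2 + c1) * (var_a + var_b + c2))

def rmse(a, b):
    return np.sqrt(np.mean((a - b)**2))

dem = np.random.default_rng(3).uniform(0, 255, (64, 64))
print(ssim(dem, dem), rmse(dem, dem))   # 1.0 and 0.0 for identical images
```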
Figure 7d shows the reconstructed DEM, with the NF replacing the mean filter; its SSIM index and RMSE against the image in Figure 7b are 0.89 and 6.14 m, respectively, i.e., slightly worse than the proposed method.
A closer inspection of the images in Figure 7c,d reveals that the NF preserves sharper edges while the MF smears some image features. The SSIM indices and RMSE values of these two images are similar, implying that the MF and NF have comparable performance.
Figure 7e shows the reconstructed DEM with MF and QGPU; its SSIM index and RMSE against the image in Figure 7b are 0.90 and 5.79 m, respectively, which are identical to those in Figure 7c, indicating that LSPU and QGPU methods have comparable performance in this case. Note that the QGPU method has longer computation time than the LSPU method.
Table 2 lists the CPU time of running for LSPU, QGPU, mean filter, and nonlocal filter, with MATLAB R2019a on a PC with i7-3.00 GHz CPU and 32 GB memory. The CPU time of running for the mean filter is about half that of the nonlocal filter. The CPU time of the LSPU is much shorter than that of the QGPU because the former is implemented with FFT on the whole image, while the QGPU is executed pixel by pixel. The breakdown of CPU time in LSPU, QGPU, mean filter, and nonlocal filter, as well as their algorithms, are detailed in Appendix A.
Figure 7f–h shows the InSAR images acquired with the proposed method at SNR = 0 dB, −5 dB, and −10 dB, respectively. Their SSIM indices against Figure 7b are 0.89, 0.89, and 0.74, respectively, and their RMSE values against Figure 7b are 10.8 m, 9.22 m, and 22.38 m, respectively. The main features in the image are almost unaffected at SNR = −5 dB and become slightly distorted at SNR = −10 dB. In short, the DEM of Mount St. Helens is reconstructed with high fidelity by visual inspection, as well as in terms of SSIM and RMSE, even at SNR = −10 dB.
Figure 8 shows the differences between the reconstructed DEMs in Figure 7c,f,g,h and the true DEM in Figure 7b. The difference is calculated as $\Delta z = |a_p - b_p|$, where $a_p$ and $b_p$ are the values of the $p$th pixel in images $a$ and $b$, respectively. Figure 8 shows that the difference is negligible at SNR ≥ −5 dB and becomes significant at SNR = −10 dB.
Figure 9 shows the reconstructed images of Mount St. Helens, with the nonlocal filter (NF) applied before and after the LSPU, under noise-free condition. The computational noise distorts some terrain features and inflicts speckles in the reconstructed image if the nonlocal filter is applied after phase unwrapping.

3.2. Columbia Glacier

Figure 10 shows the images of the Columbia Glacier, located at (61.14° N, 147.08° W) on the south coast of Alaska, USA. The DEM is extracted from the USGS 3DEP dataset [48], with spatial resolution of 5 m × 5 m. Figure 10a shows the true DEM of the Columbia Glacier extracted from the dataset. Figure 10b shows the fivefold scale-down model of that in Figure 10a. Figure 10c shows the reconstructed DEM with the proposed method and the simulation parameters listed in Table 1. The reconstructed DEM closely matches the true DEM; its SSIM index and RMSE against the image in Figure 10b are 0.88 and 28.4 m, respectively.
The backscattered signals from multiple resolution cells near the steep mountain slope region surrounding the glacier, enclosed by white dashed curves in Figure 10b, are mapped to the same resolution cell in the acquired image, inflicting layover effect. The high RMSE value is attributed to such layover regions, which is confirmed later in Figure 11.
Figure 10d shows the reconstructed DEM with NF replacing the mean filter; its SSIM index and RMSE against the image in Figure 10b are 0.87 and 28.24 m, respectively. Figure 10e shows the reconstructed DEM with QGPU replacing LSPU; its SSIM index and RMSE against the image in Figure 10b are 0.88 and 24.91 m, respectively, slightly better than their counterparts in Figure 10c. The glacier in this scenario manifests a steeper slope than that of the volcano in the previous scenario. The use of mean filter may blur some fine features in the DEM; hence, it should be used with caution if the terrain profile changes drastically.
Figure 10f–h shows the InSAR images acquired with the proposed method at SNR = 0 dB, −5 dB, and −10 dB, respectively. Their SSIM indices against Figure 10b are 0.87, 0.86, and 0.78, respectively, and their RMSE values against Figure 10b are 31.93 m, 30.24 m, and 33.24 m, respectively. The acquired InSAR images at SNR = 0 dB and SNR = −5 dB have similar SSIM indices, and the RMSE at SNR = −5 dB is slightly lower than in the other two images.
Figure 11 shows the difference between the reconstructed DEMs in Figure 10c,f–h and the true DEM in Figure 10b. As the SNR is decreased from 0 dB to −10 dB, more pixels in the layover regions manifest significant differences.

3.3. Santa Cruz Landslide

Figure 12 shows the images of an area with potential landslide hazards near Santa Cruz (37.03° N, 122.12° W), California, USA, on 17 March 2020, which are extracted from the USGS 3DEP dataset [48], with spatial resolution of 3 m × 3 m. Figure 12a shows the true DEM of the target area, and Figure 12b shows the tenfold scale-down model of that in Figure 12a.
Figure 12c shows the reconstructed InSAR image with the proposed method. The reconstructed DEM closely matches the true DEM; its SSIM index and RMSE against the image in Figure 12b are 0.90 and 2.32 m, respectively. Figure 12d shows the reconstructed InSAR image, with the nonlocal filter replacing the mean filter. Its SSIM index and RMSE against the image in Figure 12b are 0.89 and 2.46 m, respectively. Figure 12e shows the reconstructed DEM, with QGPU replacing LSPU. Its SSIM index and RMSE against the DEM in Figure 12b are 0.90 and 2.32 m, respectively, same as those for the proposed method.
Figure 12f–h shows the InSAR images acquired with the proposed method at SNR = 0 dB, −5 dB, and −10 dB, respectively. Their SSIM indices against Figure 12b are 0.90, 0.72, and 0.66, respectively, and their RMSE values against Figure 12b are 2.32 m, 7.74 m, and 9.09 m, respectively.
Figure 13 shows the differences between the reconstructed DEMs in Figure 12c,f–h and the true DEM in Figure 12b. As the SNR is decreased, more pixels manifest significant differences.
Table 3 summarizes the RMSE and SSIM indices of the images in Figure 7, Figure 10, and Figure 12 with different combinations of filter and phase unwrapping methods under the noise-free condition. The best indices among the three methods are marked in boldface; the differences among these combinations are not significant.
Table 4 summarizes the RMSE and SSIM indices of the images in Figure 7, Figure 10, and Figure 12, acquired with the proposed method under different SNRs. In general, the best indices occur at SNR = 0 dB, although some indices at SNR = −5 dB turn out to be slightly better.
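For reference, the two fidelity metrics used throughout this section can be reproduced with a short sketch like the following. This uses a simplified single-window SSIM rather than the windowed SSIM of [55]; the stabilizing constants c1 and c2 and the assumption that the DEMs are normalized to [0, 1] before computing SSIM are illustrative choices:

```python
import numpy as np

def rmse(a, b):
    """Root-mean-square error between two DEMs (same units as the input)."""
    return float(np.sqrt(np.mean((a - b) ** 2)))

def global_ssim(a, b, c1=1e-4, c2=9e-4):
    """Single-window (global) SSIM between two images normalized to [0, 1].

    A simplified stand-in for the windowed SSIM index; c1 and c2 are the
    usual small stabilizing constants.
    """
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = np.mean((a - mu_a) * (b - mu_b))
    num = (2 * mu_a * mu_b + c1) * (2 * cov + c2)
    den = (mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2)
    return float(num / den)
```

Identical images yield RMSE = 0 and SSIM = 1; the indices reported above compare each reconstructed DEM against the scaled-down true DEM.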
Next, we reconstruct two DEMs over the same area, dated 17 March 2020 and 10 August 2022, and show their height difference in Figure 14 to detect possible landslide hazards. Figure 14a shows the height difference between the two true DEMs extracted from the dataset on the two dates just mentioned [48]. Figure 14b shows the height difference between the two InSAR images reconstructed with the proposed method, and its SSIM index against the image in Figure 14a is 0.29. Both images show similar patterns, but some fine features in Figure 14a are smeared out in Figure 14b.
Figure 14c shows the height difference between the two images reconstructed with the nonlocal filter replacing the mean filter. The image shows a similar pattern to that in Figure 14a, with more fragmented features than the latter. The SSIM index between these two images is 0.30.
Figure 14d shows the reconstructed image, with the QGPU replacing the LSPU. It resembles Figure 14b more than Figure 14c, and its SSIM index against the image in Figure 14a is 0.29. Comparing Figure 14a–d, the combination of the NF and LSPU methods seems to reproduce more of the terrain details in the true DEM.
Figure 14e–g show the height differences acquired with the NF and LSPU at SNR = 0 dB, −5 dB, and −10 dB, respectively. Their SSIM indices against Figure 14a are 0.45, 0.20, and 0.13, respectively, and their RMSE values against Figure 14a are 4.31 m, 4.63 m, and 9.54 m, respectively. The images in Figure 14e,f still retain some useful information about terrain profile change, but that in Figure 14g provides no useful clue.
Figure 15 shows the density maps of high-risk landslide areas acquired with the three methods compared in this section. The areas with height difference exceeding ±1 m are highlighted with red marks (Δz ≥ 1 m) and blue marks (Δz ≤ −1 m).
The density maps in Figure 15b,d appear similar, consistent with the performance indices of these two methods. On the other hand, Figure 15c manifests an excessive number of high-risk marks.
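The ±1 m criterion behind these density maps amounts to thresholding the height difference between the two reconstructed DEMs; a minimal sketch (the function name and threshold parameter are illustrative, not from the original implementation):

```python
import numpy as np

def landslide_marks(dem_t1, dem_t2, threshold=1.0):
    """Flag pixels whose height change between two dates exceeds the threshold.

    Returns boolean masks for uplift (dz >= +threshold, the 'red' marks)
    and subsidence (dz <= -threshold, the 'blue' marks).
    """
    dz = dem_t2 - dem_t1
    return dz >= threshold, dz <= -threshold
```

Overlaying the two masks on the target area then yields a density map of high-risk pixels, as in Figure 15.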

3.4. Comparison with State-of-the-Art Techniques

In [58], a satellite-based InSAR method utilizing a Kalman filter (KF) and sequential least squares (SLS) was introduced for near-real-time applications. The SLS was designed to reduce the CPU time of conventional LS methods by processing the image sequentially. For comparison, the results in Figure 7, Figure 10, Figure 12, Figure 14 and Figure 15 demonstrate the efficacy of the LSPU method, which incorporates the 2D FFT in the LS method to reduce the CPU time even further.
In [59], a deep learning-based LSPU method utilizing an encoder–decoder architecture (PGENet) was proposed to unwrap noisy wrapped phase data. Similarly, a deep learning-based QGPU via a global attention U-Net was introduced in [60]. The efficacy of LSPU and QGPU can thus be enhanced by deep learning. Furthermore, the results in [59] demonstrated that LSPU outperformed QGPU, producing lower RMSE and shorter computational time, especially in low-coherence areas. The results in Figure 7c,e and Figure 12c,e show that the LSPU has nearly the same performance as the QGPU, while also offering the high computational efficiency listed in Table 2.
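For reference, the transform-based least-squares phase unwrapping compared here can be sketched with the classic fast-transform Poisson solver of Ghiglia and Romero [52] (shown below in its DCT form; this is a generic sketch of the technique, not the authors' exact FFT-based implementation [27]):

```python
import numpy as np
from scipy.fft import dctn, idctn

def wrap(p):
    """Wrap phase values into [-pi, pi)."""
    return (p + np.pi) % (2 * np.pi) - np.pi

def ls_unwrap(psi):
    """Unweighted least-squares phase unwrapping (Ghiglia-Romero style).

    Solves the discrete Poisson equation driven by the wrapped phase
    gradients with a fast cosine transform; the solution is unique up
    to an additive constant.
    """
    M, N = psi.shape
    dx = np.zeros_like(psi)
    dy = np.zeros_like(psi)
    dx[:-1, :] = wrap(np.diff(psi, axis=0))   # wrapped azimuth gradient
    dy[:, :-1] = wrap(np.diff(psi, axis=1))   # wrapped range gradient
    # Divergence of the wrapped gradient field (driving term rho)
    rho = dx.copy()
    rho[1:, :] -= dx[:-1, :]
    rho += dy
    rho[:, 1:] -= dy[:, :-1]
    # Solve the Neumann-boundary Poisson equation via the 2D DCT
    rho_hat = dctn(rho, norm='ortho')
    m = np.arange(M)[:, None]
    n = np.arange(N)[None, :]
    denom = 2.0 * (np.cos(np.pi * m / M) + np.cos(np.pi * n / N) - 2.0)
    denom[0, 0] = 1.0                         # avoid division by zero at DC
    phi_hat = rho_hat / denom
    phi_hat[0, 0] = 0.0                       # fix the free additive constant
    return idctn(phi_hat, norm='ortho')
```

On a residue-free interferogram, this recovers the absolute phase up to a constant with only two fast transforms, which is the source of LSPU's computational advantage over pixel-by-pixel quality-guided unwrapping.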
In [61], a weighted least-squares (WLS) technique was proposed to improve the effectiveness of phase unwrapping within a small baseline InSAR framework. Choosing a small baseline in a satellite-based InSAR approach can reduce the computational cost. The proposed UAV-based InSAR approach has relatively smaller (temporal and spatial) baseline compared to the satellite-based counterpart in [61]. In addition, the UAV-based platform offers more flexibility in achieving specific baseline and revisit time.
In [15], the low-coherence areas and high-coherence areas were filtered by a local fringe frequency compensation nonlocal filter and a Goldstein filter, respectively. The Goldstein filter, considered an old-fashioned method, was used for its computational efficiency [15]. For the same reason, the mean filter adopted in our work is suitable for relatively smooth, high-coherence areas. In our approach, the data can be acquired with two UAVs (sensors) in a single flight or with one UAV (sensor) in two separate flights staggered by a short revisit time. The coherence in the UAV-based InSAR image pair is higher than that in the satellite-based counterpart, which has a typical revisit time of 12 days or longer.
In [53], various filters were simulated on ramp and square noisy images. The results indicated that the nonlocal filter outperformed both the Lee filter and the Goldstein filter (considered old-fashioned filters) on square noisy images, but underperformed them on ramp noisy images [53]. Such outcomes are consistent with the simulation findings presented in Section 3.1, Section 3.2 and Section 3.3. The scenarios simulated in Section 3.1 and Section 3.3 manifest relatively smooth height profiles, resembling ramp noisy images. Figure 7c,d and Figure 12c,d show that the mean filter achieves lower RMSE and higher SSIM values in these two scenarios. On the other hand, the scenario simulated in Section 3.2 manifests steep mountain terrain, resembling square noisy images. Figure 10c,d show that the nonlocal filter achieves a lower RMSE in this scenario.
In the presence of additive Gaussian noise, the mean filter is statistically optimal from the perspective of maximum-likelihood estimation [20]. In the scenarios with relatively smooth profiles discussed in Section 3.1 and Section 3.3, reconstruction with the mean filter (MF) results in slightly higher SSIM and lower RMSE values than the nonlocal filter (NF). However, the mean filter may oversmooth the phase details in areas with drastic topographical variations. As discussed in Section 3.2, a scenario containing steep areas may not be well reconstructed with the mean filter, and the nonlocal filter achieves a lower RMSE on the reconstructed DEM.
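A minimal sketch of the mean filtering discussed above: the boxcar average is taken in the complex domain (averaging e^{jϕ}, as in Table A2), so that 2π phase wraps do not bias the result; the window size and uniform weighting here are illustrative choices:

```python
import numpy as np

def complex_mean_filter(phase, win=5):
    """Boxcar-average the interferogram e^{j*phase} and take the angle.

    Averaging in the complex domain avoids the 2*pi wrap bias that a
    direct average of phase values would introduce near phase jumps.
    """
    z = np.exp(1j * phase)
    k = win // 2
    zp = np.pad(z, k, mode='edge')            # replicate edges for the border pixels
    acc = np.zeros_like(z)
    for du in range(win):                      # accumulate the win x win window sums
        for dv in range(win):
            acc += zp[du:du + z.shape[0], dv:dv + z.shape[1]]
    return np.angle(acc / win ** 2)
```

For smooth, high-coherence phase fields this suppresses noise at very low cost, which is the regime where the mean filter is preferred over the nonlocal filter.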
In [62], a coherence-guided InSAR phase unwrapping method was proposed in conjunction with cycle-consistent adversarial networks. The coherence-guided phase unwrapping method typically employs a cost function in terms of phase gradients and coherence values to penalize phase discontinuities in low-coherence regions and promote smooth phase paths in high-coherence areas. The method could achieve accurate phase unwrapping with low RMS value. However, the generative adversarial networks entail high computational cost and require extensive training data.
In [63], a median filter was cascaded with a mean filter based on the stationary wavelet transform for phase filtering. The median filter excelled in preserving phase fringes, while the mean filter demonstrated superior noise reduction capabilities.
Lightweight UAVs are typically more susceptible to wind disturbances than airborne platforms in conducting SAR or InSAR imaging tasks. Both types of platform may tilt or dip under headwinds and deviate from the planned flight path under crosswinds [64]. As a real-world example, a small UAV dispatched for InSAR imaging can carry a payload of up to 7.5 kg and stay in the air for an hour while equipped with GPS navigation gear. Its attitude response to wind interference can be ignored if the wind speed is less than 5 mph, and its trajectory deviation can be compensated for with servo mechanisms and algorithms.

3.5. Discussions on Contributions and Constraints

The contributions of this work are summarized as follows:
1. An on-site InSAR imaging method is proposed for monitoring environmental changes. The imaging task is carried out with UAVs, which can be swiftly deployed on site with small decorrelation between master and slave images;
2. High-resolution DEMs are reconstructed and enhanced with a mean filter to mitigate artifacts on InSAR images, which are attributed to imperfect coregistration between master and slave images. A least-squares phase unwrapping method with extremely low computational cost is applied to run the imaging task in near real time;
3. Three scenarios of DEM reconstruction are simulated to validate the efficacy of the proposed approach, considering the effect of noise. The fidelity of the acquired InSAR images is evaluated in terms of SSIM index and RMSE. The merits of the mean filter and the least-squares phase unwrapping method are compared with two popular counterparts.
We propose a feasible scheme of deploying UAVs for on-site InSAR imaging of small areas, which cannot be achieved with satellite-borne InSAR platforms. Potential applications include monitoring natural disasters such as landslides, wildfires, and volcanic eruptions. In these scenarios, satellite-borne InSAR imaging is limited by a revisit time of days, which is impractical for real-time monitoring of disaster evolution. Among many state-of-the-art algorithms, the mean filter and the least-squares phase unwrapping method via Poisson's difference equation and FFT can practically accomplish real-time imaging tasks in terms of robustness and computational efficiency.
The attitude of an airborne platform can be disturbed by complicated airflow and mechanical oscillations of the platform. Their effects on SAR imaging have been compensated for with a compressive-sensing technique [46].

4. Conclusions

An on-site UAV-borne InSAR imaging method is proposed to reconstruct terrain profiles with high spatial resolution in real time. A UAV-borne imaging system can be swiftly deployed to monitor rapidly changing environments during extreme weather events or natural disasters. Three different high-resolution DEMs are extracted from the USGS 3DEP datasets to validate the efficacy of the proposed approach. The combination of the least-squares phase unwrapping method, featuring short CPU time, and the mean filter, for mitigating speckles on the acquired InSAR image, is effective for monitoring terrain profiles in real time. Several state-of-the-art techniques, such as the nonlocal filter and the quality-guided phase unwrapping method, have been used as benchmarks to validate the advantages of the proposed approach in acquiring InSAR images and reconstructing terrain profiles in real time.

Author Contributions

Conceptualization, H.-Y.C. and J.-F.K.; methodology, H.-Y.C. and J.-F.K.; software, H.-Y.C.; validation, H.-Y.C. and J.-F.K.; formal analysis, H.-Y.C. and J.-F.K.; investigation, H.-Y.C. and J.-F.K.; resources, J.-F.K.; data curation, H.-Y.C.; writing—original draft, H.-Y.C.; writing—review and editing, J.-F.K.; visualization, H.-Y.C. and J.-F.K.; supervision, J.-F.K.; project administration, J.-F.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Science and Technology Council, Taiwan, under contract MOST 111-2221-E-002-092.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Publicly available datasets were analyzed in this study.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A. Breakdown of CPU Time and Algorithms

Table A1 lists the number of multiplication/division operations in QGPU and LSPU, where N_a and N_r are the numbers of azimuth samples and range samples, respectively. The target area has a size of U azimuth cells by V range cells and is centered at sample indices [N_a/2, N_r/2].
Algorithm A1 lists the procedure of the QGPU algorithm. The image sizes in Section 3.1, Section 3.2 and Section 3.3 are U × V = 1401 × 841, 781 × 499, and 781 × 499, respectively. The average CPU time to run a complete cycle of phase unwrapping steps within the while loop (for one pixel) is about 0.01 s. For example, the image in Section 3.1 takes a CPU time of about 1401 × 841 × 0.01 = 11,782.41 s.
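The CPU-time estimate and the operation counts above can be checked with a few lines (the ~0.01 s per-pixel figure and the counts are taken from the text and Table A1):

```python
import math

# Estimated QGPU CPU time for the Section 3.1 image,
# using the ~0.01 s per-pixel figure quoted in the text.
U, V = 1401, 841
qgpu_seconds = U * V * 0.01                     # ~11,782 s, i.e., roughly 3.3 h

# Nominal multiplication/division counts from Table A1
qgpu_ops = 2 * U * V
lspu_ops = 2 * U * V * (math.log2(U * V) + 2)
```

Although LSPU's nominal operation count is larger, it is executed as a few vectorized 2D FFT passes, whereas QGPU's operations are buried in a sequential per-pixel loop; this is why LSPU is far faster in practice.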
Table A1. Number of multiplication/division operations (M/D) in QGPU and LSPU.

| QGPU process | No. of M/D | LSPU process | No. of M/D |
| --- | --- | --- | --- |
| Δϕ_u[u,v] | none | ϕ̄̄ (4) | none |
| Δϕ_v[u,v] | none | ρ[u,v] | none |
| Δϕ_u[u,v] → (Δϕ_u[u,v])² | 1 per pixel | 2D DFT | UV log₂(UV) |
| Δϕ_v[u,v] → (Δϕ_v[u,v])² | 1 per pixel | Φ[m,n] | 4UV |
| Z[u,v], [u,v] ∈ W_s | 2W_s per window | 2D IDFT | UV log₂(UV) |
| | | ϕ (5) [u,v] | none |
| total | 2UV | total | 2UV(log₂(UV) + 2) |
Algorithm A1: Pseudocode of QGPU algorithm.
Table A2 lists the number of multiplication/division operations in the nonlocal filter and the mean filter. The total numbers of multiplication/division operations to run the nonlocal filter and the mean filter over an image of size U × V are 2UV and UV, respectively.
Table A2. Number of multiplication/division operations (M/D) in nonlocal filter and mean filter.

| Nonlocal filter process | No. of M/D | Mean filter process | No. of M/D |
| --- | --- | --- | --- |
| W₁[u,v;u′,v′] | 1 per pixel | ϕ̄̄ (2) | none |
| ϕ_NL (2) [u,v] | 1 per pixel | ϕ̄̄_uv (3) | none |
| | | A_ϕs[u,v] e^{jϕ_s[u,v]} | none |
| | | ϕ (3) [u,v] | 1 per pixel |
| total | 2UV | total | UV |
Algorithm A2 lists the procedure for applying the mean filter on an image of size U × V, and Algorithm A3 lists the procedure for applying the nonlocal filter on an image of the same size. The average CPU time to run a complete cycle of the mean filter on one pixel is about 0.005 s, and its counterpart with the nonlocal filter is about 0.013 s.
Algorithm A2: Pseudocode of mean filter.
Algorithm A3: Pseudocode of nonlocal filter.

References

1. Feng, D.; Chen, W.D. Structure filling and matching for three-dimensional reconstruction of buildings from single high-resolution SAR image. IEEE Geosci. Remote Sens. Lett. 2016, 13, 752–756.
2. Xu, H.J.; Yang, Z.W.; Chen, G.Z.; Liao, G.S.; Tian, M. A ground moving target detection approach based on shadow feature with multichannel high resolution synthetic aperture radar. IEEE Geosci. Remote Sens. Lett. 2016, 13, 1572–1576.
3. Truckenbrodt, J. EO-College Tomography Tutorial. 2018. Available online: https://eo-college.org/resource/sar-tomography-tutorial/ (accessed on 12 February 2020).
4. Anonymous. TerraSAR-X Image Product Guide; Airbus Defence and Space: Taufkirchen, Germany, 2015; pp. 2–14.
5. Shahzad, M.; Zhu, X.X. Automatic detection and reconstruction of 2-D/3-D building shapes from spaceborne TomoSAR point clouds. IEEE Trans. Geosci. Remote Sens. 2016, 54, 1292–1310.
6. Cloude, S.R.; Papathanassiou, K.P. Polarimetric SAR interferometry. IEEE Trans. Geosci. Remote Sens. 1998, 36, 1551–1565.
7. Thiele, A.; Cadario, E.; Schulz, K.; Thonnessen, U.; Soergel, U. Building recognition from multi-aspect high-resolution InSAR data in urban areas. IEEE Trans. Geosci. Remote Sens. 2007, 45, 3585–3593.
8. Ferraioli, G. Multichannel InSAR building edge detection. IEEE Trans. Geosci. Remote Sens. 2010, 48, 1224–1231.
9. Ma, W.P.; Zhang, J.; Wu, Y.; Jiao, L.C.; Zhu, H.; Zhao, W. A novel two-step registration method for remote sensing images based on deep and local features. IEEE Trans. Geosci. Remote Sens. 2019, 57, 4834–4843.
10. Zou, W.B.; Chen, L.B. Determination of optimum tie point interval for SAR image coregistration by decomposing autocorrelation coefficient. IEEE Trans. Geosci. Remote Sens. 2019, 57, 5067–5084.
11. Yu, J.; Lin, Y.; Wang, B.; Ye, Q.; Cai, J.Q. An advanced outlier detected total least-squares algorithm for 3-D point clouds registration. IEEE Trans. Geosci. Remote Sens. 2019, 57, 4789–4798.
12. Li, Z.X.; Bethel, J. Image coregistration in SAR interferometry. Int. Soc. Photogramm. Remote Sens. 2008, 37 Pt B1, 433–438.
13. Liu, F.; Antoniou, M.; Zeng, Z.; Cherniakov, M. Coherent change detection using passive GNSS-based BSAR: Experimental proof of concept. IEEE Trans. Geosci. Remote Sens. 2013, 51, 4544–4555.
14. Chi, B.; Zhuang, H.-F.; Fan, H.-D.; Yu, Y.; Peng, L. An adaptive patch-based Goldstein filter for interferometric phase denoising. J. Sel. Top. Appl. Earth. Obs. Remote Sens. 2021, 42, 6746–6761.
15. Xu, H.-P.; Li, Z.-H.; Li, S.; Liu, W.; Li, J.-W.; Liu, A.-F.; Li, W. A nonlocal noise reduction method based on fringe frequency compensation for SAR interferogram. J. Sel. Top. Appl. Earth. Obs. Remote Sens. 2021, 14, 9756–9767.
16. Li, T.-T.; Chen, K.-S.; Lee, J.-S. Enhanced interferometric phase noise filtering of the refined InSAR filter. IEEE Geosci. Remote Sens. Lett. 2020, 17, 1528–1532.
17. Shang, R.Z.; Liu, F.-F.; Wang, Z.-Z.; Gao, J.; Zhou, J.-T.; Yao, D. An adaptive spatial filtering algorithm based on nonlocal mean filtering for GNSS-based InSAR. In Proceedings of the 2022 IEEE International Conference on Signal Processing, Communications and Computing (ICSPCC), Xi'an, China, 25–27 October 2022; pp. 1–5.
18. Pu, L.-M.; Zhang, X.-L.; Zhou, L.-M.; Li, L.; Shi, J.; Wei, S.-J. Nonlocal feature selection encoder-decoder network for accurate InSAR phase filtering. Remote Sens. 2022, 14, 1174.
19. Staniewicz, S.; Chen, J.-Y.; Rathje, E.; Olson, J. Automatic detection of InSAR deformation signals associated with hydrocarbon production and wastewater injection using Laplacian of Gaussian filtering. In Proceedings of the IGARSS 2019–2019 IEEE International Geoscience and Remote Sensing Symposium, Yokohama, Japan, 28 July–2 August 2019; pp. 433–436.
20. Xu, G.; Gao, Y.; Li, J.; Xing, M. InSAR phase denoising: A review of current technologies and future directions. IEEE Geosci. Remote Sens. Mag. 2020, 8, 64–82.
21. Yu, H.-W.; Lan, Y.; Yuan, Z.-H.; Xu, J.-Y.; Lee, H.-K. Phase unwrapping in InSAR: A review. IEEE Geosci. Remote Sens. Mag. 2019, 7, 40–58.
22. Gianluca, M.; Alessio, R.; Claudio, P. Deep learning for InSAR phase filtering: An optimized framework for phase unwrapping. Remote Sens. 2022, 14, 4956.
23. Wu, Z.; Wang, T.; Wang, Y.; Wang, R.; Ge, D. Deep-learning-based phase discontinuity prediction for 2-D phase unwrapping of SAR interferograms. IEEE Trans. Geosci. Remote Sens. 2021, 60, 5216516.
24. Zhou, L.; Yu, H.; Lan, Y.; Xing, M. Deep learning-based branch-cut method for InSAR two-dimensional phase unwrapping. IEEE Trans. Geosci. Remote Sens. 2021, 60, 5209615.
25. Herraez, M.A.; Villatoro, F.R.; Gdeisat, M.A. A robust and simple measure for quality-guided 2D phase unwrapping algorithm. IEEE Trans. Image Process. 2016, 25, 2601–2609.
26. Zhang, J.-C.; Rothenberger, S.M.; Brindise, M.C.; Scott, M.B.; Brindise, H.; Baraboo, J.J.; Markl, M.; Rayz, V.L.; Vlachos, P.P. Divergence-free constrained phase unwrapping and denoising for 4D flow MRI using weighted least-squares. IEEE Trans. Med. Imaging 2021, 40, 3389–3399.
27. Pritt, M.D.; Shipman, J.S. Least-squares two-dimensional phase unwrapping using FFT's. IEEE Trans. Geosci. Remote Sens. 1994, 32, 706–708.
28. Yang, Q.-Y.; Wang, J.-L.; Wang, Y.-J.; Lu, P.-P.; Jia, H.-Y.; Wu, Z.-P.; Li, L.; Zan, Y.-K.; Wang, R. Image-based baseline correction method for spaceborne InSAR with external DEM. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5202216.
29. Zhang, B.; Xie, F.-T.; Wang, L.-L.; Li, S.; Wei, L.-D.; Feng, L. Airborne millimeter-wave InSAR terrain mapping experiments based on automatic extraction and interferometric calibration of tie-points. Remote Sens. 2023, 15, 572.
30. Yang, H.-L.; Liu, Y.-F.; Fan, H.-D.; Li, X.-L.; Su, Y.-H.; Ma, X.; Wang, L.-Y. Identification and analysis of deformation areas in the construction stage of pumped storage power station using GB-InSAR technology. J. Sel. Top. Appl. Earth. Obs. Remote Sens. 2023, 16, 4931–4946.
31. Engel, M.; Heinzel, A.; Schreiber, E.; Dill, S.; Peichl, M. Recent results of a UAV-based synthetic aperture radar for remote sensing applications. In Proceedings of the EUSAR 2021, 13th European Conference on Synthetic Aperture Radar, Online, 29 March–1 April 2021.
32. Wang, R.; Lv, X.; Zhang, L. A novel three-dimensional block adjustment method for spaceborne InSAR-DEM based on general models. J. Sel. Top. Appl. Earth. Obs. Remote Sens. 2023, 16, 3973–3987.
33. Han, J.; Yang, H.; Liu, Y.; Lu, Z.; Zeng, K.; Jiao, R. A deep learning application for deformation prediction from ground-based InSAR. Remote Sens. 2022, 14, 5067.
34. Ludeno, G.; Catapano, I.; Renga, A.; Vetrella, A.R.; Fasano, G.; Soldovieri, F. Assessment of a micro-UAV system for microwave tomography radar imaging. Remote Sens. Environ. 2018, 212, 90–102.
35. Hussain, Y.; Schlogel, R.; Innocenti, A.; Hamza, O.; Iannucci, R.; Martino, S.; Havenith, H.B. Review on the geophysical and UAV-based methods applied to landslides. Remote Sens. 2022, 14, 4564.
36. Wang, Z.; Ding, Z.; Sun, T.; Zhao, J.; Wang, Y.; Zeng, T. UAV-based P-band SAR tomography with long baseline: A multimaster approach. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5207221.
37. Svedin, J.; Bernland, A.; Gustafsson, A.; Claar, E.; Luong, J. Small UAV-based SAR system using low-cost radar, position, and attitude sensors with onboard imaging capability. Int. J. Microw. Wirel. Technol. 2021, 13, 602–613.
38. Li, Y.-J.; Qiao, G.; Popov, S.; Cui, X.-B.; Florinsky, I.V.; Yuan, X.-H.; Wang, L.-J. Unmanned aerial vehicle remote sensing for Antarctic research: A review of progress, current applications, and future use cases. IEEE Geosci. Remote Sens. Mag. 2023, 11, 73–93.
39. Avian, M.; Bauer, C.; Schlogl, M.; Widhalm, B.; Gutjahr, K.H.; Paster, M.; Hauer, C.; Friebenbichler, M.; Neureiter, A.; Weyss, G.; et al. The status of earth observation techniques in monitoring high mountain environments at the example of Pasterze glacier, Austria: Data, methods, accuracies, processes, and scales. Remote Sens. 2020, 12, 1251.
40. Huang, L.-Q.; Fischer, G.; Hajnsek, I. Antarctic snow-covered sea ice topography derivation from TanDEM-X using polarimetric SAR interferometry. Cryosphere 2021, 15, 5323–5344.
41. Burr, R.; Schartel, M.; Grathwohl, A.; Mayer, W.; Walter, T.; Waldschmidt, C. UAV-borne FMCW InSAR for focusing buried objects. IEEE Geosci. Remote Sens. Lett. 2021, 19, 4014505.
42. Bekar, A.; Antoniou, M.; Baker, C.J. Low-cost, high-resolution, drone-borne SAR imaging. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5208811.
43. Tsai, S.-C.; Kiang, J.-F. Floating dropsondes with DGPS receiver for real-time typhoon monitoring. IEEE Trans. Geosci. Remote Sens. 2011, 49, 4363–4373.
44. Tapete, D.; Morelli, S.; Fanti, R.; Casagli, N. Localising deformation along the elevation of linear structures: An experiment with space-borne InSAR and RTK GPS on the Roman aqueducts in Rome, Italy. Appl. Geogr. 2015, 58, 65–83.
45. Fu, X.; Xiang, M.; Wang, B.; Jiang, S.; Wang, J. Preliminary result of a novel yaw and pitch error estimation method for UAV-based FMCW InSAR. In Proceedings of the 2017 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Fort Worth, TX, USA, 23–28 July 2017; pp. 463–465.
46. Wu, D.-M.; Kiang, J.-F. Dual-channel airborne SAR imaging of ground moving targets on perturbed platform. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5205814.
47. Wu, D.-M.; Kiang, J.-F. Imaging of high-speed aerial targets with ISAR installed on a moving vessel. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2023, 16, 6463–6474.
48. Stoker, J.M. Defining Technology Operational Readiness for the 3D Elevation Program—A Plan for Investment, Incubation, and Adoption; US Geological Survey: Reston, VA, USA, 2020.
49. Chen, P.-C.; Kiang, J.-F. An improved range-Doppler algorithm for SAR imaging at high squint angles. Prog. Electromag. Res. M 2017, 53, 41–52.
50. Desai, K.; Joshi, P.; Chirakkal, S.; Putrevu, D.; Ghosh, R. Analysis of performance of flat earth phase removal methods. Int. Soc. Photogramm. Remote Sens. 2018, 42, 207–209.
51. Meng, D.; Sethu, V.; Ambikairajah, E.; Ge, L. A novel technique for noise reduction in InSAR images. IEEE Geosci. Remote Sens. Lett. 2007, 4, 226–230.
52. Ghiglia, D.C.; Romero, L.A. Robust two-dimensional weighted and unweighted phase unwrapping that uses fast transforms and iterative methods. J. Opt. Soc. Am. A 1994, 11, 107–117.
53. Sica, F.; Cozzolino, D.; Zhu, X.X.; Verdoliva, L.; Poggi, G. InSAR-BM3D: A nonlocal filter for SAR interferometric phase restoration. IEEE Trans. Geosci. Remote Sens. 2018, 56, 3456–3467.
54. Bamler, R.; Hartl, P. Synthetic aperture radar interferometry. Inverse Probl. 1998, 14, R1.
55. Hore, A.; Ziou, D. Image quality metrics: PSNR vs. SSIM. In Proceedings of the 2010 20th International Conference on Pattern Recognition, Istanbul, Turkey, 23–26 August 2010; pp. 2366–2369.
56. Shen, H.-F.; Zhou, C.-X.; Li, J.; Yuan, Q.-Q. SAR image despeckling employing a recursive deep CNN prior. IEEE Trans. Geosci. Remote Sens. 2021, 59, 273–286.
57. Root-Mean-Square Deviation. Available online: https://en.wikipedia.org/wiki/Root-mean-square_deviation (accessed on 5 July 2023).
58. Wang, B.; Zhang, Q.; Zhao, C.; Pepe, A.; Niu, Y. Near real-time InSAR deformation time series estimation with modified Kalman filter and sequential least squares. J. Sel. Top. Appl. Earth. Obs. Remote Sens. 2022, 15, 2437–2448.
59. Pu, L.; Zhang, X.; Zhou, Z.; Li, L.; Zhou, L.; Shi, J.; Wei, S. A robust InSAR phase unwrapping method via phase gradient estimation network. Remote Sens. 2021, 13, 4564.
60. Wang, H.; Hu, J.; Fu, H.; Wang, C.; Wang, Z. A novel quality-guided two-dimensional InSAR phase unwrapping method via GAUNet. J. Sel. Top. Appl. Earth. Obs. Remote Sens. 2021, 14, 7840–7856.
61. Falabella, F.; Serio, C.; Zeni, G.; Pepe, A. On the use of weighted least-squares approaches for differential interferometric SAR analyses: The weighted adaptive variable-length (WAVE) technique. Sensors 2020, 20, 1103.
62. Mu, J.; Wang, Y.; Zhan, S.; Yao, G.; Liu, K.; Zhu, Y.; Wang, L. A coherence-guided InSAR phase unwrapping method with cycle-consistent adversarial networks. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2023, 17, 2690–2704.
63. Xu, H.; Wang, Y.; Li, C.; Zeng, G.; Li, S.; Li, S. A novel adaptive InSAR phase filtering method based on complexity factors. Chin. J. Electron. 2023, 32, 1089–1105.
64. Yocky, D.A.; West, R.D. Unmanned Aerial Vehicle Synthetic Aperture RADAR for Surface Change Monitoring; Technical Report; Sandia National Lab.: Albuquerque, NM, USA, 2022.
Figure 1. Schematic of InSAR operation.
Figure 2. Flow-chart of range-Doppler algorithm (RDA) [49].
Figure 3. Flow-chart of InSAR imaging.
Figure 4. (a) Point target A(x, y, h) appears at A0 in the master image and A1 in the slave image. (b) Point target Ag(x, y, 0) with known r0[v], r1[v], θ[v], and b.
Figure 5. Geometry for target-height estimation.
Figure 6. Intermediate images of Mount St. Helens: (a) master SAR image without noise, (b) master SAR image at SNR = −10 dB, (c) interferometric phase without noise, (d) interferometric phase at SNR = −10 dB, (e) wrapped flattened phase without noise, (f) wrapped flattened phase at SNR = −10 dB, (g) coherence map without noise, (h) coherence map at SNR = −10 dB.
Figure 7. Images of Mount St. Helens: (a) DEM extracted from USGS 3DEP dataset [48], (b) tenfold scale-down model of DEM in (a); reconstructed DEM with (c) proposed method (MF and LSPU), SSIM = 0.90, RMSE = 5.79 m, (d) NF [15,53] and LSPU, SSIM = 0.89, RMSE = 6.14 m, (e) MF and QGPU [25], SSIM = 0.90, RMSE = 5.79 m, (f) proposed method at SNR = 0 dB, SSIM = 0.89, RMSE = 10.8 m, (g) proposed method at SNR = −5 dB, SSIM = 0.89, RMSE = 9.22 m, (h) proposed method at SNR = −10 dB, SSIM = 0.74, RMSE = 22.38 m.
Figure 8. Difference between reconstructed DEM and true DEM in Figure 7b: (a) noise-free, (b) SNR = 0 dB, (c) SNR = −5 dB, (d) SNR = −10 dB.
Figure 9. Reconstructed images of Mount St. Helens under noise-free condition: (a) with NF before phase unwrapping, (b) with NF after phase unwrapping.
Figure 9. Reconstructed images of Mount St. Helens under noise-free condition: (a) with NF before phase unwrapping, (b) with NF after phase unwrapping.
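The least-squares phase unwrapping (LSPU) step compared in Figure 9 is classically formulated as a discrete Poisson problem driven by the wrapped phase gradients and solved exactly with a DCT, as in Ghiglia and Romeo's unweighted algorithm. The sketch below implements that classical formulation; it is not claimed to be the authors' exact code.

```python
import numpy as np
from scipy.fft import dctn, idctn

def wrap(p):
    """Wrap phase into (-pi, pi]."""
    return (p + np.pi) % (2 * np.pi) - np.pi

def ls_unwrap(psi):
    """Unweighted least-squares phase unwrapping: build the divergence
    of the wrapped gradients and invert the Neumann Laplacian via DCT."""
    M, N = psi.shape
    dx = np.zeros((M, N)); dy = np.zeros((M, N))
    dx[:-1, :] = wrap(np.diff(psi, axis=0))   # wrapped row differences
    dy[:, :-1] = wrap(np.diff(psi, axis=1))   # wrapped column differences
    rho = dx.copy(); rho[1:, :] -= dx[:-1, :]
    rho += dy; rho[:, 1:] -= dy[:, :-1]
    D = dctn(rho, norm='ortho')
    i = np.arange(M)[:, None]; j = np.arange(N)[None, :]
    denom = 2.0 * (np.cos(np.pi * i / M) + np.cos(np.pi * j / N) - 2.0)
    denom[0, 0] = 1.0                         # DC term: arbitrary phase offset
    phi = idctn(D / denom, norm='ortho')
    return phi - phi.mean()                   # zero-mean convention

# sanity check on a smooth ramp whose wrapped phase has no residues
y, x = np.mgrid[0:32, 0:32]
true = 0.2 * x + 0.1 * y
phi = ls_unwrap(wrap(true))
print(np.allclose(phi, true - true.mean(), atol=1e-6))  # True
```

For residue-free wrapped phase the DCT solver recovers the surface exactly (up to a constant), which is consistent with the sub-second LSPU timing reported in Table 2.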
Figure 10. Images of Columbia Glacier: (a) DEM extracted from USGS 3DEP dataset [48], (b) fivefold scale-down model of (a); glacier edge is marked by white curve, region of layover is marked by white dashed curve; reconstructed DEM with (c) proposed method, SSIM = 0.88, RMSE = 28.40 m, (d) NF and LSPU, SSIM = 0.87, RMSE = 28.24 m, (e) MF and QGPU, SSIM = 0.88, RMSE = 24.91 m, (f) proposed method at SNR = 0 dB, SSIM = 0.87, RMSE = 31.93 m, (g) proposed method at SNR = 5 dB, SSIM = 0.86, RMSE = 30.24 m, (h) proposed method at SNR = 10 dB, SSIM = 0.78, RMSE = 33.24 m.
Figure 11. Difference between reconstructed DEM and true DEM in Figure 10b: (a) noise-free, (b) SNR = 0 dB, (c) SNR = 5 dB, (d) SNR = 10 dB.
Figure 12. Images of landslide area near Santa Cruz on 17 March 2020: (a) DEM extracted from USGS 3DEP dataset [48], (b) tenfold scale-down model of (a); reconstructed DEM with (c) proposed method, SSIM = 0.90, RMSE = 2.32 m, (d) NF and LSPU, SSIM = 0.89, RMSE = 2.46 m, (e) MF and QGPU, SSIM = 0.90, RMSE = 2.32 m, (f) proposed method at SNR = 0 dB, SSIM = 0.90, RMSE = 2.32 m, (g) proposed method at SNR = 5 dB, SSIM = 0.72, RMSE = 7.74 m, (h) proposed method at SNR = 10 dB, SSIM = 0.66, RMSE = 9.09 m.
Figure 13. Difference between reconstructed DEM and true DEM in Figure 12b: (a) noise-free, (b) SNR = 0 dB, (c) SNR = 5 dB, (d) SNR = 10 dB.
Figure 14. Height difference between 17 March 2020 and 10 August 2022 in landslide area near Santa Cruz: (a) between DEM on 10 August 2022 and DEM in Figure 12b; reconstructed with (b) proposed method, SSIM = 0.29, RMSE = 2.32 m, (c) NF and LSPU, SSIM = 0.30, RMSE = 2.26 m, (d) MF and QGPU, SSIM = 0.29, RMSE = 2.26 m, (e) NF and LSPU at SNR = 0 dB, SSIM = 0.45, RMSE = 4.31 m, (f) NF and LSPU at SNR = 5 dB, SSIM = 0.20, RMSE = 4.63 m, (g) NF and LSPU at SNR = 10 dB, SSIM = 0.13, RMSE = 9.54 m.
Figure 15. Density map of high-risk landslide areas derived from (a) Figure 14a, true DEM, (b) Figure 14b, proposed method, (c) Figure 14c, NF and LSPU, (d) Figure 14d, MF and QGPU.
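The density maps of Figure 15 are derived from the height-difference maps of Figure 14. The exact derivation is specified in the paper body; as a plausible sketch only, one can flag pixels whose height change exceeds a threshold and then average the flag over a local window. Both the threshold and the window size below are hypothetical illustration values, not parameters from the paper.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def risk_density(dh, thresh=2.0, win=9):
    """Fraction of pixels in a win x win neighborhood whose height
    change |dh| exceeds thresh (both parameters hypothetical)."""
    mask = (np.abs(dh) > thresh).astype(float)
    return uniform_filter(mask, size=win)

dh = np.zeros((50, 50))
dh[20:30, 20:30] = 5.0          # a 10 x 10 patch of large height change
dens = risk_density(dh)
print(round(dens.max(), 6))     # 1.0 at the patch center
```

The resulting map highlights clusters of large elevation change rather than isolated noisy pixels.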
Table 1. Default parameters for InSAR simulations.
Parameter | Symbol | Value
carrier frequency | f_c | 1258 MHz
range bandwidth | B_r | 300 MHz
pulse duration | T_r | 1 μs
range sampling rate | F_r | 360 MHz
range chirp rate | K_r | 300 THz/s
range samples | N_r | 1024
pulse repetition frequency | F_a | 400 Hz
azimuth samples (case 1) | N_a | 2048
azimuth samples (cases 2, 3) | N_a | 1024
look angle | θ | 45°
platform height | H | 2000 m
platform velocity | v_p | 150 m/s
closest slant range | R_0 | 2828 m
slant range resolution | Δr | 0.5 m
azimuth resolution (case 1) | Δa | 0.44 m
azimuth resolution (cases 2, 3) | Δa | 0.88 m
baseline | B | 5 m
height of ambiguity | z_amb | 80.52 m
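Several entries in Table 1 follow from the others through standard SAR relations, which makes a quick consistency check possible (assuming c = 3 × 10⁸ m/s; the height of ambiguity additionally depends on the baseline orientation and is not re-derived here):

```python
import numpy as np

c = 3e8                              # speed of light (m/s), assumed
f_c, B_r, T_r = 1.258e9, 300e6, 1e-6
H, theta = 2000.0, np.deg2rad(45.0)

K_r = B_r / T_r                      # range chirp rate, ~300 THz/s
dr = c / (2 * B_r)                   # slant-range resolution, 0.5 m
R0 = H / np.cos(theta)               # closest slant range, ~2828 m
lam = c / f_c                        # wavelength, ~0.238 m (L-band)

print(K_r, dr, round(R0, 1), round(lam, 3))
```

All four derived values agree with the tabulated parameters.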
Table 2. CPU time of LSPU, QGPU, mean filter, and nonlocal filter.
Method | CPU Time (s)
mean filter (MF) | 962.80
nonlocal filter (NF) | 1970.78
LS phase unwrapping (LSPU) | 0.72
QG phase unwrapping (QGPU) | 10,271.32
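As a reference for the mean-filter timing in Table 2, a minimal sketch of a speckle-reducing mean filter is shown below. Following common InSAR practice (not necessarily the authors' exact implementation), the averaging is applied to the complex interferogram and the phase is extracted afterward, so that wrapped values near ±π are not smeared.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def mean_filter_phase(ifg, win=5):
    """Mean filter on a complex interferogram; phase is taken after
    averaging the real and imaginary parts separately."""
    re = uniform_filter(ifg.real, win)
    im = uniform_filter(ifg.imag, win)
    return np.arctan2(im, re)

# a constant phase close to the wrap boundary survives the filter
ifg = np.exp(1j * np.full((32, 32), 3.1))
ph = mean_filter_phase(ifg)
print(np.allclose(ph, 3.1))  # True
```

Averaging the phase values directly would instead mix +π and −π samples and bias the result toward zero.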
Table 3. RMSE and SSIM indices of acquired images under noise-free condition.
Figure 7 | MF + LSPU | NF + LSPU | MF + QGPU
RMSE (m) | 5.79 | 6.14 | 5.79
SSIM | 0.90 | 0.89 | 0.90

Figure 10 | MF + LSPU | NF + LSPU | MF + QGPU
RMSE (m) | 28.40 | 28.24 | 24.91
SSIM | 0.88 | 0.87 | 0.88

Figure 12 | MF + LSPU | NF + LSPU | MF + QGPU
RMSE (m) | 2.32 | 2.46 | 2.32
SSIM | 0.90 | 0.89 | 0.90
Table 4. RMSE and SSIM indices of acquired images with proposed method under different SNRs.
Figure 7 | SNR = 0 dB | SNR = 5 dB | SNR = 10 dB
RMSE (m) | 10.80 | 9.22 | 22.38
SSIM | 0.89 | 0.89 | 0.74

Figure 10 | SNR = 0 dB | SNR = 5 dB | SNR = 10 dB
RMSE (m) | 31.93 | 30.24 | 33.24
SSIM | 0.87 | 0.86 | 0.78

Figure 12 | SNR = 0 dB | SNR = 5 dB | SNR = 10 dB
RMSE (m) | 2.32 | 7.74 | 9.09
SSIM | 0.90 | 0.72 | 0.66
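The RMSE and SSIM figures in Tables 3 and 4 compare reconstructed DEMs against the true DEMs. A minimal sketch of the two metrics is given below; note that it uses a simplified single-window (global) SSIM, whereas published SSIM values are normally computed with a local sliding window, so the two variants give different numbers.

```python
import numpy as np

def rmse(a, b):
    """Root-mean-square error between two DEMs (same units as input)."""
    return np.sqrt(np.mean((a - b) ** 2))

def ssim_global(a, b, L=None):
    """Single-window (global) SSIM; a simplification of the usual
    local-window SSIM used in image-quality reporting."""
    L = (max(a.max(), b.max()) - min(a.min(), b.min())) if L is None else L
    C1, C2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    mu_a, mu_b = a.mean(), b.mean()
    va, vb = a.var(), b.var()
    cov = np.mean((a - mu_a) * (b - mu_b))
    return ((2 * mu_a * mu_b + C1) * (2 * cov + C2)) / \
           ((mu_a ** 2 + mu_b ** 2 + C1) * (va + vb + C2))

dem = np.outer(np.linspace(0, 100, 64), np.ones(64))   # synthetic ramp DEM
noisy = dem + np.random.default_rng(1).normal(0, 2, dem.shape)
print(rmse(dem, dem))                          # 0.0 for identical DEMs
print(np.isclose(ssim_global(dem, dem), 1.0))  # True
```

RMSE measures pointwise height error in meters, while SSIM rewards preservation of overall terrain structure, which is why the two indices can rank methods differently in Tables 3 and 4.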
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Chuang, H.-Y.; Kiang, J.-F. An On-Site InSAR Terrain Imaging Method with Unmanned Aerial Vehicles. Sensors 2024, 24, 2287. https://doi.org/10.3390/s24072287
