**1. Introduction**

Due to its large coverage and high precision, Interferometric Synthetic Aperture Radar (InSAR) has been widely used for mapping surface deformation, such as urban surface deformation [1,2], seismic deformation [3,4], landslide displacement [5–9], and mining subsidence [10]. With the rapid development of SAR satellite technology [11], both the observation range and frequency have improved [12–14], enabling periodic monitoring of large-scale or nationwide areas. However, traditional processing strategies for Multi-Temporal Interferometric Synthetic Aperture Radar (MT-InSAR) cannot efficiently process the huge number of images with large spatial and temporal coverage. Furthermore, the atmospheric phase screen and orbital errors that may exist in SAR images with wide spatial coverage are difficult to correct. Therefore, optimizing the InSAR processing strategy and parameters is crucial for the application of wide-area InSAR data.

**Citation:** Wang, Y.; Feng, G.; Feng, Z.; Wang, Y.; Wang, X.; Luo, S.; Zhao, Y.; Lu, H. An MT-InSAR Data Partition Strategy for Sentinel-1A/B TOPS Data. *Remote Sens.* **2022**, *14*, 4562. https://doi.org/10.3390/rs14184562

Academic Editors: Paolo Mazzanti and Saverio Romeo

Received: 26 July 2022; Accepted: 2 September 2022; Published: 13 September 2022

**Copyright:** © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

Using supercomputers or distributed computing systems, such as the CASEarth Cloud Infrastructure Platform [15] and ESA's G-POD environment [16], is one way to improve data processing efficiency, but it is too expensive to be widely adopted. Another way is to segment a large image into small blocks, which significantly reduces the computational burden and complexity within each block and improves the efficiency of data processing. Currently, data partition strategies are applied in some steps of SAR data processing, such as phase unwrapping [17–20], orbital error correction [21], atmospheric correction [22,23], and PS point decomposition [24,25], but not in the whole processing chain. The GAMMA software provides a well-known patch-based point target analysis method, Interferometric Point Target Analysis (IPTA) [26]. However, its approach of extending results to adjacent blocks through local reference points between neighboring patches is strongly affected by unstable connections, so errors propagate easily through the result [27]. The StaMPS method and software [28] also provide a block strategy for selecting persistent scatterers, but it is time-consuming for large-scale areas [29]. Block processing often introduces systematic errors, such as reference datum errors [29–31]. Removing these errors requires external data (GNSS and leveling) and modeling [32,33]; moreover, the correction efficacy and precision strongly depend on the precision and spatial distribution of the external data.

To address the above problems, we propose a strategy that divides the original data into small blocks with GAMMA software and processes these blocks independently by the traditional MT-InSAR method. Then, we use the least square method to estimate the datum difference between blocks and mosaic the corrected block results to obtain the overall result. To validate the strategy, we selected Sentinel-1 Terrain Observation by Progressive Scans (TOPS) data over a city in a plain area (Changzhou City, Jiangsu Province) and a city in a mountainous area (Qijiang, Chongqing City) in China for the experiments. The results obtained by the traditional and the proposed methods are compared in terms of precision, memory consumption, and time consumption. We also discuss the optimal overlap ratio of blocks and the applicability of the proposed method.

This study is organized as follows: Section 2 describes the proposed method in detail. The study area and the datasets are introduced in Section 3. In Section 4, we compare the precision and time consumption of the proposed method and the traditional method. The block approach and the applicable scenarios of our method are discussed in Section 5. Finally, conclusions are drawn in Section 6.

#### **2. The Block MT-InSAR Data Processing Strategy**

To handle the heavy computational burden caused by Sentinel-1A/B TOPS data with large spatial and temporal coverage, this paper proposes a data partition strategy for MT-InSAR processing, referred to as the block MT-InSAR algorithm. First, the TOPS data over the study area are co-registered to obtain registered single look complex (RSLC) images, which are then partitioned and processed separately by the traditional MT-InSAR algorithm. Next, the block results are corrected by an adjustment model based on the spatial consistency of homonymy points (the same ground deformation points located in different blocks within the overlap areas). Finally, the corrected results are mosaicked to obtain a continuous overall deformation map. The general flow of the method is shown in Figures 1 and A1.

**Figure 1.** Flowchart of the block MT-InSAR algorithm.

#### *2.1. Data Partition and Block Processing*

Even partition [29], quadtree partition [22], and clustering-based partition [19,34] have been used in parts of the data processing chain, such as atmospheric delay removal, PS network construction, and phase unwrapping. To facilitate the mosaicking process, this paper uses even partition to divide the original data into small blocks, for which the block size and the overlap ratio should be considered.

Block size affects the precision and efficiency of phase unwrapping, atmospheric delay removal, and orbit error estimation. If the block interferograms are highly coherent and easy to unwrap, the block size has little effect on phase unwrapping precision, but a small block size leads to high unwrapping efficiency [17]. Additionally, the atmospheric delay in MT-InSAR consists of a short-scale (a few kilometers) and a long-scale (tens of kilometers) component [35], so a block size smaller than these scales is conducive to removing the atmospheric delay. However, a block size that is too small may remove long-wavelength deformation signals. For ALOS-2 (70 km) datasets, the block width and height should be larger than 1/3 of the scene extent in range and azimuth [21]. Therefore, we set the initial block size to ~30 km in length and width for S1A/B TOPS data.
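The even-partition step can be sketched as follows: given a block size and an overlap ratio, compute the start positions of the blocks along each axis, shifting the last block so the whole image is covered. The function names and the sizes used in the example are illustrative, not the authors' implementation.

```python
def partition_1d(total, block, overlap_ratio):
    """Start indices of evenly spaced blocks of length `block` along an
    axis of `total` pixels; neighbors overlap by overlap_ratio * block,
    and the last block is shifted back so it ends exactly at `total`."""
    step = max(1, int(block * (1 - overlap_ratio)))
    starts = list(range(0, max(total - block, 0) + 1, step))
    if starts[-1] + block < total:          # cover the trailing edge
        starts.append(total - block)
    return starts

def partition_2d(width, height, block_w, block_h, overlap_ratio):
    """(x0, y0, x1, y1) pixel windows covering the full image."""
    return [(x, y, x + block_w, y + block_h)
            for y in partition_1d(height, block_h, overlap_ratio)
            for x in partition_1d(width, block_w, overlap_ratio)]
```

For example, `partition_1d(100, 40, 0.25)` yields blocks starting at 0, 30, and 60, each overlapping its neighbor by 10 pixels.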

The overlap ratio between blocks also affects the reliability of the results and the data processing efficiency: the larger the overlap ratio, the higher the reliability. For example, if a block overlaps its surrounding blocks by 10% in each of the four directions, 36% of the block area is shared with its neighbors, and this overlap area grows to 96% when the overlap ratio is 40% in both the width and height directions (Figure 2a). However, increasing the overlap ratio leads to many repeated calculations and reduces the data processing efficiency. To keep the results reliable (>50% overlap area) while limiting redundant computation, we strike a balance between the two and choose an overlap ratio of 20–40% for further experiments.

**Figure 2.** (**a**) Diagram of overlap ratio and overlap area. *w* and *h* are the width and height of the overlap region, respectively, and *W* and *H* are the width and height of the image, respectively, so the overlap ratio is *w*/*W* or *h*/*H*. The shaded area is the overlap area of the block (red line) with the surrounding blocks, and the overlap area is 1 − (1 − 2 × *w*/*W*) × (1 − 2 × *h*/*H*). (**b**) Diagram of the acquisition of corner-point coordinates in the overlap region. (**c**) Diagram of the adjustment model. The circles represent the deformation points; the thicker the circles, the more times the regions are overlapped. *v* represents the deformation rate of each point.
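The overlap-area relation in the caption can be checked numerically; this small helper (hypothetical, not part of the authors' toolchain) reproduces the 36% and 96% figures quoted in the text.

```python
def overlap_area_fraction(w_ratio, h_ratio):
    """Fraction of an interior block shared with its neighbors when it
    overlaps them by w_ratio (of width) and h_ratio (of height) on each
    side, per the caption formula 1 - (1 - 2w/W)(1 - 2h/H)."""
    return 1 - (1 - 2 * w_ratio) * (1 - 2 * h_ratio)
```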

#### *2.2. Results Correction Based on Least Square Estimation*

After data partition, we process each block to obtain the deformation results using the improved IPTA-InSAR method [36,37]. The obtained deformation results of all blocks are preprocessed in three steps. (1) Co-registration. Due to location errors caused by orbital errors and the low resolution of the DEM, the deformation results may have a systematic location deviation of about 1–2 pixels after geocoding. Such deviation can be removed by an overall offset correction using feature points on the ground. (2) Automatic extraction of overlapping regions. The deformation rates of the homonymy points in the overlapping regions determine the correction precision, so it is necessary to identify the overlapping regions between the deformation results first. We use the topological relationship between image footprints (quadrangles) to find the coordinates of the corner points of each overlap region (Figure 2b). (3) High-quality homonymy point selection. We select homonymy points with high coherence and small uncertainty. After these operations, the deformation results can be corrected by adjustment.

Most errors can be removed during data processing. However, data partition gives each block its own local reference point, and the benchmarks of these reference points may differ, resulting in discrepancies between the results of adjacent blocks and affecting the precision of the overall result. The difference in the deformation rates of the homonymy points in the overlapping area is described as

$$\text{vel}_{i,k} - \text{vel}_{j,k} = \delta_{i,k} - \delta_{j,k} \tag{1}$$

where $vel_{i,k}$ and $vel_{j,k}$ denote the deformation rates at deformation point $k$ in images $i$ and $j$, respectively, and $\delta_{i,k}$ and $\delta_{j,k}$ denote the errors of the corresponding points. Since the error mainly consists of the difference in benchmarks, it can be assumed to be constant.

The matrix form of Equation (1) is:

$$V = B\hat{X} - L \tag{2}$$

where $V = \begin{bmatrix} \delta v_1 & \delta v_2 & \cdots & \delta v_i & \cdots & \delta v_M \end{bmatrix}^T$ is the residual between the calculated values and the observations, $\hat{X} = \begin{bmatrix} \hat{x}_1 & \hat{x}_2 & \cdots & \hat{x}_M \end{bmatrix}^T$ is the vector of differences between the reference points of adjacent images, estimated by the least square method, $B$ is the coefficient matrix, and $L = \begin{bmatrix} vel_{1,k} - vel_{2,k} & \cdots & vel_{i,k} - vel_{j,k} & \cdots \end{bmatrix}^T$.
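Under this model, the datum differences can be estimated by ordinary least squares; a minimal sketch follows. The function name and the choice of fixing one block's datum to zero (to remove the datum defect) are illustrative assumptions; the paper's implementation additionally applies the weighting of Equations (3)–(5).

```python
import numpy as np

def estimate_offsets(obs, n_blocks, ref=0):
    """Least-squares estimate of per-block datum offsets x_i from
    deformation-rate differences of homonymy points.

    obs: list of (i, j, d) tuples with d = vel_{i,k} - vel_{j,k} for a
    homonymy point in the overlap of blocks i and j.  Block `ref` is
    fixed to zero as the common datum.
    """
    B = np.zeros((len(obs), n_blocks))
    L = np.zeros(len(obs))
    for r, (i, j, d) in enumerate(obs):
        B[r, i], B[r, j] = 1.0, -1.0        # x_i - x_j = d
        L[r] = d
    B = np.delete(B, ref, axis=1)           # fix x_ref = 0
    x, *_ = np.linalg.lstsq(B, L, rcond=None)
    return np.insert(x, ref, 0.0)
```

For consistent observations such as `[(0, 1, -2.0), (1, 2, 3.0), (0, 2, 1.0)]`, the recovered offsets are `[0, 2, -1]`.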

To solve Equation (2), we have to determine the weights of the blocks according to the quality of the data involved in the adjustment.

$$D(L) = \sigma_0^2 P^{-1} \tag{3}$$

where $\sigma_0^2$ denotes the variance of unit weight and $P$ is the weight matrix. Assume that the uncertainty of point $i$ is $\delta_i$. Then, the weight of the point is

$$p_i = \frac{c}{\delta_i} \tag{4}$$

In this study, the data are partitioned into small blocks, which are processed independently. The Helmert variance component estimation for multiple data classes is applied to optimize the solution weights of each data set.

$$S\hat{\theta} = W_{\theta} \tag{5}$$

where $\hat{\theta} = \begin{bmatrix} \hat{\sigma}_{01}^2 & \hat{\sigma}_{02}^2 & \cdots & \hat{\sigma}_{0M}^2 \end{bmatrix}^T$ is the vector of estimated unit-weight variances, $W_{\theta} = \begin{bmatrix} V_1^T P V_1 & V_2^T P V_2 & \cdots & V_M^T P V_M \end{bmatrix}^T$ is the vector of weighted squared residuals, and $S$ is the coefficient matrix. After obtaining $\hat{\theta}$, $\hat{X}$ is solved using the least square method. The above process is repeated until $\hat{\theta}$ satisfies the given threshold $T = 3\delta_0$, and the corresponding solution is the optimal $\hat{X}$ for each SAR image block. The corrected deformation rate is obtained by Equation (6).
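A simplified sketch of this iterative variance-component reweighting is given below. It uses an approximate per-group redundancy instead of the strict Helmert redundancy numbers, and all names are illustrative; it is a stand-in for, not a reproduction of, the authors' estimator.

```python
import numpy as np

def helmert_reweight(B, L, groups, max_iter=30):
    """Iteratively re-estimate per-group unit-weight variances and
    rescale the observation weights until the variances agree
    (a simplified variant of Helmert variance component estimation)."""
    groups = np.asarray(groups)
    labels = np.unique(groups)
    n, m = B.shape
    w = np.ones(n)
    x = np.zeros(m)
    for _ in range(max_iter):
        W = np.diag(w)
        x = np.linalg.solve(B.T @ W @ B, B.T @ W @ L)
        v = B @ x - L
        # per-group unit-weight variance with an approximate redundancy
        var = np.array([(w[groups == g] * v[groups == g] ** 2).sum()
                        / max((groups == g).sum() - m / len(labels), 1.0)
                        for g in labels])
        if np.allclose(var, var.mean(), rtol=0.05):
            break  # unit variances agree across groups: weights consistent
        for g, s2 in zip(labels, var):
            w[groups == g] *= var.mean() / max(s2, 1e-12)
        w = np.clip(w, 1e-6, 1e6)           # keep weights numerically sane
    return x, w
```

Reweighting pulls the solution toward the group whose observations are internally most consistent, which is the intended effect of Equation (5).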

$$\hat{vel}_{i,k} = vel_{i,k} - \hat{x}_i \tag{6}$$

The a posteriori variance of unit weight and the covariance matrix are used to evaluate the adjustment results. They can be obtained by

$$\sigma_0^2 = \frac{V^T P V}{r} \tag{7}$$

$$Q_{\hat{X}_i} = B_i^T P B_i \tag{8}$$

In Equation (7), $r$ is the number of redundant observations, which can be regarded as the number of degrees of freedom, $r = N - M$, with $N$ denoting the number of observations and $M$ the number of rows of $\hat{X}$. According to error propagation, the covariance of the estimates of the homonymy points in the overlap region can be obtained by Equation (9).

$$Q_{\hat{L}\hat{L}} = \left(B Q_{\hat{X}}^{-1} B^T P\right) Q \left(B Q_{\hat{X}}^{-1} B^T P\right)^T = B Q_{\hat{X}}^{-1} B^T \tag{9}$$

To verify the precision of the adjustment results, the deformation differences of the homonymy points before and after correction are compared. The block processing results are also verified by comparison with those of the traditional processing method (the result obtained without partitioning).

#### *2.3. Result Mosaicking*

The final step is to mosaic the corrected deformation results of all blocks. After geocoding, the block results are horizontally mosaicked. Even after correction, the deformation of the homonymy points in the overlapping areas may still differ, due to different error distributions, so we adopt the weighted average method to merge the deformation of the homonymy points.
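The weighted-average merge can be sketched as follows, assuming each block supplies a rate map and an uncertainty map on a common geocoded grid. The inverse-variance weights (1/σ²) are an assumption for illustration, as the paper does not specify the weight form.

```python
import numpy as np

def merge_blocks(rates, sigmas):
    """Weighted-average mosaic of per-block deformation-rate maps.

    rates, sigmas: lists of equally sized arrays on a common grid, with
    NaN in `rates` where a block has no coverage.  Pixels covered by
    several blocks are merged with weights 1/sigma**2.
    """
    r = np.array(rates, dtype=float)
    w = 1.0 / np.array(sigmas, dtype=float) ** 2
    w[np.isnan(r)] = 0.0                    # no weight where no coverage
    r = np.nan_to_num(r)
    total = w.sum(axis=0)
    merged = (w * r).sum(axis=0) / np.maximum(total, 1e-12)
    return np.where(total > 0, merged, np.nan)
```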

After correcting the deformation rate, we correct the deformation sequence. Assuming that the deformation is linear and $T = (T_1, T_2, \cdots, T_a, \cdots, T_t)$ is a deformation sequence, the corrected deformation at time $T_a$ can be obtained by

$$\hat{S}_a = \int_{T_1}^{T_a} (v_a + \hat{x}_a)\,dT = S_a + \hat{x}_a (T_a - T_1) \tag{10}$$

where $\hat{S}_a$ is the accumulated deformation after correction, $v_a$ is the deformation rate, $\hat{x}_a$ is the correction of the deformation rate, which cannot be calculated directly, and $S_a$ is the accumulated deformation before correction. If the deformation is linear, $\hat{x}_a$ equals the correction of the average deformation rate over the deformation sequence $T$. This average correction $\hat{x}$ can be calculated by Equation (2), so Equation (10) can be replaced by Equation (11). Figure 3 shows the diagram of the deformation sequence correction.

$$\hat{S}_a = S_a + \hat{x}(T_a - T_1) \tag{11}$$
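Equation (11) amounts to a per-epoch shift of the cumulative series; a minimal sketch is shown below (illustrative names; times in the same units as the rate, e.g., years for mm/yr).

```python
def correct_series(series, times, x_hat):
    """Apply the datum correction of Eq. (11): each cumulative
    deformation S_a is shifted by x_hat * (T_a - T_1), under the
    linear-deformation assumption."""
    t0 = times[0]
    return [s + x_hat * (t - t0) for s, t in zip(series, times)]
```

For example, a series `[0, 5, 10]` mm at times `[0, 0.5, 1]` yr with a rate correction of 2 mm/yr becomes `[0, 6, 12]` mm.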

**Figure 3.** Deformation sequence correction diagram.

#### **3. Experiment and Data Processing**

*3.1. Study Area and Datasets*

Two study areas are selected to validate the proposed method. One is Changzhou City (31°09′–32°04′ N, 119°08′–120°12′ E), a coastal city in eastern China. This area is a plain with an elevation of about 10 m [38]. It has a highly developed economy and urban industry. The continuous expansion of urban and engineering projects has changed the geological environment and led to frequent geological hazards, so surface subsidence monitoring in the area is necessary.

The other study area is Qijiang (28°27′–29°11′ N, 106°23′–107°03′ E), in western China. It lies in the transition zone from the southeastern edge of the Sichuan Basin to the Yunnan–Guizhou Plateau. The topography is undulating: mountainous terrain accounts for 67.6% of the total area and hills for 32.4%. The average elevation of this area is 254.8 m [39].

These two regions are used to test the applicability of the proposed method under different error conditions.

We collected 110 Sentinel-1A/B TOPS images covering Changzhou City from path T69 and frame 99, between 5 January 2018 and 31 December 2020, and acquired 115 Sentinel-1 images over Qijiang from path T55 and frame 92, between 9 January 2018 and 31 December 2021. Specific image parameters and image acquisition time are shown in Tables 1 and A1 in Appendix A.


**Table 1.** The image parameters of the study areas.

#### *3.2. Data Processing*

Using the method described in Section 2, we partitioned the acquired single look complex (SLC) images after co-registration and obtained 30 small blocks with overlapping regions (Figure 4). The block size in Changzhou City is about 7000 × 1400 pixels with an overlap ratio of about 30%; the block size in Qijiang is about 6400 × 1600 pixels with an overlap ratio of about 25%.

The spatial baselines of Sentinel-1 images are short, so we connected each image with its two temporally adjacent images to form a network, considering only the temporal baselines. A multi-look operation (range:azimuth = 5:1) was applied to reduce noise. After the multi-look operation, the image size of Changzhou City was reduced to 1600 × 1400 pixels and that of Qijiang to 1500 × 1600 pixels. The data were processed with minimum cost flow (MCF) for phase unwrapping and Goldstein filtering for noise mitigation. The PS points were selected using a phase coherence threshold and an amplitude dispersion threshold on the amplitude map. Orbital error phases were removed by polynomial fitting. Most atmospheric phases were removed by differencing between neighboring PS points, and the remainder was removed by spatial-temporal filtering. The topographic residual phases were then removed using linear regression. Finally, the deformation sequence was solved from the remaining phases using Singular Value Decomposition (SVD). The obtained deformation time series was corrected by the method introduced in Section 2.2, using the average deformation of the high-quality homonymy points in the overlapping areas. Finally, the results of the traditional processing and the block processing were obtained. Figure 5 shows the deformation results of Changzhou and Qijiang.
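The network construction step (pairing each acquisition with its two temporally adjacent acquisitions) can be sketched as follows; the function name is illustrative.

```python
def build_pairs(dates, n_neighbors=2):
    """Pair each acquisition with its `n_neighbors` temporally
    following acquisitions: the short-baseline network used when
    spatial baselines are negligible."""
    dates = sorted(dates)
    return [(dates[i], dates[j])
            for i in range(len(dates))
            for j in range(i + 1, min(i + 1 + n_neighbors, len(dates)))]
```

For four acquisitions this yields five interferometric pairs: each image is linked to the next one and the one after it.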

**Figure 4.** Data coverage for (**a**) Changzhou City and (**b**) Qijiang City. The red line is the administrative division boundary. The blue frame is the image coverage after partition and the shaded part is the overlapping area. The gray blocks are not in the study area.

**Figure 5.** Deformation results of the study areas. The results of Changzhou obtained by (**a**) the partition method and (**b**) the traditional method. The results of Qijiang obtained by (**c**) the partition method and (**d**) the traditional method. (**e**–**j**) Time series results of the selected points. The red "+" is the reference point. The reference points in (**a**,**c**) are virtual reference points after free-network adjustment, because each small block has its own reference point before adjustment; they are located at the center of gravity of the image coverage. The reference points in (**b**,**d**) are the real reference points used in data processing.

#### **4. Result Analysis**

When dealing with the deformation time series of a large area, most conventional algorithms use one reference point for phase unwrapping and for solving the PS point deformation rates. If the distance between a PS point and the reference point is large, the precision of the result is low. Reducing the image coverage by partition can improve the precision of the PS points. However, partition leads to different reference points for different blocks, so the deformation results should be corrected to follow one benchmark. In this section, the partition and traditional methods are compared in terms of precision and time consumption.

#### *4.1. Precision of the Deformation Rate*

We compare the deformation results in Changzhou obtained by the partition method and the traditional method in Figure 5. The two results show a similar distribution of deformation but slight differences in detail. We manually selected three regions, A, B, and C, for analysis. The size of these regions is 1000 × 1000 pixels. These three regions are stable and outside the deformation region, so we assumed their deformation to be 0. The statistical analysis shows that the standard deviations (STD) of the partition results and the traditional results are 3.4 mm/yr and 3.7 mm/yr in region A, 4.1 mm/yr and 4.2 mm/yr in region B, and 4.6 mm/yr and 4.6 mm/yr in region C, respectively. On the whole, the precision of the results obtained by the partition strategy is slightly higher than that of the traditional processing results.

Figure 5c,d show the deformation results of Qijiang obtained by the two methods. They generally agree with each other, but some local areas show slight differences, especially in the circled areas D, E, and F (selected by the same criteria as A, B, and C). The traditional results contain a large number of uplift signals, which are not deformation signals but residual errors; these errors are significantly fewer in the partition results. The STDs of the partition results in regions D, E, and F are 3.2 mm/yr, 3.1 mm/yr, and 3.6 mm/yr, respectively, and the corresponding values for the traditional results are 3.8 mm/yr, 3.9 mm/yr, and 4.0 mm/yr. Therefore, the partition method outperforms the traditional method in error removal.

The comparison results in Table 2 show that the partition method has higher precision than the traditional method. In the Changzhou experiment, the former obtained a precision of about 5% higher than the traditional method, and in the Qijiang experiment, the precision improvement is about 15%.


**Table 2.** Precision of the deformation velocity in Changzhou and Qijiang found by the two methods.

#### *4.2. Precision of the Deformation Sequence*

After correcting the deformation rates, we corrected the corresponding deformation sequences. We selected the deformation time series of three points each in Changzhou City (P1, P2, P3) and Qijiang (P4, P5, P6) to test the result precision. These points are in the overlapping regions and have different deformation magnitudes: P1 and P4 have large deformation rates, P2 and P5 medium rates, and P3 and P6 small rates. The results are shown in Figure 5. In Changzhou City, the difference between the deformation sequences obtained by our method and the traditional method is not significant. At P2, the deformation sequences obtained by the two methods almost coincide (Figure 5f), and the deformation rate difference at the three selected points is less than 1 mm/yr. The RMSE between the two results is 5.3 mm for P1, 3.2 mm for P2, and 4.0 mm for P3. For the three points in Qijiang, the overall deformation trend and magnitude obtained by the two methods are basically the same, with RMSEs of 6.2 mm, 5.0 mm, and 3.0 mm at P4, P5, and P6, respectively.

In Figure 5e,h, the annual average deformation rates of these two points exceed 30 mm/yr, and the difference between the deformation rates obtained by the two methods is about 1.5 mm/yr, which is basically within the deformation monitoring precision of InSAR. A simple proportional function model is used to correct the time series; if the deformation is nonlinear, this correction model might not be appropriate.
