Article

Deep Learning for Enhanced-Resolution Reconstruction of Sentinel-1 Backscatter NRCS in China’s Offshore Seas

1 School of Electronic Engineering, Xi’an University of Posts & Telecommunications, Xi’an 710121, China
2 China Academy of Space Technology, Xi’an 710100, China
3 School of Physics, Xidian University, Xi’an 710071, China
* Author to whom correspondence should be addressed.
Remote Sens. 2025, 17(8), 1385; https://doi.org/10.3390/rs17081385
Submission received: 25 January 2025 / Revised: 9 April 2025 / Accepted: 11 April 2025 / Published: 13 April 2025
(This article belongs to the Special Issue Deep Learning and Computer Vision in Remote Sensing-III)

Abstract

High-precision and high-resolution scattering data play a crucial role in remote sensing applications, including ocean environment monitoring, target recognition, and classification. This paper proposes a deep learning-based model aimed at enhancing and reconstructing the spatial resolution of Sentinel-1 backscatter NRCS (Normalized Radar Cross Section) data for China’s offshore seas, including the Bohai Sea, Yellow Sea, East China Sea, Taiwan Strait, and South China Sea. The proposed model innovatively integrates a Self-Attention Feature Fusion based on the Weighted Channel Concatenation (SAFF-WCC) module, combined with the Global Attention Mechanism (GAM) and High-Order Attention (HOA) modules. The feature fusion module effectively regulates the proportion of each feature during the fusion process through weight allocation, significantly enhancing the effectiveness of multi-feature integration. The experimental results show that the model can effectively enhance the fine structural features of marine targets when the resolution is doubled, though the enhancement effect is slightly diminished when the resolution is quadrupled. For high-resolution data reconstruction, the proposed model demonstrates significant advantages over traditional methods under a scale factor of 2 across four key evaluation metrics, including PSNR, SSIM, MS-SSIM, and MAPE. These results indicate that the proposed deep learning-based model is not only well-suited for scattering data from China’s offshore seas but also provides robust support for subsequent research on ocean target recognition, as well as the compression and transmission of SAR data.

1. Introduction

The Normalized Radar Cross Section (NRCS) is crucial in marine science and has been used in numerous studies, including oil pollution monitoring [1], wind speed retrieval [2], wave height estimation [3], rainfall monitoring [4], marine target identification and classification [5], and image interpretation [6]. Enhancing the NRCS resolution aids in improving spatial feature expressiveness, enabling more accurate detection and identification of surface features like water bodies and vegetation [7], and enhancing structural characteristics of marine targets, thereby improving identification abilities. The factors constraining the resolution of radar systems are multifaceted, including constraints of radar system architecture (signal bandwidth, sampling rate, synthetic aperture length, antenna beamwidth, and receiver performance), external environmental factors (celestial interference, atmospheric interference, and sea surface conditions) and constraints of data processing techniques. Enhancing the performance of radar systems, such as sharpening antenna beams or employing arrays with tens of antenna elements, is an effective way to address low-resolution issues. However, these methods incur substantial economic costs [8]. Therefore, it is necessary to develop more practical and cost-effective super-resolution algorithms to improve the resolution of scattering data.
Previous studies have significantly enhanced the spatial resolution of scatterometer and radiometer data through traditional numerical optimization algorithms. For instance, Lindsley utilized the Weighted Average (AVE) algorithm and the Scatterometer Image Reconstruction (SIR) algorithm to reconstruct the backscatter data measured by the MetOp satellite, achieving a resolution of 15 to 20 km, which significantly improves upon the nominal spatial resolutions of 25 and 50 km [9]. D. G. Long employed the Backus–Gilbert Inversion (BGI) technique in combination with the SIR algorithm to enhance the spatial resolution of land and vegetation areas in data collected by the Special Sensor Microwave/Imager (SSM/I) [10]. D. G. Long improved the resolution of the Cassini Titan Radar Mapper data by applying the AVE and SIR algorithms [11], and further optimized the spatial resolution of SMAP data using the rSIR and BG algorithms [12]. Additionally, D. G. Long combined the Drop-In-The-Bucket (DIB) technique with the SIR algorithm to enhance the resolution of SMAP NRCS data [13]. Santi, E. applied the Smoothing Filter-Based Intensity Modulation (SFIM) technique to improve the spatial resolution of land-, forest-, and snow-covered area data measured by AMSR-E [14]. J. Z. Miller achieved spatial resolution improvement for SMAP NRCS data using the SIR algorithm [15]. Although various algorithms have been proposed for enhancing the spatial resolution of NRCS data, such as BG and SIR algorithms, they still face numerous limitations in practical applications. These algorithms are computationally complex, and require additional parameter information (e.g., antenna gain functions and acquisition geometry), which not only increases operational complexity but also restricts their broader applicability [14]. Most importantly, these algorithms often trade off other performance aspects—such as noise levels, NRCS accuracy, and temporal resolution—when improving spatial resolution [16].
In recent years, with the advancement of deep learning techniques, neural network-based super-resolution methods have demonstrated remarkable advantages in the field of synthetic aperture radar (SAR) image enhancement. For instance, L. G. Wang et al. proposed a generative adversarial network (GAN)-based super-resolution reconstruction method for SAR images, achieving resolution enhancement from low to high resolution [17]. Y. H. Li et al. introduced a residual attention module (ERAM), integrating channel attention (CA) and spatial attention (SA), and implemented a SAR image super-resolution U-Net architecture leveraging this module [18]. L. J. Bu et al. proposed a deep learning network for integrated speckle reduction and super-resolution in multi-temporal SAR (ISSMSAR), which employs two parallel subnetworks to simultaneously process dual-temporal SAR image inputs [19]. Y. Y. Kong et al. introduced a feature extraction module combining convolutional operations with deformable multi-head self-attention (DMSA), forming a SAR image super-resolution network named DMSC-GAN [20]. These methods, however, primarily target resolution enhancement of natural images, whose pixel values are non-negative and bounded. Because NRCS values fall well outside the range of natural image pixel values, directly applying these approaches to NRCS resolution enhancement would lose part of the numerical information.
On the other hand, combining resolution reconstruction algorithms with data down-sampling can serve as an effective approach for data compression and reconstruction. In synthetic aperture radar (SAR) applications, while hardware advancements have enabled high-resolution data acquisition, they have concurrently exacerbated data storage and transmission challenges. To address this issue, previous researchers have proposed various methods for SAR data compression and reconstruction. For instance, Yang, D. et al. applied the Matrix Completion (MC) theory to the SAR imaging process, leveraging the low-rank property of the radar echo data matrix to reconstruct the full aperture data from undersampled measurements [21]. Lee, S. et al. employed Compressive Sensing (CS) techniques to reconstruct SAR images [22]. Alaa M. El-Ashkar et al. utilized CS techniques for SAR image reconstruction, successfully addressing the challenges of data storage and transmission while maintaining image quality [23]. Slim Rouabah et al. adopted a Fast Fourier Transform (FFT)-based recovery algorithm for SAR compression and reconstruction. Their experimental results demonstrated that high-quality image recovery could be achieved with only 30% of the data volume [24]. While these conventional approaches have been extensively applied in terrestrial monitoring, deep learning architectures are increasingly emerging as a transformative paradigm for SAR image reconstruction. Shaoman Fu et al. proposed a content-adaptive transform network and a Context-Aware Entropy Model (CAEM) to achieve SAR image compression and reconstruction [25]. Zhixiong Di et al. introduced a compression framework based on Variational Autoencoders (VAE), which integrates pyramid features and quality enhancement techniques. This framework significantly improves the compression ratio and reconstruction quality for SAR images [26].
Ocean remote sensing is more challenging than land observation, owing to the inherent stochasticity and hydrodynamic complexity of sea waves, a domain in which deep learning architectures have demonstrated superior efficacy. Ma proposed a multi-task one-dimensional convolutional neural network (MT1DCNN) for the joint prediction of the type and parameters of sea clutter amplitude distribution [27]. Wang et al. proposed an improved H-YOLOv4 model for effective sea clutter region extraction [28]. Ji established an over-the-horizon propagation loss (OHPL) prediction model by incorporating prior information into the long short-term memory (LSTM)–transformer structure (IPILT–OHPL) for efficiently predicting the OHPL in nonuniform evaporation ducts [29]. He et al. forecast sea surface temperature (SST) values using the Attention-based Context Fusion Network (ACFN) [30]. As deep learning has achieved significant advancements in enhancing image pixel resolution, some researchers have proposed high-resolution reconstruction methods based on deep learning for non-image data, such as sea surface temperature and ocean salinity. Wang applied the LightGBM algorithm to establish a high-resolution reconstruction model for sea surface salinity data [31]. Wang proposed an implicit neural representation-based interpolation method with temporal information (T_INRI) to reconstruct high spatial resolution sea surface temperature data [32].
A comprehensive review of the deep learning literature shows that attention mechanisms have emerged as a pivotal technique for obtaining discriminative feature representations, thereby enhancing model performance, and numerous studies have adopted them. Specifically, spatial attention [33] achieves selective feature processing by focusing on spatial positions, but its coarse spatial masking strategy cannot modulate fine-grained channel information. Channel attention [34] can model inter-channel dependencies but is constrained by first-order statistics and fails to capture high-order feature interactions. Building upon these, researchers have proposed dual-attention modules that combine spatial and channel attention in serial [35] and parallel [36] configurations, yet these still suffer from insufficient cross-dimensional interaction. To address this limitation, the Global Attention Mechanism (GAM) innovatively integrates an MLP-based 3D-permutation channel submodule with a convolutional spatial submodule, effectively suppressing information decay while enhancing global representation capability [37]. Meanwhile, High-Order Attention (HOA) employs a high-order polynomial predictor to model the higher-order statistics of convolutional activations, significantly improving the capture of complex feature correlations [38].
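To make the GAM structure concrete, the following PyTorch sketch implements the two submodules as we read them from [37]; the reduction ratio r = 4 and the 7 × 7 spatial kernels are illustrative assumptions rather than the exact configuration used in our network.

```python
import torch
import torch.nn as nn

class GAM(nn.Module):
    """Sketch of the Global Attention Mechanism [37]: an MLP-based channel
    submodule applied to a 3D-permuted tensor, followed by a convolutional
    spatial submodule. Reduction ratio r = 4 is an assumption."""
    def __init__(self, channels: int, r: int = 4):
        super().__init__()
        self.channel_mlp = nn.Sequential(
            nn.Linear(channels, channels // r), nn.ReLU(inplace=True),
            nn.Linear(channels // r, channels))
        self.spatial = nn.Sequential(
            nn.Conv2d(channels, channels // r, 7, padding=3), nn.ReLU(inplace=True),
            nn.Conv2d(channels // r, channels, 7, padding=3))

    def forward(self, x):
        # Channel submodule: permute (B,C,H,W) -> (B,H,W,C) so the MLP mixes channels.
        attn = self.channel_mlp(x.permute(0, 2, 3, 1)).permute(0, 3, 1, 2)
        x = x * torch.sigmoid(attn)
        # Spatial submodule: 7x7 convolutions produce a spatial attention map.
        return x * torch.sigmoid(self.spatial(x))
```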
Inspired by the successful application of deep learning techniques to oceanic phenomena by previous researchers, this paper explores the potential of a deep learning model for the enhanced-resolution reconstruction of oceanic scattering data. We propose a deep neural network that innovatively introduces a Self-Attention Feature Fusion based on the Weighted Channel Concatenation (SAFF-WCC) module, combining the Global Attention Mechanism (GAM) module and the High-Order Attention (HOA) module. The feature fusion module can automatically allocate weight proportions according to the importance of input features, effectively avoiding potential issues of feature loss or over-fusion during the feature fusion process, thereby achieving better feature fusion results and higher reconstruction quality. Furthermore, the proposed method eliminates the need for additional parameter requirements and high computational complexity inherent in traditional algorithms while maintaining temporal resolution. Moreover, the resolution reconstruction technique proposed in this study offers an alternative innovative approach to alleviating data storage and transmission pressures. Experiments were conducted using SAR data with alternating down-sampling scaling factors of 2 and 4, resulting in data sizes reduced to 1/4 and 1/16 of the original size, respectively, and significantly lowering storage and transmission demands. Additionally, we conducted comparative analyses between the proposed technique and traditional Compressed Sensing (CS) methods to further validate its effectiveness and advantages.
This paper is organized as follows: Section 2 outlines the acquisition and processing of the scattering data from the Sentinel-1 satellite. Section 3 elaborates on the deep learning model for high-resolution reconstruction. Section 4 shows the ablation experiments and the comparison experiments for the proposed SAFF-WCC module, along with other related feature fusion modules. The experimental results for the enhancement and reconstruction of the NRCS data, as well as comparing the results with those from traditional methods, are also presented in this section. Section 5 discusses the advantages and the limitations of the proposed approach and briefly describes future work. Section 6 summarizes this paper.

2. Datasets

The satellites capable of offering NRCS products include JERS-1, ERS-1/2, RADARSAT-1/2, Envisat-1, Sentinel-1, TerraSAR-X, ALOS-2, and China’s GaoFen-3, among others. The RADARSAT Constellation Mission (RCM) consists of three identical satellites equipped with C-band SAR, covering HH, VV, HV, VH, and compact polarization modes, with spatial resolutions ranging from 5 m to 100 m [39]. The Gaofen-3 (GF-3) satellite is China’s first civilian quad-polarized C-band imaging microwave satellite that can provide high-resolution marine and land observations. It operates in 12 different imaging modes. Among these, there are two full-polarization measurement modes: the QPSI mode with a resolution of 8 m, a swath width of 30 km, and an incidence angle range of 20 to 50 degrees; and the QPSII mode with a resolution of 25 m, a swath width of 45 km, and an incidence angle range of 19 to 50 degrees [40]. Sentinel-1 consists of two polar-orbiting satellites, each carrying an imaging C-band SAR to provide long-term radar backscatter data [41]. Equipped with active phased array antennas, it achieves high-precision beam pointing in both elevation and azimuth angles, enabling high flexibility in data acquisition [42]. Sentinel-1’s SAR has four imaging modes: strip map, interferometric wide swath (IW), extra-wide swath, and wave mode (WV). Core products are provided at Levels 0, 1, and 2. In this paper, VV- and VH-polarized data from the high-resolution Level 1 ground range detected product of the IW mode are used. The resolution is 20 m × 22 m, with a swath width of 250 km [43].

2.1. Sentinel-1 Data Extraction

The Sentinel Application Platform (SNAP) version 9.0.0 is software provided by the European Space Agency (ESA) for preprocessing Sentinel-1 data [44]. SNAP offers two primary operational modes: the Graphical User Interface (GUI) and the Graph Processing Tool (GPT). While the GUI is user-friendly and convenient, it is less efficient compared to the GPT tool. Therefore, this study utilizes the GPT mode for data processing. As illustrated in Figure 1, the entire processing workflow consists of eight steps: data input, thermal noise removal, orbit file correction, radiometric calibration, noise filtering, Doppler terrain correction, conversion to decibel scale, and scattering data output.
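As an illustration of how the GPT mode can be scripted for batch processing, the sketch below shells out to the gpt executable with a reusable processing graph. The graph file name, parameter names, and directory layout are hypothetical; the graph itself, encoding the eight steps above, would be authored once in the SNAP GUI and reused here.

```python
import subprocess
from pathlib import Path

# "s1_preprocess.xml" is a hypothetical SNAP graph encoding the eight steps
# (thermal noise removal, orbit correction, calibration, filtering, terrain
# correction, dB conversion, ...); author it once in SNAP and reuse it here.
GRAPH = "s1_preprocess.xml"

for zip_file in sorted(Path("scenes").glob("S1*_IW_GRDH_*.zip")):
    out_file = Path("output") / (zip_file.stem + ".nc")
    subprocess.run(
        ["gpt", GRAPH,
         f"-Pinput={zip_file}",    # -P values substitute the ${input} and
         f"-Poutput={out_file}"],  # ${output} variables defined in the graph
        check=True)                # raise if any scene fails to process
```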
After preprocessing the data using SNAP, an output file in the NetCDF format can be obtained. Since the size of the NRCS data in a single file is approximately 24,000 × 35,000, which is not suitable for direct input into deep learning models, the data requires further processing. As shown in Figure 2, the scattering data is gridded and divided into sections of 1024 × 1024.
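A minimal NumPy sketch of this gridding step is shown below, assuming the SNAP output was written as NetCDF; the variable name is a placeholder that depends on the SNAP output configuration.

```python
import numpy as np
from netCDF4 import Dataset

TILE = 1024

def grid_nrcs(nc_path: str, var: str = "Sigma0_VV_db") -> np.ndarray:
    """Cut one preprocessed scene (~24,000 x 35,000) into 1024 x 1024 tiles.
    The variable name is a placeholder; check the actual NetCDF output."""
    nrcs = np.asarray(Dataset(nc_path)[var][:])
    rows = nrcs.shape[0] // TILE
    cols = nrcs.shape[1] // TILE
    # Drop the ragged border, then reshape into (rows * cols, TILE, TILE).
    return (nrcs[:rows * TILE, :cols * TILE]
            .reshape(rows, TILE, cols, TILE)
            .swapaxes(1, 2)
            .reshape(-1, TILE, TILE))
```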
In this paper, the focus of the high-resolution reconstruction is primarily on oceanic areas. Due to the presence of uncertain factors, such as cargo and cruise ships, it is necessary to remove data from large land areas to avoid negatively impacting the reconstruction of objects like ships within the ocean. Simultaneously, it is important to retain parts of small islands within the ocean to enhance the number and feature diversity of non-sea-surface targets. Figure 3a shows the scattering data image of a specific area in the Yellow Sea at 09:47:42 on 14 January 2021. Figure 3b displays the scattering data image of the ocean area after removing large land areas, using the gridding method, and reassembling the remaining sections.
China’s offshore seas can be divided into five main parts, from north to south: the Bohai Sea, Yellow Sea, East China Sea, Taiwan Strait, and South China Sea, which adjoin the Pacific Ocean to the east. In this section, based on their specific locations, these five representative marine areas are selected as the subjects for high-resolution reconstruction studies of the ocean scattering data. Specific longitude and latitude data can be found in reference [45]. Table 1 presents the longitude and latitude information for each marine area, as well as the division of the training and validation sets for the corresponding Sentinel-1 scattering data based on the marine area information.

2.2. Data Processing for Deep Learning Model

Supervised learning is used in this paper to train the deep learning models. The low-resolution input data are generated from the original data by alternating downsampling, and the loss is then calculated by comparing the model’s predictions with the original, full-resolution data.
Due to the relatively small proportion of scattering data from individual objects (such as ships) in the overall sea surface scattering data, the model struggles to effectively extract features from this subset, resulting in the loss of localized details. To improve the prediction of high-resolution data from low-resolution data, while avoiding the loss of edge data during feature extraction, preprocessing is applied based on the significant differences in scattering characteristics between the sea surface and targets. The Sentinel-1 scattering data is processed using SNAP software version 9.0.0, then gridded and divided into 1024 × 1024-pixel blocks. Typically, the backscattered signal energy from the ocean surface is significantly lower than that reflected from target objects. Based on this characteristic, the backscattering data can be binarized to separate edge data from non-edge data, thereby further distinguishing small targets from sea surface features. For example, the structural contour characteristics of small targets are preserved in the edge data, enabling the model to focus more on learning the feature information of small targets during edge data branch training. Additionally, to enhance the distinguishability between targets and the sea surface, the preprocessing stage normalizes the backscattering data to a range of 0–255, thereby amplifying the contrast between the target scattering data (higher values) and the sea surface scattering data (lower values), as shown in Figure 4. (Note: the 1024 × 1024 resolution data is used here as an example, and downsampled resolutions should be used for actual model training.)
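The sketch below illustrates this preprocessing chain: alternating downsampling, normalization to 0–255, and a threshold-based binarization that splits each tile into edge and non-edge branches. The percentile threshold is our illustrative assumption; in practice the split exploits the sea-target energy contrast described above.

```python
import numpy as np

def alternating_downsample(x: np.ndarray, s: int) -> np.ndarray:
    """Keep every s-th sample along both axes (scale factor s = 2 or 4),
    reducing the data volume to 1/s**2 of the original."""
    return x[::s, ::s]

def split_edge_branches(tile_db: np.ndarray, thr_percentile: float = 99.0):
    """Normalize a dB tile to 0-255 and split it into edge / non-edge
    branches by thresholding; the percentile value is an assumption."""
    lo, hi = tile_db.min(), tile_db.max()
    norm = 255.0 * (tile_db - lo) / (hi - lo + 1e-12)
    mask = norm >= np.percentile(norm, thr_percentile)  # bright targets/edges
    edge = np.where(mask, norm, 0.0)       # preserves target contours
    non_edge = np.where(mask, 0.0, norm)   # sea-surface background
    return edge, non_edge
```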

3. Deep Learning Model

3.1. Network Structure

The network structure utilized in this paper is illustrated in Figure 5, where the procedures enclosed in dashed boxes are implemented during the model training phase. The training process proceeds as follows:
  • The resolution of the original data is reduced by alternating downsampling with a specific scale factor. The downsampled data is binarized, and the edge data is separated from non-edge data.
  • Feature extraction is performed on the edge data using three residual groups. The data features extracted from these three groups are fused using a feature fusion module. Fine-grained features of the fused data are extracted using HOA with R = 1, and the resolution is then enhanced through linear upsampling.
  • Feature extraction is also performed on the non-edge data using three residual groups. The GAM is employed to further extract specific features from those extracted by the residual groups. The features from the residual groups and those from the GAM are fused using a feature fusion approach. Fine-grained features of the fused data are extracted using HOA with R = 3, and the resolution is then enhanced through linear upsampling.
  • Finally, the resolution-enhanced edge and non-edge data are combined, using the feature fusion module again for an enhanced resolution output.
In the proposed network model, the basic network structure is designed based on a convolutional neural network (CNN) module with a Conv-ReLU-Conv architecture. To effectively mitigate the common issue of gradient vanishing during the training of deep neural networks, a Weighted Channel Concatenation (WCC) mechanism was introduced to form a residual structure, as shown in the green box in Figure 5b. WCC, as a connection method for the network, replaces a direct element-wise addition and allows the network to adaptively adjust the connection rate based on weight updates [46].
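The following PyTorch sketch shows one reading of such a WCC residual block: learnable scalars weight the identity and convolutional branches before channel concatenation, and a 1 × 1 convolution projects the result back to the original channel count. The exact parameterization in [46] may differ.

```python
import torch
import torch.nn as nn

class WCCResidualBlock(nn.Module):
    """Conv-ReLU-Conv block with a Weighted Channel Concatenation residual
    connection: instead of element-wise addition, the input and branch
    output are scaled by learnable weights, concatenated along channels,
    and projected back by a 1x1 convolution (a sketch, not a reference
    implementation of [46])."""
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1))
        self.w = nn.Parameter(torch.ones(2))          # adaptive connection rates
        self.project = nn.Conv2d(2 * channels, channels, 1)

    def forward(self, x):
        y = self.body(x)
        fused = torch.cat([self.w[0] * x, self.w[1] * y], dim=1)
        return self.project(fused)
```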
Through an in-depth analysis of the performance of the basic network structure in enhancing NRCS resolution, it was found that relying solely on convolutional layers for feature extraction resulted in suboptimal reconstruction quality in edge regions. To enhance the extraction and refinement of edge features, we drew inspiration from the successful application of deep learning in high-resolution reconstruction of oceanographic parameters [47], and introduced a Global Attention Mechanism (GAM) module and a High-Order Attention (HOA) module. Specifically, the GAM module was incorporated into the feature extraction stage to more effectively enhance the network’s focus on key features, while the HOA module was applied in the feature refinement stage to capture more detailed feature information.
In terms of the feature fusion strategy, we abandoned the plain Weighted Channel Concatenation (WCC) method and adopted an innovative fusion scheme that integrates a self-attention component into the WCC framework. Specifically, the proposed scheme constructs an efficient self-attention mechanism by utilizing a 1 × 1 convolutional layer, a sigmoid activation function, and Hadamard product operations. Subsequently, the features are connected through a WCC, as illustrated in the pink box of Figure 5b. This design enables the system to automatically assign weights based on the importance of the features when fusing edge and non-edge features, thereby significantly improving the effectiveness and accuracy of the feature fusion process.
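A minimal sketch of this fusion scheme is given below, assuming both branches carry the same channel count; each attention map is produced as described (1 × 1 convolution, sigmoid, Hadamard product) before the weighted channel concatenation.

```python
import torch
import torch.nn as nn

class SAFFWCC(nn.Module):
    """Sketch of the SAFF-WCC module: a 1x1 convolution plus sigmoid yields
    a self-attention map per branch, applied via the Hadamard product; the
    re-weighted branches are then joined by weighted channel concatenation.
    Layer sizes are assumptions."""
    def __init__(self, channels: int):
        super().__init__()
        self.attn_a = nn.Sequential(nn.Conv2d(channels, channels, 1), nn.Sigmoid())
        self.attn_b = nn.Sequential(nn.Conv2d(channels, channels, 1), nn.Sigmoid())
        self.w = nn.Parameter(torch.ones(2))
        self.project = nn.Conv2d(2 * channels, channels, 1)

    def forward(self, feat_a, feat_b):
        a = feat_a * self.attn_a(feat_a)   # Hadamard product with attention map
        b = feat_b * self.attn_b(feat_b)
        return self.project(torch.cat([self.w[0] * a, self.w[1] * b], dim=1))
```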
Our model achieves a balance between performance and efficiency, with 33.82 M parameters and a computational cost of 155.19 MMac. The memory consumption of the model is 301.06 MB. The inference speed reaches 297.43 FPS (3.52 ms per frame) on an NVIDIA RTX 4090 GPU (Gainward, Shenzhen, China), demonstrating real-time capability. The training time and model accuracy across different datasets (with a fixed sample size of 1024 × 1024 pixels) are presented in Table 2.

3.2. Optimize Algorithm Settings

For the proposed network model in this study, we compared two optimization algorithms: Adam [48] and AdamW [49]. The specific formula for AdamW is shown in Equation (1):
$$w_i = (1 - nr)\, w_{i-1} - n \frac{m_i}{\sqrt{v_i} + c} \tag{1}$$
where $w_i$ represents the parameters at the i-th iteration, $n$ is the learning rate, $r$ is the weight decay coefficient, $m_i$ is the first-order moment estimate of the gradient, $v_i$ is the second-order moment estimate of the gradient, and $c$ is a small constant to prevent division by zero.
As shown in Figure 6, for the proposed model in this study, the AdamW optimization algorithm demonstrates a significantly faster convergence speed compared to the Adam algorithm under identical experimental conditions. Additionally, the convergence performance of AdamW is slightly superior to that of Adam. Therefore, this study selects AdamW as the optimization algorithm for the model.
The hyperparameter settings are as follows: the initial learning rate $n$ is set to 0.001, and the weight decay coefficient $r$ is set to 0.01. The remaining parameters keep their default values: the small constant $c$ is $1 \times 10^{-8}$, the first-order moment estimation decay rate $\beta_1$ is 0.9, and the second-order moment estimation decay rate $\beta_2$ is 0.999.
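In PyTorch, these settings map directly onto torch.optim.AdamW, as sketched below; the one-layer model is a stand-in for the proposed network.

```python
import torch

model = torch.nn.Conv2d(1, 64, 3, padding=1)  # stand-in for the proposed network

# Hyperparameters from Section 3.2: lr = n = 0.001, weight_decay = r = 0.01,
# eps = c = 1e-8, betas = (beta1, beta2) = (0.9, 0.999).
optimizer = torch.optim.AdamW(
    model.parameters(), lr=1e-3, betas=(0.9, 0.999), eps=1e-8, weight_decay=0.01)
```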

3.3. Evaluation Criteria

To evaluate the agreement between the predicted results and the actual results, four criteria were selected: PSNR [50], SSIM [51], MS-SSIM [52], and MAPE [53]. The formulas for these four evaluation criteria are shown in Equations (2)–(5), respectively.
$$PSNR = 10 \log_{10}\!\left(\frac{MAX^2}{MSE}\right) \tag{2}$$
where MAX represents the maximum possible pixel value, usually 255, and MSE stands for the Mean Squared Error.
$$SSIM = \frac{\left(2\mu_{y_t}\mu_{y_p} + C_1\right)\left(2\sigma_{y_t y_p} + C_2\right)}{\left(\mu_{y_t}^2 + \mu_{y_p}^2 + C_1\right)\left(\sigma_{y_t}^2 + \sigma_{y_p}^2 + C_2\right)} \tag{3}$$
where $\mu_{y_t}$ and $\mu_{y_p}$ denote the pixel sample means of the actual and predicted values, respectively, $\sigma_{y_t}^2$ and $\sigma_{y_p}^2$ represent the variances of the actual and predicted values, respectively, $\sigma_{y_t y_p}$ is the covariance between the actual and predicted values, and $C_1$ and $C_2$ are constants introduced to avoid division by zero.
$$MS\_SSIM = \prod_{i=1}^{N} \left[ l\big(y_t(i), y_p(i)\big) \right]^{\alpha_i} \left[ c\big(y_t(i), y_p(i)\big) \right]^{\beta_i} \left[ s\big(y_t(i), y_p(i)\big) \right]^{\gamma_i} \tag{4}$$
where $y_t(i)$ and $y_p(i)$ represent the actual and predicted values at the i-th scale, respectively, $\alpha_i$, $\beta_i$, and $\gamma_i$ are the weight parameters at the i-th scale, and $l\big(y_t(i), y_p(i)\big)$, $c\big(y_t(i), y_p(i)\big)$, and $s\big(y_t(i), y_p(i)\big)$ are the luminance, contrast, and structural similarity at the i-th scale, respectively.
$$MAPE = \frac{100\%}{n} \sum_{i=1}^{n} \left| \frac{y_p(i) - y_t(i)}{y_t(i)} \right| \tag{5}$$
where $y_t(i)$ and $y_p(i)$ represent the actual and predicted values of the i-th data point, respectively, and $n$ is the number of data points.
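For reference, the PSNR, SSIM, and MAPE computations can be reproduced with scikit-image and NumPy as sketched below, with data assumed normalized to 0–255 as in Section 2.2; MS-SSIM additionally requires a multi-scale pyramid, available in libraries such as pytorch-msssim.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate(y_true: np.ndarray, y_pred: np.ndarray) -> dict:
    """PSNR and SSIM via scikit-image plus MAPE per Equation (5); inputs
    are 2D arrays normalized to 0-255. MS-SSIM is omitted here."""
    eps = 1e-12  # guards against division by zero for near-zero sea returns
    return {
        "PSNR": peak_signal_noise_ratio(y_true, y_pred, data_range=255),
        "SSIM": structural_similarity(y_true, y_pred, data_range=255),
        "MAPE": 100.0 * np.mean(np.abs((y_pred - y_true) / (y_true + eps))),
    }
```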
The experimental results for the proposed model for the enhanced-resolution reconstruction of the Sentinel-1 backscatter NRCS in China’s offshore seas, using the mentioned evaluation criteria above, will be presented in Section 4.

3.4. Loss Function

The common loss functions for regression problems include the Mean Squared Error (MSE) [54], the Mean Absolute Error (MAE) [55], and the Smooth L1 Loss [56]. The three loss functions are calculated as follows:
$$MSE\_LOSS = \frac{1}{n} \sum_{i=1}^{n} \big( y_t(i) - y_p(i) \big)^2 \tag{6}$$
$$MAE\_LOSS = \frac{1}{n} \sum_{i=1}^{n} \big| y_t(i) - y_p(i) \big| \tag{7}$$
$$SmoothL1\_LOSS = \frac{1}{n} \sum_{i=1}^{n} \begin{cases} 0.5 \big( y_t(i) - y_p(i) \big)^2, & \big| y_t(i) - y_p(i) \big| < 1 \\ \big| y_t(i) - y_p(i) \big| - 0.5, & \text{otherwise} \end{cases} \tag{8}$$
where $n$ represents the total number of training data, $y_t(i)$ denotes the true result corresponding to the i-th input, and $y_p(i)$ represents the model’s predicted output.
The MSE loss function converges rapidly in the initial stages but is susceptible to outliers, which can slow convergence later on. The MAE loss function, while robust to outliers, has a constant gradient update magnitude, making fine-tuning challenging. In contrast, the Smooth L1 Loss combines the strengths of both, achieving quick convergence while remaining robust to outliers. As shown in Figure 7, the comparison of loss values and MS-SSIM evaluation metrics indicates that, under the same conditions, the convergence performance of the Smooth L1 Loss function surpasses that of the MSE and MAE loss functions, with slightly higher accuracy. Therefore, this study selects the Smooth L1 Loss function for model training.
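In PyTorch this corresponds to torch.nn.SmoothL1Loss with beta = 1.0, which places the transition between the quadratic and linear regimes at a unit deviation, matching Equation (8); the tensors below are placeholders.

```python
import torch

# Smooth L1 loss (Equation (8)); beta = 1.0 reproduces the unit transition
# point between the quadratic and linear regimes.
criterion = torch.nn.SmoothL1Loss(beta=1.0)

y_pred = torch.randn(4, 1, 256, 256, requires_grad=True)  # placeholder output
y_true = torch.randn(4, 1, 256, 256)                      # placeholder target
loss = criterion(y_pred, y_true)
loss.backward()  # gradients flow back to y_pred (or the model producing it)
```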

4. Experimental Results

4.1. Ablation Study and Feature Fusion Comparison

In this section, a series of ablation experiments are conducted to verify the effectiveness of the proposed model. First, a baseline model composed of R-Group modules is evaluated. Then, the GAM module, the HOA module with R = 1 and R = 3, and the SAFF-WCC module are individually added to the baseline model. Next, combinations of modules are added to the baseline model, including the combination of HOA modules with R = 1 and R = 3, the combination of the GAM module with HOA modules (R = 1 and R = 3), the combination of the SAFF-WCC module with HOA modules (R = 1 and R = 3), and the combination of the GAM module with the SAFF-WCC module. Finally, experiments are conducted on the full proposed model, which integrates the GAM module, HOA modules with R = 1 and R = 3, and the SAFF-WCC module with the baseline model. All ablation experiments maintain identical experimental settings except for the model architecture. The evaluation metrics used include PSNR, SSIM, and MS-SSIM, with their respective formulas shown in Equations (2)–(4). The results are summarized in Table 3.
From Equations (2)–(4), a higher PSNR value indicates smaller deviations from the original data, while SSIM and MS-SSIM values closer to 1 indicate higher similarity. As shown in Table 3, the incorporation of the GAM module, the HOA module, and the SAFF-WCC module each contribute to a notable enhancement in the model’s performance when compared to the baseline model. Among these, the GAM module exhibits slightly superior outcomes compared to the HOA and SAFF-WCC modules, albeit still marginally outperformed by the combined use of both HOA modules, with R = 1 and R = 3. When combining the GAM module with the HOA modules (R = 1 and R = 3), the SAFF-WCC module with the HOA modules (R = 1 and R = 3), and the GAM module with the SAFF-WCC module, the model’s performance undergoes a substantial boost compared to utilizing any single module alone. Based on the results in Table 3, it is evident that the proposed model framework attains the optimal performance.
We conducted a comparison of the performance of the proposed feature fusion module (see Figure 5b) with the Parallel Convolution Structure (PCS, see Figure 8a) in [57] and the Spatio-Temporal Feature Fusion (STFF, see Figure 8b) module in [58]. The PCS module primarily integrates multiple data features along the channel dimension and is specifically designed for feature fusion in the temporal dimension. However, our model’s feature fusion requirements are more focused on the spatial dimension. Although the STFF module aims to perform feature fusion across both spatial and temporal dimensions, it has a more complex network structure compared to the feature fusion module proposed in this paper. Particularly, when using training data with a resolution of 1024 × 1024, incorporating the STFF module into our model would significantly increase the network complexity, thereby greatly extending the model’s training time.
As shown in Table 4, under consistent experimental conditions, we employed PSNR, SSIM, and MS-SSIM as the evaluation metrics for the comparison of the three feature fusion modules. The results reveal that, for the proposed deep learning model, the PCS module—which emphasizes different temporal features—performs marginally inferior to the scenario without feature fusion across all metrics. While the STFF module achieves feature fusion in the spatial dimension and slightly outperforms the proposed method in the SSIM metric, it exhibits slightly poorer performance in both PSNR and MS-SSIM. Furthermore, due to its relatively intricate structure, the STFF module demonstrates lower training efficiency compared to the feature fusion scheme proposed in this study.

4.2. Model Training Results for Different Sea Areas

Taking the training results of the Bohai Sea region as an example, MS-SSIM is used as the evaluation metric for both training and testing accuracy. Based on the dataset allocated for the Bohai Sea in Table 1, the loss values, training accuracy, and test accuracy obtained from the deep learning model are shown in Figure 9. When the scale factor is 2, the loss values for VV and VH polarizations converge to 0.30, with the training accuracy reaching 0.99, and the test accuracy stabilizing at 0.986. When the scale factor is 4, the loss values for VV and VH polarizations converge to 0.82 and 0.57, respectively, with the training accuracy reaching 0.918 and 0.95, and the test accuracy achieving 0.896 and 0.946, respectively.
For the Yellow Sea, East China Sea, Taiwan Strait, and South China Sea, the training loss, training accuracy, and test accuracy under different polarization modes (VV and VH) and scale factors (2 and 4) are summarized in Table 5. From Figure 9 and Table 5, it can be observed that under a scale factor of 2, both VV and VH polarizations exhibit high training and testing accuracy across all five sea areas, generally maintaining values above 0.98. When the scale factor is increased to 4, despite an increase in the training loss, the model’s training and testing accuracy consistently remain at high levels. Specifically, the accuracy for VV polarization is above 0.888, while that for VH polarization is above 0.942. In summary, the proposed deep learning model demonstrates outstanding performance across different sea areas, polarization modes, and scale factors. The model is not only effective for the Bohai Sea region but is also applicable to the other four major sea areas surrounding China, indicating strong generalization capability.

4.3. NRCS Resolution Enhancement

The NRCS resolution enhancement is conducted for marine targets using the proposed deep learning model. As shown in Figure 10, the NRCS of a target in the Bohai Sea on 2 June 2024, at 09:57:16, is presented under different resolutions for both VV and VH polarizations. Specifically, Figure 10a,b show the original VV and VH polarization data provided by Sentinel-1, with a resolution of 20 m × 22 m. Figure 10c,d display the results obtained by applying the proposed model to enhance the resolution of the original data by a scaling factor of 2, achieving a resolution of 10 m × 11 m. Figure 10e,f present the results obtained by further enhancing the resolution of the original data by a scaling factor of 4, achieving a resolution of 5 m × 5.5 m.
By comparing Figure 10c with Figure 10a, and Figure 10d with Figure 10b, it can be observed that enhancing the NRCS resolution by a scaling factor of 2 effectively improves the structural features of the marine target. Furthermore, by comparing Figure 10e with Figure 10c, and Figure 10f with Figure 10d, it can be seen that enhancing the NRCS resolution by a scaling factor of 4 further improves the structural features of the target, although the improvement is limited.
By comparing Figure 11a with Figure 10a, and Figure 11b with Figure 10b, it can be observed that when the resolution of the target scattering data is reduced by a factor of 4, the data volume becomes merely 1/16 of the original. This resolution degradation leads to the loss of numerous critical structural features, consequently impairing the model’s ability to effectively learn these essential characteristics during training. As a result, the trained model shows inferior performance in the 4× resolution enhancement compared to the 2× case.

4.4. High-Resolution Reconstruction of NRCS Under Different Sea Conditions

We matched the NRCS data provided by Sentinel-1 with the wind speed and significant wave height data provided by ECMWF in both temporal and spatial dimensions for the East China Sea region in 2024, as shown in Table 6. The matching information includes the time and latitude/longitude coordinates of the Sentinel-1 data and the ECMWF data that simultaneously satisfy the classification criteria for wind speed and wave height under sea states, ranging from levels 2 to 7.
According to the analysis of the training dataset in the East China Sea, based on the sea state classification standards shown in Table 6, the NRCS data accounts for approximately 4% of the total data under sea state level 2, 57.84% under level 3, 28.17% under level 4, 7.77% under level 5, 1.43% under level 6, and 0.17% under level 7.
As shown in Figure 12, Figure 13, Figure 14 and Figure 15, this study uses scatter plots to illustrate the comparison between the original-resolution NRCS data provided by Sentinel-1 and the NRCS data reconstructed by the model after downsampling under sea states 2~7 in the East China Sea. Both the original data and the predicted results consist of 1024 × 1024 data points, amounting to a total of 1,048,576 data points.
As shown in Figure 12, the scatter plots contrast the predicted and actual NRCS values for VV polarization under a scaling factor of 2. It can be observed that, within sea states ranging from 2 to 7, the majority of the predicted values align closely with the actual values, with only a minor fraction of the data exhibiting notable deviations. When comparing the reconstruction results of the model with the original data under sea states 2~7, it is evident that the reconstruction performance is optimal under sea state 7. However, as the sea state level increases, the deviations for a small portion of the data tend to become larger. As shown in Figure 13, the scatter plots depict the NRCS data for VV polarization with a scaling factor of 4. Notably, the reconstruction performance under sea state 7 is again better than under the other sea states. As sea state levels rise, more data points exhibit significant deviations between the predicted and actual values, with these deviations becoming more pronounced. The reconstruction results with a scaling factor of 4 are slightly inferior to those with a scaling factor of 2.
As shown in Figure 14, the scatter plots show the predicted and actual NRCS values for VH polarization with a scaling factor of 2. It can be observed that under sea states 2~7, the majority of the reconstruction results exhibit small deviations from the original data, with only a small portion of the data showing larger deviations. Comparing the reconstruction results of the model with the original data under sea states 2~7, it is evident that the reconstruction performance is optimal under sea state 5, followed by sea state 6. Additionally, as the sea state level increases, the deviations for a small portion of the data tend to become larger. As shown in Figure 15, the scatter plots illustrate the NRCS data for VH polarization with a scaling factor of 4. It can be observed that among sea states 2~7, the reconstruction performance is significantly better under sea state 5. When compared with the results at a scaling factor of 2, it is evident that the reconstruction performance of the model is noticeably better than with a scaling factor of 4. For the scaling factor of 4, as the sea state level increases, the number of data points with larger deviations between the predicted and actual values increases, and the magnitude of these deviations also becomes larger.
When comparing Figure 12 and Figure 14, it can be seen that the reconstruction results for the VH-polarized NRCS data are significantly better than those for the VV-polarized data when the scaling factor is 2. However, a slight increase in deviation is observed for a minor fraction of the VH-polarized data compared to VV polarization. Comparing Figure 13 with Figure 15, the same scenario can also be observed when the scaling factor is 4.
For the high-resolution reconstruction of the NRCS under different sea conditions, four evaluation metrics were selected in this study: PSNR, SSIM, MS-SSIM, and MAPE. The corresponding formulas are shown in Equations (2)–(5), respectively. According to Equation (5), the closer the MAPE is to 0, the smaller the data deviation. Table 7 presents the reconstruction results of the NRCS data for VV and VH polarizations under sea conditions of levels 2 to 7, with downsampling factors of 2 and 4, respectively.
From Table 7, it can be seen that the high-resolution reconstruction of the NRCS data under sea states 2~7 achieves better performance when the scaling factor is 2, compared to that when the scaling factor is 4 for both the VV and VH-polarized data. For VV polarization, the reconstruction achieves the best results under sea state level 7, with a PSNR of 42.412, SSIM of 0.9744, MS-SSIM of 0.9951, and MAPE of 0.0155 for a scaling factor of 2, and a PSNR of 35.993, SSIM of 0.8935, MS-SSIM of 0.9361, and MAPE of 0.0326 for a scaling factor of 4. For VH polarization, when taking PSNR and MAPE as assessment metrics, the best reconstruction performance is observed under sea state level 3, with values of 42.470 and 0.0163 for a scaling factor of 2, and 36.264 and 0.0365 for a scaling factor of 4. But for the SSIM and MS-SSIM metrics, the best reconstruction performance is observed under sea state level 5, with values of 0.9833 and 0.9965 for a scaling factor of 2, and 0.9348 and 0.9645 for a scaling factor of 4.
When the scaling factor is 2, the PSNR and MAPE metrics indicate that the high-resolution reconstruction of the VH-polarized NRCS data outperforms that of the VV-polarized data under sea states 2 to 4. However, the situation reverses under sea states 5 to 7. When the scaling factor is 4, significant differences in evaluation metrics are observed between the VV-polarized and VH-polarized data across different sea states.
By considering the proportion of the data distribution across the various sea states in the training set, as well as the comprehensive results from scatter plots and various evaluation metrics for sea states ranging from 2 to 7, it is evident that the model proposed in this paper demonstrates strong generalization capabilities. Even though data from high sea states constitute a relatively small proportion of the training set, the model is still capable of reconstructing the NRCS data with high accuracy. However, it is noteworthy that within the NRCS reconstruction results, a minor fraction of the predicted values are significantly lower than the actual values, and this deviation gradually increases with higher sea states. This phenomenon can be primarily attributed to the complex scattering characteristics of the sea surface under high sea states, such as breaking waves and foam, which significantly enhance the roughness and complexity of the sea surface. During downsampling processing, some of this detailed sea surface structure information may be partially lost. Furthermore, due to the limited representation of high sea state data in the training set, the model may not have adequately learned these complex features, leading to some predicted NRCS values being smaller than the actual observations during the reconstruction process.

4.5. Comparison of NRCS Reconstruction Using Different Methods

As shown in Figure 16, the alternating downsampling technique effectively reduces data resolution, significantly decreasing data size, which can alleviate the storage and transmission pressures of SAR data. However, to ensure data quality, effective reconstruction methods are required. These reconstruction techniques, based on advanced algorithms, can extract key information from the downsampled data and restore the details of the original data.
In this section, we compare the method proposed in this paper with traditional Compressed Sensing (CS) methods. Common reconstruction algorithms for the CS recovery problem include the Generalized Back Projection (GBP) algorithm, the Iterative Reweighted Least Squares (IRLS) algorithm, the Spatial Pyramid Matching (SPM) algorithm, and the Orthogonal Matching Pursuit (OMP) algorithm. To evaluate the performance of the proposed method, we compare it with these four algorithms in terms of reconstruction results.
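As one example of the baselines in this comparison, the sketch below runs OMP on a synthetic sparse-recovery problem using scikit-learn; the problem sizes are illustrative and unrelated to the 1024 × 1024 NRCS experiment itself.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

# Recover a k-sparse vector x from compressed measurements y = Phi @ x.
rng = np.random.default_rng(0)
n, m, k = 256, 96, 8                    # signal length, measurements, sparsity
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random sensing matrix
y = phi @ x

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k)
omp.fit(phi, y)
print("support recovered:",
      set(np.flatnonzero(omp.coef_)) == set(np.flatnonzero(x)))
```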
As shown in Figure 17, the NRCS data for sea state 3 in the East China Sea are presented, with detailed information provided in Table 6. Figure 17a corresponds to the VV-polarized data, and Figure 17b corresponds to the VH-polarized data. The data size is 1024 × 1024.
As shown in Figure 18, the reconstruction results of the VV-polarized NRCS data using different methods are presented. It can be observed that the proposed method achieves reconstruction results that are closer to the original data compared to the other methods when the scaling factor is 2.
Figure 19 presents the reconstruction results of the VH-polarized NRCS data using different methods. It can be observed that, when the scaling factor is 2, the proposed method achieves reconstruction results that are closer to the original data compared to the other methods.
As shown in Table 8, we evaluated the reconstruction results of the different methods using four evaluation metrics: PSNR, SSIM, MS-SSIM, and MAPE. The experimental data correspond to the NRCS data from the East China Sea under sea condition level 3, with a data size of 1024 × 1024, as detailed in Table 6. Here, “CR” represents the compression ratio, which is the ratio of the compressed data size to the original data size. From Table 8, for both the VV and VH polarization data, the reconstruction results for the proposed model with a scaling factor of 2 are significantly better than those of the other methods. Furthermore, for any given method, whether traditional or proposed, the reconstruction results for the VH polarization NRCS data generally outperform those for the VV polarization data.

5. Discussion

From the analysis in Section 4.3, it can be observed that for the NRCS data on surface targets, the proposed model significantly enhances the fine structural features of the targets when the resolution is increased by a factor of 2 for both VV and VH polarization. The enhancement effect diminishes when the resolution is increased by a factor of 4. This may be attributed to the model’s inability to fully capture feature information when processing downsampled data with a scaling factor of 4. The scatter plots presented in Section 4.4 indicate that within the sea state range from 2 to 7, the NRCS data can achieve good spatial resolution reconstruction. However, as the sea state grade increases, a minor fraction of the predicted values is significantly lower than the actual values, and this deviation gradually increases with a higher sea state. This phenomenon can be primarily attributed to the complex scattering characteristics of the sea surface under high sea states, such as breaking waves and foam, which significantly enhance the roughness and complexity of the sea surface. During downsampling processing, some of this detailed sea surface structure information may be partially lost. Additionally, due to the relatively small proportion of high sea state data in the training set, the model may not have adequately learned these complex features, leading to some predicted NRCS values being smaller than the actual observations during the reconstruction process. Meanwhile, the comparative results in Section 4.5 demonstrate that the proposed model outperforms traditional compression and reconstruction methods in terms of both the compression ratio and reconstruction performance under a scaling factor of 2.
Therefore, we will focus on the following two major optimizations and improvements in further work:
  • Balancing the proportion of NRCS data across sea states: When constructing the training database, more attention will be paid to balancing the proportion of NRCS data under different sea states. This will enable the model to extract features more evenly across all sea states, thereby improving NRCS reconstruction performance under various sea conditions.
  • Optimizing the Network Architecture: Efforts will be made to refine the network architecture to improve reconstruction accuracy and overall performance. For instance, the electromagnetic scattering mechanism can be incorporated into the DL model by establishing a high-resolution NRCS model of the composite scattering from marine targets and the sea surface (especially under high sea states) to reveal the factors driving NRCS differences at various resolutions under different sea states. This will strengthen the model’s ability to extract target features and fine sea-wave structure, further enhancing structural characteristics.

6. Conclusions

This paper proposes a deep learning model for enhancing and reconstructing the spatial resolution of Sentinel-1 backscattering data from the Bohai Sea, Yellow Sea, East China Sea, Taiwan Strait, and South China Sea. The SNAP GPT tool is employed to preprocess the Sentinel-1 data, and the decibel-processed data undergo gridding segmentation to extract effective data for specific sea areas. The extracted data is then downsampled according to different scale factors and binarized to obtain edge and non-edge data, which serve as inputs for the proposed deep learning model. The deep learning model innovatively incorporates a self-attention feature fusion based on the WCC (SAFF-WCC) module, combined with a Global Attention Mechanism (GAM) and a High-Order Attention (HOA) module, which significantly improves the efficiency of multi-feature integration. To evaluate the effectiveness of the proposed model, four key metrics were employed: Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index (SSIM), Multi-Scale Structural Similarity Index (MS-SSIM), and Mean Absolute Percentage Error (MAPE). Specifically, we conducted experiments on the scattering data from a marine target in the Bohai Sea to enhance its spatial resolution by factors of 2 and 4. Additionally, we reconstructed the NRCS data for the East China Sea under sea states ranging from levels 2 to 7 and used the NRCS data under sea state level 3 as an example for comparative analysis with traditional reconstruction algorithms. The results demonstrate that for the scattering data on ocean surface targets, the proposed model effectively enhances the fine structural features of the targets when the resolution is doubled. For high-resolution NRCS reconstruction under different sea states, the proposed method outperforms the traditional approaches across all four evaluation metrics under a scaling factor of 2. It should be noted that China’s coastal areas exhibit three distinct climate types: temperate (covering the Bohai Sea, the Yellow Sea, and northern portions of the East China Sea), subtropical (the southern East China Sea), and tropical (the South China Sea). Since our dataset exclusively focuses on these mid- to low-latitude regions, polar zones are not included, and any generalization to polar regions requires further validation.
In future work, we aim to further optimize the model by integrating the scattering model, adjusting the network architecture, and balancing the proportion of the NRCS data for different sea states in the training dataset. These efforts will further improve the model’s ability to enhance the structural features of marine targets and increase the reconstruction accuracy of the NRCS data under high sea state conditions.

Author Contributions

Conceptualization, X.Z., X.S. and Z.W.; methodology, X.S., Y.D. and X.Z.; formal analysis, Y.D. and X.Z.; funding acquisition, X.Z.; investigation, Y.D.; writing—original draft, Y.D. and X.Z.; writing—review and editing, X.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China, grant number 62101445, and the Natural Science Foundation of Shaanxi Province, China, grant number 2020JQ-843.

Data Availability Statement

No new data were created or analyzed in this study.

Acknowledgments

We gratefully acknowledge the European Space Agency for providing the Sentinel-1 satellite SAR data used in this paper. The data can be accessed via the following URL: https://search.asf.alaska.edu/ (accessed on 10 May 2024). Additionally, we extend our special thanks to the European Space Agency for making the SNAP remote sensing image processing software available, which significantly facilitated our data processing efforts.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Zheng, H.; Zhang, J.; Khenchaf, A.; Li, X.-M. Study on Non-Bragg Microwave Backscattering from Sea Surface Covered with and without Oil Film at Moderate Incidence Angles. Remote Sens. 2021, 13, 2443.
  2. Nekrasov, A.; Dell’Acqua, F. Airborne Weather Radar: A Theoretical Approach for Water-Surface Backscattering and Wind Measurements. IEEE Geosci. Remote Sens. Mag. 2016, 4, 38–50.
  3. Song, T.; Yan, Q.; Fan, C.; Meng, J.; Wu, Y.; Zhang, J. Significant Wave Height Retrieval Using XGBoost from Polarimetric Gaofen-3 SAR and Feature Importance Analysis. Remote Sens. 2023, 15, 149.
  4. Quan, M.; Zhang, J.; Zhang, R. A Novel Rain Identification and Rain Intensity Classification Method for the CFOSAT Scatterometer. Remote Sens. 2024, 16, 887.
  5. Sun, S.-K.; He, Z.; Fan, Z.-H.; Ding, D.-Z. SAR Image Target Recognition Using Diffusion Model and Scattering Information. IEEE Geosci. Remote Sens. Lett. 2024, 21, 4017505.
  6. Sun, L.; Hu, C.; Ding, Z. Analysis of the Impact of Measured UWB RCS on High-Resolution Imaging in Bistatic SAR. In Proceedings of the 2011 IEEE CIE International Conference on Radar, Chengdu, China, 6–9 October 2011; pp. 1303–1306.
  7. Aires, F.; Boucher, E.; Pellet, V. Convolutional neural networks for satellite remote sensing at coarse resolution. Application for the SST retrieval using IASI. Remote Sens. Environ. 2021, 263, 112553.
  8. Maeda, T. Spatial Resolution Enhancement Algorithm Based on the Backus–Gilbert Method and Its Application to GCOM-W AMSR2 Data. IEEE Trans. Geosci. Remote Sens. 2020, 58, 2809–2816.
  9. Lindsley, R.D.; Long, D.G. Enhanced-Resolution Reconstruction of ASCAT Backscatter Measurements. IEEE Trans. Geosci. Remote Sens. 2016, 54, 2589–2601.
  10. Long, D.G.; Daum, D.L. Spatial Resolution Enhancement of SSM/I Data. IEEE Trans. Geosci. Remote Sens. 1998, 36, 407–417.
  11. Long, D.G. Spatial Resolution Enhancement of Cassini Titan Radar Mapper Data. In Proceedings of the 2009 IEEE Radar Conference, Pasadena, CA, USA, 4–8 May 2009; pp. 1–6.
  12. Long, D.G.; Brodzik, M.J.; Hardman, M.A. Enhanced-Resolution SMAP Brightness Temperature Image Products. IEEE Trans. Geosci. Remote Sens. 2019, 57, 4151–4163.
  13. Long, D.G.; Miller, J.Z. Validation of the Effective Resolution of SMAP Enhanced Resolution Backscatter Products. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2023, 16, 3390–3404.
  14. Santi, E. An Application of the SFIM Technique to Enhance the Spatial Resolution of Spaceborne Microwave Radiometers. Int. J. Remote Sens. 2010, 31, 2419–2428.
  15. Miller, J.Z.; Long, D.G.; Brodzik, M.J.; Hardman, M.A. SMAP Enhanced-Resolution Scatterometer and Synthetic Aperture Radar Image Products. In Proceedings of the IGARSS 2022—2022 IEEE International Geoscience and Remote Sensing Symposium, Kuala Lumpur, Malaysia, 17–22 July 2022; pp. 4300–4303.
  16. Liu, L.; Dong, X.; Lin, W.; Zhu, J.; Zhu, D. Analysis of Backus-Gilbert Approach on Resolution Enhancement of Dual-Frequency Polarized Scatterometer. In Proceedings of the 2017 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Fort Worth, TX, USA, 23–28 July 2017; pp. 5038–5041.
  17. Wang, L.; Zheng, M.; Du, W.; Wei, M.; Li, L. Super-Resolution SAR Image Reconstruction via Generative Adversarial Network. In Proceedings of the 2018 12th International Symposium on Antennas, Propagation and EM Theory (ISAPE), Hangzhou, China, 3–6 December 2018; pp. 1–4.
  18. Li, Y.; Zhou, L.; Xu, F.; Chen, S. OGSRN: Optical-Guided Super-Resolution Network for SAR Image. Chin. J. Aeronaut. 2022, 35, 204–219.
  19. Bu, L.; Zhang, J.; Zhang, Z.; Yang, Y.; Deng, M. Deep Learning for Integrated Speckle Reduction and Super-Resolution in Multi-Temporal SAR. Remote Sens. 2024, 16, 18.
  20. Kong, Y.; Liu, S. DMSC-GAN: A c-GAN-Based Framework for Super-Resolution Reconstruction of SAR Images. Remote Sens. 2024, 16, 50.
  21. Yang, D.; Liao, G.; Zhu, S.; Yang, X.; Zhang, X. SAR Imaging With Undersampled Data via Matrix Completion. IEEE Geosci. Remote Sens. Lett. 2014, 11, 1539–1543.
  22. Lee, S.; Jung, Y.; Lee, M.; Lee, W. Compressive Sensing-Based SAR Image Reconstruction from Sparse Radar Sensor Data Acquisition in Automotive FMCW Radar System. Sensors 2021, 21, 7283.
  23. El-Ashkar, A.M.; Shendy, H.; El-Shafai, W.; Taha, T.E.S.; El-Fishawy, A.S.; Abd El-Nabi, M.; Abd El-Samie, F.E. Compressed Sensing for SAR Image Reconstruction. In Proceedings of the 2021 International Conference on Electronic Engineering (ICEEM), Menouf, Egypt, 3–4 July 2021; pp. 1–6.
  24. Rouabah, S.; Ouarzeddine, M.; Souissi, B. SAR Images Compressed Sensing Based on Recovery Algorithms. In Proceedings of the 2018 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Valencia, Spain, 22–27 July 2018; pp. 8897–8900.
  25. Fu, S.; Zhang, H.; Feng, H.; Zhuo, L. Content-Adaptive Residual Learning and Context-Aware Entropy Model for SAR Image Compression. IEEE Geosci. Remote Sens. Lett. 2024, 21, 4018605.
  26. Di, Z.; Chen, X.; Wu, Q.; Shi, J.; Feng, Q.; Fan, Y. Learned Compression Framework With Pyramidal Features and Quality Enhancement for SAR Images. IEEE Geosci. Remote Sens. Lett. 2022, 19, 4505605.
  27. Wang, L.; Ma, L.; Wu, T.; Wu, J.; Luo, X. Joint Prediction of Sea Clutter Amplitude Distribution Based on a One-Dimensional Convolutional Neural Network with Multi-Task Learning. Remote Sens. 2024, 16, 3891.
  28. Wang, Y.; Yin, B.; Zhang, J.; Zhang, Y. Effective Sea Clutter Region Extraction Based on Improved YOLOv4 Algorithm for Shore-Based UHF-Band Radar. In Proceedings of the 2022 IEEE 6th Advanced Information Technology, Electronic and Automation Control Conference (IAEAC), Beijing, China, 3–5 October 2022; pp. 1566–1573.
  29. Ji, H.; Guo, L.; Zhang, J.; Wei, Y.; Guo, X.; Zhang, Y.; Nie, T.; Feng, J. IPILT–OHPL: An Over-the-Horizon Propagation Loss Prediction Model Established by Incorporating Prior Information Into the LSTM–Transformer Structure. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2024, 17, 10067–10082.
  30. He, H.; Shi, B.; Zhu, Y.; Feng, L.; Ge, C.; Tan, Q.; Peng, Y.; Liu, Y.; Ling, Z.; Li, S. Numerical Weather Prediction of Sea Surface Temperature in South China Sea Using Attention-Based Context Fusion Network. Remote Sens. 2024, 16, 3793.
  31. Wang, Z.; Wang, G.; Guo, X.; Hu, J.; Dai, M. Reconstruction of High-Resolution Sea Surface Salinity over 2003–2020 in the South China Sea Using the Machine Learning Algorithm LightGBM Model. Remote Sens. 2022, 14, 6147.
  31. Wang, Z.; Wang, G.; Guo, X.; Hu, J.; Dai, M. Reconstruction of High-Resolution Sea Surface Salinity over 2003–2020 in the South China Sea Using the Machine Learning Algorithm LightGBM Model. Remote Sens. 2022, 14, 6147. [Google Scholar] [CrossRef]
  32. Wang, Y.; Karimi, H.A.; Jia, X. Reconstruction of Continuous High-Resolution Sea Surface Temperature Data Using Time-Aware Implicit Neural Representation. Remote Sens. 2023, 15, 5646. [Google Scholar] [CrossRef]
  33. Li, W.; Zhu, X.; Gong, S. Harmonious Attention Network for Person Re-Identification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–22 June 2018; pp. 2285–2294. [Google Scholar]
  34. Hu, J.; Shen, L.; Sun, G. Squeeze-and-Excitation Networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–22 June 2018; pp. 7132–7141. [Google Scholar]
  35. Woo, S.; Park, J.; Lee, J.-Y.; Kweon, I.S. CBAM: Convolutional Block Attention Module. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 3–19. [Google Scholar]
  36. Park, J.; Woo, S.; Lee, J.-Y.; Kweon, I.S. BAM: Bottleneck Attention Module. arXiv 2018, arXiv:1807.06514. [Google Scholar]
  37. Liu, Y.; Shao, Z.; Hoffmann, N. Global Attention Mechanism: Retain Information to Enhance Channel-Spatial Interactions. arXiv 2021, arXiv:2112.05561. [Google Scholar]
  38. Chen, B.; Deng, W.; Hu, J. Mixed High-Order Attention Network for Person Re-Identification. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Republic of Korea, 27 October–2 November 2019; pp. 371–381. [Google Scholar]
  39. Dabboor, M.; Olthof, I.; Mahdianpari, M.; Mohammadimanesh, F.; Shokr, M.; Brisco, B.; Homayouni, S. The RADARSAT constellation mission core applications: First results. Remote Sens. 2022, 14, 301. [Google Scholar] [CrossRef]
  40. Guo, H.; Zhao, X.; Liu, X.; Yu, W. Polarimetric Channel Imbalance Evaluation of GF-3 SAR Products by Bragg-Like Targets. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2024, 17, 2115–2132. [Google Scholar] [CrossRef]
  41. Geudtner, D.; Torres, R.; Snoeij, P.; Davidson, M.; Rommen, B. Sentinel-1 System Capabilities and Applications. In Proceedings of the 2014 IEEE Geoscience and Remote Sensing Symposium, Quebec City, QC, Canada, 13–18 July 2014; pp. 1457–1460. [Google Scholar]
  42. Miranda, N.; Piantanida, R.; Recchia, A.; Franceschi, N.; Schmidt, K.; Hajduch, G.; Vincent, P. Sentinel-1 Mission Performance and Evolution of Data Products. In Proceedings of the 2021 IEEE International Geoscience and Remote Sensing Symposium IGARSS, Brussels, Belgium, 11–16 July 2021; pp. 1051–1052. [Google Scholar]
  43. Miranda, N.; Rosich, B.; Putignano, C. The Sentinel-1 Data Processor and Operational Products. In Proceedings of the 2012 IEEE International Geoscience and Remote Sensing Symposium, Munich, Germany, 22–27 July 2012. [Google Scholar]
  44. Filipponi, F. Sentinel-1 GRD Preprocessing Workflow. Proceedings 2019, 18, 11. [Google Scholar] [CrossRef]
  45. Linghu, L.; Wu, J.; Jeon, G.; Wu, Z.-S.; Shi, M. Sea Clutter Feature Prediction and Parameters Inversion Using Deep Learning Model. IEEE Trans. Ind. Informat. 2023, 19, 8374–8383. [Google Scholar] [CrossRef]
  46. Zhang, Y.; Li, K.; Li, K.; Wang, L.; Zhong, B.; Fu, Y. Image Super-Resolution Using Very Deep Residual Channel Attention Networks. In Proceedings of the European Conference on Computer Vision, Munich, Germany, 8–14 September 2018; pp. 286–301. [Google Scholar]
  47. Hu, Y.; Ma, L.; Zhang, Y.; Wu, Z.; Wu, J.; Zhang, J.; Zhang, X. Research on High-Resolution Reconstruction of Marine Environmental Parameters Using Deep Learning Model. Remote Sens. 2023, 15, 3419. [Google Scholar] [CrossRef]
  48. Kingma, D.P.; Ba, J. Adam: A Method for Stochastic Optimization. In Proceedings of the 3rd International Conference on Learning Representations, San Diego, CA, USA, 7–9 May 2015. [Google Scholar]
  49. Wang, X.; Aitchison, L. How to set AdamW’s weight decay as you scale model and dataset size. arXiv 2024, arXiv:2405.13698. [Google Scholar]
  50. Erfurt, J.; Helmrich, C.R.; Bosse, S.; Schwarz, H.; Marpe, D.; Wiegand, T. A Study of the Perceptually Weighted Peak Signal-ToNoise Ratio (WPSNR) for Image Compression. In Proceedings of the 2019 IEEE International Conference on Image Processing (ICIP), Taipei, Taiwan, 22–25 September 2019; pp. 2339–2343. [Google Scholar]
  51. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image Quality Assessment: From Error Visibility to Structural Similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef] [PubMed]
  52. Nasr, M.A.S.; AlRahmawy, M.F.; Tolba, A.S. Multi-scale structural similarity index for motion detection. J. King Saud Univ. Comput. Inf. Sci. 2017, 29, 399–409. [Google Scholar] [CrossRef]
  53. Kim, S.; Kim, H. A New Metric of Absolute Percentage Error for Intermittent Demand Forecasts. Int. J. Forecast. 2016, 32, 669–679. [Google Scholar] [CrossRef]
  54. Picking Loss Functions—A Comparison Between MSE, Cross Entropy, and Hinge Loss. Available online: https://rohanvarma.me/Loss-Functions/ (accessed on 14 May 2023).
  55. Differences Between L1 and L2 as Loss Function and Regularization. Available online: http://www.chioka.in/differences-between-l1-and-l2-as-loss-function-and-regularization/ (accessed on 14 May 2023).
  56. Girshick, R. Fast R-CNN. In Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV), Santiago, Chile, 7–13 December 2015; pp. 1440–1448. [Google Scholar]
  57. Guo, Q.; Zhang, J.; Zhu, S.; Zhong, C.; Zhang, Y. Deep Multiscale Siamese Network with Parallel Convolutional Structure and Self-Attention for Change Detection. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5406512. [Google Scholar] [CrossRef]
  58. Wei, H.; Wang, N.; Liu, Y.; Ma, P.; Pang, D.; Sui, X.; Chen, Q. Spatio-Temporal Feature Fusion and Guide Aggregation Network for Remote Sensing Change Detection. IEEE Trans. Geosci. Remote Sens. 2024, 62, 5642216. [Google Scholar] [CrossRef]
Figure 1. The workflow in SNAP for Sentinel-1 NRCS data processing.
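The Figure 1 workflow can be scripted rather than run interactively. Below is a minimal sketch, assuming ESA SNAP's command-line Graph Processing Tool (gpt) and a graph file implementing the standard Sentinel-1 GRD chain described in [44] (orbit file application, thermal/border noise removal, sigma0 calibration, speckle filtering, terrain correction, dB conversion); the graph file name, parameter names, and paths are placeholders, not values from this paper.

```python
# Hypothetical driver for ESA SNAP's command-line Graph Processing Tool (gpt).
# s1_grd_preprocess.xml is assumed to define the GRD operator chain of [44];
# all file names below are illustrative placeholders.
import subprocess

GRAPH = "s1_grd_preprocess.xml"  # SNAP graph defining the operator chain

def preprocess(input_product: str, output_path: str) -> None:
    """Run the SNAP graph on one Sentinel-1 GRD product."""
    subprocess.run(
        ["gpt", GRAPH, f"-Pinput={input_product}", f"-Poutput={output_path}"],
        check=True,  # raise if SNAP reports a processing error
    )

preprocess("S1A_IW_GRDH_example.zip", "s1_sigma0_db.tif")
```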
Figure 2. Gridding of data in the Bohai Sea.
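The gridding step of Figure 2 amounts to bin-averaging irregularly spaced NRCS samples onto a regular latitude/longitude raster. A minimal numpy sketch follows, assuming mean aggregation per cell and a hypothetical 0.01° cell size over the Bohai Sea bounding box of Table 1; the paper's actual grid spacing is defined in the main text.

```python
# Bin-average NRCS samples onto a regular lat/lon grid (mean per cell).
import numpy as np

def grid_nrcs(lon, lat, nrcs, lon_edges, lat_edges):
    """Average NRCS samples falling into each regular lat/lon cell."""
    sums, _, _ = np.histogram2d(lon, lat, bins=[lon_edges, lat_edges], weights=nrcs)
    counts, _, _ = np.histogram2d(lon, lat, bins=[lon_edges, lat_edges])
    with np.errstate(divide="ignore", invalid="ignore"):
        return sums / counts  # NaN where a cell received no samples

# Bohai Sea box from Table 1, with an assumed 0.01 degree cell.
lon_edges = np.arange(119.0, 125.0 + 0.01, 0.01)
lat_edges = np.arange(34.0, 41.0 + 0.01, 0.01)
```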
Figure 3. Scattering data from the Yellow Sea at 09:47:42 on 14 January 2021. (a) The scattering data image. (b) The scattering data image of the remaining ocean section.
Figure 4. Scattering data processing workflow.
Figure 5. Diagram of the proposed network model. (a) An overview of the proposed network. (b) The residual group and feature fusion module. (c) The module descriptions.
Figure 6. Comparison of convergence performance for different optimization algorithms. (a) Train loss. (b) Train accuracy.
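Adam [48] and AdamW [49] are the optimizers cited for the comparison in Figure 6. The sketch below shows how such a comparison can be wired up in PyTorch; the SGD baseline, the function name, and all hyperparameters are illustrative assumptions rather than the paper's settings.

```python
# Construct candidate optimizers for a convergence comparison (cf. Figure 6).
import torch

def make_optimizer(model: torch.nn.Module, name: str) -> torch.optim.Optimizer:
    if name == "adam":
        return torch.optim.Adam(model.parameters(), lr=1e-4)
    if name == "adamw":
        return torch.optim.AdamW(model.parameters(), lr=1e-4, weight_decay=1e-2)
    if name == "sgd":  # a common extra baseline; not confirmed by the paper
        return torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
    raise ValueError(f"unknown optimizer: {name}")
```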
Figure 7. A comparison chart for the loss functions.
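The losses compared in Figure 7 are presumably, per the cited sources, the L2 (MSE) and L1 losses [54,55] and the smooth L1 loss of Fast R-CNN [56]. Their standard per-pixel forms, with $x$ denoting the residual between reconstructed and reference NRCS, are:

$$
L_2(x) = x^2, \qquad L_1(x) = |x|, \qquad
\mathrm{smooth}_{L_1}(x) =
\begin{cases}
0.5\,x^2, & |x| < 1, \\
|x| - 0.5, & \text{otherwise}.
\end{cases}
$$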
Figure 8. Loss comparison charts for the feature fusion methods. (a) PCS [57]. (b) STFF [58].
Figure 9. The loss values, training accuracy, and test accuracy. (a) The loss values. (b) The training accuracy. (c) The test accuracy.
Figure 10. NRCS resolution enhancement for a target in the Bohai Sea. (a) 20 m × 22 m (VV). (b) 20 m × 22 m (VH). (c) 10 m × 11 m (VV). (d) 10 m × 11 m (VH). (e) 5 m × 5.5 m (VV). (f) 5 m × 5.5 m (VH).
Figure 11. NRCS resolution for a target in the Bohai Sea. (a) 80 m × 88 m (VV). (b) 80 m × 88 m (VH).
Figure 12. Scatter plots of high-resolution NRCS reconstruction for VV polarization under sea states 2~7 with a scaling factor of 2. (a) Sea state 2. (b) Sea state 3. (c) Sea state 4. (d) Sea state 5. (e) Sea state 6. (f) Sea state 7.
Figure 13. Scatter plots of high-resolution NRCS reconstruction for VV polarization under sea states 2~7 with a scaling factor of 4. (a) Sea state 2. (b) Sea state 3. (c) Sea state 4. (d) Sea state 5. (e) Sea state 6. (f) Sea state 7.
Figure 14. Scatter plots of high-resolution NRCS reconstruction for VH polarization under sea states 2~7 with a scaling factor of 2. (a) Sea state 2. (b) Sea state 3. (c) Sea state 4. (d) Sea state 5. (e) Sea state 6. (f) Sea state 7.
Figure 15. Scatter plots of high-resolution NRCS reconstruction for VH polarization under sea states 2~7 with a scaling factor of 4. (a) Sea state 2. (b) Sea state 3. (c) Sea state 4. (d) Sea state 5. (e) Sea state 6. (f) Sea state 7.
Figure 16. Illustration of data downsampling and reconstruction.
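The downsampling step pictured in Figure 16 can be illustrated as block averaging, which produces the low-resolution input paired with each high-resolution tile at scale factors 2 and 4. A minimal sketch follows, assuming mean pooling; the paper's exact downsampling kernel is described in the main text.

```python
# Form LR inputs from an HR NRCS tile by averaging non-overlapping blocks.
import numpy as np

def downsample(hr: np.ndarray, scale: int) -> np.ndarray:
    """Average non-overlapping scale x scale blocks of an HR tile."""
    h, w = hr.shape
    h, w = h - h % scale, w - w % scale            # crop to a multiple of scale
    blocks = hr[:h, :w].reshape(h // scale, scale, w // scale, scale)
    return blocks.mean(axis=(1, 3))                # one value per block

hr_tile = np.random.rand(128, 128).astype(np.float32)  # stand-in for an NRCS tile
lr_x2, lr_x4 = downsample(hr_tile, 2), downsample(hr_tile, 4)
```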
Figure 17. NRCS data for sea state 3 in the East China Sea provided by Sentinel-1. (a) VV Polarization. (b) VH Polarization.
Figure 18. Reconstruction results of the VV-polarized NRCS data for sea state 3 in the East China Sea using different methods. (a) GBP algorithm. (b) IRLSs algorithm. (c) OMP algorithm. (d) SPM algorithm. (e) Proposed method with a scaling factor of 2. (f) Proposed method with a scaling factor of 4.
Figure 19. Reconstruction results of the VH-polarized NRCS data for sea state 3 in the East China Sea using different methods. (a) GBP algorithm. (b) IRLSs algorithm. (c) OMP algorithm. (d) SPM algorithm. (e) Proposed method with a scaling factor of 2. (f) Proposed method with a scaling factor of 4.
Table 1. Information on the different marine areas and the division of scattering data.

| Sea Area | Longitude (°) | Latitude (°) | Depth (m) | Feature | Polarization | Training Set | Validation Set |
|---|---|---|---|---|---|---|---|
| Bohai Sea | 119~125E | 34~41N | 18 | Near closed | VV | 26,533 | 6634 |
| Bohai Sea | 119~125E | 34~41N | 18 | Near closed | VH | 18,717 | 4680 |
| Yellow Sea | 119~125E | 31~37N | 44 | Semi-closed | VV | 50,332 | 12,584 |
| Yellow Sea | 119~125E | 31~37N | 44 | Semi-closed | VH | 50,237 | 12,560 |
| East China Sea | 121~125E | 29~31N | 370 | Marginal sea | VV | 21,062 | 5266 |
| East China Sea | 119~125E | 25~29N | 370 | Marginal sea | VH | 21,171 | 5293 |
| Taiwan Strait | 119~121E | 24~25N | 60 | Narrow strait | VV | 17,085 | 4272 |
| Taiwan Strait | 117~120E | 22~24N | 60 | Narrow strait | VH | 17,085 | 4272 |
| South China Sea | 106~125E | 5~21N | 1212 | Open sea | VV | 21,312 | 5328 |
| South China Sea | 106~125E | 5~21N | 1212 | Open sea | VH | 21,312 | 5328 |
Table 2. Training time and accuracy under different datasets.

| Dataset Size | Training Time | PSNR (dB) | SSIM | MS-SSIM |
|---|---|---|---|---|
| 1000 | 48 min 16.4 s | 30.677 | 0.7689 | 0.8174 |
| 5000 | 221 min 29.0 s | 39.116 | 0.9241 | 0.9818 |
| 10,000 | 448 min 31.7 s | 39.904 | 0.9311 | 0.9872 |
| 15,000 | 681 min 17.6 s | 40.181 | 0.9359 | 0.9875 |
| 20,000 | 918 min 47.4 s | 40.240 | 0.9363 | 0.9877 |
Table 3. Comparison experiments for different modules.

| Model | PSNR (dB) | SSIM | MS-SSIM |
|---|---|---|---|
| Baseline model | 38.0384 | 0.9125 | 0.9639 |
| Model with GAM module | 39.1022 | 0.9224 | 0.9719 |
| Model with HOA module (R = 1) | 38.3183 | 0.9127 | 0.9703 |
| Model with HOA module (R = 3) | 38.5439 | 0.9149 | 0.9734 |
| Model with SAFF-WCC | 38.1217 | 0.9172 | 0.9688 |
| Model with HOA modules (R = 1 and 3) | 39.0549 | 0.9305 | 0.9765 |
| Model with GAM and HOA modules (R = 1 and 3) | 39.425 | 0.9305 | 0.9811 |
| Model with SAFF-WCC and HOA modules (R = 1 and 3) | 39.306 | 0.9280 | 0.9830 |
| Model with GAM and SAFF-WCC | 39.7346 | 0.9327 | 0.9841 |
| Our model (GAM, HOA, and SAFF-WCC) | 40.5406 | 0.9387 | 0.9886 |
Table 4. Feature fusion comparison.

| Feature Fusion Method | PSNR (dB) | SSIM | MS-SSIM |
|---|---|---|---|
| Non-Feature Fusion | 39.425 | 0.9305 | 0.9811 |
| PCS | 38.665 | 0.9151 | 0.9803 |
| STFF | 40.284 | 0.9409 | 0.9868 |
| SAFF-WCC | 40.5406 | 0.9387 | 0.9886 |
Table 5. The training results for the other four sea areas.

| Sea Area | Polarization | Scale Factor | Train Loss | Train Acc | Test Acc |
|---|---|---|---|---|---|
| Yellow Sea | VV | ×2 | 0.40 | 0.986 | 0.986 |
| Yellow Sea | VV | ×4 | 0.90 | 0.892 | 0.892 |
| Yellow Sea | VH | ×2 | 0.36 | 0.981 | 0.982 |
| Yellow Sea | VH | ×4 | 0.59 | 0.944 | 0.942 |
| East China Sea | VV | ×2 | 0.39 | 0.984 | 0.983 |
| East China Sea | VV | ×4 | 0.85 | 0.902 | 0.888 |
| East China Sea | VH | ×2 | 0.33 | 0.984 | 0.984 |
| East China Sea | VH | ×4 | 0.55 | 0.952 | 0.952 |
| Taiwan Strait | VV | ×2 | 0.34 | 0.989 | 0.984 |
| Taiwan Strait | VV | ×4 | 0.96 | 0.892 | 0.882 |
| Taiwan Strait | VH | ×2 | 0.36 | 0.980 | 0.980 |
| Taiwan Strait | VH | ×4 | 0.62 | 0.946 | 0.944 |
| South China Sea | VV | ×2 | 0.32 | 0.984 | 0.984 |
| South China Sea | VV | ×4 | 0.83 | 0.91 | 0.896 |
| South China Sea | VH | ×2 | 0.36 | 0.986 | 0.984 |
| South China Sea | VH | ×4 | 0.55 | 0.95 | 0.952 |
Table 6. Matching information for Sentinel-1 data and ECMWF data under different sea states.

| Sea State | Wave Height (m) | Wind Speed (m/s) | Date | Sentinel-1 Time | ECMWF Time | Latitude (°N) | Longitude (°E) |
|---|---|---|---|---|---|---|---|
| 2 | 0.1~0.5 | 1.6~3.3 | 3 March 2024 | 10:02:23 | 10:00:00 | 27.935~28.027 | 121.433~121.525 |
| 3 | 0.5~1.25 | 3.4~7.9 | 3 March 2024 | 10:02:23 | 10:00:00 | 26.923~27.015 | 121.433~121.525 |
| 4 | 1.25~2.5 | 8.0~10.7 | 4 December 2024 | 10:01:56 | 10:00:00 | 25.971~26.063 | 121.481~121.573 |
| 5 | 2.5~4.0 | 10.8~13.8 | 15 January 2024 | 10:01:58 | 10:00:00 | 25.967~26.059 | 121.481~121.573 |
| 6 | 4.0~6.0 | 13.9~17.1 | 25 July 2024 | 10:01:56 | 10:00:00 | 25.971~26.063 | 121.483~121.575 |
| 7 | 6.0~9.0 | 17.2~24.4 | 25 July 2024 | 10:01:56 | 10:00:00 | 26.431~26.523 | 121.483~121.575 |
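The wave-height bands in Table 6 define a lookup from significant wave height to sea-state code. The helper below makes that mapping explicit; the bands are copied from the table, while the half-open boundary convention is an assumption.

```python
# Sea-state lookup using the wave-height bands of Table 6.
# Assumption: lower bound inclusive, upper bound exclusive at band boundaries.
SEA_STATE_BANDS = [  # (sea state, min wave height m, max wave height m)
    (2, 0.1, 0.5), (3, 0.5, 1.25), (4, 1.25, 2.5),
    (5, 2.5, 4.0), (6, 4.0, 6.0), (7, 6.0, 9.0),
]

def sea_state(wave_height_m: float) -> int:
    for state, lo, hi in SEA_STATE_BANDS:
        if lo <= wave_height_m < hi:
            return state
    raise ValueError(f"wave height {wave_height_m} m outside Table 6 bands")

assert sea_state(1.0) == 3  # 1.0 m falls in the 0.5~1.25 m band
```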
Table 7. High-resolution reconstruction of VV and VH-polarized NRCS under different sea states.

| Polarization | Scale Factor | Sea State | PSNR (dB) | SSIM | MS-SSIM | MAPE |
|---|---|---|---|---|---|---|
| VV | ×2 | 2 | 38.936 | 0.9352 | 0.9876 | 0.0201 |
| VV | ×2 | 3 | 39.363 | 0.9377 | 0.9882 | 0.0185 |
| VV | ×2 | 4 | 41.079 | 0.9556 | 0.9914 | 0.0166 |
| VV | ×2 | 5 | 41.872 | 0.9650 | 0.9931 | 0.0158 |
| VV | ×2 | 6 | 42.177 | 0.9713 | 0.9944 | 0.0156 |
| VV | ×2 | 7 | 42.412 | 0.9744 | 0.9951 | 0.0155 |
| VV | ×4 | 2 | 32.571 | 0.8070 | 0.9061 | 0.0389 |
| VV | ×4 | 3 | 32.829 | 0.8084 | 0.9063 | 0.0363 |
| VV | ×4 | 4 | 34.589 | 0.8508 | 0.9202 | 0.0336 |
| VV | ×4 | 5 | 35.348 | 0.8711 | 0.9270 | 0.0328 |
| VV | ×4 | 6 | 35.724 | 0.8858 | 0.9328 | 0.0328 |
| VV | ×4 | 7 | 35.993 | 0.8935 | 0.9361 | 0.0326 |
| VH | ×2 | 2 | 42.060 | 0.9757 | 0.9950 | 0.0173 |
| VH | ×2 | 3 | 42.470 | 0.9785 | 0.9956 | 0.0163 |
| VH | ×2 | 4 | 41.173 | 0.9817 | 0.9962 | 0.0237 |
| VH | ×2 | 5 | 40.929 | 0.9833 | 0.9965 | 0.0260 |
| VH | ×2 | 6 | 40.693 | 0.9826 | 0.9963 | 0.0293 |
| VH | ×2 | 7 | 40.536 | 0.9822 | 0.9963 | 0.0321 |
| VH | ×4 | 2 | 35.872 | 0.9156 | 0.9561 | 0.0385 |
| VH | ×4 | 3 | 36.264 | 0.9221 | 0.9590 | 0.0365 |
| VH | ×4 | 4 | 34.938 | 0.9282 | 0.9606 | 0.0547 |
| VH | ×4 | 5 | 34.802 | 0.9348 | 0.9646 | 0.0598 |
| VH | ×4 | 6 | 34.537 | 0.9312 | 0.9629 | 0.0682 |
| VH | ×4 | 7 | 34.412 | 0.9298 | 0.9623 | 0.0746 |
Table 8. NRCS reconstruction results using different methods for sea state 3 in the East China Sea.

| Polarization | Method | CR | PSNR (dB) | SSIM | MS-SSIM | MAPE |
|---|---|---|---|---|---|---|
| VV | OMP | 58.82% | 34.5085 | 0.5747 | 0.8755 | 0.1318 |
| VV | GBP | 58.82% | 34.9586 | 0.5794 | 0.8413 | 0.1279 |
| VV | IRLSs | 58.82% | 37.0678 | 0.7288 | 0.9187 | 0.0991 |
| VV | SPM | 58.82% | 35.0871 | 0.6073 | 0.8925 | 0.1233 |
| VV | Our Model (×2) | 25% | 39.363 | 0.9377 | 0.9882 | 0.0185 |
| VV | Our Model (×4) | 6.25% | 32.829 | 0.8084 | 0.9063 | 0.0363 |
| VH | OMP | 58.82% | 40.7053 | 0.6103 | 0.8413 | 0.0272 |
| VH | GBP | 58.82% | 41.0882 | 0.6068 | 0.8335 | 0.0259 |
| VH | IRLSs | 58.82% | 42.3559 | 0.8403 | 0.9532 | 0.0179 |
| VH | SPM | 58.82% | 41.3492 | 0.6455 | 0.8915 | 0.0252 |
| VH | Our Model (×2) | 25% | 42.470 | 0.9785 | 0.9956 | 0.0163 |
| VH | Our Model (×4) | 6.25% | 36.264 | 0.9221 | 0.9590 | 0.0365 |
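For reference, the metrics reported in Tables 2, 7, and 8 can be computed as in the sketch below, using scikit-image for PSNR and SSIM [50,51] and numpy for MAPE [53]; MS-SSIM [52] would typically come from a third-party package (e.g., pytorch-msssim) and is omitted here. The data-range convention and the epsilon guard are illustrative assumptions, not the paper's exact settings.

```python
# Evaluate a reconstructed NRCS tile against its reference.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate(ref: np.ndarray, rec: np.ndarray) -> dict:
    rng = float(ref.max() - ref.min())  # dynamic range of the reference tile
    mape = float(np.mean(np.abs(ref - rec) / np.maximum(np.abs(ref), 1e-12)))
    return {
        "PSNR": peak_signal_noise_ratio(ref, rec, data_range=rng),
        "SSIM": structural_similarity(ref, rec, data_range=rng),
        "MAPE": mape,
    }
```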