1. Introduction
Synthetic Aperture Radar (SAR) imaging techniques have been extensively applied in various fields [1,2,3,4,5], such as military reconnaissance, ocean remote sensing, and meteorological observation, owing to their day-and-night and all-weather monitoring capability [6,7,8,9]. Among these, Earth observation is one of the most significant applications of SAR [10,11], enabling mapping, geological exploration, environmental monitoring, and resource surveys [12]. All of these applications leverage SAR data [13] and rely strongly on advanced imaging algorithms for efficient and accurate imaging. SAR was first proposed and utilized in the late 1950s on RB-47A and RB-57D strategic reconnaissance aircraft [14]. In June 1978, SEASAT was successfully launched, marking the beginning of spaceborne SAR technology. To process SEASAT SAR data, the Jet Propulsion Laboratory (JPL) proposed a new digital signal processing algorithm that approximately separates the two directions by exploiting the significant difference between range and azimuth time scales. This algorithm, called the Range-Doppler (RD) algorithm [15,16,17], performs range cell migration correction (RCMC) in the Range-Doppler domain [18,19,20]. In 1984, Jin and Wu [21] at JPL identified the phase coupling between range and azimuth and improved the RD algorithm by introducing second range compression (SRC), making it capable of handling SAR data with medium squint angles. Since then, developments in RCMC and secondary range compression have allowed the RD algorithm to handle SAR data with high squint angles [22,23,24], and the RD algorithm has been widely used in SAR imaging due to its high efficiency and accuracy.
SAR has been studied over the past decades from two perspectives: imaging simulation and real SAR raw data processing. The significance of SAR imaging simulation is that it helps professionals in military, scientific, and industrial fields understand and evaluate the performance, characteristics, and limitations of SAR technology. SAR imaging simulation enhances researchers' understanding of SAR systems and provides a basis for building real SAR systems. For SAR imaging simulation, researchers develop SAR simulators comprising geometrical modeling, raw echo generation based on computational electromagnetics, and radar signal processing to obtain SAR images. For example, in 2006, F. Xu proposed a comprehensive approach to simulate polarimetric SAR imaging of complex scenarios, which considers the scattering, attenuation, shadowing, and multiple scattering effects of volumetric and surface scatterers distributed in space [25]. In the same year, G. Margarit et al. proposed a SAR simulator, GRECOSAR [26], which is applied to complex targets and can provide PolSAR, PolISAR, and PolInSAR images. In addition, RaySAR [27,28], a SAR imaging simulator, was developed by S. Auer in his doctoral thesis [29] in 2011 at the Technical University of Munich in collaboration with the German Aerospace Center (DLR, Institute of Remote Sensing Technology). Owing to its high efficiency and flexibility, RaySAR can generate very high-resolution SAR products based on backward ray tracing techniques [30,31,32]. In 2022, MetaSensing developed a simulator, KAISAR [33], based on GPU acceleration and ray tracing methods, which can efficiently generate SAR images of complex 3D scenes. Owing to its extremely high efficiency in evaluating electromagnetic scattering, the ray tracing method can rapidly calculate radar echoes for large-scene SAR imaging simulation. The classical ray tracing methods are forward ray tracing, backward ray tracing, and hybrid ray tracing. On this basis, many researchers have improved the ray tracing method. Xu and Jin proposed a Bidirectional Analytic Ray Tracing (BART) method [34] to calculate the composite RCS of a 3D electrically large complex target above a random rough surface. In [35], Dong and Guo proposed an accelerated ray-tracing algorithm, which uses a neighbor search technique with a new 0–1 transformation rule and arrangement of octree nodes; this algorithm improved the computational efficiency of the composite scattering of 3D ship targets on random sea surfaces.
As a basis for SAR imaging simulation, 3D modeling is an indispensable step in the SAR imaging system. There are many methods for modeling complex 3D scenes, such as Computer-Aided Design (CAD) modeling and mathematical modeling. 3D models of complex scenes can also be efficiently built from point clouds, among which Light Detection and Ranging (LiDAR) [36,37,38] can quickly acquire large amounts of point cloud data and is thus more efficient than traditional measurement methods. This technique is of great importance for applications such as modeling large-scale scenes and quickly updating models. As a model reconstruction technology that has emerged in recent years, LiDAR point cloud modeling has been extended to various application fields and plays an important role in many of them, such as digital cities, engineering planning, and the protection of ancient architecture. In [39], Ahmed F. Elaksher et al. proposed a new method for extracting buildings from LiDAR data, which uses the geometric properties of urban buildings to reconstruct building wireframes. In [40], a novel CNN-based method was developed that can effectively address the challenge of extracting the Digital Elevation Model (DEM) in heavily forested regions. By employing the concept of image super-resolution, this approach achieves satisfactory results without requiring ground control points. Recently, a set of effective and fast 3D modeling methods was presented that can model 3D point clouds of different sites with good results [41].
With the continuous launch of SAR missions, such as ERS-1, Radarsat-1, TerraSAR-X, Sentinel-1, and GF-3, massive SAR products are continuously provided to researchers. Typically, Sentinel-1 consists of a constellation of two polar-orbiting satellites, Sentinel-1A and Sentinel-1B [42], operating day and night to perform C-band SAR imaging. Level-0, Level-1, and Level-2 data products are provided for users. Most researchers use Level-1 and Level-2 products in their research, such as target recognition and feature extraction from SAR images. However, for processing Level-0 raw data, there is little publicly available commercial software offering technical support. M. Kirkove developed a SAR processor for Terrain Observation by Progressive Scans (TOPS) based on the Omega-K algorithm [43]. The processor combines preprocessing by frequency unfolding and post-processing by time unfolding. In addition, in 2021, K. Kim et al. processed and sorted Sentinel-1 Level-0 raw data by swath and generated a focused SAR image of Seoul City [44].
Aiming at SAR imaging of large coastal scenes, this paper is devoted to a comprehensive comparative study on spaceborne SAR imaging of coastal areas by virtue of Sentinel-1 raw data, SAR imaging simulation, and Google Maps. In comparison with freely and publicly available DEM data, whose resolution is typically 30 m or 90 m, LiDAR point cloud data can reach sub-meter accuracy. Hence, for the large coastal scene modeling problem encountered in SAR imaging simulation, a tailored 3D DEM model is developed in this paper, converted from the LiDAR point cloud data of the coastal areas by performing statistical outlier removal (SOR) denoising and down-sampling.
This paper is organized as follows: Section 2 introduces the Range-Doppler algorithm and its parallel implementation. In Section 3, the modeling of coastal areas and SAR imaging simulations are presented. In Section 4 and Section 5, results and discussion are presented, and a comprehensive comparison is made among the simulated SAR image of selected 3D coastal areas, the focused SAR image of Sentinel-1 data, the multi-look-processed SAR image from SNAP software, and the corresponding areas of Google Maps. Section 6 concludes this paper.
2. Range-Doppler Algorithm and Its Parallel Implementation
The Range-Doppler (RD) algorithm operates approximately separately in the range and azimuth directions, and the processing can be carried out as two one-dimensional operations due to the large difference in the time scales of the range and azimuth echoes [45]. In the RD algorithm, the radar echo data are processed via range pulse compression, range cell migration correction (RCMC), and azimuth pulse compression. After these procedures, the focused SAR image is finally obtained.
Although the conventional RD algorithm is very efficient in processing SAR raw data due to the use of the fast Fourier transform (FFT), it encounters challenges when dealing with large-scene SAR data. Because SAR raw data are inherently suitable for parallel processing, a scheme for processing SAR echoes in parallel is presented in this paper, in which the SAR raw data are divided into N blocks in the azimuth direction and processed in parallel.
2.1. SAR Echo Model
The transmitted pulse of the linear frequency modulation signal $s_t(\tau)$ can be expressed as follows:
$$s_t(\tau) = w_r(\tau)\,\exp\!\left(j2\pi f_0\tau + j\pi K_r\tau^2\right) \qquad (1)$$
where $K_r$ is the range compression chirp rate, $f_0$ is the center frequency [46,47], and $\tau$ is the range direction time. $w_r(\tau)$ is a rectangular signal or window function that defines a finite signal $s_t(\tau)$:
$$w_r(\tau) = \mathrm{rect}\!\left(\frac{\tau}{T_p}\right) \qquad (2)$$
In Equation (2), $T_p$ is the pulse duration. The reflected energy received by the radar at any time of illumination is the convolution of the pulse waveform at that time with the scattering coefficient $\sigma$ of the scene in the illuminated area. For a point target at range $R$ from the radar with a backscattering coefficient $\sigma_0$, the received echo of a single pulse can be expressed as follows:
$$s_r(\tau) = \sigma_0\, s_t\!\left(\tau - \frac{2R}{c}\right)$$
where $c$ is the speed of light. Since $R$ varies with the azimuth time $\eta$, it can be denoted as $R(\eta)$; $\eta_c$ is the azimuth time when the sensor is closest to the target. The two-dimensional received signal $s_r(\tau,\eta)$ in the time domain of the point target can be expressed as follows:
$$s_r(\tau,\eta) = \sigma_0\, w_r\!\left(\tau - \frac{2R(\eta)}{c}\right) w_a\!\left(\eta - \eta_c\right) \exp\!\left(-j\frac{4\pi f_0 R(\eta)}{c}\right) \exp\!\left(j\pi K_r\!\left(\tau - \frac{2R(\eta)}{c}\right)^{2}\right)$$
where $w_a(\eta)$ is a rectangular signal or window function, similar to $w_r(\tau)$.
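A minimal numerical sketch of this echo model is given below. All parameter values (center frequency, chirp rate, platform velocity, closest range, and sampling rates) are illustrative assumptions and are not the Sentinel-1 parameters used later in this paper:

```python
import numpy as np

# Illustrative (not Sentinel-1) parameters
c = 3e8              # speed of light [m/s]
f0 = 5.4e9           # center frequency [Hz]
Kr = 1e12            # range chirp rate [Hz/s]
Tp = 10e-6           # pulse duration [s]
Vr = 7000.0          # effective platform velocity [m/s]
R0 = 800e3           # closest slant range of the point target [m]
Fr, PRF = 60e6, 1500.0   # range sampling rate [Hz], pulse repetition frequency [Hz]

tau = 2 * R0 / c + np.arange(-Tp, Tp, 1 / Fr)   # fast (range) time axis
eta = np.arange(-0.2, 0.2, 1 / PRF)             # slow (azimuth) time axis, eta_c = 0

# Instantaneous slant range R(eta) of the point target (hyperbolic model)
R = np.sqrt(R0**2 + (Vr * eta)**2)

# Two-dimensional baseband echo s_r(tau, eta): rectangular range window,
# azimuth phase history, and range chirp (azimuth window assumed fully open)
dt = tau[None, :] - 2 * R[:, None] / c
wr = (np.abs(dt) <= Tp / 2).astype(float)
s_r = wr * np.exp(-1j * 4 * np.pi * f0 * R[:, None] / c) \
         * np.exp(1j * np.pi * Kr * dt**2)
print(s_r.shape)   # (azimuth samples, range samples)
```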
2.2. RD Algorithm
We first perform range pulse compression on the raw echo signal. The received signal $s_r(\tau,\eta)$ contains the radar carrier frequency $f_0$. Before sampling, it needs to be down-converted using quadrature demodulation. The Fourier transform in the range direction is applied to the demodulated signal to obtain the range frequency spectrum. The range pulse compression is completed by multiplying this spectrum with a matched filter in the frequency domain and then performing an inverse Fourier transform [48]. The range-compressed signal $s_{rc}(\tau,\eta)$ in the time domain can thus be expressed as follows:
$$s_{rc}(\tau,\eta) = A_0\, p_r\!\left(\tau - \frac{2R(\eta)}{c}\right) w_a\!\left(\eta - \eta_c\right) \exp\!\left(-j\frac{4\pi f_0 R(\eta)}{c}\right)$$
where $A_0$ is a complex constant and $p_r(\tau)$ is the compressed pulse envelope, which is approximately a sinc function.
When the squint angle is not zero, a greater squint angle results in stronger range–azimuth coupling. A second range compression (SRC) procedure is required to correct the defocus caused by this coupling for more precise imaging. To solve this problem, after range compression using a matched filter with a chirp rate of $K_r$, a filter with a chirp rate of $K_{src}$ should be applied as a secondary compression filter. This filter $H_{src}(f_\tau, f_\eta)$ can be expressed as follows:
$$H_{src}(f_\tau, f_\eta) = \exp\!\left(-j\pi\frac{f_\tau^{2}}{K_{src}(R_0, f_\eta)}\right)$$
Here, $R_0$ is the range when the target is closest to the radar, $f_\tau$ is the range frequency, and $f_\eta$ is the azimuth frequency.
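As a hedged illustration of the range compression step only (the SRC filter is omitted here), the frequency-domain matched filtering can be sketched as follows, reusing the simulated echo s_r, the chirp rate Kr, the pulse duration Tp, and the sampling rate Fr from the previous sketch:

```python
# Range compression by frequency-domain matched filtering (SRC omitted in this
# sketch); the reference function is the conjugate spectrum of the chirp replica.
n_rg = s_r.shape[1]
t_ref = (np.arange(n_rg) - n_rg // 2) / Fr
replica = np.where(np.abs(t_ref) <= Tp / 2,
                   np.exp(1j * np.pi * Kr * t_ref**2), 0)

H_rg = np.conj(np.fft.fft(replica))                 # matched filter spectrum
S_rc = np.fft.fft(s_r, axis=1) * H_rg[None, :]      # filter along range
s_rc = np.fft.fftshift(np.fft.ifft(S_rc, axis=1), axes=1)  # compressed echo
```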
Next, we perform RCMC. The same target is compressed to different range positions at different azimuth positions; this is range migration. In the time domain, multiple targets in the same range cell may lie in different azimuth cells, so it is difficult to correct each target separately. After the Fourier transform of the range-compressed signal in the azimuth direction, targets in the same range cell can be corrected uniformly in the Range-Doppler domain. The range migration, expressed in range cells, is given by the following:
$$\frac{2\,\Delta R\, F_r}{c} = n_{\mathrm{int}} + n_{\mathrm{frac}}$$
where $N_a$ and $N_r$ are the numbers of sampling points in the azimuth and range directions, $F_r$ is the sampling frequency in the range direction, $\Delta R$ is the range migration, and $\mathrm{PRF}$ is the pulse repetition frequency. $n_{\mathrm{int}}$ is the integer part of $2\Delta R F_r/c$, and $n_{\mathrm{frac}}$ is the fractional part. Hence, the processing of range migration is completed. Applying sinc interpolation, the range-migration-corrected signal $S_{rcmc}(\tau, f_\eta)$ is as follows:
$$S_{rcmc}(\tau, f_\eta) = A_0\, p_r\!\left(\tau - \frac{2R_0}{c}\right) W_a\!\left(f_\eta - f_{\eta_c}\right) \exp\!\left(-j\frac{4\pi f_0 R_0}{c}\right) \exp\!\left(j\pi\frac{f_\eta^{2}}{K_a}\right)$$
where $K_a$ is the azimuth chirp rate. Subsequently, we perform azimuth pulse compression. The azimuth matched filter $H_{az}(f_\eta)$ is the complex conjugate of the second exponential term, which is expressed as follows:
$$H_{az}(f_\eta) = \exp\!\left(-j\pi\frac{f_\eta^{2}}{K_a}\right)$$
The RCMC-processed signal $S_{rcmc}(\tau, f_\eta)$ is multiplied by the frequency-domain matched filter $H_{az}(f_\eta)$. After the inverse Fourier transform, the azimuth compression is completed as follows:
$$s_{ac}(\tau,\eta) = A_0\, p_r\!\left(\tau - \frac{2R_0}{c}\right) p_a\!\left(\eta - \eta_c\right) \exp\!\left(-j\frac{4\pi f_0 R_0}{c}\right)$$
Here, $p_a(\eta)$ is the amplitude of the azimuth impulse response, which, similar to $p_r(\tau)$, is a sinc function. $s_{ac}(\tau,\eta)$ is the 2D time-domain signal obtained after processing with the Range-Doppler algorithm.
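Continuing the same sketch, RCMC and azimuth compression in the Range-Doppler domain can be outlined as below; the hyperbolic migration formula and the azimuth chirp rate expression are the standard broadside approximations and are not necessarily the exact forms used in this work:

```python
# Azimuth FFT of the range-compressed data: Range-Doppler domain
S_rd = np.fft.fft(s_rc, axis=0)
n_az, n_rg = S_rd.shape
f_eta = np.fft.fftfreq(n_az, d=1 / PRF)     # azimuth frequency axis [Hz]
lam = c / f0                                # wavelength [m]

# RCMC: range migration for each azimuth frequency, converted to range cells
# and corrected here by simple linear interpolation along the range axis
dR = lam**2 * R0 * f_eta**2 / (8 * Vr**2)   # range cell migration [m]
shift = 2 * dR * Fr / c                     # migration in range samples
cols = np.arange(n_rg)
S_rcmc = np.empty_like(S_rd)
for i in range(n_az):
    S_rcmc[i] = (np.interp(cols + shift[i], cols, S_rd[i].real)
                 + 1j * np.interp(cols + shift[i], cols, S_rd[i].imag))

# Azimuth compression: matched filter exp(-j*pi*f_eta^2/Ka), then inverse FFT
Ka = 2 * Vr**2 / (lam * R0)                 # azimuth chirp rate [Hz/s]
H_az = np.exp(-1j * np.pi * f_eta**2 / Ka)
img = np.fft.ifft(S_rcmc * H_az[:, None], axis=0)
print(np.unravel_index(np.abs(img).argmax(), img.shape))  # focused point target
```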
2.3. Parallel Implementation for RD Algorithm
The RD algorithm is used to process the raw SAR echo signal, and its processing efficiency is excellent due to the use of the FFT. However, for large amounts of SAR raw data, the conventional RD algorithm still takes a long time. To address this problem, we analyzed how the raw signal is collected and how the data are structured. In a SAR system, the sensor collects the range-direction signal at each azimuth sampling point in turn, in a "stop-go-stop" manner. In short, the SAR raw echo signal is collected sequentially in the azimuth direction.
Using this feature, and to further improve the processing efficiency, a parallel scheme for processing large amounts of data with the RD algorithm is proposed. The flow chart of our parallel scheme is illustrated in Figure 1. The raw echo signal is divided into N blocks according to the azimuth sampling points, and each data block is processed with the RD algorithm at the same time. Since the range sampling of the raw echo signal is unchanged, all the data blocks can easily be merged after SAR imaging processing. Our scheme thus combines the characteristics of SAR systems with a parallel acceleration technique. For example, our parallel scheme was applied to SAR raw data with 8000 and 27,000 sampling points in the azimuth and range directions, respectively. The sampling points were divided into four equal data blocks along the azimuth direction. The results show that our parallel acceleration method takes only 34.1% of the serial processing time, a significant improvement in processing efficiency.
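The azimuth-block parallelization described above can be sketched as follows, assuming a placeholder function rd_focus() that stands for the RD chain of Section 2.2; the block count, the array size, and the use of Python's multiprocessing pool are illustrative choices rather than the exact implementation used in this study:

```python
import numpy as np
from multiprocessing import Pool

def rd_focus(block):
    """Placeholder for the RD chain of Section 2.2 (range compression, RCMC,
    azimuth compression) applied to one azimuth block of the raw data."""
    # ... apply range compression, RCMC, and azimuth compression here ...
    return block   # focused block; the range dimension is unchanged

def parallel_rd(raw, n_blocks=4):
    # Split the raw echo along the azimuth (row) direction; since the range
    # sampling is unchanged, the focused blocks can simply be stacked back.
    blocks = np.array_split(raw, n_blocks, axis=0)
    with Pool(processes=n_blocks) as pool:
        focused = pool.map(rd_focus, blocks)
    return np.vstack(focused)

if __name__ == "__main__":
    # Small illustrative array; the case discussed in the text used
    # 8000 x 27,000 azimuth x range samples.
    raw = np.zeros((2000, 4096), dtype=np.complex64)
    image = parallel_rd(raw, n_blocks=4)
    print(image.shape)
```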
3. Coastal Areas Modeling and SAR Imaging Simulation
Coastal areas include land and sea surfaces, with buildings, hillsides, and roads on land, as well as piers and bridges over the water. It is difficult to model real coastal areas because their structure is complex. LiDAR point cloud data have the advantages of high resolution and sufficient available data, making them suitable for constructing complex 3D coastal area models. Therefore, in this paper, we process the LiDAR point cloud data by denoising and down-sampling, and then triangulate them to establish a 3D model of the coastal areas. Based on this model, we use the ray tracing method to simulate spaceborne SAR imaging. The block diagram of the comprehensive comparative study for coastal areas is illustrated in Figure 2.
3.1. LiDAR Point Cloud Data
In order to model real coastal areas, we need a data source that carries coastal geographic information; the higher the accuracy of the data source, the more realistic the coastal area model will be. LiDAR point cloud data meet this accuracy requirement. LiDAR is a remote sensing technique that uses laser light to measure the elevation of various objects such as buildings, ground surfaces, and forest areas. It employs ultraviolet, visible, or near-infrared sources to detect and analyze these objects. LiDAR determines distance by measuring the time taken for the signal to travel from the sensor to the target and back, providing a measurement of objects on the ground.
This enables the creation of a precise and reliable 3D representation of the Earth's surface, facilitating the modeling of both natural and man-made features with high accuracy. LiDAR is applied in a variety of fields, including agriculture, forest management, architecture, transportation and urban planning, surveying, geology, mining, heritage mapping, etc.
A LiDAR point cloud is a data set produced by scanning the surface of an object with lidar and converting the reflected laser returns into 3D coordinate points [36,41]. A typical LiDAR data product comprises a large amount of point cloud data, ranging from millions to billions of precise 3D points [37,38] denoted by their coordinates (X, Y, and Z). The data set also includes supplementary attributes such as intensity, feature classification, and GPS time. These additional attributes provide essential supplemental information when creating a 3D model [40].
3.2. Approach to Model Realistic Coastal Areas
In order to model real coastal areas, a scheme is proposed to convert coastal point cloud data into a 3D coastal area DEM. Point cloud data with higher accuracy are more beneficial for modeling realistic coastal areas. The resulting DEM mesh can be utilized as an input geometrical model for SAR imaging simulation. The approach is to perform SOR denoising and down-sampling of the LiDAR point cloud data, followed by Delaunay triangulation, to generate a 3D coastal area DEM, which is used as the target for SAR imaging simulation. The flow chart of our approach to modeling the 3D coastal areas is shown in Figure 3.
According to the modeling procedure, a geometric model of actual terrain can be obtained. Next, we discuss the modeling approach in detail, taking a part of the city of Vallejo, located in the San Francisco Bay area, as an example. The public LiDAR point cloud data can be downloaded from the OpenTopography official website.
The downloaded point cloud data are shown in Figure 4a. However, rain, fog, or dust can introduce noise into the original point cloud, such as outliers, which can heavily impact subsequent modeling tasks. To mitigate this problem, we denoise the original point cloud using the SOR method. This method performs a statistical analysis of the neighborhood of each point: based on the distribution of distances from each point to its neighboring points, points that do not conform to the expected distribution are identified as outliers and filtered out.
The SOR procedure consists of two iterations. In the first iteration, the average distance of each point to its nearest $N$ neighboring points is calculated; the resulting distances follow a Gaussian distribution. Then, the mean $\mu$ and standard deviation $\sigma$ of these distances are calculated to determine the distance threshold $T = \mu + \alpha\sigma$, where $\alpha$ is a multiplier of the standard deviation. In the second iteration, points are classified as inliers or outliers depending on whether their average neighborhood distance falls below or above this threshold. Figure 4b shows the denoised point cloud data. By comparing Figure 4a and Figure 4b, it can be observed that the distribution of the points is more regular after denoising, with points noticeably higher or lower than the main point set filtered out. If outliers are not removed, spikes appear on the surface of the Delaunay-triangulated model, leading to an inaccurate model.
Because the denoised point cloud covers a large scene with dense points, direct triangulation would make the model file too large, causing a memory bottleneck in the subsequent SAR imaging processing. We resort to down-sampling to solve this problem. The idea of voxel down-sampling is to check whether a single point in each voxel can represent all of the points it contains; if so, the point set in the voxel is replaced by one point. Here, the average (centroid) of all point coordinates in each voxel is selected to replace the points. The down-sampled point cloud is shown in Figure 5a.
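A possible implementation of the SOR denoising and voxel down-sampling steps with the Open3D library is sketched below; the file name, neighbor count, standard-deviation multiplier, and voxel size are illustrative assumptions, and the LiDAR tile is assumed to have been converted to a point cloud format that Open3D can read (e.g., PLY):

```python
import open3d as o3d

# Load the LiDAR tile (hypothetical file name, pre-converted to PLY)
pcd = o3d.io.read_point_cloud("vallejo_tile.ply")

# Statistical outlier removal: the mean distance of each point to its
# nb_neighbors nearest points is compared against mu + std_ratio * sigma
denoised, kept_idx = pcd.remove_statistical_outlier(nb_neighbors=20,
                                                    std_ratio=2.0)

# Voxel down-sampling: all points falling inside one voxel are replaced by
# their centroid, reducing the point count before triangulation
downsampled = denoised.voxel_down_sample(voxel_size=2.0)  # voxel size in metres
print(len(pcd.points), len(denoised.points), len(downsampled.points))
```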
In order to convert the down-sampled LiDAR point cloud data into a 3D model, we apply the Delaunay triangulation method. Delaunay triangulation is a fundamental algorithm in computational geometry that divides a given set of points into a simplicial complex, a collection of triangles covering the entire point set, while satisfying a specific property called the Delaunay condition. This principle is widely used in applications such as computer graphics, terrain analysis, and mesh generation. The Delaunay condition states that for every triangle in the triangulation, no other point may lie within its circumcircle, the circle passing through the three vertices of the triangle.
The Delaunay process begins with an initial triangle that encompasses all the input points. Then, the remaining points are inserted one by one. When a new point is added, the algorithm determines the triangle(s) that contain the newly inserted point. These triangles are then split into smaller ones by connecting the new point to their vertices. After the insertion, the algorithm checks the local Delaunay condition for each newly created triangle and flips any violating edges until the global Delaunay condition is met. This iterative process continues until all points have been inserted, resulting in the final Delaunay triangulation. As an example, the LiDAR point cloud data on a part of the city of Vallejo were converted into a 3D model of the coastal areas by utilizing Delaunay triangulation, as illustrated in Figure 5b.
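A minimal sketch of this triangulation step is shown below, assuming the down-sampled point cloud from the previous sketch; it uses a 2.5D Delaunay triangulation of the horizontal coordinates (via SciPy) and keeps each vertex elevation, which is one common way to build a DEM mesh but not necessarily the exact procedure used here:

```python
import numpy as np
import open3d as o3d
from scipy.spatial import Delaunay

pts = np.asarray(downsampled.points)        # (N, 3) array of x, y, z coordinates

# 2.5D Delaunay triangulation: triangulate in the horizontal (x, y) plane and
# keep the z value of each vertex as its elevation, yielding the DEM surface
tri = Delaunay(pts[:, :2])

mesh = o3d.geometry.TriangleMesh(
    o3d.utility.Vector3dVector(pts),
    o3d.utility.Vector3iVector(tri.simplices))
o3d.io.write_triangle_mesh("vallejo_dem.ply", mesh)   # hypothetical output name
```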
3.3. SAR Imaging Simulation Based on RaySAR
On the basis of the coastal area modeling described above, SAR imaging simulation can be performed. The schematic of stripmap SAR is depicted in Figure 6. Stripmap mode is the most basic SAR imaging mode, in which the radar antenna remains fixed in its pointing direction. The imaging area is a ground strip parallel to the direction of movement of the radar sensor platform, and the swath width can be varied from several kilometers to hundreds of kilometers. Our main purpose is to compare the focused image processed from real SAR raw data with simulated SAR images. Therefore, we use a SAR simulator to perform SAR imaging simulation on the basis of our geometrical model built from LiDAR point cloud data.
In order to obtain high-resolution SAR images, we resorted to the RaySAR simulator [27,28] due to its high efficiency and flexibility. RaySAR is a SAR imaging simulator [49] developed at the Technical University of Munich in collaboration with the German Aerospace Center (DLR, Institute of Remote Sensing Technology) in Stefan Auer's doctoral thesis [29]. It can generate ultra-high-resolution SAR products. However, the triangulated 3D coastal area model contains a large number of triangular surface patches and cannot be processed directly by RaySAR for SAR simulation. To solve this problem, we propose to perform SOR denoising and down-sampling on the point cloud before triangulation, so that the 3D coastal area model can be utilized by RaySAR. This also avoids mismodeling phenomena such as spikes.
The SAR imaging system of RaySAR is approximated using a cylindrical light source and an orthographic camera. Figure 7 shows a schematic of ray tracing, including the process of double reflection of a ray. The reflection contributions are evaluated with the backward ray tracing technique, that is, rays start at the sensor and end at the signal source. In this way, rays that do not intersect the scene and cannot reach the light source are not traced. Compared with forward ray tracing, the computational complexity is greatly reduced and the efficiency is improved.
For the final SAR image, the position of each sampling point is determined from one or more intersection points between the ray and the target model. The azimuth position is determined as the average of the azimuth coordinates of the N intersection points, and the range position is determined as half the distance traveled by the ray.
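A small sketch of this position mapping is given below; the axis convention (azimuth stored in the first coordinate of each intersection point) and the function name are assumptions made for illustration:

```python
import numpy as np

def sample_position(intersections, path_length):
    """Image coordinates of one ray sample.
    intersections: (N, 3) array of the ray's intersection points with the model
    path_length:   total distance travelled by the ray
    """
    azimuth = float(np.mean(intersections[:, 0]))  # mean azimuth of the N points
    rng = 0.5 * path_length                        # half the travelled distance
    return azimuth, rng
```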
For the calculation of the signal strength contribution, the initial intensity of the emitted ray is set, and a weight factor is reduced with each bounce. Finally, the contributions of all bounce levels are added up. At each contact with the target surface, the contributions of specular and diffuse reflection are calculated separately. The formula for the diffuse reflection contribution is as follows:
$$I_d = c_d\, I_{in}\, \left(\mathbf{L}\cdot\mathbf{N}\right)^{b} \qquad (13)$$
where $c_d$ is a diffuse reflection factor, $I_{in}$ is the intensity of the incoming signal, $b$ is a surface brilliance factor, $\mathbf{L}$ is the direction from the intersection point to the light source, and $\mathbf{N}$ is the normal vector of the target surface at the intersection point. For the specular reflection contribution, the formula is as follows:
$$I_s = c_s\, I_{in}\, \left(\mathbf{H}\cdot\mathbf{N}\right)^{1/r} \qquad (14)$$
where $c_s$ is a specular reflection coefficient, $r$ is a roughness factor defining the sharpness of the specular highlight, and $\mathbf{H}$ is the angular bisector direction of the incident and reflected rays at the intersection point. The whole radiometric contribution can be evaluated through Equations (13) and (14).
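Under the definitions above, the two contributions can be sketched as follows; the exponent conventions follow the common POV-Ray-style highlight model that RaySAR builds on and are an assumption rather than a verbatim reproduction of Equations (13) and (14):

```python
import numpy as np

def unit(v):
    return np.asarray(v, dtype=float) / np.linalg.norm(v)

def diffuse_contribution(I_in, c_d, b, L, N):
    """Diffuse term: incoming intensity scaled by the cosine between the
    direction to the signal source L and the surface normal N, raised to the
    brilliance factor b (assumed Eq. (13) form)."""
    return c_d * I_in * max(np.dot(unit(L), unit(N)), 0.0) ** b

def specular_contribution(I_in, c_s, r, H, N):
    """Specular term: cosine between the bisector H of the incident and
    reflected rays and the normal N; the roughness r controls the sharpness
    of the highlight (assumed Eq. (14) form)."""
    return c_s * I_in * max(np.dot(unit(H), unit(N)), 0.0) ** (1.0 / r)
```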
From the radiometric point of view, the RaySAR model for specular reflection is used to approximate the Fresnel reflection model, while the RaySAR model for diffuse reflection is used to approximate the small perturbation method (SPM). By setting the coefficients of specular and diffuse reflection, the RaySAR model can thus approximate common radar scattering models. In contrast to the Fresnel reflection model, however, the dependence of the reflection coefficient on the incidence angle, the surface permittivity, and the signal polarization is not considered.
In order to perform SAR imaging of large coastal scenes based on the RaySAR simulator, we generated the 3000 m × 3000 m large-scene coastal area model by denoising, down-sampling, and triangulating the LiDAR point cloud data. For this imaging scenario, the SAR reflectance map is evaluated with backward ray tracing, and the simulated SAR image is obtained by convolving the reflectance map with the SAR impulse response.
All contributions with the same bounce level are separated into different layers; the contribution of double bounce only is shown in Figure 8a. It can be seen from Figure 8a that double bounce mainly occurs in areas such as buildings and bridges. The first five bounce contributions are then superimposed, based on which the simulated SAR image of Vallejo City obtained with RaySAR is generated, as depicted in Figure 8b.
5. Discussion
5.1. Quality Evaluation of Reconstructed Images
In order to numerically evaluate the reconstructed images, the cosine similarity between the simulated image and the real image is defined as follows [51]:
$$S_c(\mathbf{x}, \mathbf{y}) = \frac{\mathbf{x}\cdot\mathbf{y}}{\|\mathbf{x}\|\,\|\mathbf{y}\|} \qquad (15)$$
where $\mathbf{x}$ is the feature vector of the first image, $\mathbf{y}$ is that of the second image, and $\|\cdot\|$ denotes the Euclidean norm of a vector. The larger the cosine similarity $S_c$, the greater the correlation between the two images. For non-negative image features, $S_c$ ranges from 0 to 1, and a larger value indicates greater similarity.
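A minimal sketch of Equation (15) is given below; treating the flattened image intensities as the feature vectors is an assumption made for illustration:

```python
import numpy as np

def cosine_similarity(x, y):
    """Equation (15): inner product of the two feature vectors divided by the
    product of their Euclidean norms."""
    x = np.ravel(np.asarray(x, dtype=float))
    y = np.ravel(np.asarray(y, dtype=float))
    return float(np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y)))
```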
Using Equation (15), the cosine similarity between the Sentinel-1 SAR-focused image in Figure 17b and the multi-look SAR image processed with SNAP software in Figure 17c is 0.85. This indicates a satisfactory similarity between the Sentinel-1 SAR image focused with our parallel RD algorithm and the multi-look SAR image processed using SNAP software, which proves the effectiveness of our parallel RD algorithm for focusing Sentinel-1 raw data. The cosine similarity reaches 0.93 between the SAR image simulated with the RaySAR simulator in Figure 17a and the corresponding area of Google Maps in Figure 17d. In addition, by comparing the Sentinel-1 SAR image focused with our parallel RD algorithm in Figure 17b and the RaySAR-simulated SAR image in Figure 17a, a cosine similarity of 0.81 was obtained. This lower value is due mainly to the rotational angle deviation between our focused SAR image and Google Maps.
5.2. Parallel Scheme of RD Algorithm
As we know, the SAR imaging system can be regarded as a linear system, and focused imaging of Level-0 raw data has natural advantages for parallel processing due to its "stop-go-stop" operating mode. Hence, in this study, a CPU-parallel scheme of the Range-Doppler (RD) algorithm is proposed to focus Sentinel-1 Level-0 raw data, exploiting the fact that the raw data can be divided into several separate blocks, provided that sufficient CPU cores are available. Moreover, the speedup ratio is almost linearly proportional to the number of CPU cores. Although a CPU-based parallel RD algorithm is not novel, the scheme is simple and easy to implement. Many acceleration algorithms have been proposed over the last decade, but most of them are devoted to accelerating the algorithm itself, which is complex and not easy to implement.
Taking the workstation used in this study (CPU: 20 cores, Intel(R) Xeon(R) Silver 4114 CPU @ 2.20 GHz) as an example, an almost 20× speedup ratio was achieved. In future research, parallel techniques based on a CPU–GPU heterogeneous architecture will be considered for focusing large amounts of Sentinel-1 Level-0 raw data.
5.3. Limitations of This Work
The accuracy of SAR imaging simulations heavily depends on the quality of input data and the assumptions made during the simulation process. Any inaccuracies in the input data or simulation assumptions can affect the reliability of the results. In general, the geometrical model cannot fully represent the complexity of real-world scenes, which can include various types of terrain and objects with different scattering properties.
While this work proposes a scheme for converting LiDAR point cloud data into a 3D coastal area DEM, it may oversimplify the complexities involved in LiDAR data processing. LiDAR data can be noisy, and various preprocessing steps are often required to obtain an accurate 3D model.
The study does not extensively discuss the uncertainty associated with SAR imaging, simulation, and modeling. Real-world SAR data can be affected by noise, atmospheric conditions, and other factors that introduce uncertainties into the results.
The efficiency of the parallel RD algorithm is highlighted, but the computational resources required for these processes may not be readily available in all research or operational settings.
6. Conclusions
In this paper, a parallel RD algorithm is developed to improve the efficiency of large-scene SAR imaging from SAR raw data. The scheme of the parallel RD algorithm is first presented, and its validity is verified by performing SAR imaging simulation for an ideal point target. The parallel RD algorithm is then applied to focus SAR raw data decoded from recent Sentinel-1 acquisitions of a large coastal scene, and the focused SAR images are in good accordance with Google Maps. Due to its high efficiency and flexibility, the RaySAR simulator based on the ray tracing method is utilized in this paper for SAR imaging simulation of a large coastal scene, which is modeled using LiDAR point cloud data. Because of the large number of triangular surface patches in the triangulated 3D coastal area models, we propose to perform SOR denoising and down-sampling on the point cloud model before triangulation, yielding a tailored 3D coastal area model suitable for the RaySAR simulator. In order to make a comprehensive comparison, the simulated SAR image of a selected 3D coastal area is compared with the focused SAR image of Sentinel-1 data, the multi-look-processed SAR image from SNAP software, and the corresponding areas of Google Maps. The comparison results show good agreement, which verifies the effectiveness of the parallel RD algorithm as well as the RaySAR simulator. In addition, slicing and enhancing techniques are applied to zoom in on regions of interest, which remain clearly visible. Through a comprehensive study involving Sentinel-1 raw data, SAR imaging simulation, and Google Maps, this comparative analysis allows for the assessment of the developed algorithm's performance and accuracy, providing valuable references for researchers and engineers in related fields of SAR imaging research and applications. In the future, a more refined electromagnetic simulator will be developed for SAR imaging simulation, in which the scattering from large-scale facets and small-scale structures will be taken into account by PO and SPM, respectively, along with ray tracing techniques.