Article

Field-Programmable Gate Array Implementation of Backprojection Algorithm for Circular Synthetic Aperture Radar

1 Department of Smart Air Mobility, Korea Aerospace University, Goyang-si 10540, Republic of Korea
2 Department of AI Convergence and Electronic Engineering, Sejong University, Seoul 05006, Republic of Korea
3 School of Electronics and Information Engineering, Korea Aerospace University, Goyang-si 10540, Republic of Korea
* Author to whom correspondence should be addressed.
Electronics 2025, 14(8), 1544; https://doi.org/10.3390/electronics14081544
Submission received: 15 March 2025 / Revised: 7 April 2025 / Accepted: 9 April 2025 / Published: 10 April 2025
(This article belongs to the Special Issue New Insights in Radar Signal Processing and Target Recognition)

Abstract

This paper presents a backprojection algorithm (BPA) accelerator implemented on a field-programmable gate array (FPGA) for circular synthetic aperture radar (SAR) systems. Although the BPA offers superior image quality, it requires significantly more computation and is memory intensive, necessitating hardware optimization. In particular, the BPA accumulates image data, leading to high memory requirements that must be reduced for embedded system implementation. To address this issue, we optimized the floating-point (FP) bit width, focusing on the output data that form the image, rather than only reducing the internal computation bit widths as in previous studies. Specifically, we optimized the exponent and mantissa widths in six computational units, prioritizing memory optimization for the image data before reducing the computational logic. The proposed BPA accelerator achieved a 77% reduction in memory usage and a 73–74% reduction in computational logic while maintaining image quality with a structural similarity index measure (SSIM) of 0.99 or higher. These optimizations significantly enhance the feasibility of BPA processing in embedded systems.

1. Introduction

A synthetic aperture radar (SAR) system transmits microwave pulses to a target area and processes the signals reflected from the target, ensuring stable performance regardless of weather conditions or time of day. In addition, its large synthetic antenna aperture enables high-resolution imaging of various targets. Unlike optical sensor systems, SAR systems are robust against environmental conditions, making them suitable for a wide range of applications such as monitoring climate change, tracking wildfire progression, military operations, and population estimation. However, unlike optical sensors, SAR image formation requires intensive signal processing [1,2,3,4].
SAR systems are classified according to their data collection methods. When an aircraft equipped with radar illuminates a target while moving along a linear path, the system is referred to as a stripmap or linear SAR system. However, this method cannot detect targets hidden at certain angles. Circular SAR systems collect data by rotating 360° around a spotlighted target; therefore, they can detect targets hidden at various angles and achieve a high resolution when observation is performed over a complete circular aperture. Consequently, research efforts on circular SAR systems have increased [5,6,7,8,9,10,11].
Current SAR image formation algorithms can be classified into two categories: frequency domain and time domain algorithms. Common frequency domain algorithms include range-Doppler (R-D), chirp scaling (CS), and omega-K (ω-K). These algorithms form images by decoupling the echo signals in the range and azimuth directions. However, during this decoupling process, frequency domain algorithms introduce geometric approximations and assumptions that can degrade image quality [12,13,14,15,16,17,18].
A representative time domain algorithm is the backprojection algorithm (BPA). Because the BPA is based on the actual antenna trajectory, it introduces fewer geometric errors and can generate higher-quality SAR images than frequency domain algorithms. Owing to this characteristic, the BPA is primarily used for SAR image formation in applications such as climate change monitoring, military operations, and disaster observations. The BPA projects echo signals from each pulse onto the image domain and accumulates the projected values in the image domain. As the images accumulate, the image resolution gradually improves until it reaches its maximum resolution. Because pulse signals do not interfere with each other, the BPA theoretically has the advantage of continuously forming images [19,20,21,22,23,24,25,26].
However, the BPA has a computational complexity of O(N³), which is significantly higher than the O(N² log N) complexity of frequency domain algorithms. This computation requires numerous vector multiplications and memory accesses. SAR image processing is generally performed at ground stations owing to its high computational load and memory-intensive nature. However, onboard image generation can be useful in various applications, such as real-time image classification. Given the power constraints of onboard systems, research is being conducted on implementing the BPA using field-programmable gate arrays (FPGAs) or application-specific integrated circuits (ASICs) [27,28,29,30,31].
Owing to the high computational complexity of the BPA, it is essential to optimize the bit width of the computations. Previous studies have proposed methods for optimizing the bit width of internal operations [32]. From a computational perspective, this can reduce area and power consumption. However, because the BPA is an algorithm that iteratively accumulates output images, memory usage and power consumption are extremely high. It is impossible to reduce total memory usage without optimizing the bit width of output data. Therefore, we first optimized the bit width of the output data to reduce memory requirements and then optimized the internal operations. Bit width optimization was performed while ensuring that the image quality was maintained.
This paper presents a BPA accelerator implemented on an FPGA for circular SAR image formation. The proposed design optimizes the bit width to reduce memory usage by 77% and computational logic by 73–74% while maintaining image quality, with a structural similarity index measure (SSIM) of 0.99 or higher. Compared with a BPA using double precision floating-point (FP) operations, the optimized accelerator significantly enhances hardware efficiency, making BPA processing more feasible for embedded systems. The remainder of this paper is organized as follows: Section 2 reviews the SAR algorithm, Section 3 details the proposed BPA accelerator, Section 4 presents the FPGA implementation results, Section 5 discusses the results, and Section 6 concludes the paper.

2. Background

SAR is an active high-resolution radar imaging system mounted on a moving platform, such as an aircraft or satellite, to observe the Earth’s surface. The most important feature of SAR is that it can achieve a much higher azimuth resolution than a typical radar system through the concept of the synthetic aperture. Although it uses a physically small antenna, it achieves the effect of a very large antenna by aligning and accumulating, along the time axis, the signals received at the various positions the platform passes through. This allows it to generate high-resolution images that would be difficult to obtain with an actual physical antenna [1,2,3].

2.1. Basic Principles of SAR

The SAR system transmits microwave signals to the ground surface as the platform flies, and receives echo signals that are reflected from various objects on the ground. In general, the transmitted wave is an electromagnetic wave in the form of a pulse, and the reflected wave contains information about the time of reception, intensity, and phase. Using this information, the system estimates the distance and direction of each reflector. The basic principle of this process is illustrated in Figure 1.
The range resolution is mainly determined by the time span of the transmitted pulse: the shorter the pulse, the more finely adjacent reflectors can be distinguished. The azimuth resolution, on the other hand, is improved by analyzing the phase change that occurs when the same reflector is observed from multiple positions as the platform moves. In SAR, this phase information is precisely synthesized to create a single high-resolution image, and this process gives the synthetic aperture its name. It is essentially a method that exploits the Doppler effect, and the azimuth position of a reflector can be accurately reconstructed by utilizing the difference in Doppler frequency at different observation points.
In practice, SAR systems store phase information that varies over time within specific range bins and use it to reconstruct the position and intensity of reflectors into a two-dimensional image. This reconstruction is performed by an image formation algorithm and involves precise time and frequency analysis beyond simple signal acquisition. SAR image formation therefore requires very precise signal processing techniques: the received signal contains not only range information corresponding to the pulse delay, but also phase and Doppler frequency components, all of which must be accurately interpreted and synthesized to produce high-resolution images.
The first step is range compression. When the transmitted pulse is a chirp signal, processing the received signal with a matched filter yields a range resolution much finer than that implied by the pulse length. After the range resolution is obtained in this way, azimuth compression is performed through Doppler analysis of the azimuth signal. In azimuth compression, the Doppler frequency is extracted from the multiple samples received over time for the same range cell, and the phase shift due to the relative motion of the reflector is calculated from it. Through this process, information about the same scatterer at different observation points is accumulated and corrected, resulting in a final image with high resolution.
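As a concrete illustration of the range compression step, the following minimal sketch (Python/NumPy, not taken from the paper) applies a frequency-domain matched filter to a chirp echo; all parameter values (sample rate, bandwidth, pulse length, reflector delays) are assumptions chosen only to keep the example self-contained.

```python
# Minimal sketch of range compression by matched filtering (assumed parameters).
import numpy as np

fs = 200e6                      # sample rate [Hz] (assumed)
T = 5e-6                        # chirp duration [s] (assumed)
B = 150e6                       # chirp bandwidth [Hz] (assumed)
c = 3e8                         # speed of light [m/s]

t = np.arange(0, T, 1 / fs)
chirp = np.exp(1j * np.pi * (B / T) * t**2)      # transmitted LFM pulse

# Simulated echo: two point reflectors at assumed sample delays
n = 4096
rx = np.zeros(n, dtype=complex)
for delay, amp in [(1200, 1.0), (1300, 0.6)]:
    rx[delay:delay + len(chirp)] += amp * chirp

# Matched filter = correlation with the transmitted pulse, computed via FFTs
H = np.conj(np.fft.fft(chirp, n))
compressed = np.fft.ifft(np.fft.fft(rx) * H)

# After compression, each reflector collapses to a narrow peak; the
# achievable range resolution is roughly c / (2 * B).
peaks = np.argsort(np.abs(compressed))[-2:]
print(sorted(peaks), "resolution ~", c / (2 * B), "m")
```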
SAR image formation algorithms can be divided into frequency domain-based and time domain-based algorithms. Frequency domain-based algorithms include the range-Doppler algorithm and the chirp scaling algorithm; these techniques transform the received SAR data into the frequency domain and then perform matched filtering in the range and azimuth directions to compress the signal. This approach has the advantage of enabling fast image processing even for large-scale datasets by exploiting the computational efficiency of the fast Fourier transform (FFT). However, frequency domain techniques generally assume linear flight paths and flat terrain, and although they provide high computational efficiency under these ideal conditions, their accuracy may deteriorate for complex terrain or geometric structures [14,17,18].
On the other hand, time domain-based algorithms, represented mainly by the backprojection algorithm, process the received SAR signal directly in the time domain. An image is formed by backprojecting each received signal into the image space and accumulating the results. This technique shows robust performance in complex environments such as nonlinear flight paths, terrain relief, and wide-area observation geometries, and is suitable for high-precision applications such as spotlight-mode SAR imaging or 3D terrain reconstruction [20,21,23].
The time domain approach achieves higher precision than frequency domain methods because it does not need to simplify the geometry of the SAR platform, but its computational cost is much higher. Each pixel of the output image must accumulate contributions from numerous radar pulses, so the computational load grows in proportion to the number of pixels and the number of pulses. Therefore, hardware acceleration or optimization of the computational structure is essential for high-speed processing of the algorithm [19,24].
SAR has been reported in various practical applications. For example, in [33], a SAR system was manufactured and tested on an unmanned helicopter, and a 94 GHz band Frequency-Modulated Continuous Wave (FMCW) radar was used. Unlike electro-optical (EO) sensors or infrared (IR) sensors, the system is not affected by environmental factors such as dust, smoke, and bad weather, so it is suitable for providing stable operational guidance in ground-based systems. In addition, in [34], a SAR sensor was mounted on an unmanned aerial vehicle (UAV), which secured higher operational flexibility and implemented a cost-effective platform compared to existing aircraft-based remote sensing systems. UAV platforms can be used for various purposes in both civilian and military fields, and are particularly advantageous for implementing low-cost, high-performance surveillance systems. Beyond these examples, SAR technology has also found use in a wide range of other applications, including environmental monitoring, disaster response, and terrain mapping, further demonstrating its versatility and robustness across domains [35,36,37,38,39].

2.2. Backprojection Algorithm

The BPA is an algorithm that generates images based on the projection of echoes received from radar. The projection determines the contribution of each reflected pulse to each pixel in the output image. The BPA uses the radar position for each pulse, the location of the output image pixel, and the echo data as inputs to calculate the contribution. The process for each pulse follows the flow shown in Figure 2.
The SAR echo data are in the frequency domain, and the range inverse fast Fourier transform (IFFT) step compresses the pulses in the range direction. The IFFT operation is performed for each pulse. Mathematically, this can be expressed by the following equation:
$$S_{rc}(t_k, \tau_n) = \mathrm{IFFT}\left(S(f_k, \tau_n),\, N_{fft}\right)$$
Here, $N_{fft}$ is the FFT length, $S(f_k, \tau_n)$ represents the echo data in the frequency domain, $f_k$ is the frequency sample per pulse, $\tau_n$ is the transmission time of each pulse, and $t_k$ is the sampling time interval.
The differential range is the difference between the distance from the radar to the image pixel and the distance from the radar to the image center. The differential range is computed for each pixel of each pulse and is used for the interpolation and phase correction calculations. Mathematically, it can be expressed by the following equation:
$$\Delta R(\mu) = d_{a0}(\tau_\mu) - d_a(\tau_\mu) = \sqrt{(x_a(\mu) - x)^2 + (y_a(\mu) - y)^2 + (z_a(\mu) - z)^2} \;-\; r_0$$
Here, $d_{a0}(\tau_\mu)$ represents the distance between the radar and the pixel, and $d_a(\tau_\mu)$ denotes the distance to the scene center, $r_0$. The radar position is given by $(x_a(\mu), y_a(\mu), z_a(\mu))$, while the pixel position is represented by $(x, y, z)$.
To perform matched filtering in the azimuth direction, the phase correction factor is calculated. This can be mathematically expressed as follows:
$$\mathrm{phCorr} = \exp\!\left(j\,\frac{4\pi f_1 \Delta R}{c}\right)$$
where $f_1$ is the minimum frequency of each pulse and $c$ represents the speed of light.
Because the result of the pulse compression does not fall exactly on the range pixels, linear interpolation is performed [32,40]. The final image at pixel $r$ is obtained as the sum of the contributions from all pulses. This can be expressed using the following equation:
$$I(r) = \sum_{n=1}^{N_p} S_{int}(r, \tau_n) \cdot \exp\!\left(j\,\frac{4\pi f_1 \Delta R}{c}\right)$$
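To make the per-pulse flow concrete, the following minimal sketch (Python/NumPy, not the authors' hardware design) strings the steps above together. The array shapes, the range-bin layout, and the function interface are assumptions made only for illustration, loosely following the conventions of the MATLAB toolbox in [40].

```python
# Minimal backprojection sketch following the flow of Figure 2 (assumed
# interfaces; not the proposed FPGA implementation).
import numpy as np

def backproject(echo_f, radar_pos, r0, f1, delta_f, pix_xyz, nfft, c=3e8):
    """echo_f:    (Np, Nf) frequency-domain echo data per pulse
    radar_pos:    (Np, 3)  antenna position per pulse
    r0:           (Np,)    distance from the antenna to the scene center
    f1:           minimum frequency of the pulse
    delta_f:      frequency step of the samples (assumed uniform)
    pix_xyz:      (Npix, 3) positions of the output image pixels"""
    image = np.zeros(pix_xyz.shape[0], dtype=complex)

    # Range axis of the compressed pulse, centered on the scene center
    max_wr = c / (2 * delta_f)
    r_bins = np.linspace(-nfft / 2, nfft / 2 - 1, nfft) * max_wr / nfft

    for n in range(echo_f.shape[0]):
        # (1) Range compression: IFFT of the frequency-domain echo (IFU)
        rc = np.fft.fftshift(np.fft.ifft(echo_f[n], nfft))

        # (2) Differential range for every pixel (DRCU)
        d_pix = np.linalg.norm(pix_xyz - radar_pos[n], axis=1)
        dR = d_pix - r0[n]

        # (3) Find the range bin and interpolate linearly (FPXU + LIU)
        s_int = (np.interp(dR, r_bins, rc.real)
                 + 1j * np.interp(dR, r_bins, rc.imag))

        # (4) Phase correction and accumulation (PCU + IUU)
        image += s_int * np.exp(1j * 4 * np.pi * f1 / c * dR)

    return image
```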

3. Proposed BPA Hardware Accelerator

As shown in Figure 2, the BPA requires multiple processing stages. We propose a BPA accelerator consisting of the following units, as illustrated in Figure 3: the IFFT unit (IFU), differential range calculation unit (DRCU), phase calculation unit (PCU), find pixel in range swath unit (FPXU), linear interpolation unit (LIU), and image update unit (IUU).
The BPA requires echo data, radar positions, and pixel positions as inputs. It also requires the distance to the target scene center (R0), and the result of each pulse is accumulated with the image data calculated from the previous pulses.

3.1. SAR Datasets

We used two publicly available datasets released by the Air Force Research Laboratory (AFRL) for image formation. Both datasets are in the X-band and were collected along a circular flight path. The first dataset, the volumetric SAR dataset [41], consists of stationary vehicles and a calibration target. It contains an average of 117 pulses per azimuth angle, with 424 frequency samples per pulse. The generated image covers a 100 × 100 m area with a resolution of 501 × 501 pixels. The second dataset, the point target dataset [40], consists of data for three point targets, simulated using 128 pulses and 512 frequency samples per pulse. The resulting image covers a 10 × 10 m area and has a resolution of 501 × 501 pixels. Figure 4 shows the images generated using the two datasets.

3.2. Bit Width Optimization

To optimize the bit width, we designed a simulation using MATLAB R2022a. Each functional block of the BPA was designed to allow the FP bit width to be varied from 64-bit double precision down to 1 bit. As the FP bit width was adjusted, the corresponding exponent bias and rounding method were applied according to the IEEE FP standard. The exponent bit width was reduced stepwise from the 11 bits of the double precision format, and the mantissa bit width was reduced from 52 bits. Because the exponent bit width has a greater impact on performance, we first optimized the exponent bit width and then reduced the mantissa bit width. In addition, to reduce the output data, we optimized the IUU first and then proceeded with the IFU, PCU, DRCU, LIU, and FPXU, in order of decreasing computational complexity.
The reference image for performance comparison was generated using the BPA with all operations performed using the double precision FP. For each functional block, the exponent and mantissa bit widths were reduced, and the resulting images were compared with the reference image. Although both peak signal-to-noise ratio (PSNR) and SSIM are commonly used for image quality comparisons, SSIM was chosen as the evaluation metric because it aligns better with the visual perception of SAR images.
Table 1 and Table 2 present the SSIM of the images generated with reduced exponent and mantissa bit widths relative to the reference image. To prevent image quality degradation, the SSIM values were rounded to the fourth decimal place, and bit widths that maintained an SSIM of 1 were selected for the output image data. Consequently, the IUU can be optimized to an exponent width of 6 bits and a mantissa width of 7 bits, reducing the output data size to less than one-fourth. The internal operations were also optimized, allowing the IFU to be reduced to an exponent width of 6 bits and a mantissa width of 8 bits. For the other units, the DRCU was optimized to an exponent width of 5 bits and a mantissa width of 38 bits, the PCU to an exponent width of 5 bits and a mantissa width of 23 bits, the FPXU to an exponent width of 4 bits and a mantissa width of 20 bits, and the LIU to an exponent width of 7 bits and a mantissa width of 14 bits. The optimization results are summarized in Table 3.
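The following minimal sketch (Python/NumPy, standing in for the MATLAB model and not the authors' code) shows one way such a reduced floating-point format can be emulated during simulation: values are re-quantized to e exponent bits and m mantissa bits with round-to-nearest and the usual IEEE bias. Subnormal and overflow handling are simplified, and the function name and interface are assumptions.

```python
# Emulating a reduced floating-point format in software (simplified sketch;
# subnormals and saturation are not modeled exactly).
import numpy as np

def quantize_fp(x, e_bits, m_bits):
    """Quantize double-precision values to e_bits exponent / m_bits mantissa."""
    x = np.asarray(x, dtype=np.float64)
    sign = np.sign(x)
    mag = np.abs(x)
    out = np.zeros_like(mag)

    nz = mag > 0
    exp = np.floor(np.log2(mag[nz]))                     # unbiased exponent
    frac = mag[nz] / 2.0 ** exp                          # normalized mantissa in [1, 2)
    frac = np.round(frac * 2 ** m_bits) / 2 ** m_bits    # round the mantissa

    bias = 2 ** (e_bits - 1) - 1                         # IEEE-style exponent bias
    exp = np.clip(exp, -bias + 1, bias)                  # clamp to representable range

    out[nz] = frac * 2.0 ** exp
    return sign * out

# Example: quantize an image tile to the IUU format (6-bit exponent, 7-bit mantissa)
img = np.random.randn(501, 501)
img_q = quantize_fp(img, e_bits=6, m_bits=7)
print(float(np.max(np.abs(img - img_q))))
```

Sweeping e_bits and m_bits per functional block and comparing the resulting images against the double-precision reference with SSIM reproduces the kind of trade-off reported in Table 1 and Table 2.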

3.3. Design of the Function Unit

The echo data are in the frequency domain, and range pulse compression is performed using the IFFT (Figure 5). The IFFT was implemented based on a radix-2 butterfly pipeline structure. Upsampling is supported to achieve smoother image formation, and the IFFT can operate on up to 2048 points.
The DRCU (Figure 6) calculates the differential range from the three-dimensional (3D) position of each pixel and the sensor position for each pulse, relative to the distance from the scene center. In other words, the differential range is the difference between the distance from the central position of the antenna to the target and the distance from the central position of the antenna to the scene center.
The PCU (Figure 7) is a functional block that calculates the phase correction factor for azimuth matched filtering. Computing the phase correction factor requires evaluating sine and cosine functions; these were implemented using the Coordinate Rotation Digital Computer (CORDIC) algorithm, which approximates them iteratively using only shifts and additions.
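For reference, a minimal rotation-mode CORDIC sketch is given below (Python, not the authors' RTL); it approximates cos/sin for angles within the CORDIC convergence range using only shift-and-add style updates, and the iteration count is an assumption.

```python
# Rotation-mode CORDIC approximating (cos θ, sin θ); valid for |θ| ≲ 1.74 rad.
import math

def cordic_cos_sin(theta, n_iter=16):
    angles = [math.atan(2.0 ** -i) for i in range(n_iter)]   # atan(2^-i) table
    gain = 1.0
    for i in range(n_iter):
        gain /= math.sqrt(1.0 + 2.0 ** (-2 * i))             # CORDIC gain correction

    x, y, z = 1.0, 0.0, theta
    for i in range(n_iter):
        d = 1.0 if z >= 0 else -1.0          # rotate toward zero residual angle
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * angles[i]
    return x * gain, y * gain                # ≈ (cos(theta), sin(theta))

print(cordic_cos_sin(0.7))                   # compare with math.cos(0.7), math.sin(0.7)
```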
The LIU (Figure 8) is a functional block that performs interpolation to match the range pulse compression result to the range pixels. Prior to interpolation, the range bin corresponding to each pixel must be identified. The FPXU (Figure 7) therefore finds the corresponding range bin using the minimum and maximum values of the image range swath and the differential range value. The IUU (Figure 9) applies phase correction to the interpolated data using complex multiplication and then accumulates the images across pulses to generate the final image.
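A minimal sketch of this bin-finding and linear interpolation step is shown below (Python; the indexing convention and boundary handling are assumptions, not the exact FPXU/LIU datapath).

```python
# Map a differential range onto the compressed-pulse axis and interpolate.
import numpy as np

def interp_range(rc, r_min, r_max, d_range):
    """rc: compressed-pulse samples; [r_min, r_max]: range swath; d_range: ΔR."""
    n = len(rc)
    pos = (d_range - r_min) / (r_max - r_min) * (n - 1)   # fractional bin index
    lo = int(np.floor(pos))
    if lo < 0 or lo >= n - 1:
        return 0.0 + 0.0j                 # pixel falls outside the range swath
    frac = pos - lo
    return (1.0 - frac) * rc[lo] + frac * rc[lo + 1]      # linear interpolation
```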

4. Implementation Results of the Proposed BPA Accelerator

The proposed BPA accelerator was designed using a hardware description language (HDL) and implemented on an FPGA platform based on the Xilinx Zynq UltraScale+ device. While maintaining an SSIM of 0.99 or higher, the BPA accelerator required 68,549 look-up tables (LUTs), 87,201 flip flops (FFs), 15 digital signal processors (DSPs), and 206 block RAMs (BRAMs) as hardware resources.
Figure 10 shows the images generated using the volumetric dataset, processed by MATLAB R2022a on an Intel i7 CPU and by the proposed BPA accelerator, with data accumulated for a 1° azimuth angle. A visual comparison between the MATLAB R2022a-generated reference image and the image produced by the optimized hardware implementation reveals negligible degradation. Quantitatively, the resulting SSIM consistently exceeded 0.99, confirming that the accelerator preserves image quality while offering substantial computational advantages.

5. Discussion

In this section, the hardware efficiency of the proposed BPA accelerator is evaluated through comparisons with other implementation designs. In Table 4, SP-FP denotes single-precision floating-point operations, and the corresponding values indicate the hardware resources required to implement the BPA accelerator. When comparing the proposed BPA accelerator with the SP-FP-based implementation, a significant reduction in resource consumption is observed. Specifically, the proposed design achieves a 54% reduction in memory usage and a 46% decrease in computational logic utilization, demonstrating the efficiency of the custom floating-point format adopted in our architecture.
Furthermore, DP-FP represents double precision floating-point operations, which typically offer higher numerical accuracy but at the cost of significantly greater hardware overhead. When compared to the DP-FP implementation, the proposed BPA accelerator reduces memory usage by approximately 77% and lowers computational logic utilization by 73% to 74%. These results highlight the advantage of adopting a precision-optimized floating-point representation, which enables substantial savings in both memory and logic resources without compromising output image quality.
Table 5 summarizes previous FPGA implementation studies of the BPA. To compare the computational throughput across different images, an additional performance metric, the BP pixel, was introduced. This metric represents the computational workload required to generate a single image and is calculated by multiplying the image size by the number of pulses. For clarity, BP pixels are presented in units of 10³. Computational throughput is then defined as the BP pixels divided by the processing time in seconds.
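As a worked example of these metrics for the proposed design (using the figures reported in Table 5):

```python
# BP-pixel and throughput metrics for the proposed design (values from Table 5).
image_pixels = 501 * 501          # output image size
pulses = 117                      # pulses per image
bp_pixels = image_pixels * pulses / 1e3        # in units of 10^3
throughput = bp_pixels / 0.30                  # processing time of 0.30 s
print(bp_pixels, throughput)                   # ≈ 29,367.12 and ≈ 97,890.39
```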
In [29], the hardware was implemented using SP-FP operations, and a method for mapping radar data to parallel computations was proposed. The design was optimized using a pipeline approach to maintain image quality with SP-FP operations. However, owing to the lack of computational optimization, the throughput per hardware resource remained relatively low. In [30], fixed-point operations were employed for BPA computations to reduce area usage, and a parallel structure was introduced to enable real-time BPA image formation. Although this approach increased the throughput per hardware resource, the use of fixed-point arithmetic resulted in a decrease in image quality, with an SSIM of 0.87. In [31], a hardware–software partitioned structure was proposed to accelerate SAR imaging. Fixed-point operations were applied to the hardware portion to reduce the area usage. However, because some computations were performed in the software, the BPA image formation time on the FPGA was relatively long, leading to lower throughput per hardware resource. Additionally, the image data require 46-bit precision, resulting in high memory usage.
In this study, the proposed BPA accelerator achieved the highest throughput per hardware resource among the approaches that maintain high image quality. Notably, the throughput per memory unit was more than eight times higher than that in previous works. These improvements enhance the suitability of the proposed accelerator for resource-constrained platforms, such as UAVs and embedded systems, where compact and efficient hardware is essential.

6. Conclusions

In this study, a BPA accelerator for circular SAR systems was implemented on an FPGA platform, achieving significant reductions in memory usage and computational resource requirements while preserving high image quality. The proposed BPA accelerator consists of six main functional blocks and was optimized through bit-width reduction, maintaining an SSIM of 0.99 or higher and thus ensuring high-fidelity SAR image reconstruction.
Notably, compared with a DP-FP BPA accelerator, the proposed design reduces memory usage by 77% and computational logic utilization by 73–74%. These optimizations make the accelerator particularly well-suited for resource-constrained platforms, such as UAVs and small satellites, where minimizing hardware footprint is essential.
Future research will focus on implementing the BPA accelerator in very large-scale integrated circuits (VLSI) to further enhance power efficiency and reduce hardware footprint. This transition to VLSI-based implementations will enable even more lightweight and high-performance SAR imaging systems, expanding their applicability to advanced aerospace and remote sensing missions. Additionally, further optimizations in parallel processing and dataflow management will be explored to improve processing speed and overall system efficiency, paving the way for real-time onboard SAR image formation in resource-limited environments. We will also investigate the integration of hardware accelerators with autofocus techniques to effectively mitigate noise and image distortion caused by significant motion errors.

Author Contributions

J.H. designed the BPA accelerator, performed the experiment and evaluation, and wrote the paper. S.L. reviewed and revised the manuscript. Y.J. conceived and led the research, analyzed the experimental results, and edited the paper. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the MOTIE (Ministry of Trade, Industry, and Energy), Republic of Korea, under the Technology Innovation Program (RS-2022-00144290, RS-2024-00433615); the CAD tools were supported by IDEC.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Brown, W.M. Synthetic aperture radar. IEEE Trans. Aerosp. Electron. Syst. 1967, 2, 217–229. [Google Scholar] [CrossRef]
  2. Jakowatz, C.V.; Wahl, D.E.; Eichel, P.H.; Ghiglia, D.C.; Thompson, P.A. Spotlight-Mode Synthetic Aperture Radar: A Signal Processing Approach; Springer: Boston, MA, USA, 1996. [Google Scholar]
  3. Soumekh, M. Synthetic Aperture Radar Signal Processing with MATLAB Algorithms; John Wiley & Sons: New York, NY, USA, 1999. [Google Scholar]
  4. Moreira, A.; Prats-Iraola, P.; Younis, M.; Krieger, G.; Hajnsek, I.; Papathanassiou, K.P. A tutorial on synthetic aperture radar. IEEE Geosci. Remote Sens. Mag. 2013, 1, 6–43. [Google Scholar] [CrossRef]
  5. Soumekh, M. Reconnaissance with slant plane circular SAR imaging. IEEE Trans. Image Process. 1996, 5, 1252–1265. [Google Scholar] [CrossRef] [PubMed]
  6. Pinheiro, M.; Prats, P.; Scheiber, R.; Nannini, M.; Reigber, A. Tomographic 3D reconstruction from airborne circular SAR. In Proceedings of the 2009 IEEE International Geoscience and Remote Sensing Symposium, Cape Town, South Africa, 12–17 July 2009; Volume 3. [Google Scholar]
  7. Lin, Y.; Hong, W.; Tan, W.; Wang, Y.; Wu, Y. Interferometric circular SAR method for three-dimensional imaging. IEEE Geosci. Remote Sens. Lett. 2011, 8, 1026–1030. [Google Scholar] [CrossRef]
  8. Chen, L.; Jiang, X.; Li, Z.; Liu, X.; Zhou, Z. Feature-enhanced speckle reduction via low-rank and space-angle continuity for circular SAR target recognition. IEEE Trans. Geosci. Remote Sens. 2020, 58, 7734–7752. [Google Scholar] [CrossRef]
  9. Ge, B.; An, D.; Chen, L.; Wang, W.; Feng, D.; Zhou, Z. Ground moving target detection and trajectory reconstruction methods for multichannel airborne circular SAR. IEEE Trans. Aerosp. Electron. Syst. 2022, 58, 2900–2915. [Google Scholar] [CrossRef]
  10. Ishimaru, A.; Chan, T.K.; Kuga, Y. An imaging technique using confocal circular synthetic aperture radar. IEEE Trans. Geosci. Remote Sens. 1998, 36, 1524–1530. [Google Scholar] [CrossRef]
  11. Chen, L.; An, D.; Huang, X. A backprojection-based imaging for circular synthetic aperture radar. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2017, 10, 3547–3555. [Google Scholar] [CrossRef]
  12. Lanari, R.; Zoffoli, S.; Sansosti, E.; Fornaro, G.; Serafino, F. New approach for hybrid strip-map/spotlight SAR data focusing. IEEE Proc. Radar Sonar Navig. 2001, 148, 363–372. [Google Scholar] [CrossRef]
  13. Mittermayer, J.; Lord, R.; Borner, E. Sliding spotlight SAR processing for TerraSAR-X using a new formulation of the extended chirp scaling algorithm. In Proceedings of the IGARSS 2003. 2003 IEEE International Geoscience and Remote Sensing Symposium, Proceedings (IEEE Cat. No. 03CH37477), Toulouse, France, 21–25 July 2003; Volume 3. [Google Scholar]
  14. Giroux, V.; Cantalloube, H.; Daout, F. An Omega-K algorithm for SAR bistatic systems. In Proceedings of the 2005 IEEE International Geoscience and Remote Sensing Symposium, Seoul, Republic of Korea, 29 July 2005; Volume 2, pp. 1060–1063. [Google Scholar]
  15. Antoniou, M.; Saini, R.; Cherniakov, M. Results of a space-surface bistatic SAR image formation algorithm. IEEE Trans. Geosci. Remote Sens. 2007, 45, 3359–3371. [Google Scholar] [CrossRef]
  16. Dong, X.; Zhang, Y. A novel compressive sensing algorithm for SAR imaging. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2013, 7, 708–720. [Google Scholar] [CrossRef]
  17. Fan, W.; Zhang, M.; Li, J.; Wei, P. Modified range-Doppler algorithm for high squint SAR echo processing. IEEE Geosci. Remote Sens. Lett. 2018, 16, 422–426. [Google Scholar] [CrossRef]
  18. Cruz, H.; Véstias, M.; Monteiro, J.; Neto, H.; Duarte, R.P. A review of synthetic-aperture radar image formation algorithms and implementations: A computational perspective. Remote Sens. 2022, 14, 1258. [Google Scholar] [CrossRef]
  19. Ponce, O.; Prats, P.; Rodriguez-Cassola, M.; Scheiber, R.; Reigber, A. Processing of circular SAR trajectories with fast factorized back-projection. In Proceedings of the 2011 IEEE International Geoscience and Remote Sensing Symposium, Vancouver, BC, Canada, 24–29 July 2011; pp. 3692–3695. [Google Scholar]
  20. Carrara, W.G.; Goodman, R.S.; Majewski, R.M. Spotlight Synthetic Aperture Radar—Signal Processing Algorithms; Artech House: Norwood, MA, USA, 1995. [Google Scholar]
  21. Lin, Y.; Hong, W.; Tan, W.; Wang, Y.; Xiang, M. Airborne circular SAR imaging: Results at P-band. In Proceedings of the 2012 IEEE International Geoscience and Remote Sensing Symposium, Munich, Germany, 22–27 July 2012; pp. 5594–5597. [Google Scholar]
  22. Zhang, L.; Li, H.L.; Qiao, Z.J.; Xu, Z.W. A fast BP algorithm with wavenumber spectrum fusion for high-resolution spotlight SAR imaging. IEEE Geosci. Remote Sens. Lett. 2014, 11, 1460–1464. [Google Scholar] [CrossRef]
  23. Oriot, H.; Cantalloube, H. Circular SAR imagery for urban remote sensing. In Proceedings of the 7th European Conference on Synthetic Aperture Radar, Friedrichshafen, Germany, 2–5 June 2008; pp. 1–4. [Google Scholar]
  24. Seger, O.; Herberthson, M.; Hellsten, H. Real time SAR processing of low frequency ultra wide band radar data. In Proceedings of the EUSAR’98—European Conference on Synthetic Aperture Radar, Friedrichshafen, Germany, 25–27 May 1998; pp. 489–492. [Google Scholar]
  25. Basu, S.; Bresler, Y. O(N² log₂ N) filtered backprojection reconstruction algorithm for tomography. IEEE Trans. Image Process. 2000, 9, 1760–1773. [Google Scholar] [CrossRef]
  26. Ulander, L.M.; Hellsten, H.; Stenstrom, G. Synthetic-aperture radar processing using fast factorized back-projection. IEEE Trans. Aerosp. Electron. Syst. 2003, 39, 760–776. [Google Scholar] [CrossRef]
  27. Pritsker, D. Efficient global back-projection on an FPGA. In Proceedings of the 2015 IEEE Radar Conference (RadarCon), Arlington, VA, USA, 10–15 May 2015; pp. 204–209. [Google Scholar]
  28. Hettiarachchi, D.L.N.; Balster, E.J. Fixed-point processing of the SAR back-projection algorithm on FPGA. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 10889–10902. [Google Scholar] [CrossRef]
  29. Schleuniger, P.; Kusk, A.; Dall, J.; Karlsson, S. Synthetic aperture radar data processing on an FPGA multi-core system. In Proceedings of the Architecture of Computing Systems—ARCS 2013: 26th International Conference, Prague, Czech Republic, 19–22 February 2013; Proceedings 26. pp. 74–85. [Google Scholar]
  30. Cao, Y.; Guo, S.; Jiang, S.; Zhou, X.; Wang, X.; Luo, Y.; Yu, Z.; Zhang, Z.; Deng, Y. Parallel optimisation and implementation of a real-time back projection (BP) algorithm for SAR based on FPGA. Sensors 2022, 22, 2292. [Google Scholar] [CrossRef]
  31. Duarte, R.P.; Cruz, H.; Véstias, M.; de Sousa, J.T.; Neto, H. Hardware Accelerated Backprojection Algorithm on Xilinx UltraScale+ SoC-FPGA for On-Board SAR Image Formation. In Proceedings of the 2023 European Data Handling & Data Processing Conference (EDHPC), Juan Les Pins, France, 2–6 October 2023; pp. 1–8. [Google Scholar]
  32. Pimentel, J.J.; Stillmaker, A.; Bohnenstiehl, B.; Baas, B.M. Area efficient backprojection computation with reduced floating-point word width for SAR image formation. In Proceedings of the 2015 49th Asilomar Conference on Signals, Systems and Computers, Pacific Grove, CA, USA, 8–11 November 2015; pp. 732–736. [Google Scholar]
  33. Essen, H.; Johannes, W.; Stanko, S.; Sommer, R.; Wahlen, A.; Wilcke, J. High resolution W-band UAV SAR. In Proceedings of the 2012 IEEE International Geoscience and Remote Sensing Symposium, Munich, Germany, 22–27 July 2012; pp. 5033–5036. [Google Scholar]
  34. Lort, M.; Aguasca, A.; Lopez-Martinez, C.; Marín, T.M. Initial evaluation of SAR capabilities in UAV multicopter platforms. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2017, 11, 127–140. [Google Scholar] [CrossRef]
  35. Henderson, F.M.; Xia, Z.G. SAR applications in human settlement detection, population estimation and urban land use pattern analysis: A status report. IEEE Trans. Geosci. Remote Sens. 1997, 35, 79–85. [Google Scholar] [CrossRef]
  36. McNairn, H.; Brisco, B. The application of C-band polarimetric SAR for agriculture: A review. Can. J. Remote Sens. 2004, 30, 525–542. [Google Scholar] [CrossRef]
  37. Pieraccini, M.; Miccinesi, L.; Rojhani, N. RCS measurements and ISAR images of small UAVs. IEEE Aerosp. Electron. Syst. Mag. 2017, 32, 28–32. [Google Scholar] [CrossRef]
  38. Tsokas, A.; Rysz, M.; Pardalos, P.M.; Dipple, K. SAR data applications in earth observation: An overview. Expert Syst. Appl. 2022, 205, 117342. [Google Scholar] [CrossRef]
  39. Sayed, A.N.; Ramahi, O.M.; Shaker, G. In the Realm of Aerial Deception: UAV Classification via ISAR Images and Radar Digital Twins for Enhanced Security. IEEE Sens. Lett. 2024, 8, 6007704. [Google Scholar] [CrossRef]
  40. Gorham, L.A.; Moore, L.J. SAR image formation toolbox for MATLAB. In Algorithms for Synthetic Aperture Radar Imagery XVII; SPIE: Bellingham, WA, USA, 2010; Volume 7699, pp. 46–58. [Google Scholar]
  41. Casteel, C.H., Jr.; Gorham, L.A.; Minardi, M.J.; Scarborough, S.M.; Naidu, K.D.; Majumder, U.K. A challenge problem for 2D/3D imaging of targets from a volumetric data set in an urban environment. In Algorithms for Synthetic Aperture Radar Imagery XIV; SPIE: Bellingham, WA, USA, 2007; Volume 6568, pp. 97–103. [Google Scholar]
Figure 1. Basic principles of SAR.
Figure 2. BPA flow.
Figure 3. Hardware architecture of the proposed BPA accelerator.
Figure 4. (a) Volumetric SAR dataset image. (b) Point target dataset image.
Figure 5. Hardware architecture of the IFU.
Figure 6. Hardware architecture of the DRCU.
Figure 7. Hardware architecture of the (a) PCU and (b) FPXU.
Figure 8. Hardware architecture of the LIU.
Figure 9. Hardware architecture of the IUU.
Figure 10. SAR imaging results: (a) Double precision FP. (b) Optimized FP (SSIM = 0.99).
Table 1. SSIM of resulting images versus the exponent widths.

| Exponent Bit | IFU | DRCU | PCU | FPXU | LIU | IUU |
|---|---|---|---|---|---|---|
| 11 | 1 | 1 | 1 | 1 | 1 | 1 |
| 7 | 1 | 1 | 1 | 1 | 1 | 1 |
| 6 | 1 | 1 | 1 | 1 | 0.99 | 1 |
| 5 | 0.63 | 1 | 1 | 1 | 0.67 | 0.66 |
| 4 | 0.63 | 0.02 | 0.99 | 1 | 0 | 0 |
| 3 | 0.63 | 0.02 | 0.45 | 0.84 | 0 | 0 |
| 2 | 0.63 | 0.02 | 0.35 | 0.06 | 0 | 0 |
| 1 | 0.63 | 0.02 | 0.34 | 0.05 | 0 | 0 |
Table 2. SSIM of resulting images versus the mantissa widths.

| Mantissa Bit | IFU | DRCU | PCU | FPXU | LIU | IUU |
|---|---|---|---|---|---|---|
| 52 | 1 | 1 | 1 | 1 | 1 | 1 |
| 38 | 1 | 0.99 | 1 | 1 | 1 | 1 |
| 22 | 1 | 0.96 | 0.99 | 1 | 1 | 1 |
| 19 | 1 | 0.33 | 0.99 | 0.99 | 1 | 1 |
| 16 | 1 | 0.23 | 0.99 | 0.99 | 1 | 1 |
| 14 | 1 | 0.15 | 0.99 | 0.99 | 1 | 1 |
| 13 | 1 | 0.15 | 0.99 | 0.99 | 0.99 | 1 |
| 12 | 1 | 0.15 | 0.94 | 0.99 | 0.99 | 1 |
| 11 | 1 | 0.07 | 0.83 | 0.99 | 0.99 | 1 |
| 10 | 1 | 0.07 | 0.40 | 0.98 | 0.99 | 1 |
| 9 | 1 | 0.05 | 0.28 | 0.96 | 0.99 | 1 |
| 8 | 1 | 0.05 | 0.28 | 0.84 | 0.98 | 1 |
| 7 | 0.99 | 0.05 | 0.28 | 0.84 | 0.94 | 1 |
| 6 | 0.99 | 0.05 | 0.28 | 0.57 | 0.88 | 0.99 |
| 5 | 0.99 | 0.02 | 0.28 | 0.57 | 0.77 | 0.99 |
| 4 | 0.99 | 0.02 | 0.28 | 0.57 | 0.74 | 0.94 |
| 3 | 0.99 | 0.02 | 0.28 | 0.57 | 0.74 | 0.67 |
| 2 | 0.96 | 0.02 | 0.28 | 0.57 | 0.74 | 0.36 |
| 1 | 0.81 | 0.02 | 0.28 | 0.57 | 0.71 | 0.17 |
Table 3. The exponent and mantissa widths for each function block.

| Unit | Exponent | Mantissa | Total |
|---|---|---|---|
| IFU | 6 | 8 | 15 |
| DRCU | 5 | 38 | 44 |
| PCU | 5 | 23 | 29 |
| FPXU | 4 | 20 | 25 |
| LIU | 7 | 14 | 22 |
| IUU | 6 | 7 | 14 |
Table 4. Comparison of the reduced-FP accelerator with single-precision and double-precision implementations.

| Work | DP-FP | SP-FP | Ours |
|---|---|---|---|
| Device | Zynq UltraScale+ | Zynq UltraScale+ | Zynq UltraScale+ |
| Image size | 501 × 501 | 501 × 501 | 501 × 501 |
| LUTs | 257,940 | 128,970 | 68,549 |
| FFs | 331,466 | 165,733 | 87,201 |
| BRAMs | 892 | 450 | 260 |
| DSPs | 212 | 106 | 15 |
| SSIM | 1 | 0.99 | 0.99 |
Table 5. Comparison of the proposed accelerator with previous FPGA implementation studies.

| Works | [29] | [30] | [31] | Ours |
|---|---|---|---|---|
| Device | Virtex-7 | Virtex-7 | Zynq UltraScale+ | Zynq UltraScale+ |
| Image size | 1500 × 40 | 900 × 900 | 501 × 501 | 501 × 501 |
| No. of pulses | 688 | 2048 | 117 | 117 |
| BP pixels (10³) | 41,280 | 1,658,880 | 29,367.12 | 29,367.12 |
| Data format | SP-FP (32 bits) | Fixed-point (16 bits) | Fixed-point (46 bits) | Custom FP (14 bits) |
| LUTs | 167,000 | 258,178 | 106,016 | 68,549 |
| FFs | 190,000 | 371,220 | N.A. | 87,201 |
| BRAMs | 600 | 1235 | 41 | 260 |
| URAMs | 0 | 0 | 36 | 0 |
| DSPs | 240 | 1872 | 184 | 15 |
| Memory (KB) | 2700 | 5557.5 | 4858.94 | 927 |
| Power (W) | 10 | 21.074 | N.A. | 3.733 |
| Processing time (s) | 0.905 | 1.13 | 11.42 | 0.30 |
| Throughput (BP pixels/s) | 45,613.26 | 1,468,035.40 | 2571.55 | 97,890.39 |
| Throughput/LUTs | 0.27 | 5.69 | 0.24 | 1.43 |
| Throughput/FFs | 0.20 | 3.95 | N.A. | 1.12 |
| Throughput/DSPs | 190.06 | 784.21 | 13.98 | 6.53 |
| Throughput/Memory | 16.89 | 264.15 | 0.53 | 105.60 |
| Throughput/Power | 4561.33 | 69,660.98 | N.A. | 26,222.98 |
| SSIM | 0.99 | 0.87 | 0.96 | 0.99 |

N.A.: Not available.
