Article

Enhancing Integrated Sensing and Communication (ISAC) Performance for a Searching–Deciding Alternation Radar-Comm System with Multi-Dimension Point Cloud Data

1 School of Electronics and Information Engineering, Beihang University, Beijing 100191, China
2 State Key Laboratory of CNS/ATM, Beihang University, Beijing 100191, China
3 School of Reliability and System Engineering, Beihang University, Beijing 100191, China
* Author to whom correspondence should be addressed.
Remote Sens. 2024, 16(17), 3242; https://doi.org/10.3390/rs16173242
Submission received: 2 July 2024 / Revised: 23 August 2024 / Accepted: 30 August 2024 / Published: 1 September 2024

Abstract

In developing modern intelligent transportation systems, integrated sensing and communication (ISAC) technology has become an efficient and promising method for vehicle road services. To enhance traffic safety and efficiency through real-time interaction between vehicles and roads, this paper proposes a searching–deciding scheme for an alternation radar-communication (radar-comm) system. Firstly, its communication performance is derived for a given detection probability. Then, we process the echo data from real-world millimeter-wave (mmWave) radar into four-dimensional (4D) point cloud datasets and thus separate different hybrid modes of single-vehicle and vehicle fleets into three types of scenes. Based on these datasets, an efficient labeling method is proposed to assist accurate vehicle target detection. Finally, a novel vehicle detection scheme is proposed to classify various scenes and accurately detect vehicle targets based on deep learning methods. Extensive experiments on collected real-world datasets demonstrate that compared to benchmarks, the proposed scheme obtains substantial radar performance and achieves competitive communication performance.

1. Introduction

With the advancement of 5G and 6G wireless networks, integrating advanced technologies has become essential for improving communication efficiency and applications in both the military and civilian arenas [1]. Effective communication between the base station (BS) and vehicles is crucial in developing intelligent transportation systems [2]. Seamless communication links enable vehicles to receive critical information, such as road conditions and traffic updates, which is vital for future services like autonomous driving [3]. Beamforming technology is pivotal in this setup, enhancing the signal-to-noise ratio (SNR) by directing pencil-shaped beams toward vehicles based on real-time vehicle positions. To achieve this, the BS must possess target detection capabilities in the area. Integrated sensing and communication (ISAC) technology is a promising solution for enabling effective target sensing and communication, ensuring efficient traffic management and safer roads [4].
Target detection in traffic primarily relies on camera vision and radar technologies, and integrating vehicle position information significantly contributes to beamformer design [5]. With the maturation of image-based vehicle target detection technology, the efficient detection and tracking of vehicle targets have become crucial research domains [6,7]. Traditional approaches involve capturing images to extract relevant target position and motion data, forming the basis for subsequent semantic analysis tasks. Deep learning advancements have led to the emergence of highly efficient target detection algorithms, notably the region-based convolutional neural networks (RCNNs) and you only look once (YOLO) series [8,9,10,11]. However, environmental factors can compromise the quality of collected image or video data, affecting the accuracy and efficiency of target detection and tracking processes [12,13]. These challenges underscore the need for robust solutions to ensure reliable performance in real-world scenarios.
To tackle the above challenges, there has been a growing interest from both industry and academia in enhancing target sensing performance across various environments by leveraging wireless communication technology [14]. Radar and lidar can offer superior sensing capabilities compared to cameras. However, lidar systems are very expensive. Therefore, radar systems have been widely developed in advanced driver assistance systems [15]. In contrast to lidar, millimeter-wave (mmWave) radar can penetrate fog, smoke, and dust, a capability that translates into near-all-weather and all-time operation, rendering it highly reliable [16]. Operating in the band between conventional microwave and light waves, mmWave radar combines the advantages of radar and lidar technologies [17]. Furthermore, mmWave radar demonstrates over 90% accuracy in distinguishing and identifying weak targets [18]. The transmission and reception of mmWave radar electromagnetic signals are far less susceptible to adverse weather conditions and variations in lighting, whether strong or weak. Consequently, mmWave radar delivers exceptional target positioning and detection performance, even in challenging environmental scenes [19]. It is also cost-effective and proficient in simultaneously identifying multiple targets across a wide range of practical applications.
The main contributions of this paper are as follows:
1. This paper first proposes a novel searching–deciding scheme for radar-communication (radar-comm) systems, which are designed to operate in a dual-functional mode, balancing the demands of both radar sensing and wireless communication. Then, the theoretical analysis underscores the significance of detection probability in enhancing the communication performance of the system. Finally, we propose a vehicle detection scheme to achieve superior radar-comm system integration by enhancing detection probability.
2. In real-world conditions, we process the echo data from mmWave radar into the 4D point cloud datasets. By comprehensively understanding vehicle target features, the datasets encompass three distinct scenes. Then, an efficient labeling method is proposed to accurately detect vehicle targets, which is not contingent on camera image quality and is versatile for various conditions.
3. Based on the collected 4D radar point cloud dataset, this paper presents a novel six-channel self-attention neural network architecture to detect vehicle targets. It integrates the multi-layer perceptron (MLP) layer, max-pooling layer, and transformer block to achieve more accurate and robust detection of vehicle targets. The MLP layer provides a powerful non-linear mapping capability, allowing the network to learn complex patterns within the point cloud data. The max-pooling layer effectively reduces the spatial dimensions of the data, which helps reduce the computational load. The transformer block enables the model to capture contextual information across the point cloud.
4. Extensive experiments on collected real-world datasets demonstrate that compared to benchmarks, the proposed scheme obtains substantial radar performance and achieves competitive communication performance.

2. Related Work

In mmWave radar target sensing, two primary dataset types are employed: the radar spectrogram and the radar point cloud. With radar spectrograms, target sensing involves a multi-step process. Initially, the raw radar echo signals undergo a series of fast Fourier transforms (FFTs) to produce a spectrogram [20]. Reference [21] extracts targets from the radio-frequency spectrogram with a peak-detection algorithm. Nevertheless, setting detection thresholds can lead to widespread false and missed detections. To solve this challenge, reference [22] employs a deep learning algorithm to combine a range angle (RA) spectrogram with camera images or videos. This integrated approach enhances target detection accuracy and mitigates the issues associated with traditional threshold methods [23]. Reference [24] explores the feasibility of utilizing deep-learning algorithms for sensing targets based on the range velocity (RV) spectrogram. Regardless of the spectrogram type, it is essential to label the targets using camera images before applying deep learning models for classification. When integrating camera images for target annotation, the process is constrained by image quality and inevitably incorporates clutter information within the target bounding box. This clutter information hampers the accurate extraction and tracking of subsequent targets [25]. Compared with radar spectrograms, radar point clouds contain more detailed target feature information and have found widespread application in the field of target detection. Reference [26] employs a virtual point cloud (VPC) as an auxiliary teacher in conjunction with mmWave radar point clouds (RPCs) for human pose estimation. Through extensive experiments, the study validates the effectiveness of utilizing radar point cloud data for human pose estimation. In [27], radar point cloud data are used for vehicle sensing and power allocation, and the experiments demonstrate that radar point cloud data outperform RV spectrograms in performance and efficiency metrics. The precise detection of targets contributes to a more rational allocation of communication resources.
The integration of sensing and communication functionalities, commonly referred to as ISAC, has emerged as a pivotal technology in the development of next-generation wireless systems. ISAC offers the potential to enhance the efficiency of spectrum utilization and reduce hardware costs by combining the traditionally separate domains of radar sensing and wireless communication into a unified framework. This integration is achieved through various design schemes, each addressing different aspects of the dual-functional system requirements and performance. As the demand for more sophisticated and versatile systems grows, the exploration of ISAC techniques becomes increasingly crucial, paving the way for innovations in signal design, system architecture, and algorithm development.
In the context of target sensing, ISAC technology is typically approached through four prevalent design schemes. The first scheme uses symbol-level optimized signals for both sensing and communication. In [28], the authors design and optimize transceiver waveforms for a multiple-input multiple-output (MIMO) dual-functional radar-communication (DFRC) system. In this system, the dual-functional base station (BS) transmits the integrated signal optimized by the successive convex approximation method. In [29], a symbol-level precoding approach for ISAC is proposed, where the real-time data transmission is based on the Riemannian Broyden–Fletcher–Goldfarb–Shanno (RBFGS) algorithm. Continuous real-time optimization of these signals is necessary, which imposes significant demands on computing and storage resources. Furthermore, its implementation requires substantial modifications to the existing system architecture.
The second scheme employs radar signals for communication functions. Reference [30] extends the DFRC system based on index modulation to incorporate sparse arrays and frequency-modulated continuous waveforms (FMCWs). The proposed FMCW-based radar-communication system utilizes fewer radio frequency modules and integrates narrowband FMCW signals to minimize cost and complexity. Reference [31] introduces a novel DFRC scheme termed hybrid index modulation (HIM), which operates on a frequency-hopping MIMO (FH-MIMO) radar platform. However, the restriction imposed by the radar pulse repetition frequency in this scheme presents a challenge to achieving high communication rates.
The third scheme involves using communication signals for sensing functions. Reference [32] proposes utilizing the spread spectrum communication signal echo reflected by the target to achieve sensing functions. The proposed dual-function radar and communication system demonstrated the capability to reach speeds of up to 10 Megabits/s. Reference [33] presents a novel sparse vector-coding-based ISAC waveform, designed to minimize sidelobes for radar sensing and ensure ultra-reliable communication transmission. Although this scheme exhibits enhanced performance, designs in this category frequently overlook sidelobe considerations during waveform construction, resulting in inadequate sensing resolution and failure to meet the required standards.
The final scheme is the alternating design of sensing and communication, facilitating seamless transitions between the two modes within the system, which offers increased flexibility in adapting to dynamic environmental conditions. In reference [34], the authors investigate the coexistence of a MIMO radar system with cellular base stations, specifically focusing on interfering channel estimation. The radar operates in a “search and decide” mode, while the base station receives interference from the radar. In addition, the authors propose several hypothesis testing methods to identify the radar’s operating mode and obtain the interference channel state information (ICSI) through various channel estimation schemes. In reference [35], advanced deep learning methodologies are devised to capitalize on radar sensor data, facilitating mmWave beam prediction. These methodologies seamlessly incorporate radar signal processing techniques to extract pertinent features for the learning models, enhancing their efficiency and reducing inference time.
Considering the ease of engineering implementation, this study employs mmWave radar for target sensing and implements a searching–deciding scheme for wireless data transmission. Firstly, we collect the echo data using our measurements with the FMCW mmWave radar sensor and process them into four-dimensional (4D) radar point clouds as the input of the neural network. Then, we propose a novel approach based on the radar point cloud datasets to enhance vehicle target detection performance in the searching mode. Based on the detection results, we can optimize communication resource allocation. The proposed searching–deciding alternation radar-comm system is designed for real-time processing. Finally, compared to the benchmarks, the proposed scheme achieves superior integrated system performance.

3. System Model

This section focuses on mmWave radar signal processing and communication performance analysis. As depicted in Figure 1, on an urban road, the proposed alternation radar-comm system includes an mmWave radar sensing system and a communication system. This paper utilizes a self-developed 80 GHz FMCW mmWave radar sensor to capture 4D radar point clouds, with a range resolution from 0.4 m to 1.8 m and angular resolutions of 1° in azimuth and 2° in elevation. The radar sensor is deployed 6 m above the roadway. Data are collected on sunny days in urban environments with moderate traffic. While the camera records the lane situation, the mmWave radar senses vehicles and collects data, which are then stored on the computer.
The initial step involved preprocessing the radar data to effectively filter out clutter, resulting in usable 4D radar point cloud datasets. Combining video frames and radar frames with the same timestamp, we performed manual data annotation to construct the dataset. These datasets were systematically categorized into different traffic scenarios, focusing on distinct hybrid modes, including single-vehicle instances and vehicle fleets. Each scene contains approximately 300 points; a vehicle fleet on a city road refers to vehicles traveling close together in a line. Building on this foundation, we proposed a novel vehicle detection scheme designed to accurately classify these diverse scenes and detect vehicle targets. The methodology also involved analyzing the communication resource allocation for vehicles, guided by the detection probabilities derived from the radar data.

3.1. Radar Signal Processing

The echo signal processing of FMCW radar primarily involves three fundamental components: range estimation, velocity estimation, and angle estimation. The specific processing steps are outlined below [36].
Range estimation is fundamental to processing mmWave radar echo signals. It involves calculating the distance between the BS and the vehicle target, corresponding to the round-trip time delay of electromagnetic wave propagation. An approximate range estimation can be obtained by conducting the first FFT on the radar echo signal [37]. The mmWave radar echo signal is defined as
$$S_r = A_\psi \cos\left(2\pi f_c (t-\tau) + \pi\mu (t-\tau)^2 + \varphi_0\right), \tag{1}$$
where $A_\psi$ is the amplitude of the signal, $\tau = 2R_t/c$ is the propagation delay, $c$ is the speed of light, $f_c$ is the carrier frequency, $\varphi_0$ is the initial phase of the echo signal, $t \in [0, T_t]$ is the time duration, the frequency sweep slope is $\mu = B/T_t$, and $B$ is the scanning bandwidth.
The complex signal representation of the mixed echo signal can be written as
$$S_{complex}(t) = A_\psi e^{\,2\pi j (f_d + f_\tau)t + 2\pi j f_c \tau_0}, \tag{2}$$
which can be rewritten in a discrete form as
$$S_{complex}(nT_S) = A_\psi e^{\,2\pi j (f_d + f_\tau)(nT_S) + 2\pi j f_c \tau_0}, \tag{3}$$
where $T_S$ is the sampling interval, $n$ is the number of sampling points, $f_d$ is the Doppler frequency deviation, $f_\tau = 2\mu\tau_0$ is the frequency generated by the range between the target and the mmWave radar, and $\tau_0$ is the delay at the initial position.
Firstly, we show the data processing procedure for each antenna in a group of chirp waves received by the radar radio frequency front end [37]. In the $T_c$ time interval, the radar echo signal of each vehicle target is defined by
$$S_{result}(t) = \sum_{i=0}^{N_t-1} e^{\,j2\pi\left[\frac{f_c v T_c\tau_0}{R_0}\,i + (f_d+f_\tau)\,t + \frac{\mu v T_c}{N_t R_0} + f_c\tau_0 t\right]}, \tag{4}$$
where $N_t$ is the number of pulses and $v$ is the velocity of a vehicle with the initial distance $R_0$.
The peak of Equation (4) is influenced by the range and velocity of the target. The first FFT is applied to approximate the range information between the vehicle and the radar. Once the distance information is obtained, the second FFT is performed to extract the velocity information of the target. The specific description is as follows:
$$S_{2D\text{-}FFT}(l,k) = e^{\,j2\pi f_c\tau_0}\cdot\sum_{i=0}^{N_t-1} e^{\,j2\pi\left(\frac{f_c v T_c\tau_0}{R_0}-\frac{l}{N_t}\right)i}\cdot\sum_{z=0}^{n-1} e^{\,j2\pi\left[(f_d+f_\tau)T_s z-\frac{kz}{n}\right]}, \tag{5}$$
where
$$k = (f_d + f_\tau)\cdot nT_S, \quad f_d \ll f_\tau, \tag{6}$$
and variable l can be denoted as
$$l = \frac{\tau_0 f_c v T_c N_t}{R_0} = f_d T_c N_t. \tag{7}$$
According to Equations (6) and (7), $f_d$ and $f_\tau$ can be obtained at the peak of Equation (5). Subsequently, the velocity and range information of the vehicle can be acquired. Following the processing of the mmWave radar echo signal by the first and second FFTs, the corresponding range velocity spectrograms are generated. In multi-antenna reception, the received signal $\gamma$ of $T$ targets with angle directions $\theta_j$, $j = 1, 2, \ldots, T$, on each element of the receiving array can be represented as a weighted sum of the $T$ echoes [5]:
$$\gamma(t) = \sum_{j=1}^{T} a(\theta_j)\, y_j(t), \tag{8}$$
where $a(\theta_m)$ represents the steering vector of the array and can be calculated by
$$a(\theta_m) \triangleq \left[1,\ e^{\,j2\pi f_c d\sin\theta_m/c},\ \ldots,\ e^{\,j2\pi f_c (M-1) d\sin\theta_m/c}\right]^T, \tag{9}$$
and $y_j(t)$ is the $j$-th received signal on the receiving antenna. Angle estimation necessitates employing multiple-antenna reception and can be derived through the third FFT, which is denoted by
$$A_{3D\text{-}FFT}(\theta) = \left|a^H(\theta)\,\gamma(t)\right|^2. \tag{10}$$
Finally, the SNR of the vehicle and clutter can be obtained based on the range (R), velocity (V), and angle (A) information [38].
After obtaining the RVA spectrogram and SNR, we filter out points with zero speed in the RV spectrogram using zero-speed detection. Subsequently, we apply the constant false alarm rate (CFAR) detection algorithm to eliminate clutter points with low SNR values [23]. This helps reduce the workload of labeling the data.
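To make the processing chain above concrete, the following minimal Python sketch (NumPy only) reproduces the first two FFT stages on a synthetic beat-signal cube and applies a basic cell-averaging CFAR threshold to the resulting range-velocity map. The cube dimensions, guard/training cell counts, and threshold scale are illustrative assumptions, not the parameters of the radar used in this work.

```python
import numpy as np

def range_velocity_map(iq_cube):
    """iq_cube: complex beat samples of shape (num_chirps, num_samples).
    Returns the magnitude of the range-velocity (RV) map via two FFTs."""
    rng = np.fft.fft(iq_cube, axis=1)                          # 1st FFT: fast time -> range bins
    rv = np.fft.fftshift(np.fft.fft(rng, axis=0), axes=0)      # 2nd FFT: slow time -> Doppler bins
    return np.abs(rv)

def ca_cfar(rv_map, guard=2, train=8, scale=4.0):
    """Basic 1D cell-averaging CFAR along the range axis of the RV map.
    Returns a boolean detection mask."""
    num_dopp, num_rng = rv_map.shape
    mask = np.zeros_like(rv_map, dtype=bool)
    half = guard + train
    for d in range(num_dopp):
        row = rv_map[d]
        for r in range(half, num_rng - half):
            # Training cells on both sides of the cell under test, excluding guard cells.
            window = np.r_[row[r - half:r - guard], row[r + guard + 1:r + half + 1]]
            noise = window.mean()                              # local clutter/noise estimate
            mask[d, r] = row[r] > scale * noise
    return mask

# Example: synthetic cube with one moving target plus noise.
rng_gen = np.random.default_rng(0)
cube = (rng_gen.standard_normal((64, 256)) + 1j * rng_gen.standard_normal((64, 256))) * 0.1
chirps, samples = np.meshgrid(np.arange(64), np.arange(256), indexing="ij")
cube += np.exp(2j * np.pi * (0.2 * samples + 0.1 * chirps))    # beat tone + Doppler tone
detections = ca_cfar(range_velocity_map(cube))
print(detections.sum(), "cells above the CFAR threshold")
```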

3.2. Alternation Radar-Comm System Model

For the radar-comm system with M transmission antennas [5], the steering vector is
$$a(\theta) = \frac{1}{\sqrt{M}}\left[1,\ e^{\,j2\pi\Delta\sin(\theta)},\ \ldots,\ e^{\,j2\pi(M-1)\Delta\sin(\theta)}\right]^T. \tag{11}$$
The transmission waveform with length $L$, $X\in\mathbb{C}^{M\times L}$, can be defined as
$$X = FPS, \tag{12}$$
where $F$ is the beamforming matrix with $\|f_i\|_2 = 1$, which can be denoted by
$$F = [f_1, f_2, \ldots, f_M] \in \mathbb{C}^{M\times M}, \tag{13}$$
$P$ is the power allocation diagonal matrix with the total transmission power $\sum_i p_i = P_T$, which can be calculated by
$$P = \mathrm{diag}\{p_1, p_2, \ldots, p_M\}, \tag{14}$$
and $S\in\mathbb{C}^{M\times L}$ is the random complex signal, which can be expressed by
$$SS^H = L\, I_{M\times M}. \tag{15}$$
The transmission radiation pattern towards the angle θ is represented as
$$P_d(\theta) = a^H(\theta)\, R_X\, a(\theta), \tag{16}$$
where
$$R_X = \frac{1}{L}XX^H \tag{17}$$
is the spatial sample covariance matrix. Following [34], the radar-comm system operates in two main modes: searching and deciding. The searching mode is used to detect vehicles in the area, determining the initial positions and velocities of vehicle targets. The beam pattern is omnidirectional, i.e., $P_d(\theta)$ is constant for every angle $\theta$, which requires $R_X = I_{M\times M}$; a feasible solution is $p_i = P_T/M$ with $f_i$ being an all-zero vector except for a one in the $i$-th element. We model $T$ true targets distributed in the area, of which $T_k$ vehicle targets are detected at time $k$.
In the deciding mode, the radar-comm system forms several beams aligned with the detected targets, up to a maximum of $M$. The beams carry the downlink communication data, and the reflected echoes are used for target detection. With many antennas, the array forms a pencil beam. A feasible solution for the beamformers is $f_q = a(\theta_q)$, where $\theta_q$ are the target angles [34].
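As a quick numerical illustration of the two modes, the sketch below evaluates the transmit beampattern $P_d(\theta) = a^H(\theta) R_X a(\theta)$ from Equations (16) and (17) for the omnidirectional covariance $R_X = I_{M\times M}$ (searching mode) and for a single pencil beam $f_q = a(\theta_q)$ (deciding mode). The array size, element spacing, and target angle are arbitrary assumptions.

```python
import numpy as np

M = 16                                    # number of transmit antennas (assumed)
delta = 0.5                               # element spacing in wavelengths (assumed)

def steering(theta_rad):
    """Unit-norm steering vector a(theta) for a uniform linear array."""
    m = np.arange(M)
    return np.exp(2j * np.pi * m * delta * np.sin(theta_rad)) / np.sqrt(M)

def beampattern(R_x, thetas):
    """P_d(theta) = a^H(theta) R_x a(theta) evaluated on a grid of angles."""
    return np.array([np.real(steering(t).conj() @ R_x @ steering(t)) for t in thetas])

thetas = np.deg2rad(np.linspace(-90, 90, 181))

# Searching mode: R_X = I gives a flat (omnidirectional) pattern.
P_search = beampattern(np.eye(M), thetas)

# Deciding mode: a single beamformer f_q = a(theta_q) steered at 20 degrees (assumed angle).
f_q = steering(np.deg2rad(20.0)).reshape(-1, 1)
P_decide = beampattern(f_q @ f_q.conj().T, thetas)

print("searching pattern spread:", P_search.min(), P_search.max())            # nearly constant
print("deciding pattern peak at", np.rad2deg(thetas[np.argmax(P_decide)]), "deg")
```

The searching-mode pattern is essentially flat across angles, whereas the deciding-mode pattern concentrates the transmit power around the assumed target angle, mirroring the alternation described above.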
The channel capacity $C_{\tilde q,k}$ of the $\tilde q$-th target among the $T_k$ detected targets at slot time $k$ is
$$C_{\tilde q,k} = \log_2\left(1 + \frac{p_{\tilde q,k}\,G}{d_{\tilde q,k}^2 P_n}\right), \tag{18}$$
where $G$ is the antenna gain, $d_{\tilde q,k}$ is the distance between the $\tilde q$-th target and the BS at slot time $k$, $p_{\tilde q,k}$ is the transmission power of the $\tilde q$-th target at slot time $k$, $P_n = \rho_n (4\pi)^2/\lambda_c^2$, $\lambda_c$ is the wavelength of the wireless signal, and $\rho_n$ is the noise power. Thus, the total capacity of the radar-comm system is
$$C_k = \sum_{\tilde q=1}^{T_k} \log_2\left(1 + \frac{p_{\tilde q,k}\,G}{d_{\tilde q,k}^2 P_n}\right). \tag{19}$$
The total communication channel capacity when the q-th target of T targets is detected at slot time k can be expressed as
$$C_k = \sum_{q=1}^{T} a_{q,k}\cdot\log_2\left(1 + \frac{p_{q,k}\,G}{d_{q,k}^2 P_n}\right), \tag{20}$$
where $a_{q,k}\in\{0,1\}$ represents the detection status of the $q$-th target at slot time $k$. If the target is detected, $a_{q,k} = 1$, and otherwise, $a_{q,k} = 0$. Then, the average performance of $C_k$ in the deciding mode can be represented as
$$\mathbb{E}[C_k] = \mathbb{E}\left[\sum_{q=1}^{T} a_{q,k}\cdot\log_2\left(1 + \frac{p_{q,k}\,G}{d_{q,k}^2 P_n}\right)\right], \tag{21}$$
which can be selected as the objective function of the radar-comm system optimization problem. The above function can be simplified as
$$\mathbb{E}[C_k] = \sum_{q=1}^{T}\beta(a_{q,k}=1)\cdot\mathbb{E}\left[\log_2\left(1 + \frac{p_{q,k}\,G}{d_{q,k}^2 P_n}\right)\right] = \sum_{q=1}^{T}\eta_{q,k}\cdot\log_2\left(1 + \frac{p_{q,k}\,G}{d_{q,k}^2 P_n}\right), \tag{22}$$
where $\eta_{q,k} = \beta(a_{q,k}=1)$ is the detection probability of the $q$-th target at slot time $k$.
In the radar-comm system, the optimization of communication resource allocation typically involves maximizing the total channel capacity, which is written by
$$\max_{p_{q,k}}\ \sum_{q=1}^{T}\eta_{q,k}\cdot\log_2\left(1 + \frac{p_{q,k}\,G}{d_{q,k}^2 P_n}\right)\quad \mathrm{s.t.}\ \sum_{q=1}^{T_k} p_{q,k} = P_T,\ \ p_{q,k}\ge 0. \tag{23}$$
In Equation (23), the objective function is jointly concave in the powers, and this optimization problem can be solved using the Lagrangian method. The optimal power allocation converges to
$$p_{q,k}^* = \left[\frac{\eta_{q,k}}{\lambda\ln 2} - \frac{d_{q,k}^2 P_n}{G}\right]^+, \tag{24}$$
whose detailed derivation is provided in Appendix A.
Then, the optimal channel capacity can be calculated by
$$C_{total} = \sum_{q=1}^{T}\eta_{q,k}\cdot\log_2\left(1 + \left[\frac{\eta_{q,k}}{\lambda\ln 2} - \frac{d_{q,k}^2 P_n}{G}\right]^+\frac{G}{d_{q,k}^2 P_n}\right), \tag{25}$$
and it can be simplified to
$$C_{total} = \sum_{q=1}^{T}\mathbb{I}\!\left(\frac{\eta_{q,k}}{d_{q,k}^2}\ge\lambda\ln 2\cdot\frac{P_n}{G}\right)\cdot\eta_{q,k}\log_2\left(\frac{\eta_{q,k}\,G}{\lambda\ln 2\cdot d_{q,k}^2 P_n}\right), \tag{26}$$
where $\mathbb{I}(\cdot)$ is the indicator function with
$$\mathbb{I}(\cdot) = 1\ \ \text{if}\ \ \frac{\eta_{q,k}}{d_{q,k}^2}\ge\lambda\ln 2\cdot\frac{P_n}{G},\qquad \mathbb{I}(\cdot) = 0\ \ \text{if}\ \ \frac{\eta_{q,k}}{d_{q,k}^2}<\lambda\ln 2\cdot\frac{P_n}{G}. \tag{27}$$
From Equation (26), it is apparent that an increase in the detection probability $\eta_{q,k}$ or a reduction in $d_{q,k}$ augments the total channel capacity $C_{total}$. Given that $d_{q,k}$, the distance between the target and the BS, depends on the traffic data and is beyond direct control, the enhancement of $C_{total}$ hinges on improving $\eta_{q,k}$. Consequently, the primary challenge in bolstering integration performance lies in designing a more precise target detector.
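The closed form in Equation (24) is a detection-probability-weighted water-filling rule. The sketch below, a minimal illustration rather than the paper's implementation, finds the Lagrange multiplier $\lambda$ by bisection so that the allocated powers sum to $P_T$; the antenna gain, noise power, distances, and detection probabilities are placeholder values.

```python
import numpy as np

def waterfill_power(eta, d, G, P_n, P_T, iters=100):
    """Solve p_q* = [eta_q/(lambda ln2) - d_q^2 P_n / G]^+ with sum(p) = P_T
    by bisecting on the Lagrange multiplier lambda."""
    floor = d ** 2 * P_n / G                       # per-target "noise floor" term
    lam_lo, lam_hi = 1e-9, 1e9
    for _ in range(iters):
        lam = np.sqrt(lam_lo * lam_hi)             # bisect in log scale
        p = np.maximum(eta / (lam * np.log(2)) - floor, 0.0)
        if p.sum() > P_T:
            lam_lo = lam                           # powers too large -> increase lambda
        else:
            lam_hi = lam
    return p

# Illustrative placeholder values: 3 detected vehicles.
eta = np.array([0.95, 0.90, 0.80])                 # detection probabilities (assumed)
d = np.array([100.0, 150.0, 200.0])                # distances to the BS in meters (assumed)
G, P_n, P_T = 1e9, 1e-3, 5.0                       # assumed gain, noise power, total power (W)

p_opt = waterfill_power(eta, d, G, P_n, P_T)
capacity = np.sum(eta * np.log2(1.0 + p_opt * G / (d ** 2 * P_n)))
print("power allocation (W):", np.round(p_opt, 3), "sum =", round(p_opt.sum(), 3))
print("weighted capacity (bit/s/Hz):", round(capacity, 3))
```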

4. Vehicle Sensing Scheme Based on Radar Point Cloud

In this section, we propose a vehicle target sensing scheme utilizing 4D mmWave radar point cloud features ( R , V , A , SNR ) , which can be summarized into three parts. Firstly, leveraging real-world mmWave data, the urban traffic scenes involving vehicles can be classified. Secondly, after post-processing the collected mmWave radar data, this paper constructs the 4D radar point cloud datasets, annotates them with labels, and visualizes the targets within the point cloud. Finally, a novel vehicle target sensing scheme with deep learning techniques and 4D radar point cloud data is introduced.

4.1. Radar-Assisted Vehicle Sensing Scenes

This paper categorizes the collected real-world mmWave radar data into three scenes, as shown in Figure 2, each representing common traffic conditions on urban roads. Each scene depicts distinct urban traffic scenarios.
  • Scene I comprises a multitude of vehicle formations, showcasing diverse vehicle models, with the distance between vehicle targets and the mmWave radar distributed from far to near.
  • Scene II constitutes a mixed setting where individual vehicles and vehicle formations coexist, encompassing various vehicle types. Vehicle targets are distributed from far to near the mmWave radar.
  • Scene III represents the simplest scenario, consisting solely of individual vehicles of various types. There is no vehicle fleet, and the distance between each vehicle target and the mmWave radar varies from far to near.

4.2. Four-Dimensional Radar Point Cloud Data Processing

The dataset used in this study is partitioned into two distinct segments: RV spectrogram and 4D mmWave radar point cloud data. The RV spectrogram undergoes range and velocity FFT processing, while the mmWave radar point cloud data are processed through FFT and CFAR techniques. This segmentation facilitates our subsequent comparative experiments detecting vehicle targets using 4D mmWave radar point cloud data.
Additionally, each mmWave radar RV spectrogram is paired with a corresponding camera image to facilitate target labeling within the spectrogram. Figure 3a,b provide an illustrative frame of the captured camera image and RV spectrogram dataset. We adopt target sensing labeling methods commonly used in image vision, as shown in Figure 3c. Specifically, for the mmWave radar RV spectrogram, we employ 2D bounding boxes to label vehicle targets [39].
For 4D radar point cloud data featuring $(R, V, A, \mathrm{SNR})$, as shown in Figure 4, we introduce a novel labeling approach that does not rely on camera images. Initially, we establish a threshold at a velocity value of 0 to remove obvious clutter points. Subsequently, we apply the CFAR detection algorithm to filter out clutter points with lower SNR values. Finally, we employ the correlation matrix between frames of radar point cloud data, identifying points with inter-frame correlation as target points and those without correlation as clutter points. This method significantly reduces the time required for target labeling.
As depicted in Figure 5, we have chosen a subset of processed 4D radar point cloud data for visualization. Figure 5a illustrates the distribution of 4D radar point cloud data on a two-dimensional RV plane, where colors denote the SNR values of individual points. Brighter colors indicate higher SNR values. The corresponding three-dimensional scene display is depicted in Figure 5b, and the color of each point is determined by its SNR value, where brighter colors signify higher values.
In contrast to RV spectrogram data, manually labeling each point within the extensive mmWave radar point cloud data proves highly costly. To solve this problem, we label radar point cloud data by analyzing inter-frame correlation. Specifically, we leverage the correlation between frames in radar point cloud data to build a correlation matrix. Points exhibiting significant correlation across multiple frames are identified as target points, while those lacking such correlation are categorized as clutter points. This method enables us to enhance the precision and dependability of target detection by accurately discerning between target and clutter points. Figure 6a illustrates the 3D bounding box of the vehicle target, while Figure 6b displays point labels in the 4D radar point clouds, where the red dots signify the target, whereas the blue dots represent clutter. This integrated methodology significantly diminishes the time and effort required for labeling.
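A minimal sketch of this labeling idea is given below. Because the exact form of the inter-frame correlation matrix is not spelled out here, the sketch substitutes a simple nearest-neighbor association of points across consecutive frames in $(R, V, A)$ space; the association radius, the number of frames a point must persist, and the toy data are all assumptions.

```python
import numpy as np

def label_by_frame_correlation(frames, match_radius=1.5, min_hits=3):
    """frames: list of (N_i, 4) arrays with columns (R, V, A, SNR) for each radar frame.
    A point is labeled as a vehicle (1) if it can be chained to a nearby point in each of
    the next `min_hits` frames; otherwise it is labeled clutter (0)."""
    labels = [np.zeros(len(f), dtype=int) for f in frames]
    for t, frame in enumerate(frames):
        for i, pt in enumerate(frame):
            cur, hits = pt[:3], 0
            for dt in range(1, min_hits + 1):
                if t + dt >= len(frames) or len(frames[t + dt]) == 0:
                    break
                nxt = frames[t + dt][:, :3]          # associate in (R, V, A) space only
                dist = np.linalg.norm(nxt - cur, axis=1)
                j = int(dist.argmin())
                if dist[j] < match_radius:
                    hits += 1
                    cur = nxt[j]                     # follow the matched point into the next frame
                else:
                    break
            if hits >= min_hits:
                labels[t][i] = 1                     # persistent across frames -> vehicle point
    return labels

# Toy example: one target approaching ~1 m per frame plus 5 random clutter points per frame.
rng = np.random.default_rng(1)
frames = []
for t in range(6):
    target = np.array([[50.0 + 1.0 * t, 10.0, 2.0, 25.0]])          # (R, V, A, SNR)
    clutter = np.column_stack([rng.uniform(20, 80, 5), rng.uniform(-15, 15, 5),
                               rng.uniform(-10, 10, 5), rng.uniform(5, 15, 5)])
    frames.append(np.vstack([target, clutter]))

labels = label_by_frame_correlation(frames)
print("frame-0 labels (1 = vehicle, 0 = clutter):", labels[0])
```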
However, it is worth noting that in some cases, the SNR values of certain clutter points may surpass that of the target points. Consequently, relying solely on straightforward signal processing methods, like the CFAR detection algorithm, may not suffice for effectively distinguishing between clutter and target points. In addition, in the fleet scenes, such as Scene I and Scene II, the radar signal often undergoes multiple reflections between vehicles. This complicates the distinction between clutter points and vehicle target points, presenting a challenge for conventional detection algorithms.

4.3. Vehicle Detection Scheme

Given these challenges and the requirement to achieve more accurate and detailed target detection within mmWave radar point clouds, an effective vehicle target detection method is needed. The PointNet algorithm is applied for its ability to effectively process point cloud data, particularly in 3D object classification and segmentation and in lidar point cloud detection [40]. On this basis, this paper proposes a novel neural network architecture. This architecture is constructed to handle the 4D point cloud dataset and aims to classify and segment it across diverse scenes. Ultimately, the results show that the proposed scheme enhances the precision of vehicle target detection compared with the benchmarks.
As depicted in Figure 7, the proposed scheme consists of three integral components: the transformer block, the scene classification block, and the vehicle detection block. The transformer block incorporates a self-attention layer, designed to streamline dimensionality reduction while expediting linear projection and residual connections. The input data comprise a set of six-channel vectors, one per point, $F' = (x, y, R, V, A, \mathrm{SNR})$. The transformer block is crucial in fostering information exchange among local feature vectors within the point cloud data. This process generates new feature vectors for all points, significantly enriching the interconnections between each point.
The proposed scheme takes 4D mmWave radar point cloud data as the input of size $N_p\times F$, where $N_p$ is the maximum number of points in a sample, $\tilde p\in\mathcal{P} = \{P_i\}_{i=1,\ldots,M_p}$ is a point cloud sample, and $F = (R, V, A, \mathrm{SNR})$ are the 4D features of the point cloud data. In the scene classification block, multiple multi-layer perceptrons (MLPs) and a max-pooling layer (MP) are employed to obtain the global feature of sample $\tilde p$. Initially, we augment the dimensionality of the point cloud data by passing it through multiple MLP layers and using batch normalization (BN) layers to prevent overfitting. This process aims to encapsulate as much information as possible for all points within the current sample, which can be written as
$$o_1 = \delta\left(\mathrm{BN}_1\left(\mathrm{MLP}_1\left(N_p\times\dot F\right)\right)\right), \tag{28}$$
where $\dot F$ denotes the input features ($F$ or $F'$), $o_1$ is the network output after the first dimension expansion, $\mathrm{MLP}_1(\cdot)$ is the first MLP operation, $\mathrm{BN}_1(\cdot)$ is the first BN operation, and $\delta(\cdot)$ is the ReLU activation function. Then, the subsequent dimension expansions of the point cloud can be represented by
$$o_2 = \delta\left(\mathrm{BN}_2\left(\mathrm{MLP}_2(o_1)\right)\right),\ \ \ldots,\ \ o_n = \mathrm{BN}_n\left(\mathrm{MLP}_n(o_{n-1})\right), \tag{29}$$
where $n$ represents the number of dimension expansions. Subsequently, the transformer block operation $\mathrm{TB}(\cdot)$ is employed to augment the exchange of information among local feature vectors within the point cloud data sample $\tilde p$ and obtain $\mathrm{TB}(o_n)$.
Then, we utilize a max-pooling operation $M_{\max}(\cdot)$ to extract global features from the point cloud data, which is denoted by
$$o_{global} = M_{\max}\left(\mathrm{TB}(o_n)\right), \tag{30}$$
where $o_{global}$ is a $1\times 1024$ one-dimensional vector.
Finally, we employ multiple fully connected (FC) layers and BN layers to integrate and compress features of the point cloud by connecting them to neurons, which is calculated by
$$o_{FC,1} = \delta\left(\mathrm{BN}_1\left(\mathrm{FC}_1(o_{global})\right)\right),\quad o_{FC,2} = \delta\left(\mathrm{BN}_2\left(\mathrm{FC}_2(o_{FC,1})\right)\right),\ \ \ldots,\ \ o_{FC,n} = \mathrm{BN}_n\left(\mathrm{FC}_n(o_{FC,n-1})\right), \tag{31}$$
where $\mathrm{FC}(\cdot)$ denotes the fully connected operation, and the scene classification probability $\xi$ can be calculated by the softmax function, which is written as
$$\xi = \frac{\exp(o_{FC,n})}{\mathbf{1}^T\exp(o_{FC,n})}, \tag{32}$$
where $K$ is the number of scene categories, $\mathbf{1}$ denotes a $K$-dimensional all-ones vector, $\xi\in\mathbb{R}^K$, and $o_{FC,n}\in\mathbb{R}^K$.
For the vehicle detection task, the original features of the point cloud $f_1\in\mathbb{R}^{N_p\times\dot F}$, the initial 64-dimensional expanded features $f_2\in\mathbb{R}^{N_p\times 64}$, the global features $f_3\in\mathbb{R}^{N_p\times 1024}$, and the scene classification scores $f_4\in\mathbb{R}^{N_p\times K}$ are amalgamated to enrich the representation capacity of the point cloud data, which is denoted by
$$f = [f_1, f_2, f_3, f_4]\in\mathbb{R}^{N_p\times(1088 + K + \dot F)}. \tag{33}$$
This fusion of feature information from diverse levels aims to capture the local and global information within point clouds more effectively, thereby enhancing the accuracy and resilience of detection tasks.
Since the cascaded feature f constitutes a high-dimensional tensor, the multiple MLP layers are employed to effectively reduce the dimensionality of the vector by managing the number of neurons, which can be represented by
$$O_1 = \delta\left(\mathrm{BN}_1\left(\mathrm{MLP}_1(f)\right)\right),\quad O_2 = \delta\left(\mathrm{BN}_2\left(\mathrm{MLP}_2(O_1)\right)\right),\ \ \ldots,\ \ O_n = \mathrm{BN}_n\left(\mathrm{MLP}_n(O_{n-1})\right). \tag{34}$$
Then, the output layer employs the softmax function to compute the probability distribution of each point belonging to various categories, which can be calculated by
$$\gamma_{\tilde p_j,0}(f) = \frac{e^{O_n(\tilde p_j,0)}}{e^{O_n(\tilde p_j,0)} + e^{O_n(\tilde p_j,1)}},\qquad \gamma_{\tilde p_j,1}(f) = \frac{e^{O_n(\tilde p_j,1)}}{e^{O_n(\tilde p_j,0)} + e^{O_n(\tilde p_j,1)}}, \tag{35}$$
where $\gamma_{\tilde p_j,0}(f)$ is the prediction probability of the vehicle detection block for the clutter points, $\gamma_{\tilde p_j,1}(f)$ is the prediction probability of the vehicle detection block for the vehicle points, and $j = 1, 2, \ldots, N_p$ indexes the $j$-th point in the point cloud data sample $\tilde p$.
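To make the block structure concrete, the following PyTorch sketch assembles a simplified version of the pipeline: per-point MLPs with batch normalization, a transformer encoder block for point-to-point information exchange, max pooling for the global feature, concatenation of the original, local, global, and scene features, and a per-point two-class head for clutter versus vehicle. The layer widths, the single encoder layer, and the omission of the feature-transformation (alignment) branch are simplifications under stated assumptions, not the exact implementation.

```python
import torch
import torch.nn as nn

class PointDetectionSketch(nn.Module):
    """Simplified six-channel point cloud detector: MLP -> transformer -> max-pool
    -> concat(local, global, scene) -> per-point classification (vehicle vs. clutter)."""

    def __init__(self, in_channels=6, num_scenes=3):
        super().__init__()
        self.mlp1 = nn.Sequential(nn.Conv1d(in_channels, 64, 1), nn.BatchNorm1d(64), nn.ReLU())
        self.mlp2 = nn.Sequential(nn.Conv1d(64, 1024, 1), nn.BatchNorm1d(1024), nn.ReLU())
        encoder_layer = nn.TransformerEncoderLayer(d_model=1024, nhead=8,
                                                   dim_feedforward=1024, batch_first=True)
        self.transformer = nn.TransformerEncoder(encoder_layer, num_layers=1)
        self.scene_head = nn.Linear(1024, num_scenes)
        self.seg_head = nn.Sequential(nn.Conv1d(in_channels + 64 + 1024 + num_scenes, 256, 1),
                                      nn.BatchNorm1d(256), nn.ReLU(), nn.Conv1d(256, 2, 1))

    def forward(self, x):                      # x: (batch, channels, num_points)
        local64 = self.mlp1(x)                 # per-point 64-dim features
        feats = self.mlp2(local64)             # per-point 1024-dim features
        feats = self.transformer(feats.transpose(1, 2)).transpose(1, 2)  # point-to-point attention
        global_feat = feats.max(dim=2).values  # (batch, 1024) global feature via max pooling
        scene_logits = self.scene_head(global_feat)
        n = x.shape[2]
        expand = lambda t: t.unsqueeze(2).expand(-1, -1, n)
        fused = torch.cat([x, local64, expand(global_feat), expand(scene_logits)], dim=1)
        point_logits = self.seg_head(fused)    # (batch, 2, num_points): clutter vs. vehicle
        return scene_logits, point_logits

# Example forward pass with a random six-channel point cloud of 300 points.
model = PointDetectionSketch()
scene_logits, point_logits = model(torch.randn(4, 6, 300))
print(scene_logits.shape, point_logits.shape)   # torch.Size([4, 3]) torch.Size([4, 2, 300])
```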

4.4. Loss Function and Algorithm Design

The loss function for the proposed vehicle target detection scheme involves both a scene classification task and a vehicle detection task. The loss function for the overall target detection component can be formulated as
$$L_{total} = L_{reg} + \omega_{cls}L_{cls} + \omega_{seg}L_{seg}, \tag{36}$$
where $L_{reg} = \|I - UU^T\|_F^2$ is the loss associated with the feature transformation matrix; this matrix transforms the point cloud data within local coordinate systems, allowing the network to capture the local features of the point cloud data more effectively. $I$ is the identity matrix, and $U$ is the feature alignment matrix. $L_{cls}$ represents the loss associated with scene classification, $L_{seg}$ is the loss associated with vehicle detection (point segmentation), and the two losses are weighted by their corresponding parameters $\omega_{cls}$ and $\omega_{seg}$. $L_{cls}$ is calculated by
$$L_{cls}^{i} = -\frac{1}{M_p}\sum_{j=1}^{N_t} Y_{label}^{ij}\ln\tilde p^{ij}, \tag{37}$$
where $M_p$ is the sample size of the input, $Y_{label}^{ij}$ corresponds to the true label of the $i$-th sample, and $\tilde p^{ij}$ represents the predicted probability that the $i$-th sample belongs to the $j$-th category.
The optimization of the loss function is not always directly reflected in the final performance of the model. To fully evaluate the performance of our method, we use two widely recognized evaluation metrics: mean average precision (mAP) and mean intersection over union (mIOU). mAP measures the performance of the object detection model by taking both precision and recall into account, and it can be calculated by
$$mAP = \frac{1}{K}\sum_{i=1}^{K}AP_i, \tag{38}$$
where $K$ is the number of categories and $AP$ is the area under the precision-recall curve for a specific class, calculated by $AP = \sum_n (R_n - R_{n-1})\times P_n$, where precision $P = \frac{TP}{TP+FP}$, recall $R = \frac{TP}{TP+FN}$, $TP$ denotes true positives, $FP$ denotes false positives, and $FN$ denotes false negatives.
mIOU is a metric that evaluates the performance of the segmentation task, and it measures the consistency between the predicted segmentation and the real segmentation. It can be denoted as
$$mIOU = \frac{1}{K}\sum_{i=1}^{K}IOU_i, \tag{39}$$
where $IOU_i$ is the IOU for class $i$. For each point in a point cloud, the network predicts a class label. The IOU for each class is calculated as $IOU = \frac{TP}{TP+FP+FN}$.
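For reference, the short sketch below computes the per-class IoU and the resulting mIOU from predicted and true point labels exactly as defined above; the toy label vectors are illustrative only.

```python
import numpy as np

def miou(y_true, y_pred, num_classes=2):
    """Mean IoU over classes, with IoU = TP / (TP + FP + FN) per class."""
    ious = []
    for c in range(num_classes):
        tp = np.sum((y_pred == c) & (y_true == c))
        fp = np.sum((y_pred == c) & (y_true != c))
        fn = np.sum((y_pred != c) & (y_true == c))
        ious.append(tp / (tp + fp + fn) if (tp + fp + fn) > 0 else 1.0)
    return float(np.mean(ious)), ious

# Toy example: 10 points, class 1 = vehicle, class 0 = clutter.
y_true = np.array([1, 1, 1, 0, 0, 0, 0, 1, 0, 0])
y_pred = np.array([1, 1, 0, 0, 0, 0, 1, 1, 0, 0])
mean_iou, per_class = miou(y_true, y_pred)
print("per-class IoU:", np.round(per_class, 3), "mIOU:", round(mean_iou, 3))
```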
By combining mAP and mIOU, we can evaluate the performance from different perspectives. The proposed vehicle-detection algorithm PTDN is summarized in Algorithm  1.
Algorithm 1 Vehicle Detection Scheme Based on 4D Radar Point Cloud Data
Input: Six-channel 4D point cloud data sample $\tilde p$ with $N_p$ points $\{p_{i,j}\}_{j=1,2,\ldots,N_p}\in\tilde p_i$; each point $p_{i,j}$ is represented by its coordinates and features $F' = (x, y, R, V, A, \mathrm{SNR})$; the number of scene classes $K$; the total epoch number $N_n$; etc.
1: Initialize the parameters of Net1 and Net2.
2: Set the trained epoch counter $n_p = 0$.
3: while $n_p < N_n$ do
4:    Apply $\mathrm{MLP}(\cdot)$ and $\mathrm{BN}(\cdot)$ to $\tilde p_i$ to map each point $p_{i,j}$ to a higher-dimensional space $o_n$ and obtain feature vectors by (29).
5:    for $l = 1$ to $L$ do
6:       Project the embedded $p_{i,j}$ into query $\tilde Q$, key $\tilde K$, and value $\tilde V$ matrices.
7:       Compute attention scores between pairs of $p_{i,j}$ using $\tilde Q$ and $\tilde K$ to capture global dependencies and relations between points.
8:       $\mathrm{TB}_l(o_n)$ ← pass $\mathrm{TB}_{l-1}(o_n)$ through transformer block $l$.
9:    end for
10:   $o_{global}$ ← max-pooling $M_{\max}(\cdot)$ over $\mathrm{TB}_l$ to obtain the global feature by (30).
11:   $\xi$ ← pass $o_{global}$ through $\mathrm{FC}(\cdot)$ to obtain scene class probabilities $\xi\in\mathbb{R}^K$ by (31) and (32).
12:   Amalgamate $F'$, $o_1$, $o_{global}$, and $o_{FC,n}$ through (28), (30), and (31) into the features $N_p\times f$.
13:   Apply $\mathrm{MLP}(\cdot)$, $\mathrm{BN}(\cdot)$, and softmax to $N_p\times f$ to obtain detection probabilities $\gamma$ by (34) and (35).
14:   Forward propagate Net1 and calculate the loss with (36), setting $\omega_{seg} = 0$.
15:   Forward propagate Net2 and calculate the loss with (36), setting $\omega_{cls} = 0$.
16:   Backward propagate and update all parameters in Net1 and Net2.
17:   $n_p = n_p + 1$.
18: end while
Output: Predicted scene classification probabilities $\xi$ and predicted vehicle detection probabilities $\gamma$.

5. Experimental Results

This paper focuses on a search-deciding alternation procedure, where the system model encompasses both radar sensing and communication components. The experimentation involves scene classification, vehicle detection, and communication performance, and the scene with radar point clouds consists of approximately 300 points. The scenes include up to 10 vehicles, with small vehicles typically represented by around 10 points each and larger vehicles by approximately 30 points.
The training and testing sets are randomly selected from the radar point cloud dataset across the different vehicle scenes to ensure that they fully cover the scenes. The training dataset is used to train the network model, and the test data are used to evaluate the generalization ability of the trained model. The testing dataset is completely disjoint from the training dataset. In total, 80.62% of the radar point cloud data are used for training the network, and 19.37% are used for testing.
The proposed methods are implemented with Python-based machine learning frameworks, namely PyTorch. The simulation runs on an Intel(R) Core(TM) i9-10900K CPU @ 3.7 GHz and an NVIDIA GeForce RTX 3080. The network model architecture consists of several layers, including multiple transformer encoder layers with eight attention heads per layer, a hidden dimension of 512 units, and a feedforward network with 1024 units. The initial learning rate for the network is set to $10^{-3}$, and the batch size is 32. For both the scene classification and vehicle detection tasks, we conduct 200 iterations (epochs). In addition, a heuristic method is used to select the multi-task loss weights $\omega_{cls}$ and $\omega_{seg}$. In the experimental part of this paper, the weight parameter of the vehicle target is set to $\omega_{cls\_target} = \omega_{seg\_target} = 1$, and the weight parameter of the clutter points is set to $\omega_{cls\_clutter} = \omega_{seg\_clutter} = 0.5$. The experiments are divided into scene classification, vehicle detection, and communication resource allocation.
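A hedged sketch of this training configuration is given below: an Adam-style optimizer at learning rate $10^{-3}$, batch size 32, 200 epochs, and cross-entropy losses with the clutter class weighted 0.5 and the vehicle class weighted 1.0. The optimizer choice, the synthetic tensors standing in for the radar dataset, and the stripped-down stand-in model are assumptions for illustration only.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

class TinyPointModel(nn.Module):
    """Stand-in for the full detector: per-point MLP with a scene head and a point head."""
    def __init__(self, in_ch=6, num_scenes=3):
        super().__init__()
        self.backbone = nn.Sequential(nn.Conv1d(in_ch, 64, 1), nn.ReLU())
        self.scene_head = nn.Linear(64, num_scenes)
        self.point_head = nn.Conv1d(64, 2, 1)
    def forward(self, x):
        h = self.backbone(x)                          # (batch, 64, points)
        return self.scene_head(h.max(dim=2).values), self.point_head(h)

# Assumed synthetic stand-in for the 4D radar point cloud dataset.
points = torch.randn(320, 6, 300)                     # (samples, channels, points)
scene_labels = torch.randint(0, 3, (320,))            # scene class per sample
point_labels = torch.randint(0, 2, (320, 300))        # 0 = clutter, 1 = vehicle per point
loader = DataLoader(TensorDataset(points, scene_labels, point_labels), batch_size=32, shuffle=True)

model = TinyPointModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)           # initial learning rate 1e-3
seg_loss_fn = nn.CrossEntropyLoss(weight=torch.tensor([0.5, 1.0]))  # clutter 0.5, vehicle 1.0
cls_loss_fn = nn.CrossEntropyLoss()

for epoch in range(200):                              # 200 epochs, as in the experiments
    for x, y_scene, y_point in loader:
        scene_logits, point_logits = model(x)
        loss = cls_loss_fn(scene_logits, y_scene) + seg_loss_fn(point_logits, y_point)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```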

5.1. Scene Classification Results

This paper utilizes the widely used RV spectrogram as the input for the YOLO algorithm, which has demonstrated strong performance in radar spectrogram detection. After vehicle detection processing, a threshold judgment method is applied to ascertain the distance between targets, and the scene type is determined based on the threshold $\varpi$. The features $F = (R, V, A, \mathrm{SNR})$ are employed for the four-channel PointNet algorithm, and the features $F' = (x, y, V, R, A, \mathrm{SNR})$ are used as the input for the six-channel PointNet and the proposed scheme. The above methods are employed to classify the scenes after an equal number of iterations. As illustrated in Figure 8, it is evident that both during training and testing, the accuracy of the proposed scheme exhibits a consistent upward trend, while the loss function value steadily decreases and ultimately converges, which indicates the convergence of the proposed algorithm.
For scene classification, we use YOLO, VoxelNet [41], PointNet, and PointPillars [42] as benchmarks. The scene classification results of the proposed scheme PTDN and the benchmarks are shown in Table 1. Comparatively, our scheme attains a final testing accuracy of 95.31%, with an mIOU value of 0.9223. Notably, higher accuracy corresponds to higher values of mAP and mIOU. Hence, the proposed scheme exhibits competitive performance in the scene classification experiments.

5.2. Vehicle Detection Results

Following the scene classification experiment, a scene is randomly chosen for the vehicle detection experiment. During this experiment, we amalgamate the initial features F = R , V , A , SNR or F = x , y , V , R , A , SNR of the mmWave radar point cloud data with the distinctive global features of the selected scene. This fusion of features can extract the relative relationships between each point within the same data sample, thereby enhancing vehicle detection.

5.2.1. Scene I

Scene I comprises a multitude of vehicle formations, showcasing diverse vehicle models, with the distance between vehicle targets and the mmWave radar distributed from far to near.
As depicted in Figure 9a,b, the training and testing accuracy demonstrate a consistent upward trend and ultimately reach 95.45% and 93.46%, while the training and testing loss function values exhibit a downward trend. However, notable fluctuations are observed, which can be attributed to the complexity of the scenario. Figure 9c illustrates the detected vehicle points and clutter points in Scene I, with green points representing vehicles and blue points denoting the clutter points. There is an overlap between the vehicle and clutter points, significantly impacting the accuracy of vehicle target detection.

5.2.2. Scene II

Scene II constitutes a mixed setting where individual vehicles and vehicle formations coexist, encompassing various vehicle types. Vehicle targets are distributed from far to near the radar.
As shown in Figure 10a,b, the training and testing accuracy demonstrate a consistent upward trend and ultimately reach 96.59% and 95.57%, while the training and testing loss function values exhibit a downward trend. In comparison to Scene I, the Scene II complexity is lower, and it is evident that the accuracy and loss function curves exhibit fewer fluctuations. Figure 10c corroborates this observation by presenting the absence of overlap between vehicle and clutter points. However, in Scene II, the presence of a convoy leads to high similarity and interference between certain points among vehicles, thus hindering vehicle differentiation.

5.2.3. Scene III

Scene III represents the simplest scenario, consisting solely of individual vehicles of various types. There is no vehicle fleet, and the distance between each vehicle target and the radar varies from far to near.
As depicted in Figure 11a,b, throughout the training and testing phases in Scene III, detection accuracy rises and the loss function values decrease consistently, with detection accuracy ultimately reaching 98.05% and 97.85%, respectively. Compared with the preceding scenes, Scene III demonstrates notably improved detection accuracy and reduced fluctuations in loss function values during training and testing. This improvement can be attributed to the favorable conditions present in Scene III, which contribute to a more stable training process. Notably, the complexity of Scene III is lower than that of Scene I and Scene II, with no overlap between vehicle and clutter points, nor interference among vehicle points themselves, as shown in Figure 11c. Consequently, the detection accuracy of Scene III surpasses that of the previous scenes, while the loss function value is minimized.
To assess the vehicle detection performance of the proposed scheme, this paper selects the four-channel and six-channel traditional PointNet algorithms as benchmarks. As illustrated in Figure 12a,b, the proposed scheme exhibits the highest vehicle detection accuracy and mIOU values across all three scenes. In addition, to further illustrate the performance of the proposed algorithm, we conduct additional statistical analyses to complement our experimental results, including receiver operating characteristic (ROC) curves. As shown in Figure 13, we choose the more complex Scene I for evaluation. The ROC curve of the proposed algorithm consistently stays above the other curves, indicating a higher true positive rate at various false positive rates. This means the proposed algorithm can better identify positive cases while maintaining a lower rate of false positives. The area under the ROC curve (AUC) of the proposed algorithm is correspondingly higher; the AUC measures the model's ability to distinguish between positive and negative cases, and a higher AUC implies better predictive performance.
In summary, an advanced scheme leveraging 4D mmWave radar point cloud data is introduced in this paper. The design of this comparative framework not only underscores the benefits of utilizing point cloud data but also validates the competitive performance of the proposed scheme. Compared to the benchmarks, the proposed scheme achieves competitive performance enhancements, reports acceptable detection accuracy, and achieves an inference time of 21.37 ms, demonstrating its effectiveness.

5.3. Communication Performance

The communication experiments examine the communication performance achieved by the proposed vehicle detection scheme in a three-vehicle scene, where the distances from the three vehicles to the BS are 100 m, 132 m, and 204 m, respectively, as shown in Figure 3. Equation (24) is used to solve the proposed optimization problem (23), wherein power allocation is conducted for the three vehicle targets under the constraint of a constant total transmission power $P_T = 5$ W. The outcomes of the power allocation process are depicted in Figure 14a: the power levels of the three vehicles are 2.24 W, 1.91 W, and 0.85 W, respectively, and the water level value is 2.676 W. Specifically, we analyze the power level allocation and channel capacity achieved by the proposed scheme and compare them with the benchmarks. This evaluation provides insights into the overall effectiveness of the proposed scheme in enhancing detection accuracy and communication performance.
After acquiring the detection probability derived from the proposed vehicle detection scheme, optimizing power allocation with a fixed detection probability can maximize channel capacity. As shown in Figure 14b, it becomes evident that our vehicle detection scheme optimally enhances channel capacity under various transmission power levels, signifying that higher detection accuracy correlates with superior communication performance.
Furthermore, the experiments on the total channel capacity across varying vehicle detection probabilities are conducted, showcasing the overall channel capacity enhancements attributed to the proposed vehicle detection scheme and the benchmark across three distinct scenes. As depicted in Figure 15, the proposed vehicle detection scheme exhibits the most significant communication performance gains among the three scenes and achieves the highest total channel capacity.

6. Conclusions

A searching–deciding scheme for an alternation radar-communication (radar-comm) system is proposed to enhance traffic safety and efficiency through real-time interaction between vehicles and roads. We first analyze the channel capacity for the searching–deciding model and conclude that a larger detection probability leads to superior communication performance. Then, we process the echo data from real-world mmWave radar into 4D point cloud datasets, which clearly describe vehicle features and thus improve vehicle detection performance. The proposed vehicle detection scheme can accurately classify various scenes and detect vehicle targets based on deep learning methods. Extensive experiments on collected real-world datasets demonstrate that the proposed scheme outperforms benchmarks in terms of vehicle detection probability and channel capacity. In future work, we will investigate advanced vehicle detection and signal processing techniques for adverse weather conditions, such as rain and snow.

Author Contributions

L.C., K.L., X.W. and Z.Z. developed the theory and system model. L.C., K.L., X.W. and Z.Z. performed and analyzed the experimental results. L.C., Z.Z., K.L., Q.G. and X.W. wrote and edited the paper. X.W. provided financial support for the project leading to this publication. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Key Research and Development Program of China under Grant No. 2021YFB1600503, the National Nature Science Foundation of China under Grant No. U2233216, and the Postdoctoral Science Foundation of China under Grant Number GZC20242161.

Data Availability Statement

The data presented in this study are available upon request from the corresponding author. The data are not publicly available due to privacy.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
ISAC	Integrated sensing and communication
mmWave	Millimeter-wave
BS	Base station
SNR	Signal-to-noise ratio
RCNN	Region-based convolutional neural network
YOLO	You only look once
RPC	Radar point cloud
VPC	Virtual point cloud
RBFGS	Riemannian Broyden–Fletcher–Goldfarb–Shanno
HIM	Hybrid index modulation
FFT	Fast Fourier transform
ICSI	Interference channel state information
MIMO	Multiple-input multiple-output
DFRC	Dual-functional radar-communication
FMCW	Frequency-modulated continuous waveform
CFAR	Constant false alarm rate
mAP	Mean average precision
mIOU	Mean intersection over union

Appendix A

For the optimization problem (23), we consider the Lagrangian function, which can be denoted as
$$\mathcal{L}(\lambda, p_{q,k}) = \sum_{q=1}^{T}\eta_{q,k}\log_2\left(1 + \frac{p_{q,k}\,G}{d_{q,k}^2 P_n}\right) + \lambda\left(P_T - \sum_{q=1}^{T}p_{q,k}\right), \tag{A1}$$
where λ is the Lagrangian multiplier. The Karush-Kuhn-Tucker condition for optimal power allocation is given by
$$\frac{\partial\mathcal{L}}{\partial p_{q,k}}\begin{cases} = 0, & \text{if } p_{q,k} > 0,\\ \le 0, & \text{if } p_{q,k} = 0.\end{cases} \tag{A2}$$
When $\frac{\partial\mathcal{L}}{\partial p_{q,k}} = 0$, we can obtain
$$\frac{\eta_{q,k}\,G}{d_{q,k}^2 P_n} = \lambda\ln 2\cdot\left(1 + \frac{p_{q,k}\,G}{d_{q,k}^2 P_n}\right). \tag{A3}$$
Then, we can derive the expression for the optimal power allocation p q , k * , which can be written by
$$p_{q,k}^* = \left[\frac{\eta_{q,k}}{\lambda\ln 2} - \frac{d_{q,k}^2 P_n}{G}\right]^+, \tag{A4}$$
where we define $[p]^+ := \max(p, 0)$, which means selecting the maximum value between $p$ and 0.

References

  1. Wang, Y.; Cao, Y.; Yeo, T.-S.; Cheng, Y.; Zhang, Y. Sparse Reconstruction-Based Joint Signal Processing for MIMO-OFDM-IM Integrated Radar and Communication Systems. Remote Sens. 2024, 16, 1773. [Google Scholar] [CrossRef]
  2. Wei, W.; Shen, J.; Telikani, A.; Fahmideh, M.; Gao, W. Feasibility Analysis of Data Transmission in Partially Damaged IoT Networks of Vehicles. IEEE Trans. Intell. Transp. Syst. 2023, 24, 4577–4588.
  3. Ngo, H.; Fang, H.; Wang, H. Cooperative Perception With V2V Communication for Autonomous Vehicles. IEEE Trans. Veh. Technol. 2023, 72, 11122–11131.
  4. Liu, F.; Cui, Y.; Masouros, C.; Xu, J.; Han, T.X.; Eldar, Y.C.; Buzzi, S. Integrated Sensing and Communications: Toward Dual-Functional Wireless Networks for 6G and Beyond. IEEE J. Sel. Areas Commun. 2022, 40, 1728–1767.
  5. Liu, F.; Masouros, C.; Li, A.; Sun, H.; Hanzo, L. MU-MIMO Communications With MIMO Radar: From Co-Existence to Joint Transmission. IEEE Trans. Wirel. Commun. 2018, 17, 2755–2770.
  6. Li, J.; Huang, X.; Zhan, J. High-Precision Motion Detection and Tracking Based on Point Cloud Registration and Radius Search. IEEE Trans. Intell. Transp. Syst. 2023, 24, 6322–6335.
  7. Wang, W.; Xia, F.; Nie, H.; Chen, Z.; Gong, Z.; Kong, X.; Wei, W. Vehicle Trajectory Clustering Based on Dynamic Representation Learning of Internet of Vehicles. IEEE Trans. Intell. Transp. Syst. 2021, 22, 3567–3576.
  8. Girshick, R. Fast R-CNN. In Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV), Santiago, Chile, 7–13 December 2015; pp. 1440–1448.
  9. Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 1137–1149.
  10. He, K.; Gkioxari, G.; Dollár, P.; Girshick, R. Mask R-CNN. In Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 2980–2988.
  11. Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You Only Look Once: Unified, Real-Time Object Detection. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 779–788.
  12. Zhan, J.; Liu, J.; Wu, Y.; Guo, C. Multi-Task Visual Perception for Object Detection and Semantic Segmentation in Intelligent Driving. Remote Sens. 2024, 16, 1774.
  13. Li, X.; Wu, J. Extracting High-Precision Vehicle Motion Data from Unmanned Aerial Vehicle Video Captured under Various Weather Conditions. Remote Sens. 2022, 14, 5513.
  14. Gao, Z.; Wan, Z.; Zheng, D.; Tan, S.; Masouros, C.; Ng, D.W.K.; Chen, S. Integrated Sensing and Communication With mmWave Massive MIMO: A Compressed Sampling Perspective. IEEE Trans. Wirel. Commun. 2023, 22, 1745–1762.
  15. Cheng, Y.; Su, J.; Jiang, M.; Liu, Y. A Novel Radar Point Cloud Generation Method for Robot Environment Perception. IEEE Trans. Robot. 2022, 38, 3754–3773.
  16. Kim, J.; Khang, S.; Choi, S.; Eo, M.; Jeon, J. Implementation of MIMO Radar-Based Point Cloud Images for Environmental Recognition of Unmanned Vehicles and Its Application. Remote Sens. 2024, 16, 1733.
  17. Montañez, O.J.; Suarez, M.J.; Fernez, E.A. Application of Data Sensor Fusion Using Extended Kalman Filter Algorithm for Identification and Tracking of Moving Targets from LiDAR–Radar Data. Remote Sens. 2023, 15, 3396.
  18. Chen, X.; Liu, K.; Zhang, Z. A PointNet-Based CFAR Detection Method for Radar Target Detection in Sea Clutter. IEEE Geosci. Remote Sens. Lett. 2024, 21, 3502305.
  19. Peng, Y.; Wu, Y.; Shen, C.; Xu, H.; Li, J. Detection Performance Analysis of Marine Wind by Lidar and Radar under All-Weather Conditions. Remote Sens. 2024, 16, 2212.
  20. Klinefelter, E.; Nanzer, J.A. Automotive Velocity Sensing Using Millimeter-Wave Interferometric Radar. IEEE Trans. Microw. Theory Tech. 2021, 69, 1096–1104.
  21. Akter, R.; Doan, V.-S.; Lee, J.-M.; Kim, D.-S. CNN-SSDI: Convolution neural network inspired surveillance system for UAVs detection and identification. Comput. Netw. 2021, 201, 108519.
  22. Wang, Y.; Jiang, Z.; Li, Y.; Hwang, J.N.; Liu, H. RODNet: A Real-Time Radar Object Detection Network Cross-Supervised by Camera-Radar Fused Object 3D Localization. IEEE J. Sel. Top. Signal Process. 2021, 15, 954–967.
  23. Rosu, F. Dimension Compressed CFAR for Massive MIMO Radar. IEEE Geosci. Remote Sens. Lett. 2023, 20, 1–5.
  24. Zhang, Z.; Chang, Q.; Xing, J.; Chen, L. Deep-learning methods for integrated sensing and communication in vehicular networks. Veh. Commun. 2023, 40, 100574.
  25. Liu, S.; Cao, Y.; Yeo, T.-S.; Wu, W.; Liu, Y. Adaptive Clutter Suppression in Randomized Stepped-Frequency Radar. IEEE Trans. Aerosp. Electron. Syst. 2021, 57, 1317–1333.
  26. Cao, Z.; Mei, G.; Guo, X.; Wang, G. Virteach: MmWave Radar Point Cloud Based Pose Estimation with Virtual Data as a Teacher. IEEE Internet Things J. 2024, 11, 17615–17628.
  27. Chen, L.; Liu, K.; Zhang, Z.; Li, B. Beam Selection and Power Allocation: Using Deep Learning for Sensing-Assisted Communication. IEEE Wirel. Commun. Lett. 2024, 13, 323–327.
  28. Du, Y.; Liu, Y.; Han, K.; Jiang, J.; Wang, W.; Chen, L. Multi-User and Multi-Target Dual-Function Radar-Communication Waveform Design: Multi-Fold Performance Tradeoffs. IEEE Trans. Green Commun. Netw. 2023, 7, 483–496.
  29. Liu, R.; Li, M.; Liu, Q.; Swindlehurst, A.L. Dual-Functional Radar-Communication Waveform Design: A Symbol-Level Precoding Approach. IEEE J. Sel. Top. Signal Process. 2021, 15, 1316–1331.
  30. Ma, D.; Shlezinger, N.; Huang, T.; Liu, Y.; Eldar, Y.C. FRAC: FMCW-Based Joint Radar-Communications System Via Index Modulation. IEEE J. Sel. Top. Signal Process. 2021, 15, 1348–1364.
  31. Xu, J.; Wang, X.; Aboutanios, E.; Cui, G. Hybrid Index Modulation for Dual-Functional Radar Communications Systems. IEEE Trans. Veh. Technol. 2023, 72, 3186–3200.
  32. Temiz, M.; Peters, N.J.; Horne, C.; Ritchie, M.A.; Masouros, C. Radar-Centric ISAC Through Index Modulation: Over-the-air Experimentation and Trade-offs. In Proceedings of the 2023 IEEE Radar Conference (RadarConf23), San Antonio, TX, USA, 1–5 May 2023; pp. 1–6.
  33. Zhang, R.; Shim, B.; Yuan, W.; Renzo, M.D.; Dang, X.; Wu, W. Integrated Sensing and Communication Waveform Design With Sparse Vector Coding: Low Sidelobes and Ultra Reliability. IEEE Trans. Veh. Technol. 2022, 71, 4489–4494.
  34. Liu, F.; Garcia-Rodriguez, A.; Masouros, C.; Geraci, G. Interfering Channel Estimation in Radar-Cellular Coexistence: How Much Information Do We Need? IEEE Trans. Wirel. Commun. 2019, 18, 4238–4253.
  35. Demirhan, U.; Alkhateeb, A. Radar Aided 6G Beam Prediction: Deep Learning Algorithms and Real-World Demonstration. In Proceedings of the 2022 IEEE Wireless Communications and Networking Conference (WCNC), Austin, TX, USA, 10–13 April 2022; pp. 2655–2660.
  36. Rojhani, N.; Passafiume, M.; Sadeghibakhi, M.; Collodi, G.; Cidronali, A. Model-Based Data Augmentation Applied to Deep Learning Networks for Classification of Micro-Doppler Signatures Using FMCW Radar. IEEE Trans. Microw. Theory Tech. 2023, 71, 2222–2236.
  37. Patole, S.M.; Torlak, M.; Wang, D.; Ali, M. Automotive radars: A review of signal processing techniques. IEEE Signal Process. Mag. 2017, 34, 22–35.
  38. Sun, S.; Petropulu, A.P.; Poor, H.V. MIMO Radar for Advanced Driver-Assistance Systems and Autonomous Driving: Advantages and Challenges. IEEE Signal Process. Mag. 2020, 37, 98–117.
  39. Girshick, R.; Donahue, J.; Darrell, T.; Malik, J. Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation. In Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 580–587.
  40. Charles, R.Q.; Su, H.; Kaichun, M.; Guibas, L.J. PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 77–85.
  41. Nobis, F.; Shafiei, E.; Karle, P.; Betz, J.; Lienkamp, M. Radar Voxel Fusion for 3D Object Detection. Appl. Sci. 2021, 11, 5598.
  42. Xu, B.; Zhang, X.; Wang, L.; Hu, X.; Li, Z.; Pan, S.; Li, J.; Deng, Y. RPFA-Net: A 4D RaDAR Pillar Feature Attention Network for 3D Object Detection. In Proceedings of the 2021 IEEE International Intelligent Transportation Systems Conference (ITSC), Indianapolis, IN, USA, 19–22 September 2021; pp. 3061–3066.
Figure 1. System model of the alternation radar-comm system driven by 4D radar point clouds.
Figure 2. A single vehicle and a vehicle fleet on urban roads in different scenarios.
Figure 3. Visualized scenes and the corresponding RV spectrograms. (a) Camera image of the vehicles. (b) RV spectrogram. (c) RV spectrogram after labeling.
Figure 4. Individual parameters of the vehicle and the radar.
Figure 5. Visualization of point clouds. (a) Point clouds corresponding to the RV spectrogram. (b) 4D RVA-SNR point clouds.
Figure 6. Visualization of 4D radar point cloud label annotation. (a) The 3D bounding box of the vehicle target. (b) Point labels in the 4D radar point clouds.
Figure 7. Framework of the vehicle detection scheme.
Figure 8. Scene classification. (a) Training and testing accuracy of scene classification. (b) Training and testing loss of scene classification.
Figure 9. Scene I vehicle detection results. (a) Training and testing accuracy in Scene I. (b) Training and testing loss in Scene I. (c) Visualization of the vehicle detection results in Scene I.
Figure 10. Scene II vehicle detection results. (a) Training and testing accuracy in Scene II. (b) Training and testing loss in Scene II. (c) Visualization of the vehicle detection results in Scene II.
Figure 11. Scene III vehicle detection results. (a) Training and testing accuracy in Scene III. (b) Training and testing loss in Scene III. (c) Visualization of the vehicle detection results in Scene III.
Figure 12. Results of the various models in the three scenes. (a) Testing accuracy for the three scenes. (b) mIoU values for the three scenes.
Figure 13. ROC curves for the proposed algorithm and the benchmarks in Scene I.
Figure 14. (a) Power levels of the different vehicle targets. (b) Total channel capacity versus the total transmit power P_T.
Figure 15. Total channel capacity versus the detection probability under the total transmit power P_T = 5 W.
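Figure 15 relates the total channel capacity to the detection probability under a fixed power budget. As a minimal, illustrative sketch only (the exact capacity expression and power-allocation strategy of the proposed scheme are not reproduced here), the snippet below sums per-vehicle Shannon rates under a naive equal split of the total transmit power P_T; the channel gains, noise power, and bandwidth are hypothetical placeholders.

import numpy as np

def sum_capacity_equal_power(gains, total_power_w, noise_power_w, bandwidth_hz=1.0):
    # Illustrative sum capacity (bit/s) for K vehicles under an equal power split.
    # gains: per-vehicle channel power gains |h_k|^2 (hypothetical values).
    gains = np.asarray(gains, dtype=float)
    per_vehicle_power = total_power_w / gains.size   # naive equal allocation, not the paper's scheme
    snr = per_vehicle_power * gains / noise_power_w
    return float(np.sum(bandwidth_hz * np.log2(1.0 + snr)))

# Example with made-up gains for three detected vehicles and P_T = 5 W.
print(sum_capacity_equal_power(gains=[0.8, 0.5, 0.2], total_power_w=5.0, noise_power_w=1e-3))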
Table 1. Scene classification results of our scheme and benchmarks.

Method          Testing Accuracy (%)    mAP       mIoU
YOLO            88.58                   0.8716    -
VoxelNet        89.16                   -         0.8876
PointNet        93.24                   -         0.9027
PointPillars    94.68                   0.9134    -
Ours (PTDN)     95.31                   -         0.9223
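The testing accuracy and mIoU reported in Table 1 are standard point-wise metrics. The sketch below is a generic illustration (not the evaluation code used in this work) of how both can be computed from predicted and ground-truth per-point class labels.

import numpy as np

def pointwise_accuracy(pred, target):
    # Fraction of points whose predicted class matches the ground-truth label.
    pred, target = np.asarray(pred), np.asarray(target)
    return float(np.mean(pred == target))

def mean_iou(pred, target, num_classes):
    # Mean intersection-over-union over classes; classes absent from both are skipped.
    pred, target = np.asarray(pred), np.asarray(target)
    ious = []
    for c in range(num_classes):
        inter = np.sum((pred == c) & (target == c))
        union = np.sum((pred == c) | (target == c))
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious))

# Toy example: binary vehicle/background labels for eight radar points.
pred   = [1, 1, 0, 0, 1, 0, 1, 0]
target = [1, 0, 0, 0, 1, 0, 1, 1]
print(pointwise_accuracy(pred, target), mean_iou(pred, target, num_classes=2))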
