
Task-Incremental Learning for Drone Pilot Identification Scheme

School of Cybersecurity, Northwestern Polytechnical University, Xi’an 710072, China
*
Author to whom correspondence should be addressed.
Current address: 127 West Youyi Road, Beilin District, Xi’an 710072, China.
These authors contributed equally to this work.
Sensors 2023, 23(13), 5981; https://doi.org/10.3390/s23135981
Submission received: 24 May 2023 / Revised: 15 June 2023 / Accepted: 22 June 2023 / Published: 27 June 2023
(This article belongs to the Section Vehicular Sensing)

Abstract

With the maturity of Unmanned Aerial Vehicle (UAV) technology and the development of the Industrial Internet of Things, drones have become an indispensable part of intelligent transportation systems. Due to the absence of an effective identification scheme, most commercial drones suffer from impersonation attacks during their flight procedure. Some pioneering works have already attempted to validate the pilot's legal status at the beginning of and during the flight. However, the off-the-shelf pilot identification schemes cannot adapt to dynamic pilot membership management due to a lack of extensibility. To address this challenge, we propose an incremental learning-based drone pilot identification scheme to protect drones from impersonation attacks. By utilizing the pilot's temporal operational behavioral traits, the proposed identification scheme can validate the pilot's legal status and dynamically integrate newly registered pilots into a well-constructed identification scheme for dynamic pilot membership management. In systematic experiments, the proposed scheme achieved the best average identification accuracy, with 95.71% on the P450 and 94.23% on the S500. As the number of registered pilots increases, the proposed scheme still maintains high identification performance for both the newly added and the previously registered pilots. Owing to its minimal system overhead, this identification scheme demonstrates high potential for protecting drones from impersonation attacks.

1. Introduction

With the continuous advancement of the Industrial Internet of Things and 5G technology, drones have been integrated with various emerging technologies, such as Software-Defined Networking (SDN) and Blockchain, to provide reliable services [1,2]. Owing to their high mobility and operability, a drone can act as an individual switch for traffic forwarding in an SDN-based drone communication network. On the other hand, Blockchain technology has also been applied to drone swarms to keep the transactions between drones and pilots secure, cost-effective, and privacy-preserving.
Since drones have become indispensable to intelligent traffic systems, many adversaries try to compromise flying drones for malicious purposes. Several vulnerabilities have already been found in commercial drones. For example, the authors in [3] revealed that GPS spoofing attacks could mislead a flying drone to a manipulated destination. Son et al. in [4] also found that resonance effects could exaggerate the estimation bias of Micro-Electro-Mechanical Systems (MEMS) gyroscopes, preventing drones from completing pre-flight checks. Furthermore, networking-based attacks, such as Denial of Service attacks [5] and Man-in-the-Middle attacks [6], also pose significant threats to flying drones. Compared with the previously mentioned vulnerabilities, pilot impersonation attacks, where adversaries impersonate the victim pilot with compromised credentials and send malicious control instructions to the flying drone, pose particularly severe threats to drone security. Because the adversary can obtain drone control privileges without generating faked radio signals, pilot impersonation attacks are harder to detect and more difficult to prevent.
Currently, many works [7,8,9,10] have investigated validating the pilot's legal status to protect drones from pilot impersonation attacks. For example, Zhang et al. in [7] proposed utilizing a one-way hash function and bitwise XOR operations for authentication and key agreement at the beginning of the flight. After analysis with security tools, their scheme was proven secure under the random oracle model and shown to meet the security requirements of the Internet of Drones environment against various attacks. After that, Alladi et al. in [8] proposed a Physical Unclonable Function-based mutual authentication scheme, SecAuthUAV, for UAV-GCS communication. By comparing with state-of-the-art authentication protocols [9,10], the authors verified that their proposed scheme could resist masquerade, replay, node tampering, and cloning attacks over the drone communication channel. On the other hand, many machine learning (ML)-based identification schemes [11,12,13] have also been designed to verify the drone pilot's legal status during the flight procedure. For example, Shoufan et al. in [11,12] first verified the pilot's legal status by monitoring the instructions sent from the remote radio controller. After analyzing the extracted control commands with Linear Discriminant (LD) [14], Quadratic Discriminant (QD) [15], Support Vector Machine (SVM) [16], weighted k-Nearest Neighbors (k-NN) [17] and Random Forest (RF) [18] classifiers, the best performance was achieved with RF, approximating 89% in accuracy. Alkadi et al. in [13] further combined the onboard sensor measurements and the received control instructions to improve drone pilot identification performance. Through analysis with Long Short-Term Memory (LSTM), feature-based, and majority voting-based classification algorithms [19,20,21], the authors concluded that the combined methods could enhance pilot identification performance. As similarities exist between automobile driver and drone pilot operations, some algorithms, such as XGBoost [22] and SVM, which have been applied to automobile driver identification [23,24], could also achieve impressive performance in drone pilot identification.
Despite the significant progress that has been made in mitigating drone pilot impersonation attacks, a critical challenge still needs to be addressed: dynamic drone pilot membership management. Unlike protocol-based authentication schemes, the ML-based identification schemes in the previous works [11,12,13] cannot adapt to newly joined pilots, as they can only identify and authenticate pilots whose flight data were available during training. As illustrated in Figure 1, once a well-established pilot identification scheme is deployed on the flying drone, only the previously registered pilots' legal status can be verified during the flight procedure. To maintain high identification performance, these ML-based identification schemes require periodic retraining to update their inner parameters. Since the previously registered pilots' flight data are not accessible due to pilot operation privacy issues, updating ML-based identification schemes causes the catastrophic forgetting problem, a phenomenon of significant accuracy degradation after training on the newly joined pilots' flight data. Even if the previously registered pilots' flight data were accessible, storing them would still require a significant amount of memory, and retraining the ML-based identification scheme from scratch for the newly joined pilots would also incur considerable system overhead.
To address the challenge mentioned above, we design a novel task-incremental learning-based drone pilot identification scheme. Motivated by the previous works [12,13,23], we first design a background service to collect drone flight data by subscribing to topics on the micro object request broker (uORB) message bus. We then construct a module for extracting pilot behavioral traits from the flight data and establish a mapping between the extracted behavioral traits and the pilots' provided identities. To adapt to newly joined pilots, we design an updating mechanism that adjusts the inner structure and trainable parameters based on the newly registered pilots' flight data. As the proposed scheme achieves high identification accuracy for both newly and previously registered pilots with minimal system overhead, it shows strong potential for protecting drones from pilot impersonation attacks.
The prominent contributions of this paper are summarized as follows:
  • We propose a novel incremental learning-based drone pilot identification scheme for protecting drones from impersonation attacks.
  • To obtain high-quality drone flight data, we design a background service to collect the subscribed topics from the uORB message bus without altering the drone's hardware or software architecture.
  • To support dynamic pilot membership management, we construct an extensible framework and propose an updating mechanism for integrating newly joined pilots into the well-established identification scheme.
  • Extensive experiments demonstrate that the proposed scheme maintains high identification accuracy for both newly and previously registered pilots with minimal system overhead.
The rest of this paper is organized as follows: In Section 2, we first present the PX4 inner communication mechanism and the adversary model, and then give details about the proposed drone pilot identification scheme and its updating mechanism. In Section 3, we conduct systematic experiments to prove the effectiveness of the proposed identification scheme under different environmental settings. We discuss the advantages and disadvantages of the proposed identification scheme and put forward our future research directions in Section 4. Finally, we provide concluding remarks in Section 5.

2. Materials and Methods

2.1. UAV Inner Communication Mechanism

With the development of aerial dynamics technology and integrated circuit manufacturing, most flying functionalities have been integrated into flight controllers such as PX4 [25]. By utilizing the onboard sensor measurements, PX4 can monitor the drone flight status and provide navigation and mission planning services to the flying drone, reducing the pilot's operational overhead. According to [25], PX4 is implemented in the NuttX [26] environment, where different hardware modules are abstracted into functionalities that provide reliable services to the flight control stack. Moreover, PX4 also acts as middleware, intercepting the pilot's instructions and converting them into the preset drone flight attitudes.
Thanks to this inner communication mechanism, the modules within PX4 can cooperate with each other by publishing and subscribing to predefined topics. Specifically, a module that wants to utilize the drone flight attitudes for computation first subscribes to a specific topic and then creates a listener to receive the onboard measurements at fixed intervals. On the other hand, a module that wants to publish its computation results must also apply for a topic on the uORB message bus and publish the results over the applied topic. Furthermore, PX4 utilizes the Extended Kalman Filter [27] algorithm to fuse the onboard sensor measurements and publish high-precision drone flight attitudes over the uORB message bus, reducing the side effects caused by external environmental conditions. Since the high-precision flight data run over the uORB message bus, we design a background service that subscribes to the predefined topics listed in Table 1 and utilizes them for drone pilot identification.
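To make the role of such a background service concrete, the following minimal Python sketch shows how it could poll a set of subscribed topics at a fixed rate and append the readings to a log file. This is a conceptual illustration only, not the PX4/uORB API: the `read_topic` function and the topic names are placeholders for whatever access layer is actually used on the flight controller.

```python
import csv
import time

# Hypothetical subset of subscribed uORB topics (placeholders for the Table 1 selection).
SUBSCRIBED_TOPICS = ["vehicle_attitude", "manual_control_setpoint", "actuator_controls"]

def read_topic(topic: str) -> dict:
    """Placeholder for the uORB access layer: return the latest message of `topic`."""
    raise NotImplementedError("replace with the actual topic reader on the flight controller")

def background_service(log_path: str, rate_hz: float = 10.0) -> None:
    """Poll every subscribed topic at a fixed rate and append the readings to a CSV log."""
    period = 1.0 / rate_hz
    with open(log_path, "a", newline="") as f:
        writer = csv.writer(f)
        while True:
            timestamp = time.time()
            for topic in SUBSCRIBED_TOPICS:
                message = read_topic(topic)                    # latest sample published on the bus
                writer.writerow([timestamp, topic, *message.values()])
            time.sleep(period)                                 # wait until the next sampling instant
```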

2.2. UAV Pilot Impersonation Attack

As drones deployed in open environments carry sensitive information, many adversaries try to capture flying drones for malicious purposes. Compared with other electronic attacks, such as GPS spoofing or Man-in-the-Middle attacks, an adversary can launch an impersonation attack simply by utilizing a compromised credential, as illustrated in Figure 2. This paper assumes that all pilots and flying drones must first complete registration at the ground control center. After that, a pilot can utilize their credentials for mutual authentication at the beginning of the communication. Once their legal status is verified on the drone side, the pilot uses their remote radio controller to send instructions to the flying drone. On the other hand, the adversary tries to utilize hacking tools to capture the pilot's credentials from the ground control center. Once the victim pilot's credentials are obtained, the adversary utilizes the compromised credentials to obtain the flying drone's control privilege. As the knowledge-based authentication scheme cannot verify the pilot's legal status during the flight procedure, the adversary can impersonate the victim pilot and utilize their radio controller to send malicious instructions to the flying drone.
To protect drones from impersonation attacks, we propose a pilot identification scheme that verifies the pilot's legal status during the flight procedure. As distinct differences exist between the victim pilot's and the adversary's operation profiles, the proposed identification scheme can differentiate the adversary's flight data from the legitimate pilot's. If the estimated identity does not match the pilot-provided credential, an alert is triggered by the identification system. At the same time, the PX4 flight controller stops executing the received instructions and switches to automatic mode, guiding the drone to land at the take-off place to stop further impersonation attacks.

2.3. Incremental Learning

Incremental learning was first proposed by Schlimmer in [28]; its goal is to enable machine-learning models to continuously adjust their structure according to newly generated data, adapting to a dynamically changing environment. In recent years, incremental learning has become an important research topic and has been applied to many scenarios, such as wireless device identification, smartphone counterfeit detection, and natural language processing [29,30,31]. To adapt to dynamically changing environments, many researchers have proposed regularization approaches to mitigate the catastrophic forgetting problem. For example, Kirkpatrick et al. in [29] proposed elastic weight consolidation (ewc), which computes a diagonal approximation of the Fisher Information Matrix. They assumed that the model would learn the importance of its parameters after each task while ignoring the influence of those parameters along the learning trajectory in weight space. To address the resulting importance overestimation problem, the authors in [30] proposed memory aware synapses (mas), which fuse the Fisher Information Matrix approximation and an online path integral in a single algorithm to calculate the importance of each parameter. Furthermore, the authors in [32] proposed an incremental learning method, learning without forgetting (lwf), to regularize data drift with temperature-scaled logits during the training procedure. Beyond the previously mentioned works [29,30,32], there also exist techniques, such as rehearsal approaches [32,33,34], that have proved effective in improving incremental learning performance. To our knowledge, the proposed identification scheme is the first work that utilizes incremental learning for drone pilot identification. We compare the performance of our updating mechanism with state-of-the-art incremental learning algorithms [29,30,32] to illustrate the effectiveness of the proposed algorithm. Systematic experiments in natural and constrained environments demonstrate that the proposed updating strategy achieves higher identification accuracy for the previously and currently registered pilots than the compared algorithms.

2.4. Problem Definition

We define learning on a newly joined pilot's drone flight data as a new task in our drone pilot identification scheme. When training on the $t$-th pilot's drone flight dataset $D_t$, we assume no direct access to $\{D_k\}_{k=1}^{t-1}$ for the moment, leading to the following training objective:
$$\mathcal{L}(D_t; w) = \frac{1}{|D_t|} \sum_{(x,q) \in D_t} \ell\big(g_w(x), q\big), \tag{1}$$
where $x \in \mathbb{R}^N$ and $q \in \mathbb{R}$ denote the $N$-dimensional drone flight data and the corresponding pilot-provided identity, respectively. $g_w(x)$ represents the well-established pilot identification model parameterized by a vector $w$, and $\ell$ is the objective function quantifying drone pilot identification performance. One may add a regularizer $r(w)$ to Equation (1) to gain resistance to catastrophic forgetting. For evaluation, we measure the performance of $g_w(x)$ on the hold-out test sets of all tasks seen so far:
$$\sum_{k=1}^{t} \mathcal{L}(V_k; w) = \sum_{k=1}^{t} \frac{1}{|V_k|} \sum_{(x,q) \in V_k} \ell\big(g_w(x), q\big), \tag{2}$$
where $V_k$ is the test set for the $k$-th task. An ideal drone pilot identification scheme should identify all newly joined pilots well while mitigating catastrophic forgetting of the previous pilots, resulting in better overall identification performance.
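As a concrete example of the regularized objective mentioned above, under the assumption of a simple additive penalty the regularized form of Equation (1) could be written as follows, where $\lambda$ is a hypothetical weighting coefficient:

$$\mathcal{L}_{\mathrm{reg}}(D_t; w) = \frac{1}{|D_t|} \sum_{(x,q) \in D_t} \ell\big(g_w(x), q\big) + \lambda\, r(w).$$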

2.5. Pilot Identification Based on UAV Flight Data

2.5.1. Data Collection and Preprocessing

In order to collect highly precise drone flight data and minimize the side effects caused by the external environment, we designed a background service to subscribe to the selected topics from the uORB message bus instead of relying on the ground control center. Due to the unstable connections caused by external weather conditions and drone mobility, the integrity of the data transmitted to the ground control center over the MAVLink protocol can be broken, degrading drone pilot identification performance. According to the PX4 documentation, a total of 54 topics run over the uORB message bus for the quadcopter framework. In order to select the most representative attributes describing the pilot's behavioral traits, we utilize an embedded feature selection algorithm [35] to filter out irrelevant topics and attributes. Specifically, we first select five pilots' flight data to construct a mini dataset, which includes the drone flight attitude, inner communication messages, and pilot-provided identities. We then utilize RF as a classifier to identify the drone pilot's identity based on each attribute within the subscribed uORB topics. Finally, we sort the attributes by identification accuracy in descending order and preserve the first 28 attributes for constructing the identification scheme. Table 1 provides details about the selected topics and preserved attributes.
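A minimal sketch of this embedded feature selection step is shown below. It assumes the mini dataset has already been flattened into a feature matrix X with one column per candidate attribute and a label vector y of the five pilots' identities; the cut-off of 28 attributes follows the description above, while the per-attribute accuracy is approximated here by cross-validated RF accuracy on each single attribute.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def select_attributes(X: np.ndarray, y: np.ndarray, names: list, keep: int = 28) -> list:
    """Rank each candidate attribute by the accuracy of an RF trained on it alone,
    then keep the `keep` best-scoring attributes (28 in this paper)."""
    scores = []
    for j, name in enumerate(names):
        clf = RandomForestClassifier(n_estimators=100, random_state=0)
        acc = cross_val_score(clf, X[:, [j]], y, cv=3, scoring="accuracy").mean()
        scores.append((acc, name))
    scores.sort(reverse=True)                     # descending identification accuracy
    return [name for _, name in scores[:keep]]
```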
As illustrated in Table 1, the selected topics mainly concern the received pilot instructions, the drone flight attitudes, and the inner control commands sent from the flight controller to the actuators. Due to internal hardware errors and external factors, significant deviations exist among the selected attributes. To address this problem, we first utilized the Kalman filter algorithm to find abnormal deviations in the selected topics and then replaced each deviation with the previous observation. Note that the selected topics are published at different frequencies, and the preserved attributes have different working dimensions. We therefore implemented a sliding window of one-second width to resample the selected attributes to 1 Hz, where the average value within the window is utilized as the current observation. We then applied the following standard equation to normalize the observations before feeding them into the drone pilot identification scheme:
$$x' = \frac{x - x_{\mathrm{mean}}}{x_{\mathrm{std}}}, \tag{3}$$
where $x$ represents the previously generated observation, and $x_{\mathrm{mean}}$ and $x_{\mathrm{std}}$ are the mean and standard deviation of the current observations. After data processing, we concatenated the selected attributes chronologically and generated input sequences with 28 dimensions. Each time, we selected 64 consecutive observations to form one input sequence map, and the overlap between consecutive maps was kept at 32. Furthermore, we digitized the pilot-provided credentials as unique numbers and utilized these numbers to represent the ground truth while constructing the identification scheme.
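The following sketch summarizes the preprocessing chain described above (1 Hz averaging, z-score normalization as in Equation (3), and 64-observation maps with a 32-observation overlap). It assumes the cleaned attributes are already aligned in a pandas DataFrame indexed by time; the column layout is hypothetical.

```python
import numpy as np
import pandas as pd

def build_sequences(df: pd.DataFrame, win: int = 64, stride: int = 32) -> np.ndarray:
    """Resample the 28 selected attributes to 1 Hz, normalize them, and cut
    overlapping windows of 64 consecutive observations (32-observation overlap)."""
    # Average all samples falling inside each one-second window (1 Hz observations).
    obs = df.resample("1s").mean()
    # Standard z-score normalization, as in Equation (3).
    obs = (obs - obs.mean()) / obs.std()
    values = obs.to_numpy()                                   # shape: (T, 28)
    maps = [values[s:s + win] for s in range(0, len(values) - win + 1, stride)]
    return np.stack(maps)                                     # shape: (num_maps, 64, 28)
```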

2.5.2. Drone Pilot Identification

In order to make the drone pilot identification scheme adaptable to dynamic membership management, we first designed a foundation framework that estimates the pilot's identity from the drone flight data. Motivated by previous work [36,37], this foundation framework consists of a feature extraction module and a pilot identification module, as illustrated in Figure 3. In the feature extraction module, we concatenated three convolution layers sequentially and appended a batch normalization operation at the output of each convolution layer for pilot behavioral trait extraction. The convolutional layers utilize the local parameter-sharing mechanism to extract the spatial and temporal relationships within the input drone flight data. Furthermore, batch normalization recenters and rescales the output features of each convolutional layer, boosting gradient backpropagation and reducing the probability of the vanishing gradient problem. As the negative representations extracted by each convolutional layer carry physical meaning when describing drone flight data, we eliminated the ReLU [38] activation functions in the constructed foundation framework.
In the pilot identification module, we utilized one fully connected layer to establish connections between the extracted hidden representations and the registered pilot identities. The fully connected layer assigns appropriate weights to each extracted hidden representation, acting like an affine transformation. By utilizing the backpropagation mechanism, the fully connected layer can approximate the pilot's real identity by adjusting its trainable weights. In order to make this identification scheme adaptable to dynamic pilot membership management, the shape of the fully connected layer can also be altered according to the number of currently registered pilots. As illustrated in Figure 3, we first feed the drone flight data into the pilot identification scheme. Through a series of convolution and batch normalization operations, we extract the pilot's behavioral traits step by step, where the purple blocks represent the convolutional features and the green blocks indicate the batch normalization features. After that, we flatten the hidden representations into one dimension (cyan blocks) and establish the connections between the hidden representations and the predicted pilot identity, where the red blocks represent the previously registered pilots and the green ones indicate the newly joined pilots. Once newly registered pilots have been integrated into the drone system, the green blocks are appended after the red ones in the previously constructed identification scheme. Regarding the updating mechanism, we utilize the previously constructed identification scheme to guide the newly generated identification scheme with only the newly registered pilots' drone flight data. For more details about the updating mechanism, please refer to Section 2.5.3.
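A minimal PyTorch sketch of this foundation framework is given below. The kernel sizes and channel counts are illustrative assumptions (Figure 3 specifies the actual hyperparameters), and treating the 64 × 28 input map as a 28-channel 1-D sequence is an implementation choice here, but the structure follows the description: three convolution layers each followed by batch normalization, no ReLU activations, a flattening step, and a single fully connected output layer whose size equals the number of registered pilots and can be extended for newly joined pilots.

```python
import torch
import torch.nn as nn

class PilotIdentifier(nn.Module):
    """Feature extraction (3 x Conv1d + BatchNorm, no ReLU) followed by one
    fully connected identification layer sized to the registered pilots."""

    def __init__(self, n_attributes: int = 28, seq_len: int = 64, n_pilots: int = 15):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_attributes, 32, kernel_size=3, padding=1), nn.BatchNorm1d(32),
            nn.Conv1d(32, 64, kernel_size=3, padding=1), nn.BatchNorm1d(64),
            nn.Conv1d(64, 64, kernel_size=3, padding=1), nn.BatchNorm1d(64),
        )
        self.classifier = nn.Linear(64 * seq_len, n_pilots)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, n_attributes) -> (batch, n_attributes, seq_len) for Conv1d
        h = self.features(x.transpose(1, 2))
        return self.classifier(h.flatten(start_dim=1))

    def add_pilots(self, n_new: int) -> None:
        """Append output nodes (the green blocks) for newly registered pilots,
        keeping the previously learned output weights (the red blocks)."""
        old = self.classifier
        new = nn.Linear(old.in_features, old.out_features + n_new)
        with torch.no_grad():
            new.weight[: old.out_features] = old.weight
            new.bias[: old.out_features] = old.bias
        self.classifier = new
```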
Regarding the optimization strategy, we utilized the least mean square error to calculate the distance between the predicted pilot identity and the ground truth. The loss function can be expressed as Equation (4).
$$loss_{mse} = \frac{1}{n} \sum_{i=1}^{n} \left| P_i - P_r \right|, \tag{4}$$
where $n$ is the number of identities the proposed identification scheme can handle each time, $P_i$ is the identity predicted by the proposed identification scheme, and $P_r$ is the ground truth corresponding to the pilot-provided credential. $|\cdot|$ represents the Hamming distance: zero denotes that the estimated identity is consistent with the generated ground truth, and one denotes that the estimated identity deviates from it. Figure 3 illustrates the hyperparameters we used to construct the pilot identification scheme, and we applied the stochastic gradient descent optimization [39] strategy to find the parameters yielding the best identification performance.
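As a sketch of this optimization step, the snippet below runs one epoch of stochastic gradient descent over the model sketched above; the learning rate is an assumption, and cross-entropy over the identity logits is used here as a stand-in for the identity-distance loss of Equation (4).

```python
import torch
import torch.nn as nn

def train_epoch(model: nn.Module, loader, lr: float = 0.01) -> float:
    """One SGD epoch over (flight-data map, digitized pilot identity) batches."""
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()             # stand-in for the distance in Equation (4)
    total = 0.0
    for x, y in loader:                           # x: input sequence maps, y: pilot identities
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()
        total += loss.item()
    return total / len(loader)
```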

2.5.3. Drone Pilot Identification Updating Mechanism

Once a pilot completes registration at the ground control center, they are granted a credential for obtaining the drone control privilege. In order to validate this pilot's legal status during the flight procedure, we first require the pilot to utilize a remote radio controller to send instructions to the drone to complete some basic flight missions. Meanwhile, we utilize the previously designed background service to extract the flight data $X_n$, and we digitize their credential into a unique number to represent their identity in our identification scheme. Since the other registered pilots' flight data are inaccessible due to operation privacy, we only utilize $X_n$ and the previously well-constructed identification scheme to optimize the parameters of the altered network structure, aiming at high identification accuracy for all registered pilots.
As illustrated in Figure 3, we first parameterize the identification scheme with parameters $\theta_s$ and $\theta_o$, where $\theta_s$ represents the parameters of the feature extraction module and $\theta_o$ the parameters of the pilot identification module. When updating the pilot identification module for newly registered pilots, we first add nodes to the output layer (the green blocks in Figure 3). After that, we establish the connections $\theta_n$ between the feature extraction module and the newly added nodes; the number of connections equals the number of newly added pilots times the number of output features extracted by the feature extraction module. We initialize $\theta_n$ from a random Gaussian distribution and then update the identification scheme as follows. At the beginning of the updating procedure, we freeze the parameters $\theta_s$ and $\theta_o$ and utilize the newly registered pilots' flight data $X_n$ to train the parameters $\theta_n$ until convergence. We then utilize $X_n$ to train all the network parameters, including $\theta_s$, $\theta_o$ and $\theta_n$, until convergence. As we can only utilize the currently registered pilots' data $X_n$ to update the drone pilot identification scheme, the optimization target for the newly registered pilots is to minimize the distance between the predicted identity and the pilot-provided ground truth. We utilize Equation (5) to describe the newly registered pilots' optimization target:
$$L_{new}(y_n, \hat{y}_n) = -\, y_n \cdot \log \hat{y}_n, \tag{5}$$
where $y_n$ is the output of the pilot identification scheme on the pilot-provided data $X_n$ and $\hat{y}_n$ is the corresponding ground truth. As for the previously registered pilots' optimization, we utilize the Knowledge Distillation loss [40] to encourage the current pilot identification scheme's outputs to approximate the previous scheme's outputs:
$$L_{old}(y_o, \hat{y}_o) = -\sum_{i=1}^{l} y_o^{\prime (i)} \log \hat{y}_o^{\prime (i)}, \tag{6}$$
where $l$ is the number of previously registered pilots in each iteration, $y_o^{(i)}$ denotes the probabilities predicted by the previously well-constructed identification scheme, and $\hat{y}_o^{(i)}$ denotes the probabilities estimated by the newly constructed identification scheme. Furthermore, we regularize the parameters $\theta_s$, $\theta_o$ and $\theta_n$ with a regularizer $R$ using a decay of 0.005 to force the newly constructed identification scheme to focus on the newly registered pilots' behavioral traits.
In order to minimize the distance between the current and previous identification schemes' output probabilities, we utilize Equation (7) to amplify the small probabilities:
$$y_o^{\prime (i)} = \frac{\big(y_o^{(i)}\big)^{1/T}}{\sum_j \big(y_o^{(j)}\big)^{1/T}}, \qquad \hat{y}_o^{\prime (i)} = \frac{\big(\hat{y}_o^{(i)}\big)^{1/T}}{\sum_j \big(\hat{y}_o^{(j)}\big)^{1/T}}, \tag{7}$$
where the hyper-parameter $T$ controls the scaling factor that amplifies the small probabilities for each predicted pilot identity. Since drone pilot identification is a multi-label classification problem, we take the sum of the losses for the old and new tasks in each iteration. Algorithm 1 gives more details about the updating procedure proposed in the drone pilot identification scheme. In order to further improve identification performance, we preserve 100 samples of each previously registered pilot's flight data and merge them with the newly registered pilots' flight data when training the newly generated identification scheme. We use Pilot Identification to denote the previously well-constructed identification scheme, and $\lambda_o$ controls the weighting between the old and current tasks. After conducting systematic experiments, we found that setting $\lambda_o$ to 0.1 achieves the best identification performance for all registered pilots.
Algorithm 1: Drone Pilot Identification Updating Algorithm
1: Start:
   $\theta_s$: feature extraction parameters
   $\theta_o$: identification parameters for the previously registered pilots
   $\theta_n$: added parameters for the newly registered pilots
   $X_n, Y_n$: newly registered pilots' drone flight data and their identities

2: Initialize:
   $Y_o \leftarrow$ Pilot Identification$(X_n, \theta_s, \theta_o)$ ▹ record the previous scheme's outputs on $X_n$
   $\theta_n \leftarrow$ RandInit$(|\theta_n|)$ ▹ randomly initialize the newly added parameters

3: Train:
   Define $\hat{Y}_o \equiv$ Pilot Identification$(X_n, \theta_s, \theta_o)$ ▹ previously registered pilot identity estimation
   Define $\hat{Y}_n \equiv$ Pilot Identification$(X_n, \theta_s, \theta_n)$ ▹ newly registered pilot identity estimation
   $\theta_s^*, \theta_o^*, \theta_n^* \leftarrow \arg\min_{\theta_s, \theta_o, \theta_n} \big( \lambda_o L_{old}(Y_o, \hat{Y}_o) + L_{new}(Y_n, \hat{Y}_n) + R(\theta_s, \theta_o, \theta_n) \big)$
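For completeness, a hedged PyTorch sketch of this updating procedure is given below. It assumes the PilotIdentifier model sketched in Section 2.5.2 (with add_pilots supplying the new output nodes); the temperature T, the epoch counts and the learning rates are illustrative values, while lambda_o = 0.1 and the weight decay of 0.005 follow the settings stated above. As a simplification, the warm-up step trains the whole output layer because the old and new output rows share one linear module in this sketch.

```python
import copy
import torch
import torch.nn.functional as F

def distillation_loss(new_logits, old_logits, T: float = 2.0) -> torch.Tensor:
    """Equations (6)-(7): temperature-scaled soft targets from the old scheme
    guide the old-pilot outputs of the new scheme."""
    old_soft = F.softmax(old_logits / T, dim=1)
    new_log_soft = F.log_softmax(new_logits / T, dim=1)
    return -(old_soft * new_log_soft).sum(dim=1).mean()

def update_scheme(model, new_loader, n_new: int, lam_old: float = 0.1,
                  T: float = 2.0, warmup_epochs: int = 5, epochs: int = 20):
    old_model = copy.deepcopy(model).eval()        # frozen copy of the previous scheme
    n_old = model.classifier.out_features
    model.add_pilots(n_new)                        # append nodes for newly registered pilots

    def run(params, n_epochs, lr):
        opt = torch.optim.SGD(params, lr=lr, weight_decay=0.005)   # R(theta) regularizer
        for _ in range(n_epochs):
            for x, y in new_loader:                # only the new pilots' flight data X_n, Y_n
                with torch.no_grad():
                    y_old = old_model(x)           # Y_o: recorded outputs of the old scheme
                logits = model(x)
                loss_new = F.cross_entropy(logits, y)                      # Equation (5)
                loss_old = distillation_loss(logits[:, :n_old], y_old, T)  # Equation (6)
                loss = lam_old * loss_old + loss_new
                opt.zero_grad()
                loss.backward()
                opt.step()

    # Step 1: keep theta_s (and, in this simplification, theta_o) frozen in the optimizer
    # and warm up only the output layer holding the new connections theta_n.
    run([model.classifier.weight, model.classifier.bias], warmup_epochs, lr=0.01)
    # Step 2: jointly fine-tune theta_s, theta_o and theta_n until convergence.
    run(model.parameters(), epochs, lr=0.001)
    return model
```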

3. Results

3.1. Environmental Setting

As quadcopters have received wide attention due to their easy operation and broad applicability, we conducted experiments on two quadcopters, the P450 and the S500, to validate the effectiveness of the proposed identification scheme. For data collection, 15 students were required to fly the quadcopters in natural and constrained environments. The P450, produced by AmovLab, is equipped with depth-of-field and optical flow sensors to provide reliable flight performance. The S500 was constructed from scratch under the guidance of the CUAV flight stack. We utilized a Futaba remote radio controller to capture the pilot instructions and send control signals to the flying drone. Among the invited pilots, five students from the AeroModelling Team acted as professional pilots, and the remainder, who came from our laboratory, were amateurs. Figure 4 details the participants' operation proficiency, where we utilized their flying time to represent their operation experience.
To validate the effectiveness of the proposed identification scheme, we conducted experiments on the P450 and S500, as illustrated in Figure 5. We first required participants to fly the P450 in the natural environment for a traffic monitoring mission, one of the most common quadcopter flight tasks. Specifically, the pilots utilized a remote radio controller to send instructions to the drone to take off, hover in the air, and land at the predefined destination. At the same time, the pilots were asked to record three-minute videos of the traffic conditions. Furthermore, we conducted experiments in a constrained environment whose settings comply with [12,13]: we preset the flight trajectories and asked pilots to control the drones to pass through the waypoints without collision. To reduce the side effects caused by external weather conditions, all the data in the constrained environment were collected on sunny days.
We also conducted experiments on the S500 in natural and constrained environments with similar settings to further illustrate the effectiveness of the proposed identification scheme. To collect sufficient data, all the participants were required to fly the drone ten times, and we utilized the designed background service to monitor the drone flight status. Regarding ground truth generation, we digitized each pilot-provided credential and mapped it to a unique number, which indicates the ground truth in our experiments. Table 2 gives more details about the experimental settings, and a portion of the collected dataset has been available since 15 June 2023 at https://github.com/FRTeam2017/DronePilotIdentification.git.

3.2. The Hardware and Software Architecture

In order to collect high-precision drone flight data, we first designed a background service to subscribe to the selected topics from the uORB bus. For data preprocessing, Anaconda is utilized to create a clean Python environment, where the package Pyulog is installed to convert the collected flight data into CSV format, and Pandas is used to calculate statistical features. As for the drone pilot identification scheme implementation, we utilized the PyTorch framework to extract pilot behavioral traits from the flight data and estimate their identity in real time.
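A minimal sketch of this conversion step is shown below, assuming pyulog's ULog reader and its per-topic data listing; the file path and topic name are placeholders.

```python
import pandas as pd
from pyulog import ULog

def ulog_topic_to_dataframe(ulg_path: str, topic: str) -> pd.DataFrame:
    """Load one subscribed uORB topic from a .ulg log into a pandas DataFrame."""
    ulog = ULog(ulg_path)
    # ULog.data_list holds one entry per logged topic instance.
    dataset = next(d for d in ulog.data_list if d.name == topic)
    df = pd.DataFrame(dataset.data)               # columns: timestamp plus the topic's fields
    df["timestamp"] = pd.to_datetime(df["timestamp"], unit="us")
    return df.set_index("timestamp")

# Example (placeholder path/topic):
# att = ulog_topic_to_dataframe("flight_01.ulg", "vehicle_attitude")
# att.describe()                                  # quick statistical features with pandas
```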
For the hardware configuration, we constructed a workstation equipped with an i7-8700 CPU and 24 GB of memory. In order to accelerate the training procedure, an NVIDIA RTX 3080 graphics card was installed for parallel computing. We utilized Ubuntu 20.04 to manage the above hardware, and the results presented below were obtained with this configuration.

3.3. Drone Pilot Identification Based on P450

In this section, 15 pilots were required to utilize the P450 to complete flight missions in the natural and constrained environments, as illustrated in Figure 5. In order to collect sufficient drone flight data, we asked them to repeat the flight mission ten times in each environment. We utilized the first eight trajectories for training and the remainder for testing, a partition consistent with common machine learning and pattern recognition practice; there is no overlap between the training and testing trajectories. After data preprocessing, we obtained 54,123 training and 11,254 testing samples in the natural environment, and 49,938 training and 11,432 testing samples in the constrained environment. Table 3 details the drone pilot identification performance in the different environmental settings on the P450, where P* denotes the samples to be classified by the drone pilot identification scheme and T* indicates the estimation results based on the given flight data.
The table demonstrates that the proposed identification scheme achieves impressive performance based on the topics selected from the uORB message bus. In the natural environment, the average identification accuracy approximates 94.87%, and the average identification accuracy in the constrained environment is about 95.71%, slightly better than in the natural environment. One possible explanation is that external weather conditions, such as wind and magnetic interference, may affect the pilot's operation behavior, decreasing identification performance. Note that the worst identification accuracy is 73.54%. Overall, the proposed identification scheme can be deployed on the drone for real-time pilot identification.

3.4. Drone Pilot Identification Based on S500

In order to further illustrate the effectiveness of the proposed identification scheme, we also conducted the same experiments on the S500. We required all the pilots to utilize the S500 for traffic monitoring in the natural environment and to pass through the arch doors preset along the waypoints in the constrained environment. To collect sufficient drone flight data, all the pilots were asked to repeat each mission ten times. We utilized the flight data from the first eight flights for training and the last two for testing. After data preprocessing, we obtained 62,118 flight samples in the natural environment and 58,921 in the constrained environment. Table 4 provides details about the performance of the proposed drone pilot identification scheme; the meanings of P* and T* are as in Table 3.
The proposed drone pilot identification scheme achieves similar accuracy on the S500. According to Table 4, the identification accuracy of eight pilots exceeds 95%. The minimal identification accuracy is 82.17% (constrained environment) and 88.47% (natural environment), which is again attributable to external weather conditions. As the average identification accuracy on the S500 is 93.95% and 94.23% in the natural and constrained environments, respectively, we can conclude that the proposed scheme maintains high identification performance across different quadcopters.

3.5. Performance Comparison

Since no public drone pilot identification dataset is available and this is the first work that utilizes the incremental learning paradigm for drone pilot identification, we first compare the proposed identification scheme with the most related works [11,12,13] in terms of objectives, number of participants, and utilized signals in Section 3.5.1. We then compare the identification performance with the algorithms mentioned in the related works in Section 3.5.2. Finally, we compare our updating mechanism with off-the-shelf incremental learning algorithms [29,30,32] to illustrate the effectiveness of the proposed identification scheme in Section 3.5.3.

3.5.1. Comparison with the Related Works

In this section, we first compare the proposed identification scheme with the most related works [11,12,13] in terms of the experimental setting, objectives, number of participants, utilized signals, and identification performance. Table 5 provides details about the comparison results. As the compared works do not provide their source code or drone flight data, we utilized the reported results for the comparison.
As illustrated in Table 5, we employed similar experimental settings to demonstrate the effectiveness of the proposed identification scheme. For example, we used quadcopters to collect drone flight data and verify the pilot's legal status. Compared with the related works, we verified the effectiveness of the proposed identification scheme in both natural and constrained environments, and we additionally considered extensibility when constructing the drone pilot identification scheme. Although the identification accuracies are not directly comparable due to the lack of a public dataset, the proposed scheme achieves the best identification accuracy of 95.24% over 15 pilots, approaching the best results reported in the related work [13].

3.5.2. Comparison with the Algorithms Mentioned in Related Works

We also compared our identification scheme with the algorithms mentioned in the related works [11,12,13,23]. Specifically, we compared the algorithms QD, LD (solver = SGD), Bagging, RF (n_estimators = 100), AdaBoost, and DT mentioned in [12], and LSTM (num_layers = 2), Feature (estimator = RF, n_estimators = 5), and Voted (base_estimator = SVC, n_estimators = 5) mentioned in [13]. Furthermore, the algorithms SVM, XGB, and RF mentioned in [23] are also utilized for comparison due to their competitive performance. Due to the lack of a public dataset, all the compared algorithms were tested on our collected dataset. The best parameters for the compared algorithms were set with the grid search algorithm [41]. In order to reduce the side effects caused by data preprocessing, all the data preprocessing procedures align with the original works. Table 6 provides more details about the drone pilot identification performance.
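As an illustration of how the compared classifiers' parameters can be tuned with grid search, the following sketch applies scikit-learn's GridSearchCV to a random forest; the parameter grid shown is a hypothetical example rather than the exact search space used in the experiments.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

# Hypothetical grid; the actual search space follows the settings in [12,13,23].
param_grid = {"n_estimators": [50, 100, 200], "max_depth": [None, 10, 20]}

search = GridSearchCV(RandomForestClassifier(random_state=0),
                      param_grid, cv=5, scoring="accuracy", n_jobs=-1)
# search.fit(X_train, y_train)                    # X_train, y_train: preprocessed flight data
# best_rf = search.best_estimator_                # classifier with the best grid-search score
```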
Table 6 and Table 7 give details about the identification performance of the proposed identification scheme and the related algorithms over 15 participants. Our proposed identification scheme achieves the best identification performance over the 15 participants compared to the related algorithms. Specifically, the identification accuracy for four out of fifteen pilots is 100%, and the fifteenth pilot achieves the minimal identification accuracy of 95.42%. One reason is that the identification scheme can effectively exploit the spatial and temporal relationships within the subscribed UAV flight data.

3.5.3. Comparison with the SOTA Incremental Learning Algorithms

In order to validate the effectiveness of the proposed updating mechanism, we conducted experiments over the quadcopter S500 flight data. We first compared the proposed updating mechanism with the feature extraction and fine-tuning strategies mentioned in the related work [42,43], where feature extraction fixes the shared parameters $\theta_s$ and $\theta_o$ and utilizes the newly joined pilots' flight data as a new task for training $\theta_n$, while fine-tuning utilizes $\theta_s$ and $\theta_n$ for learning the new task and keeps the previous task-specific parameters $\theta_o$ fixed. Each time, we utilized a fixed number of newly joined pilots' flight data to update the proposed identification scheme, and the average accuracy over all pilots registered up to that point (Equation (2)) was used to evaluate drone pilot identification performance. From Figure 6, we can see that the proposed scheme maintains high identification performance as the number of registered pilots increases under different incremental steps, whereas fine-tuning and feature extraction suffer from the catastrophic forgetting problem. Note that the performance of feature extraction is relatively better than that of fine-tuning; one explanation is that the feature extraction strategy fixes the shared parameters $\theta_s$ and $\theta_o$, which preserves more historical information during the updating procedure.
We also compared the proposed updating mechanism with state-of-the-art incremental learning algorithms, including lwf [32], ewc [29] and mas [30]. These methods have been utilized as standard comparison algorithms to illustrate the effectiveness of incremental learning-based identification frameworks in IoT device recognition and identification [44,45]. As illustrated in Table 8, we first selected seven pilots' flight data to initialize the proposed identification scheme. After that, we utilized different numbers of newly joined pilots' flight data to update the well-established identification scheme and used the average accuracy over all pilots registered up to that point to compare the algorithms. It is clear that the compared schemes' identification performance decreases as the number of newly joined pilots increases. Take mas for example: its identification performance dropped from 98.11% with seven registered pilots to 61.52% with 15 registered pilots. One possible reason is that mas needs a large amount of the current pilots' flight data to determine the importance of each shared parameter, which contradicts the setting of drone pilot identification. Compared with the other SOTA incremental learning algorithms, our proposed updating mechanism maintains high identification performance as the number of registered pilots increases. Based on the previous analyses, the updating mechanism can help the drone pilot identification scheme adapt to newly registered pilots for dynamic pilot membership management.

3.6. Time and Space Complexity

As a drone is a lightweight system, the time and space overhead is critical for timely protection of the drone from impersonation attacks. According to Figure 3, the proposed identification scheme consists of a pilot behavioral trait extraction module and a pilot identification module. The pilot behavioral trait extraction module comprises convolution layers and batch normalization operations. Based on [46], the time complexity of each convolution layer is $O(M^2 \times K^2 \times C_{in} \times C_{out})$, where $M$ is the dimension of the input feature map, $K$ is the size of the convolutional kernel, and $C_{in}$ and $C_{out}$ are the numbers of input and output channels. As for batch normalization, its time complexity depends only on the input feature maps. We use a fully connected layer to map the hidden representation to the pilot identity, and its time complexity is of the same order as that of batch normalization. The proposed identification scheme extracts behavioral traits and estimates identity sequentially, so the depth of the identification module determines the overall time complexity, which can be expressed as $O\big(\sum_{l=1}^{D} M_l^2 \times K_l^2 \times C_{in,l} \times C_{out,l}\big)$, where $D$ is the depth of the proposed identification module. As for run-time overhead, the Python built-in time function shows that the proposed scheme only needs 0.031 s for each drone pilot identification. Furthermore, we used a PyTorch parameter-counting function to assess the storage overhead: our identification scheme has only 13 M parameters, indicating a reasonable storage overhead for identification.
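The run-time and parameter figures quoted above can be measured with standard PyTorch utilities, as sketched below for the PilotIdentifier model from the earlier sketch; a single input map mirrors one identification step, and the sketched model is much smaller than the full scheme reported in the paper.

```python
import time
import torch

# PilotIdentifier is the sketch model from Section 2.5.2 (an assumption, not the exact network).
model = PilotIdentifier(n_attributes=28, seq_len=64, n_pilots=15).eval()

# Parameter count of the sketched model (the full scheme in the paper has about 13 M parameters).
n_params = sum(p.numel() for p in model.parameters())
print(f"trainable parameters: {n_params:,}")

# Wall-clock time for a single identification, measured with the built-in time function.
x = torch.randn(1, 64, 28)                        # one input sequence map
with torch.no_grad():
    start = time.time()
    _ = model(x)
    print(f"identification time: {time.time() - start:.3f} s")
```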

4. Discussion

In this paper, we propose a novel drone pilot identification scheme for protecting drones against impersonation attacks and an updating mechanism for adapting to dynamic pilot membership management. In order to validate the effectiveness of the proposed identification scheme, we have conducted experiments on the P450 and S500 in different environmental settings. Despite the impressive results, several challenges remain in our proposed identification scheme.
First, the identification performance could be further improved. The numerical results in Table 3 and Table 4 show that the proposed identification scheme identifies most pilots with high accuracy. However, some pilots, such as the tenth pilot in Table 3, still cannot be well identified. One possible reason is that external weather conditions, such as wind, change the pilot's behavioral traits as they work to maintain stabilization during the flight procedure, and the proposed identification scheme does not consider the pilot's behavioral traits under different weather conditions. In future work, we will design a more intelligent identification scheme that utilizes weather-robust features for drone pilot identification.
Second, different application scenarios should be utilized to further verify the proposed updating mechanism. This paper validates the proposed updating mechanism for integrating newly joined pilots under similar experimental settings. In real application scenarios, the newly joined pilots' flight data could come from different environments, which may degrade the performance of the proposed identification scheme. Furthermore, the scenario in which registered pilots leave should also be considered to enhance dynamic drone pilot membership management.
Third, more experiments should be conducted on different types of drones to validate the effectiveness of the proposed identification scheme. This paper only verified the effectiveness of the proposed identification scheme on the P450 and S500 in preset natural and constrained environments. As more types of drones are designed and deployed in application scenarios, a more general and robust identification scheme must be deployed on the drone side to further reduce pilot impersonation attacks.
With an increasing number of drones being deployed in real application scenarios, pilot legal status verification will become an indispensable part of the drone system. In the near future, we will conduct more research on drone-related attacks and design a more lightweight and robust pilot identification scheme to enhance drone flight security.

5. Conclusions

This paper has presented a novel task-incremental learning-based drone pilot identification scheme to protect drones from pilot impersonation attacks and adapt to dynamic pilot membership management. In order to verify the effectiveness of the proposed identification scheme, we conducted systematic experiments on the P450 and S500 in different environmental settings. The numerical results show that the proposed identification scheme achieves the best identification accuracy of 95.71% on the P450 and 94.23% on the S500 over 15 pilots. Furthermore, the proposed scheme only has 13 M parameters and can complete drone pilot identification within 0.031 s. Due to its high identification accuracy and low system overhead, the proposed drone pilot identification scheme demonstrates great potential to protect drones from impersonation attacks. In the future, we will consider more factors when designing an intelligent drone pilot identification scheme and conduct comprehensive experiments on different types of drones and environmental settings to verify the robustness of the proposed identification scheme.

Author Contributions

Conceptualization, Y.Z.; methodology, L.H.; validation, L.H. and X.Z.; investigation, L.H.; data curation, L.H. and X.Z.; writing—original draft preparation, L.H.; writing—review and editing, L.H.; visualization, X.Z.; supervision, Y.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are unavailable due to privacy.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
UAV    Unmanned Aerial Vehicle
GPS    Global Positioning System
GCS    Ground Control Station
MEMS   Micro-Electro-Mechanical Systems
LD     Linear Discriminant
QD     Quadratic Discriminant
SVM    Support Vector Machine
k-NN   k-Nearest Neighbors
RF     Random Forest
DT     Decision Tree
LSTM   Long Short-Term Memory
SGD    Stochastic Gradient Descent
ESC    Electronic Speed Controller
NED    North East Down
ReLU   Rectified Linear Unit
lwf    learning without forgetting
ewc    elastic weight consolidation
mas    memory aware synapses
SOTA   state-of-the-art

References

  1. Shakeri, R.; Al-Garadi, M.A.; Badawy, A.; Mohamed, A.; Khattab, T.; Al-Ali, A.K.; Harras, K.A.; Guizani, M. Design challenges of multi-UAV systems in cyber-physical applications: A comprehensive survey and future directions. IEEE Commun. Surv. Tutorials 2019, 21, 3340–3385. [Google Scholar] [CrossRef] [Green Version]
  2. Hassija, V.; Chamola, V.; Agrawal, A.; Goyal, A.; Luong, N.C.; Niyato, D.; Yu, F.R.; Guizani, M. Fast, reliable, and secure drone communication: A comprehensive survey. IEEE Commun. Surv. Tutorials 2021, 23, 2802–2832. [Google Scholar] [CrossRef]
  3. Eldosouky, A.; Ferdowsi, A.; Saad, W. Drones in distress: A game-theoretic countermeasure for protecting uavs against gps spoofing. IEEE Internet Things J. 2019, 7, 2840–2854. [Google Scholar] [CrossRef] [Green Version]
  4. Son, Y.; Shin, H.; Kim, D.; Park, Y.; Noh, J.; Choi, K.; Choi, J.; Kim, Y. Rocking drones with intentional sound noise on gyroscopic sensors. In Proceedings of the 24th {USENIX} Security Symposium ({USENIX} Security 15), Washington, DC, USA, 12–14 August 2015; pp. 881–896. [Google Scholar]
  5. Alladi, T.; Chamola, V.; Zeadally, S. Industrial control systems: Cyberattack trends and countermeasures. Comput. Commun. 2020, 155, 1–8. [Google Scholar] [CrossRef]
  6. Choudhary, G.; Sharma, V.; Gupta, T.; Kim, J.; You, I. Internet of drones (iod): Threats, vulnerability, and security perspectives. arXiv 2018, arXiv:1808.00203. [Google Scholar]
  7. Zhang, Y.; He, D.; Li, L.; Chen, B. A lightweight authentication and key agreement scheme for Internet of Drones. Comput. Commun. 2020, 154, 455–464. [Google Scholar] [CrossRef]
  8. Alladi, T.; Naren; Bansal, G.; Chamola, V.; Guizani, M. SecAuthUAV: A Novel Authentication Scheme for UAV-Ground Station and UAV-UAV Communication. IEEE Trans. Veh. Technol. 2020, 69, 15068–15077. [Google Scholar] [CrossRef]
  9. Wazid, M.; Das, A.K.; Kumar, N.; Vasilakos, A.V.; Rodrigues, J.J. Design and analysis of secure lightweight remote user authentication and key agreement scheme in internet of drones deployment. IEEE Internet Things J. 2018, 6, 3572–3584. [Google Scholar] [CrossRef]
  10. Srinivas, J.; Das, A.K.; Kumar, N.; Rodrigues, J.J. TCALAS: Temporal credential-based anonymous lightweight authentication scheme for Internet of drones environment. IEEE Trans. Veh. Technol. 2019, 68, 6903–6916. [Google Scholar] [CrossRef]
  11. Shoufan, A. Continuous authentication of uav flight command data using behaviometrics. In Proceedings of the 2017 IFIP/IEEE International Conference on Very Large Scale Integration (VLSI-SoC), Abu Dhabi, Saudi Arabia, 23–25 October 2017; pp. 1–6. [Google Scholar]
  12. Shoufan, A.; Al-Angari, H.M.; Sheikh, M.F.A.; Damiani, E. Drone pilot identification by classifying radio-control signals. IEEE Trans. Inf. Forensics Secur. 2018, 13, 2439–2447. [Google Scholar] [CrossRef]
  13. Alkadi, R.; Al-Ameri, S.; Shoufan, A.; Damiani, E. Identifying drone operator by deep learning and ensemble learning of imu and control data. IEEE Trans. Hum. Mach. Syst. 2021, 51, 451–462. [Google Scholar] [CrossRef]
  14. Balakrishnama, S.; Ganapathiraju, A. Linear discriminant analysis-a brief tutorial. Inst. Signal Inf. Process. 1998, 18, 1–8. [Google Scholar]
  15. Tharwat, A. Linear vs. quadratic discriminant analysis classifier: A tutorial. Int. J. Appl. Pattern Recognit. 2016, 3, 145–180. [Google Scholar] [CrossRef]
  16. Suthaharan, S.; Suthaharan, S. Support vector machine. In Machine Learning Models and Algorithms for Big Data Classification: Thinking With Examples for Effective Learning; Springer: Berlin/Heidelberg, Germany, 2016; pp. 207–235. [Google Scholar]
  17. Tan, S. Neighbor-weighted k-nearest neighbor for unbalanced text corpus. Expert Syst. Appl. 2005, 28, 667–671. [Google Scholar] [CrossRef] [Green Version]
  18. Belgiu, M.; Drăguţ, L. Random forest in remote sensing: A review of applications and future directions. ISPRS J. Photogramm. Remote. Sens. 2016, 114, 24–31. [Google Scholar] [CrossRef]
  19. Hochreiter, S.; Schmidhuber, J. Long short-term memory. Neural Comput. 1997, 9, 1735–1780. [Google Scholar] [CrossRef]
  20. Nanopoulos, A.; Alcock, R.; Manolopoulos, Y. Feature-based classification of time-series data. Int. J. Comput. Res. 2001, 10, 49–61. [Google Scholar]
  21. Gopinath, B.; Gupt, B. Majority voting based classification of thyroid carcinoma. Procedia Comput. Sci. 2010, 2, 265–271. [Google Scholar] [CrossRef] [Green Version]
  22. Chen, T.; Guestrin, C. Xgboost: A scalable tree boosting system. In Proceedings of the 22nd Acm Sigkdd International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, 13–17 August 2016; pp. 785–794. [Google Scholar]
  23. Kwak, B.I.; Han, M.L.; Kim, H.K. Driver identification based on wavelet transform using driving patterns. IEEE Trans. Ind. Inform. 2020, 17, 2400–2410. [Google Scholar] [CrossRef]
  24. Hallac, D.; Sharang, A.; Stahlmann, R.; Lamprecht, A.; Huber, M.; Roehder, M.; Leskovec, J.; Sosic, R. Driver identification using automobile sensor data from a single turn. In Proceedings of the 2016 IEEE 19th International Conference on Intelligent Transportation Systems (ITSC), Rio de Janeiro Brazil, 1–4 November 2016; pp. 953–958. [Google Scholar]
  25. Meier, L.; Honegger, D.; Pollefeys, M. PX4: A node-based multithreaded open source robotics framework for deeply embedded platforms. In Proceedings of the 2015 IEEE International Conference on Robotics and Automation (ICRA), Seattle, WA, USA, 26–30 May 2015; pp. 6235–6240. [Google Scholar]
  26. Wei, H.; Shao, Z.; Huang, Z.; Chen, R.; Guan, Y.; Tan, J.; Shao, Z. RT-ROS: A real-time ROS architecture on multi-core processors. Future Gener. Comput. Syst. 2016, 56, 171–178. [Google Scholar] [CrossRef]
  27. Willner, D.; Chang, C.; Dunn, K. Kalman filter algorithms for a multi-sensor system. In Proceedings of the 1976 IEEE Conference on Decision and Control Including the 15th Symposium on Adaptive Processes, Clearwater, FL, USA, 1–3 December 1976; pp. 570–574. [Google Scholar]
  28. Schlimmer, J.C.; Fisher, D. A case study of incremental concept induction. Proc. AAAI 1986, 86, 496–501. [Google Scholar]
  29. Kirkpatrick, J.; Pascanu, R.; Rabinowitz, N.; Veness, J.; Desjardins, G.; Rusu, A.A.; Milan, K.; Quan, J.; Ramalho, T.; Grabska-Barwinska, A.; et al. Overcoming catastrophic forgetting in neural networks. Proc. Natl. Acad. Sci. USA 2017, 114, 3521–3526. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  30. Aljundi, R.; Babiloni, F.; Elhoseiny, M.; Rohrbach, M.; Tuytelaars, T. Memory aware synapses: Learning what (not) to forget. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 139–154. [Google Scholar]
  31. Wolf, T.; Debut, L.; Sanh, V.; Chaumond, J.; Delangue, C.; Moi, A.; Cistac, P.; Rault, T.; Louf, R.; Funtowicz, M.; et al. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, Online, 16–20 November 2020; pp. 38–45. [Google Scholar]
  32. Li, Z.; Hoiem, D. Learning without forgetting. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 40, 2935–2947. [Google Scholar] [CrossRef] [Green Version]
  33. Rebuffi, S.A.; Kolesnikov, A.; Sperl, G.; Lampert, C.H. iCaRL: Incremental classifier and representation learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 2001–2010. [Google Scholar]
  34. Wu, Y.; Chen, Y.; Wang, L.; Ye, Y.; Liu, Z.; Guo, Y.; Fu, Y. Large scale incremental learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 374–382. [Google Scholar]
  35. Chandrashekar, G.; Sahin, F. A survey on feature selection methods. Comput. Electr. Eng. 2014, 40, 16–28. [Google Scholar] [CrossRef]
  36. Xun, Y.; Liu, J.; Kato, N.; Fang, Y.; Zhang, Y. Automobile driver fingerprinting: A new machine learning based authentication scheme. IEEE Trans. Ind. Inform. 2019, 16, 1417–1426. [Google Scholar] [CrossRef]
  37. LeCun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-based learning applied to document recognition. Proc. IEEE 1998, 86, 2278–2324. [Google Scholar] [CrossRef] [Green Version]
  38. Li, Y.; Yuan, Y. Convergence analysis of two-layer neural networks with relu activation. Adv. Neural Inf. Process. Syst. 2017, 30. [Google Scholar]
  39. Bottou, L. Stochastic gradient descent tricks. In Neural Networks: Tricks of the Trade: Second Edition; Springer: Berlin/Heidelberg, Germany, 2012; pp. 421–436. [Google Scholar]
  40. Hinton, G.; Vinyals, O.; Dean, J. Distilling the knowledge in a neural network. arXiv 2015, arXiv:1503.02531. [Google Scholar]
  41. Syarif, I.; Prugel-Bennett, A.; Wills, G. SVM parameter optimization using grid search and genetic algorithm to improve classification performance. TELKOMNIKA (Telecommun. Comput. Electron. Control) 2016, 14, 1502–1509. [Google Scholar] [CrossRef]
  42. Donahue, J.; Jia, Y.; Vinyals, O.; Hoffman, J.; Zhang, N.; Tzeng, E.; Darrell, T. Decaf: A deep convolutional activation feature for generic visual recognition. In Proceedings of the International Conference on Machine Learning, PMLR, Beijing, China, 21–26 June 2014; pp. 647–655. [Google Scholar]
  43. Girshick, R.; Donahue, J.; Darrell, T.; Malik, J. Rich feature hierarchies for accurate object detection and semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 580–587. [Google Scholar]
  44. Liu, M.; Wang, J.; Zhao, N.; Chen, Y.; Song, H.; Yu, F.R. Radio frequency fingerprint collaborative intelligent identification using incremental learning. IEEE Trans. Netw. Sci. Eng. 2021, 9, 3222–3233. [Google Scholar] [CrossRef]
  45. Liu, Y.; Wang, J.; Li, J.; Niu, S.; Song, H. Class-incremental learning for wireless device identification in IoT. IEEE Internet Things J. 2021, 8, 17227–17235. [Google Scholar] [CrossRef]
  46. Chua, L.O. CNN: A vision of complexity. Int. J. Bifurc. Chaos 1997, 7, 2219–2425. [Google Scholar] [CrossRef]
Figure 1. Application scenario.
Figure 2. Impersonation attack.
Figure 3. Incremental learning-based UAV pilot identification.
Figure 4. Pilot operation proficiency.
Figure 5. Experimental settings. All experiments were conducted on the P450 (a) and S500 (d) drones. All pilots were invited to fly the drones in natural (b) and constrained (e) environments. The flight trajectories are illustrated in (c,f).
Figure 6. Performance comparison under different incremental steps.
Table 1. Selected Topics and Their Physical Meanings.
Topic | Attribute | Minimize | Maximize | Frequency | Description
received instructions | values[0:4] | 1200 | 1800 | 2 Hz | pilot instructions
executor control | controls[0:4] | 0.5 | 1.5 | 20 Hz | flight controller instructions
actuator output | output[0:4] | 900 | 2100 | 10 Hz | instructions for ESC
accelerometer | x, y, z | −30 | 30 | 1 Hz | acceleration in body frame
gyroscope | x, y, z | −5 | 5 | 1 Hz | angular velocity in body frame
magnetometer | x, y, z | −2 | 2 | 1 Hz | magnetic field in NED
angular velocity | xyz[0:3] | −5 | 5 | 100 Hz | angular velocity in NED
local attitude | q[0:4] | −1 | 1 | 10 Hz | flight attitude in quaternions
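As a concrete illustration of how the ranges in Table 1 can be used, the following is a minimal sketch (not taken from the paper) that clips each topic reading to its Minimize/Maximize bounds and min-max scales it to [0, 1]; the dictionary TOPIC_RANGES and the helper normalize_topic are hypothetical names introduced only for this example.

```python
import numpy as np

# Hypothetical ranges copied from the Minimize/Maximize columns of Table 1;
# the dictionary and helper names below are illustrative, not the paper's code.
TOPIC_RANGES = {
    "received_instructions": (1200.0, 1800.0),  # values[0:4], 2 Hz
    "executor_control":      (0.5, 1.5),        # controls[0:4], 20 Hz
    "actuator_output":       (900.0, 2100.0),   # output[0:4], 10 Hz
    "accelerometer":         (-30.0, 30.0),     # x, y, z, 1 Hz
    "gyroscope":             (-5.0, 5.0),       # x, y, z, 1 Hz
    "magnetometer":          (-2.0, 2.0),       # x, y, z, 1 Hz
    "angular_velocity":      (-5.0, 5.0),       # xyz[0:3], 100 Hz
    "local_attitude":        (-1.0, 1.0),       # q[0:4], 10 Hz
}

def normalize_topic(readings: np.ndarray, topic: str) -> np.ndarray:
    """Clip a raw topic reading to its Table 1 range and min-max scale it to [0, 1]."""
    lo, hi = TOPIC_RANGES[topic]
    return (np.clip(readings, lo, hi) - lo) / (hi - lo)

# Example: scale two gyroscope samples (angular velocity in the body frame).
gyro = np.array([[0.1, -0.4, 2.5], [4.9, -5.2, 0.0]])
print(normalize_topic(gyro, "gyroscope"))
```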
Table 2. Environmental settings.
No. | UAV | Environment | Participants | Routines | Samples
1 | P450 | Natural | 15 | 150 | 65,337
 |  | Constrained |  |  | 61,370
2 | S500 | Natural | 15 | 150 | 62,118
 |  | Constrained |  |  | 58,921
Table 3. Drone pilot identification in natural and constrained environments based on P450.
T1T2T3T4T5T6T7T8T9T10T11T12T13T14T15
P1100.00.0.0.0.0.0.0.0.0.0.0.0.0.0.
97.730.0.0.810.0.280.1.110.0.0.0.0.0.0.
P20.99.120.0.0.0.0.0.0.0.0.810.0.0.0.
0.99.430.0.0.0.560.0.0.0.0.0.0.0.0.
P30.0.88.780.0.0.0.0.11.230.0.0.0.0.0.
0.0.98.911.120.0.0.0.0.0.0.0.0.0.0.
P40.0.0.95.540.0.0.0.0.0.3.980.540.0.0.
0.0.210.5999.220.0.0.0.0.0.0.0.0.0.0.
P50.0.0.840.85.129.710.0.0.0.0.1.220.0.3.01
0.0.0.0.97.360.2.670.0.0.0.0.0.0.0.
P60.0.0.0.0.99.240.760.0.0.0.0.0.0.0.
0.0.2.560.0.96.230.0.1.230.0.0.0.0.0.
P70.0.0.0.0.8.791.220.0.0.0.0.0.0.0.
0.0.0.0.0.0.98.220.0.1.750.0.0.0.0.
P80.0.740.0.0.0.0.99.20.0.0.0.0.0.0.
0.0.0.0.0.0.0.100.0.0.0.0.0.0.0.
P90.0.3.920.0.2.090.0.89.720.0.0.0.0.4.13
0.0.0.0.0.0.0.0.100.0.0.0.0.0.0.
P100.0.0.21.720.0.0.0.0.73.543.540.0.1.080.
0.0.0.0.0.460.0.0.0.98.320.0.1.220.0.
P110.0.0.0.650.0.0.0.0.1.0887.829.780.0.0.65
0.0.0.0.0.0.0.0.3.090.7196.220.0.0.0.
P120.0.0.3.780.0.0.0.0.0.7.1489.120.0.0.
0.0.0.0.0.0.0.0.0.0.0.100.0.0.0.
P130.0.0.0.0.0.0.0.0.0.0.0.100.0.0.
0.0.0.0.0.0.0.0.0.0.0.0.100.0.0.
P140.0.0.0.0.0.0.0.0.0.0.0.0.100.0.
0.130.0.0.0.0.0.0.0.0.0.0.140.99.360.43
P150.0.0.0.0.2.260.0.0.0.0.2.950.0.94.72
0.0.0.0.0.1.450.0.0.0.0.0.620.0.97.92
Table 4. Drone pilot identification in natural and constrained environments based on S500.
T1T2T3T4T5T6T7T8T9T10T11T12T13T14T15
P1100.00.0.0.0.0.0.0.0.0.0.0.0.0.0.
94.560.0.0.0.5.440.0.0.0.0.0.0.0.0.
P20.91.540.0.0.0.120.0.0.0.240.6.670.0.1.43
0.96.070.950.0.0.0.0.0.0.0.2.260.0.0.72
P30.10.0188.490.0.0.0.0.0.0.150.0.0.0.1.35
0.7.7491.480.0.0.0.0.0.1.050.0.0.0.0.
P40.0.0.100.0.0.0.0.0.0.0.0.0.0.0.
0.0.0.100.0.0.0.0.0.0.0.0.0.0.0.
P50.420.0.0.91.980.0.420.0.0.0.0.0.7.170.
0.0.0.0.97.470.0.0.0.0.0.0.0.2.530.
P60.0.0.0.0.98.520.590.0.0.0.0.890.0.0.
0.150.0.0.0.95.720.0.0.0.0.3.690.0.440.
P70.0.0.0.20.0.99.80.0.0.0.0.0.0.0.
0.0.0.4.880.0.93.090.0.0.0.2.030.0.0.
P80.0.0.0.0.0.0.96.980.0.0.0.3.020.0.
0.0.0.0.0.0.0.92.580.0.0.0.7.420.0.
P90.0.0.0.0.0.0.0.91.530.6.880.0.1.590.
0.0.0.0.0.0.0.0.97.620.2.380.0.0.0.
P100.0.0.0.0.0.0.0.0.100.0.0.0.0.0.
0.0.0.0.0.0.0.0.0.100.0.0.0.0.0.
P110.0.0.0.4.30.0.20.0.0.95.490.0.0.0.
0.0.0.0.8.810.1.230.0.0.89.550.0.0.20.2
P120.0.670.0.0.10.720.0.0.0.0.88.470.0.0.13
0.0.0.0.0.17.690.0.0.0.0.82.170.0.0.13
P130.0.0.0.0.0.0.0.0.0.1.790.98.210.0.
0.0.0.0.0.0.0.0.320.0.0.0.99.680.0.
P144.890.0.0.3.330.1.110.0.0.0.0.0.90.220.44
1.560.0.0.0.440.0.0.0.220.0.0.0.97.560.22
P150.422.510.310.0.0.0.2.930.0.10.0.521.260.91.94
0.3.560.0.0.0.0.730.0.0.0.1.990.0.93.72
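The entries of Tables 3 and 4 are row-wise percentages: each row reports how often flight segments of one pilot are attributed to every registered identity, so the diagonal gives the correct-identification rate. A minimal sketch of how such a matrix can be produced with scikit-learn is given below; the label arrays are hypothetical placeholders, not the paper's data.

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# Hypothetical labels for 15 registered pilots (0-14); in the paper these would be
# the ground-truth pilot IDs and the scheme's predictions on held-out flight segments.
rng = np.random.default_rng(0)
y_true = np.repeat(np.arange(15), 100)                  # 100 test segments per pilot
y_pred = np.where(rng.random(y_true.size) < 0.9,        # ~90% kept correct, rest random
                  y_true, rng.integers(0, 15, y_true.size))

cm = confusion_matrix(y_true, y_pred, labels=np.arange(15))
row_pct = 100.0 * cm / cm.sum(axis=1, keepdims=True)    # each row sums to 100%
print(np.diag(row_pct).round(2))                        # per-pilot identification accuracy
```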
Table 5. Comparison with related works.
Work | Environment | Objective | Pilots | Signals | UAV | Extensible | Accuracy
Soufan (2017) [11] | Robotic lab | Authentication | 5 | Pilot instructions | quadcopter | No | 82%
Soufan (2018) [12] |  | Identification | 20 |  |  |  | 89%
Alkadi (2021) [13] |  | Authentication | 20 | Pilot instructions, motion sensors |  | Yes | 97%
Ours | Controlled & natural environment | Identification | 15 | uORB topics |  |  | 95.71%
Table 6. Comparison with the most closely related works on P450 in natural and constrained environments.
Method | T1 | T2 | T3 | T4 | T5 | T6 | T7 | T8 | T9 | T10 | T11 | T12 | T13 | T14 | T15
QD [12] | 42.14 | 57.09 | 93.72 | 93.99 | 86.74 | 79.36 | 59.63 | 95.48 | 59.07 | 46.11 | 63.52 | 72.81 | 43.66 | 45.84 | 22.72
 | 45.32 | 63.98 | 90.47 | 87.71 | 75.43 | 82.11 | 68.55 | 90.53 | 49.67 | 51.92 | 60.34 | 69.17 | 60.22 | 58.11 | 39.73
RF [12] | 29.82 | 85.69 | 49.13 | 93.48 | 71.08 | 51.32 | 45.49 | 92.37 | 89.87 | 68.88 | 79.67 | 53.97 | 79.87 | 72.74 | 58.74
 | 35.77 | 74.51 | 45.49 | 90.32 | 59.77 | 59.14 | 52.88 | 82.17 | 80.15 | 69.87 | 62.29 | 55.41 | 78.32 | 63.33 | 51.25
Bagging [12] | 47.12 | 20.05 | 19.58 | 99.82 | 61.18 | 53.17 | 42.27 | 80.61 | 33.86 | 92.08 | 76.02 | 83.19 | 63.89 | 23.16 | 22.26
 | 40.11 | 30.79 | 17.98 | 80.34 | 53.27 | 59.16 | 38.65 | 76.44 | 30.78 | 82.17 | 77.29 | 87.44 | 62.63 | 30.35 | 29.87
DT [12] | 18.48 | 66.86 | 97.11 | 90.29 | 81.29 | 36.51 | 71.51 | 27.11 | 27.42 | 23.33 | 47.08 | 15.89 | 76.94 | 48.73 | 31.62
 | 23.88 | 60.64 | 84.87 | 75.91 | 71.17 | 45.35 | 82.44 | 34.47 | 31.82 | 38.65 | 45.22 | 19.97 | 49.12 | 59.88 | 19.65
LSTM [13] | 91.73 | 94.41 | 92.89 | 93.32 | 89.35 | 95.52 | 93.22 | 94.44 | 93.37 | 96.53 | 94.16 | 95.12 | 96.16 | 94.47 | 96.61
 | 94.42 | 91.99 | 95.25 | 94.17 | 93.89 | 95.11 | 98.22 | 93.77 | 92.29 | 91.88 | 94.37 | 94.41 | 95.33 | 94.48 | 94.37
Feature [13] | 47.21 | 68.45 | 92.73 | 95.41 | 89.98 | 83.67 | 82.81 | 79.49 | 85.62 | 81.66 | 76.27 | 80.35 | 62.51 | 74.88 | 61.67
 | 55.32 | 77.45 | 82.62 | 85.68 | 91.98 | 85.33 | 92.41 | 89.55 | 65.77 | 84.32 | 79.31 | 90.41 | 83.15 | 65.97 | 71.72
Voted [13] | 58.95 | 61.26 | 84.31 | 93.82 | 100 | 70.45 | 98.37 | 68.36 | 46.03 | 49.05 | 41.59 | 10.98 | 72.56 | 66.94 | 36.18
 | 60.45 | 60.57 | 84.62 | 94.51 | 95.27 | 71.34 | 98.44 | 64.52 | 41.27 | 49.61 | 39.85 | 12.51 | 71.19 | 68.48 | 33.75
SVM [23] | 76.02 | 63.17 | 98.21 | 100 | 90.21 | 79.38 | 90.45 | 99.65 | 89.94 | 100 | 84.24 | 56.25 | 92.93 | 91.52 | 90.24
 | 81.41 | 73.52 | 95.35 | 94.24 | 85.45 | 68.97 | 84.33 | 93.12 | 91.47 | 84.71 | 87.24 | 58.12 | 89.93 | 93.71 | 94.14
XGB [23] | 93.72 | 92.75 | 95.25 | 91.88 | 94.55 | 70.18 | 98.32 | 94.51 | 94.84 | 95.57 | 72.13 | 89.33 | 94.71 | 99.72 | 95.21
 | 88.95 | 94.17 | 85.93 | 92.08 | 99.55 | 82.18 | 96.11 | 93.42 | 89.88 | 85.99 | 82.13 | 90.16 | 88.35 | 94.32 | 96.17
RF [23] | 94.76 | 99.88 | 96.56 | 91.92 | 95.45 | 80.53 | 95.21 | 91.19 | 92.45 | 98.06 | 95.45 | 50.81 | 63.82 | 94.61 | 95.17
 | 97.33 | 92.85 | 93.71 | 90.45 | 93.16 | 84.33 | 93.35 | 93.39 | 94.17 | 95.21 | 94.42 | 63.22 | 73.41 | 91.18 | 94.57
Ours | 100 | 99.12 | 88.78 | 95.54 | 85.12 | 99.24 | 91.22 | 99.24 | 89.72 | 73.54 | 87.82 | 89.12 | 100 | 100 | 94.72
 | 97.73 | 99.43 | 98.91 | 99.22 | 97.36 | 96.23 | 98.22 | 100 | 100 | 98.32 | 96.22 | 100 | 100 | 99.36 | 97.92
Table 7. Comparison with the most closely related works on S500 in natural and constrained environments.
Method | T1 | T2 | T3 | T4 | T5 | T6 | T7 | T8 | T9 | T10 | T11 | T12 | T13 | T14 | T15
QD [12] | 22.72 | 56.73 | 63.52 | 93.99 | 63.29 | 45.19 | 35.77 | 92.51 | 79.36 | 93.72 | 59.63 | 20.37 | 67.14 | 86.44 | 35.18
 | 35.22 | 53.47 | 60.82 | 88.31 | 55.37 | 42.44 | 38.95 | 88.74 | 81.52 | 91.33 | 50.11 | 19.47 | 55.25 | 83.44 | 39.13
RF [12] | 60.12 | 56.61 | 81.01 | 91.25 | 97.89 | 69.71 | 93.91 | 69.38 | 49.73 | 49.03 | 40.61 | 12.56 | 72.38 | 65.53 | 33.59
 | 64.24 | 62.88 | 85.49 | 90.37 | 89.72 | 73.41 | 92.33 | 72.17 | 50.58 | 59.45 | 38.11 | 15.41 | 78.32 | 63.27 | 41.62
Bagging [12] | 34.19 | 52.56 | 93.33 | 99.82 | 86.56 | 32.27 | 77.25 | 24.57 | 58.64 | 41.55 | 27.21 | 61.92 | 57.14 | 34.29 | 48.48
 | 46.11 | 48.47 | 87.88 | 90.34 | 83.55 | 29.99 | 68.65 | 26.14 | 49.78 | 52.51 | 28.49 | 57.44 | 62.63 | 30.35 | 59.87
DT [12] | 33.88 | 20.64 | 84.87 | 85.91 | 84.17 | 45.35 | 56.61 | 74.47 | 51.82 | 49.49 | 35.22 | 79.31 | 62.12 | 63.22 | 52.65
 | 45.14 | 36.51 | 77.62 | 92.33 | 75.98 | 52.17 | 67.32 | 82.13 | 58.77 | 48.65 | 33.11 | 57.41 | 82.54 | 61.17 | 56.18
LSTM [13] | 87.73 | 92.41 | 88.89 | 89.22 | 93.35 | 94.35 | 91.77 | 88.48 | 93.37 | 95.21 | 91.11 | 89.12 | 89.97 | 93.32 | 91.44
 | 89.45 | 93.02 | 85.17 | 91.44 | 97.02 | 91.38 | 88.53 | 91.92 | 94.44 | 93.38 | 85.61 | 77.38 | 88.47 | 95.19 | 92.03
Feature [13] | 55.13 | 58.41 | 72.71 | 95.32 | 89.98 | 82.75 | 77.95 | 89.41 | 85.36 | 91.52 | 76.27 | 87.39 | 91.51 | 86.47 | 82.21
 | 49.33 | 59.35 | 63.92 | 95.41 | 84.12 | 73.99 | 62.37 | 85.47 | 76.11 | 93.37 | 85.11 | 82.35 | 83.44 | 84.18 | 81.67
Voted [13] | 34.39 | 92.61 | 49.13 | 96.36 | 68.71 | 47.35 | 43.64 | 98.31 | 97.04 | 88.62 | 83.25 | 63.17 | 80.35 | 74.36 | 57.91
 | 29.45 | 89.57 | 51.62 | 94.51 | 75.27 | 51.34 | 44.44 | 84.52 | 91.27 | 89.61 | 79.85 | 62.51 | 85.19 | 68.48 | 53.75
SVM [23] | 90.05 | 85.93 | 85.36 | 91.92 | 92.82 | 80.68 | 88.31 | 98.32 | 89.02 | 89.77 | 98.81 | 94.56 | 64.77 | 78.72 | 74.97
 | 86.41 | 83.17 | 75.35 | 94.24 | 90.21 | 79.38 | 90.45 | 93.62 | 89.88 | 94.71 | 84.24 | 86.25 | 61.93 | 73.71 | 80.24
XGB [23] | 93.34 | 92.21 | 91.17 | 92.04 | 89.88 | 98.67 | 68.85 | 96.31 | 65.82 | 96.61 | 94.13 | 92.52 | 60.38 | 93.32 | 93.19
 | 93.72 | 92.75 | 95.25 | 91.88 | 94.55 | 70.18 | 98.32 | 94.51 | 94.84 | 95.57 | 72.13 | 89.33 | 94.71 | 99.72 | 95.21
RF [23] | 91.95 | 93.37 | 90.44 | 96.12 | 89.03 | 98.41 | 36.68 | 100 | 75.11 | 96.66 | 96.41 | 88.79 | 99.51 | 100 | 97.38
 | 92.37 | 90.48 | 93.71 | 91.92 | 95.45 | 80.53 | 95.21 | 91.19 | 92.45 | 98.06 | 95.45 | 50.81 | 63.82 | 94.61 | 95.17
Ours | 100 | 91.54 | 88.49 | 100 | 91.98 | 98.52 | 99.81 | 96.98 | 91.53 | 100 | 95.49 | 88.47 | 98.21 | 90.22 | 91.94
 | 94.56 | 96.07 | 91.48 | 100 | 97.47 | 95.72 | 93.09 | 92.58 | 97.62 | 100 | 89.55 | 82.17 | 99.68 | 97.56 | 93.72
Table 8. Comparison with standard incremental learning algorithms under different incremental steps.
Method | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15
LwF [32] | 97.58 | 93.44 | 89.26 | 83.12 | 76.45 | 72.22 | 68.27 | 64.66 | 61.25
 | 96.41 | - | 89.33 | - | 81.12 | - | 76.19 | - | 64.25
 | 97.33 | - | - | - | 83.39 | - | - | - | 65.58
EWC [29] | 98.22 | 95.36 | 92.41 | 88.99 | 85.68 | 84.44 | 83.39 | 83.22 | 81.08
 | 99.18 | - | 93.46 | - | 87.75 | - | 84.44 | - | 82.08
 | 97.32 | - | - | - | 85.59 | - | - | - | 83.14
MAS [30] | 98.11 | 95.45 | 88.67 | 85.44 | 82.21 | 74.65 | 69.91 | 65.44 | 60.12
 | 97.26 | - | 86.12 | - | 75.34 | - | 67.22 | - | 63.72
 | 97.12 | - | - | - | 79.11 | - | - | - | 64.41
Ours | 98.02 | 98.76 | 98.02 | 97.51 | 95.52 | 94.49 | 94.22 | 93.98 | 92.94
 | 99.02 | - | 97.33 | - | 94.46 | - | 93.18 | - | 93.18
 | 97.94 | - | - | - | 95.67 | - | - | - | 92.94
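For context on the regularization-based baselines in Table 8, the sketch below shows the kind of quadratic penalty that EWC [29] adds to the loss on a newly registered pilot group so that parameters important for previously registered pilots change little. The class name, the default lambda, and the way the Fisher estimates are supplied are assumptions made for illustration; this is not the paper's implementation.

```python
import torch

# Minimal sketch of an EWC-style penalty in the spirit of [29]; names and
# hyper-parameters are illustrative assumptions, not the paper's code.
class EWCPenalty:
    def __init__(self, fisher: dict, old_params: dict, lam: float = 100.0):
        self.fisher = fisher          # per-parameter Fisher estimates from the previous task
        self.old_params = old_params  # copy of the parameters learned on the previous task
        self.lam = lam

    def __call__(self, model: torch.nn.Module) -> torch.Tensor:
        penalty = torch.zeros(())
        for name, p in model.named_parameters():
            penalty = penalty + (self.fisher[name] * (p - self.old_params[name]) ** 2).sum()
        return 0.5 * self.lam * penalty

# Usage during training on a new pilot group (loss names are placeholders):
#   loss = cross_entropy(model(batch), labels) + ewc_penalty(model)
```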