Article

Adversarial Attack Defense Method for a Continuous-Variable Quantum Key Distribution System Based on Kernel Robust Manifold Non-Negative Matrix Factorization

Yuwen Fu, E. Xia, Duan Huang and Yumei Jing
1 School of Automation, Central South University, Changsha 410017, China
2 School of Computer Science, Central South University, Changsha 410017, China
3 School of Physics, Central South University, Changsha 410017, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(17), 9928; https://doi.org/10.3390/app13179928
Submission received: 20 July 2023 / Revised: 26 August 2023 / Accepted: 31 August 2023 / Published: 2 September 2023

Abstract

Machine learning has been applied in continuous-variable quantum key distribution (CVQKD) systems to address the growing threat of quantum hacking attacks. However, the use of machine learning algorithms for detecting these attacks has uncovered a vulnerability to adversarial disturbances that can compromise security. By subtly perturbing the detection networks used in CVQKD, significant misclassifications can occur. To address this issue, we utilize an adversarial sample defense method based on non-negative matrix factorization (NMF), considering the nonlinearity and high-dimensional nature of CVQKD data. Specifically, we employ the Kernel Robust Manifold Non-negative Matrix Factorization (KRMNMF) algorithm to reconstruct input samples, reducing the impact of adversarial perturbations. First, we extract attack features against CVQKD by considering the adversary known as Eve. Then, we design an Artificial Neural Network (ANN) detection model to identify these attacks. Next, we introduce adversarial perturbations into the data generated by Eve. Finally, we use the KRMNMF decomposition to extract features from CVQKD data and mitigate the influence of adversarial perturbations through reconstruction. Experimental results demonstrate that the application of KRMNMF can effectively defend against adversarial attacks to a certain extent. The accuracy of KRMNMF surpasses the commonly used Comdefend method by 32.2% and the JPEG method by 30.8%. Moreover, it exhibits an improvement of 20.8% compared to NMF and outperforms other NMF-related algorithms in terms of classification accuracy. Furthermore, it can complement other defense strategies, thus enhancing the overall defensive capabilities of CVQKD systems.

1. Introduction

Driven by advancements in quantum secure communication technology, Quantum Key Distribution (QKD) has experienced widespread application and development [1]. QKD enables secure communication between two remote parties, Alice and Bob, in an untrusted environment by transmitting quantum states that are resistant to eavesdropping by Eve [2]. Leveraging the principles of quantum mechanics, QKD provides information-theoretic security to the legitimate communication parties [3]. However, achieving unconditional security depends on flawless device operation within a perfect model. In reality, deviations between theoretical and actual QKD implementations create opportunities for Eve to intercept information from the legitimate parties. Attacks such as wavelength attacks [4], local oscillator (LO) intensity attacks [5], calibration attacks [6], and saturation attacks [7] have been observed in protocols like Gaussian-modulated coherent state (GMCS) Continuous Variable Quantum Key Distribution (CVQKD). Traditional defense strategies rely on real-time monitoring modules or measurement devices that require extensive prior knowledge and precise parameter estimation. However, because Eve's attacks are unpredictable, it is challenging to anticipate the specific type of attack employed. To address this challenge, detection and classification networks have been developed for practical CVQKD systems.
The field of CVQKD defense has greatly benefited from the rapid advancement of machine learning. Quezada et al. [8] focused on machine learning to enhance the performance of QKD protocols, which has played a crucial role in furthering the practical implementation of CVQKD. In previous studies, we proposed defense strategies utilizing machine learning algorithms, particularly by extracting and analyzing the statistical distribution of features from the measurement data obtained in the CVQKD system. These features were used to detect and classify attacks under both specific and generalized scenarios [9,10]. However, recent research has uncovered a vulnerability of deep learning models known as adversarial samples: intentionally designed inputs that can deceive even well-performing deep learning models with minimal, barely detectable disturbances. These samples lead to significant deviations in classification outcomes while remaining imperceptible to the human eye. For example, in image classification, an attacker can introduce subtle perturbations into a given image, transforming it into an adversarial sample. When such samples are fed into a well-trained deep neural network, they are commonly misclassified. Su et al. [11] demonstrated this by generating one-pixel disturbances in image samples and successfully tricking a state-of-the-art deep neural network with high confidence. Moreover, recent studies indicate that adversarial examples can have real-world implications: attackers can manipulate physical objects by, for example, deleting segments of pedestrians or manipulating stop signs to confuse the recognition system of an autonomous vehicle [12]. Since machine learning algorithms process inputs as numerical vectors, attackers can readily construct targeted adversarial data to manipulate classification results. It is widely accepted that adversarial samples are prevalent in classical machine learning, regardless of the input data type or neural network architecture. Importantly, almost all learning models, such as logistic regression (e.g., soft-max regression), support vector machines (SVMs), decision trees, nearest neighbors, and deep learning models, are susceptible to adversarial attacks [13,14].
Dimensionality reduction through feature extraction is a powerful technique for mitigating adversarial attacks, both in computer vision and on CVQKD transmission data. Traditional methods for feature extraction include Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), Independent Component Analysis (ICA), Locally Linear Embedding (LLE), Vector Quantization (VQ), and Non-Negative Matrix Factorization (NMF) [15,16,17,18,19]. NMF, initially introduced by Lee and Seung in Nature, is a matrix decomposition technique that constrains all decomposed components to be non-negative. This property enables additive, parts-based descriptions and facilitates dimensionality reduction. In this study, NMF is utilized to extract features from time-frequency image data with the intention of reducing dimensionality while preserving the discriminative information among samples [20]. The nonlinearity and non-stationarity of CVQKD data are crucial considerations in this study. To extract nonlinear features from the data, we employ kernel functions and compute the objective function using the $L_{2,1}$ norm. Moreover, we incorporate a graph regularization term to capture the geometric structure of the feature space; the resulting method is referred to as Kernel Robust Manifold Non-Negative Matrix Factorization (KRMNMF).
We begin by presenting the fundamental structure of CVQKD and describing the classical forms of Eve's attacks. Then, we introduce the neural-network-based defense method for CVQKD and analyze its vulnerability to adversarial attacks. In our previous work [21], we proposed a defense method against adversarial attacks in CVQKD; however, that method faced challenges such as the limitations imposed by local features. To overcome these challenges, a novel defense framework based on the KRMNMF method is proposed for CVQKD adversarial attacks. The method recognizes the nonlinear and non-stationary characteristics of CVQKD signal data and addresses them by incorporating kernel functions and leveraging manifold learning theory to capture the geometric structure of the feature space. The algorithm exhibits strong feature extraction capabilities compared to traditional methods. By reconstructing the samples with this algorithm, the influence of adversarial perturbations is minimized, enabling accurate classification by the CVQKD neural network model and effective adversarial defense. By employing our strategy, CVQKD network models can defend against adversarial samples in real time across various physical scenarios, enhancing the overall security against potential adversarial-sample attacks.
This paper is organized as follows: In Section 2, we first summarize the principle of the CVQKD system and then demonstrate the impact of adversarial attacks on CVQKD attack detection networks. In Section 3, we introduce the proposed KRMNMF algorithm. We provide a detailed description of the defense framework and experimental setup to demonstrate the effectiveness of our method in Section 4. Finally, a brief conclusion is given in Section 5.

2. Preliminaries

2.1. Principle and System Description of CVQKD

The security of Quantum Key Distribution (QKD) protocols relies on Heisenberg's uncertainty principle and the quantum no-cloning theorem, allowing the exchange of unconditionally secure keys for communication [2]. In QKD, Alice and Bob are the two communicating parties, utilizing quantum information carriers such as single photons or photon beams [4]. However, the presence of an eavesdropper, referred to as Eve, introduces interference. This interference is detected by Alice and Bob, leading to the cancellation of the communication to preserve information security. While discrete-variable QKD relies on single photons to encode information, continuous-variable QKD (CVQKD) utilizes continuous quantum variables of the electromagnetic field, such as the quadratures of optical field modes, to carry information. Both are prepare-and-measure techniques; the key difference lies in the variable used to encode the key. CVQKD has the advantage of integrating with existing optical fiber networks and shows significant potential for further development.
Figure 1 illustrates the schematic of a typical CVQKD system employing the GMCS CVQKD methodology with homodyne detection. Initially, Alice generates coherent light pulses with a wavelength of 1550 nm using a telecom laser diode [22]. These pulses are split into a weak signal and a strong local oscillator (LO) by a beam splitter. Random modulation, involving phase and amplitude modulators, is applied to the signal pulses so that they follow a Gaussian distribution with variance $V_0 N_0$. The signal pulses and the LO are polarization-multiplexed using a polarizing beam splitter. Subsequently, Alice transmits the modulated pulses to Bob through a quantum channel that may be susceptible to eavesdropping by Eve. Importantly, Eve may launch both common CVQKD attacks and adversarial attacks simultaneously during transmission. Upon reaching Bob's end, the signal pulses and the LO are separated by a polarizing beam splitter, and homodyne detection is performed.
Moreover, within Bob's signal path, a fraction of the signal pulses undergoes random attenuation to facilitate the timely measurement of shot noise, while the remaining pulses stay unattenuated. A fraction of the LO pulses is also split off for monitoring the LO power and generating the clock. By incorporating a phase modulator in the LO path, Bob can select the quadrature to be detected by adjusting the measurement phase randomly. Ultimately, the measurement results are forwarded to the data processing center for sampling and the detection of potential attacks. Thus, two correlated data strings, $x = [x_1, x_2, \ldots, x_n]$ and $y = [y_1, y_2, \ldots, y_n]$, are obtained by Alice and Bob, which satisfy the following [23]:
$\bar{x} = 0, \quad V_x = V_0 N_0; \qquad \bar{y} = 0, \quad V_y = \kappa T V_0 N_0 + N_0 + \kappa T \xi + v_{el} N_0$
where the quadrature values $X_A$ and $P_A$ obey a Gaussian distribution with variance $V_0 N_0$; $T$ represents the quantum channel transmittance; $\kappa$ denotes the efficiency of the homodyne detector; $v_{el}$ represents the electronic noise coefficient of the detector; and $\xi$ is the technical excess noise of the system. We mainly consider four common attack strategies against CVQKD systems: the calibration attack, the LO intensity attack, the saturation attack, and the hybrid attack. These four attacks affect different characteristics of the actual CVQKD system, such as the intensity $I_{LO}$ of the LO pulses and the shot noise variance $N_0$. The feature vectors collected by the CVQKD system under the four attacks are highly nonlinear, and the features of the same event are strongly correlated, so deep learning models are well suited to diagnosing the signals. The objective of the deep learning model in CVQKD is to obtain an output vector $v$ from the input vector $u$ by carefully designing a function $f: u \mapsto v$, which is constructed from a training set
$S_{\text{train}} = \{ (u_1, v_1), (u_2, v_2), (u_3, v_3), \ldots \}$
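As a numerical check of the variance relation above, the following minimal Python sketch evaluates $V_x$ and $V_y$; the parameter values are illustrative assumptions, not values reported in this paper:

```python
# Illustrative (assumed) system parameters, in shot-noise units.
N0 = 1.0     # shot-noise variance
V0 = 4.0     # Alice's modulation variance (in units of N0)
T = 0.5      # quantum-channel transmittance
kappa = 0.6  # homodyne-detector efficiency
xi = 0.01    # technical excess noise
v_el = 0.05  # detector electronic-noise coefficient

V_x = V0 * N0
V_y = kappa * T * V0 * N0 + N0 + kappa * T * xi + v_el * N0
print(f"V_x = {V_x:.3f}, V_y = {V_y:.3f}")  # channel loss shrinks Bob's variance
```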

2.2. Adversarial Attacks in CVQKD

The defense method for CVQKD considered here is based on deep learning, employing deep learning models to identify and analyze the attack characteristics of an eavesdropper (referred to as Eve) within the CVQKD system. This approach facilitates the classification of Eve's attacks and enables the selection of targeted defensive measures to mitigate them. However, deep learning techniques are vulnerable to interference from adversarial samples during both training and testing. Such samples can cause significant deviations in the predictions of the neural network, even though their differences from the original samples are imperceptible to the human eye. For instance, in image classification scenarios, an attacker can introduce subtle but targeted perturbations to clean samples, causing them to be misclassified by otherwise well-performing neural network models.
The proliferation of adversarial attacks presents significant security challenges for artificial intelligence (AI) systems. These attacks have also permeated the realm of physical applications, resulting in severe consequences. In the context of CVQKD networks, the examination of adversarial samples is essential to ensure their security. Regarding CVQKD systems, it has been observed that they are susceptible to adversarial disturbances due to the inherent vulnerability of quantum devices to adversarial attacks. This vulnerability is influenced by the following factors:
  • Nonlinear Nature of Quantum Measurements: CVQKD systems rely on quantum measurements which introduce nonlinearity in the detection process. Adversarial disturbances can exploit this nonlinearity to manipulate the measurement outcomes and compromise the security of the system.
  • Sensitivity to Measurement Conditions: CVQKD systems are sensitive to measurement conditions such as the intensity of local oscillator (LO) pulses and shot noise variance. Adversaries can manipulate these conditions to introduce additional noise or alter the statistical properties of the measurement results, leading to compromised key distribution.
  • Imperfections in Quantum Devices: The quantum devices used in CVQKD systems, such as homodyne detectors, suffer from imperfections like electronic noise and technical excess noise. Adversarial attacks can exploit these imperfections to inject additional noise or modify the measurement outcomes.
We further demonstrate the vulnerability of quantum devices to adversarial perturbations in Appendix A.
Notably, several classical and advanced adversarial attack methods have been identified, including the fast gradient sign method (FGSM) [24], the basic iterative method (BIM) [25], projected gradient descent (PGD) [26], the query-efficient boundary-based black-box attack (QEBA) [27], physical perturbations (RP2) [28], and adversarial camouflage (AdvCam). These attack methods are summarized in Table 1. Adversarial attacks also pose a threat to quantum communication systems, particularly due to system linearization. For instance, adversarial samples crafted by Eve can deceive pretrained attack classification models, rendering traditional CVQKD defense methods ineffective. To assess the performance of adversarial attacks, we use a trained classifier and adopt the FGSM method. Figure 2 shows the confusion matrices of the artificial neural network (ANN) for CVQKD attack detection and classification [29]. Under normal circumstances, the ANN can effectively distinguish between normal signals and the four types of attacks. However, introducing an FGSM adversarial perturbation of strength 0.3 into the communication channel significantly decreases the classification accuracy of the system.
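As a point of reference, FGSM perturbs an input along the sign of the loss gradient. A minimal Python sketch, assuming the gradient of the classifier's loss with respect to the input is available to the attacker (the names here are illustrative, not from the paper):

```python
import numpy as np

def fgsm_perturb(x, grad_loss_x, epsilon=0.3):
    """One-step FGSM: shift every input feature by epsilon in the
    direction that increases the classifier's loss.

    x           -- clean feature vector(s) collected by the CVQKD system
    grad_loss_x -- gradient of the loss with respect to x, supplied by
                   the attacked model (assumed available to the attacker)
    epsilon     -- perturbation strength (0.3 in the experiment above)
    """
    return x + epsilon * np.sign(grad_loss_x)
```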

3. Methods

In this paper, a reconstruction method based on the NMF algorithm, operating in front of the neural network classifier, is proposed to defend against adversarial attacks in the field of CVQKD. CVQKD transmission data are often high-dimensional, nonlinear, and noisy. The dimensionality-reduction property of the NMF algorithm can be used to reduce the influence of perturbations in adversarial samples. In decomposing the original matrix $X$ with NMF, the goal is to find two non-negative low-rank matrices, the basis matrix $W$ and the coefficient matrix $H$, such that $X \approx WH$. Adversarial perturbations do not fit this low-rank structure well and are suppressed by the decomposition. Consequently, owing to the approximation error between the matrix before and after decomposition, some imperceptible adversarial perturbations can be eliminated, and the reconstructed samples reduce the influence of adversarial perturbations.
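A minimal sketch of this reconstruction defense, using scikit-learn's generic NMF as a stand-in (the rank, initialization, and iteration budget are assumptions; samples are rows here, transposed relative to the column-sample convention in the equations below):

```python
import numpy as np
from sklearn.decomposition import NMF

def nmf_reconstruct(X, rank=25, seed=0):
    """Factor the non-negative sample matrix X (samples as rows) into
    W @ H of rank `rank` and return the reconstruction.  Perturbations
    that do not fit the low-rank non-negative structure are largely
    discarded by the factorization."""
    model = NMF(n_components=rank, init="nndsvda", max_iter=500,
                random_state=seed)
    W = model.fit_transform(X)  # activations, shape (n_samples, rank)
    H = model.components_       # basis,       shape (rank, n_features)
    return W @ H
```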
Based on this, the loss function of the NMF algorithm is selected according to the data type and application scenario. Because CVQKD data are nonlinear and non-stationary and contain noise that affects the decomposition results, this paper proposes the Kernel Robust Manifold Non-negative Matrix Factorization (KRMNMF) method, which extracts the nonlinear features in the data using a kernel function. The $L_{2,1}$ norm is used to calculate the objective function to reduce the impact of noise in the data. A graph regularization term is also employed to capture the geometric structure of the feature space, achieving superior adversarial defense results.
The objective of NMF is to decompose a high-dimensional non-negative matrix $X \in \mathbb{R}_+^{m \times n}$ into two non-negative low-rank matrices, $W \in \mathbb{R}_+^{m \times k}$ and $H \in \mathbb{R}_+^{k \times n}$, such that the product of these two matrices closely approximates the original matrix. This can be formally expressed as follows:
$X_{m \times n} \approx W_{m \times k} H_{k \times n}$
where $X = [x_1, \ldots, x_n]$ represents the original data matrix, $W = [w_1, \ldots, w_k]$ denotes the basis matrix, $H = [h_1, \ldots, h_n]$ represents the coefficient matrix, and $k \ll m$. The standard NMF utilizes the Euclidean distance to compute the loss function, which can be expressed as follows:
$\min_{W,H} \| X - WH \|_F^2 \quad \text{s.t.} \quad W \ge 0, \; H \ge 0$
where $\| \cdot \|_F$ represents the Frobenius norm of a matrix.
Robust Non-negative Matrix Factorization (RNMF) employs the $L_{2,1}$ norm to calculate the error, thereby reducing the influence of outliers with large errors and enhancing robustness. The objective function of RNMF can be formulated as follows:
$\min_{W,H} \| X - WH \|_{2,1} \quad \text{s.t.} \quad W \ge 0, \; H \ge 0.$
The $L_{2,1}$ norm is defined as
$\| X \|_{2,1} = \sum_{i=1}^{n} \sqrt{\sum_{j=1}^{m} X_{ji}^2} = \sum_{i=1}^{n} \| x_i \|$
where $\| \cdot \|_{2,1}$ refers to the $L_{2,1}$ norm, and $x_i$ represents the $i$-th column vector of $X$.
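The definition translates directly into one line of code; a small check in Python, treating the columns of $X$ as the samples $x_i$:

```python
import numpy as np

def l21_norm(X):
    """L_{2,1} norm: the sum of the Euclidean norms of the columns of X,
    matching the definition above (x_i is the i-th column)."""
    return float(np.sum(np.linalg.norm(X, axis=0)))

# Quick sanity check against the definition.
X = np.array([[3.0, 0.0], [4.0, 2.0]])
assert l21_norm(X) == 5.0 + 2.0  # columns (3,4) and (0,2)
```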
Based on the theory of manifold learning [30], manifold regularization can reduce the complexity of data in a high-dimensional space while preserving the proximity relationships between neighboring data points in the low-dimensional manifold space. This makes it beneficial for dimensionality reduction and feature selection. By projecting the data onto the low-dimensional manifold space, we can reduce the dimensionality of the data while retaining the important information that survives in this low-dimensional space. Through manifold regularization, adversarial perturbation information can be separated out, which helps eliminate adversarial perturbations during reconstruction.
The objective function of KRMNMF is defined as follows:
$\min_{F,H} \| \phi(X) - \phi(X) F H \|_{2,1} + \lambda \operatorname{Tr}(H L H^T) \quad \text{s.t.} \quad F \ge 0, \; H \ge 0$
where $X \in \mathbb{R}_+^{m \times n}$, $F \in \mathbb{R}_+^{n \times k}$, and $H \in \mathbb{R}_+^{k \times n}$; $\phi$ is the Gaussian kernel feature mapping $x_i \mapsto \phi(x_i)$, where $x_i$ denotes the $i$-th sample point in the original data space $X$, i.e., $X \mapsto \phi(X)$; $\lambda$ is a non-negative regularization parameter; $\operatorname{Tr}(\cdot)$ denotes the trace of a matrix; and $L$ is the graph Laplacian defined as $L = V - W$, where $W$ here is the adjacency matrix of the nearest-neighbor graph over the samples (not the NMF basis matrix above) and $V$ is the diagonal degree matrix with $V_{ii} = \sum_j W_{ij}$.
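A sketch of how the graph matrices can be built for column-sample data, assuming a binary a-nearest-neighbour weighting (the weighting scheme is our assumption; the paper does not specify it):

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph

def knn_laplacian(X, a=3):
    """Return the adjacency matrix W, the degree matrix V (V_ii = sum_j W_ij),
    and the Laplacian L = V - W for the a-nearest-neighbour graph over
    the columns of X (each column is a sample)."""
    A = kneighbors_graph(X.T, n_neighbors=a, mode="connectivity")
    W = np.asarray(A.todense())
    W = np.maximum(W, W.T)          # symmetrize the neighbourhood relation
    V = np.diag(W.sum(axis=1))
    return W, V, V - W
```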
The objective function of KRMNMF is not jointly convex with respect to $F$ and $H$; however, it is convex in each variable when the other is fixed. In this paper, the Lagrangian multiplier method is adopted to solve the problem. An equivalent form of the objective function is as follows:
$\min_{F,H} \operatorname{Tr}\big( (\phi(X) - \phi(X) F H) \, D \, (\phi(X) - \phi(X) F H)^T \big) + \lambda \operatorname{Tr}(H L H^T)$
where D is a diagonal matrix that satisfies
$D_{ii} = \Big( \sum_{j} \big( \phi(X) - \phi(X) F H \big)_{ji}^2 \Big)^{-1/2}$
Let $K = \phi(X)^T \phi(X)$ denote the kernel matrix, i.e., $K_{ij} = \phi(x_i)^T \phi(x_j)$; therefore,
$D_{ii} = \big[ (I - F H)^T K (I - F H) \big]_{ii}^{-1/2}$
where I represents the identity matrix. Equation (7) is equivalent to
$\min_{F,H} \operatorname{Tr}(\phi(X) D \phi(X)^T) - 2 \operatorname{Tr}(\phi(X) D H^T F^T \phi(X)^T) + \operatorname{Tr}(\phi(X) F H D H^T F^T \phi(X)^T) + \lambda \operatorname{Tr}(H L H^T).$
Let Ψ and Ω be the Lagrange multipliers for F and H , respectively. The Lagrangian function with respect to the objective function is given by the following:
$\mathcal{L} = \operatorname{Tr}(\phi(X) D \phi(X)^T) - 2 \operatorname{Tr}(\phi(X) D H^T F^T \phi(X)^T) + \operatorname{Tr}(\phi(X) F H D H^T F^T \phi(X)^T) + \lambda \operatorname{Tr}(H L H^T) + \operatorname{Tr}(\Psi^T F) + \operatorname{Tr}(\Omega^T H).$
Taking partial derivatives with respect to F and H , respectively, yields the following:
$\dfrac{\partial \mathcal{L}}{\partial F} = -2 \phi(X)^T \phi(X) D H^T + 2 \phi(X)^T \phi(X) F H D H^T + \Psi$
$\dfrac{\partial \mathcal{L}}{\partial H} = -2 F^T \phi(X)^T \phi(X) D + 2 H \phi(X)^T \phi(X) D F F^T + 2 \lambda H V - 2 \lambda H W + \Omega.$
From the Karush–Kuhn–Tucker (KKT) conditions $\Psi_{nk} F_{nk} = 0$ and $\Omega_{kn} H_{kn} = 0$, it follows that
$\big( -2 \phi(X)^T \phi(X) D H^T + 2 \phi(X)^T \phi(X) F H D H^T \big)_{nk} F_{nk} + \Psi_{nk} F_{nk} = 0$
$\big( -2 F^T \phi(X)^T \phi(X) D + 2 H \phi(X)^T \phi(X) D F F^T + 2 \lambda H V - 2 \lambda H W \big)_{kn} H_{kn} + \Omega_{kn} H_{kn} = 0.$
Based on the above analysis, the updating rules for F and H can be obtained as follows:
$F_{nk} \leftarrow F_{nk} \dfrac{\big( \phi(X)^T \phi(X) D H^T \big)_{nk}}{\big( \phi(X)^T \phi(X) F H D H^T \big)_{nk}}$
$H_{kn} \leftarrow H_{kn} \dfrac{\big( F^T \phi(X)^T \phi(X) D + \lambda H W \big)_{kn}}{\big( H \phi(X)^T \phi(X) D F F^T + \lambda H V \big)_{kn}}.$
Substituting the kernel matrix $K = \phi(X)^T \phi(X)$ yields
$F_{nk} \leftarrow F_{nk} \dfrac{\big( K D H^T \big)_{nk}}{\big( K F H D H^T \big)_{nk}}$
$H_{kn} \leftarrow H_{kn} \dfrac{\big( F^T K D + \lambda H W \big)_{kn}}{\big( H K D F F^T + \lambda H V \big)_{kn}}.$
In summary, we have obtained a description of the KRMNMF algorithm (shown as Algorithm 1).
Algorithm 1 KRMNMF Algorithm
Input: data matrix $X \in \mathbb{R}_+^{m \times n}$; parameters $a$, $\lambda$, $k$.
Output: matrices $F \in \mathbb{R}_+^{n \times k}$, $H \in \mathbb{R}_+^{k \times n}$.
Initialization: random non-negative $F$ and $H$.
While not converged, do:
1. Compute the adjacency matrix $W$, the degree matrix $V$, and the Laplacian $L$.
2. Check whether the objective function (6) has converged.
3. Update $F$ according to Equation (18).
4. Update $H$ according to Equation (19).
Return matrices $F$ and $H$.
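The multiplicative updates translate directly into code. Below is a minimal Python sketch of Algorithm 1, assuming a Gaussian kernel with a fixed bandwidth and a fixed iteration count in place of the convergence test, and reusing the knn_laplacian sketch above; it illustrates update rules (18) and (19) in kernel form and is not the authors' implementation (the experiments were run in MATLAB).

```python
import numpy as np

def gaussian_kernel(X, sigma=1.0):
    """K_ij = exp(-||x_i - x_j||^2 / (2 sigma^2)) over the columns of X."""
    sq = np.sum(X ** 2, axis=0)
    d2 = sq[:, None] + sq[None, :] - 2.0 * (X.T @ X)
    return np.exp(-np.maximum(d2, 0.0) / (2.0 * sigma ** 2))

def krmnmf(X, k=25, a=3, lam=1.0, n_iter=200, eps=1e-10, seed=0):
    """KRMNMF by multiplicative updates: phi(X) ~ phi(X) F H with an
    L_{2,1} loss (reweighting matrix D) and graph regularization."""
    rng = np.random.default_rng(seed)
    n = X.shape[1]
    K = gaussian_kernel(X)
    W_g, V_g, _ = knn_laplacian(X, a)   # graph adjacency/degree matrices
    F = rng.random((n, k))
    H = rng.random((k, n))
    I = np.eye(n)
    for _ in range(n_iter):
        R = I - F @ H                   # residual coefficients
        D = np.diag(1.0 / np.sqrt(np.maximum(np.diag(R.T @ K @ R), eps)))
        F *= (K @ D @ H.T) / (K @ F @ H @ D @ H.T + eps)            # Eq. (18)
        H *= (F.T @ K @ D + lam * H @ W_g) / (
             H @ K @ D @ F @ F.T + lam * H @ V_g + eps)             # Eq. (19)
    return F, H
```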

4. Experiments

Despite the extensive application of non-negative matrix factorization (NMF) algorithms in various domains, they had not previously been combined with adversarial sample defense. This section presents the parameter selection and comparative experiments of the KRMNMF algorithm in defending CVQKD. The experimental environment consisted of an AMD RYZEN 8 processor (Advanced Micro Devices, Santa Clara, CA, USA), MATLAB 2022b, and an RTX 3080 graphics card (Nvidia, Santa Clara, CA, USA). The experimental framework can be divided into the following four steps. First, CVQKD datasets are generated under different attacks by Eve [9]. Second, a classifier, in this case an ANN, is trained on clean datasets to achieve high recognition accuracy for clean samples. Third, the FGSM algorithm transforms clean samples into adversarial samples, which, when input into the classifier, cause a significant decrease in its recognition accuracy. Finally, the KRMNMF algorithm reconstructs each adversarial sample, and the reconstructed samples are fed to the classifier to compute the classification accuracy, quantifying the defense effect. The specific defense process is illustrated in Figure 3.
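A hypothetical sketch of this four-step loop, tying together the fgsm_perturb and krmnmf sketches above. The classifier's predict function, its loss gradient, and the labelled CVQKD dataset (samples as columns of X_clean) are assumed to exist, and reconstructing $X F H$ in the input space is a simplifying proxy for the kernel-space approximation $\phi(X) \approx \phi(X) F H$:

```python
import numpy as np

def run_defense_experiment(predict, loss_grad, X_clean, y,
                           epsilon=0.3, k=25, a=3, lam=1.0):
    """Return classification accuracy on adversarial samples before and
    after KRMNMF reconstruction (all names except krmnmf/fgsm_perturb
    are assumed to be supplied by the surrounding experiment code)."""
    # Step 3: craft adversarial samples; clip to keep them non-negative.
    X_adv = np.clip(fgsm_perturb(X_clean, loss_grad(X_clean, y), epsilon),
                    0.0, None)
    acc_adv = float(np.mean(predict(X_adv) == y))
    # Step 4: reconstruct with KRMNMF, then classify the reconstructions.
    F, H = krmnmf(X_adv, k=k, a=a, lam=lam)
    X_rec = X_adv @ F @ H               # input-space proxy reconstruction
    acc_def = float(np.mean(predict(X_rec) == y))
    return acc_adv, acc_def
```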
The parameter variables involved in the KRMNMF algorithm mainly include the choice of kernel function, the number of nearest-neighbor samples used in the adjacency matrix ($W$), denoted $a$, the regularization parameter ($\lambda$), and the dimensionality of the decomposition matrix ($k$). In this study, the impact of the remaining parameters on the KRMNMF algorithm was compared with a polynomial kernel function chosen. Ultimately, $a$ was set to 3, $\lambda$ to 1, and $k$ to 25, which yielded the best performance for the KRMNMF algorithm.
To validate the defense effectiveness of the KRMNMF algorithm against adversarial attacks in the CVQKD system, we conducted comparative experiments on the classification performance of the ANN model before and after adding adversarial defense. In these experiments, the QEBA attack strength was set to 0.1. As shown in Figure 4, the comparative defense models were JPEG [31] and ComDefend [32]. The experiments used the open-source code and the best parameter settings provided in the corresponding literature. The comparison results presented in Table 2 indicate that the proposed defense scheme exhibits outstanding performance in the CVQKD system, with adversarial perturbation classification accuracy surpassing that of the comparative methods. For instance, KRMNMF defense alone achieves an average classification accuracy of 71.6%, much better than the 39.4% achieved by the Comdefend defense and the 40.8% achieved by the JPEG defense. Combining KRMNMF with the JPEG or Comdefend defense further raises the accuracy to 78.8% and 79.5%, respectively, outperforming the other methods. Our approach thus demonstrates significant advantages in filtering adversarial attacks, making it a highly effective defense strategy for the CVQKD system. It is worth noting that the KRMNMF method can be flexibly combined with other algorithms and achieves better results in such combinations, further demonstrating its value.
To better evaluate the effectiveness of our algorithm, we compared it with existing classical NMF, kernel non-negative matrix factorization (KNMF), robust non-negative matrix factorization (RNMF), manifold non-negative matrix factorization (GNMF), and recent methods such as local non-negative matrix factorization (LNMF) [33], kernel robust non-negative matrix factorization (KRNMF) [34], and sparse non-negative matrix factorization (SNMF) [35]. We compared these algorithms in terms of feature extraction dimensionality and classification accuracy.
According to Table 3, the NMF, KNMF, RNMF, GNMF, KRNMF, SNMF, and LNMF algorithms exhibit significant fluctuations in accuracy as the dimensionality changes. Our approach demonstrates a significant enhancement of 20.8% compared to NMF, surpassing the classification accuracy of the other NMF-related algorithms in our study. Compared with these algorithms, KRMNMF demonstrates higher accuracy, lower dimensionality, and greater stability. This indicates that the KRMNMF algorithm can extract a smaller number of features to characterize the CVQKD data, filtering adversarial samples more effectively during reconstruction and thereby improving the defense capability.
The defense method proposed in this paper using the KRMNMF approach increases the accuracy of CVQKD attack identification from 15.6% to 71.6%, achieving the highest level compared to the other methods. Moreover, the incorporation of KRMNMF defense in conjunction with other methods such as JPEG defense or Comdefend defense further boosts the average classification accuracy to 78.8% and 79.5%, respectively. It is worth noting that there is still room for further improvement in this accuracy. This is due to the significant impact of the quality of adversarial attack generation on the final defense effectiveness. If the adversarial samples exhibit substantial differences from the original samples in the feature space, it may result in difficulties for the defense model to accurately classify these adversarial samples. Additionally, the complexity and nonlinearity of the CVQKD dataset also affect the final classification accuracy. Furthermore, in practical applications, the novel CVQKD adversarial defense strategy provided in this paper can be combined with other defense methods. For instance, the integration of the KRMNMF method with fingerprinting techniques or other defense approaches can further enhance the defensive capability against potential adversarial sample perturbations.

5. Conclusions

In this paper, we present a defense method against adversarial samples in CVQKD based on the KRMNMF algorithm, which applies non-negative matrix factorization to reconstruct input samples and reduce adversarial perturbations in CVQKD. Our proposed method offers a cutting-edge solution for enhancing the robustness of the CVQKD system against adversarial attacks. Recognizing the nonlinear and non-stationary nature of CVQKD signal data, we employed the kernel robust manifold formulation to address these challenges by mapping samples from different attack strategies to a lower-dimensional space. By incorporating manifold learning theory, we capture the intrinsic geometric structure of the feature space. Our algorithm surpasses conventional feature extraction methods, demonstrating superior capability in exploring the features specific to CVQKD. We conducted experimental simulations using an artificial neural network model and compared the results with other methods. The results convincingly illustrate the effectiveness of our approach in detecting and mitigating the impact of adversarial attacks. This comprehensive defense method for CVQKD can also be extended to other machine learning scenarios to counter potential adversarial interference, thus helping ensure the security of communication systems.

Author Contributions

Conceptualization, D.H.; methodology, Y.F.; resources, D.H. and Y.J.; software, Y.F. and E.X.; validation, Y.F., E.X. and D.H.; data curation, Y.F.; Funding acquisition, D.H. and Y.J.; writing—original draft preparation, Y.F.; writing—review and editing, Y.F., E.X., D.H. and Y.J.; visualization, Y.F.; supervision, D.H. and Y.J. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (62072475) and the National College Innovation Project (2022105330245).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data that support the findings of this study are available from the corresponding author upon reasonable request.

Acknowledgments

The authors would like to express their thanks to Y. Yan and K. Huang for their pioneering research. Furthermore, we thank the reviewers of this work for their valuable comments and suggestions.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. The Vulnerability of Quantum Devices to Adversarial Perturbations

To show that quantum devices are easily disturbed by adversarial perturbations, we assume that a quantum device produces a real-valued output $v(\sigma)$ from an input quantum state $\sigma$ by applying a mapping $\Lambda(\cdot)$ and performing a quantum measurement to extract the real number $\operatorname{Tr}[\mathcal{O} \Lambda(\sigma)]$, where $\mathcal{O}$ is a positive operator-valued measure element. Let an adversarial perturbation map the initial state $\sigma \to \rho$ such that the fidelity satisfies $F(\sigma, \rho) \ge 1 - \delta$ with $\delta \ll 1$. Since fidelity is non-decreasing under completely positive trace-preserving maps,
$F(\Lambda(\rho), \Lambda(\sigma)) \ge F(\sigma, \rho) \ge 1 - \delta$
Therefore, any small adversarial disturbance of $\sigma$ leads to an even smaller disturbance of the final state $\Lambda(\sigma)$. The quantum device output equals $v(\sigma) = \operatorname{Tr}[\mathcal{O} \Lambda(\sigma)]$, where $\mathcal{O}$ and $\Lambda(\rho) - \Lambda(\sigma)$ are positive semidefinite. Thus, there always exist matrices $A$ and $B$ that satisfy $\mathcal{O} = A^* A$ and $\Lambda(\rho) - \Lambda(\sigma) = B^* B$. The Frobenius norms of these matrices satisfy the inequality $\| A B \|_F^2 = \operatorname{Tr}(A^* A B B^*) \le \| A \|_F^2 \| B \|_F^2 = \operatorname{Tr}(A^* A) \operatorname{Tr}(B^* B)$. Therefore, $\operatorname{Tr}[\mathcal{O} (\Lambda(\rho) - \Lambda(\sigma))] \le \operatorname{Tr}(\mathcal{O}) \operatorname{Tr}[\Lambda(\rho) - \Lambda(\sigma)]$. We obtain the following:
$| v(\sigma) - v(\rho) | \le | \operatorname{Tr}(\mathcal{O}) \operatorname{Tr}[\Lambda(\sigma) - \Lambda(\rho)] |$
According to the Fuchs–van de Graaf inequality [36],
$| v(\sigma) - v(\rho) | \le 2 \operatorname{Tr}(\mathcal{O}) \sqrt{1 - F(\Lambda(\sigma), \Lambda(\rho))^2}$
Inequality (A3) demonstrates that a perturbation of the input state $\sigma$ translates into a bounded difference in the output $v(\sigma)$. For a classification problem with a limited number of possible output values, even a very small $| v(\sigma) - v(\rho) |$ may lead to classification errors. This proves that the quantum system is vulnerable to adversarial samples.

References

  1. David, C.V.; Luis, F.Q.; Dong, S. Bell-GHZ Measurement-Device-Independent Quantum Key Distribution. Ann. Phys. 2021, 9, 533. [Google Scholar]
  2. Jouguet, P.; Kunz-Jacques, S.; Leverrier, A.; Grangier, P.; Diamanti, E. Experimental demonstration of long-distance continuous-variable quantum key distribution. Nat. Photon. 2013, 7, 378–381. [Google Scholar] [CrossRef]
  3. Xu, F.; Ma, X.; Zhang, Q.; Lo, H.-K.; Pan, J.-W. Secure quantum key distribution with realistic devices. Rev. Mod. Phys. 2020, 92, 025002. [Google Scholar] [CrossRef]
  4. Huang, J.-Z.; Weedbrook, C.; Yin, Z.-Q.; Wang, S.; Li, H.-W.; Chen, W.; Guo, G.-C.; Han, Z.-F. Quantum hacking of a continuous-variable quantum-key-distribution system using a wavelength attack. Phys. Rev. A 2013, 87, 062329. [Google Scholar] [CrossRef]
  5. Hajomer, A.A.E.; Jain, N.; Mani, H.; Chin, H.M.; Andersen, U.L.; Gehring, T. Modulation leakage-free continuous-variable quantum key distribution. Npj Quantum Inf. 2022, 8, 136. [Google Scholar] [CrossRef]
  6. Wang, P.; Huang, P.; Chen, R.; Zeng, G. Robust frame synchronization for free-space continuous-variable quantum key distribution. Opt. Express 2021, 29, 25048–25063. [Google Scholar] [CrossRef] [PubMed]
  7. Abd-Elrahman, E.; Shehab, M.; Ahmed, M.; Mohaisen, A.; Zaman, T. Detecting and Mitigating SYN Flood Attacks in Industrial IoT Systems. IEEE Trans. Ind. Inform. 2020, 17, 6785–6796. [Google Scholar]
  8. Quezada, L.F.; Sun, J.; Dong, S. Quantum Version of the k-NN Classifier Based on a Quantum Sorting Algorithm. Ann. Phys. 2022, 5, 534. [Google Scholar] [CrossRef]
  9. Mao, Y.; Huang, W.; Zhong, H.; Wang, Y.; Qin, H.; Guo, Y.; Huang, D. Detecting quantum attacks: A machine learning based defense strategy for practical continuous-variable quantum key distribution. New J. Phys. 2020, 22, 083073. [Google Scholar] [CrossRef]
  10. Guo, Y.; Yin, P.; Huang, D. One-Pixel Attack for Continuous-Variable Quantum Key Distribution Systems. Photonics 2023, 10, 129. [Google Scholar] [CrossRef]
  11. Su, J.; Vargas, D.V.; Sakurai, K. One Pixel Attack for Fooling Deep Neural Networks. IEEE Trans. Evol. Comput. 2019, 23, 828–841. [Google Scholar] [CrossRef]
  12. Du, J.; Tang, R.; Feng, T. Security Analysis and Improvement of Vehicle Ethernet SOME/IP Protocol. Sensors 2022, 22, 6792. [Google Scholar] [CrossRef]
  13. Tang, Z.; Liao, Z.; Xu, F.; Qi, B.; Qian, L.; Lo, H.-K. Experimental Demonstration of Polarization Encoding Measurement-Device-Independent Quantum Key Distribution. Phys. Rev. Lett. 2014, 112, 190503. [Google Scholar] [CrossRef] [PubMed]
  14. Biggio, B.; Roli, F. Ten years after the rise of adversarial machine learning. Pattern Recognit. 2018, 84, 317–331. [Google Scholar] [CrossRef]
  15. Belazi, A.; Alajmi, Z.; Debuse, J. A Comprehensive Survey on Various Dimensionality Reduction Techniques. Mathematics 2021, 9, 1283. [Google Scholar]
  16. Chen, X.; Wu, T.; Xu, Z. Deep Linear Discriminant Analysis for Feature Extraction in Face Recognition. IEEE Signal Process. Lett. 2021, 28, 736–740. [Google Scholar]
  17. Luengo, D.; Bielza, C. Independent Component Analysis for Multi-Source Classification. Neural Netw. 2020, 126, 276–288. [Google Scholar]
  18. Massaut, V.; Jordan, N. Vector Quantization-based Deep Learning Approach for Music Classification. Neural Comput. Appl. 2019, 31, 947–960. [Google Scholar]
  19. Xia, Z.; Havlicek, J.P.; Chudak, F.A.; Markov, I.L.; Neven, H. Hardware-efficient Variational Quantum Eigensolver for Small Molecules and Quantum Magnets. Phys. Rev. A 2019, 100, 052308. [Google Scholar]
  20. Zhou, Q.; Mao, X.; Li, H. Deep Nonnegative Matrix Factorization for Semi-Supervised Dimensionality Reduction. IEEE Trans. Image Process. 2021, 30, 4196–4208. [Google Scholar]
  21. Li, S.; Yin, P.; Zhou, Z.; Tang, J.; Huang, D.; Zhang, L. Dictionary Learning Based Scheme for Adversarial Defense in Continuous-Variable Quantum Key Distribution. Entropy 2023, 25, 499. [Google Scholar] [CrossRef] [PubMed]
  22. Boaron, G.; Boso, G.; Rusca, D.; Vulliez, C.; Autebert, C.; Caloz, M.; Perrenoud, M.; Gras, G.; Bussières, F.; Li, M.-J.; et al. Secure Quantum Key Distribution Over 421 km of Optical Fiber Using Continuous Variable Quantum Key Distribution. Phys. Rev. Lett. 2018, 121, 190502. [Google Scholar] [CrossRef] [PubMed]
  23. Sebastian, K.; Max, R.; Christian, G.S. Continuous variable quantum key distribution with a real local oscillator using simultaneous pilot signals. Opt. Lett. 2017, 42, 1588–1591. [Google Scholar]
  24. Goodfellow, I.J.; Shlens, J.; Szegedy, C. Explaining and harnessing adversarial examples. In Proceedings of the International Conference on Learning Representations (ICLR), San Diego, CA, USA, 7–9 May 2015. [Google Scholar]
  25. Kurakin, A.; Goodfellow, I.; Bengio, S. Adversarial machine learning at scale. In Proceedings of the International Conference on Learning Representations (ICLR), Toulon, France, 24–26 April 2017. [Google Scholar]
  26. Madry, A.; Makelov, A.; Schmidt, L.; Tsipras, D.; Vladu, A. Towards deep learning models resistant to adversarial attacks. In Proceedings of the International Conference on Learning Representations (ICLR), Vancouver, BC, Canada, 30 April–3 May 2018. [Google Scholar]
  27. Moosavi-Dezfooli, S.; Fawzi, A.; Frossard, P. Deepfool: A simple and accurate method to fool deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016. [Google Scholar]
  28. Zhang, H.; Chen, H.; Xiao, C.; Li, B. Towards practical JPEG2010-based near-lossless image steganography with security against modern perturbation-based steganalysis. IEEE Trans. Inf. Forensics Secur. 2018, 13, 3106–3121. [Google Scholar]
  29. Luo, H.; Wang, Y.-J.; Ye, W.; Zhong, H.; Mao, Y.-Y.; Guo, Y. Parameter estimation of continuous variable quantum key distribution system via artificial neural networks. Chin. Phys. B 2022, 2, 31. [Google Scholar] [CrossRef]
  30. Han, H.; Li, W.; Wang, J.; Qin, G.; Qin, X. Enhance explainability of manifold learning. Neurocomputing 2022, 500, 877–895. [Google Scholar] [CrossRef]
  31. Bonnet, B.; Furon, T.; Bas, P. Generating Adversarial Images in Quantized Domains. IEEE Trans. Inf. Forensics Secur. 2022, 17, 373–385. [Google Scholar] [CrossRef]
  32. Jia, X.; Wei, X.; Cao, X. ComDefend: An efficient image compression-based defense against adversarial examples. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Seoul, Korea, 27 October–2 November 2019. [Google Scholar]
  33. Cai, D.; He, X.; Han, J. Locally consistent concept factorization for document clustering. IEEE Trans. Knowl. Data Eng. 2011, 23, 902–913. [Google Scholar] [CrossRef]
  34. Zhang, L.; Chen, Z.; Zheng, M.; He, X. Robust Kernel Nonnegative Matrix Factorization. In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI), New York, NY, USA, 9–15 July 2016. [Google Scholar]
  35. Chen, G.; Cao, J.; Shen, L.; Li, X. Sparse non-negative matrix factorization with adaptive graph regularization. Neurocomputing 2020, 401, 125–133. [Google Scholar] [CrossRef]
  36. Gong, W.; Deng, D. Universal adversarial examples and perturbations for quantum classifiers. Natl. Sci. Rev. 2021, 130, nwab130. [Google Scholar] [CrossRef]
Figure 1. Schematic diagram of the CVQKD system for obtaining training data. BS: beam splitter; AM: amplitude modulator; PM: phase modulator; PBS: polarizing beam splitter; Laser: laser produce model; HD: homodyne detection; PIN: PIN photodiode; P-METER: power meter; DPC: data processing center. The data [XB, LO, N0, Clock] collected by the system will be sent to the data processing center.
Figure 2. The ANN confusion matrices for CVQKD attack detection. (a) ANN classification accuracy without adversarial attacks under five scenarios. (b) ANN classification accuracy with FGSM adversarial perturbations under five scenarios.
Figure 3. Flowchart of experiments for CVQKD attack and defense effectiveness.
Figure 4. The confusion matrix of CVQKD detection classifiers under different defense strategies, namely, no defense, JPEG defense, Comdefend defense, and the proposed KRMNMF defense, in the presence of adversarial perturbations added to CVQKD samples. (a) is the confusion matrix with no defense strategy. (b) is the confusion matrix with JPEG defense. (c) is the confusion matrix with Comdefend defense. (d) is the confusion matrix with the proposed KRMNMF defense.
Table 1. Summary of the properties of various attack methods. "Perturbation norm" indicates the restricted $L_p$ norm that keeps the perturbations imperceptible. The strength evaluations are based on our review of the literature.

Method   Attack Principle   Black/White Box   Targeted/Non-Targeted   Perturbation Norm   Strength
FGSM     Gradient           White             Both                    L∞                  Hard
BIM      Gradient           White             Non-targeted            L∞                  Hard
PGD      Gradient           White             Both                    L2, L∞              Hard
QEBA     Decision           Black             Both                    L2                  Hard
RP2      Optimization       White             Targeted                L2, L∞              Weak
AdvCam   Optimization       White             Both                    L2, L∞              Hard
Table 2. The accuracy of CVQKD detection classifiers under different defense strategies: no adversarial defense, single defense methods, and the combined use of KRMNMF defense with other defense methods. The KRMNMF defense method achieves high accuracy and can be combined with other methods for better results.

Method                               Average Classification Accuracy
With no attack                       92.1%
With no defense                      15.6%
With JPEG defense                    40.8%
With Comdefend defense               39.4%
With KRMNMF defense                  71.6%
With KRMNMF + JPEG defense           78.8%
With KRMNMF + Comdefend defense      79.5%
Table 3. Accuracy and feature dimensionality of the NMF-based algorithms for the adversarial defense of CVQKD data. The proposed KRMNMF defense method uses the smallest dimensionality and achieves the best defense effectiveness.

Algorithm   Dimension   Average Classification Accuracy
NMF         74          50.8%
KNMF        32          45.5%
RNMF        100         48.4%
GNMF        38          68.5%
KRNMF       90          69.6%
SNMF        87          48.8%
LNMF        63          50.3%
KRMNMF      14          71.6%

