Article

A Stealthy Communication Model for Protecting Aggregated Results Integrity in Federated Learning

by
Lu Li
1,
Xuan Sun
1,
Ning Shi
2,
Xiaotian Ci
3 and
Chen Liang
1,*
1
Computer School, Beijing Information Science and Technology University, Beijing 100192, China
2
Science College, Shijiazhuang University, Shijiazhuang 050035, China
3
School of Mechanical Engineering, Tianjin University of Technology, Tianjin 300384, China
*
Author to whom correspondence should be addressed.
Electronics 2024, 13(19), 3870; https://doi.org/10.3390/electronics13193870
Submission received: 31 July 2024 / Revised: 17 September 2024 / Accepted: 28 September 2024 / Published: 29 September 2024
(This article belongs to the Section Networks)

Abstract

Given how quickly artificial intelligence technology is developing, federated learning (FL) has emerged to enable effective model training while protecting data privacy. However, when homomorphic encryption (HE) is used for privacy protection, FL faces challenges concerning the integrity of HE ciphertexts. In an HE-based privacy-preserving FL framework, the public availability of the public key and the additive homomorphic property of the HE algorithm pose serious threats to the integrity of the ciphertext of FL's aggregated results. For the first time, this paper employs covert communication by embedding the hash value of the aggregated result ciphertext received by the client into the ciphertext of the local model parameters, using the lossless additive homomorphic property of the Paillier algorithm. When the server receives the ciphertext of the local model parameters, it can extract and verify the hash value to determine whether the ciphertext of FL's aggregated results has been tampered with. We also use chaotic sequences to select the embedding positions, further enhancing the concealment of the scheme. The experimental findings demonstrate that the proposed scheme passes Welch's t-test, the K–L divergence test, and the K–S test, confirming that ciphertexts containing covert information are statistically indistinguishable from normal ciphertexts and thereby affirming the scheme's effectiveness in safeguarding the integrity of FL's aggregated ciphertext results. The channel capacity of this scheme reaches up to 512 bits per round, which is higher than that of other FL-based covert channels.

1. Introduction

Traditional machine learning relies on the core working idea of training models using massive volumes of data gathered from many sources. Sharing data between devices or organizations, however, may result in privacy violations for the data owners. To balance privacy protection with effective model training, the concept of federated learning (FL) [1,2] has been introduced. FL allows model training without direct data sharing by requiring end devices to train local models and upload local model updates, such as gradients or weights, to a central server. Since FL can effectively utilize data while protecting privacy, it has extensive application prospects in many scenarios, such as vehicular networks [3], medical devices [4], smart grids [5], and smart cities [6].
Existing research shows that FL still has privacy leakage problems [7,8]. Even though only gradients are disclosed, sensitive information about the training data can still be obtained through attacks such as model inversion attacks [9] or gradient inference attacks [10]. According to recent research, a user's local data may be recovered by an untrusted server from the uploaded model gradients or weights [11,12,13]. Using adversarial tactics, the untrusted server can observe changes in gradients or training labels, as well as the model structure and initial parameters, thereby exposing participants' private information [14,15]. To protect client gradients during training, various privacy-preserving frameworks based on cryptographic techniques have been proposed. Homomorphic encryption (HE) has attracted considerable attention for its ability to aggregate model parameters while keeping them encrypted [16,17].
However, the complex cryptographic operations involved in HE introduce substantial overhead to FL frameworks that employ this technology. Additionally, the asymmetric encryption mode used by HE allows malicious attackers to easily obtain the public key used for encryption. For instance, attackers can encrypt malicious data with the public key and homomorphically add it to the aggregated results, thereby compromising their integrity [18,19,20]. As a result, both efficiency and the integrity of the aggregated results must be taken into account in HE-based privacy-preserving FL systems.
The motivation behind this study is to address the issue of ensuring the integrity of aggregated results in HE-based privacy-preserving FL frameworks through the use of covert communication methods. The core idea of covert communication [21,22,23] is to embed secret information within innocuous public communications, making it difficult to detect and tamper with. The proposed framework first applies HE to sensitive data during transmission, ensuring that the data remains encrypted throughout the process, thereby protecting its privacy and security. Then, using covert communication techniques, the hash value of the aggregated result's ciphertext is embedded within the FL communication channel. Additionally, we employ chaotic sequence [24,25,26] technology to further enhance the anti-detection capability of the covert channel. This approach makes it difficult for attackers to detect our secret information, thereby enhancing data integrity while minimizing the impact on the efficiency of the FL framework.
Our contributions are as follows:
  • We designed a covert communication scheme based on FL, applying covert communication to FL for the first time, which enhances the integrity of aggregated results in privacy-preserving FL frameworks based on HE.
  • We introduce chaotic sequence technology to enhance the anti-detection ability of covert communication. This technology improves the concealment of the communication channel, making it more difficult for attackers to detect embedded information.
  • The embedding approach suggested in this study provides strong concealment and large channel capacity, as demonstrated by our experimental results.
This paper is organized as follows: Section 2 provides background on FL and covert communication. The proposed FL-based covert communication scheme and its design objectives are described in Section 3. The model and algorithm specifics are expanded upon in Section 4. The application and experimental outcomes of the suggested strategy are covered in Section 5. Section 6 reviews related FL-based covert communication research. Section 7 provides a final analysis and outlook for the future of the work.

2. Preliminaries

This section presents the fundamentals of privacy-preserving FL.

2.1. Federated Learning

To address the issues of data silos and data confidentiality in traditional centralized machine learning, Bonawitz et al. [27] introduced the notion of FL. FL is a distributed machine learning technique that trains models locally on each device and sends the locally updated model parameters to a central server for further processing. This method avoids the central sharing of raw data, thus protecting individual privacy. Figure 1 illustrates the FL system framework.
In the above FL framework, the cloud server first distributes the global model parameters W^t to clients C_i (i = 1, 2, ..., k). After receiving the global model parameters, the clients independently train their models on local data and update the model parameters W_i^t. These local model parameters are then uploaded to the cloud server. The cloud server aggregates all clients' model parameters through weighted averaging to calculate new global model parameters W^{t+1} and broadcasts them back to the clients. Upon receiving the new global model parameters, the clients continue to train their models on local data. This iterative process repeats until the global model converges or a predefined stopping criterion is met. In Figure 1, the upload transmission is represented by solid arrows, while the broadcast transmission is represented by dashed arrows. However, attackers may tamper with the aggregated results in the broadcast channel of the global model; this is the issue that the solution proposed in this paper aims to address.
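To make the aggregation step concrete, the following is a minimal Python sketch of FedAvg-style weighted averaging; the function name and the toy numbers are illustrative assumptions, not the authors' implementation.

import numpy as np

def fedavg(client_params, client_sizes):
    """Weighted average of client parameter vectors by local dataset size."""
    weights = np.asarray(client_sizes, dtype=float)
    weights /= weights.sum()
    stacked = np.stack([np.asarray(p, dtype=float) for p in client_params])
    return (weights[:, None] * stacked).sum(axis=0)   # new global parameters W^{t+1}

# Example: two clients holding 600 and 400 samples
w_new = fedavg([[0.2, -0.1], [0.4, 0.3]], [600, 400])   # -> [0.28, 0.06]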

2.2. Paillier Encryption Algorithm

The Paillier encryption algorithm, introduced by Pascal Paillier in 1999 [28], is a HE method rooted in number theory. This algorithm exhibits additive homomorphism, meaning that the product of two ciphertexts corresponds to the ciphertext of the sum of the plaintexts. The following is a description of the specifics of the Paillier encryption algorithm:
  • Key Generation: Select two large prime numbers p and q, and compute n = p · q. Calculate λ = lcm(p - 1, q - 1), where lcm denotes the least common multiple. Choose an integer g ∈ Z*_{n^2} such that the order of g is a multiple of n. Compute μ = (L(g^λ mod n^2))^{-1} mod n, where L(x) = (x - 1)/n. The public key is (n, g), and the private key is (λ, μ).
  • Encryption: Given a plaintext m, where 0 ≤ m < n, choose a random integer r, where 0 < r < n. Compute the ciphertext c using the formula c = g^m · r^n mod n^2.
  • Decryption: Given a ciphertext c, compute the plaintext m using the formula m = L(c^λ mod n^2) · μ mod n.
The main feature of the Paillier algorithm is its homomorphic property. Specifically, it supports homomorphic addition and scalar multiplication operations:
  • Homomorphic Addition: For two ciphertexts c_1 and c_2, the decryption of their product corresponds to the sum of their respective plaintexts, as given by Formula (1):
    D(E(m_1) × E(m_2) mod n^2) = m_1 + m_2    (1)
  • Homomorphic Addition (encrypted message and plaintext integer): Encrypt a message m and multiply the encrypted result by g^k mod n^2; decrypting this product yields the sum of m and k, as given by Formula (2):
    D(E(m) × g^k mod n^2) = (m + k) mod n    (2)
  • Homomorphic Scalar Multiplication: For a ciphertext c and a plaintext number k, the decryption of the ciphertext raised to the power of k corresponds to the plaintext multiplied by k, as given by Formula (3):
    D(E(m)^k mod n^2) = k × m    (3)
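As a concrete illustration of Formulas (1)–(3), the short Python sketch below uses the python-paillier (phe) library, which is also the library used in the experiments in Section 5; the library hides the modular arithmetic behind overloaded + and * operators, so this is a behavioral check rather than a reproduction of the formulas' internals.

from phe import paillier

pub, priv = paillier.generate_paillier_keypair(n_length=1024)
m1, m2, k = 7, 5, 3
c1, c2 = pub.encrypt(m1), pub.encrypt(m2)

assert priv.decrypt(c1 + c2) == m1 + m2   # Formula (1): ciphertext + ciphertext
assert priv.decrypt(c1 + k) == m1 + k     # Formula (2): ciphertext + plaintext integer
assert priv.decrypt(c1 * k) == m1 * k     # Formula (3): scalar multiplication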

2.3. Chaotic Sequences

Chaotic sequences are a type of deterministic sequence characterized by complex and seemingly random behavior, commonly found in nonlinear dynamical systems [29]. They represent irregular motion within deterministic systems. Chaotic sequences, despite being produced by deterministic equations, show behavior that is extremely sensitive to initial conditions, meaning that slight modifications to these conditions can produce radically different results. This phenomenon is known as "chaotic behavior" or the "butterfly effect". In discrete cases, chaos often manifests as chaotic sequences. These sequences, produced by chaotic models, inherently contain rich dynamical information about the system. They serve as a bridge connecting chaos theory to practical applications in the real world, making them a significant field of application within chaos theory. In areas such as secure communications, encryption algorithms, random number generation, and image processing, chaotic sequences enhance system security and robustness due to their unpredictability and complexity.
The logistic map is a common example of a chaotic sequence represented by a simple nonlinear recursive relation used to describe the chaotic behavior of certain dynamical systems. The formula is given by Formula (4):
x_{n+1} = r · x_n · (1 - x_n)    (4)
where x_n is the value at the n-th iteration, with a range of 0 ≤ x_n ≤ 1; r is the control parameter of the system, typically ranging from 0 to 4; and x_{n+1} is the value at the (n+1)-th iteration. This formula demonstrates how an initial value x_0 evolves with each iteration. Depending on the value of r, the system can exhibit a range of behaviors from stability to periodicity and chaos.
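The short Python snippet below (an illustrative example, not from the paper) iterates Formula (4) and shows the sensitivity to initial conditions: two seeds differing by 1e-10 diverge visibly after a few dozen iterations when r = 3.99.

def logistic(x0, r=3.99, steps=50):
    x = x0
    for _ in range(steps):
        x = r * x * (1 - x)   # Formula (4)
    return x

print(logistic(0.3), logistic(0.3 + 1e-10))   # clearly different values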

2.4. Covert Channel

Lampson first introduced the idea of covert channels in 1973. It immediately garnered significant attention from the U.S. Department of Defense, leading to the publication of guidelines for covert channel analysis in trusted systems in 1993. Covert channels are a type of communication channel that transmits secret information through unconventional means, designed to bypass predefined security levels and achieve hidden communication. Compared to traditional communication channels, where both the sender and receiver need explicit roles, in covert channels the sender only requires permission to modify the covert channel carrier, while the receiver only needs permission to read it. This makes covert channels resistant to interception and detection and highly flexible.
Additionally, to enhance the robustness of covert channel schemes in complex environments and prevent message leakage if detected by adversaries, covert channel technology often integrates cryptography, machine learning, steganography, and other techniques to achieve secure hidden communication. Figure 2 presents an example of a covert channel.
Figure 2 presents an example of a covert channel under the Bell–LaPadula (BLP) model [30]. The simple security property of the BLP model states that the security level of the information receiver must not be lower than the sensitivity level of the information. The *-property of the BLP model indicates that writing the contents of one sensitive object to another requires the latter’s sensitivity level to be at least as high as the former’s. These two properties can be summarized as “no read up, no write down”. The BLP model prevents low-level subjects from accessing high-classified information and also prevents high-level subjects from leaking information to low-level subjects through write operations. Even under a mandatory access control model, communication parties can still construct covert channels to transmit information from high-security-level subjects to low-security-level subjects. This transmission is achieved by modifying and perceiving the values or attributes of shared variables between high- and low-security-level users.

3. System Design

This section covers the system model, threat model, and design goals of the proposed scheme.

3.1. System Model

This section presents the stealthy communication model proposed in this paper for the HE-based privacy-preserving FL framework. The system model of the proposed scheme is depicted in Figure 3.
The following components make up the entire covert communication process, as shown in Figure 3:
  • Off-channel negotiation: The sender and the recipient carry out pre-negotiation off-channel, using email, in-person meetings, or other means of contact. The pre-negotiated content includes the coding rules for the hidden information, the embedding positions (determined by the chaotic sequence seed x), and the selection of the length n.
  • Obtaining the global model (aggregated results): The sender obtains the ciphertext of the global model W^t from the FL communication channel in the t-th round. The obtained ciphertext may have been tampered with by an attacker but can still be decrypted normally.
  • Training local model: The sender (client) uses the local dataset and the global model W^t to train the local model W_k^t.
  • Paillier encrypting: The sender (client) uses the Paillier HE algorithm to encrypt the local model parameters W_k^t to be uploaded, resulting in the ciphertext C_k.
  • Hashing: The sender (client) performs a hash operation on the received ciphertext to obtain a hash value h_k of the global model W^t, which is also the covert information to be transmitted.
  • Embedding: The sender (client) selects the insertion positions using the chaotic sequence and embeds the binary sequence corresponding to the hash value h_k into the ciphertext C_k through the additive homomorphic property of the HE ciphertext. This results in a ciphertext C′_k that contains the embedded secret information, which is then transmitted through the FL communication channel. Figure 4 illustrates how each covert information bit is embedded into the parameter ciphertext.
  • Extracting: When the receiver (cloud server) obtains the ciphertext C′_k with the embedded secret information from the FL communication channel, it extracts the covert information h_k based on the embedding positions generated from the chaotic sequence seed and retrieves the original ciphertext C_k.
  • Checking: The receiver (cloud server) uses the extracted covert information h_k to verify the global model W^t and check whether its integrity has been compromised.
  • Participate in aggregation operation: If the global model W^t received by the sender passes verification, proving it has not been tampered with, the cloud server includes the client's update in the normal aggregation operations; if the global model fails verification, the update is not included in the normal aggregation operations.

3.2. Threat Model

We assume the adversary has the following capabilities and limitations:
Adversary Capabilities:
  • The adversary has sophisticated capabilities for data processing and surveillance. The adversary can examine every bit of data sent in order to identify the hidden channel. The adversary possesses the public key used for HE and can perform homomorphic addition operations on the ciphertext of the transmitted aggregated results, thereby tampering with the data and compromising its integrity.
  • The adversary knows all the implementation details of our proposed scheme.
  • When the adversary detects the presence of hidden information in the upload channel through monitoring and analysis (only detection is required, without needing to know the specific content of the secret information), they will forcibly shut down the upload channel, thereby disrupting the covert channel.
Adversary Limitations:
  • The adversary is able to observe and examine the upload channel of the local model, but it is unable to impede it. The adversary cannot know the pre-agreed content in the scheme.

3.3. Design Goals

The design objectives of the proposed HE-based stealthy communication model for the privacy-preserving FL framework are described in this section, with a focus on high concealment, high channel capacity, and strong robustness.
  • High concealment: Concealment makes the covert channel untraceable by ensuring an attacker cannot tell hidden information from regular data. High concealment in the proposed framework refers to the inability of the threat model to distinguish between plain text and ciphertext carrying hidden information. This ensures that covert communications remain undetected by adversaries.
  • High channel capacity: The amount of covert information sent via a covert channel in a given amount of time is referred to as its channel capacity. Transmitting large volumes of covert data requires a high channel capacity. In the proposed model, it means the ability to transmit more robust hash values generated by hash operations. Generally, the longer the hash value, the better its collision resistance, thereby enhancing the effectiveness of ensuring the integrity of the ciphertext during the FL process [31].
  • Strong robustness: The robustness of the proposed scheme is defined as the ability to effectively detect tampering of the global model by an adversary. In this scheme, the hash value of the global model is transmitted to the server as secret information. Upon receiving the model parameters returned by the clients, the server extracts the embedded hash value and compares it with the recalculated hash value. If the hash values match, it indicates that the global model has not been tampered with; otherwise, it is considered tampered.

4. Proposed Scheme

This section presents the pseudocode and process description for each algorithm in the proposed scheme. As Figure 5 illustrates, the entire covert communication process in our method consists of four steps. Before the sender prepares the data, the sender and the recipient carry out pre-negotiation. The sender's preprocessing includes hashing the received global model W^t and calculating the embedding positions using the pre-negotiated chaotic sequence seed. The covert message (hash value h_k) is then embedded into the ciphertext of the local model C_k to be uploaded to the cloud server. The receiver extracts the covert message h_k from the uploaded local model ciphertext C′_k and verifies the global model W^t. The specific procedure is described in Section 4.2, Section 4.3, Section 4.4 and Section 4.5.

4.1. Symbol

Unless otherwise noted, the symbols used in this work are those listed in Table 1 and are applied uniformly to all algorithms and procedures.

4.2. Pre-Negotiate

The information processing mechanism is pre-agreed upon by both parties, and they exchange the key parameters: the chaotic sequence seed x, the logistic map parameter r, and the length n of the position sequence generated by the chaotic sequence. These parameters are shared via an out-of-band channel [32].

4.3. Information Processing

The sender first uses the pre-negotiated chaotic sequence seed x, the logistic map parameter r, the length l of the list of model parameters, and the length n of the position sequence generated by the chaotic sequence to generate random insertion positions.
As shown in Algorithm 1, this algorithm generates a chaotic sequence of length n using the logistic map and selects unique positions to form the position sequence P. Starting from the chaotic sequence seed x, the logistic map formula is used to iteratively generate new chaotic values, which are then mapped to the range [0, l - 1] to determine the position p_i. During this process, the algorithm ensures that the generated positions p_i are unique: if p_i has not appeared before, it is added to the position sequence P. This continues until a sequence of n unique positions is obtained.
Algorithm 1: Chaotic Sequence Position Selection
input:  x , r , n , l
output:  P
1: Initialize P as an empty list
2: Initialize a set U to keep track of used positions
3: x_0 ← x
4: i ← 0
5: while |P| < n do
6:     x_{i+1} ← r · x_i · (1 - x_i)
7:     p_i ← ⌊x_{i+1} · l⌋ mod l
8:     if p_i ∉ U then
9:         Append p_i to P
10:        Add p_i to U
11:    end if
12:    i ← i + 1
13: end while
14: return P
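A runnable Python version of Algorithm 1 is sketched below; the parameter names follow Table 1, while the example values at the bottom are illustrative assumptions.

def select_positions(x, r, n, l):
    """Generate n unique positions in [0, l-1] from a logistic-map sequence."""
    positions, used = [], set()
    while len(positions) < n:
        x = r * x * (1 - x)        # logistic map iteration, Formula (4)
        p = int(x * l) % l         # map the chaotic value to a parameter index
        if p not in used:
            positions.append(p)
            used.add(p)
    return positions

# Example: 8 embedding positions within a 199,210-parameter model
print(select_positions(x=0.4, r=3.99, n=8, l=199210))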
The sender then performs a hash operation on the received global model W t . This scheme utilizes the SHA-2 family of hash functions, allowing for the selection of different SHA algorithms based on the required hash value length. Examples from the SHA-2 family include SHA-224, SHA-256, SHA-384, and SHA-512, which produce hash values of 224, 256, 384, and 512 bits, respectively. The resulting hash value is the scheme’s covert information S .
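A minimal sender-side sketch of this step is given below. The serialization of the global-model ciphertext before hashing is an assumption (the paper does not specify one), as is the 256-byte width used for each ciphertext integer.

import hashlib

def covert_bits(global_model_ciphertexts, algo="sha256"):
    """Hash the received global-model ciphertext integers and return the digest as a bit list."""
    data = b"".join(int(c).to_bytes(256, "big") for c in global_model_ciphertexts)
    digest = hashlib.new(algo, data).digest()
    return [int(b) for byte in digest for b in format(byte, "08b")]   # e.g., 256 bits for SHA-256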

4.4. Information Embedding

In Section 4.3, Information Processing, we obtained the position sequence P and the covert information S. In this subsection, we embed S into the original ciphertext C_k. This embedding method does not result in any data loss and is a form of lossless embedding [33]. It is accomplished by utilizing the Paillier cryptosystem's homomorphic and probabilistic features, which enable lossless data hiding and the creation of the steganographic ciphertext C′_k.
Since we used the Paillier cryptosystem to encrypt the original model parameters, the ciphertexts possess additive homomorphic properties. These properties can be expressed by Formula (5), where γ and δ are two plaintext numbers and ⊕ denotes the additive homomorphic operation on ciphertexts:
D(E(γ) ⊕ E(δ)) = γ + δ    (5)
A special case occurs when δ equals 0, in which case Formula (5) becomes Formula (6):
D(E(γ) ⊕ E(0)) = γ    (6)
According to Equation (6), we can conclude that when the ciphertext of γ is homomorphically added with the ciphertext of 0, the ciphertext of γ will change, but its plaintext will remain unchanged. After decryption, the homomorphically-added ciphertext will still yield the plaintext γ .
Based on the above conclusion, we can utilize this property for covert information embedding. We stipulate that a ciphertext congruent to 0 modulo 2 represents the bit 0 and a ciphertext congruent to 1 modulo 2 represents the bit 1. By homomorphically adding an encryption of 0 to the ciphertext, we modify the original ciphertexts at the positions in the sequence P to convey the covert information S. Algorithm 2 describes the entire embedding procedure.
Algorithm 2: The Covert Channel Information Embedding
input:  C_k, N, g, S, P
output:  C′_k
1:  C′_k ← C_k
2:  for i = 1 to n do
3:      p_i ← P[i]
4:      if s_i == 0 then
5:          while C′_k[p_i] mod 2 ≠ 0 do
6:              C′_k[p_i] ← C′_k[p_i] × g^0 · r^N mod N^2   ▷ homomorphically add a fresh encryption of 0 (fresh random r)
7:          end while
8:      else
9:          while C′_k[p_i] mod 2 ≠ 1 do
10:             C′_k[p_i] ← C′_k[p_i] × g^0 · r^N mod N^2
11:         end while
12:     end if
13: end for
14: return C′_k
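The following Python sketch (not the authors' code) mirrors Algorithm 2 using the phe library; EncryptedNumber.obfuscate() multiplies the stored ciphertext by a fresh r^N mod N^2, which corresponds to the homomorphic addition of an encryption of 0 used above. The toy parameter values, bit pattern, and positions are assumptions for illustration only.

from phe import paillier

def embed_bits(ciphertexts, bits, positions):
    """Re-randomize the ciphertext at each selected position until its parity matches the covert bit."""
    for bit, pos in zip(bits, positions):
        c = ciphertexts[pos]
        while c.ciphertext(be_secure=False) % 2 != bit:
            c.obfuscate()                      # fresh encryption of 0; plaintext is unchanged
        ciphertexts[pos] = c
    return ciphertexts

def extract_bits(ciphertexts, positions):
    """Receiver side: read the parity of each selected ciphertext."""
    return [ciphertexts[p].ciphertext(be_secure=False) % 2 for p in positions]

pub, priv = paillier.generate_paillier_keypair(n_length=1024)
params = [0.12, -0.5, 0.33, 1.7, -2.1, 0.04, 0.9, -0.7]        # toy "model parameters"
enc = [pub.encrypt(v) for v in params]
bits, positions = [1, 0, 1, 1], [6, 1, 4, 2]                   # covert bits and chaotic positions
stego = embed_bits(enc, bits, positions)
assert extract_bits(stego, positions) == bits                  # covert bits recovered
assert all(abs(priv.decrypt(c) - v) < 1e-9 for c, v in zip(stego, params))   # plaintexts intact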

4.5. Information Extracting and Check

Upon receiving C′_k, the recipient can similarly derive the embedding position sequence P from the pre-negotiated parameters x, r, and n. Taking each ciphertext C′_k[p_i] modulo 2 according to the sequence P, the recipient reads the secret bit s_i according to Formula (7) and restores the covert information S through Formula (8). Finally, the recipient verifies the global model W^t that was transmitted to the sender using this secret information S (the hash value).
s_i = 0 if C′_k[p_i] mod 2 = 0; otherwise s_i = 1    (7)
S = s_1 || s_2 || ... || s_n    (8)
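A receiver-side sketch of the extraction and check is given below; it assumes the same serialization and hash choice as the sender-side sketch in Section 4.3, both of which are illustrative rather than taken from the paper.

import hashlib

def verify_global_model(stego_ciphertexts, positions, stored_global_ciphertexts):
    """Return True if the extracted hash matches the hash of the global-model ciphertext the server broadcast."""
    extracted = [stego_ciphertexts[p].ciphertext(be_secure=False) % 2 for p in positions]
    data = b"".join(int(c).to_bytes(256, "big") for c in stored_global_ciphertexts)
    digest = hashlib.sha256(data).digest()
    expected = [int(b) for byte in digest for b in format(byte, "08b")]
    return extracted == expected    # False means W^t was tampered with in transit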

5. System Implementation and Evaluation

This section covers the system implementation specifics and the pertinent performance evaluation.

5.1. Implementation

This paper opts to conduct experiments in a single machine-simulated FL environment, which effectively simulates the collaborative training process of multiple client devices. In this environment, multiple virtual clients are created using a single machine, with each client independently training a model on local data and sending the updated model parameters to a central server for aggregation. This approach allows us to test and evaluate the performance of the proposed stealthy communication system model for the privacy-preserving FL framework based on HE.
To assess the effectiveness of the proposed method, the experimental portion of the study was carried out on a Windows 10 platform with the following hardware setup: an 11th Gen Intel(R) Core(TM) i5-11300H CPU @ 3.10 GHz, an NVIDIA GeForce MX450 2 GB GPU, and 16 GB of RAM (DDR4-3200). The algorithms designed for the experiment were all implemented in Python 3, and the code was written using the phe library.
The dataset used for the experiments is MNIST, and the network is a simple 2NN network with three fully connected layers. The first fully connected layer (fc1) has 784 input units and 200 output units, with 156,800 weight parameters and 200 bias parameters, totaling 157,000 parameters. The second fully connected layer (fc2) has 200 input units and 200 output units, with 40,000 weight parameters and 200 bias parameters, totaling 40,200 parameters. The third fully connected layer (fc3) has 200 input units and 10 output units, with 2000 weight parameters and 10 bias parameters, totaling 2010 parameters. The network has 199,210 parameters. The FL aggregation algorithm used is FedAvg.
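For reference, a PyTorch sketch of this architecture is given below; the layer sizes follow the description above, while the ReLU activations are an assumption since the paper lists only the layer dimensions.

import torch.nn as nn

class TwoNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(784, 200)   # 156,800 weights + 200 biases
        self.fc2 = nn.Linear(200, 200)   # 40,000 weights + 200 biases
        self.fc3 = nn.Linear(200, 10)    # 2,000 weights + 10 biases
        self.relu = nn.ReLU()

    def forward(self, x):
        x = x.view(-1, 784)
        return self.fc3(self.relu(self.fc2(self.relu(self.fc1(x)))))

print(sum(p.numel() for p in TwoNN().parameters()))   # 199,210 parameters in total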
This experiment was a single-machine simulation, with no virtual machine configuration. The experiment involved two clients, each of which trained locally for 10 epochs, with a batch size of 100, and the learning rate was set to 0.005. The total number of communication rounds was 20, and the data was set to be non-IID.

5.2. Concealment

The scheme selects 1000 sets of Paillier ciphertexts as normal ciphertexts and uses them as the control group. Then, hash algorithms with different output lengths (SHA-224, SHA-256, SHA-384, and SHA-512 from the SHA-2 family) are chosen to hash the same global parameters. Using the embedding method of the scheme, the normal ciphertexts are modified to generate ciphertexts embedded with secret information. Subsequently, evaluations are conducted to determine whether the datasets before and after embedding are statistically indistinguishable.
The concealment of the proposed scheme is evaluated using cumulative distribution function (CDF), Kolmogorov–Smirnov (K–S) test, Welch’s t-test, and Kullback–Leibler (K-L) divergence [34,35,36,37].

5.2.1. K–S Test and K–L Divergence

The CDF is the integral of the probability density function and can fully describe the probability distribution of a real-valued random variable. The definition is given in Formula (9):
F(x) = P(X ≤ x)    (9)
where X is a random variable and x ranges over the real numbers. By comparing the similarity of the CDFs of the normal ciphertext and the ciphertext embedded with secret information, we can intuitively see whether they are indistinguishable. Figure 6 shows the CDF curves for these datasets.
In Figure 6a, the curves almost completely overlap, indicating that the datasets are very similar or indistinguishable in terms of character distribution.
The K–S test is a nonparametric test used to compare the CDFs of two datasets or a sample dataset against a reference probability distribution. It calculates the maximum difference between the observed and theoretical cumulative distributions, providing a statistic to infer whether the two distributions differ significantly. This test is particularly useful for validating the goodness-of-fit of a sample to a reference distribution or comparing two samples to determine if they originate from the same distribution.
If the p-value for the K–S test is greater than 0.05, we conclude that the two distributions are similar and there is no discernible difference between them. This implies that we cannot reject the null hypothesis and thus assume that the two samples come from the same distribution. Table 2 shows the experimental results, and Figure 6b visualizes the p-values of the K–S test.
A probability distribution’s divergence from a second, expected probability distribution is measured using the K–L divergence. K–L divergence is a quantitative measure of the difference between two probability distributions in statistics, machine learning, and information theory. Closer distributions are indicated by smaller K–L divergence values, whereas bigger values suggest greater disparities. Table 3 displays the K–L divergence results, while Figure 7 provides a representation of the data.
Based on the above experimental results, all K–S test p-values are greater than 0.05, and the K–L divergence values are within an extremely small range, indicating there is no statistically significant difference between the ciphertext with embedded secret information and the normal ciphertext. Moreover, all K–S test p-values for different hash algorithms are 1.0, and the K–L divergence values are also very small, further supporting this conclusion.
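A sketch of how these two checks can be computed with SciPy is shown below. Expanding each ciphertext into its decimal digits before testing mirrors the "character distribution" view of Figure 6a, but this preprocessing, like the variable names, is an assumption rather than the authors' exact procedure.

import numpy as np
from scipy.stats import ks_2samp, entropy

def digits(ciphertext_ints):
    """Flatten a list of (large) ciphertext integers into their decimal digits."""
    return np.array([int(d) for c in ciphertext_ints for d in str(int(c))])

def concealment_tests(normal_cts, stego_cts):
    a, b = digits(normal_cts), digits(stego_cts)
    ks_stat, ks_p = ks_2samp(a, b)                     # p > 0.05 -> same distribution
    p = np.bincount(a, minlength=10) / len(a)
    q = np.bincount(b, minlength=10) / len(b)
    kl = entropy(p + 1e-12, q + 1e-12)                 # K-L divergence D(P || Q)
    return ks_p, kl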

5.2.2. Welch’s T-Test

A statistical hypothesis test called Welch’s t-test is used to ascertain whether the means of two independent samples differ significantly from one another. Welch’s t-test calculates the test statistic based on the sample means, variances, and sample sizes and uses this statistic to infer whether the difference in means is statistically significant.
A significant difference between the means of the two samples is shown if Welch’s t-test p-value is less than or equal to 0.05. Table 4 displays the outcomes of the experiment.
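The Welch's t-test check can be performed with SciPy as sketched below, reusing the digits helper and the normal_cts/stego_cts arrays from the previous sketch; equal_var=False selects Welch's variant of the two-sample t-test.

from scipy.stats import ttest_ind

t_stat, p_value = ttest_ind(digits(normal_cts), digits(stego_cts), equal_var=False)
print(p_value > 0.05)   # True -> no statistically significant difference in means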
The various tests above indicate that the proposed scheme has good concealment. This is because we utilized the properties of the Paillier HE algorithm to embed secret information, which does not significantly alter the distribution of the ciphertexts, thereby ensuring good concealment.

5.3. Channel Capacity

We measure the channel capacity of the proposed scheme by the number of secret information bits that can be transmitted per round, as shown in Formula (10):
C = B / N    (10)
where C is the channel capacity, B is the total number of bits that can be embedded, and N is the number of communication rounds.
In the 2NN network used in this paper, the maximum secret information capacity that can be transmitted per round is the SHA-512 hash value, which is 512 bits per round. In Table 5, we compare this with other FL-based covert channel schemes.
Table 5 clearly shows that our scheme has a higher channel capacity compared to other FL-based covert channel schemes.

5.4. Performance Comparison before and after Scheme Implementation

In this section, we will conduct experiments to compare the performance of the FL system before and after the implementation of the proposed scheme. This comparison aims to evaluate the impact of the scheme on the system’s overall performance, focusing on two key metrics: model accuracy and convergence time. By analyzing the results from both scenarios, we can determine whether the scheme introduces any negative impacts on model training effectiveness or other aspects of system performance. The specific performance results are shown in Figure 8.
Based on the experimental results, we can draw the following conclusions: whether or not the proposed scheme is implemented, the training accuracy reaches a peak of 90–95% between rounds 10–15, and the validation accuracy stabilizes around 90%. Regarding convergence time, the implementation of the scheme did not significantly extend the overall convergence time of the model. This demonstrates that the proposed scheme does not have a negative impact on the FL system in terms of model accuracy and convergence time.

6. Related Works

This paper conducts a comprehensive survey and analysis of existing FL-based covert communication techniques. Here, we focus on the characteristics and advantages and disadvantages of three representative schemes. By analyzing the following schemes, we can better understand their applicability and effectiveness in different application scenarios.
Hitaj et al. [38] proposed FedComm, a novel technique that leverages FL as a covert communication medium. FL trains deep neural networks collaboratively across multiple parties, avoiding the sharing of actual training data and thus protecting data privacy. However, FedComm exploits the shared model parameters in FL to establish a covert communication channel for transmitting arbitrary information among participants. This paper thoroughly investigates the communication capacity of FedComm and proposes encoding schemes based on code division multiple access (CDMA) and low-density parity-check (LDPC) error correction techniques. Experimental results demonstrate that FedComm can successfully transmit information up to thousands of bits in size before the FL process converges, without significantly interfering with the training process. Furthermore, FedComm is independent of the application domain and neural network architecture, exhibiting high robustness and concealment. The advantage of this technique lies in utilizing the shared model parameters of FL to establish a covert communication channel, incorporating CDMA and LDPC error correction techniques. While FedComm shows high robustness and concealment, it requires achieving covert communication without significantly disrupting the training process, and its practical effectiveness across different application domains and neural network architectures requires further validation.
Kim et al. [39] proposed an innovative covert communication technique that enables covert communication between the FL server and participants without affecting the performance of FL. The covert message is superimposed onto the aggregated gradient by the FL server, which then broadcasts the superimposed signal to all FL participants. FL participants remove the hidden message from the superimposed signal, consider the aggregated gradient as interference, and then restore the original global model. However, the scheme does not adequately consider the impact of noise and channel fading on the covert communication rate and FL performance. Although the authors mentioned that future research will address these factors, the current study does not provide specific solutions or experimental data.
Hou et al. [40] proposed a novel covert communication technique that integrates unmanned aerial vehicles (UAVs) with FL. The UAV is responsible for orchestrating FL operations and enhancing communication secrecy by emitting artificial noise (AN) to interfere with eavesdroppers. This technique achieves a balance between security and training cost by optimizing the UAV’s trajectory and AN transmission power, as well as the CPU frequency, transmission power, and bandwidth allocation of participating devices. However, the proposed scheme’s reliance on emitting AN to enhance security could adversely affect the channels of legitimate users, thereby impacting FL performance.
Both Hitaj et al.’s FedComm scheme and Kim et al.’s scheme achieve covert communication by exploiting shared model parameters, which can potentially impact FL performance. In contrast, the proposed scheme utilizes the homomorphic additive property of HE to embed secret information, ensuring that the ciphertext remains identical in usability before and after embedding, thus avoiding any negative impact on FL performance. Hou et al.’s scheme relies on UAVs and AN to enhance security, which may negatively affect legitimate users’ communication channels. In contrast, the proposed scheme ensures high concealment and transmission efficiency while minimizing any adverse effects on FL performance.

7. Conclusions and Future Works

This paper presents a stealthy communication system model for the privacy-preserving FL framework based on HE, addressing the issue of protecting the integrity of aggregated results in FL. The proposed scheme embeds the hash value of the aggregated result received by the client into the ciphertext of the local model parameters as secret information. The server can then verify whether the sent aggregated result has been tampered with upon receiving the ciphertext containing the secret. Additionally, the scheme enhances its stealthiness by using chaotic sequences to select the embedding positions of the secret information. Experimental results demonstrate that the normal ciphertext and the ciphertext with embedded covert information are indistinguishable through statistical methods, and it has a high channel capacity, ensuring the secure transmission of secret information. This scheme can serve as a solution for protecting the integrity of FL aggregated results.
However, the proposed scheme in this paper still has limitations. For instance, during the upload process of the local model, there remains the risk of integrity attacks on the ciphertext of the local model parameters by adversaries. This issue has not yet been resolved in our current scheme and will be the focus of our future research work.
In future work, we plan to explore how to use covert communication to protect the integrity of local models in FL, thus expanding the application scope of the scheme. This approach will further enhance the integrity of ciphertexts in HE-based FL. Furthermore, to fully capitalize on each method's advantages in terms of privacy protection, we will consider incorporating additional encryption algorithms alongside covert channel strategies. By combining these encryption techniques with covert communication, we aim to develop a more resilient defense system capable of addressing a variety of challenging security issues; this integration will contribute to more thorough data privacy and security precautions and thus increase user trust. In addition, we plan to adopt the SHA-3 hashing algorithm, which is derived from the Keccak algorithm, in our upcoming studies. Since SHA-3 is a more recent hash algorithm, it offers superior security and collision resistance, giving our system further safety. We believe that these improvements can further strengthen the overall security and dependability of FL systems, satisfying the needs of a wider range of applications.

Author Contributions

Conceptualization: X.S.; Formal analysis: L.L.; Funding acquisition: N.S.; Project administration: X.C.; Supervision: C.L.; Validation: X.S.; Writing—original draft: L.L.; Writing—review & editing: N.S., X.C. and C.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the R&D Program of the Beijing Municipal Education Commission (KM202311232013).

Data Availability Statement

Data is contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Wen, J.; Zhang, Z.; Lan, Y.; Cui, Z.; Cai, J.; Zhang, W. A survey on federated learning: Challenges and applications. Int. J. Mach. Learn. Cybern. 2023, 14, 513–535. [Google Scholar] [CrossRef] [PubMed]
  2. Liu, Y.; Kang, Y.; Zou, T.; Pu, Y.; He, Y.; Ye, X.; Ouyang, Y.; Zhang, Y.-Q.; Yang, Q. Vertical federated learning: Concepts, advances, and challenges. IEEE Trans. Knowl. Data Eng. 2024, 36, 3615–3634. [Google Scholar] [CrossRef]
  3. Zhang, X.; Chang, Z.; Hu, T.; Chen, W.; Zhang, X.; Min, G. Vehicle selection and resource allocation for federated learning-assisted vehicular network. IEEE Trans. Mob. Comput. 2023, 23, 3817–3829. [Google Scholar] [CrossRef]
  4. Rauniyar, A.; Hagos, D.H.; Jha, D.; Håkegård, J.E.; Bagci, U.; Rawat, D.B.; Vlassov, V. Federated learning for medical applications: A taxonomy, current trends, challenges, and future research directions. IEEE Internet Things J. 2023, 11, 7374–7398. [Google Scholar] [CrossRef]
  5. Jithish, J.; Alangot, B.; Mahalingam, N.; Yeo, K.S. Distributed anomaly detection in smart grids: A federated learning-based approach. IEEE Access 2023, 11, 7157–7179. [Google Scholar] [CrossRef]
  6. Pandya, S.; Srivastava, G.; Jhaveri, R.; Babu, M.R.; Bhattacharya, S.; Maddikunta, P.K.R.; Mastorakis, S.; Piran, M.J.; Gadekallu, T.R. Federated learning for smart cities: A comprehensive survey. Sustain. Energy Technol. Assess. 2023, 55, 102987. [Google Scholar] [CrossRef]
  7. Yang, W.; Wang, S.; Cui, H.; Tang, Z.; Li, Y. A review of homomorphic encryption for privacy-preserving biometrics. Sensors 2023, 23, 3566. [Google Scholar] [CrossRef]
  8. Hu, H.; Zhang, X.; Salcic, Z.; Sun, L.; Choo, K.-K.R.; Dobbie, G. Source inference attacks: Beyond membership inference attacks in federated learning. IEEE Trans. Dependable Secur. Comput. 2023, 21, 3012–3029. [Google Scholar] [CrossRef]
  9. Hatamizadeh, A.; Yin, H.; Molchanov, P.; Myronenko, A.; Li, W.; Dogra, P.; Feng, A.; Flores, M.G.; Kautz, J.; Xu, D. Do gradient inversion attacks make federated learning unsafe? IEEE Trans. Med. Imaging 2023, 42, 2044–2056. [Google Scholar] [CrossRef]
  10. Wu, R.; Chen, X.; Guo, C.; Weinberger, K.Q. Learning to invert: Simple adaptive attacks for gradient inversion in federated learning. In Proceedings of the Uncertainty in Artificial Intelligence, Pittsburgh, PA, USA, 31 July–4 August 2023; pp. 2293–2303. [Google Scholar]
  11. Zhang, J.; Liu, Y.; Wu, D.; Lou, S.; Chen, B.; Yu, S. VPFL: A verifiable privacy-preserving federated learning scheme for edge computing systems. Digit. Commun. Netw. 2023, 9, 981–989. [Google Scholar] [CrossRef]
  12. Wang, Z.; Song, M.; Zhang, Z.; Song, Y.; Wang, Q.; Qi, H. Beyond inferring class representatives: User-level privacy leakage from federated learning. In Proceedings of the IEEE INFOCOM 2019-IEEE Conference on Computer Communications, Paris, France, 9 April–2 May 2019; pp. 2512–2520. [Google Scholar]
  13. Yin, X.; Zhu, Y.; Hu, J. A comprehensive survey of privacy-preserving federated learning: A taxonomy, review, and future directions. ACM Comput. Surv. (CSUR) 2021, 54, 1–36. [Google Scholar] [CrossRef]
  14. Gong, X.; Sharma, A.; Karanam, S.; Wu, Z.; Chen, T.; Doermann, D.; Innanje, A. Ensemble attention distillation for privacy-preserving federated learning. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 10–17 October 2021; pp. 15076–15086. [Google Scholar]
  15. Zhang, Z.; Guan, C.; Chen, H.; Yang, X.; Gong, W.; Yang, A. Adaptive privacy-preserving federated learning for fault diagnosis in internet of ships. IEEE Internet Things J. 2021, 9, 6844–6854. [Google Scholar] [CrossRef]
  16. Liu, C.; Chakraborty, S.; Verma, D. Secure model fusion for distributed learning using partial homomorphic encryption. In Policy-Based Autonomic Data Governance; Springer: Cham, Switzerland, 2019; pp. 154–179. [Google Scholar]
  17. Hijazi, N.M.; Aloqaily, M.; Guizani, M.; Ouni, B.; Karray, F. Secure federated learning with fully homomorphic encryption for iot communications. IEEE Internet Things J. 2023, 11, 4289–4300. [Google Scholar] [CrossRef]
  18. Du, W.; Li, M.; Han, Y.; Wang, X.A.; Wei, Z. A Homomorphic Signcryption-Based Privacy Preserving Federated Learning Framework for IoTs. Secur. Commun. Netw. 2022, 2022, 8380239. [Google Scholar] [CrossRef]
  19. He, C.; Liu, G.; Guo, S.; Yang, Y. Privacy-preserving and low-latency federated learning in edge computing. IEEE Internet Things J. 2022, 9, 20149–20159. [Google Scholar] [CrossRef]
  20. So, J.; Ali, R.E.; Güler, B.; Jiao, J.; Avestimehr, A.S. Securing secure aggregation: Mitigating multi-round privacy leakage in federated learning. In Proceedings of the AAAI Conference on Artificial Intelligence, Washington, DC, USA, 7–14 February 2023; pp. 9864–9873. [Google Scholar]
  21. Liang, C.; Tan, Y.-a.; Zhang, X.; Wang, X.; Zheng, J.; Zhang, Q. Building packet length covert channel over mobile VoIP traffics. J. Netw. Comput. Appl. 2018, 118, 144–153. [Google Scholar] [CrossRef]
  22. Tan, Y.-a.; Zhang, X.; Sharif, K.; Liang, C.; Zhang, Q.; Li, Y. Covert timing channels for IoT over mobile networks. IEEE Wirel. Commun. 2018, 25, 38–44. [Google Scholar] [CrossRef]
  23. Liang, Q.; Shi, N.; Tan, Y.-a.; Li, C.; Liang, C. A Stealthy Communication Model with Blockchain Smart Contract for Bidding Systems. Electronics 2024, 13, 2523. [Google Scholar] [CrossRef]
  24. Liang, Q.; Zhu, C. A new one-dimensional chaotic map for image encryption scheme based on random DNA coding. Opt. Laser Technol. 2023, 160, 109033. [Google Scholar] [CrossRef]
  25. Wen, H.; Huang, Y.; Lin, Y. High-quality color image compression-encryption using chaos and block permutation. J. King Saud Univ.-Comput. Inf. Sci. 2023, 35, 101660. [Google Scholar] [CrossRef]
  26. Ramos, A.M.; Artiles, J.A.; Chaves, D.P.; Pimentel, C. A fragile image watermarking scheme in dwt domain using chaotic sequences and error-correcting codes. Entropy 2023, 25, 508. [Google Scholar] [CrossRef] [PubMed]
  27. Bonawitz, K.; Ivanov, V.; Kreuter, B.; Marcedone, A.; McMahan, H.B.; Patel, S.; Ramage, D.; Segal, A.; Seth, K. Practical secure aggregation for privacy-preserving machine learning. In Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, Dallas, TX, USA, 30 October–3 November 2017; pp. 1175–1191. [Google Scholar]
  28. Paillier, P. Public-key cryptosystems based on composite degree residuosity classes. In Proceedings of the International Conference on the Theory and Applications of Cryptographic Techniques, Prague, Czech Republic, 2–6 May 1999; pp. 223–238. [Google Scholar]
  29. Caponetto, R.; Fortuna, L.; Fazzino, S.; Xibilia, M.G. Chaotic sequences to improve the performance of evolutionary algorithms. IEEE Trans. Evol. Comput. 2003, 7, 289–304. [Google Scholar] [CrossRef]
  30. Bell, D.E.; LaPadula, L.J. Secure Computer Systems: Mathematical Foundations; Citeseer; Mitre Corporation: Bedford, MA, USA, 1975. [Google Scholar]
  31. Preneel, B. Cryptographic hash functions. Eur. Trans. Telecommun. 1994, 5, 431–448. [Google Scholar] [CrossRef]
  32. Barradas, D.; Santos, N.; Rodrigues, L.; Nunes, V. Poking a hole in the wall: Efficient censorship-resistant Internet communications by parasitizing on WebRTC. In Proceedings of the 2020 ACM SIGSAC Conference on Computer and Communications Security, Virtual, 9–13 November 2020; pp. 35–48. [Google Scholar]
  33. Malik, A.; Ashraf, A.; Wu, H.; Kuribayashi, M. Reversible Data Hiding in Encrypted Text Using Paillier Cryptosystem. In Proceedings of the 2022 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), Chiang Mai, Thailand, 7–10 November 2022; pp. 1495–1499. [Google Scholar]
  34. Zhang, X.; Liang, C.; Zhang, Q.; Li, Y.; Zheng, J.; Tan, Y.-A. Building covert timing channels by packet rearrangement over mobile networks. Inf. Sci. 2018, 445, 66–78. [Google Scholar] [CrossRef]
  35. Zhang, X.; Zhu, L.; Wang, X.; Zhang, C.; Zhu, H.; Tan, Y.-A. A packet-reordering covert channel over VoLTE voice and video traffics. J. Netw. Comput. Appl. 2019, 126, 29–38. [Google Scholar] [CrossRef]
  36. Shen, T.; Zhu, L.; Gao, F.; Chen, Z.; Zhang, Z.; Li, M. A Blockchain-Enabled Group Covert Channel against Transaction Forgery. Mathematics 2024, 12, 251. [Google Scholar] [CrossRef]
  37. Liang, C.; Baker, T.; Li, Y.; Nawaz, R.; Tan, Y.-A. Building covert timing channel of the IoT-enabled MTS based on multi-stage verification. IEEE Trans. Intell. Transp. Syst. 2021, 24, 2578–2595. [Google Scholar] [CrossRef]
  38. Hitaj, D.; Pagnotta, G.; Hitaj, B.; Perez-Cruz, F.; Mancini, L.V. Fedcomm: Federated learning as a medium for covert communication. IEEE Trans. Dependable Secur. Comput. 2024, 21, 1695–1707. [Google Scholar] [CrossRef]
  39. Kim, S.W. Covert communication over federated learning channel. In Proceedings of the 2023 17th International Conference on Ubiquitous Information Management and Communication (IMCOM), Seoul, Republic of Korea, 3–5 January 2023; pp. 1–3. [Google Scholar]
  40. Hou, X.; Wang, J.; Jiang, C.; Zhang, X.; Ren, Y.; Debbah, M. UAV-enabled covert federated learning. IEEE Trans. Wirel. Commun. 2023, 22, 6793–6809. [Google Scholar] [CrossRef]
Figure 1. FL System Framework: Overview of the communication process between the central server and multiple clients (C_1, C_2, ..., C_k).
Figure 2. Example of covert channel. The diagram shows a covert communication channel between a high-level user and a low-level user, bypassing the Bell–LaPadula (BLP) model restrictions through a mid-level intermediary.
Figure 3. The stealthy communication system model for the privacy-preserving FL framework based on HE. Sender: local model training, Paillier encryption, hash computation, information embedding; Receiver: global model sharing, hash verification, information extraction; Attacker: data analysis, ciphertext tampering of the global model.
Figure 4. The process of embedding a covert message bit into ciphertext. The initial ciphertext (C), the bit to embed (b), and encrypted zero (E (0)) are used as inputs.
Figure 5. Implementation steps of the system model: 1. Pre-negotiation. 2. Information processing (chaos sequence, hash). 3. Information embedding. 4. Information extracting and check.
Figure 6. (a) CDF plot of the normal ciphertext dataset and the dataset with embedded covert information; (b) K–S Test p-values for different hash algorithms: The K–S test p-values for four different hash algorithms: SHA-224, SHA-256, SHA-384, and SHA-512.
Figure 7. K–L divergence for different hash algorithms: the K–L divergence values for four different hash algorithms: SHA-224, SHA-256, SHA-384, and SHA-512.
Figure 8. (a) Performance of the FL system before implementing the proposed scheme; (b) Performance of the FL system after implementing the proposed scheme: The orange line represents the accuracy of the FL model on the training set across different communication rounds; the blue line indicates the accuracy of the FL model on the validation set across different communication rounds; the green line shows the cumulative time consumed over the communication rounds.
Table 1. The Main Symbols.

Symbol | Description
C_i | client i
W^t | global model in the t-th communication round
W_i^t | local model of client i in the t-th communication round
x | chaotic sequence seed
r | parameter of the logistic map
n | the length of the position sequence generated by the chaotic sequence
l | the length of the list of model parameters
P | selected position sequence
p_i | the i-th position in the position sequence
S | covert information
s | covert information in digital form
s_i | the i-th covert information bit
N, g | Paillier cryptosystem's public key parameters
Table 2. The p-value of the K–S Test.

Hash Algorithm | Number of Embedded Bits | K–S Test p-Value
SHA-224 | 224 | 1.0
SHA-256 | 256 | 1.0
SHA-384 | 384 | 1.0
SHA-512 | 512 | 1.0
Table 3. The K–L Divergence.

Hash Algorithm | Number of Embedded Bits | K–L Divergence
SHA-224 | 224 | 1.32 × 10^-6
SHA-256 | 256 | 1.95 × 10^-6
SHA-384 | 384 | 4.27 × 10^-6
SHA-512 | 512 | 1.01 × 10^-6
Table 4. The p-value of Welch's t-test.

Hash Algorithm | Number of Embedded Bits | Welch's t-Test p-Value
SHA-224 | 224 | 0.9951
SHA-256 | 256 | 0.9972
SHA-384 | 384 | 0.9973
SHA-512 | 512 | 0.9974
Table 5. Channel Capacity Comparison.

Scheme | Channel Capacity (Bits/Round)
Proposed | 512.00
Hitaj et al. (1 sender) | 19.76
Hitaj et al. (2 senders) | 65.87
Hitaj et al. (4 senders) | 263.47