Article

A Maneuver in the Trade-Off Space of Federated Learning Aggregation Frameworks Secured with Polymorphic Encryption: PolyFLAM and PolyFLAP Frameworks

1 Département de Mathématiques, Informatique et Génie, Université du Québec à Rimouski, 300 Allée des Ursulines, Rimouski, QC G5L 3A1, Canada
2 Département d’Informatique et de Mathématique, Université du Québec à Chicoutimi, 555 Boulevard de l’Université, Chicoutimi, QC G7H 2B1, Canada
3 Institut Technologique de Maintenance Industrielle, 175 Rue de la Vérendrye, Sept-Îles, QC G4R 5B7, Canada
4 Faculty of Arts & Sciences, Islamic University of Lebanon, Wardaniyeh P.O. Box 30014, Lebanon
* Author to whom correspondence should be addressed.
Electronics 2024, 13(18), 3716; https://doi.org/10.3390/electronics13183716
Submission received: 22 August 2024 / Revised: 13 September 2024 / Accepted: 18 September 2024 / Published: 19 September 2024

Abstract
Maintaining user privacy in machine learning is a critical concern due to the implications of data collection. Federated learning (FL) has emerged as a promising solution by sharing trained models rather than user data. However, FL still faces several challenges, particularly in terms of security and privacy, such as vulnerability to inference attacks. There is an inherent trade-off between communication traffic across the network and computational costs on the server or client, which this paper aims to address by maneuvering between these performance parameters. To tackle these issues, this paper proposes two complementary frameworks: PolyFLAM (“Polymorphic Federated Learning Aggregation of Models”) and PolyFLAP (“Polymorphic Federated Learning Aggregation of Parameters”). These frameworks provide two options to suit the needs of users, depending on whether they prioritize reducing communication across the network or lowering computational costs on the server or client. PolyFLAM reduces computational costs by exchanging entire models, eliminating the need to rebuild models from parameters. In contrast, PolyFLAP reduces communication costs by transmitting only model parameters, which are smaller in size compared to entire models. Both frameworks are supported by polymorphic encryption, ensuring privacy is maintained even in cases of key leakage. Furthermore, these frameworks offer five different machine learning models, including support vector machines, logistic regression, Gaussian naïve Bayes, stochastic gradient descent, and multi-layer perceptron, to cover as many real-life problems as possible. The evaluation of these frameworks with simulated and real-life datasets demonstrated that they can effectively withstand various attacks, including inference attacks that aim to compromise user privacy by capturing exchanged models or parameters.

1. Introduction

Artificial intelligence (AI) is a rapidly advancing technology that is becoming increasingly integrated into various industries and aspects of daily life, reshaping both lifestyles and professional activities. Although AI research dates back to the 1950s, when Alan Turing famously asked, “Can machines think?” [1], the field still has no single agreed-upon definition. For example, a simple definition is provided by the authors of [2], who describe AI as programs that are no less competent than a human in any given setting. In contrast, the authors of [3] describe it as a set of tools and methods that draws on principles and mechanisms from fields such as computation, mathematics, logic, and biology to address the challenges of understanding, modeling, and mimicking human intelligence and cognitive processes. AI has since grown into a broad field of research with various derivatives such as machine learning (ML), deep learning (DL), federated learning (FL), and others. Machine learning, for example, allows computers to “learn” from training data and incrementally improve their performance without explicit programming, or with only minimal supervision.
ML algorithms strive to identify patterns in data and derive knowledge from them to formulate independent predictions. ML algorithms and models acquire knowledge through encounters with the real world. In traditional contexts, a computer program is developed by engineers and provided with a set of instructions that facilitates the transformation of incoming data into the desired outcome. In contrast, ML is designed to learn on its own with minimal or no human intervention, gradually expanding its knowledge. The impressive performance of ML, combined with its enormous potential in classification and regression problems and its ability to use both supervised and unsupervised learning methods, have made it attractive to researchers [4,5]. Subsequent research has shown that ML has a wide range of applications in areas such as E-commerce and product recommendation; image, speech, and pattern recognition; user behavior analysis and context-aware smartphone applications [4,5]; health services [6,7,8]; traffic prediction and transportation [4,9]; the Internet of Things (IoT) and smart cities [9]; cybersecurity [10]; natural language processing and sentiment analysis [11]; sustainable agriculture [12]; industrial applications [13]; and many others.

1.1. Challenges in Machine Learning Domain

The precise results obtained in classification and regression have gradually promoted the integration of these methods into aspects of daily life. The practicality of AI tools, especially ML, is underpinned by their exceptional efficiency and their applicability across various domains. Nevertheless, ML continues to struggle with a number of challenges that are described in detail in the existing scientific literature. These challenges are not categorized uniformly; they are classified from different viewpoints. This section presents the prevailing challenges within a proposed framework that groups them by factors related to data, models, implementation, and other general dimensions.
  • General challenges [14,15]:
    User data privacy and confidentiality;
    User technology adoption and engagement;
    Ethical constraints.
  • Model-related challenges [14,15]:
    Accuracy and performance;
    Model evaluation;
    Variance and bias;
    Explainability.
  • Data-related challenges [16,17]:
    Data availability and accessibility [18];
    Data locality [19];
    Data readiness [18];
    Data heterogeneity;
    Noise and signal artifacts;
    Missing data;
    Class imbalance;
    Data volume and the curse of dimensionality;
    Bonferroni principle [20];
    Feature representation and selection.
  • Implementation-related challenges [18,21]:
    Real-time processing;
    Model selection;
    Execution time and complexity.
The challenges within ML and related fields are the subject of extensive study, with researchers seeking to address these challenges collectively rather than focusing on any one. It is difficult to definitively state that any one of the above challenges is the most significant or has the most detrimental impact on the machine learning field. Nonetheless, the machine learning workflow primarily includes phases such as data collection and preprocessing, feature engineering, model training, model evaluation, and model deployment. The structure of this workflow highlights the central role of data in machine learning, as it is the first step in the process; without its completion, the subsequent phases cannot proceed. Moreover, the performance of ML models is directly linked to the availability of data. While achieving highly accurate intelligent models depends on the technical architecture of the models themselves, the quality and availability of the data, preprocessing, and several other factors, it is generally accepted that data availability contributes to increased and improved accuracy [16,17].

1.2. ML Privacy Issue and FL Solution

In the real world, due to several factors, the process of data collection is a major challenge, if not the biggest challenge, in developing machine learning models, and privacy and confidentiality are of paramount importance. This concern goes beyond individual privacy to include the societal, governmental, and organizational dimensions, all of which reinforce efforts to protect privacy and security of data.

1.2.1. The Privacy Issue Explained

These efforts have led to the introduction of numerous regulations and laws around the world, such as the European Union’s General Data Protection Regulation (GDPR) [22], China’s Cyber Security Law of the People’s Republic of China [23], the General Principles of the Civil Law of the People’s Republic of China [24], Singapore’s Personal Data Protection Act (PDPA) [25], and countless others.
While these regulations undeniably help protect private information, they also introduce complexities into the ML landscape. Collecting data for model training becomes much more difficult, which in turn hinders advances in model accuracy and personalized results. Consequently, privacy and confidentiality issues are not merely an isolated challenge; they set in motion a number of additional hurdles for ML, including challenges related to data availability, model performance, personalization, and, ultimately, building trust and acceptance. The critical importance of privacy in information sharing has led to extensive research and several proposed privacy-preserving algorithms, such as differential privacy [26], k-anonymity [27], and homomorphic encryption [28]. However, these methods do not provide optimal solutions, as demonstrated by observed machine learning attacks, e.g., model inversion [29] and membership inference attacks [30], where raw data are extracted by accessing the model.

1.2.2. Federated Learning as a Solution

To address privacy issues without restricting data collection, Google recently introduced a novel concept in machine learning called federated machine learning or federated learning (FL) [31]. The basic premise of federated learning is that it does not require the sharing of user data between different devices. This concept can be defined as collaborative, distributed, and decentralized machine learning with privacy preservation. In federated learning, an intelligent model is trained without the need to transfer data from edge devices to a central server. Instead, models are sent to these devices, where they are trained on local data. Then, these refined models are sent back to a central server for aggregation, which assembles the global model without having visibility into the specific embedded data. The technical infrastructure of federated learning is shown in Figure 1 below.
The concept of federated learning provides an effective solution to user privacy concerns. It not only addresses these concerns, but also unlocks the potential to collect more data for training machine learning models, which help improve accuracy and efficiency. In addition, federated learning facilitates training models with data from disparate and unrelated sources, referred to as “data islands”. In addition, federated learning enables the management of disparate data spread across different data spaces, each characterized by its unique attributes. This approach also facilitates what is known as “learning transfer”, which allows models to share knowledge without transmitting users’ private data. However, it is important to note that FL is still in its infancy and faces a number of challenges. This necessitates targeted research efforts to improve its capabilities.
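As a concrete illustration of the federated cycle described above, the following Python sketch runs one FedAvg-style round with toy clients. The helper names (`local_train`, `fedavg`) and the toy update rule are our own illustration, not part of the frameworks proposed later in this paper.

```python
# One federated round: clients refine the global model locally, then the
# server averages the updates weighted by each client's sample count.
# Model "weights" are plain lists of floats for illustration only.

def local_train(global_weights, local_data):
    # Placeholder for local training: nudge each weight toward the mean
    # of the client's data (stands in for real gradient steps).
    mean = sum(local_data) / len(local_data)
    return [w + 0.1 * (mean - w) for w in global_weights]

def fedavg(client_updates):
    # client_updates: list of (weights, n_samples) pairs.
    total = sum(n for _, n in client_updates)
    dim = len(client_updates[0][0])
    return [
        sum(w[i] * n for w, n in client_updates) / total
        for i in range(dim)
    ]

global_model = [0.0, 0.0]
clients = {"a": [1.0, 2.0, 3.0], "b": [5.0, 5.0]}
updates = [(local_train(global_model, d), len(d)) for d in clients.values()]
global_model = fedavg(updates)
```

Note that only model weights cross the network in this loop; the raw lists in `clients` never leave their owner, which is the privacy property FL relies on.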

1.3. Goals and Achievements

The primary goal of this paper is to address the limitations of existing federated learning (FL) frameworks in safeguarding FL algorithms against inference attacks while enhancing computational efficiency and reducing communication overhead. One of the key challenges in federated learning is managing the trade-off between communication traffic across the network and the computational costs for the server or client. Effective solutions need to balance these performance parameters to optimize both communication efficiency and computational resource usage. To achieve this, two novel frameworks are introduced: PolyFLAM and PolyFLAP, both of which incorporate polymorphic encryption to strengthen security in federated learning environments against inference attacks. These frameworks offer distinct advantages: PolyFLAM focuses on reducing computational costs by exchanging entire models, while PolyFLAP aims to reduce communication overhead by transmitting only model parameters. Therefore, the key achievements of this work are as follows:
1. Development of PolyFLAM and PolyFLAP frameworks: Two flexible frameworks are presented, providing users with options to prioritize either communication efficiency or computational efficiency based on specific needs.
2. Integration of polymorphic encryption: Both frameworks incorporate polymorphic encryption, ensuring robust security even in cases of key compromise, thereby mitigating the risk of privacy breaches, including inference attacks.
3. Evaluation of multiple machine learning models: The effectiveness of the proposed frameworks is demonstrated by evaluating them across various machine learning models, such as support vector machines, logistic regression, Gaussian naïve Bayes, stochastic gradient descent, and multi-layer perceptron.
4. Empirical validation of security and efficiency: Experimental results show that both PolyFLAM and PolyFLAP effectively withstand inference attacks, while offering significant improvements in either computational or communication efficiency compared to traditional FL methods.
These contributions provide practical solutions for enhancing both the privacy and performance of federated learning systems, addressing key concerns for real-world applications.

1.4. Organization of the Article

In response to the need to enhance FL algorithms against inference attacks, this article presents two innovative frameworks for federated learning, both of which involve the use of polymorphic encryption [32] to strengthen the security of FL. Section 2 addresses the problem, specifically the existing privacy inadequacies in FL. It also explains the motivation for developing these frameworks. A key aspect is the introduction of polymorphic encryption, a new addition in federated learning. Section 3 presents the proposed frameworks in detail, explaining the mechanisms incorporated in them and providing comprehensive explanations of their inner processes. Section 4 discusses and evaluates the proposed frameworks from different perspectives. In this context, tests are performed under real conditions to prove their efficiency. Finally, Section 5 addresses the challenges that hinder the development of the proposed frameworks, while providing perspectives for their future development.

2. Problem Statement: Security Threats in FL Domain

Federated learning is a robust solution for ensuring data privacy by taking a decentralized approach to machine learning and minimizing extensive data sharing between clients and servers. Equally advantageous, FL succeeds in reducing transmission costs, as the raw data usually exceed the size of the transmitted models or their parameters.

2.1. FL under the Scope: Challenges and Issues

Federated learning has proven to be highly successful in a variety of applications, but it is not immune to challenges, a topic that has been extensively studied in the academic literature. This thorough investigation has brought to light a number of issues that have been discussed in detail in studies [33,34,35]. In particular, the original FL aggregation algorithm, named FedAvg [31], has been studied with respect to several of its limitations. These include issues such as data and hardware heterogeneity, sensitivity to local models, scalability limitations, incremental convergence rates, computational and communication overheads, and vulnerability to malicious attacks. The diversity of these challenges has led to increased research efforts aimed at improving the practicality of FL, and has prompted researchers to develop solutions that deftly address these challenges.

2.2. Security in FL Domain

Although federated learning functions as a privacy-preserving ML technology, it remains vulnerable to malicious attacks [33,34,35]. The security of messages exchanged within the FL cycle can be divided into the input phase, the learning process itself, and the resulting learned model. This vulnerability leads to a spectrum of attacks, including but not limited to poisoning, inference, and backdoor attacks. Poisoning attacks can adversely affect the quality of learning outcomes, inference attacks expose users’ private data, and backdoor attacks allow unauthorized intrusion into the FL system [36].

2.2.1. Poisoning Attacks

Poisoning attacks, whether random or targeted [37], aim to either reduce model accuracy (random) or manipulate the model to output a label specified by the attacker (targeted). These attacks can target data or the model, both of which negatively impact the overall behavior of FL. Compromised FL environments allow attackers to perform targeted and untargeted poisoning attacks that include both data and model poisoning attacks.
  • Data poisoning: Also known as data corruption, this type of attack has two main forms: clean label [38] and dirty label [39]. Clean label attacks assume that the labels cannot change, requiring stealthy poisoning, while dirty label attacks can insert misclassified data with the intended target labels. Data poisoning can be performed by any federated learning participant, and the impact on the FL model depends on the number of attackers and the amount of data poisoned.
  • Model poisoning: In model poisoning, local training is exploited to contaminate model updates before they are submitted to the server, or to embed hidden backdoors into the global model [40]. Targeted model poisoning aims to have selected inputs misclassified without modifying them, as in adversarial attacks [41], which is achieved by manipulating the training process. Model poisoning in federated learning surpasses the effects of data poisoning because it affects model updates in every iteration [42], mimicking centralized poisoning applied to a subset of the entire training data. Performing model poisoning, however, requires significant technical resources.

2.2.2. Inference Attacks

In federated learning, sharing parameters during training raises privacy concerns [43,44]. Deep learning models unintentionally internalize various data features beyond the core tasks, potentially revealing sensitive data aspects of the participants. Attackers can infer features by comparing model parameter snapshots, revealing aggregate updates of all participants except the attacker. The problem lies in gradients computed from participants’ private data. Gradients in deep learning models arise from layer attributes and errors above the layer, providing opportunities for inference attacks [43]. These observations can reveal private data attributes, including class representatives and membership, or even allow recovery of labels without knowledge of the training set [44]. Inference attacks categorically include the following:
  • Inferring membership: the goal of membership inference attacks is to determine whether a particular data element was used to train the model [45].
  • Inferring class representatives: occurs when a malicious participant intentionally compromises another participant and exploits the real-time learning of FL to train a network that generates private prototype samples of targeted training data. These generated samples mimic the distribution of the training data.
  • Inferring properties: In this attack, an attacker can perform both passive and active property inference attacks to infer properties of other participants’ training data that are independent of the features describing the classes of the FL model [45]:
    Property inference attacks require the attacker to have additional training data labeled with the exact property they wish to infer;
    A passive attacker can only monitor updates and make inferences by training a binary property classifier;
    An active adversary can use multitask learning to trick the FL model into learning a better separation between data with and without the property, resulting in more information being extracted;
    An adversarial participant can even infer when a feature appears and disappears in the data during training;
    It can recover pixel-perfect original images and token-matched original texts.

2.3. Securing FL Frameworks: State of the Art

Researchers are interested in improving the security of FL to promote its usability and feasibility. Several attempts have already been made in this regard. For example, in [46], the authors propose a secure vector summation strategy using a protocol with a fixed number of rounds that reduces computational costs and is robust to faulty clients. In their approach, only a single server can be trusted to hold the exchanged data. Their proposed framework provides high security against honest but curious adversaries and guarantees anonymity even when faced with active adversaries, such as an enemy server. Moreover, in [47], the authors proposed Robust Federated Aggregation (RFA), which aims to protect the aggregation process of FL against poisoning attacks. To achieve their goal, they aggregated the exchanged models based on the geometric median, which can be calculated using a Weiszfeld-type algorithm [48]. RFA was able to compete with the traditional FedAvg algorithm and was more resistant to data poisoning attacks.
On the other hand, the authors in [49] developed SecureD-FL, an FL framework based on a refined form of the Alternating Direction Method of Multipliers (ADMM) [50]. Their framework uses a communication mode in which the algorithm decides, in each round of execution, which subset of users should exchange data in order to minimize the disclosure of private data during the aggregation process. In addition, the authors of [51] proposed SEAR, a secure and efficient aggregation scheme for Byzantine-robust federated learning that aggregates the local models in a secure and trusted hardware environment—specifically, the Intel SGX Trusted Execution Environment (TEE) [52], a protected CPU domain in which executed data and programs are kept secret and cannot be modified. Moreover, in [53], the authors proposed Efficient Privacy-Preserving Data Aggregation (EPPDA), which uses the homomorphism of secret sharing [54] in the FL environment. Their algorithm is secure and reduces the impact of malicious clients. The cryptographic primitives used in their approach can be summarized as follows: secret sharing, a key agreement protocol, authenticated encryption, and signature schemes.
Finally, in [55], the authors proposed the aggregation algorithm HeteroSAg, which uses masking to secure the exchanged messages such that the mutual information between the masked model and the unique model is zero. HeteroSAg’s resilience to Byzantine attacks depends on the FL cycle, which implements a segment grouping strategy based on dividing edge users into groups and segmenting local model updates for those users. The security approaches followed in the state of the art of secured FL frameworks are summarized in Table 1 below.

2.4. Problem and Motivation

Extensive research has focused primarily on understanding poisoning attacks in secure federated learning aggregation algorithms, whereas attention to inference attacks has been relatively limited. Although techniques such as polymorphic encryption (PE) [32] show promise in mitigating inference attacks by making data exchanges more secure, they have been little explored in previous FL frameworks. Considering the critical importance of inference attacks and inspired by the effectiveness of PE, this paper introduces two frameworks, PolyFLAM and PolyFLAP. To the best of our knowledge, they are the first to combine PE with FL aggregation, making them novel solutions in this area. This integration not only sets them apart from existing approaches, but also opens new possibilities for improving the security of FL. In addition, these frameworks support secure FL across several ML models, as explained later.

2.5. Polymorphic Encryption

Polymorphism can be understood as the remarkable ability of objects or functions to take on different forms or behaviors and adapt to different contexts. In contrast, encryption is a complicated process of converting regular data into an unintelligible format to protect them from malicious use or access. The most popular encryption algorithms are AES (Advanced Encryption Standard) [56], RSA (Rivest–Shamir–Adleman), and others [57,58], all of which contribute to data security. This leads to the concept of polymorphic encryption, a sophisticated encryption paradigm that introduces a dynamic dimension by changing the encryption algorithm or keys to enhance overall security. Unlike traditional encryption methods, which are characterized by fixed algorithms and keys, PE poses a major challenge to attackers because even possession of the ciphertext provides minimal advantage in decryption, underscoring the robustness of the technique against attackers.
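The core idea can be sketched as follows: a fresh key is derived for every message, so a captured key decrypts at most one message. In the sketch below, a toy XOR cipher stands in for a real block cipher such as AES-256, and all names (`derive_key`, `xor_cipher`) are our own illustration rather than a reference implementation.

```python
# Polymorphic encryption sketch: the key changes for every message.
import hashlib
import os

def derive_key(master, counter):
    # Derive a fresh 32-byte key per message from a master secret
    # and a message counter.
    return hashlib.sha256(master + counter.to_bytes(8, "big")).digest()

def xor_cipher(key, data):
    # Toy symmetric cipher (XOR with a repeating key); a real system
    # would use AES-256 here.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

master = os.urandom(32)
msgs = [b"model update 1", b"model update 2"]
# Each message is encrypted under its own key.
cts = [xor_cipher(derive_key(master, i), m) for i, m in enumerate(msgs)]
# Decryption with the matching per-message key restores the plaintexts.
pts = [xor_cipher(derive_key(master, i), c) for i, c in enumerate(cts)]
assert pts == msgs
```

Because every message uses a distinct key, an attacker who cracks one ciphertext gains no leverage over the others, which is the robustness property the paragraph above describes.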

3. PolyFLAM and PolyFLAP: FL Frameworks Secured with Polymorphic Encryption

The compelling need to enhance security protocols within federated learning frameworks to guard against inference attacks, coupled with the potential of polymorphic encryption to address these needs, motivated the development of PolyFLAM and PolyFLAP. One of the critical challenges in federated learning is balancing the trade-off between communication traffic across the network and computational costs on the server or client. These frameworks integrate the core principles of polymorphic encryption to bolster security and privacy in federated learning environments, ensuring robust protection against attacks. This section provides a detailed explanation of both the conceptual foundations and the design of the proposed frameworks.

3.1. Main Concept

A typical federated learning system consists of a central server and multiple clients. The server sends a global model to the clients, which train their own copies using their local data. After training, the clients send their updated models back to the server, which combines them into a single improved global model. The process is repeated until the global model reaches a point of stability. In PolyFLAM and PolyFLAP, the exchanged messages are subjected to a special type of encryption, called polymorphic encryption, using the AES-256 algorithm [56]. In this way, the security and protection of the exchanged data are ensured. The uniqueness of the proposed frameworks stems from the use of different encryption keys for each message exchanged between the server and the clients. This approach generates polymorphism, which adds an additional layer of security. Moreover, even for a single client, different keys are used for each message exchanged with the server. The main sources of this polymorphism are the Table of Encryption Keys (ToKs) and the initial encryption key, which are described in the following sections.

3.1.1. Table of Encryption Keys (ToKs)

In both proposed frameworks, when a client makes a connection request, the server responds by providing a Table of Encryption Keys (ToKs). Each key in this table is assigned a unique ID for indexing. These keys play a critical role in encrypting the messages that are later exchanged. In this process, each message is assigned an index corresponding to the key used for encryption on the sender’s side and decryption on the receiver’s side. The AES-256 keys consist of 32 characters (bytes) and are extremely resistant to cracking attempts, which ensures a high level of security. In the context of PolyFLAM and PolyFLAP, even the case of a key being cracked or leaked does not pose a significant threat. This is because the implemented mechanism ensures that the compromised key, if present, is not reused in the federated learning process, neither with the same client nor with other clients. This concept is explained in more detail in the following sections. In practice, a malicious client that successfully cracks a key would gain minimal benefit from it, since that key is unlikely to have any further use.
It is important to emphasize that each client receives its own set of ToKs when connecting to the server. Even the same client receives a new set of ToKs each time it connects. Moreover, the transmission of ToKs to the client after the connection is established requires an additional encryption mechanism to protect against malicious entities. This precaution is critical because the effectiveness of the entire security scheme would be compromised if the ToKs were cracked or leaked. To counteract this, the initial encryption key, called the “initial_key”, used to encrypt the ToKs is also generated polymorphically, a concept that is explained in more detail in the following section.
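A minimal sketch of how a ToKs might be issued and consumed, assuming, as described above, that each 32-byte key is indexed by an ID and never reused. The dict layout and helper names are our own illustration, not the frameworks’ actual implementation.

```python
# Table of Encryption Keys (ToKs) sketch: the server issues a fresh
# table of indexed 32-byte AES-256-sized keys per connection; each
# message then carries the ID of the key that encrypted it.
import os

def make_toks(n_keys=16):
    # One fresh random 32-byte key per ID, regenerated for every
    # client connection.
    return {key_id: os.urandom(32) for key_id in range(n_keys)}

def pick_key(toks, used):
    # Never reuse a key: a leaked key therefore decrypts at most
    # one message, limiting the damage of a compromise.
    for key_id, key in toks.items():
        if key_id not in used:
            used.add(key_id)
            return key_id, key
    raise RuntimeError("ToKs exhausted; a new table must be issued")
```

A sender would call `pick_key`, encrypt the message under the returned key, and transmit the key ID alongside the ciphertext; the receiver looks the ID up in its own copy of the table to decrypt.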

3.1.2. Initial Encryption Key

The initial encryption key, referred to as the “initial_key”, plays an important role in encrypting the Table of Encryption Keys (ToKs) that is subsequently used to encrypt messages. Given the sensitive nature of the ToKs data, it is imperative that a separate initial_key be generated for each client. To achieve this, both PolyFLAM and PolyFLAP define well-defined procedures for generating the initial_key prior to its use in ToKs encryption. Each connection session, even for the same client, uses a unique key, since random characters are involved in the initial_key creation. It is worth noting that these keys are never transmitted over the network. Instead, they are generated independently on both the server and the client using a unified mechanism, which significantly increases the level of security. The procedure for generating the initial_key is identical on both the server and client sides and includes the following steps:
1. Once the connection is established, the client generates a 32-character string called the “random_secret”.
2. This string is then combined with the client’s connection data (socket data), which include the IP address and port. This merging process results in a new 32-character string that conforms to the following structure:
  (a) The first eight characters are the reversal of the last eight characters of the random_secret;
  (b) The next four characters are the last four characters of the socket data;
  (c) The next eight characters are the middle eight characters of the random_secret;
  (d) The next four characters are the first four characters of the socket data;
  (e) The last eight characters are the reversal of the first eight characters of the random_secret.
By concatenating the above substrings, a 32-character string is formed. This string is used as input to the SHA-256 algorithm [57], and the initial_key is derived from the first 32 characters of the resulting hash. The hashing step increases security by reducing vulnerability to potential cracking attempts. Since both the client and the server know the socket data, they can independently repeat these steps and create an identical key, provided they hold the same random_secret. However, because the random_secret is randomly generated on the client side, the server cannot reproduce it on its own; the secret must therefore be shared with the server. To ensure secure transmission, a “shuffled_secret” is constructed according to the following steps and then forwarded to the server, which uses it to regenerate the random_secret:
1.
The first eight characters are the reverse of the third eight characters of the random_secret;
2.
The second eight characters are the first eight characters of the random_secret;
3.
The third eight characters are the inverse of the last eight characters of the random_secret;
4.
The last eight characters are the second eight characters of the random_secret.
By following these steps, the shuffled_secret becomes virtually useless to malicious entities unless they know the process required to restore the original sequence. This is not possible unless these entities can access the underlying code. Once the server receives the shuffled_secret, it reverses the steps of the shuffle to restore the original sequence, thus building the random_secret as it was on the client’s side. Then, the server mimics the client’s actions and repeats the same sequence of steps to generate the initial_key. Now that both the server and the client have the identical initial_key, the encryption of the Table of Encryption Keys (ToKs) can be performed. Then, these encrypted ToKs are sent to the clients, who decrypt them and use them to secure the exchanged messages.
It is important to note that even if a client connects through the same IP address in different sessions at different times, the initial_key would not be consistent. This is because randomness was included in the key creation process, in addition to complex shuffling, mixing, and hashing. The visual representation of the initial_key creation process can be seen in Figure 2.
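For illustration, the steps above can be sketched in Python as follows. The exact slice taken as the “middle eight characters” (characters 13 to 20 here) and the string form of the socket data are assumptions made for this sketch, not details fixed by the text:

```python
import hashlib
import secrets

def derive_initial_key(random_secret: str, socket_data: str) -> str:
    """Derive the initial_key from the 32-char random_secret and the
    connection's socket data, following steps (a)-(e) above."""
    merged = (
        random_secret[-8:][::-1]   # (a) last 8 chars of the secret, reversed
        + socket_data[-4:]         # (b) last 4 chars of the socket data
        + random_secret[12:20]     # (c) middle 8 chars of the secret (assumed slice)
        + socket_data[:4]          # (d) first 4 chars of the socket data
        + random_secret[:8][::-1]  # (e) first 8 chars of the secret, reversed
    )
    # Hash the merged string; the initial_key is the first 32 characters
    # of the SHA-256 hex digest.
    return hashlib.sha256(merged.encode()).hexdigest()[:32]

def shuffle_secret(random_secret: str) -> str:
    """Client-side shuffle applied before sending the secret (steps 1-4)."""
    return (
        random_secret[16:24][::-1]  # 1. reverse of the third 8 chars
        + random_secret[:8]         # 2. first 8 chars
        + random_secret[24:][::-1]  # 3. reverse of the last 8 chars
        + random_secret[8:16]       # 4. second 8 chars
    )

def unshuffle_secret(shuffled: str) -> str:
    """Server-side inversion of the shuffle, restoring the random_secret."""
    return (
        shuffled[8:16]              # first 8 chars of the original secret
        + shuffled[24:]             # second 8 chars
        + shuffled[:8][::-1]        # third 8 chars (undo step 1)
        + shuffled[16:24][::-1]     # last 8 chars (undo step 3)
    )

# Both sides derive the same key without the raw secret crossing the network.
secret = secrets.token_hex(16)   # 32-character random_secret
sock = "192.168.1.10:5000"       # hypothetical socket data
client_key = derive_initial_key(secret, sock)
server_key = derive_initial_key(unshuffle_secret(shuffle_secret(secret)), sock)
assert client_key == server_key and len(client_key) == 32
```

Because the shuffle is a fixed permutation of 8-character blocks, the server restores the random_secret exactly and both sides arrive at the same 32-character initial_key.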

3.2. Supported ML Models

PolyFLAM and PolyFLAP are innovative frameworks for federated learning that expand the horizons of model training. These frameworks provide a diverse set of five different machine learning models that give users the flexibility to effectively tackle a variety of data analysis problems. The models offered are the following:
  • Support vector machines (SVMs) [59]: a powerful classification algorithm that determines the optimal hyperplane to divide data into different classes;
  • Logistic Regression [60]: a widely used binary classification algorithm that estimates the probability that a given input belongs to a particular class;
  • Gaussian naïve Bayes [61]: this algorithm applies Bayes’ theorem with a naïve independence assumption and Gaussian likelihoods to classify data points based on their feature values;
  • Stochastic gradient descent (SGD classifier) [62]: an iterative optimization algorithm used for training linear classifiers, often applied to large datasets;
  • Neural network (multi-layer perceptron) [63]: a versatile deep learning architecture that simulates the interconnected structure of the human brain and is capable of processing complex patterns and relationships in data.
These models are suitable for a variety of machine learning tasks and provide users with the flexibility to choose the model that best fits their specific needs.
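Since both frameworks are implemented in Python, the five model families could be instantiated from scikit-learn as in the sketch below. The constructor settings shown are illustrative assumptions, not the frameworks’ actual configuration, which is not specified here:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression, SGDClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# Registry of the five supported model families (hyperparameters are
# illustrative defaults only).
SUPPORTED_MODELS = {
    "svm": SVC(),
    "logistic_regression": LogisticRegression(max_iter=1000),
    "gaussian_nb": GaussianNB(),
    "sgd": SGDClassifier(max_iter=1000),   # linear classifier via SGD
    "mlp": MLPClassifier(hidden_layer_sizes=(32,), max_iter=300),
}

# Smoke test: every model trains on a small binary-classification task.
X, y = make_classification(n_samples=200, n_features=20, random_state=0)
for name, model in SUPPORTED_MODELS.items():
    model.fit(X, y)
```

In a federated setting, each client would fit its copy of the selected model on local data only, as described in the workflow of Section 3.5.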

3.3. A Maneuver in the Trade-Off Space

PolyFLAM and PolyFLAP differ fundamentally in the nature of the data exchanged between clients and servers, offering two distinct strategies to balance computational and communication efficiency in federated learning (FL). PolyFLAM follows a model-centric approach by transmitting the entire model between server and clients. This design minimizes computational demands on the client side, as the models received can be directly trained or aggregated without further reconstruction. The reduced computational burden makes PolyFLAM particularly suited for scenarios where devices have limited processing power or when the aggregation of models at the server is preferred without additional overhead.
In contrast, PolyFLAP takes a parameter-centric approach, optimizing communication by transmitting only model parameters. This approach reduces the size of the messages exchanged, significantly lowering communication overhead. However, this reduction in communication cost comes with an increased computational demand, as clients and the server are required to rebuild the model from the transmitted parameters. The need for additional computational steps in PolyFLAP introduces a trade-off between communication efficiency and processing complexity. As a result, PolyFLAP is most advantageous in environments where devices have substantial computational capabilities but where network communication is limited or costly.
Together, the two frameworks let users maneuver within the trade-off between communication traffic and computational cost. PolyFLAM simplifies aggregation by exchanging entire models at the expense of larger messages, while PolyFLAP shrinks the messages to bare parameters at the expense of model reconstruction. By maneuvering between these frameworks, users can select the approach that best aligns with their specific operational constraints and priorities, whether they need to optimize for lower communication costs or reduced computational demands.
The decision between using PolyFLAM or PolyFLAP ultimately depends on the specific constraints of the operational environment. If the network infrastructure is robust and capable of handling larger data transfers, but devices have limited computational resources, PolyFLAM may be the preferable choice due to its reduction in computation at the peripherals. Conversely, in environments where the network connection is constrained, but devices have powerful computational capabilities, PolyFLAP becomes a more suitable option, as the communication cost is minimized at the expense of increased local computation.
Thus, the introduction of both PolyFLAM and PolyFLAP is intended to provide users with flexible options, allowing them to choose the framework that best aligns with their infrastructure and operational needs. This design aims to investigate the trade-off space in polymorphic encryption-secured FL, offering solutions tailored to the user’s specific conditions. The distinction between these frameworks is not a matter of superiority but rather a reflection of different design priorities: PolyFLAM focuses on computational efficiency, while PolyFLAP emphasizes communication efficiency. Therefore, the choice between the two frameworks should be guided by the characteristics of the network and computational resources available.

3.4. Parameters Generated in PolyFLAP

As previously explained, PolyFLAP exchanges parameters between the server and the clients. Since PolyFLAP offers five different machine learning models, each generates a different set of parameters during the local training process, as summarized in Table 2 below.
These parameters, which collectively represent the core attributes of their respective models, are shared between clients and the server to jointly refine the global model during the federated learning process. On the server side, the received parameters are integrated and aggregated, enabling iterative refinement of the global model. This collaborative process ensures that the collective knowledge of the various clients contributes to a better informed and better trained global model.
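As a concrete illustration of this parameter exchange, the sketch below extracts and restores the parameters of a scikit-learn logistic regression (Table 2 lists the exact attributes per model). The attribute names follow scikit-learn’s conventions, and the rebuild logic is a simplified assumption for the binary case:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

def get_parameters(model):
    """Collect the trainable attributes exchanged in PolyFLAP for a
    linear model: the coefficients and the intercept."""
    return {"coef_": model.coef_, "intercept_": model.intercept_}

def set_parameters(model, params, classes=(0, 1)):
    """Rebuild a usable model from received parameters -- the extra
    client/server computation PolyFLAP trades for smaller messages."""
    model.coef_ = params["coef_"]
    model.intercept_ = params["intercept_"]
    model.classes_ = np.array(classes)  # binary task assumed
    return model

# A client trains locally and ships only the parameters; the receiver
# reconstructs an equivalent model without ever seeing the raw data.
X, y = make_classification(n_samples=200, n_features=20, random_state=0)
local = LogisticRegression(max_iter=1000).fit(X, y)
rebuilt = set_parameters(LogisticRegression(), get_parameters(local))
assert (rebuilt.predict(X) == local.predict(X)).all()
```

The parameter dictionary is far smaller than a serialized model object, which is precisely the communication saving PolyFLAP targets.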

3.5. Frameworks Design

With this in mind, the PolyFLAM and PolyFLAP workflow is described in the following steps, which are also shown in Figure 3 below. Recall that both frameworks have the same workflow, except for the type of messages exchanged between server and clients, which are models in the case of PolyFLAM and parameters in the case of PolyFLAP.
1.
The server starts the FL process on its side;
2.
The client connects to the server;
3.
The client generates the random_secret and the initial_key, then shuffles the random_secret and sends the resulting shuffled_secret to the server in a “Connect” message;
4.
The server receives the message and creates the table of random encryption keys (ToKs);
5.
The server regenerates the initial_key from the shuffled_secret received in the “Connect” message;
6.
The server encrypts the ToKs using the initial_key (the first 32 characters of the hash) and sends them to the client;
7.
The client receives the encrypted ToKs and decrypts them using the initial_key (after this step, the client selects an unused key from the ToKs to encrypt each of its messages and encapsulates the sent message with the ID of the used key);
8.
The client replies to the server with an encrypted “Ready” message;
9.
The server receives the message and responds with an initial “Model/Parameters” message;
10.
The client receives the first "Model/Parameters" message and trains the model on the local data;
11.
The client replies to the server with its encrypted model (in case of PolyFLAM) or encrypted model parameters (in case of PolyFLAP);
12.
The server checks if all clients have sent their models/parameters; and
(a)
If so, it starts the aggregation process, updates the global model/parameters, and sends them back to the clients;
(b)
If not, it sends an encrypted “Hibernate” message to the clients to wait until the above condition is met.
13.
The clients receive the updated gradients and re-train their models based on them;
14.
Steps 11, 12, and 13 repeat until the model converges or until the server decides to stop.
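The aggregation in step 12(a) can be sketched as follows; the paper does not fix a specific aggregation rule, so a simple FedAvg-style mean of each named parameter is assumed here for illustration:

```python
import numpy as np

def aggregate(client_params):
    """Average each named parameter across client updates (a FedAvg-style
    mean; the actual aggregation rule is an assumption, not stated above)."""
    return {
        name: np.mean([p[name] for p in client_params], axis=0)
        for name in client_params[0]
    }

# Two hypothetical client updates for a tiny linear model.
updates = [
    {"coef_": np.array([1.0, 2.0]), "intercept_": np.array([0.5])},
    {"coef_": np.array([3.0, 4.0]), "intercept_": np.array([1.5])},
]
global_params = aggregate(updates)
```

The server runs this only once all clients have reported (step 12), then encrypts and broadcasts the result, closing the loop of steps 11 to 13.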

4. Experimental Evaluation and Discussion

By using polymorphic encryption, this study provides two FL frameworks with increased resistance to inference attacks, strengthening the secrecy of messages exchanged within a federated learning cycle. This section focuses on an in-depth evaluation of the proposed frameworks, PolyFLAM and PolyFLAP. It is worth noting that while the proposed frameworks create a secure environment for FL, future revisions should consider including authentication services. This proactive step would give the FL system a robust defense mechanism against potentially malicious entities. Although beyond the scope of this study, candidate authentication services include, but are not limited to, traditional password-based authentication, two-factor authentication (2FA), public key infrastructure (PKI), single sign-on (SSO), biometric authentication, and a variety of other options [64].

4.1. Security Analysis

The messages in the proposed frameworks are encrypted using the AES-256 algorithm, which is widely considered one of the most unassailable cryptographic systems known today. Its 256-bit (32-byte) key is nearly impenetrable, as there are roughly 10^77 possible keys. According to [56], cracking such a key with the computing power of a supercomputer would take billions of years. However, attacks using quantum computers, as described in “Quantum Attacks” [65], are on the horizon and may eventually weaken the protection offered by AES, even if rapid key cracking is still a long way off.
To counter this emerging threat, each message exchanged within the domains of PolyFLAM and PolyFLAP is encrypted with a unique key taken from the Table of Keys (ToKs). At the same time, the basic initial_key, which is polymorphically generated by the process described earlier, strengthens the security of the ToKs by encryption. It is important to emphasize that the key management, which includes both the ToKs and the initial_key, uses a unique instantiation for each client and each subsequent connection session.
This cautious approach also applies to situations where clients reconnect or where different clients use the same connection at different times. Thanks to the randomness already explained, the probability of a key being reused is extremely low and approaches zero, so a leaked or cracked key is of little use to an attacker. In summary, the security posture of PolyFLAM and PolyFLAP can be stated as follows: “Although AES-256 keys are very difficult to crack, the risk posed by a compromised or leaked key is almost zero, since that key is almost never used again during the FL cycle.”
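A minimal sketch of this per-message keying is shown below, assuming AES-256-GCM from the `cryptography` package (the block-cipher mode and the table size are assumptions; the text only specifies AES-256 and single-use keys):

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# A small Table of Keys (ToKs): key_id -> 256-bit AES key.
toks = {key_id: AESGCM.generate_key(bit_length=256) for key_id in range(5)}
used_ids = set()

def encrypt_message(plaintext: bytes):
    """Encrypt one message with a never-before-used key from the ToKs.
    The key id travels with the ciphertext so the peer can look up the
    same key in its own copy of the table."""
    key_id = next(k for k in toks if k not in used_ids)  # no key reuse
    used_ids.add(key_id)
    nonce = os.urandom(12)
    ciphertext = AESGCM(toks[key_id]).encrypt(nonce, plaintext, None)
    return key_id, nonce, ciphertext

def decrypt_message(key_id: int, nonce: bytes, ciphertext: bytes) -> bytes:
    return AESGCM(toks[key_id]).decrypt(nonce, ciphertext, None)

key_id, nonce, ct = encrypt_message(b"Ready")
assert decrypt_message(key_id, nonce, ct) == b"Ready"
```

Because every message consumes a fresh key, capturing one key gives an eavesdropper at most one already-delivered message, which is the polymorphic property the frameworks rely on.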

4.2. Frameworks Complexity

A complexity analysis of the PolyFLAM and PolyFLAP frameworks is a critical examination of the efficiency and computational requirements of these solutions. By evaluating the time complexity of essential processes such as communication, encryption, and aggregation, a comprehensive understanding emerges that provides insights into the scalability and performance characteristics of the proposed federated learning frameworks. To better follow this analysis, it is important to be aware of the different functions and processes involved. Figure 4 shows the threads and functions involved in the execution of both frameworks, which are identical except for the messages exchanged between server and clients: models in PolyFLAM and parameters in PolyFLAP.
In this context, it is crucial to clarify that the functions executed at both the server and clients can be summarized as below:
  • In the server, the functions executed are as follows:
    • Run server thread and start the FL cycle (function S1);
    • Run listen thread and await clients’ connections (function S2);
    • Run the communication thread and exchange messages with clients (function S3);
    • Generate Table of Keys (function S4);
    • Receive and accept client’s connection (function S5);
    • Generate initial_key based on the client’s shuffled_secret (function S6);
    • Encrypt Table of Keys (function S7);
    • Send encrypted Table of Keys to client (function S8);
    • Receive client’s ready message (function S9);
    • Decrypt ready message (function S10);
    • Encrypt model/parameters (function S11);
    • Send encrypted model/parameters to client (function S12);
    • Receive trained model/parameters from clients (function S13);
    • Check if model/parameters are received from all clients,
      if yes, aggregate all models/parameters (function S14),
      if no, encrypt and send hibernate message to clients and await receiving all models/parameters (function S15);
    • Encrypt aggregated models/parameters (function S16);
    • Send encrypted aggregated models/parameters to clients (function S17);
    • Repeat steps S13 to S17 until the global model converges.
  • In the client, the functions executed are as follows:
    • Run client thread and create socket (function C1);
    • Connect to server (function C2);
    • Generate initial encryption key (function C3);
    • Run exchange messages thread (function C4);
    • Encrypt “Connect” message (function C5);
    • Send encrypted “Connect” message (function C6);
    • Receive encrypted “Table of Keys” from server (function C7);
    • Decrypt “Table of Keys” (function C8);
    • Encrypt “Ready” message (function C9);
    • Send encrypted “Ready” message (function C10);
    • Receive encrypted “Model/Parameters” from server (function C11);
    • Decrypt “Models/Parameters” from server (function C12);
    • Train model using the local data (function C13);
    • Encrypt “Model/Parameters” from local training (function C14);
    • Send encrypted “Model/Parameters” to server (function C15);
    • Receive and decrypt the new message (function C16) and if the message is
      “Hibernate”, await until receiving another “Model/Parameter” message (function C17),
      “Model/Parameter”, then repeat steps C13 to C17 as per the number of training rounds.

Time Complexity

In the field of federated learning, PolyFLAM and PolyFLAP follow defined steps with complexities defined by the following parameters:
  • N (number of participating clients);
  • IK (generation of initial key);
  • ToKs (Table of Keys size);
  • E (encryption/decryption factors);
  • R (number of training iterations rounds);
  • A (aggregation complexity);
  • MP (model/parameters complexity);
  • T (training on local data).
To describe the time complexity of the framework on the server side, Big-O notation is used to form the necessary formulas. Big-O notation is a mathematical notation that describes the upper bound of the growth rate of an algorithm’s running time as the amount of input data increases. For example, O(1) represents a simple operation such as the initiation of the FL cycle, which occurs once on the server. Other messages have different complexities, as described below:
  • Messages of fixed sizes such as connect, ready, and done are impacted by the number of participating clients: O(N);
  • Messages depending on the number of rounds and the number of participating clients, which are the following:
    hibernate messages, sent to all participating clients except the last one to send its parameters: R · O(N−1);
    model or parameter messages, sent to all clients during every training round: R · O(N).
Following the steps performed on the server side and using the notations described above, the complexity function on the server side can be described as follows:
ServerComplexity = O(1) + (O(IK) · O(N)) + (O(E) · O(ToKs) · O(N)) + (R · O(E) · O(MP) · O(N)) + (R · O(E) · O(A)) + (O(E) · O(N))
The time complexity analysis of the federated learning cycle executed on the client side involves a comprehensive evaluation of various operations, each of which is affected by different time complexities. Notable operations with constant time complexity, denoted as O(1), include client thread initiation. However, unlike the server, the operations are not multiplied by the number of clients, but by the number of training rounds. Consequently, the complexity on the client side can be represented as follows:
ClientComplexity = O(1) + (R · O(IK)) + (R · O(E) · O(MP) · O(N)) + (R · O(T)) + O(N)
The complexity profile shows that the efficiency of the federated learning cycle scales linearly with the number of clients and communication rounds. The linear nature of the complexity indicates that as the amount of inputs (number of clients, rounds of communication) increases, the time required for the process also increases proportionally. This is generally preferable to a quadratic or higher complexity, which would lead to a much higher time requirement as the amount of inputs increases.

4.3. Communication Overhead

Running the PolyFLAM and PolyFLAP frameworks introduces an inherent communication overhead as part of the orchestration between the server and the clients. This additional overhead results primarily from the extra messages that manage the collaboration. These messages include important components such as the Table of Keys (ToKs), as well as messages that signal readiness, hibernation, connection establishment, and disconnection. Notably, most of the overhead arises from the encrypted Table of Keys, since the other messages are of fixed and small sizes. Considering that each entry in the ToKs contributes 32 bytes (a 256-bit AES key), and taking into account the number of clients participating in the federated learning process, the total communication overhead can be approximated as follows, where K is the number of encryption keys and C is the number of participating clients:
CommunicationOverhead = C · K · 32 Bytes
This equation summarizes the major factors contributing to the communication overhead caused by PolyFLAM and PolyFLAP encryption mechanisms and message exchanges.

4.4. Model Accuracy and Convergence

It is important to emphasize that the PolyFLAM and PolyFLAP frameworks were developed with the goal of strengthening the security and integrity of the federated learning environment, not improving learning quality. Although improving machine learning models is important, these frameworks were developed with the goal of providing effective protection against potential vulnerabilities and privacy breaches in FL’s decentralized collaborative learning scenarios. The frameworks protect sensitive data and confidential model information from potential attacks by relying on polymorphic encryption strategies that guarantee that an encryption key is never used twice within the FL cycle, even for the same client. This technique demonstrates a proactive strategy for establishing trust in FL contexts and ensures that the collaborative process takes place within a fortified, robust, and trust-driven framework.

4.5. Space and Storage Utilization

The PolyFLAM and PolyFLAP frameworks introduce a storage overhead that must be considered. Central to this overhead is the Table of Keys (ToKs), a cryptographic cornerstone. While the basic memory components of PolyFLAM and PolyFLAP are consistent with the foundation of federated learning frameworks, the ToKs have a noticeable impact: they require additional storage but play an important role in securing the messages exchanged between servers and clients.

4.6. Evaluation Using Real-World Data

Real-world test datasets are used to evaluate the PolyFLAM and PolyFLAP frameworks. Strengthened by cryptographic capabilities and precise orchestration, the frameworks are now taken beyond the conceptual level: actual data are used to validate their usability and efficiency.

4.6.1. Testing Environment

To evaluate the effectiveness of the proposed framework, a simulated federated learning network was built, delineated by its hardware and software components:
  • Hardware configuration: the simulated network configuration included a server equipped with an Intel Core i7 processor and 16 GB of memory. This server, running Microsoft Windows 10 Home, managed the orchestration of the network. At the same time, the client role was performed by different computers, each with different hardware specifications to simulate the heterogeneity of the real world.
  • Software framework: PolyFLAM and PolyFLAP were developed using Python (ver. 3.9) as the basic programming language.
During data partitioning, the records described in the next section were divided evenly among the clients during each FL cycle. For instance, if a dataset contains 1000 records and four clients participate in the training cycle, fair partitioning means that each client performs local training on 250 distinct records. This careful division of data ensures equality and a consistent basis for comparative evaluation throughout the execution of the system.
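The fair partitioning described above amounts to an even split of the record indices, which can be sketched as:

```python
import numpy as np

def partition(records, n_clients):
    """Evenly split the records across clients; with 1000 records and
    4 clients, each shard holds 250 distinct records."""
    return np.array_split(records, n_clients)

shards = partition(np.arange(1000), 4)
assert [len(shard) for shard in shards] == [250, 250, 250, 250]
```

Each shard would then be delivered to one client for local training, so no record is seen by more than one client in a given FL cycle.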

4.6.2. Datasets Used

A range of datasets specifically selected for binary classification tasks was used to thoroughly evaluate the effectiveness and robustness of the frameworks. The three datasets include a simulated dataset generated using the SKLearn dataset library [66]. This generated dataset, which comprises 9000 records and 20 features, is used to test the core capabilities of the frameworks. In addition, the SHAREEDB cardiovascular disease prediction dataset [67] proves to be a critical component of the evaluation, as it goes deeper into the domain of real-world complexity. This dataset highlights the ability of the frameworks to adapt to real-world medical data, as it contains 139 records and 26 variables that capture the details of cardiovascular health. The surgical binary classification dataset [68] expands the analysis with 14,636 records and 24 features, highlighting the frameworks’ ability to handle a more complex, real-world scenario. Together, these carefully selected datasets allow a thorough examination of framework performance across multiple dimensions of complexity and scale.
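The simulated dataset can be approximated with scikit-learn’s generator; the random seed and class balance below are assumptions, since only the record and feature counts are stated:

```python
from sklearn.datasets import make_classification

# Approximate recreation of the simulated dataset described above:
# 9000 records, 20 features, binary target.
X, y = make_classification(n_samples=9000, n_features=20, random_state=42)
assert X.shape == (9000, 20)
assert set(y) == {0, 1}
```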

4.6.3. Security Analysis: Proof of Polymorphism

A rigorous evaluation process was employed to assess the resilience of the frameworks, with a particular focus on their ability to withstand inference attacks. The encryption keys used for communications were continuously monitored and verified to ensure their effectiveness. A key aspect of this evaluation involved prohibiting the reuse of encryption keys, which allowed for a thorough assessment of robustness against key reuse attacks. This was demonstrated through experiments involving two clients and the initial dataset, where the initial key and five additional keys from the ToKs used for message encryption were meticulously recorded and analyzed. Table 3 summarizes the results of this cryptographic investigation. The results demonstrate the feasibility of polymorphic encryption in maintaining cryptographic strength and protecting against inference attacks. The prohibition of key reuse and the use of polymorphic keys significantly enhance the frameworks’ ability to secure complex data transmissions, ensuring that even if an encryption key is compromised, it cannot be exploited to undermine the system’s security.

4.6.4. Communication Cost

As part of the study and evaluation of PolyFLAM and PolyFLAP, communication costs were tracked and recorded. The communication stream includes different message types, both on the server and on the client, as shown in the list below:
  • Server will be sending the below messages to each client,
    “Encrypted ToKs” (S1);
    “Model/Parameters” (S2);
    “Hibernate” (S3);
    “Disconnect” (S4).
  • Client will be sending the below messages to the server,
    “Connect” (C1);
    “Ready” (C2);
    “Model/Parameters” (C3);
    “Disconnected” (C4).
The subtleties of the message size are closely related to parameters such as the number of training rounds (R) and the number of participating clients (C). Consequently, the quantification of communication costs can be succinctly formulated as follows:
CommunicationCost_Server = C · (S1 + R · (S2 + S3 + S4))
CommunicationCost_Client = C1 + C2 + (R · C3) + C4
It is worth noting that messages S3, S4, C1, and C2 have fixed sizes due to their characteristic properties. The size of S1 depends on the number of keys in the ToKs, with each key contributing 32 bytes. Most of the communication volume comes from S2 and C3, which carry the model/parameters and therefore dominate the message size. The recorded communication costs when running PolyFLAM and PolyFLAP are shown in Table 4 and Table 5 below. The communication costs are tracked between the server and a randomly selected client in a randomly selected training round for the messages sent and received in the three datasets selected for testing. Table 4 shows the size of each message sent by the server and the client, while Table 5 shows the total messages exchanged between the server and the client, as well as the reduction in communication cost from PolyFLAM to PolyFLAP:

4.6.5. Learning Quality

The two frameworks PolyFLAM and PolyFLAP comprise a versatile ensemble of five different models for machine learning training: support vector machine (SVM), logistic regression (LR), Gaussian naïve Bayes (Gaussian NB), stochastic gradient descent (SGD), and multi-layer perceptron (MLP). To comprehensively evaluate the effectiveness and robustness of these methods, a series of experiments was conducted on the three datasets described previously. The results were tracked and reported in Table 6. The acronyms used in the table are as follows:
  • AC: accuracy;
  • PR: precision;
  • RE: recall;
  • F1: F1 Score;
  • SP: specificity;
  • NPV: negative predictive value.
The PolyFLAM and PolyFLAP frameworks were not developed primarily to improve learning quality but rather to strengthen federated learning against potential attacks, especially inference attacks. Nonetheless, learning quality remains a key metric, particularly for machine learning models intended for predictive applications. In particular, accuracy plays a prominent role, as it is a critical criterion for the utility and effectiveness of a model. The results presented above justify an analysis from different points of view.
Moreover, there is a clear trend where datasets with a larger number of records consistently show higher accuracy across the five different models in both PolyFLAM and PolyFLAP. This result is consistent with expectations, as a larger volume of records in a dataset leads to additional data availability for local training at the client node. This subsequently leads to an improvement in the local training quality and also in the global model quality. In particular, the results observed with the simulated dataset are remarkable, showing accuracies that exceed the threshold of 90% across various quality parameters. In contrast, the surgical deepnet dataset achieves comparatively lower accuracy, at 76% during the optimal iteration. In turn, the SHAREEDB dataset exhibits the least pronounced performance, as its highest accuracy across models does not exceed 62%. This clearly shows the influence of dataset size on model performance and learning quality. This observed phenomenon can be discussed from two strategic perspectives:
  • Potential to improve learning quality: the observed results encourage improving PolyFLAM and PolyFLAP so that they perform well when dealing with relatively small datasets;
  • Encouragement of client contributions: since the two proposed frameworks preserve user privacy, they can be considered as a solution to attract more participants to a FL training cycle, thus providing an opportunity to increase data availability and improve model performance.
In summary, the results highlight the potential of these frameworks to go beyond their primary focus on security and also contribute to improving the quality of learning. Moreover, the scalability and privacy-friendly characteristics of these frameworks suggest that they can provide even more robust results as the number of participating entities grows. Although PolyFLAM and PolyFLAP introduce novel frameworks by incorporating polymorphic encryption into federated learning, making direct comparisons with classical machine learning models applied to these datasets, such as [69,70,71], is challenging due to their unique concepts; however, it is still possible to assess their advantages relative to existing methods. Specifically, these frameworks surpass previous implementations in terms of robustness against inference attacks, a security aspect not adequately addressed by earlier approaches. PolyFLAM and PolyFLAP offer superior protection by ensuring that even if an encryption key is compromised, it poses no threat to the system, because each key is used only once, even by the same user. This level of security represents a significant advancement over prior FL methods.

4.7. Comparison to the State of the Art

In this section, a thorough comparison is presented between the novel federated learning (FL) frameworks introduced in this study and existing state-of-the-art approaches. This comparison highlights the particular focus on data security and privacy achieved by incorporating polymorphic and homomorphic encryption techniques, and is presented in Table 7 below:

5. Challenges and Future Perspectives

As the federated learning environment evolves, a number of obstacles and interesting options for future development emerge. This section addresses the many challenges associated with the development, deployment, and evaluation of the proposed federated learning frameworks PolyFLAM and PolyFLAP. These issues range from data heterogeneity to the requirement for effective communication methods. In addition, the section highlights the future prospects for the proposed frameworks and, by extension, the entire field of FL. By addressing current problems and highlighting future opportunities, federated learning can reshape machine learning paradigms and data-driven innovation.

5.1. Challenges

In the context of PolyFLAM and PolyFLAP, a number of challenges arise that are closely related to the implementation and refinement of these federated learning frameworks.

5.1.1. Heterogeneity

Heterogeneity poses a particular challenge for the proposed frameworks, especially because they only support horizontal FL data, i.e., scenarios where different clients process data with identical features. Although the frameworks cover this particular data type well, it is important to recognize that alternative settings were not considered or tested in our study. This highlights the need for further research to cover the broader landscape of data heterogeneity.
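For concreteness, the horizontal setting can be pictured with a minimal sketch (the feature names are hypothetical, not from the frameworks): every client holds rows over the same columns, but the rows themselves differ in content and count.

```python
import numpy as np

rng = np.random.default_rng(0)

# One shared feature schema, identical for all clients (horizontal FL).
feature_names = ["heart_rate", "age", "systolic_bp"]  # hypothetical features

# Each client holds *different samples* over the *same feature space*.
client_1 = rng.normal(size=(100, len(feature_names)))
client_2 = rng.normal(size=(250, len(feature_names)))

assert client_1.shape[1] == client_2.shape[1]  # same columns (features)
assert client_1.shape[0] != client_2.shape[0]  # different, disjoint rows
```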

5.1.2. Complexity and Computation Cost

The issue of complexity and processing cost is critical, especially given the costly nature of encryption techniques. Since these algorithms are intended to ensure the integrity of the transmitted data, they necessarily require significant computing resources. While ensuring the security of the data, the rigorous computations required for encryption increase the complexity of the framework. As a result, striking a balance between robust security measures and efficient data processing becomes a critical problem that requires new ways to reduce computational costs while maintaining the integrity of the system.

5.1.3. Scalability

The issue of scalability is a major problem due to the increased processing requirements of encryption. As the framework deals with an increasing number of clients, especially on the server side, the additional computational overhead can limit the scalability of the system. The influx of clients places additional demands on the server’s processing capacity, which can lead to bottlenecks and performance degradation. To achieve smooth scalability, it is critical to explore effective optimization approaches that reduce computational load while maintaining system responsiveness and supporting an increasing number of clients.

5.1.4. Learning Quality

One critical issue that emerges is the quality of learning in the proposed frameworks. While the main goal of PolyFLAM and PolyFLAP in this context is to improve security and robustness against inference attacks, the inherent trade-off between security and learning quality should not be neglected. Emphasizing security techniques such as encryption and privacy may divert attention from improving model learning quality. Striking a delicate balance between strong security and optimal learning outcomes is an ongoing problem that requires careful evaluation of the impact of security measures on the efficiency of the learning process and the future performance of the overall model.

5.1.5. Resource Limitations

Resource constraints pose a significant challenge, especially in the federated learning paradigm where clients typically operate with limited computational resources, which may be the case if the clients are smartphones or smart wearables instead of powerful computers. This challenge becomes even more significant when considering the implementation of the proposed framework. The demands imposed by encryption and other security measures could overwhelm the already limited resources of the clients. This scenario raises concerns about the feasibility and practicality of deploying the framework in real-world scenarios, given the potential burden on client devices. To overcome this challenge, strategies must be developed to optimize the use of available resources and ensure that the system remains operational while addressing the constraints of client environments.

5.2. Future Perspectives

In moving to future perspectives, it is important to acknowledge that the challenges discussed above are by no means new territory in academic discourse. Various researchers have addressed these obstacles and offer innovative solutions to be explored. With careful consideration, a promising path emerges in which the proposed PolyFLAM and PolyFLAP frameworks converge with established techniques. This convergence has the potential not only to overcome existing challenges, but also to usher in an era of increased efficiency and versatility for federated learning systems.

5.2.1. Handling Heterogeneity

Addressing the challenge of heterogeneity arising from different devices and data requires innovative approaches. A number of strategies can be explored to effectively manage this variability. One option is to use resource allocation techniques [72], which intelligently allocate computing resources based on the capabilities of individual devices. This approach optimizes the use of resources and enables a more balanced and efficient federated learning process. In addition, the integration of meta-learning methods [73] represents a promising avenue. Meta-learning allows models to learn and adapt quickly to new data distributions, and thus has the potential to improve the adaptability of the system to the heterogeneity of client devices and data sources. The synergistic fusion of these approaches with the proposed frameworks could lead to a more agile and effective framework for federated learning, capable of addressing the difficulties posed by heterogeneity.
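One way to picture capability-based resource allocation is the following simplified sketch (not the scheme of [72]; the device names and capability scores are illustrative assumptions): the training workload is split in proportion to each client's capacity, so weaker devices receive smaller shares.

```python
def allocate_batches(total_batches: int, capabilities: dict) -> dict:
    """Split a workload across clients proportionally to capability scores
    (e.g., normalized CPU/RAM budgets - hypothetical values here)."""
    total_cap = sum(capabilities.values())
    alloc = {c: int(total_batches * cap / total_cap)
             for c, cap in capabilities.items()}
    # Hand any rounding remainder to the most capable client.
    remainder = total_batches - sum(alloc.values())
    alloc[max(capabilities, key=capabilities.get)] += remainder
    return alloc

shares = allocate_batches(100, {"phone": 1.0, "laptop": 4.0, "server": 5.0})
assert sum(shares.values()) == 100
assert shares["server"] >= shares["laptop"] >= shares["phone"]
```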

5.2.2. Computation Cost and Time Reduction

The challenge of computational costs can be mitigated through the strategic use of various techniques. One notable approach is the use of parallel programming methods. By breaking down complex computations into smaller tasks that can be executed simultaneously, parallel programming makes more efficient use of the processing power of modern devices. This leads to an acceleration of model training and a reduction in computation time, effectively reducing the burden on computing resources. Incorporating parallel programming techniques into the proposed PolyFLAM and PolyFLAP frameworks has the potential to significantly reduce computational costs while improving system scalability and responsiveness.
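As a minimal illustration of this idea (using Python's standard `ThreadPoolExecutor`, with a hash as a cheap stand-in for the frameworks' heavier per-client encryption step), the independent encryptions of several client updates can be dispatched concurrently instead of serially:

```python
import hashlib
from concurrent.futures import ThreadPoolExecutor

def encrypt_update(update: bytes) -> bytes:
    # Stand-in for a costly per-client encryption operation; the real
    # cipher would be far more expensive, which is what makes
    # parallelizing these independent tasks worthwhile.
    return hashlib.sha256(update).digest()

client_updates = [f"client-{i} parameters".encode() for i in range(8)]

# The per-client encryptions are independent, so they can run in parallel.
with ThreadPoolExecutor(max_workers=4) as pool:
    ciphertexts = list(pool.map(encrypt_update, client_updates))

assert len(ciphertexts) == len(client_updates)
```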

5.2.3. Enhancing Scalability

The prospect of improving scalability is linked to effectively solving the problems of heterogeneity and computational cost. As these challenges are addressed through approaches such as resource allocation and parallel programming, a symbiotic relationship emerges. By addressing device and data heterogeneity, the system is enabled to serve a variety of clients. At the same time, reducing computational costs through techniques such as parallel programming ensures that the system remains responsive as the number of participants grows. This convergence of solutions paves the way for a more scalable federated learning system capable of accommodating significant numbers of clients while maintaining performance and efficiency. The interplay of these strategies has the potential to create a robust and adaptable ecosystem that meets real-world needs. In addition, the concept of third-party vendors can be incorporated into the framework to move some tasks outside the server, such as key generation or encryption, and keep network management and aggregation under the control of the central server.

5.2.4. Boosting Learning Quality

The quality of learning results can be improved by applying different data preprocessing techniques on the client side. As data are prepared prior to training, strategic preprocessing steps can be incorporated to improve the quality of the input data. Techniques such as feature scaling, outlier removal, and data augmentation can be applied to improve the quality and relevance of the data. By ensuring that the data fed into the training process are well prepared and free of noise or irregularities, the overall learning quality can be greatly enhanced. Integrating these preprocessing techniques into the proposed PolyFLAM and PolyFLAP frameworks has the potential to fine-tune the learning process in addition to their primary security function, leading to improved model convergence and overall performance.
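A minimal client-side sketch of such preprocessing could look as follows (outlier clipping followed by feature standardization; the percentile thresholds are illustrative assumptions, not values prescribed by the frameworks):

```python
import numpy as np

def preprocess(x: np.ndarray) -> np.ndarray:
    # Clip extreme values to the 1st/99th percentiles per feature
    # (simple outlier removal), then standardize each feature column.
    lo, hi = np.percentile(x, [1, 99], axis=0)
    x = np.clip(x, lo, hi)
    return (x - x.mean(axis=0)) / (x.std(axis=0) + 1e-8)

rng = np.random.default_rng(42)
raw = rng.normal(size=(200, 3))
raw[0, 0] = 1e6                    # inject an extreme outlier
clean = preprocess(raw)

assert abs(clean.mean()) < 1e-6    # near-zero mean after scaling
assert abs(clean[:, 0]).max() < 20 # the outlier no longer dominates
```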

5.2.5. Integrating Blockchain for Enhanced Security

Integrating PolyFLAM and PolyFLAP with blockchain technology offers significant enhancements in security, transparency, and efficiency for federated learning systems. Blockchain’s decentralized and immutable ledger can bolster security by ensuring the integrity of model updates and encryption key exchanges, while smart contracts can automate and enforce security protocols, reducing human error. Additionally, tokenization can create incentive mechanisms to encourage participation, and decentralized data management aligns with federated learning’s goals, improving data privacy and efficiency. Blockchain also provides a transparent record of model provenance, enhancing auditing and reusability, and strengthens resistance to Sybil attacks by verifying participant identities. Overall, this integration promises a more secure, scalable, and robust federated learning framework.

6. Conclusions

The PolyFLAM and PolyFLAP frameworks use polymorphic encryption to enhance the security of message exchanges between servers and clients in a federated learning context. The security guarantees arise from the diversity of encryption keys, with each server–client message encrypted with a different key. Therefore, in the event of key compromise, there is minimal risk as key reuse within the FL cycle is virtually eliminated. However, there is a trade-off between the communication traffic across the network and the computational costs on the server or client. PolyFLAM and PolyFLAP aim to maneuver between these performance parameters, balancing security with efficiency. While they prioritize security, they incur additional computational and communication costs due to the computationally intensive encryption operations. To address these challenges, they can synergize with established methods to parallelize computation, increase learning efficiency, account for heterogeneity, and improve scalability, thereby enhancing their overall usability and feasibility.

Author Contributions

Conceptualization: M.M. and M.A.; formal analysis: M.M.; investigation: M.M.; methodology: M.M. and M.A.; supervision: M.A., A.B., H.I. and A.R.; visualization: M.M.; writing—original draft: M.M.; writing—review and editing: M.A., A.B., H.I. and A.R. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Natural Sciences and Engineering Research Council of Canada (NSERC), grant number 06351.

Data Availability Statement

Data is contained within the article.

Acknowledgments

We acknowledge the support of Centre d’Entrepreneuriat et de Valorisation des Innovations (CEVI).

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Turing, A.M. Computing machinery and intelligence. In Parsing the Turing Test; Springer: Dordrecht, The Netherlands, 2009; pp. 23–65. [Google Scholar]
  2. Hernández-Orallo, J.; Minaya-Collado, N. A formal definition of intelligence based on an intensional variant of algorithmic complexity. In Proceedings of the International Symposium of Engineering of Intelligent Systems (EIS98), Tenerife, Spain, 11–13 February 1998; pp. 146–163. [Google Scholar]
  3. Frankish, K.; Ramsey, W.M. (Eds.) The Cambridge Handbook of Artificial Intelligence; Cambridge University Press: Cambridge, UK, 2014. [Google Scholar]
  4. Sarker, I.H. Machine learning: Algorithms, real-world applications and research directions. SN Comput. Sci. 2021, 2, 160. [Google Scholar] [CrossRef] [PubMed]
  5. Sharma, N.; Sharma, R.; Jindal, N. Machine learning and deep learning applications-a vision. Glob. Transit. Proc. 2021, 2, 24–28. [Google Scholar] [CrossRef]
  6. Pallathadka, H.; Mustafa, M.; Sanchez, D.T.; Sajja, G.S.; Gour, S.; Naved, M. Impact of machine learning on management, healthcare and agriculture. Mater. Today Proc. 2021; in press. [Google Scholar] [CrossRef]
  7. Ghazal, T.M.; Hasan, M.K.; Alshurideh, M.T.; Alzoubi, H.M.; Ahmad, M.; Akbar, S.S.; Al Kurdi, B.; Akour, I.A. IoT for smart cities: Machine learning approaches in smart healthcare—A review. Future Internet 2021, 13, 218. [Google Scholar] [CrossRef]
  8. Erickson, B.J.; Korfiatis, P.; Akkus, Z.; Kline, T.L. Machine learning for medical imaging. Radiographics 2017, 37, 505. [Google Scholar] [CrossRef]
  9. Zantalis, F.; Koulouras, G.; Karabetsos, S.; Kandris, D. A review of machine learning and IoT in smart transportation. Future Internet 2019, 11, 94. [Google Scholar] [CrossRef]
  10. Xin, Y.; Kong, L.; Liu, Z.; Chen, Y.; Li, Y.; Zhu, H.; Gao, M.; Hou, H.; Wang, C. Machine learning and deep learning methods for cybersecurity. IEEE Access 2018, 6, 35365–35381. [Google Scholar] [CrossRef]
  11. Nagarhalli, T.P.; Vaze, V.; Rana, N.K. Impact of machine learning in natural language processing: A review. In Proceedings of the Third International Conference on Intelligent Communication Technologies and Virtual Mobile Networks (ICICV), IEEE, Tirunelveli, India, 4–6 February 2021; pp. 1529–1534. [Google Scholar]
  12. Liakos, K.G.; Busato, P.; Moshou, D.; Pearson, S.; Bochtis, D. Machine learning in agriculture: A review. Sensors 2018, 18, 2674. [Google Scholar] [CrossRef]
  13. Larrañaga, P.; Atienza, D.; Diaz-Rozo, J.; Ogbechie, A.; Puerto-Santana, C.; Bielza, C. Industrial Applications of Machine Learning; CRC Press: Boca Raton, FL, USA, 2018. [Google Scholar]
  14. Paleyes, A.; Urma, R.G.; Lawrence, N.D. Challenges in deploying machine learning: A survey of case studies. ACM Comput. Surv. 2020, 55, 1–29. [Google Scholar] [CrossRef]
  15. Char, D.S.; Shah, N.H.; Magnus, D. Implementing machine learning in health care—Addressing ethical challenges. N. Engl. J. Med. 2018, 378, 981. [Google Scholar] [CrossRef]
  16. L’heureux, A.; Grolinger, K.; Elyamany, H.F.; Capretz, M.A. Machine learning with big data: Challenges and approaches. IEEE Access 2017, 5, 7776–7797. [Google Scholar] [CrossRef]
  17. Zhou, L.; Pan, S.; Wang, J.; Vasilakos, A.V. Machine learning on big data: Opportunities and challenges. Neurocomputing 2017, 237, 350–361. [Google Scholar] [CrossRef]
  18. Injadat, M.; Moubayed, A.; Nassif, A.B.; Shami, A. Machine learning towards intelligent systems: Applications, challenges, and opportunities. Artif. Intell. Rev. 2021, 54, 3299–3348. [Google Scholar] [CrossRef]
  19. Lwakatare, L.E.; Raj, A.; Bosch, J.; Olsson, H.H.; Crnkovic, I. A taxonomy of software engineering challenges for machine learning systems: An empirical investigation. In Proceedings of the Agile Processes in Software Engineering and Extreme Programming: 20th International Conference, XP 2019, Montréal, QC, Canada, 21–25 May 2019; pp. 227–243. [Google Scholar]
  20. Leskovec, J.; Rajaraman, A.; Ullman, J.D. Mining of Massive Data Sets; Cambridge University Press: Cambridge, UK, 2020. [Google Scholar]
  21. Wuest, T.; Weimer, D.; Irgens, C.; Thoben, K.D. Machine learning in manufacturing: Advantages, challenges, and applications. Prod. Manuf. Res. 2016, 4, 23–45. [Google Scholar] [CrossRef]
  22. Albrecht, J.P. How the GDPR will change the world. Eur. Data Prot. Law Rev. 2016, 2, 287. [Google Scholar] [CrossRef]
  23. Parasol, M. The impact of China’s 2016 Cyber Security Law on foreign technology firms, and on China’s big data and Smart City dreams. Comput. Law Secur. Rev. 2018, 34, 67–98. [Google Scholar] [CrossRef]
  24. Gray, W.; Zheng, H.R. General Principles of Civil Law of the People’s Republic of China. Am. J. Comp. Law 1986, 34, 715–743. [Google Scholar] [CrossRef]
  25. Chik, W.B. The Singapore Personal Data Protection Act and an assessment of future trends in data privacy reform. Comput. Law Secur. Rev. 2013, 29, 554–575. [Google Scholar] [CrossRef]
  26. Dwork, C.; McSherry, F.; Nissim, K.; Smith, A. Calibrating noise to sensitivity in private data analysis. In Proceedings of the Third Theory of Cryptography Conference, TCC 2006, New York, NY, USA, 4–7 March 2006; Springer: Berlin/Heidelberg, Germany, 2006; pp. 265–284. [Google Scholar]
  27. El Emam, K.; Dankar, F.K. Protecting privacy using k-anonymity. J. Am. Med. Inform. Assoc. 2008, 15, 627–637. [Google Scholar] [CrossRef]
  28. Li, P.; Li, J.; Huang, Z.; Li, T.; Gao, C.Z.; Yiu, S.M.; Chen, K. Multi-key privacy-preserving deep learning in cloud computing. Future Gener. Comput. Syst. 2017, 74, 76–85. [Google Scholar] [CrossRef]
  29. Fredrikson, M.; Jha, S.; Ristenpart, T. Model inversion attacks that exploit confidence information and basic countermeasures. In Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security, Denver, CO, USA, 12–16 October 2015; pp. 1322–1333. [Google Scholar]
  30. Shokri, R.; Stronati, M.; Song, C.; Shmatikov, V. Membership inference attacks against machine learning models. In Proceedings of the 2017 IEEE Symposium on Security and Privacy (SP), IEEE, San Jose, CA, USA, 22–26 May 2017; pp. 3–18. [Google Scholar]
  31. McMahan, B.; Moore, E.; Ramage, D.; Hampson, S.; y Arcas, B.A. Communication-efficient learning of deep networks from decentralized data. In Proceedings of the Artificial Intelligence and Statistics PMLR, Lauderdale, FL, USA, 20–22 April 2017; pp. 1273–1282. [Google Scholar]
  32. Booher, D.D.; Cambou, B.; Carlson, A.H.; Philabaum, C. Dynamic key generation for polymorphic encryption. In Proceedings of the 2019 IEEE 9th Annual Computing and Communication Workshop and Conference (CCWC), IEEE, Las Vegas, NV, USA, 7–9 January 2019; pp. 0482–0487. [Google Scholar]
  33. Mohammad, M.; Adda, M.; Bouzouane, A.; Ibrahim, H.; Raad, A. Reviewing Federated Machine Learning and Its Use in Diseases Prediction. Sensors 2023, 23, 2112. [Google Scholar] [CrossRef] [PubMed]
  34. Li, T.; Sahu, A.K.; Talwalkar, A.; Smith, V. Federated learning: Challenges, methods, and future directions. IEEE Signal Process. Mag. 2020, 37, 50–60. [Google Scholar] [CrossRef]
  35. Jawadur, R.K.M.; Ahmed, F.; Akhter, N.; Hasan, M.; Amin, R.; Aziz, K.E.; Islam, A.K.M.M.; Mukta, M.S.H.; Islam, A.K.M.N. Challenges, applications and design aspects of Federated Learning: A survey. IEEE Access 2021, 9, 124682–124700. [Google Scholar]
  36. Lyu, L.; Yu, H.; Yang, Q. Threats to federated learning: A survey. arXiv 2020, arXiv:2003.02133. [Google Scholar]
  37. Huang, L.; Joseph, A.D.; Nelson, B.; Rubinstein, B.I.; Tygar, J.D. Adversarial machine learning. In Proceedings of the 4th ACM Workshop on Security and Artificial Intelligence, Chicago, IL, USA, 21 October 2011; pp. 43–58. [Google Scholar]
  38. Shafahi, A.; Huang, W.R.; Najibi, M.; Suciu, O.; Studer, C.; Dumitras, T.; Goldstein, T. Poison frogs! targeted clean-label poisoning attacks on neural networks. In Proceedings of the Advances in Neural Information Processing Systems, Montréal, QC, Canada, 3–8 December 2018; Volume 31. [Google Scholar]
  39. Gu, T.; Dolan-Gavitt, B.; Garg, S. Badnets: Identifying vulnerabilities in the machine learning model supply chain. arXiv 2017, arXiv:1708.06733. [Google Scholar]
  40. Bagdasaryan, E.; Veit, A.; Hua, Y.; Estrin, D.; Shmatikov, V. How to backdoor federated learning. In Proceedings of the International Conference on Artificial Intelligence and Statistics, PMLR, Palermo, Italy, 3–5 June 2020; pp. 2938–2948. [Google Scholar]
  41. Szegedy, C.; Zaremba, W.; Sutskever, I.; Bruna, J.; Erhan, D.; Goodfellow, I.; Fergus, R. Intriguing properties of neural networks. arXiv 2013, arXiv:1312.6199. [Google Scholar]
  42. Fung, C.; Yoon, C.J.; Beschastnikh, I. Mitigating sybils in federated learning poisoning. arXiv 2018, arXiv:1808.04866. [Google Scholar]
  43. Melis, L.; Song, C.; De Cristofaro, E.; Shmatikov, V. Exploiting unintended feature leakage in collaborative learning. In Proceedings of the 2019 IEEE symposium on security and privacy (SP), San Francisco, CA, USA, 19–23 May 2019; pp. 691–706. [Google Scholar]
  44. Zhu, L.; Liu, Z.; Han, S. Deep leakage from gradients. In Proceedings of the Advances in Neural Information Processing Systems, Vancouver, BC, Canada, 8–14 December 2019; Volume 32. [Google Scholar]
  45. Liu, L.; Wang, Y.; Liu, G.; Peng, K.; Wang, C. Membership inference attacks against machine learning models via prediction sensitivity. IEEE Trans. Dependable Secur. Comput. 2022, 20, 2341–2347. [Google Scholar] [CrossRef]
  46. Keith, B.; Ivanov, V.; Kreuter, B.; Marcedone, A.; McMahan, H.B.; Patel, S.; Ramage, D.; Segal, A.; Seth, K. Practical secure aggregation for privacy-preserving Machine Learning. In Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, Dallas, TX, USA, 30 October–3 November 2017; pp. 1175–1191. [Google Scholar]
  47. Krishna, P.; Kakade, S.M.; Harchaoui, Z. Robust aggregation for Federated Learning. IEEE Trans. Signal Process. 2022, 70, 1142–1154. [Google Scholar]
  48. Endre, W.; Plastria, F. On the point for which the sum of the distances to n given points is minimum. Ann. Oper. Res. 2009, 167, 7–41. [Google Scholar]
  49. Beomyeol, J.; Ferdous, S.M.; Rahman, M.R.; Walid, A. Privacy-preserving decentralized aggregation for Federated Learning. In Proceedings of the IEEE INFOCOM 2021—IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS), Hoboken, NJ, USA, 20 May 2023; pp. 1–6. [Google Scholar]
  50. Stephen, B.; Parikh, N.; Chu, E.; Peleato, B.; Eckstein, J. Distributed optimization and statistical learning via the alternating direction method of multipliers. Found. Trends Mach. Learn. 2011, 3, 1–122. [Google Scholar]
  51. Zhao, L.; Jiang, J.; Feng, B.; Wang, Q.; Shen, C.; Li, Q. Sear: Secure and efficient aggregation for byzantine-robust Federated Learning. IEEE Trans. Dependable Secur. Comput. 2021, 19, 3329–3342. [Google Scholar] [CrossRef]
  52. Frank, M.; Alexandrovich, I.; Berenzon, A.; Rozas, C.V.; Shafi, H.; Shanbhogue, V.; Savagaonkar, U.R. Innovative instructions and software model for isolated execution. Hasp@isca 2013, 10, 414–421. [Google Scholar]
  53. Song, J.; Wang, W.; Gadekallu, T.R.; Cao, J.; Liu, Y. Eppda: An efficient privacy-preserving data aggregation Federated Learning scheme. IEEE Trans. Netw. Sci. Eng. 2022, 10, 3047–3057. [Google Scholar] [CrossRef]
  54. Benaloh, J.C. Secret sharing homomorphisms: Keeping shares of a secret secret. In Advances in Cryptology—CRYPTO’86: Proceedings; Springer: Berlin/Heidelberg, Germany, 2000; pp. 251–260. [Google Scholar]
  55. Roushdy, E.A.; Avestimehr, A.S. HeteroSAg: Secure aggregation with heterogeneous quantization in Federated Learning. IEEE Trans. Commun. 2022, 70, 2372–2386. [Google Scholar]
  56. Joan, D.; Rijmen, V. Rijndael: The advanced encryption standard. Dr. Dobb’s J. Softw. Tools Prof. Program. 2001, 26, 137–139. [Google Scholar]
  57. Alenezi, M.N.; Alabdulrazzaq, H.; Mohammad, N.Q. Symmetric encryption algorithms: Review and evaluation study. Int. J. Commun. Netw. Inf. Secur. 2020, 12, 256–272. [Google Scholar]
  58. Bhanot, R.; Hans, R. A review and comparative analysis of various encryption algorithms. Int. J. Secur. Its Appl. 2015, 9, 289–306. [Google Scholar] [CrossRef]
  59. Hearst, M.A.; Dumais, S.T.; Osuna, E.; Platt, J.; Scholkopf, B. Support vector machines. IEEE Intell. Syst. Their Appl. 1998, 13, 18–28. [Google Scholar] [CrossRef]
  60. LaValley, M.P. Logistic regression. Circulation 2008, 117, 2395–2399. [Google Scholar] [CrossRef]
  61. Hand, D.J.; Yu, K. Idiot’s Bayes—Not so stupid after all? Int. Stat. Rev. 2001, 69, 385–398. [Google Scholar]
  62. Ketkar, N.; Ketkar, N. Stochastic gradient descent. In Deep Learning with Python: A Hands-On Introduction; Springer: Berlin/Heidelberg, Germany, 2017; pp. 113–132. [Google Scholar]
  63. Murtagh, F. Multilayer perceptrons for classification and regression. Neurocomputing 1991, 2, 183–197. [Google Scholar] [CrossRef]
  64. Barkadehi, M.H.; Nilashi, M.; Ibrahim, O.; Fardi, A.Z.; Samad, S. Authentication systems: A literature review and classification. Telemat. Inform. 2018, 35, 1491–1511. [Google Scholar] [CrossRef]
  65. Bonnetain, X.; Naya-Plasencia, M.; Schrottenloher, A. Quantum security analysis of AES. IACR Trans. Symmetric Cryptol. 2019, 2019, 55–93. [Google Scholar] [CrossRef]
  66. Sklearn.Datasets.Make Classification. Scikit-Learn. Available online: https://scikit-learn.org/stable/modules/generated/sklearn.datasets.make_classification.html (accessed on 15 February 2023).
  67. Smart Health for Assessing the Risk of Events via ECG Database v1.0.0. Available online: https://physionet.org/content/shareedb/1.0.0/ (accessed on 1 March 2023).
  68. Dataset Surgical Binary Classification. Dataset Surgical Binary Classification—Kaggle. Available online: https://www.kaggle.com/datasets/omnamahshivai/surgical-dataset-binary-classification (accessed on 15 March 2023).
  69. Lynch, D.; Suriya, M. PE-DeepNet: A deep neural network model for pulmonary embolism detection. Int. J. Intell. Netw. 2022, 3, 176–180. [Google Scholar] [CrossRef]
  70. Moshawrab, M.; Adda, M.; Bouzouane, A.; Ibrahim, H.; Raad, A. Cardiovascular events prediction using artificial intelligence models and heart rate variability. Procedia Comput. Sci. 2022, 203, 231–238. [Google Scholar] [CrossRef]
  71. Moshawrab, M.; Adda, M.; Bouzouane, A.; Ibrahim, H.; Raad, A. Predicting Cardiovascular Events with Machine Learning Models and Heart Rate Variability. Int. J. Ubiquitous Syst. Pervasive Netw. 2023, 18, 49–59. [Google Scholar]
  72. Jamil, B.; Ijaz, H.; Shojafar, M.; Munir, K.; Buyya, R. Resource allocation and task scheduling in fog computing and internet of everything environments: A taxonomy, review, and future directions. ACM Comput. Surv. 2022, 54, 1–38. [Google Scholar] [CrossRef]
  73. Feng, Y.; Chen, J.; Xie, J.; Zhang, T.; Lv, H.; Pan, T. Meta-learning as a promising approach for few-shot cross-domain fault diagnosis: Algorithms, applications, and prospects. Knowl. Based Syst. 2022, 235, 107646. [Google Scholar] [CrossRef]
Figure 1. Federated learning technical architecture.
Figure 2. Initial encryption key generation mechanism.
Figure 3. PolyFLAM and PolyFLAP followed workflow.
Figure 4. PolyFLAM and PolyFLAP threads and functions.
Table 1. State of the art of secured FL aggregation algorithms.
Ref# | Mechanism
[46] | Secure vector summing strategy
[47] | Using geometric median estimated using a Weiszfeld-type algorithm
[49] | Refined form of the Alternating Direction Method of Multipliers (ADMM)
[51] | Hardware-based trusted execution environment instead of complex cryptography
[53] | Homomorphisms of the secret exchange
[55] | Masking each user’s model update
Table 2. Parameters generated by each model on local training.
Model | Parameter | Description
Support vector machines | Support vectors | Data points that significantly influence the determination of the separating hyperplane
Support vector machines | Coefficients | Weights assigned to features, contributing to the hyperplane’s orientation
Support vector machines | Intercept | Also known as the bias term, it shifts the hyperplane’s position, aiding in better classification
Logistic regression | Coefficients | Weights that determine the influence of individual features on the log-odds of the predicted outcome
Logistic regression | Intercept | Bias term that adjusts the threshold for classifying instances
Gaussian naïve Bayes | Class priors | Represent the prior probabilities of different classes in the training data
Gaussian naïve Bayes | Theta | Mean values of features for each class, used in the Gaussian probability density function
Gaussian naïve Bayes | Sigma | Variance of features for each class, also utilized in Gaussian probability calculations
SGD classifier | Coefficients | Similar to other models, these weights influence the classification decision
SGD classifier | Intercept | A bias term that adjusts the decision threshold
Multi-layer perceptron | Coefficients | Regulate the connections between neurons in the neural network layers
Multi-layer perceptron | Intercept | Similar to bias terms in other models, it offsets the overall computation
Table 3. Encryption keys’ polymorphism in PolyFLAM.
- | Server | Client 1 | Client 2
Initial key | depending on the client | btkzo1PLJsQjVVRxr0u7mytmup9proQ3 | PmBlfl3k4k7rgoDm1etNWJ6IsWyKLezS
Randomly selected 5 keys | wpmDQnYv8ncZhvNKaeXUvtFlZ9pcuM2p | 3PwImZIqZT8o2DQVBfKnpile6B7nwGcp | YDRf1hXOq5POw311LflEBl3zcCFak41t
 | uHAgHdMLG9cqPlvqMkHItBwkWJTFzMZL | Bi7a1kZHKJbJdVgA1WcTEoJILoEPCj4j | 6sqKWRZk9coIYXhH6uogaBCI8C8TRGNd
 | Wlhty1PtLy86wxH4lTxhjFTbt7dhHdT6 | YZDBazmbbBCGXox8KXQzepJH3N2sUOkG | QdjTzsKiH4EwOp6R3CZ8UC2U2l7r2tKG
 | qkgC6eIwiyyAq0Hfv4ajOqpszeuYxUbu | srmBy6rUmfH23Qn8GEPhTM9egvMBSd8S | Q318ez6o1n2YZ1AKM1uTDaPPDDvjqQuj
 | C0KFcCE18VeFkho8qwPzKu6DA6hoUZBY | qxr9Qz3sFIAYGWL69CwJtBsnTdcetaqn | ZEwkF5PSvAxCP6LVMHWEqegV1tWHgO4r
Table 4. PolyFLAM and PolyFLAP communication cost per message, model, and dataset.
Entity | Message/Model | Simulated Dataset | SHAREEDB | Surgical-Binary Dataset
Sent (Server) | “Encrypted ToKs” | 480 | 480 | 480
Sent (Server) | PolyFLAM SVM | 88,060 | 10,481 | 75,604
Sent (Server) | PolyFLAM LR | 864 | 912 | 896
Sent (Server) | PolyFLAM NB | 1498 | 1786 | 1690
Sent (Server) | PolyFLAM SGD | 954 | 1002 | 986
Sent (Server) | PolyFLAM MLP | 34,697 | 38,748 | 39,270
Sent (Server) | PolyFLAP SVM | 89,294 | 8853 | 72,022
Sent (Server) | PolyFLAP LR | 470 | 510 | 502
Sent (Server) | PolyFLAP NB | 1101 | 1293 | 1229
Sent (Server) | PolyFLAP SGD | 560 | 608 | 592
Sent (Server) | PolyFLAP MLP | 10,173 | 11,709 | 11,197
Sent (Server) | “Hibernate” | 139 | 139 | 139
Sent (Server) | “Disconnect” | 136 | 136 | 136
Sent (Server) | “Connect” | 83 | 83 | 83
Sent (Server) | “Ready” | 91 | 91 | 91
Sent (Client) | PolyFLAM SVM | 94,279 | 11,006 | 80,119
Sent (Client) | PolyFLAM LR | 926 | 974 | 990
Sent (Client) | PolyFLAM NB | 1678 | 1966 | 1902
Sent (Client) | PolyFLAM SGD | 1166 | 1214 | 1230
Sent (Client) | PolyFLAM MLP | 37,758 | 42,350 | 39,582
Sent (Client) | PolyFLAP SVM | 89,293 | 8782 | 71,975
Sent (Client) | PolyFLAP LR | 462 | 510 | 494
Sent (Client) | PolyFLAP NB | 1054 | 1246 | 1214
Sent (Client) | PolyFLAP SGD | 510 | 558 | 574
Sent (Client) | PolyFLAP MLP | 10,174 | 11,710 | 11,198
Sent (Client) | “Disconnected” | 107 | 107 | 107
Table 5. PolyFLAM and PolyFLAP communication cost aggregation and reduction ratio.
Framework | Direction | Model | Simulated Dataset | SHAREEDB | Surgical-Binary Dataset
PolyFLAM | Sent (Server) | SVM | 88,815 | 11,236 | 76,359
PolyFLAM | Sent (Server) | LR | 1619 | 1667 | 1651
PolyFLAM | Sent (Server) | NB | 2253 | 2541 | 2445
PolyFLAM | Sent (Server) | SGD | 1709 | 1757 | 1741
PolyFLAM | Sent (Server) | MLP | 35,452 | 39,503 | 40,025
PolyFLAM | Sent (Client) | SVM | 94,560 | 11,287 | 80,400
PolyFLAM | Sent (Client) | LR | 1207 | 1255 | 1271
PolyFLAM | Sent (Client) | NB | 1959 | 2247 | 2183
PolyFLAM | Sent (Client) | SGD | 1447 | 1495 | 1511
PolyFLAM | Sent (Client) | MLP | 38,039 | 42,631 | 39,863
PolyFLAP | Sent (Server) | SVM | 90,049 | 9608 | 72,777
PolyFLAP | Sent (Server) | LR | 1225 | 1265 | 1257
PolyFLAP | Sent (Server) | NB | 1856 | 2048 | 1984
PolyFLAP | Sent (Server) | SGD | 1315 | 1363 | 1347
PolyFLAP | Sent (Server) | MLP | 10,928 | 12,464 | 11,952
PolyFLAP | Sent (Client) | SVM | 89,574 | 9063 | 72,256
PolyFLAP | Sent (Client) | LR | 743 | 791 | 775
PolyFLAP | Sent (Client) | NB | 1335 | 1527 | 1495
PolyFLAP | Sent (Client) | SGD | 791 | 839 | 855
PolyFLAP | Sent (Client) | MLP | 10,455 | 11,991 | 11,479
Cost Reduction | Sent (Server) | SVM | −1% | 14% | 5%
Cost Reduction | Sent (Server) | LR | 24% | 24% | 24%
Cost Reduction | Sent (Server) | NB | 18% | 19% | 19%
Cost Reduction | Sent (Server) | SGD | 23% | 22% | 23%
Cost Reduction | Sent (Server) | MLP | 69% | 68% | 70%
Cost Reduction | Sent (Client) | SVM | 5% | 20% | 10%
Cost Reduction | Sent (Client) | LR | 38% | 37% | 39%
Cost Reduction | Sent (Client) | NB | 32% | 32% | 32%
Cost Reduction | Sent (Client) | SGD | 45% | 44% | 43%
Cost Reduction | Sent (Client) | MLP | 73% | 72% | 71%
Table 6. PolyFLAM and PolyFLAP learning quality results.
FrameworkModelDatasetACPRREF1SPNPV
PolyFLAMSVMSimulated91.76%98.75%91.30%89.36%91.89%94.44%
SHAREEDB48.57%40.91%7.38%12.50%89.43%49.33%
Surgical-Binary66.48%40.40%95.24%56.74%57.86%97.59%
LRSimulated86.67%85.29%90.62%87.88%82.14%88.46%
SHAREEDB45.71%47.56%87.70%61.67%4.07%25.00%
Surgical-Binary72.53%46.15%66.67%54.55%74.45%87.18%
NBSimulated98.33%100.00%96.43%98.18%100.00%96.97%
SHAREEDB50.20%50.00%95.90%65.73%4.88%54.55%
Surgical-Binary68.13%39.47%71.43%50.85%67.14%88.68%
SGDSimulated90.00%88.89%88.89%88.89%90.91%90.91%
SHAREEDB49.39%42.86%4.92%8.82%93.50%49.78%
Surgical-Binary63.19%35.79%85.00%50.37%57.04%93.10%
MLPSimulated76.67%70.00%93.33%80.00%60.00%90.00%
SHAREEDB49.39%43.75%5.74%10.14%92.68%49.78%
Surgical-Binary64.84%20.83%10.00%13.51%85.61%71.52%
PolyFLAPSVMSimulated91.67%96.00%85.71%90.57%96.88%88.57%
SHAREEDB48.16%40.74%9.02%14.77%86.99%49.08%
Surgical-Binary42.31%31.21%84.62%45.60%25.38%80.49%
LRSimulated95.00%97.22%94.59%95.89%95.65%91.67%
SHAREEDB62.04%60.90%66.39%63.53%57.72%63.39%
Surgical-Binary42.86%25.00%86.84%38.82%31.25%90.00%
NBSimulated91.67%92.11%94.59%93.33%86.96%90.91%
SHAREEDB48.57%40.91%7.38%12.50%89.43%49.33%
Surgical-Binary74.73%0.00%0.00%0.00%100.00%74.73%
SGDSimulated91.67%93.10%90.00%91.53%93.33%90.32%
SHAREEDB48.16%33.33%4.10%7.30%91.87%49.13%
Surgical-Binary76.37%68.42%26.00%37.68%95.45%77.30%
MLPSimulated71.67%90.48%55.88%69.09%92.31%61.54%
SHAREEDB53.06%51.58%93.44%66.47%13.01%66.67%
Surgical-Binary28.57%27.27%96.00%42.48%3.03%66.67%
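The metric columns in Table 6 follow the standard confusion-matrix definitions for binary classification. A minimal sketch of those conventional formulas (the counts below are a toy example, not values from the paper):

```python
def learning_quality(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Standard binary-classification metrics from confusion-matrix counts."""
    pr = tp / (tp + fp) if tp + fp else 0.0   # precision (PR)
    re = tp / (tp + fn) if tp + fn else 0.0   # recall / sensitivity (RE)
    return {
        "AC": (tp + tn) / (tp + fp + tn + fn),            # accuracy
        "PR": pr,
        "RE": re,
        "F1": 2 * pr * re / (pr + re) if pr + re else 0.0,
        "SP": tn / (tn + fp) if tn + fp else 0.0,          # specificity
        "NPV": tn / (tn + fn) if tn + fn else 0.0,         # negative predictive value
    }

# Toy example: 8 true positives, 2 false positives, 7 true negatives, 3 false negatives
m = learning_quality(tp=8, fp=2, tn=7, fn=3)
print({k: round(v, 4) for k, v in m.items()})
```

The zero guards matter for rows such as PolyFLAP/NB on Surgical-Binary (PR = RE = F1 = 0%, SP = 100%), which corresponds to a model that never predicts the positive class.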
Table 7. Comparison of FL security approaches.

| Criteria | Proposed Frameworks | Homomorphic Encryption Only | SecureD-FL | SEAR | HeteroSAg |
|---|---|---|---|---|---|
| Encryption techniques | Polymorphic and homomorphic encryption | Homomorphic encryption | Homomorphic encryption | Trusted Execution Environment (TEE) | Homomorphic encryption |
| Unique encryption keys for parameters | Yes (polymorphic encryption) | No (single key) | Yes (homomorphic encryption) | Yes (TEE-based encryption) | No (single key) |
| Data privacy and access control | Strong data privacy and access control | Limited access control | Strong data privacy and access control | Strong data privacy and access control | Limited access control |
| Security against key compromises | Highly resilient (granular key usage) | Vulnerable to key compromise | Highly resilient (granular key usage) | Highly resilient (TEE-based) | Vulnerable to key compromise |
| Robustness against attacks | Multi-layered security approach | Limited security layers | Multi-layered security approach | Multi-layered security approach | Enhanced security layers |
| Communication efficiency | Efficient with enhanced security | Efficient but less granular | Efficient with enhanced security | Efficient with hardware-based TEE | Efficient with enhanced security |
| Byzantine attack resilience | Strong resilience | Limited resilience | Strong resilience | Strong resilience | Strong resilience |
| Inference attack resilience | High resilience | Limited resilience | High resilience | Limited | Moderate resilience |
| Bandwidth efficiency | Enhanced efficiency | Standard efficiency | Enhanced efficiency | Enhanced efficiency | Enhanced efficiency |

Share and Cite

Moshawrab, M.; Adda, M.; Bouzouane, A.; Ibrahim, H.; Raad, A. A Maneuver in the Trade-Off Space of Federated Learning Aggregation Frameworks Secured with Polymorphic Encryption: PolyFLAM and PolyFLAP Frameworks. Electronics 2024, 13, 3716. https://doi.org/10.3390/electronics13183716

