Byzantine-Robust Multimodal Federated Learning Framework for Intelligent Connected Vehicle
Abstract
1. Introduction
- Multimodal Data Integration. ICVs generate a diverse array of data types from various sensors [23,24]. Each sensor modality provides unique and complementary information. For example, cameras provide rich visual data that are critical for object recognition and scene understanding, while LiDAR provides precise depth information and 3D point clouds for accurate distance measurement and object localization [19]. Data from these different modalities are essential for the proper operation of ICVs. Effectively fusing these heterogeneous data sources while maintaining their privacy-preserving nature in an FL setup is a complex challenge [25]. Traditional centralized fusion techniques are not directly applicable, necessitating novel approaches that can operate on distributed, privacy-sensitive data.
- Byzantine Attacks. In a distributed learning environment like FL, the system is vulnerable to Byzantine attacks, where malicious participants or compromised vehicles may inject false or manipulated data or model updates [26,27,28]. These attacks can take many forms, such as data poisoning [26,28], where the adversary injects crafted malicious samples into local training data, or model poisoning [29,30,31], where the adversary sends malicious model updates to corrupt the global model. The consequences of such attacks in an ICV context could be severe, potentially leading to erroneous object detection or navigation decisions that compromise road safety [32,33]. Developing robust defense mechanisms that can detect and mitigate these attacks without compromising the efficiency of the federated learning process is crucial.
- Communication Constraints. The mobility of vehicles presents unique challenges to the FL process: for example, vehicles may experience periods of disconnection or weak signal strength, especially in rural or underground areas [7,9,34]. In addition, the network capacity available to a vehicle may fluctuate widely with location and congestion, and frequent high-bandwidth communication can strain the vehicle's power system, especially in electric vehicles [5]. Designing a communication-efficient federated learning protocol that can adapt to these dynamic conditions while ensuring timely and effective model updates is essential.
- (1) We develop a novel Byzantine-robust aggregation technique based on gradient compression, enhancing the resilience of federated learning against adversarial nodes.
- (2) We introduce an advanced cross-node multimodal alignment and fusion technique that efficiently combines data from diverse sensors to improve model performance in ICVs.
- (3) We implement top-k gradient compression to improve communication efficiency. This reduces the communication overhead between nodes and the central server, making the framework suitable for large-scale deployment.
- (4) We conduct extensive experiments on three public datasets, comparing the proposed framework against prior work to demonstrate its advantages. Our framework achieves a better cost–utility trade-off.
2. Related Work
2.1. Federated Learning in Vehicular Networks
2.2. Multimodal Learning for ICVs
2.3. Byzantine-Robust Federated Learning
2.4. Communication-Efficient Federated Learning
3. Problem Definition
3.1. System Model
3.2. Challenges and Constraints
- Byzantine Robustness. Ensuring robustness against Byzantine adversaries is a significant challenge. Malicious nodes can send faulty updates that can severely degrade the performance of the global model. Designing efficient and effective robust aggregation methods to mitigate these attacks while maintaining high model performance is complex.
- Communication Overhead. FL inherently involves substantial communication between nodes and the central server. The gradient compression technique helps reduce this overhead, but finding the optimal balance between compression rate and model accuracy is crucial. Excessive compression can lead to the loss of important information, while insufficient compression can cause excessive communication delays.
- Heterogeneous Data. Multimodal datasets from different vehicles may vary in quality, resolution, and format. Ensuring effective data fusion across these heterogeneous sources without losing critical information is a key constraint.
4. Our Approach
4.1. Cross-Node Multimodal Alignment and Fusion
4.2. Gradient Compression-Based Byzantine Aggregation
- Dimension-wise Sorting and Trimming. For each dimension d of the gradient vector, we collect the d-th elements of the compressed gradients from all N vehicles, i.e., $\{\tilde{g}_i^{(d)}\}_{i=1}^{N}$. Then, we sort the collected values in ascending order. After that, we trim the b largest and b smallest values, where b is the estimated number of Byzantine adversaries.
- Mean Calculation. First, we compute the mean of the values that remain after trimming in each dimension d, i.e., $\bar{g}^{(d)} = \frac{1}{N-2b}\sum_{i \in \mathcal{S}^{(d)}} \tilde{g}_i^{(d)}$, where $\mathcal{S}^{(d)}$ denotes the index set of the $N-2b$ untrimmed values. Then, we construct the aggregated gradient by applying this rule to each dimension d, i.e., $\bar{g} = [\bar{g}^{(1)}, \ldots, \bar{g}^{(D)}]$.
- Global Model Update. The server updates the global model using the robustly aggregated gradient, $w_{t+1} = w_t - \eta \bar{g}$, where $\eta$ is the learning rate and $w_t$ is the global model at round t. A minimal sketch of this aggregation rule is given below.
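To make the aggregation rule concrete, the following is a minimal NumPy sketch of the dimension-wise trimmed mean described above, applied to gradients that have already been decompressed to dense vectors. The function name `trimmed_mean_aggregate`, the dense (N, D) layout, and the example values are illustrative assumptions, not the authors' released implementation.

```python
import numpy as np

def trimmed_mean_aggregate(gradients: np.ndarray, b: int) -> np.ndarray:
    """Dimension-wise trimmed mean over N stacked gradient vectors.

    gradients: array of shape (N, D), one (decompressed) gradient per vehicle.
    b: estimated number of Byzantine vehicles; 2*b values are discarded per dimension.
    """
    n = gradients.shape[0]
    if n <= 2 * b:
        raise ValueError("Need more than 2*b gradients to trim safely.")
    # Sort each dimension independently across vehicles.
    sorted_grads = np.sort(gradients, axis=0)
    # Discard the b smallest and b largest values in every dimension.
    trimmed = sorted_grads[b:n - b, :]
    # The mean of the remaining values forms the aggregated gradient.
    return trimmed.mean(axis=0)

# Example: 6 vehicles, 5-dimensional gradients, 1 Byzantine sender.
rng = np.random.default_rng(0)
grads = rng.normal(size=(6, 5))
grads[0] *= 100.0            # a crude model-poisoning update
print(trimmed_mean_aggregate(grads, b=1))
```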
4.3. Time Complexity Analysis
- Local Gradient Calculation. Each vehicle computes the local gradient based on its local dataset. Assume the dataset has m samples and the model has d parameters. The time complexity for gradient computation is O(md), because each parameter gradient is typically calculated as a sum over the dataset, involving m operations per parameter.
- Top-k Gradient Compression. After computing the gradient, each vehicle compresses it by retaining the top-k elements: O(d) for magnitude calculation, O(d log d) for top-k selection (e.g., via sorting), and O(d) for binary mask creation and gradient compression. The overall time complexity for top-k gradient compression is therefore O(d log d); see the compression sketch after this list.
- Transmission. The transmission time depends on the communication bandwidth and is not typically counted in the time complexity analysis. However, since only k elements are transmitted, the communication cost is O(k) per vehicle per round.
- Robust Aggregation at Server. The server aggregates the compressed gradients using the trimmed mean method: O(Nd) for dimension-wise collection, O(dN log N) for sorting each dimension across the N vehicles, and O(Nd) for trimming and mean calculation. The overall time complexity for robust aggregation is therefore O(dN log N).
- Global Model Update. The server updates the global model using the aggregated gradient. The time complexity for this step is O(d).
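As a concrete illustration of the compression step analyzed above, here is a small NumPy sketch of top-k sparsification by magnitude. The function names `top_k_compress` and `decompress` and the (indices, values) return format are assumptions for exposition; note that `np.argpartition` performs the selection in O(d) expected time, which is cheaper than the O(d log d) sorting bound used in the analysis.

```python
import numpy as np

def top_k_compress(gradient: np.ndarray, k: int):
    """Keep only the k largest-magnitude entries of a gradient vector.

    Returns (indices, values), so only about 2*k numbers need to be transmitted
    instead of the full d-dimensional vector.
    """
    # O(d) magnitude computation.
    magnitudes = np.abs(gradient)
    # Partial selection of the k largest magnitudes (expected O(d)).
    idx = np.argpartition(magnitudes, -k)[-k:]
    return idx, gradient[idx]

def decompress(indices: np.ndarray, values: np.ndarray, d: int) -> np.ndarray:
    """Rebuild a dense d-dimensional gradient, zero outside the retained entries."""
    dense = np.zeros(d)
    dense[indices] = values
    return dense

# Example: keep 3 of 10 coordinates.
g = np.array([0.1, -2.0, 0.3, 5.0, -0.2, 0.05, 1.5, -0.7, 0.0, 4.2])
idx, vals = top_k_compress(g, k=3)
print(decompress(idx, vals, d=g.size))
```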
Algorithm 1: Byzantine-robust Multimodal Federated Learning Algorithm.
Input: local models and local multimodal datasets. Output: global model. The server initializes the generator and global model and sends them to each vehicle;
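To show how the steps of Algorithm 1 fit together, the following is a compact, self-contained Python sketch of one possible training loop: N simulated vehicles compute local gradients on synthetic linear-regression data, apply top-k compression, one vehicle behaves as a Byzantine sender, and the server performs the dimension-wise trimmed-mean update. All names, constants, and the synthetic data are illustrative assumptions, not the authors' code or the multimodal pipeline itself.

```python
import numpy as np

rng = np.random.default_rng(1)
N, D, K, B = 8, 20, 5, 1          # vehicles, model dimension, top-k, assumed Byzantine count
ROUNDS, LR = 50, 0.1              # communication rounds and server learning rate

# Synthetic local datasets: each vehicle holds (X_i, y_i) drawn from a shared linear model.
w_true = rng.normal(size=D)
local_data = []
for _ in range(N):
    X = rng.normal(size=(100, D))
    y = X @ w_true + 0.1 * rng.normal(size=100)
    local_data.append((X, y))

w = np.zeros(D)                                        # global model
for _ in range(ROUNDS):
    compressed = []
    for i, (X, y) in enumerate(local_data):
        grad = 2.0 * X.T @ (X @ w - y) / len(y)        # local least-squares gradient
        if i < B:                                       # simulated model-poisoning vehicle
            grad = -100.0 * grad
        idx = np.argpartition(np.abs(grad), -K)[-K:]   # top-k compression by magnitude
        dense = np.zeros(D)
        dense[idx] = grad[idx]
        compressed.append(dense)
    stacked = np.sort(np.stack(compressed), axis=0)    # sort every dimension across vehicles
    agg = stacked[B:N - B].mean(axis=0)                # dimension-wise trimmed mean
    w -= LR * agg                                      # global model update
print(np.linalg.norm(w - w_true))  # distance to the true model; should shrink despite the attacker
```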
5. Experiments
5.1. Experiment Setup
- FedAvg [17]. FedAvg aggregates local models from all vehicles by averaging their parameters but does not account for Byzantine robustness (a minimal averaging sketch is given after this list).
- Byzantine-resilient SGD (BrSGD) [54]. This approach focuses on detecting and excluding malicious updates during training.
- FLTrust [55]. This approach assigns each client update a trust score based on its similarity to a server-side reference update and uses these scores to weight the aggregation.
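For contrast with the robust aggregation rule above, plain FedAvg simply averages the locally trained parameters, weighted by local dataset size, with no filtering of malicious updates. The sketch below (hypothetical function name `fedavg`) illustrates why a single poisoned model can dominate the result.

```python
import numpy as np

def fedavg(local_models: list[np.ndarray], num_samples: list[int]) -> np.ndarray:
    """Weighted parameter average used by FedAvg; no Byzantine filtering is applied,
    so a single malicious model can shift the result arbitrarily."""
    weights = np.asarray(num_samples, dtype=float)
    weights /= weights.sum()
    return np.average(np.stack(local_models), axis=0, weights=weights)

# Example: three vehicles, one of them poisoned.
models = [np.ones(4), np.ones(4), 1000.0 * np.ones(4)]
print(fedavg(models, num_samples=[100, 100, 100]))   # pulled far from the honest models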
5.2. Numerical Analysis
6. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Liu, J.; Liu, J. Intelligent and connected vehicles: Current situation, future directions, and challenges. IEEE Commun. Stand. Mag. 2018, 2, 59–65. [Google Scholar] [CrossRef]
- Han, M.; Wan, A.; Zhang, F.; Ma, S. An attribute-isolated secure communication architecture for intelligent connected vehicles. IEEE Trans. Intell. Veh. 2020, 5, 545–555. [Google Scholar] [CrossRef]
- Uhlemann, E. Introducing connected vehicles [connected vehicles]. IEEE Veh. Technol. Mag. 2015, 10, 23–31. [Google Scholar] [CrossRef]
- Lu, N.; Cheng, N.; Zhang, N.; Shen, X.; Mark, J.W. Connected vehicles: Solutions and challenges. IEEE Internet Things J. 2014, 1, 289–299. [Google Scholar] [CrossRef]
- Kim, I.; Martins, R.J.; Jang, J.; Badloe, T.; Khadir, S.; Jung, H.-Y.; Kim, H.; Kim, J.; Genevet, P.; Rho, J. Nanophotonics for light detection and ranging technology. Nat. Nanotechnol. 2021, 16, 508–524. [Google Scholar] [CrossRef]
- Mead, J.B.; Pazmany, A.L.; Sekelsky, S.M.; McIntosh, R.E. Millimeter-wave radars for remotely sensing clouds and precipitation. Proc. IEEE 1994, 82, 1891–1906. [Google Scholar] [CrossRef]
- Duan, W.; Gu, J.; Wen, M.; Zhang, G.; Ji, Y.; Mumtaz, S. Emerging technologies for 5G-IoV networks: Applications, trends and opportunities. IEEE Netw. 2020, 34, 283–289. [Google Scholar] [CrossRef]
- Noor-A-Rahim, M.; Liu, Z.; Lee, H.; Khyam, M.O.; He, J.; Pesch, D.; Moessner, K.; Saad, W.; Poor, H.V. 6G for vehicle-to-everything (V2X) communications: Enabling technologies, challenges, and opportunities. Proc. IEEE 2022, 110, 712–734. [Google Scholar] [CrossRef]
- Chen, S.; Hu, J.; Shi, Y.; Peng, Y.; Fang, J.; Zhao, R.; Zhao, L. Vehicle-to-everything (V2X) services supported by LTE-based systems and 5G. IEEE Commun. Stand. Mag. 2017, 1, 70–76. [Google Scholar] [CrossRef]
- Lu, R.; Zhang, L.; Ni, J.; Fang, Y. 5G vehicle-to-everything services: Gearing up for security and privacy. Proc. IEEE 2019, 108, 373–389. [Google Scholar] [CrossRef]
- Campolo, C.; Molinaro, A.; Iera, A.; Menichella, F. 5G network slicing for vehicle-to-everything services. IEEE Wirel. Commun. 2017, 24, 38–45. [Google Scholar] [CrossRef]
- Zavvos, E.; Gerding, E.H.; Yazdanpanah, V.; Maple, C.; Stein, S.; Schraefel, M.C. Privacy and trust in the internet of vehicles. IEEE Trans. Intell. Transp. Syst. 2021, 23, 10126–10141. [Google Scholar] [CrossRef]
- Liu, Y.; Wang, Y.; Chang, G. Efficient privacy-preserving dual authentication and key agreement scheme for secure V2V communications in an IoV paradigm. IEEE Trans. Intell. Transp. Syst. 2017, 18, 2740–2749. [Google Scholar] [CrossRef]
- Liu, Y.; James, J.; Kang, J.; Niyato, D.; Zhang, S. Privacy-preserving traffic flow prediction: A federated learning approach. IEEE Internet Things J. 2020, 7, 7751–7763. [Google Scholar] [CrossRef]
- Mei, Q.; Xiong, H.; Chen, J.; Yang, M.; Kumari, S.; Khan, M.K. Efficient certificateless aggregate signature with conditional privacy preservation in IoV. IEEE Syst. J. 2020, 15, 245–256. [Google Scholar] [CrossRef]
- Bao, Y.; Qiu, W.; Cheng, X.; Sun, J. Fine-grained data sharing with enhanced privacy protection and dynamic users group service for the IoV. IEEE Trans. Intell. Transp. Syst. 2022, 24, 13035–13049. [Google Scholar] [CrossRef]
- McMahan, B.; Moore, E.; Ramage, D.; Hampson, S.; Arcas, B.A.Y. Communication-efficient learning of deep networks from decentralized data. In Proceedings of the 20th International Conference on Artificial Intelligence and Statistics (PMLR), Lauderdale, FL, USA, 20–22 April 2017; pp. 1273–1282. [Google Scholar]
- Liu, Y.; Garg, S.; Nie, J.; Zhang, Y.; Xiong, Z.; Kang, J.; Hossain, M.S. Deep anomaly detection for time-series data in industrial IoT: A communication-efficient on-device federated learning approach. IEEE Internet Things J. 2020, 8, 6348–6358. [Google Scholar] [CrossRef]
- Liu, Y.; Huang, A.; Luo, Y.; Huang, H.; Liu, Y.; Chen, Y.; Feng, L.; Chen, T.; Yu, H.; Yang, A.Q. Fedvision: An online visual object detection platform powered by federated learning. In Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA, 7–12 February 2020; Volume 34, pp. 13172–13179. [Google Scholar]
- Li, Y.; Tao, X.; Zhang, X.; Liu, J.; Xu, J. Privacy-preserved federated learning for autonomous driving. IEEE Trans. Intell. Transp. Syst. 2021, 23, 8423–8434. [Google Scholar] [CrossRef]
- Liu, B.; Wang, L.; Liu, M. Lifelong federated reinforcement learning: A learning architecture for navigation in cloud robotic systems. IEEE Robot. Autom. Lett. 2019, 4, 4555–4562. [Google Scholar] [CrossRef]
- Liu, Y.; Yuan, X.; Xiong, Z.; Kang, J.; Wang, X.; Niyato, D. Federated learning for 6G communications: Challenges, methods, and future directions. China Commun. 2020, 17, 105–118. [Google Scholar] [CrossRef]
- Caesar, H.; Bankiti, V.; Lang, A.H.; Vora, S.; Liong, V.E.; Xu, Q.; Krishnan, A.; Pan, Y.; Baldan, G.; Beijbom, O. nuScenes: A multimodal dataset for autonomous driving. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 11621–11631. [Google Scholar]
- Zhu, Y.; Ye, Y.; Liu, Y.; James, J. Cross-area travel time uncertainty estimation from trajectory data: A federated learning approach. IEEE Trans. Intell. Transp. Syst. 2022, 23, 24966–24978. [Google Scholar] [CrossRef]
- Du, Z.; Wu, C.; Yoshinaga, T.; Yau, K.-L.A.; Ji, Y.; Li, J. Federated learning for vehicular internet of things: Recent advances and open issues. IEEE Open J. Comput. Soc. 2020, 1, 45–61. [Google Scholar] [CrossRef] [PubMed]
- Fung, C.; Yoon, C.J.; Beschastnikh, I. The limitations of federated learning in sybil settings. In Proceedings of the 23rd International Symposium on Research in Attacks, Intrusions and Defenses (RAID 2020), San Sebastian, Spain, 14–15 October 2020; pp. 301–316. [Google Scholar]
- Tolpegin, V.; Truex, S.; Gursoy, M.E.; Liu, L. Data poisoning attacks against federated learning systems. In Proceedings of the Computer Security—ESORICs 2020: 25th European Symposium on Research in Computer Security, Proceedings, Part i 25, Guildford, UK, 14–18 September 2020; Springer: Berlin/Heidelberg, Germany, 2020; pp. 480–501. [Google Scholar]
- Liu, Y.; Wang, C.; Yuan, X. BadSampler: Harnessing the Power of Catastrophic Forgetting to Poison Byzantine-robust Federated Learning. In Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD’24), Barcelona, Spain, 25–29 August 2024. [Google Scholar]
- Ma, Z.; Ma, J.; Miao, Y.; Li, Y.; Deng, R.H. Shieldfl: Mitigating model poisoning attacks in privacy-preserving federated learning. IEEE Trans. Inf. Forensics Secur. 2022, 17, 1639–1654. [Google Scholar] [CrossRef]
- Taheri, R.; Shojafar, M.; Alazab, M.; Tafazolli, R. FED-IIoT: A robust federated malware detection architecture in industrial IoT. IEEE Trans. Ind. Inform. 2020, 17, 8442–8452. [Google Scholar] [CrossRef]
- Nabavirazavi, S.; Taheri, R.; Shojafar, M.; Iyengar, S.S. Impact of aggregation function randomization against model poisoning in federated learning. In Proceedings of the 22nd IEEE International Conference on Trust, Security and Privacy in Computing and Communications, TrustCom 2023, Exeter, UK, 1–3 November 2023; pp. 165–172. [Google Scholar]
- Cui, Y.; Liang, Y.; Luo, Q.; Shu, Z.; Huang, T. Resilient Consensus Control of Heterogeneous Multi-UAV Systems with Leader of Unknown Input Against Byzantine Attacks. IEEE Trans. Autom. Sci. Eng. 2024, 1–12. [Google Scholar] [CrossRef]
- Cui, Y.; Jia, Y.; Li, Y.; Shen, J.; Huang, T.; Gong, X. Byzantine resilient joint localization and target tracking of multi-vehicle systems. IEEE Trans. Intell. Veh. 2023, 8, 2899–2913. [Google Scholar] [CrossRef]
- Konečnỳ, J.; McMahan, H.B.; Ramage, D.; Richtárik, P. Federated optimization: Distributed machine learning for on-device intelligence. arXiv 2016, arXiv:1610.02527. [Google Scholar]
- Samarakoon, S.; Bennis, M.; Saad, W.; Debbah, M. Distributed federated learning for ultra-reliable low-latency vehicular communications. IEEE Trans. Commun. 2019, 68, 1146–1159. [Google Scholar] [CrossRef]
- Posner, J.; Tseng, L.; Aloqaily, M.; Jararweh, Y. Federated learning in vehicular networks: Opportunities and solutions. IEEE Netw. 2021, 35, 152–159. [Google Scholar] [CrossRef]
- Lu, Y.; Huang, X.; Zhang, K.; Maharjan, S.; Zhang, Y. Blockchain empowered asynchronous federated learning for secure data sharing in internet of vehicles. IEEE Trans. Veh. Technol. 2020, 69, 4298–4311. [Google Scholar] [CrossRef]
- Salehi, B.; Reus-Muns, G.; Roy, D.; Wang, Z.; Jian, T.; Dy, J.; Ioannidis, S.; Chowdhury, K. Deep learning on multimodal sensor data at the wireless edge for vehicular network. IEEE Trans. Veh. Technol. 2022, 71, 7639–7655. [Google Scholar] [CrossRef]
- Feng, D.; Haase-Schütz, C.; Rosenbaum, L.; Hertlein, H.; Glaeser, C.; Timm, F.; Wiesbeck, W.; Dietmayer, K. Deep multi-modal object detection and semantic segmentation for autonomous driving: Datasets, methods, and challenges. IEEE Trans. Intell. Transp. Syst. 2020, 22, 1341–1360. [Google Scholar] [CrossRef]
- Liu, X.; Gao, K.; Liu, B.; Pan, C.; Liang, K.; Yan, L.; Ma, J.; He, F.; Zhang, S.; Pan, S.; et al. Advances in deep learning-based medical image analysis. Health Data Sci. 2021, 2021, 8786793. [Google Scholar] [CrossRef] [PubMed]
- Rabe, M.; Milz, S.; Mader, P. Development methodologies for safety critical machine learning applications in the automotive domain: A survey. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 19–25 June 2021; pp. 129–141. [Google Scholar]
- Yin, D.; Chen, Y.; Kannan, R.; Bartlett, P. Byzantine-robust distributed learning: Towards optimal statistical rates. In Proceedings of the International Conference on Machine Learning (PMLR), Stockholm, Sweden, 10–15 July 2018; pp. 5650–5659. [Google Scholar]
- Blanchard, P.; Mhamdi, E.M.E.; Guerraoui, R.; Stainer, J. Machine learning with adversaries: Byzantine tolerant gradient descent. Adv. Neural Inf. Process. Syst. 2017, 30, 118–128. [Google Scholar]
- Hayat, S.; Yanmaz, E.; Muzaffar, R. Survey on unmanned aerial vehicle networks for civil applications: A communications viewpoint. IEEE Commun. Surv. Tutorials 2016, 18, 2624–2661. [Google Scholar] [CrossRef]
- Ye, D.; Yu, R.; Pan, M.; Han, Z. Federated learning in vehicular edge computing: A selective model aggregation approach. IEEE Access 2020, 8, 23920–23935. [Google Scholar] [CrossRef]
- Geiger, A.; Lenz, P.; Urtasun, R. Are we ready for autonomous driving? The KITTI vision benchmark suite. In Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition, Providence, RI, USA, 16–21 June 2012; pp. 3354–3361. [Google Scholar]
- Hwang, S.; Park, J.; Kim, N.; Choi, Y.; Kweon, I.S. Multispectral pedestrian detection: Benchmark dataset and baseline. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 1037–1045. [Google Scholar]
- Lang, A.H.; Vora, S.; Caesar, H.; Zhou, L.; Yang, J.; Beijbom, O. PointPillars: Fast encoders for object detection from point clouds. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 16–17 June 2019; pp. 12697–12705. [Google Scholar]
- Sindagi, V.A.; Zhou, Y.; Tuzel, O. MVX-Net: Multimodal VoxelNet for 3D object detection. In Proceedings of the 2019 International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, 20–24 May 2019; pp. 7276–7282. [Google Scholar]
- Ku, J.; Mozifian, M.; Lee, J.; Harakeh, A.; Waslander, S.L. Joint 3D proposal generation and object detection from view aggregation. In Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; pp. 1–8. [Google Scholar]
- Bochkovskiy, A.; Wang, C.-Y.; Liao, H.-Y.M. YOLOv4: Optimal speed and accuracy of object detection. arXiv 2020, arXiv:2004.10934. [Google Scholar]
- Colosimo, F.; Rango, F.D. Median-krum: A joint distance-statistical based byzantine-robust algorithm in federated learning. In Proceedings of the Int’l ACM Symposium on Mobility Management and Wireless Access, Montreal, QC, Canada, 30 October–3 November 2023; pp. 61–68. [Google Scholar]
- Wang, T.; Zheng, Z.; Lin, F. Federated Learning Framework Based on Trimmed Mean Aggregation Rules. 2022. Available online: https://www.ssrn.com/abstract=4181353 (accessed on 28 January 2022).
- Data, D.; Diggavi, S. Byzantine-resilient sgd in high dimensions on heterogeneous data. In Proceedings of the 2021 IEEE International Symposium on Information Theory (ISIT), Melbourne, Australia, 12–20 July 2021; pp. 2310–2315. [Google Scholar]
- Cao, X.; Fang, M.; Liu, J.; Gong, N.Z. FLTrust: Byzantine-robust Federated Learning via Trust Bootstrapping. In Proceedings of the ISOC Network and Distributed System Security Symposium (NDSS), Online, 21–25 February 2021. [Google Scholar]
Method | KITTI | nuScenes | KAIST |
---|---|---|---|
FedAvg | 65.4 ± 0.3 | 58.7 ± 0.2 | 67.8 ± 0.2 |
Krum | 67.7 ± 0.1 | 62.4 ± 0.2 | 71.2 ± 0.1 |
Multi-Krum | 68.9 ± 0.1 | 65.7 ± 0.3 | 74.1 ± 0.2 |
Trimmed Mean | 66.8 ± 0.2 | 64.1 ± 0.2 | 72.6 ± 0.2 |
Mean | 65.8 ± 0.1 | 59.7 ± 0.4 | 70.1 ± 0.2 |
BrSGD | 71.2 ± 0.2 | 68.3 ± 0.3 | 75.4 ± 0.2 |
Ours | 73.2 ± 0.2 | 72.5 ± 0.2 | 77.9 ± 0.2 |
Method | ||||
---|---|---|---|---|
FedAvg | 65.4 ± 0.1 | 63.8 ± 0.2 | 58.8 ± 0.3 | 52.4 ± 0.2 |
Krum | 67.7 ± 0.1 | 65.6 ± 0.1 | 60.1 ± 0.2 | 54.8 ± 0.3 |
Multi-Krum | 68.9 ± 0.2 | 67.1 ± 0.3 | 62.5 ± 0.2 | 58.7 ± 0.1 |
Trimmed Mean | 66.8 ± 0.2 | 65.6 ± 0.4 | 63.7 ± 0.3 | 59.8 ± 0.1 |
Mean | 65.8 ± 0.2 | 62.7 ± 0.1 | 58.9 ± 0.2 | 55.6 ± 0.1 |
BrSGD | 71.2 ± 0.3 | 68.7 ± 0.2 | 66.5 ± 0.2 | 62.7 ± 0.1 |
FLTrust | 72.2 ± 0.1 | 71.4 ± 0.2 | 68.7 ± 0.3 | 65.6 ± 0.1 |
Ours | 73.2 ± 0.1 | 71.9 ± 0.2 | 69.7 ± 0.1 | 68.4 ± 0.2 |
Method | ||||
---|---|---|---|---|
FedAvg | 58.6 ± 0.2 | 54.2 ± 0.3 | 48.6 ± 0.3 | 42.7 ± 0.2 |
Krum | 65.2 ± 0.2 | 63.2 ± 0.2 | 61.8 ± 0.3 | 56.7 ± 0.2 |
Multi-Krum | 67.7 ± 0.2 | 65.4 ± 0.2 | 61.6 ± 0.2 | 56.5 ± 0.1 |
Trimmed Mean | 64.6 ± 0.2 | 62.4 ± 0.3 | 58.7 ± 0.2 | 54.4 ± 0.1 |
Mean | 62.7 ± 0.2 | 57.7 ± 0.1 | 55.4 ± 0.2 | 53.1 ± 0.1 |
BrSGD | 68.7 ± 0.2 | 67.1 ± 0.2 | 64.8 ± 0.2 | 60.7 ± 0.1 |
FLTrust | 71.7 ± 0.1 | 66.7 ± 0.2 | 65.4 ± 0.2 | 61.8 ± 0.1 |
Ours | 74.2 ± 0.1 | 73.1 ± 0.2 | 70.8 ± 0.1 | 69.3 ± 0.2 |
Method | ||||
---|---|---|---|---|
FedAvg | 4896 MB | 5432 MB | 5831 MB | 6123 MB |
Krum | 4984 MB | 5641 MB | 6023 MB | 6457 MB |
Multi-Krum | 5014 MB | 5425 MB | 5987 MB | 6398 MB |
Trimmed Mean | 5021 MB | 5531 MB | 6015 MB | 6157 MB |
Mean | 4974 MB | 5324 MB | 6074 MB | 6248 MB |
BrSGD | 3697 MB | 4125 MB | 4897 MB | 5324 MB |
Ours | 49.64 MB | 53.24 MB | 57.41 MB | 60.23 MB |
Method | ||||
---|---|---|---|---|
FedAvg | 4896 MB | 5432 MB | 5831 MB | 6123 MB |
Krum | 4984 MB | 5641 MB | 6023 MB | 6457 MB |
Multi-Krum | 5014 MB | 5425 MB | 5987 MB | 6398 MB |
Trimmed Mean | 5021 MB | 5531 MB | 6015 MB | 6157 MB |
Mean | 4974 MB | 5324 MB | 6074 MB | 6248 MB |
BrSGD | 3697 MB | 4125 MB | 4897 MB | 5324 MB |
Ours | 49.64 MB | 48.23 MB | 46.65 MB | 41.25 MB |
Method | ||||
---|---|---|---|---|
w/o Fusion | 68.9 | 67.7 | 66.8 | 65.7 |
w/o Aggregation | 66.1 | 65.4 | 64.6 | 62.8 |
Ours | 73.2 | 72.5 | 71.1 | 70.9 |