Real-World Implementation and Performance Analysis of Distributed Learning Frameworks for 6G IoT Applications
Abstract
1. Introduction
1.1. Technological Background
1.2. Contributions and Novelties
- Implementation and efficiency analysis: We deploy FTL and FL frameworks on a real-world platform comprising heterogeneous devices, including Raspberry Pi boards, an Odroid board, and virtual machines. Through comprehensive efficiency assessments covering accuracy, convergence rate, and loss, we demonstrate the superiority of FTL: it achieves higher accuracy and lower loss over shorter training durations than FL.
- Dynamic measurement and comparison of technical parameters: Our study uses dynamic measurement techniques to evaluate key technical parameters, such as load average, memory usage, temperature, power, and energy consumption, in DL methodologies. The findings reveal that FTL exhibits a lower average load on processing units and memory usage, while consuming less energy and power. This aspect is particularly crucial in emerging 6G scenarios characterized by resource-limited devices.
- Scalability analysis: We perform scalability analyses to explore the impact of varying user counts and dataset sizes on the performance of the DL framework. Our results confirm the robustness and reduced sensitivity of the FTL to these variations, highlighting its adaptability and reliability in diverse operational scenarios.
- Innovative implementation strategies: We introduce novel implementation strategies to address the practical challenges encountered while deploying the DL frameworks. In particular, we employ an asymmetric data distribution to closely emulate real-world scenarios, enhancing the applicability and robustness of our findings. We also implement cooling systems using fans and heat sinks to mitigate overheating and the resulting degradation of processing capability, thus preserving training performance and the reliability of our DL frameworks. Additionally, we introduce memory swap functionality to address the limited memory capacity of the Raspberry Pis, a novel approach to enhancing the scalability and efficiency of DL frameworks.
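The dynamic measurements listed above (load average, memory usage, temperature) can be sampled on the Linux-based clients directly from kernel interfaces; power and energy were measured externally and cannot be reproduced in software alone. A minimal sketch, assuming a Linux client — the thermal-zone path is the usual one on Raspberry Pi and Odroid boards, but it is board-specific and not taken from the paper:

```python
import os

def load_average():
    # 1-, 5-, and 15-minute load averages, as sampled during training.
    return os.getloadavg()

def memory_usage_mb():
    # Parse /proc/meminfo (Linux-only); values are reported in kB.
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, value = line.split(":")
            info[key.strip()] = int(value.split()[0])
    used_kb = info["MemTotal"] - info["MemAvailable"]
    return used_kb / 1024

def cpu_temperature_c():
    # Thermal-zone reading in millidegrees Celsius (path assumed for
    # Raspberry Pi / Odroid; other boards may expose a different zone).
    with open("/sys/class/thermal/thermal_zone0/temp") as f:
        return int(f.read()) / 1000

print("load averages:", load_average())
```

In a training run, such probes would be polled on a timer in a background thread and logged alongside each FL round.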
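The memory-swap strategy for the Raspberry Pis can be reproduced on Raspberry Pi OS with the stock dphys-swapfile service; the 4 GB size matches the swap configuration reported for the Pi clients. This is a hedged sketch of one common way to do it, not necessarily the authors' exact procedure:

```shell
# Enlarge the swap file on Raspberry Pi OS (dphys-swapfile ships with the OS).
sudo dphys-swapfile swapoff                      # stop using the current swap file
# Set the swap size to 4096 MB in the service's config file.
sudo sed -i 's/^CONF_SWAPSIZE=.*/CONF_SWAPSIZE=4096/' /etc/dphys-swapfile
# The service caps swap at 2 GB by default; raise the cap as well
# (assumes CONF_MAXSWAP is not already set in the file).
echo 'CONF_MAXSWAP=4096' | sudo tee -a /etc/dphys-swapfile
sudo dphys-swapfile setup                        # recreate the file at the new size
sudo dphys-swapfile swapon                       # re-enable swapping
free -h                                          # verify the new swap size
```

Swap on an SD card is slow and wears the card, so it is a fallback for peak memory during training rather than a substitute for RAM.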
1.3. Limitations
2. Distributed Learning (DL)
2.1. Federated Learning (FL)
- Initialization: At the beginning of the process, a global model (w) is initialized. This global model serves as the starting point for training across all devices (Line 3).
- Iteration (round): The algorithm iterates for a predetermined number of rounds, aiming to minimize the global objective efficiently [28]. Each round involves the following steps:
  - (a) Device interaction loop: within each round, the algorithm interacts with each device k (each representing a different data source) individually (Lines 5–10):
    - Broadcasting model: the current global model (w) is sent to each device participating in the process (Line 6).
    - Local training: each device trains a local model (w_k) using its own local dataset (D_k) (Line 7). This ensures that models are trained on data that remain on the respective devices, preserving privacy.
    - Local model update: after training, each device computes a local gradient update (g_k) based on its local loss function (F_k) (Line 8).
    - Sending update: the devices send their respective gradient updates (g_k) to a central server for aggregation (Line 9).
  - (b) Aggregation: the central server receives the gradient updates from all devices and aggregates them to update the global model (w) (Line 11). This step ensures that the insights gained from local data across all devices contribute to improving the global model. The update is applied with a learning rate (η) that controls the step size of the gradient descent.
- End of iteration: The algorithm repeats this process for a fixed number of rounds or until convergence criteria are met (Lines 4–12). Each round allows the global model to learn from diverse data sources without compromising individual data privacy.
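The steps above map directly onto code. Below is a minimal sketch of the round loop, using a linear model with a squared-error loss and synthetic data as stand-ins for the paper's actual model and datasets; the broadcast/train/aggregate structure and the learning-rate update are the parts taken from the algorithm:

```python
import numpy as np

def local_gradient(w, X, y):
    # Gradient of the local mean-squared-error loss F_k(w) for a linear model.
    return 2.0 * X.T @ (X @ w - y) / len(y)

def federated_learning(datasets, dim, rounds=60, eta=0.1):
    w = np.zeros(dim)                          # Line 3: initialize global model w
    for _ in range(rounds):                    # Lines 4-12: training rounds
        grads = []
        for X, y in datasets:                  # Lines 5-10: device interaction loop
            # Line 6: broadcast w; Lines 7-9: local training yields gradient g_k,
            # which the device sends back to the server.
            grads.append(local_gradient(w, X, y))
        # Line 11: server aggregates the updates and applies learning rate eta.
        w = w - eta * np.mean(grads, axis=0)
    return w

# Usage: three simulated devices holding disjoint synthetic datasets.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
datasets = []
for _ in range(3):
    X = rng.normal(size=(40, 2))
    datasets.append((X, X @ true_w))
w = federated_learning(datasets, dim=2)
```

Only gradients leave the devices; the raw datasets never do, which is the privacy property the text emphasizes.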
Algorithm 1: Federated learning.
2.2. Federated Transfer Learning (FTL)
Algorithm 2: Federated transfer learning.
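Relative to Algorithm 1, the distinguishing step in FTL is that each client starts from a transferred, pretrained representation and fine-tunes only the task-specific parameters. A hedged sketch of one such round, with a toy frozen feature map standing in for the pretrained layers (the paper's actual backbone and freezing policy are not reproduced here):

```python
import numpy as np

def pretrained_features(X):
    # Hypothetical frozen feature extractor transferred from a source task;
    # a stand-in for the pretrained layers that FTL keeps fixed on each client.
    return np.tanh(X)

def ftl_round(w_head, datasets, eta=0.1):
    # One federated round in which clients fine-tune only the task head
    # on top of the shared, frozen representation.
    grads = []
    for X, y in datasets:
        Z = pretrained_features(X)                       # transferred features
        grads.append(2.0 * Z.T @ (Z @ w_head - y) / len(y))
    return w_head - eta * np.mean(grads, axis=0)

# Usage: the head converges on targets defined in the transferred feature space.
rng = np.random.default_rng(1)
true_head = np.array([1.5, -0.5])
datasets = []
for _ in range(3):
    X = rng.normal(size=(40, 2))
    datasets.append((X, pretrained_features(X) @ true_head))
w_head = np.zeros(2)
for _ in range(300):
    w_head = ftl_round(w_head, datasets)
```

Because only the small head is trained and exchanged, each round is cheaper in computation and communication than full FL, which is consistent with the resource results reported later.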
3. Implementation of the System
3.1. Server Configurations and Functionalities
3.2. Client Configurations and Functionalities
3.2.1. Raspberry Pis
3.2.2. Odroid
3.2.3. Virtual Clients
4. Experimental Results and Performance Evaluations
4.1. Experiment 1: DL with Five Heterogeneous Clients
4.2. Experiment 2: DL with Three Heterogeneous Clients
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Abbreviations
IoT | Internet of Things
ML | Machine learning
CL | Centralized learning
DL | Distributed learning
FL | Federated learning
TL | Transfer learning
DNNs | Deep neural networks
NTNs | Non-terrestrial networks
RL | Reinforcement learning
DIL | Distributed incremental learning
FTL | Federated transfer learning
MARL | Multi-agent reinforcement learning
SL | Split learning
FedAvg | Federated averaging
VNC | Virtual network computing
OS | Operating system
RAM | Random access memory
References
- Morocho-Cayamcela, M.E.; Lee, H.; Lim, W. Machine Learning for 5G/B5G Mobile and Wireless Communications: Potential, Limitations, and Future Directions. IEEE Access 2019, 7, 137184–137206. [Google Scholar] [CrossRef]
- Praveen Kumar, D.; Amgoth, T.; Annavarapu, C.S.R. Machine learning algorithms for wireless sensor networks: A survey. Inform. Fusion 2019, 49, 1–25. [Google Scholar] [CrossRef]
- Naseh, D.; Shinde, S.S.; Tarchi, D. Enabling Intelligent Vehicular Networks Through Distributed Learning in the Non-Terrestrial Networks 6G Vision. In Proceedings of the 28th European Wireless Conference (European Wireless 2023), Rome, Italy, 2–4 October 2023. [Google Scholar]
- Fontanesi, G.; Ortíz, F.; Lagunas, E.; Baeza, V.M.; Vázquez, M.; Vásquez-Peralvo, J.; Minardi, M.; Vu, H.; Honnaiah, P.; Lacoste, C.; et al. Artificial Intelligence for Satellite Communication and Non-Terrestrial Networks: A Survey. arXiv 2023, arXiv:2304.13008. [Google Scholar]
- Lee, H.; Lee, S.H.; Quek, T.Q.S. Deep Learning for Distributed Optimization: Applications to Wireless Resource Management. IEEE J. Sel. Areas Commun. 2019, 37, 2251–2266. [Google Scholar] [CrossRef]
- Huang, J.; Wan, J.; Lv, B.; Ye, Q.; Chen, Y. Joint Computation Offloading and Resource Allocation for Edge-Cloud Collaboration in Internet of Vehicles via Deep Reinforcement Learning. IEEE Syst. J. 2023, 17, 2500–2511. [Google Scholar] [CrossRef]
- Song, H.; Liu, L.; Ashdown, J.; Yi, Y. A Deep Reinforcement Learning Framework for Spectrum Management in Dynamic Spectrum Access. IEEE Internet Things J. 2021, 8, 11208–11218. [Google Scholar] [CrossRef]
- Nayak, P.; Swetha, G.; Gupta, S.; Madhavi, K. Routing in wireless sensor networks using machine learning techniques: Challenges and opportunities. Measurement 2021, 178, 108974. [Google Scholar] [CrossRef]
- Liu, E.; Zheng, L.; He, Q.; Lai, P.; Xu, B.; Zhang, G. Role-Based User Allocation Driven by Criticality in Edge Computing. IEEE Trans. Serv. Comput. 2023, 16, 3636–3650. [Google Scholar] [CrossRef]
- Yang, Z.; Chen, M.; Saad, W.; Hong, C.S.; Shikh-Bahaei, M. Energy Efficient Federated Learning Over Wireless Communication Networks. IEEE Trans. Wirel. Commun. 2021, 20, 1935–1949. [Google Scholar] [CrossRef]
- Jiang, J.C.; Kantarci, B.; Oktug, S.; Soyata, T. Federated Learning in Smart City Sensing: Challenges and Opportunities. Sensors 2020, 20, 6230. [Google Scholar] [CrossRef]
- Wu, Q.; Wang, X.; Fan, Q.; Fan, P.; Zhang, C.; Li, Z. High stable and accurate vehicle selection scheme based on federated edge learning in vehicular networks. China Commun. 2023, 20, 1–17. [Google Scholar] [CrossRef]
- Khan, L.U.; Mustafa, E.; Shuja, J.; Rehman, F.; Bilal, K.; Han, Z.; Hong, C.S. Federated Learning for Digital Twin-Based Vehicular Networks: Architecture and Challenges. IEEE Wirel. Commun. 2023, 1–8. [Google Scholar] [CrossRef]
- Sun, Z.; Sun, G.; Liu, Y.; Wang, J.; Cao, D. BARGAIN-MATCH: A Game Theoretical Approach for Resource Allocation and Task Offloading in Vehicular Edge Computing Networks. IEEE Trans. Mob. Comput. 2024, 23, 1655–1673. [Google Scholar] [CrossRef]
- Matthiesen, B.; Razmi, N.; Leyva-Mayorga, I.; Dekorsy, A.; Popovski, P. Federated Learning in Satellite Constellations. IEEE Netw. 2023, 1–16. [Google Scholar] [CrossRef]
- Younus, M.U.; Khan, M.K.; Bhatti, A.R. Improving the Software-Defined Wireless Sensor Networks Routing Performance Using Reinforcement Learning. IEEE Internet Things J. 2022, 9, 3495–3508. [Google Scholar] [CrossRef]
- Dewangan, D.K.; Sahu, S.P. Deep Learning-Based Speed Bump Detection Model for Intelligent Vehicle System Using Raspberry Pi. IEEE Sens. J. 2021, 21, 3570–3578. [Google Scholar] [CrossRef]
- Cicceri, G.; Tricomi, G.; Benomar, Z.; Longo, F.; Puliafito, A.; Merlino, G. DILoCC: An approach for Distributed Incremental Learning across the Computing Continuum. In Proceedings of the 2021 IEEE International Conference on Smart Computing (SMARTCOMP), Irvine, CA, USA, 23–27 August 2021; pp. 113–120. [Google Scholar] [CrossRef]
- Mills, J.; Hu, J.; Min, G. Communication-Efficient Federated Learning for Wireless Edge Intelligence in IoT. IEEE Internet Things J. 2020, 7, 5986–5994. [Google Scholar] [CrossRef]
- Ridolfi, L.; Naseh, D.; Shinde, S.S.; Tarchi, D. Implementation and Evaluation of a Federated Learning Framework on Raspberry PI Platforms for IoT 6G Applications. Future Internet 2023, 15, 358. [Google Scholar] [CrossRef]
- Wang, S.; Hong, Y.; Wang, R.; Hao, Q.; Wu, Y.C.; Ng, D.W.K. Edge federated learning via unit-modulus over-the-air computation. IEEE Trans. Commun. 2022, 70, 3141–3156. [Google Scholar] [CrossRef]
- Kou, W.B.; Wang, S.; Zhu, G.; Luo, B.; Chen, Y.; Ng, D.W.K.; Wu, Y.C. Communication resources constrained hierarchical federated learning for end-to-end autonomous driving. In Proceedings of the 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Detroit, MI, USA, 1–5 October 2023; pp. 9383–9390. [Google Scholar]
- Wen, D.; Liu, P.; Zhu, G.; Shi, Y.; Xu, J.; Eldar, Y.C.; Cui, S. Task-oriented sensing, computation, and communication integration for multi-device edge AI. IEEE Trans. Wirel. Commun. 2023, 23, 2486–2502. [Google Scholar] [CrossRef]
- Chen, M.; Gündüz, D.; Huang, K.; Saad, W.; Bennis, M.; Feljan, A.V.; Poor, H.V. Distributed Learning in Wireless Networks: Recent Progress and Future Challenges. IEEE J. Sel. Areas Commun. 2021, 39, 3579–3605. [Google Scholar] [CrossRef]
- Farkas, A.; Kertész, G.; Lovas, R. Parallel and Distributed Training of Deep Neural Networks: A brief overview. In Proceedings of the 2020 IEEE 24th International Conference on Intelligent Engineering Systems (INES), Reykjavik, Iceland, 8–10 July 2020; pp. 165–170. [Google Scholar] [CrossRef]
- Naseh, D.; Shinde, S.S.; Tarchi, D. Network Sliced Distributed Learning-as-a-Service for Internet of Vehicles Applications in 6G Non-Terrestrial Network Scenarios. J. Sens. Actuator Netw. 2024, 13, 14. [Google Scholar] [CrossRef]
- Liu, Z.; Guo, J.; Yang, W.; Fan, J.; Lam, K.Y.; Zhao, J. Privacy-Preserving Aggregation in Federated Learning: A Survey. IEEE Trans. Big Data 2022, early access. [Google Scholar] [CrossRef]
- Li, T.; Sahu, A.K.; Talwalkar, A.; Smith, V. Federated Learning: Challenges, Methods, and Future Directions. IEEE Signal Process. Mag. 2020, 37, 50–60. [Google Scholar] [CrossRef]
- Rafi, T.H.; Noor, F.A.; Hussain, T.; Chae, D.K. Fairness and privacy preserving in federated learning: A survey. Inform. Fusion 2024, 105, 102198. [Google Scholar] [CrossRef]
- Hsu, T.M.H.; Qi, H.; Brown, M. Measuring the effects of non-identical data distribution for federated visual classification. arXiv 2019, arXiv:1909.06335. [Google Scholar]
- Zhou, H.; Cheng, J.; Wang, X.; Jin, B. Low rank communication for federated learning. In Proceedings of the Database Systems for Advanced Applications. DASFAA 2020 International Workshops: BDMS, SeCoP, BDQM, GDMA, and AIDE, Jeju, Republic of Korea, 24–27 September 2020; pp. 1–16. [Google Scholar]
- Wang, J.; Liu, Q.; Liang, H.; Joshi, G.; Poor, H.V. A novel framework for the analysis and design of heterogeneous federated learning. IEEE Trans. Signal Process. 2021, 69, 5234–5249. [Google Scholar] [CrossRef]
- Yao, X.; Sun, L. Continual local training for better initialization of federated models. In Proceedings of the 2020 IEEE International Conference on Image Processing (ICIP), Abu Dhabi, United Arab Emirates, 25–28 October 2020; pp. 1736–1740. [Google Scholar]
- Li, Q.; He, B.; Song, D. Model-contrastive federated learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 19–25 June 2021; pp. 10713–10722. [Google Scholar]
- Dinh, C.T.; Tran, N.H.; Nguyen, M.N.H.; Hong, C.S.; Bao, W.; Zomaya, A.Y.; Gramoli, V. Federated Learning Over Wireless Networks: Convergence Analysis and Resource Allocation. IEEE/ACM Trans. Netw. 2021, 29, 398–409. [Google Scholar] [CrossRef]
- Li, Z.; He, Y.; Yu, H.; Kang, J.; Li, X.; Xu, Z.; Niyato, D. Data Heterogeneity-Robust Federated Learning via Group Client Selection in Industrial IoT. IEEE Internet Things J. 2022, 9, 17844–17857. [Google Scholar] [CrossRef]
- Jiang, K.; Cao, Y.; Song, Y.; Zhou, H.; Wan, S.; Zhang, X. Asynchronous Federated and Reinforcement Learning for Mobility-Aware Edge Caching in IoVs. IEEE Internet Things J. 2024, early access. [Google Scholar] [CrossRef]
- Amiri, S.; Belloum, A.; Nalisnick, E.; Klous, S.; Gommans, L. On the impact of non-IID data on the performance and fairness of differentially private federated learning. In Proceedings of the 2022 52nd Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops (DSN-W), Los Alamitos, CA, USA, 27–30 June 2022; pp. 52–58. [Google Scholar] [CrossRef]
- Zhang, Z.; Zhang, Y.; Guo, D.; Zhao, S.; Zhu, X. Communication-efficient federated continual learning for distributed learning system with Non-IID data. Sci. China Inf. Sci. 2023, 66, 122102. [Google Scholar] [CrossRef]
- Hong, C.S.; Khan, L.U.; Chen, M.; Chen, D.; Saad, W.; Han, Z. Federated Learning for Wireless Networks; Springer: Singapore, 2022. [Google Scholar] [CrossRef]
- Girelli Consolaro, N.; Shinde, S.S.; Naseh, D.; Tarchi, D. Analysis and Performance Evaluation of Transfer Learning Algorithms for 6G Wireless Networks. Electronics 2023, 12, 3327. [Google Scholar] [CrossRef]
- Liu, Y.; Kang, Y.; Xing, C.; Chen, T.; Yang, Q. A Secure Federated Transfer Learning Framework. IEEE Intell. Syst. 2020, 35, 70–82. [Google Scholar] [CrossRef]
- Yang, H.; He, H.; Zhang, W.; Cao, X. FedSteg: A Federated Transfer Learning Framework for Secure Image Steganalysis. IEEE Trans. Netw. Sci. Eng. 2021, 8, 1084–1094. [Google Scholar] [CrossRef]
- Sharma, S.; Xing, C.; Liu, Y.; Kang, Y. Secure and Efficient Federated Transfer Learning. In Proceedings of the 2019 IEEE International Conference on Big Data (Big Data), Los Angeles, CA, USA, 9–12 December 2019; pp. 2569–2576. [Google Scholar] [CrossRef]
- Wang, A.; Zhang, Y.; Yan, Y. Heterogeneous Defect Prediction Based on Federated Transfer Learning via Knowledge Distillation. IEEE Access 2021, 9, 29530–29540. [Google Scholar] [CrossRef]
- Gao, D.; Liu, Y.; Huang, A.; Ju, C.; Yu, H.; Yang, Q. Privacy-preserving Heterogeneous Federated Transfer Learning. In Proceedings of the 2019 IEEE International Conference on Big Data (Big Data), Los Angeles, CA, USA, 9–12 December 2019; pp. 2552–2559. [Google Scholar] [CrossRef]
- Yan, Z.; Li, D. Performance Analysis for Resource Constrained Decentralized Federated Learning Over Wireless Networks. IEEE Trans. Commun. 2024, early access. [Google Scholar] [CrossRef]
- Beutel, D.J.; Topal, T.; Mathur, A.; Qiu, X.; Fernandez-Marques, J.; Gao, Y.; Sani, L.; Li, K.H.; Parcollet, T.; de Gusmão, P.P.B.; et al. Flower: A Friendly Federated Learning Research Framework. arXiv 2022, arXiv:2007.14390. [Google Scholar]
- Shiraz, M.; Abolfazli, S.; Sanaei, Z.; Gani, A. A study on virtual machine deployment for application outsourcing in mobile cloud computing. J. Supercomput. 2013, 63, 946–964. [Google Scholar] [CrossRef]
HW | RAM | Swap | CPU | No. of Cores | Storage
---|---|---|---|---|---
Raspberry Pi | 1 GB LPDDR2 SDRAM | 4 GB | Quad-core Cortex-A53 (1.4 GHz) | 4 | 16 GB microSD
Odroid | 4 GB LPDDR4 RAM | 0 | Quad-core Cortex-A73 (2.2 GHz) + dual-core Cortex-A53 (2 GHz) | 6 | 32 GB eMMC
PC | 12 GB DDR4 SDRAM | 0 | Core i7-6500U (3.1 GHz) | 2 | 1 TB hard drive
Share and Cite
Naseh, D.; Abdollahpour, M.; Tarchi, D. Real-World Implementation and Performance Analysis of Distributed Learning Frameworks for 6G IoT Applications. Information 2024, 15, 190. https://doi.org/10.3390/info15040190