Search Results (224)

Search Parameters:
Keywords = heterogeneous server

23 pages, 372 KiB  
Article
Rewiring Sustainability: How Digital Transformation and Fintech Innovation Reshape Environmental Trajectories in the Industry 4.0 Era
by Zhuoqi Teng, Han Xia and Yugang He
Systems 2025, 13(6), 400; https://doi.org/10.3390/systems13060400 - 22 May 2025
Abstract
This study investigates the long-run impact of digital transformation and fintech innovation on environmental sustainability across OECD countries from 1999 to 2024. Drawing on a novel empirical framework that integrates panel fully modified ordinary least squares, the system-generalized method of moments, and machine learning estimators, the analysis captures both linear and nonlinear dynamics while addressing heterogeneity, endogeneity, and structural complexity. Environmental sustainability is measured by per capita CO₂ emissions, while digital transformation and fintech innovation are proxied by secure internet servers and G06Q patent applications, respectively. The findings reveal that both digital infrastructure maturity and fintech-driven innovation significantly reduce carbon emissions, suggesting that technologically advanced digital ecosystems serve as effective instruments for climate mitigation. Robustness checks via the system-generalized method of moments confirm the stability of these relationships, while machine learning models—Random Forest and XGBoost—highlight digital variables as top predictors of emissions reduction. The convergence of results across estimation methods underscores the reliability of the digital–environmental nexus. Policy implications emphasize the need to embed sustainability metrics into digital strategies, promote green fintech regulation, and prepare labor markets for Industry 4.0 transitions. These findings position digital and fintech innovation not merely as enablers of economic growth, but as structural levers for achieving environmentally sustainable development in high-income economies.
(This article belongs to the Special Issue Sustainable Business Model Innovation in the Era of Industry 4.0)
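As a rough illustration of the machine-learning step described above, the sketch below trains a Random Forest on synthetic panel-style data and prints a feature-importance ranking. The feature names and data-generating process are assumptions for demonstration, not the authors' dataset or specification.

```python
# Sketch: ranking emission predictors with a Random Forest, as in the
# paper's machine-learning check. The data here are synthetic; the actual
# study uses an OECD panel, 1999-2024.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 500
X = rng.normal(size=(n, 4))
features = ["secure_internet_servers", "fintech_patents_g06q",
            "gdp_per_capita", "energy_intensity"]
# Assumed relationship: digital variables reduce per-capita CO2 emissions.
y = -0.8 * X[:, 0] - 0.6 * X[:, 1] + 0.3 * X[:, 2] + 0.4 * X[:, 3] \
    + rng.normal(scale=0.2, size=n)

model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, y)
for name, imp in sorted(zip(features, model.feature_importances_),
                        key=lambda t: -t[1]):
    print(f"{name}: {imp:.3f}")
```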
24 pages, 3048 KiB  
Article
Automatic Controversy Detection Based on Heterogeneous Signed Attributed Network and Deep Dual-Layer Self-Supervised Community Analysis
by Ying Li, Xiao Zhang, Yu Liang and Qianqian Li
Entropy 2025, 27(5), 473; https://doi.org/10.3390/e27050473 - 27 Apr 2025
Viewed by 199
Abstract
In this study, we propose a computational approach that applies text mining and deep learning to conduct controversy detection on social media platforms. Unlike previous research, our method integrates multidimensional and heterogeneous information from social media into a heterogeneous signed attributed network, encompassing various users’ attributes, semantic information, and structural heterogeneity. We introduce a deep dual-layer self-supervised algorithm for community detection and analyze controversy within this network. A novel controversy metric is devised by considering three dimensions of controversy: community distinctions, betweenness centrality, and user representations. A comparison between our method and other classical controversy measures such as Random Walk, Biased Random Walk (BRW), BCC, EC, GMCK, MBLB, and community-based methods reveals that our model consistently produces more stable and accurate controversy scores. Additionally, we calculated the level of controversy and computed p-values for the detected communities on our crawled Weibo dataset, comprising #Microblog (3792), #Comment (45,741), #Retweet (36,126), and #User (61,327). Overall, our model provides a comprehensive and nuanced understanding of controversy on social media platforms. To facilitate its use, we have developed a user-friendly web server.
(This article belongs to the Section Complexity)
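The controversy metric described above combines community structure with betweenness centrality. The toy sketch below, using networkx on a stand-in graph, shows one plausible way to combine those signals; the combination rule is an assumption for illustration, not the paper's exact metric.

```python
# Toy controversy signal: split the graph into two communities, then
# combine community separation with the betweenness centrality of
# bridging nodes.
import networkx as nx
from networkx.algorithms.community import kernighan_lin_bisection

G = nx.karate_club_graph()                     # stand-in for a retweet graph
part_a, part_b = kernighan_lin_bisection(G, seed=0)
side = {v: 0 for v in part_a}
side.update({v: 1 for v in part_b})

# Share of edges crossing the split: fewer crossings = stronger division.
cross_edges = [(u, v) for u, v in G.edges() if side[u] != side[v]]
cross_ratio = len(cross_edges) / G.number_of_edges()

# Betweenness of the busiest bridge endpoint: high = few gatekeepers.
bc = nx.betweenness_centrality(G)
bridge_bc = max(max(bc[u], bc[v]) for u, v in cross_edges)

controversy = (1 - cross_ratio) * bridge_bc    # assumed combination rule
print(f"cross ratio={cross_ratio:.3f}, controversy={controversy:.3f}")
```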
38 pages, 2457 KiB  
Article
Towards Secure and Efficient Farming Using Self-Regulating Heterogeneous Federated Learning in Dynamic Network Conditions
by Sai Puppala and Koushik Sinha
Agriculture 2025, 15(9), 934; https://doi.org/10.3390/agriculture15090934 - 25 Apr 2025
Viewed by 230
Abstract
The advancement of precision agriculture increasingly depends on innovative technological solutions that optimize resource utilization and minimize environmental impact. This paper introduces a novel heterogeneous federated learning architecture specifically designed for intelligent agricultural systems, with a focus on combine tractors equipped with advanced nutrient and crop health sensors. Unlike conventional FL applications, our architecture uniquely addresses the challenges of communication efficiency, dynamic network conditions, and resource allocation in rural farming environments. By adopting a decentralized approach, we ensure that sensitive data remain localized, thereby enhancing security while facilitating effective collaboration among devices. The architecture promotes the formation of adaptive clusters based on operational capabilities and geographical proximity, optimizing communication between edge devices and a global server. Furthermore, we implement a robust checkpointing mechanism and a dynamic data transmission strategy, ensuring efficient model updates in the face of fluctuating network conditions. Through a comprehensive assessment of computational power, energy efficiency, and latency, our system intelligently classifies devices, significantly enhancing the overall efficiency of federated learning processes. This paper details the architecture, operational procedures, and evaluation methodologies, demonstrating how our approach has the potential to transform agricultural practices through data-driven decision-making and promote sustainable farming practices tailored to the unique challenges of the agricultural sector.
(This article belongs to the Section Digital Agriculture)
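A small sketch of the cluster-formation idea described above: grouping devices by geographic proximity and operational capability with k-means. The features, scaling, and number of clusters are illustrative assumptions.

```python
# Sketch of the cluster-formation step: group edge devices (e.g., combine
# tractors) by position and capability before federated training.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
# Columns: latitude, longitude, compute score, battery level (all synthetic).
devices = np.column_stack([
    rng.uniform(35.0, 36.0, 40),
    rng.uniform(-90.0, -89.0, 40),
    rng.uniform(0.2, 1.0, 40),
    rng.uniform(0.1, 1.0, 40),
])

X = StandardScaler().fit_transform(devices)
labels = KMeans(n_clusters=4, n_init=10, random_state=1).fit_predict(X)
for c in range(4):
    print(f"cluster {c}: {np.sum(labels == c)} devices")
```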
20 pages, 896 KiB  
Article
MAB-Based Online Client Scheduling for Decentralized Federated Learning in the IoT
by Zhenning Chen, Xinyu Zhang, Siyang Wang and Youren Wang
Entropy 2025, 27(4), 439; https://doi.org/10.3390/e27040439 - 18 Apr 2025
Viewed by 239
Abstract
Different from conventional federated learning (FL), which relies on a central server for model aggregation, decentralized FL (DFL) exchanges models among edge servers, thus improving robustness and scalability. When deploying DFL in the Internet of Things (IoT), limited wireless resources cannot provide simultaneous access to massive numbers of devices, so client scheduling must be performed to balance the convergence rate and model accuracy. However, the heterogeneity of computing and communication resources across client devices, combined with the time-varying nature of wireless channels, makes it challenging to accurately estimate the delay associated with client participation during the scheduling process. To address this issue, we investigate the client scheduling and resource optimization problem in DFL without prior client information. Specifically, the considered problem is reformulated as a multi-armed bandit (MAB) problem, and an online learning algorithm that utilizes contextual multi-armed bandits for client delay estimation and scheduling is proposed. Theoretical analysis shows that this algorithm achieves asymptotically optimal performance. The experimental results show that the algorithm makes asymptotically optimal client selection decisions and that it outperforms existing algorithms in reducing the cumulative delay of the system.
(This article belongs to the Section Information Theory, Probability and Statistics)
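To make the MAB formulation concrete, the sketch below implements a plain (non-contextual) UCB scheduler that learns per-client delays online. The delay distributions are synthetic, and the paper's contextual variant additionally uses side information such as channel state.

```python
# MAB-style client scheduling sketch: pick the client with the best
# upper-confidence estimate of (negative) participation delay.
import math
import random

random.seed(0)
true_mean_delay = [0.8, 0.5, 1.2, 0.6]    # unknown to the scheduler
n_clients = len(true_mean_delay)
counts = [0] * n_clients
avg_delay = [0.0] * n_clients

for t in range(1, 501):
    if t <= n_clients:                    # initialize: play each arm once
        k = t - 1
    else:
        # UCB on reward = -delay: exploit low delay, explore rarely seen clients.
        k = max(range(n_clients),
                key=lambda i: -avg_delay[i]
                + math.sqrt(2 * math.log(t) / counts[i]))
    d = random.gauss(true_mean_delay[k], 0.1)   # observed participation delay
    counts[k] += 1
    avg_delay[k] += (d - avg_delay[k]) / counts[k]

print("selections per client:", counts)   # should favor clients 1 and 3
```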
54 pages, 5836 KiB  
Review
A Survey on Edge Computing (EC) Security Challenges: Classification, Threats, and Mitigation Strategies
by Abdul Manan Sheikh, Md. Rafiqul Islam, Mohamed Hadi Habaebi, Suriza Ahmad Zabidi, Athaur Rahman Bin Najeeb and Adnan Kabbani
Future Internet 2025, 17(4), 175; https://doi.org/10.3390/fi17040175 - 16 Apr 2025
Viewed by 1496
Abstract
Edge computing (EC) is a distributed computing approach that processes data at the network edge, either on the device or a local server, instead of in centralized data centers or the cloud. EC proximity to the data source can provide faster insights, response times, and bandwidth utilization. However, the distributed architecture of EC makes it vulnerable to data security breaches and diverse attack vectors. The edge paradigm has limited availability of resources such as memory and battery power, and it must also contend with heterogeneous hardware, diverse communication protocols, and the difficulty of applying security patches in a timely manner. A significant number of researchers have presented countermeasures for the detection and mitigation of data security threats in an EC paradigm. However, an approach that differs from the traditional data security and privacy-preserving mechanisms already used in cloud computing is required. Artificial Intelligence (AI) greatly improves EC security through advanced threat detection, automated responses, and optimized resource management. When combined with Physical Unclonable Functions (PUFs), AI further strengthens data security by leveraging PUFs’ unique and unclonable attributes alongside AI’s adaptive and efficient management features. This paper investigates various edge security strategies and cutting-edge solutions. It presents a comparison between existing strategies, highlighting their benefits and limitations. Additionally, the paper offers a detailed discussion of EC security threats, including their characteristics and the classification of different attack types. The paper also provides an overview of the security and privacy needs of EC, detailing the technological methods employed to address threats. Its goal is to assist future researchers in pinpointing potential research opportunities.
(This article belongs to the Special Issue Cloud and Edge Computing for the Next-Generation Networks)
35 pages, 1975 KiB  
Review
Decentralized Federated Learning for Private Smart Healthcare: A Survey
by Haibo Cheng, Youyang Qu, Wenjian Liu, Longxiang Gao and Tianqing Zhu
Mathematics 2025, 13(8), 1296; https://doi.org/10.3390/math13081296 - 15 Apr 2025
Viewed by 432
Abstract
This research explores the use of decentralized federated learning (DFL) in healthcare, focusing on overcoming the shortcomings of traditional centralized FL systems. DFL is proposed as a solution to enhance data privacy and improve system reliability by reducing dependence on central servers and increasing local data control. The research adopts a systematic literature review, following PRISMA guidelines, to provide a comprehensive understanding of DFL’s current applications and challenges within healthcare. The review synthesizes findings from various sources to identify the benefits and gaps in existing research, proposing research questions to further investigate the feasibility and optimization of DFL in medical environments. The study identifies four key challenges for DFL: security and privacy, communication efficiency, data and model heterogeneity, and incentive mechanisms. It discusses potential solutions, such as advanced cryptographic methods, optimized communication strategies, adaptive learning models, and robust incentive frameworks, to address these challenges. Furthermore, the research highlights the potential of DFL in enabling personalized healthcare through large, distributed data sets across multiple medical institutions. This study fills a critical gap in the literature by systematically reviewing DFL technologies in healthcare, offering valuable insights into applications, challenges, and future research directions that could improve the security, efficiency, and equity of healthcare data management.
16 pages, 665 KiB  
Article
Modeling and Performance Analysis of Task Offloading of Heterogeneous Mobile Edge Computing Networks
by Wenwang Li and Haohao Zhou
Appl. Sci. 2025, 15(8), 4307; https://doi.org/10.3390/app15084307 - 14 Apr 2025
Viewed by 320
Abstract
Mobile edge computing (MEC) can provide users with low-latency services by integrating computing, storage, and processing capabilities near users and data sources. As such, there has been intense interest in this topic, especially in single-server and homogeneous multi-server scenarios. However, existing studies often ignore the impact of network heterogeneity and load fluctuations, and their performance evaluations rely too heavily on statistical mean indicators while neglecting real-time ones. In this paper, we propose a new heterogeneous edge computing network architecture composed of multi-core servers with varying transmission power, computing capabilities, and waiting queue lengths. Since it is necessary to evaluate and analyze the service performance of MEC to guarantee Quality of Service (QoS), we design several indicators by solving the probability distribution function of the response time, such as the average task offloading delay, immediate service probability, and blocking probability. By analyzing the impact of bias factors and network parameters associated with MEC servers on network performance, we provide insights for MEC design, deployment, and optimization.
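The indicators named above follow from the response-time distribution of a finite-capacity multi-core server. As a simplified stand-in, the sketch below computes blocking probability, immediate service probability, and mean delay for an M/M/c/K queue; the queueing model and parameter values are assumptions for illustration, not the paper's exact analysis.

```python
# Simplified stand-in: one MEC server modeled as an M/M/c/K queue
# (c cores, total capacity K tasks). Closed forms are standard.
from math import factorial

lam, mu, c, K = 8.0, 3.0, 4, 10      # arrival rate, service rate, cores, capacity
a = lam / mu

# Unnormalized stationary probabilities of n tasks in the system.
weights = [a**n / factorial(n) for n in range(c)] + \
          [a**n / (factorial(c) * c**(n - c)) for n in range(c, K + 1)]
norm = sum(weights)
p = [w / norm for w in weights]

blocking = p[K]                      # an arriving task finds the system full
immediate = sum(p[:c])               # an arriving task finds a free core
L = sum(n * pn for n, pn in enumerate(p))
W = L / (lam * (1 - blocking))       # mean response time via Little's law

print(f"blocking={blocking:.4f}  immediate={immediate:.4f}  mean delay={W:.4f}s")
```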
28 pages, 4025 KiB  
Article
Blockchain-Based UAV-Assisted Mobile Edge Computing for Dual Game Resource Allocation
by Shanchen Pang, Yu Tang, Xue Zhai, Siyuan Tong and Zhenghao Wan
Appl. Sci. 2025, 15(7), 4048; https://doi.org/10.3390/app15074048 - 7 Apr 2025
Viewed by 529
Abstract
UAV-assisted mobile edge computing (MEC) combines the flexibility of UAVs with the computing power of MEC to provide low-latency, high-performance computing solutions for a wide range of application scenarios. However, due to the highly dynamic and heterogeneous nature of the UAV environment, the optimal allocation of resources and system reliability still face significant challenges. This paper proposes a dual-stage optimization (DSO) algorithm for UAV-assisted MEC, combining Stackelberg game theory and auction mechanisms to optimize resource allocation among servers, UAVs, and users. The first stage uses a Stackelberg game to allocate resources between servers and UAVs, while the second stage employs an auction algorithm for UAV–user resource pricing. Blockchain smart contracts automate task management, ensuring transparency and reliability. The experimental results show that, compared with the traditional single-stage optimization (SSO) algorithm, the equal allocation algorithm (EAA), and the dynamic resource pricing algorithm (DRP), the proposed DSO algorithm has significant advantages: it improves resource utilization by 7–10%, reduces task latency by 3–5%, and lowers energy consumption by 4–8%, making it highly effective for dynamic UAV environments.
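To illustrate the first stage, the sketch below solves a toy Stackelberg pricing game: the server (leader) posts a unit price, and each UAV (follower) best-responds with the demand maximizing a logarithmic utility. The utility form and parameters are textbook assumptions, not the paper's model.

```python
# Toy Stackelberg pricing stage: leader sets price, followers best-respond
# by maximizing alpha*log(1+q) - price*q, giving q* = alpha/price - 1.
import numpy as np

alphas = np.array([4.0, 6.0, 5.0])              # UAV valuation parameters

def follower_demand(price, alpha):
    return max(alpha / price - 1.0, 0.0)        # from the first-order condition

def leader_revenue(price):
    return price * sum(follower_demand(price, a) for a in alphas)

prices = np.linspace(0.5, 6.0, 200)             # leader's grid search
best = max(prices, key=leader_revenue)
print(f"price={best:.2f}, demands="
      f"{[round(follower_demand(best, a), 2) for a in alphas]}")
```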
15 pages, 548 KiB  
Article
Centralized Hierarchical Coded Caching Scheme for Two-Layer Network
by Kun Zhao, Jinyu Wang and Minquan Cheng
Entropy 2025, 27(3), 316; https://doi.org/10.3390/e27030316 - 18 Mar 2025
Viewed by 293
Abstract
This paper considers a two-layer hierarchical network, where a server containing N files is connected to K₁ mirrors and each mirror is connected to K₂ users. Each mirror and each user has a cache memory of size M₁ and M₂ files, respectively. The server can only broadcast to the mirrors, and each mirror can only broadcast to its connected users. For such a network, we propose a novel coded caching scheme based on two known placement delivery arrays (PDAs). To fully utilize the cache memory of both the mirrors and users, we first treat the mirrors and users as cache nodes of the same type; i.e., the cache memory of each mirror is regarded as an additional part of the connected users’ cache, then the server broadcasts messages to all mirrors according to a K₁K₂-user PDA in the first layer. In the second layer, each mirror first cancels useless file packets (if any) in the received useful messages and forwards them to the connected users, such that each user can decode the requested packets not cached by the mirror, then broadcasts coded subpackets to the connected users according to a K₂-user PDA, such that each user can decode the requested packets cached by the mirror. The proposed scheme is extended to a heterogeneous two-layer hierarchical network, where the number of users connected to different mirrors may be different. Numerical comparison showed that the proposed scheme achieved lower coding delays compared to existing hierarchical coded caching schemes at most memory ratio points.
(This article belongs to the Special Issue Network Information Theory and Its Applications)
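For readers unfamiliar with PDAs, the sketch below constructs the classic Maddah-Ali–Niesen PDA, the standard example of the "*/integer" array structure such schemes build on; whether this is one of the two PDAs the authors use is not stated here.

```python
# Maddah-Ali-Niesen PDA for K users with integer cache ratio t = KM/N.
# Rows are subpackets (t-subsets of users), '*' marks cached content, and
# equal integers mark packets served by a single coded multicast.
from itertools import combinations

def mn_pda(K, t):
    rows = list(combinations(range(K), t))          # subpacket index sets
    sym = {S: i for i, S in enumerate(combinations(range(K), t + 1))}
    pda = [['*' if k in T else sym[tuple(sorted(set(T) | {k}))]
            for k in range(K)] for T in rows]
    return rows, pda

rows, pda = mn_pda(K=4, t=2)
for T, row in zip(rows, pda):
    print(T, row)
# Each integer appears t+1 = 3 times: one multicast serves 3 users at once.
```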
31 pages, 616 KiB  
Review
Fog Service Placement Optimization: A Survey of State-of-the-Art Strategies and Techniques
by Hemant Kumar Apat, Veena Goswami, Bibhudatta Sahoo, Rabindra K. Barik and Manob Jyoti Saikia
Computers 2025, 14(3), 99; https://doi.org/10.3390/computers14030099 - 11 Mar 2025
Viewed by 898
Abstract
The rapid development of Internet of Things (IoT) devices in various smart city applications, such as healthcare, traffic management systems, environment sensing systems, and public safety systems, produces large volumes of data. Processing these data requires substantial computing and storage resources for smooth implementation and execution. While centralized cloud computing offers scalability, flexibility, and resource sharing, it faces significant limitations in IoT-based applications, especially in terms of latency, bandwidth, security, and cost. The fog computing paradigm complements existing cloud computing services at the edge of the network to facilitate various services without sending the data to a centralized cloud server. By processing data in the fog, the delay requirements of various time-sensitive IoT services can be satisfied. However, many resource-intensive IoT systems exist that require substantial computing resources for their processing. In such scenarios, finding the optimal computing node for processing and executing a service is a challenge. The optimal placement of IoT application services in heterogeneous fog computing environments is a well-known NP-complete problem. To solve this problem, various authors have proposed algorithms such as randomized, heuristic, metaheuristic, machine learning, and graph-based algorithms for finding the optimal placement. In the present survey, we first describe the fundamental and mathematical aspects of the three-layer IoT–fog–cloud computing model. Then, we classify the IoT application model based on different attributes that help to find the optimal computing node. Furthermore, we discuss the complexity analysis of the service placement problem in detail. Finally, we provide a comprehensive evaluation of both single-objective and multi-objective IoT service placement strategies in fog computing. Additionally, we highlight new challenges and identify promising directions for future research, specifically in the context of multi-objective IoT service optimization.
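As a concrete example of the heuristic family surveyed above, the sketch below performs a greedy latency-aware placement of services onto heterogeneous fog nodes with a cloud fallback; all service and node figures are invented for illustration.

```python
# Greedy latency-aware placement: handle the tightest deadlines first and
# put each service on the nearest node that still has capacity.
services = [  # (name, cpu demand, latency deadline in ms)
    ("health-monitor", 1, 10), ("traffic-cam", 2, 20),
    ("video-analytics", 4, 30), ("air-quality", 1, 100),
]
nodes = {     # name: [free cpu, access latency in ms]
    "fog-1": [3, 5], "fog-2": [4, 8], "cloud": [10**6, 80],
}

placement = {}
for name, cpu, deadline in sorted(services, key=lambda s: s[2]):
    for node in sorted(nodes, key=lambda n: nodes[n][1]):   # nearest first
        free, lat = nodes[node]
        if free >= cpu and lat <= deadline:
            nodes[node][0] -= cpu
            placement[name] = node
            break
    else:
        placement[name] = "rejected"              # no feasible node

print(placement)
```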
31 pages, 1586 KiB  
Article
Privacy-Preserving and Verifiable Personalized Federated Learning
by Dailin Xie and Dan Li
Symmetry 2025, 17(3), 361; https://doi.org/10.3390/sym17030361 - 27 Feb 2025
Viewed by 427
Abstract
As an important branch of machine learning, federated learning still suffers from statistical heterogeneity. Therefore, personalized federated learning (PFL) has been proposed to deal with this obstacle. However, the privacy of local and global gradients is still under threat in the scope of PFL. Additionally, the correctness of the aggregated result cannot be verified. Therefore, we propose a secure and verifiable personalized federated learning protocol that protects privacy using homomorphic encryption and verifies the aggregated result using Lagrange interpolation and commitments. Furthermore, it resists collusion attacks performed by servers and clients who try to pass verification. Comprehensive theoretical analysis is provided to verify our protocol’s security. Extensive experiments on MNIST, Fashion-MNIST, and CIFAR-10 are carried out to demonstrate the effectiveness of our protocol. Our model achieved accuracies of 88.25% on CIFAR-10, 99.01% on MNIST, and 96.29% on Fashion-MNIST. The results show that our protocol improves security while maintaining the classification accuracy of the training model.
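As a toy illustration of the Lagrange-interpolation building block mentioned above (not the authors' full protocol with homomorphic encryption and commitments), the sketch below encodes a value as the constant term of a polynomial over a prime field and shows that any sufficiently many shares reconstruct, and thus cross-check, the same value.

```python
# Lagrange-interpolation primitive: a value is the constant term of a
# random degree-d polynomial mod a prime; any d+1 points recover it.
import random

P = 2**61 - 1                         # prime modulus

def interpolate_at_zero(points):
    """Recover f(0) from (x, f(x)) pairs by Lagrange interpolation mod P."""
    total = 0
    for xi, yi in points:
        num, den = 1, 1
        for xj, _ in points:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

random.seed(0)
secret = 123456                       # e.g., one aggregated gradient coordinate
coeffs = [secret] + [random.randrange(P) for _ in range(2)]   # degree-2 poly
shares = [(x, sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P)
          for x in (1, 2, 3, 4)]

# Any 3 of the 4 shares reconstruct the same value, enabling a cross-check.
assert interpolate_at_zero(shares[:3]) == interpolate_at_zero(shares[1:]) == secret
print("reconstructed:", interpolate_at_zero(shares[:3]))
```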
18 pages, 2185 KiB  
Article
Improving Infrastructure Cluster Design by Using Symmetry
by Vedran Dakić, Mario Kovač and Josip Knezović
Symmetry 2025, 17(3), 357; https://doi.org/10.3390/sym17030357 - 26 Feb 2025
Viewed by 454
Abstract
Symmetry in IT system design is essential for improving efficiency, consistency, and manageability in data center operations. Symmetry guarantees that all system elements—be it hardware, software, or network configurations—are crafted to be consistent, thereby minimizing variability and streamlining operations. This principle is especially pertinent in cluster computing, where uniform server configurations facilitate efficient maintenance and consistent system performance. Symmetric designs reduce variations among nodes, alleviating performance discrepancies and resource imbalances commonly encountered in heterogeneous environments. This paper examines the advantages of symmetric configurations via an experimental analysis of the lifecycle management process. The findings indicate that clusters constructed with a symmetric server architecture enhance operational efficiency. From a lifecycle management standpoint, symmetry streamlines hardware provisioning and maintenance, diminishing complexities related to Day-1 and Day-2 operations. Furthermore, by guaranteeing consistent performance across all servers, symmetric designs facilitate a more predictable quality of service (QoS), reducing bottlenecks and improving overall system stability. Experimental results indicate that, when properly configured, symmetric clusters surpass asymmetric configurations in sustaining QoS, especially during peak loads or hardware failures, owing to their enhanced resource allocation and failover mechanisms. This research highlights the significance of symmetry as a fundamental principle in cluster-based data center architecture.
(This article belongs to the Section Computer)
19 pages, 1959 KiB  
Article
Leveraging Federated Learning for Malware Classification: A Heterogeneous Integration Approach
by Kongyang Chen, Wangjun Zhang, Zhangmao Liu and Bing Mi
Electronics 2025, 14(5), 915; https://doi.org/10.3390/electronics14050915 - 25 Feb 2025
Viewed by 638
Abstract
The increasing complexity and frequency of malware attacks pose significant challenges to cybersecurity, as traditional methods struggle to keep pace with the evolving threat landscape. Current malware classification techniques often fail to account for the heterogeneity of malware data and models across different clients, limiting their effectiveness. In this paper, we propose a distributed model enhancement-based malware classification method that leverages federated learning to address these limitations. Our approach employs generative adversarial networks to generate synthetic malware data, transforming non-independent datasets into approximately independent ones to mitigate data heterogeneity. Additionally, we utilize knowledge distillation to facilitate the transfer of knowledge between client-specific models and a global classification model, promoting effective collaboration among diverse systems. Inspired by active defense theory, our method identifies suboptimal models during training and replaces them on a central server, ensuring all clients operate with optimal classification capabilities. We conducted extensive experimentation on the Malimg dataset and the Microsoft Malware Classification Challenge (MMCC) dataset. In scenarios characterized by both model heterogeneity and data heterogeneity, our proposed method demonstrated its effectiveness by improving the global malware classification model’s accuracy to 96.80%. Overall, our research presents a robust framework for improving malware classification while maintaining data privacy across distributed environments, highlighting its potential to strengthen cybersecurity defenses against increasingly sophisticated malware threats.
(This article belongs to the Special Issue AI-Based Solutions for Cybersecurity)
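The knowledge-distillation step described above can be summarized by its loss function. Below is a common PyTorch formulation, a temperature-softened KL term plus standard cross-entropy; the temperature, weighting, and class count are assumed defaults, not the paper's settings.

```python
# Standard knowledge-distillation loss: student matches the teacher's
# softened logits while also fitting the true labels.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    # Soft targets: KL between temperature-softened distributions, scaled by T^2.
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * (T * T)
    # Hard targets: ordinary cross-entropy on the true labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

student = torch.randn(8, 9, requires_grad=True)  # batch of 8; 9 classes, as in MMCC
teacher = torch.randn(8, 9)
labels = torch.randint(0, 9, (8,))
loss = distillation_loss(student, teacher, labels)
loss.backward()
print(float(loss))
```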
32 pages, 2628 KiB  
Article
JAVIS Chat: A Seamless Open-Source Multi-LLM/VLM Deployment System to Be Utilized in Single Computers and Hospital-Wide Systems with Real-Time User Feedback
by Javier Aguirre and Won Chul Cha
Appl. Sci. 2025, 15(4), 1796; https://doi.org/10.3390/app15041796 - 10 Feb 2025
Viewed by 1548
Abstract
The rapid advancement of large language models (LLMs) and vision-language models (VLMs) holds enormous promise across industries, including healthcare, but hospitals face unique barriers, such as stringent privacy regulations, heterogeneous IT infrastructures, and limited customization. To address these challenges, we present the joint AI versatile implementation system chat (JAVIS chat), an open-source framework for deploying LLMs and VLMs within secure hospital networks. JAVIS features a modular architecture, real-time feedback mechanisms, customizable components, and scalable containerized workflows. It integrates Ray for distributed computing and vLLM for optimized model inference, delivering smooth scaling from single workstations to hospital-wide systems. JAVIS consistently demonstrates robust scalability and significantly reduces response times on legacy servers through Ray-managed multiple-instance models, operating seamlessly across diverse hardware configurations and enabling real-time departmental customization. By ensuring compliance with global data protection laws and operating solely within closed networks, JAVIS safeguards patient data while facilitating AI adoption in clinical workflows. This paradigm shift supports patient care and operational efficiency by bridging AI potential with clinical utility, with future developments, including speech-to-text integration, further enhancing its versatility.
(This article belongs to the Special Issue Large Language Models: Transforming E-health)
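For orientation, the sketch below shows vLLM's basic offline-inference API, the inference layer JAVIS builds on. The model path and prompt are placeholders, and the Ray-based multi-instance scaling and web front end are separate layers not shown here.

```python
# Minimal vLLM offline inference: load a local model (no external network
# access, matching the closed-network constraint) and generate a reply.
from vllm import LLM, SamplingParams

llm = LLM(model="/models/local-clinical-llm")   # hypothetical on-premises path
params = SamplingParams(temperature=0.2, max_tokens=256)

outputs = llm.generate(
    ["Summarize the key vitals to monitor after sepsis admission."], params
)
print(outputs[0].outputs[0].text)
```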
38 pages, 990 KiB  
Article
A Heterogeneity-Aware Semi-Decentralized Model for a Lightweight Intrusion Detection System for IoT Networks Based on Federated Learning and BiLSTM
by Shuroog Alsaleh, Mohamed El Bachir Menai and Saad Al-Ahmadi
Sensors 2025, 25(4), 1039; https://doi.org/10.3390/s25041039 - 9 Feb 2025
Viewed by 1556
Abstract
The wide range and heterogeneity of Internet of Things (IoT) networks make them prone to cyberattacks. Most IoT devices have resource capabilities (e.g., memory capacity, processing power, and energy) too limited to function as conventional intrusion detection systems (IDSs). Researchers have applied many approaches to lightweight IDSs, including energy-based IDSs, machine learning/deep learning (ML/DL)-based IDSs, and federated learning (FL)-based IDSs. FL has become a promising solution for IDSs in IoT networks because it reduces the overhead of the learning process by engaging IoT devices during training. Three FL architectures are used to tackle IDSs in IoT networks: centralized (client–server), decentralized (device-to-device), and semi-decentralized. However, none of them has solved the heterogeneity of IoT devices while considering a lightweight design and performance at the same time. Therefore, we propose a semi-decentralized FL-based model for a lightweight IDS that fits IoT device capabilities. The proposed model is based on clustering the IoT devices—FL clients—and assigning a cluster head to each cluster that acts on behalf of its FL clients. Consequently, the number of IoT devices that communicate with the server is reduced, helping to reduce the communication overhead. Moreover, clustering helps to improve the aggregation process, as each cluster sends the average of its models’ weights to the server for aggregation in one FL round. The distributed denial-of-service (DDoS) attack is the main concern in our IDS model, since it easily targets IoT devices with limited resource capabilities. The proposed model is configured with three deep learning techniques—LSTM, BiLSTM, and WGAN—using the CICIoT2023 dataset. The experimental results show that the BiLSTM achieves better performance and is suitable for resource-constrained IoT devices based on model size. We tested the pre-trained semi-decentralized FL-based model on three datasets—BoT-IoT, WUSTL-IIoT-2021, and Edge-IIoTset—and the results show that our model achieves the highest performance in most classes, particularly for DDoS attacks.
(This article belongs to the Section Internet of Things)
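A minimal sketch of the semi-decentralized aggregation described above: members upload weights to their cluster head, each head averages its cluster, and the server forms a device-weighted global mean from the heads' uploads. Shapes and cluster sizes are illustrative assumptions.

```python
# Semi-decentralized aggregation: members -> cluster heads -> server.
# Each head uploads its cluster's average once per FL round.
import numpy as np

rng = np.random.default_rng(42)
clusters = {   # cluster head -> stacked member model weights (devices x params)
    "head-A": rng.normal(size=(5, 128)),
    "head-B": rng.normal(size=(3, 128)),
    "head-C": rng.normal(size=(8, 128)),
}

# One upload per head: (cluster mean, number of member devices).
uploads = [(w.mean(axis=0), w.shape[0]) for w in clusters.values()]

total = sum(n for _, n in uploads)
global_model = sum(mean * n for mean, n in uploads) / total   # weighted mean
print(global_model.shape, f"<- {total} devices aggregated via {len(uploads)} heads")
```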