Search Results (258)

Search Parameters:
Keywords = ubiquitous intelligence

23 pages, 1402 KiB  
Article
Adaptive Scheduling in Cognitive IoT Sensors for Optimizing Network Performance Using Reinforcement Learning
by Muhammad Nawaz Khan, Sokjoon Lee and Mohsin Shah
Appl. Sci. 2025, 15(10), 5573; https://doi.org/10.3390/app15105573 - 16 May 2025
Abstract
Cognitive sensors are embedded in home appliances and other surrounding devices to create a connected, intelligent environment that provides pervasive and ubiquitous services. These sensors frequently generate massive amounts of data with many redundant and repeated bit values. Cognitive sensors are always resource-constrained, and if a careful strategy is not applied at deployment time, sensors become disconnected, degrading the system’s performance in terms of energy, reconfiguration, delay, latency, and packet loss. To address these challenges and establish a connected network, a system is needed that evaluates the contents of detected data values and dynamically switches sensor states accordingly. In this article, we propose a reinforcement learning-based mechanism called “Adaptive Scheduling in Cognitive IoT Sensors for Optimizing Network Performance Using Reinforcement Learning (ASC-RL)”. The proposed scheme uses three types of parameters for reinforcement learning: internal parameters (states), environmental parameters (sensing values), and history parameters (energy levels, roles, number of switching states), and derives a function for the state-changing policy. Based on this policy, sensors adjust and adapt to different energy states. These states minimize extensive sensing, reduce costly processing, and lessen frequent communication. The proposed scheme reduces network traffic and optimizes network performance in terms of network energy. The evaluation centers on joint Gaussian distributions and event correlations, with derived results for signal strength, noise, prediction accuracy, and energy efficiency under a combined reward score. Through comparative analysis, ASC-RL enhances the overall system’s performance by 3.5% in detection and transition probabilities. The false-alarm probability is reduced to 25.7%, the transmission success rate is increased by 6.25%, and the energy efficiency and reliability threshold are increased by 35%.
(This article belongs to the Collection Trends and Prospects in Multimedia)
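The state-switching policy at the heart of ASC-RL can be illustrated with tabular Q-learning. Below is a minimal sketch, assuming a discretized battery level as the RL state and three duty-cycle modes as actions; the mode names, reward shaping, and hyperparameters are illustrative stand-ins, not the paper's actual formulation:

```python
import numpy as np

# Illustrative setup (not from the paper): the RL state is a discretized
# battery level; the three actions are duty-cycle modes.
MODES = ["sleep", "sense", "transmit"]
ENERGY_LEVELS = 10

rng = np.random.default_rng(0)
Q = np.zeros((ENERGY_LEVELS, len(MODES)))      # Q-table: state x action
alpha, gamma, eps = 0.1, 0.9, 0.1              # learning rate, discount, exploration

def reward(energy, mode):
    """Toy combined reward: value of the activity minus an energy cost
    that weighs more heavily as the battery drains."""
    gain = {"sleep": 0.0, "sense": 0.5, "transmit": 1.0}[mode]
    cost = {"sleep": 0.0, "sense": 0.4, "transmit": 1.2}[mode]
    return gain - cost * (1.0 - energy / ENERGY_LEVELS)

for _ in range(20000):
    s = int(rng.integers(ENERGY_LEVELS))       # current battery state
    a = int(rng.integers(len(MODES))) if rng.random() < eps else int(Q[s].argmax())
    r = reward(s, MODES[a])
    s_next = max(s - 1, 0) if a > 0 else s     # acting drains the battery
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])

# The learned greedy policy maps each battery level to a duty-cycle mode.
print({level: MODES[int(Q[level].argmax())] for level in range(ENERGY_LEVELS)})
```

With this toy reward, the greedy policy shifts toward lighter modes as the battery drains, which is the qualitative behavior the abstract describes.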

25 pages, 3454 KiB  
Article
Design Principles of a Flat-Pack Electronic Sensor Kit with Intelligent User Interface Calibrations: A Case Study of Monitoring Sedentary Behavior in Workplace
by Ananda Maiti, Vanessa Ward, Amy Hilliard, Anjia Ye and Scott J. Pedersen
Appl. Sci. 2025, 15(9), 5111; https://doi.org/10.3390/app15095111 - 4 May 2025
Abstract
Consumer-grade electronics are ubiquitous and can be used to manage a range of devices for various purposes. Such devices can be both mobile and stationary, and they have become increasingly intelligent in operation, utilizing complex software. The circular economy is a trend in which everyday utility items are designed with recyclable, easily disposable materials. The materials may not be durable, but they make the items easy to dispose of at the end of their life. In this paper, we extend the concept of the circular economy to the design of electronic devices using cardboard as a flat-pack surface material. We propose a small-device design technique and discuss its associated issues, enabling novice users to construct, install, and calibrate custom-built electronic devices. This takes the form of a kit that includes a cardboard flat-pack, a flexible electronic circuit board, and an instruction manual. We also discuss a software design algorithm that can be used to calibrate the newly constructed device. We consider only stationary devices and investigate the proposed devices and software with a sedentary behavior monitoring application. A trial with human participants was conducted to determine the ease of constructing and initially installing the devices. The results show that the proposed approach is highly feasible for novice users and fosters a high degree of trust in such devices.
(This article belongs to the Section Green Sustainable Science and Technology)
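The UI-guided calibration the abstract mentions can be as simple as a two-point routine in which the user records readings for the empty and occupied states. A minimal sketch, assuming a distance-style presence sensor; the routine and numbers are illustrative, not the paper's algorithm:

```python
import statistics

def calibrate(samples_empty, samples_occupied):
    """Two-point UI-guided calibration: the user records sensor readings for
    an empty desk, then while seated; the threshold splits the two clusters."""
    return (statistics.mean(samples_empty) + statistics.mean(samples_occupied)) / 2

def is_sitting(reading, threshold):
    # e.g., an ultrasonic distance reading drops when someone is seated
    return reading < threshold

# Example: distances in cm captured during the two guided steps.
threshold = calibrate([120, 118, 121, 119], [45, 50, 48, 52])
print(threshold, is_sitting(47, threshold))   # 84.125 True
```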

32 pages, 6581 KiB  
Article
Unveiling Technological Evolution with a Patent-Based Dynamic Topic Modeling Framework: A Case Study of Advanced 6G Technologies
by Jieru Jiang, Fangli Ying and Riyad Dhuny
Appl. Sci. 2025, 15(7), 3783; https://doi.org/10.3390/app15073783 - 30 Mar 2025
Abstract
As the next frontier in wireless communication, the landscape of 6G technologies is characterized by rapid evolution and increasing complexity, driven by the need to address global challenges such as ubiquitous connectivity, ultra-high data rates, and intelligent applications. Given the significance of 6G in shaping the future of communication and its potential to revolutionize various industries, understanding the technological evolution within this domain is crucial. Traditional topic modeling approaches fall short in adapting to the rapidly changing and highly complex nature of patent-based topic analysis in this field, impeding a comprehensive understanding of technological evolution in terms of capturing temporal changes and uncovering semantic relationships. This study explores the evolving technologies of 6G in patent data through a novel dynamic topic modeling framework. Specifically, this work harnesses the power of large language models to reduce noise during patent data pre-processing using a prompt-based summarization technique. We then propose an enhanced dynamic topic modeling framework based on BERTopic to capture the time-aware features of topics evolving across periods. Additionally, we conduct a comparative analysis of contextual embedding techniques and leverage an SBERT model pre-trained on patent data to extract content semantics from domain-specific patent data within this framework. Finally, we apply weak signal analysis to identify emerging topics in 6G technology across periods, which makes the topic evolution analysis more interpretable than traditional topic modeling methods. The empirical results, validated by human experts, show that the proposed method can effectively uncover patterns of technological evolution, supporting strategic decision-making in a highly competitive and rapidly evolving technological sector.
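The pipeline the abstract outlines (LLM-based summarization, domain SBERT embeddings, BERTopic over time) maps onto off-the-shelf tooling. A minimal sketch, assuming a corpus of LLM-cleaned patent summaries with filing years; the synthetic corpus and the general-purpose all-MiniLM-L6-v2 encoder are placeholders for the authors' patent data and patent-pretrained SBERT:

```python
# pip install bertopic sentence-transformers
from bertopic import BERTopic
from sentence_transformers import SentenceTransformer

# Placeholder corpus: in practice, `docs` would hold LLM-summarized patent
# texts and `timestamps` their filing years.
themes = ["terahertz antenna design", "reconfigurable intelligent surfaces",
          "satellite handover control", "AI-native air interface"]
docs = [f"patent abstract on {themes[i % len(themes)]}, variant {i}" for i in range(400)]
timestamps = [2016 + (i % 8) for i in range(400)]

# General-purpose encoder standing in for a patent-pretrained SBERT model.
embedder = SentenceTransformer("all-MiniLM-L6-v2")

topic_model = BERTopic(embedding_model=embedder)
topics, _ = topic_model.fit_transform(docs)

# Topic frequency per time bin: the raw material for weak-signal analysis
# (small topics whose frequency grows quickly are emerging candidates).
over_time = topic_model.topics_over_time(docs, timestamps, nr_bins=8)
print(over_time.head())
```

Weak-signal analysis then scans the `over_time` table for topics with low but rapidly growing per-bin frequency.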

28 pages, 1162 KiB  
Article
AHP-Based Evaluation of Discipline-Specific Information Services in Academic Libraries Under Digital Intelligence
by Simeng Zhang, Tao Zhang and Xi Wang
Information 2025, 16(3), 245; https://doi.org/10.3390/info16030245 - 18 Mar 2025
Abstract
Over recent years, digital and intelligent technologies have been driving the transformation of discipline-specific information services in academic libraries toward user experience optimization and service innovation. This study constructs a quality evaluation framework for discipline-specific information services in academic libraries, incorporating digital-intelligence characteristics to provide theoretical references and evaluation guidelines for enhancing service quality and user satisfaction in an information-ubiquitous environment. Drawing on the LibQUAL+™, WebQUAL, and E-SERVQUAL service quality evaluation models and integrating expert interviews with the contextual characteristics of academic library discipline-specific information services, this study develops a comprehensive evaluation system comprising six dimensions—Perceived Information Quality, Information Usability, Information Security, Interactive Feedback, Tool Application, and User Experience—with fifteen specific indicators. The analytic hierarchy process (AHP) was applied to determine the weights of these indicators. To validate the practicality of the evaluation system, a fuzzy comprehensive evaluation method was employed for an empirical analysis, using the discipline-specific information services of Tsinghua University Library in China as a case study. The evaluation results indicate that the overall quality of discipline-specific information services at Tsinghua University Library is satisfactory, with Tool Application, Perceived Information Quality, and Information Usability identified as the key factors influencing service quality. To further enhance discipline-specific information services in academic libraries, emphasis should be placed on service intelligence and precision-driven optimization, strengthening user experience, interaction and feedback mechanisms, and data security measures. These improvements will better meet the diverse needs of users and enhance the overall effectiveness of discipline-specific information services.
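The AHP step reduces to computing the principal eigenvector of a pairwise comparison matrix and checking its consistency ratio. A minimal sketch with an illustrative 3x3 judgment matrix; the paper's actual comparisons and dimension set are not reproduced here:

```python
import numpy as np

# Illustrative 3x3 pairwise comparison matrix on Saaty's 1-9 scale for three
# hypothetical dimensions; the paper's actual judgments are not reproduced.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 3.0],
              [1/5, 1/3, 1.0]])

# Priority weights: the normalized principal eigenvector of A.
eigvals, eigvecs = np.linalg.eig(A)
k = eigvals.real.argmax()
weights = eigvecs[:, k].real
weights /= weights.sum()

# Consistency check: CI = (lambda_max - n) / (n - 1), CR = CI / RI.
n = A.shape[0]
CI = (eigvals.real.max() - n) / (n - 1)
RI = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24}[n]   # Saaty's random index
CR = CI / RI                                    # CR < 0.1 is conventionally acceptable

print("weights:", weights.round(3), "CR:", round(CR, 3))
```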

22 pages, 1180 KiB  
Article
Implementation of an Internet of Things Architecture to Monitor Indoor Air Quality: A Case Study During Sleep Periods
by Afonso Mota, Carlos Serôdio, Ana Briga-Sá and Antonio Valente
Sensors 2025, 25(6), 1683; https://doi.org/10.3390/s25061683 - 8 Mar 2025
Abstract
Most human time is spent indoors, and due to the pandemic, monitoring indoor air quality (IAQ) has become more crucial. In this study, an IoT (Internet of Things) architecture is implemented to monitor IAQ parameters, including CO2 and particulate matter (PM). An ESP32-C6-based device is developed to measure sensor data and send them, using the MQTT protocol, to a remote InfluxDBv2 database instance, where the data are stored and visualized. The Python 3.11 scripting language is used to automate Flux queries to the database, allowing a more in-depth interpretation of the data. The implemented system makes it possible to analyze two measured sleep scenarios: one with the door slightly open and one with the door closed. Results indicate that sleeping with the door slightly open causes CO2 levels to rise slowly and remain at lower concentrations, whereas with the door closed, CO2 levels rise faster and exceed the maximum recommended values. This demonstrates the benefits of ventilation in maintaining IAQ. The developed system can be used for sensing in different environments, such as schools or offices, so that an IAQ assessment can be made. Based on the generated data, predictive models can be designed to support decisions on intelligent natural ventilation systems, achieving an optimized, efficient, and ubiquitous solution to moderate the IAQ.
(This article belongs to the Section Internet of Things)
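The analysis layer the abstract describes, Python automating Flux queries against InfluxDBv2, can be sketched with the official influxdb-client package. The URL, token, org, bucket, and field names below are assumptions, not the authors' deployment:

```python
# pip install influxdb-client
from influxdb_client import InfluxDBClient

# Connection details and bucket/measurement/field names are placeholders.
client = InfluxDBClient(url="http://localhost:8086", token="my-token", org="my-org")

flux = '''
from(bucket: "iaq")
  |> range(start: -8h)                        // roughly one sleep period
  |> filter(fn: (r) => r._measurement == "air" and r._field == "co2")
  |> aggregateWindow(every: 5m, fn: mean)     // smooth to 5-minute means
'''

for table in client.query_api().query(flux):
    for rec in table.records:
        # Flag readings above a common indoor guideline (~1000 ppm CO2).
        if rec.get_value() is not None and rec.get_value() > 1000:
            print(rec.get_time(), round(rec.get_value()), "ppm (above guideline)")
```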

31 pages, 875 KiB  
Article
Hierarchical Traffic Engineering in 3D Networks Using QoS-Aware Graph-Based Deep Reinforcement Learning
by Robert Kołakowski, Lechosław Tomaszewski, Rafał Tępiński and Sławomir Kukliński
Electronics 2025, 14(5), 1045; https://doi.org/10.3390/electronics14051045 - 6 Mar 2025
Abstract
Ubiquitous connectivity is envisioned through the integration of terrestrial networks (TNs) and non-terrestrial networks (NTNs). However, NTNs face multiple routing and Quality of Service (QoS) provisioning challenges due to the mobility of network nodes. Distributed Software-Defined Networking (SDN) combined with Multi-Agent Deep Reinforcement Learning (MADRL) is widely used to introduce programmability and intelligent Traffic Engineering (TE) in TNs, yet applying DRL to NTNs is hindered by frequently changing state sizes, model scalability, and coordination issues. This paper introduces 3DQR, a novel TE framework that combines hierarchical multi-controller SDN, hierarchical MADRL based on Graph Neural Networks (GNNs), and network topology prediction for QoS path provisioning, effective load distribution, and flow-rejection minimisation in future 3D networks. To enhance SDN scalability, metric and path-operation abstractions are introduced to facilitate the coordination of domain agents by the global agent. To the best of the authors’ knowledge, 3DQR is the first routing scheme to integrate MADRL and GNNs for optimising centralised routing and path allocation in SDN-based 3D mobile networks. The evaluations show up to a 14% reduction in flow-rejection rate, a 50% improvement in traffic distribution, and effective QoS class prioritisation compared to baseline techniques. 3DQR also exhibits strong transfer capabilities, giving consistent performance gains in previously unseen environments.
(This article belongs to the Special Issue Future Generation Non-Terrestrial Networks)
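A key reason for using GNNs here is that one set of weights handles topologies whose node count keeps changing as satellites move. The sketch below is a generic GCN encoder with a bilinear link scorer in plain PyTorch, illustrating that property; it is not the 3DQR architecture itself, and all dimensions are illustrative:

```python
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    """One graph convolution: aggregate neighbor features, then transform."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj_norm):
        # x: (nodes, in_dim); adj_norm: normalized adjacency, e.g.
        # D^-1/2 (A + I) D^-1/2. The same weights work for any node count.
        return torch.relu(self.lin(adj_norm @ x))

class LinkScorer(nn.Module):
    """Scores candidate links for path selection from node embeddings."""
    def __init__(self, feat_dim, hid=64):
        super().__init__()
        self.enc1, self.enc2 = GCNLayer(feat_dim, hid), GCNLayer(hid, hid)

    def forward(self, x, adj_norm):
        h = self.enc2(self.enc1(x, adj_norm), adj_norm)
        return h @ h.T                         # (nodes, nodes) link scores

# Toy 4-node topology with 3 node features (e.g., load, delay, energy).
A = torch.tensor([[0, 1, 1, 0], [1, 0, 1, 0],
                  [1, 1, 0, 1], [0, 0, 1, 0]], dtype=torch.float)
A_hat = A + torch.eye(4)
d = A_hat.sum(1)
adj_norm = A_hat / torch.sqrt(d[:, None] * d[None, :])
print(LinkScorer(3)(torch.rand(4, 3), adj_norm).shape)  # torch.Size([4, 4])
```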

16 pages, 546 KiB  
Article
Enhancing Small Language Models for Graph Tasks Through Graph Encoder Integration
by Dongryul Oh, Sujin Kang, Heejin Kim and Dongsuk Oh
Appl. Sci. 2025, 15(5), 2418; https://doi.org/10.3390/app15052418 - 24 Feb 2025
Abstract
Small language models (SLMs) are increasingly utilized for on-device applications due to their ability to ensure user privacy, reduce inference latency, and operate independently of cloud infrastructure. However, their performance is often limited when processing complex data structures such as graphs, which are ubiquitous in real-world datasets such as social networks and system interactions. Graphs inherently encode intricate structural dependencies, requiring models to effectively capture both local and global relationships. Traditional language models, designed primarily for text data, struggle to address these requirements, leading to suboptimal performance in graph-related tasks. To overcome this limitation, we propose a novel graph encoder-based prompt tuning framework that integrates a graph convolutional network (GCN) with a graph transformer. By leveraging the complementary strengths of the GCN for local structural modeling and the graph transformer for capturing global relationships, our method enables SLMs to process graph data effectively. This integration significantly enhances the ability of SLMs to handle graph-centric tasks while maintaining the efficiency required for resource-constrained devices. The experimental results show that our approach not only improves the performance of SLMs on various graph benchmarks but also achieves results that closely approach the performance of a large language model (LLM). This work highlights the potential of extending SLMs to graph-based applications and advancing the capabilities of on-device artificial intelligence.
(This article belongs to the Section Computing and Artificial Intelligence)
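One common way to wire a graph encoder into a language model for prompt tuning is to project the pooled graph embedding into a handful of soft prompt tokens prepended to the token embeddings. A minimal sketch of that bridge, assuming pooled output from a GCN/graph-transformer encoder; shapes and names are illustrative, not the paper's exact design:

```python
import torch
import torch.nn as nn

class GraphToPrompt(nn.Module):
    """Projects a pooled graph embedding into soft prompt tokens that are
    prepended to the language model's input embeddings."""
    def __init__(self, graph_dim, lm_dim, num_prompt_tokens=8):
        super().__init__()
        self.proj = nn.Linear(graph_dim, num_prompt_tokens * lm_dim)
        self.num_tokens, self.lm_dim = num_prompt_tokens, lm_dim

    def forward(self, graph_emb, token_embs):
        # graph_emb: (batch, graph_dim), pooled from the graph encoder.
        # token_embs: (batch, seq_len, lm_dim), from the frozen SLM embedder.
        prompts = self.proj(graph_emb).view(-1, self.num_tokens, self.lm_dim)
        return torch.cat([prompts, token_embs], dim=1)   # prompt-augmented input

# Toy shapes: a 128-dim graph embedding feeding a 512-dim SLM.
bridge = GraphToPrompt(graph_dim=128, lm_dim=512)
out = bridge(torch.rand(2, 128), torch.rand(2, 16, 512))
print(out.shape)  # torch.Size([2, 24, 512])
```

Only the bridge (and optionally the graph encoder) is trained, which keeps the tuning footprint small enough for on-device use.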

36 pages, 2247 KiB  
Review
RNA Structure: Past, Future, and Gene Therapy Applications
by William A. Haseltine, Kim Hazel and Roberto Patarca
Int. J. Mol. Sci. 2025, 26(1), 110; https://doi.org/10.3390/ijms26010110 - 26 Dec 2024
Abstract
First believed to be a simple intermediary between the information encoded in deoxyribonucleic acid and that functionally displayed in proteins, ribonucleic acid (RNA) is now known to have many functions through its abundance and intricate, ubiquitous, diverse, and dynamic structure. About 70–90% of the human genome is transcribed into protein-coding and noncoding RNAs, which act as main determinants, along with regulatory sequences, of cellular-to-populational biological diversity. From the nucleotide sequence or primary structure, through Watson–Crick pairing self-folding or secondary structure, to compaction via longer-distance Watson–Crick and non-Watson–Crick interactions or tertiary structure, interactions with RNA or other biopolymers or quaternary structure, and interactions with metabolites and biomolecules or quinary structure, RNA structure plays a critical role in RNA’s lifecycle from transcription to decay and in many cellular processes. In contrast to the success of three-dimensional protein structure prediction using AlphaFold, predicting RNA tertiary and higher-order structures remains challenging. However, approaches involving machine learning and artificial intelligence, sequencing of RNA and its modifications, and structural analyses at the single-cell and intact-tissue levels, among others, provide an optimistic outlook for the continued development and refinement of RNA-based applications. Here, we highlight those in gene therapy.
(This article belongs to the Special Issue Targeting RNA Molecules)

23 pages, 1155 KiB  
Article
From Theory to Practice: Implementing Meta-Learning in 6G Wireless Infrastructure
by Arooba Zeshan, Messaoud Ahmed Ouameur, Muhammad Zeshan Alam and Tuan-Anh D. Le
Telecom 2024, 5(4), 1263-1285; https://doi.org/10.3390/telecom5040063 - 6 Dec 2024
Abstract
The vision of the sixth generation of communication systems, commonly known as 6G, entails a connected world that provides ubiquitous connectivity and fosters the digital transformation of society. As the number of devices, services, and users continues to grow, intelligent solutions are expected to facilitate this transformation. This paper considers meta-learning as a pivotal paradigm for 6G systems, detailing its principles, algorithms, and theoretical underpinnings. The methodology involves integrating meta-learning with three potential 6G technologies: RF-based communication systems, optical communication systems, and molecular communication systems. The findings reveal the distinct characteristics of these technologies and demonstrate the potential benefits and challenges of incorporating meta-learning algorithms. Practical implications highlight how meta-learning can enhance the efficiency and adaptability of 6G systems, addressing the growing demand for intelligent and seamless communication networks.
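Meta-learning's core loop, adapting quickly to a new task from a shared initialization, is easiest to see in MAML form. A minimal sketch on toy sinusoid regression tasks (standing in for, say, changing channel conditions); the task family and hyperparameters are illustrative, not from the paper:

```python
import torch

def net(params, x):
    """Tiny functional MLP so we can forward with adapted 'fast' weights."""
    w1, b1, w2, b2 = params
    return torch.relu(x @ w1 + b1) @ w2 + b2

def sample(a, b, n=20):
    """Data from one task: y = a*sin(x + b), a toy stand-in for a channel."""
    x = 6 * torch.rand(n, 1) - 3
    return x, a * torch.sin(x + b)

params = [(0.1 * torch.randn(1, 64)).requires_grad_(),
          torch.zeros(64, requires_grad=True),
          (0.1 * torch.randn(64, 1)).requires_grad_(),
          torch.zeros(1, requires_grad=True)]
opt, inner_lr = torch.optim.Adam(params, lr=1e-3), 0.01

for step in range(2000):
    opt.zero_grad()
    meta_loss = 0.0
    for _ in range(4):                               # tasks per meta-batch
        a, b = 0.5 + 2 * torch.rand(1), 3.14 * torch.rand(1)
        xs, ys = sample(a, b)                        # support set
        loss = ((net(params, xs) - ys) ** 2).mean()
        grads = torch.autograd.grad(loss, params, create_graph=True)
        fast = [p - inner_lr * g for p, g in zip(params, grads)]  # one inner step
        xq, yq = sample(a, b)                        # query set, same task
        meta_loss = meta_loss + ((net(fast, xq) - yq) ** 2).mean()
    meta_loss.backward()                             # outer (meta) update
    opt.step()
```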

40 pages, 40760 KiB  
Article
Dynamic-Max-Value ReLU Functions for Adversarially Robust Machine Learning Models
by Korn Sooksatra and Pablo Rivas
Mathematics 2024, 12(22), 3551; https://doi.org/10.3390/math12223551 - 13 Nov 2024
Cited by 3
Abstract
The proliferation of deep learning has transformed artificial intelligence, demonstrating prowess in domains such as image recognition, natural language processing, and robotics. Nonetheless, deep learning models are susceptible to adversarial examples: well-crafted inputs that can induce erroneous predictions, particularly in safety-critical contexts. Researchers actively pursue countermeasures such as adversarial training and robust optimization to fortify model resilience. This vulnerability is notably accentuated by the ubiquitous use of ReLU functions in deep learning models. A previous study proposed an innovative mitigation, presenting a capped ReLU function tailored to bolster neural network robustness against adversarial examples; however, that approach had a scalability problem. To address this limitation, we introduce the dynamic-max-value ReLU function and undertake a series of comprehensive experiments across diverse datasets.
(This article belongs to the Special Issue Advances in Trustworthy and Robust Artificial Intelligence)
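A capped ReLU whose upper bound is trainable gives the flavor of a dynamic-max-value activation. The sketch below interprets the "dynamic max value" as a learnable per-layer cap, which is an assumption for illustration rather than the paper's exact definition:

```python
import torch
import torch.nn as nn

class DynamicMaxReLU(nn.Module):
    """Capped ReLU with a trainable upper bound: f(x) = min(relu(x), m).
    A bounded activation limits how far adversarial perturbations can push
    downstream activations; here the cap m is learned with the model."""
    def __init__(self, init_max=6.0):
        super().__init__()
        self.max_val = nn.Parameter(torch.tensor(init_max))

    def forward(self, x):
        return torch.minimum(torch.relu(x), self.max_val.abs())

# Drop-in replacement for nn.ReLU in a small classifier.
net = nn.Sequential(nn.Linear(784, 256), DynamicMaxReLU(), nn.Linear(256, 10))
print(net(torch.rand(4, 784)).shape)  # torch.Size([4, 10])
```

Letting each layer learn its own cap is one way to avoid the fixed-cap tuning that makes a static capped ReLU hard to scale across architectures.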

16 pages, 1904 KiB  
Article
A Reconfigurable Architecture for Industrial Control Systems: Overview and Challenges
by Lisi Liu, Zijie Xu and Xiaobin Qu
Machines 2024, 12(11), 793; https://doi.org/10.3390/machines12110793 - 9 Nov 2024
Cited by 2
Abstract
The closed architecture and stand-alone operation model of traditional industrial control systems limit their ability to leverage ubiquitous infrastructure resources for more flexible and intelligent development. This restriction hinders their ability to rapidly, economically, and sustainably respond to mass customization demands. Existing proposals for open and networked architectures have failed to break the vicious cycle of closed architectures and stand-alone operation models because they do not address the core issue: the tight coupling among the control, infrastructure, and actuator domains. This paper proposes a reconfigurable architecture that decouples these domains, structuring the control system across three planes: control, infrastructure, and actuator. The computer numerical control (CNC) system serves as a primary example to illustrate this reconfigurable architecture. After reviewing open and networked architectures and discussing the characteristics of this reconfigurable architecture, this paper identifies three key challenges: deterministic control functionality, the decoupling of control modules from infrastructures, and the management of control modules, infrastructures, and actuators. Each challenge is examined in detail, and potential solutions are proposed based on emerging technologies.
(This article belongs to the Section Automation and Control Systems)
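The proposed decoupling can be pictured as three narrow interfaces, one per plane, so that a control module never binds directly to a host or an actuator. A minimal sketch with illustrative interface names, not taken from the paper:

```python
from abc import ABC, abstractmethod

class ControlModule(ABC):
    """Control plane: computes commands, unaware of where it runs."""
    @abstractmethod
    def step(self, feedback: dict) -> dict: ...

class Infrastructure(ABC):
    """Infrastructure plane: hosts and schedules control modules."""
    @abstractmethod
    def deploy(self, module: ControlModule) -> None: ...

class Actuator(ABC):
    """Actuator plane: applies commands and reports feedback."""
    @abstractmethod
    def apply(self, command: dict) -> dict: ...

class AxisPController(ControlModule):
    """Toy CNC axis controller: proportional position control."""
    def __init__(self, kp=0.5, target=100.0):
        self.kp, self.target = kp, target

    def step(self, feedback):
        return {"velocity": self.kp * (self.target - feedback["position"])}

# Because the planes meet only through these interfaces, the same control
# module can be redeployed to an edge node or cloud host unchanged.
print(AxisPController().step({"position": 90.0}))  # {'velocity': 5.0}
```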

24 pages, 9406 KiB  
Article
Lightweight Digit Recognition in Smart Metering System Using Narrowband Internet of Things and Federated Learning
by Vladimir Nikić, Dušan Bortnik, Milan Lukić, Dejan Vukobratović and Ivan Mezei
Future Internet 2024, 16(11), 402; https://doi.org/10.3390/fi16110402 - 31 Oct 2024
Cited by 1
Abstract
Replacing mechanical utility meters with digital ones is crucial due to the numerous benefits they offer: increased time resolution in measuring consumption, remote monitoring capabilities for operational efficiency, real-time data for informed decision-making, support for time-of-use billing, and integration with smart grids. These capabilities lead to enhanced customer service, reduced energy waste, and progress towards environmental sustainability goals. However, the cost of replacing mechanical meters with digital counterparts is a key factor contributing to the relatively slow roll-out of such devices. In this paper, we present a low-cost and power-efficient solution for retrofitting the existing metering infrastructure, based on state-of-the-art communication and artificial intelligence technologies. The edge device we developed contains a camera for capturing images of a dial meter, a 32-bit microcontroller capable of running the digit recognition algorithm, and an NB-IoT module with (E)GPRS fallback, which enables nearly ubiquitous connectivity even in difficult radio conditions. Our digit recognition methodology, based on on-device training and inference augmented with federated learning, achieves a high level of accuracy (97.01%) while minimizing energy consumption and the associated communication overhead (87 μWh per day on average).
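The federated learning component typically aggregates on-device models with federated averaging (FedAvg): the server forms a sample-size-weighted mean of client weights. A minimal sketch, with layer shapes and client dataset sizes as illustrative stand-ins for the meters' digit-recognition model:

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """Server-side FedAvg: average each layer across clients, weighted by
    the number of local training samples on each meter."""
    total = sum(client_sizes)
    return [sum(w[layer] * (n / total)
                for w, n in zip(client_weights, client_sizes))
            for layer in range(len(client_weights[0]))]

# Three meters with differently sized local digit datasets; the layer
# shapes stand in for the final layer of a small digit-recognition CNN.
rng = np.random.default_rng(0)
shapes = [(32, 10), (10,)]
clients = [[rng.normal(size=s) for s in shapes] for _ in range(3)]
sizes = [120, 340, 90]

global_weights = fed_avg(clients, sizes)
print([w.shape for w in global_weights])  # [(32, 10), (10,)]
```

Only weight updates cross the NB-IoT link, which fits the tight energy and bandwidth budget the abstract reports.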

18 pages, 768 KiB  
Review
Artificial General Intelligence for the Detection of Neurodegenerative Disorders
by Yazdan Ahmad Qadri, Khurshid Ahmad and Sung Won Kim
Sensors 2024, 24(20), 6658; https://doi.org/10.3390/s24206658 - 16 Oct 2024
Cited by 3
Abstract
Parkinson’s disease and Alzheimer’s disease are among the most common neurodegenerative disorders. These diseases are correlated with advancing age and are hence becoming increasingly prevalent in developed countries due to aging demographics. Several tools are used to predict and diagnose these diseases, including pathological and genetic tests, radiological scans, and clinical examinations. Artificial intelligence is evolving toward artificial general intelligence, which mimics the human learning process. Large language models can use an enormous volume of online and offline resources to gain knowledge and apply it to different types of tasks. This work presents an understanding of two major neurodegenerative disorders, an overview of artificial general intelligence, and an assessment of its efficacy in detecting and predicting these disorders. A detailed discussion of detecting these diseases by analyzing diagnostic data with artificial general intelligence is presented, along with an Internet of Things-based framework for ubiquitous monitoring and treatment, and an outline of future research opportunities based on the challenges in this area.
(This article belongs to the Section Internet of Things)

15 pages, 474 KiB  
Article
Federated Learning in Dynamic and Heterogeneous Environments: Advantages, Performances, and Privacy Problems
by Fabio Liberti, Davide Berardi and Barbara Martini
Appl. Sci. 2024, 14(18), 8490; https://doi.org/10.3390/app14188490 - 20 Sep 2024
Cited by 4
Abstract
Federated Learning (FL) represents a promising distributed learning methodology particularly suitable for dynamic and heterogeneous environments characterized by the presence of Internet of Things (IoT) devices and Edge Computing infrastructures. In this context, FL makes it possible to train machine learning models directly on edge devices, mitigating data privacy concerns and reducing the latency incurred by transmitting data to central servers. However, the heterogeneity of computational resources, the variability of network connections, and the mobility of IoT devices pose significant challenges to the efficient implementation of FL. This work explores advanced techniques for dynamic model adaptation and heterogeneous data management in edge computing scenarios, proposing innovative solutions to improve the robustness and efficiency of federated learning. We present a solution based on Kubernetes that enables the fast application of FL models to heterogeneous architectures. Experimental results demonstrate that our proposals can improve the performance of FL in IoT and edge environments, offering new perspectives for the practical implementation of decentralized intelligent systems.
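One concrete way Kubernetes helps here is elastic scaling of FL client workers as heterogeneous edge nodes join or leave. A minimal sketch using the official Kubernetes Python client, assuming a reachable cluster and an existing fl-client Deployment in a federated namespace (both placeholder names, not the paper's setup):

```python
# pip install kubernetes
from kubernetes import client, config

config.load_kube_config()                 # or load_incluster_config() in-cluster
apps = client.AppsV1Api()

def scale_fl_clients(replicas: int,
                     name: str = "fl-client",       # placeholder Deployment name
                     namespace: str = "federated"):  # placeholder namespace
    """Resize the FL client Deployment as edge nodes join or leave."""
    apps.patch_namespaced_deployment_scale(
        name, namespace, {"spec": {"replicas": replicas}})

# E.g., match the number of workers to the currently reachable devices.
scale_fl_clients(replicas=8)
```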

19 pages, 1677 KiB  
Review
Beyond Clinical Factors: Harnessing Artificial Intelligence and Multimodal Cardiac Imaging to Predict Atrial Fibrillation Recurrence Post-Catheter Ablation
by Edward T. Truong, Yiheng Lyu, Abdul Rahman Ihdayhid, Nick S. R. Lan and Girish Dwivedi
J. Cardiovasc. Dev. Dis. 2024, 11(9), 291; https://doi.org/10.3390/jcdd11090291 - 19 Sep 2024
Cited by 2
Abstract
Atrial fibrillation (AF) is the most common type of cardiac arrhythmia, with catheter ablation being a key alternative to medical treatment for restoring normal sinus rhythm. Despite advances in understanding AF pathogenesis, approximately 35% of patients experience AF recurrence at 12 months after catheter ablation. Therefore, accurate prediction of AF recurrence occurring after catheter ablation is important for patient selection and management. Conventional methods for predicting post-catheter ablation AF recurrence, which involve the use of univariate predictors and scoring systems, have played a supportive role in clinical decision-making. In an ever-changing landscape where technology is becoming ubiquitous within medicine, cardiac imaging and artificial intelligence (AI) could prove pivotal in enhancing AF recurrence predictions by providing data with independent predictive power and identifying key relationships in the data. This review comprehensively explores the existing methods for predicting the recurrence of AF following catheter ablation from different perspectives, including conventional predictors and scoring systems, cardiac imaging-based methods, and AI-based methods developed using a combination of demographic and imaging variables. By summarising state-of-the-art technologies, this review serves as a roadmap for developing future prediction models with enhanced accuracy, generalisability, and explainability, potentially contributing to improved care for patients with AF.
