Search Results (281)

Search Parameters:
Keywords = large-scale IoT systems

27 pages, 2960 KiB  
Article
(H-DIR)²: A Scalable Entropy-Based Framework for Anomaly Detection and Cybersecurity in Cloud IoT Data Centers
by Davide Tosi and Roberto Pazzi
Sensors 2025, 25(15), 4841; https://doi.org/10.3390/s25154841 - 6 Aug 2025
Viewed by 154
Abstract
Modern cloud-based Internet of Things (IoT) infrastructures face increasingly sophisticated and diverse cyber threats that challenge traditional detection systems in terms of scalability, adaptability, and explainability. In this paper, we present (H-DIR)², a hybrid entropy-based framework designed to detect and mitigate anomalies in large-scale heterogeneous networks. The framework combines Shannon entropy analysis with Associated Random Neural Networks (ARNNs) and integrates semantic reasoning through RDF/SPARQL, all embedded within a distributed Apache Spark 3.5.0 pipeline. We validate (H-DIR)² across three critical attack scenarios—SYN Flood (TCP), DAO-DIO (RPL), and NTP amplification (UDP)—using real-world datasets. The system achieves a mean detection latency of 247 ms and an AUC of 0.978 for SYN floods. For DAO-DIO manipulations, it increases the packet delivery ratio from 81.2% to 96.4% (p < 0.01), and for NTP amplification, it reduces the peak load by 88%. The framework achieves vertical scalability across millions of endpoints and horizontal scalability on datasets exceeding 10 TB. All code, datasets, and Docker images are provided to ensure full reproducibility. By coupling adaptive neural inference with semantic explainability, (H-DIR)² offers a transparent and scalable solution for cloud–IoT cybersecurity, establishing a robust baseline for future developments in edge-aware and zero-day threat detection. Full article
(This article belongs to the Special Issue Privacy and Cybersecurity in IoT-Based Applications)
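
As a concrete illustration of the entropy signal at the core of this approach: the sketch below is not the authors' (H-DIR)² pipeline, but a minimal windowed Shannon-entropy detector over a per-packet feature (here, destination ports). A sharp entropy drop, meaning traffic concentrating on a few values, is the classic flood signature. Function names, window contents, and the threshold are illustrative.

```python
import math
from collections import Counter

def shannon_entropy(values):
    """Shannon entropy (bits) of the empirical distribution of `values`."""
    counts = Counter(values)
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def detect_entropy_anomalies(windows, threshold=1.0):
    """Flag windows whose entropy deviates sharply from the running mean.

    `windows` is a list of lists of per-packet feature values
    (e.g., destination ports per time slice).
    """
    entropies = [shannon_entropy(w) for w in windows]
    alerts = []
    for i, h in enumerate(entropies):
        baseline = sum(entropies[:i]) / i if i else h
        if abs(h - baseline) > threshold:  # illustrative fixed threshold
            alerts.append((i, h, baseline))
    return alerts

# Toy example: normal windows see diverse ports; the last window is
# concentrated on one port, as in a flood targeting a single service.
normal = [[80, 443, 22, 8080, 53, 443, 80, 25]] * 4
flood = [[80] * 8]
print(detect_entropy_anomalies(normal + flood))  # flags window 4
```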

17 pages, 1850 KiB  
Article
Cloud–Edge Collaborative Model Adaptation Based on Deep Q-Network and Transfer Feature Extraction
by Jue Chen, Xin Cheng, Yanjie Jia and Shuai Tan
Appl. Sci. 2025, 15(15), 8335; https://doi.org/10.3390/app15158335 - 26 Jul 2025
Viewed by 356
Abstract
With the rapid development of smart devices and the Internet of Things (IoT), the explosive growth of data has placed increasingly higher demands on real-time processing and intelligent decision making. Cloud-edge collaborative computing has emerged as a mainstream architecture to address these challenges. However, in sky-ground integrated systems, the limited computing capacity of edge devices and the inconsistency between cloud-side fusion results and edge-side detection outputs significantly undermine the reliability of edge inference. To overcome these issues, this paper proposes a cloud-edge collaborative model adaptation framework that integrates deep reinforcement learning via Deep Q-Networks (DQN) with local feature transfer. The framework enables category-level dynamic decision making, allowing for selective migration of classification head parameters to achieve on-demand adaptive optimization of the edge model and enhance consistency between cloud and edge results. Extensive experiments conducted on a large-scale multi-view remote sensing aircraft detection dataset demonstrate that the proposed method significantly improves cloud-edge consistency. The detection consistency rate reaches 90%, with some scenarios approaching 100%. Ablation studies further validate the necessity of the DQN-based decision strategy, which clearly outperforms static heuristics. In the model adaptation comparison, the proposed method improves the detection precision of the A321 category from 70.30% to 71.00% and the average precision (AP) from 53.66% to 53.71%. For the A330 category, the precision increases from 32.26% to 39.62%, indicating strong adaptability across different target types. This study offers a novel and effective solution for cloud-edge model adaptation under resource-constrained conditions, enhancing both the consistency of cloud-edge fusion and the robustness of edge-side intelligent inference. Full article
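
To make the decision mechanics concrete: the paper trains a Deep Q-Network, but the same migrate-or-keep logic can be shown with a tiny tabular Q-learning toy, which is what the hedged sketch below does. States bucket the cloud-edge consistency gap for one category; the actions are keeping the edge model or migrating classification-head parameters; the environment, rewards, and constants are invented for illustration.

```python
import random

ACTIONS = (0, 1)          # 0 = keep edge model, 1 = migrate head params
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2
Q = {}

def q(state, action):
    return Q.get((state, action), 0.0)

def choose(state):
    if random.random() < EPSILON:                   # explore
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q(state, a))  # exploit

def update(state, action, reward, next_state):
    best_next = max(q(next_state, a) for a in ACTIONS)
    Q[(state, action)] = q(state, action) + ALPHA * (
        reward + GAMMA * best_next - q(state, action))

def simulate_step(state, action):
    # Toy environment: migrating when the consistency gap is large closes
    # it but costs bandwidth; migrating when it is small wastes resources.
    if action == 1:
        return (0, 0.8) if state >= 2 else (state, -0.2)
    return (state, 1.0 if state == 0 else -0.5)

state = 3  # start with a large cloud-edge consistency gap
for _ in range(500):
    action = choose(state)
    next_state, reward = simulate_step(state, action)
    update(state, action, reward, next_state)
    # occasionally jump to a random gap so every state is visited
    state = next_state if random.random() > 0.1 else random.randint(0, 3)

# greedy policy per gap bucket: large gaps should learn action 1 (migrate)
print({s: max(ACTIONS, key=lambda a: q(s, a)) for s in range(4)})
```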

24 pages, 2151 KiB  
Article
Federated Learning-Based Intrusion Detection in IoT Networks: Performance Evaluation and Data Scaling Study
by Nurtay Albanbay, Yerlan Tursynbek, Kalman Graffi, Raissa Uskenbayeva, Zhuldyz Kalpeyeva, Zhastalap Abilkaiyr and Yerlan Ayapov
J. Sens. Actuator Netw. 2025, 14(4), 78; https://doi.org/10.3390/jsan14040078 - 23 Jul 2025
Viewed by 812
Abstract
This paper presents a large-scale empirical study aimed at identifying the optimal local deep learning model and data volume for deploying intrusion detection systems (IDS) on resource-constrained IoT devices using federated learning (FL). While previous studies on FL-based IDS for IoT have primarily focused on maximizing accuracy, they often overlook the computational limitations of IoT hardware and the feasibility of local model deployment. In this work, three deep learning architectures—a deep neural network (DNN), a convolutional neural network (CNN), and a hybrid CNN+BiLSTM—are trained using the CICIoT2023 dataset within a federated learning environment simulating up to 150 IoT devices. The study evaluates how detection accuracy, convergence speed, and inference costs (latency and model size) vary across different local data scales and model complexities. Results demonstrate that CNN achieves the best trade-off between detection performance and computational efficiency, reaching ~98% accuracy with low latency and a compact model footprint. The more complex CNN+BiLSTM architecture yields slightly higher accuracy (~99%) at a significantly greater computational cost. Deployment tests on Raspberry Pi 5 devices confirm that all three models can be effectively implemented on real-world IoT edge hardware. These findings offer practical guidance for researchers and practitioners in selecting scalable and lightweight IDS models suitable for real-world federated IoT deployments, supporting secure and efficient anomaly detection in urban IoT networks. Full article
(This article belongs to the Special Issue Federated Learning: Applications and Future Directions)
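
The server-side aggregation step in such an FL setup is typically federated averaging (FedAvg). The snippet below is a generic FedAvg sketch rather than the paper's code: each client's parameters are averaged with weights proportional to its local dataset size; the model shapes and client counts are placeholders.

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """Weighted average of client model parameters (FedAvg).

    `client_weights`: one list of np.ndarrays per client.
    `client_sizes`: number of local training samples per client.
    """
    total = sum(client_sizes)
    layers = len(client_weights[0])
    return [
        sum(w[l] * (n / total) for w, n in zip(client_weights, client_sizes))
        for l in range(layers)
    ]

# Toy round: 3 simulated devices, a "model" of two parameter tensors each.
rng = np.random.default_rng(0)
clients = [[rng.normal(size=(4, 4)), rng.normal(size=(4,))] for _ in range(3)]
sizes = [1000, 500, 250]  # uneven local datasets, as in real IoT fleets
global_model = fed_avg(clients, sizes)
print([p.shape for p in global_model])  # [(4, 4), (4,)]
```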

21 pages, 5977 KiB  
Article
A Two-Stage Machine Learning Approach for Calving Detection in Rangeland Cattle
by Yuxi Wang, Andrés Perea, Huiping Cao, Mehmet Bakir and Santiago Utsumi
Agriculture 2025, 15(13), 1434; https://doi.org/10.3390/agriculture15131434 - 3 Jul 2025
Viewed by 440
Abstract
Monitoring parturient cattle during calving is crucial for reducing cow and calf mortality, enhancing reproductive and production performance, and minimizing labor costs. Traditional monitoring methods include direct animal inspection or the use of specialized sensors. These methods can be effective but are often impractical in large-scale ranching operations due to time, cost, and logistical constraints. To address this challenge, a network of low-power and long-range IoT sensors combining the Global Navigation Satellite System (GNSS) and tri-axial accelerometers was deployed to monitor 15 parturient Brangus cows in real time on a 700-hectare pasture at the Chihuahuan Desert Rangeland Research Center (CDRRC). A two-stage machine learning approach was tested. In the first stage, a fully connected autoencoder with time encoding was used for unsupervised detection of anomalous behavior. In the second stage, a Random Forest classifier was applied to distinguish calving events from other detected anomalies. A 5-fold cross-validation, using 12 cows for training and 3 cows for testing, was applied at each iteration. While 100% of the calving events were successfully detected by the autoencoder, the Random Forest model failed to classify the calving events of two cows and misidentified the onset of calving for a third cow by 46 h. The proposed framework demonstrates the value of combining unsupervised and supervised machine learning techniques for detecting calving events in rangeland cattle under extensive management conditions. The real-time application of the proposed AI-driven monitoring system has the potential to enhance animal welfare and productivity, improve operational efficiency, and reduce labor demands in large-scale ranching. Future advancements in multi-sensor platforms and model refinements could further boost detection accuracy, making this approach increasingly adaptable across diverse management systems, herd structures, and environmental conditions. Full article
(This article belongs to the Special Issue Modeling of Livestock Breeding Environment and Animal Behavior)
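
A minimal sketch of the two-stage idea, with synthetic data standing in for the GNSS/accelerometer windows: an autoencoder (approximated here with scikit-learn's MLPRegressor trained to reconstruct its own input) flags high-reconstruction-error windows, and a Random Forest then classifies the flagged windows. Thresholds, labels, and feature dimensions are illustrative, not the paper's.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(42)

# Toy data: rows = time windows of movement/location features.
normal = rng.normal(0, 1, size=(500, 6))
anomalous = rng.normal(3, 1, size=(30, 6))   # unusual activity windows
X = np.vstack([normal, anomalous])

# Stage 1: autoencoder-style model trained on normal behaviour; windows
# with high reconstruction error are flagged as anomalies.
ae = MLPRegressor(hidden_layer_sizes=(3,), max_iter=2000, random_state=0)
ae.fit(normal, normal)                        # learn to reconstruct "normal"
errors = ((X - ae.predict(X)) ** 2).mean(axis=1)
flagged = errors > np.quantile(errors, 0.9)   # top 10% errors

# Stage 2: Random Forest separates calving-like events from other flagged
# anomalies. Labels here are synthetic; in the paper they come from
# observed calvings.
y = np.array([0] * 500 + [1] * 30)            # 1 = calving-like event
rf = RandomForestClassifier(random_state=0).fit(X[flagged], y[flagged])
print("flagged windows:", flagged.sum(),
      "training accuracy on flagged set:", rf.score(X[flagged], y[flagged]))
```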

16 pages, 3434 KiB  
Review
Multisource Heterogeneous Sensor Processing Meets Distribution Networks: Brief Review and Potential Directions
by Junliang Wang and Ying Zhang
Sensors 2025, 25(13), 4146; https://doi.org/10.3390/s25134146 - 3 Jul 2025
Viewed by 376
Abstract
The progressive proliferation of sensor deployment in distribution networks (DNs), propelled by the dual drivers of power automation and ubiquitous IoT infrastructure development, has precipitated exponential growth in real-time data generated by multisource heterogeneous (MSH) sensors within multilayer grid architectures. This phenomenon presents dual implications: large-scale datasets offer an enhanced foundation for reliability assessment and dispatch planning in DNs, while the dramatic escalation in data volume imposes demands on the computational precision and response speed of traditional evaluation approaches. The identification of critical influencing factors under extreme operating conditions, coupled with dynamic assessment and prediction of DN reliability through MSH data approaches, has emerged as a pressing challenge to address. Through a brief analysis of existing technologies and algorithms, this article reviews the technological development of MSH data analysis in DNs. By integrating the stability advantages of conventional approaches in practice with the computational adaptability of artificial intelligence, this article focuses on discussing key approaches for MSH data processing and assessment. Based on the characteristics of DN data, e.g., diverse sources, heterogeneous structures, and complex correlations, this article proposes several practical future directions. It is expected to provide insights for practitioners in power systems and sensor data processing, offering technical inspiration for the construction of intelligent, reliable, and stable next-generation DNs. Full article

13 pages, 2065 KiB  
Article
Machine Learning-Based Shelf Life Estimator for Dates Using a Multichannel Gas Sensor: Enhancing Food Security
by Asrar U. Haque, Mohammad Akeef Al Haque, Abdulrahman Alabduladheem, Abubakr Al Mulla, Nasser Almulhim and Ramasamy Srinivasagan
Sensors 2025, 25(13), 4063; https://doi.org/10.3390/s25134063 - 29 Jun 2025
Viewed by 609
Abstract
Proper nutrition is essential for healthy living, and for thousands of years dates have been regarded as one of the best nutrient providers. To obtain better-quality dates and extend their shelf life, it is vital to store them under optimal conditions, which in turn contributes to food security. Hence, it is crucial to know the shelf life of different types of dates. In current practice, shelf life assessment is typically based on manual visual inspection, which is subjective, error-prone, and requires considerable expertise, making it difficult to scale across large storage facilities. Traditional cold storage systems, whilst capable of monitoring temperature and humidity, lack the intelligence to detect spoilage or predict shelf life in real time. In this study, we present a novel IoT-based shelf life estimation system that integrates multichannel gas sensors and a lightweight machine learning model deployed on an edge device. Unlike prior approaches, our system captures the real-time emissions of spoilage-related gases (methane, nitrogen dioxide, and carbon monoxide) along with environmental data to classify the freshness of date fruits. The model achieved a classification accuracy of 91.9% and an AUC of 0.98 and was successfully deployed on an Arduino Nano 33 BLE Sense board. This solution offers a low-cost, scalable, and objective method for real-time shelf life prediction, significantly improving reliability and reducing postharvest losses in the date supply chain. Full article
(This article belongs to the Section Intelligent Sensors)
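
The classification framing is straightforward to reproduce. The sketch below is not the deployed Arduino model; it trains a scikit-learn classifier on synthetic multichannel readings shaped like those described above (CH4, NO2, CO plus temperature and humidity), with the freshness classes and gas levels invented for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Synthetic stand-in for the multichannel readings: columns are
# [CH4, NO2, CO, temperature, humidity]. Spoiling fruit drives the
# gas channels up; fresh dates emit little gas.
def batch(n, gas_level):
    gases = rng.normal(gas_level, 0.3, size=(n, 3)).clip(min=0)
    env = rng.normal([25.0, 60.0], [2.0, 5.0], size=(n, 2))
    return np.hstack([gases, env])

X = np.vstack([batch(300, 0.2), batch(300, 1.0), batch(300, 2.5)])
y = np.repeat([0, 1, 2], 300)   # 0 = fresh, 1 = ageing, 2 = spoiled

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```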

42 pages, 872 KiB  
Review
Multi-Sensing Monitoring of the Microalgae Biomass Cultivation Systems for Biofuels and Added Value Products Synthesis—Challenges and Opportunities
by Marcin Dębowski, Joanna Kazimierowicz and Marcin Zieliński
Appl. Sci. 2025, 15(13), 7324; https://doi.org/10.3390/app15137324 - 29 Jun 2025
Viewed by 1004
Abstract
The sustainable and economically viable production of microalgae biomass for biofuels and high-value bioproducts is highly dependent on precise, multi-parametric monitoring of cultivation systems. This review provides a comprehensive overview of current approaches and technological advances in multi-sensor systems applied to photobioreactors, including flow cytometry, IR spectroscopy, RGB sensors, in situ microscopy, and software-based sensors. The integration of artificial intelligence (AI), the Internet of Things (IoT), and metaheuristic algorithms into monitoring systems is also discussed as a promising way to optimise key ecological, physicochemical, and biological parameters in real time. The review highlights critical factors that influence biomass growth and product yield, such as nutrient concentrations, light intensity, CO2 levels, pH, and temperature. In addition, current technological limitations are highlighted, and future strategies for improving monitoring accuracy, automating cultivation, and enhancing the biosynthesis of metabolites are outlined. Through a synthesis of the literature and technological trends, this work contributes to the development of smart photobioreactor systems and provides actionable insights to improve large-scale, highly efficient microalgae cultivation in energy and environmental biotechnology. Full article
(This article belongs to the Special Issue Advances in Bioprocess Monitoring and Control)

46 pages, 2741 KiB  
Review
Innovative Technologies Reshaping Meat Industrialization: Challenges and Opportunities in the Intelligent Era
by Qing Sun, Yanan Yuan, Baoguo Xu, Shipeng Gao, Xiaodong Zhai, Feiyue Xu and Jiyong Shi
Foods 2025, 14(13), 2230; https://doi.org/10.3390/foods14132230 - 24 Jun 2025
Viewed by 1122
Abstract
The Fourth Industrial Revolution and artificial intelligence (AI) technology are driving the transformation of the meat industry from mechanization and automation to intelligence and digitization. This paper provides a systematic review of key technological innovations in this field, including physical technologies (such as smart cutting precision improved to the millimeter level, pulsed electric field sterilization efficiency exceeding 90%, ultrasonic-assisted marinating time reduced by 12 h, and ultra-high-pressure processing extending shelf life) and digital technologies (IoT real-time monitoring, blockchain-enhanced traceability transparency, and AI-optimized production decision-making). Additionally, it explores the potential of alternative meat production technologies (cell-cultured meat and 3D bioprinting) to disrupt traditional models. In application scenarios such as central kitchen efficiency improvements (e.g., food companies leveraging the “S2B2C” model to apply AI agents, supply chain management, and intelligent control systems, resulting in a 26.98% increase in overall profits), end-to-end temperature control in cold chain logistics (e.g., using multi-array sensors for real-time monitoring of meat spoilage), intelligent freshness recognition of products (based on deep learning or sensors), and personalized customization (e.g., 3D-printed customized nutritional meat products), these technologies have significantly improved production efficiency, product quality, and safety. However, large-scale application still faces key challenges, including high costs (such as the high investment in cell-cultured meat bioreactors), lack of standardization (such as the absence of unified standards for non-thermal technology parameters), and consumer acceptance (surveys indicate that approximately 41% of consumers are concerned about contracting illnesses from consuming cultured meat, and only 25% are willing to try it). These challenges constrain the economic viability and market promotion of the aforementioned technologies. Future efforts should focus on collaborative innovation to establish a truly intelligent and sustainable meat production system. Full article

17 pages, 2412 KiB  
Article
A Gamified AI-Driven System for Depression Monitoring and Management
by Sanaz Zamani, Adnan Rostami, Minh Nguyen, Roopak Sinha and Samaneh Madanian
Appl. Sci. 2025, 15(13), 7088; https://doi.org/10.3390/app15137088 - 24 Jun 2025
Viewed by 647
Abstract
Depression affects millions of people worldwide and remains a significant challenge in mental health care. Despite advances in pharmacological and psychotherapeutic treatments, there is a critical need for accessible and engaging tools that help individuals manage their mental health in real time. This paper presents a novel gamified, AI-driven system embedded within Internet of Things (IoT)-enabled environments to address this gap. The proposed platform combines micro-games, adaptive surveys, sensor data, and AI analytics to support personalized and context-aware depression monitoring and self-regulation. Unlike traditional static models, this system continuously tracks behavioral, cognitive, and environmental patterns. This data is then used to deliver timely, tailored interventions. One of its key strengths is a research-ready design that enables real-time simulation, algorithm testing, and hypothesis exploration without relying on large-scale human trials. This makes it easier to study cognitive and emotional trends and improve AI models efficiently. The system is grounded in metacognitive principles. It promotes user engagement and self-awareness through interactive feedback and reflection. Gamification improves the user experience without compromising clinical relevance. We present a unified framework, robust evaluation methods, and insights into scalable mental health solutions. Combining AI, IoT, and gamification, this platform offers a promising new approach for smart, responsive, and data-driven mental health support in modern living environments. Full article
(This article belongs to the Special Issue Advanced IoT/ICT Technologies in Smart Systems)

24 pages, 1446 KiB  
Article
MQTT Broker Architectural Enhancements for High-Performance P2P Messaging: TBMQ Scalability and Reliability in Distributed IoT Systems
by Dmytro Shvaika, Andrii Shvaika and Volodymyr Artemchuk
IoT 2025, 6(3), 34; https://doi.org/10.3390/iot6030034 - 23 Jun 2025
Viewed by 674
Abstract
The Message Queuing Telemetry Transport (MQTT) protocol remains a key enabler for lightweight and low-latency messaging in Internet of Things (IoT) applications. However, traditional broker implementations often struggle with the demands of large-scale point-to-point (P2P) communication. This paper presents a performance and architectural evaluation of TBMQ, an open-source MQTT broker designed to support reliable P2P messaging at scale. The broker employs Redis Cluster for session persistence and Apache Kafka for message routing. Additional optimizations include asynchronous Redis access via Lettuce and Lua-based atomic operations. Stepwise load testing was performed using Kubernetes-based deployments on Amazon EKS, progressively increasing message rates to 1 million messages per second (msg/s). The results demonstrate that TBMQ achieves linear scalability and stable latency as the load increases. It reaches an average throughput of 8900 msg/s per CPU core, while maintaining end-to-end delivery latency within double-digit millisecond bounds. These findings confirm that TBMQ’s architecture provides an effective foundation for reliable, high-throughput messaging in distributed IoT systems. Full article
(This article belongs to the Special Issue IoT and Distributed Computing)
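
For readers unfamiliar with the messaging pattern being benchmarked: P2P over MQTT is conventionally built from per-client inbox topics with QoS 1 for at-least-once delivery. The hedged example below uses the paho-mqtt client (version 2 API) against a placeholder broker address; it is generic MQTT, not TBMQ-specific code.

```python
import time
import paho.mqtt.client as mqtt

BROKER = "localhost"          # placeholder; e.g., a TBMQ instance
INBOX = "p2p/client-b/inbox"  # per-client inbox topic, the usual P2P pattern

def on_connect(client, userdata, flags, reason_code, properties):
    client.subscribe(INBOX, qos=1)     # QoS 1 = at-least-once delivery

def on_message(client, userdata, msg):
    print(f"received on {msg.topic}: {msg.payload.decode()}")

receiver = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2, client_id="client-b")
receiver.on_connect = on_connect
receiver.on_message = on_message
receiver.connect(BROKER)
receiver.loop_start()
time.sleep(1)                          # let the subscription settle

sender = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2, client_id="client-a")
sender.connect(BROKER)
sender.loop_start()
# With QoS 1 the broker holds the message until the subscriber acknowledges
# it; per the abstract above, TBMQ keeps such session state in Redis.
sender.publish(INBOX, payload="hello from client-a", qos=1).wait_for_publish()
time.sleep(1)
receiver.loop_stop()
sender.loop_stop()
```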

29 pages, 8644 KiB  
Review
Recent Advances in Resistive Gas Sensors: Fundamentals, Material and Device Design, and Intelligent Applications
by Peiqingfeng Wang, Shusheng Xu, Xuerong Shi, Jiaqing Zhu, Haichao Xiong and Huimin Wen
Chemosensors 2025, 13(7), 224; https://doi.org/10.3390/chemosensors13070224 - 21 Jun 2025
Cited by 1 | Viewed by 891
Abstract
Resistive gas sensors have attracted significant attention due to their simple architecture, low cost, and ease of integration, with widespread applications in environmental monitoring, industrial safety, and healthcare diagnostics. This review provides a comprehensive overview of recent advances in resistive gas sensors, focusing on their fundamental working mechanisms, sensing material design, device architecture optimization, and intelligent system integration. These sensors primarily operate based on changes in electrical resistance induced by interactions between gas molecules and sensing materials, including physical adsorption, charge transfer, and surface redox reactions. In terms of materials, metal oxide semiconductors, conductive polymers, carbon-based nanomaterials, and their composites have demonstrated enhanced sensitivity and selectivity through strategies such as doping, surface functionalization, and heterojunction engineering, while also enabling reduced operating temperatures. Device-level innovations—such as microheater integration, self-heated nanowires, and multi-sensor arrays—have further improved response speed and energy efficiency. Moreover, the incorporation of artificial intelligence (AI) and Internet of Things (IoT) technologies has significantly advanced signal processing, pattern recognition, and long-term operational stability. Machine learning (ML) algorithms have enabled intelligent design of novel sensing materials, optimized multi-gas identification, and enhanced data reliability in complex environments. These synergistic developments are driving resistive gas sensors toward low-power, highly integrated, and multifunctional platforms, particularly in emerging applications such as wearable electronics, breath diagnostics, and smart city infrastructure. This review concludes with a perspective on future research directions, emphasizing the importance of improving material stability, interference resistance, standardized fabrication, and intelligent system integration for large-scale practical deployment. Full article
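
The figure of merit behind many of the sensitivity claims surveyed here is a simple resistance ratio, shown below with invented values: for an n-type metal-oxide sensor, response to a reducing gas is conventionally R_air / R_gas, and R_gas / R_air for an oxidizing gas.

```python
# Worked example of the conventional resistive-sensor response ratio.
# The resistance values are illustrative, not taken from the review.
def response(r_air_ohms, r_gas_ohms, oxidizing=False):
    if oxidizing:
        return r_gas_ohms / r_air_ohms   # resistance rises (n-type)
    return r_air_ohms / r_gas_ohms       # resistance falls (n-type)

r_air, r_gas = 120_000.0, 15_000.0  # SnO2-style drop under a reducing gas
print(f"response S = {response(r_air, r_gas):.1f}")  # S = 8.0
```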

19 pages, 7664 KiB  
Article
Off-Cloud Anchor Sharing Framework for Multi-User and Multi-Platform Mixed Reality Applications
by Aida Vidal-Balea, Oscar Blanco-Novoa, Paula Fraga-Lamas and Tiago M. Fernández-Caramés
Appl. Sci. 2025, 15(13), 6959; https://doi.org/10.3390/app15136959 - 20 Jun 2025
Viewed by 463
Abstract
This article presents a novel off-cloud anchor sharing framework designed to enable seamless device interoperability for Mixed Reality (MR) multi-user and multi-platform applications. The proposed framework enables local storage and synchronization of spatial anchors, offering a robust and autonomous alternative for real-time collaborative experiences. Such anchors are digital reference points tied to specific positions in the physical world that allow virtual content in MR applications to remain accurately aligned to the real environment, making them an essential tool for building collaborative MR experiences. This anchor synchronization system takes advantage of local anchor storage to optimize the sharing process and to exchange anchors only when necessary. The framework integrates Unity, Mirror, and the Mixed Reality Toolkit (MRTK) to support seamless interoperability between Microsoft HoloLens 2 devices and desktop computers, with the addition of external IoT interaction. As a proof of concept, a collaborative multiplayer game was developed to illustrate the multi-platform and anchor sharing capabilities of the proposed system. The experiments were performed in Local Area Network (LAN) and Wide Area Network (WAN) environments; they highlight the importance of efficient anchor management in large-scale MR environments and demonstrate the effectiveness of the system in handling anchor transmission across varying levels of spatial complexity. Specifically, the results show that anchor transmission times range from around 12.7 s for the tested LAN/WAN networks with small anchor setups to roughly 86.02–87.18 s for complex physical scenarios where room-sized anchors are required. Full article
(This article belongs to the Special Issue Extended Reality (XR) and User Experience (UX) Technologies)

28 pages, 1035 KiB  
Review
A Review of Innovative Medical Rehabilitation Systems with Scalable AI-Assisted Platforms for Sensor-Based Recovery Monitoring
by Assiya Boltaboyeva, Zhanel Baigarayeva, Baglan Imanbek, Kassymbek Ozhikenov, Aliya Jemal Getahun, Tanzhuldyz Aidarova and Nurgul Karymsakova
Appl. Sci. 2025, 15(12), 6840; https://doi.org/10.3390/app15126840 - 18 Jun 2025
Viewed by 1735
Abstract
Artificial intelligence (AI) and machine learning (ML) have introduced new approaches to medical rehabilitation. These technological advances facilitate the development of large-scale adaptive rehabilitation platforms that can be tailored to individual patients. This review focuses on key technologies, including AI-driven rehabilitation planning, IoT-based patient monitoring, and Large Language Model (LLM)-powered virtual assistants for patient support. This review analyzes existing systems and examines how technologies can be combined to create comprehensive rehabilitation platforms that provide personalized care. For this purpose, a targeted literature search was conducted across leading scientific databases, including Scopus, Google Scholar, and IEEE Xplore. This process resulted in the selection of key peer-reviewed articles published between 2018 and 2025 for a detailed analysis. These studies highlight the latest trends and developments in medical rehabilitation, showcasing how digital technologies can transform rehabilitation processes and support patients. This review illustrates that AI, the IoT, and LLM-based virtual assistants hold significant promise for addressing current healthcare challenges through their ability to enhance, personalize, and streamline patient care. Full article
(This article belongs to the Section Computing and Artificial Intelligence)

43 pages, 5651 KiB  
Article
Cross-Layer Analysis of Machine Learning Models for Secure and Energy-Efficient IoT Networks
by Rashid Mustafa, Nurul I. Sarkar, Mahsa Mohaghegh, Shahbaz Pervez and Ovesh Vohra
Sensors 2025, 25(12), 3720; https://doi.org/10.3390/s25123720 - 13 Jun 2025
Viewed by 738
Abstract
The widespread adoption of the Internet of Things (IoT) raises significant concerns regarding security and energy efficiency, particularly for low-resource devices. To address these IoT issues, we propose a cross-layer IoT architecture employing machine learning (ML) models and lightweight cryptography. Our proposed solution is based on role-based access control (RBAC), ensuring secure authentication in large-scale IoT deployments while preventing unauthorized access attempts. We integrate layer-specific ML models, such as long short-term memory networks for temporal anomaly detection and decision trees for application-layer validation, along with adaptive Speck encryption for the dynamic adjustment of cryptographic overheads. We then introduce a granular RBAC system that incorporates energy-aware policies. The novelty of this work is the proposal of a cross-layer IoT architecture that harmonizes ML-driven security with energy-efficient operations. The performance of the proposed cross-layer system is evaluated by extensive simulations. The results obtained show that the proposed system can reduce false positives by up to 32% and enhance system security by preventing up to 95% of unauthorized access attempts. We also achieve a 30% reduction in power consumption using the proposed lightweight Speck encryption method compared to the traditional Advanced Encryption Standard (AES). By leveraging convolutional neural networks and ML, our approach significantly enhances IoT security and energy efficiency in practical scenarios such as smart cities, homes, and schools. Full article
(This article belongs to the Special Issue Security Issues and Solutions for the Internet of Things)
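
Since the abstract contrasts Speck with AES, a concrete reference point helps: Speck is an ARX cipher built from only rotations, XORs, and modular additions per round. The sketch below implements plain Speck64/128 from the published specification (Beaulieu et al.) and checks it against the official test vector; it is an educational sketch, not the paper's adaptive variant, and it omits padding, modes of operation, and side-channel hardening.

```python
MASK = 0xFFFFFFFF                      # 32-bit words (Speck64)
ALPHA, BETA, ROUNDS = 8, 3, 27         # Speck64/128 parameters

def ror(x, r):
    return ((x >> r) | (x << (32 - r))) & MASK

def rol(x, r):
    return ((x << r) | (x >> (32 - r))) & MASK

def round_fn(x, y, k):
    """One Speck round: ARX mix of the two words with round key k."""
    x = ((ror(x, ALPHA) + y) & MASK) ^ k
    y = rol(y, BETA) ^ x
    return x, y

def expand_key(key):
    """Key schedule; `key` is (l2, l1, l0, k0) as printed in the spec."""
    l = [key[2], key[1], key[0]]       # l0, l1, l2
    ks = [key[3]]
    for i in range(ROUNDS - 1):        # the round function doubles as the
        li, ki = round_fn(l[i], ks[i], i)  # key-schedule update
        l.append(li)
        ks.append(ki)
    return ks

def encrypt(x, y, key):
    for k in expand_key(key):
        x, y = round_fn(x, y, k)
    return x, y

# Published Speck64/128 test vector:
ct = encrypt(0x3B726574, 0x7475432D,
             (0x1B1A1918, 0x13121110, 0x0B0A0908, 0x03020100))
print([f"{w:08x}" for w in ct])        # expect ['8c6fa548', '454e028b']
```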

21 pages, 9038 KiB  
Article
Deep Learning-Based Detection and Digital Twin Implementation of Beak Deformities in Caged Layer Chickens
by Hengtai Li, Hongfei Chen, Jinlin Liu, Qiuhong Zhang, Tao Liu, Xinyu Zhang, Yuhua Li, Yan Qian and Xiuguo Zou
Agriculture 2025, 15(11), 1170; https://doi.org/10.3390/agriculture15111170 - 29 May 2025
Viewed by 798
Abstract
With the increasing urgency for digital transformation in large-scale caged layer farms, traditional methods for monitoring the environment and chicken health, which often rely on human experience, face challenges related to low efficiency and poor real-time performance. In this study, we focused on caged layer chickens and proposed an improved abnormal beak detection model based on the You Only Look Once v8 (YOLOv8) framework. Data collection was conducted using an inspection robot, enhancing automation and consistency. To address the interference caused by chicken cages, an Efficient Multi-Scale Attention (EMA) mechanism was integrated into the Spatial Pyramid Pooling-Fast (SPPF) module within the backbone network, significantly improving the model’s ability to capture fine-grained beak features. Additionally, the standard convolutional blocks in the neck of the original model were replaced with Grouped Shuffle Convolution (GSConv) modules, effectively reducing information loss during feature extraction. The model was deployed on edge computing devices for the real-time detection of abnormal beak features in layer chickens. Beyond local detection, a digital twin remote monitoring system was developed, combining three-dimensional (3D) modeling, the Internet of Things (IoT), and cloud-edge collaboration to create a dynamic, real-time mapping of physical layer farms to their virtual counterparts. This innovative approach not only improves the extraction of subtle features but also addresses occlusion challenges commonly encountered in small target detection. Experimental results demonstrate that the improved model achieved a detection accuracy of 92.7%. In terms of the comprehensive evaluation metric (mAP), it surpassed the baseline model and YOLOv5 by 2.4% and 3.2%, respectively. The digital twin system also proved stable in real-world scenarios, effectively mapping physical conditions to virtual environments. Overall, this study integrates deep learning and digital twin technology into a smart farming system, presenting a novel solution for the digital transformation of poultry farming. Full article
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)
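
For orientation, the baseline this paper modifies is straightforward to run with the ultralytics package. The hedged snippet below loads stock YOLOv8 weights and prints detections; the paper's EMA-augmented SPPF and GSConv neck modules are not part of this sketch, and the weight and image paths are placeholders.

```python
from ultralytics import YOLO

# Stock YOLOv8 nano weights; the paper's customized model is not public
# here, so this only shows the inference workflow it builds on.
model = YOLO("yolov8n.pt")
results = model("cage_row_01.jpg", conf=0.5)   # placeholder image path
for box in results[0].boxes:
    cls_name = model.names[int(box.cls)]       # class label
    print(cls_name, float(box.conf), box.xyxy.tolist())
```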
