Search Results (6,026)

Search Parameters:
Keywords = intelligent solutions

26 pages, 6038 KB  
Article
A Multi-Objective Genetic Algorithm–Deep Reinforcement Learning Framework for Spectrum Sharing in 6G Cognitive Radio Networks
by Ancilla Wadzanai Chigaba, Sindiso Mpenyu Nleya, Mthulisi Velempini and Samkeliso Suku Dube
Appl. Sci. 2025, 15(17), 9758; https://doi.org/10.3390/app15179758 (registering DOI) - 5 Sep 2025
Abstract
The exponential growth in wireless communication demands intelligent and adaptive spectrum-sharing solutions, especially within dynamic and densely populated 6G Cognitive Radio Networks (CRNs). This paper introduces a novel hybrid framework combining the Non-dominated Sorting Genetic Algorithm II (NSGA-II) with Proximal Policy Optimisation (PPO) for multi-objective optimisation in spectrum management. The proposed model balances spectrum efficiency, interference mitigation, energy conservation, collision rate reduction, and QoS maintenance. Evaluation on synthetic and ns-3 datasets shows that the NSGA-II and PPO hybrid consistently outperforms the random, greedy, and stand-alone PPO strategies, achieving higher cumulative reward, perfect fairness (Jain’s Fairness Index = 1.0), robust hypervolume convergence (65.1%), up to 12% reduction in PU collision rate, 20% lower interference, and approximately 40% improvement in energy efficiency. These findings validate the framework’s effectiveness in promoting fairness, reliability, and efficiency in 6G wireless communication systems.
(This article belongs to the Section Computing and Artificial Intelligence)
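The reported Jain’s Fairness Index of 1.0 follows the standard definition; a minimal sketch (not the authors’ code) of how it is computed over per-user allocations:

```python
def jains_fairness(allocs):
    """Jain's Fairness Index: (sum x_i)^2 / (n * sum x_i^2); equals 1.0
    when every user receives an identical allocation."""
    n = len(allocs)
    total = sum(allocs)
    return total * total / (n * sum(x * x for x in allocs))

print(jains_fairness([5.0, 5.0, 5.0, 5.0]))   # 1.0 (equal shares)
print(jains_fairness([10.0, 0.1, 0.1, 0.1]))  # ≈ 0.265 (one user dominates)
```

The index is bounded below by 1/n, reached when a single user takes everything.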

32 pages, 6058 KB  
Article
An Enhanced YOLOv8n-Based Method for Fire Detection in Complex Scenarios
by Xuanyi Zhao, Minrui Yu, Jiaxing Xu, Peng Wu and Haotian Yuan
Sensors 2025, 25(17), 5528; https://doi.org/10.3390/s25175528 (registering DOI) - 5 Sep 2025
Abstract
With the escalating frequency of urban and forest fires driven by climate change, the development of intelligent and robust fire detection systems has become imperative for ensuring public safety and ecological protection. This paper presents a comprehensive multi-module fire detection framework based on visual computing, encompassing image enhancement and lightweight object detection. To address data scarcity and to enhance generalization, a projected generative adversarial network (Projected GAN) is employed to synthesize diverse and realistic fire scenarios under varying environmental conditions. For the detection module, an improved YOLOv8n architecture is proposed by integrating BiFormer Attention, Agent Attention, and CCC (Compact Channel Compression) modules, which collectively enhance detection accuracy and robustness under low visibility and dynamic disturbance conditions. Extensive experiments on both synthetic and real-world fire datasets demonstrated notable improvements in image restoration quality (achieving a PSNR up to 34.67 dB and an SSIM up to 0.968) and detection performance (mAP reaching 0.858), significantly outperforming the baseline. The proposed system offers a reliable and deployable solution for real-time fire monitoring and early warning in complex visual environments.
(This article belongs to the Section Sensing and Imaging)

37 pages, 4201 KB  
Article
Comparative Performance Analysis of Deep Learning-Based Diagnostic and Predictive Models in Grid-Integrated Doubly Fed Induction Generator Wind Turbines
by Ramesh Kumar Behara and Akshay Kumar Saha
Energies 2025, 18(17), 4725; https://doi.org/10.3390/en18174725 - 5 Sep 2025
Abstract
As the deployment of wind energy systems continues to rise globally, ensuring the reliability and efficiency of grid-connected Doubly Fed Induction Generator (DFIG) wind turbines has become increasingly critical. Two core challenges faced by these systems include fault diagnosis in power electronic converters and accurate prediction of wind conditions for adaptive power control. Recent advancements in artificial intelligence (AI) have introduced powerful tools for addressing these challenges. This study presents the first unified comparative performance analysis of two deep learning-based models: (i) a Convolutional Neural Network-Long Short-Term Memory (CNN-LSTM) model with Variational Mode Decomposition for real-time Grid Side Converter (GSC) fault diagnosis, and (ii) an Incremental Generative Adversarial Network (IGAN) for wind attribute prediction and adaptive droop gain control, applied to grid-integrated DFIG wind turbines. Unlike prior studies that address fault diagnosis and wind forecasting separately, both models are evaluated within a common MATLAB/Simulink framework using identical wind profiles, disturbances, and system parameters, ensuring fair and reproducible benchmarking. Beyond accuracy, the analysis incorporates multi-dimensional performance metrics such as inference latency, robustness to disturbances, scalability, and computational efficiency, offering a more holistic assessment than prior work. The results reveal complementary strengths: the CNN-LSTM achieves 88% accuracy with 15 ms detection latency for converter faults, while the IGAN delivers more than 95% prediction accuracy and enhances frequency stability by 18%. Comparative analysis shows that while the CNN-LSTM model is highly suitable for rapid fault localization and maintenance planning, the IGAN model excels in predictive control and grid performance optimization. This work thus establishes the first direct comparative framework for diagnostic and predictive AI models in DFIG systems, providing novel insights into their complementary strengths and practical deployment trade-offs, and laying the groundwork for hybrid two-tier AI frameworks in smart wind energy systems. By establishing a reproducible methodology, this study offers valuable guidance for researchers and practitioners seeking explainable, adaptive, and computationally efficient AI solutions for next-generation renewable energy integration.

60 pages, 12559 KB  
Article
A Decade of Studies in Smart Cities and Urban Planning Through Big Data Analytics
by Florin Dobre, Andra Sandu, George-Cristian Tătaru and Liviu-Adrian Cotfas
Systems 2025, 13(9), 780; https://doi.org/10.3390/systems13090780 - 5 Sep 2025
Abstract
Smart cities and urban planning have gathered the attention of researchers worldwide, especially in the last decade, as a result of a series of technological, social, and economic developments that have shaped the need to evolve from the traditional way in which cities were viewed. Technology has been incorporated into many sectors associated with smart cities, such as communications, transportation, energy, and water, increasing people’s quality of life and satisfying the needs of a society in continuous change. Furthermore, with the rise of machine learning (ML) and artificial intelligence (AI), as well as Geographic Information Systems (GIS), the applications of big data analytics in the context of smart cities and urban planning have diversified, ranging from traffic management, environmental monitoring, and public safety to adjusting power distribution based on consumption patterns. In this context, the present paper brings to the fore the papers written in the 2015–2024 period and indexed in Clarivate Analytics’ Web of Science Core Collection and analyzes them from a bibliometric point of view. An annual growth rate of 10.72% has been observed, showing increased interest from the scientific community in this area. Through specific bibliometric analyses, key themes, trends, prominent authors and institutions, preferred journals, and collaboration networks among authors are extracted and discussed in depth. Thematic maps, topic discovery through Latent Dirichlet Allocation (LDA) complemented by a BERTopic analysis, n-gram analysis, factorial analysis, and a review of the most cited papers complete the picture of the research carried out in this area over the last decade. The importance of big data analytics in urban planning and smart cities is underlined, along with its ability to enhance urban living by providing personalized and efficient solutions to everyday situations.

26 pages, 1515 KB  
Article
From Key Role to Core Infrastructure: Platforms as AI Enablers in Hospitality Management
by Antonio Grieco, Pierpaolo Caricato and Paolo Margiotta
Platforms 2025, 3(3), 16; https://doi.org/10.3390/platforms3030016 - 4 Sep 2025
Abstract
The increasing complexity of managing maintenance activities across geographically dispersed hospitality facilities necessitates advanced digital solutions capable of effectively balancing operational costs and service quality. This study addresses this challenge by designing and validating an intelligent Prescriptive Maintenance module, leveraging advanced Reinforcement Learning (RL) techniques within a Digital Twin (DT) infrastructure, specifically tailored for luxury hospitality networks characterized by high standards and demanding operational constraints. The proposed framework is based on an RL agent trained through Proximal Policy Optimization (PPO), which allows the system to dynamically prescribe preventive and corrective maintenance interventions. Under such an AI-driven approach, platforms become the enablers that minimize service disruptions, optimize operational efficiency, and proactively manage resources in dynamic and extended operational contexts. Experimental validation highlights the potential of the developed solution to significantly enhance resource allocation strategies and operational planning compared to traditional preventive approaches, particularly under varying resource availability conditions. By providing a comprehensive and generalizable representation model of maintenance management, this study delivers valuable insights for both researchers and industry practitioners aiming to leverage digital transformation and AI for sustainable and resilient hospitality operations.

40 pages, 3732 KB  
Review
Applications and Prospects of Muography in Strategic Deposits
by Xingwen Zhou, Juntao Liu, Baopeng Su, Kaiqiang Yao, Xinyu Cai, Rongqing Zhang, Ting Li, Hengliang Deng, Jiangkun Li, Shi Yan and Zhiyi Liu
Minerals 2025, 15(9), 945; https://doi.org/10.3390/min15090945 (registering DOI) - 4 Sep 2025
Abstract
With strategic mineral exploration extending to deep and complex geological settings, traditional methods increasingly struggle to dissect metallogenic systems and locate ore bodies precisely. This synthesis of current progress in muon imaging (a technology leveraging cosmic ray muons’ high penetration) aims to address these exploration challenges. Muon imaging operates by exploiting the energy attenuation of cosmic ray muons when penetrating earth media. It records muon transmission trajectories via high-precision detector arrays and constructs detailed subsurface density distribution images through advanced 3D inversion algorithms, enabling non-invasive detection of deep ore bodies. This review is organized into four thematic sections: (1) technical principles of muon imaging; (2) practical applications and advantages in ore exploration; (3) current challenges in deployment; (4) optimization strategies and future prospects. In practical applications, muon imaging has demonstrated unique advantages: it penetrates thick overburden and high-resistance rock masses to delineate blind ore bodies, with simultaneous gains in exploration efficiency and cost reduction. Optimized data acquisition and processing further allow it to capture dynamic changes in rock mass structure over hours to days, supporting proactive mine safety management. However, challenges remain, including complex muon event analysis, long data acquisition cycles, and limited distinguishability for low-density-contrast formations. The review discusses solutions involving multi-source geophysical data integration, optimized acquisition strategies, detector performance improvements, and intelligent data processing algorithms to enhance practicality and reliability. Future advancements in muon imaging are expected to drive breakthroughs in ultra-deep ore-forming system exploration, positioning it as a key force in innovating strategic mineral resource exploration technologies.
(This article belongs to the Special Issue 3D Mineral Prospectivity Modeling Applied to Mineral Deposits)

28 pages, 8109 KB  
Article
A Face Image Encryption Scheme Based on Nonlinear Dynamics and RNA Cryptography
by Xiyuan Cheng, Tiancong Cheng, Xinyu Yang, Wenbin Cheng and Yiting Lin
Cryptography 2025, 9(3), 57; https://doi.org/10.3390/cryptography9030057 - 4 Sep 2025
Abstract
With the rapid development of big data and artificial intelligence, the problem of image privacy leakage has become increasingly prominent, especially for images containing sensitive information such as faces, which pose a higher security risk. To improve the security and efficiency of image privacy protection, this paper proposes an image encryption scheme that integrates face detection and multi-level encryption technology. Specifically, a multi-task convolutional neural network (MTCNN) is used to accurately extract the face area to ensure accurate positioning and high processing efficiency. For the extracted face area, a hierarchical encryption framework is constructed using chaotic systems, lightweight block permutations, RNA cryptographic systems, and bit diffusion, which increases data complexity and unpredictability. In addition, a key update mechanism based on dynamic feedback is introduced to enable the key to change in real time during the encryption process, effectively resisting known-plaintext and chosen-plaintext attacks. Experimental results show that the scheme performs well in terms of encryption security, robustness, computational efficiency, and image reconstruction quality. This study provides a practical and effective solution for the secure storage and transmission of sensitive face images, and offers valuable support for image privacy protection in intelligent systems.
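To illustrate the chaotic-permutation stage alone (a sketch under assumed parameters, not the paper’s MTCNN + RNA pipeline; the key `x0` and map parameter `r` are hypothetical), a logistic-map orbit can drive a key-dependent, invertible scrambling of pixel positions:

```python
def chaotic_permutation(n, x0=0.37, r=3.99):
    """Derive a permutation of n indices from a logistic-map orbit
    x_{k+1} = r * x_k * (1 - x_k); the seed x0 acts as the secret key."""
    xs, x = [], x0
    for _ in range(n):
        x = r * x * (1 - x)
        xs.append(x)
    # Ranking indices by their chaotic values yields a permutation.
    return sorted(range(n), key=lambda i: xs[i])

def scramble(data, perm):
    return [data[i] for i in perm]

def unscramble(data, perm):
    out = [None] * len(data)
    for j, i in enumerate(perm):
        out[i] = data[j]
    return out

perm = chaotic_permutation(8, x0=0.37)
msg = [10, 20, 30, 40, 50, 60, 70, 80]
print(unscramble(scramble(msg, perm), perm) == msg)  # True: exact round trip
```

A tiny change in `x0` produces an unrelated permutation, which is what makes the seed usable as key material.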

22 pages, 1760 KB  
Review
On the Role of Artificial Intelligent Technology for Millimetre-Wave and Terahertz Applications
by Lida Kouhalvandi and Ladislau Matekovits
Sensors 2025, 25(17), 5502; https://doi.org/10.3390/s25175502 - 4 Sep 2025
Abstract
Next-generation wireless communication networks are developing rapidly across the world, which requires high data rates across these systems. The millimeter-wave (mm-wave) spectrum together with terahertz (THz) bands is a promising solution for next-generation systems able to meet these requirements effectively. For such networks, designing new waveforms and providing high-quality service, reliability, energy efficiency, and many other specifications are taking on important roles in adapting to high-performance communication systems. Recently, artificial intelligence (AI) and machine learning (ML) methods have proved their effectiveness in predicting and optimizing nonlinear characteristics of high-dimensional systems, with enhanced capability and rich convergence outcomes. Thus, there is a strong need for these intelligence-based methods to achieve higher bandwidths along with the targeted outcomes in comparison with traditional designs. In this work, we provide an overview of recently published works on the utilization of mm-wave and THz frequencies for designing and implementing various systems that carry out the targeted key specifications. Moreover, by considering various newly published works, some open challenges are identified. Hence, we provide our view of these concepts, giving readers a general overview of, and ideas about, the various mm-wave and THz-based designs that use AI methods.
(This article belongs to the Special Issue Communication, Sensing and Localization in 6G Systems)

13 pages, 3205 KB  
Proceeding Paper
Overview of Memory-Efficient Architectures for Deep Learning in Real-Time Systems
by Bilgin Demir, Ervin Domazet and Daniela Mechkaroska
Eng. Proc. 2025, 104(1), 77; https://doi.org/10.3390/engproc2025104077 - 4 Sep 2025
Abstract
With advancements in artificial intelligence (AI), deep learning (DL) has become crucial for real-time data analytics in areas like autonomous driving, healthcare, and predictive maintenance; however, its computational and memory demands often exceed the capabilities of low-end devices. This paper explores optimizing deep learning architectures for memory efficiency to enable real-time computation in low-power designs. Strategies include model compression, quantization, and efficient network designs. Techniques such as eliminating unnecessary parameters, sparse representations, and optimized data handling significantly enhance system performance. The design addresses cache utilization, memory hierarchies, and data movement, reducing latency and energy use. By comparing memory management methods, this study highlights dynamic pruning and adaptive compression as effective solutions for improving efficiency and performance. These findings guide the development of accurate, power-efficient deep learning systems for real-time applications, unlocking new possibilities for edge and embedded AI.

28 pages, 7441 KB  
Article
An Enhanced Multi-Strategy Mantis Shrimp Optimization Algorithm and Engineering Implementations
by Yang Yang, Chaochuan Jia, Xukun Zuo, Yu Liu and Maosheng Fu
Symmetry 2025, 17(9), 1453; https://doi.org/10.3390/sym17091453 - 4 Sep 2025
Abstract
This paper proposes a novel intelligent optimization algorithm, ICPMSHOA, that effectively balances population diversity and convergence performance by integrating an iterative chaotic map with infinite collapses (ICMIC), centroid opposition-based learning, and a periodic mutation strategy. To verify its performance, we adopted benchmark functions from the IEEE CEC 2017 and 2022 standard test suites and compared it with six algorithms, including OOA and BWO. The results show that ICPMSHOA achieves significant improvements in convergence speed, global search capability, and stability, with statistically significant advantages. Furthermore, the algorithm performs outstandingly in three practical engineering constrained optimization problems: Haverly’s pooling problem, the hybrid pooling–preparation problem, and the optimization design of industrial refrigeration systems. This study confirms that ICPMSHOA provides efficient and reliable solutions for complex optimization tasks and has strong practical value in engineering scenarios.
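For readers unfamiliar with ICMIC, the map iterates x_{k+1} = sin(a/x_k); a sketch (the parameter values and the population-seeding use are illustrative assumptions, not taken from the paper) of using its orbit to scatter an initial population across search bounds:

```python
import math

def icmic_population(pop_size, lb, ub, x0=0.3, a=2.0):
    """Seed a population with the Iterative Chaotic Map with Infinite
    Collapses, x_{k+1} = sin(a / x_k), whose orbit stays in [-1, 1];
    each sample is then mapped affinely onto the bounds [lb, ub]."""
    xs, x = [], x0
    for _ in range(pop_size):
        x = math.sin(a / x)
        xs.append(lb + (x + 1.0) / 2.0 * (ub - lb))
    return xs

pop = icmic_population(10, lb=-5.0, ub=5.0)
print(pop)
```

Compared to uniform random seeding, chaotic seeding is deterministic given the seed and tends to spread points without clustering, which is the diversity property the abstract refers to.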

18 pages, 1437 KB  
Article
Smart Resource Management and Energy-Efficient Regimes for Greenhouse Vegetable Production
by Alla Dudnyk, Natalia Pasichnyk, Inna Yakymenko, Taras Lendiel, Kamil Witaszek, Karol Durczak and Wojciech Czekała
Energies 2025, 18(17), 4690; https://doi.org/10.3390/en18174690 - 4 Sep 2025
Abstract
Greenhouse vegetable production faces significant challenges due to the non-stationary and nonlinear dynamics of the cultivation environment, which demand adaptive and intelligent control strategies. This study presents an intelligent control system for greenhouse complexes based on artificial neural networks and fuzzy logic, optimized using genetic algorithms. The proposed system dynamically adjusts PI controller parameters to maintain optimal microclimatic conditions, including temperature and humidity, enhancing resource efficiency. Comparative analyses demonstrate that the genetic algorithm-based tuning outperforms traditional and fuzzy adaptation methods, achieving superior transient response with reduced overshoot and settling time. Implementation of the intelligent control system results in energy savings of 10–12% compared to conventional stabilization algorithms, while improving decision-making efficiency for electrotechnical subsystems such as heating and ventilation. These findings support the development of resource-efficient cultivation regimes that reduce energy consumption, stabilize agrotechnical parameters, and increase profitability in greenhouse vegetable production. The approach offers a scalable and adaptable solution for modern greenhouse automation under varying environmental conditions.
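A toy version of GA-based PI tuning (the first-order plant, ITAE cost, and GA settings are illustrative assumptions, not the paper’s greenhouse model): evolve the gains (Kp, Ki) to minimize the time-weighted tracking error of a step response:

```python
import random

def itae_cost(kp, ki, steps=200, dt=0.1):
    """Simulate a discrete PI loop on a first-order plant tau*y' = -y + u
    (tau = 1) and return the Integral of Time-weighted Absolute Error."""
    y = integ = itae = 0.0
    for k in range(steps):
        e = 1.0 - y                 # unit step setpoint
        integ += e * dt
        u = kp * e + ki * integ     # PI control law
        y += dt * (-y + u)          # Euler step of the plant
        itae += (k * dt) * abs(e) * dt
    return itae

def ga_tune_pi(pop_size=20, gens=30, seed=1):
    """Minimal GA: rank by cost, keep the top quarter as elites,
    refill the population with Gaussian-mutated copies of elites."""
    rng = random.Random(seed)
    pop = [(rng.uniform(0, 5), rng.uniform(0, 5)) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda g: itae_cost(*g))
        elite = pop[:pop_size // 4]
        pop = elite + [
            (max(0.0, kp + rng.gauss(0, 0.3)), max(0.0, ki + rng.gauss(0, 0.3)))
            for kp, ki in (rng.choice(elite) for _ in range(pop_size - len(elite)))
        ]
    return min(pop, key=lambda g: itae_cost(*g))

kp, ki = ga_tune_pi()
print(kp, ki)
```

The evolved gains should produce a much lower ITAE than an arbitrary sluggish setting, which is the essence of the transient-response improvement the abstract reports.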

12 pages, 2857 KB  
Proceeding Paper
Multi-Sensor Early Warning System with Fuzzy Logic Method Based on the Internet of Things
by Fadhlurrahman Afif, Deden Witarsyah, Dedy Syamsuar and Hanif Fakhrurroja
Eng. Proc. 2025, 107(1), 57; https://doi.org/10.3390/engproc2025107057 - 3 Sep 2025
Abstract
Landslide disasters occur frequently in Indonesia and are difficult to avoid, so they often cause fatalities and large material losses. Current landslide mitigation systems remain largely ineffective in practice. An early warning system that delivers landslide information through smartphones is well suited to the digital era, since most people own smartphones connected to the internet; such a system must also be able to decide the landslide status. Fuzzy logic is a form of artificial intelligence used in decision support systems because it approximates human reasoning. This work therefore builds an Internet of Things (IoT)-based landslide early warning system that determines the landslide status from the soil slope, measured with an MPU6050 accelerometer sensor, and soil moisture, measured with a soil moisture sensor. The system monitors slope and moisture data and transmits the landslide status to smartphone applications connected to the internet. The result of this research is an IoT-based landslide early warning system that transmits slope and soil moisture data and delivers the landslide status as push notifications on smartphones using the Blynk application.
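A minimal sketch of the fuzzy decision step (the membership breakpoints and rules are illustrative assumptions, not the paper’s calibrated values): combine slope and moisture readings into a Safe/Watch/Danger status.

```python
def tri(x, a, b, c):
    """Triangular membership function rising on [a, b], falling on [b, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def landslide_status(slope_deg, moisture_pct):
    """Toy Mamdani-style rules (thresholds are hypothetical):
    steep AND wet -> Danger; either one elevated -> Watch; else Safe."""
    steep = tri(slope_deg, 15, 35, 60)      # degrees of tilt
    wet = tri(moisture_pct, 50, 80, 100)    # soil moisture in percent
    danger = min(steep, wet)
    watch = max(steep, wet) * (1 - danger)
    safe = 1 - max(steep, wet)
    scores = {"Safe": safe, "Watch": watch, "Danger": danger}
    return max(scores, key=scores.get)

print(landslide_status(5, 30))   # Safe
print(landslide_status(40, 30))  # Watch
print(landslide_status(40, 90))  # Danger
```

On a microcontroller the same logic would run on each sensor sample before the status is pushed to the smartphone app.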

10 pages, 1081 KB  
Proceeding Paper
Insights into the Emotion Classification of Artificial Intelligence: Evolution, Application, and Obstacles of Emotion Classification
by Marselina Endah Hiswati, Ema Utami, Kusrini Kusrini and Arief Setyanto
Eng. Proc. 2025, 103(1), 24; https://doi.org/10.3390/engproc2025103024 - 3 Sep 2025
Abstract
In this systematic literature review, we examined the integration of emotional intelligence into artificial intelligence (AI) systems, focusing on advancements, challenges, and opportunities in emotion classification technologies. Accurate emotion recognition in AI holds immense potential in healthcare, the IoT, and education. However, challenges such as computational demands, limited dataset diversity, and real-time deployment complexity remain significant. In this review, we included research on emerging solutions such as multimodal data processing, attention mechanisms, and real-time emotion tracking that address these issues; overcoming them would allow AI systems to enhance human–AI interactions and expand real-world applications. Recommendations for improving accuracy and scalability in emotion-aware AI are provided based on the review results.

17 pages, 2525 KB  
Article
Intelligent Compaction System for Soil-Rock Mixture Subgrades: Real-Time Moisture-CMV Fusion Control and Embedded Edge Computing
by Meisheng Shi, Shen Zuo, Jin Li, Junwei Bi, Qingluan Li and Menghan Zhang
Sensors 2025, 25(17), 5491; https://doi.org/10.3390/s25175491 - 3 Sep 2025
Abstract
The compaction quality of soil–rock mixture (SRM) subgrades critically influences infrastructure stability, but conventional settlement difference methods exhibit high spatial sampling bias (error > 15% in heterogeneous zones) and fail to characterize the overall compaction quality. These limitations lead to under-compaction (porosity > 25%) or over-compaction (aggregate fragmentation rate > 40%), highlighting the need for real-time monitoring. This study develops an intelligent compaction system integrating (1) vibration acceleration sensors (PCB 356A16, ±50 g range) for compaction meter value (CMV) acquisition; (2) near-infrared (NIR) moisture meters (NDC CM710E, 1300–2500 nm wavelength) for real-time moisture monitoring (sampling rate 10 Hz); and (3) an embedded edge-computing module (NVIDIA Jetson Nano) for Python-based data fusion (FFT harmonic analysis + moisture correction) with 50 ms processing latency. Field validation on Linlin Expressway shows that the system meets JTG 3430-2020 standards, with the compaction qualification rate reaching 98% (vs. 82% for conventional methods) and 97.6% anomaly detection accuracy. This is the first system integrating NIR moisture correction (R2 = 0.96 vs. oven-drying) with CMV harmonic analysis, reducing measurement error by 40% compared to conventional ICT (Bomag ECO Plus). It provides a digital solution for SRM subgrade quality control, enhancing construction efficiency and durability.
(This article belongs to the Special Issue AI and Smart Sensors for Intelligent Transportation Systems)
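The CMV is conventionally computed as a scaled ratio of the second harmonic to the fundamental of the drum’s vibration acceleration spectrum; a stdlib-only sketch of that harmonic analysis (the scaling constant and signal parameters are illustrative; the paper’s exact FFT and moisture-correction pipeline is not reproduced):

```python
import math

def tone_amplitude(x, fs, f):
    """Single-bin DFT (Goertzel-style) amplitude estimate at frequency f."""
    n = len(x)
    re = sum(v * math.cos(2 * math.pi * f * k / fs) for k, v in enumerate(x))
    im = sum(v * math.sin(2 * math.pi * f * k / fs) for k, v in enumerate(x))
    return 2 * math.hypot(re, im) / n

def cmv(accel, fs, f_drum, c=300.0):
    """CMV as conventionally defined: C * A(2*f_drum) / A(f_drum),
    the second-harmonic-to-fundamental amplitude ratio."""
    return c * tone_amplitude(accel, fs, 2 * f_drum) / tone_amplitude(accel, fs, f_drum)

# Synthetic drum signal: 30 Hz fundamental plus a 10% second harmonic.
fs, f = 2000.0, 30.0
sig = [math.sin(2 * math.pi * f * k / fs) + 0.1 * math.sin(2 * math.pi * 2 * f * k / fs)
       for k in range(2000)]
print(round(cmv(sig, fs, f), 3))  # 30.0: a 10% second harmonic gives CMV = 300 * 0.1
```

Stiffer ground distorts the drum vibration more, raising the second-harmonic content and hence the CMV.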

50 pages, 2995 KB  
Review
A Survey of Traditional and Emerging Deep Learning Techniques for Non-Intrusive Load Monitoring
by Annysha Huzzat, Ahmed S. Khwaja, Ali A. Alnoman, Bhagawat Adhikari, Alagan Anpalagan and Isaac Woungang
AI 2025, 6(9), 213; https://doi.org/10.3390/ai6090213 - 3 Sep 2025
Abstract
To cope with the increasing global demand for energy and the significant energy wastage caused by the use of different home appliances, smart load monitoring is considered a promising solution to promote proper activation and scheduling of devices and reduce electricity bills. Instead of installing a sensing device on each electric appliance, non-intrusive load monitoring (NILM) enables the monitoring of each individual device using the total power reading of the home smart meter. However, for high-accuracy load monitoring, efficient artificial intelligence (AI) and deep learning (DL) approaches are needed. To that end, this paper thoroughly reviews traditional AI and DL approaches, as well as emerging AI models proposed for NILM. Unlike existing surveys that are usually limited to a specific approach or a subset of approaches, this review paper presents a comprehensive survey of an ensemble of topics and models, including deep learning, generative AI (GAI), emerging attention-enhanced GAI, and hybrid AI approaches. Another distinctive feature of this work compared to existing surveys is that it also reviews actual cases of NILM system design and implementation, covering a wide range of technical enablers including hardware, software, and AI models. Furthermore, a range of new future research directions and challenges are discussed, such as the heterogeneity of energy sources, data uncertainty, privacy and safety, cost and complexity reduction, and the need for a standardized comparison.
(This article belongs to the Section AI Systems: Theory and Applications)
