Search Results (197)

Search Parameters:
Keywords = dynamic learning rate adjustment

24 pages, 3537 KB  
Article
Deep Reinforcement Learning Trajectory Tracking Control for a Six-Degree-of-Freedom Electro-Hydraulic Stewart Parallel Mechanism
by Yigang Kong, Yulong Wang, Yueran Wang, Shenghao Zhu, Ruikang Zhang and Liting Wang
Eng 2025, 6(9), 212; https://doi.org/10.3390/eng6090212 - 1 Sep 2025
Abstract
The strong coupling of the six-degree-of-freedom (6-DoF) electro-hydraulic Stewart parallel mechanism manifests as follows: adjusting the elongation of one actuator can induce motion in multiple degrees of freedom of the platform, i.e., a change in pose. This pose change produces time-varying, unbalanced load forces (disturbance inputs) on the six hydraulic actuators; the unbalanced load forces exacerbate the time-varying acceleration and velocity of the actuators, causing instantaneous changes in the pressure and flow rate of the electro-hydraulic system and thereby intensifying the pressure–flow nonlinearity of the hydraulic actuators. Considering the advantage of artificial intelligence in learning hidden patterns within complex environments (strong coupling and strong nonlinearity), this paper proposes a reinforcement learning motion control algorithm based on the deep deterministic policy gradient (DDPG). First, the static/dynamic coordinate-system transformation matrix of the electro-hydraulic Stewart parallel mechanism is established, and the inverse kinematic and inverse dynamic models are derived. Second, a DDPG framework with an Actor–Critic network structure is constructed: the agent's state observation space, action space, and a position-error-based reward function are designed, and experience replay and target networks are employed to stabilize training. Finally, a simulation model is built on the MATLAB 2024b platform, and variable-amplitude, variable-frequency sinusoidal input signals are applied to all six degrees of freedom for dynamic characteristic analysis and performance evaluation under the strongly coupled, strongly nonlinear operating conditions of the mechanism; the DDPG agent dynamically adjusts the proportional, integral, and derivative gains of six PID controllers through interactive trial-and-error learning.
Simulation results indicate that, compared to the traditional PID control algorithm, the DDPG-PID control algorithm significantly improves the tracking accuracy of all six hydraulic cylinders, reducing the maximum position error by over 40% and achieving high-precision tracking control of variable-amplitude, variable-frequency trajectories in all six degrees of freedom for the electro-hydraulic Stewart parallel mechanism. Full article
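The closing sentences describe the mechanism concretely: at each step, the agent's action supplies the PID gain triple. A minimal sketch of that loop on a toy first-order plant follows; the plant, constants, and gain triples are illustrative assumptions, not the paper's hydraulic model or its DDPG agent.

```python
# Sketch of RL-supplied PID gain scheduling: a PID controller whose gains
# (Kp, Ki, Kd) are overwritten each step by an external action, as in
# DDPG-PID. The toy plant x' = u is an assumption for illustration.

class AdaptivePID:
    def __init__(self, dt):
        self.dt = dt
        self.integral = 0.0
        self.prev_error = 0.0

    def control(self, error, gains):
        """One control step with externally supplied gains."""
        kp, ki, kd = gains                      # gains come from the agent
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return kp * error + ki * self.integral + kd * derivative


def track(reference, gains, steps=200, dt=0.01):
    """Run a first-order toy actuator and return the final tracking error."""
    pid, x = AdaptivePID(dt), 0.0
    for _ in range(steps):
        u = pid.control(reference - x, gains)
        x += u * dt                             # toy plant integration
    return abs(reference - x)


# A well-tuned gain triple (stand-in for a learned action) should track the
# setpoint far more closely than a weak one.
err_tuned = track(1.0, (8.0, 0.5, 0.1))
err_weak = track(1.0, (0.5, 0.0, 0.0))
```

In the paper's setting the gain triple would be the DDPG actor's output for each of the six cylinders rather than a fixed tuple.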

24 pages, 8688 KB  
Article
Lightweight Obstacle Avoidance for Fixed-Wing UAVs Using Entropy-Aware PPO
by Meimei Su, Haochen Chai, Chunhui Zhao, Yang Lyu and Jinwen Hu
Drones 2025, 9(9), 598; https://doi.org/10.3390/drones9090598 - 26 Aug 2025
Abstract
Obstacle avoidance during high-speed, low-altitude flight remains a significant challenge for unmanned aerial vehicles (UAVs), particularly in unfamiliar environments where prior maps and heavy onboard sensors are unavailable. To address this, we present an entropy-aware deep reinforcement learning framework that enables fixed-wing UAVs to navigate safely using only monocular onboard cameras. Our system features a lightweight, single-frame depth estimation module optimized for real-time execution on edge computing platforms, followed by a reinforcement learning controller equipped with a novel reward function that balances goal-reaching performance with path smoothness under fixed-wing dynamic constraints. To enhance policy optimization, we incorporate high-quality experiences from the replay buffer into the gradient computation, introducing a soft imitation mechanism that encourages the agent to align its behavior with previously successful actions. To further balance exploration and exploitation, we integrate an adaptive entropy regularization mechanism into the Proximal Policy Optimization (PPO) algorithm. This module dynamically adjusts policy entropy during training, leading to improved stability, faster convergence, and better generalization to unseen scenarios. Extensive software-in-the-loop (SITL) and hardware-in-the-loop (HITL) experiments demonstrate that our approach outperforms baseline methods in obstacle avoidance success rate and path quality, while remaining lightweight and deployable on resource-constrained aerial platforms. Full article
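The adaptive entropy regularization described above can be illustrated with a minimal coefficient-update rule: raise the entropy coefficient when the policy's entropy falls below a target, lower it when the policy is already exploratory enough. The target entropy, step size, and update form below are assumptions, not the authors' exact mechanism.

```python
import math

# Sketch of adaptive entropy regularization for PPO: nudge the entropy
# coefficient toward a target policy entropy. Constants are illustrative.

def entropy(probs):
    return -sum(p * math.log(p) for p in probs if p > 0)

def adapt_entropy_coef(coef, probs, target_entropy, lr=0.05):
    gap = target_entropy - entropy(probs)   # > 0: policy too deterministic
    return max(1e-4, coef + lr * gap)       # keep the coefficient positive

peaked = [0.97, 0.01, 0.01, 0.01]           # nearly deterministic policy
uniform = [0.25, 0.25, 0.25, 0.25]          # maximally exploratory policy
target = entropy([0.4, 0.2, 0.2, 0.2])      # assumed target entropy

coef_up = adapt_entropy_coef(0.01, peaked, target)    # coefficient rises
coef_down = adapt_entropy_coef(0.01, uniform, target) # coefficient falls
```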

25 pages, 4100 KB  
Article
An Adaptive Unsupervised Learning Approach for Credit Card Fraud Detection
by John Adejoh, Nsikak Owoh, Moses Ashawa, Salaheddin Hosseinzadeh, Alireza Shahrabi and Salma Mohamed
Big Data Cogn. Comput. 2025, 9(9), 217; https://doi.org/10.3390/bdcc9090217 - 25 Aug 2025
Abstract
Credit card fraud remains a major cause of financial loss around the world. Traditional fraud detection methods that rely on supervised learning often struggle because fraudulent transactions are rare compared to legitimate ones, leading to imbalanced datasets. Additionally, the models must be retrained frequently, as fraud patterns change over time and require new labeled data for retraining. To address these challenges, this paper proposes an ensemble unsupervised learning approach for credit card fraud detection that combines Autoencoders (AEs), Self-Organizing Maps (SOMs), and Restricted Boltzmann Machines (RBMs), integrated with an Adaptive Reconstruction Threshold (ART) mechanism. The ART dynamically adjusts anomaly detection thresholds by leveraging the clustering properties of SOMs, effectively overcoming the limitations of static threshold approaches in machine learning and deep learning models. The proposed models, AE-ASOMs (Autoencoder—Adaptive Self-Organizing Maps) and RBM-ASOMs (Restricted Boltzmann Machines—Adaptive Self-Organizing Maps), were evaluated on the Kaggle Credit Card Fraud Detection and IEEE-CIS datasets. Our AE-ASOM model achieved an accuracy of 0.980 and an F1-score of 0.967, while the RBM-ASOM model achieved an accuracy of 0.975 and an F1-score of 0.955. Compared to models such as One-Class SVM and Isolation Forest, our approach demonstrates higher detection accuracy and significantly reduces false positive rates. In addition to its performance, the model offers considerable computational efficiency with a training time of 200.52 s and memory usage of 3.02 megabytes. Full article
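The Adaptive Reconstruction Threshold idea, a cutoff that follows the local error distribution rather than a single global value, can be sketched with per-cluster mean + k·std thresholds on reconstruction errors. The cluster assignments here are assumed inputs, not a trained SOM, and the constant k is illustrative.

```python
import statistics

# Sketch of adaptive, per-cluster anomaly thresholds on reconstruction
# error: a tight cluster gets a low threshold, a loose cluster a high one.

def adaptive_thresholds(errors_by_cluster, k=2.0):
    return {
        c: statistics.mean(errs) + k * statistics.pstdev(errs)
        for c, errs in errors_by_cluster.items()
    }

def flag(cluster, error, thresholds):
    return error > thresholds[cluster]      # True -> potential fraud

errors_by_cluster = {
    "dense": [0.10, 0.12, 0.11, 0.09, 0.13],   # tight: low threshold
    "sparse": [0.30, 0.50, 0.20, 0.60, 0.40],  # loose: high threshold
}
thr = adaptive_thresholds(errors_by_cluster)
hit_dense = flag("dense", 0.2, thr)   # anomalous for the tight cluster
hit_sparse = flag("sparse", 0.2, thr) # ordinary for the loose cluster
```

The same error (0.2) is flagged in one cluster and not the other, which is exactly the limitation of a static global threshold that the ART mechanism addresses.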

25 pages, 11784 KB  
Article
Improved PPO Optimization for Robotic Arm Grasping Trajectory Planning and Real-Robot Migration
by Chunlei Li, Zhe Liu, Liang Li, Zeyu Ji, Chenbo Li, Jiaxing Liang and Yafeng Li
Sensors 2025, 25(17), 5253; https://doi.org/10.3390/s25175253 - 23 Aug 2025
Abstract
Addressing key challenges in unstructured environments, including local optimum traps, limited real-time interaction, and convergence difficulties, this research pioneers a hybrid reinforcement learning approach that combines simulated annealing (SA) with proximal policy optimization (PPO) for robotic arm trajectory planning. The framework enables the accurate, collision-free grasping of randomly appearing objects amid dynamic obstacles through three key innovations: a probabilistically enhanced simulation environment with a 20% obstacle generation rate; an optimized state–action space featuring 12-dimensional environment coding and 6-DoF joint control; and an SA-PPO algorithm that dynamically adjusts the learning rate to balance exploration and convergence. Experimental results show a 6.52% increase in success rate (98% vs. 92%) and a 7.14% reduction in steps per set compared to the baseline PPO. Deployment on the AUBO-i5 robotic arm enables real-robot grasping, validating robust transfer from simulation to reality. This work establishes a new paradigm for adaptive robot manipulation in industrial scenarios requiring a real-time response to environmental uncertainty. Full article
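The abstract names a learning rate that is dynamically adjusted to balance exploration and convergence. One annealing-style schedule consistent with that description is a rate that decays with a geometrically cooled temperature; the cooling form and all constants below are assumptions, not the authors' SA-PPO rule.

```python
# Sketch of a simulated-annealing-style learning-rate schedule: a high
# rate early (exploration) decaying toward a floor (convergence).
# All constants are illustrative assumptions.

def sa_learning_rate(step, lr0=3e-4, t0=1.0, cooling=0.99, lr_min=1e-5):
    temperature = t0 * (cooling ** step)    # geometric cooling
    return max(lr_min, lr0 * temperature)   # never below the floor

early = sa_learning_rate(0)      # exploration phase: full base rate
late = sa_learning_rate(1000)    # exploitation phase: clipped at the floor
```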

27 pages, 6145 KB  
Article
Multi-Voyage Path Planning for River Crab Aquaculture Feeding Boats
by Yueping Sun, Peixuan Guo, Yantong Wang, Jinkai Shi, Ziheng Zhang and De’an Zhao
Fishes 2025, 10(8), 420; https://doi.org/10.3390/fishes10080420 - 20 Aug 2025
Abstract
In crab pond environments, obstacles such as long aerobic pipelines, aerators, and ground cages are usually sparsely distributed. Automatic feeding boats can navigate while avoiding obstacles and execute feeding tasks along planned paths, thus improving feeding quality and operational efficiency. In large-scale crab pond farming, a single feeding operation often fails to achieve the complete coverage of the bait casting task due to the limited boat load. Therefore, this study proposes a multi-voyage path planning scheme for feeding boats. Firstly, a complete coverage path planning algorithm is proposed based on an improved genetic algorithm to achieve the complete coverage of the bait casting task. Secondly, to address the issue of an insufficient bait loading capacity in complete coverage operations, which requires the feeding boat to return to the loading wharf several times to replenish bait, a multi-voyage path planning algorithm is proposed. The return point of the feeding operation is predicted by the algorithm. Subsequently, the improved Q-Learning algorithm (I-QLA) is proposed to plan the optimal multi-voyage return paths by increasing the exploration of the diagonal direction, refining the reward mechanism and dynamically adjusting the exploration rate. The simulation results show that compared with the traditional genetic algorithm, the repetition rate, path length, and the number of 90° turns of the complete coverage path planned by the improved genetic algorithm are reduced by 59.62%, 1.27%, and 28%, respectively. Compared with the traditional Q-Learning algorithm, average path length, average number of turns, average training time, and average number of iterations planned by the I-QLA are reduced by 20.84%, 74.19%, 48.27%, and 45.08%, respectively. 
The crab pond experimental results show that compared with the Q-Learning algorithm, the path length, turning times, and energy consumption of the I-QLA algorithm are reduced by 29.7%, 77.8%, and 39.6%, respectively. This multi-voyage method enables efficient, low-energy, and precise feeding for crab farming. Full article
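Two of the I-QLA ingredients named above, an action set extended with the four diagonals and an exploration rate that decays over training, can be sketched in a toy grid world. The grid, rewards, and hyper-parameters are illustrative assumptions, not the paper's crab-pond model.

```python
import random

# Toy Q-learning with 8-directional actions (4 straight + 4 diagonal) and
# a dynamically decayed exploration rate, echoing the I-QLA ideas above.

ACTIONS = [(1, 0), (-1, 0), (0, 1), (0, -1),
           (1, 1), (1, -1), (-1, 1), (-1, -1)]

def train(size=5, episodes=300, alpha=0.5, gamma=0.9, seed=0):
    rng, q, goal = random.Random(seed), {}, (size - 1, size - 1)
    for ep in range(episodes):
        eps = max(0.05, 1.0 - ep / episodes)    # decaying exploration rate
        s = (0, 0)
        for _ in range(50):
            if rng.random() < eps:              # explore
                a = rng.randrange(len(ACTIONS))
            else:                               # exploit
                a = max(range(len(ACTIONS)), key=lambda i: q.get((s, i), 0.0))
            dx, dy = ACTIONS[a]
            nxt = (min(size - 1, max(0, s[0] + dx)),
                   min(size - 1, max(0, s[1] + dy)))
            r = 10.0 if nxt == goal else -0.1   # goal reward vs. step penalty
            best_next = max(q.get((nxt, i), 0.0) for i in range(len(ACTIONS)))
            old = q.get((s, a), 0.0)
            q[(s, a)] = old + alpha * (r + gamma * best_next - old)
            s = nxt
            if s == goal:
                break
    return q

q = train()
# After training, the greedy action at the cell diagonally adjacent to the
# goal should be the diagonal move (index 4) straight into it.
```

Allowing diagonals is what shortens paths and reduces 90° turns relative to 4-directional Q-learning, matching the reductions the abstract reports.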

25 pages, 1872 KB  
Article
Food Safety Risk Prediction and Regulatory Policy Enlightenment Based on Machine Learning
by Daqing Wu, Hangqi Cai and Tianhao Li
Systems 2025, 13(8), 715; https://doi.org/10.3390/systems13080715 - 19 Aug 2025
Abstract
This paper focuses on the challenges in food safety governance in megacities, taking Shanghai as the research object. Aiming at the pain points in food sampling inspections, it proposes a risk prediction and regulatory optimization scheme combining text mining and machine learning. First, the paper uses the LDA method to conduct in-depth mining on over 78,000 pieces of food sampling data across 34 categories in Shanghai, so as to identify core risk themes. Second, it applies SMOTE oversampling to the sampling data with an extremely low unqualified rate (0.5%). Finally, a machine learning prediction model for food safety risks is constructed, and predictions are made based on this model. The research findings are as follows: ① Food risks in Shanghai show significant characteristics in terms of time, category, and pollution causes. ② Supply chain links, regulatory intensity, and consumption scenarios are among the core influencing factors. ③ The traditional “full coverage” model is inefficient, and resources need to be tilted toward high-risk categories. ④ Public attention (e.g., the “You Order, We Inspect” initiative) can drive regulatory responses to improve the qualified rate. Based on these findings, this paper suggests that relevant authorities should ① classify three levels of risks for categories, increase inspection frequency for high-risk products in summer, adjust sampling intensity for different business entities, and establish a dynamic hierarchical regulatory mechanism; ② tackle source governance, reduce environmental pollution, upgrade process supervision, and strengthen whole-chain risk prevention and control; and ③ promote public participation, strengthen the enterprise responsibility system, and deepen the social co-governance pattern. 
This study effectively addresses the risk early warning problems in food safety supervision of megacities, providing a scientific basis and practical path for optimizing the allocation of regulatory resources and improving governance efficiency. Full article
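The SMOTE step applied to the 0.5% unqualified-rate data works by synthesizing minority samples along the line segments between a minority point and one of its minority-class neighbours. Below is a pure-Python sketch of that core idea on toy 2-D points; it is not the imbalanced-learn implementation the authors may have used.

```python
import random

# Minimal SMOTE sketch: pick a minority sample, pick one of its k nearest
# minority neighbours, and interpolate a synthetic point between them.

def smote(minority, n_new, k=2, seed=0):
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        x = rng.choice(minority)
        # k nearest minority neighbours of x (excluding x itself)
        neighbours = sorted(
            (p for p in minority if p is not x),
            key=lambda p: sum((a - b) ** 2 for a, b in zip(x, p)),
        )[:k]
        nb = rng.choice(neighbours)
        t = rng.random()                      # interpolation factor in [0, 1)
        synthetic.append(tuple(a + t * (b - a) for a, b in zip(x, nb)))
    return synthetic

minority = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]  # toy minority class
new_points = smote(minority, n_new=5)
```

Every synthetic point lies between two real minority samples, which is why SMOTE densifies the minority region instead of duplicating records.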
(This article belongs to the Topic Digital Technologies in Supply Chain Risk Management)

20 pages, 939 KB  
Article
Dynamic Defense Strategy Selection Through Reinforcement Learning in Heterogeneous Redundancy Systems for Critical Data Protection
by Xuewen Yu, Lei He, Jingbu Geng, Zhihao Liang, Zhou Gan and Hantao Zhao
Appl. Sci. 2025, 15(16), 9111; https://doi.org/10.3390/app15169111 - 19 Aug 2025
Abstract
In recent years, the evolution of cyber-attacks has exposed critical vulnerabilities in conventional defense mechanisms, particularly across national infrastructure systems such as power, transportation, and finance. Attackers are increasingly deploying persistent and sophisticated techniques to exfiltrate or manipulate sensitive data, surpassing static defense methods that depend on known vulnerabilities. This growing threat landscape underscores the urgent need for more advanced and adaptive defensive strategies to counter continuously evolving attack vectors. To address this challenge, this paper proposes a novel reinforcement learning-based optimization framework integrated with a Dynamic Heterogeneous Redundancy (DHR) architecture. Our approach uniquely utilizes reinforcement learning for the dynamic scheduling of encryption-layer configurations within the DHR framework, enabling adaptive adjustment of defense policies based on system status and threat progression. We evaluate the proposed system in a simulated adversarial environment, where reinforcement learning continuously adjusts encryption strategies and defense behaviors in response to evolving attack patterns and operational dynamics. Experimental results demonstrate that our method achieves a higher defense success rate while maintaining lower defense costs, thereby enhancing system resilience against cyber threats and improving the efficiency of defensive resource allocation. Full article

22 pages, 7227 KB  
Article
Mechanisms Driving Recent Sea-Level Acceleration in the Gulf of Guinea
by Ayinde Akeem Shola, Huaming Yu, Kejian Wu and Nir Krakauer
Remote Sens. 2025, 17(16), 2834; https://doi.org/10.3390/rs17162834 - 15 Aug 2025
Abstract
The Gulf of Guinea is undergoing accelerated sea-level rise (SLR), with localized rates surpassing 10 mm yr−1, more than double the global mean. Integrating GRACE/FO ocean mass data, reanalysis products, and machine learning, we identify a regime shift in the regional sea-level budget post-2015. Over 60% of observed SLR near major riverine outlets stems from ocean mass increase, driven primarily by intensified terrestrial hydrological discharge, marking a transition from steric to barystatic and manometric dominance. This shift coincides with enhanced monsoonal precipitation, wind-forced equatorial wave adjustments, and Atlantic–Pacific climate coupling. Piecewise regression reveals a significant 2015 breakpoint, with mean coastal SLR rates increasing from 2.93 ± 0.1 to 5.4 ± 0.25 mm yr−1 between 1993 and 2014, and 2015 and 2023. GRACE data indicate extreme mass accumulation (>10 mm yr−1) along the eastern Gulf coast, tied to elevated river discharge and estuarine retention. Dynamical analysis reveals the reorganization of wind field intensification, which modifies Rossby wave dispersion and amplifies zonal water mass convergence. Random forest modeling attributes 16% of extreme SLR variance to terrestrial runoff (comparable to wind stress at 19%), underscoring underestimated land–ocean interactions. Current climate models underrepresent manometric contributions by 20–45%, introducing critical projection biases for high-runoff regions. The societal implications are severe, with >400 km2 of urban land in Lagos and Abidjan vulnerable to inundation by 2050. These findings reveal a hybrid steric–manometric regime in the Gulf of Guinea, challenging existing paradigms and suggesting analogous dynamics may operate across tropical margins. This calls for urgent model recalibration and tailored regional adaptation strategies. Full article
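The piecewise regression behind the 2015 breakpoint result amounts to fitting separate ordinary-least-squares trends on either side of a candidate break and comparing slopes. The sketch below uses a synthetic series shaped like the reported rates (about 2.9 mm yr−1 before 2015, 5.4 mm yr−1 after); the data are a fabricated stand-in for illustration, not the altimetry record.

```python
# Sketch of breakpoint (piecewise) regression: OLS slope before and after
# a candidate break year. The series is synthetic, for illustration only.

def ols_slope(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

def piecewise_slopes(years, series, breakpoint):
    before = [(x, y) for x, y in zip(years, series) if x < breakpoint]
    after = [(x, y) for x, y in zip(years, series) if x >= breakpoint]
    return ols_slope(*zip(*before)), ols_slope(*zip(*after))

years = list(range(1993, 2024))
# Synthetic sea level: ~2.9 mm/yr before 2015, ~5.4 mm/yr afterwards.
sea_level = [2.9 * (y - 1993) if y < 2015 else
             2.9 * (2015 - 1993) + 5.4 * (y - 2015) for y in years]
slope_pre, slope_post = piecewise_slopes(years, sea_level, 2015)
```

A formal analysis would also test the breakpoint's significance (e.g., comparing residuals against a single-trend fit) rather than taking the break year as given.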

20 pages, 8759 KB  
Article
Small Sample Palmprint Recognition Based on Image Augmentation and Dynamic Model-Agnostic Meta-Learning
by Xiancheng Zhou, Huihui Bai, Zhixu Dong, Kaijun Zhou and Yehui Liu
Electronics 2025, 14(16), 3236; https://doi.org/10.3390/electronics14163236 - 14 Aug 2025
Abstract
Palmprint recognition is becoming increasingly common in security authentication, mobile payment, and crime detection. Aiming at the problems of small sample size and low palmprint recognition rates, a small-sample palmprint recognition method based on image expansion and Dynamic Model-Agnostic Meta-Learning (DMAML) is proposed. For data augmentation, a multi-connected conditional generative network is designed for generating palmprints; the network is trained using a gradient-penalized hybrid loss function and a dual time-scale update rule to help the model converge stably, and the trained network is used to generate an expanded palmprint dataset. On this basis, a palmprint feature extraction network incorporating frequency-domain and residual components is designed to extract palmprint feature information. The DMAML training method for the network is investigated: it establishes a multistep loss list for the query-set loss in the inner loop and dynamically adjusts the outer-loop learning rate by combining gradient warm-up with a cosine annealing strategy. The experimental results show that the palmprint dataset expansion method in this paper can effectively improve the training efficiency of the palmprint recognition model. Evaluated on the Tongji dataset in an N-way K-shot setting, the proposed method achieves an accuracy of 94.62% ± 0.06% in the 5-way 1-shot task and 87.52% ± 0.29% in the 10-way 1-shot task, significantly outperforming ProtoNets (90.57% ± 0.65% and 81.15% ± 0.50%, respectively): a 4.05% improvement under the 5-way 1-shot condition and a 6.37% improvement under the 10-way 1-shot condition, demonstrating the effectiveness of the method. Full article
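The outer-loop schedule is described explicitly: gradient warm-up followed by cosine annealing. A sketch of such a schedule follows; the hyper-parameter values are illustrative assumptions, not the paper's settings.

```python
import math

# Sketch of a warm-up + cosine-annealing learning-rate schedule for the
# outer loop: linear ramp to the base rate, then a cosine decay to a floor.

def outer_lr(step, total_steps, base_lr=1e-3, warmup=100, lr_min=1e-5):
    if step < warmup:                           # linear warm-up phase
        return base_lr * (step + 1) / warmup
    progress = (step - warmup) / max(1, total_steps - warmup)
    cosine = 0.5 * (1 + math.cos(math.pi * progress))   # 1 -> 0
    return lr_min + (base_lr - lr_min) * cosine

start = outer_lr(0, 1000)       # small rate at the first step
peak = outer_lr(99, 1000)       # end of warm-up: full base rate
end = outer_lr(1000, 1000)      # fully annealed: floor rate
```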
(This article belongs to the Section Artificial Intelligence)

28 pages, 4548 KB  
Article
A Deep Reinforcement Learning Framework for Strategic Indian NIFTY 50 Index Trading
by Raj Gaurav Mishra, Dharmendra Sharma, Mahipal Gadhavi, Sangeeta Pant and Anuj Kumar
AI 2025, 6(8), 183; https://doi.org/10.3390/ai6080183 - 11 Aug 2025
Abstract
This paper presents a comprehensive deep reinforcement learning (DRL) framework for developing strategic trading models tailored to the Indian NIFTY 50 index, leveraging the temporal and nonlinear nature of financial markets. Three advanced DRL architectures, the deep Q-network (DQN), double deep Q-network (DDQN), and dueling double deep Q-network (Dueling DDQN), were implemented and empirically evaluated. Using a decade-long dataset of 15-min interval OHLC data enriched with technical indicators such as the exponential moving average (EMA), pivot points, and multiple supertrend configurations, the models were trained using prioritized experience replay, epsilon-greedy exploration strategies, and softmax sampling mechanisms. A test set comprising one year of unseen data (May 2024–April 2025) was used to assess generalization performance across key financial metrics, including Sharpe ratio, profit factor, win rate, and trade frequency. Each architecture was analyzed in three progressively sophisticated variants incorporating enhancements in reward shaping, exploration–exploitation balancing, and penalty-based trade constraints. DDQN V3 achieved a Sharpe ratio of 0.7394, a 73.33% win rate, and a 16.58 profit factor across 15 trades, indicating strong volatility-adjusted suitability for real-world deployment. In contrast, the Dueling DDQN V3 achieved a high Sharpe ratio of 1.2278 and a 100% win rate but with only three trades, indicating excessive conservatism. The DQN V1 model served as a strong baseline, outperforming passive strategies but exhibiting limitations due to Q-value overestimation. The novelty of this work lies in its systematic exploration of DRL variants integrated with enhanced exploration mechanisms and reward–penalty structures, rigorously applied to high-frequency trading on the NIFTY 50 index within an emerging market context.
Our findings underscore the critical importance of architectural refinements, dynamic exploration strategies, and trade regularization in stabilizing learning and enhancing profitability in DRL-based intelligent trading systems. Full article
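The two exploration mechanisms the models combine, epsilon-greedy selection and softmax (Boltzmann) sampling over Q-values, take only a few lines each. The Q-values and temperature below are illustrative assumptions.

```python
import math
import random

# Sketch of epsilon-greedy and softmax action selection over Q-values.

def epsilon_greedy(q_values, eps, rng):
    if rng.random() < eps:                      # random exploratory action
        return rng.randrange(len(q_values))
    return max(range(len(q_values)), key=q_values.__getitem__)

def softmax_sample(q_values, temperature, rng):
    m = max(q_values)                           # subtract max for stability
    weights = [math.exp((q - m) / temperature) for q in q_values]
    r, acc = rng.random() * sum(weights), 0.0
    for i, w in enumerate(weights):
        acc += w
        if r <= acc:
            return i
    return len(q_values) - 1

rng = random.Random(0)
q = [0.1, 0.9, 0.3]   # e.g. Q-values for a {hold, buy, sell} action set
greedy_choice = epsilon_greedy(q, eps=0.0, rng=rng)   # always the argmax
samples = [softmax_sample(q, 0.1, rng) for _ in range(200)]
```

At a low temperature, softmax sampling concentrates almost all probability on the best action while still leaving the others reachable, which is the property that distinguishes it from pure epsilon-greedy.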
(This article belongs to the Special Issue AI in Finance: Leveraging AI to Transform Financial Services)

20 pages, 589 KB  
Article
Intelligent Queue Scheduling Method for SPMA-Based UAV Networks
by Kui Yang, Chenyang Xu, Guanhua Qiao, Jinke Zhong and Xiaoning Zhang
Drones 2025, 9(8), 552; https://doi.org/10.3390/drones9080552 - 6 Aug 2025
Abstract
Static Priority-based Multiple Access (SPMA) is an emerging and promising wireless MAC protocol widely used in Unmanned Aerial Vehicle (UAV) networks. UAV networks, also known as drone networks, are systems of interconnected UAVs that communicate and collaborate to perform tasks autonomously or semi-autonomously; they leverage wireless communication technologies to share data, coordinate movements, and optimize mission execution. In SPMA, traffic arriving at a UAV network node is divided into multiple priorities according to information timeliness, and the packets of each priority are stored in corresponding queues with different transmission thresholds, thus guaranteeing a high success rate and low latency for the highest-priority traffic. Unfortunately, the multi-priority queue scheduling of SPMA deprives low-priority traffic of packet transmission opportunities, which results in unfairness among different-priority traffic. To address this problem, this paper proposes the Adaptive Credit-Based Shaper with Reinforcement Learning (ACBS-RL) to balance the performance of traffic of all priorities. In ACBS-RL, the Credit-Based Shaper (CBS) is introduced to SPMA to provide relatively fair packet transmission opportunities among multiple traffic queues by limiting the transmission rate. Owing to the dynamic nature of the wireless environment, a Q-learning-based reinforcement learning method is leveraged to adaptively adjust the CBS parameters (i.e., idleSlope and sendSlope) to achieve better performance across all priority queues. Extensive simulation results show that, compared with the traditional SPMA protocol, the proposed ACBS-RL increases UAV network throughput while guaranteeing the Quality of Service (QoS) requirements of all priority traffic. Full article
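The CBS mechanism that ACBS-RL tunes gates each queue on a credit counter: credit falls at sendSlope while transmitting and recovers at idleSlope while blocked, so a saturated queue's long-run send fraction is idleSlope / (idleSlope − sendSlope). A minimal slot-based, single-queue sketch with illustrative slope values:

```python
# Sketch of Credit-Based Shaper gating for one saturated queue: transmit
# only while credit is non-negative. Slope values are illustrative; in
# ACBS-RL they would be set by the Q-learning agent.

def cbs_gate(slots, idleslope=2.0, sendslope=-3.0):
    """Return per-slot decisions ('send' or 'wait') for a saturated queue."""
    credit, decisions = 0.0, []
    for _ in slots:
        if credit >= 0:
            decisions.append("send")
            credit += sendslope        # spend credit while transmitting
        else:
            decisions.append("wait")
            credit += idleslope        # regain credit while blocked
    return decisions

decisions = cbs_gate(range(10))
share = decisions.count("send") / len(decisions)  # shaped bandwidth share
```

With idleSlope 2 and sendSlope −3 the queue settles at a 2/(2+3) = 0.4 send fraction, which is the knob the RL agent turns to redistribute opportunities across priority queues.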

18 pages, 1588 KB  
Article
EEG-Based Attention Classification for Enhanced Learning Experience
by Madiha Khalid Syed, Hong Wang, Awais Ahmad Siddiqi, Shahnawaz Qureshi and Mohamed Amin Gouda
Appl. Sci. 2025, 15(15), 8668; https://doi.org/10.3390/app15158668 - 5 Aug 2025
Abstract
This paper presents a novel EEG-based learning system designed to enhance the efficiency and effectiveness of studying by dynamically adjusting the difficulty level of learning materials based on real-time attention levels. In the training phase, EEG signals corresponding to high and low concentration levels are recorded while participants engage in quizzes to learn and memorize Chinese characters. The attention levels are determined based on performance metrics derived from the quiz results. Following extensive preprocessing, the EEG data undergo several feature extraction steps: removal of artifacts due to eye blinks and facial movements, separation of waves by frequency band, similarity indexing with respect to delay, binary thresholding, and principal component analysis (PCA). These extracted features are then fed into a k-NN classifier, which accurately distinguishes between high and low attention brain wave patterns, with the labels derived from the quiz performance indicating high or low attention. During the implementation phase, the system continuously monitors the user's EEG signals while studying. When low attention levels are detected, the system increases the repetition frequency and reduces the difficulty of the flashcards to refocus the user's attention. Conversely, when high concentration levels are identified, the system escalates the difficulty level of the flashcards to maximize the learning challenge. This adaptive approach ensures a more effective learning experience by maintaining optimal cognitive engagement, resulting in improved learning rates, reduced stress, and increased overall learning efficiency. Our results indicate that this EEG-based adaptive learning system holds significant potential for personalized education, fostering better retention and understanding of Chinese characters. Full article
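The classification-plus-adaptation loop can be sketched minimally: a k-NN vote over feature vectors labelled high/low attention drives a difficulty rule. The feature vectors and the adjustment rule below are illustrative assumptions, not the recorded EEG pipeline.

```python
# Sketch of the two pieces described above: a k-NN vote over extracted
# EEG features and a flashcard-difficulty rule keyed to the result.

def knn_predict(train, query, k=3):
    """Majority vote among the k nearest labelled feature vectors."""
    nearest = sorted(train, key=lambda fl: sum((a - b) ** 2
                                              for a, b in zip(fl[0], query)))
    votes = [label for _, label in nearest[:k]]
    return max(set(votes), key=votes.count)

def adjust_difficulty(level, attention):
    # low attention -> easier cards, more repetition; high -> harder cards
    return max(1, level - 1) if attention == "low" else level + 1

train = [((0.9, 0.8), "high"), ((0.85, 0.9), "high"), ((0.8, 0.85), "high"),
         ((0.2, 0.1), "low"), ((0.15, 0.2), "low"), ((0.1, 0.15), "low")]
state = knn_predict(train, (0.88, 0.84))   # query near the 'high' cluster
next_level = adjust_difficulty(3, state)
```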
(This article belongs to the Special Issue EEG Horizons: Exploring Neural Dynamics and Neurocognitive Processes)

19 pages, 2833 KB  
Article
Research on AGV Path Planning Based on Improved DQN Algorithm
by Qian Xiao, Tengteng Pan, Kexin Wang and Shuoming Cui
Sensors 2025, 25(15), 4685; https://doi.org/10.3390/s25154685 - 29 Jul 2025
Abstract
Traditional deep reinforcement learning methods suffer from slow convergence and poor adaptability in complex environments and are prone to falling into local optima in AGV system applications. To address these issues, this paper proposes an adaptive path planning algorithm based on an improved Deep Q-Network, called the B-PER DQN algorithm. First, a dynamic temperature adjustment mechanism is constructed: the temperature parameter in the Boltzmann strategy is adaptively adjusted by analyzing the trend of a recent reward window. Next, a prioritized experience replay mechanism is introduced to improve training efficiency and task diversity through graded experience sampling and random obstacle configuration. Then, a refined multi-objective reward function is designed, combining direction guidance, step punishment, and end-point reward, to effectively guide the agent in learning an efficient path. Our experimental results show that, compared with other algorithms, the improved algorithm achieves a higher success rate and faster convergence in the same environment, representing an efficient and adaptive solution for reinforcement-learning-based path planning in complex environments. Full article
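The dynamic temperature mechanism can be illustrated with a window-trend rule: compare the older and newer halves of the recent reward window and cool the Boltzmann temperature when rewards are improving, heat it when they stagnate. The update form and constants are assumptions, not the B-PER DQN settings.

```python
# Sketch of reward-trend-driven Boltzmann temperature adjustment: more
# exploration when learning stalls, sharper exploitation when it improves.

def adjust_temperature(tau, reward_window, factor=1.2,
                       tau_min=0.1, tau_max=5.0):
    half = len(reward_window) // 2
    older = sum(reward_window[:half]) / half
    recent = sum(reward_window[half:]) / (len(reward_window) - half)
    if recent > older:                # improving: sharpen the policy
        tau /= factor
    else:                             # stagnating: explore more
        tau *= factor
    return min(tau_max, max(tau_min, tau))   # clamp to a sane range

improving = [1.0, 1.5, 2.0, 4.0, 5.0, 6.0]   # rewards trending up
stagnating = [5.0, 5.0, 5.0, 2.0, 2.0, 1.0]  # rewards trending down
tau_cooled = adjust_temperature(1.0, improving)
tau_heated = adjust_temperature(1.0, stagnating)
```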
(This article belongs to the Special Issue Intelligent Control and Robotic Technologies in Path Planning)
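The dynamic temperature adjustment the abstract describes (adapting the Boltzmann temperature from the trend of a recent reward window) might look like the following sketch. The trend measure, multiplicative update factor, and clipping bounds are assumptions for illustration, not the paper's B-PER DQN implementation.

```python
import numpy as np

def boltzmann_action(q_values, temperature, rng):
    """Softmax over Q-values; higher temperature means more exploration."""
    prefs = np.asarray(q_values, dtype=float) / max(temperature, 1e-8)
    prefs -= prefs.max()  # numerical stability before exponentiation
    probs = np.exp(prefs) / np.exp(prefs).sum()
    return rng.choice(len(q_values), p=probs)

def adjust_temperature(temp, reward_window, t_min=0.1, t_max=5.0, step=0.9):
    """If recent rewards trend up, cool down (exploit more);
    if they trend down, heat up (explore more).
    Trend = mean of the window's second half minus mean of its first half."""
    half = len(reward_window) // 2
    trend = np.mean(reward_window[half:]) - np.mean(reward_window[:half])
    temp = temp * step if trend > 0 else temp / step
    return float(np.clip(temp, t_min, t_max))
```

Each training episode would append its return to the window and call `adjust_temperature` before the next round of `boltzmann_action` selections.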

37 pages, 9111 KB  
Article
Conformal On-Body Antenna System Integrated with Deep Learning for Non-Invasive Breast Cancer Detection
by Marwa H. Sharaf, Manuel Arrebola, Khalid F. A. Hussein, Asmaa E. Farahat and Álvaro F. Vaquero
Sensors 2025, 25(15), 4670; https://doi.org/10.3390/s25154670 - 28 Jul 2025
Viewed by 513
Abstract
Breast cancer detection through non-invasive and accurate techniques remains a critical challenge in medical diagnostics. This study introduces a deep learning-based framework that leverages a microwave radar system equipped with an arc-shaped array of six antennas to estimate key tumor parameters, including position, size, and depth. This research begins with the evolutionary design of an ultra-wideband octagram ring patch antenna optimized for enhanced tumor detection sensitivity in directional near-field coupling scenarios. The antenna is fabricated and experimentally evaluated, with its performance validated through S-parameter measurements, far-field radiation characterization, and efficiency analysis to ensure effective signal propagation and interaction with breast tissue. Specific Absorption Rate (SAR) distributions within breast tissues are comprehensively assessed, and power adjustment strategies are implemented to comply with electromagnetic exposure safety limits. The dataset for the deep learning model comprises simulated self- and mutual S-parameters capturing tumor-induced variations over a broad frequency spectrum. A core innovation of this work is the development of the Attention-Based Feature Separation (ABFS) model, which dynamically identifies optimal frequency sub-bands and disentangles discriminative features tailored to each tumor parameter. A multi-branch neural network processes these features to achieve precise tumor localization and size estimation. Compared to conventional attention mechanisms, the proposed ABFS architecture demonstrates superior prediction accuracy and interpretability. The proposed approach achieves high estimation accuracy and computational efficiency in simulation studies, underscoring the promise of integrating deep learning with conformal microwave imaging for safe, effective, and non-invasive breast cancer detection.
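The sub-band attention idea behind ABFS (softmax weights over frequency sub-bands, so each tumor parameter draws on its most informative bands) can be illustrated minimally. The abstract gives no equations, so the pooling form and the `subband_attention` helper below are assumptions, not the published architecture.

```python
import numpy as np

def subband_attention(band_feats, scores):
    """Softmax-attention pooling over frequency sub-bands.

    band_feats: (n_bands, n_feats) array, e.g. |S-parameter| features per sub-band.
    scores:     (n_bands,) learned relevance score per sub-band.
    Returns the attention-pooled feature vector and the weights; the weights
    indicate which sub-bands the model deems informative for a given parameter.
    """
    w = np.exp(scores - np.max(scores))  # stable softmax
    w /= w.sum()
    return w @ band_feats, w
```

In a multi-branch network, each branch (position, size, depth) would learn its own `scores`, giving per-parameter sub-band selections.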

34 pages, 2669 KB  
Article
A Novel Quantum Epigenetic Algorithm for Adaptive Cybersecurity Threat Detection
by Salam Al-E’mari, Yousef Sanjalawe and Salam Fraihat
AI 2025, 6(8), 165; https://doi.org/10.3390/ai6080165 - 22 Jul 2025
Viewed by 609
Abstract
The escalating sophistication of cyber threats underscores the critical need for intelligent and adaptive intrusion detection systems (IDSs) to identify known and novel attack vectors in real time. Feature selection is a key enabler of performance in machine learning-based IDSs, as it reduces the input dimensionality, enhances the detection accuracy, and lowers the computational latency. This paper introduces a novel optimization framework called Quantum Epigenetic Algorithm (QEA), which synergistically combines quantum-inspired probabilistic representation with biologically motivated epigenetic gene regulation to perform efficient and adaptive feature selection. The algorithm balances global exploration and local exploitation by leveraging quantum superposition for diverse candidate generation while dynamically adjusting gene expression through an epigenetic activation mechanism. A multi-objective fitness function guides the search process by optimizing the detection accuracy, false positive rate, inference latency, and model compactness. The QEA was evaluated across four benchmark datasets (UNSW-NB15, CIC-IDS2017, CSE-CIC-IDS2018, and TON_IoT) and consistently outperformed baseline methods, including Genetic Algorithm (GA), Particle Swarm Optimization (PSO), and Quantum Genetic Algorithm (QGA). Notably, QEA achieved the highest classification accuracy (up to 97.12%), the lowest false positive rates (as low as 1.68%), and selected significantly fewer features (e.g., 18 on TON_IoT) while maintaining near real-time latency. These results demonstrate the robustness, efficiency, and scalability of QEA for real-time intrusion detection in dynamic and resource-constrained cybersecurity environments.
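The quantum-inspired half of QEA (a per-feature probability vector playing the role of qubits, binary mask sampling, and movement toward the best mask found so far) can be sketched generically. The update rate, clipping bounds, and stand-in fitness below are assumptions, and the epigenetic gene-regulation layer is omitted entirely.

```python
import numpy as np

def sample_mask(p, rng):
    """Collapse the per-feature 'qubit' probabilities into a binary feature mask."""
    return (rng.random(p.shape) < p).astype(int)

def update_probs(p, best_mask, rate=0.1, p_min=0.05, p_max=0.95):
    """Rotate each probability toward the best mask found so far,
    clipped away from 0/1 so exploration never fully collapses."""
    return np.clip(p + rate * (best_mask - p), p_min, p_max)

# One generation: sample candidate masks, score them, move toward the best.
p = np.full(6, 0.5)
rng = np.random.default_rng(1)
masks = [sample_mask(p, rng) for _ in range(8)]
# Stand-in fitness: reward features 0-1, penalize the rest (a proxy for
# accuracy minus a model-compactness penalty).
fit = lambda m: m[:2].sum() - 0.5 * m[2:].sum()
p = update_probs(p, max(masks, key=fit))
```

Iterating this loop concentrates probability mass on features that repeatedly appear in high-fitness masks.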
