Search Results (5,474)

Search Parameters:
Keywords = execution time

18 pages, 1981 KB  
Article
Neural Correlates of Belief-Bias Reasoning as Predictors of Critical Thinking: Evidence from an fNIRS Study
by Juanjuan Ma, Wenyu Lv and Xuezhu Ren
J. Intell. 2025, 13(9), 106; https://doi.org/10.3390/jintelligence13090106 - 24 Aug 2025
Abstract
This study examined the neural characteristics of belief-bias reasoning in order to reveal the neurocognitive basis of critical thinking. Functional near-infrared spectroscopy was utilized to capture the real-time brain hemodynamic activity of 74 college students while they performed a belief-bias syllogistic reasoning task. Values of oxy-hemoglobin (oxy-Hb) and deoxy-hemoglobin (deoxy-Hb) in regions of interest were analyzed in relation to critical thinking skills assessed by established tests. The results reveal significant activation in both the opercular part of the right IFC and the left DLPFC when participants encountered situations where their prior beliefs contradicted logical validity during the completion of the belief-bias syllogistic reasoning task. Crucially, individuals with lower levels of critical thinking skills demonstrated heightened activation in the opercular part of the right IFC compared to those with higher levels of critical thinking skills. Furthermore, variations in hemodynamics, quantified by oxy-Hb and deoxy-Hb concentration values (area under the activity curve as absolute value), during the execution of belief-bias reasoning tasks accounted for a substantial proportion of the variability in critical thinking skills. Additionally, the hemodynamic data to a large extent explained the connection between belief-bias reasoning and critical thinking. These results provide a neural explanation for the relationship between belief-bias reasoning and critical thinking, and advance theoretical models of critical thinking by illuminating the brain’s mechanisms engaged in unbiased reasoning and metacognition. Full article
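
The abstract's key predictor — the area under the oxy-/deoxy-Hb activity curve taken as an absolute value — can be illustrated with a short sketch. This is not the authors' pipeline: the sampling rate, the randomly generated traces and scores, and the plain least-squares fit below are placeholder assumptions, so the printed R² only demonstrates the computation.

```python
import numpy as np

def abs_auc(signal, fs):
    """Absolute area under a hemodynamic activity curve sampled at fs Hz
    (trapezoidal rule; the abstract uses |AUC| as the activation measure)."""
    sig = np.asarray(signal, dtype=float)
    return abs(float(np.sum((sig[1:] + sig[:-1]) * 0.5) / fs))

# Hypothetical data: 74 oxy-Hb traces from the reasoning block plus critical-thinking scores.
rng = np.random.default_rng(0)
oxy_hb = rng.normal(0.0, 0.2, size=(74, 300))          # placeholder traces, 10 Hz x 30 s
ct_scores = rng.normal(50.0, 10.0, size=74)            # placeholder test scores

features = np.array([abs_auc(trace, fs=10.0) for trace in oxy_hb])

# Least-squares fit: how much variance in critical thinking the |AUC| feature explains.
X = np.column_stack([np.ones_like(features), features])
beta, *_ = np.linalg.lstsq(X, ct_scores, rcond=None)
resid = ct_scores - X @ beta
r2 = 1.0 - resid.var() / ct_scores.var()
print(f"R^2 of the |AUC| predictor: {r2:.3f}")
```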

16 pages, 3884 KB  
Article
Toward an Augmented Reality Representation of Collision Risks in Harbors
by Mario Miličević, Igor Vujović, Miro Petković and Ana Kuzmanić Skelin
Appl. Sci. 2025, 15(17), 9260; https://doi.org/10.3390/app15179260 - 22 Aug 2025
Viewed by 121
Abstract
In ports with a significant density of non-AIS vessels, there is an increased risk of collisions, because physical limitations restrict the maneuverability of AIS vessels while small vessels without AIS behave unpredictably. To help prevent collisions, we propose an augmented reality system that detects vessels in a video stream and estimates their speed with a single side-mounted camera, with the goal of visualizing a risk-assessment cone. Speed is estimated from the geometric relations between the camera and the ship, which are used to estimate the distance travelled between points over a known time interval. The most important part of the proposal is vessel speed estimation with a monocular camera, validated against a laser speed measurement. This will help port authorities manage collision risks. The system differs from similar trials in that it uses a single stationary camera linked to the port authorities rather than to the bridge crew. Full article
(This article belongs to the Section Marine Science and Engineering)
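
The underlying speed estimate reduces to "displacement between two detections divided by elapsed time". A minimal sketch of that idea is below; the ground-sampling scale (metres per pixel at the vessel's range) is simply assumed known here, whereas the paper derives it from the camera–ship geometry, and the numbers are illustrative.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    cx: float        # bounding-box centre, x (pixels)
    cy: float        # bounding-box centre, y (pixels)
    t: float         # timestamp (s)

def speed_from_detections(d0: Detection, d1: Detection, metres_per_pixel: float) -> float:
    """Estimate vessel speed (m/s) from two detections of the same vessel.

    `metres_per_pixel` is the ground-sampling scale at the vessel's range;
    in practice it comes from camera calibration / geometric relations,
    here it is assumed known."""
    dx = (d1.cx - d0.cx) * metres_per_pixel
    dy = (d1.cy - d0.cy) * metres_per_pixel
    dt = d1.t - d0.t
    return (dx**2 + dy**2) ** 0.5 / dt

# Hypothetical use: two detections 2 s apart, ~0.05 m per pixel at that range.
v = speed_from_detections(Detection(410, 300, 0.0), Detection(482, 305, 2.0), 0.05)
print(f"estimated speed: {v:.2f} m/s ({v * 1.944:.2f} kn)")
```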

20 pages, 5528 KB  
Article
Wearable Smart Gloves for Optimization Analysis of Disassembly and Assembly of Mechatronic Machines
by Chin-Shan Chen, Hung Wei Chang and Bo-Chen Jiang
Sensors 2025, 25(17), 5223; https://doi.org/10.3390/s25175223 - 22 Aug 2025
Viewed by 168
Abstract
With the rapid development of smart manufacturing, the optimization of real-time monitoring in operating procedures has become a crucial issue in modern industry. Traditional disassembly and assembly (D/A) work, relying on human experience and visual inspection, lacks immediacy and a quantitative basis, which affects operating quality and efficiency. This study develops a wearable device integrating thin-film force sensors and inertial measurement units (IMUs) for monitoring and analyzing operators’ behavioral characteristics during D/A tasks. First, operators wear the self-made smart gloves together with 17 IMU sensors and perform D/A experiments on a mechatronic machine placed on work tables of three different heights. Common D/A motions are designed into the experiment. Several subjects execute the standardized operating procedure while data on their hand gestures and upper-limb movements are collected. The measured data are then used to verify a performance measure for the functional best path of machine D/A. The results reveal that the system can effectively identify various D/A motions and observe differences in operators’ applied force and motion patterns, which, through performance-indicator optimization and verification by data analysis, provide a reference for best-path planning, D/A sequencing, and work-table height design in the machine D/A process. The optimal workbench height for a standing operator is 5 to 10 cm above their elbow height. Performing assembly and disassembly tasks at this optimal height can help the operator save between 14.3933% and 35.2579% of physical effort. Such outcomes can aid D/A behavior monitoring in industry, worker training, and operational optimization, and can be extended to instant-feedback design for automation and smartization in a smart factory. Full article

23 pages, 2723 KB  
Article
Dairy DigiD: An Edge-Cloud Framework for Real-Time Cattle Biometrics and Health Classification
by Shubhangi Mahato and Suresh Neethirajan
AI 2025, 6(9), 196; https://doi.org/10.3390/ai6090196 - 22 Aug 2025
Viewed by 204
Abstract
Digital livestock farming faces a critical deployment challenge: bridging the gap between cutting-edge AI algorithms and practical implementation in resource-constrained agricultural environments. While deep learning models demonstrate exceptional accuracy in laboratory settings, their translation to operational farm systems remains limited by computational constraints, connectivity issues, and user accessibility barriers. Dairy DigiD addresses these challenges through a novel edge-cloud AI framework integrating YOLOv11 object detection with DenseNet121 physiological classification for cattle monitoring. The system employs YOLOv11-nano architecture optimized through INT8 quantization (achieving 73% model compression with <1% accuracy degradation) and TensorRT acceleration, enabling 24 FPS real-time inference on NVIDIA Jetson edge devices while maintaining 94.2% classification accuracy. Our key innovation lies in intelligent confidence-based offloading: routine detections execute locally at the edge, while ambiguous cases trigger cloud processing for enhanced accuracy. An entropy-based active learning pipeline using Roboflow reduces the annotation overhead by 65% while preserving 97% of the model performance. The Gradio interface democratizes system access, reducing technician training requirements by 84%. Comprehensive validation across ten commercial dairy farms in Atlantic Canada demonstrates robust performance under diverse environmental conditions (seasonal, lighting, weather variations). The framework achieves mAP@50 of 0.947 with balanced precision-recall across four physiological classes, while consuming 18% less energy than baseline implementations through attention-based optimization. Rather than proposing novel algorithms, this work contributes a systems-level integration methodology that transforms research-grade AI into deployable agricultural solutions. Our open-source framework provides a replicable blueprint for precision livestock farming adoption, addressing practical barriers that have historically limited AI deployment in agricultural settings. Full article
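
The confidence-based offloading described above boils down to a threshold rule: run the edge model first and only send ambiguous frames to the cloud. The sketch below illustrates that pattern; the function names (`edge_model.predict`, `cloud_client.predict`) and the 0.80 threshold are placeholders, not the Dairy DigiD API.

```python
CONFIDENCE_THRESHOLD = 0.80   # assumed cut-off between "routine" and "ambiguous"

def classify_frame(frame, edge_model, cloud_client):
    """Confidence-based offloading: edge first, cloud only for ambiguous cases."""
    label, confidence = edge_model.predict(frame)     # e.g. quantized detector + classifier on the Jetson
    if confidence >= CONFIDENCE_THRESHOLD:
        return label, confidence, "edge"              # routine detection stays local
    # Ambiguous case: offload the frame (or its crop) to a heavier cloud model.
    label, confidence = cloud_client.predict(frame)
    return label, confidence, "cloud"
```

The threshold is the design knob: raising it shifts more frames to the cloud (higher accuracy, higher latency and bandwidth), lowering it keeps more inference on the edge device.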

19 pages, 659 KB  
Review
Cyber-Attacks on Energy Infrastructure—A Literature Overview and Perspectives on the Current Situation
by Doney Abraham, Siv Hilde Houmb and Laszlo Erdodi
Appl. Sci. 2025, 15(17), 9233; https://doi.org/10.3390/app15179233 - 22 Aug 2025
Viewed by 96
Abstract
Advanced Persistent Threats (APTs) are stealthy multi-step attacks, often executed over an extensive time period and tailored to a specific target. APTs represent a “low and slow” type of cyberattack, meaning that they most often remain undetected until the consequences of the attack become evident. Energy infrastructure, including power grids, oil and gas infrastructure, offshore wind installations, etc., forms the basis of a modern digital nation. In addition to the loss of power itself, financial systems, banking systems, digital national services, etc., become non-operational without electricity. Loss of power from an APT cyberattack could therefore result in loss of life and create digital chaos: digital payments become unavailable, digital identification is affected, and even POS terminals must run on emergency power, which is limited in time, making it difficult to pay for food and beverages. Examples of APTs targeting energy infrastructures include Triton, which in 2017 aimed to manipulate the safety systems of a petrochemical plant in Saudi Arabia, potentially leading to catastrophic physical consequences, and the Industroyer2 malware attack in 2022, which targeted a Ukrainian energy provider in an attempt to disrupt operations. The paper combines APT knowledge with energy infrastructure domain expertise, focusing on technical aspects while also providing perspectives on the societal consequences that could result from APTs. Full article
(This article belongs to the Special Issue Cyber-Physical Systems Security: Challenges and Approaches)

17 pages, 396 KB  
Article
Neural Network-Based Approaches for Predicting Construction Overruns with Sustainability Considerations
by Kristina Galjanić, Ivan Marović and Tomaš Hanak
Sustainability 2025, 17(16), 7559; https://doi.org/10.3390/su17167559 - 21 Aug 2025
Viewed by 222
Abstract
This research focuses on developing neural network-based models for predicting time and cost overruns in construction projects during the construction phase, incorporating sustainability considerations. Previous studies have identified seven key performance areas that affect the final outcome: productivity, quality, time, cost, safety, team satisfaction, and client satisfaction. Although the interconnections among these performance areas are recognized, their exact relationships and impacts are not fully understood. Hence, the use of neural networks proves highly beneficial in predicting the outcome of future construction projects, as they can learn from data and identify patterns without requiring a complete understanding of these mutual influences. The neural network was trained and tested on data collected from five completed construction projects, each analyzed at three distinct stages of execution. A total of 182 experiments were conducted to explore different neural network architectures. The most effective configurations for predicting time and cost overruns were identified and evaluated, demonstrating the potential of neural network-based approaches to support more sustainable and proactive project management. The time overrun prediction model demonstrated high accuracy, achieving a MAPE of 10.93%, an RMSE of 0.128, and a correlation of 0.979. While the cost overrun model showed lower predictive accuracy, its MAPE (166.76%), RMSE (0.4179), and correlation (0.936) values indicate potential for further refinement. These findings highlight the applicability of neural network-based approaches in construction project management and their potential to support more proactive and informed decision-making. Full article
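
The error metrics reported above (MAPE, RMSE, correlation) are straightforward to compute; a short sketch is given below. The overrun ratios used in the example are placeholders, not the study's data.

```python
import numpy as np

def mape(y_true, y_pred):
    """Mean absolute percentage error (in %)."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))

def rmse(y_true, y_pred):
    """Root-mean-square error."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

# Placeholder time-overrun ratios for a few project stages (not the paper's data).
actual    = [0.12, 0.30, 0.45, 0.20, 0.55]
predicted = [0.10, 0.33, 0.41, 0.22, 0.60]

print(f"MAPE: {mape(actual, predicted):.2f}%")
print(f"RMSE: {rmse(actual, predicted):.4f}")
print(f"corr: {np.corrcoef(actual, predicted)[0, 1]:.3f}")
```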

36 pages, 5771 KB  
Article
Improving K-Means Clustering: A Comparative Study of Parallelized Version of Modified K-Means Algorithm for Clustering of Satellite Images
by Yuv Raj Pant, Larry Leigh and Juliana Fajardo Rueda
Algorithms 2025, 18(8), 532; https://doi.org/10.3390/a18080532 - 21 Aug 2025
Viewed by 216
Abstract
Efficient clustering of high-spatial-dimensional satellite image datasets remains a critical challenge, particularly due to the computational demands of spectral distance calculations, random centroid initialization, and sensitivity to outliers in conventional K-Means algorithms. This study presents a comprehensive comparative analysis of eight parallelized variants of the K-Means algorithm, designed to enhance clustering efficiency and reduce the computational burden of large-scale satellite image analysis. The proposed parallelized implementations incorporate optimized centroid initialization for better starting-point selection, a dynamic K-Means sharp method to detect outliers and improve cluster robustness, and a Nearest-Neighbor Iteration Calculation Reduction method to minimize redundant computations. These enhancements were applied to a test set of 114 global land cover data cubes, each comprising high-dimensional satellite images of size 3712 × 3712 × 16, and executed on a multi-core CPU architecture to leverage extensive parallel processing capabilities. Performance was evaluated across three criteria: convergence speed (iterations), computational efficiency (execution time), and clustering accuracy (RMSE). The Parallelized Enhanced K-Means (PEKM) method achieved the fastest convergence at 234 iterations and the lowest execution time of 4230 h, while maintaining consistent RMSE values (0.0136) across all algorithm variants. These results demonstrate that targeted algorithmic optimizations, combined with effective parallelization strategies, can improve the practicality of K-Means clustering for high-dimensional satellite image analysis. This work underscores the potential of improving K-Means clustering frameworks beyond hardware acceleration alone, offering scalable solutions for large-scale unsupervised image classification tasks. Full article
(This article belongs to the Special Issue Algorithms in Multi-Sensor Imaging and Fusion)
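
A minimal sketch of the parallelization idea on a multi-core CPU: the expensive distance/assignment step is split across worker processes while the centroid update stays serial. The chunking scheme and simple random seeding below are generic choices, not the paper's PEKM variant with its sharp-method outlier handling and iteration-reduction step.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def assign_chunk(args):
    """Distance/assignment step for one chunk of pixel vectors (runs in a worker)."""
    chunk, centroids = args
    d = ((chunk[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1)

def parallel_kmeans(X, k, n_workers=8, n_iter=50, seed=0):
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    chunks = np.array_split(X, n_workers)
    with ProcessPoolExecutor(max_workers=n_workers) as pool:
        for _ in range(n_iter):
            labels = np.concatenate(
                list(pool.map(assign_chunk, [(c, centroids) for c in chunks])))
            new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                            else centroids[j] for j in range(k)])
            if np.allclose(new, centroids):
                break
            centroids = new
    return centroids, labels

# e.g. pixels = data_cube.reshape(-1, 16); call parallel_kmeans(pixels, k=10) inside an
# `if __name__ == "__main__":` guard, since process-based workers re-import the module.
```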

21 pages, 647 KB  
Review
Neuroplasticity of Brain Networks Through Exercise: A Narrative Review About Effect of Types, Intensities, and Durations
by Carlotta Rosso, Paolo Riccardo Brustio, Jordi Manuello and Alberto Rainoldi
Sports 2025, 13(8), 280; https://doi.org/10.3390/sports13080280 - 21 Aug 2025
Viewed by 188
Abstract
(1) Background: Recent decades have seen growing interest in neuroplasticity and the activity-dependent mechanisms that allow Brain Networks to adapt functionally. Among the various stimuli, physical exercise has emerged as a key modulator of brain plasticity. This narrative review aims to synthesize evidence on the structural and functional effects of physical exercise on the brain in healthy individuals aged 18–80 years. Exercise modalities were categorized into Cardiovascular, Strength, and Mixed Training. Each was further classified by intensity (Light-to-Moderate vs. Vigorous) and duration (Short- vs. Long-Term). A total of 25 interventions were analyzed to evaluate how these variables influence Brain Networks. Findings indicate that exercise type, intensity, and duration collectively modulate neuroplastic responses. Notably, physical training induces structural and functional changes in major Brain Networks, including the Default Mode Network, Salience Network, Central Executive Network, Visuospatial Network, Sensorimotor Network, and Language and Auditory Networks. These results underscore the potential of physical exercise as an effective non-pharmacological strategy to enhance brain health and plasticity across the adult lifespan. This narrative review aims to highlight the effects of physical exercise in changing the brain either functionally or structurally; the most relevant exercise training modalities that may improve or change neural networks in healthy populations (18–80 years) are also discussed. (2) Methods: Three different types of exercise were considered: (i) Cardiovascular, (ii) Strength, and (iii) Mixed Exercise. For each of them, two levels of intensity (Light-to-Moderate and Vigorous) and two durations (Short-Term and Long-Term Effects) were included. By analyzing 25 interventions, indications about the effects on the brain considering the three factors (type of exercise, intensity, and duration) were provided. (3) Results: The findings suggest that the type of exercise, its intensity, and its duration could lead to neural modifications over time. Specifically, exercise interventions contribute to both structural and functional changes in brain regions located in key Brain Networks, including the Default Mode Network, Salience Network, Central Executive Network, Visuospatial Network, Sensorimotor Network, and Language and Auditory Networks. (4) Conclusions: The evidence presented herein underscores the beneficial effects of physical exercise on the structural and functional integrity of the brain, highlighting its importance as a non-pharmacological intervention to improve brain plasticity. Full article

19 pages, 991 KB  
Article
Enhancing Machine Learning-Based DDoS Detection Through Hyperparameter Optimization
by Shao-Rui Chen, Shiang-Jiun Chen and Wen-Bin Hsieh
Electronics 2025, 14(16), 3319; https://doi.org/10.3390/electronics14163319 - 20 Aug 2025
Viewed by 164
Abstract
In recent years, the occurrence and complexity of Distributed Denial of Service (DDoS) attacks have escalated significantly, posing threats to the availability, performance, and security of networked systems. With the rapid progression of Artificial Intelligence (AI) and Machine Learning (ML) technologies, attackers can leverage intelligent tools to automate and amplify DDoS attacks with minimal human intervention. The increasing sophistication of such attacks highlights the pressing need for more robust and precise detection methodologies. This research proposes a method to enhance the effectiveness of ML models in detecting DDoS attacks through hyperparameter tuning: by optimizing model parameters, the proposed approach improves the performance of ML models in identifying DDoS attacks. The CIC-DDoS2019 dataset is utilized in this study as it offers a comprehensive set of real-world DDoS attack scenarios across various protocols and services. The proposed methodology comprises key stages including data preprocessing, data splitting, and model training, validation, and testing. Three ML models are trained and tuned using an adaptive GridSearchCV (Cross Validation) strategy to identify optimal parameter configurations. The results demonstrate that our method significantly improves performance and efficiency compared with the general GridSearchCV. The SVM model achieves 99.87% testing accuracy and requires approximately 28% less execution time than the general GridSearchCV. The LR model achieves 99.6830% testing accuracy with an execution time of 16.90 s, maintaining the same testing accuracy while reducing the execution time by about 22.8%. The KNN model achieves 99.8395% testing accuracy with an execution time of 2388.89 s, also preserving accuracy while decreasing the execution time by approximately 63%. These results indicate that our approach enhances DDoS detection performance and efficiency, offering novel insights into the practical application of hyperparameter tuning for improving ML model performance in real-world scenarios. Full article
(This article belongs to the Special Issue Advancements in AI-Driven Cybersecurity and Securing AI Systems)
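
The tuning stage maps directly onto scikit-learn's GridSearchCV; the sketch below shows the pattern for the SVM model. The synthetic data, the parameter grid, the scaling step, and the 80/20 split are assumptions for illustration — the paper's adaptive grid strategy and exact search space are not reproduced here.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Stand-in data; in the paper this would be preprocessed CIC-DDoS2019 flow features.
X, y = make_classification(n_samples=2000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

pipe = Pipeline([("scale", StandardScaler()), ("svm", SVC())])
param_grid = {                                   # illustrative grid, not the paper's search space
    "svm__C": [0.1, 1, 10],
    "svm__kernel": ["rbf", "linear"],
    "svm__gamma": ["scale", 0.01],
}

search = GridSearchCV(pipe, param_grid, cv=5, scoring="accuracy", n_jobs=-1)
search.fit(X_train, y_train)
print("best parameters:", search.best_params_)
print("test accuracy:", search.score(X_test, y_test))
```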

17 pages, 3205 KB  
Communication
Switched Modeling and Sampled Switching Control for DC-DC Boost Converters with Uncertainty
by Haojie Lin and Xuyang Lou
Modelling 2025, 6(3), 86; https://doi.org/10.3390/modelling6030086 - 20 Aug 2025
Viewed by 89
Abstract
In this paper, a switched model for DC-DC boost converters with modeling uncertainty is considered. Based on the switched model, a continuous switching control law is first designed to guarantee the robust stability of the closed-loop system. Then, to reduce the amount of data transmission and ease the communication burden, a sampled-data switching control law is explored, where the switching action is executed based on a state-dependent condition at each sampling instant. The proposed control strategies can track a specific reference point as well as varying reference points in the presence of modeling uncertainty. Finally, the simulation results show that the proposed sampled switching control reduces steady-state errors and yields a significantly smoother transient response. These results confirm the effectiveness and practical potential of the proposed approach. Full article
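
To make the "switched model plus sampled switching decision" idea concrete, here is a toy simulation of the standard boost-converter switched equations, with the switch position re-evaluated only at sampling instants. The component values, the reference, and the crude current-mode rule (switch on while the inductor current is below a power-balance reference) are illustrative assumptions, not the paper's robust control law.

```python
import numpy as np

# Illustrative parameters (not taken from the paper).
Vin, L, C, R = 12.0, 1.0e-3, 470e-6, 20.0     # source voltage, inductance, capacitance, load
Vref = 24.0                                    # desired output voltage
dt, Ts = 1e-6, 50e-6                           # integration step and sampling period
iL_ref = Vref**2 / (R * Vin)                   # current reference from lossless power balance

def switched_model(x, u):
    """Boost converter in switched form: u = 1 closes the switch, u = 0 opens it."""
    iL, vC = x
    diL = (Vin - (1 - u) * vC) / L
    dvC = ((1 - u) * iL - vC / R) / C
    return np.array([diL, dvC])

x, u = np.array([0.0, 0.0]), 0
steps_per_sample = int(Ts / dt)
for k in range(int(0.05 / dt)):                # simulate 50 ms
    if k % steps_per_sample == 0:              # switching decided only at sampling instants
        u = 1 if x[0] < iL_ref else 0          # simple state-dependent (current-mode) rule
    x = x + dt * switched_model(x, u)          # forward-Euler step of the active mode

print(f"output voltage after 50 ms: {x[1]:.2f} V (reference {Vref} V)")
```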

27 pages, 6145 KB  
Article
Multi-Voyage Path Planning for River Crab Aquaculture Feeding Boats
by Yueping Sun, Peixuan Guo, Yantong Wang, Jinkai Shi, Ziheng Zhang and De’an Zhao
Fishes 2025, 10(8), 420; https://doi.org/10.3390/fishes10080420 - 20 Aug 2025
Viewed by 230
Abstract
In crab pond environments, obstacles such as long aerobic pipelines, aerators, and ground cages are usually sparsely distributed. Automatic feeding boats can navigate while avoiding obstacles and execute feeding tasks along planned paths, thus improving feeding quality and operational efficiency. In large-scale crab pond farming, a single feeding operation often fails to achieve the complete coverage of the bait casting task due to the limited boat load. Therefore, this study proposes a multi-voyage path planning scheme for feeding boats. Firstly, a complete coverage path planning algorithm is proposed based on an improved genetic algorithm to achieve the complete coverage of the bait casting task. Secondly, to address the issue of an insufficient bait loading capacity in complete coverage operations, which requires the feeding boat to return to the loading wharf several times to replenish bait, a multi-voyage path planning algorithm is proposed. The return point of the feeding operation is predicted by the algorithm. Subsequently, the improved Q-Learning algorithm (I-QLA) is proposed to plan the optimal multi-voyage return paths by increasing the exploration of the diagonal direction, refining the reward mechanism and dynamically adjusting the exploration rate. The simulation results show that compared with the traditional genetic algorithm, the repetition rate, path length, and the number of 90° turns of the complete coverage path planned by the improved genetic algorithm are reduced by 59.62%, 1.27%, and 28%, respectively. Compared with the traditional Q-Learning algorithm, average path length, average number of turns, average training time, and average number of iterations planned by the I-QLA are reduced by 20.84%, 74.19%, 48.27%, and 45.08%, respectively. The crab pond experimental results show that compared with the Q-Learning algorithm, the path length, turning times, and energy consumption of the I-QLA algorithm are reduced by 29.7%, 77.8%, and 39.6%, respectively. This multi-voyage method enables efficient, low-energy, and precise feeding for crab farming. Full article
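
The ingredients the abstract highlights for the improved Q-learning (I-QLA) — exploring diagonal moves, a shaped reward, and a dynamically decaying exploration rate — can be sketched on a toy grid as below. Grid size, reward values, and the decay schedule are illustrative placeholders, not the I-QLA parameters or the crab-pond map.

```python
import numpy as np

rng = np.random.default_rng(1)
GRID, GOAL = 10, (9, 9)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1),        # 4-connected moves
           (-1, -1), (-1, 1), (1, -1), (1, 1)]      # plus the diagonal moves the I-QLA adds

Q = np.zeros((GRID, GRID, len(ACTIONS)))
alpha, gamma = 0.1, 0.95
eps, eps_min, eps_decay = 1.0, 0.05, 0.995          # dynamically decaying exploration rate

def step(state, a):
    r, c = state[0] + ACTIONS[a][0], state[1] + ACTIONS[a][1]
    if not (0 <= r < GRID and 0 <= c < GRID):
        return state, -5.0, False                   # shaped penalty for leaving the area
    if (r, c) == GOAL:
        return (r, c), 100.0, True                  # reward for reaching the return point
    return (r, c), -1.4 if a >= 4 else -1.0, False  # diagonal step costs ~sqrt(2)

for episode in range(2000):
    s = (0, 0)
    for _ in range(400):                            # cap episode length
        a = int(rng.integers(len(ACTIONS))) if rng.random() < eps else int(np.argmax(Q[s]))
        s2, r, done = step(s, a)
        Q[s][a] += alpha * (r + gamma * np.max(Q[s2]) * (not done) - Q[s][a])
        s = s2
        if done:
            break
    eps = max(eps_min, eps * eps_decay)             # shrink exploration as training proceeds
```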

29 pages, 2673 KB  
Article
DARTPHROG: A Superscalar Homomorphic Accelerator
by Alexander Magyari and Yuhua Chen
Sensors 2025, 25(16), 5176; https://doi.org/10.3390/s25165176 - 20 Aug 2025
Viewed by 334
Abstract
Fully Homomorphic Encryption (FHE) allows a client to share their data with an external server without ever exposing those data. FHE serves as a potential solution for data breaches and the marketing of users’ private data. Unfortunately, FHE is much slower than conventional asymmetric cryptography, where data are encrypted only between endpoints. Within this work, we propose the Dynamic AcceleRaTor for Parallel Homomorphic pROGrams, DARTPHROG, as a potential tool for accelerating FHE. DARTPHROG is a superscalar architecture, allowing multiple homomorphic operations to be executed in parallel. Furthermore, DARTPHROG is the first design to utilize the new Hardware Optimized Modular-Reduction (HOM-R) system, showcasing its efficiency in comparison with Barrett and Montgomery reduction. Coming in at 40.5 W, DARTPHROG is one of the smaller architectures for FHE acceleration. Our architecture offers speedups of up to 1860 times for primitive FHE operations such as ciphertext/plaintext and ciphertext/ciphertext addition, subtraction, and multiplication when operations are performed in parallel using the superscalar feature of DARTPHROG. The DARTPHROG system implements an assembler, a unique instruction set based on THUMB, and a homomorphic processor implemented on a Field Programmable Gate Array (FPGA). DARTPHROG is also the first superscalar evaluation of homomorphic operations with the Number Theoretic Transform (NTT) excluded from the design; our processor can therefore be used as a base case when weighing the resource and execution impact of NTT implementations. Full article
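
For context on the modular-reduction comparison mentioned above, here is a plain software sketch of textbook Barrett reduction — the baseline the abstract compares against, not the proposed HOM-R unit. The example modulus is the well-known Goldilocks prime; its use here is purely illustrative.

```python
class BarrettReducer:
    """Textbook Barrett reduction: modular reduction by a fixed modulus using
    one precomputed constant, shifts, and multiplications instead of division."""

    def __init__(self, modulus: int):
        self.m = modulus
        self.k = modulus.bit_length()
        self.mu = (1 << (2 * self.k)) // modulus     # precomputed floor(2^(2k) / m)

    def reduce(self, x: int) -> int:
        """Valid for 0 <= x < m**2, e.g. the product of two residues."""
        q = ((x >> (self.k - 1)) * self.mu) >> (self.k + 1)  # estimate of floor(x / m)
        r = x - q * self.m
        while r >= self.m:                           # the estimate undershoots by at most a few m
            r -= self.m
        return r

# The "Goldilocks" prime 2^64 - 2^32 + 1, common in NTT-based lattice/FHE code.
m = (1 << 64) - (1 << 32) + 1
red = BarrettReducer(m)
a, b = 0x123456789ABCDEF0, 0x0FEDCBA987654321
assert red.reduce(a * b) == (a * b) % m
```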

25 pages, 2133 KB  
Article
Blockchain-Enabled Self-Autonomous Intelligent Transport System for Drone Task Workflow in Edge Cloud Networks
by Pattaraporn Khuwuthyakorn, Abdullah Lakhan, Arnab Majumdar and Orawit Thinnukool
Algorithms 2025, 18(8), 530; https://doi.org/10.3390/a18080530 - 20 Aug 2025
Viewed by 165
Abstract
In recent years, self-autonomous intelligent transportation applications such as drones and autonomous vehicles have seen rapid development and deployment across various countries. Within the domain of artificial intelligence, self-autonomous agents are defined as software entities capable of independently operating drones in an intelligent transport system (ITS) without human intervention. The integration of these agents into autonomous vehicles and their deployment across distributed cloud networks have increased significantly. These systems, which include drones, ground vehicles, and aircraft, are used to perform a wide range of tasks such as delivering passengers and packages within defined operational boundaries. Despite their growing utility, practical implementations face significant challenges stemming from the heterogeneity of network resources, as well as persistent issues related to security, privacy, and processing costs. To overcome these challenges, this study proposes a novel blockchain-enabled self-autonomous intelligent transport system designed for drone workflow applications. The proposed system architecture is based on a remote method invocation (RMI) client–server model and incorporates a serverless computing framework to manage processing costs. Termed the self-autonomous blockchain-enabled cost-efficient system (SBECES), the framework integrates a client and system agent mechanism governed by Q-learning and deep-learning-based policies. Furthermore, it incorporates a blockchain-based hash validation and fault-tolerant (HVFT) mechanism to ensure data integrity and operational reliability. A deep reinforcement learning (DRL)-enabled adaptive scheduler is utilized to manage drone workflow execution while meeting quality of service (QoS) constraints, including deadlines, cost-efficiency, and security. The overarching objective of this research is to minimize the total processing costs that comprise execution, communication, and security overheads, while maximizing operational rewards and ensuring the timely execution of drone-based tasks. Experimental results demonstrate that the proposed system achieves a 30% reduction in processing costs and a 29% improvement in security and privacy compared to existing state-of-the-art solutions. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
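
The hash-validation side of a blockchain-based integrity mechanism like HVFT reduces to chaining block hashes and re-verifying the links; a generic sketch with SHA-256 follows. The block fields and linking rule are standard blockchain conventions, not the paper's exact data structure or consensus logic.

```python
import hashlib, json, time

def block_hash(block: dict) -> str:
    """Deterministic hash of a block's contents (excluding its own hash field)."""
    payload = json.dumps({k: v for k, v in block.items() if k != "hash"}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def append_block(chain: list, task_record: dict) -> None:
    block = {
        "index": len(chain),
        "timestamp": time.time(),
        "task": task_record,                          # e.g. a drone task result or offloading log
        "prev_hash": chain[-1]["hash"] if chain else "0" * 64,
    }
    block["hash"] = block_hash(block)
    chain.append(block)

def validate_chain(chain: list) -> bool:
    """Re-hash every block and check the links; any tampering breaks validation."""
    for i, block in enumerate(chain):
        if block["hash"] != block_hash(block):
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

chain: list = []
append_block(chain, {"drone": "UAV-1", "task": "package-delivery", "status": "done"})
append_block(chain, {"drone": "UAV-2", "task": "passenger-pickup", "status": "done"})
print(validate_chain(chain))   # True; altering any stored field turns this False
```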

19 pages, 565 KB  
Article
Dynamic Recovery and a Resilience Metric for UAV Swarms Under Attack
by Tianzhen Hu, Yan Zong, Ningyun Lu and Bin Jiang
Drones 2025, 9(8), 589; https://doi.org/10.3390/drones9080589 - 20 Aug 2025
Viewed by 110
Abstract
Unmanned aerial swarms are attracting widespread interest in fields such as disaster response, environmental monitoring, and agriculture. However, there is still a lack of effective recovery strategies and comprehensive performance metrics for UAV swarms facing communication attacks, especially for capturing dynamic recovery. The aim of this study is to recover a UAV swarm that has been split and disconnected by attacks. A dynamic recovery method is proposed by establishing the relationship between algebraic connectivity and consensus speed. The proposed method enables each UAV to selectively establish communication links with responsive UAVs, rather than linking with all neighbours within communication range, thereby reducing communication cost. Based on this, a set of performance indexes is introduced, considering factors such as consensus ability, communication efficiency, mission execution, and resource consumption. Furthermore, a resilience metric is proposed to quantitatively assess the efficiency of recovery and consensus transition, providing a comprehensive measure of the ability to reach consensus after attacks. Simulations utilizing the second-order consensus protocol and dynamics validate that the consensus speed of the proposed recovery method is 18.88% faster than random recovery. The proposed resilience metric captures the change in the time from recovery to the new consensus state, and the resilience of the proposed recovery method is 66.99% higher than that of random recovery. Full article
(This article belongs to the Collection Drones for Security and Defense Applications)
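
The quantity the recovery strategy leans on — algebraic connectivity, i.e. the second-smallest eigenvalue of the graph Laplacian, which bounds consensus convergence speed — is a few lines of NumPy to compute. The 5-UAV adjacency matrix below is a placeholder topology, not the simulated swarm.

```python
import numpy as np

def algebraic_connectivity(adj: np.ndarray) -> float:
    """Second-smallest eigenvalue of the graph Laplacian L = D - A.
    Larger values mean faster consensus; 0 means the graph is disconnected."""
    laplacian = np.diag(adj.sum(axis=1)) - adj
    eigvals = np.sort(np.linalg.eigvalsh(laplacian))
    return float(eigvals[1])

# Placeholder 5-UAV topology: a chain of communication links.
A = np.array([
    [0, 1, 0, 0, 0],
    [1, 0, 1, 0, 0],
    [0, 1, 0, 1, 0],
    [0, 0, 1, 0, 1],
    [0, 0, 0, 1, 0],
], dtype=float)
print(f"lambda_2 (chain): {algebraic_connectivity(A):.3f}")

A[0, 4] = A[4, 0] = 1          # candidate recovery link closing the chain into a ring
print(f"lambda_2 (after recovery link): {algebraic_connectivity(A):.3f}")
```

Comparing the connectivity before and after a candidate link is the kind of test a selective recovery rule can use to add only the links that most speed up consensus.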

16 pages, 1586 KB  
Article
A Multi-Agent Deep Reinforcement Learning Anti-Jamming Spectrum-Access Method in LEO Satellites
by Wenting Cao, Feihuang Chu, Luliang Jia, Hongyu Zhou and Yunfan Zhang
Electronics 2025, 14(16), 3307; https://doi.org/10.3390/electronics14163307 - 20 Aug 2025
Viewed by 314
Abstract
Low-Earth-orbit (LEO) satellite networks face significant vulnerabilities to malicious jamming and co-channel interference, compounded by dynamic topologies, resource constraints, and complex electromagnetic environments. Traditional anti-jamming approaches lack adaptability, centralized intelligent methods incur high overhead, and distributed intelligent methods fail to achieve global optimization. To address these limitations, this paper proposes a value decomposition network (VDN)-based multi-agent deep reinforcement learning (DRL) anti-jamming spectrum-access approach with a centralized-training and distributed-execution architecture. Following offline centralized ground-based training, the model is deployed in a distributed manner on the satellites for real-time spectrum-access decision-making. The simulation results demonstrate that the proposed method effectively balances training costs with anti-jamming performance, achieving near-optimal user satisfaction (approximately 97%) with minimal link overhead and confirming its effectiveness for resource-constrained LEO satellite networks. Full article
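
The core of a VDN-style method is that the joint action value is trained as the sum of per-agent values, so each agent can act on its own value function at execution time. Below is a toy tabular sketch of that decomposition; the real approach uses neural networks and a satellite spectrum-access environment, so the sizes, observations, and rewards here are placeholders.

```python
import numpy as np

N_AGENTS, N_OBS, N_ACTIONS = 3, 8, 4        # toy sizes, not the satellite scenario
alpha, gamma = 0.1, 0.9
rng = np.random.default_rng(0)

# One value table per agent; the VDN decomposition: Q_tot = sum_i Q_i(o_i, a_i)
Q = [np.zeros((N_OBS, N_ACTIONS)) for _ in range(N_AGENTS)]

def act(obs, eps=0.1):
    """Distributed execution: each agent acts greedily on its own table only."""
    return [int(rng.integers(N_ACTIONS)) if rng.random() < eps else int(np.argmax(Q[i][obs[i]]))
            for i in range(N_AGENTS)]

def centralized_update(obs, actions, reward, next_obs, done):
    """Centralized training: one shared TD error on Q_tot, pushed into every agent's table."""
    q_tot = sum(Q[i][obs[i], actions[i]] for i in range(N_AGENTS))
    next_q_tot = sum(np.max(Q[i][next_obs[i]]) for i in range(N_AGENTS))
    td_error = reward + gamma * next_q_tot * (not done) - q_tot
    for i in range(N_AGENTS):
        Q[i][obs[i], actions[i]] += alpha * td_error
```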
