Search Results (1,785)

Search Parameters:
Keywords = deep-learning platform

24 pages, 5938 KB  
Article
Fault Diagnosis of 2RRU-RRS Parallel Robots Based on Multi-Scale Efficient Channel Attention Residual Network
by Shuxiang He, Wei Ye, Ying Zhang, Shanyi Liu, Zhen Wu and Lingmin Xu
Symmetry 2026, 18(4), 622; https://doi.org/10.3390/sym18040622 (registering DOI) - 8 Apr 2026
Abstract
Parallel robots are widely applied in many fields because of their unique advantages. To ensure their operational safety and reduce maintenance costs, designing an accurate and reliable fault diagnosis method is essential. Focusing on the 2RRU-RRS parallel robot, this paper proposes an intelligent fault diagnosis method based on a multi-scale convolutional residual network integrated with an Efficient Channel Attention mechanism (MS-ECA-ResNet). Firstly, to fully retain the time-frequency features of the signals, the one-dimensional vibration signals are converted into two-dimensional images using the Continuous Wavelet Transform (CWT). Secondly, a multi-scale convolutional feature extraction structure is designed to enhance the model’s feature extraction ability at different time scales. Furthermore, the ECA mechanism is introduced into the residual network to reinforce important feature channels and suppress noise interference. Comparative experiments, noise environment experiments, and ablation experiments were conducted on a 2RRU-RRS parallel robot experimental platform with a vibration signal dataset. The results demonstrate that the proposed method achieves superior diagnostic accuracy and robustness compared to typical deep learning models, particularly in maintaining high performance under simulated noise conditions. This provides a preliminary validation of the method’s effectiveness in capturing fault-related impacts, offering a potential technical reference for the health monitoring of parallel robots in real-world scenarios.
(This article belongs to the Special Issue Symmetry in Intelligent Spindle Modelling and Vibration Analysis)
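
The channel-attention step this abstract describes is compact enough to sketch. Below is a minimal ECA block of the kind MS-ECA-ResNet builds on, written in PyTorch; the kernel size and tensor shapes are illustrative assumptions, not the authors' code.

```python
# Minimal sketch of an Efficient Channel Attention (ECA) block; kernel size
# and shapes are illustrative, not taken from the paper.
import torch
import torch.nn as nn

class ECABlock(nn.Module):
    """Channel attention via a 1D conv over globally pooled channel descriptors."""
    def __init__(self, kernel_size: int = 3):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.conv = nn.Conv1d(1, 1, kernel_size, padding=kernel_size // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, H, W) -> channel descriptor (batch, 1, channels)
        y = self.pool(x).squeeze(-1).transpose(-1, -2)
        # conv across channels, then back to (batch, channels, 1, 1) weights
        y = self.sigmoid(self.conv(y)).transpose(-1, -2).unsqueeze(-1)
        return x * y  # reweight important channels, suppress noisy ones

# Example: reweight a residual-branch feature map
feat = torch.randn(4, 64, 32, 32)
print(ECABlock()(feat).shape)  # torch.Size([4, 64, 32, 32])
```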

20 pages, 6792 KB  
Article
PER-TD3 Integrated with HER Mechanism: Improving Training Efficiency and Control Accuracy for PEMFC Differential Pressure Control
by Yuan Li, Baijun Lai, Jing Wang, Yan Sun, Donghai Hu and Hua Ding
World Electr. Veh. J. 2026, 17(4), 195; https://doi.org/10.3390/wevj17040195 - 8 Apr 2026
Abstract
The cathode and anode differential pressure control of a proton exchange membrane fuel cell (PEMFC) directly affects its service life and operating efficiency. Existing control methods struggle to cope with strong nonlinear perturbations, and fixed differential pressure control is prone to pressure overshoot and threshold exceedance, resulting in unstable pressure regulation. To address these problems, a reinforcement learning method based on hybrid experience replay (HP-TD3) is proposed. A CART-based algorithm is first used to classify the states of the test load, and a load-related segmented reward function is designed. In addition, a hindsight experience replay (HER) mechanism is incorporated into the Priority Experience Replay Twin Delayed Deep Deterministic Policy Gradient (PER-TD3) framework to improve sample utilization efficiency and training stability. Finally, the performance of HP-TD3 and its ability to cope with nonlinear disturbances are verified on a fuel cell control unit hardware-in-the-loop (FCU-HIL) platform. Under test load A (frequent switching and a high proportion of low loads), the Mean Absolute Error (MAE), Root Mean Square Error (RMSE), and the degradation index of fuel cell dynamic performance (Δfc) of HP-TD3 are reduced by 17.4%, 20.5%, and 13.3%, respectively, compared to P-TD3; under test load B (high-load operation and low switching frequency), these indicators are reduced by 25.7%, 29.4%, and 15.4%, respectively.
(This article belongs to the Section Storage Systems)
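
The load-related segmented reward is the part of HP-TD3 most amenable to a short sketch. The Python function below illustrates the idea under assumed thresholds, units, and load labels; the paper's actual reward shape and CART-derived states are not given in the abstract.

```python
# Hypothetical segmented reward for differential-pressure tracking; the
# thresholds, penalty slopes, and load-state names are illustrative guesses.
def segmented_reward(dp_error_kpa: float, load_state: str) -> float:
    """Penalize pressure error more sharply in the frequent-switching regime."""
    e = abs(dp_error_kpa)
    if load_state == "low_frequent":         # test-load-A-like regime
        return -2.0 * e if e > 1.0 else -0.5 * e
    return -1.0 * e if e > 2.0 else -0.25 * e  # steadier high-load regime

print(segmented_reward(1.5, "low_frequent"))  # -3.0
```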

24 pages, 2051 KB  
Article
Physics-Informed Neural Networks and Deep Reinforcement Learning for Optimal Anti-Icing Strategies of Circular Tube Components in Polar Vessels
by Jinhao Xi, Chenyang Liu, Haiming Wen, Yan Chen, Siyu Zhang, Yuqiao Xin, Yutong Zhong and Dayong Zhang
J. Mar. Sci. Eng. 2026, 14(7), 685; https://doi.org/10.3390/jmse14070685 - 7 Apr 2026
Abstract
In polar environments, icing on ship deck surfaces severely compromises navigation safety. Conventional electric trace heating systems operate in continuous heating mode, resulting in high energy consumption. This study proposes an intelligent periodic heating control method that integrates physics-informed neural networks (PINNs) and deep reinforcement learning (DRL) for energy-efficient anti-icing of circular pipe components on polar vessels. Using a polar coupled environment simulation platform, experiments were conducted on electric heating anti-icing for circular pipe components. Temperature data under various heating modes were collected, and a physically constrained PINN temperature prediction model was constructed, achieving high prediction accuracy with limited samples (test set R2 = 0.9091; 5-fold cross-validation R2 = 0.8877 ± 0.0312). The DRL agent trained in this virtual environment autonomously optimized the heating strategy, yielding optimal cycle parameters: heating ratio D = 0.722 and cycle duration τ = 88 s. While maintaining surface temperatures above 0 °C against a −10 °C ambient baseline, this strategy achieved a unit energy consumption of 0.27 kJ/°C, representing a 63% reduction compared to conventional continuous heating. This study provides a data-physics fusion control approach for polar vessel anti-icing systems, demonstrating strong potential for engineering applications.
(This article belongs to the Section Ocean Engineering)
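
A PINN of this kind penalizes violations of a governing heat equation alongside the data fit. The PyTorch sketch below computes a 1D transient-conduction residual u_t − α·u_xx by automatic differentiation; the network size, diffusivity value, and 1D simplification are assumptions for illustration, not the paper's setup.

```python
# Sketch of a physics residual term for a PINN, assuming 1D transient
# conduction u_t = alpha * u_xx; alpha and the network are placeholders.
import torch

def pinn_heat_residual(model, x, t, alpha: float = 1e-5):
    """Residual u_t - alpha * u_xx, used as a soft physics penalty."""
    x = x.requires_grad_(True)
    t = t.requires_grad_(True)
    u = model(torch.cat([x, t], dim=1))
    u_x, u_t = torch.autograd.grad(u, (x, t), grad_outputs=torch.ones_like(u),
                                   create_graph=True)
    u_xx = torch.autograd.grad(u_x, x, grad_outputs=torch.ones_like(u_x),
                               create_graph=True)[0]
    return u_t - alpha * u_xx

model = torch.nn.Sequential(torch.nn.Linear(2, 32), torch.nn.Tanh(),
                            torch.nn.Linear(32, 1))
x, t = torch.rand(16, 1), torch.rand(16, 1)
physics_loss = pinn_heat_residual(model, x, t).pow(2).mean()  # add to data loss
```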

29 pages, 1848 KB  
Review
The Role of AI-Integrated Drone Systems in Agricultural Productivity and Sustainable Pest Management
by Muhammad Towfiqur Rahman, A. S. M. Bakibillah, Adib Hossain, Ali Ahasan, Md. Naimul Basher, Kabiratun Ummi Oyshe and Asma Mariam
AgriEngineering 2026, 8(4), 142; https://doi.org/10.3390/agriengineering8040142 - 7 Apr 2026
Abstract
Artificial intelligence (AI)-assisted drone technology in agriculture has transformed productivity and pest control techniques, resulting in novel solutions to modern farming challenges. Drones utilizing sensors, cameras, and AI algorithms can precisely monitor crop health, soil conditions, and insect infestations. Using AI-assisted drones for precision irrigation and yield predictions further improves resource allocation, promotes sustainability, and reduces operating costs. This review examines recent advancements in AI and unmanned aerial vehicles (UAVs) in precision agriculture. Key trends include AI-driven crop disease detection, UAV-enabled multispectral imaging, precision pest management, smart tractors, variable-rate fertilization, and integration with IoT-based decision support systems. This study synthesizes current research to identify technological progress, implementation challenges, scalability barriers, and opportunities for sustainable agricultural transformation. This review of peer-reviewed studies published between 2013 and 2025 uses major scientific databases and predefined inclusion and exclusion criteria covering crop monitoring, precision input application, integrated pest management (IPM), and livestock (especially cattle) monitoring. We describe the platform and payload trade-offs that govern coverage, endurance, and spray quality; the dominant analytics trends, from classical machine learning to deep learning and embedded/edge inference; and the emerging shift from monitoring-only UAV use toward closed-loop decision-making (detection–prediction–intervention). Across the literature, the strongest opportunities lie in robust field validation, multi-modal data fusion (UAV + ground sensors + farm records), and interoperable standards that enable actionable IPM decisions. Key gaps include limited cross-site generalization, scarce reporting of economic indicators (ROI, payback period, and adoption rate), and regulatory and safety barriers for routine autonomous operations. Finally, we present case studies to emphasize the feasibility and highlight future research directions of AI-assisted drone technology. Through this review, we aim to demonstrate technological advancements, challenges, and future opportunities in AI-assisted drone applications, ultimately advocating for more sustainable and cost-effective farming practices.

18 pages, 535 KB  
Review
Artificial Intelligence in Intraoperative Imaging and Navigation for Spine Surgery: A Narrative Review
by Mina Girgis, Allison Kelliher, Michael S. Pheasant, Alex Tang, Siddharth Badve and Tan Chen
J. Clin. Med. 2026, 15(7), 2779; https://doi.org/10.3390/jcm15072779 - 7 Apr 2026
Abstract
Artificial intelligence (AI) is increasingly transforming spine surgery, with expanding applications in diagnostics, intraoperative imaging, and surgical navigation. As the field advances toward greater precision and safety, machine learning (ML) and deep learning technologies are being integrated to augment surgeon expertise and optimize operative workflows. In particular, AI-driven innovations in image acquisition and navigation are reshaping intraoperative decision-making and technical execution. This narrative review provides an overview of AI applications relevant to intraoperative imaging and navigation in spine surgery. We begin by defining key concepts in AI, ML, and deep learning and briefly outline the historical evolution of AI within spine practice. We then examine current capabilities in image recognition and automated pathology detection, emphasizing their clinical relevance. Given the central role of imaging accuracy in modern navigation-assisted procedures, we review conventional acquisition platforms, including intraoperative computed tomography (CT) systems (e.g., O-arm, GE, Airo), surface-based registration to preoperative CT (Stryker, Medtronic), and optical surface mapping technologies (e.g., 7D Surgical). Emerging AI-optimized advancements are subsequently discussed, including low-dose intraoperative CT protocols, expanded scan windows, metal artifact reduction algorithms, integration of 2D fluoroscopy with preoperative CT datasets, and 3D reconstruction derived from 2D imaging. These developments aim to improve image quality, reduce radiation exposure, and enhance navigational accuracy. By synthesizing current evidence and technological progress, this review highlights how AI-enhanced imaging systems are redefining intraoperative spine surgery and shaping the future of precision-based care. The primary purpose of this review is to outline the applications of AI and its potential for perioperative and intraoperative optimization, including radiation exposure reduction, workflow streamlining, preoperative planning, robot-assisted surgery, and navigation. The secondary purpose is to define AI, machine learning, and deep learning within the medical context, describe image and pathology recognition, and provide a historical overview of AI in orthopedic spine surgery.
(This article belongs to the Special Issue Spine Surgery: Current Practice and Future Directions)

10 pages, 512 KB  
Proceeding Paper
Multitask Deep Neural Network for IMU Calibration, Denoising, and Dynamic Noise Adaption for Vehicle Navigation
by Frieder Schmid and Jan Fischer
Eng. Proc. 2026, 126(1), 44; https://doi.org/10.3390/engproc2026126044 - 7 Apr 2026
Abstract
In intelligent vehicle navigation, efficient sensor data processing and accurate system stabilization are critical to maintaining robust performance, especially when GNSS signals are unavailable or unreliable. Classical calibration methods for Inertial Measurement Units (IMUs), such as discrete and system-level calibration, fail to capture time-varying, non-linear, and non-Gaussian noise characteristics. Likewise, Kalman filters typically assume static measurement noise levels for non-holonomic constraints (NHCs), resulting in suboptimal performance in dynamic environments. Furthermore, zero-velocity detection plays a vital role in preventing error accumulation by enabling reliable zero-velocity updates during motion stops, but classical thresholding approaches often lack robustness and precision. To address these limitations, we propose a novel multitask deep neural network (MTDNN) architecture that jointly learns IMU calibration, adaptive noise level estimation for NHC, and zero-velocity detection solely from raw IMU data. The shared-encoder design minimizes computational overhead, enabling real-time deployment on resource-constrained platforms such as Raspberry Pi. The model is trained using post-processed GNSS-RTK ground truth trajectories obtained from both a proprietary dataset and the publicly available 4Seasons dataset. Experimental results confirm the proposed system’s superior accuracy, efficiency, and real-time capability in GNSS-denied conditions.
(This article belongs to the Proceedings of European Navigation Conference 2025)
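
The shared-encoder, three-head layout the abstract describes can be sketched directly. The PyTorch module below is a minimal illustration under assumed layer sizes and a 6-channel, 100-sample IMU window; it is not the paper's architecture.

```python
# Minimal shared-encoder multitask network for raw IMU windows; all sizes
# are illustrative assumptions, not the MTDNN described in the paper.
import torch
import torch.nn as nn

class MultitaskIMUNet(nn.Module):
    def __init__(self, channels: int = 6):
        super().__init__()
        self.encoder = nn.Sequential(            # shared feature extractor
            nn.Conv1d(channels, 32, 5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 64, 5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
        )
        self.calib_head = nn.Linear(64, 6)                              # bias corrections
        self.noise_head = nn.Sequential(nn.Linear(64, 1), nn.Softplus())  # NHC noise > 0
        self.zv_head = nn.Sequential(nn.Linear(64, 1), nn.Sigmoid())      # zero-velocity prob.

    def forward(self, imu):                      # imu: (batch, 6, window)
        z = self.encoder(imu)
        return self.calib_head(z), self.noise_head(z), self.zv_head(z)

calib, noise, zv = MultitaskIMUNet()(torch.randn(8, 6, 100))
```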

18 pages, 2029 KB  
Review
Artificial Intelligence in Head and Neck Surgical Oncology: A State-of-the-Art Review
by Steven X. Chen, Maria Feucht, Aditya Bhatt and Janice L. Farlow
J. Clin. Med. 2026, 15(7), 2767; https://doi.org/10.3390/jcm15072767 - 6 Apr 2026
Abstract
Artificial intelligence (AI) is rapidly reshaping head and neck surgical oncology by augmenting decision-making across the full perioperative continuum. This state-of-the-art review aims to provide head and neck surgical oncologists with a conceptual framework for understanding and critically appraising AI tools entering clinical practice, summarizing how machine learning, deep learning, and generative AI are being integrated into contemporary surgical workflows. Preoperative applications include detection of occult nodal metastasis and extranodal extension. Intraoperative innovations include augmented reality-assisted navigation, real-time margin assessment, and improving visual clarity and tissue handling for robotic platforms. Postoperatively, AI can predict complications like free flap failure and oncologic outcomes. Large language models are being operationalized for clinician-facing applications such as documentation and inbox support, as well as patient-facing education. Despite promising results, broad clinical deployment remains limited by concerns about privacy, validation, reliability, safety, and ethics. Widespread adoption will require prospective clinical trials, robust governance, and human-centered workflows that ensure AI remains a safe, assistive copilot.
(This article belongs to the Special Issue Clinical Advances in Head and Neck Cancer Diagnostics and Treatment)

31 pages, 3970 KB  
Review
Impact of Generative AI on Author’s Metrics and Copyright Ownership: Digital Labour, Ethical Attribution, and Traceability Frameworks for Future Internet Systems
by Chukwuebuka Joseph Ejiyi, Sandra Chukwudumebi Obiora, Ijuolachi Obiora, Gladys Wauk, Maryjane Ejiako, Temitope Omotayo and Olusola Bamisile
Future Internet 2026, 18(4), 196; https://doi.org/10.3390/fi18040196 - 4 Apr 2026
Viewed by 257
Abstract
The integration of generative artificial intelligence (GAI) into digital learning environments is a profound socio-technical transformation. While GAI promises enhanced accessibility and efficiency, it simultaneously obscures the human creativity and intellectual labour that underpins digital knowledge production. This opacity limits creators’ visibility into how their work is used, evaluated, and monetised. This work investigates how several leading large language models, including ChatGPT (GPT-4o), Gemini (1.5 Flash), and DeepSeek (V3), interact with a creative platform hosting over 300 original essays, poems, and artworks from various human creatives. Our review reveals that despite clear evidence of models engaging with original materials, the standard platform analytics available to the average creative record no attribution, referrals, or traceable interactions, rendering creators’ labour invisible. This compels critical examination of knowledge provenance and power within AI-mediated education. To address this, we propose a socio-technical framework, Chujoyi-TraceNet, not as a technical fix but as a mechanism to re-centre ethics, justice, and recognition in digital governance. By integrating real-time tracking, blockchain-enabled licensing, and metadata watermarking, Chujoyi-TraceNet operationalises the principles of equitable attribution. This study argues for a re-imagining of digital ecosystems in education, one that links the technical act of attribution to broader debates on digital labour, platform ethics, and the pursuit of social justice, thereby contributing to more democratic and accountable learning media in the era of Industry 4.0 and 5.0.
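
As one concrete reading of the framework's "real-time tracking" and metadata layers, the sketch below fingerprints a work and emits a JSON attribution record. Every field name and the SHA-256 choice are hypothetical; the abstract does not specify Chujoyi-TraceNet's actual record format.

```python
# Hypothetical attribution record of the kind a traceability layer might emit;
# all field names and the hashing scheme are illustrative assumptions.
import hashlib, json, time

def attribution_record(work_text: str, creator: str, consumer: str) -> str:
    """Fingerprint a creative work and log who accessed it, for later auditing."""
    digest = hashlib.sha256(work_text.encode("utf-8")).hexdigest()
    record = {"work_sha256": digest, "creator": creator,
              "consumer": consumer, "timestamp": time.time()}
    return json.dumps(record)

print(attribution_record("an original poem ...", "creator-42", "llm-crawler"))
```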

28 pages, 4837 KB  
Article
AI-Driven Adaptive Encryption Framework for a Modular Hardware-Based Data Security Device: Conceptual Architecture, Formal Foundations, and Security Analysis
by Pruthviraj Pawar and Gregory Epiphaniou
Appl. Sci. 2026, 16(7), 3522; https://doi.org/10.3390/app16073522 - 3 Apr 2026
Viewed by 128
Abstract
This paper presents a conceptual architecture for an AI-Driven Adaptive Encryption Device (AI-AED), a tri-modular hardware platform embodied in a registered industrial design. The device integrates a Secure Input Module, an AI-Enhanced Central Processing Unit with biometric authentication, and a Secure Output Module connected by unidirectional buses. We formalise the adaptive encryption policy as a constrained Markov decision process (CMDP) over a discrete action space of 216 cryptographic configurations, with safety constraints that provably prevent convergence to insecure states. A formal threat model based on extended Dolev–Yao assumptions with four physical access tiers defines attacker capabilities, and anti-downgrade safeguards enforce a monotonically non-decreasing security floor during threat escalation. An information-theoretic analysis shows that adaptive algorithm selection contributes an additional entropy term H(α) to ciphertext uncertainty, upper-bounded by log2(|L_enc|) ≈ 1.58 bits, while noting this represents increased attacker uncertainty rather than a strengthening of any individual cipher. A component-level latency model estimates 0.91–1.00 ms pipeline latency under normal operation and 3.14–3.42 ms under active threat, including integration overhead. Simulation validation over 1000 episodes compares a tabular Q-learning baseline against the proposed Deep Q-Network operating on the continuous state space: the DQN achieves 82% fewer constraint violations, 6× faster threat response, and more stable policy switching, demonstrating the advantage of continuous-state reinforcement learning for safety-critical adaptive encryption. All claims are positioned as theoretical contributions requiring empirical validation through prototype implementation.
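
Two of the abstract's quantitative claims are easy to make concrete. The sketch below reproduces the log2(|L_enc|) ≈ 1.58-bit entropy bound (which implies a set of three cipher choices) and shows action masking that enforces a non-decreasing security floor; the cipher names and security levels are assumptions for illustration.

```python
# Sketch of two points from the abstract: the entropy bound on algorithm
# selection, and an anti-downgrade mask. Cipher set and levels are assumed.
import math

ciphers = ["AES-128", "AES-256", "ChaCha20"]  # |L_enc| = 3 (assumed membership)
print(math.log2(len(ciphers)))                # ≈ 1.58 bits, the H(alpha) bound

def masked_argmax(q_values, security_levels, floor):
    """Pick the highest-Q action whose security level meets the current floor."""
    allowed = [i for i, s in enumerate(security_levels) if s >= floor]
    return max(allowed, key=lambda i: q_values[i])

# Three configs with security levels 1..3; floor escalated to 2 under threat,
# so the insecure level-1 config can never be selected again.
print(masked_argmax([0.9, 0.4, 0.7], [1, 2, 3], floor=2))  # index 2
```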

51 pages, 3806 KB  
Review
What Is That Noise: Survey of Anomalous Sound Detection Using Edge Systems
by Łukasz Grzymkowski, Tymoteusz Cejrowski and Tomasz P. Stefański
Electronics 2026, 15(7), 1508; https://doi.org/10.3390/electronics15071508 - 3 Apr 2026
Viewed by 142
Abstract
In this paper, we provide a thorough review of novel machine learning (ML) models for anomalous sound detection (ASD). We focus on deploying models to highly constrained, embedded systems and tiny ML, using single-channel sound as the data input. The survey includes only works published in 2020 or later. Researchers address the anomaly detection task in various ways, borrowing models and techniques from fields such as speech processing, audio generation, and even computer vision. However, it is not clear which of these are suitable for embedded systems and can meet constraints such as memory and compute. To address that, we provide a deep analysis of these models and of the optimization techniques applied to meet the design criteria for embedded platforms. We consider both deep learning and classical ML methods. We define categories for the anomaly detection methods depending on the approach taken, to provide structure and simplify the comparison of methods. We aim to provide a guideline on how to develop ASD systems and how to efficiently deploy the models on embedded platforms.
(This article belongs to the Special Issue Mixed Design of Integrated Circuits and Systems)
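
A common baseline in the family this survey covers is reconstruction-error scoring with a small autoencoder over log-mel frames, which fits embedded memory budgets. The PyTorch sketch below shows the pattern; the feature and layer sizes are illustrative and not tied to any specific surveyed system.

```python
# Minimal reconstruction-error ASD baseline: an autoencoder over log-mel
# frames, with mean squared reconstruction error as the anomaly score.
# Sizes are illustrative assumptions.
import torch
import torch.nn as nn

class FrameAE(nn.Module):
    def __init__(self, n_mels: int = 64):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_mels, 32), nn.ReLU(), nn.Linear(32, 8))
        self.dec = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, n_mels))

    def forward(self, x):
        return self.dec(self.enc(x))

def anomaly_score(model, frames):            # frames: (time, n_mels)
    with torch.no_grad():
        return ((model(frames) - frames) ** 2).mean().item()

model = FrameAE()                            # train on normal sounds only
print(anomaly_score(model, torch.randn(100, 64)))  # higher = more anomalous
```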

18 pages, 6357 KB  
Article
Enhanced Motion Prediction of a Semi-Submersible Platform Using Bayesian Neural Network and Field Monitoring Data
by Song Li and Jia-Wang Chen
AI. Eng. 2026, 1(1), 2; https://doi.org/10.3390/aieng1010002 - 3 Apr 2026
Viewed by 104
Abstract
The motion prediction of semi-submersible platforms is of significant importance for improving operational efficiency, ensuring platform safety, and providing early warning information for potential risks. Traditional prediction methods, such as those based on hydrodynamic simulations combined with Kalman filters, often face limitations due to their reliance on precise hydrodynamic parameters, which are difficult to obtain in practice. More recently, data-driven approaches, particularly deep learning models like Long Short-Term Memory (LSTM) networks, have shown promise in predicting complex motions. However, these methods often treat the prediction process as a “black box,” leading to issues such as a lack of generalization ability, overfitting, and an inability to quantify the uncertainty of prediction results. To address these challenges, this paper proposes a novel motion prediction method for semi-submersible platforms based on a Bayesian neural network (BNN). The BNN incorporates Bayesian inference to effectively integrate prior knowledge and measured data, thereby quantifying uncertainties and improving prediction accuracy. The method is validated using field-measured motion data from a semi-submersible platform in the South China Sea. Compared with LSTM and a feedforward neural network, the BNN demonstrates superior anti-noise performance and prediction accuracy, achieving an accuracy rate (R2) of up to 91.5%. Moreover, over 92% of the true values are captured within the 95% confidence interval of the prediction results. This study highlights the potential of BNNs for the real-time motion prediction of offshore platforms, providing valuable support for early warning systems and operational decision-making.
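
The predictive mean plus 95% confidence band can be illustrated with MC-dropout, used here as a lightweight stand-in for the paper's full Bayesian neural network (it is not the authors' method). Repeated stochastic forward passes give a mean and an approximate interval:

```python
# MC-dropout sketch of uncertainty-aware prediction; the network, dropout
# rate, and sample count are illustrative assumptions.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Dropout(0.1), nn.Linear(64, 1))

def predict_with_uncertainty(x, n_samples: int = 200):
    net.train()                       # keep dropout stochastic at inference
    with torch.no_grad():
        samples = torch.stack([net(x) for _ in range(n_samples)])
    mean, std = samples.mean(0), samples.std(0)
    return mean, mean - 1.96 * std, mean + 1.96 * std  # mean and ~95% band

mean, lo, hi = predict_with_uncertainty(torch.randn(4, 10))
```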

23 pages, 23579 KB  
Article
Image-Based Waste Classification Using a Hybrid Deep Learning Architecture with Transfer Learning and Edge AI Deployment
by Domen Verber, Teodora Grneva and Jani Dugonik
Mathematics 2026, 14(7), 1176; https://doi.org/10.3390/math14071176 - 1 Apr 2026
Viewed by 296
Abstract
Growing amounts of municipal waste and the need for efficient recycling demand automated and accurate classification systems. This paper investigates deep learning approaches for multi-class waste sorting based on image data, comparing three widely used convolutional neural network architectures (ResNet-50, EfficientNet-B0, and MobileNet V3) with a custom hybrid model (CustomNet). The dataset comprises 13,933 RGB images across 10 waste categories, combining publicly available samples from the Kaggle Garbage Classification dataset (61.1%) with images collected in-house (38.9%). The three glass sub-categories (brown, green, and white glass) were merged into a single glass class to ensure consistent class representation across all dataset splits. Preprocessing steps include normalization, resizing, and extensive data augmentation to improve robustness and mitigate class imbalance. Transfer learning is applied to pretrained models, while CustomNet integrates feature representations from multiple backbones using projection layers and attention mechanisms. Performance is evaluated using accuracy, macro-F1, and ROC–AUC on a held-out test set. Statistical significance was assessed using paired t-tests and Wilcoxon signed-rank tests with Bonferroni correction across five-fold cross-validation runs. The results show that CustomNet achieves 97.79% accuracy, a macro-F1 score of 0.973, and a ROC–AUC of 0.992. CustomNet significantly outperforms EfficientNet-B0 and MobileNet V3 (p < 0.001, Bonferroni corrected), and it achieves performance parity with ResNet-50 (p = 0.383) at a substantially lower parameter count in the classification head (9.7 M vs. 25.6 M). These findings indicate that combining multiple feature extractors with attention mechanisms improves classification performance, supports qualitative model explainability via saliency visualization (Grad-CAM), and enables practical deployment on heterogeneous Edge AI platforms. Inference benchmarking on an NVIDIA Jetson Orin Nano demonstrated real-world deployment feasibility at 86.70 ms per image (11.5 FPS).
(This article belongs to the Special Issue The Application of Deep Neural Networks in Image Processing)
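
The fusion idea behind CustomNet, pooled backbone features projected to a shared width and combined by attention, can be sketched with torchvision models. For brevity the sketch fuses two backbones rather than the paper's three, and the projection width and attention form are assumptions, not the published architecture.

```python
# Sketch of multi-backbone fusion with projection layers and attention;
# two backbones and a width of 256 are illustrative simplifications.
import torch
import torch.nn as nn
from torchvision import models

class FusionHead(nn.Module):
    def __init__(self, num_classes: int = 10, width: int = 256):
        super().__init__()
        r = models.resnet50(weights=None)
        m = models.mobilenet_v3_small(weights=None)
        self.backbones = nn.ModuleList([
            nn.Sequential(*list(r.children())[:-1], nn.Flatten()),             # -> 2048
            nn.Sequential(m.features, nn.AdaptiveAvgPool2d(1), nn.Flatten()),  # -> 576
        ])
        self.proj = nn.ModuleList([nn.Linear(2048, width), nn.Linear(576, width)])
        self.attn = nn.Linear(width, 1)          # learned per-backbone weight
        self.cls = nn.Linear(width, num_classes)

    def forward(self, x):
        feats = torch.stack([p(b(x)) for b, p in zip(self.backbones, self.proj)], dim=1)
        w = torch.softmax(self.attn(feats), dim=1)   # attention over backbones
        return self.cls((w * feats).sum(dim=1))

logits = FusionHead()(torch.randn(2, 3, 224, 224))  # (2, 10)
```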

23 pages, 13635 KB  
Article
Deep Reinforcement Learning for Autonomous Underwater Navigation: A Comparative Study with DWA and Digital Twin Validation
by Zamirddine Mari, Mohamad Motasem Nawaf and Pierre Drap
Sensors 2026, 26(7), 2179; https://doi.org/10.3390/s26072179 - 1 Apr 2026
Viewed by 253
Abstract
Autonomous navigation in underwater environments is challenged by the absence of GPS, degraded visibility, and submerged obstacles. This article investigates these issues using the BlueROV2, an open platform for scientific experimentation. We propose a deep reinforcement learning approach based on the Proximal Policy Optimization (PPO) algorithm, using an observation space that combines target-oriented navigation information, a virtual occupancy grid, and raycasting along the boundaries of the operational area. This information is encoded into a high-dimensional observation space of 84 dimensions, providing the agent with comprehensive local and global situational awareness. The learned policy is compared against a reference deterministic kinematic planner, the Dynamic Window Approach (DWA), a robust baseline for obstacle avoidance. The evaluation is conducted in a realistic simulation environment and complemented by validation on a physical BlueROV2 supervised by a 3D digital twin of the test site, reducing risks associated with real-world experimentation. The results show that the PPO policy consistently outperforms DWA in highly cluttered environments, owing to better local adaptation and fewer collisions. Finally, experiments demonstrate the transferability of the learned behavior from simulation to the real world, confirming the relevance of deep RL for autonomous navigation in underwater robotics.
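
The 84-dimensional observation can be illustrated by concatenating its three stated parts. The 4 + 64 + 16 split below is purely an assumption for the sketch; the abstract states only the total dimensionality.

```python
# Hypothetical assembly of the 84-dim observation: target-relative terms,
# a flattened local occupancy grid, and boundary raycasts. The 4 + 64 + 16
# split is an assumption; only the total of 84 comes from the abstract.
import numpy as np

def build_observation(target_vec, occupancy_grid, boundary_rays):
    nav = np.asarray(target_vec, dtype=np.float32)               # e.g. range, bearing, depth err, heading err
    grid = np.asarray(occupancy_grid, dtype=np.float32).ravel()  # e.g. 8x8 local grid
    rays = np.asarray(boundary_rays, dtype=np.float32)           # e.g. 16 boundary distances
    obs = np.concatenate([nav, grid, rays])
    assert obs.shape == (84,)
    return obs

obs = build_observation(np.zeros(4), np.zeros((8, 8)), np.ones(16))
```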

25 pages, 7703 KB  
Article
Establishment of a Neural Network-Based Prediction Model for Wheel–Sand Dynamics
by Zhang Ni, Weihong Wang, Chenyu Hu, Zhi Li and Bo Li
World Electr. Veh. J. 2026, 17(4), 186; https://doi.org/10.3390/wevj17040186 - 1 Apr 2026
Viewed by 229
Abstract
With the expansion of electric vehicle (EV) applications into unstructured sandy terrains such as deserts, accurately characterizing tire–sand dynamic interactions is essential for enhancing off-road performance. However, traditional terramechanics models, the discrete element method (DEM), and purely data-driven neural networks all have inherent limitations, failing to balance physical interpretability and computational efficiency. This study proposes a wheel–sand dynamics prediction model that integrates DEM simulation, semi-physical modeling, and deep learning. A DEM tire–sand contact platform is built to acquire longitudinal slip and cornering properties, and a dimensionless semi-physical tire model is derived using frictional constitutive relations and tire theory. A 3-DOF vehicle dynamics model is then established to generate high-fidelity physics-based datasets, and a residual neural network is adopted to avoid performance degradation in deep networks. The model is validated and optimized via real-vehicle sandy terrain tests, with its performance compared against other network structures. The proposed model achieves high prediction accuracy, with engineering-acceptable errors, and outperforms conventional neural networks. The dimensionless framework improves generality, overcoming the weaknesses of traditional and purely data-driven models. This work provides theoretical and statistical support for EV traction control design and tire structure optimization, promoting driving stability and terrain passability in unstructured sandy environments.
(This article belongs to the Section Propulsion Systems and Components)
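
The residual network the abstract adopts to curb degradation in deeper regressors reduces to identity-shortcut MLP blocks. The sketch below uses assumed input and output sizes (e.g., dimensionless slip and load terms in, force components out); the paper's actual dimensions are not given in the abstract.

```python
# Minimal residual-MLP regressor with identity shortcuts; all sizes are
# illustrative assumptions, not the paper's network.
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    def __init__(self, width: int):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(width, width), nn.ReLU(),
                                  nn.Linear(width, width))

    def forward(self, x):
        return torch.relu(x + self.body(x))   # identity shortcut

class WheelSandNet(nn.Module):
    """Maps dimensionless slip/load inputs to predicted tire force components."""
    def __init__(self, n_in: int = 4, n_out: int = 3, width: int = 64, depth: int = 4):
        super().__init__()
        self.inp = nn.Linear(n_in, width)
        self.blocks = nn.Sequential(*[ResBlock(width) for _ in range(depth)])
        self.out = nn.Linear(width, n_out)

    def forward(self, x):
        return self.out(self.blocks(torch.relu(self.inp(x))))

forces = WheelSandNet()(torch.randn(8, 4))   # (8, 3)
```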

48 pages, 652 KB  
Review
Artificial Intelligence in Cardiovascular Medicine: A Giant Step in Personalized Medicine?
by Stanislovas S. Jankauskas, Fahimeh Varzideh, Urna Kansakar and Gaetano Santulli
J. Pers. Med. 2026, 16(4), 192; https://doi.org/10.3390/jpm16040192 - 1 Apr 2026
Viewed by 472
Abstract
Artificial intelligence (AI) is rapidly reshaping cardiovascular (CV) medicine, driving a paradigm shift toward truly personalized and data-driven care. This comprehensive review examines the conceptual foundations, clinical applications, and future implications of AI across the CV continuum, spanning prevention, diagnosis, risk stratification, and therapy. Core AI methodologies (including machine learning, deep learning, natural language processing, and computer vision) are discussed in the context of cardiology’s uniquely data-rich environment, encompassing imaging, electrocardiography, electronic health records, wearable devices, and multi-omics data. The review highlights major clinical domains where AI has demonstrated a substantial impact, including CV imaging, ECG interpretation, hypertension and heart failure management, coronary artery disease, acute coronary syndromes, interventional cardiology, and cardiac surgery. AI-driven predictive analytics enable early detection of subclinical disease, improved prognostication, and individualized prevention strategies, while wearable technologies and remote monitoring platforms facilitate continuous, real-world patient surveillance. Emerging applications in pharmacotherapy, drug repurposing, and genomics further reinforce AI’s role in advancing precision cardiology. Equally emphasized are the ethical, legal, and social challenges accompanying AI adoption, such as algorithmic bias, data privacy, cybersecurity, interpretability, and regulatory oversight. Our review underscores the necessity of rigorous clinical validation, transparent model design, and seamless integration into clinical workflows to ensure safety, equity, and physician trust. Ultimately, AI is best positioned as an augmentative tool that complements (but does not replace!) clinical expertise. By fostering hybrid intelligence that integrates human judgment with computational power, AI has the potential to redefine CV care delivery, improve outcomes, and support a more proactive, patient-centered healthcare model.
(This article belongs to the Special Issue Personalized Medicine in Cardiovascular and Metabolic Diseases)